Introducing DINOv3: a state-of-the-art computer vision model trained with self-supervised learning (SSL) that produces powerful, high-resolution image features. For the first time, a single frozen vision backbone outperforms specialized solutions on multiple long-standing dense prediction tasks. Learn more here: https://lnkd.in/giU_-6_M

A few highlights of DINOv3:
1️⃣ SSL enables training on 1.7B images at 7B parameters without labels, supporting annotation-scarce scenarios such as satellite imagery
2️⃣ Produces excellent high-resolution features and state-of-the-art performance on dense prediction tasks
3️⃣ Applies across vision tasks and domains, all with a frozen backbone (no fine-tuning required)
4️⃣ Includes distilled smaller models (ViT-B, ViT-L) and ConvNeXt variants for deployment flexibility

To help foster innovation and collaboration in the computer vision community, we’re releasing DINOv3 under a commercial license with a full suite of pre-trained models, adapters, training and evaluation code, and (much!) more. Find them here: https://lnkd.in/gEptEtVR
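The frozen-backbone workflow highlighted above amounts to linear probing: extract features once from a backbone whose weights never change, then train only a lightweight head on top. Here is a minimal NumPy sketch of that pattern; a fixed random projection stands in for a real DINOv3 checkpoint and the data are synthetic, so only the workflow, not the model, is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen backbone: any fixed function mapping inputs to
# feature vectors. With a real model (e.g. a DINOv3 checkpoint) this would
# be a forward pass with gradients disabled; here it is a fixed random
# projection so the example stays self-contained.
W_backbone = rng.normal(size=(768, 64))  # frozen, never updated

def frozen_features(images):
    # images: (n, 768) flattened inputs -> (n, 64) features
    return np.tanh(images @ W_backbone)

# Toy two-class data: the classes are shifted versions of each other.
X0 = rng.normal(loc=-0.5, size=(100, 768))
X1 = rng.normal(loc=+0.5, size=(100, 768))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Extract features once -- the backbone stays frozen throughout.
F = frozen_features(X)

# Train only a linear head (logistic regression by gradient descent).
w = np.zeros(F.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # predicted probabilities
    w -= 0.5 * (F.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((F @ w + b) > 0).astype(int) == y)
print(f"linear-probe accuracy: {acc:.2f}")
```

With a released checkpoint, the only change is swapping the stand-in `frozen_features` for a real forward pass with gradients disabled; the head-training loop is untouched, which is what makes a single frozen backbone reusable across tasks.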
AI at Meta
Research Services
Menlo Park, California 1,009,172 followers
Together with the AI community, we’re pushing boundaries through open science to create a more connected world.
About us
Through open science and collaboration with the AI community, we are pushing the boundaries of artificial intelligence to create a more connected world. We can’t advance the progress of AI alone, so we actively engage with the AI research and academic communities. Our goal is to advance AI in Infrastructure, Natural Language Processing, Generative AI, Vision, Human-Computer Interaction, and many other areas of AI, and to enable the community to build safe and responsible solutions to address some of the world’s greatest challenges.
- Website
- https://ai.meta.com/
- Industry
- Research Services
- Company size
- 10,001+ employees
- Headquarters
- Menlo Park, California
- Specialties
- research, engineering, development, software development, artificial intelligence, machine learning, machine intelligence, deep learning, computer vision, speech recognition, and natural language processing
Updates
🏆 We're thrilled to announce that Meta FAIR’s Brain & AI team won 1st place at the prestigious Algonauts 2025 brain modeling competition. Their 1B-parameter model, TRIBE (Trimodal Brain Encoder), is the first deep neural network trained to predict brain responses to stimuli across multiple modalities, cortical areas, and individuals. The approach combines pretrained representations from several Meta foundation models, spanning text (Llama 3.2), audio (Wav2Vec2-BERT from Seamless), and video (V-JEPA 2), to predict a very large amount (80 hours per subject) of spatio-temporal fMRI brain responses to movies acquired by the Courtois NeuroMod project. Download the code: https://lnkd.in/gmFRzFJQ Read the paper: https://lnkd.in/gy5YQnc6 Learn about the challenge: https://lnkd.in/ga8fYeFt Download the data: https://www.cneuromod.ca/
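TRIBE itself is a 1B-parameter network, but the underlying encoding recipe described above, fusing pretrained features from each modality and mapping them to voxel responses, can be illustrated with a much simpler linear stand-in. Below is a hedged NumPy sketch on synthetic data; the feature dimensions, the ridge penalty, and the use of ridge regression in place of a deep encoder are all illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 400, 50

# Stand-ins for pretrained representations of the same movie timepoints:
# in TRIBE these come from text (Llama 3.2), audio (Wav2Vec2-BERT) and
# video (V-JEPA 2) models; here they are random features of made-up sizes.
text_feats = rng.normal(size=(n_timepoints, 32))
audio_feats = rng.normal(size=(n_timepoints, 24))
video_feats = rng.normal(size=(n_timepoints, 48))

# Trimodal fusion by simple concatenation along the feature axis.
X = np.hstack([text_feats, audio_feats, video_feats])

# Synthetic "fMRI" responses: a linear mixture of the features plus noise.
true_W = rng.normal(size=(X.shape[1], n_voxels))
Y = X @ true_W + 0.1 * rng.normal(size=(n_timepoints, n_voxels))

# Ridge-regression encoder, closed form: (X^T X + lam I)^{-1} X^T Y.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Evaluate with voxelwise correlation between predicted and actual responses,
# a common metric for brain-encoding models.
pred = X @ W
r = [np.corrcoef(pred[:, v], Y[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean voxelwise correlation: {np.mean(r):.2f}")
```

The deep version replaces the ridge step with a trainable network, but the shape of the problem, stimulus features in, per-voxel time series out, is the same.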
The Meta FAIR Chemistry team continues to make meaningful strides.
1️⃣ Today we’re announcing FastCSP, a workflow that generates stable crystal structures for organic molecules. This accelerates materials discovery and cuts the time to design molecular crystals from months to days. Read the paper: https://lnkd.in/gGemkqat The workflow will be available soon here: https://lnkd.in/dyNcGGrs
2️⃣ We’re also releasing the Open Molecular Crystals (OMC25) dataset, comprising 25 million structures, created to enable the FastCSP workflow. Read the paper: https://lnkd.in/gfMz-qen Find the dataset here: https://lnkd.in/gfT2yMfU
We’re excited to share these advances, developed in partnership with Carnegie Mellon University, and to see how the community uses them to drive progress in fields like electronics and healthcare.
We’re excited to introduce the Open Direct Air Capture 2025 dataset, the largest open dataset for discovering advanced materials that capture CO2 directly from the air. Developed by Meta FAIR, Georgia Institute of Technology, and CuspAI, this release enables rapid, accurate screening of carbon capture materials and can help accelerate climate solutions using AI. Explore the full dataset here: https://lnkd.in/gj8GdXrn
Today Mark shared Meta’s vision for the future of personal superintelligence for everyone. Read his full letter here: meta.com/superintelligence
We're excited to have Shengjia Zhao at the helm as Chief Scientist of Meta Superintelligence Labs. Big things are coming! 🚀 See Mark's post: https://lnkd.in/gCZvXCzf
We're rapidly expanding our AI infrastructure and have adopted a novel approach of building weather-proof tents to house GPU clusters. This enables us to get new data centers online in months instead of years. 🚀 Read more in this Fast Company article: https://lnkd.in/gvjeBHj5
We’re thrilled to see our advanced ML models and EMG hardware, which transform the neural signals controlling muscles at the wrist into commands that seamlessly drive computer interactions, featured in the latest edition of Nature. Read the story: https://lnkd.in/g6JJwcf8 Find more details on this work and the models on GitHub: https://lnkd.in/g-xiJ2Nm
Meta FAIR recently released the Seamless Interaction Dataset, the largest known high-quality video dataset of its kind, with:
- 4,000+ diverse participants
- 4,000+ hours of footage
- 65k+ interactions
- 5,000+ annotated samples
This dataset of full-body, in-person, face-to-face interaction videos represents a crucial stepping stone to understanding and modeling how people communicate and behave when they’re together, advancing AI's ability to generate more natural conversations and human-like gestures. Download the dataset on @huggingface: https://lnkd.in/ebDSm3Wq Learn more about the dataset: https://lnkd.in/e9V4CVms
"Our mission with the lab is to deliver personal superintelligence to everyone in the world. So that way, we can put that power in every individual's hand." - Mark
Watch Mark's full interview with The Information as he goes deeper on Meta's vision for superintelligence and investment in AI compute infrastructure.
The Information | TITV | July 15th, 2025
The Information’s TITV is first in tech news and analysis from the people who break and shape the story. The rest is just commentary. Watch every weekday live at 10 am PT / 1 pm ET on TheInformation.com/titv, App, YouTube, X, and on demand wherever you get your podcasts.