AMI Labs announced a $1.03 billion Series A funding round earlier this month. The Paris-based startup, chaired by Turing Award winner Yann LeCun, aims to revolutionize AI through what it calls a “new breed of AI systems.”
Known as world models, these are systems built not on language prediction, but on a grounded understanding of physical reality.
Here’s what that means, why current AI falls short, and why this funding round is a signal worth paying attention to.
The core problem with language models
The dominant paradigm in AI today is the large language model. LLMs such as OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude are trained on vast amounts of text, learning to predict the next token with remarkable accuracy.
The emergent capabilities of this approach, including coherent reasoning, code generation, and nuanced summarization, have been genuinely surprising. But the approach has a hard ceiling.
LLMs have no grounded understanding of the physical world. They model the statistical distribution of language about reality, not reality itself. The practical consequences are well known: hallucinations, brittle reasoning on causal or spatial tasks, and a fundamental inability to plan actions in dynamic environments.
“Algorithms are great at anticipating needs, but they are currently incapable of anticipating feelings. This creates a digital experience that feels functional but cold,” noted Martin Lewit, SVP Corporate Development at the global technology firm Nisum.
You can prompt an LLM to describe how a robot should stack boxes, but it has no internal model of gravity, friction, or object permanence to draw from. It’s pattern-matching on descriptions of physics, not reasoning from physical principles.
This isn’t a scaling problem. Adding more parameters or more training data doesn’t resolve it. It’s an architectural one.
The consequences show up in practical workflows. Paras Pandey, an AWS data engineer, put it plainly in an email response to questions from The Sociable:
“In production data environments, one of the most persistent frustrations with LLMs is schema hallucination, the model confidently generates a database key structure or a data schema that looks completely plausible but doesn’t reflect your actual data model. You end up building validation layers just to catch what amounts to confident fabrication. What excites me about the world model direction is the premise that the system maintains a grounded, updateable representation of the environment it’s operating in, which is precisely what’s missing today. The infrastructure implications are significant, since you’re moving from stateless inference toward something much more stateful and continuous, but if it delivers on that promise, it addresses a real gap.”
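The validation layers Pandey describes can be as simple as checking a model-proposed schema against the real one before anything runs. The snippet below is a minimal, hypothetical sketch of such a guard; the table names, columns, and function are illustrative, not from any real pipeline:

```python
# Hypothetical guard of the kind Pandey describes: check an LLM-proposed
# set of column references against the actual schema before executing.
REAL_SCHEMA = {
    "orders": {"order_id", "customer_id", "created_at"},
    "customers": {"customer_id", "email"},
}

def hallucinated_columns(table: str, proposed: set[str]) -> set[str]:
    """Return any proposed columns that don't exist (empty set means OK)."""
    known = REAL_SCHEMA.get(table, set())
    return proposed - known

# An LLM confidently proposes a plausible-looking but nonexistent key:
bad = hallucinated_columns("orders", {"order_id", "order_uuid"})
```

The point is that the guard lives outside the model: the LLM has no internal representation of the schema, so correctness has to be enforced after the fact.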
What world models actually are
A world model is an internal representation of how the environment works. It allows a system to simulate outcomes, predict future states, and plan actions without needing explicit instructions for every scenario.
The concept isn’t new. It has roots in model-based reinforcement learning, cognitive science, and control theory. What’s changed is the ambition: researchers like LeCun are now pursuing general world models, systems that can build a rich, transferable understanding of physical and causal reality from raw sensory data, much as biological systems do.
A well-functioning world model should be able to:
- Predict consequences: given an action and a current state, estimate the resulting state
- Reason under uncertainty: maintain multiple hypotheses about the world and update them with new observations
- Generalize across contexts: apply learned physical intuitions to novel situations without retraining
- Support planning: use internal simulation to evaluate potential action sequences before committing to one
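To make the first and last of these capabilities concrete, here is a toy sketch: a one-dimensional world whose transition function serves as the agent’s internal model, letting it predict consequences and plan by simulating action sequences without acting in the real environment. Everything here (the world, the names, the planner) is illustrative, not AMI Labs’ design:

```python
from itertools import product

# Toy world: the state is a position on a line from 0 to 9.
ACTIONS = {"left": -1, "right": +1, "stay": 0}

def predict(state: int, action: str) -> int:
    """Predict consequences: next state, given current state and action."""
    return max(0, min(9, state + ACTIONS[action]))

def plan(state: int, goal: int, horizon: int = 4) -> list[str]:
    """Support planning: simulate every action sequence internally and
    pick the one whose predicted end state lands closest to the goal."""
    best_seq, best_dist = [], abs(state - goal)
    for seq in product(ACTIONS, repeat=horizon):
        s = state
        for a in seq:
            s = predict(s, a)  # roll the model forward; no real actions taken
        if abs(s - goal) < best_dist:
            best_seq, best_dist = list(seq), abs(s - goal)
    return best_seq

chosen = plan(state=2, goal=5)
```

A real world model replaces the hand-written `predict` with a learned function, and exhaustive search with a far cheaper planner, but the division of labor is the same: simulate first, act second.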
This is the kind of reasoning that underpins robotics, autonomous systems, and any AI that needs to operate reliably in open-ended, real-world environments.
“Enabling systems to learn directly from noisy, unstructured real-world experience, building an internal model of how the world works (hence their name). This reflects a meaningful shift in training capability: models become more grounded in reality, and as a result, more robust and reliable in practice,” says JD Raimondi, Solutions Architect and Head of Data Science at Making Sense.

LeCun’s approach: JEPA
The architecture AMI Labs is building on is JEPA, the Joint Embedding Predictive Architecture, proposed by LeCun in 2022. It represents a deliberate departure from both generative models and contrastive learning approaches.
The key idea: instead of training a model to reconstruct the full input (as generative models do) or to contrast positive and negative pairs (as contrastive models do), JEPA trains a model to predict the abstract representation of future states from the abstract representation of current ones. Prediction happens in latent space, not pixel or token space.
Why does this matter? Reconstructing raw inputs forces a model to capture irrelevant low-level detail. Predicting in a learned embedding space lets it focus on semantically meaningful, causally relevant features: the kind of representations that actually support downstream reasoning and planning.
It’s a more information-efficient approach to world modeling, and one that sidesteps some of the mode collapse and hallucination problems endemic to purely generative architectures.
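The core objective is easy to state even if the real architecture is not. Below is a heavily simplified numpy sketch of the JEPA training signal, assuming linear encoders and an identity predictor; the actual system uses deep networks, masking, and a momentum-updated target encoder, so every name and shape here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: raw observations are 32-d, latent embeddings are 8-d.
enc = rng.normal(size=(32, 8)) * 0.1   # context encoder (trainable)
target_enc = enc.copy()                # target encoder (no gradient; EMA in practice)
predictor = np.eye(8)                  # maps context latent to predicted target latent

def jepa_loss(x_context: np.ndarray, x_target: np.ndarray) -> float:
    """JEPA objective: predict the *embedding* of the target view from the
    embedding of the context view. No pixel or token reconstruction anywhere."""
    z_context = x_context @ enc          # latent of what the model sees
    z_target = x_target @ target_enc     # latent it must predict (held fixed)
    z_pred = z_context @ predictor       # prediction happens in latent space
    return float(np.mean((z_pred - z_target) ** 2))

# Two slightly different "views" of the same underlying state:
state = rng.normal(size=32)
loss = jepa_loss(state + 0.01 * rng.normal(size=32),
                 state + 0.01 * rng.normal(size=32))
```

Because the loss compares embeddings rather than raw inputs, per-pixel noise in the views barely registers; the model is rewarded for capturing what the two views share, not for reproducing either one exactly.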
AMI Labs’ funding round and why it signals a shift
AMI Labs closed a $1.03 billion round at a $3.5 billion pre-money valuation, co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Strategic investors included NVIDIA, Samsung, Toyota Ventures, and Temasek, a lineup that reflects serious downstream interest from industries that need reliable, real-world AI.
The company’s leadership is worth noting. LeCun serves as chairman; CEO Alexandre LeBrun previously founded Wit.ai (acquired by Facebook) and led Nabla, a clinical AI company where he encountered firsthand the risks of deploying hallucination-prone LLMs in high-stakes medical environments.
Chief Science Officer Saining Xie and VP of World Models Michael Rabbat round out a research team with serious credentials.
What makes this round significant beyond the number is the timeline it implies. LeBrun has been explicit: AMI Labs is not building a product for immediate deployment.
This is a fundamental research effort, likely measured in years before commercial applications emerge. That investors are backing a multi-year basic research program at this scale suggests genuine conviction, not just in LeCun’s thesis, but in the idea that the current LLM paradigm has run into architectural limits that won’t be resolved by the next training run.
Why this matters
AMI Labs is not operating in isolation. Fei-Fei Li’s World Labs recently raised $1 billion for spatial intelligence and 3D world modeling. SpAItial secured a $13 million seed investment for related work out of Europe. The clustering of capital and talent around this problem is itself a signal.
The applications that most clearly benefit from world models are those where LLMs have struggled most:
- Healthcare and clinical AI: where hallucinations have real consequences and models need to reason about physiology, not just medical literature. Nabla is AMI Labs’ first disclosed partner for early model access.
- Robotics and physical automation: where systems need to handle novel objects, unexpected environments, and real-time physical feedback. No LLM-based system handles this well today.
- Scientific simulation: where models need to predict experimental outcomes, not just retrieve prior results.
- Autonomous systems: vehicles, drones, industrial controllers, and any other domain where an agent must plan actions in a dynamic, partially observable world.
Better-grounded systems should also help businesses forecast more accurately. “It is telling that even with all this tech, nearly 80% of leaders still struggle with inaccurate forecasting. The problem usually isn’t the math, but the trust,” said Asparuh Koev, CEO of Transmetrics, a technology firm that uses AI to support logistics and transportation companies.
Are we there yet?
World models are not a solved problem, and the path from LeCun’s theoretical framework to robust, deployable systems is long. The history of AI is littered with paradigm shifts that took far longer than their proponents expected.
LLMs are impressive but architecturally unsuited for action-oriented intelligence. The question isn’t really whether a different approach is needed; it’s which approach will work, and when.
AMI Labs is betting on JEPA, on open research, and on a team with both theoretical depth and operational experience.
Whether or not this specific bet pays off, the underlying argument that the next meaningful leap in AI capability requires moving beyond language modeling is increasingly hard to dismiss.

Disclosure: This article mentions clients of an Espacio portfolio company.

