Last Updated on April 5, 2026 by Eytan Bijaoui
⚡ Quick Answer: The billion-dollar bet against LLMs: a new class of AI startups is building alternatives to large language models, arguing that the current architecture is fundamentally limited. Investors are paying attention — and writing big checks.
You know that scene in The Big Short where Michael Burry is sitting alone, headphones on, reading mortgage data that nobody else bothered to read, and quietly betting against the entire housing market?
That’s basically what Yann LeCun just did. Except instead of mortgages, it’s Large Language Models. And instead of a hedge fund, it’s a $1.03 billion seed round.
Let me say that again. A billion-dollar seed round. The largest in European history. To build something that says ChatGPT, Claude, and Gemini are all fundamentally wrong.
If you’re a founder building anything on top of AI right now, you should probably pay attention.
What Actually Happened
Yann LeCun is not some random contrarian on Twitter. He won the Turing Award (the Nobel Prize of computer science). He was Meta’s chief AI scientist for years. He’s one of the three people credited with inventing deep learning.
He left Meta. And within four months, he raised $1.03 billion for a startup called AMI Labs, valued at $3.5 billion before writing a single line of production code.
The investors read like an Avengers roster of tech money. Bezos. Nvidia. Samsung. Temasek. Eric Schmidt. Mark Cuban. Tim Berners-Lee. These aren’t people who throw money at ideas for fun. Well, maybe Cuban. But the rest are dead serious.
The Thesis That Should Make You Uncomfortable
Here’s LeCun’s argument, and honestly, it’s hard to dismiss.
Every LLM you’ve ever used (ChatGPT, Claude, Gemini, all of them) works by predicting the next word. That’s it. Very sophisticated next-word prediction. They don’t understand what they’re talking about. They don’t model reality. They don’t plan. They autocomplete.
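To make "sophisticated next-word prediction" concrete, here's a deliberately toy sketch (my own illustration, nothing to do with any real model's internals): a bigram counter that always picks the most frequent continuation it has seen. Real LLMs replace the counting with a neural network over billions of tokens, but the objective has the same shape.

```python
from collections import Counter, defaultdict

# Toy illustration of the core LLM loop: given what came before,
# pick the most likely next word. Real models use neural networks,
# but the training objective is the same shape.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Return the most frequent continuation seen in the corpus."""
    return following[prev].most_common(1)[0][0]

print(next_word("the"))  # "cat" -- seen twice, vs "mat"/"fish" once each
```

That's the whole trick, scaled up absurdly far. There's no world model in there, just statistics over what tends to follow what.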
LeCun’s alternative is something called JEPA (Joint Embedding Predictive Architecture). Instead of predicting text, it learns abstract representations of how the world actually works. Think of it as the difference between someone who can write a convincing essay about gravity versus someone who actually understands physics.
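The key shift is *where the prediction happens*. Here's a minimal conceptual sketch of that idea (my own toy illustration with made-up numbers, emphatically not AMI Labs' architecture): instead of scoring every possible next token, both the visible context and the hidden target are encoded into abstract vectors, and the model is trained to predict the target's embedding.

```python
# Conceptual sketch of the JEPA idea (a toy illustration, not AMI
# Labs' actual code): encode the visible context and the hidden
# target into abstract vectors, then predict the target's
# *embedding* rather than its raw pixels or tokens.

def encode(x):
    """Stand-in encoder: collapse a raw input into two latent features."""
    return (sum(x[:2]) / 2, sum(x[2:]) / 2)

context = [1.0, 1.0, 0.0, 0.0]  # e.g. the visible half of a scene
target = [0.0, 0.0, 1.0, 1.0]   # the masked half to be predicted

z_context = encode(context)     # -> (1.0, 0.0)
z_target = encode(target)       # -> (0.0, 1.0)

# A learned predictor would map z_context toward z_target; the loss
# lives in embedding space, not over a vocabulary of next tokens.
latent_loss = sum((a - b) ** 2 for a, b in zip(z_context, z_target))
print(latent_loss)  # 2.0
```

The point of predicting in embedding space is that the model only has to capture the abstract structure of the scene, not every pixel or word, which is closer to "understanding gravity" than "writing essays about gravity."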
His target applications? Robotics. Autonomous vehicles. Industrial systems. Healthcare. Basically, everything where you need a machine that understands reality, not just text about reality.
So Should You Panic?
No. But also, maybe stop being so comfortable.
Here’s what I think is actually going on, and I could be wrong about some of this.
LeCun is probably right that LLMs have fundamental limitations. They hallucinate. They can’t reliably reason about physical systems. They don’t actually “know” anything. These aren’t controversial claims. Even the people building LLMs will quietly admit this over a beer.
But here’s the thing. Being right about the limitations doesn’t mean LLMs are useless. The iPhone’s first camera was terrible. It still changed photography forever. Not because it was good, but because it was good enough and always in your pocket.
LLMs are the “good enough” that’s already reshaping entire industries. AMI Labs is building something that might be fundamentally better, but “might” and “in several years” don’t pay your rent.
The Real Lesson for Founders (And It’s Not About JEPA)
I’ve seen over a hundred startups build on top of someone else’s platform. And there’s a pattern that repeats so reliably it’s almost boring.
Phase 1: The platform is amazing. You build on it. Everything is great.
Phase 2: The platform changes something. Your entire product breaks overnight.
Phase 3: You scramble, pivot, or die.
Remember when thousands of companies built their entire business on the Facebook API? Then Facebook changed the rules and half of them disappeared. Twitter did the same thing. Google has killed more APIs than I can count.
Right now, the AI startup ecosystem is in Phase 1. OpenAI’s API is amazing. Anthropic’s Claude is incredible. Building on them feels like free power. And it kind of is, for now.
But here’s the question nobody wants to ask: what happens when the foundation shifts?
Maybe LeCun’s JEPA architecture becomes the new standard in five years. Maybe OpenAI pivots to something completely different. Maybe a regulation changes the game. The specific threat doesn’t matter. What matters is that you’re building on someone else’s foundation, and foundations can crack.
What Smart Founders Do About This
I’m not going to tell you to stop building on LLMs. That would be ridiculous. The opportunity is real and it’s right now.
But I will tell you three things.
First, validate the problem, not the technology. If your startup only works because GPT-5 exists, you don’t have a startup. You have a feature that OpenAI might ship next Tuesday. The businesses that survive platform shifts are the ones solving real problems that would exist regardless of which AI architecture wins.
Second, own your distribution. The startups that survived Facebook’s API apocalypse were the ones who had built their own audience, their own brand, their own relationship with customers. When the platform rug-pulled them, they had somewhere to stand. If your only distribution is “we’re the best wrapper around Claude,” you’re one API change away from irrelevance.
Third, stay curious about what’s coming. You don’t need to understand JEPA architecture. But you should know that the smartest people in AI think the current approach has an expiration date. Build for today, but keep one eye on the road ahead.
The Billion-Dollar Validation
There’s actually something weirdly inspiring about LeCun’s move if you zoom out.
Here’s a guy in his 60s, at the peak of his career, with nothing left to prove. He could have stayed at Meta, collected his paycheck, and coasted. Instead, he bet his reputation on a thesis that 99% of the industry thinks is wrong. Or at least premature.
And investors gave him a billion dollars for it.
You know what that is? That’s the ultimate market validation. Not a survey. Not a landing page test. Not “my friends think it’s a good idea.” A billion dollars from people who’ve spent their careers separating signal from noise.
For founders, the takeaway isn’t “build world models.” It’s that conviction, backed by expertise and a contrarian thesis, is still the most valuable currency in startups. If you believe something the market doesn’t see yet, and you can articulate why, the capital will find you.
But please, for the love of everything, validate first. Because the difference between conviction and delusion is evidence.