
Five architects of the AI economy explain where the wheels are coming off


Connie Loizos · 10:25 PM PDT · May 6, 2026

Earlier this week, five people who touch every layer of the AI supply chain sat down at the Milken Global Conference in Beverly Hills, where they talked with this editor about everything from chip shortages to orbital data centers to the possibility that the whole architecture that undergirds the tech is wrong.

On stage with TechCrunch: Christophe Fouquet, CEO of ASML, the Dutch company that holds a monopoly on the extreme ultraviolet lithography machines without which modern chips would not exist; Francis deSouza, COO of Google Cloud, who is overseeing one of the biggest infrastructure bets in corporate history; Qasar Younis, co-founder and CEO of Applied Intuition, a $15 billion physical AI company that started in simulation and has since moved into defense; Dimitry Shevelenko, the chief business officer of Perplexity, the AI-native search-to-agents company; and Eve Bodnia, a quantum physicist who left academia to challenge the foundational architecture most of the AI industry takes for granted at her startup, Logical Intelligence. (Meta’s former chief AI scientist, Yann LeCun, signed on as founding chair of its technical research board earlier this year.)

The AI boom is running into hard physical limits, and the constraints begin further down the stack than many may realize. Fouquet was the first to say it, describing a “huge acceleration of chips manufacturing,” while expressing his “strong belief” that despite all that effort, “for the next two, three, maybe five years, the market will be supply limited,” meaning the hyperscalers — Google, Microsoft, Amazon, Meta — aren’t going to get all the chips they’re paying for, full stop.

DeSouza highlighted how big — and how fast-growing — an issue this is, reminding the audience that Google Cloud’s revenue crossed $20 billion last quarter, growing 63%, while its backlog — the committed but not yet delivered revenue — nearly doubled in a single quarter, from $250 billion to $460 billion. “The demand is real,” he said with impressive calm.

For Younis, the constraint comes primarily from elsewhere. Applied Intuition builds autonomy systems for cars, trucks, drones, mining equipment and defense vehicles, and his bottleneck isn’t silicon — it’s the data that one can only gather by sending machines into the real world and watching what happens. “You have to find it from the real world,” he said, and no amount of synthetic simulation fully closes that gap. “There will be a long time before you can fully train models that run on the physical world synthetically.”

The energy problem is also real

If chips are the first bottleneck, energy is the one looming behind it. DeSouza confirmed that Google is exploring data centers in space as a serious response to energy constraints. “You get access to more abundant energy,” he noted. Of course, even in orbit, it isn’t simple. DeSouza observed that space is a vacuum, which eliminates convection and leaves radiation as the only way to shed heat into the surrounding environment — a much slower and harder-to-engineer process than the air and liquid cooling systems that data centers rely on today. But the company is still treating it as a legitimate path.
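To give a sense of why radiative cooling is so much harder, here is a back-of-the-envelope sketch (our illustration, not from the panel) using the Stefan-Boltzmann law. The radiator temperature, emissivity, and the 1 MW compute load are all assumed figures chosen purely to show the scale of the problem.

```python
# Illustrative Stefan-Boltzmann estimate: radiator area needed to reject
# heat purely by radiation, P = e * sigma * A * (T^4 - T_env^4).
# All parameter values below are assumptions for the sake of the example.

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W / (m^2 * K^4)
EMISSIVITY = 0.9      # assumed high-emissivity radiator coating
T_RADIATOR = 330.0    # assumed radiator surface temperature, K (~57 C)
T_SPACE = 4.0         # effective deep-space background temperature, K

def radiator_area_m2(heat_watts: float) -> float:
    """Radiator area (m^2) needed to shed `heat_watts` radiatively."""
    flux = EMISSIVITY * SIGMA * (T_RADIATOR**4 - T_SPACE**4)  # W per m^2
    return heat_watts / flux

# A hypothetical 1 MW orbital compute pod:
area = radiator_area_m2(1_000_000)
print(f"{area:.0f} m^2 of radiator")  # on the order of ~1,650 m^2
```

Under these assumptions, one megawatt of compute — a tiny fraction of a terrestrial hyperscale campus — already demands on the order of 1,600 square meters of radiator surface, which is the engineering gap deSouza was alluding to.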

The deeper argument deSouza made, somewhat unsurprisingly, was about efficiency through integration. Google’s strategy of co-engineering its full AI stack — from custom TPU chips through to models and agents — pays dividends in flops per watt (more computation per unit of energy) that a company buying off-the-shelf components simply can’t replicate, he suggested. “Running Gemini on TPUs is much more energy efficient than any other configuration,” because chip designers know what’s coming in the model before it ships, he said.

Fouquet made a similar point later in the discussion. “Nothing can be priceless,” he said. The industry is in a strange moment right now, investing extraordinary amounts of capital, driven by strategic necessity. But more compute means more energy, and more energy has a price.

While the rest of the industry debates scale, architecture, and inference efficiency within the large language model paradigm, Bodnia is building something very different.

Her company, Logical Intelligence, is built on so-called energy-based models (EBMs), a class of AI that doesn’t predict the next token in a sequence but instead attempts to understand the rules underlying data, in a way she argues is closer to how the human brain actually works. “Language is a user interface between my brain and yours,” she said. “The reasoning itself is not attached to any language.”

Her largest model runs to 200 million parameters — compared to the hundreds of billions in leading LLMs — and she claims it runs thousands of times faster. More importantly, it’s designed to update its knowledge as data changes, rather than requiring retraining from scratch.
