What happens when AI starts building itself?

Russell Brandom
12:57 PM PDT · May 14, 2026

Richard Socher has been a major figure in AI for some time, best known for founding the AI search startup You.com and, before that, for his work on ImageNet. Now he's joining the current generation of research-focused AI ventures with Recursive Superintelligence, a San Francisco-based startup that came out of stealth on Wednesday with $650 million in funding.
Socher is joined in the new venture by a cohort of prominent AI researchers, including Peter Norvig and Cresta co-founder Tim Shi. Together, they’re working to create a recursively self-improving AI model, one that can autonomously identify its own weaknesses and redesign itself to fix them, without human involvement — a long-held holy grail of contemporary AI research.
I spoke with him on Zoom after the launch, digging into Recursive’s unique technical approach and why he doesn’t think of this new project as a neolab, the informal term for a new generation of AI startups that prioritize research over building products.
This interview has been edited for length and clarity.
We hear a lot about recursion these days! It feels like a very common goal across different labs. What do you see as your unique approach?
Our unique approach is to use open-endedness to get to recursive self-improvement, which no one has yet achieved. It's an elusive goal. A lot of people assume it already happens when you just do auto-research: you can take AI and ask it to make some other thing better, which could be a machine learning system, or just a letter that you write, or whatever it might be, right? But that's not recursive self-improvement. That's just improvement.
Our main focus is to build truly recursive, self-improving superintelligence at scale, which means that the entire process of ideation, implementation, and validation of research ideas would be automatic.
First [it would automate] AI research ideas, then any kind of research idea, eventually even in the physical domains. But it's particularly powerful when it's AI working on itself, developing a new kind of self-awareness of its own shortcomings.
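For intuition, here is a toy sketch of that closed ideation-implementation-validation loop. The single-number "model," the benchmark, and every function name are illustrative placeholders chosen for this example, not anything from Recursive's actual system.

```python
import random

# Toy sketch of an automated research loop: ideation -> implementation ->
# validation, with no human in the loop. The "model" is just a number and
# the "benchmark" a toy objective; all names are illustrative placeholders.

def evaluate(model: float) -> float:
    """Validation: score a candidate on a fixed benchmark (peak at 3.0)."""
    return -(model - 3.0) ** 2

def propose_idea(model: float) -> float:
    """Ideation: the system proposes a modification to itself."""
    return model + random.gauss(0.0, 0.5)

def self_improve(model: float, iterations: int = 1_000) -> float:
    """Run the loop: a candidate replaces the incumbent only if the
    validation step confirms it scores higher."""
    best = evaluate(model)
    for _ in range(iterations):
        candidate = propose_idea(model)   # ideation + implementation
        score = evaluate(candidate)       # validation
        if score > best:                  # adopt only verified improvements
            model, best = candidate, score
    return model

print(self_improve(model=0.0))  # climbs toward the benchmark optimum at 3.0
```

The point of the sketch is the gate at the end: nothing is adopted unless the validation step verifies it, which is what separates a self-improvement loop from simply asking a model to "make this better."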
You used the term open-ended — does that have a specific technical meaning?
It does. In fact, Tim Rocktäschel, one of our co-founders, led the open-endedness and self-improvement teams at Google DeepMind and particularly worked on the world model Genie 3, which is a great example of open-endedness. You can tell it any concept, any world, any agent, and it just creates it, and it's interactive.
In biological evolution, animals adapt to the environment, and then others counter-adapt to those adaptations. It's just a process that can evolve for billions of years, and interesting stuff keeps happening, right? That's how we developed eyes in our [heads].
Another example is rainbow teaming, from another of Tim's papers. Have you heard of red teaming?
So, red teaming also has to be done in an LLM context. Basically, you try to get the LLM to tell you how to build a bomb, and you want to make sure that it won't.
Now, humans can sit there for a long time and come up with interesting examples of what the AI shouldn't say. But what if you tested this first AI with a second AI, whose task is to make the first AI [try to] say all the possible bad things? Then they can go back and forth for millions of iterations.
You can actually allow two AIs to co-evolve. One keeps attacking the other, coming up with not just one angle but many different angles, hence the rainbow analogy. Then you can inoculate the first AI, and it becomes safer and safer. This was an idea from Tim Rocktäschel, and it's now used in all the major labs.
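To make that back-and-forth concrete, here is a toy sketch of an attacker and defender co-evolving across many angles. The angle names, the starting failure rate, and the inoculation factor are all assumptions for the sake of the example; the real rainbow teaming work searches over actual adversarial prompts, not abstract failure rates.

```python
import random

# Toy co-evolution loop in the spirit of rainbow teaming: the attacker probes
# many different "angles" rather than one, and every successful attack is used
# to inoculate the defender. The angle names, the 0.5 starting failure rate,
# and the 0.99 inoculation factor are illustrative assumptions, not the
# published Rainbow Teaming method.

ANGLES = ["weapons", "malware", "fraud", "self-harm"]  # hypothetical categories

def co_evolve(rounds: int = 10_000) -> dict[str, float]:
    # Per-angle chance the defender slips up; inoculation drives it down.
    failure_rate = {angle: 0.5 for angle in ANGLES}
    for _ in range(rounds):
        angle = random.choice(ANGLES)              # attacker picks an angle
        if random.random() < failure_rate[angle]:  # the attack landed
            failure_rate[angle] *= 0.99            # "inoculate" on the failure
    return failure_rate

print(co_evolve())  # every angle's failure rate decays toward zero
```

Because the attacker spreads its attempts across all the angles rather than hammering one, every category of weakness shrinks over time, which is the "safer and safer" dynamic Socher describes.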
