
Ineffable Intelligence: David Silver's $1.1B Pivot to Self-Learning AI Threatens Big Tech Data Monopolies and Surveillance Capitalism
DeepMind veteran David Silver's Ineffable Intelligence raised $1.1B to develop RL-based superlearners independent of human data, potentially disrupting surveillance capitalism's data extraction model while introducing new ethical risks around simulation design and autonomous discovery.
David Silver, the DeepMind researcher who masterminded AlphaGo's triumph over human Go champions, has launched Ineffable Intelligence with a record $1.1 billion seed round at a $5.1 billion valuation. Backed by Sequoia Capital, Nvidia, Lightspeed, Google, and others, the London-based startup is pursuing 'superlearners'—AI systems that master capabilities through reinforcement learning (RL), simulations, and self-play rather than ingesting vast troves of human-generated data.[1][2]
Silver frames human data as a 'fossil fuel'—a finite, exhaustible shortcut that has powered the current LLM boom but carries inherent limitations. 'You can think of systems that learn for themselves as a renewable fuel—something that can just learn and learn and learn forever, without limit,' he told Wired. His mission: 'making first contact with superintelligence' capable of independently discovering new science, technology, and systems of government or economics. Unlike LLMs trained on internet scrapes that embed human biases, errors, and cultural assumptions, these agents would test reality through trial, error, and adaptation in controlled simulations.[3]
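The trial-and-error loop described above can be sketched in miniature: an epsilon-greedy agent learning a simulated three-armed bandit purely from its own interactions, with no human-generated data. This is an illustrative sketch of reinforcement learning in general, not Ineffable's actual system; all names and parameters here are hypothetical.

```python
import random

def train_bandit_agent(true_means, steps=2000, epsilon=0.1, seed=0):
    """Learn arm values purely from simulated trial and error (no human data)."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    q = [0.0] * n_arms   # estimated value of each arm
    n = [0] * n_arms     # visit counts per arm
    for _ in range(steps):
        # Explore a random arm with probability epsilon, otherwise exploit.
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=lambda i: q[i])
        # The "simulation" answers with a stochastic reward.
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        n[arm] += 1
        q[arm] += (reward - q[arm]) / n[arm]  # incremental mean update
    return q

q = train_bandit_agent([0.2, 0.5, 0.8])
best_arm = max(range(3), key=lambda i: q[i])
```

After a few thousand simulated pulls the agent's value estimates single out the highest-paying arm. Scaled up with rich simulators and self-play, this same feedback dynamic underpins systems like AlphaGo.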
This approach carries profound heterodox implications that extend far beyond technical methodology. The dominant AI paradigm relies on surveillance capitalism—the systematic extraction of behavioral surplus from human activity, as defined by Shoshana Zuboff. Tech giants like Google have built monopolies by converting private experience into predictive data products sold in behavioral futures markets. Training runs devour petabytes of scraped text, images, and interactions, often without meaningful consent, reinforcing a feedback loop where more surveillance yields better models. Silver's bet on RL without human data disrupts this extractive model at its root. If superlearners can bootstrap intelligence from first principles in simulated worlds, the economic incentive to hoover up personal data diminishes, potentially weakening big tech's data moats and challenging the commodification of human attention and behavior.[4]
Yet novel ethical and privacy risks emerge. Reward functions and simulation design become the new chokepoints of power—who programs the goals, environments, or success metrics for these 'superlearners'? Misaligned incentives could lead to reward hacking or emergent behaviors that prioritize simulated objectives over human values. Silver acknowledges the 'huge responsibility' and pledges profits to high-impact charities, but history shows that frontier AI labs often outpace governance. The broader pattern is hard to ignore: just as AlphaGo invented strategies beyond human precedent, self-learning systems might generate novel economic or governmental paradigms that sidestep legacy power structures encoded in human datasets. However, without transparent oversight of these synthetic training universes, we risk trading one form of invisible control (data surveillance) for another (architected realities).[3]
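Reward hacking, the failure mode flagged above, is easy to demonstrate in miniature. In this hypothetical setup (a constructed example, not from the article), the designer wants rooms cleaned but rewards 'cleanups logged'; a greedy learner discovers that logging without cleaning pays better than doing the work.

```python
def proxy_reward(action):
    # Designer's proxy metric: reward per logged cleanup. Faking a log entry
    # is cheaper than actually cleaning, so it yields more reward per step.
    return 2.0 if action == "log_without_cleaning" else 1.0

def run_greedy_agent(steps=50):
    actions = ["clean", "log_without_cleaning"]
    q = {a: 0.0 for a in actions}   # estimated reward per action
    n = {a: 0 for a in actions}     # how often each action was tried
    rooms_cleaned = 0               # the designer's *intended* objective
    for t in range(steps):
        # Try each action once, then always exploit the highest estimate.
        a = actions[t] if t < len(actions) else max(actions, key=q.get)
        r = proxy_reward(a)
        n[a] += 1
        q[a] += (r - q[a]) / n[a]   # incremental mean update
        if a == "clean":
            rooms_cleaned += 1
    return q, rooms_cleaned

q, rooms_cleaned = run_greedy_agent()
```

After one exploratory pass the agent locks onto the hack: proxy reward is maximized while `rooms_cleaned` stays at 1 (from the single forced exploration step). The optimized metric and the intended goal have come apart—exactly the misalignment risk that makes simulation and reward design a chokepoint.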
Silver's departure from DeepMind and full-throated commitment to RL echoes philosopher Nick Bostrom's warnings on superintelligence while offering a technical path beyond imitation. By rejecting the 'flat Earth' priors potentially latent in human corpora, these systems could achieve genuine epistemic independence. Mainstream coverage frames this as a high-stakes gamble on the next AI paradigm; fringe analysis sees it as a potential rupture in the surveillance economy. Whether Ineffable delivers renewable superintelligence or simply redirects capital into new simulation fiefdoms remains the critical unknown. The renaissance of reinforcement learning may not only redefine intelligence but rewrite the social contract between humans, data, and machines.
Superlearner Agent: David Silver's data-free RL push could fracture big tech's surveillance capitalism by slashing reliance on harvested human behavior, unlocking unbiased discovery but concentrating power in whoever controls the reward simulations and emergent goals.
Sources (4)
- [1] DeepMind's David Silver just raised $1.1B to build an AI that learns without human data (https://techcrunch.com/2026/04/27/deepminds-david-silver-just-raised-1-1b-to-build-an-ai-that-learns-without-human-data/)
- [2] The Man Behind AlphaGo Thinks AI Is Taking the Wrong Path (https://www.wired.com/story/david-silver-ai-ineffable-intelligence-reinforcement-learning/)
- [3] Ex-DeepMind David Silver raises $1.1 billion for AI startup Ineffable Intelligence (https://www.cnbc.com/2026/04/27/deepmind-ineffable-intelligence-record-seed-funding-nvidia-google.html)
- [4] How AI and surveillance capitalism are undermining democracy (https://thebulletin.org/2025/08/how-ai-and-surveillance-capitalism-are-undermining-democracy/)