Meta is pouring billions into a new research lab to develop an AI it says could rival or exceed human abilities across disciplines, supported by a Manhattan-scale data center and a global recruitment drive for top scientists.
Superintelligence, once discussed mainly in academic circles, is now part of corporate strategy. Meta’s investments in infrastructure, financing and research teams suggest that it is approaching this advanced form of AI as a concrete goal, similar to initiatives underway at companies including OpenAI and Anthropic.
Superintelligence refers to an AI that outperforms people in every area, from scientific creativity to emotional understanding. Kunal Sawarkar, Distinguished Engineer and Chief Data Scientist at IBM, told IBM Think in an interview that superintelligence would be “smarter than humans in every domain, not just memory or math, but reasoning, creativity, emotional intelligence and even social manipulation.” It is this level of capability that Meta and other tech giants are now racing to achieve, pouring resources into research programs aimed at turning what was once science fiction into an engineering reality.
To support its superintelligence effort, Meta plans multi-gigawatt “supercluster” data centers, one of which will cover an area comparable to most of Manhattan. These facilities, codenamed Prometheus and Hyperion, are designed to support large-scale model training.
Recruitment for the initiative has drawn top talent from Google DeepMind, OpenAI and other research institutions. Meta has also brought in Alexandr Wang, Founder of Scale AI, to lead parts of the effort. The company is reported to be offering substantial compensation packages, in some cases exceeding USD 100 million in total value, to secure experienced AI researchers.
CEO Mark Zuckerberg has described the goal as building “personal superintelligence,” an AI integrated into devices, such as smart glasses, that can manage information, anticipate needs and assist in achieving personal and professional objectives.
Sawarkar says achieving superintelligence will demand far more effort and expense than the progress of today’s AI systems might suggest. Current models such as ChatGPT, Claude and Gemini have made notable advances in generating text, images and code. Yet he notes that true superintelligence may not arrive as a single breakthrough, but as a series of developments, such as models solving problems previously thought intractable or producing new scientific frameworks. Those milestones will demand enormous technical, financial and organizational resources of the kind Meta is now trying to marshal.
“Scaling to that level requires significant advances in compute efficiency, learning algorithms and sustainable energy use,” Sawarkar said. Training the largest models already costs millions of US dollars per run, and sustaining this level of development demands careful optimization of both hardware and software. Researchers are exploring hybrid architectures that combine large language models (LLMs) with symbolic reasoning and retrieval-augmented generation (RAG), aiming for systems that can handle more complex reasoning tasks reliably.
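To make the RAG idea concrete, here is a minimal sketch of the retrieve-then-generate control flow. It is illustrative only: the corpus, the toy bag-of-words embed() function and the placeholder generate() call are hypothetical stand-ins, not any system described in this article.

```python
# Illustrative sketch of retrieval-augmented generation (RAG):
# retrieve relevant text, then prepend it to the prompt before generating.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector (real systems use learned vectors).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(prompt: str) -> str:
    # Placeholder for an LLM call; a real system would query a model here.
    return f"[model answer grounded in]\n{prompt}"

corpus = [
    "Prometheus and Hyperion are codenames for planned Meta data centers.",
    "Retrieval-augmented generation grounds model output in retrieved text.",
    "Continual learning lets a system update its knowledge over time.",
]

question = "What grounds model output in retrieved text?"
context = "\n".join(retrieve(question, corpus))
print(generate(f"Context:\n{context}\n\nQuestion: {question}"))
```

A production system would swap the toy retriever for a vector database and the placeholder for an actual model call, but the overall control flow, retrieve first, then generate against the retrieved context, stays the same.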
“Long-term memory and context retention are also key challenges,” Sawarkar said. Current models operate on session-by-session inputs and lack persistent awareness over time. Researchers are investigating memory modules and continual learning systems to address this limitation.
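The sketch below illustrates what a persistent memory module might look like in contrast to a session-scoped context window. The MemoryStore class, its file format and its keyword-based recall are hypothetical simplifications, not a description of any production memory system.

```python
# Illustrative sketch of a persistent memory module that survives across sessions.
import json
from pathlib import Path

class MemoryStore:
    """Append-only long-term memory persisted to disk between sessions."""

    def __init__(self, path: str = "memory.json"):
        self.path = Path(path)
        self.items: list[dict] = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def remember(self, role: str, text: str) -> None:
        # Persist each note immediately so it outlives the current session.
        self.items.append({"role": role, "text": text})
        self.path.write_text(json.dumps(self.items, indent=2))

    def recall(self, keyword: str) -> list[str]:
        # Naive keyword recall; real systems use embeddings or learned indices.
        return [m["text"] for m in self.items if keyword.lower() in m["text"].lower()]

# Session 1: store a fact, then end the process.
memory = MemoryStore()
memory.remember("user", "My project deadline is Friday.")

# Session 2: a fresh process reloads the file and recalls the fact.
later = MemoryStore()
print(later.recall("deadline"))  # ['My project deadline is Friday.']
```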
Meta’s investment places it in a small group of companies pursuing similarly ambitious goals. OpenAI has announced its “Safe Superintelligence” program, described as a dedicated track for building more powerful systems with safety measures integrated from the outset.
Roman Yampolskiy, an Associate Professor of Computer Science and Engineering at the University of Louisville who studies AI safety, views Zuckerberg’s public embrace of superintelligence as a notable shift in industry rhetoric. “What was once dismissed as speculative is now being normalized by mainstream tech leadership,” he told IBM Think in an interview.
He said he believes large technology platforms are already accelerating progress. “With enough capital, data and incentive misalignment, progress toward superintelligence is not aspirational; it is inevitable.” Experts, including Yampolskiy, have warned that without guardrails, superintelligent systems could become uncontrollable and cause large-scale harm. “The question is not if they can, but whether they should, and that question is not being asked loudly enough.”
While the timeline for achieving superintelligence remains unclear, Sawarkar remains optimistic about its long-term prospects. “Superintelligence isn’t science fiction,” he said. “It’s a design question.”