
One giant leap for AI

Critics call orbital data centers “peak insanity.” Engineers are chipping away at the challenges.

A rendering of a future Starcloud satellite as it travels along the terminator, the boundary between day and night. Image courtesy of Starcloud

The AI boom has created a massive appetite for computing power. Global data center electricity demand is expected to double by 2030, according to Gartner. These centers’ need for resources is straining the capacity of land-based infrastructure and sparking local opposition over utility bills and environmental impact.

According to a growing chorus of tech leaders at Google, SpaceX and elsewhere, the answer is to expand off-planet, immediately. But are data centers in space technically possible? And even if they are, what about their astronomical price tag? IBM Think talked to some of the researchers and entrepreneurs trying to make orbital data centers a reality.

Why build data centers in space?

The main draw is simple: continuous, unlimited energy. The sun emits more than 100 trillion times the power of humanity’s total electricity production. The challenge is figuring out how to harness it efficiently.

That’s the thinking behind Google’s Project Suncatcher, a space data center moonshot project announced late last year and targeting test launches as early as 2027. “We want to put these data centers in space, closer to the sun,” Google’s Sundar Pichai said in a December interview. “We will send tiny, tiny racks of machines and have them in satellites, test them out, and then start scaling from there. But there is no doubt to me that, a decade or so away, we will be viewing it as a more normal way to build data centers.”

Then there’s the impact terrestrial data centers have on the environment. The AI economy currently uses 23 cubic kilometers of water annually, but usage is predicted to jump by 129% by 2050, exceeding 54 cubic kilometers (14 trillion US gallons), according to recent research.

“This planet is so beautiful, and so unusual, this is the one that we’re going to want to protect,” said Amazon Founder Jeff Bezos at a 2024 summit. “There is no plan B.” This March, Bezos’s space technology company, Blue Origin, filed with the FCC for permission to deploy nearly 52,000 satellites as part of Project Sunrise, its proposed orbital data center system.

And, of course, there’s all the space in space.

“In the long term, space-based AI is obviously the only way to scale,” Elon Musk wrote on SpaceX’s blog in February. “I mean, space is called ‘space’ for a reason.” That month, SpaceX filed with the FCC to launch as many as 1 million solar-powered satellites to create an orbital data center system—an epic number compared to the company’s competitors. Amazon filed a petition to deny SpaceX’s application, and astronomers said the satellites would “permanently scar” the night sky.

In March, SpaceX offered a first look at its plans, which would include a data center longer than the 109-meter International Space Station. SpaceX suggested future models could be larger still. “I think the cost of deployed AI in space will drop below the cost of terrestrial AI much sooner than people expect,” Musk said during the presentation. “I think it may be only two or three years.”

Are orbital data centers even possible?

Not everyone thinks launching data centers into space is a good plan. OpenAI’s Sam Altman called the idea “ridiculous,” at least in the current landscape. A Gartner report described the excitement as “peak insanity” and a “bubble,” saying practical applications won’t arrive “for decades, if ever.” Popular science YouTuber Kyle Hill went further, calling orbital data centers “a stupid idea for almost every reason.”

One reason is cosmic radiation. In space, hardware is constantly bombarded by high-energy particles that can corrupt data or permanently fry a chip, and the electronics that make today’s AI possible weren’t designed with that environment in mind.

Then there’s cooling. Space may intuitively seem cold, but without air there is no convection, which is the mechanism that makes fans and heatsinks work. Heat can only escape by radiating away from a surface, which requires large radiator panels. In many of the current designs, the cooling system can rival or even exceed the size of the computing hardware itself.
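The scale of that radiator problem can be sketched with the Stefan-Boltzmann law, which governs how much heat a surface can shed by radiation alone. The numbers below (300 K panel temperature, 0.9 emissivity, two radiating faces) are generic illustrative assumptions, not any company's actual design figures, and the model ignores heat absorbed from the sun and Earth.

```python
# Back-of-envelope radiator sizing using the Stefan-Boltzmann law.
# Simplified: ignores heat absorbed from the sun and Earth, and assumes
# the panel radiates from both faces at a uniform temperature.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_watts: float, temp_k: float = 300.0,
                     emissivity: float = 0.9, faces: int = 2) -> float:
    """Panel area needed to radiate `heat_watts` of waste heat."""
    flux = emissivity * SIGMA * temp_k**4  # W per m^2 per radiating face
    return heat_watts / (flux * faces)

# A 1-megawatt compute cluster (a small terrestrial data center hall):
area = radiator_area_m2(1e6)
print(f"{area:,.0f} m^2 of radiator")  # on the order of 1,200 m^2
```

Even under these generous assumptions, a single megawatt of compute needs radiator panels covering several basketball courts, which is why cooling hardware can dwarf the computers it serves.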

The third problem is power. The ISS covers roughly the area of a football field and is the largest structure ever deployed in orbit. Its eight solar arrays generate enough power for a hundred-plus GPUs—enough for a single rack in a terrestrial data center, which can house thousands.

Those are some of the physical challenges. But there are economic challenges, too. Can we make processing data in space as affordable as doing it on Earth? 

A helpful framework for thinking about cost is a web-based calculator built by Andrew McCalip, an engineer at space startup Varda, which lets users compare the cost of an orbital data center with that of a terrestrial one. At current launch prices, a 1-gigawatt orbital facility would cost around USD 51 billion to build and operate for five years—more than three times the USD 16 billion it would cost to build an equivalent facility on Earth.
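The arithmetic behind such a comparison can be sketched in a few lines. Every figure below (mass per kilowatt, launch price per kilogram, hardware and energy costs) is a placeholder assumption chosen to reproduce the headline USD 51 billion versus USD 16 billion figures above; it is not McCalip's actual model or real vendor pricing.

```python
# Illustrative 5-year cost comparison for a 1 GW facility. All inputs
# are placeholder assumptions, not real pricing data.

def orbital_cost_usd(power_gw: float,
                     kg_per_kw: float = 10.0,          # assumed launched mass per kW
                     launch_usd_per_kg: float = 3600.0,  # assumed launch price
                     hardware_usd_per_gw: float = 15e9) -> float:
    """Launch cost (mass to orbit) plus the hardware itself."""
    mass_kg = power_gw * 1e6 * kg_per_kw  # 1 GW = 1e6 kW
    return mass_kg * launch_usd_per_kg + power_gw * hardware_usd_per_gw

def terrestrial_cost_usd(power_gw: float,
                         capex_usd_per_gw: float = 12e9,
                         energy_usd_per_gw_5yr: float = 4e9) -> float:
    """Construction plus five years of electricity."""
    return power_gw * (capex_usd_per_gw + energy_usd_per_gw_5yr)

print(f"orbital:     USD {orbital_cost_usd(1.0) / 1e9:.0f}B")
print(f"terrestrial: USD {terrestrial_cost_usd(1.0) / 1e9:.0f}B")
```

The structure of the sketch makes McCalip's point visible: the orbital case is dominated by the launch-mass term, so it only closes if launch prices per kilogram fall dramatically.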

“If you run the numbers honestly, the physics doesn’t immediately kill it, but the economics are savage,” McCalip wrote of his findings. “Orbit doesn’t get points for being cool. Orbit has to win on cost.”

Running an orbital data center for fun and profit

A growing number of researchers think cost can be reframed entirely by rethinking how orbital infrastructure is built and shared.

Remember the last time your phone seamlessly switched between cell towers as you drove down the highway? You don’t, because the transfer is so well-engineered that it disappears into the background. Now, imagine trying to pull off the same trick, but the “phones” are rovers crawling across the lunar surface, the “towers” are satellites hurtling through orbit and there are virtually no ground stations for thousands of miles.

That’s the engineering challenge that Martin Schmatz and his colleagues at IBM Research set out to examine in their 2024 paper, “Designing (Not Only) Lunar Space Data Centers.” That invisible handoff is commonplace on Earth, but in space, it becomes the foundation of an entirely new computing architecture—and potentially a new business model.

Instead of a single monolithic computing platform in orbit, the paper envisions a tiered pipeline that moves data up the chain from where it’s collected to where it can actually be processed. At the bottom sit the sensors: rovers, low-orbit satellites and the like, which are chock-full of data but have barely enough energy to keep themselves going. They hand off the data they collect to space data centers, beefier nodes that aggregate signals from dozens of sources and filter out the noise, then compress the results before sending anything home.

“You have many sensors producing a lot of data, meaning a lot of bandwidth,” Schmatz, whose work focuses on secure computing and key/certificate management at IBM’s Zurich lab, told IBM Think. “But you can’t send all that data to Earth in one go.” The space data center’s job, he explained, is to be smart about what it forwards.
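The aggregate-filter-compress role of that middle tier can be illustrated with a toy pipeline. The data format, signal threshold and compression choice here are all invented for illustration; the paper's actual architecture is far more involved.

```python
# Toy sketch of the tiered pipeline: sensor nodes produce raw readings;
# a "space data center" node aggregates them, drops low-information
# samples, and compresses what's left before the downlink.
import json
import zlib

def aggregate(sensor_batches):
    """Merge readings from many sensors into one stream."""
    return [r for batch in sensor_batches for r in batch]

def filter_noise(readings, threshold=0.5):
    """Forward only readings whose signal strength clears a threshold."""
    return [r for r in readings if r["signal"] >= threshold]

def compress_for_downlink(readings) -> bytes:
    """Shrink the surviving readings for the narrow link back to Earth."""
    return zlib.compress(json.dumps(readings).encode())

# Three sensors, mostly noise:
batches = [
    [{"id": "rover-1", "signal": 0.9}, {"id": "rover-1", "signal": 0.1}],
    [{"id": "sat-7", "signal": 0.2}],
    [{"id": "sat-9", "signal": 0.7}, {"id": "sat-9", "signal": 0.05}],
]
kept = filter_noise(aggregate(batches))
payload = compress_for_downlink(kept)
print(f"{len(kept)} of 5 readings kept, {len(payload)} bytes downlinked")
```

The point is the shape of the flow, not the specifics: most raw data dies at the middle tier, and only a compressed distillate competes for scarce downlink bandwidth.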

It’s clever engineering that also happens to suggest an equally clever commercial proposition. Most of what the public hears about space compute involves massive, vertically integrated projects where one company owns the satellites, the ground stations and everything in between. Those systems can be tightly optimized for a single mission.

Instead, Schmatz envisions a shared infrastructure. A proprietary mission built for one purpose can optimize its compute tiers precisely for that goal, but a multi-user architecture has to serve many different operators at once, and that demand is exactly what drives the need for high-performance orbital compute. Smaller companies could launch niche sensor hardware and simply rent access to the upper tiers of the network rather than building their own.

“It’s like a small telecom company using the network of a larger one,” Schmatz said.

One plausible near-term application, he said, might be Earth observation. Satellite sensor capabilities have expanded dramatically over the past two decades; sensors that once could barely make out a building can now, as he puts it, “practically read The New York Times.” These sensors could be employed for profitable use cases like detailed agricultural monitoring and predicting local weather.

IBM Research has already been moving in that direction. Working with NASA and the European Space Agency, IBM scientists have developed lightweight versions of the company’s Prithvi and TerraMind Earth observation models, which are small enough to be uploaded to a satellite mid-orbit and specifically designed to process geospatial data at the edge, rather than sending it back to the ground.

Any such infrastructure, Schmatz stressed, would also have to be designed for longevity. Space hardware cannot be serviced easily, and software stacks evolve. Secure over-the-air updates, Schmatz said, would be a “must-have” from day one, including the ability to upgrade security algorithms without compromising the system. The harder problem, he said, is that the security algorithms used to verify those updates would eventually become vulnerable themselves. The satellite can’t know today what verification methods it will need tomorrow, which means the update system has to be designed to update itself.
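One way to think about a self-updating verification scheme is to make the verification algorithm itself part of the data, so an update can install a stronger verifier for future updates. The sketch below is purely illustrative and is not from the paper: it uses an HMAC as a stand-in for a real signature scheme, and the manifest format is invented.

```python
# Minimal sketch of a crypto-agile update verifier: each manifest names
# the algorithm it was authenticated with, and a valid update can
# register a new algorithm for later manifests. HMAC stands in for a
# real asymmetric signature scheme; all details here are illustrative.
import hashlib
import hmac

VERIFIERS = {  # algorithm name -> verification function
    "hmac-sha256": lambda key, blob, tag: hmac.compare_digest(
        hmac.new(key, blob, hashlib.sha256).digest(), tag),
}

def apply_update(key: bytes, manifest: dict) -> bool:
    """Verify an update; on success, optionally install a new verifier."""
    verify = VERIFIERS.get(manifest["alg"])
    if verify is None or not verify(key, manifest["blob"], manifest["tag"]):
        return False  # unknown algorithm or bad tag: reject the update
    new_alg = manifest.get("adds_alg")
    if new_alg:  # the update itself can ship a stronger verifier
        name, fn = new_alg
        VERIFIERS[name] = fn
    return True

key = b"shared-secret"
blob = b"firmware-v2"
manifest = {"alg": "hmac-sha256", "blob": blob,
            "tag": hmac.new(key, blob, hashlib.sha256).digest()}
print(apply_update(key, manifest))  # True
```

The design choice this illustrates is the one Schmatz describes: the satellite never hard-codes a single verification method, so the method itself can be retired when it weakens.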

Schmatz also flagged the broader systemic risk of orbital congestion. Uncoordinated satellite deployment risks triggering cascading debris events that would be nearly impossible to clean up, a phenomenon known as the Kessler Effect. “The worst thing is that somebody comes to the idea, ‘Hmm, I’ll place 200 of my satellites 3,000 miles in the air,’ and some other operator says the same,” he said. “And then, all of a sudden, satellites crash.” Individual pieces of debris are trackable, but a collision between two satellites could generate thousands of fragments too small to monitor individually—each traveling fast enough to cause major damage.

“There should be a common understanding that this must be avoided,” Schmatz said. He advocates for UN-style international coordination. He’s not sure if it will happen, though. “The world is not always how we want it to be.”

How Starcloud trained AI in orbit

“Greetings, Earthlings! Or, as I prefer to think of you—a fascinating collection of blue and green,” the AI wrote. “Let’s see what wonders this view of your world holds. I’m Gemma, and I’m here to observe, analyze and perhaps, occasionally offer a slightly unsettlingly insightful commentary. Let’s begin!”

These words came from Google DeepMind’s Gemma, running on an NVIDIA H100 aboard Starcloud-1, a satellite the size of a small fridge built and operated by Redmond, Washington-based Starcloud. The orbital data center startup also used the same chip to train nanoGPT, a lightweight model by OpenAI founding member Andrej Karpathy, on the works of Shakespeare. Starcloud called it the first language model ever trained in space.

The Starcloud-1 satellite separating from the SpaceX rocket that brought it to orbit.

When asked about the feasibility of data centers in space, Starcloud Founder Philip Johnston doesn’t hedge. “The physics are clear,” he told IBM Think, pointing to Starcloud-1’s success as proof.

Others seem to agree. Starcloud is part of NVIDIA’s Inception program and has the backing of Andreessen Horowitz and other major tech names. In March, the FCC accepted Starcloud’s proposal for a constellation of up to 88,000 satellites—not quite the size of SpaceX’s million-satellite concept, but bigger than Amazon’s 52,000. Later that month the company raised USD 170 million in a Series A, valuing it at USD 1.1 billion.

The company has set its sights on a 5-gigawatt orbital data center, a structure with solar and cooling panels stretching roughly 4 kilometers in both width and height. The biggest obstacle is power. A 5-gigawatt facility would need a substantial solar array—but Starcloud’s white paper claims that in a dawn-dusk orbit, the energy yield per square meter would be more than six times that of a ground installation, meaning the arrays can be surprisingly compact.
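The white paper's 6x yield claim can be sanity-checked with rough solar arithmetic. The irradiance and panel-efficiency values below are generic textbook assumptions, not Starcloud's numbers, and the result covers only the solar array, not the radiators.

```python
# Rough sizing of the solar array for a 5 GW orbital facility. Generic
# assumptions: full solar constant in a dawn-dusk orbit with
# near-continuous illumination, and ~20% cell efficiency.
SOLAR_CONSTANT = 1361.0   # W/m^2 above the atmosphere
PANEL_EFFICIENCY = 0.20   # assumed cell efficiency

def orbital_array_area_km2(power_gw: float) -> float:
    """Array area needed to supply `power_gw` of continuous power."""
    watts_per_m2 = SOLAR_CONSTANT * PANEL_EFFICIENCY  # ~272 W/m^2
    return power_gw * 1e9 / watts_per_m2 / 1e6

area = orbital_array_area_km2(5.0)
print(f"~{area:.0f} km^2 of array")  # a square roughly 4 km on a side
```

Under these assumptions, 5 gigawatts works out to a solar field on the order of 18 square kilometers, which is broadly consistent with the roughly 4-kilometer-square structure the company describes.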

Radiation and heat were the two obstacles most critics said couldn’t be solved. “There were still many people who said you couldn’t run an H100 in space because of the thermal dissipation and radiation hardening problems,” Johnston said. “We have proved that both are solvable issues.”

To determine radiation resistance, Starcloud ran H100s through extensive testing in particle accelerators to map exactly where the chips break down under bombardment. The team then used that knowledge to modify the company’s shielding design. The thermal problem demanded a different kind of innovation, with Starcloud developing new manufacturing techniques to produce radiator panels light enough to launch and effective enough to prevent the chips from throttling under load.

Data from Starcloud-1 has since validated those pre-launch engineering choices, Johnston said, though it has also revealed new constraints. For instance, the company now knows exactly how thick its radiation shielding needs to be to hit the optimal tradeoff between mass and performance, which will directly shape the design of its second spacecraft. “The exact specifications are, understandably, a secret,” he said.

How to launch a data center into space

Imagine, for a minute, putting together a box of flat-pack furniture. Engineers design every piece in that box twice: once for how people will use it, and once for how the shipper will pack and move it. That’s manageable for designing a bookshelf. For an orbital data center, though, that constraint becomes enormous. In space engineering, launch requirements affect almost everything.

This is the reality confronting Sameh Tawfick, a professor at the University of Illinois Urbana-Champaign whose research focuses on structural design for extreme environments like space. His focus isn’t the processors inside orbital data centers so much as the bones that house them.

“The structure of a data center on Earth is taken for granted,” Tawfick told IBM Think. Engineers have been refining building technology over roughly 500 years, he explained, but space structures have only had about 50. That gap shows up everywhere, from the available materials and skilled labor to the basic design vocabulary engineers reach for when they start sketching a new facility.

But the same environment that makes space construction so difficult also hands engineers something they never get on Earth. “The orbital microgravity environment and the separation from Earth’s atmospheric climate enable much lighter building structures,” Tawfick said. Structures can span enormous areas, stretching solar panels and cooling radiators across distances that would be impractical on Earth, while using a lot less material to do it.

Deployable structures like the ISS’s solar arrays, each stretching 34 meters, show that large-scale orbital construction is possible. But every component was still built on Earth first. Tawfick’s approach skips that step entirely.

Each piece of a space structure has to fit inside a rocket fairing and survive the violence of launch. That means everything folds, collapses or compresses. “The launch foldability requirement is a huge constraint on designs,” Tawfick said. His answer is to eliminate the constraint—which is the logic behind Mission Illinois.

Tawfick’s team is testing a manufacturing approach called “frontal polymerization,” currently slated for a 2026 demonstration on the International Space Station under a DARPA-sponsored program. In frontal polymerization, a chemical reaction propagates spatially through a material, converting a liquid monomer into a solid composite structure with almost no external energy input. The monomer itself is the fuel. The target for the ISS demo, Tawfick said, is carbon fiber composite tubes. They’re simple components, but they serve as a proof of concept for the broader idea of building structures directly in orbit rather than launching them pre-assembled.

“While metals need high energy to be melted or formed on orbit,” Tawfick said, “the polymer composites developed in my team can be manufactured on orbit using chemical reactions with self-embodied energy.” The process also requires no human setup, enabling manufacturing beyond the reach of a repair crew.

Mission Illinois is preparing to send a composite tube manufacturing machine to the International Space Station to produce new materials for space construction. Image courtesy of Sameh Tawfick

Getting the materials right is its own challenge, including for the radiators that shed waste heat, which need to scale far beyond anything Earth-based construction would require.

“Many of the existing materials suffer when tested for long-term space use,” Tawfick said. Things like UV radiation and atomic oxygen attack chemical bonds in ways that simply don’t happen at sea level.

Researchers are already employing AI tools to design better molecules for surviving in space. Some cutting-edge research ideas, Tawfick noted, go even further, designing materials that actually improve over time, as UV radiation and vacuum trigger structural changes that strengthen them.

Tawfick’s team has also developed a class of multifunctional materials that are strong and lightweight and conduct electricity and heat. A single class of material that does all of these jobs at once can dramatically simplify design and reduce mass. The team has already sent samples to the ISS—the first test of whether the materials will hold up where they’ll eventually need to work.

Keeping projects within budget, however, is another matter. And according to Tawfick, making data centers in space financially feasible will require economists working directly with everyone from materials scientists to orbital engineers: “No single individual nor existing team that I know of can answer any of these questions.”

“Don’t say it won’t work,” he added. “Say, ‘Who can we bring in to the team to solve this problem?’”

Countdown to launch

When—assuming there’s a when—will we see a constellation of sparkling data centers in the night sky? Elon Musk says 30 to 36 months. Jeff Bezos says 10 to 20 years. It’s still anyone’s guess.

What is clear is that the momentum is building. In March, NVIDIA introduced the Space-1 Vera Rubin Module, built from the ground up for orbital data centers, and rated up to 25 times the AI compute of the H100 used on Starcloud-1. Starcloud is among NVIDIA’s first partners; so is Axiom Space, which has already tested an orbital data center prototype powered by Red Hat Device Edge software aboard the ISS. (Red Hat is an IBM subsidiary.)

“Space computing, the final frontier, has arrived,” NVIDIA’s Jensen Huang said in his keynote at NVIDIA GTC. “As we deploy satellite constellations and explore deeper into space, intelligence must live wherever data is generated.”

The timeline, however, depends heavily on factors outside any single company’s control. Cost-competitive orbital data centers will likely require a new generation of heavy-lift rockets—such as SpaceX’s Starship—launching at high frequency, something Johnston and others don’t expect before 2028 or 2029 at the earliest.

For now, orbital data centers occupy a nebulous middle ground—they’re not science fiction anymore, but they’re not anywhere close to being reliable infrastructure, either. If they’re going to work at scale, they will have to clear the same bar as any other new tech: they will have to be better, or cheaper, or both. Until they are, Earth will remain the center of the computing universe.

Antonia Davison

Staff Writer
