The Race to Build Data Centers in Space 

Every data center ever built has had 3 things in common: it sits on land someone had to acquire, it draws power from a grid already struggling to keep up, and it spends more energy fighting heat than running actual computation. For decades, those 3 constraints were treated as engineering problems to optimize around.

In 2026, a growing number of companies have decided to eliminate all 3 at once by leaving Earth entirely.

Space offers unlimited solar power with no atmosphere to block it, a vacuum that allows heat to radiate away without pumps or water, and zero competition for land. 

What felt theoretical 3 years ago is now a funded, launched, and, in several cases, operational industry.

Today, 10 companies are actively building, testing, or deploying data center technology in orbit, and the window between the first demonstration and commercial scale is closing faster than anyone in terrestrial infrastructure expected.

The 10 Companies Building It Now

The following 10 companies span the full spectrum of the orbital data center race, from a startup that has already trained an LLM in orbit to a networking giant preparing its hardware for space deployment. They cover 4 distinct approaches: orbital compute, space solar power, ISS-deployed nodes, and enabling infrastructure.

1. Starcloud 

Starcloud is the furthest along. Backed by NVIDIA, it launched Starcloud-1 in 2025, carrying an NVIDIA H100 GPU, and trained the first LLM in orbit in December 2025. Starcloud-2 is scheduled for October 2026 with 100 times the power generation of the first satellite. The company’s long-term target is a 5-gigawatt orbital hypercluster, a scale that would make it one of the largest computing deployments in history, terrestrial or otherwise.

2. Google 

Google is pursuing space-based AI infrastructure under a project called Suncatcher, pairing its proprietary Tensor Processing Units with satellite expertise from Planet Labs. Google plans to launch 2 prototype satellites in early 2027 to test AI-powered satellite constellations. Google’s involvement signals that this is not a fringe experiment. It is a strategic hedge against terrestrial infrastructure constraints by the world’s largest search and AI company.

3. Aetherflux 

Aetherflux was founded by Baiju Bhatt, co-founder of Robinhood, and takes a distinct approach. The company is developing a constellation of satellites designed for orbital compute powered by space-based solar energy beamed to Earth via laser. A demonstration satellite is planned for 2026. The laser energy transmission model, converting solar energy collected in space into a directed beam delivered to ground receivers, has been studied theoretically for decades and is now entering its first commercial demonstration phase.

4. Axiom Space 

Axiom Space partnered with Spacebilt to deploy optically interconnected data center infrastructure on the International Space Station. The AxDCU-1 orbital data center node was deployed in early 2026, making Axiom Space the first company to have an operational orbital data center node on crewed space infrastructure. Future deployment targets include commercial space stations that will replace the ISS.

5. Lonestar Data Holdings

Lonestar Data Holdings operates a dual strategy: orbiting data centers in LEO and data storage facilities on the Moon. The Moon strategy is not a marketing concept. Lonestar has positioned lunar data storage as geopolitically neutral, jurisdictionally ambiguous, and physically isolated from any terrestrial threat. The company targets its first commercial LEO service for Q4 2026.

6. SpaceX and xAI

SpaceX and xAI have filed plans with the FCC for up to 1 million data center satellites, a figure that dwarfs every other proposal in this space combined. Elon Musk has stated publicly that within 2 to 3 years, space will be the lowest-cost method to generate AI compute. Whether that timeline is accurate or aspirational, the FCC filing establishes SpaceX’s intent to dominate orbital compute at a scale no other operator is currently planning.

7. OrbitsEdge

OrbitsEdge partners with Hewlett Packard Enterprise to develop space-hardened micro data centers for LEO, specifically designed to process satellite data before it is transmitted back to Earth. Rather than building large orbital facilities, OrbitsEdge focuses on distributed edge computing in orbit, processing data where it is generated rather than downlinking raw data for terrestrial processing.

8. Sophia Space

Sophia Space is developing modular server racks with integrated solar panels under a product line called TILES, designed for passive cooling in space. The company targets an orbital test by late 2027 or early 2028, making it the furthest from deployment of the 10, but its modular architecture could enable rapid scaling once the core technology is validated.

9. Blue Origin

Blue Origin supports the broader movement through its Terawave constellation initiative, developing foundational technology for large-scale orbital data infrastructure. Terawave is positioned as enabling infrastructure for other operators rather than a direct data center service.

10. Cisco 

Cisco is approaching the space data center movement from the networking layer rather than the compute layer. Chuck Robbins, Cisco CEO, confirmed the company is actively preparing its portfolio for space deployment, specifically examining how its networking hardware must be redesigned to operate without conventional cooling systems and in the vacuum and thermal conditions of orbit.

Cisco’s Chief Product Officer Jeetu Patel went further, stating that space data centers “are actually starting to get built” and that the company’s teams began formally preparing for the transition approximately 2 to 3 months before his public statements in early 2026. For Cisco, the opportunity is the same one that drove its terrestrial data center growth: wherever compute infrastructure is built, networking infrastructure must connect it.

Why Are Companies Choosing Space to Build Data Centers?

Companies are moving towards space to build data centers because Earth’s data centers are running out of room.

To understand why space is being treated as a serious solution, it helps to understand how severely AI has disrupted terrestrial data center planning.

A single large AI training run now consumes more electricity than 1,000 US households use in a year. Data centers consumed approximately 4% of US electricity in 2024. That figure is projected to reach 8% by 2030 and potentially 12% by 2035 as AI model training and inference scale across every industry. Utility grids in major US markets are struggling to provide the power commitments that hyperscale operators require.
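The growth rate implied by those projections is worth making explicit. A minimal sketch, using the percentages above (the CAGR arithmetic is the only thing added here):

```python
# Implied compound annual growth rate of data centers' share of
# electricity, using the article's projected percentages.

def implied_cagr(start_pct: float, end_pct: float, years: int) -> float:
    """Annualized growth rate that turns start_pct into end_pct."""
    return (end_pct / start_pct) ** (1 / years) - 1

print(f"2024-2030: {implied_cagr(4, 8, 6):.1%} per year")   # ~12.2%
print(f"2030-2035: {implied_cagr(8, 12, 5):.1%} per year")  # ~8.4%
```

Even the slower second phase means the sector's share of the grid compounds faster than most utilities add generation capacity.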

Land is the second constraint. Hyperscale data centers require enormous footprints, not just for the servers themselves, but for cooling infrastructure, power substations, and access roads. Land acquisition in viable locations near power grids and network connectivity is becoming contested. Communities in Lansing, New York, Saline Township, Michigan, and Tucson, Arizona have organized resistance to planned data center developments, citing water consumption, noise, and visual impact. More than $98 billion in planned data center projects faced opposition or delays in Q2 2025 alone.

The cooling problem may be the most acute of the 3. Cisco President and Chief Product Officer Jeetu Patel stated publicly that 90% of rack weight is cooling infrastructure, not computing hardware.

Terrestrial data centers cool their servers using air conditioning, water cooling, or liquid immersion systems, all of which require continuous energy input, large physical footprints, and significant water consumption. A large hyperscale data center can consume millions of gallons of water per year for cooling alone.

Space solves all 3 of these problems simultaneously, which is why it is being taken seriously.

How Space Actually Solves the Infrastructure Problem

Space addresses all 3 constraints of terrestrial data center construction:

1. The cooling advantage

In space, there is no atmosphere to conduct or convect heat away from hardware. This sounds like a problem, and in some configurations it is, but it enables something terrestrial data centers cannot replicate: passive radiative cooling. Heat can be radiated away as infrared radiation directly into the cold vacuum of space, requiring no pumps, no refrigerants, no water, and no active cooling systems. The engineering challenge is designing radiator panels large enough to shed the required thermal load. Compared to building and operating an active cooling plant for a terrestrial facility, this is a significantly lower ongoing operational cost.
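The radiator-sizing challenge mentioned above can be estimated with the Stefan-Boltzmann law. A back-of-the-envelope sketch, where the panel temperature, emissivity, and thermal load are illustrative assumptions, not figures from any named spacecraft design:

```python
# Radiator area needed to shed a thermal load by passive radiation
# alone, via the Stefan-Boltzmann law. Ignores absorbed sunlight and
# Earth's infrared backload, so real panels must be larger.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(thermal_load_w: float,
                  panel_temp_k: float = 300.0,
                  emissivity: float = 0.9) -> float:
    """Panel area (m^2) required to radiate thermal_load_w into space."""
    return thermal_load_w / (emissivity * SIGMA * panel_temp_k ** 4)

# A 1 MW compute payload with 300 K panels:
area = radiator_area(1_000_000)
print(f"{area:,.0f} m^2 of radiator per megawatt")  # roughly 2,400 m^2
```

Hundreds of square meters of radiator per hundred kilowatts is why radiator mass and deployment, not refrigeration, dominate orbital thermal design.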

2. The solar advantage

Earth’s atmosphere absorbs and scatters approximately 30% of incoming solar radiation before it reaches the surface. In space, solar panels receive the full unattenuated solar constant, approximately 1,361 watts per square meter, compared to roughly 1,000 watts per square meter at Earth’s surface under ideal conditions. More importantly, satellites in certain orbital configurations receive continuous solar illumination with no day-night cycle. Continuous, high-intensity, uninterrupted power, with no land acquisition, no grid connection, and no utility contract, is the core economic proposition of space-based computing.
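The irradiance figures above translate into a large per-panel energy advantage once the day-night cycle is factored in. A rough comparison, where the 6 equivalent full-sun hours for a ground site is an illustrative assumption:

```python
# Daily solar energy per square meter: continuously illuminated orbit
# versus an ideal ground site. Irradiance figures are from the text;
# the ground sun-hours figure is an assumed capacity factor.

SOLAR_CONSTANT = 1361.0   # W/m^2 above the atmosphere
GROUND_PEAK = 1000.0      # W/m^2 at the surface, ideal conditions
GROUND_SUN_HOURS = 6.0    # assumed equivalent full-sun hours per day

space_wh = SOLAR_CONSTANT * 24              # no day-night cycle
ground_wh = GROUND_PEAK * GROUND_SUN_HOURS

print(f"space:  {space_wh:,.0f} Wh/m^2/day")   # 32,664
print(f"ground: {ground_wh:,.0f} Wh/m^2/day")  # 6,000
print(f"ratio:  {space_wh / ground_wh:.1f}x")  # ~5.4x
```

A roughly 5x energy yield per square meter of panel, every day, with no storage needed to ride out the night, is the number the economics hinge on.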

3. The land and regulation advantage

A data center in orbit requires no land, no planning permission, no community consultation, and no connection to a terrestrial power grid. It also sits in a regulatory environment that has not yet been fully defined, which creates both opportunity and uncertainty for operators.

The Timeline: From Concept to Commercial in 3 Years

The acceleration of this industry is difficult to overstate.

| Date | Milestone |
| --- | --- |
| December 2025 | Starcloud trains first LLM in orbit |
| Early 2026 | Axiom Space deploys AxDCU-1 on ISS |
| 2026 | Aetherflux launches demonstration satellite |
| October 2026 | Starcloud-2 launches with 100x the power of Starcloud-1 |
| Q4 2026 | Lonestar targets first commercial LEO service |
| Early 2027 | Google launches 2 Suncatcher prototype satellites |
| Late 2027–2028 | Sophia Space targets orbital TILES test |

Cisco’s Jeetu Patel described the innovation pace directly: “The compression of innovation is so much that now what used to happen in 10 years is now happening within six months.”

The Challenges That Have Not Been Solved

The following 4 challenges are the core unsolved engineering and economic problems standing between today’s orbital demonstrations and commercial-scale space data centers. Any one of them could stall the entire industry if not resolved.

1. Radiation hardening is among the most expensive and technically complex challenges in space electronics. Cosmic rays and solar particle events corrupt memory, damage transistors, and cause logic errors in standard silicon. The NVIDIA H100 GPU that Starcloud launched was not designed for space operation; running commercial GPUs in orbit without radiation protection is an ongoing engineering experiment, not a resolved problem.

2. Launch cost has fallen dramatically with SpaceX’s reusable rockets, but remains measured in thousands of dollars per kilogram. Building a 5-gigawatt orbital hypercluster requires launching thousands of tonnes of hardware, an economic equation that has not yet closed.

3. In-orbit maintenance is functionally impossible for most satellite designs. A failed server in a terrestrial data center is replaced in minutes. A failed compute node in orbit is replaced by launching a new satellite, a process measured in months and millions of dollars.

4. Latency and connectivity remain practical constraints. Latency from LEO to Earth is approximately 1 to 4 milliseconds for a single hop, lower than typical cross-continental terrestrial routes, but the bandwidth of satellite downlinks limits how much data can move between orbital compute and terrestrial applications at any given moment.
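Two of the numbers in the challenges above can be made concrete with simple arithmetic. In this sketch the orbital altitude, cost per kilogram, and payload mass are illustrative assumptions, not figures from any company's filings:

```python
# Quantifying two of the challenges: single-hop LEO latency and
# launch cost at scale. All inputs below are assumed for illustration.

C = 299_792.458  # speed of light in vacuum, km/s

def one_way_latency_ms(altitude_km: float) -> float:
    """Straight-down propagation delay from orbit, ignoring routing
    and processing overhead."""
    return altitude_km / C * 1000

# A satellite at 550 km, a common LEO altitude:
print(f"{one_way_latency_ms(550):.1f} ms one way")  # ~1.8 ms

# Launch economics: thousands of dollars per kg times thousands of
# tonnes of hardware.
cost_per_kg = 2_000       # assumed, USD per kg to LEO
payload_tonnes = 5_000    # assumed hardware mass for a large cluster
total_usd = cost_per_kg * payload_tonnes * 1_000  # tonnes -> kg
print(f"${total_usd / 1e9:.0f}B in launch cost alone")  # $10B
```

The latency figure is why LEO compute is plausible for interactive workloads; the launch figure is why the economics only close if per-kilogram costs keep falling.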

The Scale of the Problem This Industry Is Trying to Solve

Cisco’s Jeetu Patel projected that at 10 to 100 AI agents per human, the world’s 8 billion people would require between 80 and 800 billion AI agents operating 24 hours a day, 7 days a week. “Imagine the level of infrastructure requirements that are going to be needed,” Patel said. “It’s going to be non-trivial.”
