Scientists create the world’s first Milky Way simulation following 100 billion stars over 10,000 years

New AI-powered model delivers the first star-by-star simulation of the Milky Way, giving scientists a faster way to explore how galaxies grow.

Written By: Joseph Shavit
Edited By: Joshua Shavit

AI helps create the first star-level Milky Way simulation, running over 100 times faster than past models. (CREDIT: Shutterstock)

A long-held dream in astronomy has finally come into reach, and it carries a surprising emotional weight. For years, scientists have imagined watching the Milky Way evolve one star at a time, almost like turning the pages of a family album that shows where everything in our cosmic neighborhood came from.

That dream has always run into the same wall. Galaxies operate on wildly different scales, and computers struggle to capture slow and gentle motions alongside violent events that unfold in the blink of an eye. Now a team led by Keiya Hirashima at the RIKEN Center for Interdisciplinary Theoretical and Mathematical Sciences has changed the landscape with a model that follows more than 100 billion stars with remarkable accuracy.

The core challenge has always been the extreme range of physical conditions within a galaxy. The Milky Way spans nearly 200,000 parsecs and holds swirling gas at temperatures from about 10 kelvin in cold clouds to nearly 10 million kelvin around supernova explosions.

Material circulation in a galaxy: Diffuse warm gas loses energy through radiation and conduction and forms a disk-like structure (the galactic disk). (CREDIT: ACM Digital Library)

Massive stars burn out and explode within a few million years, while a full rotation of the disk takes hundreds of millions of years. Trying to compute all of these processes on the same timeline forces computers to use painfully small timesteps. Supernova blasts evolve so quickly that conventional codes must track them year by year, which freezes the rest of the galaxy in place and slows the entire simulation to a crawl.

Why Supernovae Slow Everything Down

If you zoom in on a forming star or the gas left behind after a stellar explosion, the problem becomes clear. Hot gas moves at high speeds, and the physics that governs fluid motion requires the simulation to take smaller and smaller steps as conditions grow more extreme.

When scientists try to model gas down to about one solar mass of material per particle, each step near a supernova can shrink to around 100 years. Stretch that over a million years of simulated time and the number of required updates becomes enormous. Even powerful supercomputers choke on the communication demands and data transfers needed to track billions of particles across thousands of processing units.
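A back-of-the-envelope calculation, using the article's round numbers purely for illustration, shows the scale of the problem:

```python
# Rough scale of the problem above, using the article's round numbers
# (illustrative only): ~100-year steps over a million simulated years,
# applied to a galaxy's worth of particles.
step_years = 100                          # forced step size near a hot, fast blast wave
span_years = 1_000_000                    # one million years of simulated time
n_particles = 100_000_000_000             # order of the Milky Way's star count

steps = span_years // step_years          # 10,000 global updates
particle_updates = steps * n_particles    # ~1e15 particle updates for just 1 Myr
print(f"{steps:,} steps, ~{particle_updates:.0e} particle updates")
```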

For many years, this forced scientists to make a painful trade-off. They could simulate a whole galaxy but treat each “particle” as hundreds of stars bundled together, or they could track individual stars but only in small galaxies with far fewer total stars. Either way, the structure and evolution of a Milky Way-sized galaxy at star-by-star detail remained out of reach.

The total mass of the system and the resolution of the dark matter (left) and gas (right) particles in current state-of-the-art simulations. (CREDIT: ACM Digital Library)

How Deep Learning Helped Break the Barrier

Hirashima’s team finally pushed through by attacking the problem at its most fragile point. Rather than calculating every tiny step of an exploding star, they trained a deep learning model to jump ahead and predict how the surrounding gas will behave for about 100,000 years after a supernova. This lets the main part of the simulation move forward smoothly while a small group of processors quickly handles each explosion.

The team built a surrogate model using a 3D U-Net architecture trained on high-resolution simulations of supernova blast waves. When the main code detects that a star has reached the end of its life, it sends a small cube of nearby gas to a “pool node.” There, the neural network predicts the density, temperature, and motion of the gas long after the explosion. The updated data is then placed back into the larger simulation every 50 timesteps.
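To make that concrete, here is a minimal sketch, not the authors' code, of the workflow just described: a toy 3D U-Net-style network stands in for the trained surrogate, and the field names, cube size, and layer sizes are all assumptions. It takes a small cube of gas fields around a dying star and pastes its prediction of the later gas state back into the larger grid.

```python
# Hedged sketch of the surrogate workflow described above (not the paper's code).
# Field layout, cube size, and network depth are illustrative assumptions.
import torch
import torch.nn as nn

class TinyUNet3D(nn.Module):
    """Toy 3D encoder-decoder standing in for the paper's 3D U-Net surrogate."""
    def __init__(self, channels=5):          # e.g. density, temperature, vx, vy, vz
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv3d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv3d(16, channels, 3, padding=1),
        )

    def forward(self, cube):
        return self.up(self.down(cube))

def handle_supernova(gas_grid, center, surrogate, half=8):
    """Extract a cube around the exploding star, predict its evolved state,
    and write the prediction back into the larger grid."""
    x, y, z = center
    sl = (slice(None),
          slice(x - half, x + half),
          slice(y - half, y + half),
          slice(z - half, z + half))
    cube = gas_grid[sl].unsqueeze(0)          # add a batch dimension
    with torch.no_grad():
        evolved = surrogate(cube)[0]          # predicted state long after the blast
    gas_grid[sl] = evolved
    return gas_grid

# Usage: a 5-field gas grid and one supernova event (illustrative values only).
grid = torch.rand(5, 64, 64, 64)
model = TinyUNet3D()
grid = handle_supernova(grid, center=(32, 32, 32), surrogate=model)
```

In the real pipeline, the prediction covers roughly 100,000 years of post-explosion evolution and runs on dedicated pool nodes while the main simulation keeps marching forward.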

This approach avoids the usual slowdown that supernovae cause. Instead of being forced into tiny increments, the entire galaxy evolves in steady 2,000-year steps. Because many pool nodes work in parallel, dozens of supernovae can be handled at once without dragging down the overall pace.
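The scheduling side can be sketched just as briefly. The hedged example below is illustrative only, with stand-in event handling rather than anything from the paper: a main loop advances in fixed 2,000-year steps while a process pool absorbs supernova events and the finished predictions are folded back every 50 steps.

```python
# Hedged sketch of the pool-node scheduling idea (not the authors' code).
# Event detection and the "galaxy" itself are stand-ins.
from concurrent.futures import ProcessPoolExecutor
import random

GLOBAL_STEP_YEARS = 2_000
MERGE_EVERY = 50

def surrogate_evolve(event_id):
    """Stand-in for the U-Net surrogate applied to one supernova region."""
    return event_id  # a real version would return predicted gas fields

def main(n_steps=200):
    pending = []
    with ProcessPoolExecutor() as pool:
        for step in range(n_steps):
            # ... advance gravity and hydrodynamics for the whole galaxy here ...
            if random.random() < 0.1:                    # a star ends its life this step
                pending.append(pool.submit(surrogate_evolve, step))
            if step % MERGE_EVERY == 0 and pending:      # fold predictions back periodically
                merged = [f.result() for f in pending]
                pending.clear()
                print(f"step {step}: merged {len(merged)} supernova regions")

if __name__ == "__main__":
    main()
```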

Schematic illustration of the simulation method. The main nodes integrate the entire region of a galaxy using a shared timestep (Δt_global) across a large number of computational nodes. (CREDIT: ACM Digital Library)

Running a Galaxy on the World’s Fastest Machines

To reach full Milky Way scale, the researchers ran their method on the Fugaku supercomputer in Japan. At peak, the simulation used nearly 149,000 nodes and more than seven million CPU cores. Their largest run tracked 300 billion particles, including stars, gas, and dark matter, each at close to one solar mass. It processed each timestep in about 20 seconds and reached speeds above eight petaflops.

Put another way, the model uses around 500 times more particles than previous Milky Way simulations while running more than 100 times faster. It captures the behavior of individual stars and the fine details of hot gas without slowing to a halt. Tests on other large machines showed similar strength, which means the method can work across different computing platforms.

A Shift in How Scientists Study Complex Worlds

The speed difference is dramatic. Earlier models would need more than 300 hours of computation to simulate just 1 million years of galactic evolution. Hirashima’s method can do it in less than three hours. A full 1-billion-year model, once considered nearly impossible, could now finish in under four months instead of several decades.
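The arithmetic behind those figures is simple enough to check, using the numbers from the paragraph above:

```python
# Quick arithmetic behind the speed comparison (numbers from the article).
hours_per_myr_old = 300          # conventional approach, >300 compute-hours per million years
hours_per_myr_new = 3            # new AI-assisted method, <3 hours per million years
myr_in_a_gyr = 1_000             # a billion years is a thousand million years

old_total_years = hours_per_myr_old * myr_in_a_gyr / 24 / 365   # ~34 years of wall-clock time
new_total_days = hours_per_myr_new * myr_in_a_gyr / 24          # ~125 days, about four months
print(round(old_total_years), round(new_total_days))
```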

Snapshots of the gas distribution in galactic disks integrated with the new scheme and its deep learning surrogate model. (CREDIT: ACM Digital Library)

The work, presented at the SC ’25 supercomputing conference, shows the power of mixing physical laws with AI-driven shortcuts. It also highlights a broader change in how researchers study complicated systems. Weather models, climate forecasts, black hole simulations, and models of turbulent flows all face the same hurdles created by tiny, fast-moving regions hidden inside much larger environments. Surrogate models like this one could make these fields move faster as well.

As Hirashima put it, “Integrating AI with high-performance computing marks a fundamental shift in how we tackle multi-scale, multi-physics problems across the computational sciences.” The achievement points toward a future in which simulations become not just clearer but also far more accessible.

Practical Implications of the Research

This approach opens the door to faster and more realistic galaxy models that match the complexity revealed by modern telescopes. Researchers can now test theories of how stars form, how elements spread through galaxies, and how cosmic structures change over billions of years.

The technique could also help other fields that rely on heavy simulations. Climate scientists may soon be able to model storms, ocean currents, and long-term warming trends with greater accuracy by using similar AI shortcuts.

By blending physics with machine learning, scientists gain tools that save time, cut energy use, and reveal patterns that would otherwise remain hidden for decades.

Research findings are available online in the ACM Digital Library.






Joseph Shavit
Science News Writer, Editor-At-Large and Publisher

Joseph Shavit, based in Los Angeles, is a seasoned science journalist, editor and co-founder of The Brighter Side of News, where he transforms complex discoveries into clear, engaging stories for general readers. With experience at major media groups like Times Mirror and Tribune, he writes with both authority and curiosity. His work spans astronomy, physics, quantum mechanics, climate change, artificial intelligence, health, and medicine. Known for linking breakthroughs to real-world markets, he highlights how research transitions into products and industries that shape daily life.