New AI system helps scientists understand complex systems that change over time

Duke researchers built an AI that finds compact, interpretable equations for complex systems, from circuits to climate models.

Written By: Shy Cohen
Edited By: Joseph Shavit

This is a graph of recorded temperatures (left) and a model of temperatures (right) around the globe for a fixed latitude over time. (CREDIT: Duke University)

Duke University engineers are using artificial intelligence to do something scientists have chased for centuries: turn messy, real-world motion into simple rules you can write down. The work comes from Boyuan Chen, director of the General Robotics Lab, and his team, including lead author Sam Moore, a PhD candidate. They reported the results in the journal npj Complexity.

Their new AI framework studies time-series data, meaning measurements taken over time, and then produces compact equations that describe how a system changes. It targets the kinds of problems that show up everywhere, from weather patterns and electrical circuits to mechanical devices and biological signals. The goal is not just prediction. It is understanding.

“Scientific discovery has always depended on finding simplified representations of complicated processes,” said Chen, the Dickinson Family Assistant Professor of Mechanical Engineering and Materials Science at Duke. “We increasingly have the raw data needed to understand complex systems, but not the tools to turn that information into the kinds of simplified rules scientists rely on. Bridging that gap is essential.”

Automated global analysis of experimental dynamics. (CREDIT: npj Complexity)

Why complex systems still resist simple explanations

Since Isaac Newton’s 1687 Principia, dynamics has offered a way to explain change. Over time, that work grew into dynamical systems theory, which tracks “state variables” as they evolve. Those variables can describe far more than moving objects. They can capture shifting conditions in engineering, climate science, neuroscience, physiology and ecology.

Yet many real systems stay hard to pin down. You can measure what a system does, but still struggle to identify the rules driving it. Nonlinear behavior makes the problem worse because small changes can lead to very different outcomes. High dimensionality adds another barrier. When a system involves many interacting states, interpretation becomes difficult and analysis tools can break down.

Even familiar motion shows the tradeoff. A cannonball’s path depends on exit speed, angle, drag, wind and temperature. But you can still get a close approximation from a simple equation that uses only the first two. Science often advances by finding that kind of “good enough” reduction without losing the core truth.
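
To make that reduction concrete, here is the textbook drag-free range formula in code. It is a standard physics approximation shown only for illustration, not something from the Duke paper, and it keeps just the two variables mentioned above: exit speed and launch angle.

```python
import math

def approx_range(speed_m_s: float, angle_deg: float, g: float = 9.81) -> float:
    """Drag-free projectile range: R = v^2 * sin(2*theta) / g.

    Ignores drag, wind and temperature -- the 'good enough' reduction described
    above, keeping only exit speed and launch angle.
    """
    theta = math.radians(angle_deg)
    return speed_m_s ** 2 * math.sin(2 * theta) / g

print(f"{approx_range(250.0, 30.0):.0f} m")  # roughly 5,500 m for 250 m/s at 30 degrees
```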

A 1930s mathematical idea, upgraded with modern AI

The Duke approach builds on an idea mathematician Bernard Koopman proposed in 1931. Koopman showed that a nonlinear system can sometimes be represented through a linear model, if you describe it in the right coordinates. Linear models are attractive because they let you do global analysis and use tools like spectral decomposition, which can reveal a system’s modes and stability.
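
A small worked example helps show what "the right coordinates" means. The system below is a standard textbook illustration of Koopman lifting, not one of the Duke testbeds: adding a single extra coordinate, the square of the first variable, turns a nonlinear two-variable system into an exactly linear three-variable one.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Nonlinear system:  x1' = mu * x1,   x2' = lam * (x2 - x1**2)
# In the lifted coordinates y = (x1, x2, x1**2) the dynamics are exactly linear.
mu, lam = -0.1, -1.0

def nonlinear(t, x):
    return [mu * x[0], lam * (x[1] - x[0] ** 2)]

K = np.array([[mu,  0.0,  0.0],     # y1' = mu * y1
              [0.0, lam, -lam],     # y2' = lam * y2 - lam * y3
              [0.0, 0.0, 2 * mu]])  # y3' = 2 * mu * y3

x0 = [1.0, 0.5]
y0 = [x0[0], x0[1], x0[0] ** 2]
t_eval = np.linspace(0.0, 10.0, 200)

sol_nl = solve_ivp(nonlinear, (0, 10), x0, t_eval=t_eval)
sol_lin = solve_ivp(lambda t, y: K @ y, (0, 10), y0, t_eval=t_eval)

# The first two lifted coordinates reproduce the nonlinear trajectory.
print(np.max(np.abs(sol_nl.y[:2] - sol_lin.y[:2])))  # small, at integrator tolerance
```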

Diagrams detailing the studied dynamical systems and the prediction error as a function of latent dimension. (CREDIT: npj Complexity)

"The catch is scale. Koopman-style modeling often pushes you into a very large, even infinite, space of variables. That reality has fed a long-running problem in the field. Methods such as Dynamic Mode Decomposition and Extended DMD can be useful, but they often balloon into huge representations for nonlinear systems. Deep learning has also been used to find linear embeddings, yet many approaches still land on latent spaces far larger than the system you started with'," Chen shared with The Brighter Side of News.

The paper points to famous benchmarks. Past work has represented the two-dimensional Duffing system with embeddings that reached 100 dimensions, and in some cases far more. Similar inflation has appeared for the Van der Pol oscillator. Those large embeddings can work, but they can also add redundancy and increase the risk of false modes and overfitting.
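
For readers who have not met Dynamic Mode Decomposition, the sketch below shows the core of the standard algorithm Chen refers to, written for illustration rather than taken from the paper: fit a low-rank linear operator to pairs of consecutive snapshots, then read frequencies and decay rates off its eigenvalues.

```python
import numpy as np

def dmd(X: np.ndarray, rank: int):
    """Minimal Dynamic Mode Decomposition.

    X has shape (n_states, n_snapshots); consecutive columns are one step apart.
    Returns the reduced linear operator and its eigenvalues.
    """
    X1, X2 = X[:, :-1], X[:, 1:]                      # snapshot pairs (x_k, x_{k+1})
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    return A_tilde, np.linalg.eigvals(A_tilde)

# Example: a decaying oscillation sampled every 0.1 s
t = np.arange(0.0, 20.0, 0.1)
X = np.vstack([np.exp(-0.05 * t) * np.cos(t),
               np.exp(-0.05 * t) * np.sin(t)])
_, eigvals = dmd(X, rank=2)
print(np.log(eigvals) / 0.1)  # recovers roughly -0.05 +/- 1j
```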

How the new framework shrinks the problem

The Duke framework tries to keep the linear representation as small as possible while still predicting well over long time windows. It takes experimental time series data, then uses deep learning plus physics-inspired constraints to discover a reduced set of hidden variables that still capture the system’s essential behavior.

In practice, the model learns a latent space, labeled ψ, where the dynamics behave like a linear system. The approach leans on time-delay embedding, which feeds the model short windows of past states to help it infer what comes next. The team also developed a mutual-information method to help pick an effective time-delay length, since that choice strongly affects prediction error.
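
The snippet below sketches both ingredients in generic form: building time-delay windows from a single measured signal, and using a histogram-based mutual-information curve to choose the delay. The Duke team's mutual-information method is its own contribution; this is only the standard heuristic, shown for intuition.

```python
import numpy as np

def delay_embed(series: np.ndarray, delay: int, dim: int) -> np.ndarray:
    """Stack `dim` delayed copies of a 1-D series into embedding windows."""
    n = len(series) - (dim - 1) * delay
    return np.column_stack([series[i * delay : i * delay + n] for i in range(dim)])

def mutual_information(series: np.ndarray, lag: int, bins: int = 32) -> float:
    """Histogram estimate of I(x_t; x_{t+lag}). A common heuristic picks the lag
    where this curve bottoms out; here we simply take the minimum over a range."""
    joint, _, _ = np.histogram2d(series[:-lag], series[lag:], bins=bins)
    joint /= joint.sum()                        # joint probabilities
    px = joint.sum(axis=1, keepdims=True)       # marginal of x_t
    py = joint.sum(axis=0, keepdims=True)       # marginal of x_{t+lag}
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

# Example: choose a delay for a noisy sine, then build 3-step windows
t = np.linspace(0, 40 * np.pi, 4000)
x = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
mi = [mutual_information(x, lag) for lag in range(1, 60)]
delay = int(np.argmin(mi)) + 1
print(delay, delay_embed(x, delay=delay, dim=3).shape)
```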

Training emphasizes long-horizon accuracy. The researchers used a discounted loss over future steps, then adjusted that discount over time in a curriculum-like way to help the model generalize beyond the training window. They also explored different latent dimensions and selected the smallest one that did not meaningfully harm performance.
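
The paper's exact loss and curriculum schedule are its own; the sketch below only illustrates the general shape of a discounted multi-step rollout loss, with a discount parameter that can be raised toward 1 as training progresses to stress longer horizons.

```python
import torch

def discounted_rollout_loss(step_model, z0, targets, gamma: float) -> torch.Tensor:
    """Prediction error over a rollout, with step k down-weighted by gamma**k.

    step_model: maps a latent state to the next latent state
    z0:         initial latent states, shape (batch, dim)
    targets:    ground-truth latent states for the next H steps, shape (H, batch, dim)
    gamma:      discount in (0, 1]; raising it during training emphasizes longer horizons
    """
    loss, weight_sum, z = 0.0, 0.0, z0
    for k, target in enumerate(targets):
        z = step_model(z)                                  # roll the model forward
        w = gamma ** k
        loss = loss + w * torch.mean((z - target) ** 2)
        weight_sum += w
    return loss / weight_sum

# Toy usage with a learnable linear map standing in for the latent operator
dim, horizon, batch = 6, 20, 32
step_model = torch.nn.Linear(dim, dim, bias=False)
z0 = torch.randn(batch, dim)
targets = torch.randn(horizon, batch, dim)
print(discounted_rollout_loss(step_model, z0, targets, gamma=0.9).item())
```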

“What stands out is not just the accuracy, but the interpretability,” said Chen, who also holds appointments in electrical and computer engineering and computer science. “When a linear model is compact, the scientific discovery process can be naturally connected to existing theories and methods that human scientists have developed over millennia. It’s like connecting AI scientists with human scientists.”

Nine testbeds, from pendulums to neural circuits to weather models

To test the method, the team built nine datasets spanning simulated and experimental nonlinear systems. The lineup moved from simple to complex, which matters because a method that only works on textbook motion does not help much in the wild.

A single pendulum offered the simplest case, with two measured variables and a stable resting state. The Van der Pol oscillator raised the difficulty with a repeating cycle, known as a limit cycle. The Hodgkin-Huxley model added four variables and strong nonlinearity, describing how neurons generate action potentials. The Lorenz-96 system, used in weather predictability research, introduced a high-dimensional setting with periodic and chaotic behaviors.

The study also focused on multistability, when a system can settle into more than one long-term pattern. The Duffing oscillator, often described as a particle moving in a double-well landscape, served as a key example. Other testbeds included interacting magnetic systems, nested limit cycles, an experimental magnetic pendulum and a double pendulum showing chaotic behavior.
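
Two of those benchmarks are easy to write down. The snippet below uses standard textbook forms of the Van der Pol and unforced Duffing oscillators (the study's exact parameters may differ) and shows the behaviors described above: a limit cycle for one, and two coexisting resting states for the other.

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, state, mu=1.0):
    """Van der Pol oscillator: trajectories settle onto a limit cycle for mu > 0."""
    x, v = state
    return [v, mu * (1 - x ** 2) * v - x]

def duffing(t, state, delta=0.1, alpha=-1.0, beta=1.0):
    """Unforced Duffing oscillator: a particle in a double-well potential, so it
    can come to rest in either well (multistability)."""
    x, v = state
    return [v, -delta * v - alpha * x - beta * x ** 3]

t_eval = np.linspace(0, 100, 4000)
vdp = solve_ivp(van_der_pol, (0, 100), [0.1, 0.0], t_eval=t_eval)
left = solve_ivp(duffing, (0, 100), [-0.5, 0.0], t_eval=t_eval)
right = solve_ivp(duffing, (0, 100), [0.5, 0.0], t_eval=t_eval)
print(left.y[0, -1], right.y[0, -1])  # near -1 and +1: two different stable states
print(np.ptp(vdp.y[0, -1000:]))       # peak-to-peak amplitude of the limit cycle
```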

Predicted trajectories from low-dimensional linear embeddings of nonlinear dynamics. (a) Predicted trajectories in latent space for the single pendulum modeled as a 3D linear system. (b) The same latent-space trajectories for the pendulum, decomposed into separate modes. (c, d) Ground truth and predicted trajectories for angular position and velocity after decoding into state space. (CREDIT: npj Complexity)

Across many of these systems, the framework found reduced models more than 10 times smaller than what previous machine-learning approaches required, while still producing reliable long-term forecasts. For example, the team reported that three-dimensional and six-dimensional representations were enough to model the Van der Pol and Duffing oscillators, respectively. In a higher-dimensional case, a limit-cycle Lorenz-96 system dropped from 40 states to 14 latent dimensions while keeping strong predictive performance.

Finding the “landmarks” that explain stability and change

Prediction is only part of the payoff. The framework also aims to reveal structures that dynamicists care about, including attractors, which are stable states or patterns a system tends to approach over time. Those structures can help you judge whether a system is behaving normally, drifting or edging toward instability.

“For a dynamicist, finding these structures is like finding the landmarks of a new landscape,” said Moore. “Once you know where the stable points are, the rest of the system starts to make sense.”

The method supports spectral analysis of the learned linear system, extracting eigenvalues and eigenfunctions that describe modes, frequencies and decay rates. It also produces learned stability tools called neural Lyapunov functions, built from decaying modes. Those functions can provide a practical way to assess global stability, an area where nonlinear systems often force researchers to settle for local answers.
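
In practice, that spectral read-out is a few lines of linear algebra once the operator is in hand. The operator below is a made-up stand-in, not one learned by the Duke model, but the eigenvalue-to-frequency bookkeeping is the same.

```python
import numpy as np

dt = 0.01  # sampling interval of the latent trajectory, in seconds
K = np.array([[0.995, -0.060, 0.0],
              [0.060,  0.995, 0.0],
              [0.0,    0.0,   0.97]])  # stand-in for a learned discrete-time operator

eigvals = np.linalg.eigvals(K)
continuous = np.log(eigvals.astype(complex)) / dt   # map to continuous-time eigenvalues
for lam in continuous:
    freq_hz = abs(lam.imag) / (2 * np.pi)   # oscillation frequency of the mode
    decay = lam.real                        # negative real part => the mode decays
    print(f"mode: {freq_hz:5.2f} Hz, decay rate {decay:7.2f} 1/s")
```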

The researchers also penalized unstable growth in training by discouraging eigenvalues with positive real parts. That choice aims to keep learned dynamics physically realistic, rather than exploding in ways that fit training data but fail in reality.
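
A minimal version of that idea, written here as an illustrative regularizer rather than the paper's exact term: for a discrete-time operator, eigenvalues with magnitude above one are the unstable ones (they correspond to continuous-time eigenvalues with positive real part), so only those get penalized.

```python
import torch

def instability_penalty(K: torch.Tensor) -> torch.Tensor:
    """Penalty on latent dynamics that grow without bound (illustrative sketch).

    Eigenvalues of a discrete-time operator with |lambda| > 1 are unstable modes;
    the penalty is zero when every eigenvalue sits inside the unit circle.
    """
    eigvals = torch.linalg.eigvals(K)
    growth = torch.clamp(eigvals.abs() - 1.0, min=0.0)
    return growth.sum()

K = 0.5 * torch.randn(6, 6)            # stand-in for a learned operator
print(instability_penalty(K).item())   # added to the training loss in practice
```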

“This is not about replacing physics,” Moore continued. “It’s about extending our ability to reason using data when the physics is unknown, hidden, or too cumbersome to write down.”

Practical implications of the research

This work points toward AI tools that do more than spot patterns. If a model can uncover compact, interpretable rules from messy measurements, you can test hypotheses faster and design better experiments. That matters in fields where the governing equations are incomplete or too hard to derive, including parts of climate science, neuroscience and complex engineering systems.

The framework could also improve early warning and control. Stable states and drift toward instability show up in real settings, from electrical grids and aircraft dynamics to biological rhythms. A reliable way to identify attractors and assess stability from data could help researchers detect when a system is shifting into a risky regime, and help engineers decide how to intervene.

The team also sees the method guiding what data to collect next, which could reduce cost when experiments are expensive. Over time, this approach could support “machine scientists” that help human researchers move from raw measurements to clear, testable rules.

Research findings are available online in the journal npj Complexity.



Shy Cohen
Science & Technology Writer

Shy Cohen is a Washington-based science and technology writer covering advances in AI, biotech, and beyond. He reports news and writes plain-language explainers that analyze how technological breakthroughs affect readers and society. His work focuses on turning complex research and fast-moving developments into clear, engaging stories. Shy draws on decades of experience, including long tenures at Microsoft and his independent consulting practice to bridge engineering, product, and business perspectives. He has crafted technical narratives, multi-dimensional due-diligence reports, and executive-level briefs, experience that informs his source-driven journalism and rigorous fact-checking. He studied at the Technion – Israel Institute of Technology and brings a methodical, reader-first approach to research, interviews, and verification. Comfortable with data and documentation, he distills jargon into crisp prose without sacrificing nuance.