Multi-wavelength photonics breakthrough performs AI math at light speed
Researchers created an optical system that completes full tensor operations in a single pass of light, offering a path toward faster and cleaner AI computing.

Edited By: Joseph Shavit

New optical method performs full tensor operations in one pass of light, offering faster and more efficient AI computing. (CREDIT: AI-generated image / The Brighter Side of News)
Artificial intelligence grows more demanding every year. Modern models learn and operate by pushing huge volumes of data through repeated matrix operations that sit at the heart of every neural network. These operations, called tensor computations, usually rely on graphics processing units. GPUs are powerful, but as networks expand, they strain under heavy memory loads, rising electricity use and the limits of semiconductor hardware. Many researchers see this widening gap and wonder how much longer conventional chips can keep up.
Optical computing has been pitched as an answer for decades. Light moves quickly, carries massive amounts of data and consumes very little energy. Yet most optical systems struggle to perform the key math behind neural networks. They often break a single tensor calculation into several steps, slowing the whole process. As a result, they have not been able to replace GPU-based methods.
A new optical strategy may finally shift the landscape. An international research team led by Dr. Yufeng Zhang at Aalto University has created a method called parallel optical matrix-matrix multiplication, or POMMM. It performs an entire tensor operation in one pass of light. Their goal is simple to say and hard to do: match the accuracy of digital chips while delivering the speed and efficiency that only light can bring.
Light as a New Kind of Calculator
Matrix multiplication normally follows two steps. First, each element of a row of one matrix is multiplied by the corresponding element of a column of the other matrix. Then all the partial products are summed to create one entry of the final result. Optical methods already handle the first part well by shaping the intensity of light. The trouble comes when trying to sum many values without blurring the signals together.
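In NumPy terms (a generic illustration, not the team's code), the two steps look like this:

```python
import numpy as np

# Small made-up matrices, for illustration only.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

C = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        partial = A[i, :] * B[:, j]  # step 1: element-by-element products
        C[i, j] = partial.sum()      # step 2: sum the partial values

assert np.allclose(C, A @ B)  # matches NumPy's built-in matrix product
```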
The researchers turned to a pair of mathematical properties of Fourier transforms. A pattern that shifts in space keeps the same amplitude spectrum. Multiplying a pattern by a simple linear phase ramp, however, moves its spectrum to a different frequency. With those insights, the team gave each row of the first matrix its own phase code. When the light passed through lenses that performed Fourier transforms, these patterns separated cleanly. None of them overlapped. None of them mixed.
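The second property is easy to verify numerically. In the one-dimensional sketch below (software FFTs standing in for the lenses), multiplying a signal by a linear phase ramp moves its spectrum by a known number of frequency bins, which is what keeps differently coded rows from landing on top of each other:

```python
import numpy as np

N = 64
signal = np.zeros(N)
signal[:8] = 1.0                                    # a simple spatial pattern
ramp = np.exp(2j * np.pi * 10 * np.arange(N) / N)   # linear phase, slope 10

spectrum_plain = np.fft.fft(signal)
spectrum_coded = np.fft.fft(signal * ramp)

# The coded spectrum is the plain one shifted by exactly 10 bins.
assert np.allclose(spectrum_coded, np.roll(spectrum_plain, 10))
```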
The full process begins when the first matrix is imprinted onto a beam of coherent light. Each row receives its own phase gradient so it can be tracked later. A column-wise Fourier transform stacks the rows into a combined pattern. The second matrix is then encoded onto the light, performing the element-by-element multiplication all at once. A final Fourier transform along the rows adds up every product and produces the full output matrix in a single shot.
Instead of multiple passes of light or complex routing tricks, everything happens at once.
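A toy software version of the pipeline can make the single-shot idea concrete. The sketch below is only a stand-in: NumPy broadcasting replaces the phase-coded stacking that the real system performs optically (the phase codes shown above are what let one physical light field hold all the copies), and a one-dimensional FFT plays the role of the final summing lens, since the zero-frequency bin of a Fourier transform is exactly the sum of its inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 4, 8, 5
A = rng.random((m, n))   # first matrix
B = rng.random((n, p))   # second matrix

# Fan-out: all element-by-element products A[i, k] * B[k, j] in one field.
# Optically this happens when the stacked copies of A pass through the
# modulator holding B; here broadcasting stands in for that step.
products = A[:, :, None] * B[None, :, :]        # shape (m, n, p)

# Summation by Fourier transform: bin 0 of an FFT along the shared axis k
# is the sum over k, which is the job of the final lens.
C = np.fft.fft(products, axis=1)[:, 0, :].real  # shape (m, p)

assert np.allclose(C, A @ B)
```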
Building the Hardware to Prove It
To move from theory to experiment, the team built a tabletop system using standard optical parts. A laser supplied the input beam. The first spatial light modulator encoded the first matrix. A second modulator created the phase patterns. Cylindrical lenses formed the required images and Fourier transforms along different directions. The second matrix was encoded using another modulator, and a high-resolution camera captured the final result.
The entire calculation unfolded in parallel. Every value appeared at the same moment. To test accuracy, the researchers compared the outputs with GPU-based results on many matrices, including symmetric, triangular, real-valued and complex-valued examples. Across dozens of trials involving matrices as large as 50 by 50, the errors stayed low. The mean absolute error remained under 0.15. The normalized root-mean-square error stayed below 0.1. These results showed that the optical system produced reliable answers.
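For readers who want to reproduce the comparison on their own data, the two quoted error measures can be computed as below. This is a generic sketch; the normalization convention for NRMSE (here, the range of the reference values) is an assumption, since conventions vary:

```python
import numpy as np

def mean_absolute_error(measured, reference):
    return np.mean(np.abs(measured - reference))

def nrmse(measured, reference):
    rmse = np.sqrt(np.mean((measured - reference) ** 2))
    return rmse / (reference.max() - reference.min())  # range-normalized

# Example: a 50-by-50 product with small simulated readout noise.
rng = np.random.default_rng(1)
A, B = rng.random((50, 50)), rng.random((50, 50))
reference = A @ B
measured = reference + rng.normal(0.0, 0.05, reference.shape)
print(mean_absolute_error(measured, reference), nrmse(measured, reference))
```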
The researchers also studied what caused the remaining errors. They evaluated the imperfections in their prototype and used simulations to show how those errors could be reduced.
Bringing Neural Networks Into the Optical World
Because POMMM behaves like digital matrix multiplication, the team tested whether GPU-trained models could run on their optical system. They trained several networks on MNIST and Fashion-MNIST datasets using non-negative weights. These included convolutional networks and transformer-style models. When these networks ran through the optical system, their predictions closely matched GPU outputs.
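Conceptually, this kind of test amounts to swapping the digital matrix product inside a trained layer for the optical one, with everything else unchanged. A hedged sketch, where the noise model and layer shapes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def optical_matmul(x, w):
    # Stand-in for the optical kernel: the exact product plus readout noise.
    y = x @ w
    return y + rng.normal(0.0, 0.01, y.shape)

# Hypothetical dense layer with non-negative weights, echoing the paper's setup.
W = np.abs(rng.normal(size=(784, 10)))  # a flattened 28x28 image to 10 classes
x = rng.random((1, 784))

logits_digital = x @ W                   # GPU-style reference
logits_optical = optical_matmul(x, W)    # same weights, optical kernel
print(np.argmax(logits_digital), np.argmax(logits_optical))  # typically agree
```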
The group tested larger tasks as well. In one case, they used a U-Net model for image style transfer that required multiplying matrices as large as 256 by 9,216 and 9,216 by 256. The optical system handled these large operations without losing accuracy.
When errors rose in extreme cases, the researchers trained models directly with the optical kernel. Those models still reached accuracy levels similar to GPU-trained networks. This suggests that perfect agreement between optical and digital multiplication may not be necessary. Networks can learn to adapt to the quirks of their hardware.
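The idea of training through the hardware can also be sketched in software. In the toy example below, the forward pass runs through an imperfect kernel (a small gain error plus noise, both invented for illustration), and gradient descent pulls the weights toward values that compensate for it:

```python
import numpy as np

rng = np.random.default_rng(3)

def imperfect_matmul(x, w):
    # Invented imperfection: a 2% gain error plus small additive noise.
    return (x @ w) * 1.02 + rng.normal(0.0, 0.01, (x.shape[0], w.shape[1]))

# Tiny synthetic regression task, purely for illustration.
X = rng.random((256, 16))
Y = X @ rng.random((16, 4))

W = np.zeros((16, 4))
for _ in range(2000):
    pred = imperfect_matmul(X, W)        # forward pass uses the real kernel
    grad = X.T @ (pred - Y) / len(X)     # mean-squared-error gradient
    W -= 0.05 * grad                     # weights absorb the hardware quirks

print(np.mean((imperfect_matmul(X, W) - Y) ** 2))  # small residual error
```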
Extending the Method With Multiple Colors of Light
Light has another advantage. Different wavelengths can coexist without interfering. Because POMMM relies only on amplitude and phase patterns, it can spread work across several wavelengths. Each wavelength shifts to its own place in the frequency domain. When the light moves through the Fourier transform, it creates many distinct outputs at once.
The researchers tested this idea using two wavelengths, 540 and 550 nanometers. They encoded different parts of a complex matrix onto these two colors. After two passes of the method, they obtained a full complex-valued matrix multiplication. Once again, the optical results matched GPU outputs.
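One way to see why two wavelengths and two passes are enough is the standard decomposition of a complex product into four real ones. The channel assignment below is an illustrative guess, not necessarily the one the team used:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((4, 4)) + 1j * rng.random((4, 4))
B = rng.random((4, 4)) + 1j * rng.random((4, 4))
Ar, Ai, Br, Bi = A.real, A.imag, B.real, B.imag

# Pass 1: the 540 nm channel carries Ar, the 550 nm channel carries Ai;
# both are multiplied by the real part of B.
p1_540, p1_550 = Ar @ Br, Ai @ Br
# Pass 2: same channels, now multiplied by the imaginary part of B.
p2_540, p2_550 = Ar @ Bi, Ai @ Bi

C = (p1_540 - p2_550) + 1j * (p2_540 + p1_550)
assert np.allclose(C, A @ B)  # full complex-valued matrix product recovered
```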
What Comes Next for Optical AI
POMMM shows stronger theoretical performance than earlier optical computing approaches. The prototype relied only on standard parts, yet it achieved more than two billion operations per joule. That number may grow as the method moves to faster platforms and larger photonic chips. Most steps in the process depend on passive shaping of light, so improved hardware could push performance far beyond the limits of electronics.
Challenges remain. Extra phase modulation can add complexity to real-valued tasks. Some vision applications lean on complex-valued operations, which may require additional adjustments. Even so, the work presents a path toward processors that compute at the speed of light.
Dr. Zhang sees a clear path forward. “Our method performs the same kinds of operations that today’s GPUs handle, like convolutions and attention layers, but does them all at the speed of light,” he says. Professor Zhipei Sun adds that the group hopes to integrate the framework into photonic chips for extremely low-power AI processing.
The research team expects that major technology companies could adopt the approach within a few years.
Research findings are available online in the journal Nature Photonics.
Shy Cohen
Science & Technology Writer



