Classical Indian dance is teaching robots how to move and use their hands
A new study shows classical Indian dance gestures offer deeper insights into brain-controlled hand movement than everyday grasps.

Edited By: Joseph Shavit

Ashwathi Menon dancing with a robot. Researchers find Indian dance gestures reveal richer movement building blocks than natural hand grasps. (CREDIT: University of Maryland)
Hands move constantly during conversation. They signal emotion, stress a point, and form full languages such as American Sign Language. Behind this everyday motion lies a complex challenge. Each human hand has more than 20 degrees of freedom. Yet the nervous system controls these movements with speed and ease.
Researchers at the University of Maryland, Baltimore County set out to understand how this control works. The team is led by Ramana Vinjamuri, a professor whose lab studies how the brain organizes complex hand motion. Their findings, published in Scientific Reports, show that structured dance gestures may hold deeper clues than ordinary hand movements.
The work focuses on Bharatanatyam, a classical Indian dance form known for precise hand shapes called mudras. These gestures are not decorative. They carry meaning and follow strict rules. Vinjamuri and his colleagues found that mudras contain a richer set of movement building blocks than natural hand grasps. That insight could reshape rehabilitation tools and how robots learn to move like humans.
Why Gestures Matter Beyond Communication
Hand gestures do more than support speech. They also play a growing role in therapy. Movement-based approaches such as Dance Movement Therapy have helped people recovering from neurological disorders. Studies involving Parkinson’s disease patients, older adults in care homes, and people under high stress show gains in mood, thinking, grip strength, and dexterity.
"These improvements point to a larger idea. Structured movement can reshape the brain. Rhythmic and intentional gestures train the sensorimotor system. When gestures carry personal or emotional meaning, the benefits often extend further," Vinjamuri told The Brighter Side of News.
"Mudras offer a unique case. Each gesture requires strength, precision, and coordination across finger joints. The movements engage the metacarpophalangeal, proximal interphalangeal, and distal interphalangeal joints in defined patterns. That structure makes them ideal for studying how complex motions are organized," Vinjamuri continued.
Searching for the Alphabet of Movement
For more than a decade, Vinjamuri has studied kinematic synergies. These are coordinated patterns of joint motion that simplify control. Instead of managing each joint alone, the brain groups them into units. Vinjamuri often compares this to language. Thousands of words arise from a small set of letters.
Further inspiration came in 2023 at a neuroscience conference hosted by the Indian Institute of Technology Mandi. While discussing how ancient traditions could inform modern science, Vinjamuri saw an opportunity in classical dance.
“We noticed dancers tend to age super gracefully: They remain flexible and agile because they have been training,” said Vinjamuri. “That was a huge inspiration for us when we started looking for richer alphabets of movement. With dance, we are looking not just at healthy movement, but super healthy. And so the question became, could we find a ‘superhuman’ alphabet from the dance gestures?”
Capturing and Modeling Hand Motion
The team analyzed 75 hand gestures. These included 30 Bharatanatyam mudras, 30 natural grasps, and 15 American Sign Language positions. Each gesture was recorded using a single RGB camera and a real-time tracking system. The system identified 21 skeletal landmarks and focused on 14 major finger and thumb joints.
Each joint’s angular velocity followed a bell-shaped curve. This matched earlier findings about finger motion. Using these curves, the team built a virtual hand that moved only through finger flexion and extension. Wrist motion and side-to-side finger movement were excluded.
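The bell-shaped velocity curve described above is commonly modeled as a minimum-jerk movement. The sketch below is not the authors' code; it assumes a standard minimum-jerk polynomial for a single joint rotating from one angle to another, which produces exactly this bell-shaped velocity profile.

```python
# Minimum-jerk sketch (assumed model, not the study's actual fit):
# a joint angle trajectory whose angular velocity is bell-shaped,
# starting and ending at zero and peaking mid-movement.
import numpy as np

def min_jerk(theta0, theta1, T, n=100):
    """Angle and angular velocity for a minimum-jerk movement of duration T."""
    t = np.linspace(0.0, T, n)
    s = t / T
    # Classic minimum-jerk position polynomial
    pos = theta0 + (theta1 - theta0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    # Its derivative: zero at both endpoints, maximal at the midpoint
    vel = (theta1 - theta0) / T * (30 * s**2 - 60 * s**3 + 30 * s**4)
    return t, pos, vel

t, pos, vel = min_jerk(0.0, 90.0, 1.0)  # e.g. a 90-degree finger flexion in 1 s
# vel rises from zero, peaks near the middle, and falls back to zero
```

Plotting `vel` against `t` gives the bell shape the tracking data matched.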
Despite these limits, the model recreated all 75 gestures in stable three-dimensional form. The next step was to extract synergies. Using principal component analysis, the researchers identified the smallest number of components needed to explain most movement variation.
Six synergies captured more than 94 percent of the variation in mudras. The same number explained nearly 99 percent of variation in natural grasps.
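The synergy-extraction step can be sketched with off-the-shelf PCA. The code below mirrors the study's shapes (75 gestures, 14 modeled joints) but uses random placeholder data, so the number of components it finds will not match the paper's six; only the procedure is illustrative.

```python
# Hedged sketch of PCA-based synergy extraction (placeholder data,
# not the study's recordings): find the smallest number of principal
# components explaining at least 94% of movement variance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# One row per gesture, one column per joint angle (75 gestures x 14 joints)
angles = rng.normal(size=(75, 14))

pca = PCA()
pca.fit(angles)

# Cumulative explained variance across components
cumvar = np.cumsum(pca.explained_variance_ratio_)
# Smallest component count reaching the 94% threshold
n_synergies = int(np.searchsorted(cumvar, 0.94) + 1)
```

Each retained principal component is one "synergy": a fixed pattern of coordinated joint motion.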
What the Comparisons Revealed
The key test came next. The team examined how well these synergies could rebuild gestures they were not trained on. Mudra-derived synergies reconstructed American Sign Language gestures with an average error of 8.65 percent. Natural grasp synergies produced an error of 11.08 percent.
Mudra-based synergies also rebuilt natural grasps more accurately than grasp-based synergies rebuilt mudras. The pattern was clear. Structured, codified gestures produced more transferable building blocks.
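The cross-reconstruction test can be sketched as follows: fit synergies on one gesture set, project a different set onto that basis, and measure the residual. The data here are random stand-ins, and the normalized-residual error metric is an assumption, not necessarily the exact measure the authors used.

```python
# Hedged sketch of cross-set reconstruction: mudra-derived synergies
# (principal components) rebuild held-out ASL gestures, and the
# residual is reported as a percentage error. Data are placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
mudras = rng.normal(size=(30, 14))  # training set: 30 mudras x 14 joints
asl = rng.normal(size=(15, 14))     # held-out set: 15 ASL gestures

# Six synergies learned from mudras only
pca = PCA(n_components=6).fit(mudras)

# Project ASL gestures into the mudra synergy space and back out
asl_hat = pca.inverse_transform(pca.transform(asl))

# Normalized reconstruction error as a percentage (assumed metric)
error_pct = 100 * np.linalg.norm(asl - asl_hat) / np.linalg.norm(asl)
```

A lower `error_pct` means the learned synergies transfer better to gestures they never saw, which is the comparison behind the 8.65 versus 11.08 percent figures.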
“When we started this type of research more than 15 years ago, we wondered: Can we find a golden alphabet that can be used to reconstruct anything?” said Vinjamuri. “Now I highly doubt that there is such a thing. But the mudras-derived alphabet is definitely better than the natural grasp alphabet because there is more dexterity and more flexibility.”
Planar gestures reconstructed best. Movements requiring depth cues produced higher error. The camera system relied mostly on two-dimensional data, which limited accuracy for finger spreading and opposing motions.
Teaching Robots to Gesture
The researchers transferred the reconstructed gestures to a humanoid robot called Mitra. The robot has 21 degrees of freedom and five motor-driven digits. Its fingers follow simplified mechanical rules. Even small modeling errors became visible.
Still, Mitra reproduced symbolic, sign language, and grasp gestures in line with the model’s predictions. The demonstration showed that robots do not need to store every pose. With core movement primitives, new gestures can emerge through recombination.
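The recombination idea above can be illustrated in a few lines: rather than storing every pose, a controller stores a handful of synergy vectors and composes new postures as weighted sums. The synergy values and weights below are made up for the demo; only the shapes (six synergies, 14 joints) follow the study.

```python
# Hedged illustration of gesture composition from movement primitives:
# a posture is a linear combination of a small synergy basis.
import numpy as np

rng = np.random.default_rng(2)
# 6 synergy vectors x 14 joint angles (placeholder basis)
synergies = rng.normal(size=(6, 14))

def compose(weights):
    """Build a 14-joint hand posture from 6 synergy weights."""
    return np.asarray(weights) @ synergies

# Six scalar weights are enough to specify all 14 joint targets
posture = compose([0.8, -0.2, 0.5, 0.0, 0.1, -0.3])
```

This is why the storage savings scale: the robot keeps six basis vectors and per-gesture weight lists instead of full joint configurations for every pose.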
The lab is also testing these ideas on a stand-alone robotic hand. Each platform requires different translation methods from math to motion.
“Once I learned about synergies, I became so curious to see if we could use them to make a robotic hand respond and perform the same way as a human hand,” said Parthan Olikkal, a doctoral student in computer science and longtime member of the lab. “Adding my own work to the research efforts, and seeing the results has been gratifying.”
Practical Implications of the Research
The findings could shape future rehabilitation tools. Mudras carry meaning and structure, which may boost motivation during therapy. Their transferable synergies offer a stable framework for retraining fine motor control.
Robotics stands to benefit as well. Gesture libraries based on movement primitives reduce storage needs and improve flexibility. This approach could enhance assistive robots, human-robot interaction, and teleoperation systems.
The team also emphasizes accessibility. Their system relies on a simple camera and software. That opens the door to affordable home-based therapy coaching tools.
Future work will expand the dataset and include full three-dimensional motion. Continuous gesture sequences will also be studied. These steps aim to reflect real-world hand movement more closely.
Research findings are available online in the journal Scientific Reports.