When a baby smiles, it often prompts an automatic smile in return. This reaction is part of how humans communicate emotions and mental states through facial expressions. The neural mechanisms behind recognizing faces have been studied extensively, but the processes that generate facial expressions have remained less understood.
Researchers at Rockefeller University, led by Winrich Freiwald in the Laboratory of Neural Systems, have now identified a network in the brain responsible for controlling facial movements. Their findings were published in Science and reveal how different brain regions coordinate to produce various facial gestures.
Previously, scientists believed that emotional expressions originated from one area of the frontal lobe while voluntary actions came from another. However, Freiwald's team found that both lower-level and higher-level brain regions are involved in encoding all types of facial gestures.
"We had a good understanding of how facial gestures are received, but now we have a much better understanding of how they're generated," said Freiwald. His research is supported by the Price Family Center for the Social Brain at Rockefeller.
Co-lead author Geena Ianni explained: "We found that all regions participated in all types of facial gestures but operate on their own distinct timescales, suggesting that each region is uniquely suited to the 'job' it performs."
The study used fMRI scanning to observe macaque monkeys as they made different facial expressions. The researchers identified three cortical areas with direct access to face muscles: the cingulate motor cortex (medial), primary and premotor cortices (lateral), and somatosensory cortices. They mapped out a network involving these areas along with parts of the parietal lobe.
By recording neural activity during specific expressions—such as threatening looks, lipsmacking (a social gesture among macaques), and chewing—the team linked patterns of brain activity to coordinated movements in various parts of the face.
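To make the idea of "linking patterns of brain activity to coordinated movements" concrete, here is a minimal decoding sketch in Python. Everything in it is assumed for illustration: the synthetic firing rates, the gesture labels, and the nearest-centroid classifier are stand-ins for the general approach, not the analysis the Rockefeller team actually performed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: trial-by-trial firing rates (spikes/s) for 50 neurons
# across three gesture types. None of these numbers come from the study;
# they only illustrate how activity patterns can predict a gesture.
gestures = ["threat", "lipsmack", "chew"]
n_neurons, trials_per_gesture = 50, 20

# Give each gesture its own mean activity pattern plus trial-to-trial noise.
means = rng.uniform(5, 30, size=(len(gestures), n_neurons))
X = np.vstack([m + rng.normal(0, 3, size=(trials_per_gesture, n_neurons)) for m in means])
y = np.repeat(np.arange(len(gestures)), trials_per_gesture)

# Split trials into a training set and a held-out test set.
idx = rng.permutation(len(y))
train, test = idx[:45], idx[45:]

# Nearest-centroid decoder: label each held-out trial with the gesture whose
# average training pattern it most closely resembles.
centroids = np.array([X[train][y[train] == g].mean(axis=0) for g in range(len(gestures))])
pred = np.argmin(np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2), axis=1)

accuracy = (pred == y[test]).mean()
print(f"held-out decoding accuracy: {accuracy:.2f}")  # well above chance (~0.33) when patterns differ
```

If distinct gestures can be decoded from activity in a region well above chance, that region carries information about the gesture, which is the logic behind relating neural patterns to facial movements.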
The results showed that both higher and lower cortical regions contributed to emotional and voluntary expressions. Each region operated at its own tempo: lateral areas responded quickly while medial areas showed slower dynamics.
"Lateral regions like the primary motor cortex housed fast neural dynamics that changed on the order of milliseconds, while medial regions like the cingulate cortex housed slow, stable neural dynamics that lasted for much longer," said Ianni.
Further analysis indicated these regions form an interconnected sensorimotor network rather than acting independently. Yuriria Vázquez, co-lead author and former postdoc in Freiwald's lab, stated: "This suggests facial motor control is dynamic and flexible rather than routed through fixed, independent pathways." Freiwald added: "This is contrary to the standard view that they work in parallel and separate action... That really underscores the connectivity of the facial motor network."
Freiwald plans future studies examining perception and expression together to better understand emotions. He said: "We think that will help us better understand emotions... We would like to find the areas controlling emotional states—we have ideas about where they are—and then understand how they work together with motor areas to generate different kinds of behaviors."
Vázquez suggested that further research could examine how social cues or internal states influence this system, and could also point toward clinical applications such as improved brain-machine interfaces for restoring communication after injury. Freiwald noted: "As with our approach, those devices also involve implanting electrodes to decode brain signals... And because of the importance of facial expression to communication, it will be very useful to have devices that can decode and translate these kinds of facial signals."
Ianni concluded: "I hope our work moves the field, even the tiniest bit, towards more naturalistic and rich artificial communication designs that will improve lives of patients after brain injury."