UC Santa Barbara researchers have used artificial intelligence to reveal new insights into the brain mechanisms behind covert attention, the process of focusing on parts of a visual scene without moving one's eyes. The team, including Sudhanshu Srivastava, Miguel Eckstein, and William Wang, applied convolutional neural networks (CNNs) to study this behavior and discovered previously unknown neuron types. Their findings were published in the Proceedings of the National Academy of Sciences.
Srivastava described the research as "a clear case of AI advancing neuroscience, cognitive sciences and psychology." Covert attention has traditionally been associated with humans and primates due to its complexity and connection with specific brain regions. However, recent evidence shows that other species such as archer fish, mice, and bees also exhibit similar behaviors.
The researchers asked whether covert attention, rather than being managed solely by specialized brain modules, might instead emerge from interactions among many neurons. Because current imaging technology cannot directly measure the activity of individual neurons in such large numbers, they turned to artificial intelligence models for answers.
In previous work from 2024, Srivastava, Eckstein and Wang demonstrated that CNNs containing hundreds of thousands to a million artificial neurons could mimic human covert attention behaviors during target detection tasks—even though these networks had no explicit mechanism for directing attention.
The new study focused on understanding what within these CNNs allowed such emergent behavior. Eckstein explained: "For this paper we thought we could analyze these convolutional neural networks instead of treating them like a black box. If you're doing single-cell physiology, trying to isolate individual neurons, you can record activity from maybe thousands of neurons, but a million is not presently possible. But in the CNN, we could characterize every single one of its units, and that might guide our understanding of the real neurons in brains."
By testing 1.8 million artificial neurons across multiple CNNs on visual cueing tasks similar to those used in neuroscience labs (such as Posner cueing), they found units resembling known attention-related neuron types, as well as types not previously reported. Some units reduced their response when a cue was presented ("cue inhibitory"), while others, which Eckstein described as "location opponent," increased activity at one location while suppressing it at another.
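The article does not describe the team's actual analysis pipeline, but the general idea of sorting units by how a spatial cue shifts their responses can be illustrated with a minimal sketch. Everything here is hypothetical: the function name, the threshold, and the three response values (mean activation of one unit when the cue appears left, right, or not at all) are assumptions for illustration, not the study's method.

```python
def classify_unit(resp_cued_left, resp_cued_right, resp_neutral, thresh=0.1):
    """Classify one artificial unit by how a spatial cue changes its response.

    Inputs are mean activations of the unit under three (hypothetical)
    cue conditions. Categories loosely mirror those named in the article:
      - 'cue_excitatory':    response rises whenever a cue appears
      - 'cue_inhibitory':    response falls whenever a cue appears
      - 'location_opponent': rises for a cue at one location while
                             falling for a cue at the other (push-pull)
      - 'cue_insensitive':   no meaningful change
    """
    d_left = resp_cued_left - resp_neutral    # effect of a left-side cue
    d_right = resp_cued_right - resp_neutral  # effect of a right-side cue
    if (d_left > thresh and d_right < -thresh) or \
       (d_right > thresh and d_left < -thresh):
        return "location_opponent"
    if d_left > thresh and d_right > thresh:
        return "cue_excitatory"
    if d_left < -thresh and d_right < -thresh:
        return "cue_inhibitory"
    return "cue_insensitive"

# A unit boosted by a left cue but suppressed by a right cue
# would land in the push-pull category:
print(classify_unit(1.2, 0.3, 0.7))  # location_opponent
```

Applied to every unit in a network, a rule like this would yield a census of cue-sensitive unit types; the real study would additionally need to control stimuli, average over many trials, and validate categories statistically.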
"The most surprising one is a 'location opponent,'" said Eckstein. He added: "It's kind of a push-pull." While similar mechanisms are common in other aspects of vision science (like color opponency), their role in attention was unexpected.
To verify their AI findings against biological data, the team analyzed mouse brain studies involving cueing tasks. They identified neuron types matching those predicted by their models, including location-opponent cells, in the mouse superior colliculus. Other new neuron types suggested by the AI also appeared in the mouse data; some AI-predicted types, however, did not appear biologically, hinting at constraints unique to living systems.
"These neurons might be one of a variety mediating emergent attentional behavior," said Eckstein.
Srivastava reflected on how this work changes perspectives: "It fundamentally changed how we think about attention," he said. The research suggests both behaviors and underlying neural mechanisms related to covert attention can emerge from learning processes rather than being hardwired.
Eckstein and Wang continue exploring connections between human cognition and machine intelligence through UCSB's Mind & Machine Intelligence Initiative—a program supported by Duncan and Suzanne Mellichamp—that brings together experts working at this interdisciplinary frontier.