Mathematical Frameworks for Adaptive Behavior: From Neuromechanical Motor Control to Neurocognitive Control
Time and place
12:30–1:30 PM on Friday, April 17th, 2026; MR 1128
Dr. Zhuojun Yu
Abstract
Many fundamental behaviors require the brain to adapt in real time to changing environments. This talk will present a unified computational framework for understanding such adaptive behavior in two settings: motor control and cognition. On the motor side, physiological behaviors such as walking, breathing, and feeding are modeled as closed-loop systems that integrate neural circuitry, biomechanics, and sensory feedback, giving rise to oscillatory limit-cycle dynamics. Because control theory provides a natural framework for “reverse-engineering” biological regulation, we took an initial step toward extending linear control theory to limit cycles, developing variational analysis tools that predict how perturbations alter the timing, shape, and functional performance of the system. These tools have also informed our design of bio-inspired robots and guided our analysis of neuromodulatory tradeoffs in motor behavior. On the cognitive side, significant effort has been devoted to the roles of specific neural populations in the cortico-basal ganglia circuit in guiding decisions and learning. Here we address the much less investigated questions of how activity evolves across the whole circuit during decision-making, the nature of variability in this process, and how reward learning reshapes these dynamics over time. To answer these questions, we developed a novel computational framework, CLAW (Circuit Logic Assessed via Walks), which tracks the flow of neural activity through high-dimensional, stochastic neural dynamics. By mapping these circuit-level dynamics onto the drift-diffusion model, we provide an algorithmic interpretation of how learning improves both decision speed and accuracy. Together, these two lines of work demonstrate the complementary goals of making complex dynamics predictable and tractable. Finally, I will conclude with future directions in large-scale neural data analysis, optimal control and reinforcement learning, and reliable AI tools for scientific discovery.