Non-Invasive Models of Human Brain-Computer-Interface Control of Robots Grant

Non-Invasive Models of Human Brain-Computer-Interface Control of Robots

abstract

  • There are many situations where a skilled human operator must manipulate a large number of control variables in real time to direct the dexterous motion of a robotic device. These include controlling an excavator, teleoperating a surgical robot, and operating a cutting-edge brain-controlled prosthetic limb. However, it remains unknown how large sets of control variables can be organized to optimize how people learn to control complex machines. This project seeks to promote the progress of science and advance the national health by addressing two questions related to the design and implementation of adaptive brain-machine interfaces such as those used by severely impaired people to control assistive robotics. An important novelty of the research approach is the non-invasive recording of finger motions as a proxy for the high-dimensional inputs typically provided by intracortical brain-computer interfaces (iBCIs). The specific research questions addressed by this project are: 1) "How should control signals be presented to the user at the control interface to optimize the output behavior of the machine?", and 2) "Should the robotic system predict what the user wants it to do and adapt its behavior accordingly, and if so, how should task-level control be shared between the user and the machine to optimize task performance?". Project outcomes promise to be applicable to a wide range of difficult human-machine interaction problems. The awardee's institution is a Hispanic-Serving Institution; the research includes outreach activities that specifically engage underrepresented groups, undergraduate students, and the local community.

    The project will use two models of intracortical brain-computer interfaces (iBCIs) to evaluate how high-dimensional human input should be mapped onto command variables for a six degree-of-freedom embodied robotic arm. The project uses non-invasive recording of finger motions as a proxy for the high-dimensional inputs typically provided by iBCIs. The first model linearly projects finger articulations into one of seven different robot command spaces (effector position, joint velocity, motor torques, etc.). The project team will evaluate how human subjects perform with the assistive robot on tasks of daily living (e.g., moving objects on a tabletop or bringing a cup to their mouth) with each of the seven different interfaces. By doing so, they will determine the role that command-space encoding plays in the rate of human learning and the ultimate extent of task proficiency. The second model acquires human kinematic input to drive a deep neural network model of motor cortex neurons, whose firing rates are then passed through a decoding algorithm to infer commands for the robot; this is an explicit and validated model of intracortical brain-computer interfaces. The project team will use this model to determine optimal rates of online decoder adaptation to emulated neural input, and the extent to which the adaptation rates interact with the choice of command space in optimizing task performance of the assistive robotic machine. (A minimal illustrative sketch of both interface models appears below.)

    This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
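To make the two interface models concrete, the sketch below shows one possible shape of each pipeline in NumPy. This is a hypothetical illustration, not the project's implementation: the channel counts, the rectified-linear stand-in for the motor cortex model, and the single-step decoder-adaptation rule are all illustrative assumptions.

```python
# Minimal, assumed sketch of the two interface models described in the abstract.
# Dimensions, variable names, and the adaptation rule are illustrative only.
import numpy as np

N_FINGER = 10   # assumed number of recorded finger-articulation channels
N_CMD = 6       # six degree-of-freedom robot command (e.g., end-effector velocity)
N_NEURON = 64   # assumed size of the emulated motor-cortex population

rng = np.random.default_rng(0)

# --- Model 1: direct linear projection into one of several command spaces ---
# Each candidate interface corresponds to a different fixed projection matrix W.
W_effector_velocity = rng.standard_normal((N_CMD, N_FINGER)) * 0.1

def linear_interface(finger_angles: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Map recorded finger articulations to a robot command vector."""
    return W @ finger_angles

# --- Model 2: emulated iBCI pipeline with an adaptive decoder ---
# Finger kinematics drive a stand-in "motor cortex" encoder; the decoder is
# then updated online toward the command the user appears to intend.
W_encode = rng.standard_normal((N_NEURON, N_FINGER))   # kinematics -> firing rates
D = rng.standard_normal((N_CMD, N_NEURON)) * 0.01      # firing rates -> command

def emulated_firing_rates(finger_angles: np.ndarray) -> np.ndarray:
    """Stand-in encoder: nonnegative 'firing rates' driven by kinematics."""
    return np.maximum(W_encode @ finger_angles, 0.0)

def adapt_decoder(D: np.ndarray, rates: np.ndarray,
                  intended_cmd: np.ndarray, lr: float) -> np.ndarray:
    """One online update pulling the decoded output toward the inferred intent.
    The learning rate `lr` plays the role of the adaptation rate studied here."""
    error = intended_cmd - D @ rates
    return D + lr * np.outer(error, rates)

# Example: one control cycle of the emulated iBCI pipeline.
fingers = rng.standard_normal(N_FINGER)          # one frame of finger motion
rates = emulated_firing_rates(fingers)
command = D @ rates                              # command sent to the 6-DoF arm
D = adapt_decoder(D, rates, intended_cmd=np.zeros(N_CMD), lr=1e-3)
```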

date/time interval

  • December 1, 2021 - November 30, 2025

administered by

sponsor award ID

  • 2128465

contributor