Nature Neuroscience
https://doi.org/10.1038/s41593-023-01500-7
Technical Report
Decoding motor plans using a closed-loop
ultrasonic brain–machine interface
Whitney S. Griggs 1,2,14, Sumner L. Norman 1,14, Thomas Deffieux 3,4, Florian Segura 3,4, Bruno-Félix Osmanski 5, Geeling Chau 1, Vasileios Christopoulos 6,7, Charles Liu 1,8,9,10, Mickael Tanter 3,4, Mikhail G. Shapiro 11,12,13 & Richard A. Andersen 1,6
Brain–machine interfaces (BMIs) enable people living with chronic paralysis
to control computers, robots and more with nothing but thought. Existing
BMIs have trade-offs across invasiveness, performance, spatial coverage
and spatiotemporal resolution. Functional ultrasound (fUS) neuroimaging
is an emerging technology that balances these attributes and may
complement existing BMI recording technologies. In this study, we use fUS
to demonstrate a successful implementation of a closed-loop ultrasonic
BMI. We streamed fUS data from the posterior parietal cortex of two rhesus
macaque monkeys while they performed eye and hand movements. After
training, the monkeys controlled up to eight movement directions using
the BMI. We also developed a method for pretraining the BMI using data
from previous sessions. This enabled immediate control on subsequent
days, even those that occurred months apart, without requiring extensive
recalibration. These findings establish the feasibility of ultrasonic BMIs,
paving the way for a new class of less-invasive (epidural) interfaces that
generalize across extended time periods and promise to restore function to
people with neurological impairments.
Brain–machine interfaces (BMIs) translate complex brain signals into computer commands and are a promising method to restore the capabilities of human patients with paralysis (ref. 1). Numerous methods have been used to record brain signals for these BMIs, including intracortical multielectrode arrays (MEAs), electrocorticography (ECoG), functional near-infrared spectroscopy (fNIRS), electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) (Extended Data Fig. 1). These methods have various trade-offs between performance, invasiveness, spatial coverage, spatiotemporal resolution, portability and decoder stability across sessions (Supplementary Table 1). For example, intracortical MEAs have been used to decode up to 62 words per minute (ref. 2) and control a robotic arm (ref. 3), but each array can only sample from a small area of cortex (∼4 × ∼4 mm) located on gyral crowns. Conversely, fMRI is noninvasive and samples from the entire brain; however, fMRI-based BMIs have only been demonstrated to decode approximately one character per minute (ref. 4) or control up to four movement directions (ref. 5).
Received: 17 January 2023
Accepted: 16 October 2023
Published online: xx xx xxxx
1 Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA. 2 David Geffen School of Medicine at UCLA, Los Angeles, CA, USA. 3 Physics for Medicine Paris, INSERM, CNRS, ESPCI Paris, PSL Research University, Paris, France. 4 INSERM Technology Research Accelerator in Biomedical Ultrasound, Paris, France. 5 Iconeus, Paris, France. 6 T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA. 7 Department of Bioengineering, University of California, Riverside, Riverside, CA, USA. 8 Department of Neurological Surgery, Keck School of Medicine of USC, Los Angeles, CA, USA. 9 USC Neurorestoration Center, Keck School of Medicine of USC, Los Angeles, CA, USA. 10 Rancho Los Amigos National Rehabilitation Center, Downey, CA, USA. 11 Division of Chemistry & Chemical Engineering, California Institute of Technology, Pasadena, CA, USA. 12 Andrew and Peggy Cherng Department of Medical Engineering, California Institute of Technology, Pasadena, CA, USA. 13 Howard Hughes Medical Institute, Pasadena, CA, USA. 14 These authors contributed equally: Whitney S. Griggs, Sumner L. Norman. e-mail: wsgriggs@gmail.com; sumner.norman@gmail.com
Online decoding of two eye-movement directions
To demonstrate feasibility of an fUS-BMI, we first performed online, closed-loop decoding of two movement directions (Fig. 2). Each monkey initially performed 100 successful memory-guided saccades to the cued left or right target (Fig. 1b) while we streamed fUS images from the left PPC. After 100 trials, we switched to closed-loop decoding where the monkey now controlled the task direction using his movement intention, that is, the brain activity detected by the fUS-BMI in the last three fUS images of the memory period (Fig. 1c,e). At the conclusion of each closed-loop decoding trial, the monkey received visual feedback about the fUS-BMI prediction. We added the fUS images from each successful trial to our training set and retrained the decoder after each trial (Fig. 1c). We assessed the decoder's performance throughout the training (20–100 trials; blue line) and closed-loop decoding (101+ trials; green line) using cumulative percent correct (Fig. 2a). During the initial training period (20–100 trials), the decoder's prediction was not visible to the monkey; that is, no green dot was shown until the closed-loop decoding began after trial 100.
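The cumulative percent correct metric, together with the one-sided binomial test used to flag significant performance (Fig. 2a), can be sketched as follows. This is our own illustration assuming SciPy; the function names are not from the paper's code.

```python
# Sketch: cumulative percent correct with a one-sided binomial test
# against chance on every trial (our illustration, not the authors' code).
from scipy.stats import binomtest

def cumulative_accuracy_curve(outcomes, chance=0.5, alpha=0.05):
    """outcomes: booleans, True = correct prediction on that trial.
    Returns per-trial (cumulative accuracy, p value, significant?)."""
    curve = []
    n_correct = 0
    for n, correct in enumerate(outcomes, start=1):
        n_correct += int(correct)
        p = binomtest(n_correct, n, p=chance, alternative="greater").pvalue
        curve.append((n_correct / n, p, p < alpha))
    return curve

# Example: a decoder correct on 3 of every 4 two-target trials
outcomes = [True, True, True, False] * 20
acc, p, significant = cumulative_accuracy_curve(outcomes)[-1]
print(f"final accuracy {acc:.0%}, p = {p:.2g}, significant = {significant}")
```

For a two-target task, `chance=0.5`; for the eight-target task described later, chance would be 0.125.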
In the second closed-loop two-direction session, the decoder reached significant accuracy (P < 0.05; one-sided binomial test) after 55 training trials and improved in accuracy until peaking at 82% accuracy at trial 114 (Fig. 2a). The decoder predicted both directions well above chance level but displayed better performance for rightward movements (Fig. 2b). To understand which brain regions were most important for the decoder performance, we performed a searchlight analysis with a 200 μm (2 voxel) radius (Fig. 2c). Dorsal LIP and area 7a contained the voxels most informative for decoding intended movement direction.
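A searchlight analysis of this kind can be sketched as follows. This is a simplified illustration assuming scikit-learn; the classifier and cross-validation used inside the paper's searchlight may differ.

```python
# Searchlight sketch: for every voxel, decode from the small disc of
# voxels around it (2-voxel radius) and record cross-validated accuracy.
# Assumes data shaped (trials, height, width) and one label per trial.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def searchlight_map(data, labels, radius=2):
    n_trials, h, w = data.shape
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disc = (yy**2 + xx**2) <= radius**2          # disc-shaped mask
    acc = np.full((h, w), np.nan)                # border voxels stay NaN
    for r in range(radius, h - radius):
        for c in range(radius, w - radius):
            patch = data[:, r - radius:r + radius + 1,
                            c - radius:c + radius + 1]
            X = patch[:, disc]                   # trials x disc voxels
            clf = LinearDiscriminantAnalysis()
            acc[r, c] = cross_val_score(clf, X, labels, cv=5).mean()
    return acc

# Toy example: 40 trials of 20 x 20 "images", two classes, with one
# informative voxel at (10, 10)
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 20)
data = rng.normal(size=(40, 20, 20))
data[labels == 1, 10, 10] += 3.0
acc = searchlight_map(data, labels)
print(float(acc[10, 10]))
```

The resulting accuracy map can then be thresholded (for example, keeping the top 10% of voxels, as in Fig. 2c,f) to highlight the most informative regions.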
An ideal BMI needs very little training data and no retraining between sessions. BMIs using intracortical electrodes typically require recalibration for each subsequent session due to nonstationarities across days, including from difficulty recording the same neurons across multiple days (refs. 15,16). Thanks to its wide field of view, fUS neuroimaging can image from the same brain regions over time and therefore may be an ideal technique for stable decoding across many sessions. The neuron population identification problem is also present with ultrasound imaging, including from brain shifts relative to the ultrasound transducer between sessions. To test our hypothesis that we would have stable decoding across many sessions, we pretrained the fUS-BMI using a previous session's data and then tested the decoder in an online, closed-loop experiment. To perform this pretraining, we first aligned the data from the previous session's imaging plane to the current session's imaging plane (Extended Data Fig. 3). This addressed the neuron population identification problem by allowing us to track the same neurovascular populations across different sessions. We used semiautomated rigid-body alignment to find the transform between the previous and current imaging plane, applied this two-dimensional (2D) image transform to each frame of the previous session and saved the aligned data. This semiautomated alignment process took <1 min. After we performed this image alignment, the fUS-BMI automatically loaded the aligned dataset and pretrained the initial decoder. As in the models without pretraining, we continued to use real-time retraining to incorporate the most recent successful trials. This adaptive retraining of the BMI after each successful trial allowed the BMI to incorporate session-specific changes (behavioral, anatomical, neurovascular and so on) and may allow the BMI to achieve better performance. The fUS-BMI reached significant performance substantially faster when we used pretraining (Fig. 2d). The fUS-BMI achieved significant accuracy at trial 7, approximately 15 min faster than the example session without pretraining.
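The alignment step can be sketched as follows. This is a simplified, translation-only illustration assuming NumPy/SciPy; the actual pipeline estimated a semiautomated rigid-body transform (shift plus rotation), which this sketch does not cover.

```python
# Simplified stand-in for the pretraining alignment: estimate a 2D shift
# between the previous and current sessions' mean vascular images via
# phase correlation, then apply that one transform to every frame of the
# previous session (our illustration; the real pipeline was rigid-body).
import numpy as np
from scipy.ndimage import shift as nd_shift

def estimate_shift(ref, moving):
    """Integer (dy, dx) to apply to `moving` to align it to `ref`."""
    f = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real  # phase correlation
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:          # wrap circular peak to signed shift
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

def align_session(prev_frames, current_mean):
    """Apply one shared transform to all frames of the previous session."""
    dy, dx = estimate_shift(current_mean, prev_frames.mean(axis=0))
    return np.stack([nd_shift(f, (dy, dx), order=1) for f in prev_frames])

# Toy check: a session displaced by (3, -2) voxels is realigned
rng = np.random.default_rng(1)
ref = rng.normal(size=(64, 64))
moved = np.roll(ref, (3, -2), axis=(0, 1))
print(estimate_shift(ref, moved))
```

The aligned frames can then be fed directly into decoder pretraining, so the decoder starts each session already fitted to the same neurovascular populations.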
To quantify the benefits of pretraining upon fUS-BMI training time and performance, we compared fUS-BMI performance across all sessions when (1) using only data from the current session, versus (2) pretraining with data from a previous session (Fig. 3). For all real-time sessions that used pretraining, we also created a post hoc
Functional ultrasound (fUS) imaging is a recently developed technology that is poised to enable a new class of epidural BMIs that can record from large regions of the brain and decode spatiotemporally precise patterns of activity. fUS neuroimaging uses ultrafast pulse-echo imaging to simultaneously sense changes in cerebral blood volume (CBV) from multiple brain regions (ref. 6). These CBV changes are well correlated with single-neuron activity and local field potentials (refs. 7,8). It has a high sensitivity to slow blood flow (∼1 mm s−1 velocity) and balances good spatiotemporal resolution (100 μm; <1 s) with a large and deep field of view (∼2 cm; Extended Data Fig. 1). fUS can successfully image through the dura and multiple millimeters of granulation tissue (ref. 9) (Extended Data Fig. 2a). However, fUS imaging currently requires either a cranial opening or an acoustic window (ref. 10) in large animals because the ultrasound signal is severely attenuated by bone (ref. 11).
Previously, we demonstrated that fUS neuroimaging possesses the sensitivity and field of view to decode movement intention on a single-trial basis simultaneously for two directions (left/right), two effectors (hand/eye) and task state (go/no-go) (ref. 9). However, we performed this post hoc (offline) analysis using prerecorded data. In this study, we demonstrate an online, closed-loop functional ultrasound brain–machine interface (fUS-BMI). In addition, we present key advances that build on previous fUS neuroimaging studies, including decoding eight movement directions and designing decoders stable across >40 days.
Results
We used a miniaturized 15.6 MHz ultrasound transducer paired with a real-time ultrafast ultrasound acquisition system to stream 2 Hz fUS images from two monkeys as they performed memory-guided eye movements (Fig. 1 and Extended Data Fig. 2a). Before the experiments, we performed a craniectomy over the left posterior parietal cortex (PPC) in both monkeys. During each experiment session (n = 24 sessions; Extended Data Table 1), we positioned the transducer surface normal to the brain above the dura mater (Fig. 1a and Extended Data Fig. 2b) and recorded from coronal planes of the left PPC, a sensorimotor association area important for goal-directed movements and attention (ref. 12). This technique achieved a large field of view (12.8-mm width, 16-mm depth, ∼400-μm plane thickness) while maintaining high spatial resolution (100 μm × 100 μm in-plane). This allowed us to stream high-resolution hemodynamic changes across multiple PPC regions simultaneously, including the lateral (LIP) and medial (MIP) intraparietal cortex (Fig. 1a). The LIP and MIP are involved in planning eye and reach movements, respectively (refs. 9,13,14), making the PPC a good region from which to record effector-specific movement signals.
We streamed real-time fUS images into a BMI decoder that used principal component analysis (PCA) and linear discriminant analysis (LDA) to predict planned movement directions. The BMI output then directly controlled the behavioral task (Fig. 1c). To build the initial training set for the decoder, each monkey initially performed instructed eye movements to a randomized set of two or eight peripheral targets. We used the fUS activity during the delay period preceding successful eye movements to train the decoder. During this initial training phase, successful trials were defined as the monkey performing the eye movement to the correct target and receiving the liquid reward. After 100 successful training trials, we switched to the closed-loop BMI mode where the intended movement came from the fUS-BMI (Fig. 1e). During this closed-loop BMI mode, the monkey continued to fixate on the center cue until reward delivery. During the interval between a successful trial and the subsequent trial, we retrained the decoder, continuously updating the decoder model as each monkey used the fUS-BMI. During the closed-loop fUS-BMI mode, successful trials were defined as a correct prediction plus the monkey maintaining fixation on the center cue until reward delivery.
[Fig. 1 graphic: a, coronal fields of view for monkeys L and P with imaging planes 1–3; b, memory-guided saccade task schematic with epoch timings; c, real-time fUS-BMI pipeline; d, multicoder prediction schematic; e, memory-guided BMI task schematic.]
Fig. 1 | Anatomical recording planes and behavioral tasks. a, Coronal fUS imaging planes used for monkeys P and L. The approximate fUS field of view is superimposed on a coronal MRI slice. The recording chambers were placed surface normal to the skull above a craniectomy (black square). The ultrasound transducer was positioned to acquire a consistent coronal plane across different sessions (red line). The vascular maps show the mean power Doppler image from a single imaging session. Different brain regions are labeled in white text, and the labeled arrows point to brain sulci. D, dorsal; V, ventral; L, left; R, right; A, anterior; P, posterior; ls, lateral sulcus; ips, intraparietal sulcus; cis, cingulate sulcus. Anatomical labels are based upon ref. 63. b, Memory-guided saccade task. * ±1,000 ms of jitter for fixation and memory periods; ±500 ms of jitter for hold period. The peripheral cue was chosen from two or eight possible target locations depending on the specific experiment. Red square, monkey's eye position (not visible to the monkey). NHP, nonhuman primate, that is, monkey. c, fUS-BMI algorithm. Real-time 2-Hz functional images were streamed to a linear decoder that controlled the behavioral task. The decoder used the last three fUS images of the memory period to make its prediction. If the prediction was correct, the data from that prediction were added to the training set. The decoder was retrained after every successful trial. The training set consisted of trials from the current session and/or from a previous fUS-BMI session. d, Multicoder algorithm. For predicting eight movement directions, the vertical component (blue) and the horizontal component (red) were separately predicted and then combined to form each fUS-BMI prediction (purple). e, Memory-guided BMI task. The BMI task is the same as in b except that the movement period is controlled by the brain activity (via fUS-BMI) rather than eye movements. After 100 successful eye movement trials, the fUS-BMI controlled the movement prediction (closed-loop control). During the closed-loop mode, the monkey had to maintain fixation on the center fixation cue until reward delivery. Red square, monkey's eye position (not visible to the monkey); green square, BMI-controlled cursor (visible to the monkey).
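The multicoder in Fig. 1d can be sketched as a simple combination rule: one classifier predicts the vertical component, another the horizontal, and the pair maps to one of the eight peripheral targets. The encodings and the fallback for an invalid middle/middle prediction below are our own illustrative choices; the paper's caption does not specify them.

```python
# Multicoder combination sketch (our naming and encodings): vertical in
# {-1 down, 0 middle, 1 up}, horizontal in {-1 left, 0 middle, 1 right}.
# The eight valid pairs map onto the eight peripheral targets.
DIRECTIONS = {
    (1, -1): 'up-left',    (1, 0): 'up',     (1, 1): 'up-right',
    (0, -1): 'left',                         (0, 1): 'right',
    (-1, -1): 'down-left', (-1, 0): 'down',  (-1, 1): 'down-right',
}

def multicoder_combine(v_pred, h_pred):
    """Combine component predictions into one target. (0, 0) has no
    target; the fallback here is a hypothetical choice for illustration."""
    if (v_pred, h_pred) == (0, 0):
        return 'right'      # hypothetical fallback, not from the paper
    return DIRECTIONS[(v_pred, h_pred)]

print(multicoder_combine(1, 1))   # 'up-right'
```

The appeal of this decomposition is that each component classifier solves a three-class problem rather than one classifier solving an eight-class problem.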
(offline) simulation of the fUS-BMI results without using pretraining. For these simulated sessions without pretraining, the recorded data passed through the same classification algorithm used for the real-time fUS-BMI but did not use any data from a previous session.

Using only data from the current session. The cumulative decoding accuracy reached significance (P < 0.05; one-sided binomial test) at the end of each online, closed-loop recording session (2 of 2 sessions, monkey P; 1 of 1 session, monkey L) and most offline, simulated recording
[Fig. 2 graphic: a,d, cumulative percent correct versus trial number with chance envelope (α = 0.05) and last nonsignificant trial marked (a, recorded on day 8, significant from trial 55; d, pretrained on the day 8 session, recorded on day 20, significant from trial 7); b,e, confusion matrices; c,f, most informative voxels from searchlight analysis.]
Fig. 2 | Example sessions decoding two saccade directions (monkey P). a, Cumulative decoding accuracy as a function of trial number. Blue represents fUS-BMI training where the monkey controlled the task using overt eye movements. The BMI performance shown in blue was generated post hoc with no impact on the real-time behavior. Green represents trials under fUS-BMI control where the monkey maintained fixation on the center cue and the movement direction was controlled by the fUS-BMI. Gray chance envelope, 90% binomial distribution; red line, last nonsignificant trial. b, Confusion matrix of final decoding accuracy across the entire session represented as a percentage (rows add to 100%). c, Searchlight analysis represents the 10% of voxels with the highest decoding accuracy (threshold is q ≤ 1.66 × 10−6). White circle, 200-μm searchlight radius. Scale bar, 1 mm. d–f, Same format as in a–c. fUS-BMI was pretrained using data collected from a previous session. d, Cumulative decoding accuracy as a function of trial number. e, Confusion matrix of final decoding accuracy. f, Searchlight analysis represents the 10% of voxels with the highest decoding accuracy (threshold is q ≤ 2.07 × 10−13). White circle, 200-μm searchlight radius. Scale bar, 1 mm.
[Fig. 3 graphic: cumulative percent correct versus trial number for monkeys P and L across sessions (days 1–48, relative to the first closed-loop fUS-BMI experiment), comparing online closed-loop and offline simulated fUS-BMI, with pretraining annotations (pretrained on the day 8 and day 21 sessions) and the last nonsignificant trial marked per session.]
Fig. 3 | Performance across sessions for decoding two saccade directions. Cumulative decoder accuracy during each session for monkeys P and L. Solid lines are real-time results and fine-dashed lines are simulated sessions from post hoc analysis of real-time fUS imaging data. Vertical marks above each plot represent the last nonsignificant trial for each session. Day number is relative to the first fUS-BMI experiment. Coarse-dashed horizontal black line represents chance performance.