Decoding motor imagery from the posterior parietal cortex of a tetraplegic human

Tyson Aflalo¹*, Spencer Kellis¹*, Christian Klaes¹, Brian Lee², Ying Shi¹, Kelsie Pejsa¹, Kathleen Shanfield³, Stephanie Hayes-Jackson³, Mindy Aisen³, Christi Heck², Charles Liu², and Richard A. Andersen¹†

¹Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125, USA
²USC Neurorestoration Center and the Departments of Neurosurgery and Neurology, University of Southern California, Los Angeles, CA 90033, USA
³Rancho Los Amigos National Rehabilitation Center, Downey, CA 90242, USA

*These authors contributed equally to this work.
†Corresponding author. Email: andersen@vis.caltech.edu

Supplementary materials: www.sciencemag.org/content/348/6237/906/suppl/DC1
Abstract
Nonhuman primate and human studies have suggested that populations of neurons in the posterior
parietal cortex (PPC) may represent high-level aspects of action planning that can be used to
control external devices as part of a brain-machine interface. However, there is no direct neuron-
recording evidence that human PPC is involved in action planning, and the suitability of these
signals for neuroprosthetic control has not been tested. We recorded neural population activity
with arrays of microelectrodes implanted in the PPC of a tetraplegic subject. Motor imagery could
be decoded from these neural populations, including imagined goals, trajectories, and types of
movement. These findings indicate that the PPC of humans represents high-level, cognitive aspects
of action and that the PPC can be a rich source for cognitive control signals for neural prosthetics
that assist paralyzed patients.
The posterior parietal cortex (PPC) in humans and nonhuman primates (NHPs) is situated
between sensory and motor cortices and is involved in high-level aspects of motor behavior
(1, 2). Lesions to this region do not produce motor weakness or primary sensory deficits but
rather more complex sensorimotor losses, including deficits in the rehearsal of movements
(i.e., motor imagery) (3–7). The activity of PPC neurons recorded in NHPs reflects the
movement plans of the animals, and they can generate these signals to control cursors on
computer screens without making any movements (8–10). It is tempting to speculate that the
animals have learned to use motor imagery for this “brain control” task, but it is of course
not possible to ask the animals directly. These brain control results are promising for neural
prosthetics because imagined movements would be a versatile and intuitive method for
controlling external devices (11). We find that motor imagery recorded from populations of
human PPC neurons can be used to control the trajectories and goals of a robotic limb or
computer cursor. Also, the activity is often specific for the imagined effector (right or left
limb), which holds promise for bimanual control of robotic limbs.
A 32-year-old tetraplegic subject, EGS, was implanted with two microelectrode arrays on 17
April 2013. He had a complete lesion of the spinal cord at cervical level C3-4, sustained 10
years earlier, with paralysis of all limbs. Using functional magnetic resonance imaging
(fMRI), we asked EGS to imagine reaching and grasping. These imagined movements
activated separate regions of the left hemisphere of the PPC (fig. S1). A reach area on the
superior parietal lobule (putative human area 5d) and a grasp area at the junction of the
intraparietal and postcentral sulci (putative human anterior intraparietal area, AIP) were
chosen for implantation of 96-channel electrode arrays. Recordings were made over more
than 21 months with no adverse events related to the implanted devices. Spike activity was
recorded and used to control external devices, including a 17-degree-of-freedom robotic
limb and a cursor in two dimensions (2D) or 3D on a computer screen.
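To make the readout concrete, below is a minimal sketch of a generic linear population decoder mapping binned spike counts to a 2D cursor velocity. It is not the authors' actual decoder, which this excerpt does not specify: the bin width, weights, and baseline rates are all placeholder assumptions (the 96-unit count matches the arrays described above).

```python
import numpy as np

# Minimal sketch of a linear population decoder for 2D cursor velocity.
# Placeholders: 50-ms bins, random weights standing in for weights that
# would be fit during decoder calibration. 96 units matches one array.
N_UNITS, BIN_S = 96, 0.05

rng = np.random.default_rng(0)
W = rng.normal(scale=0.05, size=(2, N_UNITS))  # hypothetical readout weights
baseline_hz = rng.uniform(5.0, 20.0, N_UNITS)  # hypothetical baseline rates

def decode_velocity(spike_counts):
    """Map one bin of per-unit spike counts to a 2D velocity command."""
    rates_hz = spike_counts / BIN_S
    return W @ (rates_hz - baseline_hz)

# Closed-loop use: integrate the decoded velocity to move the cursor.
position = np.zeros(2)
for _ in range(40):                            # 2 s of simulated control
    counts = rng.poisson(baseline_hz * BIN_S)  # stand-in for recorded spikes
    position += decode_velocity(counts) * BIN_S
print("cursor position after 2 s:", position)
```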
Recordings began 16 days after implantation. The subject could control the activity of single
cells by imagining particular actions. An example of volitional control is shown in
movie S1. The cell is activated when EGS imagines moving his hand to his mouth but not
for movements with similar gross characteristics such as imagined movements of the hand to
the chin or ear. Another example (movie S2) shows EGS increasing the activity of a different
cell by imagining rotation of his shoulder, and decreasing activity by imagining touching his
nose. In many cases, the subject could exert volitional control of single neurons by
imagining simple movements of the upper arm, elbow, wrist, or hand.
We found that EGS's neurons coded both the goal and imagined trajectory of movements. To
characterize these forms of spatial tuning, we used a masked memory reach paradigm
(MMR, Fig. 1A). In the task, EGS imagined a continuous reaching movement to a spatially
cued target after a delay period during which the goal was removed from the screen. On
some trials, motion of the cursor was blocked from view by using a mask. This allowed us to
characterize spatial tuning for goals and trajectories (Fig. 1B) while controlling for visual
confounds.
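One way such a per-unit characterization might be carried out is sketched below, assuming per-trial arrays of delay-period firing rates, cued goal positions, and movement-period rates and cursor velocities. The variable names and the F-test procedure are this sketch's assumptions, not the paper's stated analysis.

```python
import numpy as np
from scipy import stats

def regression_f_test(X, y):
    """Overall F-test for a linear regression of y on X (plus intercept)."""
    n, k = X.shape
    X1 = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = np.sum((y - X1 @ beta) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    f = ((tss - rss) / k) / (rss / (n - k - 1))
    return stats.f.sf(f, k, n - k - 1)  # p-value

def classify_tuning(delay_rates, goals, move_rates, velocities, alpha=0.05):
    """Label one unit as goal-tuned, trajectory-tuned, both, or untuned.
    delay_rates: (n_trials,) rates while the goal is held in memory;
    goals: (n_trials, 2) cued target positions;
    move_rates / velocities: per-bin rates and cursor velocities during
    the imagined reach. All names are hypothetical."""
    goal_tuned = regression_f_test(goals, delay_rates) < alpha
    traj_tuned = regression_f_test(velocities, move_rates) < alpha
    if goal_tuned and traj_tuned:
        return "both"
    return "goal" if goal_tuned else ("trajectory" if traj_tuned else "untuned")
```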
The number of recorded units was relatively constant through time, but units would appear
and disappear on individual channels over the course of hours, days, or weeks (fig. S2). This
allowed us to sample the functional properties of a large population of PPC neurons. From
124 spatially tuned units recorded across 7 days with the MMR task, 19% coded the goal of
movement exclusively, 54% coded the trajectory of the movement exclusively, and 27%
coded both goal and trajectory (Fig. 2A). Goal-tuned units supported accurate classification
of spatial targets (>90% classification with as few as 30 units), representing the first known
instance of decoding high-level motor intentions from human neuronal populations (Fig.
2B). The goal encoding was rapid, with significant classification (shuffle test) occurring
within 190 ms of cue presentation and remaining high during the delay period in which there
was no visual goal present (Fig. 2C). Similarly, this population of neurons enabled
reconstructions of the moment-to-moment velocity of the effector (Fig. 2D), with coefficients
of determination (R²) comparable to those reported for offline reconstructions of velocity in
human M1 studies [e.g., (12, 13); see also fig. S3]. In other tasks, trajectory-tuned units
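For the classification and latency results above, a shuffle test of the kind mentioned might look like the sketch below, assuming trial-by-trial spike counts in a time window after cue onset. The classifier (LDA), fold count, and shuffle count are this sketch's choices, not specified by the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def shuffle_test(X, y, n_shuffles=200, seed=0):
    """Cross-validated target classification with a permutation null.
    X: (n_trials, n_units) spike counts in one post-cue time window;
    y: (n_trials,) target labels. Returns (accuracy, p-value)."""
    rng = np.random.default_rng(seed)
    clf = LinearDiscriminantAnalysis()
    accuracy = cross_val_score(clf, X, y, cv=5).mean()
    null = np.array([
        cross_val_score(clf, X, rng.permutation(y), cv=5).mean()
        for _ in range(n_shuffles)
    ])
    p_value = (np.sum(null >= accuracy) + 1) / (n_shuffles + 1)
    return accuracy, p_value

# Sliding this test over successive post-cue windows yields the earliest
# window with significant classification (190 ms in the results above);
# running it on growing subsets of units traces accuracy versus unit count.
```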