A Caltech Library Service

AdvDO: Realistic Adversarial Attacks for Trajectory Prediction

Cao, Yulong and Xiao, Chaowei and Anandkumar, Anima and Xu, Danfei and Pavone, Marco (2022) AdvDO: Realistic Adversarial Attacks for Trajectory Prediction. . (Unpublished)

PDF - Submitted Version
See Usage Policy.


Trajectory prediction is essential for autonomous vehicles (AVs) to plan correct and safe driving behaviors. While many prior works aim to achieve higher prediction accuracy, few study the adversarial robustness of their methods. To bridge this gap, we propose to study the adversarial robustness of data-driven trajectory prediction systems. We devise an optimization-based adversarial attack framework that leverages a carefully designed differentiable dynamic model to generate realistic adversarial trajectories. Empirically, we benchmark the adversarial robustness of state-of-the-art prediction models and show that our attack increases the prediction error for both general metrics and planning-aware metrics by more than 50% and 37%, respectively. We also show that our attack can lead an AV to drive off the road or collide with other vehicles in simulation. Finally, we demonstrate how to mitigate the adversarial attacks using an adversarial training scheme.
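The abstract describes an optimization-based attack that perturbs an agent's trajectory while a differentiable dynamics model keeps the result physically plausible. As a rough illustration only (not the paper's actual framework or code), the sketch below perturbs the control inputs of a simple unicycle model and follows a finite-difference gradient that maximizes the error of a stand-in constant-velocity predictor; every function name and parameter here is invented for this example.

```python
import numpy as np

def rollout(controls, state0, dt=0.1):
    """Integrate a unicycle model; controls[t] = (acceleration, yaw rate).

    Perturbing controls (rather than positions directly) keeps the
    resulting trajectory dynamically feasible.
    """
    x, y, v, th = state0
    traj = []
    for a, w in controls:
        v = v + a * dt
        th = th + w * dt
        x = x + v * np.cos(th) * dt
        y = y + v * np.sin(th) * dt
        traj.append((x, y))
    return np.array(traj)

def cv_predict(hist, horizon=12, dt=0.1):
    """Stand-in predictor: constant-velocity extrapolation of the history."""
    vel = (hist[-1] - hist[-2]) / dt
    return hist[-1] + vel * dt * np.arange(1, horizon + 1)[:, None]

def attack(controls, state0, steps=20, lr=0.5, bound=0.3, eps=1e-4):
    """Gradient-ascent attack on the control inputs (a toy sketch).

    Maximizes the predictor's deviation from its clean-trajectory output,
    estimating gradients by finite differences and clipping the
    perturbation to a plausibility bound.
    """
    clean_pred = cv_predict(rollout(controls, state0))
    delta = np.zeros_like(controls)
    for _ in range(steps):
        grad = np.zeros_like(delta)
        base = np.linalg.norm(
            cv_predict(rollout(controls + delta, state0)) - clean_pred)
        for idx in np.ndindex(delta.shape):
            d = delta.copy()
            d[idx] += eps
            loss = np.linalg.norm(
                cv_predict(rollout(controls + d, state0)) - clean_pred)
            grad[idx] = (loss - base) / eps
        delta = np.clip(delta + lr * grad, -bound, bound)
    return controls + delta
```

The actual framework optimizes over real prediction models and uses analytic gradients through the dynamics; the finite-difference loop above only conveys the overall shape of the optimization.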

Item Type: Report or Paper (Discussion Paper)
Related URLs: Paper
ORCID:
Cao, Yulong: 0000-0003-3007-2550
Xiao, Chaowei: 0000-0002-7043-4926
Anandkumar, Anima: 0000-0002-6974-6797
Xu, Danfei: 0000-0002-8744-3861
Pavone, Marco: 0000-0002-0206-4337
Record Number: CaltechAUTHORS:20221221-004655506
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 118546
Deposited By: George Porter
Deposited On: 22 Dec 2022 18:47
Last Modified: 02 Jun 2023 01:29
