Gao, Hongbo and Shi, Guanya and Xie, Guotao and Cheng, Bo (2018) Car-following method based on inverse reinforcement learning for autonomous vehicle decision-making. International Journal of Advanced Robotic Systems, 15 (6). pp. 1-11. ISSN 1729-8814. doi:10.1177/1729881418817162. https://resolver.caltech.edu/CaltechAUTHORS:20190102-155140213
PDF (Published Version), Creative Commons Attribution, 974kB
Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20190102-155140213
Abstract
Although there have been many achievements in the field of automatic driving, some problems remain to be solved. One of them is the difficulty of designing a car-following decision-making system for complex traffic conditions. In recent years, reinforcement learning has shown potential for solving sequential decision optimization problems. In this article, we establish the reward function R for each driver's data based on the inverse reinforcement learning algorithm, visualize it, and then analyze driving characteristics and car-following strategies. Finally, we demonstrate the efficiency of the proposed method by simulation in a highway environment.
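For readers unfamiliar with inverse reinforcement learning, the sketch below illustrates the general idea of recovering a linear reward function from demonstrated car-following data by matching feature expectations. It is a minimal illustration only, not the authors' implementation; the feature choices (gap, relative speed, ego speed) and the gradient-style weight update are assumptions.

```python
import numpy as np

# Illustrative sketch: a linear reward R(s) = w . phi(s) is recovered by
# pushing the learned weights toward the feature expectations of demonstrated
# driver trajectories (feature-matching IRL). Feature definitions and the
# update rule are assumptions, not the paper's exact algorithm.

def features(gap, rel_speed, ego_speed):
    """Feature vector phi(s) for one car-following state (assumed features)."""
    return np.array([gap, rel_speed, ego_speed, gap * rel_speed])

def expert_feature_expectation(trajectories, gamma=0.99):
    """Discounted average feature counts over demonstrated driver trajectories."""
    mu = np.zeros(4)
    for traj in trajectories:
        for t, (gap, rel_speed, ego_speed) in enumerate(traj):
            mu += (gamma ** t) * features(gap, rel_speed, ego_speed)
    return mu / len(trajectories)

def irl_weight_update(w, mu_expert, mu_policy, lr=0.01):
    """One gradient step moving the reward weights toward the expert features."""
    return w + lr * (mu_expert - mu_policy)

if __name__ == "__main__":
    # Toy demonstration data: (gap [m], relative speed [m/s], ego speed [m/s]).
    demo = [[(30.0, -1.0, 25.0), (28.5, -0.5, 24.8), (28.0, 0.0, 24.5)]]
    mu_E = expert_feature_expectation(demo)
    w = np.zeros(4)
    # mu_policy would come from rolling out the current policy under w;
    # a zero placeholder is used here to keep the sketch self-contained.
    w = irl_weight_update(w, mu_E, np.zeros(4))
    print("reward weights:", w)
```

In practice the update would alternate with re-solving the forward reinforcement learning problem under the current reward, so that the learned policy's feature expectations converge toward those of the demonstrating driver.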
Item Type: Article
Additional Information: © The Author(s) 2018. This article is distributed under the terms of the Creative Commons Attribution 4.0 License (http://www.creativecommons.org/licenses/by/4.0/), which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage). Received: May 09, 2018; Accepted: October 11, 2018. Article first published online: December 6, 2018; Issue published: November 1, 2018. The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Junior Fellowships for Advanced Innovation Think-tank Program of the China Association for Science and Technology under grant no. DXB-ZKQN-2017-035, projects funded by the China Postdoctoral Science Foundation under grant no. 2017M620765 and the China Postdoctoral Science Foundation Special Foundation under grant no. 2018T110095, and the National Key Research and Development Program of China under grant no. 2017YFB0102603. The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funders: China Association for Science and Technology; China Postdoctoral Science Foundation; National Key Research and Development Program of China
Subject Keywords: Car-following, inverse reinforcement learning (IRL), autonomous vehicle, decision-making, automatic driving
Issue or Number: 6
DOI: 10.1177/1729881418817162
Record Number: CaltechAUTHORS:20190102-155140213
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20190102-155140213
Official Citation: Gao, Hongbo, et al. “Car-Following Method Based on Inverse Reinforcement Learning for Autonomous Vehicle Decision-Making.” International Journal of Advanced Robotic Systems, Nov. 2018, doi:10.1177/1729881418817162.
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 92021
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 03 Jan 2019 19:23
Last Modified: 16 Nov 2021 03:46