
POLAR: Preference Optimization and Learning Algorithms for Robotics

Tucker, Maegan and Li, Kejun and Yue, Yisong and Ames, Aaron D. (2022) POLAR: Preference Optimization and Learning Algorithms for Robotics. (Unpublished)

Full text is not posted in this repository. Consult Related URLs below.

Use this Persistent URL to link to this item:


Parameter tuning for robotic systems is a time-consuming and challenging task that often relies on the domain expertise of the human operator. Moreover, existing learning methods are not well suited to parameter tuning for many reasons, including: the absence of a clear numerical metric for "good robotic behavior"; limited data due to the reliance on real-world experiments; and the large search space of parameter combinations. In this work, we present POLAR, an open-source MATLAB Preference Optimization and Learning Algorithms for Robotics toolbox for systematically exploring high-dimensional parameter spaces using human-in-the-loop preference-based learning. The aim of this toolbox is to systematically and efficiently accomplish one of two objectives: 1) to optimize robotic behaviors for human operator preference; 2) to learn the operator's underlying preference landscape in order to better understand the relationship between adjustable parameters and operator preference. The POLAR toolbox achieves these objectives using only subjective feedback mechanisms (pairwise preferences, coactive feedback, and ordinal labels) to infer a Bayesian posterior over the underlying reward function dictating the user's preferences. We demonstrate the performance of the toolbox in simulation and present various applications of human-in-the-loop preference-based learning.
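To make the core idea concrete, the following is a minimal, hypothetical Python sketch (not the POLAR toolbox's MATLAB API) of inferring a latent reward over a handful of candidate parameter settings from pairwise preferences alone, using a Gaussian prior and a logistic (Bradley-Terry-style) preference likelihood. All names and the gradient-ascent MAP procedure here are illustrative assumptions; POLAR itself maintains a full Bayesian posterior and supports coactive feedback and ordinal labels in addition to pairwise preferences.

```python
import numpy as np

def map_reward(n_actions, prefs, sigma=1.0, lr=0.1, steps=2000):
    """Hypothetical illustration: MAP estimate of a latent reward vector.

    n_actions : number of candidate parameter settings
    prefs     : list of (winner, loser) index pairs from pairwise queries
    sigma     : std. dev. of the zero-mean Gaussian prior on rewards
    """
    r = np.zeros(n_actions)
    for _ in range(steps):
        grad = -r / sigma**2                        # gradient of Gaussian log-prior
        for w, l in prefs:
            # P(w preferred over l) = sigmoid(r[w] - r[l]);
            # p below is the gradient of its log w.r.t. r[w].
            p = 1.0 / (1.0 + np.exp(r[w] - r[l]))   # = 1 - sigmoid(r[w] - r[l])
            grad[w] += p
            grad[l] -= p
        r += lr * grad                              # simple gradient ascent to the MAP
    return r

# Example: setting 0 preferred over 1 twice, and 1 preferred over 2 once.
r = map_reward(3, [(0, 1), (0, 1), (1, 2)])
print(np.argsort(-r))  # ranking of settings, best first
```

In this sketch, the recovered ranking is consistent with the stated preferences even though no numerical score was ever provided, which is the essential mechanism that lets subjective feedback drive parameter optimization.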

Item Type:Report or Paper (Discussion Paper)
Related URLs:
URL Type: Paper
ORCID:
Tucker, Maegan: 0000-0001-7363-6809
Li, Kejun: 0000-0002-0823-9839
Yue, Yisong: 0000-0001-9127-1989
Ames, Aaron D.: 0000-0003-0848-3177
Record Number:CaltechAUTHORS:20221219-234115665
Persistent URL:
Usage Policy:No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code:118472
Deposited By: George Porter
Deposited On:21 Dec 2022 19:27
Last Modified:02 Jun 2023 01:28
