Semantic Predictive Control for Explainable and Efficient Policy Learning

Xinlei Pan, Xiangyu Chen, Qizhi Cai, John Canny, Fisher Yu
ICRA 2019

Abstract

Visual anticipation of ego and object motion over short time horizons is a key feature of human-level performance in complex environments. We propose a driving policy learning framework that predicts feature representations of future visual inputs; the predictive model infers not only future events but also their semantics, which provide a visual explanation of policy decisions. Our Semantic Predictive Control (SPC) framework predicts future semantic segmentation and events by aggregating multi-scale feature maps. A guidance model assists action selection and enables efficient sampling-based optimization. Experiments in multiple simulation environments show that networks implementing SPC can outperform existing model-based reinforcement learning algorithms in data efficiency and total reward while providing clear explanations of the policy’s behavior.
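The abstract compresses the control loop into a few sentences. As a rough illustration, the sketch below shows a generic form of sampling-based action selection with a learned predictive model, in the spirit of SPC. It is a minimal PyTorch sketch, not the authors' implementation: model, cost_fn, and all shapes are hypothetical placeholders, and the actual SPC framework draws action samples with a learned guidance network rather than the uniform sampling shown here.

# Minimal sketch of SPC-style sampling-based control (hypothetical API,
# not the authors' code). A learned model predicts per-step semantic and
# event features for candidate action sequences; a cost function scores
# the rollouts; the first action of the cheapest sequence is executed.
import torch

def select_action(model, cost_fn, obs, num_samples=64, horizon=5, action_dim=2):
    # Sample candidate action sequences uniformly in [-1, 1]. SPC itself
    # would instead sample around the proposals of a guidance network.
    actions = torch.rand(num_samples, horizon, action_dim) * 2 - 1
    with torch.no_grad():
        # Hypothetical model signature: given the current observation and an
        # action sequence, predict future semantic maps and event
        # probabilities (e.g. collision, off-road) at each step.
        semantics, events = model(obs.expand(num_samples, *obs.shape), actions)
        costs = cost_fn(semantics, events)  # one scalar cost per sequence
    best = costs.argmin()
    return actions[best, 0]  # receding horizon: execute only the first action

The predicted semantic maps double as the visual explanation mentioned above: the chosen action is the one whose predicted future semantics incur the lowest cost.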

Paper

paper

Code

github.com/ucbdrive/spc

Citation

@inproceedings{pan2019semantic,
  title={Semantic predictive control for explainable and efficient policy learning},
  author={Pan, Xinlei and Chen, Xiangyu and Cai, Qizhi and Canny, John and Yu, Fisher},
  booktitle={2019 International Conference on Robotics and Automation (ICRA)},
  pages={3203--3209},
  year={2019},
  organization={IEEE}
}

Related


Instance-Aware Predictive Navigation in Multi-Agent Environments

ICRA 2021. A new visual model-based RL method that considers multiple hypotheses for future object movement.


End-to-End Urban Driving by Imitating a Reinforcement Learning Coach

ICCV 2021. We demonstrate that an RL coach (Roach) is a better supervisor for imitation learning agents.


Video Task Decathlon: Unifying Image and Video Tasks in Autonomous Driving

ICCV 2023. VTD is a promising new direction for unifying perception tasks in autonomous driving.


Deep Object-Centric Policies for Autonomous Driving

ICRA 2019. We show that object-centric models outperform object-agnostic methods in scenes with other vehicles and pedestrians.


End-to-end Learning of Driving Models from Large-scale Video Datasets

CVPR 2017 (Oral). We develop an end-to-end trainable architecture for learning to predict a distribution over future vehicle egomotion from instantaneous monocular camera observations and previous vehicle state.