Workshop on Automated Vehicle Safety: Verification, Validation, and Transparency

This half-day workshop takes place on Sunday, 27 October 2019, from 13:00 to 17:00.

Our 2nd ITSC workshop on AV safety seeks industry unity in solving the safety challenges of automated driving. We invite academics, researchers, industry professionals, government officials, and lawmakers to discuss, challenge, and develop a holistic AV safety model. We will discuss state-of-the-art contributions to the verification, validation, and transparency of AV safety. Paper presentations will be complemented with industry and government insights into the challenges and applicability of safety methods in automated driving worldwide. We encourage an active dialogue in a panel Q&A with presenters and invited industry figures, and we hope this open exchange of contributions, points of view, and visions will inspire collaboration.

Invited Speakers

Prof. Eduardo M. Nebot
Director of the Australian Centre for Field Robotics
University of Sydney - http://its.acfr.usyd.edu.au/research-areas/
nebot@acfr.usyd.edu.au

Assuring Robustness and Safety of Connected Autonomous Vehicles
One of the fundamental challenges in the design of autonomous systems is the validation of the performance of each algorithm. The robustness of each process must be evaluated under a comprehensive variety of operating conditions and structural changes of the intended area of operations. This is the case for perception, navigation, path planning, decision, and control systems. This presentation will address the evaluation of metrics to assess the robustness of critical CAV algorithms. This is essential to provide assurance of safety of a CAV while operating in a given Operational Design Domain (ODD). Different approaches and case studies will be presented making use of ACFR autonomous vehicles.

Jack Weast
Senior Principal Engineer, Intel Corporation
VP of Autonomous Vehicle Standards, Mobileye
jack.weast@intel.com

An Open, Transparent, Industry-Driven Approach to AV Safety
At Intel and Mobileye, saving lives drives us. But in the world of automated driving, we believe safety is not merely a feature of AD, but the bedrock on which we all build this industry. And so we proposed Responsibility-Sensitive Safety (RSS), a formal model to define safe driving and what rules an automated vehicle, independent of brand or policy, should abide by to always keep its passengers safe. We intend this open, non-proprietary model to drive cross-industry discussion; let's come together as an industry and use RSS as a starting point to clarify safety today, to enable the autonomous tomorrow.
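For context, the core of the published RSS model is the minimum safe longitudinal distance that a rear vehicle must keep to a front vehicle. In the notation of the original RSS paper:

```latex
d_{\min} \;=\; \left[\, v_r\,\rho
  \;+\; \tfrac{1}{2}\,a_{\mathrm{max,accel}}\,\rho^2
  \;+\; \frac{\left(v_r + \rho\,a_{\mathrm{max,accel}}\right)^2}{2\,a_{\mathrm{min,brake}}}
  \;-\; \frac{v_f^2}{2\,a_{\mathrm{max,brake}}} \,\right]_{+}
```

Here $v_r$ and $v_f$ are the rear and front vehicle speeds, $\rho$ is the rear vehicle's response time, $a_{\mathrm{max,accel}}$ is its maximum acceleration during that response, $a_{\mathrm{min,brake}}$ its minimum guaranteed braking, $a_{\mathrm{max,brake}}$ the front vehicle's maximum braking, and $[x]_{+} = \max(x, 0)$.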

Prof. Fei-Yue Wang
Director and Chief Scientist of State Key Laboratory for Management of Intelligent Industries
Chinese Academy of Sciences (CASIA)
feiyue.wang@ia.ac.cn

A Right of Way Based Strategy to Implement Safe and Efficient Autonomous Driving at Non-Signalized Intersections
The non-signalized intersection is a typical and common scenario for connected and automated vehicles (CAVs), where balancing safety and efficiency remains difficult for researchers. To improve the original Responsibility-Sensitive Safety (RSS) driving strategy at non-signalized intersections, we propose in this paper a new strategy based on right-of-way assignment (RWA). The performance of the RSS strategy, a cooperative driving strategy, and the RWA-based strategy is tested and compared. Testing results indicate that our strategy yields better traffic efficiency than the RSS strategy, though it does not match the cooperative driving strategy, due to the limited communication range and the lack of long-term planning. However, our new strategy incurs much lower communication costs among vehicles.

Prof. Krzysztof Czarnecki
Head of the Generative Software Lab
University of Waterloo
kczarnec@gsd.uwaterloo.ca

PURSS: Towards a Perceptual Uncertainty Aware Responsibility Sensitive Safety
RSS assumes perfect perception, but perception is always uncertain. Perceptual uncertainty makes misperceptions possible, and these can create safety risk because they may lead RSS to allow unsafe actions. In this paper, we propose a formal model of perception coupled with RSS to help assess the impact of misperception and guide the improvement of perceptual subsystems. In addition, we propose a novel approach for integrating estimates of perceptual uncertainty during driving with RSS in order to help mitigate the impact of misperception. We illustrate our approach using examples and discuss its implications and limitations.
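As a toy illustration of the general idea (not the authors' PURSS model), a perception-aware safety check might shrink the measured gap by a sensor-noise margin before applying the standard RSS distance test. All parameter values and function names below are illustrative assumptions:

```python
import math

# Illustrative RSS parameters (assumed values, not from the talk).
RHO = 0.5            # response time [s]
A_MAX_ACCEL = 3.0    # max acceleration of rear vehicle during response [m/s^2]
A_MIN_BRAKE = 4.0    # min guaranteed braking of rear vehicle [m/s^2]
A_MAX_BRAKE = 8.0    # max braking of front vehicle [m/s^2]

def rss_min_gap(v_rear, v_front):
    """Minimum safe longitudinal gap per the published RSS formula."""
    d = (v_rear * RHO
         + 0.5 * A_MAX_ACCEL * RHO ** 2
         + (v_rear + RHO * A_MAX_ACCEL) ** 2 / (2 * A_MIN_BRAKE)
         - v_front ** 2 / (2 * A_MAX_BRAKE))
    return max(d, 0.0)

def is_safe_under_uncertainty(measured_gap, v_rear, v_front, gap_sigma, k=3.0):
    """Conservative check: shrink the measured gap by k standard
    deviations of the range-sensor noise before applying RSS."""
    worst_case_gap = measured_gap - k * gap_sigma
    return worst_case_gap >= rss_min_gap(v_rear, v_front)
```

The `k * gap_sigma` margin is a simple worst-case inflation; the talk's approach to coupling a formal perception model with RSS is more principled than this sketch.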

Marisa Walker
Senior Vice President, Strategic Planning/Infrastructure
Arizona Commerce Authority - Institute for Automated Mobility (IAM)
marisaw@azcommerce.com

TBD

Prof. Chen Chai
Key Laboratory of Road and Traffic Engineering, Ministry of Education
Tongji University
chaichen@tongji.edu.cn

Safety Evaluation of Responsibility-Sensitive Safety (RSS) on Autonomous Car-Following Maneuvers Based on Surrogate Safety Measurements
This paper evaluates the safety effect of the Responsibility-Sensitive Safety (RSS) model on autonomous car-following maneuvers. The RSS model is embedded into a Model Predictive Control (MPC)-based Adaptive Cruise Control (ACC) algorithm. Car-following scenarios with sudden deceleration of the lead vehicle are simulated at various front-gap and velocity conditions. Vehicle movement characteristics and surrogate safety measurements are analyzed to evaluate the safety performance of the simulation before and after RSS is embedded. Results show that RSS can significantly improve ACC safety performance at large front-gap conditions. Because ACC is an optimization-based algorithm, a large initial front gap leads to a late deceleration decision and thus to low safety performance. The RSS model, which is derived from drivers' perception of and reaction to emergencies, can be applied as a safety guarantee to improve the safety performance of such optimization-based algorithms.

Anthony Corso
Stanford Intelligent Systems Lab
Stanford University
acorso@stanford.edu

Adaptive Stress Testing with Reward Augmentation for Autonomous Vehicle Validation
Determining possible failure scenarios is a critical step in the evaluation of autonomous vehicle systems. Real-world vehicle testing is commonly employed for autonomous vehicle validation, but the costs and time requirements are high. Consequently, simulation-driven methods such as Adaptive Stress Testing (AST) have been proposed to aid in validation. AST formulates the problem of finding the most likely failure scenarios as a Markov decision process, which can be solved using reinforcement learning. In practice, AST tends to find scenarios where failure is unavoidable and tends to repeatedly discover the same types of failures of a system. This work addresses these issues by encoding domain-relevant information into the search procedure. With this modification, the AST method discovers a larger and more expressive subset of the failure space when compared to the original AST formulation. We show that our approach is able to identify useful failure scenarios of an autonomous vehicle policy.

Pascal Brunner
Research assistant at Center of Automotive Research on Integrated Systems and Measurement Area (CARISSMA)
pascal.brunner@carissma.eu

Center of Automotive Research on Integrated Safety Systems and Measurement Area efforts on AV Safety
CARISSMA stands for Center of Automotive Research on Integrated Safety Systems and Measurement Area. The aim of this facility is to conduct applied research in order to enhance traffic safety in Germany and abroad. To this end, CARISSMA works with car manufacturers, scientists, and research institutions all over the world. Working on an interdisciplinary basis, the scientists involved seek to tackle the social challenge of "Vision Zero" – achieving the ultimate goal of zero traffic deaths. The CARISSMA Research and Test Center investigates active, passive, and integrated vehicle safety as well as the development of innovative test systems and safe electric mobility. In addition to a short introduction to CARISSMA, current work on test systems and methods is presented, in particular on virtual testing.

Final Agenda


13:00

Academic Presentation 1: Eduardo Nebot – University of Sydney

"Assuring Robustness and Safety of Connected Autonomous Vehicles"

13:20

Industry Presentation 1: Jack Weast – Intel

"An Open, Transparent, Industry-Driven Approach to AV Safety"

13:40

Academic Presentation 2: Krzysztof Czarnecki – University of Waterloo

“PURSS: Towards a Perceptual Uncertainty Aware Responsibility Sensitive Safety”

14:00

Government Insights 1: Marisa Walker - Arizona Commerce Authority - Institute for Automated Mobility 

TBD

14:20

Academic Presentation 3: Chen Chai - Tongji University

“Safety Evaluation of Responsibility-Sensitive Safety (RSS) on Autonomous Car-Following Maneuvers Based on Surrogate Safety Measurements”

14:40

Academic Presentation 4: Fei-Yue Wang – Chinese Academy of Sciences (CASIA)

"A Right of Way Based Strategy to Implement Safe and Efficient Autonomous Driving at Non-Signalized Intersections"

15:00

Coffee break (30 min)

15:30

Academic Presentation 5: Pascal Brunner - CARISSMA

“CARISSMA efforts on AV Safety”

15:50

Academic Presentation 6: Anthony Corso – Stanford University

"Adaptive Stress Testing with Reward Augmentation for Autonomous Vehicle Validation"

16:10

Panel Discussion with Q&A (All speakers)


17:00

End of the session


Organizers

Ignacio Alvarez
Intel Labs, Autonomous Driving Research Lab
ignacio.j.alvarez@intel.com

Maria Soledad Elli
Intel Corporation, Autonomous Driving Group
maria.elli@intel.com

Prof. Xuesong Wang
Tongji University, School of Transportation Engineering
wangxs@tongji.edu.cn

Bernd Gassmann
Intel Labs Germany, Autonomous Mobile Systems Research Lab
bernd.gassmann@intel.com

Xiangbin Wu
Intel China Research Center
xiangbin.wu@intel.com