The increasing use of deep neural networks in safety-critical applications, such as autonomous driving and flight control, raises concerns about their safety and reliability. Formal verification can address these concerns by guaranteeing that a deep learning system operates as intended, but the state of the art is limited to small systems...
Authors
Lindsey Kuper
Research Scientist, Parallel Computing Lab
Guy Katz
Kyle Julian
Clark Barrett
Mykel Kochenderfer