Safe Artificial Intelligence via Abstract Interpretation
Certifying and training robust neural networks using approaches based on abstract interpretation.
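
To give a flavor of the approach, the sketch below propagates interval (Box) bounds through a small fully connected network and checks that every input in an L-infinity ball around a point keeps its label. This is a minimal, hypothetical NumPy example, not code from any of the systems in the papers listed here; all names are illustrative, and the Box domain is the coarsest of the abstract domains these works use.

    import numpy as np

    def affine_bounds(W, b, lo, hi):
        # Propagate the interval [lo, hi] through x -> W @ x + b.
        # Positive weights carry lower bounds to lower bounds;
        # negative weights swap the roles of the two bounds.
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

    def relu_bounds(lo, hi):
        # ReLU is monotone, so it applies to each bound directly.
        return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

    def certify_box(W1, b1, W2, b2, x, eps, target):
        # Certify that all inputs within L-infinity distance eps of x
        # are classified as `target` by a two-layer ReLU network.
        lo, hi = x - eps, x + eps
        lo, hi = relu_bounds(*affine_bounds(W1, b1, lo, hi))
        lo, hi = affine_bounds(W2, b2, lo, hi)
        # Sound but incomplete: certified only if the target logit's
        # lower bound exceeds every other logit's upper bound.
        return lo[target] > np.delete(hi, target).max()

If certify_box returns True, the robustness property provably holds; if it returns False, the result is inconclusive rather than a counterexample. Tightening this gap is what motivates the richer domains (e.g. zonotopes, DeepPoly) and combinations with exact solvers developed in the publications below.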

Publications

2019

Beyond the Single Neuron Convex Barrier for Neural Network Certification
Gagandeep Singh, Rupanshu Ganvir, Markus Püschel, Martin Vechev
NeurIPS 2019
Certifying Geometric Robustness of Neural Networks
Mislav Balunovic, Maximilian Baader, Gagandeep Singh, Timon Gehr, Martin Vechev
NeurIPS 2019
Online Robustness Training for Deep Reinforcement Learning
Marc Fischer, Matthew Mirman, Martin Vechev
arXiv 2019
Universal Approximation with Certified Networks
Maximilian Baader, Matthew Mirman, Martin Vechev
arXiv 2019
DL2: Training and Querying Neural Networks with Logic
Marc Fischer, Mislav Balunovic, Dana Drachsler-Cohen, Timon Gehr, Ce Zhang, Martin Vechev
ICML 2019
Boosting Robustness Certification of Neural Networks
Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev
ICLR 2019
A Provable Defense for Deep Residual Networks
Matthew Mirman, Gagandeep Singh, Martin Vechev
arXiv 2019
An Abstract Domain for Certifying Neural Networks
Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev
ACM POPL 2019

2018

Fast and Effective Robustness Certification
Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin Vechev
NeurIPS 2018
AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation
Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin Vechev
IEEE S&P 2018

Talks

Safe and Robust Deep Learning
Waterloo ML + Security + Verification Workshop
Safe and Robust Deep Learning
University of Edinburgh, Robust Artificial Intelligence for Neurorobotics 2019
AI2: AI Safety and Robustness with Abstract Interpretation
Machine Learning meets Formal Methods, FLOC 2018