Project: safeai.ethz.ch

Safe Artificial Intelligence
Building safe and robust artificial intelligence systems.

Startups

LatticeFlow
The world's first platform for building and deploying Trustworthy AI.

Publications

2021

Robustness Certification for Point Cloud Models
Tobias Lorenz, Anian Ruoss, Mislav Balunovic, Gagandeep Singh, Martin Vechev
ICCV 2021
Scalable Polyhedral Verification of Recurrent Neural Networks
Wonryong Ryou, Jiayu Chen, Mislav Balunovic, Gagandeep Singh, Andrei Dan, Martin Vechev
CAV 2021
Scalable Certified Segmentation via Randomized Smoothing
Marc Fischer, Maximilian Baader, Martin Vechev
ICML 2021
Certified Defenses: Why Tighter Relaxations May Hurt Training
Nikola Jovanovic*, Mislav Balunovic*, Maximilian Baader, Martin Vechev
arXiv 2021 (* equal contribution)
Boosting Randomized Smoothing with Variance Reduced Classifiers
Miklós Z. Horváth, Mark Niklas Müller, Marc Fischer, Martin Vechev
arXiv 2021
Fast and Precise Certification of Transformers
Gregory Bonaert, Dimitar I. Dimitrov, Maximilian Baader, Martin Vechev
PLDI 2021
Fair Normalizing Flows
Mislav Balunovic, Anian Ruoss, Martin Vechev
arXiv 2021
Certify or Predict: Boosting Certified Robustness with Compositional Architectures
Mark Niklas Müller, Mislav Balunovic, Martin Vechev
ICLR 2021
Scaling Polyhedral Neural Network Verification on GPUs
Christoph Müller*, François Serre*, Gagandeep Singh, Markus Püschel, Martin Vechev
MLSys 2021 (* equal contribution)
Robustness Certification with Generative Models
Matthew Mirman, Alexander Hägele, Timon Gehr, Pavol Bielik, Martin Vechev
PLDI 2021
PRIMA: Precise and General Neural Network Certification via Multi-Neuron Convex Relaxations
Mark Niklas Müller, Gleb Makarchuk, Gagandeep Singh, Markus Püschel, Martin Vechev
arXiv 2021
Efficient Certification of Spatial Robustness
Anian Ruoss, Maximilian Baader, Mislav Balunovic, Martin Vechev
AAAI 2021

2020

Learning Certified Individually Fair Representations
Anian Ruoss, Mislav Balunovic, Marc Fischer, Martin Vechev
NeurIPS 2020
Certified Defense to Image Transformations via Randomized Smoothing
Marc Fischer, Maximilian Baader, Martin Vechev
NeurIPS 2020
Adversarial Attacks on Probabilistic Autoregressive Forecasting Models
Raphaël Dang-Nhu, Gagandeep Singh, Pavol Bielik, Martin Vechev
ICML 2020
Scalable Inference of Symbolic Adversarial Examples
Dimitar I. Dimitrov, Gagandeep Singh, Timon Gehr, Martin Vechev
arXiv 2020
Adversarial Training and Provable Defenses: Bridging the Gap
Mislav Balunovic, Martin Vechev
ICLR 2020 (Oral)
Universal Approximation with Certified Networks
Maximilian Baader, Matthew Mirman, Martin Vechev
ICLR 2020
Robustness Certification of Generative Models
Matthew Mirman, Timon Gehr, Martin Vechev
arXiv 2020

2019

Beyond the Single Neuron Convex Barrier for Neural Network Certification
Gagandeep Singh, Rupanshu Ganvir, Markus Püschel, Martin Vechev
NeurIPS 2019
Certifying Geometric Robustness of Neural Networks
Mislav Balunovic, Maximilian Baader, Gagandeep Singh, Timon Gehr, Martin Vechev
NeurIPS 2019
Online Robustness Training for Deep Reinforcement Learning
Marc Fischer, Matthew Mirman, Steven Stalder, Martin Vechev
arXiv 2019
DL2: Training and Querying Neural Networks with Logic
Marc Fischer, Mislav Balunovic, Dana Drachsler-Cohen, Timon Gehr, Ce Zhang, Martin Vechev
ICML 2019
Boosting Robustness Certification of Neural Networks
Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev
ICLR 2019
A Provable Defense for Deep Residual Networks
Matthew Mirman, Gagandeep Singh, Martin Vechev
arXiv 2019
An Abstract Domain for Certifying Neural Networks
Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev
POPL 2019

2018

Fast and Effective Robustness Certification
Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin Vechev
NeurIPS 2018
AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation
Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin Vechev
IEEE S&P 2018

Talks

Safe and Robust Deep Learning
Waterloo ML + Security + Verification Workshop
Safe and Robust Deep Learning
University of Edinburgh, Robust Artificial Intelligence for Neurorobotics 2019
AI2: AI Safety and Robustness with Abstract Interpretation
Machine Learning meets Formal Methods, FLoC 2018