The Secure, Reliable, and Intelligent Systems (SRI) Lab is a research group in the Department of Computer Science at ETH Zurich. Our current research focuses on reliable, secure, robust, and fair machine learning; probabilistic and quantum programming; and machine learning for code. Our work has led to three successful ETH spin-offs: DeepCode.ai (AI for code), ChainSecurity (security verification), and LatticeFlow (robust machine learning). See our Publications to learn more about our work.

Latest News & Blog Posts

Probing Google DeepMind's SynthID-Text Watermark: We apply the techniques from our recent work to investigate how SynthID-Text, the first large-scale deployment of an LLM watermarking scheme, fares in several adversarial scenarios. We discuss a range of findings, provide novel insights into the properties of this scheme, and outline interesting future research directions.

Dr. Benjamin Bichsel, a former PhD student and postdoc at the SRI Lab and now CEO and co-founder of NetFabric, was awarded the ACM SIGPLAN John C. Reynolds Doctoral Dissertation Award for his doctoral thesis, High-Level Quantum Programming.

The Role of Red Teaming in PETs: In February, our team won the Red Teaming category of the U.S. PETs Prize Challenge, securing a prize of 60,000 USD. In this blog post, we give a brief overview of the significance of red teaming in privacy-enhancing technologies (PETs) research, in the context of the competition.

Our paper on Self-Contradictory Hallucinations was presented at ICLR 2024. Check out the project website to learn more about detecting and removing hallucinations in large language models.

LAMP: Extracting Text from Gradients with Language Model Priors: In this work, we present a privacy attack on federated learning specific to the text domain, showing that gradients shared during training can expose substantial amounts of private user text.

Our latest paper on LLM watermark stealing has been featured in MIT Technology Review! Check out the article and learn more on our project website.

Reliability guarantees on private data: We present Phoenix (CCS '22), the first system for privacy-preserving neural network inference with robustness and fairness guarantees.

Some of our latest research on LLM privacy has been featured in WIRED magazine! Check out their article and our corresponding paper.