The Secure, Reliable, and Intelligent Systems (SRI) Lab is a research group in the Department of Computer Science at ETH Zurich.
Our research focuses on reliable, secure, and trustworthy machine learning, with an emphasis on large language models.
We currently study the controllability, security and privacy, and reliable evaluation of LLMs and their application to mathematical reasoning and coding, as well as generative AI watermarking, AI regulations, federated learning privacy, robustness and fairness certification, and quantum computing.
Our work has led to six ETH spin-offs:
NetFabric (AI for systems),
LogicStar (AI code agents),
LatticeFlow (robust ML),
Invariant Labs (secure AI agents; acquired),
DeepCode (AI for code; acquired),
and ChainSecurity (security verification; acquired).
To learn more about our work, see our Research page, recent Publications, and GitHub
releases. To stay up to date, follow our group on Twitter.
Latest News
24.10.2025
Our work on sycophantic behavior in large language models was featured in a Nature article on the risks of LLM sycophancy in scientific research.
14.07.2025
The SRI Lab is presenting 14 papers at ICML 2025 in Vancouver: 9 at the main conference and 5 at workshops. See the Twitter thread for more details.
25.06.2025
Our ETH spin-off Invariant Labs was acquired by Snyk. See the article on the D-INFK news channel.
Most Recent Publications
Constrained Decoding of Diffusion LLMs with Context-Free Grammars
Niels Mündler, Jasper Dekoninck, Martin Vechev
ICLR 2026
DL4C @ NeurIPS'25 Oral
Robust LLM Fingerprinting via Domain-Specific Watermarks
Thibaud Gloaguen, Robin Staab, Nikola Jovanović, Martin Vechev
ICLR 2026
Watermarking Diffusion Language Models
Thibaud Gloaguen, Robin Staab, Nikola Jovanović, Martin Vechev
ICLR 2026
Fewer Weights, More Problems: A Practical Attack on LLM Pruning
Kazuki Egashira, Robin Staab, Thibaud Gloaguen, Mark Vero, Martin Vechev
ICLR 2026