This report summarizes the 3rd International Verification of Neural Networks Competition (VNN-COMP 2022), held as a part of the 5th Workshop on Formal Methods for ML-Enabled Autonomous Systems (FoMLAS), which was co-located with the 34th International Conference on Computer Aided Verification (CAV). VNN-COMP is held annually to facilitate the fair and objective comparison of state-of-the-art neural network verification tools, to encourage the standardization of tool interfaces, and to bring together the neural network verification community. To this end, standardized formats for networks (ONNX) and specifications (VNN-LIB) were defined, tools were evaluated on equal-cost hardware (using an automatic evaluation pipeline based on AWS instances), and tool parameters were chosen by the participants before the final test sets were made public. In the 2022 iteration, 11 teams participated and were evaluated on a diverse set of 12 scored benchmarks. This report presents the rules, benchmarks, participating tools, results, and lessons learned from this iteration of the competition.
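
As an illustration of the specification format, a minimal VNN-LIB property might look as follows. This is a hypothetical sketch rather than a benchmark from the competition; VNN-LIB uses SMT-LIB-style syntax, with X_i conventionally denoting network inputs and Y_i network outputs:

    ; hypothetical query: box-bounded input, output constraint encoding a violation
    (declare-const X_0 Real)
    (declare-const Y_0 Real)
    (declare-const Y_1 Real)
    ; input bounds
    (assert (>= X_0 -1.0))
    (assert (<= X_0 1.0))
    ; counterexample condition: output Y_0 does not exceed Y_1
    (assert (<= Y_0 Y_1))

Paired with the corresponding ONNX network, a verifier reports whether some input within the bounds can produce outputs satisfying the final assertion; if no such input exists, the property encoded by the specification holds.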


@article{mueller2023VNN,
  author     = {Mark Niklas M{\"{u}}ller and Christopher Brix and Stanley Bak and Changliu Liu and Taylor T. Johnson},
  title      = {The Third International Verification of Neural Networks Competition ({VNN-COMP} 2022): Summary and Results},
  journal    = {CoRR},
  volume     = {abs/2212.10376},
  year       = {2022},
  url        = {https://doi.org/10.48550/arXiv.2212.10376},
  doi        = {10.48550/arXiv.2212.10376},
  eprinttype = {arXiv},
  eprint     = {2212.10376},
}