
FoMLAS 2021
July 18-19, 2021, Los Angeles, California, USA

4th Workshop on Formal Methods for ML-Enabled Autonomous Systems

Affiliated with CAV 2021

Important Dates

  • Paper submission: May 7th (extended)

  • Author notification: June 4

  • Workshop: July 18-19

Update: the workshop, along with the main CAV conference,
will be held virtually.

The workshop's informal proceedings are now available here.

Scope and Topics of Interest

In recent years, deep learning has emerged as a highly effective way of creating real-world software, and it is revolutionizing the way complex systems are designed across the board. In particular, this new approach is being applied to autonomous systems (e.g., autonomous cars and aircraft), achieving exciting results that are beyond the reach of manually created software. However, these significant changes have created new challenges when it comes to explainability, predictability, and correctness: Can I explain why my drone turned right at that angle? Can I predict what it will do next? Can I know for sure that my autonomous car will never accelerate towards a pedestrian? These questions have far-reaching consequences for the safety, accountability, and public adoption of ML-enabled autonomous systems. One promising avenue for tackling these difficulties is the development of formal methods capable of analyzing and verifying these new kinds of systems.

The goal of this workshop is to facilitate discussion regarding how formal methods can be used to increase predictability, explainability, and accountability of ML-enabled autonomous systems. The workshop welcomes results ranging from concept formulation (by connecting these concepts with existing research topics in verification, logic and game theory), through algorithms, methods and tools for analyzing ML-enabled systems, to concrete case studies and examples.

This year, the workshop will host the 2nd International Verification of Neural Networks Competition (VNN-COMP'21). In addition, the workshop will feature a special session and discussion on the VNNLIB initiative, aimed at creating a standard format and a benchmark library for neural network verification.

The topics covered by the workshop include, but are not limited to, the following:

  • Formal specifications for systems with ML components

  • SAT-based and SMT-based methods for analyzing systems with deep neural network components

  • Mixed-integer Linear Programming and optimization-based methods for the verification of systems with deep neural network components

  • Testing approaches for ML components

  • Statistical approaches to the verification of systems with ML components

  • Approaches for enhancing the explainability of ML-based systems

  • Techniques for analyzing hybrid systems with ML components

  • Verification of quantized and low-precision neural networks

  • Abstract interpretation techniques for neural networks
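To give a flavor of the last topic, the sketch below shows interval bound propagation, one of the simplest abstract domains used in neural network verification: an input region is pushed through each layer so that the computed output interval soundly over-approximates every output the network can produce on that region. The network weights here are made up purely for illustration.

```python
# Hypothetical sketch of interval bound propagation through a small
# ReLU network (weights are illustrative, not from any real model).
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Soundly propagate the box [lo, hi] through x -> W @ x + b."""
    W_pos = np.maximum(W, 0.0)  # positive entries keep interval order
    W_neg = np.minimum(W, 0.0)  # negative entries flip it
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy two-layer network with fixed, made-up weights.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.5])

# Input region: each of the two inputs ranges over [-0.1, 0.1].
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)
print(lo, hi)  # sound over-approximation of the output range
```

A safety property such as "the output never exceeds 1" can then be certified by checking the upper bound; if the interval is too loose, tighter domains (zonotopes, polyhedra) or SMT/MILP encodings from the topics above can be used instead.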

Paper Submission and Proceedings

Three categories of submissions are invited:

  • Original papers: contain original research and sufficient detail to assess the merits and relevance of the submission. For papers reporting experimental results, authors are strongly encouraged to make their data available.

  • Presentation-only papers: describe work recently published or submitted. We see this as a way to provide additional access to important developments that the workshop attendees may be unaware of.

  • Extended abstracts: given the informal style of the workshop, we strongly encourage the submission of preliminary reports of work in progress. These reports may range in length from very short to full papers, and will be judged based on the expected level of interest for the community.

All accepted papers will be posted online as part of the informal proceedings on the day of the conference.

Papers in all categories will be peer-reviewed. We recommend that papers be submitted as a single-column, standard-conforming PDF using the LNCS style, with a suggested page limit of 10 pages (not counting references), although other formats will be accepted. The review process will be single-blind.

To submit a paper, use EasyChair.

Invited Talk

The workshop will feature an invited talk by Caterina Urban from INRIA.

Title: An Abstract Interpretation Recipe for Machine Learning Fairness

Abstract: Nowadays, machine learning software is increasingly used for assisting or even automating decisions with socio-economic impact. In April 2021, the European Commission proposed a first legal framework on machine learning software, the Artificial Intelligence Act, which imposes strict requirements to minimize the risk of discriminatory outcomes. In this context, methods and tools for formally certifying fairness or detecting bias are extremely valuable. In this talk, we will be cooking together a static analysis for certifying fairness of neural network classifiers. We will get familiar with the basic recipe for abstract interpretation-based static analyses. We will then gather and put together the best ingredients to achieve an optimal trade-off between precision and scalability.
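As a toy illustration of the kind of property the talk addresses (and not the speaker's actual analysis or tool), the sketch below certifies that a tiny classifier's decision is independent of a binary sensitive input over a given feature range, by running a simple interval analysis once per value of the sensitive bit. All weights and the fairness criterion are made-up assumptions for this example.

```python
# Hypothetical sketch: certifying decision-independence from a binary
# sensitive input via interval analysis (illustrative weights only).
import numpy as np

def bounds(lo, hi, W, b):
    """Sound interval propagation through x -> W @ x + b."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

# Made-up classifier: inputs are (sensitive_bit, feature).
W1 = np.array([[0.1, 1.0], [0.1, -1.0]]); b1 = np.zeros(2)
W2 = np.array([[1.0, -1.0]]);             b2 = np.array([0.0])

def output_bounds(s, f_lo, f_hi):
    """Output interval for a fixed sensitive bit s and feature range."""
    lo, hi = bounds(np.array([s, f_lo]), np.array([s, f_hi]), W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU
    return bounds(lo, hi, W2, b2)

# Decision is sign(output); the classifier is certified fair on the
# feature range [2, 3] if the sign is provably the same for both
# values of the sensitive bit.
results = [output_bounds(s, 2.0, 3.0) for s in (0.0, 1.0)]
fair = (all(lo > 0 for lo, _ in results)
        or all(hi < 0 for _, hi in results))
print(fair)
```

If the intervals are too loose to fix the sign, the check is inconclusive rather than a proof of bias, which is exactly the precision/scalability trade-off the talk's "recipe" is about.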

Bio: Caterina Urban has been a research scientist in the Inria Paris research team ANTIQUE (ANalyse StaTIQUE) since 2019. Previously, she was a postdoctoral researcher at ETH Zurich. She completed her Ph.D. in Computer Science in 2015, working under the supervision of Radhia Cousot and Antoine Miné at École Normale Supérieure. Caterina's research interests span the whole spectrum of formal methods. Her main area of expertise is static analysis based on abstract interpretation. Caterina is currently engaged in a long-term research effort to enhance the reliability and our understanding of data science and machine learning software, which nowadays has an ever-increasing impact on our lives.

