Formal verification under uncertainty
Ensuring that AI systems behave as intended is essential for their adoption in many domains. This series covers formal methods for verifying safety, fairness, and other relevant properties of (probabilistic) ML systems. We document progress in research areas such as neuro-symbolic verification, probabilistic model checking, and weighted model integration.