Standardizing the Probabilistic Sources of Uncertainty for the sake of Safety Deep Learning

Date: February 13, 2023
Location: Washington D.C., USA

The Safexplain team from the CAOS research group of the Barcelona Supercomputing Center (BSC-CNS) presented their latest research at the AAAI 2023 Workshop on Artificial Intelligence Safety, held on February 14, 2023, in Washington D.C. The team's research focuses on the use of neural networks in critical systems and the need for a unified probabilistic formal methodology that models the sources of uncertainty for the sake of safe deep learning.

During their presentation, the team discussed the challenges of using neural networks in critical systems and the importance of addressing uncertainty to ensure safety. Their methodology applies a unified probabilistic formal approach to any standard AI-based forecasting process, covering the main sources of uncertainty: anomalous input values, the bias introduced by selecting a particular model, and the irreducible variability of the correct output values given the same input.
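That last source, the irreducible variability of the correct outputs for a fixed input, is what the underlying dissertation calls aleatoric uncertainty. As a purely illustrative sketch (not the team's implementation), a regression model can expose it by predicting a variance alongside the mean and being trained with a Gaussian negative log-likelihood; a model that honestly reports a larger variance on noisy inputs is rewarded over one that is overconfident:

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Negative log-likelihood of y under N(mu, exp(log_var)).

    Predicting a variance alongside the mean lets a regression model
    express aleatoric (irreducible) uncertainty: inputs whose targets
    vary more should be assigned a larger predicted variance.
    """
    var = np.exp(log_var)
    return 0.5 * (np.log(2 * np.pi) + log_var + (y - mu) ** 2 / var)

# Toy check: for the same input, two equally "correct" targets spread
# around the predicted mean of 1.0.
y_obs = np.array([0.9, 1.1])
mu = np.array([1.0, 1.0])

# An overconfident (tiny-variance) prediction is penalised for the
# spread; an honest, larger-variance prediction fits it better.
nll_confident = gaussian_nll(y_obs, mu, np.log(np.array([0.001, 0.001]))).sum()
nll_honest = gaussian_nll(y_obs, mu, np.log(np.array([0.01, 0.01]))).sum()
print(nll_honest < nll_confident)  # prints True
```

In a real deep-learning setting the same loss is applied to a network that outputs both `mu` and `log_var`, so the variance head is learned jointly with the mean.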

The team believes their approach can be extended to fields beyond avionics, automotive, and space, and can advance the use of neural networks in critical systems while ensuring safety. The research presented by the Safexplain team was based on Axel Brando's doctoral thesis, "Aleatoric Uncertainty Modelling for Regression problems with Deep Learning," completed in 2022.

Check their presentation here.