
Demos and Videos
Learn about the project
SAFEXPLAIN introduction video
This first video explores the SAFEXPLAIN project’s approach to addressing safety and explainability challenges in deep learning (DL) for autonomous systems like cars, trains, and satellites. Learn how SAFEXPLAIN tailors DL solutions to certification needs, ensuring safety and industry adoption.
Project coordinator reflects on progress in the final year of the project
Jaume Abella sets the context for the 3rd SAFEXPLAIN webinar by recapping the work done by the project to safely integrate AI in safety-critical systems. He highlights the challenge of doing so when “AI software is developed in the opposite way you would build SW for safety-critical systems.”
Presentations with our TrustworthyAI Cluster
BoF webinar Trustworthy AI: Landscaping verifiable robustness and transparency
Get to know the nine projects that form the TrustworthyAI Cluster. In this webinar, they come together to share the role trustworthy AI plays in their use cases. Partner Enrico Mezzetti from the BSC represented the project.
BoF workshop as part of the “Future-Ready” ADRA/AIoD event
Project coordinator Jaume Abella from the Barcelona Supercomputing Center represented the project at the Birds of a Feather session “TrustworthyAI Cluster: Main innovations and future challenges”, together with cluster siblings EVENFLOW, TALON and ULTIMATE. The workshop took place on 18 February at 10:30. Learn about the common highlights and challenges shared by the projects.
Our webinar series
SAFEXPLAIN Webinar Series: Webinar 1 - AI-Functional Safety Management
On 4 July 2024, speakers from IKERLAN shared an in-depth look at the SAFEXPLAIN project’s developments in AI-Functional Safety Management (FSM) methodology. Dr Irune Yarza and Dr Javier Fernández Muñoz presented “Towards functional safety management for AI-based critical systems”, which provides an understanding of the challenges and opportunities associated with integrating AI into safety-critical systems, while also offering practical insights and strategies for ensuring the continued safety and reliability of such systems.
SAFEXPLAIN Webinar Series: Webinar 2 - Explainable AI
Dr Robert Lowe, Senior Researcher in AI and Driver Monitoring Systems at RISE, discusses the integration of explainable AI (XAI) algorithms into machine learning (ML) lifecycles in this webinar on “XAI for systems with functional safety requirements” from 23 October 2024. The webinar provides an overview of the complexities of XAI with regard to functionality, transparency and compliance with safety standards.
SAFEXPLAIN Webinar Series: Webinar 3 - Platform and toolsets
On 11 February 2025, Dr Enrico Mezzetti, Established Researcher at the BSC, presented the innovative SAFEXPLAIN platform and toolset supporting the development, execution and analysis of the solutions proposed by the project and deployed through the SAFEXPLAIN case studies. In his presentation “Putting it together: The SAFEXPLAIN platform and toolsets”, he explains how the platform ties together the project’s AI-Functional Safety Management (AI-FSM) methodology and the explainable AI (XAI) methodologies explored in the first two webinars of the series.