Robert Lowe, Senior Researcher in AI and Driver Monitoring Systems at project partner RISE, will explore the new complexities that XAI brings in relation to functional safety, transparency and compliance with safety standards. In this 1.5-hour webinar, hosted by HiPEAC, Robert will focus on integrating Explainable AI (XAI) into safety-critical systems with functional safety requirements, such as those in the automotive, rail and space domains.
The challenge
Compliance with safety standards is essential in safety-critical domains like automotive, rail and space. While traditional approaches to functional safety are well established, the introduction of AI into safety-critical systems presents new complexities that challenge existing frameworks and methodologies, owing to the "black box" nature of deep learning models.
Webinar goal
Explainable AI (XAI) is vital for making AI decision-making processes transparent and understandable to human experts, and for ensuring safety and regulatory compliance. Despite the value of XAI, there is currently a lack of systematic approaches for integrating it into AI-based systems and the machine learning lifecycle, especially in domains where safety is non-negotiable. This webinar seeks to address this gap by introducing the SAFEXPLAIN explainability-by-design approach.
Learning goals
Webinar attendees will:
- Learn about the current challenges and gaps in integrating XAI with ML lifecycle processes.
- Explore a structured approach to integrating XAI within the development and deployment of SAFEXPLAIN AI models to ensure compliance with functional safety standards.
- Gain insights into the innovative SAFEXPLAIN approach for leveraging AI in automotive, rail and space applications.
- Have access to the latest XAI research coming from the SAFEXPLAIN project (see the Resources section below).
Audience
- Regulatory and compliance personnel
- AI researchers and developers
Resources
- D3.1 Specifiability, explainability, traceability, and robustness proof-of-concept and argumentation
- RISE explains XAI for systems with functional safety requirements
This is the second webinar in the SAFEXPLAIN webinar series. The first webinar, “Towards functional safety management for AI-based critical systems”, shared the project’s framework and software architecture for incorporating AI-based solutions into safety-critical systems (top-down approach). This second webinar presents AI solutions, how to tailor them, and how to build the overall software architecture around them (bottom-up approach).