Core Demo Webinar: Making certifiable AI a reality for critical systems

Date: July 02, 2025

June 18, 2025 marked the first public unveiling of the Core Demo released by the SAFEXPLAIN project. The Core Demo showcases the integration of the technologies and tools emerging from the project and their interoperability, executed in case studies from three diverse industrial domains: space, automotive and rail.

The webinar walks through how SAFEXPLAIN technology can accommodate scenarios with critical functionalities and highlights the project’s work closing the gap between black-box AI software development and the rigorous, certifiable and explainable process required by safety-critical systems.


Carlo Donzella, from partner exida development, opened the webinar with an explanation of the historical background driving the push for greater automation in vehicles and the important, if inconvenient, reality that AI challenges are common in Functional Safety-related software.

He analyzed the different perspectives and controversy surrounding full autonomy in vehicles and presented the SAFEXPLAIN approach as a “third position” in the debate, one which recognizes the inevitability of using consolidated Machine Learning/Deep Learning solutions for advanced functions and the need to reconcile this technology with new and updated standards and regulations on the Quality, Safety and Security of AI.

Core Demo Webinar

The project’s Core Demo is a small-scale, modular demonstrator that highlights the platform’s key technologies and showcases how AI/ML components can be safely integrated into critical systems. 

The Core Demo focuses on an illustrative safety pattern scenario, in which AI influences, but does not solely determine, decision-making, to show how SAFEXPLAIN technology makes AI-based safety-critical systems safe by construction while preserving performance.


This fully functional and configurable teaser shows how SAFEXPLAIN technology can accommodate scenarios with critical functionalities in three selected ‘toy’ examples from the automotive, rail and space domains. The SAFEXPLAIN technology deployed in this demo is beneficial for developers and decision-makers focused on transport and mobility in Critical Autonomous AI-Based Systems (CAIS) who need evidence of what type of evaluation and support will be available in the near future regarding the certifiability of their CAIS products.

The technical specification describing this skeleton and its applicability can be found here.

Functional Safety Compliant - Safety Pattern 2

The Core Demo provides a generic skeleton that accommodates simple, functionally relevant examples. It focuses on ‘Safety Pattern 2’, in which an AI/ML constituent partially affects the decision process.

Core Demo-Space

The Core Demo is instantiated to the space domain in a satellite position tracker sub-system.

Core Demo-Automotive

The Core Demo is instantiated to the automotive domain in a pedestrian detection, warning and emergency braking system.

This Core Demo is available for early users. It provides a preview of what is to come with the car, rail and space full demos, equipped with fully developed use cases with ODD-based scenarios and test suites. These full demos will be available in September 2025 as final results of the SAFEXPLAIN project. The project’s platform-level support can be extended to other set-ups, as ROS2 ensures exceptionally high portability; limitations or improvements may arise from specific HW/SW set-ups.

SAFEXPLAIN partners invite early users and interested stakeholders to explore its capabilities in their own operational environments. Contact safexplainproject@bsc.es for more information.