SAFEXPLAIN: From Vision to Reality

AI Robustness & Safety

Explainable AI

Compliance & Standards

Safety Critical Applications
THE CHALLENGE: SAFE AI-BASED CRITICAL SYSTEMS
- Today’s AI allows advanced functions to run on high performance machines, but its “black‑box” decision‑making is still a challenge for automotive, rail, space and other safety‑critical applications where failure or malfunction may result in severe harm.
- Machine- and deep‑learning solutions running on high‑performance hardware enable true autonomy, but until they become explainable, traceable and verifiable, they can’t be trusted in safety-critical systems.
- Each sector enforces its own rigorous safety standards to ensure the technology used is safe (Space: ECSS; Automotive: ISO 26262 / ISO 21448 / ISO 8800; Rail: EN 50126/8), and AI must also meet these functional safety requirements.
MAKING CERTIFIABLE AI A REALITY
Our next-generation open software platform is designed to make AI explainable, and to make systems that integrate AI compliant with safety standards. This technology bridges the gap between cutting-edge AI capabilities and the rigorous demands of safety-critical environments. By bringing together experts in AI robustness, explainable AI, functional safety and system design, and testing their solutions in safety-critical applications in the space, automotive and rail domains, we are contributing to trustworthy and reliable AI.
Key activities:
SAFEXPLAIN is enabling the use of AI in safety-critical systems by closing the gap between AI capabilities and functional safety requirements.
See SAFEXPLAIN technology in action
CORE DEMO
The Core Demo is built on a flexible skeleton of replaceable building blocks for Inference, Supervision or Diagnostics components that allow it to be adapted to different scenarios. Full domain-specific demos are available on the technologies page.
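The "replaceable building blocks" idea can be illustrated with a minimal sketch. This is a hypothetical illustration, not the project's actual code: the component names (`DummyInference`, `DummySupervision`) and the `Component` interface are invented here to show how blocks sharing a common interface can be swapped per scenario.

```python
from typing import Protocol


class Component(Protocol):
    """Common interface every building block implements."""
    def process(self, data: dict) -> dict: ...


class DummyInference:
    """Placeholder inference block; a real demo would run a trained model."""
    def process(self, data: dict) -> dict:
        data["prediction"] = "pedestrian"  # stand-in for model output
        return data


class DummySupervision:
    """Placeholder supervision block that checks the inference stage ran."""
    def process(self, data: dict) -> dict:
        data["supervised"] = "prediction" in data
        return data


class Pipeline:
    """Chains interchangeable components; swap the list to change scenarios."""
    def __init__(self, components: list[Component]):
        self.components = components

    def run(self, data: dict) -> dict:
        for component in self.components:
            data = component.process(data)
        return data


pipeline = Pipeline([DummyInference(), DummySupervision()])
result = pipeline.run({"frame": 0})
```

Because each block only depends on the shared interface, a domain-specific diagnostics or supervision component can replace a dummy one without touching the rest of the pipeline.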
SPACE
Mission autonomy and AI to enable fully autonomous operations during space missions
Specific activities: Identify the target, estimate its pose, and monitor the agent's position to signal potential drifts, sensor faults, etc.
Use of AI: Decision ensemble
AUTOMOTIVE
Advanced methods and procedures to enable self-driving cars to accurately detect road users and predict their trajectory
Specific activities: Validate the system’s capacity to detect pedestrians, issue warnings, and perform emergency braking
Use of AI: Decision Function (mainly visualization oriented)
SAFEXPLAIN deliverables now available!
Twelve deliverables reporting on the work undertaken by the project have been published in the results section of the website. The SAFEXPLAIN deliverables provide key details about the project and how it is progressing. The following deliverables have been created for...
SAFEXPLAIN takes part in 1st intacs® certified ML for Automotive SPICE® (pilot) training
SAFEXPLAIN partner exida development provided invaluable contributions to the two days of pilot training for the intacs® certified machine learning (ML) Automotive SPICE® training.
Integrating the Railway Case Study into the Reference Safety Architecture Pattern
Within the SAFEXPLAIN (SE) project, partner Ikerlan leads the railway case study (CS), which is specifically centred on Automatic Train Operation (ATO). This article highlights how this CS is integrated into the reference safety architecture, building on the...
TÜV Rheinland International Symposium 2023
Image from TÜV Rheinland International Symposium website The TÜV Rheinland International Symposium is a specialist event intended as a platform for intensive experience exchange for those involved in Functional Safety and Cybersecurity of different industrial...
EXIDA Automotive Symposium 2023
The 2023 Exida-hosted Automotive Symposium will be held from 18-20 October 2023 in the alpine town of Spitzingsee, Germany. This two-day event will encourage the exchange of information and contacts in the automotive industry.
EXIDA presents SAFEXPLAIN at Automotive Spin Italia
On 30 May 2023, G. Nicosia from the SAFEXPLAIN project presented the project in the Functional Safety Session of the 21st Workshop on Automotive Software and Systems, with a talk on "User Cases and Scenario Catalogue for ML/DL-based solutions...