Making certifiable AI a reality for critical systems

Pursuing a holistic approach to safe and explainable AI-based autonomous systems

FINAL EVENT

TRUSTWORTHY AI IN SAFETY-CRITICAL SYSTEMS

Overcoming adoption barriers

SAFEXPLAIN: From Vision to Reality

AI Robustness & Safety

Explainable AI

Compliance & Standards

Safety Critical Applications

THE CHALLENGE: SAFE AI-BASED CRITICAL SYSTEMS
  • Today’s AI allows advanced functions to run on high performance machines, but its “black‑box” decision‑making is still a challenge for automotive, rail, space and other safety‑critical applications where failure or malfunction may result in severe harm.
  • Machine- and deep‑learning solutions running on high‑performance hardware enable true autonomy, but until they become explainable, traceable and verifiable, they can’t be trusted in safety-critical systems.
  • Each sector enforces its own rigorous safety standards (space: ECSS; automotive: ISO 26262 / ISO 21448 / ISO 8800; rail: EN 50126 / EN 50128), and AI must meet the same functional safety requirements.


MAKING CERTIFIABLE AI A REALITY

Our next-generation open software platform is designed to make AI explainable and to make systems that integrate AI compliant with safety standards. This technology bridges the gap between cutting-edge AI capabilities and the rigorous demands of safety-critical environments. By bringing together experts in AI robustness, explainable AI, functional safety and system design, and testing their solutions in safety-critical applications in the space, automotive and rail domains, we are contributing to trustworthy and reliable AI.

Key activities:

SAFEXPLAIN is enabling the use of AI in safety-critical systems by closing the gap between AI capabilities and functional safety requirements.

See SAFEXPLAIN technology in action

CORE DEMO

The Core Demo is built on a flexible skeleton of replaceable building blocks for Inference, Supervision or Diagnostic components, allowing it to be adapted to different scenarios. Full domain-specific demos will be available at our final event.
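The idea of replaceable building blocks chained into one demo pipeline can be sketched as follows. This is a minimal, illustrative sketch only: all class and method names (`Component`, `process`, `Pipeline`, and the dummy Inference/Supervision blocks) are assumptions for illustration, not the actual SAFEXPLAIN API.

```python
# Hypothetical sketch of a plug-in architecture with replaceable
# building blocks (Inference, Supervision, Diagnostics). Names are
# illustrative assumptions, not SAFEXPLAIN code.
from abc import ABC, abstractmethod
from typing import Any, Dict


class Component(ABC):
    """Common interface every replaceable building block implements."""

    @abstractmethod
    def process(self, frame: Dict[str, Any]) -> Dict[str, Any]:
        ...


class Inference(Component):
    def process(self, frame):
        # A real block would run a trained model here; we just tag a
        # dummy detection to keep the sketch self-contained.
        frame["detections"] = ["obstacle"]
        return frame


class Supervision(Component):
    def process(self, frame):
        # Mark the frame as trusted only if inference produced output.
        frame["trusted"] = bool(frame.get("detections"))
        return frame


class Pipeline:
    """Chains blocks; swapping one block adapts the demo to a new scenario."""

    def __init__(self, *blocks: Component):
        self.blocks = blocks

    def run(self, frame: Dict[str, Any]) -> Dict[str, Any]:
        for block in self.blocks:
            frame = block.process(frame)
        return frame


result = Pipeline(Inference(), Supervision()).run({"sensor": "camera"})
print(result["trusted"])  # True: the inference block produced detections
```

Because every block honours the same interface, a domain-specific demo can replace, say, the Inference block with a rail obstacle detector without touching the rest of the pipeline.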

APPLICATIONS IN SPACE, AUTOMOTIVE AND RAIL DOMAINS

SPACE

Mission autonomy and AI to enable fully autonomous operations during space missions

Specific activities: Identify the target, estimate its pose, and monitor the agent's position to signal potential drifts, sensor faults, etc.

Use of AI: Decision ensemble

AUTOMOTIVE

Advanced methods and procedures to enable self-driving cars to accurately detect road users and predict their trajectories

Specific activities: Validate the system’s capacity to detect pedestrians, issue warnings, and perform emergency braking

Use of AI: Decision Function (mainly visualization oriented)

RAIL

Review of the viability of a safety architectural pattern for the completely autonomous operation of trains

Specific activities: Validate the system’s capacity to detect obstacles, issue warnings, and perform service braking

Use of AI: Data-quality and temporal consistency scores

SAFEXPLAIN Update: Building Trustworthy AI for Safer Roads

For enhanced safety in AI-based systems in the railway domain, the SAFEXPLAIN project has been working to integrate cutting-edge simulation technologies with robust communication frameworks. Learn more about how we’re integrating Unreal Engine (UE) 5 with Robot Operating System 2 (ROS 2) to generate dynamic, interactive simulations that accurately replicate real-world railway scenarios.
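One way to picture a simulator-to-ROS bridge like the UE 5 / ROS 2 integration described above is mapping raw simulator state onto ROS-style messages. The sketch below is purely illustrative and makes several assumptions: the dataclasses only mimic the fields of ROS 2's `geometry_msgs/msg/Pose`, and `simulator_tick`/`to_pose_msg` are hypothetical helpers, not project code.

```python
# Illustrative-only sketch of bridging simulator state into ROS-style
# messages, as in a UE5-to-ROS 2 pipeline. The dataclasses mimic the
# field layout of geometry_msgs/msg/Pose; none of this is SAFEXPLAIN code.
from dataclasses import dataclass, asdict
import json


@dataclass
class Point:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0


@dataclass
class Quaternion:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    w: float = 1.0  # identity orientation


@dataclass
class Pose:
    position: Point
    orientation: Quaternion


def simulator_tick() -> dict:
    """Stand-in for state exported from a simulated railway scene."""
    return {"train_x": 120.5, "train_y": 4.2, "train_z": 0.0}


def to_pose_msg(state: dict) -> Pose:
    """Map raw simulator coordinates onto a ROS-style Pose message."""
    return Pose(
        position=Point(state["train_x"], state["train_y"], state["train_z"]),
        orientation=Quaternion(),
    )


msg = to_pose_msg(simulator_tick())
print(json.dumps(asdict(msg)))
```

In a real deployment this mapping would feed an `rclpy` publisher so that downstream ROS 2 nodes consume simulated sensor and pose data exactly as they would real data.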

SAFEXPLAIN to Participate in 2025 HiPEAC Conference

Join us for two workshops at this year's HiPEAC conference. Partners IKERLAN and RISE will participate in the MCS: Mixed Critical Systems – Safe Intelligent CPS and the Development Cycle workshop and the Women@HPC MAR WHPC chapter workshop, Building the Diversity Continuum in Cutting-Edge Technologies. Both will take place on the second day of the conference.

SAFEXPLAIN @ AI, Data, Robotics Forum

SAFEXPLAIN is happy to support the 2024 edition of the AI, Data and Robotics Forum. This two-day event is helping to unite the AI, Data and Robotics (ADR) community to support responsible innovation. The theme of this year's forum is "European Sovereignty in AI, Data...

Webinar: XAI for systems with functional safety requirements

Robert Lowe, Senior Researcher in AI and Driver Monitoring Systems from partner RISE, will introduce new complexities to XAI in relation to functional safety, transparency and compliance with safety standards. In this 1.5 hour webinar, hosted by HiPEAC, Robert will...