Introducing SAFEXPLAIN:
Safe and Explainable Critical Embedded Systems based on AI
Objectives
To improve the explainability and traceability of DL components
To provide clear safety patterns for the incremental adoption of DL software in Critical Autonomous AI-based Systems (CAIS)
To integrate the SAFEXPLAIN libraries with an industrial system-testing toolset
To create architectures of DL components with quantifiable and controllable confidence, able to identify when a prediction should not be released because the input falls outside the scope of applicability or raises security concerns (illustrated in the sketch after this list)
To design, implement, or update selected representative DL software libraries according to safety patterns and safety lifecycle considerations, meeting specific performance requirements on relevant platforms
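The objective on confidence-aware DL architectures can be pictured as a small gating wrapper: a prediction is released only when a separate scope monitor judges the input to be within the model's domain of applicability and the prediction confidence clears a threshold. The following Python sketch is purely illustrative and is not taken from the SAFEXPLAIN libraries; all names (gate_prediction, in_distribution_score, confidence_threshold) are hypothetical.

# Illustrative sketch only: release a DL prediction solely when the input is
# judged in-scope and the model's confidence clears a threshold. These names
# are hypothetical, not part of the SAFEXPLAIN libraries.
from dataclasses import dataclass
from typing import Optional, Sequence


@dataclass
class GatedPrediction:
    label: Optional[int]   # None means the prediction was withheld
    confidence: float
    reason: str            # why the prediction was released or withheld


def gate_prediction(
    scores: Sequence[float],
    in_distribution_score: float,
    confidence_threshold: float = 0.9,
    ood_threshold: float = 0.5,
) -> GatedPrediction:
    """Release the top-scoring class only if both checks pass.

    scores: softmax-like class scores from the DL component.
    in_distribution_score: output of a separate scope monitor in [0, 1],
        where higher means the input resembles the training distribution.
    """
    best = max(range(len(scores)), key=lambda i: scores[i])
    confidence = scores[best]
    if in_distribution_score < ood_threshold:
        return GatedPrediction(None, confidence, "input outside scope of applicability")
    if confidence < confidence_threshold:
        return GatedPrediction(None, confidence, "confidence below threshold")
    return GatedPrediction(best, confidence, "released")


if __name__ == "__main__":
    # A confident, in-scope prediction is released...
    print(gate_prediction([0.02, 0.95, 0.03], in_distribution_score=0.8))
    # ...while an out-of-scope input is withheld even when the model is confident.
    print(gate_prediction([0.02, 0.95, 0.03], in_distribution_score=0.2))

Withholding a prediction, rather than silently emitting a low-confidence one, is what allows a surrounding safety pattern to fall back to a conventional, certifiable channel.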
Deep Learning (DL) techniques are key to most future advanced software functions in Critical Autonomous AI-based Systems (CAIS) in cars, trains and satellites. Hence, those CAIS industries depend on their ability to design, implement, qualify, and certify DL-based software products under bounded effort and cost.
Case studies
Railway: This case study assesses the viability of a safety architectural pattern for fully autonomous train operation (Automatic Train Operation, ATO) using intelligent Deep Learning (DL)-based solutions.
Space: This case employs state-of-the-art mission autonomy and artificial intelligence technologies to enable fully autonomous operations during space missions. These technologies are developed and validated in highly safety-critical scenarios.
SAFEXPLAIN invited talk, workshop and panel participation at 28th Ada-Europe conference
Coordinator Jaume Abella introduces Irune Yarza at the SAFEAI workshop, co-located with the 28th Ada-Europe conference. The 28th Ada-Europe International Conference on Reliable Software Technologies (AEiC 2024) was held in Barcelona, Spain, from 11-14 June 2024. This...
Developing safe and explainable AI for autonomous driving: Automotive case study
NAVINFO has been working to validate the real-world applicability of their work by deploying an autonomous driving system on an embedded compute platform. Two videos showcase the performance of their driving agent in relevant safety scenarios.
SAFEXPLAIN shares strategies for diverse redundancy in ML/AI Critical Systems session at ERTS ’24
Martí Caro from the Barcelona Supercomputing Center presents at the 2024 Embedded Real Time Systems congress. Barcelona Supercomputing Center researcher Martí Caro presented "Software-Only Semantic Diverse Redundancy for High-Integrity AI-Based Functionalities" at the...
TrustworthyAI Cluster Webinar hosted by ADRA-e
SAFEXPLAIN partner Enrico Mezzetti from the Barcelona Supercomputing Center will join the ADRA-e hosted webinar on "Trustworthy AI: Landscaping verifiable robustness and transparency" on 29 May 2024 from 10-12h. The TrustworthyAI Cluster, nine EU projects funded under the call Horizon...
SAFEXPLAIN presents at 22º Automotive SPIN Italia WS
Announcement from the Automotive SPIN Italia website. The SAFEXPLAIN project will be represented at the Automotive SPIN Italia 22º Workshop on Automotive Software & Systems. Carlo Donzella from partner exida development will share insights into "A Tale of Machine...
Challenges and approaches for the development of Artificial Intelligence (AI)-based Safety-Critical Systems
Jon Perez Cerrolaza from SAFEXPLAIN was invited to give a presentation on "Challenges and approaches for the development of Artificial Intelligence (AI)-based Safety-Critical Systems" at the Instituto Tecnológico de Informática (ITI). The talk was well received by around 25 researchers from ITI and the Polytechnic University of Valencia, who were interested in the SAFEXPLAIN perspective on AI, safety, explainability and trustworthiness.