Making critical autonomous AI-based systems safe

Objectives

To improve the explainability and traceability of DL components

To provide clear safety patterns for the incremental adoption of DL software in Critical Autonomous AI-based Systems (CAIS)

To integrate the SAFEXPLAIN libraries with an industrial system-testing toolset

To create architectures of DL components with quantifiable and controllable confidence, able to identify when a prediction should not be released because the input falls outside the scope of applicability or raises security concerns

To design, implement, or update selected representative DL software libraries according to safety patterns and safety lifecycle considerations, meeting specific performance requirements on relevant platforms
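The objective of controllable confidence can be pictured with a small sketch. This is illustrative only, not SAFEXPLAIN code: the names (`gated_predict`, `GatedPrediction`, `in_scope`) and the threshold value are hypothetical, and the idea is simply that a prediction is withheld when the input is out of scope or the model's confidence is too low.

```python
# Illustrative sketch (not project code): release a DL prediction only
# when the input is inside the declared scope of applicability AND the
# model's confidence clears a threshold; otherwise withhold it.
from dataclasses import dataclass
from typing import Callable, Optional, Sequence

@dataclass
class GatedPrediction:
    label: Optional[int]   # None means the prediction was withheld
    confidence: float
    reason: str

def gated_predict(
    predict: Callable[[Sequence[float]], tuple[int, float]],
    in_scope: Callable[[Sequence[float]], bool],
    x: Sequence[float],
    threshold: float = 0.9,
) -> GatedPrediction:
    """Gate the underlying model: check applicability first, then
    require confidence >= threshold before releasing the label."""
    if not in_scope(x):
        return GatedPrediction(None, 0.0, "out of applicability scope")
    label, conf = predict(x)
    if conf < threshold:
        return GatedPrediction(None, conf, "confidence below threshold")
    return GatedPrediction(label, conf, "released")
```

In a safety context, the "withheld" outcome would typically trigger a fallback in a conventionally engineered, safety-related component rather than simply being dropped.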

Deep Learning (DL) techniques are key for most future advanced software functions in Critical Autonomous AI-based Systems (CAIS) in cars, trains and satellites. Hence, those CAIS industries depend on their ability to design, implement, qualify, and certify DL-based software products under bounded effort and cost.

Case studies

Railway: Based on Automatic Train Operation (ATO), this case study checks the viability of a safety architectural pattern in which DL artificial-vision software elements serve as "sensors" that provide information to safety-related software elements

Space: This case study envisions the use of state-of-the-art mission autonomy and artificial intelligence technologies to enable fully autonomous operations during space missions

Automotive: This case study will consider Apollo deployed on a variety of prototype vehicles. It supports state-of-the-art hardware such as the latest LIDARs and cameras, as well as GPU acceleration
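The railway pattern above, where a DL vision component acts as a "sensor" whose output feeds a safety-related element, can be sketched as follows. This is a hypothetical illustration, not SAFEXPLAIN or ATO code: the names (`SensorReading`, `safe_speed_command`), thresholds, and the stop-on-low-confidence fallback are all assumptions chosen for clarity.

```python
# Illustrative sketch (not project code): a DL vision element acts as a
# "sensor" producing readings; a conventional safety-related element
# derives the actual control command, with a conservative fallback when
# the DL output cannot be trusted.
from dataclasses import dataclass

@dataclass
class SensorReading:
    distance_m: float   # DL-estimated distance to the nearest obstacle
    confidence: float   # the DL model's self-reported confidence

def safe_speed_command(reading: SensorReading,
                       max_speed: float = 80.0,
                       min_confidence: float = 0.8) -> float:
    """Safety-related element: map the DL 'sensor' reading to a speed
    command, degrading to a safe state on an untrusted reading."""
    if reading.confidence < min_confidence:
        return 0.0  # degraded mode: command a stop (conservative)
    if reading.distance_m < 100.0:
        # Obstacle nearby: scale speed down with remaining distance
        return min(max_speed, reading.distance_m * 0.5)
    return max_speed
```

The point of the pattern is that the DL element never commands the actuator directly: the decision logic stays in a conventionally assured component that treats DL outputs like any other imperfect sensor.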

SAFEXPLAIN to present in COMPSAC Autonomous Systems Symposium

The paper "Efficient Diverse Redundant DNNs for Autonomous Driving", co-authored by BSC researchers Martí Caro, Jordi Fornt and Jaume Abella, has been accepted for publication at the 47th IEEE International Conference on Computers, Software & Applications (COMPSAC).

Tweets 

👋in today's Automotive SPIN Italia 21st Workshop on #functional safety, #safexplainproject partner @exidadev presented on

👉 "User Cases and Scenario Catalogue for ML/DL-based solutions testing in Vehicles"

👥to 180+ participants.

More info https://t.co/ekDlUGYLbB https://t.co/3bnG76mbDL

The #safexplainproject will be at the Automotive SPIN Italia 21st WS on #automotive Software & System

@exidadev presents in the Functional Safety session "User Cases and Scenario Catalogue for ML/DL-based solutions testing in Vehicles"

Registration 👇
https://t.co/IXIN94E2o9

💡Learn how the #safexplainproject is exploring the application of #explainable AI algorithms to each stage of the #ML lifecycle using #data explainers

Read what our partner @RISEsweden has to say about this challenge👇

https://t.co/lDsGB5jYwl https://t.co/qoKGUNwQnr