News
SAFEXPLAIN talks Safety and AI at the 2023 VDA Conference on Quality, Safety and Security for Automotive Software-based Systems
SAFEXPLAIN partner EXIDA Development presented the SAFEXPLAIN project and an overview of the platform framework during the first day of the VDA Automotive SYS conference, focusing on Quality, Safety and Security for Automotive Software-based...
Gauging requirements and testing models for Space, Automotive and Railway Case Studies
Using three case studies from different industrial domains ensures that the project considers the needs of multiple fields whose common thread is the potential use of autonomous systems in complex environments, where AI can enable critical and powerful features.
COMPSAC 23: Presenting acceleration solutions based on Deep Neural Networks (DNNs) for use in safety-critical systems
BSC researcher Martí Caro presented “Efficient Diverse Redundant DNNs for Autonomous Driving” on 27 June 2023 at the Autonomous Systems Symposium (ASYS) within the 47th IEEE International Conference on Computers, Software & Applications (COMPSAC). The theme of...
Talking about Automotive Functional Safety at Automotive SPIN Italia
SAFEXPLAIN project partner EXIDA-dev presented at the 21st Workshop on Automotive Software and Systems, hosted by Automotive SPIN Italia.
Integrating Explainable AI techniques into Machine Learning Life Cycles
Written by Robert Lowe & Thanh Bui, Humanized Autonomy Unit, RISE, Sweden. Machine Learning life cycles for data science projects that deal with safety-critical outcomes require assurances of expected outputs at each stage of the life cycle for them to be...
SAFEXPLAIN to present at COMPSAC Autonomous Systems Symposium
The paper "Efficient Diverse Redundant DNNs for Autonomous Driving", coauthored by BSC authors Martí Caro, Jordi Fornt and Jaume Abella, has been accepted for publication in the 47th IEEE International Conference on Computers, Software & Applications (COMPSAC)....
SAFEXPLAIN Presentation on Safe and Explainable Critical Embedded Systems Based on AI at DATE Conference in Antwerp
SAFEXPLAIN presented its latest research on Safe and Explainable Critical Embedded Systems Based on AI at the Design, Automation and Test in Europe Conference (DATE) in Antwerp, Belgium. The DATE conference is a leading event for electronic system design and test,...
Status at month 6: The SAFEXPLAIN consortium meets at IKERLAN
Figure 1: Members of the SAFEXPLAIN consortium meet at IKERLAN's headquarters
SAFEXPLAIN partners spent two days at IKERLAN presenting the work carried out during the first six months of the project. Although project partners have been meeting frequently online to discuss...
Towards the Safety of Critical Embedded Systems Based on Artificial Intelligence: the Standardization Landscape
AI safety means ensuring that the operation of an AI system does not entail any unacceptable risks. It is essential to ensure that the AI system operates reliably, that unintended behaviour is mitigated, and that it is possible to explain how the AI system arrived at a particular decision.
Press Release: SAFEXPLAIN facilitates the safety certification of critical autonomous AI-based systems for a more competitive EU industry
Barcelona, 13 February 2023. - The EU-funded SAFEXPLAIN (Safe and Explainable Critical Embedded Systems based on AI) project, launched on 1 October 2022, seeks to lay the foundation for Critical Autonomous AI-based Systems (CAIS) applications that are smarter and safer...