by Janine Marie Gehrig Lux | Oct 14, 2025 | Uncategorized
The final phase of the automotive case study focused on building and validating a ROS2-based demonstrator that implements safe and explainable pedestrian emergency braking. Our architecture integrates AI perception modules (YOLOS-Tiny pedestrian detector, lane...
by Janine Marie Gehrig Lux | Oct 8, 2025 | Uncategorized
On 23 September 2025, members of the SAFEXPLAIN consortium joined forces with fellow European projects ULTIMATE and EdgeAI-Trust for the event “Trustworthy AI in Safety-Critical Systems: Overcoming Adoption Barriers.” The gathering brought together experts from...
by Janine Marie Gehrig Lux | Sep 18, 2025 | Uncategorized
SAFEXPLAIN partners shared key project results at the 28th Euromicro Conference on Digital System Design (DSD) 2025. Two papers were accepted to the conference proceedings and presented on 10 and 11 September 2025 by Francisco J. Cazorla from the Barcelona...
by Ariadna Rodríguez | Aug 14, 2025 | Uncategorized
SAFEXPLAIN’s success can be understood as a combination of outstanding scientific results and the vision to combine them to solve fundamental industrial challenges in making AI-based systems trustworthy. The project’s results and network of interested parties...
by Ariadna Rodríguez | Jul 22, 2025 | Uncategorized
As part of SAFEXPLAIN, Exida has contributed a methodology for the verification and validation (V&V) of AI-based components in safety-critical systems. The approach combines the two standards ISO 21448 (also known as SOTIF) and ISO 26262 to address...
by Janine Marie Gehrig Lux | Jul 3, 2025 | Uncategorized
Barcelona, 3 July 2025. The SAFEXPLAIN project has just publicly unveiled its Core Demo, offering a concrete look at how its open software platform can bring safe, explainable and certifiable AI to critical domains like space, automotive and rail. Showcased in a...