Towards the Safety of Critical Embedded Systems Based on Artificial Intelligence: the Standardization Landscape
AI safety means ensuring that the operation of an AI system does not entail any unacceptable risks. It is essential to ensure that the AI system operates reliably, that unintended behavior is mitigated, and that it is possible to explain how the AI system arrived at a particular decision.
Press Release: SAFEXPLAIN facilitates the safety certification of critical autonomous AI-based systems for a more competitive EU industry
Barcelona, 13 February 2023. – The EU-funded SAFEXPLAIN (Safe and Explainable Critical Embedded Systems based on AI) project, launched on 1 October 2022, seeks to lay the foundation for Critical Autonomous AI-based Systems (CAIS) applications that are smarter and...

Recognizing the Contributions of Women in Science
Women have made huge contributions to the scientific community. Raising the visibility and representation of women in science is key to ensuring that the next generation of scientists has positive role models and learns to value diversity and equity in...

Safexplain at HiPEAC conference 2023
Safexplain gave a talk and presented a poster at the HiPEAC conference 2023, which took place from 16 to 18 January 2023 in Toulouse, France.