Artificial Intelligence offers unprecedented opportunities for innovation and for improving quality of life, but it also raises critical issues of security, privacy, fairness and ethics.
On these issues, Accredia, the Italian Accreditation Body, in collaboration with the National Laboratory of Artificial Intelligence and Intelligent Systems (AIIS) of CINI (the National Interuniversity Consortium for Informatics), has developed a new study, “Technical standards and accredited conformity assessment for the development of Artificial Intelligence systems”.
The study was presented at the conference “Artificial Intelligence between Risk Assessment and Accredited Certification”, held on 16 October at Sapienza University of Rome. The meeting opened with a welcome address from Antonella Polimeni, Rector of Sapienza University of Rome, and a report by Massimo De Felice, President of Accredia, and featured contributions from Daniele Nardi, Professor of Artificial Intelligence at Sapienza University of Rome; Piercosma Bisconti, researcher at AIIS; and Daniele Gerundino, member of the Study Centre for Standardisation of UNI (the Italian Standardization Body).
High-risk AI systems
Based on a detailed analysis of the AI Act (EU Regulation 2024/1689), which provides a framework of rules to protect the interests of European citizens, the study examines the tools of technical standardisation, conformity assessment and accreditation, which will play a significant role in the Regulation’s implementation.
In particular, the study highlights how accreditation could support the development of reliable and secure Artificial Intelligence (AI) systems that protect citizens’ fundamental rights.
In Italy, Accredia’s role will be to support the authorities responsible for the governance of this sector by ensuring the competence of the bodies and laboratories called upon to assess the conformity of AI systems with technical standards. These standards will be developed by the European Standardisation Bodies, making the procedures needed to comply with the AI Act clearer and more effective.
Under the Regulation, high-risk AI systems will have to comply with a number of requirements aimed at minimising the potential harm associated with their use, including cybersecurity measures, data set quality requirements and human oversight. Only if the conformity assessment procedure is successful can the CE mark be affixed, attesting compliance with the Regulation.
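To give a concrete, purely illustrative idea of what evidence for the data set quality requirement might look like, the sketch below runs a few automated checks on a training set. It is our illustration, not a procedure from the study or the AI Act, and the thresholds and column names are hypothetical.

```python
# Illustrative sketch only: basic data set quality checks of the kind that
# could feed evidence into a conformity assessment. All thresholds and
# column names are hypothetical.
import pandas as pd

def dataset_quality_report(df: pd.DataFrame, label_col: str) -> dict:
    """Check completeness, duplication and class balance of a training set."""
    missing_ratio = float(df.isna().any(axis=1).mean())  # rows with any missing value
    duplicate_ratio = float(df.duplicated().mean())      # exact duplicate rows
    class_shares = df[label_col].value_counts(normalize=True)
    return {
        "rows": len(df),
        "missing_ratio": missing_ratio,
        "duplicate_ratio": duplicate_ratio,
        "min_class_share": float(class_shares.min()),
        # Hypothetical acceptance thresholds:
        "passes": missing_ratio < 0.01
        and duplicate_ratio < 0.01
        and float(class_shares.min()) > 0.10,
    }

# Toy example: a tiny labelled data set with one incomplete row.
df = pd.DataFrame({
    "feature": [0.1, 0.4, 0.4, None, 0.9, 0.2],
    "label": ["benign", "benign", "benign", "malignant", "malignant", "benign"],
})
print(dataset_quality_report(df, "label"))
```

In practice such checks would be one small part of the technical documentation reviewed during conformity assessment, alongside evidence on representativeness, provenance and labelling quality.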
The role of accreditation and training
“As in other European Regulations, the AI Act provides for the involvement of Notifying Authorities and Notified Bodies in cases where the protection needs of European citizens require special attention,” explained Massimo De Felice, President of Accredia.
“Training plays a crucial role,” De Felice added, “because in order to ensure that the implementation of AI systems complies with ethical and regulatory standards, it is essential that professionals, developers and organisations acquire a solid knowledge of emerging technologies and their impacts.”
Technical standards and Proof of Concept
“The definition of standards and technical regulations plays a crucial role in the application of the AI Act,” stressed Daniele Nardi, Professor of Artificial Intelligence at Sapienza University of Rome. “CINI’s National Laboratory of Artificial Intelligence and Intelligent Systems actively contributes to this path, interacting with various institutional actors, in particular Accredia, with the aim of providing the necessary technical and scientific support.”
“In the project with Accredia,” Nardi explained, “the development of Proofs of Concept, that is, concrete examples of the application of standards, has been particularly important: only through dedicated experimentation is it possible to define the steps of a process that forms the operational counterpart of the effort to make the use of AI systems safe and respectful of fundamental rights.”
Case studies in the medical field
In the medical field, the application of AI systems is one of the uses requiring the intervention of a Notified Body. In this context, the study develops two case studies, one on melanoma detection and one on the stratification of patients with multiple sclerosis, simulating a conformity verification against ISO/IEC TR 24027:2021 “Information technology – Artificial intelligence (AI) – Bias in AI systems and AI aided decision making”.
The simulation of conformity verifications during the development of AI systems, prior to their release, highlighted the importance of a systematic and technical approach aimed at minimising the risk of bias, which can undermine the effectiveness of AI systems.
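To illustrate the kind of check such a verification involves, the sketch below (our illustration, not the procedure used in the study) compares a classifier’s false negative rates across patient subgroups, one of the disparity measures discussed in ISO/IEC TR 24027:2021. The group labels, data and tolerance are hypothetical.

```python
# Illustrative sketch only: comparing false negative rates across patient
# subgroups. Group labels, outcomes and the 0.05 tolerance are hypothetical.
from collections import defaultdict

def false_negative_rates(y_true, y_pred, groups):
    """Per-group FNR: the share of true positive cases the model missed."""
    positives = defaultdict(int)
    misses = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1:
            positives[group] += 1
            if pred == 0:
                misses[group] += 1
    return {g: misses[g] / positives[g] for g in positives}

def fnr_disparity_ok(y_true, y_pred, groups, tolerance=0.05):
    """Flag whether the FNR gap between the best and worst group is acceptable."""
    rates = false_negative_rates(y_true, y_pred, groups)
    return max(rates.values()) - min(rates.values()) <= tolerance, rates

# Hypothetical melanoma-detection outcomes for two skin-tone groups.
y_true = [1, 1, 1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]
groups = ["light", "light", "light", "light", "light",
          "dark", "dark", "dark", "dark", "dark"]
ok, rates = fnr_disparity_ok(y_true, y_pred, groups)
print(rates, "within tolerance:", ok)
```

In this toy data the model misses malignant cases twice as often in one group as in the other, exactly the kind of disparity a bias verification is meant to surface before release.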
The study also reports a case study in the Public Administration sector, carried out with INAIL (the National Institute for Insurance against Accidents at Work), which highlights the importance of ISO/IEC 42001:2023 “Information technology – Artificial intelligence – Management system”. Compliance with this standard ensures that every automated decision-making process undergoes rigorous verification before being put into operation. This approach is particularly relevant in view of the AI Act, which emphasises the importance of adequate risk and quality management in AI.
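As a rough illustration of that principle (ours, not INAIL’s actual implementation), a pre-deployment gate in the spirit of ISO/IEC 42001 might refuse to release a model until every required verification has a recorded, passing result; the check names below are hypothetical.

```python
# Illustrative sketch only: a pre-deployment gate that blocks release until
# every required verification has a recorded passing result.
# The set of required checks is hypothetical.
from dataclasses import dataclass

@dataclass
class VerificationRecord:
    name: str
    passed: bool
    evidence: str  # e.g. a link to the test report

REQUIRED_CHECKS = {"data_quality", "bias_assessment", "human_oversight_review"}

def release_allowed(records: list[VerificationRecord]) -> bool:
    """Allow release only if all required checks have a passing record."""
    passed = {r.name for r in records if r.passed}
    return REQUIRED_CHECKS <= passed

records = [
    VerificationRecord("data_quality", True, "reports/dq.pdf"),
    VerificationRecord("bias_assessment", True, "reports/bias.pdf"),
]
print("release allowed:", release_allowed(records))  # False: oversight review missing
```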