Semantic Frameworks to Support the EU AI Act's Risk Management and Documentation

File Type: PDF
Item Type: Thesis
Date: 2025
Author: S Golpayegani, Seyedeh Delaram
Access: openAccess
Citation: S Golpayegani, Seyedeh Delaram, Semantic Frameworks to Support the EU AI Act's Risk Management and Documentation, Trinity College Dublin, School of Computer Science & Statistics, Computer Science, 2025
Abstract:
The European Union (EU) Artificial Intelligence Act (AI Act), which entered into force on 1 August 2024, stands as a landmark legal regime for the development and use of AI. It adopts a risk-based approach to govern the potential risks of AI to key areas of concern, including health, safety, and fundamental rights. Under the AI Act, AI systems are subject to regulatory obligations proportionate to the level of risk they pose. Within this risk-based classification, high-risk AI systems must comply with the Act's more rigorous provisions, which are to be addressed by AI providers and deployers.
Translating the AI Act's legal provisions into practical approaches and technical measures for implementing its essential requirements calls for a range of guidelines, many of which must be derived from evidence-based regulatory learning. As the Act has only recently entered into force, such regulatory insights are not yet established, which has created legal uncertainty regarding compliance with the AI Act. In this context, Regulatory Technology (RegTech) can serve as an enabling force to support the effective implementation and enforcement of the Act, while enhancing legal certainty through regulatory learning.
Focusing on risk management as a central element of the AI Act, this thesis addresses the current lack of RegTech solutions by proposing a compendium of Semantic Web-based artefacts to facilitate compliance with the requirements of the AI Act regarding risk management, risk documentation, and registration of AI systems in a Findable, Accessible, Interoperable, and Reusable (FAIR) manner. To achieve this, specific requirements of the AI Act, related to risk management, documentation, and registration, are analysed. In the current absence of authoritative guidelines and harmonised European standards to guide compliance with the Act, this analysis utilises existing ISO/IEC standards on AI, which are strong candidates for harmonisation and can therefore potentially support the implementation of the AI Act.
As a major contribution, this work proposes a novel compendium of artefacts based on Semantic Web technologies to assist with AI Act compliance tasks. This compendium is centred around the AI Risk Ontology (AIRO), a foundational ontology for modelling AI risks, and its specialisation, the AI Risk Vocabulary (VAIR), a taxonomy of the concepts provided in AIRO that enables its use in practical applications. Using these two ontologies, this thesis illustrates how open, transparent, traceable, comparable, and interoperable information models of AI use cases, covering the system, its context of use, and its risks, can be created. To further demonstrate the functionality of AIRO and VAIR, this thesis leverages the capabilities of the Semantic Web technology stack in rule-checking, querying, expressing policies, and cataloguing information to assist with AI Act compliance tasks regarding risk management, documentation, and registration.
AIRO and VAIR are novel AI risk ontologies developed based on the AI Act that are explicitly aligned with relevant ISO/IEC standards in anticipation of harmonised standards for the Act. This thesis also implements the first set of open, standardised, and extensible artefacts for determining high-risk AI systems, generating AI and risk documentation, expressing AI use policies, and cataloguing AI systems as required by the AI Act.
Complementing this contribution, this thesis introduces AI Cards as a documentation framework that provides a holistic view of an AI use case and its risks in both human- and machine-readable formats, aligned with the AI Act, to facilitate communication and sharing of key AI and risk information among various AI stakeholders.
The contributions of this thesis support the development of standards-based automated tools to address AI risk management and documentation challenges, particularly those related to compliance with the AI Act. This is especially important for providers and deployers of AI systems, who must maintain and share AI and risk information in a manageable, transparent, interoperable, and verifiable manner. In addition, this standards-based automation enables the tracking and verification of claims regarding risk management and thereby facilitates conformity assessment tasks for authorities, particularly conformity assessment bodies.
Sponsor
Grant Number
European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 813497 (PROTECT ITN)
ADAPT SFI Centre for Digital Media Technology funded by Science Foundation Ireland (Grant#13/RC/2106_P2)
Author's Homepage: https://tcdlocalportal.tcd.ie/pls/EnterApex/f?p=800:71:0::::P71_USERNAME:SGOLPAYS
Description: APPROVED
Advisor: Lewis, David; Pandit, Harshvardhan J.; O'Sullivan, Declan
Publisher: Trinity College Dublin. School of Computer Science & Statistics. Discipline of Computer Science
Type of material: Thesis
Availability: Full text available