Trustworthy AI and accountability: yes, but how?
As AI systems penetrate deeper into organizations and society, legitimate concerns about whether they work effectively for the full range of users are driving legislators worldwide to seek ways to curb the downsides of these systems for individuals and society. One such regulatory initiative is the (proposed) EU AI Act, which is expected to become a reference point in the international debate on the regulation of AI.
The AI Act specifically introduces an obligation for providers of (what it defines as) high-risk AI systems to demonstrate compliance with safety requirements for Trustworthy AI through two accountability mechanisms: (a) pre-market conformity assessment and (b) post-market monitoring. However, the Act's elaboration of these mechanisms provides little guidance on what this practice should entail, creating a regulatory gray area that undermines the Act's relevance.
This dissertation focuses on the gap between the high-level objectives and requirements of the Act and the day-to-day practice of Trustworthy AI, and makes recommendations to address the lack of methodology connecting the two. The study arrives at these recommendations by:
• juxtaposing (i) the Act's proposed top-down approach to demonstrating Trustworthy AI with (ii) the emerging, interdisciplinary theory and bottom-up practice of algorithm auditing, to examine how the parallel trajectory of the latter may cross-fertilize the former;
• applying the foundations of Design Science Research to establish design-oriented recommendations for the logic of demonstrating Trustworthy AI.
This results in 17 detailed recommendations across 7 focus areas, aimed at strengthening the Act's ability to provide effective and workable solutions both for those engaged in the day-to-day practice of demonstrating Trustworthy AI and for those responsible for designing the laws and regulations that govern this practice.