
The European AI Act introduces the Fundamental Rights Impact Assessment (FRIA) as a key mechanism to evaluate the potential risks of high-risk AI systems. The need for FRIA is evident, given the rapid evolution of AI and its growing impact on fundamental rights. However, implementing FRIA raises crucial questions: How can organizations effectively implement the FRIA in a way that ensures compliance and legitimacy? And more importantly, is operationalizing this requirement truly feasible?
This blog is written by Sahar Samavati Lavrsen, a seasoned lawyer with nearly a decade of experience in IT and technology law. She currently focuses on developing effective and practical governance frameworks for AI use.
The European AI Act introduced a new concept known as the "FRIA" (Fundamental Rights Impact Assessment). Article 27 of the AI Act establishes the framework for the FRIA, focusing on the assessment of high-risk AI systems, which includes identifying any groups or individuals who may be affected by the system's use. Much like the Data Protection Impact Assessment (DPIA), a FRIA must be carried out before the high-risk AI system is put into use. Additionally, the assessment must be updated whenever significant changes occur, or when the existing FRIA no longer reflects the current situation.
The rationale behind the FRIA concept is clear and certainly called for, especially given the rapid advancements in AI and the risks that accompany them. However, a critical question arises: how can organizations effectively implement this concept in a way that ensures its legitimacy? And more importantly, is such operationalization even feasible?
When assessing Article 27 and Recital 96 of the AI Act, the purpose of the FRIA appears to be twofold: first, organizations must identify the affected individuals and/or groups; second, they must identify the risks of harm to those individuals' fundamental rights.
Recital 96 sets out the deployer's obligation to identify the specific risks to the rights of individuals or groups of individuals who are likely to be affected.
Yet Article 27 and Recital 96 offer little to no guidance on how this can be operationalized. Examining Recital 96 further, it states: "Where appropriate, to collect relevant information necessary to perform the impact assessment, deployers of high-risk AI systems, in particular when AI systems are used in the public sector, could involve relevant stakeholders, including the representatives of groups of persons likely to be affected by the AI system." This appears to be the only suggestion on how to meet the requirement. However, it is not phrased as an actual obligation, as indicated by the wording "could involve." This suggests that it is entirely at the discretion of organizations whether or not to involve relevant groups. There is no doubt that involving relevant groups would be beneficial when mapping potential risks, but one could argue that in fast-paced organizations where time to market is critical, this "option" will most likely be neglected.
The second part of Recital 96 concerns the deployer's obligation to identify the specific risks of harm likely to have an impact on the fundamental rights of those persons or groups. This leads to the Charter of Fundamental Rights of the European Union, which sets out, in 50 articles, fundamental rights that are recognized and safeguarded to the highest standard.
From a legal perspective, all fundamental rights carry equal importance and are, or at least should be, protected with the same level of rigor. Consequently, this implies that a FRIA must always consider and evaluate each fundamental right in turn during the assessment process. The scope of such an assessment will therefore be extensive and potentially error-prone, since the direct and indirect impacts on fundamental rights are typically highly complex.
From a practical standpoint, the FRIA concept sets exceptionally high expectations whilst offering very little operational advice for organizations seeking to leverage high-risk AI systems. It seems nearly impossible to carry out an evaluation that genuinely assesses and documents how a high-risk AI system impacts fundamental rights and the associated risks for individuals and groups.
As technology evolves more quickly than most of us could have ever imagined, one might fear that these well-intentioned assessments will become "best-guess" documentation, created merely to satisfy governing bodies. Even worse, producing the required documentation may deter certain organizations from leveraging high-risk AI in the future.
Collaborate with us!
As always, we appreciate you taking the time to read our blog post.
If you have news relevant to our global WAI community or expertise in AI and law, we invite you to contribute to the WAI Legal Insights Blog. To explore this opportunity, please contact Silvia A. Carretta, WAI Chief Legal Officer (via LinkedIn or silvia@womeninai.co), or Dina Blikshteyn (dina@womeninai.co).
Silvia A. Carretta and Dina Blikshteyn
- Editors