Article

Practical fundamental rights impact assessments

The European Union’s General Data Protection Regulation (GDPR) tasks organizations with performing a Data Protection Impact Assessment (DPIA) to consider the fundamental rights risks of their artificial intelligence (AI) systems. However, assessing these risks can be challenging, as fundamental rights are often considered abstract in nature. So far, guidance regarding DPIAs has largely focused on data protection, leaving broader fundamental rights aspects less elaborated. This is problematic, because potential negative societal consequences of AI systems may remain unaddressed and damage public trust in the organizations using AI. To address this, we introduce a practical, four-phased framework that assists organizations in performing fundamental rights impact assessments. This involves organizations (i) defining the system’s purposes and tasks, and the responsibilities of the parties involved in the AI system; (ii) assessing the risks arising from the system’s development; (iii) justifying why the risks of potential infringements on rights are proportionate; and (iv) adopting organizational and/or technical measures to mitigate the identified risks. We further indicate how regulators might support these processes with practical guidance.

H. Janssen, M. Seng Ah Lee and J. Singh, “Practical Fundamental Rights Impact Assessments” (2022) International Journal of Law and Information Technology.

