Presentations by dr. Mariangela Veikou and dr. Heleen Janssen.
Chair: prof.dr. Caroline Nevejan.
(All participants agreed to the filming of the meeting and its publication on openresearch.)
Digitalisation is changing how we define and organise our society, and even citizenship itself. At this workshop we explore how Digital Citizenship may open up possibilities for inclusivity, rather than merely posing inherent challenges.
NIAS Digital Urban Citizen fellows dr. Mariangela Veikou and dr. Heleen Janssen examine how the use of digital technologies plays out in the real world. Access to digital technologies is only part of the problem in our unequal world. From the very notion of citizenship and concerns over equity to the government’s use of digital data generated by the private sector or by existing structures, our work explores how the use of digital technology may directly or indirectly affect individuals, communities, and (local) government itself.
Through theoretical and methodological explorations of contemporary law, political philosophy and social theory, combined with interviews and fieldwork, we study how digitalisation impacts current structures of participation and inclusivity.
Specifically, we explore how: (1) digital tools may shape the way we find solutions or make political decisions about inclusivity in society, in domains traditionally considered sensitive and inviolable, such as citizenship and nationhood; and (2) data held by the private sector could be shared responsibly with the municipality, while respecting the rights and interests of both citizens and the private sector.
This meeting took place on 23 March 2023 at DataLab Amsterdam.
In this research report, we analyse the legal framework applicable to the processing by private parties of sensor data about people’s behaviour in public space, and we advise the municipality on how it could deploy its municipal legal instruments to improve compliance with fundamental rights. In addition, we analyse the legal framework surrounding B2G (business-to-government) data sharing and advise on how the municipality could apply the aforementioned legal instruments so that, where necessary, B2G data sharing can take place more often, in support of a better fulfilment of the municipal public task and of the innovation purposes of other professional parties active within the municipality. The advice is based on legal research and on semi-structured interviews with employees of the municipality of Amsterdam and of companies active in the municipality of Amsterdam.
Research commissioned by the municipality of Amsterdam
Mr. dr. H.L. Janssen
Mr. L.W. Verboeket
Mr. A. Meiring
Mr. dr. J.V.J. van Hoboken
Prof. mr. M.M.M. van Eechoud
Prof. mr. J.E. van den Brink
Prof. mr. R. Ortlep
Dr. B. Bodó
Universiteit van Amsterdam
Instituut voor Informatierecht
Department of Public Law, Constitutional and Administrative Law Section
Data intermediaries serve as mediators between those who wish to make their data available and those who seek to leverage that data. The intermediary governs the data in specific ways and provides some degree of confidence regarding how the data will be used.
H. Janssen & J. Singh, ‘Data intermediary’ (2022) Internet Policy Review 11(1). https://doi.org/10.14763/2022.1.1644
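To make the mediating role concrete, here is a minimal, purely illustrative Python sketch of an intermediary that only releases a dataset for purposes its provider has declared, while keeping an audit trail. All class and function names are our own hypothetical shorthand, not drawn from the cited paper.

from dataclasses import dataclass, field

@dataclass
class Offer:
    # A dataset made available under conditions set by its provider.
    data: dict
    allowed_purposes: set

@dataclass
class Intermediary:
    # Mediates between data providers and data users, enforcing the
    # provider's purpose limits and logging every access decision.
    offers: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def deposit(self, name, data, allowed_purposes):
        self.offers[name] = Offer(data, set(allowed_purposes))

    def request(self, name, requester, purpose):
        offer = self.offers[name]
        if purpose not in offer.allowed_purposes:
            self.audit_log.append(f"denied {requester}: {name} for {purpose}")
            raise PermissionError(f"'{purpose}' not permitted for '{name}'")
        self.audit_log.append(f"granted {requester}: {name} for {purpose}")
        return offer.data

broker = Intermediary()
broker.deposit("footfall-sensors", {"2023-03-23": 1042}, {"urban-planning"})
print(broker.request("footfall-sensors", "municipality", "urban-planning"))

The point of the sketch is the separation of roles: the provider sets the conditions once, and the intermediary, not the data user, decides each release and can account for it afterwards.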
Though discussions of data protection have focused on larger, more established organisations, startups also warrant attention. This is particularly so for tech startups, which are often innovating at the ‘cutting edge’, pushing the boundaries of technologies that typically lack established data protection best practices. Initial decisions taken by startups could well have long-term impacts, and their actions may inform (for better or for worse) how particular technologies and the applications they support are implemented, deployed, and perceived for years to come. Ensuring that the innovations and practices of tech startups are sound, appropriate and acceptable should therefore be a high priority. This paper explores the attitudes and preparedness of tech startups regarding data protection. We interviewed a series of UK-based emerging tech startups as the EU’s General Data Protection Regulation (GDPR) came into effect, which revealed areas in which there is a disconnect between the approaches of the startups and the nature and requirements of the GDPR. We discuss the misconceptions and associated risks facing innovative tech startups and offer a number of considerations for the firms and supervisory authorities alike. In light of our discussions, and given what is at stake, we argue that more needs to be done to help ensure that emerging technologies and the practices of the companies that operate them better align with regulatory obligations. We conclude that tech startups warrant increased attention, support, and scrutiny to raise the standard of data protection for the benefit of us all.
C. Norval, H. Janssen, J. Cobbe, J. Singh, ‘Data Protection and Tech Startups: The Need for Attention, Support, and Scrutiny’ (2019).
The European Union’s General Data Protection Regulation tasks organizations with performing a Data Protection Impact Assessment (DPIA) to consider the fundamental rights risks of their artificial intelligence (AI) systems. However, assessing risks can be challenging, as fundamental rights are often considered abstract in nature. So far, guidance regarding DPIAs has largely focused on data protection, leaving broader fundamental rights aspects less elaborated. This is problematic because potential negative societal consequences of AI systems may remain unaddressed and damage public trust in organizations using AI. Towards this, we introduce a practical, four-phase framework assisting organizations with performing fundamental rights impact assessments. This involves organizations (i) defining the system’s purposes and tasks, and the responsibilities of parties involved in the AI system; (ii) assessing the risks posed by the system’s development; (iii) justifying why the risks of potential infringements on rights are proportionate; and (iv) adopting organizational and/or technical measures mitigating the risks identified. We further indicate how regulators might support these processes with practical guidance.
H. Janssen, M. Seng Ah Lee, J. Singh, ‘Practical Fundamental Rights Impact Assessments’ (2022) International Journal of Law and Information Technology.
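As a rough illustration of how the four phases might be tracked in practice, the following Python sketch models them as an ordered checklist that only counts as complete once every phase is documented. The phase wording paraphrases the abstract above; everything else is a hypothetical assumption on our part, not part of the published framework.

from dataclasses import dataclass, field

# The four phases, paraphrased from the abstract above.
PHASES = (
    "define purposes, tasks and responsible parties",
    "assess fundamental-rights risks of the system's development",
    "justify why risks of potential rights infringements are proportionate",
    "adopt organisational and/or technical mitigation measures",
)

@dataclass
class Assessment:
    # Findings recorded per phase; the assessment is complete only
    # when every phase has been documented.
    findings: dict = field(default_factory=dict)

    def record(self, phase, finding):
        if phase not in PHASES:
            raise ValueError(f"unknown phase: {phase}")
        self.findings[phase] = finding

    def complete(self):
        return all(p in self.findings for p in PHASES)

fria = Assessment()
fria.record(PHASES[0], "welfare-fraud scoring; municipality acts as controller")
print(fria.complete())  # False: three phases remain undocumented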
Emerging technologies permeate and potentially disrupt a wide spectrum of our social, economic, and political relations. Various state institutions, including education, law enforcement, and healthcare, increasingly rely on technical components, such as automated decision-making systems, e-government systems, and other digital tools, to provide cheap, efficient public services, and supposedly fair, transparent, disinterested, and accountable public administration. The increased interest in various blockchain-based solutions, from central bank digital currencies, via tokenized educational credentials and distributed ledger-based land registries, to self-sovereign identities, is the latest, still mostly unwritten chapter in a long history of standardized, objectified, automated, technocratic, and technologized public administration. The rapid, (often) unplanned, and uncontrolled technologization of public services (as happened in the hasty adoption of distance-learning and teleconferencing systems during Corona Virus Disease (COVID) lockdowns) raises complex questions about the use of novel technological components, which may or may not be ultimately adequate for the task for which they are used. The question of whether we can trust the technical infrastructures the public sector uses when providing public services is a central concern in an age where trust in government is declining: if the government’s artificial intelligence system that detects welfare fraud fails, the public’s confidence in the government is ultimately hit. In this paper, we provide a critical assessment of how the use of potentially untrustworthy (private) technological systems, including blockchain-based systems, in the public sector may affect trust in government. We then propose several policy options to protect trust in government even if some of its technological components prove fundamentally untrustworthy.
B. Bodó & H. Janssen, ‘Maintaining trust in a technologized public sector’ (2022) Policy & Society 41(3).
Personal information management systems (PIMS), also known as personal data stores (PDSs), represent an emerging class of technology that seeks to empower individuals regarding their data. Presented as an alternative to current ‘centralised’ data processing approaches, whereby user data is (rather opaquely) collected and processed by organisations, PDSs provide users with technical mechanisms for aggregating and managing their own data, determining when and with whom their data is shared, and the computation that may occur over that data. Though arguments for decentralisation may be appealing, there are questions regarding the extent to which PDSs actually address data processing concerns. This paper explores these questions from the perspective of PDS users. Specifically, we focus on data protection, including how PDSs relate to rights and the legal bases for processing, as well as how PDSs affect the information asymmetries and surveillance practices inherent online. We show that, despite the purported benefits of PDSs, many of the systemic issues of online/data ecosystems remain.
H. Janssen, J. Cobbe, J. Singh, ‘Personal Information Management Systems: A user-centric privacy utopia?’ (2020) Internet Policy Review 9(4).
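The following minimal Python sketch illustrates the user-centric idea described above: raw data stays in the user’s store, and an approved party only receives the result of a computation over it. The design and all names are hypothetical assumptions for illustration, not a description of any actual PDS implementation.

class PersonalDataStore:
    # The user's own store: raw data never leaves it; approved parties
    # may only run computations over it and receive the result.
    def __init__(self):
        self._data = {}
        self._grants = set()  # parties the user has approved

    def put(self, key, value):
        self._data[key] = value

    def grant(self, party):
        self._grants.add(party)

    def compute(self, party, fn):
        if party not in self._grants:
            raise PermissionError(f"{party} has no grant from the user")
        return fn(self._data)

pds = PersonalDataStore()
pds.put("steps", [8200, 9100, 7600])
pds.grant("health-app")
print(pds.compute("health-app", lambda d: sum(d["steps"]) / len(d["steps"])))
# only the average leaves the store; the raw series does not

Even in this toy form, the paper’s caveat is visible: the user must still trust the computation they approve, so the information asymmetry shifts rather than disappears.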
- Data subject rights constitute critical tools for empowerment in the digitized society. There is a growing trend of relying on third parties to facilitate or coordinate the collective exercise of data rights on behalf of one or more data subjects.
- This contribution refers to these parties as ‘Data Rights Intermediaries’ (DRIs), i.e. where an ‘intermediating’ party facilitates or enables the collective exercise of data rights. The exercise of data rights by these DRIs on behalf of data subjects can only be effectuated with the help of mandates.
- Data rights mandates are not expressly framed in the GDPR, and their delineation can be ambiguous. It is important to highlight that data rights are mandatable, and that this does not affect their inalienability, which follows from their nature as fundamental rights.
- This article argues that contract law and fiduciary duties both have longstanding traditions and robust norms in many jurisdictions, all of which can be explored towards shaping the appropriate environment to regulate data rights mandates in particular.
- The article concludes that the key to unlocking the full potential of data rights mandates can already be found in existing civil law constructs, whose diversity reveals the need to solidify the responsibility and accountability of mandated DRIs. Continued adherence to fundamental contract law principles will have to be complemented by a robust framework of institutional safeguards. The need for such safeguards stems from the vulnerable position of data subjects, vis-à-vis both DRIs and data controllers.
A. Giannopoulou, J. Ausloos, S. Delacroix, H. Janssen, ‘Intermediating data rights exercises: the role of legal mandates’ (2022) International Data Privacy Law, pp. 1–16.
Research proposal by Heleen Janssen for the NIAS Digital Urban Citizen fellowship.
Author: Heleen Janssen.