Digital technology is changing people’s lives. The EU’s digital strategy aims to make this transformation work for people and businesses, while helping to achieve its target of a climate-neutral Europe by 2050. The Commission is determined to make this Europe's “Digital Decade”. Europe must now strengthen its digital sovereignty and set standards, rather than following those of others – with a clear focus on data, technology, and infrastructure.
EU and AI
Artificial intelligence (AI) can help find solutions to many of society’s problems. This can only be achieved if the technology is of high quality, and developed and used in ways that earn people’s trust. An EU strategic framework based on EU values will therefore give citizens the confidence to accept AI-based solutions, while encouraging businesses to develop and deploy them.
This is why the European Commission has proposed a set of actions to boost excellence in AI, and rules to ensure that the technology is trustworthy.
The Regulation on a European Approach for Artificial Intelligence and the update of the Coordinated Plan on AI will guarantee the safety and fundamental rights of people and businesses, while strengthening investment and innovation across EU countries.
Building trust through the first-ever legal framework on AI
The Commission is proposing new rules to make sure that AI systems used in the EU are safe, transparent, ethical, unbiased and under human control. To that end, the proposal categorises AI systems by risk:
Unacceptable: Anything considered a clear threat to EU citizens will be banned: from social scoring by governments to toys using voice assistance that encourage dangerous behaviour in children.
High risk:
Critical infrastructures (e.g. transport), which could put the life and health of citizens at risk
Educational or vocational training, which may determine access to education and the professional course of someone’s life (e.g. scoring of exams)
Safety components of products (e.g. AI application in robot-assisted surgery)
Employment, workers management and access to self-employment (e.g. CV sorting software for recruitment procedures)
Essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan)
Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence)
Migration, asylum and border control management (e.g. verification of authenticity of travel documents)
Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts)
All high-risk AI systems will be carefully assessed before being put on the market and throughout their lifecycle.
Limited risk: AI systems such as chatbots are subject to specific transparency obligations, intended to allow those interacting with them to make informed decisions. The user can then decide to continue or step back from using the application.
Minimal risk: Free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category, where the new rules do not intervene, as these systems represent only minimal or no risk to citizens’ rights or safety.
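The four-tier scheme above can be summarised as a simple lookup. The sketch below is purely illustrative: the tier names and example mappings are paraphrased from the categories described in this section, and assigning a concrete system to a tier is a legal determination under the proposed Regulation, not a technical one.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers described in the proposed AI framework."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "assessed before market entry and throughout the lifecycle"
    LIMITED = "transparency obligations towards users"
    MINIMAL = "no new obligations"

# Hypothetical example systems, mapped to tiers following the text above.
EXAMPLES = {
    "social scoring by governments": RiskTier.UNACCEPTABLE,
    "CV-sorting software for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```

The point of the structure is that obligations attach to the tier, not to the individual system: once a system is classified, its regulatory treatment follows.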
Proposal for a Regulation laying down harmonised rules on artificial intelligence
The Commission is proposing the first ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.
This explanatory memorandum accompanies the proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Artificial Intelligence (AI) is a fast-evolving family of technologies that can bring a wide array of economic and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising service delivery, the use of artificial intelligence can support socially and environmentally beneficial outcomes and provide key competitive advantages to companies and the European economy. Such action is especially needed in high-impact sectors, including climate change, environment and health, the public sector, finance, mobility, home affairs and agriculture.

However, the same elements and techniques that power the socio-economic benefits of AI can also bring about new risks or negative consequences for individuals or society. In light of the speed of technological change and possible challenges, the EU is committed to striving for a balanced approach. It is in the Union’s interest to preserve the EU’s technological leadership and to ensure that Europeans can benefit from new technologies developed and functioning according to Union values, fundamental rights and principles.
Report on safety and liability implications of Artificial Intelligence, the Internet of Things and robotics
Artificial Intelligence (AI), the Internet of Things (IoT) and robotics will create new opportunities and benefits for our society. The Commission has recognised the importance and potential of these technologies and the need for significant investment in these areas. It is committed to making Europe a world-leader in AI, IoT and robotics. In order to achieve this goal, a clear and predictable legal framework addressing the technological challenges is required.
The overall objective of the safety and liability legal frameworks is to ensure that all products and services, including those integrating emerging digital technologies, operate safely, reliably and consistently, and that any damage that occurs is remedied efficiently. High levels of safety for products and systems integrating new digital technologies, together with robust mechanisms for remedying damage (i.e. the liability framework), help to better protect consumers. They also create trust in these technologies, a prerequisite for their uptake by industry and users. This in turn will leverage the competitiveness of our industry and contribute to the objectives of the Union. A clear safety and liability framework is particularly important when new technologies like AI, the IoT and robotics emerge, both to ensure consumer protection and to give legal certainty to businesses.
The Union has a robust and reliable safety and product liability regulatory framework and a robust body of safety standards, complemented by national, non-harmonised liability legislation. Together, they ensure the well-being of our citizens in the Single Market and encourage innovation and technological uptake. However, AI, the IoT and robotics are transforming the characteristics of many products and services.
The Communication on Artificial Intelligence for Europe, adopted on 25 April 2018, announced that the Commission would submit a report assessing the implications of emerging digital technologies for the existing safety and liability frameworks. This report aims to identify and examine the broader implications for, and potential gaps in, the liability and safety frameworks for AI, the IoT and robotics. The orientations provided in this report, which accompanies the White Paper on Artificial Intelligence, are offered for discussion and form part of the broader consultation of stakeholders. The safety section builds on the evaluation of the Machinery Directive and the work with the relevant expert groups. The liability section builds on the evaluation of the Product Liability Directive, the input of the relevant expert groups and contacts with stakeholders. This report does not aim to provide an exhaustive overview of the existing rules for safety and liability, but focuses on the key issues identified so far.
Source: Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics. 2020. European Commission.
The European AI Forum is an initiative launched in June 2020 on the final day of the Croatian Presidency of the Council of the European Union. Initially planned as a conference to be held towards the end of each Council Presidency, the European AI Forum has turned into a network of associations that want to make sure startups and innovators in Member States have a voice when it comes to building an AI regulatory framework.
Founded by CroAI, Hub France IA and KI Bundesverband, our Community has grown to include AI4SI, AI Austria, AI Cluster Bulgaria, AI, AI Sweden, NL AI Coalitie and AI Poland.
Our mission is to set the agenda on how AI is approached in Europe, both in terms of policy and entrepreneurship.
Our goal is to serve as a platform where entrepreneurs and policymakers get together to jointly determine the path forward for European AI innovation.
Facial recognition: A solution in search of a problem?
“Be water”. This is the evocative and enigmatic phrase of the mask-wearing protestors in Hong Kong. It seems to represent citizens’ fight for the right to be shapeless and anonymous in the crowd, including when exercising the right to protest, against surveillance by the state authorities.
It is undeniable that facial recognition, the biometric application used to identify or verify a person’s identity, has become increasingly present in many aspects of daily life. It is used for ‘tagging’ people on social media platforms and to unlock smartphones. In China it is used for airport check-in, for monitoring the attentiveness of pupils at school and even for dispensing paper in public latrines.
In the general absence of specific regulation so far, private companies and public bodies in both democracies and authoritarian states have been adopting this technology for a variety of uses. There is no consensus in society about the ethics of facial recognition, and doubts are growing as to its compliance with the law as well as its ethical sustainability over the long term.
The purposes that triggered the introduction of facial recognition may seem uncontroversial at first sight: it seems unobjectionable to use it to verify a person’s identity against a presented facial image, such as at national borders including in the EU. It is another level of intrusion to use it to determine the identity of an unknown person by comparing her image against an extensive database of images of known individuals.
In your face
There appear to be two big drivers behind this trend.
Firstly, politicians react to a popular sense of insecurity or fear that associates the movements of foreigners across borders with crime and terrorism. Facial recognition presents itself as a force for efficient security, public order and border control. Facial recognition is a key component of the general surveillance apparatus deployed to control the Uighur population in Xinjiang, justified by the government on grounds of combating terrorism.
The second justification is the lure of avoiding physical and mental effort - ‘convenience’: some people would prefer to be able to access an area or a service without having to produce a document.
France aims to be the first European country to use such technology for granting a digital identity. Meanwhile, the Swedish data protection authority recently imposed a fine on a school for testing facial recognition technology to track its students’ attendance. Although there was no great debate on facial recognition during the negotiations on the GDPR and the law enforcement data protection directive, the legislation was designed so that it could adapt over time as technologies evolved.
Face/Off
The privacy and data protection issues with facial recognition, like all forms of data mining and surveillance, are quite straightforward.
First, EU data protection rules clearly cover the processing of biometric data, which includes facial images: data ‘relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person’ (GDPR Art. 4(14)). The GDPR generally forbids the processing of biometric data for the purpose of uniquely identifying a natural person unless one can rely on one of the ten exemptions listed in Art. 9(2).
Second, any interference with fundamental rights must, under Article 52 of the Charter, be demonstrably necessary. The bar for this test rises the deeper the interference. Is there any evidence yet that we need the technology at all? Are there really no less intrusive means to achieve the same goal? Obviously, ‘efficiency’ and ‘convenience’ alone cannot suffice.
Third, could there be a valid legal basis for the application of such technology, given that it relies on the large-scale processing of sensitive data? Consent would need to be explicit as well as freely given, informed and specific. Yet unquestionably a person cannot opt out, still less opt in, when they need access to public spaces that are covered by facial recognition surveillance. Under Article 9(2)(g), the national and EU legislators have the discretion to decide the cases in which the use of this technology constitutes a proportionate and necessary interference with human rights.
Fourth, accountability and transparency. The deployment of this technology so far has been marked by obscurity. We basically do not know how data is used by those who collect it, who has access, to whom it is sent, how long they keep it, how a profile is formed and who is ultimately responsible for the automated decision-making. Furthermore, it is almost impossible to trace the origin of the input data; facial recognition systems are fed by numerous images collected from the internet and social media without our permission. Consequently, anyone could become the victim of an algorithm’s cold testimony and be categorised (and more than likely discriminated against) accordingly.
Finally, the compliance of the technology with principles like data minimisation and the data protection by design obligation is highly doubtful. Facial recognition technology has never been fully accurate, and this has serious consequences for individuals being falsely identified whether as criminals or otherwise. The goal of ‘accuracy’ implies a logic that irresistibly leads towards an endless collection of (sensitive) data to perfect an ultimately unperfectible algorithm. In fact, there will never be enough data to eliminate bias and the risk of false positives or false negatives.
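The point about false positives can be made concrete with a back-of-the-envelope base-rate calculation. All numbers below are illustrative assumptions, not measured error rates of any deployed system: even a system with seemingly impressive accuracy produces mostly false alarms when the people it is looking for are rare in the scanned population.

```python
# Illustrative base-rate calculation for a watchlist-matching scenario.
# Every figure here is an assumption chosen for illustration only.

true_positive_rate = 0.99      # assumed: flags 99% of genuine matches
false_positive_rate = 0.001    # assumed: wrongly flags 0.1% of passers-by
watchlist_prevalence = 1 / 100_000  # assumed: 1 in 100,000 scanned is on the list

scanned = 1_000_000
genuine = scanned * watchlist_prevalence             # 10 people
true_alarms = genuine * true_positive_rate           # 9.9 expected hits
false_alarms = (scanned - genuine) * false_positive_rate  # ~1000 false alarms

precision = true_alarms / (true_alarms + false_alarms)
print(f"alerts: {true_alarms + false_alarms:.0f}, "
      f"of which genuine: {true_alarms:.1f} ({precision:.1%})")
```

With these assumed numbers, roughly 1% of alerts correspond to a genuine match: the overwhelming majority of people flagged are innocent, which is the structural problem that more training data cannot fully remove.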
Saving face
It would be a mistake, however, to focus only on privacy issues. This is fundamentally an ethical question for a democratic society.
A person’s face is a precious and fragile element of her identity and sense of uniqueness. It will change in appearance over time, and she might choose to obscure it or to change it cosmetically - that is her basic freedom. Turning the human face into another object for measurement and categorisation by automated processes controlled by powerful companies and governments touches the right to human dignity - even without the threat of it being used as a tool for oppression by an authoritarian state.
Moreover, it tends to be tested on the poorest and most vulnerable in society, ethnic minorities, migrants and children.
Where combined with other publicly available information and the techniques of Big Data, it could obviously chill individual freedom of expression and association. In Hong Kong the face has become a focal point. The wearing of masks has been a reaction to the use of facial recognition and in turn has been prohibited under a new law.
Does my face look bothered?
It seems that facial recognition is being promoted as a solution for a problem that does not exist. That is why a number of jurisdictions around the world have moved to impose a moratorium on the use of the technology.
We need to assess not only the technology on its own merits, but also the likely direction of travel if it continues to be deployed ever more widely. The next stage will be pressure to adopt other forms of objectification of the human being: gait, emotions, brainwaves. Now is the moment for the EU, as it discusses the ethics of AI and the need for regulation, to determine whether - if ever - facial recognition technology can be permitted in a democratic society. Only if the answer is yes should we turn to questions of how, and of the safeguards and accountability to be put in place.
Independent DPAs will be proactive in these discussions.
The next five years could prove to be a global turning point for privacy and personal data protection. Most of the world will have a general data protection law, including the largest countries currently without one – India, Indonesia and, quite possibly, the United States. Most policy interventions addressing social, environmental and public health issues will involve technology and data usage. Data protection will become relevant in almost every context. The Covid-19 crisis, which initially seemed to be a danger to such an evolution, has instead strengthened the call for the protection of individuals’ privacy. This is especially the case when governments take measures to defend society and the economy against such an extraordinary threat.
Artificial intelligence (AI) already plays a role in deciding what unemployment benefits someone gets, where a burglary is likely to take place, whether someone is at risk of cancer, or who sees that catchy advertisement for low mortgage rates. Its use keeps growing, presenting seemingly endless possibilities. But we need to make sure to fully uphold fundamental rights standards when using AI. This report presents concrete examples of how companies and public administrations in the EU are using, or trying to use, AI. It focuses on four core areas – social benefits, predictive policing, health services and targeted advertising.
The report discusses the potential implications of these uses for fundamental rights and shows whether and how those using AI are taking rights into account.
FRA interviewed just over a hundred people working in the AI field – public administration officials, private company staff, and diverse experts, including from supervisory and oversight authorities, non-governmental organisations and the legal profession.
Based on these interviews, the report analyses how fundamental rights are taken into consideration when using or developing AI applications in the four core areas. The AI uses differ in terms of how complex they are, how much automation is involved, their potential impact on people, and how widely they are being applied.
The findings underscore that a lot of work lies ahead – for everyone.
For Amsterdam, an optimal positioning in Europe in the field of Artificial Intelligence (AI) offers many opportunities. The EU monitor on AI is intended to facilitate this positioning, both for the City of Amsterdam and for the members of the Amsterdam AI coalition.
The European Parliament wants a ban on the use of facial recognition. After several days of unrest over the report on AI and its use by judicial authorities, Members of the European Parliament voted on 6 October, by an overwhelming majority, for a resolution calling for a ban on the use of automated facial recognition in public spaces by police and judicial authorities. The EP also supports the European Commission in its effort to ban social scoring systems in the AI proposal. Although the resolution is non-binding, it gives an indication of how the EP will position itself in the upcoming negotiations on the AI proposal.
Source: EU Monitoring Gemeente Amsterdam, 29 October 2021: relevant developments in the field of Artificial Intelligence in Europe.
EU Monitoring Gemeente Amsterdam, 26 February 2020
The current negotiations on the advisory report offer Amsterdam a good opportunity to make its position known. If the municipality and the AI coalition wish to state their position or to influence the compromise amendments to the report, this is best done through the rapporteur for the advisory report, Deirdre Clune (EPP, IE), or through the Dutch members of the IMCO committee, i.e. Kim van Sparrentak (Greens/EFA) or Liesje Schreinemacher (Renew Europe).
Results of the workshop connecting data exchange initiatives in Europe and identifying key factors for successful growth.
On 7–8 November 2019, the European Commission DG CONNECT, the City of Amsterdam and the Amsterdam Economic Board organized a first workshop connecting data exchange initiatives in Europe to identify key factors for successful growth, as part of a series of events on data exchange organized by the European Commission in view of future policy and funding activities for the data economy.
The purpose of the event was to:
Learn from each other and enhance the informal network of data exchange initiatives;
Analyse and discuss common challenges, similarities and differences of data exchange initiatives;
Develop recommendations for the upcoming policy and funding initiatives of the European Commission and Member States, to support data exchange initiatives.
The event focused on bringing together data exchange coalitions and initiatives – whether researchers or companies – developing a specific data exchange service or solution with cross-sectoral relevance and with a decentralized service model.
Member States and business representatives were also invited to make sure the insights of data initiatives were combined with policy insights and applications for industry.
With strong interaction among the data exchange initiatives and with policy experts and business representatives at different levels, the workshop was productive and emphasized the need and potential for strengthening cooperation between data exchange initiatives to establish effective governance for a fair and open European data market.

A presentation by Yvo Volman, Head of Unit Data Policy & Innovation at the European Commission DG CONNECT, set out the current European policy framework and included insights from a European perspective on the data economy. Caroline Nevejan highlighted the importance of data exchange for cities and regions and for a European approach to data exchange, and referred to the Amsterdam Data Exchange as an example of how cities and regions contribute to the development of a future data market.
Source: Accelerating a sustainable European data economy: Report workshop EU data exchange initiatives. 2019. City of Amsterdam, European Commission & Amsterdam Economic Board.