Article

A European approach to artificial intelligence

The EU’s approach to artificial intelligence centres on excellence and trust, aiming to boost research and industrial capacity and ensure fundamental rights. The European approach to artificial intelligence (AI) will help build a resilient Europe for the Digital Decade where people and businesses can enjoy the benefits of AI. It focuses on 2 areas: excellence in AI and trustworthy AI. The European approach to AI will ensure that any AI improvements are based on rules that safeguard the functioning of markets and the public sector, and people’s safety and fundamental rights.

To help further define its vision for AI, the European Commission developed an AI strategy to go hand in hand with the European approach to AI. The AI strategy proposed measures to streamline research, as well as policy options for AI regulation, which fed into work on the AI package.

The Commission published its AI package in April 2021, proposing new rules and actions to turn Europe into the global hub for trustworthy AI. The package combines the two strands set out below: a European approach to excellence in AI and a European approach to trust in AI.

A European approach to excellence in AI

Fostering excellence in AI will strengthen Europe’s potential to compete globally.

The EU will achieve this by:

  1. enabling the development and uptake of AI in the EU;
  2. making the EU the place where AI thrives from the lab to the market;
  3. ensuring that AI works for people and is a force for good in society;
  4. building strategic leadership in high-impact sectors.

The Commission and Member States agreed to boost excellence in AI by joining forces on AI policy and investment. The revised Coordinated Plan on AI outlines a vision to accelerate, act, and align priorities with the current European and global AI landscape, and to bring the AI strategy into action.

Maximising resources and coordinating investments is a critical component of the Commission’s AI strategy. Through the Digital Europe and Horizon Europe programmes, the Commission plans to invest €1 billion per year in AI. It will mobilise additional investments from the private sector and the Member States in order to reach an annual investment volume of €20 billion over the course of the Digital Decade.

The newly adopted Recovery and Resilience Facility makes €134 billion available for digital. This will be a game-changer, allowing Europe to amplify its ambitions and become a global leader in developing cutting-edge, trustworthy AI.

Access to high-quality data is an essential factor in building high-performance, robust AI systems. Initiatives such as the EU Cybersecurity Strategy, the Digital Services Act and the Digital Markets Act, and the Data Governance Act provide the right infrastructure for building such systems.

A European approach to trust in AI

Building trustworthy AI will create a safe and innovation-friendly environment for users, developers and deployers.

The Commission has proposed 3 inter-related legal initiatives that will contribute to building trustworthy AI:

  1. a European legal framework for AI to address fundamental rights and safety risks specific to AI systems;
  2. EU rules to address liability issues related to new technologies, including AI systems (last quarter 2021-first quarter 2022);
  3. a revision of sectoral safety legislation (e.g. Machinery Regulation, General Product Safety Directive, second quarter 2021).

European proposal for a legal framework on AI

The Commission aims to address the risks generated by specific uses of AI through a set of complementary, proportionate and flexible rules. These rules will also provide Europe with a leading role in setting the global gold standard.

This framework gives AI developers, deployers and users the clarity they need by intervening only in those cases that existing national and EU legislation does not cover. The legal framework for AI proposes a clear, easy-to-understand approach based on four levels of risk: unacceptable risk, high risk, limited risk, and minimal risk.
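As an illustration only, the sketch below models the four risk tiers as a simple Python enumeration and maps a few example use cases onto them. The tier names come from the proposed framework, but the example use cases, the mapping and the function names are assumptions made for illustration; they are not the legal classification defined in the regulation.

```python
from enum import Enum


class RiskLevel(Enum):
    """The four risk tiers of the proposed EU legal framework for AI."""
    UNACCEPTABLE = "unacceptable risk"  # prohibited practices
    HIGH = "high risk"                  # permitted, subject to strict obligations
    LIMITED = "limited risk"            # permitted, subject to transparency obligations
    MINIMAL = "minimal risk"            # no additional obligations


# Hypothetical, non-authoritative mapping of illustrative use cases to tiers.
# The actual classification is determined by the regulation itself.
EXAMPLE_CLASSIFICATION = {
    "social scoring by public authorities": RiskLevel.UNACCEPTABLE,
    "CV-screening software for recruitment": RiskLevel.HIGH,
    "customer-service chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}


def risk_level(use_case: str) -> RiskLevel:
    """Return the illustrative risk tier for a known example use case."""
    return EXAMPLE_CLASSIFICATION[use_case]


if __name__ == "__main__":
    for case, level in EXAMPLE_CLASSIFICATION.items():
        print(f"{case}: {level.value}")
```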

Source: A European approach to artificial intelligence - European Commission.
