
Responsible Artificial Intelligence in Practice

The Responsible AI Lab of the Amsterdam University of Applied Sciences (AUAS) conducts research into how responsible AI can be created, using three types of tools. These tools can be used by anyone who wants to develop responsible AI systems. The article also explains what responsible AI is and why AI is not neutral in the first place.


AI has created a wealth of opportunities for innovation across many domains. These opportunities, however, come with unexpected and sometimes unwanted consequences. For example, algorithms can discriminate against groups of people or treat them unfairly. This calls for a responsible approach to AI.

The ECAAI Responsible AI Lab

The need for a responsible approach to AI has been recognized worldwide, as reflected by the many manifestos and ethical guidelines that have been developed in the last few years. The European Union, for example, calls for Trustworthy AI and defines a number of key requirements, such as human agency and oversight, transparency, and accountability. But what does this mean in practice? How can practitioners who want to create trustworthy AI do so? That is the question driving the research of the Responsible AI Lab of the Amsterdam University of Applied Sciences (AUAS).

The Responsible AI Lab is one of seven labs established by the Expertise Centre of Applied AI (ECAAI). The lab researches applied, responsible AI that empowers people and benefits society, with a particular focus on the creative industries and the public domain.

Understanding AI in context

Responsible AI means different things to different people. For the Responsible AI Lab, responsible AI starts with the realization that AI systems impact people’s lives in both expected and unexpected ways. This is true for all technology, but what makes AI different is that a system can learn the rules that govern its behaviour and that this behaviour may change over time. In addition, many AI systems have a certain amount of agency to reach conclusions or take actions without human intervention.

To better understand this impact, one needs to study an AI system in context and through experimentation. In addition to an understanding of the technology, this requires an understanding of the application domain and the involvement of the (future) users of the technology.

AI is not neutral

There has been much attention on bias, unfairness and discrimination by AI systems; a recent example is the problem with face recognition on Twitter and Zoom. What you see here is that data mirrors culture, including prejudices, conscious and unconscious biases and power structures, and AI picks up these cultural biases. So, bias is a fact of life, not just an artifact of some data set.
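To make this concrete, consider the following sketch (the data set and numbers are purely illustrative, not from any real system): a classifier trained on historically biased hiring decisions simply reproduces that bias.

    from sklearn.linear_model import LogisticRegression

    # Illustrative synthetic data: past hiring decisions that favoured group 0.
    # Each row is [years_of_experience, group]; labels: 1 = hired, 0 = rejected.
    X = [[5, 0], [4, 0], [2, 0], [5, 1], [4, 1], [2, 1]]
    y = [1, 1, 1, 0, 1, 0]  # equally experienced group-1 candidates were rejected

    model = LogisticRegression().fit(X, y)

    # The model reproduces the historical pattern: identical experience,
    # different hiring probability depending on group membership.
    print(model.predict_proba([[5, 0]])[0][1])  # higher probability for group 0
    print(model.predict_proba([[5, 1]])[0][1])  # lower probability for group 1

The model has done nothing wrong in a technical sense; it has faithfully learned the pattern in the data, prejudice included.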

The same holds for another form of bias, or rather subjectivity, that influences the impact an AI system may have: the many decisions, large and small, taken in the design and development process of such a system. Imagine, for example, a recommendation system for products or services, such as flights. The order in which the results are shown may influence the number of clicks each receives and thereby the profit of the competing vendors, as the sketch below illustrates. Any choice made during the design process will have an effect, however small. Ideally, designers and developers reflect upon such choices during development. That in itself is difficult enough, but for AI systems that learn part of their behaviour from data, it is even more challenging.
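A minimal simulation of this ordering effect, assuming a simple position-bias model in which users are less likely to examine results further down the list (the click model and vendor names are illustrative assumptions):

    import random

    # Hypothetical click model: the probability that a user examines a result
    # decays with its rank, so equally relevant offers receive different
    # numbers of clicks depending purely on the chosen ordering.
    def simulate_clicks(offers, n_users=10_000, relevance=0.3):
        clicks = {offer: 0 for offer in offers}
        for _ in range(n_users):
            for rank, offer in enumerate(offers, start=1):
                examined = random.random() < 1.0 / rank  # position bias
                if examined and random.random() < relevance:
                    clicks[offer] += 1
        return clicks

    print(simulate_clicks(["vendor_A", "vendor_B", "vendor_C"]))
    # vendor_A typically receives about three times as many clicks as
    # vendor_C, even though all three offers are equally relevant.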

Tools for Responsible AI

To develop responsible AI systems worthy of our trust, practitioners need tools to:

  1. Understand and deal with the preexisting cultural bias that a system may pick up
  2. Reflect upon and deal with the bias introduced in the development process
  3. Anticipate and assess the impact an AI system has during deployment

Tools can take several forms. They include responsible algorithms, such as algorithms that provide an explanation of the choices made by an AI system, or algorithms that optimize, among other things, for fairness, to ensure that the outcomes do not benefit one group of people more than others. Tools may also take the form of assessment or auditing tools that test AI algorithms for particular forms of bias. Such tools can be used during development and deployment to see whether any changes to the system result in unwanted outcomes.
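As an illustration of what such an assessment tool might compute, the sketch below measures a demographic parity difference on a model's predictions; the function, the data and the choice of this particular fairness metric are assumptions for the example, not tools from the lab:

    import numpy as np

    def demographic_parity_difference(y_pred, sensitive):
        """Difference in positive-outcome rates between two groups.

        A value near 0 suggests the model treats both groups similarly on
        this one metric; it does not rule out other forms of unfairness.
        """
        y_pred = np.asarray(y_pred)
        sensitive = np.asarray(sensitive)
        return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())

    # Hypothetical audit: 1 = positive decision (e.g. shortlisted for a job)
    y_pred    = [1, 0, 1, 1, 0, 0, 1, 0]
    sensitive = [0, 0, 0, 0, 1, 1, 1, 1]
    print(demographic_parity_difference(y_pred, sensitive))  # 0.5: a large gap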

Both types of tools can help in achieving responsible AI, but technology alone can take us only so far in dealing with bias. As bias reflects culture, it takes human understanding to make informed choices. Therefore, responsible AI tools also include best practices, design patterns and, in particular, design methodologies. These range from co-creation workshop formats to prototyping and checklists that help to explicate the values that are now implicitly embedded in technology. These methodologies help to critically reflect upon those values and to design and implement AI from desired values, while giving end users a voice throughout the development and deployment of AI applications.

Responsible AI research now

At ECAAI, and in particular in the Responsible AI Lab, researchers are working with practitioners from different domains to develop and evaluate all three types of tools: responsible AI algorithms, automated assessment tools and AI design methodologies. The aim is to ensure that the AI that surrounds us will be the AI we want to live with. For example, together with the Dutch broadcasting organisations NPO and RTL and the universities of applied sciences of Rotterdam and Utrecht, the lab is developing design tools for pluriform recommendation systems and for inclusive language processing. Furthermore, the lab is working with the City of Amsterdam to research how to guarantee inclusion and diversity in AI systems for recruitment.

Source: Wiggers, P. (2020). Responsible Artificial Intelligence in Practice. Amsterdam Data Science.

For more information on responsible AI see: 

Image credits

Header image: Pixabay - Technology AI

Icon image: Taken from https://www.smartcitiesworld.net/news/news/ai-algorithm-capable-of-multi-task-deep-learning-2419