-
Article
Video: Discover the Responsible AI Lab of the AUAS
There is much talk about Artificial Intelligence (AI), but how can you actually apply AI? And how do you design Responsible AI? What can you teach students and AI practitioners?
-
Article
Video: Responsible AI in practice
What does responsible AI look like: AI that increases human capabilities and skills and benefits society? The Responsible AI Lab of the Center of Expertise Applied AI (AUAS) studies AI technology and enters into dialogue about it. The lab aims to make Artificial Intelligence accessible to everyone: inclusive and diverse. Its work centres on the public sector and the creative industry, and it focuses primarily on the broader question of how we can design meaningful and useful AI applications.
On Tuesday, June 8, we were live from Pakhuis de Zwijger with our livecast on applying Responsible AI in practice!
The programme featured Marleen Stikker of Waag as an inspiring keynote speaker, Pascal Wiggers of the Responsible AI Lab with an interesting presentation on recruitment and selection, and our own Katrien de Witte and Kevin de Bruin telling us more about our Center of Expertise Applied AI.
Translated by the editors of openresearch.amsterdam.
Source: YouTube - Pakhuis De Zwijger
The livecast referenced Coded Bias, a documentary about the biases embedded in AI. Below is a brief summary and the trailer of the documentary.
Modern society sits at the intersection of two crucial questions: What does it mean when artificial intelligence increasingly governs our liberties? And what are the consequences for the people AI is biased against? When MIT Media Lab researcher Joy Buolamwini discovers that many facial recognition technologies do not accurately detect darker-skinned faces or classify the faces of women, she delves into an investigation of widespread bias in algorithms. As it turns out, artificial intelligence is not neutral, and women are leading the charge to ensure our civil rights are protected.
Source: Amsterdam University of Applied Sciences - Coded Bias
-
Article
Responsible Artificial Intelligence in Practice
The research lab 'Responsible AI Lab' of the Amsterdam University of Applied Sciences (AUAS) conducts research on how to create responsible AI, using three types of tools. These tools can be used by anyone who wants to develop responsible AI systems. The article also explains what responsible AI is and why AI is not neutral in the first place.
AI has created a wealth of opportunities for innovation across many domains. However, along with these opportunities comes unexpected and sometimes unwanted consequences. For example, algorithms can discriminate or lead to unfair treatment of groups of people. This calls for a responsible approach to AI.
The ECAAI Responsible AI Lab
The need for a responsible approach to AI has been recognized worldwide, as reflected by the many manifestos and ethical guidelines that have been developed in the last few years. The European Union, for example, calls for Trustworthy AI and defines a number of key requirements, such as human agency and oversight, transparency, and accountability. But what does this mean in practice? How can practitioners who want to create trustworthy AI do so? That is the question driving the research of the Responsible AI Lab of the Amsterdam University of Applied Sciences (AUAS).
The Responsible AI Lab is one of seven labs established by the Expertise Centre of Applied AI (ECAAI). The lab researches applied, responsible AI that empowers people and benefits society with a particular focus on the creative industries and the public domain.
Understanding AI in context
Responsible AI means different things to different people. For the Responsible AI Lab, responsible AI starts with the realization that AI systems impact people’s lives in both expected and unexpected ways. This is true for all technology, but what makes AI different is that a system can learn the rules that govern its behaviour and that this behaviour may change over time. In addition, many AI systems have a certain amount of agency to come to conclusions or actions without human interference.
To better understand this impact, one needs to study an AI system in context and through experiment. In addition to an understanding of the technology, this requires an understanding of the application field and the involvement of the (future) users of the technology.
AI is not neutral
There has been much attention on bias, unfairness and discrimination by AI systems; a recent example is the problem with face recognition on Twitter and Zoom. What you see here is that data mirrors culture, including prejudices, conscious and unconscious biases and power structures, and that AI picks up these cultural biases. So bias is a fact of life, not just an artifact of some data set.
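The way a system inherits bias from its data can be made concrete with a minimal sketch. The hiring data below is entirely hypothetical; the point is only that a model which learns decision rates from skewed historical records will reproduce that skew:

```python
# Minimal sketch (hypothetical data): a model that learns historical
# decision rates will mirror whatever bias those decisions contain.
from collections import defaultdict

# Hypothetical historical hiring decisions, skewed against group "B".
history = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
           ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

# "Train": estimate the hire rate per group from the data.
counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

hire_rate = {g: h / n for g, (h, n) in counts.items()}
print(hire_rate)  # {'A': 0.75, 'B': 0.25} -- the model mirrors the skew
```

Nothing in the algorithm is "prejudiced"; the disparity comes entirely from the data it was given, which is exactly why bias cannot be fixed by the learning step alone.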
The same holds for another form of bias, or rather subjectivity, that influences the impact an AI system may have: the many decisions, large and small, taken in the design and development process of such a system. Imagine for example a recommendation system for products or services, such as flights. The order in which the results are shown may influence the number of clicks each receives, and thereby the profit of the competing vendors. Any choice made during the design process will have an effect, however small. Ideally, designers and developers reflect upon such choices during development. That in itself is difficult enough, but for AI systems that learn part of their behaviour from data, this is even more challenging.
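The effect of result ordering can be illustrated with a small simulation. The click model below is an assumption for illustration (users examine lower-ranked positions with decreasing probability, and all items are equally relevant); even so, identical items end up with very different click counts purely because of their position:

```python
# Assumed position-bias model: users examine rank k with probability
# 1/(k+1) and click an examined item with a fixed 50% chance.
# All items are equally relevant, so any click difference is pure
# position effect.
import random

random.seed(0)

def simulate_clicks(n_users=10_000, n_items=5):
    clicks = [0] * n_items
    for _ in range(n_users):
        for pos in range(n_items):
            examined = random.random() < 1 / (pos + 1)
            if examined and random.random() < 0.5:
                clicks[pos] += 1
    return clicks

clicks = simulate_clicks()
print(clicks)  # top positions collect far more clicks than the bottom ones
```

A system that then retrains on these clicks would learn that top-ranked items are "better", reinforcing the original ordering decision: a design choice that quietly becomes part of the learned behaviour.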
Tools for Responsible AI
To develop responsible AI systems worthy of our trust, practitioners need tools to:
- Understand and deal with the preexisting cultural bias that a system may pick up
- Reflect upon and deal with the bias introduced in the development process
- Anticipate and assess the impact an AI system has during deployment
Tools can take several shapes. They include responsible algorithms, such as algorithms that provide an explanation of the choices made by an AI system or algorithms that optimize, among other things, for fairness to ensure that the outcomes will not benefit one group of people more than others. Tools may also take the form of assessment or auditing tools that test AI algorithms for particular forms of bias. Such tools can be used during development and deployment to see if any changes to the system may result in unwanted outcomes.
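As a hedged sketch of what such an assessment tool might check, the snippet below computes per-group selection rates and a demographic parity ratio on hypothetical model decisions. The data, group labels, and the 0.8 cut-off (inspired by the "four-fifths rule" used in hiring audits) are illustrative assumptions, not the lab's actual tooling:

```python
# Illustrative bias-audit sketch (hypothetical data and threshold):
# check demographic parity, i.e. whether a model selects candidates
# from each group at comparable rates.
def selection_rates(groups, predictions):
    """Fraction of positive decisions per group."""
    totals, selected = {}, {}
    for g, p in zip(groups, predictions):
        totals[g] = totals.get(g, 0) + 1
        selected[g] = selected.get(g, 0) + p
    return {g: selected[g] / totals[g] for g in totals}

def parity_ratio(rates):
    """Lowest selection rate divided by the highest (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]
predictions = [1,   1,   1,   0,   1,   0,   0,   0]  # model's decisions

rates = selection_rates(groups, predictions)
ratio = parity_ratio(rates)
print(rates, ratio)  # A: 0.75, B: 0.25 -> ratio ~0.33, below a 0.8 cut-off
```

Run during development and again after every model update, a check like this flags when a change to the system starts producing unwanted group disparities, which is precisely the role the article assigns to assessment and auditing tools.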
Both types of tools can help in achieving responsible AI, but technology alone can take us only so far in dealing with bias. As bias reflects culture, it takes human understanding to make informed choices. Therefore, responsible AI tools also include best practices, design patterns and, in particular, design methodologies. These range from co-creation workshop formats to prototyping methods and checklists that help to explicate the values that are now implicitly embedded in technology. These methodologies help to critically reflect upon those values and to design and implement AI starting from desired values, while giving end users a voice throughout the development and deployment of AI applications.
Responsible AI research now
At ECAAI, and in particular in the Responsible AI Lab, they are doing research with practitioners from different domains to develop and evaluate all three types of tools: responsible AI algorithms, automated assessment tools and AI design methodologies. They want to ensure that the AI that surrounds us will be the AI we want to live with. For example, together with the Dutch broadcasting organisations NPO and RTL and the Universities of Applied Sciences of Rotterdam and Utrecht, they are developing design tools for pluriform recommendation systems and for inclusive language processing. Furthermore, they are working with the City of Amsterdam to research how to guarantee inclusion and diversity in AI systems for recruitment.
Source: Wiggers, P. 2020. Responsible Artificial Intelligence in Practice, Amsterdam Data Science.
For more information on responsible AI see:
- Video: Responsible AI in Practice
- Video: Opening Civic AI Lab