Article

Making AI Approachable for Citizen Participation

How can civil servants communicate understandably about the design, adoption, and use of Artificial Intelligence in order to make it more approachable for citizens to participate? 

One of the key challenges in mainstreaming citizen participation in the municipality's design, adoption, and use of AI is that the topic remains unapproachable for many citizens. As a result, they often don't find themselves in a position to participate meaningfully on this topic.

In the spring of 2023, the municipality of Amsterdam and Digital Rights House conducted exploratory research on the link between algorithmic transparency and citizen participation. Through a series of interviews and a survey administered by the municipality, many citizens expressed that they knew little to nothing about algorithms or about how the municipality is using them. This lack of knowledge and awareness was cited as the primary obstacle to participating on the topic of algorithms. More specifically, some survey respondents said that algorithms are difficult to understand and that they wouldn't know where to begin to understand them. Several asked questions about how algorithms work and how they may impact a decision-making process. They also felt that, because they are not experts, they might not be able to give well-substantiated input. Similarly, some interviewees expressed that they didn't feel qualified or legitimate enough to participate, especially due to a lack of technical knowledge.

The Amsterdam Participatory Approach Action Plan 2020 highlights the importance of ensuring equal access to relevant and understandable information for citizens, so that they are equipped to participate on a “level playing field”. Before, during, and after a participation process, those developing and using AI within the municipality carry the responsibility of communicating about their projects in a way that is understandable for citizens. This aligns with rule #4 of the Amsterdam participation guidelines: “we communicate during the entire project in understandable language, tailored to those involved.”

The literature also highlights a number of barriers citizens face in participating on the topic of municipal use of AI. In a study on contestable camera cars, researcher Kars Alfrink highlights the skills and knowledge needed to participate on “equal footing”, and the risk of unequal participation between different groups [1]. More specifically, “more educated and higher-income segments of the population are more inclined to engage with ICT-led interventions”, according to Silvana Fumega, Director of the Global Data Barometer [2]. As such, making the use of AI understandable for all citizens is a key element of striving for inclusive participation.

The effort to make AI more approachable for citizen participation is closely linked to the concept of meaningful transparency of AI. In the literature on AI governance, transparency must go beyond simply sharing or publishing information; in order to be meaningful, it must aid understanding and be actionable [3]. In other words, transparency should not be an end in and of itself, but rather a means to do something else. Alfrink et al. argue that the information shared through transparency measures is expected to enable citizens to assess the fairness of an algorithm's decision-making [4]. Algorithmic transparency also depends on its audience, their needs, and what they can understand. Good transparency is “flexible”, meaning that it is adaptable to different audiences. It should also not overwhelm the audience with unnecessary information. “I think you shouldn't have to understand exactly how an algorithm works, but you should understand how it impacts your situation,” said a civil servant interviewed in the 2023 study by Digital Rights House and Gemeente Amsterdam. In essence, meaningful transparency on AI requires adapting information about AI and its use to your target audience, giving them the information necessary to participate.

How can civil servants communicate understandably about the design, adoption, and use of AI in order to make it more approachable for citizens to participate? 

Although there is no universal solution, below are some ideas drawn from experiences with citizen participation in the city of Amsterdam. 

  • Make things as approachable as possible from the participant recruitment phase 

If you are holding a participation session with citizens, make the description or invitation as simple and clear as possible, and emphasize that no technical knowledge or expertise is required.

  • Provide resources to learn more before participation 

In some cases, citizens may want to learn more about a topic before they participate. This can also help them achieve a more equal information position. However, be careful not to scare participants away with too much “homework”.

  • Start with definitions

Starting your survey, dialogue, or any participation method you choose by clearly defining key terms (e.g., AI or algorithms) can help everyone get on the same page. Choose wisely, because too many definitions can be overwhelming. Applied (as opposed to theoretical) definitions can make a concept more approachable. You can also use comparisons to link a difficult concept to something your participants understand better.

  • Give examples 

It may help to give examples of how the municipality already uses AI in order to make it more concrete, and show the different ways in which the technology can be applied. This was done in a 2023 survey on “Citizen participation in the use of algorithms by the municipality of Amsterdam”. Keep in mind, however, that the examples you choose to share can shape the perspective of participants throughout the process.  

  • Focus on impact 

Focus on the impact that a technology has on citizens, as opposed to the technology itself. Participants don't necessarily need to know exactly how the technology works, but rather how it affects them.

  • Create scenarios 

Scenarios can help participants imagine how a technology can impact them, and also how they would feel about the use of this technology in different situations. You can use scenarios as a prompt, or you can build scenarios together with participants. 

  • Make it tangible 

If the technology already exists, you can allow participants to see it and ask questions. For example, the municipality’s Computer Vision team brought a real-life scan bike to the first session of its citizen panel. If the AI system does not have a physical element, you can do a virtual demo. 

  • Break it down 

If the participation concerns an algorithm with a risk of bias, for example, break down the algorithm into the data that goes into it and the variables it considers. As was the case with the Slimme Check algorithm, this allows participants to more easily understand and discuss what elements go into decision-making.

  • Don’t focus on the AI 

If the AI is being applied to an existing process, focus on the process itself, especially if it is something that participants are familiar with. Then, you can address how the technology would impact that process.  

In conclusion, the perceived complexity of AI adds to the importance of communicating about your projects in a way that's approachable for citizens. Even those without technical knowledge should be able to understand the impact of the technology and participate in a meaningful way. The conceptual framework of meaningful transparency on AI, alongside the practical experiences of the municipality's past participation efforts, offers valuable insights on making AI projects more approachable for citizen participation.


References

1. Alfrink, K., et al. Contestable Camera Cars: A Speculative Design Exploration of Public AI That Is Open and Responsive to Dispute. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23), April 23–28, 2023, Hamburg, Germany.

2. Fumega, S. Civic engagement, inclusive data and AI: some questions to be asked. In Artificial intelligence in the city: Building civic engagement and public trust. Centre for Interdisciplinary Research on Montréal, McGill University.

3. Challis, L. D. The Citizens’ Strain to See Through Transparency: Exploring Reciprocity As an Alternative in the Smart City of Amsterdam. University of Twente. 

4. Alfrink, K., et al. Designing a Smart Electric Vehicle Charge Point for Algorithmic Transparency: Doing Harm by Doing Good?