
Inclusive participation for inclusive AI

How can civil servants think about inclusivity in the context of their AI projects? And how should they approach citizen participation in order to achieve it?

Inclusivity is a core principle shaping the city of Amsterdam’s use of digital technologies, and can be seen in many of the municipality’s commitments and agendas.  The Tada Manifesto for the responsible digital city states that we should take into account the “differences in needs, experiences, and perspectives between individuals and groups”.  Similarly, the declaration of the Cities Coalition for Digital Rights, of which Amsterdam is a founding member, states that “everyone should have the opportunity to participate in shaping local digital infrastructure and services”.   

Inclusive participation is crucial in ensuring that the design and use of technology in the city reflect the needs and values of its different stakeholders. Input from citizens provides critical contextual knowledge and helps eliminate blind spots. This is especially important with AI, given the risk of automating harmful biases.

Rule #3 of the Amsterdam participation guidelines states that “We make extra efforts to actively involve all stakeholders... participation gives us the opportunity to involve more people with different perspectives, this is how we build an inclusive city.” Inclusive participation on the topic of AI presents unique challenges, but that only makes it more important.

In his work on contestable camera cars, Kars Alfrink highlights the issue of representation in citizen participation on public AI systems, emphasizing the need for more diversity and inclusion. As in other fields, the “usual suspects” tend to share their opinions, while, as Seebohm and Smith point out, many citizens are seldom heard due to a lack of engagement and accessible channels for participation. This is exacerbated by the perceived complexity of AI and the fact that many citizens don’t feel knowledgeable enough to share their opinion. The reality is that the use of AI is neither easily understood nor top of mind for many citizens, especially those who are socially or economically marginalized. As such, the people who participate in municipal decision-making, whether on algorithms or other issues, are already part of a self-selecting group. This is especially concerning as marginalized groups are often disproportionately impacted by the use of AI.

Along similar lines, interviews with civil servants at the municipality of Amsterdam reveal that a lack of trust in government is another barrier to inclusive participation, and thus to the inclusive use of AI in the city. As one civil servant put it: “In certain areas, people will already call the municipality if there's one mattress on the ground. And in other areas, they may not trust the government, or they may be more used to things happening like that in their neighborhood, and they will not report it anyway”. When it comes to the use of AI, marginalized groups may be even more doubtful that the government will take their perspectives into account and act in their best interest, presenting a greater barrier to participation.

After identifying and addressing fundamental barriers to inclusive participation for your AI project, you will need to choose an approach to inclusivity and representativeness in the recruitment of participants. This starts with thinking about the key stakeholders in your project and identifying target groups that should be involved in the participation process.

The following use cases at the municipality of Amsterdam have taken different approaches to selecting target groups and recruiting participants for citizen participation. 

  • Vision on AI: Taking into consideration the breadth of stakeholders impacted by the use of AI in Amsterdam, the municipality held 10 dialogues throughout the city in order to build the Vision. Extra effort was made to involve residents of Noord, Nieuw-West, and Zuidoost, as these areas are both fast-growing and historically marginalized. In addition to this geographic approach, the team also specifically targeted groups that are disproportionately impacted by the use of AI, such as students, educators, people with low digital literacy, and creative professionals. The goal was to build a more holistic view of how citizens perceive AI, and to better understand the risks they imagine and experience as AI becomes increasingly prevalent.
  • Responsible Scanning and Recognition: The municipality’s Computer Vision team aimed to create a representative citizen panel to collaborate on designing an ethically responsible, privacy-friendly, and secure image recognition solution for the public space. To do so, they first administered a survey to gather citizen perspectives on the use of object recognition in public space. Through the survey, they identified citizens who were interested in continuing to participate on this topic. They then followed up with another questionnaire collecting more personal and demographic information. Based on the survey and questionnaire, the following criteria were used to compose a representative panel of 11 citizens: opinion on the technology at hand, district of residence, age, and gender (a minimal illustrative sketch of this kind of balanced selection follows this list).
  • Slimme Check: A citizen council consisting of welfare recipients and representatives of interest groups was consulted during the development phase of an algorithm to be used in the welfare application screening process.  The participants were expert stakeholders in this particular policy area and were able to give input on a use of technology which would impact them.  
  • Accessible Route Planner: The municipality worked with Cliëntenbelang, an interest group for people with disabilities, to reach participants from their target group. Partnering with organizations that work closely with your target group is an important way to gain trust and achieve greater inclusion.
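For teams wondering how such a balanced selection could work in practice, the sketch below is a minimal, hypothetical illustration of composing a small panel from opted-in survey respondents so that it covers a wide range of opinions, districts, age groups, and genders. The field names, attribute values, and the simple “widest spread wins” heuristic are assumptions made for illustration only; they do not describe the Computer Vision team’s actual selection procedure.

    # Hypothetical sketch: choose a panel of 11 from opted-in respondents so that it
    # covers as many distinct opinions, districts, age groups, and genders as possible.
    # Attribute names and values are illustrative assumptions, not the municipality's method.
    import random

    ATTRIBUTES = ("opinion", "district", "age_group", "gender")

    def spread_score(panel):
        # Count how many distinct values the panel covers across all attributes.
        return sum(len({person[attr] for person in panel}) for attr in ATTRIBUTES)

    def compose_panel(respondents, size=11, trials=5000, seed=42):
        # Draw many random candidate panels and keep the one with the widest spread.
        rng = random.Random(seed)
        best_panel, best_score = None, -1
        for _ in range(trials):
            candidate = rng.sample(respondents, size)
            score = spread_score(candidate)
            if score > best_score:
                best_panel, best_score = candidate, score
        return best_panel

    # Illustrative usage with fabricated respondent records:
    respondents = [
        {"name": f"respondent_{i}",
         "opinion": random.choice(["positive", "neutral", "critical"]),
         "district": random.choice(["Noord", "Nieuw-West", "Zuidoost", "Centrum", "West"]),
         "age_group": random.choice(["18-34", "35-54", "55+"]),
         "gender": random.choice(["woman", "man", "non-binary"])}
        for i in range(60)
    ]
    panel = compose_panel(respondents)

How the criteria are weighted against one another is ultimately a policy choice; the heuristic above simply maximizes the variety of attribute values represented in the panel.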

Once you have identified the key stakeholders in your project and recruited participants, you must think about designing an inclusive participation process. Beyond knowledge and understanding of technology and algorithms, barriers to participation include the costs of participating (including time lost), accessibility, and a lack of clear communication. Focusing participation efforts on what is efficient for the majority risks reinforcing the exclusion of marginalized citizen groups.

In their study on citizen listening in the UK during the pandemic, Seebohm and Smith found that no single form of communication could be accessed by more than two-thirds of the participants. While many preferred face-to-face communication, some were only able to participate through digital channels. As such, they argue that participation processes should be “bespoke and flexible”, and that “individuals should be free to choose the form of engagement that feels right for them and gives them a sense of agency”.

Seebohm and Smith’s work shows the importance of meeting citizens where they are, both in terms of participation method and communication. It makes a strong case for a multichannel participation strategy if you aim to include different groups of citizens. Think about the target group(s) you are working with, and ensure that the communication channel(s) and participation method(s) you select are adapted to their needs. For in-person participation, this also means choosing a time and place that are accessible for your participants, and considering compensation if travel or a significant time commitment is required.

Throughout your participation trajectory, from recruitment through each participation moment, make sure to communicate in understandable language in order to make the process as inclusive as possible. This is especially important when it comes to AI. You can learn more about making AI approachable for citizen participation in this article.

In conclusion, while inclusivity is a fundamental challenge of citizen participation in any project, the topic of AI introduces additional layers of mistrust and hesitation to participate. These elements can lead to the systematic exclusion of certain groups from having a say about the use of AI in the city. Examples from AI projects in the Amsterdam context show that inclusivity and representativeness in participation can be conceptualized and approached in various ways, such as targeting specific groups impacted by an AI application or seeking citizens from different neighborhoods in Amsterdam to collaborate on a Vision on AI. Communication and method selection are crucial in making participation inclusive and accessible; priority should always be placed on understanding the needs of citizens and meeting them where they are.

Factors such as educational background, level of income, or ability should not be barriers to participation. These factors, and the ways they intersect, shape how people interact with the digital world and with technologies such as AI. Only through inclusive participation can governments ensure that the AI they build and use meets the needs of all their citizens.

Sources: 

  • Alfrink, K., et al. Contestable Camera Cars: A Speculative Design Exploration of Public AI That Is Open and Responsive to Dispute. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23), April 23–28, 2023, Hamburg, Germany.
  • Seebohm, L., & Smith, N. Learning to Listen Again: How people experiencing complex challenges feel about engagement and participation through the Covid-19 pandemic. Centre for Public Impact UK. 
  • Bellantoni, A., Chwalisz, C., & Cesnulaityte, I. Innovative Citizen Participation and New Democratic Institutions: Catching the Deliberative Wave. OECD.