This dialogue explores how to address the "terrible" uses of AI, defining "terrible" as anything causing individual, societal, or systemic suffering, from obvious harms like AI-enabled suicide or theft to subtler dangers like reduced human creativity and autonomy. The dialogue argues that existing legal frameworks and human rights laws are insufficient for AI's complexities, requiring expanded interpretations and new governance approaches as technology rapidly outpaces regulation. Drawing analogies between AI and tools ranging from knives to nuclear bombs to addictive substances, the dialogue emphasizes the need for proactive design that encourages human flourishing through offline time, boredom, and genuine interaction while limiting harmful dependencies. The document proposes practical solutions, including using cities like Amsterdam as testing grounds, creating "AI for Good" case studies, forming coalitions of policymakers and citizens, and empowering democratic influence over AI development. It recognizes that while the concentration of AI power in a few companies creates governance challenges, it also presents opportunities for more targeted regulatory intervention.

Summary of the Text: What Is Terrible Use of AI?

Definition of "Terrible":

"Terrible" is defined as causing suffering. This suffering can be individual, societal, or systemic. It includes both clear harms (like suicide or theft enabled by AI) and more subtle ones (like reducing space for humans to be human).

Types of Terrible Uses of AI:

  • External negative use: e.g. harmful applications by companies or governments.
  • Internal negative use: e.g. personal dependency, reduced creativity or inspiration.
  • Bad actors: individuals using AI for criminal or unethical purposes.
  • Less obvious dangers: such as AI reducing human creativity, autonomy, and emotional depth.

Existing Legal and Ethical Frameworks:

  • Human rights laws and internet regulations already cover some "clear" terrible uses.
  • But these are not sufficient for the complexities introduced by AI agents.
  • Legal frameworks must evolve to include new interpretations for AI use.
  • The UN and activists are working on expanding these frameworks (e.g. the EU AI Act).

Challenges of Governing AI:

  • Legal systems are always catching up with technological advancements.
  • Frameworks are not permanent—they can erode or be manipulated.
  • There is an imbalance of power: few companies control AI development, making governance both difficult and, paradoxically, potentially easier due to centralization.

Analogies and Warnings:

  • AI is like a knife: useful, but dangerous in the wrong hands.
  • Or even like a nuclear bomb: the stakes are that high.
  • AI is also seen as a collective addiction—comparable to alcohol or social media.

Designing for Human Flourishing:

  • We must limit and protect ourselves from negative AI effects.
  • Design solutions that encourage offline time, boredom, and human interaction.
  • Encourage creativity, inspiration, and high-level thinking.

Urgent Research Needs:

  • More research is needed on AI's long-term societal effects.
  • The speed of development (e.g. processing power doubling every 3 months) makes this urgent.
  • We must plan for potential future harms, not just current ones.

Empowering People and Communities:

  • Designers and developers have power; their values shape technology.
  • Societal values must be reflected in design choices.
  • Citizens can create filters, tools, and even AI models that align with ethical values.
  • E.g. a university chatbot that responds with a question, prompting reflection.
  • E.g. a Scandinavian country giving citizens the right to their own face and voice.

Proposed Actions:

  • Use cities like Amsterdam as test beds for ethical AI.
  • Create "AI for Good" case studies.
  • Form a coalition of the willing: policy makers, schools, universities, communities, and citizens.
  • Use test cases, ethical hacking, and public engagement to trial and refine policies.
  • Push for democratic influence over AI development and governance.

Philosophical questions from the group:

  • How do we design for the terrible possibilities of AI?
  • Why should I as a citizen trust any institution?
  • How can AI help Smart City Systems to increase the safety of citizens?
  • How much should we limit the use of AI systems for young people and/or educate them?
  • How can we make sure that, as changes happen, citizens benefit equally?
  • How can citizens be informed about the use of sensitive information?
  • How can young people make the most of the benefits of AI?
  • How can corporations be included in the dialogue to create safety and equal opportunities?
  • Who can I talk to? How can we provide good quality conversations for human beings, either with people or with AI?
  • How can we as a democracy keep up with the fast changes and govern them well?
  • How can societal dialogues carry through into policy changes in universities, governments, corporations, etc.?
  • How can builders become accountable, and how can we make sure that humans are not guinea pigs?

The chosen question is: How do we design for the terrible possibilities of AI?

Methods

Give a definition of: what is terrible?

A practical case: sharing photos online that get commercialised by companies.

Sub-question: who designs? What are the incentives for designers now, and where do they lead?

Dialogue

The dialogue started with the definition: what is terrible?

There can be external negative use, internal negative use, and there can be individual bad actors. There is clear terrible use, like when AI leads to someone committing suicide, or when bad actors use AI. There is also less obvious terrible use, like when AI leaves less space for humans to be human. That is not terrible by law, but it will lead to terrible consequences.

Terrible is suffering. We can look at it from an individual, community, or societal perspective.

The clear terrible uses are already covered by law: you can already go to court over physical safety, theft, and other damages. The human rights framework is already in place. There are frameworks for the Internet.

This is not enough. It needs to be expanded in interpretation for AI agents, when they perform tasks for humans. The UN Secretary-General created an office to develop a framework for AI. Activists can join and influence the EU AI Act.

The legal framework gives the guidelines for the incentives, so it always needs to be perfected. In practice, legal frameworks are a game of catching up. And the frameworks in place are not set in stone; they are eroding.

AI is like a knife: great for onions, but we don't like the killing. When the Internet came, we thought it might be the end of all wars, because we could connect with everybody around the world. That worked out differently.

Maybe the knife metaphor is too gentle. AI could also be the equivalent of a nuclear bomb.

AI is a collective addiction we get sucked into, like alcohol, cigarettes, and social media. How can we tackle it like an addiction and stay human?

We should avoid the terrible as much as possible. How do we prepare for the worst?

That means we have to limit and protect ourselves, and design for the opposite: activities that demand high brain activity and capability. We should allow ourselves to be bored, and design offline time and spaces to interact. That way we can counter the feeling of being online the whole time.

AI disincentivizes human beings from being creative. What does that do to inspiration?

We should invest more in research on the societal effects of AI. We don't know the consequences. Will we really lose capabilities? How do we research this? It goes too fast: processing capacity doubles every three months. You have to assume what might happen and plan for that.
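If that claimed pace held, the arithmetic is stark: doubling every three months compounds to 2^(12/3) = 2^4 = 16, roughly a sixteenfold increase per year, and about 65,000-fold (2^16) over four years. That is why research on long-term effects cannot wait for the long term to arrive.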

The designer has the power. Big companies design and we don't have a say in it.

But we can create filters, like when you see different websites in different countries. We can democratically design filters. An example from the UvA: they designed their own chatbot that answers with another question, as sketched below.
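The actual UvA chatbot is not described in the dialogue; the following is a minimal sketch of the "answer with a question" pattern, where every name and reflection template is a hypothetical illustration of the design idea rather than the real system.

```python
# Minimal sketch of the "answer with a question" pattern mentioned above.
# The real UvA chatbot is not documented here; all names and templates
# below are hypothetical illustrations of the design idea.
import random

REFLECTION_TEMPLATES = [
    "What have you already tried to answer this yourself?",
    "Which part of '{topic}' do you find hardest, and why?",
    "If you had to explain '{topic}' to a classmate, where would you start?",
]

def reflective_reply(user_message: str) -> str:
    """Return a counter-question instead of a direct answer,
    nudging the user to think before the tool does."""
    topic = user_message.strip().rstrip("?") or "your question"
    template = random.choice(REFLECTION_TEMPLATES)
    return template.format(topic=topic)

if __name__ == "__main__":
    # Instead of answering, the bot prompts reflection.
    print(reflective_reply("What is the main cause of inflation?"))
```

The design choice is that friction, not convenience, is the default: the tool hands the thinking back to the student instead of replacing it.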

The other side is that everybody can design applications with AI. You can go to Hugging Face and clone someone's face. People will use it. In a Scandinavian country, they responded by giving people the right to their own face and voice.

Who has the power and authority? These are obvious wrongs we can act upon. Then influence policies and companies.

The fact that only a handful of companies hold all the development power also makes AI easier to govern.

How could people working in the big companies be influenced? If the morals of developers differ from those of society, they might design harmful technology.

The playing field of the whole world is too big, but we can design for the worst in Amsterdam as a test case. Make "AI for Good" case studies. We need the public's help, together with communities and companies, in a coalition of the willing with policy makers, universities, schools, social organizations, and a public pool of citizens. We can trial policy in that network: run test cases, find people for research, and let ethical hackers test what bad actors can do. Act and analyze.

Credits ©

    Icon/thumbnail

  • Knowledge Sweatshop by Leo Lau & Digit / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/ - CC BY 4.0