This interdisciplinary project explores how human-in-the-loop (HITL) interventions can foster the responsible design of artificial intelligence (AI) systems. EU regulation requires private and public institutions to implement HITL frameworks in AI decision-making.

Still, critics argue that HITL interventions are often set up to fail, serving as a fig leaf to legitimize predefined decision outcomes. To address this concern, established researchers from the fields of humane AI and behavioral ethics will team up to conduct controlled experiments using the machine behavior approach. The core objective is to develop an AI sandbox model that provides empirical insights into designing and implementing HITL interventions for effective and responsible AI decision-making.

Project team:

  • Prof. Dr. Shaul Shalvi (Economics and Business)
  • Dr. Christopher Starke (Faculty of Social and Behavioural Sciences)