Briefing Paper

Designing a Rights-based Global Index on Responsible AI

Artificial Intelligence (AI) is a wicked problem facing societies globally. It is wicked because it is complex and difficult to define as a policy concern. How is AI being used, and who must, or even can, take responsibility for ensuring it is used to better society? As AI increasingly becomes a general-purpose technology, it cannot be isolated from the social and economic conditions in which it is produced and used; indeed, it is changing the very nature of societies and economies, demanding new kinds of research and policy interventions to understand and manage its effects.

What makes AI more complex still is the paradox it is bound up in: a technology with major transformative potential for societies, able to process vast swathes of information far more quickly than any human mind, also carries major risks to fundamental rights and values. Evidence has shown that even the most legitimate uses of AI have caused harm to people, their societies, and their environments. Moreover, we do not yet know the full implications or impacts that AI is having, or will have, on different societies around the world.