
Exploring the role of dignity in government AI Ethics instruments


What could happen if dignity were at the centre of every government decision connected to Artificial Intelligence?

The Challenge


There is a great deal of focus on the harms that technologies of the future can and do bring. And rightly so - we have seen many instances where technologies have deepened inequalities and undermined democracy. Undoubtedly, governments have a responsibility to reduce these harms and protect citizens from their impacts. But don't governments also have a responsibility to proactively enable human flourishing? To create the conditions for the best possible upside of technologies, not just manage the downside?

What is dignity?


Dignity refers to the inherent value and inherent vulnerability of individuals. Dignity is the desire to be seen, heard, listened to and treated fairly; to be recognised and understood, and to feel safe in the world. Dignity can be affected, positively and negatively, by others' behaviour, by technologies and by other factors. And at the same time, every person's dignity is inviolable.

We believe that a core role of government is to create conditions for dignity for all. To do this, governments need to play both protective and proactive roles - we describe this as a Dignity Ecosystem.

Protective roles include preventing dignity violations from occurring and remedying them if they do occur. Proactive roles include mechanisms that promote dignity.

Applied to Artificial Intelligence systems, this could look like mechanisms that ensure a system complies with existing anti-discrimination or privacy laws (prevention), mechanisms to contest and change decisions made by an AI (remedy), and mechanisms to involve affected populations in the design and implementation of the AI system (promotion).

We believe that a healthy Dignity Ecosystem needs to be cultivated through every act of government, including government AI Ethics instruments. To do this, governments need to play a balance of both protective and proactive roles through their AI Ethics instruments.

What we did


After devising our view on what a Dignity Ecosystem is, we turned it into an analytic tool (the Dignity Lens) and applied this to government AI Ethics instruments from Australia, Canada and the UK.

We wanted to understand to what extent dignity is addressed in government AI Ethics instruments and what roles governments are currently playing with respect to a Dignity Ecosystem.

Our Findings


  • Finding 1

    Overt mention of dignity is absent or marginal in the AI Ethics instruments we analysed: dignity is not explicitly referenced in either the Australian or the Canadian instruments, and is referenced only once in the UK instrument.

  • Finding 2

    The Dignity Ecosystem is currently out of balance, with governments focused mostly on protective roles. There is considerable opportunity to include more proactive roles.

Three things governments can do next


  • Apply the Dignity Lens to your own AI Ethics instrument(s) to heighten awareness of the roles you are playing in the Dignity Ecosystem.

  • Incorporate proactive roles that promote dignity into already-existing mechanisms.

  • Join us in co-creating more ways to enable healthy Dignity Ecosystems in AI by getting in touch.

Check out the full report

Download the report