
Provisional morality - a provocation from Yassmin Abdel-Magied

Updated: Jan 22, 2021



One of the 2020 LinkedIn Changemakers, Yassmin is a globally sought-after advisor on issues of social justice, focused on the intersections of race, gender and faith. She has travelled to over 24 countries across five continents, speaking to governments, civil society and corporates on inclusive leadership, tackling bias and achieving substantive change.


On provisional morality in technology


In this provocation, Yassmin Abdel-Magied presented the concept of 'provisional morality', inspired by the writings of Layla El Asri, and how we might apply it to emerging ethical problems in the tech sector. The idea of provisional morality comes from Descartes, who used the example of building your perfect house: while the house is being built, you need another house to live in until the 'real' one is finished. What rules guide you in this interim period?


On the problem


Yassmin started by outlining a recent event in AI ethics: Timnit Gebru, a leading researcher in the field, had just been fired from Google.


In 2018, Gebru co-authored a research paper which demonstrated the inaccuracies of facial recognition systems: certain models misclassified the gender of people of colour, particularly women of colour. More recently, while at Google, Gebru put forward a paper highlighting the risks of training language models on very large data sets. The paper identifies four risks:


  1. The environmental cost: training models with a lot of parameters takes a lot of computing power, and therefore has a large carbon output (a rough back-of-envelope sketch of this follows the list).

  2. Inscrutable data: if you're working with a lot of data, it's hard to know what's in it. For example, scraping large amounts of free text from the web will result in hateful speech being included in the data set.

  3. Research opportunity cost: currently, there's a lack of research going into training AI to genuinely understand language, and much more research going into simply manipulating language. This is because Big Tech firms have the resources to manipulate language accurately enough with very large data sets.

  4. Potential of deceptive use: NLP engines are sounding more and more like humans (OpenAI's GPT-3 is a good example of this). They can be used, for example, to spread misinformation online.
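
To make the environmental-cost risk concrete, here is a rough back-of-envelope sketch of how such an estimate can be made. Every figure in it (GPU count, power draw, grid carbon intensity) is an illustrative assumption of ours, not a number from the talk or from Dr Gebru's paper.

```python
# Rough back-of-envelope estimate of the carbon output of a training run.
# All numbers here are illustrative assumptions, not figures from the talk
# or from Dr Gebru's paper.

def training_co2_kg(gpu_count, days, watts_per_gpu=300, pue=1.5,
                    kg_co2_per_kwh=0.4):
    """Estimate CO2 emissions (kg) for a training run.

    gpu_count      -- number of accelerators used
    days           -- wall-clock training time in days
    watts_per_gpu  -- assumed average power draw per accelerator
    pue            -- data-centre power usage effectiveness (cooling etc.)
    kg_co2_per_kwh -- assumed carbon intensity of the local grid
    """
    hours = days * 24
    energy_kwh = gpu_count * watts_per_gpu * hours / 1000 * pue
    return energy_kwh * kg_co2_per_kwh

# e.g. a hypothetical run on 512 GPUs for 30 days under these assumptions:
print(f"~{training_co2_kg(512, 30):,.0f} kg of CO2")
```

The point is not the exact number, but how quickly the figure grows as models, and therefore training runs, get bigger.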


None of this is particularly groundbreaking — these risks are very real, but they aren't anything new. Still, Google asked Gebru to remove her name from the paper. When she refused, she was fired. Google's response to the paper was:


"Highlighting risks without pointing out methods for researchers and developers to understand and mitigate those risks misses the mark on helping with these problems"


Yassmin gave another example: deepfake pornography. The data used to train these models is, once again, scraped from the internet, and it contains all sorts of pornographic images, including non-consensual videos and images of sexual abuse. The PhD student who ran the GeneratedPorn subreddit noted that if you're scraping as much data from the web as he was, you can't help it if some of it is exploitative. Tellingly, 96% of deepfakes found online are pornographic.


There is an assumption here that there are no fundamental flaws with large language models. Dr Gebru suggests that the problems with large data sets she outlined are technical ones: given enough time, they can be fixed with technical solutions. But if 96% of deepfakes found online are pornographic, that suggests the challenge isn't technical but social. How can we make the call that a problem can be solved through technical means alone?





Descartes' provisional morality may offer some guidance


Yassmin was inspired by Descartes' four maxims (i.e. clean-cut deontological rules) that make up his provisional morality, but thought they could do with an update. Here they are, paraphrased for reference:

  1. Obey the laws and customs of my country

  2. Be as firm and decisive in my actions as I could

  3. Master myself rather than my fortune; change my desires rather than the order of the world

  4. Devote my life to living this truth, until I am eventually able to 'update' my morality (i.e. moving into the perfect house)

Looking at these four maxims, what would a provisional moral code for the tech sector look like? For instance, considering the third maxim, could we realistically expect companies to focus on improving internally rather than externally, on what might be 'beyond their power'? What does it look like for large tech firms to think about the limitations of their power?


We unpacked the first maxim during the discussion: it's all well and good for Descartes to obey the laws of his country, given that he was a privileged white man. Laws are written by those who are already powerful, and therefore oppress others. Yassmin mentioned the effects of Australian laws on Indigenous communities as an example of this, such as the Northern Territory Intervention. And in fact, globally, we find ourselves tied more and more to US law and politics, even if we don't live there, because of the far-reaching influence of US-based tech giants. During the discussion, others pointed out that the first maxim could be replaced with collective action, or other forms of democracy.


The termination of Timnit Gebru is not just a diversity and inclusion issue


While the various discussions around the moral implications of firing Dr Gebru were still unfolding, Google announced that they had trained a trillion-parameter language model. This is the largest model out there by quite some margin: GPT-3 has 175 billion parameters.
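
To give a sense of that gap, here is a purely illustrative sketch of what those parameter counts mean for raw storage alone. The assumption of 2 bytes per parameter (fp16) is ours, not a detail given in the talk.

```python
# Illustrative comparison of raw weight storage for the two models mentioned
# above, assuming 2 bytes per parameter (fp16). The byte width is an
# assumption, not a figure from the talk.

def param_storage_gb(n_params, bytes_per_param=2):
    """Gigabytes needed just to store the model weights."""
    return n_params * bytes_per_param / 1e9

for name, n_params in [("GPT-3 (175 billion parameters)", 175e9),
                       ("trillion-parameter model", 1e12)]:
    print(f"{name}: ~{param_storage_gb(n_params):,.0f} GB of weights")
```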

This makes it quite clear that looking at Dr Gebru's termination purely through a D&I lens has distracted us from the work she actually presented us with: the very real risks of large language models, which Google have seemingly not taken into account. We must remember that diversity and inclusion is important not just because we need more women of colour in tech, but also because we need a greater diversity of ideas.


Yassmin ended the provocation with two questions:

  1. If we recognise that the varying issues with machine learning are not exclusively technical, how can we build a framework of provisional morality to continue to operate within until we figure out how to solve those challenges?

  2. How do we maintain focus on what marginalised folks are saying, and ensure that what they are saying doesn't get lost in the noise? Focusing on the representation of historically excluded communities in tech is no use if we do not listen to what they have to say.



