Tiffany Coleman

Could the structure of a tech company cause biased AI?

Updated: Jul 2, 2021


[Image: screenshot from Dr Mitchell's presentation]

AI learns from us – so how do we make sure tech companies are structured to ensure what we teach AI is ethical to begin with?


Leading AI expert and researcher Dr Margaret Mitchell, whose outstanding career includes working at Microsoft and Google, spoke at CogX Festival’s recent “Can you undertake ethical AI research in a corporate environment?” session, drilling down into whether ethical processes can be operationalised in a corporate tech environment. The key challenge, Dr Mitchell says, is “taking existing organisations and procedures in a company and augmenting them, or changing them, with processes that focus attention on human values and foreseeable benefits and risks of different technology” – and whether that is ever fully possible.


Looking at a typical pipeline of how machines learn, Dr Mitchell takes us through four likely stages: first, the data is collected and annotated; second, the AI model is trained; third, the data or media is processed – perhaps by applying a filter; and last, we, the humans, see the output.
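To make those four stages concrete, here is a minimal Python sketch of such a pipeline; the function names, labels and the filtering step are illustrative assumptions rather than anything taken from Dr Mitchell's talk.

```python
# A minimal, illustrative sketch of the four-stage pipeline described above.
# Function names, labels and the filter step are hypothetical placeholders.

def collect_and_annotate():
    """Stage 1: gather raw examples and attach human-written labels."""
    return [("photo_001.jpg", "smiling"), ("photo_002.jpg", "frowning")]

def train_model(dataset):
    """Stage 2: fit a model to the annotated data (stubbed out here)."""
    return lambda example: "smiling"  # stand-in for a trained classifier

def process_output(prediction, filter_enabled=True):
    """Stage 3: post-process the raw model output, e.g. apply a filter."""
    return prediction.upper() if filter_enabled else prediction

def show_to_user(result):
    """Stage 4: the human sees the output and reacts to it."""
    print(f"Model says: {result}")

dataset = collect_and_annotate()                  # human bias enters with the data
model = train_model(dataset)                      # and with model/evaluation choices
output = process_output(model("photo_003.jpg"))   # and with processing choices
show_to_user(output)                              # and with how results are presented
```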


A key takeaway from Dr Mitchell’s experience is that human bias needs to be considered at every single one of those stages. When that data arrives, it already carries a subset of human perspectives, including racism, stereotyping and sexism – as Dr Mitchell says, “data is biased because it is a snapshot of the world – it is not all of the world” – and we need to consider the limitations of this being the only data the machine learns from.
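One practical way to see the “snapshot” problem is to audit how groups or labels are represented in a dataset before any training happens. The short sketch below assumes a hypothetical list of annotated records and simply counts representation; it is an illustration, not a method from the session.

```python
from collections import Counter

# Hypothetical annotated records: (image_id, annotated_group) pairs.
records = [
    ("img_01", "group_a"), ("img_02", "group_a"), ("img_03", "group_a"),
    ("img_04", "group_b"),
]

counts = Counter(group for _, group in records)
total = sum(counts.values())
for group, n in counts.items():
    print(f"{group}: {n} examples ({n / total:.0%} of the snapshot)")
# A heavily skewed snapshot means the model mostly learns one perspective.
```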


At the next stage, when the model is trained, human bias shapes even which model is selected and how it is evaluated. Then, when the data is processed, there is further human bias – for example, when we apply thresholds that depend on the context in which the data will be shared.
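Thresholds are a concrete example of this: the same model scores lead to different outcomes depending on the cut-off chosen for a given context. The scores and post names below are invented purely for illustration.

```python
# Hypothetical model scores for whether a piece of content should be flagged.
scores = {"post_1": 0.55, "post_2": 0.72, "post_3": 0.91}

def flagged(scores, threshold):
    """Return the posts whose score meets or exceeds the chosen threshold."""
    return [post for post, s in scores.items() if s >= threshold]

# A strict cut-off for one context vs a lenient one for another:
print(flagged(scores, threshold=0.9))   # ['post_3']
print(flagged(scores, threshold=0.5))   # ['post_1', 'post_2', 'post_3']
# The threshold itself is a human judgement, and so a place where bias can enter.
```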


Of course, once the data moves to the final stage and is “out there”, we see it, hear it, feel it, think it – it shapes our actions and the next batch of data we train the AI with, and the cycle continues, with human bias embedded and carried forward at each stage.
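That feedback loop can be sketched in a few lines: outputs shaped by a skewed model are folded back into the next round of training data, so the skew compounds over time. This is a toy illustration under those assumptions, not a description of any real system.

```python
# Toy feedback loop: a "model" that over-produces one label, whose outputs
# are then added to the training data for the next round.
training_data = ["a", "a", "b"]

def biased_model(data):
    """Return the majority label – a stand-in for a model amplifying skew."""
    return max(set(data), key=data.count)

for round_number in range(3):
    output = biased_model(training_data)
    training_data.append(output)          # today's output is tomorrow's data
    share_a = training_data.count("a") / len(training_data)
    print(f"round {round_number}: share of 'a' = {share_a:.0%}")
```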



As AI takes an increasingly dominant place in companies across the world, Dr Mitchell’s research highlights just how cumulative this data bias impact – which she calls “bias laundering” – can be, and it’s clear how important it is that companies address this early on. Dr Mitchell also suggests “bottom-up research working towards beneficial long-term goals could be more naturally integrated into the development pipeline” and could help address concerns that top-down leadership approaches, with limited two-way communication, might be exacerbating the cumulative bias impact.


Watch the full session to learn more about how these approaches, and other aspects of Dr Mitchell’s research and experience, could be key to developing company structures designed to ensure “ethics-informed development shapes what the company produces”.


@mmitchell_ai

@AdaLovelaceInst

@CogX_Festival




_________________________________________________________________________


Keep up to date with all things tech ethics. Sign up to our newsletter today.

_________________________________________________________________________






