There is a growing appetite for bridging the gap between AI systems, their developers, and their users. This has been a key driver of the Ada Lovelace Institute’s project on Algorithmic Impact Assessments (AIAs). In this meetup, Jenny Brennan and Lara Groves shared their recent work on developing AIAs, and how these assessments support technologists in embedding ethical principles into what they build, making them more accountable to the users they build for.
What is an AIA? An Algorithmic Impact Assessment is a tool for identifying the potential societal impacts of an algorithmic system before it’s launched.
AIAs are very much in the early stages of development, and as such there’s no standard methodology for putting one together yet. However, there’s huge interest in tools like AIAs: they have so far mostly been proposed for public sector use, and there is already one ‘live’ example of an AIA tool being used in the Canadian government. So in these early stages, it’s important to consider the potential use cases for AIAs.
There are other types of much more well-established impact assessments with a wide range of applications: environmental impact assessments were first proposed in the 1970s, and paved the way for similar tools such as the human rights impact assessment. These can be used to anticipate potential impacts of a new policy before it takes effect, or in a more evaluative way, where the assessment happens after implementation. For example, Facebook commissioned a human rights impact assessment after the human rights atrocities in Myanmar, in which the platform played a significant role.
Jenny and Lara developed their AIA framework in partnership with the NHS AI Lab. The partnership allowed them to explore how an AIA could be used in a real-life healthcare system, and NHS AI Lab will then trial the resulting process.
The AIA framework was built around the National Medical Imaging Platform (NMIP), a proposed large-scale data set of medical images developed by NHS AI Lab. The data set will include images such as MRI scans, CT scans, and photos of skin. The purpose of this platform is to provide training data for medical AI products; NHS AI Lab want to make the data set available to researchers and developers, while ensuring the data is used in a way that is in line with their values and guiding principles.
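As a purely illustrative aside, a record in a platform like the NMIP might be modelled along the lines below. The type and field names are our own hypothetical sketch, not a published NMIP schema.

```python
# Hypothetical sketch of an NMIP-style imaging record.
# All names and fields here are illustrative assumptions,
# not drawn from any real NHS AI Lab system.
from dataclasses import dataclass
from enum import Enum


class Modality(Enum):
    MRI = "MRI scan"
    CT = "CT scan"
    SKIN_PHOTO = "photo of skin"


@dataclass
class ImagingRecord:
    record_id: str          # de-identified, no patient identifiers
    modality: Modality      # one of the image types the platform holds
    body_site: str          # e.g. "chest", "left forearm"
    acquired_year: int      # coarse date, kept coarse for privacy
```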
At this stage of development, it was important to consider whether an AIA was even appropriate for this context. To gather relevant perspectives, Jenny and Lara spoke to twenty expert stakeholders, including those hoping to build commercial medical imaging products. They found a strong interest in examining the societal implications of building such technologies, as well as a need for more accessible language around AI, so that those with non-technical expertise could participate. Interviewees also noted that an AIA should not duplicate any existing regulatory steps.
These interviews laid the groundwork for the AIA process itself, which in its current form consists of seven steps. So, if a team wanted to access the NMIP data, they would have to complete the following (a rough code sketch of the whole workflow follows the list):
Reflexive exercise: the team gets together for an initial session to identify potential impacts, such as best- and worst-case scenarios for the use of their proposed AI system.
Application filtering: the team then uses the outcomes of the exercise to apply for data access; the application is reviewed (and potentially filtered out) by the Data Access Committee (DAC).
Participatory workshop: this would be facilitated by NHS AI Lab, and consist of 25-30 patients and members of the public, in order to get a diverse set of opinions on the proposed system.
AIA synthesis: here, the team incorporates the findings from the workshop into the initial template from the reflexive exercise.
Data-access decision: this synthesised material is then submitted to the DAC for review, at which point the DAC make the decision on whether to grant the team access to the data.
AIA publication: details of all the above steps are then made public on the NMIP’s website.
AIA iteration: the team, and others involved, then revisit the AIA after a set period of time — e.g. two years — and incorporate new learnings as their project develops.
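To make the sequencing above concrete, here is a minimal, purely illustrative sketch of the seven steps as an ordered workflow with the two Data Access Committee checkpoints. Everything in it (the AIAStep names, the run_aia function, the approval flags) is a hypothetical modelling choice of ours, not code from NHS AI Lab or the Ada Lovelace Institute.

```python
# Hypothetical sketch: the seven NMIP AIA steps as an ordered workflow.
# All names are illustrative inventions, not part of any real system.
from enum import Enum, auto


class AIAStep(Enum):
    REFLEXIVE_EXERCISE = auto()
    APPLICATION_FILTERING = auto()
    PARTICIPATORY_WORKSHOP = auto()
    AIA_SYNTHESIS = auto()
    DATA_ACCESS_DECISION = auto()
    AIA_PUBLICATION = auto()
    AIA_ITERATION = auto()


def run_aia(dac_approves_filtering: bool, dac_grants_access: bool) -> list[AIAStep]:
    """Walk an application through the steps, stopping where the DAC says no."""
    completed = []
    for step in AIAStep:  # Enum iteration follows definition order
        completed.append(step)
        # The DAC can halt the process at two distinct checkpoints:
        if step is AIAStep.APPLICATION_FILTERING and not dac_approves_filtering:
            break
        if step is AIAStep.DATA_ACCESS_DECISION and not dac_grants_access:
            break
    return completed


if __name__ == "__main__":
    for step in run_aia(dac_approves_filtering=True, dac_grants_access=True):
        print(step.name)
```

The point the sketch encodes is simply that the DAC can stop an application at two gates: once at filtering, and again at the data-access decision, after the participatory findings have been synthesised.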
For this AIA, Jenny and Lara recommend that NHS AI Lab use the National Medical Imaging Platform’s Data Access Committee to review both the data-access applications and the AIAs as a whole; the DAC is the accountability mechanism. NHS AI Lab may also want to put together their own committee, but they should ensure it’s made up of a mix of expertise, including medical professionals, computer scientists, and patients.
The participatory workshop is vital in ensuring a wide range of perspectives are included in the AIA; workshops should be informal, and allow participants to deliberate on the social impacts of the team’s proposed AI system. Participants will also be given induction sessions so they can understand more about the applications of AI in healthcare, and their role in the project.
Where else can this AIA be used? It’s structured around the National Medical Imaging Platform specifically, and there’s certainly no one-size-fits-all framework. However, the reflexive exercise and the participatory workshop are both steps that are easily applicable to other sectors, with some minor adjustments.
Lara and Jenny developed this AIA to be accessible to all, and hope that other organisations will see it this way if they choose to adopt these templates. The idea is that people will use the framework, and adapt and improve it for their own purposes.
The AIA mechanism is still in its infancy; there are many outstanding questions that need to be explored. For instance, who should be conducting these assessments? At the moment, AIAs are being tested internally, but in the future they could be conducted by an external party. There are other questions around what the AIA artefact should be: for the National Medical Imaging Platform, the output is a simple template, but other organisations may want to manifest their AIAs in another way, to address harms in different contexts.
It’s clear from this test project that AIAs are an important tool, but also a nascent one. Further testing is needed so that industries can gather case studies, develop best practices for conducting AIAs effectively, and eventually build AI systems that don’t perpetuate harms.
If you’re interested in learning more about the AIA process that Jenny and Lara developed for NHS AI Lab, you can take a look at the AIA template here, and the user guide here.