Development of an AI fairness toolkit for bias checking in AI evaluation results.
N/A
N/A
Developing
In our mission to deploy safe and well-tested AI, we have been discussing the topic of fairness since our inception in 2020. The guidance published thus far on bias mitigation in AI and on addressing health disparities through digital solutions lacks practical instruction. While we appreciate the importance of diversity in data, validation, and developer teams, we find it difficult to benchmark at what point these aims can be considered achieved.
Moreover, we found that fairness metrics often address individual attributes independently of healthcare context and without intersectional analysis, focussing on equality in sample numbers only. This partly stems from the intuition that health disparities should be addressed through the lens of protected characteristics (Equality Act 2010). These do not account for societal factors (e.g. poverty, literacy), which play defining roles in healthcare outcomes and are the leading causes of health disparities globally. Guidance also champions transparency on training and testing data by encouraging publication of communication tools such as model cards, similar in nature to the information leaflets produced for medications. However, it does not address how clinicians considering implementation of an AI product should interpret this information when a patient is not demographically represented in the model's data.
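To illustrate what intersectional analysis adds over single-attribute checks, a minimal sketch in Python (using hypothetical column names such as `ethnicity`, `deprivation_quintile` and `correct`; these are illustrative, not part of any existing toolkit) might compare performance per intersection of attributes rather than per attribute in isolation:

```python
import pandas as pd

# Hypothetical evaluation results: one row per case, with the model's
# correctness flag and patient attributes (column names are illustrative).
results = pd.DataFrame({
    "ethnicity": ["A", "A", "B", "B", "A", "B", "A", "B"],
    "deprivation_quintile": [1, 5, 1, 5, 5, 1, 1, 5],
    "correct": [1, 1, 1, 0, 1, 0, 1, 0],
})

# Per-attribute view: can mask disparities that only appear at intersections.
print(results.groupby("ethnicity")["correct"].mean())

# Intersectional view: accuracy for each combination of attributes,
# reported alongside subgroup size so that small samples are not
# over-interpreted.
intersectional = (
    results.groupby(["ethnicity", "deprivation_quintile"])["correct"]
    .agg(accuracy="mean", n="count")
)
print(intersectional)
```

In a sketch like this, a model can look fair per ethnicity or per deprivation level taken alone while still underperforming for a specific combination of the two, which is precisely the gap intersectional analysis is meant to close.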
The result is that the outputs of clinical AI evaluations are difficult to interpret and to communicate, increasing the potential for harm and reinforcing the need for effective methodologies to detect and mitigate bias. We have faced this problem when both developing and evaluating AI solutions.
In this project, we aim to develop and implement an open-source toolkit that can analyse AI evaluation results for fairness and bias.
The toolkit will give context-driven mitigation advice supported by robust statistical testing, helping to ensure that AI models reach conclusions based on pathology alone, and not on patient characteristics that affect healthcare outcomes and navigation of clinical pathways.
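As an indication of the kind of statistical testing involved (the toolkit's actual methods are still to be designed), a sketch might compare outcome rates across patient subgroups with a chi-squared test of independence, flagging differences that are unlikely to be due to chance; the subgroup counts below are invented for illustration:

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table of evaluation outcomes:
# rows are patient subgroups, columns are (correct, incorrect) counts.
outcomes = [
    [90, 10],  # subgroup 1
    [70, 30],  # subgroup 2
]

chi2, p_value, dof, expected = chi2_contingency(outcomes)

# A small p-value suggests model performance is associated with subgroup
# membership, i.e. predictions may depend on patient characteristics
# rather than pathology alone; this is the point at which a toolkit
# could surface context-driven mitigation advice.
print(f"chi2={chi2:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Performance differs across subgroups beyond chance; investigate.")
```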
Work done so far
We have designed our solution, which is formed of four parts. We applied for a grant to help us develop this toolkit more quickly but were unsuccessful, so the timeline for project delivery has been delayed.
The work has been paused until March 2024. We will begin stage 1 in March, consisting of data acquisition through interviews with NHS colleagues to identify the concerns that, in their expert opinion, most affect patient access to healthcare.