AI and Automation
Industry View from techUK
Mitigating bias in algorithmic decision-making
by Katherine Mayes, Programme Manager for Cloud, Data, Analytics and AI, techUK
Bias in decision-making is an inherent, age-old issue for humans. Whether consciously or not, your innate biases can influence your day-to-day decisions: the candidate you hire for a job, the “type” of person you choose to date or the neighbourhood you decide to live in. From an evolutionary perspective, forming stereotypes isn’t always a bad thing. The ability to stereotype can help us to process vast amounts of data, allowing for efficient decision-making that facilitates survival and helps us to navigate the world we live in. However, in an increasingly digital age, where algorithmic decision-making is starting to disrupt most sectors of society and to have a significant impact on people’s lives, tackling issues of bias is critical.
Algorithmic bias can be introduced in several ways: framing the problem incorrectly, failing to recognise or address historical human biases in training data sets, or using an incomplete or unrepresentative data set. The problem is also inherently difficult to fix, for a number of reasons: bias within a decision-making system may only become obvious retrospectively; a lack of social context when developing the model can itself create bias; and differing definitions of what is considered “fair”, and differing interpretations of those definitions, create further difficulties, as the sketch below illustrates.
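To make that last point concrete, here is a minimal sketch in plain Python, using made-up numbers, of how two widely used fairness definitions can disagree about the very same predictions. The group names and data are purely illustrative, not drawn from any real system.

```python
# Toy illustration: each person is a pair (predicted positive?, actually qualified?).
group_a = [(1, 1), (1, 1), (0, 0), (0, 0)]  # model selects both qualified people
group_b = [(1, 1), (0, 1), (1, 0), (0, 0)]  # model selects one qualified, one unqualified

def selection_rate(group):
    # "Demographic parity" asks whether groups are selected at equal rates.
    return sum(pred for pred, _ in group) / len(group)

def true_positive_rate(group):
    # "Equal opportunity" asks whether *qualified* people are selected at equal rates.
    selected = [pred for pred, actual in group if actual == 1]
    return sum(selected) / len(selected)

print(selection_rate(group_a), selection_rate(group_b))          # 0.5 0.5 -> looks fair
print(true_positive_rate(group_a), true_positive_rate(group_b))  # 1.0 0.5 -> looks unfair
```

By the first definition the model treats the groups identically; by the second it clearly disadvantages group B. A system can satisfy one notion of fairness while violating another, which is exactly why agreeing on a definition is so difficult.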
Thankfully, there is increased political will to tackle this issue, and a strong contingent of researchers and companies is working hard to find solutions. The Centre for Data Ethics and Innovation’s (CDEI) current review of algorithmic bias, and the ICO’s work on an AI auditing framework and a new regulatory sandbox, will play a key role in shaping industry’s approach to mitigating bias.
Companies and researchers are taking a variety of approaches to mitigating the risk of bias, from deploying algorithms that detect and mitigate hidden biases within training data sets to embedding principles and processes that hold companies accountable for fairer outcomes. IBM’s AI Fairness 360 Open Source Toolkit is one of several existing tools that aim to detect and mitigate AI bias, implementing bias mitigation algorithms that scan for signs of bias and then recommend adjustments. AI may in fact be part of the solution to bias in algorithmic decision-making: because an algorithm systemises bias, that bias can be audited and therefore rectified.
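As a rough sketch of how such a toolkit works in practice, the snippet below measures and then mitigates bias in a tiny, made-up data set using AI Fairness 360’s aif360 Python package. The column names, group encodings and numbers are all illustrative assumptions, and a real deployment would of course operate on a full training pipeline rather than eight rows.

```python
# A minimal sketch using IBM's AI Fairness 360 (pip install aif360).
# The data below is a hypothetical hiring data set: 'sex' is the protected
# attribute (1 = privileged group) and 'label' is the historical decision.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.7, 0.6, 0.4, 0.8, 0.5, 0.3, 0.2],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Step 1: detect. Disparate impact is the ratio of favourable-outcome rates
# (unprivileged / privileged); values well below 1.0 signal bias.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact before:", metric.disparate_impact())

# Step 2: mitigate. Reweighing adjusts instance weights so that favourable
# outcomes are equally represented across groups before a model is trained.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_fair = rw.fit_transform(dataset)
metric_fair = BinaryLabelDatasetMetric(
    dataset_fair, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact after: ", metric_fair.disparate_impact())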
The CDEI’s recent interim report suggests that organisations currently have a limited understanding of the tools and approaches available to identify and mitigate bias. At this year’s techUK Digital Ethics Summit on 11 December we will look to address this gap by showcasing the people and groups leading from the front in this area and developing effective, practical solutions to the ethical issues we face as a society.
Companies developing and implementing AI systems need to ensure that transparency, openness and accountability are built into the algorithmic decision-making process. These systems also need to be built by diverse, multidisciplinary teams, so that a broad range of perspectives, contexts and backgrounds is considered during development.
Applied correctly, algorithmic decision-making can lead to fairer, more efficient decisions. Industry, government, academia and civil society must work together to build a culture of public trust and confidence around automated decision-making systems, and to ensure the benefits are clearly understood and demonstrable to the public.