AI’s dark secret: It is rolling back progress on equality

AI systems all function the same way, by identifying patterns. The truth is that machine learning systems struggle with difference.

Published: Oct 19, 2024, 19:28 IST

AI (Artificial Intelligence) letters placed on a computer motherboard in this illustration taken June 23, 2023. | Photo Credit: REUTERS/Dado Ruvic

My life has never fit a pattern. My grandparents were refugees, my mother had me when she was 14 years old, and I developed huge behavioural issues as a teenager.

I did not grow up in typical circumstances. But I had an opportunity to beat the odds. If I had been born into the age of artificial intelligence (AI), though, could I still have got to where I am today? I am doubtful.

You see, while I never fit a pattern, AI is all about them.

AI systems, whether predictive or generative, all function in the same way: they process vast amounts of data, identify patterns, and aim to replicate them. The hidden truth of the world’s fastest-growing tech is that machine learning systems struggle with difference.

Pattern really is the key word here—something that happens repeatedly. In a dataset, that means an attribute or feature that is common. In life, it means something that is shared by a majority.

For example, a large language model such as OpenAI’s ChatGPT “learns” grammatical patterns and uses them to generate human-like sentences. AI hiring systems analyse patterns in the résumés of high-performing employees and seek similar traits in job applicants.

Similarly, AI image-screening tools used in medical diagnosis are trained on thousands of images depicting a specific condition, enabling them to detect comparable characteristics in new images. All of these systems identify and reproduce majority patterns.

So, if you write like most, work like most and fall ill like most, AI is your friend. But, if you are in any way different from the majority patterns in the data and AI models, you become an outlier and, over time, you become invisible. Unhirable. Untreatable.
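To make the mechanism concrete, here is a minimal sketch of that logic in Python. All of the data, feature names, and scoring are invented for illustration; no real hiring system or dataset is being reproduced. A toy screening model learns the “majority pattern” from past high performers and then scores an equally experienced but atypical candidate poorly.

```python
# Minimal, hypothetical illustration of pattern replication in an AI hiring screen.
# All data, feature names, and scores are invented for this sketch.

from statistics import mean

# Features of past "high performers" (the training data the model learns from):
# [years of experience, unbroken CV (1/0), name is "common" in the dataset (1/0)]
past_high_performers = [
    [6, 1, 1],
    [5, 1, 1],
    [7, 1, 1],
    [6, 1, 0],
]

# The "pattern": the average profile of the majority in the training data.
majority_pattern = [mean(values) for values in zip(*past_high_performers)]

def similarity_score(candidate):
    """Score a candidate by closeness to the majority pattern (higher looks 'better')."""
    distance = sum(abs(c - m) for c, m in zip(candidate, majority_pattern))
    return round(1 / (1 + distance), 2)

# An equally capable candidate whose life does not fit the pattern:
# a CV gap (career break) and an uncommon name.
typical_candidate = [6, 1, 1]
outlier_candidate = [6, 0, 0]   # same experience, different life

print("typical:", similarity_score(typical_candidate))   # scores high
print("outlier:", similarity_score(outlier_candidate))   # scores low, despite equal experience
```

The point is not the arithmetic but the logic: anything that deviates from the majority profile, however irrelevant to the job, drags the score down.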

Women of colour have known this for a long time and have exposed AI bias in image recognition and medical treatment. My own work has looked at how AI systems fail to properly identify and provide opportunities to women with Down syndrome, people living in low-income neighbourhoods, and women who are victims of domestic violence.

In light of this growing body of evidence, it is surprising that we have not yet fully faced the fact that bias is not a bug in AI systems. It is a feature.

Bias is the challenge

Without specific interventions meant to build fairness, identify and protect outliers and make AI systems accountable, this technology threatens to wipe out decades of progress towards non-discriminatory, inclusive, fair, and democratic societies.

Almost every effort to fight inequality in our world is currently being eroded by the AI systems used to decide who gets a job, a mortgage, or a medical treatment; who gets access to higher education; who makes bail; who is fired; and who is accused of plagiarism.

And it could get worse: history tells us that the road to authoritarianism has been paved with discriminatory practices and the establishment of a majority “us” versus a minority “them”.

We are putting our trust in systems that have been built to identify majorities and replicate them at the expense of minorities. And that impacts everyone. Any of us can be a minority in specific contexts: you may have a majority skin colour but a minority combination of symptoms or medical history, and so still be invisible to the systems deciding who gets medical treatment. You may have the best job qualifications but that gap in a CV, or that uncommon name, makes you an outlier.

This is not to say we should not use AI. But we cannot and should not deploy AI tools that do not protect outliers.

Bias in AI is like gravity for the aerospace industry. For aircraft manufacturers, gravity is the single greatest challenge to overcome. If your plane cannot deal with gravity, you do not have a plane.

For AI, that challenge is bias. And for the technology to take off safely, its developers and implementers must start building mechanisms that mitigate the irresistible force of the average, the common—the force of the pattern.

As an outlier, working in this space is not just a gift—it is a responsibility. I have the privilege of standing alongside trailblazing women like Cathy O’Neil, Julia Angwin, Rumman Chowdhury, Hilke Schellmann, and Virginia Eubanks, whose groundbreaking work exposes how current AI dynamics and priorities fail innovation and society.

But, more importantly, my work on AI bias allows me to honour the tiny me I once was. The clumsy, lost, awkward girl who got a chance to defy and beat the odds because they were not set in algorithmic stone.

That is why reclaiming choice and chance from AI should not be a technical discussion, but the fight of our generation.

This article first appeared on Context, powered by the Thomson Reuters Foundation.
