Cases of consumer AI bias have attracted widespread attention, highlighting the challenges of ensuring fairness in automated systems.
Google Gemini, for instance, faced criticism for generating historically inaccurate images, while xAI’s Grok was noted for producing politically slanted responses.
ChatGPT drew scrutiny over its susceptibility to reflect embedded prejudices in training data, sometimes resulting in problematic suggestions or offensive outputs.
So far, the impact of these biases has arguably been played down as a series of gaffes which are only natural while the systems learn and develop.
But as generative AI becomes more pervasive, the risk of these mistakes resulting in real-world harm grows.
Training
Inclusion and AI strategist Dr Patricia Gestoso noted bias has always existed, explaining AI is essentially just speeding up the process of arriving at skewed conclusions.
In part, the problem relates to a perception that fields including algorithms and mathematics are objective, she said.
“That’s not true”, Gestoso said, highlighting the outcome depends on the data inputted: “It’s like a recipe. An algorithm is a recipe”.
Gestoso spent more than 15 years advising companies, governments and non-governmental organisations on technology, science, diversity and inclusion, data analytics and customer experience, so her views carry weight.
She noted maths and statistics have long been “weaponised against groups”, with data manipulated to suit a particular narrative. In this context, AI simply makes the process easier, she argued.
If bias has always existed, does it matter if consumer AI exhibits the same traits?
Gestoso highlighted the employment market as one example where AI bias is problematic. The technology is used to write job descriptions and then to sort through applicants, a process with the potential to exclude people before they even apply for a role because of the language used, or to dismiss their applications based on how the data they provide is processed.
AI is already offering different results to men and women using it to research job roles and pay levels, and produces noticeably different output when preparing curricula vitae (CVs) depending on whether it is dealing with a man or a woman, Gestoso said.
There are deeper implications. Gestoso highlighted a growing role for AI in healthcare, one with the potential to skew how certain groups or genders are treated.
Perhaps the greatest problem, in Gestoso’s view, is that generative AI, the branch which creates content including text and images, is persuasive. People are inclined to believe the results it generates, even though most companies developing the technology acknowledge it is not yet at that level and results should be verified.
For all that AI itself is causing problems, Gestoso indicated there are also shortcomings in the mostly trial-and-error approach to its development.
This leaves the door open to harm being inflicted while bugs are tackled and kinks ironed out, an approach Gestoso indicated would not be accepted in any other field: “we wouldn’t say to a doctor that you just try and give it a go about a new medicine”.
Gestoso acknowledged a potential link between the historic or unconscious biases of people developing AI and the technology’s preconceptions, though stopped short of saying this is the root of the problem.
Instead, she indicated people have bought into hype around a technology that is perhaps not as ready for the roles being promoted as those endorsing it would have us believe.
“We have a lot of what I call AI washing”, Gestoso said, pointing to cases where work billed as automated is in fact still done manually, or where companies have backtracked on replacing support staff with chatbots and artificial agents.
Legality
Gestoso does not believe tackling AI bias necessarily requires fresh laws because much existing legislation can be applied to the digital world.
“We don’t need to do law for each new technology, because many existing laws and domain regulations can readily be applied to them.”
But she does believe platforms and intermediaries providing access to the technology have a role to play in terms of accountability, and suggested any laws removing such liability might not be helpful because they focus “only on the creator, or the person that is the harasser, or the person that is racist to me”.
Gestoso pointed out AI is a broad field which has been around for decades, and some applications have already produced useful tools along with many promising developments, though she cautioned against viewing the technology as a “magic bullet”.
The implications of AI bias may be widespread but, given how far advanced deployments of the technology now are, can it still be guided towards fairer, more balanced outcomes?
Allison Koenecke, an Assistant Professor of Information Science at Cornell Tech, believes the answer is not a simple one.
The academic focuses on the point where economics and computer science meet, a field which incorporates the concept of algorithmic fairness. At a broad level, this is a means of assessing whether machine learning systems operate without bias.
Koenecke said there is a degree of fluidity to the definition, explaining it spans ensuring marginalised groups are fairly represented in the algorithms used, “all the way to the more theoretical computer scientists who are thinking about different mathematical definitions of fairness”.
The concept can also involve evaluating how well algorithms work “across different groups”.
Koenecke homed in on one of her specialities, speech-to-text transcriptions, to highlight how potential bias can occur.
If “there are very few black voices speaking African American English in the training data, the transcriptions on those voices are going to be worse downstream”.
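To illustrate the kind of disparity audit Koenecke describes, the sketch below compares word error rate (WER) across speaker groups. It is a minimal, hypothetical example: the group labels and transcripts are invented placeholders, not data from her research.

```python
# Hypothetical per-group WER audit: compute transcription error
# separately for each speaker group and compare the averages.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / max(len(ref), 1)

# Invented placeholder data: (speaker_group, reference, model_output)
samples = [
    ("group_a", "he is going to the store", "he is going to the store"),
    ("group_b", "he been going to the store", "he going to store"),
]

by_group: dict[str, list[float]] = {}
for group, ref, hyp in samples:
    by_group.setdefault(group, []).append(wer(ref, hyp))

for group, scores in by_group.items():
    print(f"{group}: mean WER = {sum(scores) / len(scores):.2f}")
```

On a large, representative test set, a consistent WER gap between groups would flag the kind of downstream disparity Koenecke describes.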
People power
She noted there are “a lot of other reasons you might end up with biases”. The acoustic qualities of male voices, for example, can result in poorer quality transcriptions than for female speech, potentially skewing outcomes even when the voices used to train systems are equally represented.
Koenecke said fixes in such cases “might be more about the modelling architecture and less about the training data”.
She explained having a diverse pool of developers can help alleviate bias simply by bringing different points of view to the table. In speech-to-text, if “you never think to evaluate whether or not this particular tool works well on the deaf and hard-of-hearing population, then you’re just never going to know whether or not it works well”.
Koenecke acknowledged people are part of the problem when it comes to AI bias, emphasising awareness as a key weapon in tackling the issue.
This falls to people because the technology itself may not yet be capable of mulling matters over. “It might be difficult to actually train the model to account for those sorts of biases without first using human expertise to determine what kinds of biases are occurring.”
Koenecke differs from Gestoso by believing specific regulation might be needed to prevent AI bias having real-world harms. In healthcare, for example, there are potential dangers if one demographic is prioritised over another.
“I think these sorts of regulations likely have to be at the domain level, because the way that you would regulate something in the medical space is very different from” the recruitment sector.
She expects “many of these domains are going to have to grapple with very similar problems in terms of how much error are you willing to accept, how much can we have humans in the loop serving as experts overseeing AI output, and how much discrimination could arise” and be mitigated by regulations.
Dangerous
Koenecke noted you do not have to look far for examples of harm already being caused, pointing to suicides prompted by chatbots. She said this is a good reason for companies to prioritise safety themselves rather than wait to be told to.
Helene Molinier, adviser on Digital Cooperation at the UN, argues the way AI is trained lies at the heart of sexist, racist and misogynistic output.
She noted the majority of AI is “trained on enormous datasets that reflect centuries of inequalities”, highlighting sources including books, websites and images.
If this “data contain stereotypes, discrimination or under-representation, AI will replicate and sometimes amplify them.”
Molinier said examples of the discrimination which could impact AI are not hard to find: she explained women, “people of colour and people from the Global South are underrepresented in many training sets”, and many large language models (LLMs) “have been shown to associate men more often with leadership and women with caregiving professions”.
It is similar to the point Gestoso made regarding CVs, one she suggested is highlighted by switching the gender of the person involved while keeping all other details the same.
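That gender-swap check can be run systematically as a counterfactual audit. The sketch below is a hypothetical illustration: `score_cv` is an invented placeholder for whatever screening model is under test, not a real API, and the CV text is fabricated.

```python
# Hypothetical counterfactual audit: submit the same CV twice,
# changing only gendered details, and compare the model's scores.

def score_cv(cv_text: str) -> float:
    # Placeholder: in a real audit this would call the screening model.
    return 0.0

BASE_CV = (
    "{name} has ten years of experience leading software teams. "
    "{pronoun} holds an MSc in computer science."
)

variants = {
    "male": BASE_CV.format(name="James Smith", pronoun="He"),
    "female": BASE_CV.format(name="Jane Smith", pronoun="She"),
}

scores = {label: score_cv(text) for label, text in variants.items()}
gap = abs(scores["male"] - scores["female"])
print(f"scores: {scores}, gap: {gap:.3f}")
# A consistent non-zero gap across many such paired CVs suggests the
# model treats otherwise-identical candidates differently by gender.
```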
Molinier added image generators have been found to show similar gender bias by portraying men as politicians, professors or company CEOs.
She agreed with Koenecke’s view on the value of broad developer pools, while emphasising additional causes of AI bias.
Other factors include what data is selected to train LLMs, algorithmic design choices, institutional blindness, and structural and historical amplification of discrimination.
“This is not just a coding issue; addressing AI bias requires comprehensive, multi-layered approaches and having more gender parity in technical and decision-making roles.”
Molinier said various UN and “global normative frameworks” offer “a comprehensive roadmap” towards gender inclusivity and governance.
These include auditing procedures, improved source data, education and protecting rights, work she explained is necessary to avoid AI bias discouraging women from “entering and remaining in the tech sector”.
“When AI systems consistently undervalue women’s contributions, reproduce gender stereotypes or fail to serve women’s needs effectively, it sends a clear message about who technology is designed for and by.”
It makes addressing such bias an urgent matter, one which goes to the heart of digital justice and equality, Molinier said.