Diversity needed to tackle the inherent bias in artificial intelligence

Social distancing has been one of the most significant themes of 2020, and it’s not limited to human contact. In June, three technology giants distanced themselves from the multibillion-dollar business of facial recognition, amid a surge of global anti-racism demonstrations.

Why? Forbes reported that the datasets used to create the technology lacked diversity, causing a significant risk of misidentification, and women and people of colour were most affected.

Experts say it’s time for the industry to diversify – partly to do right by the community, partly to improve commercial outcomes – and women in tech are leading the charge.


Before we dive in, it’s important to understand what AI does. The most common use is to automate tasks, which can increase the speed and accuracy of things like data-crunching, while cutting costs. Computers are programmed with certain rules, which in theory means AI is not prone to human biases.

Artificial intelligence is an incredibly useful tool. It’s used to make diagnostic decisions in healthcare, to allocate resources for social services in things like child protection, to help recruiters crunch through piles of job applications, and much more. The technology is brilliant and sound, but it can be let down by the data it’s churning through.

Some programs are also limited by the knowledge and experience of the people who write them, which means there are always blind spots. Data scientist Cathy O’Neil, author of Weapons of Math Destruction, calls it “opinions embedded in mathematics”. Numbers can be interpreted in all manner of ways, which means even computer-made decisions aren’t always fair.

Lisa Bouari says the trouble is that AI is inherently biased. “If you think about why we use AI, we’re trying to find patterns, related groups or deduce things so we can inform business decisions,” she says, explaining the only way to remove bias completely would be to take humans out of the equation at every step before, during and after using AI.

Bouari is Executive Director at OutThought, which designs and builds virtual assistants and chatbots using artificial intelligence. This year she was named on IBM’s annual Women Leaders in AI list, which honours women from around the world who are shaping the future of technology.


Case in point: a 2016 report from independent news organisation ProPublica claimed that a computer program a US court was using to inform sentencing decisions may have been unfairly biased against African-American prisoners.

ProPublica’s analysis suggested the model rated black prisoners as almost twice as likely to reoffend as white prisoners, potentially neglecting to account for higher rates of arrest and false imprisonment in the black community.

“The dataset they were using to try and predict this had human bias already in it, which was unfavourable to African-Americans,” Bouari tells news.com.au. “That data was assuming they were rightly imprisoned, which they weren’t, and so the outcome of that model was that instead of having a fair view on who was likely to reoffend, they actually ended up with a model that exaggerated the very issue they were trying to solve, because it was in the data to begin with.”

Another classic example occurred at Amazon, where AI was used as a recruitment tool. It scanned years’ worth of resumes, learning to identify the types of candidates that would be successful, and promptly began discriminating against women. Nearly all of the company’s recent hires had been men, so the computer learnt that being male was a favourable attribute, reinforcing the existing bias.
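The mechanics of that failure are easy to reproduce. Below is a minimal sketch, using invented synthetic data and a simple token-frequency scorer rather than Amazon’s actual system, of how a model trained on historically skewed hiring decisions learns to penalise a gender-correlated signal:

```python
from collections import Counter

# Synthetic historical data: each résumé is a set of tokens plus a hire outcome.
# Because past hires were mostly men, a token correlated with women
# ("womens", as in "women's chess club") appears mainly among rejections.
past = [
    ({"python", "sql"}, True), ({"python", "java"}, True),
    ({"sql", "java"}, True), ({"python", "sql", "womens"}, False),
    ({"java", "womens"}, False), ({"python"}, True),
]

hired, rejected = Counter(), Counter()
for tokens, outcome in past:
    (hired if outcome else rejected).update(tokens)

def score(tokens):
    # Tokens seen more often among hires push the score up; tokens seen
    # more often among rejections push it down (+1 smoothing throughout).
    n_hired = sum(1 for _, outcome in past if outcome)
    n_rejected = len(past) - n_hired
    return sum(
        (hired[t] + 1) / (n_hired + 2) - (rejected[t] + 1) / (n_rejected + 2)
        for t in tokens
    )

# Two otherwise-identical résumés: the one containing the gender-correlated
# token scores lower, reproducing the bias baked into the training data.
print(score({"python", "sql"}) > score({"python", "sql", "womens"}))  # True
```

The model does nothing wrong in a technical sense; it faithfully learns the patterns in its training data, which is exactly the problem the article describes.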

Although the technology functioned correctly, human bias tainted the outcomes. It raises the question: could they have been different if the development teams had been more diverse?


Of course, these are just two scenarios. Improving the quality of our datasets is one essential step in tackling bias; however, Bouari says we also need to think critically about the way we use the data.

“We need to make sure the data we’re using to begin with is the correct set of data,” she says. “[But] it’s not just getting the data right, we need to make sure teams are using the correct models and really thinking about the problem, in relation to the data, in relation to the question they’re trying to answer.”

Lucy Lin, Founder and Chief Marketing Officer at Forestlyn.com, says diversity is key to avoiding “groupthink syndrome”, because people with different genders, ages, skills, cultural values, personalities and backgrounds will approach problems in different ways. The biggest challenge in the field of AI, she says, isn’t the technology itself, but the ethics around it.

“While the laws and regulations guiding AI are still in their infancy, we must question if the data we’re using is correct and if we trust the data source. The reputation of the source becomes incredibly important,” she tells news.com.au, explaining that transparency is key. “To address the data ownership perspective, you really need to ask for permission for usage … and you can use new technology, like blockchain, so people can see where it’s sourced and check its authenticity.”


The other side of this issue, of course, is the commercial outcomes. American research and advisory firm Gartner estimates the business value created by artificial intelligence will reach US$3.9 trillion by 2022.

“Women make up less than five per cent of venture capital, and these numbers are even lower with minority women,” says Shelli Trung, Managing Partner for VC firm REACH Australia. “If algorithms and products are not created for and catered to 50 per cent of the population, like women or minority groups, this limits the ability for the product to successfully reach more customers and scale as a business.”

Four out of the six investments she led last year included AI. In addition to creating better outcomes, she says mitigating bias and reducing discrimination will ultimately also benefit the industry’s bottom line.