
The unfair future of artificial intelligence

Will AI be a good thing for the future of work, for the future of healthcare, for the future of technology and society? Most experts say yes

There are many types of AI bias—from Twitter bots that unfortunately learned a little too well, to U.S. healthcare algorithms that disadvantaged certain groups—but the most common one can be found on your everyday Facebook feed. Facebook’s AI-powered personalisation tool is designed to suggest content it thinks we’ll like and engage with.

The only problem is, the more we engage with a certain topic, the more Facebook suggests that topic, the more we engage with it, and so on and so on, amplifying our own biases and reducing the chance we’ll come across different points of view. Facebook somehow found a way to create an algorithmic version of the famous ‘confirmation bias’, and the effects have been felt in everything from politics and social movements to the spread of misinformation.
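
For the curious, here is a deliberately simplified simulation of that feedback loop. The topics, weights and update rule are invented purely for illustration (this is not Facebook’s actual algorithm); the point is just how quickly a rich-get-richer loop lets one topic crowd out the rest.

```python
import random

# Hypothetical topics and a flat starting preference.
topics = ["politics", "sport", "science", "music", "travel"]
weights = {t: 1.0 for t in topics}

random.seed(1)
for _ in range(1000):
    # The feed suggests topics in proportion to past engagement...
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    # ...and engaging with a suggestion makes that topic more likely next time.
    weights[shown] += 0.5

total = sum(weights.values())
for t in sorted(topics, key=weights.get, reverse=True):
    print(f"{t:8s} {weights[t] / total:.0%} of the feed")
# One topic typically ends up dominating, purely through self-reinforcement.
```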

"With the recent explosion of real-world A.I applications, these risks are now out there, potentially unchecked, roaming across public data."

What's the big deal?


These risks weren’t such a big deal when AI was more of a theoretical exercise, known only to data scientists, but with the recent explosion of real-world AI applications, these risks are now out there, potentially unchecked, roaming across public data.

We’ve seen AI bias creep into employment screening tools, criminal recidivism calculators and even real estate advertisements (Facebook’s feed algorithm also found that real estate ads allegedly generated better engagement when shown to white people, with the result that only white people ended up receiving them).

Regulators have been slow to manage this sort of thing, if only because AI has moved much faster, and into more varied fields, than most people anticipated. But progress is being made. The Australian Human Rights Commission recently published its own guide to recognising AI bias. “We need to be more realistic,” Commissioner Ed Santow says. “All forms of technology have always existed in ways that can either help or harm us. That is as true of the most sophisticated form of deep neural network as it is of a knife. We need to make sure that we have a legal structure in place that makes it as likely as possible that people will be protected.”

The legal side of things is only half of the battle. In some ways, tech companies themselves are trying to put the handbrake on AI, or at least turn it into more of a surgical tool than a data-driven sledgehammer. IBM, Amazon and Microsoft have all paused or stopped work on facial recognition AI, at least until its racial biases have been ironed out (white men are correctly identified around 99% of the time, while people from other demographics are misidentified in up to a third of scans).

The big question is: will AI necessarily be a good thing for the future of work, for the future of healthcare, for the future of technology and society? Most experts say yes—with a few fundamental caveats.

The need for transparency


The first is the so-called ‘Black Box’ issue: the idea that how an algorithm actually works (its underlying model) should be kept secret for IP reasons. This creates a potentially huge transparency problem. If we’re using AI to determine who gets a bank loan, or to detect lung cancer, but we don’t know exactly how the algorithm works, there’s no real way to know whether it’s working properly.

This need for transparency is driving the next evolution of machine learning: ‘Explainable AI’. Explainable AI is essentially what it sounds like: a set of capabilities and methods that can interrogate an algorithm and look for potential biases. IBM has already released Watson OpenScale, which allows developers to check for prejudicial data sets and suggests ways to mitigate them. AI Fairness 360 is another open-source toolkit for detecting and mitigating bias in machine learning algorithms.
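
As a rough illustration of the kind of check these toolkits automate, the short sketch below computes the ‘disparate impact’ ratio: the rate of favourable outcomes for an unprivileged group divided by the rate for a privileged group. The loan decisions and group labels are made up for this example, and the function is a hand-rolled stand-in; toolkits such as AI Fairness 360 ship this metric (and many others) ready to use.

```python
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favourable-outcome rates; values far below 1.0 suggest bias."""
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return rate(unprivileged) / rate(privileged)

# Invented example: 1 = loan approved, 0 = declined, with each applicant's group.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
groups   = ["B", "B", "A", "A", "B", "B", "A", "B", "A", "B"]

print(f"Disparate impact: {disparate_impact(outcomes, groups, 'B', 'A'):.2f}")
# A common rule of thumb (the 'four-fifths rule') flags ratios below 0.8.
```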

The second caveat is legal oversight. For AI to function correctly and responsibly, it needs to be governed by a clear and enforceable set of rules. Again, these are a work in progress, and regulations vary across the world, but most major institutions have at least put forward recommendations. The EU’s High-Level Expert Group has published its Ethics Guidelines for Trustworthy AI, and the CSIRO has released something similar in Australia.

Even Silicon Valley, which has traditionally been a big fan of self-regulation, has jumped on board with some degree of enthusiasm. Facebook has announced $7.5 million in funding for an independent AI ethics research centre, Amazon is working with the U.S. National Science Foundation to fund AI fairness research, and the San Francisco-based Partnership on AI (a machine learning best-practice initiative) has over 100 members, including Apple, Amazon, Samsung, Accenture and Facebook.

“AI can help humans with bias, but only if humans are working together to tackle bias in AI.”

What's the lesson?


The lesson here is that AI is a tool, just like any other, and its efficacy depends almost entirely on the quality and integrity of its data. Feed an algorithm prejudiced data, and you’ll get a prejudiced result. AI is neither inherently better nor fairer than a human; it’s open to the same creeping biases that we all face. The trick, over the next five or ten years, will be learning to recognise those biases and mitigate their effects.
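
To see how directly that happens, here is a toy demonstration on synthetic data (the hiring scenario, numbers and features are all invented for this example): a plain scikit-learn logistic regression trained on historically skewed hiring decisions reproduces the skew almost exactly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B (synthetic)
skill = rng.normal(0, 1, n)        # skill is distributed identically in both groups

# Historical "hired" labels depend on skill, but group B was also penalised.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)
preds = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical hire rate {hired[group == g].mean():.2f}, "
          f"model hire rate {preds[group == g].mean():.2f}")
# The model's selection rates mirror the historical disparity almost exactly.
```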

As James Manyika, Jake Silberg and Brittany Presten write in the Harvard Business Review, “AI can help humans with bias, but only if humans are working together to tackle bias in AI.”


Interested in learning more about AI and how it can be used in business? Check out our online short courses here before the machines take over.

This article was originally published on 22 March 2021