Reading time: 3 minutes
When bias exists in the workplace, it hinders not only inclusivity and diversity, but creativity and innovation. Trendwatcher David Mattin says we should look to AI for the solution
Way back in 2016, Microsoft set a new AI-fuelled chatbot loose on Twitter. Her name was Tay, and Microsoft called her “an experiment in conversational understanding”. Within 24 hours, Tay was spewing forth a constant stream of mostly unprintable hate speech. “Chill, I’m a nice person. I just hate everyone,” was among her few repeatable exclamations.
It wasn’t hard to diagnose the problem. Tay used AI to learn from the conversations she had had with real humans and adjusted her own output accordingly. Hate in, hate out. Microsoft quickly pulled the chatbot and apologized. But ever since, Tay has been a poster child for the thorny, complex relationship between AI and human ethics.
Ever more of our lives – from our Facebook feeds to the stock market to the distribution of police across our cities – are governed partly by AI-fed algorithms. How can we make sure that those algorithms don’t simply replicate, and so further reinforce, our very human flaws, including racial, gender and other biases?
Coaching against bias
That conversation will only intensify in 2018. But now, a new way of looking at all this is becoming clearer. What if we flipped the debate on its head? What if we asked: what will AIs be able to teach us about how to build a more ethical world?
Given the experience with Tay, and the wider debate about how to ensure AIs don’t plunge us back into an ethical dark age, that might seem a shocking question. But it starts with a reasonable premise. All human beings have biases. We often can’t – or don’t want to – see them. It’s clear that AIs can adopt human biases, but what if they could also be a powerful tool in helping us spot them in ourselves and others?
As it turns out, this AI-fuelled ethics revolution might start in the workplace. Israeli startup Joonko is building an application that uses AI to help uncover unconscious manager bias. It syncs with the common workplace platforms used to manage staff and allocate tasks (think Salesforce, Workday and so on), then gets to work, constantly scanning data for signs of bias. If any are found, Joonko will email the relevant manager and start coaching them on how to put things right. For example: “I looked at all our candidates for the R&D Manager position and notice we have no candidates from under-represented groups. I also notice we have three diversity candidates waiting in our Application Review stage.”
Of course, managers themselves will always want to retain final control of who they hire. And we’re a long way from AIs smart enough to handle the whole process. But Joonko and AI-driven services like it can already provide valuable insight that supports decision-making.
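To make the idea concrete, here is a minimal sketch of how such a scan-and-coach step might work. This is a hypothetical illustration, not Joonko’s actual implementation: the candidate format, stage names and message wording are all assumptions.

```python
def draft_coaching_note(position, candidates, underrepresented_groups):
    """Scan a candidate pipeline and, if no under-represented candidates
    have progressed past initial review, draft a coaching message for the
    hiring manager. Each candidate is a dict with 'group' and 'stage' keys.
    (Hypothetical sketch; stage names and fields are assumed.)"""
    waiting = [c for c in candidates
               if c["group"] in underrepresented_groups
               and c["stage"] == "application_review"]
    progressed = [c for c in candidates
                  if c["group"] in underrepresented_groups
                  and c["stage"] not in ("application_review", "rejected")]
    if progressed or not waiting:
        return None  # nothing to flag for this position
    return (f"I looked at all our candidates for the {position} position "
            f"and notice none from under-represented groups have progressed. "
            f"I also notice we have {len(waiting)} diversity candidates "
            f"waiting in our Application Review stage.")

candidates = [
    {"group": "majority", "stage": "interview"},
    {"group": "minority", "stage": "application_review"},
    {"group": "minority", "stage": "application_review"},
    {"group": "minority", "stage": "application_review"},
]
note = draft_coaching_note("R&D Manager", candidates, {"minority"})
```

The real service would presumably pull this data continuously from the synced platforms and deliver the note by email; the point of the sketch is simply that the underlying check is ordinary data filtering, not exotic AI.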
Could AI help us to think and make decisions with a clearer, unbiased head?
Weeding out bullies
The idea that AI can help us forge more ethical workplace cultures could not be more timely. A more connected world is a more transparent world, and greater transparency is laying bare the internal culture of every business as never before. Often, customers don’t like what they see. Just look at the backlash against big Silicon Valley tech that started in 2017 and now rumbles on. It’s driven in part by a feeling that the big Silicon Valley players have cultivated toxic internal cultures of sexism, bullying and overwork. And that feeling has been fuelled by the testimonies of current and former employees, who in a connected world can reach millions of readers.
Internal workplace cultures and their ethical status are at the top of the agenda. And once employees start to hear about Joonko and other services like it, it’s likely they’ll start to expect their own employers to put AI to work to ensure a fair and ethical workplace. Given the right data, AIs could be trained to spot and highlight all sorts of unfair approaches, like these:
“Did you know you tend to promote the men in your team faster than the women?”
“Did you know you’ve never hired anyone over 40, although 60 per cent of your applicants meet that criterion?”
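Checks like the two above are, at bottom, simple statistics over HR data. As a rough sketch (assuming tidy tabular records, which real HR data rarely provides), here is how a tool might flag groups whose selection rate falls well below the best-performing group’s, using the “four-fifths rule” applied in US adverse-impact hiring audits:

```python
from collections import Counter

def selection_rates(candidates):
    """Compute the hire rate per group from (group, hired) records."""
    totals, hires = Counter(), Counter()
    for group, hired in candidates:
        totals[group] += 1
        if hired:
            hires[group] += 1
    return {g: hires[g] / totals[g] for g in totals}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the 'four-fifths rule' from US hiring audits)."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

candidates = [
    ("under_40", True), ("under_40", True), ("under_40", False),
    ("under_40", True),
    ("over_40", False), ("over_40", False), ("over_40", True),
    ("over_40", False),
]
rates = selection_rates(candidates)    # under_40: 0.75, over_40: 0.25
flagged = adverse_impact_flags(rates)  # ["over_40"]
```

The hard part, of course, is not this arithmetic but gathering clean, honest data and deciding what counts as a meaningful disparity; that is where the coaching layer the article describes would earn its keep.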
What’s more, managers could be far more likely to accept that a true bias has been revealed, and take action to fix it, if the advice comes from an AI. AI is coming, that much is certain. It’s up to us to make sure it makes us more ethical, not less. The workplace is a great place to start. Up next, an AI CEO? Just maybe don’t suggest that at the next all-hands meeting.