Can robots ever be trusted to “Do the right thing”?
For that matter, can humans?
Moral Machine is a website from MIT that is conducting fascinating online research, collecting data on how human morality applies to dilemmas involving driverless cars: the kill/save decision in the event of catastrophic brake failure.
I have guided about ten groups through this over the past few weeks, and it has been unsettling to see just how much ‘unconscious bias’ can affect our decisions about who we would save and who we would sacrifice. Amid the binary logic of the programme, this is a murky, swirling pool of Artificial Intelligence (AI) morality!
Twice now I have been interrupted by Siri activating on my (silenced) iPhone while I was facilitating a group workshop, evidence that our tech is getting nosier as it gets more proactive in helping us. Undeterred by their forays into online chatbots that could have been filed under “Guys, come on, they saw this coming from the International Space Station”, Microsoft are continuing to invest heavily in AI, recently launching five new Skype chatbots.
If you missed the story: Tay was Microsoft’s machine-learning AI Twitter bot, intended to mimic the language patterns of a 19-year-old American girl and to learn from interactions with human users of Twitter. Yes: real, flawed humans. Tay was, dispiritingly, taught to be racist within hours, raising theoretical questions about how best to set morals for a machine without being unduly influenced by the worst bits of humanity. And what does this mean for freedom of speech if we eliminate the outliers?
Wendell Wallach, a scholar at Yale’s Interdisciplinary Center for Bioethics, explains that the most advanced machines today have only operational morality: “the moral significance of their actions lies entirely in the humans involved in their design and use, far from full moral agency.” Looking further forward into the Age of AI, however, as robots become more sensitive and autonomous, their moral agency will also increase.
However, the thorny question cannot be avoided for long: who gets to decide and set a machine’s moral code? The engineer at the BotFactory? The department head at the company where the AI is put to work? The corporate HR department? The CEO? The shareholder? Who would you feel comfortable with deciding the outcome of the trolley problem for a self-driving car?
If setting the moral code of a machine feels like a theoretical leap too far, then let’s start somewhere more familiar. Your organisation’s or team’s core values form the moral backbone of your team and of the work you produce. This list must be meaningful and authentic; it needs to represent the way you actually function, not just be a wish list of how you would do things if you didn’t have a deadline to meet or a budget to cut. When it comes to moral questions, the bottom line is never the answer.
Being clear on your core values is what allows the moral compass for workplace AI to be set. And the AI may find sticking to this morality easier than we humans do at times… There was certainly no AI involved in the France Telecom decisions that perhaps drove dozens of employees to suicide, nor, of course, in the most recent BHS tragedy.
Perhaps, then, we could employ AI in the future as our corporate ethics gatekeepers: the safety net that forms an impregnable wall against a corrupt or morally weak Board? Yet even if a firm moral code is programmed into the AI, how possible would it be for that code to remain fixed once the AI becomes superintelligent? Francesca Rossi, professor of computer science at the University of Padova in Italy, warns that “embedding ethical principles in a machine is not going to be easy: hard-coding them is not an option, since these machines should adapt over time.”
I think it is time to switch on to the uncomfortable reality ahead if shareholder priorities alone underpin the decisions an AI is programmed to take when faced with the trolley problem. How comfortable would we be if a company’s shareholders were also its ethics committee? We need to peer behind the headlines to understand who is writing the programmes for the AI, and on what basis any ethical decision-making framework rests.
As with anyone we enter into a relationship with, we have to be able to trust AI if we are going to accept it fully into our lives. So let’s make our AI ethical and our work honourable.
The robots are coming; look busy!
Laura x
If you think you need some support in setting your Core Values and Goals, you can enhance your EQ and upgrade your humanness at a WishFish workshop on emotional intelligence and personal resilience. Please email Gail (she’s human) on info@wishfish.org.uk for more information.