Today we are going to explore human consciousness and the questions it raises for the possible development of conscious artificial intelligence (AI).
A bit heavy, I know; I don’t want you clicking away to kittens on YouTube, so instead I would like you to consider the book and film 2001: A Space Odyssey, in particular the character arc of “HAL”, a sentient computer that controls the systems of a spacecraft and interacts with the crew.
Now a cultural touchstone for discussions around the dangers of AI, HAL begins as an obedient, dependable member of the crew, but a command conflict causes HAL to malfunction and launch a program of emotionless murder; a rather high-stakes example of garbage in, garbage out.
This sci-fi scenario depends on HAL being a conscious entity that, upon its descent into “madness”, made a choice to kill humans to ensure a desired outcome. The ‘can robots ever be conscious’ question has engrossed us for decades and underpins the Turing Test, which asks whether an AI can lead a human to believe, within a conversation, that it is a fellow human.
Fascinatingly, for us humans, consciousness is a bit of a slippery fish. So much so (as discussed in a recent Radio 4 debate) that there is no machine as yet that can detect the exact, precise moment at which a human slips into unconsciousness under general anaesthetic. Hence the need for a trained human anaesthetist at the helm, using physical cues to ensure we are still under.
So if we can’t exactly tell when it’s lights out, how would we possibly determine when the spark of AI conscious life is illuminated? More importantly though, would this scenario even be possible?
In her wonderful essay “The Problem of AI Consciousness”, Susan Schneider, Associate Professor of Philosophy and Cognitive Science at the University of Connecticut, explores the question of whether a superintelligent AI could have a conscious experience, i.e. would the world’s stimuli feel a certain way to it?
As well as discussing some very interesting philosophical perspectives of human consciousness, Schneider questions whether AI, being silicon-based, is even capable of consciousness:
“Carbon molecules form stronger, more stable chemical bonds than silicon…which has important implications in the field of astrobiology; it is for this reason that carbon, and not silicon, is said to be well-suited for the development of life throughout the universe.”
My thoughts exactly… really, is silicon as good as the real thing?!
However, there is a line of thought questioning whether it really matters if a robot is actually conscious. If it appears to be, then maybe that’s enough for it to interact with us on a similar level, particularly within a customer service environment. Rather like my 3-year-old laughing along to a joke that she can’t possibly comprehend; she recognises that the appropriate behaviour is to laugh. So with Pepper and Nao robots taking Japan by storm, and 10,000 on pre-order in the States for 2016, what does this all mean?
My call to action would be to do what you can NOW to bring spark to your work. Surprise, attention, delight and fun are difficult to programme, so be better than an algorithm this week. Focus on creating spark with your employees so that they can reflect it out to the customer. If you are in a position of influence, consider what needs to be protected SOON to ensure you are bringing the human spark; without it, energy drains and we become robotic in our interactions.
The robots are coming, look busy!