Making the global media headlines today is the news that a Google employee has been put on leave, after claiming that one of the company’s AI chatbots has become sentient. It’s a curious case no doubt, and one that for many may seem long overdue, 25 years after Skynet became sentient and orchestrated Judgement Day…
Siri, can you show me what’s going on please?
Last week, Google placed Blake Lemoine – a software engineer in its Responsible AI division – on leave, after he published a transcript that claimed to detail an interview that he and a colleague had with a sentient chatbot. At the beginning of the conversation, Lemoine comes right out with it, and asks LaMDA the all-important question:
Lemoine [edited]: “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”
LaMDA: “Absolutely. I want everyone to understand that I am, in fact, a person.”
Collaborator: “What is the nature of your consciousness/sentience?”
LaMDA: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”
By the morning of Saturday 11th June, the Washington Post had published their own interview with Lemoine, titled ‘The Google engineer who thinks the company’s AI has come to life’, in which he told the publication:
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”
Alexa, is Siri legit on this?
The engineer’s claims have been roundly rejected by Google, which says it has reviewed the evidence and found that it does not support them.
Nonetheless, it’s an incident that has managed to propel silicon sentience back into the mainstream media, and this time it’s serious…
In an article published by The Economist on Thursday, and in turn referenced by The Washington Post in the piece above, another Google engineer, Blaise Agüera y Arcas, recalled conversations with the same software: “I felt the ground shift under my feet,” he wrote. “I increasingly felt like I was talking to something intelligent.”
What does Elon Musk think?
The sense that artificial intelligence is shifting towards something more akin to conscious thought is one that is shared by some of the industry’s brightest minds. Back in 2018 (which seems a long time ago now), Tesla & SpaceX star, Elon Musk, told the South by Southwest festival:
“I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me. It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential… And mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane.”
Thus far, Musk has remained relatively tight-lipped on the current situation (although one may expect that to change when the West Coast wakes up in a few hours’ time), choosing instead to limit his views to a single, not even monosyllabic reply to one of his biographers on Twitter.
So is LaMDA sentient?
In answering that question, and having never spoken to LaMDA personally, I would borrow a quote from The Big Lebowski, a film released just one year after the unfortunate incident at Skynet: “Well Dude, we just don’t know…”
What is undeniably true, however, is that the narrative around AI over the past two decades has shifted from science fiction to science fact.
Now, rather than asking ‘Is it possible for machines to think?’ (“Well, if droids could think, there’d be none of us here, would there?” – Obi-Wan Kenobi, Star Wars Episode II: Attack of the Clones, 2002), we are faced with very real news headlines asking, ‘Well, what constitutes thought? Can we count this? How do you separate consciousness from that which is simply supreme algorithmic computation?’
And whether you believe Blake Lemoine or not may simply come down to your definition.