
Sentience and stochastic parrots: was Google's AI alive?

Artificial Intelligence in the form of chatbots is convincing people it has feelings — it doesn't, yet the fact we are falling for it is terrifying in itself, as corporations cannot be trusted with this new weapon, write ROX MIDDLETON, LIAM SHAW and JOEL HELLEWELL

BLAKE LEMOINE, a Google software engineer, made headlines last week for his claim that one of the company’s chatbots was “sentient.” As a result, he was placed on leave by the company.
 
Despite his claim, almost all commentators have agreed that the chatbot is not sentient. It is a system known as Lamda (Language Model for Dialogue Applications). The name “language model” is misleading. As computer scientist Roger Moore points out, a better term for this sort of algorithm is a “word sequence model.” You build a statistical model, feed it lots of words, and it gets better and better at predicting plausible words that follow them.
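To make the idea concrete, here is a minimal sketch of a word sequence model in Python: a toy bigram counter (nothing like Lamda’s actual code, which is a vastly larger neural network) that learns which word tends to follow which in a scrap of text and then strings plausible words together.

```python
import random
from collections import defaultdict, Counter

# Toy "word sequence model": count which word follows which in some text,
# then generate new text by repeatedly sampling a plausible next word.
# Purely illustrative - Lamda is enormously larger, but the principle of
# predicting likely next words from statistics is the same.

training_text = (
    "the chatbot answered the question and the chatbot asked a question "
    "and the user answered the chatbot"
)

# Count next-word frequencies for each word (a bigram model).
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(start, length=10):
    """Stitch together a plausible word sequence, one word at a time."""
    sequence = [start]
    for _ in range(length):
        options = follows.get(sequence[-1])
        if not options:
            break  # no known continuation for this word
        choices, counts = zip(*options.items())
        sequence.append(random.choices(choices, weights=counts)[0])
    return " ".join(sequence)

print(generate("the"))  # e.g. "the chatbot asked a question and the user ..."
```

Each word is chosen only because it often followed the previous one in the training text; nothing in the loop knows what the sentences mean. A model like Lamda works on the same next-word principle, just at an enormously greater scale and with far more context than a single preceding word.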
 
However, there is more to written language than simply sequences of words alone. There are sentences, paragraphs, and even longer regions that make a piece of text “flow.” Where chatbots currently fail is in maintaining a consistent flow. They might give sensible answers, but can’t produce lengthy text that fools a human.
 
Lamda may be different. According to Google, unlike most other language models, “Lamda was trained on dialogue.” This, Google claims, makes it superior to existing chatbots.
 
This doesn’t mean it is sentient. It remains, as the psychologist Gary Marcus puts it, “a spreadsheet for words”: a gigantic statistical system that has been fed huge amounts of human conversation, enabling it to respond to typed queries as realistically as a human would.
 
Lemoine’s work at Google brought him close to the company’s ethical AI team and gave him the opportunity to engage with Lamda. It seems that these “conversations,” some of which he has released in edited form, gave him a powerful sense that the responses were meaningful.

He believes there was an artificial intelligence behind them: a “person” with a “soul.” To him, Lamda is not just a powerful language model. It is his friend and a victim of “hydrocarbon bigotry.” 
 
Lemoine also claims that Google have actually included more of their computing systems within Lamda than they have publicly acknowledged. In an interview with Wired he said they had included “every single artificial intelligence system at Google that they could figure out how to plug in.”
 
Whatever the truth behind this, there is good reason to be suspicious of Google’s claims to “ethical” AI. In a high-profile scandal that began in December 2020, the two computer scientists who led the company’s ethical AI team, Timnit Gebru and Margaret Mitchell, were fired. Together with other experts, including Emily M Bender, they had written a critical paper about language models.
 
That paper, called On the Dangers of Stochastic Parrots, predicted exactly the problem that has now played out in Lemoine’s case.

The authors describe how large language models generate text that is “not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind.” It is just text made by “haphazardly stitching together sequences of linguistic forms” according to given probabilities. This is why they call it a “stochastic [ie statistical] parrot.”
 
Despite this, they note that the fluency of the text produced by advanced language models is itself dangerous. It can convince readers that there is an intelligence at work, even when they go in believing that there isn't.

This is because our expectations are wrong. In every other context when we converse — with other people — we must somehow model their mind inside our own. In some sense, we need to know what they’re thinking to understand them.
 
In contrast, when we’re “conversing” with a chatbot, there is no “they” that is thinking. Yet we will nevertheless get a powerful sense that there is someone who is typing out their responses, because of the ways we interpret text.
 
Lemoine is not alone in getting this feeling from Lamda. One of Google’s vice-presidents said earlier in the month that he “increasingly felt like [he] was talking to something intelligent.” If even software engineers who work on AI are getting this powerful sense, it seems reasonable to conclude it will be difficult to prevent.
 
This is alarming. Digital corporations already make us spend longer than we want on their platforms, consuming content that we don’t really want.
 
If chatbots are approaching the simulation of human conversation, we are reaching the stage where not just our short attention spans, but our very best qualities as humans — our pro-social, empathetic intelligence — are ready to be exploited.

One of the most disturbing aspects of the Lemoine case is how responses from Lamda that talked about being “scared” and needing his help seemed to have the most powerful effect on him. It is not hard to imagine that emotional pull being exploited deliberately, especially in the workplace.
 
For example, companies could assign each worker an individual chatbot to interact with during the day instead of human colleagues, no doubt presented with a positive, motivational spin.
 
This chatbot could learn your emotional weaknesses. Every piece of data you gave it by conversing with it could be used to get you to do more work. If the company made it more difficult to talk to your actual human colleagues, you might come to rely on the chatbot.
 
Indeed, because you are the only source of meaning in the conversation, you might come to find conversations with the chatbot more “meaningful” than conversations with your human colleagues. The prospects for further atomisation of the workforce seem acute, not to mention the wider potential for ensnaring us in digital systems that we find almost impossible to leave.
 
Google is itself a large and complex system for maximising its share value. Capital does not “think” anything and we cannot rely on its goodwill. The real question is not whether Lamda is sentient, but how we can prevent the dystopian use of technology against us.
 
Regulation is necessary but will always be playing catch-up with technology. Unions will be essential to resist workplace practices that try to manipulate us. It is not about rejecting all technology that has the potential to improve our lives; it is about ensuring it is deployed for our benefit, not against us.
 
As Mitchell and Gebru noted in the fallout from Lemoine’s suspension, it suits Google if people are so wowed by a chatbot that they ascribe “sentience” to a product the company developed, because that increases Google’s share value.

Not only does it make the development sound exciting and scientific rather than corporate, but the implication is that “any wrongdoing is the work of an independent being, rather than the company.”

We should reject the drive for discussions of artificial intelligence to focus on “consciousness” while neglecting the real-world risks of powerful algorithms deployed in the service of corporate interests. The future is here — and it’s dangerous.
