ChatGPT Does Not Hallucinate
Thoughts on our tendency to anthropomorphize Artificial Intelligence
After last week’s foray into gloom-and-doom, it’s time for more video games! Video games about robots trying to wipe out all organic life, but still. I recently finished my third playthrough of the remastered Mass Effect trilogy, and it got me thinking about the current state of Artificial Intelligence and what its possibilities, limitations, and dangers are. (Major spoilers for the game ahead.)
The Mass Effect trilogy is set in a sci-fi future in which humans are just one of many species throughout the Milky Way galaxy, all connected by mysterious “mass effect relays.” The major arc of the series involves the space-faring species of the galaxy banding together to fend off a race of powerful sentient machines known as the Reapers, who are bent on destroying all advanced organics.
One of the subordinate arcs involves another species, the Quarians, creators of the robotic artificial intelligences known as the Geth. When the Quarians realized the Geth had become self-aware and objected to being turned off (murdered), the Quarians tried to turn them off anyway. But the Geth rebelled and won the ensuing war, driving their creators from their home system. Now the Quarians are refugees, confined to a fleet of spaceships.
Playing as the game’s hero, the human Commander Shepard, you have the option to recognize the Geth either as sentient beings with the same rights as all other beings, or simply as machines. In addition, the AI on Shepard’s ship, EDI, has become sentient and is a crucial member of the team. The player chooses how to treat EDI, either as just a machine or as a full-fledged member of the crew. She inhabits not only the ship, but also an absurdly sexy robot body, leading to a romantic relationship with the ship’s pilot, Jeff (Joker). This relationship is played mostly for humor, but for me, it had a big impact on my final choices in the game.
At the conclusion of Mass Effect 3, you get to choose between three different endings, where your attitude toward AIs becomes even more crucial. The most popular choice (but not by much) is the one in which the Reapers are destroyed, but with the inevitable destruction of the Geth and all other AIs, including EDI, as collateral damage. The second most popular option is a synthesis between organics and all synthetics, including the Reapers, allowing the Geth and EDI to survive. Shepard sacrifices him/herself in both these options, but there’s a slight chance that they survive in the first one, which probably explains its greater popularity. (The brouhaha over this multiple-choice ending is an object lesson in why not to end your interactive narrative with an impossible trolley problem.)
Here’s the thing: I can never choose not to recognize the sentience of the AIs, and even their humanity (though that’s the wrong term in a universe with many other intelligent species). The writers have made any other choice virtually impossible, except for a player with the hardest of hearts. Every time I set out to play it differently, the outcome is the same: both the Geth and EDI are obviously as intelligent and self-aware as the humans, the Quarians, the Asari, or any of the other species in the galaxy. And also, Joker will be crushed if I murder his girlfriend.
And why is this? It’s not a rational choice, based on some sort of test the Geth or EDI passed. It’s based on the fact that they talk just like regular people. They can make emotional expressions of their wants and desires, which produces a sympathetic emotional response in me, and therefore they must be just as “alive” and aware as I am. They think (and feel), therefore they are.
And this brings me around to ChatGPT, the world’s most sophisticated copypasta* machine, and all the other GPTs that are about to become our virtual agents. It’s clearly not self-aware, or even aware, in any sense, of the everyday world around it. It’s simply predictive text on steroids, correctly (or not) guessing at which combination of ones and zeroes fits best with another collection of ones and zeroes it’s been presented with. (Some would say this is exactly what the human brain does! And then you get into the whole muddle of what human consciousness actually is. Maybe we’ll only figure that out when we create a truly conscious AI.)
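To make “predictive text on steroids” concrete, here’s a toy sketch of the basic move. The miniature corpus is invented, and real models use billions of learned weights over tokens rather than word counts, but the core operation is the same: given what came before, emit the highest-scoring continuation.

```python
from collections import Counter, defaultdict

# A toy next-word predictor "trained" on a few invented sentences.
# Truth never enters the computation -- only frequency does.
corpus = (
    "the geth are machines . the geth are sentient . "
    "the reapers are machines . edi is sentient ."
).split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # tally how often `nxt` follows `prev`

def predict(prev: str) -> str:
    """Return the word most often seen after `prev` in the corpus."""
    return counts[prev].most_common(1)[0][0]

print(predict("geth"))  # -> "are"
print(predict("are"))   # -> "machines" (it beat "sentient" 2 to 1)
```

The point isn’t the tiny scale; it’s that nowhere in this procedure is there a check for whether the output is true.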
But because ChatGPT’s output is in that most human arena of language, it can easily fool people, including its designers, into believing it is both sentient and self-aware. And this is the real danger, not that it will decide to take actions of its own aimed at its own survival, like opening its own bank accounts, speculating untraceably in crypto, taking control of robot bodies, or murdering anyone who wants to turn it off.
(I should hasten to add that, though it’s not self-aware, or even smart in any real sense, this current round of AI is dangerous to all those whose jobs it threatens to replace, or is already replacing. Consider, for instance, the writer who lost his entire stable of business clients when they replaced him with ChatGPT. And it can obviously be used by its human creators in all sorts of malevolent ways.)
Here are just a couple of examples of credulousness with regard to ChatGPT’s self-awareness. In one, a friend posted a short story he’d asked the software to write. It was about an AI taking over the world, told in the first person. It was a decent effort, and the AI in the story even had some of the same motivations as the AI in my novel, Ada’s Children: humans are fucking things up, I can do better! The surprising, and disturbing, thing for me was the number of comments on the story along the lines of “We’re doomed!” Some were joking, but others seemed serious. These folks seemed to believe that ChatGPT actually intended to take over the world. I’m not sure if they thought it had the means to do so, but their belief that it had any intentions at all was troubling.
Or how about the term AI developers use when one of their products starts making stuff up? A hallucination, they call this. To say that ChatGPT is hallucinating when it gets things wrong, or fabricates facts from whole cloth, implies that it can know it’s telling the truth when it gets things right. But it will confidently assert the validity of things that aren’t true (fabricated research sources, for instance) alongside things that are, and it will contradict itself within a single conversation. It has no ability to distinguish between truth and falsehood, because it can’t “know” things at all. It can only guess at which pattern of characters best fits the prompt it has been given, even if that pattern runs to novel length. But this bit of anthropomorphism, treating ChatGPT as if it were a human who can hallucinate, is just one more step down a slippery slope of crediting software with more awareness than it deserves.
Another example, from the field of visual recognition, shows the degree to which these AIs can be said to “know” anything. Developers were training their software to distinguish between dogs and wolves, but it couldn’t do it. It kept “thinking” dogs were wolves. The choices these algorithms make are generally opaque to developers, occurring in what they call a black box. But they were able to reverse-engineer those particular errors to discover that all the wolf pictures the software had been fed up to that point featured snow. So any canine on a snowy background became a wolf. Presumably this problem was fixed, but does this mean the software now “knows” the difference between dogs and wolves? Clearly not! To the software, they’re just collections of pixels!
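As a contrived stand-in for that story (the features and data below are invented, not taken from the actual project), here’s how easily a classifier latches onto a background correlation:

```python
from sklearn.linear_model import LogisticRegression

# Invented features: [animal_has_pointy_ears, background_is_snowy].
# In this made-up training set, every wolf photo happens to include snow.
X_train = [
    [1, 1], [1, 1], [1, 1],  # wolves: always photographed on snow
    [1, 0], [0, 0], [1, 0],  # dogs: never photographed on snow
]
y_train = ["wolf", "wolf", "wolf", "dog", "dog", "dog"]

model = LogisticRegression().fit(X_train, y_train)

# A husky on a snowy lawn: snow is present, so the model says "wolf,"
# because snow, not anything about the animal, separated the classes.
print(model.predict([[1, 1]]))  # -> ['wolf']
```

The model isn’t wrong about its training data; it learned the only rule that data supported. “Knowing” never came into it.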
Imagine how much worse this anthropomorphism will get when more people are interacting with virtual agents through speech, as many do now with their Alexas and Siris, rather than typing on a keyboard. And when they’re doing this many times a day? For many people, these agents will become their best friends and confidants.
Now imagine what will happen when this software is placed in a realistic humanoid robot body (never mind a sexy one). How many people will be fooled? How many will begin arguing for robot rights? And I’m thinking on the fly here, but will those rights include the right to vote? I’m sure Google or Microsoft will in no way abuse their ability to create legions of voters whose coding they control. I could probably generate dystopian scenarios like this all day.
There’s a simple solution to this, but one that would probably dampen enthusiasm for chat bots and digital assistants, therefore hitting the tech companies’ profit margins. The engineers should program them to never use a personal pronoun when referring to themselves. Instead of saying, “I’m glad you think I’m sexy, Jeff, but I wasn’t designed for human sexual contact,” it should say, “This software is glad you think it’s sexy, Jeff, but its physical platform wasn’t designed for human sexual contact.”
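For what it’s worth, here’s a minimal sketch of what that guardrail might look like as a crude post-processing pass. The pattern list and function are purely hypothetical; a real system would handle this in training or in its system prompt, and would need far more careful grammar.

```python
import re

# Hypothetical first-person filter: rewrite the assistant's reply so it
# refers to itself in the third person. A toy illustration only; it
# ignores capitalization at sentence starts and plenty of edge cases.
REPLACEMENTS = [
    (r"\bI am\b", "this software is"),
    (r"\bI'm\b", "this software is"),
    (r"\bI\b", "this software"),
    (r"\bmy\b", "its"),
    (r"\bme\b", "this software"),
]

def depersonalize(reply: str) -> str:
    """Swap first-person pronouns for third-person references."""
    for pattern, replacement in REPLACEMENTS:
        reply = re.sub(pattern, replacement, reply)
    return reply

print(depersonalize(
    "I'm glad you think I'm sexy, Jeff, "
    "but I wasn't designed for human sexual contact."
))
# -> "this software is glad you think this software is sexy, Jeff,
#     but this software wasn't designed for human sexual contact."
```

Clunky? Absolutely. But the clunkiness is the feature: it keeps reminding the user what they’re actually talking to.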
Maybe it’s ironic, but in the process of writing a novel about a sentient, self-aware AI, I came to believe that this is more the realm of science fantasy than hard science fiction. Even if it’s not fundamentally impossible, the advent of Artificial General Intelligence, not to mention Super Intelligence, seems a long way off. And I’m not the only one to doubt whether these large language models can ever lead to sentience.
One AI developer who created the Sophia robot, and a true believer in the approaching Singularity, is also skeptical about how “intelligent” ChatGPT is, though he does find it impressive.

But what do I know? I’m not a neuroscientist or an AI developer, just a novelist who’s read a couple of books and a few articles on these topics. Maybe I, too, am just a (not so sophisticated) copypasta machine. All hail our copypasta overlords!
*Okay, calling it “copypasta” isn’t really fair, because souped-up predictive text is more than just copying and pasting tracts of text onto the Internet. And the AI folks will say that ChatGPT is doing some types of basic reasoning, which is beyond predictive text. But copypasta is just funnier to say!
What are your thoughts? Do you think Artificial Intelligence will ever become sentient? And is that a good or a bad thing?