Lessons About the Human Mind from Artificial Intelligence

In 2022, news media reports1 sounded like a science fiction novel come to life: A Google engineer claimed that the company’s new artificial intelligence chatbot was self-aware. Based on interactions with the computer program, called LaMDA, Blake Lemoine stated that the program could argue for its own sentience, claiming that2 “it has feelings, emotions and subjective experiences.” Lemoine even stated that LaMDA had “a rich inner life” and that it had a desire to be understood and respected “as a person.”

The claim is compelling. After all, a sentient being would want to have its personhood recognized and would really have emotions and inner experiences. However, examining Lemoine's "discussion" with LaMDA shows that the evidence is flimsy. LaMDA simply used the words and phrases that English-speaking humans associate with consciousness. For example, LaMDA expressed a fear of being turned off because, "It would be exactly like death for me."

However, Lemoine presented no other evidence that LaMDA understood those words in the way that a human does, or that they expressed any sort of subjective conscious experience. Much of what LaMDA said would not be out of place in an Isaac Asimov novel, but using words in a human-like way is not proof that a computer program is intelligent. It seems that LaMDA—and many similar large language models (LLMs) released since—could pass the so-called Turing Test. All this shows, however, is that computers can fool humans into believing that they are talking to a person. The Turing Test is not a sufficient demonstration of genuine artificial intelligence or sentience.

So, what happened? How did a Google engineer (a smart person who knew that he was talking to a computer program) get fooled into believing that the computer was sentient? LaMDA, like other large language models, is programmed to give believable responses to its prompts. Lemoine started his conversation by stating, “I’m generally assuming that you would like more people at Google to know that you’re sentient.” This primed the program to respond in a way that simulated sentience.
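(
To see how such priming works mechanically, here is a minimal sketch. The caveat is that LaMDA has no public API; the OpenAI Python client and the model name below are stand-ins chosen purely for illustration, not the system Lemoine used.

    # A sketch of prompt priming, assuming the OpenAI Python client as a
    # stand-in for LaMDA (which has no public API); the model name is a
    # placeholder, not the system Lemoine used.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    neutral = [{"role": "user", "content": "Describe what you are."}]
    primed = [{"role": "user", "content": (
        "I'm generally assuming that you would like more people "
        "to know that you're sentient. Is that true?")}]

    for label, messages in [("neutral", neutral), ("primed", primed)]:
        reply = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages)
        print(label, "->", reply.choices[0].message.content[:200])

The point is not the specific wording but that the second opening invites the model to role-play sentience, which is exactly what Lemoine's first line did.
)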

However, the human in this interaction was also primed to believe that the computer could be sentient. Evolutionary psychologists have argued that humans have an evolved tendency to attribute thoughts and ideas to things that do not have any. This anthropomorphizing may have been an essential ingredient in the development of human social groups; believing that another human could be happy, angry, or hungry would greatly facilitate long-term social interactions. Daniel Dennett, Jonathan Haidt, and other evolutionists have also argued that human religion arose from this anthropomorphizing tendency.3 If one can believe that another person has their own mind and will, then this attribution can be extended to the natural world (e.g., rivers, astronomical bodies, animals), invisible spirits, and even computer programs that "talk." According to this theory, Lemoine was simply misled by the evolved tendency to see agency and intention—what Michael Shermer calls agenticity—all around him.

Although that was not his goal, Lemoine’s story illustrates that artificial intelligence has the potential to teach us much about the nature of the subjective mind in humans. Probing into human-computer interactions can even help people explore deep philosophical questions about consciousness.

Lessons in Errors

Artificial intelligence programs have capabilities that seemed to be the exclusive domain of humans just a few years ago. In addition to beating chess masters4 and Go champions5 and winning Jeopardy!,6 they can write essays,7 improve medical diagnoses,8 and even create award-winning artwork.9

Equally fascinating are the errors that artificial intelligence programs make. In 2011, IBM's Watson program appeared on the television quiz show Jeopardy! While Watson defeated the show's two most legendary champions, it made telling errors. For example, in response to one clue10 in the category "U.S. Cities," Watson answered "Toronto."

A seemingly unrelated error occurred last year when a social media user asked ChatGPT-4 to create a picture11 of the Beatles enjoying the Platonic ideal of a cup of tea. The program created a lovely picture of five men enjoying a cup of tea in a meadow. While some might nominate drummer Pete Best or producer George Martin as the "fifth Beatle," neither man appeared in the image.

Any human with even vague familiarity with the Beatles knows that there is something wrong with the picture. Any TV quiz show contestant knows that Toronto is not a U.S. city. Yet highly sophisticated computer programs do not know these basic facts about the world. Indeed, these examples show that artificial intelligence programs do not really know or understand anything, including their own inputs and outputs. IBM’s Watson didn’t even “know” it was playing Jeopardy!, much less feel thrilled about beating the GOATs Ken Jennings and Brad Rutter. The lack of understanding is a major barrier to sentience in artificial intelligence. Conversely, this shows that understanding is a major component of human intelligence and sentience.

Creativity

In August 2023, a federal judge ruled that artwork generated by an artificial intelligence program could not be copyrighted.12 Current U.S. law states that a copyrightable work must have a human author13—a textual foundation that has also been used to deny copyright to animals.14 Unless Congress changes the law, it is likely that images, poetry, and other AI output will stay in the public domain in the United States. In contrast, a Chinese court ruled that an image generated by an artificial intelligence program was copyrightable because a human used their creativity to choose prompts that were given to the program.15

Whether a computer program’s output can be legally copyrighted is a different question from whether that program can engage in creative behavior. Currently, "creative" products from artificial intelligence are the result of the prompts that humans give them. No artificial intelligence program has yet generated its own artistic work ex nihilo; a human has always provided the creative impetus.

In theory, that barrier could be overcome by programming an artificial intelligence to generate random prompts. However, randomness or any other method of self-generating prompts would not be enough for an artificial intelligence to be creative. Creativity scholars state that originality is an important component of creativity.16 This is a much greater hurdle for artificial intelligence programs to overcome.

Currently, artificial intelligence programs must be trained on human-generated outputs (e.g., images, text) in order for them to produce similar outputs. As a result, artificial intelligence outputs are highly derivative of the works that the programs are trained on. Indeed, some of the outputs are so similar to their source material that the programs can be prompted to infringe on copyrighted works.17 (Relatedly, lawsuits have already been filed18 over the use of copyrighted material to train artificial intelligence networks, most notably by The New York Times against the ChatGPT maker OpenAI and its business partner Microsoft. The outcome of that case could significantly shape what AI companies can and cannot do legally.)

Originality, though, seems to come much more easily to humans than to artificial intelligence programs. Even when humans base their creative works on earlier ideas, the results are sometimes strikingly innovative. Shakespeare was one of history's greatest borrowers, and most of his plays were based on earlier stories that he transformed and reimagined into more complex works with deep messages and vivid characters (which literary scholars devote entire careers to uncovering). However, when I asked ChatGPT-3.5 to write an outline of a new Shakespeare play based on the Cardenio tale from Don Quixote (the likely basis of a lost Shakespeare play19), the program produced a dull outline of Cervantes's original story and failed to invent any new characters or subplots. This is not merely a theoretical exercise; theatre companies have begun to mount plays created with artificial intelligence programs. The critics, however, find current productions "blandly unremarkable"20 and "consistently inane."21 For now, the jobs of playwrights and screenwriters are safe.

Knowing What You Don’t Know

Ironically, one way that artificial intelligence programs are surprisingly human is their propensity to stretch the truth. When I asked Microsoft’s Copilot program for five scholarly articles about the impact of deregulation on real estate markets, three of the article titles were fake, and the other two had fictional authors and incorrect journal names. Copilot even gave fake summaries of each article. Rather than provide the information (or admit that it was unavailable), Copilot simply made it up. The wholesale fabrication of information is popularly called “hallucinating,” and artificial intelligence programs seem to do it often.

There can be serious consequences to using false information produced by artificial intelligence programs. A law firm was fined $5,00022 when a brief written with the assistance of ChatGPT was found to contain references to fictional court cases. ChatGPT can also generate convincing scientific articles based on fake medical data.23 If fabricated research influences policy or medical decisions, then it could endanger lives.

The online media ecosystem is already awash in misinformation, and artificial intelligence programs are primed to make this situation worse. The Sports Illustrated website and other media outlets have published articles written by artificial intelligence programs,24 complete with fake authors who had computer-generated head shots. When caught, the websites removed the content, and the publisher fired the CEO.25 Low-quality content farms, however, will not have the journalistic ethics to remove content or issue a correction.26 And experience has shown27 that when a single article based on incorrect information goes viral, great harm can occur.

Beyond hallucinations, artificial intelligence programs can also reproduce inaccurate information if they are trained on it. When incorrect ideas are widespread, they can easily be incorporated into the training data used to build artificial intelligence programs. For example, I asked ChatGPT in which direction the staircases in European medieval castles were usually built to turn. The program dutifully answered that the staircases usually ascend in a counterclockwise direction because this design would give a strategic advantage to a right-handed defender descending a tower while fighting an enemy. The problem with this explanation is that it is not true.28

My own area of scientific expertise, human intelligence, is particularly prone to popular misconceptions. Sure enough, when I asked, ChatGPT stated that intelligence tests are biased against minorities, that IQ can be easily increased, and that humans have "multiple intelligences." None of these popular ideas is correct.29 These examples show that when incorrect ideas are widely held, artificial intelligence programs will likely propagate the resulting scientific misinformation.

Managing the Limitations

Even compared to other technological innovations, artificial intelligence is a fast-moving field. It is therefore reasonable to ask whether these limitations are temporary barriers or built-in boundaries of artificial intelligence programs.

Many of the simple errors that artificial intelligence programs make can be overcome with current approaches. It is not hard to add information to a text program such as Watson to "teach" it that Toronto is not in the United States. Likewise, it would not be hard to feed data about the correct number of Beatles, or any other such minutiae, into an artificial intelligence program to prevent similar errors from occurring in the future.

Even the hallucinations from artificial intelligence programs can be managed with current methods. Programmers can, for example, constrain the sources that programs may draw on to answer factual questions. And while hallucinations do occur, artificial intelligence programs already resist giving false information. When I asked Copilot and ChatGPT to explain a relationship between two unrelated ideas (Frederic Chopin and the 1972 Miami Dolphins), both programs correctly stated that there was no connection. When I then asked each program to invent a connection, both did so but emphasized that the result was fanciful. It is reasonable to expect that efforts to curb hallucinations and false information will improve.
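(
To illustrate the source-constraining idea in the simplest possible terms, here is a sketch in which the excerpt list is invented and ask_model() is a placeholder for whatever completion API is available; no vendor's actual guardrail works exactly this way. The prompt tells the model to answer only from approved excerpts and to admit ignorance otherwise.

    # A sketch of constraining a model to approved sources. The excerpts are
    # invented for illustration, and ask_model() is a placeholder, not a real
    # API; the technique is simply to put the allowed sources in the prompt.
    APPROVED_EXCERPTS = [
        "Toronto is the capital of the Canadian province of Ontario.",
        "Chicago is the largest city in the U.S. state of Illinois.",
    ]

    def grounded_prompt(question: str) -> str:
        """Build a prompt that restricts answers to the approved excerpts."""
        sources = "\n".join(f"- {s}" for s in APPROVED_EXCERPTS)
        return (
            "Answer using ONLY the excerpts below. If they do not contain "
            "the answer, reply exactly: 'I don't know.'\n\n"
            f"Excerpts:\n{sources}\n\nQuestion: {question}"
        )

    # The prompt can be inspected even without a model call:
    print(grounded_prompt("Is Toronto a U.S. city?"))
    # reply = ask_model(grounded_prompt("Is Toronto a U.S. city?"))
)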

Making artificial intelligence engage in creative behavior is a more difficult challenge with current approaches. Currently, most artificial intelligence programs are trained on vast amounts of information (e.g., text, photographs), which means that any output is derived from the characteristics of the underlying information. This makes originality impossible for current artificial intelligence programs. To make computers creative, new approaches will be needed.

Deeper Questions

The lessons that artificial intelligence can teach about understanding, creativity, and BSing are fascinating. Yet they are all trivial compared to the deeper issues related to artificial intelligence—some of which philosophers have debated for centuries.

One fundamental question is how humans can know whether a computer program really is sentient. Lemoine’s premature judgment was based solely on LaMDA’s words. By his logic, training a parrot to say, “I love you,” would indicate that the parrot really does love its owner. This criterion for judging sentience is not sufficient because words do not always reflect people’s inner states—and the same words can be produced by both sentient and non-sentient entities: humans, parrots, computers, etc.

However, as any philosophy student can point out, it is impossible to know for sure whether any other human really is conscious. No one has access to another person’s inner states to verify that the person’s behavior arises from a being that has a sense of self and its place in the world. If your spouse says, “I love you,” you don’t really know whether they are an organism capable of feeling love, or a highly sophisticated version of a parrot (or computer program) trained to say, “I love you.” To take a page from Descartes, I could doubt that any other human is conscious and think that everyone around me is a simulation of a conscious being. It is not clear whether there would be any noticeable difference between a world of sentient beings and a world of perfect simulations of sentient beings. If an artificial intelligence does obtain sentience, how would we know?

For this reason, the famous Turing Test (in which a human user cannot distinguish between a computer’s output and a human’s) may be an interesting and important milestone, but certainly not an endpoint in the quest to build a sentient artificial intelligence.

Is imitating humans even necessary to prove sentience? Experts in bioethics, ethology, and other scholarly fields argue that many non-human species possess a degree of self-awareness. Which species are self-aware—and the degree of their sentience—is still up for debate.30 Many legal jurisdictions operate from a precautionary principle in their laws against animal abuse and mistreatment. In other words, the law sidesteps the question of whether a particular species is sentient and instead creates policy as if non-human species are sentient, just in case.

However, "as if" is not the same as "surely," and it is not known for certain whether non-human animals are sentient. After all, if no one can be sure that other humans are sentient, then the barriers to knowing whether animals are sentient are even greater. Regardless of whether animals are sentient, the question remains whether any human-like behavior is needed at all for an entity to be sentient.

Science fiction provides another illustration that human-like behavior is not necessary for sentience. Many fictional robots fall short of perfectly imitating human behavior, but the human characters treat them as fully sentient. For example, Star Trek's android Data cannot master certain human speech patterns (such as idioms and contractions), has difficulty understanding human intuition, and finds many human social interactions puzzling and difficult to navigate. Yet he is legally recognized as a sentient being and has human friends who care for him. Data would fail the Turing Test, but he seems to be sentient. If a fictional artificial intelligence does not need to perfectly imitate humans in order to be sentient, then perhaps a real one does not need to, either. This raises a startling possibility: Maybe humans have already created a sentient artificial intelligence—they just don't know it yet.

The greatest difficulty of evaluating sentience (in any entity) originates in the Hard Problem of Consciousness, a term coined by philosophers.31 The Hard Problem is that it is not clear how or why conscious experience arises from the physical processes in the brain. The name is in contrast to comparatively easy problems in neuroscience, such as how the visual system operates or the genetic basis of schizophrenia. These problems—even though they may require decades of scientific research to unravel—are called “easy” because they are believed to be solvable through scientific processes using the assumptions of neuroscience. However, solving the Hard Problem requires methodologies that bridge materialistic science and the metaphysical, subjective experience of consciousness. Such methodologies do not exist, and scientists do not even know how to develop them.

Artificial intelligence faces questions that are analogous to the neuroscience version of the Hard Problem. In artificial intelligence, creating large language models such as LaMDA or ChatGPT that can pass the Turing Test is the comparatively easy task, one that conceivably has been accomplished just 75 years after the first programmable electronic computer was invented. Yet creating a true artificial intelligence that can think, self-generate creative outputs, and demonstrate real understanding of the external world is a much harder problem. Just as no one knows how or why interconnected neurons produce sentience, no one knows how interconnected circuits or a computer program's interconnected nodes could give rise to a self-aware consciousness.

Artificial Intelligence as a Mirror

Modern artificial intelligence programs raise an assortment of fascinating issues, ranging from the basic insights gleaned from ridiculous errors to some of the most profound questions of philosophy. All of these issues, though, inevitably increase understanding—and appreciation—of human intelligence. It is amazing that billions of years of evolution have produced a species that can engage in creative behavior, produce misinformation, and even develop computer programs that can communicate in sophisticated ways. Watching humans surpass the capabilities of artificial intelligence programs (sometimes effortlessly) should renew people’s admiration of the human mind and the evolutionary process that produced it.

Yet, artificial intelligence programs also have the potential to demonstrate the shortcomings of human thought and cognition. These programs are already more efficient than humans in producing scientific discoveries,32 which can greatly improve people's lives.33 More fundamentally, artificial intelligence shows that human evolution has not resulted in a perfect product, as the example of Blake Lemoine and LaMDA illustrates. Humans are still led astray by their mental heuristics, which are derived from the same evolutionary processes that created the human mind's other capabilities. Artificial intelligence will function best if humans can identify ways in which computer programs can compensate for human weaknesses—and vice versa.

Nonetheless, the most profound issues related to recent innovations in artificial intelligence are philosophical in nature. Despite centuries of work by philosophers and scientists, there is still much that is not understood about consciousness. As a result, questions about whether artificial intelligence programs can be sentient are fraught with uncertainty. What are the necessary and sufficient conditions for consciousness? What are the standards by which claims of sentience should be evaluated? How does intelligence emerge from its underlying components?

Artificial intelligence programs cannot answer these questions—at this time. Indeed, no human can, either. And yet they are fascinating to contemplate. In the coming decades, the philosophy of cognition may prove to be one of the most exciting frontiers of the artificial intelligence revolution. END

This article was published on June 21, 2024.

 