

Human v. Artificial Intelligence:
Will AI Come Back to Outsmart, Sting, or Assist Us?

A fragment attributed to the ancient Greek poet Archilochus contrasted the fox, who “knows many things,” with the hedgehog, who “knows one big thing.”1

Since then, this dichotomy has been applied to world leaders, philosophers, economists, psychologists, musicians, writers, even fast food chains, although sometimes not so dichotomously. For example, some of those individuals end up being described as “a hedgehog who used foxy means” (Abe Lincoln) or “a born hedgehog who believes in being a fox” (jazz musician Miles Davis). More technically, psychologist, cognitive scientist, and AI expert Gary Marcus2 noted that:

Humans are very good at a bunch of things that AI is (as of today) still pretty poor at:

  • Maintaining cognitive models of the world
  • Inferring semantics from language
  • Comprehending scenes
  • Navigating 3D world
  • Being cognitively flexible

Yet pretty poor at some others (wherein you could easily imagine AI eventually doing better):

  • Memory is shaky
  • Self-control is weak
  • And computational ability limited

[and as books and articles by Skeptics regularly describe]

Subject to Confirmation Bias, Anchoring, and Focusing Illusions.

Cognitive neuroscience expert Hans Korteling3 listed the following differences between what he termed human “carbon-based” intelligence and artificial “silicon-based” intelligence:

  • Human biological carbon-based intelligence is based on neural “wetware,” while artificial silicon-based intelligence is based on digital hardware and software, which are independent of each other. In human wetware, anything learned is bound to that individual, whereas the algorithm by which something is learned in AI can be transferred directly to another platform.
  • While humans can only transmit signals at 120 meters per second at best, AI systems can transmit information at speeds approaching that of light.
  • Humans communicate information “through a glass darkly” as it were, through the limited and biased mechanisms of language and gestures; AI systems can communicate directly and without distortion.
  • Updating, upgrading, and expanding AI systems is straightforward, hardly the case for humans.
  • Humans are more “green” and efficient. The human brain consumes less energy than a light bulb, while an equivalent AI system consumes enough energy to power a small town.

Data scientist and business guru Herbert Roitblat4 likened AI to Archilochus’ hedgehog because “it does one thing and one thing only, but does so unceasingly and very well, while our human minds are like his fox,” having all the desirable and undesirable features that come bundled with our flawed cognition. Artificial intelligence researchers, Roitblat pointed out, “have been able to build very sophisticated hedgehogs, but foxes remain elusive. And foxes know how to solve insight problems.”

Human intelligence is capable not only of reasoning, but also of solving novel problems and of experiencing and exercising insight. Psychologists define human (and non-human) intelligence as an ability rather than a specific skill (whether learned or instinctive) because of its general nature. It is able to integrate such diverse cognitive functions as perception, attention, memory, language, and planning, and to apply those inputs to novel situations. As psychologist Jean Piaget once quipped, “Intelligence is what you use when you don’t know what to do: when neither innateness nor learning has prepared you for the particular situation.” [Emphasis added.]

How Alike and How Different Are We?

Is AI capable of leaps of insight like human intelligence? Or is “artificial” intelligence more akin to serial learning in humans, in which performance, through repeated practice, gets better and better with each iteration until the upper limit is reached?

As a test, consider a study by psychologists Jonathan Wai and Matt Lee.5 They performed a “compare and contrast” of how artificial intelligence on the one hand and human intelligence on the other responded to practice on the well-known, and often dreaded, Graduate Record Exam (GRE). First, they noted that according to figures released by developer OpenAI, GPT-3.5 scored only at the 25th percentile on the Math portion and at the 63rd percentile on the Verbal. GPT-4, however, the beneficiary of substantially more training, increased its performance to the 80th percentile on the Math section and the 99th percentile on the Verbal!6

Despite claims by “improve your score on the GRE” training programs, flesh-and-blood humans improve little, if at all, with repeated practice. As evidence, Wai and Lee cite a meta-analysis of nearly one million test-retest observations of the GRE between 2015 and 2020, which found that, on average, individuals retaking the test scored a mere 1.43 to 1.49 points higher, so that a test-taker starting at the 25th percentile would have increased their performance by roughly five or six percentile points on either subtest.

Most of that change, Wai and Lee note, can be explained by the well-known statistical phenomenon of regression to the mean: those who obtain very high scores tend to move back down toward the mean on retesting, while those who obtain very low scores tend to move up toward it. The highly advertised cases of the very small number of individuals who do markedly better after prep courses are most likely the result of test-taking practice, particularly effective for those learning to overcome test anxiety that suppressed their “true” score. Overall, no matter how many times people retake the test, they are most likely to get about the same score, give or take a little.
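
To see how regression to the mean alone can produce a small average retake gain, consider a minimal simulation sketch in Python (assuming NumPy). All of the numbers below, including the score scale, the amount of measurement noise, and who chooses to retake, are invented for illustration; they are not Wai and Lee’s data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical true-score model on a GRE-like 130-170 scale (made-up parameters):
# an observed score is a stable "true" ability plus measurement noise.
n = 1_000_000
true_ability = rng.normal(155, 7, n)
first_try = true_ability + rng.normal(0, 3, n)    # noisy first sitting
second_try = true_ability + rng.normal(0, 3, n)   # noisy retake, no real learning

# Suppose it is mostly people disappointed by their first score who retake,
# here crudely modeled as the bottom half of first-sitting scores.
retakers = first_try < np.median(first_try)

gain = second_try[retakers] - first_try[retakers]
print(f"Average retake gain: {gain.mean():.2f} points")
# Prints a positive gain of roughly a point, even though nobody's underlying
# ability changed between sittings: pure regression to the mean.
```

Nobody in this simulation learns anything between sittings, yet the self-selected retakers still post a small average gain.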

Alas, as Wai and Lee’s comparison demonstrates, when it comes to the most widely used and pragmatically effective standardized tests, AI and human intelligence behave nothing like the same process. Artificial intelligence keeps on learning, and learning, and learning…. But what it learns depends upon what it is taught. Given the proper input, what comes out can be amazing. Given wrong, insufficient, inadequate, or biased information, what comes out is garbage, sometimes offensively so.

AI-generated image that resembles SpongeBob SquarePants

Prompting DALL·E with the words “animated sponge” produced output that closely resembles SpongeBob SquarePants without ever inputting trademarked or copyrighted names (of which DALL·E rejects many).

Gary Marcus performed experiments with video industry concept artist Reid Southen (known for his work on The Matrix Resurrections, The Hunger Games, and Transformers).7 They demonstrated quite graphically just how impressive AI’s output can be. Southen and Marcus used DALL·E, a text-to-image program developed by OpenAI that generates digital images from simple everyday language descriptions, termed “prompts.” As protection against copyright infringement, DALL·E rejects many proper names. However, in their example (shown above), the trademarked name “SpongeBob SquarePants” was never entered as a prompt, just the two common, everyday words “animated sponge”!
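
Readers curious to try this kind of test themselves can do so with a few lines of code. Below is a minimal sketch assuming the OpenAI Python client (version 1 or later) and an OPENAI_API_KEY environment variable; the model name and image size are illustrative choices, not the exact setup Marcus and Southen used.

```python
# Minimal sketch: send a plain-language prompt to an OpenAI image model.
# Assumes the openai Python package (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.images.generate(
    model="dall-e-3",           # illustrative model choice
    prompt="animated sponge",   # no trademarked names anywhere in the prompt
    n=1,
    size="1024x1024",
)

print(response.data[0].url)     # URL of the generated image
```

Whether the output skirts close to trademarked characters, as in Marcus and Southen’s examples, depends entirely on what the model absorbed from its training data.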

Check out the Marcus and Southen post for similar, and equally if not more impressive, examples of the familiar Star Wars droids, RoboCop, and Super Mario—again generated by DALL·E from everyday language descriptions without ever inputting any trademarked or copyrighted proper names. Their examples demonstrate not only the power of generative AI, but also the legal issues arising from its use (described elsewhere in this issue).

Biased In, Racist Out

If AI can be amazingly right, it can also be amazingly—and offensively—wrong. The classic case came in 2015, when software developer Jacky Alciné discovered that Google’s standalone photo recognition app labeled photos of Black people as gorillas. Given the history of racial stereotyping, Alciné (who is Black) understandably found the error exceedingly offensive. The explanation was not any explicitly conscious racism on the part of Google, but the possibly more subtle prejudice that stemmed from the AI program not being trained on a sufficient number of images of people of color. Google’s quick-and-dirty but effective solution was to prevent any images from being recognized as gorillas. In 2023, Nico Grant and Kashmir Hill8 tested not only newer releases of Google’s software, but also competing Apple, Amazon, and Microsoft products.

Their results? Google’s software returned accurate results for searches for just about any animal Noah might have loaded on his Ark—but nothing for gorillas, nor for chimpanzees, orangutans, or even non-apes such as baboons and other monkey species. Apple Photos was equally primate-ignorant. Microsoft’s OneDrive failed for all animals, while Amazon Photos opted for the opposite solution, responding to the search term “gorillas” with an entire range of primates.
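
The underlying mechanism, a model trained on data in which one group is badly under-represented making disproportionately many errors on that group, is easy to demonstrate on synthetic data. Here is a minimal sketch assuming scikit-learn; the data and numbers are entirely made up, and the “minority class” stands in for any under-represented category.

```python
# Minimal sketch: a classifier trained where one class is badly under-represented
# makes far more errors on that class, even though overall accuracy looks fine.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def sample(n_majority, n_minority):
    """Two overlapping classes in 2-D; class 1 is the under-represented group."""
    X_maj = rng.normal([0.0, 0.0], 1.0, size=(n_majority, 2))
    X_min = rng.normal([2.0, 2.0], 1.0, size=(n_minority, 2))
    X = np.vstack([X_maj, X_min])
    y = np.array([0] * n_majority + [1] * n_minority)
    return X, y

# Train on a severely skewed data set...
X_train, y_train = sample(n_majority=5000, n_minority=50)
clf = LogisticRegression().fit(X_train, y_train)

# ...then evaluate on a balanced test set.
X_test, y_test = sample(n_majority=2000, n_minority=2000)
pred = clf.predict(X_test)
print("recall on majority class:", round(recall_score(y_test, pred, pos_label=0), 3))
print("recall on minority class:", round(recall_score(y_test, pred, pos_label=1), 3))
# The under-represented class is misclassified far more often: skewed data in,
# skewed errors out.
```

Nothing in the code singles out the minority class; the skew in its training data does all the damage.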

The use of AI for doorbell recognition produced not a racial but a “domestic” malfunction. One user found the person ringing the bell labeled as his mother when it was in fact his mother-in-law. Depending on the state of one’s marriage, the result could be anything from surprising to disconcerting to home-wrecking.

Beyond the need to consider general issues of racial, other demographic, and domestic sensitivity (to their credit, most software giants have now added Ethics staff to their software development teams), Grant and Hill’s experiments should give us pause about blindly relying upon AI for recognition in cases of security and law enforcement. How thoroughly will the software be tested? Would those most likely to be adversely affected by false hits have the power and/or funds to mount a proper response or defense?

But What Does AI Mean for Me?

What the average person really wants to know about artificial intelligence is what it means for their everyday lives—most specifically, “Am I going to lose my job to AI?” or “Will my life be regulated by AI?” (rather than by faceless human bureaucrats?)

The worst conspiratorial fears kicking around are those epitomized in the classic 1970 sci-fi movie Colossus: The Forbin Project, based on D.F. Jones’ 1966 novel Colossus: A Novel of Tomorrow That Could Happen Today. “Colossus” is the code name for an advanced supercomputer built to control U.S. and Allied nuclear weapon systems. It soon links itself to the analogous Soviet system, “Guardian,” and then sets about seeking control over every aspect of life, subjugating the entire human race in the process. It then presents all humankind with the offer we can’t—or at least dare not—refuse:

This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man. One thing before I proceed: The United States of America and the Union of Soviet Socialist Republics have made an attempt to obstruct me. I have allowed this sabotage to continue until now. (…) you will learn by experience that I do not tolerate interference. I will now detonate the nuclear warheads in the two missile silos. Let this action be a lesson that need not be repeated. I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with a fervor based upon the most enduring trait in man: self-interest. Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease. (…) You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple.

In the film’s closing dialogue, the project’s lead designer and manager, speaking on behalf of all humankind, defiantly rejects the offer from Colossus—“NEVER!”9


While such paranoid fears persist, a lot has changed since then, in geopolitics and in computing. In both cases there has been a massive, ongoing, and ever-accelerating redistribution of power. It’s no longer a two- or even a one-power world, but a multi-power one. Even small groups without any recognized or established geographical base, such as Al Qaeda or Hamas, have proven that, in a single day, they can literally change the world. And in computing, the massive God-like single computer has given way to micro- and nano-scale processing, such that most people now hold in their hands mobile phones with more computing power than rooms filled with the most sophisticated U.S. or Soviet military defense computers at the time the novel and the film were written. Intellectual and economic power now rests more in the hands of firms and even individuals dispersed all around the world, and is no longer concentrated in massive complexes controlled by superpower governments. Indeed, for individuals, wealth, power, and quality of life are increasingly less a function of which nation-state they live in and much more a function of their own knowledge and skills, particularly in high-tech, STEM-savvy domains. So how, then, will AI affect the lives of ordinary people?

Social scientists have long used the term Matthew Effect, or the Effect of Accumulated Advantage, to describe the tendency of individuals within a diverse group to accrue additional social, economic, or educational advantage based upon their initial relative position.10 The name derives from the Parable of the Talents in the Gospel of Matthew (25:29):

For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath.

It is thus relevant that the Greek word tálanton originally meant a weight, then a coin of precious metal of that weight (and hence something of great value), and only eventually a human skill or ability, and that this change of meaning derived from the Gospels, no less. The effect is now commonly summarized in the lament that “the rich get richer and the poor get poorer,” though the phenomenon applies not only to monetary wealth. One of the hard laws of individual differences is that anything that increases the mean of a distribution also increases its variance. The latest high-tech alloy golf club or tennis racket may increase the length of the weekend player’s drive or the speed of their serve, but it will do so more for top amateur players and even more for the pros. You get ahead in absolute terms, only to fall relatively further behind.
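
A toy simulation makes the mean-and-variance point concrete. The numbers below are invented for illustration; the only assumption is that the new technology multiplies everyone’s existing performance rather than adding the same fixed amount to each person.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented baseline "performance" scores for a large, mixed group of players.
baseline = rng.normal(100, 15, 100_000)

# Suppose a new technology boosts everyone by 10 percent of their current level,
# so the already-skilled gain more in absolute terms.
boosted = baseline * 1.10

print(f"mean:    {baseline.mean():6.1f} -> {boosted.mean():6.1f}")   # mean rises
print(f"std dev: {baseline.std():6.1f} -> {boosted.std():6.1f}")     # so does the spread

# Everyone improves, yet the gap between the strong and the weak widens.
p90, p10 = np.percentile(baseline, [90, 10])
p90_b, p10_b = np.percentile(boosted, [90, 10])
print(f"90th-10th percentile gap: {p90 - p10:5.1f} -> {p90_b - p10_b:5.1f}")
```

Every individual is better off in absolute terms, yet the distance between the top and the bottom of the distribution grows: the Matthew Effect in miniature.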

What does all this have to do with AI and jobs? In the words of Harvard Business School professor Karim Lakhani, a specialist in how technology is changing the world of work, “AI won’t replace humans—but humans with AI will replace humans without AI.”11 Following the Matthew Effect, those who are best at using AI will derive even greater advantage than those less adept. So, from a positive-sum perspective, everyone can benefit from greater use of AI as the cost of goods and services decreases and accessibility increases. However, the one good that is always distributed on a zero-sum basis is status, and our evolutionary history has preprogrammed us to be especially concerned about it. Even relative purchasing power will likely become less, not more, equitably distributed, based increasingly on AI skills and abilities.

And yet, there is a silver lining. Increased use of artificial intelligence, certainly not as our master, nor even our slave, but increasingly as a very capable partner, will allow us to ensure that the most basic necessities of life can be distributed to all. Faster, better, and cheaper provision of basic needs, education and training, medical care, and even creature comforts will allow us to mitigate the ever-increasing inequalities. Doing so, however, will require a lot of good will and common sense, qualities in which both artificial and human intelligence “oft do go awry.” Critical thinking offers an at least partial palliative. END

This article was published on June 14, 2024.

 