The Skeptics Society & Skeptic magazine


Why Should We Pursue Human Intelligence With AI?

In the quest for intelligent machines, approaching or even surpassing human intelligence has been a prominent dot on the horizon since the 1950s. Aside from the various technological challenges, I believe this quest is enormously difficult for three reasons:

  1. We don’t have a clear picture of exactly how intelligence works in humans.
  2. We have no generally accepted operational definition for many relevant concepts (such as consciousness), making their existence difficult to prove.
  3. We continually shift what we consider intelligent.

All this makes it difficult to “clone” intelligence. Consequently, experts disagree on when we will reach human-level AI. The dot on the horizon shifts with time and continuously seems to be equally far away (as in the quip “AI is five years away…and always will be”). Yet it is not inconceivable that the intelligence code will be cracked. Chess once seemed to require some form of human intelligence; you had to be able to think strategically and assess your opponent. We now know that all such “what ifs” and “if thens” can be programmed, and an abstract representation and brute computational power have proven sufficient to defeat even the greatest human chess champions. Granted, that’s just chess, but what if all those tasks that now seem immeasurably complex could also be solved with correspondingly complex algorithms, or even with relatively simple algorithms?

In that respect, creativity seems to be the new chess. AI is already capable of creating works of art and composing pieces of music, though many people have difficulty accepting the results as examples of true creativity. And there is the deeper philosophical question of whether we ourselves are not simply programmed, and thus do not act as autonomously as we like to think. AI can revive one of the most painful insults to humanity famously put forth by Sigmund Freud: many, if not most, of our actions are not the result of conscious choice.

Machines long ago outperformed us in physical labor, and more recently in computational power. And now our intellectual ability is at stake. This ability has always set us apart from all other creatures on earth and has given us (at least instinctively) control over our future. It is therefore not surprising that some people resent this development. Think what we might, AI is a mirror for humanity. It teaches us an enormous amount about ourselves and asks us fundamental questions about what it means to be human.

However, in my opinion, the key question we should be asking is: In designing AI, should we pursue human intelligence at all? Submarines do not swim the way fish do, nor do airplanes fly like birds; so why should computers have to think the way humans do? If you give a spider human-level intelligence, it will not start behaving like a human, but rather like a “super spider” that can spin even better webs and catch even more prey. We are only going to make real progress in AI when we let go of the idea that we are superior beings. Humans are not superior to insects; each species evolved through adaptations to its own environment. Humans may have more advanced cognitive skills, but insects will most likely survive even a nuclear disaster. Success is context-dependent and therefore relative.

We need to start asking ourselves for what purpose we want to use intelligent machines, rather than seeing intelligence as an end in itself. How can we use intelligent machines to create a better world? Indeed, what exactly is a better world? I submit the path forward is for humans and machines to work together, allocating tasks based on their respective specializations. Leave the complex statistics to computers but reserve the socially sensitive issues for human decision making. Let machines monitor railroads for possible damage, but let people watch over the application process for new railroad employees. Let machines assess CT scans for cancerous abnormalities, but let people discuss the treatment process with patients. Why should we build emotions into machines? On the contrary, I think we should strive to make computers operate as objectively as possible. After all, we humans, with all our evolutionarily programmed biases and emotions, have proven not to be very good at that. As the world chess champion Garry Kasparov, who was famously defeated by IBM’s Deep Blue computer, advised:

Machines have calculations.
We have understanding.
Machines have instructions.
We have purpose.
There’s one thing only a human can do.
That’s dream. So let us dream big.

About the Author

Rudy van Belkom is the Executive Director of The Netherlands Study Centre for Technology Trends (STT). His book, AI No Longer Has A Plug, offers developers, policymakers, philosophers, and anyone with an interest in AI the tools needed for integrating ethics into the AI design process. In addition, he developed an ethical design game for AI, inspired by the scrum process, that can be used to translate ethical issues into practice.

This article was published on May 17, 2024.
