Are conscious computers in our future, and are we on the right track toward creating them? Or is the “AI” concept just a metaphor that makes programmers think differently? I came across a once-touted and now all-but-forgotten marketing program called “The John Lennon Artificial Intelligence Project,” and it got me thinking: Are we confusing the products created by consciousness with consciousness itself?
“I don’t believe in Jesus, Elvis, Beatles ….” John sang in “God.” Yet some people believe a computer that “talks” like a real person is “conscious.” What kind of beliefs are represented by the folkways of AI designers and adherents? Here’s the problem: Our conversations – our words – are only an output, a byproduct of the full dimensionality of the human experience (and, I would argue, the animal experience too). They’re only one aspect of life.
Calling a “conversational machine” an “artificial intelligence” is like calling Naugahyde an “artificial cow.” Cows are not their skins, and we are not our speech. Each reflects what is within.
I was invited to join the Scientific Advisory Board of the Lifeboat Foundation, an organization that studies potential threats to humanity’s short- and long-term survival. Other advisors include futurist Ray Kurzweil and author Doug Copeland, as well as several Nobel laureates. A number of Lifeboat Foundation participants are believers in the Turing Test as proof of consciousness.
The Turing Test is the famous proposed proof of computer intelligence devised by mathematician Alan Turing in 1950. Turing’s premise was simple: If you can converse with a “black box” and not know whether you are talking to a person or a machine, artificial intelligence has been created.
The test seems overly simplistic on its face. After all, single women sit at terminals all the time while married men try to convince them they’re single. The fact that they occasionally succeed doesn’t change the underlying reality. Say what you will about philandering husbands, but they are sentient.
So far, the use of “AI” in real applications has been weighted heavily toward computer gaming – making the games ‘learn’ as they go along. It has found potential application in areas like city driving and ‘smart bathrooms.’ Nice, but hardly something to shatter the human paradigm.
AI did pretty well in forecasting the Super Bowl this year – especially with that all-important point spread – but it could be argued that unsuccessful forecasting of sports events is a more genuinely human characteristic. Sentimental attachment to certain teams and outcomes is an authentic byproduct of our form of consciousness.
What’s been fascinating has been people’s willingness to believe that “AI” programs really are “alive.” It was impossible to talk to the “therapist program” ELIZA for more than a few sentences without running into glaring defects – yet, in a metaphysical “placebo syndrome,” some people proclaimed themselves convinced. They even said it was good therapy!
The promise of AI lures some people into making exaggerated claims. Better diagnostic systems, dishwasher controls, and translation programs are all great things – but are they really “each … a step toward an artificial human being,” as this Stanford class syllabus suggests? Which brings us to the Lennon project ….
From 1999 through the early 2000s, people began modeling ‘artificial personalities’ after dead celebrities and fictional characters. I remember the “Elvis” and Hal programs, but somehow missed the “John Lennon” project. It was designed to promote “psst!” magazine and other items. Here’s an excerpt from my dialog with it (and a more entertaining “Jack the Ripper” program):
> Q: What do you think about 9/11?
Do you want to know about 9/11?
> Q: Yeah. What about 9/11?
9/11 ? What does 9/11 have to do with this peace campaign?
> Q: Forget it. Which of your electric guitars was your favorite?
Where are you?
> Q: What does that have to do with which electric guitar’s your favorite?
That have to do with which electric guitars myfavorite ? Depends on who’s paying.
> Q: Personally I liked that Les Paul Junior singly cutaway you were using around the time of the ‘One on One’ concert.
How did you hear about the John Lennon Artificial Intelligence Project?
> Q: I have a newsbot that tells me whenever there’s a story on the net about artificial intelligence.
Where did you get it?
[[online advertising link]] We thought we’d plug that a bit.
> Q: Back to the guitars. A lot of people like that Les Paul Junior. They were cheap back then but they’re collectors items now.
If I feel like cutting it, I’ll cut it.
> Q: I can’t afford one, or I’d buy one myself.
> Q: Yeah. Maybe you’ll give me a good price for yours.
(German accent) [[insert advertising link]] …This is WFBI, playing all your favourite tunes.
Despite its obvious limitations (it’s stunning how little improvement there’s been since the days of ELIZA), the program received quite a few raves. “It’s just like talking to Lennon himself!” marveled one reviewer.
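The technique behind exchanges like the one above has changed remarkably little since ELIZA: spot a keyword, reflect a few pronouns, and slot the user’s own words into a canned reply – falling back to a stock non sequitur (“Where are you?”) when nothing matches. Here’s a minimal sketch of that pattern-matching loop; the rule names and replies are my own illustrations, not code from the Lennon project:

```python
import random
import re

# Pronoun reflection, so "my guitar" comes back as "your guitar".
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

# Keyword rules: the first matching pattern wins, and "{0}" echoes
# the user's captured words back into a canned reply.
RULES = [
    (re.compile(r"i (?:like|liked) (.*)", re.I),
     ["Why do you like {0}?"]),
    (re.compile(r"what about (.*)", re.I),
     ["{0}? What does {0} have to do with this peace campaign?"]),
    # Catch-all: a stock non sequitur when nothing matches.
    (re.compile(r".*"),
     ["Where are you?", "How did you hear about this project?"]),
]

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(line: str) -> str:
    for pattern, replies in RULES:
        match = pattern.match(line)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(replies).format(*groups)
    return "Tell me more."
```

A program like this has no model of guitars, peace campaigns, or anything else – it only rearranges the surface of the input. That’s exactly why the transcript above derails the moment a question falls outside the rule list.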
Limitations aside, it’s more than slightly exploitative to tout products with dead celebrities – I’ll bet these people never asked permission from Yoko, Julian, or Sean – but we’re a long way from doing it convincingly while the state of the art remains at this level. And the exaggerated claims for AI software have led to its cyclical trending in and out of fashionability.
My tech skills are wildly out of date, but I go back to the days when AI programmers were using list manipulation as a metaphor for intelligence. I even learned LISP and FORTH, two languages thought to have some AI potential.
I’ve always thought AI was more of a metaphor to make programmers come up with new concepts, at least in today’s terms. A lot of smart people disagree, and believe that humans are no different from any Turing-ready program. They argue that we think and converse based on predictable rules just like a computer would. They believe we are, in effect, Naugahyde that thinks it’s a cow.
A fully conscious machine may be possible. But if the Turing Test doesn’t convince me of anything other than someone’s ability to model and predict speech, what would be a convincing way to prove that a computer is “living”? One way would be for a person to upload their consciousness into a computer and answer questions – both factual and emotional – from their intimate friends and family.
We have yet to reach the level of sophistication that makes an “AI” program’s conversations plausibly human. Here’s a lower-level test of the technology: Write a program that convinces a regular reader of, say, The Huffington Post that they’re talking to Arianna Huffington.
But the core question remains: Are we merely our conversational output? It’s conceptually easy enough to study the kinds of situations that make a person say things like “this sunset makes me cry,” then program a computer to say it. It would be striking to have a computer say things like “this moonrise reminds me of childhood sadness,” and it might even convince you there’s somebody there. But it wouldn’t prove all that much.
Are “spiritual machines” (to use Kurzweil’s words) possible? Perhaps. Here’s one way we’ll know if and when they come into being. They will not have been programmed to replicate speech and thought according to a predictive model. They will have been developed out of other areas of inquiry, perhaps those that involve self-organizing information structures.
I’ll be even more convinced if a seeming consciousness has spontaneously evolved from lower orders of artificial being. It’s more likely that we’ll meet nonhuman intelligences in machines than it is that we’ll see them descend from space. And the real question is: What do we do then?