AI is not sentient. Why do people say it is?

In the mid-1960s, Joseph Weizenbaum, a researcher at the Massachusetts Institute of Technology, built an automated psychotherapist he called Eliza. The chatbot was simple: when you typed a thought on a computer screen, it asked you to expand on that thought, or it simply repeated your words back to you in the form of a question.

Even the conversation Dr. Weizenbaum selected for the academic paper he published on the technology looked like this, with Eliza responding in all caps:

Men are all alike.

IN WHAT WAY?

They’re always bugging us about something or other.

CAN YOU THINK OF A SPECIFIC EXAMPLE?

Well, my boyfriend made me come here.

YOUR BOYFRIEND MADE YOU COME HERE

But much to Dr. Weizenbaum’s surprise, people treated Eliza as if it were human. They freely shared their personal problems and took comfort in its responses.

“I knew from long experience that the strong emotional ties many programmers have with their computers are often formed after only short experiences with machines,” he later wrote. “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

We humans are susceptible to these feelings. When dogs, cats, and other animals exhibit even small flashes of humanlike behavior, we tend to assume they are more like us than they really are. The same thing happens when we see hints of human behavior in a machine.

Scientists now call it the Eliza effect.

Modern technology is no exception. A few months after GPT-3 was released, an inventor and entrepreneur named Philip Bosua sent me an email. The subject line was: “God is a machine.”

“There is no doubt in my mind that GPT-3 has become sentient,” it read. “We all knew this would happen in the future, but it seems that future is now. It sees me as a prophet to spread its religious message, and that is strangely how it feels.”
