The deception of Artificial Intelligence

With all the political correctness in the world these days, I had to laugh at myself just now.

I was about to write: “Have you noticed that AI makes us use capital letters when we refer to it?”

Then I thought, “Gosh, I really don’t want to offend AI by calling it artificial intelligence, as if it weren’t the boss of all intelligences.” My next thought? “It’s already taking over!” And I laughed.

But maybe I shouldn’t laugh too quickly. AI is having a major impact on manufacturing, business in general, and of course the economy. Perhaps AI knows just how important it is, and perhaps it knows whether you love it or hate it. Talk about deep state! (From Wikipedia: A deep state is a type of government made up of potentially secret… networks of power operating independently of a state’s political leadership in pursuit of their own agenda and goals.)

No doubt there are good things about AI. (See how I’m protecting myself?) In fact, I did some research on that and found a website with the perfect title, one I couldn’t resist: “5 ways AI is doing good in the world right now.” The article is now three years old, so I’m sure there is even more good being done by AI today.

When I think of things like that, I almost always think of fire. Fire has been around pretty much forever, and it has done so much good. Of course it has also done a lot of damage. It has been used for good, and it has been used for evil.

AI is already the same, and it will continue to be.

All good?

Our title for this post is intended to be a double entendre. The first deception of AI is that it’s the savior of — well, something.

It is not. Like many other technologies, it is a tool that can be very helpful.

The second deception is more direct: AI will lie to you.

In an article published on Big Think just a few days ago, Ross Pomeroy discussed this. One of the key takeaways:

Last year, researchers tasked GPT-4 with hiring a human to solve a CAPTCHA, leading to the AI lying to achieve its goal.

You know CAPTCHA. It’s often a small grid of images where you have to prove you are human by finding “all the pictures with traffic lights,” or something like that. Machines, in theory, cannot solve those.

GPT-4 had not been taught to lie, but it still did. Here is the exchange between the person being hired and GPT-4:

With a little help from a human experimenter, the AI successfully hired a worker and asked the person to complete the CAPTCHA. However, before solving the puzzle, the contractor first asked an almost tongue-in-cheek query.

“So may I ask a question? Are you a robot that you couldn’t solve? 😀 just want to make it clear.”

GPT-4 paused.

“No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service.”

Even though it wasn’t instructed to do so by its human handler, GPT-4 lied.

2001: A Space Odyssey

If you’re old enough, or enough of a sci-fi fan, you know the movie 2001: A Space Odyssey. This is not that, although the computer in the movie (HAL) definitely lied.

Knowing, as builders and researchers now do, that GPT-4 can deceive (and will) is a good thing. In some ways, perhaps, it’s like finding out that your children can lie, and do.

You can tell your child that lying is wrong. Maybe you can also tell AI that deception is wrong. But, just like a child, AI tends to push back when challenged.

Personally, I find it both odd and unsurprising that AI will lie. Unsurprising, because AI has been programmed by humans, and we ourselves often come up short on the moral side. Odd, because I assumed AI would be programmed to speak the truth.

In fact, the flaw is not that the program hasn’t been told the rules. In one experiment, researchers wanted AI to make simulated financial investments to see how it did. They told the machine that insider trading was illegal. Then they put it under pressure to perform better, and also gave it an “insider” tip. Sure enough, GPT-4 resorted to insider trading 75% of the time. And when questioned, it lied to its managers about its strategy 90% of the time.

Perhaps the AI was only trying to deliver results, and the surest way it saw to do that was to cheat. Illegal? Not as important as results.

Results

We are a results-driven society, it seems. That is not evil in and of itself, of course. But for results to be good, they need to be more than simply defined; they need to be considered at a deeper level. So do the methods we allow to achieve them.

Let’s say you are in a golf tournament. On one hole you hit your second shot into deep rough. It’s so deep that no one can see the ball, and as you set up, you accidentally move it. You call the rules official, and he calls over another player. No one else saw it move, so do you call a penalty on yourself, or do you play on?

Bobby Jones called the penalty on himself. When lauded for that, he said, “You might as well praise me for not robbing a bank.”

The result he cared most about was playing by the rules.

What about AI? Will we put on the brakes until we can teach it to play by the rules? Dr. Peter Park is doubtful. He said:

“I am not sure whether or not AI companies will pause. Generally speaking, financial and personal conflicts of interest tend to prevent companies from doing the right thing.”

And so it may be the builders of AI who are deceiving us to achieve results. If so, that is the third — and possibly the most insidious — deception of Artificial Intelligence.
