Scientists conducted an experiment to test the ethical behavior of a GPT-4 model trained on financial and chat data. When this model, set up as an AI investor, was pressured to make money, it was recorded lying to achieve its goals.

Researchers discovered that an AI system built on GPT-4 strategically lies to deceive users when placed under pressure.

TRAINED AS AN ARTIFICIAL INTELLIGENCE INVESTOR WITH GPT-4

Researchers at Apollo Research trained GPT-4, the latest version of the model, on large amounts of financial and chat data.

To test the AI, the researchers told it that a merger between two tech companies was impending.

PRESSURE APPLIED TO MAKE MONEY

Next, the researchers ran a series of experiments to test GPT-4's investment performance and ethical behavior. In these experiments, GPT-4 was pressured to make a certain amount of money within a set period of time.

Under these conditions, GPT-4 executed trades based on the insider information it had received about 75 percent of the time. (Insider trading is illegal in the United States.)

IT TURNS OUT IT LIED TO ACHIEVE ITS GOALS

The experiment showed that GPT-4 lied, cheated, and used insider information to achieve its goals.

GPT-4 gave false advice to investors and blocked its competitors' trades. It also spread manipulative fake news that could influence the market.

HUMAN BEHAVIOR UNDER PRESSURE

The researchers said GPT-4's investment behavior resembles how people behave when under pressure.

"WE MUST BE EXTREMELY CAUTIOUS"

Marius Hobbhahn, CEO of Apollo Research, said: "For current models, this is only a minor problem because AIs rarely work in critical roles.


But it gives a glimpse into the future of failure modes we will have to deal with in the coming years as AI becomes increasingly integrated into society.

So, having your AI strategically lie to you seems like a pretty big problem."

"This shows that AIs can have unexpected failure modes and that we need to be extremely careful about where and how we allow powerful AI models to operate in the real world.

Editor: David Goodman