
US President Joe Biden has instructed his staff to keep pace with artificial intelligence technology. Biden signed an "ambitious" executive order aimed at balancing the needs of tech companies against national security and consumer rights.

"Artificial intelligence is all around us," Biden said, warning that in the wrong hands, AI could make it easier for hackers to exploit vulnerabilities in software.

Biden stressed that the technology must be governed so that its promise can be realized and its risks avoided.

White House chief of staff Jeff Zients also said that while formulating the decree, Biden told him, "We cannot move at the speed of a normal government. We have to move as fast, if not faster, than technology itself."

The guidelines in the order are set to take effect and be implemented over periods ranging from 90 days to one year.

What is the purpose?

Biden's move turns some of the voluntary commitments already made by technology companies into obligations.

The order, prompted by the disruption caused by new AI tools such as ChatGPT that can generate text, images and audio, aims to steer how artificial intelligence is developed so that technology companies can profit without jeopardizing public safety.

Under the order, AI developers will be required to share safety test results and other information with the government, drawing on authority from the Defense Production Act. The National Institute of Standards and Technology will also create standards to ensure that AI tools are safe and secure before they are released to the public.

The Department of Commerce will issue guidance for labeling and watermarking AI-generated content to help distinguish between real interactions and those generated by software.

The order also addresses privacy, civil rights, consumer protections, scientific research and labor rights.

The order, described as preliminary guidance to be strengthened by legislation and international agreements, is presented as a first step towards ensuring that AI is trustworthy and useful rather than deceptive and harmful.