The letter, signed by 13 employees, most of them former, of companies including OpenAI, Anthropic and Google's DeepMind, argues that AI researchers need stronger protections to voice criticism of new developments and to raise concerns with the public and policymakers about AI.

"LOSS OF CONTROL OF ARTIFICIAL INTELLIGENCE COULD LEAD TO HUMAN EXTINCTION"

"We believe in the potential of artificial intelligence technology to bring unprecedented benefits to humanity. At the same time, we recognize the serious risks these technologies pose. These risks range from deepening existing inequalities, manipulation and misinformation, loss of control of autonomous AI systems, and potentially human extinction."

An OpenAI spokesperson told The Independent, "We are proud of our track record of providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous discussions are crucial given the importance of this technology, and we will continue to engage with governments, civil society and other communities around the world."

OpenAI also pointed to its support for increased government regulation of AI safety.

Editor: David Goodman