More than 700 scientists, politicians, business leaders, and celebrities, including right-wing US media personalities Steve Bannon and Glenn Beck, have called for a halt to work aimed at developing artificial intelligence (AI) capable of surpassing human capabilities, in order to avoid the risks it would pose to humanity.
"We ask that the development of a superintelligence be halted until there is scientific consensus that it could be built in a controlled and safe manner and until there is public support for doing so," reads the page of the initiative by the Future of Life Institute, a US-based non-profit organization that regularly warns on the harmful effects of artificial intelligence.
Among the signatories are "fathers" of modern TN, such as Geoffrey Hinton, Nobel laureate in Physics in 2024, Stuart Russell, professor of computer science at the University of California, Berkeley, or Yoshua Benzio, professor at the University of Montreal.
The list also includes figures from the technology world, such as Richard Branson, founder of the Virgin Group, and Steve Wozniak, co-founder of Apple; political figures, such as Steve Bannon, former adviser to US President Donald Trump, and Susan Rice, national security adviser under Barack Obama; religious leaders, such as Paolo Benanti, adviser to the pope and the Vatican's leading expert on artificial intelligence; as well as celebrities such as the American singer will.i.am and even Prince Harry and his wife Meghan Markle.
Support from figures like Bannon reflects a potentially growing concern about artificial intelligence among the populist right, Reuters points out, at a time when many with ties to Silicon Valley have taken on important roles in the Republican administration of US President Donald Trump.
Steve Bannon and Glenn Beck did not immediately respond to a request for comment.
Most of the big players in artificial intelligence are seeking to develop artificial general intelligence (AGI), a stage at which AI would match all the intellectual abilities of humans, and beyond that superintelligence, which would exceed those abilities.
According to Sam Altman, head of OpenAI, the company behind ChatGPT, superintelligence could be achieved within five years, as he explained in September at an event hosted by the Axel Springer media group.
"It doesn't matter if it's in two years or fifteen years, building something like this is unacceptable," Max Tegmark, president of the Institute for the Future of Life, told AFP, for which companies should not proceed with such work "without a regulatory framework in place."
"We can be in favor of creating more powerful artificial intelligence tools, for example to cure cancer, while being against superintelligence," he added.
The initiative echoes a letter from researchers and executives in the field of artificial intelligence, made public a month ago during the United Nations General Assembly, which called for "international agreements on red lines for artificial intelligence" to prevent catastrophic consequences for humanity.
