The signatories have expressed longstanding concerns about the rapid progress of AI, which has been highlighted by the emergence of ChatGPT. They are calling for a pause in the race to create "powerful digital minds," which even their creators cannot fully comprehend or control.
The release of ChatGPT at the end of 2022, followed by GPT-4 just a few days ago, marks a significant leap forward in the practical applications of AI. Previously, AI was a subject for experts — it has now entered the mainstream.
These experts caution that we have not yet mastered the ethical, social, economic, political or strategic consequences of this technology, which is progressing rapidly and on a massive scale. Before continuing at such breakneck speed, they argue that we should take a moment to reflect and establish a set of rules to govern its development.
Of course, it might be suggested that some of the signatories — Elon Musk comes to mind — are more afraid of losing the AI innovation race. Musk was one of the first financiers of OpenAI, the company behind ChatGPT, but he later withdrew from the project, and it is Microsoft, now OpenAI's main backer, that is the major beneficiary of ChatGPT's success. Still, the proposal raises legitimate concerns.
Is it possible to take a "break" in AI development? It seems unlikely — first, because of the intense competition among the Silicon Valley giants, and then globally. In the current climate, could the U.S. and China agree on rules for AI while Washington wages a technological war against Beijing?
Still, the questions raised are worth discussing. First of all, anyone who has used ChatGPT understands that, despite its errors and failures, the software has the potential to replace some human tasks. What will happen to the millions of human professionals that AI could replace?
Another concern is the potential for AI to generate disinformation. The images created by AI, such as the recent one of the Pope wearing a puffy down jacket, are remarkably realistic and hard to differentiate from reality.
When it comes to military matters, there's also the danger of autonomous weapons, where AI could make decisions about who and what to target, and when to pull the trigger. The U.S. government says its military will not develop such weapons — unless its enemies do.
Thus there are plenty of good reasons for a temporary "pause" in artificial intelligence research. Yet it remains to be seen whether human intelligence is hard-wired to make such a reasonable choice.