Pausing AI Research: Are Humans Intelligent Enough To Do The Right Thing?
Everyone from Elon Musk to Apple co-founder Steve Wozniak to top artificial intelligence researchers has signed a public petition calling for a six-month moratorium on AI research. The ultimate decision will be left in the hands of humans, who are smart, but also vain and greedy.
PARIS — A request for a six-month moratorium on artificial intelligence research, shared Wednesday by the Future of Life Institute, garnered over 1,000 signatures within hours from leading engineers and entrepreneurs in American technology. Notable signatories include Elon Musk, the head of Tesla and SpaceX; Steve Wozniak, the co-founder of Apple; and the visionary author Yuval Noah Harari.
Their request is simple: they're calling for a six-month moratorium on any new research into AI tools that goes beyond what has already been accomplished by conversational software such as GPT-4, which has attracted significant attention.
The signatories have expressed longstanding concerns about the rapid progress of AI, which has been highlighted by the emergence of ChatGPT. They are calling for a pause in the race to create "powerful digital minds," which even their creators cannot fully comprehend or control.
The public release of ChatGPT at the end of 2022, followed by GPT-4 just a few days ago, marks a significant leap forward in the practical applications of AI. Previously, AI was a subject for experts; it has now entered the mainstream.
These experts caution that we have not yet mastered the ethical, social, economic, political or strategic consequences of this technology, which is progressing rapidly and on a massive scale. Before continuing at such breakneck speed, they argue that we should take a moment to reflect and establish a set of rules to govern its development.
Of course, it might be suggested that some of the signatories — Elon Musk comes to mind — are more afraid of losing the AI innovation race. Musk was one of the first financiers of OpenAI, the company behind ChatGPT, but he withdrew from the project, leaving the field to Microsoft, which is now the major beneficiary of ChatGPT's success. Still, the proposal raises legitimate concerns.
Is it possible to take a "break" in AI development? It seems unlikely, first because of the intense competition among Silicon Valley giants, and then because of rivalries around the world. In the current climate, could the U.S. and China agree on rules for AI while Washington wages a technological war against Beijing?
Still, the questions raised are worth discussing. First of all, anyone who has used ChatGPT understands that, despite its errors and failures, the software has the potential to replace some human tasks. What will happen to the millions of human professionals that AI could replace?
Another concern is the potential for AI to generate disinformation. The images created by AI, such as the recent one of the Pope wearing a puffy down jacket, are remarkably realistic and hard to differentiate from reality.
Then there are military matters: the danger of autonomous weapons, where AI could make decisions about who and what to target, and when to pull the trigger. The U.S. government says its military will not develop such weapons unless its enemies do.

Thus there are plenty of good reasons for a temporary "pause" in artificial intelligence research. Yet it remains to be seen whether human intelligence is hard-wired to make such a reasonable choice.
- AI As God? How Artificial Intelligence Could Spark Religious Devotion ›
- Why 'Artificial Intelligence' Needs A Smarter Name ›
- What Happens When A Ukrainian Asks ChatGPT About Crimea ›
- China's Dilemma In Race For AI Dominance: Speed v. Control ›
- The AI Capitalists Don't Realize They're About To Kill Capitalism ›