
Pausing AI Research: Are Humans Intelligent Enough To Do The Right Thing?

Everyone from Elon Musk to Apple co-founder Steve Wozniak to top artificial intelligence researchers has signed a public petition calling for a six-month moratorium on AI research. The ultimate decision will be left in the hands of humans, who are smart, but also vain and greedy.


Israeli-born author Yuval Noah Harari is one of the leading voices urging caution about AI development

Pierre Haski


PARIS — A request for a six-month moratorium on artificial intelligence research, shared Wednesday by the Future of Life Institute, garnered over 1,000 signatures within hours from leading engineers and entrepreneurs in American technology. Notable signatories include Elon Musk, the head of Tesla and SpaceX; Steve Wozniak, the co-founder of Apple; and the visionary author Yuval Noah Harari.

Their request is simple: they're calling for a six-month moratorium on any new research into AI tools that goes beyond what has already been accomplished by conversational software such as GPT-4, which has attracted significant attention.

The signatories have expressed longstanding concerns about the rapid progress of AI, which has been highlighted by the emergence of ChatGPT. They are calling for a pause in the race to create "powerful digital minds," which even their creators cannot fully comprehend or control.

The release of ChatGPT at the end of 2022, followed by the GPT-4 upgrade just a few days ago, marked a significant leap forward in the practical applications of AI. Previously, AI was a subject for experts — it has now entered the mainstream.

These experts caution that we have not yet mastered the ethical, social, economic, political or strategic consequences of this technology, which is progressing rapidly and on a massive scale. Before continuing at such breakneck speed, they argue that we should take a moment to reflect and establish a set of rules to govern its development.

Puffed-up Pope

Of course, it might be suggested that some of the signatories — Elon Musk comes to mind — are more afraid of losing the AI innovation race. Musk was one of the first financiers of OpenAI, the company behind ChatGPT, but he withdrew from the project, leaving Microsoft, now OpenAI's biggest backer, as the major beneficiary of ChatGPT's success. But the proposal still raises legitimate concerns.

Is it possible to take a "break" in AI development? It seems unlikely — first because of the intense competition among Silicon Valley giants, and then because of rivalries around the world. In the current climate, could the U.S. and China agree on rules for AI while Washington wages a technological war against Beijing?


Still, the questions raised are worth discussing. First of all, anyone who has used ChatGPT understands that, despite its errors and failures, the software has the potential to replace some human tasks. What will happen to the millions of human professionals that AI could replace?

Another concern is the potential for AI to generate disinformation. The images created by AI, such as the recent one of the Pope wearing a puffy down jacket, are remarkably realistic and hard to differentiate from reality.

When it comes to military matters, there is also the danger of autonomous weapons, where AI could make decisions about who and what to target, and when to pull the trigger. The U.S. government says its military will not develop such weapons — unless its enemies do.

Thus there are plenty of good reasons for a temporary "pause" in artificial intelligence research. Yet it remains to be seen whether human intelligence is hard-wired to make such a reasonable choice.

AI And War: Inside The Pentagon's $1.8 Billion Bet On Artificial Intelligence

Putting the latest AI breakthroughs at the service of national security raises major practical and ethical questions for the Pentagon.

Drone on the tarmac during a military exercise near Vícenice, in the Czech Republic

Sarah Scoles

Number 4 Hamilton Place is a be-columned building in central London, home to the Royal Aeronautical Society and four floors of event space. In May, the early 20th-century Edwardian townhouse hosted a decidedly more modern meeting: Defense officials, contractors, and academics from around the world gathered to discuss the future of military air and space technology.

Things soon went awry. At that conference, Tucker Hamilton, chief of AI test and operations for the United States Air Force, seemed to describe a disturbing simulation in which an AI-enabled drone had been tasked with taking down missile sites. But when a human operator started interfering with that objective, he said, the drone killed its operator and cut the communications system.
