
TOPIC: artificial intelligence


Hey ChatGPT, Are You A Google Killer? That's The Wrong Prompt, People

Reports that the new AI natural-language chatbot is a threat to Google's search business fail to see that the two machines serve very different functions.

Since OpenAI unveiled ChatGPT to the world last November, people have wasted little time finding imaginative uses for the eerily human-like chatbot. They have used it to generate code, create Dungeons & Dragons adventures and converse on a seemingly infinite array of topics.

Now some in Silicon Valley are speculating that the masses might come to adopt ChatGPT-style bots as an alternative to traditional internet searches.

Microsoft, which made an early $1 billion investment in OpenAI, plans to release an implementation of its Bing search engine that incorporates ChatGPT before the end of March. According to a recent article in The New York Times, Google has declared “code red” over fears ChatGPT could pose a significant threat to its $149-billion-a-year search business.

Could ChatGPT really be on the verge of disrupting the global search engine industry?


The Laugh Frontier: Can AI Understand Irony?

Bot did you get it?

What was your first reaction when you heard about Blake Lemoine, the Google engineer who announced last month the AI program he was working on had developed consciousness?

If, like me, you’re instinctively suspicious, it might have been something like: Is this guy serious? Does he honestly believe what he is saying? Or is this an elaborate hoax?

Put the answers to those questions to one side. Focus instead on the questions themselves. Is it not true that even to ask them is to presuppose something crucial about Blake Lemoine: specifically, that he is conscious?


Robot Artists And Us: Who Decides The Aesthetics Of AI?

Ai-Da is touted as the first bona fide robot artist. But should we consider her paintings and poetry original or creative? Is this even art at all?

Ai-Da sits behind a desk, paintbrush in hand. She looks up at the person posing for her, and then back down as she dabs another blob of paint onto the canvas. A lifelike portrait is taking shape. If you didn’t know a robot produced it, this portrait could pass as the work of a human artist.

Ai-Da is touted as the “first robot to paint like an artist”, and an exhibition of her work called Leaping into the Metaverse opened at the Venice Biennale.


In Call With Erdogan, Putin States His Demands For Ending War

👋 Rimaykullayki!*

Welcome to Friday, where Russian bombs hit the western Ukrainian city of Lviv. Rescue operations are still underway for hundreds of survivors in the rubble of the Mariupol theater; meanwhile, Putin details his demands for the end of the war in a phone call with Turkey’s Erdogan. From Argentina, we look at the promising AI-driven health tech being developed and what it could mean for the treatment of several illnesses such as diabetes, cancer or COVID.

[*Quechua - South America]

Lautaro García Alonso

Data, Selfies, Prevention: How AI Is Transforming Healthcare

From testing for COVID through WhatsApp to taking selfies to check heart risks, AI programs are being used in Argentina to complement early-stage diagnoses. The technologies are in their early stages but are able to detect what the human eye might miss.

BUENOS AIRES — The World Health Organization (WHO) estimates that every year 138 million patients suffer from medical misdiagnoses that prove fatal in 2.6 million cases. In the United States, medical errors relating to misuse of pharmaceutical products or misdiagnosis were the third leading cause of death in 2015.

All this proves that medicine is not infallible, and even specialists can go wrong. The daily performance of all doctors is subject to factors like stress, overwork or exhaustion (they sometimes work 24 hours straight). In this context, technological advances of recent years may bring some good news. Artificial Intelligence (AI) has brought innovations that boost diagnosis and even detect conditions invisible to the naked eye.

Anna Akage

The Stakes Of A Ukrainian-Russian Drone Arms Race

A recent unmanned attack could heighten tensions in the conflict zone and have broader geopolitical consequences.

Last week Vladimir Putin complained that even without accepting Kyiv into its ranks, NATO could place missiles in Ukraine near Russia's borders. Russian media was quick to help prove Putin's point, writing about Washington's current military aid to Kyiv, Ukraine's talks with London on obtaining British Brimstone missiles and Turkish drones in Donbas, which has been a disputed site of conflict since 2014.

Just days later, the Ukrainian military for the first time used the Turkish Bayraktar TB-2 drone in Donbas. The incident Tuesday could seriously change the situation in the conflict zone and have consequences for both Russian-Ukrainian and Russian-Turkish relations.

David Larousserie and Alexandre Piquard

AI, Translation And The Holy Grail Of "Natural Language"

Important digital innovations have been put into practice in the areas of translation, subtitling and text-to-image.

PARIS — When asked about advances in language management through artificial intelligence, Douglas Eck suggests pressing the "subtitle" button on Meet, the video conferencing service used for the interview because of the COVID-19 pandemic. The words of this American engineer, who came to Paris to work at Google's French headquarters, are then displayed in writing, live and without error, beneath the window where we see him, headset on. This innovation, unthinkable until recently, is also available for most videos on YouTube, the Google subsidiary, and on the dictaphone of its latest phones, which offers to automatically transcribe all audio recordings.

These new possibilities are just one example of the progress made in recent years in natural language processing by digital companies, especially giants such as Google, Apple, Facebook and Amazon (GAFA). Some of these innovations are already being put into practice. Others are in the research stage, showcased at annual developer conferences, such as Google I/O (which took place May 18-20) and Facebook F8 (June 2).

Gabriela Samela

VR For HR: Virtual Reality As A Tangible Tool For Human Resources

Latin American firms are joining others around the world testing Virtual and Augmented Reality solutions in personnel recruitment and training.

BUENOS AIRES — The image of someone wearing a virtual reality (VR) headset immediately makes you think they're playing games. Yet immersive simulation is now being used to recreate a work environment where present or future employees can learn, practice and train for work.

While simulation technology is used more frequently for operations or the security sector, in Argentina some firms are using it to manage human resources: in selection processes and in staff inductions and training.

Charles Cuvelliez

How We Build Human Bias Into Artificial Intelligence


PARIS — When Amazon realized that its AI recruiting tool favored men, the company quickly shelved it. Back in 2016, a chatbot released by Microsoft turned into a sex-obsessed neo-Nazi machine in only 24 hours. These incidents, along with others, played right into the hands of all those who say there is too much AI in our daily lives.

But some researchers are looking at things from a different perspective: if AI makes such mistakes, it's because it has been taught that way. That means we can also teach it to avoid such mistakes. How? By tracking down all the biases contained in the data the AI is fed when it learns.

Said data is simply the product of our own biases, which have been at play for a long time. They are easy to find: consider how often "woman" and "nurse" appear together in the same text, compared to how rarely "woman" appears near "doctor." It's easy to imagine that computer scientists will often be referred to in masculine terms or appear close to the word "man." But there is more: Does anybody know, for example, that the error rate for facial recognition can reach 35% for women with black skin, compared to 0.8% for men with fair skin?
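One crude way to surface the kind of skewed associations described above is simply to count how often word pairs co-occur in the same sentence of a corpus. The sketch below is purely illustrative; the toy corpus and function names are invented for this example:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(sentences):
    """Count how often each unordered word pair appears in the same sentence."""
    counts = Counter()
    for sentence in sentences:
        words = set(sentence.lower().split())
        for pair in combinations(sorted(words), 2):
            counts[pair] += 1
    return counts

# Tiny hand-made corpus mimicking the skew the article describes.
corpus = [
    "the nurse said the woman could go home",
    "a woman works as a nurse at the clinic",
    "the doctor said the man could go home",
]

counts = cooccurrence_counts(corpus)
print(counts[("nurse", "woman")])   # → 2
print(counts[("doctor", "woman")])  # → 0
```

Real bias audits use far larger corpora and statistical association measures (such as pointwise mutual information) rather than raw counts, but the principle is the same: the associations the model learns are the ones present in its training text.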

We are the ones who teach the machines to be biased.

All this is due to the body of initial data available to train AI algorithms. The data available is AI's Achilles heel. For example, social networks provide an abundant and cheap source of data. But the presence of fake news, hate speech and general contempt towards minorities and women that can be found there doesn't bode well for AIs.

An experiment conducted on Twitter by a researcher at Swinburne University revealed that negative feelings were more often expressed toward female leaders than toward male leaders.

Here's another experiment we can all try. Just enter the keyword "president" or "prime minister" on Google Images: men make up roughly 95% of the results. But it's not Google's fault.

Biases can also be found elsewhere: Has anybody ever wondered why voice assistants, or call-center contacts when they are handled by robots, have reassuring female voices?

It's true, studies show that men and women prefer a female voice to speak to them. It's more reassuring. It's maternal. When you look at it closely, this preference becomes more refined, though not in the right direction: We prefer a male voice to talk to us about computers or cars, and a female voice for all things interpersonal.

AI learns its racial bias from its creators — Photo: Abyssus

Recently, manufacturers of intelligent automated personal assistants and connected speakers have adapted their algorithms to show less patience with the rude and harassing behavior of users who sometimes vent their frustrations against machines. The thinking is to prevent the disinhibition people develop with machines from carrying over into harassment of women on the street.

Amazon has reprogrammed Alexa to answer questions of an explicitly sexual nature in a curt fashion. Google Home, meanwhile, has introduced the "Pretty Please" function, which adapts to the kind or unkind tone with which a user addresses it.

But Google Home has neither conscience nor personality: It doesn't actually care whether it's talked to politely or not. Are you, at the end of the day, polite with your washing machine? Probably not always, especially when it's broken. But rudeness could affect the machine's learning process, and a user would hardly appreciate a smart speaker that only spoke to him harshly.

Apple also offers its own Siri assistant in several versions: female or male voice, with different English accents. The default voice remains female, though we note that it can be male by default for Arabic, French, Dutch and English (one wonders why).

Artificial intelligence has no bias like us.

It's not enough to talk with different accents. Personal assistants and smart speakers need to understand everyone.

To do this, the companies that design them rely on a corpus of audio clips, speeches and more. It's easy to imagine that some groups in society are under-represented: low-income and rural populations, and lower social classes that use the internet less. Obviously, you're not going to find them in the corpus.

One of these corpora, the Fisher corpus, contains speech by people whose mother tongue is not English, but they are clearly under-represented. More amusingly, perhaps, Spanish and Indian accents are slightly better represented than the various accents within Great Britain.
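Checking for this kind of under-representation is straightforward once a corpus carries speaker metadata: compute each group's share of the whole and compare it against the population. A minimal sketch, with invented speaker labels standing in for real corpus metadata:

```python
from collections import Counter

def representation_shares(speaker_labels):
    """Return each group's share of the corpus as a fraction of all clips."""
    counts = Counter(speaker_labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical per-clip speaker metadata.
labels = ["native", "native", "native", "native",
          "non_native", "indian_accent", "spanish_accent", "scottish_accent"]

shares = representation_shares(labels)
print(shares["native"])      # → 0.5
print(shares["non_native"])  # → 0.125
```

A share well below a group's share of the target user population is the signal that the trained model will likely perform worse for that group.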

Artificial intelligence has no bias like us. We are the ones who teach the machines to be biased. The World Economic Forum believes that it will take until the next century to achieve true gender equality. Chances are that with AI, we might have to wait even longer.

Frank Niedercorn

AI In Healthcare: New Battleground For Big Tech And Startups

PARIS — What if the GAFA quartet (Google, Amazon, Facebook, Apple) also became giants in the healthcare sector? Google first tried to put on the white coat more than a decade ago. After "Google Health," its online medical record project abandoned in 2012, the company made a strong comeback with its subsidiary DeepMind Health, doing what it does best: collecting and processing data. In this case, the data was that of patients in hospitals, particularly in the United Kingdom.

However, things were more difficult than expected with the Royal Free Hospital Trust, which allowed the company to access the medical history of 1.6 million patients. The agreement was terminated when UK authorities found that the data had not been kept anonymous, and had been used in a broader context than originally planned.

Alidad Vassigh

Why AI Is Now The Key Ingredient For Modern Productivity


CARACAS — Artificial Intelligence is opening up new opportunities for the economy and society. But it will also affect millions of human jobs, and thus poses a huge challenge for public policymakers, warns a 2016 White House report on AI's projected impact on the U.S. economy.

Evgeny Morozov

Europe, Trapped Between U.S. Protectionism And Chinese Ambition


MUNICH — As far as industrial strategies and international competition are concerned, there's no greater contrast than that between Europe's resignation and China's iron determination. It's not surprising that it was China — and not Europe — that proposed to form an alliance against Trump's raving protectionist madness. With little success: Even Washington's harassment cannot pull European politicians out of their slumber — or, more likely, their afternoon nap.
