A robot confirming that it is not a robot. AI-generated/Worldcrunch

BERLIN — In any given week, I have at least one existential crisis. It usually hits at work, sometimes in the evening, always in front of my laptop screen. I stare at a grid of 16 squares, each containing fragments of a street scene. I’m tasked with picking out bicycles, traffic lights or crosswalks. Sometimes, I’m squinting at warped, barely legible words, trying to decipher them. If I’m lucky, all I have to do is check a simple box: I’m not a robot.


Lately, these tests seem to pop up at the worst possible moments. When I was desperately trying to buy nearly sold-out concert tickets. Or while frantically booking a train ticket via the DB German railway app as my last high-speed train to Berlin pulled into the station. Whether I made it home or ended up spending the night at Hamburg Central depended on proving, right then and there, that I was human. No pressure.

Do two or three pixels of a tire creeping into the next square mean I have to click it? Does the pole count as part of the traffic light? And why on earth do the number zero and the letter O look so maddeningly similar?

In real life, no one expects me to constantly prove my humanity — it’s just taken for granted. But as more of our lives unfold online, and as proving one’s humanness boils down to a few clicks, how much does that really separate us from machines? Wouldn’t a smart enough robot pass these tests too?

Keeping the bots at bay

The tests are called CAPTCHAs: Completely Automated Public Turing tests to tell Computers and Humans Apart. They have been around for more than 20 years. CAPTCHAs stop bots from buying up hundreds of Taylor Swift tickets that then end up on eBay for twice the price. Or from setting up an armada of email accounts that send love letters from Saudi princes out into the world. You know the ones I am talking about, the ones that go: “I want to share my inexhaustible wealth with you, will you give me your IBAN?”

Shouldn’t machines be making our lives easier instead of creating extra work for us?

According to IT service provider Cloudflare, internet users collectively spend about 500 years solving CAPTCHAs every single day, all in an effort to keep bots at bay. But shouldn’t machines be making our lives easier instead of creating extra work for us? That’s how it all started, back in the 18th century, when the steam engine replaced our muscle power, the same muscle power we now diligently build up in gym classes with names like “Body Pump,” as if we still had to roll steel for a living.

Or 100 years later, when electricity was harnessed to power machines, which in turn made the invention of the vacuum cleaner and the washing machine possible. In the 1940s, machines began to calculate and the first computers were built. Ever since, we have been trying to get them to think the way we do, and those advances are now happening at an ever faster pace.

The effort relies on artificial neural networks, fed with books and images, with data, vast amounts of it, and it has met with remarkable success. Today, there are recommendation algorithms that tell us which series we should watch and which books we should read, and generative AI that can produce both. Artificial intelligence can now do things that I cannot: I cannot predict how a chain of amino acids will fold, or who would be knocked out in the preliminary round of a chess tournament.

Why is it still difficult for them to distinguish between traffic lights and lamp posts?

A CAPTCHA, designed to filter out programs and let only humans pass. – Brogue Lessor Jig/Wikimedia Commons

Unpaid work

We decided to ask Norbert Pohlmann, professor of cyber security at the Westphalian University of Applied Sciences in Gelsenkirchen, who, as he admits up front, often needs two attempts to solve a CAPTCHA himself.

“The better artificial intelligence became, the more complicated the CAPTCHAs became,” says Pohlmann. “It’s a game of cat and mouse.” For a long time, computers only recognized what conformed to their rules. A bicycle has two wheels. A bicycle has handlebars. A bicycle has two pedals. “They lacked the ability to abstract, to recognize things when they look different than usual,” says Pohlmann. A bicycle with a cargo basket full of children up front. Or one rusting away on a street corner, locked up but missing its rear tire.

The CAPTCHAs have been repurposed and we have taught the machines to see.

“Over time, the data sets have become larger and the models and algorithms have become better and better,” says Pohlmann. “Artificial intelligence has seen almost everything today and can evaluate it better.” Bicycles from below, behind and from the side, racing bikes, tricycles, cargo bikes.

Who taught them? That’s right: you and me, the users.

In 2009, Google bought the service reCAPTCHA, the largest provider of these internet barriers. Since then, we have been doing unpaid click work for them. The pixelated words that we had to type out? They often came from old books that computers can’t read well. We digitized them for Google, CAPTCHA after CAPTCHA. This is how the online library Google Books was created. And with every traffic light, every bicycle, every crosswalk that we clicked on, somewhere a self-driving car learned when it was better not to accelerate. In that sense, we also benefit in the end.

The CAPTCHAs have been repurposed and we have taught the machines to see. Google assumes our answers are correct because we want to pass the test. And if hundreds of people give the same answer, can they be wrong?
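That crowd-consensus logic is simple enough to sketch. The function below is a hypothetical illustration, not Google’s actual code: it accepts a label for an image tile only once enough users agree on it.

```python
from collections import Counter

def aggregate_label(answers, threshold=0.7):
    """Hypothetical sketch: pick the majority answer for one image tile,
    but only if enough users agree; otherwise keep collecting clicks."""
    counts = Counter(answers)
    label, votes = counts.most_common(1)[0]
    if votes / len(answers) >= threshold:
        return label  # confident crowd consensus
    return None       # still ambiguous, show the tile to more humans

# Ten users classify the same tile:
clicks = ["traffic light"] * 8 + ["lamp post"] * 2
print(aggregate_label(clicks))  # -> traffic light
```

With a 50/50 split, the function returns `None` and the tile simply goes back into rotation, which is one plausible reason the same blurry traffic light keeps reappearing.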

Human inefficiency

Being human — and this is perhaps the first realization — means being in broad agreement about what you see in front of you. Having a common factual basis. And then you find yourself arguing about climate change at Christmas dinner or on the internet, and you have to worry twice as much about our species.

Especially since headlines like these are now appearing: “IT security: AI reliably solves captchas,” in Spektrum der Wissenschaft, and “Bots crack ‘Are you a robot’ tests better than humans, study shows,” in The Independent. Our answers to the tests that were supposed to keep the machines away are exactly what enabled them to solve those tests. Today, artificial intelligence is better at them than we are.

What now?

In 2014, Google started adapting CAPTCHAs to the new circumstances, identifying people not only by their strengths (great at recognizing crosswalks) but also by their weaknesses (incredibly slow). Some CAPTCHAs today are insultingly simple; they consist of nothing but a formal confirmation. I’m not a robot, check. But it’s not about clicking the box. How you do it is what matters.

“Bots react suspiciously quickly to tasks like this and always in the same way,” says Pohlmann. They take the shortest route, while we’re still jerking the mouse or swiping the trackpad to find the cursor on our screen. Pohlmann says you can recognize people by their individual mouse movements. Which is a diplomatic way of saying: by the degree of our inefficiency.
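One way such a check might work, purely as an illustration (the function name and threshold are invented here, not Pohlmann’s or Google’s), is to measure how much a cursor path wanders compared with the straight line a bot would take:

```python
import math

def looks_human(points, min_jitter=1.05):
    """Hypothetical heuristic: compare the actual path length of the
    cursor with the straight-line distance from start to end. A bot
    that teleports or moves in a perfect line scores a ratio near 1.0;
    human hands wander, overshoot and correct."""
    (x0, y0), (xn, yn) = points[0], points[-1]
    direct = math.hypot(xn - x0, yn - y0)
    travelled = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(points, points[1:])
    )
    if direct == 0:
        return False  # no movement at all: suspicious
    return travelled / direct >= min_jitter

# A bot jumping straight to the checkbox:
print(looks_human([(0, 0), (100, 100)]))  # False
# A hand drifting toward it in a wobbly arc:
print(looks_human([(0, 0), (30, 5), (55, 48), (80, 90), (100, 100)]))  # True
```

Real systems look at far more signals (timing, acceleration, browser history), but the principle is the same: our inefficiency is the fingerprint.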

Blurred lines

But this method too is now outdated. Bots can now be programmed to move around the internet like people, a little shaky, a little lost, sometimes slower, sometimes faster.

The machines have learned to hesitate. Or at least to pretend to.

You can now buy software on the Internet that solves all kinds of CAPTCHAs, thousands of them, for as little as one dollar.

Is it possible that you are… how should I put it… a robot?

The short film I Am Not a Robot by Dutch director Victoria Warmerdam won the Oscar for Best Live Action Short at this year’s Academy Awards. The main character Lara, an employee at a music production company, fails a CAPTCHA twice, three times, until she gets annoyed and calls the IT department and turns it into a matter of principle. “I know that’s a very personal question,” says her colleague, “but is it possible that you are… how should I put it… a robot? You wouldn’t be the first to find out.” Warmerdam’s film is a black comedy about identity and self-determination in a world in which the line between man and machine is slowly blurring.

How else do you prove that you are human?

A robot standing near luggage bags in Japan. – Lukas

Proving we are human

There is my body, which just carried me to the office, and with which I am now sitting here typing this article. But how long until this is another obsolete way to recognize a human? Optimus is 173 centimeters tall and weighs 57 kilos: it is a humanoid all-purpose robot that Tesla is currently working on. Kim Kardashian recently played rock-paper-scissors against Optimus and won; she posted a video of it on Instagram. There is no official connection, but Tesla CEO Elon Musk has just advertised a dozen more engineering positions to improve Optimus. The humanoids are expected to roll off the factory assembly line next year.

It’s probably only a matter of time before we’re standing in each other’s way in front of the freezer section at the supermarket, Optimus and me. And since it’s a Musk product, it’ll probably be ranting to itself that only the far-right Alternative for Germany (AfD) can save us.

Who can say for sure that the colleague who works fully remotely really exists?

Also, how often do we meet in three dimensions? I have friends who I speak to often but rarely see. Who can say for sure that they are really talking to me, and not to my avatar, trained on our chat histories and given my voice for efficiency reasons? That the colleague who works fully remotely really exists?

Well, one could argue that there is a big difference between us and machines: We have consciousness. Unfortunately, science is very divided about what consciousness actually is, in which part of the brain it is located and whether it even needs a body to develop. But on one thing, says Johannes Kleiner, he and his colleagues agree: Artificial intelligence, as things stand today, is not conscious, cannot believe, cannot love, cannot have hopes or fears.

Kleiner is a physicist and consciousness researcher at the Center for Mathematical Philosophy at Ludwig Maximilian University of Munich. “Imagine the code and the computer chips on which the programs run as prisons,” he says. They determine the scope of action. Most of the time, this is limited to recognizing patterns in our data. And using that to derive the most likely answer to every question. It’s stochastics, the art of guessing, and not consciousness. For now.
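Kleiner’s “art of guessing” can be shown with a toy example, a deliberately crude sketch rather than how real language models work: count which word follows which in a text, then always “answer” with the most frequent follower.

```python
from collections import Counter

# Tiny corpus; the "model" just counts which word follows which.
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = Counter(zip(corpus, corpus[1:]))

def most_likely_next(word):
    """Return the statistically most frequent follower of `word`
    in the corpus -- a guess, not an act of understanding."""
    candidates = {b: n for (a, b), n in bigrams.items() if a == word}
    return max(candidates, key=candidates.get) if candidates else None

print(most_likely_next("the"))  # "cat" follows "the" most often here
```

The program has no idea what a cat is; it has only seen that the pattern occurs. Scale the corpus up by a few billion words and the guesses become eerily good, but they remain guesses.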