AI Can't Think Like Us, But Is Forcing Us To Reset How We Think
GPT-4 and other artificial intelligence systems can pass complicated exams, but that says more about how we design tests than about machine intelligence. Artificial intelligence shouldn't drive us to despair — instead, it should spur us to rethink our learning and education systems.
PARIS — Everyone is panicking over the artificial intelligence chatbot GPT-4 passing the New York Bar exam. The real concern, however, should be the quality of the exam itself. If the challenge is merely to articulate an answer to a question from a body of knowledge to be memorized, then the machine is superior to the human mind — that's nothing new.
But if the app is asked to solve a legal problem regarding a complex concept — what makes things right or wrong, for instance — the machine remains far behind what a human brain is capable of.
If you ask GPT-4 "What is good?", the machine naturally produces a list of elements linked to the notion of "good," as it has been defined throughout the history of philosophy.
What is good?
The chatbot's answer, however, is cautious to a fault. On the one hand, it piles up an exhaustive list of adjectives and nouns to try to cover what "good" is; on the other, it hedges with a long list of caveats (beliefs, traditions, subjectivity, etc.).
In other words, it stays neutral: it accumulates words, without any conviction.
This artificial "intelligence" cannot take a position on what is good; it can only describe it. Nor can it form a conviction about what we should expect of one another. So GPT-4 will not tell us whether the retirement age should be 64 or 62. At best, the machine can produce a pragmatic answer, which will not necessarily be the right one.
No pleasure or joy
For Spinoza, the good is "every kind of joy, everything that fulfills our expectation." For Locke, it is "everything that creates pleasure in us." Not only is GPT-4 unable to formulate such answers, which combine perspective, volition and quality, it is even less able to understand their meaning.
The chatbot can only string together words associated with virtue, goodness, honesty and generosity, without any assessment of its own. Weighing each of these terms is left to the reader's judgment, because the machine cannot make that judgment itself.
To put it another way: isn't GPT-4 just a "super-dictionary"? Aren't novels written with GPT-4 just collections of words, without creativity, imagination or added value? Just as clothes don't make the man, stringing words together doesn't make a novel. There may be a narrative, or descriptions, but there will be no emotion or conviction.
The whole point of the debate around this program is that it exposes our limits: our intellectual laziness, the comfort of taking refuge behind texts and rote learning, without conviction.
What about imagination?
The only added value of the human species is imagination. Ours is the only species able to choose, translate, anticipate, remember, manipulate and interpret situations, stakes, novelties and emotions.
It is the continuous, deep cultivation of these qualities that we currently lack, in part because we are unable to evaluate them in any easy, quick or obvious way.
But GPT-4 confronts us with our contradictions: calculating quickly, choosing ease, not thinking… all of it accessible to anyone, even a machine.
It is up to us to overhaul the knowledge, learning and education systems so that they emphasize our real added value — otherwise, we will keep watching technological developments from the couch, contemplating our quiet despair.