Credit: ArrN Capture via Unsplash

-Analysis-

NEW DELHI — Professor Bent Flyvbjerg’s article, “AI as Artificial Ignorance,” is a reality check that dampens both the excitement and the fears about the capabilities of artificial intelligence.

Specifically, Flyvbjerg points out that LLMs (Large Language Models), which power wonder tools like ChatGPT, are text generators that repeat sequences from their “training data,” and cannot reason things out or generate anything new.


And soon after, retired Brigadier Amit Kathpalia, a professor, engineer and project-management trainer, warned against excessive reliance on the answers that LLMs provide. His telling example is a question about contract management put to ChatGPT, to which it gave a patently incorrect answer. When challenged with reference to the courts’ judgements to the contrary, ChatGPT corrected itself, but made excuses, just as a human advocate might when caught on the wrong foot.

In a more recent post, Kathpalia is somewhat more direct. He quotes Flyvbjerg (“A fool with a tool is still a fool”) and continues: “AI has the potential to make the fool using this tool into an even bigger (and maybe dangerous) fool.”

New fluency

At the same time, a recent article in a leading city newspaper stated that it is “AI literacy” that India needs today. While it was the rise in conventional literacy, from 12% in 1948 to over 75% today, that “fueled economic mobility, global competitiveness and innovation,” the article says, the current “AI era demands a new kind of fluency — AI literacy.” 

What follows is a short review of what AI consists of, how it has already invaded the worlds of business and manufacturing, services, research, even entertainment, and then of what its limitations may be.

Artificial intelligence could be understood as the way computer systems attain very high capabilities, greater, in fact, than explicit programming makes possible, not through programmed instructions but through interaction with data, in a manner similar to how living things master complex skills.

There are situations where the animal brain does much better.

In simple applications, a set of known data is analysed to find a mathematical formula that fits its distribution. The formula is then tested on more known examples and refined, so that it makes correct predictions with unknown data too. The technique can then be used to devise marketing strategies, forecast weather and automate clinical diagnoses. While powerful computers were able to process huge volumes of data without taking too long over it, there are situations, image recognition is one, where the animal brain does much better. The animal brain does not analyse data linearly, like the conventional computer, but “trains itself” to pick out significant data elements and tune its responses on the basis of how good its predictions are.
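A minimal sketch of this fit-then-test routine, in Python, may make the idea concrete. The advertising hours and sales figures below are invented purely for illustration:

```python
# A minimal sketch of the fit-then-test idea described above.
# The data here is hypothetical, chosen only to illustrate the method.
import numpy as np

# "Known data": hours of advertising vs. units sold (invented numbers)
hours = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
sales = np.array([12, 19, 31, 42, 48, 61, 70, 79], dtype=float)

# Hold back some examples to test the fitted formula on
train_x, test_x = hours[:6], hours[6:]
train_y, test_y = sales[:6], sales[6:]

# Find a straight-line formula (y = a*x + b) that fits the training data
a, b = np.polyfit(train_x, train_y, deg=1)

# Check the formula against the held-back examples
predictions = a * test_x + b
print("coefficients:", a, b)
print("predicted:", predictions, "actual:", test_y)
```

If the predictions on the held-back examples are close to the actual figures, the formula can be trusted, with caution, on data it has never seen.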

Computer programmes were hence written to simulate the animal brain, in the form of “neural networks,” or software components that behave like nerve cells. In a task of telling pictures of dogs apart from pictures of cats, for example, the computer is shown a collection of photographs of dogs and cats, where each picture is marked as a dog or a cat. The computer typically breaks each picture into smaller frames, extracts features, like the brightness and colors of the frames, and works out an indicator based on a set of multipliers applied to the values of the different features.

Every time the indicator differs from the label, the network adjusts the multipliers, a technique called “back-propagation,” to bring the finding closer to the label. After many rounds of iterations, and trials with even millions of pictures, the system gets pretty good at identifying dogs and cats. An extension is to consider a set of feature values from pictures of people, to identify persons.
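The training loop just described can be shown with a single artificial “neuron” in Python. The picture features, labels and learning rate below are all hypothetical, and real networks stack thousands of such units, but the adjust-the-multipliers step is the same in spirit:

```python
# A toy version of the training loop described above: one artificial
# "neuron" combines two invented picture features through a set of
# multipliers (weights), and nudges the multipliers whenever its
# indicator disagrees with the label.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features: [brightness, ear_pointiness]; label 1 = cat, 0 = dog
X = np.array([[0.9, 0.80], [0.8, 0.90], [0.7, 0.85],   # cats
              [0.4, 0.20], [0.3, 0.10], [0.5, 0.25]])  # dogs
y = np.array([1, 1, 1, 0, 0, 0], dtype=float)

weights = rng.normal(size=2)   # the "multipliers"
bias = 0.0
lr = 0.5                       # how strongly each error adjusts the weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(1000):
    indicator = sigmoid(X @ weights + bias)  # the network's current guess
    error = indicator - y                    # difference from the labels
    # Gradient step: the essence of back-propagation for a single layer
    weights -= lr * (X.T @ error) / len(y)
    bias -= lr * error.mean()

print("final guesses:", np.round(sigmoid(X @ weights + bias), 2))
```

After enough rounds, the guesses settle close to 1 for the cats and 0 for the dogs, which is exactly what “getting pretty good at identifying” means in practice.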

Children play with a robot dog toy at the 5th China International Consumer Products Expo in Haikou, south China’s Hainan Province, on April 13, 2025. Photo: Pu Xiaoxu/Xinhua via ZUMA

LLM does not judge

The LLM is a development of this idea: it predicts not the label of an image, but the word or words that would likely follow a string of words, or arise from a prompt, which could be a word, a question or a request. “Training” of such an arrangement would typically be with text passages that are associated with key words appearing in the text. And given the key words, the model would learn to generate answers, reports, reviews, even poems or stories, following, if necessary, the style of a specified author.
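A deliberately tiny word-pair model in Python shows the generating principle, though a real LLM uses a deep neural network over vast text rather than simple counts. The training sentence here is made up:

```python
# A deliberately tiny "language model": it learns only which word tends
# to follow which, from a toy training text, and then generates
# continuations by repeating those learned sequences.
import random
from collections import defaultdict

training_text = (
    "the model reads the text and the model predicts the next word "
    "and the next word follows the text"
)

# Count, for each word, which words followed it in the training data
followers = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def generate(prompt_word, length=8, seed=42):
    random.seed(seed)
    out = [prompt_word]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:  # a word never seen: the model has nothing to say
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

The output reads fluently enough, yet the model plainly understands nothing; it can only recombine sequences it has already seen, which is Flyvbjerg’s point in miniature.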

We can now readily understand what Flyvbjerg and Kathpalia have said. The LLM does not “judge,” it does not “understand.” If an AI system is asked to draw a portrait after the style of Rembrandt, it may create something that could pass for an original. But when it answers a question, it may widen its net to seek answers from leading search engines, yet it could as well choose incorrectly from contradictory sources, or follow a mistaken report that appeared in its training data.

AI’s capabilities

So much for the limitations of LLMs; what about the capabilities of AI? The concept really started early in the last century, when efficiency experts became active in industry. The mathematical principles were already there; they were now applied to optimise the use of resources and maximise the results. The problem, essentially, was to consider the inter-relations of several features and navigate among them, an effort that entailed increasingly complex computation.
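The kind of resource-allocation problem those early experts tackled, and that computers later made routine, can be stated in a few lines of Python. The products, profits and resource limits below are invented:

```python
# A minimal example of the resource-allocation problem described above,
# solved with SciPy's linear programming routine. All numbers are
# hypothetical, chosen only to illustrate the technique.
from scipy.optimize import linprog

# Maximise profit 3*x1 + 5*x2 (linprog minimises, so negate the profits)
profits = [-3, -5]

# Two shared resources constrain how much of each product can be made:
#   machine hours: 1*x1 + 2*x2 <= 14
#   raw material:  3*x1 + 1*x2 <= 18
A_ub = [[1, 2], [3, 1]]
b_ub = [14, 18]

result = linprog(profits, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print("best production mix:", result.x, "profit:", -result.fun)
```

With two products the answer can be found by hand; with hundreds of inter-related features, only a computer can navigate the possibilities, which is why the applications took off when computers became common.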

Naturally, when computers became common, so did the applications, and, in turn, the applications became more complex. 

And, with the entry of the Internet and e-commerce, there was an explosion of data, which Machine Learning and AI came in to exploit. Automation and digitisation have come of age. With computers, the concepts of optimising and maximising became everyday applications. In applications like weather forecasting, scheduling and dynamic pricing, AI methods brought in efficiency and speed. And in advertising and marketing, the digitising of retail business provided the data for real-time customer profiling and personalised service.

The LLMs may start training on the same data and may be doomed to destruction.

There are many who feel this is an invasion of privacy, but the benefits to commerce are undeniable, and in a free market, the customer benefits. 

And then, the otherwise impossible tasks of programming a computer to carry out character and image recognition, driverless vehicles, text and image generation, even landscaping, have made possible an unending list of benefits. These are all “legitimate,” even if sometimes annoying, applications. But there are also applications that can be, and are, misused, which has given AI its bad press: particularly profiling and counterfeiting. Like every technological advance that brought comfort and convenience, this one has the capacity to undo the gains.

The good and the bad

But with all technological advances, society adapts to the good and the bad, and changes itself, usually forever. The Internet and AI are momentous advances, and there is no way they can be rolled back. As for their negatives, society will adapt. With growing awareness, cyber fraud will soon stop being profitable. Fake news items will give rise to new yardsticks of credibility, and in the arts too, we may see the birth of AI-transcending levels of creativity. At a more basic level is the effect on jobs and professions, but have economies not been equal to this in times past?

And then there are the errors and mistakes in information from the LLM; the observations of Flyvbjerg and Kathpalia are eloquent. Even without errors, with the proliferation of LLM-generated data, the LLMs may start training on the same data and may be doomed to destruction, like an audio amplifier that screeches and burns out from its own feedback.
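This feedback loop can be sketched as a toy simulation in Python: a word-pair model is retrained, round after round, on its own output, and the variety of what it can say shrinks. The corpus and the number of rounds are invented for illustration:

```python
# A rough simulation of the feedback loop suggested above: a toy
# word-pair model is retrained on its own output, and the variety of
# its vocabulary shrinks round after round.
import random
from collections import defaultdict

random.seed(1)
corpus = ("the quick brown fox jumps over the lazy dog while the small "
          "grey cat sleeps near the old wooden door and the young bird "
          "sings above the green garden wall").split()

def train(words):
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def sample(model, start, n):
    out = [start]
    for _ in range(n - 1):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return out

words = corpus
for generation in range(6):
    print(f"round {generation}: {len(set(words))} distinct words")
    model = train(words)
    if not model:
        break
    words = sample(model, random.choice(list(model)), len(corpus))
```

Since the model can never produce a word pair it has not already seen, each round can only lose variety, never gain it: a crude analogue of the amplifier feeding on its own signal.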

Now coming to whether it is AI literacy that India needs: the authors of the article, “Mapping AI in India,” have made a case for introducing computational thinking (problem-solving, abstraction and designing solutions rooted in computational logic) as part of the school curriculum. But they go on to say that this is not limited to, and not even the same as, ability in coding; it is awareness of how AI systems work, the capacity to be critical, and the ability to leverage AI.