AI Turf “War”: An Old Man’s Defence of Good Old-Fashioned AI by Flexday’s Nirmal Kartha

Updated on 12-Jun-2024

On the 23rd Anniversary of Digit Magazine, we invited stalwarts and leaders from the tech industry to give their perspectives on the role of AI (Artificial Intelligence) in India. Mr Nirmal Kartha, Principal Data Scientist at Flexday Solutions LLC, shares his thoughts on the future of AI, the evolution of its use cases, and the potential it holds.

According to Mr Kartha, “As well deserved as the laurels are, genAI today also has severe technological limitations. Although unexpected emergent capabilities from the simple task of pre-training on huge swathes of data continue to impress, there is strong support for the idea that these models, in their current form, are nothing more than stochastic parrots — if you hear a parrot say ‘Want some chocolate?’, you do not immediately conclude that the parrot knows what a chocolate is or what it means to want something. It’s just mimicking the sounds it has heard people say before.”

Here’s the full article below, originally published in the June 2024 AI Special Anniversary issue of Digit magazine.


When I began my graduate studies in Artificial Intelligence half a decade ago, AI was still popular but far from being in the limelight as it is today. Seminal research work, which would lead to the amazingly realistic image generation models we see today, was yet to be published. GPT-1 was just a year old. The idea of feeding large amounts of publicly available data to large transformer-based models – the beating heart of any large language model you can start using today – was just getting popular in academia and industry. ML and AI were far from dormant, though.

It was a space humming with activity around what is now informally referred to as “Traditional AI”.  

You might be excused for thinking November 2022 was when the world took note of AI. The ChatGPT moment, albeit a very important one in AI’s meteoric rise to the permanent mainstream fixture it is today, was far from representative of the myriad ways in which Machine Learning (ML) had already been integrated into consumer and business products. Generative AI (genAI) did exist then, and was promising — with Generative Adversarial Networks (GANs) as the star player (check out https://www.whichfaceisreal.com/ to see an artifact from that era) — but broad commercial applications were few and far between. The current generation of genAI solutions is indeed a world apart from the past, but what happened to traditional AI? Where is it now? And where will it be in the future?

Before I make claims about where we are and where we might go in the future, I think it would be best to clarify what traditional AI is generally considered to be. Traditional AI’s primary purpose is discriminative (as opposed to generative) in nature. It doesn’t “create” anything new. It is less concerned with filling in the next word or pixel and more concerned with extracting useful features from your input data to make very task-specific predictions.
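To put that distinction in the standard textbook notation (my gloss, not a formulation from the article): a discriminative model learns the probability of a label given an input, while a generative model learns the distribution of the data itself, which is what lets it sample new examples.

```latex
% Discriminative: the probability of a label y given an input x
P(y \mid x)
% Generative: the distribution of the data itself (hence: sampling new data)
P(x) \quad \text{or jointly} \quad P(x, y)
```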

Source: Midjourney (prompt is an engineered version of “traditional vs generative AI --ar 16:9 --s 750 --style raw --v 6.0”)

Think about the algorithm your bank uses to determine whether that 50K transaction made on your credit card a minute ago in Eastern Europe is fraudulent or not. Or the one that Netflix, Instagram, TikTok and the like use to recommend the next piece of media to consume, which will most likely extend your sleepless binge session by a few more hours. Both AIs are “special-purpose” (i.e., Netflix’s trained recommendation engine is unlikely to work well, or at all, in another domain like fraud detection). Both AIs take in input data — your transaction/viewing history, broader variables like the behavior of others, and so on — and transform them into specific predictions (fraud/not, suggest/not).
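Here is a toy sketch of such a “special-purpose” discriminative model along the lines of the fraud example; the features and data are invented purely for illustration.

```python
# Toy fraud classifier: map transaction features to a fraud/not-fraud label.
from sklearn.tree import DecisionTreeClassifier

# Each row: [amount, seconds_since_last_transaction, is_foreign_country]
X = [
    [25.0, 86_400, 0],     # small, infrequent, domestic
    [40.0, 43_200, 0],
    [9_000.0, 60, 1],      # large, rapid, foreign
    [12_000.0, 30, 1],
]
y = [0, 0, 1, 1]           # 0 = legitimate, 1 = fraudulent

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[11_000.0, 45, 1]]))  # -> [1]: flag for review
# Trained on transactions, this model is useless for, say, recommending films:
# the learned features simply do not transfer across tasks.
```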

Traditional AI use cases extend beyond fraud detection or recommendation systems. Deep neural networks that use a certain data transformation technique called a convolution (think of it as smearing two things together to find the overlaps between them) have been the workhorse for a wide variety of image-related tasks like image classification, object detection, image segmentation, etc., for a long time; a toy example follows below. For instance, facial recognition systems currently in production, whether for Big Brother-style surveillance or relatively more benign applications like biometrics, leverage some form of convolutional network. Many models used for fortune telling, aka making predictions about the future, aka time series forecasting, would also fall under traditional AI. Think of these as what a clothing manufacturer would use to predict not only the demand for their sturdy jeans next summer but also the raw materials they would need to purchase in advance to fulfill that future demand. I can go on and on.
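To make the “smearing” intuition concrete, here is a toy one-dimensional convolution in NumPy (my illustration, not from the article; real vision models slide small two-dimensional filters over images in the same spirit).

```python
# Toy 1-D convolution: slide a tiny filter across a signal and sum the overlaps.
import numpy as np

signal = np.array([0, 0, 1, 1, 1, 0, 0])  # a crude "plateau" in a 1-D signal
kernel = np.array([1, -1])                # a difference (edge-detecting) filter

# np.convolve flips the kernel, slides it along the signal, and sums overlaps
print(np.convolve(signal, kernel, mode="valid"))
# -> [ 0  1  0  0 -1  0 ]: +1 where the signal rises, -1 where it falls
```

Stack thousands of such learned filters in two dimensions and you have the feature extractors behind image classification and object detection.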


GenAI is just the small tip of the wide and deep iceberg that is the field of AI. The term “artificial intelligence” was coined in 1956 by John McCarthy, when he organized the Dartmouth Summer Research Project on Artificial Intelligence, which is considered the founding event for AI as a field of study (note that the “idea” of machines simulating intelligence existed prior to that; e.g., 1950 saw Alan Turing devise the famous Turing test, which would be popularized in science fiction films like Alex Garland’s Ex Machina). The goal of the conference was “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”. Creation is just one aspect, albeit an important one, of intelligence. So is making informed decisions.

We should note that the distinction between traditional AI and genAI can be blurry. Oftentimes it is mostly one of convention, like the grammatical gender of inanimate objects in languages such as Hindi or French, both descendants of the Proto-Indo-European language family. For instance, genAI models are capable of zero-shot learning (think of it as recognising a new animal you’ve never seen before, just from your memory of more familiar animals and someone’s description of this rare beast).
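To see that blurring in practice, here is a minimal sketch of zero-shot classification via prompting; `call_llm` is a hypothetical stand-in for whichever chat-completion API you use, not any specific library.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: route this to your LLM provider of choice.
    raise NotImplementedError

def classify_sentiment_zero_shot(review: str) -> str:
    # No sentiment-specific training: the task is specified entirely in the prompt.
    prompt = (
        "Classify the sentiment of the following product review.\n"
        "Answer with exactly one word: positive or negative.\n\n"
        f"Review: {review}"
    )
    return call_llm(prompt).strip().lower()

# e.g. classify_sentiment_zero_shot("Battery died within a week. Avoid.")
# would be expected to return "negative".
```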

With clever prompt engineering, you can bend genAI to your will and have it make predictions. Similarly, one can argue that what comes out of a traditional AI model, i.e., the output, is also “content”. The line is blurred further by the fact that the underlying architectural component of all contemporary genAI solutions, the transformer (which, in turn, can be broken down into constituent blocks and layers, akin to how an atom can be split into subatomic particles), is used in several applications to make predictions. For instance, almost all self-driving car companies (those still standing and the ones that faded away) use vision transformers extensively for navigation. So do healthcare applications that use them, alongside “traditional” convolutional networks, for medical screening and detecting abnormalities in scans.

Let’s just make our lives simpler and refer to genAI as the crop of modern text, image and video generation technologies that can generate new “content” in the way we think humans do. Everything else – traditional AI. In the midst of this euphoria around genAI, one wonders: traditional AI is all around us. It has always been all around us. But will it always be around us?

Working for Flexday AI Corp. — an organization that builds both traditional and generative AI solutions for our Fortune 500 enterprise clients — I have been focusing on the genAI half since before the ChatGPT moment. While executives are enthusiastic about genAI initiatives, actual deployment rates remain low. Few companies have a clear idea of what they want to accomplish with genAI when they adopt it. Long-term implications, including costs and the effects of regulation, are still unknown. Oftentimes, end users within organizations have unrealistic expectations about the timelines, costs, and value delivered by their genAI projects. This could be chalked up to the almost messianic framing in which genAI organizations like OpenAI communicate about their products.

Image via Unsplash

As well deserved as the laurels are, genAI today also has severe technological limitations. Although unexpected emergent capabilities from the simple task of pre-training on huge swathes of data continue to impress, there is strong support for the idea that these models, in their current form, are nothing more than stochastic parrots — if you hear a parrot say “Want some chocolate?”, you do not immediately conclude that the parrot knows what a chocolate is or what it means to want something. It’s just mimicking the sounds it has heard people say before. Hallucinations can persist even if you implement the most watertight Retrieval Augmented Generation (RAG) systems.
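For readers unfamiliar with the term, here is RAG in miniature, a deliberately naive sketch (keyword-overlap retrieval standing in for real vector search, and the same kind of hypothetical `call_llm` helper as above).

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for any chat-completion API.
    raise NotImplementedError

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Toy retrieval by keyword overlap; production systems use vector search.
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def answer_with_rag(query: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(query, docs))
    prompt = (
        "Answer ONLY using the context below. If the answer is not there, "
        f"say you don't know.\n\nContext:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)
```

Note that the “answer only from the context” line is a request, not a guarantee: nothing mechanically stops the model from asserting things the retrieved context never said, which is why hallucinations can survive even careful grounding.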

Many traditional AI systems in deployment are far more explainable and interpretable than the more “black-box” genAI systems. That opacity, combined with the enormous size of the training data, means combating bias in these models will also prove difficult. Even when a bias is identified, there is no widely agreed approach for “unlearning” it. Last but not least, the rise of high-profile copyright infringement suits and controversies – one of the latest being OpenAI using a voice eerily similar to Scarlett Johansson’s despite her not providing permission – will increasingly cast a dark shadow over the future of such tools.

My genAI experience, coupled with my prior work developing traditional AI solutions and my observation of industry trends, makes me confident enough to claim that traditional AI is likely to remain all around us for the foreseeable future. Unless major architecture-level breakthroughs are made in generative AI, it is unlikely to subsume traditional AI.

In fact, generative AI and traditional AI can exist in a symbiotic relationship, augmenting one another. For example, a tailor-made traditional AI marketing model can extract useful features from a consumer’s purchase patterns and feed them to a general-purpose generative AI model like GPT to create compelling, personalized marketing copy (a rough sketch of this hand-off follows below). Yes, the edges are still rough and, as such, caution must be exercised before integrating these systems, especially in mission-critical domains. However, I am hopeful about the future.
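To make that hand-off concrete: in the sketch below, a toy aggregation stands in for a trained marketing model, and the feature names and `call_llm` helper are illustrative assumptions, not a real pipeline.

```python
from collections import Counter

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for your LLM provider of choice.
    raise NotImplementedError

def extract_features(purchases: list[dict]) -> dict:
    # Stand-in for a trained model: simple aggregates over purchase history.
    top = Counter(p["category"] for p in purchases).most_common(1)[0][0]
    avg = sum(p["amount"] for p in purchases) / len(purchases)
    return {"top_category": top, "avg_spend": avg}

def personalized_copy(purchases: list[dict]) -> str:
    # Traditional AI distills the history; genAI turns the distillation into copy.
    f = extract_features(purchases)
    prompt = (
        "Write a two-sentence marketing email for a customer who mostly buys "
        f"{f['top_category']} and spends around ${f['avg_spend']:.0f} per order."
    )
    return call_llm(prompt)
```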

Industry leader

Contributions from industry leaders and visionaries on trends, disruptions and advancements that they predict for the future
