Soon after cracking the famed Enigma encryption used by Nazi Germany, thereby playing an instrumental role in tilting the Second World War towards the Allies, Alan Turing, the father of computer science, allowed his nimble intellect to return to his beloved area of research: artificial intelligence (AI). He gave us a glimpse of his vision in 1947 when he said: “What we want is a machine that can learn from experience.”
Turing was projecting heuristic problem-solving abilities on machines that were amoebae compared with the primates we place on our laps today. According to him, a computer could be said to be artificially intelligent if it could mimic human responses under specific conditions. With that in mind, he proposed a test in which a human and a computer would answer the same set of questions posed by another human, and this second human would be asked to guess which responses stemmed from the human and which from the computer. Turing might have relished watching the latest AI phenomenon known as ChatGPT attempt to pass his test.
Sixty-nine years after Turing’s death, it is time to ask whether ChatGPT is a harbinger of the much-awaited breakthrough that the field of AI has been promising for decades. Setting aside for a moment the great advances in mathematical modelling, algorithms, and computing power, the field of AI itself has witnessed some significant milestones along the way (see Table 1).
The writing had been on the screen for decades. Hence, when ChatGPT gained a million users just five days after its launch in November 2022, beating the previous record of 2.5 months set by Instagram, many experts might have seen the development as the inevitable crawl of destiny, not an avalanche. But avalanche it was. Within two months, the software had already catered to 100 million unique users, a feat that even the severely addictive TikTok took nine months to achieve. Perhaps the greatest sign of its success is the fact that the platform continues to be intermittently unavailable; it is almost always operating at peak capacity. This is the equivalent of a sign that says, “Closed, come back later”, on a website, with the storekeeper having no doubt that the customer would walk in again tomorrow. Is it any wonder that in early January of 2023, OpenAI—the company that created ChatGPT—was seeking funds at an estimated valuation of $29 billion?
OpenAI Inc was founded in 2015 as a non-profit backed by the likes of Peter Thiel, Elon Musk, Jessica Livingston, Sam Altman, Trevor Blackwell, Andrej Karpathy, Ilya Sutskever, and Reid Hoffman. Together, they pledged $1 billion and envisioned that the company would develop ethical AI that would benefit all of humankind. They decreed that its patents and research findings would follow the open-source model. However, in 2019, a year after Musk stepped down from the board, the company formed a subsidiary named OpenAI LP, a “capped-profit” entity. Revenues of this company could now be leveraged to offer investors up to a hundredfold return, but no more than that; hence the “cap”. In addition to rewarding investors, the move allowed the company to incentivise champion AI developers with stock options. But some industry-watchers wondered whether the parent organisation would stay true to its vision of creating maximum value for all, at all times.
The era of AI companies had well and truly begun, as is evident in PitchBook’s reporting that American venture capital firms invested $115 billion in AI companies in 2021 alone. Ergo, a winner like OpenAI did not have to worry about a dearth of investors. The company chose Microsoft as an investment partner in 2019, with the latter reportedly pumping in billions of dollars since then. Microsoft’s latest round of investment came in January 2023, just a week after it announced the layoff of 10,000 of its own employees, leaving the market in no doubt about the side of the toast that is buttered.
To its credit, OpenAI appears to have hired and developed talent smarter than the market did, thereby claiming the first-mover advantage. This has definitely strained its infrastructure budget, but the sheer quantum of feedback it receives from real users in real time allows it to enhance the software at virtually no extra cost.
Never before has an AI-driven piece of software captured the imagination of the world to this extent. Today, laypeople have begun caring about jargon such as Large Language Models (LLMs), neural networks, Generative Pre-trained Transformer (GPT) and whatnot. They are attempting to understand how and why gargantuan amounts of data are fed to AI tools to “train” them, and why the performance of an AI with 175 billion machine learning parameters will be qualitatively different from one trained with a mere 10 billion.
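For readers wondering what a “parameter” actually is, here is a minimal sketch in Python (using the PyTorch library; the toy layer sizes are purely illustrative and bear no relation to any GPT model) that counts the learnable parameters of a tiny neural network:

```python
import torch.nn as nn

# A toy two-layer network; real LLMs stack hundreds of far wider layers.
model = nn.Sequential(
    nn.Linear(512, 2048),  # weights: 512 x 2048, biases: 2048
    nn.ReLU(),
    nn.Linear(2048, 512),  # weights: 2048 x 512, biases: 512
)

# Every weight and bias is one "machine learning parameter" that
# training nudges, little by little, towards better predictions.
total = sum(p.numel() for p in model.parameters())
print(f"{total:,}")  # 2,099,712
```

Scale the same bookkeeping up by a factor of roughly 80,000 and you arrive at GPT-3’s 175 billion parameters.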
So, what has caused this excitement? What can ChatGPT do?
A tool for the future
Some have called it the “stochastic parrot” capable of nothing more than stringing together random words that, at first glance, make sense. But for most people, ChatGPT is the wunderkind that can converse with a human like an equal, write all kinds of things—including essays, poetry, software code, manuals, white papers, novels, and everything in between—while also offering translation abilities in a wide variety of languages. It also appears to possess some ability to reject inappropriate requests, correct its flaws, and retain the thread of the conversation throughout. On January 31, ChatGPT passed the US Medical Licensing Exam, leaving onlookers with one of two conclusions: either the software possessed admirable capabilities or the definitive test to practice medicine was a no-brainer.
In simple terms, ChatGPT is CliffsNotes on steroids. It is to the search engine what the search engine was to the Yellow Pages.
In order to truly understand the capabilities and limitations of an AI tool, one can measure its performance against Bloom’s Taxonomy (Figure 1), which examiners use to assess a learner’s holistic knowledge. In a nutshell, it models learning as occurring on six different planes, with the best outcomes experienced at the higher planes of the pyramid.
Since the era of microchips began, remembering has not been an issue for any computer. True AI prowess begins at the second layer. With that in mind, let us assess ChatGPT, layer by layer:

Figure 1: Bloom’s Taxonomy pyramid.
Understand: When asked for fun ways to teach Boyle’s Law to a 9-year-old, the software threw up an apt suggestion bank in a jiffy. No fuss.
Apply: When asked to list the qualities required to be a good rocket scientist, the software exhibited the ability to apply knowledge in a commonsensical manner—citing technical and non-technical qualities.
Analyse and Evaluate: The tool offers middling results in these two layers on issues it considers non-controversial—like recommending mobile phones within a price range. But when you ask it to list occasions on which Donald Trump lied or to write a biography of Narendra Modi, you can smell the Dettol in the words. Even when tackling something less controversial—like comparing Sachin Tendulkar and Virat Kohli—the response is almost apologetic, filled with disclaimers. Experts believe that these guardrails were erected in GPT-3.5, which is far more cautious than GPT-3. Mercifully, the software does not hedge when asked whether climate change is a hoax.
Create: The truest sign of intelligence is the ability to think, conceptualise, and create original works. When asked to write a short story in the style of Wodehouse about a missing necklace, the tool threw up a few short paragraphs reminiscent of a second-standard student’s essay. Asked to add an attempted murder, an election, and mountaineering to the story, the tool approached each conflict sequentially, resolving one before moving on to the next. And since mountaineering does not fit easily into a British noble’s life at the manor, it skipped it altogether. The text made no attempt to “show”; everything was told. The result was similar when asked to script a comic strip based on Calvin & Hobbes in which Calvin wins the Student President election against Susie by swearing to abolish homework. It understood the brief and created a script with well-sequenced panels, but its Calvin was unrecognisably angst-free.
Additionally, the tool is verbose and repetitive, does not do mathematics well (unsurprising, perhaps, for a system built to predict plausible words rather than to compute), and almost always gives responses well below the requested word length. When asked to create a playlist of songs featuring Manoj Bajpai, it was inaccurate. And when asked to create a playlist of songs of the Bollywood actor Sachin, it did not comply because the previous query was about Sachin Tendulkar, who, as everybody knows, was not an actor.
However, considering that the tool is at the Australopithecus stage of evolution, one can only wonder how well it will do once it attains the Homo sapiens’ level of cognitive abilities.
A tech reshuffle
Already, AI tools are making their presence felt in coding. Andrej Karpathy, former AI Director of Tesla, recently praised Microsoft’s GitHub Copilot in the following tweet:
“Copilot has dramatically accelerated my coding; it’s hard to imagine going back to ‘manual coding’. Still learning to use it but it already writes ~80% of my code, ~80% accuracy. I don’t even really code, I prompt. & edit. (sic)”

The race for AI supremacy is well and truly on, and this race might become more significant, lucrative and impactful than the nuclear arms race or the race to space. | Photo Credit: istock
Elsewhere, speaking at the 2022 World Economic Forum in Davos, Sundar Pichai reportedly said:
“AI is one of the most profound things we’re working on as humanity. It’s more profound than fire or electricity.”
Since nobody has so far conducted an experiment in which three humans are deprived of heat, light, and AI respectively to determine who suffered the most, one may have to take Pichai’s statement as a working hypothesis for now. And take into account the attitude behind these words while analysing the seismic layoffs the industry, not just Google, has witnessed in the recent past.
Google’s layoffs are of particular importance, though, because no other company competes as directly with OpenAI in this space. Pichai emailed his employees announcing the slashing of 12,000 jobs—around 6 per cent of the workforce. The email mentioned AI thrice. The market responded by raising the price of Google’s stock by 5 per cent.
Prominent among those laid off were employees serving AI and robotics initiatives. It is difficult to view this development as unrelated to the surge of popularity enjoyed by ChatGPT, especially since the company seems to have hastened the launch of its own chatbot named Bard. It is even more difficult to imagine a future where corporate leaders will prioritise people over profits. To them, AI is the sound of a cash register ringing so loudly that one can hear it on the other side of the galaxy.
This corporate truth is made evident by the fact that Google launched Bard and Microsoft launched its Bing chatbot (which called itself Sydney) in quick succession, each to a limited audience. The results were underwhelming and, in some cases, embarrassing. The manner in which Sydney gaslit a user is at once hilarious and chilling. If one ignores these short-term hiccups, there is no doubt that the race for AI supremacy is well and truly on, and this race might well become more significant, lucrative, and impactful than the nuclear arms race or the race to space.
“AI is no different from the climate. You can’t get safety by having one country or a set of countries working on it. You need a global framework.”
Sundar Pichai, CEO, Alphabet
Inevitably, AI will become mainstream. It will rule. But what would its reign look like? It is time to round up the threats, challenges, and opportunities associated with this society-altering concept. Let us begin at the most obvious point: the threat to livelihood.
Corporate leaders engage in verbal gymnastics while downplaying the erosion of the job market by AI. Their favourite position is that technology has always enabled employees to enjoy greater productivity, larger earnings, and more high-end work worthy of a human being. In other words, jobs are displaced upwards, not outwards. And when journalists persistently question the impact on the job market, these stalwarts state that any disruption will be temporary.
The threat of wide-scale unemployment
The truth is that AI is likely to do to white collar jobs what robotics did, and is continuing to do, to blue collar jobs. It is only a matter of time before robots make obsolete the vast army of delivery personnel serving us with robotic zeal, and autonomous cars transform drivers into dinosaurs.

It is only a matter of time before robots make obsolete the vast army of delivery personnel serving us with robotic zeal, and autonomous cars transform drivers into dinosaurs. | Photo Credit: istock
Back in 2017, Elon Musk sounded a warning for the transportation domain, saying, “Transport will be one of the first to go fully autonomous. But when I say everything—the robots will be able to do everything, bar nothing.” A similar, but perhaps less drastic, fate awaits workers in the knowledge industry, according to Bill Gates. In his opinion, some jobs even in medicine and teaching are in jeopardy. And one can safely state that if AI can do 80 per cent of a programmer’s job, at some logical point, programmers will be devalued or even fired.
The gradual mainstreaming of AI tools provides fertile ground for speculation. The table (Table 2) keeps an open mind about the impact—which could be enabling, devaluing, or displacing—of some impressive AI tools on various vocations.
Those who experience a negative impact might slide one rung down the value chain. For instance, freelance writers who deliver content as if it were a commodity might be paid less to do some last-mile polishing of AI-generated content. This phenomenon is elaborated in the subsequent section. Meanwhile, freelance writers who are positioned above commodity peddlers might be asked to leverage AI to slash cost and time, which is likely to cause a stagnation or dip in salaries.
Professionals who base their services on demonstrable uniqueness are likely to be untouched. Those who create output using primary research, those with signature styles, those with well-established brand names, and those who epitomise core human values such as compassion, generosity, empathy, and resilience would be difficult to replace even by innovatively programmed bots. These professionals might even become more productive and emerge stronger in this era.
It is this small subset that corporate leaders prop up as a sign of unmistakable human progress. It does not seem to matter that purely capitalistic economies—nations with poor or absent social welfare measures—are unable to offer even rudimentary levels of comfort to a silent majority who have strived over a lifetime. To acknowledge this phenomenon, look no further than elderly Americans who are working well past their retirement ages just so that they do not sink further into debt.
If the idea is to build an economy that works for everyone, AI seems a terribly unlikely tool of choice. ChatGPT itself agrees. When asked about job erosion, it said:
“Overall, it is likely that the impact of AI on the job market will be complex, and the key to minimising any negative impacts and maximising the benefits of AI will be to approach the integration of AI technology in a thoughtful and responsible manner.”
The threat of commoditised content
A 2022 report by Europol (the European Union Agency for Law Enforcement Cooperation) titled “Facing reality? Law enforcement and the challenge of deepfakes” cited estimates that as much as 90 per cent of online content may be synthetically generated by 2026. As if on cue, BuzzFeed CEO Jonah Peretti announced that his website would begin publishing AI-generated content. The undercooked content generated by AI today should work well for a website infamous for its clickbait strategy. Not surprisingly, the company slashed 12 per cent of its workforce prior to Christmas.
“Regulation should address both the case of malevolent humans intentionally misusing machine intelligence to, for example, wreak havoc on worldwide financial markets or air traffic control systems, and the ‘accident’ case of [AI] being developed and then acting unpredictably.”
Sam Altman, CEO, OpenAI
In the long run, an aggregation of such developments will feed a constant diet of unintelligent, trivial, and ultimately dissatisfying content to the whole of humankind. It remains to be seen whether the most popular writers are substituted by AI or leverage it to produce books at triple the speed, although it is difficult to imagine an AI mimicking the speed of someone like James Patterson!
Interestingly, human beings are less likely to sniff out AI-generated content than GLTR, a tool designed to do exactly this. How does it do better than humans? Simple. It checks how predictable each word in a text is to a language model; human writing is peppered with creative or mismatched word choices that an AI would rarely make, and their absence gives the machine away. Our imperfection might yet save us!
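For the curious, here is a minimal sketch in Python of this rank-based idea (an illustration of the technique, not GLTR’s actual code), using the freely available GPT-2 model from the Hugging Face transformers library:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text: str) -> list[int]:
    """For each token, find the rank the model gave it among all
    possible next tokens. Low ranks mean 'predictable' words."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # next-token scores at each position
    ranks = []
    for pos in range(ids.shape[1] - 1):
        actual_next = ids[0, pos + 1]
        # How many tokens did the model rate higher than the actual one?
        rank = int((logits[0, pos] > logits[0, pos, actual_next]).sum()) + 1
        ranks.append(rank)
    return ranks

# Mostly low ranks suggest machine-generated text; frequent high-rank
# "surprises" are the creative fingerprints of a human author.
print(token_ranks("The quick brown fox jumps over the lazy dog."))
```

A text whose every token sits within the model’s top few guesses reads as machine-generated; a scattering of high-rank surprises points to a human hand.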
The great existential threat
Not long before the spectacular launch of ChatGPT, OpenAI’s CEO Sam Altman said in an interview, “The only real driver of human progress and economic growth over the long term is the societal structure that enables scientific progress and scientific progress itself.”
While a rational person may agree with this, the problem is that money has co-opted the term “scientific progress”, leaving Silicon Valley free to tell us what the term should mean. But if one were to consider a deeper definition of science, one would hear implorations from inexact sciences like anthropology, sociology, and psychology to exercise great caution when dealing with events of this magnitude. Such caution cannot be the byproduct of self-regulation. Silicon Valley is, and must be, driven by the profit motive. Therefore, there is no reason to believe that it will exhibit greater restraint and moral fortitude than Wall Street.

Abstract landscape made of tiny cubes and human-like face, an artificial intelligence concept. | Photo Credit: Getty Images
The solution can only be monitoring and regulation by a knowledgeable, global, external agency along the lines of the United Nations, but one that actually works and is accorded overarching decision-making powers to curb AI initiatives that run against human interests. The biggest names of the tech industry, including future superstar Altman himself, agree that regulation is crucial. More than 150 experts, including these big names, declared as much in a 2015 open letter.
Areas in which regulations are urgently needed include arms usage and surveillance. Otherwise, tools like Rewind—which remembers everything that the user has seen, heard, and said by recording every single action on the computer—will democratise surveillance power. Anybody can use such tools to mimic the behaviour of government espionage agencies. That is one kind of democracy the world does not need.
In the absence of a global regulatory agency that implements a standard code of ethics, companies are, for now, attempting to regulate themselves. Google, for instance, prohibits weapons work, although it remains open to selling technology to the military. In 2018, it refused to renew its defence contract for the controversial Project Maven, an initiative that leverages artificial intelligence to analyse drone footage and might one day find itself at the intersection of surveillance and autonomous weaponry.
There is cause for even more caution and humility, if one considers the evolutionary perspective.
With our seemingly supreme powers and intellect, we have halted one aspect of evolution—we will not allow another biological organism to become superior to us under our watch. Hence, evolution—an unintelligent, disinterested force—will stay arrested in this regard till one of two things happens:
A catastrophic event that vanquishes us, giving another biological organism an opportunity to eventually take our place.
A mutation within humans that creates a superior organism. This could well be a technological entity that once mimicked us but eventually became far better than us. Such an entity could also unleash the aforementioned catastrophic event.
“The place that I think this is most concerning is in weapon systems.”
Bill Gates, Co-founder, Microsoft
Seen in that context, AI is perhaps the inevitable march of life itself. And it is in our best interests to rein in this marcher. One of the smartest thinkers of his generation, Stephen Hawking, agreed. He said:
“The development of full artificial intelligence could spell the end of the human race…. It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
The threat to learning
“When students cheat on exams it’s because our school system values grades more than students value learning,” tweeted Neil deGrasse Tyson way back in 2013. Well, in 2023, cheating may well become a banal occurrence, with Google’s Socratic, OpenAI’s ChatGPT, or any similar tool actually producing students’ homework. The resultant stunted learning must be deemed unacceptable, at least in mission-critical vocations such as piloting and medicine.
The threat of reality distortion
We are what we eat. The same goes for AI. No AI tool can escape the limitations or inaccuracies of the data used to train it. Let us understand this by hypothesising that today’s AI existed in the 1630s, when Galileo endorsed Copernicus’ heliocentric model of the solar system. The church declared this heresy and sentenced him to house arrest. The majoritarian view of that time was that all celestial objects revolved around the earth. An AI that somehow collated the opinions of all Europeans would believe this to be true, not knowing that Galileo’s lone contrarian stance was in fact the correct position. To circumvent this problem, the AI tool would have to be trained to give overwhelming weightage to this single position. That is easy to do in hindsight, but what about today’s heliocentric ideas, the correct but marginal positions we cannot yet recognise? The majority of us—including the creators and trainers of AI—will not even know where we are incorrect. And thus, unintentionally, the technology may spread ideas that distort reality. This is compounded by the fact that the Internet is not equally accessible to all, which will minimise the presence of, and the weightage given to, underprivileged viewpoints. Such marginalisation can be based on class, gender, ethnicity, geography, or even the language of expression.
A shockingly apt example of unintentional reality distortion comes from Israel where a Palestinian construction worker was arrested in 2017 for a simple “Good Morning” post on Facebook. The platform’s AI-powered translation software erroneously translated the text as “attack them” in Hebrew and “hurt them” in English!
“I think autonomous weapons are extremely scary.”
Jeff Bezos, Founder and executive chairman, Amazon
Now let us add wanton distortion of reality to this already volatile situation. Back in 2019, OpenAI initially withheld the full version of GPT-2 because it felt that the model could be used to create enough garbage content to pollute the whole web. Meanwhile, deepfakes of celebrities as well as commoners are successfully sparking online and offline conflicts. Biased television channels and online mobs then amplify this disinformation, thereby pouring gasoline on the sparks. The privacy, security, and dignity of individuals are sacrificed in the process with shocking nonchalance.
A related issue comes from the biases and blind spots of the creators and trainers of AI. Soon after the release of ChatGPT, memes began circulating in India about how the software did not mind cracking jokes on Jesus and Krishna but claimed that it would be insensitive to do so on Muhammad. On the other side of the world, the New York Post reported that the software had no issues cracking jokes on men but considered jokes on women and overweight people unacceptable. In this manner, ChatGPT acquired political hues even before the paint had dried on the server cabinets.
Savvy media moguls are bound to take advantage of this opportunity, perhaps creating their own AI tools that would be trained on datasets biased in favour of their worldviews. Thus, the Internet can remain a place of segregation and cultivated ignorance. These moguls will be helped by the fact that it is extremely expensive to retrain AIs from scratch, which might prompt companies to decide against correcting systemic inaccuracies and biases unless they are required to do this for business growth.
The Internet is already successful in fuelling and amplifying hatred. Imagine navigating it with an AI tool that has much greater power than conventional search engines. Inaccurate or biased AIs will do a fantastic job of summarising a message of divisiveness. The consumer does not even have to collate divisive ideas from different sources. These will be served on a platter of “intelligence”.
The threat of systemic inequities
In 2020, four authors, including Timnit Gebru, the co-director of Google’s Ethical AI team, submitted a paper titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” to the Association for Computing Machinery Conference on Fairness, Accountability, and Transparency. When Gebru left Google mere days before the paper was accepted for the conference, she said she was fired whereas the company said she had resigned. Two months later, another co-director—and a co-author of the paper—was also let go.
The paper spoke at length about how the inaccurate or biased data fed to AI tools can lead to systemic suppression of underprivileged viewpoints (we have already covered this ground). It also brought focus to the energy-guzzling nature of Large Language Model-driven AI tools.

AI companies are obsessed with size. They amass oceanic computing power to increase the amount of training data fed to the tools. While more data offer greater performance, they also create their own complications. | Photo Credit: istock
Google’s sincerity regarding an egalitarian approach to business came under further scrutiny in February 2022 when two more members of its Ethical AI team—Alex Hanna and Dylan Baker—quit. Hanna stated in a piece she wrote for Medium.com: “I could describe, at length, my own experiences, being in rooms when higher-level managers yelled defensively at my colleagues and me when we pointed out the very direct harm that their products were causing to a marginalised population. I could rehash how Google management promotes, at lightning speed, people who have little interest in mitigating the worst harms of sociotechnical systems, compared to people who put their careers on the line to prevent those harms.”
If another reason is required to usher in third-party regulation, this is it.
The threat hidden inside size
By now, it must be clear that AI companies are obsessed with size. They amass oceanic computing power to increase the amount of training data fed to the tools. While more data offer greater performance, they also create their own complications.
“One can create neural networks having trillions of machine learning parameters, but is it AGI [Artificial General Intelligence]?” asks Amol Dharmadhikari, the CTO of Silicon Valley startup Ionate.
Could it lead to a kind of omniscience, a superpower hitherto attributed only to divinity? Dharmadhikari is not convinced. He says:
“We need to examine its intrinsic complexity as it grows. All complex systems are inherently unstable and can fail in unexpected ways. At the mind-boggling scale of multi-trillion parameter network, the human ability to tune it will reach a saturation point. Will we be able to understand and tune it as it increases in size? Even as we speak, there are cases where ChatGPT is churning out gibberish. As it grows, its failure modes may be harder to detect and rectify.”
Does this mean that, beyond a particular size, only well-developed AIs will have the ability to train larger AIs under development? It is a theoretical possibility, and only the future can answer the question. But even to someone who has no problem with humans playing a minimal role in the creation of future generations of AI tools, the prospect sounds, at least at first hearing, like the preamble of a dystopian sci-fi movie.
For now, the threat of size resembles the problem of speed in relativistic physics. If humankind somehow develops vehicles that can approach the speed of light, the mass of the vehicle will keep increasing as the vehicle gains speed. Even minuscule increases in speed will then require a mammoth infusion of additional energy. Would the task of increasing the size or reliability of AI tools align with this analogy?
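For readers who want the formula behind the analogy, the relativistic energy of an object of rest mass $m_0$ moving at speed $v$ is

$$E = \frac{m_0 c^2}{\sqrt{1 - v^2/c^2}}$$

where $c$ is the speed of light. As $v$ approaches $c$, the denominator approaches zero, so each sliver of additional speed demands a disproportionately larger infusion of energy. The open question is whether, past some scale, each sliver of additional AI capability will demand similarly disproportionate computing resources.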
Application to the most pressing problems
As far as one can recollect, most exciting new technologies have been sired, midwifed, birthed, and raised by corporates. It stands to reason that corporates will pocket the earnings of these technologies. As a result, the best technologies end up solving the easy problem of amplifying wealth for the already wealthy. That is like using rocket fuel to commute to the grocery store. To achieve optimal results for all stakeholders, these technologies must be directed at the most pressing and complex issues of humankind such as global climate change, economic inequity, political instability, divisiveness in society, and delivery of good governance.
As a researcher at the Indian Institute of Tropical Meteorology, Pune, Dr Bipin Kumar is engaged in solving such problems. A couple of years ago, he and his colleagues began leveraging AI to make more accurate hyperlocal rainfall predictions across the length and breadth of India, thus providing crucial input to a nation whose economy still relies on the monsoon.

Robot typing text on a typewriter. Artificial intelligence generated text and future of journalism concept. | Photo Credit: istock
The key to better predictions is high-resolution imaging, achieved by feeding coarse-resolution images, in which each grid square spans 25 × 25 km, to an AI tool and receiving images with four times the resolution as output. The enhanced resolution offers greater clarity at the block or village level, thus helping administrators and policymakers make better decisions. Says Dr Kumar:
“AI requires very high computational powers while training the tool or while adding more variables to the model. But once that is done, we can make do with light computation powers. A technology like AI is crucial for us because we are dealing with higher complexity of data than other domains. And I would say that we are still at a nascent stage of leveraging AI.”
They have begun extending this process framework to predict crop stubble burning patterns in Punjab and Haryana during winter, the occurrence of debilitating fog at the New Delhi airport, and even creating early-warning mechanisms in lightning-prone areas of the country. Additionally, Dr Kumar is applying the same framework in an oceanographic project to study the “mixed layer depth” of the ocean which greatly influences lower atmospheric conditions as well as oceanic currents, which, in turn, will help us better understand and predict cyclone dynamics.
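To make the super-resolution step concrete, here is a minimal sketch in Python with PyTorch of the kind of 4x upscaling network such a pipeline might use (an illustration of the general technique, not the institute’s actual model; all layer sizes are assumptions):

```python
import torch
import torch.nn as nn

class RainfallSR(nn.Module):
    """A toy 4x super-resolution network: coarse rainfall grids in,
    four-times-finer grids out (hypothetical sizes throughout)."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            # Produce 4*4 sub-pixel channels, then rearrange them
            # into a grid with 4x the height and width.
            nn.Conv2d(64, channels * 16, kernel_size=3, padding=1),
            nn.PixelShuffle(upscale_factor=4),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# A coarse 16 x 16 grid (each cell ~25 km across) becomes 64 x 64.
coarse = torch.rand(1, 1, 16, 16)  # batch, channel, height, width
fine = RainfallSR()(coarse)
print(fine.shape)  # torch.Size([1, 1, 64, 64])
```

In practice, such a network would be trained on pairs of coarse model outputs and historical high-resolution observations; only the 25 km grid and the 4x factor come from the article.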
“Cutting edge AI is far more dangerous than nukes... AI needs proactive regulation, not reactive. Else, it will be too late. It is a fundamental existential risk to civilisation. I don’t think people fully appreciate that.”
Elon Musk, CEO, Tesla
Recently, Satya Nadella mentioned how rural Indians used Bhashini—a tool capable of seamless translations, co-developed by Microsoft Research India and the Indian Ministry of Electronics and Information Technology—to access a government service. This anecdote should become the norm: the latest and greatest technologies must be used to solve the latest and greatest problems of humankind. This cannot be considered optional, even though it will require massive investments of money, expertise, and effort.
A final word
For many years now, AI has been unobtrusively influencing our behaviours and destinies. It has been predicting consumer patterns, vetting potential clients, taking charge of security apparatuses, conducting nano-trading, addressing customer grievances, recognising faces, managing inventory, assessing medical reports, suggesting entertainment capsules on OTTs, forecasting sales, managing traffic, whetting appetites during mealtime, and much more.
Yet, the launch of ChatGPT is a watershed moment in human history. The airy soufflé of AI we have experienced so far is giving way to the thick, creamy ice cream that is the conversational bot. We have a narrow window of opportunity here to find, not a cork, but a regulated outlet for the genie’s bottle. We need to act before fundamentalist capitalism dictates terms to society in yet another episode of the tail wagging the dog.
The loudest alarm bell chimes out of the words of Sam Altman, who said in a recent interview:
“My basic model of the next decade is that the marginal cost of intelligence and the marginal cost of energy will trend rapidly towards zero…. Now there might still be someone who is willing to spend a thousand times more, and they will have access to a tremendous amount of intelligence or energy… what happens then?”
What, indeed? The idea that we must be fine with the depreciation of intelligence is shocking, to say the least. The elephant is not finding substitutes for its trunk; the ants are not giving up teamwork; nor is the crocodile outsourcing the task of biting into prey. But to what extent are we willing to undervalue the very intelligence that defines us and even grants us supremacy?
A hundred years from now, somebody might look at this page of our history book and ask, “What were you thinking?” Or maybe they would say, “Ah! Here’s where we began accomplishing what really mattered.”
For our collective sake, let us hope for the latter.
Eshwar Sundaresan is an author, freelance journalist, counsellor, life skills trainer, and bestselling ghostwriter.
The Crux
- With ChatGPT, the field of AI has crossed a major milestone. Never before has an AI-driven piece of software captured the imagination of the world to this extent.
- ChatGPT gained a million users just five days after its launch in November 2022, beating the previous record of 2.5 months set by Instagram.
- The threats, challenges, and opportunities associated with this society-altering concept are many. The most obvious one is the threat to livelihood.
- AI is likely to do to white collar jobs what robotics did, and is continuing to do, to blue collar jobs.