What’s ChatGPT?
Short answer: It’s a ‘search-scan-and-paraphrase’ program.
The long answer?
Powered by a powerful natural language processing model (GPT-3), ChatGPT is artificial intelligence (AI) software created by the US-based company OpenAI. The bot can create text of many hues—articles, stories, screenplays, marketing communication, imitation text, and even computing code. It has a conversational (chat) character. GPT stands for Generative Pre-trained Transformer, a form of ‘large language model’.
What is Natural Language Processing (NLP)?
NLP is a branch of computer science that studies how computers can understand and generate text the way humans do.
What is a large language model?
Like the models used in science or maths, the language models in AI describe a system or process in an abstract yet orderly way. They help scientists predict upcoming events from past data, which is why they are also called predictive models. Such predictions can be projected onto bigger scenarios, as in weather forecasting.
How does it work with ChatGPT?
Well, large language models are trained on huge amounts of text; the more the data, the better the training. ChatGPT acts very fast thanks to the powerful algorithm that powers the program and the huge amounts of data OpenAI used in training it. GPT-3 has 175 billion parameters (coefficients, or weights, that are applied to calculations within the program and adjusted during training).
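To make the word ‘parameter’ concrete, here is a toy sketch. The model below is invented purely for illustration and is nothing like GPT, but it shows what a parameter is: a number inside the model that a calculation uses, and that training adjusts.

```python
# A "parameter" is a learned number applied in a calculation.
# Even a tiny linear model, y = w * x + b, has two parameters:
# the weight w and the bias b. GPT-3 has 175 billion of these.
def tiny_model(x, w=2.0, b=1.0):
    return w * x + b

print(tiny_model(3.0))  # 2.0 * 3.0 + 1.0 = 7.0
```

Training is the process of nudging such numbers until the model’s outputs match its training data; GPT-3 simply does this at the scale of 175 billion of them.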
So, thanks to the data and what it has learnt from studying that data, ChatGPT can predict an answer. It is, in effect, a more advanced version of auto-complete. Since the machine knows a lot about how people write and what people write about, it can mimic those patterns and produce results accordingly.
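The ‘advanced auto-complete’ idea can be sketched with a toy next-word predictor. This is not how ChatGPT actually works (GPT models use transformer neural networks, not word counts), and the corpus and function names below are invented for the example, but it illustrates the principle of predicting the next word from patterns seen in training text.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words follow it in the text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word][next_word] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept near the cat"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
print(predict_next(model, "on"))   # "the"
```

A real language model does the same kind of next-word prediction, but with billions of parameters and a context far longer than a single word.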
When you ask it to, say, “write an essay on climate change like W.H. Auden”, it interprets the query and predicts what should come next, word by word. It does not crawl the Web at the moment you ask; rather, drawing on its training, which includes publicly available writing by and about Auden and public debates on climate change, it produces text in a paraphrased manner.
How does it really learn?
Think of ChatGPT as a little girl trying to learn. You teach her the alphabet, how words are formed, the grammar, the expressions, and metaphors, and then make her read passages from books you want her to learn from. Slowly she learns the pattern, the style of writing of the people whose prose she reads. Next, she starts writing and her writing will obviously reflect the quality of the prose she’s trained on.
In AI, there used to be what is called supervised learning—a human oversaw what the machine learnt and guided its training. Later, AI improved to the point where programs started learning largely on their own—unsupervised—and the results were quite surprising. This is the case with ChatGPT, but there is a difference.
What is the difference?
It is not human. The girl will be able to produce something entirely new, thanks to her intelligence, shaped over millennia of human evolution, which artificial intelligence lacks. So, despite all the brouhaha around ChatGPT, it is basically a program that regurgitates the data it is able to search and fetch.
So there is nothing ‘original’ in its ‘creations’?
It depends on how you define originality and creativity. For instance, if you write an article for Frontline, in all likelihood you will search the Web, leaf through physical books, scan your own memories and experiences, and use all those inputs in your writing. Seen that way, nothing is entirely original, but there is an element of creativity in how the human mind processes those inputs. The machine, in the case of the likes of ChatGPT, may not manage that, owing to the myriad challenges it faces in accessing and processing data.
What challenges?
ChatGPT relies on publicly available information while producing results. It does not have access to information that is private: for instance, the chatbot may not have read a recent Salman Rushdie book while preparing an essay on Rushdie and post-colonialism, because that book is not freely available to the public. This also means the chatbot cannot produce a fully accurate representation of a subject. There are other limitations as well.
What are they?
Today, a large chunk of the publicly available information on the Web is unverified or poorly classified. This includes misinformation, hate speech, pseudo-facts, so-called post-truth content, and propaganda. As of now, verified information is mostly proprietary and hence commands a premium. If the chatbot is to be trained on such data, or to access it, it must respect intellectual property and copyright laws. That is going to make such services expensive, if they get access to such data at all. For now, companies developing services like ChatGPT rely on ‘public’, ‘free’ information, which is a risky bet.
And they do not ‘pay’ for the information their services are using?
Not yet. There is little clarity on the kind of data OpenAI, whose founders include maverick sci-tech entrepreneur Elon Musk, used to train its GPT models. That opacity means policymakers are likely to view such services with caution.
Is ChatGPT the only kid on the block?
ChatGPT has spawned a crop of rivals and offshoots. Services such as Jasper, Writesonic, and Auto Bot Builder are built on the same GPT models. Then there are the likes of Bard from Google; Gopher and Chinchilla from DeepMind; Claude from Anthropic; Ernie from Baidu; PanGu-Alpha from Huawei; OPT-IML and LLaMA from Meta; Megatron-Turing NLG from chipmaker NVIDIA; and Jurassic-1 from AI21 Labs.
Are all these as powerful as ChatGPT?
Some are bigger, to be frank. GPT-3’s 175 billion parameters (more than a hundred times GPT-2’s 1.5 billion) are a tad fewer than Jurassic-1’s 178 billion. Gopher has 280 billion parameters, and Megatron-Turing NLG claims 530 billion. MIT Technology Review reports that Google’s GLaM model uses 1.2 trillion parameters.
Is this all about the US?
Generative AI is making big waves in China as well. Huawei’s PanGu model is powered by 200 billion parameters, and Inspur’s Yuan 1.0 is a 245-billion-parameter model. Baidu is collaborating with the Peng Cheng Laboratory on a 280-billion-parameter model, PCL-BAIDU Wenxin, which is already used in many applications.
As though this were not enough, the Beijing Academy of AI recently said that it was working on a 1.75-trillion-parameter model called Wu Dao 2.0. These models are mainly meant for China’s internal market and the rest of the world may not hear about them. Still, their impact will be felt across the globe as Chinese gadgets and technology products are part and parcel of our digital culture now.
Will ChatGPT or large language models replace me as a writer?
Not as things stand now. For the moment, the results range from the useful to the hilarious to the stupid and the fantastical. But the technology is improving fast and may soon enter the realm of what experts call a general-purpose technology. When that happens, a bunch of applications may get replaced and a clutch of jobs may get rejigged. Still, it is a long way from replacing human intelligence and creativity. In sum, it is going to augment or enhance human work in multiple ways rather than replace humans.
What are the legal implications of this kind of mimicry or copy-work?
Policymakers are only just waking up to the issue. They remain largely clueless and are asking the industry to self-regulate. Anyone who follows tech industry news knows that this industry thrives on hiding information about the inner workings of its products. So this is only going to open a can of worms.
Fields such as intellectual property rights, data privacy regulations, AI ethics, human rights, gender and racial studies, sociology, and anthropology are already seeing heated debates on the impact of generative AI, and for sure these discussions will lead to a better scenario in which AI can function or try to function without bias and malice.
That’s optimistic!
History will judge this technology, or any piece of tech for that matter, not by what it can do but by what it helps people achieve. On that front, ChatGPT has a long way to go before it makes a meaningful impact on the way we live and work.
Disclaimer: No part of this explainer was curated by ChatGPT.