The world of technology seems to have been turned upside down since OpenAI showcased ChatGPT and its ability to generate text the way humans would. ChatGPT was all the rage at the World Economic Forum in January, and it is reshaping technology markets, with Microsoft posing a serious challenge to Google’s monopoly on search.
The all-new Bing Search, integrated with ChatGPT, is helping Microsoft extend its dominance as a Big Tech monopoly. But the big question everyone is asking is: what are the implications of generative AI for our information society, for jobs, for education, and for every sector where ChatGPT could take over? Anyone who has interacted with ChatGPT can tell you that it does not always give accurate responses. In fact, some people argue that ChatGPT is just a dumb robot and must be treated as a fun toy rather than a tool. Other generative AIs have actually worked better as tools: another Microsoft product, Github Copilot, for instance, acts as an assistant for developers by generating code templates learnt from millions of code repositories.
What ChatGPT shows us is a future where generative AIs can help us and act as assistants with human tasks. Humans are understandably concerned about what such a future would look like.
How ChatGPT works
Going back to the fundamental question of what ChatGPT and generative AIs will do to our information society: they will be generating reality, but a ‘hyper reality’ that is not real. Users who interact with ChatGPT think it acts like a human. On the contrary, it is programmed to respond as a human would after being trained on large datasets taken from all corners of the Web and social media.
This also means that ChatGPT is just as biased as any human. If one were to train ChatGPT on datasets of conspiracy theories, it would respond just like a QAnon or alt-right adherent. ChatGPT is not responding to you with facts from an encyclopaedia; it is generating these facts and becoming a new encyclopaedia.
These problems are not limited to ChatGPT but extend to the entire class of generative AI. Take the case of generative AI that can produce what are known as ‘deepfakes’: videos and audio that generate a new perceived reality of events that never happened. Dubbed synthetic reality, this form of generative AI is now a tool for the cinema and advertising industries, where actors are being replaced by bots that act like them. This act of replacing a human with a robot is dehumanising and has led to protests from movie industry artists and dubbing artists who are being replaced by generative AI.
To put it simply, ChatGPT and other generative AIs are simulacra: simulated entities generated by large language models (LLMs) acting as simulators. Jean Baudrillard’s work Simulacra and Simulation (1981) looks into the idea of simulations and hyper reality and questions the idea of what is real.
Baudrillard’s work has been the subject of significant debate among post-modernists, Marxists, and media researchers. Baudrillard defined hyper reality as “the generation by models of a real without origin”. While Baudrillard did not have an AI like ChatGPT in mind when describing reality, ChatGPT and other generative AIs can be explained through his work. Generative AIs are accelerating the production of media, real or otherwise, at a scale no human or human-powered institution can match. With the release of ChatGPT APIs, every industry is looking at how it can be used in its products. This hyper-real production of information will disrupt industries much as the platform economy has disrupted traditional economic sectors. Generative AIs are here to disrupt all forms of media industry and, in turn, our information society.
The implications of a hyper reality world are large and will be witnessed first in the media industry. Just as social media transformed traditional journalism, expect a generative AI-based media industry to replace all forms of traditional media where human intelligence still has utility value. Teachers are concerned about how ChatGPT will replace traditional forms of education and are already witnessing students use it to generate their assignments and academic work.
It is not just reality that is affected by ChatGPT; science fiction and other publishing industries are too. The famous science fiction magazine Clarkesworld has seen a sudden rise in story submissions and is figuring out how to deal with people submitting generative science fiction stories that have not been produced by a human. Baudrillard argued that science fiction was hardly fictional; it was merely waiting “in its crude state” to emerge.
Market’s interest in AI
With the success of ChatGPT in showcasing real-world use cases for generative AI in search, the market is now driving investments in this direction.
ChatGPT has also set the direction for every tech company to build similar models. But with the low pricing OpenAI is offering for its APIs, Microsoft is set to monopolise this space through an early start and a scale that small and medium companies do not have.
This race to build other generative AIs also means that more and more organisations are looking to collect data to train AIs. While ChatGPT gets all the praise for being a fairly successful LLM, we ought to look at how it was built: by collecting personal information on a large scale from across the Web. Microsoft has been criticised for using copyrighted code on Github to build Github Copilot, its AI coding assistant. In fact, it is facing a lawsuit in this connection in the US, where Github, OpenAI, and Microsoft are asking the judge to dismiss the case.
ChatGPT, and any other generative AI, is the result of personal and public data gathered by large tech monopolies over decades. This gives them an unfair advantage over smaller companies attempting to achieve something similar. This data will also be used in the dehumanisation process in various sectors as generative AIs become more efficient at tasks where humans are no longer needed except to monitor the AI.
Many commentators are recommending that India adapt to changes in technology markets and invest in generative AIs. The Ministry of Electronics and Information Technology is working on a WhatsApp bot powered by ChatGPT to solve problems caused by a lack of information on governance. While this may seem like a giant leap by the government towards looking modern, it is in the government’s interest to control what ChatGPT-like tools say about the Government of India. These tools, like social media, can amplify biases against these institutions, and censorship of these tools will follow as soon as a controversy hits the government.
This race to find datasets to train AIs also spells the end of any privacy that people attempt to enjoy in the real world. The upcoming draft data protection law in India has clear exemptions allowing search engines, and any other organisation, to collect personal data that is publicly available. Google and Microsoft thus have complete impunity to collect any personal information of Indians as long as it is publicly available, including social media posts that often contain sensitive personal information.
Big Tech enjoys the advantage of data, distribution, and capital, with which it can automate our society, and this process is going to be coercive, pushed by the power of capital markets. Big Tech’s power over the media industry has long been criticised, with Google and Facebook enjoying the power to manipulate and generate simulated realities that have even affected our democracies. The algorithmic biases on tech platforms that have fuelled genocides are getting a new upgrade, while not enough is being invested in regulating these systems.
For years, researchers in the space of artificial intelligence have been demanding ethical practices and norms for how these systems are produced. But these demands have long been ignored by Big Tech companies, who initially funded these researchers to keep a watchful eye on them and to push the practices they favour into the regulatory setup. The advocates of ethical AI have pointed to numerous dangers of algorithms and continue to caution us about ChatGPT and similar AIs. Yet the regulatory landscape has not been nudged enough in the right direction.
It is not my argument that artificial intelligence of any kind has no place in human evolution or that such tools serve no social function in human lives. What is worrisome is how this process of producing artificial intelligence is going to shape our society and cause harm, both unforeseen and foreseen.
This has been the biggest failure of society and regulators in reining in Big Tech and its impact on our democracies and electoral systems. Now that these perils are known and understood, we should strive to keep the power of AI and Big Tech in check before it causes such damage again.
Srinivas Kodali is a researcher and hacktivist with the Free Software Movement of India.
The Crux
- Generative AIs can help us and act as assistants with human tasks.
- ChatGPT and other generative AIs will be generating a ‘hyper reality’ that is not real.
- The race to find datasets to train AIs spells the end of any privacy in the real world.