Do you feel comforted every time you type “chat” into your browser tab, knowing that you are seconds away from solving the pesky new problem that has been haunting you for less than a minute? And does it not feel even better when ChatGPT greets you the same way it always does, with the same tone and the information organized exactly the way you like?
Is it not great that we no longer have to wade through a never-ending page of Google search results, with its jumble of different fonts, different tones, different authors?
Well is it?
While a single, unified voice makes ChatGPT feel familiar and comforting every time we ask a question, it has deeper implications for us. In shifting from searching the web ourselves to having information sorted and presented to us in an aesthetically pleasing package, we have slowed down our intellectual gears.
So, what are the cons of using ChatGPT with its singular voice? I asked “Chat” itself. Let us run through them and see if any of this applies to you.
Lack of diversity in thought: A single voice may suppress alternative viewpoints, leading to groupthink or missing out on innovative ideas.
Oversimplification: Complex topics might be flattened to fit a consistent tone, reducing nuance or important context.
Loss of authenticity: If “one voice” is overly polished or corporate, it can feel inauthentic, especially in settings where individual expression is valued.
Have you noticed that since you started relying almost exclusively on ChatGPT as a source of information, whenever you have learned about a topic or a concept, you have only gotten one perspective? You might argue: no, AI language models pull from various sources to provide an answer, which means alternate viewpoints are considered. To that I say, not necessarily. Although the content may contain different perspectives, ultimately, the way they are presented matters. ChatGPT’s singular tone cannot fully capture the nuance of complex issues and concepts, and even when it summarizes both sides, it comes across as neutral and academic.
You may argue that this is good, since we get a balanced view of issues. No, it is not, especially when looking at real-world issues and conflicts. Fundamentally, the internet is a platform for the free exchange of information, thoughts and opinions. When we go online, we unknowingly agree to this sentiment, and when we browse, we come across articles, blogs and papers by people from different countries and backgrounds.
The problem of oversimplification also arises when we use large language models (LLMs) like ChatGPT. Complex topics might be flattened to fit a consistent tone, reducing nuance or important context. When ChatGPT becomes our go-to source of information, we no longer hear the diverse voices in its carefully curated, uniform text. In fact, they are suppressed to the point that we cannot even infer them from the subtext, and we cannot trace them back because ChatGPT’s free models do not even credit sources.
So, what does this mean for us intellectually? We slowly move toward a future where having opinions on issues and conflicts in the world becomes rare, because we no longer use the internet the way it was meant to be used: to connect people’s ideas across seas. When we talk to people around us, who also most likely get their information from ChatGPT, we do not get a different perspective. Our intellectual gears slow down and rust because we are no longer engaging critically with the world around us. Instead, we are passive recipients of a singular, uniform narrative of the world, with the nuance of hundreds of voices flattened into a summarized, four-line, balanced paragraph.
Let us look at another intellectual slump that the continual use of ChatGPT puts us in. Scroll back up and look at the three lines I have selected from ChatGPT’s response: lack of diversity of thought, oversimplification, and loss of authenticity. I want you to look at them for a moment. At first glance, they seem like solid, separate limitations. I urge you to look again if you do not see an issue. Oversimplification and loss of authenticity both fall under the umbrella of the first idea: they are primary consequences of a lack of intellectual diversity. But ChatGPT does not capture that nuance, and we, as avid users of the tool, do not see it at first glance. Now, you would probably ask, “Why does this matter?”
As a generation, we used search engines for well over a decade before ChatGPT became popular two years ago. We were trained to skim through sources, capture nuance and think critically about the information we received. In just two years of using ChatGPT, we are already losing some of those skills. We have become so used to being spoon-fed single-tone narratives that instead of being active users and searchers, we act like passive listeners. We do not think twice about what ChatGPT tells us because we trust it: it is always consistent, clear and comfortable. It pains us to open Google and verify what ChatGPT is saying. It used to pain me to even attempt to open Perplexity, another AI tool that does not quite have ChatGPT’s interface and the comforting tone I got so used to.
This is serious. It means that we have started prioritizing a comfortable tone over the value of facts. When we choose not to fact-check on Google or another AI search engine, even though we know in the back of our minds that ChatGPT is often wrong, we make an active, irresponsible choice: refusing to seek the truth and accepting what may be flawed, just because we do not want to be uncomfortable.
Not only are we not engaging critically, we have also stopped caring about the integrity of information itself. And that brings us to the most critical problem: we become vulnerable to dangerous channels of misinformation. We start to prefer the speed of information over waiting for the truth. We do not stop to question. We no longer hold our own truths and opinions; rather, ChatGPT’s version becomes the truth if it is spread widely enough. If this goes on long enough, we stop thinking and just start listening and acting, which makes us and society incredibly vulnerable to scams and other forms of fraud and control. Taken to their endpoint, these consequences put democracy itself in danger, because as people we are no longer well-informed, nor do we care to be well-informed and form opinions on world issues, conflicts and politics. We stop being active members of society and start becoming passive listeners. I think we can all infer exactly the kind of danger that arises from this.
Use ChatGPT. It is a tool meant to help us. But do not let it become the entirety of your brain and thought process. Thinking for yourself is not additional effort, or something only so-called “smart” people do. It is a fundamental responsibility, especially for students striving to become active members of society who think critically about the world we live in. As part of an academic system, we more than anyone should appreciate the importance of the integrity of information. Shunning this responsibility, or being irresponsible with it, is simply a choice we cannot afford to make.
Naisha Rajani is a Staff Writer. Email them at feedback@thegazelle.org.