At the start of the nineteenth century, Luddites in Nottinghamshire started what became known as the Luddite Riots. These rioters were protesting the introduction of power looms that were replacing handlooms. Their main concerns were wage reduction, the devaluation of their trade, and the loss of autonomy brought on by new machinery. They quickly became known for breaking and destroying the very machines that displaced them. These days, you might find a Neo-Luddite in the wild hammering on their computer or smartwatch as a form of protest against new technology.
Then come AI and, more specifically, LLMs. There have been widespread concerns about people being replaced by AI. Computer scientists are paranoid about software engineering jobs disappearing. Data science people are scared, too. Philosophers – to whom the job market was already hostile – boast on one hand that there will always be metaphilosophy to do about whatever AI might be capable of, while on the other they are silently terrified of having to take a radical stance on everything just to be taken seriously. How should we approach this state of affairs?
Should we be good old Luddites and go around destroying data centers or poisoning our images online so that the models go rogue? Or is there a less radical way of approaching the seemingly inevitable permeation of LLMs into our lives? I argue that since LLMs have the power to shape and alter people’s perception of human essence, we should look at LLMs (and AI development more broadly) with hostility.
First, it is imperative to explain why the Luddite way is not the right way forward. Technology makes our lives easier, but one of the best arguments for it is simple: technology enhances accessibility. Health, education, information – the means of the few have been distributed, though with caveats, to a larger and larger population. Why, then, is technology scary?
Here, I stipulate a psychological phenomenon associated with any new technology: humans want some part of their work replaced, but not their essence. The assumption here is that this fear is a good thing. Such a value judgement is justified because, as moral agents, many of our moral assumptions and ideas of virtue depend on our assessment of our essence (this assumption could be fleshed out further in a separate article, especially with regard to views like Buddhism, which proposes a no-self/no-essence idea, or other non-anthropocentric views).
For now, the fundamental dread – the tech dread – is the dread that, in some way, a piece of technology will make being human obsolete. Calculators were feared because people thought they would replace people’s essence (they didn’t). Now, what might this human essence include? I believe any account of human essence must include the following: fulfillment through recognition, creativity, originality, and community.
This stipulated psychological urge can explain why people do much of what they do: people join groups (for community), write and create (for creativity and originality), and fall in love, marry, and have children (for recognition or community). These factors, though not exhaustive, seem necessary to plausibly explain our essence. LLMs today try to replicate intelligence, at best mimicking basic characteristics of intelligence. However, the cultural narrative of what LLMs do seriously threatens the human essence.
A fundamental question we might ask in this context is: what do originality and creativity mean if whatever you might create tomorrow can be replicated today by an LLM? An LLM mimics creativity, claims originality, and, at a general level, asserts intelligence. This, I argue, brings a huge cultural shift. My friends (especially those studying Computer Science) have already started arguing that LLMs “mean” things. We, as a community, have started asking no questions before completely trusting an LLM wrapper, which could hallucinate at any moment. The overwhelming data it has been trained on and the precision with which an LLM can mimic us have perhaps made us believe it is one of us. We have already, in the past few years, become so used to cognitive offloading that we have almost been domesticated by a token-prediction machine, one that has stripped us of our agency. The problem with this cultural shift is not that it replaces work; it is that it replaces the idea of what we are. By extension, it replaces what our essence is. It is no exaggeration to say that we have been seriously underestimating human capacity, and what humans are, ever since the advent of the default-to-LLM culture.
So far, I have presented my views on why we should be concerned about a culture of defaulting to LLMs in matters related to our understanding of human essence. I now propose hostility as a response to that concern. Hostility, in this sense, is not outright rejection of AI models. Rather, it is a deeply active and defensive skepticism toward letting AI permeate our lives. Universities need to have more discussions on what LLMs actually are rather than blindly trusting what the tech industry tells us they are. The media and policymakers should think critically about LLMs and the delusion they foster by infiltrating our sense of essence. It seems counterintuitive, but we need to learn not to trust what feels justified. We ought to be radically skeptical of the idea that our essence can be mimicked, and even if it is mimicked, we need to understand that it is just that – a mimicry, and not the real thing.
Manoj Dhakal is a Columnist. Email them at feedback@thegazelle.org.