The AI threat

“The people who shut their eyes to reality simply invite their own destruction; anyone who insists on remaining in a state of innocence long after that innocence is dead turns himself into a monster.”
— James Baldwin

The sudden rise of artificial intelligence (AI), particularly in the form of large language models (LLMs), has prompted considerable debate about the benefits and potential harms these technologies may bring. However, I argue that the harms are often misunderstood, especially by the lay public.

We are already witnessing growing AI-related problems across various domains, especially where human intellect and self-expression are concerned. A research paper titled *Your Brain on ChatGPT*, posted by MIT Media Lab researchers on arXiv (which is hosted by Cornell University), concludes that the unregulated use of such tools could restrict the development of human intelligence. Most notably, it could reduce the exercise and cultivation of critical thinking skills among the broader population.

Large corporations and political elites stand to gain substantially from a populace that lacks literacy and critical awareness—one that can be easily fed narratives designed to advance the agendas of these powerful players. The easiest way to achieve such indoctrination is through biased programming of chatbots.

### The Hidden Costs of LLMs

LLMs, users are often told, come at a cost. But that cost extends far beyond cognitive impairment: it includes serious environmental risks, democratic backsliding and the exploitation of public resources.

Journalist Karen Hao, author of *Empire of AI*, highlights these concerns in her meticulously researched work. Hao explains how unchecked development of generative AI is depleting valuable natural resources like freshwater and arable land. She points out that some AI companies operate like techno-authoritarians, disregarding democratic principles by failing to consult local communities affected by the environmental damage caused by their data centers.

The Environmental and Energy Study Institute (EESI) estimates that a single data center can consume as much as five million gallons of water per day, roughly the daily usage of a town of 10,000 to 50,000 people.

Furthermore, the Citizen Action Coalition, an Indiana-based nonprofit, has revealed that AI companies often use shell companies or secret project codenames to build new data centers. This secrecy prevents the public from being informed about these projects until after local approvals have been secured.

The Hoosier Environmental Council notes that generative AI data centers are hyper-scale facilities demanding enormous quantities of water and energy—and these data centers are expanding rapidly. It follows that we may soon see even greater exploitation of public resources, increased human labor demands, and a sharp rise in carbon emissions.

These environmental concerns are explored in depth in Sourabh Mehta’s article, *How Much Energy Do LLMs Consume? Unveiling the Power Behind AI*, published by the Association of Data Scientists.

### Debunking Misconceptions: Industry Perspectives

Oren Etzioni, co-founder of Vercept, addressed some common myths about AI in an interview with Harvard Business School’s Institute for Business in Global Society. He suggested that fears about AI causing harm stem mostly from misinformation rather than tangible threats to humanity. His advice to users was to learn how to use AI more efficiently to avoid falling behind.

Such reassurances, however, carry little weight when they come from executives who profit immensely from the technology and have every incentive to portray it as a mere productivity tool.

Etzioni’s suggestion that critics simply cannot distinguish fiction from reality, or that they believe AI is on the path to sentience, misses the point. Today’s chatbots are not feared as sentient antagonists like AM in Harlan Ellison’s 1967 story *I Have No Mouth, and I Must Scream*; rather, they are critiqued for their intellectual unreliability and for their negative impact on users’ cognitive abilities, income inequality, privacy and data ownership.

### The Way Forward

So, what can be done to keep pace with the rapid advances in AI while preserving human autonomy?

There are two key strategies:

#### 1. Individual Resistance

Resistance is a simple yet personally beneficial approach. It means relying on one’s own intellect and reasoning rather than outsourcing critical thinking to chatbots. Avoid succumbing to the dopamine-driven distractions of constant social media stimulation. Instead, invest time and effort in reading the classics—works by Homer, Goethe, Lermontov, Thucydides, Milton, Stendhal, Cellini and others—to improve focus, literacy and cognitive skills.

Let their words transform you.

Remember, chatbots are fundamentally predictive algorithms whose abilities depend on the volume of data they consume. They do not possess original thought—a uniquely human trait. Even generative AI models, as International Business Machines Corporation (IBM) explains, create “original” content only by training on vast amounts of raw data. Notably, IBM itself markets AI as something to be integrated into business platforms, not as a sentient entity.
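The point that chatbots predict rather than think can be made concrete with a toy model. The sketch below is a hypothetical, deliberately minimal bigram model (nothing like a production LLM in scale or sophistication, but the same in spirit): it “writes” by counting which word followed which in its training text and emitting the most frequent continuation.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it in the training text."""
    follows = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, start, length=5):
    """Extend the prompt by repeatedly appending the most frequent follower."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # the model has never seen this word, so it is stuck
        # Pure frequency-based prediction: no reasoning, no original thought.
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

model = train_bigram("the cat sat on the mat and the cat ran")
print(generate(model, "the"))  # → the cat sat on the cat
```

The output is fluent-looking but meaningless, produced entirely by statistics over past text; modern LLMs replace word counts with billions of learned parameters, but the underlying operation remains next-token prediction.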

#### 2. Collective Action

Collective self-preservation requires greater resolve and resilience. However, it is a powerful means for the public to influence how AI technologies are adopted, including decisions about their scope and impact.

Through collective action, communities can advocate for stronger protections that safeguard civil liberties, human and labor rights, and intellectual freedom in an AI-driven world.

We stand on the cusp of a transformative new era.

Our goal must be to deepen the conversation around the implementation of universal human rights and safety measures in an increasingly AI-operated society.

By balancing individual responsibility with collective accountability, we can ensure that AI serves humanity’s best interests—not the other way around.
https://www.thenews.com.pk/tns/detail/1348325-the-ai-threat
