The AI Boom: Why Everyone Is Talking to Chatbots
One of the most popular forms of artificial intelligence among the general public today is ChatGPT. Its rapid rise in popularity is largely due to its ability to answer a wide range of questions and perform many tasks, from helping with homework to generating images, in just a few seconds. ChatGPT gained widespread attention in late 2022 and early 2023, with social media platforms playing a major role in showcasing its capabilities.

There are now several AI chatbots online that work in a similar way, but the two most widespread are ChatGPT and Google Gemini. Google Gemini is closely integrated with Google’s search ecosystem and is often promoted as particularly strong at handling real-time information and data-related tasks. Yet many users still prefer ChatGPT over alternatives such as Gemini, which raises the question of why. One reason is that ChatGPT presents information in a clearer, more conversational, and more personalised way, even if it occasionally makes factual errors. This preference suggests that people increasingly expect information to be instantly available and easy to digest, even though traditional online research already requires relatively little effort. Since 2023, ChatGPT has developed into a powerful multi-purpose tool that can assist with thinking and problem-solving. But does it really “think” for us?
When Progress Feels Like a Step Back
Recently, users have reported problems with ChatGPT that they weren’t experiencing before. Newer ChatGPT models have been released gradually since 2023, and changes in performance, speed, and response style have drawn mixed reactions. Some users feel that responses are slower or more verbose than before, while others argue that explanations have become overly simplified. The new model seems to favour quantity over quality: answers are becoming less useful and more redundant. Many responses also feel “dumbed down”, saying a lot while saying nothing, misreading prompts, avoiding concrete examples, and retreating into general knowledge so broad it becomes useless. “I feel that the new model is incredibly limited, slower, worse at analysis, gives half-hearted responses, and has removed the older, more reliable models completely,” says a Reddit user who claims to use ChatGPT daily for work and personal projects. A number of users have also reported dissatisfaction with image interpretation and analytical depth, claiming that earlier versions felt more precise.
These changes may have been unintentional, but some critics suspect they are linked to business models that encourage users to upgrade to paid subscriptions. If answers are not precise enough, users have to write more prompts and quickly run out of free questions, nudging them towards a paid ChatGPT subscription. While there is no clear evidence that performance is intentionally reduced, this concern reflects wider fears about how large technology companies monetise access to information and digital tools.

The Hidden Consequences of Over-Reliance on AI
Although many people recognise that relying too heavily on artificial intelligence can reduce independent thinking, AI tools remain widely used in education and professional life. The brain works much like a muscle: it needs regular stimulation to stay sharp, and excessive dependence on AI can weaken problem-solving, creativity, and critical thinking. Many people use AI for work and schoolwork, where it can genuinely save time on complex research or projects. Problems arise, however, when users copy AI-generated content directly without reflection or personal input. This becomes especially concerning when educational institutions don’t regulate the use of AI. If students lean on AI without understanding the material, the long-term consequences could be serious, especially in professions such as medicine, engineering, or architecture, where human judgment is essential.
Moreover, AI raises serious concerns about misinformation. Some users treat chatbots as a personal playground, uploading personal pictures in the hope of getting back silly, warped versions of their family, friends, and pets. At first glance this seems harmless: how could a picture of your dog in a funny outfit possibly do damage? Yet as AI-generated images, videos, and text become more realistic, it is increasingly difficult to distinguish real content from artificial content. This creates opportunities for scams, manipulation, and the spread of false information. Some users and developers are trying to address these issues by spreading awareness of false information and content on social media platforms, promoting digital literacy, responsible AI use, and tools that reduce environmental impact.
Another issue is the environmental impact of AI. The data centres that power AI systems consume large amounts of electricity and water for cooling. While exact figures vary, studies show that AI systems have a significant environmental footprint, especially when used excessively for non-essential tasks. For example, according to a recent study by The Washington Post (WaPo) and the University of California, generating a single 100-word email with OpenAI’s GPT-4 model consumes 519 millilitres of water and 0.14 kilowatt-hours (kWh) of electricity, roughly enough to power 14 LED light bulbs for an hour (www.businesseenrtyuk.com). Many users are unaware of these costs when generating images or running repeated prompts for entertainment.
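The light-bulb comparison above is simple arithmetic, and it checks out. A minimal sketch, assuming a typical LED bulb draws about 10 watts (a figure not stated in the cited study):

```python
# Back-of-envelope check of the reported per-email electricity footprint.
# Assumption (not from the cited study): one LED bulb draws roughly 10 W.
EMAIL_ENERGY_KWH = 0.14   # reported electricity per 100-word email
LED_BULB_WATTS = 10       # assumed power draw of a typical LED bulb
HOURS = 1

# kWh -> watt-hours, divided by one bulb's consumption over the same period
bulbs = EMAIL_ENERGY_KWH * 1000 / (LED_BULB_WATTS * HOURS)
print(f"Equivalent to {bulbs:.0f} LED bulbs running for {HOURS} hour")
```

At 10 W per bulb, 0.14 kWh works out to exactly 14 bulb-hours, matching the figure in the study.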
When AI Saves Lives Instead of Time
Artificial intelligence may seem like a recent invention that has crept into practically every aspect of our lives, but it has been used for decades in specialised fields. Computer-assisted and robotic technologies, for example, have supported medical procedures since the 1990s, improving precision and reducing human error. These systems are highly useful and deployed in ways that do little harm to the planet or to ourselves. Unlike generative AI tools used for content creation, they are carefully regulated and designed for specific, life-saving purposes.
AI Is Here to Stay. The Choice Is How We Use It
Today, AI continues to play a complex role in society. It is often misused in advertising, social media, and content generation, yet at the same time it contributes to medical research, disease detection, and scientific advancement. AI itself is neither entirely harmful nor entirely beneficial; its impact depends on how responsibly it is developed and used.
Written by Tia Marija Milak
