My 2 cents on AI? Overhyped, overrated

The overall narrative is threatening. However, the current state of AI is far from taking away most jobs. I believe this concern is more of an emotional reaction than an objective, logical observation, as humans will adapt regardless.
Introduction
The widespread fear since the release of ChatGPT (then powered by GPT-3.5), along with the ensuing hype, is that AI will replace us in every task and job.
I do believe AI is a useful tool when used correctly; there are numerous proven business use cases and real-life applications.
But is the fantasy of an omni-tool that can handle any request without human supervision or intervention even realistic?
AI as our go-to assistant
With the release of GPT-4o, we can clearly see what big AI companies are planning for the future. Their focus on reducing latency and cutting costs means we're heading towards a world where AI becomes our go-to assistant for specific, time-consuming tasks, like finding and summarizing information or translating text. There was even a rumor that OpenAI might release a search engine (they instead released GPT-4o).
This makes sense because tools like ChatGPT can synthesize information from the web quickly and mostly accurately. Instead of navigating through tons of websites, ChatGPT can do it for me.
Hallucinations: is ChatGPT on LSD?
Wow, this seems super good… until it starts to hallucinate. If you've used LLMs (large language models) like ChatGPT, you've surely encountered this phenomenon. Hallucinations occur when the model generates information that appears plausible and coherent but is actually incorrect. This happens because LLMs like ChatGPT don't truly understand what they're doing; they make a series of probabilistic word choices that sometimes (or often) lead to inaccurate results. This made the news a few weeks ago when Google started to roll out AI-generated responses in their search engine, leading to ridiculous results like “dogs can fly”.
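To make that “probabilistic word choices” idea concrete, here is a tiny, purely illustrative Rust sketch. The prompt, candidate tokens, and probabilities are all invented; this is not how any real model is implemented. The point is simply that the same context plus a different random draw gives a different answer, each one fluent, but not each one true.

```rust
// Toy illustration: an LLM does not "look up" facts, it samples the next
// token from a probability distribution conditioned on the prompt.
// The candidate tokens and probabilities below are completely made up.

/// Walk the cumulative distribution until it passes `r`, a draw in [0, 1).
fn sample_next<'a>(candidates: &[(&'a str, f64)], r: f64) -> &'a str {
    let mut cumulative = 0.0;
    for &(token, probability) in candidates {
        cumulative += probability;
        if r < cumulative {
            return token;
        }
    }
    candidates.last().expect("empty distribution").0
}

fn main() {
    // Imaginary continuations of the prompt "The capital of Australia is …"
    let next_token = [
        ("Canberra", 0.55),  // correct
        ("Sydney", 0.35),    // fluent, plausible… and wrong
        ("Melbourne", 0.10), // fluent, plausible… and wrong
    ];

    // Different random draws give different answers; none of them involve
    // the model "knowing" which one is true.
    for r in [0.10, 0.60, 0.95] {
        println!("draw {r:.2} -> {}", sample_next(&next_token, r));
    }
}
```

The model optimizes for plausible continuations, not for truth, which is exactly why fluent nonsense slips through.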
Hallucinations can’t be fully mitigated, and every measure to combat them adds extra cost in terms of both time and computational resources.
The conclusion is that you cannot simply rely on an LLM to fetch information; you need to double-check and verify everything you read, treating it as potentially erroneous. Sometimes, this means using an LLM can be more cumbersome than performing a classic Google search.
https://arxiv.org/abs/2401.11817
Case Study: Traveling in Southeast Asia
Let’s go back to what AI is useful for, and let me give you a personal example. I travel a lot in Asia, and sometimes Google Translate isn't enough for nuanced communication. Cultural differences are obviously huge, and they lead to completely different communication styles that travelers struggle to grasp.
For instance, I’m pretty familiar with Thai culture now, and I decided to run a quick test with the latest version of ChatGPT. I pretended to know nothing about Thai culture and asked for advice on communicating with locals. It gave me the following tips. And boy, are they accurate!
GPT-4o's response:
- Value of 'Jai Yen Yen' (Cool Heart): Thais appreciate calmness and patience.
- Saving Face: They place great importance on maintaining personal dignity and avoiding public embarrassment.
- Non-Confrontational Communication: Thais prefer indirect communication to avoid confrontation.
- Smile and Be Polite: A genuine smile can go a long way in diffusing tension and showing friendliness.
These tips were incredibly accurate; this is the kind of knowledge you would normally have to live in the country to pick up, or to dig out with an extensive Google search. Here, it took a matter of seconds. But once again, it was accurate THAT time; what if I get a wrong result next time? Double-check, all the time.
ChatGPT can’t write good code
Given the hallucination phenomenon and the fact that an LLM does not understand what it is doing, it’s logical that it can’t write good code. And while it can still be useful for pointing to a framework's documentation or providing high-level pseudo-code, writing a complete, working, bug-free solution with the current state of the models is impossible, or limited to very simple problems.
But if you know nothing about a language, it becomes super useful.
AI in Learning and Programming
Large Language Models (LLMs) can be incredibly useful as personal teachers and guides, especially when learning new programming languages or technical concepts.
Let's take the example of a developer who knows JavaScript and wants to learn Rust. One of the key concepts to grasp in Rust is memory management, which can be a challenging hurdle for beginners. Traditionally, learning anything online would involve watching video tutorials, reading documentation, and practicing coding exercises. With LLMs, there's a new approach to learning technical concepts: using them as learning assistants.
You can ask an LLM questions about your code, and it will provide explanations and guidance in real time. If the explanation is too complicated, you can ask it to rephrase and simplify; the LLM will adapt to your technical level. This interactive and adaptive approach can accelerate the learning process and make it more enjoyable.
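To make this concrete, here is the kind of tiny snippet our hypothetical JavaScript developer might paste into ChatGPT and ask why the commented-out line refuses to compile. The example is mine, just a standard illustration of Rust's ownership and borrowing rules, not the output of any particular model:

```rust
fn main() {
    let original = String::from("hello from Rust");

    // Ownership of the String moves into `moved`; `original` is no longer usable.
    let moved = original;
    // println!("{original}"); // compile error: value used here after move

    // Borrowing with `&` lets a function read the data while the caller
    // keeps ownership, something a JavaScript developer never has to think about.
    let length = word_count(&moved);
    println!("{moved} has {length} word(s)");
}

/// Takes a shared reference, so the caller keeps ownership of the String.
fn word_count(text: &str) -> usize {
    text.split_whitespace().count()
}
```

An LLM is genuinely good at this kind of question: it can explain the move, suggest borrowing with `&`, and rephrase the explanation until it clicks.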
The best part is that LLMs can provide this kind of guidance and support for almost any technical field, from programming languages to scientific concepts and beyond. They have the potential to revolutionize the way we learn and master new skills.
Impact on Jobs
Remember when everyone said AI would replace artists and programmers? That hasn't happened, and it might not happen anytime soon, if ever. Instead, AI has positively impacted these jobs, helping people work more efficiently. For example, artists use AI to enhance their creativity, and programmers use it to debug and optimize their code faster.
Case Study: AI in Graphic Design
Many artists complain about AI-generated artwork, while others are embracing the new trend. It will obviously change the art industry; at BearStudio, we decided that leveraging the technology could be a clear benefit.
We had to create artwork for our open-source starter: a bear in a spacesuit with a specific reflection in the helmet. As it was for a community product, we decided to give AI a shot by generating two separate assets, the spacesuit and the helmet’s reflection, and then merging them manually in Photoshop. It was pretty easy to get a nice-looking image on a very limited budget!
You can find more about our UI starter pack here: https://www.bearstudio.fr/blog/actualites-web-numerique/start-ui
Limitations of AI in Job Replacement
AI won't replace engineers or other specialized roles, just like smartphones didn't replace computers.
Let me draw the parallel:
The rise of smartphones has undoubtedly changed the way we live and work. They replaced the family computer in the living room, but they haven't replaced computers entirely. Instead, each device serves a specific purpose, with smartphones excelling at quick tasks and computers handling more complex and detailed work. For example, writing an article like the one you are reading now is more practical on a computer.
https://gs.statcounter.com/platform-market-share/desktop-mobile/worldwide/#monthly-200901-202404
Similarly, AI tools like LLMs are augmenting human capabilities, not replacing them. They excel at specific tasks, but human expertise, creativity, and critical thinking remain out of AI's reach. By recognizing the strengths and limitations of both AI and human intelligence, we can harness the power of technology to expand the scope of what experts can achieve, without replacing the unique value of human intelligence.
No more support, only chatbots
Now that we have AI, there's a popular idea: train a customized model on your company documentation and case studies, and replace all your support agents with AI!
This suggestion comes up too often. Not only would you lose the human touch that connects you with your customers, but you'd also risk exposing them to mistakes, errors, and misleading information due to hallucinations, as mentioned earlier. Do you really want to be liable for what your chatbot says? What if it promises a discount, makes a deal, or leaks sensitive information? Many of our customers simply don't want to take that risk.
Another scenario is abuse. A malicious user could find a way to jailbreak the AI (escaping its safety limitations) and then use it to hijack your computing power for their own advantage, leaving you with a huge AWS bill. The issue with automation is always the same: without supervision, things can go wrong very quickly.
ChatGPT in Gmail: never write a (good) email again?
This was one of the first claimed use cases of ChatGPT back in 2023 when it was released. Someone developed a plugin to handle your Gmail responses. While it might work in some ultra-niche scenarios, how could an AI truly grasp the nuances of a relationship between two individuals that are evident in any exchange?
LLMs also have “signature words” they tend to overuse, like "foster," "delve," and "deep dive"; ChatGPT-3.5 was especially guilty of this. Imagine if your business partner or friend consistently wrote to you with these phrases: it would be pretty obvious they were using ChatGPT. Personally, I wouldn't appreciate it.
LLMs can be useful for drafting emails, reports, and other business documents. They can help you get your point across clearly and efficiently, saving time and making sure you've covered everything. But I can't think of them as the ultimate tool for writing.
Future Directions for AI?
OpenAI's demos of instant translation show their commitment to making communication across languages and cultures easier and more accurate. This could be a game-changer for global business and travel, helping people connect and understand each other better.
All LLMs are getting better at understanding and writing code, and tools like GitHub Copilot show AI's ability to help engineers with their tasks, not to replace them.
To sum up, AI is transforming our world in interesting ways. It's helping us find information faster, learn new skills, and communicate better across cultures. However, it's important to remember its limitations and use it wisely. AI is a powerful tool, but it's not a replacement for human creativity and expertise. By understanding how to leverage AI's strengths and acknowledging its limits, we can make the most of this technology.
Conclusion
I don’t buy into the overrated and hyped narrative that AI will render humans useless for any task or work. Similar fears arose during the 18th century with the invention of machines, yet jobs evolved, and human tasks adapted without rendering us obsolete.
The public loves to have something to be afraid of; it's human nature to focus on potential dangers. But this fear is overrated.
Perhaps if a general AI, capable of reasoning and creativity, were to be developed, my perspective might shift. But how close are we to achieving AGI?
For now, let's keep calm and continue building.

Rudy Baer
September 8, 2025