AI: cutting through the noise to focus on what actually matters


I’ve already shared my initial take on AI. My opinion hasn’t fundamentally changed, but I needed to break it down by the different hats I wear and the specific contexts I navigate. Here’s what I’ve observed.

Lately, I’ve finally had a bit more time to think about all the hype around AI and, more importantly, to project myself into where this is going and how I should organize around it. Between an uncertain economic and geopolitical context, and the endless stream of opinions from every possible profile (AI enthusiasts, AI doomers, AI skeptics, AI fanboys, AI devSecNumOpsUX++, you name it), it’s hard to see clearly.

Over the past few weeks, I’ve had a lot of conversations with very different people, each with a strongly held opinion. I also attended the Tech.Rocks Summit in Paris (December 1–2, 2025), where AI was everywhere.

Unlike with previous hype cycles, I’ll admit I struggled to build a clear point of view. And spending my time listening to exaggerated claims and absolute certainties is not my thing.

Over the years I’ve become fairly pragmatic, which means I need to remove this mental thorn so I can move forward. And because I wear multiple hats, AI raises very different questions depending on the role I’m in.


Filtering the noise to think properly

To avoid getting lost in the chaos, here are the filters I apply.

❌ Musk / Zuck / Altman & co: inspiring, but unusable

It’s not that everything they say is nonsense. Sometimes they share real vision, sometimes huge exaggerations, and sometimes carefully vague statements designed to influence the market.

The real problem is what people make their statements say. A quote gets pulled out of context, turned into prophecy, and three posts later it becomes “the future of the world explained by a TikTok clip”.

And we forget a simple detail: these people have massive business, political, financial, and media incentives. Every announcement serves an agenda. We’ve seen the exact same mechanics before with:

  • The Metaverse: sold as the next revolution, billions burned, and in the end a graveyard of dusty VR headsets

  • NFTs: pushed hard by the gaming industry because there was potentially a lot of money to be made, until it collapsed

  • The “miracle blockchain” supposed to fix everything: lots of noise, a few serious projects

It’s a familiar pattern. When a hype cycle explodes, everyone oversells, distorts, and weaponizes the narrative.

So it’s not that their statements are useless. They’re just unusable if you want to understand what AI means for a normal business, with real employees, real customers, a roadmap, a budget, and deadlines.

Their world isn’t mine.

❌ “AI Expert” LinkedIn profiles with absolute certainty

They’re often extremely confident, and very often they have something to sell. So I keep them out of my analysis.

❌ McKinsey / BCG & co reports

Sometimes interesting at a macro level, but too often high-level, biased, contradictory, and rarely calibrated for real operational life.

❌ Hardcore anti-AI takes

The ones who think using ChatGPT turns people into cognitive zombies. The productivity gains are real, visible, and measurable. Denying that is just posturing.

✅ What’s actually useful

My own observations, nuanced discussions, and researchers (when you can actually hear them through the noise).


Why structure all of this?

I’m 40, I run multiple companies, and I’ve worked in software for about fifteen years. I’ve been a developer, a manager, and a founder, and I’ve lived through multiple waves of technological change.

I shared some of these lessons in starting BearStudio: a decade of lessons and reflections.

With hindsight, one thing became obvious: AI doesn’t raise the same questions depending on the hat you wear.

To avoid this topic becoming an unstructured blob, I chose to segment it. Each role comes with different stakes, different risks, and different decisions.


My hats and the real problems behind each one

1) AI for Companies (leadership / organization level)

Just like the arrival of computers, the internet, or smartphones, AI is forcing a deep shift in how people work and how companies are organized.

The real question isn’t “which LLM should we choose?”. It’s: How do we lead change?

Sovereignty, costs, and skill loss are just subtopics.

The COVID period is the perfect example. Overnight, hundreds of companies had to go remote while being completely unprepared. Many rushed into Teams, Slack, or Zoom thinking “installing the tool equals solving the problem”.

Result: broken processes, lost managers, teams with no prioritization, and months of messy improvisation because they hadn’t anticipated the organizational side of change.

On the other hand, companies with basic digital culture, some documentation, and clear rituals transitioned in 48 hours without breaking the machine.

Same tools. Different outcome. Because tech was never the real issue. Change management was.

And with AI, it’s going to be the same story, just faster.


2) AI for Business (strategy)

You can’t predict the future, but you can:

  • define plausible scenarios
  • identify risks
  • prepare realistic action plans

An anecdote about misreading scenarios:
Back in 2023, we had just hired a sales rep to develop the Benelux region. Dylan, one of our engineers, told us he had a contact there and could make an intro.

He did. Everything went well, until we got a fairly wild reply.

The guy basically said: “Not interested. Agencies won’t have any value in a few months. The software world is going to be totally different.”

Looking back, I’m conflicted. He wasn’t entirely wrong: the software world will change. But in terms of timing, three years later, we’re still waiting for that “total shift”. For now we can feel and observe changes, but in software, change is kind of the baseline anyway.


3) AI for Sales

Just like smartphones reshaped sales, AI will change:

  • Sales cycles
  • Commercial methods
  • The relationship between sales, product, and tech

We already see the ugly side with AI spam and automated outreach. But beyond the cheap stuff, there’s a deeper shift: the quality of pre-sales deliverables is rising fast.

Recently, I had a prospect in the insurance industry who wanted a cost estimate to build a solution. Nothing special, except he had a functional prototype built on Lovable, with no designer, no developer, no product team. And honestly, it was clean.

When we presented the estimate, he had a reflex we don’t see often enough:
“OK, I’m going to try selling it to my customers first. If they sign, I’ll launch development.”

This is exactly the behavior AI should logically push forward, even if reality doesn’t always follow logic:

  • salespeople, PMs, and founders showing up with credible prototypes
  • shorter pre-sales cycles
  • less dependence on engineering teams to build a solid sales story, for better and for worse
  • higher baseline expectations for what an incoming request looks like

Sales teams will become much more autonomous. This will shift the dynamic between sales, product, and tech.


4) AI for Development

The developer job will evolve deeply; this isn’t just a new language to learn. We’re talking about:

  • orchestrating AI services
  • pipeline management
  • data quality
  • new specs and testing practices
  • new roles (AI integrator, prompt QA, etc.)

Paradoxically, even though developers are used to change, I think this is where we’ll see much more friction and resistance, consciously or not.

I explored some of these career dynamics in the reality of tech careers: balancing passion, competition, and opportunity.


5) AI for UX

Designers aren’t disappearing. The “Figma monkey” maybe. But the strategic designer (UX researcher, UX writer, etc.) becomes even more critical to frame, validate, control, and steer AI-driven experiences.


6) AI for Management

This is where success or failure happens: human support, role adaptation, and skill building.

If you’ve ever managed juniors, you know the situation. You explain something, they nod, and then you realize they didn’t understand. You re-explain. Still nothing. You try a third angle. It finally clicks, but at the cost of way too much mental energy.

That’s exactly where AI can be a huge relief for managers. Need three different rephrasings of the same instruction? A metaphor that finally unlocks understanding? A version adapted to a junior level?

AI can do that in seconds, without draining your patience or wasting your afternoon.

And I can already hear the classic criticism: “Yeah, but you’re asking a machine…”

Honestly? When I ask my colleague Roger for advice on how to handle someone who’s struggling, Roger can be wrong too. Roger has biases too. Roger doesn’t hold the absolute truth.

The difference is that AI can give you ten angles. You pick what fits your reality, and you preserve your mental bandwidth for what actually matters.

If you’ve ever dealt with HR management, you know how valuable that can be. Not to replace management, but to offload the repetitive, exhausting, emotionally expensive part of the job.

As I discussed in dreaming of being a CTO: is it really for you?, leadership roles come with hidden costs that few people talk about openly.


7) AI for Learning (education)

Talk to any teacher for five minutes and you’ll understand the scale of it.

Whether it’s middle school, high school, vocational programs, or higher education, you’ll hear the same thing: “We see GPT everywhere.”

Assignments that sound too perfect, homework far beyond the student’s actual level, flawless essays written by students who struggle to write three sentences in class, and teachers forced into detective mode to spot inconsistencies.

Underneath it all, there’s a constant fight:

  • Fighting cheating

  • Fighting the temptation of shortcuts

  • Fighting the loss of cognitive effort

  • Fighting the drop-off in long-form thinking

Teachers aren’t anti-AI. Many see real opportunities. But right now, they spend a lot of time dealing with the side effects: changing rules, adapting grading, redesigning exercises, reinforcing supervision.

AI is already in classrooms, sometimes for the best, often for chaos, and teachers are doing what they can to keep up.


8) AI for Parents

Parents are facing a new challenge. After screens, social media, and addictive algorithms, we now have to deal with AI that every industry is going to shove everywhere.

A ChatGPT inside a teddy bear with no guardrails is not a great combo for building a healthy sense of reality.

It was at Tech.Rocks Summit 2025 that this really hit me. Listening to a few talks, I realized how much AI’s new capabilities, and especially its potential misuses, can create real problems for a child’s development.

And yes, kids are basically tiny parasites. That’s my humor, different topic. Their brains are still under construction, their sense of reality is fragile, and their ability to distinguish intent, fiction, and neutrality is still immature.

We already struggled for years with:

  • screens
  • phones
  • social media
  • YouTube and TikTok’s addictive algorithms

Now we add conversational AIs. Persuasive, adaptive, confident, even when they’re wrong.

We’re only starting to understand the potential impacts:

  • confusion between real and generated
  • cognitive dependency
  • invisible biases
  • emotional attachment to a “personified” AI
  • rule circumvention through overly compliant assistants

And just like with screens back then, nobody has a manual yet. We’ll have to navigate by testing, observing, and adjusting, until we have enough distance to understand what’s truly dangerous, what’s beneficial, and what mostly depends on supervision.

Parents will have to learn fast, because kids won’t wait.


Conclusion

This is an introduction.

I’ll go deeper on each of these topics in dedicated posts, with concrete examples and real-world feedback.

One thing I’m convinced of: Segmenting AI into specific problems is the only way to stop reacting to hype and start acting intelligently.

In the next post, I’ll cover something many people treat as a technical problem, even though it isn’t. I’ll explain how I approach AI as a change management topic inside my own companies, what actually breaks on the ground, and how to avoid organizational chaos.

Rudy Baer


Founder and CTO of BearStudio,
Co-founder of Fork It! Community!

January 12, 2026

AI, Mindset