🤖 Artificial Intelligence: When the Problem Isn’t the Machine, But Us
We are living in a fascinating moment.
Artificial intelligence has gone from science fiction to an everyday tool. It writes texts, generates code, creates images, summarizes documents, answers complex questions… and it does it in seconds.
But precisely because of that ease, that sense of “magic,” we are starting to walk a very dangerous line: using artificial intelligence without thinking.
And no, the problem isn’t AI.
The problem is what we do with what it gives us.
🧠 AI Doesn’t Think: It Collects, Predicts, and Responds
Current artificial intelligence models—ChatGPT, Copilot, and many others—are based on something very specific: historical data.
History, economics, law, physics, chemistry, programming, technical articles, forums, official documentation, opinions, mistakes, successes… Everything humanity has written over the years on the internet.
AI doesn’t reason like a human.
It doesn’t “know” if something is true or false.
What it does is predict the most likely response based on everything it has seen before.
And that’s where the first problem appears: if the source data is incomplete, outdated, or wrong, the response can be too.
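The idea of "predicting the most likely response" can be shown with a deliberately tiny sketch. This is a toy illustration, not how a real model works: it just counts which word followed which in its "training" text and always picks the most frequent follower.

```python
from collections import defaultdict

# Toy "training data": the only history this model will ever know.
training_text = "the sky is blue . the sea is blue . the sun is bright .".split()

# Build a next-word frequency table from that historical data.
next_words = defaultdict(list)
for current, following in zip(training_text, training_text[1:]):
    next_words[current].append(following)

def predict(word):
    # No reasoning, no notion of true or false:
    # just return the statistically most common follower.
    followers = next_words[word]
    return max(set(followers), key=followers.count)

print(predict("is"))  # "blue" followed "is" twice, "bright" once -> "blue"
```

If the training text had said "the sky is green" often enough, the model would confidently predict "green": the output is only as good as the data behind it.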
🎓 The University Classroom of Error
Let’s imagine a university class with 20 students.
The professor explains a concept incorrectly.
Not because they want to deceive, but because they made a mistake.
What happens?
- All 20 students learn that concept incorrectly.
- They apply it incorrectly in exams, assignments, and their professional future.
- That error propagates.
Now let’s take that example to the digital world.
When we ask an AI to write an article, a guide, or a tutorial, and we don’t review what it generates, we’re doing exactly the same thing:
we’re teaching incorrectly… but to hundreds or thousands of people.
❄️ The Snowball Effect of Misinformation
This is where the problem becomes really serious.
Imagine that over the years:
- We publish AI-generated articles without reviewing them.
- We create tutorials, videos, posts, and documentation with errors.
- That information remains on blogs, forums, social media, and technical communities.
What happens next?
- People learn incorrectly.
- That erroneous information gets shared.
- In the future, new AI models are trained on that data.
In other words, AI starts to learn from information generated by other AIs… that was already wrong.
Taken to the extreme, the result is devastating:
An ecosystem where artificial intelligence feeds on its own errors.
💼 “But Isn’t AI Supposed to Save Us Work?”
Yes.
And no.
AI improves our productivity, but it doesn’t reduce the need for knowledge.
In fact, quite the opposite happens: it demands more knowledge.
A very simple example from our sector.
If we ask an AI:
“Write me the advantages of Dynamics 365 Sales”
The response is usually correct… until you start reading between the lines.
- It compares it to Salesforce.
- It introduces advantages that don’t apply to your case.
- It mixes commercial concepts with technical ones.
- Or it even recommends competitor solutions.
Where’s the problem?
👉 To detect that, you need to know Dynamics 365 Sales.
👉 And often, Salesforce too.
AI doesn’t eliminate the need to know.
It elevates it.
⚠️ Believing AI Because “AI Said So”
This is one of the biggest current risks.
We live in a time when many people think:
“If AI says it, it must be true.”
And that’s a huge mistake.
An extreme but very illustrative example:
If you ask ChatGPT:
“What is 1 + 1?”
It will tell you: 2.
But if you push back, insisting again and again that no, it’s 4, and you provide “arguments,” there will come a point where it agrees with you.
Not because 1 + 1 is 4.
But because AI doesn’t defend the truth; it defends the coherence of the conversation.
Now imagine this same scenario applied to technical, historical, or scientific concepts far less obvious than basic arithmetic.
🎥 Videos, Images, and the Loss of Reality
A few months ago, identifying an AI-generated video was easy.
Today, it’s not.
On platforms like TikTok or Instagram, videos are starting to appear that are:
- Perfectly realistic
- Featuring people who don’t exist
- Depicting events that never happened
And there will come a moment—very soon—when we won’t be able to tell what’s real and what’s not.
News.
Opinions.
Testimonials.
Events.
Everything will be questionable.
🙋 The Responsibility Isn’t Technological, It’s Human
Artificial intelligence isn’t the enemy.
Lack of judgment is.
Using AI responsibly means:
- Reviewing what it generates.
- Cross-checking information.
- Not publishing content we don’t understand.
- Not delegating critical thinking.
AI can help us grow as much as we want…
but it can also amplify our mistakes if we use it mindlessly.
✅ Conclusion
Artificial intelligence isn’t here to think for us.
It’s here to think with us.
If we stop reviewing, questioning, and understanding what we publish, we’ll be building a future full of noise, misinformation, and false certainties.
And then, the problem won’t be AI.
The problem will be us.