A little over three years ago, ChatGPT was released. And although AI can make working life noticeably easier, adoption in many SMEs and other organisations is still slow. Here are five reasons why, and what can be done about each.
It’s hardly surprising that Germany is not exactly a frontrunner when it comes to using AI tools like ChatGPT or Gemini — neither privately nor professionally. According to a Eurostat study, Germany ranks in the lower middle of the European field. There is no non-European comparison, but the results would likely be sobering as well.
Among AI fans and tech enthusiasts, the lack of an innovation culture is often blamed on the classic German mindset of “We’ve always done it this way.” But that explanation falls short. While uncertainty avoidance is generally stronger in Germany than in China, India, or the US, it is even higher in technology-friendly countries like Japan and South Korea. So mindset can only be one of many factors slowing down the introduction of AI into everyday work.
Problem 1: Too many tools, too many updates
ChatGPT, Gemini, Copilot, DeepSeek, or maybe Claude? And if so, which version, and for which task? Especially for people who don’t enjoy dealing with technology (and that’s most people), questions like these alone can prevent adoption. This is “choice overload”: too many options delay decision-making and can even halt it entirely (“analysis paralysis”). We run into this constantly in everyday life. What most people need are simple answers and a sense of certainty.
Solution: Since AI tools are evolving rapidly anyway, and today’s “worst” can become tomorrow’s “best” (and vice versa), companies should commit to providing employees with a single (paid) AI tool.
Which one? In many cases, the answer might surprisingly be Microsoft Copilot. It may not be the best model, but it can often be integrated easily in SMEs — see point 4.
Problem 2: Expectations that are too high
In presentations and webinars about AI, speakers love to showcase the benefits and demonstrate what’s possible. They pull rabbits out of hats, and the audience can hardly believe what they’re seeing. 😮
The idea is simple: make people excited about AI. And in principle, that’s a good thing. But the bigger the promises, the bigger the disappointment when reality doesn’t deliver — which leads to frustration.
I’ve experienced this myself many times, especially with image and video generation. In a presentation everything looks super easy, but back at your desk you end up desperately trying to generate an image of a hearing aid, a fuse box, or a tool wall.
Solution: Be more honest. Don’t invite speakers who promise the impossible. Instead, show what’s realistically achievable today — and where the current limitations still are.
Employees will probably appreciate this too. After all, it’s much better if AI speeds up and improves their work — rather than replacing it entirely.
Problem 3: Fear of legal violations
When it comes to law and data protection, it feels like there are two camps:
- The “Who cares” camp: We Germans always worry about data protection and liability — it only slows progress. We don’t really control our data anyway. I just throw everything into the tool.
- The “Better safe than sorry” camp: It’s best not to use AI tools at all: unclear data processing, data transfers to the US, built-in GDPR violations through personal data input, potential leaks, copyright issues… In short: hands off!
Of course: if you do nothing, you can’t do anything wrong. That’s true in every area. But avoiding AI is not a solution — and banning AI often creates shadow IT. Employees who want to use it will probably do so anyway.
Solution: Create a simple, practical AI policy that clearly states what employees are allowed to do and what they are not. Also: use paid business/enterprise versions of AI tools. In those versions, user inputs are not used for training, and the tools can be operated in a GDPR-compliant way. Personal data and business secrets should still not be entered, but realistically, nobody will ever know if someone does. Depending on the tool and plan, costs quickly reach around €30 per user per month.
Companies that trust their employees and have fewer data protection concerns may choose to use free versions, but it’s hard to recommend that publicly (and I won’t). 🤷‍♂️ The truth is: in many SMEs, free AI tools are widely used in practice, and the likelihood of getting caught and sued for a data protection violation is low.
Problem 4: No AI culture
For many people, the question “Can AI do this for me?” simply isn’t part of their thinking yet. They don’t even consider asking whether AI could take over tasks or support them.
As a result, work continues as usual — not because of hostility towards technology, but because AI is not cognitively “available” as an option. Presentations and training sessions aren’t wrong, but there’s always the risk that employees consume them passively rather than integrating them into everyday routines.
Solution: Every company has at least a few employees who are genuinely curious about AI, so strong internal use cases can usually be found quickly. These should be communicated (e.g., on the intranet) to create imitation effects, launch internal “challenges,” and spark coffee-break conversations about AI.
An AI idea competition or a mentoring programme with internal AI contacts can also help to give the topic more weight. Ultimately, the goal should be that employees talk about prompts at the coffee machine and ask each other: “How are you doing that?” That’s what I mean by an “AI culture.”
And there’s another dynamic: big players like Microsoft and Google have a strong commercial interest in selling their AI products — so more and more AI functions are being integrated directly into existing applications.
The logic is simple: if the user doesn’t come to AI, AI will come to the user. People no longer need to ask whether AI can summarise a long email — Outlook and Gmail offer it automatically (or even do it unprompted). Since Microsoft Office is used by most SMEs, this “automatic” integration brings huge advantages.
Problem 5: Deep AI integration is hard
AI use can be divided into three levels:
- Level 1: using general-purpose LLMs like ChatGPT, Gemini, and Copilot.
- Level 2: creating Custom GPTs, Gems, or agents for recurring tasks, so you don’t have to re-enter long prompts every time.
- Level 3: connecting AI with existing IT infrastructure. Example: a customer inquiry comes in via email, Outlook is linked to the CRM and the knowledge base, and an answer draft is generated immediately.
This is where it gets truly interesting: AI becomes a powerful efficiency engine and could, in some cases, replace jobs entirely. The problem: implementation is time-consuming and expensive.
Solution: If the internal IT department doesn’t have the resources (or if there is no internal IT department), companies will have to invest money and hire an automation agency or a digital transformation expert.
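To make the Level 3 idea more concrete, here is a minimal sketch of such a flow in Python. Everything in it is hypothetical: the CRM and knowledge-base data are stand-in dictionaries, and the keyword matching stands in for what a real system would do with CRM/Outlook APIs and an LLM call.

```python
# Hypothetical "Level 3" flow: an incoming customer email is matched against
# a CRM record and a knowledge base, and a reply draft is assembled.
# In a real setup, these stubs would be replaced by API calls (CRM, Outlook)
# and an LLM would phrase the final answer.

CRM = {"anna@example.com": {"name": "Anna", "tier": "premium"}}

KNOWLEDGE_BASE = {
    "warranty": "Our products carry a 24-month warranty.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def lookup_customer(sender: str) -> dict:
    """Stand-in for a CRM lookup (e.g., via a REST API)."""
    return CRM.get(sender, {"name": "Customer", "tier": "standard"})

def search_knowledge_base(inquiry: str) -> list[str]:
    """Naive keyword search; a real system would use embeddings or an LLM."""
    text = inquiry.lower()
    return [answer for topic, answer in KNOWLEDGE_BASE.items() if topic in text]

def draft_reply(sender: str, inquiry: str) -> str:
    """Assemble a reply draft from CRM data and matching knowledge-base facts."""
    customer = lookup_customer(sender)
    facts = search_knowledge_base(inquiry) or ["(no matching article found)"]
    body = "\n".join(f"- {fact}" for fact in facts)
    return f"Dear {customer['name']},\n\nRegarding your question:\n{body}\n\nBest regards"

draft = draft_reply("anna@example.com", "How long is the warranty?")
```

The point of the sketch is the wiring, not the components: once email, CRM, and knowledge base are connected, the draft appears without anyone opening a chat window, which is exactly why this level is both the most valuable and the most expensive to build.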
Conclusion: AI needs less hype — and more structure (and time)
In my view, slow AI adoption in SMEs has less to do with fear of technology and more with very practical obstacles: too many tools and updates overwhelm people, inflated expectations lead to frustration, legal uncertainty slows adoption, AI culture doesn’t emerge automatically — and deeper integration into processes and systems is often expensive and complex.
The good news: many of these problems can be solved if companies treat AI not as a one-off “project” but as an ongoing learning process. That requires clear decisions, simple guidelines, and good-practice examples.
But above all, employees need something that is often missing in AI strategies: time. AI can create quick wins — but productive use doesn’t come from a single training session. It comes from experimenting, making mistakes, iterating, and exchanging ideas with colleagues.
So if a company wants AI, it shouldn’t just provide tools — it should also create room for learning.
What ChatGPT says about this blog post
The blog post stands out thanks to its clear problem–solution structure and highly practical examples that realistically reflect typical hurdles in AI adoption in SMEs (tool overload, expectation management, legal/data protection concerns, culture, integration). Particularly strong: the balanced tone without hype and the actionable recommendations. Referencing Eurostat adds credibility. In places, the writing could be slightly tighter, and for some claims (e.g., Copilot/costs), additional sources would strengthen the argument.
Grade: B (2)
A strong, well-structured, practical article with clear recommendations. With a bit more conciseness and occasional supporting sources, it would be even more convincing.