AI projects have grown rapidly in recent years, often driven by high expectations and strong pressure to “do something with AI”. At the same time, many pilots struggle to prove lasting business value.
We spoke with Frank Sinde, Field CTO AI at Damovo, who leads AI projects for clients. He explains why he expects parts of the AI market to cool down, where value will remain, and how companies should prepare now.
Frank, you are predicting an AI bubble burst in 2026. What exactly do you expect?
Over the last few years, expectations around AI have been inflated. Many projects were started to “do something with AI” without a clear business case. At some point investors and boards will ask where the recurring value is.
A similar concern was recently voiced by the founder and CEO of Anthropic, Dario Amodei. He worries that some AI firms may be playing fast and loose with the staggering sums they’re spending on data centres and compute power. “I think there are some players who are ‘YOLO’-ing, who pull the risk dial too far,” he said at The New York Times’ Dealbook Summit on December 3rd, 2025.
So, what happens then?
You will then see funding dry up for providers that live on pilots and slideware. Some startups will disappear because proof of concept never turned into stable revenue. The companies that remain will be those that can show measurable impact on productivity, quality, or revenue. In other words, ROI instead of hype becomes the new normal.
What does this mean for Damovo customers specifically?
For our B2B customers it means we move from experimenting to running AI in production, across all our domains: Unified Communications, Customer Experience and Contact Centre, Enterprise Networks, and Cybersecurity.
We ask a simple question for every AI feature: can it prove its value in a short, clearly defined period, for example within 90 days in a proof of value? If the answer is no, we stop or redesign. Our role is to help customers focus on a few high-value use cases instead of a long list of pilots that never scale.
What does such a proof of value look like in practice?
In practice, a proof of value is quite hands-on. We start by picking a concrete use case with the customer and agreeing on how we will measure success. That means defining the process, the KPIs, and the data we need, for example which calls or tickets are in scope and what the “before” situation looks like.
Then we integrate the AI into the existing tools, such as the contact centre or enterprise networking platform. The goal is not to build something on the side, but to improve the way people already work.
Finally, we run the scenario for a limited period and compare before and after. If the data shows a clear benefit, we have a case for scaling. If not, we adjust or stop. At Damovo we use an AI Proof of Value (PoV) framework with templates, KPI dashboards, and integration playbooks for multi-vendor environments. Customers do not have to start from scratch every time. They can see the impact quickly instead of spending months on setup.
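The before-and-after comparison at the heart of such a proof of value can be sketched in a few lines. This is an illustrative Python sketch, not Damovo's actual PoV framework; the KPI names, sample values, and the 10% decision threshold are all hypothetical.

```python
# Illustrative proof-of-value comparison: measure the same KPIs before
# and after the AI rollout and derive a scale/stop recommendation.
# KPI names, values, and the 10% threshold are hypothetical examples.

def pov_result(before: dict, after: dict, min_improvement: float = 0.10) -> dict:
    """Return the relative change per KPI and a recommendation.

    For these KPIs, lower is better (handling time, queue time),
    so a negative relative change counts as an improvement.
    """
    changes = {
        kpi: (after[kpi] - before[kpi]) / before[kpi]
        for kpi in before
    }
    # Recommend scaling only if every KPI improved by at least the threshold.
    scale = all(change <= -min_improvement for change in changes.values())
    return {"changes": changes, "recommendation": "scale" if scale else "adjust or stop"}

# Hypothetical 90-day pilot data: average seconds per contact.
before = {"avg_handling_time_s": 420, "avg_queue_time_s": 90}
after = {"avg_handling_time_s": 330, "avg_queue_time_s": 60}

result = pov_result(before, after)
```

In this hypothetical example both KPIs improved by well over 10%, so the result would support a case for scaling; real frameworks would of course use more KPIs and statistical checks.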
Which areas already deliver reliable results?
The strongest areas right now are customer experience, technical service, enterprise networks, and cybersecurity. In these domains we see repeatable outcomes, not just isolated pilots.
Can you give a concrete example?
Take customer experience. In a contact centre handling enquiries, we deploy bots designed around a clear intent catalogue, such as balance checks, payment dates, and policy changes. The bot resolves simple requests end-to-end on first contact, reducing queue times. For more complex scenarios, the bot intelligently offers a callback or escalates to a live agent with full context. When the interaction reaches an agent, they receive real-time transcription, an automated summary of the conversation, and suggested responses or next-best actions. This allows the agent to resolve the enquiry faster, improve customer satisfaction, and consistently meet key service KPIs.
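The routing logic described above can be illustrated with a minimal sketch. The intent catalogue and the keyword matching below are hypothetical stand-ins for a real NLU model; the point is only the decision structure: resolve catalogued self-service intents end-to-end, escalate everything else to an agent with context.

```python
# Minimal sketch of intent-based routing in a contact-centre bot.
# The catalogue entries and keyword matching are hypothetical; a real
# deployment would use a trained intent classifier, not substrings.

INTENT_CATALOGUE = {
    "balance_check": {"keywords": ["balance", "how much"], "self_service": True},
    "payment_date": {"keywords": ["payment date", "when is my payment"], "self_service": True},
    "policy_change": {"keywords": ["change my policy", "update my policy"], "self_service": True},
}

def route(utterance: str) -> str:
    """Resolve simple catalogued intents end-to-end; escalate the rest."""
    text = utterance.lower()
    for intent, spec in INTENT_CATALOGUE.items():
        if any(keyword in text for keyword in spec["keywords"]):
            return f"bot_resolves:{intent}" if spec["self_service"] else f"agent:{intent}"
    # No catalogued intent matched: offer a callback or hand over to a
    # live agent, passing the conversation context along.
    return "agent:escalate_with_context"
```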
And beyond customer service?
Technical service is another strong area. Many service desks handle a large volume of recurring tickets while facing a shortage of skilled staff. AI can triage and resolve much of that recurring work automatically, so the specialists who remain can focus on the complex cases.
In enterprise networks the focus is really on minimising downtime and improving reliability. AI looks at telemetry data, notices unusual patterns earlier, and can trigger changes before users even realise something is wrong. That does not prevent every incident of course, but it can reduce unplanned downtime and reduce the number of tickets that end up with operations. Additionally, built-in AI agents automate repetitive workflows, freeing operations staff to concentrate on higher-value tasks.
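The core idea of noticing unusual patterns in telemetry before users do can be sketched very simply. This is a hypothetical illustration using a rolling z-score as a stand-in for a real anomaly-detection model; the latency values and the threshold are invented.

```python
# Hypothetical sketch of AI-assisted network monitoring: flag telemetry
# samples that deviate strongly from the recent baseline, so a ticket or
# remediation can be triggered before users notice. A z-score stands in
# for a real model here.

from statistics import mean, stdev

def is_anomalous(history: list[float], sample: float, z_threshold: float = 3.0) -> bool:
    """Flag a sample more than z_threshold standard deviations off baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return sample != mu
    return abs(sample - mu) / sigma > z_threshold

# Link-latency telemetry in milliseconds (hypothetical values).
baseline = [20.0, 21.0, 19.5, 20.5, 20.0, 19.0, 21.5, 20.2]

is_anomalous(baseline, 20.8)   # normal fluctuation
is_anomalous(baseline, 45.0)   # a spike worth a proactive ticket
```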
What about security? Everyone’s talking about AI and cyber threats.
Cybersecurity is shifting from static detection to adaptive, AI-driven defence. Instead of relying only on signatures or predefined rules, we use context-aware and threat-centric analysis to identify risks earlier — for example when vulnerabilities become exploitable, or when attacker behaviour indicates active reconnaissance or exploitation attempts.
Modern platforms combine automated intelligence enrichment, behaviour-based detection, and AI-orchestrated playbooks. This means potential threats such as phishing, spoofing, misconfigurations, or vulnerable services are validated in context, prioritised automatically, and remediated faster.
The result: fewer successful attacks, shorter response times, and a security posture that continuously adapts to the real threat landscape.
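The context-aware prioritisation described here can be sketched as a simple scoring rule. This is a hypothetical illustration, not any vendor's actual engine; the weights and thresholds are invented, and real platforms use far richer signals.

```python
# Hypothetical sketch of context-aware alert prioritisation: the same
# finding is scored differently depending on exposure and observed
# attacker behaviour, so remediation effort goes where risk is real.
# Weights and thresholds are invented for illustration.

def priority(finding: dict) -> str:
    score = finding["base_severity"]          # e.g. a CVSS-like 0-10 rating
    if finding.get("internet_exposed"):
        score += 2                            # reachable by attackers
    if finding.get("exploit_available"):
        score += 2                            # vulnerability is exploitable
    if finding.get("active_recon_observed"):
        score += 3                            # behaviour suggests an attack underway
    return "critical" if score >= 10 else "high" if score >= 7 else "routine"

# The same misconfiguration, with and without threat context.
internal = {"base_severity": 5}
exposed = {"base_severity": 5, "internet_exposed": True, "active_recon_observed": True}
```

A static, signature-only view would rank both findings identically; adding context is what lets the exposed one jump the queue.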
Those are all external-facing use cases. What about internal productivity or employee experience?
Internal productivity is where AI quietly reshapes everyday work. In modern UC platforms this extends beyond transcription and summaries to real-time meeting orchestration, intelligent actions, and instant retrieval of past conversations. Employees experience this not as “new technology”, but simply as a smoother communication workflow.
The real challenge for companies is controlled activation. UC systems handle the most sensitive communication data, so governance must define which AI features can access which conversations, how vendor AI assistants behave, and how usage remains consistent across Teams, Webex, Zoom, or hybrid environments. Turning everything on at once is rarely an option.
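Controlled activation of this kind can be pictured as a deny-by-default policy table. The feature names and data classifications below are hypothetical; the sketch only illustrates the governance principle of explicitly permitting each AI capability per conversation class rather than turning everything on at once.

```python
# Hypothetical sketch of UC AI-feature governance: a policy table decides
# which AI capability may touch which class of conversation. Feature and
# classification names are invented for illustration.

POLICY = {
    # (feature, data_classification) -> allowed?
    ("meeting_summary", "internal"): True,
    ("meeting_summary", "confidential"): False,    # no AI summaries of sensitive calls
    ("live_transcription", "internal"): True,
    ("live_transcription", "confidential"): True,  # allowed, with limited retention
}

def feature_allowed(feature: str, classification: str) -> bool:
    """Deny by default: anything not explicitly permitted stays off."""
    return POLICY.get((feature, classification), False)
```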
In proofs of value, we look for straightforward signals, for example less time spent on documentation, fewer repeat meetings, clearer decisions, and reduced context switching. The aim is not to automate all communication, but to cut the administrative noise around it and give people more focused time.
OK, but here’s my question. If all this works so well, why do you think the bubble will burst?
Because most companies are not doing what I just described. They are running pilots that never end. They are testing ten different AI tools with no plan to choose one. They are measuring “engagement” instead of revenue.
When budgets tighten, that ends. Fast.
So, what should companies do now?
I think the main thing companies need right now is fewer experiments and more partners who can link AI to real business value. Most organisations have done one or several pilots. The difficult part is to scale the few that really work, keep them stable in day-to-day operations, and still meet all the security and compliance requirements. That is where many teams get stuck.
Where do you usually start with customers?
We usually start quite pragmatically. Together with the customer we build a small portfolio of AI use cases for each business area. The idea is to see very clearly where an investment actually makes sense and where it does not, where obstacles are to be expected, and whether the customer has the internal resources to tackle the projects.
From there, we integrate these use cases into the existing platforms, for example UC, contact centre, network, or security systems. We try to avoid creating yet another isolated solution. AI should improve processes that are already there, not create a parallel world.
Another goal is to enable customers to recognise new use cases themselves and evaluate them against the same criteria.
And how do you make sure this scales and stays secure?
That is where data, governance, and operations come in. We define who can access which data, how prompts and models are handled, and how we monitor quality and risk. It sounds a bit dry, but without this groundwork, things do not scale.
At Damovo we then take on parts of the ongoing operations if the customer wants that. So we look after monitoring, metrics, and continuous tuning, so the AI solutions keep delivering value and do not quietly degrade over time. The idea is simple, really. No more one-off proofs of concept that disappear after a few months, but a stable setup that can grow with the business.
Final question. What is your conclusion on the AI bubble?
I expect parts of the market to cool down sharply. That is healthy. Projects that cannot show impact will stop, and some providers will leave the market.
The value of AI, however, remains. In 2026, the winners will be those vendors and customers who align AI with clear ROI, operational readiness, and security. They will benefit from market consolidation and scale the use cases that really matter. Not driven by hype, but by results.