AI beyond the bullshit: how to make AI valuable

During our OWOW event in November, the topic was (of course) AI. But the most useful takeaway from the keynote was not “you have to do something with AI.” What stuck with me was a much more down-to-earth starting point: stop the hype, start with a small, concrete problem, and make sure the fundamentals are in place.

We recognize that message at Cadran Analytics. Many initiatives do not fail because of the technology, but because of data, processes, and adoption. In this blog, I share my observations and translate the insights into a practical roadmap for SMBs.

First, get clear: what are we actually talking about?

AI has become an umbrella term. The keynote made this clear by breaking AI down into “layers”: from AI in a broad sense (computers that perform tasks or support decisions) to machine learning, deep learning, and generative AI. It then went one step further toward agents: systems that not only generate output, but also independently perform actions and connect different tools.

What stood out to me was that most practical examples had little to do with “a smarter algorithm,” and much more with designing workflows. Think of an agent that supports order picking in a warehouse, voice interfaces that let you run through a process via conversation, or even a “happy robot” you can call to retrieve status updates and pass them on to a transport tool.

The common thread quickly became clear: the value is not only in the model, but in how you make AI part of an existing process.

Value comes from three drivers (and not from “doing something with AI”)

The keynote distinguished three ways in which AI can deliver value. This is a useful frame because it forces you to talk about business impact rather than technology from the start:

  1. Automation and optimization: less manual work, fewer errors, and faster cycle times.
  2. New products and business models: for example smarter recommendations, bundles, or service propositions.
  3. New insights from data: better forecasting, better steering, and better prioritization (think margin, inventory, or service levels).

For many SMBs, a logical starting point is repetitive, time-consuming tasks. A realistic nuance I appreciated: automation often starts “classically” (standardizing and documenting process steps) and only then becomes smarter with AI. In other words, you do not have to make everything “AI-first” right away to create impact.

Why so many AI initiatives fail

A sobering part of the keynote: fewer than half of AI initiatives make it into production, and of those, only a minority deliver demonstrable value. The reasons are immediately recognizable. The presentation mentioned (and we see this with organizations we talk to) several recurring patterns:

  • Pilots that work technically but never land in the day-to-day workflow.
  • Use cases that are “cool” but do not improve a KPI or have no business owner.
  • Projects that get stuck on data quality, data quantity, rights and roles, integrations, or lack of buy-in. 

The conclusion was clear: when an initiative fails, it is rarely just because of “AI.” It is almost always about adoption and the fact that the foundation (data, processes, and governance) is not in order. The keynote presented a disciplined four-step approach. I found this especially strong because it enforces the sequence: you only start building once you know what problem you are solving, with which data, for whom, and how you will measure success.

The roadmap that works: from ambition to value

Step 1 – Check your AI foundation before you start experimenting

Four prerequisites were explicitly mentioned: AI-ready data, infrastructure, insight into your data, and AI ambassadors. In practice, this means: 

  1. Data is consistent and business-ready (definitions are correct and unambiguous).
  2. Relationships between tables and entities are logical, so context is preserved.
  3. There are automated checks and monitoring for data quality (so you are not steering on outdated or broken data).
  4. Access rights are in place: not everyone needs to see everything (“for your eyes only”).

My translation: make sure data is cleaned up and brought together in one place. Only then does “ChatGPT on your own data” or a smart agent have any real chance of success.
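As a concrete illustration of point 3 above, automated data-quality checks can start very small. The sketch below (in Python, with made-up field names and thresholds; this is not a real schema) runs three basic checks on a handful of records: completeness, uniqueness, and freshness:

```python
from datetime import date, timedelta

# Hypothetical order records; the field names are illustrative only.
orders = [
    {"order_id": 1, "customer_id": "C-001", "updated": date.today()},
    {"order_id": 2, "customer_id": None,    "updated": date.today()},
    {"order_id": 2, "customer_id": "C-002", "updated": date.today() - timedelta(days=90)},
]

def quality_report(rows, max_age_days=30):
    """Run three basic checks: completeness, uniqueness, freshness."""
    ids = [r["order_id"] for r in rows]
    return {
        # completeness: rows with a missing customer reference
        "missing_customer": sum(1 for r in rows if r["customer_id"] is None),
        # uniqueness: how many IDs occur more than once
        "duplicate_ids": len(ids) - len(set(ids)),
        # freshness: rows not updated within the allowed window
        "stale_rows": sum(
            1 for r in rows
            if (date.today() - r["updated"]).days > max_age_days
        ),
    }

report = quality_report(orders)
print(report)  # {'missing_customer': 1, 'duplicate_ids': 1, 'stale_rows': 1}
```

Checks like these can run on a schedule and feed a simple alert, so you notice broken or outdated data before an AI use case is built on top of it.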

Step 2 – Choose the problem with the most potential (not the most buzz)

Start with a benefit assessment to keep focus. Plot potential use cases in a simple matrix with “potential value” versus “feasibility and data availability,” and begin with the candidates that score high on both axes.

This avoids the familiar pitfall: starting with whatever happens to be on the management agenda, instead of the use case that delivers the most impact in time, cost, or service.

Step 3 – Build iteratively with a multidisciplinary team

Key point: you will iterate. First, sketch a business case on the back of a beer coaster, then validate the data, then test, and only after that integrate it into the process. And you do this not just with someone from Data or IT, but together with the business: someone who owns the problem and feels responsible for the outcome.

A nuance I found particularly strong: sometimes, while building, you discover that you do not need a complex AI model at all. Better data, better rules, or a sharper dashboard can already deliver a large part of the value. That is also success.

Step 4 – Do not forget the people: adoption is the product

The final step was strikingly little about “yet another model,” and much more about operationalizing: how do you ensure the solution is used and keeps working? Show results, collect feedback, adjust, and only then scale.

The human side is not a side issue. If users do not trust it or it does not fit their workflow, even the best model is worthless. 

Practical example: AI on your own data without hallucinations

One example that illustrated how to stay in control was a “ChatGPT-like” experience on your own data within a platform environment: you ask a question, and under the hood a SQL query is executed on your database. This makes the result controllable and reduces the risk of a model making things up.

For many organizations, this is an attractive intermediate step: faster answers to operational or commercial questions, without every question having to go through a BI backlog.
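The keynote did not detail the implementation, but the guardrail idea behind such a setup can be sketched as follows: a model translates the question into SQL (stubbed here, since the exact model call is an assumption), and a simple validator allows only read-only queries over an allowlist of tables before anything touches the database. All table and function names are mine, not from the keynote:

```python
import re
import sqlite3

ALLOWED_TABLES = {"orders"}  # allowlist of tables the model may query

def is_safe(sql: str) -> bool:
    """Guardrail: accept a single read-only SELECT over allowlisted tables only."""
    stmt = sql.strip().rstrip(";")
    if ";" in stmt or not stmt.lower().startswith("select"):
        return False
    referenced = set(re.findall(r"\b(?:from|join)\s+(\w+)", stmt, re.IGNORECASE))
    return referenced <= ALLOWED_TABLES

def answer(question: str, conn: sqlite3.Connection) -> list:
    # In a real setup an LLM would translate `question` into SQL here;
    # this fixed string stands in for that call.
    generated_sql = "SELECT COUNT(*) FROM orders WHERE status = 'open'"
    if not is_safe(generated_sql):
        raise ValueError("Generated SQL rejected by guardrail")
    return conn.execute(generated_sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "open"), (2, "open"), (3, "closed")])
print(answer("How many open orders do we have?", conn))  # [(2,)]
```

Because the answer is ultimately produced by a SQL query against your own database, not by free-form model text, the result stays verifiable: you can always inspect the query that was run.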

Finally: six questions to help you start sharp on Monday

If you take one thing away from “AI beyond the bullshit,” let it be this: start small, but be serious. These questions help determine where to start and how to safeguard value:

  1. Which task or decision currently costs structural time or money (every week again)?
  2. Which KPI must demonstrably improve (cycle time, inventory levels, margin, etc.)?
  3. Do we have our data in order (quality, quantity, context, rights)?
  4. Who becomes the business owner (AI ambassador) and who helps build?
  5. How does this land in the process (not just in a demo)?
  6. How do we measure success after 4 to 8 weeks, and what do we do if it disappoints?

Want to spar about which use cases are most promising for your organization, or first get clarity on whether your data and reporting are AI-ready? We are happy to think along, precisely to prevent AI from becoming an expensive pilot instead of an accelerator of your operations.

Jelle Huisman

Managing Partner