// AI

When to say yes to LLMs — and when to say no.

Large language models are remarkable tools. They can summarise documents, generate code, classify text, and answer questions with startling fluency. It is therefore tempting — especially for decision-makers who have read the headlines — to ask "how can we use AI?" before asking "what problem are we solving?"

At XenoLabs we have run LLM evaluations for clients across industries. The single biggest predictor of project success is not the choice of model or the sophistication of the prompt. It is whether the problem was the right problem to begin with.

The framework

Before scoping any AI engagement, we ask three questions:

  1. Is there data? LLMs need context. If the information needed to make the decision exists nowhere in digital form, no model can conjure it.
  2. Is consistency required? LLMs are probabilistic. For regulated outputs — legal, medical, financial — you need determinism or robust human review in the loop.
  3. What does success look like in numbers? "Better" is not a KPI. "Reduces document review time by 40%" is.

If a client cannot answer all three, we pause the conversation and help them get there first. The cost of a badly scoped AI project is not just the build cost — it is the erosion of confidence in AI that follows a failed launch.
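To make the triage concrete, here is a minimal sketch of the three questions as a scoping checklist in Python. The AIProjectBrief fields, the ready_to_scope function, and its logic are illustrative assumptions for this article, not a formal XenoLabs methodology.

```python
from dataclasses import dataclass

@dataclass
class AIProjectBrief:
    """Illustrative scoping brief; field names are assumptions, not a standard."""
    data_exists_digitally: bool       # Q1: does the deciding information exist in digital form?
    needs_deterministic_output: bool  # Q2: regulated output that must be reproducible?
    has_human_review_loop: bool       # Q2: mitigation when determinism is required
    success_metric: str | None        # Q3: e.g. "reduces document review time by 40%"

def ready_to_scope(brief: AIProjectBrief) -> bool:
    """Return True only if all three framework questions have workable answers."""
    if not brief.data_exists_digitally:
        return False  # no model can conjure data that was never captured
    if brief.needs_deterministic_output and not brief.has_human_review_loop:
        return False  # probabilistic output without review is a non-starter
    return brief.success_metric is not None  # "better" is not a KPI
```

If any branch returns False, the right next step is the preparatory work, not the model.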

When to say yes

LLMs add genuine leverage in a narrow but valuable set of tasks: summarisation, classification, generation, and retrieval augmentation. If your problem is in one of those categories, has clean data behind it, and has a measurable success criterion, the answer is almost always yes.
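As an illustration of one such task, the sketch below classifies a document with an LLM and routes anything off-taxonomy to human review, which also answers the consistency question above. It assumes the OpenAI Python SDK; the model name, label set, and fallback behaviour are assumptions for the example, not a vendor recommendation.

```python
# Minimal sketch of LLM classification with a human-review fallback.
# Assumes the OpenAI Python SDK; model name and labels are illustrative.
from openai import OpenAI

LABELS = ["invoice", "contract", "complaint", "other"]  # assumed taxonomy
client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify(document_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model would do
        messages=[
            {"role": "system",
             "content": f"Classify the document into exactly one of: {', '.join(LABELS)}. "
                        "Reply with the label only."},
            {"role": "user", "content": document_text},
        ],
        temperature=0,  # narrows, but does not eliminate, output variance
    )
    label = response.choices[0].message.content.strip().lower()
    # Probabilistic models can drift off-taxonomy; route anything
    # unexpected to human review rather than guessing.
    return label if label in LABELS else "needs_human_review"
```

Pinning temperature to zero narrows the output distribution but does not make it deterministic, which is why the review fallback stays in place.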

When to say no

If the problem is a process problem, solve the process first. If the data does not exist, collect it first. If the goal is to impress rather than to improve, we will tell you — kindly, but clearly.
