AI for Your Business

How to Think About AI Before You Buy Anything

By Scott Drake Simpson

The most important AI decision a business owner makes is not which tool to adopt. It is whether the organisation's structure can absorb it. AI implemented without operational clarity compounds the dysfunction it was meant to resolve. The question is not "which AI?" but "are we ready?"

For owner-managers of UK businesses with twenty to eighty employees, this is the decision that shapes everything that follows. Not the vendor shortlist. Not the feature comparison. The honest structural question that most sales processes skip entirely.

A managing director I worked with had sat through three vendor demos in a fortnight. Each one had been impressive. Polished slides, live product walkthroughs, confident answers to every question. By the end of the third, he had a folder of proposals, a head full of possibilities, and absolutely no idea which one was right for his business.

It was not that the tools were bad. They were all capable. The problem was that each demo had been built around what the software could do, and none had addressed the question that actually mattered: how any of it would connect to the way his organisation already worked.

He said something during our first conversation that I have heard, in different words, from dozens of business owners since: "Everyone is telling me what AI can do. Nobody is helping me work out whether we are ready for it."

This is a pattern we encounter frequently in our advisory work with UK small businesses. The technology conversation starts in the wrong place.

What Can AI Actually Do for a Business Like Yours

AI capabilities for a business of this size are genuinely useful, but what they deliver depends entirely on the operational foundation underneath them. AI can automate repetitive administrative tasks that consume hours every week. It can surface patterns in customer data that would take a person months to identify. It can draft correspondence, summarise documents, generate reports from raw data, and flag anomalies in financial records before they become problems.

For a business of twenty to eighty people, those capabilities matter. Not because they replace staff, but because they free people to do the work that requires human judgement rather than human endurance.

Why the Same AI Tool Works Differently in Different Businesses

But here is where the conversation usually goes wrong. The question "what can AI do?" is answered in the abstract, as though every business operates the same way. In practice, what AI can do for your business depends entirely on what your business looks like underneath: where your data lives, how reliable it is, how your teams communicate, and whether your existing systems talk to one another or exist in isolation.

A business whose customer records are scattered across three spreadsheets and a CRM that nobody fully trusts will get a very different result from AI than one whose information flows are clean, connected, and current. The technology is the same. The operational foundation is not.

This is why treating AI as a purchasing decision misses the point. Before the tool, there is the structure. And the structure is either ready to absorb AI or it is not.

Should You Use AI in Your Business Right Now

Whether your business should be using AI right now depends less on the technology and more on what you would be building on top of. A sound operational foundation makes AI useful. A fragmented one makes it unpredictable.

Many businesses have already adopted AI informally. Someone on the team uses it for drafting emails. Another uses it for research. A third has started feeding customer queries into a chatbot they set up over a weekend.

None of this was planned. None of it was discussed. It happened because the tools are accessible and the instinct to improve things is natural.

What Happens When AI Adoption Has No Governance

The difficulty is not the AI itself. It is the absence of any shared approach to how it is being used. When individuals adopt AI independently, without governance, the organisation inherits risk it cannot see. Sensitive data may be entering systems with unclear privacy terms. Outputs may be inconsistent because different people are using different tools for similar tasks. And nobody has asked the foundational question: is the information going in trustworthy enough for the answers coming out to mean anything?

This is the same pattern of reactive software purchasing that created disconnected systems in the first place. A problem appears, someone finds a tool, the tool gets adopted, and no one steps back to consider how it fits into the whole. With traditional software, this pattern produced silos, workarounds, and the operational tax of disconnected systems. With AI, the consequences are faster and less visible, because the outputs look polished even when the inputs are flawed.

The question is not whether to use AI. It is whether you have the clarity to use it well.

How to Implement AI Without Making Things Worse

The starting point for sound AI implementation is not a product evaluation. It is a diagnostic. Before any AI tool is assessed, the organisation needs visibility of its own operational architecture: how systems connect, where manual workarounds exist, what data is trustworthy, and where decisions currently get stuck.

The instinct, when something new and powerful arrives, is to adopt it quickly. The fear of falling behind is real. Industry publications amplify it. Competitors who appear to be further ahead amplify it further.

But the businesses that get the most from AI are not the ones that adopted it first. They are the ones that understood their own operations clearly enough to know where AI would actually help and where it would simply automate existing confusion.

This is the same structural clarity that precedes any sound technology decision. AI simply makes the consequences of skipping it more visible and more expensive. A CRM purchased without understanding the data landscape creates duplicate records. An AI tool purchased without understanding the data landscape creates confident-sounding answers built on unreliable foundations.

When the Problem Is Upstream

I recently sat with a team that had implemented an AI reporting tool which pulled data from their project management system. The reports it produced were beautifully formatted and completely misleading, because the project data had not been updated consistently for months. The AI did exactly what it was asked to do. The problem was upstream, in the structure, not the software.

The intervention is not prescriptive. It does not begin with a recommendation. It begins with mapping: which decisions does the business need to make each week, and what information does each decision require? Where does that information live? How current is it? Who maintains it? The five signals that reveal structural readiness are the same whether the technology under consideration is a new CRM or a generative AI platform. The diagnostic is the same because the underlying question is the same.

This is where a structured AI strategy conversation becomes genuinely valuable. Not as a sales process, but as a way of seeing the organisation clearly before committing to anything.

What AI Guardrails Look Like in Practice

AI guardrails are the policies and boundaries that govern how an organisation uses artificial intelligence. In practice, they are simpler than the term suggests. They answer four questions:

  1. What data can be shared with AI tools?
  2. Who decides which tools are adopted?
  3. How are outputs reviewed before they are acted upon?
  4. What happens when something goes wrong?

For a business of thirty or fifty people, this does not need to be a lengthy governance document. It needs to be a shared understanding, agreed at leadership level and communicated clearly. It is the difference between AI adoption that happens by default and AI adoption that happens by design.

The guardrails themselves are not complicated. What is complicated is arriving at them without first understanding the organisation's current state. A business that does not know where its sensitive data lives cannot define what should and should not be shared with an AI tool. A business that has not mapped its decision-making processes cannot identify where AI-generated outputs might create risk.

This is why the structural work comes first. The guardrails are not separate from the diagnosis. They emerge from it.

Frequently Asked Questions

Is my business ready for AI?

Readiness is not about technical sophistication. It is about operational clarity. If you know where your data lives, how decisions flow through the organisation, and which processes are already well-structured, you have a foundation to build on. If those questions are difficult to answer, the readiness work comes before the AI work. The AI Readiness Assessment is a useful starting point.

What are the risks of adopting AI without a plan?

The primary risk is not that AI will fail visibly. It is that it will succeed superficially. AI produces polished outputs regardless of input quality. Without a plan, businesses inherit inconsistency across teams, data privacy exposure they cannot see, and a growing confidence in answers built on unreliable foundations.

Do I need to hire someone to implement AI?

Not necessarily. Many AI tools are designed for non-technical users. The question is not whether you can install the tool, but whether you have the structural clarity to know which tool belongs where, what data it should access, and how its outputs connect to real decisions. That is where advisory support earns its value.

The Shift That Matters

The managing director who sat through three demos in a fortnight did not end up buying any of the products he was shown. Not immediately. What he did was step back.

He mapped his organisation's systems with the help of a structured diagnostic conversation. He identified where data was reliable and where it was not. He discovered that two of the three tools his team had started using informally were feeding customer data to servers outside the UK, something nobody had thought to check.

When he eventually did adopt an AI tool, it was a considered decision. He knew what it would connect to. He knew what data it would use. He had a clear picture of what it was meant to do and, just as importantly, what it was not.

His team stopped treating AI as something to be anxious about and started treating it as something they understood. Not because they had become technical experts, but because they had built the structural clarity that made the technology legible.

The anxiety about falling behind was replaced by something quieter and more useful: confidence that when the business moved, it would move in a direction that made sense for the way it actually worked.

That is the shift. Not from scepticism to adoption. From confusion to clarity. The technology follows.

By Scott Drake Simpson, Founder of Drake Simpson Strategy

When the Time Is Right

Clarity begins with understanding where you stand.

If your business is considering AI and you want to start with the structural questions rather than the sales pitch, a diagnostic conversation is a good first step.

Book a conversation