Your Board Thinks You're Fine But Your Employees Know Better When It Comes To AI Adoption
You may recognize this scene. The quarterly board meeting at your Upperhill offices runs up to four hours. Strategy. Audit committee findings. Digital transformation update. Somewhere near the end, an item appears on the agenda: Artificial Intelligence. The Group Head of IT presents. The board asks sensible questions about risk. The CEO nods. The conclusion, almost always, is that things are under control: Microsoft Copilot has been approved, two internal pilots are in progress, and a policy will be drafted. There is time.
The picture painted in that board meeting is not the picture on the ground. On the ground, AI is everywhere.
What the Data Says About Kenya
Kenya is not a laggard in AI adoption. It is, by a significant margin, the most AI-active consumer market on earth. Research published in 2025 found that 42.1 per cent of Kenyan internet users aged sixteen and above had used ChatGPT in a single month — placing Kenya ahead of the United States, Japan and China. Let that sit for a moment.
The country your board believes is ‘not yet ready for AI’ ranks first in the world. What this means in practice is that the analyst in your credit risk team, the relationship manager covering your SME portfolio and the HR officer processing staff appraisals are, right now, using AI tools at work. Not because they have been instructed to. Because they are resourceful professionals meeting their deadlines, and AI helps them do that faster. A client brief. A loan assessment summary. A board presentation. Pasted into ChatGPT and returned in minutes. The data goes somewhere. The model may train on it. Nobody has told them to worry about that, because nobody in the organization has mapped the risk.
It is, as I like to say, like handing a Ferrari to a first-time driver and saying: mind the accelerator. The output is visible. The exposure is not.
The Problem Has Just Got Significantly Worse
Until recently, the risk was about data — what gets shared with an AI model that your organization does not control. That risk is real and largely unmanaged in most Kenyan businesses. But a second, more serious risk has arrived: agentic AI.
Agentic AI systems do not just respond to questions. They act. You give them an instruction and they execute it on your behalf, autonomously, across multiple systems. For example, you might ask the AI to book a meeting at your Mombasa branch next Thursday, reserve accommodation nearby and prepare a briefing document from your recent client correspondence. Straightforward enough. But to do this, the AI has accessed your calendar, your email, your file system, the internet, and your location data. It has acted on your behalf across all of them without a further prompt from you.
Now consider what happens when that same system is deployed inside a commercial bank, a SACCO or an NSE-listed company — with access to customer records, internal communications and financial systems. The question is not whether something could go wrong. The question is what your organization’s liability looks like when it does.
None of this requires a digital transformation programme. The starting point is three straightforward actions.
01
Talk to your staff. Not a survey but a conversation
Ask them what AI tools they are actually using, how frequently and for what purpose. Offer a full amnesty for anything previously undisclosed. You need an honest picture, not the official one.
02
Introduce basic training
The essentials are not complex. Never input client data or personal information into an AI tool without understanding where that data goes. Always verify AI outputs independently: these systems hallucinate, producing confident, plausible-sounding answers that are sometimes factually wrong. And understand that anything an agentic AI tool does on your behalf is your professional and legal responsibility.
03
Define your approved list
You cannot enforce an AI policy without one. The list does not need to be exhaustive, but it must exist and be communicated. One unapproved tool with access to customer data is all it takes to create a material exposure under the Kenya Data Protection Act 2019 — and the Office of the Data Protection Commissioner is actively building its enforcement capacity.
In the next post in this series, we look at the AI that is already embedded in the software your organization has been paying for, for years, without your knowledge or consent. Rather than asking "what is the company doing with AI?", consider "what are our employees doing with AI?". The answers are very different.
