AI Infiltration: The New Reality of Business Software

Every Software Tool Your Business Uses Has Probably Added AI in the Last 12 Months, and What This Means Going Forward

A few weeks ago we had lunch with the managing director of a mid-sized accounting firm in Nairobi, at a restaurant in Westlands, the kind of working lunch that runs longer than either party planned. He told us, with some confidence, that his firm did not really use AI. No policies had been developed because, he said, there was nothing to govern.

By the time the food arrived, we had identified twenty-one different applications running across his business that contained AI functionality. His practice management software. His document review platform. His HR system. His email software. His accounting package. Every one of them had added AI features through routine updates in the past two years.

He had not been notified. He had not consented. He bought the lunch. If that number surprises you, consider the last time you read a software terms-and-conditions update in full before clicking Accept. The AI was added in one of those clicks. It is there now. The question is, what is it doing with your data?

Shadow AI: The Governance Gap Nobody Is Talking About

What we have just described is what the industry calls Shadow AI: AI operating within your organization without explicit oversight or governance, typically because it arrived embedded in commercial software rather than through a deliberate deployment decision. This is not an edge case; it is the default scenario for almost every Kenyan organization using modern cloud-based software.

Here is why this matters. AI models are computationally expensive, so they run in data centres, almost always outside Kenya. And since your software provider almost certainly did not build its own AI, it integrated a model from OpenAI, Google, Anthropic or one of the large Chinese or European AI providers. Each of those providers has its own data handling practices, its own retention policies and its own jurisdictional exposure.

The Problem Has Just Got Significantly Worse

For any Kenyan organization operating under the Kenya Data Protection Act 2019, this creates questions that demand answers, including:

  1. Where is your data, and is it now sitting in a jurisdiction your original vendor agreement never contemplated?

  2. Is your data private, encrypted and deleted after use, or could it become accessible, for instance if the provider were hacked?

  3. Most critically: is your data being used to train an AI model, and if so, could someone access that data simply by querying the system?

  4. And finally: are your staff aware of these possibilities, so they can avoid the most obvious risks?

Many Kenyan companies are formally registered with the Office of the Data Protection Commissioner and have data protection policies in place. The question we ask every time, though, is this: when was that policy last updated? If the answer is more than three years ago, it predates the wave of AI integration that has swept through commercial software globally, and the policy therefore describes a world that no longer exists.

Two Incidents That Should Be in Every Risk Briefing

The first happened at a professional services firm — a scenario that has played out in various forms across Kenya’s many business sectors. Staff had been using ChatGPT’s free tier to help prepare board presentations and regulatory submissions, including material intended for a Central Bank of Kenya (CBK) inspection. On the free tier, user inputs are by default used to train OpenAI’s models. A board member with a technology background identified this during a governance review. Confidential client data and commercially sensitive financial information had been fed into a public model for months before anyone noticed. The firm moved rapidly to an enterprise tier that commits contractually to data protection. The exposure, however, had already occurred.

The second involves a competitive procurement process — the kind that runs regularly across Kenya’s banking, insurance and corporate sectors. Four vendors were invited to present. After each presentation, the evaluation panel moved to a closed session. One member of the panel used a meeting summary application for their own convenience, having added it quietly to the meeting. That application automatically distributed the full meeting transcript — including the panel’s assessment of each vendor — to all listed participants. The competing vendors received the evaluation before the process had concluded. The tool was functioning exactly as designed. No one had checked the documentation.

The Practical First Step

Every company should have two things in place: a list of all the applications that have AI embedded within them, and an AI use policy that everyone in the company is aware of and understands. Neither is a complex or overly technical task, but both need to be done.

The audit is simply a structured inventory of the applications your organization depends on, what AI each contains, and what each vendor is doing with your data. Start with your most business-critical applications. For each one that handles personally identifiable information, confidential client data or commercially sensitive material, review the current license agreement specifically for AI, data use and model training language. Some vendors will give you clear answers. Others will require a conversation with your account manager or a tier upgrade. In the worst cases, a vendor change may be necessary.
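In practice, the inventory can be as simple as a structured list with one rule for deciding which entries need follow-up. A minimal sketch in Python — the field names, example applications and flagging rule are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AppRecord:
    """One row in the AI inventory. All fields are illustrative."""
    name: str
    ai_features: list                # e.g. ["meeting summaries"]
    handles_sensitive_data: bool     # PII, client data, financials
    ai_terms_reviewed: bool          # licence reviewed for AI / training clauses
    trains_on_customer_data: Optional[bool] = None  # None = unknown

def needs_attention(app: AppRecord) -> bool:
    """Flag apps that handle sensitive data but whose AI terms are
    unreviewed, or are known (or not known not) to train on your data."""
    if not app.handles_sensitive_data:
        return False
    return (not app.ai_terms_reviewed
            or app.trains_on_customer_data in (True, None))

inventory = [
    AppRecord("Practice management", ["document drafting"], True, False),
    AppRecord("Email client", ["smart replies"], True, True, False),
    AppRecord("Internal chat", [], False, False),
]

flagged = [a.name for a in inventory if needs_attention(a)]
print(flagged)  # -> ['Practice management']
```

The point of the rule is that “unknown” is treated the same as “yes, it trains on our data”: until the licence language has been reviewed, the safe assumption is exposure.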

The policy is also straightforward and, in larger companies especially, is usually an extension of existing documents. But even in the smallest company, write down what is allowed and what is not, and formally make staff aware of the difference. And because the technology is moving so quickly, that policy should be reviewed at least quarterly.

This is not primarily an IT problem. It is a governance and leadership problem that happens to involve technology. Your IT team can run the inventory; HR can run the training. But the decisions and the drive for action belong with your CEO, your Legal Counsel and your board. One day, regulation will require you to demonstrate that you took reasonable steps to manage your exposure to AI. You cannot manage what you do not measure, and the first step is knowing what is in your stack.

In this series, we have spent time looking at the risks of AI. But AI in general is massively positive for business. In the next article, we will turn it around and look at the practical steps you can take to accelerate your business with the responsible adoption of AI. Meanwhile, if you would like to learn more about how Akili AI can help your business leverage AI, visit www.akili-ai.com to learn more about our AI offerings.