Next Step: Autonomous AI Agents

AI Agents make decisions

The traditional role of generative AI has been to respond to a question (a “prompt”) and return a response. In the same way that an AI can provide general information (“what are the ingredients of bread?”), it can also create a plan (“what are the steps to make bread?”).

By combining the plan with access to “tools” and memory, agents can undertake complicated multi-step tasks. Each step of the plan can be fed to another agent to execute, and further agents can determine whether the task has been completed or needs to continue.

For example, a group of agents can respond to a wide-ranging customer enquiry, seek out a product or build a complex research report.
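
The sketch below, in Python, illustrates this plan / execute / check pattern. It is a minimal illustration only: planner_agent, worker_agent and reviewer_agent are stand-ins for model calls, and their names are assumptions made for this example rather than part of any particular framework.

def planner_agent(goal: str) -> list[str]:
    # Break the goal into ordered steps; in practice a model would produce this plan.
    return [f"Research: {goal}", f"Draft: {goal}", f"Review: {goal}"]

def worker_agent(step: str, memory: list[str]) -> str:
    # Execute one step, with access to the shared memory of earlier results.
    return f"Completed '{step}' using {len(memory)} earlier results"

def reviewer_agent(results: list[str], goal: str) -> bool:
    # Decide whether the overall task is finished; a real agent would reason over the results.
    return len(results) >= 3

def run_task(goal: str) -> list[str]:
    memory: list[str] = []
    for step in planner_agent(goal):
        memory.append(worker_agent(step, memory))
        if reviewer_agent(memory, goal):
            break
    return memory

for line in run_task("customer enquiry about mortgage options"):
    print(line)

In a real system each function would be backed by a model, and the reviewer would reason about the intermediate results rather than simply counting them.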

Agency

AI agency refers to the level of autonomy and decision-making capability an AI system has, including its ability to initiate actions without direct human intervention and its ability to directly interact with the outside world. It also touches upon the AI’s ability to act independently, adapt, and learn over time.

Agents derive their potential power from the key characteristics of AI agency: Autonomy (the ability to act without human input), Intentionality (the pursuit of goals based on programmed or learned objectives) and Adaptability (the capacity to learn and improve over time).

But these same characteristics introduce new challenges: ensuring that agents do not deviate over time from the original intention, and that they do not take actions that were not expected.

Introduction to AI Agents

AI agents go beyond simple “question/response” approaches to AI. Instead, they are best thought of as a “system” that includes components such as memory, tools (for example a calculator or a web browser) and the ability to communicate with the outside world or with other agents, and to undertake reasoning.

AI Agents are said to have “Agency” i.e. the ability to make decisions for themselves based on the information they receive. This is a powerful new approach to AI, enabling solutions to behave flexibly without a precise description of how to act in all possible circumstances.
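
As a minimal sketch of this “system” view, the example below (in Python) pairs a single illustrative tool – a restricted calculator – with a simple list used as memory. The Agent class and the tool-selection rule are assumptions for illustration, not the API of any real framework.

from dataclasses import dataclass, field
from typing import Callable

def calculator(expression: str) -> str:
    # A deliberately restricted arithmetic "tool" used purely for illustration.
    return str(eval(expression, {"__builtins__": {}}, {}))

@dataclass
class Agent:
    tools: dict[str, Callable[[str], str]]            # named tools the agent may call
    memory: list[str] = field(default_factory=list)   # record of past interactions

    def act(self, request: str) -> str:
        # A real agent would let the model choose the tool; here we key on a prefix.
        tool_name, _, argument = request.partition(":")
        result = self.tools[tool_name.strip()](argument.strip())
        self.memory.append(f"{request} -> {result}")
        return result

agent = Agent(tools={"calculator": calculator})
print(agent.act("calculator: 17 * 12"))  # prints 204
print(agent.memory)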

01 Agent Architectures

While it is possible to have one, highly complicated agent, it is generally better to divide the problem into a network of much simpler agents. This can reduce the complexity of the solution, improve the reliability of answers, accelerate the computation and allow smaller (and therefore cheaper) models to be used. The way agents interact is often represented as a Graph – a mathematical model in which a process steps from one state (node) to another in a clearly defined way. Agents are also often coupled with conventional software such as databases, web browsers or calculators to give them access to information that lies outside their training data, for example real-time stock price information.
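
The sketch below shows one way such a graph can be expressed in Python: each node is a simple agent (a function over shared state) and each edge names the next node to run. The node names (classify, lookup, answer) and the routing rules are assumptions chosen for this example, with the lookup node standing in for a conventional tool such as a real-time price feed.

from typing import Callable

State = dict  # shared state passed from node to node

def classify(state: State) -> State:
    # Route price questions via the lookup node, everything else straight to answer.
    state["route"] = "lookup" if "price" in state["question"].lower() else "answer"
    return state

def lookup(state: State) -> State:
    # Stand-in for a conventional tool call, e.g. a database or real-time price feed.
    state["data"] = "GBP/USD 1.27"
    return state

def answer(state: State) -> State:
    state["answer"] = f"Response to '{state['question']}' (data: {state.get('data', 'none')})"
    return state

NODES: dict[str, Callable[[State], State]] = {"classify": classify, "lookup": lookup, "answer": answer}
EDGES = {"classify": lambda s: s["route"], "lookup": lambda s: "answer", "answer": lambda s: "END"}

def run(question: str) -> State:
    state: State = {"question": question}
    node = "classify"
    while node != "END":          # step from node to node until the terminal state
        state = NODES[node](state)
        node = EDGES[node](state)
    return state

print(run("What is the current price of GBP in USD?")["answer"])

Keeping each node this small is what allows simpler, cheaper models to handle routine steps while larger models are reserved for the harder ones.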

02 Decision-making

Multi-agent systems can make it simpler to monitor how a system is making decisions. By getting the agent to explain each step it takes, it is possible to see where in a logical chain an error may have been introduced. In many such systems, the agents will make several attempts to solve a problem (so-called ReAct agents), iteratively refining the output and then testing whether the answer is good enough. This can result in much improved outcomes, much as happens when humans write a draft of a document and then refine it once the structure is in place. However, care must be taken to ensure the agent does not become stuck in a loop, unable to make progress.
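
The sketch below shows the shape of such a draft-and-refine loop in Python, with a hard cap on the number of iterations so the agent cannot loop forever. The generate and critique functions are stand-ins for model calls, and the acceptance rule is purely illustrative.

def generate(draft: str | None, feedback: str | None) -> str:
    # Stand-in for a model call: produce a first draft, or revise using the feedback.
    base = draft or "first draft"
    return base if feedback is None else f"{base} (revised: {feedback})"

def critique(draft: str) -> tuple[bool, str]:
    # Stand-in for a critic agent; here it simply accepts after two revisions.
    return draft.count("revised") >= 2, "add more detail"

def refine(max_iterations: int = 5) -> str:
    draft, feedback = None, None
    for _ in range(max_iterations):   # hard cap so the loop cannot run forever
        draft = generate(draft, feedback)
        done, feedback = critique(draft)
        if done:
            break
    return draft

print(refine())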

03 Monitoring and Control

One of the most challenging areas of any AI system is monitoring and control. AI models themselves are highly complex and, in many cases, unexplainable using current technology. In addition, they are not deterministic; in other words, they will not necessarily give precisely the same answer each time to the same question. This means that care must be taken both to check that answers are within range and to ensure there is a clear process for understanding the decision logic when an answer is incorrect. This becomes more challenging as AI performance improves – the less often an AI makes a mistake, the less likely any given mistake is to be spotted. This underpins the need for strong guardrails, so that the system remains stable under all circumstances.
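
The sketch below illustrates one simple form of guardrail in Python: every answer is checked against explicit bounds before it is released, out-of-range answers are retried a limited number of times, and persistent failures are escalated for human review. The ask_model function and the chosen bounds are assumptions made for this example.

import random

def ask_model(question: str) -> float:
    # Stand-in for a non-deterministic model call; repeated calls can differ.
    return random.uniform(-0.5, 1.5)

def guarded_answer(question: str, low: float, high: float, retries: int = 3) -> float:
    for _ in range(retries):
        value = ask_model(question)
        if low <= value <= high:   # release the answer only if it is within bounds
            return value
    raise ValueError("No in-range answer after retries; escalate to human review")

try:
    print(guarded_answer("Estimated probability of default", low=0.0, high=1.0))
except ValueError as exc:
    print(exc)

In production, bounds checking of this kind would typically be complemented by logging each step’s reasoning, so that an incorrect answer can be traced back through the decision chain.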

AI Agents are the next transformative step for generative AI. The ability to combine decision-making with tools, and to determine autonomously which steps should be taken, creates solutions that are much closer to the way humans approach problem solving.

The approach means that software no longer needs to consider all possible circumstances; instead, the AI can be expected to make sensible decisions, within defined bounds, for situations that were not foreseen at design time.

But this freedom introduces new challenges. Designing systems that solve important commercial problems yet are reliable and responsible requires the skill of experts with experience in the field.

Akili AI is here to help. We bring a combination of market-leading skills, foresight and banking knowledge to support banks on their AI adoption journey, delivering a productive AI future while managing the transition to new ways of working.

Delivering Digital Transformation with Responsible AI

It is easy to get started with a simple clear approach:

  • AI Audit – Establish what AI the bank uses today and how it is managed
  • AI Governance – Put in place a simple but effective AI governance process
  • AI Opportunity Discovery – Map the places where AI can have the most impact

Getting started with AI is about choosing the right problem: one with clear business benefit that can be readily integrated into your existing systems, and one that aligns with your ethos and brand to enhance the customer experience.

Talk to us to start your AI Transformation Journey!