One of our biggest predictions for 2026 is that financial players will progressively move from AI that “talks” to AI that “does”, with AI agents executing workflows under tight controls, shifting the focus from models to integration and governance.
Generative AI creates content from prompts. Agentic AI goes a step further by planning and taking actions using tools, within clear guardrails. In practice, AI agents can gather information, decide the next step, and execute parts of a workflow, improving over time through iteration, testing, and refinement.
Financial services is among the early industries moving from pilots to scaling agentic AI, with leading firms already deploying agents in operational workflows. It’s reported that 77% of banks have launched or soft-launched GenAI applications, and 31% have started implementing agentic AI.
Moving from “AI that talks” to “AI that does”
Most AI deployments so far have focused on AI that “talks”, which can search, summarise, and draft content to support employees. The next wave is different: AI agents that “do” are starting to take bounded actions inside workflows, moving cases forward and coordinating steps end-to-end, with humans kept in control where it matters.
- On the internal operations side, next generation AI agents will take action and execute workflows end-to-end. Imagine an “underwriting decision-support agent” reading a loan application, checking what’s missing, requesting documents, updating the case system, drafting the decision pack, and nudging the next step. All that while keeping humans in control of high-risk actions like approving the final lending decision, granting policy exceptions, and issuing the binding offer. The net result for the lender is faster throughput at lower cost, with more consistent decisions and a stronger audit trail.
The Bank of New York Mellon (BNY) is already pushing “AI that does” into day-to-day operational workflows, deploying 100+ “digital employees” (multi-agent systems) that take action in workflows and work alongside staff on operational tasks, including payment validations and code repairs. These run through its internal platform, Eliza, which is designed as a governed system of work for building and operating agents safely at scale.
- On the customer side, AI agents won’t just recommend what to do, they’ll carry out steps on the customer’s behalf, within clear permissions, guardrails and an audit trail. Imagine an agentic “mortgage renewal autopilot” that, instead of simply suggesting a new rate, proactively manages the renewal journey: gathering what’s needed, preparing the documentation, and coordinating progress across the process. It only pauses when it needs the customer’s input or consent to confirm details. In short, it turns a stressful, time-boxed decision into a guided, low-touch experience for the customer, with minimal effort required at the moments that matter.
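The pattern behind both examples is the same: the agent executes routine steps itself and pauses for a human at high-risk moments, logging everything along the way. Here is a minimal sketch of that human-in-the-loop gate; all action names, classes, and risk tiers are hypothetical, not any vendor’s actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical risk tiers: actions the agent may never execute on its own.
HIGH_RISK_ACTIONS = {
    "approve_lending_decision",
    "grant_policy_exception",
    "issue_binding_offer",
}

@dataclass
class AgentRun:
    """Tracks one workflow run: executed steps plus an audit trail."""
    audit_log: list = field(default_factory=list)
    pending_approvals: list = field(default_factory=list)

    def request(self, action: str, detail: str) -> str:
        """Execute low-risk actions; queue high-risk ones for a human."""
        entry = {
            "action": action,
            "detail": detail,
            "at": datetime.now(timezone.utc).isoformat(),
        }
        if action in HIGH_RISK_ACTIONS:
            entry["status"] = "awaiting_human_approval"
            self.pending_approvals.append(action)
        else:
            entry["status"] = "executed"
        self.audit_log.append(entry)
        return entry["status"]

run = AgentRun()
run.request("request_documents", "chase missing payslips")
run.request("update_case_system", "mark application as docs-pending")
status = run.request("approve_lending_decision", "draft approval, case 123")
print(status)  # "awaiting_human_approval": held for a person, not executed
```

The point of the sketch is that “humans in control where it matters” is a design decision encoded in the system, not a policy document: the gate and the audit log sit in the execution path itself.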
Leading firms are already deploying agentic AI that takes bounded action
This shift to AI that takes bounded action is already visible in customer-facing financial services:
- Cleo’s new Autopilot moves from insights to execution by recommending financial actions and, once the customer confirms, orchestrating them behind the scenes. Autopilot can move money into savings, help prevent overdrafts with cash advances, and set spending limits at specific merchants, while keeping users in control via consent gates.
- Klarna’s AI assistant can execute customer tasks such as refunds and returns, not just provide recommendations.
- Lemonade’s AI ‘Claims bot’ handles 96% of first-notice-of-loss claims without human intervention. Lemonade reports that 55% of claims are automated end-to-end, with the bot escalating only the cases it can’t settle. Claims it isn’t authorised to settle, or where it identifies concerns, are triaged and assigned to human experts, with the bot analysing each expert’s specialty, qualifications, workload, and schedule to decide the assignment.
- Mastercard’s product onboarding assistant not only answers questions, but also automates routine onboarding tasks.
Agentic AI that “does” changes how value is created
The big ramification of AI that “does” is that the value shifts from who has the best model to who can safely connect AI to real production systems. This has several crucial implications for financial-services firms:
- Agentic AI that “does” requires stronger controls. Safely connecting to systems means designing and implementing stronger controls including permissions, approval gates, guardrails, and audit logs.
- It forces a rethink of roles. Humans focus on judgement, exceptions, and oversight, while agents handle most of the execution work, from admin and coordination to gathering information, running checks, and preparing decisions, within clear guardrails.
- It requires a new governance layer. The nuance here is that the moment an AI is allowed to take actions, firms need a horizontal “plumbing” layer that makes those actions safe and repeatable with permissions, approvals, audit logging, monitoring and system integrations.
Once that foundation exists, firms can roll out vertical agents that deliver outcomes in specific workflows by layering domain rules, evidence standards, and exception handling on top of the same horizontal “action infrastructure.” In practice, AI that “does” pushes financial players toward a simple pattern: one horizontal execution-and-governance layer enabling multiple vertical, value-driving agents.
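The “one horizontal layer, many vertical agents” pattern can be made concrete in a few lines. Below is an illustrative sketch, with all class names, permissions, and thresholds hypothetical: a shared gateway enforces permissions and audit logging for every action, while a vertical agent layers its own domain rules on top.

```python
# Hypothetical sketch: one horizontal execution-and-governance layer
# serving many vertical agents. Every action passes through the same
# permission check and audit log; each vertical agent adds domain rules.

class ActionGateway:
    """Horizontal layer: permissions, audit logging, one place to monitor."""

    def __init__(self, permissions: dict[str, set[str]]):
        self.permissions = permissions  # agent name -> allowed actions
        self.audit_log: list[dict] = []

    def execute(self, agent: str, action: str, payload: dict) -> bool:
        allowed = action in self.permissions.get(agent, set())
        self.audit_log.append({
            "agent": agent, "action": action,
            "payload": payload, "allowed": allowed,
        })
        return allowed  # a real system would dispatch to core systems here


class FraudInvestigationAgent:
    """Vertical agent: domain rules layered on the shared gateway."""

    def __init__(self, gateway: ActionGateway):
        self.gateway = gateway

    def handle_alert(self, alert: dict) -> str:
        # Illustrative domain rule: only freeze small-value cards automatically.
        if alert["amount"] < 500:
            ok = self.gateway.execute("fraud_agent", "freeze_card", alert)
            return "frozen" if ok else "blocked_by_permissions"
        return "escalated_to_human"


gateway = ActionGateway({"fraud_agent": {"freeze_card"}})
agent = FraudInvestigationAgent(gateway)
print(agent.handle_alert({"card": "1234", "amount": 120}))   # frozen
print(agent.handle_alert({"card": "1234", "amount": 9000}))  # escalated_to_human
```

Adding a second vertical agent (say, underwriting) means registering its permissions with the same gateway, not rebuilding controls: that reuse is what makes the horizontal layer a foundation rather than a one-off integration.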
The next frontier is “AI that does” at scale: agents taking bounded actions across core systems safely and repeatedly. Financial players are now starting to build the shared foundations for safe action. We believe the real value and defensible differentiation will come from vertical agents embedded in specific workflows, with domain rules and clear guardrails.
As more financial services firms build out their horizontal execution-and-governance layer, we expect a wave of workflow-specific vertical agent deployments that “do” throughout 2026.
Horizontal and vertical agentic AI are different
Let’s now unpack the difference between horizontal and vertical agentic AI and why it matters in financial services.
This distinction separates broad, reusable productivity gains from workflow-specific outcomes, and it helps explain why the biggest prize is vertical agents that can execute, not just advise.
- Horizontal agents help employees do everyday tasks faster. Horizontal agentic AI refers to agents that work across functions, teams, and domains, performing broadly useful tasks that aren’t tied to one product workflow. Think “document extraction,” “case summarisation,” “email chasing,” or “ticket triage.” These agents help with knowledge search, drafting, summarising, and routing work, and they can support many different workflows. They’re often quicker to deploy because they typically don’t require deep write-access into core systems and the risk surface is lower.
- Vertical agents reshape how specific workflows run. Vertical agentic AI is embedded inside a specific financial domain workflow such as underwriting, fraud investigation, asset-liability management, or portfolio rebalancing. A “mortgage underwriting decision-pack agent” or “fraud investigation agent” is specialised: it’s designed to run one workflow extremely well and often orchestrates multiple systems to do so. Vertical agents operate under domain rules and regulatory constraints, and they can directly influence financial outcomes.
Vertical agentic AI is strategic
The horizontal versus vertical distinction isn’t semantic; it’s strategic. It provides a practical lens for prioritising investment in agent bets, separating broad productivity uplift from workflow-level outcome change. It clarifies three things that matter in financial services:
- Where value concentrates. Horizontal agents tend to deliver broad productivity benefits like faster case handling, less manual admin, and better knowledge access. The gains are real but usually incremental and spread across teams. Vertical agents concentrate impact inside the firm’s high-value engines: revenue generation, risk management, and capital or balance-sheet decisions. They can influence approval rates, losses, fraud outcomes, portfolio performance, pricing quality, and capital efficiency. That makes their economic value more concentrated and measurable, but also raises the bar on required controls and governance.
- What it takes to ship safely. Horizontal agents typically sit in everyday employee tools like email, chat, case management, and ticketing, which makes them easier to roll out. Vertical agents plug into domain-specific systems and decision loops and usually require deeper integration with core systems. That’s why vertical agentic AI requires stronger auditability and more rigorous model governance and oversight, and tends to take longer to deploy with greater delivery and regulatory complexity.
- Where defensible competitive advantage is likely to emerge. Horizontal capabilities commoditise quickly because most financial services firms can buy similar copilots and tools from the same vendors. Vertical agents become institution-specific when they embed your data, your risk appetite, your policies, and your operating model. That’s why horizontal agents tend to improve how work gets done, while vertical agents can change how a financial services firm designs, distributes, and manages financial products and risk.
Vertical agentic AI “that does” is the next frontier of value
The step-change comes when agentic AI is embedded into underwriting, fraud decisions, balance sheet management, trading execution, or wealth portfolio construction and can execute bounded actions under clear controls.
That is when agentic AI starts shaping risk, pricing, revenue, capital efficiency, and product design. Vertical AI “that does” only becomes a meaningful value frontier under a few conditions:
- The agent must sit inside the decision loop, not just advising humans but triggering or executing bounded actions within clear guardrails.
- It should leverage institution-specific data and constraints, such as behavioural signals, portfolio performance, funding structure, and risk appetite, that are hard for competitors to replicate.
- It must be deployable under regulatory scrutiny, with explainability, auditability, and evidence that can survive model validation and supervisory review.
Vertical AI “that does” is not just more advanced AI. It’s a shift from productivity tools to intelligent financial infrastructure.
Vertical agents do not scale on their own. They depend on horizontal foundations for safe execution, including data plumbing, model governance, workflow integration, and action controls. Without those foundations, vertical agentic AI will stay stuck in pilots.
Unlock the value with 11:FS
If you want to move from ideas to shipped outcomes, 11:FS can help you do it end-to-end as a product consultancy. We work with banks, investment managers, payments firms, insurance companies and fintechs to identify the right opportunities, design winning propositions, and deliver the product, design, and engineering work needed to make them real, not just produce a slide deck.
That means mapping where AI can drive value, shaping the proposition and operating model, and then building the workflows and controls needed to deploy safely in production with measurable KPIs.
Discover more at 11fs.com/ventures





