Something fundamental has changed in enterprise security. It’s not a new vulnerability class, a nation-state threat actor, or a regulation. It’s the arrival of autonomous AI agents – and they’re already inside your environment.
Embedded in your CRM. Running inside your office suite. Deployed by a developer who found a compelling SaaS tool and connected it to a company API last Tuesday. These agents don’t just generate content – they access systems, invoke tools, move data, and act on your behalf. Often, without a security team knowing they exist.
The January 2026 Gartner research, How to Secure Enterprise Agentic AI Ambition, is a rigorous, structured response to this reality. Written for CISOs and CIOs, it defines what a mature agentic AI security program looks like and makes a striking prediction:
Gartner projects that by 2028, organizations that implement a structured cybersecurity program for agentic AI will accelerate high-agency AI initiatives by 20% and reduce critical incidents by more than 50%.
That is not a defensive metric. That is a competitive one. Security done right doesn’t slow AI adoption – it enables it at scale. Here is what the research reveals, and why it matters to your organization right now.
THE THREAT YOU CAN’T SEE YET: SHADOW AI AND UNMANAGED AGENTS
Before you can secure AI agents, you have to find them. That turns out to be harder than most organizations expect.
Gartner describes a growing phenomenon it calls “shadow attack surfaces” – AI tools and automation adopted by employees and developers without central oversight, dispersed throughout the organization, often with access to sensitive data and enterprise APIs. These aren’t rogue actors. They’re well-intentioned people solving real problems with the AI tools available to them.
The security implications, however, are significant. A single AI agent frequently requires multiple accounts to function because of the range of systems and tools it accesses. And critically, those access rights often mirror the rights of the human operating the agent – far exceeding what the agent actually needs to do its job.
Gartner notes that AI agents’ access and rights are often those of the humans operating them, exceeding what the AI agent should be able to do. This privilege sprawl is one of the most urgent structural risks introduced by agentic AI.
Why Traditional Discovery Won’t Work
A manual, siloed approach to agent discovery creates blind spots. Gartner recommends a multichannel discovery strategy combined with a “track back” approach – starting from your most sensitive data and tracing which agents can reach it. This means:
- Detecting any new access patterns to sensitive data stores
- Tracking new AI and automation features added to high-risk enterprise applications
- Identifying new connections to critical enterprise APIs
- Monitoring higher-risk employee access to stand-alone SaaS AI agents
The output of this process is an AI agent inventory – not just a list of tools, but a structured record of each agent’s type, resources accessed, AI model, and tool permissions. Without this foundation, every other security initiative is guesswork.
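The inventory and the “track back” discovery approach can be sketched in code. The following Python sketch is illustrative only – the record fields, agent names, and data-store labels are hypothetical, not part of the Gartner framework:

```python
from dataclasses import dataclass, field
from enum import Enum

class AgentType(Enum):
    TASK_DRIVEN = "task-driven"
    GOAL_DRIVEN = "goal-driven"

@dataclass
class AgentRecord:
    """One entry in the AI agent inventory (field names are illustrative)."""
    name: str
    agent_type: AgentType
    model: str                                        # underlying AI model
    resources: set[str] = field(default_factory=set)  # data stores it can reach
    tools: set[str] = field(default_factory=set)      # tools it may invoke

def track_back(inventory: list[AgentRecord], sensitive_store: str) -> list[AgentRecord]:
    """'Track back' discovery: start from a sensitive data store and
    return every inventoried agent that can reach it."""
    return [a for a in inventory if sensitive_store in a.resources]

# Hypothetical inventory entries for illustration.
inventory = [
    AgentRecord("crm-summarizer", AgentType.TASK_DRIVEN, "some-llm",
                resources={"crm_db"}, tools={"summarize"}),
    AgentRecord("ops-orchestrator", AgentType.GOAL_DRIVEN, "some-llm",
                resources={"crm_db", "hr_records"}, tools={"email", "api_call"}),
]
print([a.name for a in track_back(inventory, "hr_records")])  # ['ops-orchestrator']
```

Starting from the sensitive store rather than from the tool list is the point of the track-back approach: it surfaces agents you did not know existed, because the data access pattern gives them away.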
THE FIVE WORKSTREAMS: INSIGHTS BUILT FOR SCALE
Gartner organizes the agentic AI security response around five core workstreams, which together form what the research calls an Agentic AI Cybersecurity Program (A2CP). These are not sequential phases – they are concurrent, cross-functional disciplines that CISOs must establish and fund together.
- 5 – core workstreams in a mature A2CP
- 20% – acceleration of high-agency AI initiatives with a structured security program
- >50% – reduction in critical incidents by 2028
1. Inventory High-Risk AI Agents
The first workstream is multichannel discovery combined with risk-tiered classification. Gartner recommends splitting agents into two distinct lists: task-driven agents (simpler, with risks tied primarily to data sensitivity) and goal-driven agents (built on large language models, orchestrating complex workflows, introducing new attack surfaces by virtue of their autonomy and tool access).
Goal-driven agents warrant heightened scrutiny not just because of the data they can reach, but because of what they can do with it – and the range of tools they can invoke to do it.
2. Implement AI Agent Access Modeling
Each AI agent must have a single, unique identity for traceability and compliance. This is non-negotiable. But identity alone is insufficient. The Gartner access modeling framework maps up to ten distinct access points for a single agent: input channels, system access, model invocation, resource acquisition, tool invocation, tool execution, tool proxies such as MCP servers, computer use, internet access, and agent output.
Identity and access management leaders must work with other teams to uniquely identify each AI agent and enforce least privilege by modeling the access each agent needs to do its job. Nothing more.
AI agents embedded in enterprise platforms like Salesforce Agentforce or Microsoft 365 Copilot often inherit user-level access by default – a pattern Gartner flags as a poor practice. The better model is just-enough and just-in-time privileged access, scoped specifically to the task at hand.
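The difference between inherited user-level access and task-scoped access can be made concrete. A minimal Python sketch, assuming hypothetical grant names – the ten access points are from the Gartner framework, everything else is illustrative:

```python
# The ten access points Gartner maps for a single agent.
ACCESS_POINTS = [
    "input_channels", "system_access", "model_invocation", "resource_acquisition",
    "tool_invocation", "tool_execution", "tool_proxies", "computer_use",
    "internet_access", "agent_output",
]

def least_privilege_grant(requested: dict[str, set[str]],
                          needed: dict[str, set[str]]) -> dict[str, set[str]]:
    """Grant only the intersection of what is requested (often the human
    operator's inherited rights) and what the current task actually needs."""
    return {ap: requested.get(ap, set()) & needed.get(ap, set())
            for ap in ACCESS_POINTS}

# An agent inheriting its operator's broad rights by default ...
inherited = {"system_access": {"crm", "hr", "finance"},
             "tool_invocation": {"email", "export", "delete"}}
# ... versus what its current task actually requires.
task_needs = {"system_access": {"crm"}, "tool_invocation": {"email"}}

scoped = least_privilege_grant(inherited, task_needs)
print(scoped["system_access"])    # {'crm'}
print(scoped["tool_invocation"])  # {'email'}
```

Note that every unmodeled access point resolves to an empty grant – the just-enough, just-in-time posture is the default, not an exception.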
3. Champion Scoped Agency
This is the workstream that bridges architecture and security culture. The Gartner principle of “scoped agency” applies least-privilege logic to the agency of the agent itself – the scope of actions it can take, not just the data it can see.
The recommendation is to define agent scope early in design, narrow it further during development, contain tool access during deployment, and enforce boundaries at runtime. Organizations should treat every AI agent tool as “semihostile” – granting only the functionality required for expected tasks.
For custom-built agents, this means offering prescriptive lists of available actions rather than broad tools, implementing human-in-the-loop review before sensitive operations, and building containment mechanisms into the development process itself.
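A prescriptive action list with a human-in-the-loop gate might look like the following sketch. The action names and return values are hypothetical; the pattern – an explicit allowlist plus mandatory review before sensitive operations – is the point:

```python
# Prescriptive list of actions this agent may take (treat tools as "semihostile").
ALLOWED_ACTIONS = {"read_ticket", "draft_reply", "send_reply"}
# Sensitive operations that require human-in-the-loop approval before execution.
SENSITIVE_ACTIONS = {"send_reply"}

def execute(action: str, approved_by_human: bool = False) -> str:
    """Enforce scoped agency at runtime: out-of-scope actions fail hard,
    sensitive actions queue for human review unless already approved."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} is outside the agent's scope")
    if action in SENSITIVE_ACTIONS and not approved_by_human:
        return "queued for human review"
    return f"executed {action}"
```

The containment lives in deterministic code, not in the model: even a fully compromised agent cannot reach an action that was never on the list.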
4. Manage Model Risks
Goal-driven agents built on large language models introduce a category of risk that has no direct analog in traditional software security: instruction injection. Where SQL injection exploits predictable parsing rules, instruction injection exploits the nondeterministic reasoning of language models.
Direct and indirect prompt injections are the main threats to AI agent models. Because agents have broader tool access than chat interfaces, these injections become instruction injections – capable of triggering real-world actions across enterprise systems.
Gartner identifies five key instruction injection entry points:
- User input
- Compromised memory, including poisoned multiturn sessions
- Malicious or compromised resources, such as files an agent reads
- Compromised or malicious tool descriptions and responses
- Websites containing indirect injections in web content
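One deterministic building block is to label every input by provenance and let a runtime policy – not the model – decide whether tool calls may proceed while untrusted content is in context. This is a naive illustration of the idea, not a complete mitigation; the channel names are hypothetical:

```python
# Channels whose content an attacker may control (maps loosely to the
# injection entry points: resources, tool responses, memory, web content).
UNTRUSTED_CHANNELS = {"web_content", "file_resource", "tool_response", "memory"}

def gate_tool_call(tool: str, context_channels: set[str],
                   human_approved: bool = False) -> bool:
    """Block autonomous tool execution when untrusted content is present in
    the agent's context, unless a human has approved the specific call."""
    if context_channels & UNTRUSTED_CHANNELS and not human_approved:
        return False
    return True

print(gate_tool_call("send_email", {"user_input"}))                 # True
print(gate_tool_call("send_email", {"user_input", "web_content"}))  # False
```

Because the gate is deterministic, it holds even when an injection successfully steers the model's reasoning – which is precisely why Gartner pairs runtime controls with red team and application security efforts.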
Mitigation requires red team and application security efforts working in coordination, combined with runtime controls that support intent-based policies rather than purely signature-based detection.
5. Shorten the Threat Exposure Window
The final workstream is incident response – and it requires rethinking the model entirely. With fully autonomous AI agents executing high-speed, goal-driven workflows, traditional time-based SLA metrics such as mean time to investigate (MTTI) and mean time to respond (MTTR) become nearly irrelevant. The velocity of autonomous AI action can exceed the velocity of human investigation.
What replaces it is intent-based behavioral analytics: automated systems capable of distinguishing between legitimate agent behavior and anomalous actions driven by malicious instruction, compromised memory, or privilege abuse. Gartner recommends that CISOs evaluate integrating intent-based analytics directly within the systems generating alerts, and develop specialized AI incident response playbooks with clear cross-team ownership.
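The core of an intent-based check is a comparison of observed behavior against declared intent, rather than against attack signatures. A minimal Python sketch, with hypothetical action names:

```python
def flag_anomalies(declared_scope: set[str], observed_actions: list[str]) -> list[str]:
    """Intent-based check: any observed action outside the agent's declared
    scope is anomalous, whether or not it matches a known attack signature."""
    return [a for a in observed_actions if a not in declared_scope]

# An agent declared for ticket triage suddenly exporting a customer database
# is flagged even though "export" is a perfectly legitimate action elsewhere.
alerts = flag_anomalies({"read_ticket", "draft_reply"},
                        ["read_ticket", "export_customer_db", "draft_reply"])
print(alerts)  # ['export_customer_db']
```

Real implementations would score behavioral drift rather than apply a hard set-membership test, but the design choice is the same: the baseline is the agent's intent, so novel attacks surface without a signature.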
THE SECURITY MATURITY GAP: WHERE MOST ORGANIZATIONS STAND TODAY
We feel that Gartner is candid about the current state of the market. Cybersecurity technologies targeting the nondeterministic behavior of goal-driven AI agents remain nascent. Tooling is limited. Standards are immature. And this creates a dangerous dynamic: cybersecurity teams feel stuck, unsure whether to act on an incomplete set of controls or wait for the market to mature.
We think the Gartner answer is unambiguous: do not wait. The window between when agentic AI deploys and when security programs adapt is exactly when organizations are most exposed. The research makes clear that CISOs must prioritize deterministic controls – identity governance, access modeling, scope containment – rather than relying on AI systems to police themselves.
CISOs must prioritize deterministic controls to minimize agentic privilege abuses and contain AI agents’ agency, instead of relying primarily on AI to police itself.
This is a meaningful distinction. Deterministic controls – hard limits on what an agent can access, invoke, or output – provide defense-in-depth that does not depend on the language model behaving correctly. They constrain the blast radius when something goes wrong, whether through adversarial attack, model error, or unanticipated behavior.
WHY NORTH AMERICAN SECURITY LEADERS NEED THIS RESEARCH NOW
For security leaders in the United States and Canada, the pressure is immediate and multi-directional. Boards are asking about AI governance. Regulators are beginning to develop expectations around AI risk management. And business units are deploying AI agents whether or not a governance framework exists.
The organizations that will emerge strongest from this period are not those that block AI adoption in the name of security. They are the ones that build a structured, funded security program that grows with AI deployment – enabling the business to move at speed while controlling the risks that could turn an AI initiative into a compliance event, a breach, or a reputational crisis.
The Gartner research provides the foundation for exactly that program: a rigorous, cross-functional framework that CIOs and CISOs can act on today, regardless of where their organization is in its AI journey.
HOW QUISITIVE HELPS
Quisitive works with enterprise organizations across North America to design and implement AI security frameworks grounded in real-world deployment experience. Our teams bring together cloud security architecture, identity and access governance, AI strategy, and Microsoft ecosystem expertise to help organizations secure their AI ambitions – not constrain them.
Whether you are assessing your current AI agent exposure, building an access modeling practice, or modernizing your incident response program for autonomous AI workflows, Quisitive can help you move from risk awareness to a defensible security posture.
Gartner® Disclaimer
Gartner, How to Secure Enterprise Agentic AI Ambition, Jeremy D’Hoinne, Dionisio Zumerle, 5 January 2026
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Quisitive.
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and MAGIC QUADRANT is a registered trademark of Gartner, Inc. and/or its affiliates and are used herein with permission. All rights reserved.