Artificial Intelligence has become a household topic over the last couple of years, especially with the rise of generative pre-trained transformer (GPT) models, made famous by ChatGPT.
At its core, GPT is a large neural language model that can interpret existing content and generate new content; it underpins what is commonly referred to as Generative AI (GenAI). Its capabilities continue to expand: it is no longer limited to making sense of pretrained data, but can interact in near real time with other systems, whether that means retrieving data or engaging with them through an API or even a traditional user interface.
As this generative technology has continued its rapid advancement, new capabilities are emerging in the form of AI Agents.
However, the concept is still new and not widely understood, often cloaked in mystery and accompanied by the promise of driving automation and business-process efficiencies, freeing humans to perform higher-level work.
An AI Agent builds on the capabilities of generative AI: rather than simply transforming input into some new output, it wraps its work around a goal, with a description that guides the decisions it should make and the actions it is expected to take.
Agents are, in general, singularly purposed: do this one thing and do it very well.
They possess capabilities that enable them to:
- Act on behalf of a user
- Connect to other systems through APIs and emerging protocols like A2A and MCP
- Operate as autonomous systems, taking corrective action on their own and engaging a human only when something fails (a minimal sketch of this loop follows the list)
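To make that shape concrete, here is a minimal sketch of such an agent loop in TypeScript. Every name in it (decideNextAction, callTool, escalateToHuman, the step cap) is hypothetical rather than any particular framework's API: the agent repeatedly chooses an action in pursuit of its goal, invokes a tool over an API or MCP-style connector, and brings a human in only when a step fails.

```typescript
// Minimal sketch of a goal-driven agent loop (all names are hypothetical).

type Action =
  | { kind: "call_tool"; tool: string; args: Record<string, unknown> }
  | { kind: "finish"; summary: string };

interface AgentContext {
  goal: string;       // the description the agent is trying to satisfy
  history: string[];  // prior observations, used for the next decision
}

// Placeholder for the model call that chooses the next action.
declare function decideNextAction(ctx: AgentContext): Promise<Action>;
// Placeholder for a tool invocation over an API or MCP-style connector.
declare function callTool(tool: string, args: Record<string, unknown>): Promise<string>;
// Placeholder for handing control back to a person when the agent is stuck.
declare function escalateToHuman(ctx: AgentContext, error: unknown): Promise<void>;

async function runAgent(goal: string): Promise<string> {
  const ctx: AgentContext = { goal, history: [] };
  for (let step = 0; step < 20; step++) {      // hard cap on autonomous steps
    const action = await decideNextAction(ctx);
    if (action.kind === "finish") return action.summary;
    try {
      const observation = await callTool(action.tool, action.args);
      ctx.history.push(observation);           // feed the result into the next decision
    } catch (err) {
      await escalateToHuman(ctx, err);         // human engaged only on failure
      throw err;
    }
  }
  return "Step limit reached without completing the goal.";
}
```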
Unlike traditional automation, agents can:
- Learn and adapt in real time
- Operate independently
- Chain actions across systems
- Initiate tasks without user prompts
Examples include:
- An agent in Microsoft Teams that summarises meetings
- An agent that helps schedule appointments with clients and email them
- Agents trained on HR data that can answer employee questions, craft emails, or book airline tickets
Agent capabilities continue to evolve, with more autonomy, memory, learning, and self-improvement.
Agents range from simple scripts to more complex autonomous agents (those that act without human prompting).
The term “agent” is overly broad and masks wide variation in capability. No industry-standard categorisation exists, but possible categories include:
- Human-Delegated Agents: act on behalf of a human or group of humans. Example: an agent scheduling a meeting for a human user.
- Application-Bound Agents: act on behalf of an application or service.
- Impersonation Agents: morph between digital personas to interact with others.
- Third-Party or Trusted Agents: embedded into SaaS platforms or purchased capabilities.
Agents can also be classified as (see the type sketch after this list):
- Ephemeral (short-lived) or persistent
- Credentialed or “guest”
- User or system initiated
- Autonomous or interactive
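One way to make this taxonomy concrete is as a small type model. The type and field names below are purely illustrative; as noted, there is no industry standard.

```typescript
// Illustrative type model for the agent taxonomy above (no industry standard exists).

type AgentCategory =
  | "human-delegated"    // acts on behalf of a human or group of humans
  | "application-bound"  // acts on behalf of an application or service
  | "impersonation"      // morphs between digital personas
  | "third-party";       // embedded in a SaaS platform or purchased capability

interface AgentClassification {
  category: AgentCategory;
  lifetime: "ephemeral" | "persistent";
  credentialed: boolean;              // false = "guest"
  initiatedBy: "user" | "system";
  mode: "autonomous" | "interactive";
}

// Example: a short-lived, user-initiated meeting-scheduling agent.
const schedulingAgent: AgentClassification = {
  category: "human-delegated",
  lifetime: "ephemeral",
  credentialed: true,
  initiatedBy: "user",
  mode: "interactive",
};
```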
Agentic AI goes beyond the task-focused agent. It can:
- Plan, orchestrate, and execute multi-step work
- Coordinate across many agents
- Tackle complex scenarios, reasoning about the best approach along the way
It can cross system and identity boundaries, perform delegated tasks, delegate to other agents, and operate persistently or on demand, with or without human intervention.
This is no longer just about automation – it is about delegated agency.
With agency comes the need for clear governance of ownership, identity, lifecycle management, trust, domain, scope, and accountability.
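As a rough, hypothetical illustration of what that governance could record for each agent, consider a record along these lines (every field name is an assumption, not a prescribed schema):

```typescript
// Hypothetical governance record for a single agent identity.

interface AgentGovernanceRecord {
  agentId: string;          // stable identity, distinct from any user or service account
  owner: string;            // accountable human or team
  trustDomain: string;      // where the agent is allowed to operate
  scopes: string[];         // what it is allowed to do
  createdAt: Date;
  expiresAt: Date | null;   // lifecycle: null only for explicitly persistent agents
  lastAttestedAt: Date;     // when the owner last re-confirmed need and scope
}
```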
Why This Is a Security Game-Changer
With autonomy comes delegated authority. Without identity constructs and access boundaries, these agents introduce risks such as:
- Over-permissioning
- Opaque decision-making
- Credential sprawl
- Lack of accountability
- Lack of observability and auditing
- Hallucinations or errors
- Lack of just-in-time (JIT) elevation for agents
- Poor secret hygiene
Considerations by Agent Category
Human-Delegated Agents
- Risk: Over-privileging can amplify misuse of user credentials
- IAM Challenge: Grant least-privilege without crippling usefulness
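One hedged way to strike that balance is to issue the agent a narrow, time-boxed subset of the user's permissions rather than the user's own credentials. The sketch below is illustrative only; the scope names and the grantCalendarDelegation helper are invented for the example and not tied to any IAM product.

```typescript
// Illustrative delegation grant: the agent receives a narrow, expiring subset
// of the delegating user's permissions instead of the user's own credentials.

interface DelegationGrant {
  delegatingUser: string;
  agentId: string;
  scopes: string[];   // deliberately smaller than the user's full entitlement
  expiresAt: Date;
}

function grantCalendarDelegation(user: string, agentId: string): DelegationGrant {
  return {
    delegatingUser: user,
    agentId,
    scopes: ["calendar.read", "calendar.events.create"],  // no mail, no files
    expiresAt: new Date(Date.now() + 60 * 60 * 1000),     // one hour, then re-request
  };
}
```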
Application-Bound Agents
- Risk: Service-account proliferation, credential sprawl, stale secrets
- IAM Challenge: Rotate keys at scale, map agent identity to organisational units
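A sketch of what rotation at scale might look like, assuming a hypothetical credential store (listAgentCredentials, rotateSecret); a real implementation would lean on the platform's own secret-management APIs.

```typescript
// Hypothetical scheduled job: rotate stale agent credentials and keep each
// agent identity mapped to an owning organisational unit.

interface AgentCredential {
  agentId: string;
  orgUnit: string;          // organisational unit accountable for this agent
  secretLastRotated: Date;
}

declare function listAgentCredentials(): Promise<AgentCredential[]>;
declare function rotateSecret(agentId: string): Promise<void>;

const MAX_SECRET_AGE_DAYS = 30;

async function rotateStaleSecrets(): Promise<void> {
  const now = Date.now();
  for (const cred of await listAgentCredentials()) {
    const ageDays = (now - cred.secretLastRotated.getTime()) / 86_400_000;
    if (ageDays > MAX_SECRET_AGE_DAYS) {
      await rotateSecret(cred.agentId);
      console.log(`Rotated secret for ${cred.agentId} (org unit: ${cred.orgUnit})`);
    }
  }
}
```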
Impersonation Agents
- Risk: Harder to distinguish genuine actions from agent actions
- IAM Challenge: Enforce non-repudiation and audit trails when personas shift
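One hedged pattern here is to log every action with both the persona presented and the underlying agent identity, so the two are never conflated. The record below is illustrative; the field names and example values are invented.

```typescript
// Illustrative audit record that separates the persona an impersonation agent
// presented from the underlying agent identity, to preserve non-repudiation.

interface AgentAuditRecord {
  timestamp: Date;
  agentId: string;           // the real, stable agent identity
  personaPresented: string;  // the digital persona used for this action
  onBehalfOf: string | null; // the human or system that delegated the task, if any
  action: string;
  target: string;
  signature: string;         // e.g. a digest signed by the agent platform
}

const example: AgentAuditRecord = {
  timestamp: new Date(),
  agentId: "agent-7f3c",
  personaPresented: "support-bot@example.com",
  onBehalfOf: "jane.doe@example.com",
  action: "send_email",
  target: "customer-4411",
  signature: "<platform-signed digest>",
};
```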
Third-Party/Vendor Agents
- Risk: Blind spots in vendor-issued tokens, hidden privilege creep
- IAM Challenge: Establish trust anchors, monitor unknown calls continuously
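A possible shape for that trust anchor, sketched with invented issuer URLs and a hypothetical flagForReview hook: only tokens from explicitly trusted vendor issuers are accepted, and anything else is surfaced for review.

```typescript
// Hypothetical trust-anchor check for vendor-issued agent tokens:
// accept only explicitly trusted issuers and flag everything else for review.

const TRUSTED_VENDOR_ISSUERS = new Set([
  "https://agents.vendor-one.example",
  "https://agents.vendor-two.example",
]);

interface VendorAgentToken {
  issuer: string;
  subject: string;   // the vendor agent's identity
  scopes: string[];
}

declare function flagForReview(token: VendorAgentToken, reason: string): void;

function isTrustedVendorAgent(token: VendorAgentToken): boolean {
  if (!TRUSTED_VENDOR_ISSUERS.has(token.issuer)) {
    flagForReview(token, "unknown issuer");   // blind spot: token from an unrecognised vendor
    return false;
  }
  return true;
}
```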
Ephemeral vs. Persistent Agents
- Risk: Ephemeral agents may not be tracked; persistent ones accumulate permissions
- IAM Challenge: Automated de-provisioning and periodic re-attestation
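Both controls can be sketched as a single periodic review job, assuming hypothetical directory functions (listRegisteredAgents, deprovisionAgent, requestReattestation): expired ephemeral agents are removed automatically, and persistent agents are flagged once their attestation lapses.

```typescript
// Hypothetical clean-up job: remove expired ephemeral agents automatically and
// flag persistent agents whose periodic re-attestation has lapsed.

interface RegisteredAgent {
  agentId: string;
  lifetime: "ephemeral" | "persistent";
  expiresAt: Date | null;   // set for ephemeral agents
  lastAttestedAt: Date;     // owner last confirmed the agent is still needed
}

declare function listRegisteredAgents(): Promise<RegisteredAgent[]>;
declare function deprovisionAgent(agentId: string): Promise<void>;
declare function requestReattestation(agentId: string): Promise<void>;

const REATTEST_EVERY_DAYS = 90;

async function reviewAgentLifecycles(): Promise<void> {
  const now = Date.now();
  for (const agent of await listRegisteredAgents()) {
    if (agent.lifetime === "ephemeral" && agent.expiresAt && agent.expiresAt.getTime() < now) {
      await deprovisionAgent(agent.agentId);       // ephemeral agents never linger
    } else if ((now - agent.lastAttestedAt.getTime()) / 86_400_000 > REATTEST_EVERY_DAYS) {
      await requestReattestation(agent.agentId);   // long-lived agents must be re-justified
    }
  }
}
```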
Authorised vs. Guest/Untrusted Agents
- Risk: Untrusted agents in sandboxes can still exfiltrate data
- IAM Challenge: Fine-grained conditional access, real-time behaviour analytics
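As a hedged illustration, a conditional-access decision for a guest agent might combine static policy (sandbox only, read-only scopes) with a real-time behaviour signal such as data-egress volume. The threshold and field names below are made up for the example.

```typescript
// Illustrative conditional-access check for a guest/untrusted agent:
// static policy combined with a real-time behaviour signal (data egress volume).

interface GuestAgentRequest {
  agentId: string;
  scopeRequested: string;
  inSandbox: boolean;
  bytesEgressedLastHour: number;   // fed from real-time behaviour analytics
}

const EGRESS_LIMIT_BYTES = 5_000_000;   // example threshold, not a recommendation

function evaluateGuestAccess(req: GuestAgentRequest): "allow" | "deny" {
  if (!req.inSandbox) return "deny";                         // guests never leave the sandbox
  if (!req.scopeRequested.endsWith(".read")) return "deny";  // read-only scopes only
  if (req.bytesEgressedLastHour > EGRESS_LIMIT_BYTES) {
    return "deny";                                           // unusual egress: possible exfiltration
  }
  return "allow";
}
```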
Stay tuned for the next post, “Why OAuth Falls Short”, with further insights and solutions.