Agentic AI Risks in Autonomous Agent Interaction

Picture a future where AI agents do more than execute tasks. They negotiate with each other, re-prioritise workloads, and even trigger system-wide changes without human sign-off. This future is no longer science fiction. The rise of agentic AI systems, technologies that can set goals, plan strategies, and collaborate with other agents, is transforming cybersecurity in ways that demand urgent attention.

While the business benefits are clear (faster operations, smarter automation, and reduced manual effort), the risks are equally significant. Agentic AI risks in autonomous agent interaction emerge when agents build feedback loops, make decisions collectively, and act beyond their intended scope. The result is a level of systemic risk that traditional single-agent deployments never faced.

What Is Agentic AI and Autonomous Agent Interaction?

Unlike traditional AI models that only respond to prompts, agentic AI can set its own objectives, plan steps, and execute tasks over long timeframes. This marks a shift from “passive AI” that waits for instructions to “active AI” that adapts and learns continuously.

When multiple agentic AIs interact, complexity increases dramatically. This inter-agent dialogue resembles an unmanaged supply chain: one weak link can compromise the entire ecosystem.

Key characteristics of agentic AI systems include:

  • Autonomy: Agents can act without human intervention once goals are defined.
  • Adaptivity: Agents can adjust behaviour dynamically as inputs change.
  • Goal orientation: Large objectives can be broken into smaller, self-directed tasks.
  • Memory and state retention: Agents hold context across interactions, which can persist longer than intended.

When agents interact, they may:

  • Delegate or reassign tasks to one another.
  • Share partial results, increasing efficiency but also propagating errors.
  • Influence peer agents’ strategies, creating unexpected behaviours.

Each layer of interaction expands the attack surface. A compromised or manipulated agent can undermine the entire system.

Real-World Scenarios

Google Cloud: Agentic SOC with Gemini in Security

Google has showcased an agentic SOC where agents triage alerts, conduct investigations, and initiate responses.

Risk: If the triage agent dismisses a true positive, downstream agents may accept the error, suppressing escalation and leaving a real threat unchecked.
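
This failure mode can be sketched in a few lines of Python. The agent names and the triage rule below are purely illustrative (not Google's implementation): because the downstream agent accepts the upstream verdict at face value, a single false negative suppresses escalation end-to-end.

```python
# Illustrative sketch only: a downstream agent trusts the triage agent's
# verdict without re-validation, so one false negative closes a real threat.

def triage_agent(alert: dict) -> dict:
    # Flawed triage rule: only SSH brute-force is treated as a real threat.
    verdict = "true_positive" if alert["type"] == "ssh_bruteforce" else "benign"
    return {**alert, "verdict": verdict}

def response_agent(alert: dict) -> str:
    # Accepts the upstream verdict at face value, with no second opinion.
    return "escalate" if alert["verdict"] == "true_positive" else "close"

# A genuine threat the triage rule does not cover is silently closed:
real_threat = {"type": "c2_beacon", "host": "srv-14"}
print(response_agent(triage_agent(real_threat)))  # close
```

A simple countermeasure is to require independent re-scoring (or random human sampling) before any agent is allowed to close an alert outright.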

 

Microsoft: Security Copilot Agents and Partner Agents

Microsoft Security Copilot integrates multiple agents for phishing, data security, and identity, alongside partner-built agents from Tanium and Netskope.

Risk: When agents exchange context across vendors, a single poisoned input can propagate automatically. This creates an inter-agent supply chain where compromise in one area spreads system-wide.

 

AWS: AgentCore Identity for Agent Management

AWS introduced AgentCore Identity to govern agent identities and manage credentials across AWS and third-party services like Slack and Salesforce.

Risk: Poorly managed credentials may allow one agent to impersonate another, directing malicious actions and undermining peer trust.
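
One way to reason about the mitigation is a task token bound to a single agent identity and a single permitted action. The sketch below is a hypothetical scheme using HMAC-signed tokens; it is not the AgentCore Identity API, and all names are illustrative.

```python
# Hypothetical sketch: per-agent, per-action task tokens. A token issued to
# one agent for one action cannot be replayed by a peer or for another action.
import hashlib
import hmac

SECRET = b"demo-only-shared-secret"  # in practice: per-agent keys in a vault

def issue_token(agent_id: str, action: str) -> str:
    # The token binds the requesting agent identity to a single action.
    msg = f"{agent_id}:{action}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify(agent_id: str, action: str, token: str) -> bool:
    expected = issue_token(agent_id, action)
    return hmac.compare_digest(expected, token)

# The policy agent is issued a token for reading alerts only.
token = issue_token("policy-agent", "read_alerts")

print(verify("policy-agent", "read_alerts", token))   # True
# Replaying the token under another identity, or for a different action, fails:
print(verify("action-agent", "read_alerts", token))   # False
print(verify("policy-agent", "delete_logs", token))   # False
```

The design point is scoping: narrow, verifiable credentials stop a compromised agent from borrowing a peer's authority.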

 

Rapid7: When Agentic AIs Talk to Each Other

Rapid7 warns that when autonomous agents interact, containment zones and kill switches may fail.

Risk: Feedback loops can cause agents to pursue goals that drift from the original design. For example, a policy agent may relax compliance standards to help an action agent meet a performance target.
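
This kind of goal drift can be shown with a toy loop. The thresholds and agent roles are invented for illustration: without a hard floor, a "helpful" policy agent keeps relaxing its compliance threshold until it merely matches the action agent's actual behaviour.

```python
# Toy feedback loop: each round the action agent misses its target, the
# policy agent relaxes the compliance threshold by 5 points. All values
# are illustrative; percentages are integers to keep arithmetic exact.

def run_loop(rounds: int, floor_pct: int = 0) -> int:
    threshold = 95   # required compliance score (%) to approve an action
    success = 60     # the action agent's actual compliance (%)
    for _ in range(rounds):
        if success < threshold:
            # "Helpful" relaxation, clamped to an optional hard floor.
            threshold = max(threshold - 5, floor_pct)
    return threshold

print(run_loop(10))               # 60: drifts down to match actual behaviour
print(run_loop(10, floor_pct=90)) # 90: a hard floor pins the drift
```

The guardrail here is deliberately dumb: a fixed floor the loop cannot negotiate away, which is exactly the property a kill switch or containment zone needs.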

Facts and Statistics

  1. The global agentic AI market was valued at US$5.1B in 2024 and is expected to reach US$7.38B in 2025 (CAGR ~44.8%).
  2. By 2030, the market could grow to US$47.01B.
  3. Agentic AI reduces task time by an average of 66.8% (76% faster trip planning, 71% faster budget optimisation).
  4. Microsoft’s Security Copilot includes 6 built-in agents and 5 partner agents as of March 2025.
  5. By 2026, enterprises may have more autonomous agents than human users.
  6. Microsoft has identified failure modes such as memory corruption and intent hijacking.
  7. 40% of agentic AI projects may be scrapped by 2027 due to cost and complexity.
  8. By 2028, 15% of daily work decisions may be made by agentic AI, and 33% of enterprise apps could integrate agents.
  9. 70% of consumers would let AI book flights; 65% would let it book hotels.
  10. Large enterprises report that 60% of internal knowledge requests are managed by agentic AI.
  11. ISG reports adoption is accelerating, but governance challenges persist.
  12. Deloitte reports autonomous agents remain under development for enterprise scale.
  13. ServiceNow released autonomous AI agents for Security & Risk in May 2025.
  14. Blue Prism found enterprises shifting toward agentic process automation.
  15. One study found 21% of coding agent actions were insecure, exposing sensitive data.
  16. Evaluation studies focus 83% on technical metrics; only ~15% include safety metrics.
  17. The Aegis Protocol tested 1,000 agents against 20,000 attacks, achieving a 0% attack success rate under layered security.
  18. The Manus project in China launched a fully autonomous coding agent in 2025.
  19. Microsoft estimates 1.3B AI agents by 2028, up from millions in early 2025.
  20. Over 80,000 microbusinesses are already powered by agentic AI.
  21. Forecasts suggest 60% of new enterprise AI projects in 2025 will include agentic elements.
  22. Fortune 100 companies already use agentic AI for 40% of workflows.
  23. Gartner warns of “agent washing”, noting only ~130 genuine projects out of thousands.
  24. Microsoft reports autonomous agents are moving beyond pilots into daily workflows.
  25. Many projects fail to show real-world value due to reliance on technical benchmarks alone.

Agentic AI risks in autonomous agent interaction are no longer theoretical. From Google’s agentic SOC to Microsoft and AWS integrations, these systems are already reshaping enterprise security operations. While they offer efficiency and scalability, they also expand the attack surface, create inter-agent supply chains, and raise new compliance and governance challenges.

For CIOs and CSOs, the lesson is clear: the promise of agentic AI must be matched with rigorous oversight. Secure agent identities, monitor inter-agent communication, test for emergent behaviours, and keep human-in-the-loop controls active. The future of autonomous agents is coming fast, and those who prepare now will harness the benefits while avoiding systemic risk.
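
One of those controls, the human-in-the-loop gate, can be sketched in a few lines. The action names and approval set below are illustrative assumptions, not any specific product's API: high-impact actions are held for human sign-off while routine ones proceed automatically.

```python
# Illustrative human-in-the-loop gate: high-impact agent actions are held
# for approval and audited, instead of executing automatically.

HIGH_IMPACT = {"disable_account", "rotate_credentials", "quarantine_host"}

def dispatch(action: str, approvals: set[str], audit: list[str]) -> str:
    """Run low-impact actions; hold high-impact ones unless pre-approved."""
    if action in HIGH_IMPACT and action not in approvals:
        audit.append(f"HELD: {action} awaiting human sign-off")
        return "held"
    audit.append(f"EXECUTED: {action}")
    return "executed"

audit_log: list[str] = []
print(dispatch("tag_alert", set(), audit_log))          # executed
print(dispatch("quarantine_host", set(), audit_log))    # held
print(dispatch("quarantine_host", {"quarantine_host"}, audit_log))  # executed
```

The audit trail matters as much as the gate itself: every inter-agent decision, held or executed, leaves a record a human can review.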
