Okay so I just got back from RSAC 2026 in San Francisco and I need to process what I witnessed.

Every single vendor at that conference had the word “agentic” somewhere in their booth materials. Every single one. I passed a company selling what appeared to be network cable management hardware, and their banner said “Agentic Infrastructure for the AI-First Enterprise.” I don’t know what that means. I’m not sure they do either.

Here’s the thing though. Underneath the fog machine of AI marketing and the very aggressive swag (I counted three different vendors giving away AI-themed stuffed animals, which is a sentence I never expected to write), there was something real happening at RSA this year. And if you sell security to enterprise customers — whether you’re an MSSP, a VAR with a security practice, or a carrier with a managed security play — you need to know what it was.

Let me tell you the story.

The number that got me

I was at the Cisco keynote when Jeetu Patel put a stat on screen: 85% of enterprises are testing AI agents. 5% have moved them to production.

I wrote it down. Then I sat with it.

Here’s what that actually means if you’re on the selling side: your customers want to do this. They’ve set up the sandbox. They’ve run the pilot. Somebody has given somebody in the C-suite a demo they found compelling. The AI agents are coming. And the 80 points between pilot and production are customers stuck between “we want to do this” and “we don’t trust it yet.”

That gap between want and trust? That’s where partners live. That’s exactly where partners have always lived. Cloud was the same story ten years ago. So was zero trust three years ago. A technology the customer wants but doesn’t know how to safely deploy. An intermediary who can bridge that gap. That’s the whole channel business model in one sentence.

What the vendors were actually launching

Cisco had the biggest practical announcement, in my opinion. They introduced something called Duo Agentic Identity — which, yes, sounds extremely like a superhero name, but bear with me. It’s basically a way to give AI agents a verified identity and map them to a human employee who’s accountable if the agent does something wrong.

Think about the real version of that problem. Your customer deploys an AI agent to handle IT helpdesk tickets. That agent has access to the ticketing system, the employee directory, maybe some credentials. Who authorized it? What’s it allowed to do? If it pulls sensitive data it shouldn’t have, who’s responsible? Right now, in most enterprises, the answer to all of those questions is “nobody knows.” The agent is basically running on the honor system.
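To make those questions concrete, here’s a minimal sketch of what an agent identity record could look like: the agent gets its own verified ID, a named human sponsor, and a deny-by-default scope list. Everything here is illustrative — the class name, fields, and scope strings are my assumptions, not Duo’s actual schema or API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    """Hypothetical agent identity record; fields are illustrative, not Duo's schema."""
    agent_id: str                 # verified identity for the agent itself
    sponsor: str                  # the human employee accountable for its actions
    allowed_scopes: set[str]      # what the agent is permitted to touch
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def authorize(self, scope: str) -> bool:
        """Deny by default: the agent may act only inside explicitly granted scopes."""
        return scope in self.allowed_scopes

# The helpdesk example from above: ticketing and directory access, nothing else.
helpdesk_bot = AgentIdentity(
    agent_id="agent:helpdesk-01",
    sponsor="jane.doe@example.com",
    allowed_scopes={"tickets:read", "tickets:write", "directory:read"},
)

assert helpdesk_bot.authorize("tickets:read")      # in scope: allowed
assert not helpdesk_bot.authorize("payroll:read")  # out of scope: denied
```

The point of the sketch is the shape, not the code: every action is attributable to a named human, and anything outside the granted scopes fails closed instead of running on the honor system.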

That’s not a hypothetical. Cisco’s own threat research found that attackers are overwhelmingly targeting identity infrastructure — the stuff that authenticates users and brokers trust. AI agents are about to massively expand that attack surface. Cisco’s bet is that the market needs Zero Trust for AI agents the same way it needed Zero Trust for users.

The managed service opportunity there is real.

Palo Alto Networks was also showing something interesting on the network side — a Prisma SD-WAN troubleshooting agent that can autonomously investigate network outages, pull telemetry, correlate logs, and give you a root cause analysis with a one-click remediation option. Gartner is apparently predicting that by 2030, 50% of organizations will use agentic NetOps with minimal human involvement.
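The loop that agent runs — pull telemetry, correlate logs, name a root cause, stage a fix — can be sketched in a few lines. This is a toy model of the pattern, not Prisma’s actual API; every function name and log line here is invented for illustration. The one design point worth noticing: the remediation is staged, not applied, so a human still owns the click.

```python
# Toy sketch of an agentic NetOps troubleshooting loop. All names and data
# are hypothetical stand-ins, not Prisma SD-WAN's real interfaces.

def pull_telemetry(site: str) -> dict:
    # Stand-in for querying real device and link metrics.
    return {"site": site, "link_loss_pct": 14.0, "bgp_state": "Idle"}

def correlate(telemetry: dict, logs: list[str]) -> str:
    # Toy correlation: match observed symptoms against a known failure signature.
    if telemetry["bgp_state"] != "Established" and any(
        "HOLD TIMER EXPIRED" in line for line in logs
    ):
        return "BGP session dropped after hold-timer expiry on the primary uplink"
    return "unknown"

def investigate(site: str, logs: list[str]) -> dict:
    telemetry = pull_telemetry(site)
    root_cause = correlate(telemetry, logs)
    # Stage the remediation for one-click approval; never auto-apply.
    fix = "reset BGP session on primary uplink" if root_cause != "unknown" else None
    return {"root_cause": root_cause, "proposed_fix": fix, "auto_applied": False}

report = investigate(
    "branch-042",
    ["Jan 10 03:12 bgpd: HOLD TIMER EXPIRED peer 10.0.0.1"],
)
print(report["root_cause"])
```

That `auto_applied: False` flag is the whole governance argument in miniature: the agent does the investigation grunt work, and accountability for the change stays with a person.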

2030 sounds far away. It’s four years. In channel years, that’s two business planning cycles.

The thing nobody was saying out loud

Here’s what I noticed when I got people away from their booths and into the hallway conversations. There’s real uncertainty about accountability.

If an AI agent makes a decision that causes a data breach — routes a document somewhere it shouldn’t, grants access to something it shouldn’t, takes an action that exposes a customer — who gets sued? The vendor who built the agent? The enterprise that deployed it? The MSSP who was supposed to be monitoring it?

Nobody has a clean answer. The governance frameworks are six to eighteen months behind the technology. And in the meantime, vendors are asking partners to sell managed services around AI agents that don’t yet have a clear liability map.

I talked to three different MSSP owners at the conference and asked each of them: are you putting AI agents in your managed security SOC workflows yet? Two of them said they’re evaluating. One said “absolutely not until the contracts catch up.”

That last guy isn’t wrong. He’s also going to be eighteen months behind.

What I’d actually do with this

Look, I’m not here to tell you to bet the company on AI agent security before the governance frameworks exist. That would be irresponsible and you’d rightfully ignore me at the next conference.

But I am going to say this: the partners who are building the practice now — standing up the knowledge base, getting the Cisco certifications, figuring out what a managed AI governance service actually looks like in their stack — those partners are going to be two years ahead when the governance catches up and enterprise buyers start writing checks at scale.

The SOCRadar announcement at RSAC is worth noting: they’re offering free AI Agent and Automation Training to MSSP partners right now. As in, the education cost is zero. The barrier to starting is not money — it’s time.

The stuffed animals were, I want to be clear, a separate issue.


Got a story from the conference floor? A vendor who said something memorable, or something they probably wish they hadn’t said? You know where to find me. My DMs are open. We do this together.

See also: Why your MSP tools are the attack surface now | The ScreenConnect CVE your MSP clients haven’t patched