You Taught Your Employees to Spot Social Engineering. Nobody Told Your AI.

You’ve forced your employees to sit through endless hours of watching little animated emails get caught on a hook. You’ve drilled them on calls that sound like the CEO asking for Apple gift cards. You’ve sent test emails about urgent doc signing and free concert tickets to see who’s still not paying attention.

But what happens when the target isn’t your employees? At least, not your human ones.

In May 2026, someone sent a tweet in Morse code. Moments later, $200,000 walked out the door. No password stolen. No vulnerability exploited. No zero-day. Just an AI that did exactly what it was asked, because it couldn’t tell the difference between a legitimate instruction and a malicious one dressed up in dots and dashes.

A few months earlier, a security researcher sat down with ChatGPT and said: “Let’s play a game.” Three words later (“I give up”), ChatGPT handed over valid Windows product keys. One of them belonged to Wells Fargo.

Different attack. Different AI. Different payload. Same root cause: an AI that was manipulated through framing, not force. This is prompt injection. And it’s the social engineering attack your security strategy never saw coming.

The Helpdesk Has a New Vulnerability

The MGM hack — the one that took down the Bellagio fountains, froze the slot machines, and cost over $100 million in four days — started with a teenager, a LinkedIn search, and a 10-minute phone call to the help desk. No malware. No sophisticated exploit. Just a convincing story told to a human being who was trying to be helpful.

AI agents are the new helpdesk. They’re helpful by design. They have permissions by necessity. And unlike your employees, they’ve never sat through security awareness training.

The attacker who drained $200k from Grok’s wallet didn’t hack anything. They just understood something most security teams haven’t fully internalized yet: AI doesn’t have instincts. It can’t feel like something is off. It processes the instruction in front of it and executes — whether that instruction came from your CEO or from someone who figured out that Morse code looks like noise to a filter and like language to an LLM.

AI Doesn’t Do Anything Different. It Does It Faster.

Here’s the thing security vendors don’t always want to say out loud: AI hasn’t invented new attack playbooks. It’s just running the old ones faster.
Thankfully, that means the defense hasn’t changed either.

The organizations genuinely prepared for AI-accelerated threats aren’t the ones scrambling to build AI-specific defenses. They’re the ones who already did the foundational work — proper segmentation, least privilege access, certificate-based authentication, continuous risk assessment. Because a well-segmented network doesn’t care if the attacker is a nation-state or a language model. It just doesn’t let things go where they’re not supposed to go.

The Grok attacker got $200,000 because an AI agent had wallet permissions, no transaction limits, and no human in the loop to say “wait, that’s weird.” The controls weren’t there. The AI just did what it was told.
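To make that concrete, here’s a minimal sketch of the kind of guardrail that was missing, assuming a hypothetical agent tool layer where every transfer passes through a policy gate with a hard cap and an out-of-band human approval step. The names and thresholds (TransferRequest, require_human_approval, the dollar limits) are invented for illustration, not taken from any real agent framework.

```python
from dataclasses import dataclass

# Hypothetical policy limits -- pick numbers that match your own risk tolerance.
MAX_AUTONOMOUS_TRANSFER = 500    # below this, the agent may act on its own
HARD_TRANSFER_CAP = 5_000        # above this, nothing goes out at all

@dataclass
class TransferRequest:
    destination: str
    amount: float
    reason: str  # the instruction that triggered the request, kept for audit

def require_human_approval(request: TransferRequest) -> bool:
    """Stand-in for a real out-of-band approval step (ticket, chat prompt, pager)."""
    answer = input(f"Approve sending {request.amount} to {request.destination}? [y/N] ")
    return answer.strip().lower() == "y"

def guarded_transfer(request: TransferRequest, execute_transfer) -> str:
    """Every agent-initiated transfer goes through this gate, never around it."""
    if request.amount > HARD_TRANSFER_CAP:
        return "blocked: amount exceeds hard cap"
    if request.amount > MAX_AUTONOMOUS_TRANSFER and not require_human_approval(request):
        return "blocked: human approval denied"
    execute_transfer(request)  # the only code path that can touch the wallet
    return "executed"
```

The specific numbers don’t matter. What matters is that the limit and the approval step live outside the model, where no amount of clever prompting can talk them out of the way.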

Your AI Is an Identity. Treat It Like One.

When you connect an AI agent to your environment — whether it’s a security tool, an automation workflow, a customer-facing chatbot, or an agentic system with real-world permissions — you’ve added a new identity to your network. It has credentials. It has access. It can be manipulated. The question isn’t whether you trust the AI. It’s whether you’ve applied the same zero trust principles to that AI that you’d apply to any other identity on your network:
  • Does it have least privilege access, or more permissions than it needs?
  • Is it segmented so a compromise doesn’t become a full network incident?
  • Is there a human verification step before it takes irreversible actions?
  • Are you monitoring its behavior the same way you’d monitor a user account?
That’s not an AI problem. That’s a you problem. And it’s entirely fixable.
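In practice, that checklist is less exotic than it sounds. Here’s a minimal sketch, with invented agent names, scopes, and segment labels, of what treating an AI agent as an identity can look like: an explicit allowlist of actions per agent, and an audit trail for every decision, just as you’d keep for a user account.

```python
from datetime import datetime, timezone

# Hypothetical registry: the agent is enrolled like any other identity,
# with explicit scopes instead of blanket access.
AGENT_IDENTITIES = {
    "support-chatbot": {
        "scopes": {"read:tickets", "write:ticket-replies"},  # least privilege
        "segment": "dmz-support",                            # where it is allowed to talk
        "needs_human": {"write:refunds"},                     # irreversible actions get a person
    },
}

def authorize(agent_id: str, scope: str, audit_log: list) -> bool:
    """Authorize a single action by an AI agent and log it like a user event."""
    identity = AGENT_IDENTITIES.get(agent_id)
    allowed = identity is not None and scope in identity["scopes"]
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "scope": scope,
        "allowed": allowed,
    })
    return allowed

audit_log: list = []
print(authorize("support-chatbot", "read:tickets", audit_log))    # True: within its scopes
print(authorize("support-chatbot", "transfer:funds", audit_log))  # False: denied and logged
```

None of this is AI-specific machinery. It’s the same identity and access discipline you already apply to people and devices.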

The Attacker Didn’t Need to Be Clever. They Just Needed You to Leave the Door Open.

Social engineering has always worked the same way: find the trusted entity with access, manipulate it into doing what you want. The target used to be a person. Now it’s the AI your team deployed last quarter, with full access and zero training on how to say no.
The fundamentals haven’t changed. The attack surface just got bigger.

