A woman walks into the lobby of a corporate office. It’s the first time that she has ever set foot in this building. No one here has ever seen her before. She’s not an employee. She’s nobody’s guest. She has no credentials.
But you wouldn’t know that from the way people react to her. Actually, they don’t react at all. They tune her out, like a particularly dull office fixture. Not even a break room foosball table—more like a plastic potted plant bolted to the floor: always there and never worth stopping to admire.
Unobstructed, she strides right up to the reception desk.
“Hi there,” she chirps, a little sheepishly.
The receptionist’s head snaps up from her monitor with an eagerness that suggests she’s happy for the distraction. (Jerry in accounting is yet again replying-all to emails that should really be private.)
The relief quickly gives way to puzzlement as she takes in the stranger at her desk. Her face crumples in thought as she tries—and fails—to place the visitor.
“I hate to bother you,” the woman continues, “but I’m interviewing at the office next door, and I spilled coffee all over my resume. Is there any way that I could ask you to print a new copy for me? I have the file right here.”
The woman produces a flash drive with a flourish. Ta-da! The receptionist’s face softens into a smile. Of course she’ll help this poor, unfortunate soul. After some of the expletives she let fly in Los Angeles rush hour traffic this morning, she could use a little good karma.
The receptionist plugs the USB stick into her computer without a second thought. The two make small talk as she navigates to the lone file on the drive and double clicks, setting off a complex chain of events that will lead, by the end of the day, to compromising the entire network.
See, the resume isn’t a resume at all, but a cleverly disguised bit of malware, which is now secretly installing itself on the receptionist’s computer. She doesn’t suspect a thing, and who can blame her? The woman seems nice enough, and her parents raised her to be kind.
What you just read is based on a true story, one in which I was the visitor duping a well-meaning receptionist into compromising her company. No good deed goes unpunished, right?
Although I do need to stress that I didn’t actually infect anyone’s computer with malware, and I was there with permission. The company hired me to do a physical assessment of the premises. (It’s kind of my thing.) And as you can see, I found some weaknesses.
But I usually find weaknesses when I do assessments, physical or digital. Especially weaknesses to social engineering.
Despite all the trainings, the bulletins from law enforcement, the stories of multimillion-dollar heists and the warnings from experts (like me!), social engineering attacks continue to catch us unawares.
You’d think it would be simple enough: “Don’t take USB sticks from strangers.” But social engineering works precisely because it preys on our most human instincts, such as the desire to be helpful.
And scammers exploit other emotions, too.
Attackers play on people’s curiosity, presenting unlikely stories and promising fabulous rewards that entice victims to find out what happens next.
We see this tactic a lot in “pig butchering” scams, where attackers pose as wealthy investors with an “opportunity” sure to make the victim millions. (It won't.)
Attackers evoke fear, often by posing as a boss, cop or other authority figure and threatening job loss or jail if the victim doesn’t do as they’re told. This guise works far too well because we’re taught to obey authority figures basically from the moment we’re born.
Attackers use the threat of scarcity, much like old-school infomercials: “Act now! While supplies last!”
For example, around open enrollment time, scammers often pose as HR or health insurance reps: “The window to enroll in a plan is closing, so you better give me your Social Security number pronto!”
Attackers compel victims with social proof. Most of us want to blend into the crowd. If we get an (admittedly sketchy) email claiming we’re the last person in our organization to take the (surprisingly invasive) employee survey, we’ll hop right on it.
It’s just like being peer-pressured back in high school: Everyone is doing it! Except in this case, “it” means “handing your password over to a hacker” instead of “smoking cigarettes behind the cafeteria.”
To stir up these emotions, scammers concoct pretexts (fake stories) and personas (roles they play). For example:
“I’m Susan from IT. I’m new, so that’s why you haven’t heard of me yet. But you have heard of the email migration, right? I need to set up your new account. You’re the last one in your department. If I could just have your password—”
Sometimes I think starting a theater company for these people would cut the number of social engineering attacks in half. Give them a healthy outlet for their fantasies. Hey, I get it—I enjoy wearing wigs when I break into clients’ buildings.
Not to be a pessimist, but we do need to face facts. We’re never going to completely beat social engineering. We’ll never develop a spam filter that attackers can’t dupe with a clever disguise. We’ll never have a foolproof test for telling malicious fakes from the genuine article, whether it be a text from your bank or a klutzy lady with a coffee-stained resume.
As long as people have emotions, scammers will take advantage of them. (Maybe our AI overlords need to hurry up.)
That’s good news for my job prospects as Chief People Hacker, not so much for the rest of you.
On the bright side, all my years of fraudulently posing as a fraudster (meta!) have taught me that the more obstacles we put in would-be attackers’ ways, the less likely they are to catch us flat-footed.
Toward that end, here are some of the most effective ways your organization can keep social engineering attacks at bay.
Social engineering attacks rely on strong emotions such as fear and social pressure for one reason: to make you act before you think.
Oh no—my boss says we’re going to be in deep trouble if I don’t pay this unusually large invoice immediately. Better get on it!
My advice is to slow down and evaluate any text, email, phone call or other message before acting. It’s easier said than done, but it’s the single most effective defense against social engineering attacks. If most folks read their emails closely and asked questions before reacting, they’d see the red flags right before their eyes.
Wait a minute. Our vendor payments aren’t usually this big. And why is my boss emailing me about it directly instead of sending it through the accounting system? I’d better follow up on this.
One more piece of advice: When following up on suspicious requests, try to use a different communication channel. If your boss sent a strange email, call them to confirm. If the original message is a social engineering attack, responding directly to it would lead you right to the scammers.
Tailoring your training seems obvious, but too many organizations rely on generic cybersecurity trainings that don’t speak to the real attacks their people face.
I can’t tell you how many times I’ve reviewed a client’s training only to find that it’s extremely dated, focusing on things attackers don’t even do anymore. (Out: Nigerian princes. In: Cryptocurrency investments.)
Awareness trainings should be built around both industry-standard best practices and your organization’s unique context. Are you getting specific kinds of phone calls? Do scammers use particular pretexts and personas when they target your people? Incorporate that into security awareness training.
One type of training that gets less attention than it deserves is the use of cyber range exercises.
Cyber ranges are physical or virtual environments that simulate real-world networks and cyberattacks. We use them to run cyber crisis simulations, where we put executives and other team members in a simulated attack—such as a ransomware infection—and see how they respond.
Lest you think I’m just shilling for Big Cyber Range, let me say this: Perhaps the biggest benefit of a cyber crisis simulation is that it can help you figure out what’s missing from your crisis response plans.
Many organizations walk into our cyber ranges with a plan in place, but when the rubber hits the road, they quickly find huge gaps in those plans: attack tactics they hadn’t considered, responsibilities they never assigned, communication plans they never clarified.
By running through a cyber crisis simulation, teams can determine who’s responsible for what and who’s working with whom before hackers come knocking. That way, no one is sitting around sipping coffee in the break room while the network burns like the "This is fine" dog.
One of the easiest ways to stop attackers in their tracks is requiring verification for every important or unusual request: paying invoices, sharing confidential information, helping your CEO buy iTunes gift cards for the cleaning staff.
(Okay, that last one is definitely a scam.)
The problem is that many organizations use verification factors that are easy to guess, such as birthdays or start dates. Instead, I recommend factors that are a lot harder to fudge or find out.
For example, last summer, an executive at Ferrari foiled an attempted vishing (voice phishing) scam by asking the fraudsters—who were using voice cloning tools to pose as the CEO!—if they remembered what book the real CEO had recently recommended. The flustered scammers immediately hung up.
Unless you run a library, you probably can’t base all verification on book recommendations. But you can do other things. One of my clients used rotating passwords that changed every Monday. Unfortunately, that was the only verification factor they used, so I was still able to crack their system by tricking someone into giving me the password.
Which brings me to my next point: layers. Don’t ask for one verification factor. Ask for two or three. The higher the stakes of the request, the more factors you should require. It’s way harder for scammers to fool anyone when they need to collect multiple pieces of information.
You can put everyone through state-of-the-art training and develop airtight policies, but if people don’t have the tools they need to practice what you’re preaching, it all amounts to practically nothing.
Let’s go back to the story I told up top. I took some creative liberties with it. It’s not true that nobody noticed me. One person did—the woman that I followed into the building.
See, to get in, employees needed to swipe a badge. I didn’t have one, so I had to use the ancient art of tailgating—that is, following real close behind someone so they’ll hold the door open for you because slamming it in your face would be rude.
I could tell as I followed this woman that she knew something was up. The stink-eye she shot me said it all. Figuring my cover was blown, I braced for some burly security guard to scoop me up and toss me out cartoon-style.
Much to my surprise, that didn’t happen. I was able to flit around planting USBs like the malware fairy for hours before staging the climactic showdown at the reception desk.
And while I stood there, watching the receptionist print my resume, I caught part of a curious conversation happening just behind me. Someone was describing someone that they had seen, and boy it sure sounded a lot like me. My outfit. My height. My hair color.
Carefully, quickly, I glanced over my shoulder. There she was. The woman from the door, describing me to a security guard, who was scanning through camera feeds on his laptop. Looking for me while I stood just a few feet away.
Somehow, I was still able to sneak out undetected.
When I went back to discuss the results of my assessment with my client, the first thing I asked was: “What took so long?” The woman spotted me while I was entering the building, but she didn't report me until three or four hours later.
We did some digging, and it turned out she had wanted to report me much sooner. She just didn’t know how. It took her a couple of hours to find the right email address and a couple more for security to get back to her.
I see this kind of thing a lot: A person catches me tailgating. I say, “Thank you!” in my chipperest of voices. They look me up and down. They don’t recognize me. But what are they supposed to do?
They don’t know what to say. They don’t know how to stop someone from following behind, or how to politely decline a stranger’s request to use a printer. People aren’t necessarily taught to question others when it’s human nature to want to help—much less flat-out say “no.”
My point is that it’s not enough to issue commands such as “Don’t let strangers into the building” or “Report suspicious people to security.” Walk employees through exactly what that process looks like, and give them opportunities to practice as part of their training.
If you see someone walking around without an ID badge and they don’t look familiar, stop them. Ask, “Can I help you? Let’s get you checked in.” If someone asks to use a printer, say, “I need to get you a badge first—just a formality! Let’s go to security real quick.”
And don’t expect people to memorize these steps. It’s easy to get flustered when you’re facing the real thing. Make sure that every workstation—every device and desk—has a readily available guide for responding to physical and digital social engineering attacks. Include important phone numbers, email addresses and step-by-step reporting instructions.
If that woman had been able to act as soon as she saw me, I never would have been able to trick the receptionist.
So, if you really think about it, it’s her fault. Not mine.