I’ve been talking a lot about “insider threats” recently because I believe they’re one of the thorniest problems in security. Malware – while still dangerous – is largely a known threat with a robust set of best-practice prevention and recovery solutions even though it’s only been around for a few decades at most. Insider threats, on the other hand, have been around for as long as humans have existed, and there still aren’t fully acceptable best practices on how to detect, interdict, or respond to them. Sure, lots of experts have their opinions, me included. Trouble is, those opinions are coloured by the specialisation and biases of the expert offering them:
That’s why the subject fascinates me. Cybersecurity boffins seem to approach insider threats primarily from a detection standpoint. That is, they prefer to use covert monitoring and machine learning to notice suspicious changes in user behaviour. [1] It’s a very sterile and impartial tactic, made all the more palatable because data that identifies a potential threat actor immediately becomes Someone Else’s Problem (SEP). The security staff file a report and leave the room.
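To make that detection mindset concrete, here’s a minimal sketch of the sort of check a “user behaviour analytics” tool might run: flag a user whose latest daily data-transfer volume deviates sharply from their own baseline. The feature, field names, and threshold are my own illustrative assumptions, not any vendor’s actual method.

```python
# Toy "user behaviour analytics" check: flag users whose most recent daily
# data-transfer volume is far outside their personal baseline.
# Purely illustrative; real UBA products use far richer features and models.
from statistics import mean, stdev

def flag_anomalies(daily_mb_by_user: dict[str, list[float]], z_threshold: float = 3.0) -> list[str]:
    """Return users whose latest day is more than z_threshold std devs above their baseline."""
    flagged = []
    for user, history in daily_mb_by_user.items():
        if len(history) < 8:                 # need some history before judging anyone
            continue
        baseline, latest = history[:-1], history[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (latest - mu) / sigma > z_threshold:
            flagged.append(user)
    return flagged

# Hypothetical example: Bob suddenly moves ~40x his normal daily volume.
history = {
    "alice": [120, 130, 110, 125, 118, 122, 119, 127, 121],
    "bob":   [95, 102, 98, 101, 97, 99, 103, 100, 4000],
}
print(flag_anomalies(history))  # ['bob'] ... and at that point it becomes SEP
```

Note how neatly the output hands the problem off: the tool produces a name, the analyst files a report, and everything after that belongs to someone else.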
Physical security boffins seem more focused on insider threat response. Once a bad actor has been revealed (doesn’t matter how), security guards escort the offender off the property, hopefully without having to get the local police involved (and, ideally, without a firearms incident). It’s very tidy for them, since they only have to manhandle the perp. If the attribution was wrong, well, that’s SEP. The guards were just following protocol.
Auditors, like cybersecurity people, focus on detection. By scrutinising records of business processes, they detect clues that might show evidence of (or suggest an intent to commit) fraud. Where cyber folks scrutinise systems behaviour, auditors are more prone to scrutinise work performance. Still, it all ends up SEP after the auditors sound the alarm and leave.
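For flavour, here’s a toy version of the kind of records check an auditor might automate: flagging possible duplicate vendor payments in an exported ledger. The file and column names are hypothetical, and real audit analytics go far deeper than this.

```python
# Illustrative audit check: flag potential duplicate vendor payments
# (same vendor, same amount, same date) in an exported ledger CSV.
import csv
from collections import Counter

def duplicate_payments(ledger_path: str) -> list[tuple[str, str, str]]:
    with open(ledger_path, newline="") as f:
        keys = [(row["vendor"], row["amount"], row["date"]) for row in csv.DictReader(f)]
    return [key for key, count in Counter(keys).items() if count > 1]

for vendor, amount, date in duplicate_payments("payments.csv"):
    # The auditor's job ends here: write it up, hand it off, leave the room.
    print(f"Possible duplicate payment: {vendor} {amount} on {date}")
```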

Which leads us to Human Resources and Legal people, who almost always seem to get stuck with the “P” part of SEP: once a bad actor has been identified by someone else, the back-office boffins have to pore over their holy books and pronounce judgment, preferably the sort that boots the insider threat without triggering a lawsuit or a workplace shooting. Then they punt the potentially dangerous work back over the fence to Facilities.
Notice what’s missing from all of these approaches? It goes detect-respond-detect-respond … so who’s performing the interdiction part in the academic model? For my money, that’s the most important phase of insider threat management. It’s also the most often avoided part of the process – not overlooked, mind – because of how sensitive an act of interdiction can be.
For clarity, I am using the military definition of the word, not the religious or civil law definitions. I prefer this use for its precision and its direct applicability to the insider threat problem. In all insider threat events, someone must act swiftly to either prevent whatever damage the bad actor intends to inflict or else blunt its manifestation before the bad actor fully achieves their objective. Swift and decisive action is required to delay, disrupt, or disable the bad actor’s capability to do … whatever “bad thing” they intend to do.
As an example: Andy the security guard is watching employees enter the building one morning and notices that something not body-shaped is imprinting under worker Bob’s suit coat. Andy is a professional and recognises the shape of a holstered handgun. Knowing that firearms aren’t allowed within his company’s facilities, Andy has reasonable confidence that he has detected a potential insider threat. Even if Bob has no ill intent, bringing a firearm past the entrance is a wilful violation of company policy. Andy could be wrong … but he can’t afford to be.

Andy reports the suspected threat to Bob’s manager, to HR, to Legal, and to the Security department. The support players all agree that no firearms are allowed in the building. So … what should they do? From a public safety perspective, the correct answer is to intercept Bob, disarm him, then remove him from the environment as swiftly and as quietly as possible. No one wants a murder to occur on their watch, but they also don’t want to trigger a panic.
The ideal tactic would be to call Bob down to a private meeting room where Security can safely search him, relieve him of his weapon, and hold him safely until appropriate corrective action can be initiated without the threat of violence. Call the cops, whatever. Seems reasonable, right?
Except that’s not what most companies do. By their nature, businesses – especially white-collar ones – are notoriously risk-averse. They don’t plan based on worst-case scenarios like “violence in the workplace.” That isn’t an indictment of modern businesses; it’s the logical effect of an organisation’s normal operating environment on its business practices. Despite what you hear on the nightly news, most of the insider threat events that businesses experience – and, therefore, focus on detecting – involve non-violent incidents like theft, fraud, disruptive behaviour, etc. I know it seems crazy, but even in the mass-shooting capital of Earth, American workers tend not to get violent in the workplace. Really.
The mitigation tactics they employ against non-violent problems are often non-confrontational. That is, they focus on detecting an insider threat and then responding to it only once it’s safe to do so. Sure, some classified information might be exposed or some money might be lost, but a lawsuit over accusations of workplace harassment would likely be far more expensive. Better to accept a small loss in order to avoid the greater loss … and the drama … and the reputational damage. It’s only logical.

Except … it’s not. Oh, the approach makes sense … if you’re certain that all of your insider threat problems are going to be civilised and non-violent. Problems that can be handled with memoranda and stern discussions. Detect-and-respond-only situations. Why create, refine, and practice any sort of active interdiction protocol when you’re not likely to ever need it?
I submit that we need to define and practice these interdiction processes for the same reason we practice evacuating company facilities when the fire alarm sounds: sure, a major fire in a modern office building is highly unlikely. Low probability … but extremely high consequences if and when it does finally happen. If such an event manifests, everyone had better be 100% prepared to bolt immediately. As Health & Safety boffins like to gloat, failure leads to funerals.
This isn’t a petty exaggeration for drama’s sake. Go download Verizon’s 2019 Insider Threat Report. On page 21, Section II, the authors graphically show a range of human behaviour changes that should alert organisation leadership to the potential for an insider threat event. To quote directly: “Taken alone, an indicator could have normal, natural causes. It’s important to understand that a single indicator, or even multiple ones, doesn’t mean your employee is an active insider threat. But they can indicate something is wrong and that more attention could enhance the employee’s well-being—and thus the organization’s [well-being].”
In the graphic, several potentially worrisome indicators stand out: “hates authority” … “makes threats” … “hostile behaviour” … “brings weapons to work.” What sort of threat picture does that combination of motivational factors draw for you? Uh huh. Yeah. The worst-case scenario. Just like the aforementioned “fire in a crowded cubicle farm” example, we’re talking low probability but extremely high potential consequences.
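If it helps to see how individually benign indicators stack up into that worst-case picture, here’s a deliberately crude scoring sketch. The indicator labels echo the report’s graphic, but the weights and the escalation threshold are invented for illustration; they are not Verizon’s methodology.

```python
# Toy illustration: individually weak indicators combine into a worrying score.
# Weights and threshold are invented for this example, not taken from the report.
INDICATOR_WEIGHTS = {
    "hates_authority": 2,
    "makes_threats": 5,
    "hostile_behaviour": 4,
    "brings_weapons_to_work": 10,
    "financial_stress": 1,
}
ESCALATE_AT = 10  # hypothetical trigger for the interdiction protocol

def threat_score(observed: set[str]) -> int:
    """Sum the weights of every observed indicator (unknown labels count as zero)."""
    return sum(INDICATOR_WEIGHTS.get(i, 0) for i in observed)

observed = {"hates_authority", "makes_threats", "hostile_behaviour", "brings_weapons_to_work"}
score = threat_score(observed)
print(score, "-> escalate to interdiction" if score >= ESCALATE_AT else "-> keep monitoring")
```

The point isn’t the arithmetic; it’s that any one line item could be explained away, while the combination demands someone be ready to act.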

Honestly, you don’t even need violence to need active, immediate interdiction. Consider unauthorised PCs connecting to your office network. Could be an accident, could be a legit cyber-attack. Either way, it’s imperative that security find, disconnect, and secure the offending device ASAP. Probably detain the operator, too, for questioning. No reasonable company would say “let’s wait until the criminal inside our building has finished copying all of our most classified files to their laptop and replacing them with ransomware before we start incident response. We don’t want any drama.” That would be criminally negligent. Someone must act … but who is it going to be? What are their engagement protocols? What are the limits on their authority? What special kit do they need? Does building security or the local police need to be activated?
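Even the “find the rogue device” step can be sketched in a few lines, assuming you can export two lists: the MAC addresses currently seen on the network (say, from DHCP or switch logs) and the MACs in your asset inventory. The file names and formats below are hypothetical; the hard part, as ever, is what happens after the alert fires.

```python
# Minimal sketch: anything seen on the network but absent from the asset
# inventory gets escalated for immediate interdiction.
def load_macs(path: str) -> set[str]:
    """Read one MAC address per line, normalised to lower case."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def rogue_devices(seen_path: str, inventory_path: str) -> set[str]:
    return load_macs(seen_path) - load_macs(inventory_path)

if __name__ == "__main__":
    for mac in sorted(rogue_devices("seen_macs.txt", "inventory_macs.txt")):
        # In practice this is where you page the on-call responder and
        # quarantine the switch port, not just print a line.
        print(f"Unauthorised device on network: {mac}")
```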
The time to answer those questions is not in the heat of the moment. Every second squandered is a gift to the attacker. Interdiction protocols must be well understood and regularly practiced by the designated responders … just like we do with our fire wardens and AED operators.
So … why don’t we? Because no one wants to be wrong. HR types and lawyers are conditioned by their training to be exceptionally cautious … to avoid making potentially embarrassing mistakes at all costs. Under non-emergency circumstances, this is the best way to protect the company’s profits and reputation. That said, there inevitably comes a time when the cautious, risk-averse approach is the worst possible option. If your organisation doesn’t already have an interdiction plan that you’ve trained and tested, now’s the time to create one … before you need to trigger it.
[1] This is where the so-called “User Behaviour Analytics” technologies enter the annual budget.