Insider threat cases are all over the news right now. Companies are eager to invest in complex, comprehensive monitoring technologies to detect potential rogue employees. Automated monitoring, though, can’t provide the empathy, support, and nuanced analysis that a fellow human can in detecting and mitigating potential ‘bad apples.’
Now seems like an opportune time to talk about ‘insider threats.’ It’s been a couple of weeks since the news broke that a Pakistani entrepreneur named Muhammad Fahd was arrested for having paid over a million USD in bribes to AT&T employees to unlock mobile phones, install malware on AT&T’s internal network, compromise privileged systems accounts, and install rogue network connections. According to ZDNet’s Catalin Cimpanu, the bribery scheme lasted ‘from at least April 2012 until September 2017’ and cost AT&T ‘revenue of more than $5 million/year.’ Credit where it’s due, Fahd’s corruption of corporate stooges was – if the charges are accurate – an ambitious venture that seems to have paid off … right up until it didn’t.
This case is also a great illustration for the ‘insider threat’ problem. In his 2019 book Transformational Security Awareness, Perry Carpenter explained it as: ‘Your defense-in-depth security strategy should always account for … Employees who negligently or intentionally circumvent technical controls … (and who) divert from the organization’s policies, controls, and processes.’ Put in more collegial terms, an insider threat is anyone you allow into your organisation who then betrays the organisation’s trust.
The thing about insider threats is that there are multiple types and each type requires different approaches to detect and counter. Security people usually talk about three types:
- Malicious users, who are motivated to harm their employer out of a perceived need for revenge, for entertainment, for profit, etc. These baddies are determined and energetic.
- Non-malicious users, who don’t intend to cause harm but do anyway thanks to accidents, omissions, and misunderstandings. That is, they’re people who believe they’re doing the right thing or can justify to themselves why they’re circumventing security rules.
- Infiltrators, who are outsiders deliberately engaged in criminal skulduggery. These can be corporate spies, intelligence agents, activists … pretty much anyone who joins an organisation under false pretences in order to get inside the defences and cause harm.
An effective corporate spy is extremely difficult to detect. They deliberately blend in and maintain a low profile. Contrast that with an intruder who has to act very fast to social engineer their way past all obstacles to reach their objective and then escape.
This isn’t an authoritative list; cybersecurity vendor Forcepoint has an ebook on this subject that suggests (and I agree) that there should be a fourth type:
- Compromised users, who are compelled to aid an attacker due to extortion, coercive pressure, etc. These users don’t want to harm their employer but are somehow forced to.
The AT&T employees who (it’s alleged) agreed to support Mr Fahd’s (also alleged) criminal endeavours count as insider threats. They were trusted by AT&T to represent AT&T’s interests and to obey AT&T’s policy, and instead chose to sabotage their company for financial gain. They’d probably fall under the ‘malicious user’ type, in that they accepted bribes in exchange for helping an outsider. That said, it’s certainly possible some of those folks were ‘compromised users’ … Maybe Mr Fahd (or his cronies) got dirt on them and forced them to help. I’ll be curious to read the analysis once the case is finished to see who did what and why.
Whether they did it for kicks, for revenge, or to avoid having a devastating secret revealed, the end result to AT&T is the same: at least five million dollars in losses, potential fines from regulators, backlash from stakeholders and strategic partners, and significant reputational damage. To be fair, AT&T did (per the article) catch the first wave of suborned insiders and fired them. Unfazed, Mr Fahd was able to recruit an entire new crop of willing accomplices. That’s potentially humiliating. Imagine having to explain to your stakeholders that either your personnel screening process is unable to catch criminals, or that your workplace culture is so bad that employees can be easily convinced to sabotage their employer. It’s going to be tough to spin this story to minimize the damage.
I feel a lot of empathy for the PR team that has to engage the press on this. None of the story is their fault, but as the face and voice of the company, they’re likely to take a LOT of flak.
The thing is, though, it’s likely not an AT&T-unique problem. Something like this could have happened anywhere. Workers are people and people are complicated: they have divided loyalties, distractions, misperceptions, bad influences … It certainly doesn’t help that the USA abandoned ‘lifetime employment’ in the 1980s, leading to a nationwide culture where companies have zero loyalty to their disposable meat-based resources. Add in abusive management and many workers can be convinced to betray their employer with a sufficiently large bribe or a bad day at the office. The more people you have, the more likely it is that one of them will turn their coat.
This is why security departments – working in very close coordination with HR, Legal, and upper management – must actively monitor worker behaviour for predictive indicators of disgruntlement, hostility, unusually high error rates, and security control bypasses. Sure, a behaviour-monitoring program can be messy and uncomfortable. Even if your organisation has established clear rules that all communications and spaces will be actively monitored, the idea that someone is watching for signs of future criminality reeks of Minority Report-style ‘pre-crime.’
The trouble is, there’s only so much you can monitor. Watching a colleague eyes-on requires an observers-to-workers ratio that quickly approaches 1:1. There are filters and proxies that can monitor e-mail, instant messaging, and phone calls for suspicious keywords, but those devices have to be ‘fed’ the words and phrases to listen for. That, in turn, requires extensive intelligence research and constant updates to the filtering criteria … Meanwhile, all of the detected keyword ‘hits’ still have to be reviewed by a human to put the flagged information into context. You can deploy pseudo artificial intelligence to try to ‘learn’ suspicious patterns, but that only helps if you monitor the channels that potential insider threats use to discuss their dirty deeds … at work … and it still requires human review.
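As a toy illustration of why keyword filtering generates leads rather than answers, here’s a minimal sketch (the watch-list phrases and messages are invented for the example):

```python
# Hypothetical watch-list; a real program needs constant,
# intelligence-driven updates to these phrases.
SUSPICIOUS_KEYWORDS = {"unlock code", "wire transfer", "burner phone"}

def flag_messages(messages):
    """Return (message, matched_keywords) pairs for human review.

    A keyword hit is only a lead: an analyst still has to read each
    flagged message to judge whether the context is actually suspicious.
    """
    hits = []
    for msg in messages:
        lowered = msg.lower()
        matched = [kw for kw in SUSPICIOUS_KEYWORDS if kw in lowered]
        if matched:
            hits.append((msg, matched))
    return hits

# Both messages trip the same keyword; only human-read context
# separates the benign request from the suspicious offer.
inbox = [
    "Can you send me the unlock code for the demo tablet?",  # benign
    "I'll pay cash for every unlock code, no questions.",    # suspicious
]
for msg, kws in flag_messages(inbox):
    print(f"FLAGGED ({', '.join(kws)}): {msg}")
```

Note that the filter flags both messages identically: the mechanical part is cheap, but the judgement call that matters still lands on a human reviewer.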
This is why the security awareness community is starting to advocate for leveraging the entire user community to act as first-line virtual sensors. The idea is to explain to everyone in the organisation – truthfully and transparently – that most ‘insider threats’ are good people making preventable mistakes. The more that workers reach out to help their neighbours to learn and apply required security protocols, the fewer accidents and omissions there will be. Everybody wins when everyone’s eager to help support everyone else.
To be clear, I don’t buy into Mary Parker Follett’s vision of achieving ‘spiritual union’ through one’s workplace. People can’t be happy all the time. People aren’t going to get along. That said, there’s a huge positive difference between an office culture where employees actively support one another and one where every worker is left to wallow in their own existential misery.
Additionally, if everyone in a work centre is more engaged, they’ll be more likely to spot the early signs of emotional distress, job frustration, alienation, disgruntlement, and other clues that suggest a colleague is susceptible to becoming a malicious insider. By reaching out with professional and personal support, good neighbours can help alleviate the stressors that might drive a good worker over the edge, thereby pre-emptively mitigating the threat. Essentially, mitigate the disgruntled employee before they cross the line from wanting to inflict harm to actually inflicting harm.
To be clear, this approach doesn’t stop the threat of industrial spies or compromised workers. It does, however, help to address 99%+ of the likely insider threats that most organisations will face. It’s also less costly, less cumbersome, and more reliable than trying to perfect a machine-based monitoring solution. That makes it prudent to add human interaction and active engagement to your arsenal of security controls. Keep monitoring, sure, but add real people to the equation.
As a beneficial side effect, actively supportive workers make for a stronger and more welcoming office culture, which improves esprit de corps and workgroup cohesion. It should also reduce employee turnover because people want to stay where they’re valued and supported.
This isn’t to say that an active employee engagement program will safely inoculate an organisation against insider threats; rather, that it’ll significantly help reduce the instances of potential malicious insiders going too far, and it’ll help to detect those potential threats early enough to trigger additional close monitoring so that they can be stopped before inflicting extensive harm.
Cultural Allusion: Reach out and Touch Someone, AT&T’s famous 1979 advertising jingle
POC is Keil Hubert, email@example.com
Follow him on Twitter at @keilhubert.
Keil Hubert is the head of Security Training and Awareness for OCC, the world’s largest equity derivatives clearing organization, headquartered in Chicago, Illinois. Prior to joining OCC, Keil was a U.S. Army medical IT officer, a U.S.A.F. Cyberspace Operations officer, a small businessman, an author, and several different variations of commercial sector IT consultant.
Keil deconstructed a cybersecurity breach in his presentation at TEISS 2014, and has served as Business Reporter’s resident U.S. blogger since 2012. His books on applied leadership, business culture, and talent management are available on Amazon.com. Keil is based out of Dallas, Texas.