When I started writing this post, I made a typo: I wrote “Insider Treat”. That struck me as kind of funny; it’s like the opposite of a threat, right? But maybe it’s not the opposite, hear me out. It’s a hot day, and you walk into an ice cream shop, where your pal Jake works. He looks up from the counter, sees it’s you, and says, “Hey, we’re running a special on triple-scoop chocolate cones – free for my friends.”
Now, just based on the merit of being you, you got a freebie – a true insider treat.
And that’s when it hit me – a treat just for being you kind of is an insider threat; it’s when an employee uses their access to collect or give away inventory/info/ice cream to others.
So let’s talk about insider threats – starting with what counts as an insider.
The Cybersecurity and Infrastructure Security Agency (CISA), which operates under Department of Homeland Security oversight, defines an insider as “any person who has or had authorized access to or knowledge of an organization’s resources, including personnel, facilities, information, equipment, networks, and systems.” Its examples of who can be an insider are extraordinarily broad.
Actually, when you think about it, it’s sobering how easily someone can pose an insider threat.
CISA broadly defines an insider threat as “the potential for an insider to use their authorized access or understanding of an organization to harm that organization.” The CERT Insider Threat Center of Carnegie Mellon University’s Software Engineering Institute (SEI) recognizes – based on an analysis of more than 1,000 cases of insider theft – that the primary forms of malicious insider threats are intellectual property theft, IT sabotage, fraud, and espionage. According to CISA, these can be the product of collusion with outsiders, or they can be posed by third parties. In addition, unintentional insider threats are increasingly relevant. These can be the product of accidental acts or negligence.
We’re not going to get into the minutiae of insider threats; plenty of books have been written on the topic. Instead, we’ll focus on how to mitigate insider threats using controls, why it may be tough to get personnel to accept controls, and what approaches to implement these controls are generally more successful.
Let’s focus on access control as a common example of where you can start to mitigate insider threats, and the level of controls to aim for.
In many cases, early-stage companies focus on building up, paying relatively little attention to the potential for insider threat. Their general game plan? Grant everyone access to everything so every engineer can go directly to production and change things live. The goal is to make changes fast, avoiding anything that stands in the way of doing that. Speed is the byword.
But allowing everyone unfettered access to everything is a problem from both a technology and a security perspective. As your company grows, you’ll want to learn how to counter insider threats, and you also want to reduce the business threats that arise when systems don’t exist to monitor who can access code and data, and when.
And – just by the way – countermeasures to reduce technology-associated insider threats are necessary under Compliance frameworks. For example:
Failing to have adequate internal controls, including those that address insider threat mitigation, could cause a company to fail an audit for not meeting Section 404 of the Sarbanes-Oxley Act (SOX).
Starting insider threat risk mitigation is part of a company’s growth process, but it’s a cultural shift that can leave employees feeling untrusted. If employees who could log in yesterday suddenly can’t today, resentment is likely. So it’s wise to avoid sudden shifts in corporate culture, especially when the shift could be interpreted as stemming from distrust of employees.
So yes, one response to insider threat could be to have the legal team and HR impose a policy meant to combat it. Just let employees know that everything will be different from now on. But a better way may be to start with detecting, rather than preventing, certain behaviors.
For example, let’s say you have decided that a change management system is necessary to make sure that all changes to the IT production environment are done in a controlled way – to minimize the risk of unauthorized changes and the risk of disruption. Instead of just blocking access one day, a softer rollout might be to start by limiting the number of times a developer can go into production and make a change. You could explain that you want the number of changes to go down over time, so you’re going to start tracking the number of changes made directly into production. You’ll then send the numbers to each vice president under the CTO, so they know how many direct production changes their team makes weekly or monthly, and what kinds.
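The tracking step above can be sketched in a few lines. This is a minimal illustration, not a real pipeline: the audit-log format, names, and team mapping are all hypothetical stand-ins for whatever your deploy logs or version control events actually look like.

```python
# Minimal sketch: count direct-to-production changes per team and per
# change type, so the totals can be reported up to each VP.
# The log entries below are invented examples.
from collections import Counter

# Hypothetical audit-log entries: (engineer, team, change_type)
audit_log = [
    ("alice", "payments", "hotfix"),
    ("bob", "payments", "config"),
    ("carol", "search", "hotfix"),
    ("alice", "payments", "hotfix"),
]

def summarize_direct_changes(entries):
    """Tally direct production changes by team, and by (team, change type)."""
    by_team = Counter(team for _, team, _ in entries)
    by_type = Counter((team, kind) for _, team, kind in entries)
    return by_team, by_type

by_team, by_type = summarize_direct_changes(audit_log)
```

Even a report this simple is enough to start the conversation: the numbers go to leadership on a schedule, and the stated goal is to watch them trend down.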
You’ll be far from the final step of implementing change management, but you will have let people know that controls are starting. In a way, it’s like posting cameras on street corners. You’re probably not going to steal someone’s wallet, but if you were so inclined, the fact that there are cameras everywhere might make you think twice. Similarly, when people realize that their actions are being monitored, it’s often enough to get them to be more careful about what they do.
At some point you’ll want to build preventative controls. (You’ll also find that implementing controls successfully, including those that mitigate insider threats, requires Compliance and Security to work together.) The model you’ll eventually need to move to is role-based, data-driven access control. So the guy in marketing doesn’t have permission to change code in production, but maybe the person in engineering does. But not everyone in engineering should have access to all the data. Not to Protected Health Information (PHI), and not to credit card information, for example. Deciding whether a given person should have access will be based on the data involved. This requires writing a data classification policy — not just downloading one off the internet, but writing one for your company and then using it as the basis for a series of controls.
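The shape of that model can be sketched as a check that combines a role’s clearance with a resource’s classification. The roles, classification labels, and policy table here are illustrative assumptions, not a recommended policy – the point is only that the decision flows from your data classification, not from who asks.

```python
# Minimal sketch of role-based, data-classification-driven access control.
# All role names and classification labels below are invented for illustration.
ROLE_CLEARANCE = {
    "marketing": {"public"},
    "engineering": {"public", "internal"},
    "engineering-phi": {"public", "internal", "phi"},  # e.g. a vetted subset
}

RESOURCE_CLASSIFICATION = {
    "landing-page-copy": "public",
    "service-source-code": "internal",
    "patient-records": "phi",
}

def can_access(role: str, resource: str) -> bool:
    """Allow access only if the role's clearance covers the resource's classification."""
    classification = RESOURCE_CLASSIFICATION.get(resource)
    return classification in ROLE_CLEARANCE.get(role, set())
```

Notice that the policy lives in data (the two tables), so changing who may touch PHI is a policy edit reviewed by Compliance, not a code change buried in an application.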
Finally, access controls need to provide for emergency access. (HIPAA, for example, allows emergency disclosure of a patient’s PHI without the patient’s authorization.) There has to be a way to make changes in an emergency, even without going through the channels of proper authorization. You’ll often see the term “Break the Glass” used in this kind of situation: sometimes you need immediate access, the way you need to break the glass to get a fire extinguisher in an emergency. But just the way that breaking the glass triggers an alarm, an emergency use of access should trigger a flag of what happened. This creates an audit trail and helps keep use of emergency access limited to situations where it’s absolutely needed.
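A break-glass mechanism can be as simple as the sketch below: access is granted immediately, but every use is recorded. The names and the in-memory log are illustrative assumptions; a real system would write to a tamper-evident audit store and page Security or Compliance for review.

```python
# Minimal "break the glass" sketch: grant emergency access unconditionally,
# but record who used it, for what, and why, so every use leaves a trail.
import datetime

emergency_access_log = []  # stand-in for a real, tamper-evident audit store

def break_glass(user: str, resource: str, reason: str) -> bool:
    """Grant emergency access, logging who, what, why, and when."""
    emergency_access_log.append({
        "user": user,
        "resource": resource,
        "reason": reason,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    # In a real system, this is where an alert to Security would fire.
    return True
```

The key design choice is that the alarm is a side effect of the access itself: no one has to remember to report an emergency use after the fact.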
If you keep controls at the right level, you’ll be able to explain them as part of the company’s increasing maturity – because that’s exactly what they’ll be. A startup can handle a few people making live changes that don’t get recorded properly, but when every developer does it, and their VP is staring at 5,000 changes that were made directly to production, a leader has a solid case for saying that’s not tenable, from a customer-satisfaction perspective. That lets you make it clear that the reason for controls is not distrust, but that so many things can go wrong, and bog down, from non-security perspectives, when everyone has unfettered access. The company risks losing money and losing customers. Those are factors that people in a growing company understand.
So if controls are introduced the right way, they will more likely be accepted as part of the growing company’s culture, while at the same time helping with insider threat mitigation and keeping your company safer. For further guidance on how to implement internal controls correctly, turn to our Compliance experts at anecdotes who will be happy to share their vast experience with you.
You could also try giving out free ice cream whenever you introduce a new control. Couldn’t hurt.