Insider Threats and Weapons of Mass Destruction: What’s the Connection?
Steve Durbin, Managing Director at the Information Security Forum (ISF), believes that defending against insiders is “always a matter of trust.” He also reportedly said that negligent and accidental insiders pose a larger security risk than malicious insiders (CIO.com, 2016).
On the second point we agree, although we classify insider threats as careless, compromised, or malicious. Indeed, instances of compromised insiders whose stolen credentials led to a large breach are far more common than cases of malicious insiders purposely causing damage or stealing information. Edward Snowden may have wreaked havoc with his own admin credentials, but his case was the exception. The bigger risk is careless and compromised employees.
In the same article, the author Thor Olavsrud quotes Mr. Durbin saying that in addition to trust, organizations need to conduct risk assessment, put stringent controls in place, and “limit access to information to those who actually need it. That includes upper management.” We agree with this statement as well.
But we have to part ways with the first point, that in the end, “everything is going to hang on trust.” According to Mr. Durbin, “To combat the threat, organizations must invest in a deeper understanding of trust and work to improve the trustworthiness of insiders.”
While developing a trusting relationship with employees may reduce the risk of those employees being the source of an insider attack, the bottom line is that no matter how much trust is developed, careless insiders (e.g., those using bad security practices) and compromised insiders (e.g., those whose credentials have been stolen or whose computers are infected with malware) are not typically malicious. They aren’t really breaking any trust, and that’s what’s important to remember. Even the best employees may not realize they’re doing something that could harm the organization.
Now if you implement proper training so that these employees know exactly what they are and aren’t supposed to be doing, you may mitigate some of that risk. But training alone is not sufficient. Good employees get infected with malware or make themselves vulnerable in other ways, which means you need more than trust and training.
You need to monitor how your data is being accessed, by whom, and what’s being done with it, then determine whether those actions fit the context of an employee’s role and match their working habits. When an employee inadvertently clicks a malicious link in a phishing email, malware may be installed on their computer. When the operator of that malware navigates your company’s network and accesses applications, database data, and files from that employee’s device, they’re doing it with the employee’s credentials. It’s possible that neither you nor your employee knows the machine has been infected. And most concerning, you can’t readily distinguish the actions of your trusted employee from those of a hacker exfiltrating data from your network using that employee’s credentials.
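To make the idea concrete, here is a toy sketch (not any vendor’s actual product) of one way to spot credential misuse: build a per-account baseline of which data sets each user normally touches, then flag accesses that fall outside that baseline. The user names, table names, and log format are all hypothetical.

```python
from collections import defaultdict

def build_baseline(access_log):
    """access_log: iterable of (user, table) pairs from historical activity."""
    baseline = defaultdict(set)
    for user, table in access_log:
        baseline[user].add(table)
    return baseline

def flag_anomalies(baseline, new_events):
    """Return events where an account touches a table it has never used before."""
    return [(u, t) for u, t in new_events if t not in baseline.get(u, set())]

# Historical activity establishes what "normal" looks like per account.
history = [("alice", "payroll"), ("alice", "hr_records"), ("bob", "inventory")]
baseline = build_baseline(history)

# "alice" suddenly pulling from a customer-data table may be malware
# operating with her stolen credentials rather than alice herself.
events = [("alice", "payroll"), ("alice", "customer_pii")]
print(flag_anomalies(baseline, events))  # [('alice', 'customer_pii')]
```

A real system would of course baseline far more signals (time of day, volume, client device, query shape), but even this simple set-membership check illustrates why behavior, not identity, is what separates the employee from the attacker using the employee’s credentials.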
Alternatively, think about the valued and hard-working employee who is being productive without realizing they’re doing something risky. For example, your accountant may want to catch up on his backlog while on vacation, and copies your company’s quarterly financial data to his Google Drive so he’ll have access to it. As Imperva ADC discovered, Man-in-the-Cloud attacks enable hackers to take over cloud drives, meaning they can steal the data in that drive, then publish it or sell it. So as we can see, trust wouldn’t be enough to prevent a breach. The accountant likely doesn’t realize he’s doing anything wrong, and just wants to be productive. You’d likely want to encourage and facilitate this dedication, not hinder it or discourage him. And yet, he’s putting your organization at risk.
After understanding these points, it becomes clear that working to improve trust among insiders is far from sufficient to combat threats.
Much like the Defense Threat Reduction Agency, the US Department of Defense’s official Combat Support Agency for countering weapons of mass destruction, we believe in trust, but verify. Our latest offering, Imperva CounterBreach, does just that. It uses machine learning and analytics to identify risky insider behavior, not by reading employees’ email or monitoring their web habits, but by identifying the data they’re accessing, how they’re accessing it, and what they’re doing with it. It then flags only those incidents in which a significant anomaly has been detected.
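The phrase “flags only significant anomalies” can be illustrated with a minimal statistical sketch, assuming a feed of per-day access counts per account; this is a generic z-score check, not Imperva’s actual model. Only deviations far outside an account’s own history (here, more than three standard deviations) raise an alert, which is what keeps alert volume low.

```python
import statistics

def is_significant_anomaly(history_counts, today_count, threshold=3.0):
    """Flag today's data-access volume only if it deviates sharply
    from the account's own historical behavior."""
    mean = statistics.mean(history_counts)
    stdev = statistics.stdev(history_counts)
    if stdev == 0:
        return today_count != mean  # no variation in history: any change stands out
    z_score = (today_count - mean) / stdev
    return abs(z_score) > threshold

# An account that normally reads ~120 records a day.
normal_days = [120, 130, 110, 125, 118, 122, 127]

print(is_significant_anomaly(normal_days, 124))   # False: ordinary day, no alert
print(is_significant_anomaly(normal_days, 5000))  # True: possible bulk exfiltration
```

The design point is that thresholds are relative to each account’s own baseline, so an analyst sees the accountant quietly copying quarterly financials in bulk, not a flood of alerts about normal work.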
So please, definitely encourage a culture of trust in your organization. Make sure your employees know what behavior is risky and what to avoid, put risk controls in place, and limit access to information to those who actually need it. But also remember to verify: verify employees are accessing information with credentials assigned to them, detect when an employee’s activity suddenly becomes abnormal (it may be a hacker in disguise), and make sure they’re not copying your company’s sensitive data to locations that put that data at risk.