Only Human: Protecting Against Unwitting Insider Threats

Published by:
John
Published on:
June 9, 2017

Within the corporate world, the spectre of insider threat is a difficult one to come to terms with. A malicious insider has, by virtue of their position, access to privileged information and functionality that an outside attacker could obtain only with great difficulty.

Likewise, whilst a traditional attacker can potentially be deterred by good perimeter security, it is much more difficult to protect against attacks from the inside. The more straightforward attempts to do so often cause as much harm as they help, making systems frustrating for employees and reducing efficiency. In some cases, frustrated employees will find workarounds to circumvent draconian access controls – leaving the organisation with systems that do more harm than good.

However, the question of insider threat is often painted in black-and-white terms. Even the term ‘insider threat’ implies a certain level of antagonism. Generally speaking, what comes to most people’s minds when they think of an insider is an angry, disgruntled employee or a malicious infiltrator. But it’s not that simple.

Insider threat does not always originate from a few “bad apples” within an organisation – almost anyone, from software developers and accountants to senior executives, has the potential to be an insider. This breadth is partly why intentional and accidental leaks or attacks from an internal source are hard to mitigate. Employees often need privileged access and special permissions in order to do their jobs, thereby sidestepping traditional security measures such as firewalls, patching and blocking. You cannot patch human nature.

Pretend you didn’t see that…

According to Verizon’s 2017 Data Breach Investigations Report, the most common cause of accidental breach within a company is misdelivery – mailing paper documents to an unintended recipient, for example, or mistyping an email address. It is difficult to reliably prevent routine mistakes like this, but there are some more proactive technology precautions available.

For instance, reputable threat intelligence providers can deliver forewarning and insight into lookalike or “typosquatting” domains designed to resemble your own. These domains, which often vacuum up any email mistakenly sent to them, can be a dormant risk just waiting for someone to send them something sensitive and valuable.
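To illustrate how lookalike domains arise, the sketch below generates simple “typosquat” candidates for a domain so they can be checked against new registrations. The two variant strategies shown (character omission and adjacent-character swap) are illustrative assumptions only – commercial threat intelligence feeds cover far more permutations.

```python
# Hypothetical sketch: enumerate simple typosquat candidates for a domain.
# Only two variant strategies are shown; real tooling uses many more
# (homoglyphs, added characters, alternative TLDs, and so on).

def typosquat_candidates(domain: str) -> set[str]:
    name, _, tld = domain.partition(".")
    variants = set()
    # Single-character omissions: "example" -> "exmple", "exampl", ...
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:] + "." + tld)
    # Adjacent-character swaps: "example" -> "xeample", "eaxmple", ...
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        variants.add(swapped + "." + tld)
    variants.discard(domain)  # the legitimate domain is not a candidate
    return variants
```

Running the candidates against a feed of newly registered domains gives early warning that someone may be lying in wait for misdirected mail.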

Another option for reducing this risk is to use an encrypted out-of-band solution to transfer files – services such as SendSafely and Microsoft SharePoint exist to fill this need. Using one of these systems, an email with sensitive attachments contains only a link. Accessing this link requires a user to pass the third-party system’s access control mechanism – i.e. SharePoint will only allow you to access links that you are authorised to view.

Although slightly more cumbersome, systems like this can provide an additional layer of security – access can be revoked at will, and some solutions will even allow you to specify “self-destructing” attachments that can only be picked up for a limited time after they are sent. Likewise, keeping this data on an out-of-band channel means that an attacker who gains access to your email account cannot peruse every message and attachment you have ever sent or received.

To err is human

According to the same Verizon DBIR report, the next most common cause of accidental leaks – accounting for about 20% of them – is “publication error”. From the name it would be reasonable to assume this might refer to the public distribution of a secret or internal page never intended for a wide external audience – a user clicking “Publish” on “tax-returns-2017.xls” instead of “blog.docx” in the content management system, for example.

However, information leakage can occur in a variety of places. In some cases, proud developers or system administrators may overshare technical information about their projects on LinkedIn or similar social networking sites. What they do not realise, of course, is that the itemisation of each technology they used while working on their most recent project is a perfect starting point for an attacker performing reconnaissance.

Developers may paste snippets of code to paste sites as an easy way to debug and share them with each other, not realising that these sites are constantly being scraped and searched by cybercriminals for anything that could be of use. In some cases, there have even been instances of secret keys and passwords being uploaded to code management sites like GitHub in configuration files by unwitting developers. Needless to say, if this information is out there, it is only a matter of time before an attacker finds it.
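A basic precaution against the leaks described above is to scan text for credential-shaped strings before it leaves the organisation. The sketch below is a minimal, assumed set of patterns for illustration; dedicated tools such as git-secrets or truffleHog use far richer rule sets.

```python
import re

# Illustrative sketch: flag lines that look like hard-coded credentials
# before they are pasted publicly or committed to a public repository.
# These two patterns are simplistic assumptions, not a complete rule set.
SECRET_PATTERNS = [
    re.compile(r"(?i)(password|passwd|secret|api[_-]?key)\s*[=:]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # the shape of an AWS access key ID
]

def find_possible_secrets(text: str) -> list[str]:
    """Return a human-readable list of suspicious lines in the given text."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                hits.append(f"line {lineno}: {line.strip()}")
                break  # one report per line is enough
    return hits
```

Hooked into a pre-commit check or a paste workflow, even a crude filter like this catches the most obvious mistakes before an attacker can.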

There are types of publication error that are more esoteric than simply “a page which should not have been accessible”, too. Returning to the realm of source code management, the ‘.git’ folder has been the bane of more than one web developer. The .git folder is where Git stores a repository’s history and metadata, allowing developers to track and commit changes to their source code when working collaboratively. However, if the folder is unwittingly made available to outside attackers – say, due to improper permissions on a web server – it becomes easy to abuse its function and download a complete copy of any given website’s source code.

Besides the loss of intellectual property, this can potentially reveal sensitive configuration files containing passwords and other privileged information. Having access to the back-end of a web application also makes it significantly easier for an attacker to look for flaws and loopholes in the application’s logic.
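One common mitigation is to have the web server refuse all requests for the repository folder outright. A minimal sketch for nginx (the equivalent exists for Apache and other servers) might look like this:

```nginx
# Refuse any request whose path contains the .git directory, so an
# accidentally deployed repository cannot be downloaded over HTTP.
location ~ /\.git {
    deny all;
}
```

The same principle – deny by default anything that is infrastructure rather than content – applies to other hidden files and folders, such as editor backups and configuration files.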

Unfortunately, there isn’t a simple solution for accidental disclosure through human error. It’s a problem better addressed by strong policies and the targeted education of users in possession of privileged access rights. General awareness is not enough – power users must follow clear guidelines at all times to ensure safety and security on a day-to-day basis.

For a more proactive approach, companies can consider whether a threat assessment by a reputable organisation is appropriate for their needs. Such assessments can identify the many ways – both overt and subtle – in which privileged information can leak into their online presence.

A social animal

Finally, there’s a more direct way in which employees can be leveraged without their knowledge. External, malicious actors rely on employees who are overly trusting or easily deceived to gain a foothold on the internal system. As a method of attack, it is both easy to automate and reliant on a fundamentally unpatchable flaw – simple human helpfulness. This means that it’s relatively easy to send a dragnet of malicious emails across an organisation and wait for one to take hold.

In some cases, particularly in more targeted spear-phishing attacks, the attacker might be looking for some specific information. More often than not though, the only goal is to get their victims to execute malicious code found in an attachment or innocent-looking link.

Cybercriminals have zeroed in on just how lucrative this can potentially be. From December 2015 to March 2016, the proportion of phishing emails containing encryption ransomware (as opposed to a request for information or funds, a drive-by download page, or a link to a credential-harvesting site) went up from 56% to 93%. Ransomware is easier than ever to prepare and send, and lacks the uncertainty and skill requirement of more complicated attacks. By comparison, it offers quick and easy dividends to an attacker.

As with accidental disclosure of information, the best way to prevent social engineering is through comprehensive user education. That said, because this is an active attack rather than elective behaviour, there are some more proactive options available to organisations.

Intelligent ingress filters and email security systems that scan attachments are standard fare, of course. It is important that these are regularly updated and monitored so that they continue to identify potentially dangerous emails and attachments. Likewise, performing simulated attack (so-called “red team”) assessments on your organisation can provide feedback on just how effective your security controls are when confronted with a real-world attack.

Minimising the Surface

Murphy’s Law states that whatever can go wrong, will go wrong. It may not be possible to “fix” human nature, but one way of minimising risk in your organisation is to reduce the number of avenues by which threats can propagate. Think about your day-to-day routine tasks in detail – are there established policies in place, or are things more laissez-faire? If someone needs to send a sensitive attachment within the company, is there a specific system in place, or are they likely to use whichever is most convenient at the time?

Reducing the number of different services and channels of communication not only makes monitoring easier for sysadmins and security professionals, it also reduces confusion for employees. If your organisation uses 12 different domains containing your company’s name, employees are liable to assume that an email from any such domain is legitimate. If, on the other hand, they know that internal email addresses always end in “companyname.com”, they will be much more likely to recognise a malicious email before it’s too late to do anything about it.
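A single canonical domain also makes the rule easy to enforce mechanically. The sketch below flags senders whose domain merely resembles the internal one; “companyname.com” is a placeholder standing in for your organisation’s real domain.

```python
# Minimal sketch, assuming a single canonical internal domain.
# "companyname.com" is a placeholder, not a real recommendation.
INTERNAL_DOMAIN = "companyname.com"

def looks_internal(sender: str) -> bool:
    """True only if the address is exactly on the canonical domain."""
    _, _, domain = sender.rpartition("@")
    return domain.lower() == INTERNAL_DOMAIN

def is_suspicious(sender: str) -> bool:
    """Flag lookalikes: domain contains the company name but isn't canonical."""
    _, _, domain = sender.rpartition("@")
    return "companyname" in domain.lower() and not looks_internal(sender)
```

A mail gateway applying a check like this can tag or quarantine lookalike senders before an employee ever has to exercise judgement.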

Conclusion

The unwitting insider represents a perfect storm of risk to an organisation. Not only do they have access to privileged information and functionality, but their positive personality traits – enthusiasm, generosity and a collaborative spirit – can be easily exploited by malicious outsiders to get what they want.

There are solutions available, but they are rooted in education and behaviour modification, and require more commitment than simply improving perimeter controls. As cybercriminals and nation states alike continue to take advantage of the ease with which these attacks can be perpetrated, the need for strong, consistent security policies becomes greater than ever.