“Not everything that can be counted counts, and not everything that counts can be counted.” – Albert Einstein
The application of traditional threat intelligence techniques to information security is a relatively young practice. As the community’s ability to collect and share intelligence grows, the techniques we use to analyse it become more sophisticated. Assuming we have access to a “firehose” of big data, how can we model and analyse security threats most effectively? This question has led many organisations to attempt to standardise the process and provide a consistent way of distributing threat data. How effective are they at doing this, and how could they do better? How can we make better use of these imperfect sources of intelligence to inform our analyses?
The Indicator of Compromise (IOC) is a perfect example of the kind of data sharing that has grown up with the information security industry. Originating from the signature databases of antivirus engines, the term simply refers to any piece of data – such as a file hash, IP address or URL – that can be associated with malicious software or actors. Usually, this association is a result of direct observation – a file hash will be taken from malware “in the field”, or a URL from callbacks made to a particular malicious domain.
At their core, IOCs exist to reduce an attack to its essential identifying characteristics – details that are as unique as possible to the threat in question. This allows the large amounts of data associated with an attack to be condensed into a form that can be standardised and easily shared: typically the content and format of the malicious data, the date it was collected, and various other pieces of metadata. As IOCs have become a popular method of intelligence sharing, a number of standards and services have grown up around them. Standards such as YARA and OpenIOC define consistent ways of representing IOCs that can be parsed by a computer, and services such as ThreatConnect and AlienVault OTX aim to build a community around sharing them.
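The core mechanic – reducing an attack to a uniquely identifying value and checking systems against it – can be sketched in a few lines. This is a minimal illustration, not any particular standard or product; the hash set and function names are invented for the example.

```python
import hashlib

# Hypothetical set of known-bad SHA-256 hashes; in practice these
# values would come from a shared IOC feed rather than be hard-coded.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of_file(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_ioc(path: str) -> bool:
    """True if the file's hash appears in the indicator set."""
    return sha256_of_file(path) in KNOWN_BAD_HASHES
```

The strength and the weakness of the approach are both visible here: a hash match is a near-certain identification, but changing a single byte of the malware produces a new hash that no existing indicator will catch.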
The proliferation of different standards, while well-intentioned, effectively splits the pool of available intelligence. The standards are for the most part incompatible with each other, so each one must be implemented separately if you wish to benefit from the communities and tools built around it. For simple use cases, sticking to one standard – or even relying on raw data feeds such as malicious IP blacklists – might be sufficient. If you are feeding an antivirus signature engine, for example, or only intend to detect intrusions on your own infrastructure, one source might provide all the information you need. However, IOCs can also guide analysis and provide proactive intelligence, especially when they are bundled with metadata about the attack beyond the content of the indicator itself.
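The practical cost of incompatible standards is that every feed needs its own adapter before its indicators can be pooled. A minimal sketch of that pattern, with both feed shapes invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Indicator:
    """A minimal common representation for indicators from any feed."""
    kind: str    # e.g. "ip", "url", "sha256"
    value: str
    source: str  # which feed the indicator came from

def from_blacklist_line(line: str) -> Indicator:
    """Adapter for a raw IP blacklist: one address per line, no metadata."""
    return Indicator(kind="ip", value=line.strip(), source="ip-blacklist")

def from_json_feed(entry: dict) -> Indicator:
    """Adapter for a hypothetical JSON feed with 'type' and 'indicator' keys."""
    return Indicator(kind=entry["type"], value=entry["indicator"],
                     source="json-feed")
```

Each new format you wish to consume means another adapter like these – which is exactly why the split pool of intelligence is expensive, and why the blacklist adapter can carry no metadata: its source format never had any.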
In the most general sense, aggregating IOCs can be a way to paint a picture of the current “threat landscape” of attacks and attackers. Information such as the timing and geographical area of seemingly disparate attacks may imply connections when viewed as a whole. In order to do this successfully, it’s important to gather as much data as possible. Low-quality IOCs – such as those produced by an IP blacklist – are not particularly valuable for this kind of analysis. Ideally, this kind of intelligence is derived from high-quality IOCs that contain both unique and accurate data and as much of the metadata that can be associated with an attack as possible.
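As a toy illustration of this kind of aggregation – the records and field names are invented for the example – grouping sightings by region and time window can surface clusters that no single indicator reveals on its own:

```python
from collections import Counter

# Invented example records: high-quality IOCs carrying metadata
# beyond the indicator value itself.
sightings = [
    {"value": "198.51.100.4", "country": "DE", "day": "2015-03-02"},
    {"value": "198.51.100.9", "country": "DE", "day": "2015-03-02"},
    {"value": "203.0.113.7",  "country": "US", "day": "2015-03-05"},
]

# Count sightings per (country, day); repeated pairs hint that
# seemingly disparate attacks may be connected.
clusters = Counter((s["country"], s["day"]) for s in sightings)
print(clusters.most_common(1))  # the busiest (country, day) pair
```

Note that an IP blacklist could not feed this analysis at all: without the timing and geographical metadata, there is nothing to group by.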
Not every IOC needs a value for every type of metadata that might be associated with it, but it is worth asking exactly what kind of data can be extracted from an attack or a compromise. Do we know when the attack occurred? Can we record how long attacks typically last, from beginning to end? Can we accurately determine what it was about a compromised system that caused it to be targeted in the first place? And most importantly of all: how well do the current standards answer these questions?
The knee-jerk reaction to the many standards that imperfectly track and share IOCs might be to create one unifying standard that satisfies the requirements of them all. But while this would certainly allow many different types of IOC to be converted into your format, in the long run it would only add to the confusion. Most protocols designed for sharing IOCs are better suited to reacting to specific attacks or compromises than to performing holistic analysis – and this comes down to one thing: demand. Currently, the role of an Indicator of Compromise is to serve as the input to system scans and other reactive security processes.
If the community of intelligence-sharing were more developed, we might be able to create a system that is more like an indicator of risk than an indicator of compromise – one that identifies which machines were targeted, why they were targeted, and what decides the difference between successful and unsuccessful compromise. STIX is an example of a format that attempts something similar, and has gained a lot of traction recently. In part, this is because of its compatibility and cooperation with many other popular standards such as OpenIOC and YARA. Its holistic approach to representing potential threats also has a part to play; it does so by separating indicators from other “types” such as threat actors and exploit targets, and by providing a structured language to communicate information about all of them.
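The separation STIX makes can be sketched with simplified objects. These are illustrative Python dictionaries only – a loose approximation of the idea, not the actual STIX schema – showing how an indicator, a threat actor and the relationship between them live as distinct, linked records rather than one flat entry:

```python
# Illustrative only -- not the real STIX data model. The key idea:
# indicators, threat actors and targets are separate objects joined
# by explicit relationships, so each can be enriched independently.
indicator = {
    "type": "indicator",
    "id": "indicator--1",
    "pattern": "file with a known-bad SHA-256 hash",
}
threat_actor = {
    "type": "threat-actor",
    "id": "threat-actor--1",
    "name": "example-actor",
}
relationship = {
    "type": "relationship",
    "relationship_type": "indicates",
    "source_ref": "indicator--1",
    "target_ref": "threat-actor--1",
}
bundle = {"type": "bundle",
          "objects": [indicator, threat_actor, relationship]}
```

Because the actor is its own object, new indicators observed later can be linked to it without rewriting anything – which is precisely what a flat, indicator-only format cannot express.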
The success of STIX, which is very much designed with sophisticated intelligence sharing in mind, reflects a growing awareness within the community that threat intelligence cannot occur in a vacuum. Only by sharing as much information as possible about attacks can we force attackers to do more than change a single element of their modus operandi to avoid detection. If we want to implement proactive rather than purely reactive processes, it is essential that we look at the entire “battlefield” of cyber threats – not just the type of ammunition the enemy is using.