There are three primary qualities that distinguish a high-quality Indicator of Compromise (IOC) from a low-quality one: robustness, performance, and fidelity. In this post, Part 1 of the IOC Development series, we’ll review these qualities in depth with the goal of being able to develop high-quality IOCs for the detection of malicious activity.

An IOC is an artifact that can suggest the presence of malicious activity. Generally speaking, IOCs can be broken down into two main categories: Host Based Indicators (HBIs) and Network Based Indicators (NBIs). HBIs can be further broken down into two subcategories: Operating System (OS) based indicators and File System (FS) based indicators.

Example IOCs:

HBI OS: A Windows Registry key/value

HBI FS: NTFS record of a known attacker filename and path such as “C:\temp\a.bat” (see the YARA sketch after this list)

NBI: Base64-encoded DNS TXT records longer than 75 characters
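
To make the HBI FS example concrete, here is a minimal YARA sketch. The rule name is made up for illustration, and it assumes you are scanning collected artifacts (memory captures, dropped scripts, triage or parsed MFT output) for the path string rather than querying the live file system:

rule HBI_FS_KnownAttackerPath
{
    strings:
        // Known attacker path from the example above; 'ascii wide' also
        // matches the UTF-16 form common in Windows artifacts
        $path = "C:\\temp\\a.bat" nocase ascii wide

    condition:
        $path
}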

Robustness

Cybersecurity organizations very often publish blog posts about a particular threat group or malware family and include associated indicators. More often than not, those indicators are file hashes or domain names. While a hash or URL can certainly be a high fidelity indicator, its value from a detection point of view is not particularly high, primarily because a detection that is wholly reliant on an MD5/SHA256 hash or URL is very brittle. An attacker can change a single byte of a payload and the hash detection is no longer viable. Similarly, attackers very often operate from a large number of command and control (C2) servers, and it is trivial to redirect an implant to a new C2 server. As a result, an IP address or domain as an indicator may be of limited utility.

Instead, CTI analysts should take the hashes from these reports and use them as test cases for developing their own IOCs. That is the topic of this series.

A more robust IOC may pair a high fidelity but brittle signal, such as a hash, with other less brittle but perhaps lower fidelity signals. For example, instead of an IOC consisting of a single hash, you might look for:

  • the known bad hash
  • some semi-unique strings from the malware
  • an approximate file size similar to the observed malware

Here is a concrete example in YARA:

Brittle

import "hash"

rule BrittleRule
{
    condition:
        // Matches one exact payload and nothing else
        hash.sha1(0, filesize) == "f6c21f8189ced6ae150f9ef2e82a3a57843b587d"
}

More Robust

import "hash"

rule ImprovedRule
{
    strings:
        $s1 = "text here"
        $s2 = { E2 34 A1 C8 23 FB }

    condition:
        // The exact hash still matches the original sample, while the
        // size cap plus semi-unique strings catch modified variants
        hash.sha1(0, filesize) == "f6c21f8189ced6ae150f9ef2e82a3a57843b587d" or
        ( filesize < 1MB and any of ($s*) )
}

The objective here is to create an IOC that groups enough semi-unique characteristics of the attacker activity (in this case, malware) that incremental changes to the malware will still likely yield a positive identification.

Performance

“Good” performance is a relative measurement, unique to your situation and the method by which the IOC will be deployed. A performant IOC should be fast and stable, and should not attract the attention of system administrators by eating up too many system resources. Although there are some general rules that help with building faster IOCs, there is no exhaustive guide. One general rule is to fail fast: make your broadest exclusions among the first items to be evaluated.

For example, if you are looking for a known bad hash across 100k endpoints, consider that you would have to hash every file on every system to make that determination. This would be extremely resource intensive and slow. Instead, you might exclude all paths except those containing Windows\Temp, if that is a directory where you know your target malware will potentially be saved.

Here is an excerpt from an OpenIOC definition doing exactly that:

<IndicatorItem id="1864ad4e-0cc4-43a5-8188-cf47d3bf638a" condition="contains" preserve-case="false" negate="false">
    <Context document="fileWriteEvent" search="fileWriteEvent/filePath" type="event"/>
    <Content type="string">Windows\Temp</Content>
</IndicatorItem>
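
The same fail-fast idea applies to the YARA rules above. YARA evaluates boolean conditions with short-circuit logic from left to right, so placing cheap checks such as the filesize test before an expensive whole-file hash means the hash is only computed for files that pass the earlier conditions. The sketch below reuses the hypothetical hash and strings from the earlier rules; note that it trades some robustness for speed, since all three conditions must now hold:

import "hash"

rule FailFastRule
{
    strings:
        $s1 = "text here"
        $s2 = { E2 34 A1 C8 23 FB }

    condition:
        // Cheapest check first: the SHA-1 over the whole file is only
        // computed when the size and string conditions already hold
        filesize < 1MB and
        any of ($s*) and
        hash.sha1(0, filesize) == "f6c21f8189ced6ae150f9ef2e82a3a57843b587d"
}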

Fidelity

In an ideal world, every IOC would produce only true positives, with no false positives or false negatives. We do not live in an ideal world, but there is an art to arriving at the appropriate accuracy for a given IOC. You want your IOC to be just flexible enough that you are likely to find previous and future iterations of the targeted malicious activity. If your IOC triggers on something overly specific, let’s say the exact number of bytes in a PE, then when that sample is updated or re-obfuscated your IOC will miss that future iteration.
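
As a rough illustration, the contrast below uses made-up rule names and size values: the first rule keys on an exact byte count and breaks as soon as the sample is rebuilt or padded, while the second tolerates modest changes by combining a size band with a semi-unique string.

rule OverlySpecificSize
{
    condition:
        // 'MZ' header plus an exact byte count: any rebuild, padding,
        // or re-obfuscation of the sample defeats this rule
        uint16(0) == 0x5A4D and filesize == 38912
}

rule TolerantSize
{
    strings:
        $s1 = "text here"

    condition:
        // 'MZ' header, a size band, and a semi-unique string still
        // match future iterations with minor changes
        uint16(0) == 0x5A4D and
        filesize > 20KB and filesize < 100KB and
        $s1
}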

That is a quick introduction to IOC development. The next parts of this series will cover TP/FP testing, performance testing, and documentation.