We’ve all heard the saying about missing the forest for the trees, and network security professionals are especially prone to “getting into the weeds” when identifying and remediating threats. While it’s true that we can solve 80% of our network vulnerability issues by addressing the top 10% of our attacks, if we’re only looking at that top 10%, we’re missing 90% of the forest. When we overlook that much, we lose both the big picture and visibility into key indicators of possible compromise.
We are all forced to do more with less, and that includes less time and resources to devote to incident response and remediation. However, with regular news about espionageware and other “new” malware that has been in place and active for years, it’s time we start looking at the lower end of our network issues.
The bottom 10%
All modern malware must communicate with a controller, and this is even more critical in the realm of espionageware, where the owners/authors are trying to extract data from the targeted environment. We learned from the mass mailing worms of the past that high volume traffic meant quick detection, which translated to easy malware identification and response. Malware authors soon adapted: to avoid quick detection and sustain longer periods of data extraction, they kept these threats “low key,” hiding as much traffic as possible within the daily network pattern. The result? Hidden threats would never rise to the top 10%, and if we were not looking elsewhere, we would never catch that traffic or the threats behind it.
When the equation is turned on its head and we look at the bottom 10% of issues, we start to see a more complete picture. While not everything at the bottom can or will be attributed to malware, we can identify things like malfunctioning (“chatty”) network cards, transient employee assets, and misconfigurations. Sorting out these non-malware findings takes time, but what’s left is activity that can lead to early identification of “new” malware.
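One way to “turn the equation on its head” is to rank event types by volume and start triage from the rare end instead of the noisy top. Here is a minimal sketch; the event labels and counts are hypothetical, invented purely for illustration:

```python
from collections import Counter

# Hypothetical alert log: one event-type label per observed network event.
events = (
    ["dns-nxdomain"] * 500 + ["port-scan"] * 300 + ["chatty-nic"] * 40
    + ["weekly-beacon-to-unknown-host"] * 2 + ["tls-to-rare-country"] * 1
)

counts = Counter(events)
ranked = sorted(counts.items(), key=lambda kv: kv[1])  # rarest first

# The "bottom 10%" of event types by volume: the rare items a
# top-N triage process would never surface.
cutoff = max(1, len(ranked) // 10)
bottom = ranked[:cutoff]
print(bottom)
```

The same idea scales to real flow or alert data: the sort key stays the same, only the source of the counts changes.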
Establishing a baseline
You may be thinking: how are we going to find the time to identify everything and define what is “abnormal”? When done correctly, the majority of the time is spent up front, establishing a baseline and determining what is normal for your environment. The time requirements, like the baselines themselves, are unique to each environment and can vary from an hour a week to hours per day. Once you’ve obtained a baseline, it’s a process of continuously monitoring for changes to that baseline. This continuous monitoring should already be in progress, looking for intrusions and other attacks, but now with the additional requirement of looking for abnormalities. In reality, detection and identification require only minimal additional effort, though they do depend on the legwork of your network support staff.
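A baseline can start as something very simple, such as the mean and spread of each host’s daily traffic over the observation period. The sketch below assumes hypothetical flow records of the form (host, day, bytes); the host names and byte counts are invented for illustration:

```python
import statistics
from collections import defaultdict

# Hypothetical flow records: (host, day, bytes transferred that day).
flows = [
    ("ws-101", d, b) for d, b in enumerate([1200, 1350, 1100, 1280, 1220])
] + [
    ("ws-102", d, b) for d, b in enumerate([800, 790, 40000, 810, 805])
]

daily = defaultdict(list)
for host, _, nbytes in flows:
    daily[host].append(nbytes)

# Baseline: mean and standard deviation of daily traffic per host.
baseline = {
    host: (statistics.mean(v), statistics.stdev(v))
    for host, v in daily.items()
}
print(baseline)
```

In practice you would feed this from flow logs or a monitoring platform, but the principle is the same: record what “normal” looks like before you try to spot deviations from it.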
Continuous network monitoring should keep the network norm up to date, whether formally or informally. With this type of monitoring, you see the ebb and flow of network traffic, and your analysts recognize when events like a payroll run will take more bandwidth or processor power. The process should also raise red flags when a device starts making regular weekly connections to an out-of-organization or out-of-country host, if that is not normal behavior for the device. That single machine may be the gateway to a new attack against the organization, and its traffic must be identified as either benign or hostile. But the traffic will never be classified if it is never detected or monitored.
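The red-flag case described above reduces to a simple set comparison: which hosts are talking to destinations that never appeared during the baseline period? A minimal sketch, with hypothetical host and destination names:

```python
# Hypothetical baseline: destinations each internal host normally
# contacts, collected during the baseline period.
baseline_dests = {
    "ws-101": {"mail.example.com", "files.example.com"},
    "ws-102": {"mail.example.com"},
}

# New connection log to review: (host, destination).
new_connections = [
    ("ws-101", "mail.example.com"),
    ("ws-102", "203.0.113.77"),  # never seen before: possible beacon
]

# Flag every connection to a destination absent from the host's baseline.
flags = [
    (host, dest)
    for host, dest in new_connections
    if dest not in baseline_dests.get(host, set())
]
print(flags)  # connections that must be classified as benign or hostile
```

Each flagged pair still needs human judgment, but the list is short and actionable, unlike the raw connection log.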
Detecting malware through abnormal activity
If a covert piece of malware communicates with its command and control server or repository once every other day, how much proprietary company data can be exfiltrated in a week? If that same malware isn’t detected for 18 months, how much information is lost? By shortening the time from infection to discovery, we also cut the window in which data can be extracted. Every piece of malware has a first discovery, and that discovery is most often made by a compromised organization, not by malware researchers. The easiest way to discover malware is through the abnormal activities or symptoms that are inflicted on a compromised host or network.
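The arithmetic behind those questions is worth making explicit. Using invented, back-of-the-envelope numbers (the per-connection volume and cadence below are assumptions, not figures from any real incident):

```python
# Assumed: a covert channel moving 5 MB per connection,
# connecting once every other day.
mb_per_connection = 5
connections_per_week = 7 / 2       # once every other day
weeks_undetected = 18 * 4.345      # roughly 18 months in weeks

weekly_loss = mb_per_connection * connections_per_week  # MB per week
total_loss = weekly_loss * weeks_undetected             # MB over 18 months
print(weekly_loss, total_loss)
```

Even at this modest rate, eighteen months of dwell time adds up to gigabyte-scale loss; halving the time to discovery halves the total almost exactly, which is the whole argument for watching the quiet end of the network.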
Trees in the forest
While the majority of the bottom 10% of network issues will not be related to malware, that 10% gives us a more complete “big picture” of our networks. These long-forgotten “trees” in our network “forest” have tales to tell, if we’re willing to listen. Some of those tales will be of strangers in our midst, who must be dealt with for the safety of the organization. These warning signs can lead us to earlier detection, saving the organization money and protecting our data. By neglecting the bottom 10%, we are overlooking some of the biggest sources of actionable intelligence on our networks.
We take a deeper dive into this subject in several whitepapers that address the detection of malware and abnormalities with SecurityCenter™ and Nessus®:
- Discovering Malware by Looking for Abnormalities
- Comprehensive Malware Detection with SecurityCenter Continuous View and Nessus
- 24/7 Visibility into Advanced Malware on Networks and Endpoints
- Tenable Malware Detection: Keeping Up With An Increasingly Sophisticated Threat Environment