Since the creation of tools like Nessus, the pioneering technology written by Renaud Deraison back in 1998, vulnerability management tools have helped users gain an incredible amount of insight into the inner workings of their environments. By scanning for assets that are running, discovering any associated weaknesses in the operating system or installed applications, and identifying vulnerable configurations, organisations have gained situational awareness and control of their infrastructures.
Consider the bane of most security professionals – malware. Unless a zero-day exploit against the operating system or an application is utilised (something that isn’t very common, the recent Sandworm zero-day notwithstanding), malware and Trojans gain a foothold either through an already identified vulnerability or through spear-phishing. If we rid our systems of easily exploited vulnerabilities and known malware attack vectors, we significantly reduce the risk of infection and the subsequent exfiltration of confidential data. Shellshock is a prime example – once the proof-of-concept code was released, worms were being created and pushed out by malicious actors within hours. The ability of system and application owners to identify where they had Bash installed was critical in addressing Shellshock and reducing their exposure. Without vulnerability management and automation for identifying all hosts, the task would be daunting for any organisation with more than a handful of systems.
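To make the Shellshock example concrete, the widely published proof-of-concept check for CVE-2014-6271 can be wrapped in a few lines of Python. This is an illustrative sketch, not a substitute for a vulnerability scanner: a vulnerable Bash executes the command smuggled in after the crafted function definition in the environment.

```python
import shutil
import subprocess

def bash_shellshock_status():
    """Report whether the local bash parses crafted environment
    functions unsafely (the CVE-2014-6271 proof-of-concept check)."""
    bash = shutil.which("bash")
    if bash is None:
        return "bash not installed"
    # A vulnerable bash prints "vulnerable" while importing the
    # environment, even though the command we run is just `true`.
    result = subprocess.run(
        [bash, "-c", "true"],
        env={"x": "() { :;}; echo vulnerable"},
        capture_output=True,
        text=True,
    )
    return "vulnerable" if "vulnerable" in result.stdout else "patched"

print(bash_shellshock_status())
```

On any Bash patched after September 2014 this prints `patched`; the point of the article stands, though: knowing *which* hosts even have Bash installed is the part that requires an asset inventory.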
Every vulnerability management tool still uses the same blueprint today: defining what to scan, scheduling a time to scan it, analysing the results and remediating discovered vulnerabilities. Whilst the technical capabilities of vulnerability scanners have matured greatly over the last two decades, the fundamental approach to scanning has not. This well-trodden, often replicated approach to scanning is causing significant issues for organisations due to gaps in coverage and the evolution of IT.
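That blueprint – define targets, run at a scheduled time, analyse, remediate – can be caricatured in a few lines. The hosts, ports and "finding" logic below are purely illustrative; the point is that everything hinges on a periodic sweep, so anything that appears and disappears between sweeps is invisible.

```python
import socket
from datetime import datetime, timezone

# Illustrative only: the classic scan blueprint in miniature.
# Real scanners fingerprint services and match them against
# vulnerability checks; here an "open port" stands in for a finding.
TARGETS = ["127.0.0.1"]
PORTS = [22, 80, 443, 3389]

def sweep(targets, ports, timeout=0.5):
    """One scheduled sweep: probe each target/port pair."""
    findings = []
    for host in targets:
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:
                    findings.append({"host": host, "port": port})
    return findings

report = {
    "scanned_at": datetime.now(timezone.utc).isoformat(),
    "findings": sweep(TARGETS, PORTS),
}
print(report)
```

Whatever the `scanned_at` timestamp says is the freshest truth the organisation has until the next scheduled run – which is precisely the gap the rest of this article is about.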
For example, the approach taken by IT operations to design and deploy new infrastructure has changed dramatically. Many IT departments have moved from a “tin every time” environment that grew slowly in size towards a far more expansive, dynamic and business-critical infrastructure. What once took operations days can now be accomplished in seconds, enabling the massive amounts of computing power the business requires, but also expanding the attack surface available to an intruder. We once understood where systems resided and how many new devices were added to the infrastructure; now, we face double-digit compound annual infrastructure growth rates, with systems spinning up and shutting down whenever their CPU cycles could be better used elsewhere.
We have also seen widespread adoption of the cloud. As SaaS, IaaS and other enabling technologies are utilised to decrease costs and deployment effort and to increase efficiency, the threat model has changed dramatically since the inception of vulnerability scanning tools. We no longer control the systems that store our data, and often have no insight into the security model used by third-party vendors to protect the ever-increasing amount of information stored outside the traditional perimeter. The way this readily available data is accessed has also changed dramatically in the intervening years. Gone are the days of a desktop, locked down by IT security, accessing specific silos of data; in the last five years we’ve seen a shift from corporate-owned devices to BYOD.
Along with these IT paradigm shifts, we are seeing the industrialisation of hacking and white-collar crime, creating a fundamental change in the approach that criminals take to extract money from businesses and the public. What was once the preserve of the few is now available to the many. Point-of-sale breaches are a prime example, with Backoff and BlackPOS – the most common malware used to infiltrate retail – available off the shelf for only a few thousand dollars.
If the threat landscape has changed so dramatically since the late 90s, isn’t it time that the approach taken by vulnerability management and asset discovery change too? Fortunately, the concept of continuous monitoring – defined by NIST as “maintaining ongoing awareness of information security, vulnerabilities, and threats to support organisational risk management decisions” – provides an approach to address the new threat landscape.
Many years ago, Tenable Network Security recognised that continuously monitoring the environment is critical to gaining true situational awareness, moving away from snapshots in time towards an uninterrupted view of threats. Tenable also recognised that performing this type of monitoring at scale required new forms of technology, not just the traditional approach applied more frequently.
Over the past decade, Tenable has invested in and created new types of sensors that allow for the continuous automatic discovery and assessment of networks that include traditional IT systems, mobile users, virtual networks, and cloud-based applications. Instead of scanning yearly, quarterly, monthly or even weekly, deploying sensors throughout the environment to detect subtle changes has enabled Tenable customers to identify new hosts and vulnerabilities as they interact with the network, not just when the scanner discovers them.
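The shift described here is from pull (scan on a schedule) to push (react to observed events). As an illustrative sketch – not Tenable's implementation – an event-driven inventory flags a host the moment a sensor first observes it on the wire, rather than waiting for the next sweep. The event fields below are hypothetical.

```python
# Illustrative event-driven asset discovery: a passive sensor feeds
# observed network events in, and any previously unseen host is
# flagged immediately rather than at the next scheduled scan.
known_assets = set()

def observe(event):
    """Process one observed event; return an alert for new assets."""
    host = (event["ip"], event.get("mac"))
    if host not in known_assets:
        known_assets.add(host)
        return f"new asset discovered: {event['ip']}"
    return None

# Hypothetical sensor feed: two events for one host, one for another.
events = [
    {"ip": "10.0.0.5", "mac": "aa:bb:cc:dd:ee:01"},
    {"ip": "10.0.0.5", "mac": "aa:bb:cc:dd:ee:01"},
    {"ip": "10.0.0.9", "mac": "aa:bb:cc:dd:ee:02"},
]
for ev in events:
    alert = observe(ev)
    if alert:
        print(alert)
```

Only the first sighting of each host produces an alert; repeat traffic from a known asset is silent. The design choice is that discovery latency is now bounded by how quickly a sensor sees traffic, not by the scan schedule.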
A continuous monitoring program is more than just vulnerability management and asset discovery – it also leverages automation to ensure that your entire security program is working together as designed. This includes performing an assessment of all real-time security defences such as antivirus and intrusion detection systems, as well as slower activities such as patch management. Continuous monitoring measures the effectiveness of deployed controls to ensure that they are optimally configured and providing the protection that they promise.
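"Measuring the effectiveness of deployed controls" can be made concrete with a toy health check. The control names, timestamps and policy thresholds below are entirely hypothetical; the shape of the check – compare each control's last-known-good state against a freshness policy – is the idea.

```python
from datetime import datetime, timedelta

now = datetime.now()

# Hypothetical last-known-good timestamps for two deployed controls.
controls = {
    "antivirus_signatures_updated": now - timedelta(hours=6),
    "last_successful_patch_run": now - timedelta(days=40),
}

# Hypothetical policy: how stale each control is allowed to become.
max_age = {
    "antivirus_signatures_updated": timedelta(days=1),
    "last_successful_patch_run": timedelta(days=30),
}

# A control fails when its last-known-good state is older than policy allows.
failing = [name for name, last in controls.items()
           if now - last > max_age[name]]
print("controls out of policy:", failing)
```

Here the antivirus signatures are six hours old and pass the one-day policy, while the patch run is forty days old and breaches the thirty-day policy, so it is flagged. Run continuously, a check like this catches a control that has silently stopped working long before the next audit would.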
A version of this article originally appeared on LinkedIn, October 14, 2014.