The first time I was asked to scan a Class B network, my initial reaction was “Are you kidding me?” I actually thought it was a trick question to see how I reacted to unexpected situations. I had just two weeks to develop a strategy and perform the scan. This seemed to be a daunting task.
Ten years later, I have performed assessments of Class B (or larger) networks more than a dozen times, mostly for government agencies and the occasional university. Performing an audit of tens of thousands of IP addresses is no different from any other audit, except when time is tight. Large IP blocks in small time windows require you to revise your normal assessment methodology. Where you would typically scan 65,535 ports on a machine, you may only be able to scan a dozen or two. Instead of examining every open port on a machine, time constraints may force you to focus on low-hanging fruit and services that are prone to high-risk vulnerabilities.
Developing a Methodology
Thinking about the polar opposites in assessment, you have a single IP address on one side, and a Class B network on the other. Adjusting your methodology to account for the number of machines becomes a balancing act between allotted time and number of targets. As the number of systems to scan increases, while the time allocated to scan remains constant, the amount of time per system must decrease.
To cover a large number of systems in a relatively small amount of time, you must greatly cut back on the time spent per system. This starts with deciding which ports are the most important to look for. When you create a short list of interesting ports, remember that adding or subtracting one port can drastically affect the time involved in the discovery phase. Even if you assume a single second per port probe, adding one port to the list could extend the discovery scan of a Class B by around 18 hours (65,536 hosts at 1 second each), seriously affecting your allotted scanning window.
The steps of discovery scanning must consider what 65,536 hosts mean in terms of time involved:
- Host discovery (ping) [65,536 hosts @ 1 sec per host ≈ 18 hours]
- Port discovery (scanner) [1 port @ 1 sec per port, 65,536 hosts ≈ 18 hours per port]
- Banner recon (scanner) [1 service @ 2 sec per service, 10 services per host @ 65,536 hosts ≈ 15 days]
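The back-of-the-envelope math behind those estimates can be reproduced with a short script. The per-probe timings are the same illustrative assumptions used in the list above, not measured Nessus figures:

```python
# Rough time estimates for sweeping a full Class B (65,536 addresses),
# using the same illustrative per-probe timings as the list above.

HOSTS = 65536           # addresses in a Class B (/16)
SECONDS_PER_PING = 1    # assumed time per host-discovery probe
SECONDS_PER_PORT = 1    # assumed time per port probe
SECONDS_PER_BANNER = 2  # assumed time per banner grab
SERVICES_PER_HOST = 10  # assumed open services per live host

ping_hours = HOSTS * SECONDS_PER_PING / 3600
port_hours_per_port = HOSTS * SECONDS_PER_PORT / 3600
banner_days = HOSTS * SERVICES_PER_HOST * SECONDS_PER_BANNER / 86400

print(f"Host discovery: ~{ping_hours:.1f} hours")            # ~18.2 hours
print(f"Port discovery: ~{port_hours_per_port:.1f} hours per port")
print(f"Banner recon:   ~{banner_days:.1f} days")            # ~15.2 days
```

Plugging in your own per-probe timings and host counts makes it easy to sanity-check whether a proposed port list fits the scanning window before any packets go out.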
Of course, the odds of finding a Class B with all 65,536 hosts up, each running 10 services, are about as good as winning the lottery four times. Not knowing how many hosts are alive or how many common services may be present can swing your discovery time from a couple of days to a week. It is a serious challenge to perform a large-scale assessment in just two weeks when half the time is burned just figuring out which hosts are present. These wide time swings are why you should use multiple scanners for such an assessment. Where one additional port could increase the scan time by 18 hours with a single scanner, it would only increase it by 9 hours with two scanners, 6 hours with three, and so on.
Once you have determined which hosts are alive via pings and port scans, you must determine the extent of the assessment to be performed. This includes determining which ports need to be tested for vulnerabilities and if you are only interested in a specific risk level or higher.
Using Nessus to Implement the Methodology
Nessus is a network vulnerability scanner first and foremost. While it also offers depth and diversity in configuration audits, credentialed scans, and web application testing, it has been focused on network scanning for over a decade. That makes it an ideal tool for large-scale assessments. With such a large IP address space, it is highly recommended that you leverage multiple Nessus scanners to perform the work. Dividing the targets evenly among the scanners increases the efficiency of the assessment and lets you look for more open ports and vulnerabilities.
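Dividing a Class B evenly among scanners can be done by splitting the address space into equal subnets and handing one chunk to each scanner. A minimal sketch using Python's standard-library `ipaddress` module; the 172.16.0.0/16 network and the four-scanner count are hypothetical:

```python
import ipaddress

def split_targets(network: str, scanners: int):
    """Split a CIDR block into equal subnets, one per scanner.
    Assumes the scanner count is a power of two so the block divides evenly."""
    net = ipaddress.ip_network(network)
    extra_bits = (scanners - 1).bit_length()  # e.g. 4 scanners -> /18 chunks
    if 2 ** extra_bits != scanners:
        raise ValueError("scanner count must be a power of two")
    return list(net.subnets(prefixlen_diff=extra_bits))

# A hypothetical Class B split across four Nessus scanners:
chunks = split_targets("172.16.0.0/16", 4)
for chunk in chunks:
    print(chunk, "->", chunk.num_addresses, "addresses")
# Four /18 blocks of 16,384 addresses each; at 1 second per port probe,
# each added port now costs roughly 4.6 hours per scanner instead of ~18.
```

Each resulting CIDR block can be pasted into a separate scanner's target list; the per-port cost drops in proportion to the number of scanners, as discussed above.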
There are two general ways to apply the methodology described above with Nessus. First, you can create a single scan policy that performs all of the required functions: discovery, scanning a handful of ports, and running specific plugins. Such a policy would be convenient and easy to re-use should a repeat scan be required. However, putting everything into one policy is not efficient. If any adjustments need to be made, you would either have to restart the scan from the beginning or accept that the changes apply only to hosts that have not yet been scanned. If time is not a factor and the scan options are not subject to change, a single policy is a viable option.
The second option is to create one policy for each phase of the methodology. This involves separate policies for host discovery, port scanning and vulnerability checks. Using one policy at a time to refine the target list allows for efficient scans and the ability to make adjustments to the next policy based on previous results.
The first policy focuses on simple host discovery using ping alone; it does not enable any plugins or additional port scanners:
Note: Under Policy -> Preferences -> Ping the remote host, make sure that “Log live hosts in the report” is selected:
The next policy focuses on port scanning to look for important services or those frequently found with vulnerabilities. By selecting a small list of ports, we can quickly scan tens of thousands of hosts in a relatively short amount of time. This policy would be applied to a list of hosts found during the host discovery phase.
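Conceptually, this phase boils down to a TCP connect check against a short list of high-value ports. The sketch below is a stand-in for a scanner's port-discovery step, not Nessus itself; the port list and timeout are illustrative assumptions to be tuned per engagement:

```python
import socket

# An illustrative short list of high-value ports; tune for the environment.
INTERESTING_PORTS = [21, 22, 23, 25, 80, 110, 143, 443, 445, 1433, 3306, 3389]

def open_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of `ports` accepting TCP connections on `host`.
    A simple connect() check -- a stand-in for a scanner's port discovery."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

# Usage against a live host from the discovery phase (address is hypothetical):
# open_ports("192.0.2.10", INTERESTING_PORTS)
```

Note how every entry in the list multiplies across the whole target set, which is why trimming even one port matters at Class B scale.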
The third policy looks for vulnerabilities in the services previously found. The exact list of vulnerability checks will vary depending on the services you looked for, the nature of those services, and the time available. While we looked for web servers on ports 80 and 443 in the previous policy, enabling all web-server-related checks (i.e., server, CGI, CGI XSS) would significantly increase the time required to perform the scan. Carefully selecting plugins that look for specific information, high-risk vulnerabilities, and "low-hanging fruit" (vulnerabilities that are trivial to exploit) allows us to conduct a meaningful and helpful assessment despite the large number of systems and the typically small time frames.
The final challenge is to turn the results of all the scans into meaningful reports that assist the organization. Exporting the results and sharing them with administrators lets them import the data into their own Nessus scanner and use report filtering to work with it.
Tips and Tricks
Scanning large networks will always be a balance between how much you can scan and the allocated time window. Using extra scanners will allow you to perform a significant amount of additional assessment work. Refining your scans to suit the target environment will produce more accurate data. If you know the target network is purely a Windows environment, removing services that are generally not found on that operating system (e.g., SSH) can save time and let you look for additional Windows-specific services.
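Tailoring the port list can be as simple as maintaining a base list and dropping entries that do not fit the environment. A small sketch; the port-to-service mapping and the set of "rarely on Windows" services are illustrative assumptions:

```python
# A base port list with service labels; entries are illustrative.
BASE_PORTS = {
    21: "ftp", 22: "ssh", 23: "telnet", 80: "http", 111: "rpcbind",
    139: "netbios-ssn", 443: "https", 445: "smb", 1433: "mssql", 3389: "rdp",
}

# Services rarely seen on Windows hosts; dropping them for a Windows-only
# network reclaims scan time for Windows-specific checks.
NON_WINDOWS = {22, 111}

windows_ports = {p: svc for p, svc in BASE_PORTS.items() if p not in NON_WINDOWS}
print(sorted(windows_ports))  # SSH and rpcbind removed from the sweep
```

The same pattern works in reverse for a Linux-heavy shop, where SMB, MSSQL, and RDP checks could be traded for SSH, MySQL, and SSL-enabled web services.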
Before beginning a project of this nature, have a lengthy and frank talk with the client or management. Make sure they understand the limitations of this type of assessment. Provide reasonable estimates based on your resources and ask if that is what they will find useful. Try to elicit additional information about the network and their concerns, and use that information to refine your scans. A Linux-heavy shop with extensive SSL-protected web servers and MySQL databases requires a slightly different policy than a Windows-heavy shop with only HTTP servers and a huge Oracle installation.
Managing the scan data for a large number of hosts can be a burden. Using SecurityCenter (SC) to manage the scans and data gives you not only a single place to house the data, but also advanced reporting capabilities and historical trending. If you are required to run large scans periodically, not only to determine which vulnerabilities are present but also to verify that patches are being applied, SecurityCenter can manage multiple Nessus scanners and correlate all the data into a comprehensive report.