Many guidelines and compliance standards state that in order to be "secure" or "compliant," all of your systems must be patched. It turns out this is easier said than done. Just when you believe your systems are patched, something fails and patches seemingly disappear. We can apply this "falling off" principle to several other areas of information technology, such as web applications, configuration management, and anti-virus software. How do security controls in these areas fall off? Below are some reasons why this might happen and what you can do to correct the problems.
Why Do Patches "Fall Off" Systems?
- Systems were restored from a full backup taken before the patches were applied (e.g., a virtual machine snapshot) - I've spent part of my career as a Windows and UNIX/Linux administrator, and I was a huge fan of backups because they saved me on several occasions. There are plenty of reasons for a full system restore, such as undoing poorly implemented configuration changes or recovering from malware or a system compromise. The dangerous part is that your system, once connected to the network, is exposed until you can re-apply the patches. If you're in this situation, another dangerous condition may exist: you just want it to work. In another common case, the system has become unstable and a full system restore is the only way to get it back up and running. Once you've got the system running again, you don't want to disrupt service yet again to apply patches, leaving your newly restored system vulnerable until the next maintenance window, or until you remember to go back and re-apply the patches (if ever).
- An older version of the application is installed after a patch was applied - Similar to the circumstances that leave a system vulnerable after a full system restore, installing an older version of an application produces the same result. The typical reasons for installing a previous version are that the new version is not compatible with your files or other applications, or that it leaves the system so unstable you are forced to go back. I've done this several times on my own system. For example, I use many different software applications for handling RSS feeds, audio editing, and Twitter. If an application upgrade causes the application to crash each time I try to use it, I immediately revert to the previous version. I then have to wait for the vendor to identify and fix the problem before I can retry the upgrade. While I wait, my system may be exposed to a known vulnerability.
- There are two instances of an application, but only one was patched - Software can be sneaky and hide from your patching efforts. Users (or even administrators), in the name of troubleshooting, will install multiple copies of the same application in different locations on the same system. When this happens, your patching process may only patch the copy in the default location, while the user continues using the unpatched copy installed in a non-standard location. According to your patch management system, the system is fully patched, and you will only discover otherwise if you're looking on the system or the network for the associated vulnerabilities.
Why Do Web Application Vulnerabilities Re-appear?
- Developers re-used vulnerable code - This is common practice for developers at any level of experience. In some cases, it’s an excellent practice. If a developer on your team has come up with an elegant or efficient way to solve a problem, you can re-use the code to save time and avoid duplication of effort. However, if the code you are re-using contains a vulnerability, then you've just created a new place for that vulnerability to exist. Even if the vulnerable code has been fixed, the older code still exists somewhere in your source code version management system, and has a chance of getting copied-and-pasted into programs that go into production.
- Developers delivered a new version of the application and re-wrote the same code containing the vulnerability - It is not uncommon for developers to re-introduce a vulnerability when releasing a new version of software. Sometimes an exact copy of the previous vulnerability appears, and sometimes it’s a variation. Creating a web application vulnerability can be as simple as implementing a new field to capture data and forgetting to use the newly patched input validation functions. Another example is when a developer updates code and removes the input validation functions for debugging purposes, then forgets to put them back.
- A problem was fixed in production, but not in the development or QA environments, and the flaw got pushed back into production - The software development lifecycle is tricky business. If a vulnerability, especially a critical one, is identified in a production environment, developers may have to fix it quickly. In their haste to remediate the problem, they may shortcut the development process. Once things return to normal, the vulnerability can slip back into production if the fix was never made an official part of the development life cycle.
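The forgotten-validation case above is easiest to see in code. The following Python sketch (the function names are hypothetical) shows the kind of shared validation helper that a fix typically introduces; any new field that bypasses the helper silently re-introduces the original flaw:

```python
import html
import re

def sanitize_input(value):
    """Hypothetical shared helper: strip control characters and
    HTML-escape anything that will be echoed back to the browser."""
    value = re.sub(r"[\x00-\x1f]", "", value)
    return html.escape(value)

def handle_profile_form(form):
    """Route every submitted field through the shared helper.

    A developer who adds a new field but reads it directly from
    'form' instead has just re-created the vulnerability the helper
    was written to fix.
    """
    return {name: sanitize_input(raw) for name, raw in form.items()}
```

Centralizing validation this way also makes regressions easier to catch in code review: any raw read of the form outside the helper stands out.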
Why Do System Configuration Flaws Come Back Around?
- In an emergency, an administrator backed out a security control to "fix the problem" - I have to admit, I've been in this situation more than once. As a firewall administrator, I configured features and rulesets to improve the security of the systems I was trying to protect. As a side effect, users lost access to a critical service, and great pressure came down on me as the administrator to back out the change. Compounding the problem, from that point forward you become hesitant to re-implement the configuration change for fear of creating another incident.
- The application was updated, overwriting the old configuration and your security controls - As we've already established, updating and/or patching an application is not always an easy process. When you apply a software update, the configuration may need to be updated or may be overwritten entirely. The best examples of this are on a Linux platform: when you update an Apache web server or SSH service, the new code may require or introduce new configuration options. If you accept these changes, or simply overwrite your old configuration file with the new one, your customizations may be lost, weakening the security of the service.
- The security control was never configured in the master image from which systems are deployed - Configuration management is one of the keys to successfully managing any type of systems environment. However, you have to be certain the correct configuration changes are being pushed out to all of your systems. You may find a great new way to lock down a remote access service, but forget to implement the change (or implement it incorrectly) in your configuration management. The configuration that gets pushed out will essentially back out your changes.
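One way to catch all three of these configuration problems is to fingerprint known-good configuration files and compare them to what is actually deployed. A minimal sketch in Python (the file paths shown in the usage are hypothetical):

```python
import hashlib

def config_fingerprint(text):
    """Hash a configuration file's contents for drift detection."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def detect_drift(baseline, deployed):
    """Compare a known-good baseline ({path: hash}) against hashes
    collected from a running system.

    Any mismatch -- or missing file -- means a hardening setting may
    have been backed out in an emergency or overwritten by an update.
    """
    return [path for path, digest in baseline.items()
            if deployed.get(path) != digest]
```

In practice you would collect the deployed hashes with your configuration management tooling and alert on any non-empty drift list.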
Why Do Anti-virus Updates Fall Off?
- A user (or malware) has removed or disabled the anti-virus software - You may push out new AV software and virus definitions on a regular basis; however, if a system allows the user to modify the configuration, updates can be backed out (or your AV software can be uninstalled entirely).
- System backup and restore from image/snapshot strikes again - Virtualization technology makes it easy to revert to a previous system state; however, after rolling back, administrators may forget to re-update their anti-virus software. Updating the anti-virus engine itself is especially important, as the updated program code will strive to protect you against the latest malware using behavioral analysis.
- AV vendor has updated the signature or behavior detection for the new malware variant, which allows the old variant to run - I've observed this behavior with anti-virus software and IPS technology alike. For example, a malicious payload from an earlier version of malware (or even an exploit framework) may initially be picked up by AV. As both the malicious payload and AV software get updated, sometimes conditions will change such that older versions of the malicious payload are no longer detected.
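All three of these anti-virus failure modes show up as the same symptom: a host whose engine or definitions are older than they should be. A trivial Python check illustrates the idea (the three-day threshold is an arbitrary assumption; pick whatever your AV vendor's release cadence warrants):

```python
from datetime import datetime, timedelta

def definitions_stale(last_update, max_age_days=3):
    """Return True if the AV signature set is older than the allowed
    window.

    A snapshot rollback, a disabled agent, or an uninstalled product
    usually shows up here first.
    """
    return datetime.utcnow() - last_update > timedelta(days=max_age_days)
```

Feeding this check the last-update timestamps reported by your AV management console gives you a simple daily report of hosts that have fallen behind.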
What Can I Do About It?
- Continuous Monitoring for Vulnerabilities - The concept is quite simple, but the execution is tricky: scan all of your systems on a regular basis, detect vulnerabilities, execute remediation steps, and repeat. Tenable's SecurityCenter is a valuable tool that aids in this process. SecurityCenter helps reduce your overall scan time by allowing you to load balance multiple Nessus scanners, and you can create accounts so others in your environment can launch scans and identify vulnerabilities in their own systems. The alerting and reporting allow you to communicate problems to the right people to fix the issues, not just the first time, but any time a vulnerability is identified.
- Continuous Passive Monitoring - The Tenable Passive Vulnerability Scanner (PVS) detects vulnerabilities on a system as it sends traffic across the network. For example, if a host is fully patched according to your patch management system, yet that same host is generating alerts from PVS, you have an easy way to identify patching gaps. The same goes for anti-virus software, as PVS can detect certain types of malicious behavior and whether the host is participating in a botnet.
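Whichever scanning products you use, the key to catching controls that "fall off" is comparing scan results over time rather than looking at each scan in isolation. A sketch of that comparison in Python, treating findings as (host, check-id) pairs (the pair format is an assumption for illustration):

```python
def classify_findings(previously_fixed, current):
    """Split the current scan's findings into brand-new issues and
    regressions.

    'previously_fixed' holds findings that were seen and remediated in
    earlier scans; any of those appearing again means a patch or
    control has fallen off and deserves its own alert.
    """
    regressions = current & previously_fixed
    new = current - previously_fixed
    return new, regressions
```

Alerting separately on the regression set is what turns a routine scan cycle into a check for the "falling off" problem this article describes.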
Technology, such as virtualization and web applications, helps us accomplish our goals and solve business problems. However, it adds an aspect of complexity that grows over time. The more technology we use in the name of making life easier, the more complex our IT systems become and the easier it is to lose track of security controls. By implementing a system of checks, accompanied by solid operational procedures, you can minimize the gaps in your security architecture.