
How XcodeGhost Broke our Trust in Whitelists

There has been a lot of press coverage concerning the discovery of the XcodeGhost malware that affects iOS 9 and other Apple systems. The infection has caused 400 apps to be pulled from the Apple App Store so far. The malware spread through compromised compilers: compiling is the final step between source code and a finished app, and developers who unknowingly built with a tampered copy of Apple's Xcode toolchain (widely reported to have been downloaded from unofficial mirrors) had hostile code injected into their apps at that step. The unaware author then uploads the compiled app to a distribution store for download (in this case it was the Apple App Store, but it could just as easily have been any other "store" or software distribution point).
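The root cause was developers building with a toolchain they had not verified. As a minimal sketch of the general countermeasure (the function names and the idea of a vendor-published digest are illustrative, not Apple's actual process), a downloaded installer can be checked against a known-good SHA-256 before it is ever used:

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, streaming it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path, expected_hex):
    """Return True only if the file matches the published digest."""
    return sha256_of_file(path) == expected_hex.lower()
```

Apple's guidance at the time was in the same spirit: re-download Xcode only from the Mac App Store and confirm its code signature with `spctl --assess --verbose /Applications/Xcode.app`.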

How could this have happened?

As is often the case, the primary questions I am hearing are “How can we detect it?” and “How can we clean it up?” But there is also the unusual question of “How could this have happened?” It’s this last question that I would like to address first.

There would be no indicator of the malware in the original source code

All distribution stores run basic security checks to look for backdoors and malicious applications. Anyone submitting an app, either for initial distribution or to update an existing application, must go through this process. The malware was designed both to bypass those security checks and to hide itself from the original app author. Even if the author submitted the original source code to the people monitoring the store, there would be no indicator of the malware in it. The only way to detect the modification would be to take the app from the store, reverse engineer it, and then compare that code to the original source code. This is not practical: many programmers do not have the skill set or the tools to do proper reverse engineering, and there are far too many apps for any store to dedicate the resources to that job.
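Short of full reverse engineering, one lightweight check researchers used against XcodeGhost was simply searching an app's binary for indicator strings, such as the hard-coded command-and-control hostnames the malware phoned home to. A sketch of the idea (the domain list below is illustrative, drawn from public reporting at the time, and not exhaustive):

```python
# Hostnames publicly reported as XcodeGhost command-and-control
# endpoints; treat this list as illustrative, not exhaustive.
SUSPICIOUS_STRINGS = [
    b"init.icloud-analysis.com",
    b"init.crash-analytics.com",
]

def scan_binary(path):
    """Return any suspicious strings found in the file's raw bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [s.decode() for s in SUSPICIOUS_STRINGS if s in data]
```

Scanning raw bytes catches only unobfuscated indicators, but it scales to thousands of apps in a way manual reverse engineering cannot.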

Protection

Further, since many of the affected devices are mobile devices, we face other issues. Mobile operating systems (both Android and iOS) run each app in its own "sandbox," which includes security software. Apple has built in rudimentary per-app permissions (for example, turning off access to chat and the camera, but not the contacts list, for a given app), and future versions of the Android OS will include the same. But many people do not make full use of these controls, and security apps cannot break out of their own sandboxes to enforce them, although some do scan for and flag "insecure" apps and "excessive permissions" where they can identify them. In future versions of Nessus®, you will be able to audit for applications that may have been infected by XcodeGhost.

As mobile devices operate on their own data networks, we as users and corporate security officers cannot scan or monitor those channels. That leaves us one option: when a device connects to our WiFi, we can scan for and detect abnormal network traffic. Utilities such as Tenable's SecurityCenter™ are built to do just that. As has been said many times, we often focus on the top 10 or 20 security issues, because addressing those solves 80% of our overall problems. But by taking the time to look at the bottom of that list, at the abnormalities that often get lost in the noise, patterns begin to emerge, and those patterns can be leveraged to identify an issue before it balloons into a major incident.
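The "bottom of the list" idea can be made concrete: rather than ranking destinations by traffic volume, invert the view and surface the hosts a network rarely talks to. A minimal sketch (the threshold and the input format are assumptions for illustration, not a SecurityCenter feature):

```python
from collections import Counter

def rare_destinations(hostnames, threshold=2):
    """Surface the long tail: hosts seen no more than `threshold` times.

    Common-case traffic dominates the top of the frequency list; an
    infection often shows up as a handful of connections to an
    unfamiliar host near the bottom.
    """
    counts = Counter(hostnames)
    return sorted(h for h, n in counts.items() if n <= threshold)
```

Fed with hostnames pulled from WiFi DNS or proxy logs, the rare entries are exactly the long-tail abnormalities worth a second look.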

Trust in whitelists

Whitelists are trusted distribution points and need to remain trustworthy

In the case of XcodeGhost, what happened exposes one of the inherent weaknesses of a whitelist. There is little the authors could have done to validate that their code wasn't compromised, and the whitelist provider followed industry best practices, so now we have to update those practices based on real threats. Whitelists are trusted distribution points and need to remain trustworthy, as do their contributors. When someone compromises that chain of trust, the entire system is compromised. We as security professionals must add that to our risk analysis equations and remember to watch for the little things that can betray bigger intents.