In 1995 I landed my first independent consulting project: an incident response for a major financial institution in New York City. That experience has informed my attitude about attribution ever since, because it was one of the rare incidents I’ve ever been involved in where we actually learned the identity and location of the attacker with a high degree of certainty. The attacker was coming in over an X.25 connection to the institution; he had guessed an account/password pair on one of the Unix hosts, logged in, and begun looking around. He was first detected by one of the system administrators, who noticed something unusual: a service account that normally didn’t log in was logged in, running the telnet command. An incident response team was assembled and we started charting out what was going on, what the attacker was doing, and when the break-in had occurred. The financial institution was extremely lucky that the system administrator was so observant: the attack was discovered within three days of the initial break-in.
System activity logs were not as complete as we would have wanted, but it appeared that the attacker was using the X.25 connection as a jumping-off point into the Internet; the hacker had been telnetting to a variety of systems all over the Net. At the time, that was all we knew. Since telnet was the attacker’s favorite tool, I downloaded the BSD source code for telnet and created a version that logged all keystrokes to a file, so we could monitor his actions. We watched him break into a VMS system in Toronto using the FIELD/“service” default account, and telephoned administrators there to let them know they had a problem. The next day, when he couldn’t get into the VMS system anymore, we had an amazing stroke of luck: our attacker telnetted into a bulletin board system in California and created a guest account including what appeared to be a real name, address and telephone number. Remember: this was 1995. Apparently our attacker was unsophisticated even in 1995 terms. The phone number was in the London area of the UK. We got an office temp to call the phone number and tell our attacker that she was the support specialist for the bulletin board system. She said that he was the first sign-up from the UK and that they wanted to let him know that they were immediately promoting him to gold status for free. The guy sounded pretty nice on the phone.
Then we had a long meeting to figure out what to do about the attacker, and my client sent a pair of their barristers from the London office to the attacker’s flat to have a chat with our hacker and his mother. The hacker promised to take up a new hobby, the passwords on the service accounts were changed, and that was the end of the incident.
The four requirements to establish attribution
This was a perfect storm of attribution. But proving attribution is usually not that easy. To accurately establish attribution, you need evidence and understanding:
- Evidence linking the presumed attacker to the attack
- An understanding of the attacker’s actions, supporting that evidence
- Evidence collected from other systems that matches the understanding of the attacker’s actions
- An understanding of the sequence of events during the attack, matching the evidence
Short of a signed, notarized confession from the attacker, you must collect enough information about what the attacker did to demonstrate that you understand what he did at least as well as he does, so that you can accurately reconstruct events. Most critically, there has to be something in that sequence of events that points toward a specific attacker. Let’s consider each of these requirements in light of the incident I just described.
Having the attacker’s address and phone number, verified by a recorded phone call, is as close to a signed, notarized confession as many of us will ever get. Otherwise, remember this principle: The farther from the source you collect data about the attack, the harder it will be to attribute accurately.
Be the gateway
In the incident of the London hacker, we were effectively his Internet gateway and were very close to the origin of his traffic. If we had been on the VAX/VMS system in Toronto, it would have been much harder to reconstruct the attacker’s actions, because his point of origin appeared to be a brokerage in New Jersey, not an X.25 connection from London. Security analysts on the VAX/VMS system would have been able to determine the attacker’s actions on that system, but would probably have been limited to contacting a few other VMS system administrators to remind them to change their default passwords.
If you read The Cuckoo’s Egg, Cliff Stoll’s 1989 account of a hacking incident, you learn about a hacker who was making more of an effort to be stealthy, but who was using a similar method for making back-tracking more difficult. He was using a satellite system to dial into modems at the target site, so that if he was detected it would be assumed that he was a local caller. Of course things are different now, but the problem remains the same: many attackers will “launder” their connections through a couple of services, with the intent of crossing administrative domains and making it harder to back-track to them. Since the Internet became a global phenomenon, it has become possible for an attacker in Texas to connect through a compromised system in South Korea, then to an onion-router, and then to the target. For a typical back-track to be successful, the analyst would have to be on the system in South Korea or have nation-state-level capabilities to attack onion-routed traffic obfuscation. And there’s always the chance that a back-track might lead to a Starbucks or a hotel with open lobby wireless in Texas. Now that hacking has become a matter of national security and attention, only the most inexpert hacker would work from his home base if he were launching a significant attack. Tools like the “Pwn Plug” make it easy to establish a hard-to-attribute connection from which further traffic can be laundered.
Which brings us to another principle: The attacker’s tools and techniques are not relevant to attribution.
Tools are irrelevant
We would have been foolish to declare the London hacker the source of other attacks just because he used telnet, or because his modus operandi was to go after default passwords. Everyone used telnet in those days, and default passwords were a fairly successful attack point. Fast-forward to 2014 and you can’t attribute an attack based on the fact that someone used malware, or even a specific piece or version of malware, unless it’s clear that the attacker was deliberately trying to sign his work. A hacker who signs his work is inherently more suspicious, because he has actually gone out of his way to establish a certain surface identity; it’s easier to remain anonymous than it is to present an identity. Besides, if a hacking crew wishes to present a particular identity, they can (and often do) leave behind a calling card, or post heavily anonymized comments on Twitter with links to caches of data indicating who they are.
This comes back to the requirement that “evidence collected from other systems matches the understanding of the attacker’s actions.” If you want to attribute a data break-in to a specific hacking crew, you must have logs that show when the data was exfiltrated, and that the data is the same as what was posted. The sequence of events has to be plausible and consistent with the purposeful actions that the attackers took – otherwise we’re left speculating.
One thing that concerns me is the attempt to attribute hacks based on the tools that were used. That’s like saying a burglar who broke into a specific house must be Fred the Cat-Burglar because Fred uses a crowbar and the break-in was accomplished with a crowbar as well. You can’t convict Fred based on such flimsy evidence – yet we’ve recently seen the Internet version of exactly that: the assertion that the Sony hacks were sponsored by North Korea because the tools used were similar to tools used in an earlier attack that was also attributed to North Korea. That’s a double failure, because the first attack attributed to North Korea was itself based on fairly weak attribution.
Next week: In Part 2, I’ll explain the concept of weak attribution and I’ll discuss the fourth requirement for successful attribution.