Parsing Cyberwar – Part 3: Synergies and Interference
Summary: As the cyberwar with Iran continues, we cheer as the news media reports information and misinformation about this next frontier of war. All fodder for laughter at a future version of The Atomic Cafe. But there are reliable sources of insight to prepare us for the big cyber-events that lie in the future, such as this series by Marcus Ranum.
- Shared Weapons: Cyberwarriors and Cyberspies
- Accidental Disarmament
- Spies and Soldiers
- Past chapters and the next up
- For more information
In the previous parts of this series, I assessed the value of cyberwar and cyberespionage as decisive weapons. By this I mean whether they are capable of allowing a nation to achieve its strategic goals without additional arms. I believe it ought to be obvious to anyone that they are not.
In order to exploit the short-term advantages gained from a cyberattack, a nation needs a credible military that is capable of winning the meatspace battles that potentially follow. The same holds doubly true for cyberespionage, whether it is military or economic. In order to take advantage of stolen intellectual property, the nation engaging in spying needs the economic and logistical train necessary to do something useful with the stolen technology while escaping punishment. That virtually always implies that the stealing nation must be a power at or near par with its victim, so any benefit would be incremental, not asymmetric.
If we look at the interactions between the various subtypes of cyberwar, we can begin to understand the bigger problem:
This chart briefly illustrates how the different subtypes of cyberwar interact with each other. For example, you can see that cybercriminals compete directly with one another. The actions of a cybercriminal might, however, provide concealment for a cyberspy – a target discovering unusual traffic might dismiss it as ordinary botnet/malware traffic. Alternatively, the actions of cybercriminals might interfere with the cyberspies by forcing the target to constantly adapt and analyze its network and systems, greatly increasing the chance of the spies being discovered. You’ll notice that the cybercriminal’s techniques and tools might be directly usable by a cyberterrorist; we can see exactly this in the many similarities between Stuxnet/Flame and common-or-garden malware.
We see that cyberspies, cyberwarriors, and cyberterrorists have no lasting effect on cybercriminals, who simply go someplace else. The place where there’s interesting friction is between the cyberwarriors and the cyberspies. Targets that are on the lookout for malware (because of the cybercriminals or cyberspies) might also discover the cyberwarriors’ preparatory penetrations, or vice-versa. This is a serious problem, since discovery of an atypical tool will almost certainly result in its being published.
The computer security community takes a very active interest in malware analysis and there are several businesses that specialize in reverse-engineering malware. Within days of a new type of militarized cyberweapon being discovered, intrusion detection and firewall vendors will be publishing signatures to detect it, and researchers will be preparing to present fascinating papers about the internals of the software. That is exactly what happened with Stuxnet and Flame, and is now happening with the Gauss trojan.
This is a critical dynamic that many cyberwar proponents fail to understand or choose to ignore: the very dynamic nature of the cybercrime threat makes it certain that any cyberweapon that is staged and emplaced will be outed, dissected, and incorporated into the global gene-pool of cyberweapons. The negative synergies of cyberwar guarantee a form of arms-race the likes of which the military has never seen before and is ill-equipped to understand. Cyberweapons systems will evolve, by definition, on “internet time” – one month equals approximately a year.
Two early conclusions we can draw from this chart are that the operational independence of cybercriminals and cyberterrorists makes them nimble and robust, while cyberwarriors and cyberspies need to coordinate so closely that, for all intents and purposes, they are sharing weapons. That brings about another problem, which I will discuss later.
A nation’s cyberweapon arsenal will also suffer negative synergies due to the commercial internet security market, as it responds to customer requirements and new techniques being fielded by cybercriminals. It is quite possible that someone might stumble across the traces of a very expensive cyberweapon while looking for something completely different.
Years ago, my friend Ron Dilley was, literally, bored one day and decided to start examining success and failure statistics in his wide-area network’s DNS traffic. He discovered several systems producing hugely anomalous failure-rates and, due to lack of interest and time, had them re-imaged instead of performing a detailed forensic analysis. Had he done the reverse-engineering, he would have discovered the Conficker malware several months before the rest of the computer security community did.
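The kind of check Ron ran can be sketched in a few lines. This is a minimal illustration, not his actual tooling: it assumes DNS logs reduced to (source host, response code) pairs, and flags hosts whose failure rate is wildly out of line – the signature of domain-generation malware like Conficker, which resolves hundreds of names that mostly don’t exist.

```python
from collections import defaultdict

def dns_failure_rates(records):
    """Tally DNS query outcomes per source host.

    `records` is an iterable of (source_host, rcode) pairs, where
    rcode "NOERROR" counts as success and anything else (e.g.
    "NXDOMAIN", "SERVFAIL") counts as failure.
    """
    counts = defaultdict(lambda: [0, 0])  # host -> [failures, total]
    for host, rcode in records:
        counts[host][1] += 1
        if rcode != "NOERROR":
            counts[host][0] += 1
    return {h: fails / total for h, (fails, total) in counts.items()}

def anomalous_hosts(rates, threshold=0.5):
    """Flag hosts whose DNS failure rate far exceeds anything a
    legitimately configured workstation should produce."""
    return sorted(h for h, r in rates.items() if r >= threshold)
```

A host resolving mostly nonexistent names stands out immediately; the hard part, as Ron’s story shows, is having someone with the time and curiosity to look.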
The point is that, in the commercial computer security field, there are hundreds of people like Ron, exploring log data, network traces, application run-states, file system checksums, and configuration management logs. Companies like Damballa and Mandiant have teams of analysts trying to make their products and services better by doing exactly that type of proactive analysis. It is my opinion that a new cyberweapon would have to be carefully fielded in a very narrow footprint (like Stuxnet originally was, until the Israelis allegedly lost control of it) or it would have a useful lifespan on the order of months – certainly not years. Fielding such a weapon narrowly achieves the opposite of the ubiquitous widespread cyberwar attack scenario, in which huge chunks of the target’s critical infrastructure are taken offline. I would say it’d be close to impossible to widely deploy a new piece of malware and have it remain unknown for 6 months, especially if any of the target systems was competently managed.
As a piece of weaponry becomes more widely deployed, the likelihood of its remaining secret goes down quite sharply. We see proof of this, today, with “spear phishing” customized malware attacks. The attackers have realized that it is necessary to field per-target malware variants in order to bypass detection/protection mechanisms that are in widespread use. Virtually every computer security technique that blocks an attack also notifies someone that the attack was attempted — and there are techniques that effectively block and detect everything.
For example, white-listing program execution will detect and block even custom malware (at a cost of being annoying to most users!) — one of the high-profile break-ins of the last 3 years was discovered when a piece of malware that had already successfully penetrated a corporate network attempted to penetrate a security team-member’s system that was running a whitelisting program. Thus prematurely ended a highly successful and very valuable penetration effort.
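The mechanism behind that discovery is worth making concrete. A whitelisting product keeps a list of cryptographic digests of approved binaries and refuses to run anything else – and, crucially, the refusal generates an alert. The sketch below is a hypothetical, simplified version of that check (real products hook the OS loader rather than hashing files on demand):

```python
import hashlib

def check_execution(path, approved):
    """Return (allowed, digest) for the binary at `path`.

    `approved` is a set of SHA-256 hex digests of known-good
    binaries, e.g. collected from a golden image. Anything not on
    the list is both blocked and reported -- the block protects the
    host, and the report is what exposes otherwise-stealthy custom
    malware.
    """
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    allowed = digest in approved
    if not allowed:
        # In a real deployment this would page the security team.
        print(f"BLOCKED unapproved executable: {path} ({digest[:12]}...)")
    return allowed, digest
```

Note the asymmetry: per-target malware variants defeat signature matching, but they cannot defeat a default-deny list, because by definition a new variant has a digest nobody approved.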
Why don’t the proponents of cyberwar discuss this problem openly? It is cyberwar’s Achilles’ heel, yet it remains generally unacknowledged. Perhaps the modern-day Basil Zaharoffs developing cyberweapons simply see this as a money-faucet that they can jam permanently into the “on” position.
(4) Shared Weapons: Cyberwarriors and Cyberspies
Spies and soldiers don’t like to cooperate for the simple reason that they often work at cross-purposes. The warrior’s job is to exploit strategic and tactical advantage as necessary, while the spy’s, in general, is to favor strategic positioning over the tactical. We see this challenging balance play itself out over and over again through the history of battlefield intelligence. A good example is when the British ULTRA program was reading German Navy Enigma-encrypted messages, and Winston Churchill had to decide whether to risk blowing the ULTRA secret by steering a convoy away from a U-boat wolfpack. Churchill made the call and ships went to the bottom. In a cyberwar scenario, we can be sure that the victim will respond to a cyberattack by carefully examining their systems and networks; a cyberattack tool has a good chance of being single-use. Worse, the target’s response to a cyberattack will greatly increase the chance of detecting the actions of embedded cyberspies, as well.
The state of the art in malware response, as practiced by advanced security practitioners, is to have a malware-analysis “petri dish” into which suspicious software can be dropped and closely monitored. Many networkers also have a “black hole” capability – as soon as a system is compromised the black hole can absorb any traffic going in or out of that system for detailed analysis.
A quick response to a cyberattack would be to black hole an entire network; this happens all the time and is a preferred way of detecting malware outbreaks. You simply black hole all the outgoing traffic for 24 hours (thereby taking all non-critical machines off the internet) and examine the network traces in detail for signs of command/control traffic. The point is that an expert’s response to a cyberattack will also discover the cyberspies. An attacker might get away with it on a badly managed network, but sooner or later the attacker is going to run afoul of someone who knows what they are doing.
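What the analyst looks for in those black-holed traces is, above all, beaconing: malware retries its controller on a near-fixed timer, while human traffic is irregular. A minimal sketch of that analysis, assuming the captured traffic has been reduced to (timestamp, source, destination) tuples:

```python
from statistics import mean, pstdev

def beacon_candidates(connections, max_jitter=2.0, min_events=5):
    """Flag src->dst pairs that retry on a near-fixed interval.

    `connections` is an iterable of (timestamp, src, dst) tuples
    captured while outbound traffic was black-holed. Metronomic
    retry gaps are the signature of command-and-control polling
    rather than human activity.
    """
    by_pair = {}
    for ts, src, dst in sorted(connections):
        by_pair.setdefault((src, dst), []).append(ts)
    suspects = []
    for pair, times in by_pair.items():
        if len(times) < min_events:
            continue
        gaps = [b - a for a, b in zip(times, times[1:])]
        if pstdev(gaps) <= max_jitter:  # near-constant retry interval
            suspects.append((pair, round(mean(gaps), 1)))
    return suspects
```

The thresholds here are illustrative; real C2 channels add deliberate jitter, so production tools use more robust statistics, but the principle – deprive the malware of its controller and watch what keeps knocking – is the same.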
(5) Accidental Disarmament
Another unfortunate synergy for the cyberwarrior is the dynamic nature of modern operating systems and networks. Picture this scenario of a combined-arms attack: the cybercommand and strike wing commander are in the war room preparing to launch a carefully timed attack. The birds are in the air on their way to the point of departure, when Microsoft’s automatic patch update kicks in and disables the cyberweapon that was intended to take the target’s air defenses offline. Or, perhaps a junior system administrator at the target site stumbles over the power cord to the internet-facing router. Or a scheduled policy update to one of the target’s internet-facing firewalls happens at exactly the worst possible time for the attacker. Perhaps a security researcher presents a paper at DEFCON “outing” several vulnerabilities they have discovered, prompting a patch that blocks the prospective attacker’s primary avenue of attack.
The possibilities are endless and they make cyberweapons fairly unreliable for their high cost. I am not aware of any other weapons systems in the history of warfare that can be disabled by their targets with a click of a mouse-button. The flexibility and mutability of internet-based attack tools cuts both ways: internet defense technologies work the same way!
One side’s cyberweapon could actually become an attack warning for the other side. Frequently, when a new vulnerability is discovered and published, security products look back to see if the vulnerability was exploited in the past. Most expert security operations teams do this, as well – a good example would be the many organizations that immediately began examining their DNS usage after Conficker was discovered. Some attacks can be discovered retroactively – which poses a serious problem for the whole “deep penetration” cyberwar scenario. This is, in fact, exactly what happened with Stuxnet and later with Flame: once the tool was discovered its techniques and its penetration footprint were laid bare. The fact that this hasn’t been happening very much is more of an indicator that there aren’t a lot of cyberweapons being deployed than anything else.
Suppose the target has a security analyst who notices the traces of what might be a cybercriminal botnet’s command and control traffic. So, he reverse-engineers the malware and discovers that it’s something that is new, not in use by the cybercriminal underground, and appears to be weaponized. Then, he puts rules in place to redirect the command and control traffic to a honeypot that will not actually execute the attack orders, but will trigger an attack warning instead.
If that scenario sounds improbable to you, it’s not: the SANS training conference has taught many, many security practitioners exactly this kind of procedure. Lance Spitzner and I used to teach SANS’ class on how to build honeypots, and Lenny Zeltser still teaches a terrific course on dissecting and analyzing malware. Malware, at this point, becomes a knife with a razor-studded handle. Since it’s just software running on the target’s machine, it can be obscured, but in order for it to execute at all it has to be comprehensible to the computer on which it runs. That means it’s always available for dissection, which is the same reason copy-protection systems are always defeated in weeks or, at most, months.
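The sinkhole half of that procedure is simpler than it sounds. Once internal DNS or a firewall NAT rule points the malware’s controller hostname at a machine the defender owns, every connection that arrives is, by itself, an attack warning. A toy sketch (the function names are mine, not any product’s):

```python
import socket

def make_sinkhole():
    """Bind a listener where the redirected C2 traffic will arrive.
    In practice the redirect is done upstream, via internal DNS or a
    firewall NAT rule aimed at the malware's controller hostname."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))  # ephemeral port, for the sketch
    srv.listen(5)
    return srv, srv.getsockname()[1]

def serve_once(srv, log):
    """Accept one inbound check-in. The payload is captured for
    analysis and recorded as an attack warning -- never executed."""
    conn, addr = srv.accept()
    payload = b""
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            break
        payload += chunk
    log.append(("ATTACK-WARNING", addr[0], payload))
    conn.close()
```

The defender now reads the attacker’s orders without obeying them – which is exactly the knife-with-a-razor-studded-handle problem described above.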
In a sense, the ideal military weapon is one that can be used without exposing how it works to the enemy. That is definitely not an option with cyberweapons.
(6) Spies and Soldiers
Spies and soldiers have always had to deal with the overlap of the soldier’s tactical objectives and the spies’ strategic goals. The spies have a problem because they don’t want to reveal their sources and methods, but sometimes those sources may, themselves, be targets. The soldiers, on the other hand, want intelligence that is actionable, which practically by definition means that acting upon it gives away its possession to the enemy. It is a crucial balance and must be managed extremely wisely by strategy-makers.
In the case of cyberwar, it’s my opinion that there is not enough separation between the cyberspies and cyberwarriors to treat them as separate problem domains, strategically. I don’t mean that they need to “coordinate closely” but rather that they should be the same department and that they should operate as a unit, making effective judgements about their most critical common shared resource: their enemy’s data network.
Military thinkers have always understood that success or failure is closely tied to one’s understanding of logistics, and one’s ability to get the right materials to the right place. In a cyberwar environment, logistics is the battlefield and cannot possibly be ignored.
Frequently when the topic of cyberwar comes up, someone inevitably suggests that “the best defense is a strong offense.” I’ve always argued that that’s exactly backwards: the best defense is a strong defense, because cyberwar broadly favors the well-prepared defender. The military reasoning behind that dictum is based on the idea that spoiling attacks can be launched against the would-be attacker as they are marshalling their forces. In cyberspace, there is no marshalling-point for forces that can be hit, although the defender can spoil vast numbers of attacks by the simple expedient of updating a firewall rule.
(8) Past chapters and the next up
- The Battlefield
- The Logistical Train
- Synergies and Interference
- Patch #1 – Lessons from the Gauss malware
- The Best Defense is a Good Defense
In the final part, we will conclude with an assessment of what practical actions are available to corporations and governments in the cyberwar environment.
(9) For More Information
(a) For a lengthy bibliography see the FM Reference Page about Cyber-espionage and Cyber-war!, with links to Marcus Ranum’s other posts and a wide range of other resources.
(b) Articles about CyberWar
- Black Ops: How HBGary Wrote Backdoors For The Government, Ars Technica, February 2011
- Cyber Warfare, The 0-Day Exploit Market, and the Rest of Us, MindPoint Group’s Information Security & Privacy division, June 2012
- Pentagon Sets Up Fast Track for Buying Cyberwar Tools, Reuters, April 2012
- Lenny Zeltser’s Classes on Reverse-Engineering Malware, at his website
- Gauss Trojan – Nation-state Cyber-surveillance Trojan Meets Online Banking, SecureList, August 2012