Cyberwar: About Stuxnet, the next generation of warfare?
Summary: We begin our assessment of Stuxnet’s impact on the field of cyberwar. As a case-study of what appears to be a highly effective state-sponsored attack, Stuxnet has pushed the reality of cyberwar to center stage. This is the fourth in a series about cyberwar by guest author Marcus J. Ranum.
- Stuxnet as Cyberwar
- Stuxnet, Briefly
- State-Sponsored What?
- A Prediction
- What About US?
- About the author, including links to other posts in this series
- For more information
(1) Stuxnet as Cyberwar
What is Stuxnet? For an excellent introduction see the Wikipedia entry. For a description of the attack see Stuxnet Malware and Natanz, Institute for Science and International Security, 15 February 2011.
There is no question that Stuxnet is a game-changer for cyberwar: on one hand it represents a successful operation that accomplished an international objective, while on the other it illustrates other countries’ vulnerability to similar attacks. In terms of the big cyberwar scenarios we hear so much about, with countries being plunged into blackness and chaos, it hardly signifies, but its success is certain to inspire imitation. Was Stuxnet a “cyberwar” operation, though? We need to explore that question very, very carefully.
In 2008, I offered a series of arguments against the big cyberwar scenario (“The Problem with Cyberwar“) involving a hypothetical state-versus-state full-up battle in cyberspace. One of the problems with the scenario that I identified was called the “Who’d Win Anyway?” problem — namely, that cyberwar was unlikely to be as asymmetric as its proponents claimed it would be, and wouldn’t be able to singlehandedly flip the balance of power. In other words, Pee-Wee Herman is not going to beat up Mike Tyson even if you turn the lights out and give Pee-Wee night vision goggles: there are some imbalances that are too great, and whoever would have won without the cyber-attack would win anyway.
In fact, Pee-Wee might earn a more severe beating as a consequence. Stuxnet is significant here because, while it wasn’t a full-up big cyberwar attack, the “side” that would have won anyway, won. And the defeated party, so far, has done nothing but lick its wounds and repair the damage.
The comment that brought clarity to Stuxnet was made by one of my acquaintances at a security conference. He said, “Well, it saved the Iranians a good old-fashioned bombing.” It was such a deep truth it should not have been uttered so casually.
(2) Stuxnet, Briefly
Supervisory Control and Data Acquisition (SCADA) systems have been in use since the 1960s. They monitor and control critical infrastructure and provide early warning of potential disaster situations. SCADA has evolved from a monolithic, stand-alone architecture into a networked one.
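The monitoring half of that job is easy to illustrate. Here is a minimal, purely illustrative sketch of the kind of polling-and-alarm check a SCADA master station performs; the tag names and limit values are invented for this example and don't correspond to any real installation:

```python
# Illustrative sketch of a SCADA-style limit check: the master station
# compares each sensor reading against configured limits and produces an
# alarm record for anything out of range. Tag names and limits are
# invented for illustration only.

ALARM_LIMITS = {
    "pump_1_pressure_psi": (10.0, 120.0),  # (low limit, high limit)
    "gen_1_frequency_hz": (59.5, 60.5),
}

def scan_readings(readings):
    """Return a list of (tag, value, reason) alarms for out-of-range tags."""
    alarms = []
    for tag, value in readings.items():
        low, high = ALARM_LIMITS[tag]
        if value < low:
            alarms.append((tag, value, "below low limit"))
        elif value > high:
            alarms.append((tag, value, "above high limit"))
    return alarms

# One polling cycle: the pressure reading is out of range, so it alarms.
print(scan_readings({
    "pump_1_pressure_psi": 135.2,
    "gen_1_frequency_hz": 60.0,
}))
```

The point of the sketch is what it leaves out: nothing in a loop like this questions *why* a setpoint changed, which is exactly the gap Stuxnet exploited.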
The story of Stuxnet does not start where it’s usually told, with the worm’s discovery in June 2010 and its subsequent decompilation and analysis. It starts in 2007 in Idaho, at a US Government national lab, where energy control systems and nuclear reactors are tested. Idaho National Labs began a research project code-named AURORA, and — in March 2007 — hypothesized a series of vulnerabilities in technologies that are usually controlled via SCADA. That SCADA systems had ‘issues’ was not a question – the question AURORA was trying to answer was whether or not SCADA vulnerabilities could be translated into real-world destruction of critical systems. If you’d like to see a video of what happens to a diesel engine driving a generator when the generator’s frequency is shifted out of phase, you can see it below. It’s not pretty.
You’ll note that the engine is more or less OK; what fails is the piece that’s placed under stress – the connection between the engine and the rapidly-moving parts that are being told they should be rapidly moving in a different direction entirely. What the folks at Idaho National Labs realized was that there are all kinds of places where SCADA systems control the timing of physical events, and virtually anywhere you can desynchronize a precisely-timed event, you can damage equipment.
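The mechanism can be captured in a toy model. This is not power-systems engineering, just a back-of-the-envelope sketch of the idea: the torque transmitted through the coupling between a generator and the grid grows roughly with the sine of the phase-angle difference between them, so reconnecting grossly out of phase, as in the AURORA demonstration, slams the coupling with a load far beyond what it sees in normal synchronized operation:

```python
import math

# Toy model (NOT a real power-systems simulation): relative torque on the
# generator coupling, taken as proportional to the sine of the phase-angle
# error between generator and grid.

def coupling_stress(phase_error_deg, rated_torque=1.0):
    """Relative torque on the coupling for a given phase error in degrees."""
    return rated_torque * abs(math.sin(math.radians(phase_error_deg)))

# A well-synchronized reconnect produces a tiny transient; a deliberately
# desynchronized one approaches the theoretical maximum.
print(coupling_stress(2))    # small phase error
print(coupling_stress(120))  # gross desynchronization
```

Even this crude model shows why the coupling, not the engine, is the part that shears: a few degrees of error is survivable, while a large one delivers close to the maximum possible shock load.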
AURORA, interestingly, had its codename co-opted to describe the allegedly state-sponsored attacks against Google in 2010 (source: Wired; the article contains nonsensical claims like “It’s totally changing the threat model” about what was just another system vulnerability and sloppy administration writ large). If you’re a member of the tinfoil hat brigade, make sure your skullcap is properly fitted: was this a deliberate attempt to confuse people? Fast forward to 2010, when Stuxnet was discovered. The attacks Stuxnet implements are straight out of Idaho National Labs’ AURORA playbook: the reactor at Bushehr is damaged when the frequency of a pump’s motor is adjusted while there’s a huge amount of moving coolant that does not want to suddenly stop moving. Russian engineers describe the problem as: “trouble arose as pressure mounted in the reactor during tests. The pump vibrated and joints broke” (source: NY Times). Does that sound familiar?
It’s much harder to learn what happened at Natanz. The best information most of us have is that the gas centrifuges’ motors were destroyed by rapidly varying their drive frequency, causing wear. Worse, apparently, the frequency converter settings were deliberately set slightly “off,” so that the centrifuges were not collecting properly enriched uranium in their processing loop — the output of the enrichment loop was contaminated and would need to be re-processed at considerable expense. Beyond that, we don’t know when the attack actually started, and the Iranians are, obviously, not forthcoming.
We can estimate the development time-line for Stuxnet, with the effort beginning late in 2007. In spite of the trickiness of the code, it probably wasn’t a huge effort to produce – a couple of programmers working for a year would have had time to build several versions, if one focused on the main software infrastructure while the others researched and developed the set of zero-day attacks that would allow the worm to penetrate target systems. The main expense, no doubt, was testing: the function of the worm indicates considerable knowledge of how gas centrifuge chains are laid out, and of the PLCs involved — the authors had to acquire some highly specialized knowledge, and then build a virtual mock-up to test their code on.
I’d be willing to bet that a few physical components were sacrificed, eventually, in order to measure the time-to-fail once the component came under attack — that’s not something you could accurately simulate. This amounts to solid evidence that the effort was state-sponsored: one does not simply review the Wikipedia entry on “gas centrifuge” then buy a P-1 centrifuge on Ebay, download the manuals from Siemens, and start bashing out code.
How was Stuxnet delivered? It appears to have been designed to replicate its way onto USB removable drives, which makes it lower-profile than a network-based attack. If Stuxnet had been network-based it would have been much more virulent and most likely would have been detected fairly quickly – the state of the art of networked malware detection is actually quite good. The implication of the USB delivery is that a fall-back plan would have been to get an insider to brush past a computer, insert a key-stick for a few seconds, then pull it out and walk off. Where did the actual attack happen? It’s like wondering whether the Polonium-210 was in your scrambled eggs or in your coffee: does it really matter once you’ve finished your meal? What’s crucial about this is that it illustrates that an entire system and software supply chain can be attacked and latently penetrated.
Initial US reaction could best be summarized by my acquaintance’s comment: “it saved the Iranians a good old-fashioned bombing.” Israel was relatively quiet, though it was one of the more obvious suspects (e.g., see Wired). Indeed, if anyone were to ask cui bono? to attribute Stuxnet, Israel and the United States would be suspects #1 and #2, in that order (see “Wikileaks: US advised by German think tank to sabotage Iranian reactors“, Guardian, 18 January 2011). Stuxnet didn’t save all Iranians from a “good old-fashioned bombing” – several Iranian nuclear researchers were killed with car bombs following the Stuxnet attack. Israel was characteristically mum about Stuxnet, though there are stories that Israeli intelligence listed Stuxnet as one of its accomplishments in a showreel at a retirement party for the head of the IDF (source: The Telegraph). That’s hardly concrete evidence and is probably just someone having fun.
Then, in April 2011, the US research facilities at Oak Ridge Labs came under a spear-phishing attack that apparently succeeded in infecting a number of systems, with the intent being to exfiltrate data (Government Computer News, 25 April 2011).
(3) State-Sponsored What?
Labeling Stuxnet is problematic, since the terminology of warfare has been sliced into a fractal-space of nuanced lawyerisms. When the US is carrying out special-operations attacks and flying missile-equipped drone missions in an “ally in the war on terror”, when can we say hostilities have commenced? In the world of low-intensity conflict, Stuxnet doesn’t rate as a cyberwar — perhaps it’s the new order of “business as usual”? If so, we’ve got a problem, because on one hand our public officials are willing to point an accusing finger at China and make saber-rattling noises about how cyberattacks could be justification for real-world retaliation — and on the other hand, we’d be handing Iran a gold-plated casus belli in the form of Stuxnet.
If a cyberattack damaged a US nuclear reactor, and could plausibly be pinned on another country, how do you think that event would be billed? Would it be “state-sponsored cyberterrorism”? If so, then we know now what to call Stuxnet. If not, then our best strategy for defending cyberspace is “the best defense is a strong defense”, because we’ve just declared open season on ourselves. That’s not how it will play out, however…
(4) A Prediction
Handing out blank hunting licenses against our own critical infrastructure is not really what’s happened, because anyone with half a brain can read the subtext in the DoD’s Cyberspace Strategy: we will treat attacks against us as attacks against us, regardless of vector. Which means that, ultimately, our strategy toward cyberwarfare is to make it a weapon of privilege: we’ll use it on you, but — as Mr. White would say: “If you use cyberwar against our critical infrastructure in a dream, you’d better wake up and apologize.” See Reservoir Dogs:
This brings us full circle to my earlier question about “Who’d win anyway?” Cyberwar becomes the purview of any nation that is powerful enough that it can argue, with a straight face, “Well, it saved you a good old-fashioned bombing.”
In that one sense, at least, cyberwar is like weapons of mass destruction: it will become the privilege of a closely guarded clique, to which you gain membership only if you’re sufficiently powerful that nobody in their right mind would mess with you. Pee-Wee Herman is not going to be able to launch cyberattacks — excuse me, “state-sponsored cyberterror” — against Mike Tyson. My original complaint against cyberwar was at least partially wrong, then, because I didn’t correctly factor in all the asymmetries in the situation: the best kind of weapon to have is one that the other guys are afraid to use against you in return. I see how the US flies predator assassination drones and strikes wherever it wants, and I see the future of cyberwar. Let this be my public mea culpa: I was not cynical enough.
From that prediction come two more. The first is that the real danger is non-state actors. The worst possible form that could take is a cyberinsurgency, since we’ve already thoroughly inter-penetrated our government and civilian networks at the administrative layer. Even Pee-Wee Herman can do tremendous harm if he’s the guy who manages your network and has the keys to the kingdom. You don’t need Stuxnet if you can get to an insider; we saw that over and over when the KGB penetrated the intelligence apparatus during the cold war — if there are enemies preparing for cyberwar against us, our Stuxnet will be injected into the network by someone who already has “root”. The second is that we’ll see some action around attribution. There will be a lot of delicate dancing, there.
(5) What About US?
Immediately, when we think of Stuxnet, we are reminded of the many problems that exist with our own “smart grid” SCADA systems. Could something similar happen to us? The obvious and true answer is “of course” and, all other things being equal, Stuxnet could serve as a justification for a retaliatory attack in kind. Right now, our best defense against that sort of thing is our terrifying potential in a war of “tit for tat” – we’ve gone on record that we’ll treat cyberattacks (against us!) as justification for real world military response. While we might be vulnerable to such attacks, I doubt very much that any nation-state is going to invite the “good old-fashioned bombing” that would result. It’s the defensive strategy of “being Mike Tyson” and it’ll work for a while.
Currently, all the trend-lines are heading in the opposite direction from building defensible and reliable systems. Cost-cutting pressures mean a drive for efficiency and centralized management that will exacerbate current SCADA systems’ problems unless we are able to put the brakes on, somewhere, and get security factored into management and reliability costs. There’s a deep and growing divide between the sense of “ownership” of one’s data and networks, and the objective of outsourcing as much as possible. If we accept that computers are a critical tool for future generations of warriors, then there has never been a time before in history in which a military force depended for so long on something it understood so poorly. Reforms are necessary and critical, both in the civilian infrastructure control systems and in US Government networks.
Consider Stuxnet as a parable of the dangers of outsourcing, and you have the Iranian government, which, rather than design and build its own centrifuges, “outsourced” them from someplace, then “outsourced” the PLCs, and assembled its plant. Unfortunately, the tendency to outsource something is strongest when the thing you’re outsourcing is something you don’t understand or know how to make yourself – which leaves you in the position of having to adopt that system without understanding whether or not it has weaknesses and vulnerabilities. That’s why, if you don’t understand the basics of how a car works, your trip to the mechanic is always going to be a bit more expensive.
Once you’ve built something complex and expensive out of components you don’t know how to build and don’t fully understand, you’ve got a complex supply-chain that can be attacked at any point upstream of your equipment provider, and anyone who can penetrate that supply-chain will be able to infer a great deal about your operations without your being able to do anything about it. I’m sure that one of the downstream consequences of Stuxnet will be a reduction in the tendency to trust nuclear technologies acquired from other sources, the end result of which will be an increase in cost as would-be nuclear states “roll their own” and, eventually, amortize that cost by productizing what they develop.
The most important step to getting that process going in the right direction is to re-assess the cost/benefit analysis to factor in the cost of the increased likelihood of failure. In the commercial world this happens in the form of “unexpected costs” when an outsourced service comes up snake-eyes and a million-dollar incident response effort is needed. In the world of government affairs, it’s more serious: how much money was saved by building “cheaper” RBMK-style reactors versus the cost of Chernobyl? The way to claw back from our state of vulnerability is to not allow people to claim security is an extra cost to be tacked on to a project after it is completed, but rather to recognize that leaving it out in the first place amounts to building an unreliable system; not actually completing the project to spec. This is why it’s especially egregious when I see some of the beltway insiders who presided over the construction of a potential disaster standing around with their hands out, asking for more money to improve the disaster. The time to deal with these issues is before someone signs off on accepting a vulnerable, unreliable piece of infrastructure. The job’s not done until it’s built right.
Also: see this update in the comments.
(6) About the author, including links to other posts in this series
See the About the Authors page for information about Marcus J. Ranum.
Other publications by Ranum:
- “The Problem with Cyberwar“, Rear Guard Security
The series Cyberwar: a Whole New Quagmire, by Marcus J. Ranum:
- The Pentagon Cyberstrategy, 2 September 2011
- “Do as I say, not as I do” shall be the whole of the law, 11 September 2011
- Conflating Threats, 14 September 2011
- About Stuxnet, the next generation of warfare?, 29 September 2011
- When the Drones Come To Roost, 8 October 2011
- About Attribution (identifying your attacker), 21 October 2011
(7) For more information
(a) About Stuxnet:
- Stuxnet Under the Microscope, ESET (a cyber security company)
- Recommended: “The Stuxnet Computer Worm: Harbinger of an Emerging Warfare Capability“, Congressional Research Service, 9 December 2010
- “Computer security: Is this the start of cyberwarfare?“, Sharon Weinberger, Nature, 8 June 2011 — “Last year’s Stuxnet virus attack represented a new kind of threat to critical infrastructure.”
- “Stuxnet as Cyberwarfare: Applying the Law of War to the Virtual Battlefield“, John C. Richardson (JMR Portfolio Intelligence), 22 July 2011
- “Stuxnet: Cyber Conflict, Article 2(4), and the Continuum of Culpability“, Colin S. Crawford (Wake Forest U School of Law), 2011
(b) About cyberwar:
- “Assessing the Risks of Cyber Terrorism, Cyber War and Other Cyber Threats“, James A. Lewis, Center for Strategic and International Studies, December 2002
- “Meet Your New Commander-in-Geek“, Katherine Mangu-Ward, Reason, 26 May 2010 — “U.S. Cyber Command has no idea why it exists.” But their fear-mongering PR is first-rate.
- “China’s Emerging Cyber War Doctrine“, Gurmeet Kanwal, Journal of Defense Studies (Institute for Defense Studies and Analysis), July 2009
- The cyber war threat has been grossly exaggerated, NPR, 8 June 2010 — Audio here.
- “Tehran’s Lost Connection“, Geneive Abdo, Foreign Policy, 10 June 2010 — “Is the Iranian regime’s cyberwar with the United States real, or a paranoid delusion?” — Abdo expects to know if the US waged cyberwar against Iran, ignoring our long history of covert offensive operations.
- “Reducing Systemic Cybersecurity Risk”, Peter Sommer (London School of Economics) and Ian Brown (Oxford), OECD, 14 January 2011
- “Cyberwar an exaggerated threat“, UPI, 17 January 2011 — Says Peter Sommer, now of the London School of Economics and author of the Hacker’s Handbook (1985) under the pseudonym Hugo Cornwall.
- “Cyber war threat exaggerated claims security expert“, BBC, 16 February 2011 — Says Bruce Schneier, chief security officer for British Telecom.
- “Don’t Believe Scare Stories about Cyber War“, John Horgan, Scientific American, 3 June 2011