Cybersecurity & cyberwar

Cyberwar: About Stuxnet, the next generation of warfare?

Summary:  We begin our assessment of Stuxnet’s impact on the field of cyberwar.  As a case study of what appears to be a highly effective state-sponsored attack, Stuxnet has pushed the reality of cyberwar to center stage.   This is the fourth in a series about cyberwar by FM co-author Marcus J. Ranum.

Stuxnet

Contents

  1. Stuxnet as Cyberwar
  2. Stuxnet, Briefly
  3. State-Sponsored What?
  4. A Prediction
  5. What About US?
  6. About the author
  7. For more information

(1) Stuxnet as Cyberwar

What is Stuxnet?  For an excellent introduction see the Wikipedia entry.  For a description of the attack see “Stuxnet Malware and Natanz”, Institute for Science and International Security, 15 February 2011.

There is no question that Stuxnet is a game-changer for cyberwar: on one hand it represents a successful operation that accomplished an international objective, while on the other it illustrates other countries’ vulnerability to similar attacks. In terms of the big cyberwar scenarios we hear so much about, with countries being plunged into blackness and chaos, it hardly signifies, but its success is certain to inspire imitation. Was Stuxnet a “cyberwar” operation, though? We need to explore that question very, very carefully.

In 2008, I offered a series of arguments against the big cyberwar scenario (“The Problem with Cyberwar“) involving a hypothetical state-versus-state full-up battle in cyberspace. One of the problems with the scenario that I identified was called the “Who’d Win Anyway?” problem — namely, that cyberwar was unlikely to be as asymmetric as its proponents claimed it would be, and wouldn’t be able to singlehandedly flip the balance of power. In other words, Pee-Wee Herman is not going to beat up Mike Tyson even if you turn the lights out and give Pee-Wee night vision goggles: there are some imbalances that are too great, and whoever would have won without the cyber-attack would win anyway.


In fact, Pee-Wee might earn a more severe beating as a consequence. Stuxnet is significant here because, while it wasn’t a full-up big cyberwar attack, the “side” that would have won anyway, won. And the defeated party, so far, has done nothing but lick its wounds and repair the damage.

The comment that brought clarity to Stuxnet was made by one of my acquaintances at a security conference. He said, “Well, it saved the Iranians a good old-fashioned bombing.” It was such a deep truth it should not have been uttered so casually.

(2)  Stuxnet, Briefly

Supervisory Control and Data Acquisition (SCADA) systems have been used since the 1960s to monitor critical infrastructure and provide early warning of potential disasters. SCADA has evolved from a monolithic architecture to a networked one.

The story of Stuxnet does not start where it’s usually told, with the worm’s discovery in June 2010 and its subsequent decompilation and analysis. It starts in 2007 in Idaho, at a US Government national lab where energy control systems and nuclear reactors are tested. Idaho National Labs began a research project code-named AURORA and — in March 2007 — hypothesized a series of vulnerabilities in technologies that are usually controlled via SCADA. That SCADA systems had ‘issues’ was not in question; the question AURORA was trying to answer was whether SCADA vulnerabilities could be translated into real-world destruction of critical systems. If you’d like to see what happens to a diesel engine driving a generator when the generator’s frequency is shifted out of phase, watch the video below.  It’s not pretty.

You’ll note that the engine is more or less OK; what fails is the piece that’s placed under stress – the connection between the engine and the rapidly-moving stuff that is being told it should be rapidly moving in a different direction entirely. What the folks at Idaho National Labs realized was that there are all kinds of places where SCADA systems control the timing of physical events, and virtually anywhere you can desynchronize a precisely-timed event, you can damage equipment.
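To make the physics concrete, here is a deliberately crude back-of-the-envelope model (every number is invented for illustration; this is a sketch, not the AURORA test): the torque a coupling must transmit when a controller commands a speed change faster than the rotating mass can follow grows with the size of the step and shrinks with the time allowed for it.

```python
import math

# A crude sketch: torque = I * d(omega)/dt. All figures are invented.
def coupling_torque(inertia_kg_m2, rpm_from, rpm_to, dt_s):
    """Torque (N*m) the coupling must carry to slew the rotor in dt_s."""
    d_omega = (rpm_to - rpm_from) * 2.0 * math.pi / 60.0  # rpm -> rad/s
    return inertia_kg_m2 * d_omega / dt_s

# A 50 kg*m^2 rotor commanded from 1800 to 1860 rpm in 50 ms:
print(f"{coupling_torque(50.0, 1800, 1860, 0.05):,.0f} N*m")
# The same change spread over 5 seconds is a hundred times gentler:
print(f"{coupling_torque(50.0, 1800, 1860, 5.0):,.0f} N*m")
```

The controller doesn’t need to break anything directly; it only needs to issue commands the machinery cannot follow.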

AURORA, interestingly, had its codename co-opted to describe the allegedly state-sponsored attacks against Google in 2010 (source:  Wired.  The article contains nonsensical claims like: “It’s totally changing the threat model.” Just another system vulnerability and sloppy administration writ large.). If you’re a member of the tinfoil hat brigade, make sure your skullcap is properly fitted. Was this a deliberate attempt to confuse people? Fast forward to 2010, when Stuxnet was discovered. The attacks Stuxnet implements are straight out of Idaho Labs’ AURORA playbook: the reactor at Bushehr was damaged when the frequency of a pump’s motor was adjusted while there was a huge amount of moving coolant that did not want to suddenly stop moving. Russian engineers describe the problem as: “trouble arose as pressure mounted in the reactor during tests. The pump vibrated and joints broke” (source:  NY Times).  Does that sound familiar?

It’s much harder to learn what happened at Natanz. The best information most of us have is that gas centrifuges failed when their motors’ drive frequency was varied rapidly, causing wear and eventually destroying the motors. Worse, apparently, the frequency converter settings were deliberately set slightly “off,” so that the centrifuges were not collecting properly enriched uranium in their processing loop — the output of the enrichment loop was contaminated and would need to be re-processed at considerable expense. Beyond that, we don’t know when the attack actually started, and the Iranians are, obviously, not forthcoming.
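That failure mode was also a monitoring failure: the operators’ consoles showed normal readings while the drives were being swept far off nominal. A minimal sketch of an independent, out-of-band telemetry check follows; the field layout and tolerance are my assumptions, while the nominal and extreme frequencies are those reported in public analyses of the worm.

```python
# A minimal sketch, assuming drive frequencies are logged out-of-band.
# Field layout and tolerance are invented; ~1064 Hz nominal and swings to
# 1410 Hz and 2 Hz are the figures reported in public Stuxnet analyses.
NOMINAL_HZ = 1064.0
TOLERANCE_HZ = 10.0

def excursions(samples_hz, nominal=NOMINAL_HZ, tol=TOLERANCE_HZ):
    """Return (index, value) for every sample outside the nominal band."""
    return [(i, f) for i, f in enumerate(samples_hz) if abs(f - nominal) > tol]

log = [1064.2, 1063.7, 1410.0, 1064.0, 2.0]  # invented telemetry excerpt
for i, f in excursions(log):
    print(f"sample {i}: {f} Hz is far outside the {NOMINAL_HZ} Hz band")
```

Nothing about such a check is clever; the point is that it must not depend on the same PLCs the attacker controls.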

We can estimate the development time-line for Stuxnet, with the effort beginning late in 2007. In spite of the trickiness of the code, it probably wasn’t a huge effort to produce – a couple of programmers working for a year would have time to build several versions, if one focused on the main software infrastructure while the others researched and developed a set of zero-day attacks that would allow the worm to penetrate target systems. The main expense, no doubt, was testing: the function of the worm indicates considerable knowledge of how gas centrifuge chains are laid out, and of the PLCs involved — the authors had to get some highly specialized knowledge, and then build a virtual mock-up to test their code on.

I’d be willing to bet that a few physical components were sacrificed, eventually, in order to measure the time-to-fail once the component came under attack — that’s not something you could accurately simulate. This amounts to solid evidence that the effort was state-sponsored: one does not simply review the Wikipedia entry on “gas centrifuge,” buy a P-1 centrifuge on eBay, download the manuals from Siemens, and start bashing out code.

How was Stuxnet delivered? It appears to have been designed to replicate its way onto USB removable drives, which makes it lower-profile than a network-based attack. If Stuxnet had been network-based it would have been much more virulent and most likely would have been detected fairly quickly – the state of the art of networked malware detection is actually quite good. The implication of the USB delivery is that a fall-back plan would have been to get an insider to brush past a computer, insert a key-stick for a few seconds, then pull it out and walk off. Where did the actual attack happen? It’s like wondering whether the polonium-210 was in your scrambled eggs or in your coffee: does it really matter once you’ve finished your meal? What’s crucial about this is that it illustrates that an entire system and software supply chain can be attacked and latently penetrated.
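The defensive counterpart is mundane. Here is a hedged sketch of checking a removable drive for the delivery pattern publicly described for the worm: shortcut (.lnk) files at the drive root sitting alongside loose payload binaries. The mount point and suffix heuristics are assumptions, and this is nowhere near a real detector.

```python
from pathlib import Path

# A hedged, illustrative check, not a real detector: flag removable drives
# whose root mixes shortcut files with loose payload binaries, the delivery
# pattern publicly described for Stuxnet. Mount point and suffixes assumed.
def suspicious_removable_root(drive_root: str):
    root = Path(drive_root)
    if not root.is_dir():
        return []
    entries = [p for p in root.iterdir() if p.is_file()]
    lnks = [p for p in entries if p.suffix.lower() == ".lnk"]
    payloads = [p for p in entries if p.suffix.lower() in {".tmp", ".dll"}]
    return lnks + payloads if (lnks and payloads) else []

for p in suspicious_removable_root("E:/"):  # assumed Windows mount point
    print("inspect before browsing this drive:", p)
```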

Initial US reaction could best be summarized by my acquaintance’s comment: “it saved the Iranians a good old-fashioned bombing.” Israel was relatively quiet, though it was one of the more obvious suspects (e.g., see Wired). Indeed, if anyone were to ask cui bono? to attribute Stuxnet, Israel and the United States would be suspects #1 and #2, in that order (see “Wikileaks: US advised by German think tank to sabotage Iranian reactors“, Guardian, 18 January 2011). Stuxnet didn’t save some Iranians from a “good old-fashioned bombing” – several Iranian nuclear researchers were killed with car bombs following the Stuxnet attack. Israel was characteristically mum about Stuxnet, though there are stories that Israeli intelligence listed Stuxnet as one of its accomplishments in a showreel at a retirement party for the head of the IDF (source:  The Telegraph). That’s hardly concrete evidence and is probably just someone having fun.

Then, in April 2011, the US research facilities at Oak Ridge Labs came under a spear-phishing attack that succeeded in infecting a number of systems, apparently with the intent of exfiltrating data (Government Computer News, 25 April 2011).

(3)  State-Sponsored What?

Labeling Stuxnet is problematic, since the terminology of warfare has been sliced into a fractal-space of nuanced lawyerisms. When the US is carrying out special-operations attacks and flying missile-equipped drone missions in an “ally in the war on terror”, when can we say hostilities have commenced?  In the world of low-intensity conflict, Stuxnet doesn’t rate as a cyberwar — perhaps it’s the new order of “business as usual”? If so, we’ve got a problem, because on one hand our public officials are willing to point an accusing finger at China and make saber-rattling noises about how cyberattacks could be justification for real-world retaliation — and on the other hand, we’d be handing Iran a gold-plated casus belli in the form of Stuxnet.

If a cyberattack damaged a US nuclear reactor, and could plausibly be pinned on another country, how do you think that event would be billed? Would it be “state-sponsored cyberterrorism”? If so, then we now know what to call Stuxnet. If not, then our best strategy for defending cyberspace is “the best defense is a strong defense,” because we’ve just declared open season on ourselves. That’s not how it will play out, however…

(4)  A Prediction

Handing out blank hunting licenses against our own critical infrastructure is not really what’s happened, because anyone with half a brain can read the subtext in the DoD’s Cyberspace Strategy: we will treat attacks against us as attacks against us, regardless of vector. That means that, ultimately, our strategy toward cyberwarfare is to make it a weapon of privilege: we’ll use it on you, but — as Mr. White would say — “If you use cyberwar against our critical infrastructure in a dream, you’d better wake up and apologize.” See Reservoir Dogs:

This brings us full circle to my earlier question about “Who’d win anyway?” Cyberwar becomes the purview of any nation that is powerful enough that it can argue, with a straight face, “Well, it saved you a good old-fashioned bombing.”

In that one sense, at least, cyberwar is like weapons of mass destruction: it will belong to a closely guarded clique, and you gain membership only if you’re sufficiently powerful that nobody in their right mind would mess with you. Pee-Wee Herman is not going to be able to launch cyberattacks — excuse me, “state-sponsored cyberterror” — against Mike Tyson. My original complaint against cyberwar was at least partially wrong, then, because I didn’t correctly factor in all the asymmetries of the situation: the best kind of weapon to have is one that the other guys are afraid to use against you in return. I see how the US flies Predator assassination drones and strikes wherever it wants, and I see the future of cyberwar. Let this be my public mea culpa: I was not cynical enough.

From that prediction come two more. The first is that the real danger is non-state actors. The worst possible form that could take is a cyberinsurgency, since we’ve already thoroughly inter-penetrated our government and civilian networks at the administrative layer. Even Pee-Wee Herman can do tremendous harm if he’s the guy who manages your network and has the keys to the kingdom. You don’t need Stuxnet if you can get to an insider; we saw that over and over when the KGB penetrated the intelligence apparatus during the Cold War — if there are enemies preparing for cyberwar against us, our Stuxnet will be injected into the network by someone who already has “root”. The second is that we’ll see some action around attribution. There will be a lot of delicate dancing there.

(5)  What About US?

When we think of Stuxnet, we are immediately reminded of the many problems that exist with our own “smart grid” SCADA systems. Could something similar happen to us? The obvious and true answer is “of course” and, all other things being equal, Stuxnet could serve as a justification for a retaliatory attack in kind. Right now, our best defense against that sort of thing is our terrifying potential in a war of “tit for tat” — we’ve gone on record that we’ll treat cyberattacks (against us!) as justification for real-world military response. While we might be vulnerable to such attacks, I doubt very much that any nation-state is going to invite the “good old-fashioned bombing” that would result. It’s the defensive strategy of “being Mike Tyson” and it’ll work for a while.

Currently, all the trend-lines are heading in the opposite direction from building defensible and reliable systems. Cost-cutting pressures mean a drive for efficiency and centralized management that will exacerbate current SCADA systems’ problems unless we are able to put the brakes on somewhere and get security factored into management and reliability costs. There’s a deep and growing divide between the sense of “ownership” of one’s data and networks and the drive to outsource as much as possible. If we accept that computers are a critical tool for future generations of warriors, then never before in history has a military force depended for long on something it understood so poorly. Reforms are necessary and critical, both in the civilian infrastructure control systems and in US Government networks.

Consider Stuxnet as a parable about the dangers of outsourcing: the Iranian government, rather than designing and building its own centrifuges, “outsourced” them from someplace, then “outsourced” the PLCs, and assembled its plant. Unfortunately, the tendency to outsource something is strongest when the thing you’re outsourcing is something you don’t understand or know how to make yourself — which leaves you in the position of having to adopt that system without understanding whether or not it has weaknesses and vulnerabilities. That’s why, if you don’t understand the basics of how a car works, your trip to the mechanic is always going to be a bit more expensive.

Once you’ve built something complex and expensive out of components you don’t know how to build and don’t fully understand, you’ve got a complex supply-chain that can be attacked at any point upstream of your equipment provider, and anyone who can penetrate that supply-chain will be able to infer a great deal about your operations without your being able to do anything about it. I’m sure that one of the downstream consequences of Stuxnet will be a reduction in the tendency to trust nuclear technologies acquired from other sources, the end result of which will be an increase in cost as would-be nuclear states “roll their own” and, eventually, amortize that cost by productizing what they develop.

The most important step to getting that process going in the right direction is to re-assess the cost/benefit analysis to factor in the cost of an increased likelihood of failure. In the commercial world this happens in the form of “unexpected costs,” when an outsourced service comes up snake-eyes and a million-dollar incident response effort is needed. In the world of government affairs, it’s more serious: how much money was saved by building “cheaper” RBMK-style reactors, versus the cost of Chernobyl? The way to claw back from our state of vulnerability is to stop letting people claim that security is an extra cost to be tacked on after a project is completed, and to recognize instead that leaving it out in the first place amounted to building an unreliable system — to not actually completing the project to spec. This is why it’s especially egregious when I see some of the beltway insiders who presided over the construction of a potential disaster standing around with their hands out, asking for more money to improve the disaster. The time to deal with these issues is before someone signs off on accepting a vulnerable, unreliable piece of infrastructure. The job’s not done until it’s built right.
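The arithmetic of that re-assessment is simple; the hard part is getting anyone to do it honestly. A toy comparison, with every figure invented for illustration:

```python
# Toy expected-cost comparison; all figures are invented for illustration.
def expected_total_cost(build_cost, incident_cost, incident_probability):
    """Up-front cost plus the probability-weighted cost of a failure."""
    return build_cost + incident_cost * incident_probability

# "Cheap" build that skips security vs. a build that pays for it up front:
cheap  = expected_total_cost(1_000_000, incident_cost=50_000_000,
                             incident_probability=0.10)
secure = expected_total_cost(1_500_000, incident_cost=50_000_000,
                             incident_probability=0.01)
print(f"cheap:  ${cheap:,.0f}")   # the $500,000 "savings" evaporate
print(f"secure: ${secure:,.0f}")
```

The Chernobyl comparison is this calculation with the incident probability and incident cost filled in after the fact.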

Also:  see this update in the comments.

Marcus Ranum

(6)  About the author

See the About the Authors page for Marcus J. Ranum’s impressive bio. Also see this article by Ranum: “The Problem with Cyberwar“, Rear Guard Security.

For those who want to learn about this new dimension of war, a must-read is Ranum’s 2011 series Cyberwar: a Whole New Quagmire:

  1. The Pentagon Cyberstrategy.
  2. “Do as I say, not as I do” shall be the whole of the law.
  3. Conflating Threats.
  4. About Stuxnet, the next generation of warfare?
  5. When the Drones Come To Roost.
  6. About Attribution (identifying your attacker).

(7)  For more information

(a)  About Stuxnet:

  1. “Stuxnet Under the Microscope”, ESET (a cyber security company).
  2. Recommended:  “The Stuxnet Computer Worm: Harbinger of an Emerging Warfare Capability”, Congressional Research Service, 9 December 2010.
  3. “Computer security: Is this the start of cyberwarfare?”, Sharon Weinberger, Nature, 8 June 2011 — “Last year’s Stuxnet virus attack represented a new kind of threat to critical infrastructure.”
  4. “Stuxnet as Cyberwarfare: Applying the Law of War to the Virtual Battlefield”, John C. Richardson (JMR Portfolio Intelligence), 22 July 2011.
  5. “Stuxnet: Cyber Conflict, Article 2(4), and the Continuum of Culpability”, Colin S. Crawford (Wake Forest U School of Law), 2011.

(b)  About cyberwar:

  1. “Assessing the Risks of Cyber Terrorism, Cyber War and Other Cyber Threats”, James A. Lewis, Center for Strategic and International Studies, December 2002.
  2. “Meet Your New Commander-in-Geek”, Katherine Mangu-Ward, Reason, 26 May 2010 — “U.S. Cyber Command has no idea why it exists.”  But their fear-mongering PR is first-rate.
  3. “China’s Emerging Cyber War Doctrine”, Gurmeet Kanwal, Journal of Defense Studies (Institute for Defense Studies and Analysis), July 2009.
  4. “The cyber war threat has been grossly exaggerated”, NPR, 8 June 2010 — Audio here.
  5. “Tehran’s Lost Connection”, Geneive Abdo, Foreign Policy, 10 June 2010 — “Is the Iranian regime’s cyberwar with the United States real, or a paranoid delusion?” — Abdo expects to know if the US waged cyberwar against Iran, ignoring our long history of covert offensive operations.
  6. “Reducing Systemic Cybersecurity Risk”, Peter Sommer (London School of Economics) and Ian Brown (Oxford), OECD, 14 January 2011.
  7. “Cyberwar an exaggerated threat”, UPI, 17 January 2011 — Says Peter Sommer, now of the London School of Economics and author of the Hacker’s Handbook (1985) under the pseudonym Hugo Cornwall.
  8. “Cyber war threat exaggerated, claims security expert”, BBC, 16 February 2011 — Says Bruce Schneier, chief security officer for British Telecom.
  9. “Don’t Believe Scare Stories about Cyber War”, John Horgan, Scientific American, 3 June 2011.



Comments

  1. “Son of Stuxnet?”, Blake Hounshell, Foreign Policy, 19 October 2011 — Excerpt:

    When an unknown entity, most likely some combination of Western and Israeli intelligence agencies, created Stuxnet, the mysterious computer worm widely thought to be targeted at Iran’s nuclear program, cybersecurity experts warned that a new digital threat had been unleashed, with potentially dangerous and wide-ranging consequences.

    …Now, tech researchers at Symantec and F-Secure have identified a new piece of malware they’re calling Duqu, and which they say is very similar to Stuxnet. …


  2. I think it’s important to point out that regular software engineers could produce a Stuxnet. In many ways Stuxnet is much less sophisticated malware than Zeus or some of the transaction-intercepting commercial malware being used to hijack funds transfers, today.

    What was sophisticated about Stuxnet, and was significant evidence that the authors had uncommon information, was that Stuxnet appeared to be designed with knowledge of the specific gas-centrifuge cascades that were being used at Natanz. While any experienced system programmer can eventually turn out pretty workable malware (and a good systems programmer can quickly turn out pretty impressive stuff indeed), most programmers don’t have a specific type of centrifuge to test against, nor do they know how many centrifuge programmable logic controllers to attempt to manipulate in a given centrifuge cascade.

    Let me try an analogy: if someone breaks into your house by throwing a brick through the window, that’s one thing. If someone breaks into your house, bypasses your security system, and immediately goes directly to your wall-safe that’s hidden behind your fireplace – then they had inside knowledge. In the case of Stuxnet it’s the attackers’ understanding of the target’s layout that’s the interesting fact, not the actual code of the malware. Whoever did it knew a lot about that one target, and that knowledge was not anything close to common.


  3. “Stuxnet weapon has at least 4 cousins”, Reuters, 28 December 2011 — Excerpt:

    The Stuxnet virus that last year damaged Iran’s nuclear program was likely one of at least 5 cyber weapons developed on a single platform whose roots trace back to 2007, according to new research from Russian computer security firm Kaspersky Lab. … Stuxnet has already been linked to another virus, the Duqu data-stealing trojan, but Kaspersky’s research suggests the cyber weapons program that targeted Iran may be far more sophisticated than previously known.

    Kaspersky’s director of global research & analysis, Costin Raiu, told Reuters on Wednesday that his team has gathered evidence that shows the same platform that was used to build Stuxnet and Duqu was also used to create at least 3 other pieces of malware. Raiu said the platform is comprised of a group of compatible software modules designed to fit together, each with different functions. Its developers can build new cyber weapons by simply adding and removing modules. “It’s like a Lego set. You can assemble the components into anything: a robot or a house or a tank,” he said.

    Kaspersky named the platform “Tilded” because many of the files in Duqu and Stuxnet have names beginning with the tilde symbol “~” and the letter “d.”

    Researchers with Kaspersky have not found any new types of malware built on the Tilded platform, Raiu said, but they are fairly certain that they exist because shared components of Stuxnet and Duqu appear to be searching for their kin. When a machine becomes infected with Duqu or Stuxnet, the shared components on the platform search for two unique registry keys on the PC linked to Duqu and Stuxnet that are then used to load the main piece of malware onto the computer, he said.

    Kaspersky recently discovered new shared components that search for at least 3 other unique registry keys, which suggests that the developers of Stuxnet and Duqu also built at least three other pieces of malware using the same platform, he added. Those modules handle tasks including delivering the malware to a PC, installing it, communicating with its operators, stealing data and replicating itself.

    … Kaspersky believes that Tilded traces back to at least 2007 because specific code installed by Duqu was compiled from a device running a Windows operating system on 31 August 2007.
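    The loader behavior Raiu describes, shared components probing the registry for keys that point at the main payload, is easy to hunt for in principle. A hedged sketch (Windows-only; the key paths below are placeholders, since the excerpt does not name the actual Duqu/Stuxnet indicators):

    ```python
    # Hedged sketch, Windows-only: probe for loader-style registry keys.
    # The paths are invented placeholders, not real Tilded indicators.
    import winreg

    CANDIDATE_KEYS = [
        r"SOFTWARE\Example\LoaderKeyA",  # hypothetical
        r"SOFTWARE\Example\LoaderKeyB",  # hypothetical
    ]

    def present_keys(paths):
        """Return the subset of candidate HKLM keys present on this machine."""
        found = []
        for path in paths:
            try:
                with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path):
                    found.append(path)
            except OSError:  # key absent or access denied
                pass
        return found

    print(present_keys(CANDIDATE_KEYS))
    ```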


  4. “Equipment Maker Caught Installing Backdoor Account in Control System Code”, Wired, 25 April 2012 — Excerpt:

    A Canadian company that makes equipment and software for critical industrial control systems planted a backdoor login account in its flagship operating system, according to a security researcher, potentially allowing attackers to access the devices online.

    The backdoor, which cannot be disabled, is found in all versions of the Rugged Operating System made by RuggedCom, according to independent researcher Justin W. Clarke, who works in the energy sector. The login credentials for the backdoor include a static username, “factory,” that was assigned by the vendor and can’t be changed by customers, and a dynamically generated password that is based on the individual MAC address, or media access control address, for any specific device.

    Attackers can uncover the password for a device simply by inserting the MAC address, if known, into a simple Perl script that Clarke wrote. MAC addresses for some devices can be learned by doing a search with SHODAN, a search tool that allows users to find internet-connected devices, such as industrial control systems and their components, using simple search terms.

    Clarke, who is based in San Francisco, says he discovered the backdoor after purchasing two used RuggedCom devices – an RS900 switch and an RS400 serial server – on eBay for less than $100 and examining the firmware installed on them.

    … RuggedCom switches and servers are used in “mission-critical” communication networks that operate power grids and railway and traffic control systems as well as manufacturing facilities. RuggedCom asserts on its website that its products are “the product of choice for high-reliability, high-availability, mission-critical communications networks deployed in harsh environments around the world.”

    Clarke says he notified RuggedCom about his discovery in April 2011 and says the representative he spoke with acknowledged the existence of the backdoor. “They knew it was there,” he told Threat Level. “They stopped communicating with me after that.” The company failed to notify customers or otherwise address the serious security vulnerability introduced by the backdoor.

    … RuggedCom, which is based in Canada, was recently purchased by the German conglomerate Siemens. Siemens, itself, has been highly criticized for having a backdoor and hard-coded passwords in some of its industrial control system components. The Siemens vulnerabilities, in the company’s programmable logic controllers, would let attackers reprogram the systems with malicious commands to sabotage critical infrastructures or lock out legitimate administrators.

    A hardcoded password in a Siemens database was used by the authors of the Stuxnet worm to attack industrial control systems used by Iran in its uranium enrichment program.

    Hardcoded passwords and backdoor accounts are just two of numerous security vulnerabilities and security design flaws that have existed for years in industrial control systems made by multiple manufacturers. The security of the devices came under closer scrutiny in 2010 after the Stuxnet worm was discovered on systems in Iran and elsewhere. Numerous researchers have been warning about the vulnerabilities for years. But vendors have largely ignored the warnings and criticism because customers haven’t demanded that the vendors secure their products.
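    A password derived only from a MAC address is not a secret: MAC addresses are broadcast on the local network and, as the article notes, indexed by tools like SHODAN. A hedged sketch of the general weakness; the derivation function below is an invented stand-in, since Clarke’s actual Perl algorithm is not reproduced in the excerpt.

    ```python
    # Hedged sketch: why a MAC-derived password is no secret. The derivation
    # below is an invented stand-in, not RuggedCom's actual algorithm.
    import hashlib

    def derived_password(mac: str) -> str:
        """Anyone who knows the derivation and the MAC gets the password."""
        return hashlib.md5(mac.lower().encode()).hexdigest()[:8]

    # A MAC learned from the LAN or from a SHODAN banner:
    print(derived_password("00:0a:dc:12:34:56"))
    ```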


