The rigging of the Millennium Challenge 2002 simulation echoes events in the planning for the Vietnam War. Chapter 21 of David Halberstam’s The Best and the Brightest describes the elaborate simulations run to prepare for our involvement in Vietnam, each involving weeks of preparation and conducted by senior military and civilian officials. The first set did not go well for the Blue Team. Here we see the first weakness of all analytical tools: the moral weakness of their human components, who cheat in order to get the desired result.
The second set of war games went a little better. … There was a greater US willingness to commit more and more of its resources to the war, and corollary change among the North Vietnamese, a downplaying of their willingness to meet the larger American commitment.
As Joshua Foust says, in his forthright manner:
… the military is not adverse to rigging games so America always wins. This is an endemic problem: one interesting bit of information I’ve gleaned from the volumes of self-praise in Thomas Barnett’s latest book is that the DoD absolutely loves hearing how great it is. Though understandable — who does not like having his ego stroked oh-so-lovingly? — it is a critical fault for the agency responsible for the defense of the country and the execution of its foreign policy. Hearing “the past decade of research you’ve done into air power, robots, special forces is kind of wasted effort” doesn’t make one any friends. Saying “here’s how you can take your tactics to the next imaginary level,” on the other hand, does. (source)
Both the pre-Vietnam War games and Millennium Challenge 2002 also illustrate the second weakness of any analytic system: no matter how good a simulation is, it has no utility if decision-makers choose to ignore its results.
… the real lesson of the games, and it was not a lesson they wanted to talk about, was not how vulnerable the North was to US bombing, but how invulnerable it was, how much of an American input it would require to dent the North Vietnamese will, and how even that dent was not assured, and finally, for some of the more neutral observers, the fact that the basic strategy of limited bombing already split the civilians and much of the military.
Simulations can provide a competitive edge for conventional military organizations, especially when run at a size and complexity that our non-state 4GW foes cannot duplicate.
Simulations, like most tools, require moral courage in their operators (i.e., a willingness to accept career risk), an organizational culture focused on external realities (such as battlefield success, not court politics or domestic pork spending), and some intelligence in the senior decision-makers (not necessarily genius-level, but not fools either).
Military simulations, like all analytical tools, must be run correctly to produce useful results, and those results have impact only if they are listened to. So simple to say, so difficult to do.
Update: Zenpundit’s note on this series is (as usual) worth a look, especially his comments on the nature and function of gaming, both military and in general.
Previous articles on this subject
- Recommended reading: an autopsy of the 2002 Millennium Challenge war games
- War games, the antidote to “Victory disease”
- Are war games a competitive edge of conventional forces vs. non-state 4GW foes?
- During Millennium Challenge 2002, by Ed Beakley (Project White Horse blog), posted at the DNI blog
- What we should have learned from MC02, by Dag von Lubitz, posted at the DNI blog