
Overlapping False Alarms: Reason for Concern?

Linn I. Sennott

Professor of Mathematics, Illinois State University, Normal, Illinois. Dr. Sennott is a member of the Mathematical Association of America, the Operations Research Society of America, and the Association for Women in Mathematics.

 


Overlapping False Alarms

Failure of Dual Phenomenology

Significance of Launch on Warning

Conclusions

References

 


 

Overlapping False Alarms

The brief history of the nuclear era is replete with nuclear false alarms, including a flock of geese being mistaken by radar for a flight of missiles, a flock of swans being mistaken for a squadron of MiGs, the rising moon being mistaken for a massive ICBM attack, and a war games tape being accidentally left on a computer and mistaken for the real thing. False alarms are so frequent that no single one, by itself, is likely to start an accidental nuclear war. Yet there is reason for concern.

Data made available by the American government under its Freedom of Information Act show that a total of 1,152 moderately serious false alarms occurred during the period 1977 to 1984, an average of almost three false alarms per week.1 Officially known as "Missile Display Conferences to Evaluate Possible Threats" (MDCs), these are called as soon as a possible launch is detected or unusual information appears from warning sensors. The issue of false alarms is considered so sensitive that data are no longer being released by the American government and data on the Soviet system have never been available. But one may assume fairly stable rates of occurrence over time and similar rates of occurrence from one nation to the other.

A nuclear false alarm does not usually cause much concern. With three occurring in an average week, they are too routine. However, a new and potentially dangerous situation arises if a second false alarm occurs before the previous one has been resolved. Two such simultaneous false alarms tend to corroborate each other and could lead to disastrous actions. Bracken's paper in this volume provides a detailed explanation of the danger inherent in such multiple failures.

The high frequency of false alarms makes overlap a significant possibility. I have therefore analyzed the probability distributions involved, using the available data on failure rates in the North American Aerospace Defense Command (NORAD) Early Warning System. The complete mathematical analysis can be found in my other work; this paper restricts itself to summarizing the results.2,3

 

"A nuclear false alarm does not usually cause much concern. With three occurring in an average week, they are too routine. However, a new and potentially dangerous situation arises if a second false alarm occurs before the previous one has been resolved."

 

The problem of overlapping false alarms can be analyzed using the mathematics of queueing theory. We have all had the annoying experience of waiting in a long line or queue, be it waiting for service in a store or waiting for an open telephone line. Queueing theory was developed to analyze these situations and to tell the store or the phone company what trade-offs are possible between customer waiting time and server idle time. Having more servers means that customers wait less, but servers sit idle more often, waiting for a customer.

In our model, the "customers" are false alarms and there is just one server, the command and control apparatus that deals with false alarms. An overlapping false alarm corresponds to a new "customer" having to "wait" when it seeks "service." That is, a new false alarm arrives and finds that the previous one has not yet been cleared ("served") by the command and control system.
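The model can be made concrete with a short simulation. The Python sketch below is my illustration, not part of the original analysis: it assumes that alarms arrive at random (a Poisson process) and that each takes a fixed time to resolve, using the 3.5-minute resolution time and the 144-alarms-per-year rate discussed in the following paragraphs. It estimates how long, on average, the system runs before a new alarm arrives on top of an unresolved one.

    import random

    MINUTES_PER_YEAR = 365.25 * 24 * 60

    def years_until_overlap(alarms_per_year, resolution_min, trials=5000):
        # Estimate the average time (in years) until a new alarm arrives
        # before the previous one has been resolved.
        rate = alarms_per_year / MINUTES_PER_YEAR   # arrivals per minute
        total_minutes = 0.0
        for _ in range(trials):
            t = random.expovariate(rate)        # first alarm: nothing to overlap
            while True:
                gap = random.expovariate(rate)  # wait for the next alarm
                t += gap
                if gap < resolution_min:        # it arrived while the previous
                    break                       # alarm was unresolved: overlap
            total_minutes += t
        return total_minutes / trials / MINUTES_PER_YEAR

    # 144 MDCs per year and a 3.5-minute resolution time (assumed values)
    print(years_until_overlap(144, 3.5))        # prints roughly 7 (years)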

While the average resolution time of false alarms (MDCs) is not public information, there have been reports that they typically take at least one minute to resolve. It is also known that at least one such alarm lasted six minutes. In my model, I use the average of these two numbers, 3.5 minutes, as the assumed resolution time. The average time until two false alarms overlap is then derived, with the results shown in Table 1 for various rates of individual false alarms. (While there is some sensitivity to the assumed resolution time, my general conclusions are not affected if a different resolution time in the range of one to six minutes is used.)2


False alarms        Expected time until two
per year            alarms overlap (years)

    5                     6,000
   10                     1,500
   50                        60
  100                        15
  150                         6.7
  200                         3.8
  300                         1.7


Table 1: Expected Time until Overlapping False Alarms


Using the figure of 144 false alarms per year (NORAD's MDC rate for 1977 through 1984), overlapping false alarms should occur about once every seven years. If less serious false alarms than MDCs are counted, overlaps occur much more frequently for two reasons. First, there are literally thousands of less serious alarms per year. Second, doubling the number of false alarms quadruples the rate of occurrence of overlaps. The mathematics behind this statement is beyond the scope of this paper, but the principle is evident from Table 1. For example, doubling the false alarm rate from 150 to 300 per year quadruples the rate of overlaps from one every 6.7 years to one every 1.7 years.
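These figures can be checked with a back-of-the-envelope formula. Under the random-arrival assumption sketched above, overlaps occur at roughly the alarm rate squared times the resolution time, so the expected wait is the reciprocal of that product. The few lines of Python below are my restatement of this rule of thumb, not the published derivation,2 and they reproduce Table 1 and the seven-year figure:

    MINUTES_PER_YEAR = 365.25 * 24 * 60
    RESOLUTION_MIN = 3.5                      # assumed resolution time

    def expected_overlap_years(alarms_per_year):
        lam = alarms_per_year / MINUTES_PER_YEAR    # alarms per minute
        # Overlaps occur at rate lam**2 * s; the expected wait is its reciprocal.
        return 1.0 / (lam ** 2 * RESOLUTION_MIN) / MINUTES_PER_YEAR

    for rate in (5, 10, 50, 100, 144, 150, 200, 300):
        print(f"{rate:4d}/yr -> {expected_overlap_years(rate):10.1f} years")
    # 5/yr gives about 6,000 years; 144/yr about 7.2 years; 300/yr about 1.7,
    # matching Table 1.

The quadrupling phenomenon is visible in the formula: doubling the rate doubles lam, and squaring turns that into a factor of four.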

 

"The more frequent false alarms are usually regarded as less serious. But ... these may be the most dangerous ... of all."

 

The more frequent false alarms are usually regarded as less serious. But, given the quadrupling phenomenon and the instabilities in military command and control systems (see Bracken and Raushenbakh's papers in this volume), these may be the most dangerous false alarms of all.

 

Failure of Dual Phenomenology

Another failure mode of warning systems can also be modeled by queueing theory. Warning systems consist basically of two components: satellite systems to detect the infrared trail of a burning missile motor and radars to detect and track incoming ICBMs.

Because of the severe consequences of incorrectly declaring that we are under attack, a requirement has evolved for "dual phenomenology" - the requirement that an indication of attack picked up by satellite sensors be independently verified by radar.4 Satellites orbiting the earth see the missile at the time of launch, while radar installations around the defending country see it a short time later as it comes within range. In our model, dual phenomenology fails if a radar false alarm occurs before the last satellite false alarm has been resolved. This order of events is required because, to mimic a real attack, satellite detection must precede radar detection.

 

"With each nation aware that the other might consider a decapitation strike, there is tremendous pressure to strike first."

 

Again thinking of false alarms as customers and their resolution as service times, we now have two kinds of customers: satellite customers and radar customers. Dual phenomenology fails if a new radar customer finds the last satellite customer still being served (resolved). Our model assumes that false alarms in the satellite and radar systems are independent (totally random), but is conservative because correlation (a tendency of false alarms to cluster together) would increase the chance for overlap and failure.

Again using a resolution time of 3.5 minutes for each satellite false alarm, the expected time until a failure of dual phenomenology is given in Table 2.2


False alarms per year         Expected time until failure of
Satellites      Radars        dual phenomenology (years)

    5              5                6,000
   10             10                1,500
   50             50                   60
  100             50                   30
   50            100                   30
  100            100                   15
  200            200                    3.8


Table 2: Expected Time until Failure of Dual Phenomenology


Note that doubling the rate of either type of false alarm halves the expected time until failure of dual phenomenology, and that doubling the rate of both types cuts the expected time by a factor of four, similar to Table 1.
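The same rule of thumb applies here, with the two rates multiplied together rather than squared: a failure requires a radar false alarm to land inside the resolution window of a satellite false alarm, so failures occur at roughly the product of the two alarm rates times the satellite resolution time. The sketch below is again my reading of the model, using the 3.5-minute satellite resolution time assumed above:

    MINUTES_PER_YEAR = 365.25 * 24 * 60
    SAT_RESOLUTION_MIN = 3.5              # assumed satellite resolution time

    def dual_failure_years(sat_per_year, radar_per_year):
        lam_sat = sat_per_year / MINUTES_PER_YEAR
        lam_radar = radar_per_year / MINUTES_PER_YEAR
        # A radar alarm must arrive while a satellite alarm is still unresolved.
        return 1.0 / (lam_sat * lam_radar * SAT_RESOLUTION_MIN) / MINUTES_PER_YEAR

    for sat, radar in [(5, 5), (10, 10), (50, 50), (100, 50),
                       (50, 100), (100, 100), (200, 200)]:
        print(f"sat {sat:3d}/yr, radar {radar:3d}/yr -> "
              f"{dual_failure_years(sat, radar):8.1f} years")
    # Matches Table 2: (100, 100) gives about 15 years, (200, 200) about 3.8.

Because the rates enter as a product, the halving and quartering pattern noted above follows immediately.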

 

Significance of Launch on Warning

The short flight time of today's ICBMs (approximately thirty minutes) and the even shorter flight time of some submarine-launched and intermediate-range ballistic missiles (less than ten minutes) have reduced warning times to virtually zero. One possible response to this threat is to move to launch on warning (LOW) or launch under attack (LUA). Consideration of such policies is motivated by fear that, without them, a surprise attack could prove crippling, for example, by a "decapitation strike."

 

"While there is general recognition that human control of the decision process is absolutely necessary, we are rapidly approaching a situation in which the 'man in the loop' is obsolete."

 

"Decapitation" is a strategy in which one nation, fearing an imminent attack by the other, strikes at the opponent's national leaders and command centers.5 The hope is to paralyze the opponent's ability to attack before he exercises that option. With each nation aware that the other might consider a decapitation strike, there is tremendous pressure to strike first. As Bracken notes in this volume: "Each nation might not want war but might feel driven to hit first rather than second. Instead of war versus peace, the decision would be seen as either striking first or striking second."

To counter decapitation and similar strategies, LOW or LUA would initiate a counterattack as soon as reliable evidence is received that a nuclear attack is under way, before the enemy missiles arrive. Such reliable evidence consists essentially of satellite-sensor indication of attack, corroborated by radar a few minutes later. This is the requirement of dual phenomenology analyzed above.

There is much speculation about, and disagreement over, whether the US follows an LOW or LUA strategy. In a recent article, Bruce Blair and Robert McNamara urged the US to disavow such a policy publicly and immediately.6 The official response has been neither to confirm nor deny the adoption of such a strategy. The USSR has warned that it might move to launch on warning in response to NATO's deployment of short-flight-time Pershing missiles.7 Table 2 shows that the expected time until failure of dual phenomenology is an uncomfortable fifteen years if the false alarm rates are one hundred per year for both satellites and radars.

 

Conclusions

Borning, Bracken, and Raushenbakh's papers document the destabilizing effect that technological escalation of the arms race has had to date. The future promises more of the same.

As stealth technology decreases the ability of radar to detect bombers and missiles, the quality of evidence required to declare that an attack is under way will have to be lowered, and the number of false alarms will increase.

The presence of Soviet missile-carrying submarines near the coast of the US and of similar short-flight-time American missiles in Europe and off the coast of the USSR, coupled with the fear that a decapitation strike would be the likely precursor to a full-scale nuclear attack, is dramatically shortening decision times and making the system increasingly unstable.

While there is general recognition that human control of the decision process is absolutely necessary, we are rapidly approaching a situation in which the "man in the loop" is obsolete. Launch on warning and launch under attack are discussed as if they were serious options.

These factors, coupled with the significant chance for overlapping false alarms or failure of dual phenomenology, have created an extremely volatile and hazardous situation.

 

 

 

 

REFERENCES

1. The Center for Defense Information, "Accidental Nuclear War: A Rising Risk?" The Defense Monitor, Vol. 15 No. 7 (1986).

2. Linn I. Sennott, "Distributions Arising in False Alarm Analysis of Defense Surveillance Systems," conference on The Risk of Accidental Nuclear War, Vancouver, May 26-30, 1986. (Conference proceedings to appear 1988.)

3. Michael Wallace, Brian Crissey, and Linn Sennott, "Accidental Nuclear War: A Risk Assessment," Journal of Peace Research, Vol. 23 No. 1 (1986), pp. 9-27.

4. D. McLane, "North American Security Rests on NORAD Mission," Defense Systems Review, January 1984.

5. John Steinbruner, "Nuclear Decapitation," Foreign Policy, Vol. 45 (Winter 1981-1982), pp. 16-18.

6. Bruce G. Blair and Robert McNamara, "Science and the Citizen," Scientific American, Vol. 255 No. 4 (October 1986), pp. 74, 76.

7. Dusko Doder, "Kremlin Defense Official Warns of Policy Shift to Quicken Nuclear Response," Washington Post, July 13, 1982, p. A-1a.

 
