Let us come back to the "threshold logic" of a detection sensor, in which the presence or absence of the target is deduced by comparing the output voltage with a preset threshold voltage. According to that logic, the probability of detection of a given target or signal is the probability that the instantaneous output voltage, or sample, is larger than the threshold whenever the target (or signal to be detected, such as a laser pulse) is present. Mathematically, the probability of detection is the integral of the "signal + noise" probability density function, p_{s+n}(v), above the threshold value:
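In symbols, with V_th denoting the threshold voltage, this reads:

\[
P_D = \int_{V_{th}}^{+\infty} p_{s+n}(v)\,\mathrm{d}v
\]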
Similarly, the probability of false alarm, P_FA, is the probability that a voltage sample is larger than the threshold value while the target, or the signal to be detected, is absent (i.e. in the presence of noise alone). As with the probability of detection, the probability of false alarm is the integral of the "noise alone" probability density function, p_n(v), above the threshold:
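With the same threshold voltage V_th:

\[
P_{FA} = \int_{V_{th}}^{+\infty} p_n(v)\,\mathrm{d}v
\]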
Probability density functions of the output voltage in the presence of the target (or of the signal to be detected) vary considerably from one application to another, because the fluctuations of "useful signals" depend on many parameters. In some cases the signal is said to be stationary, meaning that the sample-to-sample differences come mainly from the noise of the sensor, which may itself vary widely according to the mode of detection (direct or heterodyne) and the spectral bandwidth of operation. In other cases, the fluctuations may arise from the source itself, when it is highly coherent (as with single-mode lasers) and speckle is present, or from the object under illumination (changes in its orientation, diffuse versus specular surfaces, etc.).
For a given average value of the signal, one can easily understand that the actual performance of an EO sensor will be more or less degraded by signal fluctuations from sample to sample: some of the sampled signal values will be unnecessarily high, while others will be too low for acceptable detection. Among the most frequently encountered probability density functions of "useful signals" are the Gaussian, Laplace, Rayleigh and Gamma distributions.
As for the output fluctuations due to the sensor itself (coming largely from shot and Johnson noise), their statistics are essentially Gaussian. The performance analysis in this case study is limited to EO sensors in which the probability density functions are Gaussian, for the signal as well as for the noise. This hypothesis is far from representative of all cases, but it is the simplest in its results, and its principles may be extended to the other configurations, the main difference lying in the results.
This hypothesis being agreed upon, let us write down the two probability density functions, applicable either to the "signal" or to the "noise alone", i.e. in the presence or in the absence of the target (or of the useful signal):
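Under the Gaussian hypothesis, and denoting the mean output voltages by \bar{v}_n and \bar{v}_{s+n} and the rms fluctuations by \sigma_{v,n} and \sigma_{v,s+n} (this notation is assumed here; only \sigma_{v,n} appears explicitly later in the text), the two densities take the form:

\[
p_n(v) = \frac{1}{\sigma_{v,n}\sqrt{2\pi}}\,
\exp\!\left[-\frac{(v-\bar{v}_n)^2}{2\,\sigma_{v,n}^2}\right],
\qquad
p_{s+n}(v) = \frac{1}{\sigma_{v,s+n}\sqrt{2\pi}}\,
\exp\!\left[-\frac{(v-\bar{v}_{s+n})^2}{2\,\sigma_{v,s+n}^2}\right]
\]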
In these conditions, the probability of detection and the probability of false alarm are respectively given by:
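One standard way of writing these integrals of Gaussian tails is through the complementary error function; with \bar{v}_n and \bar{v}_{s+n} the mean output voltages and \sigma_{v,n} and \sigma_{v,s+n} the rms values (notation assumed here):

\[
P_D = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{V_{th}-\bar{v}_{s+n}}{\sigma_{v,s+n}\sqrt{2}}\right),
\qquad
P_{FA} = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{V_{th}-\bar{v}_n}{\sigma_{v,n}\sqrt{2}}\right)
\]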
One of the first parameters to choose when designing a detection system (be it radar or electro-optical) is its threshold value (for instance, in voltage: V_th), which must be computed from the specification on the probability of false alarm and from the noise level of the equipment. From the table below, one will realize that the probability of false alarm is a rapidly decreasing function of the ratio V_th/σ_{v,n} between the threshold value and the rms value of the "noise alone". For example, the probability of false alarm is about 10⁻³ when this ratio is equal to 3, 10⁻⁹ for a ratio of 6, and 10⁻¹² for a ratio of 7.
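These orders of magnitude can be reproduced with a few lines of Python; the zero-mean-noise assumption (so that the threshold-to-rms ratio alone fixes the PFA) is mine:

```python
import math

def pfa(ratio):
    """Probability of false alarm for zero-mean Gaussian noise,
    as a function of the ratio V_th / sigma_{v,n} (tail of the
    standard normal distribution above 'ratio')."""
    return 0.5 * math.erfc(ratio / math.sqrt(2.0))

for r in (3, 6, 7):
    # prints roughly 1e-3, 1e-9 and 1e-12 respectively
    print(f"V_th/sigma_v,n = {r}:  PFA = {pfa(r):.2e}")
```

The exact values (about 1.3 × 10⁻³, 10⁻⁹ and 1.3 × 10⁻¹²) confirm how steeply the Gaussian tail falls off with the threshold ratio.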
Up to now we have shown that what counts in the choice of the threshold level is the probability of false alarm, i.e. the probability that an output sample is larger than the threshold in the absence of the target. For most detection systems, that parameter is not mentioned at all in the customer's specifications; from the user's point of view, what matters in the specification of false alarms is not the probability of false alarm itself but rather what is called the "false alarm rate", or FAR. The FAR is the maximum number of false alarms tolerated by the user (or by the designer) per unit time, or during the operating time of the device; it represents the number, per second or per period of operation, of false detections, i.e. of voltage samples larger than the threshold while no target is present inside the field of view of the equipment. The FAR is hence one of the fundamental operating parameters of a detection system.
So the designer of a detection system must convert the customer's operational FAR specification into a probability of false alarm, which is the mathematical parameter to be introduced into the design model. As mentioned before, the probability of false alarm (PFA) is the probability that an instantaneous sample is above the threshold value while no target is present. FAR and PFA are related by the fact that the number of false alarms over some time duration is the product of the false alarm rate per individual sample (which is nothing other than the PFA) by the number of samples taken during that time:
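Written out, for an observation time T:

\[
n_{FA}(T) = P_{FA} \times n_{samples}(T)
\]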
For example, let us consider the case of a sensor operating without interruption (100% of the time): the number of samples obtained during a given period of time is the product of that duration by the sampling rate of the sensor, so that:
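With f_s denoting the sampling rate (the symbol is assumed here), the number of false alarms per unit time is then:

\[
FAR = P_{FA} \times f_s
\quad\Longrightarrow\quad
P_{FA} = \frac{FAR}{f_s}
\]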
Because he generally does not know the sampling rate of the sensor and is not directly interested in that parameter, the user of a detection system will only specify the desired FAR. Because he knows the necessary sampling rate of the sensor, defined from the signal duration and from such considerations as the Shannon sampling theorem ("the sampling frequency should be at least twice the maximum frequency of the signal to be detected"), the designer is able to deduce the PFA from the FAR.
In many applications, the output signal is not relevant 100% of the operating time of the sensor. This typically happens with pulsed laser rangefinders, where the target range is deduced from the time of flight of the laser pulses: if the maximum distance of the target, d_max, is known, it is recommended not to take the output signal into consideration for times of flight corresponding to longer ranges. For example, let us consider that the maximum target range is 15 km: this means that the flight time of the laser pulses is at most equal to:
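Counting the round trip of the pulse, with c the speed of light:

\[
t_{max} = \frac{2\,d_{max}}{c} = \frac{2 \times 15 \times 10^{3}}{3 \times 10^{8}} = 10^{-4}\ \mathrm{s} = 100\ \mu\mathrm{s}
\]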
Beyond that duration, it is useless to sample the output voltage, and it is hence advisable to stop the counter, which eliminates the risk of false alarms from targets thought to be too far away to give rise to a detectable laser echo. If the pulse repetition frequency (PRF) of the rangefinder is 10 Hz, each second of operation of the device leads to only 10⁻³ s of effective measurement. This corresponds to (T_u/T) = 10⁻³ as the fraction of time effectively dedicated to measurements. In the case of such "part-time" sensors, the relationship between FAR and PFA is the following:
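With T_u the useful (measurement) time, T the total operating time, and f_s the sampling rate (the symbol f_s is assumed here):

\[
FAR = P_{FA} \times f_s \times \frac{T_u}{T}
\quad\Longrightarrow\quad
P_{FA} = \frac{FAR}{f_s\,(T_u/T)}
\]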
By discarding the output signal whenever it is not relevant, the designer is able to set a much higher value for the probability of false alarm. As will be shown in the following paragraph, this procedure, i.e. reducing the number of samples to the minimum, leads to a reduction in the threshold value and hence to an improvement in the probability of detection, while the false alarm rate is kept compatible with the sensor specifications.
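The whole design chain described in this section can be sketched end to end in a few lines of Python. The numerical values (one tolerated false alarm per hour, a 20 MHz sampling rate, the 10⁻³ duty cycle of the rangefinder example, and a signal-to-noise ratio of 7) are illustrative assumptions, not figures from the text; zero-mean noise and equal rms values on "noise" and "signal + noise" are also assumed:

```python
from statistics import NormalDist

# Illustrative numbers (assumed, not from the text):
far  = 1.0 / 3600.0   # tolerated FAR: one false alarm per hour [1/s]
f_s  = 20e6           # sampling rate of the receiver [Hz]
duty = 1e-3           # T_u / T, fraction of time actually sampled
snr  = 7.0            # (mean signal voltage) / (noise rms)

n = NormalDist()      # standard Gaussian distribution

# Step 1: FAR -> PFA for a "part-time" sensor
pfa = far / (f_s * duty)

# Step 2: PFA -> threshold, in units of the noise rms
# (zero-mean Gaussian noise assumed)
ratio = n.inv_cdf(1.0 - pfa)

# Step 3: threshold -> probability of detection
# (Gaussian "signal + noise" with the same rms assumed)
pd = 1.0 - n.cdf(ratio - snr)

print(f"PFA            = {pfa:.2e}")
print(f"V_th/sigma_v,n = {ratio:.2f}")
print(f"PD             = {pd:.3f}")
```

Lowering the duty cycle raises the tolerable PFA, which lowers the required threshold ratio and hence raises PD, exactly the mechanism the paragraph above describes.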