Optimal Quantiser Performance with a Small Dither

A quantiser is a non-smooth function, and no inverse function exists that can be applied to correct for the error it introduces. Applying a dither signal to a quantiser and averaging the output produces a smooth image for which an inverse function does exist. This article describes methods for minimising the error after inverse compensation. We show that there is an optimal dither variance that minimises the error after inversion, and simple rules for choosing this optimal variance are presented. The error after inversion can be made arbitrarily small by increasing the averaging length, which can be done by oversampling the signal by the same factor as the number of averages. Quantisation of a dither signal with a continuous probability distribution produces a discrete probability mass function. We discuss a method for recovering an unknown continuous probability distribution from the empirical discrete probability mass function of the quantised dither signal. This enables inverse compensation in systems where exact control of the dither signal is not possible, since inverse compensation requires knowledge of the continuous probability distribution of the dither signal and the step-size of the quantiser.


Introduction
Quantisation is the process of mapping a large set of values to a smaller set of values, thereby discarding some values and introducing a signal-dependent error, as illustrated in Fig. 1. However, by applying a dithering signal as illustrated in Fig. 2, a bijective correspondence between the expected value of the quantised signal and the reference signal can be obtained; that is, E{y} = N (x), where y = y(t) is the quantised version of x = x(t), as illustrated in Fig. 4 (see also Fig. 5). By applying averaging, a resolution below the step-size of the dithered quantiser can then be achieved [1]. Improving the resolution in this manner is particularly useful when a signal is sampled using coarse quantisation, or in applications where a large dynamic range is required.
In general, the function N (x) is non-linear and determined by the distribution of the dither signal and the step-size of the quantiser. If this functional relationship between the reference and the expected value of the quantised signal is known, the error in the expected value introduced by N (x), compared to x, can be compensated for by using the inverse of the function [2].
When the probability distribution of the dither signal fulfils certain criteria, the moments of the quantised signal can be made independent of the reference signal [3,4,5,6,7,8]. In terms of the first moment, the expected value, the quantiser can then be perfectly linearised by applying a uniformly distributed dither signal with a range of exactly ±∆/2, where ∆ denotes the step-size of the quantiser [7].
However, dither distributions that render the moments of the quantised signal independent of the reference, e.g. a uniform distribution, are infeasible to produce in a physical system, as they require a high degree of precision in the dither signal reproduction. This is because any dynamics in the signal path, by design or due to parasitic effects, will cause the reproduced dither signal to tend towards a normal distribution, an effect explained by the central limit theorem [9]. If the realisation of the distribution is not exact, the linearising effect is significantly reduced [10]. In contrast, since introducing dynamics in a signal path can be done using e.g. linear filters, a normally distributed dither signal is almost trivial to produce.
In the normally distributed case, the quantiser can only be perfectly linearised in terms of the expected value when the standard deviation, or variance, of the dither is infinite, as shown in Sec. 3.4. When applying averaging, and considering the combined error introduced by N (x) (a deterministic effect) as well as the variance of the dither (a stochastic effect), the minimum error of the dithered and averaged quantiser output signal is obtained when the standard deviation is approximately ∆/2 [10,11]. This error is illustrated in Fig. 5 using a non-optimal dither variance to emphasise the effect of N (x).
As noted above, if the dither distribution parameters and the quantiser step-size are known, then applying the inverse of the non-linear function, N −1 (ŷ), to the output of the dithered and averaged quantiser reduces the overall error [2]. This is illustrated in Fig. 6.
By using the inverse, it may intuitively be expected that the standard deviation of the dither can be reduced significantly below ∆/2, as N (x) becomes bijective for any standard deviation greater than zero. As the results in this paper show, this is not the case: the derivative of N (x) approaches zero at the midpoints between the steps when the dither variance is reduced, implying that the derivative of N −1 (ŷ) at these points approaches infinity. A large derivative, or sensitivity, of N −1 (ŷ) amplifies noise, and this noise contributes more to the overall error than the removal of the systematic error due to N (x) can reduce it. An optimal dither variance that minimises the overall error in the inverted signal therefore exists.
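The collapse of the derivative of N (x) at the midpoints between steps, and hence the blow-up of the inverse sensitivity, can be checked numerically. The following is a minimal sketch, assuming a unit step-size ∆ = 1, a mid-tread rounding quantiser and zero-mean Gaussian dither; these numerical choices are illustrative, not the paper's exact setup.

```python
import math

DELTA = 1.0  # assumed step-size (illustrative)

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def N(x, sigma, K=200):
    """E{Q(x + d)} for zero-mean Gaussian dither with std sigma,
    summing the smoothed steps at thresholds T_k = (k + 1/2)*DELTA."""
    return DELTA * sum(
        Phi((x - (k + 0.5) * DELTA) / sigma)
        + Phi((x + (k + 0.5) * DELTA) / sigma) - 1.0
        for k in range(K))

def dN(x, sigma, h=1e-6):
    """Central-difference estimate of dN/dx."""
    return (N(x + h, sigma) - N(x - h, sigma)) / (2.0 * h)

# At the midpoint x = 0 between thresholds the slope collapses for a
# small dither, so the inverse's sensitivity 1/N'(x) explodes there.
print(dN(0.0, 0.05 * DELTA))  # practically zero
print(dN(0.0, 0.50 * DELTA))  # close to one
```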

Contributions
The main contributions are an analysis of the overall error in the signal when applying N −1 (ŷ), and a method for identifying the parameters of the probability distribution using the quantised dither signal. The error analysis makes it possible to determine the performance limits when applying the inverse, and the identification method enables the application of the inverse to systems where a dither with a known type of probability distribution can be introduced, but where the exact distribution parameters are unknown.
The functional relationship between the reference signal and the expected value of the quantised signal, as well as the expression for the variance of the output signal, is developed directly using convolution integrals rather than via characteristic functions as in [2,10,11]. This approach makes it simpler to prove convergence to a linear function, compared to the approaches in [3,4,5,7,8].
It is demonstrated that using N −1 (ŷ) in combination with sufficient averaging makes it possible to reduce the error compared to the case without an inverse. It is also demonstrated that there exists an optimal dither variance, and that the optimal dither depends on the assumptions made about the reference signal.

Notation
The set of natural numbers {0, 1, 2, . . .} is denoted N. A definition is denoted by ≜, and ∗ indicates the convolution product. The Laplace operator is denoted L and the Z-transform Z. Functions of time t (or discrete time n) are usually denoted by lower case, e.g. g(t) (or g[n]), and the Laplace transform (or Z-transform) by upper case, e.g. G(s) = L{g}(s) (or G[z] = Z{g}[z]). The standard notation L p (R) and l p indicates the L p -space and l p -space, p = 1, 2, . . ., ∞, and ‖g‖ p the p-norm of g(t) or g[n]. For a stochastic process d(t, ω) the dependency on the sample variable ω is omitted; µ = E{d(t)} denotes the mean value, C d (t, s) ≜ E{d(t)d(s)} − E{d(t)}E{d(s)} the auto-covariance, and C d (t) ≜ C d (t, t) the variance. The indicator function for the set S is denoted χ S (z), and the floor operator by ⌊•⌋.

The uniform quantiser
A uniform quantiser can represent values separated by the quantisation step-size ∆ > 0, and can be defined using the truncation operator trunc(u). The quantised output is denoted y = y(t). The uniform quantiser is illustrated in Fig. 1.
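As a concrete sketch, the rounding behaviour of a uniform mid-tread quantiser can be built directly from truncation, with decision thresholds at (k + 1/2)∆; the default step-size below is an assumed value for illustration.

```python
import math

def quantise(u, delta=1.0):
    """Uniform mid-tread quantiser built from the truncation operator;
    decision thresholds sit at T_k = (k + 1/2)*delta."""
    return delta * math.trunc(u / delta + math.copysign(0.5, u))

print(quantise(0.49))   # 0.0
print(quantise(0.51))   # 1.0
print(quantise(-1.2))   # -1.0
```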

Alternative formulation
The uniform quantiser can equivalently be expressed as a sum in which each (odd) step-function in the quantiser, as shown in Fig. 3, is described by the step functions n k (u),

Figure 4: A quantiser is not bijective, as it maps the continuum to a discrete set, but the smoothed quantiser (17) is, thus giving a one-to-one correspondence between reference and expected value. Here a zero-mean, normally distributed dither (with σ d = ∆/6) has been used.
with the threshold of the step given by T k = (k + 1/2)∆ in (5), and H(u) ≜ χ [0,∞) (u) the Heaviside step-function. Note that the sum in (3) converges for every u, since the summands become zero when −T k ≤ u ≤ T k , or equivalently when k ≥ |u|/∆ − 1/2.
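The equivalence between the quantiser and the sum of odd step functions can be checked numerically; the truncation of the sum at K terms mirrors the convergence remark above (∆ = 1 is an assumed value).

```python
def H(u):
    """Heaviside step, H(u) = 1 for u >= 0 (indicator of [0, inf))."""
    return 1.0 if u >= 0.0 else 0.0

def quantise_steps(u, delta=1.0, K=100):
    """Quantiser as a sum of odd step functions n_k with thresholds
    T_k = (k + 1/2)*delta; summands vanish once T_k > |u|."""
    return sum(delta * (H(u - (k + 0.5) * delta) - H(-u - (k + 0.5) * delta))
               for k in range(K))

print(quantise_steps(2.7))   # 3.0
print(quantise_steps(-0.6))  # -1.0
print(quantise_steps(0.2))   # 0.0
```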

Smoothing the quantiser
The input signal when a stochastic dithering signal is present is u(t) = x(t) + d(t), where x(t) is the deterministic reference signal that should be recovered from a quantised and averaged time-series. It is assumed that the stochastic dither signal d(t) is an identically distributed stochastic process with the strictly white noise property, i.e. d(t i ) and d(t j ) are independent for each pair t i ≠ t j [12]. In the sequel, the input u is viewed as a function of x and t, viz. u = u(x, t), when the time dependency of x is irrelevant. If this is not the case, we write u(t) = u(x(t), t).
The dither signal d(t) is either already present naturally, due to sampling a noisy measurement, or it is added artificially. In either case, the effect is that by averaging the quantised signal y(x, t) = n(u(x, t)), the effect of the discontinuous quantiser can be smoothed, imbuing the quantiser with continuity, as illustrated in Fig. 5. Apart from this smoothing effect, however, the dither signal is otherwise unwanted in the recovered signal.
Let F d (z) be the cumulative distribution function (CDF) and f d (z) the probability density function (PDF) corresponding to d(t), and note that neither of these functions depends on time t by the identically-distributed property of d(t). Moreover, let δ(•) denote the Dirac delta function and x ∈ R. Then f x (z, x) = δ(z − x) is the PDF corresponding to x, which depends on time t if x = x(t) does; f x (z, t) ≜ f x (z, x(t)). The signal u(x, t) will then have the PDF f u (z, x) = (f x ∗ f d )(z) = f d (z − x). The application of the dither signal makes it possible to define an averaged and smoothed step-function related to n k (u) and the stochastic dither signal d(t).
Note that, in general, when n k (z) is an arbitrary function of bounded variation, the smoothed step N k (z) is Lipschitz continuous with constant L N k , where T V (n k ) denotes the total variation of n k (u). In the case considered in this article T V (n k ) = 2∆; hence the dither PDF and the quantisation step-size determine an upper bound on the smoothing effect on the quantiser. Moreover, using (4), the smoothed quantiser can be expressed as in (12). It will be assumed throughout that the CDF F d is such that this sum is finite for each z. The summand k in (12a) corresponds to the two summands n = k and n = −(k + 1) in (12b).
In the special case where f d (z) is symmetric around zero, (11) simplifies; in particular, sign(N k (z)) = sign(z).
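A small Monte Carlo sketch (Gaussian dither, with an assumed ∆ = 1 and σ d = ∆/6 as in the figures) confirms that the expectation of the quantised, dithered signal matches the smoothed quantiser built from the dither CDF.

```python
import math
import random

random.seed(1)
DELTA = 1.0

def Phi(z):
    """Standard normal CDF, used as the dither CDF F_d(z/sigma)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def quantise(u):
    return DELTA * math.trunc(u / DELTA + math.copysign(0.5, u))

def N(x, sigma, K=200):
    """Smoothed quantiser: sum of smoothed steps N_k for a symmetric,
    zero-mean Gaussian dither."""
    return DELTA * sum(
        Phi((x - (k + 0.5) * DELTA) / sigma)
        + Phi((x + (k + 0.5) * DELTA) / sigma) - 1.0
        for k in range(K))

sigma = DELTA / 6.0
x = 0.3
n = 200_000
mc = sum(quantise(x + random.gauss(0.0, sigma)) for _ in range(n)) / n
print(abs(mc - N(x, sigma)) < 0.01)  # True: E{y} matches N(x)
```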

First and Second Order Moments
Consider the strictly white noise output for each step, y k (t), and the (strictly white noise) quantised output signal y(t), with y k (t) ≜ y k (x(t), t) and y(t) ≜ y(x(t), t) if x = x(t). Using (8), the expected value for each step is given in (16), and the expected value of the quantiser output is therefore given by (17). With (17) and x = x(t), this leads to an output autocovariance given by (18), where the first case in (18b) follows from the strictly white noise property of y(t) and the second case from (17).
It is remarked that the right-hand sides of (16) and (17) depend on time only when x = x(t), implying the same for the left-hand sides. That is, the time evolution of the stochastic dither does not affect the expected value of the output. This is in general not the case for the autocovariance (18). However, as shown below, the variance v = v(t) can be viewed as a function of x when the time dependency of x(t) is irrelevant.
Consider the zero-mean, strictly white noise error signal ε(t) having, with x = x(t), auto-covariance C ε (t, s) = C y (t, s). In order to find the time-varying variance of the error signal ε(t) ≜ ε(x(t), t), the output of the terms (4) in the summation (3) is considered individually as correlated variables. The variance and covariance of the terms can then be found individually and summed using the formula (20) to find the variance of the error.
For this, note first that the Heaviside step-function has the properties (22) and, more generally, (23) if a ≤ b. The variance for y k (t) can then be found as in (24), using (22) to obtain (24a) and (11b) to obtain (24b). For k < l, the covariance between terms can be found as in (25), using (23). Note that when the time dependency of x(t) is irrelevant, then, from (24) and (25), the variance v may be viewed as a function of x, viz. v = v(x), with the special case v(t) = v(x(t)). For this reason, (20) is generalised to (26), with (20) obtained from (26) by letting x = x(t).
An example of the variance v(x) and its dependence on x is shown in Fig. 7.
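The x-dependence of v(x), and in particular its peaks at the thresholds, can also be checked by simulation; the step-size and dither level below are assumed values for illustration.

```python
import math
import random

random.seed(2)
DELTA = 1.0

def quantise(u):
    return DELTA * math.trunc(u / DELTA + math.copysign(0.5, u))

def mc_variance(x, sigma, n=100_000):
    """Monte Carlo estimate of v(x) = Var{Q(x + d)}, Gaussian dither."""
    ys = [quantise(x + random.gauss(0.0, sigma)) for _ in range(n)]
    m = sum(ys) / n
    return sum((y - m) ** 2 for y in ys) / n

sigma = DELTA / 6.0
v_mid = mc_variance(0.0, sigma)          # midpoint between thresholds
v_thr = mc_variance(0.5 * DELTA, sigma)  # at the threshold T_0
print(v_thr > v_mid)    # True: the variance peaks at the thresholds
print(round(v_thr, 2))  # ~0.25: near coin-flip between adjacent levels
```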

Convergence to a linear function
We show that (d/dz)N (z) is constant, and therefore that N (z) is affine linear, when the characteristic function ϕ d (v) of the PDF f d (z) has a specific property (see (31)).
From (12b) we obtain the derivative of N (z) as a sum of shifted dither PDFs, which by the Poisson summation formula [9] can be rewritten as a sum involving the characteristic function ϕ d (v). It can then be seen that if (31) holds, then the derivative of N (z) is constant, and hence N (z) is affine linear; that is, when (31) is fulfilled, the quantisation error (32) is constant. Note that the quantisation error (32) is in terms of the reference x(t), whereas the output error (19), with x = x(t), is in terms of the output y(t). If N (x(t)) intersects the origin, N (0) = 0. The above derivation leads to the same well-known result as found in [3,4,5,7,8].
Below are two examples of distributions that can fulfil the criterion (31) and, in principle, linearise the quantiser and remove the quantisation error. As will be seen, neither has practical use in measurement systems.

Application of a uniformly distributed dither
The simplest distribution that can fulfil (31) is the uniform distribution [7,8], whose characteristic function is a sinc-type function that vanishes at the required points. If the dither signal has a uniform distribution, the dither signal reconstruction must be exact in order to linearise the quantiser. Engineering a device that can reproduce a physical white noise signal with a uniform distribution is in general infeasible due to the precision requirements. Hence, the uniformly distributed case has little practical use in most physical measurement systems.
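Although physically infeasible to reproduce exactly, the ideal uniform case is easy to simulate, and the simulation illustrates the perfect linearisation of the expected value (∆ = 1 is an assumed value).

```python
import math
import random

random.seed(3)
DELTA = 1.0

def quantise(u):
    return DELTA * math.trunc(u / DELTA + math.copysign(0.5, u))

def mean_output(x, n=400_000):
    """Average quantiser output, uniform dither on (-DELTA/2, DELTA/2)."""
    return sum(quantise(x + random.uniform(-DELTA / 2.0, DELTA / 2.0))
               for _ in range(n)) / n

# With ideal +/- DELTA/2 uniform dither, E{y} equals the reference x
# for any x: the quantiser is linearised in the expected value.
errs = [abs(mean_output(x) - x) for x in (0.123, 0.5, 0.987)]
print([round(e, 2) for e in errs])  # [0.0, 0.0, 0.0]
```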

Application of a normally distributed dither
Producing a normally distributed signal is comparatively trivial compared to producing a uniformly distributed white noise signal. If the process d is normally distributed with mean µ d and standard deviation σ d , its characteristic function is a Gaussian, and N (z) will therefore approach an affine linear function as σ d → ∞; i.e. this distribution only fulfils (31) for an infinite standard deviation. An infinite variance is of course impractical, and in general, dithering with a large variance has limited practical application.
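The convergence of N (z) to an affine function for a wide Gaussian dither, and the clear residual non-linearity for a narrow one, can be quantified by the maximum deviation |N (x) − x| over one step; this is a sketch with an assumed ∆ = 1.

```python
import math

DELTA = 1.0

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def N(x, sigma, K=200):
    """Effective non-linearity for zero-mean Gaussian dither."""
    return DELTA * sum(
        Phi((x - (k + 0.5) * DELTA) / sigma)
        + Phi((x + (k + 0.5) * DELTA) / sigma) - 1.0
        for k in range(K))

def max_dev(sigma):
    """Max |N(x) - x| on a grid over one step: residual non-linearity."""
    xs = [0.01 * i * DELTA for i in range(101)]
    return max(abs(N(x, sigma) - x) for x in xs)

print(max_dev(DELTA / 6.0) > 0.1)   # True: visibly non-linear
print(max_dev(2.0 * DELTA) < 1e-9)  # True: essentially affine
```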

Sample-averaging
Considering (18b) and (19), the output y(x, t) can be seen to be a linear combination of the effective non-linearity N (x) and the white noise error term ε(x, t), that is, y(x, t) = N (x) + ε(x, t). The expression E{y(x, t)} = N (x) represents the expected value in terms of an average over the sample space. We can in principle find this average as follows.
For i = 1, 2, . . ., M, let d i (t) denote a stochastic dither signal, and assume that d 1 (t), d 2 (t), . . ., d M (t) are independent and identically distributed (i.i.d.) for each fixed time t. As above we construct, for each i, the (strictly white noise) quantised output signal y i (x, t). Hence y(x, t), y 1 (x, t), . . ., y M (x, t) are i.i.d., and the sample average ⟨y(x, t)⟩ M converges almost surely to E{y(x, t)} = N (x) by the law of large numbers. By direct calculation, the variance of ⟨y(x, t)⟩ M is v(x)/M, which converges to zero, implying that ⟨y(x, t)⟩ M also converges to E{y(x, t)} = N (x) in the mean square sense, with the time-dependent case x = x(t) as a special case.
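The 1/M variance reduction of the sample average can be verified empirically; the channel count, dither level and reference value below are illustrative assumptions.

```python
import math
import random

random.seed(4)
DELTA, SIGMA, X = 1.0, 1.0 / 6.0, 0.3

def quantise(u):
    return DELTA * math.trunc(u / DELTA + math.copysign(0.5, u))

def sample_average(M):
    """Average of M i.i.d. dithered channels at a single time instant."""
    return sum(quantise(X + random.gauss(0.0, SIGMA)) for _ in range(M)) / M

def empirical_var(M, trials=2000):
    vals = [sample_average(M) for _ in range(trials)]
    m = sum(vals) / trials
    return sum((v - m) ** 2 for v in vals) / trials

# A tenfold increase in M should shrink the variance about tenfold.
ratio = empirical_var(10) / empirical_var(100)
print(7.0 < ratio < 14.0)  # True (ratio close to 10)
```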
The sample averaging is often not implementable in applications since, in general, a large value of M is required (that is, a large number of physical channels). A common configuration in measurement systems is a single channel, and time-averaging may therefore be the only option for producing an average value.

Time-averaging
Time-averaging the output of the quantiser is in practice done using discrete time-samples of y(t) at the instances t n = τ n, where τ is the sampling-time. The samples are then averaged using a linear time-invariant (LTI) low-pass filter. The discrete-time output is denoted y[n] ≜ y(t n ), and the LTI-filter impulse response is denoted g[n]. Denoting the time-averaged output ȳ[n] ≜ (g ∗ y)[n], the filter H(z) = G(z) − 1 is used to describe the error introduced by the time-averaging operation.
Since the error signal (19) is not stationary, common system norms [14] do not apply. However, an LTI-filter will still reduce the variance of the error signal ε(t), as shown next. First, the filtered error signal is defined as ε̄[n] = (g ∗ ε)[n], where g[n] is the output filter and ε[n] the error signal. The variance of the filtered error signal is then given by (47), where (47b) follows from the discrete-time version of [12, Theorem 9-3]. Hence, if v ∈ l 1 and g 2 ∈ l 2 , then (48) holds, where (48b) follows from Young's convolution theorem for sequences. By (48), the variance of the filtered error signal is attenuated by the squared energy of the impulse response, ‖g‖ 2 2 .

Moving average filter
Applying a moving average filter gives an impulse response of M equal taps. The variance of the filtered error signal then follows from (47), and it is attenuated according to (48). Note that a moving average filter in the z-domain is a uniformly weighted sum of delay elements, which can be simplified using the geometric series sum formula to yield (51). By setting z = e^{i2πf}, the frequency response of G(z) can be found; the magnitude response is shown in Fig. 8. The frequency response can be used to compensate for some of the effects the moving average filter has on the reference signal x[n], as explained in Sec. 5.2.
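The geometric-series simplification and the resulting sinc-like magnitude response, with nulls at multiples of 1/M, can be checked directly; M = 16 is an assumed length.

```python
import cmath
import math

M = 16  # moving-average length (illustrative)

def G_direct(f):
    """Frequency response of the M-tap moving average, summed directly."""
    z = cmath.exp(2j * math.pi * f)
    return sum(z ** (-n) for n in range(M)) / M

def G_closed(f):
    """Closed form via the geometric series: (1 - z^-M) / (M (1 - z^-1))."""
    z = cmath.exp(2j * math.pi * f)
    if abs(z - 1.0) < 1e-12:
        return 1.0 + 0j  # limit at z = 1 (DC gain is unity)
    return (1 - z ** (-M)) / (M * (1 - z ** (-1)))

agree = all(abs(G_direct(f) - G_closed(f)) < 1e-9
            for f in (0.0, 0.03, 1.0 / M, 0.25))
print(agree)                             # True: the two forms coincide
print(round(abs(G_closed(1.0 / M)), 9))  # 0.0: first spectral null
```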

Accumulate-and-dump filter
A large number of samples is desirable in order to reduce the variance due to the dither signal. However, in terms of implementability, it may be necessary to decimate, or downsample, the output of the moving average filter in order to reduce the sampling rate. If the moving average filter with M samples is subsequently downsampled by the same factor M, it is known as an accumulate-and-dump filter. This operation is attractive due to its simplicity. The effect of the decimation can be analysed by considering the interpolation formula.
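A minimal sketch of the accumulate-and-dump operation: an M-sample moving average followed by decimation by the same factor M reduces to averaging disjoint blocks.

```python
def accumulate_and_dump(samples, M):
    """Moving average of length M followed by decimation by M,
    i.e. the mean of each disjoint block of M samples (tail dropped)."""
    return [sum(samples[i:i + M]) / M
            for i in range(0, len(samples) - M + 1, M)]

print(accumulate_and_dump([1, 2, 3, 4, 5, 6, 7, 8], 4))  # [2.5, 6.5]
```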

Variance for Small ∆
If it can be assumed that ∆ is small, in the sense that x(t) varies approximately linearly over the quantisation step-size, the variance v(x) can be approximated by an average [2,11]; the limiting case being that x varies linearly across one step.
In the sample-averaged case, the average variance v̄ is defined as the mean of v(x) over one quantisation step, as in (53), and the variance C ε M (x) from (43) can be approximated by v̄/M using (40).
In the time-averaged case, the average variance is defined similarly, and, assuming a moving average filter, the variance of ε̄[n] then follows from (49). Note that if ∆/M in (55) is interpreted as the sampling-time, then v̄ ≈ v̄ d for sufficiently small sampling-time (or, equivalently, for sufficiently large M). An example of the average variance is shown in Fig. 7, where it is compared to the x-dependent case. By assuming that the variance is constant, the process describing the error becomes stationary, but information about the worst-case error variance is neglected.

Inverting the Effective Non-linearity
Consider again the output of the quantiser (37) with x = x(t). If there were no error signal, ε(t) = 0, then the reference signal x(t) could be recovered directly by applying the inverse of N (z), as in (57). Clearly (57) cannot hold exactly, since we always have ε(t) ≠ 0 due to the presence of the dither signal. However, we may obtain an estimate of x(t) as follows. Let ŷ denote an average of the measured output (e.g. ⟨y(x(t), t)⟩ M or ȳ[n]). If the dither d is given, then N (z) is known analytically, and the estimate x̂ ≜ N −1 (ŷ) can be computed numerically by solving F (x) = 0, where F is defined in (58). One such method is the bisection method [15], which is guaranteed to converge if F (z) is continuous on a domain [z l , z h ] and F (z l ) and F (z h ) have opposite signs. Since |N (z) − ŷ| ≤ ∆, it is always possible to choose a constant z 0 such that z l = ŷ − z 0 and z h = ŷ + z 0 fulfil this bracketing condition. See Fig. 6 for an example of N −1 (y(t)), solved using the bisection method.
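The bisection inversion can be sketched as follows, assuming Gaussian dither with known σ d and ∆ (here ∆ = 1 and σ d = ∆/6, both illustrative); the bracket around ŷ follows the sign-change argument above.

```python
import math

DELTA, SIGMA = 1.0, 1.0 / 6.0

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def N(x, K=100):
    """Effective non-linearity for zero-mean Gaussian dither."""
    return DELTA * sum(
        Phi((x - (k + 0.5) * DELTA) / SIGMA)
        + Phi((x + (k + 0.5) * DELTA) / SIGMA) - 1.0
        for k in range(K))

def invert(y_hat, tol=1e-10):
    """Solve N(x) = y_hat by bisection on [y_hat - DELTA, y_hat + DELTA],
    a bracket wide enough because N stays within DELTA of its argument."""
    lo, hi = y_hat - DELTA, y_hat + DELTA
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if N(mid) < y_hat:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x_true = 0.31
print(abs(invert(N(x_true)) - x_true) < 1e-8)  # True: inverse recovers x
```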
It is clearly of interest to obtain an estimate of the variance of x̂. This can be done using a first-order Taylor series of N −1 (w) at N (x(t)), evaluated along the quantised output signal y(t), as in (59), where we have defined the sensitivity α in (60). Using (59) directly, or the delta method [16], an approximation to the variance of N −1 (y(t)) can be found as in (61), with x(t) replaced by x when time dependence is irrelevant.

Inverting the sample-average
Sample-averaging reduces the variance of the quantised signal, and hence also the variance of the signal obtained by applying the inverse N −1 (z). Using (59) and (42), we get (62), where the remaining term denotes the stochastic error due to the dithering signal; see Fig. 6 for an example of N −1 (⟨y(t)⟩ M ), found by solving (58). Using (43) and (40), the variance of (62) is given by (64), with x(t) replaced by x when time dependence is irrelevant. Hence the sample-average reduces the variance (61) by approximately a factor of 1/M.

Inverting the time-average
Time-averaging using an LTI low-pass filter G(z), such as (51), reduces the variance of the stochastic error signal ε[n], but introduces a frequency-dependent error in the deterministic reference signal x[n]. At low frequencies the error in x[n] is small, but it increases toward the Nyquist frequency. This can be seen from the high-pass characteristic of the filter H(z) = G(z) − 1 in (45b). From (59) and (19), the inverse can be written as a sum of two error terms, where e f [n] denotes the deterministic error due to filtering, and ε t [n] denotes the stochastic error due to the dithering signal. Under the assumptions of Sec. 4.3, the variance of ε t [n] will be reduced by a factor of 1/M. Note that if x(t) is bandwidth limited, it is straightforward to compensate for the error e f [n] by designing a finite impulse-response (FIR) compensating pre-filter W (z) which minimises the error in the passband for x(t) and elsewhere, as demonstrated in Fig. 8 for the moving average filter. Given (68), if the passband is sufficiently small, the energy due to the difference ‖G(z) − G(z)W (z)‖ 2 will be small. Hence, the effect of H(z) can be ignored (e.g. it was neglected for the results obtained in Fig. 11).

Optimal dither variance when inverting
The variance (64) depends on the sensitivity α(x) from (60) and the variance of the error v(x) from (26). Consider (27) for a normal distribution. The variance v(x) will then have maxima at the thresholds (5), that is at x = T k , as seen in Fig. 7. Moreover, the sensitivity α(x) will have maxima at x = k∆, as demonstrated in Fig. 10, with the maximum sensitivity tending to infinity as σ d goes to zero (and α(x) tending to a constant as σ d goes to ∞). It then follows that the variance C N −1 (⟨y⟩ M ) (x) will be dominated by α(x) when σ d /∆ ≪ 1, and by v(x) when σ d /∆ grows larger. As shown below, it turns out that there is an optimal σ d , balancing the sensitivity and the magnitude of σ d . Two metrics for C N −1 (⟨y⟩ M ) (x) can be devised, suitable for different conditions on the reference signal: an averaged case, where it is assumed that x = x(t) varies linearly over the quantisation step-size (small ∆, as suggested in [2,11]), and a worst case, where the maximum variance is considered.

Small ∆
As in Sec. 4.3, it is assumed that ∆ is small, in the sense that x(t) varies approximately linearly over the quantisation step-size. Then the variance C N −1 (⟨y⟩ M ) (x) can be approximated by an average variance v̄ α , where the Cauchy–Schwarz inequality and (53) have been used to obtain (71). Note that a C = a C (M, σ d ), so for each M an approximate smallest upper bound a C (M, σ * d ) on the variance C N −1 (⟨y⟩ M ) (x) can be obtained through (71) by varying σ d , as in Fig. 10. This is shown in Fig. 11 for a scaled version of a C (M, σ d /∆).
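The existence of an interior optimum can also be seen in a direct simulation: sweeping σ d for a fixed averaging length M, the RMSE after inversion is large both for very small dither (inverse sensitivity) and for very large dither (stochastic error). All numerical choices below (M = 64, the three σ d values, the reference prior) are illustrative assumptions, not the paper's exact metrics.

```python
import math
import random

random.seed(5)
DELTA = 1.0

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def quantise(u):
    return DELTA * math.trunc(u / DELTA + math.copysign(0.5, u))

def N(x, sigma, K=30):
    return DELTA * sum(
        Phi((x - (k + 0.5) * DELTA) / sigma)
        + Phi((x + (k + 0.5) * DELTA) / sigma) - 1.0
        for k in range(K))

def invert(y_hat, sigma, tol=1e-6):
    """Bisection on a bracket of width 2*DELTA around y_hat."""
    lo, hi = y_hat - DELTA, y_hat + DELTA
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if N(mid, sigma) < y_hat else (lo, mid)
    return 0.5 * (lo + hi)

def rmse(sigma, M=64, trials=200):
    """RMSE of the inverted M-sample average over random references."""
    err2 = 0.0
    for _ in range(trials):
        x = random.uniform(-0.5, 0.5)  # reference within one step
        avg = sum(quantise(x + random.gauss(0.0, sigma))
                  for _ in range(M)) / M
        err2 += (invert(avg, sigma) - x) ** 2
    return math.sqrt(err2 / trials)

errs = {s: rmse(s) for s in (0.05, 0.3, 1.5)}
print(min(errs, key=errs.get))  # 0.3: the intermediate dither wins
```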
It is also informative to consider a mean square error (MSE) estimate of the quantisation error e q (x(t)) ≜ y(t) − x(t), which is the case without inversion [11]. Consider the sample average (72), which follows from (42) and (32). The second moment of the sample average is therefore given by (73). Setting x = x(t) and averaging yields an MSE (74), where the MSE of the deterministic component, ē x , is in addition to the average variance v̄ representing the stochastic part of e q . The MSE can be computed numerically, as in Fig. 11, or using the expressions presented in [11].

Worst case
If no assumptions can be made about x(t), the worst-case variance is a more reasonable estimate of C N −1 (⟨y⟩ M ) (x), defining m C as the maximum of the variance over x. Note that m C = m C (M, σ d ), similarly to a C ; a scaled version is shown in Fig. 11.

Optimal variances
Upper bounds for the standard deviation of C N −1 (⟨y⟩ M ) (x) from (71) and (76), and the root mean squared error (RMSE) using (74), are shown in Fig. 11 for a normally distributed dither signal, for different standard deviations σ d and three values of M. Both axes have been normalised relative to the step-size ∆. The numerical values of the optimal cases are presented in Tab. 1.
Considering (74), it can be seen that increasing M decreases the error due to the stochastic dither, v̄. The contribution to the overall error from the deterministic component ē x therefore becomes more significant, relative to the stochastic component, when using a large number of averages. This is the reason the minimal RMSE depends on M in addition to the dither variance σ d . When applying the inverse N −1 (z), the deterministic component ē x is removed.

Probability mass function of the quantised dither
Consider the case when the reference signal is a constant, x(t) = µ x ; the PDF of the input is then a shifted dither PDF. In a similar manner as described in [7], the probability that the input signal u(t) = µ x + d(t) truncates to a given integer k ∈ Z can be found by considering the probability mass of u(t) within the corresponding quantisation interval of width ∆, with µ u = µ x + µ d , yielding the expression f u (k; θ) in (78).

Identification of the dither probability density function
Assuming that the reference signal can be held constant and that the dither signal has a PDF of a known type, the parameters θ ∈ R p of the input PDF f u (k; θ) can be determined from experiment and used to construct an accurate inverse N −1 (z). By sampling a quantised signal, say {s 1 , s 2 , . . ., s n }, while the reference is held constant, an empirical probability mass function f̂ n (k) can be produced. This can then be used to find the parameters θ by solving the parameter identification problem min θ ‖f (k; θ) − f̂ n (k)‖ (82) using the analytic expression (78). In the case of a normal distribution, robust estimates for µ u and σ d are found by solving a problem of the form min µ u ,σ d ‖√f (k; µ u , σ d ) − √f̂ n (k)‖ 2 , where the problem has been scaled by the square root, as the masses associated with values away from k tend to be very small. In this case it is only possible to identify µ u , which means that the bias µ x cannot be compensated for using N −1 (z); a constant bias is therefore introduced when applying N −1 (z).
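A sketch of the identification step for the Gaussian case: build the empirical PMF of the quantised, constant-reference input, then fit (µ u , σ d ) by the square-root-scaled least squares described above. The true parameter values, the crude grid search (standing in for a proper least-squares solver) and the level range are all illustrative assumptions.

```python
import math
import random

random.seed(6)
DELTA = 1.0
MU_TRUE, SIGMA_TRUE = 0.37, 0.45  # hypothetical "unknown" parameters

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pmf(k, mu, sigma):
    """P{u quantises to level k} for u = mu + sigma * N(0, 1)."""
    return (Phi(((k + 0.5) * DELTA - mu) / sigma)
            - Phi(((k - 0.5) * DELTA - mu) / sigma))

# Empirical PMF from n quantised samples of the constant-reference input.
n = 100_000
counts = {}
for _ in range(n):
    k = round(random.gauss(MU_TRUE, SIGMA_TRUE) / DELTA)  # level index
    counts[k] = counts.get(k, 0) + 1
levels = range(-5, 7)
emp = {k: counts.get(k, 0) / n for k in levels}

def loss(mu, sigma):
    """Square-root-scaled least squares: far-off masses are tiny."""
    return sum((math.sqrt(pmf(k, mu, sigma)) - math.sqrt(emp[k])) ** 2
               for k in levels)

# Crude grid search over (mu, sigma), in place of a real solver.
best = min(((m / 100.0, s / 100.0)
            for m in range(0, 101) for s in range(20, 81)),
           key=lambda p: loss(*p))
print(best)  # close to (0.37, 0.45)
```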

Conclusions
When dithering a quantiser, the expected value of the output has a one-to-one correspondence with the reference value, and this function can be determined if the probability density function of the dither and the step-size of the quantiser are known. The deterministic error introduced by the quantiser can then be compensated by using the inverse of this function, and the stochastic error due to the dither can be reduced by averaging. Sample-averaging and time-averaging achieve approximately the same performance gain. Time-averaging is more feasible in a physical system, but may require a compensation filter to reduce the amplitude and phase distortion of the reference signal caused by the averaging operation.
It was demonstrated that, when applying inversion, there is an optimal dither variance that minimises the error after averaging. The optimal variance depends on the assumptions made about the reference signal: either the step-size ∆ is assumed to be small compared to the reference and an average error variance is considered, or the worst case is considered by finding the maximum error variance. The variance of the error when using inversion scales with a factor 1/M, where M is the number of averages. For M ≥ 4, using an optimal dither, inverting the average output provides improved performance over only averaging an optimally dithered quantiser; that is, even with a small amount of averaging, applying inversion is better than mere averaging.
A method for identifying the probability density function for the dither using empirical data was presented and demonstrated numerically.This enables compensation by inversion in systems where it is difficult to have exact control over the generated dither signal.

Figure 1 :
Figure 1: A quantised signal will have a signal-dependent error, referred to as the quantisation error: y(t) − x(t).

Figure 5 :
Figure 5: Applying dither makes it possible to reduce the quantisation error if the output is averaged. Here a zero-mean, normally distributed dither (with σ d = ∆/6) has been used. The averaged signal was obtained with M = 1000 samples.

Figure 6 :
Figure 6: The inverse of the non-linear function relating the reference and the expected value can be used to reduce the quantisation error of the averaged signal. Using a zero-mean, normally distributed dither (with σ d = ∆/6), the averaged signal ŷ = ⟨y(t)⟩ M was obtained with M = 1000 samples.

Figure 8 :
Figure 8: The frequency response of the moving average filter G(z) in the passband can be compensated for by using a pre-filter W (z). By using the pre-filter, the amplitude response can be made unity in the passband, compensating for the amplitude attenuation of the moving average filter. The filter W (z) has been synthesised using the least-squares method [17].

Figure 10 :
Figure 10: The sensitivity α(x) from (60) and its dependency on the standard deviation σ d for a normally distributed dither.

Figure 11 :
Figure 11: Upper bounds for the standard deviation of C N −1 (⟨y⟩ M ) (x), using normally distributed dither. Diamond, square and circle indicate the optimal σ d /∆ for the three cases.
