18. Power Spectrum

For a deterministic signal x(t), the spectrum is well defined: if X(\omega) represents its Fourier transform, i.e., if

X(\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt,   (18-1)

then |X(\omega)|^2 represents its energy spectrum. This follows from Parseval's theorem, since the signal energy is given by

\int_{-\infty}^{\infty} |x(t)|^2\, dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} |X(\omega)|^2\, d\omega = E.   (18-2)

Thus |X(\omega)|^2\, \Delta\omega / 2\pi represents the signal energy in the band (\omega, \omega + \Delta\omega) (see Fig 18.1: the signal x(t) and the energy in the band (\omega, \omega + \Delta\omega)).

PILLAI

However, for stochastic processes, a direct application of (18-1) generates a sequence of random variables for every \omega. Moreover, for a stochastic process, E\{|X(t)|^2\} represents the ensemble average power (instantaneous energy) at the instant t.

To obtain the spectral distribution of power versus frequency for stochastic processes, it is best to avoid infinite intervals to begin with, and start with a finite interval (-T, T) in (18-1). Formally, the partial Fourier transform of a process X(t) based on (-T, T) is given by

X_T(\omega) = \int_{-T}^{T} X(t)\, e^{-j\omega t}\, dt,   (18-3)

so that

\frac{|X_T(\omega)|^2}{2T} = \frac{1}{2T}\left| \int_{-T}^{T} X(t)\, e^{-j\omega t}\, dt \right|^2   (18-4)

represents the power distribution associated with that realization based on (-T, T). Notice that (18-4) represents a random variable for every \omega, and its ensemble average gives the average power distribution based on (-T, T). Thus

P_T(\omega) = E\left\{ \frac{|X_T(\omega)|^2}{2T} \right\} = \frac{1}{2T}\int_{-T}^{T}\int_{-T}^{T} E\{X(t_1)X^*(t_2)\}\, e^{-j\omega(t_1 - t_2)}\, dt_1\, dt_2   (18-5)

represents the power distribution of X(t) based on (-T, T). For wide-sense stationary (w.s.s) processes, it is possible to further simplify (18-5). Thus if X(t) is assumed to be w.s.s, then E\{X(t_1)X^*(t_2)\} = R_{XX}(t_1 - t_2), and (18-5) simplifies to

P_T(\omega) = \frac{1}{2T}\int_{-T}^{T}\int_{-T}^{T} R_{XX}(t_1 - t_2)\, e^{-j\omega(t_1 - t_2)}\, dt_1\, dt_2.

Let \tau = t_1 - t_2; proceeding as in (14-24), we get

P_T(\omega) = \int_{-2T}^{2T} R_{XX}(\tau)\, e^{-j\omega\tau}\left(1 - \frac{|\tau|}{2T}\right) d\tau \ge 0   (18-6)

to be the power distribution of the w.s.s. process X(t) based on (-T, T). Finally, letting T \to \infty in (18-6), we obtain

S_{XX}(\omega) = \lim_{T\to\infty} P_T(\omega) = \int_{-\infty}^{\infty} R_{XX}(\tau)\, e^{-j\omega\tau}\, d\tau \ge 0   (18-7)

to be the power spectral density of the w.s.s process X(t). Notice that

R_{XX}(\tau) \longleftrightarrow S_{XX}(\omega) \ge 0,   (18-8)

i.e., the autocorrelation function and the power spectrum of a w.s.s process form a Fourier transform pair, a relation known as the Wiener-Khinchin theorem. From (18-8), the inverse formula gives

R_{XX}(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{XX}(\omega)\, e^{j\omega\tau}\, d\omega,   (18-9)

and in particular for \tau = 0 we get

R_{XX}(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{XX}(\omega)\, d\omega = E\{|X(t)|^2\} = P.   (18-10)

From (18-10), the area under S_{XX}(\omega) represents the total power of the process X(t), and hence S_{XX}(\omega) truly represents the power spectrum (Fig 18.2).
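As a quick numeric illustration of the transform pair (18-7)-(18-9), consider the autocorrelation R_{XX}(\tau) = e^{-|\tau|} (an assumed example, not from the text), whose power spectrum is S_{XX}(\omega) = 2/(1 + \omega^2). The sketch below approximates (18-7) by a Riemann sum:

```python
import numpy as np

# Wiener-Khinchin sketch for the assumed pair
#   R_XX(tau) = exp(-|tau|)  <-->  S_XX(w) = 2 / (1 + w^2).
tau = np.linspace(-50, 50, 200001)
dtau = tau[1] - tau[0]
R = np.exp(-np.abs(tau))            # autocorrelation samples

def S_numeric(w):
    # Riemann-sum approximation of (18-7)
    return np.sum(R * np.exp(-1j * w * tau)).real * dtau

for w in (0.0, 1.0, 3.0):
    print(w, S_numeric(w), 2.0 / (1.0 + w**2))
```

The numeric values track the closed form, and they are nonnegative at every frequency, as (18-7) requires.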

In Fig 18.2, S_{XX}(\omega)\,\Delta\omega / 2\pi represents the power in the band (\omega, \omega + \Delta\omega).

The nonnegative-definiteness property of the autocorrelation function in (14-8) translates into the "nonnegative" property for its Fourier transform (the power spectrum), since from (14-8) and (18-9)

\sum_{i}\sum_{j} a_i a_j^*\, R_{XX}(t_i - t_j) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{XX}(\omega)\left|\sum_i a_i\, e^{j\omega t_i}\right|^2 d\omega \ge 0.   (18-11)

From (18-11), it follows that

R_{XX}(\tau) \text{ nonnegative definite} \Longleftrightarrow S_{XX}(\omega) \ge 0.   (18-12)

If X(t) is a real w.s.s process, then R_{XX}(\tau) = R_{XX}(-\tau), so that

S_{XX}(\omega) = \int_{-\infty}^{\infty} R_{XX}(\tau)\, e^{-j\omega\tau}\, d\tau = 2\int_{0}^{\infty} R_{XX}(\tau)\cos(\omega\tau)\, d\tau = S_{XX}(-\omega) \ge 0,   (18-13)

so that the power spectrum is an even function (in addition to being real and nonnegative).

Power Spectra and Linear Systems

If a w.s.s process X(t) with autocorrelation function R_{XX}(\tau) is applied to a linear system with impulse response h(t) (Fig 18.3: X(t) into h(t), output Y(t)), then the cross-correlation function R_{XY}(\tau) and the output autocorrelation function R_{YY}(\tau) are given by (14-40)-(14-41). From there

R_{XY}(\tau) = R_{XX}(\tau) * h^*(-\tau), \qquad R_{YY}(\tau) = R_{XY}(\tau) * h(\tau).   (18-14)

But if

f(t) \longleftrightarrow F(\omega), \qquad g(t) \longleftrightarrow G(\omega),   (18-15)

then

f(t) * g(t) \longleftrightarrow F(\omega)\, G(\omega),   (18-16)

since the Fourier transform of a convolution equals the product of the individual transforms.

Moreover,

h^*(-t) \longleftrightarrow H^*(\omega).   (18-17)

Using (18-15)-(18-17) in (18-14) we get

S_{XY}(\omega) = S_{XX}(\omega)\, H^*(\omega),   (18-18)

where

H(\omega) = \int_{-\infty}^{\infty} h(t)\, e^{-j\omega t}\, dt   (18-19)

represents the transfer function of the system, and

S_{YY}(\omega) = S_{XY}(\omega)\, H(\omega) = S_{XX}(\omega)\, |H(\omega)|^2.   (18-20)

From (18-18), the cross spectrum need not be real or nonnegative; however, the output power spectrum is real and nonnegative and is related to the input spectrum and the system transfer function as in (18-20). Eq. (18-20) can be used for system identification as well.

W.S.S White Noise Process: If W(t) is a w.s.s white noise process, then from (14-43)

R_{WW}(\tau) = q\,\delta(\tau) \Longrightarrow S_{WW}(\omega) = q.   (18-21)

Thus the spectrum of a white noise process is flat, justifying its name. Notice that a white noise process is unrealizable, since its total power is indeterminate.

From (18-20), if the input to the unknown system in Fig 18.3 is a white noise process, then the output spectrum is given by

S_{YY}(\omega) = q\, |H(\omega)|^2.   (18-22)

Notice that the output spectrum captures the system transfer function characteristics entirely, and for rational systems Eq. (18-22) may be used to determine the pole/zero locations of the underlying system.

Example 18.1: A w.s.s white noise process W(t) is passed through a low-pass filter (LPF) with bandwidth B/2. Find the autocorrelation function of the output process.

Solution: Let X(t) represent the output of the LPF. Then from (18-22)

S_{XX}(\omega) = q\, |H(\omega)|^2 = q, \quad |\omega| \le B/2,   (18-23)

and zero otherwise. The inverse transform of S_{XX}(\omega) gives the output autocorrelation function to be

R_{XX}(\tau) = \frac{1}{2\pi}\int_{-B/2}^{B/2} q\, e^{j\omega\tau}\, d\omega = \frac{qB}{2\pi}\,\frac{\sin(B\tau/2)}{B\tau/2}.   (18-24)

(Fig. 18.4: (a) the ideal LPF response, (b) the sinc-shaped output autocorrelation.)
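The inverse-transform step in (18-24) can be checked numerically; the values of q and B below are illustrative choices, not from the text:

```python
import numpy as np

# Numeric check of (18-24): inverse transform of a flat band |w| <= B/2.
q, B = 1.0, 4.0                      # illustrative spectral level and bandwidth
w = np.linspace(-B/2, B/2, 100001)
dw = w[1] - w[0]

def R_numeric(tau):
    # Riemann-sum approximation of (1/2pi) * integral of q e^{j w tau} dw
    return (q / (2*np.pi)) * np.sum(np.exp(1j * w * tau)).real * dw

def R_closed(tau):
    x = B * tau / 2
    return (q * B / (2*np.pi)) * np.sinc(x / np.pi)   # np.sinc gives sin(x)/x here

for tau in (0.0, 0.5, 2.0):
    print(tau, R_numeric(tau), R_closed(tau))
```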

Eq. (18-23) represents a colored noise spectrum, and (18-24) its autocorrelation function (see Fig 18.4).

Example 18.2: Let

Y(t) = \frac{1}{2T}\int_{t-T}^{t+T} X(\tau)\, d\tau   (18-25)

represent a "smoothing" operation using a moving window on the input process X(t). Find the spectrum of the output Y(t) in terms of that of X(t).

Solution: If we define an LTI system with impulse response h(t) as in Fig 18.5,

h(t) = \frac{1}{2T}, \quad |t| \le T,   (18-26)

then in terms of h(t), Eq. (18-25) reduces to

Y(t) = h(t) * X(t),   (18-27)

so that

S_{YY}(\omega) = S_{XX}(\omega)\, |H(\omega)|^2.   (18-28)

Here

H(\omega) = \int_{-T}^{T} \frac{1}{2T}\, e^{-j\omega t}\, dt = \frac{\sin(\omega T)}{\omega T},

so that

S_{YY}(\omega) = S_{XX}(\omega)\left(\frac{\sin(\omega T)}{\omega T}\right)^2.   (18-29)

Notice that the effect of the smoothing operation in (18-25) is to suppress the high-frequency components in the input, and the equivalent linear system acts as a low-pass filter (a continuous-time moving average) with bandwidth on the order of \pi/T, the first spectral null, in this case (Fig 18.6).

Discrete-Time Processes

For a discrete-time w.s.s stochastic process X(nT) with autocorrelation sequence \{r_k\}, proceeding as above, or formally defining a continuous-time process X(t) = \sum_n X(nT)\,\delta(t - nT), we get the corresponding autocorrelation function to be

R_{XX}(\tau) = \sum_{k=-\infty}^{\infty} r_k\, \delta(\tau - kT).   (18-30)

Its Fourier transform is given by

S_{XX}(\omega) = \sum_{k=-\infty}^{\infty} r_k\, e^{-j\omega k T} \ge 0,   (18-31)

and it defines the power spectrum of the discrete-time process X(nT). From (18-31),

S_{XX}(\omega) = S_{XX}(\omega + 2\pi/T),   (18-32)

so that S_{XX}(\omega) is a periodic function with period 2\pi/T.

This gives the inverse relation

r_k = \frac{T}{2\pi}\int_{-\pi/T}^{\pi/T} S_{XX}(\omega)\, e^{j\omega k T}\, d\omega,   (18-33)

and

r_0 = E\{|X(nT)|^2\} = \frac{T}{2\pi}\int_{-\pi/T}^{\pi/T} S_{XX}(\omega)\, d\omega   (18-34)

represents the total power of the discrete-time process X(nT). The input-output relations for a discrete-time system h(nT) in (14-65)-(14-67) translate into

S_{XY}(\omega) = S_{XX}(\omega)\, H^*(e^{j\omega T})   (18-35)

and

S_{YY}(\omega) = S_{XX}(\omega)\, |H(e^{j\omega T})|^2,   (18-36)

where

H(e^{j\omega T}) = \sum_{n=-\infty}^{\infty} h(nT)\, e^{-j\omega n T}   (18-37)

represents the discrete-time system transfer function.
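A small sketch of (18-31): for the assumed autocorrelation sequence r_k = a^{|k|} (with T = 1, an illustrative choice), the closed-form spectrum is S(\omega) = (1 - a^2)/(1 - 2a\cos\omega + a^2), and the partial sum of (18-31) should approach it:

```python
import numpy as np

# Discrete-time power spectrum (18-31) for the assumed sequence r_k = a^{|k|}.
a = 0.6
k = np.arange(-200, 201)            # a^200 is negligible, so the sum has converged
r = a ** np.abs(k)

def S_sum(w):
    # partial sum of (18-31); imaginary parts cancel by symmetry
    return np.sum(r * np.exp(-1j * w * k)).real

def S_closed(w):
    return (1 - a**2) / (1 - 2*a*np.cos(w) + a**2)

for w in (0.0, 1.0, np.pi):
    print(w, S_sum(w), S_closed(w))
```

Note that the spectrum is nonnegative and periodic with period 2\pi, as (18-31)-(18-32) require.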

Matched Filter

Let r(t) represent a deterministic signal s(t) corrupted by noise. Thus

r(t) = s(t) + w(t),   (18-38)

where r(t) represents the observed data, and it is passed through a receiver with impulse response h(t) (Fig 18.7, Matched Filter: r(t) into h(t), output y(t)). The output y(t) is given by

y(t) = r(t) * h(t) = y_s(t) + n(t),   (18-39)

where

y_s(t) = s(t) * h(t), \qquad n(t) = w(t) * h(t),   (18-40)

and it can be used to make a decision about the presence or absence of s(t) in r(t). Towards this, one approach is to require that the receiver output signal-to-noise ratio (SNR)_0 at time instant t_0 be maximized. Notice that

(SNR)_0 = \frac{|y_s(t_0)|^2}{E\{|n(t_0)|^2\}} = \frac{\left|\frac{1}{2\pi}\int_{-\infty}^{\infty} S(\omega)\, H(\omega)\, e^{j\omega t_0}\, d\omega\right|^2}{\frac{1}{2\pi}\int_{-\infty}^{\infty} S_{WW}(\omega)\, |H(\omega)|^2\, d\omega}   (18-41)

represents the output SNR, where we have made use of (18-20) to determine the average output noise power, and the problem is to maximize (SNR)_0 by optimally choosing the receiver filter H(\omega).

Optimum Receiver for White Noise Input: The simplest input noise model assumes w(t) to be white noise in (18-38) with spectral density N_0, so that (18-41) simplifies to

(SNR)_0 = \frac{\left|\frac{1}{2\pi}\int_{-\infty}^{\infty} S(\omega)\, H(\omega)\, e^{j\omega t_0}\, d\omega\right|^2}{\frac{N_0}{2\pi}\int_{-\infty}^{\infty} |H(\omega)|^2\, d\omega},   (18-42)

and a direct application of the Cauchy-Schwarz inequality in (18-42) gives

(SNR)_0 \le \frac{1}{2\pi N_0}\int_{-\infty}^{\infty} |S(\omega)|^2\, d\omega = \frac{E_s}{N_0},   (18-43)

where E_s is the energy of s(t), and equality in (18-43) is guaranteed if and only if

H(\omega) = S^*(\omega)\, e^{-j\omega t_0},   (18-44)

or

h(t) = s(t_0 - t).   (18-45)

Thus the optimum receiver that maximizes the output SNR at t = t_0 is given by (18-44)-(18-45): the receiver impulse response is the transmitted signal reversed in time and shifted, hence the name "matched filter". Notice that (18-45) need not be causal, and the corresponding maximum SNR is given by (18-43).

Fig 18-8 shows the optimum h(t) for two different values of t_0: (a) the signal s(t) of duration T, (b) h(t) for t_0 = T/2, (c) h(t) for t_0 = T. In Fig 18.8 (b) the receiver is noncausal, whereas in Fig 18-8 (c) the receiver represents a causal waveform.
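A discrete-time sketch of the matched filter (18-45): the pulse shape and the noise level N_0 below are illustrative assumptions, and the output SNR at n_0 should equal E/N_0, the bound in (18-43):

```python
import numpy as np

# Matched filter in discrete time: h[n] = s[n0 - n]   (cf. (18-45)).
# The pulse s and the noise level N0 are illustrative assumptions.
n0 = 63
s = np.sin(np.pi * np.arange(64) / 64) ** 2   # pulse supported on 0..63
h = s[::-1]                                   # time-reversed pulse, t0 = n0
N0 = 0.25                                     # white-noise power per sample
E = np.sum(s**2)                              # signal energy

y_s_peak = np.convolve(s, h)[n0]              # signal output at n0: equals E
noise_power = N0 * np.sum(h**2)               # E{|n(n0)|^2} = N0 * sum h^2
snr = y_s_peak**2 / noise_power               # achieves the bound E / N0
print(snr, E / N0)
```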

If (18-45) turns out to be noncausal, the optimum causal receiver can be shown to be

h(t) = s(t_0 - t)\, u(t),   (18-46)

and the corresponding maximum (SNR)_0 in that case is given by

(SNR)_0 = \frac{1}{N_0}\int_{0}^{t_0} |s(t)|^2\, dt.   (18-47)

Optimum Transmit Signal: In practice, the signal s(t) in (18-38) may be the output of a target that has been illuminated by a transmit signal f(t) of finite duration T. In that case (Fig 18.9: f(t) into the target q(t), output s(t))

s(t) = f(t) * q(t) = \int_{0}^{T} f(\tau)\, q(t - \tau)\, d\tau,   (18-48)

where q(t) represents the target impulse response. One interesting question in this context is to determine the optimum transmit

signal f(t) with normalized energy that maximizes the receiver output SNR at t = t_0 in Fig 18.7. Notice that for a given s(t), Eq. (18-45) represents the optimum receiver, and (18-43) gives the corresponding maximum (SNR)_0. To maximize (SNR)_0 in (18-43), we may substitute (18-48) into (18-43). This gives

(SNR)_0 = \frac{1}{N_0}\int |s(t)|^2\, dt = \frac{1}{N_0}\int_0^T\!\!\int_0^T f(t_1)\, \Lambda(t_1, t_2)\, f^*(t_2)\, dt_1\, dt_2 \le \frac{\lambda_{\max}}{N_0},   (18-49)

where the kernel \Lambda(t_1, t_2) is given by

\Lambda(t_1, t_2) = \int q(t - t_1)\, q^*(t - t_2)\, dt,   (18-50)

and \lambda_{\max} is the largest eigenvalue of the integral equation

\int_0^T \Lambda(t_1, t_2)\, f(t_2)\, dt_2 = \lambda\, f(t_1), \quad 0 \le t_1 \le T.   (18-51)

Here f(t) is subject to the energy constraint

\int_0^T |f(t)|^2\, dt = 1.   (18-52)

If the causal solution in (18-46)-(18-47) is chosen, the kernel in (18-50) simplifies to

\Lambda(t_1, t_2) = \int_0^{t_0} q(t - t_1)\, q^*(t - t_2)\, dt,   (18-53)

and the optimum transmit signal is again given by (18-51). Notice that in the causal case, information beyond t = t_0 is not used.

Observe that the kernel \Lambda(t_1, t_2) in (18-50) captures the target characteristics so as to maximize the output SNR at the observation instant, and the optimum transmit signal is the solution of the integral equation in (18-51) subject to the energy constraint in (18-52).

Fig 18.10 shows the optimum transmit signal and the companion receiver pair for a specific target with impulse response q(t) as shown there (panels (a)-(c)).
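The eigenproblem (18-51) can be approximated by discretizing the causal kernel (18-53) on a grid and taking the dominant eigenvector; the target response q(t), the grid, and the durations below are assumptions for illustration:

```python
import numpy as np

# Discretized sketch of (18-50)-(18-53): build the kernel for an assumed
# causal target response q(t) and take the dominant eigenvector as the
# optimum transmit signal.  T, t0, and q are illustrative choices.
T, t0, M = 1.0, 2.0, 200
t1 = np.linspace(0, T, M)                    # transmit-signal grid
dt = t1[1] - t1[0]
tt = np.arange(0, t0, dt)                    # observation grid on (0, t0)

def q(t):
    return np.where(t >= 0, np.exp(-3*t), 0.0)   # assumed causal target

# Lambda(t1, t2) ~ sum over t of q(t - t1) q(t - t2) dt   (cf. (18-53))
Q = q(tt[:, None] - t1[None, :])             # Q[i, j] = q(tt_i - t1_j)
Lam = Q.T @ Q * dt                           # symmetric, nonnegative kernel

# Largest eigenpair of the discretized integral operator (kernel * dt):
vals, vecs = np.linalg.eigh(Lam * dt)        # eigenvalues in ascending order
f = vecs[:, -1] / np.sqrt(np.sum(vecs[:, -1]**2) * dt)  # unit energy (18-52)
lam_max = vals[-1]
print(lam_max)
```

Here `lam_max / N0` approximates the attainable (SNR)_0 bound in (18-49).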

What if the additive noise in (18-38) is not white?

Let S_{WW}(\omega) represent a (non-flat) power spectral density. In that case, what is the optimum matched filter?

If the noise is not white, one approach is to whiten the input noise first by passing it through a whitening filter, and then proceed with the whitened output as before (Fig 18.7). In Fig 18.11, the observation s(t) + w(t) (colored noise) is passed through the whitening filter g(t), producing s_g(t) + n(t) (white noise).

Notice that the signal part of the whitened output, s_g(t), equals

s_g(t) = s(t) * g(t),   (18-54)

where g(t) represents the whitening filter, and the output noise n(t) is white with unit spectral density. This interesting idea, due to

Wiener, has been exploited in several other problems, including prediction and filtering.

Whitening Filter: What is a whitening filter? From the discussion above, the output spectral density of the whitened noise process equals unity, since it represents normalized white noise by design. But from (18-20),

S_{nn}(\omega) = S_{WW}(\omega)\, |G(\omega)|^2 = 1,

which gives

|G(\omega)|^2 = \frac{1}{S_{WW}(\omega)},   (18-55)

i.e., the whitening-filter transfer function G(\omega) satisfies the magnitude relationship in (18-55). To be useful in practice, the whitening filter should be stable and causal as well. Moreover, at times its inverse transfer function also needs to be implementable, so that it too must be stable. How does one obtain such a filter (if any)? [See section 11.1, pages 499-502 (and also pages 423-424), Text, for a discussion on obtaining whitening filters.]

23 of 38

23

From there, any spectral density that satisfies the finite power constraint

and the Paley-Wiener constraint (see Eq. (11-4), Text)

can be factorized as

where H(s) together with its inverse function 1/H(s) represent two

filters that are both analytic in Re s > 0. Thus H(s) and its inverse 1/ H(s)

can be chosen to be stable and causal in (18-58). Such a filter is known

as the Wiener factor, and since it has all its poles and zeros in the left

half plane, it represents a minimum phase factor. In the rational case,

if X(t) represents a real process, then is even and hence (18-58)

reads

(18-56)

(18-57)

(18-58)

PILLAI

S_{XX}(\omega)\big|_{\omega^2 = -s^2} = S(-s^2) = H(s)\, H(-s).   (18-59)

Example 18.3: Consider a rational spectrum S_{XX}(\omega). The substitution \omega^2 = -s^2 translates it into a rational function S(-s^2). The poles (\times) and zeros (\circ) of this function are shown in Fig 18.12. From there, to maintain the symmetry condition in (18-59), we may group together the left-half-plane factors as H(s),

and it represents the Wiener factor for the given spectrum. Observe that poles and zeros (if any) on the j\omega-axis appear in even multiples in S(-s^2), and hence half of them may be paired with H(s) (and the other half with H(-s)) to preserve the factorization condition in (18-58). Notice that H(s) is stable, and so is its inverse.
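A numeric sketch of the factorization idea, worked on an assumed rational spectrum (not the one in Example 18.3): for S(\omega) = (\omega^2 + 1)/((\omega^2 + 4)(\omega^2 + 9)), substituting \omega^2 = -s^2 and keeping only the left-half-plane roots yields the minimum-phase Wiener factor H(s) = (s + 1)/((s + 2)(s + 3)):

```python
import numpy as np

# Spectral factorization for the assumed spectrum
#   S(w) = (w^2 + 1) / ((w^2 + 4)(w^2 + 9)).
# With w^2 = -s^2:  S(-s^2) = (1 - s^2) / ((4 - s^2)(9 - s^2)).
num_s = np.array([-1.0, 0.0, 1.0])           # 1 - s^2 (zeros at s = +-1)
den_s = np.polymul([-1, 0, 4], [-1, 0, 9])   # (4 - s^2)(9 - s^2)

zeros = np.roots(num_s)
poles = np.roots(den_s)
lhp_zeros = zeros[zeros.real < 0]            # keep Re s < 0 for H(s)
lhp_poles = poles[poles.real < 0]
print(sorted(lhp_zeros.real), sorted(lhp_poles.real))

# Sanity check: |H(jw)|^2 reproduces S(w) on the jw-axis (cf. (18-58)).
w = 2.5
H = np.prod(1j*w - lhp_zeros) / np.prod(1j*w - lhp_poles)
S = (w**2 + 1) / ((w**2 + 4) * (w**2 + 9))
print(abs(H)**2, S)
```

Both H(s) and 1/H(s) have all their poles in Re s < 0, so the factor and its inverse are stable and causal, as required of a Wiener factor.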

More generally, if H(s) is minimum phase, then \ln H(s) is analytic in the right half plane, so that writing

H(j\omega) = A(\omega)\, e^{j\varphi(\omega)}

gives

\ln H(j\omega) = \ln A(\omega) + j\,\varphi(\omega).   (18-60)

Thus, since the real and imaginary parts of a function analytic in the right half plane are Hilbert transform pairs, it follows that the phase function \varphi(\omega) in (18-60) is given by the Hilbert

transform of \ln A(\omega). Thus

\varphi(\omega) = \mathcal{H}\{\ln A(\omega)\}.   (18-61)

Eq. (18-61) may be used to generate the unknown phase function of a minimum-phase factor from its magnitude.

For discrete-time processes, the factorization conditions take the form (see (9-203)-(9-205), Text)

\frac{1}{2\pi}\int_{-\pi}^{\pi} S(\omega)\, d\omega < \infty   (18-62)

and

\frac{1}{2\pi}\int_{-\pi}^{\pi} |\ln S(\omega)|\, d\omega < \infty.   (18-63)

In that case S(\omega) = |H(e^{j\omega})|^2, where the discrete-time system

H(z) = \sum_{k=0}^{\infty} h(k)\, z^{-k}

is analytic together with its inverse in |z| > 1. This unique minimum-phase function represents the Wiener factor in the discrete case.

Matched Filter in Colored Noise: Returning to the matched filter problem in colored noise, the design can be completed as shown in Fig 18.13: the observation r(t) = s(t) + w(t) is passed through the whitening filter G(s), and the whitened output then drives the matched filter h_0(t) = s_g(t_0 - t). Here G(s) represents the whitening filter associated with the noise spectral density S_{WW}(\omega), as in (18-55)-(18-58). Notice that G(s) is the inverse of the Wiener factor L(s) corresponding to the spectrum S_{WW}(\omega), i.e.,

S_{WW}(\omega) = |L(j\omega)|^2, \qquad G(s) = \frac{1}{L(s)}.   (18-64)

The whitened output s_g(t) + n(t) in Fig 18.13 is similar

to (18-38), and from (18-45) the optimum receiver for the whitened problem is given by

h_0(t) = s_g(t_0 - t), \qquad s_g(t) = g(t) * s(t).

If we insist on obtaining the receiver transfer function H(\omega) for the original colored-noise problem, we can deduce it easily from Fig 18.14. Notice that Fig 18.14 (a) and (b) are equivalent (inserting the cascade L^{-1}(s)\, L(s) = 1 ahead of the receiver changes nothing), and Fig 18.14 (b) is equivalent to Fig 18.13. Hence (see Fig 18.14 (b))

H(\omega) = G(\omega)\, H_0(\omega), \qquad H_0(\omega) = S_g^*(\omega)\, e^{-j\omega t_0},

or

H(\omega) = G(\omega)\, S_g^*(\omega)\, e^{-j\omega t_0}   (18-65)

turns out to be the overall matched filter for the original problem. Once again, transmit-signal design can be carried out in this case also.

AM/FM Noise Analysis: Consider the noisy AM signal

X(t) = m(t)\cos(\omega_0 t + \varphi) + n(t),   (18-66)

and the noisy FM signal

X(t) = A\cos(\omega_0 t + \psi(t) + \varphi) + n(t),   (18-67)

where

\psi(t) = \begin{cases} c\int_{-\infty}^{t} m(\tau)\, d\tau & \text{(FM)} \\ c\, m(t) & \text{(PM)}. \end{cases}   (18-68)

Here m(t) represents the message signal and \varphi a random phase jitter in the received signal. In the case of FM, \psi'(t) = c\, m(t), so that the instantaneous frequency is proportional to the message signal. We will assume that both the message process m(t) and the noise process n(t) are w.s.s with power spectra S_{mm}(\omega) and S_{nn}(\omega), respectively. We wish to determine whether the AM and FM signals are w.s.s, and if so, their respective power spectral densities.

Solution. AM signal: In this case, from (18-66), if we assume \varphi to be uniformly distributed in (0, 2\pi) and independent of m(t) and n(t), then

R_{XX}(\tau) = \tfrac{1}{2} R_{mm}(\tau)\cos(\omega_0\tau) + R_{nn}(\tau),   (18-69)

so that (see Fig 18.15)

S_{XX}(\omega) = \tfrac{1}{4}\big[S_{mm}(\omega - \omega_0) + S_{mm}(\omega + \omega_0)\big] + S_{nn}(\omega).   (18-70)

(Fig 18.15: (a) the message spectrum S_{mm}(\omega), (b) the AM spectrum shifted to \pm\omega_0.)

Thus AM represents a stationary process under the above conditions. What about FM?

FM signal: In this case (suppressing the additive noise component in (18-67)) we obtain

R_{XX}(t+\tau, t) = \frac{A^2}{2}\, E\big\{\cos\big(\omega_0\tau + \psi(t+\tau) - \psi(t)\big)\big\},   (18-71)

since the random phase \varphi, uniform in (0, 2\pi), averages out the sum-frequency term.

Eq. (18-71) can be rewritten as

R_{XX}(t+\tau, t) = \frac{A^2}{2}\big[a(t,\tau)\cos(\omega_0\tau) - b(t,\tau)\sin(\omega_0\tau)\big],   (18-72)

where

a(t,\tau) = E\{\cos(\psi(t+\tau) - \psi(t))\}   (18-73)

and

b(t,\tau) = E\{\sin(\psi(t+\tau) - \psi(t))\}.   (18-74)

In general, a(t,\tau) and b(t,\tau) depend on both t and \tau, so that noisy FM is not w.s.s in general, even if the message process m(t) is w.s.s.

In the special case when m(t) is a stationary Gaussian process, from (18-68), \psi(t) is also a stationary Gaussian process with autocorrelation function R_{\psi\psi}(\tau) for the FM case. In that case the random variable

Y = \psi(t+\tau) - \psi(t)   (18-75)

is zero-mean Gaussian,

where

\sigma_Y^2 = E\{Y^2\} = 2\big[R_{\psi\psi}(0) - R_{\psi\psi}(\tau)\big].   (18-76)

Hence its characteristic function is given by

\Phi_Y(\omega) = E\{e^{j\omega Y}\} = e^{-\omega^2\sigma_Y^2/2},   (18-77)

which for \omega = 1 gives

E\{e^{jY}\} = e^{-\sigma_Y^2/2} = e^{-[R_{\psi\psi}(0) - R_{\psi\psi}(\tau)]},   (18-78)

where we have made use of (18-76). But also

E\{e^{jY}\} = E\{\cos Y\} + j\,E\{\sin Y\} = a(\tau) + j\,b(\tau)   (18-79)

from (18-73)-(18-74). On comparing (18-79) with (18-78) we get

a(\tau) = e^{-[R_{\psi\psi}(0) - R_{\psi\psi}(\tau)]}   (18-80)

and

b(\tau) = 0,   (18-81)

so that the FM autocorrelation function in (18-72) simplifies into

R_{XX}(\tau) = \frac{A^2}{2}\, e^{-[R_{\psi\psi}(0) - R_{\psi\psi}(\tau)]}\cos(\omega_0\tau).   (18-82)

Notice that for a stationary Gaussian message input m(t) (or \psi(t)), the nonlinear output X(t) is indeed strict-sense stationary, with autocorrelation function as in (18-82).

Narrowband FM: If R_{\psi\psi}(0) \ll 1, then (18-82) may be approximated as

R_{XX}(\tau) \simeq \frac{A^2}{2}\big[1 - \big(R_{\psi\psi}(0) - R_{\psi\psi}(\tau)\big)\big]\cos(\omega_0\tau),   (18-83)

which is similar to the AM case in (18-69). Hence narrowband FM and ordinary AM have equivalent performance in terms of noise suppression.

Wideband FM: This case corresponds to R_{\psi\psi}(0) \gg 1. In that case a Taylor series expansion of R_{\psi\psi}(\tau) about \tau = 0 gives

R_{\psi\psi}(\tau) \simeq R_{\psi\psi}(0) - \frac{c^2 R_{mm}(0)}{2}\,\tau^2,   (18-84)

and substituting this into (18-82) we get

R_{XX}(\tau) \simeq \frac{A^2}{2}\, e^{-c^2 R_{mm}(0)\tau^2/2}\cos(\omega_0\tau),   (18-85)

so that the power spectrum of FM in this case is given by

S_{XX}(\omega) \simeq \frac{A^2}{4}\sqrt{\frac{2\pi}{\sigma^2}}\left[e^{-(\omega-\omega_0)^2/2\sigma^2} + e^{-(\omega+\omega_0)^2/2\sigma^2}\right],   (18-86)

where

\sigma^2 = c^2 R_{mm}(0).   (18-87)

Notice that S_{XX}(\omega) always occupies infinite bandwidth irrespective of the actual message bandwidth (Fig 18.16), and this capacity to spread the message signal across the entire spectral band helps to reduce the noise effect in any band.

Spectrum Estimation / Extension Problem

Given a finite set of autocorrelations r_0, r_1, \ldots, r_n, one interesting problem is to extend the given sequence of autocorrelations such that the spectrum corresponding to the overall sequence is nonnegative for all frequencies, i.e., given r_0, r_1, \ldots, r_n, we need to determine r_{n+1}, r_{n+2}, \ldots such that

S(\omega) = \sum_{k=-\infty}^{\infty} r_k\, e^{-j\omega k} \ge 0.   (18-88)

Notice that from (14-64), the given sequence satisfies T_n > 0, where T_n denotes the Hermitian Toeplitz matrix of the autocorrelations r_0, r_1, \ldots, r_n, and at every step of the extension this nonnegativity condition must be satisfied. Thus we must have

T_{n+1} > 0, \quad T_{n+2} > 0, \;\ldots   (18-89)

Let x = r_{n+1}. Then

T_{n+1} = \begin{bmatrix} T_n & \mathbf{r} \\ \mathbf{r}^* & r_0 \end{bmatrix}, \qquad \mathbf{r} = (x, r_n, \ldots, r_1)^T,

so that after some algebra

\det T_{n+1} = \det T_n\, \big(r_0 - \mathbf{r}^*\, T_n^{-1}\, \mathbf{r}\big) > 0,   (18-90)

or

|x - \xi_n|^2 < \Delta_n^2,   (18-91)

where the center \xi_n and the radius \Delta_n depend only on the given autocorrelations r_0, r_1, \ldots, r_n.   (18-92)

Eq. (18-91) represents the interior of a circle with center \xi_n and radius \Delta_n, as in Fig 18.17, and geometrically it represents the admissible set of values for r_{n+1}. Repeating this procedure for r_{n+2}, r_{n+3}, \ldots, it follows that the class of extensions that satisfy (18-88) is infinite.

It is possible to parametrically represent the class of all admissible spectra. Known as the trigonometric moment problem, extensive literature is available on this topic.

[See section 12.4, "Youla's Parameterization", pages 562-574, Text, for a reasonably complete description and further insight into this topic.]
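The one-step extension picture around (18-91) can be verified numerically for assumed values r_0 = 1, r_1 = 0.5: scanning candidate values of r_2 and testing positive definiteness of the Toeplitz matrix recovers an interval (the real-case section of the circle) with center r_1^2/r_0 and radius (r_0^2 - r_1^2)/r_0 = det T_1 / det T_0:

```python
import numpy as np

# One-step autocorrelation extension: which real r2 keep T_2 > 0,
# given the assumed values r0 = 1, r1 = 0.5?
r0, r1 = 1.0, 0.5
center = r1**2 / r0                 # xi_1 for this real example
radius = (r0**2 - r1**2) / r0       # Delta_1 = det T_1 / det T_0

def is_admissible(r2):
    T2 = np.array([[r0, r1, r2],
                   [r1, r0, r1],
                   [r2, r1, r0]])
    return np.all(np.linalg.eigvalsh(T2) > 0)   # positive definite?

xs = np.linspace(-2, 2, 4001)
ok = np.array([is_admissible(x) for x in xs])
print(xs[ok].min(), xs[ok].max())    # numerically recovered interval
print(center - radius, center + radius)
```

Any interior point keeps T_2 positive definite and leaves a fresh (smaller or equal) admissible circle for r_3, which is the geometric content of Fig 18.17.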