EE 779 Advanced Topics in Signal Processing Assignment 2


1. [∗]The estimated autocorrelation sequence of a random process x(n) is
rx(k) = {2, 1, 1, 0.5, 0} for lags k = 0, 1, 2, 3, 4.
Estimate the power spectrum of x(n) for each of the following cases.
(a) x(n) is an AR(2) process.
(b) x(n) is an MA(2) process.
(c) x(n) is an ARMA(1,1) process.
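For part (a), the AR(2) parameters follow directly from the Yule-Walker equations built from the listed lags. A minimal NumPy sketch (the spectrum grid size is an arbitrary choice):

```python
import numpy as np

# Given autocorrelation estimates r_x(k) for k = 0..4.
r = np.array([2.0, 1.0, 1.0, 0.5, 0.0])

# AR(2) Yule-Walker equations: R2 a = -[r(1), r(2)]^T,
# where R2 = Toep{r(0), r(1)} is the 2x2 autocorrelation matrix.
R2 = np.array([[r[0], r[1]],
               [r[1], r[0]]])
a = np.linalg.solve(R2, -r[1:3])      # a = [a(1), a(2)]

# Prediction error power b0^2 = r(0) + a(1) r(1) + a(2) r(2).
b0_sq = r[0] + a @ r[1:3]

# AR(2) spectrum estimate: P(e^jw) = b0^2 / |1 + a(1)e^-jw + a(2)e^-j2w|^2.
w = np.linspace(0, np.pi, 512)
A = 1 + a[0] * np.exp(-1j * w) + a[1] * np.exp(-2j * w)
P_ar = b0_sq / np.abs(A) ** 2
```

The MA(2) and ARMA(1,1) cases follow the same pattern with their respective model equations.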
2. Given the autocorrelation sequence
rx(k) = {1, 0.8, 0.5, 0.1} for lags k = 0, 1, 2, 3,
find the reflection coefficients Γj, the model parameters aj(k), and the modeling errors εj, for j =
1, 2, 3. Use the Levinson-Durbin algorithm.
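The recursion can be checked numerically. A sketch of the Levinson-Durbin recursion in NumPy, using one common sign convention (Γj = aj(j), εj = εj−1(1 − Γj²)):

```python
import numpy as np

def levinson_durbin(r):
    """Levinson-Durbin recursion for real autocorrelation values r(0)..r(p).

    Returns the reflection coefficients Gamma_j, the final model
    coefficients a_p(k) (with a_p(0) = 1), and the errors eps_j, j = 1..p.
    """
    p = len(r) - 1
    a = np.array([1.0])
    eps = r[0]
    gammas, errs = [], []
    for j in range(1, p + 1):
        # Gamma_j = -(r(j) + sum_{k=1}^{j-1} a_{j-1}(k) r(j-k)) / eps_{j-1}
        gamma = -(r[j] + a[1:] @ r[j - 1:0:-1]) / eps
        # Order update: a_j(k) = a_{j-1}(k) + Gamma_j a_{j-1}(j-k).
        a_pad = np.concatenate([a, [0.0]])
        a = a_pad + gamma * a_pad[::-1]
        eps *= (1.0 - gamma ** 2)
        gammas.append(gamma)
        errs.append(eps)
    return np.array(gammas), a, np.array(errs)

r = np.array([1.0, 0.8, 0.5, 0.1])
gammas, a, errs = levinson_durbin(r)
```

Sign conventions for Γj differ between texts; flip the sign if your reference defines aj(j) = −Γj.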
3. [∗]Determine whether the following statements are True or False. Justify.
(a) If rx(k) is an autocorrelation sequence with rx(k) = 0 for |k| > p, then Γk = 0 for |k| > p.
(b) Given an autocorrelation sequence, rx(k) for k = 0, . . . , p, if the (p + 1) × (p + 1) Toeplitz matrix
Rp = Toep{rx(0), rx(1), . . . , rx(p)}
is positive definite, then
rx = [rx(0), rx(1), . . . , rx(p), 0, 0, . . .]^T
will always be a valid autocorrelation sequence, i.e., extending rx(k) with zeros is a valid autocorrelation extension. Note: a Toeplitz matrix is completely described by one of its columns or rows.
(c) If rx(k) is periodic, then Γj will be periodic with the same period.
4. A random process may be classified in terms of the properties of the prediction error sequence εk that
is produced when fitting an all-pole model to the process. Listed below are different classifications for
the error sequence:
(a) εk = c > 0 for all k ≥ 0.
(b) εk = c > 0 for all k ≥ k0, for some k0 > 0.
(c) εk → c as k → ∞, where c > 0.
(d) εk → 0 as k → ∞.
(e) εk = 0 for all k ≥ k0, for some k0 > 0.
For each of these classifications, describe as completely as possible the characteristics that may be
attributed to the process and its power spectrum.
5. Show that the optimal one-step linear predictor for an AR(p) process based on the infinite past is
the same as one based on the previous p samples. Hint: Show that the solution of the Yule-Walker
equation yields a(k) = 0 for k > p for an AR(p) process.
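A quick numerical illustration of the claim, assuming an AR(1) process with parameter α, for which r(k) ∝ α^|k|: the order-5 predictor obtained from the normal equations keeps only the lag-1 tap.

```python
import numpy as np

# For an AR(1) process x(n) = alpha*x(n-1) + w(n), the autocorrelation is
# r(k) = r(0) * alpha^|k|.  Solve the order-q normal equations for q > 1
# and check that the optimal predictor uses only the previous sample.
alpha, q = 0.7, 5
k = np.arange(q + 1)
r = alpha ** k                                   # r(0) normalized to 1

R = alpha ** np.abs(k[:q, None] - k[None, :q])   # q x q Toeplitz matrix
a = np.linalg.solve(R, -r[1:q + 1])              # predictor coefficients

# a comes out as [-alpha, 0, 0, 0, 0]: taps beyond lag 1 vanish.
```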
6. [∗]Estimates are made of the correlation function of a particular signal and the values obtained are:
rx(0) = 7.24, rx(1) = 3.6. Determine the parameters of the MA(1) model:
H(z) = b0 + b1 z^{-1},
which matches these correlation values using:
(a) Direct solution of the Yule-Walker MA equations.
(b) By spectral factorization.
Sketch the power spectral estimate obtained using this MA model. Then fit an AR(1) model to the
given correlation data and sketch the resulting spectral estimate. Is this estimate better than the one
obtained using the MA model?
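The spectral-factorization arithmetic for part (b) can be checked in a few lines (a sketch; selecting the minimum-phase root is an assumption):

```python
import numpy as np

# MA(1): x(n) = b0*w(n) + b1*w(n-1) with unit-variance white noise, so
#   r(0) = b0^2 + b1^2 = 7.24   and   r(1) = b0*b1 = 3.6.
r0, r1 = 7.24, 3.6

# (b0 + b1)^2 = r(0) + 2 r(1),  (b0 - b1)^2 = r(0) - 2 r(1).
s = np.sqrt(r0 + 2 * r1)          # b0 + b1
d = np.sqrt(r0 - 2 * r1)          # b0 - b1 (minimum-phase choice: b0 > b1)
b0, b1 = (s + d) / 2, (s - d) / 2

# MA(1) power spectrum: P(e^jw) = |b0 + b1 e^{-jw}|^2.
w = np.linspace(0, np.pi, 512)
P_ma = np.abs(b0 + b1 * np.exp(-1j * w)) ** 2
```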
7. In the MUSIC algorithm, finding the peaks of the frequency estimation function
PMUSIC = 1 / ( Σ_{i=P+1}^{M} |e^H v_i|^2 )
is equivalent to finding the minima of the denominator. Here M is the dimension of the correlation
matrix, e is a vector of complex exponentials, P is the number of complex exponentials in the signal,
and the v_i are the normalized eigenvectors of the input correlation matrix. Show that finding the
minima of the denominator is equivalent to finding the maxima of

Σ_{i=1}^{P} |e^H v_i|^2.
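The key observation is that the v_i form an orthonormal basis, so the noise- and signal-subspace projections of e always sum to the same constant |e|^2. A numerical sanity check on a random Hermitian correlation matrix (sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Eigenvectors of a Hermitian correlation matrix form an orthonormal basis,
# so the projections of e onto all M eigenvectors sum to |e|^2.
M = 8
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = A @ A.conj().T                    # Hermitian positive semidefinite
_, V = np.linalg.eigh(R)              # columns are orthonormal eigenvectors

w0 = 1.1                              # arbitrary test frequency
e = np.exp(1j * w0 * np.arange(M))    # complex-exponential vector, |e|^2 = M

proj = np.abs(e.conj() @ V) ** 2      # |e^H v_i|^2 for every eigenvector
total = proj.sum()                    # equals |e|^2 = M for any w0

# Hence sum_{i=P+1}^{M} |e^H v_i|^2 = M - sum_{i=1}^{P} |e^H v_i|^2:
# minimizing the noise-subspace term maximizes the signal-subspace term.
```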
8. [∗]The 3 × 3 autocorrelation matrix of a harmonic process is
Rx = [  3  -j  -1
        j   3  -j
       -1   j   3 ].
(a) Using Pisarenko harmonic decomposition, find the complex exponential frequencies and the variance of the white noise.
(b) Repeat (a) using the MUSIC algorithm.
(c) Repeat (a) using the minimum-norm method.
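A numerical sketch of the eigendecomposition approach for part (a): the smallest eigenvalue estimates the noise variance, and the exponential frequency can be located by scanning the noise-subspace projection, MUSIC-style (the grid density is an arbitrary choice):

```python
import numpy as np

# The given 3x3 autocorrelation matrix of the harmonic process.
Rx = np.array([[3, -1j, -1],
               [1j, 3, -1j],
               [-1, 1j, 3]])

lam, V = np.linalg.eigh(Rx)           # eigenvalues in ascending order
sigma2 = lam[0].real                  # smallest eigenvalue -> noise variance

# Number of exponentials = eigenvalues above the noise floor.
P = int(np.sum(lam > sigma2 + 1e-9))
Vn = V[:, :3 - P]                     # noise-subspace eigenvectors

# Pseudospectrum: peaks of 1 / sum_i |e(w)^H v_i|^2 over the noise subspace.
w = np.linspace(-np.pi, np.pi, 4001)
E = np.exp(1j * np.outer(np.arange(3), w))          # e(w) in each column
noise_power = np.sum(np.abs(Vn.conj().T @ E) ** 2, axis=0)
w_hat = w[np.argmin(noise_power)]     # estimated exponential frequency
```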
9. In the ESPRIT technique, the signal-subspace eigenvector matrices VS1 and VS2 are obtained as
VS1 = [I_M̃ 0] VS and VS2 = [0 I_M̃] VS.
The translation matrix Φ, whose eigenvalues are used to estimate the complex frequencies, can be
obtained by solving V̂S2 ≈ V̂S1 Φ. Obtain the least-squares solution for Φ.
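The least-squares step can be exercised on synthetic data; in this sketch the matrix size and the two test frequencies are arbitrary assumptions, and the subspace is taken to be the steering matrix itself:

```python
import numpy as np

# Check the least-squares ESPRIT rotation on a noiseless synthetic subspace.
M = 6
freqs = np.array([0.5, 1.2])          # assumed frequencies (rad/sample)

# Signal subspace spanned by the steering vectors of the two exponentials.
n = np.arange(M)[:, None]
Vs = np.exp(1j * n * freqs[None, :])  # M x P Vandermonde matrix

Vs1 = Vs[:-1, :]                      # [I 0] Vs : drop the last row
Vs2 = Vs[1:, :]                       # [0 I] Vs : drop the first row

# Least-squares solution of Vs2 ~ Vs1 @ Phi.
Phi, *_ = np.linalg.lstsq(Vs1, Vs2, rcond=None)

# The eigenvalues of Phi lie on the unit circle at e^{j w_k}.
w_hat = np.sort(np.angle(np.linalg.eigvals(Phi)))
```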

Simulation

1. In this simulation you will use the same data set you used in Assignment 1. You will first estimate
the AR parameters for the data and use these parameters to estimate the power spectrum.
(a) Use the autocorrelation method to estimate a 3×3 Toeplitz correlation matrix for the signal data.
Show this matrix in your report.
(b) Solve the Yule-Walker equations corresponding to the matrix generated in part (a) to obtain the
second-order linear prediction filter parameters and the prediction error variance.
(c) Apply the filter to the original data set and generate the prediction error signal. Plot this signal
and compute its variance. Does it compare well with the theoretical prediction error variance you
obtained in part (b)?
(d) Take the upper 2 × 2 block of the correlation matrix you generated in part (a) and solve for
the coefficients and prediction error variance of a first-order linear prediction filter. How does this
first-order prediction filter compare to the second-order filter you generated in part (b)?
(e) Compute and plot AR power spectral estimates for the data sets using the first-order model
parameters.
(f) Compute and plot AR power spectral estimates for the data sets using the second-order model
parameters.
(g) Compare the power spectrum obtained using the periodogram method (best case) with that
obtained using the AR models.
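Since the Assignment 1 data set is not reproduced here, parts (a)-(c) can be sketched on a stand-in synthetic AR(2) signal (substitute your own data for x; the model order and coefficients below are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in data: a synthetic AR(2) realization x(n) = 0.9x(n-1) - 0.5x(n-2) + w(n).
N = 4096
a_true = np.array([-0.9, 0.5])
w = rng.standard_normal(N)
x = np.zeros(N)
for n in range(N):
    x[n] = w[n] - a_true[0] * (x[n - 1] if n > 0 else 0.0) \
                - a_true[1] * (x[n - 2] if n > 1 else 0.0)

# (a) Biased autocorrelation estimates and the 3x3 Toeplitz matrix.
r = np.array([x[:N - k] @ x[k:] / N for k in range(3)])
R = np.array([[r[0], r[1], r[2]],
              [r[1], r[0], r[1]],
              [r[2], r[1], r[0]]])

# (b) Yule-Walker equations for the second-order predictor.
a_hat = np.linalg.solve(R[:2, :2], -r[1:3])
eps_theory = r[0] + a_hat @ r[1:3]        # prediction error variance

# (c) Prediction error signal e(n) = x(n) + a(1)x(n-1) + a(2)x(n-2).
e = x[2:] + a_hat[0] * x[1:-1] + a_hat[1] * x[:-2]
eps_empirical = e.var()
```

The empirical error variance in (c) should land close to the theoretical value from (b); the 2 × 2 block of R gives the first-order filter of part (d) the same way.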
2. The data sets R01, R10, R40 and I01, I10, I40 contain 32 samples of the real and imaginary parts
respectively of the following complex signal in white noise:
x[n] = s[n] + Kw[n]
for K = 0.01, K = 0.10, and K = 0.40. The signal s[n] is defined by
s[n] = e^{j3πn/8} + e^{j5πn/8} + e^{jπn/2},
and w[n] is a zero-mean, unit-variance white noise sequence.
(a) Compute the periodogram for K = 0.01 and plot it in the range 0 ≤ ω ≤ π. This will be the
reference plot.
(b) Plot and compare the spectral estimates for each noise value using the following methods.
i. An AR model using the autocorrelation method. You can use p = 7.
ii. An AR model using the covariance method. You can use p = 7.
iii. MUSIC method. You can use an 8 × 8 autocorrelation matrix. Use the covariance method to
estimate the correlation matrix.
iv. Minimum-norm method. You can use an 8 × 8 autocorrelation matrix. Use the covariance
method to estimate the correlation matrix.
Place the plots side by side for easy comparison. Also provide a plot in which all four methods are
overlaid on a relative dB scale.
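For the reference plot of part (a), note that with N = 32 all three frequencies fall exactly on DFT bins (3π/8 on bin 6, π/2 on bin 8, 5π/8 on bin 10), so the periodogram of the noiseless signal is particularly clean. A sketch for the K = 0 case (add the noise term for K = 0.01 etc.):

```python
import numpy as np

# Periodogram of the noiseless three-exponential signal; with N = 32 the
# frequencies 3pi/8, pi/2, 5pi/8 land exactly on DFT bins 6, 8, 10.
N = 32
n = np.arange(N)
s = np.exp(1j * 3 * np.pi * n / 8) + np.exp(1j * 5 * np.pi * n / 8) \
    + np.exp(1j * np.pi * n / 2)

Pxx = np.abs(np.fft.fft(s)) ** 2 / N      # periodogram on the DFT grid
peaks = np.flatnonzero(Pxx > 1e-6)        # bins carrying power
# The frequency of bin k is w = 2*pi*k/N; plot Pxx over 0 <= w <= pi.
```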
References
1. Petre Stoica and Randolph Moses, “Spectral analysis of signals”, Prentice Hall, 2005. (Indian edition
available)
2. Monson H. Hayes, “Statistical digital signal processing and modeling”, Wiley India Pvt. Ltd., 2002. (Indian
edition available)
3. Charles W. Therrien, “Discrete random signals and statistical signal processing”, 2004.