Linear Prediction

The goal of linear prediction is to model a signal as a linear combination of its past (or future) samples.

Yule-Walker equations

For a discrete-time signal x, we want to approximate the sample x[n] as a linear combination xp[n] of the k preceding samples:

xp[n] = -c[1] * x[n-1] - ... - c[k] * x[n-k]

The best approximation in the mean-square sense is the tuple (c[1], ..., c[k]) for which the squared norm of the error

e = xp - x

is minimal. Writing p(x) = xp, and x^{-i} for the delayed signal x^{-i}[n] = x[n-i], p(x) is a linear combination of (x^{-1}, ..., x^{-k}); the error p(x) - x is therefore minimal when p(x) is the orthogonal projection of x onto the vector space V spanned by x^{-1}, ..., x^{-k}. In particular, the error e is then orthogonal to every vector in V.

Writing a[1], ..., a[p] for the prediction coefficients (the c[i] above, with p the order) and R[m] = <x, x^{-m}> for the autocorrelation of x (so that R[-m] = \overline{R}[m]), this orthogonality yields the Yule-Walker equations.
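
Concretely (a sketch of the intermediate step, using the same notation as the matrix below), requiring <e, x^{-i}> = 0 for each i gives

\sum_{j=1}^{p} a[j] \, R[i-j] = -R[i], \qquad i = 1, \dots, p,

which in matrix form reads: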

\begin{pmatrix}
        -R[1] \\
        -R[2] \\
        \vdots \\
        -R[p]
\end{pmatrix}
=
\begin{pmatrix}
        R[0]    & \overline{R}[1] & \dots  & \overline{R}[p-1] \\
        R[1]    & R[0]            & \ddots & \vdots \\
        \vdots  & \ddots          & \ddots & \overline{R}[1]\\
        R[p-1]  & \dots           & R[1]   & R[0]
\end{pmatrix}
\begin{pmatrix}
        a[1] \\
        \vdots \\
        \vdots \\
        a[p]
\end{pmatrix}
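
As an illustration only (this is not how talkbox solves the system; that is the Levinson-Durbin recursion described next), the equations can be checked against a generic Toeplitz solver. This sketch assumes a real-valued signal and that scipy.linalg.solve_toeplitz is available; yule_walker_direct is an illustrative name, not part of talkbox:

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def yule_walker_direct(x, p):
        """Solve R a = -r for a real signal x and order p (illustrative sketch)."""
        x = np.asarray(x, dtype=float)
        # autocorrelation estimates R[0], ..., R[p]
        r = np.array([np.dot(x[:len(x) - m], x[m:]) for m in range(p + 1)])
        # the p x p Toeplitz matrix is defined by its first column R[0], ..., R[p-1]
        a = solve_toeplitz(r[:p], -r[1:])
        # return the full filter [1, a[1], ..., a[p]]
        return np.concatenate(([1.0], a))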

Levinson-Durbin recursion

Levinson-Durbin recursion is a recursive algorithm that solves the Yule-Walker equations in O(p^2) operations instead of the O(p^3) usually necessary to invert a matrix. It exploits the Hermitian Toeplitz structure of the correlation matrix.
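
The idea of the recursion can be sketched in a few lines of NumPy for the real-valued case (the routine documented below is more general); the function name here is illustrative, not part of talkbox:

    import numpy as np

    def levinson_durbin(r, order):
        """Levinson-Durbin recursion for a real autocorrelation sequence r (r[0] > 0)."""
        a = np.zeros(order + 1)
        a[0] = 1.0
        e = r[0]                      # prediction error at order 0
        k = np.zeros(order)           # reflection coefficients
        for i in range(1, order + 1):
            # acc = R[i] + sum_{j=1}^{i-1} a[j] R[i-j]
            acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
            k[i - 1] = -acc / e
            # order update: a[j] += k * a[i-j] for j = 1..i (a[i] was 0, a[0] is 1)
            a_prev = a[:i].copy()
            a[1:i + 1] += k[i - 1] * a_prev[::-1]
            e *= 1.0 - k[i - 1] ** 2  # updated prediction error
        return a, e, k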

scikits.talkbox.levinson(r, order, axis=-1)

Levinson-Durbin recursion, to efficiently solve symmetric linear systems with Toeplitz structure.

Parameters:

r : array-like

input array to invert (since the matrix is symmetric Toeplitz, the corresponding p x p matrix is defined by p items only). Generally the autocorrelation of the signal for linear prediction coefficient estimation. The first item must be a nonzero real number; for linear prediction it corresponds to the autocorrelation at lag 0.

order : int

order of the recursion. For order p, you will get p+1 coefficients.

axis : int, optional

axis over which the algorithm is applied. -1 by default.

Returns:

a : array-like

the solution of the inversion (see notes).

e : array-like

the prediction error.

k : array-like

reflection coefficients.

Notes

Levinson-Durbin recursion is a well-known algorithm to solve the Hermitian Toeplitz system:

\begin{pmatrix}
        -R[1] \\
        \vdots \\
        -R[p]
\end{pmatrix}
=
\begin{pmatrix}
        R[0]    & \overline{R}[1] & \dots  & \overline{R}[p-1] \\
        R[1]    & R[0]            & \ddots & \vdots \\
        \vdots  & \ddots          & \ddots & \overline{R}[1]\\
        R[p-1]  & \dots           & R[1]   & R[0]
\end{pmatrix}
\begin{pmatrix}
        a[1] \\
        \vdots \\
        a[p]
\end{pmatrix}

with respect to a. Using the special symmetry in the matrix, the inversion can be done in O(p^2) instead of O(p^3).

Only double arguments are supported: float and long double inputs are internally converted to double, and complex inputs are not supported at all.
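
A usage sketch following the signature and return values documented above (the autocorrelation values here are made up for illustration):

    import numpy as np
    from scikits.talkbox import levinson

    # toy autocorrelation sequence; the first item must be a nonzero real
    r = np.array([2.0, 1.2, 0.6, 0.2])
    a, e, k = levinson(r, 3)
    # a holds the order + 1 filter coefficients, e the prediction error,
    # k the reflection coefficients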

Linear prediction coding

Solves the Yule-Walker equations for a signal x, using the autocorrelation method and Levinson-Durbin recursion for the inversion.

scikits.talkbox.lpc(signal, order, axis=-1)

Compute the Linear Prediction Coefficients.

Return the order + 1 LPC coefficients for the signal. c = lpc(x, k) will find the k + 1 coefficients of a k-th order linear predictor:

xp[n] = -c[1] * x[n-1] - ... - c[k] * x[n-k]

such that the sum of the squared errors e[i] = xp[i] - x[i] is minimized.

Parameters:

signal : array_like

input signal

order : int

LPC order (the output will have order + 1 items)

Returns:

a : array-like

the solution of the inversion.

e : array-like

the prediction error.

k : array-like

reflection coefficients.

Notes

This uses the Levinson-Durbin recursion for the autocorrelation matrix inversion, and an FFT for the autocorrelation computation.

For small orders, particularly when order << signal size, direct computation of the autocorrelation is faster: use levinson and correlate in this case, as in the sketch below.
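
A sketch of both routes on a synthetic AR(2) signal (the recovered coefficients are approximate and depend on the autocorrelation estimate):

    import numpy as np
    from scikits.talkbox import levinson, lpc

    # synthesize an AR(2) signal: x[n] = 0.75 x[n-1] - 0.5 x[n-2] + noise
    rng = np.random.RandomState(0)
    w = rng.randn(4096)
    x = np.zeros(4096)
    for n in range(2, 4096):
        x[n] = 0.75 * x[n - 1] - 0.5 * x[n - 2] + w[n]

    # route 1: lpc (FFT-based autocorrelation + Levinson-Durbin)
    a1, e1, k1 = lpc(x, 2)

    # route 2, for small orders: direct autocorrelation + levinson
    r = np.array([np.dot(x[:len(x) - m], x[m:]) for m in range(3)])
    a2, e2, k2 = levinson(r, 2)

    # both should be close to the true filter [1, -0.75, 0.5]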