Review of LTI systems and filters¶

  • Mathematical definitions
  • Impulse response and convolution
  • Frequency response
  • Magnitude and phase response
  • Frequency selectivity

Finite impulse response (FIR) filters¶

For a causal discrete-time FIR filter, each output sample $y[n]$ is computed by taking the inner product of the input samples $x[n]$ with the filter coefficients $b[n]$.

$$ y[n] = \sum_{m=0}^{N}{x[n-m]b[m]} $$

This is often visualized as a 'tapped delay line'.
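As a concrete illustration, here is a direct NumPy transcription of the difference equation above (a sketch for clarity, not an optimized implementation; in practice `np.convolve` does the same job):

```python
import numpy as np

def fir_filter(x, b):
    """Causal FIR filter: y[n] = sum_{m=0}^{N} x[n-m] * b[m].

    x : input signal (1-D array)
    b : filter coefficients b[0..N]
    Returns y with the same length as x (samples before n = 0 taken as zero).
    """
    y = np.zeros(len(x))
    for n in range(len(x)):
        for m, bm in enumerate(b):
            if n - m >= 0:              # causal: only past and present inputs
                y[n] += x[n - m] * bm
    return y

# A 3-point moving average is the simplest FIR example.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
b = np.array([1/3, 1/3, 1/3])
y = fir_filter(x, b)
# Same result as convolution truncated to len(x):
assert np.allclose(y, np.convolve(x, b)[:len(x)])
```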

FIR filter design¶

For typical frequency-selective filters, the design goals are threefold:

  • Minimize distortion of the signal in the passband(s)
  • Attenuate the stopband as much as possible
  • Keep the order of the filter low

These goals trade off against one another through the main design parameters:

  • Filter order $N$
    • Lower order will provide shorter delay
    • Higher order will provide better frequency selectivity
  • Relative weights $W$
    • Specifies how to weight each passband and stopband when computing the error that is minimized by the algorithm
  • Location of transition regions (specified by $f_{\text{pass}_i}$ and $f_{\text{stop}_i}$)
    • Transition regions cause too much distortion to function as a passband but not enough attenuation to function as a stopband
    • Smaller transition region increases the amount of usable bandwidth
    • Larger transition region will provide better performance in the passband and stopband
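The order-versus-transition-width tradeoff can be seen with a minimal windowed-sinc lowpass design in NumPy. (This is the simple windowing method, chosen here only because it is self-contained; the weight parameter $W$ above belongs to equiripple methods such as Parks-McClellan, implemented by `scipy.signal.remez`. The function names and the 10%/90% transition measure are illustrative choices.)

```python
import numpy as np

def lowpass_fir(num_taps, f_cut, fs):
    """Windowed-sinc lowpass FIR design (Hamming window).

    num_taps : filter length N + 1 (odd keeps the filter symmetric, linear phase)
    f_cut    : cutoff frequency in Hz
    fs       : sample rate in Hz
    """
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = (2 * f_cut / fs) * np.sinc(2 * f_cut / fs * n)  # ideal lowpass, truncated
    return h * np.hamming(num_taps)

def transition_width(b, n_fft=4096):
    """Fraction of the band where |H| sits between 10% and 90% of full scale."""
    mag = np.abs(np.fft.rfft(b, n_fft))
    return np.mean((mag > 0.1) & (mag < 0.9))

short = lowpass_fir(21, f_cut=1000, fs=8000)
long_ = lowpass_fir(101, f_cut=1000, fs=8000)
# The higher-order filter buys a narrower transition region...
assert transition_width(long_) < transition_width(short)
# ...at the price of a longer delay: (num_taps - 1) / 2 samples.
```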

Week 2¶

Frame-based processing¶

  • Less overhead
    • Each interrupt requires a fixed number of processor cycles $N_I$.
    • For sample-by-sample processing, this cost is incurred for each sample
    • For frame-based processing, this is reduced to $N_I / N_F$ per sample
  • Computational advantages
    • Algorithms that filter a frame of $N_F$ samples at once can be more efficient than algorithms that operate sample-by-sample
    • Allows use of single instruction multiple data (SIMD) operations
  • Memory advantages
    • In some systems, large blocks of memory can be moved more efficiently.
    • Caching may be more efficient with large frames
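To make frame-based processing concrete, here is a NumPy sketch (illustrative names) of an FIR filter applied one frame at a time. Carrying the last $N$ input samples across frame boundaries makes the output identical to sample-by-sample processing, while the filtering work is done in frame-sized batches:

```python
import numpy as np

def process_frames(x, b, frame_len):
    """Frame-based FIR filtering with state carried between frames.

    The last len(b) - 1 input samples are saved across frame boundaries so
    that every frame sees the history its first outputs depend on.
    """
    history = np.zeros(len(b) - 1)
    out = []
    for start in range(0, len(x), frame_len):
        frame = x[start:start + frame_len]
        buf = np.concatenate([history, frame])
        full = np.convolve(buf, b)                    # one frame-sized batch of work
        out.append(full[len(b) - 1:len(b) - 1 + len(frame)])
        history = buf[len(buf) - (len(b) - 1):]       # state for the next frame
    return np.concatenate(out)

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
b = np.array([0.25, 0.5, 0.25])
# Matches filtering the whole signal at once, for any frame length.
assert np.allclose(process_frames(x, b, frame_len=64),
                   np.convolve(x, b)[:len(x)])
```

The frame length here is the latency/overhead knob: larger $N_F$ amortizes the per-interrupt cost to $N_I / N_F$ per sample but delays the output by up to a full frame.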

Week 3¶

IIR Filters¶

For a causal discrete-time IIR filter, each output sample $y[n]$ is computed from a set of feedforward terms and a set of feedback terms.

$$ y[n] = \underbrace{\sum_{k=0}^{N}{b_k x[n-k]}}_{\text{feedforward}}-\underbrace{\sum_{k=1}^{N}{a_k y[n-k]}}_{\text{feedback}} $$

The choice to subtract the feedback terms rather than add them is arbitrary, but it is conventional because it leads to a slightly simpler expression for the transfer function in the z-domain:

$$ H(z) = \frac{b_0 + b_1 z^{-1} + \cdots + b_N z^{-N}}{1 + a_1 z^{-1} + \cdots + a_N z^{-N}} $$
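A direct NumPy transcription of the difference equation above (an illustrative sketch, not an optimized direct form):

```python
import numpy as np

def iir_filter(x, b, a):
    """Direct-form I IIR filter; a[0] is assumed to be 1, as in H(z) above."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = 0.0
        for k in range(len(b)):            # feedforward terms
            if n - k >= 0:
                acc += b[k] * x[n - k]
        for k in range(1, len(a)):         # feedback terms (note the minus sign)
            if n - k >= 0:
                acc -= a[k] * y[n - k]
        y[n] = acc
    return y

# One-pole lowpass: y[n] = x[n] + 0.9 y[n-1], i.e. b = [1], a = [1, -0.9].
impulse = np.array([1.0, 0.0, 0.0, 0.0])
h = iir_filter(impulse, b=[1.0], a=[1.0, -0.9])
# The impulse response decays geometrically: 1, 0.9, 0.81, 0.729, ...
```

Unlike an FIR filter, the impulse response here never reaches zero exactly, which is what "infinite impulse response" refers to.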

Numerical errors¶

With finite-precision data types, many sequences of operations still produce a result accurate to the displayed precision. For example:

>> single(7.2)*single(13.8)*single(1.0)

        99.36

In general however, operations using finite precision data types (such as single precision floating point) are subject to numerical errors. For example:

>> single(7.2)*single(13.8e-35)*single(1.0e35)

        99.35999

These numerical errors can cause serious issues in the design and implementation of IIR filters.
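The same effect reproduces with NumPy's `float32`, which follows the same IEEE 754 single-precision arithmetic as MATLAB's `single`:

```python
import numpy as np

# Both products are mathematically equal to 7.2 * 13.8 = 99.36, but rounding
# at the mismatched intermediate scales makes the single-precision results differ.
well_scaled = np.float32(7.2) * np.float32(13.8) * np.float32(1.0)
rescaled    = np.float32(7.2) * np.float32(13.8e-35) * np.float32(1.0e35)
assert well_scaled != rescaled
```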

Cascade of biquads¶

Previously, the transfer function was expressed in terms of the coefficients $b_k$ and $a_k$. Alternatively, we can express it in terms of poles $p_k$, zeros $\zeta_k$, and a constant gain $C$.

$$ H(z) = C \frac{(z-\zeta_1)(z-\zeta_2)\cdots(z-\zeta_N)}{(z-p_1)(z-p_2)\cdots(z-p_N)} $$

A downside of this representation is that, even if all $a_k$ and $b_k$ are real, the corresponding $p_k$ and $\zeta_k$ may be complex. Alternatively, we can group complex-conjugate pairs of poles and zeros to create a similar representation in which all of the coefficients are real. This is known as a cascade of biquads or a second-order sections representation.

$$ H_k(z) = \frac{b_{k,0} + b_{k,1} z^{-1} + b_{k,2} z^{-2}}{1 + a_{k,1} z^{-1} + a_{k,2} z^{-2}} $$

$$ H(z) = H_1(z) H_2(z) \cdots H_M(z) $$
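A quick NumPy check that cascading two biquads is equivalent to the single higher-order filter obtained by multiplying the numerator and denominator polynomials. (The coefficients below are arbitrary stable examples; in practice `scipy.signal.tf2sos` and `sosfilt` handle the pole/zero pairing and filtering.)

```python
import numpy as np

def iir_filter(x, b, a):
    """Direct-form IIR filter with a[0] = 1."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc
    return y

# Two biquad sections: all coefficients real, even though the poles are complex.
sos = [
    ([1.0, 0.5, 0.25], [1.0, -1.2, 0.72]),   # (b_{1,k}, a_{1,k})
    ([1.0, -0.3, 0.1], [1.0, -0.5, 0.9]),    # (b_{2,k}, a_{2,k})
]

x = np.random.default_rng(0).standard_normal(64)

# Cascade: run the output of one section into the next.
y_cascade = x
for b, a in sos:
    y_cascade = iir_filter(y_cascade, b, a)

# Equivalent single 4th-order filter: multiply (convolve) the polynomials.
b_full = np.convolve(sos[0][0], sos[1][0])
a_full = np.convolve(sos[0][1], sos[1][1])
y_direct = iir_filter(x, b_full, a_full)

assert np.allclose(y_cascade, y_direct)
```

In exact arithmetic the two forms are identical; the cascade form is preferred in fixed- and single-precision implementations because each section's coefficients and internal signals stay well scaled.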