
fixed.forgettingFactor

Compute forgetting factor required for streaming input data

Since R2021b

Description


alpha = fixed.forgettingFactor(m) returns the forgetting factor α that, for an infinite number of rows, gives the equivalent gain of a matrix A with m rows.

Examples


This example shows how to use the fixed.forgettingFactor and fixed.forgettingFactorInverse functions.

The growth in the QR decomposition can be seen by looking at the magnitude of the first element R(1,1) of the upper-triangular factor R, which is equal to the Euclidean norm of the first column of matrix A,

$$|R(1,1)| = \|A(:,1)\|_2.$$

To see this, create matrix A as a column of ones of length n and compute R of the economy-size QR decomposition.

n = 1e4;
A = ones(n,1);

Then $|R(1,1)| = \|A(:,1)\|_2 = \sqrt{\sum_{i=1}^{n} 1^2} = \sqrt{n}$.

R = fixed.qlessQR(A)
R = 100.0000
norm(A)
ans = 100
sqrt(n)
ans = 100

The diagonal elements of the upper-triangular factor R of the QR decomposition may be positive, negative, or zero, but fixed.qlessQR and fixed.qrAB always return the diagonal elements of R as non-negative.
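
As a quick check of this sign convention, here is a minimal sketch (assuming Fixed-Point Designer is available, as in the rest of this example): negating the column of ones leaves its Euclidean norm unchanged, and fixed.qlessQR still reports a nonnegative R(1,1).

A_neg = -ones(n,1);
R_neg = fixed.qlessQR(A_neg)   % expected to equal norm(A_neg), which is 100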

In a real-time application, such as when data is streaming continuously from a radar array, you can update the QR decomposition with an exponential forgetting factor α, where 0 < α < 1. Use the fixed.forgettingFactor function to compute a forgetting factor α that acts as if the matrix were being integrated over m rows, maintaining a gain of about √m. The relationship between α and m is α = e^(−1/(2m)).

m = 16;
alpha = fixed.forgettingFactor(m);
R_alpha = fixed.qlessQR(A,alpha)
R_alpha = 3.9377
sqrt(m)
ans = 4
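
As a quick cross-check of the relationship above, the value returned by fixed.forgettingFactor(m) should match the closed-form expression directly (a one-line sketch using the built-in exp function).

exp(-1/(2*m))   % matches alpha = fixed.forgettingFactor(m), approximately 0.9692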

If you are working with a system and have been given a forgetting factor α, and you want to know the effective number of rows m that you are integrating over, use the fixed.forgettingFactorInverse function. The relationship between m and α is m = −1/(2 log(α)).

fixed.forgettingFactorInverse(alpha)
ans = 16
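
Equivalently, the closed-form expression above recovers the same value (a one-line sketch using the built-in natural logarithm).

-1/(2*log(alpha))   % recovers m = 16, matching fixed.forgettingFactorInverse(alpha)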

Input Arguments


m — Number of rows in matrix A, specified as a positive integer-valued scalar.

Data Types: double

Output Arguments


alpha — Forgetting factor, returned as a scalar.

Algorithms

In real-time applications, such as when data is streaming continuously from a radar array [1], the QR decomposition is often computed continuously as each new row of data arrives. In these systems, the previously computed upper-triangular matrix, R, is updated and weighted by the forgetting factor α, where 0 < α < 1. This computation treats the matrix A as if it were infinitely tall. The series of transformations is as follows.

$$R_0 = \operatorname{zeros}(n,n)$$
$$\begin{bmatrix} R_0 \\ A(1,:) \end{bmatrix} \rightarrow \begin{bmatrix} R_1 \\ 0 \end{bmatrix}, \qquad \begin{bmatrix} \alpha R_1 \\ A(2,:) \end{bmatrix} \rightarrow \begin{bmatrix} R_2 \\ 0 \end{bmatrix}, \qquad \ldots, \qquad \begin{bmatrix} \alpha R_k \\ A(k+1,:) \end{bmatrix} \rightarrow \begin{bmatrix} R_{k+1} \\ 0 \end{bmatrix}$$

Without the forgetting factor α, the values of R would grow without bound.

With the forgetting factor, the gain in R is

$$g = \sqrt{\int_0^{\infty} \alpha^{2x}\,dx} = \sqrt{\frac{-1}{2\log(\alpha)}}.$$

The gain of computing R without a forgetting factor from an m-by-n matrix A is √m. Therefore,

$$\sqrt{m} = \sqrt{\frac{-1}{2\log(\alpha)}}, \qquad m = \frac{-1}{2\log(\alpha)}, \qquad \alpha = e^{-1/(2m)}.$$
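
The following is a minimal sketch of this streaming update, using MATLAB's built-in economy-size qr rather than the fixed.* implementation (it illustrates the transformation series above, not the internal algorithm of fixed.qlessQR). With rows of ones and α = e^(−1/(2m)), the value of R settles near √m.

% Emulate the update [alpha*R; newRow] -> R from the transformation series above.
m     = 16;
alpha = exp(-1/(2*m));            % same value as fixed.forgettingFactor(m)
nCols = 1;                        % one column, matching the earlier example
R     = zeros(nCols,nCols);
for k = 1:1000                    % stream in 1000 rows of ones
    newRow = ones(1,nCols);
    [~,R]  = qr([alpha*R; newRow],0);   % economy-size QR, Q is discarded
    R      = abs(R);              % keep the (scalar) diagonal nonnegative
end
R                                 % approximately sqrt(m) = 4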

References

[1] Rader, C. M. "VLSI Systolic Arrays for Adaptive Nulling." IEEE Signal Processing Magazine (July 1996): 29–49.

Version History

Introduced in R2021b