
dlqr

Linear-quadratic (LQ) state-feedback regulator for discrete-time state-space system

Description

[K,S,P] = dlqr(A,B,Q,R,N) calculates the optimal gain matrix K, the solution S of the associated algebraic Riccati equation, and the closed-loop poles P using the discrete-time state-space matrices A and B. This function is only valid for discrete-time models. For continuous-time models, use lqr.
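For illustration, a minimal sketch of a call (the plant and weights below are placeholders, not values from this page): a double integrator discretized with sample time Ts, regulated with identity state weighting and unit input weighting.

% Illustrative discrete-time plant: double integrator sampled at Ts = 0.1 s
Ts = 0.1;
A = [1 Ts; 0 1];
B = [Ts^2/2; Ts];

Q = eye(2);    % state-cost weighting (placeholder)
R = 1;         % input-cost weighting (placeholder)

[K,S,P] = dlqr(A,B,Q,R);   % omitting N sets the cross term to zero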

Input Arguments


State matrix A, specified as an n-by-n matrix, where n is the number of states.

Input-to-state matrix B, specified as an n-by-m matrix, where m is the number of inputs.

State-cost weighted matrix Q, specified as an n-by-n matrix, where n is the number of states. You can use Bryson's rule to set the initial values of Q:

$$Q_{i,i} = \frac{1}{\text{maximum acceptable value of } \left(\text{error}_{\text{states}}\right)^2}, \quad i \in \{1, 2, \ldots, n\}$$

$$Q = \begin{bmatrix} Q_{1,1} & 0 & \cdots & 0 \\ 0 & Q_{2,2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & Q_{n,n} \end{bmatrix}$$

Here, n is the number of states.
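As a sketch of this rule in MATLAB (the maximum acceptable state errors below are hypothetical values, chosen only to show the construction):

% Hypothetical maximum acceptable error for each of the n states
xMax = [0.5; 2];
Q = diag(1 ./ xMax.^2);   % Bryson's rule: Q(i,i) = 1/xMax(i)^2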

Input-cost weighted matrix R, specified as a scalar or a matrix of the same size as D'D, where D is the feedthrough state-space matrix. You can use Bryson's rule to set the initial values of R:

$$R_{j,j} = \frac{1}{\text{maximum acceptable value of } \left(\text{error}_{\text{inputs}}\right)^2}, \quad j \in \{1, 2, \ldots, m\}$$

$$R = \begin{bmatrix} R_{1,1} & 0 & \cdots & 0 \\ 0 & R_{2,2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & R_{m,m} \end{bmatrix}$$

Here, m is the number of inputs.
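A corresponding sketch for R, assuming a hypothetical two-input system (the maximum acceptable input magnitudes are again illustrative):

% Hypothetical maximum acceptable magnitude for each of the m inputs
uMax = [10; 4];
R = diag(1 ./ uMax.^2);   % Bryson's rule: R(j,j) = 1/uMax(j)^2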

Optional cross term matrix N, specified as a matrix. If N is not specified, then dlqr sets N to 0 by default.

Output Arguments


Optimal gain of the closed-loop system K, returned as an m-by-n matrix, where n is the number of states and m is the number of inputs. For a single-input system, K is a row vector of length n.

Solution S of the associated algebraic Riccati equation, returned as an n-by-n matrix, where n is the number of states. In other words, S has the same dimensions as the state matrix A. For more information, see idare.

Poles of the closed-loop system P, returned as a column vector of length n, where n is the number of states.
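For a hedged consistency check of these outputs (reusing the illustrative A, B, Q, R from the Description section, and assuming idare from the same toolbox with its documented six-argument syntax), the Riccati solution and gain from idare should agree with those from dlqr:

[K,S,P] = dlqr(A,B,Q,R);
[X,Kd,L] = idare(A,B,Q,R,[],[]);   % [] uses idare defaults for the cross term and E
norm(S - X)                        % expected to be near zero
norm(Kd - K)                       % expected to be near zero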

Algorithms

dlqr computes the optimal gain matrix K such that the state-feedback law u[n] = -Kx[n] minimizes the quadratic cost function

$$J(u) = \sum_{n=1}^{\infty} \left( x[n]^T Q\, x[n] + u[n]^T R\, u[n] + 2\, x[n]^T N\, u[n] \right)$$

for the discrete-time state-space model x[n+1] = Ax[n] + Bu[n].
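As an illustrative sketch (reusing the gain K and matrices from the earlier example, with N = 0 and a hypothetical initial state), the running cost can be accumulated alongside the closed-loop update:

% Accumulate the quadratic cost along the regulated trajectory (N = 0)
x = [1; 0];                       % hypothetical initial state
J = 0;
for n = 1:100
    u = -K*x;                     % state-feedback law u[n] = -K x[n]
    J = J + x'*Q*x + u'*R*u;      % stage cost x[n]'Q x[n] + u[n]'R u[n]
    x = A*x + B*u;                % plant update x[n+1] = A x[n] + B u[n]
end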

In addition to the state-feedback gain K, dlqr returns the infinite horizon solution S of the associated discrete-time Riccati equation

$$A^T S A - S - \left(A^T S B + N\right)\left(B^T S B + R\right)^{-1}\left(B^T S A + N^T\right) + Q = 0$$

and the closed-loop eigenvalues P = eig(A - BK). The gain matrix K is derived from S using

$$K = \left(B^T S B + R\right)^{-1}\left(B^T S A + N^T\right)$$

In all cases, when you omit the cross term matrix N, dlqr sets N to 0.
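As a sketch of these relations in MATLAB (again with the illustrative matrices from the Description section and N = 0), the Riccati residual should be near zero, and K and P should be recoverable from S:

[K,S,P] = dlqr(A,B,Q,R);

% Riccati residual A'SA - S - (A'SB)(B'SB+R)^(-1)(B'SA) + Q, with N = 0
res = A'*S*A - S - (A'*S*B)/(B'*S*B + R)*(B'*S*A) + Q;
norm(res)                            % expected to be near zero

Kcheck = (B'*S*B + R) \ (B'*S*A);    % should match K
Pcheck = eig(A - B*K);               % should match P (up to ordering)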

Version History

Introduced before R2006a

See Also

lqr | idare