Mcs-16-08 Optimal Control

رَبِّ زِدْنِي عِلْمًا وَارْزُقْنِي فَهْمًا
"My Lord! Advance me in knowledge and true understanding."

Modern Control Systems MCT 4321
Lecture #16: Optimal Control
Assoc. Prof. Dr. Wahyudi Martono
Department of Mechatronics Engineering, International Islamic University Malaysia
E-mail: [email protected]


Summary of Last Lecture

• A state observer (full-order or reduced-order) is required when the state is not measurable.
• The state observer is designed using pole placement:
  – Direct Comparison Method
  – CCF Transformation Method
  – Ackermann's Formula
• The control law and the estimator can be designed separately and combined at the end.


Outline

• Concept of Optimal Control
• Linear Quadratic Regulator


Concept of Optimal Control

We can use pole placement to place the desired closed-loop poles:
• Dominant Poles Design: try to make the closed-loop system behave like a simple standard 2nd-order system.
• Prototype Design: select all poles to match standard prototype systems such as:
  – Bessel polynomial systems
  – ITAE-based polynomials

Which one is the “optimal” controller K?
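Pole placement alone gives no unique answer: any self-conjugate pole set is admissible. As an illustrative sketch (not part of the original slides), assuming SciPy is available, `scipy.signal.place_poles` computes a gain K for a chosen pole set, shown here on a hypothetical double-integrator plant:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical plant: double integrator x' = Ax + Bu
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# One of infinitely many admissible closed-loop pole choices
desired = [-1.0, -2.0]
K = place_poles(A, B, desired).gain_matrix

# The closed-loop poles eig(A - BK) match the request,
# but nothing says this particular choice is "optimal"
poles = np.linalg.eigvals(A - B @ K)
print(np.sort(poles.real))  # two poles near -2 and -1
```

Every admissible pole set yields a different K, which is exactly why a quantitative criterion for "best" is needed.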


Concept of Optimal Control

Let's consider the state-feedback control system with control law u = r − Kx:

[Block diagram: the reference r and the feedback −Kx are summed to give u; u enters B, is summed with Ax and integrated to give the state x; x passes through C to give the output y]

• How do we determine the "optimal" controller K?
• What is the meaning of optimal?


Concept of Optimal Control

• The word optimal intuitively means doing a job in the best possible way.
  – Is UIA the best university in the world?
  – Is MIT the best university in the world?
• Before beginning a search for an optimal solution:
  – the job must be defined
  – a mathematical scale must be established for quantifying what "best" means
  – the possible solutions must be spelled out


Concept of Optimal Control

Hence, before searching for an optimal controller, the following mathematical statements should be defined:
• A description of the system to be controlled
• A description of the system constraints and possible alternatives
• A description of the task to be accomplished
• A statement of the criterion for judging optimal performance


Concept of Optimal Control

• A description of the system to be controlled

  ẋ = Ax + Bu,   y = Cx

• A description of the system constraints and possible alternatives
  – Control input constraints
  – State constraints
• A description of the task to be accomplished
  – For example, transfer the state from the initial state x(t0) to a specified final state x(tf)


Concept of Optimal Control

• A statement of the criterion for judging optimal performance. For example, to optimize:
  – the cycle time of an operation
  – the control effort (maximum force; energy spent)
  – the transient and/or steady-state errors, or
  – the dollar cost of an operation

In general, we use a cost function or performance criterion J for judging performance.


Concept of Optimal Control

Cost Function or Performance Criteria

The most general continuous performance criterion considered here is

  J = S(x(tf), tf) + ∫_{t0}^{tf} L(x(t), u(t)) dt

where:
• S is a scalar-valued cost function associated with the error in the stopping or terminal state at time tf
• L is the cost or loss function associated with transient state errors and control effort


Concept of Optimal Control

Cost Function or Performance Criteria

There are several optimal control problems, depending on the chosen performance criterion:
• The minimum-time control problem
• The terminal control problem
• The minimum-energy control problem
• The regulator control problem
• The tracking control problem


Concept of Optimal Control

The terminal control problem

If we select S = [x(tf) − xd]ᵀ[x(tf) − xd] and L = 0, then the performance criterion is

  J = [x(tf) − xd]ᵀ [x(tf) − xd]

Minimizing J means minimizing the squared norm of the error between the final state and the desired state.


Concept of Optimal Control

The minimum-time control problem

If we select S = 0 and L = 1, then the performance criterion is

  J = ∫_{t0}^{tf} dt = tf − t0

Minimizing J means the states reach the terminal point in the shortest possible time.


Concept of Optimal Control

The minimum-energy control problem

If we select S = 0 and L = uᵀu, then the performance criterion is

  J = ∫_{t0}^{tf} uᵀu dt

Minimizing J means minimizing the control energy.


Concept of Optimal Control

Suppose the following have been decided:
• A description of the system to be controlled
• A description of the system constraints and possible alternatives
• A description of the task to be accomplished
• A statement of the criterion for judging optimal performance

How do we determine the controller gain matrix K that gives the optimal solution for the specified J?


Concept of Optimal Control

We can solve the optimal control problem using the following methods:
• Calculus of Variations
• Dynamic Programming
• Pontryagin's Minimum Principle


LQR Optimal Control

A common choice of the performance criterion is the following quadratic performance criterion:

  J = ∫_{0}^{∞} (xᵀQx + uᵀRu) dt

What kind of optimal control problem is this?
• Consider Q = 0. What is that?
• Consider R = 0. What is that?


Concept of Optimal Control

Quadratic performance criterion:

  J = ∫_{0}^{∞} (xᵀQx + uᵀRu) dt

• Q and R are symmetric weighting matrices to be selected; Q is positive semi-definite and R is positive definite (so that R⁻¹ exists).
• Q determines the relative cost penalty assigned to excursions of each state from its equilibrium value; the more 'critical' states are given a higher weighting.
• R determines the relative cost penalty assigned to the level of each control signal.
• The aim is to drive the states close to 0 as t → ∞ while penalising excessive control effort.
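The trade-off between Q and R can be seen numerically. As an illustrative sketch (not from the slides), assuming SciPy is available and using a hypothetical double-integrator plant, increasing R shrinks the optimal gain, i.e. input energy is saved at the cost of slower regulation:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical plant: double integrator
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)  # equal penalty on both states

def lqr_gain(R):
    """K = R^-1 B^T P, with P from the continuous-time ARE."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.inv(R) @ B.T @ P

K_cheap = lqr_gain(np.array([[1.0]]))     # control effort is cheap
K_costly = lqr_gain(np.array([[100.0]]))  # control effort is expensive

# A larger R penalises input energy more, so the optimal gain is smaller
print(np.linalg.norm(K_cheap), np.linalg.norm(K_costly))
```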


Concept of Optimal Control

The LQR Optimal Control Statement

Given the system described by

  ẋ = Ax + Bu,   y = Cx

determine the optimal feedback gain matrix K_lqr in the control law

  u = −K_lqr x

so as to minimize the performance index

  J = ∫_{0}^{∞} (xᵀQx + uᵀRu) dt


Concept of Optimal Control

The LQR Optimal Control Solution

The optimal control law (obtainable by a variety of methods) is

  u = −K_lqr x

where

  K_lqr = R⁻¹BᵀP

Here, P is the solution of the following Algebraic Riccati Equation (ARE):

  PA + AᵀP + Q − PBR⁻¹BᵀP = 0
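A minimal numerical sketch (not from the slides, assuming SciPy is available): `scipy.linalg.solve_continuous_are` solves the ARE directly, and the residual of the equation can be checked term by term on a hypothetical example system:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical example system (any stabilizable (A, B) works)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 1.0])
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)

# P must satisfy  PA + A^T P + Q - P B R^-1 B^T P = 0
residual = P @ A + A.T @ P + Q - P @ B @ np.linalg.inv(R) @ B.T @ P
print(np.abs(residual).max())  # numerically ~0

K_lqr = np.linalg.inv(R) @ B.T @ P  # the optimal gain
```

The resulting closed-loop matrix A − B·K_lqr is guaranteed stable when (A, B) is stabilizable and Q, R meet the conditions above.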


Concept of Optimal Control

Example

Consider the following plant

  ẋ = [0 1; 0 0] x + [0; 1] u,   y = [1 0] x

with the performance index

  J = ∫_{0}^{∞} (xᵀ [1 0; 0 2] x + uᵀu) dt

Determine the optimal gain matrix K_lqr and find the closed-loop poles!
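A sketch of a numerical check (assuming SciPy is available, and reading the weighting matrix in the example as Q = diag(1, 2) with R = 1):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 2.0])
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

print(K)      # ≈ [[1. 2.]]
poles = np.linalg.eigvals(A - B @ K)
print(poles)  # double pole near s = -1
```

By hand: the ARE yields P = [2 1; 1 2], so K_lqr = BᵀP = [1 2], and the closed-loop characteristic polynomial is s² + 2s + 1 = (s + 1)², a double pole at s = −1.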


Concept of Optimal Control

Exercise

Consider the following plant

  ẋ = [0 1; −4 −2] x + [0; 4] u,   y = [1 0] x

with the performance index

  J = ∫_{0}^{∞} (xᵀ [2 0; 0 1] x + uᵀu) dt

Determine the optimal gain matrix K_lqr and find the closed-loop poles!


Concept of Optimal Control

The LQR Optimal Control Solution

The optimal control law (obtainable by a variety of methods) is

  u = −K_lqr x

where

  K_lqr = R⁻¹BᵀP

Here, P is the solution of the following ARE:

  PA + AᵀP + Q − PBR⁻¹BᵀP = 0

How do we choose the Q and R matrices?


Concept of Optimal Control

How to select the Q and R matrices?

• The choice of the weighting matrices Q and R is a trade-off between control performance (large Q) and low input energy (large R).
• It is usually adequate to let the two matrices be diagonal.
• The Q and R parameters generally need to be tuned iteratively until satisfactory closed-loop behaviour is obtained.


Concept of Optimal Control

How to select Q and R matrices?

• An initial guess is to choose Q and R diagonal:

  Q = diag(Q11, Q22, …, Qnn),   R = diag(R11, R22, …, Rmm)

where

  Qii = 1 / max[xi²],   Rii = 1 / max[ui²]

Here max[xi²] is the maximum acceptable value of xi², and max[ui²] is the maximum acceptable value of ui².
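This normalisation rule is easy to script. A minimal sketch, where the state and input limits are hypothetical values chosen only for illustration:

```python
import numpy as np

def initial_weights(max_x, max_u):
    """Initial-guess diagonal weights: Q_ii = 1/max_x[i]^2, R_ii = 1/max_u[i]^2."""
    Q = np.diag(1.0 / np.square(max_x))
    R = np.diag(1.0 / np.square(max_u))
    return Q, R

# Hypothetical limits: position excursion up to 2, velocity up to 1,
# actuator input up to 0.5
Q, R = initial_weights(np.array([2.0, 1.0]), np.array([0.5]))
print(Q)  # diag(0.25, 1.0)
print(R)  # [[4.0]]
```

States allowed a large excursion get a small weight, so the cost penalises each state and input relative to its own acceptable range.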


Next Topics

• More on Optimal Control


Further Readings

• Gopal, Digital Control and State Variable Methods, Chapter 9, Section 9.5.

إِنَّ ٱللَّهَ لَا يُغَيِّرُ مَا بِقَوْمٍ حَتَّىٰ يُغَيِّرُوا۟ مَا بِأَنفُسِهِمْ
"…Verily, God will never change the condition of a people until they change what is in themselves…" (Al-Qur'an 13:11)
