Control Systems - 1

Modeling in state space

Modern control theory:

The modern trend in engineering systems is toward greater complexity, due mainly to the requirements of complex tasks and good accuracy. Complex systems may have multiple inputs and multiple outputs and may be time-varying. Because of the necessity of meeting increasingly stringent requirements on the performance of control systems, the increase in system complexity, and easy access to large-scale computers, modern control theory, a new approach to the analysis and design of complex control systems, has been developed since around 1960. This new approach is based on the concept of state. The concept of state itself is not new, since it has long existed in classical dynamics and other fields.

Modern control theory versus conventional control theory:

Modern control theory is contrasted with conventional control theory in that the former is applicable to multiple-input-multiple-output systems, which may be linear or nonlinear, time-invariant or time-varying, while the latter is applicable only to linear, time-invariant, single-input-single-output systems. Also, modern control theory is essentially a time-domain approach, while conventional control theory is a complex-frequency-domain approach. Before we proceed further, we must define state, state variables, state vector, and state space.

State:

The state of a dynamic system is the smallest set of variables (called state variables) such that the knowledge of these variables at t = t0, together with the knowledge of the input for t ≥ t0, completely determines the behavior of the system for any time t ≥ t0.

Note that the concept of state is by no means limited to physical systems. It is applicable to biological systems, economic systems, social systems, and others.

State variables:

The state variables of a dynamic system are the variables making up the smallest set of variables that determine the state of the dynamic system. If at least n variables x1, x2, ..., xn are needed to completely describe the behavior of a dynamic system (so that once the input is given for t ≥ t0 and the initial state at t = t0 is specified, the future state of the system is completely determined), then such n variables are a set of state variables.

Note that state variables need not be physically measurable or observable quantities. Variables that do not represent physical quantities and those that are neither measurable nor observable can be chosen as state variables. Such freedom in choosing state variables is an advantage of the state-space methods. Practically, however, it is convenient to choose easily measurable quantities for the state variables, if this is possible at all, because optimal control laws will require the feedback of all state variables with suitable weighting.
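As a brief illustration (the particular system here is chosen for illustration and is not part of the definitions above), consider a mass-spring-damper described by m d²y/dt² + b dy/dt + k y = u(t), where u(t) is the applied force. Choosing the position and the velocity as state variables, x1 = y and x2 = dy/dt, converts this single second-order equation into two first-order state equations:

    dx1/dt = x2
    dx2/dt = (1/m)(u - b x2 - k x1)

Knowing x1(t0) and x2(t0), together with u(t) for t ≥ t0, completely determines the motion for t ≥ t0, so x1 and x2 form a set of state variables for this system.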

State vector:

If n state variables are needed to completely describe the behavior of a given system, then these n state variables can be considered the n components of a vector x. Such a vector is called a state vector. A state vector is thus a vector that determines uniquely the system state x(t) for any time t ≥ t0, once the state at t = t0 is given and the input u(t) for t ≥ t0 is specified.
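For the linear time-invariant case, the state vector x and the input u are commonly collected into the standard state-space equations (this standard form is given here for reference; it is not part of the definition above):

    dx/dt = A x + B u
    y = C x + D u

where x is the n-dimensional state vector, u the input vector, y the output vector, A the n × n state matrix, B the input matrix, C the output matrix, and D the direct transmission matrix.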

State space:

The n-dimensional space whose coordinate axes consist of the x1 axis, x2 axis, ..., xn axis is called a state space. Any state can be represented by a point in the state space.
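As a minimal numerical sketch (Python with NumPy is assumed here, and the system and its numbers are illustrative rather than taken from the text), the following traces how the state point (x1, x2) of the mass-spring-damper example moves through its two-dimensional state space:

    import numpy as np

    # Illustrative parameters for m*y'' + b*y' + k*y = u (assumed values)
    m, b, k = 1.0, 0.5, 2.0

    def f(x, u):
        # State derivative dx/dt for x = [x1, x2] = [position, velocity]
        x1, x2 = x
        return np.array([x2, (u - b * x2 - k * x1) / m])

    x = np.array([1.0, 0.0])   # initial state at t = t0: x1(t0) = 1, x2(t0) = 0
    u = 0.0                    # input u(t) for t >= t0 (unforced here)
    dt, steps = 0.01, 500

    trajectory = [x.copy()]
    for _ in range(steps):
        x = x + dt * f(x, u)   # forward-Euler step; each new x is a point in state space
        trajectory.append(x.copy())

    trajectory = np.array(trajectory)
    print(trajectory[::100])   # a few points (x1, x2) along the path in state space

Each printed row is one state, that is, one point in the two-dimensional state space; the sequence of rows is the trajectory the system traces from its initial state. Forward-Euler integration is used only to keep the sketch self-contained.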