State variable

A state variable is one of the set of variables that are used to describe the mathematical "state" of a dynamical system. Intuitively, the state of a system describes enough about the system to determine its future behaviour in the absence of any external forces affecting the system. Models that consist of coupled first-order differential equations are said to be in state-variable form.[1]
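For example, a mass-spring-damper with mass m, damping coefficient c and stiffness k, driven by a force u(t), obeys the second-order equation

[math]\displaystyle{ m\ddot{q}(t) + c\dot{q}(t) + kq(t) = u(t). }[/math]

Choosing the position [math]\displaystyle{ x_1 = q }[/math] and the velocity [math]\displaystyle{ x_2 = \dot{q} }[/math] as state variables puts the model into coupled first-order (state-variable) form:

[math]\displaystyle{ \dot{x}_1 = x_2, \qquad \dot{x}_2 = -\tfrac{k}{m}x_1 - \tfrac{c}{m}x_2 + \tfrac{1}{m}u(t). }[/math]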

Examples

  • In mechanical systems, the position coordinates and velocities of mechanical parts are typical state variables; knowing these, it is possible to determine the future state of the objects in the system.
  • In thermodynamics, a state variable is an independent variable of a state function. Examples include internal energy, enthalpy, temperature, pressure, volume and entropy. Heat and work are not state functions, but process functions.
  • In electronic/electrical circuits, the voltages of the nodes and the currents through components in the circuit are usually the state variables. In any electrical circuit, the number of state variables is equal to the number of (independent) storage elements, which are inductors and capacitors. The state variable for an inductor is the current through the inductor, while that for a capacitor is the voltage across the capacitor.
  • In ecosystem models, population sizes (or concentrations) of plants, animals and resources (nutrients, organic material) are typical state variables.

Control systems engineering

In control engineering and other areas of science and engineering, state variables are used to represent the states of a general system. The set of possible combinations of state variable values is called the state space of the system. The equations relating the current state of a system to its most recent input and past states are called the state equations, and the equations expressing the values of the output variables in terms of the state variables and inputs are called the output equations. As shown below, the state equations and output equations for a linear time-invariant system can be expressed using the coefficient matrices A, B, C, and D:

[math]\displaystyle{ A \in \R^{N \times N}, \quad B \in \R^{N \times L}, \quad C \in \R^{M \times N}, \quad D \in \R^{M \times L} , }[/math]

where N, L and M are the dimensions of the vectors describing the state, input and output, respectively.
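For the mass-spring-damper example above, taking the position as the single output gives N = 2 and L = M = 1, with

[math]\displaystyle{ A = \begin{bmatrix} 0 & 1 \\ -\tfrac{k}{m} & -\tfrac{c}{m} \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ \tfrac{1}{m} \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad D = \begin{bmatrix} 0 \end{bmatrix}. }[/math]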

Discrete-time systems

The state vector (vector of state variables) representing the current state of a discrete-time system (i.e. digital system) is [math]\displaystyle{ x[n] }[/math], where n is the discrete point in time at which the system is being evaluated. The discrete-time state equations are

[math]\displaystyle{ x[n+1] = Ax[n] + Bu[n], }[/math]

which gives the next state x[n+1] of the system in terms of its current state x[n] and inputs u[n]. The output equations are

[math]\displaystyle{ y[n] = Cx[n] + Du[n], }[/math]

which gives the output y[n] in terms of the current state x[n] and inputs u[n] to the system.
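As an illustration, the following sketch steps these two equations forward for a made-up two-state, single-input, single-output system (the matrix values are arbitrary and NumPy is assumed):

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical system with N = 2 state variables, L = 1 input, M = 1 output.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

x = np.zeros((2, 1))      # initial state x[0]
u = np.ones((1, 1))       # constant unit input u[n]

for n in range(5):
    y = C @ x + D @ u     # output equation: y[n] = C x[n] + D u[n]
    x = A @ x + B @ u     # state equation:  x[n+1] = A x[n] + B u[n]
    print(f"n = {n}, y[n] = {y.item():.4f}")
</syntaxhighlight>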

Continuous-time systems

The state vector representing the current state of a continuous-time system (i.e. analog system) is [math]\displaystyle{ x(t) }[/math], and the continuous-time state equations giving the evolution of the state vector are

[math]\displaystyle{ \frac{dx(t)}{dt} = Ax(t) + Bu(t), }[/math]

which describes the rate of change [math]\displaystyle{ \frac{dx(t)}{dt} }[/math] of the state in terms of the current state x(t) and inputs u(t) of the system. The output equations are

[math]\displaystyle{ y(t) = Cx(t) + Du(t), }[/math]

which gives the output y(t) in terms of the current state x(t) and inputs u(t) to the system.
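A minimal simulation sketch, again for a made-up two-state, single-input, single-output system (NumPy and SciPy assumed), integrates the state equation numerically and then evaluates the output equation along the resulting trajectory:

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical system matrices (N = 2, L = 1, M = 1); values are arbitrary.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def u(t):
    return np.array([[1.0]])            # constant unit (step) input

def state_equation(t, x_flat):
    x = x_flat.reshape(-1, 1)
    return (A @ x + B @ u(t)).ravel()   # dx/dt = A x(t) + B u(t)

sol = solve_ivp(state_equation, (0.0, 10.0), [0.0, 0.0],
                t_eval=np.linspace(0.0, 10.0, 101))

# Output equation evaluated along the solution: y(t) = C x(t) + D u(t).
y = np.array([(C @ sol.y[:, k].reshape(-1, 1) + D @ u(t)).item()
              for k, t in enumerate(sol.t)])
print(y[-1])                            # output near steady state
</syntaxhighlight>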

References

  1. Palm, William J., III (2009). System Dynamics (2nd ed.). McGraw-Hill. p. 420. ISBN 978-0-07-126779-3.