# Change of variables

*Mathematical technique for simplification*


In mathematics, a **change of variables** is a basic technique used to simplify problems in which the original variables are replaced with functions of other variables. The intent is that when expressed in new variables, the problem may become simpler, or equivalent to a better understood problem.

Change of variables is an operation that is related to substitution. However, these are distinct operations, as can be seen when considering differentiation (chain rule) or integration (integration by substitution).

A very simple example of a useful variable change can be seen in the problem of finding the roots of the sixth-degree polynomial:

- [math]\displaystyle{ x^6 - 9 x^3 + 8 = 0. }[/math]

Sixth-degree polynomial equations are generally impossible to solve in terms of radicals (see Abel–Ruffini theorem). This particular equation, however, may be written

- [math]\displaystyle{ (x^3)^2-9(x^3)+8=0 }[/math]

(this is a simple case of a polynomial decomposition). Thus the equation may be simplified by defining a new variable [math]\displaystyle{ u = x^3 }[/math]. Substituting [math]\displaystyle{ \sqrt[3]{u} }[/math] for *x* in the polynomial gives

- [math]\displaystyle{ u^2 - 9 u + 8 = 0 , }[/math]

which is just a quadratic equation with the two solutions:

- [math]\displaystyle{ u = 1 \quad \text{and} \quad u = 8. }[/math]

The solutions in terms of the original variable are obtained by substituting *x*^{3} back in for *u*, which gives

- [math]\displaystyle{ x^3 = 1 \quad \text{and} \quad x^3 = 8. }[/math]

Then, assuming that one is interested only in real solutions, the solutions of the original equation are

- [math]\displaystyle{ x = (1)^{1/3} = 1 \quad \text{and} \quad x = (8)^{1/3} = 2. }[/math]
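The computation above can be checked with a short Python sketch (names and tolerances are illustrative): it solves the quadratic in [math]\displaystyle{ u }[/math] with the quadratic formula and then undoes the substitution by taking real cube roots.

```python
import math

# Solve x^6 - 9x^3 + 8 = 0 via the substitution u = x^3,
# which reduces it to the quadratic u^2 - 9u + 8 = 0.
a, b, c = 1.0, -9.0, 8.0
disc = math.sqrt(b * b - 4 * a * c)
u_roots = [(-b - disc) / (2 * a), (-b + disc) / (2 * a)]  # u = 1 and u = 8

# Undo the substitution: each positive u gives the real root x = u^(1/3).
x_roots = sorted(u ** (1 / 3) for u in u_roots)  # approximately [1.0, 2.0]
```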

## Simple example

Consider the system of equations

- [math]\displaystyle{ xy+x+y=71 }[/math]
- [math]\displaystyle{ x^2y+xy^2=880 }[/math]

where [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] are positive integers with [math]\displaystyle{ x \gt y }[/math]. (Source: 1991 AIME)

Solving this system directly is not especially difficult, but it can be tedious. However, we can rewrite the second equation as [math]\displaystyle{ xy(x+y)=880 }[/math]. Making the substitutions [math]\displaystyle{ s=x+y }[/math] and [math]\displaystyle{ t=xy }[/math] reduces the system to [math]\displaystyle{ s+t=71, st=880 }[/math], whose solutions are [math]\displaystyle{ (s,t)=(16,55) }[/math] and [math]\displaystyle{ (s,t)=(55,16) }[/math]. Back-substituting the first pair gives [math]\displaystyle{ x+y=16, xy=55, x \gt y }[/math], which yields the solution [math]\displaystyle{ (x,y)=(11,5). }[/math] Back-substituting the second pair gives [math]\displaystyle{ x+y=55, xy=16, x \gt y }[/math], which has no positive-integer solutions. Hence the solution of the system is [math]\displaystyle{ (x,y)=(11,5) }[/math].
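Because the unknowns are positive integers, the substitution argument is easy to verify by exhaustive search; a minimal Python sketch (bounds chosen for this specific problem):

```python
# Solve xy + x + y = 71 and x^2 y + x y^2 = 880 in positive integers with
# x > y, using the substitution s = x + y, t = xy, which reduces the system
# to s + t = 71, s * t = 880.
solutions = []
for s in range(1, 71):
    t = 71 - s
    if s * t != 880:
        continue
    # Recover x and y as the integer roots of z^2 - s*z + t = 0, if any.
    for x in range(1, s):
        y = s - x
        if x > y and x * y == t:
            solutions.append((x, y))
print(solutions)  # [(11, 5)]
```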

## Formal introduction

Let [math]\displaystyle{ A }[/math], [math]\displaystyle{ B }[/math] be smooth manifolds and let [math]\displaystyle{ \Phi: A \rightarrow B }[/math] be a [math]\displaystyle{ C^r }[/math]-diffeomorphism between them, that is: [math]\displaystyle{ \Phi }[/math] is an [math]\displaystyle{ r }[/math]-times continuously differentiable, bijective map from [math]\displaystyle{ A }[/math] to [math]\displaystyle{ B }[/math] with an [math]\displaystyle{ r }[/math]-times continuously differentiable inverse from [math]\displaystyle{ B }[/math] to [math]\displaystyle{ A }[/math]. Here [math]\displaystyle{ r }[/math] may be any natural number (or zero), [math]\displaystyle{ \infty }[/math] (smooth) or [math]\displaystyle{ \omega }[/math] (analytic).

The map [math]\displaystyle{ \Phi }[/math] is called a *regular coordinate transformation* or *regular variable substitution*, where *regular* refers to the [math]\displaystyle{ C^r }[/math]-ness of [math]\displaystyle{ \Phi }[/math]. Usually one writes [math]\displaystyle{ x = \Phi(y) }[/math] to indicate that the variable [math]\displaystyle{ x }[/math] is replaced by the variable [math]\displaystyle{ y }[/math], substituting the value [math]\displaystyle{ \Phi(y) }[/math] for every occurrence of [math]\displaystyle{ x }[/math].

## Other examples

### Coordinate transformation

Some systems can be more easily solved when switching to polar coordinates. Consider for example the equation

- [math]\displaystyle{ U(x, y) := (x^2 + y^2) \sqrt{ 1 - \frac{x^2}{x^2 + y^2} } = 0. }[/math]

This may be a potential energy function for some physical problem. If one does not immediately see a solution, one might try the substitution

- [math]\displaystyle{ \displaystyle (x, y) = \Phi(r, \theta) }[/math] given by [math]\displaystyle{ \displaystyle \Phi(r,\theta) = (r \cos(\theta), r \sin(\theta)). }[/math]

Note that if [math]\displaystyle{ \theta }[/math] runs over an interval longer than [math]\displaystyle{ 2\pi }[/math], the map [math]\displaystyle{ \Phi }[/math] is no longer bijective. Therefore, [math]\displaystyle{ \Phi }[/math] should be limited to, for example, [math]\displaystyle{ (0, \infty) \times [0, 2\pi) }[/math]. Notice how [math]\displaystyle{ r = 0 }[/math] is excluded, for [math]\displaystyle{ \Phi }[/math] is not bijective at the origin ([math]\displaystyle{ \theta }[/math] can take any value there, and the point will still be mapped to (0, 0)). Then, replacing all occurrences of the original variables by the new expressions prescribed by [math]\displaystyle{ \Phi }[/math] and using the identity [math]\displaystyle{ \sin^2 x + \cos^2 x = 1 }[/math], we get

- [math]\displaystyle{ V(r, \theta) = r^2 \sqrt{ 1 - \frac{r^2 \cos^2 \theta}{r^2} } = r^2 \sqrt{1 - \cos^2 \theta} = r^2\left|\sin\theta\right|. }[/math]

Now the solutions can be readily found: [math]\displaystyle{ \sin(\theta) = 0 }[/math], so [math]\displaystyle{ \theta = 0 }[/math] or [math]\displaystyle{ \theta = \pi }[/math]. Applying the inverse of [math]\displaystyle{ \Phi }[/math] shows that this is equivalent to [math]\displaystyle{ y = 0 }[/math] while [math]\displaystyle{ x \not= 0 }[/math]. Indeed, we see that for [math]\displaystyle{ y = 0 }[/math] the function vanishes, except for the origin.

Note that, had we allowed [math]\displaystyle{ r = 0 }[/math], the origin would also have been a solution, though it is not a solution to the original problem. Here the bijectivity of [math]\displaystyle{ \Phi }[/math] is crucial. The function is always non-negative (for [math]\displaystyle{ x,y\in\reals }[/math]), hence the absolute value.
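The identity derived above is easy to spot-check numerically; a small Python sketch (sample points are arbitrary):

```python
import math

# Check that U(x, y) = (x^2 + y^2) * sqrt(1 - x^2 / (x^2 + y^2)) equals
# V(r, theta) = r^2 * |sin(theta)| under x = r cos(theta), y = r sin(theta).
def U(x, y):
    r2 = x * x + y * y
    return r2 * math.sqrt(1 - x * x / r2)

def V(r, theta):
    return r * r * abs(math.sin(theta))

for r, theta in [(1.0, 0.3), (2.5, 2.0), (0.7, 4.0)]:
    x, y = r * math.cos(theta), r * math.sin(theta)
    assert math.isclose(U(x, y), V(r, theta), rel_tol=1e-9)

# The zero set is theta = 0 or pi with r > 0: the x-axis minus the origin.
print(U(3.0, 0.0))  # 0.0
```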

### Differentiation

The chain rule is used to simplify complicated differentiation. For example, consider the problem of calculating the derivative

- [math]\displaystyle{ \frac{d}{dx}\sin(x^2). }[/math]

Let [math]\displaystyle{ y = \sin u }[/math] with [math]\displaystyle{ u = x^2. }[/math] Then:

- [math]\displaystyle{ \begin{align} \frac{d}{dx}\sin(x^2) &= \frac{dy}{dx} \\[6pt] &= \frac{dy}{du} \frac{du}{dx} && \text{This part is the chain rule.} \\[6pt] &= \left( \frac d {du} \sin u \right) \left( \frac{d}{dx} x^2 \right) \\[6pt] &= (\cos u) (2x) \\ &= \left (\cos(x^2) \right) (2x) \\ &= 2x\cos(x^2) \end{align} }[/math]
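The result can be cross-checked against a central finite difference; a brief Python sketch (step size and tolerance are illustrative):

```python
import math

# The chain rule above gives d/dx sin(x^2) = 2x cos(x^2); compare it with a
# central finite-difference approximation of the derivative.
def f(x):
    return math.sin(x * x)

def df(x):
    return 2 * x * math.cos(x * x)

h = 1e-6
for x in [0.5, 1.3, 2.0]:
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - df(x)) < 1e-6
```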

### Integration

Difficult integrals may often be evaluated by changing variables; this is enabled by the substitution rule and is analogous to the use of the chain rule above. Difficult integrals may also be solved by simplifying the integral using a change of variables given by the corresponding Jacobian matrix and determinant.^{[1]} Using the Jacobian determinant and the corresponding change of variable that it gives is the basis of coordinate systems such as polar, cylindrical, and spherical coordinate systems.

#### Change of variables formula in terms of Lebesgue measure

The following theorem^{[2]} allows us to relate integrals with respect to Lebesgue measure to an equivalent integral with respect to the pullback measure under a parameterization G. The proof relies on approximation arguments involving Jordan content.

Suppose that [math]\displaystyle{ \Omega }[/math] is an open subset of [math]\displaystyle{ \mathbb{R}^n }[/math] and [math]\displaystyle{ G:\Omega \to \mathbb{R}^n }[/math] is a [math]\displaystyle{ C^1 }[/math] diffeomorphism.

- If [math]\displaystyle{ f }[/math] is a Lebesgue measurable function on [math]\displaystyle{ G(\Omega) }[/math], then [math]\displaystyle{ f \circ G }[/math] is Lebesgue measurable on [math]\displaystyle{ \Omega }[/math]. If [math]\displaystyle{ f \geq 0 }[/math] or [math]\displaystyle{ f\in L^1(G(\Omega),m), }[/math] then [math]\displaystyle{ \int_{G(\Omega)} f(x)\, dx = \int_\Omega (f\circ G)(x)\,|\det D_xG|\,dx }[/math].
- If [math]\displaystyle{ E\subset \Omega }[/math] is Lebesgue measurable, then [math]\displaystyle{ G(E) }[/math] is Lebesgue measurable, and [math]\displaystyle{ m(G(E)) = \int_E |\det D_xG|\, dx }[/math].
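For a linear map the Jacobian determinant is constant, so the second statement reduces to the familiar fact that a linear map scales volume by the absolute value of its determinant. A minimal Python sketch with an illustrative map [math]\displaystyle{ G(x,y) = (2x + y, x + 3y) }[/math]:

```python
# Check m(G(E)) = ∫_E |det D_x G| dx for the linear map G(x, y) = (2x + y, x + 3y)
# on the unit square E = [0, 1]^2.  D_x G is constant with det = 2*3 - 1*1 = 5,
# so the integral is 5 * area(E) = 5.
def G(x, y):
    return (2 * x + y, x + 3 * y)

# The image of the unit square is the parallelogram with these vertices (in order).
verts = [G(0, 0), G(1, 0), G(1, 1), G(0, 1)]

# Shoelace formula for the area of the image polygon.
area = abs(sum(x0 * y1 - x1 * y0
               for (x0, y0), (x1, y1) in zip(verts, verts[1:] + verts[:1]))) / 2
print(area)  # 5.0
```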

As a corollary of this theorem, we may compute the Radon-Nikodym derivatives of both the pullback and pushforward measures of [math]\displaystyle{ m }[/math] under a [math]\displaystyle{ C^1 }[/math] diffeomorphism [math]\displaystyle{ T }[/math].

##### Pullback measure and transformation formula

The pullback measure of [math]\displaystyle{ \mu }[/math] under a transformation [math]\displaystyle{ T }[/math] is defined as [math]\displaystyle{ (T^*\mu)(A) := \mu(T(A)) }[/math] for measurable sets [math]\displaystyle{ A }[/math]. The change of variables formula for pullback measures is

[math]\displaystyle{ \int_{T(\Omega)}g d\mu = \int_\Omega g \circ T dT^* \mu }[/math].

##### Pushforward measure and transformation formula

The pushforward measure of [math]\displaystyle{ \mu }[/math] under a transformation [math]\displaystyle{ T }[/math] is defined as [math]\displaystyle{ (T_*\mu)(A) := \mu(T^{-1}(A)) }[/math] for measurable sets [math]\displaystyle{ A }[/math]. The change of variables formula for pushforward measures is

[math]\displaystyle{ \int_{\Omega }g\circ T d\mu = \int_{T(\Omega)} g dT_* \mu }[/math].

As a corollary of the change of variables formula for Lebesgue measure, we have that

- Radon-Nikodym derivative of the pullback with respect to Lebesgue measure: [math]\displaystyle{ \frac{dT^*m}{dm}(x) = |\det D_xT| }[/math]
- Radon-Nikodym derivative of the pushforward with respect to Lebesgue measure: [math]\displaystyle{ \frac{dT_*m}{dm}(x) = |\det D_xT^{-1}| }[/math]

From these we may obtain

- The change of variables formula for pullback measure: [math]\displaystyle{ \int_{T(\Omega)}g \, d\mu = \int_\Omega g \circ T \, dT^* \mu=\int_\Omega g \circ T \,|\det D_xT|\,dm(x) }[/math]
- The change of variables formula for pushforward measure: [math]\displaystyle{ \int_{\Omega }g \, d\mu = \int_{T(\Omega)} g \circ T^{-1} \, dT_* \mu= \int_{T(\Omega)} g \circ T^{-1}\,|\det D_xT^{-1}|\,dm(x) }[/math]
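A one-dimensional sanity check of the pullback formula, taking [math]\displaystyle{ T(x) = x^2 }[/math] on [math]\displaystyle{ \Omega = (0,1) }[/math] and approximating both sides with midpoint quadrature (quadrature size and tolerance are illustrative):

```python
import math

# Check ∫_{T(Ω)} g dm = ∫_Ω (g ∘ T) |det D_x T| dm in one dimension,
# with T(x) = x^2 on Ω = (0, 1), so T(Ω) = (0, 1) and |det D_x T| = 2x.
def midpoint(f, a, b, n=20_000):
    """Composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

g = math.cos
lhs = midpoint(g, 0.0, 1.0)                           # ∫_{T(Ω)} g dm
rhs = midpoint(lambda x: g(x * x) * 2 * x, 0.0, 1.0)  # ∫_Ω (g∘T) |det D_x T| dm
assert abs(lhs - rhs) < 1e-7   # both approximate the same integral, sin(1)
```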

### Differential equations

Variable changes for differentiation and integration are taught in elementary calculus and the steps are rarely carried out in full.

The very broad use of variable changes is apparent when considering differential equations, where the independent variables may be changed using the chain rule, or the dependent variables may be changed, resulting in some differentiation that must be carried out. Exotic changes, such as the mingling of dependent and independent variables in point and contact transformations, can be very complicated but allow much freedom.

Very often, a general form for a change is substituted into a problem and parameters picked along the way to best simplify the problem.

### Scaling and shifting

Probably the simplest change is the scaling and shifting of variables, that is, replacing them with new variables that are "stretched" and "moved" by constant amounts. For an *n*^{th}-order derivative, the change simply results in

- [math]\displaystyle{ \frac{d^n y}{d x^n} = \frac{y_\text{scale}}{x_\text{scale}^n} \frac{d^n \hat y}{d \hat x^n} }[/math]

where

- [math]\displaystyle{ x = \hat x x_\text{scale} + x_\text{shift} }[/math]

- [math]\displaystyle{ y = \hat y y_\text{scale} + y_\text{shift}. }[/math]

This may be shown readily through the chain rule and linearity of differentiation. This change is very common in practical applications to get physical parameters out of problems, for example, the boundary value problem

- [math]\displaystyle{ \mu \frac{d^2 u}{d y^2} = \frac{d p}{d x} \quad ; \quad u(0) = u(L) = 0 }[/math]

describes parallel fluid flow between flat solid walls separated by a distance *L*; *μ* is the viscosity and [math]\displaystyle{ d p/d x }[/math] the pressure gradient, both constants. By scaling the variables the problem becomes

- [math]\displaystyle{ \frac{d^2 \hat u}{d \hat y^2} = 1 \quad ; \quad \hat u(0) = \hat u(1) = 0 }[/math]

where

- [math]\displaystyle{ y = \hat y L \qquad \text{and} \qquad u = \hat u \frac{L^2}{\mu} \frac{d p}{d x}. }[/math]

Scaling is useful for many reasons. It simplifies analysis both by reducing the number of parameters and by simply making the problem neater. Proper scaling may *normalize* variables, that is, make them have a sensible unitless range such as 0 to 1. Finally, if a problem requires numerical solution, the fewer the parameters, the fewer the computations.
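The scaling above can be verified directly, since both boundary value problems have closed-form solutions; a Python sketch with hypothetical parameter values:

```python
# Verify the scaling y = ŷ L, u = û (L^2/μ) dp/dx for the channel-flow problem.
# The scaled problem û'' = 1, û(0) = û(1) = 0 has solution û(ŷ) = (ŷ^2 - ŷ)/2,
# and the dimensional problem μ u'' = dp/dx, u(0) = u(L) = 0 has solution
# u(y) = (dp/dx)(y^2 - L y) / (2 μ).
mu, L, dpdx = 1.8e-5, 0.02, -5.0   # example values (hypothetical)

def u_dimensional(y):
    return dpdx * (y * y - L * y) / (2 * mu)

def u_hat(yh):
    return (yh * yh - yh) / 2

# The dimensional solution is the scaled solution stretched back out.
for yh in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert abs(u_dimensional(yh * L) - u_hat(yh) * (L * L / mu) * dpdx) < 1e-9
```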

### Momentum vs. velocity

Consider a system of equations

- [math]\displaystyle{ \begin{align} m \dot v & = - \frac{ \partial H }{ \partial x } \\[5pt] m \dot x & = \frac{ \partial H }{ \partial v } \end{align} }[/math]

for a given function [math]\displaystyle{ H(x, v) }[/math]. The mass can be eliminated by the (trivial) substitution [math]\displaystyle{ \Phi(p) = 1/m \cdot p }[/math]. Clearly this is a bijective map from [math]\displaystyle{ \mathbb{R} }[/math] to [math]\displaystyle{ \mathbb{R} }[/math]. Under the substitution [math]\displaystyle{ v = \Phi(p) }[/math] the system becomes

- [math]\displaystyle{ \begin{align} \dot p & = - \frac{ \partial H }{ \partial x } \\[5pt] \dot x & = \frac{ \partial H }{ \partial p } \end{align} }[/math]
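The equivalence of the two forms can be checked numerically. A sketch, assuming the illustrative Hamiltonian [math]\displaystyle{ H = \tfrac{1}{2} m v^2 + \tfrac{1}{2} k x^2 }[/math] (so [math]\displaystyle{ H = p^2/2m + \tfrac{1}{2} k x^2 }[/math] in the momentum variable) and integrating both systems with identical explicit Euler steps:

```python
# Velocity form:  m v' = -∂H/∂x = -k x,   m x' = ∂H/∂v = m v
# Momentum form:  p' = -∂H/∂x = -k x,     x' = ∂H/∂p = p / m   (p = m v)
m, k = 2.0, 3.0            # example constants (hypothetical)
dt, steps = 1e-4, 10_000   # integrate to t = 1 with explicit Euler

x1, v = 1.0, 0.0           # velocity-form state
x2, p = 1.0, 0.0           # momentum-form state (p = m * v initially)
for _ in range(steps):
    x1, v = x1 + dt * v, v + dt * (-k * x1) / m
    x2, p = x2 + dt * (p / m), p + dt * (-k * x2)
# The two trajectories coincide: the substitution p = m v is exact.
```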

### Lagrangian mechanics

Given a force field [math]\displaystyle{ \varphi(t, x, v) }[/math], Newton's equations of motion are

- [math]\displaystyle{ m \ddot x = \varphi(t, x, v). }[/math]

Lagrange examined how these equations of motion change under an arbitrary substitution of variables [math]\displaystyle{ x = \Psi(t, y) }[/math], [math]\displaystyle{ v = \frac{\partial \Psi(t, y)}{\partial t} + \frac{\partial\Psi(t, y)}{\partial y} \cdot w. }[/math]

He found that the equations

- [math]\displaystyle{ \frac{ \partial{L} }{ \partial y} = \frac{\mathrm{d}}{\mathrm{d}t} \frac{\partial{L}}{\partial{w}} }[/math]

are equivalent to Newton's equations for the function [math]\displaystyle{ L = T - V }[/math],
where *T* is the kinetic, and *V* the potential energy.

In fact, when the substitution is chosen well (exploiting for example symmetries and constraints of the system) these equations are much easier to solve than Newton's equations in Cartesian coordinates.

## See also

- Change of variables (PDE)
- Change of variables for probability densities
- Substitution property of equality
- Universal instantiation

## References

- ↑ Kaplan, Wilfred (1973). "Change of Variables in Integrals". *Advanced Calculus* (2nd ed.). Reading: Addison-Wesley. pp. 269–275.
- ↑ Folland, G. B. (1999). *Real Analysis: Modern Techniques and Their Applications* (2nd ed.). New York: Wiley. pp. 74–75. ISBN 0-471-31716-0. OCLC 39849337. https://www.worldcat.org/oclc/39849337.

Original source: https://en.wikipedia.org/wiki/Change_of_variables