# Power rule

In calculus, the power rule is used to differentiate functions of the form $\displaystyle{ f(x) = x^r }$, whenever $\displaystyle{ r }$ is a real number. Since differentiation is a linear operation on the space of differentiable functions, polynomials can also be differentiated using this rule. The power rule underlies the Taylor series, as it relates a power series to a function's derivatives.

## Statement of the power rule

Let $\displaystyle{ f }$ be a function satisfying $\displaystyle{ f(x)=x^r }$ for all $\displaystyle{ x }$, where $\displaystyle{ r \in \mathbb{R} }$.[note 1] Then,

$\displaystyle{ f'(x) = rx^{r-1} \, . }$

The power rule for integration states that

$\displaystyle{ \int\! x^r \, dx=\frac{x^{r+1}}{r+1}+C }$

for any real number $\displaystyle{ r \neq -1 }$, where C is an arbitrary constant of integration. This rule can be derived by inverting the power rule for differentiation.
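Both statements can be sanity-checked numerically by comparing the closed-form derivative against a central difference quotient. This is only a sketch; the exponent and evaluation point below are arbitrary choices.

```python
def derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

r = 2.5            # arbitrary real exponent
x0 = 3.0           # arbitrary point with x0 > 0

# Differentiation: d/dx x**r = r * x**(r - 1)
f = lambda x: x ** r
power_rule = r * x0 ** (r - 1)
assert abs(derivative(f, x0) - power_rule) < 1e-4

# Integration counterpart: d/dx [x**(r+1) / (r+1)] recovers x**r
F = lambda x: x ** (r + 1) / (r + 1)
assert abs(derivative(F, x0) - f(x0)) < 1e-4
```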

## Proofs

### Proof for real exponents

To start, we should choose a working definition of the value of $\displaystyle{ f(x) = x^r }$, where $\displaystyle{ r }$ is any real number. Although it is feasible to define the value as the limit of a sequence of rational powers that approach the irrational power whenever we encounter such a power, or as the least upper bound of a set of rational powers less than the given power, this type of definition is not amenable to differentiation. It is therefore preferable to use a functional definition, which is usually taken to be $\displaystyle{ x^r = \exp(r\ln x) }$ for all values of $\displaystyle{ x \gt 0 }$, where $\displaystyle{ \exp(x) = e^x }$ is the natural exponential function and $\displaystyle{ e }$ is Euler's number.[1][2] First, we may demonstrate that the derivative of $\displaystyle{ f(x) = e^x }$ is $\displaystyle{ f'(x) = e^x }$.

If $\displaystyle{ f(x) = e^x }$, then $\displaystyle{ \ln (f(x)) = x }$, where $\displaystyle{ \ln }$ is the natural logarithm function, the inverse function of the exponential function, as demonstrated by Euler.[3] Since the latter two functions are equal for all values of $\displaystyle{ x \gt 0 }$, their derivatives are also equal, whenever either derivative exists, so we have, by the chain rule, $\displaystyle{ \frac{1}{f(x)}\cdot f'(x) = 1 }$ or $\displaystyle{ f'(x) = f(x) = e^x }$, as was required. Therefore, applying the chain rule to $\displaystyle{ f(x) = e^{r\ln x} }$, we see that $\displaystyle{ f'(x)=\frac{r}{x} e^{r\ln x}= \frac{r}{x}x^r }$ which simplifies to $\displaystyle{ rx^{r-1} }$.
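The functional definition and the chain-rule result above can be checked numerically. This sketch uses an irrational exponent and a positive point, both arbitrary choices.

```python
import math

def derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

r = math.sqrt(2)   # an irrational exponent
x0 = 1.7           # arbitrary point with x0 > 0

# Functional definition: x**r := exp(r * ln x) for x > 0
f = lambda x: math.exp(r * math.log(x))

# Chain-rule result: f'(x) = (r / x) * exp(r * ln x)
predicted = (r / x0) * math.exp(r * math.log(x0))
assert abs(derivative(f, x0) - predicted) < 1e-6

# ...which agrees with the simplified form r * x**(r - 1)
assert abs(predicted - r * x0 ** (r - 1)) < 1e-9
```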

When $\displaystyle{ x \lt 0 }$, we may use the same definition with $\displaystyle{ x^r = ((-1)(-x))^r = (-1)^r(-x)^r }$, where we now have $\displaystyle{ -x \gt 0 }$. This necessarily leads to the same result. Note that because $\displaystyle{ (-1)^r }$ does not have a conventional definition when $\displaystyle{ r }$ is not a rational number, irrational power functions are not well defined for negative bases. In addition, as rational powers of −1 with even denominators (in lowest terms) are not real numbers, these expressions are only real valued for rational powers with odd denominators (in lowest terms).

Finally, whenever the function is differentiable at $\displaystyle{ x = 0 }$, the defining limit for the derivative is: $\displaystyle{ \lim_{h\to 0} \frac{h^r - 0^r}{h} }$ which yields 0 only when $\displaystyle{ r }$ is a rational number with odd denominator (in lowest terms) and $\displaystyle{ r \gt 1 }$, and 1 when $\displaystyle{ r = 1 }$. For all other values of $\displaystyle{ r }$, the expression $\displaystyle{ h^r }$ is not well-defined for $\displaystyle{ h \lt 0 }$, as was covered above, or is not a real number, so the limit does not exist as a real-valued derivative. For the two cases that do exist, the values agree with the value of the existing power rule at 0, so no exception need be made.

The exclusion of the expression $\displaystyle{ 0^0 }$ (the case $\displaystyle{ x = 0 }$) from our scheme of exponentiation is due to the fact that the function $\displaystyle{ f(x, y) = x^y }$ has no limit at (0,0), since $\displaystyle{ x^0 }$ approaches 1 as x approaches 0, while $\displaystyle{ 0^y }$ approaches 0 as y approaches 0. Thus, it would be problematic to ascribe any particular value to it, as the value would contradict one of the two cases, dependent on the application. It is traditionally left undefined.

### Proofs for integer exponents

#### Proof by induction (natural numbers)

Let $\displaystyle{ n\in\N }$. It is required to prove that $\displaystyle{ \frac{d}{dx} x^n = nx^{n-1}. }$ The base case may be when $\displaystyle{ n=0 }$ or $\displaystyle{ n=1 }$, depending on how the set of natural numbers is defined.

When $\displaystyle{ n=0 }$, $\displaystyle{ \frac{d}{dx} x^0 = \frac{d}{dx} (1) = \lim_{h \to 0}\frac{1-1}{h} = \lim_{h \to 0}\frac{0}{h} = 0 = 0x^{0-1}. }$

When $\displaystyle{ n=1 }$, $\displaystyle{ \frac{d}{dx} x^1 = \lim_{h \to 0}\frac{(x+h)-x}{h} = \lim_{h \to 0}\frac{h}{h} = 1 = 1x^{1-1}. }$

Therefore, the base case holds either way.

Suppose the statement holds for some natural number k, i.e. $\displaystyle{ \frac{d}{dx}x^k = kx^{k-1}. }$

When $\displaystyle{ n=k+1 }$,

$\displaystyle{ \frac{d}{dx}x^{k+1} = \frac{d}{dx}(x^k \cdot x) = x^k \cdot \frac{d}{dx}x + x \cdot \frac{d}{dx}x^k = x^k + x \cdot kx^{k-1} = x^k + kx^k = (k+1)x^k = (k+1)x^{(k+1)-1} }$

By the principle of mathematical induction, the statement is true for all natural numbers n.
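The induction result can be spot-checked numerically for small natural exponents. A minimal sketch; the point $\displaystyle{ x_0 = 2 }$ is an arbitrary choice.

```python
def derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 2.0
# d/dx x**n = n * x**(n - 1) for n = 0, 1, ..., 5
errors = [abs(derivative(lambda x: x ** n, x0) - n * x0 ** (n - 1))
          for n in range(6)]
assert max(errors) < 1e-6
```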

#### Proof by binomial theorem (natural number)

Let $\displaystyle{ y=x^n }$, where $\displaystyle{ n\in \mathbb{N} }$.

Then,

$\displaystyle{ \begin{align} \frac{dy}{dx} &=\lim_{h\to 0}\frac{(x+h)^n-x^n}h\\[4pt] &=\lim_{h\to 0}\frac{1}{h} \left[x^n+\binom n1 x^{n-1}h+\binom n2 x^{n-2}h^2+\dots+\binom nn h^n-x^n \right]\\[4pt] &=\lim_{h\to 0}\left[\binom n 1 x^{n-1} + \binom n2 x^{n-2}h+ \dots+\binom nn h^{n-1}\right]\\[4pt] &=nx^{n-1} \end{align} }$
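The key step — only the $\displaystyle{ \binom n1 x^{n-1}h }$ term survives division by $\displaystyle{ h }$ in the limit — can be illustrated by expanding $\displaystyle{ (x+h)^n }$ with `math.comb`. The sample values below are arbitrary.

```python
import math

n, x, h = 5, 2.0, 1e-4   # arbitrary sample values; h small

# Binomial theorem: (x + h)**n = sum of C(n, k) * x**(n-k) * h**k
expansion = sum(math.comb(n, k) * x ** (n - k) * h ** k for k in range(n + 1))
assert abs((x + h) ** n - expansion) < 1e-9

# Difference quotient: the k = 0 term cancels; terms with k >= 2 vanish with h,
# leaving approximately C(n, 1) * x**(n - 1) = n * x**(n - 1)
quotient = ((x + h) ** n - x ** n) / h
assert abs(quotient - n * x ** (n - 1)) < 1e-2
```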

#### Generalization to negative integer exponents

For a negative integer n, let $\displaystyle{ n=-m }$ so that m is a positive integer. Using the reciprocal rule,

$\displaystyle{ \frac{d}{dx}x^n = \frac{d}{dx} \left(\frac{1}{x^m}\right) = \frac{-\frac{d}{dx}x^m}{(x^m)^2} = -\frac{mx^{m-1}}{x^{2m}} = -mx^{-m-1} = nx^{n-1}. }$

In conclusion, for any integer $\displaystyle{ n }$, $\displaystyle{ \frac{d}{dx}x^n = nx^{n-1}. }$
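A similar numerical spot check for negative integer exponents; the evaluation point is arbitrary but must avoid $\displaystyle{ x = 0 }$, where $\displaystyle{ x^n }$ is undefined.

```python
def derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 1.5   # arbitrary nonzero point
# d/dx x**n = n * x**(n - 1) for negative integers n
errors = [abs(derivative(lambda x: x ** n, x0) - n * x0 ** (n - 1))
          for n in (-1, -2, -3)]
assert max(errors) < 1e-6
```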

### Generalization to rational exponents

Upon proving that the power rule holds for integer exponents, the rule can be extended to rational exponents.

#### Proof by chain rule

This proof is composed of two steps that involve the use of the chain rule for differentiation.

1. Let $\displaystyle{ y=x^r=x^\frac1n }$, where $\displaystyle{ n\in\N^+ }$. Then $\displaystyle{ y^n=x }$. By the chain rule, $\displaystyle{ ny^{n-1}\cdot\frac{dy}{dx}=1 }$. Solving for $\displaystyle{ \frac{dy}{dx} }$, $\displaystyle{ \frac{dy}{dx} =\frac{1}{ny^{n-1}} =\frac{1}{n\left(x^\frac1n\right)^{n-1}} =\frac{1}{nx^{1-\frac1n}} =\frac{1}{n}x^{\frac1n-1} =rx^{r-1} }$ Thus, the power rule applies for rational exponents of the form $\displaystyle{ 1/n }$, where $\displaystyle{ n }$ is a nonzero natural number. This can be generalized to rational exponents of the form $\displaystyle{ p/q }$ by applying the power rule for integer exponents using the chain rule, as shown in the next step.
2. Let $\displaystyle{ y=x^r=x^{p/q} }$, where $\displaystyle{ p\in\Z, q\in\N^+, }$ so that $\displaystyle{ r\in\Q }$. By the chain rule, $\displaystyle{ \frac{dy}{dx} =\frac{d}{dx}\left(x^\frac1q\right)^p =p\left(x^\frac1q\right)^{p-1}\cdot\frac{1}{q}x^{\frac1q-1} =\frac{p}{q}x^{p/q-1}=rx^{r-1} }$

From the above results, we can conclude that when $\displaystyle{ r }$ is a rational number, $\displaystyle{ \frac{d}{dx} x^r=rx^{r-1}. }$
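This conclusion can be spot-checked at a positive point; for a positive base, Python's `**` agrees with the exp/log definition used earlier. The sample exponents are arbitrary.

```python
from fractions import Fraction

def derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 2.0   # arbitrary point with x0 > 0
for r in (Fraction(1, 2), Fraction(2, 3), Fraction(-3, 4)):
    rf = float(r)
    numeric = derivative(lambda x: x ** rf, x0)
    # d/dx x**(p/q) = (p/q) * x**(p/q - 1)
    assert abs(numeric - rf * x0 ** (rf - 1)) < 1e-6
```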

#### Proof by implicit differentiation

A more straightforward generalization of the power rule to rational exponents makes use of implicit differentiation.

Let $\displaystyle{ y=x^r=x^{p/q} }$, where $\displaystyle{ p, q \in \mathbb{Z} }$ with $\displaystyle{ q \neq 0 }$, so that $\displaystyle{ r \in \mathbb{Q} }$.

Then,

$\displaystyle{ y^q=x^p }$

Differentiating both sides of the equation with respect to $\displaystyle{ x }$,

$\displaystyle{ qy^{q-1}\cdot\frac{dy}{dx} = px^{p-1} }$

Solving for $\displaystyle{ \frac{dy}{dx} }$,

$\displaystyle{ \frac{dy}{dx} = \frac{px^{p-1}}{qy^{q-1}}. }$

Since $\displaystyle{ y=x^{p/q} }$,

$\displaystyle{ \frac d{dx}x^{p/q} = \frac{px^{p-1}}{qx^{p-p/q}}. }$

Applying laws of exponents,

$\displaystyle{ \frac d{dx}x^{p/q} = \frac{p}{q}x^{p-1}x^{-p+p/q} = \frac{p}{q}x^{p/q-1}. }$

Thus, letting $\displaystyle{ r=\frac{p}{q} }$, we can conclude that $\displaystyle{ \frac d{dx}x^r = rx^{r-1} }$ when $\displaystyle{ r }$ is a rational number.

## History

The power rule for integrals was first demonstrated in geometric form by the Italian mathematician Bonaventura Cavalieri in the early 17th century for all positive integer values of $\displaystyle{ n }$, and during the mid 17th century for all rational powers by the mathematicians Pierre de Fermat, Evangelista Torricelli, Gilles de Roberval, John Wallis, and Blaise Pascal, each working independently. At the time, these results were framed as treatises on determining the area between the graph of a rational power function and the horizontal axis. With hindsight, however, the result is considered the first general theorem of calculus to be discovered.[4] The power rule for differentiation was derived independently by Isaac Newton and Gottfried Wilhelm Leibniz in the mid 17th century for rational power functions; both then used it to derive the power rule for integrals as the inverse operation. This mirrors the conventional way the related theorems are presented in modern basic calculus textbooks, where differentiation rules usually precede integration rules.[5]

Although both men stated that their rules, demonstrated only for rational quantities, worked for all real powers, neither sought a proof, since at the time the applications of the theory did not involve such exotic power functions, and questions of convergence of infinite series remained unresolved.

The unique case of $\displaystyle{ r = -1 }$ was resolved by Flemish Jesuit and mathematician Grégoire de Saint-Vincent and his student Alphonse Antonio de Sarasa in the mid 17th century, who demonstrated that the associated definite integral,

$\displaystyle{ \int_1^x \frac{1}{t}\, dt }$

representing the area between the rectangular hyperbola $\displaystyle{ xy = 1 }$ and the x-axis, was a logarithmic function, whose base was eventually discovered to be the transcendental number e. The modern notation for the value of this definite integral is $\displaystyle{ \ln(x) }$, the natural logarithm.

## Generalizations

### Complex power functions

If we consider functions of the form $\displaystyle{ f(z) = z^c }$ where $\displaystyle{ c }$ is any complex number and $\displaystyle{ z }$ is a complex number in a slit complex plane that excludes the branch point of 0 and any branch cut connected to it, and we use the conventional multivalued definition $\displaystyle{ z^c := \exp(c\ln z) }$, then it is straightforward to show that, on each branch of the complex logarithm, the same argument used above yields a similar result: $\displaystyle{ f'(z) = \frac{c}{z}\exp(c\ln z) }$.[6]

In addition, if $\displaystyle{ c }$ is a positive integer, then there is no need for a branch cut: one may define $\displaystyle{ f(0) = 0 }$, or define positive integral complex powers through complex multiplication, and show that $\displaystyle{ f'(z) = cz^{c-1} }$ for all complex $\displaystyle{ z }$, from the definition of the derivative and the binomial theorem.

However, due to the multivalued nature of complex power functions for non-integer exponents, one must be careful to specify the branch of the complex logarithm being used. In addition, no matter which branch is used, if $\displaystyle{ c }$ is not a positive integer, then the function is not differentiable at 0.
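A numerical illustration on the principal branch, using `cmath` (whose `log` is the principal logarithm, with branch cut along the negative real axis). This is a sketch; the exponent and base point are arbitrary, with the point kept away from the cut.

```python
import cmath

def cderivative(f, z, h=1e-6):
    """Difference quotient along the real direction; valid where f is holomorphic."""
    return (f(z + h) - f(z - h)) / (2 * h)

c = 1 + 2j               # arbitrary complex exponent
z0 = 0.5 + 0.8j          # arbitrary point off the negative real axis

# Principal branch: z**c := exp(c * log z)
f = lambda z: cmath.exp(c * cmath.log(z))

# Expected derivative: f'(z) = (c / z) * exp(c * log z)
predicted = (c / z0) * cmath.exp(c * cmath.log(z0))
assert abs(cderivative(f, z0) - predicted) < 1e-6
```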

## References

### Notes

1. [note 1] If $\displaystyle{ r }$ is a rational number whose lowest terms representation has an odd denominator, then the domain of $\displaystyle{ f }$ is understood to be $\displaystyle{ \mathbb R }$. Otherwise, the domain is $\displaystyle{ (0,\infty) }$.

### Citations

1. Landau, Edmund (1951). Differential and Integral Calculus. New York: Chelsea Publishing Company. p. 45. ISBN 978-0821828304.
2. Spivak, Michael (1994). Calculus (3 ed.). Texas: Publish or Perish, Inc. pp. 336–342. ISBN 0-914098-89-6.
3. Maor, Eli (1994). e: The Story of a Number. New Jersey: Princeton University Press. p. 156. ISBN 0-691-05854-7.
4. Boyer, Carl (1959). The History of the Calculus and its Conceptual Development. New York: Dover. p. 127. ISBN 0-486-60509-4.
5. Boyer, Carl (1959). The History of the Calculus and its Conceptual Development. New York: Dover. pp. 191, 205. ISBN 0-486-60509-4.
6. Freitag, Eberhard; Busam, Rolf (2009). Complex Analysis (2 ed.). Heidelberg: Springer-Verlag. p. 46. ISBN 978-3-540-93982-5.