Formal calculation


In mathematical logic, a formal calculation, or formal operation, is a calculation that is systematic but without a rigorous justification. It involves manipulating symbols in an expression using a generic substitution without proving that the necessary conditions hold. Essentially, it involves the form of an expression without considering its underlying meaning. This reasoning can either serve as positive evidence that some statement is true when it is difficult or unnecessary to provide proof or as an inspiration for the creation of new (completely rigorous) definitions.

However, this interpretation of the term formal is not universally accepted, and some consider it to mean quite the opposite: a completely rigorous argument, as in formal mathematical logic.

Examples

Formal calculations can lead to results that are wrong in one context but correct in another. The equation

[math]\displaystyle{ \sum_{n=0}^{\infty} q^n = \frac{1}{1-q} }[/math]

holds if q has absolute value less than 1. Ignoring this restriction and substituting q = 2 leads to

[math]\displaystyle{ \sum_{n=0}^{\infty} 2^n = -1. }[/math]

Substituting q = 2 into the proof of the first equation yields a formal calculation that produces the second equation. That equation is wrong over the real numbers, since the series does not converge there. However, in other contexts (e.g. working with 2-adic numbers, or with integers modulo a power of 2), the series does converge, and the formal calculation argues that the equation must be valid in those contexts.
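
This can be checked concretely. Below is a minimal Python sketch (plain integer arithmetic, no external libraries): the sum of the first N terms is 2^N − 1, which is congruent to −1 modulo 2^N, and agreement modulo ever-higher powers of 2 is precisely what convergence to −1 in the 2-adic sense means.

```python
# Partial sums of 1 + 2 + 4 + 8 + ... approach -1 in the 2-adic sense:
# the sum of the first N terms is 2^N - 1, which is -1 modulo 2^N.
for N in range(1, 11):
    partial_sum = sum(2**n for n in range(N))   # equals 2^N - 1
    modulus = 2**N
    assert partial_sum % modulus == -1 % modulus
    print(f"first {N} terms: {partial_sum} = -1 (mod 2^{N})")
```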

Another example is obtained by substituting q = −1. The resulting series 1 − 1 + 1 − 1 + ... is divergent (over the real and the p-adic numbers), but a value can be assigned to it with an alternative method of summation, such as Cesàro summation. The resulting value, 1/2, is the same as that obtained by the formal computation.
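
A short Python sketch of Cesàro summation (averaging the partial sums) illustrates this numerically; the partial sums alternate 1, 0, 1, 0, ..., so their running averages settle at 1/2:

```python
# Cesàro summation of 1 - 1 + 1 - 1 + ...: average the partial sums.
partial = 0
running_total = 0
for k in range(1, 1001):
    partial += (-1)**(k - 1)    # k-th term: 1, -1, 1, -1, ...
    running_total += partial    # sum of the first k partial sums
print(running_total / 1000)     # 0.5, the Cesàro value of the series
```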

Formal power series

A formal power series adopts the form of a power series from real analysis; the word "formal" indicates that the series need not converge. In mathematics, and especially in algebra, a formal series is an infinite sum that is considered independently of any notion of convergence and can be manipulated with the algebraic operations on series (addition, subtraction, multiplication, division, partial sums, etc.).

A formal power series is a special kind of formal series, which may be viewed as a generalization of a polynomial, where the number of terms is allowed to be infinite, with no requirements of convergence. Thus, the series may no longer represent a function of its variable, merely a formal sequence of coefficients, in contrast to a power series, which defines a function by taking numerical values for the variable within a radius of convergence. In a formal power series, the powers of the variable are used only as position-holders for the coefficients, so that the coefficient of [math]\displaystyle{ x^5 }[/math] is the fifth term in the sequence. In combinatorics, the method of generating functions uses formal power series to represent numerical sequences and multisets, for instance allowing concise expressions for recursively defined sequences regardless of whether the recursion can be explicitly solved. More generally, formal power series can include series with any finite (or countable) number of variables, and with coefficients in an arbitrary ring.
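
To make this concrete, here is a minimal sketch of truncated formal power series arithmetic in Python (the helpers multiply and inverse are ad hoc, not from any library). Convergence never enters: coefficients are combined by the Cauchy product rule alone. Inverting 1 − x − x² and multiplying by x recovers the generating function of the Fibonacci numbers:

```python
# Formal power series, truncated to n coefficients and represented as
# plain lists [c0, c1, c2, ...]; x is a pure placeholder, never evaluated.

def multiply(a, b, n):
    """Cauchy product of two coefficient lists, truncated to n terms."""
    return [sum(a[i] * b[k - i]
                for i in range(k + 1) if i < len(a) and k - i < len(b))
            for k in range(n)]

def inverse(a, n):
    """Multiplicative inverse of a series with a[0] != 0, to n terms."""
    inv = [1 / a[0]]
    for k in range(1, n):
        # Coefficient k of a * inv must vanish; solve for inv[k].
        s = sum(a[i] * inv[k - i] for i in range(1, min(k, len(a) - 1) + 1))
        inv.append(-s / a[0])
    return inv

# x / (1 - x - x^2), the generating function of the Fibonacci numbers.
coeffs = multiply([0, 1], inverse([1, -1, -1], 10), 10)
print(coeffs)   # [0.0, 1.0, 1.0, 2.0, 3.0, 5.0, 8.0, 13.0, 21.0, 34.0]
```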

Rings of formal power series are complete local rings, which supports calculus-like methods in the purely algebraic framework of algebraic geometry and commutative algebra. They are analogous to p-adic integers, which can be defined as formal series of the powers of p.
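
As a small illustration of the last point, here is a sketch (plain Python, with an ad hoc truncate helper) treating a p-adic integer as a formal series in powers of p whose digits are its coefficients; the constant digit sequence p − 1 recovers −1, mirroring the q = 2 example above:

```python
# A p-adic integer as a formal series c0 + c1*p + c2*p^2 + ... with
# digits 0 <= c < p; truncating after N digits gives a residue mod p^N.
def truncate(digits, p, N):
    """Value of the first N digits, i.e. the series reduced mod p^N."""
    return sum(c * p**n for n, c in enumerate(digits[:N]))

p, N = 5, 8
digits = [p - 1] * N           # the series (p-1) + (p-1)p + (p-1)p^2 + ...
assert truncate(digits, p, N) % p**N == -1 % p**N   # equals -1 mod p^N
print(truncate(digits, p, N))  # 390624 == 5**8 - 1
```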

Symbol manipulation

Differential equations

To solve the differential equation

[math]\displaystyle{ \frac{dy}{dx} = y^2 }[/math]

the symbols [math]\displaystyle{ dy }[/math] and [math]\displaystyle{ dx }[/math] can be treated as ordinary algebraic quantities. Without giving any justification for the validity of this step, take reciprocals of both sides:

[math]\displaystyle{ \frac{dx}{dy} = \frac{1}{y^2} }[/math]

Taking an antiderivative of each side with respect to [math]\displaystyle{ y }[/math]:

[math]\displaystyle{ x = \frac{-1}{y} + C }[/math]

Solving for [math]\displaystyle{ y }[/math]:

[math]\displaystyle{ y = \frac{1}{C-x} }[/math]

Because this is a formal calculation, it is acceptable to let [math]\displaystyle{ C = \infty }[/math] and obtain another solution:

[math]\displaystyle{ y = \frac{1}{\infty - x} = \frac{1}{\infty} = 0 }[/math]

The final solutions can be checked to confirm that they solve the equation.
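
For instance, the check can be done symbolically; a short sketch using the sympy library (an assumed dependency) confirms that both [math]\displaystyle{ y = \frac{1}{C-x} }[/math] and [math]\displaystyle{ y = 0 }[/math] satisfy the equation:

```python
# Verify that y = 1/(C - x) and y = 0 both satisfy dy/dx = y^2.
from sympy import symbols, diff, simplify, Integer

x, C = symbols('x C')

for y in (1 / (C - x), Integer(0)):
    residual = simplify(diff(y, x) - y**2)
    print(f"y = {y}: dy/dx - y^2 = {residual}")   # prints 0 for both
```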

Cross product

The cross product can be expressed as the following determinant:

[math]\displaystyle{ \mathbf{a\times b} = \begin{vmatrix} \mathbf{i}&\mathbf{j}&\mathbf{k}\\ a_1&a_2&a_3\\ b_1&b_2&b_3\\ \end{vmatrix} }[/math]

where [math]\displaystyle{ ( \mathbf{i},\mathbf{j},\mathbf{k}) }[/math] is a positively oriented orthonormal basis of a three-dimensional oriented Euclidean vector space, while [math]\displaystyle{ a_1,a_2,a_3, b_1, b_2, b_3 }[/math] are scalars such that [math]\displaystyle{ \mathbf{a} = a_1 \mathbf{i} + a_2 \mathbf{j} + a_3 \mathbf{k} }[/math], and similarly for [math]\displaystyle{ \mathbf{b} }[/math]. This is a formal calculation: a determinant is defined only when all of its entries are scalars, so the first row of vectors has no literal meaning, yet expanding the determinant as if it did produces the correct cross product.
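
Expanding the determinant along its first row turns the formal expression into the usual component formula. A minimal Python sketch (the helper cross is written here for illustration):

```python
# Cofactor expansion of the symbolic determinant along the row (i, j, k)
# gives the component formula for the cross product.
def cross(a, b):
    a1, a2, a3 = a
    b1, b2, b3 = b
    return (a2 * b3 - a3 * b2,    # i: minor from deleting row 1, col 1
            a3 * b1 - a1 * b3,    # j: minus the minor for column 2
            a1 * b2 - a2 * b1)    # k: minor from deleting row 1, col 3

print(cross((1, 0, 0), (0, 1, 0)))   # (0, 0, 1): i x j = k
```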

