Decimal data type

Some programming languages (or compilers for them) provide a built-in (primitive) or library decimal data type to represent non-repeating decimal fractions like 0.3 and -1.17 without rounding, and to do arithmetic on them. Examples are the decimal.Decimal type of Python, and analogous types provided by other languages.

Rationale

Fractional numbers are supported in most programming languages as floating-point numbers or fixed-point numbers. However, such representations typically restrict the denominator to a power of two. Most decimal fractions (and most fractions in general) cannot be expressed exactly as a fraction with a power-of-two denominator: for example, the simple decimal fraction 0.3 (3/10) is represented in IEEE 754 double precision as 5404319552844595/18014398509481984 (0.299999999999999988897769...). This inexactness causes many problems that are familiar to experienced programmers. For example, the expression 0.1 * 7 == 0.7 might counterintuitively evaluate to false in some systems, due to the inexact binary representation of the decimal operands.
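
The difference is easy to observe in Python, whose decimal.Decimal type is mentioned above. A minimal illustration (the printed results are those of CPython 3):

  from decimal import Decimal

  # Binary floating point: 0.1 and 0.7 are stored as the nearest binary
  # fractions, so the product picks up representation error.
  print(0.1 * 7 == 0.7)                        # False
  print(0.1 * 7)                               # 0.7000000000000001

  # Decimal arithmetic: 0.1 and 0.7 are held exactly, so the comparison
  # behaves as expected.
  print(Decimal("0.1") * 7 == Decimal("0.7"))  # True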

Although all decimal fractions are fractions, and thus it is possible to use a rational data type to represent them exactly, it may be more convenient in many situations to consider only non-repeating decimal fractions (fractions whose denominator is a power of ten). For example, fractional units of currency worldwide are mostly based on a denominator that is a power of ten. Also, most fractional measurements in science are reported as decimal fractions, rather than as fractions with any other system of denominators.

A decimal data type could be implemented as either a floating-point number or as a fixed-point number. In the fixed-point case, the denominator would be set to a fixed power of ten. In the floating-point case, a variable exponent would represent the power of ten by which the mantissa of the number is multiplied.
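
As an illustration of the floating-point variant, a value can be stored as an integer coefficient together with a power-of-ten exponent. The following Python sketch is purely illustrative; the class SimpleDecimal and its methods are invented here and are not any library's API:

  from dataclasses import dataclass

  @dataclass
  class SimpleDecimal:
      """Represents coefficient * 10**exponent, e.g. 123456 * 10**-3 == 123.456."""
      coefficient: int
      exponent: int

      def __add__(self, other: "SimpleDecimal") -> "SimpleDecimal":
          # Rescale both operands to the smaller power of ten, then add the
          # integer coefficients; decimal addition itself introduces no error.
          e = min(self.exponent, other.exponent)
          a = self.coefficient * 10 ** (self.exponent - e)
          b = other.coefficient * 10 ** (other.exponent - e)
          return SimpleDecimal(a + b, e)

      def __mul__(self, other: "SimpleDecimal") -> "SimpleDecimal":
          # Multiply coefficients and add exponents, as in scientific notation.
          return SimpleDecimal(self.coefficient * other.coefficient,
                               self.exponent + other.exponent)

  # 0.3 + 0.1 is exactly 0.4: coefficients 3 and 1 at exponent -1 sum to 4.
  print(SimpleDecimal(3, -1) + SimpleDecimal(1, -1))  # SimpleDecimal(coefficient=4, exponent=-1)

A real implementation would additionally limit the coefficient to a fixed number of digits and round when that limit is exceeded, which is where precision and rounding modes enter.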

Languages that support a decimal data type usually allow such a value to be constructed from an integer, a string, or a pair of integers giving the coefficient and the power-of-ten exponent, rather than from a base-2 floating-point number, due to the loss of exactness the latter would cause. Usually the basic arithmetic operations ('+', '−', '×', '/', integer powers) and comparisons ('=', '<', '>', '≤') are extended to act on these values, either natively or through operator overloading facilities provided by the language. These operations may be translated by the compiler into a sequence of integer machine instructions or into library calls. Support may also extend to other operations, such as formatting and rounding to an integer or floating-point value; an example using the value 123.456 is shown below.
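
For instance, with Python's decimal.Decimal the overloaded operators, rounding, and formatting behave as follows (output shown in comments):

  from decimal import Decimal, ROUND_HALF_UP

  x = Decimal("123.456")

  # Arithmetic and comparison operators are overloaded for Decimal.
  print(x + Decimal("0.544"))                                  # 124.000
  print(x * 2)                                                 # 246.912
  print(x < Decimal("123.46"))                                 # True

  # Rounding and conversion to other types.
  print(x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))   # 123.46
  print(round(x), int(x), float(x))                            # 123 123 123.456
  print(format(x, ".1f"))                                      # 123.5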

Standard formats

IEEE 754 specifies three standard decimal floating-point formats of differing precision:

  • decimal32 – 7 significant decimal digits, exponent range −95 to 96
  • decimal64 – 16 significant decimal digits, exponent range −383 to 384
  • decimal128 – 34 significant decimal digits, exponent range −6143 to 6144

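Python's decimal module follows the General Decimal Arithmetic specification, which is aligned with these IEEE 754 decimal formats, so a context can be configured to approximate each format. The context objects below only mimic the formats' precision and exponent range, not their bit-level encodings:

  from decimal import Context, Decimal

  # Contexts approximating the three IEEE 754 decimal interchange formats.
  decimal32  = Context(prec=7,  Emin=-95,   Emax=96)
  decimal64  = Context(prec=16, Emin=-383,  Emax=384)
  decimal128 = Context(prec=34, Emin=-6143, Emax=6144)

  # The same division rounded to each format's working precision.
  for ctx in (decimal32, decimal64, decimal128):
      print(ctx.divide(Decimal(1), Decimal(3)))
  # 0.3333333
  # 0.3333333333333333
  # 0.3333333333333333333333333333333333
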
Language support

  • C# has a built-in 128-bit data type decimal, providing 28–29 significant digits, with an approximate range of (-7.9 x 10^28 to 7.9 x 10^28) / 10^(0 to 28).[1]
  • Starting with Python 2.4, Python's standard library includes a Decimal class in the module decimal.[2]
  • Ruby's standard library includes a BigDecimal class in the module bigdecimal.
  • Java's standard library includes a java.math.BigDecimal class.
  • In Objective-C, the Cocoa and GNUstep APIs provide the NSDecimalNumber class and the NSDecimal C data type for representing decimals whose mantissa is up to 38 digits long and whose exponent ranges from -128 to 127.
  • Some IBM systems and SQL database systems support the DECFLOAT format, at least in the two larger sizes (decimal64 and decimal128).[3]
  • ABAP's new DECFLOAT data type includes decimal64 (as DECFLOAT16) and decimal128 (as DECFLOAT34) formats.[4]
  • PL/I natively supports both fixed-point and floating-point decimal data.
  • GNU Compiler Collection (gcc) provides support for decimal floats as an extension.[5]

References