
# Multiple-precision arithmetic FAQ

Last updates: Sat Oct 22 17:17:26 2005 and Thu Mar 23 14:04:08 2017

1.   What is multiple-precision arithmetic?

It is arithmetic carried out in software at higher precision than provided by hardware.

Since the mid-1980s, essentially all new desktop and smaller computers have implemented arithmetic as defined by the ANSI/IEEE 754-1985 Standard for Binary Floating-Point Arithmetic, usually abbreviated to IEEE 754. This provides for 32-bit, 64-bit, and optionally, either 80-bit or 128-bit formats. These encode a 1-bit sign, a biased power-of-two exponent (8, 11, 15, and 15 bits respectively), and a significand (24, 53, 64, and 113 bits respectively) capable of representing approximately 7, 15, 19, and 34 decimal digits respectively.
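These limits are easy to see in practice. The following Python sketch illustrates the 64-bit format (Python floats are IEEE 754 64-bit values on all mainstream platforms):

```python
# The 64-bit IEEE 754 format: 53-bit significand, about 15 decimal digits.
import sys

print(sys.float_info.mant_dig)     # 53 significand bits
print(sys.float_info.dig)          # 15 decimal digits survive a round trip

# Adding 2**-53 to 1.0 changes nothing: the increment falls below the
# last bit of the significand and is rounded away.
print(1.0 + 2.0 ** -53 == 1.0)     # True
```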

Although the IEEE 754 hardware precisions suffice for many practical purposes, there are many areas of computation where higher precision is required.

2.   Who/what needs multiple-precision arithmetic?

Three simple examples where higher-precision arithmetic is required are in the conversion between decimal and binary number bases, the computation of exactly rounded elementary functions, and the computation of vector dot products:

1. The first of these is a problem that was not solved properly until about 1990, and regrettably, the algorithms are still not implemented in most programming-language libraries. Work at IBM in 1999 showed that there are cases where the 32-bit format needs 126 bits (38 decimal digits), the 64-bit format needs 752 bits (227 decimal digits), and the 128-bit format needs 11,503 bits (3463 decimal digits) of precision in intermediate results to guarantee correct number-base conversion.
2. The second is an area of active research. Algorithms are now known for generating correctly rounded results for all of the elementary functions defined in the ISO standards for the Fortran and C/C++ programming languages, but they are simplest when the working precision is at least two or three times the precision of the final results, and special cases may require much higher precision. When only hardware precision is available, the work can be excruciatingly difficult: a recent paper on the computation of the correctly rounded exponential function, exp(), devotes about 25 pages of dense mathematics to proving the algorithm correct, and implementing it in correct computer code is a further serious challenge. Exhaustive testing of any of the elementary functions in high precision is impossible, because it would take far longer than the age of the universe, so it is very hard to have confidence in any such code.
3. In 2003, an algorithm was discovered that can be used on machines with a fused multiply-add (FMA) instruction (e.g., Intel IA-64 up to the 80-bit format, and IBM PowerPC up to the 64-bit format) to compute vector dot products provably accurate to the next-to-last digit. However, higher precision is needed for correctly rounded results, and for machines that lack FMA instructions. Notice that the simple case of summing a series of numbers is just a special case of the dot product, when one of the vectors has unit elements. The seemingly simple programming-language statement to add three numbers, sum = x + y + z, is a distinctly non-trivial task; the statement as written is frequently completely wrong in floating-point arithmetic! The committee working on the pending revision of the IEEE 754 Standard has a proposal before it to require support for correctly rounded dot products.
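The three-term sum in item 3 can be demonstrated in a few lines of Python; the values below are illustrative, chosen so that naive left-to-right addition loses the middle term entirely:

```python
# Why sum = x + y + z can be completely wrong in floating-point arithmetic.
import math
from decimal import Decimal, getcontext

x, y, z = 1.0e16, 1.0, -1.0e16      # exact mathematical sum is 1.0

naive = x + y + z                   # y vanishes when added to x: the spacing
print(naive)                        # between doubles near 1e16 is 2, so 0.0

exact = math.fsum([x, y, z])        # exact (Shewchuk) summation
print(exact)                        # 1.0

getcontext().prec = 40              # 40-digit software arithmetic also works
print(Decimal(x) + Decimal(y) + Decimal(z))   # 1
```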

Two recent books, Experimentation in mathematics: computational paths to discovery (ISBN 1-56881-136-5) and Mathematics by Experiment: Plausible Reasoning in the 21st Century (ISBN 1-56881-211-6), show how high-precision computation can lead to fundamental new discoveries in mathematics, and be essential for the solution of some important physical problems.

3.   What programming languages provide native multiple-precision arithmetic?

The Axiom, Maple, Mathematica, Maxima, MuPAD, PARI/GP, and Reduce symbolic-algebra languages, the Unix bc and dc calculators, and the Python and Rexx scripting languages all provide such a facility and make it easy to use. Separate BigFloat packages are available for the Perl and Ruby scripting languages.

For example, in Maple, you can change the decimal precision at any time by a simple assignment to a global variable, Digits := 100;, without making any other changes to your program. All subsequent arithmetic, and all of the built-in functions, are then computed to the specified precision.
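Python's standard decimal module offers a comparable (if much more limited) facility: a single context setting controls the precision of all subsequent Decimal arithmetic, in the same spirit as Maple's Digits variable:

```python
# One global setting controls the working precision, much like Digits in Maple.
from decimal import Decimal, getcontext

getcontext().prec = 100             # 100 significant decimal digits
root2 = Decimal(2).sqrt()
print(root2)                        # 1.4142135623730950488016887242... (100 digits)
```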

If you need multiple-precision arithmetic for experimental code that you are developing, these languages are likely to prove most convenient. However, because the arithmetic is performed in software, and code is interpreted, rather than compiled, run times can be hundreds, or even thousands, of times slower than they would be in a compiled language using hardware arithmetic.

Java provides a BigDecimal data type (and C# a similar decimal type), but these support only fixed-point arithmetic, not floating-point arithmetic, and library support beyond the basic operations (add, subtract, multiply, and divide) is nonexistent. Their utility is limited, primarily to simple computations in business accounting.
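A sketch of that fixed-point accounting style, using Python's decimal module with hypothetical invoice figures (the prices and tax rate are made up for illustration):

```python
# Fixed-point decimal arithmetic for accounting: exact cents, explicit rounding.
from decimal import Decimal, ROUND_HALF_EVEN

price = Decimal("19.99")            # constructed from strings, never floats
tax_rate = Decimal("0.0625")

subtotal = price * 3                            # exactly 59.97
tax = (subtotal * tax_rate).quantize(           # 3.748125 rounds to 3.75
    Decimal("0.01"), rounding=ROUND_HALF_EVEN)
print(subtotal, tax, subtotal + tax)            # 59.97 3.75 63.72

# Binary floating point cannot even represent these values exactly:
print(0.1 + 0.2 == 0.3)                         # False
```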

4.   What programming languages provide nonnative multiple-precision arithmetic?

If you need high-precision arithmetic in a traditional programming language, such as Fortran, C, or C++, life can be much more difficult. The Brent MP package (ACM Trans. Math. Software 4(1), 57--70 (1978)), the Bailey mpfun77 package, the GNU gmp package, the French LORIA mpfr package, and the Moshier extended double package all provide libraries of routines, but you must code every numerical and I/O operation that involves multiple-precision data as a function call.

The Ada, C++, and Fortran 90/95 languages support user-definable data types and operator overloading. This makes it possible to define libraries that allow you to code numerical expressions in the conventional way, such as a = b * c + d / sqrt(e). Two such libraries for Fortran 90/95 are the Bailey mpfun90 package, and the Schonfelder vpa package. The Bailey arprec package offers similar support for C++.
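Python's decimal type relies on the same operator-overloading mechanism, so the identical expression style works there too; the variable names below are just placeholders for illustration:

```python
# Multiple-precision arithmetic written as an ordinary expression,
# via operator overloading on a software decimal type.
from decimal import Decimal, getcontext

getcontext().prec = 50              # 50-digit working precision

b, c, d, e = Decimal(2), Decimal(3), Decimal(5), Decimal(2)
a = b * c + d / e.sqrt()            # a = 6 + 5/sqrt(2), coded conventionally
print(a)                            # 9.5355339059327376220042218105...
```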

Regrettably, none of these packages provides the standard elementary functions, and that deficiency remains a great weakness, because those functions are very likely to be required.

If you need these libraries, please consult with systems staff for advice and instruction.

Consult some of the recent books listed in the extensive fparith floating-point arithmetic bibliography. The author of this FAQ has written a draft of a book on the subject; consult systems staff for its prepublication availability.


Department of Mathematics
University of Utah
155 South 1400 East, JWB 233
Salt Lake City, Utah 84112-0090
Tel: 801 581 6851, Fax: 801 581 4148