4-7 Numerical stability of computations 
 ***************************************

 You may sometimes hear the following naive argument:

    Why spend time on all those small floating point errors? Even 
    REAL*4 gives us at least 7 significant digits, and surely that 
    is enough for most applications, and there is the REAL*8, and 
    that must surely suffice.

 One counter-argument is that errors may propagate and grow rapidly, 
 and since "small" roundoff errors are inevitable, there is always 
 the danger that our computations will produce garbage.


 Introduction 
 ------------
 Problems that have an inherent tendency for "error amplification"
 are called 'ill-conditioned'; they may require multi-precision
 arithmetic packages and tight error control. 

 A classical example of an ill-conditioned problem is Wilkinson's
 polynomial of degree 20: multiplying just one of its coefficients
 (the one that multiplies the 19th power) by a factor of about
 1.0000000005 completely changes the location of most of the roots.

 In the above example, simply entering the input data (the coefficients
 of the polynomial) into the computer may be enough to trash the whole
 calculation, as the conversion from the external decimal representation
 to the internal binary one (remember that even 0.1 has no exact binary
 representation) may introduce errors that are already too large. 

 A 'well-conditioned' problem may still be handled with an error
 amplifying algorithm; such an algorithm is said to be 'unstable'. 
 In simple terms, an unstable algorithm transforms a "small" change 
 in the input quantities into a "large" change in the output quantities. 

 Common mechanisms for error amplification are:

   I)   Evaluating a rapidly changing quantity (large derivative);
        the errors are amplified by the magnitude of the derivative 

   II)  Mixing/alternating different solutions to the same problem,
        e.g. ill-conditioned polynomials, or ordinary differential 
        equations that have another, rapidly growing solution

   III) Iterative evaluation, which may amplify an initial error
        repeatedly via the other mechanisms (see the small sketch
        below)

 Sometimes an unstable algorithm may be modified to produce a stable one.
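
 As a small illustration of mechanisms II and III (a sketch added here
 for concreteness, not one of this guide's worked examples), consider
 the integrals I(n) of x**n * exp(x-1) over [0,1].  They satisfy the
 recurrence I(n) = 1 - n * I(n-1) with I(0) = 1 - exp(-1).  Running the
 recurrence forward multiplies the initial roundoff error by n factorial,
 while running it backward from the crude guess I(30) = 0.0 shrinks the
 error by a factor of n at every step, and is stable:


      PROGRAM RECUR
C     ------------------------------------------------------------------
C     I(N) = 1 - N * I(N-1), forward (unstable) and backward (stable)
      INTEGER           N
      REAL              FWD, BWD
C     ------------------------------------------------------------------
C     Forward: the initial roundoff in I(0) is multiplied by 20!
      FWD = 1.0 - EXP(-1.0)
      DO N = 1, 20
        FWD = 1.0 - REAL(N) * FWD
      ENDDO
C     Backward: start from the rough guess I(30) = 0.0 and use
C     I(N-1) = (1 - I(N)) / N, which shrinks the error at every step
      BWD = 0.0
      DO N = 30, 21, -1
        BWD = (1.0 - BWD) / REAL(N)
      ENDDO
C     ------------------------------------------------------------------
      WRITE(*,*) ' Forward  I(20) = ', FWD
      WRITE(*,*) ' Backward I(20) = ', BWD
C     ------------------------------------------------------------------
      END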

   +------------------------------------------------------------+
   |  COMPUTATIONAL ERRORS MAKE YOU SOLVE A DIFFERENT PROBLEM   |
   +------------------------------------------------------------+


 A very simple example for mechanism I - square root
 ---------------------------------------------------
 This problem is simple to the point of being artificial, but don't
 get it wrong: such situations do occur in real programs.

 We compute the square root of a small number DX in two ways: after 
 adding and then subtracting 1.0, and directly.


C    For DEC machines use:  SQUARE1 = 9.0E-08,  SQUARE2 = 5.76E-08
C    For IEEE machines use: SQUARE1 = 9.0E-08,  SQUARE2 = 5.76E-08
C
      PROGRAM SQROOT
C     ------------------------------------------------------------------
      REAL 
     *                  X,
     *                  SQRT,
     *                  DIRECT_SQRT,
     *                  ERROR,
     *                  SQUARE1, SQUARE2
C     ------------------------------------------------------------------
      PARAMETER(
     *                  SQUARE1 = 9.0E-08,
     *                  SQUARE2 = 5.76E-08)
C     ------------------------------------------------------------------
      WRITE(*,*) 
      X = 1.0 + SQUARE1
      WRITE(*,*) ' We choose DX =            ', SQUARE1
      WRITE(*,*) ' X = 1.0 + DX =            ', X
      WRITE(*,*) ' X - 1.0 =                 ', (X - 1.0)
C     ------------------------------------------------------------------
      SQRT  = (X - 1.0) ** 0.5
      WRITE(*,*) ' (X - 1.0) ** 0.5  =       ', SQRT
      DIRECT_SQRT = (9.0E-08 ** 0.5)
      WRITE(*,*) ' Direct root of 9.0E-08 =  ', DIRECT_SQRT
      ERROR = (SQRT - DIRECT_SQRT) / DIRECT_SQRT
      WRITE(*,*) ' Relative error =          ', ERROR
      WRITE(*,*) 
C     ------------------------------------------------------------------
      WRITE(*,*) 
      X = 1.0 + SQUARE2
      WRITE(*,*) ' We choose DX =            ', SQUARE2
      WRITE(*,*) ' X = 1.0 + DX =            ', X
      WRITE(*,*) ' X - 1.0 =                 ', (X - 1.0)
C     ------------------------------------------------------------------
      SQRT  = (X - 1.0) ** 0.5
      WRITE(*,*) ' (X - 1.0) ** 0.5  =       ', SQRT
      DIRECT_SQRT = (5.76E-08 ** 0.5)
      WRITE(*,*) ' Direct root of 5.76E-08 = ', DIRECT_SQRT
      ERROR = (SQRT - DIRECT_SQRT) / DIRECT_SQRT
      WRITE(*,*) ' Relative error =          ', ERROR
      WRITE(*,*) 
C     ------------------------------------------------------------------
      END


 In the first case we get a relative error of 15%, and in the second
 an error of 100%. Unbelievable, isn't it?

 The large relative errors are a consequence of an unstable algorithm.
 In the example program we computed the square root of a small number,
 and we know that: 

        y = x ** 0.5            y' = 0.5 / (x ** 0.5)

 The derivative goes to infinity as we approach x = 0.0, so clearly a
 small change in x there will produce a LARGE change in y. 

 Of course, in our finite arithmetic we can't get infinitely close to
 x = 0.0; near that point the representable values are of the form
 x = n * Xmin.

 Let's stabilize our simple algorithm; in our example it is enough 
 to replace:

      X    = 1.0 + 9.0E-08
      SQRT = (X - 1.0) ** 0.5

 By the more reasonable:

      SQRT = 9.0E-08 ** 0.5

 Now we get the correct result. Why? 

 In the first case we introduced a large roundoff error in computing 
 1.0 + 9.0E-08, because a small number was added to 1.0. Remember that 
 the "density of points" near 1.0 is much lower than near 0.0; said 
 differently, the "granularity" there is coarser.

 Subtracting the 1.0 again didn't help (on the contrary), and the large 
 roundoff error stayed and produced a wrong square root. 
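
 To see where the 15% and 100% come from, assume IEEE single precision
 (REAL*4), where the spacing of representable numbers just above 1.0 is
 2 ** -23, about 1.19E-07:

       1.0 + 9.0E-08  rounds to  1.0 + 1.19E-07
       (X - 1.0) ** 0.5  =  3.45E-04  instead of  3.0E-04   (about 15% off)

       1.0 + 5.76E-08  rounds to  1.0  exactly
       (X - 1.0) ** 0.5  =  0.0       instead of  2.4E-04   (100% off)

 (The exact figures depend on the floating point format, but the
 mechanism is the same on other machines.)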

 Of course you can't always transform an unstable algorithm to a stable
 one by simple arithmetic juggling, sometimes the problem is deeper, and 
 you have to use a better algorithm.


 A more realistic example for mechanism I - the quadratic equation
 -----------------------------------------------------------------
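 A well-known instance of mechanism I (and of cancellation) is solving
 a*x**2 + b*x + c = 0 with the textbook formula

       x = (-b +/- (b**2 - 4*a*c) ** 0.5) / (2*a)

 When b**2 is much larger than 4*a*c, the square root is almost equal
 to the absolute value of b, so for one of the two roots the formula
 subtracts two nearly equal numbers and most significant digits cancel.
 The standard remedy is to compute the root with no cancellation first
 and obtain the other one from the product of the roots, c/a.  The
 following minimal sketch (an illustration added here, not a program
 from this guide) shows both forms:


      PROGRAM QUAD
C     ------------------------------------------------------------------
C     Roots of A*X**2 + B*X + C = 0 with B**2 >> 4*A*C
      REAL
     *                  A, B, C, DISC, Q,
     *                  XNAIVE, XSTABLE
C     ------------------------------------------------------------------
      PARAMETER(
     *                  A = 1.0,
     *                  B = -1.0E+04,
     *                  C = 1.0)
C     ------------------------------------------------------------------
      DISC = (B ** 2 - 4.0 * A * C) ** 0.5
C     Naive formula for the small root: -B and DISC nearly cancel
      XNAIVE  = (-B - DISC) / (2.0 * A)
C     Stable form: Q has no cancellation (SIGN gives DISC the sign
C     of B), and the small root follows from  X1 * X2 = C / A
      Q       = -0.5 * (B + SIGN(1.0, B) * DISC)
      XSTABLE = C / Q
C     ------------------------------------------------------------------
      WRITE(*,*) ' Naive  small root = ', XNAIVE
      WRITE(*,*) ' Stable small root = ', XSTABLE
C     ------------------------------------------------------------------
      END


 With single precision the naive form returns 0.0 for the small root
 (a 100% relative error), while the stable form returns the correct
 1.0E-04.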



 An example program for computing error amplification
 ----------------------------------------------------
 The following procedure (STDRVR) can track error propagation through 
 a routine with a certain arrangement of arguments. The tested routine 
 should have REAL arguments: the first is the "result", and 1 to 3 
 "variables" follow it.

 Such "variable argument list" routines usually work quite nicely;
 if on your machine they don't, please notify this guide.

 In this example (see the call to STDRVR) the routine COMPUTE is run 
 20 times, the arguments X1, X2 are varied by random increments on the 
 order of the common REAL machine precision, and the resulting values 
 are monitored.

 Try giving small positive values to X1 and X2, and note the large
 error amplifications that are generated.


      PROGRAM DRIVER
C     ------------------------------------------------------------------
      REAL
     *			X1, X2
C     ------------------------------------------------------------------
      EXTERNAL
     *			COMPUTE
C     ==================================================================
      WRITE (*,'(1X,A,$)') 'ENTER X1, X2:    '
      READ  (*,*) X1, X2
C     ------------------------------------------------------------------
      CALL STDRVR(COMPUTE, 2.0E-7, 20, 2, X1, X2)
C     ------------------------------------------------------------------
      END


      SUBROUTINE COMPUTE(Y, X1, X2)
C     ------------------------------------------------------------------
      REAL
     *			Y, X1, X2
C     ------------------------------------------------------------------
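C     Both terms blow up as X1 or X2 approaches zero, so small input
C     perturbations are strongly amplified there (mechanism I)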
      Y = 1.0E+03 * (X1 ** -0.5) + 1.0E+03 / X2
C     ------------------------------------------------------------------
      RETURN
      END


      SUBROUTINE STDRVR(FUNC, ERRABS, NITER, NARGS, X1, X2, X3)
C     ------------------------------------------------------------------
      INTEGER
     *			NITER, NARGS, 
     *			SEED, COUNT, I
C     ------------------------------------------------------------------
      REAL
     *			ERRABS, X1, X2, X3, Y,
     *			EPS, MINEPS, XIN(3), MIN, MAX, SUM,
     *			RELERR, ABSERR
C     ------------------------------------------------------------------
      PARAMETER(
     *			MINEPS =	2.0E-07)
C     ------------------------------------------------------------------
      SAVE 
     *			COUNT
C     ------------------------------------------------------------------
      DATA
     *			COUNT	/0/
C     ------------------------------------------------------------------
      EXTERNAL
     *			FUNC
C     ==================================================================
      IF ((NARGS .LT. 1) .OR. (NARGS .GT. 3)) 
     *		STOP 'STDRVR: Wrong number of args '
C     ------------------------------------------------------------------
      COUNT = COUNT + 1
      WRITE (*,*) 
      WRITE (*,*) ' Data collection run # ', COUNT
C     ------------------------------------------------------------------
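C     RAN is a DEC Fortran extension: it returns a uniform pseudo-random
C     number in [0,1) and updates SEED; if your compiler lacks it,
C     substitute its own random number generator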
      SEED = 1234567
C     ------------------------------------------------------------------
      IF (NARGS .GE. 1) XIN(1) = X1
      IF (NARGS .GE. 2) XIN(2) = X2
      IF (NARGS .GE. 3) XIN(3) = X3
      WRITE (*,*) ' Base values =   ', (XIN(I), I = 1, NARGS)
C     ------------------------------------------------------------------
      IF (NARGS .EQ. 1) CALL FUNC(Y, XIN(1))
      IF (NARGS .EQ. 2) CALL FUNC(Y, XIN(1), XIN(2))
      IF (NARGS .EQ. 3) CALL FUNC(Y, XIN(1), XIN(2), XIN(3))
C     ------------------------------------------------------------------
      MIN  = Y
      MAX  = Y
      SUM  = Y
C     ------------------------------------------------------------------
      IF (ERRABS .GE. MINEPS) THEN
        EPS = ERRABS
        WRITE (*,*) ' Initial error = ', EPS
      ELSE
        EPS = MINEPS
        WRITE (*,*) ' Initial error raised to ', MINEPS
      ENDIF
C     ------------------------------------------------------------------
      DO I = 2, NITER
        IF (NARGS .GE. 1) XIN(1) = X1 + EPS * 2.0 * (RAN(SEED) - 0.5)
        IF (NARGS .GE. 2) XIN(2) = X2 + EPS * 2.0 * (RAN(SEED) - 0.5)
        IF (NARGS .GE. 3) XIN(3) = X3 + EPS * 2.0 * (RAN(SEED) - 0.5)
C       ----------------------------------------------------------------
        IF (NARGS .EQ. 1) CALL FUNC(Y, XIN(1))
        IF (NARGS .EQ. 2) CALL FUNC(Y, XIN(1), XIN(2))
        IF (NARGS .EQ. 3) CALL FUNC(Y, XIN(1), XIN(2), XIN(3))
C       ----------------------------------------------------------------
        IF (Y .LT. MIN) MIN = Y
        IF (Y .GT. MAX) MAX = Y
        SUM = SUM + Y
      ENDDO
C     ------------------------------------------------------------------
      ABSERR = ABS(MAX - MIN) 
      RELERR = NITER * ABSERR / SUM
C     ------------------------------------------------------------------
      WRITE (*,'(2X,70(1H-))') 
      WRITE (*,*) ' Number of points =    ', NITER
      WRITE (*,*) ' Max = ', MAX, '   Min = ', MIN
      WRITE (*,*) ' Relative error =      ', 100.0 * RELERR, ' percent '
      WRITE (*,*) ' Error amplification = ', ABSERR / EPS
      WRITE (*,'(2X,70(1H=))') 
C     ------------------------------------------------------------------
      RETURN
      END


