Algorithms for Arbitrary Precision Floating Point Arithmetic
Proceedings of the 10th Symposium on Computer Arithmetic, 1991
"... We present techniques which may be used to perform computations of very high accuracy using only straightforward floating point arithmetic operations of limited precision, and we prove the validity of these techniques under very general hypotheses satisfied by most implementations of floating point ..."
Abstract - Cited by 65 (1 self)
We present techniques which may be used to perform computations of very high accuracy using only straightforward floating point arithmetic operations of limited precision, and we prove the validity of these techniques under very general hypotheses satisfied by most implementations of floating point arithmetic. To illustrate the application of these techniques, we present an algorithm which computes the intersection of a line and a line segment. The algorithm is guaranteed to correctly decide whether an intersection exists and, if so, to produce the coordinates of the intersection point accurate to full precision. Moreover, the algorithm is usually quite efficient; only in a few cases does guaranteed accuracy necessitate an expensive computation.

1. Introduction
"How accurate is a computed result if each intermediate quantity is computed using floating point arithmetic of a given precision?" The casual reader of Wilkinson's famous treatise [21] and similar roundoff error analyses might...
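The promise of full-precision results from limited-precision operations rests on error-free transformations of the basic arithmetic operations. As a minimal illustration of that building block (this is Knuth's classic TwoSum, not the paper's line-intersection algorithm), the rounding error of a single floating point addition can itself be captured exactly:

```python
def two_sum(a, b):
    """Error-free transformation of addition (Knuth's TwoSum):
    returns (s, e) with s = fl(a + b) and a + b = s + e exactly,
    for any two IEEE floating point numbers a and b."""
    s = a + b
    bb = s - a                      # the part of b absorbed into s
    e = (a - (s - bb)) + (b - bb)   # exact rounding error of a + b
    return s, e

# The error term recovers information a plain addition throws away:
print(two_sum(1e16, 1.0))  # (1e16, 1.0): the 1.0 lost in s survives in e
```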
Faster Numerical Algorithms via Exception Handling
1993
"... this paper we explore the use of this paradigm in the design of numerical algorithms. We exploit the fact that there are numerical algorithms that run quickly and usually give the right answer as well as other, slower, algorithms that are always right. By "right answer" we mean that the algorithm is ..."
Abstract - Cited by 43 (7 self)
In this paper we explore the use of this paradigm in the design of numerical algorithms. We exploit the fact that there are numerical algorithms that run quickly and usually give the right answer as well as other, slower, algorithms that are always right. By "right answer" we mean that the algorithm is stable, or that it computes the exact answer for a problem that is a slight perturbation of its input [9]; this is all we can reasonably ask of most algorithms. To take advantage of the faster but occasionally unstable algorithms, we will use the following paradigm: (1) Use the fast algorithm to compute an answer; this will usually be done stably. (2) Quickly and reliably assess the accuracy of the computed answer. (3) In the unlikely event the answer is not accurate enough, recompute it slowly but accurately.
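A minimal sketch of the three-step paradigm, with a linear solve as the stand-in computation; the fast/reliable routine pair here is an illustrative placeholder, not one of the paper's examples:

```python
import numpy as np

def solve_with_fallback(A, b):
    """(1) fast attempt, (2) cheap accuracy check, (3) rare slow recovery.
    np.linalg.solve / np.linalg.lstsq stand in for a fast and an
    always-reliable algorithm; the paradigm itself is the point."""
    n = len(b)
    eps = np.finfo(float).eps
    x = np.linalg.solve(A, b)                      # (1) fast path
    residual = np.linalg.norm(b - A @ x)           # (2) assess the answer
    scale = np.linalg.norm(A, 1) * np.linalg.norm(x) + np.linalg.norm(b)
    if residual <= 10 * n * eps * scale:           # usually true
        return x
    return np.linalg.lstsq(A, b, rcond=None)[0]    # (3) slow, reliable path
```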
THE ACCURACY OF FLOATING POINT SUMMATION
1993
"... The usual recursive summation technique is just one of several ways of computing the sum of n floating point numbers. Five summation methods and their variations are analyzed here. The accuracy of the methods is compared using rounding error analysis and numerical experiments. Four ofthe methods are ..."
Abstract - Cited by 39 (0 self)
The usual recursive summation technique is just one of several ways of computing the sum of n floating point numbers. Five summation methods and their variations are analyzed here. The accuracy of the methods is compared using rounding error analysis and numerical experiments. Four of the methods are shown to be special cases of a general class of methods, and an error analysis is given for this class. No one method is uniformly more accurate than the others, but some guidelines are given on the choice of method in particular cases.
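Compensated summation, one of the method classes analyzed, carries the rounding error of each addition in a running correction term; a minimal sketch:

```python
def compensated_sum(xs):
    """Kahan's compensated summation: the correction c recovers the
    low-order bits lost when y is added to the much larger s, giving
    an error bound essentially independent of the number of summands."""
    s, c = 0.0, 0.0
    for x in xs:
        y = x - c           # apply the previous step's correction
        t = s + y
        c = (t - s) - y     # low-order part of y lost in forming t
        s = t
    return s

# Recursive summation drops the small terms; the compensated sum does not:
xs = [1.0] + [1e-16] * 10**6
print(sum(xs), compensated_sum(xs))  # 1.0 vs about 1.0000000001
```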
NEW FAST AND ACCURATE JACOBI SVD ALGORITHM: II
2002
"... This paper presents new implementation of one–sided Jacobi SVD for triangular matrices and its use as the core routine in a new preconditioned Jacobi SVD algorithm, recently proposed by the authors. New pivot strategy exploits the triangular form and uses the fact that the input triangular matrix i ..."
Abstract - Cited by 32 (3 self)
This paper presents a new implementation of one-sided Jacobi SVD for triangular matrices and its use as the core routine in a new preconditioned Jacobi SVD algorithm, recently proposed by the authors. The new pivot strategy exploits the triangular form and uses the fact that the input triangular matrix is the result of a rank revealing QR factorization. When used in the preconditioned Jacobi SVD algorithm, it delivers superior performance, leading to the currently fastest method for computing the SVD with high relative accuracy. Furthermore, the efficiency of the new algorithm is comparable to that of the less accurate bidiagonalization based methods. The paper also discusses underflow issues in floating point implementation, and shows how to use perturbation theory to compensate for the imperfections of machine arithmetic on some systems.
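For orientation, a textbook one-sided (Hestenes) Jacobi SVD; the paper's contribution lies in pivoting and preconditioning refinements on top of this basic iteration, which this sketch does not attempt to reproduce:

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-14, max_sweeps=30):
    """Basic one-sided Jacobi SVD: rotate pairs of columns of A until all
    pairs are mutually orthogonal; the singular values are then the column
    norms. Assumes A has full column rank."""
    U = np.array(A, dtype=float)
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                if gamma == 0.0:
                    continue
                off = max(off, abs(gamma) / np.sqrt(alpha * beta))
                zeta = (beta - alpha) / (2.0 * gamma)
                # smaller root of t^2 + 2*zeta*t - 1 = 0, computed stably
                t = (1.0 / (zeta + np.sqrt(1.0 + zeta**2)) if zeta >= 0
                     else 1.0 / (zeta - np.sqrt(1.0 + zeta**2)))
                c = 1.0 / np.sqrt(1.0 + t**2)
                s = c * t
                G = np.array([[c, s], [-s, c]])     # plane rotation
                U[:, [p, q]] = U[:, [p, q]] @ G
                V[:, [p, q]] = V[:, [p, q]] @ G
        if off < tol:
            break
    sigma = np.linalg.norm(U, axis=0)
    return U / sigma, sigma, V   # A ~= (U/sigma) @ np.diag(sigma) @ V.T
```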
On Properties of Floating Point Arithmetics: Numerical Stability and the Cost of Accurate Computations
1992
"... Floating point arithmetics generally possess many regularity properties in addition to those that are typically used in roundoff error analyses; these properties can be exploited to produce computations that are more accurate and cost effective than many programmers might think possible. Furthermore ..."
Abstract - Cited by 26 (0 self)
Floating point arithmetics generally possess many regularity properties in addition to those that are typically used in roundoff error analyses; these properties can be exploited to produce computations that are more accurate and cost effective than many programmers might think possible. Furthermore, many of these properties are quite simple to state and to comprehend, but few programmers seem to be aware of them (or at least willing to rely on them). This dissertation presents some of these properties and explores their consequences for computability, accuracy, cost, and portability. For example, we consider several algorithms for summing a sequence of numbers and show that under very general hypotheses, we can compute a sum to full working precision at only somewhat greater cost than a simple accumulation, which can often produce a sum with no significant figures at all. This example, as well as others we present, can be generalized further by substituting still more complex algorith...
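One algorithm of the kind the abstract alludes to is doubly compensated summation, which, given summands pre-sorted by decreasing magnitude, returns the sum to full working precision for only a constant-factor overhead. A sketch, assuming the sorting cost is acceptable:

```python
def doubly_compensated_sum(xs):
    """Doubly compensated summation: like ordinary compensated summation,
    but the correction is folded back into the running sum at every step.
    Requires the summands ordered by decreasing magnitude."""
    xs = sorted(xs, key=abs, reverse=True)
    if not xs:
        return 0.0
    s, c = xs[0], 0.0
    for x in xs[1:]:
        y = c + x
        u = x - (y - c)       # error of forming y
        t = y + s
        v = y - (t - s)       # error of forming t
        z = u + v
        s = t + z             # fold both corrections back into s
        c = z - (s - t)       # keep whatever still would not fit
    return s
```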
The bidiagonal singular value decomposition and Hamiltonian mechanics
SIAM J. Numer. Anal., 1991
"... We consider computing the singular value decomposition of a bidiagonal matrixB. This problem arises in the singular value decomposition of a general matrix, and in the eigenproblem for a symmetric positive de nite tridiagonal matrix. We show that if the entries of B are known with high relative accu ..."
Abstract - Cited by 25 (6 self)
We consider computing the singular value decomposition of a bidiagonal matrix B. This problem arises in the singular value decomposition of a general matrix, and in the eigenproblem for a symmetric positive definite tridiagonal matrix. We show that if the entries of B are known with high relative accuracy, the singular values and singular vectors of B will be determined to much higher accuracy than the standard perturbation theory suggests. We also show that the algorithm in [Demmel and Kahan] computes the singular vectors as well as the singular values to this accuracy. We also give a Hamiltonian interpretation of the algorithm and use differential equation methods to prove many of the basic facts. The Hamiltonian approach suggests a way to use flows to predict the accumulation of error in other eigenvalue algorithms as well.
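A small numerical experiment illustrating the perturbation claim on assumed data (an illustration, not the paper's proof):

```python
import numpy as np

# Perturb each entry of a graded bidiagonal matrix by a relative 1e-10 and
# watch every singular value, including the smallest, move by only about
# 1e-10 in relative terms -- orders of magnitude less than norm-based
# perturbation theory (roughly 1e-10 * ||B|| / sigma_min) would allow.
rng = np.random.default_rng(0)
n = 8
d = np.logspace(0.0, -4.0, n)               # graded diagonal
B = np.diag(d) + np.diag(0.5 * d[:-1], 1)   # upper bidiagonal
B_pert = B * (1.0 + 1e-10 * rng.uniform(-1.0, 1.0, (n, n)))
s = np.linalg.svd(B, compute_uv=False)
s_pert = np.linalg.svd(B_pert, compute_uv=False)
print(np.max(np.abs(s_pert - s) / s))       # roughly 1e-10 in practice
```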
Handling Floating-Point Exceptions in Numeric Programs
ACM Transactions on Programming Languages and Systems, 1996
"... Language Constructs Termination exception mechanisms like in Ada and C++ are supposed to terminate an unsuccessful computation as soon as possible after an exception occurs. However, none of the examples of numeric exception handling presented earlier depends ACM Transactions on Programming Languag ..."
Abstract - Cited by 21 (0 self)
Language Constructs
Termination exception mechanisms like those in Ada and C++ are supposed to terminate an unsuccessful computation as soon as possible after an exception occurs. However, none of the examples of numeric exception handling presented earlier depends on the immediate termination of a calculation signaling an exception. The IEEE exception flags scheme actually takes advantage of the fact that an immediate jump is not necessary; by raising a flag, making a substitution, and continuing, the IEEE Standard supports both an attempted/alternate form and a default substitution with a single, simple response to exceptions. A drawback of the IEEE flag solution, though, is its obvious lack of structure. Instead of being forced to set and reset flags, one would ideally have available a language construct that more directly reflected the attempted/alternate algorit...
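A sketch of the attempted/alternate pattern the flags support, with NumPy standing in for direct flag access (Python does not expose the IEEE sticky flags, so the "raised flag" is detected by inspecting the result):

```python
import numpy as np

def log_sum_exp(a):
    """Attempt the fast naive formula, letting overflow substitute inf and
    continue (the IEEE default response); only if the result shows that an
    exception occurred do we run the slower, overflow-safe alternative."""
    a = np.asarray(a, dtype=float)
    with np.errstate(over='ignore', divide='ignore'):
        s = np.log(np.sum(np.exp(a)))   # attempted form: fast, may overflow
    if np.isfinite(s):
        return s                        # the common case
    m = np.max(a)                       # alternate form: scale first
    return m + np.log(np.sum(np.exp(a - m)))

print(log_sum_exp([0.0, 1.0]))        # fast path succeeds
print(log_sum_exp([1000.0, 1000.0]))  # overflow detected, alternate path
```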
On The Correctness Of Some Bisection-Like Parallel Eigenvalue Algorithms In Floating Point Arithmetic
Electronic Trans. Numer. Anal., 1995
"... Bisection is a parallelizable method for finding the eigenvalues of real symmetric tridiagonal matrices, or more generally symmetric acyclic matrices. ..."
Abstract - Cited by 13 (4 self)
Bisection is a parallelizable method for finding the eigenvalues of real symmetric tridiagonal matrices, or more generally symmetric acyclic matrices.
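The kernel that makes bisection so parallel is the Sturm count: an evaluation at any shift x tells how many eigenvalues lie below x, so disjoint intervals can be refined independently. A sketch for the tridiagonal case (diagonal d, off-diagonal e; the guard against a zero pivot is exactly the kind of floating point detail whose correctness papers like this one analyze):

```python
import numpy as np

def count_below(d, e, x):
    """Sturm count: the number of eigenvalues of the symmetric tridiagonal
    matrix with diagonal d and off-diagonal e that are less than x, i.e.
    the number of negative pivots in the LDL^T factorization of T - x*I."""
    count, t = 0, d[0] - x
    for i in range(1, len(d)):
        if t < 0:
            count += 1
        if t == 0.0:
            t = -np.finfo(float).tiny   # simple guard against a zero pivot
        t = (d[i] - x) - e[i - 1]**2 / t
    return count + (1 if t < 0 else 0)

def kth_eigenvalue(d, e, k, lo, hi, tol=1e-12):
    """Bisect for the k-th smallest eigenvalue (k = 1, 2, ...); the initial
    interval [lo, hi] can come from Gershgorin disks."""
    while hi - lo > tol * max(1.0, abs(lo), abs(hi)):
        mid = 0.5 * (lo + hi)
        if count_below(d, e, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```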