Re: Calculator site update Message #11 Posted by Valentin Albillo on 26 Aug 2005, 5:15 a.m., in response to message #9 by Palmer O. Hanson, Jr.
Hi, Palmer:
Palmer posted:
"Many references suggest that this is a risky procedure"
It may be a risky procedure for a number of reasons. Some depend
on the kind of fitting you're up to (i.e., exact, mathematical
data or experimental data), others depend on the size of the
problem and the computing precision being used. Both can limit
the accuracy and/or usefulness of the polynomial fit you get.
For high-degree exact polynomial fits to a set of data points,
you're solving a large system of linear equations. The related
matrix approaches a Hilbert matrix as the degree increases, and
as has been mentioned a number of times (and you can see a
thorough discussion in my article "Mean Matrices", published
in Datafile V24N4 (July/August 2005)), Hilbert matrices are
extremely difficult to handle numerically, as they're almost
singular. Thus, exactly fitting an Nth-degree polynomial to N+1
data points involves numerically dealing with a close approximation
to an (N+1)x(N+1) Hilbert matrix, and for large N this will
quickly degrade precision to the point of the result being unusable.
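To put a number on that degradation, here's a short NumPy sketch
(my own illustration, not from the Datafile article) that prints the
condition number of the n x n Hilbert matrix for a few sizes; each
extra order of magnitude in the condition number costs roughly one
decimal digit of working precision:

```python
import numpy as np

# The Hilbert matrix H[i][j] = 1/(i+j+1) is the classic example of an
# ill-conditioned matrix: its condition number grows exponentially
# with n, so high-degree exact fits rapidly exhaust double precision.
for n in (4, 8, 12):
    H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
    print(n, np.linalg.cond(H))
```

Already at n = 12 the condition number approaches 1e16, i.e. the
full 15-16 significant digits of IEEE double precision, so the
computed coefficients can be essentially meaningless.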
"The curve generated will pass through every point in the set
[...] and the fit will be "perfect". The problem with this is
that inherent measurement noise will be magnified, the curve
will gyrate wildly outside the points in the set, the "energy"
will be nowhere near minimized, and nothing will have been
made simple. ..."
That's not necessarily so. It depends on the nature of the
problem, i.e., whether we're dealing with approximate, empirical
data obtained via some measurement or experiment, or it's just
a mathematical problem of fitting some polynomial to a given
data set or mathematical function (say y=exp(x)) over some range.
For this last case, fitting a polynomial to some given
function, it further depends on the function being approximated
AND the data points used in the approximation, i.e., equally
spaced or arbitrarily spaced.
For suitable functions (say y=sin(x)) and equally spaced
data points, convergence does occur, and the polynomial thus
fitted closely resembles the original function as the degree
(i.e., number of points used) increases. But in the general
case, and for many transcendental functions, this convergence
does not hold (this is the well-known Runge phenomenon, with
y=1/(1+25*x^2) as the classic example), and the resulting
polynomial, though passing through all data points used,
oscillates wildly and absolutely diverges from the function
at all points except the fit set.
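A quick NumPy sketch of that divergence, using the classic Runge
function mentioned above (my choice of degree and node count, just
to make the effect visible):

```python
import numpy as np

def runge(x):
    # The textbook example of a function for which equally spaced
    # polynomial interpolation diverges as the degree grows.
    return 1.0 / (1.0 + 25.0 * x * x)

# Fit an exact degree-N polynomial through N+1 equally spaced
# points on [-1, 1], then evaluate it between the nodes.
N = 14
xs = np.linspace(-1.0, 1.0, N + 1)
coeffs = np.polyfit(xs, runge(xs), N)

x_test = np.linspace(-1.0, 1.0, 401)
err = np.max(np.abs(np.polyval(coeffs, x_test) - runge(x_test)))
print(err)  # well above 1, even though runge(x) never exceeds 1
```

The polynomial passes exactly through all 15 nodes, yet between
them it swings several units away from a function whose values
all lie in (0, 1].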
"When is it safe to use just one more data point than the degree of the polynomial to be fitted? How does one tell? "
As stated, it's a difficult theoretical question. As a rule of thumb, I would advise the following:
- If the data are experimental and there are a large number of points, forget about an exact fit. Your best bet is to compute a number of polynomial least-squares regressions for degrees 1, 2, 3, ..., and for each degree compute the RMS of the error. As long as the RMS is decreasing sharply, keep on increasing the degree; when the RMS stops changing significantly, halt.
For instance, if for degrees 1 to 5 you get RMS errors 0.532, 0.183, 0.027, 0.019, 0.015, you should stick to degree 3 (RMS = 0.027). The 'extra' accuracy you seem to get with degrees 4 and 5 is just experimental noise in the data. Least-squares approximation is essentially a smoothing of the data, which will remove most of the noise; increasing the degree is just a senseless attempt to fit the noise as well.
There's also the fact that, as in the case of an exact polynomial fit, the related matrix for least-squares regression is also nearly singular for large degrees, but this can be avoided by using orthogonal polynomials instead of dealing directly with said matrix. However, for experimental data, high-degree least-squares regressions are seldom justifiable; stick to the diminishing-RMS criterion to decide where to stop. The computed polynomial will then yield improved values, better even than the data points themselves, as most experimental noise will be smoothed out by the regression.
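The rule of thumb above can be sketched in NumPy as follows
(the cubic-plus-noise data set is synthetic, invented here purely
to show the RMS plateau):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "experimental" data: a cubic signal plus Gaussian
# measurement noise of standard deviation 0.02.
x = np.linspace(0.0, 2.0, 50)
y = 1.0 + 0.5 * x - 2.0 * x**2 + 0.8 * x**3 \
    + rng.normal(0.0, 0.02, x.size)

# Fit increasing degrees and watch the RMS of the residuals:
# it drops sharply up to degree 3, then flattens out near the
# noise level -- the signal to stop there.
for deg in range(1, 6):
    residuals = np.polyval(np.polyfit(x, y, deg), x) - y
    rms = np.sqrt(np.mean(residuals**2))
    print(deg, rms)
```

Past the true degree of the underlying signal, the RMS hovers
around the noise level (0.02 here), and further degrees only
chase the noise.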
- If the data set is a mathematical set of data points, or you're trying to fit a suitable polynomial to a mathematical function (say the Gamma function or some Bessel function), there's no smoothing of experimental noise involved, and your best bet is not a least-squares polynomial fit but a minimax polynomial fit. This is a very complex subject and this is not the place to discuss it; have a look at my Minimax Polynomial Fit article in Datafile V24N5, where it is discussed at length.
Using this kind of approximation will get you the absolutely optimal fit, with guaranteed smallest maximum error over the whole specified interval, and no oscillations or artifacts whatsoever.
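A true minimax fit requires the Remez exchange algorithm, which
is beyond a forum post; as a hedged stand-in, interpolating at
Chebyshev nodes comes within a small factor of the minimax error
and shows none of the equally-spaced oscillation. A NumPy sketch
(my own illustration, with y=exp(x) as the target):

```python
import numpy as np

def cheb_fit(f, n, a=-1.0, b=1.0):
    """Interpolate f by a degree-n polynomial at the n+1 Chebyshev
    nodes mapped onto [a, b] -- a cheap near-minimax approximation."""
    k = np.arange(n + 1)
    nodes = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))   # on [-1, 1]
    x = 0.5 * (a + b) + 0.5 * (b - a) * nodes             # map to [a, b]
    return np.polynomial.polynomial.Polynomial.fit(x, f(x), n)

p = cheb_fit(np.exp, 8)
x_test = np.linspace(-1.0, 1.0, 1001)
err = np.max(np.abs(p(x_test) - np.exp(x_test)))
print(err)  # small and essentially flat across the interval
```

Contrast this with the equally spaced case: same degree, same
interval, but the maximum error is tiny and nearly equi-rippled,
which is the behavior a true minimax fit perfects.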
Best regards from V.
Edited: 26 Aug 2005, 7:53 a.m.
