There are two main types of questions concerning polynomial approximations: "best fit" and "exact fit".

The question of "best fit".

Two matched lists of numbers that have the same length n are given:

x_{1}, x_{2}, ..., x_{n} and y_{1}, y_{2}, ..., y_{n}.

The task is to find a polynomial of a given degree k < n,

P(x) = a_{0} + a_{1}*x + ... + a_{k}*x^{k}

such that P(x_{1}) ≈ y_{1}, P(x_{2}) ≈ y_{2}, ..., P(x_{n}) ≈ y_{n}.

The quality of the match is measured by the "square error" e,

e = (y_{1} - P(x_{1}))^{2} + (y_{2} - P(x_{2}))^{2} + ... + (y_{n} - P(x_{n}))^{2}

and the smaller e is, the better the match.
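As a minimal sketch, the square error above can be computed directly in Python; the function names `poly_value` and `square_error` are ours, chosen for illustration:

```python
# Coefficients are listed lowest degree first: [a0, a1, ..., ak].

def poly_value(coeffs, x):
    """Evaluate P(x) = a0 + a1*x + ... + ak*x^k (Horner's method)."""
    value = 0.0
    for a in reversed(coeffs):
        value = value * x + a
    return value

def square_error(coeffs, xs, ys):
    """e = (y1 - P(x1))^2 + (y2 - P(x2))^2 + ... + (yn - P(xn))^2."""
    return sum((y - poly_value(coeffs, x)) ** 2 for x, y in zip(xs, ys))
```

For example, with P(x) = 1 + 2*x and the data x = (0, 1), y = (1, 3), the fit is exact and e = 0.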

Remarks.

In this problem the only assumption is that both lists have the same length. Each list may contain repetitions.

In the case when the degree k = 1, P(x) = a_{0} + a_{1}*x, the equation y = P(x) is the equation of the regression line that is often used in statistics.
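For the k = 1 case the minimizing coefficients have a well-known closed form in terms of the means of the data. A sketch in Python (the function name is illustrative, not from the text):

```python
def regression_line(xs, ys):
    """Least-squares line P(x) = a0 + a1*x through the data points.
    Closed-form solution of the degree k = 1 case."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # The slope a1 minimizes the square error e; a0 follows from the means.
    a1 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
          / sum((x - mean_x) ** 2 for x in xs))
    a0 = mean_y - a1 * mean_x
    return a0, a1
```

For the points (0, 1), (1, 3), (2, 5), which lie exactly on y = 1 + 2x, this returns a0 = 1 and a1 = 2.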

Before modern computing technology was developed, finding polynomials of higher degree was computationally too slow and too error-prone to be practical. Today it can easily be done on a graphing calculator for reasonably long lists of data.

This is a standard technique for constructing a "data fitting curve" when the relationship between the x's and the y's is not linear.
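For a general degree k, one standard way to find the minimizing coefficients is to solve the normal equations (V^T V) a = V^T y, where V is the matrix whose i-th row is (1, x_i, x_i^2, ..., x_i^k). A self-contained sketch in plain Python (no libraries; the function names are ours):

```python
def solve_linear(A, b):
    """Solve the square system A·a = b by Gaussian elimination
    with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    a = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        a[r] = (M[r][n] - sum(M[r][c] * a[c] for c in range(r + 1, n))) / M[r][r]
    return a

def least_squares_poly(xs, ys, k):
    """Coefficients [a0, ..., ak] minimizing the square error e,
    found from the normal equations (V^T V) a = V^T y."""
    n = len(xs)
    V = [[x ** j for j in range(k + 1)] for x in xs]
    VtV = [[sum(V[i][p] * V[i][q] for i in range(n))
            for q in range(k + 1)] for p in range(k + 1)]
    Vty = [sum(V[i][p] * ys[i] for i in range(n)) for p in range(k + 1)]
    return solve_linear(VtV, Vty)
```

With data lying exactly on y = 1 + 2x, the degree-1 fit recovers a0 = 1, a1 = 2; in general the fit is only approximate and the square error e is the smallest possible for the chosen degree.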

The question of an exact fit.

"Technically" this is a special case of the best fit question.

Again we have two lists given,

x_{1}, x_{2}, ..., x_{n} and y_{1}, y_{2}, ..., y_{n},

but this time all the x's must be DISTINCT.

And we are looking for a polynomial P(x) of ANY degree k < n such that,

y_{1} = P(x_{1}), y_{2} = P(x_{2}), ..., y_{n} = P(x_{n}).

(Such a polynomial always exists.)
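One constructive proof that such a polynomial exists is Lagrange's form of the interpolating polynomial, which can be evaluated at any point directly, without first solving for the coefficients. A sketch (assuming the x's are distinct, as required above):

```python
def lagrange_value(xs, ys, x):
    """Evaluate at x the unique polynomial of degree < n that passes
    exactly through the points (x_i, y_i).  Requires distinct x's."""
    total = 0.0
    for i, xi in enumerate(xs):
        # Basis term: y_i times the product of (x - x_j)/(x_i - x_j), j != i.
        term = ys[i]
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total
```

For the points (0, 0), (1, 1), (2, 4) the interpolant is x^2, so evaluating at x = 3 gives 9.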

In both cases, the coefficients a_{0}, a_{1}, ..., a_{k} are found by using matrices. In the second case the formula that is used can be made slightly simpler, but the difference is not significant.
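In the exact-fit case the matrix computation reduces to solving the square Vandermonde system V·a = y, where row i of V is (1, x_i, x_i^2, ..., x_i^{n-1}). A self-contained sketch using Gaussian elimination (plain Python; the function name is ours):

```python
def exact_fit_coefficients(xs, ys):
    """Coefficients [a0, ..., a_{n-1}] of the unique polynomial of
    degree < n through the points, found by solving the square
    Vandermonde system V·a = y.  Requires distinct x's."""
    n = len(xs)
    # Augmented matrix [V | y]; row i is (1, x_i, ..., x_i^{n-1}, y_i).
    M = [[x ** j for j in range(n)] + [y] for x, y in zip(xs, ys)]
    for col in range(n):  # forward elimination with partial pivoting
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    a = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        a[r] = (M[r][n] - sum(M[r][c] * a[c] for c in range(r + 1, n))) / M[r][r]
    return a
```

For the points (0, 0), (1, 1), (2, 4) this recovers the coefficients (0, 0, 1), i.e. P(x) = x^2.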

The significant difference lies in applications. Numerical computation of values of most functions (trigonometric, exponential, logarithmic, and others) is done by constructing approximating polynomials.

It is done as follows. We want to compute values of a given function F(x) for any given x between a and b. We construct a polynomial P(x) which is "approximately" F(x), namely F(x) ≈ P(x) for all x between a and b, and then we compute P(x) instead of F(x).

To construct P(x) we proceed as follows. We choose n different numbers between a and b,

a ≤ x_{1} < x_{2} < ... < x_{n} ≤ b.

We compute (by any means),

y_{1} = F(x_{1}), y_{2} = F(x_{2}), ..., y_{n} = F(x_{n}).

Finally we construct a polynomial P(x) such that,

y_{1} = P(x_{1}), y_{2} = P(x_{2}), ..., y_{n} = P(x_{n}).
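The whole procedure can be sketched end to end. As a hypothetical example (the choice of F, of [a, b], and of equally spaced nodes are our assumptions for illustration), take F(x) = e^x on [0, 1] with n = 5 nodes:

```python
import math

def interp_value(xs, ys, x):
    """Lagrange evaluation of the exact-fit polynomial P at x."""
    total = 0.0
    for i, xi in enumerate(xs):
        term = ys[i]
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Step 1: choose n different numbers a <= x_1 < ... < x_n <= b.
a, b, n = 0.0, 1.0, 5
xs = [a + i * (b - a) / (n - 1) for i in range(n)]

# Step 2: compute y_i = F(x_i) by any means (here the math library).
ys = [math.exp(x) for x in xs]

# Step 3: P(x) now stands in for F(x); check the error on a fine grid.
grid = [a + t * (b - a) / 200 for t in range(201)]
max_err = max(abs(interp_value(xs, ys, x) - math.exp(x)) for x in grid)
```

Even with only five nodes the maximum error of P on [0, 1] is small, which is why computing P(x) instead of F(x) is practical.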

Different choices for the x's usually lead to different polynomials, and which one is "better" depends on the purpose of the approximation. For example, an "overall" good fit is not the best if we want to compute an integral of F(x), and so on. Therefore we have many different methods of approximations that go under the names of their authors, Lagrange (approximating) polynomials, Chebyshev (approximating) polynomials, Legendre (approximating) polynomials, and many others.
