## Monday, November 28, 2016

### Unizor - Derivatives - Taylor Series

Notes to a video lecture on http://www.unizor.com

Derivatives - Taylor Series

Functions can be simple, like

f(x)=2x+3 or f(x)=x²−1

or more complex, like

f(x) = [x+ln(x)]^(1−sin(x)) · e^(tan(x+5))

Obviously, it is always easier to deal with simple functions. Unfortunately, the real functions that describe some processes are sometimes too complex to analyze at each value of the argument, and mathematicians recommend approximating a complex function with another one that is much simpler to deal with.

And the favorite simplification is approximation of a function with a polynomial.

There is a very simple reason for this. Computer processors can perform calculations very fast, but their instruction set includes only the four arithmetic operations. That makes it relatively easy to calculate the values of polynomials, but not of such functions as sin(x) or ln(x) that frequently occur in real-life problems.

Yet we all know that computers do calculate these functions. They do it by approximating these and many other functions with polynomials.
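As an illustration of this idea (a sketch, not how any particular processor or library actually implements it), sin(x) near 0 can be approximated by the polynomial x − x³/3! + x⁵/5! − x⁷/7!, evaluated with nothing but multiplications, divisions, additions, and subtractions:

```python
import math

def sin_poly(x):
    # Polynomial approximation x - x^3/3! + x^5/5! - x^7/7!,
    # written in nested (Horner-like) form so that only the four
    # arithmetic operations are used.
    x2 = x * x
    return x * (1 - x2 / 6 * (1 - x2 / 20 * (1 - x2 / 42)))

# Close to math.sin for small x; the error grows as |x| grows.
print(sin_poly(0.5), math.sin(0.5))
```

For x = 0.5 this seven-degree polynomial already agrees with the library sine to better than one part in ten million.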

Our task is to approximate any sufficiently smooth (in the sense of differentiability) function with a polynomial.

In particular, we will come up with a power series that converges to our function.

So, truncating this series at any member produces an approximation with a polynomial, and the approximation becomes better and better as we truncate the series further and further from the beginning, increasing the number of elements participating in the polynomial approximation.

First of all, we mentioned power series. Here we mean an infinite series whose nth member is Cn·xⁿ (where n=0, 1, 2...), which we can express as

P(x) = Σn≥0[Cn·xⁿ].

Any finite series of this type is a polynomial itself and does not need any further simplification. So, we are talking about an infinite series that has a value in the sense of a limit, as the number of members grows infinitely.

Obviously, not every power series of this type is convergent, but for sufficiently smooth functions defined on a finite segment [a,b] there exists a power series that converges to our function at each point of this segment, and we can achieve any quality of approximation by allowing a sufficient number of members of the power series to participate in the approximation, that is, by cutting the tail of the series sufficiently far from the beginning.

Let's use the symbol PN(x) for the partial sum of the members of our series up to the Nth power:

PN(x) = Σn∈[0,N][Cn·xⁿ].

Using this notation,

P(x) = limN→∞ PN(x)
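To make the notation concrete, here is a small sketch (the coefficients Cn = 1 are chosen purely for illustration): with Cn = 1 the series becomes the geometric series Σ xⁿ, which for |x| < 1 converges to 1/(1−x), and the partial sums PN(x) approach that limit as N grows.

```python
def partial_sum(coeff, x, N):
    # P_N(x) = sum of C_n * x**n for n = 0..N, with C_n given by coeff(n).
    return sum(coeff(n) * x**n for n in range(N + 1))

# Geometric series: C_n = 1 for all n, converging to 1/(1-x) for |x| < 1.
for N in (5, 10, 20):
    print(N, partial_sum(lambda n: 1, 0.5, N))   # approaches 1/(1-0.5) = 2
```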

Let's analyze the representation of a function f(x), sufficiently smooth on a segment [a,b], by a power series P(x).

In particular, let's assume that we want to find coefficients Cn of such a power series that

(1) for some specific value of the argument x = x0, called the center of expansion, every partial sum PN(x) has the same value as the function f(x), regardless of how many members N participate in the sum, that is

∀N ≥ 0: f(x0)=PN(x0);

(2) this power series converges to our function for every argument x∈[a,b], that is f(x)=P(x).

The first requirement assures that, at least at the one point x = x0, our approximation of the function with a partial power series will be exact, regardless of how long the series is.

The first requirement, that every partial sum PN(x0) of a power series at point x = x0 equals the value f(x0) of the original function at this point, makes it convenient to represent our power series as

P(x) = Σn≥0[Cn·(x−x0)ⁿ]

with C0=f(x0).

Now, no matter how close to the beginning we truncate P(x) to PN(x), we see that

f(x0) = PN(x0) for all N ≥ 0.

The first requirement is, therefore, satisfied in this form of our power series.

Let's now concentrate on the second requirement for P(x) to converge to f(x) for any point x of a segment [a,b].

We will do it in two steps.

Step 1 assumes that P(x) does converge to f(x) at every point. Based on this assumption, we will determine all the coefficients Cn. In a sense, these specific values of the coefficients are a necessary condition for the equality between P(x) and f(x).

In step 2, knowing that the coefficients Cn of a power series P(x) must have the values derived in step 1, we will discuss the issue of convergence.

So, assume the following is true:

f(x) = Σn≥0[Cn·(x−x0)ⁿ].

As we have already determined, C0=f(x0).

Let's differentiate both sides of the equality above.

The member C0·(x−x0)⁰ will disappear during differentiation, since it is a constant, and any member of the type K·(x−x0)ᵏ will become k·K·(x−x0)ᵏ⁻¹.

So, the resulting equality will look like this:

f′(x) = Σn≥1[n·Cn·(x−x0)ⁿ⁻¹].

This equality is supposed to hold for any argument x. Substituting x=x0, all members of the infinite series except the first one become zero. The first one equals

1·C1·(x−x0)⁰ and, since the exponent is 0, we obtain the following equality:

f′(x0) = 1·C1

Now we know the value of the next coefficient in our infinite series:

C1 = f′(x0) / 1
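A quick sanity check of this step, using the geometric series as an assumed example (center x0 = 0, all Cn = 1, so f(x) = 1/(1−x) and f′(x) = 1/(1−x)²): in the termwise-differentiated partial sum, only the n = 1 term survives at x = x0.

```python
def dP_N(x, N):
    # Termwise derivative of the partial sum with C_n = 1:
    # sum of n * x**(n-1) for n = 1..N.
    return sum(n * x**(n - 1) for n in range(1, N + 1))

# At x = 0 every term with n >= 2 vanishes, leaving 1*C_1 = 1,
# which matches f'(0) = 1/(1-0)**2 = 1.
print(dP_N(0.0, 50))
```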

The next procedure repeats the previous one. Let's take another derivative.

The member 1·C1·(x−x0)⁰ will disappear during differentiation, since it is a constant, and any member of the type K·(x−x0)ᵏ will become k·K·(x−x0)ᵏ⁻¹.

So, the resulting equality will look like this:

f″(x) = Σn≥2[n·(n−1)·Cn·(x−x0)ⁿ⁻²].

This equality is supposed to hold for any argument x. Substituting x=x0, all members of the infinite series except the first one become zero. The first one equals

2·1·C2·(x−x0)⁰ and, since the exponent is 0, we obtain the following equality:

f″(x0) = 1·2·C2

Now we know the value of the next coefficient in our infinite series:

C2 = f″(x0) / (1·2)

It can easily be seen that the repetition of the same procedure leads to the following values of coefficients Cn of our series:

C3 = f‴(x0) / (1·2·3)

C4 = f⁽⁴⁾(x0) / (1·2·3·4)

and, in general,

Cn = f⁽ⁿ⁾(x0) / n!

where f⁽ⁿ⁾(x0) signifies the nth derivative at point x0 (with the 0th derivative being the original function) and n! is "n factorial", the product of all integers from 1 to n, with 0! equal to 1 by definition.

We came up with the following form of representation of a function f(x) as a power series:

f(x) = Σn≥0[f⁽ⁿ⁾(x0)·(x−x0)ⁿ/n!]

This representation is called Taylor series.

Sometimes, in the case x0=0, it is referred to as a Maclaurin series.
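For example (a sketch using f(x) = eˣ, where every derivative at x0 = 0 equals 1, so Cn = 1/n!), the Maclaurin partial sums converge to the true value quickly:

```python
import math

def maclaurin_exp(x, N):
    # For f(x) = e**x, f^(n)(0) = 1 for every n, so C_n = 1/n!
    # and the partial sum is sum of x**n / n! for n = 0..N.
    return sum(x**n / math.factorial(n) for n in range(N + 1))

print(maclaurin_exp(1.0, 20), math.e)
```

With only 21 terms the partial sum at x = 1 matches math.e to well beyond twelve decimal places.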

This form satisfies the first requirement we set at the beginning: for the center of expansion x = x0, any partial sum of this series has the same value as the function f(x), regardless of how many members N participate in the sum.

We can also say that, if there is a power series converging to our function for every argument x∈[a,b], it must have the form above with the coefficients derived above.

Our next task is to examine conditions under which the power series above exists and converges.

The obvious first requirement is infinite differentiability of the function f(x), since the coefficients of our series contain derivatives of every order.

As for convergence, it depends on the values of the derivatives at point x0. A reasonable assumption might be that the derivative f⁽ⁿ⁾(x0) of any order n is bounded by some maximum value M:

|f⁽ⁿ⁾(x0)| ≤ M

Let's prove that in this case the series converges.

Assuming the above bound on the derivatives of every order at point x0, each member of the series is bounded in absolute value by M·cⁿ/n!, where c=|x−x0|. So, the problem of convergence is reduced to proving that the following series converges for any c:

S(c) = Σn≥0[cⁿ/n!]

Theorem

The sequence cⁿ/n!, where c is any positive constant and n is an infinitely increasing index, is bounded, starting from some index m, by a geometric progression with a positive quotient smaller than 1.

Proof

Choose an integer m greater than c and start analyzing the members of this sequence with index numbers n greater than index m.

The following inequalities are true then:

cⁿ/n! = cⁿ⁻ᵐ·cᵐ/[n·(n−1)·...·(m+1)·m!] ≤

≤ cⁿ⁻ᵐ·cᵐ/[mⁿ⁻ᵐ·m!] =

= (c/m)ⁿ⁻ᵐ·Q

where the constant Q equals

Q = cᵐ/m!

The last expression represents a geometric progression with the first member Q·c/m and quotient c/m. Since m was chosen as an integer greater than c, the quotient of this geometric progression is less than 1.

End of proof.
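The inequality from the proof can be checked numerically (an illustrative sketch; the values of c and m below are arbitrary choices, with m an integer greater than c):

```python
import math

c = 4.7
m = 5                      # any integer greater than c works
Q = c**m / math.factorial(m)

# For every n > m, the term c**n/n! is bounded by the geometric
# term (c/m)**(n-m) * Q, whose quotient c/m is less than 1.
for n in range(m + 1, 40):
    term = c**n / math.factorial(n)
    bound = (c / m)**(n - m) * Q
    assert term <= bound
print("bound holds for all n checked")
```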

Now we see that the members of the power series we considered above are bounded by the members of a geometric progression with a positive quotient smaller than 1. For geometric progressions, that is a sufficient condition for their sum to converge. Therefore, the power series is convergent.

This convergence, as was mentioned above, holds under the assumption that all derivatives of the original function f(x) at point x0 are bounded by some constant M.

This condition on the derivatives can be weakened in different ways, which we will not discuss here. The question of the precision of the approximation by partial sums of the series also remains open. There are different approaches to evaluating the quality of this approximation, which we leave for self-study.
