Wednesday, August 24, 2016

Unizor - Derivatives - Equivalence of Function Limit Definitions





Notes to a video lecture on http://www.unizor.com


Function Limit - Why Two Definitions?

Recall two definitions of function limit presented in the previous lecture.

Definition 1

Value a is a limit of function f(x) when its argument x converges to real number r, if for ANY sequence of argument values {xn} converging to r the sequence of function values {f(xn)} converges to a.

Symbolically:
∀ {xn}→r ⇒ {f(xn)}→a

Definition 2

Value a is a limit of function f(x) when its argument x converges to real number r, if for any positive ε there should be positive δ such that, if x is within δ-neighborhood of r (that is, |x−r| ≤ δ), then f(x) will be within ε-neighborhood of a (that is, |f(x)−a| ≤ ε).

Symbolically:
∀ ε>0 ∃ δ>0:
|x−r| ≤ δ ⇒ |f(x)−a| ≤ ε


Sometimes the last two inequalities in the above definition are specified as "less" instead of "less or equal". It makes no difference.

First of all, let's answer the question of the title of this lecture: Why two definitions?

Obviously, there were historical reasons. Mathematicians of the 18th and early 19th centuries suggested different approaches to function limits and function continuity that led to both definitions. Cauchy, Bolzano, Weierstrass and others contributed to these definitions.
Definition 1 sounds "more human" and more natural, though it is difficult to work with when we want to prove the existence of a limit.
Definition 2, seemingly "less human", is easier to use when proving the existence of a limit. It is more constructive.

In this lecture we will prove the equivalence of both definitions. That is, if a function has a limit according to Definition 1, it has the same limit according to Definition 2 and, conversely, the existence of a limit by Definition 2 implies that this same limit complies with Definition 1.

Theorem 1
IF
for any sequence of argument values {xn} converging to r the sequence of function values {f(xn)} converges to a
[that is, if f(x)→a as x→r according to Definition 1],
THEN
for any positive ε there should be positive δ such that, if x is within δ-neighborhood of r (symbolically, |x−r| ≤ δ), then f(x) will be within ε-neighborhood of a (symbolically, |f(x)−a| ≤ ε)
[that is, it follows that f(x)→a while x→r, according to Definition 2].

Proof

Choose any positive ε, however small, thereby fixing some ε-neighborhood around limit value a.
Let's prove the existence of δ such that, if x is closer to r than δ, then f(x) will be closer to a than ε.
Assume the opposite, that is, no matter what δ we choose, there is some value of argument x in the δ-neighborhood of r such that f(x) is outside of the ε-neighborhood of a.

Let's choose δ1=1 and find an argument value x1 such that |x1−r| ≤ δ1, while f(x1) is outside of the ε-neighborhood of a.
Next choose δ2=1/2 and find an argument value x2 such that |x2−r| ≤ δ2, while f(x2) is outside of the ε-neighborhood of a.
Next choose δ3=1/3 and find an argument value x3 such that |x3−r| ≤ δ3, while f(x3) is outside of the ε-neighborhood of a.
etc.
Generally, on the nth step choose δn=1/n and find an argument value xn such that |xn−r| ≤ δn, while f(xn) is outside of the ε-neighborhood of a.

Continue this process of building sequence {xn}.
This sequence, obviously, converges to r since |xn−r| ≤ 1/n, but f(xn) is always outside of the ε-neighborhood of a, that is {f(xn)} does not converge to a, which contradicts our premise that, as long as {xn} converges to r, {f(xn)} must converge to a.

So, our assumption that, no matter what δ we choose, there is some value of argument x in the δ-neighborhood of r such that f(x) is outside of the ε-neighborhood of a, is incorrect, and there exists such δ that, as soon as argument x is in the δ-neighborhood of r, the value of the function f(x) is within the ε-neighborhood of a.
End of proof.
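Here is a minimal numerical sketch (in Python) of the construction used in this proof, applied to f(x) = sin(1/x) near r = 0, a function that has no limit there. The choice of f, the candidate value a, the value of ε and the sampling grid are all illustrative assumptions, not part of the proof.

import math

def f(x):
    return math.sin(1.0 / x)       # has no limit as x -> 0

r, a, eps = 0.0, 0.0, 0.5          # candidate limit a and a fixed epsilon

for n in range(1, 8):
    delta = 1.0 / n
    # search the delta-neighborhood of r for a point whose value
    # stays outside the eps-neighborhood of a
    x_n = next(x for x in (r + delta / k for k in range(1, 2000))
               if abs(f(x) - a) > eps)
    print(f"n={n}: delta={delta:.3f}  x_n={x_n:.5f}  f(x_n)={f(x_n):+.3f}")

# The points x_n converge to r = 0, yet f(x_n) never enters the
# eps-neighborhood of a, so a cannot be the limit by Definition 1.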

Theorem 2
IF
for any positive ε there is positive δ such that, if x is within δ-neighborhood of r (symbolically, |x−r| ≤ δ), then f(x) will be within ε-neighborhood of a (symbolically, |f(x)−a| ≤ ε)
[that is, if f(x)→a as x→r according to Definition 2],
THEN
for any sequence of argument values {xn} converging to r the sequence of function values {f(xn)} converges to a
[that is, it follows that f(x)→a while x→r, according to Definition 1].

Proof

Let's consider any sequence {xn}→r and prove that {f(xn)}→a.
In other words, for any positive ε we will find such number N that for all n ≥ N the inequality |f(xn)−a| ≤ ε is true.

Based on the premise of this theorem, there exists positive δ such that, if |x−r| ≤ δ, it is true that |f(x)−a| ≤ ε.

Since {xn}→r, for this particular δ there exists number N such that, if n ≥ N, it is true that |xn−r| ≤ δ.
So, for all n ≥ N it is true that |f(xn)−a| ≤ ε.
End of proof.
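As a small illustration of this argument, here is a Python sketch for f(x) = 3x + 1 near r = 2 (so a = 7). The function, the sequence xn = 2 + 1/n and the choice δ = ε/4 are illustrative assumptions (any δ ≤ ε/3 works for this particular f).

def f(x):
    return 3 * x + 1

r, a, eps = 2.0, 7.0, 0.01
delta = eps / 4                    # a delta that works for this epsilon

x_seq = [r + 1.0 / n for n in range(1, 10001)]   # a sequence x_n -> r

# N is the first index with |x_n - r| <= delta; for all n >= N the premise
# of the theorem then gives |f(x_n) - a| <= eps
N = next(n for n, x in enumerate(x_seq, start=1) if abs(x - r) <= delta)
print(N)                                                  # roughly 400 here
print(all(abs(f(x) - a) <= eps for x in x_seq[N - 1:]))   # True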

Tuesday, August 23, 2016

Unizor - Derivatives - Function Limit Definition





Notes to a video lecture on http://www.unizor.com


Function Limit -
Two Definitions


A sequence {Xn}, from the functional viewpoint, can be considered as a function from the set of all natural numbers N (the domain of this function) into the set of all real numbers R (the co-domain of this function).

Order number n is an argument of this function, while Xn represents a value of this function for argument n.

When we consider a limit of a sequence, we have in mind a process of increasing the argument n without any bounds (to infinity, as we might say) and observing the convergence or non-convergence of the values Xn to some real number, which, in case of convergence, is called the limit of this sequence as order number n increases to infinity.

More rigorously, real number a is a limit of a sequence {Xn}, if for any (however small) ε > 0 there exists order number N such that
|a − Xn| ≤ ε for any n ≥ N.

In this lecture we will expand our field from sequences to functions of a real argument with real values. We will generalize the concept of a limit to processes where the argument not only increases to infinity, but converges to any real number.
After this we will be ready to define derivatives and other interesting principles of Calculus.

To consider a function instead of a sequence presents no difficulty, since a sequence is a function. All we do is expand the domain from all natural numbers N to all real numbers R.

Without much effort we can define a limit of a function as its argument increases to infinity. This practically repeats the definition of a limit of a sequence and looks like this.

Real number a is a limit of function f(x) when x increases to infinity, if for any positive distance ε, however small, there is a real number r such that for all x ≥ r it is true that f(x) is closer to a than distance ε, that is |f(x)−a| ≤ ε.

Let's express this symbolically.
∀ ε>0 ∃ r: x ≥ r ⇒ |f(x)−a| ≤ ε
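A quick numerical illustration of this definition in Python, for the made-up example f(x) = (2x+1)/x, whose limit at infinity is a = 2: here |f(x)−2| = 1/x, so any r ≥ 1/ε works (the code takes r = 2/ε and arbitrary sample points).

def f(x):
    return (2 * x + 1) / x

eps = 0.001
r = 2 / eps      # any r >= 1/eps works; take r = 2/eps to stay clear of the boundary
print(all(abs(f(x) - 2) <= eps for x in [r, 5 * r, 100 * r, 10 ** 9]))   # True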

Having the set of natural numbers N as a domain dictates only one type of process where we can observe the change of sequence values - when order number n increases to infinity.
With a domain expanded to all real numbers we have more choices. One such choice, as defined above, is to increase an argument of a function to infinity - in total analogy with sequences. Another is to decrease the argument to negative infinity.
Here is how it can be defined.

Real number a is a limit of function f(x) when x decreases to negative infinity, if for any positive distance ε, however small, there is a real number r such that for all x ≤ r it is true that f(x) is closer to a than distance ε, that is |f(x)−a| ≤ ε.

Let's express this symbolically.
∀ ε>0 ∃ r: x ≤ r ⇒ |f(x)−a| ≤ ε

Our final expansion of a limit of a sequence to a limit of a function is to define a limit of a function when its argument gets closer and closer (converges) to some real number instead of going to positive or negative infinity.

First of all, we have to define this process of convergence of an argument to some real number more precisely.
The obvious choice is to measure the distance between an argument x and some real number r our argument, supposedly, approaches. So, if x is changing from x1 to x2, to x3... to xn etc. such that the sequence {|xn−r|} is infinitesimal (that is, converges to zero), we can say that x converges to r.

It is very important to understand that argument x can converge to value r in many different ways forming different infinitesimals. For example, sequence xn = r+1/n is one such way. Another is xn = r·(1+1/n). Yet another is xn = r·2^(1/n).
An even more sophisticated way to converge is for x to approach r only on rational numbers, skipping irrationals, or, inversely, only on irrational numbers, skipping rationals.
As you see, convergence of an argument to a specific value can be arranged in many different ways, but in any way the distance |x−r| must be infinitesimal, that is must converge to zero.

Let's examine now the behavior of a function f(x) as its argument x converges to value r. It is natural to assume that function f(x) converges to value a when its argument x converges to value r if |f(x)−a| is an infinitesimal when |x−r| is infinitesimal.
Symbolically, we can describe this as
{xn}→r ⇒ {f(xn)}→a

It is very important to understand that there might be cases when x converges to r in some way (that is, sequence {|xn−r|} is infinitesimal) and the corresponding sequence of function values f(xn) converges to a, but, if argument x converges to r in some other way, function values f(xn) do not converge to a.
Here is an interesting example. Consider a function f(x) that takes value 0 for all rational arguments x and takes value 1 for all irrational x. Now let x approach point r=0 stepping only on rational numbers like xn=1/n. All values of f(xn) will be 0 and we could say that f(x) converges to 0 as x converges to 0. But if we step only on irrational numbers like xn=π/n, our function will take values f(xn) equal to 1 and we would assume that f(x) converges to 1 as x converges to 0. This does not seem right.
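This example can be played with directly in Python, using exact sympy numbers so that "rational" and "irrational" are meaningful; the two sample approach paths are the ones from the text.

from sympy import Rational, pi

def f(x):
    return 0 if x.is_rational else 1     # 0 on rationals, 1 on irrationals

rational_path   = [Rational(1, n) for n in range(1, 6)]   # x_n = 1/n -> 0
irrational_path = [pi / n for n in range(1, 6)]           # x_n = pi/n -> 0

print([f(x) for x in rational_path])     # [0, 0, 0, 0, 0]
print([f(x) for x in irrational_path])   # [1, 1, 1, 1, 1]
# Two ways of converging to 0 give two different "limits" of f(x_n),
# so f has no limit at x = 0 in the sense of the definition that follows.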

It is appropriate then to formulate the concept of limit in terms of sequences as follows.
Value a is a limit of function f(x) when its argument x converges to real number r, if for ANY sequence of argument values {xn} converging to r the sequence of function values {f(xn)} converges to a.

Symbolically:
∀ {xn}→r: {f(xn)}→a

Though logically we have come up with a correct definition of a limit of a function when its argument converges to a specific real number, it is not easy to verify that a concrete function has a concrete limit when its argument converges to a concrete real number. We cannot possibly examine ALL the ways an argument approaches its target.
Let's come up with another (equivalent) definition of a limit that can be used to constructively prove statements about limits.

Again, we will use analogy with sequence limits.
The key point of the definition of a limit of a sequence was that, when the order number n is sufficiently large (non-mathematically, we can say "sufficiently close to infinity"), the values of the sequence members are sufficiently close to its assumed limit. How large the order number must be (that is, how "close to infinity") depends on how close we want our sequence to be to its limit. Greater closeness of a sequence to its limit necessitates a larger order number, that is, its greater "closeness to infinity", so to speak.

For function limits we will approach a more constructive definition analogously. If we assume that some real number a is a limit of a function f(x) as x converges to r, then for any degree of closeness between the function and its limit there should be a neighborhood of value r in which (that is, if x is within this neighborhood) this degree of closeness between the function and its limit is observed. Greater closeness requires a narrower neighborhood.

Expressing it more precisely, for any positive ε there should be positive δ such that, if x is within δ-neighborhood of r (that is, |x−r| ≤ δ), then f(x) will be within ε-neighborhood of a (that is, |f(x)−a| ≤ ε).

Symbolically:
∀ ε>0 ∃ δ>0:
|x−r| ≤ δ ⇒ |f(x)−a| ≤ ε


Sometimes the last two inequalities in the above definition are specified as "less" instead of "less or equal". It makes no difference.
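To make the "greater closeness requires a narrower neighborhood" point tangible, here is a rough Python sketch for the made-up example f(x) = x² at r = 3 (so a = 9): for each ε it halves δ until the implication holds on a sample grid. The grid search is only an illustration, not a proof.

def works(delta, eps, r=3.0, a=9.0, samples=10001):
    # check |f(x) - a| <= eps on a grid covering the delta-neighborhood of r
    xs = [r - delta + 2 * delta * k / (samples - 1) for k in range(samples)]
    return all(abs(x * x - a) <= eps for x in xs)

for eps in (1.0, 0.1, 0.01, 0.001):
    delta = eps
    while not works(delta, eps):   # shrink delta until the implication holds
        delta /= 2
    print(f"eps={eps:<7} delta={delta}")
# Smaller epsilon forces a smaller (narrower) delta-neighborhood of r.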

Monday, August 22, 2016

Unizor - Derivatives - Limit of Ratio of Polynomials





Notes to a video lecture on http://www.unizor.com


Sequence Limit -
Ratio of Polynomials


When a sequence is represented by a ratio of two polynomials of order number n, it's easy to find its limit.
It's all about the members of the highest power in numerator and denominator.

Assume a sequence is given by an expression
Xn = P(n) / Q(n)
where P(n) is a polynomial of power p (here p is some natural number) of n and Q(n) is a polynomial of power q (here q is some natural number) of n.

So, we can write the following expressions for our polynomials:
P(n) = a0·n^p + a1·n^(p−1) + ... + ap·n^0
(where a0 ≠ 0)
Q(n) = b0·n^q + b1·n^(q−1) + ... + bq·n^0
(where b0 ≠ 0)

In order to determine the limit of their ratio, let's transform them as follows:
P(n) = n^p·(a0 + a1·n^(−1) + ... + ap·n^(−p))
Q(n) = n^q·(b0 + b1·n^(−1) + ... + bq·n^(−q))

Consider the two expressions in parentheses:
R(n) = a0 + a1·n^(−1) + ... + ap·n^(−p)
S(n) = b0 + b1·n^(−1) + ... + bq·n^(−q)

As order number n increases to infinity, each member of these expressions, except the first (a0 and b0), is an infinitesimal and, therefore, has its limit equal to 0. Since there is only a finite number of these infinitesimals in each expression, the limit of the first expression is a0 and the limit of the second one is b0.
That means that the limit of their ratio is a0 / b0, that is
R(n) / S(n) → a0 / b0.

Let's return back to our original ratio of two polynomials.
P(n) / Q(n) =
= [n^p·R(n)] / [n^q·S(n)] =
= n^(p−q)·[R(n) / S(n)]


Ratio [R(n) / S(n)] has a limit a0 / b0 and, therefore, is bounded.
Expression n^(p−q) is either
infinitesimal (for p less than q) or
constant 1 (for p equal to q) or
infinitely growing (for p greater than q).

Therefore, the ratio of the original polynomials P(n) / Q(n) is a sequence that
(a) is infinitesimal
if p is less than q
(b) converges to a limit a0 / b0
if p equals q
(c) is infinitely growing
if p is greater than q

We have reduced the problem of finding the limit of a ratio of two polynomials to a simple comparison of their highest powers (p and q) and the corresponding coefficients at the members of these powers (a0 and b0).
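A quick numerical spot-check of the three cases in Python; the particular polynomials below are made-up examples (coefficients are listed from the highest power down), and a large n stands in for n → ∞.

def ratio(p_coeffs, q_coeffs, n):
    # evaluate P(n)/Q(n) from coefficient lists given highest power first
    P = sum(c * n ** k for k, c in enumerate(reversed(p_coeffs)))
    Q = sum(c * n ** k for k, c in enumerate(reversed(q_coeffs)))
    return P / Q

n = 10 ** 6
print(ratio([3, 1], [2, 0, 5], n))      # p<q: (3n+1)/(2n²+5)     -> close to 0
print(ratio([3, 1, 4], [2, 0, 5], n))   # p=q: (3n²+n+4)/(2n²+5)  -> close to 3/2
print(ratio([3, 1, 4], [2, 5], n))      # p>q: (3n²+n+4)/(2n+5)   -> very large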

We can easily expand this approach to a ratio of two functions that can be represented as a sum of a finite number of power functions, like
Xn = U(n) / V(n)
where
U(n) = Σ ui·n^pi (0 ≤ i ≤ M)
and
V(n) = Σ vj·n^qj (0 ≤ j ≤ N)

In the above expressions powers pi and qj can be any real numbers, not necessarily natural (like 0.5 for a square root or even irrational powers like π).

All we have to do now is position all members of U(n) in order of decreasing powers, do the same with V(n), and factor out the highest power in each. The remaining expressions (analogous to R(n) and S(n) above) will contain only members with negative powers, except the first constants (u0 and v0 correspondingly), and they all will converge.

Therefore, assuming our functions U(n) and V(n) are already written in the order of decreasing powers of their members, the following expression would correctly represent our ratio
U(n) / V(n) = n^(p0−q0)·W(n)
where p0 is the highest power among members of U(n), q0 is the highest power among members of V(n), and W(n) is a sequence converging to u0 / v0 - the ratio of coefficients at the highest powers of functions U(n) and V(n).

Hence, we can make a judgment about convergence of our ratio U(n) / V(n):
(a) it is infinitesimal
if p0 (the highest power of U(n)) is less than q0 (the highest power of V(n))
(b) it converges to a limit u0 / v0
if p0 equals q0
(c) it is infinitely growing
if p0 is greater than q0.
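The same "highest power wins" rule can be spot-checked numerically for non-integer powers; the functions below are made-up examples, and a large n stands in for n → ∞.

n = 10 ** 20
print((3 * n ** 0.5 + 1) / (2 * n ** 0.5 + 7 * n ** 0.25))   # close to 3/2 (p0 = q0 = 0.5)
print((3 * n ** 0.25 + 1) / (2 * n ** 0.5 + 5))               # close to 0   (p0 < q0)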

Unizor - Derivatives - Indeterminate Forms





Notes to a video lecture on http://www.unizor.com


Sequence Limit -
Indeterminate Forms


When a sequence is represented by a short simple formula of the order number n, like {1/n²}, it's easy to find its limit.

When a sequence is a simple operation (sum or product) on other sequences with known or easily obtainable limits, like
{[1+1/(n+1)]+[2/(n+2)]},
it's easy too - just perform the operation on corresponding limits.

The problem arises when we cannot break a sequence into individual components, determine a limit for each component and do the required operations on the limits. Here are a few simple examples:
(a) Xn = (1/n)·(2n+3)
here 1/n is infinitesimal and 2n+3 is infinitely growing; the limit of their product cannot be obtained as a product of the limits of the components.
(b) Xn = (2n+3)/n
here both numerator and denominator are infinitely growing with undefined limit (you may say that limit is infinity, but it's not a number, so you cannot perform an operation of division anyway)
(c) Xn = sin(n)/n²
here numerator is not even a convergent sequence, while denominator is infinitely growing.

In all cases where we cannot simply determine the limit of a complex sequence as a result of a few operations on the limits of the components, we deal with indeterminate forms.
Each such case should be dealt with in some way, very specific for each given sequence, to transform the sequence into an equivalent, but easier to deal with, form.

Let's consider different cases we might have, using concrete examples of sequences.

1. Ratio of two infinitesimals
(indeterminate of type 0/0)

(a) [2^(−(n+1)) + 3^(−n)] / 2^(−n) =
= 2^(−1) + (3/2)^(−n) → 1/2

(b) sin²(1/n) / [1−cos(1/n)] =
= sin²(1/n)·[1+cos(1/n)] /
/ [1−cos²(1/n)] =
= 1+cos(1/n) → 2


2. Product of infinitesimal and infinitely growing sequence
(indeterminate of type 0·∞)

(a) [1/(n+1)]·n² =
(n−1)(n+1)/(n+1)+1/(n+1) =
n−1+1/(n+1)
which is an infinitely growing sequence

(b) n·sin(1/n) =
sin(1/n) / (1/n) → 1
see the lecture Trigonometry - Trigonometric Identities and Equations - Geometry with Trigonometry - Lim sin(x)/x, where it was proven that the limit of sin(x)/x as x→0 equals 1.

3. Ratio of two infinitely growing sequences
(indeterminate of type ∞/∞)

(a) (n²+n−2)/(2n²+3n−5) =
[(n+2)(n−1)]/[(2n+5)(n−1)] =
(n+2)/(2n+5) =
[(2n+5)−1]/[2(2n+5)] =
1/2 − 1/[2(2n+5)] → 1/2

(b) [n²·sin(1/n)+1]/n =
n·sin(1/n)+1/n =
sin(1/n)/(1/n)+1/n → 1

4. Difference of two infinitely growing sequences
(indeterminate of type ∞−∞)

(a) √(n²+n) − n =
= n / [√(n²+n)+n] =
= n / [n·√(1+1/n)+n] =
= 1 / [√(1+1/n)+1] → 1/2

(b) n+1−log2(2^n−1) =
= 1+log2[2^n/(2^n−1)] =
= 1+log2[1+1/(2^n−1)] → 1

CONCLUSION
Many indeterminate forms do have a limit, but, to find it, it's necessary to transform the original sequence into an equivalent form without indeterminate components.
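A few of the limits above can be spot-checked numerically in Python; a large n stands in for n → ∞, and this is only an illustration, not a substitute for the transformations.

import math

n = 10 ** 4
print(math.sin(1 / n) ** 2 / (1 - math.cos(1 / n)))   # case 1(b): close to 2
print(n * math.sin(1 / n))                            # case 2(b): close to 1
print(math.sqrt(n ** 2 + n) - n)                      # case 4(a): close to 1/2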

Wednesday, August 17, 2016

Unizor - Derivatives - Infinity





Notes to a video lecture on http://www.unizor.com


Sequence Limit - Infinity

We use the term infinity rather casually, understanding that this is a large, very large, larger than anything quantity.
This lecture is about a concept of infinity as it is understood by mathematicians.

First of all, let's agree that there is no such number or such quantity as infinity in classical math. There are some advanced parts of higher levels of math, where infinity is introduced as a concrete object, but it is beyond the scope of this course. So, for our purposes infinity is not a number or quantity. What is it then?

It is a short form of specifying the directional and limitless behavior of a sequence.

Consider a sequence {Xn} that grows boundlessly. That is, for any, however large, number A there exists an order number N such that all members of this sequence with order numbers not less than N are not less than number A.
Using symbols ∀ ("for all" or "for any") and ∃ ("exist"), this can be symbolically written as follows:
∀ A ∃ N: n ≥ N ⇒ Xn ≥ A

For any sequence that behaves in this manner we may say that its limit is infinity.
Sometimes we add a characteristic "positive" to a word infinity, if it helps to better understand the behavior of a sequence and to differentiate it from negative infinity described below.
So, the expression about infinity (or positive infinity) being a limit of some sequence cannot be considered absolutely rigorous; it just means that the sequence grows boundlessly in the sense described above. It would be better to use the term infinitely growing sequence than to mention the word "limit" for such cases. It is also not advisable to use the term "convergent" for these sequences; that term is reserved for sequences convergent to real numbers.
Example of such an infinitely growing sequence:
{Xn = 2^n}

Similarly, we can introduce a sequence that boundlessly "grows" (in a sense of absolute value, while being negative) to a negative infinity.
The description of this property is analogous to the case of positive infinity.
If for any negative number A, however large by absolute value, there exists an order number N such that all members of this sequence with order numbers not less than N are not greater than number A, we say that the limit of this sequence is negative infinity.
Symbolically, it can be written as
∀ A ∃ N: n ≥ N ⇒ Xn ≤ A
(notice, we don't have to specify that A is negative since we use "for any" symbol, which includes all negative numbers as well as positive)
Example of such sequence:
{Xn = log2(1/n)} = {−log2(n)}

So, terms infinity, positive infinity (same as infinity) and negative infinity are legitimate mathematical characteristics of sequences that either, being positive starting at some number, grow boundlessly or, while negative after some number, grow by absolute value boundlessly.
Using these terms implies the properties described in detail above. That's why these terms can be considered as a short description of these properties. It's easier and no less rigorous to state "a sequence grows to infinity" instead of "for any, however large, number A there exists an order number N such that all members of this sequence with order numbers not less than N are not less than number A".
Both expressions mean the same and can be used interchangeably. The former is just a lot shorter and quicker to understand. So is an expression "an infinitely growing sequence".

Examples:
1. Xn = (n²+1)/n
Let's prove that this sequence is limitlessly increasing to infinity.
Choose any boundary number A, however large. Expression (n²+1)/n = n+1/n is monotonically increasing with an increase of n because, as n increases by 1, 1/n decreases by a fraction of 1. Therefore, once it grows above A, it will stay above A. So, we just have to find the first member of this sequence that is above A.
Let's find natural order number N in our sequence for a chosen boundary A by solving the inequality
(n²+1)/n ≥ A
which is equivalent to (since n is positive)
n²−An+1 ≥ 0
Expression
n²−An+1
is a quadratic polynomial of n with discriminant A²−4, which is positive for large A (any value greater than 2).
This quadratic polynomial limitlessly grows with its argument n.
For any large number A there are two solutions to a quadratic equation n²−An+1 = 0:
n1 = (A−√(A²−4))/2 and
n2 = (A+√(A²−4))/2
For any N greater than or equal to the larger of these two solutions, n2, the inequality we need will be true.
Therefore, for any number A, however large, the members of our sequence {(n²+1)/n} will be greater than or equal to this A as long as the order number n is greater than or equal to
(A+√(A²−4))/2.
This proves that this sequence is limitlessly increasing to infinity.
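The bound just derived can be sanity-checked in Python for a sample boundary; the value A = 1000 below is arbitrary.

import math

A = 1000
N = math.ceil((A + math.sqrt(A ** 2 - 4)) / 2)   # the threshold found above
print(N)
print(all((n * n + 1) / n >= A for n in range(N, N + 100)))   # True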

2. Xn = tan(−πn/[2(n+1)])
Let's prove that this sequence is limitlessly decreasing to negative infinity.
Choose any boundary number A, negative and however large by absolute value.
Let's find N for a chosen boundary - number A - by solving the inequality
tan(−πn/[2(n+1)]) ≤ A
which is equivalent to (since tan(−φ) = −tan(φ))
−tan(πn/[2(n+1)]) ≤ A
or
tan(πn/[2(n+1)]) ≥ −A
Here −A is a positive number, however large, and the function tan is monotonically increasing on the interval [0,π/2).
Expression πn/[2(n+1)] is monotonically increasing to π/2 and its tangent is monotonically increasing to infinity as n is increasing. So, all we have to do is find such n that
tan(πn/[2(n+1)]) = −A,
which happens at
πn/[2(n+1)] = arctan(−A)
or
πn = 2(n+1)arctan(−A)
From the above equation follows:
n(π−2arctan(−A)) =
= 2arctan(−A)

Therefore,
n = 2arctan(−A) /
/ (π−2arctan(−A))

Since n is a natural number, we have to choose the next natural number greater than the above expression.
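Again, a quick sanity check of this expression in Python for a sample bound; A = −1000 is arbitrary, and math.atan plays the role of arctan.

import math

A = -1000
n0 = 2 * math.atan(-A) / (math.pi - 2 * math.atan(-A))
N = math.ceil(n0)            # the next natural number past the threshold

def X(n):
    return math.tan(-math.pi * n / (2 * (n + 1)))

print(N)
print(all(X(n) <= A for n in range(N, N + 100)))   # True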

3. Xn = n·sin(n)
This sequence grows by absolute value, but changes its sign from positive to negative and back, following the sign of sin(n), which alternates as n passes multiples of π.
That means the sequence cannot be qualified as having an infinity (positive or negative) as a limit.
It's not infinitely growing to positive infinity, nor infinitely decreasing to negative infinity.

Monday, August 15, 2016

Unizor - Derivatives - Infinitesimals





Notes to a video lecture on http://www.unizor.com


Sequence Limit - Infinitesimal
(infinitely small)


A special role in mathematics in general and, in particular, in calculus is played by sequences that converge to zero.
These sequences have a special name - infinitesimal. A more descriptive name might be infinitely small.

It is very important to understand that here we are not dealing with any concrete, however small, number, but with a sequence converging to zero.

When we say that ε is an infinitesimal value, we mean that it represents a sequence of values converging to zero, that is a process, a variable that changes its value, gradually getting closer and closer to zero.

When we say that a distance between two objects is an infinitesimal, we imply that these objects are moving towards each other such that the distance between them converges to zero.

When we say that the speed of an object changes by an infinitesimal value during infinitesimal time interval, we mean the following process:
(a) we fix some moment in time T0 and the speed of an object at this moment V0;
(b) we consider an infinite sequence of time intervals starting at T0 and ending at T1, T2,... Tn... such that the difference in time |Tn − T0| converges to zero as index n increases;
(c) we measure a speed Vn of an object at each end of interval Tn;
(d) the difference between the original speed V0 and the speed at each end of interval Vn is a sequence that converges to zero:
|Vn − V0| → 0

Properties of infinitesimals are direct consequences of properties of the limits in general:

1. If ε is an infinitesimal (more precisely, if a sequence {εn} converges to zero, but we will use the former expression for brevity), then K·ε is also an infinitesimal, where K is any real constant, positive, negative or zero.

2. If ε and δ are two infinitesimals, their sum ε+δ is an infinitesimal as well.

3. If ε and δ are two infinitesimals, their product ε·δ is an infinitesimal as well.

Lots of problems are related to a division of one infinitesimal by another, provided the infinitesimal in the denominator does not take the value of zero. The result of this operation can be a sequence that might or might not converge at all and, if it converges, it can converge to any number.
Here are a few examples.

A) ε = {13/n}→0;
δ = {37/n}→0;
⇒ ε/δ = {13n/37n} = {13/37}.
Since sequence ε/δ is a constant, its limit is the same constant, that is 13/37.
Obviously, we can similarly construct two sequences with ratio converging to any number.

B) ε = {13/n}→0;
δ = {37n/(n²+1)}→0;
⇒ ε/δ = {[13(n²+1)]/(37n²)} =
{(13/37)·[1+1/n²]} =
{(13/37)+13/(37n²)} =
13/37 + γ → 13/37,
since γ = {13/(37n²)} is an infinitesimal.

C) ε = {13/n²}→0;
δ = {37n/(n²+1)}→0;
⇒ ε/δ = {[13(n²+1)]/(37n³)} =
{(13/37)·[1/n+1/n³]} =
{13/(37n)+13/(37n³)} =
γ1 + γ2 → 0,
since both γ1 = {13/(37n)} and γ2 = {13/(37n³)} are infinitesimals.

D) ε = {13/n}→0;
δ = {37/(n²+1)}→0;
⇒ ε/δ = {[13(n²+1)]/(37n)} =
(13n)/37+13/(37n)
The last expression is growing limitlessly as n is increasing.
So, the result is not a convergent sequence. However, it is reasonable to say that this sequence "grows to positive infinity" as n increases.
We will discuss a concept of infinity later.

E) ε = {1/n}→0;
δ = {(−1)^n/n}→0;
⇒ ε/δ = {(−1)^n}
The last expression represents a sequence that alternately takes only two values, 1 and −1. It is not converging to any number, nor can we say that it grows to infinity.

As we see, a ratio of two infinitesimals might be a convergent or non-convergent sequence, bounded or unbounded.
If we suspect that it should converge to some limit, we have to resolve this by transforming it into an equivalent expression without division of one infinitesimal by another.
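The five ratios above can be spot-checked numerically in Python; a large n stands in for n → ∞.

n = 10 ** 6
print((13 / n) / (37 / n))                       # A: 13/37
print((13 / n) / (37 * n / (n ** 2 + 1)))        # B: close to 13/37
print((13 / n ** 2) / (37 * n / (n ** 2 + 1)))   # C: close to 0
print((13 / n) / (37 / (n ** 2 + 1)))            # D: very large (unbounded growth)
print((1 / n) / ((-1) ** n / n))                 # E: equals (-1)**n, here 1.0 since n is even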

It should be noted, however, that the most interesting cases in dealing with infinitesimals occur exactly at the point of division of one by another. Numerous examples of this can be found in physics (like a concept of speed, where we divide an infinitesimal distance by infinitesimal time interval during which this distance is covered), analysis of smooth functions (like in determining their local maximums and minimums) and many others.

Wednesday, August 10, 2016

Unizor - Derivatives - Limit of Sequence - Definition and Properties





Notes to a video lecture on http://www.unizor.com


Sequence Limit -
Definition and Properties


Please refer to lectures on sequence limits in the "Limits" chapter of Algebra subject of this course.
Here is a brief reminder of a definition and basic properties of sequence limits.

A sequence S={an} is an infinite countable ordered set of real numbers, where for each natural number n there exists one and only one element an of this set.

Real number L is a limit of a sequence {an}, if for any (however small) ε > 0 there exists order number N such that
|L - an| ≤ ε for any n ≥ N.

The requirement of existence of an order number N with corresponding sequence term being closer to a limit than any chosen distance ε, however small we choose it, assures that elements of a sequence eventually become, as we say, infinitely close to a limit.
The requirement that the absolute value of the distance between limit L and elements an of the sequence be not greater than ε for all n ≥ N assures that, once the sequence gets sufficiently close to its limit, it stays at least that close to it.

A sequence that has a limit is called convergent; it converges to its limit.

Let's address some simple properties of limits. All of them were proven in the lectures about sequence limits in the Algebra subject of this course. We strongly recommend reviewing these proofs in those lectures.

Theorem 1
A convergent sequence is bounded, that is there are two numbers, lower and upper bounds, such that all elements of this sequence are not less than lower and not greater than upper bound.
Symbolically,
{an}→L
⇒ ∃ A, B ∀ n: A ≤ an ≤ B


Theorem 2
A convergent sequence, multiplied by a factor, converges to a limit that is equal to a limit of an original sequence, multiplied by this factor.
Symbolically,
{an}→L
⇒ {K·an}→K·L


Theorem 3
A sum of two convergent sequences converges to a limit that is equal to a sum of the limits of these two sequences.
Symbolically,
{an}→L; {bn}→M
⇒ {an+bn}→L+M


Theorem 4
A product of two convergent sequences converges to a limit that is equal to a product of limits of these two sequences.
Symbolically,
{an}→L; {bn}→M
⇒ {an·bn}→L·M


Theorem 5
An inverse of a convergent sequence, that has a non-zero limit, converges to a limit that is equal to an inverse of its limit.
Symbolically,
{an}→L; L≠0
⇒ {1/an}→1/L


Theorem 6
A ratio of two convergent sequences converges to a limit that is equal to a ratio of limits of these two sequences, provided the limit of denominator is not zero.
Symbolically,
{an}→L; {bn}→M; M≠0
⇒ {an/bn}→L/M


Here are examples of simple sequences that have limits:
{1/n}→0
{(n+1)/n}→1
{(12n²+3n)/(5n²−5)}→12/5
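As a small numerical exercise on the last example, here is a Python sketch that, for a given ε, finds an order number N after which |12/5 − an| ≤ ε; the value ε = 0.001 is arbitrary, and n starts at 2 so that the denominator 5n²−5 is non-zero.

def a(n):
    return (12 * n ** 2 + 3 * n) / (5 * n ** 2 - 5)

eps = 0.001
N = next(n for n in range(2, 10 ** 7) if abs(12 / 5 - a(n)) <= eps)
print(N)
print(all(abs(12 / 5 - a(n)) <= eps for n in range(N, N + 1000)))   # True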