Tuesday, June 27, 2017

Unizor - Ordinary Differential Equations - Hooke's Law





Notes to a video lecture on http://www.unizor.com

Higher Order Ordinary
Differential Equations -
Hooke's Law


Our next subject is Hooke's Law.
This law describes the force of a stretched or compressed spring.

Let's assume that we have a weightless spring lying horizontally on a frictionless table along an imaginary X-axis and fixed at the left end. Its free right end is at coordinate x=0, and there is a point mass m attached to this free end of the spring.
Then we stretch this spring by pulling its right end from the neutral position by a certain length x.

Obviously, the spring exerts a force to compress back to the neutral position. Hooke's Law states that, within certain reasonable boundaries (no over-stretching), this force is proportional to the difference in length between the stretched spring and the spring in its neutral position.
This is expressed by the formula
F = −k·x
where F is the force exerted by the spring, x is the displacement of the free end of the spring from the neutral position, and k is a positive constant that characterizes the spring (called a spring constant). The minus sign signifies that the direction of the force is opposite to the direction of the displacement: if the displacement is positive (stretching), the force is directed towards the negative direction of the X-axis and, if the displacement is negative (compression), the force is directed towards the positive direction of the X-axis.

Now recall Newton's Second Law that relates force and acceleration
F = m·a
where F is the force, m is the mass of an object and a is its acceleration.

From these two laws we conclude that
m·a = −k·x

Since x is a distance along the X-axis and a is an acceleration along this axis, that is, the second derivative of distance with respect to time, we come up with the following differential equation
m·x''(t) = −k·x or
m·x''(t) + k·x(t) = 0 or
x''(t) + (k/m)·x(t) = 0
This is a second order ordinary differential equation. It is a little more complex than the ones we considered in the lecture about acceleration and Newton's Second Law.
Let's try to solve it.

First of all, let us mention that in this and many other cases even simple guessing is a good choice. Recall that the first derivative of sin() is cos(), and the first derivative of cos(), which is the same as the second derivative of sin(), is −sin(). So, the equation x''(t)+x(t)=0 has a solution x=sin(t). This is very close to what we have. Adding a factor α to the argument might help to satisfy the multipliers in our equation:
if x(t)=sin(α·t) then
x'(t)=α·cos(α·t) and
x''(t) = −α²·sin(α·t)
and, therefore,
x''(t)+α²·x(t) = 0
Now we can choose α to satisfy α²=k/m, and the solution to our equation is found.
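This guessed solution is easy to sanity-check numerically. The sketch below (in Python; the values of k and m are arbitrary, picked only for illustration) approximates x''(t) with a central difference and confirms that the residual x''(t) + (k/m)·x(t) is near zero:

```python
import math

# arbitrary sample values for the spring constant and the mass
k, m = 2.0, 0.5
alpha = math.sqrt(k / m)  # so that α² = k/m

def x(t):
    # the guessed solution x(t) = sin(α·t)
    return math.sin(alpha * t)

def second_derivative(f, t, h=1e-5):
    # central-difference approximation of f''(t)
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)

# residual of x'' + (k/m)·x at a few sample moments; should be ~0
max_residual = max(abs(second_derivative(x, t) + (k / m) * x(t))
                   for t in (0.3, 1.1, 2.7))
```

The residual is not exactly zero only because of the finite-difference approximation.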

Guessing is good when we can guess (as in this case), but guessing might not be successful and, even if you managed to guess one solution, there is no guarantee that all solutions are found. By the way, if we start with cos(), we will also find a solution.
So, let's have some theory.

Our differential equation belongs to a class of linear ordinary differential equations of second order with constant coefficients and can be generalized as
x''(t) + p·x'(t) + q·x(t) = 0

As we saw above, functions sin() and cos() might be involved in a solution. An analogous quality, the derivative being similar to the function itself, is possessed by exponential functions. Recall also that an exponential function with a complex exponent is related to trigonometric functions through the famous Euler's formula
e^(i·t) = cos(t) + i·sin(t)
So, exponential functions are, in some way, more general than trigonometric ones; they encompass them.
Therefore, it's only natural to look for a solution in terms of exponential functions.
Let's try.

Assume we are looking for a solution to our equation in the form x(t)=e^(λ·t), where λ might be any number (including complex, to accommodate trigonometric functions).
Then the derivatives of this function are:
x'(t) = λ·e^(λ·t)
x''(t) = λ²·e^(λ·t)
Putting this into our equation, we get
λ²·e^(λ·t) + λ·p·e^(λ·t) + q·e^(λ·t) = 0
Canceling e^(λ·t), we get a simple quadratic equation for λ, called the characteristic polynomial of a given differential equation:
λ² + p·λ + q = 0
Since this equation always has two solutions λ1 and λ2 among complex numbers, we will have two particular solutions to our differential equation:
e^(λ1·t) and e^(λ2·t)
Finally, any linear combination of these two particular solutions will also be a solution (since our differential equation is linear and a derivative of linear combination of functions is a linear combination of derivatives).
Therefore, we can state the general solution to our differential equation:
x(t) = C1·e^(λ1·t) + C2·e^(λ2·t)
which depends on two unknown complex constants C1 and C2; their values can be determined only if some initial conditions of the movement are given.
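The two roots of the characteristic polynomial, and the fact that e^(λ·t) then satisfies the equation, can be verified with complex arithmetic. A small sketch (the coefficients p and q are arbitrary sample values):

```python
import cmath

p, q = 1.0, 5.0  # arbitrary sample coefficients of x'' + p·x' + q·x = 0

# roots of the characteristic polynomial λ² + p·λ + q = 0
disc = cmath.sqrt(p * p - 4.0 * q)
lam1 = (-p + disc) / 2.0
lam2 = (-p - disc) / 2.0

def residual(lam, t):
    # plug x(t) = e^(λ·t) into x'' + p·x' + q·x, using the exact
    # derivatives x' = λ·e^(λ·t) and x'' = λ²·e^(λ·t)
    x = cmath.exp(lam * t)
    return abs(lam * lam * x + p * lam * x + q * x)

max_res = max(residual(lam, t) for lam in (lam1, lam2) for t in (0.0, 0.5, 1.5))
```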

Let's get back to a movement of a spring.
Our initial equation
x''(t) + (k/m)·x(t) = 0
has a characteristic polynomial
λ² + k/m = 0
(where both k and m are positive) with two solutions:
λ1 = √(k/m)·i and
λ2 = −√(k/m)·i
where i²=−1 is an imaginary unit in the field of complex numbers.

Let ω = √(k/m).
Now we can represent the general solution to the movement of a spring, based on Hooke's Law, as follows:
x(t) = C1·e^(iωt) + C2·e^(−iωt)
This expression can be easily transformed, using Euler's formula, into
x(t) = C1·cos(ωt)+i·C1·sin(ωt)+
+C2·cos(−ωt)+i·C2·sin(−ωt)


Since C1 can be represented as A1+i·B1 and C2 can be represented as A2+i·B2, where A1, B1, A2 and B2 are undefined unknown real numbers, the whole expression can be represented as
D1·cos(ωt) + D2·sin(ωt) + i·Z
where coefficients D1 and D2 are any real numbers and i·Z represents the purely imaginary part.

Since we deal with physics, the position x(t) must be a real number, so we should discard the imaginary part and keep only solutions where D1 and D2 are real numbers.
So, the general physical solution looks like
x(t) = D1·cos(ωt) + D2·sin(ωt)
where D1 and D2 are undefined unknown real numbers.

In our experiment we have stretched the spring by some known distance from the neutral position and let it spring back. That means we know the initial position x(0)=d and the initial speed x'(0)=0.
These initial conditions are sufficient to determine the two unknown constants in our equation of motion:
x(0)=d ⇒
⇒ D1cos(0)+D2sin(0)=d
⇒ D1 = d
x'(0)=0 ⇒
⇒ −ω·D1sin(0)+ω·D2cos(0)=0
⇒ D2 = 0

The final form of the equation of motion is
x(t) = d·cos(ωt)
where ω = √(k/m), k is a spring constant, m is a point mass at its free end and d is the initial distance we have stretched the spring from its neutral position.
As seen from this equation of motion, a free end of a spring with a mass attached to it will indefinitely oscillate around the neutral point.
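The oscillation can also be reproduced without the analytic solution: integrating m·x'' = −k·x step by step and comparing with d·cos(ωt). A sketch using the velocity-Verlet scheme (k, m, d are arbitrary sample values):

```python
import math

k, m, d = 4.0, 1.0, 0.3       # arbitrary sample spring constant, mass, stretch
omega = math.sqrt(k / m)

# integrate m·x'' = −k·x from x(0)=d, x'(0)=0 with velocity-Verlet
dt, steps = 1e-4, 20000
x, v = d, 0.0
for _ in range(steps):
    a = -k / m * x
    x += v * dt + 0.5 * a * dt * dt   # position update
    a_new = -k / m * x
    v += 0.5 * (a + a_new) * dt       # velocity update with averaged acceleration
t_end = dt * steps

# the numerical trajectory should match d·cos(ω·t)
error = abs(x - d * math.cos(omega * t_end))
```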
The end.

Friday, June 23, 2017

Unizor - Ordinary Differential Equations - Acceleration





Notes to a video lecture on http://www.unizor.com

Higher Order Ordinary
Differential Equations -
Acceleration


Differential equations can include derivatives of higher order: second derivative, third, etc.
Probably the most common equations of this type are those with a second order derivative.
These equations occur very often in science, especially in Physics. Let's address these equations and approaches to solving them.

Our first subject is a concept of acceleration and Newton's Second Law.

Recall that speed measures how fast the distance from some starting point changes. That is, if this distance is represented as a function x(t) of time t, the speed v(t) at any moment t is the first derivative of distance by time:
v(t) = x'(t) = dx/dt

But speed does not have to be constant, we can move faster, increasing our speed (accelerating), or slower, decreasing it (decelerating).
To measure how fast our speed changes with time we use, as usual when we want to measure how fast anything changes with time, a derivative.
Differentiating speed (a function of time) by time we obtain this measure of change of speed at any moment. This derivative of speed by time is called acceleration a(t):
a(t) = v'(t) = dv/dt =
= x''(t) = d²x/dt²

That is, acceleration is the second derivative of distance by time.

Newton's Second Law states that the force F applied to an object and the acceleration a this object obtains as a result of this application of force are related as follows:
F = m·a
where m is the object's mass (presumed constant).

Assuming that our motion occurs along a straight line with coordinates and, therefore, the position of an object is defined by its X-coordinate x(t), Newton's Second Law is an ordinary differential equation of second order because acceleration is the second derivative of the X-coordinate of an object:
F(t) = m·x''(t)
Usually our task is to find where exactly our object is located (that is, its X-coordinate), if the force, as a function of time, is given.

Consider a case when there is no force applied to an object, that is F(t)=0.
Then, according to Newton's Second Law,
0 = m·a(t)
from which we derive a(t)=0
Since a(t)=v'(t), we can find the speed:
v'(t)=0
⇒ v(t) = C
(where C is an unknown constant)
⇒ x'(t) = C
⇒ x(t) = C·t + D
(where D is another unknown constant)

That concludes the solution of our differential equation of the second order, and the solution includes two unknown constants that cannot be determined from the equation alone. It's understandable, since we don't know the initial position x(0) of an object on the coordinate axis and the initial speed v(0) it moved with. These two additional pieces of information (initial conditions) are needed to determine the unknown constants participating in the solution.
If x(0)=x0 and v(0)=v0, we can easily determine
x0 = D and
v0 = C
which results in the final equation of motion of an object, to which no forces are applied (or, more generally, all forces applied to it are balancing each other).
x(t) = x0 + v0·t

By solving the above differential equation of the second order, we have mathematically derived Newton's First Law as a consequence of the Second Law.
Newton's First Law (law of inertia) states that if the sum of all forces applied to an object is zero, then the object at rest will continue to stay at rest (its speed is and will be 0) and objects moving at some speed will continue to move with the same speed and direction (its speed is constant).

Now consider a case when the force applied to an object is not zero, but constant, that is F(t)=P (const). Let's attempt to solve our differential equation in this case to determine the coordinate of an object as a function of time x(t).
F(t)=P=m·a(t)=m·v'(t)
where P is a known constant.
This implies that acceleration a(t) must be a known constant equal to P/m. Let's use the symbol a instead of a(t) to signify this.
Since a=v'(t), we derive
v(t) = a·t + C
where C is an unknown constant.
Then
x'(t) = a·t + C
⇒ x(t) = a·t²/2 + C·t + D
where D is another unknown constant.
To determine two unknown constants we need additional information - initial conditions.
Assume that the original position of an object is x(0)=x0. This allows us to determine D=x0.
If initial speed v(0)=v0 is known, we can determine C=v0.
So, the final equation of motion, when a constant force is applied, is
x(t) = a·t²/2 + v0·t + x0

In general, if the force is variable and/or the mass is variable, from Newton's Second Law we can construct a differential equation of the second order, where the second derivative is explicitly represented by a known function:
F(t) = m(t)·x''(t)
⇒ x''(t) = F(t)/m(t)
⇒ d/dt[x'(t)] = F(t)/m(t)
⇒ x'(t) = ∫[F(t)/m(t)]dt
⇒ x(t) = ∫{∫[F(t)/m(t)]dt}dt

As we see, Newton's Second Law presents the simplest kind of ordinary differential equation of the second order. It can be solved by double integration.
It should not be forgotten that each integration introduces an unknown constant; to determine its value, an initial condition should be known and applied.
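The double integration can be sketched numerically as well. For a constant force the result must match the closed form a·t²/2 + v0·t + x0 derived earlier (P, m and the initial conditions are arbitrary sample values):

```python
# x'' = F(t)/m(t), integrated twice step by step (trapezoidal rule);
# here F(t) = P is constant, so the closed form is known exactly
P, m = 3.0, 1.5          # arbitrary constant force and mass
x0, v0 = 1.0, -0.5       # arbitrary initial position and speed
a = P / m

dt, n = 1e-3, 2000
v, x = v0, x0
for _ in range(n):
    v_new = v + a * dt               # first integration: v' = F/m
    x += 0.5 * (v + v_new) * dt      # second integration: x' = v (trapezoid)
    v = v_new
T = dt * n
expected = a * T * T / 2 + v0 * T + x0
err = abs(x - expected)
```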

Thursday, June 22, 2017

Unizor - Ordinary Differential Equations - Linear Equations





Notes to a video lecture on http://www.unizor.com

Linear Ordinary Differential Equations

Standard form of linear ordinary differential equations is
f(x)·y' + g(x)·y + h(x) = 0
As the first step, we can divide all members of this equation by f(x) (assuming it's not identically equal to 0), getting a simpler equation
y'+u(x)·y+v(x) = 0
The suggested solution lies in the substitution y(x)=p(x)·q(x), where p(x) and q(x) are unknown (for now) functions.
Express y'(x) in terms of p(x) and q(x):
y' = p·q'+q·p'
Substitute this into our equation:
p·q'+q·p'+u·p·q+v = 0
Let's simplify this
p(q'+u·q)+q·p'+v = 0
If there are such functions p(x) and q(x) that satisfy conditions
(1) q'+u·q = 0 and
(2) q·p'+v = 0
our job would be finished.
Let's try to find such functions.
From the equation (1) in our pair of equations we derive
q'/q = −u,
which can be converted into
dq(x)/q(x) = −u(x)·dx
that can be solved by integrating:
ln(q(x)) = −∫u(x)·dx
q(x) = e^(−∫u(x)·dx)
Once q(x) is found, we solve the equation (2) for p(x):
p'(x) = −v(x)/q(x),
which can be integrated to find
p(x) = −∫v(x)/q(x)·dx
and, consequently, y(x)=p(x)·q(x) can be fully determined.
Let's consider a few examples.

Example 1

Solve the following linear differential equation
y' + y + x = 0

Let's look for a solution in a form
y(x)=p(x)·q(x)
Then
y'(x) = p'(x)·q(x)+p(x)·q'(x)
Our equation looks like this now
p'·q+p·q' + p·q + x = 0
Factor out p, getting
p·(q'+q) + (p'·q+x) = 0
We will try to find p(x) and q(x) to separately bring to zero q'+q and p'·q+x.
Let's look for a function q(x) that brings expression q'+q to zero:
q'+q = 0
dq/q = −dx
∫dq/q = −∫dx
ln(q) = −x + A
(where A is any constant)
q(x) = B·e^(−x)
(where B=e^A, so it represents any positive number)
Next, let's find a solution to
p'·q+x = 0
p'·B·e^(−x)+x = 0
p(x) = −(1/B)·∫x·e^x·dx
This integral can be found using the "by parts" technique:
p(x) = −(1/B)·(x·e^x−∫e^x·dx) =
= −(1/B)·(x·e^x−e^x−C) =
= −(1/B)·(x−1)·e^x+C/B

(where B is any positive constant and C is any constant)
Now let's find y(x)=p(x)·q(x):
y(x) = [−(1/B)·(x−1)·e^x+C/B]·[B·e^(−x)] =
= 1−x+C·e^(−x)

Checking:
y'(x) = −1 − C·e^(−x)
y'+y+x = −1 − C·e^(−x)+1−x+C·e^(−x)+x = 0
Solution was correct.
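The same check can be done numerically for any value of the constant: approximate y' by a central difference and confirm that y' + y + x vanishes (C below is an arbitrary sample constant):

```python
import math

C = 2.5  # arbitrary constant of integration

def y(x):
    # the solution found above: y = 1 − x + C·e^(−x)
    return 1.0 - x + C * math.exp(-x)

def dydx(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# residual of y' + y + x at a few sample points; should be ~0
res = max(abs(dydx(y, x) + y(x) + x) for x in (-1.0, 0.2, 1.7))
```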

Example 2

Solve the following linear differential equation
y'·cos(x) + y·sin(x) − 1 = 0

First of all, let's normalize it by dividing by cos(x), noticing that sin(x)/cos(x)=tan(x) and 1/cos(x)=sec(x):
y' + y·tan(x) − sec(x) = 0
Let's look for a solution to this equation in a form
y(x)=p(x)·q(x)
Then
y'(x) = p'(x)·q(x)+p(x)·q'(x)
Our equation looks like this now:
p'·q+p·q'+p·q·tan(x)−sec(x) = 0
Factor out p, getting
p·(q'+q·tan(x))+p'·q−sec(x) = 0
First, let's find function q(x) such that
q' + q·tan(x) = 0
It can be solved using the technique of separation:
dq/q = −tan(x)dx
Since [ln(x)]' = 1/x and [cos(x)]' = −sin(x), the last equation can be transformed into
d(ln(q)) = d(cos(x))/cos(x)
d(ln(q)) = d(ln(cos(x)))
Now it's easy to integrate, the result is
ln(q) = ln(cos(x))+C
where C is any real number, from which, raising e to the power of each side, it follows that
q = D·cos(x)
(new constant D=eC represents any positive number)
Now let's find function p(x) such that
p'·q − sec(x) = 0
Substitute already found q(x) getting
p'·D·cos(x) − sec(x) = 0
p'(x) = (1/D)·(1/cos²(x))
p(x) = (1/D)·∫dx/cos²(x) =
= (1/D)·[tan(x) + E]
where D and E are constants (D is any positive, E is any real number)
Let's determine y=p·q now.
y(x) = p(x)·q(x) =
= (1/D)·[tan(x) + E]·D·cos(x) =
= sin(x) + E·cos(x)


Let's check this result.
y'(x) = cos(x)−E·sin(x)
y'·cos(x) + y·sin(x) − 1 =
= cos²(x)−E·sin(x)·cos(x) + sin²(x)+E·cos(x)·sin(x)−1 =
= sin²(x) + cos²(x) −1 = 0

which proves the correctness of our answer.
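As before, the check can be repeated numerically for an arbitrary constant E:

```python
import math

E = -1.3  # arbitrary constant

def y(x):
    # the solution found above: y = sin(x) + E·cos(x)
    return math.sin(x) + E * math.cos(x)

def dydx(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# residual of y'·cos(x) + y·sin(x) − 1 at sample points; should be ~0
res = max(abs(dydx(y, x) * math.cos(x) + y(x) * math.sin(x) - 1.0)
          for x in (0.1, 1.0, 2.5))
```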


Example 3

Solve the following differential equation
ln(x·y'+y) = ln(2x)+x²

It's not linear, but it can be made linear if we raise e to the power defined by its left and right sides, getting
x·y'+y = 2x·e^(x²)
Let's normalize it by dividing by x:
y' + y/x = 2e^(x²)
Now it's a linear equation that we know how to solve.
Let's look for a solution in a form
y(x)=p(x)·q(x)
Then
y'(x) = p'(x)·q(x)+p(x)·q'(x)
Our equation looks like this now
p'·q+p·q' + p·q/x = 2e^(x²)
Factor out p, getting
p·(q'+q/x) + p'·q = 2e^(x²)
We will try to find p(x) and q(x) to separately
(a) bring to zero q'+q/x and
(b) equalize p'·q with 2e^(x²).
Let's solve equation (a) and look for a function q(x) that brings expression q'+q/x to zero:
q'+q/x = 0
dq/q = −dx/x
∫dq/q = −∫dx/x
ln(|q|) = −ln(|x|) + A
where A is any constant.
Raising e to both sides of this equation, we get
|q| = B/|x|
where B=e^A is any positive number.
Let's get rid of absolute values in the above equation by allowing B to be any non-zero real number, so
q = B/x
Substitute it to equation (b):
p'·B/x = 2e^(x²)
p(x) = (1/B)·∫2x·e^(x²)·dx
Since the derivative of x² is 2x,
p(x) = (1/B)·∫e^(x²)·d(x²)
Now we can integrate directly:
p(x) = (1/B)·e^(x²) + C
where C is any real number.
This allows us to express the solution to our differential equation in the form
y(x) = p(x)·q(x)
where
p(x) = (1/B)·e^(x²) + C and
q(x) = B/x
That produces
y = e^(x²)/x + C/x = (e^(x²)+C)/x
where C is any real number (the product of B and the previous constant is renamed into C).

Checking:
y' = −(e^(x²)+C)/x² + e^(x²)·2x/x =
= e^(x²)·(2−1/x²) − C/x²

y/x = e^(x²)/x² + C/x²
y' + y/x = 2e^(x²),
which corresponds to the original equation after multiplying both sides by x and taking logarithm.
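A numeric spot-check of the original (logarithmic) equation, for an arbitrary constant C and sample points x > 0:

```python
import math

C = 0.5  # arbitrary constant

def y(x):
    # the solution found above: y = (e^(x²) + C)/x
    return (math.exp(x * x) + C) / x

def dydx(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# residual of ln(x·y' + y) − ln(2x) − x² at sample points x > 0; should be ~0
res = max(abs(math.log(x * dydx(y, x) + y(x)) - math.log(2.0 * x) - x * x)
          for x in (0.5, 1.0, 1.5))
```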
The end.

Monday, June 19, 2017

Unizor - Ordinary Differential Equations - Homogeneous Equations





Notes to a video lecture on http://www.unizor.com

Homogeneous Ordinary Differential Equations

We have defined homogeneous ordinary differential equations of the first order as an equation
F(x, y, y')=0
which does not change if we replace x with λ·x and y with λ·y, where λ is any non-zero real number.
In other words,
F(x, y, y') = F(λ·x, λ·y, y')
Examples:
F(x, y, y') = y'+y/x
F(x, y, y') = 3y'+x·y/(x²+y²)
etc.

The recommended technique to solve these equations is to substitute function y(x) with x·z(x) and solve the equation for z(x), after which determine y(x)=x·z(x).

Let's solve a few equations of this kind.

Example 1

Check for homogeneousness and solve the following equation:
x·y' = x·sin(y/x) + y

Checking for homogeneousness.
Substitute x with λ·x and y with λ·y:
λ·x·y' = λ·x·sin(λ·y/(λ·x)) + λ·y
Obviously, λ cancels out completely, which proves homogeneous character of the equation.
Now let's solve this equation using the substitution z(x)=y(x)/x, which results in y(x)=z(x)·x, and express the initial equation in terms of x, z and z'.
x·(z'·x + z) = x·sin(z) + z·x
Simplifying:
z'·x + z = sin(z) + z
z'·x = sin(z)
dz/sin(z) = dx/x
∫dz/sin(z) = ∫dx/x
The right side is easy; the integral equals ln(|x|)+C.
The left side is more involved.
∫dz/sin(z) =
= ∫dz/(2sin(z/2)·cos(z/2)) =
= ∫d(z/2)/(sin(z/2)·cos(z/2))

Substitute u=z/2, getting
∫du/(sin(u)·cos(u)) =
= ∫cos(u)·du/(sin(u)·cos²(u)) =
= ∫d(sin(u))/(sin(u)·cos²(u))

Substitute t=sin(u), getting
∫dt/[t·(1−t²)]
The polynomial in the denominator is
t·(1−t²) = t·(1−t)·(1+t)
Its reciprocal can be represented as the sum of partial fractions
1/t − 1/[2(1+t)] + 1/[2(1−t)]
which makes our integral equal to
∫{[2/t−1/(1+t)+1/(1−t)]/2}·dt
The last expression can be represented as a sum of three integrals, the result of integration is:
ln(|t|) − ln(|1+t|)/2 − ln(|1−t|)/2
where t=sin(z/2)
This leads us to a final solution of our differential equation.
ln(|sin(z/2)|) − ln(1+sin(z/2))/2 − ln(1−sin(z/2))/2 = ln(|x|)+C
and then we should substitute z=y/x to get the final expression
ln(|sin(y/2x)|) − ln(1+sin(y/2x))/2 − ln(1−sin(y/2x))/2 = ln(|x|)+C
Using this as an exponent, we come up with an expression without logarithms
|sin(y/2x)| /[(1+sin(y/2x))·(1−sin(y/2x))] = C·|x|
A simplification in the denominator results in
|sin(y/2x)| / cos²(y/2x) = C·|x|
We leave it "as is" without resolving for y(x).

Example 2

Check for homogeneousness and solve the following equation:
[(y − x·y')/x]^x = e^y

Checking for homogeneousness.
Substitute x with λ·x and y with λ·y:
[(λy − λx·y')/(λx)]^(λx) = e^(λy)
Cancel λ in the ratio, getting:
[(y − x·y')/x]^(λx) = e^(λy)
This can be written as
{[(y − x·y')/x]^x}^λ = [e^y]^λ
Raising both sides to power 1/λ (or, which is the same, extracting a root of power λ) we come to the original equation, which proves homogeneous character of the equation.
Now we will solve it using the recommended technique.
Substitute z(x)=y(x)/x, which results in y(x)=z(x)·x, and express the initial equation in terms of x, z and z'.
The expression for a derivative y' is:
y' = (z·x)' = z'·x+z
New equation is, therefore,
[(z·x − x·(z'·x+z))/x]^x = e^(z·x)
Simplifying it by raising both sides to the power 1/x (or, equivalently, extracting a root of power x):
(z·x − x·(z'·x+z))/x = e^z
Cancel x:
z − (z'·x+z) = e^z
Cancel z:
−z'·x = e^z
This equation is separable; let's separate z from x, getting
−e^(−z)·dz = dx/x
Ready to integrate:
−∫e^(−z)·dz = ∫dx/x
e^(−z) = ln(x)+C
(assuming for simplicity positive only sign for x, so integral on the right is ln(x) instead of ln(|x|))
From the last equation we derive:
−z = ln(ln(x)+C)
z = −ln(ln(x)+C)
Now we can use it to find an expression for y:
y = −x·ln(ln(x)+C)
(note that the constant of integration stays inside the logarithm)

Solution must be checked.
It's easier, instead of checking the original equation
[(y − x·y')/x]^x = e^y
to check the equality of logarithms from both sides:
x·ln[(y − x·y')/x] = y
or, simpler,
ln(y/x − y') = y/x
where we should substitute
y = −x·ln(ln(x)+C)
Let's disregard the constant C in this checking to make manipulations simpler. Then
y' = −ln(ln(x))−x·(1/ln(x))·(1/x)
or, simpler,
y' = −ln(ln(x)) − 1/ln(x)
and, since
y/x = −ln(ln(x))
we will have to check that
ln(−ln(ln(x)) + ln(ln(x)) + 1/ln(x)) = −ln(ln(x))
Canceling opposite positive and negative members under logarithm on the left, we come to an obvious equality
ln(1/ln(x)) = −ln(ln(x))
which proves the correctness of our solution.
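The same verification can be done numerically for sample points x > 1 (the constant is disregarded, as in the checking above):

```python
import math

def y(x):
    # the solution with the constant disregarded: y = −x·ln(ln(x))
    return -x * math.log(math.log(x))

def dydx(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# residual of ln(y/x − y') − y/x at sample points x > 1; should be ~0
res = max(abs(math.log(y(x) / x - dydx(y, x)) - y(x) / x)
          for x in (3.0, 5.0, 8.0))
```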

Example 3

Check for homogeneousness and solve the following equation:
x·y·y' = (x+y)²

Checking for homogeneousness.
Substitute x with λ·x and y with λ·y:
λx·λy·y' = (λx+λy)²
λ²x·y·y' = λ²(x+y)²
Obviously, λ cancels out, and we get the same original equation.
Now let's solve it by substituting z(x)=y(x)/x, which results in y(x)=z(x)·x, and express the initial equation in terms of x, z and z'.
The expression for a derivative y' is:
y' = (z·x)' = z'·x+z
So, our equation looks like
x·(z·x)·(z'·x+z) = (x+z·x)²
Simplifying by opening all parentheses, we get
x²·(x·z·z'+z²) = x²·(1+z)²
x·z·z'+z² = (1+z)²
x·z·z' = 1+2z
This equation can be solved using the method of separation.
z·dz/(1+2z) = dx/x
Integrating the left side of this equation:
∫z·dz/(1+2z) =
= (1/2)·∫(1+2z−1)·dz/(1+2z) =
= (1/2)·∫[dz − dz/(1+2z)] =
= (1/2)·[z−(1/2)·ln(1+2z)] + C
Integrating the right side of the equation:
∫dx/x = ln(x) + C
Since the integrals of both sides are equal,
(1/2)·[z−(1/2)·ln(1+2z)] = ln(x) + C

which can be simplified
2z − ln(1+2z) = 4ln(x) + C
Though this equation for z(x) cannot be easily solved for z, it allows us to replace the original differential equation for y with a purely algebraic one, replacing z with y/x:
(A) 2y/x − ln(1+2y/x) = 4·ln(x) + C

This is the final algebraic answer to our differential equation. Though it's not resolved for y(x), it's still the best solution we can come up with.

Solution must be checked.
If this equality that includes function y(x) is correct, derivatives of both parts are also equal. Let's differentiate them both.
−2y/x² + 2y'/x − (1/(1+2y/x))·(−2y/x²+2y'/x) = 4/x
Simplifying by multiplying both sides by x²:
−2y + 2xy' − x·(−2y+2xy')/(x+2y) = 4x
Multiplying by x+2y:
−2xy−4y²+2x²y'+4xyy'+2xy−2x²y' = 4x²+8xy
After cancellation of mutually opposing by sign members and dividing by 4 we get:
−y²+xyy' = x²+2xy
which easily transforms into
xyy' = (x+y)²
that corresponds to original differential equation.
This proves the correctness of the answer (A) as an equation that includes x and y(x) without derivatives that we obtained above.
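As in the previous examples, the implicit answer (A) can be spot-checked numerically: integrate x·y·y' = (x+y)² with a Runge-Kutta step and confirm that 2y/x − ln(1+2y/x) − 4·ln(x) stays constant:

```python
import math

def f(x, y):
    # y' from x·y·y' = (x + y)²
    return (x + y) ** 2 / (x * y)

def invariant(x, y):
    # by the answer (A), this should stay constant along a solution
    z = y / x
    return 2.0 * z - math.log(1.0 + 2.0 * z) - 4.0 * math.log(x)

# classic RK4 integration from an arbitrary initial condition
x, y, h = 1.0, 1.0, 1e-3
c0 = invariant(x, y)
for _ in range(1000):
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    x += h
drift = abs(invariant(x, y) - c0)
```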

Friday, June 16, 2017

Unizor - Ordinary Differential Equations - Separable Equations





Notes to a video lecture on http://www.unizor.com

Separable Ordinary Differential Equations

The process of "separation" as a method of solving differential equation of the first order F(x,y,y')=0 should result in the following equality:
f(y)·dy = g(x)·dx
which allows for separate integration of left and right sides.

This can be assured if our initial equation F(x,y,y')=0 can be transformed into y'=P(x)·Q(y).
Indeed, from the last equation follows
dy/dx = P(x)·Q(y)
and
dy/Q(y) = P(x)·dx
which can be integrated separately, left side - by y and right side - by x.

Examples below use exactly this approach.

Example 1

y' + x·y + y − x = 1

Perform the transformation:
y' = −x·y − y + x + 1
y' = −y·(x+1) + (x + 1)
y' = (1−y)·(x+1)
dy/(1−y) = (x+1)dx
∫dy/(1−y) = ∫(x+1)·dx
Both integrals are trivial.
−∫d(y−1)/(y−1) = ∫(x+1)·d(x+1)
−ln(y−1) = (x+1)²/2 + C
y = 1 + C·e^(−(x+1)²/2)
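A quick numeric confirmation of this solution for an arbitrary constant C:

```python
import math

C = 1.7  # arbitrary constant

def y(x):
    # the solution found above: y = 1 + C·e^(−(x+1)²/2)
    return 1.0 + C * math.exp(-(x + 1.0) ** 2 / 2.0)

def dydx(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# residual of y' + x·y + y − x − 1 at sample points; should be ~0
res = max(abs(dydx(y, x) + x * y(x) + y(x) - x - 1.0)
          for x in (-2.0, 0.0, 1.5))
```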

Example 2

y' − (x+1)·e^(x+y) = 0

Perform the transformation:
y' = (x+1)·e^x·e^y
y'·e^(−y) = (x+1)·e^x
e^(−y)·dy = (x+1)·e^x·dx
∫e^(−y)·dy = ∫(x+1)·e^x·dx
The integral on the left is straightforward.
The integral on the right can be calculated using integration "by parts":
−e^(−y) = (x+1)·e^x − ∫e^x·d(x+1)
−e^(−y) = (x+1)·e^x − e^x + C
−e^(−y) = x·e^x + C
e^(−y) = C−x·e^x
y = −ln(C−x·e^x)
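A numeric spot-check of this solution (C is an arbitrary constant, chosen so that C − x·e^x stays positive at the sample points):

```python
import math

C = 5.0  # arbitrary constant; C − x·e^x must stay positive

def y(x):
    # the solution found above: y = −ln(C − x·e^x)
    return -math.log(C - x * math.exp(x))

def dydx(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# residual of y' − (x+1)·e^(x+y) at sample points; should be ~0
res = max(abs(dydx(y, x) - (x + 1.0) * math.exp(x + y(x)))
          for x in (-1.0, 0.0, 1.0))
```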

Example 3

ln(y') = x + y

Perform the transformation:
y' = e^x·e^y
e^(−y)·dy = e^x·dx
∫e^(−y)·dy = ∫e^x·dx
−e^(−y) = e^x+C
e^(−y) = −e^x−C
y = −ln(C−e^x)

Example 4

sin(y)·y' = sin(x+y) + sin(x−y)

First of all, recall the trigonometric identities
sin(x+y) =
= sin(x)·cos(y)+cos(x)·sin(y)

sin(x−y) =
= sin(x)·cos(y)−cos(x)·sin(y)

from which follows
sin(x+y)+sin(x−y) =
= 2·sin(x)·cos(y)

Perform the transformation of our equation using the last expression:
sin(y)·y' = 2·sin(x)·cos(y)
Now we can separate:
sin(y)·dy/cos(y) = 2·sin(x)·dx
Continue transformation:
d(cos(y))/cos(y) = 2·d(cos(x))
Easy to integrate now:
ln(|cos(y)|) = 2·cos(x) + C
Ignoring difficulties with absolute value and periodicity to shorten the presentation of an idea, it can be solved for y
|cos(y)| = e^(2·cos(x)+C)
y = arccos(e^(2·cos(x)+C))
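Ignoring the same subtleties, the solution can be spot-checked numerically (C is chosen so that e^(2·cos(x)+C) ≤ 1 and arccos is defined):

```python
import math

C = -3.0  # constant chosen so that e^(2·cos(x)+C) ≤ 1

def y(x):
    # the solution found above: y = arccos(e^(2·cos(x)+C))
    return math.acos(math.exp(2.0 * math.cos(x) + C))

def dydx(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# residual of sin(y)·y' − sin(x+y) − sin(x−y) at sample points; should be ~0
res = max(abs(math.sin(y(x)) * dydx(y, x)
              - math.sin(x + y(x)) - math.sin(x - y(x)))
          for x in (0.5, 1.2, 2.0))
```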

Wednesday, June 14, 2017

Unizor - Ordinary Differential Equations - Major Types





Notes to a video lecture on http://www.unizor.com

Ordinary Differential Equations
Major Types of Equations


In this lecture we will only consider first order ordinary differential equations for a function of one argument y(x) (no higher order derivatives). The general form of these equations is
F(x, y, dy/dx) = 0

We will consider three major types of these differential equations with known approaches to integration:
separable equations,
homogeneous equations,
linear non-homogeneous equations.

Separable Ordinary Differential Equations

A few examples we were working with in the introductory lecture to ordinary differential equations are separable in a sense that the original differential equation, that can be generally expressed as F(x, y, dy/dx) = 0, can be transformed into
f(y)·dy = g(x)·dx
that can be separately integrated, using the techniques of calculating indefinite integrals, and, hopefully, resolved for y.
Even if it will not be possible to resolve it for y, the result of integration will be a simpler formula G(x,y)=0 (it will also include a constant as a result of integration, which can be found if some initial condition on a function y(x) is imposed).
In any case, whether the result of integration can or cannot be resolved for y, it's still significantly better than the original equation that includes a derivative.

Example

y' + x·y = 0
Let's use the Leibniz notation for derivatives to facilitate the separation of function from its argument and resolve the equation for a derivative.
dy/dx = −x·y
Separate x and y:
dy/y = −x·dx
Now we can apply an indefinite integral to both sides to solve the equation.
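Carrying the integration through gives ln(y) = −x²/2 + C, that is, y = C·e^(−x²/2). A quick numeric sketch confirming that this function satisfies y' + x·y = 0 (C is an arbitrary sample constant):

```python
import math

C = 0.8  # arbitrary constant

def y(x):
    # result of the integration above: y = C·e^(−x²/2)
    return C * math.exp(-x * x / 2.0)

def dydx(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# residual of y' + x·y at sample points; should be ~0
res = max(abs(dydx(y, x) + x * y(x)) for x in (-1.5, 0.3, 2.0))
```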

Homogeneous Ordinary Differential Equations

Homogeneous equations can be defined using the following criterion.
Replace all occurrences of x with λ·x and all occurrences of y with λ·y. Do not change anything with derivative dy/dx. If, as a result, all λ's cancel each other out, the equation is homogeneous.
For example, consider the following equation:
y' + x/y + x²/y² = 0
Substitute x with λ·x and y with λ·y:
y' + (λ·x)/(λ·y) + (λ·x)²/(λ·y)² = 0
Obviously, we can reduce both ratios, getting exactly the same equation as before.
Now we will use the above example to explain the method of solving homogeneous equations.
Let's introduce a new function z(x)=y(x)/x, which results in y(x)=z(x)·x, and express the initial equation in terms of x, z and z'.
The expression for a derivative y' is:
y' = (z·x)' = z'·x+z
So, our equation looks like
z'·x + z + x/(z·x) + x²/(z·x)² = 0
Simplifying by reducing the ratios by x and x², we get
z'·x + z + 1/z + 1/z²= 0
This equation can be solved using the method of separation.
z'·x = −(z + 1/z + 1/z²)
dz/(z+1/z+1/z²) = −dx/x
Now we can apply an indefinite integral to both sides to solve the equation for z(x) and then multiply it by x to get y(x).

Linear Non-Homogeneous Ordinary Differential Equations

Standard form of this type of differential equations is
f(x)·y' + g(x)·y + h(x) = 0
As the first step, we can divide all members of this equation by f(x) (assuming it's not identically equal to 0), getting a simpler equation
y'+u(x)·y+v(x) = 0
The suggested solution lies in the substitution y(x)=p(x)·q(x), where p(x) and q(x) are unknown (for now) functions.
Express y'(x) in terms of p(x) and q(x):
y' = p·q'+q·p'
Substitute this into our equation:
p·q'+q·p'+u·p·q+v = 0
Let's simplify this
p(q'+u·q)+q·p'+v = 0
If there are such functions p(x) and q(x) that satisfy conditions
(1) q'+u·q = 0 and
(2) q·p'+v = 0
our job would be finished.
Let's try to find such functions.
From the equation (1) in our pair of equations we derive
q'/q = −u,
which can be converted into
dq(x)/q(x) = −u(x)·dx
that can be solved by integrating:
ln(q(x)) = −∫u(x)·dx
q(x) = e^(−∫u(x)·dx)
Once q(x) is found, we solve the equation (2) for p(x):
p'(x) = −v(x)/q(x),
which can be integrated to find
p(x) = −∫v(x)/q(x)·dx
and, consequently, y(x)=p(x)·q(x) can be fully determined.
Let's consider an example.
y' + x·y + x² = 0
If y(x)=p(x)·q(x), our equation looks like this:
p'·q+p·q'+x·p·q+x² = 0
(q'+x·q)·p+(q·p'+x²) = 0
Now we have to solve the following equation to nullify the first term:
q'+x·q = 0
(which is solvable through separation)
and substitute the resulting function q(x) into
q·p'+x² = 0
to solve it for p(x)
(which is a simple integration).

Tuesday, June 13, 2017

Unizor - Ordinary Differential Equations - Introduction





Notes to a video lecture on http://www.unizor.com

Ordinary Differential Equations
Introduction


Ordinary differential equations are equations, where derivatives of some function participate in the equation.
Assuming that y(x) is some unknown function, a differential equation, in its general form, looks like this:
F(x, y, y', y'',...) = 0
where F(...) is some function of many arguments.
The goal is to find the function y(x) that satisfies this equation.

Let's start with a simple example of an ordinary differential equation.
y'(x) = 2x
We can easily guess that, if the derivative of a function equals 2x, the function must be y(x)=x²+C, where C is any constant.

On the other hand, we can represent this equation in the form
dy/dx = 2x
and transform it into
dy = 2x·dx
This is a relationship between two infinitesimals that signifies that these infinitesimals are equal in a sense that the difference between them is an infinitesimal of a higher order than themselves.
Now we can apply an operation of integration to both getting the following
∫1·dy = ∫2x·dx

Integration results in the following equality
y + C1 = x² + C2,
where C1 and C2 are any constants, and therefore, can be combined into one, getting
y = x² + C
This method of integration is a little more "scientific" than the straight guessing we employed above, though, by itself, it might be difficult, since it involves the operation of integration.

Notice the presence of any constant in the result. This is typical for differential equations and is similar to indefinite integrals.

Arguably, the method of separation of argument x and function y into different sides of an equation, with subsequent integration, is the most effective way to solve differential equations. Those equations that allow a solution of this type are called separable differential equations.

Let's consider a few more examples.

Example 1

x²·y'(x) = y(x)
Let's represent y'(x) as a ratio of differentials dy/dx, our equation will look like
x²·dy/dx = y(x)
Now we can separate argument x and function y into different sides of an equation
dy/y = dx/x²
Integrate both sides
∫dy/y = ∫dx/x²
which results in
ln(y) = −1/x + C
(where C is any constant) or, since we have to find an expression for y in terms of x, we can exponentiate both sides, getting
y = C·e^(−1/x)
(the new constant C here absorbs the factor e^C produced by exponentiation).

Let's check this result.
y'(x) = C·e^(−1/x)·(1/x²)
x²·y'(x) = C·e^(−1/x) = y(x)
All is correct.
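A numeric sanity check of the same result; a sketch (the constant C=2 and the sample points are arbitrary choices):

```python
import math

# y(x) = C*exp(-1/x) should satisfy x^2 * y'(x) = y(x);
# y' is approximated by a central difference.
def y(x, C=2.0):
    return C * math.exp(-1.0 / x)

h = 1e-6
for x in (0.5, 1.0, 3.0):
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(x**2 * dydx - y(x)) < 1e-5
print("x^2 * y' = y holds")
```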

Example 2

tan(x)·y'(x) = y²(x)
Let's represent y'(x) as a ratio of differentials dy/dx, our equation will look like
tan(x)·dy/dx = y²(x)
Now we can separate argument x and function y into different sides of an equation
dy/y² = dx/tan(x)
Integrate both sides
∫dy/y² = ∫dx/tan(x)
which results in
−1/y + C = ∫cos(x)dx/sin(x)
or, equivalently, since
cos(x)·dx = d(sin(x)),
it can be transformed into
−1/y + C = ∫d(sin(x))/sin(x)
The integral on the right can be calculated and the result is
−1/y + C = ln(sin(x))
(where C is any constant) or, since we have to find an expression for y in terms of x, we can transform it into
y = −1/[ln(sin(x))+C]
It would look better if we bring the constant under the logarithm, getting
y = −1/ln(C·sin(x))

Let's check this result.
y'(x) = [1/ln²(C·sin(x))] · [1/(C·sin(x))] · C·cos(x) = cot(x)/ln²(C·sin(x))
tan(x)·y'(x) = 1/ln²(C·sin(x)) = y²(x)
All is correct.
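The same kind of numeric check for this solution; a sketch (C=0.5 and the sample points are arbitrary, chosen so that C·sin(x) stays in (0,1)):

```python
import math

# y(x) = -1/ln(C*sin(x)) should satisfy tan(x)*y'(x) = y(x)^2;
# y' is approximated by a central difference.
def y(x, C=0.5):
    return -1.0 / math.log(C * math.sin(x))

h = 1e-6
for x in (0.7, 1.0, 1.4):
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(math.tan(x) * dydx - y(x)**2) < 1e-5
print("tan(x) * y' = y^2 holds")
```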

As you see, in all examples above there is a constant that can take any value, as in the case of indefinite integrals. That's because we explicitly use integration as a tool to solve our differential equation. That's why the term "solving", as applied to differential equations, is sometimes replaced with the term "integrating". So, to integrate a differential equation means to solve it.

Without any additional information, as we see, a differential equation can have an infinite number of solutions. But we need only the one that corresponds to the practical problem from which this equation was obtained. Therefore, we need some condition imposed on our solution to determine the constant that is present in the general solution.

Consider Example 1 above
x²·y'(x) = y(x)
and its solution
y = C·e^(−1/x)
This solution represents a whole family of functions, each satisfying our differential equation.
To determine the particular solution we are interested in, we have to use some additional information about function y(x). For example, suppose we know that our function y(x) equals 1 when x=1.
Let's substitute this into a general solution to our differential equation to find the value of constant C needed to satisfy our condition.
y(1) = C·e^(−1/1) = 1
from which we can find constant C:
C·e^(−1) = 1
C/e = 1
C = e
Therefore, the particular solution we are looking for is
y = e·e^(−1/x) = e^(1−1/x)

In Example 2, let's determine constant C from the condition y(π/2)=1.
That results in the following
y(π/2) = −1/ln(C·sin(π/2)) = 1
ln(C) = −1
C = 1/e
So, our particular solution is
y = −1/ln(sin(x)/e)
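Both particular solutions can be checked in one go; a sketch (derivatives via central differences, sample points arbitrary):

```python
import math

# Particular solutions found above, with their initial conditions.
def y1(x):    # solves x^2*y' = y with y(1)=1
    return math.exp(1.0 - 1.0 / x)

def y2(x):    # solves tan(x)*y' = y^2 with y(pi/2)=1
    return -1.0 / math.log(math.sin(x) / math.e)

# initial conditions
assert abs(y1(1.0) - 1.0) < 1e-12
assert abs(y2(math.pi / 2) - 1.0) < 1e-12

# the differential equations themselves
h = 1e-6
x = 2.0
assert abs(x**2 * (y1(x + h) - y1(x - h)) / (2 * h) - y1(x)) < 1e-5
x = 1.0
assert abs(math.tan(x) * (y2(x + h) - y2(x - h)) / (2 * h) - y2(x)**2) < 1e-5
print("both particular solutions check out")
```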

Friday, June 9, 2017

Unizor - Partial Derivatives - Stationary Points





Notes to a video lecture on http://www.unizor.com

Partial Derivatives Properties - Stationary Points

We will mostly be concerned with partial derivatives of functions with two arguments.
The theory can be extended to functions of any number of arguments, but it's outside of the scope of this course.
Besides, functions of two arguments can be visualized as surfaces in three-dimensional space to better understand their properties.

Stationary points are those, where both partial derivatives of function f(x,y) of two arguments are equal to zero.

Let
g(x,y)=∂f(x,y)/∂x
h(x,y)=∂f(x,y)/∂y

Definition:
Point (a,b) is a stationary point for function f(x,y) if g(a,b)=0 and h(a,b)=0.

Theorem
A smooth function f(x,y) of two variables that has a local maximum at point (a,b) has both of its partial derivatives at this point equal to zero.

Proof

Let's prove that ∂f(x,y)/∂x=0 for x=a and y=b. The proof for the other partial derivative ∂f(x,y)/∂y is analogous.
So, we fix variable y=b and calculate the partial derivative of f(x,y) by x at point x=a as follows:
∂f(x,y)/∂x = {at x=a,y=b} = lim[f(a+Δx,b)−f(a,b)]/Δx
(the limit is taken as Δx→0)
Since point (a,b) is a local maximum, the numerator [f(a+Δx,b)−f(a,b)] is non-positive, while the denominator Δx is negative as we approach from the left (Δx < 0) and positive as we approach from the right (Δx > 0). So, the ratio is non-negative on one side and non-positive on the other.
For a sufficiently smooth function (at least, we need the continuity of partial derivatives) this implies that the limit above must be equal to zero.

So, we have proven that for a smooth function of two variables the necessary condition for having a local maximum at point (a,b) is the equality of its partial derivatives to zero at this point.

The situation with local minimum is analogous and the equality of partial derivatives to zero at some point is a necessary condition for having a local minimum at this point.


IMPORTANT NOTE
The equality of partial derivatives to zero at some point is only a necessary condition for a function to have a local maximum or minimum at that point. It's not a sufficient condition.
This is similar to a situation with functions of one variable, when a derivative can be zero at some point, but a function can have an inflection point like function y=x³ at point x=0.
For a function of two variables a situation like this might occur when it has a saddle point.
Here is an example:

At the point in the middle of this "saddle" both partial derivatives are equal to zero, but this point is not a local minimum or maximum of a function.

Obviously, we would like to differentiate cases of a stationary point being a local maximum, a local minimum or a saddle point similarly to a situation with functions of one argument, where the second derivative sign (positive or negative) indicated whether a stationary point is minimum, maximum or inflection point.

Here is the rule, which we provide without rigorous proof.
Let's assume that function f(x,y) can be partially differentiated twice (that is, ∂f(x,y)/∂x, ∂f(x,y)/∂y, ∂²f(x,y)/∂x², ∂²f(x,y)/∂y² and ∂²f(x,y)/∂x∂y exist) and all second partial derivatives are continuous.
Let's further assume that at point (a,b) both first partial derivatives equal to zero:
∂f(x,y)/∂x = 0 at x=a, y=b
∂f(x,y)/∂y = 0 at x=a, y=b
Consider the expression
Δ = ∂²f(x,y)/∂x² · ∂²f(x,y)/∂y² − [∂²f(x,y)/∂x∂y]²
at point x=a, y=b.
The rule is:
if Δ < 0, (a,b) is a saddle point;
if Δ > 0, (a,b) is a local minimum or local maximum point, and the sign of ∂²f(x,y)/∂x² or ∂²f(x,y)/∂y² can be used to distinguish minimum from maximum (positive for minimum, negative for maximum; these two second derivatives must have the same sign, since otherwise Δ would be negative).
All other cases are not sufficient to determine the behavior of the function at this point.

Example 1

f(x,y)=1/(1+x²+y²)

∂f(x,y)/∂x = −2x/(1+x²+y²)²
∂f(x,y)/∂y = −2y/(1+x²+y²)²
At point (0,0) both partial derivatives are equal to zero, therefore (0,0) is a stationary point.
Examine the second derivatives.
∂²f(x,y)/∂x² = (6x²−2y²−2)/(1+x²+y²)³
∂²f(x,y)/∂y² = (6y²−2x²−2)/(1+x²+y²)³
∂²f(x,y)/∂x∂y = 8x·y/(1+x²+y²)³
At point x=0, y=0 the three expressions above can be used to calculate
Δ = (−2)·(−2)−0² = 4
Since Δ is positive, we have a local minimum or maximum at point (0,0). To distinguish between them, look at the sign of the second partial derivative by x. It is negative. Therefore, we have a local maximum as is obvious from the graph above.

Example 2

f(x,y)=x·y

∂f(x,y)/∂x = y
∂f(x,y)/∂y = x
At point (0,0) both partial derivatives are equal to zero, therefore (0,0) is a stationary point.
Examine the second derivatives.
∂²f(x,y)/∂x² = 0
∂²f(x,y)/∂y² = 0
∂²f(x,y)/∂x∂y = 1
At point x=0, y=0 the three expressions above can be used to calculate
Δ = 0·0−1² = −1
Since Δ is negative, (0,0) is a saddle point, as is obvious from the graph above.
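Both classifications can be sanity-checked numerically with finite-difference second derivatives; this is a sketch (the step size h is an arbitrary choice):

```python
# Finite-difference second partial derivatives, used to evaluate
# Delta = fxx*fyy - fxy^2 at the stationary point (0,0) of both examples.
def second_partials(f, x, y, h=1e-4):
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return fxx, fyy, fxy

f1 = lambda x, y: 1.0 / (1.0 + x**2 + y**2)    # Example 1
fxx, fyy, fxy = second_partials(f1, 0.0, 0.0)
delta1 = fxx * fyy - fxy**2
assert delta1 > 0 and fxx < 0                  # local maximum at (0,0)

f2 = lambda x, y: x * y                        # Example 2
fxx, fyy, fxy = second_partials(f2, 0.0, 0.0)
delta2 = fxx * fyy - fxy**2
assert delta2 < 0                              # saddle point at (0,0)
print("Delta test agrees with both examples")
```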

Thursday, June 8, 2017

Unizor - Partial Derivatives - Basic Properties





Notes to a video lecture on http://www.unizor.com

Partial Derivatives -
Basic Properties


We will mostly be concerned with partial derivatives of functions of two variable arguments.
The theory can be extended to functions of any number of arguments, but it's outside of the scope of this course.
Besides, functions of two arguments can be visualized as surfaces in three-dimensional space to better understand their properties.

Certain simple properties of partial derivatives of multivariable functions coincide with corresponding properties of derivatives of functions of one variable because the process of partial differentiation is, actually, a differentiation by one variable, keeping the others as constants.

So, we will not bother with proofs since they are based on corresponding properties of derivatives of functions of a single variable.
Also, whatever property is listed below for one argument of partial differentiation of multivariable function is valid for any other argument.

1. ∂[f(x,y)+g(x,y)]/∂x = ∂f(x,y)/∂x + ∂g(x,y)/∂x

Example 1
∂[ln(x²+y²)+x·y]/∂x = 2x/(x²+y²)+y

2. ∂[K·f(x,y)]/∂x = K·∂f(x,y)/∂x

Example 2
∂[2·ln(x²+y²)]/∂x = 2·2x/(x²+y²) = 4x/(x²+y²)

3. ∂[f(x,y)·g(x,y)]/∂x = g(x,y)·∂f(x,y)/∂x + f(x,y)·∂g(x,y)/∂x

Example 3
∂[(x−y)·(x²+xy+y²)]/∂x = (x−y)·(2x+y)+1·(x²+xy+y²) = 3x²
At the same time,
(x−y)·(x²+xy+y²) = x³−y³, and the partial derivative by x of this expression equals 3x², which corresponds to the previous result.

4. ∂f(x(t),y)/∂t = ∂f(x,y)/∂x · dx(t)/dt

Example 4
∂[ln²(t)+y²]/∂t = 2ln(t)/t

Let's consider properties specific for multivariable functions.
Recall that for a function of a single variable f(x) the following approximation can be established:
Δf(x) = f(x+Δx) − f(x) ≅ (df(x)/dx)·Δx
As Δx→0, this approximation is transformed into a relationship between infinitesimals:
df(x) = (df(x)/dx)·dx

Similarly, we can establish a relationship in case of multivariable functions for each of its arguments:
f(x+Δx,y) − f(x,y) ≅ (∂f(x,y)/∂x)·Δx
and
f(x,y+Δy) − f(x,y) ≅ (∂f(x,y)/∂y)·Δy

Using the above properties, let's consider a general case when both arguments of function f(x,y) are incremented.
f(x+Δx,y+Δy) − f(x,y) =
= f(x+Δx,y+Δy) − f(x,y+Δy) + f(x,y+Δy) − f(x,y) ≅
≅ (∂f(x,y+Δy)/∂x)·Δx + (∂f(x,y)/∂y)·Δy

As Δx→0 and Δy→0, we can replace these increments with infinitesimal differentials and, using the smoothness of our functions (in particular, continuity of partial derivatives), we can write the following equivalence between infinitesimals:
df(x,y) = f(x+dx,y+dy)−f(x,y) =
= (∂f(x,y)/∂x)·dx + (∂f(x,y)/∂y)·dy
This expression is called the total differential of a function of two variables.

Consider now that both arguments of our function f(x,y) are, in turn, functions of some argument:
x = x(t)
y = y(t)
We can use the same expression for a total differential
df(x,y) = (∂f(x,y)/∂x)·dx + (∂f(x,y)/∂y)·dy
but, considering that x and y are functions of t and, therefore,
dx = x'(t)·dt and
dy = y'(t)·dt,
we get the following:
df(x,y) = (∂f(x,y)/∂x)·x'(t)·dt + (∂f(x,y)/∂y)·y'(t)·dt

From the last equivalence we derive the formula for total derivative
df(x,y)/dt = (∂f(x,y)/∂x)·x'(t) + (∂f(x,y)/∂y)·y'(t)
This formula represents the chain rule for a function of two arguments when these arguments are, in turn, functions of some other same argument.

Example 5

Consider certain quantity of ideal gas in a reservoir with a piston, so we can change its volume, and a heater, so we can change its temperature.
The laws of physics tell us that the pressure P, volume V and absolute temperature T are connected by the formula
P·V/T = const
where the constant on the right depends on quantity of gas and in our case can be fixed and equal to C.

Let's apply some pressure to a piston to squeeze the gas.
The function that describes the change of volume with time is V(t).
Let's heat the gas.
The function that describes the change of temperature with time is T(t).
How fast would the pressure rise?

Since P(t)·V(t)/T(t)=C,
P(t) = C·T(t)/V(t)
The speed with which the pressure rises is the derivative dP(t)/dt.
To calculate this derivative, we can use the formula of total derivative of a function of two arguments:
dP(t)/dt = (∂P(V,T)/∂V)·V'(t) + (∂P(V,T)/∂T)·T'(t) =
= −C·T(t)·V'(t)/V²(t) + C·T'(t)/V(t)
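A numeric sketch of this result (V(t), T(t) and C below are made-up illustrative functions, not from the lecture): compare the total-derivative formula with a direct difference quotient of P(t).

```python
# Hypothetical smooth volume and temperature schedules for the gas.
C = 1.0
V = lambda t: 2.0 - 0.1 * t          # piston slowly compresses the gas
T = lambda t: 300.0 + 5.0 * t        # heater slowly raises the temperature
P = lambda t: C * T(t) / V(t)        # P = C*T/V

t, h = 1.0, 1e-6
# chain rule: dP/dt = (∂P/∂V)·V'(t) + (∂P/∂T)·T'(t)
dPdt_formula = -C * T(t) * (-0.1) / V(t)**2 + C * 5.0 / V(t)
dPdt_direct = (P(t + h) - P(t - h)) / (2 * h)
assert abs(dPdt_formula - dPdt_direct) < 1e-4
print("dP/dt =", dPdt_formula)
```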

Monday, June 5, 2017

Unizor - Partial Derivatives - ∂²/∂x ∂y = ∂²/∂y ∂x





Notes to a video lecture on http://www.unizor.com

Partial Derivatives -
∂²/∂x ∂y = ∂²/∂y ∂x


In the following examples compare ∂²z/∂x ∂y and ∂²z/∂y ∂x.
They should be identical.

Example 1

Let z=√(x·y)
Then
∂z/∂x = y/(2√(x·y))
∂²z/∂y ∂x = 1/(4√(x·y))
∂z/∂y = x/(2√(x·y))
∂²z/∂x ∂y = 1/(4√(x·y))

Example 2

Let z=e^(x·y)
Then
∂z/∂x = y·e^(x·y)
∂²z/∂y ∂x = (x·y+1)·e^(x·y)
∂z/∂y = x·e^(x·y)
∂²z/∂x ∂y = (x·y+1)·e^(x·y)

Example 3

Let z=1/(x²+y²)
Then
∂z/∂x = −2x/(x²+y²)²
∂²z/∂y ∂x = 8x·y/(x²+y²)³
∂z/∂y = −2y/(x²+y²)²
∂²z/∂x ∂y = 8x·y/(x²+y²)³

Example 4

Let z=sin(x)/y²
Then
∂z/∂x = cos(x)/y²
∂²z/∂y ∂x = −2cos(x)/y³
∂z/∂y = −2sin(x)/y³
∂²z/∂x ∂y = −2cos(x)/y³

Example 5

Let z=arctan(x·√y)
Then
∂z/∂x = √y/(1+x²·y)
∂²z/∂y ∂x = (1−x²·y)/[2√y·(1+x²·y)²]
∂z/∂y = x/[2√y·(1+x²·y)]
∂²z/∂x ∂y = (1−x²·y)/[2√y·(1+x²·y)²]

Example 6

Let z=y^x
Then
∂z/∂x = y^x·ln(y)
∂²z/∂y ∂x = y^(x−1)·(x·ln(y)+1)
∂z/∂y = x·y^(x−1)
∂²z/∂x ∂y = y^(x−1)·(x·ln(y)+1)
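All six identities can be spot-checked with a symmetric finite-difference stencil for the mixed derivative; a sketch (the sample point (0.7, 1.3) and step h are arbitrary choices):

```python
import math

# Symmetric finite-difference approximation of the mixed partial d²z/dxdy,
# compared against the analytic results of Examples 1-6 at (x,y)=(0.7,1.3).
def mixed(f, x, y, h=1e-4):
    return (f(x + h, y + h) - f(x + h, y - h)
            - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)

x, y = 0.7, 1.3
cases = [
    (lambda x, y: math.sqrt(x * y), 1 / (4 * math.sqrt(x * y))),
    (lambda x, y: math.exp(x * y), (x * y + 1) * math.exp(x * y)),
    (lambda x, y: 1 / (x**2 + y**2), 8 * x * y / (x**2 + y**2)**3),
    (lambda x, y: math.sin(x) / y**2, -2 * math.cos(x) / y**3),
    (lambda x, y: math.atan(x * math.sqrt(y)),
     (1 - x**2 * y) / (2 * math.sqrt(y) * (1 + x**2 * y)**2)),
    (lambda x, y: y**x, y**(x - 1) * (x * math.log(y) + 1)),
]
for f, expected in cases:
    assert abs(mixed(f, x, y) - expected) < 1e-5
print("all six mixed derivatives confirmed")
```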

Unizor - Partial Derivatives - Examples





Notes to a video lecture on http://www.unizor.com

Partial Derivatives - Examples

Example 1

Let z=√(x·y)
Then
∂z/∂x = (assuming y constant) = y/(2√(x·y))
∂z/∂y = (assuming x constant) = x/(2√(x·y))

Example 2

Let z=e^(x·y)
Then
∂z/∂x = (assuming y constant) = y·e^(x·y)
∂z/∂y = (assuming x constant) = x·e^(x·y)

Example 3

Let z=1/(x²+y²)
Then
∂z/∂x = (assuming y constant) = −2x/(x²+y²)²
∂z/∂y = (assuming x constant) = −2y/(x²+y²)²

Example 4

Let z=sin(x)/y²
Then
∂z/∂x = (assuming y constant) = cos(x)/y²
∂z/∂y = (assuming x constant) = −2sin(x)/y³

Example 5

Let z=arctan(x·√y)
Then
∂z/∂x = (assuming y constant) = √y/(1+x²·y)
∂z/∂y = (assuming x constant) = x/[2√y·(1+x²·y)]

Example 6

Let z=y^x
Then
∂z/∂x = (assuming y constant) = y^x·ln(y)
∂z/∂y = (assuming x constant) = x·y^(x−1)
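The trickier of these results (Examples 5 and 6) can be verified with central differences; a sketch (the sample point is an arbitrary choice). Note that ∂z/∂x for z=arctan(x·√y) carries a factor √y in the numerator:

```python
import math

# Central-difference approximations of the first partial derivatives,
# checked against the analytic formulas at the sample point (0.7, 1.3).
def ddx(f, x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def ddy(f, x, y, h=1e-6):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

x, y = 0.7, 1.3
z5 = lambda x, y: math.atan(x * math.sqrt(y))        # Example 5
assert abs(ddx(z5, x, y) - math.sqrt(y) / (1 + x**2 * y)) < 1e-6
assert abs(ddy(z5, x, y) - x / (2 * math.sqrt(y) * (1 + x**2 * y))) < 1e-6

z6 = lambda x, y: y**x                               # Example 6
assert abs(ddx(z6, x, y) - y**x * math.log(y)) < 1e-6
assert abs(ddy(z6, x, y) - x * y**(x - 1)) < 1e-6
print("partial derivatives confirmed")
```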

Friday, June 2, 2017

Unizor - Definite Integrals - Length of Curve





Notes to a video lecture on http://www.unizor.com

Definite Integrals -
Length of Curve


Our task is to calculate a length of a curve on a coordinate plane, defined parametrically as a set of points (x(t),y(t)), where x(t) and y(t) are smooth functions of parameter t∈[a,b].

This is one of the typical integration problem and can be approached similarly to our approach to calculate the area under a curve.

Let's break down segment [a,b] into N intervals by points a=t₀, t₁, ..., t_N=b, which results in a corresponding breaking of our curve into smaller pieces by points
(x₀,y₀)=(x(t₀),y(t₀)),
(x₁,y₁)=(x(t₁),y(t₁)),
...
(x_N,y_N)=(x(t_N),y(t_N)).

Each small piece of a curve can be approximated with a straight line from one of its ends to another, and this approximation will be better with the density of break points increasing.
So, the n-th piece of a curve is approximated with a segment from point (xₙ₋₁,yₙ₋₁) to point (xₙ,yₙ).

The length Lₙ of this n-th piece of a curve can be calculated using the regular formula for the length of a segment between two points on a plane:
Lₙ² = (xₙ−xₙ₋₁)² + (yₙ−yₙ₋₁)²

Taking into account that both coordinates are functions of parameter t, we can express it differently:
Lₙ² = (x(tₙ)−x(tₙ₋₁))² + (y(tₙ)−y(tₙ₋₁))²


Notice that
x(tₙ)−x(tₙ₋₁) ≅ x'(tₙ)·(tₙ−tₙ₋₁)
where x'(t) is the derivative of function x(t) by t, and the approximation gets better and better as we increase the density of points tₙ.

So, we can express the length of the n-th piece of a curve as
Lₙ² ≅ [x'(tₙ)² + y'(tₙ)²]·(tₙ−tₙ₋₁)²

From this we get
Lₙ ≅ √[x'(tₙ)² + y'(tₙ)²]·(tₙ−tₙ₋₁)


As before, we use Δtₙ=tₙ−tₙ₋₁.
Now the length of a curve can be approximated by this sum:
L ≅ Σₙ∈[1,N] Lₙ ≅ Σₙ∈[1,N] f(tₙ)·Δtₙ
where
f(t)=√[x'(t)² + y'(t)²]

Recall the definition of the definite integral:
∫[a,b] f(x) dx = lim Σᵢ∈[1,N] f(xᵢ)·Δxᵢ
where Δxᵢ=xᵢ−xᵢ₋₁ represents a partitioning of segment [a,b] into N parts, and it is assumed that the length of the widest interval Δxᵢ shrinks to zero as N→∞.

Clearly, we are dealing with an integral in our case. The sum of pieces of our curve represents Riemann sum, and the limit of this sum, as the density of points tn increases, is the following integral:
∫[a,b] √[x'(t)² + y'(t)²] dt

So, the length of the curve on a coordinate plane, defined parametrically by coordinate functions x(t) and y(t), where t∈[a,b], equals to
∫[a,b] √[x'(t)² + y'(t)²] dt

Let's apply this to a couple of practical problems.

Problem 1
Calculate the length of a circle of radius R.

Solution
To define a circle parametrically, let's choose an angle between its radius and a positive direction of the X-axis as a parameter t∈[0,2π].
Then the X-coordinate of a point on a circle, whose radius forms an angle t with positive direction of the X-axis, equals to x(t)=R·cos(t) and Y-coordinate equals to y(t)=R·sin(t).

Now we can use the above formula to calculate the length of circle.
∫[0,2π] √[x'(t)² + y'(t)²] dt =
= ∫[0,2π] √[R²sin²(t) + R²cos²(t)] dt =
= ∫[0,2π] R dt =
= R·2π − R·0 = 2πR
As we see, the computed length matches the classical formula for the circumference of a circle. No surprise here.
The end.
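The computation above can be spot-checked numerically; a sketch (the radius R=3 and the midpoint Riemann sum are our own choices, not part of the lecture):

```python
import math

# Midpoint Riemann sum for the arc-length integral of a circle x=R·cos(t),
# y=R·sin(t), t in [0, 2*pi]; the result should be close to 2*pi*R.
R, N = 3.0, 10000
total = 0.0
for n in range(N):
    t = 2 * math.pi * (n + 0.5) / N      # midpoint of the n-th subinterval
    dxdt = -R * math.sin(t)
    dydt = R * math.cos(t)
    total += math.sqrt(dxdt**2 + dydt**2) * (2 * math.pi / N)
assert abs(total - 2 * math.pi * R) < 1e-6
print("arc length:", total)
```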

Problem 2
Calculate the length of an astroid given parametrically as
x=R·cos³(t), y=R·sin³(t)
where t∈[0,2π]

Solution
Astroid looks like this:

Its vertices are at distance R from the origin, and its four arcs correspond to parameter t changing within each quadrant (from 0 to π/2, from π/2 to π, etc.).
Using the symmetry, let's calculate the length of only one arc, for t∈[0,π/2], and then multiply it by four.
First, calculate the derivatives by t:
x'(t) = −3R·cos²(t)·sin(t)
y'(t) = 3R·sin²(t)·cos(t)
Now we use the formula for the length of a curve.
√[x'(t)² + y'(t)²] =
= 3R·|sin(t)·cos(t)| =
= (3/2)R·|sin(2t)|
This should be integrated within margins t∈[0,π/2].
Within these margins 2t changes from 0 to π and sin(2t) is non-negative, so we can drop the absolute value.
So, we need to find the following integral:
∫[0,π/2] (3/2)R·sin(2t) dt
Indefinite integral of sin(2t) is −cos(2t)/2, which means that we have to calculate the following
(−cos(π)/2) − (−cos(0)/2) =
1/2 + 1/2 = 1.
Therefore, the length of one quarter of an astroid equals to (3/2)R·1 = (3/2)R and the length of an entire astroid is (3/2)R·4=6R.
The end.
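Here too a midpoint Riemann sum over the full parameter range confirms the answer; a sketch (R=2 and the point count are arbitrary choices):

```python
import math

# Midpoint Riemann sum for the astroid x=R·cos³(t), y=R·sin³(t),
# t in [0, 2*pi]; the total length should be close to 6R.
R, N = 2.0, 200000
total = 0.0
for n in range(N):
    t = 2 * math.pi * (n + 0.5) / N
    dxdt = -3 * R * math.cos(t)**2 * math.sin(t)
    dydt = 3 * R * math.sin(t)**2 * math.cos(t)
    total += math.sqrt(dxdt**2 + dydt**2) * (2 * math.pi / N)
assert abs(total - 6 * R) < 1e-3
print("astroid length:", total)
```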

Wednesday, May 31, 2017

Unizor - Partial Derivatives - Definition





Notes to a video lecture on http://www.unizor.com

Partial Derivatives - Definition

A concept of partial derivative is applicable to functions of two and more arguments (we will consider it only for functions of two arguments) and is based on a concept of a derivative of a function of one argument that we are familiar with.

Graphically, a function of two arguments z=f(x,y) can be represented as a surface in a three-dimensional space.

If we fix one of the arguments, for instance, x=a, and consider our function for all values of the other argument y, while the fixed argument does not change its value, we will obtain a function of one argument f(a,y).

Similarly, if we fix another argument, y=b, and consider our function for all values of x, while the fixed argument y does not change its value, we will also obtain a function of one argument f(x,b).

Graphically, we can see this function of one argument if we cut the graph of the original function z=f(x,y) by a plane x=a (or a plane y=b) and consider their intersection as a graph of this new function of one argument within plane x=a (or a plane y=b).

Here is an illustration of cutting a graph of a function of two arguments with a plane where x is fixed or by a plane where y is fixed:

For any real a we can analyze the behavior of function of one argument f(a,y), using familiar apparatus of differential calculus. For example, we can find points of maximum, minimum or inflection, intervals of increasing or decreasing etc.

In particular, we can find a derivative of this function, which is called a partial derivative of function z=f(x,y) by y.
In our description it depends on constant a (or a might get canceled in the process of differentiation as a constant, but this is irrelevant).

For example, function f(x,y)=x²+y² for x=a looks like a²+y² (where a is constant) and it has a partial derivative by y equal to 2y.
Function f(x,y)=x²·y³ for x=a looks like a²·y³ (where a is constant) and it has a partial derivative by y equal to a²·3y²=3a²y².

Traditionally, when talking about partial derivatives, we don't emphasize that one of the variables takes some special value (in our example, x=a), just that it is fixed, while we take a derivative by another variable. So, we don't substitute x=a before taking a partial derivative, we just assume that x (or y) is constant and then take a derivative by the variable that is not fixed.

So, for f(x,y)=x²+y² the partial derivative by y (assuming x is constant) is 2y, partial derivative by x (assuming y is constant) is 2x.
For f(x,y)=x²·y³ the partial derivative by y (assuming x is constant) is 3x²·y² and partial derivative by x (assuming y is constant) is 2x·y³.

We will also use symbols ∂/∂x and ∂/∂y to signify partial derivatives by x (keeping y fixed) and by y (keeping x fixed), correspondingly.
So, for function z=x²+y² we can write
∂z/∂x = 2x (y is considered constant)
∂z/∂y = 2y (x is considered constant)
For function z=x²·y³ we can write
∂z/∂x = 2x·y³ (y is considered constant)
∂z/∂y = 3x²·y² (x is considered constant)

Two important notes about this process of partial differentiation.

Note 1

After taking a partial derivative by any argument we obtain a function that can be partially differentiated again by any argument.
Thus, we can talk about partially differentiating twice by x or twice by y, or once by x and then by y, or once by y and then by x (order is important).

Using the symbols suggested above, and expanding them in the obvious way to derivatives of higher order, we can write:

∂²/∂x²(x²·y³) =
= ∂/∂x(∂/∂x(x²·y³)) =
= ∂/∂x(2x·y³) = 2y³

∂²/∂y²(x²·y³) =
= ∂/∂y(∂/∂y(x²·y³)) =
= ∂/∂y(3x²·y²) = 6x²·y

∂²/∂x∂y(x²·y³) =
= ∂/∂x(∂/∂y(x²·y³)) =
= ∂/∂x(3x²·y²) = 6x·y²

∂²/∂y∂x(x²·y³) =
= ∂/∂y(∂/∂x(x²·y³)) =
= ∂/∂y(2x·y³) = 6x·y²


The fact that two mixed partial derivatives (first by x then by y or first by y then by x) are the same in the above example is not a coincidence. There is a theorem that states that for a broad range of functions they are supposed to be the same. This will be addressed later.

Note 2

We can expand this definition of partial derivative to functions of any number of arguments. In all these cases we just have to assume that, partially differentiating by one argument, we assume all the others are constant.
For example,
∂/∂x(x²+x·y) = 2x+y

Thursday, May 25, 2017

Unizor - Definite Integrals - Improper Integrals Examples 2





Notes to a video lecture on http://www.unizor.com

Improper Definite Integrals

Example 2.1

∫[0,1] 2x/(1−x²) dx =
= lim(b→1) ∫[0,b] 2x/(1−x²) dx =
= lim(b→1) ∫[0,b] 1/(1−x²) d(x²)

Substitute t=x², including the limits of integration for t, which, considering x∈[0,b], would be t∈[0,b²].
The resulting integral would be
lim(b→1) ∫[0,b²] 1/(1−t) dt

The indefinite integral of function f(t)=1/(1−t) is F(t)=−ln(1−t).
Indeed, let's take the derivative of F(t)=−ln(1−t):
dF(t)/dt = −(−1/(1−t)) = 1/(1−t)

Using the Newton-Leibniz formula,
∫[0,b²] 1/(1−t) dt =
= F(b²) − F(0) =
= −ln(1−b²) + ln(1−0) = −ln(1−b²)

As b→1, ln(1−b²) decreases to negative infinity. So, this expression increases to positive infinity, and we conclude that the original integral diverges.

Answer: This integral diverges; it has no real value.

__________

Example 2.2

∫[1,∞) sin(1/x)/x² dx =
= lim(b→∞) ∫[1,b] sin(1/x)/x² dx =
= lim(b→∞) −∫[1,b] sin(1/x) d(1/x) =
= lim(b→∞) −∫[1,1/b] sin(t) dt (substituting t=1/x) =
= lim(b→∞) ∫[1,1/b] d(cos(t)) =
= lim(b→∞) [cos(1/b) − cos(1)]
As b→∞, this expression converges to 1 − cos(1).

Answer: 1 − cos(1)
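The value 1 − cos(1) can be checked by direct numerical integration; a sketch (the cutoff b=1000 and the point count are arbitrary choices):

```python
import math

# Midpoint rule for the improper integral of sin(1/x)/x^2 over [1, inf),
# truncated at a large upper limit b; should approach 1 - cos(1) ≈ 0.4597.
f = lambda x: math.sin(1.0 / x) / x**2
a, b, N = 1.0, 1000.0, 200000
h = (b - a) / N
total = sum(f(a + (n + 0.5) * h) for n in range(N)) * h
assert abs(total - (1 - math.cos(1))) < 1e-4
print("integral ≈", total)
```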

__________


Example 2.3

Analyze the convergence of the following integral, depending on the value of parameter a.
∫[1,∞) x^a dx

The indefinite integral of the function f(x)=x^a for any a≠−1 is F(x)=x^(a+1)/(a+1).
The case of a=−1 should be considered separately (see below).
Convergence of our integral for all cases when a≠−1 depends on convergence of the following limit:
lim(b→∞) [F(b)−F(1)] =
= [1/(a+1)]·lim(b→∞) [b^(a+1)−1]
Since an infinitely increasing variable b raised to any positive power is itself infinitely increasing, values of a greater than −1 should be excluded.
An infinitely increasing variable b raised to a negative power converges to zero.
Therefore, for all values of a less than −1 the limit equals −1/(a+1).
Consider now a case of a=−1.
The indefinite integral of function f(x)=x^(−1)=1/x is F(x)=ln(x).
Since this function is infinitely growing as x→∞, the original integral does not converge to any real number.

Answer:
This improper integral converges only for those values of parameter a that are less than −1, in which case the integral equals to
∫[1,∞) x^a dx = −1/(a+1)
For example, for a=−2:
∫[1,∞) x^(−2) dx = ∫[1,∞) 1/x² dx = 1

__________


Example 2.4

Analyze the convergence of the following integral, depending on the value of parameter a.
∫[0,1] x^a dx

First of all, for all positive values of parameter a this is a proper integral and its value is
∫[0,1] x^a dx = 1/(a+1)
For a=0 our function f(x)=x^0 is constant and equals 1 everywhere except at the left margin x=0, where it is undefined.
So, we really have to calculate
lim(b→0) ∫[b,1] x^0 dx =
= lim(b→0) (1−b) = 1

(in which case the answer 1/(a+1) is still valid).

Assume now that parameter a is negative. Obviously, in this case the function x^a grows to infinity as x→0, which makes our integral improper.
The indefinite integral of the function f(x)=x^a for any a≠−1 is F(x)=x^(a+1)/(a+1).
The case of a=−1 should be considered separately (see below).
Convergence of our integral for all cases when a≠−1 depends on convergence of the following limit:
lim(b→0) [F(1)−F(b)] =
= [1/(a+1)]·lim(b→0) [1−b^(a+1)]
Since an infinitesimal variable b raised to any negative power grows infinitely, values of a less than −1 should be excluded.
An infinitesimal variable b raised to a positive power converges to zero.
Therefore, for all values of a greater than −1 the limit equals 1/(a+1).
Consider now a case of a=−1.
The indefinite integral of function f(x)=x^(−1)=1/x is F(x)=ln(x).
Since this function is infinitely decreasing to negative infinity as x→0, the original integral does not converge to any real number.

Answer:
This improper integral converges only for those values of parameter a that are greater than −1, in which case the integral equals to
∫[0,1] x^a dx = 1/(a+1)
For example, for a=−1/2:
∫[0,1] x^(−1/2) dx = ∫[0,1] 1/√x dx = 2
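For this last example the convergence can also be observed numerically; a sketch (the grid size and the sequence of lower limits are arbitrary choices):

```python
import math

# The improper integral of 1/sqrt(x) over (0,1] is approached by integrating
# over [b,1] (midpoint rule) and letting the lower limit b shrink toward 0.
# The exact value over [b,1] is 2 - 2*sqrt(b), which tends to 2.
def integral_from(b, N=200000):
    h = (1.0 - b) / N
    return sum(1.0 / math.sqrt(b + (n + 0.5) * h) for n in range(N)) * h

for b in (1e-2, 1e-4, 1e-6):
    assert abs(integral_from(b) - (2 - 2 * math.sqrt(b))) < 1e-2
print("limit is 2")
```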