Among all smooth (sufficiently differentiable) functions f(x) defined on segment [a,b] and taking values f(a)=A and f(b)=B at the endpoints, find the one whose graph between points (a,A) and (b,B) is the shortest.
Solution
First of all, the length of a curve representing the graph of a function is a functional with that function as its argument. Let's determine its explicit formula in our case.
The length ds of an infinitesimal segment of a curve that represents a graph of function y=f(x) is ds = [(dx)² + (dy)²]½ =
= [(dx)² + (df(x))²]½ =
= [1 + (df(x)/dx)²]½·(dx) =
= [1 + f '(x)²]½·(dx)
The length of an entire curve would then be represented by the following functional of function f(x): Φ[f(x)] = ∫[a,b][1 + f '(x)²]½dx
We have to minimize this functional within a family of smooth functions defined on segment [a,b] and satisfying initial conditions f(a)=A and f(b)=B
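Before minimizing it, it may help to see this functional as an ordinary numeric computation. Below is a minimal Python sketch (an illustration only, not part of the lecture; the endpoints (0,0) and (1,1), the grid size and the test curves are arbitrary assumptions) that approximates Φ[f] by the trapezoidal rule for two curves with the same endpoints.

import numpy as np

def arc_length(f, a, b, n=10_000):
    """Approximate Phi[f] = integral over [a,b] of sqrt(1 + f'(x)^2) dx.

    f'(x) is estimated with numpy.gradient on a fine grid,
    and the integral with the trapezoidal rule.
    """
    x = np.linspace(a, b, n)
    y = f(x)
    dy_dx = np.gradient(y, x)                      # numerical f'(x)
    return np.trapz(np.sqrt(1.0 + dy_dx**2), x)

# Example: two curves joining (0, 0) and (1, 1)
straight = lambda x: x                             # the straight line
bent     = lambda x: x + 0.3 * np.sin(np.pi * x)   # same endpoints, but curved

print(arc_length(straight, 0.0, 1.0))   # ~1.4142 (= sqrt(2))
print(arc_length(bent,     0.0, 1.0))   # noticeably larger

The straight line gives approximately √2, and any other sufficiently smooth curve with the same endpoints gives a larger value, which is exactly what the derivation below will prove.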
As explained in the previous lecture, if functional Φ[f(x)] has local minimum at function-argument f0(x), the variation (directional derivative) d/dt Φ[f0(x)+t(f1(x)−f0(x))]
at function-argument f0(x) (that is, for t=0) in the direction from f0(x) to f1(x) should be equal to zero regardless of location of f1(x) in the neighborhood of f0(x).
Assume, f0(x) is a function that minimizes the functional Φ[f(x)] above.
Let f1(x) be another function from the family of functions defined on segment [a,b] and satisfying initial conditions f(a)=A and f(b)=B
Let Δ(x) = f1(x) − f0(x).
It is also defined on segment [a,b] and, according to its definition, satisfies the initial conditions Δ(a)=0 and Δ(b)=0.
Using the assumed point (function-argument) of minimum f0(x) of our functional Φ[f(x)], another point f1(x) that defines the direction of an increment of the function-argument, and a real parameter t, we can describe a set of points (function-arguments) that depend linearly on f0(x) and f1(x) as f0(x)+t·(f1(x)−f0(x)) =
= f0(x) + t·Δ(x)
Let's calculate the variation (we will use the symbol δ for it) of functional Φ[f(x)] at a point (function-argument) defined above by the minimizing function-argument f0(x), the directional point f1(x) and the real parameter t: δ[f0,f1,t]Φ[f(x)] =
= d/dt Φ[f0(x)+t(f1(x)−f0(x))] =
= d/dt Φ[f0(x)+t·Δ(x)] =
(use the formula for a length of a curve)
= d/dt ∫[a,b][1+((f0+t·Δ)')²]½dx
In the above expression we dropped (x) to shorten it.
The prime (apostrophe) denotes the derivative by argument x of functions f0(x) and Δ(x).
Under very broad conditions, when smooth functions are involved, the derivative d/dt and the integral by dx are interchangeable.
So, let's take the derivative d/dt of the expression under the integral first, and then do the integration by dx: d/dt [1+((f0+t·Δ)')²]½ =
= [(f0'+t·Δ')·Δ'] / [1+(f0'+t·Δ')²]½
Therefore, δ[f0,f1,t]Φ[f(x)] =
= ∫[a,b]{[f0'(x)+t·Δ'(x)] / [1+(f0'(x)+t·Δ'(x))²]½}·Δ'(x)·dx
and now we can integrate this expression by x on segment [a,b].
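For readers who want to double-check this differentiation under the integral sign, here is a small symbolic sketch using the sympy library (the names f0 and Delta below are just placeholders for the functions f0(x) and Δ(x) above):

import sympy as sp

x, t = sp.symbols('x t')
f0 = sp.Function('f0')(x)       # assumed minimizer f0(x)
D  = sp.Function('Delta')(x)    # increment Delta(x) = f1(x) - f0(x)

integrand = sp.sqrt(1 + sp.diff(f0 + t*D, x)**2)

# Differentiate under the integral sign by t
d_dt = sp.simplify(sp.diff(integrand, t))
print(d_dt)
# Expected: Delta'(x)*(f0'(x) + t*Delta'(x)) / sqrt(1 + (f0'(x) + t*Delta'(x))**2)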
Let's integrate by parts, using the known formula for two functions u(x) and v(x)
∫[a,b]u·dv = u·v|[a,b] − ∫[a,b]v·du
Use it for
u(x) = [f0'(x)+t·Δ'(x)] / [1+(f0'(x)+t·Δ'(x))²]½
v(x) = Δ(x)
and, therefore, dv(x) = dΔ(x) = Δ'(x)·dx
with all participating functions assumed to be sufficiently smooth (differentiable to, at least, second derivative).
Since v(a) = Δ(a) = 0 and v(b) = Δ(b) = 0,
the first component of the integration by parts is zero: u·v|[a,b] = u(b)·v(b) − u(a)·v(a) = 0
Now the variation of our functional is δ[f0,f1,t]Φ[f(x)] =
= −∫[a,b]v(x)·du(x,t)
where
u(x,t) = [f0'(x)+t·Δ'(x)] / [1+(f0'(x)+t·Δ'(x))²]½
v(x) = Δ(x)
As we know, the necessary condition for a local minimum of functional Φ[f(x)] at function-argument f0(x) is equality to zero of all its directional derivatives at point f0(x) (that is at t=0).
It means that for any direction defined by function f1(x) or, equivalently, defined by any Δ(x)=f1(x)−f0(x), the derivative by t of functional Φ[f0(x)+t·Δ(x)] should be zero at t=0.
So, in our case of minimizing the length of a curve between two points on a plane, the proper order of steps would be
(1) calculate the integral above, getting the variation of the functional δ[f0,f1,t]Φ[f0(x)+t·Δ(x)], which depends on three arguments:
- real parameter t,
- function f0(x) that is an argument to an assumed minimum of functional Φ[f(x)],
- function Δ(x) that signifies an increment of function f0(x) in the direction of function f1(x);
(2) set t=0 obtaining a directional derivative of functional Φ[f(x)] at assumed minimum function-argument f0(x) and increment Δ(x): δ[f0,f1,t=0]Φ[f0(x)];
(3) equate this expression to zero and find f0(x) that solves this equation regardless of the argument shift to f1(x) or, equivalently, regardless of the increment Δ(x).
Integration in step (1) above is by x, while step (2) sets the value of t.
Since x and t are independent variables, we can exchange the order and, first, set t=0 and then do the integration.
This simplifies the integration to the following δ[f0,f1,t=0]Φ[f(x)] =
= −∫[a,b]Δ(x)·du(x,0)
where
u(x,0) = [f0'(x)+0·Δ'(x)] / [1+(f0'(x)+0·Δ'(x))²]½ =
= f0'(x) / [1+(f0'(x))²]½
Therefore, δ[f0,f1,t=0]Φ[f(x)] =
= −∫[a,b]Δ(x)·d{f0'(x) / [1+(f0'(x))²]½}
And the final formula for variation δ[f0,f1,t=0]Φ[f(x)] is
−∫[a,b]Δ(x)·{f0'(x) / [1+f0'(x)²]½}'dx
For δ[f0,f1,t=0]Φ[f(x)] to be equal to zero regardless of Δ(x) or, in other words, for the integral above to be equal to zero regardless of function Δ(x), the function u'(x,0) (the expression in {...}') must be identically equal to zero for all x∈[a,b].
If u'(x,0) is not zero at some point x (and, therefore, in some neighborhood of this point, since we deal with smooth functions), one can always construct a function Δ(x) that makes the integral above non-zero.
From this it follows that u(x,0)=const.
Therefore,
f0'(x) / [1+f0'(x)²]½ = u(x,0) = const
from which it easily follows that f0'(x)=const and, therefore, the function f0(x), where our functional has a minimum, is a linear function of x.
All that remains is to find a linear function f0(x) that satisfies initial conditions f0(a)=A and f0(b)=B.
Obviously, it's the one and only function f0(x) = (B−A)·(x−a)/(b−a) + A
whose graph in (X,Y) Cartesian coordinates is a straight line from (a,A) to (b,B).
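As a sanity check (not part of the original derivation), one can compare the length functional on this straight-line solution with its values on perturbed curves that keep the same endpoints. The endpoint values a, b, A, B and the perturbation below are arbitrary sample choices:

import numpy as np

a, b, A, B = 0.0, 2.0, 1.0, 3.0           # assumed sample endpoints
x = np.linspace(a, b, 20_001)

def length(y):
    """Trapezoidal approximation of the integral of sqrt(1 + y'(x)^2) dx over [a,b]."""
    return np.trapz(np.sqrt(1.0 + np.gradient(y, x)**2), x)

f0 = (B - A) * (x - a) / (b - a) + A      # the straight-line solution found above
print(length(f0))                          # ~ sqrt((b-a)^2 + (B-A)^2) = sqrt(8) ~ 2.8284

# Perturb by a Delta(x) that vanishes at the endpoints: Delta(a) = Delta(b) = 0
delta = np.sin(np.pi * (x - a) / (b - a))
for eps in (0.5, 0.1, 0.01):
    print(eps, length(f0 + eps * delta))   # always greater than length(f0)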
In this lecture we continue discussing the problem of finding a local extremum (minimum or maximum) of a functional that we introduced in the previous lectures.
To find a local extremum point x0 of a smooth real function of one argument F(x) we usually do the following.
(a) find a derivative F'(x) of function F(x) by x;
(b) if x0 is a local extremum point, the derivative at this point should be equal to zero, which means that x0 must be a solution of the following equation F'(x) = 0
Let's try to find a local extremum of a functional Φ[f(x)] using the same approach.
The first step presents the first problem: how to take a derivative of a functional Φ[f(x)] by its function-argument f(x)?
The answer is: WE CANNOT.
So, we need another approach, and we would like to explain it using an analogy with finding an extremum of a real function of two arguments F(x,y) defined on a two-dimensional XY plane with Cartesian coordinates, that is, finding such a point P0(x0,y0) that the value F(P0)=F(x0,y0) is greater (for a maximum) or smaller (for a minimum) than the value F(P)=F(x,y) at any other point P(x,y) in a small neighborhood of point P0(x0,y0).
As in the case of functionals, we cannot differentiate function F(P) by P because, geometrically, there is no such thing as differentiating by a point and, algebraically, we cannot simultaneously differentiate by two coordinates.
Yes, we can apply partial differentiation ∂F(x,y)/∂x by x separately from partial differentiation ∂F(x,y)/∂y by y, which will give a point of extremum in one or another direction. But what about other directions?
Fortunately, there is a theorem stating that, if both partial derivatives are zero at some point, the directional derivative at that point is zero in all directions; but this approach is not directly applicable to functionals, and we will not talk about it at this moment.
The approach we choose to find an extremum of a function F(P)=F(x,y) defined on a plane, which will also be used to find an extremum of functionals, is as follows.
Assume, point P0(x0,y0) is a point of a local minimum of function F(P)=F(x,y) (with local maximum it will be analogous).
Choose any other point P(x,y) in a small neighborhood of P0 and draw a straight line between points P0 and P.
Consider a point Q(q1,q2) moving along this line from P to P0 and beyond.
As point Q moves towards an assumed point of minimum P0 along the line from P to P0, the value of F(Q) should diminish. After crossing P0 the value of F(Q) will start increasing.
What's important is that this behavior of function F(Q) (decreasing going towards P0 and increasing after crossing it) should be the same regardless of a choice of point P from a small neighborhood of P0, because P0 is a local minimum in its neighborhood, no matter from which side it's approached.
The trajectory of point Q is a straight line - a one-dimensional space. So, we can parameterize it with a single variable t like this: Q(t) = P0 + t·(P−P0)
In coordinate form: q1(t) = x0 + t·(x−x0), q2(t) = y0 + t·(y−y0)
At t=1 point Q coincides with point P because Q(1)=P0+1·(P−P0)=P.
At t=0 point Q coincides with point P0 because Q(0)=P0+0·(P−P0)=P0.
Now F(Q(t)) can be considered a function of one argument t that is supposed to have a minimum at t=0 when Q(0)=P0.
That means that the derivative d/dt F(Q(t)), as a function of points P0, P and parameter t, must be equal to zero for t=0, that is at point P0 with a chosen direction towards P.
This is great, but what about a different direction defined by a different choice of point P?
If P0 is a true minimum, a change of direction should not affect the fact that the directional derivative at P0 towards another point P equals zero.
So, d/dt F(Q(t)) must be equal to zero for t=0 regardless of the position of point P in the small neighborhood of P0.
It's quite appropriate to demonstrate this technique that involves directional derivatives on a simple example.
Consider a function defined on two-dimensional space f(x,y) = (x−1)² + (y−2)²
Let's find a point P0(x0,y0) where it has a local minimum.
Let's step from point P0(x0,y0) to a neighboring one P(x,y) and parameterize all points on a straight line between P0 and P: Q(t) = P0 + t·(P−P0) =
= (x0+t·(x−x0), y0+t·(y−y0))
The value of our function f() at point Q(t) is f(Q(t)) = (x0+t·(x−x0)−1)² + (y0+t·(y−y0)−2)²
The directional derivative of this function by t will then be f 't (Q(t)) = 2(x0+t·(x−x0)−1)·(x−x0) +
+ 2(y0+t·(y−y0)−2)·(y−y0)
If P0(x0,y0) is a point of minimum, this directional derivative from P0 towards P for t=0, that is at point P0(x0,y0), should be equal to zero for any point P(x,y).
At t=0 f 't (Q(0)) = 2(x0−1)·(x−x0) + 2(y0−2)·(y−y0)
If P0(x0,y0) is a point of minimum, the expression above must be equal to zero for any x and y, and the only possible values for x0 and y0 are x0=1 and y0=2.
Therefore, point P0(1,2) is a point of minimum.
The same result can be obtained by equating all partial derivatives to zero, as mentioned above. ∂f(x,y)/∂x = 2(x−1) ∂f(x,y)/∂y = 2(y−2)
System of equations ∂f(x,y)/∂x = 0 ∂f(x,y)/∂y = 0
is 2(x−1) = 0 2(y−2) = 0
Its solutions are x = 1 y = 2
Of course, this was obvious from the expression of function f(x,y)=(x−1)²+(y−2)² as it represents a paraboloid z=x²+y² with its vertex (the minimum) shifted to point (1,2).
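The same computation can be reproduced symbolically. The sketch below (an illustration using the sympy library) differentiates f(Q(t)) by t, sets t=0 and, for comparison, also solves the system of partial derivatives:

import sympy as sp

x0, y0, xx, yy, t = sp.symbols('x0 y0 x y t')

# Points on the line Q(t) = P0 + t*(P - P0)
q1 = x0 + t*(xx - x0)
q2 = y0 + t*(yy - y0)

f_at_Q = (q1 - 1)**2 + (q2 - 2)**2          # f(x, y) = (x-1)^2 + (y-2)^2 evaluated at Q(t)
deriv_at_0 = sp.diff(f_at_Q, t).subs(t, 0)  # directional derivative at t = 0, i.e. at P0
print(deriv_at_0)
# 2*(x0 - 1)*(x - x0) + 2*(y0 - 2)*(y - y0): zero for all (x, y) only when x0=1, y0=2

# The same answer from the partial derivatives:
f_xy = (xx - 1)**2 + (yy - 2)**2
print(sp.solve([sp.diff(f_xy, xx), sp.diff(f_xy, yy)], [xx, yy]))   # {x: 1, y: 2}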
Variation of Functionals
Let's follow the above logic that uses directional derivatives and apply it to finding a local minimum of functionals.
To find a local minimum of a functional Φ[f(x)], we should know certain properties of a function-argument f(x) where this minimum takes place.
In the above case of a function defined on two-dimensional space we used the fact that a directional derivative at a point of minimum in any direction is zero.
We do analogously with functionals.
Assume, functional Φ[f(x)] has a local minimum at function-argument f0(x) and takes value Φ[f0(x)] at this function.
Also, assume that we have defined some metric in the space of all functions f(x) where our functional is defined. This metric, or norm, denoted ||.||, can be defined in many ways, like ||f(x)|| = max[a,b]{|f(x)|, |f '(x)|}
or other ways mentioned in previous lectures.
This norm is needed to determine a "distance" between two functions: ||f0(x)−f1(x)||
which, in turn, determines what we mean when saying that one function is in the small neighborhood of another.
Shifting an argument from f0(x) to f1(x) causes change of a functional's value from Φ[f0(x)] to Φ[f1(x)], and we know that, since f0(x) is a point of a local minimum, within a small neighborhood around f0(x) the value Φ[f1(x)] cannot be less than Φ[f0(x)].
More rigorously, there exists a positive δ such that for any f1(x) that satisfies
||f1(x)−f0(x)|| ≤ δ
this is true: Φ[f0(x)] ≤ Φ[f1(x)]
Consider a parameterized family of function-arguments g(t,x) (t is a parameter) defined by a formula g(t,x) = f0(x) + t·[f1(x)−f0(x)]
For t=0 g(t,x)=f0(x).
For t=1 g(t,x)=f1(x).
For t=−1 g(t,x)=2f0(x)−f1(x), which is a function symmetrical to f1(x) relative to f0(x) in the sense that ½[g(−1,x)+f1(x)] = f0(x).
For all real t function g(t,x) is a linear combination of f0(x) and f1(x) and for each pair of these two functions or, equivalently, for each direction from f0(x) towards f1(x) functional Φ[g(t,x)] =
= Φ[f0(x) + t·(f1(x)−f0(x))]
can be considered a real function of a real argument t.
Let's concentrate on the behavior of the real function Φ[g(t,x)] of real argument t, where g(t,x) is a linear combination of f0(x) and f1(x) parameterized by t, and analyze it using the classical methodology of Calculus.
We can take a derivative of Φ[f0(x) + t·(f1(x)−f0(x))] by t and, since f0(x) is a local minimum, this derivative must be equal to zero at this function-argument f0(x), that is at t=0.
This derivative constitutes a directional derivative or variation of functional Φ[f(x)] at function-argument f0(x) along a direction defined by a location of function-argument f1(x).
Variation of functional Φ[f(x)] is usually denoted as δΦ[f(x)].
If we shift the location of function-argument f1(x) in the neighborhood of f0(x), a similar approach would show that this variation (directional derivative by t) would still be zero at t=0 because functional Φ[f(x)] has minimum at f0(x) regardless of the direction of a shift.
Our conclusion is that, if functional Φ[f(x)] has local minimum at function-argument f0(x), the variation (directional derivative) at function-argument f0(x) in the direction from f0(x) to f1(x) should be equal to zero regardless of location of f1(x) in the neighborhood of f0(x).
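A quick numerical illustration of this conclusion (not taken from the lecture): assume the sample functional Φ[f]=∫[0,1]f '(x)²dx with fixed endpoints f(0)=0, f(1)=1, whose minimum is known to be at the linear function f0(x)=x. The finite-difference derivative by t at t=0 is then (numerically) zero at f0, but not at a non-minimizing function:

import numpy as np

x = np.linspace(0.0, 1.0, 10_001)

def Phi(f_vals):
    """Assumed sample functional: Phi[f] = integral over [0,1] of f'(x)^2 dx."""
    return np.trapz(np.gradient(f_vals, x)**2, x)

f0    = x.copy()                     # candidate minimizer: f0(x) = x, f0(0)=0, f0(1)=1
delta = np.sin(np.pi * x)            # direction Delta(x) with Delta(0) = Delta(1) = 0

def variation(f_vals, h=1e-6):
    """Central finite difference of t -> Phi[f + t*delta] at t = 0."""
    return (Phi(f_vals + h*delta) - Phi(f_vals - h*delta)) / (2*h)

print(variation(f0))       # ~ 0: the variation vanishes at the minimizer f0(x) = x
print(variation(x**2))     # clearly non-zero at f(x) = x^2, which is not the minimizer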
Examples of handling local minimum of functionals will be presented in the next lecture.
The following represents additional information not covered in the associated video.
It contains the comparison of properties of functionals and real functions defined in N-dimensional space with Cartesian coordinates (functions of N real arguments) to emphasize common techniques to find points of extremum by using the directional derivatives.
Certain important details of what was explained above are repeated here in more detail.
On the first reading these details can be skipped, but it's advisable to eventually go through them.
We assume that our N-dimensional space has Cartesian coordinates and every point P there is defined by its coordinates.
This allows us to do arithmetic operations with points by applying the corresponding operations to their coordinates - addition of points, subtraction and multiplying by real constants are defined through these operations on their corresponding coordinates.
To make a concept of a functional, its minimum and approaches to find this minimum easier to understand, let's draw a parallel between
(a) finding a local minimum of a real function F(P) of one argument P, where P is a point in N-dimensional space with Cartesian coordinates with each such point-argument P mapped by function F(P) to a real number, and
(b) finding a local minimum of a functional Φ[f(x)] of one argument f(x), where f(x) is a real function of a real argument with each such function-argument f(x) mapped by functional Φ[f(x)] to a real number.
Note that function-arguments f(x) of a functional Φ[f(x)] have a lot in common with points in the N-dimensional space that are arguments to functions of N arguments.
Both can be considered as elements of corresponding sets with operations of addition and multiplication by a real number that can be easily defined.
Thus, if two points P and Q in N-dimensional space with Cartesian coordinates are given, the linear combination Q−P represents the vector from P to Q and P+t·(Q−P) represents all the points on a line going through P and Q.
In a similar fashion, we can call function-arguments of a functional points in the space of all functions for which this functional is defined (for example, all functions defined on segment [a,b] and differentiable to a second derivative).
With these functions we can also use arithmetic operations of addition, subtraction and multiplication by a real number.
Also, we can use the geometric word line to characterize a set of functions defined by a linear combination f(x)+t·(g(x)−f(x)), where f(x) and g(x) are two functions and t is any real parameter.
This approach will demonstrate that dealing with functionals, in principle, follows the same logic as dealing with regular real functions.
As a side note, wherever we will use limits, differentials or derivatives in this lecture, we assume that the functions we deal with do allow these operations, and all limits, differentials or derivatives exist. Our purpose is to explain the concept, not to present mathematically flawless description with all the bells and whistles of 100% rigorous presentation.
(a1) Distance between points
Let's talk about a distance between two arguments of a function (distance between two points in N-dimensional space).
The arguments of a real function F(P), as points in N-dimensional space, can be represented in Cartesian coordinates (x1,x2,...,xN) with a known concept of a distance between two arguments. This is used to define a neighborhood of some point-argument P - all points Q within certain distance from P.
Thus, a distance between P(x1,x2,...,xN) and Q(y1,y2,...,yN) that we will denote as ||P−Q|| is defined as
||P−Q|| = [Σi∈(1,N)(yi−xi)²]½
The definition of a distance will lead to a concept of neighborhood which is essential to define and to find a local minimum of real functions using a derivative.
(b1) Distance between functions
Let's talk about a distance between two arguments of a functional (a distance between two real functions).
The arguments of a functional Φ[f(x)], as real functions from some class of functions, also should have this concept of a distance to be able to talk about a local minimum in a neighborhood of some particular function-argument.
This distance can be defined in many ways to quantitatively measure the "closeness" of one function to another. This was discussed in the previous lecture and one of the ways to define this distance was suggested there by using a concept of a scalar product of functions as an integral of their algebraic product.
Let's suggest some other ways to define this distance.
First of all, as in the case of real numbers, the distance between functions f(x) and g(x) must be based upon their algebraic difference h(x)=f(x)−g(x).
Secondly, we have to quantify this difference, a function h(x), with a single real value.
There are a few traditional ways to assign a real number (called a norm and denoted ||h(x)||) to a function to signify how close this function is to zero.
Here are some for functions defined on segment [a,b]:
||h(x)|| = max[a,b]|h(x)|
||h(x)|| = max[a,b]{|h(x)|,|h'(x)|}
||h(x)|| = [∫[a,b]h²(x)·dx]½
Let's assume that some norm ||f(x)|| is defined for any function-argument of functional Φ[f(x)].
So, ||h(x)|| is a measure of how close function h(x) is to a function that equals to zero everywhere and ||g(x)−f(x)|| is a measure of how close function g(x) is to function f(x).
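The sketch below is a numerical illustration (the segment [a,b]=[0,1] and the sample function h(x) are assumptions) that computes these three norms for the same function h(x); it shows that they measure "closeness to zero" differently:

import numpy as np

x = np.linspace(0.0, 1.0, 10_001)        # assumed segment [a, b] = [0, 1]
h = 0.1 * np.sin(5 * np.pi * x)          # a sample "difference" function h(x)
dh = np.gradient(h, x)                   # numerical h'(x)

norm_sup       = np.max(np.abs(h))                               # max |h(x)|
norm_sup_deriv = max(np.max(np.abs(h)), np.max(np.abs(dh)))      # max{|h(x)|, |h'(x)|}
norm_l2        = np.sqrt(np.trapz(h**2, x))                      # [ integral of h^2 dx ]^(1/2)

print(norm_sup, norm_sup_deriv, norm_l2)
# The function is uniformly small, but its derivative is not,
# so the second norm is much larger than the first and the third.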
(a2) Increment
Now we will introduce a concept of an increment of an argument of a function F(P) (an increment of a point P in N-dimensional space) and the increment of the function F(P) itself caused by it.
Let's fix an argument of function F(P): P=P0.
Consider these two points P0(x1,x2,...,xN) and P(y1,y2,...,yN) and their "difference" R(y1−x1,y2−x2,...,yN−xN).
This "difference" R is an increment of an argument of function f() from point P0 to P because in coordinate form P=P0+R.
We will denote it as
ΔP=R=P−P0 - an increment of argument P0.
At the same time, the difference
ΔF(P)=F(P)−F(P0) is an increment of function F() at point P0 when we increment an argument by ΔP to point P=P0+ΔP.
(b2) Increment
Now we will introduce a concept of increment of a function-argument to a functional Φ[f(x)] and an increment of a functional itself.
Let's fix a function-argument of functional Φ[f(x)]: f(x)=f0(x).
If we consider another function-argument f(x), the difference
Δf(x)=f(x)−f0(x) is an increment of function-argument f0(x).
At the same time, the difference
ΔΦ[f(x)]=Φ[f(x)]−Φ[f0(x)] is an increment of the functional Φ[f(x)] at f0(x), when we increment its argument from f0(x) by Δf(x) to f(x)=f0(x)+Δf(x).
(a3) Neighborhood
A neighborhood of positive radius δ around point-argument P0 of function F(P) is a set of all arguments P such that the increment ΔP=P−P0 from P0 to P, defined above, has the norm ||ΔP|| that does not exceed radius δ.
(b3) Neighborhood
A neighborhood of positive radius δ around a function-argument f0(x) of functional Φ[f(x)] is a set of all function-arguments f(x) such that a defined above increment Δf(x)=f(x)−f0(x) from f0(x) to f(x) has the norm ||Δf(x)|| that does not exceed δ.
(a4) Linear Function
Recall that multiplication of a point in N-dimensional space by a real number and addition of points are done on a coordinate basis, that is each coordinate is multiplied and corresponding coordinates are added.
Function F(P) is linear if for any of its point-arguments and any real multiplier k the following is true: F(k·P) = k·F(P) and F(P1+P2) = F(P1) + F(P2)
(b4) Linear Functional
Functional Φ[f(x)] is linear if for any function-arguments and any real multiplier k the following is true: Φ[k·f(x)] = k·Φ[f(x)] and Φ[f1(x)+f2(x)] =
= Φ[f1(x)] + Φ[f2(x)]
(a5) Continuous Function
Function F(P) is continuous at point P0 if a small increment of an argument from P0 to some neighboring point P causes a small increment of the value of function from F(P0) to F(P).
More precisely, function F(P) is continuous at point P0 if for any positive function increment ε there exists a positive δ such that if ||ΔP|| = ||P−P0|| ≤ δ then
|ΔF(P)| = |F(P)−F(P0)| ≤ ε.
(b5) Continuous Functional
Functional Φ[f(x)] is continuous at point f0(x) if a small increment of a function-argument from f0(x) to some neighboring function-argument f(x) causes a small increment of the value of functional from Φ[f0(x)] to Φ[f(x)].
More precisely, functional Φ[f(x)] is continuous at point f0(x) if for any positive functional increment ε there exists a positive δ such that if ||Δf(x)|| = ||f(x)−f0(x)|| ≤ δ then
|ΔΦ[f(x)]| =
= | Φ[f(x)]−Φ[f0(x)] | ≤ ε.
(a6) Differentiation of Functions
To find a local minimum of a function F(P), we should know certain properties of a point where this minimum takes place. Then, using these properties, we will be able to find a point of a local minimum.
In case of a function of one argument (that is, if dimension N=1) we know that the derivative of a function at a point of local minimum equals zero. So, we take a derivative, equate it to zero and solve the equation.
With greater dimensions of a space where our function is defined this approach would not work, because we cannot take a derivative by a few arguments at the same time.
However, we can do something clever to overcome this problem.
Assume, function F(P) has a local minimum at point P0 and takes value F(P0) at this point.
Shifting an argument from P0 to P1 causes change of a function value from F(P0) to F(P1), and we know that, since P0 is a point of a local minimum, within a small neighborhood around P0 the value F(P1) cannot be less than F(P0).
More rigorously, there exists a positive δ such that for any P1 that satisfies
||P1−P0|| ≤ δ
this is true: F(P0) ≤ F(P1)
Consider a straight line between P0 and P1 in our N-dimensional space.
Its points Q can be parameterized as Q(t) = P0 + t·(P1−P0)
For t=0, Q(t)=Q(0)=P0.
For t=1, Q(t)=Q(1)=P1.
For t=−1, Q(−1) is a point symmetrical to P1 relative to point P0.
For all other t point Q(t) lies somewhere on a line that goes through P0 and P1.
If we concentrate on a behavior of function F(Q(t))=F(P0+t·(P1−P0))
where Q(t) is a point on a line going through P0 and P1, it can be considered as a function of only one variable t and, therefore, can be analyzed using a classical methodology of Calculus.
We can take a derivative of F(P0+t·(P1−P0)) by t and, since P0 is a local minimum, this derivative must be equal to zero at this point P0, that is at t=0.
This derivative constitutes a directional derivative of function F(P) at point P0 along a direction defined by a location of point P1.
What's more interesting is that, if we shift the location of point P1 in the neighborhood of P0, a similar approach would show that this directional derivative by t would still be zero at t=0 because function F(P) has minimum at P0 regardless of the direction of a shift.
Our conclusion is that, if function F(P) has local minimum at point P0, the directional derivative at point P0 in the direction from P0 to P1 should be equal to zero regardless of location of P1 in the neighborhood of P0.
Let's see how it works if, instead of points P0 and P1 in N-dimensional space, we use Cartesian coordinates.
Note: This is a method that can be used for functions of N arguments, but not for functionals, where we will use only directional derivatives.
See item (b6) below.
Let P0(...x0i...), P1(...x1i...) and Q(t)(...qi...) be points with coordinates indexed by i∈[1,N],
where qi = x0i+t·(x1i−x0i)
The directional derivative of F(Q(t)) = F(q1,...qN) by t, using the chain rule, equals
Σi∈[1,N][∂F(q1,...qN)/∂qi]·(dqi /dt)
or
Σi∈[1,N][∂F(q1,...qN)/∂qi]·(x1i−x0i)
If P0(...x0i...) is the point of minimum, the above expression for a directional derivative must be zero for t=0, that is when Q(t)=Q(0)=P0(...x0i...), for any direction defined by point P1(...x1i...).
The only way it can be true is if every partial derivative ∂F(q1,...qN)/∂qi equals zero at point P0.
We came to a conclusion that, when a function defined on N-dimensional space has a minimum at some point, all partial derivatives of this function are equal to zero at this point.
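Here is a small numerical sketch of this fact (the sample function, the points P0 and P1 and the step sizes are arbitrary assumptions); it compares the chain-rule sum with the direct derivative of F(Q(t)) by t:

import numpy as np

def f(q):
    """Assumed sample function on N-dimensional space (here N = 3)."""
    return (q[0] - 1.0)**2 + 2.0*(q[1] + 0.5)**2 + np.sin(q[2])

def grad_f(q, h=1e-6):
    """Numerical partial derivatives df/dq_i (central differences)."""
    g = np.zeros_like(q)
    for i in range(len(q)):
        e = np.zeros_like(q); e[i] = h
        g[i] = (f(q + e) - f(q - e)) / (2*h)
    return g

P0 = np.array([0.3, -0.2, 1.0])
P1 = np.array([1.1,  0.4, 0.5])

# Chain-rule form: sum over i of (df/dq_i)*(x1_i - x0_i), evaluated at t = 0 (i.e. at P0)
chain_rule = grad_f(P0) @ (P1 - P0)

# Direct derivative of t -> f(P0 + t*(P1 - P0)) at t = 0
h = 1e-6
direct = (f(P0 + h*(P1 - P0)) - f(P0 - h*(P1 - P0))) / (2*h)

print(chain_rule, direct)   # the two values agree up to numerical error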
It's quite appropriate to demonstrate this technique that involves directional derivatives on a simple example.
Consider a function defined on two-dimensional space f(x,y) = (x−1)² + (y−2)²
Let's find a point P0(x0,y0) where it has a local minimum.
Let's step from point P0(x0,y0) to a neighboring one P1(x1,y1) and parameterize all points on a straight line between P0 and P1: Q(t) = P0 + t·(P1−P0) =
= (x0+t·(x1−x0), y0+t·(y1−y0))
The value of our function f() at point Q(t) is f(Q(t)) = (x0+t·(x1−x0)−1)² + (y0+t·(y1−y0)−2)²
The directional derivative of this function by t will then be f 't (Q(t)) = 2(x0+t·(x1−x0)−1)·(x1−x0) +
+ 2(y0+t·(y1−y0)−2)·(y1−y0)
If P0(x0,y0) is a point of minimum, this directional (from P0 towards P1) derivative at t=0 should be equal to zero for any point P1(x1,y1).
At t=0 f 't (Q(0)) = 2(x0−1)·(x1−x0) + 2(y0−2)·(y1−y0)
If P0(x0,y0) is a point of minimum, the expression above must be equal to zero for any x1 and y1, and the only possible values for x0 and y0 are x0=1 and y0=2.
Therefore, point P0(1,2) is a point of minimum.
The same result can be obtained by equating all partial derivatives to zero, as mentioned above. ∂f(x,y)/∂x = 2(x−1) ∂f(x,y)/∂y = 2(y−2)
System of equations ∂f(x,y)/∂x = 0 ∂f(x,y)/∂y = 0
is 2(x−1) = 0 2(y−2) = 0
Its solutions are x = 1 y = 2
Of course, this was obvious from the expression of function f(x,y)=(x−1)²+(y−2)² as it represents a paraboloid z=x²+y² with its vertex (the minimum) shifted to point (1,2).
(b6) Variation of Functionals
Let's follow the above logic that uses directional derivatives and apply it to finding a local minimum of functionals.
To find a local minimum of a functional Φ[f(x)], we should know certain properties of a function-argument f(x) where this minimum takes place.
In the above case of a function defined on N-dimensional space we used the fact that a directional derivative from a point of minimum in any direction is zero.
We do analogously with functionals.
Assume, functional Φ[f(x)] has a local minimum at function-argument f0(x) and takes value Φ[f0(x)] at this function.
Shifting an argument from f0(x) to f1(x) causes change of a functional's value from Φ[f0(x)] to Φ[f1(x)], and we know that, since f0(x) is a point of a local minimum, within a small neighborhood around f0(x) the value Φ[f1(x)] cannot be less than Φ[f0(x)].
More rigorously, there exists a positive δ such that for any f1(x) that satisfies
||f1(x)−f0(x)|| ≤ δ
this is true: Φ[f0(x)] ≤ Φ[f1(x)]
Consider a parameterized family of function-arguments g(t,x) (t is a parameter) defined by a formula g(t,x) = f0(x) + t·[f1(x)−f0(x)]
For t=0 g(t,x)=f0(x).
For t=1 g(t,x)=f1(x).
For t=−1 g(t,x)=2f0(x)−f1(x), which is a function symmetrical to f1(x) relative to f0(x) in the sense that ½[g(−1,x)+f1(x)] = f0(x).
For all real t function g(t,x) is a linear combination of f0(x) and f1(x) and for each pair of these two functions or, equivalently, for each direction from f0(x) towards f1(x) functional Φ[g(t,x)] =
= Φ[f0(x) + t·(f1(x)−f0(x))]
can be considered a real function of a real argument t.
Let's concentrate on the behavior of the real function Φ[g(t,x)] of real argument t, where g(t,x) is a linear combination of f0(x) and f1(x) parameterized by t, and analyze it using the classical methodology of Calculus.
We can take a derivative of Φ[f0(x) + t·(f1(x)−f0(x))] by t and, since f0(x) is a local minimum, this derivative must be equal to zero at this function-argument f0(x), that is at t=0.
This derivative constitutes a directional derivative or variation of functional Φ[f(x)] at function-argument f0(x) along a direction defined by a location of function-argument f1(x).
If we shift the location of function-argument f1(x) in the neighborhood of f0(x), a similar approach would show that this variation (directional derivative by t) would still be zero at t=0 because functional Φ[f(x)] has minimum at f0(x) regardless of the direction of a shift.
Our conclusion is that, if functional Φ[f(x)] has local minimum at function-argument f0(x), the variation (directional derivative) at function-argument f0(x) in the direction from f0(x) to f1(x) should be equal to zero regardless of location of f1(x) in the neighborhood of f0(x).
Examples of handling local minimum of functionals will be presented in the next lecture.
In this lecture we will discuss a concept of a local minimum or maximum of a functional.
Consider a real function f(x1,...,xN) of N arguments defined on a domain of sets of N real numbers.
We can always assume that these N real numbers are represented by a point in N-dimensional vector space with Cartesian coordinates and point O(0,...0) as the origin.
What is the meaning of a statement that this function has a local minimum at point P(x1,...,xN)?
In plain language it means that within a sufficiently small neighborhood around point P, no matter where we move from point P, the value of our function at that new point will be greater than or equal to f(P).
Let's formalize this definition in a way that will be used to define a local minimum of a functional.
Firstly, for convenience, we will use vectors originated at the origin of coordinates and ending at some point instead of N-dimensional coordinates of that point.
So, vector OP that stretches from the origin of coordinates O to point P will replace coordinates (x1,...,xN) of that point.
Using this, our function can be viewed as f(OP).
A "sufficiently small neighborhood" of point P(x1,...,xN) (or of vector OP) can be described as all points Q(x1,...,xN) on a sufficiently small distance from point P according to a regular definition of distance in Cartesian coordinates (or as all vectors OQ also originated at the origin of coordinates such that magnitude of a difference vector |OQ−OP|=|PQ| is sufficiently small).
Since we are dealing with N-dimensional Cartesian space, we know how to determine the distance between two points P and Q or a magnitude of a vector PQ that represents a difference between two vectors OQ−OP.
We can also approach it differently getting an equivalent definition of a minimum.
Consider any vector e of unit length.
Now, all vectors OP+t·e, where t is some "sufficiently small" real value and e is any vector of unit length, describe a "sufficiently small neighborhood" of vector OP.
This representation of "sufficiently small neighborhood" might be more convenient since it depends on a single real value t.
Using the above, we can define a local minimum of N-dimensional function f() as follows.
Vector OP is defined as a point of local minimum of function f() if there exists a real positive τ such that f(OP) ≤ f(OP+t·e) for all 0 ≤ t ≤ τ and for any unit vector e.
Obviously, local maximum can be defined analogously by changing "less than" to "greater than" in the above definition.
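Here is a minimal numerical sketch of this definition (an illustration only; the sample function with an obvious minimum at P(1,2) and the radius τ=0.1 are assumptions):

import numpy as np

def f(p):
    """Assumed sample function with an obvious minimum at P = (1, 2)."""
    return (p[0] - 1.0)**2 + (p[1] - 2.0)**2

P = np.array([1.0, 2.0])          # candidate point of minimum
rng = np.random.default_rng(0)

ok = True
for _ in range(1000):
    e = rng.normal(size=2)
    e /= np.linalg.norm(e)        # random unit vector e
    t = rng.uniform(0.0, 0.1)     # 0 <= t <= tau with tau = 0.1
    ok &= f(P) <= f(P + t*e)      # definition: f(OP) <= f(OP + t*e)

print(ok)   # True: the inequality holds along every sampled direction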
For a function of one argument we usually look for a local minimum by solving an equation with a function's derivative equal to zero.
This is a very geometrical approach to looking for a minimum: on one side of a minimum our function is decreasing, on the other it is increasing, so the derivative changes its sign from negative on the decreasing interval to positive on the increasing one and, therefore, must be equal to zero at the point of minimum itself.
With functions of two and more arguments this geometric logic is not so obvious, but our alternative way of defining a minimum, using a unit vector e originated at the point of minimum and a scalar multiplier t, helps to return to the geometrical meaning of a local minimum.
You can imagine that each direction of unit vector e defines a plane parallel to the Z-axis going through point P and unit vector e, cutting the paraboloid on the picture above with a parabola as the intersection.
This parabola is a function of one variable t and, therefore, at point of minimum P must have its derivative by t equal to zero.
This derivative is called directional derivative with vector e being a direction.
Indeed, if a directional derivative by t of f(OP+t·e) is zero at point P for each unit vector e, regardless of its direction, then point P is a good candidate for a local minimum (or maximum).
This approach allows us to deal with one-dimensional case many times (for each direction of unit vector e) instead of once but for more complicated case of multiple dimensions.
Of course, the number of possible directions of unit vector e is infinite, but it's not difficult to prove that, if the directional derivative along each and every coordinate axis (that is, each partial derivative) is zero, the derivative along any other direction will be zero as well.
So, for a function of N variables it's sufficient to check N first derivatives of this function, and finding all minimum and maximum points requires solving a system of N partial derivative equations with N unknowns. It might not be simple, but it is doable.
Let's try to transfer the above definition of a minimum of a function defined on N-dimensional vector space to a functional defined on a set of functions.
First of all, we will concentrate only on sets of "nice" functions - those defined on some segment [a,b] (including the ends) and differentiable, at least to a derivative of a second order.
Secondly, to make the analogy with vectors even better, we introduce a scalar product [·] of two of these "nice" functions
[f(x)·g(x)] = ∫[a,b] f(x)·g(x)·dx
Now our functions behave pretty much like vectors and we will try to transfer the definition of a minimum from a function defined on N-dimensional vector space to a functional defined on an infinite set of "nice" functions.
Consider functional F(f) defined for each function f(x) from a set of "nice" functions defined above.
Assume, we define a function f0(x) as a point where the functional F(f) has a local minimum.
It implies that there is a neighborhood of function f0(x) such that for any function f(x) located within this neighborhood F(f0(x)) ≤ F(f(x))
The problem with this definition is that we have not defined a concept of "neighborhood" yet.
But that is not difficult provided we have defined a scalar product of two functions.
Recall that a magnitude of a vector can be defined as a square root of its scalar product with itself |v| = √[v·v]
So, the distance between two points P and Q in N-dimensional space, which is the length of vector PQ=OQ−OP, can be expressed as the magnitude of this vector using its scalar product with itself.
Replacing function with a functional and vector in N-dimensional space with a "nice" function, we can define a neighborhood of a function f as a set of all functions g such that magnitude of a difference between functions ||g−f|| = √[(g−f)·(g−f)]
is sufficiently small.
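A minimal sketch of this construction (the segment [0,1] and the sample functions below are assumptions), computing the scalar-product-based distance ||g−f|| numerically:

import numpy as np

x = np.linspace(0.0, 1.0, 10_001)            # assumed segment [a, b] = [0, 1]

def scalar_product(f_vals, g_vals):
    """[f·g] = integral over [a,b] of f(x)*g(x) dx (trapezoidal rule)."""
    return np.trapz(f_vals * g_vals, x)

def distance(f_vals, g_vals):
    """||g - f|| = sqrt([(g-f)·(g-f)])."""
    d = g_vals - f_vals
    return np.sqrt(scalar_product(d, d))

f = np.sin(np.pi * x)
g = np.sin(np.pi * x) + 0.05 * x * (1 - x)   # a nearby function

print(distance(f, g))        # small: g lies in a small neighborhood of f
print(distance(f, x**2))     # larger: x^2 is "far" from sin(pi*x) in this norm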
As in a case of N-dimensional vector space, let's consider an alternative definition that will allow us to use differentiation to find a point of local minimum of a functional.
Consider any "nice" function f0(x) in a sense described above where functional F(f) has a local minimum.
Also consider any other "nice" function h(x) that defines a direction we can shift from point f0(x).
The neighborhood of this function f0(x) in the direction h(x) of radius τ consists of all functions f0(x)+t·h(x)
where 0 ≤ t ≤ τ.
Now we can define a point f0(x) as a local minimum of a functional F if F has a minimum at this point regardless of the choice of direction h(x).
In other words,
Function f0(x) is a local minimum of functional F() if for any direction defined by function h(x) there exists a real positive number τ such that F(f0(x)) ≤ F(f0(x)+t·h(x)) for all 0 ≤ t ≤ τ
Analogously,
Function f0(x) is a local maximum of functional F() if for any direction defined by function h(x) there exists a real positive number τ such that F(f0(x)) ≥ F(f0(x)+t·h(x)) for all 0 ≤ t ≤ τ
The above definitions simplify a complicated dependency of a functional on an infinite set of argument functions to a much simpler dependency on a single real variable.
The usefulness of these definitions is in our ability to differentiate by parameter t, using the fact that this derivative should be zero at points of local minimum or maximum.
But that is a subject of the next lecture.
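As a small numerical preview of that technique (not part of this lecture; the functional F(f)=∫[0,1]f(x)²dx, its obvious minimum at f0(x)=0 and the direction h(x) are assumed only for illustration), one can watch F(f0+t·h) as a function of the single real variable t and check that its derivative at t=0 vanishes:

import numpy as np

x = np.linspace(0.0, 1.0, 10_001)

def F(f_vals):
    """Assumed sample functional F(f) = integral over [0,1] of f(x)^2 dx."""
    return np.trapz(f_vals**2, x)

f0 = np.zeros_like(x)            # candidate minimum: f0(x) = 0 everywhere
h  = np.sin(3 * np.pi * x)       # an arbitrary direction h(x)

# F along the line f0 + t*h is a function of the single real variable t
for t in (0.0, 0.01, 0.1, 0.5):
    print(t, F(f0 + t*h))        # grows as we move away from f0 in direction h

eps = 1e-6
dF_dt = (F(f0 + eps*h) - F(f0 - eps*h)) / (2*eps)
print(dF_dt)                     # ~ 0: the derivative by t vanishes at t = 0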