Monday, February 6, 2017

Unizor - Indefinite Integrals - Definition





Notes to a video lecture on http://www.unizor.com

Indefinite Integral - Definition

Consider a set of sufficiently smooth (differentiable as many times as the context implies) functions and the operation of differentiation.
This operation takes any such smooth function (an element of our set) and finds a corresponding target - its derivative (also an element of our set). So, differentiation is a unary operation on a set of smooth functions.

When we talk about unary operations, we always want to know their properties. We have already considered how differentiation works on a function multiplied by a constant, on a sum of two functions, on their product and on their ratio. We have not yet touched the inverse operation.
Integration is the inverse operation to differentiation, and we would like to define it more precisely.

The meaning of an inverse operation on a set is that, if applied to the result of the direct operation, it returns the original element of the set to which the direct operation was applied.
In our case, the direct operation is differentiation, which produces the derivative of an original function. So, we expect that integration, applied to the derivative of an original function, returns that original function.

Well, this is not exactly true. Integration is not an inverse operation to differentiation in the classical meaning of the word "inverse". The reason is simple, and it's the same reason the square root is not exactly an inverse to raising to the power of two. As we know, 2² = (−2)² = 4. So, raising to the power of two is an operation that for any real number finds its square. But taking the square root of 4 has two targets, 2 and −2, so this operation returns two different values, which does not correspond to the classical concept of an operation delivering one exact target for any source.
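As a quick illustration of this ambiguity, here is a minimal sketch using Python's sympy library (the library choice and variable names are ours, purely for illustration): solving x² = 4 returns both candidates.

from sympy import symbols, solve, Eq

x = symbols('x', real=True)

# Squaring sends both 2 and -2 to 4, so "undoing" the square
# produces two candidates rather than a single one.
print(solve(Eq(x**2, 4), x))   # [-2, 2]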

As in the case of the square root, integration of a function results in more than one function whose derivative equals the function being integrated.
Consider two functions, f(x) and g(x) = f(x)+C, where C is a constant.
Derivatives of these two functions are the same because (we use the symbol Dx for the operation of differentiation)
Dx g(x) = Dx (f(x)+C) =
= Dx f(x) + Dx C = Dx f(x)

since the derivative of a constant is zero.

Since our constant C can be any real number, we have an infinite number of functions whose derivatives are the same. It's quantitatively more complex than with the square root, where we have only two numbers with equal squares, but the idea is the same, and we have to deal with this somehow.
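As a sanity check of this argument, here is a minimal sympy sketch (the particular choice f(x) = x³ is ours, just an example) showing that adding a constant does not change the derivative:

from sympy import symbols, diff

x, C = symbols('x C')

f = x**3        # an arbitrary smooth function
g = f + C       # the same function shifted by a constant

# Both derivatives are 3*x**2; the constant disappears under differentiation.
print(diff(f, x))                 # 3*x**2
print(diff(g, x))                 # 3*x**2
print(diff(f, x) == diff(g, x))   # True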

One way we deal with the square root is to use the symbol ± to indicate both results of this operation, implying that both the positive and the negative number, if squared, give the element we applied the square root to.
Analogously, since all functions that differ only by an additive constant result in the same derivative, we just write +C after any specific function whose derivative gives the function we integrated, to specify all the possibilities of integration.

Again, referring to square roots, where mathematicians invented a special symbol √ for this operation, a special symbol for integration was invented as well; it's called the "integral" symbol and it looks like this: ∫.

So, if Dx f(x) = g(x), then
∫ g(x) = f(x)+C

To emphasize that differentiation was by argument x, we used an index in the operation of differentiation Dx or the symbol d/dx. For similar purposes we would like to say that the argument of integration is x. To specify this, we add dx at the end of the operation of integration, so the complete notation is
∫ g(x) dx = f(x)+C,
which implies that the derivative of f(x) (as well as the derivative of any other function that differs from it by a constant) is g(x).
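In computational terms, this relationship can be illustrated with a short sympy sketch (the choice g(x) = cos(x) is ours, just an example); note that integrate returns a single anti-derivative and the "+C" is implied:

from sympy import symbols, integrate, diff, cos

x = symbols('x')

g = cos(x)
f = integrate(g, x)       # sin(x): one particular anti-derivative, "+C" implied
print(f)                  # sin(x)
print(diff(f, x) == g)    # True: differentiation returns the integrand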

Here are a few examples.

Dx xⁿ = n·xⁿ⁻¹ ⇒
⇒ ∫ xⁿ dx = xⁿ⁺¹/(n+1) + C
(for all n ≠ −1,
see the integral of 1/x below)

Dx eˣ = eˣ ⇒
⇒ ∫ eˣ dx = eˣ + C

Dx sin(x) = cos(x) ⇒
⇒ ∫ cos(x) dx = sin(x) + C

Dx ln(x) = 1/x ⇒
⇒ ∫ 1/x dx = ln(x) + C
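Each of these pairs can be verified by differentiating the right-hand side, for instance with the following sympy sketch (here x is assumed positive so that ln(x) is defined, and n is a constant exponent with n ≠ −1, matching the note above):

from sympy import symbols, diff, simplify, exp, sin, cos, log

x = symbols('x', positive=True)
n = symbols('n')   # a constant exponent, assumed n != -1

# Differentiate each claimed anti-derivative and subtract the integrand;
# each difference simplifies to zero.
print(simplify(diff(x**(n + 1)/(n + 1), x) - x**n))   # 0
print(simplify(diff(exp(x), x) - exp(x)))             # 0
print(simplify(diff(sin(x), x) - cos(x)))             # 0
print(simplify(diff(log(x), x) - 1/x))                # 0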

We can now define the operation of integration more precisely.
A set of functions f(x)+C, where f(x) is some concrete smooth function and C is any real constant, is called an integral of the function g(x) if the derivative of f(x) is equal to g(x).
Sometimes any specific function f(x) whose derivative equals g(x) is called an anti-derivative of the function g(x).

There is one unfinished detail in this definition.
Obviously, our goal is to find all functions whose derivative is equal to the function we integrate. Assuming we found one, we can add any real constant to it and get another function whose derivative equals the function we integrate. Does this procedure deliver all the answers? In other words, do all functions whose derivative equals a given function differ only by a constant?

The answer is yes, and here is the proof.

Recall a lecture "Constant Function" among "Main Theorems" of derivatives. In this lecture we have proven that, if the function has derivative equal to zero at each value of an argument, then this function is constant.
Now assume that two functions,f1(x) and f2(x), have derivatives equaled to each other:
d/dx f1(x) = d/dx f2(x)
From this we conclude that
d/dx (f1(x) − f2(x)) = 0
Therefore, according to a theorem mentioned above,
f1(x) − f2(x) = C
(where C - some constant).
End of proof.
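As a concrete illustration of this fact, consider the following sympy sketch (the functions sin²(x) and −cos²(x) are chosen by us because they visibly differ by the constant 1):

from sympy import symbols, sin, cos, diff, simplify

x = symbols('x')

f1 = sin(x)**2
f2 = -cos(x)**2

# The two functions have equal derivatives ...
print(simplify(diff(f1, x) - diff(f2, x)))   # 0
# ... and, indeed, they differ only by a constant: f1 - f2 = 1.
print(simplify(f1 - f2))                     # 1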

As we see, to find an integral of a function, that is, to find all functions whose derivative equals the original integrated function, it is sufficient to find just one such function; adding any real constant to it, we obtain all other functions with the same derivative, and we can be certain that there are no other solutions to our integration.
