## Thursday, August 15, 2024

### Vectors+ 09 Examples of Hilbert Spaces: UNIZOR.COM - Math+ & Problems

Notes to a video lecture on http://www.unizor.com

Vectors+ 09
Examples of Hilbert Spaces

Let's illustrate our theory of Hilbert spaces with a few examples.

Example 1

Consider a set V of all polynomials of real argument x defined on a segment [0,1].
It's a linear vector space with any polynomial acting as a vector in this space because all the previously mentioned axioms for an abstract vector space are satisfied:
(A1) Addition of any two polynomials a(x) and b(x) is commutative
∀a(x),b(x)∈V: a(x) + b(x) = b(x) + a(x)
(A2) Addition of any three polynomials a(x), b(x) and c(x) is associative
∀a(x),b(x),c(x)∈V:
[a(x)+b(x)]+c(x) = a(x)+[b(x)+c(x)]
(A3) There is a polynomial that is equal to 0 for any argument in segment [0,1], called the null-polynomial and denoted 0(x) (that is, 0(x)=0 for any x of the domain [0,1]), with the property of not changing any other polynomial a(x) when added to it
∀a(x)∈V: a(x) + 0(x) = a(x)
(A4) For any polynomial a(x) there is another polynomial called its opposite, denoted −a(x), such that the sum of a polynomial and its opposite equals the null-polynomial (that is, a polynomial equal to zero for all arguments)
∀a(x)∈V ∃−a(x)∈V: a(x)+(−a(x))=0(x)

(B1) Multiplication of any scalar (element of the set of all real numbers) α by any polynomial a(x) is commutative
∀a(x)∈V, ∀real α: α·a(x) = a(x)·α
(B2) Multiplication of any two scalars α and β by any polynomial a(x) is associative
∀a(x)∈V, ∀real α,β:
(α·β)·a(x) = α·(β·a(x))
(B3) Multiplication of any polynomial by scalar 0 results in the null-polynomial
∀a(x)∈V: 0·a(x) = 0(x)
(B4) Multiplication of any polynomial a(x) by scalar 1 does not change this polynomial
∀a(x)∈V: 1·a(x) = a(x)
(B5) Multiplication is distributive with respect to addition of polynomials
∀a(x),b(x)∈V, ∀real α:
α·(a(x)+b(x)) = α·a(x)+α·b(x)
(B6) Multiplication is distributive with respect to addition of scalars
∀a(x)∈V, ∀real α,β: (α+β)·a(x) = α·a(x)+β·a(x)

Let's define a scalar product of two polynomials as an integral of their algebraic product on a segment [0,1].
To distinguish a scalar product of two polynomials from their algebraic product under integration, we will use the notation [a(x)·b(x)] for a scalar product.
[a(x)·b(x)] = ∫[0,1] a(x)·b(x) dx
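As an informal numerical illustration (not part of the lecture), this scalar product can be approximated with a midpoint Riemann sum; the helper name `scalar_product` and the step count are assumptions of this sketch.

```python
def scalar_product(a, b, steps=100_000):
    """Approximate the integral of a(x)*b(x) over [0, 1] by a midpoint sum."""
    h = 1.0 / steps
    return sum(a((i + 0.5) * h) * b((i + 0.5) * h) for i in range(steps)) * h

# Example: a(x) = x, b(x) = x^2; the exact integral of x^3 over [0, 1] is 1/4.
approx = scalar_product(lambda x: x, lambda x: x ** 2)
```

For polynomials the integral could also be computed exactly term by term; the Riemann sum is used here only to keep the sketch short.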

This definition of a scalar product satisfies all the axioms we set for a scalar product in an abstract vector space.
(1) For any polynomial a(x) from V that is not the null-polynomial, its scalar product with itself is a positive real number
∀a(x)∈V, a(x)≠0(x):
∫[0,1] a(x)·a(x) dx > 0
(2) For null-polynomial 0(x) its scalar product with itself is equal to zero
∫[0,1] 0(x)·0(x) dx = 0
(3) Scalar product of any two polynomials a(x) and b(x) is commutative because an algebraic multiplication of polynomials is commutative
∀a(x),b(x)∈V:
∫[0,1] a(x)·b(x) dx = ∫[0,1] b(x)·a(x) dx
(4) Scalar product is homogeneous: a scalar factor of a polynomial can be moved outside the scalar product
∀a(x),b(x)∈V, for any real γ:
∫[0,1] (γ·a(x))·b(x) dx =
= γ·∫[0,1] a(x)·b(x) dx
(5) Scalar product is distributive with respect to addition of polynomials. ∀a(x),b(x),c(x)∈V:
∫[0,1] (a(x)+b(x))·c(x) dx =
= ∫[0,1] a(x)·c(x) dx + ∫[0,1] b(x)·c(x) dx

Based on the above axioms, which are satisfied by polynomials with the scalar product defined as above, we can say that this set is a pre-Hilbert space.
The only part missing for it to be a complete Hilbert space is that this set does not contain the limits of certain convergent sequences of its elements.
Indeed, we can approximate many smooth non-polynomial functions with sequences of polynomials (recall, for example, Taylor series), and the limit of such a sequence is not itself a polynomial, so it lies outside of our set.

However, the Cauchy-Schwarz-Bunyakovsky inequality was proven for any abstract vector space with a scalar product (pre-Hilbert space), and we can apply it to our set of polynomials.
According to this inequality, the following is true for any pair of polynomials:
[a(x)·b(x)]² ≤ [a(x)·a(x)]·[b(x)·b(x)]
or, using our explicit definition of a scalar product,
[∫[0,1] a(x)·b(x) dx]² ≤
≤ [∫[0,1] a²(x) dx]·[∫[0,1] b²(x) dx]

Just out of curiosity, let's see how it looks for a(x)=xᵐ and b(x)=xⁿ.
In this case
a(x)·b(x) = xᵐ⁺ⁿ
a²(x) = x²ᵐ
b²(x) = x²ⁿ
Calculating all the scalar products
∫[0,1] xᵐ⁺ⁿ dx = 1/(m+n+1)
∫[0,1] x²ᵐ dx = 1/(2m+1)
∫[0,1] x²ⁿ dx = 1/(2n+1)
Now the Cauchy-Schwarz-Bunyakovsky inequality looks like
1/(m+n+1)² ≤ 1/[(2m+1)(2n+1)]

The validity of this inequality is not obvious, so it would be nice to check if it's really true for any m and n.
To check it, let's transform it into an equivalent inequality between denominators with reversed sign of inequality
(m+n+1)² ≥ (2m+1)(2n+1)
Opening all the parenthesis leads us to this equivalent inequality
m²+n²+1+2mn+2m+2n ≥
≥ 4mn+2m+2n+1

After obvious simplification the resulting inequality looks like
m²+n²−2mn ≥ 0
which is always true because the left side equals (m−n)².
All transformations were equivalent and reversible, which proves the original inequality.
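A quick numerical sanity check (an illustration, not a proof) can confirm this inequality over a grid of small non-negative integers m and n:

```python
# Check (m+n+1)^2 >= (2m+1)(2n+1), which is equivalent to the inequality
# 1/(m+n+1)^2 <= 1/[(2m+1)(2n+1)] between the reciprocals,
# for all pairs of integers 0 <= m, n < 50.
holds = all(
    (m + n + 1) ** 2 >= (2 * m + 1) * (2 * n + 1)
    for m in range(50)
    for n in range(50)
)
```

Note that equality is reached exactly when m = n, which matches (m−n)² = 0.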

Example 2

Elements of our new vector space are infinite sequences of real numbers {xₙ} (n changes from 1 to ∞) for which the series Σn∈[1,∞) xₙ² converges to some limit.

Addition and multiplication by a scalar are defined member-by-member on the sequences involved.
These operations preserve the convergence of the sum of squares of elements.

Scalar product is defined as
{xₙ}·{yₙ} = Σn∈[1,∞) xₙ·yₙ
In some sense this is an extension of N-dimensional Euclidean space to an infinite number of dimensions, as long as a scalar product is properly defined, which in our case is assured by the convergence of the sum of squares of the elements.
Indeed, this definition makes sense because each member of the sum that defines a scalar product is bounded
|xₙ·yₙ| ≤ ½(xₙ² + yₙ²)
and the sum of the right side of this inequality over all n∈[1,∞) converges.

This set is a Hilbert space (we skip the proof of completeness for brevity), and its properties are very much the same as those of N-dimensional Euclidean space.
All the axioms of Hilbert space are satisfied.
As a consequence, the Cauchy-Schwarz-Bunyakovsky inequality holds:
[{xₙ}·{yₙ}]² ≤ [{xₙ}·{xₙ}]·[{yₙ}·{yₙ}]
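As an informal illustration, this inequality can be checked numerically on truncated square-summable sequences; the choices xₙ = 1/n, yₙ = 1/n² and the cutoff N are assumptions of this sketch.

```python
# Truncated square-summable sequences x_n = 1/n and y_n = 1/n^2 for n = 1..N.
N = 10_000
x = [1 / n for n in range(1, N + 1)]
y = [1 / n ** 2 for n in range(1, N + 1)]

def dot(u, v):
    """Scalar product of two (truncated) sequences."""
    return sum(a * b for a, b in zip(u, v))

lhs = dot(x, y) ** 2          # [{x_n}·{y_n}]^2
rhs = dot(x, x) * dot(y, y)   # [{x_n}·{x_n}]·[{y_n}·{y_n}]
```

With these particular sequences the partial sums approach Σ1/n³, π²/6 and π⁴/90 respectively, and the left side indeed stays below the right side.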

Problem A

Given a set of all real two-dimensional vectors (a₁,a₂) with standard definitions of addition and multiplication by a scalar (real number)
(a₁,a₂) + (b₁,b₂) = (a₁+b₁,a₂+b₂)
λ·(a₁,a₂) = (λ·a₁,λ·a₂)
So, it's a linear vector space.

The scalar product we will define in a non-standard way:
(a₁,a₂)·(b₁,b₂) =
= a₁·b₁ + 2·a₁·b₂ + 2·a₂·b₁ + a₂·b₂

Is this vector space a Hilbert space?

Hint A
Check if a scalar product of some vector by itself is zero, while the vector is not a null-vector.

Solution A
Let's examine all vectors whose second component equals 1 and find a first component x that breaks the rule that a scalar product of a vector with itself is positive unless the vector is a null-vector
(x,1)·(x,1) = 0

According to our non-standard definition of a scalar product, this means the following for a₁=b₁=x and a₂=b₂=1
x·x + 2·x·1 + 2·1·x + 1·1 = 0
x² + 4·x + 1 = 0
x₁ = −2 + √3
x₂ = −2 − √3
So, both vectors (x₁,1) and (x₂,1) have the property that the scalar product of a vector by itself gives zero, while the vectors themselves are not null-vectors.
Indeed, for vector (x₁,1) this scalar product with itself is
(x₁,1)·(x₁,1) = (−2+√3,1)·(−2+√3,1) =
= (4−4√3+3) + 4·(−2+√3) + 1 = 0

Therefore, the scalar product defined this way does not satisfy the axioms of a scalar product in Hilbert space, and our space is not a Hilbert space.
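The computation above can be double-checked numerically; the helper name `product` is an assumption of this sketch.

```python
import math

def product(a, b):
    """The non-standard 'scalar product' of Problem A."""
    a1, a2 = a
    b1, b2 = b
    return a1 * b1 + 2 * a1 * b2 + 2 * a2 * b1 + a2 * b2

# The root x1 = -2 + sqrt(3) of x^2 + 4x + 1 = 0 gives a non-null vector
# whose product with itself is analytically equal to 0.
x1 = -2 + math.sqrt(3)
value = product((x1, 1), (x1, 1))
```

Up to floating-point rounding, `value` is zero even though (x₁, 1) is not a null-vector, confirming that axiom (1) fails for this product.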

Problem B

Prove the parallelogram law in Hilbert space V
∀a,b∈V:
||a−b||² + ||a+b||² = 2||a||² + 2||b||²

Note B
For vectors in two-dimensional Euclidean space this statement geometrically means that the sum of squares of the two diagonals of a parallelogram equals the sum of squares of all four of its sides.
The parallelogram law can be proven geometrically in this case, using, for example, the Law of Cosines.

Hint B
The definition of a norm or magnitude of a vector x in Hilbert space is
||x|| = √(x·x)
Using this, all you need to prove the parallelogram law is to open the parentheses in the squared magnitudes of a−b and a+b.
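As a numerical sanity check (for intuition only, not a proof), the parallelogram law can be verified in two-dimensional Euclidean space with the standard scalar product on arbitrarily chosen vectors:

```python
def norm_sq(v):
    """Squared norm ||v||^2 = v·v with the standard scalar product."""
    return sum(c * c for c in v)

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

# Arbitrarily chosen vectors (an assumption of this sketch).
a = [3.0, -1.0]
b = [2.0, 5.0]

lhs = norm_sq(sub(a, b)) + norm_sq(add(a, b))  # ||a−b||² + ||a+b||²
rhs = 2 * norm_sq(a) + 2 * norm_sq(b)          # 2||a||² + 2||b||²
```

The two sides agree for any choice of a and b, since the cross terms ±2(a·b) cancel when the two squared magnitudes are added.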