*Notes to a video lecture on http://www.unizor.com*

__Vectors+ 09__

Examples of Hilbert Spaces

Let's illustrate our theory of Hilbert spaces with a few examples.

*Example 1*

Consider a set **V** of *all polynomials of real argument x* defined on a segment [*0,1*].

It's a linear vector space with any polynomial acting as a vector in this space because all the previously mentioned axioms for an abstract vector space are satisfied:

(A1) Addition of any two polynomials **a**(x) and **b**(x) is **commutative**
∀ **a**(x),**b**(x) ∈ **V**:
**a**(x) + **b**(x) = **b**(x) + **a**(x)

(A2) Addition of any three polynomials **a**(x), **b**(x) and **c**(x) is **associative**
∀ **a**(x),**b**(x),**c**(x) ∈ **V**:
[**a**(x) + **b**(x)] + **c**(x) = **a**(x) + [**b**(x) + **c**(x)]

(A3) There is one polynomial that is equal to *0* for any argument in the segment [*0,1*], called the **null-polynomial** and denoted **0**(x) (that is, **0**(x)=0 for any *x* of the domain [*0,1*]), with the property of not changing the value of any other polynomial **a**(x) if added to it
∀ **a**(x) ∈ **V**:
**a**(x) + **0**(x) = **a**(x)

(A4) For any polynomial **a**(x) there is another polynomial called its opposite, denoted **−a**(x), such that the sum of a polynomial and its opposite equals the null-polynomial (that is, a polynomial equal to zero for all arguments)
∀ **a**(x) ∈ **V** ∃ **−a**(x) ∈ **V**:
**a**(x) + (**−a**(x)) = **0**(x)

(B1) Multiplication of any scalar (element of the set of all real numbers) *α* by any polynomial **a**(x) is **commutative**
∀ **a**(x) ∈ **V**, ∀ real *α*:
*α*·**a**(x) = **a**(x)·*α*

(B2) Multiplication of any two scalars *α* and *β* by any polynomial **a**(x) is **associative**
∀ **a**(x) ∈ **V**, ∀ real *α,β*:
(*α·β*)·**a**(x) = *α*·(*β*·**a**(x))

(B3) Multiplication of any polynomial by scalar *0* results in the null-polynomial
∀ **a**(x) ∈ **V**:
*0*·**a**(x) = **0**(x)

(B4) Multiplication of any polynomial **a**(x) by scalar *1* does not change the value of this polynomial
∀ **a**(x) ∈ **V**:
*1*·**a**(x) = **a**(x)

(B5) Multiplication is **distributive** relative to addition of polynomials
∀ **a**(x),**b**(x) ∈ **V**, ∀ real *α*:
*α*·(**a**(x) + **b**(x)) = *α*·**a**(x) + *α*·**b**(x)

(B6) Multiplication is **distributive** relative to addition of scalars
∀ **a**(x) ∈ **V**, ∀ real *α,β*:
(*α+β*)·**a**(x) = *α*·**a**(x) + *β*·**a**(x)

Let's define a scalar product of two polynomials as an integral of their algebraic product on the segment [*0,1*].

To distinguish a scalar product of two polynomials from their algebraic product under integration, we will use the notation [**a**(x)·**b**(x)] for a scalar product:
[**a**(x)·**b**(x)] = ∫_{[0,1]} **a**(x)·**b**(x) dx

This definition of a scalar product satisfies all the axioms we set for a scalar product in an abstract vector space.
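To make the definition concrete, here is a small Python sketch (not part of the original lecture; the coefficient-list representation and the function names `poly_mul` and `scalar_product` are my own). It computes the scalar product exactly, using the fact that the integral of x^k over [0,1] equals 1/(k+1):

```python
def poly_mul(a, b):
    # Multiply two polynomials given as coefficient lists (index = power of x).
    prod = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] += ai * bj
    return prod

def scalar_product(a, b):
    # [a(x)·b(x)] = integral over [0,1] of a(x)·b(x) dx,
    # computed exactly: integral of x^k over [0,1] is 1/(k+1).
    return sum(c / (k + 1) for k, c in enumerate(poly_mul(a, b)))

# Example: a(x) = x (coefficients [0,1]), b(x) = 1+x (coefficients [1,1]):
# [a·b] = integral of (x + x²) dx over [0,1] = 1/2 + 1/3 = 5/6
print(scalar_product([0, 1], [1, 1]))
```

Swapping the arguments gives the same value, illustrating axiom (3) below (commutativity of the scalar product).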

(1) For any polynomial **a**(x) from **V**, which is not the **null-polynomial**, its scalar product with itself is a positive real number
∀ **a**(x) ∈ **V**, **a**(x) ≠ **0**(x):
∫_{[0,1]} **a**(x)·**a**(x) dx > 0

(2) For the null-polynomial **0**(x) its scalar product with itself is equal to zero
∫_{[0,1]} **0**(x)·**0**(x) dx = 0

(3) Scalar product of any two polynomials **a**(x) and **b**(x) is **commutative**, because algebraic multiplication of polynomials is commutative
∀ **a**(x),**b**(x) ∈ **V**:
∫_{[0,1]} **a**(x)·**b**(x) dx = ∫_{[0,1]} **b**(x)·**a**(x) dx

(4) Scalar product of any two polynomials **a**(x) and **b**(x) is proportional to their magnitude
∀ **a**(x),**b**(x) ∈ **V**, for any real *γ*:
∫_{[0,1]} (*γ*·**a**(x))·**b**(x) dx = *γ*·∫_{[0,1]} **a**(x)·**b**(x) dx

(5) Scalar product is **distributive** relative to addition of polynomials
∀ **a**(x),**b**(x),**c**(x) ∈ **V**:
∫_{[0,1]} (**a**(x)+**b**(x))·**c**(x) dx = ∫_{[0,1]} **a**(x)·**c**(x) dx + ∫_{[0,1]} **b**(x)·**c**(x) dx

Based on the above axioms, satisfied by polynomials with the scalar product defined as we did, we can say that this set is a **pre-Hilbert space**.

The only part missing for it to be a complete Hilbert space is that this set does not contain the limits of certain sequences of its elements.

Indeed, we can approximate many smooth non-polynomial functions with sequences of polynomials (recall, for example, Taylor series).
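As an illustration of this incompleteness (my own example, not from the lecture): the partial Taylor sums of e^x form a sequence of polynomials whose distance to e^x, in the norm induced by our scalar product, shrinks toward zero; yet the limit e^x is not a polynomial. A rough numeric sketch, with the integral estimated by a midpoint Riemann sum:

```python
import math

# Partial Taylor sums of e^x: p_N(x) = sum of x^k / k! for k = 0..N,
# a sequence of polynomials approximating a non-polynomial function.
def taylor_exp(N, x):
    return sum(x**k / math.factorial(k) for k in range(N + 1))

# Distance in our norm: sqrt of the integral over [0,1] of (e^x - p_N(x))²,
# estimated numerically by a midpoint Riemann sum.
def dist_to_exp(N, steps=1000):
    h = 1.0 / steps
    s = sum((math.exp((i + 0.5) * h) - taylor_exp(N, (i + 0.5) * h))**2
            for i in range(steps))
    return math.sqrt(s * h)

for N in (2, 4, 8):
    print(N, dist_to_exp(N))
```

The printed distances decrease rapidly with N, consistent with the sequence being Cauchy while its limit lies outside the set of polynomials.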

However, the Cauchy-Schwarz-Bunyakovsky inequality was proven for any abstract vector space with a scalar product (pre-Hilbert space), so we can apply it to our set of polynomials.

According to this inequality, the following is true for any pair of polynomials:

[**a**(x)·**b**(x)]² ≤ [**a**(x)·**a**(x)]·[**b**(x)·**b**(x)]

or, using our explicit definition of a scalar product,

[∫_{[0,1]} **a**(x)·**b**(x) dx]² ≤ [∫_{[0,1]} **a**²(x) dx]·[∫_{[0,1]} **b**²(x) dx]

Just out of curiosity, let's see how it looks for **a**(x) = **x**^{m} and **b**(x) = **x**^{n}.

In this case
**a**(x)·**b**(x) = **x**^{m+n}
**a**²(x) = **x**^{2m}
**b**²(x) = **x**^{2n}

Calculating all the scalar products:
∫_{[0,1]} **x**^{m+n} dx = 1/(m+n+1)
∫_{[0,1]} **x**^{2m} dx = 1/(2m+1)
∫_{[0,1]} **x**^{2n} dx = 1/(2n+1)

Now the Cauchy-Schwarz-Bunyakovsky inequality looks like
**1/(m+n+1)² ≤ 1/[(2m+1)(2n+1)]**

The validity of this inequality is not obvious, so it would be nice to check that it's really true for any *m* and *n*.

To check it, let's transform it into an equivalent inequality between the denominators, with the inequality sign reversed:
**(m+n+1)² ≥ (2m+1)(2n+1)**

Opening all the parentheses leads us to this equivalent inequality:
**m² + n² + 1 + 2mn + 2m + 2n ≥ 4mn + 2m + 2n + 1**

After obvious simplification the resulting inequality looks like
**m² + n² − 2mn ≥ 0**
which is always true because the left side equals (**m−n**)².

All transformations were equivalent and reversible, which proves the original inequality.
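The inequality for the denominators can also be checked numerically over a range of exponents; a small sketch (my own, for illustration):

```python
# Check 1/(m+n+1)² ≤ 1/((2m+1)(2n+1)) for a range of exponents m, n.
for m in range(10):
    for n in range(10):
        lhs = 1.0 / (m + n + 1)**2
        rhs = 1.0 / ((2*m + 1) * (2*n + 1))
        assert lhs <= rhs, (m, n)
        # Equality holds exactly when m = n, matching (m−n)² = 0.
        if m == n:
            assert lhs == rhs
print("inequality holds for all tested m, n")
```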

*Example 2*

Elements of our new vector space are infinite sequences of real numbers {**x**_{n}} (*n* changes from *1* to ∞) for which the series
**Σ**_{n∈[1,∞)} **x**_{n}²
converges.

Addition and multiplication by a scalar are defined member-by-member on the sequences involved.

These operations preserve the convergence of the sum of squares of the elements.

Scalar product is defined as
{**x**_{n}}·{**y**_{n}} = **Σ**_{n∈[1,∞)} **x**_{n}·**y**_{n}

In some sense this is an expansion of *N*-dimensional Euclidean space to an infinite number of dimensions, as long as the scalar product is properly defined, which in our case is assured by the convergence of the sum of squares of the elements.

Indeed, this definition makes sense because each member of the sum that defines a scalar product is bounded:
**|x**_{n}**·y**_{n}**| ≤ ½(x**_{n}**² + y**_{n}**²)**
and the sum of the right side of this inequality over all *n* ∈ [*1,∞*) converges.
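A small numeric illustration (my own sketch; the particular sequences x_n = 1/n and y_n = 1/n² are assumptions chosen because both are square-summable):

```python
# Two square-summable sequences, truncated at N terms: x_n = 1/n, y_n = 1/n².
N = 100000
x = [1.0 / n for n in range(1, N + 1)]
y = [1.0 / n**2 for n in range(1, N + 1)]

# Scalar product: partial sum of x_n·y_n, here the sum of 1/n³,
# which converges (to Apery's constant, approximately 1.2021).
sp = sum(a * b for a, b in zip(x, y))

# Term-by-term bound |x_n·y_n| ≤ ½(x_n² + y_n²) holds for every n.
assert all(abs(a * b) <= 0.5 * (a*a + b*b) for a, b in zip(x, y))
print(sp)
```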

This set is a Hilbert space (we skip the proof that this space is **complete** for brevity); its properties are very much the same as the properties of *N*-dimensional Euclidean space.

All the axioms of Hilbert space are satisfied.

As a consequence, the Cauchy-Schwarz-Bunyakovsky inequality is
[{**x**_{n}}·{**y**_{n}}]² ≤ [{**x**_{n}}·{**x**_{n}}]·[{**y**_{n}}·{**y**_{n}}]

**y**_{n}*Problem A*

Given a set of all real two-dimensional vectors (**a**_{1},**a**_{2}) with standard definitions of addition and multiplication by a scalar (real number):
(**a**_{1},**a**_{2}) + (**b**_{1},**b**_{2}) = (**a**_{1}+**b**_{1},**a**_{2}+**b**_{2})
*λ*·(**a**_{1},**a**_{2}) = (*λ*·**a**_{1},*λ*·**a**_{2})

So, it's a linear vector space.

The scalar product we will define in a non-standard way:
(**a**_{1},**a**_{2})·(**b**_{1},**b**_{2}) = **a**_{1}·**b**_{1} + 2·**a**_{1}·**b**_{2} + 2·**a**_{2}·**b**_{1} + **a**_{2}·**b**_{2}

Is this vector space a Hilbert space?

*Hint A*

Check if a scalar product of some vector by itself is zero, while the vector is not a null-vector.

*Solution A*

Let's examine all vectors that have the second component equal to *1* and find the first component *x* that breaks the rule that the scalar product of a vector with itself must be positive unless the vector is a null-vector:
*(x,1)·(x,1) = 0*

According to our non-standard definition of a scalar product, this means the following for **a**_{1}=**b**_{1}=*x* and **a**_{2}=**b**_{2}=*1*:
*x·x + 2·x·1 + 2·1·x + 1·1 = 0*
*x² + 4·x + 1 = 0*
*x*_{1} = −2 + √3
*x*_{2} = −2 − √3

So, both vectors (*x*_{1},*1*) and (*x*_{2},*1*) have the property that the scalar product of the vector with itself gives zero, while the vectors themselves are not null-vectors.

Indeed, for vector (*x*_{1},*1*) the scalar product with itself is
(*x*_{1},*1*)·(*x*_{1},*1*) = *(−2+√3,1)·(−2+√3,1)* =
= (4−4√3+3) + 4·(−2+√3) + 1 = 0

Therefore, the scalar product thus defined does not satisfy the axioms for a scalar product in Hilbert space, and our space is not a Hilbert space.
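A quick numeric confirmation of this solution (my own sketch; the function name `sp` is hypothetical):

```python
import math

def sp(a, b):
    # The non-standard "scalar product" from Problem A.
    return a[0]*b[0] + 2*a[0]*b[1] + 2*a[1]*b[0] + a[1]*b[1]

# The two roots of x² + 4x + 1 = 0, paired with second component 1.
x1 = (-2 + math.sqrt(3), 1.0)
x2 = (-2 - math.sqrt(3), 1.0)

# Both are non-null vectors, yet their "scalar product" with themselves
# vanishes (up to floating-point error), violating positivity.
print(sp(x1, x1))
print(sp(x2, x2))
```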

*Problem B*

Prove the **parallelogram law** in Hilbert space **V**:
∀ **a**,**b** ∈ **V**:
**||a−b||² + ||a+b||² = 2||a||² + 2||b||²**

*Note B*

For vectors in two-dimensional Euclidean space this statement geometrically means that the sum of squares of the two diagonals of a parallelogram equals the sum of squares of all its sides.

The parallelogram law can be proven geometrically in this case, using, for example, the Law of Cosines.
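Before proving the law algebraically, one can sanity-check the identity numerically in an ordinary Euclidean space with the standard dot product (my own sketch; the 5-dimensional space and random test vectors are arbitrary choices):

```python
import random

def dot(a, b):
    # Standard Euclidean scalar product.
    return sum(u * v for u, v in zip(a, b))

def norm_sq(a):
    # ||a||² = a·a
    return dot(a, a)

random.seed(0)
for _ in range(100):
    a = [random.uniform(-1, 1) for _ in range(5)]
    b = [random.uniform(-1, 1) for _ in range(5)]
    diff = [u - v for u, v in zip(a, b)]
    s = [u + v for u, v in zip(a, b)]
    # ||a−b||² + ||a+b||² should equal 2||a||² + 2||b||²
    lhs = norm_sq(diff) + norm_sq(s)
    rhs = 2 * norm_sq(a) + 2 * norm_sq(b)
    assert abs(lhs - rhs) < 1e-9
print("parallelogram law verified on random vectors")
```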

*Hint B*

The definition of a

**norm**or magnitude of a vector

*in Hilbert space is*

**x**

**||x|| = √(x·x)**Using this, all you need to prove the parallelogram law is to open parenthesis is the magnitudes of

*and*

**a−b***.*

**a+b**