*Notes to a video lecture on http://www.unizor.com*

__Vectors+ 04__

N-dimensional Vectors

*Theory*

1. A single real number *R* can be represented as a point **A** on a straight **coordinate line** having some fixed point **O** called the **origin**, a unit of measurement and a particular direction from point **O** chosen as **positive**.

In this case point **A** is defined so that its distance from the origin **O** equals *|R|*, with the sign of *R* corresponding to a defined positive or negative direction from the origin.

At the same time this number *R* can be represented as a vector that stretches along the coordinate line from one point to another, having a magnitude equal to the absolute value *|R|* and the direction corresponding to a defined positive or negative direction from the origin.

So, a single real number *R* is represented as a point **A** on a coordinate line or as a vector **R** on this line. All these representations of a single real number we call **one-dimensional**.

In the vector representation the real number *R=1* can be represented by a vector from the origin of coordinates **O** to a point at a distance of *1* unit of measurement in the positive direction. We will call this vector a **unit vector** *i*.

If we stretch vector **i** by a factor of *R*, taking into consideration the correspondence between the sign of *R* and the direction on the line, we will reach point **A**. That's why we can state
**R = R·i**

2. A pair of real numbers (*R_{1},R_{2}*) can be represented as a point **A** on a **coordinate plane** with some fixed point **O** called the **origin of coordinates** and two perpendicular lines going through the origin **O**, each with a chosen positive direction, called the **abscissa** and the **ordinate**, with some unit of measurement along these lines.

In this case point **A** is defined so that
(a) its distance from the ordinate along a line parallel to the abscissa equals *|R_{1}|*, with the sign of *R_{1}* corresponding to a defined positive or negative direction along the abscissa;
(b) its distance from the abscissa along a line parallel to the ordinate equals *|R_{2}|*, with the sign of *R_{2}* corresponding to a defined positive or negative direction along the ordinate.

At the same time this pair of numbers (*R_{1},R_{2}*) can be represented as a vector that stretches from the origin of coordinates to point **A** defined above, or as any other vector on the plane with the same magnitude and direction.

So, a pair of real numbers (*R_{1},R_{2}*) is represented as a point **A** on a coordinate plane or as a vector **R** from the origin **O** to point **A**, or any other vector of the same magnitude and direction. All these representations of a pair of real numbers we call **two-dimensional**.

In the vector representation the pair of real numbers *(R_{1}=1,R_{2}=0)* is represented by a vector from the origin along the abscissa to point **I**_{1} at a distance of *1* unit of measurement in the positive direction. We will call this vector the **abscissa unit vector** *i*.

Similarly, the pair of real numbers *(R_{1}=0,R_{2}=1)* is represented by a vector from the origin along the ordinate to point **I**_{2} at a distance of *1* unit of measurement in the positive direction. We will call this vector the **ordinate unit vector** *j*.

If we stretch vector **i** by a factor of *R_{1}*, taking into consideration the correspondence between the sign of *R_{1}* and the direction on the abscissa, similarly stretch vector **j** by a factor of *R_{2}*, and add these two vectors by the rules of addition of vectors, we will reach point **A**. That's why we can state
**R = R_{1}·i + R_{2}·j**
which is a representation of vector **R** through unit vectors **i** and **j** and **coordinates** *R_{1}* and *R_{2}*.

Using the above representation of a vector, it's easy to derive a formula for the scalar product of two two-dimensional vectors.
**R = R_{1}·i + R_{2}·j**
**S = S_{1}·i + S_{2}·j**
**R·S = R_{1}·S_{1}·(i·i) + R_{1}·S_{2}·(i·j) + R_{2}·S_{1}·(j·i) + R_{2}·S_{2}·(j·j)**
According to the definition of a scalar product,
**(i·i)=1**, **(i·j)=0**, **(j·i)=0**, **(j·j)=1**
Therefore,
**R·S = R_{1}·S_{1} + R_{2}·S_{2}**

Perpendicularity of vectors **R** and **S** can be checked by examining the coordinate expression for their scalar product *R_{1}·S_{1}+R_{2}·S_{2}*. If it equals zero, the vectors are perpendicular to each other.

The cosine of the angle *φ* between two two-dimensional vectors can be determined as
**cos(φ) = (R·S) / (|R|·|S|)**
where in the numerator we use the scalar product of the two vectors and in the denominator the product of their magnitudes.
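The coordinate formulas above can be sketched in a few lines of Python. This is an illustration only; the helper names `dot2`, `magnitude2` and `cos_angle2` are mine, not from the lecture.

```python
import math

def dot2(R, S):
    # R·S = R1·S1 + R2·S2
    return R[0] * S[0] + R[1] * S[1]

def magnitude2(v):
    # |v| = sqrt(v·v)
    return math.sqrt(dot2(v, v))

def cos_angle2(R, S):
    # cos(φ) = (R·S) / (|R|·|S|)
    return dot2(R, S) / (magnitude2(R) * magnitude2(S))

R = (3.0, 0.0)
S = (0.0, 4.0)
print(dot2(R, S))        # 0.0, so the vectors are perpendicular
print(cos_angle2(R, S))  # 0.0, i.e. φ = 90°
```

For the perpendicularity check it is enough to compare the scalar product with zero; the cosine formula is needed only for a general angle.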

3. Exactly as in the previous two-dimensional case, we can represent a triplet of real numbers (*R_{1},R_{2},R_{3}*) as a point **A** in three-dimensional space, as a vector **R** from the origin of coordinates to that point, or as a vector sum of three mutually perpendicular vectors, each positioned along one of the coordinate axes:
**R = R_{1}·i + R_{2}·j + R_{3}·k**
where **i**, **j** and **k** are unit vectors along three mutually perpendicular axes of coordinates.

Using the above representation of a vector, it's easy to derive a formula for the scalar product of two three-dimensional vectors.
**R = R_{1}·i + R_{2}·j + R_{3}·k**
**S = S_{1}·i + S_{2}·j + S_{3}·k**
**R·S = R_{1}·S_{1} + R_{2}·S_{2} + R_{3}·S_{3}**

Perpendicularity of vectors **R** and **S** can be checked by examining the coordinate expression for their scalar product *R_{1}·S_{1}+R_{2}·S_{2}+R_{3}·S_{3}*. If it equals zero, the vectors are perpendicular to each other.

The cosine of the angle *φ* between two three-dimensional vectors can be determined as
**cos(φ) = (R·S) / (|R|·|S|)**
where in the numerator we use the scalar product of the two vectors and in the denominator the product of their magnitudes.

4. Now we will generalize the same concept to the *N*-dimensional case.

An ordered set of *N* real numbers (*R_{1},R_{2},...,R_{N}*) we will call an *N*-dimensional vector, which can be interpreted as a point in *N*-dimensional coordinate space.

The important operations on vectors known from the *2*- and *3*-dimensional cases can be easily expanded to the *N*-dimensional case:

(a) Addition of two *N*-dimensional vectors, resulting in a new *N*-dimensional vector:
**R**^{(1)} + **R**^{(2)} = **R**^{(2)} + **R**^{(1)} = **R**
(*R_{1}^{(1)},...,R_{N}^{(1)}*) + (*R_{1}^{(2)},...,R_{N}^{(2)}*) =
= (*R_{1}^{(1)}+R_{1}^{(2)},...,R_{N}^{(1)}+R_{N}^{(2)}*)

(b) Multiplication of an *N*-dimensional vector by a real number *q*, resulting in a new *N*-dimensional vector:
*q***·R** = **R**·*q* = **S**
*q*·(*R_{1},...,R_{N}*) = (*q·R_{1},...,q·R_{N}*)

(c) Scalar (dot) product of two *N*-dimensional vectors, resulting in a scalar:
**R**^{(1)} · **R**^{(2)} = **R**^{(2)} · **R**^{(1)} = C
(*R_{1}^{(1)},...,R_{N}^{(1)}*) · (*R_{1}^{(2)},...,R_{N}^{(2)}*) =
= *R_{1}^{(1)}·R_{1}^{(2)}+...+R_{N}^{(1)}·R_{N}^{(2)}* = C

(d) *N*-dimensional angle

If the scalar product of two *N*-dimensional vectors **R**^{(1)} and **R**^{(2)} equals zero, they are called **perpendicular** to each other.

In general, the angle *φ* between these two *N*-dimensional vectors can be defined through
**cos(φ) = (R**^{(1)}**·R**^{(2)}**) / (|R**^{(1)}**|·|R**^{(2)}**|)**
where in the numerator we use the scalar product of the two vectors and in the denominator the product of their magnitudes.
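Operations (a) through (d) translate directly into code. Below is a minimal sketch with *N*-dimensional vectors held as plain Python tuples; the function names are illustrative assumptions, not lecture notation.

```python
import math

def add(R1, R2):
    # (a) component-wise addition of two N-dimensional vectors
    return tuple(a + b for a, b in zip(R1, R2))

def scale(q, R):
    # (b) multiplication of a vector by a real number q
    return tuple(q * a for a in R)

def dot(R1, R2):
    # (c) scalar product: sum of products of matching components
    return sum(a * b for a, b in zip(R1, R2))

def cos_angle(R1, R2):
    # (d) cos(φ) = (R1·R2) / (|R1|·|R2|), with |R| = sqrt(R·R)
    return dot(R1, R2) / math.sqrt(dot(R1, R1) * dot(R2, R2))

R1 = (1.0, 2.0, 3.0, 4.0)
R2 = (4.0, 3.0, 2.0, 1.0)
print(add(R1, R2))     # (5.0, 5.0, 5.0, 5.0)
print(scale(2.0, R1))  # (2.0, 4.0, 6.0, 8.0)
print(dot(R1, R2))     # 20.0
```

Note that nothing in these definitions depends on the dimension: the same code serves the 2-, 3- and *N*-dimensional cases.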

(e) Linear dependency

A certain number *K* of *N*-dimensional vectors **R**^{(1)},...,**R**^{(K)} is called **linearly dependent** if there are *K* multipliers *q_{1},...,q_{K}*, __not all of which are equal to zero__, such that
*q_{1}***·R**^{(1)} +...+ *q_{K}***·R**^{(K)} = **0**
where **0** is a **null-vector**, an *N*-dimensional vector with all components equal to zero.

The negation of **linear dependency** is **linear independency**.

*Problem A*

Prove that in *N*-dimensional space there exists a set of *N* linearly independent vectors.

*Solution A*

We can prove the existence by suggesting a concrete set of vectors that satisfies the requirements of the problem.

Consider these *N* vectors in *N*-dimensional space:
**i**^{(1)} = (*1,0,...,0*),
**i**^{(2)} = (*0,1,...,0*),
...
**i**^{(N)} = (*0,0,...,1*)

They are **linearly independent**.

Indeed, assume there exist multipliers *q_{1},...,q_{N}*, __not all of which are equal to zero__, such that
*q_{1}***·i**^{(1)} +...+ *q_{N}***·i**^{(N)} = **0**
But the expression on the left is the vector
**S** = (*q_{1},...,q_{N}*)
So, if it equals the **null-vector**, all its components are zero, which contradicts the assumption that not all multipliers are zero.

Hence, these *N* vectors are linearly independent.
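The key step of the argument, that a linear combination of the unit vectors **i**^{(1)},...,**i**^{(N)} is simply the vector of its own multipliers, can be verified numerically. This is a sketch with a concrete *N* of my choosing, not part of the lecture.

```python
N = 4

def unit_vector(k, n):
    # i^(k): component 1 at position k (1-based), 0 elsewhere
    return tuple(1.0 if j == k - 1 else 0.0 for j in range(n))

q = (2.0, -1.0, 0.0, 3.0)

# Form the combination q1·i^(1) + ... + qN·i^(N) component by component.
combo = tuple(sum(q[k] * unit_vector(k + 1, N)[j] for k in range(N))
              for j in range(N))
print(combo)  # (2.0, -1.0, 0.0, 3.0), equal to q itself
```

Since the combination reproduces (*q_{1},...,q_{N}*), it can be the null-vector only when every multiplier is zero, which is exactly the linear-independence claim.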

*Problem B*

Represent any vector in *N*-dimensional space as a linear combination of the set of *N* linearly independent vectors presented in the solution of the previous problem.

*Solution B*

Take any *N*-dimensional vector
**R** = (*R_{1},...,R_{N}*)
and the set of linearly independent *N*-dimensional vectors
**i**^{(1)} = (*1,0,...,0*),
**i**^{(2)} = (*0,1,...,0*),
...
**i**^{(N)} = (*0,0,...,1*)

It is obvious that
**R =** *R_{1}***·i**^{(1)} + *R_{2}***·i**^{(2)} +...+ *R_{N}***·i**^{(N)}

*Problem C*

**R**^{(1)}, **R**^{(2)},..., **R**^{(N)} are *N* linearly independent vectors in *N*-dimensional vector space, where for each *k*∈[*1,N*] vector **R**^{(k)} is a set of real numbers (*R_{1}^{(k)},R_{2}^{(k)},...,R_{N}^{(k)}*).

Represent any vector **V** = (*V_{1},V_{2},...,V_{N}*) in this *N*-dimensional space as a linear combination of these vectors **R**^{(k)}, where *k*∈[*1,N*].

*Solution C*

We are looking for a representation that in vector form looks like
**V =** *x_{1}***·R**^{(1)} + *x_{2}***·R**^{(2)} +...+ *x_{N}***·R**^{(N)}
where for *k*∈[*1,N*] all *x_{k}* are unknown real numbers.

Let's state this equation in coordinate form:
*V_{1} = R_{1}^{(1)}·x_{1} + R_{1}^{(2)}·x_{2} +...+ R_{1}^{(N)}·x_{N}*
*V_{2} = R_{2}^{(1)}·x_{1} + R_{2}^{(2)}·x_{2} +...+ R_{2}^{(N)}·x_{N}*
...
*V_{N} = R_{N}^{(1)}·x_{1} + R_{N}^{(2)}·x_{2} +...+ R_{N}^{(N)}·x_{N}*

This system of linear equations has a unique solution if the *N*⨯*N* matrix of coefficients *R_{i}^{(j)}* has a non-zero **determinant**. This was explained in the *Math 4 Teens* course on UNIZOR.COM (see the *Matrices* part of the course, chapters *Matrix Determinant* and *Matrix Solution*).

Exactly the same criterion of a non-zero determinant is a necessary and sufficient condition for a set of vectors to be linearly independent.

Since the linear independence of vectors **R**^{(1)}, **R**^{(2)},..., **R**^{(N)} is given, the system of linear equations has a unique solution for the unknown coefficients *x_{i}*.

*Answer C*

From the coordinates of all *N* vectors **R**^{(j)} (*j*∈[*1,N*]) and the coordinates *V_{i}* (*i*∈[*1,N*]) of vector **V**, construct a system of *N* linear equations.
Its solution gives the coefficients *x_{j}* we are looking for.
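The construction in *Solution C* can be carried out with concrete numbers. In the sketch below (the three vectors and **V** are my own example, not from the lecture) the columns of matrix A hold the coordinates of **R**^{(1)}, **R**^{(2)}, **R**^{(3)}, and we solve A·x = V.

```python
import numpy as np

# Three linearly independent vectors in 3-dimensional space
R1 = [1.0, 1.0, 0.0]
R2 = [0.0, 1.0, 1.0]
R3 = [1.0, 0.0, 1.0]
A = np.column_stack([R1, R2, R3])  # coefficient matrix R_i^(j)

V = np.array([2.0, 3.0, 1.0])      # vector to represent

# Linear independence means det(A) ≠ 0, so the solution is unique.
assert abs(np.linalg.det(A)) > 1e-12

x = np.linalg.solve(A, V)
print(x)  # coefficients with V = x1·R^(1) + x2·R^(2) + x3·R^(3)

# Verify the representation
assert np.allclose(A @ x, V)
```

Here `np.linalg.solve` plays the role of the determinant-based solution method referenced from the *Matrices* chapters of the course.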

**Definition**

*N* linearly independent vectors in an *N*-dimensional vector space are called a **basis** if any vector of this vector space can be represented as a linear combination of these *N* linearly independent vectors.

From *Problems A, B* it follows that the set of vectors
**i**^{(1)} = (*1,0,...,0*),
**i**^{(2)} = (*0,1,...,0*),
...
**i**^{(N)} = (*0,0,...,1*)
is a basis.

So is the set of vectors **R**^{(1)}, **R**^{(2)},..., **R**^{(N)} of *Problem C*.
