Thursday, August 1, 2024


Notes to a video lecture on http://www.unizor.com

Vectors+ 04
N-dimensional Vectors


Theory

1. A single real number R can be represented as a point A on a straight coordinate line with some fixed point O called the origin, a unit of measurement and a particular direction from point O chosen as positive.
In this case point A is defined so that its distance from the origin O equals |R| in the chosen units of measurement, with the sign of R corresponding to the chosen positive or negative direction from the origin.
At the same time this number R can be represented as a vector along the coordinate line with magnitude equal to the absolute value |R| and direction corresponding to the sign of R.
So, a single real number R is represented as a point A on a coordinate line or as a vector R on this line.
All these representations of a single real number we call one-dimensional.
In the vector representation the real number R=1 is represented by a vector from the origin of coordinates O to the point at a distance of 1 unit of measurement in the positive direction. We will call this vector a unit vector i.
If we stretch vector i by a factor of R, taking into consideration the correspondence between the sign of R and the direction on the line, we will reach point A.
That's why we can state
R = R·i

2. A pair of real numbers (R1,R2) can be represented as a point A on a coordinate plane with some fixed point O called the origin of coordinates and two perpendicular lines through the origin O, called the abscissa and the ordinate, each with a chosen positive direction and a unit of measurement.
In this case point A is defined so that
(a) its distance from the ordinate along a line parallel to the abscissa equals |R1| in the chosen units of measurement, with the sign of R1 corresponding to the chosen positive or negative direction along the abscissa;
(b) its distance from the abscissa along a line parallel to the ordinate equals |R2| in the chosen units of measurement, with the sign of R2 corresponding to the chosen positive or negative direction along the ordinate.
At the same time this pair of numbers (R1,R2) can be represented as a vector that stretches from the origin of coordinates to point A defined above, or as any other vector on the plane with the same magnitude and direction.
So, a pair of real numbers (R1,R2) is represented as a point A on a coordinate plane or as a vector R from the origin O to point A, or any other vector of the same magnitude and direction.
All these representations of a pair of real numbers we call two-dimensional.

In the vector representation a pair of real numbers (R1=1,R2=0) is represented by a vector from the origin along the abscissa to point I1 at a distance of 1 unit of measurement in the positive direction. We will call this vector an abscissa unit vector i.
Similarly, in the vector representation a pair of real numbers (R1=0,R2=1) is represented by a vector from the origin along the ordinate to point I2 at a distance of 1 unit of measurement in the positive direction. We will call this vector an ordinate unit vector j.
If we stretch vector i by a factor of R1, taking into consideration the correspondence between the sign of R1 and the direction on the abscissa, similarly stretch vector j by a factor of R2, and add these two vectors by the rules of vector addition, we will reach point A.
That's why we can state
R = R1·i + R2·j
which is a representation of vector R through unit vectors i and j and coordinates R1 and R2.

Using the above representation of a vector, it's easy to derive a formula for a scalar product of two two-dimensional vectors.
R = R1·i + R2·j
S = S1·i + S2·j
R·S = R1·S1·(i·i) + R1·S2·(i·j) + R2·S1·(j·i) + R2·S2·(j·j)

According to the definition of a scalar product, (i·i)=1, (i·j)=0, (j·i)=0, (j·j)=1
Therefore, R·S = R1·S1 + R2·S2

Perpendicularity of vectors R and S can be checked by examining the coordinate expression for their scalar product R1·S1+R2·S2. If it's equal to zero, the vectors are perpendicular to each other.
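As a quick check, the two-dimensional scalar product and the perpendicularity test can be sketched in Python (a minimal illustration; the function name dot2 is our own):

```python
def dot2(r, s):
    """Scalar product R1·S1 + R2·S2 of two 2-dimensional vectors (tuples)."""
    return r[0] * s[0] + r[1] * s[1]

# Zero scalar product means the vectors are perpendicular:
print(dot2((3, 4), (4, -3)))   # 3·4 + 4·(-3) = 0, perpendicular
print(dot2((1, 2), (3, 4)))    # 1·3 + 2·4 = 11, not perpendicular
```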

The cosine of the angle φ between two two-dimensional vectors can be determined as
cos(φ) = (R·S) / (|R|·|S|)
where the numerator is the scalar product of the two vectors and the denominator is the product of their magnitudes.

3. Exactly as in the previous two-dimensional case, we can represent a triplet of real numbers (R1,R2,R3) as a point A in three-dimensional space, as a vector R from the origin of coordinates to that point or as a vector sum of three mutually perpendicular vectors, each positioned along one of the coordinate axes
R = R1·i + R2·j + R3·k
where i, j and k are unit vectors along three mutually perpendicular axes of coordinates.

Using the above representation of a vector, it's easy to derive a formula for a scalar product of two three-dimensional vectors.
R = R1·i + R2·j + R3·k
S = S1·i + S2·j + S3·k
R·S = R1·S1 + R2·S2 + R3·S3

Perpendicularity of vectors R and S can be checked by examining the coordinate expression for their scalar product R1·S1+R2·S2+R3·S3. If it's equal to zero, the vectors are perpendicular to each other.

The cosine of the angle φ between two three-dimensional vectors can be determined as
cos(φ) = (R·S) / (|R|·|S|)
where the numerator is the scalar product of the two vectors and the denominator is the product of their magnitudes.

4. Now we will generalize the same concept to the N-dimensional case.
An ordered set of N real numbers (R1,R2,...,RN) we will call an N-dimensional vector, which can be interpreted as a point in an N-dimensional coordinate space.
The important operations on vectors known from the 2- and 3-dimensional cases can be easily extended to the N-dimensional case:

(a) Addition of two N-dimensional vectors resulting in a new N-dimensional vector:
R(1) + R(2) = R(2) + R(1) = R
(R1(1),...,RN(1)) + (R1(2),...,RN(2)) =
= (R1(1)+R1(2),...,RN(1)+RN(2))

(b) Multiplication of an N-dimensional vector by a real number q resulting in a new N-dimensional vector:
q·R = R·q = S
q·(R1,...,RN) = (q·R1,...,q·RN)
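Operations (a) and (b) can be sketched in Python, representing N-dimensional vectors as tuples of real numbers (a minimal illustration; the function names add and scale are our own):

```python
def add(r, s):
    """Component-wise sum of two N-dimensional vectors of equal length."""
    return tuple(a + b for a, b in zip(r, s))

def scale(q, r):
    """Multiplication of an N-dimensional vector r by a real number q."""
    return tuple(q * a for a in r)

print(add((1, 2, 3, 4), (10, 20, 30, 40)))  # (11, 22, 33, 44)
print(scale(2, (1, 2, 3, 4)))               # (2, 4, 6, 8)
```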

(c) Scalar (dot) product of two N-dimensional vectors resulting in a scalar:
R(1) · R(2) = R(2) · R(1) = C
(R1(1),...,RN(1)) · (R1(2),...,RN(2)) =
= R1(1)·R1(2)+...+RN(1)·RN(2) = C
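The N-dimensional scalar product follows the same component-wise pattern as in the 2- and 3-dimensional cases (a minimal sketch; the function name dot is our own):

```python
def dot(r, s):
    """Scalar product R1·S1 + ... + RN·SN of two N-dimensional vectors."""
    return sum(a * b for a, b in zip(r, s))

print(dot((1, 2, 3), (4, 5, 6)))  # 1·4 + 2·5 + 3·6 = 32
```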

(d) N-dimensional Angle
If the scalar product of two N-dimensional vectors R(1) and R(2) equals zero, they are called perpendicular to each other.
In general, the angle φ between these two N-dimensional vectors can be defined by
cos(φ) = (R(1)·R(2)) / (|R(1)|·|R(2)|)
where the numerator is the scalar product of the two vectors and the denominator is the product of their magnitudes.
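The angle formula translates directly into code using Python's standard math module (a minimal sketch; the function names are our own):

```python
import math

def dot(r, s):
    """Scalar product of two N-dimensional vectors (tuples)."""
    return sum(a * b for a, b in zip(r, s))

def magnitude(r):
    """Magnitude |r| of an N-dimensional vector."""
    return math.sqrt(dot(r, r))

def angle(r, s):
    """Angle φ between two vectors, from cos(φ) = (r·s)/(|r|·|s|)."""
    return math.acos(dot(r, s) / (magnitude(r) * magnitude(s)))

# The 4-dimensional vectors (1,0,0,0) and (1,1,0,0) form a 45° angle:
print(math.degrees(angle((1, 0, 0, 0), (1, 1, 0, 0))))
```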

(e) Linear dependency
A set of K N-dimensional vectors R(1),...,R(K) is called linearly dependent if there are K multipliers q1,...,qK, not all of which are equal to zero, such that
q1·R(1)+...+qK·R(K) = 0
where 0 is the null-vector - an N-dimensional vector with all components equal to zero.
The negation of linear dependence is linear independence.
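For example, in 3-dimensional space the vectors (1,0,0), (0,1,0) and (2,3,0) are linearly dependent, since the multipliers q1=2, q2=3, q3=-1 produce the null-vector. A minimal Python check (the helper linear_combination is our own):

```python
def linear_combination(coeffs, vectors):
    """Compute q1·R(1)+...+qK·R(K) for N-dimensional vectors given as tuples."""
    n = len(vectors[0])
    return tuple(sum(q * v[i] for q, v in zip(coeffs, vectors)) for i in range(n))

# 2·(1,0,0) + 3·(0,1,0) + (-1)·(2,3,0) = (0,0,0), so the vectors are dependent:
print(linear_combination((2, 3, -1), [(1, 0, 0), (0, 1, 0), (2, 3, 0)]))
```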


Problem A

Prove that in N-dimensional space there exists a set of N linearly independent vectors.

Solution A
We can prove the existence by presenting a concrete set of vectors that satisfies the requirements of the problem.
Consider these N vectors in N-dimensional space
i(1) = (1,0,...,0),
i(2) = (0,1,...,0),
...
i(N) = (0,0,...,1)
They are linearly independent.
Indeed, assume there exist multipliers q1,...,qN, not all of which are equal to zero, such that
q1·i(1)+...+qN·i(N) = 0
But the expression on the left is the vector
S = (q1,...,qN)
So, if it equals the null-vector, all its components q1,...,qN are zero, which contradicts the assumption that not all multipliers are zero.
Hence, these N vectors are linearly independent.


Problem B

Represent any vector in N-dimensional space as a linear combination of the set of N linearly independent vectors presented in the solution of the previous problem.

Solution B
Take any N-dimensional vector R = (R1,...,RN)
and the set of N linearly independent N-dimensional vectors
i(1) = (1,0,...,0),
i(2) = (0,1,...,0),
...
i(N) = (0,0,...,1)
It is obvious that
R = R1·i(1)+R2·i(2)+...+RN·i(N)
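The identity above can be verified with a short Python sketch (the helper standard_basis is our own):

```python
def standard_basis(n):
    """Unit vectors i(1),...,i(N): i(k) has 1 in position k and 0 elsewhere."""
    return [tuple(1 if i == k else 0 for i in range(n)) for k in range(n)]

r = (7, -2, 5)
basis = standard_basis(3)
# R = R1·i(1) + R2·i(2) + R3·i(3) reproduces the original vector:
rebuilt = tuple(sum(r[k] * basis[k][i] for k in range(3)) for i in range(3))
print(rebuilt)  # (7, -2, 5)
```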


Problem C

R(1), R(2),... ,R(N) are N linearly independent vectors in N-dimensional vector space, where for each k∈[1,N] vector R(k) is a set of real numbers (R1(k),R2(k),...,RN(k)).
Represent any vector V(V1,V2,...,VN) in this N-dimensional space as a linear combination of these vectors R(k), where k∈[1,N].

Solution C

We are looking for a representation that in vector form looks like
V = x1·R(1) + x2·R(2) +...+ xN·R(N)
where for k∈[1,N] all xk are unknown real numbers.
Let's state this equation in coordinate form
V1 = R1(1)·x1+R1(2)·x2+...+R1(N)·xN
V2 = R2(1)·x1+R2(2)·x2+...+R2(N)·xN
...
VN = RN(1)·x1+RN(2)·x2+...+RN(N)·xN
This system of linear equations has a unique solution if the N×N matrix of coefficients Ri(j) has a non-zero determinant. This was explained in the Math 4 Teens course on UNIZOR.COM (see the Matrices part of the course, chapters Matrix Determinant and Matrix Solution).
Exactly the same criterion of a non-zero determinant is a necessary and sufficient condition for a set of N vectors to be linearly independent.
Since the linear independence of vectors R(1), R(2),...,R(N) is given, the system of linear equations has a unique solution for the unknown coefficients xi.

Answer C
From the coordinates of all N vectors R(j) (j∈[1,N]) and the coordinates Vi (i∈[1,N]) of vector V construct a system of N linear equations.
Its solution gives the coefficients xj we are looking for.
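The procedure in the answer can be sketched as a small Gaussian-elimination solver in plain Python (a minimal illustration that assumes the basis vectors are linearly independent, i.e. a non-zero determinant; the function name solve is our own):

```python
def solve(matrix, rhs):
    """Solve M·x = rhs by Gaussian elimination with partial pivoting.

    Column j of the matrix holds the coordinates of basis vector R(j);
    rhs holds the coordinates of V.  Assumes linearly independent
    basis vectors, i.e. a non-zero determinant.
    """
    n = len(rhs)
    a = [row[:] + [rhs[i]] for i, row in enumerate(matrix)]  # augmented matrix
    for col in range(n):
        # pick the row with the largest pivot and swap it into place
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        # eliminate the column below the pivot
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= f * a[col][c]
    # back-substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (a[i][n] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    return x

# Express V = (5, 7) through R(1) = (1, 1) and R(2) = (1, -1);
# the matrix columns are R(1) and R(2):
coeffs = solve([[1, 1], [1, -1]], [5, 7])
print(coeffs)  # [6.0, -1.0]: indeed 6·(1,1) + (-1)·(1,-1) = (5,7)
```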

Definition

N linearly independent vectors in an N-dimensional vector space are called a basis if any vector of this vector space can be represented as a linear combination of these N linearly independent vectors.
From Problems A and B it follows that the set of vectors
i(1) = (1,0,...,0),
i(2) = (0,1,...,0),
...
i(N) = (0,0,...,1)
is a basis.
So is a set of vectors R(1), R(2),... ,R(N) of Problem C.
