Linear dependence and independence of vectors

Definitions of linearly dependent and independent systems of vectors

Definition 22

Suppose we have a system of vectors a 1 , a 2 , ..., a n and a set of numbers λ 1 , λ 2 , ..., λ n . Then the vector

λ 1 a 1 + λ 2 a 2 + ... + λ n a n (11)

is called a linear combination of the given system of vectors with the given set of coefficients.
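In coordinates, computing such a combination is just a componentwise sum; a minimal Python sketch (the function name is our illustration, not from the text):

```python
def linear_combination(vectors, coeffs):
    """Return lambda_1*a_1 + ... + lambda_n*a_n, computed componentwise."""
    dim = len(vectors[0])
    return [sum(c * v[i] for c, v in zip(coeffs, vectors)) for i in range(dim)]

# The combination 2*(1, 0) + 3*(0, 1) of the standard basis vectors of the plane.
combo = linear_combination([[1, 0], [0, 1]], [2, 3])
```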

Definition 23

A system of vectors a 1 , a 2 , ..., a n is called linearly dependent if there is a set of coefficients λ 1 , λ 2 , ..., λ n , of which at least one is not equal to zero, such that the linear combination of the given system of vectors with this set of coefficients equals the zero vector:

λ 1 a 1 + λ 2 a 2 + ... + λ n a n = 0 (12)

Definition 24 (via the representation of one vector of the system as a linear combination of the others)

A system of vectors a 1 , a 2 , ..., a n is called linearly dependent if at least one of the vectors of this system can be represented as a linear combination of the other vectors of this system.

Statement 3

Definitions 23 and 24 are equivalent.

Definition 25 (via the zero linear combination)

A system of vectors a 1 , a 2 , ..., a n is called linearly independent if the zero linear combination of this system is possible only when all the coefficients λ 1 , λ 2 , ..., λ n are equal to zero.

Definition 26 (via the impossibility of representing one vector of the system as a linear combination of the rest)

A system of vectors a 1 , a 2 , ..., a n is called linearly independent if none of the vectors of this system can be represented as a linear combination of the other vectors of this system.

Properties of linearly dependent and independent systems of vectors

Theorem 2 (zero vector in the system of vectors)

If there is a zero vector in the system of vectors, then the system is linearly dependent.

Proof. Let the system contain the zero vector, say a 1 = 0. Take the coefficients λ 1 = 1, λ 2 = ... = λ n = 0. Then

1 · a 1 + 0 · a 2 + ... + 0 · a n = 0,

and at least one coefficient (λ 1 = 1) is not zero; therefore, by the definition of a linearly dependent system of vectors via the zero linear combination (12), the system is linearly dependent. ∎

Theorem 3 (dependent subsystem in the system of vectors)

If a system of vectors has a linearly dependent subsystem, then the entire system is linearly dependent.

Proof. Let a 1 , a 2 , ..., a k be a linearly dependent subsystem of the system a 1 , a 2 , ..., a n . Then there are coefficients λ 1 , λ 2 , ..., λ k , among which at least one is not equal to zero, such that λ 1 a 1 + ... + λ k a k = 0. Extend this set with λ k+1 = ... = λ n = 0; the linear combination of the whole system with these coefficients is still the zero vector, and at least one coefficient is non-zero. Hence, by Definition 23, the system is linearly dependent. ∎

Theorem 4

Any subsystem of a linearly independent system is linearly independent.

Proof (by contradiction). Suppose a linearly independent system has a linearly dependent subsystem. Then, by Theorem 3, the entire system would also be linearly dependent, which is a contradiction. Therefore, a subsystem of a linearly independent system cannot be linearly dependent. ∎

Geometric meaning of linear dependence and independence of a system of vectors

Theorem 5

Two vectors a and b are linearly dependent if and only if they are collinear: a ∥ b.

Necessity.

Let a and b be linearly dependent. Then there exist numbers λ and μ, not both zero, such that λa + μb = 0. Suppose λ ≠ 0. Then a = (-μ/λ)b, i.e. a ∥ b.

Sufficiency.

Let a ∥ b. If b = 0, the system contains the zero vector and is linearly dependent by Theorem 2. Otherwise a = λb for some number λ, so 1 · a + (-λ) · b = 0 is a zero linear combination with a non-zero coefficient, and a and b are linearly dependent. ∎

Corollary 5.1

The zero vector is collinear to any vector.

Corollary 5.2

For two vectors a and b to be linearly independent, it is necessary and sufficient that a not be collinear to b.

Theorem 6

In order for a system of three vectors to be linearly dependent, it is necessary and sufficient that these vectors be coplanar.

Necessity.

Let a, b, c be linearly dependent; then one vector can be represented as a linear combination of the other two:

c = λa + μb, (13)

where λ and μ are numbers. By the parallelogram rule, c is the diagonal of a parallelogram with sides λa and μb; a parallelogram is a plane figure, so λa, μb and c are coplanar, and hence a, b, c are coplanar as well.

Sufficiency.

Let a, b, c be coplanar. Apply the three vectors to a common point O. If a and b are collinear, they form a linearly dependent subsystem and, by Theorem 3, the whole system is linearly dependent. Otherwise, decompose c along a and b by the parallelogram rule: c = λa + μb. Then the system a, b, c is linearly dependent. ∎

Corollary 6.1

The zero vector is coplanar to any pair of vectors.

Corollary 6.2

The vectors a, b, c are linearly independent if and only if they are not coplanar.

Corollary 6.3

Any plane vector can be represented as a linear combination of any two non-collinear vectors of the same plane.

Theorem 7

Any four vectors in space are linearly dependent.

Proof. If some three of the four vectors a, b, c, d are coplanar (in particular, if some two are collinear, or one of them is the zero vector), then this subsystem is linearly dependent, and by Theorem 3 the whole system is linearly dependent. So assume a, b, c are non-coplanar, apply all four vectors to a common point O, and let D be the end of the vector d. Draw the planes through the pairs of vectors (a, b), (b, c), (a, c), and then the planes through the point D parallel to these pairs respectively. The lines of intersection of these planes bound a parallelepiped whose diagonal is d = OD.

Decomposing the diagonal along the edges of the parallelepiped (each face is a parallelogram, so the parallelogram rule applies twice), we obtain numbers λ, μ, ν such that

d = λa + μb + νc,

and by Definition 24 the system of vectors a, b, c, d is linearly dependent. ∎

Corollary 7.1

The sum of three non-coplanar vectors in space, attached to a common origin, is the vector coinciding with the diagonal of the parallelepiped built on these three vectors; the beginning of the sum vector coincides with their common origin.

Corollary 7.2

If we take 3 non-coplanar vectors in a space, then any vector of this space can be decomposed into a linear combination of these three vectors.

An expression of the form λ 1 A 1 + λ 2 A 2 + ... + λ n A n is called a linear combination of the vectors A 1 , A 2 , ..., A n with coefficients λ 1 , λ 2 , ..., λ n .

Determining the linear dependence of a system of vectors

A system of vectors A 1 , A 2 , ..., A n is called linearly dependent if there is a non-zero set of numbers λ 1 , λ 2 , ..., λ n for which the linear combination of vectors λ 1 A 1 + λ 2 A 2 + ... + λ n A n equals the zero vector, that is, the system of equations A 1 x 1 + A 2 x 2 + ... + A n x n = Θ has a non-zero solution.
A set of numbers λ 1 , λ 2 , ..., λ n is non-zero if at least one of the numbers λ 1 , λ 2 , ..., λ n differs from zero.

Determining the linear independence of a system of vectors

A system of vectors A 1 , A 2 , ..., A n is called linearly independent if the linear combination of these vectors λ 1 A 1 + λ 2 A 2 + ... + λ n A n equals the zero vector only for the zero set of numbers λ 1 , λ 2 , ..., λ n , that is, the system of equations A 1 x 1 + A 2 x 2 + ... + A n x n = Θ has a unique, zero solution.
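Both definitions reduce to a rank computation: the system A 1 , ..., A n is linearly dependent exactly when the homogeneous system A 1 x 1 + ... + A n x n = Θ has a non-zero solution, i.e. when the rank of the list of vectors is less than n. A hedged sketch in Python using exact rational Gaussian elimination (the function names are ours):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a list of vectors, by Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def linearly_dependent(vectors):
    """Dependent iff the homogeneous system has a non-zero solution,
    i.e. iff rank < number of vectors."""
    return rank(vectors) < len(vectors)
```

For example, a = (1, 1, 1), b = (1, 2, 0), c = (0, -1, 1) satisfy c = a - b, so the checker reports dependence; this also illustrates Property (5) below for four vectors in three dimensions.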

Example 29.1

Check whether the system of vectors A 1 , A 2 , A 3 is linearly dependent

Solution:

1. We compose a system of equations:

2. We solve it using the Gauss method. The Jordan transformations of the system are given in Table 29.1. When calculating, the right-hand sides of the system are not written down, since they are equal to zero and do not change under Jordan transformations.

3. From the last three rows of the table we write the resolved system equivalent to the original system:

4. We get the general solution of the system:

5. Setting the free variable x 3 = 1 at our discretion, we obtain the particular non-zero solution X = (-3, 2, 1).

Answer: Thus, for the non-zero set of numbers (-3, 2, 1), the linear combination of the vectors equals the zero vector: -3A 1 + 2A 2 + 1A 3 = Θ. Hence, the system of vectors is linearly dependent.

Properties of vector systems

Property (1)
If the system of vectors is linearly dependent, then at least one of the vectors is decomposed in terms of the rest, and vice versa, if at least one of the vectors of the system is decomposed in terms of the rest, then the system of vectors is linearly dependent.

Property (2)
If any subsystem of vectors is linearly dependent, then the whole system is linearly dependent.

Property (3)
If a system of vectors is linearly independent, then any of its subsystems is linearly independent.

Property (4)
Any system of vectors containing a zero vector is linearly dependent.

Property (5)
A system of m-dimensional vectors is always linearly dependent if the number of vectors n is greater than their dimension (n > m).

Basis of the vector system

A basis of the system of vectors A 1 , A 2 , ..., A n is a subsystem B 1 , B 2 , ..., B r (each of the vectors B 1 , B 2 , ..., B r is one of the vectors A 1 , A 2 , ..., A n ) that satisfies the following conditions:
1. B 1 , B 2 , ..., B r is a linearly independent system of vectors;
2. any vector A j of the system A 1 , A 2 , ..., A n is linearly expressed in terms of the vectors B 1 , B 2 , ..., B r .

r is the number of vectors included in the basis.

Theorem 29.1 (on the unit basis of a system of vectors)

If a system of m-dimensional vectors contains m different unit vectors E 1 , E 2 , ..., E m , then they form a basis of the system.

Algorithm for finding the basis of a system of vectors

In order to find a basis of the system of vectors A 1 , A 2 , ..., A n it is necessary to:

  • compose the homogeneous system of equations A 1 x 1 + A 2 x 2 + ... + A n x n = Θ corresponding to the system of vectors;
  • bring this system to resolved form by Jordan transformations; the vectors corresponding to the basic (resolved) variables then form a basis.
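The elimination step can be sketched as follows: scan the vectors in order and keep each one that does not reduce to zero against the vectors already kept; the kept vectors form a basis of the system (the function name and this incremental variant are our illustration, not the textbook's tableau method):

```python
from fractions import Fraction

def basis_indices(vectors):
    """Indices of a maximal linearly independent subsystem (a basis)."""
    kept = []    # independent vectors accumulated so far, in echelon form
    picked = []  # their indices in the original list
    for idx, v in enumerate(vectors):
        w = [Fraction(x) for x in v]
        for r in kept:
            lead = next(i for i, x in enumerate(r) if x != 0)
            if w[lead] != 0:
                f = w[lead] / r[lead]
                w = [a - f * b for a, b in zip(w, r)]
        if any(x != 0 for x in w):
            kept.append(w)
            picked.append(idx)
    return picked
```

For a = (1, 1, 1), b = (1, 2, 0), c = (0, -1, 1), where c = a - b, the basis is {a, b}.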

Vectors, their properties and actions with them

Vectors, actions with vectors, linear vector space.

A vector is an ordered collection of a finite number of real numbers.

Actions: 1. Multiplying a vector by a number: λ · x = (λx 1 , λx 2 , ..., λx n ). For example, (3, 4, 0.07) · 3 = (9, 12, 0.21).

2. Addition of vectors (belonging to the same vector space): x + y = (x 1 + y 1 , x 2 + y 2 , ..., x n + y n ).

3. Zero vector: 0 = (0, 0, ..., 0) ∈ E n , the n-dimensional linear space; x + 0 = x.

Theorem. In order for a system of n vectors in an n-dimensional linear space to be linearly dependent, it is necessary and sufficient that one of the vectors be a linear combination of the others.

Theorem. Any set of n + 1 vectors of an n-dimensional linear space is linearly dependent.

Addition of vectors, multiplication of vectors by numbers. Subtraction of vectors.

The sum a + b of two vectors a and b is the vector directed from the beginning of a to the end of b, provided that the beginning of b coincides with the end of a. If the vectors are given by their expansions in terms of basis vectors, then adding the vectors adds up their corresponding coordinates.

Let us consider this using the example of a Cartesian coordinate system. Let a = (a x ; a y ) and b = (b x ; b y ). Figure 3 shows that

a + b = (a x + b x ; a y + b y ).

The sum of any finite number of vectors can be found using the polygon rule (Fig. 4): to construct the sum of a finite number of vectors, it is enough to match the beginning of each subsequent vector with the end of the previous one and construct a vector connecting the beginning of the first vector with the end of the last one.
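In coordinates, the polygon rule amounts to a componentwise sum; a small sketch (the name is ours):

```python
def vec_sum(vectors):
    """Polygon rule in coordinates: the sum of any finite number of vectors
    is the componentwise sum of their coordinates."""
    return [sum(components) for components in zip(*vectors)]
```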

Properties of the vector addition operation:

a + b = b + a; (a + b) + c = a + (b + c); m(a + b) = ma + mb; (m + n)a = ma + na; m(na) = (mn)a.

In these expressions m, n are numbers.

The difference a - b of vectors is the vector a + (-b). The second term, -b, is a vector opposite to b in direction but equal to it in length.

Thus, the vector subtraction operation is replaced by the addition operation: a - b = a + (-b).

The vector whose beginning is at the origin of coordinates and whose end is at the point A(x 1 , y 1 , z 1 ) is called the radius vector of the point A and is denoted r 1 . Since its coordinates coincide with the coordinates of the point A, its expansion in terms of the basis vectors i, j, k has the form r 1 = x 1 i + y 1 j + z 1 k.

A vector starting at the point A(x 1 , y 1 , z 1 ) and ending at the point B(x 2 , y 2 , z 2 ) can be written as

AB = r 2 - r 1 ,

where r 2 is the radius vector of the point B; r 1 is the radius vector of the point A.

Therefore, the expansion of the vector in terms of the orts has the form

AB = (x 2 - x 1 )i + (y 2 - y 1 )j + (z 2 - z 1 )k.

Its length is equal to the distance between the points A and B:

|AB| = √((x 2 - x 1 )² + (y 2 - y 1 )² + (z 2 - z 1 )²).
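Both formulas are easy to check numerically; a short sketch (function names are ours):

```python
import math

def vector_ab(a, b):
    """AB = r_B - r_A: componentwise difference of the radius vectors."""
    return [bi - ai for ai, bi in zip(a, b)]

def length(v):
    """Euclidean length of a vector, i.e. the distance between its endpoints."""
    return math.sqrt(sum(x * x for x in v))
```

For A(1, 1, 1) and B(4, 5, 1) this gives AB = (3, 4, 0) of length 5.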

MULTIPLICATION

In the case of a plane problem, the product of the vector a = (a x ; a y ) and the number b is found by the formula

a · b = (a x · b; a y · b)

Example 1. Find the product of the vector a = (1; 2) by 3.

3 · a = (3 · 1; 3 · 2) = (3; 6)

In the case of a spatial problem, the product of the vector a = (a x ; a y ; a z ) and the number b is found by the formula

a · b = (a x · b; a y · b; a z · b)

Example 2. Find the product of the vector a = (1; 2; -5) by 2.

2 · a = (2 · 1; 2 · 2; 2 · (-5)) = (2; 4; -10)
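The two examples above can be reproduced with a one-line helper (the name is ours):

```python
def scale(vec, k):
    """Multiply a vector by a number, coordinate by coordinate."""
    return [k * x for x in vec]
```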

The dot product of vectors a and b is the number (a, b) = |a| · |b| · cos φ, where φ is the angle between the vectors a and b; if a = 0 or b = 0, then (a, b) = 0.

From the definition of the scalar product, it follows that

(a, b) = |a| · pr a b = |b| · pr b a,

where, for example, pr b a is the value of the projection of the vector a onto the direction of the vector b.

Scalar square of a vector: (a, a) = |a|².

Dot product properties: (a, b) = (b, a); (λa, b) = λ(a, b); (a + b, c) = (a, c) + (b, c); (a, a) ≥ 0, with (a, a) = 0 only for a = 0.

Dot product in coordinates

If a = (x 1 ; y 1 ; z 1 ) and b = (x 2 ; y 2 ; z 2 ), then (a, b) = x 1 x 2 + y 1 y 2 + z 1 z 2 .

Angle between vectors

The angle between vectors is the angle between the directions of these vectors (the smallest such angle); it is found from cos φ = (a, b) / (|a| · |b|).
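The coordinate formula and the angle formula together give a small calculator (function names are ours):

```python
import math

def dot(a, b):
    """Dot product in coordinates: sum of products of matching coordinates."""
    return sum(x * y for x, y in zip(a, b))

def angle(a, b):
    """Smallest angle between the directions of two non-zero vectors, in radians."""
    cos_phi = dot(a, b) / math.sqrt(dot(a, a) * dot(b, b))
    return math.acos(max(-1.0, min(1.0, cos_phi)))  # clamp rounding noise
```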

Vector product(The vector product of two vectors.)- is a pseudovector perpendicular to the plane constructed by two factors, which is the result of the binary operation "vector multiplication" on vectors in three-dimensional Euclidean space. The product is neither commutative nor associative (it is anticommutative) and is different from the dot product of vectors. In many engineering and physics problems, it is necessary to be able to build a vector perpendicular to two existing ones - the vector product provides this opportunity. The cross product is useful for "measuring" the perpendicularity of vectors - the length of the cross product of two vectors is equal to the product of their lengths if they are perpendicular, and decreases to zero if the vectors are parallel or anti-parallel.

Vector product is defined only in three-dimensional and seven-dimensional spaces. The result of the vector product, like the scalar product, depends on the metric of the Euclidean space.

Unlike the formula for calculating the scalar product from the coordinates of the vectors in a three-dimensional rectangular coordinate system, the formula for the vector product depends on the orientation of the rectangular coordinate system, or, in other words, on its “chirality”.
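In a right-handed Cartesian coordinate system, the cross product has the familiar coordinate formula; a sketch (the name is ours):

```python
def cross(a, b):
    """Cross product a x b in a right-handed Cartesian system in R^3."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]
```

It is zero for parallel vectors and gives i × j = k for the orts, matching the behavior described above.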

Collinearity of vectors.

Two non-zero (not equal to 0) vectors are called collinear if they lie on parallel lines or on the same line. A synonym is acceptable, but not recommended - "parallel" vectors. Collinear vectors can be directed in the same direction ("co-directed") or oppositely directed (in the latter case they are sometimes called "anticollinear" or "antiparallel").

The mixed product of vectors (a, b, c) is the scalar product of the vector a and the vector product of the vectors b and c:

(a,b,c)=a ⋅(b×c)

sometimes it is called the triple scalar product of vectors, apparently due to the fact that the result is a scalar (more precisely, a pseudoscalar).

Geometric meaning: The modulus of the mixed product is numerically equal to the volume of the parallelepiped formed by the vectors (a,b,c) .

Properties

The mixed product is skew-symmetric with respect to all its arguments, i.e. a permutation of any two factors changes the sign of the product. It follows that the mixed product in a right Cartesian coordinate system (in an orthonormal basis) is equal to the determinant of the matrix composed of the vectors a, b, c:

The mixed product in a left Cartesian coordinate system (in an orthonormal basis) is equal to the determinant of the matrix composed of the vectors a, b, c, taken with a minus sign:

In particular, the mixed product is unchanged under cyclic permutation: (a, b, c) = (b, c, a) = (c, a, b).

If any two vectors are parallel, then with any third vector they form a mixed product equal to zero.

If three vectors are linearly dependent (i.e., coplanar, lie in the same plane), then their mixed product is zero.

Geometric meaning: the mixed product in absolute value is equal to the volume of the parallelepiped (see figure) formed by the vectors a, b, c; the sign depends on whether this triple of vectors is right- or left-handed.
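The determinant formula above makes the mixed product a three-line computation; a sketch (the name is ours):

```python
def triple(a, b, c):
    """Mixed product (a, b, c) = a . (b x c); equals the determinant of the
    matrix with rows a, b, c; its absolute value is the parallelepiped volume."""
    bxc = [b[1] * c[2] - b[2] * c[1],
           b[2] * c[0] - b[0] * c[2],
           b[0] * c[1] - b[1] * c[0]]
    return sum(x * y for x, y in zip(a, bxc))
```

For the orts it returns 1, and it vanishes on coplanar triples, in line with the properties listed above.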

Coplanarity of vectors.

Three vectors (or more) are called coplanar if, when reduced to a common origin, they lie in the same plane.

Coplanarity properties

If at least one of the three vectors is zero, then the three vectors are also considered coplanar.

A triple of vectors containing a pair of collinear vectors is coplanar.

The mixed product of coplanar vectors is zero. This is a criterion for the coplanarity of three vectors.

Coplanar vectors are linearly dependent. This is also a criterion for coplanarity.

In 3-dimensional space, 3 non-coplanar vectors form a basis.

Linearly dependent and linearly independent vectors.

Linearly dependent and independent systems of vectors. Definition. A system of vectors is called linearly dependent if there is at least one non-trivial linear combination of these vectors equal to the zero vector. Otherwise, i.e. if only the trivial linear combination of the given vectors equals the null vector, the vectors are called linearly independent.

Theorem (linear dependence criterion). For a system of vectors in a linear space to be linearly dependent, it is necessary and sufficient that at least one of these vectors be a linear combination of the others.

1) If there is at least one zero vector among the vectors, then the entire system of vectors is linearly dependent.

Indeed, if, for example, a 1 = 0, then, taking λ 1 = 1 and λ 2 = ... = λ n = 0, we obtain the non-trivial linear combination 1 · a 1 + 0 · a 2 + ... + 0 · a n = 0. ▲

2) If some of the vectors form a linearly dependent system, then the entire system is linearly dependent.

Indeed, let the vectors a 1 , ..., a k be linearly dependent. Hence, there exists a non-trivial linear combination λ 1 a 1 + ... + λ k a k equal to the zero vector. But then, setting λ k+1 = ... = λ n = 0, we also obtain a non-trivial linear combination of the whole system equal to the zero vector.

2. Basis and dimension. Definition. A system of linearly independent vectors e 1 , ..., e n of a vector space V is called a basis of this space if any vector x from V can be represented as a linear combination of the vectors of this system, i.e. for each vector x there are real numbers x 1 , ..., x n such that x = x 1 e 1 + ... + x n e n . This equality is called the decomposition of the vector x in the basis, and the numbers x 1 , ..., x n are called the coordinates of the vector x relative to the basis (or in the basis).

Theorem (on the uniqueness of the expansion in terms of the basis). Each vector of the space can be expanded in terms of the basis in a unique way, i.e. the coordinates of each vector in the basis are determined unambiguously.

Definition. A linear combination of vectors a 1 , ..., a n with coefficients x 1 , ..., x n is the vector

x 1 a 1 + ... + x n a n .

The linear combination x 1 a 1 + ... + x n a n is called trivial if all coefficients x 1 , ..., x n are equal to zero.

Definition. The linear combination x 1 a 1 + ... + x n a n is called non-trivial, if at least one of the coefficients x 1 , ..., x n is not equal to zero.

Definition. Vectors a 1 , ..., a n are called linearly independent if there is no non-trivial combination of these vectors equal to the zero vector.

That is, the vectors a 1 , ..., a n are linearly independent if x 1 a 1 + ... + x n a n = 0 if and only if x 1 = 0, ..., x n = 0.

Definition. Vectors a 1 , ..., a n are called linearly dependent if there exists a non-trivial combination of these vectors equal to the zero vector.

Properties of linearly dependent vectors:

  • For 2- and 3-dimensional vectors: two linearly dependent vectors are collinear (and collinear vectors are linearly dependent).

  • For 3-dimensional vectors: three linearly dependent vectors are coplanar (and three coplanar vectors are linearly dependent).

  • For n-dimensional vectors: n + 1 vectors are always linearly dependent.

Examples of tasks for linear dependence and linear independence of vectors:

Example 1. Check whether the vectors a = (3; 4; 5), b = (-3; 0; 5), c = (4; 4; 4), d = (3; 4; 0) are linearly independent.

Solution:

The vectors are linearly dependent, since the dimension of the vectors (3) is less than the number of vectors (4).

Example 2. Check whether the vectors a = (1; 1; 1), b = (1; 2; 0), c = (0; -1; 1) are linearly independent.

Solution:

We look for coefficients x 1 , x 2 , x 3 at which the linear combination x 1 a + x 2 b + x 3 c equals the zero vector. This gives the system

x1 + x2 = 0
x1 + 2x2 - x3 = 0
x1 + x3 = 0

We solve it using the Gauss method; the augmented matrix is

1 1 0 | 0
1 2 -1 | 0
1 0 1 | 0

Subtract the first row from the second and from the third:

1 1 0 | 0
0 1 -1 | 0
0 -1 1 | 0

Subtract the second row from the first; add the second row to the third:

1 0 1 | 0
0 1 -1 | 0
0 0 0 | 0

This solution shows that the system has many solutions: x 1 = -x 3 , x 2 = x 3 . Taking x 3 = 1 gives a non-zero set of numbers x 1 , x 2 , x 3 such that the linear combination of the vectors a, b, c equals the zero vector, for example:

-a + b + c = 0

which means the vectors a, b, c are linearly dependent.

Answer: vectors a, b, c are linearly dependent.
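The combination found by the Gauss method can be verified directly:

```python
# Vectors from the example; the coefficients (-1, 1, 1) come from taking x3 = 1.
a, b, c = (1, 1, 1), (1, 2, 0), (0, -1, 1)
combo = tuple(-ai + bi + ci for ai, bi, ci in zip(a, b, c))
```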

Example 3. Check whether the vectors a = (1; 1; 1), b = (1; 2; 0), c = (0; -1; 2) are linearly independent.

Solution: Let's find the values of the coefficients at which the linear combination of these vectors will be equal to the zero vector.

x 1 a + x 2 b + x 3 c = 0

This vector equation can be written as a system of linear equations

x1 + x2 = 0
x1 + 2x2 - x3 = 0
x1 + 2x3 = 0

We solve this system using the Gauss method; the augmented matrix is

1 1 0 | 0
1 2 -1 | 0
1 0 2 | 0

Subtract the first row from the second; subtract the first row from the third:

1 1 0 | 0
0 1 -1 | 0
0 -1 2 | 0

Subtract the second row from the first; add the second row to the third:

1 0 1 | 0
0 1 -1 | 0
0 0 1 | 0

The last row gives x 3 = 0, and then x 2 = 0 and x 1 = 0: the system has only the zero solution. Hence the vectors a, b, c are linearly independent.

In this article, we will cover:

  • what are collinear vectors;
  • what are the conditions for collinear vectors;
  • what are the properties of collinear vectors;
  • what is the linear dependence of collinear vectors.
Definition 1

Collinear vectors are vectors that are parallel to the same line or lie on the same line.

Conditions for collinear vectors

Two vectors are collinear if any of the following conditions are true:

  • condition 1 . Vectors a and b are collinear if there is a number λ such that a = λ b ;
  • condition 2 . Vectors a and b are collinear if their coordinates are proportional:

a = (a 1 ; a 2 ) , b = (b 1 ; b 2 ) ⇒ a ∥ b ⇔ a 1 / b 1 = a 2 / b 2

  • condition 3 . Vectors a and b are collinear provided that their vector product equals the zero vector:

a ∥ b ⇔ a × b = 0

Remark 1

Condition 2 is not applicable if one of the coordinates of b is zero.

Remark 2

Condition 3 is applicable only to vectors given in space.
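For plane vectors, condition 2 rewritten as a 1 b 2 - a 2 b 1 = 0 avoids the division problem mentioned in Remark 1; a sketch (the name is ours):

```python
def collinear_2d(a, b):
    """Plane vectors are collinear iff a1*b2 - a2*b1 = 0; unlike the
    coordinate-ratio test, this also works when a coordinate is zero."""
    return a[0] * b[1] - a[1] * b[0] == 0
```

It reproduces the worked examples: (1; 3) and (2; 1) are not collinear, while (1; 2) and (-1; -2) are.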

Examples of problems for the study of the collinearity of vectors

Example 1

We examine the vectors a = (1; 3) and b = (2; 1) for collinearity.

Solution:

In this case, it is necessary to use the 2nd collinearity condition. For the given vectors it looks like this:

a 1 / b 1 = 1 / 2 , a 2 / b 2 = 3 / 1 , and 1 / 2 ≠ 3 / 1 .

The equality is false. From this we conclude that the vectors a and b are not collinear.

Answer: the vectors a and b are not collinear.

Example 2

For what value of m are the vectors a = (1; 2) and b = (-1; m) collinear?

Solution:

Using the second collinearity condition, the vectors are collinear if their coordinates are proportional:

1 / (-1) = 2 / m

This shows that m = -2.

Answer: m = - 2 .

Criteria for linear dependence and linear independence of systems of vectors

Theorem

A system of vectors in a vector space is linearly dependent if and only if one of the vectors of the system can be expressed in terms of the rest of the vectors of the system.

Proof

Necessity. Let the system e 1 , e 2 , . . . , e n be linearly dependent. Let us write down a linear combination of this system equal to the zero vector:

a 1 e 1 + a 2 e 2 + . . . + a n e n = 0

in which at least one of the coefficients of the combination is not equal to zero.

Let a k ≠ 0 for some k ∈ { 1 , 2 , . . . , n } .

We divide both sides of the equality by the non-zero coefficient a k :

(a 1 / a k ) e 1 + . . . + e k + . . . + (a n / a k ) e n = 0

Denote:

β m = a m / a k , where m ∈ { 1 , 2 , . . . , k - 1 , k + 1 , . . . , n }

In this case:

β 1 e 1 + . . . + β k - 1 e k - 1 + e k + β k + 1 e k + 1 + . . . + β n e n = 0

or e k = (- β 1 ) e 1 + . . . + (- β k - 1 ) e k - 1 + (- β k + 1 ) e k + 1 + . . . + (- β n ) e n

It follows that one of the vectors of the system is expressed in terms of all the other vectors of the system, which is what was required to be proved.

Sufficiency.

Let one of the vectors be linearly expressed in terms of all other vectors of the system:

e k = γ 1 e 1 + . . . + γ k - 1 e k - 1 + γ k + 1 e k + 1 + . . . + γ n e n

We transfer the vector e k to the right side of this equality:

0 = γ 1 e 1 + . . . + γ k - 1 e k - 1 - e k + γ k + 1 e k + 1 + . . . + γ n e n

Since the coefficient of the vector e k is equal to -1 ≠ 0, we get a non-trivial representation of zero by the system of vectors e 1 , e 2 , . . . , e n , and this, in turn, means that the given system of vectors is linearly dependent, which is what was required to be proved.

Consequence:

  • A system of vectors is linearly independent when none of its vectors can be expressed in terms of all other vectors of the system.
  • A vector system that contains a null vector or two equal vectors is linearly dependent.

Properties of linearly dependent vectors

  1. For 2- and 3-dimensional vectors, the condition is fulfilled: two linearly dependent vectors are collinear. Two collinear vectors are linearly dependent.
  2. For 3-dimensional vectors, the condition is fulfilled: three linearly dependent vectors are coplanar. (3 coplanar vectors - linearly dependent).
  3. For n-dimensional vectors, the condition is fulfilled: n + 1 vectors are always linearly dependent.

Examples of solving problems for linear dependence or linear independence of vectors

Example 3

Let's check the vectors a = (3, 4, 5), b = (-3, 0, 5), c = (4, 4, 4), d = (3, 4, 0) for linear independence.

Solution. The vectors are linearly dependent, because the dimension of the vectors (3) is less than the number of vectors (4).

Example 4

Let's check the vectors a = (1, 1, 1), b = (1, 2, 0), c = (0, -1, 1) for linear independence.

Solution. We find the values of the coefficients at which the linear combination will equal the zero vector:

x 1 a + x 2 b + x 3 c = 0

We write the vector equation in the form of a system of linear equations:

x 1 + x 2 = 0
x 1 + 2 x 2 - x 3 = 0
x 1 + x 3 = 0

We solve this system using the Gauss method:

1 1 0 | 0
1 2 -1 | 0
1 0 1 | 0

From the 2nd row we subtract the 1st, from the 3rd the 1st:

1 1 0 | 0
0 1 -1 | 0
0 -1 1 | 0

Subtract the 2nd row from the 1st, add the 2nd to the 3rd:

1 0 1 | 0
0 1 -1 | 0
0 0 0 | 0

It follows from the solution that the system has infinitely many solutions. This means that there is a non-zero set of numbers x 1 , x 2 , x 3 (for example, x 1 = -1, x 2 = 1, x 3 = 1) for which the linear combination of a, b, c equals the zero vector. Hence the vectors a, b, c are linearly dependent.
