
7.3 Systems of Linear Algebraic Equations; Linear Independence, Eigenvalues, Eigenvectors






has (infinitely many) nonzero solutions in addition to the trivial solution. The situation for the nonhomogeneous system (2) is more complicated. This system has no solution unless the vector b satisfies a certain further condition. This condition is that

$$(\mathbf{b}, \mathbf{y}) = 0, \tag{5}$$

for all vectors y satisfying A*y = 0, where A* is the adjoint of A. If condition (5) is met, then the system (2) has (infinitely many) solutions. Each of these solutions has the form

$$\mathbf{x} = \mathbf{x}^{(0)} + \boldsymbol{\xi}, \tag{6}$$

where x^(0) is a particular solution of Eq. (2), and ξ is any solution of the homogeneous system (4). Note the resemblance between Eq. (6) and the solution of a nonhomogeneous linear differential equation. The proofs of some of the preceding statements are outlined in Problems 25 through 29.

The results in the preceding paragraph are important as a means of classifying the solutions of linear systems. However, for solving particular systems it is generally best to use row reduction to transform the system into a much simpler one from which the solution(s), if there are any, can be written down easily. To do this efficiently we can form the augmented matrix

$$\mathbf{A}\,|\,\mathbf{b} = \left(\begin{array}{ccc|c} a_{11} & \cdots & a_{1n} & b_1 \\ \vdots & & \vdots & \vdots \\ a_{n1} & \cdots & a_{nn} & b_n \end{array}\right) \tag{7}$$

by adjoining the vector b to the coefficient matrix A as an additional column. The dashed line replaces the equals sign and is said to partition the augmented matrix. We now perform row operations on the augmented matrix so as to transform A into a triangular matrix, that is, a matrix whose elements below the main diagonal are all zero. Once this is done, it is easy to see whether the system has solutions, and to find them if it does. Observe that elementary row operations on the augmented matrix (7) correspond to legitimate operations on the equations in the system (1). The following examples illustrate the process.



EXAMPLE 1

Solve the system of equations

$$\begin{aligned} x_1 - 2x_2 + 3x_3 &= 7,\\ -x_1 + x_2 - 2x_3 &= -5,\\ 2x_1 - x_2 - x_3 &= 4. \end{aligned} \tag{8}$$

The augmented matrix for the system (8) is

$$\left(\begin{array}{ccc|c} 1 & -2 & 3 & 7 \\ -1 & 1 & -2 & -5 \\ 2 & -1 & -1 & 4 \end{array}\right). \tag{9}$$



We now perform row operations on the matrix (9) with a view to introducing zeros in

the lower left part of the matrix. Each step is described and the result recorded below.






(a) Add the first row to the second row and add (−2) times the first row to the third row.

$$\left(\begin{array}{ccc|c} 1 & -2 & 3 & 7 \\ 0 & -1 & 1 & 2 \\ 0 & 3 & -7 & -10 \end{array}\right)$$

(b) Multiply the second row by −1.

$$\left(\begin{array}{ccc|c} 1 & -2 & 3 & 7 \\ 0 & 1 & -1 & -2 \\ 0 & 3 & -7 & -10 \end{array}\right)$$

(c) Add (−3) times the second row to the third row.

$$\left(\begin{array}{ccc|c} 1 & -2 & 3 & 7 \\ 0 & 1 & -1 & -2 \\ 0 & 0 & -4 & -4 \end{array}\right)$$

(d) Divide the third row by −4.

$$\left(\begin{array}{ccc|c} 1 & -2 & 3 & 7 \\ 0 & 1 & -1 & -2 \\ 0 & 0 & 1 & 1 \end{array}\right)$$



The matrix obtained in this manner corresponds to the system of equations

$$\begin{aligned} x_1 - 2x_2 + 3x_3 &= 7,\\ x_2 - x_3 &= -2,\\ x_3 &= 1, \end{aligned} \tag{10}$$

which is equivalent to the original system (8). Note that the coefficients in Eqs. (10) form a triangular matrix. From the last of Eqs. (10) we have x3 = 1, from the second equation x2 = −2 + x3 = −1, and from the first equation x1 = 7 + 2x2 − 3x3 = 2. Thus we obtain

$$\mathbf{x} = \begin{pmatrix} 2 \\ -1 \\ 1 \end{pmatrix},$$

which is the solution of the given system (8). Incidentally, since the solution is unique, we conclude that the coefficient matrix is nonsingular.
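The elimination above can also be delegated to software. The following sketch is not part of the original text and assumes NumPy is available; np.linalg.solve factors the coefficient matrix (an LU factorization, essentially the same systematic row reduction) and back-substitutes.

```python
import numpy as np

# Coefficient matrix and right-hand side of system (8).
A = np.array([[ 1.0, -2.0,  3.0],
              [-1.0,  1.0, -2.0],
              [ 2.0, -1.0, -1.0]])
b = np.array([7.0, -5.0, 4.0])

# solve() mirrors steps (a)-(d) and the back substitution on Eqs. (10).
x = np.linalg.solve(A, b)
print(x)                  # [ 2. -1.  1.]

# The solution is unique because the coefficient matrix is nonsingular:
print(np.linalg.det(A))   # ~4.0, nonzero
```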



EXAMPLE 2

Discuss solutions of the system

$$\begin{aligned} x_1 - 2x_2 + 3x_3 &= b_1,\\ -x_1 + x_2 - 2x_3 &= b_2,\\ 2x_1 - x_2 + 3x_3 &= b_3 \end{aligned} \tag{11}$$

for various values of b1, b2, and b3.






Observe that the coefficients in the system (11) are the same as those in the system (8) except for the coefficient of x3 in the third equation. The augmented matrix for the system (11) is

$$\left(\begin{array}{ccc|c} 1 & -2 & 3 & b_1 \\ -1 & 1 & -2 & b_2 \\ 2 & -1 & 3 & b_3 \end{array}\right). \tag{12}$$

By performing steps (a), (b), and (c) as in Example 1 we transform the matrix (12) into

$$\left(\begin{array}{ccc|c} 1 & -2 & 3 & b_1 \\ 0 & 1 & -1 & -b_1 - b_2 \\ 0 & 0 & 0 & b_1 + 3b_2 + b_3 \end{array}\right). \tag{13}$$

The equation corresponding to the third row of the matrix (13) is

$$b_1 + 3b_2 + b_3 = 0; \tag{14}$$

thus the system (11) has no solution unless the condition (14) is satisfied by b1, b2, and b3. It is possible to show that this condition is just Eq. (5) for the system (11).

Let us now assume that b1 = 2, b2 = 1, and b3 = −5, in which case Eq. (14) is satisfied. Then the first two rows of the matrix (13) correspond to the equations

$$\begin{aligned} x_1 - 2x_2 + 3x_3 &= 2,\\ x_2 - x_3 &= -3. \end{aligned} \tag{15}$$

To solve the system (15) we can choose one of the unknowns arbitrarily and then solve for the other two. Letting x3 = α, where α is arbitrary, it then follows that

$$x_2 = \alpha - 3, \qquad x_1 = 2(\alpha - 3) - 3\alpha + 2 = -\alpha - 4.$$

If we write the solution in vector notation, we have

$$\mathbf{x} = \begin{pmatrix} -\alpha - 4 \\ \alpha - 3 \\ \alpha \end{pmatrix} = \alpha \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix} + \begin{pmatrix} -4 \\ -3 \\ 0 \end{pmatrix}. \tag{16}$$

It is easy to verify that the second term on the right side of Eq. (16) is a solution of the nonhomogeneous system (11), while the first term is the most general solution of the homogeneous system corresponding to (11).
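As a numerical cross-check of this structure, the sketch below (again assuming NumPy; the vectors are those of Eq. (16), not new results) verifies that the second term solves the nonhomogeneous system and the first term solves the homogeneous one.

```python
import numpy as np

# Coefficient matrix of system (11), with b = (2, 1, -5); this b satisfies
# the solvability condition (14): b1 + 3*b2 + b3 = 2 + 3 - 5 = 0.
A = np.array([[ 1.0, -2.0,  3.0],
              [-1.0,  1.0, -2.0],
              [ 2.0, -1.0,  3.0]])
b = np.array([2.0, 1.0, -5.0])

x0 = np.array([-4.0, -3.0, 0.0])   # second term of Eq. (16): particular solution
xi = np.array([-1.0,  1.0,  1.0])  # first term of Eq. (16): homogeneous solution

print(A @ x0)   # [ 2.  1. -5.]  = b
print(A @ xi)   # [ 0.  0.  0.]

# Hence x0 + alpha*xi solves the system for every alpha, e.g. alpha = 7:
print(np.allclose(A @ (x0 + 7.0 * xi), b))   # True
```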

Row reduction is also useful in solving homogeneous systems, and systems in which the number of equations is different from the number of unknowns.

Linear Independence. A set of k vectors x^(1), ..., x^(k) is said to be linearly dependent if there exists a set of (complex) numbers c1, ..., ck, at least one of which is nonzero, such that

$$c_1\mathbf{x}^{(1)} + \cdots + c_k\mathbf{x}^{(k)} = \mathbf{0}. \tag{17}$$

In other words, x^(1), ..., x^(k) are linearly dependent if there is a linear relation among them. On the other hand, if the only set c1, ..., ck for which Eq. (17) is satisfied is c1 = c2 = ... = ck = 0, then x^(1), ..., x^(k) are said to be linearly independent.




Consider now a set of n vectors, each of which has n components. Let x_ij = x_i^(j) be the ith component of the vector x^(j), and let X = (x_ij). Then Eq. (17) can be written as

$$\begin{pmatrix} x_1^{(1)}c_1 + \cdots + x_1^{(n)}c_n \\ \vdots \\ x_n^{(1)}c_1 + \cdots + x_n^{(n)}c_n \end{pmatrix} = \begin{pmatrix} x_{11}c_1 + \cdots + x_{1n}c_n \\ \vdots \\ x_{n1}c_1 + \cdots + x_{nn}c_n \end{pmatrix} = \mathbf{X}\mathbf{c} = \mathbf{0}. \tag{18}$$

If det X ≠ 0, then the only solution of Eq. (18) is c = 0, but if det X = 0, there are nonzero solutions. Thus the set of vectors x^(1), ..., x^(n) is linearly independent if and only if det X ≠ 0.



EXAMPLE 3

Determine whether the vectors

$$\mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix}, \qquad \mathbf{x}^{(2)} = \begin{pmatrix} 2 \\ 1 \\ 3 \end{pmatrix}, \qquad \mathbf{x}^{(3)} = \begin{pmatrix} -4 \\ 1 \\ -11 \end{pmatrix} \tag{19}$$

are linearly independent or linearly dependent. If linearly dependent, find a linear relation among them.

To determine whether x^(1), x^(2), and x^(3) are linearly dependent we compute det(x_ij), whose columns are the components of x^(1), x^(2), and x^(3), respectively. Thus

$$\det(x_{ij}) = \begin{vmatrix} 1 & 2 & -4 \\ 2 & 1 & 1 \\ -1 & 3 & -11 \end{vmatrix},$$

and an elementary calculation shows that it is zero. Thus x^(1), x^(2), and x^(3) are linearly dependent, and there are constants c1, c2, and c3 such that

$$c_1\mathbf{x}^{(1)} + c_2\mathbf{x}^{(2)} + c_3\mathbf{x}^{(3)} = \mathbf{0}. \tag{20}$$

Equation (20) can also be written in the form

$$\begin{pmatrix} 1 & 2 & -4 \\ 2 & 1 & 1 \\ -1 & 3 & -11 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \tag{21}$$

and solved by means of elementary row operations starting from the augmented matrix

$$\left(\begin{array}{ccc|c} 1 & 2 & -4 & 0 \\ 2 & 1 & 1 & 0 \\ -1 & 3 & -11 & 0 \end{array}\right). \tag{22}$$

We proceed as in Examples 1 and 2.

(a) Add (−2) times the first row to the second row, and add the first row to the third row.

$$\left(\begin{array}{ccc|c} 1 & 2 & -4 & 0 \\ 0 & -3 & 9 & 0 \\ 0 & 5 & -15 & 0 \end{array}\right)$$

(b) Divide the second row by −3; then add (−5) times the second row to the third row.

$$\left(\begin{array}{ccc|c} 1 & 2 & -4 & 0 \\ 0 & 1 & -3 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right)$$

Thus we obtain the equivalent system

$$\begin{aligned} c_1 + 2c_2 - 4c_3 &= 0,\\ c_2 - 3c_3 &= 0. \end{aligned} \tag{23}$$

From the second of Eqs. (23) we have c2 = 3c3, and from the first we obtain c1 = 4c3 − 2c2 = −2c3. Thus we have solved for c1 and c2 in terms of c3, with the latter remaining arbitrary. If we choose c3 = −1 for convenience, then c1 = 2 and c2 = −3. In this case the desired relation (20) becomes

$$2\mathbf{x}^{(1)} - 3\mathbf{x}^{(2)} - \mathbf{x}^{(3)} = \mathbf{0}.$$
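Both parts of this example, the determinant test and the linear relation, can be reproduced numerically. The sketch below is an illustration only and assumes NumPy and SciPy; scipy.linalg.null_space returns a basis of the solution set of Xc = 0, which is then rescaled to match the coefficients found above.

```python
import numpy as np
from scipy.linalg import null_space

# Columns are x(1), x(2), x(3) of Eq. (19).
X = np.array([[ 1.0,  2.0,  -4.0],
              [ 2.0,  1.0,   1.0],
              [-1.0,  3.0, -11.0]])

print(np.linalg.det(X))   # ~0 (up to roundoff): the vectors are dependent

# null_space() gives an orthonormal basis of {c : Xc = 0}; any nonzero
# multiple of the basis vector supplies coefficients c1, c2, c3.
c = null_space(X)[:, 0]
c = c / c[0] * 2.0        # scale so that c1 = 2, as chosen in the text
print(c)                  # [ 2. -3. -1.]  ->  2 x(1) - 3 x(2) - x(3) = 0
```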



Frequently, it is useful to think of the columns (or rows) of a matrix A as vectors. These column (or row) vectors are linearly independent if and only if det A ≠ 0. Further, if C = AB, then it can be shown that det C = (det A)(det B). Therefore, if the columns (or rows) of both A and B are linearly independent, then the columns (or rows) of C are also linearly independent.
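This product rule is easy to spot-check numerically; a minimal sketch, assuming NumPy and using two arbitrary random matrices that are not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det C = (det A)(det B) for C = AB:
print(np.allclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B)))   # True
```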

Now let us extend the concepts of linear dependence and independence to a set

of vector functions x(1) (t), . . . , x(k) (t) defined on an interval α < t < β. The vectors

x(1) (t), . . . , x(k) (t) are said to be linearly dependent on α < t < β if there exists a set of

constants c1 , . . . , ck , not all of which are zero, such that c1 x(1) (t) + · · · + ck x(k) (t) = 0

for all t in the interval. Otherwise, x(1) (t), . . . , x(k) (t) are said to be linearly independent.

Note that if x(1) (t), . . . , x(k) (t) are linearly dependent on an interval, they are linearly

dependent at each point in the interval. However, if x(1) (t), . . . , x(k) (t) are linearly

independent on an interval, they may or may not be linearly independent at each point;

they may, in fact, be linearly dependent at each point, but with different sets of constants

at different points. See Problem 14 for an example.

Eigenvalues and Eigenvectors. The equation

$$\mathbf{A}\mathbf{x} = \mathbf{y} \tag{24}$$

can be viewed as a linear transformation that maps (or transforms) a given vector x into a new vector y. Vectors that are transformed into multiples of themselves are important in many applications.⁴ To find such vectors we set y = λx, where λ is a scalar proportionality factor, and seek solutions of the equations

$$\mathbf{A}\mathbf{x} = \lambda\mathbf{x}, \tag{25}$$

or

$$(\mathbf{A} - \lambda\mathbf{I})\mathbf{x} = \mathbf{0}. \tag{26}$$



⁴ For example, this problem is encountered in finding the principal axes of stress or strain in an elastic body, and in finding the modes of free vibration in a conservative system with a finite number of degrees of freedom.






The latter equation has nonzero solutions if and only if λ is chosen so that

$$\det(\mathbf{A} - \lambda\mathbf{I}) = 0. \tag{27}$$

Values of λ that satisfy Eq. (27) are called eigenvalues of the matrix A, and the nonzero solutions of Eq. (25) or (26) that are obtained by using such a value of λ are called the eigenvectors corresponding to that eigenvalue.

If A is a 2 × 2 matrix, then Eq. (26) has the form

$$\begin{pmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \tag{28}$$

and Eq. (27) becomes

$$(a_{11} - \lambda)(a_{22} - \lambda) - a_{12}a_{21} = 0. \tag{29}$$



The following example illustrates how eigenvalues and eigenvectors are found.



EXAMPLE 4

Find the eigenvalues and eigenvectors of the matrix

$$\mathbf{A} = \begin{pmatrix} 3 & -1 \\ 4 & -2 \end{pmatrix}. \tag{30}$$

The eigenvalues λ and eigenvectors x satisfy the equation (A − λI)x = 0, or

$$\begin{pmatrix} 3-\lambda & -1 \\ 4 & -2-\lambda \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \tag{31}$$

The eigenvalues are the roots of the equation

$$\det(\mathbf{A} - \lambda\mathbf{I}) = \begin{vmatrix} 3-\lambda & -1 \\ 4 & -2-\lambda \end{vmatrix} = \lambda^2 - \lambda - 2 = 0. \tag{32}$$

Thus the eigenvalues are λ1 = 2 and λ2 = −1.

To find the eigenvectors we return to Eq. (31) and replace λ by each of the eigenvalues in turn. For λ = 2 we have

$$\begin{pmatrix} 1 & -1 \\ 4 & -4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \tag{33}$$

Hence each row of this vector equation leads to the condition x1 − x2 = 0, so x1 and x2 are equal, but their value is not determined. If x1 = c, then x2 = c also, and the eigenvector x^(1) is

$$\mathbf{x}^{(1)} = c\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad c \neq 0. \tag{34}$$

Usually, we will drop the arbitrary constant c when finding eigenvectors; thus instead of Eq. (34) we write

$$\mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \tag{35}$$

and remember that any nonzero multiple of this vector is also an eigenvector. We say that x^(1) is the eigenvector corresponding to the eigenvalue λ1 = 2.

Now setting λ = −1 in Eq. (31), we obtain

$$\begin{pmatrix} 4 & -1 \\ 4 & -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \tag{36}$$

Again we obtain a single condition on x1 and x2, namely, 4x1 − x2 = 0. Thus the eigenvector corresponding to the eigenvalue λ2 = −1 is

$$\mathbf{x}^{(2)} = \begin{pmatrix} 1 \\ 4 \end{pmatrix}, \tag{37}$$

or any nonzero multiple of this vector.
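A library routine gives the same results; the sketch below assumes NumPy and is not part of the original text. np.linalg.eig returns unit-length eigenvectors, so each column is a nonzero multiple of the vectors in Eqs. (35) and (37); rescaling the first component to 1 makes the comparison direct.

```python
import numpy as np

# The matrix of Eq. (30).
A = np.array([[3.0, -1.0],
              [4.0, -2.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)   # [ 2. -1.]  (the order may differ)

# Each column of eigvecs is a unit eigenvector; rescaling its first
# component to 1 recovers the vectors of Eqs. (35) and (37).
for j in range(len(eigvals)):
    v = eigvecs[:, j] / eigvecs[0, j]
    print(eigvals[j], v)   # 2.0 [1. 1.]   and   -1.0 [1. 4.]
```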



As Example 4 illustrates, eigenvectors are determined only up to an arbitrary nonzero multiplicative constant; if this constant is specified in some way, then the eigenvectors are said to be normalized. In Example 4, we set the constant equal to 1, but any other nonzero value could also have been used. Sometimes it is convenient to normalize an eigenvector x by choosing the constant so that (x, x) = 1.

Equation (27) is a polynomial equation of degree n in λ, so there are n eigenvalues λ1, ..., λn, some of which may be repeated. If a given eigenvalue appears m times as a root of Eq. (27), then that eigenvalue is said to have multiplicity m. Each eigenvalue has at least one associated eigenvector, and an eigenvalue of multiplicity m may have q linearly independent eigenvectors, where

$$1 \le q \le m. \tag{38}$$

Examples show that q may be any integer in this interval. If all the eigenvalues of a matrix A are simple (have multiplicity one), then it is possible to show that the n eigenvectors of A, one for each eigenvalue, are linearly independent. On the other hand, if A has one or more repeated eigenvalues, then there may be fewer than n linearly independent eigenvectors associated with A, since for a repeated eigenvalue we may have q < m. As we will see in Section 7.8, this fact may lead to complications later on in the solution of systems of differential equations.



EXAMPLE 5

Find the eigenvalues and eigenvectors of the matrix

$$\mathbf{A} = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}. \tag{39}$$

The eigenvalues λ and eigenvectors x satisfy the equation (A − λI)x = 0, or

$$\begin{pmatrix} -\lambda & 1 & 1 \\ 1 & -\lambda & 1 \\ 1 & 1 & -\lambda \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}. \tag{40}$$

The eigenvalues are the roots of the equation

$$\det(\mathbf{A} - \lambda\mathbf{I}) = \begin{vmatrix} -\lambda & 1 & 1 \\ 1 & -\lambda & 1 \\ 1 & 1 & -\lambda \end{vmatrix} = -\lambda^3 + 3\lambda + 2 = 0. \tag{41}$$

The roots of Eq. (41) are λ1 = 2, λ2 = −1, and λ3 = −1. Thus 2 is a simple eigenvalue, and −1 is an eigenvalue of multiplicity 2.

To find the eigenvector x^(1) corresponding to the eigenvalue λ1 we substitute λ = 2 in Eq. (40); this gives the system

$$\begin{pmatrix} -2 & 1 & 1 \\ 1 & -2 & 1 \\ 1 & 1 & -2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}. \tag{42}$$

We can reduce this to the equivalent system

$$\begin{pmatrix} 2 & -1 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \tag{43}$$

by elementary row operations. Solving this system we obtain the eigenvector

$$\mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}. \tag{44}$$

For λ = −1, Eqs. (40) reduce immediately to the single equation

$$x_1 + x_2 + x_3 = 0. \tag{45}$$

Thus values for two of the quantities x1, x2, x3 can be chosen arbitrarily and the third is determined from Eq. (45). For example, if x1 = 1 and x2 = 0, then x3 = −1, and

$$\mathbf{x}^{(2)} = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} \tag{46}$$

is an eigenvector. Any nonzero multiple of x^(2) is also an eigenvector, but a second independent eigenvector can be found by making another choice of x1 and x2; for instance, x1 = 0 and x2 = 1. Again x3 = −1 and

$$\mathbf{x}^{(3)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} \tag{47}$$

is an eigenvector linearly independent of x^(2). Therefore in this example two linearly independent eigenvectors are associated with the double eigenvalue.
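Numerically, the repeated eigenvalue causes no difficulty here, since q = m = 2. A sketch assuming NumPy (an illustration, not part of the text):

```python
import numpy as np

# The matrix of Eq. (39).
A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(np.round(eigvals, 10))   # [ 2. -1. -1.]  (the order may differ)

# The double eigenvalue -1 carries two linearly independent eigenvectors
# (q = m = 2), so the eigenvector matrix has full rank:
print(np.linalg.matrix_rank(eigvecs))   # 3
```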



An important special class of matrices, called self-adjoint or Hermitian matrices, consists of those for which A* = A; that is, a_ij = ā_ji. Hermitian matrices include as a subclass real symmetric matrices, that is, matrices that have real elements and for which A^T = A. The eigenvalues and eigenvectors of Hermitian matrices always have the following useful properties:

1. All eigenvalues are real.
2. There always exists a full set of n linearly independent eigenvectors, regardless of the multiplicities of the eigenvalues.
3. If x^(1) and x^(2) are eigenvectors that correspond to different eigenvalues, then (x^(1), x^(2)) = 0. Thus, if all eigenvalues are simple, then the associated eigenvectors form an orthogonal set of vectors.
4. Corresponding to an eigenvalue of multiplicity m, it is possible to choose m eigenvectors that are mutually orthogonal. Thus the full set of n eigenvectors can always be chosen to be orthogonal as well as linearly independent.



Example 5 above involves a real symmetric matrix and illustrates properties 1, 2, and 3, but the choice we have made for x^(2) and x^(3) does not illustrate property 4. However, it is always possible to choose an x^(2) and x^(3) so that (x^(2), x^(3)) = 0. For example, in Example 5 we could have chosen

$$\mathbf{x}^{(2)} = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \qquad \mathbf{x}^{(3)} = \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}$$

as the eigenvectors associated with the eigenvalue λ = −1. These eigenvectors are orthogonal to each other as well as to the eigenvector x^(1) corresponding to the eigenvalue λ = 2. The proofs of statements 1 and 3 above are outlined in Problems 32 and 33.
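For real symmetric (and, more generally, Hermitian) matrices, NumPy provides the specialized routine np.linalg.eigh, which returns real eigenvalues and mutually orthogonal eigenvectors, in line with properties 1 through 4. A sketch, again for the matrix of Example 5 and again an illustration rather than part of the text:

```python
import numpy as np

A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

# eigh() is restricted to Hermitian/symmetric input; it returns real,
# sorted eigenvalues and orthonormal eigenvector columns, orthogonal
# even within the repeated eigenvalue -1 (property 4).
eigvals, Q = np.linalg.eigh(A)
print(eigvals)                            # [-1. -1.  2.]
print(np.allclose(Q.T @ Q, np.eye(3)))    # True: orthonormal columns
```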



PROBLEMS



In each of Problems 1 through 5 either solve the given set of equations, or else show that there is no solution.

1. x1 − x3 = 0
   3x1 + x2 + x3 = 1
   −x1 + x2 + 2x3 = 2

2. x1 + 2x2 − x3 = 1
   2x1 + x2 + x3 = 1
   x1 − x2 + 2x3 = 1

3. x1 + 2x2 − x3 = 2
   2x1 + x2 + x3 = 1
   x1 − x2 + 2x3 = −1

4. x1 + 2x2 − x3 = 0
   2x1 + x2 + x3 = 0
   x1 − x2 + 2x3 = 0

5. x1 − x3 = 0
   3x1 + x2 + x3 = 0
   −x1 + x2 + 2x3 = 0



In each of Problems 6 through 10 determine whether the given set of vectors is linearly independent. If linearly dependent, find a linear relation among them. The vectors are written as row vectors to save space, but may be considered as column vectors; that is, the transposes of the given vectors may be used instead of the vectors themselves.

6. x^(1) = (1, 1, 0), x^(2) = (0, 1, 1), x^(3) = (1, 0, 1)
7. x^(1) = (2, 1, 0), x^(2) = (0, 1, 0), x^(3) = (−1, 2, 0)
8. x^(1) = (1, 2, 2, 3), x^(2) = (−1, 0, 3, 1), x^(3) = (−2, −1, 1, 0), x^(4) = (−3, 0, −1, 3)
9. x^(1) = (1, 2, −1, 0), x^(2) = (2, 3, 1, −1), x^(3) = (−1, 0, 2, 2), x^(4) = (3, −1, 1, 3)
10. x^(1) = (1, 2, −2), x^(2) = (3, 1, 0), x^(3) = (2, −1, 1), x^(4) = (4, 3, −2)

11. Suppose that the vectors x(1) , . . . , x(m) each have n components, where n < m. Show that

x(1) , . . . , x(m) are linearly dependent.

In each of Problems 12 and 13 determine whether the given set of vectors is linearly independent for −∞ < t < ∞. If linearly dependent, find the linear relation among them. As in Problems 6 through 10 the vectors are written as row vectors to save space.

12. x^(1)(t) = (e^{−t}, 2e^{−t}), x^(2)(t) = (e^{−t}, e^{−t}), x^(3)(t) = (3e^{−t}, 0)
13. x^(1)(t) = (2 sin t, sin t), x^(2)(t) = (sin t, 2 sin t)

14. Let

$$\mathbf{x}^{(1)}(t) = \begin{pmatrix} e^t \\ te^t \end{pmatrix}, \qquad \mathbf{x}^{(2)}(t) = \begin{pmatrix} 1 \\ t \end{pmatrix}.$$

Show that x^(1)(t) and x^(2)(t) are linearly dependent at each point in the interval 0 ≤ t ≤ 1. Nevertheless, show that x^(1)(t) and x^(2)(t) are linearly independent on 0 ≤ t ≤ 1.

In each of Problems 15 through 24 find all eigenvalues and eigenvectors of the given matrix.

$$15.\ \begin{pmatrix} 5 & -1 \\ 3 & 1 \end{pmatrix} \qquad\qquad 16.\ \begin{pmatrix} 3 & -2 \\ 4 & -1 \end{pmatrix}$$

$$17.\ \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} \qquad\qquad 18.\ \begin{pmatrix} 1 & i \\ -i & 1 \end{pmatrix}$$

$$19.\ \begin{pmatrix} 1 & \sqrt{3} \\ \sqrt{3} & -1 \end{pmatrix} \qquad\qquad 20.\ \begin{pmatrix} -3 & 3/4 \\ -5 & 1 \end{pmatrix}$$

$$21.\ \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & -2 \\ 3 & 2 & 1 \end{pmatrix} \qquad\qquad 22.\ \begin{pmatrix} 3 & 2 & 2 \\ 1 & 4 & 1 \\ -2 & -4 & -1 \end{pmatrix}$$

$$23.\ \begin{pmatrix} 11/9 & -2/9 & 8/9 \\ -2/9 & 2/9 & 10/9 \\ 8/9 & 10/9 & 5/9 \end{pmatrix} \qquad\qquad 24.\ \begin{pmatrix} 3 & 2 & 4 \\ 2 & 0 & 2 \\ 4 & 2 & 3 \end{pmatrix}$$



Problems 25 through 29 deal with the problem of solving Ax = b when det A = 0.

25. Suppose that, for a given matrix A, there is a nonzero vector x such that Ax = 0. Show

that there is also a nonzero vector y such that A∗ y = 0.

26. Show that (Ax, y) = (x, A∗ y) for any vectors x and y.

27. Suppose that det A = 0 and that Ax = b has solutions. Show that (b, y) = 0, where y is

any solution of A∗ y = 0. Verify that this statement is true for the set of equations in

Example 2.

Hint: Use the result of Problem 26.

28. Suppose that det A = 0, and that x = x^(0) is a solution of Ax = b. Show that if ξ is a solution of Aξ = 0 and α is any constant, then x = x^(0) + αξ is also a solution of Ax = b.

29. Suppose that det A = 0 and that y is a solution of A∗ y = 0. Show that if (b, y) = 0 for

every such y, then Ax = b has solutions. Note that this is the converse of Problem 27; the

form of the solution is given by Problem 28.

30. Prove that λ = 0 is an eigenvalue of A if and only if A is singular.

31. Prove that if A is Hermitian, then (Ax, y) = (x, Ay), where x and y are any vectors.

32. In this problem we show that the eigenvalues of a Hermitian matrix A are real. Let x be an eigenvector corresponding to the eigenvalue λ.
(a) Show that (Ax, x) = (x, Ax). Hint: See Problem 31.
(b) Show that λ(x, x) = λ̄(x, x). Hint: Recall that Ax = λx.
(c) Show that λ = λ̄; that is, the eigenvalue λ is real.

33. Show that if λ1 and λ2 are eigenvalues of a Hermitian matrix A, and if λ1 ≠ λ2, then the corresponding eigenvectors x^(1) and x^(2) are orthogonal.
Hint: Use the results of Problems 31 and 32 to show that (λ1 − λ2)(x^(1), x^(2)) = 0.






7.4 Basic Theory of Systems of First Order Linear Equations

The general theory of a system of n first order linear equations

$$\begin{aligned} x_1' &= p_{11}(t)x_1 + \cdots + p_{1n}(t)x_n + g_1(t),\\ &\;\;\vdots\\ x_n' &= p_{n1}(t)x_1 + \cdots + p_{nn}(t)x_n + g_n(t) \end{aligned} \tag{1}$$



closely parallels that of a single linear equation of nth order. The discussion in this section therefore follows the same general lines as that in Sections 3.2, 3.3, and 4.1.

To discuss the system (1) most effectively, we write it in matrix notation. That is, we consider x1 = φ1(t), ..., xn = φn(t) to be components of a vector x = φ(t); similarly, g1(t), ..., gn(t) are components of a vector g(t), and p11(t), ..., pnn(t) are elements of an n × n matrix P(t). Equation (1) then takes the form

$$\mathbf{x}' = \mathbf{P}(t)\mathbf{x} + \mathbf{g}(t). \tag{2}$$

The use of vectors and matrices not only saves a great deal of space and facilitates calculations but also emphasizes the similarity between systems of equations and single (scalar) equations.

A vector x = φ(t) is said to be a solution of Eq. (2) if its components satisfy the system of equations (1). Throughout this section we assume that P and g are continuous on some interval α < t < β; that is, each of the scalar functions p11, ..., pnn, g1, ..., gn is continuous there. According to Theorem 7.1.2, this is sufficient to guarantee the existence of solutions of Eq. (2) on the interval α < t < β.

It is convenient to consider first the homogeneous equation

$$\mathbf{x}' = \mathbf{P}(t)\mathbf{x} \tag{3}$$

obtained from Eq. (2) by setting g(t) = 0. Once the homogeneous equation has been solved, there are several methods that can be used to solve the nonhomogeneous equation (2); this is taken up in Section 7.9. We use the notation









$$\mathbf{x}^{(1)}(t) = \begin{pmatrix} x_{11}(t) \\ x_{21}(t) \\ \vdots \\ x_{n1}(t) \end{pmatrix}, \quad \ldots, \quad \mathbf{x}^{(k)}(t) = \begin{pmatrix} x_{1k}(t) \\ x_{2k}(t) \\ \vdots \\ x_{nk}(t) \end{pmatrix}, \quad \ldots \tag{4}$$

to designate specific solutions of the system (3). Note that x_ij(t) = x_i^(j)(t) refers to the ith component of the jth solution x^(j)(t). The main facts about the structure of solutions of the system (3) are stated in Theorems 7.4.1 to 7.4.4. They closely resemble the corresponding theorems in Sections 3.2, 3.3, and 4.1; some of the proofs are left to the reader as exercises.



Theorem 7.4.1

If the vector functions x^(1) and x^(2) are solutions of the system (3), then the linear combination c1 x^(1) + c2 x^(2) is also a solution for any constants c1 and c2.

This is the principle of superposition; it is proved simply by differentiating c1 x^(1) + c2 x^(2) and using the fact that x^(1) and x^(2) satisfy Eq. (3). By repeated application of Theorem 7.4.1, it follows that any finite linear combination of solutions of Eq. (3) is also a solution.
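For a constant coefficient matrix P, the superposition principle is easy to check numerically, since x(t) = e^{Pt} x(0) then solves x′ = Px. The sketch below is an illustration under that assumption (NumPy and SciPy assumed; the particular P is an arbitrary example, not from the text): it differentiates a linear combination of two solutions by a centered difference and compares with P times the combination.

```python
import numpy as np
from scipy.linalg import expm

# An arbitrary constant matrix P (chosen for illustration only).
P = np.array([[3.0, -1.0],
              [4.0, -2.0]])

def solution(x0, t):
    # x(t) = e^{Pt} x0 solves x' = P x with x(0) = x0.
    return expm(P * t) @ x0

c1, c2 = 2.0, -5.0
x1_0 = np.array([1.0, 0.0])
x2_0 = np.array([0.0, 1.0])

combo = lambda s: c1 * solution(x1_0, s) + c2 * solution(x2_0, s)

# A centered-difference derivative of the combination at t = 0.7 ...
t, h = 0.7, 1e-6
deriv = (combo(t + h) - combo(t - h)) / (2.0 * h)

# ... agrees with P times the combination, so c1 x(1) + c2 x(2) is
# again a solution, as Theorem 7.4.1 asserts.
print(np.allclose(deriv, P @ combo(t), atol=1e-4))   # True
```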


