
1.10 Cauchy's Polar Decomposition Theorem




To prove the existence, we note that the tensor $\mathbf{C} = \mathbf{F}^T\mathbf{F}$ is symmetric and positive definite. Therefore, Theorem 1.8 implies that $\mathbf{U}$ exists and that it is the only tensor satisfying the equation $\mathbf{U}^2 = \mathbf{C}$. Choosing $\mathbf{R} = \mathbf{F}\mathbf{U}^{-1}$, we obtain $\mathbf{F} = \mathbf{R}\mathbf{U}$. The following relations prove that $\mathbf{R}$ is a proper rotation:
$$\mathbf{R}^T\mathbf{R} = \mathbf{U}^{-1}\mathbf{F}^T\mathbf{F}\,\mathbf{U}^{-1} = \mathbf{U}^{-1}\mathbf{U}^2\,\mathbf{U}^{-1} = \mathbf{I}, \qquad \det\mathbf{R} = \det\mathbf{F}\,\det\mathbf{U}^{-1} > 0.$$
Finally, the second decomposition is obtained by taking $\mathbf{V} = \mathbf{R}\mathbf{U}\mathbf{R}^T$.
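The constructive steps of this proof can be checked numerically. The following sketch (assuming NumPy; the sample $\mathbf{F}$ is an arbitrary matrix with $\det\mathbf{F} > 0$, not from the text) builds $\mathbf{U}$ as the spectral square root of $\mathbf{C} = \mathbf{F}^T\mathbf{F}$ and recovers the rotation $\mathbf{R}$:

```python
import numpy as np

def polar_decompose(F):
    """Right polar decomposition F = R U, following the proof above:
    U is the unique symmetric positive-definite square root of C = F^T F."""
    C = F.T @ F
    w, Q = np.linalg.eigh(C)            # spectral decomposition C = Q diag(w) Q^T
    U = Q @ np.diag(np.sqrt(w)) @ Q.T   # symmetric positive definite, U^2 = C
    R = F @ np.linalg.inv(U)            # then F = R U
    return R, U

# arbitrary sample with det F > 0
F = np.array([[2.0, 0.5, 0.0],
              [0.3, 1.5, 0.2],
              [0.0, 0.1, 1.0]])
R, U = polar_decompose(F)
```

The checks $\mathbf{R}^T\mathbf{R} = \mathbf{I}$, $\det\mathbf{R} > 0$, and $\mathbf{F} = \mathbf{R}\mathbf{U}$ reproduce exactly the relations used in the proof.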



1.11 Higher Order Tensors

The definition of a Euclidean second-order tensor introduced in the previous section

is generalized here to Euclidean higher order tensors. A third-order Euclidean

tensor or, in short, a 3-tensor, is a mapping

$$\mathbf{T}: E_3 \times E_3 \to E_3,$$

which is linear with respect to each variable. The Levi–Civita symbol introduced

by (1.34) is an example of a third-order tensor.

If a basis $(\mathbf{e}_i)$ is introduced in $E_3$, then for all $\mathbf{u}, \mathbf{v} \in E_3$ we have the relation
$$\mathbf{T}(\mathbf{u}, \mathbf{v}) = \mathbf{T}(u_i\mathbf{e}_i, v_j\mathbf{e}_j) = u_i v_j\,\mathbf{T}(\mathbf{e}_i, \mathbf{e}_j),$$
from which, introducing the notation
$$\mathbf{T}(\mathbf{e}_i, \mathbf{e}_j) = T_{ijk}\,\mathbf{e}_k,$$
we also have
$$\mathbf{T}(\mathbf{u}, \mathbf{v}) = T_{ijk}\,u_i v_j\,\mathbf{e}_k. \tag{1.88}$$


The tensor product of the vectors $\mathbf{u}, \mathbf{v}, \mathbf{w}$ is the following third-order tensor:
$$\mathbf{u} \otimes \mathbf{v} \otimes \mathbf{w}\,(\mathbf{x}, \mathbf{y}) = \mathbf{u}\,(\mathbf{v} \cdot \mathbf{x})(\mathbf{w} \cdot \mathbf{y}) \qquad \forall \mathbf{x}, \mathbf{y} \in E_3. \tag{1.89}$$
From (1.23), Eq. (1.88) becomes
$$\mathbf{T}(\mathbf{u}, \mathbf{v}) = T_{ijk}\,(\mathbf{e}_i \cdot \mathbf{u})(\mathbf{e}_j \cdot \mathbf{v})\,\mathbf{e}_k = T_{ijk}\,\mathbf{e}_k \otimes \mathbf{e}_i \otimes \mathbf{e}_j\,(\mathbf{u}, \mathbf{v}),$$



1 Elements of Linear Algebra

so that, from the arbitrariness of the vectors $\mathbf{u}, \mathbf{v}$, we obtain
$$\mathbf{T} = T_{ijk}\,\mathbf{e}_k \otimes \mathbf{e}_i \otimes \mathbf{e}_j.$$


Relation (1.88) can be written using different types of components of the tensor $\mathbf{T}$. For example, recalling the definition of the reciprocal basis, we derive
$$\mathbf{T} = T_j{}^l{}_k\,\mathbf{e}_l \otimes \mathbf{e}^j \otimes \mathbf{e}^k = T^{lm}{}_k\,\mathbf{e}_l \otimes \mathbf{e}_m \otimes \mathbf{e}^k = T^{pqr}\,\mathbf{e}_p \otimes \mathbf{e}_q \otimes \mathbf{e}_r,$$
where
$$T_j{}^l{}_k = g^{li}\,T_{ijk}, \qquad T^{lm}{}_k = g^{li} g^{mj}\,T_{ijk}, \qquad T^{pqr} = g^{pk}\,T^{qr}{}_k$$
denote the different components of the tensor.
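As a concrete check of the component formula $\mathbf{T}(\mathbf{u}, \mathbf{v}) = T_{ijk} u_i v_j \mathbf{e}_k$, one can take the Levi–Civita symbol in an orthonormal basis, where the resulting bilinear map is the familiar cross product. A minimal sketch assuming NumPy (the sample vectors are arbitrary):

```python
import numpy as np

# Levi-Civita symbol as a 3-tensor: in an orthonormal basis,
# T(u, v) = eps_{ijk} u_i v_j e_k is exactly the cross product u x v.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # even permutations
    eps[j, i, k] = -1.0   # odd permutations

def apply_3tensor(T, u, v):
    # components of T(u, v): contract T_{ijk} with u_i and v_j
    return np.einsum('ijk,i,j->k', T, u, v)

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
w = apply_3tensor(eps, u, v)
```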

A third-order tensor can also be defined as a linear mapping
$$\mathbf{T}: E_3 \to \mathrm{Lin}(E_3),$$
by again introducing the tensor product (1.89) as follows:
$$\mathbf{u} \otimes \mathbf{v} \otimes \mathbf{w}\,(\mathbf{x}) = (\mathbf{w} \cdot \mathbf{x})\,\mathbf{u} \otimes \mathbf{v}.$$
In fact, from the equation
$$\mathbf{T}\mathbf{u} = u_i\,\mathbf{T}\mathbf{e}_i$$
and the representation
$$\mathbf{T}\mathbf{e}_i = T_i{}^{jk}\,\mathbf{e}_j \otimes \mathbf{e}_k,$$
we derive
$$\mathbf{T} = T_i{}^{jk}\,\mathbf{e}_j \otimes \mathbf{e}_k \otimes \mathbf{e}^i.$$

Finally, a 4th-order tensor is a linear mapping $\mathbf{T}: E_3 \times E_3 \times E_3 \to E_3$ or, equivalently, a linear mapping $\mathbf{T}: \mathrm{Lin}(E_3) \to \mathrm{Lin}(E_3)$. Later, we will prove that, in the presence of small deformations of an elastic system, the deformation state is described by a symmetric 2-tensor $E_{hk}$, while the corresponding stress state is expressed by another symmetric tensor $T_{hk}$. Moreover, the relation between these two tensors is linear:
$$T_{hk} = C_{hk}{}^{lm}\,E_{lm},$$
where the 4th-order tensor $C_{hk}{}^{lm}$ is called the linear elasticity tensor.
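The linear stress–strain relation $T_{hk} = C_{hk}{}^{lm} E_{lm}$ can be illustrated with the isotropic elasticity tensor. This is a hedged sketch assuming NumPy; the Lamé constants and the strain matrix are illustrative values, not taken from the text:

```python
import numpy as np

# Isotropic 4th-order elasticity tensor in an orthonormal basis:
# C_{hklm} = lam d_hk d_lm + mu (d_hl d_km + d_hm d_kl)
lam, mu = 1.0, 2.0                  # illustrative Lame constants (assumption)
d = np.eye(3)
C = (lam * np.einsum('hk,lm->hklm', d, d)
     + mu * (np.einsum('hl,km->hklm', d, d) + np.einsum('hm,kl->hklm', d, d)))

E = np.array([[1e-3, 2e-4, 0.0],    # small symmetric strain tensor (sample)
              [2e-4, 5e-4, 0.0],
              [0.0,  0.0, -1e-4]])

# the 4-tensor acts linearly on the 2-tensor E: T_hk = C_{hklm} E_lm
T = np.einsum('hklm,lm->hk', C, E)
```

For this isotropic choice the contraction collapses to the classical form $T = \lambda\,(\mathrm{tr}\,E)\,\mathbf{I} + 2\mu E$, and the resulting stress is symmetric, as stated in the text.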



1.12 Euclidean Point Space

In this section a mathematical model of the ordinary three-dimensional Euclidean space will be given. A set $\mathcal{E}_n$, whose elements will be denoted by capital letters $A, B, \ldots$, will be called an n-dimensional affine space if a mapping
$$f: (A, B) \in \mathcal{E}_n \times \mathcal{E}_n \mapsto f(A, B) \equiv \overrightarrow{AB} \in E_n$$
exists between the pairs of elements of $\mathcal{E}_n$ and the vectors of a real n-dimensional vector space $E_n$, such that










1. for any three points $A, B, C \in \mathcal{E}_n$,
$$\overrightarrow{AB} + \overrightarrow{BC} = \overrightarrow{AC};$$
2. for any $O \in \mathcal{E}_n$ and any vector $\mathbf{u} \in E_n$, one and only one element $P \in \mathcal{E}_n$ exists such that
$$\overrightarrow{OP} = \mathbf{u}. \tag{1.94}$$


Then the elements of $\mathcal{E}_n$ will be called points; moreover, the points $A, B$ of $\mathcal{E}_n$ that correspond to $\overrightarrow{AB}$ are called the initial and final point of $\overrightarrow{AB}$, respectively.

A frame of reference $(O, \mathbf{e}_i)$ in an n-dimensional affine space $\mathcal{E}_n$ is the set of a point $O \in \mathcal{E}_n$ and a basis $(\mathbf{e}_i)$ of $E_n$. The point $O$ is called the origin of the frame. For a fixed frame, the relation (1.94) can be written as
$$\overrightarrow{OP} = u^i\,\mathbf{e}_i,$$


where the real numbers $u^i$ denote the contravariant components of $\mathbf{u}$ in the basis $(\mathbf{e}_i)$. These components are also called the rectilinear coordinates of $P$ in the frame $(O, \mathbf{e}_i)$. In order to derive the transformation formulae of the rectilinear coordinates (Sect. 1.1) of a point $P \in \mathcal{E}_3$ under the frame change $(O, \mathbf{e}_i) \to (O', \mathbf{e}'_i)$, we note that
$$\overrightarrow{OP} = \overrightarrow{OO'} + \overrightarrow{O'P}. \tag{1.96}$$

Representing the vectors of (1.96) in both frames, we have
$$u^i\,\mathbf{e}_i = u^i_{O'}\,\mathbf{e}_i + u'^j\,\mathbf{e}'_j,$$
where $u^i_{O'}$ denote the coordinates of $O'$ with respect to $(O, \mathbf{e}_i)$. Recalling (1.25), from the previous relation we derive
$$u^i = u^i_{O'} + A^i_j\,u'^j.$$
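The coordinate transformation $u^i = u^i_{O'} + A^i_j u'^j$ can be sketched numerically with a hypothetical frame change (assuming NumPy; here $A$ is a rotation about $\mathbf{e}_3$ and $O'$ has coordinates $(1, 2, 0)$ in the old frame, both chosen only for illustration):

```python
import numpy as np

# columns of A are the new basis vectors e'_j expressed in the old basis e_i
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])     # rotation by 90 degrees about e_3
u_O = np.array([1.0, 2.0, 0.0])      # coordinates u^i_{O'} of O' in (O, e_i)

u_new = np.array([3.0, 4.0, 5.0])    # coordinates u'^j of P in (O', e'_i)
u_old = u_O + A @ u_new              # u^i = u^i_{O'} + A^i_j u'^j
```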


Finally, if $E_n$ is a Euclidean vector space, then $\mathcal{E}_n$ is called a Euclidean point space. In this case, the distance between two points $A$ and $B$ can be defined as the length of the unique vector $\mathbf{u}$ corresponding to the pair $(A, B)$. To determine the expression for the distance in terms of the rectilinear coordinates of the above points, we note that, in a given frame of reference $(O, \mathbf{e}_i)$, we have
$$\overrightarrow{AB} = (u^i_B - u^i_A)\,\mathbf{e}_i,$$
where $u^i_B$, $u^i_A$ denote the rectilinear coordinates of $B$ and $A$, respectively. Therefore, the distance $|AB|$ can be written as
$$|AB| = \sqrt{g_{ij}\,(u^i_B - u^i_A)(u^j_B - u^j_A)}\,,$$
and it assumes the Pythagorean form
$$|AB| = \sqrt{\sum_{i=1}^n (u^i_B - u^i_A)^2}$$
when the basis $(\mathbf{e}_i)$ is orthonormal.
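The metric form of the distance reduces to the Pythagorean one exactly when $g_{ij} = \delta_{ij}$. A minimal numerical sketch (assuming NumPy; the non-trivial metric below is the one quoted in Exercise 8, used purely as an example):

```python
import numpy as np

def distance(g, uA, uB):
    """|AB| = sqrt(g_ij (u_B^i - u_A^i)(u_B^j - u_A^j)) in rectilinear coordinates."""
    d = uB - uA
    return np.sqrt(d @ g @ d)

g = np.array([[2.0, 1.0, 1.0],     # metric coefficients of a non-orthonormal basis
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])
uA = np.array([0.0, 0.0, 0.0])
uB = np.array([1.0, 0.0, 0.0])
dAB = distance(g, uA, uB)          # sqrt(g_11) = sqrt(2)

# with g = identity the formula reduces to the Pythagorean form
d_euclid = distance(np.eye(3), uA, uB)
```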

1.13 Exercises

1. In Sect. 1.3 the vectors of the dual basis have been defined by (1.18). Then, it has been proved that
$$\mathbf{e}^i \cdot \mathbf{e}_j = \delta^i_j.$$
Prove that this condition, in turn, leads to (1.18).
Hint: If we put $\mathbf{e}^i = \gamma^{ij}\,\mathbf{e}_j$, then from the above condition we obtain $\gamma^{ij} g_{jk} = \delta^i_k$, that is, $\gamma^{ij} = g^{ij}$.

2. Prove that (1.38) is equivalent to the following formula:
$$(\mathbf{u} \times \mathbf{v}) \cdot \mathbf{w} = \sqrt{g}\,\begin{vmatrix} u^1 & u^2 & u^3 \\ v^1 & v^2 & v^3 \\ w^1 & w^2 & w^3 \end{vmatrix}.$$
In particular, verify that the elementary volume $dV$ of the three-dimensional Euclidean point space in the coordinates $(y^i)$ can be written as follows:
$$dV = \sqrt{g}\,dy^1\,dy^2\,dy^3.$$

3. Prove the invariance under basis changes of the coefficients of the characteristic equation (1.64). Use the transformation formulae (1.53) of the mixed components of a tensor.

4. Using the properties of the cross product, determine the reciprocal basis $(\mathbf{e}^i)$ of $(\mathbf{e}_i)$, $i = 1, 2, 3$.
The vectors of the reciprocal basis $(\mathbf{e}^i)$ satisfy the condition (1.19)
$$\mathbf{e}^i \cdot \mathbf{e}_j = \delta^i_j,$$
which constitutes a linear system of $n^2$ equations in the $n^2$ unknowns represented by the components of the vectors $\mathbf{e}^i$. For $n = 3$, the reciprocal vectors are obtained by noting that, since $\mathbf{e}^1$ is orthogonal to both $\mathbf{e}_2$ and $\mathbf{e}_3$, we can write
$$\mathbf{e}^1 = k\,(\mathbf{e}_2 \times \mathbf{e}_3).$$
On the other hand, we also have
$$\mathbf{e}^1 \cdot \mathbf{e}_1 = k\,(\mathbf{e}_2 \times \mathbf{e}_3) \cdot \mathbf{e}_1 = 1,$$
so that
$$\frac{1}{k} = \mathbf{e}_1 \cdot \mathbf{e}_2 \times \mathbf{e}_3.$$
Proceeding in the same way, we obtain
$$\mathbf{e}^1 = k\,\mathbf{e}_2 \times \mathbf{e}_3, \qquad \mathbf{e}^2 = k\,\mathbf{e}_3 \times \mathbf{e}_1, \qquad \mathbf{e}^3 = k\,\mathbf{e}_1 \times \mathbf{e}_2.$$
These relations also show that $k = 1$ and the reciprocal basis coincides with $(\mathbf{e}_i)$ when this basis is orthonormal.

5. Evaluate the eigenvalues and eigenvectors of the tensor $\mathbf{T}$ whose components in the orthonormal basis $(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3)$ are
$$\begin{pmatrix} 4 & 2 & -1 \\ 2 & 4 & 1 \\ -1 & 1 & 3 \end{pmatrix}.$$
The eigenvalue equation is
$$\mathbf{T}\mathbf{u} = \lambda\,\mathbf{u}.$$
This has nonvanishing solutions if and only if the eigenvalue $\lambda$ is a solution of the characteristic equation
$$\det(\mathbf{T} - \lambda\,\mathbf{I}) = 0. \tag{1.102}$$
In our case, the relation (1.102) requires that
$$\det\begin{pmatrix} 4 - \lambda & 2 & -1 \\ 2 & 4 - \lambda & 1 \\ -1 & 1 & 3 - \lambda \end{pmatrix} = 0,$$
corresponding to the following third-degree algebraic equation in the unknown $\lambda$:
$$-\lambda^3 + 11\lambda^2 - 34\lambda + 24 = 0,$$
whose solutions are
$$\lambda_1 = 1, \qquad \lambda_2 = 4, \qquad \lambda_3 = 6.$$
The components of the corresponding eigenvectors can be obtained by solving the following homogeneous system:
$$(T_{ij} - \lambda\,\delta_{ij})\,u^j = 0. \tag{1.103}$$
For $\lambda_1 = 1$, Eqs. (1.103) become
$$3u^1 + 2u^2 - u^3 = 0, \qquad 2u^1 + 3u^2 + u^3 = 0, \qquad -u^1 + u^2 + 2u^3 = 0.$$
Imposing the normalization condition
$$u^i u^i = 1$$
on these equations gives
$$u^1 = \frac{1}{\sqrt{3}}, \qquad u^2 = -\frac{1}{\sqrt{3}}, \qquad u^3 = \frac{1}{\sqrt{3}}.$$
Proceeding in the same way for $\lambda_2 = 4$ and $\lambda_3 = 6$, we obtain the components of the other two eigenvectors:
$$\left(-\frac{1}{\sqrt{6}}, \frac{1}{\sqrt{6}}, \frac{2}{\sqrt{6}}\right), \qquad \left(\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}, 0\right).$$
From the symmetry of $\mathbf{T}$, the three eigenvectors are orthogonal (verify).
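The computation in Exercise 5 can be verified numerically. A sketch assuming NumPy, taking the off-diagonal entries $T_{13} = T_{31} = -1$ (the sign placement consistent with the eigenvalues $1, 4, 6$ quoted in the exercise):

```python
import numpy as np

T = np.array([[ 4.0, 2.0, -1.0],
              [ 2.0, 4.0,  1.0],
              [-1.0, 1.0,  3.0]])

# symmetric tensor: eigh returns ascending eigenvalues and
# orthonormal eigenvectors (as columns)
evals, evecs = np.linalg.eigh(T)
```

The eigenvalues come out as $1, 4, 6$ and the eigenvector matrix is orthogonal, confirming the orthogonality remarked on above.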

6. Let $\mathbf{u}$ and $\lambda$ be the eigenvectors and eigenvalues of the tensor $\mathbf{T}$. Determine the eigenvectors and eigenvalues of $\mathbf{T}^{-1}$.
If $\mathbf{u}$ is an eigenvector of $\mathbf{T}$, then
$$\mathbf{T}\mathbf{u} = \lambda\,\mathbf{u}.$$
Multiplying by $\mathbf{T}^{-1}$, we obtain the condition
$$\mathbf{T}^{-1}\mathbf{u} = \frac{1}{\lambda}\,\mathbf{u},$$
which shows that $\mathbf{T}^{-1}$ and $\mathbf{T}$ have the same eigenvectors, while the eigenvalues of $\mathbf{T}^{-1}$ are the reciprocals of the eigenvalues of $\mathbf{T}$.
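Exercise 6's conclusion can be checked numerically (assuming NumPy; the symmetric invertible tensor below is an arbitrary sample, not from the text):

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])      # symmetric, det T = 8, hence invertible

lam, V = np.linalg.eigh(T)
lam_inv, V_inv = np.linalg.eigh(np.linalg.inv(T))

# the spectra of T and T^{-1} are reciprocal of each other
```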

7. Let
$$g_{11} = g_{22} = 1, \qquad g_{12} = 0$$
be the coefficients of the scalar product $\mathbf{g}$ with respect to the basis $(\mathbf{e}_1, \mathbf{e}_2)$; determine the components of $\mathbf{g}$ in the new basis defined by the transformation matrix (1.101) and check whether or not this new basis is orthogonal.

8. Given the basis $(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3)$ in which the coefficients of the scalar product are
$$g_{11} = g_{22} = g_{33} = 2, \qquad g_{12} = g_{13} = g_{23} = 1,$$
determine the reciprocal vectors and the scalar product of the two vectors $\mathbf{u} = (1, 0, 0)$ and $\mathbf{v} = (0, 1, 0)$.

9. Given the tensor $\mathbf{T}$ whose components in the orthogonal basis $(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3)$ are
$$\begin{pmatrix} 0 & 1 & 1 \end{pmatrix},$$
determine its eigenvalues and eigenvectors.

10. Evaluate the eigenvalues and eigenvectors of a tensor $\mathbf{T}$ whose components are expressed by the matrix (1.101), but use the basis $(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3)$ of Exercise 6.

11. Evaluate the component matrix of the most general symmetric tensor $\mathbf{T}$ in the basis $(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3)$, supposing that its eigenvectors $\mathbf{u}$ are directed along the vector $\mathbf{n} = (0, 1, 1)$ or are orthogonal to it, that is, satisfy the condition
$$\mathbf{n} \cdot \mathbf{u} = u^2 + u^3 = 0.$$
Hint: Take the basis formed by the eigenvectors $\mathbf{e}'_1 = (1, 0, 0)$, $\mathbf{e}'_2 = (0, 1, -1)$, and $\mathbf{n}$.



12. An axial vector $\mathbf{u}$ transforms according to the law
$$u'_i = \pm A^j_i\,u_j.$$
Verify that the vector product $\mathbf{u} \times \mathbf{v}$ is an axial vector when both $\mathbf{u}$ and $\mathbf{v}$ are polar and a polar vector when one of them is axial.

13. Let $E_n$ be a Euclidean n-dimensional vector space. If $\mathbf{u}$ is a unit vector of $E_n$, determine the linear map
$$\mathbf{P}: E_n \to E_n$$
such that $\mathbf{P}(\mathbf{v}) \cdot \mathbf{u} = 0$, $\forall \mathbf{v} \in E_n$. Determine the representative matrix $P$ of $\mathbf{P}$ in a basis $(\mathbf{e}_i)$ of $E_n$.
Hint: The component of $\mathbf{v}$ along $\mathbf{u}$ is $\mathbf{u} \cdot \mathbf{v}$, so that $\mathbf{P}(\mathbf{v}) = \mathbf{v} - (\mathbf{u} \cdot \mathbf{v})\,\mathbf{u} = (\mathbf{I} - \mathbf{u} \otimes \mathbf{u})\,\mathbf{v}$.

14. Let $\mathbf{S}$ and $\mathbf{T}$ be two endomorphisms of the vector space $E_n$. Let $W_\lambda$ be the eigenspace belonging to the eigenvalue $\lambda$ of $\mathbf{T}$. If $\mathbf{S}$ and $\mathbf{T}$ commute, i.e., if $\mathbf{S}\mathbf{T} = \mathbf{T}\mathbf{S}$, prove that $\mathbf{S}$ maps $W_\lambda$ into itself. Furthermore, $\mathbf{S}$ maps the kernel of $\mathbf{T}$ into itself.
Hint: If $\mathbf{T}(\mathbf{u}) = \mathbf{0}$, then $\mathbf{T}\mathbf{S}(\mathbf{u}) = \mathbf{S}\mathbf{T}(\mathbf{u}) = \mathbf{0}$, so that $\mathbf{S}$ maps $\ker \mathbf{T}$ into itself. Now, $W_\lambda$ is the kernel of $\mathbf{T} - \lambda\,\mathbf{I}$, and $\mathbf{S}$ commutes with this mapping since it commutes with $\mathbf{T}$. Recalling what has just been shown, $\mathbf{S}$ maps $W_\lambda$ into itself.

15. Let $\mathbf{S}$ and $\mathbf{T}$ be 2-tensors of the Euclidean vector space $E_n$. Suppose that both $\mathbf{S}$ and $\mathbf{T}$ admit n simple eigenvalues. Denote by $\lambda_1, \ldots, \lambda_n$ the eigenvalues of $\mathbf{S}$ and by $(\mathbf{u}_i)$ the corresponding basis of eigenvectors. Finally, we denote by $\mu_1, \ldots, \mu_n$ and $(\mathbf{v}_i)$ the eigenvalues and the basis of eigenvectors of $\mathbf{T}$. Prove that the bases $(\mathbf{u}_i)$ and $(\mathbf{v}_i)$ coincide if and only if $\mathbf{S}$ and $\mathbf{T}$ commute.
Hint: If $\mathbf{v}_i = \mathbf{u}_i$, $i = 1, \ldots, n$, we have that
$$(\mathbf{S}\mathbf{T} - \mathbf{T}\mathbf{S})\,\mathbf{u}_i = \mu_i\,\mathbf{S}\mathbf{u}_i - \lambda_i\,\mathbf{T}\mathbf{u}_i = (\mu_i\lambda_i - \lambda_i\mu_i)\,\mathbf{u}_i = \mathbf{0},$$
so that, since $(\mathbf{u}_i)$ is a basis,
$$\mathbf{S}\mathbf{T} - \mathbf{T}\mathbf{S} = \mathbf{0}.$$
Conversely, if $\mathbf{S}$ and $\mathbf{T}$ commute, we can write
$$\mathbf{T}\mathbf{S}\mathbf{v}_i = \mathbf{S}\mathbf{T}\mathbf{v}_i = \mu_i\,\mathbf{S}\mathbf{v}_i,$$
and $\mathbf{S}\mathbf{v}_i$ is an eigenvector of $\mathbf{T}$ belonging to the eigenvalue $\mu_i$. But $\mu_i$ is simple and the corresponding eigenspace is one-dimensional. Therefore, $\mathbf{S}\mathbf{v}_i = \lambda\,\mathbf{v}_i$, and we conclude that any eigenvector of $\mathbf{T}$ is also an eigenvector of $\mathbf{S}$. Repeating the above reasoning starting from $\mathbf{S}\mathbf{T}\mathbf{u}_i$, we prove that any eigenvector of $\mathbf{S}$ is also an eigenvector of $\mathbf{T}$.
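The equivalence of Exercise 15 can be illustrated numerically: build two symmetric tensors sharing a common orthonormal eigenvector basis with distinct simple eigenvalues, and check that they commute. A sketch assuming NumPy (the matrix seeding the orthogonal factor $Q$ is arbitrary):

```python
import numpy as np

# an orthonormal basis of eigenvectors, obtained from an arbitrary
# invertible matrix via the QR factorization
Q, _ = np.linalg.qr(np.array([[1.0, 2.0, 0.0],
                              [0.0, 1.0, 3.0],
                              [1.0, 0.0, 1.0]]))

# two symmetric tensors with the SAME eigenvectors (columns of Q)
# and distinct simple eigenvalues
S = Q @ np.diag([1.0, 2.0, 3.0]) @ Q.T
T = Q @ np.diag([5.0, 7.0, 11.0]) @ Q.T
```

Since the two spectral decompositions use the same $Q$, the products $\mathbf{S}\mathbf{T}$ and $\mathbf{T}\mathbf{S}$ agree; conversely, diagonalizing $\mathbf{S}$ also diagonalizes $\mathbf{T}$.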

16. Taking into account the results of Exercises 14 and 15, extend the result of Exercise 15 to the case in which both the 2-tensors $\mathbf{S}$ and $\mathbf{T}$ on the vector space $E_n$ are symmetric and still admit bases of eigenvectors, but their eigenvalues are not simple. In other words, prove that when the symmetric 2-tensors $\mathbf{T}$ and $\mathbf{S}$ have bases of eigenvectors, they commute if and only if they have at least a common basis of eigenvectors.
Hint: We have already proved in Exercise 15 that if $\mathbf{T}$ and $\mathbf{S}$ have a common basis of eigenvectors, then they commute. Denote by $W_i$ the vector subspace of eigenvectors belonging to the eigenvalue $\lambda_i$ of $\mathbf{T}$ and let $n_i$ be the dimension of $W_i$. If $\lambda_1, \ldots, \lambda_r$ are all the eigenvalues of $\mathbf{T}$, then $W_1 + \cdots + W_r = E_n$. Further, we have already shown that if $\mathbf{T}$ and $\mathbf{S}$ commute, then the restriction of $\mathbf{S}$ to $W_i$ is an endomorphism of $W_i$. Since this restriction is symmetric, it admits a basis of eigenvectors of $W_i$, which are also eigenvectors of $\mathbf{T}$ belonging to $\lambda_i$. These results hold for any subspace $W_i$, and the problem is solved.

17. Let $E_n$ be a Euclidean vector space and denote by $V_m$ an m-dimensional vector subspace of $E_n$. It is evident that the set
$$W = \{\mathbf{w} \in E_n : \mathbf{w} \cdot \mathbf{u} = 0,\ \forall \mathbf{u} \in V_m\}$$
is a vector subspace formed by all the vectors that are orthogonal to all the vectors of $V_m$. Prove that $E_n$ is the direct sum of $V_m$ and $W$.
Hint: It has to be proved that

• $V_m \cap W = \{\mathbf{0}\}$;
• any $\mathbf{u} \in E_n$ can be written as $\mathbf{u} = \mathbf{v} + \mathbf{w}$, where $\mathbf{v} \in V_m$ and $\mathbf{w} \in W$ are uniquely determined.

In fact, if $\mathbf{u} \in V_m \cap W$, then $\mathbf{u} \cdot \mathbf{u} = 0$, so that $\mathbf{u} = \mathbf{0}$. Further, denoting by $(\mathbf{e}_i)$, $i = 1, \ldots, m$, an orthonormal basis of $V_m$, for any $\mathbf{u} \in E_n$ we define the vectors
$$\mathbf{v} = (\mathbf{u} \cdot \mathbf{e}_i)\,\mathbf{e}_i, \qquad \mathbf{w} = \mathbf{u} - \mathbf{v}, \tag{1.106}$$
where the vector $\mathbf{w} \in W$ since $\mathbf{w} \cdot \mathbf{e}_i = \mathbf{u} \cdot \mathbf{e}_i - (\mathbf{u} \cdot \mathbf{e}_i) = 0$. If another decomposition of $\mathbf{u}$ exists, we have that
$$\mathbf{u} = \mathbf{v} + \mathbf{w} = \mathbf{v}' + \mathbf{w}',$$
that is,
$$\mathbf{w}' - \mathbf{w} = \mathbf{v} - \mathbf{v}'.$$
But $\mathbf{w}' - \mathbf{w} \in W$ and $\mathbf{v} - \mathbf{v}' \in V_m$. Since they are equal, they coincide with $\mathbf{0}$.



18. Prove that the map
$$\mathbf{P}: \mathbf{u} \in E_n \mapsto \mathbf{v} \in V_m,$$
where $\mathbf{v}$ is given by (1.106), is linear, symmetric, and satisfies the condition
$$\mathbf{P}^2 = \mathbf{P}.$$
The map $\mathbf{P}$ is called the projection map of $E_n$ on $V_m$.
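The projection map of Exercise 18 can be sketched numerically as $P = \sum_i \mathbf{e}_i \mathbf{e}_i^T$ for an orthonormal basis $(\mathbf{e}_i)$ of $V_m$ (assuming NumPy; the 2-dimensional subspace of $E_3$ below is an arbitrary example):

```python
import numpy as np

def projector(basis):
    """Matrix of the projection of E_n on V_m: P = sum_i e_i e_i^T,
    for an orthonormal basis (e_i) of V_m."""
    return sum(np.outer(e, e) for e in basis)

# orthonormal basis of a 2-dimensional subspace of E_3
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
P = projector([e1, e2])
```

Symmetry ($P = P^T$) and idempotency ($P^2 = P$) are immediate checks, and $P$ fixes every vector of $V_m$.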

1.14 The Program VectorSys

Aim of the Program VectorSys

The program VectorSys determines, for any system $\Sigma$ of applied vectors, an equivalent vector system $\Sigma'$ and, when the scalar invariant vanishes, its central axis. Moreover, it plots in space both the system $\Sigma$ and $\Sigma'$, as well as the central axis.

Description of the Problem and Relative Algorithm

Two systems $\Sigma = \{(A_i, \mathbf{v}_i)\}_{i=1,\ldots,n}$ and $\Sigma' = \{(B_j, \mathbf{w}_j)\}_{j=1,\ldots,m}$ of applied vectors are equivalent if they have the same resultants and moments with respect to any pole $O$. It can be proved that $\Sigma$ and $\Sigma'$ are equivalent if and only if
$$\mathbf{R} = \mathbf{R}', \qquad \mathbf{M}_P = \mathbf{M}'_P,$$
where $\mathbf{R}$, $\mathbf{R}'$ denote their respective resultants and $\mathbf{M}_P$, $\mathbf{M}'_P$ their moments with respect to a fixed pole $P$.
The scalar invariant of a system of applied vectors $\Sigma$ is the scalar product
$$I = \mathbf{R} \cdot \mathbf{M}_P,$$
which is independent of the pole $P$. If $\mathbf{R} \neq \mathbf{0}$, then the locus of points $Q$ satisfying the condition
$$\mathbf{M}_Q \times \mathbf{R} = \mathbf{0}$$



is a straight line parallel to $\mathbf{R}$, which is called the central axis of $\Sigma$. Its parametric equations are the components of the vector
$$\overrightarrow{PQ} = \frac{\mathbf{R} \times \mathbf{M}_P}{R^2} + t\,\mathbf{R},$$
where $P$ is any point of $\mathcal{E}_3$ and $R$ is the norm of $\mathbf{R}$.

It is possible to show that any system $\Sigma$ of applied vectors is equivalent to its resultant $\mathbf{R}$, applied at an arbitrary point $P$, and a torque equal to the moment $\mathbf{M}_P$ of $\Sigma$ with respect to $P$. Moreover, there are systems of applied vectors which are equivalent to either a vector or a torque (see [52, 64]).
More precisely, the following results hold:

1. If $I = 0$
(a) and $\mathbf{R} \neq \mathbf{0}$, then the system $\Sigma$ is equivalent to its resultant $\mathbf{R}$ applied at any point $A$ of the central axis;
(b) and $\mathbf{R} = \mathbf{0}$, then the system $\Sigma$ is equivalent to any torque having the moment $\mathbf{M}_P$ of $\Sigma$ with respect to $P$.
2. If $I \neq 0$, then the system is equivalent to its resultant $\mathbf{R}$ applied at any point $P$ and a torque with moment $\mathbf{M}_P$.

A system of applied vectors is equivalent to zero if for all $P \in \Re^3$ we have $\mathbf{R} = \mathbf{M}_P = \mathbf{0}$.
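The reduction that VectorSys performs can be sketched in Python/NumPy (the program itself is not reproduced here; the two-vector system below is a hypothetical example, chosen so that $I = 0$ with $\mathbf{R} \neq \mathbf{0}$, the case in which the central axis carries the resultant):

```python
import numpy as np

def reduce_system(points, vectors, pole):
    """Resultant R, moment M_P about the pole, and scalar invariant I = R . M_P."""
    A = np.asarray(points, float)
    V = np.asarray(vectors, float)
    P = np.asarray(pole, float)
    R = V.sum(axis=0)
    M = np.cross(A - P, V).sum(axis=0)   # M_P = sum_i (A_i - P) x v_i
    I = R @ M
    return R, M, I

def central_axis_point(R, M, pole):
    """A point of the central axis: Q = P + (R x M_P)/R^2 (valid for R != 0)."""
    return np.asarray(pole, float) + np.cross(R, M) / (R @ R)

# hypothetical system: two equal parallel forces applied at different points
points  = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
vectors = [[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]
pole    = [0.0, 0.0, 0.0]
R, M, I = reduce_system(points, vectors, pole)
Q = central_axis_point(R, M, pole)
```

For two equal parallel forces the invariant vanishes and the central axis passes midway between the application points, parallel to $\mathbf{R}$, as expected from case 1(a) above.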

Command Line of the Program VectorSys


Parameter List

Input Data

A = list of the points of application of the vectors of $\Sigma$;
V = list of the components of the vectors of $\Sigma$;
P = pole with respect to which the moment of $\Sigma$ is evaluated.


Equation (1.110) is a direct consequence of (1.109) and the transport formula
$$\mathbf{M}_O = \mathbf{M}_P + \overrightarrow{OP} \times \mathbf{R} \qquad \forall O, P \in \Re^3.$$
