San José State University

applet-magic.com
Thayer Watkins
Silicon Valley
& Tornado Alley
USA

The Computation of Functions
of Square Matrices

Theorem 1: Let G( ) be a polynomial, finite or infinite, given by a sequence of coefficients {gⱼ, j=0, 1, …}. Let X and C be n×n matrices, with C invertible. Then

G(CXC⁻¹) = CG(X)C⁻¹

Proof:

Note that

(CXC⁻¹)² = (CXC⁻¹)(CXC⁻¹) = CX(C⁻¹C)XC⁻¹
hence, since C⁻¹C = I,
(CXC⁻¹)² = CX²C⁻¹
and, by induction on the exponent,
(CXC⁻¹)ʲ = CXʲC⁻¹ for all j ≥ 0

Therefore, term by term,

Σⱼ₌₀ gⱼ(CXC⁻¹)ʲ = Σⱼ₌₀ gⱼ(CXʲC⁻¹)
= C[Σⱼ₌₀ gⱼXʲ]C⁻¹
which is the same as
G(CXC⁻¹) = CG(X)C⁻¹
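A minimal numerical check of Theorem 1, using numpy; the polynomial G(M) = 2I + 3M + M² and the random test matrices are arbitrary illustrative choices, not part of the original argument.

```python
import numpy as np

# Arbitrary illustrative polynomial: G(M) = 2I + 3M + M^2
def G(M):
    I = np.eye(M.shape[0])
    return 2 * I + 3 * M + M @ M

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))   # a generic random matrix, assumed invertible
C_inv = np.linalg.inv(C)

lhs = G(C @ X @ C_inv)            # G(CXC^-1)
rhs = C @ G(X) @ C_inv            # C G(X) C^-1
print(np.allclose(lhs, rhs))      # True
```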

A diagonal matrix D is one such that dᵢₖ=0 if i≠k. It is then determined by its diagonal elements, written D = Diagonal {dᵢᵢ}.

Theorem 2: Let D = Diagonal {dᵢᵢ} be a diagonal matrix and G a polynomial. Then G(D) is equal to Diagonal {G(dᵢᵢ)}.

Proof:

Note that D² is equal to Diagonal {dᵢᵢ²} and hence, by induction, Dʲ = Diagonal {dᵢᵢʲ}. Therefore

G(D) = Diagonal {Σⱼ₌₀ gⱼdᵢᵢʲ} = Diagonal {G(dᵢᵢ)}.
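A numerical sketch of Theorem 2, again with the arbitrary illustrative polynomial G(M) = 2I + 3M + M²: applying G to a diagonal matrix is the same as applying the scalar polynomial to each diagonal entry.

```python
import numpy as np

def G(M):
    """G applied to a square matrix: 2I + 3M + M^2 (arbitrary coefficients)."""
    I = np.eye(M.shape[0])
    return 2 * I + 3 * M + M @ M

def g_scalar(x):
    """The same polynomial applied to a scalar (or elementwise to an array)."""
    return 2 + 3 * x + x ** 2

d = np.array([1.0, -2.0, 0.5])    # the diagonal entries d_ii
D = np.diag(d)

print(np.allclose(G(D), np.diag(g_scalar(d))))  # True
```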

Theorem 3: Let X be an n×n matrix with n distinct eigenvalues {λⱼ, j=1, …, n}, let Λ = Diagonal {λⱼ}, and let C be the matrix whose columns are the corresponding eigenvectors, so that X = CΛC⁻¹. Then for a polynomial G

G(X) = G(CΛC⁻¹) = CG(Λ)C⁻¹ = C Diagonal {G(λⱼ)} C⁻¹

Proof:

This follows from applying Theorem 1 to X = CΛC⁻¹ and Theorem 2 to the diagonal matrix Λ.
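A numerical sketch of Theorem 3: compute G(X) directly and via the eigendecomposition X = CΛC⁻¹. The 2×2 test matrix and the polynomial G(M) = 2I + 3M + M² are arbitrary illustrative choices; the matrix is chosen to have distinct eigenvalues so that C is invertible.

```python
import numpy as np

def G(M):
    I = np.eye(M.shape[0])
    return 2 * I + 3 * M + M @ M

def g_scalar(x):
    return 2 + 3 * x + x ** 2

X = np.array([[2.0, 1.0],
              [0.0, 3.0]])        # eigenvalues 2 and 3 (distinct)
lam, C = np.linalg.eig(X)         # columns of C are the eigenvectors

# C Diagonal{G(lambda_j)} C^-1
via_eigs = C @ np.diag(g_scalar(lam)) @ np.linalg.inv(C)
print(np.allclose(via_eigs, G(X)))   # True
```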

All of the above, of course, also applies to infinite polynomials (power series) such as Exp( ), whose series Σⱼ₌₀ Xʲ/j! converges for every square matrix X.
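For instance, Exp(X) can be computed as C Diagonal {exp(λⱼ)} C⁻¹ and compared against a truncated Taylor series Σⱼ Xʲ/j!. The test matrix below is an arbitrary illustrative choice with distinct (complex) eigenvalues ±i; the tiny imaginary round-off from the complex eigendecomposition is discarded.

```python
import numpy as np

X = np.array([[0.0, 1.0],
              [-1.0, 0.0]])       # eigenvalues +i and -i (distinct)
lam, C = np.linalg.eig(X)

# Exp(X) via Theorem 3: C Diagonal{e^{lambda_j}} C^-1
via_eigs = (C @ np.diag(np.exp(lam)) @ np.linalg.inv(C)).real

# Truncated Taylor series: sum of X^j / j! for j = 0..20
term = np.eye(2)
series = term.copy()
for j in range(1, 21):
    term = term @ X / j
    series += term

print(np.allclose(via_eigs, series))   # True
```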

(To be continued.)

