# Reading Notes – Linear Algebra Done Right


## Ch1 Vector Spaces

### 1.1 Complex Numbers

Definition and basic properties of complex numbers.

### 1.2 Definition of Vector Space

Define $F^n$ to be the set of all lists of length $n$ consisting of elements of $F$:$$F^n = \{(x_1,…,x_n):x_j\in F\; \text{for}\; j = 1,…,n\}$$

Scalar multiplication is defined componentwise. The same definitions work for $F^\infty$, the set of all infinite sequences of elements of $F$.

Polynomials also form a vector space, with addition $(p+q)(z) = p(z) + q(z)$ and scalar multiplication $(ap)(z) = a\,p(z)$.

### 1.3 Properties of Vector Spaces

A vector space has:

• a unique additive identity ($v+0=v$ for all $v$)
• for each $v$, a unique additive inverse $w$ ($v+w=0$)
• $0v=0$
• commutativity ($u+v=v+u$)
• associativity ($a(bu) = (ab)u$)
• distributive property ($a(u+v) = au+av$)

### 1.4 Subspaces

A subset $U$ of $V$ is called a subspace of $V$ if $U$ is also a vector space (using the same addition and scalar multiplication as on $V$). So $U$ is a subspace of $V$ if it meets the following:

• $U$ is a subset of $V$
• for $U$: additive identity, closed under addition, closed under scalar multiplication

The empty set is not a subspace of $V$.

### 1.5 Sums

Suppose $U_1,…,U_m$ are subspaces of $V$, the sum of $U_1,…,U_m$ denoted $U_1+…+U_m$ is defined to be the set of all possible sums of elements of $U_1,…,U_m$.$$U_1+…+U_m = \{u_1+…+u_m:u_1\in U_1,…,u_m\in U_m\}$$

The sum of subspaces $\ne$ the union of subspaces: if $U_1,…,U_m$ are subspaces of $V$, then the sum $U_1+…+U_m$ is a subspace of $V$. However, the union of two subspaces of $V$ is a subspace of $V$ if and only if one of the subspaces is contained in the other.

For example: $U = \{(x,0,0)\in F^3:x\in F\}$, $V = \{(0,y,0)\in F^3:y\in F\}$, and $W = \{(y,y,0)\in F^3:y\in F\}$.

Then$$U+V = \{(x,y,0)\in F^3:x,y\in F\}$$ $$U+W = \{(x,y,0)\in F^3:x,y\in F\}$$
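This example can be sketched numerically (my own illustration using numpy; the specific vectors are arbitrary choices): any vector of the xy-plane decomposes as a sum of an element of $U$ and an element of $W$.

```python
import numpy as np

# U = {(x,0,0)} is spanned by u0; W = {(y,y,0)} is spanned by w0.
u0 = np.array([1.0, 0.0, 0.0])
w0 = np.array([1.0, 1.0, 0.0])

# Any (x, y, 0) decomposes as (x - y)*u0 + y*w0, so U + W is the whole xy-plane.
target = np.array([3.0, 5.0, 0.0])
x, y = target[0], target[1]
recon = (x - y) * u0 + y * w0
print(np.allclose(recon, target))  # True: (3,5,0) = -2*u0 + 5*w0
```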

### 1.6 Direct Sums

$V$ is the **direct sum** of subspaces $U_1,…,U_m$, written $V=U_1\oplus…\oplus U_m$, if each element of $V$ can be written uniquely as a sum $u_1+…+u_m$, where each $u_j\in U_j$. (In my view, this condition is roughly analogous to linear independence of vectors.)

And to decide whether a sum is a direct sum, we only need to consider whether $0$ can be written uniquely as an appropriate sum. (To prove this, take two representations of the same $v$ and look at their difference.)

## Ch2 Finite-Dimensional Vector Spaces

### 2.1 Span

The set of all linear combinations of $(v_1,…,v_m)$ is called the span of $(v_1,…,v_m)$.

If $span(v_1,…,v_m)$ equals $V$, we say that $(v_1,…,v_m)$ spans $V$.

A vector space is finite dimensional if some list of vectors in it spans the space.

### 2.2 Linear Independence

A list $(v_1 ,…, v_m )$ of vectors in $V$ is called linearly independent if the only choice of $a_1,…,a_m \in F$ that makes $a_1v_1 +…+a_mv_m=0$ is $a_1 =…= a_m =0$.

The length of every linearly independent list of vectors $\le$ the length of every spanning list of vectors.

### 2.3 Bases

A basis of $V$ is a list of vectors in $V$ that is linearly independent and spans $V$. $((1,0,…,0),(0,1,0,…,0),…,(0,…,0,1))$ is the **standard basis** of $F^n$.

Every spanning list in a vector space can be reduced to a basis of the vector space. Every linearly independent list of vectors in a finite-dimensional vector space can be extended to a basis of the vector space.
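The "reduce a spanning list to a basis" procedure can be sketched with a greedy rank check (my own sketch using numpy; `reduce_to_basis` and the example vectors are hypothetical):

```python
import numpy as np

def reduce_to_basis(vectors):
    """Greedy sketch: keep a vector only if it increases the rank,
    i.e. if it is not in the span of the vectors kept so far."""
    kept = []
    for v in vectors:
        candidate = kept + [v]
        if np.linalg.matrix_rank(np.array(candidate)) == len(candidate):
            kept.append(v)
    return kept

# A spanning list of R^2 with redundancy: (1,0), (2,0), (0,1)
spanning = [np.array([1.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 1.0])]
basis = reduce_to_basis(spanning)
print(len(basis))  # 2: (2,0) was discarded since it lies in span((1,0))
```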

### 2.4 Dimension

All bases of a finite-dimensional vector space have the same length, which is the dimension of the space.

If $V$ is finite dimensional and $U$ is a subspace of $V$, then dim $U$ ≤ dim $V$.

## Ch3 Linear Maps

### 3.1 Definitions and Examples

Linear map (linear transformation): a function $T:V\rightarrow W$ with additivity ($T(u+v)=Tu+Tv$) and homogeneity ($T(av)=a(Tv)$).

The set of all linear maps from $V$ to $W$ is denoted $\mathcal{L}(V,W)$

These are all linear maps: the zero map ($0v=0$), the identity ($Iv = v$), differentiation ($Tp = p'$), integration ($Tp = \int p(x)dx$).

### 3.2 Null Space

For $T\in \mathcal{L}(V,W)$, the null space (or kernel) of $T$, denoted null $T$, is the subset of $V$ consisting of those vectors that $T$ maps to 0:$$\text{null\ }T =\{v \in V :Tv =0\}$$

For example, in the differentiation example $Tf = f'$, the null space of $T$ is the set of constant functions, because the derivative of a constant function equals zero.

If $T\in \mathcal{L}(V,W)$, then null $T$ is a subspace of $V$. (To prove it, just prove null $T$ contains 0 and is closed under addition and scalar multiplication, then null $T$ is a subspace of $V$.)

A linear map $T: V → W$ is called injective if whenever $u,v ∈ V$ and $Tu = Tv$, we have $u = v$. $T$ is injective if and only if null $T = \{0\}$. (If a nonzero $v$ were in null $T$, then $Tv = T0 = 0$ would contradict injectivity.)

### 3.3 Range

For $T\in \mathcal{L}(V,W)$, the range (or image) of $T$, denoted range $T$, is the subset of $W$ consisting of the vectors of the form $Tv$:$$\text{range }T = \{Tv:v\in V\}$$

A linear map $T:V\rightarrow W$ is called surjective (or onto) if its range equals $W$.

The dimension of the null space plus the dimension of the range of a linear map on a finite-dimensional vector space equals the dimension of the domain:

If $V$ is finite dimensional and $T\in \mathcal{L}(V,W)$, then range $T$ is a subspace of $W$ and$$\text{dim }V = \text{dim null }T + \text{dim range }T$$

To prove it, take a basis of null $T$, extend it to a basis of $V$ (a maximal linearly independent list), and check that applying $T$ to the added vectors gives a basis of range $T$.
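The theorem can be checked numerically (my own sketch with numpy; the matrix `A` is an arbitrary choice representing some $T: R^3 \to R^2$):

```python
import numpy as np

# A hypothetical linear map T: R^3 -> R^2, represented by a 2x3 matrix.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])

dim_range = np.linalg.matrix_rank(A)

# dim null T: right singular vectors with (near-)zero singular value span null T.
_, s, Vh = np.linalg.svd(A)
s_full = np.concatenate([s, np.zeros(A.shape[1] - len(s))])  # pad to n values
dim_null = int(np.sum(s_full < 1e-10))

print(dim_range + dim_null == A.shape[1])  # True: 2 + 1 == 3 == dim V
```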

And as a result of the theorem above, if dim $V$ > dim $W$, then no linear map from $V$ to $W$ is injective. And if dim $V$ < dim $W$, then no linear map from $V$ to $W$ is surjective.

These two corollaries have important consequences in the theory of linear equations: a homogeneous system of linear equations with more variables than equations must have nonzero solutions, and an inhomogeneous system of linear equations with more equations than variables has no solution for some choice of the constant terms.

### 3.4 The Matrix of a Linear Map

This is the most important part of linear algebra: the core purpose of a matrix is to represent a linear map, and thus a system of linear equations.
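As a sketch of what "a matrix represents a linear map" means (my own numpy illustration; the map `T` is a hypothetical example): the $k$-th column of the matrix is the image of the $k$-th basis vector.

```python
import numpy as np

# A hypothetical linear map T: R^2 -> R^2, say T(x, y) = (x + 2y, 3x).
def T(v):
    x, y = v
    return np.array([x + 2 * y, 3 * x])

# Columns of M(T) are the images of the standard basis vectors.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
M = np.column_stack([T(e1), T(e2)])

v = np.array([4.0, 5.0])
print(np.allclose(M @ v, T(v)))  # True: the matrix reproduces the map
```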

### 3.5 Invertibility

A linear map $T \in \mathcal{L}(V,W)$ is called invertible if there exists $S \in \mathcal{L}(W,V)$ such that $ST$ and $TS$ are the identity maps; the inverse is denoted $T^{-1}$.

Two vector spaces are called isomorphic if there is an invertible linear map from one vector space onto the other one. Two finite-dimensional vector spaces are isomorphic if and only if they have the same dimension.

Every finite-dimensional vector space is isomorphic to some $F^n$ . Specifically, if $V$ is a finite-dimensional vector space and dim $V = n$, then $V$ and $F^n$ are isomorphic.

If $V$ and $W$ are finite dimensional, the dimension of the vector space of linear maps from $V$ to $W$ = dim$(V)$ · dim$(W)$

A linear map from a vector space to itself is called an operator. The deepest and most important parts of linear algebra, as well as most of the rest of this book, deal with operators.

Suppose $V$ is finite dimensional. If $T\in \mathcal{L}(V)$, then the following are equivalent:

• $T$ is invertible;
• $T$ is injective;
• $T$ is surjective.

## Ch4 Polynomials

This chapter is not about linear algebra; it provides background on polynomials.

### 4.1 Degree

A number $\lambda \in F$ is called a root of a polynomial $p \in \mathcal{P}(F)$ if$$p(\lambda) = 0$$

The other corollaries are simple, so I skip them.

### 4.2 Complex Coefficients

Every non-constant polynomial with complex coefficients has a root, and hence a unique factorization into linear factors.

### 4.3 Real Coefficients

For a complex number $\lambda$: if $\lambda$ is a root of a polynomial with real coefficients, then so is $\bar{\lambda}$

## Ch5 Eigenvalues and Eigenvectors

### 5.1 Invariant Subspaces

For $T ∈ L(V)$ and $U$ a subspace of $V$, we say that $U$ is invariant under $T$ if $u ∈ U$ implies $Tu ∈ U$. In other words, $U$ is invariant under $T$ if $T|_U$ is an operator on $U$.

The equation$$Tu=\lambda u$$

is intimately connected with one-dimensional invariant subspaces: $\lambda$ is called an eigenvalue of $T$ if the equation has a nonzero solution $u$, and such a $u$ is an eigenvector corresponding to $\lambda$.

Let $T ∈ L(V)$. Suppose $λ_1 , . . . , λ_m$ are distinct eigenvalues of $T$ and $v_1, . . . , v_m$ are corresponding nonzero eigenvectors. Then $(v_1, . . . , v_m)$ is linearly independent.

An operator cannot have more distinct eigenvalues than the dimension of the vector space on which it acts.

### 5.2 Polynomials Applied to Operators

This section just transfers the notion of applying a polynomial to an operator: $p(T) = a_0I + a_1T + \dots + a_mT^m$ for $p(z) = a_0 + a_1z + \dots + a_mz^m$.

### 5.3 Upper-Triangular Matrices

Every operator on a finite-dimensional, nonzero, complex vector space has an eigenvalue.

Suppose $T ∈ L(V)$ and $(v_1,…,v_n)$ is a basis of $V$ . Then the following are equivalent:

• the matrix of $T$ with respect to $(v_1, . . . , v_n)$ is upper triangular;
• $Tv_k ∈$ span$(v_1,…,v_k)$ for each $k = 1,…,n$;
• span$(v_1,…,v_k)$ is invariant under $T$ for each $k = 1,…,n$.

Suppose $T ∈ L(V)$ has an upper-triangular matrix with respect to some basis of $V$. Then $T$ is invertible if and only if all the entries on the diagonal of that upper-triangular matrix are nonzero.

### 5.4 Diagonal Matrices

An operator $T ∈ L(V)$ has a diagonal matrix with respect to some basis of $V$ if and only if V has a basis consisting of eigenvectors of $T$ .

If $T ∈ L(V )$ has dim $V$ distinct eigenvalues, then $T$ has a diagonal matrix with respect to some basis of $V$ .

Suppose $T ∈ L(V )$. Let $λ_1 , . . . , λ_m$ denote the distinct eigenvalues of $T$ . Then the following are equivalent:

• $T$ has a diagonal matrix with respect to some basis of $V$ ;
• $V$ has a basis consisting of eigenvectors of $T$ ;
• there exist one-dimensional subspaces $U_1 , . . . , U_n$ of $V$ , each invariant under $T$ , such that $V = U_1 ⊕$ ··· $⊕ U_n$;
• $V$ = null$(T −λ_1I)⊕$ ··· $⊕$null$(T −λ_mI)$;
• dim$V$ =dim null$(T −λ_1I)$+ ··· +dim null$(T −λ_mI)$.
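The statement that dim $V$ distinct eigenvalues force a diagonal matrix can be illustrated numerically (my own numpy sketch; `A` is an arbitrary operator on a 2-dimensional space with two distinct eigenvalues):

```python
import numpy as np

# A hypothetical operator with 2 = dim V distinct eigenvalues, so it
# has a diagonal matrix with respect to the basis of eigenvectors.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)       # columns of eigvecs are eigenvectors
D = np.linalg.inv(eigvecs) @ A @ eigvecs  # change of basis to the eigenbasis
print(np.allclose(D, np.diag(eigvals)))   # True: D is diagonal
```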

### 5.5 Invariant Subspaces on Real Vector Spaces

Every operator on a finite-dimensional, nonzero, real vector space has an invariant subspace of dimension 1 or 2. (And I just cannot understand the proof 5.24)

Suppose $U$ and $W$ are subspaces of $V$ with $V=U⊕W$. Each vector $v ∈ V$ can be written uniquely in the form $v = u + w$, where $u ∈ U$ and $w ∈ W$ . With this representation, define $P_{U ,W} \in \mathcal{L}(V)$ by $P_{U ,W} v = u$. $P_{U ,W}$ is often called the projection onto $U$ with null space $W$.

Every operator on an odd-dimensional real vector space has an eigenvalue. (And I cannot understand the proof 5.26)

## Ch6 Inner-Product Spaces

### 6.1 Inner Products

The length of a vector $x$ is called the norm of $x$, denoted $\|x\|$.

The dot product of $x$ and $y$, denoted $x\cdot y$, is defined by$$x\cdot y = x_1y_1+x_2y_2+…+x_ny_n$$

Note that the dot product of two vectors in $R^n$ is a number, not a vector.

An inner product is a generalization of the dot product.

The inner product of $w = (w_1,…,w_n) ∈ C^n$ with $z = (z_1,…,z_n) ∈ C^n$ should equal$$w_1\overline{z_1} +…+w_n\overline{z_n}.$$

An inner product on $V$ is a function that takes each ordered pair $(u, v)$ of elements of $V$ to a number $⟨u, v⟩ ∈ F$ and has the following properties:

• positivity: $⟨v, v⟩ \ge 0$ for all $v\in V$
• definiteness: $\langle v,v\rangle = 0$ if and only if $v=0$
• additivity in first slot: $⟨u+v, w⟩ = ⟨u, w⟩+⟨v, w⟩$ for all $u,v,w\in V$
• homogeneity in first slot: $⟨av, w⟩ = a⟨v, w⟩$ for all $a\in F$ and $v,w\in V$
• conjugate symmetry: $⟨v, w⟩ = \overline{⟨w, v⟩}$ for all $v,w\in V$

An inner-product space is a vector space V along with an inner product on V .

We can define an inner product on $F^n$ by$$⟨(w_1,…,w_n), (z_1,…,z_n)⟩ = w_1\overline{z_1}+…+w_n\overline{z_n}$$

This inner product, which provided our motivation for the definition of an inner product, is called the Euclidean inner product on $F^n$.

Furthermore, there are other inner products on $F^n$ in addition to the Euclidean inner product. For example, if $c_1,…,c_n$ are positive numbers, then we can define an inner product on $F^n$ by$$⟨(w_1,…,w_n), (z_1,…,z_n)⟩ = c_1w_1\overline{z_1}+…+c_nw_n\overline{z_n}$$

In an inner-product space, we have additivity and conjugate homogeneity in the second slot: $⟨u, v+w⟩ = ⟨u, v⟩+⟨u, w⟩$ and $⟨u, av⟩ = \overline{a}⟨u, v⟩$

### 6.2 Norms

For $v\in V$, we define the norm of $v$, denoted $\|v\|$, by$$\|v\| = \sqrt{⟨v, v⟩}$$

(After reading this, I finally realized that different norms can originate from different inner products! Note, though, that not every norm arises from an inner product; e.g. the $l^1$ norm fails the Parallelogram Equality.)

Two vectors $u, v ∈ V$ are said to be orthogonal if $⟨u, v⟩ = 0$.

Pythagorean Theorem: If $u, v$ are orthogonal vectors in $V$ , then $\|u+v\|^2 = \|u\|^2+ \|v\|^2$.

Cauchy-Schwarz Inequality: If $u, v ∈ V$ , then $|⟨u, v⟩| \le \|u\|\|v\|$. (Use the orthogonal decomposition $u = \dfrac{⟨u, v⟩}{\|v\|^2}v + w$ to prove that.)

Triangle Inequality: If $u, v ∈ V$ , then $\|u+v\| \le \|u\|+\|v\|$.

Parallelogram Equality: If $u, v ∈ V$ , then $\|u+v\|^2 + \|u-v\|^2 = 2(\|u\|^2+\|v\|^2)$.
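These three facts are easy to sanity-check numerically (my own numpy sketch with random vectors and the Euclidean inner product):

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.normal(size=4), rng.normal(size=4)

norm = np.linalg.norm
assert abs(u @ v) <= norm(u) * norm(v) + 1e-12       # Cauchy-Schwarz
assert norm(u + v) <= norm(u) + norm(v) + 1e-12      # Triangle Inequality
lhs = norm(u + v) ** 2 + norm(u - v) ** 2            # Parallelogram Equality
assert np.isclose(lhs, 2 * (norm(u) ** 2 + norm(v) ** 2))
print("all three hold")
```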

### 6.3 Orthonormal Bases

If $(e_1, . . . , e_m)$ is an orthonormal list of vectors in $V$, then for all $a_1, . . . , a_m ∈ F$,$$\|a_1e_1 + ··· + a_me_m\|^2 = |a_1|^2 + ··· + |a_m|^2.$$

Every orthonormal list of vectors is linearly independent.

Every orthonormal list of vectors in $V$ with length dim $V$ is automatically an orthonormal basis of $V$ (proof: by the previous corollary, any such list must be linearly independent; because it has the right length, it must be a basis)

The usefulness of orthonormal bases: it is easy to find the scalars such that $v = a_1e_1+…+a_ne_n$; indeed $a_j = ⟨v, e_j⟩$.

Gram-Schmidt procedure: turn a linearly independent list into an orthonormal list with the same span as the original list. (I learned this for the postgraduate entrance examination, so I skip the details.)
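For the record, a minimal sketch of the procedure (my own numpy version; it assumes the input list is linearly independent, otherwise a division by a zero norm occurs):

```python
import numpy as np

def gram_schmidt(vectors):
    """Gram-Schmidt sketch: subtract from each vector its projection onto the
    previously produced orthonormal vectors, then normalize the remainder."""
    basis = []
    for v in vectors:
        w = v - sum((v @ e) * e for e in basis)
        basis.append(w / np.linalg.norm(w))  # assumes independence: w != 0
    return basis

vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]
e1, e2 = gram_schmidt(vs)
print(np.isclose(e1 @ e2, 0.0), np.isclose(np.linalg.norm(e1), 1.0))  # True True
```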

Every finite-dimensional inner-product space has an orthonormal basis.

Every orthonormal list of vectors in V can be extended to an orthonormal basis of $V$ .

Suppose $T ∈ L(V)$. If $T$ has an upper-triangular matrix with respect to some basis of $V$ , then $T$ has an upper-triangular matrix with respect to some orthonormal basis of $V$ .

### 6.4 Orthogonal Projections and Minimization Problems

If $U$ is a subset of $V$, then the orthogonal complement of $U$, denoted $U^⊥$, is the set of all vectors in $V$ that are orthogonal to every vector in $U$:$$U^⊥ = \lbrace v\in V: ⟨u, v⟩=0 \text{ for all }u\in U\rbrace$$

You should verify that $U^⊥$ is always a subspace of $V$, that $V^⊥ = \lbrace 0\rbrace$, and that $\lbrace 0\rbrace^⊥ = V$. Also note that if $U_1⊂U_2$, then $U_1^⊥ ⊃ U_2^⊥$.

If $U$ is a subspace of $V$ , then $V=U⊕U^⊥$. (use an orthonormal basis to prove $V=U+U^⊥$, and $v\in U \cap U^⊥$ means $v=0$ so $U \cap U^⊥ = \lbrace 0\rbrace$)

Suppose $U$ is a subspace of $V$ . The decomposition $V = U ⊕ U^⊥$ means that each vector $v ∈ V$ can be written uniquely in the form $v = u + w$, where $u\in U$ and $w\in U^⊥$. We use this decomposition to define an operator on $V$, denoted $P_U$, called the orthogonal projection of $V$ onto $U$. For $v\in V$, we define $P_Uv$ to be the vector $u$ in the decomposition above. $P_U$ has the following properties:

• range $P_U = U$
• null $P_U = U^⊥$
• $v-P_Uv\in U^⊥$ for every $v\in V$
• ${P_U}^2 = P_U$
• $\|P_Uv\|\le\|v\|$ for every $v\in V$

Furthermore, we can figure out $P_Uv$ using$$P_Uv = ⟨v, e_1⟩e_1+…+⟨v, e_m⟩e_m.$$
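A small numeric check of this formula (my own numpy sketch; $U$ is taken to be the xy-plane of $R^3$ with its standard orthonormal basis):

```python
import numpy as np

# U = span of an orthonormal list (e1, e2) in R^3 (here the xy-plane).
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

v = np.array([3.0, 4.0, 5.0])
Pv = (v @ e1) * e1 + (v @ e2) * e2   # P_U v = <v,e1>e1 + <v,e2>e2

print(np.allclose(Pv, [3.0, 4.0, 0.0]))  # True
print(np.isclose((v - Pv) @ e1, 0.0))    # True: v - P_U v is orthogonal to U
```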

Suppose $U$ is a subspace of $V$ and $v ∈ V$. Then $\|v − P_U v\| ≤ \|v − u\|$ for every $u \in U$; that is, $P_Uv$ is the point of $U$ closest to $v$. (Just picture the foot of the perpendicular from $v$ to $U$.)

On p. 114 there is an interesting example: using the Gram-Schmidt procedure and the projection $P_Uv$ to find the polynomial $u$ that makes $\|v −u\|$ as small as possible, i.e. fitting a function by polynomials (compare with its Taylor polynomials). Through that I finally understood why linear algebra is "algebra"!

Conclusion: for $v ∈ V$ and $U$ a subspace of $V$ , the procedure discussed above for finding the vector $u ∈ U$ that makes $\|v −u\|$ as small as possible works if $U$ is finite dimensional, regardless of whether or not $V$ is finite dimensional.

### 6.5 Linear Functionals and Adjoints

A linear functional on $V$ is a linear map from $V$ to the scalars $F$.

Suppose $\varphi$ is a linear functional on $V$. Then there is a unique vector $v\in V$ such that $\varphi(u) = ⟨u, v⟩$ for every $u\in V$. (Use an orthonormal basis to prove this.)

Let $T\in \mathcal{L}(V,W)$. The adjoint of $T$, denoted $T^*$, is the function from $W$ to $V$ defined as follows: for each $w\in W$, $T^*w$ is the unique vector in $V$ such that, for all $v\in V$,$$⟨Tv, w⟩ = ⟨ v,T^*w⟩$$

So if $T\in \mathcal{L}(V,W)$, then $T^*\in \mathcal{L}(W,V)$

The function $T\mapsto T^*$ has the following properties:

• additivity: $(S+T)^* = S^*+T^*$ for all $S,T\in\mathcal{L}(V,W)$
• conjugate homogeneity: $(aT)^* = \overline{a}T^*$ for all $a\in F$ and $T\in\mathcal{L}(V,W)$
• adjoint of adjoint: $(T^*)^* = T$ for all $T\in\mathcal{L}(V,W)$
• identity: $I^*=I$, where $I$ is the identity operator on $V$
• products: $(ST)^* = T^*S^*$ for all $T\in\mathcal{L}(V,W)$ and $S\in\mathcal{L}(W,U)$

The relationship between the null space and the range of a linear map and its adjoint:

• $\text{null }T^* = (\text{range }T)^⊥$
• $\text{range }T^* = (\text{null }T)^⊥$
• $\text{null }T = (\text{range }T^*)^⊥$
• $\text{range }T = (\text{null }T^*)^⊥$

The conjugate transpose of an m-by-n matrix is the n-by-m matrix obtained by interchanging the rows and columns and then taking the complex conjugate of each entry.

Suppose $T ∈ \mathcal{L}(V,W)$. If $(e_1,…,e_n)$ is an orthonormal basis of $V$ and $(f_1, . . . , f_m)$ is an orthonormal basis of $W$ , then$$\mathcal{M}(T^* ,(f_1,…,f_m),(e_1,…,e_n))$$

is the conjugate transpose of$$\mathcal{M}(T ,(e_1,…,e_n),(f_1,…,f_m))$$

The adjoint of a linear map does not depend on a choice of basis. This explains why we will emphasize adjoints of linear maps instead of conjugate transposes of matrices.
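The conjugate-transpose relationship can be verified numerically (my own numpy sketch; `A` is an arbitrary complex matrix standing for $\mathcal{M}(T)$ with respect to orthonormal bases, and the Euclidean inner product is used):

```python
import numpy as np

rng = np.random.default_rng(1)
# A hypothetical complex matrix of T with respect to orthonormal bases.
A = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))
A_star = A.conj().T  # matrix of T*: the conjugate transpose

v = rng.normal(size=2) + 1j * rng.normal(size=2)
w = rng.normal(size=3) + 1j * rng.normal(size=3)

# <Tv, w> == <v, T*w>, with the inner product <x, y> = sum x_j * conj(y_j)
lhs = (A @ v) @ w.conj()
rhs = v @ (A_star @ w).conj()
print(np.isclose(lhs, rhs))  # True
```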

## Ch7 Operators on Inner-Product Spaces

### 7.1 Self-Adjoint Operators

An operator $T ∈ \mathcal{L}(V)$ is called self-adjoint if $T = T^*$. The sum of two self-adjoint operators is self-adjoint, and the product of a real scalar and a self-adjoint operator is self-adjoint.

The adjoint on $\mathcal{L}(V)$ plays a role similar to complex conjugation on $C$.

Every eigenvalue of a self-adjoint operator is real.

If $V$ is a complex inner-product space and $T$ is an operator on $V$ such that $⟨T v , v ⟩ = 0$ for all $v \in V$, then $T=0$.

Let $V$ be a complex inner-product space and let $T ∈ \mathcal{L}(V)$. Then $T$ is self-adjoint if and only if $⟨T v , v ⟩ ∈ R$ for every $v \in V$. This corollary provides another example of how self-adjoint operators behave like real numbers.

If $T$ is a self-adjoint operator on $V$ such that $⟨T v , v ⟩ = 0$ for all $v ∈ V$, then $T = 0$.

### 7.2 Normal Operators

An operator on an inner-product space is called normal if it commutes with its adjoint; in other words, $T ∈ \mathcal{L}(V )$ is normal if $TT^∗ =T^∗T$.

An operator $T ∈ \mathcal{L}(V )$ is normal if and only if $\|Tv\| = \|T^∗v\|$ for all $v \in V$

Suppose $T ∈ \mathcal{L}(V )$ is normal. If $v ∈ V$ is an eigenvector of $T$ with eigenvalue $λ ∈ F$, then $v$ is also an eigenvector of $T^∗$ with eigenvalue $\overline{\lambda}$

If $T ∈ \mathcal{L}(V )$ is normal, then eigenvectors of $T$ corresponding to distinct eigenvalues are orthogonal.

### 7.3 The Spectral Theorem

Complex Spectral Theorem: Suppose that $V$ is a complex inner-product space and $T ∈ \mathcal{L}(V)$. Then $V$ has an orthonormal basis consisting of eigenvectors of $T$ if and only if $T$ is normal.

Suppose $T ∈ \mathcal{L}(V )$ is self-adjoint. If $α, β ∈ R$ are such that $α^2 < 4β$, then $T^2 + αT + βI$ is invertible. (For every real number $x$, $x^2 + \alpha x + \beta > 0$)

Suppose $T ∈ \mathcal{L}(V )$ is self-adjoint. Then $T$ has an eigenvalue.

Real Spectral Theorem: Suppose that $V$ is a real inner-product space and $T ∈ \mathcal{L}(V )$. Then $V$ has an orthonormal basis consisting of eigenvectors of $T$ if and only if $T$ is self-adjoint. (every self-adjoint operator on $V$ has a diagonal matrix with respect to some orthonormal basis.)

Suppose that $T ∈ \mathcal{L}(V )$ is self-adjoint (or that $F = C$ and that $T ∈ \mathcal{L}(V )$ is normal). Let $λ_1 , . . . , λ_m$ denote the distinct eigenvalues of $T$. Then $V = \text{null}(T −λ_1I)⊕···⊕\text{null}(T −λ_mI)$.
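The Real Spectral Theorem can be illustrated with numpy's `eigh`, which returns an orthonormal eigenbasis for a symmetric (self-adjoint) matrix (my own sketch; `A` is an arbitrary symmetric matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(4, 4))
A = B + B.T                      # symmetric = self-adjoint on R^4

eigvals, Q = np.linalg.eigh(A)   # columns of Q: orthonormal eigenvectors
print(np.allclose(Q.T @ Q, np.eye(4)))              # True: columns orthonormal
print(np.allclose(A, Q @ np.diag(eigvals) @ Q.T))   # True: A diagonalized
```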

### 7.4 Normal Operators on Real Inner-Product Spaces

Suppose $V$ is a two-dimensional real inner-product space and $T ∈ \mathcal{L}(V )$. Then the following are equivalent:

• $T$ is normal but not self-adjoint;
• the matrix of $T$ with respect to every orthonormal basis of $V$ has the form $\bigl[ \begin{smallmatrix} a&-b\\b&a \end{smallmatrix} \bigr]$ with $b \ne 0$;
• the matrix of $T$ with respect to some orthonormal basis of $V$ has the form $\bigl[ \begin{smallmatrix} a&-b\\b&a \end{smallmatrix} \bigr]$ with $b > 0$.

Suppose $T ∈ \mathcal{L}(V)$ is normal and $U$ is a subspace of $V$ that is invariant under $T$. Then

• $U^⊥$ is invariant under $T$ ;
• $U$ is invariant under $T^∗$ ;
• $(T|_U)^∗=(T^∗)|_U$;
• $T |_U$ is a normal operator on $U$;
• $T|_{U^⊥}$ is a normal operator on $U^⊥$

A block diagonal matrix is a square matrix of the form

$$\left[ \begin{matrix} A_1 & & 0 \\ & \ddots & \\ 0 & & A_m \\ \end{matrix} \right]$$

where $A_1,…,A_m$ are square matrices lying along the diagonal and all the other entries of the matrix equal 0.

If $A$ and $B$ are block diagonal matrices of the form

$$A= \left[ \begin{matrix} A_1 & & 0 \\ & \ddots & \\ 0 & & A_m \\ \end{matrix} \right] , B= \left[ \begin{matrix} B_1 & & 0 \\ & \ddots & \\ 0 & & B_m \\ \end{matrix} \right]$$

where $A_j$ has the same size as $B_j$ for $j=1,…,m$, then $AB$ is a block diagonal matrix of the form

$$AB= \left[ \begin{matrix} A_1B_1 & & 0 \\ & \ddots & \\ 0 & & A_mB_m \\ \end{matrix} \right]$$
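This block-by-block multiplication rule is easy to verify numerically (my own sketch; `block_diag` is a hypothetical helper, and the blocks are arbitrary choices):

```python
import numpy as np

def block_diag(*blocks):
    """Minimal sketch: assemble square blocks along the diagonal."""
    n = sum(b.shape[0] for b in blocks)
    out = np.zeros((n, n))
    i = 0
    for b in blocks:
        k = b.shape[0]
        out[i:i + k, i:i + k] = b
        i += k
    return out

A1, A2 = np.array([[2.0]]), np.array([[0.0, -1.0], [1.0, 0.0]])
B1, B2 = np.array([[3.0]]), np.array([[1.0, 2.0], [3.0, 4.0]])

A, B = block_diag(A1, A2), block_diag(B1, B2)
# The product of block diagonal matrices is block diagonal, block by block.
print(np.allclose(A @ B, block_diag(A1 @ B1, A2 @ B2)))  # True
```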

Suppose that $V$ is a real inner-product space and $T ∈ \mathcal{L}(V)$. Then $T$ is normal if and only if there is an orthonormal basis of $V$ with respect to which $T$ has a block diagonal matrix where each block is a 1-by-1 matrix or a 2-by-2 matrix of the form

$$\left[ \begin{matrix} a & -b \\ b & a \\ \end{matrix} \right]$$

with $b>0$.