Chapter 1

Vector Spaces

Definition 1.1

Let F stand for either R or C. A vector space over F is a set V together with two operations ⊕ and · such that:

• (v1) u ⊕ (v ⊕ w) = (u ⊕ v) ⊕ w ∀u, v, w ∈ V.

• (v2) u ⊕ v = v ⊕ u ∀u, v ∈ V.

• (v3) ∃o ∈ V such that u ⊕ o = u ∀u ∈ V.

• (v4) ∀u ∈ V ∃(−u) ∈ V such that u ⊕ (−u) = o.

• (v5) λ · (u ⊕ v) = (λ · u) ⊕ (λ · v) ∀λ ∈ F, ∀u, v ∈ V.

• (v6) (λ + μ) · u = (λ · u) ⊕ (μ · u) ∀λ, μ ∈ F, ∀u ∈ V.

• (v7) λ · (μ · u) = (λμ) · u ∀λ, μ ∈ F, ∀u ∈ V.

• (v8) 1 · u = u ∀u ∈ V.

For any set V, the operations ⊕ and · can be defined in many different ways and may bear no resemblance to the usual notions of addition and multiplication. All that is required is that axioms (v1) to (v8) be satisfied; if they are, then V is a vector space relative to those ⊕ and ·. The elements of F are called “scalars”. The operation ⊕ (read “o-plus”) is often called “addition” and · is called “scalar multiplication”. If V is a vector space over F then the elements of V are called “vectors”. The vector o is called the “zero vector”. The element −u is called the additive inverse of u. It is important to emphasise that −u may have nothing to do with “minus one times u”. The precise form of −u will depend on the definition of ⊕. To avoid confusion, we can read −u as “dash-u” instead of “minus u”.

The simplest example is R, the set of real numbers, which becomes a vector space over R (that is, F is R itself) by the following definitions:

x ⊕ y = x + y,

λ · x = λx,

which are the ordinary definitions of addition and multiplication of real numbers. It is clear that all axioms (v1) to (v8) are satisfied. Another familiar example is the set R³, whose definition is

R³ = { (x₁, x₂, x₃) | x₁, x₂, x₃ ∈ R },

or, in words, the set of all triplets of real numbers. This set is a vector space over R if ⊕ and · are defined as follows:

(x₁, x₂, x₃) ⊕ (y₁, y₂, y₃) = (x₁ + y₁, x₂ + y₂, x₃ + y₃),
λ · (x₁, x₂, x₃) = (λx₁, λx₂, λx₃),

which are the usual definitions of vector addition and of multiplication of a vector by a real number. It is simple to check that axioms (v1) to (v8) are satisfied.
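For readers who like to compute, these componentwise operations are easy to mirror in a few lines of code. Here is a minimal Python sketch (the function names oplus and smul are ad hoc choices; spot-checking axioms (v2) and (v5) on a few sample vectors illustrates them rather than proves them):

    # Componentwise addition and scalar multiplication on triples,
    # with a spot-check of axioms (v2) and (v5) on sample values.
    def oplus(u, v):
        """(x1,x2,x3) oplus (y1,y2,y3) = (x1+y1, x2+y2, x3+y3)."""
        return tuple(ui + vi for ui, vi in zip(u, v))

    def smul(lam, u):
        """lam . (x1,x2,x3) = (lam*x1, lam*x2, lam*x3)."""
        return tuple(lam * ui for ui in u)

    u, v, lam = (1.0, 2.0, 3.0), (-4.0, 0.5, 2.0), 2.5
    assert oplus(u, v) == oplus(v, u)                                   # (v2)
    assert smul(lam, oplus(u, v)) == oplus(smul(lam, u), smul(lam, v))  # (v5)
    print("spot-checks passed")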

Examples 1.2

Similarly to R³ we define Rⁿ as

Rⁿ = { (x₁, x₂, ..., xₙ) | xᵢ ∈ R, 1 ≤ i ≤ n },

which is the set of all n-tuplets of real numbers. This set is a vector space over R if ⊕ and · are defined in the usual way:

(x₁, x₂, ..., xₙ) ⊕ (y₁, y₂, ..., yₙ) = (x₁ + y₁, x₂ + y₂, ..., xₙ + yₙ),
λ · (x₁, x₂, ..., xₙ) = (λx₁, λx₂, ..., λxₙ).

Examples 1.3

Let’s look at an example where a set can be made into a vector space only if we abandon the standard definitions of addition and scalar multiplication. Consider

V = { (x₁, x₂, 1) | x₁, x₂ ∈ R }.

Note that the elements of the set are triplets of real numbers, like in R³, except that the last one is always a one. Therefore, if we defined ⊕ as in R³ we would run into a problem:

(x₁, x₂, 1) ⊕ (y₁, y₂, 1) = (x₁, x₂, 1) + (y₁, y₂, 1) = (x₁ + y₁, x₂ + y₂, 2),

which is not in V. The same would happen if we defined scalar multiplication as we did in R³:

λ · (x₁, x₂, 1) = λ(x₁, x₂, 1) = (λx₁, λx₂, λ),

which again is not in V (because λ is in general not equal to 1). Therefore we can’t accept these definitions of ⊕ and ·. On the other hand, we could simply change them into

(x₁, x₂, 1) ⊕ (y₁, y₂, 1) = (x₁ + y₁, x₂ + y₂, 1),
λ · (x₁, x₂, 1) = (λx₁, λx₂, 1),

which do satisfy all the axioms and therefore make V into a vector space, as you can check.
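To see concretely how the R³-style operations leave V while the modified ones stay inside it, here is a small Python sketch; the function names are ad hoc, and evaluating a couple of sample triples is only an illustration:

    # Naive (R^3-style) operations versus the modified operations on
    # V = {(x1, x2, 1)}. Only the modified ones keep the last entry equal to 1.
    def naive_oplus(u, v):
        return (u[0] + v[0], u[1] + v[1], u[2] + v[2])   # last entry becomes 2

    def naive_smul(lam, u):
        return (lam * u[0], lam * u[1], lam * u[2])      # last entry becomes lam

    def mod_oplus(u, v):
        return (u[0] + v[0], u[1] + v[1], 1)             # stays in V

    def mod_smul(lam, u):
        return (lam * u[0], lam * u[1], 1)               # stays in V

    u, v = (1.0, 2.0, 1), (3.0, -1.0, 1)
    print(naive_oplus(u, v), naive_smul(4.0, u))   # last entries 2 and 4.0: not in V
    print(mod_oplus(u, v), mod_smul(4.0, u))       # last entries 1 and 1: in V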

Examples 1.4

Let Pₙ be the set of all polynomials in x of degree not higher than n with real coefficients:

Pₙ = { a₀ + a₁x + a₂x² + ... + aₙxⁿ | aᵢ ∈ R ∀ 0 ≤ i ≤ n }.

This set becomes a vector space over R if addition and scalar multiplication are defined by

(a₀ + a₁x + a₂x² + ... + aₙxⁿ) ⊕ (b₀ + b₁x + b₂x² + ... + bₙxⁿ)
= (a₀ + b₀) + (a₁ + b₁)x + (a₂ + b₂)x² + ... + (aₙ + bₙ)xⁿ,

λ · (a₀ + a₁x + a₂x² + ... + aₙxⁿ) = (λa₀) + (λa₁)x + (λa₂)x² + ... + (λaₙ)xⁿ.
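Since a polynomial of degree at most n is determined by its coefficients a₀, a₁, ..., aₙ, these operations can be mirrored on plain coefficient lists. The Python sketch below assumes that representation (a choice made purely for illustration):

    # Represent a0 + a1 x + ... + an x^n by its coefficient list [a0, a1, ..., an].
    # Then addition and scalar multiplication act coefficient by coefficient.
    def poly_oplus(a, b):
        return [ai + bi for ai, bi in zip(a, b)]

    def poly_smul(lam, a):
        return [lam * ai for ai in a]

    p = [1.0, 0.0, 2.0]    # 1 + 2x^2
    q = [0.0, 3.0, -2.0]   # 3x - 2x^2
    print(poly_oplus(p, q))     # [1.0, 3.0, 0.0], i.e. 1 + 3x
    print(poly_smul(2.0, p))    # [2.0, 0.0, 4.0], i.e. 2 + 4x^2

Identifying a polynomial with its coefficient list is also a first hint that Pₙ behaves very much like Rⁿ⁺¹.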

Examples 1.5

Let R^X denote the set of all functions from a set X to R:

R^X = { f : X → R }.

This set becomes a vector space over R by the definitions

(f ⊕ g)(x) = f(x) + g(x),

(λ · f)(x) = λf(x).
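These pointwise definitions translate directly into code: f ⊕ g and λ · f are again functions built from f and g. A brief Python sketch (representing functions by closures is just one possible choice, and the helper names are ad hoc):

    import math

    # Pointwise operations on functions from X to R:
    # (f oplus g)(x) = f(x) + g(x) and (lam . f)(x) = lam * f(x).
    def f_oplus(f, g):
        return lambda x: f(x) + g(x)

    def f_smul(lam, f):
        return lambda x: lam * f(x)

    h = f_oplus(math.sin, math.cos)     # h(x) = sin(x) + cos(x)
    k = f_smul(3.0, math.exp)           # k(x) = 3 * exp(x)
    print(h(0.0), k(0.0))               # 1.0 3.0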




A remarkable example of how different a vector space can look from what we are used to is this. Let R⁺ be the set of all strictly positive (meaning greater than zero) real numbers:

R⁺ = { x ∈ R | x > 0 }.

This set can’t be made into a vector space if we insist on the old definitions of ⊕ as + and · as ordinary multiplication, as we did in a previous example. That is because, for example, the zero vector would have to be the real number zero, which is excluded from the set R⁺. Moreover, the additive inverse of x ∈ R⁺ would have to be −x, the ordinary negative of x, but negative numbers are also outside R⁺. Nevertheless we can still make R⁺ into a vector space over R if we introduce the following (slightly surprising) definitions of addition and scalar multiplication:

x ⊕ y = xy,   λ · x = x^λ,

that is, addition is defined as ordinary multiplication and scalar multiplication is defined as a power. It is left as an exercise to check that these definitions make R⁺ a vector space over R in which the zero vector is o = 1 and the additive inverse of x is 1/x. [Question: can integer powers be defined in this vector space? If yes, how? What about non-integer powers?]

As an example that a set may be promoted to a vector space in more than one way, consider this. Going back to the set R, let us propose the following alternative definitions of ⊕ and ·:

x ⊕ y = (x³ + y³)^(1/3),

λ · x = λ^(1/3) x.

Although these are definitely not the usual definitions of addition and multiplication of real numbers, there is nothing wrong with them. Let us check that all axioms are satisfied:

• (v1)

x ⊕ (y ⊕ z) = x ⊕ (y³ + z³)^(1/3) = (x³ + (y³ + z³))^(1/3) = ((x³ + y³) + z³)^(1/3) = (x³ + y³)^(1/3) ⊕ z = (x ⊕ y) ⊕ z QED.

• (v2)

x ⊕ y = (x³ + y³)^(1/3) = (y³ + x³)^(1/3) = y ⊕ x QED.




• (v3) The zero vector is the real number 0:

x ⊕ 0 = (x³ + 0³)^(1/3) = (x³)^(1/3) = x QED.

• (v4) The additive inverse of x is the usual −x (meaning -1 times x):

x ⊕ (−x) = (x³ + (−x)³)^(1/3) = (x³ − x³)^(1/3) = 0 QED.

• (v5)

λ · (x ⊕ y) = λ · (x³ + y³)^(1/3) = λ^(1/3)(x³ + y³)^(1/3) = ((λ^(1/3)x)³ + (λ^(1/3)y)³)^(1/3) = (λ · x) ⊕ (λ · y) QED.

• (v6)

(λ + μ) · x = (λ + μ)^(1/3)x = (λx³ + μx³)^(1/3) = ((λ^(1/3)x)³ + (μ^(1/3)x)³)^(1/3) = (λ · x) ⊕ (μ · x) QED.

• (v7)

λ · (μ · x) = λ · (μ^(1/3)x) = λ^(1/3)μ^(1/3)x = (λμ)^(1/3)x = (λμ) · x QED.

• (v8) The multiplicative unit is the same as the real number 1:

1 · x = 1^(1/3)x = 1x = x QED.

In this example we have seen that the set R can be made into a vector space in a way different from the usual one, by defining addition and scalar multiplication as we have just done.
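A numerical spot-check of this “cube-root” structure can be reassuring, although evaluating a handful of sample values is of course not a proof. The Python sketch below does exactly that; a small helper is used for real cube roots, because raising a negative number to a fractional power in Python does not give the real cube root. Small floating-point errors are expected, so the comparisons use a tolerance.

    import math

    def cbrt(t):
        """Real cube root, valid for negative inputs too."""
        return math.copysign(abs(t) ** (1.0 / 3.0), t)

    def oplus(x, y):          # x oplus y = (x^3 + y^3)^(1/3)
        return cbrt(x**3 + y**3)

    def smul(lam, x):         # lam . x = lam^(1/3) * x
        return cbrt(lam) * x

    x, y, z = 1.7, -0.4, 0.9
    lam, mu = 2.0, -5.0
    checks = [
        (oplus(x, oplus(y, z)), oplus(oplus(x, y), z)),               # (v1)
        (oplus(x, y), oplus(y, x)),                                   # (v2)
        (smul(lam, oplus(x, y)), oplus(smul(lam, x), smul(lam, y))),  # (v5)
        (smul(lam + mu, x), oplus(smul(lam, x), smul(mu, x))),        # (v6)
        (smul(lam, smul(mu, x)), smul(lam * mu, x)),                  # (v7)
    ]
    print(all(math.isclose(a, b, rel_tol=1e-9, abs_tol=1e-12) for a, b in checks))  # True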

Comment: Even though vector spaces are required to have a zero vector and to have additive inverses, it is not clear whether there could be several different zero vectors, or if some vectors could have more than one additive inverse. These are questions of uniqueness, that is, questions about whether there is only one object satisfying certain requirements. The next Proposition shows that any vector space has only one zero vector, and each vector has only one additive inverse. The idea of the proof is to assume that there are two such




things and then show that, because of the vector space axioms, they are in fact the same. Then uniqueness is established.

Proposition 1.6

Let V be a vector space over F. Then:

1. The zero vector is unique;

2. For every u ∈ V, the additive inverse −u is unique;

3. 0 · u = o ∀u ∈ V;

4. (−1) · u = −u ∀u ∈ V.

Proof: 1. Suppose there are two zero vectors o and o′. Taking (v3) with o′ as the zero vector and u = o, we get o ⊕ o′ = o. Taking (v3) again with o as the zero vector and u = o′, we get o′ ⊕ o = o′. But by (v2) we have o ⊕ o′ = o′ ⊕ o. Therefore o = o′, as required.

2. Suppose u has two additive inverses −u and −u′. Then

−u′ = −u′ ⊕ o by (v3)
= −u′ ⊕ (u ⊕ (−u)) by (v4)
= (−u′ ⊕ u) ⊕ (−u) by (v1)
= o ⊕ (−u) by (v2) and (v4)
= −u by (v2) and (v3).

Therefore −u′ = −u, as required.

The proofs of parts 3. and 4. are left as an exercise ■

Definition 1.7

A subspace U of a vector space V over F is a subset of V which is also a vector space over F with the same ⊕ and · as V. We write U ≤ V to denote that U is a subspace of V.

Note: how can we check that a subset of V is a subspace of V? We don’t need to check all of (v1) to (v8). The following theorem provides a shortcut.


Theorem 1.8

Let U ⊆ V. Then U ≤ V if and only if:

(s1) o ∈ U;

(s2) ∀u, v ∈ U we have u ⊕ v ∈ U;

(s3) ∀λ ∈ F, ∀u ∈ U we have λ · u ∈ U.

The proof of this theorem is very simple and can be left as an exercise for the reader.

Examples 1.9

(1) Take V = R³. Then the set {(0,0,0)} consisting only of the zero vector is a subspace of R³. Any straight line in R³ that goes through the origin of coordinates is also a subspace. The same is true of any plane that contains the origin of coordinates. Finally, the whole of R³ is also a subspace of itself. Note: in general, for any vector space V we have {o} ≤ V and V ≤ V.

(2) The set of all real solutions of a system of homogeneous linear equations is also a subspace of R³:

a₁₁x₁ + a₁₂x₂ + a₁₃x₃ = 0;
a₂₁x₁ + a₂₂x₂ + a₂₃x₃ = 0;
a₃₁x₁ + a₃₂x₂ + a₃₃x₃ = 0.

If we write this in matrix form as Ax = o, then the statement is that

U = { x ∈ R³ | Ax = o } ≤ R³.

The proof is very simple; we need only check (s1), (s2) and (s3):

(s1): Clearly (0,0,0) ∈ U because x₁ = 0, x₂ = 0, x₃ = 0 is a solution to the homogeneous system of linear equations.

(s2): Let x, y ∈ U. That means that Ax = o and Ay = o. Then by the laws of matrix-vector multiplication we know that A(x ⊕ y) = Ax + Ay = o + o = o. Therefore x ⊕ y ∈ U.

(s3): Let x ∈ U. That means that Ax = o. For any λ ∈ R we have that A(λ · x) = λ · Ax = λ · o = o. Therefore λ · x ∈ U.

In conclusion, U ≤ R³.
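For a concrete instance of this example, the NumPy sketch below picks a sample singular matrix A, takes two solutions of Ax = o, and confirms numerically that their sum and scalar multiples are again solutions, exactly as (s2) and (s3) require (the particular matrix and solutions are arbitrary choices):

    import numpy as np

    # Sample singular matrix A (rank 1), so U = {x in R^3 | Ax = o} is a plane.
    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],
                  [3.0, 6.0, 9.0]])

    x = np.array([-2.0, 1.0, 0.0])   # A @ x is the zero vector
    y = np.array([-3.0, 0.0, 1.0])   # A @ y is the zero vector
    lam = 7.5

    print(A @ x, A @ y)              # both zero, so x and y lie in U
    print(A @ (x + y))               # (s2): the sum is again a solution
    print(A @ (lam * x))             # (s3): scalar multiples are again solutions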




(3) The set of all real solutions of a homogeneous linear differential equation is a subspace of R^R (the vector space of all functions from R to R). Consider for example the differential equation f′′ − f = 0. Define

U = { f : R → R | f is twice differentiable and f′′ − f = 0 }.

Let’s check the conditions for U to be a subspace of R^R:

(s1): o is the zero function, which is clearly in U;

(s2): If f, g ∈ U then (f + g)′′ − (f + g) = (f′′ − f) + (g′′ − g) = 0 + 0 = 0, so f + g ∈ U;

(s3): Left as an exercise.

In conclusion, U ≤ R^R.
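As a numerical aside (a spot check rather than a proof), one can sample two functions known to satisfy f′′ − f = 0, for instance f(x) = eˣ and g(x) = e⁻ˣ, and verify with a finite-difference approximation that their sum still satisfies the equation up to discretisation error:

    import numpy as np

    # If f'' - f = 0 and g'' - g = 0, the sum s = f + g should satisfy
    # s'' - s = 0 as well (here only up to the grid error of the scheme).
    x = np.linspace(-1.0, 1.0, 2001)
    h = x[1] - x[0]

    f = np.exp(x)        # a known solution of f'' - f = 0
    g = np.exp(-x)       # another known solution
    s = f + g

    def second_derivative(y, h):
        """Central-difference approximation of y'' at the interior grid points."""
        return (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2

    residual = second_derivative(s, h) - s[1:-1]
    print(np.max(np.abs(residual)))    # tiny (of order h^2), consistent with s'' - s = 0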

Definition 1.10a)

Let V be a vector space over F. Let v₁, v₂, ..., vₙ ∈ V and λ₁, λ₂, ..., λₙ ∈ F. Then the expression

∑ᵢ₌₁ⁿ λᵢ · vᵢ = λ₁ · v₁ ⊕ λ₂ · v₂ ⊕ ··· ⊕ λₙ · vₙ

is called a linear combination of the vectors v₁, v₂, ..., vₙ.

Definition 1.10b)

Let W be the set of all linear combinations of the vectors v₁, v₂, ..., vₙ ∈ V. Then W ≤ V and it is called the subspace of V spanned by v₁, v₂, ..., vₙ. We write this as W = span{v₁, v₂, ..., vₙ}. If W is all of V then we say that V is spanned by {v₁, v₂, ..., vₙ}, or that {v₁, v₂, ..., vₙ} is a spanning set for V. The proof of the statement that span{v₁, v₂, ..., vₙ} is a subspace of V is left as an exercise.

Examples 1.11

Take V = R² and v₁ = (1,0), v₂ = (0,1), v₃ = (1,1). Then {v₁, v₂, v₃} is a spanning set for R². The proof is to show that every vector in R² is a linear combination of these three vectors. One way to do this is by noticing that for any (a, b) ∈ R² we can write

(a, b) = a · (1,0) ⊕ b · (0,1) ⊕ 0 · (1,1) = a · v₁ ⊕ b · v₂ ⊕ 0 · v₃ ∈ span{v₁, v₂, v₃},

as required. Note that there are many other ways of writing (a, b) as a linear combination of v₁, v₂, v₃. For example,

(a, b) = c · (1,0) ⊕ (b + c − a) · (0,1) ⊕ (a − c) · (1,1).

This is connected to the fact that {v₁, v₂, v₃} is not a basis of R², but this will be explained later.
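The non-uniqueness of such representations is easy to confirm numerically. The NumPy sketch below evaluates the two coefficient choices given above for a sample (a, b) and an arbitrary c, and checks that both reproduce (a, b):

    import numpy as np

    v1, v2, v3 = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
    a, b = 2.0, 5.0

    # Representation with coefficient 0 on v3:
    print(a * v1 + b * v2 + 0.0 * v3)                    # [2. 5.]

    # The alternative family: coefficients (c, b + c - a, a - c) for any c.
    c = 10.0
    print(c * v1 + (b + c - a) * v2 + (a - c) * v3)      # [2. 5.]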

Definition 1.12

A finite set of vectors is called linearly independent if no vector in the set is a linear combination of the others. Otherwise we say that it is linearly dependent. Stated in a more explicit way, the vector set {v₁, v₂, ..., vₙ} is linearly dependent if any one of the vectors (say v₁) can be written as

v₁ = λ₂ · v₂ ⊕ ··· ⊕ λₙ · vₙ

for some scalars λ₂, ..., λₙ. If there are no such scalars then the vector set is linearly independent.

Proposition 1.13

A finite set of vectors {v₁, v₂, ..., vₙ} is linearly independent if and only if the equation

λ₁ · v₁ ⊕ λ₂ · v₂ ⊕ ··· ⊕ λₙ · vₙ = o

implies λ₁ = λ₂ = ··· = λₙ = 0 (i.e., that is the only solution to the equation). The proof is left as an exercise.

Examples 1.14

(1) The vector set {u} consisting of a single vector is linearly independent if u ≠ o. If u = o then it is linearly dependent. More generally, if a vector set contains the zero vector then it is immediately a linearly dependent set. For, if the vector set is {o, v₂, ..., vₙ} then it is always true that

1 · o ⊕ 0 · v₂ ⊕ ··· ⊕ 0 · vₙ = o,

so it is a linearly dependent set (note that λ₁ = 1 ≠ 0).

(2) A set {u, v} where neither vector is the zero vector is linearly independent unless there is λ ∈ F such that u = λ · v. In other words, if neither vector is a multiple of the other then the set of two vectors is linearly independent. Consider for example the following vector sets in R³:

{(1,0,0), (1,2,3)} linearly independent;
{(1,0,0), (1/2,0,0)} linearly dependent (take λ = 1/2).

(3) Similarly, take a set of three vectors {u, v, w} in R³. Then this vector set is linearly dependent if the three vectors are coplanar (i.e., lie on the same plane) and linearly independent if they are not.




For example, take {(1,0,0), (0,1,0), (1,2,3)}. Let’s check for linear independence:

λ₁ · (1,0,0) ⊕ λ₂ · (0,1,0) ⊕ λ₃ · (1,2,3) = (0,0,0)
(λ₁ + λ₃, λ₂ + 2λ₃, 3λ₃) = (0,0,0)
therefore λ₁ = λ₂ = λ₃ = 0.

So the vector set is linearly independent, as we should have expected because the three vectors are not coplanar. On the other hand, if we take {(1,0,0), (0,1,0), (1,2,0)} and check for linear independence we find this:

λ₁ · (1,0,0) ⊕ λ₂ · (0,1,0) ⊕ λ₃ · (1,2,0) = (0,0,0)
(λ₁ + λ₃, λ₂ + 2λ₃, 0) = (0,0,0)
therefore λ₁ = −λ₃ and λ₂ = −2λ₃,

but that does not imply that λ₁ = λ₂ = λ₃ = 0. We can satisfy the same equations by taking, for example, λ₁ = 1, λ₂ = 2 and λ₃ = −1. The vector set is therefore linearly dependent, as it should be because all three vectors in it lie on the same plane (the xy plane).
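A standard computational test for linear independence of vectors in Rⁿ is to stack them as the rows of a matrix and compute its rank: the set is linearly independent exactly when the rank equals the number of vectors. The NumPy sketch below applies this rank test to the two sets just discussed:

    import numpy as np

    # k vectors are linearly independent iff the matrix having them as rows has rank k.
    independent_set = np.array([[1.0, 0.0, 0.0],
                                [0.0, 1.0, 0.0],
                                [1.0, 2.0, 3.0]])
    dependent_set = np.array([[1.0, 0.0, 0.0],
                              [0.0, 1.0, 0.0],
                              [1.0, 2.0, 0.0]])

    print(np.linalg.matrix_rank(independent_set))   # 3: linearly independent
    print(np.linalg.matrix_rank(dependent_set))     # 2: linearly dependent

    # The dependency found above: 1*(1,0,0) + 2*(0,1,0) + (-1)*(1,2,0) = (0,0,0)
    coeffs = np.array([1.0, 2.0, -1.0])
    print(coeffs @ dependent_set)                   # [0. 0. 0.]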

Definition 1.15

A basis for a vector space V is a set of vectors in V that is both linearly independent and spanning.

Examples 1.16

(1) Take V = R² and v₁ = (1,0), v₂ = (0,1), v₃ = (1,1). Then {v₁, v₂} is a basis of R². Check linear independence:

λ₁ · (1,0) ⊕ λ₂ · (0,1) = (0,0)
(λ₁, λ₂) = (0,0)
therefore λ₁ = λ₂ = 0.

Now check that it spans R². For any (a, b) ∈ R² we write

(a, b) = a · (1,0) ⊕ b · (0,1),

which proves that the set is spanning. On the other hand, {v₁, v₃} is also a basis of R². Check linear independence:

λ₁ · (1,0) ⊕ λ₂ · (1,1) = (0,0)
(λ₁ + λ₂, λ₂) = (0,0)
therefore λ₁ = λ₂ = 0.
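Both bases can also be checked numerically: linear independence via the rank, and the spanning property by solving for the coordinates of a sample vector such as (2, 5). A short NumPy sketch:

    import numpy as np

    B1 = np.column_stack([[1.0, 0.0], [0.0, 1.0]])   # columns v1, v2
    B2 = np.column_stack([[1.0, 0.0], [1.0, 1.0]])   # columns v1, v3

    target = np.array([2.0, 5.0])                    # a sample (a, b)

    for B in (B1, B2):
        print(np.linalg.matrix_rank(B))              # 2: the columns are independent
        coords = np.linalg.solve(B, target)          # coordinates of (a, b) in this basis
        print(coords, B @ coords)                    # B @ coords reproduces (2, 5)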