A linear vector space is a set V that is closed under finite vector addition and scalar multiplication.

By closed we mean that when any finite number of vectors belonging to that space are combined by vector addition and/or scalar multiplication, the result is a vector that also belongs to that space.
e.g. we can say that the set of integers (whole numbers) is closed under addition because the sum of any two of them is also an integer (2 + 45 = 47, 4 + (-2) = 2, etc.). However, it is not closed under division because you can find a pair whose quotient is not an integer (e.g. 2/3, -3/5, etc.).
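To make the idea concrete, here is a trivial Python check (an illustrative sketch of my own, not part of the original argument):

    # Integers are closed under addition: the sum of any two ints is an int
    print(type(2 + 45), type(4 + (-2)))   # <class 'int'> <class 'int'>

    # ...but not under division: the quotient need not be an integer
    print(2 / 3, type(2 / 3))             # 0.666..., <class 'float'>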

Formally, a vector space is a nonempty set V of vectors, each with the same number of components, such that:

  1. If a vector¹ a is an element of V, so is αa, where α is a scalar (we’ll take the set of real numbers as acceptable values for α here for simplicity, i.e. we assume that α ∈ R).
    i.e. a ∈ V ⇒ αa ∈ V
  2. If there are two vectors a and b, both elements of V, and two scalars α and β, both real numbers, then αa + βb is also an element of V
    i.e. if a, b ∈ V and α, β ∈ R, then αa + βb ∈ V
    and
    α(a + b) = αa + αb
    α(βa) = (αβ)a
    (α + β)a = αa + βa
  3. Vector addition is commutative and associative
    a + b = b + a,
    (a + b) + c = a + (b + c)
  4. There exists an identity element for addition, 0
    a + 0 = 0 + a = a
    where 0 is called the null vector, which is defined to be a vector with zero length (and no direction).
  5. For every vector a in V, there exists another vector -a in V such that
    a + (-a) = (-a) + a = 0
  6. There exists an identity element for multiplication, 1
    1a = a
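The axioms above can be checked numerically for ordinary 3-component vectors. Below is a minimal NumPy sketch (the sample vectors and scalars are my own, chosen just for illustration):

    import numpy as np

    # Sample vectors in R^3 and real scalars, chosen arbitrarily
    a = np.array([1.0, 2.0, 2.0])
    b = np.array([0.0, 2.0, 1.0])
    c = np.array([1.0, 0.0, 1.0])
    alpha, beta = 2.5, -1.5

    # Axioms 1-2: scalar multiplication and its distributive/associative rules
    assert np.allclose(alpha * (a + b), alpha * a + alpha * b)
    assert np.allclose(alpha * (beta * a), (alpha * beta) * a)
    assert np.allclose((alpha + beta) * a, alpha * a + beta * a)

    # Axiom 3: vector addition is commutative and associative
    assert np.allclose(a + b, b + a)
    assert np.allclose((a + b) + c, a + (b + c))

    # Axioms 4-6: additive identity, additive inverse, multiplicative identity
    zero = np.zeros(3)
    assert np.allclose(a + zero, a)
    assert np.allclose(a + (-a), zero)
    assert np.allclose(1 * a, a)

    print("All checked axioms hold for these sample vectors.")

Of course, passing for a few sample vectors does not prove the axioms in general; it only illustrates what they assert.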

Linear Combination of Vectors

A weighted combination of vectors a, b, c, etc. (all belonging to V) is called a linear combination of them (r, s, t, etc. are real numbers here):

v = ra + sb + tc + …

If this sum can be zero, i.e.

v = ra + sb + tc + … = 0

only when all the coefficients r, s, t, etc. are separately zero, then the vectors a, b, c, etc. are said to be linearly independent. Otherwise they are called linearly dependent. If they are linearly independent, none of them can be expressed as a weighted sum of the rest.

Let’s understand it with the help of an example. Suppose we have three vectors such that

    \[\vec{a}=\hat{i}+2\hat{j}+2\hat{k}\]

    \[\vec{b}=2\hat{j}+\hat{k}\]

    \[\vec{c}=\hat{i}+\hat{k}\]

Then we can write \vec{a} as a combination of \vec{b} and \vec{c}:

    \[\vec{a}=\vec{b}+\vec{c}\]

Or alternatively we can write \vec{c} in terms of \vec{a} and \vec{b}:

    \[\vec{c}=\vec{a}-\vec{b}\]

Hence, the vectors \vec{a}, \vec{b} and \vec{c} are linearly dependent. However, if we have another three vectors \vec{u}, \vec{v}, \vec{w} such that

    \[\vec{u}=\hat{i}+2\hat{j}\]

    \[\vec{v}=2\hat{k}\]

    \[\vec{w}=3\hat{j}\]

Then there is no way to express any one of them as a linear combination of the other two. Hence, they are said to be linearly independent.
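A quick numerical way to verify this (not part of the original post; just a sketch using NumPy) is to stack the vectors as rows of a matrix and compute its rank: the vectors are linearly independent exactly when the rank equals the number of vectors.

    import numpy as np

    # Components relative to the cartesian basis (i, j, k)
    a = np.array([1, 2, 2])   # a = i + 2j + 2k
    b = np.array([0, 2, 1])   # b = 2j + k
    c = np.array([1, 0, 1])   # c = i + k

    u = np.array([1, 2, 0])   # u = i + 2j
    v = np.array([0, 0, 2])   # v = 2k
    w = np.array([0, 3, 0])   # w = 3j

    # Rank less than 3 means some non-trivial combination sums to zero
    print(np.linalg.matrix_rank(np.vstack([a, b, c])))  # 2 -> linearly dependent
    print(np.linalg.matrix_rank(np.vstack([u, v, w])))  # 3 -> linearly independent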

If the given vectors are linearly independent and every other vector in that space can be expressed as a linear combination of them, they are said to be basis vectors.

A basis is a linearly independent subset of V consisting of the maximum possible number of vectors. When a vector is expressed as a linear combination of the basis vectors, the coefficients multiplying them are called the components of that vector relative to that basis.

e.g. the vector \vec{b} encountered before was expressed in terms of the cartesian unit vector basis:

    \[\vec{b}=2\hat{j}+\hat{k}\]

here the components are (0, 2, 1), corresponding to \hat{i}, \hat{j} and \hat{k} respectively.

The maximum number of linearly independent vectors in V is called the dimension of V. The familiar cartesian coordinate space is a 3-dimensional space (one possible basis being the unit vectors \hat{i}, \hat{j} and \hat{k}).
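Since \vec{u}, \vec{v} and \vec{w} above are three linearly independent vectors in this 3-dimensional space, they form a basis, and the components of any vector relative to that basis can be found by solving a small linear system. A sketch (the sample vector \vec{x} is my own, just for illustration):

    import numpy as np

    # Basis vectors written as the columns of a matrix, in cartesian components
    B = np.column_stack([
        [1, 2, 0],   # u = i + 2j
        [0, 0, 2],   # v = 2k
        [0, 3, 0],   # w = 3j
    ])

    # An arbitrary vector, given in cartesian components
    x = np.array([2, 1, 4])   # x = 2i + j + 4k

    # Solve B @ coeffs = x for the components of x relative to (u, v, w)
    coeffs = np.linalg.solve(B, x)
    print(coeffs)                      # [ 2.  2. -1.] -> x = 2u + 2v - w
    print(np.allclose(B @ coeffs, x))  # True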

The set of all linear combinations of given vectors (with the same number of components) is called the span of these vectors. The span of a set of vectors is a vector space in itself.
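For instance, the relation \vec{a} = \vec{b} + \vec{c} found earlier says that \vec{a} lies in the span of \vec{b} and \vec{c}. Whether a given vector lies in the span of some others can be checked numerically with a least-squares solve (again only a sketch, not something done in the post):

    import numpy as np

    b = np.array([0, 2, 1])   # b = 2j + k
    c = np.array([1, 0, 1])   # c = i + k
    a = np.array([1, 2, 2])   # a = i + 2j + 2k

    # Find coefficients r, s with r*b + s*c as close to a as possible;
    # if the fit is exact, a lies in the span of b and c.
    M = np.column_stack([b, c])
    coeffs = np.linalg.lstsq(M, a, rcond=None)[0]
    print(coeffs)                      # [1. 1.] -> a = b + c
    print(np.allclose(M @ coeffs, a))  # True: a is in span{b, c}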

Subspace

A subspace of V is a nonempty subset of V that forms another vector space V’ with respect to the two algebraic operations (addition and scalar multiplication) defined for the vectors in V.

A subspace S is orthogonal to another subspace T if every vector in S is perpendicular to every vector in T.

Orthogonality and Norm

It is important to understand that the concept of a vector goes deeper than a quantity having a certain magnitude and direction. It is not necessary for a vector to have only 3 components. There are vectors encountered in physics that have no direction in the traditional sense of the word and have more than 3 components. In those cases, we express vectors simply by their components relative to a given basis. In fact, vectors are best expressed in terms of matrices. The concept of orthogonality is then expressed in terms of inner products between the row- and column-matrix representations of vectors, and there is no way we can visualize them as being ‘perpendicular’ to each other:

\vec{a} and \vec{b} are said to be orthogonal if

    \[\vec{a}\cdot\vec{b}=0\]

The concept of the length of a vector is then replaced by the ‘norm’. In simple terms, the norm of a vector is the (positive) square root of the inner product of the vector with itself:

    \[\left \| \vec{a} \right \| = \sqrt{\vec{a}\cdot\vec{a}}\]

The null vector \vec{0} is assigned a norm of zero.
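In component form these definitions reduce to the familiar dot-product computations, e.g. (a small NumPy sketch using vectors from the earlier example):

    import numpy as np

    u = np.array([1, 2, 0])   # u = i + 2j
    v = np.array([0, 0, 2])   # v = 2k

    # Orthogonality: the inner (dot) product vanishes
    print(np.dot(u, v))            # 0 -> u and v are orthogonal

    # Norm: square root of the inner product of a vector with itself
    print(np.sqrt(np.dot(u, u)))   # 2.236..., i.e. sqrt(5)
    print(np.linalg.norm(u))       # same value via NumPy's built-in norm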

¹I have used both bold letters (such as a) and arrow notation (such as \vec{a}) to denote vectors in this post. In printed text the bold-letter notation is generally preferred, but the arrow notation helps you recognize a vector quickly and, in my opinion, cannot be confused with emphasized text (also in bold).
