[Figure: First step on the Moon]


In science and engineering we frequently encounter quantities that have only a magnitude, for example mass, time, speed and temperature. These are called scalar quantities. In contrast, many other physical quantities have, besides a magnitude, an associated direction in space. This second group includes displacement, velocity, acceleration, force and linear momentum. Quantities with both magnitude and direction are described mathematically by an object called a vector.

The concept of a vector is a mathematical invention. It was developed during the 19th century through the work of mathematicians such as Hamilton, Grassmann, Gibbs, and Heaviside. In 1881 Gibbs published his Elements of Vector Analysis, on which our current understanding of the subject is based.

The geometry in which we develop vectors is Euclidean geometry, named after the Greek mathematician and philosopher Euclid, who developed this geometric abstraction of space around 300 B.C. The foundations of Euclidean geometry rest on five postulates concerning points and lines (see Euclid's Postulates, from MathWorld). Classical (Newtonian) mechanics assumes that the geometry of space is Euclidean. In particular, our physical space is often referred to as the three-dimensional Euclidean space $\mathbb{R}^3$.

One of the remarkable properties of Euclidean space is that the shape and size of objects are unchanged under the geometric operations of translation, rotation and reflection. This property of Euclidean space is what physicists call the invariance of physical laws under transformations. It is for this reason that physical laws are written in terms of coordinate-independent vector quantities.

Geometrically a vector can be represented as a directed line segment. All parallel directed line segments with the same length and direction represent the same vector; this implies that a vector has no definite location in space. Vectors in this text are denoted by boldface letters, e.g. $\mathbf{A}$. The vector $\mathbf{A}$ has a magnitude (length or modulus) denoted by $A=\norm{\mathbf{A}}$. The simplest representation of direction and magnitude is the line segment between two points, say $A$ and $B$; the corresponding vector can be written $\mathbf{AB}$. We can associate with a vector a sense defined by the origin and terminus of the vector. For the vector $\mathbf{AB}$ the sense is from $A$ to $B$, and the vector is drawn as an arrow. The reverse sense is denoted by $\mathbf{BA}=-\mathbf{AB}$. With this geometrical definition of a vector we have defined a "step in space".

Vector Algebra

The set of rules for combining and operating on vectors is called vector algebra. We describe here the most elementary vector operations in geometrical form.

Vector multiplication by a scalar

A vector $\mathbf{A}$ can be multiplied by a number $a$ to get a new vector $\mathbf{C}=a\mathbf{A}$. If $a>0$ then $\mathbf{C}$ has the same direction and sense but a different length, $\norm{\mathbf{C}}=a\norm{\mathbf{A}}$. If $a<0$ then $\mathbf{C}$ has length $\norm{\mathbf{C}}=|a|\,\norm{\mathbf{A}}$ and the opposite sense. Multiplying by $a=0$ we get the null vector $\mathbf{0}$.
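As a quick numerical sketch, multiplying each component by the scalar realizes $a\mathbf{A}$ (the helper name `scale` below is our own, not from any library):

```python
def scale(a, A):
    """Return a*A component-wise; a < 0 reverses the sense, a = 0 gives the null vector."""
    return [a * x for x in A]

A = [3.0, 4.0]          # a vector of length 5
C = scale(2.0, A)       # same direction and sense, twice the length: [6.0, 8.0]
D = scale(-1.0, A)      # same length, opposite sense: [-3.0, -4.0]
```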

Vector addition

A vector may be conveniently represented by an arrow with length proportional to the magnitude. The direction of the arrow gives the direction of the vector, with the positive sense of direction indicated by the point. In this representation, vector addition \[ \mathbf{C} = \mathbf{A} + \mathbf{B} \] consists of placing the rear end of vector $\mathbf{B}$ at the point of vector $\mathbf{A}$. Vector $\mathbf{C}$ is then represented by an arrow drawn from the rear of $\mathbf{A}$ to the point of $\mathbf{B}$. This procedure is called the triangle law of addition.

By completing the parallelogram we see that vector addition is commutative: \[ \mathbf{C} = \mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A} \]

For adding three or more vectors, e.g. \[ \mathbf{D}=\mathbf{A}+\mathbf{B}+\mathbf{C} \] we use the property that vector addition is associative: we first add any two vectors following the parallelogram law and then successively add any remaining vector to the result: \[ (\mathbf{A}+ \mathbf{B}) + \mathbf{C} = \mathbf{A} + (\mathbf{B} + \mathbf{C}) \]
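The commutative and associative laws can be checked numerically with component-wise addition (the `add` helper below is an illustrative sketch, not a library function):

```python
def add(A, B):
    """Triangle-law addition, realized component-wise."""
    return [x + y for x, y in zip(A, B)]

A, B, C = [1.0, 2.0], [3.0, 4.0], [5.0, 6.0]
assert add(A, B) == add(B, A)                    # commutative
assert add(add(A, B), C) == add(A, add(B, C))    # associative
```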

Vector subtraction

Subtraction is handled by defining the negative of a vector as a vector of the same magnitude but with reversed direction: \[ \mathbf{A} - \mathbf{B} = \mathbf{A} + (- \mathbf{B}) \] If $\vec A$ and $\vec B$ are the position vectors of two points $A$ and $B$, then the vector from $A$ to $B$ is $\mathbf{AB}=\vec B - \vec A$.
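A minimal sketch of subtraction via the negative of a vector (helper names are illustrative):

```python
def neg(A):
    """The negative of A: same magnitude, reversed direction."""
    return [-x for x in A]

def sub(A, B):
    """A - B = A + (-B), component-wise."""
    return [x + y for x, y in zip(A, neg(B))]

# Vector from point A to point B: AB = B - A
A, B = [1.0, 1.0, 0.0], [4.0, 5.0, 0.0]
AB = sub(B, A)   # [3.0, 4.0, 0.0]
```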

Note that the vectors are treated as geometrical objects that are independent of any coordinate system.

Analytic method

The geometric method of vector algebra is not very useful for vectors in three dimensions, so we introduce an analytic method. This co-ordinate geometry was introduced by Descartes and Fermat and is in essence a method to associate numbers with vectors. For a vector in a plane we need 2 numbers, and for a vector in space 3. These numbers are called the components or co-ordinates of a vector. Like scalars, these numbers have meaning only in relation to a specific scale. For vectors we call this a co-ordinate system.

We illustrate this process of resolving a vector into components for a 3-dimensional rectangular co-ordinate system with Cartesian co-ordinates. The Cartesian co-ordinate system uniquely determines each point in space through three numbers, $a_x,a_y,a_z$. To define these co-ordinates, three mutually orthogonal directed lines (i.e. each at a right angle to the others) are specified: the $x$-axis (abscissa), the $y$-axis (ordinate) and the $z$-axis (applicate). A variant is to allow for "oblique" axes, that is, axes that do not meet at right angles. The point of intersection, where the axes meet, is called the origin, normally labeled $O$. Each of the co-ordinate axes has a positive and a negative orientation starting from $O$. Along each of these axes the unit length for measurement is specified by three unit vectors: $\mathbf{\hat{e}}_x,\mathbf{\hat{e}}_y,\mathbf{\hat{e}}_z$. With any point $A$ in space we can associate a vector $\vec{r}=\mathbf{OA}$, called the position vector. We drop perpendicular lines from the head of $\mathbf{OA}$ to these axes. The quantities $a_x,a_y,a_z$ are then called the components of the vector $\vec{r}$, denoted by the column vector \[ \boldsymbol{x}=\left[ \begin{array}{c} a_x \\ a_y \\ a_z \end{array} \right]\] or the co-ordinates of the point $A$ denoted by tuple notation: $\left(a_x,a_y,a_z\right)$. We will use tuple notation in our text. Notice that $\vec{r}$ is the vector sum of the unit vectors scaled by the components: \[ \vec{r}=a_x\mathbf{\hat{e}}_x+a_y\mathbf{\hat{e}}_y+a_z\mathbf{\hat{e}}_z \] Note: A unit vector is defined by: \[ \mathbf{\hat{x}}=\frac{\vec x}{\norm{\vec x}} \]
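The unit-vector definition above can be sketched directly in code (`norm` and `unit` are our own helper names, assuming a nonzero vector):

```python
import math

def norm(x):
    """Euclidean length of x."""
    return math.sqrt(sum(c * c for c in x))

def unit(x):
    """x_hat = x / ||x||; assumes x is not the null vector."""
    n = norm(x)
    return [c / n for c in x]

unit([3.0, 4.0, 0.0])   # -> [0.6, 0.8, 0.0]
```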

We introduce the Einstein convention.

Einstein convention

Whenever the same index symbol appears in a term of an algebraic expression both as a subscript and a superscript, the expression is to be summed over the range of that index; such an index is called a dummy index.
We now use the symbols $x^1,x^2,x^3$ for the components and change the subscripts of the unit vectors from $x,y,z$ to $1,2,3$. Then with the Einstein convention applied we get: \[ \vec{r}=x^1\mathbf{\hat{e}}_1+x^2\mathbf{\hat{e}}_2+x^3\mathbf{\hat{e}}_3=\sum_{i=1}^3{x^i\mathbf{\hat{e}}_i}=x^i\mathbf{\hat{e}}_i \] With this new syntax we easily extend our algebraic definition of a vector to any dimension $n$, by letting the index $i$ run from $1$ to $n$: \[ \vec{r}=\sum_{i=1}^n{x^i\mathbf{\hat{e}}_i}=x^i\mathbf{\hat{e}}_i \]
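The implicit sum $x^i\mathbf{\hat{e}}_i$ can be made explicit for any dimension $n$; the sketch below (with an illustrative `contract` helper of our own) expands the dummy index:

```python
def contract(x, e):
    """r = x^i e_i: sum over the repeated index i, for any dimension n.

    x is a list of n components; e is a list of n basis vectors.
    """
    dim = len(e[0])
    return [sum(x[i] * e[i][k] for i in range(len(x))) for k in range(dim)]

# With the standard basis of R^3 we recover the components themselves:
e1, e2, e3 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]
r = contract([2.0, 3.0, 5.0], [e1, e2, e3])   # -> [2.0, 3.0, 5.0]
```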

Orthogonal projection

The orthogonal projection $\mathbf{P_B}(A)$ of a vector $\vec A$ onto a vector $\vec B$ is defined as follows:

Orthogonal projection

\[\mathbf{P_{B}}(\vec A)=\norm{\vec A} \cos \theta \,\mathbf{\hat{B}}\] with $\theta$ the angle between $\vec A$ and $\vec B$, and $\mathbf{\hat{B}}$ a unit vector in the direction of $\vec B$.

If the axes $\{\mathbf{\hat{e}}_1,\mathbf{\hat{e}}_2,...,\mathbf{\hat{e}}_n\}$ of a co-ordinate system are mutually perpendicular, $\cos \alpha_{ij}=\delta_{ij}$, and of unit length, $\norm{\mathbf{\hat{e}}_i}=1$, then we call this an orthonormal co-ordinate system. In an orthonormal co-ordinate system we can specify each vector $\vec v$ as the sum of the orthogonal projections of $\vec v$ on each of its co-ordinate axes: \[ \vec v = \norm{\vec v} \cos \theta^1 \mathbf{\hat{e}}_1 +\norm{\vec v} \cos \theta^2 \mathbf{\hat{e}}_2 + \dots + \norm{\vec v} \cos \theta^n \mathbf{\hat{e}}_n =\norm{\vec v} \cos \theta^i \mathbf{\hat{e}}_i \]
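The projection formula can be sketched numerically when the angle $\theta$ is known (helper names are our own):

```python
import math

def norm(v):
    """Euclidean length of v."""
    return math.sqrt(sum(c * c for c in v))

def project(A, theta, B):
    """P_B(A) = ||A|| cos(theta) * B_hat, with theta the angle between A and B."""
    s = norm(A) * math.cos(theta)
    nB = norm(B)
    return [s * b / nB for b in B]

# Projecting (1,1) onto the x-axis (theta = 45 degrees) gives (1,0) up to rounding:
P = project([1.0, 1.0], math.pi / 4, [2.0, 0.0])
```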

Note $\delta_{ij}$ is called the Kronecker delta defined by: \[ \delta_{ij}=\begin{cases} 1, & \mbox{if }i=j \\ 0, & \mbox{if }i\neq j\end{cases} \]

Analytic operations

We now define the geometric operations in terms of analytic co-ordinates in an n-dimensional Euclidean space $\mathbb{R}^n$ with an orthonormal co-ordinate system.

Let $\vec A=(a^1,a^2,...,a^n)$, $\vec B=(b^1,b^2,...,b^n)$ be vectors and $\alpha$ a real number. \[\vec A+\vec B=(a^1+b^1,a^2+b^2,...,a^n+b^n)\] \[\alpha \vec A = (\alpha a^1, \alpha a^2,...,\alpha a^n)\] \[\norm{\vec A}=\left((a^1)^2+(a^2)^2+ \dots + (a^n)^2\right)^\tfrac{1}{2}\]
These operations satisfy the following laws (which follow directly from the laws of real numbers and the definition of the vector operations):

  • Closure law: If $\vec A$ and $\vec B$ are elements of $\mathbb{R}^n$ then so is $\vec A+\vec B$.
  • Commutative law of addition: $\vec A + \vec B = \vec B + \vec A$
  • Associative law of addition $(\vec A + \vec B)+\vec C=\vec A + (\vec B+\vec C)$
  • Identity element for addition: There exists a unique vector $\vec 0$ such that $\vec A + \vec 0= \vec A$
  • Additive inverse: For each $\vec A$ there exists $(-\vec A)$ such that $\vec A+(-\vec A)=\vec 0$
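A small sketch tying the analytic definitions and laws together for $\mathbb{R}^n$ (all helper names are illustrative, not from any library):

```python
import math

def add(A, B):
    """Component-wise vector addition."""
    return [a + b for a, b in zip(A, B)]

def scale(alpha, A):
    """Multiplication by a real number alpha."""
    return [alpha * a for a in A]

def norm(A):
    """||A|| = (sum of squared components)^(1/2)."""
    return math.sqrt(sum(a * a for a in A))

A, B, C = [1.0, 2.0, 2.0], [3.0, 0.0, 4.0], [1.0, 1.0, 1.0]
zero = [0.0, 0.0, 0.0]

assert add(A, B) == add(B, A)                    # commutative law
assert add(add(A, B), C) == add(A, add(B, C))    # associative law
assert add(A, zero) == A                         # identity element
assert add(A, scale(-1.0, A)) == zero            # additive inverse
assert norm(A) == 3.0                            # (1 + 4 + 4)^(1/2)
```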

Note however that vectors are geometrical objects independent of the co-ordinate system chosen. The components of a vector in one co-ordinate system can be transformed into the components in another system by a transformation rule (translation or rotation) without changing the vector. The main purpose of using vectors in physics is to describe physical quantities and their relations in a coordinate-independent (invariant) way. In addition, vector notation is more compact, since one vector equation contains the relationship for each dimension involved.