In this section we want to characterise vector fields in which path independence holds. A precise definition of path independence is given below.
Definition 8.1. Path independence.
Suppose that \(\vect f\) is a continuous vector field on an open set \(D\subset\mathbb R^N\text{.}\) We say that line integrals are path independent in \(\vect f\) if
\begin{equation*}
\int_C\vect f\cdot\,d\vect x
\end{equation*}
has the same value for every piecewise smooth curve \(C\) in \(D\) with the same start and end points. A vector field for which line integrals are path independent is called a conservative vector field, or simply a conservative field.
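As a simple illustration of the definition, consider the field \(\vect f(x,y):=(y,x)\) on \(\mathbb R^2\) and two curves from \((0,0)\) to \((1,1)\text{:}\) the straight segment \(C_1\text{,}\) parametrised by \(\vect\gamma_1(t):=(t,t)\text{,}\) \(t\in[0,1]\text{,}\) and the curve \(C_2\) running first along the \(x\)-axis to \((1,0)\) and then vertically up to \((1,1)\text{.}\) A direct computation gives
\begin{equation*}
\int_{C_1}\vect f\cdot\,d\vect x
=\int_0^1(t,t)\cdot(1,1)\,dt
=1
\qquad\text{and}\qquad
\int_{C_2}\vect f\cdot\,d\vect x
=\int_0^1(0,t)\cdot(1,0)\,dt+\int_0^1(t,1)\cdot(0,1)\,dt
=0+1
=1\text{,}
\end{equation*}
so both curves yield the same value. Checking two curves does not prove path independence, but we will see below that this field is indeed conservative because \(\vect f=\grad V\) with \(V(x,y)=xy\text{.}\)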
Using Remark 7.17 we easily deduce the following proposition.
Proposition 8.2. Conservative vector fields.
A vector field, \(\vect f\text{,}\) on \(D\subset\mathbb R^N\) is conservative if and only if
\begin{equation*}
\int_C\vect f\cdot\,d\vect x=0
\end{equation*}
for every piecewise smooth closed curve \(C\) in \(D\text{.}\)
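For example, the field \(\vect f(x,y):=(-y,x)\) on \(\mathbb R^2\) is not conservative: for the unit circle \(C\text{,}\) parametrised by \(\vect\gamma(t):=(\cos t,\sin t)\text{,}\) \(t\in[0,2\pi]\text{,}\) we have
\begin{equation*}
\int_C\vect f\cdot\,d\vect x
=\int_0^{2\pi}(-\sin t,\cos t)\cdot(-\sin t,\cos t)\,dt
=\int_0^{2\pi}1\,dt
=2\pi\neq 0\text{.}
\end{equation*}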
We will show in this section that vector fields in which line integrals are path independent are what we call gradient vector fields.
Definition 8.3. Gradient field and potentials.
Suppose that \(D\subset\mathbb R^N\) is open, and that \(\vect f\) is a continuous vector field on \(D\text{.}\) We say that \(\vect f\) is a gradient vector field, or simply a gradient field, if there exists a function \(V\colon D\to\mathbb R\) such that
\begin{equation*}
\vect f(\vect x)=\grad V(\vect x)
\end{equation*}
for all \(\vect x\in D\text{.}\) The function \(V\) is called a potential for \(\vect f\text{.}\)
Note that a potential of a vector field is never unique: if \(V\) is a potential for \(\vect f\text{,}\) then so is \(V+c\) for every constant \(c\text{.}\) A potential is therefore determined only up to an additive constant.
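For example, on \(\mathbb R^2\) the field \(\vect f(x,y):=(2xy,x^2)\) is a gradient field: the function \(V(x,y):=x^2y\) satisfies
\begin{equation*}
\grad V(x,y)=(2xy,x^2)=\vect f(x,y)
\end{equation*}
for all \((x,y)\in\mathbb R^2\text{,}\) and so does \(x^2y+c\) for every constant \(c\text{.}\)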
We next show that path independence holds in gradient vector fields.
Theorem 8.4. Gradient fields are conservative.
Suppose that \(\vect f\) is a continuous gradient vector field on the open set \(D\subset\mathbb R^N\text{.}\) If \(V\) is a potential for \(\vect f\) then for every piecewise smooth curve, \(C\text{,}\) connecting \(\vect a\) to \(\vect b\) in \(D\) we have
\begin{equation*}
\int_C\vect f\cdot\,d\vect x
=V(\vect b)-V(\vect a)\text{.}
\end{equation*}
In particular, \(\vect f\) is a conservative vector field.
The statement of the theorem looks very similar to the fundamental theorem of calculus, and in fact its proof is based on it. Suppose that \(\vect\gamma(t)\text{,}\) \(t\in[\alpha,\beta]\text{,}\) is a regular parametrisation of a smooth curve, \(C\text{,}\) connecting \(\vect a\) and \(\vect b\) in \(D\text{.}\) Hence \(\vect\gamma(\alpha)=\vect a\) and \(\vect\gamma(\beta)=\vect b\text{.}\) We then define the function \(\varphi\) by
\begin{equation*}
\varphi(t):=V(\vect\gamma(t))\text{.}
\end{equation*}
Since \(V\) is a potential for \(\vect f\text{,}\) the chain rule gives
\begin{equation*}
\varphi'(t)=\grad V(\vect\gamma(t))\cdot\vect\gamma'(t)=\vect f(\vect\gamma(t))\cdot\vect\gamma'(t)
\end{equation*}
for all \(t\in[\alpha,\beta]\text{.}\)
Now by definition of the line integral and the fundamental theorem of calculus we see that
\begin{equation*}
\int_C\vect f\cdot\,d\vect x
=\int_\alpha^\beta\vect f(\vect\gamma(t))\cdot\vect\gamma'(t)\,dt
=\int_\alpha^\beta\frac{d}{dt}\varphi(t)\,dt
=\varphi(\beta)-\varphi(\alpha)\text{.}
\end{equation*}
By definition of \(\varphi\) and the fact that \(\vect\gamma(\alpha)=\vect a\) and \(\vect\gamma(\beta)=\vect b\) we finally have
\begin{equation*}
\int_C\vect f\cdot\,d\vect x
=\varphi(\beta)-\varphi(\alpha)
=V(\vect\gamma(\beta))-V(\vect\gamma(\alpha))
=V(\vect b)-V(\vect a)
\end{equation*}
as required. If \(C\) is only piecewise smooth, we apply the above calculation to each smooth part. Adding up the results, the intermediate terms cancel and only the first and the last remain, which proves the assertion of the theorem.
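To see how useful Theorem 8.4 is, take again the field \(\vect f(x,y)=(2xy,x^2)\) with potential \(V(x,y)=x^2y\) from above. For every piecewise smooth curve \(C\) in \(\mathbb R^2\) from \((0,0)\) to \((1,2)\text{,}\) no matter how complicated,
\begin{equation*}
\int_C\vect f\cdot\,d\vect x
=V(1,2)-V(0,0)
=2\text{.}
\end{equation*}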
We just saw that gradient vector fields are conservative fields. We now ask whether, conversely, a conservative field, \(\vect f\text{,}\) is a gradient field. This in fact turns out to be the case. What we need to do is to construct a potential for \(\vect f\text{.}\) The idea is as follows. To start with, suppose that \(\vect f\) is a gradient vector field. Fix a point, \(\vect a\text{,}\) in the domain, \(D\subset\mathbb R^N\text{,}\) of \(\vect f\text{,}\) and suppose that \(C_{\vect x}\) is a smooth curve connecting \(\vect a\) to \(\vect x\text{.}\) According to the above theorem
\begin{equation*}
\int_{C_{\vect x}}\vect f\cdot\,d\vect x
=V(\vect x)-V(\vect a)\text{,}
\end{equation*}
so up to a constant \(V\) is a line integral along a curve connecting \(\vect a\) to \(\vect x\text{.}\) Replacing \(V(\vect x)\) by \(V(\vect x)-V(\vect a)\) we have
\begin{equation}
V(\vect x)=\int_{C_{\vect x}}\vect f\cdot\,d\vect x\text{.}\tag{8.1}
\end{equation}
The point \(\vect a\) is called a reference point for the potential as \(V(\vect a)=0\text{.}\) If \(\vect f\) is a conservative vector field, that is, a field for which we have path independence, we can use (8.1) to define a function, \(V\text{,}\) by fixing \(\vect a\) and choosing for every \(\vect x\in D\) a smooth curve, \(C_{\vect x}\text{,}\) connecting \(\vect a\) to \(\vect x\text{.}\) By assumption the function \(V\) is well defined as it does not depend on the particular choice of \(C_{\vect x}\text{.}\) Of course, we need to assume that such a curve \(C_{\vect x}\) exists for each \(\vect x\in D\text{;}\) open sets having this property are called connected domains. We hope that \(V\) defined in this way is a potential for \(\vect f\text{.}\) The following theorem, a converse to Theorem 8.4, shows that this is in fact the case.
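To see how (8.1) is used in practice, take once more \(\vect f(x,y)=(2xy,x^2)\) with reference point \(\vect a=(0,0)\) and, for each \(\vect x=(x,y)\text{,}\) the straight segment \(C_{\vect x}\) parametrised by \(\vect\gamma(t):=(tx,ty)\text{,}\) \(t\in[0,1]\text{.}\) Then
\begin{equation*}
V(x,y)
=\int_{C_{\vect x}}\vect f\cdot\,d\vect x
=\int_0^1\bigl(2(tx)(ty),(tx)^2\bigr)\cdot(x,y)\,dt
=\int_0^1 3t^2x^2y\,dt
=x^2y\text{,}
\end{equation*}
which recovers the potential found earlier.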
Theorem 8.5. Conservative fields are gradient fields.
Every conservative field, \(\vect f\text{,}\) defined on an open connected domain, \(D\subset\mathbb R^N\text{,}\) is a gradient vector field. If we fix a reference point \(\vect a\in D\) and choose, for every \(\vect x\in D\text{,}\) a smooth curve \(C_{\vect x}\) connecting \(\vect a\) to \(\vect x\text{,}\) then a potential is given by (8.1).
We need to show that the partial derivative
\begin{equation}
\frac{\partial}{\partial x_i}V(\vect x)
=\lim_{s\to 0}\frac{V(\vect x+s\vect e_i)-V(\vect x)}{s}
=f_i(\vect x)\tag{8.2}
\end{equation}
exists for all \(i=1,\dots,N\text{.}\) Here, as usual, \(f_i\) is the \(i\)-th component function of \(\vect f\text{,}\) and \(\vect e_i\) is the \(i\)-th standard basis vector of \(\mathbb R^N\text{.}\) If we are able to prove the above then clearly \(\vect f=\grad V\text{,}\) and we are done. Now fix \(\vect x\in D\) and \(i\in\{1,\dots,N\}\text{.}\) Suppose that \(C_0\) and \(C_s\) are smooth curves connecting the reference point \(\vect a\) to \(\vect x\) and to \(\vect x+s\vect e_i\text{,}\) respectively. Moreover, let \(\Sigma_s\) denote the line segment joining \(\vect x\) and \(\vect x+s\vect e_i\text{.}\) As \(D\) is open, \(\Sigma_s\) is completely contained in \(D\) if \(s\) is sufficiently small. The situation is depicted in Figure 8.6.
As line integrals are path independent in \(\vect f\) by assumption, we have
\begin{equation*}
\int_{C_s}\vect f\cdot\,d\vect x
=\int_{C_0}\vect f\cdot\,d\vect x
+\int_{\Sigma_s}\vect f\cdot\,d\vect x\text{.}
\end{equation*}
Using the definition (8.1) of \(V\) this implies that
\begin{equation}
\frac{V(\vect x+s\vect e_i)-V(\vect x)}{s}
=\frac{1}{s}\int_{\Sigma_s}\vect f\cdot\,d\vect x\text{.}\tag{8.3}
\end{equation}
We next compute the limit of the right hand side as \(s\) tends to zero. To do so we parametrise \(\Sigma_s\) by \(\vect\gamma(t):=\vect x+t\vect e_i\text{,}\) \(t\in[0,s]\text{.}\) Then \(\vect\gamma'(t)=\vect e_i\) for all \(t\text{.}\) As \(\vect f\cdot\vect e_i=f_i\) we get
\begin{equation*}
\frac{1}{s}\int_{\Sigma_s}\vect f\cdot\,d\vect x
=\frac{1}{s}\int_0^s\vect f(\vect x+t\vect e_i)\cdot\vect e_i\,dt
=\frac{1}{s}\int_0^sf_i(\vect x+t\vect e_i)\,dt\text{.}
\end{equation*}
The limit of the last expression as \(s\to 0\) is the derivative of the function \(g(s):=\int_0^sf_i(\vect x+t\vect e_i)\,dt\) at \(s=0\text{.}\) By the fundamental theorem of calculus we therefore have that
\begin{equation*}
\lim_{s\to 0}\frac{1}{s}\int_0^sf_i(\vect x+t\vect e_i)\,dt
=g'(0)
=f_i(\vect x)\text{.}
\end{equation*}
Together with (8.3) it follows that (8.2) is true, completing the proof of the theorem.
The above characterisation is not very useful for determining whether a given vector field is conservative or not. In the next section we study conditions which guarantee that a vector field is a gradient field.