
Section 8.1 Gradient Vector Fields and Potentials

In this section we want to characterise vector fields in which path independence holds. A precise definition of path independence is given below.

Definition 8.1. Path independence.

Suppose that \(\vect f\) is a continuous vector field on an open set \(D\subset\mathbb R^N\text{.}\) We say that line integrals are path independent in \(\vect f\) if
\begin{equation*} \int_C\vect f\cdot\,d\vect x \end{equation*}
has the same value for every piecewise smooth curve \(C\) in \(D\) with the same start and end points. A vector field for which line integrals are path independent is called a conservative vector field, or simply a conservative field.
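Path independence can be illustrated numerically. The following sketch (not part of the formal development; the field, curves, and helper names are illustrative choices) approximates the line integral of \(\vect f(x,y)=(2xy,\,x^2)\) along two different smooth curves from \((0,0)\) to \((1,1)\); since this field turns out to be conservative, both values agree.

```python
# A numerical sketch of path independence (illustrative, not part of the text).
# The field f(x, y) = (2xy, x^2) is conservative: it is the gradient of
# V(x, y) = x^2 * y, so line integrals between fixed endpoints should agree
# for every smooth curve joining them.

def f(x, y):
    return (2.0 * x * y, x * x)

def line_integral(field, gamma, dgamma, a, b, n=10_000):
    """Midpoint-rule approximation of the line integral of `field`
    along the parametrisation `gamma` with derivative `dgamma` on [a, b]."""
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h
        fx, fy = field(*gamma(t))
        dx, dy = dgamma(t)
        total += (fx * dx + fy * dy) * h
    return total

# Two different smooth curves from (0, 0) to (1, 1).
straight = line_integral(f, lambda t: (t, t), lambda t: (1.0, 1.0), 0.0, 1.0)
parabola = line_integral(f, lambda t: (t, t * t), lambda t: (1.0, 2.0 * t), 0.0, 1.0)
print(straight, parabola)  # both close to 1 = V(1,1) - V(0,0)
```

For a non-conservative field, such as \((−y, x)\), the two integrals would differ.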
Using Remark 7.17 we easily deduce the following proposition.
We will show in this section that vector fields in which line integrals are path independent are what we call gradient vector fields.

Definition 8.3. Gradient field and potentials.

Suppose that \(D\subset\mathbb R^N\) is open, and that \(\vect f\) is a continuous vector field on \(D\text{.}\) We say that \(\vect f\) is a gradient vector field, or simply a gradient field, if there exists a function \(V\colon D\to\mathbb R\) such that
\begin{equation*} \vect f(\vect x)=\grad V(\vect x) \end{equation*}
for all \(\vect x\in D\text{.}\) The function \(V\) is called a potential for \(\vect f\).
Note that a potential of a vector field is never unique! It is determined only up to an additive constant.
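This non-uniqueness is easy to see numerically. In the sketch below (the field and potential are illustrative choices), a central finite difference approximates the gradient of \(V(x,y)=x^2y\) and of \(V+5\); both agree with \(\vect f(x,y)=(2xy,\,x^2)\).

```python
# Sketch: shifting a potential by a constant does not change its gradient,
# so a potential is determined only up to an additive constant.
# Illustrative example: V(x, y) = x^2 * y with grad V = (2xy, x^2).

def V(x, y):
    return x * x * y

def grad(phi, x, y, h=1e-6):
    """Central finite-difference approximation of the gradient of phi."""
    return ((phi(x + h, y) - phi(x - h, y)) / (2 * h),
            (phi(x, y + h) - phi(x, y - h)) / (2 * h))

p = (0.7, -0.3)
g = grad(V, *p)                                   # grad of V
g_shifted = grad(lambda x, y: V(x, y) + 5.0, *p)  # grad of V + 5
exact = (2.0 * p[0] * p[1], p[0] * p[0])          # f at p
print(g, g_shifted, exact)
```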
We next show that path independence holds in gradient vector fields. More precisely, if \(V\) is a potential for \(\vect f\) on \(D\text{,}\) then \(\int_C\vect f\cdot\,d\vect x=V(\vect b)-V(\vect a)\) for every piecewise smooth curve \(C\) in \(D\) from \(\vect a\) to \(\vect b\text{.}\)
The statement of the theorem looks very similar to the fundamental theorem of calculus. In fact, its proof is based on the fundamental theorem of calculus. Suppose that \(\vect\gamma(t)\text{,}\) \(t\in[\alpha,\beta]\) is a regular parametrisation of a smooth curve, \(C\text{,}\) connecting \(\vect a\) and \(\vect b\) in \(D\text{.}\) Hence \(\vect\gamma(\alpha)=\vect a\) and \(\vect\gamma(\beta)=\vect b\text{.}\) We then define the function \(\varphi\) by
\begin{equation*} \varphi(t):=V(\vect\gamma(t)) \end{equation*}
for all \(t\in[\alpha,\beta]\text{.}\) By the chain rule (see Theorem 4.17) we deduce that
\begin{equation*} \frac{d}{dt}\varphi(t) =\grad V(\vect\gamma(t))\cdot\vect\gamma'(t)\text{.} \end{equation*}
As \(V\) is a potential for \(\vect f\) we have \(\vect f=\grad V\) and thus
\begin{equation*} \frac{d}{dt}\varphi(t) =\grad V(\vect\gamma(t))\cdot\vect\gamma'(t) =\vect f(\vect\gamma(t))\cdot\vect\gamma'(t)\text{.} \end{equation*}
Now by definition of the line integral and the fundamental theorem of calculus we see that
\begin{equation*} \int_C\vect f\cdot\,d\vect x =\int_\alpha^\beta\vect f(\vect\gamma(t))\cdot\vect\gamma'(t)\,dt =\int_\alpha^\beta\frac{d}{dt}\varphi(t)\,dt =\varphi(\beta)-\varphi(\alpha)\text{.} \end{equation*}
By definition of \(\varphi\) and the fact that \(\vect\gamma(\alpha)=\vect a\) and \(\vect\gamma(\beta)=\vect b\) we finally have
\begin{equation*} \int_C\vect f\cdot\,d\vect x =\varphi(\beta)-\varphi(\alpha) =V(\vect\gamma(\beta))-V(\vect\gamma(\alpha)) =V(\vect b)-V(\vect a) \end{equation*}
as required. If \(C\) is only piecewise smooth, then we apply the above calculation to each smooth part. Adding up the results, the intermediate terms cancel and only the first and the last remain, whence the assertion of the theorem.
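The identity just proved can be checked numerically. In this sketch (the potential and curve are illustrative choices), the line integral of \(\grad V\) for \(V(x,y)=y\sin x\) along \(\vect\gamma(t)=(t,t^3)\), \(t\in[0,1]\), is compared with \(V(1,1)-V(0,0)=\sin 1\).

```python
# Numerical check of the identity just proved: for a gradient field grad V,
# the line integral along any smooth curve from a to b equals V(b) - V(a).
# Illustrative choice: V(x, y) = y * sin(x), grad V = (y*cos(x), sin(x)),
# and the curve gamma(t) = (t, t^3) from (0, 0) to (1, 1).
import math

def grad_V(x, y):
    return (y * math.cos(x), math.sin(x))

def V(x, y):
    return y * math.sin(x)

n = 20_000
h = 1.0 / n
integral = 0.0
for k in range(n):
    t = (k + 0.5) * h            # midpoint rule on [0, 1]
    x, y = t, t ** 3             # gamma(t)
    dx, dy = 1.0, 3.0 * t ** 2   # gamma'(t)
    fx, fy = grad_V(x, y)
    integral += (fx * dx + fy * dy) * h

print(integral, V(1.0, 1.0) - V(0.0, 0.0))  # both approximately sin(1)
```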
We just saw that gradient vector fields are conservative fields. We now ask whether, conversely, a conservative field, \(\vect f\text{,}\) is a gradient field. This in fact turns out to be the case. What we need to do is to construct a potential for \(\vect f\text{.}\) The idea to do so is as follows. To start with, suppose that \(\vect f\) is a gradient vector field. Fix a point, \(\vect a\text{,}\) in the domain, \(D\subset\mathbb R^N\text{,}\) of \(\vect f\text{,}\) and suppose that \(C_{\vect x}\) is a smooth curve connecting \(\vect a\) to \(\vect x\text{.}\) According to the above theorem
\begin{equation*} V(\vect x)=V(\vect a)+\int_{C_{\vect x}}\vect f\cdot\,d\vect x, \end{equation*}
so up to a constant \(V\) is a line integral along a curve connecting \(\vect a\) to \(\vect x\text{.}\) Replacing \(V(\vect x)\) by \(V(\vect x)-V(\vect a)\) we have
\begin{equation} V(\vect x)=\int_{C_{\vect x}}\vect f\cdot\,d\vect x\text{.}\tag{8.1} \end{equation}
The point \(\vect a\) is called a reference point for the potential as \(V(\vect a)=0\text{.}\) If \(\vect f\) is a conservative vector field, that is, a field for which we have path independence, we can define a function, \(V\text{,}\) by (8.1) by fixing \(\vect a\) and choosing for every \(\vect x\in D\) a smooth curve, \(C_{\vect x}\text{,}\) connecting \(\vect a\) to \(\vect x\text{.}\) By assumption the function \(V\) is well defined as it does not depend on the particular choice of \(C_{\vect x}\text{.}\) Of course, we need to assume that such a curve \(C_{\vect x}\) exists for each \(\vect x\in D\text{.}\) Open sets having this property are called connected domains. We hope that \(V\) defined as above is a potential for \(\vect f\text{.}\) The following theorem, a converse to Theorem 8.4, shows that this is in fact the case.
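The construction (8.1) can be carried out numerically when the curves \(C_{\vect x}\) are taken to be straight segments from the reference point (which requires that these segments lie in \(D\text{;}\) here we take \(D=\mathbb R^2\) and the illustrative field \(\vect f(x,y)=(2xy,\,x^2)\)). The sketch below recovers the potential \(x^2y\) with \(V(\vect a)=0\) at \(\vect a=(0,0)\text{.}\)

```python
# Sketch of the construction (8.1): define a candidate potential by
# integrating f along the straight segment from the reference point
# a = (0, 0) to x.  Illustrative field f(x, y) = (2xy, x^2), whose
# potential with reference point (0, 0) is V(x, y) = x^2 * y.

def f(x, y):
    return (2.0 * x * y, x * x)

def potential(x, y, n=10_000):
    """V(x) = line integral of f along gamma(t) = t*(x, y), t in [0, 1],
    i.e. the segment from (0, 0) to (x, y), by the midpoint rule."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        fx, fy = f(t * x, t * y)       # f(gamma(t))
        total += (fx * x + fy * y) * h  # gamma'(t) = (x, y)
    return total

print(potential(1.0, 1.0), potential(0.5, -2.0))  # ~ x^2 * y at each point
```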
Denote by \(\vect e_i\) the \(i\)-th standard basis vector of \(\mathbb R^N\text{.}\) Given \(V\) as above we want to show that for all \(\vect x\in D\)
\begin{equation} \frac{\partial}{\partial x_i}V(\vect x) =\lim_{s\to 0}\frac{V(\vect x+s\vect e_i)-V(\vect x)}{s} =f_i(\vect x)\text{,}\tag{8.2} \end{equation}
exists for all \(i=1,\dots,N\text{.}\) Here, as usual, \(f_i\) is the \(i\)-th component function of \(\vect f\text{.}\) If we are able to prove the above then clearly \(\vect f=\grad V\text{,}\) and we are done. Now fix \(\vect x\in D\) and \(i\in\{1,\dots,N\}\text{.}\) Suppose that \(C_0\) and \(C_s\) are smooth curves connecting the reference point \(\vect a\) to \(\vect x\) and \(\vect x+s\vect e_i\text{,}\) respectively. Moreover, let \(\Sigma_s\) denote the line segment joining \(\vect x\) and \(\vect x+s\vect e_i\text{.}\) As \(D\) is open, \(\Sigma_s\) is completely contained in \(D\) if \(s\) is sufficiently small. The situation is depicted in Figure 8.6.
Figure 8.6. Paths connecting to the reference point.
As line integrals are path independent in \(\vect f\) by assumption, we have
\begin{equation*} \int_{C_s}\vect f\cdot\,d\vect x =\int_{C_0}\vect f\cdot\,d\vect x +\int_{\Sigma_s}\vect f\cdot\,d\vect x\text{.} \end{equation*}
Using the definition (8.1) of \(V\) this implies that
\begin{equation*} V(\vect x+s\vect e_i)=V(\vect x) +\int_{\Sigma_s}\vect f\cdot\,d\vect x, \end{equation*}
and therefore, rearranging the terms and dividing by \(s\) we get
\begin{equation} \frac{V(\vect x+s\vect e_i)-V(\vect x)}{s} =\frac{1}{s}\int_{\Sigma_s}\vect f\cdot\,d\vect x\text{.}\tag{8.3} \end{equation}
We next compute the limit of the right hand side as \(s\) tends to zero. To do so we parametrise \(\Sigma_s\) by \(\vect\gamma(t):=\vect x+t\vect e_i\text{,}\) \(t\in[0,s]\text{.}\) Then \(\vect\gamma'(t)=\vect e_i\) for all \(t\text{.}\) As \(\vect f\cdot\vect e_i=f_i\) we get
\begin{equation*} \frac{1}{s}\int_{\Sigma_s}\vect f\cdot\,d\vect x =\frac{1}{s} \int_0^s\vect f(\vect\gamma(t))\cdot\vect\gamma'(t)\,dt =\frac{1}{s}\int_0^sf_i(\vect x+t\vect e_i)\,dt\text{.} \end{equation*}
The limit of the last expression as \(s\to 0\) is the derivative of the function \(g(s):=\int_0^sf_i(\vect x+t\vect e_i)\,dt\) at \(s=0\text{.}\) By the fundamental theorem of calculus we therefore have that
\begin{equation*} \lim_{s\to 0}\frac{1}{s}\int_0^sf_i(\vect x+t\vect e_i)\,dt =g'(0)=f_i(\vect x+0\vect e_i)=f_i(\vect x)\text{.} \end{equation*}
Together with (8.3) it follows that (8.2) is true, completing the proof of the theorem.
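The averaging limit at the heart of this argument is easy to observe numerically. In the sketch below (the component function and base point are illustrative choices), the averages \(\frac{1}{s}\int_0^s f_i(\vect x+t\vect e_i)\,dt\) approach \(f_i(\vect x)\) as \(s\) shrinks.

```python
# Numerical illustration of the limit in (8.3): the averages
# (1/s) * integral_0^s f_i(x + t e_i) dt converge to f_i(x) as s -> 0.
# Illustrative choice: f_1(x, y) = 2xy at the point (0.5, 0.8).

def f1(x, y):
    return 2.0 * x * y

def average(x, y, s, n=1_000):
    """Midpoint-rule approximation of (1/s) * integral_0^s f1(x + t, y) dt."""
    h = s / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += f1(x + t, y) * h
    return total / s

x0, y0 = 0.5, 0.8
errors = [abs(average(x0, y0, s) - f1(x0, y0)) for s in (0.1, 0.01, 0.001)]
print(errors)  # shrinking roughly in proportion to s
```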
The above construction is not very useful for determining whether a given vector field is conservative. In the next section we study conditions which guarantee that a vector field is a gradient field.