This document is written in reference to qualifying exams given at the University of Louisville in past years. These solutions are not provided by the University; they are my own work, written as a way to study for my own qualifying exam. If any tips or recommendations come up and you feel you should share them, feel free to raise an issue on GitHub, where I have this document saved and open to the public here. To see the qualifying exams for yourself, visit this link. References to the Perko book are to Differential Equations and Dynamical Systems, 3rd Edition. Thank you for reading, and for all advice.
Jacob Townson
Mechanical Vibrations: Newton’s Laws, spring-mass systems, two-mass oscillators, friction, damping, pendulum, linear stability and equilibria, energy analysis, phase plane analysis, nonlinear oscillations, control oscillations, inverse problem
Traffic Flow: Velocities and velocity fields, traffic flow and density, conservation laws, linear and nonlinear car-following models, steady state, first order partial differential equations, green light models and rarefaction solution, shock waves, highway with entrance, traffic wave propagation, optimization problem. (NOTE: WE DID NOT COVER TRAFFIC FLOW IN CLASS, THUS IT IS LIKELY TO NOT BE ON THE QUAL THIS SUMMER [2018])
Dynamical Systems: Nonlinear systems in the plane, interacting species, limit cycles, Hamiltonian systems, Liapunov functions and stability, bifurcation theory, three-dimensional autonomous system and chaos, Poincare maps and nonautonomous systems in the plane, linear discrete dynamical systems
This implies that \[u_1(t) = x_0 + \int_0 ^t f(x_0) ds = 1+ \int_0 ^t 1 ds = 1+t\], \[u_2(t) = 1 + \int_0 ^t f(1+s)ds = 1+ \int_0 ^t (s^2 + 2s + 1)ds\] \[= 1 + \frac{t^3}{3} + t^2 + t\],
\[u_3(t) = 1+ \int^t _0 f \left(1+s+s^2 +\frac{s^3}{3} \right)ds\] \[ = 1+ \int^t _0 \left( 1+2s+3s^2+\frac{8s^3}{3} + \frac{5s^4}{3}+\frac{2s^5}{3}+\frac{s^6}{9} \right)ds\] \[=1+t+t^2+t^3+\frac{2t^4}{3}+\frac{t^5}{3}+\frac{t^6}{9}+\frac{t^7}{63}\]
Base case: Above, this holds for \(u_1,u_2,\) and \(u_3\). Suppose it is true for \(n-1\), i.e. \[u_{n-1}(t) = 1+t+...+t^{n-1}+O(t^n)\] Then \(u_n = x_0 + \int^t _0 f(u_{n-1} (s))ds\). Thus \[u_n = 1+ \int^t _0 f\left(1+s+s^2 +...+s^{n-1} + O(s^n)\right)ds\] \[= 1+\int^t _0 \left( 1+ 2s +3s^2 + ... + n s^{n-1} + O(s^n) \right)ds\] \[=1+t+t^2 +t^3+...+t^n + O(t^{n+1})\] as needed. QED
\(\dot x =x^2\) and \(x(0) = 1\). So \(\int \frac{1}{x^2} dx = \int 1dt\) which implies that \(\frac{-1}{x} = t+c\) giving us \(x = \frac{-1}{t+c}\). Then we can see that using our initial condition, we get that \(x(0) = - \frac{1}{0+c}\). So \[1 = -\frac{1}{c} \implies c = -1\] Thus \[x(t) = - \frac{1}{t-1} = \frac{1}{1-t}\]
Note \(x(t)\) is undefined at \(t=1\), so by definition of a solution, \(x(t)\) is only a solution on the maximal interval \((- \infty , 1)\) containing the initial time \(t = 0\).
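As a quick sanity check, the Picard iterates can be generated symbolically and compared with the Taylor series of the exact solution \(\frac{1}{1-t}\); below is a minimal sympy sketch (a verification aid, not part of the exam solution).

```python
# Sketch: Picard iterates for x' = x^2, x(0) = 1, compared against 1/(1 - t).
import sympy as sp

t, s = sp.symbols('t s')
f = lambda w: w**2          # right-hand side of the ODE
u = sp.Integer(1)           # u_0(t) = x_0 = 1

for n in range(1, 5):
    # Picard iteration: u_n(t) = x_0 + int_0^t f(u_{n-1}(s)) ds
    u = 1 + sp.integrate(f(u.subs(t, s)), (s, 0, t))
    # u_n should agree with 1/(1 - t) = 1 + t + t^2 + ... up to order t^n
    err = sp.series(u - 1/(1 - t), t, 0, n + 1).removeO()
    print(n, sp.simplify(err))   # expect 0 each time
```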
Consider \(\beta = -2\). Then \[\frac{\partial}{\partial x} \left( \mathrm{e}^{-2x} \cdot y \right) + \frac{\partial}{\partial y} \left( \mathrm{e}^{-2x} \cdot (-2x-y-3x^4+y^2) \right)\] \[=-2 \mathrm{e}^{-2x} \cdot y + \mathrm{e}^{-2x} \cdot (-1+2y) = - \mathrm{e}^{-2x}<0\] Now we apply Dulac’s Criteria, and we have proven that there are no limit cycles in \(\mathbb{R}^2\).
Consider the Liapunov function \[V = (x+y-1)^2\] Its derivative along the trajectories of the system is \[\dot V = 2(x+y-1)(\dot x + \dot y) = 2(x+y-1)\left( 1-x- \frac{2xy}{2+x} + \frac{2xy}{2+x} - y \right)\] \[=2(x+y-1)(1-x-y) = -2V \leq 0.\] Thus, by LaSalle's invariance principle, the \(\omega\)-limit set of the system is contained in \(K\).
The right-hand side of the system is zero iff \(x=1\), \(y=0\); thus \((1,0)\) is the unique steady state. On \(K\), \(\dot y\) is negative except at the point \((1,0)\) (because \(\frac{2x}{2+x}<1\) for any \(x<2\)), so \(y\) is decreasing along the trajectories in \(K\), and any solution contained in \(K\) moves toward the point \((1,0)\). It follows that the \(\omega\)-limit set of the system contains the point \((1,0)\) only, i.e. for any initial point in \(\mathbb{D}\), \(\lim_{t \to \infty} (x(t),y(t))=(1,0).\)
In order to prove the global asymptotic stability of \((1,0)\) we also need to prove its (local) Lyapunov stability. This is given by the Lyapunov function \(W(x,y) = V(x,y) + y^2\), since its derivative \[\dot W = -2V + 2y^2 \left( \frac{2x}{2+x}-1 \right)\] is negative definite in some neighborhood of the equilibrium point \((1,0)\).
Note, we chose these \(V\) and \(W\) functions because of the following:
In the first case we know the function must be equal to zero on \(K\) and positive on \(\mathbb{D} \setminus K\). In the second case it was essentially a guess: the simplest possible Lyapunov function is a quadratic form, and we need a positive definite one, but \(V\) by itself is not positive definite. What can we add to \(V\) in order to obtain a positive definite quadratic form? \((x-1)^2\) or \(y^2\), for instance. \(y^2\) is suitable for us because its derivative along trajectories is \(\leq 0\) and the overall derivative \(\dot W\) is negative definite.
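A symbolic spot-check of \(\dot V\) and \(\dot W\) (a sketch; I am assuming the system is \(\dot x = 1 - x - \frac{2xy}{2+x}\), \(\dot y = \frac{2xy}{2+x} - y\), since that is what the computations above use):

```python
# Sketch: check V' = -2V and W' = -2V + 2y^2(2x/(2+x) - 1) for the assumed system
# x' = 1 - x - 2xy/(2+x),  y' = 2xy/(2+x) - y.
import sympy as sp

x, y = sp.symbols('x y')
xdot = 1 - x - 2*x*y/(2 + x)
ydot = 2*x*y/(2 + x) - y

V = (x + y - 1)**2
W = V + y**2
Vdot = sp.diff(V, x)*xdot + sp.diff(V, y)*ydot
Wdot = sp.diff(W, x)*xdot + sp.diff(W, y)*ydot

print(sp.simplify(Vdot + 2*V))                                   # expect 0
print(sp.simplify(Wdot - (-2*V + 2*y**2*(2*x/(2 + x) - 1))))     # expect 0
```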
Let \(\dot x\) and \(\dot y\) be defined as above. Then the equilibrium point of this system is \(x=0\) and \(y=0\). The Jacobian matrix gives us \[Df(0,0) = \left( \begin{matrix}1 & -1\\ 1 & 1\end{matrix}\right)\] The eigenvalues are \(1+i\) and \(1-i\). Hence the origin is an unstable focus, which is the first ingredient for applying the Poincare-Bendixson theorem.
Secondly, let us choose a closed bounded subset given by \[V(x,y) = x^2 + y^2 \leq c\] where \(c\) is a positive constant; this closed set is the disk of radius \(\sqrt c\). We only need to show that there exists a finite value of \(c\) for which the vector field \(f(x,y)\) never leaves this set. On the boundary circle \(V(x,y) = c\), this is the statement \[\nabla V(x,y) \cdot f(x,y) \leq 0\]
Now, \[\nabla V(x,y) \cdot f(x,y) = \frac{\partial V}{\partial x} \dot x + \frac{\partial V}{\partial y} \dot y\]
Therefore \[\nabla V(x,y) \cdot f(x,y) = 2x \left[ x-y-\left( x^2 + \frac{3}{2} y^2 \right) x \right]+ 2y \left[ x+y- \left(x^2+ \frac{1}{2} y^2 \right) y\right]\] \[=2(x^2 + y^2) - 2(x^2 + y^2)^2 + y^4 -x^2 y^2\]
But \[y^4 - x^2 y^2 \leq (x^2 + y^2)^2\]
Therefore \[2(x^2+y^2) - 2(x^2+y^2)^2 +y^4 -x^2y^2 \leq 2(x^2+y^2) - 2(x^2+y^2)^2+(x^2+y^2)^2\] \[= 2(x^2+y^2)-(x^2+y^2)^2\]
This implies that \[\nabla V(x,y) \cdot f(x,y) \leq 2(x^2+y^2) - (x^2+y^2)^2\] and on the boundary circle, where \(x^2+y^2 = c\), the right-hand side equals \(2c-c^2 = (2-c)c\).
Finally if we choose \(c=2\) we are guaranteed that \(\nabla V(x,y) \cdot f(x,y) \leq 0\) which satisfies the second criterion of the Poincare-Bendixson theorem.
Therefore it can be concluded that a limit cycle does indeed exist. QED
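A quick numerical sanity check of the trapping region (a sketch; the system is the one read off from the expansion of \(\nabla V \cdot f\) above):

```python
# Sketch: check that grad V . f <= 0 on the circle x^2 + y^2 = 2 and that a
# trajectory started near the unstable focus stays bounded, for
# x' = x - y - (x^2 + 1.5 y^2) x,  y' = x + y - (x^2 + 0.5 y^2) y.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, u):
    x, y = u
    return [x - y - (x**2 + 1.5*y**2)*x,
            x + y - (x**2 + 0.5*y**2)*y]

theta = np.linspace(0, 2*np.pi, 400)
xc, yc = np.sqrt(2)*np.cos(theta), np.sqrt(2)*np.sin(theta)
fx, fy = f(0, [xc, yc])
print("max of grad V . f on the boundary:", (2*xc*fx + 2*yc*fy).max())  # <= 0

sol = solve_ivp(f, (0, 50), [0.01, 0.0], rtol=1e-8, atol=1e-10)
print("final radius squared:", sol.y[0, -1]**2 + sol.y[1, -1]**2)       # stays <= 2
```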
First off, here it is easy to tell that the only equilibrium point we have here is indeed the origin. Next, let’s discuss the behavior of this solution point.
\[Df(x,y) = \left( \begin{matrix}y & x\\ -2x & -1\end{matrix}\right)\] \[Df(0,0) = \left( \begin{matrix}0 & 0\\ 0 & -1\end{matrix}\right)\]
Thus we can’t show anything from the eigenvalues of \(\lambda_1 = 0\) and \(\lambda_2 = -1\). So we will use the Lyapunov function of \(V = x^2 + y^2\). Then
\[\dot V = 2x(xy) + 2y \left(-y-x^2 \right) = 2x^2y - 2y^2-2x^2 y = -2y^2 \leq 0\]
This is true for all values of \(y\). Thus the origin is stable. For the phase portrait, see figure 1.
Phase Portrait
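A small numerical check (not needed for the argument) that \(V\) never increases along a computed trajectory:

```python
# Sketch: V = x^2 + y^2 should be non-increasing along trajectories of
# x' = x*y,  y' = -y - x^2.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, u):
    x, y = u
    return [x*y, -y - x**2]

sol = solve_ivp(f, (0, 30), [0.5, 0.5], t_eval=np.linspace(0, 30, 301),
                rtol=1e-9, atol=1e-12)
V = sol.y[0]**2 + sol.y[1]**2
print("V non-increasing:", bool(np.all(np.diff(V) <= 1e-9)))
```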
So finally using the second equation, we get that \[0 = \frac{3}{4} y -y- \frac{3}{2}y \cdot \frac{3}{2}y^2 \implies 0=y+9y^3\] which can only happen if \(y=0\). Thus we find that our only possible equilibrium point is the trivial one, \((x,y,z) = (0,0,0)\).
This is a negative definite quadratic form, therefore it is less than \(0\) unless \(x=y=z=0\). Moreover there is some \(k>0\) such that \(\frac{d}{dt}V(x,y,z) \leq -k V(x,y,z)\). Therefore for any initial point \(x(0), y(0),z(0)\), \(V(x,y,z) \leq V(x(0),y(0),z(0)) \mathrm{e}^{-kt}\). In particular, this tells us that \((x,y,z) \to 0\) as \(t \to \infty\). Thus our equilibrium point is indeed globally asymptotically stable.
When \(\epsilon = 0\), we get that \(\ddot x + x - x^3 +x^5 = 0\). So if we let \(\dot x = y\), then \(\dot y = -x^5 + x^3 - x\). Now recall that \(\dot x = \frac{\partial H}{\partial y}\) and \(\dot y = - \frac{\partial H}{\partial x}\). So we get the following:
\[\dot x \partial y = \partial H \implies \frac{1}{2} y ^2 + f(x) = H\]
and \[\dot y \partial x = -\partial H \implies (x^5 -x^3 +x) \partial x = \partial H\] \[\implies H = \int(x^5 -x^3 + x) \partial x = \frac{1}{6}x^6 - \frac{1}{4} x^4 + \frac{1}{2}x^2 + g(y)\]
Putting all of this together, we find that \[H = \frac{1}{2} y^2 + \frac{1}{6}x^6 - \frac{1}{4} x^4 + \frac{1}{2}x^2\].
Now, when \(\epsilon < 0\), and if we set \(\dot x = y\), then we get that \(\dot y = \ddot x = \epsilon y -x + x^3 -x^5\). We know that \((0,0)\) is indeed an equilibrium point. Now we just need to study the stability. Well
\[Df(x,y) = \left( \begin{matrix}0 & 1\\ 3x^2 -5x^4 -1 & \epsilon\end{matrix}\right)\] \[\implies Df(0,0) = \left( \begin{matrix}0 & 1\\ -1 & \epsilon\end{matrix}\right) \implies \lambda_{1,2} = \frac{\epsilon \pm \sqrt{\epsilon^2 -4}}{2}\]
Notice that for small \(|\epsilon|\) (i.e. \(\epsilon^2 < 4\)) the part under the radical is negative, as is \(\epsilon\). So we have two complex eigenvalues both with a negative real part, hence \((0,0)\) is a stable focus. (If \(\epsilon \leq -2\) the eigenvalues are instead real and negative, so the origin is still asymptotically stable.)
Now notice in the same situation above, if \(\epsilon > 0\) (and small, so that \(\epsilon^2 < 4\)), then we get two complex eigenvalues both with a positive real part, implying we have an unstable focus at the origin. Now we just need to show that in this situation, there exists a limit cycle.
Well, assume that all solutions of the system are bounded for \(t > 0\). It follows then that the domain of any such solution contains \([0, \infty)\), and that the \(\omega\)-limit set is compact and nonempty. Let \(L\) stand for the \(\omega\)-limit set of some point \((x_0,y_0)\) sufficiently close to the unstable focus \((0,0)\). By the Poincare-Bendixson theorem, \(L\) is either a limit cycle, an equilibrium, or a cycle of separatrices. \(L\) cannot be an equilibrium: the only equilibrium point in this situation is \((0,0)\), and since the origin is an unstable focus, the only point whose \(\omega\)-limit set is \(\{(0,0)\}\) is \((0,0)\) itself. Nor can \(L\) be a cycle of separatrices, since that would require equilibria lying on the cycle, and the only equilibrium is the focus at the origin, which cannot lie on such a cycle. Consequently, \(L\) is a limit cycle surrounding \((0,0)\).
Because of all of these facts put together, we can indeed say that \(\epsilon = 0\) is a bifurcation point. For \(\epsilon < 0\), \(x = 0\), \(\dot x = 0\) is the only steady state, and it is stable. For \(\epsilon > 0\), the origin is the only steady state, it is unstable, and a cycle emerges around it. Thus this must be a Hopf bifurcation. The bifurcation diagram can be found in figure 2:
Hopf Bifurcation
First we can see here that we have three equilibrium points. \(E_1 = (0,0)\), \(E_2 = (1,0)\), and \(E_3 = \left( \frac{1}{4}, \frac{3}{4} \right)\). As defined in the problem, \(x(0), y(0) > 0\), so only one of our equilibrium points is in the desired area. However, for the sake of practice, we will include the analysis of all equilibrium points in this problem.
First note:
\[Df(x,y) = \left( \begin{matrix}1 -2x-y & -x\\ 4y & 4x-1\end{matrix}\right)\]
Then we get the following:
\[Df(0,0) = \left( \begin{matrix} 1 & 0\\ 0 & -1\end{matrix}\right) \implies \lambda_1 = 1, \lambda_2 = -1\] \[Df(1,0) = \left( \begin{matrix}-1 & -1\\ 0 & 3\end{matrix}\right) \implies \lambda_1 = -1, \lambda_2 = 3\] \[Df\left(\frac{1}{4} , \frac{3}{4} \right) = \left( \begin{matrix}-\frac{1}{4} & -\frac{1}{4}\\ 3 & 0\end{matrix}\right) \implies \lambda_{1,2} = \frac{1}{8} \left(-1 \pm i \sqrt{47} \right)\]
Thus we get a saddle at \((0,0)\) and \((1,0)\), and then a stable focus at \(\left( \frac{1}{4}, \frac{3}{4} \right)\). Because \(\left( \frac{1}{4}, \frac{3}{4} \right)\) is the only point in the first quadrant as we desire in this problem, we can see that for any \(x(0), y(0) >0\), \(\lim_{t \to \infty} (x(t),y(t)) = \left( \frac{1}{4}, \frac{3}{4} \right)\).
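A quick numerical cross-check of these eigenvalues (a sketch; the matrices are copied from the linearization above):

```python
# Sketch: verify the eigenvalues of the three Jacobians computed above.
import numpy as np

jacobians = {
    "(0,0)":     np.array([[1.0, 0.0], [0.0, -1.0]]),
    "(1,0)":     np.array([[-1.0, -1.0], [0.0, 3.0]]),
    "(1/4,3/4)": np.array([[-0.25, -0.25], [3.0, 0.0]]),
}
for point, J in jacobians.items():
    print(point, np.linalg.eigvals(J))
# expected: 1, -1; then -1, 3; then (-1 +/- i*sqrt(47))/8 ~ -0.125 +/- 0.857i
```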
First off we can see here that we have two equilibrium points, \(E_1 = (0,0)\), and \(E_2 = \left( \frac{1}{2},1 \right)\). Since we only care about the non constant positive solutions, we will focus our efforts on \(E_2\).
\[Df\left(x , y \right) = \left( \begin{matrix} 1-y & -x \\ 2y & 2x-1 \end{matrix}\right)\] So, \[Df\left( \frac{1}{2}, 1 \right) = \left( \begin{matrix} 0 & -\frac{1}{2} \\ 2 & 0\end{matrix}\right) \implies \lambda_{1,2} = \pm i\]
Thus, since we get complex eigenvalues with a zero real part, this implies that we have a center at the point \(\left(\frac{1}{2},1 \right)\).
However, eigenvalues with zero real part alone are not enough to show that every non-constant positive solution is periodic, so instead we look for a first integral. First notice that \(\frac{dy}{dx} = \frac{(2x-1)y}{x(1-y)}\), which is separable. If we separate the variables and integrate accordingly, we get \[\ln(y) - y = 2x - \ln(x) +C\] Rearranging, \[\ln(y)-y-2x+\ln(x) = C\] so this quantity is constant along solutions. Its level curves in the positive quadrant are closed curves around \(\left(\frac{1}{2},1\right)\), proving to us that all of the non-constant positive solutions are indeed periodic.
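A numerical spot-check that this quantity really is conserved (a sketch, assuming the system \(\dot x = x(1-y)\), \(\dot y = y(2x-1)\) implied by the Jacobian above):

```python
# Sketch: ln(y) - y - 2x + ln(x) should be (numerically) constant along orbits of
# x' = x(1 - y),  y' = y(2x - 1).
import numpy as np
from scipy.integrate import solve_ivp

def f(t, u):
    x, y = u
    return [x*(1 - y), y*(2*x - 1)]

sol = solve_ivp(f, (0, 40), [1.0, 0.5], t_eval=np.linspace(0, 40, 2001),
                rtol=1e-10, atol=1e-12)
x, y = sol.y
C = np.log(y) - y - 2*x + np.log(x)
print("variation of the conserved quantity:", C.max() - C.min())   # ~ 0
```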
Using the squared radius \(V = x^2 + y^2\) as a Lyapunov function, one finds that \[\dot V = 2V - 8(x^4 + y^4)\] Then using \[\frac{1}{2}(x^2+y^2)^2 + \frac{1}{2}(x^2 - y^2 )^2 = x^4 + y^4 = (x^2+y^2)^2 - 2(xy)^2\] gives us that \[2V-8V^2 \leq \dot V \leq 2V -4V^2\]
This already tells us that the vector field points outward, away from zero, for \(0 < V < \frac{1}{4}\) and inward, toward zero, for \(V > \frac{1}{2}\). Hence an annulus such as \(\frac{1}{5} \leq V \leq 1\) is positively invariant and, provided it contains no equilibria, this is sufficient to apply Poincare-Bendixson; thus the system has a limit cycle.
There are four equilibrium points here, all of which can be found using relatively simple algebra. They are \(E_1 = (0,0), E_2 = (0,1), E_{3,4} = \left( \pm \frac{\sqrt{3}}{2}, - \frac{1}{2} \right)\). The stability of these is found below:
\[Df\left( x, y \right) = \left( \begin{matrix} 2x & 1-2y \\ -1-2y & -2x\end{matrix}\right)\]
\[Df\left( E_1 \right) = \left( \begin{matrix} 0 & 1 \\ -1 & 0\end{matrix}\right) \implies \lambda_{1,2} = \pm i\]
Thus \(E_1\) is a center for the linearization; since the system is Hamiltonian (as shown below), it is a genuine center of the nonlinear system as well.
\[Df\left( E_2 \right) = \left( \begin{matrix} 0 & -1 \\ -3 & 0\end{matrix}\right) \implies \lambda_{1,2} = \pm \sqrt 3\] Thus \(E_2\) is a saddle.
\[Df\left( E_3 \right) = \left( \begin{matrix} \sqrt 3 & 2 \\ 0 & - \sqrt 3\end{matrix}\right) \implies \lambda_{1,2} = \pm \sqrt 3\] Thus \(E_3\) is a saddle.
\[Df\left( E_4 \right) = \left( \begin{matrix} -\sqrt{3} & 2 \\ 0 & \sqrt 3\end{matrix}\right) \implies \lambda_{1,2} = \pm \sqrt 3\] Thus \(E_4\) is a saddle.
\[H(x,y) = \int (y + x^2 - y^2)dy = \frac{1}{2}y^2 + x^2 y - \frac{1}{3}y^3 + f(x)\] Thus \[-\frac{d}{dx} H(x,y) = -\frac{d}{dx} [x^2 y + f(x)] = \dot y = -x-2xy\] which implies \[-2xy -f'(x) = -x-2xy \implies f'(x) = x\] thus \(f(x) = \frac{1}{2}x^2\). So the Hamiltonian is \[H(x,y) = \frac{1}{2}y^2 + x^2 y - \frac{1}{3}y^3 + \frac{1}{2}x^2\]
We want a level curve that connects the saddle points. Check the level through \(\left( \frac{\sqrt 3}{2}, -\frac{1}{2} \right)\): \[H \left( \frac{\sqrt 3}{2}, -\frac{1}{2} \right) = \frac{1}{2} \left( -\frac{1}{2} \right) ^2 + \left( \frac{\sqrt 3}{2} \right) ^2 \left( -\frac{1}{2} \right) - \frac{1}{3}\left(-\frac{1}{2} \right) ^3 + \frac{1}{2}\left( \frac{\sqrt 3}{2} \right)^2= \frac{1}{6}\] By symmetry \(H\left(-\frac{\sqrt 3}{2}, -\frac{1}{2}\right) = \frac{1}{6}\) as well, and \(H(0,1) = \frac{1}{2} - \frac{1}{3} = \frac{1}{6}\). Hence all three saddle points lie on the level set \(H(x,y) = \frac{1}{6}\), which is the curve containing the separatrices that connect them.
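A quick symbolic check of the saddle values of \(H\) (sketch):

```python
# Sketch: evaluate H = y^2/2 + x^2*y - y^3/3 + x^2/2 at the three saddle points;
# they all lie on the level set H = 1/6.
import sympy as sp

x, y = sp.symbols('x y')
H = y**2/2 + x**2*y - y**3/3 + x**2/2
saddles = [(0, 1),
           (sp.sqrt(3)/2, -sp.Rational(1, 2)),
           (-sp.sqrt(3)/2, -sp.Rational(1, 2))]
for p in saddles:
    print(p, sp.simplify(H.subs({x: p[0], y: p[1]})))   # expect 1/6 each time
```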
NOT YET FINISHED
\[\ddot{x}+\dot{x}+x=f(t)\]
The homogeneous solutions \(x_h\) are of the form
\[x_h(t)=\exp\left(-\frac{t}{2}\right)\,\Biggl(a\,\cos\left(\frac{\sqrt{3}}{2}t\right)+b\,\sin\left(\frac{\sqrt{3}}{2}t\right)\Biggr)\,.\tag{*}\]
For \(t \in [0,2T]\), we have \(x(t) = x_h(t) +x_p(t)\) for some appropriate choice \(x_h\) of homogeneous solution, where \(x_p\) is a particular solution. If we let \(\omega:=\frac{\pi}{2T}\) and \(x_p(t) = A \cos(\omega t) + B \sin (\omega t)\), then we find that
\[\ddot{x}_p(t)=-\omega^2\,A\,\cos(\omega t)-\omega^2\,B\,\sin(\omega t)\,,\] and \[\dot{x}_p(t)=\omega\,B\,\cos(\omega t)-\omega\,A\,\sin(\omega t)\,.\]
Hence \[\ddot{x}_p(t)+\dot{x}_p(t)+x_p(t)=\Big((1-\omega^2)\,A+\omega\,B\Big)\,\cos(\omega t)+\Big(-\omega \,A+(1-\omega^2)\,B\Big)\,\sin(\omega t)\,.\]
Then we balance the coefficients, using \(\ddot{x}_p(t)+\dot{x}_p(t)+x_p(t)=f(t)=\frac{\omega}{2}\,\sin(\omega t)\). Therefore
\[(1-\omega^2)\,A+\omega\,B=0\text{ and }-\omega \,A+(1-\omega^2)\,B=\frac{\omega}{2}\,.\]
Thus \[\small(1-\omega^2)^2 A +\omega^2 A=(1-\omega^2)\big((1-\omega^2)\,A+\omega\,B\big)-\omega\big(-\omega \,A+(1-\omega^2)\,B\big)= (1-\omega^2)\cdot 0 -\omega\left(\frac{\omega}{2}\right)\,.\] This gives us \(A=-\frac{\omega^2}{2(1-\omega^2+\omega^4)}\), and so \(B=-\frac{1-\omega^2}{\omega}A=\frac{\omega(1-\omega^2)}{2(1-\omega^2+\omega^4)}\).
From the work above, we find that \[x_p(t)=-\frac{\omega^2}{2(1-\omega^2+\omega^4)}\,\cos(\omega t)+\frac{\omega(1-\omega^2)}{2(1-\omega^2+\omega^4)}\,\sin(\omega t)\,.\]
Thus \(x_p(0)=-\frac{\omega^2}{2(1-\omega^2+\omega^4)}\) and \(\dot{x}_p(0)=\frac{\omega^2(1-\omega^2)}{2(1-\omega^2+\omega^4)}\). Because \(x = x_p + x_h\) with \(x(0) = 0\) and \(\dot x(0) = 0\), we need \(x_h\) such that \(x_h(0)=\frac{\omega^2}{2(1-\omega^2+\omega^4)}\) and \(\dot{x}_h(0)=-\frac{\omega^2(1-\omega^2)}{2(1-\omega^2+\omega^4)}\). From (*), we immediately get \(a=\frac{\omega^2}{2(1-\omega^2+\omega^4)}\). The remaining part is to find \(b\), noting that \(\dot{x}_h(0)=\frac{\sqrt{3}b-a}{2}\). That is, \(b=-\frac{\omega^2(1-2\omega^2)}{2\sqrt{3}(1-\omega^2+\omega^4)}\). Thus we finally arrive at
\[x_h(t)=\exp\left(-\frac{t}{2}\right)\,\Biggl(\frac{\omega^2}{2(1-\omega^2+\omega^4)}\,\cos\left(\frac{\sqrt{3}}{2}t\right)-\frac{\omega^2(1-2\omega^2)}{2\sqrt{3}(1-\omega^2+\omega^4)}\,\sin\left(\frac{\sqrt{3}}{2}t\right)\Biggr)\,.\]
We have now solved the case \(t \in [0,2T]\).
For \(t > 2T\), the solution is a homogeneous solution (the forcing has ended by then). From the work above, we know \(x(t)\) for \(t \in [0,2T]\), and therefore the values \(x(2T)\) and \(\dot x(2T)\). Which homogeneous solution satisfies these new initial conditions?
Observe that \[x(2T)=\small\frac{\omega^2}{2(1-\omega^2+\omega^4)}+\exp(-T)\,\left(\frac{\omega^2}{2(1-\omega^2+\omega^4)}\,\cos\left(\sqrt{3}T\right)-\frac{\omega^2(1-2\omega^2)}{2\sqrt{3}(1-\omega^2+\omega^4)}\,\sin\left(\sqrt{3}T\right)\right)\]
and that \[\dot{x}(2T)=-\small\frac{\omega^2(1-\omega^2)}{2(1-\omega^2+\omega^4)}-\exp(-T)\,\left(\frac{\omega^2(1-\omega^2)}{2(1-\omega^2+\omega^4)}\,\cos\left(\sqrt{3}T\right)+\frac{\omega^2(1+\omega^2)}{2\sqrt{3}(1-\omega^2+\omega^4)}\,\sin\left(\sqrt{3}T\right)\right)\,.\]
We can write \[x(t+2T)=\exp\left(-\frac{t}{2}\right)\,\Biggl(\alpha\,\cos\left(\frac{\sqrt{3}}{2}t\right)+\beta\,\sin\left(\frac{\sqrt{3}}{2}t\right)\Biggr)\,.\]
Then we get \(\alpha = x(2T)\) and \(\frac{\sqrt{3}\beta-\alpha}{2}=\dot{x}(2T)\). This gives you \[x(t+2T)=\exp\left(-\frac{t}{2}\right)\,\Biggl(x(2T)\,\cos\left(\frac{\sqrt{3}}{2}t\right)+\frac{x(2T)+2\,\dot{x}(2T)}{\sqrt{3}}\,\sin\left(\frac{\sqrt{3}}{2}t\right)\Biggr)\]
for \(t \geq 0\). Use \(x(2T)\) and \(\dot x(2T)\) from above, and we get one ugly expression for \(x(t+2T)\), when \(t \geq 0\). We may then write
\[x(t)=\small\exp\left(-\frac{(t-2T)}{2}\right)\,\Biggl(x(2T)\,\cos\left(\frac{\sqrt{3}}{2}(t-2T)\right)+\frac{x(2T)+2\,\dot{x}(2T)}{\sqrt{3}}\,\sin\left(\frac{\sqrt{3}}{2}(t-2T)\right)\Biggr)\]
for \(t \geq 2T\). In other words, \(x(t)=\exp\left(-\frac{t}{2}\right)\,\Biggl(C\,\cos\left(\frac{\sqrt{3}}{2}t\right)+S\,\sin\left(\frac{\sqrt{3}}{2}t\right)\Biggr)\) for \(t \geq 2T\), where
\[C:=\frac{\omega^2}{2(1-\omega^2+\omega^4)}\,\big(\exp(T)\,\cos(\sqrt{3}T)+1\big)+\frac{\omega^2(1-2\omega^2)}{2\sqrt{3}(1-\omega^2+\omega^4)}\,\exp(T)\,\sin(\sqrt{3}T)\]
and
\[S:=\frac{\omega^2}{2(1-\omega^2+\omega^4)}\,\exp(T)\,\sin(\sqrt{3}T)-\frac{\omega^2(1-2\omega^2)}{2\sqrt{3}(1-\omega^2+\omega^4)}\,\big(\exp(T)\,\cos(\sqrt{3}T)+1\big)\,.\]
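Before moving on, a numerical cross-check of the closed forms above (a sketch; it assumes the forcing is \(f(t)=\frac{\omega}{2}\sin(\omega t)\) on \([0,2T]\) and zero afterwards, which is what the computation uses):

```python
# Sketch: integrate x'' + x' + x = f(t) and compare the state at t = 2T with the
# closed-form expressions for x(2T) and x'(2T) derived above.
import numpy as np
from scipy.integrate import solve_ivp

T = 0.7
w = np.pi/(2*T)
D = 1 - w**2 + w**4

def rhs(t, u):
    f = (w/2)*np.sin(w*t) if t <= 2*T else 0.0
    return [u[1], f - u[1] - u[0]]

sol = solve_ivp(rhs, (0, 2*T), [0.0, 0.0], rtol=1e-10, atol=1e-12)

c3, s3 = np.cos(np.sqrt(3)*T), np.sin(np.sqrt(3)*T)
x2T = w**2/(2*D) + np.exp(-T)*(w**2/(2*D)*c3 - w**2*(1 - 2*w**2)/(2*np.sqrt(3)*D)*s3)
v2T = -w**2*(1 - w**2)/(2*D) - np.exp(-T)*(w**2*(1 - w**2)/(2*D)*c3
                                           + w**2*(1 + w**2)/(2*np.sqrt(3)*D)*s3)
print("x(2T):  numeric", sol.y[0, -1], " closed form", x2T)
print("x'(2T): numeric", sol.y[1, -1], " closed form", v2T)
```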
Luckily for us here (especially after the brutality of part (a)), we don’t need to actually do any calculations here. Because the system is stable (it is a second order DE with constant positive coefficients) and \(f\) has a finite duration, we then know that both limits here are zero.
This part is a little more intensive. First, note what is happening as \(T \to 0\): the forcing is squeezed into a shorter and shorter window, so the striking force becomes instantaneous; it acts at a single point in time. Luckily we have the closed form for the solution. As \(T \to 0\) we have \(\mathrm{e}^{-T} \to 1\), \(\cos(\sqrt{3}T) \to 1\), and \(\sin(\sqrt{3}T) \to 0\), so using what we found above, \[x(2T) \approx \frac{\omega^2}{1-\omega^2+\omega^4} = \frac{\frac{\pi^2}{4T^2}}{ 1-\frac{\pi^2}{4T^2} + \frac{\pi^4}{16T^4} } = \frac{4T^2 \pi^2}{16T^4 -4T^2 \pi^2 + \pi^4}\]
Now letting \(T \to 0\), this expression tends to \(\frac{0}{\pi^4}=0\). Thus \(\lim_{T \to 0} x(2T) = 0\).
Now for \(\lim_{T \to 0} \dot x(2T)\). Using the closed form for \(\dot x(2T)\) above and the same limits (\(\mathrm{e}^{-T} \to 1\), \(\cos(\sqrt 3 T) \to 1\), \(\sin(\sqrt 3 T) \to 0\)), we have \[\dot x(2T) \approx -\frac{\omega^2(1- \omega^2)}{1-\omega^2 + \omega^4} = \frac{\frac{\pi^2}{4T^2} \left(\frac{\pi^2}{4T^2}-1 \right)}{1-\frac{\pi^2}{4T^2} + \frac{\pi^4}{16T^4}}\] \[=\left( \frac{\pi^4}{16T^4} - \frac{\pi^2}{4T^2} \right) \times \frac{16T^4}{16T^4 - 4T^2 \pi^2 + \pi^4}\]
Now letting \(T \to 0\), the above expression tends to \(\frac{\pi^4}{\pi^4}=1\). Thus \(\lim_{T \to 0} \dot x(2T) = 1\).
Let’s discuss what this means physically. Since \(x(2T) \to 0\), the position of the mass barely changes during a very short strike. Since \(\dot x(2T) \to 1\), the strike imparts a unit velocity: in the limit the forcing acts like a unit impulse (indeed \(\int_0^{2T} \frac{\omega}{2}\sin(\omega t)\,dt = 1\) for every \(T\)). So an instantaneous strike does not displace the mass during the strike itself, but it does set it moving with velocity \(1\); afterwards the mass oscillates and decays back to equilibrium under the unforced dynamics.
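A small numerical check of these limits, using the closed forms for \(x(2T)\) and \(\dot x(2T)\) above (sketch):

```python
# Sketch: as T -> 0 the end-of-strike state (x(2T), x'(2T)) should approach (0, 1),
# i.e. the strike behaves like a unit impulse.
import numpy as np

def state_after_strike(T):
    w = np.pi/(2*T)
    D = 1 - w**2 + w**4
    c3, s3 = np.cos(np.sqrt(3)*T), np.sin(np.sqrt(3)*T)
    x = w**2/(2*D) + np.exp(-T)*(w**2/(2*D)*c3
                                 - w**2*(1 - 2*w**2)/(2*np.sqrt(3)*D)*s3)
    v = -w**2*(1 - w**2)/(2*D) - np.exp(-T)*(w**2*(1 - w**2)/(2*D)*c3
                                             + w**2*(1 + w**2)/(2*np.sqrt(3)*D)*s3)
    return x, v

for T in (0.5, 0.1, 0.01, 0.001):
    print(T, state_after_strike(T))
```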
Let \(\dot x = y\) and \(\dot y = -2x -3y^5\). Then we only get one equilibrium point of \((0,0)\). If we do some analysis of this point, we find:
\[Df\left( x,y \right) = \left( \begin{matrix} 0 & 1 \\ -2 & -15y^4\end{matrix}\right)\] \[\implies Df\left( 0,0 \right) = \left( \begin{matrix} 0 & 1 \\ -2 & 0 \end{matrix}\right) \implies \lambda_{1,2} = \pm i \sqrt{2}\] The eigenvalues are purely imaginary, so the linearization has a center; but the equilibrium is non-hyperbolic, so this alone is inconclusive for the nonlinear system. Taking \(V = x^2 + \frac{1}{2}y^2\) gives \(\dot V = 2xy + y(-2x - 3y^5) = -3y^6 \leq 0\), so the origin is stable (and the damping term \(-3y^5\) in fact makes trajectories spiral slowly inward). Note that after finding this, phase plane analysis must still be done to find the direction of rotation.
First note that \[\dot{x} = x(2-y) = 0 \implies x = 0, y = 2\] and \[\dot{y} = y(x-1) = 0 \implies y = 0, x =1\]
Thus we let \(E_0 = (0,0)\) and \(E_1 = (1,2)\). Then \[\dot{V} = \left( 1- \frac{1}{x} \right)(2x-xy)+ \left( 1- \frac{2}{y} \right)(yx-y)\] \[= 2x-xy-2+y+yx-y-2x+2 = 0\]
Thus every trajectory lies on a level curve \(V(x,y) = c\), where \(c\) is a constant; these level curves are closed curves around \((1,2)\), so every non-constant positive solution is periodic.
\[\dot{x} = y+x\sqrt{x^2+y^2} \sin \left({\frac{1}{\sqrt{x^2+y^2}}} \right), \dot{y} = -x +y \sqrt{x^2+y^2} \sin \left( \frac{1}{\sqrt{x^2+y^2}} \right)\]
\[\dot{r} = r^{-1}\left[\left(y+xr \sin\left(\frac{1}{r}\right)\right)x + \left(-x+yr \sin\left(\frac{1}{r}\right)\right)y\right]\] \[=r^{-1}\left(xy+x^2 r \sin\left(\frac{1}{r}\right) -xy + y^2 r \sin \left( \frac{1}{r} \right) \right) = r^{-1}r \sin\left( \frac{1}{r}\right)\left( x^2 + y^2 \right)\] \[=r^2 \left[ \sin \left( \frac{1}{r} \right)(\cos^2 \theta + \sin^2 \theta) \right] = r^2 \sin \left( \frac{1}{r} \right)\]
Note that \(r^2 \sin \left( \frac{1}{r} \right)\) equals \(0\) on the circles \(r = \frac{1}{k \pi}\), \(k = 1,2,3,\dots\), so each of these circles is invariant and, since \(\dot\theta \neq 0\), is a periodic orbit. Thus we have an infinite number of limit cycles (accumulating at the origin).
Now \[\dot{\theta} = r^{-2}(x\dot{y}-y\dot{x}) = r^{-2}\left(x\left(-x+yr\sin\left( \frac{1}{r} \right) \right) -y\left(y+xr \sin\left(\frac{1}{r}\right)\right)\right)\] \[= r^{-2}\left(-x^2 + xyr \sin \left( \frac{1}{r} \right) - y^2 - xyr \sin \left(\frac{1}{r} \right)\right)= r^{-2}\left(-r^2\right) = -1<0\]
Thus the trajectories go clockwise.
Recall that \[\dot x = \frac{dH}{dy} = y\] and \[\dot y = -\frac{dH}{dx} = x^3 -x = x(x^2 -1)\] Thus our equilibrium points are \(E_1 = (0,0)\) and \(E_{2,3} = (\pm 1,0)\). Now we just need to study their stability.
\[Df\left( x,y \right) = \left( \begin{matrix} 0 & 1 \\ 3x^2-1 & 0 \end{matrix}\right)\]
\[\implies Df\left( E_1 \right) = \left( \begin{matrix} 0 & 1 \\ -1 & 0 \end{matrix}\right) \implies \lambda_{1,2} = \pm i\]
\[\implies Df\left( E_2 \right) = \left( \begin{matrix} 0 & 1 \\ 2 & 0 \end{matrix}\right) \implies \lambda_{1,2} = \pm \sqrt{2}\]
\[\implies Df\left( E_3 \right) = \left( \begin{matrix} 0 & 1 \\ 2 & 0 \end{matrix}\right) \implies \lambda_{1,2} = \pm \sqrt{2}\]
Thus, \(E_1\) is a center since the eigenvalues are complex with zero real part, and \(E_2\) and \(E_3\) are saddles since we have one positive real eigenvalue and one negative real eigenvalue for each one.
\[\dot{x} = x \left(6-x-\frac{3y}{1+x} \right), \dot{y} = y(x-2)\]
It’s fairly easy to see here that we have three equilibrium points here. We have \(E_1 = (0,0), E_2 = (6,0)\), and \(E_3 = (2,4)\). Now we just need to study their stability.
\[Df\left( x,y \right) = \left( \begin{matrix} 6-2x-\frac{3y}{(1+x)^2} & \frac{-3x}{1+x} \\ y & x-2 \end{matrix}\right)\]
\[\implies Df\left( E_1 \right) = \left( \begin{matrix} 6 & 0 \\ 0 & -2 \end{matrix}\right) \implies \lambda_1 = 6, \lambda_2 = -2\]
\[\implies Df\left( E_2 \right) = \left( \begin{matrix} -6 & -\frac{18}{7} \\ 0 & 4 \end{matrix}\right) \implies \lambda_1 = -6, \lambda_2 = 4\]
\[\implies Df\left( E_3 \right) = \left( \begin{matrix} \frac{2}{3} & -2 \\ 4 & 0 \end{matrix}\right) \implies \lambda_{1,2} = \frac{1}{3}\left( 1 \pm i \sqrt{71} \right)\]
Thus \(E_1\) and \(E_2\) are saddle points, and \(E_3\) is an unstable focus.
Assume that all solutions starting in \(\mathbb{R}_+ ^2 = \{(x,y):x \geq 0, y \geq 0\}\) are bounded for \(t > 0\). It follows then that the domain of any such solution contains \([0, \infty)\), and that the \(\omega\)-limit set is compact and nonempty.
Let \(L\) stand for the \(\omega\)-limit set of some point, \((x_0,y_0)\), sufficiently close to the unstable focus \((2,4)\). By the Poincare-Bendixson theorem, \(L\) is either a limit cycle or an equilibrium.
\(L\) cannot be an equilibrium. Indeed, there are three possibilities: either \(L = \{(0,0)\}\), or \(L = \{(6,0)\}\), or else \(L = \{(2,4)\}\). In the first two cases, the equilibria are saddles, so \((x_0,y_0)\) must belong to the stable manifold, thus either \(\{(0,y): y > 0\}\) or \(\{(x,0):x \in (0,6) \cup (6, \infty)\}\). In the third case, the only point whose \(\omega\)-limit set is \(\{(2,4)\}\) is \((2,4)\) itself.
We proceed now to excluding the case of the heteroclinic cycle. But the only possible connection is from \((0,0)\) to \((6,0)\), so there are no heteroclinic cycles.
Consequently, \(L\) is a limit cycle surrounding \((2,4)\).
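A numerical illustration supporting this conclusion (sketch):

```python
# Sketch: a trajectory of x' = x(6 - x - 3y/(1+x)), y' = y(x - 2) started near the
# unstable focus (2,4) spirals outward and settles onto a closed orbit.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, u):
    x, y = u
    return [x*(6 - x - 3*y/(1 + x)), y*(x - 2)]

sol = solve_ivp(f, (0, 200), [2.1, 4.0], t_eval=np.linspace(0, 200, 20001),
                rtol=1e-9, atol=1e-12)
late = sol.y[:, -4000:]   # discard the transient
print("late-time x range:", late[0].min(), late[0].max())
print("late-time y range:", late[1].min(), late[1].max())
```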
Let \(\dot x = y\) so that \(\dot y = -y -x^5\). Thus our only equilibrium point of this system is \((0,0)\). Then
\[Df\left( x,y \right) = \left( \begin{matrix} 0 & 1 \\ -5x^4 & -1 \end{matrix}\right)\]
\[\implies Df\left( 0,0 \right) = \left( \begin{matrix} 0 & 1 \\ 0 & -1 \end{matrix}\right) \implies \lambda_{1} = -1, \lambda_2 = 0\]
This doesn’t tell us anything about the system though, which is unfortunate.
Instead, consider the following:
\[\frac{dx}{dt} \left( \frac{d^2 x}{dt^2} \right) + \frac{dx}{dt} \frac{dx}{dt} + \frac{dx}{dt} x^5 = 0\] \[ \implies \frac{d}{dt} \left( \frac{1}{2} \left( \frac{dx}{dt} \right)^2 \right) + \left( \frac{dx}{dt} \right)^2 +\frac{dx}{dt} x^5 = 0\]
\[\implies \frac{d}{dt} \left( \frac{1}{2} \left( \frac{dx}{dt} \right)^2 \right)dt + \left( \frac{dx}{dt} \right)^2 dt + x^5 dx = 0\]
Then we integrate the entire equation to get \[\frac{1}{2} y^2 + \int \left( \frac{dx}{dt} \right)^2 dt + \frac{1}{6} x^6 = C\] for some constant \(C\), i.e. \[\frac{1}{2}y^2 + \frac{1}{6} x^6 = C - \int \left( \frac{dx}{dt} \right)^2 dt\] so the quantity \(\frac{1}{2}y^2 + \frac{1}{6}x^6\) can only decrease along solutions.
Using what we found here, let \(V(x,y) = \frac{1}{2} y^2 + \frac{1}{6} x^6\). Thus
\[\dot V(x,y) = y \dot y + x^5 \dot x = y(-y-x^5) + x^5(y) = -y^2 \leq 0\]
Since \(\dot V \leq 0\) everywhere and \(\dot V = 0\) only on the line \(y = 0\), where the only invariant set is the origin (off the origin, \(\dot y = -x^5 \neq 0\) pushes trajectories off that line), LaSalle's invariance principle implies that the equilibrium point is globally asymptotically stable. This gives us insight into how the solutions of the system behave: they all decay to the origin.
First off, let \(\dot x = y\) and \(\dot y = -x-x^3\). Then the DE can be written as \[\frac{y^2}{2}+\frac{x^2}{2}+\frac{x^4}{4} = H(x,y)\]
This is Newtonian (can be split into an energy function \(T(y) + U(x) = H(x,y)\)), so consider the following:
\[U(x) = \frac{x^2}{2} + \frac{x^4}{4}\] \[U'(x) = x+x^3\]
This shows that \(U'(x) = 0\) means that \(x = 0\). Now
\[U''(x) = 1 + 3x^2\]
So \(U''(0) = 1 > 0\) which implies that we have a min at \(x = 0\). This implies that we have a center at \(x =0\).
Through some simple algebra, we find that the equilibrium points are \(E_1 = (0,0), E_2 = (2,1), E_3 = (0,2)\), and \(E_4 = (3,0)\). Though, the problem requires \(x(0), y(0) > 0\) so we will ignore \(E_1, E_3,\) and \(E_4\). So then
\[Df\left( x,y \right) = \left( \begin{matrix} 3-2x-y & -x \\ -y & 4-x-4y \end{matrix}\right)\] \[\implies Df(2,1) = \left( \begin{matrix} -2 & -2 \\ -1 & -2 \end{matrix}\right) \implies \lambda_{1,2} = -2 \pm \sqrt2\]
Thus we can see that \(E_2\) is a stable node, which implies that \(\lim_{t \to \infty}(x(t),y(t)) = E_2\).
Recall that \(r \dot r = x \dot x + y \dot y\) and that \(x = r \cos \theta\) and \(y = r \sin \theta\). Also recall that \(r^2 \dot \theta = \dot y x - \dot x y\). After a decent bit of conversions, one can find that \(\dot r = r -r^3(2 \cos ^2 \theta + 3 \sin ^2 \theta)\). It can also be found that \(\dot \theta = 1\). Notice that
\[2 \leq 2 \cos ^2 \theta + 3 \sin ^2 \theta \leq 5\]
Using this, we can tell that
\[r - 5r^3 \leq \dot r \leq r - 2r^3\] by our bounds on \(\dot r\). Notice that for all \(0 < r \leq \frac{1}{4}\), \(\dot r > 0\), and for all \(r \geq 1\), \(\dot r < 0\). Thus by Poincare-Bendixson, there exists a limit cycle between the circles with radii \(\frac{1}{4}\) and \(1\). Thus we have a limit cycle in an annulus.
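A tiny numerical check of the annulus bounds, using the polar form of \(\dot r\) derived above (sketch):

```python
# Sketch: r' = r - r^3*(2*cos(th)^2 + 3*sin(th)^2) is positive on r = 1/4 and
# negative on r = 1, so the annulus 1/4 <= r <= 1 traps trajectories.
import numpy as np

th = np.linspace(0, 2*np.pi, 1000)

def rdot(r):
    return r - r**3*(2*np.cos(th)**2 + 3*np.sin(th)**2)

print("min r' on r = 1/4:", rdot(0.25).min())   # positive: flow points outward
print("max r' on r = 1:  ", rdot(1.0).max())    # negative: flow points inward
```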
First note that \(E = (2,1)\) is indeed an equilibrium point, which can be checked easily. Now, to make derivation easier, let’s distribute the \(x\) and \(y\) in the above system, so that \[\dot x = 3x-x^2-xy\] \[\dot y = 4y -xy -2y^2\]
Thus
\[Df(x,y) = \left( \begin{matrix}3-2x-y & -x\\ -y & 4-x-4y\end{matrix}\right)\]
\[Df(2,1) = \left( \begin{matrix}-2 & -2\\ -1 & -2\end{matrix}\right)\] This implies that our eigenvalues are \(\lambda_1 = -2 + \sqrt{2}\) and \(\lambda_2 = -2 - \sqrt{2}\). Both of these are negative real values, thus we find that \((2,1)\) is a stable node. Thus we know that any positive solution will approach the equilibrium point \((2,1)\).
Recall that \(\dot x = \frac{\partial H}{\partial y}\) and \(\dot y = -\frac{\partial H}{\partial x}\). Thus
\[\dot x \partial y = \partial H\] and \[\dot y \partial x = - \partial H\]
So we get the following calculations:
\[H = \int y \partial y = \frac{1}{2} y^2 + f(x)\]
and \[-H = \int - \sin(x) \partial x = \cos(x) + g(y)\]
If we combine these two calculations, we get that
\[H = \frac{1}{2} y^2 - \cos(x)\]
Now we just need to plot the phase portrait (which is below). Recall that plotting the phase portrait can be done easily for an \(H\) such as this one because we have a clear split between \(T(y)\) and \(U(x)\), the kinetic and potential energies respectively.
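For reference, the phase portrait can be drawn directly as level curves of \(H\); a minimal matplotlib sketch (the separatrix is the level \(H = 1\), which passes through the saddles at \((\pm\pi, 0)\)):

```python
# Sketch: pendulum phase portrait as level curves of H(x, y) = y^2/2 - cos(x).
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2*np.pi, 2*np.pi, 400)
y = np.linspace(-3, 3, 400)
X, Y = np.meshgrid(x, y)
H = Y**2/2 - np.cos(X)

plt.contour(X, Y, H, levels=np.linspace(-0.8, 3, 12))          # closed and running orbits
plt.contour(X, Y, H, levels=[1.0], colors="k", linewidths=2)   # separatrix through the saddles
plt.xlabel("x")
plt.ylabel("y = x'")
plt.savefig("pendulum_phase_portrait.png", dpi=150)
```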
First after some simple algebra, we find that \(E_1 = (0,0), E_2 = (4,0)\), and \(E_3 = (1,3)\). So by linearizing the system, we find that
\[Df(x,y) = \left( \begin{matrix}4-2x-\frac{2y}{(1+x)^2} & -\frac{2x}{1+x}\\ y & x-1\end{matrix}\right)\]
\[Df(0,0) = \left( \begin{matrix}4 & 0\\ 0 & -1\end{matrix}\right) \implies \lambda_1 = 4, \lambda_2 = -1\] Thus \((0,0)\) is a saddle.
\[Df(4,0) = \left( \begin{matrix}-4 & -\frac{8}{5}\\ 0 & 3\end{matrix}\right) \implies \lambda_1 = -4, \lambda_2 = 3\] Thus \((4,0)\) is a saddle.
\[Df(1,3) = \left( \begin{matrix}\frac{1}{2} & -1\\ 3 & 0\end{matrix}\right) \implies \lambda_{1,2} = \frac{1}{4}(1 \pm i \sqrt{47})\] Thus \((1,3)\) is an unstable focus.
Assume that all solutions starting in \(\mathbb{R}_+ ^2 = \{(x,y):x \geq 0, y \geq 0\}\) are bounded for \(t > 0\). It follows then that the domain of any such solution contains \([0, \infty)\), and that the \(\omega\)-limit set is compact and nonempty.
Let \(L\) stand for the \(\omega\)-limit set of some point, \((x_0,y_0)\), sufficiently close to the unstable focus \((1,3)\). By the Poincare-Bendixson theorem, \(L\) is either a limit cycle or an equilibrium.
\(L\) cannot be an equilibrium. Indeed, there are three possibilities: either \(L = \{(0,0)\}\), or \(L = \{(4,0)\}\), or else \(L = \{(1,3)\}\). In the first two cases, the equilibria are saddles, so \((x_0,y_0)\) must belong to the stable manifold, thus either \(\{(0,y): y > 0\}\) or \(\{(x,0):x \in (0,4) \cup (4, \infty)\}\). In the third case, the only point whose \(\omega\)-limit set is \(\{(1,3)\}\) is \((1,3)\) itself.
We proceed now to excluding the case of the heteroclinic cycle. But the only possible connection is from \((0,0)\) to \((4,0)\), so there are no heteroclinic cycles.
Consequently, \(L\) is a limit cycle surrounding \((1,3)\).
Consider the dynamical system \[\left\{ \begin{array}{rcl} \dot x & = & y\\ \dot y & = & -\alpha x^3 \end{array} \right. \Rightarrow \left\{ \begin{array}{rcl} \alpha x^3\dot x & = & \alpha x^3 y\\ y\dot y & = & -\alpha x^3 y \end{array} \right. \Rightarrow \left(\frac 14\alpha x^4+\frac 12 y^2\right)' = 0\] Hence the orbits are described by the curves \[\frac 14\alpha x^4+\frac 12 y^2 = C_0\]
These level curves are closed around the origin (for \(\alpha > 0\)), which characterizes a center. Note that the linearization at the origin has a double zero eigenvalue here, so studying the eigenvalues alone would be inconclusive; the conserved quantity is what establishes the center.
The same procedure as above can be used to show that when \(\sigma > 0\) those orbits collapse to zero with time. Note
\[\left(\frac 14\alpha x^4+\frac 12 y^2\right)' = -\sigma y^4 < 0,\;\;\forall y \ne 0\]
Since this quantity is negative whenever \(y \neq 0\), the energy \(\frac 14\alpha x^4+\frac 12 y^2\) decreases along trajectories, so the orbits spiral in toward the origin, which is therefore asymptotically stable. (As above, the linearization at the origin is degenerate, so this energy argument rather than the eigenvalues is what settles the stability.)
Please note, this does not contain anywhere near all of the information contained in the Perko book. Instead, I have only chosen to include what seems absolutely necessary for the exam.
A phase portrait of a system of differential equations with \(x \in \mathbb{R}^n\) is the set of all solution curves of the DE in the phase space \(\mathbb{R}^n\).
Find eigenvalues using the following: \(\det(A-\lambda I) = 0\). For a \(2 \times 2\) matrix, this simplifies to \[\lambda_1 = \frac{a+d}{2}+\sqrt{\frac{(a+d)^2}{4}-ad+bc}\] \[\lambda_2 = \frac{a+d}{2}-\sqrt{\frac{(a+d)^2}{4}-ad+bc}\] where our matrix is \(A = \bigl( \begin{smallmatrix}a & b\\ c & d\end{smallmatrix}\bigr)\)
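A quick check of this formula against numpy (sketch):

```python
# Sketch: compare the 2x2 eigenvalue formula with numpy on a random matrix.
import cmath
import numpy as np

rng = np.random.default_rng(0)
a, b, c, d = rng.normal(size=4)
disc = (a + d)**2/4 - (a*d - b*c)        # (trace/2)^2 - det
lams = ((a + d)/2 + cmath.sqrt(disc), (a + d)/2 - cmath.sqrt(disc))
print("formula:", lams)
print("numpy:  ", np.linalg.eigvals(np.array([[a, b], [c, d]])))
```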
Let \(A\) be a square matrix, then \[\frac{d}{dt} \mathrm{e}^{At} = A \mathrm{e}^{At}\]
The linear transformation \(D \mathbf{f}(\mathbf{x}_0)\) is called the derivative of \(\mathbf{f}\) at \(\mathbf{x}_0\).
Suppose that \(\mathbf{f}: E \rightarrow \mathbb{R}^n\) is differentiable on \(E\). Then \(\mathbf{f} \in C^1(E)\) if the derivative \(D\mathbf{f}:E \rightarrow L(\mathbb{R}^n)\) is continuous on \(E\).
A point \(\mathbf{x}_0 \in \mathbb{R}^n\) is called an equilibrium point or critical point of the nonlinear system if \(\mathbf{f}(\mathbf{x}_0)= \mathbf{0}\). An equilibrium point \(\mathbf{x}_0\) is called a hyperbolic equilibrium point of the nonlinear system if none of the eigenvalues of the matrix \(D\mathbf{f}(\mathbf{x}_0)\) have zero real part. The linear system with the matrix \(A = D\mathbf{f}(\mathbf{x}_0)\) is called the linearization of the nonlinear system at \(\mathbf{x}_0\).
The following 3 facts are important!
An equilibrium point \(\mathbf{x}_0\) of the nonlinear system is called a sink if all of the eigenvalues of the matrix \(D\mathbf{f}(\mathbf{x}_0)\) have negative real part.
An equilibrium point \(\mathbf{x}_0\) of the nonlinear system is called a source if all of the eigenvalues of \(D\mathbf{f}(\mathbf{x}_0)\) have positive real part.
An equilibrium point \(\mathbf{x}_0\) of the nonlinear system is called a saddle if it is a hyperbolic equilibrium point and \(D\mathbf{f}(\mathbf{x}_0)\) has at least one eigenvalue with a positive real part and at least one with a negative real part.
Any sink is asymptotically stable, and any source or saddle is unstable. Hence any hyperbolic equilibrium point is either asymptotically stable or unstable.
If \(\mathbf{x}_0\) is a stable equilibrium point, no eigenvalue of \(D\mathbf{f}(\mathbf{x}_0)\) has positive real part.
A function \(V: \mathbb{R}^n \to \mathbb{R}\) satisfying the hypotheses of the below theorem is called a Liapunov function
Let \(E\) be an open subset of \(\mathbb{R}^n\) containing \(\mathbf{x}_0\). Suppose that \(\mathbf{f} \in C^1(E)\) and that \(\mathbf{f}(\mathbf{x}_0)=\mathbf{0}\). Suppose further that there exists a real valued function \(V \in C^1(E)\) satisfying \(V(\mathbf{x}_0) = 0\) and \(V(\mathbf{x}) >0\) if \(\mathbf{x} \neq \mathbf{x}_0\). Then (a) if \(\dot{V}(\mathbf{x}) \leq 0\) for all \(\mathbf{x} \in E\), \(\mathbf{x}_0\) is stable; (b) if \(\dot{V}(\mathbf{x}) < 0\) for all \(\mathbf{x} \in E \sim \{\mathbf{x}_0\}\), \(\mathbf{x}_0\) is asymptotically stable; (c) if \(\dot{V}(\mathbf{x}) > 0\) for all \(\mathbf{x} \in E \sim \{\mathbf{x}_0\}\), \(\mathbf{x}_0\) is unstable.
If \(\dot{V}(\mathbf{x}) = 0\) for all \(\mathbf{x} \in E\) then the trajectories of the system lie on the surfaces in \(\mathbb{R}^n\) (or curves in \(\mathbb{R}^2\)) defined by \(V(\mathbf{x}) = c\).
Note the below 6 facts refer to a two-dimensional system, i.e. a two-dimensional \(D\mathbf{f}(\mathbf{x}_0)\) matrix
If you have two negative real eigenvalues for \(D\mathbf{f}(\mathbf{x}_0)\), then you have a stable node.
If you have two complex eigenvalues with a negative real part for \(D\mathbf{f}(\mathbf{x}_0)\), then you have a stable focus.
If you have two positive real eigenvalues for \(D\mathbf{f}(\mathbf{x}_0)\), then you have an unstable node.
If you have two complex eigenvalues both with a positive real part for \(D\mathbf{f}(\mathbf{x}_0)\), then you have an unstable focus.
If you have one positive real eigenvalue and one negative real eigenvalue for \(D\mathbf{f}(\mathbf{x}_0)\), then you have a saddle.
If you have two complex eigenvalues with real part zero for \(D\mathbf{f}(\mathbf{x}_0)\), then the linearization has a center; for the nonlinear system, \(\mathbf{x}_0\) is either a center or a focus, and further analysis (e.g. a Lyapunov function, a first integral, or symmetry) is needed to decide which.
If the above facts don’t work, use the following 2 facts
Let \(f \in C^1(E), V \in C^1(E)\), and \(\phi_t\) the flow of the differential equation. Then for \(x \in E\) the derivative of the function \(V(x)\) along the solution \(\phi_t(x)\) is \[\dot V(x) = \frac{d}{dt} V(\phi_t(x)) = DV(x) f(x)\]
Let \(E\) be an open subset of \(\mathbb{R}^n\) containing \(x_0\). Suppose \(f \in C^1(E)\) and that \(f(x_0) = 0\). Suppose further that there exists a real valued function \(V \in C^1(E)\) satisfying \(V(x_0) = 0\) and \(V(x) > 0\) if \(x \neq x_0\). Then if \(\dot V(x) \leq 0\) for all \(x \in E\), \(x_0\) is stable; if \(\dot V(x) < 0\) for all \(x \in E -\{x_0\}\), \(x_0\) is asymptotically stable; and if \(\dot V(x) > 0\) for all \(x \in E -\{x_0\}\), \(x_0\) is unstable.
Let \(E\) be an open subset of \(\mathbb{R}^{2n}\) and let \(H \in C^2(E)\) where \(H = H(\mathbf{x},\mathbf{y})\) with \(\mathbf{x,y} \in \mathbb{R}^n\). A system of the form \[\mathbf{\dot{x}} = \frac{\partial H}{\partial \mathbf{y}}\] \[\mathbf{\dot{y}} = -\frac{\partial H}{\partial \mathbf{x}}\] where \(\frac{\partial H}{\partial \mathbf{x}}\) and \(\frac{\partial H}{\partial \mathbf{y}}\) denote the gradients of \(H\) with respect to \(\mathbf{x}\) and \(\mathbf{y}\), is called a Hamiltonian system with \(n\) degrees of freedom on \(E\).
(Conservation of Energy) The total energy \(H(\mathbf{x},\mathbf{y})\) of the Hamiltonian system remains constant along the trajectories of the system.
For a Newtonian system \(\ddot x = f(x)\), written as \(\dot x = y\), \(\dot y = f(x)\), the total energy is \(H(x,y) = T(y) + U(x)\) where \(T(y) = \frac{y^2}{2}\) is the kinetic energy and \[U(x) = - \int_{x_0} ^x f(s) ds\] is the potential energy.
The critical points of the Newtonian system all lie on the \(x\)-axis. The point \((x_0,0)\) is a critical point of the Newtonian system iff it is a critical point of the function \(U(x)\), i.e., a zero of the function \(f(x)\). If \((x_0,0)\) is a strict local maximum of the analytic function \(U(x)\), it is a saddle for the system. If \((x_0,0)\) is a strict local minimum of the analytic function \(U(x)\), it is a center for the system. If \((x_0,0)\) is a horizontal inflection point of the function \(U(x)\), it is a cusp for the system. And finally, the phase portrait of the system is symmetric with respect to the \(x\)-axis.
Let \(E\) be an open subset of \(\mathbb{R}^n\) and let \(V \in C^2(E)\). A system of the form \[\dot{\mathbf{x}} = - \nabla V(\mathbf{x})\] where \[\nabla V = \left( \frac{\partial V}{\partial x_1},..., \frac{\partial V}{ \partial x_n} \right)^T\] is called a gradient system on \(E\).
A point \(\mathbf{p} \in E\) is an \(\omega\)-limit point of the trajectory \(\phi (*,\mathbf{x})\) of the system if there is a sequence \(t_n \to \infty\) such that \[\lim_{n \to \infty} \phi(t_n,\mathbf{x}) = \mathbf{p}\].
Similarly to above, if there is a sequence \(t_n \to - \infty\) such that \[\lim_{n \to \infty} \phi(t_n, \mathbf{x}) = \mathbf{q}\], at the point \(\mathbf{q} \in E\), then the point \(\mathbf{q}\) is called an \(\alpha\)-limit point of the trajectory.
The \(\alpha\) and \(\omega\)-limit sets of a trajectory \(\Gamma\) of the system, \(\alpha(\Gamma)\) and \(\omega(\Gamma)\), are closed subsets of \(E\), and if \(\Gamma\) is contained in a compact subset of \(\mathbb{R}^n\), then \(\alpha(\Gamma)\) and \(\omega(\Gamma)\) are non-empty, connected, compact subsets of \(E\).
A cycle or periodic orbit of a system is any closed solution curve of the system which is not an equilibrium point of the system
A limit cycle \(\Gamma\) of a planar system is a cycle of the system which is the \(\alpha\) or \(\omega\)-limit set of some trajectory of the system other than \(\Gamma\).
If a cycle \(\Gamma\) is the \(\omega\)-limit set of every trajectory in some neighborhood of \(\Gamma\), then \(\Gamma\) is called an \(\omega\)-limit cycle or stable limit cycle; if \(\Gamma\) is the \(\alpha\)-limit set of every trajectory in some neighborhood of \(\Gamma\), then \(\Gamma\) is called an \(\alpha\)-limit cycle or an unstable limit cycle; and if \(\Gamma\) is the \(\omega\)-limit set of one trajectory other than \(\Gamma\) and the \(\alpha\)-limit set of another trajectory other than \(\Gamma\), then \(\Gamma\) is called a semi-stable limit cycle.
(Dulac) In any bounded region of the plane, a planar analytic system with \(\mathbf{f(x)}\) analytic in \(\mathbb{R}^2\) has at most a finite number of limit cycles. Any polynomial system has at most a finite number of limit cycles in \(\mathbb{R}^2\).
(Poincare) A planar analytic system cannot have an infinite number of limit cycles which accumulate on a cycle of the system.
If \(\Gamma\) and \(\omega(\Gamma)\) have a point in common, then \(\Gamma\) is either a critical point or a periodic orbit.
If \(\omega(\Gamma)\) contains no critical points and \(\omega(\Gamma)\) contains a periodic orbit \(\Gamma_0\), then \(\omega(\Gamma) = \Gamma_0\).
(Bendixson’s Criteria) Let \(\mathbf{f} \in C^1(E)\) where \(E\) is a simply connected region in \(\mathbb{R}^2\). If the divergence of the vector field \(\mathbf{f}\), \(\nabla \cdot \mathbf{f}\), is not identically zero and does not change sign in \(E\), then the system has no closed orbit lying entirely in \(E\).
(Dulac’s Criteria) Let \(\mathbf{f} \in C^1(E)\) where \(E\) is a simply connected region in \(\mathbb{R}^2\). If there exists a function \(B \in C^1(E)\) such that \(\nabla \cdot (B\mathbf{f})\) is not identically zero and does not change sign in \(E\), then the system has no closed orbit lying entirely in \(E\). If \(A\) is an annular region contained in \(E\) on which \(\nabla \cdot (B\mathbf{f})\) does not change sign, then there is at most one limit cycle of the system in \(A\).
(Sotomayor) Suppose that \(\mathbf{f}(\mathbf{x}_0, \mu _0) = \mathbf{0}\) and that the \(n \times n\) matrix \(A = D \mathbf{f}(\mathbf{x}_0, \mu_0)\) has a simple eigenvalue \(\lambda=0\) with eigenvector \(\mathbf{v}\) and that \(A^T\) has an eigenvector \(\mathbf{w}\) corresponding to the eigenvalue \(\lambda = 0\). Furthermore, suppose that \(A\) has \(k\) eigenvalues with negative real part and \((n-k-1)\) eigenvalues with positive real part and the following conditions are satisfied: \[\mathbf{w}^T \mathbf{f}_{\mu}(\mathbf{x}_0,\mu_0) \neq 0, \mathbf{w}^T[D^2 \mathbf{f} (\mathbf{x}_0, \mu_0)(\mathbf{v,v})] \neq 0\] Then there is a smooth curve of equilibrium points of the system in \(\mathbb{R}^n \times \mathbb{R}\) passing through \((\mathbf{x}_0,\mu_0)\) and tangent to the hyperplane \(\mathbb{R}^n \times \{\mu_0\}\). Depending on the signs of the expressions in the conditions, there are no equilibrium points of the system near \(\mathbf{x}_0\) when \(\mu<\mu_0\) (or when \(\mu > \mu_0\)) and there are two equilibrium points of the system near \(\mathbf{x}_0\) when \(\mu > \mu_0\) (or when \(\mu < \mu_o\)). The two equilibrium points of the system near \(\mathbf{x}_0\) are hyperbolic and have stable manifolds of dimensions \(k\) and \(k+1\) respectively. I.E. the system experiences a saddle-node bifurcation at the equilibrium point \(\mathbf{x}_0\) as the parameter \(\mu\) passes through the bifurcation value \(\mu = \mu_0\). The set of \(C^{\infty}\)-vector fields satisfying the above condition is an open, dense subset in the Banach space of all \(C^{\infty}\), one-parameter, vector fields with an equilibrium point at \(\mathbf{x}_0\) having a simple zero eigenvalue.
If the above conditions are changed to \[\mathbf{w}^T \mathbf{f}_{\mu} (\mathbf{x}_0,\mu_0) = 0,\] \[\mathbf{w}^T[D\mathbf{f}_{\mu}(\mathbf{x}_0,\mu_0)\mathbf{v}] \neq 0,\] \[\mathbf{w}^T[D^2\mathbf{f}(\mathbf{x}_0,\mu_0)(\mathbf{v,v})] \neq 0\] then the system experiences a transcritical bifurcation.
If the above conditions are changed to \[\mathbf{w}^T \mathbf{f}_{\mu}(\mathbf{x}_0, \mu_0) = 0,\] \[\mathbf{w}^T [D \mathbf{f}_{\mu}(\mathbf{x}_0, \mu_0)\mathbf{v}] \neq 0,\] \[\mathbf{w}^T[D^2 \mathbf{f}(\mathbf{x}_0, \mu_0) (\mathbf{v,v})] = 0,\] \[\mathbf{w}^T [D^3 \mathbf{f}(\mathbf{x}_0, \mu_0) (\mathbf{v,v,v})] \neq 0\] then the system experiences a pitchfork bifurcation.
Hopf bifurcations are explained in this book, but I won’t include notes on them because I seriously doubt they will be on the exam.
Below are example bifurcation diagrams of each type:
Saddle-node Bifurcation
Transcritical Bifurcation
Pitchfork Bifurcation
Hopf Bifurcation
\(\frac{d^2 \theta}{dt^2} \cdot \frac{d \theta}{dt} = \frac{d}{dt} \left[ \frac{1}{2} \left( \frac{d \theta}{dt} \right)^2 \right]\)
Let \(v = \frac{d \theta}{dt}\). Then \[\frac{d^2 \theta}{dt^2} = \frac{d}{dt} \left( \frac{d \theta}{dt} \right) = \frac{dv}{dt} = \frac{dv}{d \theta} \cdot \frac{d \theta}{dt}\]
Let \(v = \frac{d \theta}{dt}\). Then \[\frac{d^2 \theta}{dt^2} = \frac{1}{2}\frac{d}{d \theta} (v^2)\]
Integration by parts: \(\int u dv = uv - \int v du\)
set up two models
Use one model to find equilibrium positions
Use the other model to solve for \(x(t)\): \(\ddot x = Cx\)
Let \(\omega = \sqrt{-C}\) (note that \(C < 0\) at a stable equilibrium, so the motion is oscillatory). Then \[x(t) = A \cos(\omega t) + B \sin (\omega t) = D \sin(\omega t + \phi)\] where \(T = \frac{2 \pi}{\omega}\) is the period.
Transform to first order system
Find critical points and linearize
Let \(v = \frac{dx}{dt}\). Then \(\frac{dv}{dt} = \frac{dv}{dx} \cdot \frac{dx}{dt} = \frac{dv}{dx} \cdot v\)
Find critical points
Linearize and study stability
then draw phase portrait
Suppose \(\gamma\) is contained in a bounded region in which there are finitely many critical points. Then \(\omega(\gamma)\) is either a single critical point, a single limit cycle, or a separatrix cycle.
If \(D\) is a bounded, closed set containing no critical points and \(D\) is positively invariant, then there is a limit cycle in \(D\).
Same idea as Bendixson’s, but we consider an annular region \(A\), and if \(\nabla \cdot (\psi X)\) does not change sign, then there is at most one limit cycle in \(A\). A more formal statement is given in the Perko notes.
Use Liapunov functions to determine behavior around non-hyperbolic equilibrium points.
Let \(E \subset \mathbb{R}^n\) be open, \(x_0 \in E\), \(f \in C^1(E)\), and \(f(x_0) = 0\). So \(x_0\) is an equilibrium point. If there is a function \(V:E \to \mathbb{R}\) such that \(V(x_0) = 0\) and \(V(x) > 0\) for \(x \neq x_0\), then:
If \(\dot V(x) \leq 0\) for all \(x \in E\), then \(x_0\) is stable
If \(\dot V(x) < 0\) for all \(x \in E - \{x_0\}\), then \(x_0\) is asymptotically stable.
If \(\dot V(x) >0\) for all \(x \in E -\{x_0\}\), then \(x_0\) is unstable.
\(\dot V = V_x \dot x + V_y \dot y\)
A common Liapunov function is \[V(x,y) = \left[ x - x^*\ln x + y - y^* \ln y \right] - \left[ x^* -x^* \ln x^* + y^* -y^* \ln y^* \right]\] To show \(V(x,y) > 0\) for all \((x,y) \neq (x^*,y^*)\), show \((x^*,y^*)\) is a minimum by showing \(V_{xx}V_{yy} - V_{xy}^2 > 0\) at \((x^*,y^*)\) and \(V_{xx} > 0\) at \((x^*,y^*)\).
Cases where \(\mu < 0\), \(\mu = 0\), and \(\mu > 0\)
Find critical points and possible limit cycles, then analyze stability by considering sign of \(\dot r\)
\(\dot \theta = 1 \implies\) it’s counter clockwise with time
\(\dot \theta = -1 \implies\) it’s clockwise with time
Bifurcation diagram \((r,\mu)\): rotate the 2-D plot to form a 3-D plot (in terms of \((x,y,\mu)\))
\(\dot x = f(x,y)\) and \(\dot y = -y\)
Find critical points, consider \(Df(x,\mu)\) to study stability. Consider \(\mu <0, \mu = 0, \mu >0\), then draw the bifurcation diagram.
\[\frac{dx}{dt} = x^2\] \[\implies x^{-2} dx = dt \implies \int_{x_0}^x s^{-2} ds = \int_0 ^t du\] \[\implies -\frac{1}{x} + \frac{1}{x_0} = t \implies x = \frac{x_0}{1-tx_0}\,, \text{ which for } x_0 = 1 \text{ gives } x = \frac{1}{1-t}\]
\[\dot r = ar \implies r^{-1} dr = adt\] \[\implies \int_{r_0} ^r s^{-1} ds = \int_0 ^t a du \implies \ln r - \ln r_0 = at\] \[\ln \left(\frac{r}{r_0} \right) = at \implies r(t) = r_0 \mathrm{e}^{at}\] and \[\dot \theta = b \implies d \theta = b dt \implies \] \[\int_{\theta_0} ^{\theta} d \mu = \int_0 ^t b d u \implies \theta - \theta_0 = bt \implies \theta(t) = \theta_0 + bt\]