Parametric oscillator: a close look

This post contains my research notes about the parametric oscillator. Here is an introduction from Wikipedia (see references section):

A parametric oscillator is a harmonic oscillator whose parameters oscillate in time. For example, a well known parametric oscillator is a child pumping a swing by periodically standing and squatting to increase the size of the swing's oscillations. The varying of the parameters drives the system. Examples of parameters that may be varied are its resonance frequency $\omega$ and damping $\beta$.

As that Wikipedia article shows, a certain coordinate change can eliminate damping. So we focus on the case where there is only a resonance frequency $\omega$, which varies with time. That is, we consider the differential equation

$$\ddot{x}(t) + \omega(t)^2 x(t) = 0$$

where $\omega(t)$ repeats itself after some time interval $T$.

The main purpose of this post is to preserve what I learned, for my own future reference. I am bad at maintaining handwritten notes, so I am moving to a digital medium. On the off chance that this interests others, I put it on the web. This post contains some work of my own that goes beyond the material in the references section. Most or all of my findings are likely known by experts. The proofs are mine, and so are the mistakes.

Floquet theory

A suitable mathematical background for the parametric oscillator is Floquet theory (see references section). It deals with linear differential equations of the form $\dot{x}(t) = A(t)\,x(t)$, where $x:\mathbb{R}\to\mathbb{R}^n$ and the function $A:\mathbb{R}\to\mathbb{R}^{n\times n}$ is periodic with period $T$. We could also consider complex numbers as the elements of $x$ and $A$, but we shall stick with the reals here, in accordance with the physics. We can write a parametric oscillator as a Floquet equation:

$$\frac{d}{dt}\begin{pmatrix} x \\ v \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -\omega(t)^2 & 0 \end{pmatrix} \begin{pmatrix} x \\ v \end{pmatrix}$$
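As a quick sanity check, this first-order form can be integrated numerically. The sketch below uses SciPy with an illustrative $T$-periodic $\omega$ of my own choosing (not one discussed in this post):

```python
import numpy as np
from scipy.integrate import solve_ivp

# x'' + omega(t)^2 x = 0, written as the first-order system above.
# This omega is an illustrative 2*pi-periodic choice.
def omega_sq(t):
    return 1.0 + 0.5 * np.cos(t)

def rhs(t, y):
    x, v = y
    return [v, -omega_sq(t) * x]

sol = solve_ivp(rhs, (0.0, 4 * np.pi), [1.0, 0.0], rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])  # phase (x, v) after two periods
```

The same right-hand side works for any $\omega(t)$; only `omega_sq` changes.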

I encountered Floquet theory in the well-known "Mechanics" book by Landau and Lifshitz (see references section), which we shall call "the book" in this post. The book contains a chapter on parametric resonance, which deals with parametric oscillators and their resonance behavior. The book uses Floquet theory in a specialized form, without calling it by that name. Part of my motivation here is to make explicit how that book chapter ties in with general Floquet theory.

The monodromy matrix

We shall now introduce the notion of monodromy, which is pivotal for Floquet theory. Let $\Psi: \mathbb{R}\to \mathbb{R}^{n\times n}$ be a fundamental matrix for a Floquet differential equation. Then $\Psi_T(t) := \Psi(t + T)$ is also a fundamental matrix, because $A(t+T)=A(t)$. So we have $\Psi(t + T) = \Psi(t)\,M$ for some invertible $n\times n$ matrix $M$. This $M$ describes the change of the solution after each period. It is called the monodromy matrix. Rightly so, since in Greek "mono" means "one" and "dromos" means "course" or "running".
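Numerically, one convenient choice is the fundamental matrix with $\Psi(0) = I$, for which $M = \Psi(T)$. The sketch below computes $M$ this way and checks the defining property $\Psi(t+T) = \Psi(t)\,M$; the $\omega$ is again an illustrative choice:

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi  # period of the illustrative omega below

def rhs(t, y):
    x, v = y
    return [v, -(1.0 + 0.5 * np.cos(t)) * x]

def propagate(ic, t0, t1):
    return solve_ivp(rhs, (t0, t1), ic, rtol=1e-11, atol=1e-13).y[:, -1]

# Columns of M are the solutions at time T for initial conditions e1, e2,
# because Psi(0) = I implies Psi(T) = Psi(0) M = M.
M = np.column_stack([propagate([1, 0], 0, T), propagate([0, 1], 0, T)])

# Check the defining property Psi(t + T) = Psi(t) M at some t:
t = 1.3
Psi_t  = np.column_stack([propagate([1, 0], 0, t), propagate([0, 1], 0, t)])
Psi_tT = np.column_stack([propagate([1, 0], 0, t + T), propagate([0, 1], 0, t + T)])
print(np.allclose(Psi_tT, Psi_t @ M, atol=1e-6))  # True
```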

One checks easily that different $\Psi$ for the same $A$ can yield different $M$. But:

Lemma: The monodromy matrix of a Floquet equation is determined up to isomorphism.

To see this, let $\Phi$ be another fundamental matrix for $A$, and let $N$ be the corresponding monodromy matrix. We have $\Phi(t) = \Psi(t)\,Q$ for some invertible $n\times n$ matrix $Q$. So

$$\begin{aligned} \Psi(t) Q N &= \Phi(t) N \\ &= \Phi(t+T) \\ &= \Psi(t+T)\,Q \\ &= \Psi(t) M Q \end{aligned}$$

Because $\Psi(t)$ and $Q$ are invertible, we get $N = Q^{-1} M Q$. $\square$

Importantly, it follows that any two monodromy matrices of the same Floquet equation have the same Jordan normal form, and therefore the same eigenvalues.

Now recall that we assumed that the matrix $A$ from our Floquet equation has only real entries. Naturally, we are only interested in real solutions $\Psi$. So any resulting $M$ too has only real entries.

Floquet's theorem

Floquet equations cannot generally be solved symbolically. However, Floquet's theorem makes a useful statement about the solutions. The version of the theorem that interests me here is:

Theorem: Any fundamental matrix $\Psi$ of a Floquet equation with period $T$ has the form $\Psi(t) = \Pi(t)\,e^{t B}$ for some $T$-periodic matrix function $\Pi$ and some constant matrix $B$ of matching dimension.

Note that the statement employs the matrix exponential $e^{t B}$ (see references section).

Proof: Because $M$ is invertible, it has a matrix logarithm (see references section), that is, a matrix of which $M$ is the exponential. (Important properties of the matrix logarithm: one exists for every invertible matrix; and, as in the special case of scalar logarithms, it can be complex-valued even if the exponential is real-valued, and it is not unique. For example, $i(2k+1)\pi$ is a logarithm of minus one for every integer $k$.) Let $B$ be a logarithm of $M$, divided by $T$. That is, $e^{T B} = M$. Let $\Pi(t) := \Psi(t)\,e^{-t B}$. To see that $\Pi$ is $T$-periodic, consider

$$\begin{aligned} \Pi(t+T) &= \Psi(t+T)\,e^{-(t+T)B} \\ &= \Psi(t)\,M e^{-T B - t B} \\ &= \Psi(t)\,e^{T B} e^{-T B} e^{-t B} \\ &= \Psi(t)\,e^{-t B} = \Pi(t) \end{aligned}$$ $\square$
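The construction in the proof is directly available in code via SciPy's `logm` and `expm`. A sketch on a sample invertible matrix of my own choosing (not a monodromy matrix from this post):

```python
import numpy as np
from scipy.linalg import expm, logm

T = 1.0
M = np.array([[0.5, 1.0],
              [0.0, 2.0]])          # invertible, so a logarithm exists

B = logm(M) / T                      # one choice of B with e^{T B} = M
print(np.allclose(expm(T * B), M))   # True

# Pi(t) := Psi(t) e^{-tB} is then T-periodic by the computation above.
```

Here the eigenvalues of `M` are positive reals, so `logm` returns a real matrix; for negative or complex eigenvalues the logarithm is complex, as noted in the proof.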

Applying the theorem to the oscillator

First we perform a coordinate change into the eigensystem of the monodromy matrix $M$. This is tantamount to assuming that $M$ is in Jordan normal form. As for any $2\times 2$ matrix, the Jordan normal form is

$$M = \begin{pmatrix} \mu_1 & \delta \\ 0 & \mu_2 \end{pmatrix}$$

where the $\mu_i$ are the eigenvalues and $\delta$ is zero or one. The book considers only the case where the two eigenvalues differ, and therefore $\delta$ is zero. We shall also consider the case where $\delta$ is one. This can happen, as we shall see later.

We shall now apply the Floquet theorem. First, we need a logarithm of $M$. We shall deal first with the more difficult case where $\delta$ is one, and therefore $\mu_1=\mu_2=\mu$.

As explained in a Wikipedia article referenced below, the logarithm can be calculated with a Mercator series:

$$\ln (1+x) = x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots$$

We have $M = \mu(I + K)$, where $I$ stands for the identity matrix and

$$K = \begin{pmatrix} 0 & 1/\mu \\ 0 & 0 \end{pmatrix}$$

Using the fact that $K^n$ vanishes for $n$ greater than one, we get

$$\begin{aligned} \ln M &= \ln \big(\mu(I+K)\big) \\ &= \ln (\mu I) + \ln (I+K) \\ &= (\ln \mu)\,I + K - \frac{K^2}{2} + \frac{K^3}{3} - \cdots \\ &= (\ln \mu)\,I + K \\ &= \begin{pmatrix} \ln\mu & 1/\mu \\ 0 & \ln\mu \end{pmatrix} \end{aligned}$$

From the proof of the theorem, we know that we can choose $B = T^{-1}\ln M$. Some calculation involving the matrix exponential yields

$$e^{t B} = \begin{pmatrix} \mu^{t/T} & \frac{t}{T}\mu^{t/T-1} \\ 0 & \mu^{t/T} \end{pmatrix}$$

Note that $e^{T B} = M$, as required. Now suppose we have a fundamental matrix

$$\Psi(t) = \begin{pmatrix} x_1(t) & x_2(t) \\ v_1(t) & v_2(t) \end{pmatrix}$$

When we spell out the Floquet theorem elementwise, and ignore the $v_i$, we get:

Corollary: If $\delta = 1$, there are $T$-periodic functions $\pi_1$, $\pi_2$ and $\pi_3$ such that

$$\begin{aligned} x_1(t) &= \mu^{t/T} \pi_1(t) \\ x_2(t) &= \mu^{t/T} \pi_2(t) + \frac{t}{T}\mu^{t/T-1} \pi_3(t) \end{aligned}$$

(Multiplying out $\Psi = \Pi\,e^{tB}$ shows that the growing term lands in $x_2$: the first column of $\Psi$ satisfies $x_1(t+T) = \mu\,x_1(t)$, so it has no linearly growing part.)
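The closed forms for $\ln M$ and $e^{tB}$ above are easy to verify numerically; a sketch with the illustrative value $\mu = 2$:

```python
import numpy as np
from scipy.linalg import expm, logm

mu, T = 2.0, 1.0
M = np.array([[mu, 1.0],
              [0.0, mu]])

# ln M = (ln mu) I + K with K = [[0, 1/mu], [0, 0]], by the Mercator series
lnM = np.array([[np.log(mu), 1.0 / mu],
                [0.0, np.log(mu)]])
print(np.allclose(logm(M), lnM))            # True

# e^{tB} with B = (1/T) ln M matches the closed form above
t = 0.4
etB = np.array([[mu**(t / T), (t / T) * mu**(t / T - 1)],
                [0.0, mu**(t / T)]])
print(np.allclose(expm(t * lnM / T), etB))  # True
```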

If $\delta = 0$, we get the result from the book:

Corollary: If $\delta = 0$, there are $T$-periodic functions $\pi_1$ and $\pi_2$ such that

$$\begin{aligned} x_1(t) &= \mu_1^{t/T} \pi_1(t) \\ x_2(t) &= \mu_2^{t/T} \pi_2(t) \end{aligned}$$

The calculation is much simpler than for $\delta=1$. I omit it here.

Possible eigenvalues

First, we observe, like the book:

Corollary: The eigenvalues of the monodromy matrix $M$ of a parametric oscillator are either real or complex conjugates of one another.

This follows simply from the fact that $M$ has only real entries. Now the book deduces, for $\delta = 0$, that the eigenvalues must be mutually reciprocal. We shall show this even for $\delta = 1$:

Lemma: The product of the eigenvalues of the monodromy matrix $M$ of a parametric oscillator is one.

Proof: Liouville's formula (see references section) entails, for every fundamental matrix $\Psi$ of a Floquet equation, that

$$\frac{d}{dt}\det\Psi(t) = \mathrm{tr}\,A(t) \cdot \det\Psi(t)$$

Here $\mathrm{tr}$ stands for the trace, the sum of the diagonal elements of a matrix. For a parametric oscillator, that trace is zero. So $\det\Psi(t)$ is constant. Because $\Psi(t+T) = \Psi(t)M$, we have $\det\Psi(t+T)=\det\Psi(t)\det M$. Since $\det\Psi$ is constant, we have $\det\Psi(t+T)=\det\Psi(t)$, and since $\det\Psi(t)\neq 0$ we get $\det M = 1$. The claim follows because the determinant is the product of the eigenvalues. $\square$
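Both facts in the proof can be observed numerically: the Wronskian determinant $\det\Psi(t)$ stays constant along the flow, and $\det M = 1$. A sketch with an illustrative $\omega$ (starting from $\Psi(0) = I$, so the constant is one):

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi  # period of the illustrative omega below

def rhs(t, y):
    x, v = y
    return [v, -(1.0 + 0.5 * np.cos(t)) * x]

def fundamental(t1):
    # Fundamental matrix with Psi(0) = I, evaluated at time t1.
    cols = [solve_ivp(rhs, (0, t1), ic, rtol=1e-11, atol=1e-13).y[:, -1]
            for ic in ([1.0, 0.0], [0.0, 1.0])]
    return np.column_stack(cols)

# det Psi(t) is constant, because tr A(t) = 0 ...
print(np.linalg.det(fundamental(1.0)), np.linalg.det(fundamental(2.5)))  # both close to 1
# ... and in particular det M = det Psi(T) = 1:
print(np.linalg.det(fundamental(T)))
```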

Combining the results of this section, we see that the eigenvalues are either reciprocal reals, or two non-reals on the complex unit circle which are complex conjugates of one another. When $\delta = 1$, we know also that the two eigenvalues are the same, and so they are both one or both minus one.

Classification of possible behaviors

First, suppose that $\delta = 1$. Then the eigenvalues are both one or both minus one.

If $\mu = 1$, we have by an earlier corollary

$$\begin{aligned} x_1(t) &= \pi_1(t) \\ x_2(t) &= \pi_2(t) + \frac{t}{T}\pi_3(t) \end{aligned}$$

for $T$-periodic $\pi_1$, $\pi_2$, and $\pi_3$, where $x_1$ and $x_2$ are coordinates which go along with the eigensystem of the monodromy matrix.

If $\mu = -1$, we have

$$\begin{aligned} x_1(t) &= (-1)^{t/T} \pi_1(t) \\ x_2(t) &= (-1)^{t/T} \pi_2(t) + \frac{t}{T}(-1)^{t/T-1} \pi_3(t) \end{aligned}$$

Note that this entails that there are $2T$-periodic $\rho_1$, $\rho_2$, and $\rho_3$ such that

$$\begin{aligned} x_1(t) &= \rho_1(t) \\ x_2(t) &= \rho_2(t) + \frac{t}{T}\rho_3(t) \end{aligned}$$

Now suppose that $\delta = 0$. We concluded above that $x_1(t) = \mu_1^{t/T} \pi_1(t)$ and $x_2(t) = \mu_2^{t/T} \pi_2(t)$ for $T$-periodic $\pi_1$ and $\pi_2$.

If the eigenvalues are both one, all solutions are $T$-periodic. Note that in this case $M$ is not just isomorphic to, but equal to, the identity matrix. So any coordinate system is an eigensystem; that is, we can choose the $x_i$ freely.

If the eigenvalues are both minus one, all solutions are $2T$-periodic. In this case $M$ is not just isomorphic to, but equal to, minus one times the identity matrix. So here too, any coordinate system is an eigensystem, and we can choose the $x_i$ freely.

If the eigenvalues are other reals, the one whose absolute value is greater than one "wins" as $t$ goes towards infinity. So the amplitude grows exponentially. If the eigenvalues are not real, they lie on the complex unit circle, and the amplitude has an upper bound.

Example: the Mathieu equation

The Mathieu equation is the parametric oscillator with

$$\omega(t)^2 = \omega_0^2\,(1 + h \cos (\gamma t))$$

If this $\omega(t)$ came from a child on a swing, it would be a strange child: one that stands and squats at a frequency $\gamma$ independent of the resonance frequency $\omega_0$ of the swing. Still, the Mathieu equation is important in physics.

Here is a graph showing the monodromy's eigenvalues for the Mathieu equation with $\omega_0 = 1$ and $h = 1$. The vertical axis corresponds to $\gamma$, which ranges from $0.2$ to $5$. Horizontally, we have the complex plane. For each $\gamma$, the graph contains both eigenvalues of the corresponding monodromy matrix. I refrained from drawing the coordinate axes, to avoid clutter.

The eigenvalues of a Mathieu equation as gamma changes

The graph shows that, for every $\gamma$, the eigenvalues are either (1) on a circle, which happens to be the unit circle, or (2) opposite one another on a perpendicular of the circle, actually as reciprocal reals. This agrees with our mathematical results about the possible eigenvalues. In case (1) we have no resonance. In case (2), we have resonance.

The greatest resonance is represented by the "face of the bunny", around $\gamma = 2\omega_0 = 2$, connected to the circle at $-1 + 0i$. The second greatest resonance is represented by the bunny's (uppermost) tail, around $\gamma = \omega_0 = 1$, connected to the circle at $1 + 0i$. This second greatest resonance corresponds to a normal child that stands and squats once during a period of the swing. The greatest resonance corresponds to an eager child that turns around at the apex, facing down again, and stands and squats again on the way back.

There are also resonances for smaller $\gamma$, their connection points with the circle alternating between $-1 + 0i$ and $1 + 0i$.

It is worth noting that, for smaller $h$, the resonance areas can shrink in such a way that only the bunny's face at $\gamma = 2$ remains, while all resonances at smaller $\gamma$ vanish. That is: if the child's standing and squatting have a small amplitude $h$, the child needs to stand and squat more often to achieve resonance.
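The eigenvalue data behind the graph can be reproduced with a short script: integrate over one period of $\omega$ to get the monodromy matrix, then take its eigenvalues. A sketch (plotting omitted; the sampled $\gamma$ values are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy(gamma, omega0=1.0, h=1.0):
    T = 2 * np.pi / gamma  # period of omega(t)^2 = omega0^2 (1 + h cos(gamma t))
    def rhs(t, y):
        return [y[1], -omega0**2 * (1 + h * np.cos(gamma * t)) * y[0]]
    cols = [solve_ivp(rhs, (0, T), ic, rtol=1e-11, atol=1e-13).y[:, -1]
            for ic in ([1.0, 0.0], [0.0, 1.0])]
    return np.column_stack(cols)

for gamma in (0.5, 1.0, 2.0, 3.0):
    ev = np.linalg.eigvals(monodromy(gamma))
    # The product of the eigenvalues is always one (det M = 1).
    print(gamma, ev, ev[0] * ev[1])
```

Sweeping `gamma` over a fine grid and scattering the eigenvalues in the complex plane reproduces the "bunny".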

The transition into and out of resonance

Possible shapes of the monodromy matrix

As we have seen, the transitions into and out of resonance happen where the eigenvalues are both one or both minus one. This means that the Jordan normal form of the monodromy matrix is

$$\begin{pmatrix} 1 & \delta \\ 0 & 1 \end{pmatrix} \quad\text{or}\quad \begin{pmatrix} -1 & \delta \\ 0 & -1 \end{pmatrix}$$

where $\delta$ is zero or one. So:

To fully understand the transitions into and out of resonance, we must know $\delta$!

From the start, I wondered about the case where $M$ cannot be diagonalized, that is, $\delta = 1$, since that was left aside in the book. Next, I was intrigued by the instantaneous ninety-degree turns where the bunny's body meets the face or a tail. Those points turned out to be the only ones where $M$ might be undiagonalizable. So I kept running into the question about $\delta$.

I checked, with Mathematica, the bunny's two transition points for the resonance at $\gamma = 2$, and its two transition points for the resonance at $\gamma = 1$. In all cases, we have $\delta = 1$. So the question arises:

Is it true for all parametric oscillators that the monodromy matrix is undiagonalizable at all transitions into and out of resonance?

We shall now shed light on this question.
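One way to probe such transition points numerically: a $2\times 2$ matrix with (numerically) equal eigenvalues is diagonalizable exactly when it is a multiple of the identity. The helper below is my own sketch, with an illustrative tolerance; applied to a computed monodromy matrix near a transition point, it estimates $\delta$:

```python
import numpy as np

def seems_undiagonalizable(M, tol=1e-6):
    """True if the 2x2 matrix M has (nearly) equal eigenvalues
    but is not (nearly) a multiple of the identity, i.e. delta = 1."""
    ev = np.linalg.eigvals(M)
    if abs(ev[0] - ev[1]) > tol:
        return False  # distinct eigenvalues: always diagonalizable
    mu = ev.mean()
    return np.linalg.norm(M - mu * np.eye(2)) > tol

# A Jordan block is undiagonalizable; a scalar matrix is not.
print(seems_undiagonalizable(np.array([[1.0, 1.0], [0.0, 1.0]])))  # True
print(seems_undiagonalizable(-np.eye(2)))                          # False
```

Note that near a transition the eigenvalues are only approximately equal, so the tolerance must be chosen with care.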

The meaning of diagonalizability and lack thereof

First, suppose that $\delta = 0$. If $\mu = 1$, we have, as observed above, two linearly independent solutions $x_1(t) = \pi_1(t)$ and $x_2(t) = \pi_2(t)$, where the $\pi_i$ are $T$-periodic. Since every solution $x(t)$ is a linear combination of those $x_i$, it follows that every solution is $T$-periodic. So, for every initial phase $(x(t_0), v(t_0))$ at some $t_0$, the corresponding solution is $T$-periodic. If $\mu = -1$, we can deduce by a similar argument: for every initial phase $(x(t_0), v(t_0))$ at some $t_0$, the corresponding solution is $2T$-periodic.

Now suppose that $\delta = 1$. If $\mu = 1$, we have, as observed above, two linearly independent solutions $x_1(t) = \pi_1(t)$ and $x_2(t) = \pi_2(t) + \frac{t}{T} \pi_3(t)$, where the $\pi_i$ are $T$-periodic. So the solution space, which is two-dimensional, has a one-dimensional subspace of $T$-periodic solutions. All other solutions grow linearly with time. So for every $t_0$, the (also two-dimensional) space of initial conditions at $t_0$ has a one-dimensional subspace leading to $T$-periodic solutions. For all other initial conditions, the solutions grow linearly with time. For $\mu = -1$, we get something similar: for every $t_0$, the space of initial conditions has a one-dimensional subspace leading to periodic solutions, this time with period $2T$. Again, all other solutions grow linearly.

In summary: for $\delta = 0$, all solutions are periodic, while for $\delta = 1$ only some are periodic. In the latter case, we can destabilize a periodic solution by arbitrarily small changes of the initial conditions.
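The two cases can also be seen from powers of the monodromy matrix: after $n$ periods the phase is multiplied by $M^n$, which stays bounded for a diagonal $M$ with unit-modulus eigenvalues but grows linearly for a Jordan block. A quick sketch:

```python
import numpy as np

n = 1000

# delta = 0, eigenvalues -1: M^n alternates between -I and I, so bounded.
M0 = -np.eye(2)
print(np.linalg.matrix_power(M0, n))        # the identity, since n is even

# delta = 1, eigenvalue 1: the off-diagonal entry of M^n equals n.
M1 = np.array([[1.0, 1.0], [0.0, 1.0]])
print(np.linalg.matrix_power(M1, n)[0, 1])  # 1000.0
```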

Undiagonalizable examples

We shall now give a concrete example, more illuminating than the Mathieu equation, where $\delta = 1$, that is, where $M$ cannot be diagonalized. Here $\omega$ will be a certain rectangular pulse:

$$\omega(t) = \begin{cases} 1 & 0 \leq (t \bmod T) < t_1\\ \omega_{\mathrm{max}} & t_1 \leq (t \bmod T) < t_2\\ 0 & t_2 \leq (t \bmod T) < t_3\\ 1 & t_3 \leq (t \bmod T) < T \end{cases}$$

with $0 < t_1 < t_2 < t_3 < \frac{\pi}{2}$.

Here $T$ is the period, which we must still determine, and $\omega_{\mathrm{max}}$ is a value greater than one, which we must still determine. For the construction, we assume temporarily the initial conditions $x(0) = 1$ and $v(0) = 0$. That is, the solution is the cosine for $0 \leq t < t_1$. We let $t_2 = t_1 + \Delta t$ for a small $\Delta t$. The $\omega_{\mathrm{max}} > 1$ "accelerates the swing", that is, the solution increases its descent more than a cosine while $\omega_{\mathrm{max}}$ lasts. We choose $\omega_{\mathrm{max}}$ in such a way that at $t_2$ the solution's first derivative is minus one. There it remains until $t_3$, since $\omega$ is zero on that stretch. We let $t_3$ be the point where the solution is zero for the first time for positive $t$. So, from $t_3$ on, the solution is again like a cosine with amplitude one, but shifted a little to the left. We let $T$ be the time, slightly less than $2\pi$, where the solution is zero for the second time for positive $t$. Obviously, the solution is periodic with period $T$. It looks like a cosine, except that in the first quadrant there is a "fast forward" lasting from $t_1$ to $t_3$.

So, our constructed parametric oscillator has a periodic solution. But are all solutions periodic? No! We fine-tuned $\omega(t)$ so that it would have a periodic solution specifically for the initial condition $x(0) = 1$ and $v(0) = 0$. As can easily be checked, there are other initial conditions with non-periodic solutions. So, owing to earlier observations, the initial conditions with periodic solutions form a one-dimensional subspace. That is, the only periodic solutions arise from initial conditions that are scalar multiples of $x(0) = 1$, $v(0) = 0$. The period of our $\omega$ function agrees with that of the oscillator's periodic solution, so the eigenvalues are one. In summary, our constructed parametric oscillator has

$$M = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$

Our constructed $\omega$ supplies one impulse in the first quadrant of the movement. So four quadrants pass between impulses. Obviously, we could modify our construction to have an impulse in the first and third quadrants. Then two quadrants would pass between impulses. So the solution's period would be twice that of $\omega$, and the eigenvalues would be minus one. We could also modify our construction to have six quadrants between impulses (eigenvalues minus one), or eight (eigenvalues one), or ten (eigenvalues minus one), and so on.

Diagonalizable examples

In an earlier version of this post, I conjectured that there is no parametric oscillator with non-constant $\omega$ that has $M = \mathrm{Id}$ or $M = -\mathrm{Id}$. My conjecture was inspired by the previous section. But John Baez proved me wrong.

First, an example where $M = \mathrm{Id}$. Consider the following non-constant $\omega$:

$$\frac{\omega(t)}{2\pi} = \begin{cases} 1 & 0 \leq (t \bmod 0.75) < 0.5\\ 2 & 0.5 \leq (t \bmod 0.75) < 0.75 \end{cases}$$

The solution for $x(0) = 0$ and $v(0) = 1$ is composed of two sine curves of different frequency:

Solution for M = Id, x(0) = 0 and v(0) = 1

It is periodic, with period $0.75$. The solution for $x(0) = 1$ and $v(0) = 0$ is composed of two cosine curves of different frequency:

Solution for M = Id, x(0) = 1 and v(0) = 0

This too is periodic with period $0.75$. Since the solution space is spanned by those two solutions, every solution is periodic with period $0.75$. Since $0.75$ is also the period of $\omega$, both eigenvalues are one. So the monodromy matrix is the identity.

Now an example where $M = -\mathrm{Id}$. Consider the following non-constant $\omega$:

$$\frac{\omega(t)}{2\pi} = \begin{cases} 1 & 0 \leq (t \bmod 1) < 0.5\\ 2 & 0.5 \leq (t \bmod 1) < 1 \end{cases}$$

The solution for $x(0) = 0$ and $v(0) = 1$ is composed of two sine/cosine curves of different frequency:

Solution for M = -Id, x(0) = 0 and v(0) = 1

This is periodic, with period two. The solution for $x(0) = 1$ and $v(0) = 0$ too is composed of two sine/cosine curves of different frequency:

Solution for M = -Id, x(0) = 1 and v(0) = 0

This too is periodic with period two. Since the solution space is spanned by those two solutions, every solution is periodic with period two. Since that is twice the period of $\omega$, both eigenvalues are minus one. So the monodromy matrix is minus the identity.
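Both examples can be verified exactly, because for piecewise-constant $\omega$ each segment has a closed-form propagator. A sketch:

```python
import numpy as np

def segment(omega, dt):
    # Exact propagator of x'' + omega^2 x = 0 over a time step dt (omega > 0):
    # maps (x, v) at the start of the segment to (x, v) at its end.
    c, s = np.cos(omega * dt), np.sin(omega * dt)
    return np.array([[c, s / omega],
                     [-omega * s, c]])

# First example: omega/(2 pi) = 1 on [0, 0.5), 2 on [0.5, 0.75); period 0.75.
M_id = segment(4 * np.pi, 0.25) @ segment(2 * np.pi, 0.5)
print(np.allclose(M_id, np.eye(2)))        # True: M = Id

# Second example: omega/(2 pi) = 1 on [0, 0.5), 2 on [0.5, 1); period 1.
M_minus = segment(4 * np.pi, 0.5) @ segment(2 * np.pi, 0.5)
print(np.allclose(M_minus, -np.eye(2)))    # True: M = -Id
```

Each half-period here covers a half-integer number of oscillations, so each segment propagator is exactly $\pm\mathrm{Id}$, which is why the products come out so cleanly.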

References

L.D. Landau and E.M. Lifschitz. Lehrbuch der theoretischen Physik I: Mechanik. Verlag Harry Deutsch, 14th edition.

Wikipedia: Floquet theory

Wikipedia: Liouville's formula

Wikipedia: Matrix exponential

Wikipedia: Logarithm of a matrix