Understanding Phase-Locked Loops: Theory and Application

For some complicated and largely irrelevant reasons, I found myself needing to understand how phase-locked loops work recently. It was a fun chance to dust off my “circuits brain,” because I haven’t been in a situation where I’ve done much of interest with electronics in quite a while. As it happened, the only reference I had handy was the legendary The Art of Electronics (AoE) by Horowitz and Hill. Alas, I only had its first edition from 1980, which holds up surprisingly well. It was only upon reading the phrase “nowadays, personal computers can be acquired for as little as $5k” that I called in a favor or two to get a look at the 2015 edition. It’s worth the price, if only for the footnotes and off-the-rails easter eggs.

Being out of the game for so long, I had to learn a little before I was ready to learn from AoE. This blog (and associated LTspice files) from Robert Keim and this tutorial from Behzad Razavi were rather helpful. The latter actually goes into a lot more math than you find in AoE. Arguably—and this is the premise of AoE—it’s almost counterproductive to throw a lot of effort into understanding the detailed mathematics for a lot of practical electronics. But I was rusty, and the math was helpful in setting up some conceptual anchors.

So here’s the plan. In this post, I distill many hours of my own aimless wandering into what I hope is a decent outline of the basic functionality of a phase-locked loop (PLL) and, more importantly, what interesting things one can do with them. There’s a TON of content on this out there, and much of it is quite good. But from perusing YouTube and various forums, I was struck by how disconnected a lot of the explanations were from the interesting use cases. So what if you can lock the phase difference between two sine waves? It probably stems from my pre-existing condition of “theorist,” but it took me a lot longer than I thought it should have to see the big picture. Consequently, this post builds directly to how a PLL can (1) act as a frequency multiplier and (2) demodulate (recover the signal from) an FM radio wave. Here we go.

Basic block structure of a phase-locked loop

A phase-locked loop is made up of three basic parts (arguably two) and a feedback loop, shown above. One way of thinking about the basic functionality is that it’s defined by the following rule:

For a pure sine wave of frequency f, the output is another pure sine wave of the same frequency f, with a fixed phase difference \phi (with respect to the input).

Experimentalists and other practically-minded folks might immediately see value in this functionality. I now see this “rule” as sort of analogous to the “Golden Rules for Op-Amps.” For example, an (ideal) op-amp (composed of a myriad of carefully-arranged transistors and passive components) will “do whatever is required to maintain equal voltage at both inputs.” Why not just connect the inputs with a wire? Well, because by exploiting this “baked in” feature, some other part of the circuit will do something interesting (amplification, buffering, integration, etc.). I did a lot with op amps without really knowing much about their innards. But the PLL is a little more subtle than an op amp, so it’s worth digging into the how.

Phase Detector (and Low-Pass Filter)

The first component is the phase detector (PD). At a high level, this device takes two oscillatory inputs and (somehow) outputs a voltage proportional to the phase difference between them. The low-pass filter (LPF) is arguably a crucial piece for getting the phase into a usable form, so I’m going to lump those pieces together as the “phase detector.” PLLs (and consequently PDs) come in a variety of flavors, so let’s start with digital input, or square wave signals. In that case, all you need to get a rudimentary PD is the following: an XOR gate and an RC low-pass filter.

For digital input, this is the simplest phase-detector (with low-pass filter). But why does this work?

The XOR gate takes two square wave (on/off) signals and only gives an output when exactly one is on and the other is off. Hence, “eXclusive OR.” If you think about what happens when you put in two slightly different frequencies, you’d get output blips of increasing duration as the two signals gradually fall out of phase. For now, we’re just looking at two externally fixed input signals. The feedback part comes later!

XOR output for two square waves of slightly differing frequencies.

The output clearly has something to do with phase as its pulse widths seem to increase as the signals drift out of phase (and consequently will decrease when one signal begins to “lap” the other). But we’re trying to convert phase into a (DC) signal whose voltage is proportional to phase. Enter the low-pass filter. To understand what it does, you can (1) wave hands in several ways, (2) go through the math, or (3) just set it up and see what happens. You get different insight from each of the three approaches, and I’ll adopt the AoE mindset by focusing on (1) and (3). When the XOR output logic signal (5V) is on, the capacitor charges up. When it’s off, the capacitor discharges through the resistor. As the pulse widths increase, you get more charge time and less discharge time. So charge accumulates, raising the capacitor voltage. That’s basically it, and the capacitor voltage effectively tracks the phase. Done.
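Since I can’t hand you a breadboard, here’s a rough numerical sketch of that hand-wave in Python (the sample rate, filter coefficient, and 5V logic level are my own illustrative choices, not from any particular circuit): XOR two square waves at 5kHz and 4.95kHz, smooth the result with a one-pole RC-style filter, and watch the “capacitor voltage” ramp up and down at the beat frequency.

```python
import numpy as np

fs = 200_000                      # sample rate (Hz), an arbitrary choice
t = np.arange(0, 0.1, 1 / fs)     # 100 ms of signal

# Two 5V logic square waves at slightly different frequencies
s1 = np.sin(2 * np.pi * 5000 * t) > 0
s2 = np.sin(2 * np.pi * 4950 * t) > 0

# XOR output: pulse widths grow and shrink as the signals slip in phase
xor_out = 5.0 * np.logical_xor(s1, s2)

# One-pole RC-style low-pass, cutoff ~ alpha * fs / (2*pi) ≈ 500 Hz
alpha = 0.0157
v_ctrl = np.empty_like(xor_out)
v = 0.0
for i, x in enumerate(xor_out):
    v += alpha * (x - v)          # capacitor charging toward the XOR output
    v_ctrl[i] = v

# The filtered voltage should sweep up and down at the 50 Hz beat frequency
spec = np.abs(np.fft.rfft(v_ctrl - v_ctrl.mean()))
beat = np.fft.rfftfreq(v_ctrl.size, 1 / fs)[np.argmax(spec)]
```

The `alpha` knob plays the role of the RC time constant: too small and the voltage can’t follow the phase, too large and the individual XOR pulses bleed through.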

I find myself lacking electronics components to play with at the moment. So the next best thing is to use LTspice, a free (and somewhat quirky) circuit simulation program from your friends at Analog Devices. It’s alright for basic use, but I’d only recommend it for playing around with simple circuits (and ideal components) unless you want to spend a lot of time giving it specific models for various real-world components. If I set this PD up in LTspice, I get the following for the inputs and XOR output:

Input (left) and output (right) to XOR gate from LTspice simulation

It works just like we expect! Now, if I increase the frequencies to 5kHz and 4.95kHz, these graphs get harder to resolve unless you zoom in quite far. But the voltage on the capacitor (let’s call it V_{ctrl}) is

After XOR and LPF, 5kHz and 4.95 kHz signals yield the following output. This signal tracks the instantaneous phase difference of the input signals.

Note that the frequency is about 50Hz, which is what you’d expect from the beat frequency of the input signals (5kHz and 4.95kHz differ by 50Hz). And so this V_{ctrl} signal just smoothly tracks the phase difference between those inputs. If it works for that, we’ll reason that it should do the same for any inputs. The label V_{ctrl} is chosen aptly, as this voltage is used to control the next component.

Voltage-Controlled Oscillator (VCO)

The next block is something called a voltage-controlled oscillator (VCO). As the name suggests, it’s just an oscillator whose resonant frequency is controlled by an external voltage, or V_{ctrl} in our case. How would one make such a thing?

VCO Theory: to make an oscillator with variable frequency, just find a variable capacitor!

I think it’s helpful to start with the basic textbook model of an LC oscillator. This system is mathematically equivalent to the simple harmonic oscillator. Instead of a mass on a spring, we have charge building up on and discharging from the capacitor. The capacitor is like the spring (it stores energy), and the inductor is like the mass (inductance constitutes a sort of inertia in the EM field). It also oscillates at a fixed frequency, f = 1/(2\pi \sqrt{LC}). So one thing you could do is to supply some of the capacitance with a reverse-biased varactor diode. Diodes exhibit variable capacitance in reverse-bias operation because the applied voltage changes the size of the depletion region, and that’s sort of like adjusting the distance between two capacitor plates (C \propto 1/d). The scary “varactor” (or sometimes “varicap”) modifier just means they’ve been carefully designed to have a larger range of variation in capacitance.
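To put numbers on that, here’s a quick sketch (the 10µH inductor and the 20–100pF capacitance swing are invented for illustration) of how a varactor’s capacitance swing translates into frequency tuning via f = 1/(2\pi\sqrt{LC}):

```python
import math

L = 10e-6  # 10 µH inductor (illustrative value)

def lc_frequency(C):
    """Resonant frequency of an ideal LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# A varactor swinging from, say, 20 pF to 100 pF as the reverse bias changes
for C in (20e-12, 50e-12, 100e-12):
    print(f"C = {C * 1e12:5.0f} pF  ->  f = {lc_frequency(C) / 1e6:6.2f} MHz")
```

A 5x swing in capacitance only buys a sqrt(5) ≈ 2.2x swing in frequency, which is one reason real VCOs have a limited tuning range.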

So here’s where theory and practicality clash pretty hard. Real circuits have resistance (especially inside inductors), and resistance dissipates energy. So there’s no way to get by with this simple LC circuit unless it’s cleverly supplemented with some kind of mechanism for replacing the losses. The Colpitts oscillator is just such a device, one which happened to turn 100 years old back in 2018 (!). There are all kinds of variants, but here’s one I found which has been suitably modified with varactor diodes to serve as a VCO.

You’re going to want to, but don’t look at anything to the right of the inductor. Image from here.

Over half of this circuit (the right half) is devoted to a feedback mechanism for keeping the oscillator going in spite of its internal losses. The conceptually important part is the inductor and the two (reverse-biased) varactors. Note that the voltage across those varactors (which sets their capacitances) is controlled by an external voltage. That’s basically it: this thing should output an oscillation whose frequency can be tuned by that input signal.

All together now

The big claim is that if we put these pieces together in the right order, we should be able to lock the oscillator’s output signal into phase (and hence, frequency) with the input signal. How does it work? In a nutshell, we’re measuring the phase difference and then adjusting the VCO frequency accordingly. I thank this video for clarifying that big picture when I was early into this and still pretty clueless at a conceptual level. What’s not immediately clear (to me) is why this mechanism results in some kind of reliable approach to the desired steady state. Look at the humble logistic map for an example of how feedback can result in steady state or in wildly complicated behavior. That could happen here, too (theorist brain). For starters, the VCO is going to need to be tuned carefully so it’s able to lock into the right frequency. If it can’t do that, the whole thing falls apart.

Returning to LTspice, I found that the surprisingly prolific Robert Keim was able to implement this in LTspice, and he kindly made his files available for download here. I had mixed success in rigging together a functional Colpitts-style VCO in LTspice. There’s also the subtlety that the VCO I described outputs an analog signal, whereas the input is digital. Keim’s slick solution was to use a built-in “resistor set” chip that acts like a digital oscillator (square pulse output) and is controlled by an input current. His clever trick was to make that input current depend on the control voltage, using trial and error to tune the frequency range to include his input signal. He also used a special set of logic gates you have to install for use in LTspice. I had neither the patience nor motivation to learn how to do this, so I fiddled around with the native XOR gate and its weird number of inputs/outputs (5/2 instead of 2/1) until it worked.

PLL simulation lovingly lifted (and slightly modified) from Robert Keim’s informative blog post.

Interestingly, I found the Frankenstein version of his simulation I concocted didn’t actually lock as configured. Likely, I shouldn’t have changed anything, no matter how equivalent I thought my version was. But with a little fiddling (basically adjusting the resonance of the VCO by changing the input function), it worked beautifully for a 5 kHz input.

The output of the PD/LPF is on the left. After some surprisingly tame transients, it more or less settles down to a constant. Since this voltage controls the frequency of the VCO, its stabilization suggests the loop has reached steady state. I’ll note that when the original parameters didn’t result in phase locking, the control signal still stabilized like this (more on that shortly).

On the right (looking in the range after the control voltage stabilizes), you see some beautiful locking (at ~ 90 degrees phase difference) between the input and output. If you zoom in during the transient period, you can confirm that the VCO output has a significantly variable frequency compared to the steady input. So, the thing actually works. Neat!

I’m glossing over some technicalities and highly-valuable practicalities (some of which I even understand now). As a naive theory person, I kind of expected this thing to be more versatile from the dramatic descriptions I heard on YouTube. It takes a lot of fine tuning, and you have to know the expected input frequency in order to tune the oscillator appropriately. In a real circuit, the locking range is going to be limited by how much those varactor capacitances can swing. Even here in simulation land, this mysterious “toy VCO” has qualitatively similar limitations, though I haven’t dug deeply enough to understand where they come from. It seems that if the VCO’s frequency range doesn’t include the input signal’s frequency, the control voltage will still stabilize, but then you lock in to some rational multiple of the input frequency. This leads to some “ripples” in the control voltage that you could miss if you don’t zoom in enough. I’m kind of grateful the simulation didn’t work for me immediately, because I gained a lot of intuition by having to turn knobs and explore what effect(s) they had. With some understanding of the basic operation, let’s see what this thing can actually do.

Frequency multiplier

There’s a delightful video on YouTube (this one) from MIT explaining PLLs in the context of it being 1985 when the video was recorded. It’s a charming little virtual time capsule. And for better or for worse, this was one of the first I watched. Not having fully wrapped my head around the basic functionality at the time, my mood deflated somewhat when the instructor just casually added a frequency divider to the loop as a “common application” and moved on while I was still Googling new words like “varactor” and “Colpitts.”

The value of a PLL is that, with very minimal modification (sometimes none), it can do really useful things. From a mathematical perspective, I’m now quite fascinated by how the thing works. But fascinations don’t pay the bills, and the only reason YouTube is flooded with PLL videos is that they’re useful in some practical, tangible way. Here’s one: Suppose you have an oscillator. The world is a collection of oscillators, from lasers to musical instruments, to (probably) lightsabers (but only in a galaxy far, far away). There’s going to be some limited range of frequencies over which that oscillator can resonate. What if you need a higher frequency than what you can create?

You could probably buy one that suits your needs. But maybe it’s 2024, late on a Sunday evening, so RadioShack is closed (see: 2024). If you’re in a pinch and can’t wait for a mail order, you can use a PLL to generate a higher frequency from your source signal. My initial tension with this idea (theorist brain condition again) was that if the VCO has to be tuned to the output (high frequency) already, why not just use that as the oscillator with an appropriate DC input? Is this not terribly redundant?

Alas, practicality and real-world complications rule over theory. It costs more to manufacture high-frequency oscillators, and they tend to be of lower quality than their low-frequency counterparts. Anecdotal handwaving: I recall when (red) laser pointers became affordable, consumer-grade tech in the 1990s. Suddenly every kid (like me) with a gadget-loving parent (not like me) was trying to get cheap laughs (see Seinfeld) and blind airplane pilots. Then there were green ones, but those were more expensive. I do remember the first one I saw being used quite effectively at an astronomy event. They were way brighter, too. Eventually blue ones appeared, but they were (initially) far more expensive than green and not as bright. That seemed disappointing, but the consumer “pointer” versions were all 5mW. It just happens that the human eye is more sensitive to green light, so those buggers actually just seemed “brighter.” But I digress.

So you might have to spend a lot to get the right frequency, and it may not be very reliable. The PLL feedback lets you generate a higher frequency while locked into the reliable phase behavior of the lower-frequency signal. You’re skirting the inherent VCO limitations with that feedback loop, and that’s one reason it’s valuable. The more I learn, the more I respect experimentalists.

With that lengthy prelude of justification out of the way (perhaps more for me than for you), here’s how you would go about generating a higher frequency with a PLL. For simplicity, I’ll treat the case of a factor of two (frequency doubler). Essentially, just throw a frequency divider into the feedback loop so that the signal being phase locked is actually half the frequency of the VCO output. Thanks, 1985’s MIT! Once you lock in, your VCO should be supplying twice the input frequency and locked into the phase of the input, so your effective output quality should be only limited by the quality of the input.

Add a frequency divider to the feedback loop, and the VCO doubles the frequency of the input while retaining its signal quality.

So how do we cut the frequency with a “divider?” Is this not just running in circles? Well, cutting the frequency is easy, in principle. You already have a full cycle to work with when you only need to produce a half cycle of output. There’s more information than you need. That should be way easier than trying to (directly) double the frequency by producing a full cycle of output with only a half cycle of input to “work with” (having less information than you need). For the factor-of-two case with logic output, there’s a delightful little logic element known as a D flip-flop (“D” is for “data,” or sometimes “delay”) that does just the trick. Here’s a cute simulation of how it works. The last step is to actually adjust the VCO so that it has a range of frequencies that covers whatever twice the input frequency is. It’s as simple as that, and yet again Robert Keim had already done it (here) in his series of PLL posts. But for the sake of completeness, here’s the output I got from throwing a flip-flop element into my feedback loop. His version used a special element from some other “library,” so I just added a native version of the flip-flop element. After “retuning” the oscillator output to the appropriate range (twice the input), it worked like a charm.
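Here’s a toy model of that divide-by-two trick in Python (the clock waveform is invented for illustration): a rising-edge-triggered D flip-flop with its inverted output wired back to D toggles once per clock edge, so the output square wave runs at exactly half the clock frequency.

```python
def divide_by_two(clock):
    """D flip-flop with D = not-Q: the output toggles on each rising clock edge."""
    q, prev, out = 0, 0, []
    for c in clock:
        if c == 1 and prev == 0:  # rising edge detected: latch D = ~Q
            q ^= 1
        prev = c
        out.append(q)
    return out

# A square-wave "clock": 20 full cycles, 10 samples per cycle
clock = [(i // 5) % 2 for i in range(200)]
halved = divide_by_two(clock)

def rising_edges(sig):
    """Count 0 -> 1 transitions, i.e. full cycles of a logic signal."""
    return sum(1 for a, b in zip(sig, sig[1:]) if a == 0 and b == 1)
```

Chaining n of these stages divides by 2^n, and fancier counter arrangements give other integer divisors, which is how PLL synthesizers reach arbitrary frequency multiples.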

PLL output after locking at twice the input frequency via D flip-flop in feedback loop

FM demodulation

Here’s what I think of as the really neat application for this PLL. An FM (frequency-modulated) waveform consists of a sine wave at some “carrier frequency” f_{0} in which a signal is encoded in variations of the carrier frequency, so

\displaystyle x(t) = A\sin(2\pi f(t)t)

where f(t) = f_{0} + \delta f(t). If we (for no apparent reason) integrate that frequency, we get

\displaystyle 2\pi\int_{0}^{t}f(t')dt' = 2\pi f_{0}t + 2\pi\int_{0}^{t}\delta f(t')dt'

Since 2\pi  f_{0}t is just the total phase of a pure sine wave, we can identify that last integral as an “excess” phase angle \phi and alternatively think about FM signals as being encoded in these phase variations. How could the prospect of decoding this signal not be crying out for a phase-locked loop?

One minor issue is that the XOR phase detector only works for digital input, and this FM signal is very much a continuous analog signal. That means we’ll need a new phase detector and some kind of VCO that actually puts out an analog signal. The first point is easy. It turns out you can accomplish the phase detection of analog inputs by swapping out the XOR gate for a voltage multiplier, or a circuit element that outputs a signal proportional to the product of the inputs. It works in a remarkably slick way. With a little bit of math, we find that for two sine waves differing in phase \phi, we get

\begin{array}{ccc} \displaystyle V_{1}V_{2} & = & \sin(2\pi f t)\sin(2\pi f' t + \phi) \\ & = &\displaystyle \frac{1}{2}\cos\left[2\pi (f'-f)t + \phi\right] - \frac{1}{2}\cos\left[2\pi(f+f')t+\phi\right]\end{array}

It’s a great example of the age-old question, “why do we learn about trig identities?” Because they’re sometimes useful. I don’t see much virtue in memorizing them if you never need them, but there’s great utility in remembering that they exist so you can look them up when you do need them. For two very close frequencies f \approx f', the first term loses its time-dependence and simplifies to \frac{1}{2}\cos\phi. Since we’re running this signal through a low-pass filter, the second term averages to zero because f+f' is a large frequency (compared to f'-f).

But don’t take my word for it. Try it. One of the few things I’ve spent time building circuits for (back in the day) was simulating nonlinear systems of differential equations. For that, I needed (nonlinear) voltage multipliers. So I know that there exist IC components like the AD633 and MLT04 that perform exactly this function. Sometimes obscure knowledge is useful! I also know that LTspice does not have these devices built in and that trying to model new devices in LTspice induces headaches I never thought I could experience. Fortunately (as with most aspects of life), there’s a convenient cheat. You can just define a behavioral voltage source whose output is some mathematical function of its inputs. In this case, make it the simple product of the two inputs. It’s an idealization, but that’s fine for “proof of concept.” Here’s what you get:

Multiplier output (red) and filtered signal (light blue) for two slightly different input frequencies. The filtered signal represents the beat frequency.

The red blur represents the multiplication of two sine waves with slightly different frequencies. Already, one sees the superposition of high-frequency fluctuations on a smooth low-frequency trend. The low-pass filter just wipes out the high-frequency part, leaving a simple sine wave that oscillates at the beat frequency (difference between the two input frequencies). It’s exactly the prediction of the theory, and that’s convenient. Of course it’s designed to do that, so maybe it’s not a big surprise.
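If you’d rather not fire up LTspice, the same check takes a few lines of NumPy (the frequencies, phase offset, and filter coefficient are arbitrary picks): multiply two nearby sine waves, low-pass the product, and what survives should match the \frac{1}{2}\cos\left[2\pi(f'-f)t+\phi\right] term from the identity above.

```python
import numpy as np

fs = 200_000                       # sample rate (Hz), an arbitrary choice
t = np.arange(0, 0.2, 1 / fs)
f1, f2, phi = 5000.0, 5050.0, 0.3  # two close frequencies and a phase offset

# The "multiplier" phase detector: just the product of the inputs
product = np.sin(2 * np.pi * f1 * t) * np.sin(2 * np.pi * f2 * t + phi)

# One-pole low-pass: kills the (f1 + f2) ~ 10 kHz term, passes the 50 Hz beat
alpha = 0.0157                     # cutoff ~ alpha * fs / (2*pi) ≈ 500 Hz
filtered = np.empty_like(product)
v = 0.0
for i, x in enumerate(product):
    v += alpha * (x - v)
    filtered[i] = v

# Compare (after the filter's start-up transient) to the predicted beat term
predicted = 0.5 * np.cos(2 * np.pi * (f2 - f1) * t + phi)
err = np.max(np.abs(filtered[fs // 10:] - predicted[fs // 10:]))
```

The small residual error comes from the filter’s own gain and phase lag at 50 Hz plus a little leftover 10 kHz ripple, which is exactly the kind of imperfection a real RC filter would show too.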

The other (trickier) issue is that we need an analog VCO. I wasn’t able to get that VCO diagram above to work in LTspice because I couldn’t find exactly the right components in the base library. The next-best choices didn’t seem to do the job, and I didn’t have the patience to get that deep into customizing elements in LTspice. But I happened upon this forum post where someone was trying to create an FM signal with a Colpitts oscillator and wasn’t happy with the quality of the signal. The design was somewhat different from the first one I looked at, and it happened to use native components I could also use. Sure enough, it worked just fine for me (win!) but obviously suffered in quality (muted win!).

As the original poster pointed out, the quality wasn’t great. The output for a ~5V DC input is on the left (not exactly a sine wave), and that input tunes the oscillator to an output of about 55kHz. There’s some impressive wisdom in the responses which point out the realistic “quality” expectations for a simple-minded Colpitts oscillator as well as the difficulties in practically embedding a signal that could possibly be detected. The former point really… uh, resonated with me after realizing the value of a frequency multiplier was in its ability to lock a lower-quality VCO into a high-quality source to output a high-quality, high-frequency signal. But back to the matter at hand, I gained a working analog VCO in LTspice I could try to use for an analog (FM) input.

So I tried embedding a 250Hz sine wave in the excess phase of a 55kHz signal and sending it through this PLL with an analog phase detector and analog (albeit imperfect) VCO shown above. I honestly can’t believe it worked as well as it did. That’s only a frequency variation of ~0.5%! Above on the left, you see the control voltage signal. And that’s actually the interesting part of the loop here. Once the PLL locks the output into the ~55kHz signal, it’s the output of the phase detector that will wiggle back and forth as the input changes frequency. Remember, the formal output is going to lock into the input and look sort of redundant. But the control signal gives the variation necessary for the VCO to track the input like this.

Sure enough, as this thing locks (see left, above), you can zoom into this control voltage and see little wiggles at 250 Hz (see right, above; 250Hz is an oscillation period of 4ms). That’s the embedded signal, and you can convince yourself of that by embedding other frequencies and finding them in the control signal. I checked 100 Hz and 500 Hz, so there’s at least some bandwidth to play with. Any radio signal is just a sum of many frequencies, so in principle this embedded signal could be any kind of complicated waveform instead of a simple sine wave. Proof of concept!
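To tie it all together, here’s a minimal software sketch of the whole demodulation story (every parameter here, including the 1MHz sample rate, VCO gain, and filter coefficient, is my own guess at reasonable values, not Keim’s circuit): an FM signal with a 250Hz message riding on a 55kHz carrier goes through a multiplier PD and a one-pole LPF, with a numerically-controlled oscillator standing in for the VCO, and the control voltage is the demodulated output.

```python
import numpy as np

fs = 1_000_000                 # sample rate (Hz)
f0 = 55_000.0                  # carrier and VCO center frequency (Hz)
fm, dev = 250.0, 275.0         # message frequency and deviation (~0.5% of f0)
t = np.arange(0, 0.04, 1 / fs)

# FM input: instantaneous frequency f0 + dev*sin(2*pi*fm*t),
# so the phase is 2*pi times the integral of that frequency
phase_in = 2 * np.pi * (f0 * t
                        + (dev / (2 * np.pi * fm)) * (1 - np.cos(2 * np.pi * fm * t)))
x = np.sin(phase_in)

# PLL: multiplier PD -> one-pole LPF -> numerically-controlled oscillator
kv = 2000.0                    # VCO gain: Hz of shift per unit control voltage
alpha = 0.01                   # LPF coefficient, cutoff ~ alpha*fs/(2*pi) ≈ 1.6 kHz
vco_phase, lp = 0.0, 0.0
demod = np.empty_like(t)
for i, xi in enumerate(x):
    vco = np.cos(vco_phase)            # cos vs. sin input -> lock sits near 90 degrees
    lp += alpha * (xi * vco - lp)      # filtered PD product = control voltage
    vco_phase += 2 * np.pi * (f0 + kv * lp) / fs
    demod[i] = lp                      # in lock, tracks (dev/kv)*sin(2*pi*fm*t)

# The recovered message: dominant component of the control voltage after lock
tail = demod[demod.size // 2:]
spec = np.abs(np.fft.rfft(tail - tail.mean()))
recovered = np.fft.rfftfreq(tail.size, 1 / fs)[np.argmax(spec)]
amp = 2 * spec.max() / tail.size
```

If you detune `f0` or shrink `kv` enough, the loop falls out of lock and the 250Hz component disappears from the control voltage, which mirrors the tuning sensitivity described earlier.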

This is the PLL-equivalent of using an op-amp as an integrator or amplifier, and that’s the fun part. Exploiting the basic property of these contraptions lets you do interesting things. The real hard workers out there optimize the pieces and let us actually pick up NPR (thanks, dad) or MIX 101.5 (thanks, mom) on any crummy car radio with remarkable accuracy. Having gotten used to hand-waving about how things work in principle and blurring over the practical details, it’s quite fun to dig just deeply enough to get a glimpse of the actual methodology for something useful.
