Sloppy integrals

I’m admittedly a big fan of tricky integrals, but I’m not actually that good at doing integrals. If anything, I’m stubborn, and that tends to get me far enough. One of my absolute favorites is

\displaystyle \int_{0}^{1}\left\{\frac{1}{x}\right\}^{2}\left\{\frac{1}{1-x}\right\}^{2}dx = 4\ln(2\pi) - 4\gamma -5,

where \left\{a\right\} = a-\lfloor a\rfloor is the fractional part of a, and \gamma \approx 0.577 is the Euler-Mascheroni constant. It’s great fun to work out, and it involves all kinds of mathematical gymnastics. Even Stirling’s approximation makes an appearance. Also, there’s a delightful book that talks about this and other related integrals. In short, it’s a real treat for damaged folks like myself.
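If you want to convince yourself that the closed form above is plausible without doing any of the gymnastics, a crude numerical estimate gets you most of the way there. Here’s a minimal sketch in Python (the grid size is arbitrary, and since the integrand has infinitely many jump discontinuities, this is only a sanity check, not a precision evaluation):

```python
import numpy as np

# Midpoint-rule estimate of the fractional-part integral quoted above.
# The integrand is bounded but jumps at every x = 1/n and x = 1 - 1/n.
N = 2_000_001
dx = 1.0 / N
x = (np.arange(N) + 0.5) * dx      # midpoints of a uniform grid on (0, 1)

def frac(a):
    return a - np.floor(a)         # fractional part {a}

integrand = frac(1.0 / x) ** 2 * frac(1.0 / (1.0 - x)) ** 2
estimate = integrand.sum() * dx

gamma = 0.5772156649015329         # Euler-Mascheroni constant
closed_form = 4 * np.log(2 * np.pi) - 4 * gamma - 5

print(estimate, closed_form)       # the two numbers should agree closely
```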

Unfortunately, there aren’t a lot of times in physics where things boil down to something that can be turned into some exact, complicated integral. I didn’t appreciate that hard fact until very late in grad school, if even then. But recently, I was looking back at some notes I kept on a project from grad school—because somehow I actually kept notes on some things from over a decade ago—and realized how crippled I had become by trying to evaluate things exactly.

Exhibit A is the following thing that cropped up in a particular physical problem,

\displaystyle \int_{\mu}^{\infty}\frac{dp}{2\pi}\sin(px)\frac{p^{2}}{p^{2} + m^{2}}.

Right off the bat, it’s something that diverges. But we shouldn’t worry if it comes from physics, because nothing actually diverges if one is careful and reasonable. In a physical context, p is momentum, and x is distance. So we can always assume that there’s some smallest relevant length scale \alpha which lets us “cut off” the integral. We could impose a hard cutoff by replacing the upper limit \infty \rightarrow \Lambda \sim \alpha^{-1}, or introduce an exponential factor that effectively serves the same purpose,

\displaystyle \int_{\mu}^{\infty}\frac{dp}{2\pi}\sin(px)\frac{p^{2}}{p^{2} + m^{2}}e^{-\alpha p}.

It’s not really important which regularization method we use for this illustration, so let’s impose the hard cutoff and try to make some level of progress. In grad school, I went the other way and evaluated this thing exactly using “tables” (or maybe Mathematica) in terms of cosine and sine integrals. It’s notable that you can get an exact answer in that case, but it’s almost hopelessly complicated. In the limit where x is “large,” or x \gg \mu^{-1}, one can expand this mess and find a fairly simple result after the smoke clears. It ends up being quite a lot of work for a pretty simple expression.

For the sake of argument, let’s pretend that we don’t know that the answer can actually be obtained exactly (even though it can be). Something seemingly strange happens: the large-x limit of the integral follows from (a) replacing \sin(px) with \cos(px) in the integrand, (b) dividing the integrand by x, and (c) setting p = \mu. That’s it, so

\displaystyle \lim_{\Lambda\rightarrow\infty} \int_{\mu}^{\Lambda}\frac{dp}{2\pi}\sin(px)\frac{p^{2}}{p^{2} + m^{2}} \simeq \frac{\cos[\mu x]}{2\pi x}\frac{\mu^{2}}{\mu^{2} + m^{2}}\;\;\;\;(\mbox{as }x\rightarrow\infty).

Note that the regularization (whether the hard cutoff \Lambda or the length \alpha = 0^{+}) drops out entirely in this limit. As long as nothing blows up, one is always free to let \alpha \rightarrow 0, since it’s only there to ensure convergence. So how do we prove this without appealing to some complicated solution in terms of special functions? Turns out, it’s actually quite easy.
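Before proving it, here’s a quick numerical sanity check, just so you don’t have to take my word for it. This is only a sketch: the parameter values are arbitrary (anything with \mu x \gg 1 will do), I’m using the soft e^{-\alpha p} regulator with a small \alpha, and I’m letting scipy’s Fourier-integral routine deal with the oscillatory \sin(px) factor.

```python
import numpy as np
from scipy.integrate import quad

m, mu, x, alpha = 1.0, 1.0, 50.0, 1e-3   # arbitrary values with mu*x >> 1

# Everything except the sin(p*x) factor, including the soft cutoff e^{-alpha*p}.
def envelope(p):
    return (p**2 / (p**2 + m**2)) * np.exp(-alpha * p) / (2.0 * np.pi)

# QUADPACK's Fourier-integral routine supplies the oscillatory sin(p*x) weight.
value, abserr = quad(envelope, mu, np.inf, weight="sin", wvar=x, limlst=200)

# Claimed leading large-x behavior.
leading = np.cos(mu * x) / (2.0 * np.pi * x) * mu**2 / (mu**2 + m**2)

print(value, leading)   # should agree at the percent level for these values
```

The two numbers differ only by corrections suppressed by further powers of x^{-1} (and by the small regulator \alpha).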

I bring up my quantum field theory course because that’s where I began to appreciate the power of integration by parts. Yes, it’s that technique brought to you by the same sadistic folks who turned “partial fraction decomposition” and “trigonometric substitutions” into the makings of tricky calculus test questions. In quantum (or even classical) field theory, one often invokes “integration by parts” quite liberally to make Lagrangians look “nicer” inside the action integral. In most situations, the “boundary terms” vanish, so you’re essentially moving a derivative from one factor to another at the expense of a minus sign. In short, the idea is that

\displaystyle \int_{a}^{b}u(x) dv(x) = \left. u(x)v(x)\right|_{a}^{b} - \int_{a}^{b}v(x)du(x).

After teaching several upper-level courses, I find that students hate this technique. I don’t really blame them, because it seems to be much more useful in a formal sense than in a practical one. That is, I’ve used it far more often in situations where I’m not actually doing an integral than in cases where I am. For example, it lets one write

\displaystyle \int_{t_{i}}^{t_{f}}\left[\frac{\partial\mathcal{L}}{\partial x} \delta x(t) + \frac{\partial \mathcal{L}}{\partial \dot{x}}\delta \dot{x}(t)\right] dt =\int_{t_{i}}^{t_{f}}\left[\frac{\partial\mathcal{L}}{\partial x} -\frac{d}{dt} \frac{\partial \mathcal{L}}{\partial \dot{x}}\right]\delta x(t) dt,

without ever doing the integral, so long as \delta x(t_{i}) = \delta x(t_{f}) = 0. But I shouldn’t bash the technique, because it is sometimes quite useful when one is deep in the weeds. It’s limited, though, to cases where the integrand factors into a clean function and a clean differential. I vividly remember two “gotcha” cases where it’s quite essential. The first one is the integral of the logarithm,

\displaystyle \int \ln x dx,

where you have to use dx as the “dv,” so u(x) = \ln x \Rightarrow du(x) = \frac{dx}{x} and dv(x) = dx \Rightarrow v(x) = x, or

\displaystyle \int \ln x dx = x\ln x - \int x\frac{dx}{x} = x\left(\ln x -1\right) + C.

The other fun trick is having to “use it cyclically” when tackling something like

\displaystyle I  = \int e^{ax}\sin bx dx.

So you proceed like everything’s going well, even though you’re not obviously making any progress,

\displaystyle I = \frac{1}{a}e^{ax}\sin bx - \frac{b}{a}\int e^{ax}\cos bx dx.

When all else fails, try it again:

\displaystyle I =\frac{1}{a} e^{ax}\sin bx - \frac{b}{a^{2}}e^{ax}\cos bx - \frac{b^{2}}{a^{2}}\int e^{ax}\sin bx dx.

While this seems like it’s going to go on forever (it will), you get a pass for being clever and noting that the last term is just -\frac{b^{2}}{a^{2}}I. Bring it back to the left side, and you get

\displaystyle I = \frac{ae^{ax}\sin bx - be^{ax}\cos bx}{a^{2}+b^{2}}.

Nifty, right? Once you see it (or get burned by it) on a calculus exam, you can’t unsee it.
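If you’d rather have a machine vouch for these two “gotcha” results, the quickest check is to differentiate the claimed antiderivatives and make sure the original integrands come back out. A minimal sympy sketch:

```python
import sympy as sp

x, a, b = sp.symbols("x a b", positive=True)

# d/dx [x (ln x - 1)] should return ln x.
log_check = sp.simplify(sp.diff(x * (sp.log(x) - 1), x) - sp.log(x))

# d/dx of the cyclic result should return e^{ax} sin(bx).
I = (a * sp.exp(a * x) * sp.sin(b * x) - b * sp.exp(a * x) * sp.cos(b * x)) / (a**2 + b**2)
cyclic_check = sp.simplify(sp.diff(I, x) - sp.exp(a * x) * sp.sin(b * x))

print(log_check, cyclic_check)   # both print 0 if the results above are right
```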

Back to the matter at hand: what we want to do with the nasty integral (way back there) is something similar to this last example, in that we’ll make use of the predictability of the trig functions. If we keep choosing the trig factor (first \sin(px), then \cos(px), and so on) as the “dv,” we can keep applying integration by parts, picking up an additional factor of x^{-1} each time. This organizes the terms by powers of x^{-1}, so that the harder we work, the less important each successive term becomes. Since we’re looking for the large-x behavior, we can (hopefully) get the leading-order term from a single iteration. What on earth do I mean? Let’s look at the integral again and use a hard cutoff \Lambda as the upper limit. Summoning the dark forces for a single act of “integration by parts,” we get

\displaystyle \int_{\mu}^{\Lambda}\frac{dp}{2\pi}\sin(px)\frac{p^{2}}{p^{2} + m^{2}} = -\left.\frac{1}{2\pi x}\cos(px)\frac{p^{2}}{p^{2} + m^{2}}\right|_{\mu}^{\Lambda} + \int_{\mu}^{\Lambda}\frac{\cos(px)}{2\pi x}\left[\frac{d}{dp}\left(\frac{p^{2}}{p^{2} + m^{2}}\right)\right]dp.

I didn’t even bother to calculate the derivative in the last term, because I hope to avoid doing it. Why? Because if the boundary term gives something nonvanishing, the leftover integral only matters at higher order: I could keep attacking it with integration by parts, but every time I integrate that trig function, I’m going to pull out another factor of x^{-1}, so at large x each successive term becomes less important. That’s great, because each successive term is going to require more work, too! So let’s look at the boundary term. The upper limit doesn’t matter, because as \Lambda\rightarrow\infty that cosine is going to oscillate rapidly, averaging to zero. A much better justification for this sloppiness is provided by Brian Skinner in the link I included earlier. If you find all of this physical reasoning unsatisfying, you might prefer to try doing the integral with the e^{-\alpha p} factor. Actually, there’s no extra work required there, because you can lump that factor into the rest of the mess. Either way, you’re left with only the lower limit. Then, to leading order, we have

\displaystyle \lim_{\Lambda\rightarrow\infty}\int_{\mu}^{\Lambda}\frac{dp}{2\pi}\sin(px)\frac{p^{2}}{p^{2} + m^{2}} \simeq \frac{\cos[\mu x]}{2\pi x}\frac{\mu^{2}}{\mu^{2} + m^{2}}\;\;\;\;(\mbox{as }x\rightarrow\infty),

as claimed above. Nifty, right? Note that there’s no funny dependence on \Lambda in this limit.

Of course, as with all things, sometimes you do have to work harder. You’ll notice that the leading order term here vanishes when \mu = 0. But the integral shouldn’t be zero, so we have to keep going until we find a term that doesn’t vanish. It takes two more applications of integration by parts to get the leading order term. It’s great fun (if you like such things) to carry all this out, but you should find

\displaystyle \int_{0}^{\infty}\frac{dp}{2\pi}\sin(px)\frac{p^{2}}{p^{2} + m^{2}} \simeq -\frac{1}{\pi m^{2}x^{3}}\;\;\;\;(\mbox{as }x\rightarrow\infty).

Note that you’ve got to differentiate that nasty “other stuff” twice in order to get this result.
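In case you want to check the bookkeeping, the only new ingredient is the second derivative of the non-trig factor, evaluated at the lower limit,

\displaystyle \frac{d^{2}}{dp^{2}}\left(\frac{p^{2}}{p^{2}+m^{2}}\right) = \frac{2m^{2}\left(m^{2}-3p^{2}\right)}{\left(p^{2}+m^{2}\right)^{3}}\;\;\Rightarrow\;\;\left.\frac{d^{2}}{dp^{2}}\left(\frac{p^{2}}{p^{2}+m^{2}}\right)\right|_{p=0} = \frac{2}{m^{2}},

and the only boundary term that survives after the third integration by parts is -\frac{1}{2\pi x^{3}} times this, which is the result quoted above. The same kind of numerical check as before works here too (again just a sketch with arbitrary parameter values):

```python
import numpy as np
from scipy.integrate import quad

m, x, alpha = 1.0, 50.0, 1e-3

# Non-oscillatory part of the integrand, with the soft cutoff e^{-alpha*p}.
def envelope(p):
    return (p**2 / (p**2 + m**2)) * np.exp(-alpha * p) / (2.0 * np.pi)

value, abserr = quad(envelope, 0.0, np.inf, weight="sin", wvar=x, limlst=200)
asymptotic = -1.0 / (np.pi * m**2 * x**3)

print(value, asymptotic)   # should agree at the percent level
```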

And that’s it! Anytime you have some freakish integral over p with a factor of \sin(px) or \cos(px), you can extract the x\rightarrow \infty asymptotics by just using integration by parts. Sometimes integration techniques are useful out there in the wild.
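For the record, if you keep iterating the same trick on a generic, sufficiently smooth f(p) (and trust the boundary terms at the cutoff to wash out the same way), the pattern that emerges is

\displaystyle \int_{\mu}^{\infty}\frac{dp}{2\pi}\sin(px)f(p) \simeq \frac{f(\mu)\cos(\mu x)}{2\pi x} - \frac{f'(\mu)\sin(\mu x)}{2\pi x^{2}} - \frac{f''(\mu)\cos(\mu x)}{2\pi x^{3}} + \cdots\;\;\;\;(\mbox{as }x\rightarrow\infty),

with the first term reproducing the result above and, for the f(p) in our example, the third term taking over when \mu = 0.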
