Fruit of Preterition

# Math and Symbols

Introduction

I believe it was J. A. Wheeler who said something to the effect of “Never do a calculation before you know the answer.” I spent a good amount of time today picking at a math problem, and I'm writing this post to reflect on how I did it, how I might have done (a lot?) better, and how this relates to the function of math in practical matters as a whole.

The problem I had set myself was “derive the Stirling approximation for the Gamma function to all orders without using the Laplace approximation or recurrence relations”. Since I spent a good amount of interrupted time over the course of the work day thinking about this, I must have written the integral definition of the gamma function thirty times or more. The thing that catches my attention in retrospect is that my approach was basically a heuristic search over symbol manipulations: write down the symbols corresponding to some perturbation or expansion, and see if anything in the result jumps out at me. I'd put this in opposition to the approach of “thinking about it real hard”, in the sense of developing an ‘argument’ in words for what you're trying to do and why. I'll call the first approach ‘symbol-jockeying’ and the second ‘semantic reasoning’.

An example

To give an example of what these look like (n.b. this is the usual gamma function offset by 1, because that makes symbol-jockeying easier):

\(\Gamma(z) = \int_0^\infty e^{-t} t^z \, dt\)

“Hm, what if I Taylor it in z?” -> write down all the steps until hitting a dead end -> doesn't work -> “Hm, what if I put another variable before the t above the e, and write some differential equation in that?” -> write down some more steps until hitting a dead end -> doesn't work -> “Hm, what if I try to find \(f(x)\) such that \(\lim_{x \to \infty} \Gamma(x) / e^{f(x)}\) is a constant?” -> etc.
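As a sanity check on that offset convention (under which \(\Gamma(z) = z!\)), here's a quick numerical sketch - `gamma_offset` is a name I'm making up, and a trapezoid rule with a hard cutoff at t = 60 is crude but plenty for this:

```python
import math

def gamma_offset(z, upper=60.0, n=200_000):
    # Numerically evaluate the offset integral  ∫_0^∞ e^{-t} t^z dt,
    # truncated at t = `upper`, with the trapezoid rule.
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        f = math.exp(-t) * t ** z
        total += f / 2 if i in (0, n) else f
    return total * h

# With the offset-by-1 convention this is z!, i.e. math.gamma(z + 1):
print(gamma_offset(5.0))   # ≈ 120.0
print(math.gamma(6.0))     # 120.0
```

The truncation at t = 60 is harmless because the e^{-t} factor kills the tail many orders of magnitude below the answer.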

This technique is driven by having a library of ‘techniques’ to try on the problem - sometimes these will work, sometimes they won't. Sometimes you mix and match within the library, and sometimes you can figure out some pretty neat stuff this way! (For instance, I figured out that Gamma is an eigenfunction of the infinite-order differential operator \(e^{x \, d/dx}\).)

The second approach looks something more like literally verbally reasoning out loud: “I have a function which is defined by its argument acting as a parameter in some definite integral. My goal is to find out what the asymptotic behavior of this function is for large values of that argument. I can’t evaluate the integral, and I don’t readily know how to do asymptotic analysis since the integration variable ranges over all positive numbers. Well, at the end of the day, the integral is the area under some curve; that curve is parametrized by the argument. Obviously this function is monotone increasing in the argument. Since there’s an exponential, most of the area is going to come from near some maximum whose location depends on the argument. I should track that maximum and find the area locally, and then see what order the corrections to that are in the argument. Ah shit that’s Laplace’s method which I said I wasn’t gonna do.” etc. etc.
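If you let that train of thought run to Laplace's method anyway, the endpoint can be checked numerically: the integrand \(e^{-t} t^z = e^{z \ln t - t}\) peaks at t = z, and a Gaussian fit there gives the leading order of Stirling's formula. A sketch (`stirling` is my own name for it):

```python
import math

def stirling(z):
    # Leading-order Laplace approximation of ∫_0^∞ e^{-t} t^z dt:
    # the exponent z*ln(t) - t is maximized at t = z, and expanding it
    # to second order there gives sqrt(2*pi*z) * (z/e)**z.
    return math.sqrt(2 * math.pi * z) * (z / math.e) ** z

for z in (5, 20, 100):
    exact = math.gamma(z + 1)   # z! in the offset convention
    print(z, stirling(z) / exact)
```

The ratio creeps toward 1 as z grows, which is exactly the leading-order statement of Stirling's approximation.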

When is braindead better?

On thinking about this dichotomy, I feel an urge to say that semantic reasoning is ‘better’, more or less because it satisfies that Wheeler quote above, and I'd think Feynman would agree. But I don't think it's obviously the better way, for two reasons: 1. it requires more intuitive understanding of the problem at hand, and 2. it's insensitive to non-obvious facts which are revealed only by doing some symbol-jockeying.

1: not every problem is important or worth spending time on. Writing this, I can't shake an attitude of “you should semantically understand every step of the problem, or else you're dumb and bad at physics”. To be sure, I'm thinking about this now because I'm especially weak in this area, but it also seems clear to me that limitations of time and energy license us to speed through really dull calculations without a full intuitive understanding, at each step, of how ‘making this u-substitution and integrating’ corresponds to ‘averaging the product of the kinetic energy and the velocity of…’. Further, I think there are things that are just very hard to think of as anything other than mathematical objects - cf. my earlier post on wave-functions, or consider the ‘action’. The best definition of the action is “a function of some variables which has the property that only one particular infinitesimal variation of those variables causes no change in the function, and that one is the motion of the system” - that really is what it is and why it's important, even though it's really just words describing an optimization problem.

2: one of the great attractions of math is that it lets us draw enormously non-obvious parallels between different phenomena - light and sound and wobbly girders are all wave-y. Many things really only become obvious while doing symbology - in the above example, you definitely have to turn the crank to follow the maximum and the orders of corrections and expansions, etc. The maximalist case for semantic-only understanding is then pretty clearly wrong: Lucretius understood fewer facts about the world than we do today, despite probably being about as good at the relevant style of reasoning as any given talented scientist. There's some power in the symbol-jockeying, and at least sometimes it's valuable to turn the brain off and crank out the integrals / ODEs, etc.
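Cranking out those corrections is itself pure symbol-jockeying, and the well-known result of doing so for the gamma integral is Stirling's asymptotic series, \(\Gamma(z) \sim \sqrt{2\pi z}\,(z/e)^z \left(1 + \tfrac{1}{12z} + \tfrac{1}{288z^2} - \cdots\right)\) in the offset convention. A few lines of Python can watch each correction term earn its keep (`stirling_series` is my own name; the coefficients are the standard ones):

```python
import math

def stirling_series(z, order):
    # Stirling's asymptotic series for z! (offset convention):
    # sqrt(2 pi z) (z/e)^z * (1 + 1/(12 z) + 1/(288 z^2) - ...),
    # truncated after `order` correction terms.
    coeffs = [1.0, 1 / 12, 1 / 288, -139 / 51840]
    leading = math.sqrt(2 * math.pi * z) * (z / math.e) ** z
    return leading * sum(c / z ** k for k, c in enumerate(coeffs[: order + 1]))

z = 10.0
exact = math.gamma(z + 1)   # 10!
for order in range(4):
    approx = stirling_series(z, order)
    print(order, abs(approx - exact) / exact)
```

Each successive truncation knocks the relative error down by a couple of orders of magnitude at z = 10 - exactly the “orders of corrections” you have to track by hand.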

When to argue, when to calculate

So we have a dichotomy of two ways to handle our problems: to argue or to calculate. We also see that (I imagine Feynman's ghost frowning at me) neither is uniformly better than the other. Really, both are quite powerful, and I think I've neglected the semantic-reasoning approach in my studies and thinking hitherto. On reflection, then, where in that Gamma function debacle should I have reasoned semantically vs. symbolically? I think semantic reasoning does a better job of quickly identifying the inability to evaluate the integral as the key stumbling block; with at-will perturbation expansions it took me an embarrassing number of tries to grope my way to that conclusion, and some more after that to address it, without success. Having identified that as the problem, semantic reasoning also seems quicker to switch gears - no need to try two more weird places to stick some new variable before moving on. Beyond the point where I hit Laplace's approximation and hit a wall, though, semantic reasoning has very little to offer. I still haven't figured out a good way to get my answer, but it seems likely to me (if only because I'm biased by being bad at semantic reasoning?) that any other way to do it is going to have to totally reframe the problem in a way which is much, much easier to find symbolically. E.g. if the answer were to come from, say, writing a polynomial in gamma or whatever, you couldn't feasibly arrive at the solution itself this way (‘yes, so this must be a quintic in x whose roots do such-and-such as x goes to infinity’) - at best you might arrive at the idea of using a polynomial.

I think we chalk up another one to Hegel here and give the obvious cop-out answer: there's really a spectrum of how braindead the symbol manipulation you're doing is, with symbol-jockeying on the left and pure gigachad Platonic geometry on the right. And there's also the meta direction of reasoning about the symbol-jockeying you're doing, which I think is useful and meaningfully different from the above.

The takeaway for me is that I'll never get to the point where solving a familiar problem is easier from first-principles semantic reasoning than from plug-and-chug, and that's okay - but I should actively try to verbally reason through different angles of attack. Perfecting that will be my New Year's resolution - we'll see if I get any more Feynman-y.