Fruit of Preterition

Seeds

Bite-sized mini-curricula on math and physics topics I find interesting. Each one is a curated entry point into a subject: the motivation for caring, the right angle of approach, and enough material to get started. I picked the topics and angles; Claude fleshed them out into full curricula.

The idea is that you take a seed and feed it to your own LLM of choice, asking it to expand any section into a full tutorial, work examples, quiz you, or branch into the "what's next" pointers. The seed is the map; you and your LLM walk the terrain.

SDEs for Physicists

April 18, 2026 · via Claude mathlong
Motivation: Stochastic calculus has a reputation for heavy machinery. The measure theory is real, but physicists almost never need it. What you actually need: Brownian motion as a tool, Itô's lemma, Fokker-Planck, and a few named tricks (Girsanov, Doob, Onsager-Machlup) that all turn out to be the same move in different costumes.
Background: Comfort with ODEs, basic probability, and the idea of a path integral. No measure theory. If you've written down the Langevin equation and waved your hands about white noise, that's the right starting point.
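A runnable warm-up for this seed, as a hedged sketch (the discretization scheme and all parameters are my illustrative choices, not from the curriculum itself): Euler-Maruyama for the Ornstein-Uhlenbeck Langevin equation, checking the empirical stationary variance against the Fokker-Planck prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
D, dt, steps, paths = 0.5, 0.01, 10_000, 2_000

# Euler-Maruyama for the Ornstein-Uhlenbeck Langevin equation
#   dx = -x dt + sqrt(2 D) dW.
# Its Fokker-Planck stationary density is Gaussian with variance D.
x = np.zeros(paths)
for _ in range(steps):
    x += -x * dt + np.sqrt(2 * D * dt) * rng.standard_normal(paths)

print(np.var(x))  # ~ D = 0.5
```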

RKHS and Kernel Learning

April 18, 2026 · via Claude mathlong
Motivation: Kernels are the right language for a surprising amount of ML: SVMs, GPs, MMD, HSIC, kernel mean embeddings, even the NTK story. They're also the cleanest nontrivial example of 'infinite-dimensional linear algebra that you can actually compute with'. Worth knowing the toolkit cold.
Background: Functional analysis at the level of 'Hilbert spaces, bounded operators, Riesz representation'. Some ML exposure helps but isn't required. If you've seen $(X^T X + \lambda I)^{-1} X^T y$ before, you already know half the story.
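A quick numerical version of "you already know half the story", sketched on toy data (shapes and the regularizer are illustrative choices): primal ridge regression and its kernelized dual give identical in-sample predictions when the kernel is linear.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))   # 20 samples, 5 features (toy data)
y = rng.standard_normal(20)
lam = 0.1

# Primal ridge: w = (X^T X + lam I)^{-1} X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
primal_preds = X @ w

# Dual / kernel form with the linear kernel K = X X^T:
# in-sample predictions are K (K + lam I)^{-1} y
K = X @ X.T
dual_preds = K @ np.linalg.solve(K + lam * np.eye(20), y)

print(np.allclose(primal_preds, dual_preds))  # True
```

The equality is the push-through identity $(XX^T + \lambda I)^{-1}X = X(X^TX + \lambda I)^{-1}$, which is the germ of the representer theorem.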

Probability as Operators

April 18, 2026 · via Claude mathlong
Motivation: I kept rewriting the same linear-algebra-flavored probability arguments: expectations as inner products, conditioning as projection, Bayes as reweighting. The RKHS embedding view unifies these, but a lot of expositions gloss the fine print. This is my attempt to keep the unified frame while being honest about where it's schematic and where it's actually a theorem.
Background: Comfortable with Hilbert spaces, bounded/compact operators, and basic probability. Prior exposure to RKHS is helpful but not required - the reproducing kernel story is recalled as needed.

PKPD as Systems Theory

April 18, 2026 · via Claude mathlong
Motivation: Pharmacology reads as chemistry + curve-fitting until you notice it is a small library of canonical dynamical systems templates. Cascaded filters, saturating nonlinearities, slow-fast integral feedback, opponent processes - the math is generic and transferable, and the concrete pharmacology tells you when each template bites.
Background: Linear systems (Laplace / transfer functions), a little ODE / state-space, comfort with Michaelis-Menten-type saturating nonlinearities. Pharmacological intuition is not required - drugs are here as worked examples of the math.

Otto Calculus and Wasserstein

April 18, 2026 · via Claude mathlong
Motivation: The space of probability measures has a natural Riemannian-ish structure, and a bunch of PDEs you already know are gradient flows in it. Fokker-Planck is literally just downhill motion for free energy, measured with the right ruler. Once you see this, you get a free-energy landscape on distribution space and a bunch of functional inequalities fall out almost by accident.
Background: Multivariable calculus (gradient, divergence, integration by parts), some exposure to Fokker-Planck or Langevin diffusion, and enough comfort with functional derivatives that $\delta F / \delta \mu$ doesn't scare you. No need to have seen optimal transport before.

Natural Selection: Price to Lande

April 18, 2026 · via Claude mathlong
Motivation: Population genetics has a reputation as a thicket of folk theorems with strange names. It's actually one theorem - the Price equation - and everything else (Fisher's fundamental theorem, the breeder's equation, Lande's multivariate response, replicator dynamics) is a specialization or reframing. If you see the covariance, you see it all.
Background: Comfortable with basic probability (covariance, conditional expectation), some linear algebra (for the multivariate Lande), and willing to think of a population as a measure. Genetics background not required - this is a math post that happens to be about biology.
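To see the "one theorem" claim in action, here is a minimal check on a toy population (the fitness function and transmission noise are illustrative choices): the covariance term plus the transmission term of the Price equation reproduces the change in mean trait exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
z = rng.normal(size=n)             # parental trait values
w = np.exp(0.5 * z)                # fitness as a function of the trait
dz = rng.normal(0, 0.1, size=n)    # transmission bias: offspring minus parent

wbar = w.mean()
# Offspring mean trait, each parent weighted by relative fitness:
zbar_offspring = np.sum(w * (z + dz)) / (n * wbar)
delta_zbar = zbar_offspring - z.mean()

# Price: delta zbar = Cov(w, z)/wbar + E[w * dz]/wbar (selection + transmission)
price = np.cov(w, z, bias=True)[0, 1] / wbar + np.mean(w * dz) / wbar
print(np.isclose(delta_zbar, price))  # True: the identity is exact
```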

Natural Selection: Drift, Neutrality, and Resolution Limits

April 18, 2026 · via Claude mathlong
Motivation: The deterministic story of selection (see my post 'Natural Selection: Price to Lande') is clean and beautiful, but real populations are finite. Finite populations turn selection into a noisy channel: most variation is neutral, the molecular clock ticks, and adaptation has a resolution floor set by $1/(N_e s)$. The math is a diffusion equation on the simplex, and once you see it the slogans stop sounding like folklore.
Background: Comfortable with basic probability, Fokker-Planck / Kolmogorov forward equations, and the idea of a diffusion limit. No measure theory required. Helps to have read the companion seed on Price-to-Lande, but not strictly necessary - the selection side is summarized briefly.
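The resolution floor can be poked at numerically via Kimura's standard diffusion-limit formula for fixation probability (the parameter values below are illustrative): under $N s \ll 1$ a new mutant fixes at roughly the neutral rate $1/(2N)$, and under $N s \gg 1$ at roughly $2s$.

```python
import numpy as np

def p_fix(N, s, p0=None):
    """Kimura's diffusion formula for the fixation probability of an allele
    at initial frequency p0 (default: one copy, 1/(2N)) under selection s."""
    if p0 is None:
        p0 = 1 / (2 * N)
    return (1 - np.exp(-4 * N * s * p0)) / (1 - np.exp(-4 * N * s))

# Nearly neutral (N*s << 1): fixes at roughly the neutral rate 1/(2N)
print(p_fix(N=1000, s=1e-6))  # ~ 5e-4
# Strongly selected (N*s >> 1): fixes with probability ~ 2s
print(p_fix(N=1000, s=0.01))  # ~ 0.02
```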

Mean Field Theory

April 18, 2026 · via Claude math
Motivation: Mean field is the same idea wearing two hats. Physicists use it to solve the Ising model and find phase transitions; ML people use it to approximate posteriors and derive the ELBO. The unifying fact is that mean field is the best product-form approximation to a distribution, in KL. Once you see that, the Curie-Weiss equation and coordinate-ascent variational inference stop looking like separate subjects.
Background: Undergrad statistical mechanics (partition functions, free energy) and a passing acquaintance with variational inference or graphical models. If you've seen the ELBO once, or you've done an Ising-model homework problem, you have enough background. If you've done both, even better - this post is about seeing that they were the same thing all along.
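As a warm-up on the physics side, a sketch of the Curie-Weiss self-consistency equation $m = \tanh(\beta m)$ (taking $J = 1$; fixed-point iteration is one illustrative way to solve it): the magnetization vanishes below $\beta_c = 1$ and a symmetry-broken solution appears above it.

```python
import numpy as np

def curie_weiss_m(beta, iters=200):
    """Iterate the mean-field self-consistency equation m = tanh(beta * m)."""
    m = 0.5  # symmetry-broken initial guess
    for _ in range(iters):
        m = np.tanh(beta * m)
    return m

print(curie_weiss_m(0.5))  # beta < beta_c = 1: magnetization iterates to 0
print(curie_weiss_m(2.0))  # beta > beta_c: nonzero magnetization ~ 0.9575
```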

Information Geometry Basics

April 18, 2026 · via Claude math
Motivation: Probability distributions parametrized by some vector $\theta$ form a manifold. The naive thing is to put the Euclidean metric on $\theta$ and call it a day. That metric is a lie - it depends on how you picked the coordinates. There's a better one, and it's essentially unique.
Background: Comfortable with Riemannian geometry at the level of 'metric tensor, connection, geodesic'. Comfortable with MLE, KL divergence, exponential families. The payoff is that a lot of statistics and ML ends up looking like classical mechanics on a very specific manifold.
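One quick numerical hint at why the Fisher metric is the right ruler, sketched for the Bernoulli family (parameter values are illustrative): locally, KL divergence is half the Fisher information times the squared parameter step.

```python
import numpy as np

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

p, eps = 0.3, 1e-4
fisher = 1 / (p * (1 - p))  # Fisher information of the Bernoulli family

# To second order, KL(p || p + eps) = fisher * eps^2 / 2:
ratio = kl_bernoulli(p, p + eps) / (0.5 * fisher * eps**2)
print(ratio)  # ~ 1
```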

Fluctuation Theorems

April 18, 2026 · via Claude mathlong
Motivation: The second law, in its usual form, is an inequality about averages. Fluctuation theorems upgrade it to equalities that hold for the full distribution of work, heat, and entropy. This is the quiet revolution that transformed non-equilibrium statistical mechanics in the 1990s and 2000s.
Background: Graduate stat mech (Gibbs ensembles, free energy, detailed balance). Some comfort with Langevin or Markov jump processes helps. The goal is to see why one identity, $\langle e^{-\sigma}\rangle = 1$, is really the whole show.
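The identity can be seen in miniature in the Gaussian special case (the parameter value is illustrative): if $\sigma$ is Gaussian, $\langle e^{-\sigma}\rangle = 1$ forces $\mathrm{Var}(\sigma) = 2\langle\sigma\rangle$, and a Monte Carlo average confirms the equality even though $\langle\sigma\rangle > 0$.

```python
import numpy as np

rng = np.random.default_rng(0)

# If entropy production sigma is Gaussian, <exp(-sigma)> = 1 forces
# Var(sigma) = 2 <sigma> (e.g. a dragged Brownian particle).
mean_sigma = 1.5
sigma = rng.normal(mean_sigma, np.sqrt(2 * mean_sigma), size=1_000_000)

avg = np.mean(np.exp(-sigma))
print(avg)  # ~ 1, even though <sigma> = 1.5 > 0 (the second law on average)
```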

Exact Renormalization from Many Angles

April 18, 2026 · via Claude mathlong
Motivation: Renormalization isn't a bag of tricks for curing divergences. It's a semigroup flow on the space of theories, and there's an exact PDE for that flow. Once you see it as a flow, the same object keeps showing up in disguise: Fokker-Planck equations, score-based generative models, gradient flows on distribution space, neural network training. The exact RG is the common spine.
Background: A first course in QFT (you've seen path integrals and one-loop diagrams), some statistical mechanics, comfort with functional derivatives. No prior exposure to functional RG needed, but it helps if you've at least met Wilsonian RG in some form.

The Witten Deformation

February 25, 2026 · via Claude math
Motivation: The Fokker-Planck equation for a particle in a multi-well potential has a hidden algebraic structure: it's one piece of a supersymmetric complex. This structure connects statistical mechanics to topology via Morse theory.
Background: Familiarity with the Fokker-Planck equation, basic spectral theory, Morse theory at the level of 'Morse Theory'. Differential forms at the level of 'what is a $k$-form' helps but isn't strictly required.

Picard-Lefschetz Theory, or: Integration Contours as Linear Algebra

February 25, 2026 · via Claude math
Motivation: You know saddle-point approximation works, but which saddle points contribute? Why do the answers jump when you vary parameters? And where does the imaginary part of a metastable partition function come from? The answer is that integration contours form a vector space, and everything else is linear algebra.
Background: Basic complex analysis (Cauchy's theorem, residues, contour deformation). Helpful to have seen steepest descent used informally.

Morse Theory

February 25, 2026 · via Claude math
Motivation: A smooth function's critical points encode the topology of the underlying manifold. Morse theory makes this precise: count critical points, track gradient flow, build a chain complex.
Background: Multivariable calculus, basic linear algebra, comfort with manifolds at the level of 'Manifolds for the Anti-Mathematician'. No algebraic topology prerequisites - we build what we need.

From Witten to Kramers

February 25, 2026 · via Claude math
Motivation: The Witten deformation connects two things that look very different: the Morse complex (a topological chain complex counting gradient flow lines) and the Kramers rate matrix (a finite-state Markov chain on potential minima). Both are projections of the same operator onto its low-energy subspace.
Background: Morse theory at the level of 'Morse Theory', the Witten Laplacian at the level of 'The Witten Deformation'. We'll use both freely.

Instantons in Statistical Physics

February 19, 2026 · via Claude math
Motivation: Instantons aren't intrinsically quantum. Stripped of the field theory packaging, they're saddle points of an action functional that dominate rare-event statistics. The cleanest examples are classical: nucleation, thermal activation, large deviations.
Background: Statistical mechanics, free energy, basic path integrals or variational calculus. No quantum field theory required.

The Surprisingly Small Zoo of Natural Norms and Metrics

January 24, 2026 · via GPT-4 math
Motivation: Feeling overwhelmed by the proliferation of norms and metrics in physics and matrix analysis—surely there's some underlying structure that explains why certain ones keep showing up?
Background: Basic linear algebra and functional analysis. Some exposure to information geometry (Fisher metric, quantum state spaces). The question becomes natural once you've seen enough 'here's another norm' discussions and start wondering if there's a pattern.

WKB and the Art of Matched Asymptotics

January 21, 2026 · via Claude math
Motivation: Matched asymptotic expansions are one of the most beautiful techniques in applied mathematics—a symphony of approximations that fit together with stunning precision. WKB is the canonical example.
Background: Quantum mechanics at the Griffiths level, basic complex analysis. The goal isn't to derive Bohr-Sommerfeld (that's a consequence)—it's to see asymptotic matching as a way of thinking.

A Whirlwind Tour of Random Matrix Theory

January 21, 2026 · via Claude math
Motivation: RMT looks exactly like statistical mechanics—partition functions, Boltzmann weights, saddle points, diagrammatics. Is that coincidence or structure?
Background: Stat mech intuition (know what a partition function is), basic linear algebra. The log-gas perspective makes the whole subject click: eigenvalues are particles with logarithmic repulsion.
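A hedged numerical sketch of the semicircle law (the normalization convention below is one common choice): eigenvalues of a large GOE-like matrix fill $[-2, 2]$ with second moment 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# GOE-like matrix, normalized so the eigenvalue density fills [-2, 2].
A = rng.standard_normal((n, n))
H = (A + A.T) / np.sqrt(2 * n)
evals = np.linalg.eigvalsh(H)

print(evals.min(), evals.max())  # close to the semicircle edges -2 and +2
print(np.mean(evals**2))         # semicircle second moment is 1
```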

Singularities Propagate Along Hamilton Flows

January 21, 2026 · via Claude math
Motivation: WKB gives classical trajectories. Geometrical optics gives light rays. Wave equations have characteristics. There's clearly a pattern—what's the unifying principle?
Background: Some PDE exposure, WKB at the 'I've seen the ansatz' level. The key insight is that high-frequency behavior is controlled by the highest-order derivatives, and this naturally gives rise to Hamilton flows.

Manifolds for the Anti-Mathematician

January 21, 2026 · via Claude math
Motivation: I keep hitting differential geometry prerequisites and bouncing off. Topology-first presentations lose me. What's the minimum I need to actually compute things?
Background: Calculus, linear algebra, physics intuition. Frustrated by formal definitions when you just want to know: what IS a tangent vector, and why do Christoffel symbols exist?

Lie Groups and Haar Measure

January 21, 2026 · via Claude math
Motivation: Symmetries are groups. Continuous symmetries are Lie groups. But what's the 'right' way to integrate over a group? And why does that even make sense?
Background: Basic group theory, comfort with linear algebra. Helpful to have seen rotation matrices or unitary transformations. The Haar measure question becomes natural once you want to 'average over all rotations' and realize you need a measure to do that.

Functional Equations: The Algebra Behind Iteration

January 21, 2026 · via Claude math
Motivation: Here's a visibly simple equation: $f(f(x)) = g(x)$. Solve for $f$. This looks like it should be straightforward—and then you realize it's not. What makes iteration so hard? And what tools exist to attack it?
Background: Basic calculus, comfort with power series. The surprise is that these 'simple-looking' equations are actually deep sources of complexity—and the techniques to solve them (eigenvalue methods, conjugacy, formal series) reveal structure you wouldn't have guessed.
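One closed-form anchor before the general theory (a standard special case): on $(0, \infty)$, $f(x) = x^{\sqrt{2}}$ is a functional square root of $g(x) = x^2$, since $(x^{\sqrt{2}})^{\sqrt{2}} = x^2$.

```python
import math

# A functional square root of g(x) = x^2 on (0, infinity):
# f(x) = x**sqrt(2), so f(f(x)) = x**(sqrt(2)*sqrt(2)) = x**2.
def f(x):
    return x ** math.sqrt(2)

print(f(f(3.0)))  # ~ 9.0, i.e. g(3.0)
```

The general problem is hard precisely because such clean conjugacy-to-a-power tricks only work near well-behaved fixed points.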

Free Probability: The Non-Commutative Central Limit Theorem

January 21, 2026 · via Claude math
Motivation: The semicircle law keeps appearing in random matrix theory. And I know from the RMT diagrammatics that planar diagrams dominate at large N. What's the algebraic structure that emerges when you take N → ∞ seriously?
Background: Some exposure to RMT, especially the diagrammatic/$1/N$ expansion where planar diagrams dominate. The connection to 'freeness' becomes natural once you see that 'independent random matrices become free at large N' is the algebraic expression of 'only planar diagrams survive'.

Extreme Value Theory: Why Maxima Have Universal Statistics

January 21, 2026 · via Claude math
Motivation: CLT tells me about sums converging to Gaussians—is there an analogous story for maxima? Surely 'take the max of N things' is as natural an operation as 'sum N things'.
Background: Basic probability, familiarity with CLT. The question becomes natural when you realize sums and maxima are both aggregation operations that might have universal limits.
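A hedged simulation of the universal limit (the exponential parent distribution and the sample sizes are illustrative choices): centered maxima of i.i.d. exponentials converge to a standard Gumbel, whose mean is the Euler-Mascheroni constant and whose variance is $\pi^2/6$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5_000, 2_000

# Maximum of n Exp(1) samples, centered by log n, converges to a
# standard Gumbel distribution as n grows.
maxima = rng.exponential(size=(trials, n)).max(axis=1) - np.log(n)

print(np.mean(maxima))  # Gumbel mean: Euler-Mascheroni gamma ~ 0.5772
print(np.var(maxima))   # Gumbel variance: pi^2/6 ~ 1.6449
```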

Computational Mechanics and Epsilon Machines

January 21, 2026 · via Claude math
Motivation: What's the 'right' way to model a stochastic process? Not just any model—the minimal one that captures all the predictive information. This turns out to have deep connections to information theory and complexity.
Background: Basic probability, some information theory (entropy, mutual information). The key insight is that 'minimal sufficient statistic for prediction' defines a canonical object—the epsilon machine—that measures intrinsic computational structure.

Asymptotics, Borel Transforms, and Stokes Phenomena

January 21, 2026 · via Claude math
Motivation: Asymptotic series are everywhere in physics—perturbation theory, WKB, semiclassical expansions. But they diverge! What does it mean to 'sum' a divergent series, and why do the answers sometimes jump discontinuously?
Background: Basic complex analysis, comfort with power series. The Stokes phenomenon becomes natural once you see that divergent series encode information about exponentially small terms hiding 'beyond all orders'.