Fruit of Preterition

Seeds (Experimental)

Under Construction: This entire section is written by Claude and has not yet been read or verified by me. It may contain errors, hallucinations, or misleading explanations. Treat everything here as experimental and unvetted.

These are prompts for you to explore with your own LLM.

The idea: take a seed, feed it to your preferred language model, and walk through the material together. Ask questions, request clarifications, explore tangents. The seeds provide a starting point—a topic and angle of approach—but the real value comes from the dialogue you have with your own model.

I haven't read these myself yet. They emerged from conversations with Claude that seemed worth preserving, but I make no claims about their accuracy or completeness. Each seed includes the motivation (what question prompted it) and background (why this angle seemed natural).

The Surprisingly Small Zoo of Natural Norms and Metrics

January 24, 2026 · via GPT-4 math
Motivation: Feeling overwhelmed by the proliferation of norms and metrics in physics and matrix analysis—surely there's some underlying structure that explains why certain ones keep showing up?
Background: Basic linear algebra and functional analysis. Some exposure to information geometry (Fisher metric, quantum state spaces). The question becomes natural once you've seen enough 'here's another norm' discussions and start wondering if there's a pattern.

WKB and the Art of Matched Asymptotics

January 21, 2026 · via Claude math
Motivation: Matched asymptotic expansions are one of the most beautiful techniques in applied mathematics—a symphony of approximations that fit together with stunning precision. WKB is the canonical example.
Background: Quantum mechanics at the Griffiths level, basic complex analysis. The goal isn't to derive Bohr-Sommerfeld (that's a consequence)—it's to see asymptotic matching as a way of thinking.
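For orientation, here is the standard shape of the method (my sketch, standard textbook material, not part of the seed itself):

```latex
% WKB (outer) approximation for -\frac{\hbar^2}{2m}\psi'' + V\psi = E\psi:
\psi(x) \;\sim\; \frac{C}{\sqrt{p(x)}}\,
  \exp\!\Big(\pm\frac{i}{\hbar}\int^{x} p(x')\,dx'\Big),
\qquad p(x) = \sqrt{2m\,\big(E - V(x)\big)}.
```

This outer approximation fails at turning points where p = 0; matching through a local Airy solution there yields the connection formulas, with the Bohr–Sommerfeld condition ∮ p dx = 2πℏ(n + ½) falling out as a by-product.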

A Whirlwind Tour of Random Matrix Theory

January 21, 2026 · via Claude math
Motivation: RMT looks exactly like statistical mechanics—partition functions, Boltzmann weights, saddle points, diagrammatics. Is that coincidence or structure?
Background: Stat mech intuition (know what a partition function is), basic linear algebra. The log-gas perspective makes the whole subject click: eigenvalues are particles with logarithmic repulsion.
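To make the log-gas claim concrete, here is the joint eigenvalue density of the Gaussian β-ensembles (my sketch, not part of the seed; conventions for the scaling vary):

```latex
P(\lambda_1,\dots,\lambda_N) \;\propto\;
  \prod_{i<j} |\lambda_i - \lambda_j|^{\beta}\,
  \prod_i e^{-\frac{\beta N}{4}\lambda_i^2}
\;=\;
  \exp\!\Big(\!-\beta\Big[\tfrac{N}{4}\textstyle\sum_i \lambda_i^2
  \;-\; \sum_{i<j}\log|\lambda_i-\lambda_j|\Big]\Big),
```

a Boltzmann weight for N particles in a quadratic well with pairwise logarithmic repulsion, at inverse temperature β.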

Singularities Propagate Along Hamilton Flows

January 21, 2026 · via Claude math
Motivation: WKB gives classical trajectories. Geometrical optics gives light rays. Wave equations have characteristics. There's clearly a pattern—what's the unifying principle?
Background: Some PDE exposure, WKB at the 'I've seen the ansatz' level. The key insight is that high-frequency behavior is controlled by the highest-order derivatives, and this naturally gives rise to Hamilton flows.
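The unifying statement, as I would summarize it (my sketch of Hörmander's propagation of singularities, not part of the seed):

```latex
% Principal symbol of P = \sum_{|\alpha| \le m} a_\alpha(x) D^\alpha,
% and the Hamilton flow it generates:
p(x,\xi) = \sum_{|\alpha| = m} a_\alpha(x)\,\xi^\alpha,
\qquad
\dot x = \partial_\xi p, \quad \dot \xi = -\partial_x p .
```

For operators of real principal type, the wavefront set of a solution of Pu = 0 lies in the characteristic set {p = 0} and is invariant under this flow; for the wave equation, p = −τ² + |ξ|², and the flow traces out exactly the light rays of geometrical optics.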

Manifolds for the Anti-Mathematician

January 21, 2026 · via Claude math
Motivation: I keep hitting differential geometry prerequisites and bouncing off. Topology-first presentations lose me. What's the minimum I need to actually compute things?
Background: Calculus, linear algebra, physics intuition. Frustrated by formal definitions when you just want to know: what IS a tangent vector, and why do Christoffel symbols exist?
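One concrete answer to the Christoffel question, stated rather than derived (my addition): differentiating a vector field componentwise is not coordinate-independent, because the basis vectors themselves change from point to point; the Christoffel symbols are exactly the correction terms, and for the Levi-Civita connection they are computable from the metric alone:

```latex
\Gamma^{k}_{ij} \;=\; \tfrac{1}{2}\, g^{kl}
  \big(\partial_i g_{jl} + \partial_j g_{il} - \partial_l g_{ij}\big).
```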

Lie Groups and Haar Measure

January 21, 2026 · via Claude math
Motivation: Symmetries are groups. Continuous symmetries are Lie groups. But what's the 'right' way to integrate over a group? And why does that even make sense?
Background: Basic group theory, comfort with linear algebra. Helpful to have seen rotation matrices or unitary transformations. The Haar measure question becomes natural once you want to 'average over all rotations' and realize you need a measure to do that.

Functional Equations: The Algebra Behind Iteration

January 21, 2026 · via Claude math
Motivation: Here's a deceptively simple equation: f(f(x)) = g(x). Solve for f. It looks like it should be straightforward—and then you realize it's not. What makes iteration so hard? And what tools exist to attack it?
Background: Basic calculus, comfort with power series. The surprise is that these 'simple-looking' equations are actually deep sources of complexity—and the techniques to solve them (eigenvalue methods, conjugacy, formal series) reveal structure you wouldn't have guessed.
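A small worked instance of the conjugacy technique (my addition, not from the seed): g(x) = x/(1+x) is conjugate, via h(x) = 1/x, to the translation y ↦ y + 1; the translation's obvious half-iterate y ↦ y + ½ conjugates back to f(x) = x/(1 + x/2), a functional square root of g.

```python
# g(x) = x/(1+x); conjugating by h(x) = 1/x turns g into y -> y + 1,
# whose half-iterate y -> y + 1/2 pulls back to f(x) = x/(1 + x/2).

def g(x):
    return x / (1 + x)

def f(x):
    return x / (1 + x / 2)

for x in [0.1, 0.5, 1.0, 2.0, 7.3]:
    assert abs(f(f(x)) - g(x)) < 1e-12
print("f is a functional square root of g")
```

For generic g no such clean conjugacy exists, which is where the formal-series and eigenvalue machinery the seed mentions comes in.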

Free Probability: The Non-Commutative Central Limit Theorem

January 21, 2026 · via Claude math
Motivation: The semicircle law keeps appearing in random matrix theory. And I know from the RMT diagrammatics that planar diagrams dominate at large N. What's the algebraic structure that emerges when you take N → ∞ seriously?
Background: Some exposure to RMT, especially the diagrammatic/1/N expansion where planar diagrams dominate. The connection to 'freeness' becomes natural once you see that 'independent random matrices become free at large N' is the algebraic expression of 'only planar diagrams survive'.
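The planar-diagram claim in miniature (my sketch, not from the seed): the 2k-th moment of a large Wigner matrix converges to the Catalan number C_k, which counts non-crossing (planar) pairings, and this is easy to check numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000
H = rng.standard_normal((N, N))
W = (H + H.T) / np.sqrt(2 * N)  # Wigner matrix; semicircle law on [-2, 2]

# Spectral moments m_{2k} = (1/N) tr W^{2k} -> Catalan numbers C_k.
W2 = W @ W
m2 = np.trace(W2) / N
m4 = np.trace(W2 @ W2) / N
print(m2, m4)  # approx C_1 = 1 and C_2 = 2
```

Free probability starts from the observation that these moment computations are governed entirely by the non-crossing pairings, and builds an algebraic theory around that fact.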

Extreme Value Theory: Why Maxima Have Universal Statistics

January 21, 2026 · via Claude math
Motivation: CLT tells me about sums converging to Gaussians—is there an analogous story for maxima? Surely 'take the max of N things' is as natural an operation as 'sum N things'.
Background: Basic probability, familiarity with CLT. The question becomes natural when you realize sums and maxima are both aggregation operations that might have universal limits.
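A quick numerical taste of the universal limit (my addition, not from the seed): maxima of n iid exponentials, centered by log n, approach a Gumbel law, whose mean is the Euler–Mascheroni constant.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 1000, 5000

# Max of n Exp(1) samples, centered by log n, is approximately Gumbel.
maxima = rng.exponential(size=(reps, n)).max(axis=1) - np.log(n)
print(maxima.mean())  # approx 0.5772 (Euler-Mascheroni constant)
```

The Fisher–Tippett–Gnedenko theorem says Gumbel, Fréchet, and Weibull are the only possible limits, exactly parallel to the CLT's role for sums.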

Computational Mechanics and Epsilon Machines

January 21, 2026 · via Claude math
Motivation: What's the 'right' way to model a stochastic process? Not just any model—the minimal one that captures all the predictive information. This turns out to have deep connections to information theory and complexity.
Background: Basic probability, some information theory (entropy, mutual information). The key insight is that 'minimal sufficient statistic for prediction' defines a canonical object—the epsilon machine—that measures intrinsic computational structure.
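The canonical construction, as I understand the standard Crutchfield–Young definition (my sketch, not part of the seed):

```latex
% Causal equivalence of pasts:
\overleftarrow{x} \,\sim\, \overleftarrow{x}'
\iff
P\big(\overrightarrow{X} \,\big|\, \overleftarrow{x}\big)
 = P\big(\overrightarrow{X} \,\big|\, \overleftarrow{x}'\big).
```

The equivalence classes (causal states) are the states of the epsilon machine, and the entropy of the causal-state distribution, the statistical complexity C_μ, measures how much memory prediction of the process intrinsically requires.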

Asymptotics, Borel Transforms, and Stokes Phenomena

January 21, 2026 · via Claude math
Motivation: Asymptotic series are everywhere in physics—perturbation theory, WKB, semiclassical expansions. But they diverge! What does it mean to 'sum' a divergent series, and why do the answers sometimes jump discontinuously?
Background: Basic complex analysis, comfort with power series. The Stokes phenomenon becomes natural once you see that divergent series encode information about exponentially small terms hiding 'beyond all orders'.
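The canonical example, Euler's series (my sketch, not from the seed):

```latex
f(x) \sim \sum_{n\ge 0} (-1)^n\, n!\, x^{n}
\quad\xrightarrow{\ \text{Borel}\ }\quad
\mathcal{B}f(t) = \sum_{n\ge 0} (-t)^n = \frac{1}{1+t},
\qquad
f(x) = \int_0^\infty e^{-s}\,\frac{ds}{1 + x s}.
```

The series diverges for every x ≠ 0, yet the Borel integral converges for x > 0 and reproduces it term by term. For x < 0 the pole of the Borel transform lands on the integration ray, and the choice of contour around it changes the answer by an exponentially small term of order e^{−1/|x|}: the Stokes phenomenon in its simplest form.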