descent & closure

from local micro-events to autonomous macro-processes

The event → process boundary

When does a collection of local micro-events become a coherent process? This playground instantiates a sheaf-theoretic answer: a process is a sheaf of trajectories over a site of time-intervals, where local sections glue consistently into global behavior.

Site and presheaf of micro-trajectories

The site is the poset category $\mathbf{Int}$ of time-intervals $I \subseteq [0,T]$, with inclusions as morphisms.

A presheaf $\mathcal{X}: \mathbf{Int}^{\mathrm{op}} \to \mathbf{Set}$ assigns to each interval the set of admissible micro-histories on that interval. For $J \subseteq I$, restriction maps $\rho_{IJ}: \mathcal{X}(I) \to \mathcal{X}(J)$ truncate histories to sub-intervals.

The sheaf condition states: if local sections $x_i \in \mathcal{X}(I_i)$ over a cover agree on all overlaps $I_i \cap I_j$, they glue to a unique global section $x \in \mathcal{X}(\bigcup_i I_i)$.
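For intuition, gluing can be sketched numerically. Below is a minimal sketch (all names are hypothetical, not the playground's API): micro-histories are arrays sampled on an integer time grid, intervals are `(start, end)` pairs, restriction is slicing, and gluing succeeds only when sections agree on every overlap.

```python
import numpy as np

def restrict(x, I, J):
    """Restriction map rho_IJ: truncate a history on I to a sub-interval J."""
    a, b = I
    c, d = J
    assert a <= c and d <= b, "J must be a sub-interval of I"
    return x[c - a : d - a + 1]

def glue(sections):
    """Glue local sections {interval: array} into a global section, if they
    agree on all overlaps; return None otherwise.  Assumes the cover's
    leftmost interval starts at 0."""
    total = max(b for (_, b) in sections)
    out = np.full(total + 1, np.nan)
    for (a, b), x in sections.items():
        seg = out[a : b + 1]
        known = ~np.isnan(seg)          # grid points already written
        if not np.allclose(seg[known], x[known]):
            return None                 # sections disagree on an overlap
        out[a : b + 1] = x
    return out

# Two overlapping intervals that agree on [2, 4] glue to a unique section:
x = np.arange(8, dtype=float)           # global truth on [0, 7]
sections = {(0, 4): x[0:5], (2, 7): x[2:8]}
g = glue(sections)
```

Perturbing either section on the overlap makes `glue` return `None`, which is exactly the failure mode the gluing toggles below exercise.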

The viewer now surfaces pairwise and triple Čech diagnostics: each pairwise overlap reports $\max |x_i - x_j|$, while triple overlaps verify the cocycle condition $(x_i - x_j) + (x_j - x_k) + (x_k - x_i) = 0$. Failures are highlighted when they exceed the glue tolerance $\varepsilon$.
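A rough sketch of these diagnostics, assuming sections are stored as an `{interval: array}` dict and a hypothetical constant `EPS` standing in for $\varepsilon$:

```python
import numpy as np
from itertools import combinations

EPS = 1e-6  # glue tolerance (the epsilon in the text); assumed value

def overlap(I, J):
    """Intersection of two intervals, or None if they are disjoint."""
    a, b = max(I[0], J[0]), min(I[1], J[1])
    return (a, b) if a <= b else None

def on(x, I, O):
    """Values of a section defined on I, restricted to sub-interval O."""
    return x[O[0] - I[0] : O[1] - I[0] + 1]

def cech_diagnostics(sections):
    """Pairwise max deviations and triple-overlap cocycle residuals."""
    pairwise, triple = {}, {}
    for (I, xi), (J, xj) in combinations(sections.items(), 2):
        O = overlap(I, J)
        if O is not None:
            pairwise[(I, J)] = float(np.max(np.abs(on(xi, I, O) - on(xj, J, O))))
    for (I, xi), (J, xj), (K, xk) in combinations(sections.items(), 3):
        O = overlap(I, J)
        O = overlap(O, K) if O is not None else None
        if O is not None:
            r = ((on(xi, I, O) - on(xj, J, O))
                 + (on(xj, J, O) - on(xk, K, O))
                 + (on(xk, K, O) - on(xi, I, O)))
            triple[(I, J, K)] = float(np.max(np.abs(r)))  # ~0 up to rounding
    return pairwise, triple

# Consistent sections pass; perturbing one section trips the tolerance.
x = np.linspace(0.0, 1.0, 10)
sections = {(0, 5): x[0:6], (2, 7): x[2:8], (3, 9): x[3:10]}
pw, tr = cech_diagnostics(sections)
```

The triple residual telescopes to zero up to floating-point rounding; what makes it a useful display is seeing it stay small while pairwise deviations blow past $\varepsilon$ under noise.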

Coarse-graining as a natural transformation

The macro presheaf $\mathcal{M}$ assigns coarse-grained observables (here: moving averages) to each interval. Coarse-graining is a natural transformation:

$$q: \mathcal{X} \Rightarrow \mathcal{M}$$

Naturality ensures $q_J \circ \rho^{\mathcal{X}}_{IJ} = \rho^{\mathcal{M}}_{IJ} \circ q_I$ for every inclusion $J \subseteq I$: the macro of a restriction equals the restriction of the macro.

We compute a commutativity matrix: each entry records the mean and max deviation between the two sides of the naturality square, so you can see which inclusions come closest to satisfying it exactly.
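One way such a deviation arises in practice: a moving average has boundary effects, so averaging then restricting differs from restricting then averaging near the sub-interval's edges. A sketch (the window size and edge handling are assumptions, not the playground's exact choices):

```python
import numpy as np

def moving_average(x, w=5):
    """Coarse-graining q: moving average, 'same' length, edges truncated."""
    kernel = np.ones(w)
    weights = np.convolve(np.ones_like(x), kernel, mode="same")
    return np.convolve(x, kernel, mode="same") / weights

def naturality_deviation(x, I, J, w=5):
    """Mean/max deviation between q_J . rho_IJ and rho_IJ . q_I."""
    (a, _), (c, d) = I, J
    restrict = lambda y: y[c - a : d - a + 1]   # rho_IJ as slicing
    lhs = moving_average(restrict(x), w)        # macro of the restriction
    rhs = restrict(moving_average(x, w))        # restriction of the macro
    diff = np.abs(lhs - rhs)
    return float(diff.mean()), float(diff.max())

rng = np.random.default_rng(0)
x = rng.normal(size=50).cumsum()                # a micro random walk
mean_d, max_d = naturality_deviation(x, (0, 49), (10, 39), w=5)
```

Interior points agree, and the deviation concentrates within about $w/2$ samples of the sub-interval's endpoints; with $w = 1$ the square commutes exactly.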

Closure and the Markov limit

A macro description becomes a true process when its dynamics are closed: future evolution depends only on the current macro-state, not the micro-history.

The Mori–Zwanzig formalism makes this precise. Coarse-graining produces an exact equation:

$$\frac{dm}{dt} = R(m) + \int_0^t K(t-s)\, m(s)\, ds + \eta(t)$$

with drift $R$, memory kernel $K$, and noise $\eta$. When the kernel decays rapidly (timescale separation), the memory term vanishes and you get Markovian closure: the macro is a self-contained dynamical system.

The playground now fits a finite impulse-response kernel $K(\tau_i)$ with an adjustable lag count, plots its shape, and runs a Ljung–Box test on the residuals to quantify whether the Markov model is statistically adequate.
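A least-squares version of this fit, with a hand-rolled Ljung–Box statistic; the Euler discretization, the linear drift term, and the omitted degrees-of-freedom correction are simplifying assumptions of this sketch, not the playground's implementation:

```python
import numpy as np
from scipy.stats import chi2

def fit_fir_kernel(m, lags=10, dt=1.0):
    """Least-squares fit of dm/dt ~ R*m(t) + sum_i K_i * m(t - i*dt)."""
    dm = np.diff(m) / dt
    rows = [[m[t]] + [m[t - i] for i in range(1, lags + 1)]
            for t in range(lags, len(m) - 1)]
    A = np.array(rows)
    b = dm[lags:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    resid = b - A @ coef
    return coef[0], coef[1:], resid         # drift R, kernel K, residuals

def ljung_box(resid, h=10):
    """Ljung-Box Q statistic and chi-square p-value on the residuals.
    Small p => residuals are autocorrelated => Markov model inadequate."""
    n = len(resid)
    r = resid - resid.mean()
    acf = np.array([np.dot(r[:-k], r[k:]) for k in range(1, h + 1)]) / np.dot(r, r)
    Q = n * (n + 2) * np.sum(acf**2 / (n - np.arange(1, h + 1)))
    return Q, chi2.sf(Q, df=h)

# Fit on a synthetic AR(1) macro trajectory (Markovian by construction):
rng = np.random.default_rng(1)
m = np.zeros(300)
for t in range(299):
    m[t + 1] = 0.9 * m[t] + 0.1 * rng.normal()
R, K, resid = fit_fir_kernel(m, lags=8)
Q, p = ljung_box(resid, h=10)
```

On genuinely Markovian data the fitted kernel bars hover near zero; a kernel with a slowly decaying tail and a small Ljung–Box p-value together signal that memory cannot be closed away.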

Statistical diagnostics

Macro observables now expose higher moments, autocorrelation, and mutual information between overlapping intervals. These summaries help you identify when the sheafified process carries long memory, heavy tails, or strongly coupled overlaps.
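These summaries are straightforward to compute; a sketch using sample autocorrelation, excess kurtosis, and a plug-in histogram estimator for mutual information (the bin count is an arbitrary choice, and the plug-in estimator is biased upward for small samples):

```python
import numpy as np

def autocorr(x, max_lag=20):
    """Sample autocorrelation at lags 1..max_lag."""
    x = x - x.mean()
    v = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / v for k in range(1, max_lag + 1)])

def excess_kurtosis(x):
    """Heavy-tail indicator: ~0 for Gaussian data."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z**4) - 3.0)

def mutual_information(x, y, bins=16):
    """Plug-in MI estimate (nats) from a 2-D histogram of two samples,
    e.g. section values on an overlap paired across the two sections."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
x = rng.normal(size=2000)
mi_dep = mutual_information(x, x + 0.1 * rng.normal(size=2000))
mi_ind = mutual_information(x, rng.normal(size=2000))
```

Strongly coupled overlaps show up as large mutual information, long memory as slowly decaying autocorrelation, and heavy tails as positive excess kurtosis.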

How to explore

  1. Turn Consistent restrictions off: local sections become independent, and strict sheaf gluing fails.
  2. Add measurement noise: gluing fails unless tolerance ε is large enough.
  3. Toggle Strict sheaf gluing off to see sheafification (best-fit descent repair).
  4. Compare Markov vs memory models. Vary τ and watch the closure RMSE change.
  5. Watch the multi-track panels: micro vs glued trajectory, macro vs reduced model, memory kernel bars, and autocorrelation all update live with the playback scrubber.