GRNs as Dynamical Systems
A gene regulatory network is a dynamical system on expression space. Each gene product's concentration $x_i$ evolves according to a nonlinear ODE:
$$\dot{x}_i = f_i(x_1, \dots, x_n, u) - \gamma_i x_i$$
where $f_i$ encodes regulatory inputs (activation and repression via Hill functions) and $\gamma_i x_i$ is first-order degradation. Cell types are modeled as stable attractors of this system.
Hill Function Regulation
Activation and repression use saturating Hill functions:
$$f_{\mathrm{act}}(A) = \beta\,\frac{A^n}{K^n + A^n}, \qquad f_{\mathrm{rep}}(C) = \beta\,\frac{K^n}{K^n + C^n}$$
The Hill coefficient $n$ controls switch-like sharpness: higher $n$ gives a more digital response. The threshold $K$ sets the inflection point.
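The two Hill forms can be written directly. The parameter values below (β = 1, K = 0.5, n = 4) are illustrative placeholders, not values from the text:

```python
def hill_act(A, beta=1.0, K=0.5, n=4):
    """Activating Hill function: rises from 0 to beta, half-maximal at A = K."""
    return beta * A**n / (K**n + A**n)

def hill_rep(C, beta=1.0, K=0.5, n=4):
    """Repressing Hill function: falls from beta to 0, half-maximal at C = K."""
    return beta * K**n / (K**n + C**n)

# At the threshold K, both forms sit at exactly half their maximum:
print(hill_act(0.5), hill_rep(0.5))
```

Raising `n` steepens the transition around `K`, which is what makes the response more switch-like.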
Associative Learning in GRNs
Following Levin and Fernando, the associative circuit uses a slow memory variable $w_2$ that integrates co-stimulation:
$$\dot{p} = f_p(w_1, w_2, u_1, u_2) - \delta_p p, \qquad \dot{w}_2 = f_2(p, u_2) - \delta_w w_2$$
The crucial feature: $w_2$ is a slow state variable that stores the effect of prior co-stimulation. After training, a formerly weak cue can drive the response on its own. This is the biochemical analog of a synaptic weight.
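A minimal sketch of the two equations. The text does not specify $f_p$ or $f_2$, so the forms below are hypothetical: a Hill function of the weighted inputs for the response $p$, and a Hebbian-style production term for $w_2$ that is nonzero only when $p$ and $u_2$ co-occur. All rate constants are illustrative:

```python
def hill(x, K=0.5, n=4):
    return x**n / (K**n + x**n)

def f_p(w1, w2, u1, u2):
    # Hypothetical form: response driven by the weighted sum of both cues.
    return hill(w1 * u1 + w2 * u2)

def f_2(p, u2, k_learn=0.5):
    # Hypothetical Hebbian-style term: w2 is produced only during co-activity.
    return k_learn * p * u2

def simulate(schedule, dt=0.01, delta_p=1.0, delta_w=0.05, w1=1.0):
    """Forward-Euler run through a list of (u1 level, u2 level, duration) phases."""
    p, w2 = 0.0, 0.0
    for u1, u2, T in schedule:
        for _ in range(int(T / dt)):
            dp  = f_p(w1, w2, u1, u2) - delta_p * p
            dw2 = f_2(p, u2)          - delta_w * w2
            p, w2 = p + dt * dp, w2 + dt * dw2
    return p, w2

# Before training, the weak cue u2 alone barely drives the response:
p_naive, _ = simulate([(0.0, 1.0, 20.0)])

# Training pairs the strong cue u1 with u2; afterwards u2 alone suffices:
p_trained, w2 = simulate([(1.0, 1.0, 20.0), (0.0, 1.0, 20.0)])
print(p_naive, p_trained)
```

Because $\delta_w \ll \delta_p$, the weight decays far more slowly than the response, which is what lets $w_2$ act as a memory trace between training and test.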
Three Memory Regimes
- Associative trace: a slow variable retains history of co-stimulation. Later, a weak cue triggers the trained response. Learning = threshold crossing into a new basin.
- Toggle / bistable attractor: mutual repression with self-activation creates two stable states. Memory = which basin the system occupies after a transient pulse.
- Oscillatory memory: a repressilator-style loop stores state in phase and amplitude rather than a fixed point. Memory = persistent dynamical regime.
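The toggle regime can be illustrated with the classic mutual-repression pair, reusing the $f_{\mathrm{rep}}$ Hill form above. Parameters are illustrative; the point is that two transient initial pulses leave the system in two different basins:

```python
def hill_rep(x, beta=2.0, K=1.0, n=4):
    return beta * K**n / (K**n + x**n)

def step(x, y, dt=0.01, gamma=1.0):
    """One Euler step of the toggle: each gene represses the other."""
    dx = hill_rep(y) - gamma * x
    dy = hill_rep(x) - gamma * y
    return x + dt * dx, y + dt * dy

def settle(x, y, steps=5000):
    for _ in range(steps):
        x, y = step(x, y)
    return x, y

# Two different transient pulses, two different stable states:
print(settle(2.0, 0.0))   # settles with x high, y low
print(settle(0.0, 2.0))   # settles with y high, x low
```

With $n = 4$ the symmetric state $x = y$ is unstable, so the system must commit to one of the two attractors; which one it occupies afterwards is the stored bit.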
Notes
- Simulation uses fourth-order Runge-Kutta (RK4) integration.
- “Learning” here means history-dependent state change, not gradient descent on parameters.
- The key insight: a GRN can exhibit memory, conditioning, and trainable behavior through the dynamics of its existing equations, without rewiring its topology.
- Values are illustrative. Replace coupling weights and timing to test specific biological circuits.
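The RK4 scheme mentioned in the notes, written generically for any right-hand side $\dot{x} = f(t, x)$. This is the standard textbook form, not code from the original:

```python
import numpy as np

def rk4_step(f, t, x, dt):
    """One classical fourth-order Runge-Kutta step for x' = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + dt/2, x + dt/2 * k1)
    k3 = f(t + dt/2, x + dt/2 * k2)
    k4 = f(t + dt, x + dt * k3)
    return x + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

# Sanity check on x' = -x, whose exact solution is exp(-t):
x, t, dt = np.array([1.0]), 0.0, 0.1
for _ in range(10):
    x = rk4_step(lambda t, x: -x, t, x, dt)
    t += dt
print(x[0])  # close to exp(-1)
```

RK4's fourth-order accuracy matters here because attractor-based memory claims depend on trajectories settling into the correct basin, which coarse integrators can get wrong near a separatrix.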