Generative Inference

A unifying principle for the integration of learned priors in natural and artificial intelligence

Tahereh Toosi, Kenneth D. Miller

Center for Theoretical Neuroscience, Columbia University

Sample Capabilities Across the Perceptual Continuum

Visual perception exists on a spectrum from pure sensory processing to pure prior-driven inference

💡 Key Insight: Neural networks trained for object recognition implicitly learn rich statistical priors about natural images. Generative Inference unlocks these hidden capabilities by repurposing the same feedback pathways used during training to guide perception during inference.
[Continuum: 100% Sensory Evidence ↔ Sensory + Priors ↔ 100% Prior Knowledge]
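
No reference implementation accompanies this overview, so the following is a minimal PyTorch sketch of how one step along this continuum could look. The construction is our own illustration: a pretrained classifier stands in for the learned prior, and the function name `generative_inference_step`, the mixing weight `lam`, and the step size `lr` are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def generative_inference_step(x, y_obs, model, target, lam, lr=0.1):
    """One hypothetical update along the sensory-prior continuum.

    lam = 0.0 -> rely entirely on sensory evidence (match the observation)
    lam = 1.0 -> rely entirely on the learned prior (classifier confidence)
    """
    x = x.detach().requires_grad_(True)
    sensory = F.mse_loss(x, y_obs)              # fidelity to the observed image
    prior = F.cross_entropy(model(x), target)   # prior implicit in the classifier
    loss = (1.0 - lam) * sensory + lam * prior
    grad, = torch.autograd.grad(loss, x)
    return (x - lr * grad).detach()             # move toward a likelier percept

# Toy usage: a stand-in classifier and a noisy observation.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
y_obs = torch.randn(1, 1, 28, 28)
x = y_obs.clone()
for _ in range(20):
    x = generative_inference_step(x, y_obs, model, target=torch.tensor([3]), lam=0.5)
```

On this reading, sweeping `lam` from 0 to 1 would walk the gallery below: from core recognition driven almost entirely by the image itself to pattern generation driven almost entirely by the prior.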

Core Object Recognition

Fast, feedforward processing of clear, prototypical visual inputs

Clear object classification demo

Resolving Noise and Corruption

Recognizing objects under corruption or noise

Noisy image restoration demo

Figure-Ground Segregation

Separating objects from background using contextual cues

Figure-ground separation demo

Kanizsa Illusions

Perceiving illusory contours and shapes from inducer stimuli

Kanizsa square

Ehrenstein Illusions

Perceiving illusory brightness and contrast effects from surrounding context

Ehrenstein brightness illusion

Neon Color Spreading

Illusory color propagation beyond physical boundaries

Color spreading visualization

Bistable Perception

Prior-dependent interpretation of ambiguous stimuli

Rubin's face-vase demo

Gestalt Principles

Perceptual grouping through similarity, continuity, and closure

Gestalt grouping visualization

Pattern Generation

Seeing structure in noise: imagination and hallucination

Pattern emergence from noise

The Power of Repurposing: During training, feedback pathways carry error signals that update the weights. During generative inference, the same pathways carry gradients that update the activations toward more probable interpretations under the learned prior. No additional generative-model training or architectural changes are required.
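
To make the duality concrete, here is a minimal PyTorch sketch, again our own illustration rather than code from the paper: the same backward pass that produces weight gradients during training is reused at inference time to produce gradients with respect to the activations, which are then nudged toward the learned prior. The stand-in network, step size, and iteration count are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A stand-in recognition network; any pretrained classifier would play this role.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.randn(1, 1, 28, 28)   # a (toy) image
target = torch.tensor([3])      # its (toy) label

# Training: feedback carries error gradients that update the WEIGHTS.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
optimizer.zero_grad()
F.cross_entropy(model(x), target).backward()
optimizer.step()

# Generative inference: the same backward pass, but now the gradients
# update the ACTIVATIONS (here, the input) toward a likelier interpretation.
x_hat = x.clone().requires_grad_(True)
for _ in range(50):
    loss = F.cross_entropy(model(x_hat), target)
    grad, = torch.autograd.grad(loss, x_hat)
    with torch.no_grad():
        x_hat -= 0.1 * grad
```

Note that only the variable receiving the gradient step changes between the two phases; the network, the loss, and the backward pass are identical.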