Understanding how neural networks represent and process information is crucial for both neuroscience and artificial intelligence. This project focuses on developing interpretable features to identify and analyze neural representations across different systems. We aim to bridge the gap between biological and artificial neural networks by creating tools that can reveal the underlying computational principles and organizational patterns in both domains.
We combine techniques from machine learning, neuroscience, and computational modeling to build interpretable features that expose the structure and organization of neural representations. Our approach analyzes biological neural recordings alongside artificial neural networks to identify shared patterns and principles, with an emphasis on tools that are both theoretically grounded and practically useful for understanding neural computation.
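As an illustration of the kind of cross-system comparison described above, the sketch below computes a representational similarity analysis (RSA)-style score between stimulus-by-feature matrices from a biological recording and an artificial network. This is a minimal sketch, not the project's actual pipeline; the array shapes, function names, and the choice of correlation distance and Spearman rank correlation are assumptions made for the example.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Representational dissimilarity matrix (condensed form):
    pairwise correlation distance between stimulus response patterns.
    `responses` has shape (n_stimuli, n_units)."""
    return pdist(responses, metric="correlation")

def rsa_score(neural_responses, model_activations):
    """Spearman correlation between the two RDMs: a simple measure of
    how similarly the two systems structure the same stimulus set."""
    rho, _ = spearmanr(rdm(neural_responses), rdm(model_activations))
    return rho

# Toy example (random data): 50 stimuli, 120 recorded units vs. 512 model features.
rng = np.random.default_rng(0)
neural = rng.normal(size=(50, 120))
model = rng.normal(size=(50, 512))
print(f"RSA score: {rsa_score(neural, model):.3f}")
```

In a real analysis the random arrays would be replaced by trial-averaged neural responses and model-layer activations to the same stimulus set; the same comparison can then be repeated across layers or brain regions to map where representations align.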
Object-enhanced and object-centered representations across primate ventral visual cortex
Cognitive Computational Neuroscience (CCN) (2023)
Representational constraints underlying similarity between task-optimized neural systems
Unifying Representations in Neural Models Workshop, Neural Information Processing Systems (NeurIPS) (2023)
Can images predict neural patterns better than Deep Nets?
ICBINB Workshop, Cosyne Meeting, Lisbon, Portugal (2024)
Uncovering the evolution of neural representations in the ventral visual stream
Neuroscience and Artificial Intelligence Laboratory (NeuroAILab), Stanford University (2023)
Interpretable intermediate representations in primate ventral visual cortex
Visual Inference Lab, Columbia University (2023)