It is a mystery how the brain decodes color vision purely from the optic nerve signals it receives, with a core inferential challenge being how it disentangles an internal perception with the correct color dimensionality from the unknown encoding properties of the eye. In this paper, we introduce a computational framework for modeling the emergence of human color vision by simulating both the eye and the cortex. Existing research often overlooks how the cortex develops color vision or represents color space internally, assuming that the color dimensionality is known a priori; however, we argue that the visual cortex has both the capability and the challenge of inferring the color dimensionality purely from fluctuations in the optic nerve signals. To validate our theory, we introduce a simulation engine for biological eyes based on established vision science and generate optic nerve signals resulting from looking at natural images. Further, we propose a model of cortical learning based on self-supervised principles and show that this model naturally learns to generate color vision by disentangling retinal invariants from the sensory signals. When the retina contains N types of color photoreceptors, our simulation shows that N-dimensional color vision naturally emerges, verified through formal colorimetry. Using this framework, we also present the first simulation work that successfully reproduces the boost in color dimensionality observed in gene therapy on squirrel monkeys, and demonstrates the possibility of enhancing human color vision from 3D to 4D.
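Below is a minimal, self-contained sketch of the core intuition that color dimensionality is recoverable from the statistics of optic nerve signals alone. It is not the paper's eye simulator or cortical learning model: cone spectral sensitivities are approximated by Gaussians with illustrative peak wavelengths, "natural" reflectance spectra are replaced by random smooth spectra, and a simple eigenvalue-based rank estimate stands in for the self-supervised cortical model. All function names, parameter values, and the hypothetical tetrachromat cone set are assumptions for illustration only.

```python
"""Toy sketch: N cone types viewing random smooth spectra, with color
dimensionality estimated from the rank of the response covariance."""
import numpy as np

WAVELENGTHS = np.linspace(400, 700, 61)  # visible range, sampled every 5 nm


def cone_fundamentals(peaks_nm, bandwidth_nm=40.0):
    """Gaussian stand-ins for cone spectral sensitivities (one row per cone type)."""
    peaks = np.asarray(peaks_nm, dtype=float)
    return np.exp(-0.5 * ((WAVELENGTHS[None, :] - peaks[:, None]) / bandwidth_nm) ** 2)


def random_smooth_spectra(n_samples, rng, n_basis=8):
    """Surrogate 'natural' spectra: non-negative sums of smooth spectral bumps."""
    centers = np.linspace(400, 700, n_basis)
    basis = np.exp(-0.5 * ((WAVELENGTHS[None, :] - centers[:, None]) / 60.0) ** 2)
    weights = rng.random((n_samples, n_basis))
    return weights @ basis  # shape: (n_samples, n_wavelengths)


def optic_nerve_signals(spectra, cones, rng, noise_std=1e-3):
    """Cone excitations: spectra projected onto cone sensitivities, plus noise."""
    responses = spectra @ cones.T
    return responses + noise_std * rng.standard_normal(responses.shape)


def estimate_color_dimensionality(signals, noise_std=1e-3):
    """Count covariance eigenvalues that rise clearly above the noise floor."""
    eigvals = np.linalg.eigvalsh(np.cov(signals, rowvar=False))
    return int(np.sum(eigvals > 100.0 * noise_std**2))


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    spectra = random_smooth_spectra(5000, rng)
    # Trichromat-like vs. hypothetical tetrachromat cone sets (illustrative peaks, nm).
    for peaks in [(420, 534, 564), (420, 500, 534, 564)]:
        cones = cone_fundamentals(peaks)
        signals = optic_nerve_signals(spectra, cones, rng)
        dim = estimate_color_dimensionality(signals)
        print(f"{len(peaks)} cone types -> estimated color dimensionality: {dim}")
```

Under these assumptions, adding a fourth cone type raises the rank of the signal covariance from 3 to 4, mirroring the abstract's claim that N photoreceptor types give rise to N-dimensional color vision; the paper's actual approach replaces the PCA-style rank estimate with a self-supervised cortical learning model.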
@misc{kotani2024color,
  author        = {Atsunobu Kotani and Ren Ng},
  title         = {A Computational Framework for Modeling Emergence of Color Vision in the Human Brain},
  year          = {2024},
  eprint        = {2408.16916},
  archivePrefix = {arXiv},
}