We are largely blind to our surroundings

The eyes are the window to the brain. Vision is undoubtedly our dominant sense: Much of how we perceive our environment, learn, and interact with the world is shaped by seeing. It is therefore not surprising that nearly one-third of the neocortex, the most recently evolved part of the brain, is dedicated to vision. So if we want to unravel the mysteries of the brain, we need to understand seeing.

Hundreds of millions of cones and rods in our retina are constantly receiving light of different intensities and wavelengths. How do we make sense of this vast input to recognize a square, an apple, or even a familiar face? Visual processing already begins in the eyes: Neurons in the retina respond to simple image features. Some retinal neurons fire when they detect bright dots; others react to dark spots.
But of course, the brain has to do the heavy lifting of visual processing. All information from the eyes first arrives in a brain area called the primary visual cortex, or V1 for short. Neurons in V1 recognize slightly more complex image features: They respond to oriented bar segments. We know this thanks to the pioneering work of David Hubel and Torsten Wiesel in the 1960s, which opened up new avenues in vision research and won them the Nobel Prize two decades later.
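The dot and bar detectors described above behave much like simple image filters. As a rough sketch (illustrative only, not a model of real neurons), a center-surround filter responds to a bright dot but not to uniform light, while an oriented filter responds to a bar segment:

```python
import numpy as np

# Center-surround "retinal" filter: excitatory center, inhibitory surround.
# Responds strongly to a bright dot, not at all to uniform illumination.
center_surround = np.array([
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
]) / 8.0

# Oriented "V1-like" filter: responds to a vertical bar segment.
vertical_bar = np.array([
    [-1, 2, -1],
    [-1, 2, -1],
    [-1, 2, -1],
]) / 6.0

def response(patch, kernel):
    """Dot product of a 3x3 image patch with a filter kernel."""
    return float(np.sum(patch * kernel))

bright_dot = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
uniform    = np.ones((3, 3))
v_bar      = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]])

print(response(bright_dot, center_surround))  # 1.0: strong response to the dot
print(response(uniform, center_surround))     # 0.0: no contrast, no response
print(response(v_bar, vertical_bar))          # 1.0: strong response to the bar
```

Note that the bar filter responds only weakly to a lone dot: each filter is tuned to its preferred feature, just as each neuron fires preferentially for its own stimulus.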

Has vision research overlooked the crux of the matter?

But what happens to the visual information next? The primary visual cortex forwards it to higher visual areas. One might expect that individual neurons there detect more complex features such as arcs, triangles, maybe even faces. However, despite intensive vision research over the past decades, nothing of the sort has been found. Why is it so hard to study vision beyond the primary visual cortex?
Zhaoping Li, head of the Department of Sensory and Sensorimotor Systems, is convinced that vision research has overlooked the crux of the matter: We only perceive what our attention is focused on. In fact, we are blind to more than 99% of all incoming information! This is inevitable because our brain's processing capacity is limited. By directing our gaze, we select a tiny fraction of the incoming information for deeper and more attentive processing.
In other words, to see something, we must first look at it. But how does our brain decide what deserves our attention? More than 20 years ago, Zhaoping Li proposed the V1 saliency hypothesis: The primary visual cortex creates a saliency map that guides our gaze. For example, V1 might mark a red apple in a basket of green pears as a salient object; thus, the apple automatically attracts our gaze and our attention. What is not deemed worthy of our attention – in this case, the pears – is filtered out and does not even reach our higher brain areas.
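The idea behind the saliency map can be sketched in a few lines of code. In this toy version (a deliberate simplification, not Zhaoping Li's actual V1 model), each item on a shelf has a hue value, an item's saliency is how much it differs from its neighbors, and the gaze is drawn to the most salient location:

```python
import numpy as np

# Toy "saliency map": items in a row, each described by a single hue value.
# Four green pears and one red apple.
hues = np.array([0.30, 0.30, 0.95, 0.30, 0.30])

def saliency(features):
    """Score each item by how much it differs from the mean of the others."""
    n = len(features)
    scores = np.empty(n)
    for i in range(n):
        others = np.delete(features, i)
        scores[i] = abs(features[i] - others.mean())
    return scores

s = saliency(hues)
gaze_target = int(np.argmax(s))  # the gaze lands on the most salient item
print(gaze_target)  # 2: the odd-one-out apple attracts the gaze
```

The pears, being similar to their surroundings, get low scores and are never selected, which mirrors the article's point: What is not deemed salient is filtered out before it reaches higher brain areas.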

Why we are so easily tricked by illusions

This has consequences: we are susceptible to illusions. Typically, optical illusions occur where we are not looking: in the peripheral field of vision. This is because the brain creates its own interpretation of the sparse input, such as "this is a red rose" or "this is a red apple". The higher visual areas often draw on prior experience to find plausible interpretations. So does this mean that everything we think we see is just an illusion created by the brain? Fortunately, no! When we look directly at the apple, the higher visual areas ask the primary visual cortex to verify that the interpretation actually matches the raw data. Only then do we actually see the apple!
