When we open our eyes in the morning, we take for granted that we will consciously see the world in all of its dazzling variety. We likewise take for granted that we consciously hear conversations with family and friends, consciously have feelings about them, and consciously know who they are.
The immediacy of our conscious experiences does not, however, explain how we consciously see, hear, feel, and know; where in our brains this happens; or, perhaps more importantly, why evolution was driven to invent conscious states of mind. I will summarize some of the reasons here, starting with why we consciously see. My answer will propose, in brief, that we consciously see in order to be able to reach.
Being able to make an arm movement that reaches a nearby object cannot be taken for granted, if only because of the way our eyes process light from the world. Seeing begins when light passes into our eyes through a lens and strikes our photosensitive retinas, much as occurs in a camera. However, our retinas are not man-made. They are made from living cells that need to be nourished at a very fast rate. In addition, the light-sensitive photoreceptors that make up a retina send their signals to the brain through an optic nerve. These two factors force our retinas to pick up visual signals from the world in a very noisy and incomplete way, as the first two images illustrate.
Figure one shows a cross-section of the eye, with the lens on the left and the retina on the right. The photoreceptors send their light-activated signals down pathways called axons. The axons are collected together to form the optic nerve, which carries all the signals to the brain.
Figure two shows that the part of the retina behind which the optic nerve forms is called the blind spot because there are no photoreceptors there. Note that the blind spot is about as large as the fovea, the region where the retina forms images with the highest acuity. Our eyes move incessantly throughout the day to point our foveas directly at objects that interest us.
In addition, retinal veins that nourish retinal cells lie between the lens and the retina, and thereby prevent light from reaching retinal positions behind them.
We can now begin to understand what it means to claim that conscious seeing is for reaching. This is true because, as illustrated in figure three, visual images are occluded by the blind spot and retinal veins. Even a simple blue line that is registered on the retina is sufficient to illustrate why this is a problem. Suppose that, as in the figure, the blue line passes through positions of the blind spot. Because the blue line is not registered at those positions, without further processing we could not reach for the blue line at any of these positions. The brain reconstructs the missing segments of the blue line at higher processing stages so that we can, in fact, reach all positions along the line. The same problem occurs no matter what object is occluded by the blind spot or the retinal veins.
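To make the completion problem concrete, here is a toy sketch in Python. It is my own illustration, not a model from the article: a one-dimensional "retina" registers a line of uniform brightness, a blind-spot region records nothing, and a later stage interpolates across the gap so that every position along the line is again represented and could serve as a reach target.

```python
import numpy as np

# A 20-position "retina" viewing the blue line: brightness 1.0 everywhere.
retina = np.ones(20)

# Positions occluded by the blind spot: no photoreceptors, so no signal.
blind_spot = slice(8, 12)
signal = retina.copy()
signal[blind_spot] = np.nan

# A crude stand-in for completion at a higher processing stage:
# fill each missing position by linear interpolation between the
# nearest registered neighbors on either side of the gap.
missing = np.isnan(signal)
idx = np.arange(signal.size)
completed = signal.copy()
completed[missing] = np.interp(idx[missing], idx[~missing], signal[~missing])

print(completed)  # the gap is filled: the whole line is represented
```

Linear interpolation is, of course, far simpler than the boundary-completion and filling-in circuits described later in the article; the point is only that some reconstruction stage must intervene before the occluded positions can guide action.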
This is not a minor problem because, as I already noted, the blind spot is as big as the fovea.
But what does this have to do with consciousness?!
As I will indicate below, it takes multiple processing stages for our brains to complete representations of images that are occluded by the blind spot and retinal veins. But then how do our brains know which of these processing stages generates a complete enough representation with which to control reliable reaches? Choosing an incomplete representation with which to control actions could have disastrous consequences.
The answer lies in the claim that “all conscious states are resonant states.” I will explain what a resonance is in a moment. For now, the main point is that a resonance between a complete surface representation of an object and the next processing stage renders that surface representation conscious. Once such a complete surface representation is highlighted by consciousness, it can control actions. And because it is complete, this representation can successfully control accurate reaches to any position on an attended object that is sufficiently near.
The selection of complete surface representations occurs in prestriate visual cortical area V4, which resonates with the posterior parietal cortex, or PPC, to generate a surface-shroud resonance. As illustrated in figure four, spatial attention from the PPC can highlight particular positions of the V4 surface representation via a top-down interaction, at the same time that spatial intention can activate movement commands downstream to look at and reach for a desired goal object.
A resonance is a dynamical state in which neuronal firings across a brain network are amplified and synchronized. This happens when the neurons interact via reciprocal excitatory feedback signals during a matching process between bottom-up and top-down pathways, such as the pathways between V4 and PPC. Resonant states focus attention on the patterns of critical features that control predictive success, while suppressing irrelevant features. They also trigger learning of these critical features (hence the name adaptive resonance) and buffer learned memories against catastrophic, meaning sudden and unpredictable, forgetting.
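The feedback loop just described can be caricatured in a few lines of code. This is a deliberately minimal sketch under my own assumptions, with two scalar units, linear dynamics, and a crude match gain; it is not the article's actual equations. The point it illustrates: when the top-down expectation matches the bottom-up input, reciprocal excitation amplifies activity well above the purely feedforward level, and when there is no match, no amplification occurs.

```python
def settle(bottom_up, expectation, steps=200, dt=0.1, gain=0.8):
    """Relax a two-unit loop: x is a lower-area unit (V4-like),
    y a higher-area unit (PPC-like) that feeds excitation back to x."""
    x = y = 0.0
    for _ in range(steps):
        # Crude match signal: nonzero only when bottom-up input and
        # top-down expectation agree.
        match = bottom_up * expectation
        # Each unit decays toward zero, is driven feedforward, and
        # receives feedback gated by the match signal.
        dx = -x + bottom_up + gain * match * y
        dy = -y + x
        x += dt * dx
        y += dt * dy
    return x

resonant = settle(bottom_up=1.0, expectation=1.0)  # match: loop amplifies
mismatch = settle(bottom_up=1.0, expectation=0.0)  # no top-down support
print(f"resonant: {resonant:.2f}  mismatch: {mismatch:.2f}")
```

With these toy parameters, the matched case settles toward a level several times the feedforward value, which is the amplification-and-selection property the article attributes to resonance.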
The conscious states that adaptive resonances support are part of larger behavioral capabilities that help us adapt to a changing world. Accordingly, resonances for conscious seeing help to ensure effective looking and reaching; resonances for conscious hearing help to ensure effective auditory communication, including speaking; and resonances for conscious feeling help to ensure effective goal-oriented action.
Figure five summarizes six types of resonances and the functions that they carry out in different brain regions.
Surface-shroud resonances derive their name from the fact that surface representations resonate with a form of spatial attention that covers the shape of the attended object, a so-called attentional shroud. Surface-shroud resonances support conscious seeing of an object, whereas feature-category resonances support conscious recognition of it. When both kinds of resonances synchronize, we can consciously see and know about familiar objects.
What processes are needed to form a complete surface representation from the noisy retinal images that are occluded by the blind spot and retinal veins? First, the blind spot and retinal veins themselves are removed from this representation. This happens because they are attached to the retina, which continually jiggles in its orbit, thereby creating persistent transient signals on the photoreceptors from objects in the world. Retinally stabilized images like the blind spot and retinal veins fade because they do not cause such transients. Next, our brains compensate for changes in illumination that occur through the day and that could otherwise undermine the processing of object shapes. Finally, our brains need still more stages to complete the boundaries and fill in the surface brightnesses and colors that are occluded by the blind spot and retinal veins, as illustrated by figure six. Conscious states enable our brains to select the complete boundaries and surfaces that result from all of these processes.
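The illumination-compensation stage can be illustrated with a classic ratio-processing caricature. This is again my own toy example, not the article's circuitry: the same surface reflectances viewed under dim and bright lighting produce very different raw luminances, yet dividing each luminance by the mean of its neighborhood recovers essentially the same pattern in both cases, so later stages see the surface rather than the lighting.

```python
import numpy as np

# Surface reflectances along a strip of an object: a property of the
# object itself, independent of how brightly it is lit.
reflectance = np.array([0.2, 0.2, 0.8, 0.8, 0.4, 0.4])

def discount_illuminant(luminance):
    # Divisive normalization: express each luminance relative to the
    # mean luminance of its neighborhood (here, the whole strip).
    return luminance / luminance.mean()

dim = discount_illuminant(reflectance * 10.0)      # dim morning light
bright = discount_illuminant(reflectance * 1000.0) # 100x brighter midday

print(np.allclose(dim, bright))  # same normalized pattern either way
```

Because the illumination here is a single multiplicative factor, dividing by the mean cancels it exactly; real illumination varies across space, which is why the brain uses local rather than global normalization.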
Feature image: S Migaj via Pexels