Our subjective experience of sight, or of any of our senses, gives us a shared arena to discuss consciousness, the mind, and the origin of our behavior. Animals differ in how they detect and categorize external stimuli. One constant stream of information about the world is electromagnetic energy. Electromagnetic radiation varies on a spectrum from high-energy gamma rays to low-energy radio waves. Each animal’s senses cut this spectrum up differently. The human eye can detect wavelengths from about 400 nanometers to about 700 nanometers. We see the high-energy (high-frequency) end as purple and blue, and the lower-energy end as yellow and red. These are our subjective interpretations of those wavelengths. Other animals, with different visual systems, experience this differently, and we can verify this with biological and behavioral data (as we discussed in the last blog).
The eyes and brain are not cameras and hard drives; they do not record and store images. Vision is a constructive process in which sensory cells are tuned to detect very specific attributes of the environment. During brain development, two stalks emerge and exit the protective layers of the central nervous system. These stalks, the future optic nerves, sprout the retinas. Your eyes are part of your brain, not passive receptors of stimulation. Like the rest of the brain, they don’t just process information; they create information.
Neurons in the three-layered retina evolved from pressure-sensitive receptors into receptors that respond to light. These photoreceptors, called rods and cones because of their shapes, feed circuits that respond to different wavelengths of light (our color vision), to direction and velocity of motion (our motion system), and to spots of light (our object-detection system). Rods respond to motion and to low light across the visual field, and our three types of cones respond to specific wavelengths of light.
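If you like to think in code, here is a toy sketch of the three-cone idea. The peak sensitivities (roughly 420, 534, and 564 nm for the S, M, and L cones) are standard textbook values, but the Gaussian tuning curves and their width are my own simplifying assumptions, chosen only to make the picture concrete; real cone absorption curves are broader and asymmetric.

```python
import math

# Approximate peak sensitivities (nm) of human S, M, and L cones.
# Gaussian tuning with an assumed width is a simplification for
# illustration, not a measured model of cone absorption.
CONE_PEAKS = {"S": 420.0, "M": 534.0, "L": 564.0}
TUNING_WIDTH = 50.0  # assumed standard deviation in nm

def cone_responses(wavelength_nm):
    """Relative response of each cone type to a monochromatic light."""
    return {
        cone: math.exp(-((wavelength_nm - peak) ** 2) / (2 * TUNING_WIDTH ** 2))
        for cone, peak in CONE_PEAKS.items()
    }

# A blue-ish light (450 nm) drives the S cones hardest;
# a red-ish light (650 nm) drives the L cones hardest.
print(cone_responses(450))
print(cone_responses(650))
```

The point of the sketch: color is not in the light. The brain compares three numbers, one per cone type, and our experience of “blue” or “red” is a construction built from those comparisons.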
There are roughly 125 million photoreceptors in each eye. These neurons, these rods and cones, do not fire true action potentials like other neurons. Stimulation by light causes a reduction in the amount of neurotransmitter (glutamate) released into the synapse with the second-layer cells. That, my friends, is very odd, and I’m not going to get into the why. Sometimes why is the wrong question. The response patterns of the rod and cone cells are summarized, augmented, and condensed by the second layer of cells in the retina, which contains two more types of retinal neurons: horizontal cells and bipolar cells.
Horizontal cells, which lie horizontally across the neural retina, and bipolar cells begin the process of transforming the transduced light information into information about objects and their location and movement in the visual field. The photoreceptors synapse onto the horizontal and bipolar cells, forming a cellular triad, a circuit that begins to condense and summarize the photoreceptor input into receptive fields. Our retinas create millions of receptive fields, functioning much like the compound eyes of the fly. Cells in the second layer can respond to lines and angles in the visual field and increase their contrast through a process called lateral inhibition. The visual data sent to our brain is not a copy of the light information coming into our eyes.
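Lateral inhibition is easier to see in a toy example than to describe. In the sketch below, each “cell” subtracts a fraction of its neighbors’ input from its own; the inhibition fraction of 0.5 and the one-dimensional layout are my illustrative assumptions, not retinal measurements.

```python
def lateral_inhibition(signal, inhibition=0.5):
    """Each cell's output is its own input minus a fraction of its
    neighbors' input -- a minimal one-dimensional center-surround
    sketch, with edge cells treated as their own neighbor."""
    out = []
    for i, x in enumerate(signal):
        left = signal[i - 1] if i > 0 else x
        right = signal[i + 1] if i < len(signal) - 1 else x
        out.append(x - inhibition * (left + right) / 2)
    return out

# A step edge in light intensity: dim on the left, bright on the right.
edge = [1, 1, 1, 5, 5, 5]
print(lateral_inhibition(edge))  # [0.5, 0.5, -0.5, 3.5, 2.5, 2.5]
```

Notice what happens at the boundary: the cell just on the dim side is pushed below its neighbors and the cell just on the bright side is pushed above its own, so the edge is exaggerated. What leaves the retina is already an analysis, not a photograph.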
The third layer of cells includes the ganglion cells and amacrine cells. “Ganglion” is another unfortunate name in neuroanatomy, used differently in different parts of the body. No wonder students get confused. There are about one million ganglion cells per eye. The input from roughly 125 million photoreceptors is compressed into about one million outputs. Take a minute to appreciate that level of data compression. Before the light information has left the eye, it is systematically condensed and summarized, such that what was detected by the eye is not copied and sent to the brain. The eyes are tiny brains, shaped over millions of years, analyzing their input.
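The arithmetic is worth doing once. Taking the figures above as order-of-magnitude estimates (cell counts vary across sources and across individuals):

```python
# Rough compression from photoreceptors to ganglion-cell outputs.
# Both figures are order-of-magnitude estimates, not exact counts.
photoreceptors_per_eye = 125_000_000
ganglion_cells_per_eye = 1_000_000

ratio = photoreceptors_per_eye / ganglion_cells_per_eye
print(f"Each ganglion cell carries ~{ratio:.0f} photoreceptors' worth of input")
```

On average, over a hundred photoreceptors’ worth of activity is summarized in every spike train leaving the eye.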
The ganglion cells of the third layer fire action potentials that travel down their axons. These bundles of axons from each eye are the optic nerves; they leave the eye and enter the skull through gaps in the orbital bones. Once inside the skull, the optic nerve from each eye splits into two tracts, one that crosses to the other side of the brain and one that stays on the same side. The structure where the nerves cross is easy to see on the ventral surface of the brain and is called the optic chiasm. A tumor of the pituitary or an aneurysm of the anterior communicating artery can compress the optic chiasm, causing tunnel vision, a loss of peripheral vision.
The crossing branches carry information from the half of each retina closest to the nose, and this part of the retina receives light from the periphery of vision. The branch that stays on the same side of the head carries information from the half of each retina closest to the side of the head, the temples. This part of the retina receives light from the center of the visual field. This means that light information from the left visual field, the left side of the head, is sent to the right side of the brain, and vice versa. This is true of most sensory information: stimulation on one side of the body is sent to the other side of the brain. Recall that motor commands from the right side of the brain control the left side of the body and vice versa.
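The routing rule is simple enough to write down as code. This little function is only a mnemonic for the anatomy described above (nasal fibers cross at the chiasm, temporal fibers stay on their own side); the names are mine, not standard terminology from any library.

```python
def target_hemisphere(eye, retinal_half):
    """Which side of the brain receives output from a given half-retina?
    Nasal retina (toward the nose) crosses at the optic chiasm;
    temporal retina (toward the temple) stays on the same side."""
    if retinal_half == "nasal":
        return "right" if eye == "left" else "left"  # crosses over
    if retinal_half == "temporal":
        return eye                                   # stays on the same side
    raise ValueError("retinal_half must be 'nasal' or 'temporal'")

# Light from the left visual field lands on the left eye's nasal retina
# and the right eye's temporal retina -- both routes end in the right brain.
print(target_hemisphere("left", "nasal"))      # right
print(target_hemisphere("right", "temporal"))  # right
```

Run it for all four combinations and the pattern falls out: everything from the left visual field converges on the right hemisphere, and everything from the right visual field converges on the left.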
These bundles of axons from the ganglion cells of the retinas continue after the optic chiasm and synapse on the lateral sides of the thalamus, the sensory hub in the middle of the brain. This visual nucleus of the thalamus is called the lateral geniculate nucleus. Individual axons from the retina are organized into input from ganglion cells that respond best to fine details and color and input from cells that respond best to movement. We have two visual systems: one for object recognition and fine detail, and one for direction and velocity of movement. One system can be knocked out while the other remains, resulting in patients who are object blind or motion blind.
Information from the retina synapses on different layers of the lateral geniculate nucleus, preserving the segregation of object and motion information. From the lateral geniculate nuclei on each side of the thalamus, the visual information travels on axons to the back of the brain, the occipital lobe. Relevant to our discussion of consciousness and emergent properties, there are more axons traveling from the occipital lobe to the thalamus than from the thalamus to the occipital lobe. We can refer to this as top-down processing. This neural information influences the bottom-up processing of the incoming visual data. The current state of our visual phenomenology, what we are conscious of, affects the flow of new information coming in. An emergent process is affecting the behavior of upstream neurons.
On the very back of the brain, at the pointed end of the occipital lobe, is the visual cortex. This six-layered cortex (neocortex) is divided into left and right halves by the longitudinal fissure that separates the hemispheres. The right side assembles information from the left visual field, and the left side assembles information from the right visual field. A tumor on one side can wipe out the patient’s awareness of the entire opposite visual field. They cannot see people approaching from that side; they cannot see half the food on their plate. Yet their brain still constructs and uses a complete visual phenomenology. That half of the visual field is not blocked for the patient; it does not exist.
The visual cortex is also divided into upper and lower banks by a fissure called the calcarine sulcus. The upper banks process information from the lower visual field, and the lower banks process information from the upper visual field. This results in four visual quadrants being processed in four anatomically distinct areas. A common visual deficit from damage to one of these banks is called “pie in the sky,” meaning that the patient is unaware of the upper right or upper left visual quadrant. They don’t talk or behave as if the quadrant is missing; they talk and behave as if it does not exist.
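Putting the left-right crossing and the upper-lower flip together gives a four-way lookup. As before, this is just the anatomy above restated as a toy function, with names of my own choosing:

```python
def cortical_site(field_side, field_height):
    """Map a visual-field quadrant to the hemisphere and calcarine
    bank that processes it: fields cross left/right, and the upper
    field maps to the lower bank (and vice versa)."""
    hemisphere = "right" if field_side == "left" else "left"
    bank = "lower" if field_height == "upper" else "upper"
    return hemisphere, bank

# A "pie in the sky" loss of the upper-left quadrant implicates
# the lower bank of the right visual cortex.
print(cortical_site("left", "upper"))  # ('right', 'lower')
```

This is why a neurologist can localize a lesion to one quadrant of one calcarine bank just by mapping which quarter of the visual field has vanished.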
This organization of input by location and type is maintained in the visual cortex in the occipital lobe. It creates segregated streams of information about what an object is (the What stream) and where it is (the Where stream). The What stream runs from the occipital lobes into the temporal lobes, and the Where stream runs from the occipital lobes to the parietal lobes. The partitioning of visual input we saw from the retina to the thalamus to the occipital lobe is preserved in these visual streams. Mixing occurs in the occipital lobe for our stereoscopic vision, and the information is then reseparated into these streams, which share information again before exiting the occipital lobe and heading for the parietal and temporal lobes. Damage along these routes can produce odd perception, awareness, and behavior. For example, stroke patients can point to objects in their visual field and claim to recognize them, but they cannot find a name for them. They do not have access to the results of the What stream’s processing. I have seen this deficit in patients.
One last neuroanatomical point of interest is the control of the eye muscles. Six pairs of muscles move the eyes. They are tiny muscles, and the neurons powering them (three pairs of cranial nerves) send bursts of action potentials to get them moving. All other motor cranial nerves, and all spinal motor neurons, have an upper motor neuron synapsing on them, telling them to fire. We talked about this two-neuron chain in a previous blog. The cranial nerves powering the eye muscles do not have upper motor neurons. Instead, they are targeted by large pools of neurons in the brain called eye fields. The frontal and supplementary eye fields, in the frontal lobe, control the eyes by sending signals down to the brainstem, through the area responsible for maintaining our attention (the reticular activating system), and back up to the eyes in a few milliseconds. Other sensory input travels along this pathway as well and influences eye movements. We will talk more about this system in the blog on attention, but for now, I think we are seeing, in these eye fields, the eyes of the ghost.
Like other examples we have seen, our subjective experience of the world is stitched together from samples taken by our sensory systems and shaped by our biology and experience, our memory and emotional state. Input to the conscious system can be corrupt or missing, and we still construct an experience, an experience agnostic to aspects of the underlying neural processing. The patient with blindsight, the patient with hemispheric neglect, the patient who cannot find words to describe experience, the patient who is unaware that people and objects move around them, even patients who are unaware of entire parts of their body are still conscious. They, like us, have the illusion that their awareness and their world are intact and whole.
Consciousness cannot be found in these neural modules, in these neural circuits, but it does affect them. We can construct subjective experience and function, to some extent, even when large pieces of the neural underpinnings are damaged or missing. Consciousness emerges from the interaction of neural systems, such as our sensory and motor systems, our attention systems, and then affects those systems top-down. Before we get into the neuroanatomic details of the attention systems, let’s turn to another system that has profound effects on our consciousness, thinking, and behavior – the autonomic system. The autonomic nervous system is responsible for our fight or flight behaviors, our resting and eating behaviors, and our sexual behaviors. This will be our discussion next week.