Assignment Instructions
For your Midterm essay exam, you will complete 10 essay questions which focus on the course readings. Midterm essay answers must be attached as a Word document to the appropriate assignment page, not typed into the assignment student comments boxes. Write a 300-word answer to each essay question with APA-formatted citations and references (an APA title page and reference page are required, and each question should be answered clearly and numbered). Answer each question thoroughly and completely, providing examples where required.
Answer the questions below in your Midterm exam.
1. Describe the process of perception as a series of steps, beginning with the environmental stimulus and culminating in the behavioral responses of perceiving, recognizing, and acting.
2. Because the long axons of neurons look like electrical wires, and both neurons and electrical wires conduct electricity, it is tempting to equate the two. Compare and contrast the functioning of axons and electrical wires in terms of their structure and the nature of the electrical signals they conduct.
3. What are the two answers (one “simple” and the other “profound”) to the question, “Why is our perception of colors and details worse in dim illumination than in bright illumination?”
4. What is color constancy? Describe three factors that help achieve color constancy. What is brightness constancy? Describe the factors that are responsible for brightness constancy. Lastly, compare and contrast color constancy with brightness constancy.
5. When you walk from outside, which is illuminated by sunlight, to inside, which is illuminated by “tungsten” illumination, your perception of colors remains fairly constant. But under some illuminations, such as street lights called “sodium vapor” lights that sometimes illuminate highways or parking lots, colors do seem to change. Why do you think color constancy would hold under some illuminations, but not others?
6. What is sensory adaptation? How does it occur within the various senses? What function does sensory adaptation serve? Provide a relevant example that illustrates your point.
7. What are the characteristics of the energy that we see as visible light? Provide an example illustrating how these characteristics are expressed when someone sees a rainbow. What types of things (situations and/or objects) can interfere with these characteristics?
8. How does the eye transduce light energy into a neural message? What is the blind spot in the eye and how does it impact the transduction of light energy?
9. How is visual information processed in the brain? What are some things (situations and/or objects) which can impede visual information being processed in the brain? Please include a relevant example to illustrate your answer.
10. What theories contribute to our understanding of color vision? Discuss at least two relevant theories of color vision.
READING
WEEK 1
https://www.youtube.com/watch?v=unWnZvXJH2o
https://www.youtube.com/watch?v=R-sVnmmw6WY
READING
Introduction
Topics to be covered include:
· Sensation and perception
· Sensory processing
· Physiology-perception relationship
· Neurons
What are sensation and perception? This lesson will walk through the process of sensory processing with an overview of the parts of the process you will explore in greater detail in later lessons. Sensation and perception together form a process in which our physical senses react to a stimulus in the environment (like a bee) and move that information to the brain for analysis based on our own unique bundle of experiences and knowledge. This makes perceptions unique to each person.
Visual processes will be introduced as they pertain to sensation and perception. Why is light important in visual processing? This lesson will answer that question and discuss the route sensory information takes in the visual processing systems. One of the main actors in this process is the neuron. By the end of this lesson, you will have a better understanding of what a neuron is and what it does as a messenger, conveying signals to and from the brain.
Sensation and Perception
A girl is out in a field enjoying the warm summer sunshine and beautiful flowers with a toddler. She hears a buzzing sound and starts to look around to place the source. She is familiar with this sound, and looks for a bee. When she finds it, she realizes it is flying toward her, and she begins to run. She has been stung by a bee before, and does not want to feel that pain again. The toddler who is with her hears the same buzzing and sees the same bee, but does not run. As a matter of fact, the toddler is curious and stands watching until the girl picks him up and moves him to a new location.
Why did the toddler have a different reaction to the bee? It has to do with perception, and the cognitive processing that occurs as information from the senses is relayed to the brain for analysis. There is quite a lot of processing that occurs in between hearing the buzzing sound and looking for a bee, and seeing the bee, identifying what it is, and moving away from it.
PRIOR KNOWLEDGE CHANGES PERCEPTION
If the information relayed to the brain matches information previously stored through acquired knowledge and experience, the brain perceives this sensory information based on what is previously stored. The toddler did not have information stored on what bees do, so he was not afraid. The girl, on the other hand, has been stung before, and has previously stored knowledge about bees. Her knowledge led to her perception of the situation. This example shows the interaction of sensation and perception.
Sensation is the process of gathering and transmitting information from the five senses: sight, hearing, taste, smell and touch. Sensation acts as a conveyor of information, but does not process the information (Goldstein & Brockmole, 2017). The information from the senses is processed by the brain as it interprets the information based on knowledge previously stored. This cognitive process is called perception (Goldstein & Brockmole, 2017). In this example, sensation occurs as the girl hears a buzzing sound and sees the bee. The information is then sent to the brain, where it is compared to information already stored in memory systems to create perception.
Processing Sensory Information
· Bottom-up and Top-down Processing
From the beginning, the girl heard a buzzing sound. This information was relayed to the brain. The brain prompted the visual senses to search for the source of the sound. The eyes registered the bee and sent that information to the brain. This time the brain sent the command to move away from the area. The command was based on information stored in the girl’s memory and her previous experiences with bees. The flow of information from the senses to the brain is called bottom-up processing (Goldstein & Brockmole, 2017). The flow of information from the brain to the rest of the body, along with the recognition and review of stored knowledge, is called top-down processing (Goldstein & Brockmole, 2017). One easy way to remember this is to think about the location of the brain relative to most of the senses. Most of the senses are below the brain, so sending information to the brain is an uphill, bottom-up process. The brain sits at the top of the body, for the most part, so information sent out by the brain travels top-down.
Vision
The sense most people tend to rely on first is vision. The brain interprets the wavelength of light as color. Different wavelengths of visible light represent different colors (Griggs, 2016). Shorter wavelengths appear as blueish colors, mid-length waves appear greenish, and longer waves appear as red, orange, and yellow hues (Goldstein & Brockmole, 2017). The wavelength determines color and the amplitude determines the intensity (Griggs, 2016). The wavelength is measured as the distance between wave crests, and amplitude is measured by the height of each crest (Griggs, 2016). The number of waves that pass in one second is the frequency.
Visible light is the range of electromagnetic radiation that humans can see, from about 400 to 700 nanometers of wavelengths. The rest of the electromagnetic spectrum is invisible to the human eye (Goldstein & Brockmole, 2017).
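Expressed as a short sketch, the wavelength-to-color relationship above and the frequency formula (frequency = speed of light ÷ wavelength) look like this. The band boundaries below are illustrative round numbers, not exact figures from the text:

```python
# Hypothetical sketch: classify a visible-light wavelength and compute
# its frequency. Band cutoffs (490 nm, 560 nm) are illustrative.

SPEED_OF_LIGHT = 3.0e8  # meters per second

def describe_wavelength(nm: float) -> str:
    """Classify a wavelength of light (visible range roughly 400-700 nm)."""
    if not 400 <= nm <= 700:
        return "outside the visible spectrum"
    if nm < 490:
        color = "blueish"               # shorter wavelengths
    elif nm < 560:
        color = "greenish"              # mid-length wavelengths
    else:
        color = "red/orange/yellow"     # longer wavelengths
    frequency_hz = SPEED_OF_LIGHT / (nm * 1e-9)  # f = c / wavelength
    return f"{color} (~{frequency_hz:.2e} Hz)"

print(describe_wavelength(450))  # blueish
print(describe_wavelength(650))  # red/orange/yellow
print(describe_wavelength(300))  # outside the visible spectrum
```

Note how a shorter wavelength corresponds to a higher frequency, since the two are inversely related for waves traveling at a fixed speed.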
Neurons
· MESSAGE TRANSPORTERS
· PARTS OF A NEURON
· SPECIALIZATION OF NEURONS
· HOW A MESSAGE TRAVELS
Let us look at how the message travels through a neuron. The dendrites receive the message from other neurons. They then send the information to the soma. The soma receives the messages from the other neurons and moves them to the axon. The axon carries a message from the soma to the terminal branches that will move the message on to the next neuron in the communication network. The message moves down the axon through an electrical current. The message moving through the axon is called the action potential (Goldstein & Brockmole, 2017).
Action Potential
A message travels through a neuron as an electrical charge. When an axon is at rest, meaning a message is not traveling through it, the electric charge inside the axon is more negative than positive. This is called resting potential. When a signal for a new message occurs, the charge inside the axon reverses to a positive electrical charge. This signal and the change in electrical charge is the action potential. Once the signal passes through the axon and moves to the terminal branches, the charge inside the axon returns to a negative status, and as long as a new signal is not coming through, the axon returns to resting potential (Goldstein & Brockmole, 2017). An action potential is an all-or-nothing response. Once it begins, it does not stop until the message signal has moved through the axon.
The electrical charge of the signal remains the same size throughout its passage along the axon. This is called a propagated response. There is a limit to how many responses can move through the axon during a period of time. There is an interval between signals, which is called a refractory period (Goldstein & Brockmole, 2017).
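The all-or-nothing firing and the refractory period described above can be sketched as a toy simulation. The threshold value and refractory length here are illustrative placeholders, not physiological measurements:

```python
# Toy model of an axon: sub-threshold input produces no spike, any
# supra-threshold input produces a full (all-or-nothing) spike, and no
# spike can occur during the refractory period that follows.

def simulate_axon(stimuli, threshold=1.0, refractory_steps=2):
    """Return a spike train: 1 = action potential fired, 0 = no spike."""
    spikes = []
    cooldown = 0  # time steps remaining in the refractory period
    for s in stimuli:
        if cooldown > 0:
            spikes.append(0)           # refractory: cannot fire yet
            cooldown -= 1
        elif s >= threshold:
            spikes.append(1)           # all-or-nothing: full spike
            cooldown = refractory_steps
        else:
            spikes.append(0)           # sub-threshold: no partial spikes
    return spikes

# A strong stimulus fires once; the refractory period then blocks the
# next two inputs even though they are above threshold.
print(simulate_axon([0.5, 1.2, 1.5, 1.5, 1.5]))  # [0, 1, 0, 0, 1]
```

The refractory period is what limits how many responses can move through the axon in a given period of time, as the paragraph above notes.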
Neurotransmitters
Now that we have talked about what happens in a neuron as the signal passes from the dendrites to the terminal branches, let’s talk about what happens as the signal passes from one neuron to the next. Neurons do not touch each other. There is a small gap between each neuron called the synapse, or synaptic gap. As you can imagine, this gap is extremely small. Yet, the charge does need to pass through this gap to get to the next neuron. This is where chemical action comes into play. When an electric signal reaches the end of a terminal branch, a chemical process occurs that allows the signal to move across the synapse using neurotransmitters (Goldstein & Brockmole, 2017).
Neurotransmitters carry the signal from the terminal branches across the synapse, docking at the dendrites of the next neuron. As you can see, we have both an electrical action within the neuron and a chemical reaction to cross from one neuron to the next.
Neurotransmitters, like neurons themselves, are very numerous and very specific. Certain neurotransmitters are used to send certain messages across the gap. They come in different shapes, and when they move across the synapse they fit into specifically shaped docks on the next neuron, called receptor sites (Goldstein & Brockmole, 2017). This helps to maintain the specialization of the message. For example, the message from the girl’s ears about the buzzing would likely have used a different neurotransmitter than a message about the color of a flower. When the signal from the transmitter encounters the correct receptor site, it sets off the electrical signal that again runs through the neuron, starting with the dendrites and moving through the neuron to the terminal branches. The signal then releases the same type of neurotransmitter as in the previous neuron to cross the synapse to the next neuron in the network.
There are two different types of responses at receptor sites, based on the type of neurotransmitter released and the nature of the receptor site targeted by it. The two responses are excitatory and inhibitory. The excitatory response occurs as the inside of the neuron becomes more positive, ultimately triggering an action potential. This change in the charge from very negative toward positive is called depolarization. The excitatory response triggers increasing rates of nerve firing (Goldstein & Brockmole, 2017).
Think about what might happen if the girl moves away from the bee and it continues to follow her. Her reaction would become more excited, and the nerve firing would reflect this. Once the bee moves off in a different direction, the girl would calm down, and the nerve firing would be greatly reduced. As the girl calms down and stands still, the messages would be reduced. An inhibitory response would occur as the inside of the neuron becomes more negative, decreasing the likelihood of an action potential (Goldstein & Brockmole, 2017). Thus, information is processed as it travels through the neurons. When the information is urgent, like the bee chasing the girl, the neurons move with urgency. When the bee is gone and girl calms down, the neurons reduce the need for excitement and reaction.
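A minimal sketch of excitatory and inhibitory inputs summing at a neuron follows. The resting and threshold values are illustrative placeholders (figures near -70 mV and -55 mV are commonly cited, but nothing here is a measurement from the text):

```python
# Toy model: excitatory inputs (+) depolarize the neuron toward the
# firing threshold; inhibitory inputs (-) push it away from threshold.

RESTING_MV = -70.0    # illustrative resting potential
THRESHOLD_MV = -55.0  # illustrative firing threshold

def membrane_response(inputs):
    """Sum the inputs on top of resting potential; fire if threshold is reached."""
    potential = RESTING_MV
    for change in inputs:
        potential += change  # + depolarizes, - hyperpolarizes
    fired = potential >= THRESHOLD_MV
    return potential, fired

# Mostly excitatory input depolarizes the neuron enough to fire...
print(membrane_response([+8, +6, +4]))   # (-52.0, True)
# ...while one strong inhibitory input keeps it below threshold.
print(membrane_response([+8, +6, -10]))  # (-66.0, False)
```

This mirrors the bee example: urgent excitatory input drives firing, while inhibitory input quiets the neuron once the danger has passed.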
Neural Convergence
From the lens, the light reaches the retina, which is the light-sensitive part of the eye that carries out the transduction process for vision. The retina has three layers of cells: ganglion, bipolar, and receptor cells. Light passes first through the ganglion cells, then through the bipolar cells, before finally reaching the receptor cells at the back of the retina. The receptor cells consist of rods and cones. The rods are responsible for dim-light, colorless vision, while the cones are responsible for brighter light and color (Griggs, 2016). There are many more rod cells than cone cells, and rods and cones are distributed differently over the surface of the retina. The fovea, where our vision is best, contains only cones. The peripheral retina, the part of the retina outside the fovea, contains both cones and rods.
The outer segments of the rods and cones contain light-sensitive chemicals called visual pigments that react to light and trigger electrical signals (Goldstein & Brockmole, 2017). These signals pass to the ganglion cells, whose axons form the optic nerve; the optic nerve carries the information out of the back of the eye to the brain, where processing begins (Griggs, 2016; Goldstein & Brockmole, 2017).
Referring back to the example of the girl seeing the bee: light enters the outermost layer, the cornea, then passes through her pupil, the opening at the center of the iris, and the lens enables her to focus on the bee from a distance as it flies closer to her (Goldstein & Brockmole, 2017). Sending electrical “messages,” the girl’s retina translates light into nerve signals, allowing her to detect the colors of the bee, yellow and black. Located in the back of the retina are her visual receptors: the cones allow the girl to see the details and colors of the bee, while the rods, which are far more numerous than cones, do not play a significant role in seeing the bee because it is daylight.
Neurons
We have a basic understanding of visual and auditory information processes, but what transports the message from the sensory organs to the brain and from the brain to the rest of the body? Signals are sent via neurons. Neurons are cells capable of transmitting information. Each neuron is specialized to transmit certain types of information. There are vast networks of complex circuits made up of billions of neurons in thousands of neuron networks throughout the body. Neurons communicate with other neurons with like specializations. Thus, neurons that specialize in visual information would communicate with each other. As you can imagine, the messages travel through the sensory organs and brain systems very quickly. If you think about how long it would take for a girl to hear buzzing, then look for the bee, identify the bee, and move away, you can see how quickly the numerous neural messages move.
· Reception and Transduction
In this example, the visual receptors respond to the light reflected from the bee to the girl’s eyes (Goldstein & Brockmole, 2017). Each sense has sensory receptors, which are cells that respond to the different types of energy that transmit information in the environment, like the sound and light waves from the bee. The auditory receptors were also at work, because the girl’s ears picked up the sound of the buzzing before the girl saw the bee. With visual receptors, light energy is transformed into electrical energy as the visual pigments react to the light. This process is called transduction. Transduction occurs as information from the senses is translated into a message sent to the brain. The message is sent via specialized neural networks that transmit the sensory information from the sensory receptors (Goldstein & Brockmole, 2017). Information transmitted through the neural networks is coded, or converted to a form of information that travels along the neurons to the brain. While the signals change as they move from the initial reception of auditory and visual sensory stimuli, the electrical signals ultimately become a conscious experience based on the perception of the bee and the recognition of what it is (Goldstein & Brockmole, 2017). Recognition is based on the meaning that the girl had assigned to a bee through her previous knowledge and experiences. That meaning is what prompted the behavioral response to flee.
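The reception, transduction, transmission, and recognition sequence can be sketched as a toy pipeline. The function names and the "stored knowledge" lookup below are hypothetical illustrations, not terms from the text:

```python
# Toy pipeline: each stage is a hypothetical function that transforms
# the signal, ending with recognition against stored knowledge.

def reception(stimulus):
    return {"raw_energy": stimulus}              # receptors pick up environmental energy

def transduction(signal):
    return {"electrical": signal["raw_energy"]}  # energy becomes electrical signals

def transmission(signal):
    return {"coded": signal["electrical"]}       # neural networks carry coded signals

def perception(signal, knowledge):
    # Recognition: compare the incoming signal with previously stored meaning.
    return knowledge.get(signal["coded"], "unknown")

stored_knowledge = {"buzzing + yellow/black stripes": "bee -> flee!"}

signal = "buzzing + yellow/black stripes"
for stage in (reception, transduction, transmission):
    signal = stage(signal)
print(perception(signal, stored_knowledge))  # bee -> flee!
```

With an empty knowledge store, the same signal would come back as "unknown", which is roughly the toddler's situation: the sensation arrives intact, but no stored meaning triggers a flee response.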
Vision
Your eyes are made up of different interworking parts. Light enters through the cornea, which is located at the front of the eye (Griggs, 2016). The cornea begins the process as it starts to bend the light waves, which then pass through to the pupil. The pupil is a tiny hole that lets light through. It is controlled by the iris, which gives the eye its color and regulates the size of the pupil (Griggs, 2016). The iris adjusts the amount of light that enters the eye, then the light moves through a transparent lens, which adjusts the focus of the images (Griggs, 2016). The lens is flexible and can change in shape to take in either distant or close images. This ability to change in shape to fit the distance of images is called accommodation (Griggs, 2016). The cornea and lens together determine the eye’s focusing power: the cornea accounts for about two-thirds of that focusing power, while the lens supplies the remaining third. The lens can change its shape in order to focus the light from objects at different distances, while the cornea remains fixed in place (Goldstein & Brockmole, 2017).
Neurons
Neurons consist of different parts responsible for electrically transmitting the message through each neuron:
· The dendrites are the branchlike structures that first receive information.
· The soma, or cell body, maintains the health and metabolism of the cell.
· The nucleus resides within the soma, and is responsible for making the proteins that maintain cell function. The nucleus also houses the DNA for the cell.
· The axon is the long nerve fiber that is connected to the soma, and transmits the information that is passed on from the soma.
· The terminal branch is at the end of each neuron. It is the launching place of the information from that neuron across the synapse to the next neuron.
· The synapse is the very small gap between each neuron.
Now that we have some background on the neuron as a single functioning unit and how signals travel through it individually, it is time to look at neurons collectively as they work in the retina. Five different types of neurons are layered together into interconnected neural circuits within the retina. As we discussed earlier, information travels from receptor cells to bipolar cells to ganglion cells. There are two other types of neurons that are part of the connections in the retina: the horizontal cells, which send signals between receptor neurons, and the amacrine cells, which pass signals between the bipolar and ganglion cells. Neural convergence, or convergence, occurs when multiple neurons synapse onto one neuron. In the retina, there are 126 million receptor cells and 1 million ganglion cells, which means multiple receptor cells send signals to the same ganglion cell. There are 120 million rods and 6 million cones in the retina, so the two receptor types differ greatly in the number of cells sending information to the ganglion cells. Based on these numbers, we can see that, on average, many more rods than cones converge onto each ganglion cell. The greater convergence of rods makes rod vision more sensitive, while the lower convergence of cones gives cone vision better detail (Goldstein & Brockmole, 2017).
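The convergence ratios implied by the cell counts above can be checked with simple division. These are overall averages only; actual wiring varies across the retina, and foveal cones are often described as approaching one-to-one connections:

```python
# Average convergence implied by the retinal cell counts in the text:
# 126 million receptors feed 1 million ganglion cells.

RECEPTORS = 126_000_000  # total receptor cells
GANGLION = 1_000_000     # ganglion cells
RODS = 120_000_000       # rods
CONES = 6_000_000        # cones

print(RECEPTORS // GANGLION)  # 126 receptors per ganglion cell on average
print(RODS // GANGLION)       # 120: average rods per ganglion cell if only rods converged
print(CONES // GANGLION)      # 6: average cones per ganglion cell if only cones converged
```

The 120-versus-6 contrast is the arithmetic behind the trade-off described above: heavy rod convergence pools weak signals (sensitivity), while light cone convergence keeps signals separate (detail).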
As we have just seen, rod vision is more sensitive than cone vision. In the daylight, the intensity of light is much greater than at night, so there are many photons hitting the retina. At night or in dim light, the intensity of light is low, meaning far fewer photons reach the retina. This is why we need many more rods than cones. When you are walking in dim light, you are using your rods to detect the objects you are looking at; the rods need less light to identify the stimuli around you. Cones have better visual acuity, or ability to distinguish details (Goldstein & Brockmole, 2017). This visual acuity would have been beneficial to the girl as she looked for a small bee in a large field of flowers.
Conclusion
Let’s revisit the girl and her bee. We have just learned how much goes into hearing buzzing, looking for the sound, seeing a bee, recognizing the bee, remembering what a bee does, and moving away from the bee. Try an experiment: listen for a sound, turn toward the sound, identify it, and then wave your hand. How long did it take you? Did it seem like any time passed at all? Yet, quite a bit occurred in that seemingly infinitesimal period of time. The sound was transmitted through the auditory system via neural networks, transduced, and processed by the brain. A message was then sent from the brain through motor neurons to have the head move to find the source of the sound. The visual systems picked up the light reflected from the object, sent it through the optical systems, transduced it, and moved it to the brain’s systems for processing. The brain then sent the message to motor neurons to move. Thousands of neurons were activated to send these messages. All of this happened in a period of time we would experience as instant.
In our next lesson we will look at the trip the message takes through the nervous system in more detail.
Sources
Carlson, N. R., Miller, H. L., Heth, D. S., Donahoe, J. W., & Martin, G. N. (2010). Psychology: The science of behavior (7th ed.). Boston, MA: Allyn & Bacon.
Goldstein, E. B. & Brockmole, J. R. (2017). Sensation and perception (10th ed.). Boston, MA: Cengage.
Griggs, R. A. (2016). Psychology: A concise introduction (5th ed.). New York, NY: Worth Publishers.
WEEK 2
READING
Topics to be covered include:
· Sound and hearing
· The auditory system
· Sound localization
In this lesson, we will learn more about sound and the auditory systems that sound waves pass through as they are transduced into signals the brain can understand. Sound travels as vibrations through the outer and middle ears before it is transduced into electrical signals in the inner ear. We will also look at how we are able to identify where a sound came from, based on how sound reaches each of our ears.
For many, sight is the first sense we rely on. We see something and go by what we see. Yet, we cannot always see something, and what we perceive based on our sight is not always accurate. So, which sense do we rely on more than we realize? We can hear in the dark, and while we can be fooled by sounds, we might be a little more cautious with what we hear as opposed to what we see. We use our hearing to listen to and identify different sounds. Some sounds are enjoyable, and others might be a little too loud, or have an unpleasant sound, like a siren or a child playing the same note on a recorder for the fiftieth time trying to get it just right.
Now, let’s look at an example that will help us explain sound and auditory perception. We are at a concert of second-grade children playing their recorders, the plastic flute-like instruments elementary children often learn to play notes on. A couple of children seem to be doing better than others, and have solo parts. Parents scramble to record their children and happily move to the sounds that fill the auditorium. Of course, some visitors might not find the recorders quite so melodious as they listen to the concert. In each case, pressure changes in the air create the stimulus for hearing, similar to how light creates the stimulus for the visual senses. This change in air pressure activates the auditory senses. The information travels through the outer ear to the middle ear, then to the inner ear. The information is processed and sent through brain systems to create a perceptual experience. We have systems that help us determine where the sound comes from, based on how quickly it hits an ear, and which ear it hits first. In some ways, this information is more reliable than visual senses.
This video shows how sounds are produced and how you hear them: What is Sound?
The frequency of sound is on the horizontal axis; the dB levels at which we can hear each frequency is on the vertical axis.
The amplitude of a sound is expressed in dB. Loudness, the perceptual aspect of the sound stimulus, is related to the level of an auditory stimulus: the higher the dB, the louder we perceive a sound, but this varies with the frequency of the sound. The audibility curve indicates the range of frequencies we can hear. Underneath the audibility curve we would not be able to hear tones, but above the curve we can. This area above the curve is called the auditory response area. Above the upper boundary of the auditory response area is the threshold of feeling, an area where the amplitudes are so high that we can feel them, and they would likely cause us pain, but we wouldn’t necessarily hear them (Goldstein & Brockmole, 2017). How many of you have ever heard of a dog whistle? The frequency of a dog whistle is so high that we, as humans, cannot hear it, but dogs can: dogs can hear frequencies higher than the upper limit of the human audibility curve. As you get older, the range of frequencies you can hear shrinks. You can test your hearing at: Hearing Test.
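The decibel scale mentioned above is logarithmic. A short sketch of the standard sound-pressure-level formula, dB SPL = 20 × log10(p / p0), using the conventional reference pressure of 20 micropascals (the example pressures below are illustrative):

```python
# Convert sound pressure to decibels SPL. The reference pressure p0
# (20 micropascals) is the conventional approximate threshold of hearing.

import math

P0 = 20e-6  # reference sound pressure in pascals

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level in dB for a pressure given in pascals."""
    return 20 * math.log10(pressure_pa / P0)

print(round(spl_db(20e-6)))  # 0 dB: the threshold of hearing
print(round(spl_db(2e-3)))   # 40 dB: roughly a quiet room
print(round(spl_db(20.0)))   # 120 dB: near the threshold of feeling/pain
```

Notice that each factor of 10 in pressure adds 20 dB, which is why the jump from a quiet room to painful sound spans such a huge range of physical amplitudes.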
The video plays sounds of the frequency indicated on the screen. Watch the video until you can hear the sound. That is the lower threshold of your hearing. Towards the end of the video you will probably find that you cannot hear sounds above a certain frequency.
As sound vibrations move through the stapes and press against the oval window, the oval window begins a back-and-forth motion that transmits the vibrations to the liquid inside the cochlea, which, in turn, sets the basilar membrane into an up-and-down motion. Remember that the basilar membrane lies below the organ of Corti, so the up-and-down motions cause the organ of Corti to move up and down also. The organ of Corti in turn causes the tectorial membrane to move back and forth just above the outer hair cells. At this point, the vibrations are transformed into electrical signals, beginning the process of transduction. As the cilia of the hair cells bend in one direction, structures called tip links are stretched, opening tiny ion channels in the cilia membranes. When the channels are open, positive ions flow into the cell and create an electrical signal. When the cilia bend in the opposite direction, the tip links go slack, the ion channels close, and the electrical signal stops. This causes alternating bursts of electrical signals and no electrical signals as the tip links stretch and then slacken. When signals are sent, neurotransmitters are released to cross the synapse between the inner hair cells and the auditory nerve fibers, which causes the nerve fibers to fire. If you think about this, you see a pattern: the auditory nerve fibers fire in step with the rising and falling pressure of a pure tone. Firing at the same place (phase) in the sound stimulus is called phase locking (Goldstein & Brockmole, 2017).
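Phase locking can be sketched as a toy simulation in which a fiber fires only near the peaks of a pure tone. The sampling resolution and the peak cutoff below are arbitrary illustrative choices:

```python
# Toy phase locking: a fiber "fires" only when the pressure of a pure
# tone is near its peak, so spikes recur at the same phase every cycle.

import math

def phase_locked_spikes(freq_hz, duration_s=0.01, steps_per_cycle=20):
    """Spike train sampled steps_per_cycle times per cycle of the tone."""
    n = int(duration_s * freq_hz * steps_per_cycle)
    spikes = []
    for i in range(n):
        t = i / (freq_hz * steps_per_cycle)
        pressure = math.sin(2 * math.pi * freq_hz * t)  # pure tone
        spikes.append(1 if pressure > 0.95 else 0)      # fire only near the peak
    return spikes

train = phase_locked_spikes(200)  # 200 Hz tone, 2 cycles of samples
# The spike pattern repeats identically every cycle (every 20 steps),
# which is the phase-locking pattern described above.
print(train[:20])
print(train[20:])
```

Real fibers do not fire on literally every cycle, but when they do fire it is at the same phase of the wave, which is the regularity the sketch captures.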
Remember that pitch is the quality of a sound described as high or low. It is determined by frequency, which, as we have just seen, is partly encoded by place on the basilar membrane. So what else impacts pitch? Another theory is frequency theory, which proposes that the firing rate of the hair cells matches the frequency of the sound wave vibrating the basilar membrane. If, for example, the frequency of a sound is 300 Hz, the firing rate of the hair cells across the basilar membrane would be 300 pulses per second. So, if we put place theory and frequency theory together, what do we get? Research has determined that specific locations on the basilar membrane match specific sound wave frequencies, except for the lower ones; the lower frequencies seem to follow frequency theory and the firing rate of the entire basilar membrane. There is a maximum firing rate for individual nerve cells, but cells can take turns firing, which increases the maximum firing rate for the group as a whole. This process is called the volley principle, and between place theory, frequency theory, and the volley principle, we can see how information is processed by the brain to perceive pitch (Griggs, 2016).
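The volley principle can be sketched in a few lines of code. This is a deliberately simplified model (the 600 Hz tone, the 200 spikes/s ceiling, and the strict round-robin scheduling are hypothetical choices, not claims from the textbooks): no single fiber can fire on every cycle of the tone, but a small group taking turns can represent the full frequency.

```python
def volley_response(stimulus_hz: int, max_rate_hz: int, n_fibers: int) -> list[list[int]]:
    """Assign each stimulus cycle (within one second) to a fiber in round-robin order.

    Returns, for each fiber, the list of cycle indices on which it fires.
    """
    fibers: list[list[int]] = [[] for _ in range(n_fibers)]
    for cycle in range(stimulus_hz):
        fibers[cycle % n_fibers].append(cycle)
    # no individual fiber may exceed its physiological maximum firing rate
    assert all(len(f) <= max_rate_hz for f in fibers)
    return fibers

fibers = volley_response(stimulus_hz=600, max_rate_hz=200, n_fibers=3)
print([len(f) for f in fibers])   # each fiber fires 200 times per second, within its limit
print(sum(len(f) for f in fibers))  # 600 combined firings: the group encodes 600 Hz
```

The point of the sketch is only that the pooled activity of the group, not any single fiber, tracks the stimulus frequency.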
Now that we have seen what happens in the cochlea, let’s move out of the cochlea and continue toward the brain. The auditory nerve carries the signal away from the cochlea toward a sequence of subcortical structures. The first structure is the cochlear nucleus, followed by the superior olivary nucleus in the brain stem. The signal then moves to the inferior colliculus in the midbrain, and on to the medial geniculate nucleus in the thalamus. From the thalamus, the signal continues to the primary auditory cortex in the temporal lobe. While the exact location in the brain responsible for the response to pitch has not been pinned down, the most responsive area seems to be the anterior auditory cortex, an area close to the front of the brain (Goldstein & Brockmole, 2017).
This graph demonstrates the hearing damage for workers in a noisy weaving factory. dBA stands for A-weighted decibels, a version of the dB scale adjusted to reflect the sensitivity of human hearing.
So far, we have looked at the process for normal hearing. What if someone experiences a loss of hearing? How does that happen? Most hearing loss is associated with damage to the outer hair cells and to the auditory nerve fibers. Damage to outer hair cells results in a loss of sensitivity in the basilar membrane, making it harder for someone to separate sounds, such as hearing a door close during a concert. Inner hair cell damage can also result in loss of sensitivity.
One form of hearing loss is presbycusis, which is caused by damage to hair cells from extended exposure to loud noise, ingestion of substances that can cause hair cell damage, and age-related degeneration. With presbycusis there is a loss of sensitivity that is more pronounced at higher frequencies, and it tends to have a higher prevalence in males than in females. Noise-induced hearing loss is another form of hearing degeneration resulting from loud noises; in this case, the damage often involves the organ of Corti. It is also possible to have hearing loss that is not indicated by standard hearing test results, called hidden hearing loss. Standard hearing tests largely measure hair cell function, which might not reveal problems with processing complex sounds (Goldstein & Brockmole, 2017).
We have covered perception of sound based on pitch, frequency, and amplitude, so now what about how we perceive where a sound comes from? Imagine you are at a concert and you hear a baby crying in the audience. You turn your head to the left and see the parent quickly ferrying the child out of the auditorium. You knew where to look based on auditory localization. Now, let’s say you are in the school’s waiting room, waiting with other parents for your child’s name to be called so you can pick them up. It is a small room with quite a few parents, and when the teacher calls your name, you are able to hear it the first time, even though the sound travels two different paths: directly from the teacher’s mouth to your ears, and by bouncing off the walls of the small room. The fact that your auditory perception relies mainly on the direct path is called the precedence effect. Think about this small, noisy waiting room again. Many parents are talking to each other. You are speaking with two parents, and are able to hear what they are saying even though others are talking all around you. Your ability to segregate your conversation from the other conversations in the area is called auditory stream segregation (Goldstein & Brockmole, 2017).
Let’s think back to our first scenario, where we heard the baby crying while the concert recorder band was playing. You hear sounds from two different directions, which creates an auditory space. When you locate the sound of the baby in that auditory space, it is called auditory localization. If you think about the baby’s cry and the sound of the recorders, you will see that they are different and would stimulate different hair cells and nerve fibers in the cochlea. The auditory system also uses location cues created by the way a sound interacts with your head and ears. The two kinds of location cues are binaural cues, which depend on information from both ears, and monaural cues, which depend on information from just one ear. Research indicates three dimensions are involved in locating a sound: the azimuth, extending from left to right; the elevation, extending up and down; and the distance the sound travels from its source to the listener.
Binaural cues use differences between the two ears to determine horizontal position (azimuth), but they do not help with vertical information (elevation). There are two types of binaural cues: interaural level difference, which is based on the difference in sound level at the two ears, and interaural time difference, which is based on the difference between the time it takes for a sound to reach the left ear and the time it takes to reach the right ear. Both time and level differences can be the same at different elevations, which means they do not account for the elevation of a sound, creating a region of ambiguity called the cone of confusion. Monaural cues, in contrast, can locate sounds at different elevations using the spectral cue (Goldstein & Brockmole, 2017).
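To get a feel for how small interaural time differences are, here is a rough sketch using the classic Woodworth approximation for a distant source. The head radius and speed of sound below are assumed textbook-style averages, not values from the course reading:

```python
import math

HEAD_RADIUS_M = 0.0875    # assumed average adult head radius, in meters
SPEED_OF_SOUND = 343.0    # approximate speed of sound in air, m/s

def itd_seconds(azimuth_deg: float) -> float:
    """Woodworth approximation of interaural time difference for a far-away source."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

print(round(itd_seconds(0) * 1e6))    # 0 µs: a source straight ahead reaches both ears together
print(round(itd_seconds(90) * 1e6))   # ~656 µs: a source directly to one side
```

Even at its maximum the difference is well under a millisecond, which is why the coincidence-detecting circuitry described next has to be so precisely timed.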
Now that we have identified the different cues, think about how they might be carried through neural circuits. One theory, the Jeffress model, proposes that the neurons used to transmit signals from the ears are designed to receive signals from both ears. In other words, each neuron processes signals from both ears. The signals move inward and ultimately meet as the signals from the right ear converge with the signals from the left ear. The neurons where they meet are called coincidence detectors because they fire only when both signals arrive at the same time. When the signals meet at the same time at such a neuron, that neuron indicates that the interaural time difference is zero. If the sound comes from one side, the ear on that side begins sending signals before the other ear (Goldstein & Brockmole, 2017).
Areas of the brain that have been implicated in sound location include the back of the cortex, or posterior belt area, and an area toward the front of the cortex, or the anterior belt area. There seems to be a “what” auditory pathway that extends from the anterior belt to the frontal cortex, and a “where” auditory pathway that extends from the posterior belt to the frontal cortex. The “what” pathway works to determine what a sound is, and the “where” pathway determines where the sound is coming from (Goldstein & Brockmole, 2017).
We are going to return to the recorder concert. If the concert had been outside, the sounds would have traveled directly from the recorders to your ears; this is direct sound. This concert was inside an auditorium, so sound reached the ears of the parents both through the direct path and by bouncing off the various surfaces of the auditorium, which is indirect sound. As parents talk to each other in separate groups, they add to the general array of sound sources in the environment, which is called the auditory scene. You are able to separate out and listen to your conversation with another parent even though numerous conversations are going on around you. This ability to separate the sound from each source is called auditory scene analysis.
Imagine that you hear your name from a female voice while you are talking to a parent, and you saw someone open their mouth and look your way at the same time, so you believed the sound of your name came from that person (even though another parent said your name). You did this based on the ventriloquist effect, which occurs when sounds come from one place, but appear to come from another. In this case, you relied more on your vision than your hearing, and you were wrong. On the other side of this, people can use echolocation to detect the positions and shapes of objects without sight. People who cannot see often learn this technique of making a clicking sound and listening for echoes to determine locations and shapes (Goldstein & Brockmole, 2017). These examples show how important hearing is as a source of sensory information.
A simple concert shows us how much we use our hearing in our daily lives. Sound is processed as vibrations that are transported through the outer ear to the middle and then inner ear systems. Systems in the inner ear are responsible for transforming the vibrations into electrical signals that the brain can understand as audio messages. We also have mechanisms that help us determine where a sound is coming from based on which ear the sound arrives at first. Of course, sometimes we can be mistaken. This can happen when our eyes register one thing while our ears register a sound, causing us to make an assumption about where the sound comes from. Sound is important, and our ears can provide information when our eyes cannot, or when our eyes are mistaken.
Goldstein, E. B. & Brockmole, J. R. (2017). Sensation and perception (10th ed.). Boston, MA: Cengage.
Griggs, R. A. (2016). Psychology: A concise introduction (5th ed.). New York, NY: Worth Publishers.
“A close up of a microphone ” by https://pixabay.com/en/microphone-shure-singing-music-2498641/.
“A graph representing sound, with time on the x-axis and air pressure on the y-axis” by http://oceanexplorer.noaa.gov/explorations/sound01/background/acoustics/media/sinewave_261.jpg.
“An audibility graph showing the dB level needed to hear sounds of different frequencies” by https://upload.wikimedia.org/wikipedia/commons/b/bc/Audible.JPG.
“The anatomy of the ear as described in this section.” by 13699578_ML.
“The middle ear anatomy” by 13699578_ML.
“The anatomy of the cochlea ” by 46938501.
“The organ of Corti” by 73652691.
“The auditory pathway” by 15313015.
“A graph showing the hearing loss of workers in a noisy weaving factory” by https://commons.wikimedia.org/w/index.php?search=threshold+of+hearing&title=Special:Search&profile=default&fulltext=1&searchToken=975xk3qgfyy96u9ixxtnhepzs#/media/File:Permanent_threshold_shift_(hearing_loss)_after_no
WEEK 3
https://www.youtube.com/watch?v=o0DYP-u1rNM
https://www.youtube.com/watch?v=6YxffFmi4Eo
https://www.youtube.com/embed/1ss2EWgRCM4?wmode=opaque&rel=0
https://www.youtube.com/embed/Rxl_jh4N_iQ?wmode=opaque&rel=0
https://www.intechopen.com/books/advances-in-ophthalmology/astigmatism
http://www.innerbody.com/image/nervov.html
https://www.intechopen.com/books/advances-in-ophthalmology/myopia-light-and-circadian-rhythms
By: Tim Barclay, PhD
Medically reviewed by: Stephanie Curreli, MD, PhD
Last Updated: Apr 9, 2020
The nervous system consists of the brain, spinal cord, sensory organs, and all of the nerves that connect these organs with the rest of the body. Together, these organs are responsible for the control of the body and communication among its parts. The brain and spinal cord form the control center known as the central nervous system (CNS), where information is evaluated and decisions made. The sensory nerves and sense organs of the peripheral nervous system (PNS) monitor conditions inside and outside of the body and send this information to the CNS. Efferent nerves in the PNS carry signals from the control center to the muscles, glands, and organs to regulate their functions.
The majority of the nervous system is tissue made up of two classes of cells: neurons and neuroglia.
Neurons, also known as nerve cells, communicate within the body by transmitting electrochemical signals. Neurons look quite different from other cells in the body due to the many long cellular processes that extend from their central cell body. The cell body is the roughly round part of a neuron that contains the nucleus, mitochondria, and most of the cellular organelles. Small tree-like structures called dendrites extend from the cell body to pick up stimuli from the environment, other neurons, or sensory receptor cells. Long transmitting processes called axons extend from the cell body to send signals onward to other neurons or effector cells in the body.
There are 3 basic classes of neurons: afferent neurons, efferent neurons, and interneurons.
1. Afferent neurons. Also known as sensory neurons, afferent neurons transmit sensory signals to the central nervous system from receptors in the body.
2. Efferent neurons. Also known as motor neurons, efferent neurons transmit signals from the central nervous system to effectors in the body such as muscles and glands.
3. Interneurons. Interneurons form complex networks within the central nervous system to integrate the information received from afferent neurons and to direct the function of the body through efferent neurons.
Neuroglia, also known as glial cells, act as the “helper” cells of the nervous system. Each neuron in the body is surrounded by anywhere from 6 to 60 neuroglia that protect, feed, and insulate the neuron. Because neurons are extremely specialized cells that are essential to body function and almost never reproduce, neuroglia are vital to maintaining a functional nervous system.
The brain, a soft, wrinkled organ that weighs about 3 pounds, is located inside the cranial cavity, where the bones of the skull surround and protect it. The approximately 100 billion neurons of the brain form the main control center of the body. The brain and spinal cord together form the central nervous system (CNS), where information is processed and responses originate. The brain, the seat of higher mental functions such as consciousness, memory, planning, and voluntary actions, also controls lower body functions such as the maintenance of respiration, heart rate, blood pressure, and digestion.
The spinal cord is a long, thin mass of bundled neurons that carries information through the vertebral cavity of the spine beginning at the medulla oblongata of the brain on its superior end and continuing inferiorly to the lumbar region of the spine. In the lumbar region, the spinal cord separates into a bundle of individual nerves called the cauda equina (due to its resemblance to a horse’s tail) that continues inferiorly to the sacrum and coccyx. The white matter of the spinal cord functions as the main conduit of nerve signals to the body from the brain. The grey matter of the spinal cord integrates reflexes to stimuli.
Nerves are bundles of axons in the peripheral nervous system (PNS) that act as information highways to carry signals between the brain and spinal cord and the rest of the body. Each axon is wrapped in a connective tissue sheath called the endoneurium. Individual axons of the nerve are bundled into groups of axons called fascicles, wrapped in a sheath of connective tissue called the perineurium. Finally, many fascicles are wrapped together in another layer of connective tissue called the epineurium to form a whole nerve. The wrapping of nerves with connective tissue helps to protect the axons and to increase the speed of their communication within the body.
· Afferent, Efferent, and Mixed Nerves. Some of the nerves in the body are specialized for carrying information in only one direction, similar to a one-way street. Nerves that carry information only from sensory receptors to the central nervous system are called afferent nerves. Other nerves, known as efferent nerves, carry signals only from the central nervous system to effectors such as muscles and glands. Finally, some nerves are mixed nerves that contain both afferent and efferent axons. Mixed nerves function like 2-way streets where afferent axons act as lanes heading toward the central nervous system and efferent axons act as lanes heading away from it.
· Cranial Nerves. Extending from the inferior side of the brain are 12 pairs of cranial nerves. Each cranial nerve pair is identified by a Roman numeral, I through XII, based upon its location along the anterior-posterior axis of the brain. Each nerve also has a descriptive name (e.g. olfactory, optic, etc.) that identifies its function or location. The cranial nerves provide a direct connection to the brain for the special sense organs, muscles of the head, neck, and shoulders, the heart, and the GI tract.
· Spinal Nerves. Extending from the left and right sides of the spinal cord are 31 pairs of spinal nerves. The spinal nerves are mixed nerves that carry both sensory and motor signals between the spinal cord and specific regions of the body. The 31 spinal nerves are split into 5 groups named for the 5 regions of the vertebral column. Thus, there are 8 pairs of cervical nerves, 12 pairs of thoracic nerves, 5 pairs of lumbar nerves, 5 pairs of sacral nerves, and 1 pair of coccygeal nerves. Each spinal nerve exits from the spinal cord through the intervertebral foramen between a pair of vertebrae or between the C1 vertebra and the occipital bone of the skull.
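The five regional groups of spinal nerves listed above do sum to the stated 31 pairs, which is easy to check:

```python
# Pairs of spinal nerves per vertebral region, as described above.
spinal_nerve_pairs = {
    "cervical": 8,
    "thoracic": 12,
    "lumbar": 5,
    "sacral": 5,
    "coccygeal": 1,
}

print(sum(spinal_nerve_pairs.values()))  # 31
```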
The meninges are the protective coverings of the central nervous system (CNS). They consist of three layers: the dura mater, arachnoid mater, and pia mater.
· Dura mater. The dura mater, which means “tough mother,” is the thickest, toughest, and most superficial layer of meninges. Made of dense irregular connective tissue, it contains many tough collagen fibers and blood vessels. Dura mater protects the CNS from external damage, contains the cerebrospinal fluid that surrounds the CNS, and provides blood to the nervous tissue of the CNS.
· Arachnoid mater. The arachnoid mater, which means “spider-like mother,” is much thinner and more delicate than the dura mater. It lines the inside of the dura mater and contains many thin fibers that connect it to the underlying pia mater. These fibers cross a fluid-filled space called the subarachnoid space between the arachnoid mater and the pia mater.
· Pia mater. The pia mater, which means “tender mother,” is a thin and delicate layer of tissue that rests on the outside of the brain and spinal cord. Containing many blood vessels that feed the nervous tissue of the CNS, the pia mater penetrates into the valleys of the sulci and fissures of the brain as it covers the entire surface of the CNS.
The space surrounding the organs of the CNS is filled with a clear fluid known as cerebrospinal fluid (CSF). CSF is formed from blood plasma by special structures called choroid plexuses. The choroid plexuses contain many capillaries lined with epithelial tissue that filters blood plasma and allows the filtered fluid to enter the space around the brain.
Newly created CSF flows through the inside of the brain in hollow spaces called ventricles and through a small cavity in the middle of the spinal cord called the central canal. CSF also flows through the subarachnoid space around the outside of the brain and spinal cord. CSF is constantly produced at the choroid plexuses and is reabsorbed into the bloodstream at structures called arachnoid villi.
Cerebrospinal fluid provides several vital functions to the central nervous system:
1. CSF absorbs shocks between the brain and skull and between the spinal cord and vertebrae. This shock absorption protects the CNS from blows or sudden changes in velocity, such as during a car accident.
2. The brain and spinal cord float within the CSF, reducing their apparent weight through buoyancy. The brain is a very large but soft organ that requires a high volume of blood to function effectively. The reduced weight in cerebrospinal fluid allows the blood vessels of the brain to remain open and helps protect the nervous tissue from becoming crushed under its own weight.
3. CSF helps to maintain chemical homeostasis within the central nervous system. It contains ions, nutrients, oxygen, and albumins that support the chemical and osmotic balance of nervous tissue. CSF also removes waste products that form as byproducts of cellular metabolism within nervous tissue.
All of the body’s many sense organs are components of the nervous system. What are known as the special senses—vision, taste, smell, hearing, and balance—are all detected by specialized organs such as the eyes, taste buds, and olfactory epithelium. Sensory receptors for the general senses like touch, temperature, and pain are found throughout most of the body. All of the sensory receptors of the body are connected to afferent neurons that carry their sensory information to the CNS to be processed and integrated.
The nervous system has 3 main functions: sensory, integration, and motor.
1. Sensory. The sensory function of the nervous system involves collecting information from sensory receptors that monitor the body’s internal and external conditions. These signals are then passed on to the central nervous system (CNS) for further processing by afferent neurons (and nerves).
2. Integration. The process of integration is the processing of the many sensory signals that are passed into the CNS at any given time. These signals are evaluated, compared, used for decision making, discarded or committed to memory as deemed appropriate. Integration takes place in the gray matter of the brain and spinal cord and is performed by interneurons. Many interneurons work together to form complex networks that provide this processing power.
3. Motor. Once the networks of interneurons in the CNS evaluate sensory information and decide on an action, they stimulate efferent neurons. Efferent neurons (also called motor neurons) carry signals from the gray matter of the CNS through the nerves of the peripheral nervous system to effector cells. The effector may be smooth, cardiac, or skeletal muscle tissue or glandular tissue. The effector then releases a hormone or moves a part of the body to respond to the stimulus.
Unfortunately, our nervous system does not always function as it should. Sometimes this is the result of diseases like Alzheimer’s disease and Parkinson’s disease, some of which carry a genetic component.
The brain and spinal cord together form the central nervous system, or CNS. The CNS acts as the control center of the body by providing its processing, memory, and regulation systems. The CNS takes in all of the conscious and subconscious sensory information from the body’s sensory receptors to stay aware of the body’s internal and external conditions. Using this sensory information, it makes decisions about both conscious and subconscious actions to take to maintain the body’s homeostasis and ensure its survival. The CNS is also responsible for the higher functions of the nervous system such as language, creativity, expression, emotions, and personality. The brain is the seat of consciousness and determines who we are as individuals.
The peripheral nervous system (PNS) includes all of the parts of the nervous system outside of the brain and spinal cord. These parts include all of the cranial and spinal nerves, ganglia, and sensory receptors.
The somatic nervous system (SNS) is a division of the PNS that includes all of the voluntary efferent neurons. The SNS is the only consciously controlled part of the PNS and is responsible for stimulating skeletal muscles in the body.
The autonomic nervous system (ANS) is a division of the PNS that includes all of the involuntary efferent neurons. The ANS controls subconscious effectors such as visceral muscle tissue, cardiac muscle tissue, and glandular tissue.
There are 2 divisions of the autonomic nervous system in the body: the sympathetic and parasympathetic divisions.
· Sympathetic. The sympathetic division forms the body’s “fight or flight” response to stress, danger, excitement, exercise, emotions, and embarrassment. The sympathetic division increases respiration and heart rate, releases adrenaline and other stress hormones, and decreases digestion to cope with these situations.
· Parasympathetic. The parasympathetic division forms the body’s “rest and digest” response when the body is relaxed, resting, or feeding. The parasympathetic works to undo the work of the sympathetic division after a stressful situation. Among other functions, the parasympathetic division works to decrease respiration and heart rate, increase digestion, and permit the elimination of wastes.
The enteric nervous system (ENS) is the division of the ANS that is responsible for regulating digestion and the function of the digestive organs. The ENS receives signals from the central nervous system through both the sympathetic and parasympathetic divisions of the autonomic nervous system to help regulate its functions. However, the ENS mostly works independently of the CNS and continues to function without any outside input. For this reason, the ENS is often called the “brain of the gut” or the body’s “second brain.” The ENS is an immense system—almost as many neurons exist in the ENS as in the spinal cord.
Neurons function through the generation and propagation of electrochemical signals known as action potentials (APs). An AP is created by the movement of sodium and potassium ions through the membrane of neurons. (See Water and Electrolytes.)
· Resting Potential. At rest, neurons maintain a concentration of sodium ions outside of the cell and potassium ions inside of the cell. This concentration is maintained by the sodium-potassium pump of the cell membrane which pumps 3 sodium ions out of the cell for every 2 potassium ions that are pumped into the cell. The ion concentration results in a resting electrical potential of -70 millivolts (mV), which means that the inside of the cell has a negative charge compared to its surroundings.
· Threshold Potential. If a stimulus permits enough positive ions to enter a region of the cell to cause it to reach -55 mV, that region of the cell will open its voltage-gated sodium channels and allow sodium ions to diffuse into the cell. -55 mV is the threshold potential for neurons as this is the “trigger” voltage that they must reach to cross the threshold into forming an action potential.
· Depolarization. Sodium carries a positive charge that causes the cell to become depolarized (positively charged) compared to its normal negative charge. The voltage for depolarization of all neurons is +30 mV. The depolarization of the cell is the AP that is transmitted by the neuron as a nerve signal. The positive ions spread into neighboring regions of the cell, initiating a new AP in those regions as they reach -55 mV. The AP continues to spread down the cell membrane of the neuron until it reaches the end of an axon.
· Repolarization. After the depolarization voltage of +30 mV is reached, voltage-gated potassium ion channels open, allowing positive potassium ions to diffuse out of the cell. The loss of potassium, along with the pumping of sodium ions back out of the cell through the sodium-potassium pump, restores the cell to the -70 mV resting potential. At this point the neuron is ready to start a new action potential.
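The voltages in the phases above can be collected into a toy classifier. This is only an illustrative sketch built from the -70 mV, -55 mV, and +30 mV values in the text; the function and its labels are hypothetical, not a standard model:

```python
RESTING_MV = -70.0     # resting potential
THRESHOLD_MV = -55.0   # threshold that triggers an action potential
PEAK_MV = 30.0         # peak depolarization voltage

def phase(voltage_mv: float, rising: bool) -> str:
    """Classify the membrane state at a given voltage and direction of change."""
    if voltage_mv <= RESTING_MV:
        return "resting"
    if rising and voltage_mv < THRESHOLD_MV:
        return "subthreshold stimulus"
    if rising:
        return "depolarizing (Na+ channels open)"
    return "repolarizing (K+ channels open)"

print(phase(-70.0, rising=True))   # resting
print(phase(-60.0, rising=True))   # subthreshold stimulus: below -55 mV, no action potential yet
print(phase(-50.0, rising=True))   # depolarizing (Na+ channels open): threshold has been crossed
print(phase(0.0, rising=False))    # repolarizing (K+ channels open): heading back toward rest
```

The `rising` flag stands in for the fact that the same voltage (say, -60 mV) means different things on the way up toward threshold and on the way back down toward rest.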
A synapse is the junction between a neuron and another cell. Synapses may form between 2 neurons or between a neuron and an effector cell. There are two types of synapses found in the body: chemical synapses and electrical synapses.
· Chemical synapses. At the end of a neuron’s axon is an enlarged region of the axon known as the axon terminal. The axon terminal is separated from the next cell by a small gap known as the synaptic cleft. When an AP reaches the axon terminal, it opens voltage-gated calcium ion channels. Calcium ions cause vesicles containing chemicals known as neurotransmitters (NT) to release their contents by exocytosis into the synaptic cleft. The NT molecules cross the synaptic cleft and bind to receptor molecules on the cell, forming a synapse with the neuron. These receptor molecules open ion channels that may either stimulate the receptor cell to form a new action potential or may inhibit the cell from forming an action potential when stimulated by another neuron.
· Electrical synapses. Electrical synapses are formed when 2 neurons are connected by small holes called gap junctions. The gap junctions allow electric current to pass from one neuron to the other, so that an AP in one cell is passed directly on to the other cell through the synapse.
The axons of many neurons are covered by a coating of insulation known as myelin to increase the speed of nerve conduction throughout the body. Myelin is formed by 2 types of glial cells: Schwann cells in the PNS and oligodendrocytes in the CNS. In both cases, the glial cells wrap their plasma membrane around the axon many times to form a thick covering of lipids. The development of these myelin sheaths is known as myelination.
Myelination speeds up the movement of APs in the axon by reducing the number of APs that must form for a signal to reach the end of an axon. The myelination process begins speeding up nerve conduction in fetal development and continues into early adulthood. Myelinated axons appear white due to the presence of lipids and form the white matter of the inner brain and outer spinal cord. White matter is specialized for carrying information quickly through the brain and spinal cord. The gray matter of the brain and spinal cord is the unmyelinated integration center where information is processed.
Reflexes are fast, involuntary responses to stimuli. The best-known reflex is the patellar reflex, which is checked when a physician taps on a patient’s knee during a physical examination. Reflexes are integrated in the gray matter of the spinal cord or in the brain stem. Reflexes allow the body to respond to stimuli very quickly by sending responses to effectors before the nerve signals reach the conscious parts of the brain. This explains why people will often pull their hands away from a hot object before they realize they are in pain.
Each of the 12 cranial nerves has a specific function within the nervous system.
· The olfactory nerve (I) carries scent information to the brain from the olfactory epithelium in the roof of the nasal cavity.
· The optic nerve (II) carries visual information from the eyes to the brain.
· The oculomotor, trochlear, and abducens nerves (III, IV, and VI) all work together to allow the brain to control the movement and focus of the eyes.
· The trigeminal nerve (V) carries sensations from the face and innervates the muscles of mastication.
· The facial nerve (VII) innervates the muscles of the face to make facial expressions and carries taste information from the anterior 2/3 of the tongue.
· The vestibulocochlear nerve (VIII) conducts auditory and balance information from the ears to the brain.
· The glossopharyngeal nerve (IX) carries taste information from the posterior 1/3 of the tongue and assists in swallowing.
· The vagus nerve (X), sometimes called the wandering nerve because it “wanders” through the head, neck, and torso to innervate many different areas, carries information about the condition of the vital organs to the brain, delivers motor signals to control speech, and delivers parasympathetic signals to many organs.
· The accessory nerve (XI) controls the movements of the shoulders and neck.
· The hypoglossal nerve (XII) moves the tongue for speech and swallowing.
All sensory receptors can be classified by their structure and by the type of stimulus that they detect. Structurally, there are three classes of sensory receptors: free nerve endings, encapsulated nerve endings, and specialized cells. Free nerve endings are simply free dendrites at the end of a neuron that extend into a tissue. Pain, heat, and cold are all sensed through free nerve endings. An encapsulated nerve ending is a free nerve ending wrapped in a round capsule of connective tissue. When the capsule is deformed by touch or pressure, the neuron is stimulated to send signals to the CNS. Specialized cells detect stimuli from the five special senses: vision, hearing, balance, smell, and taste. Each of the special senses has its own unique sensory cells, such as the rods and cones in the retina that detect light for the sense of vision.
Functionally, there are six major classes of receptors: mechanoreceptors, nociceptors, photoreceptors, chemoreceptors, osmoreceptors, and thermoreceptors.
· Mechanoreceptors. Mechanoreceptors are sensitive to mechanical stimuli like touch, pressure, vibration, and blood pressure.
· Nociceptors. Nociceptors respond to stimuli such as extreme heat, cold, or tissue damage by sending pain signals to the CNS.
· Photoreceptors. Photoreceptors in the retina detect light to provide the sense of vision.
· Chemoreceptors. Chemoreceptors detect chemicals in the bloodstream and provide the senses of taste and smell.
· Osmoreceptors. Osmoreceptors monitor the osmolarity of the blood to determine the body’s hydration levels.
· Thermoreceptors. Thermoreceptors detect temperatures inside the body and in its surroundings.
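For quick reference, the six functional receptor classes above can be collected into a simple lookup table. This is just an illustrative restatement in Python; the short stimulus descriptions are paraphrased from the list above.

```python
# A lookup table restating the six functional classes of
# sensory receptors described above (illustrative only).
receptor_classes = {
    "mechanoreceptor": "touch, pressure, vibration, blood pressure",
    "nociceptor": "pain from extreme heat, cold, or tissue damage",
    "photoreceptor": "light (vision)",
    "chemoreceptor": "chemicals (taste, smell, blood chemistry)",
    "osmoreceptor": "blood osmolarity (hydration level)",
    "thermoreceptor": "internal and external temperature",
}

print(receptor_classes["nociceptor"])
```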
READING
PSYC304 | LESSON 3: VISUAL PERCEPTION: NEURAL SELECTIVITY
In this lesson, we will look in greater depth at how the visual system identifies and processes information. Light is processed and changed as it moves through the visual system to reach the brain. The information is then interpreted by the brain, meaning the exact visual stimulus initially encountered as a distal stimulus is no longer quite the same once the brain has processed it. This lesson will explain the different processes that move the information from the eye to the brain.
For this lesson, let us take a trip to the store. When you go grocery shopping, think about how you accomplish the goal of getting everything on your list. Is it a straight line, or do you weave in and out of aisles as you put different types of groceries in your cart? If you only go to the store for one item, a straight line might work, but I’m not sure anyone could exist on just one item to eat for the entire week. The point is that the straight line would be fairly one dimensional and would not provide you with the variety of foods you need.
This is true for electrical signals too. Signals sent to the brain from receptors do not go directly to the brain from the receptor in a straight line. The information that is sent to the brain gets there via the signals of many neurons responsible for different aspects of the initial sensory image. This interaction of the signals of multiple neurons is called neural processing (Goldstein & Brockmole, 2017).
Now, imagine you are in the middle of the store when an end cap full of potato chips falls over, leaving a mess everywhere. At the time of this incident, 20 people were clustered together nearby, and each one saw something different. You were in this cluster and caught the incident out of the corner of your eye. You did not see it directly, but you did see some of it and had a good idea of what else happened. By the time you think about it later, the image you pull up seems more complete somehow. Your brain added some details to create a complete picture for you.
When an image is transmitted to the retina, it is processed through a layer of photoreceptors, or neurons that measure light intensity, and alter this information into something that can be processed by the rest of the nervous system. Different photoreceptors correspond to different light points in the observed stimulus. Photoreceptors that correspond to brighter areas of a stimulus process an increased amount of light, which results in larger signals when compared to photoreceptors that correspond to darker areas of a stimulus. This information is then processed in different ways in different interactions with different neurons within the retina. The photoreceptors generate signals based on the amount of light they are receiving, which means signals will be different based on different amounts of light (Grobstein, 2017).
Lateral inhibition refers to the inhibition that neurons exert on one another, transmitted across the retina in order to increase the contrast between dark and light areas and produce a sharper image for the brain (Goldstein & Brockmole, 2017). Think about the endcap of potato chips falling over. If the manager asks what happened and everyone speaks at once, the message becomes blurred because too many people are providing information. If only a couple of people speak, preferably people who saw the event from different angles, the picture becomes clearer and more defined. So, how does this work with vision? Output neurons point out to the brain the areas of contrast where light intensity changes quickly, like the light and dark patterns of a checkerboard (Grobstein, 2017).
In a classic study, Keffer Hartline, Henry Wagner, and Floyd Ratliff used the horseshoe crab, Limulus polyphemus, to demonstrate how lateral inhibition can affect the response of neurons in a circuit. The Limulus is a favorite specimen because its retinal neurons are large and easily accessible. Its eyes, which contain many tiny structures called ommatidia, have yielded a significant amount of research on the physiological processes underlying human vision. Each ommatidium has a small lens on the eye’s surface located directly over a single receptor. What makes the Limulus eye interesting to study is that light shone on a single receptor led to a rapid firing rate in its nerve fiber, yet when additional light stimulated neighboring receptors, the firing of the initial receptor was inhibited (Goldstein & Brockmole, 2017). Thus, lateral inhibition reduces the firing intensity of neighboring receptors, and as this reduction occurs, the contrast and definition of the stimulus increase.
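The inhibitory interaction Hartline and colleagues observed can be sketched numerically. The following is a minimal simulation, not their model: the light levels, inhibition weight, and one-neighbor-on-each-side wiring are invented for illustration.

```python
# A minimal sketch of lateral inhibition across a row of receptors.
# The light values and the inhibition weight are illustrative
# assumptions, not figures from the Limulus experiments.

def lateral_inhibition(light, weight=0.2):
    """Each receptor's output is its own input minus a fraction of
    the activity of its immediate neighbors (edges reuse their own
    value as the missing neighbor)."""
    out = []
    for i, x in enumerate(light):
        left = light[i - 1] if i > 0 else x
        right = light[i + 1] if i < len(light) - 1 else x
        out.append(x - weight * (left + right))
    return out

# A step from a dim region (10) to a bright region (40). The output
# exaggerates the step: the dark side of the border gets darker and
# the bright side gets brighter, as in the Chevreul illusion.
print(lateral_inhibition([10, 10, 10, 40, 40, 40]))
```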
Contrast seems less defined when you look at an array of rectangles of the same color ranging from lighter to darker, which can give rise to illusions. French chemist Michel-Eugène Chevreul demonstrated a brightness illusion by placing gray rectangles side by side, ranging from light to dark gray from left to right. When you look at these rectangles, you see an illusion of brightness and color caused by the adjacent rectangles. Each rectangle appears uniform between its borders, but at the borders it seems as if the band becomes darker on the left and lighter on the right as you transition to the next shade of gray (Goldstein & Brockmole, 2017). This illusion is the result of lateral inhibition, as neural output varies when the amount of light varies.
Another perceptual phenomenon explained by lateral inhibition is the Hermann Grid. Gray spots appear at the intersections of the white “paths” between the black squares, yet when you look directly at them, they vanish. Lateral inhibition can help explain why this occurs. Signals from bipolar cells create the illusion of gray spots at each intersection, and lateral inhibition produces a weaker response at the intersections, which explains why perception does not match the actual physical stimulus (Goldstein & Brockmole, 2017). In a sense, the brain fills in an intensity between the intensities of the two darker squares.
If you think about it, we live in a world of constantly changing light. We encounter intense light, and then we encounter less intense light in varying shades as we move throughout our day. Yet, we do not really notice this. What we see is not really the visual stimulus as it truly appears, but something processed through neural networks as the light is prepared for analysis by the brain.
Of course, lateral inhibition is not the only explanation for these visual illusions. Researchers have conducted studies that challenge lateral inhibition as an explanation of both the Chevreul illusion and the Hermann Grid. First, for the Chevreul illusion, researchers changed the background ramp from light on the left and dark on the right to the opposite. In so doing, one’s perception of the top and bottom changes, while lateral inhibition between the rectangles stays the same (Goldstein & Brockmole, 2017). The effect is thus influenced by the top and bottom of the display in addition to the changes from light to dark.
With regard to the Hermann Grid, problems with the lateral inhibition explanation arise when the grid is made with curvy lines rather than straight ones. Curving the squares should have little effect on the dark spots, yet when the squares are curved, the dark spots vanish. While this calls the lateral inhibition account into question to an extent, it does not completely discount it (Goldstein & Brockmole, 2017). The way perception changes with these altered stimuli opens the door for additional research to determine the extent of lateral inhibition’s influence, or perhaps to change how we understand that influence.
Recording from the visual system of an animal, Hubel and Wiesel showed how cortical neurons at higher levels of the visual system become more specialized for certain types of stimuli. By projecting stimuli onto a screen rather than shining light directly into the animal’s eye, the researchers were able to map which areas of a neuron’s receptive field were excitatory and which were inhibitory (Goldstein & Brockmole, 2017).
It is important to understand how signals travel from the retina. Following Hubel and Wiesel’s approach: signals leave the eye in the optic nerve, travel to the lateral geniculate nucleus (LGN), and then to the occipital lobe, the visual receiving area of the cortex. Interestingly, center-surround receptive fields are present in both the optic nerve fibers and the neurons in the LGN, which raises the question of what the LGN does. It may act to regulate neural information, given that the output from the LGN is reduced in comparison to the input going into it. Another possibility is that it is involved in feedback of information received from the brain (Goldstein & Brockmole, 2017).
Hubel and Wiesel also conducted research on receptive fields in the striate cortex. They discovered that instead of the center-surround arrangement, the excitatory and inhibitory areas of these receptive fields are arranged side by side. Cells with such receptive fields are called simple cortical cells. These cells respond to stimuli of specific orientations; a given cell might, for example, be most sensitive to vertical orientations. This sensitivity is described by the orientation tuning curve, which plots how a cell’s firing changes as a bar is rotated from vertical to tilted orientations. When the bar matches the cell’s preferred vertical orientation, the firing response is at its peak; as the bar is tilted away, the response decreases and the influence of the inhibitory areas begins to show (Goldstein & Brockmole, 2017).
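The shape of an orientation tuning curve can be sketched with a simple function. The Gaussian form, preferred angle, tuning width, and maximum firing rate below are common modeling assumptions, not Hubel and Wiesel’s recorded values.

```python
# A hedged sketch of an orientation tuning curve for a simple
# cortical cell. All parameter values are illustrative assumptions.
import math

def tuning_response(angle_deg, preferred=90.0, width=20.0, max_rate=50.0):
    """Firing rate peaks at the preferred orientation (here vertical,
    90 degrees) and falls off smoothly as the bar is tilted away."""
    return max_rate * math.exp(-((angle_deg - preferred) ** 2) / (2 * width ** 2))

# Response drops steadily as a vertical bar is tilted toward horizontal.
for angle in (90, 70, 50, 30):
    print(angle, round(tuning_response(angle), 1))
```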
Not all cells in the striate cortex responded the same way for Hubel and Wiesel. Some cells did nothing when exposed to small spots of light, so their behavior could not be captured simply by relating orientation to firing (Goldstein & Brockmole, 2017). The researchers discovered by accident that some cells in the striate cortex respond instead to other kinds of stimuli, such as moving or more complex patterns.
Different cells respond to different, specific features, earning the name feature detectors.
Now that we have explored how feature detectors respond to specific stimuli, it is time to see whether they have anything to do with perception. One way to test this is through selective adaptation. Selective adaptation occurs when neurons that are firing eventually become fatigued, or adapt. Adaptation causes the neuron’s firing rate to decrease, and it also causes the neuron to fire less frequently when the stimulus is presented again soon afterward. Adaptation is selective because only the neurons that were firing adapt; non-firing neurons do not. This indicates that adaptation selectively affects specific orientations, just as neurons selectively respond to specific orientations (Goldstein & Brockmole, 2017).
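The selectivity of adaptation can be illustrated with a toy calculation. The decay factor below is invented; the point is only that a firing neuron’s response declines with repeated presentations while a silent neuron’s response stays at zero.

```python
# A toy model of selective adaptation. The decay factor (0.7 per
# presentation) is an invented illustration, not a measured value.

def adapt(rate, presentations, decay=0.7):
    """Firing rate after repeated presentations of the same stimulus:
    each presentation multiplies the rate by a fatigue factor."""
    return rate * (decay ** presentations)

vertical_neuron = 50.0  # fires strongly to a vertical bar
tilted_neuron = 0.0     # does not fire to a vertical bar at all

# Only the neuron that was firing fatigues; the silent one is unchanged.
for n in range(4):
    print(n, adapt(vertical_neuron, n), adapt(tilted_neuron, n))
```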
Now that we have explored receptive fields and the response properties of neurons, it is time to look at how the visual system is organized.
Most of what we have been discussing is based on normal functioning of the parts of the visual system. That, however, is not always the case, and you can learn as much from what happens when something does not work as from when it works properly. fMRI is also used to study people who have suffered some form of damage to the visual association pathway. Visual agnosia is an inability to properly perceive a stimulus as it should be perceived (Carlson, Miller, Heth, Donahoe, & Martin, 2010). With visual agnosia, an individual is capable of sight and can see a stimulus with the visual sensory organs, but cannot identify the stimulus. Let’s look at a few types of visual agnosia.
Imagine you are on your shopping trip and your best friend is also there. Unfortunately, when you look at him you see a head with eyes, a nose, a mouth, and cheeks, but they are not where they are supposed to be, and you are unable to identify him. You can recognize his voice, which gives his identity away, but your visual system is not translating the information it is processing in a way that allows you to see a complete face as it is truly put together. This is prosopagnosia.
Damage to the temporal lobe can result in prosopagnosia, or a difficulty recognizing the faces of people whose identity is known (Goldstein & Brockmole, 2017). People diagnosed with prosopagnosia understand that they are looking at a face, but cannot identify the owner of the face, even if it is a close loved one. The individual can recognize the parts of the face, but the configuration of the features does not align correctly (Carlson et al., 2010).
Faces are not the only topic of specialization in the temporal cortex. We also see specialization for place, specifically pictures of indoor and outdoor scenes, which activate the parahippocampal place area (PPA) in the ventral stream (Goldstein & Brockmole, 2017). An individual with a visual agnosia in this area might be able to recognize the grocery store as a store, but might be unable to recognize the specific objects in the store, such as the displays, the food on the shelves, or other objects. So, it would seem that the spatial layout is intact, but the objects within the layout are not.
One other area of specialization, in the region next to the primary visual cortex, is the extrastriate body area (EBA), which is activated by body parts other than the face (Goldstein & Brockmole, 2017). If visual agnosia occurs in the EBA, an individual might be able to recognize a face but not a hand or leg. With this form of agnosia, you would be able to recognize your friend’s face in the store, and the shelves and produce, but not your friend’s hand or arm as a hand or arm.
So, what about the environment and the types of stimuli encountered on a regular basis? Selective rearing occurs when an animal is raised in a particular environment containing only limited, specific types of stimuli. Because of this limited selection, the neurons come to respond more to those stimuli, and the response potential for other stimuli is reduced. This is neural plasticity, the shaping of neurons through perceptual experiences (Goldstein & Brockmole, 2017). As stimuli are limited, the neurons become more specialized for the stimuli the animal has been exposed to. Blakemore and Cooper explored this ‘use it or lose it’ effect of neural plasticity by limiting the stimuli kittens were exposed to. Kittens viewed either horizontal or vertical stripes for the first five months of life and were then tested to see the effects of the selective rearing. Results indicated that cats raised with horizontal stripes responded to horizontal but not vertical stimuli, and the same pattern occurred for cats raised with vertical stripes (Goldstein & Brockmole, 2017).
Sensory coding refers to how neurons represent different characteristics of the environment. When a specialized neuron responds to one specific stimulus, specificity coding has occurred. However, this idea is likely to be incorrect, because the brain would need a different neuron to perceive every different object, and neurons usually respond to more than one stimulus. It is more likely that a number of neurons are involved in representing a stimulus. Other forms of coding look at the different patterns of neuronal firing that can represent a particular stimulus. Population coding uses large groups of neurons, which can create a great number of different patterns, whereas sparse coding is in effect when a stimulus is represented by the firing of only a small group of neurons (Goldstein & Brockmole, 2017).
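The contrast between population and sparse coding can be illustrated with invented firing patterns. None of these numbers come from the text; they simply show that the same stimuli can be represented either by a broad pattern across many neurons or by strong firing in just a few.

```python
# Invented firing-rate patterns (spikes per second) across eight
# neurons, contrasting population coding with sparse coding.

# Population coding: each stimulus is a distinct pattern of activity
# spread across the whole group of neurons.
population_code = {
    "face_A": [12, 40, 33, 8, 25, 17, 30, 5],
    "face_B": [30, 5, 12, 40, 8, 33, 17, 25],
}

# Sparse coding: each stimulus drives strong firing in only a couple
# of neurons, with the rest nearly silent.
sparse_code = {
    "face_A": [0, 45, 0, 0, 0, 0, 38, 0],
    "face_B": [0, 0, 0, 42, 0, 0, 0, 36],
}

def active_fraction(pattern, threshold=1):
    """Fraction of neurons firing above a (hypothetical) threshold."""
    return sum(1 for r in pattern if r > threshold) / len(pattern)

print(active_fraction(population_code["face_A"]))  # → 1.0
print(active_fraction(sparse_code["face_A"]))      # → 0.25
```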
Neurons are also organized into streams, or pathways. The pathway leading from the striate cortex to the parietal lobe is called the dorsal pathway, and the pathway leading from the striate cortex to the temporal lobe is the ventral pathway. The ventral pathway identifies what a stimulus is, and the dorsal pathway identifies where a stimulus is located and whether or not it is stationary. The dorsal and ventral pathways serve different functions, but they are connected, and signals flow both up toward the parietal and temporal lobes and back. Some information is shared between them, since what an object is and where it is interact. The dorsal pathway also seems to be linked to actions directed at an object, including how an action is carried out (Goldstein & Brockmole, 2017).
While the pathways are connected, they do serve different functions. Along these lines, different areas of the cortex respond to different stimuli, a property called modularity. The specialized areas that process particular kinds of information are called modules. One stimulus for which such specialization exists is the face. Researchers have used fMRI brain imaging to identify areas of the brain whose neurons respond best to faces as distinguished from other objects. The main area of activity is the fusiform face area (FFA), located at the base of the brain below the IT cortex (Goldstein & Brockmole, 2017).
Now that we have looked at specific areas that specialize in faces, body parts, and environmental scenes, we need to understand that these specific areas do not exist in a vacuum. Other areas of the cortex, and the rest of the brain for that matter, are also involved in identification of these stimuli. This is called distributed representation, or activation in multiple different areas of the brain. This is important to know because while research continuously indicates areas of specialization, it also indicates that the activation is distributed to other areas of the brain at the same time (Goldstein & Brockmole, 2017).
So, why might this occur? Well for one, we discussed at the beginning of this lesson that processing does not occur in a straight line. The processing of the stimulus travels around to different areas of the brain. Additionally, a face is not just a face. Each face has different features and movements, all of which must be processed based on a multidimensional approach in different sections (Goldstein & Brockmole, 2017). Let’s relate it to our shopping trip. Think about making spaghetti and meatballs for dinner and shopping for the ingredients. Is it just spaghetti and meatballs? What is the sauce made up of? What about the meatballs?
Now, in addition to using the recipe, you have some understanding about what goes into spaghetti and meatballs stored in your memory. You remember perceptual experiences of cooking and eating spaghetti and meatballs previously. Next, we will look at how research has measured the relationship between memory and perception in the hippocampus, an area of the brain associated with memory formation and storage (Goldstein & Brockmole, 2017).
What would happen if you had your hippocampus removed on both sides of your brain? The following case study shows us. Henry Molaison (H.M.) had the hippocampus removed completely as doctors attempted to stop the epileptic seizures he was experiencing. The seizures were eliminated, but so was his ability to store experiences and form long-term memories. Other research showed that there is a connection between visual processes and the hippocampus that respond to our three areas again: faces, bodies and environment scenes (Goldstein & Brockmole, 2017). Where one neuron might respond to one face, another might respond to recognition of another known face. Thus, certain neurons would be responsible for certain categories or types of stimuli.
What has all of this taught us? Can we conclusively connect certain neurons to certain stimuli? Do we have a solid answer to the mind-body problem, or the question of how biological neural processes become our perceptual experience? Well, if we have seen anything with this lesson, it is the fact that each new discovery leads to more questions and potential exceptions to the explanations proposed. Lateral inhibition seems to make sense in some cases, but not all cases. Each person has their own individual experiences based on their own individual perspective and processing of information from a given stimulus.
If I cook a lot, my perception of that plate of spaghetti and meatballs might be a little more detailed as I note the spices mixed into the sauce. This goes along with the expertise hypothesis, which proposes that, through the experience-based plasticity we looked at earlier in this lesson, changes occur as individuals spend more time with a given stimulus (Goldstein & Brockmole, 2017). Of course, that does not mean the expertise hypothesis explains everything. Studies on faces and FFA neurons indicate that there is merit to the hypothesis, as experts in a field show increased neuronal responses to stimuli they know well through strong experience or expertise. Yet some researchers argue that this has more to do with neural connections that are already there than with strengthening and expanding new responses (Goldstein & Brockmole, 2017).
As we have seen, there are no straight lines in visual processing. A simple shopping trip could involve thousands of neurons acting together to transmit a clear and understandable picture to the brain. Different receptors provide different light perspectives that can cause neurons to fire more or less. We have also seen that this process is regulated by lateral inhibition, based on how the light patterns are distributed across the retina. Remarkably, more light over more of the retina increases lateral inhibition, which decreases the firing of the receptor neurons. This might seem as if it would hurt our visual processing, but in actuality it provides clarity.
Imagine going shopping and finding hundreds of spaghetti sauce options all crowded together. They look similar, and with so many of them, none stands out with the contrast you need to recognize the one you want.
We also looked, to some extent, at how the brain fills in blanks and processes the information to make it more understandable and cohesive. Think about filling in the information about the incident with the display falling over. You didn’t see the whole thing, but your brain had some understanding to help it fill in what you did not see and then process it into an experiential memory.
We learned that we have areas that specialize in visual processing of certain types of information, such as faces, and views of our environment. Parts of the brain are responsible for recognizing the faces of the people we encounter at the grocery store, the aisles and products, and the arm of someone reaching for spaghetti sauce. Of course, even though certain neurons specialize and correspond to certain areas of the brain, other neurons and areas of the brain are involved in a distributed representation of the stimulus. We do not process sensory information in just one dimension. We also tend to process information we have a lot of experience with in more detail with more neuronal action.
Of course, we also learned that for everything concluded in one research article, there are exceptions. We might have more questions than answers now that we are really looking at the details of the perceptual process, but that does not mean we have not learned valuable information about how we see what we see. In our next lesson we will look a little more at how we tend to organize what we see.
Carlson, N. R., Miller, H. L., Heth, D. S., Donahoe, J. W., & Martin, G. N. (2010). Psychology: The science of behavior (7th ed.). Boston, MA: Allyn & Bacon.
Goldstein, E. B., & Brockmole, J. R. (2017). Sensation and perception (10th ed.). Boston, MA: Cengage.
Grobstein, P. (2017). Tricks of the eye, wisdom of the brain. Retrieved from: http://serendip.brynmawr.edu/bb/latinhib.html
Image credits:
· “A brain neuron” (stock image 23684899_ML).
· “Along the boundary between adjacent shades of grey in the Mach bands illusion, lateral inhibition makes the darker area falsely appear even darker and the lighter area falsely appear even lighter” (Wikimedia Commons: Bandes_de_mach.PNG).
· “The Hermann Grid” (Wikimedia Commons: HermannGrid.gif).
· “On-center and off-center retinal ganglion cells respond oppositely to light in the center and surround of their receptive fields. A strong response means high-frequency firing, a weak response is firing at a low frequency, and no response means no action potential is fired” (Wikimedia Commons: Receptive_field.png).
· “Gabor filter” (Wikimedia Commons: Gabor_filter.png).
· “The dorsal stream (shown in green) and the ventral stream (shown in purple)” (Wikimedia Commons: Ventral-dorsal_streams.svg).
· “Computer-enhanced fMRI scan of a person who has been asked to look at faces. The image shows increased blood flow in the part of the visual cortex that recognizes faces” (Wikimedia Commons: Face_recognition.jpg).
· “Parts of the brain highlighted in different colors” by Allan Ajifo (Flickr, CC BY 2.0).
· “The parahippocampal gyrus is shown in blue” (Wikimedia Commons: Sobo_1909_630_-_Parahippocampal_gyrus.png).
· “The body and arms of a woman in the grocery store” (stock image 50632177_ML).
· “Henry Molaison, also known as H.M.” (Wikipedia: Henry_Gustav_1.jpg).
WEEK 4
Perception is a complex and varied system. Unlike machines, humans have overcome perceptual complexities that allow us to identify objects that are different but shaped the same. Yet we do not perceive a stimulus exactly as it exists. Perception is a dual process of sensory information processing and brain analysis. As we have seen in previous lessons, our sensory processing changes stimuli into signals the brain can translate, and as it does this, some aspects might be lost or altered. When the brain receives the messages and analyzes them based on previous experiences and knowledge, it again alters how the stimulus is perceived and stored in memory. It is also important to take attention into account: we perceive and process what we attend to, yet this is not without errors.
A local park is hosting a Little League baseball tournament. You have volunteered to help announce the teams and the plays during the games. This sounded easy until you started watching a game and trying to keep track of everything going on. At one point, you were watching a hitter get ready to swing at a pitch, but at the same moment, the runner on first base moved to steal second. You did not realize this was happening until you heard some parents cheering while others shouted a warning. You quickly turned to the runner and saw him reach the base at about the same time as the shortstop with the ball. Who got there first? It is a very important question, because it is the bottom of the ninth inning and this would be the last out. The hitting team needs one more run to win the game, and if this runner is out, the game is tied and they go into extra innings. Some parents are yelling “safe,” while others are yelling “out.” You feel sorry for the umpire who must make this call. You know what you think you saw, but how accurate was it? What went into your perception of that play?
Think about how an image is processed. The image may start on the retina, but the perceptual system is trying to identify it, which takes more than just an image; it also takes some form of understanding, and some images are not as clearly identified (and classified) as others. An image on the retina can be produced by many different objects in the environment. To understand the different possibilities, work backwards: trace the rays of light outward from the retina to the object in the environment that created the image. If this outward projection defines a rectangle, that rectangle could be the page of a book, a photograph, or a brick in a wall. This is called the inverse projection problem because it moves from the retina out to the object (Goldstein, 2017).
An image on the retina can appear one way, but when one’s viewpoint changes to a different angle, the same object can produce a different image. For example, the view of the play at second base would be different for the umpire looking at it from behind the plate than from the angle of the parents sitting in the stands, or from you in the announcer’s booth. Now, if the shortstop was positioned so that only his glove was visible, it might seem to those in the stands that he tagged the runner before the runner reached the base. Yet while the glove might have been extended toward the runner, it might not have been close enough to actually touch him, even though it looked that way when all you saw was the glove and the position of the runner. This is an example of erroneous perception. Most of the time, solving the inverse projection problem allows the visual system to prevent erroneous perception by pinpointing which object is responsible for a specific image on the retina (Goldstein, 2017).
Because we have looked at familiar objects from many different angles, we can identify them even though, at any moment in time, we see an object from only one angle. Objects change their appearance depending on the angle from which we view them. The ability to identify an object regardless of its angle is called viewpoint invariance, as with the shortstop’s possible glove position (Goldstein, 2017). People viewing from different angles see different aspects of the same stimulus.
According to modern perceptual psychologists, perception is influenced by physical and semantic regularities. Physical regularities are regularly occurring physical properties of the environment (Goldstein, 2017). Much of our physical environment is organized around vertical and horizontal orientations: trees and skyscrapers are vertical, while the ocean is horizontal. On our baseball field, the players are vertical and the field is horizontal. Semantic regularities, on the other hand, are “characteristics that are common in different types of scenes” (Goldstein, 2017, p. 109). When you think about a baseball field, you think about what is regularly in that scene: the baseball diamond, bases, baseball players, and so on. You visualize the objects usually expected within a certain setting.
In addition to physical and semantic regularities, inference plays a role in perception. In the nineteenth century, Hermann von Helmholtz proposed the theory of unconscious inference, which holds that our perceptions result from unconscious assumptions we make about the environment. We tend to complete pictures or fill in blanks based on what we expect the picture to be. Helmholtz also proposed the likelihood principle, which states that we perceive the object that is most likely to have caused the pattern of stimuli we receive. Basically, Helmholtz proposed that vision results from perceptual interpretation of the incomplete data provided by the visual senses (Goldstein, 2017). We base these inferences on assumptions we hold about our world, such as that light comes from above and that objects are usually viewed from above rather than below (People’s Health, n.d.).
The most recent idea supporting the importance of inference in perception is Bayesian inference. Building on Helmholtz’s ideas, the approach named for the eighteenth-century statistician Thomas Bayes uses statistical methods that take probabilities into account when determining what caused a pattern of stimulation. The estimate of the probability of an outcome is determined by the prior, our internal estimate of the probability of that outcome, and the likelihood, the degree of consistency between the available evidence and the outcome (Goldstein, 2017). What is the prior probability that the shortstop’s glove held the baseball and touched the runner before the runner touched the base? How consistent is the visual evidence with a tag? The answers shape how people perceive the play.
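The prior-times-likelihood reasoning described above can be sketched in a few lines of Python. This is an illustrative sketch, not code from Goldstein (2017); the `posterior` function and the probabilities assigned to the tag play are invented purely for the example.

```python
# Minimal sketch of Bayesian inference over two hypotheses about a percept.
# posterior ∝ prior × likelihood, normalized so the posteriors sum to 1.

def posterior(priors, likelihoods):
    """Combine prior beliefs with the likelihood of the evidence."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Hypotheses: did the shortstop's glove tag the runner in time?
# (Numbers are invented for illustration.)
priors = {"tagged": 0.5, "not_tagged": 0.5}        # belief before seeing the play
likelihoods = {"tagged": 0.2, "not_tagged": 0.8}   # how well the view fits each

print(posterior(priors, likelihoods))
# With equal priors, the percept is driven by whichever hypothesis
# the visual evidence fits best.
```

With equal priors the likelihood dominates; if the observer strongly expected a tag (a larger prior on "tagged"), the same evidence would yield a different percept.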
Filtering Techniques
We use a few different techniques to filter the information our senses encounter. Think about what happens when you walk down a street. Do your eyes remain fixed in one place, or are they constantly roving, taking in visual information from a variety of sources and selecting what to attend to? This process is called visual scanning, and it is one method of reducing what we try to process. A pitcher might do this during a baseball game, making sure everyone is where they need to be before throwing a pitch. When you scan a scene, for example when looking for a friend in a crowd, fixation lets you focus on particular persons or objects long enough to recognize them. As you scanned the crowd, you were making saccadic eye movements, rapid jumps of the eye from one fixation to the next. You were probably also using overt attention, which involves looking directly at the attended object. In addition, covert attention may occur; this is attention without moving the eyes, as when a pitcher watches a runner on first base out of the corner of his eye to make sure the runner stays put. We do not use one type of attention to the exclusion of the other; instead, we constantly shift back and forth between overt and covert attention as we interact with our environment (Goldstein, 2017).
Attentional Capture
Stimuli that are brightly colored or moving grab our attention because they stand out against a background of muted or stationary objects. These markedly different regions or objects have visual salience. The eye is drawn to such stimuli automatically rather than consciously because the visually salient stimulus grabs attention (Goldstein, 2017). This is known as attentional capture, which directs attention to something unusual.
Determinants of Attention
There are three major determinants of attention that depend on the observer. The first is scene schemas, an individual’s knowledge about what a typical scene contains; things that do not belong in a scene attract more attention. If a parent went out and stood on the mound with the pitcher, it would violate our understanding of what happens in a baseball game. The second is the observer’s interests and goals. For example, if you like horses and are driving on a country road, you are more likely to focus on horses out in the field than on houses or other scenery, because they are relevant to you. The third is task-related knowledge: we direct attention to objects based on their necessity at the moment (Goldstein, 2017). A pitcher will focus attention on the batter when it is time to pitch, but will then shift attention to what is happening with runners on the bases once the batter hits or misses the ball.
When different images are presented simultaneously to each eye, binocular rivalry arises seemingly automatically, yet attention appears to play a role as well, which makes the process partly effortful. This points to the importance of attention and of conscious focus on one stimulus or the other. One conclusion that has been drawn is that when we withdraw attention from the two competing images, the rivalry ceases and the signals from the two eyes are combined in the visual cortex (Zhang, Jamison, Engel, He, & He, 2011).
The perception of the human face is neurally organized in a unique manner. Faces and images that resemble faces are pervasive in the environment. Stimuli that resemble faces generate a response in the fusiform face area (FFA), while images of objects, such as houses, increase activity in the parahippocampal place area (PPA) and decrease activity in the FFA. Looking at human faces generates more rapid eye movements than looking at other objects. However, when a face is turned upside down or when the eyes are obscured, facial recognition is significantly delayed. Thus, both specialized modules in the brain and distributed processing help explain the perception of faces (Goldstein, 2017).
If we look at this from a different angle, we can measure brain responses and determine which stimuli generated them, a process called neural mind reading. This technique uses fMRI to measure brain activation for different tasks and stimuli and to predict, from the pattern of activity, what people are observing. The patterns are decoded using one of two methods: structural encoding, which looks at the relationship between activity patterns and the physical characteristics of the scene presented, and semantic encoding, which analyzes the relationship between activation and the meaning of the scene (Goldstein, 2017). Structural encoding would register the baseball field and the people on it, while semantic encoding would register the meaning of what is going on: a baseball game.
This is an MRI scan of a person looking at faces. The active part of the brain is the fusiform face area (FFA).
Now let’s consider how neurons and the brain interact in visual perception. Our eyes are in two different locations, so the images they receive are slightly different. Binocular rivalry reveals a connection between perception and neural responses; a neural response is necessary to make sense of what the two eyes perceive. Usually, the brain synthesizes the images from the two eyes into a single percept. However, when different images are presented to each eye in an experimental setting, the brain does not reconcile them into a single image; instead, perception continuously shifts between the right eye’s image and the left eye’s image (Goldstein, 2017). Studies have indicated that conscious attempts to hold attention on one eye’s image rather than shifting between the two are unsuccessful, which suggests that the switching process is automatic and not controlled by attention (Zhang et al., 2011).
The second drawing traces the path of where an observer’s eyes were focusing during saccadic eye movement.
Neural encoding reflects what we perceive, which in turn depends on what we pay attention to. Perception requires active interaction with stimuli; the brain does not passively receive images. Your perceptual system has a limited capacity for processing information. Five senses continuously encounter information, but we cannot possibly process everything we encounter in an ordinary day, let alone an eventful one. The visual system must therefore scan the environment, select relevant stimuli, and disregard the rest. This process of filtering and focusing is called attention, and fixating a stimulus on the fovea, the region of sharpest vision, helps us filter out superfluous information (Goldstein, 2017). But what do we consider superfluous? How do we decide what to attend to?
Change Blindness
Now, imagine that a new pitcher who is much taller than the previous pitcher takes the mound, yet you do not notice that it is a different pitcher. This is called change blindness, which occurs when an individual fails to notice that something has changed, even when the change is obvious (Goldstein, 2017). It is important to note the distinction between change and difference: the two pitchers are different, but the point is that the change of pitchers was not noticed. One aspect of change blindness seems to be a failure to detect motion (Rensink, 2009). Had you noticed the pitcher walk off the mound, you would have attended to the change.
Attention can influence appearance and physiological responding and plays a significant role in what we perceive. Attention enhances our ability to respond and perceive images in our environment. When we attend to a specific location, like first base, it is called spatial attention. Since information processing is more effective where you are attending, if something happens on first base while you are attending to it, the information will move through sensory processing quickly (Goldstein, 2017). Processing also speeds up when we attend to specific objects, like the baseball.
Physiological effects of attention include increased firing of the neurons activated by the attended stimulus as well as increased activation in specific areas of the brain. In addition, attention alters the relationship between activity in different areas of the brain. An experiment by Conrado Bosman and colleagues, which recorded local field potentials, demonstrated that attention increases synchronization in the communication between groups of neurons (Goldstein, 2017).
Attention not only allows us to better perceive images and scenes around us, it also supports binding, the integration of an object’s features, such as color, motion, and form, into a single coherent perception. The question of how an object’s features are bound together into a whole is called the binding problem, and several solutions have been proposed. Feature integration theory seeks to explain how we combine the individual features of an object. According to this theory, the first step in object processing is the preattentive stage, in which features are registered before we focus attention on the stimulus. We might note the color of the baseball field, the bases, and the uniforms of the players without putting it all together. The second stage is the focused attention stage, in which the separate features noted in the preattentive stage are combined into a conscious perception of the whole, such as the baseball field (Goldstein, 2017).
Attention and Perception
Is attention necessary for perception? Some research has shown that perception can occur without attention. For example, people briefly shown a picture of a forest can still identify the type of scene, demonstrating that scene perception can occur without attention. However, the debate over whether attention is required for perception swings back and forth between researchers who present studies demonstrating that perception can occur in the absence of attention and those who point out that attention is needed for perception of objects, scenes, and faces (Goldstein, 2017).
Perceptual Capacity
Consistent with the earlier point that our perceptual processes have a limited capacity for processing information, the load theory of attention proposes that attentional performance is constrained by perceptual limits. An individual’s perceptual capacity is the total amount of perceptual processing available for carrying out tasks. Perceptual load is the amount of that capacity a particular task requires. Easier tasks that do not require much perceptual capacity are called low-load tasks, and tasks that use more perceptual capacity are called high-load tasks (Goldstein, 2017).
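The capacity-and-load idea above can be sketched as a toy model in Python. This is a hypothetical illustration, not a model from Goldstein (2017): the capacity value, the function names, and the load numbers are all invented. The point it captures is that whatever capacity a task leaves unused spills over to task-irrelevant stimuli, which is why distraction is more likely during low-load tasks.

```python
# Toy sketch of load theory: a task consumes part of a fixed perceptual
# capacity, and leftover capacity spills over to distractors.
# All numbers and names are invented for illustration.

CAPACITY = 10  # arbitrary units of perceptual capacity

def leftover_capacity(task_load):
    """Capacity remaining after the task takes its share."""
    return max(CAPACITY - task_load, 0)

def distractors_processed(task_load):
    """Distractors get processed only if some capacity is left over."""
    return leftover_capacity(task_load) > 0

print(distractors_processed(3))   # low-load task: spare capacity, distraction likely
print(distractors_processed(10))  # high-load task: no spare capacity
```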
In a picture-find book, you might be asked to find the blue book.
What happens when we try to attend to more than one stimulus? Researchers have conducted divided attention studies to see what individuals do when they attempt multiple tasks at the same time. These studies have demonstrated that early in the perceptual process, some features exist independently of one another. When subjects were asked to divide their attention among objects, their inability to focus attention on the shapes presented resulted in illusory conjunctions, in which features from different stimuli are incorrectly combined, such as reporting a red triangle after being shown a red circle and a green triangle. These incorrect combinations can occur even when the stimuli differ greatly in size and shape.
While illusory conjunctions and reduced processing speed affect everyone under divided attention, the effects are more pronounced in people with neurological deficits. More severe binding failures are observed in individuals with Balint’s syndrome, who are unable to focus and shift attention when presented with multiple stimuli (Goldstein, 2017).
Visual search is another avenue for studying the role of attention in binding. Visual search occurs any time we look for a specific stimulus amid other stimuli. If you have ever looked through a picture-find book, you have used visual search to locate the objects you were supposed to find. One type of visual search, called conjunction search, involves scanning a display for a target defined by a combination of two or more features, such as a particular color and a particular shape. Conjunction search is harder for individuals with neural deficits because it requires perceiving more than a single feature and uses both bottom-up and top-down processing (Goldstein, 2017).
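The contrast between searching by one feature and searching by a conjunction of features can be sketched in Python. This is a hypothetical illustration: the `feature_search` and `conjunction_search` helpers and the display items are invented, and real feature search is parallel rather than an item-by-item scan; the sketch only shows why a conjunction target requires checking more than one feature per item.

```python
# Each display item is a (color, shape) pair, invented for illustration.

def feature_search(items, color):
    """Find indices of items matching a single feature (color)."""
    return [i for i, (c, _) in enumerate(items) if c == color]

def conjunction_search(items, color, shape):
    """Find indices of items matching a combination of features."""
    return [i for i, (c, s) in enumerate(items) if c == color and s == shape]

display = [("red", "circle"), ("green", "circle"),
           ("red", "square"), ("green", "square")]

print(feature_search(display, "red"))                # [0, 2]
print(conjunction_search(display, "red", "square"))  # [2]
```

In the feature case the target pops out among items sharing no feature with it; in the conjunction case every red item and every square is a partial match, so each must be checked on both features.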
Let’s go back to the baseball game you are announcing. You are so caught up in the previous play that, as you are calling for the pitcher to return to the mound, you fail to notice that the pitcher is already on the mound. This is called inattentional blindness, which occurs when something is missed because attention is not directed toward it, even though the stimulus is in plain sight (Goldstein, 2017). When you are caught up in thinking about what happened at second base, your effective visual field narrows around your focus, so highly visible events can occur and still go unnoticed even though they are within your line of sight (Rensink, 2009).
Perceptual processes are multidimensional. Perception is not simply a matter of seeing something and identifying it based on what you see. Information is transformed as it is received by the visual senses and moves through to the brain. There are certain ways we tend to organize the information we attend to, and certain aspects of a stimulus we tend to focus on. We direct our attention based on our interests, on something that stands out, or on something we need to focus on, and of course that does not mean we always do so correctly. Think of everything you do each day. We do so many things without really attending to them, like brushing our teeth or taking a shower, which leaves plenty of room for error. In our baseball game, we never did figure out whether the runner was safe at second base, but we did look at quite a few things that might have been important in determining it.
Goldstein, E.B. (2017). Sensation and Perception. (10th ed). California: Cengage.
Griggs, R. A. (2016). Psychology: A concise introduction (5th ed.). New York, NY: Worth Publishers.
People’s Health. (n.d.). Unconscious inference of visual perception. Retrieved from: http://www.peoples-health.com/unconscious-inference-visual-perception.htm
Rensink, R. A. (2009). Attention: Change blindness and inattentional blindness. Retrieved from: http://www2.psych.ubc.ca/~rensink/publications/download/EncycConsc-CB-IB-rr.pdf
Zhang, P., Jamison, K., Engel, S., He, B., & He, S. (2011). Binocular rivalry requires visual attention. Neuron, 71(2), 362-369. https://doi.org/10.1016/j.neuron.2011.05.035
“A baseball player sliding into a base with the opponent trying to tag him out” by https://pixabay.com/en/baseball-player-tag-second-base-1613012/.
“An MRI scan of a person looking at faces, with hot spots in the FFA area” by https://en.wikipedia.org/wiki/Fusiform_face_area#/media/File:Face_recognition.jpg.
“A face with lines drawn on it tracing the area where the eye is looking” by https://en.wikipedia.org/wiki/Saccade#/media/File:Szakkad.jpg.
“A drawing of a woodland scene” by 65227264.