Chapter 8 Thought and Language
8.1 The Organization of Knowledge
    Concepts and Categories
    Working the Scientific Literacy Model: Priming and Semantic Networks
    Module 8.1a Quiz
    Memory, Culture, and Categories
    Module 8.1b Quiz
    Module 8.1 Summary
8.2 Problem Solving, Judgment, and Decision Making
    Defining and Solving Problems
    Module 8.2a Quiz
    Judgment and Decision Making
    Working the Scientific Literacy Model: Maximizing and Satisficing in Complex Decisions
    Module 8.2b Quiz
    Module 8.2 Summary
8.3 Language and Communication
    What Is Language?
    Module 8.3a Quiz
    The Development of Language
    Module 8.3b Quiz
    Genes, Evolution, and Language
    Working the Scientific Literacy Model: Genes and Language
    Module 8.3c Quiz
    Module 8.3 Summary
Module 8.1 The Organization of Knowledge
Learning Objectives
8.1a Know . . . the key terminology associated with concepts and categories.
8.1b Understand . . . theories of how people organize their knowledge about the world.
8.1c Understand . . . how experience and culture can shape the way we organize our knowledge.
8.1d Apply . . . your knowledge to identify prototypical examples.
8.1e Analyze . . . the claim that the language we speak determines how we think.
When Edward regained consciousness in the hospital, his family immediately noticed that something was wrong. The most obvious problem was that he had difficulty recognizing faces, a relatively common disorder known as prosopagnosia. As the doctors performed more testing, it became apparent that Edward had other cognitive problems as well. Edward had difficulty recognizing objects—but not all objects. For instance, he couldn't distinguish between different vegetables even though he could use language to describe their appearance. His ability to recognize most other types of objects seemed normal.
Cases like Edward's may seem unrelated to your own life. However, for specific categories of visual information to be lost, those categories must have been stored in similar areas of the brain before the damage occurred. These cases therefore give us some insight into how the brain stores and organizes the information that we have encoded into memory.
Focus Questions
1. How do people form easily recognizable categories from complex information?
2. How does culture influence the ways in which we categorize information?
Each of us has amassed a tremendous amount of knowledge in the course of our lifetime. Indeed, it is impossible to put a number on just how many facts each of us knows. Imagine trying to record everything you ever learned about the world—how many books could you fill? Instead of asking how much we know, psychologists are interested in how we keep track of it all. In this module, we will explore what those processes are like and how they work. We will start by learning about the key terminology before presenting theories about how knowledge is stored over the long term.
Concepts and Categories
A concept is the mental representation of an object, event, or idea. Although it seems as though different concepts should be distinct from each other, there are actually very few independent concepts. You do not have just one concept
for chair, one for table, and one for sofa. Instead, each of these concepts can be divided into smaller groups with more precise labels, such as arm chair or coffee table. Similarly, all of these items can be lumped together under the single label, furniture. Psychologists use the term categories to refer to these clusters of interrelated concepts. We form these groups using a process called categorization.
Classical Categories: Definitions and Rules
Categorization is difficult to define in that it involves elements of perception
(Chapter 4), memory (Chapter 7), and "higher-order" processes like decision making (Module 8.2) and language (Module 8.3). The earliest approach to the study of categories is referred to as classical categorization; this theory claims that objects or events are categorized according to a certain set of rules or by a specific set of features—something similar to a dictionary definition (Lakoff & Johnson, 1999; Rouder & Ratcliff, 2006). Definitions do a fine job of explaining how people categorize items, at least in certain situations. For example, a triangle can be defined as "a figure
(usually, a plane rectilinear figure) having three angles and three sides” (Oxford English Dictionary, 2011). Using this definition, you should find it easy to categorize the triangles in Figure 8.1 .
Figure 8.1 Using the Definition of a Triangle to Categorize Shapes
Classical categorization does not tell the full story of how categorization works, however. We use a variety of cognitive processes in determining which objects fit
which category. One of the major problems we confront in this process is graded membership —the observation that some concepts appear to make better category members than others. For example, see if the definition in Table 8.1
fits your definition of bird and then categorize the items in the table.
Table 8.1 Categorizing Objects According to the Definition of Bird
Definition: “Any of the class Aves of warm-blooded, egg-laying, feathered vertebrates
with forelimbs modified to form wings.” (American Heritage Dictionary, 2016)
Now categorize a set of items by answering yes or no regarding the truth of the
following sentences.
1. A sparrow is a bird.
2. An apple is a bird.
3. A penguin is a bird.
Ideally, you said yes to the sparrow and penguin, and no to the apple. But did you notice any difference in how you responded to the sparrow and penguin? Psychologists have researched classical categorization using a behavioural
measure known as the sentence-verification technique, in which volunteers wait for a sentence to appear in front of them on a computer screen and respond as quickly as they can with a yes or no answer to statements such as “A sparrow is a bird,” or, “A penguin is a bird.” The choice the participant makes, as well as her reaction time to respond, is measured by the researcher. Sentence-verification shows us that some members of a category are recognized faster than others
(Olson et al., 2004; Rosch & Mervis, 1975). In other words, subjects almost always answer “yes” faster to sparrow than to penguin. This seems to go against a classical, rule-based categorization system because both sparrows and penguins are equally good fits for the definition, but sparrows are somehow perceived as being more bird-like than penguins. Thus, a modern approach to categorization must explain how “best examples” influence how we categorize items.
Prototypes: Categorization by Comparison
When you hear the word bird, what mental image comes to mind? Does it resemble an ostrich? Or is your image closer to a robin, sparrow, or blue jay? The likely image that comes to mind when you imagine a bird is what
psychologists call a prototype (see Figure 8.2 ). Prototypes are mental representations of an average category member (Rosch, 1973). If you took an average of the three most familiar birds, you would get a prototypical bird.
Figure 8.2 A Prototypical Bird Left: chatursunil/Shutterstock; centre: Al Mueller/Shutterstock; right: Leo/Shutterstock
Prototypes allow for classification by resemblance. When you encounter a little creature you have never seen before, its basic shape—maybe just its silhouette —can be compared to your prototype of a bird. A match will then be made and you can classify the creature as a bird. Notice how different this process is from classical categorization: No rules or definitions are involved, just a set of similarities in overall shape and function.
The main advantage of prototypes is that they help explain why some category members make better examples than others. Ostriches are birds just as much as blue jays are, but they do not resemble the rest of the family very well. In other words, blue jays are closer to the prototypical bird.
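To make classification by resemblance concrete, here is a minimal Python sketch (not from the text; the features and numbers are invented purely for illustration). It builds a prototype by averaging a few remembered birds, each described as a simple feature vector, and then shows that a blue jay matches the prototype more closely than an ostrich does.

```python
# Illustrative sketch of prototype-based categorization.
# The features and numeric values are invented for demonstration only.

def prototype(examples):
    """Average the feature vectors of known category members."""
    n = len(examples)
    return {k: sum(e[k] for e in examples) / n for k in examples[0]}

def similarity(item, proto):
    """Negative summed distance: higher means more prototype-like."""
    return -sum(abs(item[k] - proto[k]) for k in proto)

# Remembered birds, described on a 0-1 scale for a few features.
birds = [
    {"size": 0.2, "flies": 1.0, "sings": 1.0},   # robin-like
    {"size": 0.2, "flies": 1.0, "sings": 1.0},   # sparrow-like
    {"size": 0.3, "flies": 1.0, "sings": 0.8},   # blue-jay-like
]
bird_proto = prototype(birds)

blue_jay = {"size": 0.3, "flies": 1.0, "sings": 0.8}
ostrich  = {"size": 1.0, "flies": 0.0, "sings": 0.0}

# Both are birds, but the ostrich resembles the prototype far less,
# which mirrors the graded-membership effect described above.
print(similarity(blue_jay, bird_proto) > similarity(ostrich, bird_proto))  # True
```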
Now that you have read about categories based on a set of rules or characteristics (classical categories) and as a general comparison based on resemblances (prototypes), you might wonder which approach is correct.
Research says that we can follow either approach—the choice really depends on how complicated a category or a specific example might be. If there are a few major distinctions between items, we use resemblance; if there are
complications, we switch to rules (Feldman, 2003; Rouder & Ratcliff, 2004, 2006). For example, in the case of seeing a bat dart by, your first impression might be "bird" because it resembles a bird. But if you investigated further, you would see that a bat fits the classical description of a mammal, not a bird: it has hair, gives live birth rather than laying eggs, and so on.
Networks and Hierarchies
Classical categorization and prototypes only explain part of how we organize information. Each concept that we learn about has similarities to other concepts. A sparrow has physical similarities to a bat (e.g., size and shape); a sparrow will have even more in common with a robin because they are both birds (e.g., size, shape, laying eggs, etc.). These connections among ideas can be represented in
a network diagram known as a semantic network , an interconnected set of nodes (or concepts) and the links that join them to form a category (see Figure 8.3 ). Nodes are circles that represent concepts, and links connect them together to represent the structure of a category as well as the relationships
among different categories (Collins & Loftus, 1975). In these networks, similar items have more, and stronger, connections than unrelated items.
Figure 8.3 A Semantic Network Diagram for the Category “Animal” The nodes include the basic-level categories, Bird and Fish. Another node represents the broader category of Animal, while the lowest three nodes represent the more specific categories of Robin, Emu, and Trout. Source: Based on Collins, A. M., & Quillian, M. R. (1969). Retrieval time from semantic memory. Journal of Verbal Learning
and Verbal Behavior, 8, 240–248.
Something you may notice about Figure 8.3 is that it is arranged in a hierarchy—that is, it consists of a structure moving from general to very specific. This organization is important because different levels of the category are useful in different situations. The most frequently used level, in both thought and
language, is the basic-level category, which is located in the middle row of the diagram (where birds and fish are) (Johnson & Mervis, 1997; Rosch et al., 1976). A number of qualities make the basic-level category unique:
Basic-level categories are the terms used most often in conversation. They are the easiest to pronounce. They are the level at which prototypes exist. They are the level at which most thinking occurs.
To get a sense for how different category levels influence our thinking, we can compare sentences referring to an object at different levels. Consider what would happen if someone approached you and made any one of the following statements:
There’s an animal in your yard. There’s a bird in your yard. There’s a robin in your yard.
The second sentence—”There’s a bird in your yard”—is probably the one you are most likely to hear, and it makes reference to a basic level of a category
(birds). Many people would respond that the choice of animal as a label indicates confusion, claiming that if the speaker knew it was a bird, he should have said so; otherwise, it sounds like he is trying to figure out which kind of animal he is
looking at. Indeed, superordinate categories like “animal” are generally used when someone is uncertain about an object or when he or she wishes to group together a number of different examples from the basic-level category (e.g.,
birds, cats, dogs). In contrast, when the speaker identifies a subordinate-level category like robin, it suggests that there is something special about this particular type of bird. It may also indicate that the speaker has expert-level knowledge of the basic category and that using the more specific level is necessary to get her point across in the intended way.
To demonstrate how semantic networks help explain the way we organize knowledge, try this quick test based on the animal network in Figure 8.3. If you were asked to react to dozens of sentences, and the following two sentences were included among them, which do you think you would mark as "true" the fastest?
A robin is a bird. A robin is an animal.
As you can see in the network diagram, robin and bird are closer together; in fact, to connect robin to animal, you must first go through bird. Sure enough,
people regard the sentence “A robin is a bird” as a true statement faster than “A robin is an animal.”
Now consider another set of examples. Which trait do you think you would verify faster?
A robin has wings. A robin eats.
Using the connecting lines as we did before, we can predict that it would be the first statement about wings. As research shows, our guess would be correct. These results demonstrate that how concepts are arranged in semantic networks can influence how quickly we can access information about them.
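A semantic network like the one in Figure 8.3 can be sketched as a simple graph. The toy Python example below is our own illustration, not the researchers' model; it counts the links separating two concepts, and fewer links corresponds to the faster verification responses described above.

```python
# Toy semantic network based loosely on Figure 8.3; the links are illustrative only.
from collections import deque

links = {
    "animal": {"bird", "fish", "eats", "breathes"},
    "bird":   {"animal", "robin", "emu", "wings", "feathers"},
    "fish":   {"animal", "trout", "gills"},
    "robin":  {"bird", "red breast"},
    "emu":    {"bird"},
    "trout":  {"fish"},
}

def link_distance(start, goal):
    """Breadth-first search: the number of links separating two nodes."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for neighbour in links.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None

# "A robin is a bird" spans one link; "A robin is an animal" spans two,
# which parallels the slower verification times reported in the text.
print(link_distance("robin", "bird"))    # 1
print(link_distance("robin", "animal"))  # 2
```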
Working the Scientific Literacy Model Priming and Semantic Networks
The thousands of concepts and categories in long-term memory are not isolated, but connected in a number of ways. What are the consequences of forming all the connections in semantic networks?
What do we know about semantic networks? In your daily life, you notice the connections within semantic networks anytime you encounter one aspect of a category and other related concepts seem to come to mind. Hearing the word “fruit,” for example, might lead you to think of an apple, and the apple may lead you to think of a computer, which may lead you to think of a paper that is due tomorrow. These associations
illustrate the concept of priming —the activation of individual concepts in long-term memory. Interestingly, research has shown that priming can also occur without your awareness; “fruit” may not have brought the image of a watermelon to mind, but the
concept of a watermelon may have been primed nonetheless.
How can science explain priming effects? Psychologists can test for priming through reaction time measurements, such as those in the sentence verification tasks
discussed earlier or through a method called the lexical decision task. With the lexical decision method, a volunteer sits at a computer and stares at a focal point. Next, a string of letters flashes on the screen. The volunteer responds yes or no as quickly as possible to indicate whether the letters spell a word
(see Figure 8.4 ). Using this method, a volunteer should respond faster that “apple” is a word if it follows the word “fruit”
(which is semantically related) than if it follows the word “bus” (which is not semantically related).
Figure 8.4 A Lexical Decision Task
In a lexical decision task, an individual watches a computer screen as strings of letters are presented. The participant must respond as quickly as possible to indicate whether the letters spell a word (e.g., “desk”) or are a non-word (e.g., “sekd”).
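As a rough sketch of how a priming effect is scored in a lexical decision task, the Python example below compares average response times for word targets that followed a related prime with those that followed an unrelated prime. The trials and millisecond values are hypothetical; real experiments use many more trials and formal statistics.

```python
# Hypothetical lexical decision data: (prime, target, response time in ms).
trials = [
    ("fruit", "apple", 520), ("bus",   "apple", 585),
    ("fruit", "pear",  510), ("bus",   "pear",  570),
    ("doctor", "nurse", 505), ("chair", "nurse", 560),
]

# Prime-target pairs we treat as semantically related (an assumption for this sketch).
related = {("fruit", "apple"), ("fruit", "pear"), ("doctor", "nurse")}
unrelated = {(p, t) for p, t, _ in trials} - related

def mean_rt(pairs):
    """Mean response time for trials whose (prime, target) pair is in `pairs`."""
    rts = [rt for prime, target, rt in trials if (prime, target) in pairs]
    return sum(rts) / len(rts)

priming_effect = mean_rt(unrelated) - mean_rt(related)

# A positive difference means related primes sped up word recognition.
print(f"Priming effect: {priming_effect:.0f} ms")  # Priming effect: 60 ms
```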
Given that lexical decision tasks are highly controlled experiments, we might wonder if they have any impact outside of the laboratory. One test by Jennifer Coane suggests that priming
does occur in everyday life (Coane & Balota, 2009). Coane’s research team invited volunteers to participate in lexical decision
tasks about holidays at different times of the year, choosing words that matched the holiday season at the time of testing. Sure enough, without any laboratory priming, words such as "nutcracker" and "reindeer" showed priming effects in December, when they were congruent (or "in season"), relative to other times of the year (see Figure 8.5). Similarly, words like "leprechaun" and "shamrock" showed a priming effect during the month of March. Because the researchers did not instigate the priming, it must have been the holiday spirit at work: Decorations and advertisements may serve as constant primes.
Figure 8.5 Priming Affects the Speed of Responses on a Lexical Decision Task
Average response times were faster when the holiday-themed
words were congruent (in season), as represented by the blue bars. This finding is consistent for both the first half and the second half of the list of words.
Source: Republished with permission of Springer, from Priming the Holiday Spirit: Persistent
Activation due to Extraexperimental Experiences Fig. 1, Pg.1126, Psychonomic Bulletin & Review, 16
(6), 1124–1128, 2009. Permission conveyed through Copyright Clearance Center, Inc.
Can we critically evaluate this information? Priming influences thought and behaviour, but is certainly not all-powerful. In fact, it can be very weak at times. Because the strength of priming can vary a great deal, some published experiments have been very difficult to replicate—an important criterion of quality research. So, while most psychologists agree that priming is an important area of research, there have been very open debates at academic conferences and in peer-reviewed journals about the best way to conduct the research
and how to interpret the results (Cesario, 2014; Klatzky & Creswell, 2014).
Why is this relevant? Advertisers know all too well that priming is more than just a curiosity; it can be used in a controlled way to promote specific behaviours. For example, cigarette advertising is not allowed on television stations, but large tobacco companies can sponsor anti-smoking ads. Why would a company advertise against its own product? Researchers brought a group of smokers into the lab to complete a study on television programming and subtly included a specific type of advertisement between segments (they did not reveal the true purpose of the study until after it was completed). Participants were four times as likely to light up after watching a tobacco-company anti-smoking ad as they were after seeing a control ad about supporting a youth sports league
(Harris et al., 2013). It would appear that while the verbal message is “don’t smoke,” the images actually prime the behaviour. Fortunately, more healthful behaviours have been promoted through priming; for example, carefully designed
primes have been shown to reduce mindless snacking (Papies &
Hamstra, 2010) and binge-drinking in university students (Goode et al., 2014).
Module 8.1a Quiz: Concepts and Categories

Know . . .
1. A _______ is a mental representation of an average member of a category.
A. subordinate-level category
B. prototype
C. similarity principle
D. network

2. _______ refer to mental representations of objects, events, or ideas.
A. Categories
B. Concepts
C. Primings
D. Networks

Understand . . .
3. Classical categorization approaches do not account for _______, a type of categorization that notes some items make better category members than others.
A. basic-level categorization
B. prototyping
C. priming
D. graded membership
Memory, Culture, and Categories
In the first part of this module, we examined how we group together concepts to form categories. However, it is important to remember that these processes are based, at least in part, on our experiences. In this section of the module, we examine the role of experience—both in terms of memory processes and cultural influences—on our ability to organize our vast stores of information.
Categorization and Experience
People integrate new stimuli into categories based on what they have
experienced before (Jacoby & Brooks, 1984). When we encounter a new item, we select its category by retrieving the item(s) that are most similar to it from
memory (Brooks, 1978). Normally, these procedures lead to fast and accurate categorization. If you see an animal with wings and a beak, you can easily retrieve from memory a bird that you previously saw; doing so will lead you to infer that this new object is a bird, even if it is a type of bird that you might not have encountered before.
However, there are also times when our reliance on previously experienced items can lead us astray. In a series of studies with medical students and practising physicians, Geoffrey Norman and colleagues at McMaster University found that recent exposure to an example from one category can bias how
people diagnose new cases (Leblanc et al., 2001; Norman, Brooks, et al., 1989; Norman, Rosenthal, et al., 1989). In one experiment, medical students were taught to diagnose different skin conditions using written rules as well as photographs of these diseases. Some of the photographs were typical examples of that disorder whereas other photographs were unusual cases that resembled other disorders. When tested later, the participants were more likely to rely on the previously viewed photographs than they were on the rules (a fact that would surprise most medical schools); in fact, the unusual photographs viewed during training even led to wrong diagnoses for test items that were textbook examples
of that disorder (Allen et al., 1992)! This shows the power that our memory can have on how we take in and organize new information. As an aside, expert physicians were accurate over 90% of the time in most studies, so you can still trust your doctor.
Categories, Memory, and the Brain
The fact that our ability to make categorical decisions is influenced by previous experiences tells us that this process involves memory. Studies of neurological patients like the man discussed at the beginning of this module provide a unique perspective on how these memories are organized in the brain. Some patients with damage to the temporal lobes have trouble identifying objects such as pictures of animals or vegetables, despite being able to describe the different shapes that make up those objects (i.e., they can still see). The
fact that these deficits were for particular categories of objects was intriguing, as it suggested that damaging certain parts of the brain could impair the ability to recognize some categories while leaving others unaffected (Warrington &
McCarthy, 1983; Warrington & Shallice, 1979). Because these problems were isolated to certain categories, these patients were diagnosed as having a
disorder known as category-specific visual agnosia (or CSVA).
Early attempts to find a pattern in these patients’ deficits focused on the
distinction between living and non-living categories (see Figure 8.6 ). Several patients with CSVA had difficulties identifying fruits, vegetables, and/or animals but were still able to accurately identify members of categories such as tools and
furniture (Arguin et al., 1996; Bunn et al., 1998). However, although CSVA has been observed in a number of patients, researchers also noted that it would be
physically impossible for our brains to have specialized regions for every category we have encountered. There simply isn’t enough space for this to occur. Instead, they proposed that evolutionary pressures led to the development
of specialized circuits in the brain for a small group of categories that were important for our survival. These categories included animals, fruits and
vegetables, members of our own species, and possibly tools (Caramazza & Mahon, 2003). Few, if any, other categories involve such specialized memory storage. This theory can explain most, but not all, of the problems observed in the patients tested thus far. It is also in agreement with brain-imaging studies showing that different parts of the temporal lobes are active when people view
items from different categories including animals, tools, and people (Martin et
al., 1996). Thus, although different people will vary in terms of the exact location that these categories are stored, it does appear that some categories are stored separately from others.
Figure 8.6 Naming Errors for a CSVA Patient Patients with CSVA have problems identifying members of specific categories. When asked to identify the object depicted by different line drawings, patient E. W. showed a marked impairment for the recognition of animals. Her ability to name items from other categories demonstrated that her overall perceptual abilities were preserved. Source: Based on data from Caramazza, A., & Mahon, B. Z. (2003). The organization of conceptual knowledge: the evidence
from category-specific semantic deficits. Trends in Cognitive Sciences, 7 (8), 354–361.
Biopsychosocial Perspectives: Culture and Categorical Thinking

Animals, relatives, household appliances, colours, and other entities all fall into categories. However, people from different cultures might differ in how they categorize such objects. In North America, cows are sometimes referred to as "livestock" or "food animals," whereas in India, where cows are regarded as sacred, neither category would apply.
In addition, how objects are related to each other differs considerably across cultures. Which of the two photos in Figure 8.7 a do you think someone from North America took? Researchers asked both American and Japanese university students to take a picture of someone, from whatever angle or degree of focus they chose. American students were more likely to take close-up pictures, whereas Japanese students
typically included surrounding objects (Nisbett & Masuda, 2003). When asked which two objects go together in Figure 8.7 b, American college students tend to group cows with chickens—because both are animals. In contrast, Japanese students coupled cows with grass, because grass
is what cows eat (Gutchess et al., 2010; Nisbett & Masuda, 2003). These examples demonstrate cross-cultural differences in perceiving how objects are related to their environments. People raised in North America tend to focus on a single characteristic, whereas Japanese people tend to view objects in relation to their environment.
Figure 8.7 Your Culture and Your Point of View (a) Which of these two pictures do you think a North American would be more likely to take? (b) Which two go together? Top photos: Blend Images/Shutterstock
Source, bottom: Adapted from Nisbett, R. E., & Masuda, T. (2003). Culture and point of view. Proceedings of the
National Academy of Sciences, 100 (19), 11163–11170. Copyright © 2003. Reprinted by permission of National
Academy of Sciences.
Researchers have even found differences in brain function when people
of different cultural backgrounds view and categorize objects (Park & Huang, 2010). Figure 8.8 reveals differences in brain activity when
Westerners and East Asians view photos of objects, such as an animal, against a background of grass and trees. Areas of the brain devoted to processing both objects (lateral parts of the occipital lobes) and background (the parahippocampal gyrus, an area underneath the hippocampus) become activated when Westerners view these photos, whereas only areas devoted to background processes become activated
in East Asians (Goh et al., 2007). These findings demonstrate that a complete understanding of how humans categorize objects requires application of the biopsychosocial model.
Figure 8.8 Brain Activity Varies by Culture Brain regions that are involved in object recognition and processing are activated differently in people from Western and Eastern cultures. Brain regions that are involved in processing individual objects are more highly activated when Westerners view focal objects against background scenery, whereas people from East Asian countries appear to attend to background scenery more closely than focal objects. Source: Park, D. C. & Huang, C.-M. (2010). Culture wires the brain: A cognitive neuroscience perspective.
Perspectives on Psychological Science, 5 (4), 391–400. Reprinted by permission of SAGE Publications.
Myths in Mind How Many Words for Snow?
Cultural differences in how people think and categorize items have led to
the idea of linguistic relativity (or the Whorfian hypothesis)—the theory that the language we use determines how we understand (and categorize) the world. One often-cited example is about the Inuit in Canada’s Arctic regions, who are thought to have many words for snow,
each with a different meaning. For example, aput means snow that is on the ground, and gana means falling snow. This observation, made by anthropologist Franz Boas more than a century ago, was often repeated and exaggerated, with claims that Inuit people had dozens of words for different types of snow. With so many words for snow, it was thought that perhaps the Inuit people perceive snow differently than someone who does not live near it almost year-round. Scholars used the example to argue that language determines how people categorize the world.
Research tells us that we must be careful not to over-generalize the influence of language on categorization. The reality is that the Inuit seem to categorize snow the same way a person from the rest of Canada does. Someone from balmy Winnipeg can tell the difference between falling snow, blowing snow, sticky snow, drifting snow, and "oh-sweet-God-it's-snowing-in-May" snow just as well as an Inuit who lives with snow for
most of the year (Martin, 1986). Therefore, we see that the linguistic relativity hypothesis is incorrect in this case: The difference in vocabulary for snow does not lead to differences in perception.
Categories and Culture
The human brain is wired to perceive similarities and differences and, as we learned from prototypes, the end result of this tendency is to categorize items based on these comparisons as well as on our previous experiences with members of different categories. However, our natural inclination to do so interacts with our cultural experiences; how we categorize objects depends to a great extent on what we have learned about those objects from others in our culture.
Various researchers have explored the relationships between culture and categorization by studying basic-level categories among people from different cultural backgrounds. For example, researchers have asked individuals from traditional villages in Central America to identify a variety of plants and animals that are extremely relevant to their diet, medicine, safety, and other aspects of their lives. Not surprisingly, these individuals referred to plants and animals at a
more specific level than North American university students would (Bailenson et al., 2002; Berlin, 1974). Thus, categorization is based—at least to some extent—on cultural learning. Psychologists have also discovered that cultural factors influence not just how we categorize individual objects, but also how objects in our world relate to one another.
Although culture and memory both clearly affect how we describe and categorize our world, we do need to remember to critically analyze the results of these studies. Specifically, as our world becomes more Westernized, it is possible—even likely—that these cultural differences will decrease. These results, then, tell
us about cultural differences at a given time. As you saw in the Myths in Mind feature above, we should also exercise caution when reading about another form of cultural influences on categorization—linguistic relativity.
Module 8.1b Quiz: Memory, Culture, and Categories

Know . . .
1. The idea that our language influences how we understand the world is referred to as _______.
A. the context specificity hypothesis
B. sentence verification
C. the Whorfian hypothesis
D. priming

Understand . . .
2. A neurologist noticed that a patient with temporal-lobe damage seemed to have problems naming specific categories of objects. Based upon what you read in this module, which classes of objects are most likely to be affected by this damage?
A. Animals and tools
B. Household objects that he would use quite frequently
C. Fruits and vegetables
D. Related items such as animals and hunting weapons

Apply . . .
3. Janice, a medical school student, looked at her grandmother's hospital chart. Although her grandmother appeared to have problems with her intestines, Janice thought the pattern of the lab results resembled those of a patient with lupus she had seen in the clinic earlier that week. Janice is showing an example of
A. how memory for a previous example can influence categorization decisions.
B. how people rely on prototypes to categorize objects and events.
C. how we rely on a set of rules to categorize objects.
D. how we are able to quickly categorize examples from specific categories.

Analyze . . .
4. Research on linguistic relativity suggests that
A. language has complete control over how people categorize the world.
B. language can have some effects on categorization, but the effects are limited.
C. language has no effect on categorization.
D. researchers have not addressed this question.
Module 8.1 Summary
8.1a Know . . . the key terminology associated with concepts and categories.

categories
classical categorization
concept
graded membership
linguistic relativity (Whorfian hypothesis)
priming
prototypes
semantic network

8.1b Understand . . . theories of how people organize their knowledge about the world.

Certain objects and events are more likely to be associated in clusters. The priming effect demonstrates this phenomenon; for example, hearing the word "fruit" makes it more likely that you will think of "apple" than, say, "table." More specifically, we organize our knowledge about the world through semantic networks, which arrange categories from general to specific levels. Usually we think in terms of basic-level categories, but under some circumstances we can be either more or less specific.

8.1c Understand . . . how experience and culture can shape the way we organize our knowledge.

Studies of people with brain damage suggest that the neural representations of members of evolutionarily important categories are stored together in the brain. These studies also show us that our previous experience with a category can influence how we categorize and store new stimuli in the brain. One of many possible examples of this influence was discussed. Specifically, ideas of how objects relate to one another differ between people from North America and people from Eastern Asia. People from North America (and Westerners in general) tend to focus on individual, focal objects in a scene, whereas people from Japan tend to focus on how objects are interrelated.

8.1d Apply . . . your knowledge to identify prototypical examples.

Apply Activity: Try the following questions for practice.
1. What is the best example for the category of fish: a hammerhead shark, a trout, or an eel?
2. What do you consider to be a prototypical sport? Why?
3. Some categories are created spontaneously, yet still have prototypes. For example, what might be a prototypical object for the category "what to save if your house is on fire"?

8.1e Analyze . . . the claim that the language we speak determines how we think.

Researchers have shown that language can influence the way we think, but it cannot entirely shape how we perceive the world. For example, people can perceive visual and tactile differences between different types of snow even if they don't have unique words for each type.
Module 8.2 Problem Solving, Judgment, and Decision Making
Learning Objectives
8.2a Know . . . the key terminology of problem solving and decision making.
8.2b Understand . . . the characteristics that problems have in common.
8.2c Understand . . . how obstacles to problem solving are often self-imposed.
8.2d Apply . . . your knowledge to determine if you tend to be a maximizer or a satisficer.
8.2e Analyze . . . whether human thought is primarily logical or intuitive.

Ki-Suck Han was about to die. He had just been shoved onto the subway's tracks and was desperately scrambling to climb back onto the station's platform as the subway train rushed toward him. If you were a few metres away from Mr. Han, what would you have done? What factors would have influenced your actions?

In this case, the person on the platform was R. Umar Abbasi, a freelance photographer working for The New York Post. Mr. Abbasi did not put down his camera and run to help Mr. Han. Instead, he took a well-framed photograph that captured the terrifying scene. The photograph was published on the front page of the Post and was immediately condemned by people who were upset that the photographer didn't try to save Mr. Han's life (and that the Post used the photograph to make money). In a statement released to other media outlets, the Post claimed that Mr. Abbasi felt that he wasn't strong enough to lift the man and instead tried to use his camera's flash to signal the driver. According to this explanation, Mr. Abbasi analyzed the situation and selected a course of action that he felt would be most helpful. Regardless of whether you believe this account, it does illustrate an important point: Reasoning and decision making can be performed in a number of ways and can be influenced by a number of factors. That is why we don't all respond the same way to the same situation.
Focus Questions

1. How do people make decisions and solve problems?
2. How can having multiple options lead people to be dissatisfied with their decisions?
In other modules of this text, you have read about how we learn and remember
new information (Modules 7.1 and 7.2 ) and how we organize our knowledge of different concepts (Module 8.1 ). This module will focus on how we use this information to help us solve problems and make decisions. Although it may seem like such “higher-order cognitive abilities” are distinct from memory and categorization, they are actually a wonderful example of how the different topics within the field of psychology relate to each other. When we try to solve a problem or decide between alternatives, we are actually drawing on our knowledge of different concepts and using that information to try to imagine
different possible outcomes (Green et al., 2006). How well we perform these tasks depends on a number of factors including our problem-solving strategies and the type of information available to us.
Defining and Solving Problems
You are certainly familiar with the general concept of a problem, but in
psychological terminology, problem solving means accomplishing a goal when the solution or the path to the solution is not clear (Leighton & Sternberg, 2003; Robertson, 2001). Indeed, many of the problems that we face in life contain obstacles that interfere with our ability to reach our goals. The challenge, then, is to find a technique or strategy that will allow us to overcome these obstacles. As you will see, there are a number of options that people use for this purpose—although none of them are perfect.
Problem-Solving Strategies and Techniques
Each of us will face an incredible number of problems in our lives. Some of these problems will be straightforward and easy to solve; however, others will be quite complex and will require us to come up with a novel solution. How do we remember the strategies we can use for routine problems? And, how do we develop new strategies for nonroutine problems? Although these questions
appear as if they could have an infinite number of answers, there seem to be two common techniques that we use time and again.
One type of strategy is more objective, logical, and slower, whereas the other is
more subjective, intuitive, and quicker (Gilovich & Griffin, 2002; Holyoak & Morrison, 2005). The difference between them can be illustrated with an example. Suppose you are trying to figure out where you have left your phone. You’ve tried the trick of calling yourself using a landline phone, but you couldn’t
hear it ringing. So, it's not in your house. A logical approach might involve making a list of the places you've been in the last 24 hours and then retracing
your steps until you (hopefully) find your phone. An intuitive approach might involve thinking about previous times you’ve lost your phone or wallet and using these experiences to guide your search (e.g., “I’m always forgetting my phone at Dan’s place, so I should look there first”).
When we think logically, we rely on algorithms , problem-solving strategies based on a series of rules. As such, they are very logical and follow a set of steps, usually in a pre-set order. Computers are very good at using algorithms because they can follow a preprogrammed set of steps and perform thousands of operations every second. People, however, are not always so rule-bound. We tend to rely on intuition to find strategies and solutions that seem like a good fit
for the problem. These are called heuristics , problem-solving strategies that stem from prior experiences and provide an educated guess as to what is the most likely solution. Heuristics are often quite efficient; these “rules of thumb” are usually accurate and allow us to find solutions and to make decisions quickly. In the example of trying to figure out where you left your phone, you are more likely to put your phone down at a friend’s house than on the bus, so that increases the likelihood that your phone is still sitting on his coffee table. Calling your friend to
ask about your phone is much simpler than retracing your steps from class to the gym to the grocery store, and so on.
The overall goal of both algorithms and heuristics is to find an accurate solution as efficiently as possible. In many situations, heuristics allow us to solve problems quite rapidly. However, the trade-off is that these shortcuts can occasionally lead to incorrect solutions, a topic we will return to later in this module.
Of course, different problems call for different approaches. In fact, in some cases, it might be useful to start off with one type of problem-solving and then switch to another. Think about how you might play the children’s word-game
known as hangman, shown in Figure 8.9 . Here, the goal state is to spell a word. In the initial state, you have none of the letters or other clues to guide you. So, your obstacles are to overcome (i.e., fill in) blanks without guessing the wrong letters. How would you go about achieving this goal?
Figure 8.9 Problem Solving in Hangman In a game of hangman, your job is to guess the letters in the word represented by the four blanks to the left. If you get a letter right, your opponent will put it in the correct blank. If you guess an incorrect letter, your opponent will draw a body part on the stick figure. The goal is to guess the word before the entire body is drawn.
On one hand, an algorithm might go like this: Guess the letter A, then B, then C,
and so on through the alphabet until you lose or until the word is spelled. However, this would not be a very successful approach. An alternative algorithm would be to find out how frequently each letter occurs in English words and then guess the letters in that order until the game ends with you winning or losing. So,
you would start out by selecting E, then A, and so on. On the other hand, a heuristic might be useful. For example, if you discover the last letter is G, you might guess that the next-to-last letter is N, because you know that many words end with -ing. Using a heuristic here would save you time and usually lead to an accurate solution.
As you can see, some problems (such as the hangman game) can be approached with either algorithms or heuristics, or with a combination of the two. In practice, most people start out a game like hangman with an algorithm: Guess the most frequent letters until
a recognizable pattern emerges, such as -ing, or the letters -oug (which are often followed by h, as in tough or cough) appear. At that point, you might switch to heuristics and guess which letters would be most likely to fit in the spaces.
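The mix of strategies can be sketched in code. The Python snippet below is a simplified illustration (the letter ordering and the single pattern rule are our own assumptions, not part of any formal model): the algorithm always guesses the most frequent untried letter, while the heuristic jumps to "n" whenever the revealed word ends in a blank followed by "g", as in words ending in -ing.

```python
# Simplified hangman guessers: a frequency-based algorithm and a pattern heuristic.
# The letter ordering is an approximate English frequency ranking, for illustration only.
FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz"

def algorithmic_guess(pattern, guessed):
    """Algorithm: ignore the pattern; take the most frequent letter not yet tried."""
    return next(ch for ch in FREQ_ORDER if ch not in guessed)

def heuristic_guess(pattern, guessed):
    """Heuristic: if the word ends in a blank followed by 'g', bet on 'n' (as in -ing)."""
    if pattern.endswith("_g") and "n" not in guessed:
        return "n"
    return algorithmic_guess(pattern, guessed)

# Suppose the board shows "_ o _ _ g" and you have guessed 'o' and 'g' so far.
pattern, guessed = "_o__g", {"o", "g"}
print(algorithmic_guess(pattern, guessed))  # 'e' -- purely frequency-driven
print(heuristic_guess(pattern, guessed))    # 'n' -- exploits the familiar -ing ending
```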
Cognitive Obstacles
Using algorithms or heuristics will often allow you to eventually solve a problem; however, there are times when the problem-solving rules and strategies that you have established might actually get in the way of problem solving. The nine-dot
problem (Figure 8.10 ; Maier, 1930) is a good example of such a cognitive obstacle. The goal of this problem is to connect all nine dots using only four straight lines and without lifting your pen or pencil off the paper. Try solving the nine-dot problem before you read further.
Figure 8.10 The Nine-Dot Problem Connect all nine dots using only four straight lines and without lifting your pen or
pencil (Maier, 1930). The solution to the problem can be seen in Figure 8.11 .
Source: Maier, N. F. (1930). Reasoning in humans. I. On direction. Journal of Comparative Psychology, 10 (2), 115–143.
American Psychological Association.
Figure 8.11 One Solution to the Nine-Dot Problem In this case, the tendency is to see the outer edge of dots as a boundary, and to assume that one cannot go past that boundary. However, if you are willing to extend some of the lines beyond the dots, it is actually quite a simple puzzle to complete.
Here is something to think about when solving this problem: Most people impose limitations on where the lines can go, even though those limits are not a part of
the rules. Specifically, people often assume that a line cannot extend beyond the
dots. As you can see in Figure 8.11 , breaking these rules is necessary in order to find a solution to the problem.
Having a routine solution available for a problem generally allows us to solve that problem with less effort than we would use if we encountered it for the first time. This efficiency saves us time and effort. Sometimes, however, routines may impose cognitive barriers that impede solving a problem if circumstances change
so that the routine solution no longer works. A mental set is a cognitive obstacle that occurs when an individual attempts to apply a routine solution to what is actually a new type of problem. Figure 8.12 presents a problem that often elicits a mental set. The answer appears at the bottom of the figure, but make your guess before you check it. Did you get it right? If not, then you probably succumbed to a mental set.
Figure 8.12 The Five-Daughter Problem Maria’s father has five daughters: Lala, Lela, Lila, and Lola. What is the fifth daughter’s name?
The fifth daughter’s name is Maria.
Mental sets can occur in many different situations. For instance, a person may
experience functional fixedness , which occurs when an individual identifies an object or technique that could potentially solve a problem, but can think of only its most obvious function. Functional fixedness can be illustrated with a classic thought problem: Figure 8.13 shows two strings hanging from a ceiling. Imagine you are asked to tie the strings together. However, once you grab a string, you cannot let go of it until both are tied together. The problem is, unless you have extraordinarily long arms, you cannot reach the second string
while you are holding on to the first one (Maier, 1931). So how would you solve the problem? Figure 8.16 offers one possible answer and an explanation of what makes this problem challenging.
Figure 8.13 The Two-String Problem Imagine you are standing between two strings and need to tie them together.
The only problem is that you cannot reach both strings at the same time (Maier, 1931). In the room with you is a table, a piece of paper, a pair of pliers, and a ball of cotton. What do you do? For a solution, see Figure 8.16 .
Figure 8.16 A Solution to the Two-String Problem One solution to the two-string problem from Figure 8.13 is to take the pliers off the table and tie them to one string. This provides enough weight to swing one string back and forth while you grab the other. Many people demonstrate functional fixedness when they approach this problem—they do not think of using the pliers as a weight because its normal function is as a grasping tool.
Problem solving occurs in every aspect of life, but as you can see, there are basic cognitive processes that appear no matter what the context. We identify the goal we want to achieve, try to determine the best strategy to do so, and hope that we do not get caught by unexpected obstacles—especially those we create in our own minds.
Of course, not all problems are negative obstacles that must be overcome. Problem solving can also be part of some positive events as well.
PSYCH@ Problem Solving and Humour
Jokes often involve a problem that needs to be solved.

Question: Why can't university students take exams at the zoo?
Answer: There are too many cheetahs.

Solving the problem typically requires at least two steps. The initial step requires the audience to detect that some part of the joke's set-up is not what is expected. Theories of humour sometimes refer to this as incongruity detection. Incongruities create an initial tension. In the example we're using, the key word in this joke is "cheetahs." Why would the presence of cheetahs affect exam taking? The trick is to understand that "cheetahs" sounds a lot like "cheaters." So, a zoo would have "cheetahs," but an exam could have "cheaters." Once we understand the incongruity, we no longer feel any tension. Incongruity resolution has occurred (Suls, 1972).
At this point, the audience has solved the problem. But, is it funny? Wyer and Collins (1992) suggested that for an incongruity resolution to be funny, the audience or reader would need to elaborate on the joke, possibly thinking about how it relates to them or forming humorous
mental images (see Figure 8.14 ). This process of elaboration should, ideally, lead to an emotional response of amusement, although this might differ across cultures.
Figure 8.14 The Comprehension-Elaboration Theory of Humour Humour is a form of problem solving. With most jokes, we identify the incongruity or “twist” involved in the wording of the joke and then attempt to resolve it. Once we have found the solution, we think about (elaborate on) the joke, oftentimes relating it to ourselves or to mental imagery. These processes lead to a feeling of amusement or, in the case of the cheetah joke, a rolling of the eyes. Source: Republished with permission of Elsevier, Inc. from Towards a neural circuit model of verbal humor
processing: An fMRI study of the neural substrates of incongruity detection and resolution, NeuroImage 66 (2013)
169–176. Copyright © 2012. Permission conveyed through Copyright Clearance Center, Inc.
Recent neuroimaging studies have manipulated the characteristics of
verbal stimuli to allow the researchers to identify brain areas related to nonsense stimuli (incongruities that did not undergo cognitive
elaboration) and stimuli that were perceived as humorous (incongruities that did undergo elaboration). Incongruity detection and resolution activated areas in the temporal lobes and the medial frontal lobes (close to the middle of the brain). Elaboration activated a network involving the
left frontal and parietal lobes (Chan et al., 2013). The purpose of this section wasn’t to take the joy out of humour. Instead, it was to show that humour, like most of our behaviours, involves the biopsychosocial model. If we suggested otherwise, we’d be lion.
Module 8.2a Quiz: Defining and Solving Problems

Know . . .
1. _______ are problem-solving strategies that provide a reasonable guess for the solution.
A. Algorithms
B. Heuristics
C. Operators
D. Subgoals

Understand . . .
2. Javier was attempting to teach his daughter how to tie her shoes. The strategy that would prove most effective in this situation would be a(n) _______.
A. heuristic
B. algorithm
C. obstacle
D. mental set

3. Jennifer was trying to put together her new bookshelf in her bedroom. Unfortunately, she didn't have a hammer. Frustrated, she went outside and sat down beside some bricks that were left over from a gardening project. Her inability to see that the bricks could be used to hammer in nails is an example of _______.
A. a mental set
B. an algorithm
C. functional fixedness
D. a heuristic
Judgment and Decision Making
Like problem solving, judgments and decisions can be based on logical algorithms, intuitive heuristics, or a combination of the two types of thought
(Gilovich & Griffin, 2002; Holyoak & Morrison, 2005). We tend to use heuristics more often than we realize, even those of us who consider ourselves to be logical thinkers. This isn't necessarily a bad thing—heuristics allow us to make efficient judgments and decisions all the time. In this section of the module, we will examine specific types of heuristics, how they positively influence our
decision making, and how they can sometimes lead us to incorrect conclusions.
Conjunction Fallacies and Representativeness
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As
a student, she was deeply concerned with issues of discrimination and social justice,
and also participated in antinuclear demonstrations. Which is more likely?
A. Linda is a bank teller.
B. Linda is a bank teller and is active in the feminist movement.
Which answer did you choose? In a study that presented this problem to participants, the researchers reported that (B) was chosen more than 80% of the time. Most respondents stated that option (B) seemed more correct even though option (A) is actually much more likely and would be the correct choice based on
the question asked (Tversky & Kahneman, 1982).
So how is the correct answer (A)? Individuals who approach this problem from the stance of probability theory would apply some simple logical steps. The world has a certain number of (A) bank tellers; this number would be considered the
base rate, or the rate at which you would find a bank teller in the world’s population just by asking random people on the street if they are a bank teller. Among the base group, there will be a certain number of (B) bank tellers who are
feminists, as shown in Figure 8.15 . In other words, the number of bank tellers who are feminists will always be a fraction of (i.e., less than) the total number of bank tellers. But, because many of Linda’s qualities could relate to a “feminist,”
the idea that Linda is a bank teller and a feminist feels correct. This type of error, known as the conjunction fallacy , reflects the mistaken belief that finding a specific member in two overlapping categories (i.e., a member of the conjunction of two categories) is more likely than finding any member of one of the larger, general categories.
Figure 8.15 The Conjunction Fallacy There are more bank tellers in the world than there are bank tellers who are feminists, so there is a greater chance that Linda comes from either (A) or (B) than just (B) alone.
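The logic behind the correct answer can be written as a single probability inequality (our notation, not the textbook's):

```latex
P(\text{teller} \cap \text{feminist})
  = P(\text{teller})\,P(\text{feminist} \mid \text{teller})
  \le P(\text{teller}),
\qquad \text{because } 0 \le P(\text{feminist} \mid \text{teller}) \le 1 .
```

Because a conditional probability can never exceed 1, the chance that Linda is a bank teller and a feminist can never exceed the chance that she is a bank teller, no matter how representative of a feminist she seems.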
The conjunction fallacy demonstrates the use of the representativeness
heuristic : making judgments of likelihood based on how well an example represents a specific category. In the bank teller example, we cannot identify any traits that seem like a typical bank teller. At the same time, the traits of social activism really do seem to represent a feminist. Thus, the judgment was biased by the fact that Linda seemed representative of a feminist, even though a feminist bank teller will always be rarer than bank tellers in general (i.e., the representativeness heuristic influenced the decision more than logic or mathematical probabilities).
Seeing this type of problem has led many people to question what is wrong with people’s ability to use logic: Why is it so easy to get 80% of the people in a study
to give the wrong answer? In fact, there is nothing inherently wrong with using heuristics; they simply allow individuals to obtain quick answers based on readily available information. In fact, heuristics often lead to correct assumptions about a situation.
Consider this scenario:
You are in a department store trying to find a product that is apparently sold out. At the
end of the aisle, you see a young man in tan pants with a red polo shirt—the typical
employee's uniform of this chain of stores. Should you stop and work out the probabilities in order to reach the answer that is technically most correct?

A. A young male of this age would wear tan pants and a red polo shirt.
B. A young male of this age would wear tan pants and a red polo shirt and work at this store.
Or does it make sense to just assume (B) is correct, and to simply ask the young
man for help (Shepperd & Koch, 2005)? In this case, it would make perfect sense to assume (B) is correct and not spend time wondering about the best logical way to approach the situation. In other words, heuristics often work and, in the process, save us time and effort. However, there are many situations in which these mental shortcuts can lead to biased or incorrect conclusions.
The Availability Heuristic
The availability heuristic entails estimating the frequency of an event based on how easily examples of it come to mind. In other words, we assume that if examples are readily available, then they must be very frequent. For example, researchers asked volunteers which was more frequent in the English language:
A. Words that begin with the letter K
B. Words that have K as the third letter
Most subjects chose (A) even though it is not the correct choice. The same thing
happened with the consonants L, N, R, and V, all of which appear as the third letter in a word more often than they appear as the first letter (Tversky & Kahneman, 1973). This outcome reflects the application of the availability heuristic: People base judgments on the information most readily available.
Of course, heuristics often do produce correct answers. Subjects in the same study were asked which was more common in English:
A. Words that begin with the letter K
B. Words that begin with the letter T
In this case, more subjects found that words beginning with T were readily available to memory, and they were correct. The heuristic helped provide a quick, intuitive answer.
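You can check this kind of claim for yourself by counting letter positions in a large word list. The following Python sketch is a minimal example; the dictionary file named in the commented usage is an assumption (it exists on many Unix-like systems, but the path may differ or be absent on yours).

def letter_position_counts(words, letter="k"):
    """Count how many words start with `letter` versus have it as the third letter."""
    letter = letter.lower()
    first = sum(1 for w in words if len(w) >= 1 and w[0].lower() == letter)
    third = sum(1 for w in words if len(w) >= 3 and w[2].lower() == letter)
    return first, third

# Example usage (assumed word-list location; adjust for your system):
# with open("/usr/share/dict/words") as f:
#     words = [line.strip() for line in f]
# starts_with_k, k_third = letter_position_counts(words, "k")
# print(starts_with_k, k_third)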
There are numerous real-world examples of the availability heuristic. In the year following the September 11, 2001 terrorist attacks, people greatly overestimated the likelihood that planes would crash or be hijacked. As a result, fewer people flew that year than in the year prior to the attacks, opting instead to travel by car when possible. The image of planes crashing into the World Trade Center was so vivid and so easily retrieved from memory that it influenced decision making. Ironically, this shift proved to be
dangerous, particularly given that driving is statistically much more dangerous than flying. Gerd Gigerenzer, a German psychologist at the Max Planck Institute
in Berlin, examined traffic fatalities on U.S. roads in the years before and after 2001. He found that in the calendar year following these terrorist attacks, there were more than 1500 additional deaths on American roads (when compared to the average of the previous years). Within a year of the attacks, the number of people using planes returned to approximately pre-9/11 levels; so did the
number of road fatalities (Gigerenzer, 2004). In other words, for almost a year, people overestimated the risks of flying because it was easier to think of examples of 9/11 than to think of all of the times hijackings and plane crashes
did not occur; at the same time, they underestimated the risks associated with driving because vivid images of car crashes were less available to them. This example shows us that heuristics, although often useful, can cause us to incorrectly judge the
risks associated with many elements of our lives (Gardner, 2008).
Anchoring and Framing Effects
While the representativeness and availability heuristics involve our ability to remember examples that are similar to the current situation, other heuristics influence our responses based on the way that information is presented. Issues such as the wording of a problem and the problem’s frames of reference can
have a profound impact on judgments. One such effect, known as the anchoring effect , occurs when an individual attempts to solve a problem involving numbers and uses previous knowledge to keep (i.e., anchor) the response within a limited range. Sometimes this previous knowledge consists of facts that we can retrieve from memory. For example, imagine that you are asked to name the year that British Columbia became part of Canada. Although most of you would, of course, excitedly jump from your chair and shout, “1871!” the rest might assume that if Canada became a country in 1867, then B.C. likely joined a few years after that. In this latter case, the birth of our country in 1867 served as an anchor for the judgment about when B.C. joined Confederation.
The anchoring heuristic has also been demonstrated experimentally. In these cases, questions worded in different ways can produce vastly different responses
(Epley & Gilovich, 2006; Kahneman & Miller, 1986). For example, consider what might happen if researchers asked the same question to two different groups, using a different anchor each time:
A. What percentage of countries in the United Nations are from Africa? Is it greater than or less than 10%? What do you think the exact percentage is?
B. What percentage of countries in the United Nations are from Africa? Is it greater than or less than 65%? What do you think the exact percentage is?
Researchers conducted a study using similar methods and found that individuals in group (A), who received the 10% anchor, estimated the number to be approximately 25%. Individuals in group (B), who received the 65% anchor, estimated the percentage at approximately 45%. In this case, the anchor obviously had a significant effect on the estimates.
The anchoring heuristic can have a large effect on your life. For example, have you ever had to bargain with someone while travelling? Or have you ever negotiated the price of a car? If you are able to establish a low anchor during bargaining, the final price is likely to be much lower than if you let the salesperson dictate the terms. So don’t be passive—use what you learn in this course to save yourself some money.
Decision making can also be influenced by how a problem is worded or framed. Consider the following dilemma: Imagine that you are a selfless doctor volunteering in a village in a disease-plagued part of Africa. You have two treatment options. Vaccine A has been used before; you know that it will save 200 of the 600 villagers. Vaccine B is untested; it has a 33% chance of saving all 600 people and a 67% chance of saving no one. Which option would you choose?
Now let’s suppose that you are given two different treatment options for the villagers. Treatment C has been used before and will definitely kill 67% of the villagers. Treatment D is untested; it has a 33% chance of killing none of the villagers and a 67% chance of killing them all. Which option would you choose?
Most people choose the vaccine that will definitely save 200 people (Vaccine A) and the treatment that has a chance of killing no one (Treatment D). This tendency is interesting because options A and C are identical, as are options B and D. As you can see by looking at Figure 8.17, the only difference between them is that one pair is framed in terms of saving people and the other is framed in terms of killing people. Yet, people tend to be risk-averse when the question is framed in terms of gains (lives saved), preferring the certain option, and risk-seeking when it is framed in terms of potential losses (deaths), preferring the gamble.
Figure 8.17 Framing Effects When people are asked which vaccine or treatment they would use to help a hypothetical group of villagers, the option they select is influenced by how the
question is worded or framed. If the question is worded in terms of saving villagers, most people choose Vaccine A. If the question is worded in terms of killing villagers, most people choose Treatment D. Source: Wade, Carole; Tavris, Carol, Invitation to Psychology, 2nd Ed., ©2002, p. 121. Adapted and electronically reproduced by permission of Pearson Education, Inc., Upper Saddle River, New Jersey.
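One way to see that the two versions of the problem describe the same outcomes is to compute the expected number of survivors for each option. The Python sketch below does this with the rounded percentages used in the text; with the exact fractions (1/3 and 2/3), every option works out to exactly 200 expected survivors.

villagers = 600

ev_vaccine_a = 200                              # saves 200 for certain
ev_vaccine_b = 0.33 * villagers + 0.67 * 0      # 33% chance all are saved, 67% chance none
ev_treatment_c = 0.33 * villagers               # 67% die for certain, so 33% of 600 survive
ev_treatment_d = 0.33 * villagers + 0.67 * 0    # 33% chance none die, 67% chance all die

print(ev_vaccine_a, ev_vaccine_b, ev_treatment_c, ev_treatment_d)
# All four values are essentially 200, yet most people prefer A and D.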
Belief Perseverance and Confirmation Bias
Whenever we solve a problem or make a decision, we have an opportunity to evaluate the outcome to make sure we got it right and to judge how satisfied we are with the decision. However, feeling satisfied does not necessarily mean we are correct.
Let's use an example to make this discussion more concrete. Each time there is a mass shooting in the U.S., thousands of gun owners post messages on social media stating that Americans need to be able to easily purchase more guns in order to protect themselves. Many people (including many Americans) might think this idea is a bit illogical, given that easy access to lethal weapons is part of what makes mass shootings so prevalent in that country. However, gun advocates often engage in (at least) two cognitive biases in order to maintain their beliefs.
One cognitive bias is belief perseverance, which occurs when an individual believes he or she has the solution to a problem or the correct answer to a question and holds onto that belief even in the face of evidence against it. So, gun advocates will oppose any form of gun control even when presented with evidence from other countries (e.g., Australia) showing that preventing the public from owning assault rifles reduces or even eliminates mass shootings.
A second cognitive bias is the confirmation bias, which occurs when an individual searches for (or pays attention to) only evidence that will confirm his or her beliefs instead of evidence that might disconfirm them. To continue our example, gun advocates will often present statistics showing that particular U.S. states with strict gun laws still have high crime rates. These data are consistent with the claim that limiting gun access does not reduce crime. Of course, this argument ignores a
great deal of evidence suggesting that limiting gun access also makes it more difficult for ordinary citizens to commit gun-related violence. In other words, it is a selective representation of the data. The goal of these paragraphs isn’t to pick on gun enthusiasts or Americans! But, as mass shootings become more and more common, it is worth looking at some of the biases that are influencing the discussions around these issues.
Brain-imaging research provides an interesting perspective on belief perseverance and confirmation bias. This research shows that people treat evidence in ways that minimize negative or uncomfortable feelings while
maximizing positive feelings (Westen et al., 2006). For example, one American study examined the brain regions and self-reported feelings involved in interpreting information about presidential candidates during the 2004 campaign. The participants were all deeply committed to either the Republican (George “Dubya” Bush) or Democratic (John Kerry) candidate, and they all encountered information that was politically threatening toward each candidate (in this case, evidence that the candidate had contradicted himself). As you can see from the
results in Figure 8.18 , participants had strong emotional reactions to threatening (self-contradictory) information about their own candidate, but not to the alternative candidate, or a relatively neutral person, such as a retired network news anchor. Analyses of the brain scans demonstrated that participants from both political parties engaged in motivated reasoning. When the threat was directed at the participant’s own candidate, brain areas associated with ignoring or suppressing information were more active, whereas few of the regions
associated with logical thinking were activated (Westen et al., 2006).
Figure 8.18 Ratings of Perceived Contradictions in Political Statements Democrats and Republicans reached very different conclusions about candidates’ contradictory statements. Democrats readily identified the opponent’s contradictions but were less likely to do so for their own candidate; the same was true for Republican responders. Source: Westen, D., Blagov, P. S., & Harenski, K. (2006). Neural bases for motivated reasoning: An fMRI study of emotional
constraints on partisan political judgment in the 2004 U.S. presidential election. Journal of Cognitive Neuroscience, 18, 1947–1958. Reprinted with permission of MIT Press.
These data demonstrate that a person’s beliefs can influence their observable behavioural responses to information as well as the brain activity underlying these behaviours. As we shall see, decision making—and our happiness with those decisions—can also be influenced by a person’s personality.
Working the Scientific Literacy Model Maximizing and Satisficing in Complex Decisions
One privilege of living in a technologically advanced, democratic society is that we get to make many decisions for ourselves. However, for each decision there can be more choices than we can possibly consider. As a result, two types of consumers have
emerged in our society. Satisficers are individuals who seek to make decisions that are, simply put, “good enough.” In contrast,
maximizers are individuals who attempt to evaluate every option for every choice until they find the perfect fit. Most people exhibit some of both behaviours, satisficing at times and maximizing at other times. However, if you consider all the people you know, you can probably identify at least one person who is an extreme maximizer—he or she will always be comparing products, jobs, classes, and so on, to find out who has made the best decisions. At the same time, you can probably identify an extreme satisficer—the person who will be satisfied with his or her choices as long as they are "good enough."
What do we know about maximizing and satisficing? If one person settles for the good-enough option while another searches until he finds the best possible option, which individual do you think will be happier with the decision in the end? Most people believe the maximizer will be happier, but this is not always the case. In fact, researchers such as Barry Schwartz of Swarthmore College and his colleagues have no shortage of data
about the paradox of choice, the observation that more choices can lead to less satisfaction. In one study, the researchers asked participants to recollect both large (more than $100) and small (less than $10) purchases and report the number of options they considered, the time spent shopping and making the decision, and the overall satisfaction with the purchase. Sure enough, those who ranked high on a test of maximization invested more time and effort, but were actually less pleased with the outcome
(Schwartz et al., 2002).
In another study, researchers questioned recent university graduates about their job search process. Believe it or not, maximizers averaged 20% higher salaries, but were less happy
about their jobs than satisficers (Iyengar et al., 2006). This outcome occurred even though we would assume that
maximizers would be more careful when selecting a job—if humans were perfectly logical decision makers.
So, now we know that just the presence of alternative choices can drive down satisfaction—but how can that be?
How can science explain maximizing and satisficing? To answer this question, researchers asked participants to read vignettes that included a trade-off between number of choices
and effort (Dar-Nimrod et al., 2009). Try this example for yourself:
Your cleaning supplies (e.g., laundry detergent, rags, carpet cleaner,
dish soap, toilet paper, glass cleaner) are running low. You have the
option of going to the nearest grocery store (5 minutes away), which
offers 4 alternatives for each of the items you need, or you can drive to
the grand cleaning superstore (25 minutes away), which offers 25
different alternatives for each of the items (for approximately the same
price). Which store would you go to?
In the actual study, maximizers were much more likely to spend the extra time and effort to have more choices. Thus, if you decided to go to the store with more options, you are probably a maximizer. What this scenario does not tell us is whether having more or fewer choices was pleasurable for either maximizers or satisficers.
See how well you understand the nature of maximizers and satisficers by predicting the results of the next study: Participants
at the University of British Columbia completed a taste test of one piece of chocolate, but they could choose this piece of chocolate from an array of 6 pieces or an array of 30 pieces. When there were 6 pieces, who was happier—maximizers or satisficers? What happened when there were 30 pieces to choose from? As
you can see in Table 8.2 , the maximizers were happier when
there were fewer options. On a satisfaction scale indicating how much they enjoyed the piece of chocolate that they selected, the maximizers scored higher in the 6-piece condition (5.64 out of 7)
than in the 30-piece condition (4.73 out of 7; Dar-Nimrod et al., 2009). In contrast, satisficers did not show a statistical difference between the conditions (5.44 and 6.00 for the 6-piece and 30-piece conditions, respectively).
Table 8.2 Satisfaction of Maximizers and Satisficers
6 Alternatives 30 Alternatives Difference
Maximizers 5.64 4.73 −0.91
Satisficers 5.44 6.00 +0.46
Source: Adapted from Dar-Nimrod et al. (2009). The Maximization Paradox: The costs of
seeking alternatives. Personality and Individual Differences, 46, 631–635, Figure 1 and Table 1.
Can we critically evaluate this information? One hypothesis that seeks to explain the dissatisfaction of maximizers suggests that they invest more in the decision, so they expect more from the outcome. Imagine that a satisficer and a maximizer purchase the same digital camera for $175. The maximizer may have invested significantly more time and effort
into the decision so, in effect, she feels like she paid considerably more for the camera.
Regardless of the explanation, we should keep in mind that maximizers and satisficers are preexisting categories. People cannot be randomly assigned to be in one category or another, so these findings represent the outcomes of quasi-experimental
research (see Module 2.2 ). We cannot be sure that the act of maximizing leads to dissatisfaction based on these data. Perhaps maximizers are the people who are generally less satisfied, which in turn leads to maximizing behaviour.
Why is this relevant? Although we described maximizing and satisficing in terms of purchasing decisions, you might also notice that these styles of decision making can be applied to other situations, such as multiple-choice exams. Do you select the first response that sounds reasonable (satisficing), or do you carefully review each of the responses and compare them to one another before marking your choice (maximizing)? Once you make your choice, do you stick with it, believing it is good enough (satisficing), or are you willing to change your answer to make the best possible choice (maximizing)? Despite the popular wisdom that you should never change your first response, there may be an advantage to maximizing on exams. Research focusing on more than 1500 individual examinations showed that when people changed their answers, they changed them from incorrect to correct 51% of the time, from correct to incorrect 25% of the time, and from incorrect
to another incorrect option 23% of the time (Kruger et al., 2005).
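A back-of-the-envelope calculation, sketched in Python below, shows why changing answers tends to pay off on average. It simply applies the percentages reported above to a hypothetical 100 answer changes; it is not a model of any particular exam.

changes = 100
gained  = 0.51 * changes   # wrong answer changed to a right answer
lost    = 0.25 * changes   # right answer changed to a wrong answer
neutral = 0.23 * changes   # wrong answer changed to a different wrong answer (no effect)

net_marks = gained - lost
print(net_marks)   # about +26 marks gained per 100 changes, on average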
The research discussed above suggests that there are some aspects of our consumer-based society that might actually be making us less happy. This seems counterintuitive given that the overwhelming number of product options
available to us almost guarantees that we will get exactly what we want (or think we want). It’s worth thinking about how the different biases discussed in this
module relate to your own life. By examining how your thinking is affected by different heuristics and biases, you will gain some interesting insights into why you behave the way you do. You will also be able to increase the amount of control you have over your own life.
Module 8.2b Quiz:
Judgment and Decision Making
Know . . .
1. When an individual makes judgments based on how easily things come to mind, he or she is employing the ______ heuristic.
A. confirmation
B. representativeness
C. availability
D. belief perseverance
Understand . . .
2. Belief perseverance seems to function by
A. maximizing positive feelings.
B. minimizing negative feelings.
C. maximizing negative feelings while minimizing positive feelings.
D. minimizing negative feelings while maximizing positive feelings.
Analyze . . .
3. Why do psychologists assert that heuristics are beneficial for problem solving?
A. Heuristics increase the amount of time we spend arriving at good solutions to problems.
B. Heuristics decrease our chances of errors dramatically.
C. Heuristics help us make decisions efficiently.
D. Heuristics are considered the most logical thought pattern for problem solving.
4. The fact that humans so often rely on heuristics is evidence that
A. humans are not always rational thinkers.
B. it is impossible for humans to think logically.
C. it is impossible for humans to use algorithms.
D. humans will always succumb to the confirmation bias.
Module 8.2 Summary
algorithms
anchoring effect
availability heuristic
belief perseverance
confirmation bias
conjunction fallacy
functional fixedness
heuristics
mental set
problem solving
representativeness heuristic
All problems involve people attempting to reach some sort of goal; this goal can be an observable behaviour like learning to serve a tennis ball or a cognitive behaviour like learning Canada’s ten provincial capitals. This process involves forming strategies that will allow the person to reach the goal. It may also require a person to overcome one or more obstacles along the way.
Many obstacles arise from the individual’s mental set, which occurs when a person focuses on only one potential solution and does not consider alternatives.
Know . . . the key terminology of problem solving and decision making.
8.2a
Understand . . . the characteristics that problems have in common.
8.2b
Understand . . . how obstacles to problem solving are often self-imposed.
8.2c
Similarly, functional fixedness can arise when an individual does not consider alternative uses for familiar objects.
Apply Activity
Rate the following items on a scale from 1 (completely disagree) to 7 (completely agree), with 4 being a neutral response.
1. Whenever I’m faced with a choice, I try to imagine what all the other possibilities are, even ones that aren’t present at the moment.
2. No matter how satisfied I am with my job, it’s only right for me to be on the lookout for better opportunities.
3. When I am in the car listening to the radio, I often check other stations to see whether something better is playing, even if I am relatively satisfied with what I’m listening to.
4. When I watch TV, I channel surf, often scanning through the available options even while attempting to watch one program.
5. I treat relationships like clothing: I expect to try a lot on before finding the perfect fit.
6. I often find it difficult to shop for a gift for a friend.
7. When shopping, I have a difficult time finding clothing that I really love.
8. No matter what I do, I have the highest standards for myself.
9. I find that writing is very difficult, even if it's just writing to a friend, because it's so difficult to word things just right. I often do several drafts of even simple things.
10. I never settle for second best.
When you are finished, average your ratings together to find your overall score. Scores greater than 4 indicate maximizers; scores less than 4 indicate satisficers. Approximately one-third of the population scores below 3.25 and approximately one-third scores above 4.75. Where does your score place you?
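If you would rather let a computer do the scoring, the short Python sketch below implements the same procedure. The example ratings at the bottom are invented for illustration.

def maximization_score(ratings):
    """Average ten ratings, each on the 1 (completely disagree) to 7 (completely agree) scale."""
    if len(ratings) != 10 or not all(1 <= r <= 7 for r in ratings):
        raise ValueError("expected ten ratings between 1 and 7")
    return sum(ratings) / len(ratings)

def interpret(score):
    # Cut-offs follow the text: scores above 4 lean toward maximizing, below 4 toward
    # satisficing; roughly a third of people fall below 3.25 and a third above 4.75.
    if score > 4:
        return "maximizer"
    if score < 4:
        return "satisficer"
    return "right in the middle"

example_ratings = [5, 6, 4, 5, 7, 3, 5, 6, 4, 5]   # hypothetical responses
score = maximization_score(example_ratings)
print(score, interpret(score))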
Apply . . . your knowledge to determine if you tend to be a maximizer or a satisficer.
8.2d
Analyze . . . whether human thought is primarily logical or intuitive.
8.2e
This module provides ample evidence that humans are not always logical. Heuristics are helpful decision-making and problem-solving tools, but they do not always follow logical principles. Even so, the abundance of heuristics does not mean that humans are never logical; instead, they simply point to the limits of our rationality.
Module 8.3 Language and Communication
Manuela Hartling/Reuters
Learning Objectives
Know . . . the key terminology from the study of language. Understand . . . how language is structured. Understand . . . how genes and the brain are involved in language use. Apply . . . your knowledge to distinguish between units of language such as phonemes and morphemes. Analyze . . . whether species other than humans are able to use language.
8.3a 8.3b 8.3c 8.3d
8.3e
Dog owners are known for attributing a lot of intelligence, emotion, and “humanness” to their canine pals. Sometimes they may appear to go overboard—such as Rico’s owners, who claimed their border collie understood 200 words, most of which referred to different toys and objects he liked to play with. His owners claimed that they could show Rico a toy, repeat its name a few times, and toss the toy into a pile of other objects; Rico would then retrieve the object upon verbal command. Rico’s ability appeared to go well beyond the usual “sit,” “stay,” “heel,” and perhaps a few other words that dog owners expect their companions to understand.
Claims about Rico’s language talents soon drew the attention of scientists, who skeptically questioned whether the dog was just responding to cues by the owners, such as their possible looks or gestures toward the object they asked their pet to retrieve. The scientists set up a carefully controlled experiment in which no one present in the room knew the location of the object that was requested. Rico correctly retrieved 37 out of 40 objects. The experimenters then tested the owners’ claim that Rico could learn object names in just one trial. Rico again confirmed his owners’ claims, and the researchers concluded that his ability to understand new words was comparable to that of a three-year-
old child (Kaminski et al., 2004).
However, as you will see in this module, Rico’s abilities, while impressive, are dwarfed by those of humans. Our ability to reorganize words into complex thoughts is unique in the animal kingdom and may even have aided our survival as a species.
Focus Questions
1. What is the difference between language and other forms of communication?
2. Might other species, such as chimpanzees, also be capable of learning human language?
Communication happens just about anywhere you can find life. Dogs bark, cats meow, monkeys chatter, and mice can emit sounds undetectable to the human ear when communicating. Honeybees perform an elaborate dance to
communicate the direction, distance, and quality of food sources (von Frisch, 1967). Animals even communicate by marking their territories with their distinct scent, much to the chagrin of the world’s fire hydrants. Language is among the ways that humans communicate. It is quite unlike the examples of animal communication mentioned previously. So what differentiates language from these other forms of communication? And, what is it about our brains that enables us to turn different sounds and lines into the sophisticated languages found across different human cultures?
What Is Language?
Language is one of the most intensively studied areas in all of psychology. Thousands of experiments have been performed to identify different characteristics of language as well as the brain regions associated with them. But, all fields of study have a birthplace. In the case of the scientific study of language, it began with an interesting case study of a patient in Paris in the early 1860s.
Early Studies of Language
In 1861, Paul Broca, a physician and founder of the Society of Anthropology of Paris, heard of an interesting medical case. The patient appeared to show a very specific impairment resulting from a stroke suffered 21 years earlier. He could understand speech and had fairly normal mental abilities; however, he had great
difficulty producing speech and often found himself uttering single words separated by pauses (uh, er . . .). In fact, this patient acquired the nickname “Tan” because it was one of the only sounds that he could reliably produce. Tan
had what is known as aphasia , a language disorder caused by damage to the
brain structures that support using and understanding language.
Tan died a few days after being examined by Broca. During the autopsy, Broca noted that the brain damage appeared primarily near the back of the frontal lobes in the left hemisphere. Over the next couple of years, Broca found 12 other patients with similar symptoms and similar brain damage, indicating that Tan was
not a unique case. This region of the left frontal lobe that controls our ability to articulate speech sounds that compose words is now known as Broca’s area
(see Figure 8.19 ). The symptoms associated with damage to this region, as seen in Tan, are known as Broca’s aphasia.
Figure 8.19 Two Language Centres of the Brain Broca’s and Wernicke’s areas of the cerebral cortex are critical to language function.
The fact that a brain injury could affect one part of language while leaving others preserved suggested that the ability to use language involves a number of different processes using different areas of the brain. In the years following the publication of Broca’s research, other isolated language impairments were discovered. In 1874, a young Prussian (German) physician named Carl
Wernicke published a short book detailing his study of different types of aphasia. Wernicke noted that some of his patients had trouble with language
comprehension rather than language production. These patients typically had damage to the posterior superior temporal gyrus (the back and top part of the
temporal lobe). This region, now known as Wernicke’s area , is the area of the brain most associated with finding the meaning of words (see Figure 8.19 ). Damage to this area results in Wernicke’s aphasia, a language disorder in which a person has difficulty understanding the words he or she hears. These patients are also unable to produce speech that other people can understand— the words are spoken fluently and with a normal intonation and accent, but these words seem randomly thrown together (i.e., what is being said does not make sense). Consider the following example:
Examiner: I’d like to have you tell me something about your problem.
Person with Wernicke’s aphasia: Yes, I, ugh, cannot hill all of my way. I cannot talk all of the things I do, and part of the part I can go alright, but I cannot tell from the other people. I usually most of my things. I know what can I talk and know what they are, but I cannot always come back even though I know they should be in, and I know should something eely I should know what I’m doing . . .
The important thing to look for in this sample of speech is how the wrong words appear in an otherwise fluent stream of utterances. Contrast this with an example of Broca’s aphasia:
Examiner: Tell me, what did you do before you retired?
Person with Broca’s aphasia: Uh, uh, uh, pub, par, partender, no.
Examiner: Carpenter?
Person with Broca’s aphasia: (Nodding to signal yes) Carpenter, tuh, tuh, twenty year.
Notice that the individual has no trouble understanding the question or coming
up with the answer. His difficulty is in producing the word carpenter and then putting it into an appropriate phrase. Did you also notice the missing “s” from
twenty year? This is another characteristic of Broca’s aphasia: The individual words are often produced without normal grammatical flair: no articles, suffixes, or prefixes.
Broca’s aphasia can include some difficulties in comprehending language as well. In general, the more complex the sentence structure, the more difficult it will be to understand. Compare these two sentences:
The girl played the piano.
The piano was played by the girl.
These are two grammatically correct sentences (although the second is somewhat awkward) that have the same meaning but are structured differently. Patients with damage to Broca’s area would find it much more difficult to understand the second sentence than the first. This impairment suggests that the distinction between speech production and comprehension is not as simple as was first thought. Indeed, as language became a central topic of research in psychology, researchers quickly realized that this ability—or set of abilities—is among the most complex processes humans perform.
Properties of Language
Language, like many other cognitive abilities, flows so automatically that we often overlook how complicated it really is. However, cases like those described above show us that language is indeed a complex set of skills. Researchers
define language as a form of communication that involves the use of spoken, written, or gestural symbols that are combined in a rule-based form. With this definition in mind, we can distinguish which features of language make it a unique form of communication.
Language can involve communication about objects and events that are not in the present time and place. We can use language to talk about events happening on another planet or that are happening within atoms. We can also use different tenses to indicate that the topic of the sentence occurred or
will occur at a different time. For instance, you can say to your roommate, “I’m going to order pizza tonight,” without her thinking the pizza is already there.
Languages can produce entirely new meanings. It is possible to produce a sentence that has never been uttered before in the history of humankind, simply by reorganizing words in different ways. As long as you select English words and use correct grammar, others who know the language should be able to understand it. You can also use words in novel ways. Imagine the
tabloid newspaper headline: Bat Boy Found in Cave! In North American culture, “bat boys” are regular kids who keep track of the baseball bats for baseball players. In this particular tabloid, the story concerned a completely novel creature that was part bat and part boy. Both meanings could be
correct, depending upon the context in which the term bat boy is used. Language is passed down from parents to children. As we will discuss later in this module, children learn to pay attention to the particular sounds of their
native language(s) at the expense of other sounds (Werker, 2003). Children also learn words and grammatical rules from parents, teachers, and peers. In
other words, even if we have a natural inclination to learn a language, experience dictates which language(s) we will speak.
Language requires us to link different sounds (or gestures) with different meanings in order to understand and communicate with other people. Therefore, understanding more about these seemingly simple elements of language is essential for understanding language as a whole.
Words can be arranged or combined in novel ways to produce ideas that have never been expressed before. Weekly World News
Phonemes and Morphemes: The Basic Ingredients
of Language
Languages contain discrete units that exist at differing levels of complexity. When people speak, they assemble these units into larger and more complex units. Some psychologists have used a cooking analogy to explain this phenomenon: We all start with the same basic language ingredients, but they
can be mixed together in an unlimited number of ways (Pinker, 1999).
Phonemes are the most basic units of speech sound. You can identify phonemes rather easily; the phoneme associated with the letter t (which is written as /t/, where the two forward slashes indicate a phoneme) is found at the
end of the word pot or near the beginning of the word stop. If you pay close attention to the way you use your tongue, lips, and vocal cords, you will see that phonemes have slight variations depending on the other letters around them.
Pay attention to how you pronounce the /t/ phoneme in stop, stash, stink, and stoke. Your mouth will move in slightly different ways each time, and there will be very slight variations in sound, but they are still the same basic phoneme. Individual phonemes typically do not have any meaning by themselves; if you want someone to stop doing something, asking him to /t/ will not suffice.
Morphemes are the smallest meaningful units of a language. Some morphemes are simple words, whereas others may be suffixes or prefixes. For
example, the word pig is a morpheme—it cannot be broken down into smaller units of meaning. You can combine morphemes, however, if you follow the rules
of the language. If you want to pluralize pig, you can add the morpheme /-s/, which will give you pigs. If you want to describe a person as a pig, you can add the morpheme /-ish/ to get piggish. In fact, you can add all kinds of morphemes to a word as long as you follow the rules. You could even say piggable (able to be pigged) or piggify (to turn into a pig). These words do not make much literal sense, but they combine morphemes according to the rules; thus we can make a reasonable guess as to the speaker’s intended meaning. Our ability to combine morphemes into words is one distinguishing feature of language that sets it apart from other forms of communication (e.g., we don’t produce a lengthy series of facial expressions to communicate a new idea). In essence, language gives us
productivity—the ability to combine units of sound into an infinite number of meanings.
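As a rough illustration of this productivity, the Python sketch below attaches suffix morphemes to a root using a single simplified spelling rule (double a final consonant before a vowel-initial suffix when the root ends in a consonant-vowel-consonant pattern). This is a toy rule for one example, not a full account of English morphology.

def add_suffix(root, suffix):
    """Naively combine a root morpheme with a suffix morpheme."""
    vowels = "aeiou"
    doubles = (suffix[0] in vowels and len(root) >= 3
               and root[-1] not in vowels
               and root[-2] in vowels
               and root[-3] not in vowels)
    return (root + root[-1] if doubles else root) + suffix

for suffix in ("s", "ish", "able", "ify"):
    print(add_suffix("pig", suffix))   # pigs, piggish, piggable, piggify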
Finally, there are the words that make up a language. Semantics is the study of how people come to understand meaning from words. Humans have a knack for this kind of interpretation, and each of us has an extensive mental dictionary to prove it. Not only do normal speakers know tens of thousands of words, but they can often understand new words they have never heard before based on
their understanding of morphemes.
Although phonemes, morphemes, and semantics have an obvious role in spoken language, they also play a role in our ability to read. When you recognize a word,
you effortlessly translate the word’s visual form (known as its orthography) into the sounds that make up that word (known as its phonology or phonological code). These sounds are combined into a word, at which point you can access its meaning or semantics. However, not all people are able to translate
orthography into sounds. Individuals with dyslexia have difficulties translating words into speech sounds. Indeed, children with dyslexia show less activity in the left fusiform cortex (at the bottom of the brain where the temporal and occipital lobes meet), a brain area involved with word recognition and with linking
word and sound representations (Desroches et al., 2010). This difficulty linking letters with phonemes leads to unusually slow reading in both children and adults despite the fact that these people have normal hearing and are cognitively and
neurologically healthy (Desroches & Joanisse, 2009; Shaywitz, 1998).
This research into the specific impairments associated with dyslexia allows scientists and educators to develop treatment programs to help children improve their reading and language abilities. One of the most successful programs has been developed by Maureen Lovett and her colleagues at Sick Kids Hospital in Toronto and Brock University. Their Phonological and Strategy Training (PHAST)
program (now marketed as Empower Reading™ to earn research money for the hospital) has been used to assist over 6000 students with reading disabilities. Rather than focusing on only one aspect of language, this program teaches children new word-identification and reading-comprehension strategies while also educating them about how words and phrases are structured (so that they know what to expect when they see new words or groups of words). Children who completed these programs showed improvements on a number of
measures of reading and passage comprehension (Frijters et al., 2013; Lovett et al., 2012). Given that 5–15% of the population has some form of reading impairment, treatment programs like PHAST could have a dramatic effect on our educational system.
As you can see, languages derive their complexity from several elements,
including phonemes, morphemes, and semantics. And, when these systems are not functioning properly, language abilities suffer. But phonemes, morphemes, and semantics are just the list of the ingredients of language—we still need to figure out how to mix these ingredients together.
Syntax: The Language Recipe
Perhaps the most remarkable aspect of language is syntax , the rules for combining words and morphemes into meaningful phrases and sentences—the recipe for language. Children master the syntax of their native language before they leave elementary school. They can string together morphemes and words
when they speak, and they can easily distinguish between well-formed and ill-formed sentences. But despite mastering those rules, most speakers cannot tell you what the rules are; syntax just seems to come naturally. It might seem odd that people can do so much with language without a full understanding of its inner workings. Of course, people can also learn how to walk without any understanding of the biochemistry that allows their leg muscles to contract and relax.
The most basic units of syntax are nouns and verbs. They are all that is required
to construct a well-formed sentence, such as Goats eat. Noun–verb sentences are perfectly adequate, if a bit limited, so we build phrases out of nouns and
verbs, as the diagram in Figure 8.20 demonstrates.
Figure 8.20 Syntax Allows Us to Understand Language by the Organization of the Words The rules of syntax help us divide a sentence into noun phrases, verb phrases, and other parts of speech. Source: Adapted from S. Pinker. (1994). The Language Instinct. New York: HarperCollins.
Syntax also helps explain why the order of words in a sentence has such a strong effect on what the sentence means. For example, how would you make a question out of this statement?
A. A goat is in the garden.
B. IS a goat in the garden?
This example demonstrates that a statement (A) can be turned into a well-
formed question (B) just by moving the verb is to the beginning of the sentence. Perhaps that is one of the hidden rules of syntax. Try it again:
A. A goat that is eating a flower is in the garden.
B. IS a goat that eating a flower is in the garden?
As you can see, the rule “move is to the beginning of the sentence” does not apply in this case. Do you know why? It is because we moved the wrong is. The phrase that is eating a flower is a part of the noun phrase because it describes the goat. We should have moved the is from the verb phrase. Try it again:
A. A goat that is eating a flower is in the garden.
B. IS a goat that is eating a flower in the garden?
This is a well-formed sentence. It may be grammatically awkward, but the syntax
is understandable (Pinker, 1994).
As you can see from these examples, the order of words in a sentence helps determine what the sentence means, and syntax is the set of rules we use to determine that order.
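The "move the wrong is" mistake can even be written out as two competing procedures. In the Python sketch below, the naive rule fronts the first is it finds, while the second rule fronts the is that belongs to the main verb phrase. Picking out that auxiliary by simply taking the second is is a hand-coded shortcut that works only for this example; doing it in general would require actually parsing the noun phrase.

sentence = "a goat that is eating a flower is in the garden"
words = sentence.split()

def front_is(words, which):
    """Move the `which`-th occurrence of 'is' (0-based) to the front to form a question."""
    positions = [i for i, w in enumerate(words) if w == "is"]
    i = positions[which]
    return " ".join(["is"] + words[:i] + words[i + 1:]) + "?"

print(front_is(words, 0))  # naive rule: "is a goat that eating a flower is in the garden?" (ill-formed)
print(front_is(words, 1))  # main-clause 'is': "is a goat that is eating a flower in the garden?" (well-formed)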
Pragmatics: The Finishing Touches
If syntax is the recipe for language, pragmatics is the icing on the cake. Pragmatics is the study of nonlinguistic elements of language use. It places heavy emphasis on the speaker’s behaviours and the social situation (Carston, 2002).
Pragmatics reminds us that sometimes what is said is not as important as how it is said. For example, a student who says, “I ate a 50-pound cheeseburger,” is most likely stretching the truth, but you probably would not call him a liar. Pragmatics helps us understand what he implied. The voracious student was
actually flouting—or blatantly disobeying—a rule of language in a way that is obvious (Grice, 1975; Horn & Ward, 2004). There are all sorts of ways in which flouting the rules can lead to implied, rather than literal, meanings; a sample of
these are shown in Table 8.3 .
Table 8.3 Pragmatic Rules Guiding Language Use
The Rule: Say what you believe is true.
Flouting the Rule: My roommate is a giraffe.
The Implication: He does not really live with a giraffe. Maybe his roommate is very tall?

The Rule: Say only what is relevant.
Flouting the Rule: Is my blind date good-looking? He's got a great personality.
The Implication: She didn't answer my question. He's probably not good-looking.

The Rule: Say only as much as you need to.
Flouting the Rule: I like my lab partner, but he's no Einstein.
The Implication: Of course he's not Einstein. Why is she bothering to tell me this? She probably means that her partner is not very smart.
Importantly, pragmatics depends upon both the speaker (or writer) and listener (or reader) understanding that rules are being flouted in order to produce a desired meaning. If you speak with visitors from a different country, you may find that they don’t understand what you mean when you flout the rules of Canadian English or use slang (shortened language). When we say “The goalie stood on his head,” most hockey-mad Canadians understand that we are commenting on a goaltender’s amazing game; however, someone new to hockey would be baffled by this expression. This is another example of how experience—in this case with a culture—influences how we use and interpret language.
Module 8.3a Quiz:
What Is Language?
Know . . .
1. What are the rules that govern how words are strung together into meaningful sentences?
A. Semantics
B. Pragmatics
C. Morphemics
D. Syntax
2. The study of how people extract meaning from words is called ______.
A. syntax
B. pragmatics
C. semantics
D. flouting
Understand . . .
3. Besides being based in a different region of the brain, a major distinction between Broca's aphasia and Wernicke's aphasia is that
A. words from people with Broca's aphasia are strung together fluently, but often make little sense.
B. Broca's aphasia is due to a FOXP2 mutation.
C. Wernicke's aphasia results in extreme stuttering.
D. words from people with Wernicke's aphasia are strung together fluently, but often make little sense.
Apply . . .
4. ______ is an example of a morpheme, while ______ is a phoneme.
A. /dis/; /ta/
B. /a/; /like/
C. /da/; /ah/
D. /non/; /able/
The Development of Language
Human vocal tracts are capable of producing approximately 200 different phonemes. However, no language uses all of these sounds. Jul'hoan, one of the "clicking languages" of Botswana, contains almost 100 sounds (including over 80 different consonant sounds). In contrast, English contains about 40 sounds. But if Canadians and the people of southern Africa are nearly identical genetically, why are our languages so different? And why can't we produce and distinguish between some of the sounds of these other languages? It turns out that experience plays a major role in your ability to speak the language, or languages, that you do.
Infants, Sound Perception, and Language
Acquisition
Say the following phrase out loud: “Your doll.” Now, say this phrase: “This doll.”
Did you notice a difference in how you pronounced doll in these two situations? If English is your first language, it is quite likely that you didn’t notice the slight change in how the letter “d” was expressed. But, Hindi speakers would have no
problem making this distinction. To them, the two instances of the word doll would be pronounced differently and would mean lentils and branch, respectively.
Janet Werker of the University of British Columbia and her colleagues found that very young English-learning infants are able to distinguish between these two “d” sounds. But, by 10 months of age, the infants begin hearing sounds in a way that is consistent with their native language; because English has only one “d” sound, English-learning infants stop detecting the difference between these two sounds
(Werker & Tees, 1984; Werker et al., 2012). This change is not a weakness on the part of English-learning infants. Rather, it is evidence that they are learning the statistical principles of their language. Infants who hear only English words will group different pronunciations of the letter “d” into one category because that is how this sound is used in English. Hindi-learning children will learn to separate different types of “d” sounds because this distinction is important. A related study using two “k” sounds from an Interior Salish (First Nations) language from British Columbia produced similar results—English-learning infants showed a significant drop-off in hearing sounds for the non-English language after 8–10 months
(Werker & Tees, 1984).
In addition to becoming experts at identifying the sounds of their own language, infants also learn how to separate a string of sounds into meaningful groups (i.e., into words). Infants as young as two months old show a preference for speech
sounds over perceptually similar non-speech sounds (Vouloumanos & Werker, 2004). And, when presented with pronounceable non-words (e.g., strak), infants prefer to hear words that follow the rules of their language. An English-learning baby would prefer non-words beginning in “str” to those beginning in “rst” because there are a large number of English words that begin with “str”
(Jusczyk et al., 1993). Additionally, newborn infants can distinguish between function words (e.g., prepositions) and content words (e.g., nouns and verbs)
based on their sound properties (Shi et al., 1999). By six months of age, infants prefer the content words (Shi & Werker, 2001), thus showing that they are learning which sounds are most useful for understanding the meaning of a statement.
By the age of 20 months, children are able to use the perceptual categories that they have developed in order to rapidly learn new words. In some cases, children can perform fast mapping—the ability to map words onto concepts or objects
after only a single exposure. Human children seem to have a fast-mapping capacity that is superior to any other organism on the planet. This skill is one
potential explanation for the naming explosion, a rapid increase in vocabulary size that occurs at this stage of development.
The naming explosion has two biological explanations as well. First, at this stage of development, the brain begins to perform language-related functions in the left hemisphere, similar to the highly efficient adult brain; prior to this stage, this
information was stored and analyzed by both hemispheres (Mills et al., 1997). Second, the naming explosion has also been linked to an increase in the amount of myelin on the brain’s axons, a change that would increase the speed of
communication between neurons (Pujol et al., 2006). These changes would influence not only the understanding of language, but also how a child uses language to convey increasingly complex thoughts such as “How does Spiderman stick to walls?” and “Why did Dad’s hair fall out?”
Producing Spoken Language
Learning to identify and organize speech sounds is obviously an important part of language development. An equally critical skill is producing speech that other people will be able to understand. Early psychologists focused only on behavioural approaches to language learning. They believed that language was learned through imitating sounds and being reinforced for pronouncing and using
words correctly (Skinner, 1985). Although it is certainly true that imitation and reinforcement are involved in language acquisition, they are only one part of this
complex process (Messer, 2000). Here are a few examples that illustrate how learning through imitation and reinforcement is just one component of language development:
Children often produce phrases that include incorrect grammar or word forms. Because adults do not (often) use these phrases, it is highly unlikely that such phrases are imitations.
Children learn irregular verbs and pluralizations on a word-by-word basis. At first, they will use ran and geese correctly. However, when children begin to use grammar on their own, they over-generalize the rules. A child who learns
the /-ed/ morpheme for past tense will start saying runned instead of ran. When she learns that /-s/ means more than one, she will begin to say gooses instead of geese. It is also unlikely that children would produce these forms by imitating.
When children use poor grammar, or when they over-generalize their rules, parents may try to correct them. Although children will acknowledge their parents’ attempts at instruction, this method does not seem to work. Instead, children go right back to over-generalizing.
In light of these and many other examples, it seems clear that an exclusively behaviourist approach falls short in explaining how language is learned. After all, there are profound differences in the success of children and adults in learning a new language: Whereas adults typically struggle, children seem to learn the language effortlessly. If reinforcement and imitation were the primary means by which language was acquired, then adults should be able to learn just as well as children.
The fact that children seem to learn language differently than adults has led
psychologists to use the term language acquisition when referring to children instead of language learning. The study of language acquisition has revealed remarkable similarities among children from all over the world. Regardless of the
language, children seem to develop this capability in stages, as shown in Table 8.4 .
Table 8.4 Milestones in Language Acquisition and Speech
Average Time of Onset (Months) | Milestone | Example
1–2 | Cooing | Ahhh, ai-ai-ai
4–10 | Babbling (consonants start) | Ab-ah-da-ba
8–16 | Single-word stage | Up, mama, papa
24 | Two-word stage | Go potty
24+ | Complete, meaningful phrases strung together | I want to talk to Grandpa.
Sensitive Periods for Language
The phases of language development described above suggest that younger brains are particularly well-suited to acquiring languages; this is not the case for older brains. Imagine a family with two young children who immigrated to Canada from a remote Russian village where no one spoke English. The parents would struggle with English courses, while the children would attend English- speaking schools. Within a few years, the parents would have accumulated some vocabulary but they would likely still have difficulty with pronunciation and
grammar (Russian-speaking people often omit articles such as the). Meanwhile, their children would likely pick up English without much effort and have language skills equivalent to those of their classmates; they would have roughly the same vocabulary, the same accents, and even the same slang.
Why can children pick up a language so much more easily than adults? Most
psychologists agree that there is a sensitive period for language—a time during childhood in which children’s brains are primed to develop language skills (see
also Module 10.1 ). Children can absorb language almost effortlessly, but this ability seems to fade away starting around age seven. Thus, when families immigrate to a country that uses a different language, young children are able to
pick up this language much more quickly than their parents (Hakuta et al., 2003; Hernandez & Li, 2007).
A stunning example of sensitive periods comes from Nicaragua. Until 1979, there was no sign language in this Central American country. Because there were no schools for people with hearing impairments, there was no (perceived) need for a common sign language. When the first schools for the deaf were established, adults and teenaged students attempted to learn to read lips. While few mastered this skill, these students did do something even more astonishing: They developed their own primitive sign language. This language, Lenguaje de Signos Nicaragüense (LSN), involved a number of elaborate gestures similar to a game of charades and did not have a consistent set of grammatical rules. But, it was a start. Children who attended these schools at an early age (i.e., during the sensitive period for language acquisition) used this language as the basis for a more fluent version of sign language: Idioma de Signos Nicaragüense (ISN). ISN has grammatical rules and can be used to express a number of complicated,
abstract ideas (Pinker, 1994). It is now the standard sign language in Nicaragua. The difference between LSN and ISN is similar to the difference between adults and children learning a new language. If you acquire the new language during childhood, you will be much more fluent than if you try to acquire it during
adulthood (Senghas, 2003; Senghas et al., 2004).
The Bilingual Brain
Let’s go back to the example of the Russian-speaking family who immigrated to balmy Canada. The young children learning English would also be speaking Russian at home with their parents. As a result, they would be learning two languages essentially at the same time. What effect would this situation have on their ability to learn each language?
Although bilingualism leads to many benefits (see below), there are some costs to learning more than one language. Bilingual children tend to have a smaller
vocabulary in each language than unilingual children (Mahon & Crutchley, 2006). In adulthood, this difference is shown not by vocabulary size, but by how easily bilinguals can access words. Compared to unilingual adults, bilingual
adults are slower at naming pictures (Roberts et al., 2002), have more difficulty on tests that ask them to list words starting with a particular letter (Rosselli et al., 2000), have more tip-of-the-tongue experiences in which they can’t quite retrieve a word (Gollan & Acenas, 2004), and are slower and less accurate when making word/non-word judgments (Ransdell & Fischler, 1987). These problems with accessing words may be due to the fact that they use each
language less than a unilingual person would use their single language (Michael & Gollan, 2005).
The benefits of bilingualism, however, appear to far outweigh the costs. One difference that has been repeatedly observed is that bilingual individuals are much better than their unilingual counterparts on tests that require them to
control their attention or their thoughts. These abilities, known as executive functions (or executive control), enable people who speak more than one language to inhibit one language while speaking and listening to another (or to limit the interference across languages). If they didn’t, they would produce
confusing sentences like The chien is très sick. Although most of you can figure out that this person is talking about a sick dog, you can see how such sentences would make communication challenging. Researchers have found that bilinguals score better than unilinguals on tests of executive control throughout the
lifespan, beginning in infancy (Kovacs & Mehler, 2009) and the toddler years (Poulin-Dubois et al., 2011) and continuing throughout adulthood (Costa et al., 2008) and into old age (Bialystok et al., 2004). Bilingualism has also recently been shown to have important health benefits. Because the executive control involved with bilingualism uses areas in the frontal lobes, these regions may form
more connections in bilinguals than unilinguals (Bialystok, 2009, 2011a, 2011b). As a result, these brains likely have more back-up systems if damage occurs. Indeed, Ellen Bialystok at York University and her colleagues have shown that being bilingual helps protect against the onset of dementia and Alzheimer’s
disease (Bialystok et al., 2007; Schweizer et al., 2012), a finding that leaves many at a loss for words.
Module 8.3b Quiz:
The Development of Language
Know . . .
1. What is fast mapping?
A. The rapid rate at which chimpanzees learn sign language
B. The ability of children to map concepts to words with only a single example
C. The very short period of time that language input can be useful for language development
D. A major difficulty that people face when affected by Broca’s aphasia

Understand . . .
2. The term “sensitive period” is relevant to language acquisition because
A. exposure to language is needed during this time for language abilities to develop normally.
B. Broca’s area is active only during this period.
C. it is what distinguishes humans from the apes.
D. it indicates that language is an instinct.

Analyze . . .
3. What is the most accurate conclusion from studies of bilingualism and the brain?
A. Being bilingual causes the brain to form a larger number of connections than it normally would.
B. Being bilingual reduces the firing rate of the frontal lobes.
C. Only knowing one language allows people to improve their executive functioning.
D. Being bilingual makes it more likely that a person will have language problems if they suffer brain damage.
Genes, Evolution, and Language
This module began with a discussion of two brain areas that are critical for language production and comprehension: Broca’s area and Wernicke’s area, respectively. But, these brain areas didn’t appear out of nowhere. Rather, genetics and evolutionary pressures led to the development of our language-friendly brains. Given recent advances in our understanding of the human
genome (see Module 3.1 ), it should come as no surprise that researchers are actively searching for the genes involved with language abilities.
Working the Scientific Literacy Model Genes and Language
Given that language is a universal trait of the human species, it likely involves a number of different genes. These genes would, of course, also interact with the environment. In this section we examine whether it is possible that specific genes are related to language.
What do we know about genes and language? Many scientists believe that the evidence is overwhelming that language is a unique feature of the human species, and that language evolved to solve problems related to survival and reproductive fitness. Language adds greater efficiency to thought, allows us to transmit information without requiring us to have direct experience with potentially dangerous situations, and, ultimately, facilitates communicating social needs and desires. Claims that language promotes survival and reproductive success are difficult to test directly with scientific experimentation, but there is a soundness to the logic of the speculation. We can also move beyond speculation and actually examine how genes play a role in human language. As with all complex psychological traits, there are likely many genes associated with language. Nevertheless, amid all of these myriad possibilities, one gene has been identified that is of particular importance.
How can science explain a genetic basis of language? Studies of this gene have primarily focused on the KE family (their name is abbreviated to maintain their confidentiality). Many members of this family have inherited a mutated version of a
gene on chromosome 7 (see Figure 8.21 ; Vargha-Khadem et al., 2005). Each gene has a name—and this one is called FOXP2. All humans carry a copy of the FOXP2 gene, but the KE
family passes down a mutated copy. Those who inherit the mutated copy have great difficulty putting thoughts into words
(Tomblin et al., 2009). Thus, it appears that the physical and chemical processes that FOXP2 codes for are related to language function.
Figure 8.21 Inheritance Pattern for the Mutated FOXP2 Gene in the KE Family
Family members who are “affected” have inherited a mutated form of the FOXP2 gene, which results in difficulty with articulating words. As you can see from the centre of the figure, the mutated gene is traced to a female family member and has been passed on to the individuals of the next two generations. Source: Republished with permission of Nature Publishing Group, from FOXP2 and the
neuroanatomy of speech and language, Fig. 1, Nature Reviews Neuroscience, 6, 131–138 by
Faraneh Vargha-Khadem; David G. Gadian; Andrew Copp; Mortimer Mishkin. Copyright 2005;
permission conveyed through Copyright Clearance Center, Inc.
What evidence indicates that this gene is specifically involved in language? If you were to ask the members of the family who inherited the mutant form of the gene to speak about how to change the batteries in a flashlight, they would be at a loss. A rather jumbled mixture of sounds and words might come out, but nothing that could be easily understood. However, these same individuals have no problem actually performing the task. Their challenges with using language are primarily restricted to the use
of words, not to their ability to think.
Scientists have used brain-imaging methods to further test whether the FOXP2 mutation affects language. One group of researchers compared brain activity of family members who inherited the mutation of FOXP2 with those who did not
(Liégeois et al., 2003). During the brain scans, the participants were asked to generate words themselves, and also to repeat
words back to the experimenters. As you can see from Figure 8.22 , the members of the family who were unaffected by the mutation showed normal brain activity: Broca’s area of the left hemisphere became activated, just as expected. In contrast, Broca’s area in the affected family members was silent, and the brain activity that did occur was unusual for this type of task.
Figure 8.22 Brain Scans Taken While Members of the KE Family Completed a Speech Task
The unaffected group shows a normal pattern of activity in Broca’s area, while the affected group shows an unusual pattern. Source: Republished with permission of Nature Publishing Group, from Language fMRI abnormalities associated with FOXP2 gene mutation, Figure 1, Nature Neuroscience, 6, 1230–1237. Copyright © 2003; permission conveyed through Copyright Clearance Center, Inc.
Can we critically evaluate this evidence? As you have now read, language has multiple components. Being able to articulate words is just one of many aspects of using and understanding language. The research on FOXP2 is very important, but reveals only how a single gene relates to one aspect of language use. There are almost certainly a large
number of different genes working together to produce each component of language. To their credit, FOXP2 researchers are quick to point out that many other genes will need to be identified before we can claim to understand the genetic basis of language; FOXP2 is just the beginning.
It is also worth noting that although the FOXP2 gene affects human speech production, it does occur in other species that do not produce sophisticated language. This gene is found in both mice and birds as well as in humans, and the human version shares a very similar molecular structure to the versions observed in these other species. Interestingly, the molecular structure and activity of the FOXP2 gene in songbirds (unlike non-songbirds) are similar to those in humans, again highlighting its
possible role in producing meaningful sounds (Vargha-Khadem et al., 2005).
Why is this relevant? This work illuminates at least part of the complex relationship between genes and language. Other individual genes that have direct links to language function will likely be discovered as research continues. It is possible that this information could be used to help us further understand the genetic basis of language disorders. The fact that the FOXP2 gene is found in many other species suggests that it may play a role in one of the components
of language rather than being the gene for language. Thus, scientists will have to perform additional research in order to understand why and how human language became so much more complex than that of any other species.
The fact that animals such as songbirds have some of the same language-
related genes as humans suggests that other species may have some language abilities. As it turns out, many monkey species have areas in their brains that are similar to Broca’s and Wernicke’s areas. As in humans, these regions are connected by white-matter pathways, thus allowing them to communicate with
each other (Galaburda & Pandya, 1982). These areas appear to be involved with the control of facial and throat muscles and with identifying when other monkeys have made a vocalization. This is, of course, a far cry from human language. But, the fact that some monkey species have similar “neural hardware” to humans does lead to some interesting speculations about language abilities in the animal kingdom.
Can Animals Use Language?
Psychologists have been studying whether nonhuman species can acquire human language for many decades. Formal studies of language learning in nonhuman species gained momentum in the late 1940s and early 1950s, when psychologists attempted to teach spoken English to a chimpanzee named Viki (Hayes & Hayes, 1951). Viki was cross-fostered, meaning that she was raised as a member of a family that was not of the same species. Like humans, chimps come into the world dependent on adults for care, so the humans who raised Viki were basically foster parents. Although the psychologists learned a lot about how smart chimpanzees can be, they did not learn that Viki was capable of language—she managed to whisper only about four words after several years of trying.
Psychologists who followed in these researchers’ footsteps did not consider the case to be closed. Perhaps Viki’s failure to learn spoken English was a limitation not of the brain, but of physical differences in the vocal tract and tongue that
distinguish humans and chimpanzees. One project that began in the mid-1960s involved teaching chimpanzees to use American Sign Language (ASL). The first chimpanzee involved in this project was named Washoe. The psychologists immersed Washoe in an environment rich with ASL, using signs instead of speaking and keeping at least one adult present and communicating with her throughout the day. By the time she turned two years old, Washoe had acquired about 35 signs through imitation and direct guidance of how to configure and move her hands. Eventually, she learned approximately 200 signs. She was able to generalize signs from one context to another and to use a sign to represent entire categories of objects, not just specific examples. For example, while Washoe learned the sign for the word “open” on a limited number of doors and cupboards, she subsequently signed “open” to many different doors, cupboards, and even her pop bottles. The findings with Washoe were later replicated with
other chimps (Gardner et al., 1989).
Washoe was the first chimpanzee taught to use some of the signs of American Sign Language. Washoe died in 2007 at age 42 and throughout her life challenged many to examine their beliefs about human uniqueness. Photo permission granted by Friends of Washoe
Instead of using sign language, some researchers have developed a completely artificial language to teach to apes. This language consists of symbols called
lexigrams—small keys on a computerized board that represent words and, therefore, can be combined to form complex ideas and phrases. One subject of the research using this language is a bonobo named Kanzi (bonobos are another species of chimpanzee). Kanzi has learned approximately 350 symbols through
training, but he learned his first symbols simply by watching as researchers attempted to teach his mother how to use the language. In addition to the lexigrams he produces, Kanzi seems to recognize about 3000 spoken words. His
trainers claim that Kanzi’s skills constitute language (Savage-Rumbaugh & Lewin, 1994). They argue that he can understand symbols and at least some syntax; that he acquired symbols simply by being around others who used them; and that he produced symbols without specific training or reinforcement. Those who work with Kanzi conclude that his communication skills are quite similar to those of a young human in terms of both the elements of language (semantics and syntax) and the acquisition of language (natural and without effortful training).
Despite their ability to communicate in complex ways, debate continues to swirl about whether these animals are using language. Many language researchers point out that chimpanzees’ signing and artificial language use is very different from how humans use language. Is the vastness of the difference important? Is using 200 signs different in some critical way from being able to use 4000 signs,
roughly the number found in the ASL dictionary (Stokoe et al., 1976)? If our only criterion for whether a communication system constitutes language is the number of words used, then we can say that nonhuman species acquire some language skills after extensive training. But as you have learned in this module, human language involves more than just using words. In particular, our manipulation of phonemes, morphemes, and syntax allows us to utter an infinite number of words and sentences, thereby conveying an infinite number of thoughts.
Kanzi is a bonobo chimpanzee that has learned to use an artificial language consisting of graphical symbols that correspond to words. Kanzi can type out responses by pushing buttons with these symbols, shown in this photo. Researchers are also interested in Kanzi’s ability to understand spoken English (which is transmitted to the headphones by an experimenter who is not in the room). MICHAEL NICHOLS/National Geographic Creative
Some researchers who have worked closely with language-trained apes observed too many critical differences between humans and chimps to conclude
that language extends beyond our species (Seidenberg & Petitto, 1979). For example:
One major argument is that apes are communicating only with symbols, not with the phrase-based syntax used by humans. Although some evidence of syntax has been reported, the majority of their “utterances” consist of single signs, a couple of signs strung together, or apparently random sequences.
There is little reputable experimental evidence showing that apes pass their language skills to other apes.
Productivity—creating new words (gestures) and using existing gestures to name new objects or events—is rare, if it occurs at all.
Some of the researchers become very engaged in the lives of these animals and talk about them as friends and family members (Fouts, 1997; Savage-Rumbaugh & Lewin, 1994). This tendency has led critics to wonder to what extent personal attachments to the animals might interfere with the objectivity of the data.
It must be pointed out that the communication systems of different animals have their own adaptive functions. It is possible that some species simply didn’t have a need to develop a complex form of language. However, in the case of chimpanzees, this point doesn’t hold true. Both humans and chimpanzees evolved in small groups in (for the most part) similar parts of the world; thus, chimpanzees would have faced many of the same social and environmental pressures as humans. However, their brains, although quite sophisticated, are not as large or well-developed as those of humans. It seems, therefore, that a major factor in humanity’s unique language abilities is the wonderful complexity and plasticity of the human brain.
Module 8.3c Quiz:
Genes, Evolution, and Language
Know . . .
1. Which nonhuman species has had the greatest success at learning a human language?
A. Border collies
B. Bonobo chimpanzees
C. Dolphins
D. Rhesus monkeys

Understand . . .
2. Studies of the KE family and the FOXP2 gene indicate that
A. language is controlled entirely by a single gene found on chromosome 7.
B. language is still fluent despite a mutation to this gene.
C. this particular gene is related to one specific aspect of language.
D. mutations affecting this gene lead to highly expressive language skills.

Analyze . . .
3. What is the most accurate conclusion from research conducted on primate language abilities?
A. Primates can learn some aspects of human language, though many differences remain.
B. Primates can learn human language in full.
C. Primates cannot learn human language in any way.
D. Primates can respond to verbal commands, but there is no evidence they can respond to visual cues such as images or hand signals.
Module 8.3 Summary
Know . . . the key terminology from the study of language. (8.3a)
aphasia
Broca’s area
cross-foster
fast mapping
language
morpheme
phoneme
pragmatics
semantics
syntax
Wernicke’s area

Understand . . . how language is structured. (8.3b)
Sentences are broken down into words that are arranged according to grammatical rules (syntax). The relationship between words and their meaning is referred to as semantics. Words can be broken down into morphemes, the smallest meaningful units of speech, and phonemes, the smallest sound units that make up speech.

Understand . . . how genes and the brain are involved in language use. (8.3c)
Studies of the KE family show that the FOXP2 gene is involved in our ability to speak. However, mutation to this gene does not necessarily impair people’s ability to think. Thus, the FOXP2 gene seems to be important for just one of many aspects of human language. Multiple brain areas are involved in language—two particularly important ones are Broca’s and Wernicke’s areas.

Apply . . . your knowledge to distinguish between units of language such as phonemes and morphemes. (8.3d)
Apply Activity: Which of these represent a single phoneme and which represent a morpheme? Do any of them represent both?
1. /dis/
2. /s/
3. /k/

Analyze . . . whether species other than humans are able to use language. (8.3e)
Nonhuman species certainly seem capable of acquiring certain aspects of human language. Studies with apes have shown that they can learn and use some sign language or, in the case of Kanzi, an artificial language system involving arbitrary symbols. However, critics have pointed out that many differences between human and nonhuman language use remain.
Chapter 9 Intelligence Testing
9.1 Measuring Intelligence Different Approaches to Intelligence Testing 351
Module 9.1a Quiz 355
The Checkered Past of Intelligence Testing 356
Working the Scientific Literacy Model: Beliefs about Intelligence 358
Module 9.1b Quiz 360
Module 9.1 Summary 361
9.2 Understanding Intelligence Intelligence as a Single, General Ability 363
Module 9.2a Quiz 365
Intelligence as Multiple, Specific Abilities 365
Working the Scientific Literacy Model: Testing for Fluid and Crystallized Intelligence 366
Module 9.2b Quiz 371
The Battle of the Sexes 371
Module 9.2c Quiz 372
Module 9.2 Summary 373
9.3 Biological, Environmental, and Behavioural Influences on Intelligence Biological Influences on Intelligence 375
Working the Scientific Literacy Model: Brain Size and Intelligence 377
Module 9.3a Quiz 379
Environmental Influences on Intelligence 379
Module 9.3b Quiz 382
Behavioural Influences on Intelligence 382
Module 9.3c Quiz 384
Module 9.3 Summary 384
Module 9.1 Measuring Intelligence
Leilani Muir, who passed away in Alberta in 2016. The Canadian Press/Edmonton Journal
Learning Objectives
Know . . . the key terminology associated with intelligence and intelligence testing. (9.1a)
Understand . . . the reasoning behind the eugenics movement and its use of intelligence tests. (9.1b)
Apply . . . the concepts of entity theory and incremental theory to help kids succeed in school. (9.1c)
Analyze . . . why it is difficult to remove all cultural bias from intelligence testing. (9.1d)
Leilani Muir kept trying to get pregnant, but to no avail. Finally, frustrated, she went to her doctor to see if there was a medical explanation. It turned out that there was, but not one that she expected; the doctors found that her fallopian tubes had been surgically destroyed, permanently sterilizing her.
How could someone’s fallopian tubes be destroyed without her knowing? Tragically, forced sterilization was not an uncommon practice in the United States and parts of Canada for almost half of the 20th century. In 1928, Alberta passed the Sexual Sterilization Act, giving doctors the power to sterilize people deemed to be “genetically unfit,” without their consent. One of the criteria that could qualify a person as genetically unfit was a low score on an IQ test, which was the reason for Leilani’s own sterilization.
Leilani Muir was one of the tens of thousands of victims of the misguided application of intelligence tests. Born into a poor farming family near Calgary, Alberta, Leilani was entered by her parents into the Provincial Training School for Mental Defectives when she was 11. A few years later, when given an intelligence test, she scored 64, which was below the 70-point cut-off required by law for forced sterilization. When she was 14, she was told by doctors that she needed to have her appendix removed. Trusting the good doctors, she went under the knife, never knowing the full extent of the surgery she was about to undergo. After the surgery, she was never informed that her fallopian tubes had been destroyed, and had to find out on her own after her many attempts to get pregnant. Later in her life, Leilani had her IQ re-tested. She scored 89, which is close to average.
In 1996, Leilani received some measure of justice. She sued the government of Alberta and won her case, becoming the first person to receive compensation for injustices committed under the Sexual Sterilization Act. For her lifetime of not being able to have children, she received almost $750 000 in damages.
Focus Questions
1. How have intelligence tests been misused in modern society?
2. Why do we have the types of intelligence tests that we have?
What happened to Leilani Muir was terrible and should never have happened. But this story also serves to drive home an extremely important truth about
psychology, and science in general—it is important to measure things properly. This may sound trite, but Leilani’s story underscores the importance of ensuring that the research carried out in psychology and other disciplines is as rigorous as possible. Research isn’t just about writing complicated articles that only scientists and academics read; its real-world implications may ripple through society and affect people’s lives in countless ways. In Leilani’s case, her misfortune was the result of both inhumane policies passed by government and the failure to accurately measure her intelligence. Intelligence is not something like the length or mass of a physical object; there is no “objective” standard to which we can compare our measures to see if they are accurate. Instead, we have to rely upon rigorous testing of our methodologies.
So, how can we measure intelligence accurately? What does science say? As you will see in this module, this question is not easy to answer. Intelligence
measures have a very checkered past, making the whole notion of intelligence one of the most hotly contested areas in all of psychology.
Different Approaches to Intelligence Testing
Intelligence is a surprisingly difficult concept to define. You undoubtedly know two people who earn similar grades even though one seems to be “smarter” than the other. You likely also know people who do very well in school and have “book smarts,” but have difficulty in many other aspects of life, perhaps lacking “street smarts.” Furthermore, you may perceive a person to be intelligent or unintelligent, but how do you know your perceptions are not biased by their confidence, social skills, or other qualities? The history of psychology has seen many different attempts to define and measure intelligence. In this module, we will examine some of the more influential of these attempts, and then explore some of the important social implications of intelligence testing.
Francis Galton believed that intelligence was something people inherit. Thus, he believed that an individual’s relatives were a better predictor of intelligence than practice and effort. Mary Evans Picture Library/Alamy Stock Photo
Intelligence and Perception: Galton’s
Anthropometric Approach
The systematic attempt to measure intelligence in the modern era began with Francis Galton (1822–1911) (who is often given the appellation “Sir,” because he was knighted in 1909). Galton believed that because people learn about the world through their senses, those with superior sensory abilities would be able to learn more about it. Thus, he argued, sensory abilities should be an indicator of a person’s intelligence. In 1884, Galton created a set of 17 sensory tests, such as
measuring the highest and lowest sounds people could hear or the ability to tell the difference between objects of slightly different weights, and began testing
people’s abilities in his anthropometric laboratory. Anthropometrics (literally, “the measurement of people”) referred to methods of measuring physical and mental variation in humans. Galton’s lab attracted many visitors, allowing him to measure the sensory abilities of thousands of people in England (Gillham, 2001).
One of Galton’s colleagues, James McKeen Cattell, took his tests to the United States and began measuring the abilities of university students. This research revealed, however, that people’s abilities on different sensory tests were not correlated with each other, or were only very weakly correlated. For example, having exceptional eyesight seemed to signify little about whether one would have exceptional hearing. Clearly, this was a problem, because if two measures don’t correlate well with each other, then they can’t both be indicators of the same thing, in this case, intelligence. Cattell also found that students’ scores on the sensory tests did not predict their grades, which one would expect would also be an indicator of intelligence. As a result, Galton’s approach to measuring intelligence was generally abandoned.
Intelligence and Thinking: The Stanford–Binet Test
In contrast to Galton, a prominent French psychologist, Alfred Binet, argued that intelligence should be indicated by more complex thinking processes, such as memory, attention, and comprehension. This view has influenced most
intelligence researchers up to the present day; they define intelligence as the ability to think, understand, reason, and adapt to or overcome obstacles (Neisser et al., 1996). From this perspective, intelligence reflects how well people are able to reason and solve problems, plus their accumulated knowledge.
In 1904, Binet and his colleague, Theodore Simon, were hired by the French government to develop a test to measure intelligence. At the end of the 19th century, institutional reforms in France had made primary school education available to all children. As a result, French educators struggled to deliver a
curriculum to students ranging from the very bright to those who found school exceptionally challenging. To respond to this problem, the French government wanted an objective way of identifying “retarded” children who would benefit from
specialized education (Siegler, 1992).
Binet and Simon experimented with a wide variety of tasks, trying to capture the complex thinking processes that presumably comprised intelligence. They settled on thirty tasks, arranged in order of increasing difficulty. For example, simple tasks included repeating sentences and defining common words like “house.” More difficult tasks included constructing sentences using combinations of certain words (e.g., Paris, river, fortune), reproducing drawings from memory, and being able to explain how two things differed from each other. Very difficult tasks included being able to define abstract concepts and to logically reason
through a problem (Fancher, 1985).
Binet and Simon gave their test to samples of children from different age groups to establish the average test score for each age. Binet argued that a child’s test
score measured her mental age, the average intellectual ability score for children of a specific age. For example, if a 7-year-old’s score was the same as the average score for 7-year-olds, she would have a mental age of 7, whereas if it was the same as the average score for 10-year-olds, she would have a mental age of 10, even though her chronological age would be 7 in both cases. A child with a mental age lower than her chronological age would be expected to struggle in school and to require remedial education.
The practicality of Binet and Simon’s test was apparent to others, and soon researchers in the United States began to adapt it for their own use. Lewis Terman at Stanford University adapted the test for American children and established average scores for each age level by administering the test to thousands of children. In 1916, he published the first version of his adapted test,
and named it the Stanford-Binet Intelligence Scale (Siegler, 1992).
Terman and others almost immediately began describing the Stanford-Binet test as a test intended to measure innate levels of intelligence. This differed substantially from Binet, who had viewed his test as a measure of a child’s
current abilities, not as a measure of an innate capacity. There is a crucial difference between believing that test scores reflect a changeable ability and believing that they reflect an innate capacity that is presumably fixed. The interpretation of intelligence as an innate ability set the stage for the incredibly misguided use of intelligence tests in the decades that followed, as we discuss later in this module.
To better reflect people’s presumably innate levels of intelligence, Terman
adopted William Stern’s concept of the intelligence quotient, or IQ, a label that has stuck to the present day. IQ is calculated by taking a person’s mental age, dividing it by his chronological age, and then multiplying by 100. For example, a 10-year-old child with a mental age of 7 would have an IQ of 7/10 × 100 = 70. On the other hand, if a child’s mental and chronological ages were the same, the IQ score would always be 100, regardless of the age of the child; thus, 100 became the standard IQ for the “average child.”
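To see the arithmetic behind Stern’s ratio formula at a glance, here is a minimal sketch in Python; the function name and structure are purely illustrative and are not part of any actual test.

```python
def ratio_iq(mental_age, chronological_age):
    """Stern's ratio IQ: mental age divided by chronological age, times 100."""
    return (mental_age / chronological_age) * 100

# Worked examples matching the text:
print(ratio_iq(7, 10))  # 70.0  -- a 10-year-old with a mental age of 7
print(ratio_iq(7, 7))   # 100.0 -- mental and chronological ages are equal
```

Notice that the same mental age of 7 yields very different scores depending on chronological age, which is exactly the property that breaks down when the formula is applied to adults, as discussed next.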
To see the conceptual difference implied by these two ways of reporting intelligence, consider the following two statements. Does one sound more optimistic than the other?
He has a mental age of 7, so he is 3 years behind. He has an IQ of 70, so he is 30 points below average.
To many people, being 3 years behind in mental age seems changeable; with sufficient work and assistance, it feels like such a child should be able to catch up to his peers. On the other hand, having an IQ that’s 30 points below average sounds like the diagnosis of a permanent condition; such a person seems doomed to be “unintelligent” forever.
One other odd feature of both Binet’s mental age concept and Stern’s IQ was that they didn’t generalize very well to adult populations. For example, are 80-year-olds twice as intelligent as 40-year-olds? After all, an 80-year-old who was as intelligent as an average 40-year-old would have an IQ of 50 (40/80 × 100 = 50); clearly, this doesn’t make sense. Similarly, imagine a 30-year-old with a mental age of 30; her IQ would be 100. But in 10 years, when she was 40, if her
mental age stayed at 30, she would have an IQ of only 75 (30/40 × 100 = 75).
Given that IQ scores remain constant after about age 16 (Eysenck, 1994), this would mean that adults get progressively less smart with every year that they age. Although children may sometimes think exactly this about their parents, their parents would clearly have a different opinion.
To adjust for this problem, psychologists began to use a different measure,
deviation IQ, for calculating the IQ of adults (Wechsler, 1939). The deviation IQ
is calculated by comparing the person’s test score with the average score for people of the same age. In order to calculate deviation IQs, one must first establish the norm, or average, for a population. To do so, psychologists administer tests to huge numbers of people and use these scores to estimate the average for people of different ages. These averages are then used as baselines against which to compare a person. Because “average” is defined to be 100, a deviation IQ of 100 means that the person is average, whereas an IQ of 115
would mean that the person’s IQ is above average (see Figure 9.1 ). One advantage of using deviation IQ scores is that it avoids the problem of IQ scores that consistently decline with age because scores are calculated relative to others of the same age.
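To make the contrast with the ratio formula concrete, here is a minimal sketch of a deviation IQ calculation. The rescaling to a mean of 100 with a standard deviation of 15 reflects the convention used by modern tests such as the WAIS, and the age-group norms in the example are invented for illustration only.

```python
def deviation_iq(raw_score, age_group_mean, age_group_sd):
    """Deviation IQ: express a raw test score relative to the norms for the
    test taker's own age group, rescaled so the population average is 100
    and the standard deviation is 15 (a common convention, assumed here)."""
    z = (raw_score - age_group_mean) / age_group_sd  # distance from the age norm
    return 100 + 15 * z

# Hypothetical norms: suppose 40-year-olds average 40 items correct (SD = 6).
# A 40-year-old scoring 46 is one standard deviation above the age norm:
print(deviation_iq(46, 40, 6))  # 115.0
print(deviation_iq(40, 40, 6))  # 100.0 -- exactly average for one's age
```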
Figure 9.1 The Normal Distribution of Scores for a Standardized Intelligence Test
The Wechsler Adult Intelligence Scale
In an ironic twist, the Wechsler Adult Intelligence Scale (WAIS), the most common intelligence test in use today for adolescents and adults, was developed by a man who himself had been labelled as “feeble minded” by intelligence tests after immigrating to the United States from Romania at the age of nine. David Wechsler originally developed the scale in 1955 and it is now in its fourth edition.
The WAIS provides a single IQ score for each test taker—the Full Scale IQ—but also breaks intelligence into a General Ability Index (GAI) and a Cognitive Proficiency Index (CPI), as shown in Figure 9.2 . The GAI is computed from scores on the Verbal Comprehension and Perceptual Reasoning indices. These measures tap into an individual’s intellectual abilities, but without placing much emphasis on how fast he can solve problems and make decisions. The CPI, in contrast, is based on the Working Memory and Processing Speed subtests. It is included in the Full Scale IQ category because greater working memory capacity and processing speed allow more cognitive resources to be devoted to
reasoning and solving problems. Figure 9.3 shows some sample test items from the WAIS.
Figure 9.2 Subscales of the Wechsler Adult Intelligence Scale
Figure 9.3 Types of Problems Used to Measure Intelligence These hypothetical problems are consistent with the types seen on the Wechsler Adult Intelligence Scale.
Raven’s Progressive Matrices
Although the Stanford-Binet test and the WAIS have been widely used across North America, they have also been criticized by a number of researchers. One of the key problems with many intelligence tests, such as these, is that questions are often biased to favour people who come from the test developer’s culture or who primarily speak the test developer’s language. This cultural bias puts people from different cultures, social classes, educational levels, and primary languages at an immediate disadvantage. Clearly, this is a problem, because a person’s “intelligence” should not be affected by whether they are fluent in English or familiar with Western culture. In response to this problem, psychologists have tried to develop “culture-free” tests.
In the 1930s, John Raven developed Raven’s Progressive Matrices, an intelligence test that is based on pictures, not words, thus making it relatively unaffected by language or cultural background. The main set of tasks found in Raven’s Progressive Matrices measures the extent to which test takers can see patterns in the shapes and colours within a matrix and then determine which
shape or colour would complete the pattern (see Figure 9.4 ).
Figure 9.4 Sample Problem from Raven’s Progressive Matrices Which possible pattern (1–8) should go in the blank space? Check your answer at the bottom of the page. Source: “Sample Problem from Raven’s Progressive Matrices,” NCS Pearson, 1998.
Answer to Figure 9.4 : Pattern 6.
Module 9.1a Quiz:
Different Approaches to Intelligence Testing
Know . . .
1. Galton developed anthropometrics as a means to measure intelligence based on ________.
A. creativity
B. perceptual abilities
C. physical size and body type
D. brain convolution

Understand . . .
2. The deviation IQ is calculated by comparing an individual’s test score
A. at one point in time to that same person’s test score at a different point in time.
B. to that same person’s test score from a different IQ test; the “deviation” between the tests is a measure of whether either test is inaccurate.
C. to that same individual’s school grades.
D. to the average score for other people who are the same age.

3. In an attempt to be culturally unbiased, Raven’s Progressive Matrices relies upon what types of questions?
A. Verbal analogies
B. Spatial calculations
C. Visual patterns
D. Practical problems that are encountered in every culture

Apply . . .
4. If someone’s mental age is double her chronological age, what would her IQ be?
A. 100
B. 50
C. 200
D. Cannot be determined with this information
The Checkered Past of Intelligence Testing
IQ testing in North America got a significant boost during World War I. Lewis Terman, developer of the Stanford-Binet test, worked with the United States military to develop a set of intelligence tests that could be used to identify which military recruits had the potential to become officers and which should be streamed into non-officer roles. The intention was to make the officer selection process more objective, thereby increasing the efficiency and effectiveness of officer training programs. Following World War I, Terman argued for the use of intelligence tests in schools for similar purposes—identifying students who should be channelled into more “advanced” academic topics that would prepare them for higher education, and others who should be channelled into more skill-based topics that would prepare them for direct entry into the skilled trades and the general workforce. Armed with his purportedly objective IQ tests, he was a man on a mission to improve society. However, the way he went about doing so was rife with problems.
IQ Testing and the Eugenics Movement
In order to understand the logic of Terman and his followers, it is important to examine the larger societal context in which his theories were developed. The end of the 19th and beginning of the 20th centuries was a remarkable time in human history. A few centuries of European colonialism had spread Western influence through much of the world. The Industrial Revolution, which was concentrated in the West, compounded this, making Western nations more powerful militarily, technologically, and economically. And in the sciences, Darwin’s paradigm-shattering work on the origin of species firmly established the
idea of evolution by natural selection (see Module 3.1 ), permanently transforming our scientific understanding of the living world.
Although an exciting time for the advancement of human knowledge, this confluence of events also had some very negative consequences, especially in terms of how colonialism affected non-Western cultures and people of non-White ethnicities. In this climate, the stage was set for social “visionaries” to apply Darwin’s ideas to human culture, and to explain the military–economic–technological dominance of Western cultures by assuming that Westerners (and especially White people) were genetically superior. This explanation served as a handy justification for the colonial powers’ imposition of Western-European values on other cultures; in fact, the colonizers were often viewed as actually doing other cultures a favour, helping to “civilize” them by assimilating them into a “superior” cultural system.
The social Darwinism that emerged gave rise to one of the uglier social movements of recent times—eugenics, which means “good genes” (Gillham, 2001). The history of eugenics is intimately intertwined with the history of intelligence testing. In fact, Francis Galton himself, a cousin of Charles Darwin,
coined the term eugenics, gaining credibility for his ideas after making an extensive study of the heritability of intelligence.
Many people viewed eugenics as a way to “improve” the human gene pool. Their definition of “improve” is certainly up for debate. American Philosophical Society
Galton noticed that many members of his own family were successful businessmen and some, like Charles Darwin, eminent scientists. He studied other families and concluded that eminence ran in families, which he believed
was due to “good breeding.” Although families share more than genes, such as wealth, privilege, and social status, Galton believed that genes were the basis of
the family patterns he observed (Fancher, 2009).
Galton’s views influenced Lewis Terman, who promoted an explicitly eugenic philosophy; he argued for the superiority of his own “race,” and in the interest of “improving” society, believed that his IQ tests provided a strong empirical justification for eugenic practices. One such practice was the forced sterilization of people like Leilani Muir, whom we discussed at the beginning of this module.
Supporters of eugenics often noted that its logic was based on research and philosophy from many different fields. Doing so put the focus on the abstract intellectual characteristics of eugenics rather than on some of its disturbing, real- world implications. American Philosophical Society
As Terman administered his tests to more people, it seemed like his race-based beliefs were verified by his data. Simply put, people from other cultures and other apparent ethnic backgrounds didn’t score as highly on his tests as did White people from the West (i.e., the U.S., Canada, and Western Europe, for the most
part). For example, 40% of new immigrants to Canada and the United States
scored so low they were classified as “feebleminded” (Kevles, 1985). As a result, Terman concluded that people from non-Western cultures and non-White ethnicities generally had lower IQs, and he therefore argued that it was appropriate (even desirable) to stream them into less challenging academic pursuits and jobs of lower status. For example, he wrote, “High-grade or border-line deficiency . . . is very, very common among Spanish-Indian and Mexican families of the Southwest and also among negroes. Their dullness seems to be racial, or at least inherent in the family stocks from which they come. . . . Children of this group should be segregated into separate classes. . . . They cannot master abstractions but they can often be made into efficient workers . . . from a eugenic point of view they constitute a grave problem because of their
unusually prolific breeding” (Terman, 1916, pp. 91–92).
Such ideas gained enough popularity that forced sterilization was carried out in at least 30 states and two Canadian provinces, lasting for almost half a century.
In Alberta, the Sexual Sterilization Act remained in force until 1972, by which time more than 2800 people had undergone sterilization procedures in that province alone. And as you might have guessed, new immigrants, the poor, Native people, and Black people were sterilized far more often than middle and upper class White people.
The Race and IQ Controversy
One of the reasons intelligence tests played so well into the agendas of eugenicists is that, from Terman onwards, researchers over the last century have consistently found differences in the IQ scores of people from different ethnic groups. Before we go any further, we want to acknowledge that this is a difficult, and potentially upsetting, set of research findings. However, it’s important to take a close look at this research, and to understand the controversy that surrounds it, because these findings are well known in the world of intelligence testing and could be easily misused by those who are motivated by prejudiced views. As you will see, when you take a close look at the science, the story is not nearly as clear as it may appear at first glance.
The root of this issue about “race and IQ” is that there is a clear and reliable hierarchy of IQ scores across different ethnic groups. This was first discovered in the early 1900s, and by the 1920s, the United States passed legislation making it standard to administer intelligence tests to new immigrants arriving at Ellis Island for entry into the country. The result was that overwhelming numbers of immigrants were officially classified as “morons” or “feebleminded.” Some psychologists suspected that these tests were unfair, and that the low scores of these minority groups might be due to language barriers and a lack of knowledge of American culture. Nevertheless, as intelligence tests were developed that were increasingly culturally sensitive—such as Raven’s Progressive Matrices— these differences persisted. Specifically, Asian people tended to score the highest, followed by Whites, followed by Latinos and Blacks; this has been found
in samples in several parts of the world, including Canada (Rushton & Jensen, 2005). Other researchers have found that Native people in Canada score lower as a group than Canadians with European ancestry (e.g., Beiser & Gotowiec, 2000).
The race–IQ research hit the general public in 1994 with the publication of The Bell Curve (Herrnstein & Murray, 1994), which became a bestseller. This book focused on over two decades of research that replicated the race differences in IQ that we mentioned earlier. Herrnstein and Murray also argued that human intelligence is a strong predictor of many different personal and social outcomes, such as workplace performance, income, and the likelihood of being involved in
criminal activities. Additionally, The Bell Curve argued that those of high intelligence were reproducing less than those of low intelligence, leading to a dangerous population trend in the United States. They believed that America was becoming an increasingly divided society, populated by a small class of “cognitive elite,” and a large underclass with lower intelligence. They argued that
a healthy society would be a meritocracy, in which people who had the most ability and worked the hardest would receive the most wealth, power, and status. Those who didn’t have what it took to rise to the top, such as those with low IQs, should be allowed to live out their fates, and should not therefore be helped by programs such as Head Start, affirmative action programs, or scholarships for members of visible minorities. Instead, the system should simply allow people with the most demonstrable merit to rise to the top, regardless of their cultural or
ethnic backgrounds. Although many people agree with the idea of a meritocracy in principle, a huge problem arises in implementing a meritocracy when the system is set up to systematically give certain groups advantages over other groups; in this situation, assessing true “merit” is far from straightforward.
As you can imagine, research on the race–IQ gap sparked bitter controversy. Within the academic world, some researchers have claimed that these findings
are valid (e.g., Gottfredson, 2005), whereas others have argued that these results are based on flawed methodologies and poor measurements (e.g.,
Lieberman, 2001; Nisbett, 2005). Others have sought to discredit Herrnstein and Murray’s conclusions, in particular their argument that the differences in IQ scores between ethnic groups mean that there are inherent, genetic differences in intelligence between the groups. Within the general public, reaction was similarly mixed; however, this research does get used by some people to justify policies such as limiting immigration, discontinuing affirmative action programs, and otherwise working to overturn decades of progress made in the fight for civil rights and equality.
Problems with the Racial Superiority Interpretation
In many ways, the simplest critique of the racial superiority interpretation of these test score differences is that the tests themselves are culturally biased. This critique was lodged against intelligence tests from the time of Terman and, as we discussed earlier, a considerable amount of research focused on creating tests that were not biased due to language and culture. But in spite of all this work, the test score differences between ethnic groups remained.
A more subtle critique was that it wasn’t necessarily the tests that were biased, but the very process of testing itself. If people in minority groups are less familiar with standardized tests, if they are less motivated to do well on the tests, or if they are less able to focus on performing well during the testing sessions, they will be more likely to produce lower test scores. This indeed seems to be the case; researchers have found that cultural background affects many aspects of the testing process including how comfortable people are in a formal testing environment, how motivated they are to perform well on such tests, and their
ability to establish rapport with the test administrators (Anastasi & Urbina, 1996).
Research has also indicated that the IQ differences may be due to a process
known as stereotype threat, which occurs when negative stereotypes about a group cause group members to underperform on ability tests (Steele, 1997). In other words, if a Black person is reminded of the stereotype that Black people perform more poorly than White people on intelligence tests, she may end up scoring lower on that test as a result. Researchers have identified at least three reasons why this may happen. First, stereotype threat increases arousal due to the fact that individuals are aware of the negative stereotype about their group, and are concerned that a poor performance may reflect poorly on their group; this arousal then undermines their test performance. Second, stereotype threat causes people to become more self-focused, paying more attention to how well they are performing; this leaves fewer cognitive resources for them to focus on the test itself. Third, stereotype threat increases the tendency for people to actively try to inhibit negative thoughts they may have, which also reduces the
cognitive resources that could otherwise be used to focus on the test (Schmader et al., 2008). There have now been more than 200 studies on stereotype threat (Nisbett et al., 2012), establishing it as a reliable phenomenon that regularly suppresses the test scores of members of stereotyped groups.
These concerns cast doubt on the validity of IQ scores for members of non-White ethnic and cultural groups, suggesting that differences in test scores do not necessarily reflect differences in the underlying ability being tested (i.e., intelligence), but instead may reflect other factors, such as linguistic or cultural bias in the testing situation.
Another important critique has been lodged against the race–IQ research, arguing that even if one believes that the tests are valid and that there are intelligence differences between groups in society, these may not be the result of innate, genetic differences between the groups. For example, consider the circumstances that poor people and ethnic minorities face in countries like Canada or the United States. People from such groups tend to experience a host of factors that contribute to poorer cognitive and neurological development, such
as poorer nutrition, greater stress, lower-quality schools, higher rates of illness
(Acevedo-Garcia et al., 2008) combined with reduced access to medical treatment, and greater exposure to toxins such as lead (Dilworth-Bart & Moore, 2006).
One additional, subtle factor that may interfere with the test performances of people from disadvantaged groups is that the life experiences of people in those groups may encourage them to adopt certain beliefs about themselves, which then interfere with their motivations to perform their best. For example, if early experiences in educational settings lead people to believe that they are not intelligent, and that this is a fixed quality, they will tend to believe that there is little they can do to change their own intelligence, and as a result, they won’t try
very hard to do so. However, recent research suggests that it is possible to improve one’s intelligence—but one has to believe this in order to take the necessary steps to make it happen.
Working the Scientific Literacy Model Beliefs about Intelligence
Think of something you’re not very good at (or maybe have never even tried), like juggling knives, solving Sudoku puzzles, or speaking Gaelic. Most likely, you would expect that even if your initial attempts didn’t go well, with practice you could get better.
Now think about how smart you are. Do you think you could make yourself smarter? Do you ever say things like “I’m no good at math,” or “I just can’t do multiple choice tests?” Do you think about these abilities the same way that you think about knife- juggling?
Many people hold implicit beliefs that their intelligence level is relatively fixed and find it surprising that intelligence is, in fact, highly changeable. Ironically, this mistaken belief itself will tend to limit people’s potential to change their own intelligence. This is an
especially important issue for students, as children’s self-perceptions of their mental abilities have a very strong influence
on their academic performance (Greven et al., 2009).
What do we know about the kinds of beliefs that may affect test scores? Research into this phenomenon has helped to shed light on the frustrating mystery of why some people seem to consistently fall
short of reaching their potential. Carol Dweck (2002) has found that people seem to hold one of two theories about the nature of
intelligence. They may hold an entity theory: the belief that intelligence is a fixed characteristic and relatively difficult (or impossible) to change; or they may hold an incremental theory: the belief that intelligence can be shaped by experiences, practice, and effort. Whether one holds to an entity theory or incremental theory has powerful effects on one’s academic performance.
How can science test whether beliefs affect performance? In experiments by Dweck and her colleagues, students were identified as holding either entity theory or incremental theory beliefs. The students had the chance to answer 476 general knowledge questions dealing with topics such as history, literature, math, and geography. They received immediate feedback on whether their answers were correct or incorrect. Those who held entity beliefs were more likely to give up in the face of highly challenging problems, and they were likely to withdraw from situations that resulted in failure. These individuals seemed to believe that intelligence was something you either had, or you didn’t; thus, when encountering difficult problems or feelings of failure, they seemed to conclude “Well, I guess I don’t
have it,” and as a result, gave up trying (Mangels et al., 2006). As Homer Simpson has said, “Kids, you tried your best and you
failed miserably. The lesson is, never try” (Richdale & Kirkland, 1994). To the entity theorist, difficulty is a sign of inadequacy.
In comparison, people with incremental views of intelligence were
more resilient (Mangels et al., 2006), continuing to work hard even when faced with challenges and failures. After all, if intelligence and ability can change, then rather than getting discouraged by difficulties, one should keep working hard, improving one’s abilities.
Because resilience is such a desirable trait, Dweck and her colleagues tested a group of junior high students to see whether
incremental views could be taught (Blackwell et al., 2007). In a randomized, controlled experiment, they taught one group of Grade 7 students incremental theory—that they could control and change their abilities. This group’s grades increased over the school year, whereas the control group’s grades actually declined
(Figure 9.5).
Figure 9.5 Personal Beliefs Influence Grades
Students who held incremental views of intelligence (i.e., the belief that intelligence can change with effort) show improved grades in math compared to students who believed that
intelligence was an unchanging entity (Blackwell et al., 2007).
Source: From "Implicit theories of intelligence predict achievement across an adolescent transition: A longitudinal study and an intervention," Child Development, Vol. 78, No. 1, pp. 246–263, by Lisa S. Blackwell, Kali H. Trzesniewski, and Carol Sorich Dweck. Copyright © 2007 by John Wiley & Sons, Inc. Reproduced by permission of John Wiley & Sons, Inc.
The moral of the story? If you think you can, you might; but if you think you can’t, you won’t.
Can we critically evaluate this research? These findings suggest that it is desirable to help people adopt incremental beliefs about their abilities. However, is this always for the best? What if, in some situations, it is true that no matter how hard a person tries, he or she is unlikely to succeed, and continuing to try at all costs may be detrimental to the person’s well-being, or may close the door on other opportunities that may have turned out better? At what point do we encourage people to be more “realistic” and to accept their limitations? So far, these remain unanswered questions in this literature.
An additional difficulty surrounding these studies is that it is not fully clear what mechanisms might be causing the improvements. Does the incremental view of intelligence lead to increased attention, effort, and time invested in studying? Does it lead to less-critical self-judgments following failure experiences? Or, does it have a positive effect on mood, which has been shown to
improve performance on tests of perception and creativity (Isen et al., 1987)? In order to better understand why these mindsets work the way they do, and perhaps, how to apply them more effectively, a great deal of research is needed to determine which mechanisms are operating in which circumstances. However, regardless of the mechanism(s) involved, the fact that it is possible to help students by changing their view of intelligence could be a powerful force for educational change in the future.
Why is this relevant?
This research has huge potential to be applied in schools and to become a part of standard parenting practice. Teaching people to adopt the view that intelligence and other abilities are trainable skills should give them a greater feeling of control over their lives, strengthen their motivations, enhance their resilience to difficulty, and improve their goal-striving success. Carol Dweck and Lisa Sorich Blackwell have designed a program called Brainology to teach students from elementary through high school that the brain can be trained and strengthened through practice. They hope that programs such as this can counteract the disempowering effects of stereotypes by helping members of stereotyped groups to have greater resilience and to avoid succumbing to negative beliefs about themselves. Not only is intelligence changeable, as this research shows, but perhaps society itself can be changed through the widespread application of this research.
Module 9.1b Quiz: The Checkered Past of Intelligence Testing

Know . . .
1. People who believe that intelligence is relatively fixed are said to advocate a(n) ______ theory of intelligence.
A. incremental
B. entity
C. sexist
D. hereditary

2. When people are aware of stereotypes about their social group, and their social group membership is brought to their minds, they may experience a reduction in their performance on a stereotype-relevant task. This is known as ______.
A. incremental intelligence
B. hereditary intelligence
C. stereotype threat
D. intelligence discrimination

3. Eugenics was a movement that promoted
A. the use of genetic engineering technologies to improve the human gene pool.
B. the assimilation of one culture into another, often as part of colonialism.
C. using measures of physical capabilities (e.g., visual acuity) as estimates of a person's intelligence.
D. preventing people from reproducing if they were deemed to be genetically inferior, so as to improve the human gene pool.

Apply . . .
4. As a major exam approaches, a teacher who is hoping to reduce stereotype threat and promote an incremental theory of intelligence would most likely
A. remind test takers that males tend to do poorly on the problems.
B. remind students that they inherited their IQ from their parents.
C. cite research of a recent study showing that a particular gene is linked to IQ.
D. let students know that hard work is the best way to prepare for the exam.

Analyze . . .
5. According to the discussion of the race and IQ controversy,
A. there are clear IQ differences between people of different ethnicities, and these probably have a genetic basis.
B. the use of Raven's Progressive Matrices has shown that there are in fact no differences in IQ between the "races"; any such group differences must be due to cultural biases built into the tests.
C. many scholars believe that the ethnic differences in IQ are so large that one could argue that a person's race should be considered a relevant factor in important decisions, such as who to let into medical school or who to hire for a specific job.
D. even if tests are constructed that are culturally unbiased, the testing process itself may still favour some cultures over others.
Module 9.1 Summary

9.1a Know . . . the key terminology associated with intelligence and intelligence testing.
anthropometrics
deviation IQ
entity theory
incremental theory
intelligence
intelligence quotient (IQ)
mental age
Raven's Progressive Matrices
Stanford-Binet test
stereotype threat
Wechsler Adult Intelligence Scale (WAIS)

9.1b Understand . . . the reasoning behind the eugenics movement and its use of intelligence tests.
The eugenicists believed that abilities like intelligence were inborn, and thus, by encouraging reproduction between people with higher IQs, and reducing the birthrate of people with lower IQs, the gene pool of humankind could be improved.

9.1c Apply . . . the concepts of entity theory and incremental theory to help kids succeed in school.
One of the key reasons that people stop trying to succeed in school, and then eventually drop out, is that they hold a belief that their basic abilities, such as their intelligence, are fixed. Not trying then guarantees that they perform poorly, which reinforces their tendency to not try. However, this downward spiral can be stopped by training young people to think of themselves as changeable. Specifically, learning to think that the brain is like a muscle that can be strengthened through exercise leads people to improve their scores on intelligence tests, helps them become more resilient to negative circumstances, and enables them to respond to life's challenges more effectively.

9.1d Analyze . . . why it is difficult to remove all cultural bias from intelligence testing.
There are many reasons why the process of intelligence testing may be systematically biased, resulting in inaccuracies when testing people from certain cultural groups: Tests may contain content that is more relevant or familiar to some cultures; the method of testing (e.g., paper-and-pencil multiple-choice questions) may be more familiar to people from some cultures; the environment of testing may make people from some cultures less comfortable; the presence of negative stereotypes about one's group may interfere with test-taking abilities; and the internalization of self-defeating beliefs may affect performance.
Module 9.2 Understanding Intelligence
Lane V. Erickson/Shutterstock
Learning Objectives
9.2a Know . . . the key terminology related to understanding intelligence.
9.2b Understand . . . why intelligence is divided into fluid and crystallized types.
9.2c Understand . . . intelligence differences between males and females.
9.2d Apply . . . your knowledge to identify examples from the triarchic theory of intelligence.
9.2e Analyze . . . whether teachers should spend time tailoring lessons to each individual student's learning style.
Blind Tom was born into a Black slave family in 1849. When his mother was bought in a slave auction by General James Bethune, Tom was included in the sale for nothing because he was blind and believed to be useless. Indeed, Tom was not "smart" in the normal sense of the term. Even as an adult he could speak fewer than 100 words and would never be able to go to school. But he could play more than 7000 pieces on the piano, including a huge classical music repertoire and many of his own compositions. Tom could play, flawlessly, Beethoven, Mendelssohn, Bach, Chopin, Verdi, Rossini, and many others, even after hearing a piece only a single time. As an 11-year-old, he played at the White House, and by 16 went on a world tour. A panel of expert musicians performed a series of musical experiments on him, and universally agreed he was "among the most wonderful phenomena in musical history." Despite his dramatic linguistic limitations, he could reproduce, perfectly, up to a 15-minute conversation without losing a single syllable, and could do so in English, French, or German, without understanding any part of what he was saying. In the mid-1800s, he was considered to be the "eighth wonder of the world."
Today, Tom would be considered a savant, an individual with low mental capacity in most domains but extraordinary abilities in other specific areas such as music, mathematics, or art. The existence of savants complicates our discussion of intelligence considerably. Normally, the label "intelligent" or "unintelligent" is taken to indicate some sort of overall ability, the amount of raw brainpower available to the person, akin to an engine's horsepower. But this doesn't map onto savants at all—they have seemingly unlimited "horsepower" for certain skills and virtually none for many others. The existence of savants, and the more general phenomenon of people being good at some things (e.g., math, science) but not others (e.g., languages, art), challenges our understanding of intelligence and makes us ask more deeply, what is intelligence? Is it one ability? Or is it many?
Focus Questions
1. Is intelligence one ability or many?
2. How have psychologists attempted to measure intelligence?
When we draw conclusions about someone’s intelligence (e.g., Sally is really smart!), we intuitively know what we mean. Right? Being intelligent has to do with a person’s abilities to think, understand, reason, learn, and find solutions to problems. But this intuitive understanding unravels quickly when you start considering the questions it raises. Are these abilities related to each other? Does the content of a person’s intelligence matter? That is, does it mean the same thing if a person is very good at different things, like math, music, history, poetry, and child rearing? Or should intelligence be thought of more as a person’s abilities on these specific types of tasks? Perhaps that would mean that there isn’t any such thing as “intelligence” per se, but rather a whole host of narrower “intelligences.” As you will learn in this module, a full picture of intelligence involves considering a variety of different perspectives.
Intelligence as a Single, General Ability
When we say someone is intelligent, we usually are implying they have a high level of generalized cognitive ability. We expect intelligent people to be “intelligent” in many different ways, about many different topics. We wouldn’t normally call someone intelligent if she were good at, say, making up limericks, but nothing else. Intelligence should manifest itself in many different domains.
Scientific evidence for intelligence as a general ability dates back to early 20th-century work by Charles Spearman, who began by developing techniques to calculate correlations among multiple measures of mental abilities (Spearman, 1923). One of these techniques, known as factor analysis, is a statistical technique that examines correlations between variables to find clusters of related variables, or "factors." For example, imagine that scores on tests of vocabulary, reading comprehension, and verbal reasoning correlate highly together; these would form a "language ability" factor. Similarly, imagine that scores on algebra, geometry, and calculus questions correlate highly together; these would form a "math ability" factor. However, if the language variables don't correlate very well with the math variables, then you have some confidence that these are separate factors; in this case, it would imply that there are at least two types of independent abilities: math and language abilities. For there to be an overarching general ability called "intelligence," one would expect that tests of different types of abilities would all correlate with each other, forming only one factor.
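To make the logic of factor analysis concrete, here is a minimal sketch in Python. It is purely illustrative: the simulated test scores, the loading values, and the use of scikit-learn's FactorAnalysis are our own assumptions for demonstration, not Spearman's original procedure (which long predates modern software).

```python
# Minimal, illustrative sketch of the logic of factor analysis.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500
language = rng.normal(size=n)   # latent "language ability"
math = rng.normal(size=n)       # latent "math ability"

# Six observed test scores: three driven by language, three by math (plus noise).
tests = np.column_stack([
    language + 0.4 * rng.normal(size=n),  # vocabulary
    language + 0.4 * rng.normal(size=n),  # reading comprehension
    language + 0.4 * rng.normal(size=n),  # verbal reasoning
    math + 0.4 * rng.normal(size=n),      # algebra
    math + 0.4 * rng.normal(size=n),      # geometry
    math + 0.4 * rng.normal(size=n),      # calculus
])

# The correlation matrix shows two clusters: the verbal tests correlate with each
# other, the math tests correlate with each other, and the two clusters do not.
print(np.round(np.corrcoef(tests, rowvar=False), 2))

# A two-factor solution (with varimax rotation) recovers the two clusters: each
# factor loads strongly on one set of tests and weakly on the other.
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(tests)
print(np.round(fa.components_, 2))
```

If, instead, all six simulated tests were driven by a single latent ability, the same analysis would return one dominant factor, which is the pattern Spearman took as evidence for a general ability.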
Spearman’s General Intelligence
Spearman found that schoolchildren’s grades in different school subjects were positively correlated, even though the content of the different topics (e.g., math vs. history) was very different. This led Spearman to hypothesize the existence
of a general intelligence factor (abbreviated as “g”). Spearman believed that g represented a person’s “mental energy,” reflecting his belief that some people’s brains are simply more “powerful” than others (Sternberg, 2003). This has greatly influenced psychologists up to the present day, cementing within the field
the notion that intelligence is a basic cognitive trait comprising the ability to learn, reason, and solve problems, regardless of their nature; common intelligence
tests in use today calculate g as an “overall” measure of intelligence (Johnson et al., 2008).
But is g real? Does it predict anything meaningful? In fact, g does predict many important phenomena. For example, g correlates quite highly with high school and university grades (Neisser et al., 1996), how many years a person will stay in school, and how much they will earn afterwards (Ceci & Williams, 1997).
General intelligence scores also predict many seemingly unrelated phenomena,
such as how long people are likely to live (Gottfredson & Deary, 2004), how
quickly they can make snap judgments on perceptual discrimination tasks (i.e.,
laboratory tasks that test how quickly people form perceptions; Deary & Stough, 1996), and how well people can exert self-control (Shamosh et al., 2008). Some other examples of g’s influences are depicted in Figure 9.6 .
Figure 9.6 General Intelligence Is Related to Many Different Life Outcomes
General intelligence (g) predicts not just intellectual ability, but also psychological well-being, income, and successful long-term relationships.
Source: Based on "General Intelligence Is Related to Various Outcomes." Adapted from Herrnstein, R., & Murray, C. (1994). The bell curve: Intelligence and class structure in American life. New York: Free Press; Gottfredson, L. (1997). Why g matters: The complexity of everyday life. Intelligence, 24, 79–132.
In the workplace, intelligence test scores not only predict who gets hired, but also how well people perform at a wide variety of jobs. In fact, the correlation is so
strong that after almost a century of research (Schmidt & Hunter, 1998), general mental ability has emerged as the single best predictor of job
performance (correlation = .53; Hunter & Hunter, 1984). Overall intelligence is a far better predictor than the applicant’s level of education (correlation = .10) or how well the applicant does in the job interview itself (correlation = .14). It is amazing to think that in order to make a good hiring decision, a manager would be better off using a single number given by an IQ test than actually sitting down and interviewing applicants face to face!
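A quick way to appreciate the size of these gaps is to square each correlation, which gives the proportion of variance in job performance that each predictor accounts for (a standard statistical interpretation applied to the figures above, not an additional result reported in those studies):

\[ r^2_{\text{mental ability}} = 0.53^2 \approx 0.28, \qquad r^2_{\text{interview}} = 0.14^2 \approx 0.02, \qquad r^2_{\text{education}} = 0.10^2 = 0.01 \]

In other words, general mental ability accounts for roughly 28% of the variability in job performance, compared with about 2% for the interview and 1% for education.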
The usefulness of g is also shown by modern neuroscience research findings that overall intelligence predicts how well our brains work. For example, Tony Vernon at Western University and his colleagues have found that general intelligence test scores predict how efficiently we conduct impulses along nerve
fibres and across synapses (Johnson et al., 2005; Reed et al., 2004). This
efficiency of nerve conduction allows for more efficient information processing overall. As a result, when working on a task, the brains of highly intelligent people don’t have to work as hard as those of less intelligent people; high IQ
brains show less overall brain activation than others for the same task (Grabner et al., 2003; Haier et al., 1992).
Thus, overall intelligence, as indicated by g, is related to many real-world phenomena, from how well we do at work to how well our brains function.
Does g Tell Us the Whole Story?
Clearly, g reflects something real. However, we have to remember that correlation does not equal causation. It is possible that the effects of g are due to motivation, self-confidence, or other variables. For example, one would expect that being motivated to succeed, as well as being highly self-confident, could lead to better grades, better IQ scores, and better job performance. Therefore, it is important to be cautious when interpreting these results.
We should also ask whether g can explain everything about a person’s intelligence. For example, how could a single number possibly capture the kinds of genius exhibited by savants like Blind Tom, who are exceptionally talented in some domains but then severely impaired in others? It is easy to find other examples in your own experience; surely, you have known people who were very talented in art or music but terrible in math or science? Or perhaps you have known an incredibly smart person who was socially awkward, or a charismatic and charming person whom you’d never want as your chemistry partner? There may be many ways of being intelligent, and reducing such diversity to a single number seems to overlook the different types of intelligence that people have.
Module 9.2a Quiz: Intelligence as a Single, General Ability

Know . . .
1. Spearman believed that
A. people have multiple types of intelligence.
B. intelligence scores for math and history courses should not be correlated.
C. statistics cannot help researchers understand how different types of intelligence are related to each other.
D. some people's brains are more "powerful" than others, thus giving them more "mental energy."

Understand . . .
2. What is factor analysis?
A. A method of ranking individuals by their intelligence
B. A statistical procedure that is used to identify which sets of psychological measures are highly correlated with each other
C. The technique of choice for testing for a single, general intelligence
D. The technique for testing the difference between two means

3. Researchers who argue that g is a valid way of understanding intelligence would NOT point to research showing
A. people with high g make perceptual judgments more quickly.
B. people with high g are more likely to succeed at their jobs.
C. the brains of people with low g conduct impulses more slowly.
D. people with low g are better able to do some tasks than people with high g.
Intelligence as Multiple, Specific Abilities
Spearman himself believed that g didn’t fully capture intelligence because his own analyses showed that although different items on an intelligence test were correlated with each other, their correlations were never 1.0, and usually far less
than that. Thus, g cannot be the whole story; there must, at the very least, be other factors that account for the variability in how well people respond to different questions.
One possible explanation is that in addition to a generalized intelligence, people also possess a number of specific skills. Individual differences on these skills may explain some of the variability on intelligence tests that is not accounted for
by g. In a flurry of creativity, Spearman chose the inspired name “s” to represent this specific-level, skill-based intelligence. His two-factor theory of intelligence
therefore comprised g and s, where g represents one's general, overarching intelligence, and s represents one's skill or ability level for a given task.
Nobody has seriously questioned the s part of Spearman's theory; obviously, each task in life, from opening a coconut to solving calculus problems, requires
abilities that are specific to the task. However, the concept of g has come under heavy fire throughout the intervening decades, leading to several different
theories of multiple intelligences.
The first influential theory of multiple intelligences was created by Louis Thurstone, who examined scores on general intelligence tests using factor
analysis, and found seven different clusters of what he termed primary mental abilities. Thurstone’s seven factors were word fluency (the person’s ability to produce language fluently), verbal comprehension, numeric abilities, spatial
visualization, memory, perceptual speed, and reasoning (Thurstone, 1938). He argued that there was no meaningful g, but that intelligence needed to be understood at the level of these primary mental abilities that functioned
independently of each other. However, Spearman (1939) fired back, arguing that Thurstone’s seven primary mental abilities were in fact correlated with each other, suggesting that there was after all an overarching general intelligence.
A highly technical and statistical debate raged for several more decades
between proponents of g and proponents of multiple intelligences, until it was eventually decided that both of them were right.
The Hierarchical Model of Intelligence
The controversy was largely settled by the widespread adoption of hierarchical
models that describe how some types of intelligence are “nested” within others in a similar manner to how, for example, a person is nested within her community, which may be nested within a city. The general hierarchical model describes how
our lowest-level abilities (those relevant to a particular task, like Spearman’s s) are nested within a middle level that roughly corresponds to Thurstone’s primary mental abilities (although not necessarily the specific ones that Thurstone
hypothesized), and these are nested within a general intelligence (Spearman's g; Gustafsson, 1988). By the mid-1990s, analyses of prior research on intelligence concluded that almost all intelligence studies were best explained by a three-
level hierarchy (Carroll, 1993).
What this means is that we have an overarching general intelligence, which is made up of a small number of sub-abilities, each of which is made up of a large number of specific abilities that apply to individual tasks. However, even this didn’t completely settle the debate about what intelligence really is, because it left open a great deal of room for different theories of the best way to describe the middle-level factors. And as you will see in the next section, even the debate
about g has been updated in recent years.
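As a loose illustration of this nesting (the ability and skill labels below are placeholders chosen for the example, not the specific factors identified in Carroll's analysis), the three-level hierarchy can be pictured as a nested structure:

```python
# Illustrative three-level hierarchy: g at the top, a few broad abilities in the
# middle, and task-specific skills (Spearman's s) at the bottom.
# The labels are placeholders for illustration only.
hierarchy = {
    "g": {
        "verbal ability": ["vocabulary", "reading comprehension", "verbal reasoning"],
        "quantitative ability": ["algebra", "geometry", "mental arithmetic"],
        "visuospatial ability": ["mental rotation", "block design", "map reading"],
    }
}

# Walking the structure top-down mirrors how the hierarchical model "unpacks"
# general intelligence into broad abilities and then into specific skills.
for broad_ability, specific_skills in hierarchy["g"].items():
    print(broad_ability, "->", ", ".join(specific_skills))
```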
Working the Scientific Literacy Model: Testing for Fluid and Crystallized Intelligence
The concept of g implies that performance on all aspects of an intelligence test is influenced by this central ability. But careful analyses of many data sets, and recent neurobiological evidence,
have shown that there may be two types of g that have come to be called fluid intelligence (Gf) and crystallized intelligence (Gc).
What do we know about fluid and crystallized intelligence? The distinction between fluid and crystallized intelligence is basically the difference between “figuring things out” and
“knowing what to do from past experience.” Fluid intelligence (Gf) is a type of intelligence used in learning new information
and solving new problems not based on knowledge the person already possesses. Tests of Gf involve problems such as pattern recognition and solving geometric puzzles, neither of which is heavily dependent on past experience. For example, Raven’s Progressive Matrices, in which a person is asked to complete a series of geometric patterns of increasing complexity (see Module 9.1 ), is the most widely used measure of Gf. In contrast, crystallized intelligence (Gc) is a type of intelligence that draws upon past learning and experience. Tests of Gc, such as tests of vocabulary and general knowledge, depend heavily on individuals’ prior knowledge to come up with
the correct answers (Figure 9.7; Cattell, 1971).
Figure 9.7 Fluid and Crystallized Intelligence
Fluid intelligence is dynamic and changing, and may eventually become crystallized into a more permanent form. AVAVA/Shutterstock
Gf and Gc are thought to be largely separate from each other, with two important exceptions. One is that having greater fluid intelligence means that the person is better able to process information and to learn; therefore, greater Gf may, over time, lead to greater Gc, as the person who processes more
information will gain more crystallized knowledge (Horn & Cattell, 1967). Note, however, that this compelling hypothesis has received little empirical support thus far (Nisbett et al., 2012). The second is that it is difficult, perhaps impossible, to measure Gf without tapping into people’s pre-existing knowledge and experience, as we discuss below.
How can science help distinguish between fluid and crystallized intelligence? One interesting line of research that supports the Gf/Gc distinction comes from examining how each type changes over
the lifespan (Cattell, 1971; Horn & Cattell, 1967). In one study, people aged 20 to 89 years were given a wide array of tasks,
including the Block Design task (see Figure 9.3), the Tower of London puzzle (see Figure 9.8), and tests of reaction time. Researchers have found that performance on Gf tasks declines after a certain age, which some research estimates as middle
adulthood (Bugg et al., 2006), whereas other studies place the beginning of the decline as early as the end of adolescence
(Avolio & Waldman, 1994; Baltes & Lindenberger, 1997). Measures of Gc (see Figure 9.9 ), by comparison, show greater stability as a person ages (Schaie, 1994). Healthy, older adults generally do not show much decline, if any, in their crystallized knowledge, at least until they reach their elderly years
(Miller et al., 2009).
Figure 9.8 Measuring Fluid Intelligence
The Tower of London problem has several versions, each of which requires the test taker to plan and keep track of rules. For example, the task might involve moving the coloured beads from the initial position so that they match any of the various end goal positions.
Source: Shallice, T. (1982). Specific impairments of planning. Philosophical Transactions of the Royal Society of London, B 298, 199–209. "Measuring Fluid Intelligence." Copyright © 1982 by The Royal Society. Reprinted by permission of The Royal Society.
Figure 9.9 Measuring Crystallized Intelligence
Crystallized intelligence refers to facts, such as names of countries.
Neurobiological evidence further backs this up. The functioning of brain regions associated with Gf tasks declines sooner than the
functioning of those regions supporting Gc tasks (Geake & Hansen, 2010). For example, the decline of Gf with age is associated with reduced efficiency in the prefrontal cortex
(Braver & Barch, 2002), a key brain region involved in the cognitive abilities that underlie fluid intelligence (as discussed below). In contrast, this brain region does not play a central role in crystallized intelligence, which is more dependent on long-term memory systems that involve a number of different regions of the cortex.
Can we critically evaluate crystallized and fluid intelligence? There are certainly questions we can ask about crystallized and fluid intelligence. For one, is there really any such thing as fluid intelligence, or does it merely break down into specific sub-abilities?
Cognitive psychologists generally accept that fluid intelligence is a blending of several different cognitive abilities. For example, the abilities to switch attention from one stimulus to another, inhibit distracting information from interfering with concentration, sustain attention on something at will, and keep multiple pieces of information in working memory at the same time, are all part of
fluid intelligence (Blair, 2006). If Gf is simply a statistical creation that reflects the integration of these different processes, perhaps researchers would be better off focusing their attention on these systems, rather than the more abstract construct Gf.
Another critique is that fluid and crystallized intelligence are not, after all, entirely separable. Consider the fact that crystallized intelligence involves not only possessing knowledge, but also being able to access that knowledge when it’s needed. Fluid cognitive processes, and the brain areas that support them such as the prefrontal cortex, play important roles in both storing and retrieving crystallized knowledge from long-term memory
(Ranganath et al., 2003).
Similarly, tests of fluid intelligence likely also draw upon crystallized knowledge. For example, complete-the-pattern tasks
such as Raven’s Progressive Matrices may predominantly reflect fluid intelligence, but people who have never seen any type of similar task or had any practice with such an exercise will likely struggle with them more than someone with prior exposure to similar tasks. Imagine learning a new card game—you would have to rely on your fluid intelligence to help you learn the rules, figure out effective strategies, and outsmart your opponents. However, your overall knowledge of cards, games, and strategies will help you, especially if you compare yourself to a person who has played no such games in his life.
Why is this relevant? Recognizing the distinctiveness of Gf and Gc can help to reduce stereotypes and expectations about intelligence in older persons, reminding people that although certain kinds of intelligence may decline with age, other types that rely on accumulated knowledge
and wisdom may even increase as we get older (Kaufman, 2001). Also, research on fluid intelligence has helped psychologists to develop a much more detailed understanding of the full complement of cognitive processes that make up intelligence, and to devise tests that measure these processes more precisely.
Sternberg’s Triarchic Theory of Intelligence
Other influential models of intelligence have been proposed in attempts to move
beyond g. For example, Robert Sternberg (1983, 1988) developed the triarchic theory of intelligence, a theory that divides intelligence into three distinct types: analytical, practical, and creative (see Figure 9.10). These components can be described in the following ways:
Analytical intelligence is “book smarts.” It’s the ability to reason logically
through a problem and to find solutions. It also reflects the kinds of abilities
that are largely tested on standard intelligence tests that measure g. Most intelligence tests predominantly measure analytical intelligence, while generally ignoring the other types.
Practical intelligence is “street smarts.” It’s the ability to find solutions to real-world problems that are encountered in daily life, especially those that involve other people. Practical intelligence is what helps people adjust to new environments, learn how to get things done, and accomplish their goals. Practical intelligence is believed to have a great deal to do with one’s job performance and success.
Creative intelligence is the ability to generate new ideas and novel solutions to problems. Obviously, artists must have some level of creative intelligence, because they are, by definition, trying to create things that are new. It also takes creative intelligence to be a scientist because creative thinking is often required to conceive of good scientific hypotheses and develop ways of
testing them (Sternberg et al., 2001).
Figure 9.10 The Triarchic Theory of Intelligence According to psychologist Robert Sternberg, intelligence comprises three overlapping yet distinct components. Source: Lilienfeld, Scott O.; Lynn, Steven J; Namy, Laura L.; Woolf, Nancy J., Psychology: From Inquiry To Understanding,
Books A La Carte Edition, 2nd Ed., ©2011. Reprinted and Electronically reproduced by permission of Pearson Education,
Inc., New York, NY.
Myths in Mind: Learning Styles
One of the biggest arenas in which people have applied the idea that there are multiple types of intelligence is the widespread belief in educational settings that different people process information better through specific modalities, such as sight, hearing, and bodily movement. If this is true, then it suggests that people have different learning styles (e.g., people may be visual learners, auditory learners, tactile learners, etc.), and therefore, educators would be more effective if they tailor their lesson plans to the learning styles of their students, or at least ensure that they appeal to a variety of learning styles.
However, finding evidence to support this has proven difficult. In fact, dozens of studies have failed to show any benefit for tailoring information
to an individual's apparent learning style (Pashler et al., 2008). This result probably reflects the fact that regardless of how you encounter information—through reading, watching, listening, or moving around—retaining it over the long term largely depends on how deeply you
process and store the meaning of the information (Willingham, 2004), which in turn is related to how motivated students are to learn. As a result, rather than trying to match the way that information is presented to the presumed learning styles of students, it is likely far more important for teachers to be able to engage students in ways they find interesting, meaningful, fun, personally relevant, and experientially engaging.
Sternberg believed that both practical and creative intelligences are better than analytical intelligence at predicting real-world outcomes, such as job success
(Sternberg et al., 1995). However, some psychologists have criticized Sternberg’s studies of job performance, arguing that the test items that were supposed to measure practical intelligence were merely measuring job-related
knowledge (Schmidt & Hunter, 1993). Other psychologists have questioned whether creative intelligence, one of the key components of Sternberg’s theory, actually involves “intelligence” per se, or is instead measuring the tendency to
think in ways that challenge norms and conventions (Gottfredson, 2003; Jensen, 1993). These critiques show us how challenging it can be to define intelligence, and to predict how intelligence—or intelligences—will influence real- world behaviours.
Gardner’s Theory of Multiple Intelligences
Howard Gardner proposed an especially elaborate theory of multiple intelligences. Gardner was inspired by specific cases, such as people who were savants (discussed in the introduction to this module), who had extraordinary
abilities in limited domains, very poor abilities in many others, and low g. Gardner also was influenced by cases of people with brain damage, which indicated that some specific abilities could be dramatically affected while others remained
intact (Gardner, 1983, 1999). He also noted that “normal people” (presumably, those of us who are not savants and also don’t have brain damage) differ widely in their abilities and talents, having a knack for some things but hopeless at others, which doesn’t fit the notion that intelligence is a single, overarching ability.
Based on his observations, Gardner proposed a theory of multiple intelligences, a model claiming that there are seven (now updated to at least nine) different forms of intelligence, each independent from the others (see Table 9.1). As intuitively appealing as this is, critics have pointed out that few of Gardner's intelligences can be accurately and reliably measured, making his theory unfalsifiable and difficult to research. For example, how would you reliably measure "existential intelligence" or "bodily/kinesthetic intelligence"? You cannot simply ask people how existential they are, or how well they are able to attune to their bodies, relative to other people. Creating operational definitions of these concepts has proven to be a difficult challenge, and has held back empirical work on Gardner's theory. This is not a critique against Gardner specifically, but rather, highlights the need for researchers to develop better ways of measuring
intelligence (Tirri & Nokelainen, 2008).
Table 9.1 Gardner's Proposed Forms of Intelligence
Source: Based on The Nine Types of Intelligence by Howard Gardner.
Verbal/linguistic intelligence: The ability to read, write, and speak effectively
Logical/mathematical intelligence: The ability to think with numbers and use abstract thought; the ability to use logic or mathematical operations to solve problems
Visuospatial intelligence: The ability to create mental pictures, manipulate them in the imagination, and use them to solve problems
Bodily/kinesthetic intelligence: The ability to control body movements, to balance, and to sense how one's body is situated
Musical/rhythmical intelligence: The ability to produce and comprehend tonal and rhythmic patterns
Interpersonal intelligence: The ability to detect another person's emotional states, motives, and thoughts
Self/intrapersonal intelligence: Self-awareness; the ability to accurately judge one's own abilities, and identify one's own emotions and motives
Naturalist intelligence: The ability to recognize and identify processes in the natural world—plants, animals, and so on
Existential intelligence: The tendency and ability to ask questions about purpose in life and the meaning of human existence
PSYCH @ The NFL Draft
One rather interesting application of IQ has been to try to predict who will succeed in their careers. One test in particular, the Wonderlic Personnel Test, is widely used to predict career success in many different types of
jobs (Schmidt & Hunter, 1998; Schmidt et al., 1981). It has even become famous to National Football League (NFL) fans, because Wonderlic scores are one of many factors that influence which college players are drafted by NFL teams. This is no small thing—high draft picks receive multimillion dollar contracts. The logic behind using Wonderlic scores is that football is a highly complex game, involving learning and memorizing many complicated strategies, following all the rules, and being able to update strategies “on the fly.” Football is not only about being agile, fast, and strong; it might also involve intelligence.
But does the Wonderlic, a 50-item, 12-minute IQ test, actually predict NFL success, as the NFL has believed since the 1970s? According to research, the answer is “only sometimes,” but not the way you might think.
After studying 762 players from the 2002, 2003, and 2004 drafts and measuring their performance in multiple ways, researchers concluded that there was no significant correlation between Wonderlic scores and performance. What’s more, the performance of only two football positions, tight end and defensive back, showed any significant correlation with Wonderlic scores, and it was in a negative direction
(Lyons et al., 2009)! This means that lower intelligence scores predicted greater football success for these positions.
It seems that NFL teams would be well advised to throw out the Wonderlic test entirely, or perhaps only use it to screen for defensive backs and tight ends, and choose the lower-scoring players. No offence is intended whatsoever to football players, who may be extremely intelligent individuals, but in general, being highly intelligent does not seem to be an advantage in professional football. In the now immortalized words of former Washington Redskins quarterback Joe Theismann, "Nobody in the game of football should be called a genius. A genius is somebody like Norman Einstein."
The Wonderlic Personnel Test is supposed to predict success in professional football, although it is not always very successful. This failure could be because of low validity. Ed Reinke/AP Images
Gardner’s theory has set off a firestorm of controversy in the more than 30 years since its initial proposal, gaining little traction in the academic literature, but being widely embraced in applied fields, such as education. While critics point to the lack of reliable ways of measuring Gardner’s different intelligences, proponents argue that there is more to a good theory than whether you can measure its
constructs. From the applied perspective, Gardner’s theory is useful. It helps teachers to create more diverse and engaging lesson plans to connect with and motivate students with different strengths. It helps people to see themselves as capable in different ways, rather than feeling limited by their IQ score, especially if it is not very high. And, it helps explain the wide range of human abilities and accomplishments far better than a mere IQ score.
From this perspective, perhaps the “psychometric supremacists” (Kornhaber, 2004), who insist that variables must be reliably quantifiable, might be missing the point. After all, even though IQ scores, for example, predict real-world outcomes like job status and income and offer a reliable means for identifying
students who qualify for extra educational attention (such as “gifted” students or students with learning issues), they help very little in understanding people’s strengths or weaknesses, and offer little to no guidance in actually helping people to improve their performance in different areas. Besides, IQ tests are almost exclusively based on highly unrealistic and limited testing situations, such as answering questions on paper-and-pencil tests while sitting in a room, whereas Gardner’s theory was formed out of real-world observations of the abilities of people with a wide range of accomplishments. Given that it is essentially impossible to objectively quantify many different types of abilities (e.g., being a good dancer, farmer, actor, comedian), it follows that you cannot judge a theory that purports to explain such abilities on the same grounds as theories about more easily quantifiable constructs.
The debate over Gardner’s theory lays bare a fundamental tension in the psychological sciences, which is that sometimes at least, the nuances of human behaviour cannot be easily measured, or perhaps even be measured at all. Should the observations and wisdom of teachers with decades of experience be discounted because scientists cannot develop quantifiable measures of certain constructs? However, if you accept the argument that “human experience” can trump psychometrically rigorous evidence, then where do you draw the line? Does this not throw into question the whole scientific basis of psychology itself?
We can’t resolve these questions for you here, but they remain excellent questions. The debate rages on.
Module 9.2b Quiz: Intelligence as Multiple, Specific Abilities

Know . . .
1. Which of the following is not part of the triarchic theory of intelligence?
A. Practical intelligence
B. Analytical intelligence
C. Kinesthetic intelligence
D. Creative intelligence

2. ______ proposed that there are multiple forms of intelligence, each independent from the others.
A. Robert Sternberg
B. Howard Gardner
C. L. L. Thurstone
D. Raymond Cattell

3. The ability to adapt to new situations and solve new problems reflects ______ intelligence(s), whereas the ability to draw on one's experiences and knowledge reflects ______ intelligence(s).
A. fluid; crystallized
B. crystallized; fluid
C. general; multiple
D. multiple; general

Analyze . . .
4. The hierarchical model of intelligence claims that
A. some types of intelligence are more powerful and desirable than others.
B. intelligence is broken down into two factors, a higher-level factor called g, and a lower-level factor called s.
C. scores on intelligence tests are affected by different levels of factors, ranging from lower-level factors such as physical health, to higher-level factors such as a person's motivation for doing well on a test.
D. intelligence is comprised of three levels of factors, which are roughly similar to Spearman's g, Thurstone's primary mental abilities, and Spearman's s.
The Battle of the Sexes
The distinction between g and multiple intelligences plays an important role in the oft-asked question, “Who is smarter, females or males?” Although earlier
studies showed some average intelligence differences between males and females, this has not been upheld by subsequent research and is likely the result of bias in the tests that favoured males over females. One of the most conclusive studies used 42 different tests of mental abilities to compare males and females
and found almost no differences in intelligence between the sexes (Johnson & Bouchard, 2007).
Some research has found that although males and females have the same average IQ score, there is much greater variability in male scores, which suggests that there are more men with substantial intellectual challenges, as well
as more men who are at the top of the brainpower heap (Deary et al., 2007; Dykiert et al., 2009). However, this may not be as simple as it appears. For example, one type of test that shows this male advantage at the upper levels of ability examines math skills on standardized tests. A few decades ago, about 12
times more males than females scored at the very top (Benbow & Stanley, 1983). This difference has decreased in recent years to 3–4 times as many males scoring at the top end of the spectrum. Not surprisingly, this change has occurred just as the number of math courses being taken by females—and the efforts made to increase female enrollment in such courses—has increased. So, the difference in results between the sexes is still there, but has been vastly
reduced by making math education more accessible for females (Wai et al., 2010).
The apparent advantage enjoyed by males may also be the result of an unintentional selection bias. More males than females drop out of secondary school; because these males would have lower IQs, on average, the result is that fewer low-IQ men attend university. Therefore, most of the samples of students used in psychology studies are skewed in that they under-represent men with low IQs. This biased sampling of males and females would make it
seem like men have higher fluid intelligence, when in reality they may not (Flynn & Rossi-Casé, 2011).
So, who is smarter, males or females? Neither. The best data seems to show that they are basically equal in overall intelligence.
Do Males and Females Have Unique Cognitive Skills?
Although the results discussed above suggest that males and females are equally intelligent, when multiple intelligences are considered, rather than overall IQ, a clear difference between the sexes does emerge. Females are, on average, better at verbal abilities, some memory tasks, and the ability to read people’s basic emotions, whereas males have the advantage on visuospatial
abilities, such as mentally rotating objects or aiming at objects (see Figure 9.11; Halpern & LaMay, 2000; Johnson & Bouchard, 2007; Tottenham et al., 2005; Weiss et al., 2003).
Figure 9.11 Mental Rotation and Verbal Fluency Tasks Some research indicates that, on average, males outperform females on mental rotation tasks (a), while females outperform men on verbal fluency (b).
This finding is frequently offered as an explanation for why males are more represented in fields like engineering, science, and mathematics. However, there are many other factors that could explain the under-representation of women in these disciplines, such as prevalent stereotypes that discourage girls from entering the maths and sciences, parents from supporting them in doing so, and
teachers from evaluating females’ work without bias.
Overlooking the many other factors that limit females' participation in the maths and sciences is a dangerous thing to do. This was dramatically shown in 2005, when the President of Harvard University, Lawrence Summers, gave a speech arguing that innate differences between the sexes may be responsible for the under-representation of women in science and engineering; the ensuing controversy contributed to his resignation the following year. The outrage many expressed at his comments reflected the fact that many people realize that highlighting innate differences while minimizing or ignoring systemic factors only serves to perpetuate problems, not solve them.
Module 9.2c Quiz: The Battle of the Sexes

Know . . .
1. Men tend to outperform women on tasks requiring ______, whereas women outperform men on tasks requiring ______.
A. spatial abilities; the ability to read people's emotions
B. practical intelligence; interpersonal intelligence
C. memory; creativity
D. logic; intuition

Analyze . . .
2. Research on gender differences in intelligence leads to the general conclusion that
A. males are more intelligent than females.
B. females are more intelligent than males.
C. males and females are equal in overall intelligence.
D. it has been impossible, thus far, to tell which gender is more intelligent.
Module 9.2 Summary

9.2a Know . . . the key terminology related to understanding intelligence.
crystallized intelligence (Gc)
factor analysis
fluid intelligence (Gf)
general intelligence factor (g)
multiple intelligences
savant
triarchic theory of intelligence

9.2b Understand . . . why intelligence is divided into fluid and crystallized types.
Mental abilities encompass both the amount of knowledge accumulated and the ability to solve new problems. This understanding is consistent not only with our common views of intelligence, but also with the results of decades of intelligence testing. Also, the observation that fluid intelligence can decline over the lifespan, even as crystallized intelligence remains constant, lends further support to the contention that they are different abilities.

9.2c Understand . . . intelligence differences between males and females.
Males and females generally show equal levels of overall intelligence, as measured by standard intelligence tests. However, men do outperform women on some tasks, particularly spatial tasks such as mentally rotating objects, whereas women outperform men on other tasks, such as perceiving emotions. Although there are some male–female differences in specific abilities, such as math, it is not yet clear whether these reflect innate differences between the sexes, or whether other factors are responsible, such as reduced enrollment of women in math classes and the presence of stereotype threat in testing sessions.

9.2d Apply . . . your knowledge to identify examples from the triarchic theory of intelligence.
This theory proposes the existence of analytical, practical, and creative forms of intelligence.
Apply Activity: Classify whether the individual in the following scenario is low, medium, or high in regard to each of the three aspects of intelligence.
Katrina is an excellent chemist. She has always performed well in school, so it is no surprise that she earned her PhD from a prestigious institution. Despite her many contributions and discoveries related to chemistry, however, she seems to fall short in some domains. For example, Katrina does not know how to cook her own meals and if anything breaks at her house, she has to rely on someone else to fix it.

9.2e Analyze . . . whether teachers should spend time tailoring lessons to each individual student's learning style.
Certainly, no one would want to discourage teachers from being attentive to the unique characteristics that each student brings to the classroom. However, large-scale reviews of research suggest that there is little basis for individualized teaching based on learning styles (e.g., auditory, visual, kinesthetic).
Module 9.3 Biological, Environmental, and Behavioural Influences on Intelligence
Miguel Medina/AFP/Newscom
Learning Objectives
9.3a Know . . . the key terminology related to heredity, environment, and intelligence.
9.3b Understand . . . different approaches to studying the genetic basis of intelligence.
9.3c Apply . . . your knowledge of environmental and behavioural effects on intelligence to understand how to enhance your own cognitive abilities.
9.3d Analyze . . . the belief that older children are more intelligent than their younger siblings.
In 1955, the world lost one of the most brilliant scientists in history, Albert Einstein. Although you are probably familiar with his greatest scientific achievements, you may not know about what happened to him after he died—or more specifically, what happened to his brain.
Upon his death, a forward-thinking pathologist, Dr. Thomas Harvey, removed Einstein's brain (his body was later cremated) so that it could be studied in the hope that medical scientists would eventually unlock the secret to his genius. Dr. Harvey took photographs of Einstein's brain, and then it was sliced up into hundreds of tissue samples placed on microscope slides, and 240 larger blocks of brain matter, which were preserved in fluid. Surprisingly, Dr. Harvey concluded that the brain wasn't at all remarkable, except for being smaller than average (1230 grams, compared to the average of 1300–1400 grams).
You might expect that Einstein's brain was intensively studied by leading neurologists. But, instead, the brain mysteriously disappeared. Twenty-two years later, a journalist named Steven Levy tried to find Einstein's brain. The search was fruitless until Levy tracked down Dr. Harvey in Wichita, Kansas, and interviewed him in his office. Dr. Harvey was initially reluctant to tell Levy anything about the brain, but eventually admitted that he still had it. In fact, he kept it right there in his office! Sheepishly, Dr. Harvey opened a box labelled "Costa Cider" and there, inside two large jars, floated the chunks of Einstein's brain. Levy later wrote, "My eyes were fixed upon that jar as I tried to comprehend that these pieces of gunk bobbing up and down had caused a revolution in physics and quite possibly changed the course of civilization. Swirling in formaldehyde was the power of the smashed atom, the mystery of the universe's black holes, the utter miracle of human achievement."
Since that time, several research teams have discovered important abnormalities in Einstein's brain. Einstein had a higher than normal ratio
of glial cells to neurons in the left parietal lobe (Diamond et al., 1985) and parts of the temporal lobes (Kigar et al., 1997), and a higher density of neurons in the right frontal lobe (Anderson & Harvey, 1996). Einstein’s parietal lobe has been shown to be about 15% larger than
average, and to contain an extra fold (Witelson et al., 1999). The frontal lobes contain extra convolutions (folds and creases) as well. These extra folds increase the surface area and neural connectivity in those areas.
How might these unique features have affected Einstein’s intelligence? The frontal lobes are heavily involved in abstract thought, and the parietal lobes are involved in spatial processing, which plays a substantial role in mathematics. Thus, these unique brain features may provide a key part of the neuroanatomical explanation for Einstein’s remarkable abilities in math and physics. Einstein not only had a unique mind, but a unique brain.
Focus Questions
1. Which biological and environmental factors have been found to be important contributors to intelligence?
2. Is it possible for people to enhance their own intelligence?
Wouldn’t it be wonderful to be as smart as Einstein? Or even just smarter than you already are? Imagine if you could boost your IQ, upgrading your brain like you might upgrade a hard drive. You could learn more easily, think faster, and remember more. What benefits might you enjoy? Greater success? A cure for cancer? A Nobel Prize? At least you might not have to study as much to get good grades. As you will read in this module, there are in fact ways to improve your intelligence (although perhaps not to “Einsteinian” levels). However, to understand how these techniques can benefit us, we must also understand how our biology and our environment—”nature” and “nurture”—interact to influence intelligence.
Biological Influences on Intelligence
The story of Einstein’s brain shows us, once again, that our behaviours and abilities are linked to our biology. However, although scientists have been interested in these topics for over 100 years, we are only beginning to understand the complex processes that influence measures like IQ scores. In this section, we discuss the genetic and neural factors that influence intelligence, and how they may interact with our environment.
The Genetics of Intelligence: Twin and Adoption
Studies
The belief that intelligence is a capacity that we are born with has been widely held since the early studies of intelligence. However, early researchers lacked today’s sophisticated methods for studying genetic influences, so they had to rely upon their observations of whether intelligence ran in families, which it seemed
to do (see Module 9.1 ). Since those early days, many studies have been conducted to see just how large the genetic influence on intelligence may be.
Studies of twins and children who have been adopted have been key tools allowing researchers to begin estimating the genetic contribution to intelligence. Decades of such research have shown that genetic similarity does contribute to intelligence test scores. Several important findings from this line of study are
summarized in Figure 9.12 (Plomin & Spinath, 2004). The most obvious trend in the figure shows that as the degree of genetic relatedness increases,
similarity in IQ scores also increases. The last two bars on the right of Figure 9.12 present perhaps the strongest evidence for a genetic basis for intelligence. The intelligence scores of identical twins correlate with each other at about .85 when they are raised in the same home, which is much higher than the correlation for fraternal twins. Even when identical twins are adopted and raised apart, their intelligence scores are still correlated at approximately .80—a very strong relationship. In fact, this is about the same correlation that researchers
find when individuals take the same intelligence test twice and are compared with themselves!
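To make the logic of these twin comparisons concrete, here is a brief illustrative sketch (not taken from the textbook) of how researchers can turn twin correlations into a rough heritability estimate using Falconer’s classic formula. Only the identical-twin correlation of .85 comes from the text; the fraternal-twin value below is an assumed placeholder.

```python
# Illustrative sketch: Falconer's formula estimates heritability from the gap
# between identical (monozygotic, MZ) and fraternal (dizygotic, DZ) twin
# correlations. Because MZ twins share ~100% of their genes and DZ twins ~50%,
# doubling the difference approximates the genetic share of the variance.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Estimate heritability as h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

r_mz = 0.85  # identical twins raised together (value cited in the text)
r_dz = 0.60  # fraternal twins raised together (assumed for illustration)

print(f"Estimated heritability: {falconer_heritability(r_mz, r_dz):.2f}")  # 0.50
```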
Figure 9.12 Intelligence and Genetic Relatedness Several types of comparisons reveal genetic contributions to intelligence
(Plomin & Spinath, 2004). Generally, the closer the biological relationship between people, the more similar their intelligence scores. Source: Adapted from Plomin, R., & Spinath, F. M. (2004). Intelligence: Genetics, genes, and genomics. Journal of
Personality & Social Psychology, 86 (1), 112–129.
The Heritability of Intelligence
Overall, the heritability of intelligence is estimated to be between 40% and 80%
(Nisbett et al., 2012). However, interpreting what this means is extremely tricky. People often think that this means 40% or more of a person’s intelligence is determined by genes. But this is a serious misunderstanding of heritability.
A heritability estimate describes how much of the differences between people in a sample can be accounted for by differences in their genes (see Module 3.1 ). This may not sound like a crucial distinction, but in fact it’s extremely
important! It means that a heritability estimate is not a single, fixed number;
instead, it is a number that depends on the sample of people being studied. Heritability estimates for different samples can be very different. For example, the heritability of intelligence for wealthy people has been estimated to be about
72%, but for people living in poverty, it’s only 10% (Turkheimer et al., 2003). Why might this be?
The key to solving this puzzle is to recognize that heritability estimates depend on other factors, such as how different or similar people’s environments are. If people in a sample inhabit highly similar environments, the heritability estimate will be higher, whereas if they inhabit highly diverse environments, the heritability estimate will be lower. Because most wealthy people have access to good nutrition, good schools, plenty of enrichment opportunities, and strong parental support for education, these factors contribute fairly equally to the intelligence of wealthy people; thus, differences in their intelligence scores are largely explained by genetic differences. But the environments inhabited by people living in poverty differ widely. Some may receive good schooling and others very little. Some may receive proper nutrition (e.g., poor farming families that grow their own food), whereas others may be chronically malnourished (e.g., children in poor inner-city neighbourhoods). For poorer families, these differences in the environment would impact intelligence (as we discuss later in this module), leading to lower heritability estimates.
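The dependence of heritability on environmental variability can be illustrated with a toy calculation. In the sketch below (not from the textbook; all variance numbers are invented), heritability is treated as the genetic share of the total variance, so holding genetic variance constant while widening environmental differences pulls the estimate down.

```python
# Toy illustration of why heritability estimates differ across samples.
# Heritability = genetic variance / (genetic variance + environmental variance).

def heritability(v_genetic: float, v_environment: float) -> float:
    return v_genetic / (v_genetic + v_environment)

v_genetic = 40.0        # assume the same genetic variance in both samples
v_env_uniform = 15.0    # wealthy sample: environments are fairly similar
v_env_diverse = 120.0   # low-income sample: environments differ widely

print(heritability(v_genetic, v_env_uniform))  # ~0.73 -> high heritability
print(heritability(v_genetic, v_env_diverse))  # ~0.25 -> much lower estimate
```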
There are many other problems with interpreting heritability estimates as
indications that genes cause differences in intelligence. Two of the most important both have to do with an under-appreciation for how genes interact with
the environment. First, as discussed in Module 3.1 , genes do not operate in isolation from the environment. We know now that the “nature vs. nurture” debate has evolved into a discussion of how “nurture shapes nature.” Environmental factors determine how genes express themselves and influence the organism.
Second, genes that influence intelligence may do so indirectly, operating through other factors. For example, imagine genes that promote novelty-seeking. People with these genes would be more likely to expose themselves to new ideas and
new ways of doing things. This tendency to explore, rooted in their genes, may lead them to become more intelligent. However, in more dangerous environments, these novelty-seeking genes could expose the person to more danger. Therefore, genes that encourage exploratory behaviour might be related to higher intelligence in relatively safe environments, but in dangerous environments might be related to getting eaten by cave-bears more often.
Behavioural Genomics
Twin and adoption studies show that some of the individual differences observed in intelligence scores can be attributed to genetic factors. But these studies do not tell us which genes account for the differences. To answer that question,
researchers use behavioural genomics, a technique that examines how specific genes interact with the environment to influence behaviours, including those related to intelligence. Thus far, the main focus of the behavioural genomics approach to intelligence is to identify genes that are related to cognitive abilities,
such as learning and problem solving (Deary et al., 2010).
Overall, studies scanning the whole human genome show that intelligence levels can be predicted, to some degree, by the collection of genes that individuals
inherit (Craig & Plomin, 2006; Plomin & Spinath, 2004). These collections of genes seem to pool together to influence general cognitive ability; although each contributes a small amount, the contributions combine to have a larger effect. However, although almost 300 individual genes have been found to have a large
impact on various forms of mental retardation (Inlow & Restifo, 2004), very few genes have been found to explain normal variation in intelligence (Butcher et al., 2008). In one large study that scanned the entire genome of 7000 people, researchers found a mere six genetic markers that predicted cognitive ability. Taken together, these six markers only explained 1% of the variability in
cognitive ability (Butcher et al., 2008). Thus, there is still a long way to go before we can say that we understand the genetic contributors to intelligence.
One way of speeding the research up has been to develop ways of
experimenting with genes directly, in order to see what they do. Gene knockout (KO) studies involve removing a specific gene and comparing the
characteristics of animals with and without that gene. In one of the first knockout studies of intelligence, researchers discovered that removing one particular gene
disrupted the ability of mice to learn spatial layouts (Silva et al., 1992). Since this investigation was completed, numerous studies using gene knockout methods have shown that specific genes are related to performance on tasks that have
been adapted to study learning and cognitive abilities in animals (Robinson et al., 2011).
Scientists can also take the opposite approach; instead of knocking genes out, they can insert genetic material into mouse chromosomes to study the changes associated with the new gene. The animal that receives this so-called gene
transplant is referred to as a transgenic animal. Although this approach may sound like science fiction, it has already yielded important discoveries, such as
transgenic mice that are better than average learners (Cao et al., 2007; Tang et al., 1999).
One now-famous example is the creation of “Doogie mice,” named after the 1990s TV character Doogie Howser (played by a young Neil Patrick Harris), a genius who became a medical doctor while still a teenager. Doogie mice were
created by manipulating a single gene, NR2B (Tang et al., 1999). This gene encodes a key subunit of the NMDA receptor, which plays a crucial role in learning and memory. Enhancing the function of these receptors should, therefore, allow organisms to retain more information (and possibly to access it more quickly). Consistent with this view, Doogie mice with altered NR2B genes learned significantly faster and had better memories than did other mice. For example, when the Doogie mice and normal mice were put into a tank of water in which they had to find a hidden platform in order to escape, the Doogie mice took half as many trials to remember how to get out of the tank.
The Princeton University lab mouse, Doogie, is able to learn faster than other mice thanks to a bit of genetic engineering. Researchers inserted a gene known as NR2B that helps create new synapses and leads to quicker learning. Princeton University/KRT/Newscom
The different types of studies reviewed in this section show us that genes do
have some effect on intelligence. What they don’t really show us is how these effects occur. What causes individual differences in intelligence? One theory suggests that these differences could be due to varying brain size.
Working the Scientific Literacy Model Brain Size and Intelligence
Are bigger brains more intelligent? We often assume that to be the case—think of the cartoon characters that are super- geniuses; they almost always have gigantic heads. Or think about what it means to call someone a “pea brain.” Psychologists have not been immune to this belief, and many studies have searched for a correlation between brain size and intelligence.
What do we know about brain size and intelligence?
Brain-based approaches to measuring intelligence rest on a common-sense assumption: Thinking occurs in the brain, so a larger brain should be related to greater intelligence. But does scientific evidence support this assumption? In the days before modern brain imaging was possible, researchers typically obtained skulls from deceased subjects, filled them with fine-grained matter such as metal pellets, and then transferred the pellets to a flask to measure the volume. These efforts taught us very little about intelligence and brain or skull size, but a lot about problems with measurement and racial prejudice. In some cases, the studies were highly flawed and inevitably led to conclusions that Caucasian males (including the Caucasian male scientists who conducted these experiments) had the largest brains and,
therefore, were the smartest of the human race (Gould, 1981). Modern approaches to studying the brain and intelligence are far more sophisticated, thanks to newer techniques and a more enlightened knowledge of the brain’s form and functions.
How can science explain the relationship between brain size and intelligence? In relatively rare cases, researchers have had the two most important pieces of data needed: brains, and people attached to those brains who had taken intelligence tests when they were
alive. In one ambitious study at McMaster University, Sandra Witelson and her colleagues (2006) collected 100 brains of deceased individuals who had previously completed the Wechsler Adult Intelligence Scale (WAIS). Detailed anatomical examinations and size measurements were made on the entire brains and certain regions that support cognitive skills. For women and right-handed men (but not left-handed men), 36% of the variation in verbal intelligence scores was accounted for by the size of the brain; however, brain size did not significantly account for the other component of intelligence that was measured, visuospatial abilities. Thus, it appears that brain size does predict intelligence, but certainly doesn’t tell the whole story.
In addition to the size of the brain and its various regions, there are other features of our neuroanatomy that might be important to consider. The most obvious, perhaps, is the convoluted surface of ridges and folds (called gyri; pronounced “ji-rye”) that
comprise the outer part of the cerebral cortex (see Figure 9.13 ). Interestingly, the number and size of these cerebral gyri seems strongly related to intelligence across different species; species that have complex cognitive and social lives, such as elephants, dolphins, and primates, have particularly convoluted
cortices (Marino, 2002; Rogers et al., 2010). And indeed, even within humans, careful studies using brain imaging technology have shown that having more convolutions on the surface of certain parts of the cortex was also positively correlated with scores on the WAIS intelligence test, accounting for
approximately 25% of the variability in WAIS scores (Luders et al., 2008).
Figure 9.13 Does Intelligence Increase with Brain Size?
While the size of the brain may have a modest relationship to intelligence, the convolutions or “gyri” along the surface of the cortex are another important factor: Increased convolutions are associated with higher intelligence test scores.
Can we critically evaluate this issue?
A common critique of studies examining brain size and IQ is that it is not always clear what processes or abilities are being tested. IQ scores could be measuring a number of things, including working memory, processing speed, ability to pay attention, or even motivation to perform well on the test. Therefore, when studies show that brain size can account for 25% of the variability in IQ scores, it is not always clear which ability (or abilities) underlie these results.
Another potential problem is the third-variable problem; even if brain size and performance on intelligence tests are correlated with each other, it might be the case that they are both related to some other factor, like stress, nutrition, physical health, environmental toxins, or the amount of enriching stimulation
experienced during childhood (Choi et al., 2008). If these other factors can account for the relationship between brain size and intelligence, then the brain–IQ relationship itself may be overestimated.
A final critique is simply the recognition that there is more to intelligence than just the size of one’s brain. After all, if brain size explains 25% of the variability in IQ scores, the other 75% must be due to other things.
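As a quick aside on what “accounts for 25% of the variability” means statistically: variance explained is the square of the correlation coefficient. The short sketch below (illustrative only, using the 25% figure from the text) converts between the two.

```python
# Variance explained is r squared, so a reported percentage of variance can be
# converted back into the underlying correlation (and vice versa).
import math

variance_explained = 0.25              # 25% of IQ variability (from the text)
r = math.sqrt(variance_explained)      # corresponding correlation, r = 0.50
unexplained = 1 - variance_explained   # the 75% left to other factors

print(f"r = {r:.2f}, unexplained variance = {unexplained:.0%}")
```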
Why is this relevant? This research is important for reasons that go far beyond the issue of intelligence and IQ tests. More generally, research on the neurology of intelligence has furthered our understanding of the relationship between brain structure and function, which are related to many important phenomena. For example, certain harmful patterns of behaviour, such as anorexia nervosa (a psychological disorder marked by self-starvation) and prolonged periods of alcohol abuse, have been shown to lead to changes in cognitive abilities and corresponding loss of brain
volume (e.g., McCormick et al., 2008; Schottenbauer et al.,
2007). Measurements of brain volume have also played a key role in understanding the impaired neurological and cognitive development of children growing up in institutional settings (e.g., orphanages), as well as how these children benefit from
adoption, foster care, or increased social contact (Sheridan et al., 2012). Better understanding of how experiences like anorexia, alcoholism, and child neglect affect brain development may provide ways of developing effective interventions that could help people who have suffered from such experiences.
Module 9.3a Quiz:
Biological Influences on Intelligence
Know . . . 1. When scientists insert genetic material into an animal’s genome, the result is called a ________.
A. genomic animal
B. transgenic animal
C. knockout animal
D. fraternal twin
Understand . . . 2. How do gene knockout studies help to identify the contribution of specific
genes to intelligence?
A. After removing or suppressing a portion of genetic material, scientists can look for changes in intelligence.
B. After inserting genetic material, scientists can see how intelligence has changed.
C. Scientists can rank animals in terms of intelligence and then see how the most intelligent animals differ genetically from the least intelligent.
D. They allow scientists to compare identical and fraternal twins.
Analyze . . . 3. Identical twins, whether reared together or apart, tend to score very
similarly on standardized measures of intelligence. Which of the following statements does this finding support?
A. Intelligence levels are based on environmental factors for both twins reared together and twins reared apart.
B. Environmental factors are stronger influences on twins raised together compared to twins reared apart.
C. The “intelligence gene” is identical in both twins reared together and reared apart.
D. Genes are an important source of individual variations in intelligence test scores.
Environmental Influences on Intelligence
As described earlier, research on the biological underpinnings of intelligence repeatedly emphasizes the importance of environmental factors. For example, environmental conditions determine which genes get expressed (“turned on”) for a given individual; thus, without the right circumstances, genes can’t appropriately affect the person’s development. Also, brain areas involved in intelligence are responsive to a wide variety of environmental factors. The full story of how “nature” influences intelligence is intricately bound up with the story of how “nurture” influences intelligence.
Both animal and human studies have demonstrated how environmental factors influence cognitive abilities. Controlled experiments with animals show that growing up in physically and socially stimulating environments results in faster learning and enhanced brain development compared to growing up in a dull
environment (Hebb, 1947; Tashiro et al., 2007). For example, classic studies in the 1960s showed that rats who grew up in enriched environments (i.e., these rats enjoyed toys, ladders, and tunnels) ended up with bigger brains than rats
who grew up in impoverished environments (i.e., simple wire cages). Not only
were their cerebral cortices approximately 5% larger (Diamond et al., 1964; Rosenzweig et al., 1962), but their cortices contained 25% more synapses (Diamond et al., 1964). With more synapses, the brain can make more associations, potentially enhancing cognitive abilities such as learning and creativity. In this section, we review some of the major environmental factors that influence intelligence.
Birth Order
One of the most hotly debated environmental factors affecting intelligence is simply whether you were the oldest child in your family, or whether you were lower in the pecking order of your siblings. Debate about this issue has raged for many decades within psychology. Regardless of the larger debate about why birth order might affect intelligence, the evidence seems to indicate that it does. For example, a 2007 study of more than 240 000 people in Norway found that the IQs of first-born children are, on average, about three points higher than those of second-born children and four points higher than those of third-born
children (Kristensen & Bjerkedal, 2007).
Socioeconomic status is related to intelligence. People from low-socioeconomic backgrounds typically have far fewer opportunities to access educational and other important resources that contribute to intellectual growth. John Dominis/Getty Images
Why might this be? The most important factor, researchers believe, is that older siblings, like it or not, end up tutoring and mentoring their younger siblings, imparting the wisdom they have gained through experience. Although this may help the younger sibling, the act of teaching their knowledge
benefits the older sibling more (Zajonc, 1976). The act of teaching requires the older sibling to rehearse previously remembered information and to reorganize it in a way that their younger sibling will understand. Teaching therefore leads to a deeper processing of the information, which, in turn, increases the likelihood that
it will be remembered later (see Module 7.2 ).
Before any first-born children reading this section start building monuments to their greatness, it is important to note that the differences between the IQs of first- and later-born siblings are quite small: three or four points. There will definitely be many individual families in which the later-born kids have higher IQs than their first-born siblings. Nevertheless, this finding is one example of how environments can influence intelligence.
Socioeconomic Status
One of the most robust findings in the intelligence literature is that IQ correlates strongly with socioeconomic status (SES). It is perhaps no surprise that children growing up in wealthy homes have, on average, higher IQs than those growing
up in poverty (Turkheimer et al., 2003), but there may be many reasons for this that have nothing to do with the “innate” or potential intelligence of the rich or the poor. Think of the many environmental differences and greater access to resources and opportunities enjoyed by the wealthy! For example, consider how much language children are exposed to at home; one U.S. study estimated that by age three, children of professional parents will have heard 30 million words, children of working-class parents will have heard only 20 million words, and children of unemployed African-American mothers will have heard only 10 million
words. Furthermore, the level of vocabulary is strikingly different for families in the different socioeconomic categories, with professional families using the most
sophisticated language (Hart & Risley, 1995).
Other studies have shown that higher SES homes are much more enriching and supportive of children’s intellectual development—high SES parents talk to their children more; have more books, magazines, and newspapers in the home; give
them more access to computers; take them to more learning experiences outside the home (e.g., visits to museums); and are less punitive toward their children
(Bradley et al., 1993; Phillips et al., 1998).
Unfortunately, the effects of SES don’t end here. SES interacts with a number of other factors that can influence intelligence, including nutrition, stress, and education. The difference between rich and poor people’s exposure to these factors almost certainly affects the IQ gap between the two groups.
Nutrition
It’s a cliché we are all familiar with—“you are what you eat.” Yet over the past century, the quality of the North American diet has plummeted as we have adopted foods that are highly processed, high in sugar and fat, low in fibre and nutrients, and laden with chemicals (preservatives, colours, and flavourings). Some evidence suggests that poor nutrition could have negative effects on intelligence. For example, research has shown that diets high in saturated fat quickly lead to sharp declines in cognitive functioning in both animal and human subjects. In contrast, diets low in such fats and high in fruits, vegetables, fish,
and whole grains are associated with higher cognitive functioning (Greenwood & Winocur, 2005; Parrott & Greenwood, 2007).
A massive longitudinal study on diet is currently underway in the United Kingdom. The Avon Longitudinal Study of Parents and Children is following the development of children born to 14 000 women in the early 1990s. This research has shown that a “poor” diet (high in fat, sugar, and processed foods) early in life leads to reliably lower IQ scores by age 8.5, whereas a “health-conscious” diet (emphasizing salads, rice, pastas, fish, and fruit) leads to higher IQs. Importantly, this was true even when researchers accounted for the effects of other variables,
such as socioeconomic status (Northstone et al., 2012).
So what kinds of foods should we eat to maximize our brainpower? Although research on nutrition and intelligence is still relatively new, it would appear that foods low in saturated fat and rich in omega-3 fats, whole grains, fruits, and vegetables are your smartest bets.
Stress
High levels of stress in economically poor populations are also a major factor in explaining the rich–poor IQ gap. People living in poverty are exposed to high levels of stress through many converging factors, ranging from higher levels of environmental noise and toxins, to more family conflict and community violence, to less economic security and fewer employment opportunities. These and many other stresses increase the amounts of stress hormones such as cortisol in their
bodies, which in turn is related to poorer cognitive functioning (Evans & Schamberg, 2009). High levels of stress also interfere with working memory (the ability to hold multiple pieces of information in memory at one time; Evans & Schamberg, 2009), and the ability to persevere when faced with challenging tasks, such as difficult questions on an IQ test (Evans & Stecker, 2004). These deficits interfere with learning in school (Blair & Razza, 2007; Ferrer & McArdle, 2004).
The toxic effects of chronic stress show up in the brain as well, damaging the neural circuitry of the prefrontal cortex and hippocampus, which are critical for working memory and other cognitive abilities (e.g., controlling attention, cognitive flexibility) as well as for the consolidation and storage of long-term memories
(McEwen, 2000). In short, too much stress makes us not only less healthy, but can make us less intelligent as well.
Education
One of the great hopes of modern society has been that universal education would level the playing field, allowing all children, rich and poor alike, access to the resources necessary to achieve success. Certainly, attending school has
been shown to have a large impact on IQ scores (Ceci, 1991). During school, children accumulate factual knowledge, learn basic language and math skills, and learn skills related to scientific reasoning and problem solving. Children’s IQ
scores are significantly lower if they do not attend school (Ceci & Williams, 1997; Nisbett, 2009). In fact, for most children, IQ drops even over the months
of summer holiday (Ceci, 1991; Jencks et al., 1972), although the wealthiest 20% actually show gains in IQ over the summer, presumably because they enjoy activities that are even more enriching than the kinds of experiences delivered in
the classroom (Burkam et al., 2004; Cooper et al., 2000). However, although education has the potential to help erase the rich–poor gap in IQ, its effectiveness at doing so will depend on whether the rich and poor have equal access to the same quality of education and other support and resources that would allow them to make full use of educational opportunities.
Clearly, environmental factors such as nutrition, stress, and education all influence intelligence, which gives us some clues as to how society can contribute to improving the intelligence of the population. Interestingly, exactly such a trend has been widely observed across the last half-century or so; it appears that generation after generation, people are getting smarter!
The Flynn Effect: Is Everyone Getting Smarter?
The Flynn effect, named after researcher James Flynn, refers to the steady population-level increases in intelligence test scores over time (Figure 9.14 ). This effect has been found in numerous situations across a number of countries. For example, in the Dutch and French militaries, IQ scores of new recruits rose dramatically between the 1950s and 1980s—21 points for the Dutch and about
30 for the French (Flynn, 1987). From 1932 to 2007, Flynn estimates that, in general, IQ scores rose about one point every three years (Flynn, 2007).
Figure 9.14 The Flynn Effect For decades, there has been a general trend toward increasing IQ scores. This trend, called the Flynn effect, has been occurring since standardized IQ tests have been administered. Source: Flynn, J. R. (1999). Searching for justice: The discovery of IQ gains over time. American Psychologist, 54, 5–20.
The magnitude of the Flynn effect is striking. In the Dutch study noted above, today’s group of 18-year-olds would score 35 points higher than 18-year-olds in 1950. The average person back then had an IQ of 100, but the average person today, taking the same test, would score 135, which is above the cut-off considered “gifted” in most gifted education programs! Or consider this the opposite way—if the average person today scored 100 on today’s test, the average person in 1950 would score about 65, enough to qualify as mentally disabled.
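A back-of-envelope calculation shows how quickly gains at that rate accumulate. The sketch below is illustrative only; the rate is Flynn’s rough average of one point every three years, and the year ranges are examples.

```python
# Rough cumulative IQ gain implied by a rate of about one point every three years.
def cumulative_gain(start_year: int, end_year: int,
                    points_per_year: float = 1 / 3) -> float:
    return (end_year - start_year) * points_per_year

print(round(cumulative_gain(1932, 2007)))  # ~25 points over Flynn's 1932-2007 window
print(round(cumulative_gain(1950, 1982)))  # ~11 points; country-specific gains
                                           # (e.g., the Dutch data) ran faster
```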
How can we explain this increase? Nobody knows for sure, but one of the most likely explanations is that modern society requires certain types of intellectual skills, such as abstract thinking, scientific reasoning, classification, and logical analysis. These have been increasingly emphasized since the Industrial
Revolution, and particularly since the information economy and advent of computers have restructured society over the past half-century or so. Each successive generation spends more time manipulating information with their minds; more time with visual media in the form of television, video games, and now the Internet; and more time in school. It seems reasonable to propose that
these shifts in information processing led to the increases in IQ scores (Nisbett et al., 2012).
Module 9.3b Quiz:
Environmental Influences on Intelligence
Understand . . . 1. What have controlled experiments with animals found in regard to the
effects of the environment on intelligence?
A. Stimulating environments result in faster learning and enhanced brain development.
B. Deprived environments result in faster learning and enhanced brain development.
C. Stimulating environments result in slower learning and poorer brain development.
D. Deprived environments have no effect on learning and brain development.
2. In which way have psychologists NOT studied the major environmental factors that, through their interaction with genes, influence intelligence?
A. By measuring stress hormones among poor and affluent children and correlating them with intelligence test scores
B. By depriving some children of education and comparing them to others who attended school
C. By measuring children’s nutrition and then correlating it with intelligence scores
D. By correlating children’s birth order in their family with intelligence scores
Analyze . . . 3. What effect does birth order have on intelligence scores? Why is this the
case?
A. Older children often have lower IQs than their siblings because their parents spend more time taking care of younger children.
B. Younger siblings often have higher IQs because their older siblings spent time teaching them new information and skills.
C. Younger siblings have lower IQs because they have had less time to learn information and skills.
D. Older children typically have slightly higher IQs, likely because they reinforce their knowledge by teaching younger siblings.
Behavioural Influences on Intelligence
If you want to make yourself more intelligent, we’ve covered a number of ways to do that—eat a brain-healthy diet, learn how to manage stress better, keep yourself educated (if not in formal schooling, then perhaps by continuing to be an active learner), and expose yourself to diverse and stimulating activities. But is there anything else you can do? For example, if you want bigger muscles, you can go to the gym and exercise. Can you do the same thing for the brain?
Brain Training Programs
One potential technique to improve intelligence is the use of “brain training” programs designed to improve working memory and other cognitive skills. The idea behind such programs is that playing games related to memory and attention will not only improve your performance on these games, but will also help you use those abilities in other, real-world situations.
Research in this area initially appeared quite promising. For instance, in one line of research, a computer task (the “N-back” task) was used as an exercise program for working memory. In this task, people are presented with a stimulus, such as squares that light up on a grid, and are asked to press a key if the
position on the grid is the same as the last trial. The task gets progressively more difficult, requiring participants to remember what happened two, three, or more trials ago (although it takes considerable practice for most people to be able to reliably remember what happened even three trials ago). Practising the N-back task was shown to not only improve performance at that task, but also to
increase participants’ fluid intelligence (Jaeggi et al., 2008). Importantly, the benefits were not merely short term, but lasted for at least three months (Jaeggi et al., 2011).
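To give a sense of the task’s logic, here is a simplified sketch of how N-back targets can be identified in a sequence of grid positions. This is not the exact program used by Jaeggi and colleagues; the sequence and coding scheme are made up for illustration.

```python
# Simplified N-back logic: a trial is a "target" when the current stimulus
# matches the one presented n trials earlier.
from typing import List

def n_back_targets(positions: List[int], n: int) -> List[bool]:
    """Flag each trial as a target if it matches the position n trials back."""
    return [i >= n and positions[i] == positions[i - n]
            for i in range(len(positions))]

# Example: positions on a 3x3 grid coded 0-8, checked with a 2-back rule
sequence = [4, 7, 4, 2, 5, 2, 8]
print(n_back_targets(sequence, n=2))
# [False, False, True, False, False, True, False]
```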
However, recent reviews of this area of research suggest that we should be cautious when interpreting the results (and media reports). Many studies of brain-training programs involved small sample sizes; other studies included
major methodological flaws such as a lack of a control group (Simons et al., 2016). A more careful examination of this research area suggests that the effects of brain-training programs are typically quite limited. Practising games related to working memory will improve working memory, but will rarely have an effect on other types of tasks, particularly on behaviours occurring outside of the
laboratory (Melby-Lervåg & Hulme, 2013). Although these results are disappointing—particularly for people who have spent money on expensive brain-training programs—they help remind us of the importance of being critical consumers of scientific information.
Nootropic Drugs
Another behaviour that many people believe improves their cognitive functioning
is the use of certain drugs. Nootropic substances (meaning “affecting the mind”) are substances that are believed to beneficially affect intelligence. Nootropics can work through many different mechanisms, from increasing overall arousal and alertness, to changing the availability of certain neurotransmitters, to stimulating nerve growth in the brain.
Certainly, these drugs can work for many people. For example, two drugs commonly used are methylphenidate (Ritalin) and modafinil (Provigil). Methylphenidate is a drug that inhibits the reuptake of norepinephrine and dopamine, thus leaving more of these neurotransmitters in the synapses
between cells. Although generally prescribed to help people with attentional disorders, Ritalin can also boost cognitive functioning in the general population
(Elliott et al., 1997). Modafinil, originally developed to treat narcolepsy (a sleep disorder), is known to boost short-term memory and planning abilities by
affecting the reuptake of dopamine (Turner et al., 2003).
Boosting the brain, however, does not come without risk. For example, the long- term effects of such drugs are poorly understood and potential side effects can be severe. There can also be dependency issues as people come to rely on such drugs and use them more regularly, and problems with providing unfair advantages to people willing to take such drugs, which puts pressure on others
to take them as well in order to stay competitive (Sahakian & Morein-Zamir, 2007). Because of these risks, a September 2013 review in the Canadian Medical Association Journal recommended that doctors “should seriously consider refusing to prescribe medications for cognitive enhancement to healthy
individuals” (Forlini et al., 2013, p. 1047).
These risks have to be weighed against the potential benefits of developing these drugs, at least for clinical populations. For example, researchers in the United Kingdom have argued that if nootropic drugs could improve the cognitive functioning of Alzheimer’s patients by even a small amount, such as a mere 1% change in the severity of the disease each year, this would be enough not only to dramatically improve the lives of people with Alzheimer’s and their families, but to completely erase the predicted increases in long-term health care costs for the
U.K.’s aging population (Sahakian & Morein-Zamir, 2007).
As with most questions concerning the ethical and optimally desirable uses of technologies, there are no easy answers when it comes to nootropic drugs. But we would caution you—there are much safer ways to increase your performance than ingesting substances that can affect your brain in unknown ways.
In sum, although few people are blessed with brains as abnormally intelligent as Einstein’s, there are practical things anyone can do to maximize their potential brainpower. From eating better to providing our brains with challenging exercises, we can use the science of intelligence to make the most out of our
genetic inheritance.
Module 9.3c Quiz:
Behavioural Influences on Intelligence
Know . . . 1. A commonly used nootropic drug is ________.
A. Tylenol
B. Ecstasy
C. Ritalin
D. Lamictal
Understand . . . 2. Which of the following seems to be affected by brain-training tasks like
the N-back task?
A. Crystallized intelligence
B. Fluid intelligence
C. A person’s dominant learning style
D. A person’s belief that they are more intelligent
Analyze . . . 3. Research on nootropic drugs shows that
A. they have a much larger effect on intelligence than do environmental factors such as socioeconomic status.
B. they show low addiction rates and are therefore quite safe.
C. they have a larger effect on long-term memory than on working memory.
D. these drugs can produce increases in intelligence in some individuals.
Module 9.3 Summary
9.3a Know . . . the key terminology related to heredity, environment, and intelligence.
Flynn effect
gene knockout (KO) studies
nootropic substances
video deficit
9.3b Understand . . . different approaches to studying the genetic basis of intelligence.
Behavioural genetics typically involves conducting twin or adoption studies. Behavioural genomics involves looking at gene–behaviour relationships at the molecular level. This approach often involves using animal models, including knockout and transgenic models.
9.3c Apply . . . your knowledge of environmental and behavioural effects on intelligence to understand how to enhance your own cognitive abilities.
Based on the research we reviewed, there are many different strategies that are good bets for enhancing the cognitive abilities that underlie your own intelligence. (Note: some of these strategies are known to be helpful for children, and the effects on adult intelligence are not well researched.)
Choose challenging activities and environments that are stimulating and enriching.
Eat diets low in saturated fat and processed foods and high in omega-3 fatty acids, nuts, seeds, fruits, and antioxidant-rich vegetables.
Reduce sources of stress and increase your ability to handle stress well.
Remain an active learner by continually adding to your education or learning.
Don’t spend too much time watching TV and other media that are relatively poor at challenging your cognitive abilities.
The use of nootropic drugs remains a potential strategy for enhancing your cognitive faculties; however, given the potential side effects, addictive possibilities, and the uncertainty regarding the long-term consequences of using such drugs, this option may not be the best way to influence intelligence.
9.3d Analyze . . . the belief that older children are more intelligent than their younger siblings.
Reviews of intelligence tests show that the oldest child in a family tends to have a higher IQ than their younger siblings. However, this effect is quite small: about 3 IQ points. Importantly, this difference is not due to the genetic superiority of the older siblings; rather, it is likely related to the fact that older children often spend time teaching things to their younger siblings.
Chapter 10 Lifespan Development
10.1 Physical Development from Conception through Infancy Methods for Measuring Developmental Trends 387
Module 10.1a Quiz 388
Zygotes to Infants: From One Cell to Billions 388
Working the Scientific Literacy Model: The Long-Term Effects of Premature Birth 392
Module 10.1b Quiz 394
Sensory and Motor Development in Infancy 394
Module 10.1c Quiz 399
Module 10.1 Summary 399
10.2 Infancy and Childhood: Cognitive and Emotional Development Cognitive Changes: Piaget’s Cognitive Development Theory 401
Working the Scientific Literacy Model: Evaluating Piaget 404
Module 10.2a Quiz 406
Social Development, Attachment, and Self-Awareness 406
Module 10.2b Quiz 412
Psychosocial Development 412
Module 10.2c Quiz 415
Module 10.2 Summary 415
10.3 Adolescence Physical Changes in Adolescence 418
Module 10.3a Quiz 419
Emotional Challenges in Adolescence 419
Working the Scientific Literacy Model: Adolescent Risk and Decision Making 420
Module 10.3b Quiz 422
Cognitive Development: Moral Reasoning vs. Emotions 422
Module 10.3c Quiz 425
Social Development: Identity and Relationships 425
Module 10.3d Quiz 427
Module 10.3 Summary 427
10.4 Adulthood and Aging From Adolescence through Middle Age 429
Module 10.4a Quiz 433
Late Adulthood 433
Working the Scientific Literacy Model: Aging and Cognitive Change 436
Module 10.4b Quiz 437
Module 10.4 Summary 438
Module 10.1 Physical Development from Conception through Infancy
Leungchopan/Fotolia
Learning Objectives
Know . . . the key terminology related to prenatal and infant physical development. Understand . . . the pros and cons of different research designs in developmental psychology. Apply . . . your understanding to identify the best ways expectant parents can ensure the health of their developing fetus. Analyze . . . the effects of preterm birth.
10.1a
10.1b
10.1c
10.1d
It is difficult to overstate the sheer miracle and profundity of birth. Consider the following story, told by a new father. “About two days after the birth of my first child, I was driving to the hospital and had one of ‘those moments,’ an awe moment, when reality seems clear and wondrous. What triggered it was that the person driving down the highway in the car next to mine yawned. Suddenly, I remembered my newborn baby yawning just the day before, and somehow, it hit me—we are all just giant babies, all of us, the power broker in the business suit, the teenager in jeans and a hoodie, the tired soccer parent in the mini- van and the elderly couple holding hands on the sidewalk. Although we have invented these complex inner worlds for ourselves, with all of our cherished opinions, political beliefs, dreams, and aspirations, at our essence, we are giant babies. We have the same basic needs as babies —food, security, love, air, water. Our bodies are basically the same, only bigger. Our brains are basically the same, only substantially more developed. Our movements are even basically the same, just more coordinated. I like to remember that now and then, when I feel intimidated by someone, or when I feel too self-important. Just giant babies!”
Of course, we don’t stay “just babies” over our lives. We develop in many complex ways as we age and learn to function in the world. Understanding how we change, and how we stay the same, over the course of our lives, is what developmental psychology is all about.
Focus Questions
1. How does the brain develop, starting even before birth?
2. What factors can significantly harm or enhance babies’ neurological development?
Developmental psychology is the study of human physical, cognitive, social, and behavioural characteristics across the lifespan. Take just about anything you have encountered so far in this text, and you will probably find psychologists approaching it from a developmental perspective. From neuroscientists to cultural psychologists, examining how we function and change across different stages of life raises many central and fascinating questions.
Methods for Measuring Developmental Trends
Studying development requires some special methods for measuring and
tracking change over time. A cross-sectional design is used to measure and compare samples of people at different ages at a given point in time. For example, to study cognition from infancy to adulthood, you could compare people of different age groups—say, groups of 1-, 5-, 10-, and 20-year-olds. In
contrast, a longitudinal design follows the development of the same set of individuals through time. With this type of study, you would select a sample of infants and measure their cognitive development periodically over the course of
20 years (see Figure 10.1 ).
Figure 10.1 Cross-Sectional and Longitudinal Methods In cross-sectional studies, different groups of people—typically of different ages —are compared at a single point in time. In longitudinal studies, the same group of subjects is tracked over multiple points in time.
These different methods have different strengths and weaknesses. Cross- sectional designs are relatively cheap and easy to administer, and they allow a study to be done quickly (because you don’t have to wait around while your
participants age). On the other hand, they can suffer from cohort effects , which are differences between people that result from being born in different time periods. For example, if you find differences between people born in the 2000s with those born in the 1970s, this may reflect any number of differences between people from those time periods—such as differences in technological advances, parenting norms, cultural changes, environmental pollutants, nutritional practices, or many other factors. This creates big problems in interpreting the findings of a study—do differences between the age groups reflect normal developmental processes or do they reflect more general differences between people born into these time periods?
A longitudinal study fixes the problem of cohort effects, but these studies are often difficult to carry out and tend to be costly and time consuming, due to the logistical challenges involved in following a group of people for a long period
of time. Longitudinal designs often suffer from the problem of attrition, which occurs when participants drop out of a study for various reasons, such as losing interest or moving away.
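To see why cohort effects muddy cross-sectional comparisons, consider the toy simulation below. All numbers are invented for illustration: test scores decline slightly with age but also rise with birth year, so a single-time-point comparison of young and old groups mixes the two influences.

```python
# Toy simulation of a cohort effect contaminating a cross-sectional comparison.
import random
from statistics import mean

random.seed(1)

def score(age: int, birth_year: int) -> float:
    """Hypothetical score: small decline with age, plus a boost for later cohorts."""
    aging_decline = -0.1 * age
    cohort_boost = 0.2 * (birth_year - 1950)
    return 110 + aging_decline + cohort_boost + random.gauss(0, 2)

# Cross-sectional study run in 2020: 20-year-olds vs. 70-year-olds
young = [score(age=20, birth_year=2000) for _ in range(200)]
old = [score(age=70, birth_year=1950) for _ in range(200)]

print(round(mean(young) - mean(old), 1))
# ~15-point gap, even though the "true" age-related decline over 50 years is
# only 5 points; the other ~10 points come from the cohort boost. A longitudinal
# design, following the same people over time, avoids this confound.
```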
The combination and accumulation of cross-sectional and longitudinal studies has taught us a great deal about the processes of human development. This can help parents and educators who want to have a positive influence on children’s development. It can help us understand how to better serve the needs of those who are aging. And it can help all of us, who just want to better understand who we are, and why we turned out the way that we did.
One quite famous example of a longitudinal study is the Seven-up series, which is a documentary and extensive longitudinal study of a group of people who started the study at age 7, more than 50 years ago. Watching the series is a fascinating look at how people retain basic features of their personality over pretty much their entire lifespan, whereas they also change as their circumstances take them down different paths in life. If you are interested, you can find this series online; search for “7 up,” “14 up,” etc., up to “56 up,” which was released in 2013.
Patterns of Development: Stages and Continuity
One of the challenges that has faced developmental psychologists is that human development does not unfold in a gradual, smooth, linear fashion; instead, periods of seeming stability are interrupted by sudden, often dramatic upheavals and shifts in functioning as a person transitions from one pattern of functioning to a qualitatively different one. This common pattern, relatively stable periods interspersed with periods of rapid reorganization, has been reflected in many
different stage models of human development. According to these models, specific stages of development can be described, differentiated by qualitatively different patterns of how people function. In between these stages, rapid shifts in thinking and behaving occur, leading to a new set of patterns that manifest as
the next stage. Stage models have played an important role in helping psychologists understand both continuity and change over time.
Module 10.1a Quiz:
Methods for Measuring Developmental Trends
Know . . . 1. Studies that examine factors in groups of people of different ages (e.g., a
group of 15–20 year-olds; a group of 35–40 year-olds; and a group of
75–80 year-olds), are employing a ________ research design.
A. cohort
B. longitudinal
C. cross-sectional
D. stage-model
Apply . . . 2. A researcher has only one year to complete a study on a topic that spans
the entire range of childhood. To complete the study, she should use a
________ design.
A. cohort
B. longitudinal
C. correlational
D. cross-sectional
Analyze . . . 3. Which of the following is a factor that would be least likely to be a cohort
effect for a study on cognitive development in healthy people?
A. Differences in genes between individuals
B. Differences in educational practices over time
C. Changes in the legal drinking age
D. Changes in prescription drug use
Zygotes to Infants: From One Cell to
Billions
The earliest stage of development begins at the moment of conception, when a single sperm (out of approximately 200 million that start the journey into the vagina) is able to find its way into the ovum (egg cell). At this moment, the ovum
releases a chemical that bars any other sperm from entering, and the nuclei of egg and sperm fuse, forming the zygote . Out of the mysterious formation of this single cell, the rest of our lives flow.
Fertilization and Gestation
The formation of the zygote through the fertilization of the ovum marks the
beginning of the germinal stage , the first phase of prenatal development, which spans from conception to two weeks. Shortly after it forms, the zygote begins dividing, first into two cells, then four, then eight, and so on. The zygote also travels down the fallopian tubes toward the uterus, where it becomes
implanted into the lining of the uterus (Table 10.1 ). The ball of cells, now called a blastocyst, splits into two groups. The inner group of cells develops into the fetus. The outer group of cells forms the placenta, which will pass oxygen, nutrients, and waste to and from the fetus.
Table 10.1 Phases of Prenatal Development
A summary of the stages of human prenatal development and some of the major events at each.
Germinal (0 to 2 weeks): Migration of the blastocyst from the fallopian tubes and its implantation in the uterus. Cellular divisions take place that eventually lead to multiple organ, nervous system, and skin tissues.
Embryonic (2 to 8 weeks): Stage in which basic cell layers become differentiated. Major structures such as the head, heart, limbs, hands, and feet emerge. The embryo attaches to the placenta, the structure that allows for the exchange of oxygen and nutrients and the removal of wastes.
Fetal (8 weeks to birth): Brain development progresses as distinct regions take form. The circulatory, respiratory, digestive, and other bodily systems develop. Sex organs appear at around the third month of gestation.
Top: Doug Steley A/Alamy Stock Photo; centre: MedicalRF.com/Alamy Stock Photo; bottom: Claude Edelmann/Photo
Researchers, Inc./Science Source
The embryonic stage spans weeks two through eight, during which time the embryo begins developing major physical structures such as the heart and nervous system, as well as the beginnings of arms, legs, hands, and feet.
The fetal stage spans week eight through birth, during which time the skeletal, organ, and nervous systems become more developed and specialized. Muscles develop and the fetus begins to move. Sleeping and waking cycles start and the senses become fine-tuned—even to the point where the fetus is
responsive to external cues (these events are summarized in Table 10.1 ).
Fetal Brain Development
The beginnings of the human brain can be seen during the embryonic stage, between the second and third weeks of gestation, when some cells migrate to the appropriate locations and begin to differentiate into nerve cells. The first major development in the brain is the formation of the neural tube, which occurs only 2 weeks after conception. A layer of specialized cells begins to fold over onto itself, structurally differentiating between itself and the other cells. This tube-
shaped structure eventually develops into the brain and spinal cord (Lenroot & Giedd, 2007; O’Rahilly & Mueller, 2008). The first signs of the major divisions of the brain—the forebrain, the midbrain, and the hindbrain—are apparent at only
4 weeks (see Figure 10.2 ). Around 7 weeks, neurons and synapses develop in the spinal cord, giving rise to a new ability—movement; the fetus’s own movements then provide a new source of sensory information, which further stimulates the central nervous system’s development of increasingly coordinated
movements (Kurjak, Pooh, et al., 2005). By 11 weeks, differentiations between the cerebral hemisphere, the cerebellum, and the brain stem are apparent, and by the end of the second trimester, the outer surface of the cerebral cortex has started to fold into the distinctive gyri and sulci (ridges and folds) that give the outer cortex its wrinkled appearance. It is around the same time period that a fatty tissue called myelin begins to build up around developing nerve cells, a
process called myelination. Myelin is centrally important; by insulating nerve cells, it enables them to conduct messages more rapidly and efficiently (see Module 3.2 ; Giedd, 2008), thereby allowing for the large-scale functioning and integration of neural networks.
Figure 10.2 Fetal Brain Development The origins of the major regions of the brain are already detectable at four weeks’ gestation. Their differentiation progresses rapidly, with the major forebrain, midbrain, and hindbrain regions becoming increasingly specialized.
At birth, the newborn has an estimated 100 billion neurons and a brain that is approximately 25% the size and weight of an adult brain. Astonishingly, this means that at birth, the infant has created virtually all of the neurons that will
comprise the adult brain, growing up to 4000 new neurons per second in the womb (Brown et al., 2001). However, most of the connections between these
neurons have not yet been established in the brain of a newborn (Kolb, 1989, 1995). This gives us a key insight into one of our core human capacities—our ability to adapt to highly diverse environments. Although the basic shape and structure of our brains is guided by the human genome, the strength of the connections between brain regions is dependent upon experience.
The child’s brain has a vast number of synapses, far more, in fact, than it will have as an adult, which is why the child’s brain is so responsive to external input. The brain is learning, at a very basic level, what the world is like, and what it needs to be able to do in order to perceive and function effectively in the world. Children’s brains have a very high degree of plasticity, so that whatever environments the child encounters while growing up, her developing brain will be best able to learn to perceive and adapt to those environments. Our brains generally develop the patterns of biological organization that correspond to the world that we’ve experienced.
This means that in a deep and personal way, who we are depends on the environments that structure our brains. Ironically, this profound reliance upon the outside world is also the reason why human babies are so helpless (and make such bad Frisbee partners). We humans have relatively little pre-programmed into us and thus, we can do very little at birth relative to so many other animals. However, this seemingly profound weakness is offset by two huge, truly world- altering strengths: the incredible plasticity of our neurobiology, and the social support systems that keep us alive when we are very young. These advantages also give us the luxury of slowly developing over a long period of time. As a result, our increasingly complex neurobiological systems can learn to adapt and function effectively across a vast diversity of specific circumstances. The net result of this flexibility is that humans have been able to flourish in practically every ecosystem on the surface of the planet.
Nutrition, Teratogens, and Fetal Development
The rapidly developing fetal brain is highly vulnerable to environmental influences; for example, the quality of a pregnant woman’s diet can have a long- lasting impact on her child’s development. In fact, proper nutrition is the single
most important non-genetic factor affecting fetal development (aside from the
obvious need to avoid exposure to toxic substances; Phillips, 2006). The nutritional demands of a developing infant are such that women typically require an almost 20% increase in energy intake during pregnancy, including sufficient
quantities of protein (which affects neurological development; Morgane et al., 2002) and essential nutrients (especially omega-3 fatty acids, folic acid, zinc, calcium, and magnesium). Given that most people’s diets do not provide enough of these critical nutrients, supplements are generally considered to be a good
idea (Ramakrishnan et al., 1999).
Fetal malnutrition can have severe consequences, producing low-birth-weight babies who are more likely to suffer from a variety of diseases and illnesses, and are more likely to have cognitive deficits that can persist long after birth. Children who were malnourished in the womb are more likely to experience attention deficit disorders and difficulties controlling their emotions, due to underdeveloped
prefrontal cortices and other brain areas involved in self-control (Morgane et al., 2002). A wide variety of effects on mental health have been suggested; for example, one study showed that babies who were born in Holland during a
famine in World War II experienced a variety of physical problems (Stein et al., 1975) and had a much higher risk of developing psychological disorders, such as schizophrenia and antisocial personality disorder (Neugebauer et al., 1999; Susser et al., 1999).
Fetal development can also be disrupted through exposure to teratogens, substances, such as drugs or environmental toxins, that impair the process of development. One of the most famous and heartbreaking examples was thalidomide, a sedative hailed as a wonder drug for helping pregnant women cope with morning sickness. Available in Canada from 1959 to 1962, thalidomide was disastrous, causing miscarriages and severe birth defects such as blindness and deafness, as well as its most
well-known effect, phocomelia, in which victims’ hands, feet, or both emerged directly from their shoulders or hips, functioning more like flippers than limbs;
indeed, phocomelia is taken from the Greek words phoke, which means “seal,” and melos, which means “limb” (www.thalidomide.ca/faq-en/#12). It is estimated that up to twenty thousand babies were born with disabilities as a
result of being exposed to thalidomide. In most countries, victims were able to secure financial support through class action lawsuits; however, in Canada, the government has steadfastly refused to provide much support to victims, who face ongoing severe challenges in their lives.
More common teratogens are alcohol and tobacco, although their effects differ widely depending on the volume consumed and the exact time when exposure
occurs during pregnancy. First described in the 1970s (Jones & Smith, 1973), fetal alcohol syndrome (FAS) involves abnormalities in mental functioning, growth, and facial development in the offspring of women who use alcohol during pregnancy. This condition occurs in approximately 1 per 1000 births worldwide, but rates likely vary greatly between regions, and little is known about this regional variability. It also seems likely that FAS is underreported, and thus its effects may be far more widespread than is generally
recognized (Morleo et al., 2011).
This is particularly worrisome when one considers that research suggests there is no safe limit for alcohol consumption by a pregnant woman; even one drink
per day can be enough to cause impaired fetal development (O’Leary et al., 2010; Streissguth & Connor, 2001). Alcohol, like many other substances, readily passes through the placental membranes, leaving the developing fetus vulnerable to its effects, which include reduced mental functioning and
impulsivity (Olson et al., 1997; Streissguth et al., 1999). It is concerning, then, that about 1 in 10 pregnancies in Canada involve exposure to
alcohol (Walker et al., 2011), and in some communities, such as those in isolated Northern regions, more than 60% of pregnancies have been shown to
be alcohol-exposed (Muckle et al., 2011). Given that any reduction in drinking during pregnancy helps to lower the risk of FAS, public health and awareness campaigns, family and school efforts, and our own personal contributions to social norms all have a clear role to play in minimizing, and ideally eliminating, alcohol consumption during pregnancy.
Victims of thalidomide; this sedative seemed like a miracle drug in the late 1950s, until its tragic effects on fetal development became apparent. Dpa picture alliance/Alamy Stock Photo
Fetal alcohol syndrome is diagnosed based on facial abnormalities, growth problems, and behavioural and cognitive deficits. Betty Udesen/KRT/Newscom
Smoking can also expose the developing fetus to teratogens, decreasing blood oxygen and raising concentrations of nicotine and carbon monoxide, as well as increasing the risk of miscarriage or death during infancy. Babies born to mothers who smoke are twice as likely to have low birth weight and have a 30% chance of premature birth—both factors that increase the newborn’s risk of illness or death. Evidence also suggests that smoking during pregnancy
increases the risk that the child will experience problems with emotional
development and impulse control (Brion et al., 2010; Wiebe et al., 2014), as well as attentional and other behavioural problems (Makin et al., 1991). These behavioural outcomes could be the outgrowth of impaired biological function; for example, there is evidence that prenatal exposure to nicotine disrupts the development of the serotonergic system, interfering with neurogenesis and with the expression of receptors that affect synaptic functioning (Hellström-Lindahl et al., 2001; Falk et al., 2005). Tobacco exposure may also interfere with the development of brain areas related to self-regulation (e.g., the prefrontal cortex), which in turn leads to poorer self-control and an increase in emotional and
behavioural problems over time (Marroun et al., 2014).
We must note, however, that there is an ongoing debate in the literature as to whether this is a causal relationship or a third-variable problem; in particular, a variety of familial risk factors (e.g., poverty, low parental education) are related both to smoking during pregnancy and to the various developmental deficits that have been reported. Recent studies that attempted to statistically account for these third variables are somewhat inconclusive; some find very little direct relationship between maternal smoking during pregnancy and children's development, whereas others report specific relationships that cannot be explained as being due to other variables. The jury is still out, but on the whole, researchers tentatively conclude that maternal smoking during pregnancy has a causal influence on various developmental
outcomes (Melchior et al., 2015; Palmer et al., 2016).
Smoking is implicated in other risk factors for infants as well, perhaps most notably sudden infant death syndrome (SIDS). Babies exposed to smoke are as much as three times more likely to die from SIDS
(Centers for Disease Control and Prevention [CDC], 2009a; Rogers, 2009). Even exposure to second-hand smoke during pregnancy carries similar risks
(Best, 2009). Thankfully, after major public health campaigns, the rate of SIDS has been declining substantially, dropping in Canada by 71% from 1981 to 2009
(Public Health Agency of Canada, 2014). These campaigns targeted three key behaviours: breastfeeding, putting infants to sleep on their backs (rather than their stomachs), and reducing smoking during pregnancy. To be fair, researchers
don't yet know how much each of these individual behaviours has contributed to the reduction in SIDS, and they will continue trying to disentangle exactly which factors are related to reductions in SIDS rates so that campaigns can more effectively target the ones that make the biggest difference.
Clearly, teratogens exact a major cost on society, causing deficits that range from the very specific (e.g., improperly formed limbs) to more general effects on development (e.g., premature birth and its associated low birth weight).
Working the Scientific Literacy Model The Long-Term Effects of Premature Birth
The human mother's womb has evolved to be a close-to-ideal environment for a fetus's delicate brain and body to prepare for life outside the womb. Premature birth thrusts the vulnerable baby into a much less congenial environment before she is ready; what effects does this have on development?
What do we know about premature birth? Typically, humans are born at a gestational age of around 40
weeks. Preterm infants are born earlier than 36 weeks. Premature babies often have underdeveloped brains and lungs, which present a host of immediate challenges, such as breathing on their own and maintaining an appropriate body temperature. With modern medical care, babies born at 30 weeks have a very good chance of surviving (approximately 95%), although for those born at 25 weeks, survival rates drop to only slightly above 50%
(Dani et al., 2009; Jones et al., 2005). Although babies born at less than 25 weeks often survive, they run a very high risk of damage to the brain and other major organs. To try to reduce these risks and improve outcomes as much as possible, medical science is continually seeking better procedures for nurturing preterm infants.
How can science be used to help preterm infants? Researchers and doctors have compared different methods for improving survival and normal development in preterm infants. One program, called the Newborn Individualized Developmental Care and Assessment Program (NIDCAP), is a behaviourally based intervention in which preterm infants are closely observed and given intensive care during early development. To keep the delicate brain protected against potentially harmful experiences, NIDCAP calls for minimal lights, sound levels, and stress.
Controlled studies suggest that this program works. Researchers randomly assigned 117 infants born at a gestational age of 29 weeks or less to receive either NIDCAP or standard care in a neonatal intensive care unit. Within 9 months of birth, the infants who received NIDCAP care showed significantly improved motor skills, attention, and other behavioural skills, as well as
superior brain development (McAnulty et al., 2009). A longitudinal study indicates that these initial gains last for a long time. Even at eight years of age, those who were born preterm and given NIDCAP treatment scored higher on measures of thinking and problem solving, and also showed better frontal lobe functioning, than children who were born preterm but did not have
NIDCAP treatment (McAnulty et al., 2010).
Can we critically evaluate this research? The chief limitation of this longitudinal study is its small sample size (only 22 children across the two conditions). Such a small sample size presents problems from a statistical perspective, increasing the likelihood that random chance plays a substantial role in the results. Small samples also make it difficult to test the effects of interacting factors, such as whether the effectiveness of the program may depend on the child’s gender, on family socioeconomic status, ethnicity, or other factors. This study also
does not identify why the program works—what specific
mechanisms it affects that in turn improve development. It is not known which brain systems are beneficially affected by the program, or which aspects of the treatment itself are responsible for the effects. These remain questions for future research.
Kangaroo care—skin-to-skin contact between babies and caregivers— is now encouraged for promoting optimal infant development. Victoria Boland Photography/Flickr/ Getty Images
Why is this relevant? Worldwide, an estimated 9% of infants are born preterm (Villar et al., 2003). For these children, medical advances have increased the likelihood of survival, and behaviourally based interventions such as NIDCAP may reduce the chances of long-term negative effects of preterm birth. This fits with a growing literature on other behavioural interventions that have shown promise in improving outcomes for preterm infants. For example, massaging preterm infants for a mere 15 minutes per day can result in a 50% greater
daily weight gain (Field et al., 2006) and reduce stress-related behaviours (Hernandez-Reif et al., 2007). Another method called
kangaroo care focuses on promoting skin-to-skin contact between infants and caregivers, and encouraging breastfeeding. These practices have been shown to improve the physical and
psychological health of preterm infants (Conde-Agudelo et al., 2011), and are becoming widely adopted into mainstream medical practice.
The fact that teratogens can influence the development of the fetal brain—and in some cases lead to premature birth—has made (most) parents quite vigilant about these potential dangers. As you’ve read in this section, these concerns are well-founded. However, it is also important that parents examine the evidence for
each potential threat to see if it is credible. In Module 2.3 , we briefly discussed Andrew Wakefield, a British researcher who fabricated some of his data showing a link between vaccinations and autism. In that module, we focused on the ethical violations that he committed. The Myths in Mind box illustrates how this researcher’s lapse in ethics has had a profound effect on the health and safety of tens of thousands of innocent children.
Myths in Mind Vaccinations and Autism
When you consider all the attention paid to developing better ways to promote healthy infant development, it is ironic and tragic that a surprising number of people actively avoid one of the key ways of preventing some of the most serious childhood illnesses—vaccination. A major controversy erupted in the late 1990s about a widely administered vaccine designed to prevent measles, mumps, and rubella (MMR). Research from one British lab linked the MMR vaccine to the development of autism, and even though the science was later discredited and the key researcher (Andrew Wakefield) lost his license to practise medicine, he continued to promote his views against vaccines through public speaking appearances and rallies, and the anti-vaccine movement remained convinced that vaccines were scarier than the diseases they prevented.
The net result has been a public health tragedy. For example, in Canada, measles was considered to have been eliminated as an endemic disease by 1997; any further cases would have to have been imported from other areas of the world. The United States followed shortly thereafter, eliminating measles by the year 2000, with fewer than 100 new cases imported into the country each year, which were easily dealt with because of large-scale immunity. However, as the anti-vaccine movement continued to proselytize its conspiracy theories about the medical establishment and pharmaceutical industries, these gains began to reverse. By 2011, more than 30 European countries, plus Canada and the U.S., saw huge spikes in measles cases, with worrying outbreaks
occurring in France, Quebec, and California (CDC, 2015; Sherrard et al., 2015).
The take-home message? There is no evidence that vaccines cause autism. On the contrary, all the evidence suggests that vaccines prevent far more problems than they may cause.
Module 10.1b Quiz:
Zygotes to Infants: From One Cell to Billions
Know . . .
1. A developing human is called a(n) ________ during the time between weeks 2 and 8 of development.
A. embryo
B. zygote
C. fetus
D. germinal

2. In which stage do the skeletal, organ, and nervous systems become more developed and specialized?
A. Embryonic stage
B. Fetal stage
C. Germinal stage
D. Gestational stage

Understand . . .
3. Which of the following would not qualify as a teratogen?
A. Cigarette smoke
B. Alcohol
C. Prescription drugs
D. All of the above are possible teratogens

Analyze . . .
4. Which of the following statements best summarizes the effects of preterm birth?
A. Preterm births are typically fatal.
B. The worrisome effects of preterm birth are exaggerated. There is little to worry about.
C. Preterm birth may cause physical and cognitive problems.
D. Cohort effects make it impossible to answer this question.
Sensory and Motor Development in Infancy
Compared to the offspring of other species, healthy newborn humans have fairly limited abilities. Horses, snakes, deer, and many other organisms come into the world with a few basic skills, such as walking (or slithering), that enable them to move about the world, get food, and have at least a chance of evading predators. But human infants depend entirely on caregivers to keep them alive as they slowly develop their senses, strength, and coordination. In this section, we shift our focus to newborns to find out how movement and sensation develop in the first year of life.
It's strange to think about what the world of an infant must be like. As adults, we depend heavily on our top-down processes (see Module 4.1) to help us label, categorize, perceive, and make sense of the world, but infants have developed very few top-down patterns when they are born. Their brains are pretty close to being "blank slates," and life must be, as William James so aptly put it, a "blooming, buzzing confusion."
However, babies aren't quite as "blank" as we have historically assumed. In fact, they are already starting to perceive and make sense of their world while still in the womb. By month four of prenatal development, the brain starts receiving signals from the eyes and ears. By seven to eight months, not only can fetuses hear, they seem to be actually listening. This amazing finding comes from studies in which developing fetuses were exposed to certain stimuli and then their preference for these stimuli was tested after birth. In one study, mothers read
stories, including The Cat in the Hat, twice daily during the final six weeks of pregnancy. At birth, their babies were given a pacifier that controlled a tape recording of their mother’s voice reading different stories. Babies sucked the
pacifier much more to hear their mothers read The Cat in the Hat compared to hearing stories the moms had not read to them in the womb (DeCasper & Spence, 1986). Newborn babies also show a preference for their mother’s voice over other women’s voices. For example, a study involving researchers at Queen’s University showed that babies responded positively when they heard poems read by their mother, but not when the poems were read by a stranger
(Kisilevsky et al., 2003). (Unfortunately for fathers, babies up to at least 4 months old don’t prefer their dad’s voice over other men’s [DeCasper & Prescott, 1984; Ward & Cooper, 1999].)
The auditory patterning of babies’ brains is so significant that they have already started to internalize the sounds of their own native tongue, even before they are born. Recently, researchers analyzed the crying sounds of 60 babies born to either French or German parents and discovered that babies actually cry with an accent. The cries of French babies rose in intensity toward the end of their cry while German babies started at high intensity and then trailed off. This difference was apparent at only a few days of age and reflects the same sound patterns
characteristic of their respective languages (Mampe et al., 2009). So, babies are actively learning about their cultural environment even while in the womb.
The visual system is not as well developed at birth, however. Enthusiastic family members who stand around making goofy faces at a newborn baby are not really interacting with the child; newborns have only about 1/40th of the visual acuity of
adults (Sireteanu, 1999), and can only see about as far away as is necessary to see their mother’s face while breastfeeding (about 30 cm or less). It takes 6 months or more before they reach 20/20 visual acuity. Colour vision, depth perception, and shape discrimination all get a slow start as well. Colour discrimination happens at about 2 months of age, depth perception at 4 months, and it takes a full 8 months before infants can perceive shapes and objects about
as well as adults (Csibra et al., 2000; Fantz, 1961). Nevertheless, even newborns are highly responsive to visual cues if they’re close enough to see them. They will track moving objects, and will stare intently at objects they haven’t seen before, although after a while they habituate to an object and lose
interest in looking at it (Slater et al., 1988).
At just a few days of age, infants will imitate the facial expressions of others
(Meltzoff & Moore, 1977). From Meltzoff, A. N., & Moore, M. K. (1977). Imitation of facial and manual gestures by human neonates. Science, 198, 75–78.
Babies’ visual responses to the world illustrate a major theme within psychology, which is that humans are fundamentally social creatures. By a few days of age,
newborns will imitate the facial expressions of others (Meltzoff & Moore, 1977). Newborns prefer to look at stimuli that look like faces, compared to stimuli that have all the same features but are scrambled so that they don’t look like faces
(see Figure 10.3 ). Infants also take longer to habituate to the face-like stimuli, suggesting that the human face holds particular importance even for newborns
(Johnson et al., 1991). This social attuning was dramatically illustrated in one study (Reissland, 1988), which showed that within one hour of birth, newborns begin to imitate facial expressions that they see!
Figure 10.3 Experimental Stimuli for Studying Visual Habituation in Infants Infants were shown three types of stimuli: a face-like stimulus, a neutral stimulus, and a scrambled-face stimulus.
Interestingly, the proper development of the visual system is not guaranteed to happen; it’s not hardwired into our genes. Instead, the visual system develops in response to the infant experiencing a world of diverse visual input. Research at McMaster University has shown that even though babies possess the necessary “equipment” for proper vision, this equipment needs to be exposed to a diverse
visual world in order to learn how to function effectively (Maurer et al., 1999); it is exposure to the patterns in the world that shapes the appropriate neural pathways in the visual cortex (see Module 4.2).
Although being exposed to a complex world is essential for the development of
the human visual system, actively interacting with this world is also necessary. This was illustrated by research involving an ingenious device—the visual cliff, a glass-topped table in which a patterned surface appears to drop away sharply beneath the glass on one side. Researchers originally found that infants were reluctant to crawl over the apparent drop-off on the "deep side," seeming to understand depth and danger innately (Gibson & Walk, 1960). However, researchers eventually discovered that only babies who had some experience crawling
showed fear of the deep end (Campos et al., 1992).
In contrast to vision, the taste and olfactory systems are relatively well developed at birth. Similar to adults, newborns cringe when they smell something rotten or pungent (such as ammonia), and they show a strong preference for the taste of sweets. Odours are strong memory cues for infants as well. For example, infants can learn that a toy will work in the presence of one odour but not others, and
they can retain this memory over several days (Schroers et al., 2007). Newborn infants can also smell the difference between their mother’s breastmilk and that of a stranger. Infants even turn their heads toward the scent of breastmilk, which
helps to initiate nursing (Porter & Winberg, 1999).
The visual cliff. Mark Richard/PhotoEdit, Inc.
Motor Development in the First Year
Although the motor system takes many years to develop a high degree of coordination (good luck getting an infant to wield a steak knife), the beginnings of the motor system develop very early. A mere five months after conception, the fetus begins to have control of voluntary motor movements. In the last months of gestation, the muscles and nervous system are developed enough to
demonstrate basic reflexes—involuntary muscular reactions to specific types of stimulation. These reflexes provide newborns and infants with a set of innate responses for feeding and interacting with their caregivers (see Table 10.2 for a partial list of important infant reflexes). We evolved these reflexes because they help the infant survive (e.g., the rooting reflex helps the infant find and latch onto the breast; the grasping reflex helps the infant hold onto the caregiver, which was probably pretty important especially for our tree-dwelling ancestors), and they often begin the motor learning process that leads to the development of more complex motor skills (e.g., there is a stepping reflex that may help the infant learn to better sense and control her legs in order to support eventual walking behaviour).
Table 10.2 A Few Key Infant Reflexes

THE ROOTING REFLEX
The rooting reflex is elicited by stimulation to the corners of the mouth, which causes infants to orient themselves toward the stimulation and make sucking motions. The rooting reflex helps the infant begin feeding immediately after birth. Cathy Melloan Resources/PhotoEdit, Inc.

THE MORO REFLEX
The Moro reflex, also known as the "startle" reflex, occurs when infants lose support of their head. Infants grimace and reach their arms outward and then inward in a hugging motion. This may be a protective reflex that allows the infant to hold on to the mother when support is suddenly lost. Petit Format/Photo Researchers, Inc./Science Source

THE GRASPING REFLEX
The grasping reflex is elicited by stimulating the infant's palm. The infant's grasp is remarkably strong and facilitates safely holding on to one's caregiver. Denise Hager/Catchlight Visual Services/Alamy Stock Photo
Interestingly, reflexes also provide important diagnostic information concerning the infant's development. If the infant is developing normally, most of the primary, basic reflexes should disappear by the time the infant is about six months old, as the motor processes involved in these reflexes become integrated into the child's developing nervous system, in particular the sensorimotor systems. The outcome of this integration is a pretty big deal—voluntary control over the body. Thus, if these reflexes persist longer than about six months, this may indicate neural issues that could interfere with developing proper motor control (Volpe, 2008).
Over the first 12 to 18 months after birth, infants’ motor abilities progress through
fairly reliable stages—from crawling, to standing, to walking (see Figure 10.4 ). Although the majority of infants develop this way, there is still some variability; for example, some infants largely bypass the crawling stage, developing a kind of bum-sliding movement instead, and then proceed directly to standing and walking. The age at which infants can perform each of these movements differs from one individual to the next. In contrast to reflexes, the
development of motor skills seems to rely more on practice and deliberate effort, which in turn is related to environmental influences, such as cultural practices. For example, Jamaican mothers typically expect their babies to walk earlier than British or Indian mothers, and sure enough, Jamaican babies do walk earlier, likely because they are given more encouragement and opportunities to learn
(Hopkins & Westra, 1989; Zelazo et al., 1993).
Figure 10.4 Motor Skills Develop in Stages This series shows infants in different stages of development: (a) raising the head, (b) rolling over, (c) propping up, (d) sitting up, (e) crawling, and (f) walking. Top, left: bendao/Shutterstock; top, right: Bubbles Photolibrary/Alamy Stock Photo; bottom, left: imageBROKER/Glow
Images; bottom, centre left: OLJ Studio/Shutterstock; bottom, centre right: Corbis Bridge/Alamy Stock Photo; bottom, right:
Eric Gevaert/Shutterstock
One area of the body that undergoes astonishing development during infancy is the brain. Although the major brain structures are all present at birth, they continue developing right into adulthood. One key change is the myelination of
axons (see Module 3.2 ), which begins prenatally, accelerates through infancy and childhood, and then continues gradually for many years. Myelination is
centrally important for the proper development of the infant, and occurs in a reliable sequence, starting with tactile and kinesthetic systems (involving sensory and motor pathways), then moving to the vestibular, visual, and auditory systems
(Espenschade & Eckert, 1980; Deoni et al., 2011). Myelination of sensorimotor systems allows for the emergence of voluntary motor control (Espenschade & Eckert, 1980). By 12 months of age, the myelination of motor pathways can be seen in the infant's newfound abilities to stand and balance, begin walking, and gain voluntary control over the pincer grasp (pressing the forefinger and thumb together).
Two other neural processes, synaptogenesis and synaptic pruning, further help
to coordinate the functioning of the developing brain. Synaptogenesis
describes the forming of new synaptic connections, which occurs at blinding speed through infancy and childhood and continues through the lifespan. Synaptic pruning, the loss of weak nerve cell connections, accelerates during brain development through infancy and childhood (Figure 10.5), then tapers off until adolescence (see Module 10.3). Synaptogenesis and synaptic pruning serve to increase neural efficiency by strengthening needed connections between nerve cells and weeding out unnecessary ones.
Figure 10.5 The Processes of Synaptic Pruning
In summary, the journey from zygote to you begins dramatically, with biological
pathways being formed at a breakneck pace both prenatally and after birth, giving rise to sensory and motor abilities that allow infants to become competent perceivers and actors in the external world. Most motor abilities require substantial time for infants to learn to coordinate the many different muscles involved, which depends heavily on infants’ interactions with the environment. From the very beginnings of our lives, nature and nurture are inextricably intertwined.
Module 10.1c Quiz:
Sensory and Motor Development in Infancy
Know . . .
1. Three processes account for the main ways in which the brain develops after birth. These three processes are
A. myelination, synaptogenesis, and synaptic pruning.
B. myelination, synaptic reorganization, and increased neurotransmitter production.
C. synaptogenesis, synaptic pruning, and increased neurotransmitter production.
D. cell growth, myelination, and synaptic organization.

Understand . . .
2. The development of infant motor skills is best described as
A. a genetic process with no environmental influence.
B. completely due to the effects of encouragement.
C. a mixture of biological maturation and learning.
D. progressing in continuous, rather than stage, fashion.
Module 10.1 Summary
10.1a Know . . . the key terminology related to prenatal and infant physical development.

cohort effect
cross-sectional design
developmental psychology
embryonic stage
fetal alcohol syndrome
fetal stage
germinal stage
longitudinal design
preterm infant
reflexes
synaptic pruning
synaptogenesis
teratogen
zygote
10.1b Understand . . . the pros and cons of different research designs in developmental psychology.

Cross-sectional designs, in which a researcher studies a sample of people at one point in time, have the advantage of being faster and generally cheaper, allowing research to be completed quickly; however, they may suffer from cohort effects because people of different ages in the sample are also from somewhat different historical time periods, and thus any differences between them could reflect a historical process rather than a developmental one. Longitudinal designs, in which a researcher follows the same sample of people over a span of time, have the advantage of tracking changes in the same individuals, thus giving more direct insight into developmental processes. However, such studies take longer to complete, slowing down the research process, and they can suffer from attrition, in which people drop out of the study over time.
10.1c Apply . . . your understanding to identify the best ways expectant parents can ensure the health of their developing fetus.

The key to healthy fetal development is ensuring a chemically ideal environment. The most important factors are adequate nutrition and avoiding teratogens. Best nutritional practices include approximately a 20% increase in the mother's caloric intake, additional protein, and sufficient quantities of essential nutrients (which usually involves taking nutritional supplements). Avoiding teratogens involves giving up smoking and drinking alcohol, and getting good medical advice concerning any medications that the expectant mother may be taking.

10.1d Analyze . . . the effects of preterm birth.

Health risks increase considerably with very premature births (e.g., those occurring at just 25 weeks' gestation). Use of proper caregiving procedures, especially personalized care that emphasizes mother–infant contact, breastfeeding, and minimal sensory stimulation for the underdeveloped brain, increases the chances that preterm infants will remain healthy.
Module 10.2 Infancy and Childhood: Cognitive and Emotional Development
Getty Images
Learning Objectives
10.2a Know . . . the terminology associated with infancy and childhood.
10.2b Understand . . . the cognitive changes that occur during infancy and childhood.
10.2c Understand . . . the importance of attachment and the different styles of attachment.
10.2d Apply . . . the concept of scaffolding and the zone of proximal development to understand how to best promote learning.
10.2e Analyze . . . how to effectively discipline children in order to promote moral behaviour.
Many parents have turned to Disney’s Baby Einstein line of books, toys, and DVDs in hopes of entertaining and enriching their children. These materials certainly are entertaining enough that children watch them. But do they work? Do these products actually increase cognitive skills? The advertising pitch is certainly persuasive, arguing that these products were designed to help babies explore music, art, language, science, poetry, and nature through engaging images, characters, and music. How could that be bad? However, the American Academy of Pediatrics recommends that children younger than two years should not watch television at all, based on research showing that memory and language skills are slower
to develop in infants who regularly watch television (Christakis, 2009). Further, research specifically on Baby Einstein videos has shown that
they have no effect on vocabulary development (Richert et al., 2010; Robb et al., 2009). Instead of watching commercial programs on electronic screens, reading with caregivers turns out to be related to greater vocabulary comprehension and production. Thus, using the “electronic babysitter” might be justifiable in order to give parents a break or let them get some things done, but caregivers shouldn’t fool themselves into thinking that it’s actually promoting their children’s development.
Focus Questions
1. Which types of activities do infants and young children need for their psychological development?
2. Why are social interactions so important for healthy development?
The transition from baby to toddler is perhaps the most biologically and behaviourally dramatic time in people’s lives. It is a mere year or two during which we grow from highly incapable, drooling babies, to highly coordinated and capable children. The physical, cognitive, and social transitions that occur between infancy and childhood are remarkably ordered, yet are also influenced by individual genetic and sociocultural factors. In this module, we integrate some important stage perspectives to explain psychological development through childhood.
One key insight to emerge from several lines of research is that for many systems, certain periods of development seem to be exceptionally important for
long-term functioning. A sensitive period is a window of time during which exposure to a specific type of environmental stimulation is needed for normal development of a specific ability. For example, to become fluent in language, infants need to be exposed to speech during their first few years of life. Long- term deficits can emerge if the needed stimulation, such as language, is missing during a sensitive period. Sensitive periods of development are a widespread phenomenon. They have been found in humans and other species for abilities such as depth perception, balance, recognition of parents and, in humans at
least, identifying with a particular culture (Cheung et al., 2011). However, although sensitive periods can explain the emergence of many perceptual (and some cognitive) abilities, they are only one of many mechanisms underlying human development.
Over the past century, many psychologists have attempted to explain how children’s mental abilities develop and expand. One of the most influential figures in this search was a Swiss psychologist named Jean Piaget (1896–1980).
Cognitive Changes: Piaget’s Cognitive Development Theory
Jean Piaget developed many of his theories in an unorthodox manner: he studied his own family. However, this was not a casual undertaking. Piaget observed his children closely as they grew up, made copious notes of his observations, and even ran specific tests and measurements on them. The theories that resulted from this extensive personal project laid much of the groundwork for the modern science of cognitive development—the study of changes in memory, thought, and reasoning processes that occur throughout the lifespan. In his own work, Piaget focused on cognitive development from infancy through early adolescence.
Piaget’s central interest was in explaining how children learn to think and reason. According to Piaget, learning is all about accumulating and modifying knowledge, which involves two central processes that he called assimilation and
accommodation. Assimilation is fitting new information into the belief system one already possesses. For example, young children may think that all girls have long hair and, as they encounter more examples of this pattern, they will assimilate them into their current understanding. Of course, eventually they will run into girls with short hair or boys with long hair, and their beliefs will be challenged by this information. They may, at first, misunderstand, assuming a short-haired girl is actually a boy and a long-haired boy is actually a girl. But over time they will learn that their rigid categories of long-haired girl and short-haired boy need to be altered. This is called accommodation, a creative process whereby people modify their belief structures based on experience. Our belief systems help us make sense of the world (assimilation), but as we encounter information that challenges our beliefs, we develop a more complex understanding of the world (accommodation). Deeply understanding assimilation and accommodation gets right to the heart of how to help people learn new things, as well as why people so often resist new information and may vigorously hold on to their beliefs.
Based on his observations of his children, Piaget concluded that cognitive
development passes through four distinct stages from birth through early adolescence: the sensorimotor, preoperational, concrete operational, and formal operational stages. Passing out of one stage and into the next occurs when the
child achieves the important developmental milestone of that stage (see Table 10.3).
Table 10.3 Piaget's Stages of Cognitive Development

Sensorimotor (0–2 years): Cognitive experience is based on direct sensory experience with the world, as well as motor movements that allow infants to interact with the world. Object permanence is the significant developmental milestone of this stage.

Preoperational (2–7 years): Thinking moves beyond the immediate appearance of objects. The child understands that symbols, language, and drawings can be used to represent ideas, although conservation is not yet mastered.

Concrete operational (7–11 years): The ability to perform mental transformations on objects that are physically present emerges. Thinking becomes logical and organized.

Formal operational (11 years–adulthood): The capacity for abstract and hypothetical thinking develops. Scientific reasoning becomes possible.
The Sensorimotor Stage: Living in the Material World
The earliest period of cognitive development is known as the sensorimotor stage; this stage spans from birth to two years, during which infants' thinking about and exploration of the world are based on immediate sensory (e.g., seeing, feeling) and motor (e.g., grabbing, mouthing) experiences. During this time, infants are completely immersed in the present moment, responding
exclusively to direct sensory input. As soon as an object is out of sight and out of reach, it will cease to exist (at least in the minds of young infants): Out of sight, out of mind.
This is obviously not how the world works. Thus, the first major milestone of
cognitive development proposed by Piaget is object permanence, the ability to understand that objects exist even when they cannot be directly perceived. To test for object permanence, Piaget would allow an infant to reach for a toy and then place a screen or barrier between the infant and the toy so that the toy was no longer visible. If the reaching or looking stopped, it would suggest that the infant did not have a mental representation of the object when it was out of view, indicating that the infant had not yet developed object permanence.
Object permanence is tested by examining reactions that infants have to objects when they cannot be seen. Children who have object permanence will attempt to reach around the barrier or will continue looking in the direction of the desired object. Doug Goodman/Photo Researchers, Inc./ Science Source
Notice that this is not a problem for a two-year-old child. He can be very aware that his favourite toy awaits him in another room while he has to sit at the dinner table; in fact, he might not be able to get the toy out of his mind, and may take revenge on the evil tyrants who won’t get it for him by screaming throughout the meal.
The Preoperational Stage: Quantity and Numbers
According to Piaget, once children have mastered sensorimotor tasks, they have
progressed to the preoperational stage (ages two to seven). This stage is devoted to language development, the use of symbols, pretend play, and mastering the concept of conservation (discussed below). During this stage, children can think about physical objects, although they have not quite attained abstract thinking abilities. They may count objects and use numbers, yet they cannot mentally manipulate information or see things from other points of view.
This inability to manipulate abstract information is shown by testing a child’s
understanding of conservation, the knowledge that the quantity or amount of an object is not the same as the physical arrangement and appearance of that object. Conservation can be tested in a number of ways (see Figure 10.6). For example, in a conservation of liquid task, a child is shown two identical glasses, each containing the same amount of liquid. The researcher then pours the liquid from one glass into a differently shaped container, typically one that is taller and narrower. Although the amount of liquid is still the same, many children believe that the tall, thin glass contains more fluid because it looks "bigger" (i.e.,
taller). The conservation of number task produces similar effects. In this task, a child is presented with two identical rows of seven pennies each (see the bottom
part of Figure 10.6). The experimenter then spreads out one of the rows so that it is longer, but still has the same number of coins. If you ask the child, "Which row has more?" a three-year-old would likely point to the row that was spread out because a child in the preoperational stage focuses on the simpler method of answering based on immediate perception, instead of applying more sophisticated mental operations (such as counting the pennies).
Figure 10.6 Testing Conservation A child views two equal amounts of fluid, one of which is then poured into a taller, narrower container. Children who do not yet understand conservation believe that there is more fluid in the tall container compared to the shorter one. A similar version of this task can be tested using equal arrays of separate objects. Source: Lilienfeld, Scott O.; Lynn, Steven J; Namy, Laura L.; Woolf, Nancy J., Psychology: From Inquiry To Understanding,
2nd Ed., ©2011. Reprinted and Electronically reproduced by permission of Pearson Education, Inc., New York, NY.
Although tests of conservation provide compelling demonstrations of the limits of children’s cognitive abilities, Piaget’s conclusions were not universally accepted. Some researchers have challenged Piaget’s pessimism about the abilities of young children, arguing that their inability to perform certain tasks was a function of the child’s interpretation of the task, not their underlying cognitive limitations (see the Working the Scientific Literacy Model feature). For example, when three-year-old children are presented with the pennies conservation task described above, but M&Ms are substituted for the pennies, everything changes.
All of a sudden, children exhibit much more sophisticated thinking. If a row containing more M&Ms is packed tightly together so that it takes up less space than a more spread-out row containing fewer M&Ms, children will pick the tightly packed but "smaller" row, understanding that it contains more candy—especially if they get
to eat the candy from the row they choose (Mehler & Bever, 1967).
In fact, even before children start to use and understand numbers, they acquire a basic understanding of quantity. Very soon after they are born, infants appear to
understand what it means to have less or more of something. This suggests that the children who chose the longer row of pennies in the example above may simply have misunderstood the question rather than failed to grasp the underlying rule of conservation.
To them, more could simply have meant longer.
Although Piaget clearly underestimated the cognitive abilities of young children,
researchers have identified common errors that very young children make but older children typically do not make. The children in Figure 10.7 are committing scale errors in the sense that they appear to interact with a doll-sized slide and a toy car as if they were the real thing, rather than miniatures
(DeLoache et al., 2004). By 2 to 2½ years of age, scale errors decline as children begin to understand properties of objects and how they are related. This understanding is one of many advances children make as they progress toward more abstract thinking.
Figure 10.7 Scale Errors and Testing for Scale Model Comprehension The children in photos (a) and (b) are making scale errors. One child is attempting to slide down a toy slide and another is attempting to enter a toy car. Three-year-olds understand that a scale model represents an actual room (c). The adult pictured is using a scale model to indicate the location of a hidden object in an actual room of this type. At around 3 years of age, children understand that the scale model symbolizes an actual room and will go directly to the hidden object after viewing the scale model. Courtesy of Judy DeLoache
At around 3 years of age children begin to understand symbolic information. For example, 3-year-olds understand that a scale model of a room can symbolize an
actual room (Figure 10.7 ). Children who view an experimenter placing a
miniature toy within the scale model will quickly locate the actual toy when
allowed to enter the room symbolized by the scale model (DeLoache, 1995). Abilities such as this are precursors to more advanced abilities of mental abstraction.
The Concrete Operational Stage: Using Logical Thought
Conservation is one of the main skills marking the transition from the
preoperational stage to the concrete operational stage (ages 7 to 11 years), when children develop skills in logical thinking and manipulating numbers. Children in the concrete operational stage are able to classify objects according to properties such as size, value, shape, or some other physical characteristic. Their thinking becomes increasingly logical and organized. For example, a child in the concrete operational stage recognizes that if X is more than Y, and Y is
more than Z, then X is more than Z (a property called transitivity). This ability to think logically about physical objects sets the stage for them to think logically about abstractions in the fourth and final stage of cognitive development.
The Formal Operational Stage: Abstract and Hypothetical Thought
The formal operational stage (ages 11 to adulthood) involves the development of advanced cognitive processes such as abstract reasoning and hypothetical thinking. Scientific thinking, such as gathering evidence and systematically testing possibilities, is characteristic of this stage.
Working the Scientific Literacy Model Evaluating Piaget
Piaget was immensely successful in opening our eyes to the
cognitive development of infants and children. Nevertheless, advances in testing methods have shown that he may have underestimated some aspects of infant cognitive abilities. In fact, infants appear to understand some basic principles of their physical and social worlds very shortly after birth.
What do we know about cognitive abilities in infants?
The core knowledge hypothesis proposes that infants have inborn abilities for understanding some key aspects of their environment (Spelke & Kinzler, 2007). It is a bold claim to say that babies know something about the world before they have even experienced it, so we should closely examine the evidence for this hypothesis.
How can we know what infants know or what they perceive? One frequently used method for answering this question relies on the
habituation–dishabituation response. Habituation refers to a decrease in responding with repeated exposure to an event. For example, if an infant is shown the same stimulus over and over, she will stop looking at it. Conversely, infants are quite responsive to novelty or changes in their environment. Thus, if the stimulus suddenly changes, the infant will display dishabituation, an increase in responsiveness with the presentation of a new stimulus. In other words, the infant will return her gaze to the location that she previously found boring. Research on habituation and dishabituation in infants led to the development of measurement techniques based on what infants will look at and for how long. These techniques now allow researchers to test infants even younger than Piaget was able to.
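To make the logic of the looking-time measure concrete, here is a minimal sketch, in Python, of one common way a habituation criterion can be computed from a series of looking times. The 50% rule, the three-trial window, and the trial durations are illustrative assumptions rather than the specific procedure used in the studies cited in this module.

```python
# A minimal sketch (not taken from the studies cited here) of an
# infant-controlled habituation criterion computed from looking times.
# The 50% criterion, window size, and trial durations are hypothetical.

def has_habituated(looking_times, window=3, criterion=0.5):
    """Return True once the mean looking time over the most recent `window`
    trials falls below `criterion` times the mean of the first `window` trials."""
    if len(looking_times) < 2 * window:
        return False  # not enough trials yet to compare baseline and recent looking
    baseline = sum(looking_times[:window]) / window
    recent = sum(looking_times[-window:]) / window
    return recent < criterion * baseline

# Hypothetical looking times (in seconds) across repeated presentations of the
# same stimulus: looking declines as the infant habituates.
trials = [12.0, 10.5, 11.2, 7.8, 5.1, 4.0, 3.2]
print(has_habituated(trials))  # True: recent looking is well under half of baseline

# Dishabituation would appear as a rebound in looking time when a novel
# stimulus is introduced after the criterion has been met.
```

In practice, researchers present the novel stimulus only after this kind of criterion is met, so that any rebound in looking can be attributed to the infant detecting a change rather than to random fluctuations in attention.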
A popular method for testing infant cognitive abilities is to measure the amount of time infants look at stimuli. Researchers measure habituation and dishabituation to infer what infants understand. Lawrence Migdale/ Photo Researchers, Inc./Science Source
How can science help explain infant cognitive abilities? Measurement techniques based on what infants look at have been used to measure whether infants understand many different concepts, including abstract numbers—an ability that most people imagine appears much later in development. For example, Elizabeth Spelke and colleagues conducted a study in which 16
infants just two days old were shown sets of either 4 or 12 identical small shapes (e.g., yellow triangles, purple circles) on a video screen. The researchers also made a sound 4 or 12 times (e.g., tu-tu-tu-tu or ra-ra-ra-ra-ra-ra-ra-ra-ra-ra-ra-ra) at the same
time they showed the shapes (see Figure 10.8 ). Researchers varied whether the number of shapes the infants saw matched the number of tones they heard (e.g., 4 yellow triangles and 4 “ra” tones), or not (e.g., 4 purple circles and 12 “ra” tones). The infants were most attentive when what they saw and heard matched. In other words, they looked longer at the shapes when the number of shapes matched the number of sounds, compared to when they did not match; this is taken as evidence that even very young infants have a rudimentary appreciation for abstract
numbers (Izard et al., 2009).
Figure 10.8 Testing Infants’ Understanding of Quantity
In this study, infants listened to tones that were repeated either 4 or 12 times while they looked at objects that had either 4 or 12 components. Infants spent more time looking at visual arrays when the number of items they saw matched the number of tones they heard. Source: Figure 1 from “Newborn Infants Perceive Abstract Numbers” by V. Izard, C. Spann, E. S.
Spelke, & A. Streri (2009), Proceedings of the National Academy of Sciences, 106, 10382–10385.
Copyright © 2009. Reprinted by permission of PNAS.
Can we critically evaluate this research? Many of the studies of early cognitive development discussed in this module used the “looking time” procedure, although not all psychologists agree that it is an ideal way of determining what
infants understand or perceive (Aslin, 2007; Rivera et al., 1999). We cannot know exactly what infants are thinking, and perhaps they look longer at events and stimuli simply because these are more interesting rather than because they understand anything in particular about them. Inferring mental states that participants cannot themselves validate certainly leaves room for alternative explanations.
Also, the sample sizes in these studies are often fairly small, due to the cost and complexity of researching infants. In the study of shapes and tones just described, only 16 infants managed to complete the study. Forty-five others were too fussy or sleepy to successfully finish the task.
Why is this relevant? The key insight provided by this research is that cognitive development in young infants is much more sophisticated than psychologists previously assumed. With each study that examines the cognitive capacities of infants, we learn that infants are not just slobbery blobs that need to be fed and diapered—though it certainly can feel that way when you are a new parent. Now we are learning that infants can understand more than we might expect, and can reason in more complex ways than we had believed.
One thing that parents and caregivers can learn from this research is to see their children as complex learners who use sensation and movement to develop their emerging cognitive abilities. Caregivers can encourage this process by talking to them using diverse vocabulary, exploring rhythm and music, allowing them to feel different objects, and exposing them to different textures and sensations.
Piaget’s theories have had a lasting impact on modern developmental psychology. In addition to providing insights into the minds of young children, Piaget’s work inspired numerous other researchers to study cognitive development. Many of these new discoveries complement, rather than entirely contradict, Piaget’s foundational work.
Complementary Approaches to Piaget
In the many decades since Piaget’s work, psychologists have explored how children’s social contexts affect their cognitive development. For example, in a learning context, other people can support and facilitate children’s learning, or can make it more difficult. Children who try to master a skill by themselves may run into obstacles that would be easier to overcome with a little assistance or guidance from another person, or they may give up on a task when a little encouragement could have given them the boost needed to persevere and succeed. At the opposite extreme, children who have everything done for them and who are not allowed to work through problems themselves may become relatively incapable of finding solutions on their own, and may not develop feelings of competence that support striving for goals and overcoming challenges. Therefore, it seems reasonable to expect that optimal development will occur somewhere between the extremes of children doing everything on their own without any support, versus having others over-involved in their activities.
Russian psychologist Lev Vygotsky (1978) proposed that development is ideal when children attempt skills and activities that are just beyond what they can do alone, while receiving guidance from adults who are attentive to their progress; this concept is termed the zone of proximal development (Singer & Goldin-Meadow, 2005). Teaching that keeps children in the zone of proximal development is called scaffolding, a highly attentive approach to teaching in which the teacher matches guidance to the learner's needs.
Cross-cultural research on parent–infant interactions shows that scaffolding is
exercised in different ways (Rogoff et al., 1993). For example, in one study, 12- to 24-month-old children were offered a toy that required pulling a string to make it move. Parents from Turkey, Guatemala, and the United States were observed interacting with their infants as they attempted to figure out how the toy worked. All parents used scaffolding when they spoke and gestured to their children to encourage them to pull the string, but mother–child pairs from Guatemala were much more communicative with each other, both verbally and through gestures such as touching and using the direction of their gaze to encourage the behaviour. Over time, this kind of sensitive scaffolding should result in children
who are more seamlessly integrated into the daily life of the family and community, rather than merely relegated to “play” activities in specialized “kid-friendly” environments. This means that children who are appropriately scaffolded are able to be useful and self-sufficient at much earlier ages than is normal in contemporary North American society. This kind of scaffolding approach to everyday life tasks is one of the foundational practices in many alternative education systems, such as the Montessori system.
Caregivers who are attentive to the learning and abilities of a developing child provide scaffolding for cognitive development. Nolte Lourens/Shutterstock
Module 10.2a Quiz: Cognitive Changes: Piaget’s Cognitive Development Theory
Know . . .
1. Recognizing that the quantity of an object does not change despite changes in its physical arrangement or appearance is referred to as ________.
A. object permanence
B. scale comprehension
C. conservation
D. number sense
2. Parents who attend to their children’s psychological abilities and guide them through the learning process are using ________.
A. scaffolding
B. tutoring
C. core knowledge
D. the zone of proximal development
3. What is the correct order of Piaget’s stages of cognitive development?
A. Preoperational, sensorimotor, concrete operational, formal operational
B. Sensorimotor, preoperational, formal operational, concrete operational
C. Sensorimotor, preoperational, concrete operational, formal operational
D. Preoperational, concrete operational, sensorimotor, formal operational
Apply . . .
4. A child in the sensorimotor stage may quit looking or reaching for a toy if you move it out of sight. This behaviour reflects the fact that the child has not developed ________.
A. core knowledge
B. object permanence
C. conservation
D. to the preoperational stage
Analyze . . .
5. Research on newborns indicates that they have a sense of number and quantity. What does this finding suggest about Piaget’s theory of cognitive development?
A. It confirms what Piaget claimed about infants in the sensorimotor phase.
B. Some infants are born with superior intelligence.
C. Piaget may have underestimated some cognitive abilities of infants and children.
D. Culture determines what infants are capable of doing.
Social Development, Attachment, and Self-Awareness
It seems rather obvious to point out that human infants are profoundly dependent on their caregivers for pretty much everything, from food and relief from dirty diapers, to being held and soothed when they are upset. Based on Piaget’s insight that the infant’s experiential world is largely comprised of physical sensation and movement, one might expect that physical interactions with caregivers make up a huge part of an infant’s reality. As a result, the infant’s emerging feelings of safety and security, or conversely, fear and distress, may be highly affected by the basic, physical connection with caregivers.
Nowadays, it is perhaps not a stunning insight to realize that being touched and held, seeing facial expressions that are responsive to one’s own, and hearing soothing vocalizations, are important for helping infants to feel secure in what is otherwise a pretty big, unknown, and potentially scary world. However, in the mid-20th century, it was a major insight to realize how sensitively attuned infants are to their social world, and how deeply they are affected by how they are treated by those they depend upon. Whether caregivers are loving and responsive, or perhaps neglectful or cruel, in the first months of life, can affect the developing child in ways that last for the rest of their lives.
Understanding the intense social bonding that occurs between humans revolves
around the central concept of attachment, the enduring emotional bond formed between individuals, initially between infants and caregivers. Attachment motivations are deeply rooted in our psychology, compelling us to seek out others for physical and psychological comfort, particularly when we feel stressed
or insecure (Bowlby, 1951). Infants draw upon a remarkable repertoire of behaviours that are geared towards seeking attachment, such as crying, cooing, gurgling, and smiling, and adults are generally responsive to these rudimentary but effective communications.
What is Attachment? In the early decades of modern psychology, dominant theories of motivation emphasized biological drives, such as hunger and thirst, that motivated people to satisfy their basic needs. From this perspective, the motivation that drove infants to connect with caregivers, like their mother, was simple; mom fed them, reducing their hunger, and thus, they developed a behavioural interdependence with mom, and through basic conditioning processes (i.e., associating mom with the pleasure and comfort of food), formed an emotional attachment with mom as well. Such a description of love is never going to fill a book of poetry, but it seemed to scientifically and objectively account for the infant–caregiver bond.
However, in the 1950s, a psychologist by the name of Harry Harlow made an extremely interesting observation, although one so seemingly innocuous that most of us likely would have overlooked it. Harlow was conducting research on infant rhesus monkeys, and was raising these monkeys in cages without any contact with their mothers. In the course of this research, he noticed that the baby monkeys seemed to cling passionately to the cloth pads that lined their cages, and they would become very distressed when these pads were removed for cleaning. This simple observation made Harlow start to wonder what function the pads served for the monkeys. The monkeys didn’t eat the pads, obviously, so why should they be so attached to them?
A baby monkey clings to a cloth-covered object—Harlow called this object the
cloth mother—even though in this case the wire “mother” provided food. Nina Leen/The LIFE Picture Collection/Getty Images
Harlow designed an ingenious set of studies, testing whether it was physical comfort or primary drive reduction that drove the formation of attachment. He placed rhesus monkeys in cages, right from birth, and gave them two pseudo-mothers: one was a cylinder of mesh wire wrapped with soft terry-cloth, loosely resembling an adult monkey; the other was an identical cylinder but without the cloth covering. To then test whether reducing the monkeys’ hunger was important for the formation of attachment, Harlow simply varied which of the “mothers” was the food source. For some monkeys, the terry-cloth mother had a bottle affixed to it and thus was the infant’s food source, whereas for other monkeys, the bottle was affixed to the wire mother. The question was, who would the monkeys bond with? Did their emotional attachment actually depend on which of the “mothers” fed them?
The contest between mothers wasn’t even close. No matter who had the bottle, the baby monkeys spent almost all their time with the cloth mother, pretty much ignoring the wire mother except for the small amount of time they spent actually
feeding, when she had the bottle (see Figure 10.9). Furthermore, the monkeys seemed emotionally attached to the cloth mother, depending on her to meet their emotional needs. For example, researchers devised experiments in which the baby monkeys would be frightened (e.g., surprising them with a metallic contraption that looked like a vicious monster), and they would watch which mother the infants would run to for comfort and security. Over and over again, they ran to the cloth mother (the videos from this experiment are heartbreaking). The implications were clear—attachment is not about reducing fundamental biological drives; it’s about feeling secure, which has a strong basis in feeling physically comforted.
Figure 10.9 Harlow’s Monkeys: Time Spent on Wire and Cloth Mother Surrogates Source: Harlow, H. F. (1958). The nature of love. American Psychologist, 13(12), 673–685. From the American Psychological
Association.
Types of Attachment
In order to measure attachment bonds in human infants, obviously it is unethical to raise babies in cages with fake mothers and then scare them half to death to see who they crawl to. Instead, psychologists have developed methods of
studying infant attachment that are only mildly stressful and mimic natural
situations. One method capitalizes on stranger anxiety—signs of distress that infants begin to show toward strangers at about eight months of age. Mary Ainsworth developed a measurement system, based on the belief that different characteristic patterns of responding to stranger anxiety indicated different types of emotional security, or attachment style.
Ainsworth (1978) developed a procedure called the strange situation, a way of measuring infant attachment by observing how infants behave when exposed to different experiences that involve anxiety and comfort. The procedure involves a sequence of scripted experiences that expose children to some mild anxiety (e.g., the presence of a stranger, being left alone with the stranger), and the potential to receive some comfort from their caregiver. For example, the child and caregiver spend a few minutes in a room with some toys; a stranger enters, the caregiver leaves, and then the caregiver returns. In each segment of the procedure, the child’s behaviour is carefully observed. Ainsworth noted three broad patterns of behaviour that she believed reflected three different attachment
styles (see Figure 10.10 ):
Figure 10.10 The Strange Situation Studies of attachment by Mary Ainsworth involved a mother leaving her infant with a stranger. Ainsworth believed that the infants’ attachment styles could be categorized according to their behavioural responses to the mother leaving and returning. Source: Lilienfeld, Scott O.; Lynn, Steven J; Namy, Laura L.; Woolf, Nancy J., Psychology: From Inquiry To Understanding,
2nd Ed., © 2011. Reprinted and Electronically reproduced by permission of Pearson Education, Inc., New York, NY.
1. Secure attachment. The caregiver is a secure base that the child turns toward occasionally, “checking in” for reassurance as she explores the room. The child shows some distress when the caregiver leaves, and avoids the stranger. When the caregiver returns, the child seeks comfort and her distress is relieved.
2. Insecure attachment. Two subtypes were distinguished: Anxious/Ambivalent. The caregiver is a base of security, but the child depends too strongly on the caregiver, exhibiting “clingy” behaviours rather than being comfortable exploring the room on his own. The child is very upset when the caregiver leaves, and is quite fearful toward the stranger. When the caregiver returns, the child seeks comfort, but then also resists it and pushes the caregiver away, not allowing his distress to be easily alleviated.
Avoidant. The child behaves as though she does not need the caregiver at all, and plays in the room as though she is oblivious to the caregiver. The child is not upset when the caregiver leaves, and is unconcerned about the stranger. When the caregiver returns, the child does not seek contact.
3. Subsequent research identified a fourth attachment style, disorganized (Main & Solomon, 1990), which is best characterized by instability; the child has learned (typically through inconsistent and often abusive experiences) that caregivers are sources of both fear and comfort, leaving the child oscillating between wanting to get away and wanting to be reassured. The child experiences a strong ambivalence, and reinforces this through his own inconsistent behaviour, seeking closeness
and then pulling away, or often simply “freezing,” paralyzed with indecision.
Attachment is important not only in infancy, but throughout one’s life. Even in adult romantic relationships, attachment styles (gained during infancy!) are still at
work (Hofer, 2006). The specific patterns of behaviour that characterize different attachment styles can be seen, albeit in somewhat more complex forms, in adult
relationships (Hazan & Shaver, 1987; Mikulincer & Shaver, 2007). Attachment styles predict many different relationship behaviours, including how we form and dissolve relationships, specific issues and insecurities that arise in relationships, and likely patterns of communication and conflict. For example, in one large, longitudinal study spanning more than 20 years, people who were securely attached as infants were better able to recover from interpersonal conflict with
their romantic partners (Salvatore et al., 2011). It appears that the father described at the beginning of Module 10.1 was correct; we really are similar to “giant babies.”
Development of Attachment
Given that attachment styles are so important, how do they form in the first place? Research consistently has shown that one’s attachment style largely reflects one’s early attachment experiences (e.g., whether caregivers tend to be loving, accepting, and responsive, or critical, rejecting, and unresponsive, or simply inconsistent and unpredictable). This makes sense; after all, attachment styles are understood to be learned patterns of behaviour that the developing child adopts in order to adapt to the key relationships in her life. This is a major insight for parents to take seriously, because the consequences of one’s own behaviour as a parent can resonate throughout the rest of the child’s life. Most important, perhaps, is parental responsiveness. For example, Ainsworth’s
research (Ainsworth, 1978) showed that maternal sensitivity (i.e., being highly attuned to the infant’s signals and communication, and responding appropriately) is key to developing a secure attachment style. More contemporary research has expanded this to include non-maternal caregivers; yes, Dads, you’re important too.
At this point, especially if you feel you have a somewhat less-than-secure attachment style, you might be wondering whether you are doomed to remain this way forever. After all, we have been emphasizing the long-term stability of attachment styles that are formed early in life. Nevertheless, it is important to
note that attachment styles can change. Insecurely attached people can certainly find their attachment style becoming more secure through having supportive relationship experiences, whether they are intimate/romantic relationships or other sorts of supportive relationships such as one may establish with a therapist
(Bowlby, 1988). The reason that attachment styles tend to be relatively consistent over time is that they tend to condition the same types of behaviour patterns and relationship outcomes that led to their formation in the first place. Just think about how much more difficult it would be for a highly avoidant or highly insecure and “needy” person to develop the sorts of patterns in relationships that would help them to feel loved and accepted, compared to somebody who is already secure. Nevertheless, if a person is able to establish healthy relationship patterns in adulthood, they can undo the effects of less-than-ideal early attachment experiences.
While it was initially believed that ideal parenting called for parents to be highly sensitive to the child, leading to closely coordinated emotional interactions between them, recent studies have shown that highly sensitive caregivers
actually demonstrate moderate coordination with their children (Hane et al., 2003). Both under-responsiveness and over-involvement/hypersensitivity to an infant’s needs and emotions are correlated with the development of insecure
attachment styles (Beebe et al., 2010). The ideal parent does not reflexively respond to all the child’s needs, but is sensitive to how much responsiveness the child needs. In the next section we will learn how this type of parental sensitivity is connected to the development of self-awareness, as well as to the ability to take other people’s perspectives.
Self-Awareness
Between 18 and 24 months of age, toddlers begin to gain self-awareness, the ability to recognize one’s individuality. Becoming aware of one’s self goes hand-in-hand with becoming aware of others as separate beings, and thus self-awareness and the development of prosocial and moral motivations are intricately intertwined, as we discuss below.
The presence of self-awareness is typically tested by observing infants’ reactions
to their reflection in a mirror or on video (Bahrick & Watson, 1985; Bard et al., 2006). Self-awareness becomes increasingly sophisticated over the course of development, progressing from the ability to recognize oneself in a mirror to the ability to reflect on one’s own feelings, decisions, and appearance. By the time children reach their fifth birthday, they become self-reflective, show concern for others, and are intensely interested in the causes of other people’s behaviour.
Young children are often described as egocentric, meaning that they only consider their own perspective (Piaget & Inhelder, 1956). This does not imply that children are selfish or inconsiderate, but that they merely lack the cognitive ability to understand the perspective of others. For example, a two-year-old may
attempt to hide by simply covering her own eyes. From her perspective, she is hidden. Piaget believed that children were predominantly egocentric until the end of the preoperational phase (ending around age seven). He tested for egocentrism by sitting a child in front of an object, and then presenting pictures of that object from four angles. While sitting opposite the child, Piaget would ask him or her to identify which image represented the object from Piaget’s own perspective. Children’s egocentricity was demonstrated by selecting the image corresponding to their own perspective, rather than being able to imagine what
Piaget would be seeing (Figure 10.11 ).
Figure 10.11 Piaget’s Test for Egocentric Perspective in Children Piaget used the three-mountain task to test whether children can take someone else’s perspective. The child would view the object from one perspective while another person viewed it from a different point of view. According to Piaget, children are no longer exclusively egocentric if they understand that the other person sees the object differently. Source: Lilienfeld, Scott O.; Lynn, Steven J; Namy, Laura L.; Woolf, Nancy J., Psychology: From Inquiry To Understanding,
2nd Ed., © 2011. Reprinted and Electronically reproduced by permission of Pearson Education, Inc., New York, NY.
By two years of age, toddlers can recognize themselves in mirrors. Ruth Jenkinson/Dorling Kindersley Ltd
Modern research indicates that children take the perspective of others long before the preoperational phase is complete. Perspective taking in young
children has been demonstrated in studies of theory of mind—the ability to understand that other people have thoughts, beliefs, and perspectives that may be different from one’s own. Consider the following scenario:
An experimenter offers three-year-old Andrea a box of chocolates. Upon opening the
box, Andrea discovers not candy, but rather pencils. Joseph enters the room and she
watches as Joseph is offered the same box. The researcher asks Andrea, “What does
Joseph expect to find in the box?”
If Andrea answers “pencils,” this indicates that she believes Joseph knows the same thing she does. However, if Andrea tells the experimenter that Joseph expects to see chocolates, it demonstrates that she is taking Joseph’s mental perspective, understanding that he does not possess her knowledge that the
“chocolate box” actually contains pencils (Lillard, 1998; Wimmer & Perner, 1983). Children typically pass this test at ages four to five, although younger children may pass it if they are told that Joseph is about to be tricked (Figure 10.12 ). Of course, the shift away from egocentric thought does not occur overnight. Older children may still have difficulty taking the perspective of others; in fact, even adults aren’t that great at it much of the time. Maintaining a healthy awareness of the distinction between self and other, and accepting the uniqueness of the other person’s perspective is a continual process.
Figure 10.12 A Theory-of-Mind Task There are different methods of testing false beliefs. In this example, Andrea is asked what she thinks Joseph expects to find in the “chocolate box.” If she has developed theory-of-mind skills, she will be able to differentiate between her knowledge of the box’s contents (pencils) and what Joseph would expect to find (chocolates).
Psychological research now indicates that self-awareness and theory of mind are in constant development right from birth. Early in children’s lives, emotions are often experienced as chaotic, overwhelming, and unintegrated combinations of physical sensations, non-verbal representations, and ideas. As caregivers respond to children’s emotions, the children learn how to interpret and organize their emotions; this helps them become more aware of their own feelings
(Fonagy & Target, 1997). As children gain the ability to understand their internal states with greater clarity, it enhances their ability to represent the mental states of others.
This process helps to explain why it is important that caregivers not over-identify with a child’s emotions. If their emotional exchange is completely synchronized (e.g., the child experiences fear and the adult also experiences fear) then the
child simply gets her fear reinforced, rather than gaining the ability to understand that she is feeling fear. In a study of how mothers behave after their infants
received an injection, Fonagy et al. (1995) observed that the mothers who most effectively soothed their child reflected their child’s emotions, but also included other emotional displays in their mirroring, such as smiling or questioning. The mother’s complex representation of the child’s experience ensured that the child recognized it as related to, but not identical to, his own emotion. This serves to alter the child’s negative emotions by helping him to implicitly build coping
responses into the experience (Fonagy & Target, 1997). Therefore, in the early stages of life, these face-to-face exchanges of emotional signals help the child’s
brain learn how to understand and deal with emotions (Beebe et al., 1997).
Module 10.2b Quiz: Social Development, Attachment, and Self-Awareness
Know . . .
1. The emotional bond that forms between a caregiver and a child is referred to as ________.
A. a love–hate relationship
B. dependence
C. attachment
D. egocentrism
Understand . . .
2. Infants who are insecurely attached may do which of the following when a parent leaves and then returns during the strange situation procedure?
A. Show anger when the parent leaves but happiness when they return
B. Show anger when the parent leaves and show little reaction when they return
C. Refuse to engage with the stranger in the room
D. Show happiness when the parent leaves and anger when they return
Apply . . .
3. Oliver and his dad read a book several times. In that book, the main character expects to receive a hockey sweater for his birthday. However, due to a mix-up at the store, the gift box instead contains a pair of shoes. Because Oliver had read the book several times, he remembered that the box contained shoes. If Oliver was seven years old, what do you think he would say if he was asked, “What does the main character think is in the box?” What would his two-year-old sister say if asked the same question?
A. Oliver would say that the character thought the box contained a hockey sweater; his sister would say that the character would expect to find shoes.
B. Oliver would say that the character thought the box contained shoes; his sister would say that the character would expect to find a hockey sweater.
C. Both children would say that the main character would expect to find a hockey sweater in the gift box.
D. Both children would say that the main character would expect to find a pair of shoes in the gift box.
4. A child you know seems to behave inconsistently towards his parents; sometimes he is quite “clingy” and dependent, but at other times he is very independent and rejects the parents’ affection. This is descriptive of a(n) ________ attachment style.
A. secure
B. anxious/ambivalent
C. avoidant
D. disorganized
Psychosocial Development
In the previous section, we saw the powerful effect that attachment can have on a child’s behaviour. Importantly, we also saw that attachment-related behaviours that are observed in infants and young children can sometimes predict how those individuals will behave as adults. This shows us that our development is actually a life-long process rather than a series of isolated stages.
A pioneer in the study of development across the lifespan was Erik Erikson, a German-American psychologist (who married a Canadian dancer). He proposed a theory of development consisting of overlapping stages that extend from infancy to old age. In this module, we will examine the stages of development that relate to infancy and childhood. We will return to Erikson’s work again in Modules 10.3 (Adolescence) and 10.4 (Adulthood), each time discussing the parts of his theory that apply to those stages of development. Curious
readers can look ahead to Table 10.5 in Module 10.4 to see a depiction of Erikson’s model in its entirety.
Table 10.5 Erikson’s Stages of Psychosocial Development
Infancy: trust vs. mistrust: Developing a sense of trust and security toward caregivers. (ClickPop/Shutterstock)
Toddlerhood: autonomy vs. shame and doubt: Seeking independence and gaining self-sufficiency. (Picture Partners/Alamy Stock Photo)
Preschool/early childhood: initiative vs. guilt: Active exploration of the environment and taking personal initiative. (Monkey Business Images/Shutterstock)
Childhood: industry vs. inferiority: Striving to master tasks and challenges of childhood, particularly those faced in school. Child begins pursuing unique interests. (keith morris/Alamy Stock Photo)
Adolescence: identity vs. role confusion: Achieving a sense of self and future direction. (Tracy Whiteside/Shutterstock)
Young adulthood: intimacy vs. isolation: Developing the ability to initiate and maintain intimate relationships. (OLJ Studio/Shutterstock)
Adulthood: generativity vs. stagnation: The focus is on satisfying personal and family needs, as well as contributing to society. (Belinda Pretorius/Shutterstock)
Aging: ego integrity vs. despair: Coping with the prospect of death while looking back on life with a sense of contentment and integrity for accomplishments. (Digital Vision/Photodisc/Getty Images)
Development Across the Lifespan
Erikson’s theory of development across the lifespan included elements of both cognitive and social development. Erikson’s theory centred around the notion
that at different ages, people face particular developmental crises, or challenges, based on the emotional needs that are most relevant to them at that stage of life. If people are able to successfully rise to the challenge and get their emotional needs met, they develop in a healthy way. But if this process is disrupted and people are not able to successfully navigate a stage, the rest of the person’s personality and development can be affected by deficits in their psychosocial functioning. For example, people may struggle with feelings of worthlessness or uselessness, insecurity in relationships, difficulty staying motivated, and so on. Understanding fully how Erikson’s insights apply to specific problems people face lies beyond our discussion here, but you can make some reasonable inferences based on a general understanding of his theory.
The first stage, Infancy, focuses on the issue of trust vs. mistrust. The infant’s key challenge in life is developing a basic sense of security, of feeling comfortable (or at least not terrified) in a strange and often indifferent world. Infants just want to know that everything is okay, and this starts with being held—being physically connected through touch and affectionate contact. As the infant develops more complex social relationships, their basic emotional security (or insecurity) grows out of the trust or mistrust that develops during this stage.
The second stage, Toddlerhood, focuses on the challenge of autonomy vs. shame. The toddler, able to move herself about increasingly independently, is poised to discover a whole new world. The toddler discovers that she is a separate creature from others and from the environment; thus, exploring her
feelings of autonomy—exercising her will as an individual in the world—becomes very important. (If you’ve ever hung out with a toddler for extended periods of time, you have probably experienced their stubborn resistance, like emphatically stating “No!” to whatever you have suggested, for no clear reason.)
By the end of the first two stages, the person is, ideally, secure, and they feel a basic sense of themselves as having separate needs from others. On the other hand, if these stages were not successfully navigated, the person may struggle with feelings of inadequacy or low self-worth, and these will play out in their subsequent development.
The third stage, Early childhood, is characterized as the challenge of initiative vs. guilt. Building on the emotional security and sense of self-assurance that comes from the first two stages, here the growing child learns to take responsibility for herself while feeling that she has the ability to influence parts of her physical and social world. These preschool years involve children pushing their boundaries and experimenting with what they can do with their rapidly developing bodies, and then experiencing guilt when they are scolded or otherwise encounter the disapproval of others, such as their parents. If this stage is navigated successfully, the child develops increased confidence and a sense of personal control and responsibility.
The fourth stage, Childhood, is all about industry vs. inferiority. Here the child is focused on the tasks of life, particularly school and the various skill development activities that take place for that big chunk of childhood. This is an important part of the child’s increasing feeling of being in control of her actions, leading her to be able to regulate herself to achieve long-term goals, develop productive habits, and gain a sense of herself as actively engaged in her own life.
Taking Erikson’s first four stages together, you can see how childhood ties together emotional development with the feeling of being a competent individual. You can also see how the challenges associated with these stages are tied together with the quality of one’s key relationships and the many complex ways in which others (e.g., parents) help or hinder the child’s ability to meet their emotional needs.
Parenting and Prosocial Behaviour
One of the central questions of development that every parent faces when raising their own children is “How can I help this child become ‘good’?” The
capacity to be a moral person is often considered to begin around the time a child develops theory of mind (the sense of themselves and others as separate beings with separate thoughts). Certainly, being aware of one’s emotions, and understanding the emotions of others, are important parts of prosocial motivations and behaviours. However, recent research seems to indicate that the basic capacity for morality is built right into us and manifests long before we
develop the cognitive sophistication to recognize self and others. Children show a natural predisposition toward prosocial behaviour very early in their
development (Hamlin et al., 2007; Warneken & Tomasello, 2013). Even one-day-old infants experience distress when they hear other infants cry, exhibiting a basic sense of empathy.
However, it is important to distinguish exactly what is meant by empathy. Surely, the one-day-old babies aren’t actually lying there, aware of the perspectives of the other infants, recognizing that when an infant cries, he is sad, and then feeling sadness in response to that awareness. One-day-old infants don’t have that much cognitive processing going on; there’s no way they can engage in very complex perspective taking. Rather, they simply feel what is going on around them; they mirror the world around them in their own actual feelings, virtually without any filter at all.
What this means is that when children are very young, they experience others’ distress directly as their own personal distress or discomfort. This makes them motivated primarily to reduce their own distress, not necessarily to help the other
person (Eisenberg, 2005). Helping the other person might be one way to alleviate one’s distress, but there might be easier ways, like ignoring them, or even shouting at them! For example, watching a parent cry is upsetting to a young child, and sometimes the child may seek to comfort the parent, such as by offering his teddy bear; other times, however, children might just close their eyes and plug their ears, or leave and go to a different room where they don’t have to see the parent, thereby alleviating their own distress. Many developmental psychologists believe that in order for explicitly prosocial motives to develop,
children must learn to attribute their negative feelings to the other person’s distress, thereby becoming motivated to reduce the other person’s suffering, not
just their own reaction to it (Mascolo & Fischer, 2007; Zahn-Waxler & Radke-Yarrow, 1990).
Recently, researchers at the University of British Columbia and other universities demonstrated that the roots of moral motivation go back much further than we once believed, all the way to very early infancy. Studies using puppets engaging in kind and helpful, or nasty and selfish behaviours show that even very young infants (as young as three months old!) seem to know the difference between
good and bad, and prefer others who are helpful (Hamlin et al., 2007, 2010). By eight months of age, infants make complex moral discriminations, preferring others who are kind to someone who is prosocial, but reversing this and
preferring others who are unkind to someone who is antisocial (Hamlin et al., 2011). Thus, from the first months of our lives, long before we have been “taught right from wrong,” we are able to recognize, and prefer, the good.
As children move into the toddler years, prosocial behaviours increase in scope
and complexity. Around their first birthday, children demonstrate instrumental helping, providing practical assistance such as helping to retrieve an object that is out of reach (Liszkowski et al., 2006; Warneken & Tomasello, 2007). By their second birthday, they begin to exhibit empathic helping, providing help in order to make someone feel better (Zahn-Waxler et al., 1992). In one study, children younger than two appeared happier when giving treats to others than when receiving treats themselves, even when the giving came at a cost to their own resources (Aknin et al., 2012).
Parenting and Attachment
In humans, the tension between helping others and being concerned for oneself reflects a kind of tug-of-war between two psychobiological systems: the attachment behavioural system, which is focused on meeting our own needs for security, and the caregiving behavioural system, which is focused on meeting the needs of others. Each system guides our behaviour when it is activated; however, the attachment system is primary, and if it is activated, it tends to shut down the caregiving system. What this means in everyday experience is that if a person feels insecure herself, it will be hard for her to take others’ needs into consideration. However, if attachment needs are
fulfilled, then the caregiving system responds to others’ distress, motivating the
person to care for others (Mikulincer & Shaver, 2005). Thus, raising kind, moral children is about helping them feel loved and secure, not just teaching them right from wrong.
This changes the emphasis in parenting, a lot! Consider the classic problems faced by all parents—they need kids to do certain things—get up, eat breakfast, get dressed, brush teeth, brush hair, pack a backpack for school, get lunch, leave the house on time, stop interrupting, be nice to siblings. . . . It’s no wonder many parenting books promise a simple, step-by-step method for getting children to behave the way parents want.
Faced with the constant challenge of managing their kids’ behaviour, parents commonly turn to the principles of operant conditioning, using rewards (e.g., Smarties, physical affection, loving words) and punishments (e.g., angry tone of voice, time-outs, criticism) as necessary. Indeed, this is so pervasive that most of us don’t think twice about it; how could rewarding good behaviour and punishing bad behaviour be a problem? However, children are not merely stimulus-response machines, and this pervasive use of conditional approaches (i.e., rewards and punishments that are conditionally applied based on the child’s behaviour) can have significant unintended and even destructive consequences. One oft-overlooked problem is that even if conditional approaches do successfully produce the desired behaviours, these behaviours don’t tend to
persist over the long term (Deci et al., 1999). When rewards or punishments are not available to guide behaviour, children may find it difficult to motivate themselves to “do the right thing.”
Another downside to the conditional parenting approach is the impact it may have on children’s self-esteem and emotional security. Because children learn to associate feeling good about themselves with the experience of receiving rewards and avoiding punishment, their self-esteem becomes more dependent
upon external sources of validation. Instead of helping to nurture a truly secure child, parents may unwittingly be encouraging a sense of conditional self-worth, that is, the feeling that you are a good and valuable person only when you are behaving the “right” way.
Although these conditional approaches may seem fairly normal when it comes to raising children, think about it for a moment in a different context, such as your romantic relationship. Imagine that you and your partner decide to go to a couple’s counsellor, and you are told that every time your partner behaves in ways you don’t like, you should respond with negativity, such as withdrawing affection, speaking sharply and angrily, physically forcing him to sit in a corner for a certain amount of time, or taking away one of his favourite possessions. You also should use rewards as a way of getting your partner to do things you want—promise him pie, or physical intimacy, or buy him something nice. Our guess is that you would conclude it’s time to get a different counsellor. Yet, this is often how we raise children.
A mountain of research has revealed the downside of taking this kind of conditional approach to parenting. Children who experience their parents’ regard for them as conditional report more negativity and resentment toward their parents; they also feel greater internal pressure to do well, which is called introjection, the internalization of the conditional regard of significant others (Assor et al., 2004). Unfortunately, the more that people motivate themselves through introjection, the more unstable their self-esteem (Kernis et al., 2000), and the worse they tend to cope with failure (Grolnick & Ryan, 1989).
So what works better? Research clearly shows that moral development and healthy attachment are associated with more frequent use of inductive discipline, which involves explaining the consequences of a child’s actions on other people, activating empathy for others’ feelings (Hoffman & Saltzstein, 1967). Providing a rationale for a parent’s decisions, showing empathy and understanding of the child’s emotions, supporting her autonomy, and allowing her choice whenever possible all promote positive outcomes such as greater mastery of skills, increased emotional and behavioural self-control, better ability
to persist at difficult tasks, and a deeper internalization of moral values (Deci et al., 1994; Frodi et al., 1985). When it comes to raising moral children, the “golden rule” seems to apply just as well—do unto your children as you would have someone do unto you.
Module 10.2c Quiz: Psychosocial Development
Know . . .
1. The primary challenge in Erikson’s “Childhood” stage of development is ________.
A. trust vs. mistrust
B. industry vs. inferiority
C. initiative vs. guilt
D. autonomy vs. shame and doubt
Understand . . .
2. Marcus is very careful to teach his daughter about morality, using stories like Aesop’s Fables, because he wants her to be a good person when she grows up. However, you notice that he often seems emotionally unavailable, and frequently criticizes her (presumably in order to improve her behaviour). Marcus seems to underappreciate the role of ________ in moral development.
A. Piaget’s theory of cognitive development
B. emotional security
C. behaviourism
D. theory of mind
3. If parents excessively reward and praise their children, particularly based on the children’s performance, they risk their children developing a high degree of
A. attachment anxiety.
B. introjection.
C. inductive discipline.
D. extrojection.
Analyze . . .
4. One very common behavioural problem is when a person is too upset or emotionally triggered to be open to listening to another person’s perspective. This is the same basic dynamic as the
A. relationship between inductive discipline and introjected motivation.
B. relationship between the threat object and the terry-cloth mother (for rhesus monkeys).
C. relationship between the attachment behavioural system and the caregiving behavioural system.
D. parent–child relationship.
Module 10.2 Summary
10.2a Know . . . the key terminology associated with infancy and childhood.
accommodation
assimilation
attachment
attachment behavioural system
caregiving behavioural system
cognitive development
concrete operational stage
conservation
core knowledge hypothesis
dishabituation
egocentric
formal operational stage
habituation
inductive discipline
introjection
object permanence
preoperational stage
scaffolding
self-awareness
sensitive period
sensorimotor stage
strange situation
theory of mind
zone of proximal development
10.2b Understand . . . the cognitive changes that occur during infancy and childhood.
According to Piaget’s theory of cognitive development, infants mature through childhood via orderly transitions across the sensorimotor, preoperational, concrete operational, and formal operational stages. This progression reflects a general transition from engaging with the world through purely concrete, sensory experiences to an increasing ability to hold and manipulate abstract representations in one’s mind.
10.2c Understand . . . the importance of attachment and the different styles of attachment.
In developmental psychology, attachment refers to the enduring social bond between child and caregiver. Based on the quality of this bond, which depends upon appropriately responsive parenting, individuals develop an attachment style, which is their internalized feeling of security and self-worth. Children are either securely or insecurely attached, and insecure attachments can be further divided into disorganized, anxious/ambivalent, and avoidant styles.
10.2d Apply . . . the concept of scaffolding and the zone of proximal development to understand how to best promote learning.
According to Vygotsky, cognitive development unfolds in a social context. Adults who are attuned to the child’s experience can help to scaffold the child’s learning, guiding them so that they focus on challenges that lie at the very edge of their capabilities. This keeps a child fully engaged in the zone of proximal development, maximizing their skill development.
10.2e Analyze . . . how to effectively discipline children in order to promote moral behaviour.
Internalizing prosocial motives comes from children developing a secure attachment, experiencing empathy, and receiving inductive discipline. Children have an innate sense of morality, but this can be interfered with if their attachment needs are insufficiently met. Therefore, responsive parenting that helps the child feel secure lays the foundation for the child to become less self-focused. As the child develops cognitively and can more explicitly take others’ perspectives, inductive discipline that emphasizes perspective taking and empathy builds the habit of “doing good” because the child genuinely cares, rather than because the child wants to receive approval or to avoid punishment.
Module 10.3 Adolescence
Picture Partners/Alamy Stock Photo
Learning Objectives
10.3a Know . . . the key terminology concerning adolescent development.
10.3b Understand . . . the process of identity formation during adolescence.
10.3c Understand . . . the importance of relationships in adolescence.
10.3d Understand . . . the functions of moral emotions.
10.3e Apply . . . your understanding of the categories of moral reasoning.
10.3f Analyze . . . the relationship between brain development and adolescent judgment and risk taking.
The Internet can be a healthy part of your social life and a necessary research tool for your education. Indeed, as the Internet has become more of a platform for social networking, at least moderate use of the Internet is associated with greater social involvement (Gross, 2004) and stronger academic motivation (Willoughby, 2008). However, the Internet has its dangers. One is that use may become pathological, with people turning to the Internet as a way of coping with life’s difficulties, much the same as people turn to drugs, alcohol, sex, or their career. Even psychologically healthy adolescents can get hooked on the Internet, and such pathological use can lead to depression (Lam & Peng, 2010). The Internet may also carry social dangers, such as bullying and public humiliation, now that one’s indiscretions or mistakes can be posted online to haunt people for years to come. In 2012, 15-year-old Amanda Todd from British Columbia was cruelly ostracized and humiliated by her peers after revealing photos of her were posted online. Although she switched schools, she couldn’t escape the online bullying, and she tragically committed suicide.
The Internet has revolutionized society in a single human generation. But we don’t know how it will affect human development, particularly in the challenging period of adolescence when people are forming their identities and often committing some of their biggest mistakes. This will undoubtedly be a major focus for research, and will raise major questions for society in the years to come.
Focus Questions
1. What types of changes occur during adolescence? 2. Why do adolescents so often seem to make risky decisions?
“It was the best of times; it was the worst of times.” For many people, this pretty much sums up adolescence, a time of confusion, pimples, and existential angst, as well as hanging out with friends, gaining greater independence from parents, and focusing intensely on intimate relationships. This often tumultuous time
between childhood and adulthood involves many physical changes, increasing cognitive sophistication, and a great deal of emotional and social volatility.
Amanda Todd: A tragic case of cyber-bullying. Darryl Dyck/Canadian Press
Physical Changes in Adolescence
The physical transition from childhood to adolescence starts with puberty, culminating in reproductive maturity. Puberty begins at approximately age 11 in girls and age 13 in boys, although there is wide variation. The changes that occur during puberty are primarily caused by hormonal activity. Physical growth
is stimulated by the pituitary gland, under the control of the hypothalamus, which regulates the release of hormones such as testosterone and estrogen. These
hormones also contribute to the development of primary and secondary sex traits in boys and girls. Primary sex traits are changes in the body that are part of reproduction (e.g., enlargement of the genitals, ability to ejaculate, the onset of menstruation). Secondary sex traits are changes in the body that are not part of reproduction, such as the growth of pubic hair, increased breast size in females, and increased muscle mass in males (Figure 10.13 ).
Figure 10.13 Physical Changes That Accompany Puberty in Male and Female Adolescents Hormonal changes accelerate the development of physical traits in males and females. Changes involve maturation of the reproductive system (primary sex traits) as well as secondary sex traits such as enlargement of breasts in women and increased muscle mass in males. Source: Lilienfeld, Scott O.; Lynn, Steven J; Namy, Laura L.; Woolf, Nancy J., Psychology: From Inquiry To Understanding,
Books A La Carte Edition, 2nd Ed., © 2011. Reprinted and Electronically reproduced by permission of Pearson Education,
Inc., New York, NY.
For girls, menarche—the onset of menstruation—typically occurs around age 12. The timing of menarche is influenced by physiological and environmental factors, such as nutrition, genetics, physical activity levels, illness (Ellis & Garber, 2000), and family structure, such as the absence of a father (Bogaert, 2008). Boys are considered to reach sexual maturity at spermarche, their first ejaculation of sperm, at around age 14.
Interestingly, puberty happens much earlier now than it did 100 years ago.
American teens in the 19th century started puberty at 16–17 on average; nowadays, about one-third of boys show the beginnings of physical maturation at
age 9 (Reiter & Lee, 2001), as do almost 40% of European-American girls, and almost 80% of African-American girls (Herman-Giddens et al., 1997). This is probably because of behavioural changes that increase body fat (e.g., poor nutrition, insufficient exercise), and environmental stresses that increase stress hormones in the body. As the environment changes, our biology changes along with it.
Teens’ rapidly-developing bodies bring a host of developmental challenges, from feelings of self-consciousness and a heightened desire to be attractive and to fit in, to increasing sexual interest and experimentation, to the negative moods that
accompany hormonal fluctuations (Warren & Brooks-Gunn, 1989). Adolescents who begin to physically develop earlier than their peers can face additional challenges. Early-developing females often have to cope with being teased and having their bodies made into objects of others’ attention. Early-developing boys tend to have it easier; their masculine traits are often regarded positively by both themselves and their peers. Nevertheless, early developers of either gender run a greater risk of drug and alcohol abuse and of unwanted pregnancies.
Recent research has shown that adolescence is a time of major brain changes as well. In particular, the frontal lobes undergo a massive increase in myelination, speeding up neural firing by up to 100-fold in those areas (Barnea-Goraly et al., 2005; Sowell et al., 2003). The frontal lobes also undergo a wave of synaptic pruning, during which relatively unused synaptic connections are broken, leaving a more efficiently functioning brain. The net result of these changes is an increase in teens’ abilities to exert self-control. However, during adolescence this process is merely under way, not completed, leaving teens often struggling with volatile emotional experiences.
Module 10.3a Quiz: Physical Changes in Adolescence
Know . . .
1. One of the changes that occurs in puberty is the beginning of menstruation for females. This event is known as ________.
A. estradiol
B. menarche
C. a primary sex trait
D. spermarche
2. A brain area that shows large changes during adolescence is the ________.
A. motor cortex
B. visual cortex
C. frontal lobes
D. brainstem
Understand . . .
3. One of the major differences between primary and secondary sex characteristics is that
A. primary sex characteristics are directly related to reproductive function.
B. secondary sex characteristics are directly related to reproductive function.
C. whether a person is male or female depends on the secondary sex characteristics.
D. primary sex characteristics are unique to human reproductive anatomy.
Emotional Challenges in Adolescence
The physical and emotional changes associated with puberty are widely believed to be connected to each other. For example, mood swings and experimental high-risk behaviours are attributed to “raging hormones.” But is this
characterization of adolescence accurate? Are most teens hormonally supercharged animals, constantly desiring to hook up with the first attractive (or even unattractive) person to cross their path?
The belief that adolescence is tumultuous has held sway in popular culture as
well as in psychology since at least the early 1900s (Hall, 1904); some theorists even believed that the absence of extreme volatility was an indication of arrested
development (Freud, 1958). However, this belief came under fire from cultural anthropologists (Benedict, 1938; Mead, 1928), who discovered that in many non-Western cultures, the transition from childhood to adulthood happened remarkably smoothly; children simply began to take on more and more responsibilities, and then moved into their adult roles without such a dramatic and volatile transition.
In the decades since, research has painted a somewhat mixed picture of adolescence. On the up side, the majority of teens keep their forays into debauchery fairly minimal and do not let their larger lives get unduly harmed by their experimentation. Most teens also grow out of these patterns fairly readily and move into adulthood relatively unscathed by their teenage experiences
(Bachman et al., 1997). Navigating adolescence successfully leaves teens feeling they know who they are, having constructed a healthy social identity, and having learned to identify at least some of their own personal values and goals. On the down side, however, the emotional road through adolescence also contains its fair share of bumps. Teens are prone to experiencing particularly
intense and volatile emotions (Dahl, 2001; Rosenblum & Lewis, 2003), including heightened feelings of anxiety and depression (Van Oort et al., 2009).
Emotional Regulation During Adolescence
Adolescence is a time when teens must learn to control their emotions
(McLaughlin et al., 2011). Research at Queen’s University has shown that one key to adolescents effectively regulating their emotions is being able to draw flexibly upon a diverse set of self-control strategies. Adolescents who rely narrowly upon a limited number of strategies (e.g., always suppressing their emotions or, conversely, always reaching out and talking to people about their feelings) are at greater risk for developing symptoms of anxiety and depression (Lougheed & Hollenstein, 2012).
One of the most flexible and powerful strategies for dealing with emotions is
cognitive reframing (see Module 16.2 ), where we learn to look at our experience through a different “frame.” For example, failure can be reframed as an opportunity to learn, and a threatening experience as a challenge to be overcome. The ability to effectively choose reframing strategies, especially when under the grip of strong emotions, relies upon a sophisticated cognitive control
network involving the frontal and parietal lobes (McClure et al., 2004). These are precisely the brain areas that are undergoing the most development during adolescence. Thus, helping adolescents learn self-control strategies is critically important, not only for developing good habits, but for helping them to develop the cognitive control systems in their brains. Failing to provide this guidance is a lost opportunity for making a major difference in the lives of today’s youth.
The ability to reframe is critical to one of the most important skills adolescents
need to hone as they move into adulthood—the ability to delay gratification, putting off immediate temptations in order to focus on longer-term goals. For example, should you party with your friends, or study for the test next week? Adolescents who master this skill are far more likely to be successful in life. An inability to delay gratification reflects a tendency to discount the future in order to live in the moment, which lies at the heart of a wide range of dysfunctional behaviours ranging from addictions and unsafe sex, to racking up credit card debt and failing to meet deadlines.
Unfortunately, the ability (or inability) to delay gratification tends to be quite stable throughout childhood and adolescence. A brilliant set of studies begun in the 1960s looked at what young children would do if given a difficult temptation—they could have a marshmallow immediately, or they could wait for 15 minutes, at which point they would be given two marshmallows. It’s a pretty simple choice, right? A mere 15 minutes and the marshmallow feast doubles in size! However, preschool-aged children find it excruciating to resist this temptation. In one study
(Mischel & Ebbesen, 1970), when the marshmallow was temptingly placed right in front of the children, they could only wait for, on average, one minute!
The finding that made these studies famous in psychology was that the length of time kids could delay their marshmallowy gratification predicted how well-adjusted they would become in adolescence, many years later. The child who could wait longer for a marshmallow at age 4 was better adjusted both psychologically and socially at age 15, and had higher SAT scores by the end of
high school (Shoda et al., 1990)! (SATs are standardized tests written by American students at the end of high school, and are a major part of determining acceptance to college and university.) Clearly, being able to delay gratification is an important skill.
Importantly, this is also a skill that people can learn. In fact, the challenge of delaying gratification is basically the same as the challenge of controlling emotions, and the same strategies are useful, such as cognitive reframing. Even preschool-aged children can use them. In the simplest and most literal reframing study, children were instructed to simply imagine that the marshmallow was a picture, not a real object, and to do this by mentally drawing a picture frame around the object. Incredibly, this simple imagination tactic increased the
average wait time to a full 18 minutes (Moore et al., 1976).
Working the Scientific Literacy Model: Adolescent Risk and Decision Making
One of the nightmares of every parent is the smorgasbord of disasters waiting for adolescents as they explore their increasing independence—sexually transmitted diseases, drugs, and the whole host of alluring activities parents wish were never invented (despite their own fond memories of their younger years . . .).
What do we know about adolescence and risky decision making? Parents do have some reason to fear; research shows that adolescents are particularly prone to behaving impulsively and
making risky decisions (Chambers et al., 2003; Steinberg, 2007). As a result, driving recklessly, unsafe sex (Arnett, 1992), drug and alcohol abuse, accidents, and violence are more common during adolescence than during any other stage of life
(Chambers & Potenza, 2003; Steinberg, 2008).
Why do adolescents often make such bad judgment calls? Adolescence is a perfect storm of risk-inducing factors, including a teenage culture that glorifies high-risk activities, intense peer pressure, increased freedom from parents, a growing ability to critically question the values and traditions of society, and a brain that is ripe for risk due to still-developing cognitive control systems (especially the prefrontal cortex) and well-developed
reward systems located in limbic areas (Casey et al., 2008; Galvan et al., 2006). Indeed, teenage neurophysiology is a battleground of opposing urges; the reward system acts like the proverbial devil on one’s shoulder, urging “Do it! Do it!” while the underdeveloped prefrontal areas play the role of the beleaguered angel, pleading “Don’t do it! It’s not worth it!”
How can science test the link between brain function and decision making in adolescents? Modern technology has enabled researchers to look at the brain activity of adolescents in the process of making risky decisions. In one study, adolescents had their brains scanned using functional magnetic resonance imaging while they played a betting game. In this experiment, participants had to make a decision between a high-risk, high-reward choice (placing a $6 bet with a 25% chance of winning), and a low-risk, low-reward choice (placing a $1 bet with a 50% chance of winning).
Adolescents who selected the high-risk choice had less brain activity in their prefrontal cortex than those who selected the low-
risk choice (Figure 10.14 ; Shad et al., 2011). It seems that choosing the high-risk gamble was, in a sense, easier; those
teens simply focused on how much they wanted the bigger reward, and ignored the higher likelihood that they would lose. On the other hand, making the low-risk choice involved some neurological conflict; those teens wanted the bigger reward, but restrained themselves by taking into account the probabilities. This restraint involved the frontal lobes.
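To make the risk–reward trade-off in this betting task concrete, here is a minimal worked sketch in Python. It is not taken from Shad et al. (2011); the payoff structure is an assumption made purely for illustration (an even-money gamble, where a win returns the amount bet and a loss forfeits it). Under that assumption, attending only to the size of the possible win hides the fact that the riskier bet is actually worse in expectation.

```python
# Illustrative sketch only: compares the two gambles described in the text under an
# ASSUMED even-money payoff (win = gain the amount bet, lose = forfeit it). The actual
# payoff structure used by Shad et al. (2011) is not specified here.

def expected_value(bet, p_win):
    """Expected gain of an even-money gamble: +bet with probability p_win, -bet otherwise."""
    return p_win * bet + (1 - p_win) * (-bet)

high_risk = expected_value(bet=6, p_win=0.25)   # the $6 bet with a 25% chance of winning
low_risk = expected_value(bet=1, p_win=0.50)    # the $1 bet with a 50% chance of winning

print(f"High-risk gamble expected value: {high_risk:+.2f}")  # -3.00
print(f"Low-risk gamble expected value:  {low_risk:+.2f}")   # +0.00
```

Working through a comparison like this is the kind of deliberate, probability-weighted calculation that depends on prefrontal control; defaulting to “the bigger prize” requires no such calculation.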
Figure 10.14 Extended Brain Development
The prefrontal cortex (circled in blue) continues to develop through adolescence and into young adulthood. werbefoto-burger.ch/Fotolia
This study helps to shed light on adolescent decision making in general. Compared to adults, adolescents have less-developed frontal lobes, and are therefore more likely to default to their strong reward impulses, rather than restraining their desires as a result of more sober and complex calculations of what would be in their best interest overall.
Can we critically evaluate this explanation for risky decision making?
This brain-based explanation does not fully explain adolescents’ behaviour, in at least two important ways. First, in this particular study, it’s not clear whether the prefrontal activation reflects teens thinking in more complex ways, or whether it shows that they are consciously restraining themselves from following their reward-focused desires. Is the key factor here about complex thought or self-control?
Second, in everyday decisions, other factors likely influence teens’ preference for risk, such as the size of rewards and costs, the importance of long-term goals, personality characteristics such as extraversion (which is related to reward sensitivity), and the social context in which the decisions occur. For example, psychologists have found that in some situations, adolescents are no more likely to engage in risky behaviour than adults. But when
other teens are around, this propensity changes (see Figure 10.15 ). In fact, the presence of other teens can weaken the activity in the frontal lobes (Segalowitz et al., 2012). Clearly, realistic strategies for reducing adolescent risk taking should also consider the important role that situational factors play in adolescents’ decision making.
Figure 10.15 What Drives Teenagers to Take Risks?
One key factor in risk taking is simply other teenagers. When teens play a driving video game with other teens, they crash more than when playing the same game alone, and more than
adults playing the game (from Steinberg, 2007). Source: Adapted from figure 2, p. 630 in “Peer Influence on Risk-Taking, Risk Preference, and Risky Decision-Making in Adolescence and Adulthood: An Experimental Study” by M. Gardner & L. Steinberg (2005). Developmental Psychology, 41(4), 625–635.
Why is this relevant? Research on the developing adolescent brain helps explain problems with risk and impulse control, which could lead to the development of programs that could steer adolescents toward making better decisions. If we could figure out how to enhance prefrontal functioning in teens, or how to get more of them to engage in practices like meditation that would do the same thing, we could potentially reduce their tendency to make unnecessarily risky decisions.
Module 10.3b Quiz:
Emotional Challenges in Adolescence
Understand . . .
1. Adolescent decision making is often problematic or dangerous because teens have
A. underdeveloped limbic areas responsible for reward, and well-developed prefrontal areas.
B. well-developed limbic areas responsible for reward, and underdeveloped prefrontal areas.
C. only partly moved out of the concrete operational stage of cognitive development.
D. poorly formed sets of goals.
2. The length of time children can wait in the marshmallow task is an indicator of
A. the age at which they begin to develop secondary sex characteristics.
B. intelligence.
C. self-control.
D. emotional security.
Apply . . .
3. After finishing Grade 10, Naomi got a job giving music lessons at a day camp for kids aged 6–8. She was very excited. However, the first week was a disaster. The kids misbehaved and some instruments were broken. That weekend, she thought about what had happened. Rather than viewing the past week as a failure, she decided to view it as a learning experience that could help her do a better job when she taught a new group of kids the next week. Naomi’s thought process is an example of __________.
A. goal formation
B. cognitive reframing
C. autonomy
D. concrete operations
Cognitive Development: Moral Reasoning vs. Emotions
As we have just seen, making wise decisions depends on the prefrontal cortex. This area is involved in higher cognitive abilities, such as abstract reasoning and logic (what Piaget referred to as formal operational thinking; see Module 10.2), which also begin to show substantial improvements starting at about age 12. Since Piaget, psychologists have generally believed that the shift to formal operational thinking laid the foundation for effective moral reasoning. For adolescents, this increase in complex cognitive ability allows them to consider abstract moral principles, to view problems from multiple perspectives, and to think more flexibly.
Kohlberg’s Moral Development: Learning Right
From Wrong
The most influential theory of the development of moral reasoning was created by Lawrence Kohlberg, after studying how people reasoned through complex moral dilemmas. Imagine the following scenario, unlikely as it may be:
A trolley is hurtling down the tracks toward a group of five unsuspecting people. You are
standing next to a lever that, if pulled, would direct the trolley onto another track,
thereby saving the five individuals. However, on the second track stands a single,
unsuspecting person, who would be struck by the diverted trolley.
What would you choose to do? Would you pull the lever, directly causing one person to die, but saving five others? Or would you be unwilling to directly cause someone’s death and therefore do nothing? Moral dilemmas provide interesting tests of reasoning because they place values in conflict with each other. Obviously, five lives are more than one, yet most people are also unwilling to take a direct action that would cause a person to be killed.
But even more important than what you would choose is why you would choose it. Kohlberg (1984) believed that people’s reasons evolved as they grew up and became better able to think in complex ways. By analyzing people’s reasons for their decisions in these sorts of dilemmas, he developed a stage theory of moral
development, here organized into three general stages (see Table 10.4 ).
Table 10.4 Kohlberg’s Stages of Moral Reasoning

Preconventional morality
Description: Characterized by self-interest in seeking reward or avoiding punishment. Preconventional morality is considered a very basic and egocentric form of moral reasoning.
Application to Trolley Dilemma: “I would not flip the trolley track switch because I would get in trouble.”

Conventional morality
Description: Regards social conventions and rules as guides for appropriate moral behaviour. Directives from parents, teachers, and the law are used as guidelines for moral behaviour.
Application to Trolley Dilemma: “I would not flip the switch. It is illegal to kill, and if I willfully intervened I would probably violate the law.”

Postconventional morality
Description: Considers rules and laws as relative. Right and wrong are determined by more abstract principles of justice and rights.
Application to Trolley Dilemma: “I would flip the switch. The value of five lives exceeds that of one, so saving them is the right thing to do even if it means I am killing one person who would otherwise not have died.”
At the preconventional level, people reason largely based on self-interest, such as avoiding punishment. This is what parents predominantly appeal to when they threaten children with time-outs, spankings, or taking away toys. At the conventional level, people reason largely based on social conventions (e.g., tradition) and the dictates of authority figures; this is what parents are appealing to with the famously frustrating, “Because I said so!” At the postconventional level, people reason based on abstract principles such as justice and fairness, thus enabling them to critically question and examine social conventions, and to
consider complex situations in which different values may conflict.
The shift to postconventional morality is a key development, for without this shift, it is unlikely that the individual will rebel against authority or work against unjust practices if they are accepted by society at large. Indeed, social reformers always encounter resistance from members of society who hold to “traditional” values and think of change as destructive and destabilizing.
Kohlberg regarded the three stages of moral reasoning as universal to all humans; however, because he developed his theory mostly through the study of
how males reason about moral dilemmas, other researchers argued that he had failed to consider that females may reason about moral issues differently. Carol Gilligan (1982) suggested that females base moral decisions on a standard of caring for others, rather than the “masculine” focus on standards of justice and fairness that Kohlberg emphasized. Some support has been found for this; women are more likely to highlight the importance of maintaining harmony in
their relationships with others (Lyons, 1983). On the other hand, men and women generally make highly similar judgments about moral dilemmas
(Boldizar et al., 1989), and both genders make use of both caring and justice principles (Clopton & Sorell, 1993). This has led other researchers to question the importance of the gender distinction at all (Jaffee & Hyde, 2000).
However, a potentially more devastating critique has been made against the moral reasoning perspective in general, based on research showing that moral
reasoning doesn’t actually predict behaviour very well (Carpendale, 2000; Haidt, 2001). Knowing that something is right or wrong is very different from feeling that it is right or wrong. According to Jonathan Haidt’s social intuitionist model of morality, in our everyday lives our moral decisions are largely based on how we feel, not what we think. Haidt argues that moral judgments are guided by intuitive, emotional reactions, like our “gut feelings,” and then afterwards, we construct arguments to support our judgments. For example, imagine the
following scenario (adapted from Haidt, 2001):
Julie and Steven are brother and sister. They are travelling together in France on
summer vacation from college. One night they are staying alone in a cabin near the
beach. They decide that it would be interesting and fun if they shared a “romantic”
evening together. At the very least it would be a new experience for each of them. They
both enjoy the experience but they decide not to do it again. They keep that night as a
special secret, which makes them feel even closer to each other.
Emotion is a major component of moral thinking and decision making. Alex Wong/Getty Images
How do you react to this scenario? Was what took place between the two
siblings morally acceptable? If you are like most people, you probably did not think carefully through this scenario, consider different perspectives, and examine your reasoning before making a decision. Instead, you probably had a gut reaction, like “Brother and sister!?!? Gross! No way!” and made your decision almost instantly.
It is only after making a decision that most people engage in more thoughtful and reflective reasoning, trying to justify their decision. For some scenarios, it is easy to come up with justifications, such as “Brothers and sisters should not engage in romance because it could lead to sexual intercourse, which could produce genetic problems for the offspring,” or “They shouldn’t do it because if the family found out, it would be devastating.” However, it’s not hard to construct a scenario
that lies outside of such justifications, such as the brother and sister being infertile and having no other surviving family members. Faced with such a scenario, people might be hard pressed to find a justification; often, in such situations, people become flustered and confused, and resort to emphatically stating something like “I don’t know—it just isn’t right!” Or simply, “Ewww . . .” Their intuitive emotional reaction has told them it’s wrong, but their more
cognitive, effortful reasoning process has a difficult time explaining why it is wrong. Interestingly, in such situations, people generally do not change their judgments; they trust their intuitive reaction instead. Their feeling of disgust outweighs their inability to justify it, which is another piece of evidence suggesting that what matters most is not moral reasoning but moral feeling.
The improvements in emotional regulation that occur during adolescence have an influence on moral behaviour. Without some control over emotional reactions, people can become overwhelmed by the personal distress they experience upon
encountering the suffering of others (Eisenberg, 2000), and end up attending to their own needs rather than others’. Self-control, in turn, involves brain areas that are rapidly developing in adolescents, particularly the prefrontal cortex.
It is interesting to consider that the development of key moral emotions, such as empathy, is intimately bound up with the extent to which one’s social
relationships have been healthy right from birth (see Module 10.2 ). People who are regularly socially included and treated well by others are more likely to develop trust and security, which results in well-developed areas of the prefrontal cortex necessary for good decision making and well-developed moral emotional systems. This shows us that the early roots of moral behaviour reach all the way back into infancy, when attachment styles are initially formed, and extend into adolescence and beyond, when complex cognitive and self-control abilities are strengthened.
Biopsychosocial Perspectives: Emotion and Disgust
The social intuitionist model describes moral judgments as being driven primarily by emotional reactions. Many psychologists believe that these reactions draw upon evolutionarily ancient systems that evolved for functional reasons. For example, the disgust system evolved to keep us from ingesting substances that were harmful to us, such as feces and toxic plants. As we developed into more complex social beings, our
judgments of good and bad involved neural circuits that were more cognitive and conceptual; however, these higher-level cognitive systems evolved after our more basic physiological responses, and therefore are intertwined with the functioning of the older systems.
In terms of moral reasoning, what this means is that the cognitive systems that reason about right and wrong grew out of emotional systems that in turn grew out of systems of physiological responses of accepting or rejecting a substance from one’s body. From this
perspective, good and bad are not moral judgments per se, but rather elaborations of simpler physiological responses of acceptance or repulsion. One surprising hypothesis that follows is that feelings of actual, physical disgust may strongly influence supposedly moral judgments.
This has been tested in several different ways. One creative set of studies first activated physiological symptoms of repulsion, for example, by getting subjects to sit at a disgustingly dirty work station, or to smell
fart spray (Schnall et al., 2008). These disgust-inducing experiences led people to make more severe judgments of moral violations. Also, neuroimaging studies show that certain moral dilemmas trigger emotional areas in the brain, and this emotional activation determines the decision
that subjects make (Greene & Haidt, 2002; Greene et al., 2001).
Module 10.3c Quiz:
Cognitive Development: Moral Reasoning vs. Emotions
Know . . .
1. A stage of morality that views rules and laws as being related to abstract principles of right and wrong is the __________ stage.
A. postconventional
B. preconventional
C. preoperational
D. conventional
Understand . . .
2. What is the relationship between physical feelings of disgust and moral judgments?
A. Both physical and moral disgust activate the same brain areas, but do not directly influence each other.
B. Physical and moral disgust influence each other, but through unique neural pathways.
C. Physically disgusting stimuli increase the severity of a person’s moral judgments.
D. It is impossible to ethically test this relationship.
Apply . . .
3. Rachel believes that it is wrong to steal only because doing so could land her in jail. Which level of Kohlberg’s moral development scheme is Rachel applying in this scenario?
A. postconventional
B. preconventional
C. preoperational
D. conventional
Social Development: Identity and Relationships
The final aspect of adolescence to consider is the role of social relationships. To teenagers, friends are everything—the people who will support your story to your parents about why you came home late, who laugh hysterically with you at 3:00
in the morning, and who help you feel that your choice of clothing is actually cool. Friends are central to two of the most important changes that occur during adolescence—the formation of a personal identity, and the shift away from family relationships and toward peer and romantic relationships. These major changes in teens’ lives are sources of growth and maturation, but are also often sources of distress and conflict.
Who am I? Identity Formation During Adolescence
A major issue faced by adolescents is forming an identity, which is a clear sense of what kind of person you are, what types of people you belong with, and what roles you should play in society. It involves coming to appreciate and express one’s attitudes and values (Arnett, 2000; Lefkowitz, 2005), which are, in large part, realized through identifying more closely with peers and being accepted into valued social groups.
You may recall Erikson’s theory of psychosocial development from Module 10.2 (see Table 10.5 in Module 10.4 for an overview). Erikson described the stage of adolescence as involving the struggle of identity vs. role confusion. Adolescents are seeking to define who they are, in large part through their attachment to specific social groups; doing this successfully allows them to enter adulthood with a sense of their own authenticity and self-awareness.
In fact, forming an identity is so important in the teenage years that adolescents
may actually experience numerous identity crises before they reach young adulthood. An identity crisis involves curiosity, questioning, and exploration of different identities. It can also involve attaching oneself to different goals and values, different styles of music and fashion, and different subcultural groups, all the while wondering where one best fits in, and who one really is.
The process of exploring different identities, and experiencing more independence from the family, sets the stage for potential conflict, particularly with parents. Even well-meaning parents may feel somewhat threatened as their teenage son or daughter starts to establish more distance or starts to experiment
with identities they feel are unwise. They may feel hurt and want to hold onto their closeness with their child. They may also feel concerned and want to protect their child from making mistakes they will later regret. So, parents may simply be trying to help, but their advice, rules, or insistence that the teen abandon certain goals (“There’s no way you’re giving up math and science to take drama and music!”) may be interpreted as being restrictive or controlling. This, not surprisingly, can lead to conflict. And the more conflict teens perceive at home, the more they may turn to peers.
Peer Groups
Friendships are a major priority for most adolescents. Friendships generally take
place within a broader social context of small groups or cliques, and the membership and intensity of friendships within a clique constantly change
(Cairns & Cairns, 1994). Adolescent crowds—often identified with specific labels, such as “jocks,” “geeks,” “Goths,” or “druggies”—are larger than cliques and are characterized by common social and behavioural conventions.
Adolescents who can’t find their place in social networks have a difficult time; social exclusion can be a devastating experience. When rejected by peers, some adolescents turn to virtual social networks for online friendships, or join distinctive sub-groups in order to gain acceptance. This tendency to seek acceptance within specific groups is obviously not limited to teenagers, but adolescence is a time of particular social vulnerability because adolescents are, in general, so actively working on their “identity project.”
For decades, television shows and movies have offered glimpses into life within
adolescent cliques and crowds. The portrayals may be exaggerated, but they are often successful because viewers can closely identify with the characters’ experiences. Photos 12/Alamy Stock Photo
AF archive/Alamy Stock Photo
One of the most troubling outcomes of social rejection is the experience of shame, the feeling that there is something wrong with you. It can be accompanied by feelings of worthlessness and inferiority, or by a more subtle, gnawing sense that you need to prove yourself and that you aren’t quite good enough. Shame-prone individuals have often experienced substantial social rejection; a key source is within the family, such as when a child’s attachment needs are consistently unmet.
Many psychologists believe that shame and other negative emotions that are connected to social rejection, bullying, teasing, and being publicly humiliated can lead to tragic outbursts of violence, such as the school shootings that have become disturbingly frequent in the United States in recent years. In almost all cases of school shootings, social rejection is a key factor that precedes the
violent outburst (Leary et al., 2003; Tangney & Dearing, 2002). Just as the security from having one’s need to belong satisfied leads to the development of empathy and moral behaviours (see Module 10.2 ), the insecurity from having one’s need to belong go unmet can lead to violence.
Romantic Relationships
As children mature into teenagers, their attachment needs shift, not fully but in important ways, into their intimate or romantic relationships. Here, the dramas of their interpersonal systems play out on a new stage. In other words, teenagers are pretty interested in being attracted to each other. This opens up the potential exploration of new worlds of emotional and physical intimacy and intensity.
Many people, for many different reasons, may feel uncomfortable with adolescents exploring and engaging in sexual behaviour. Perhaps
unsurprisingly, North American teenagers themselves don’t seem to agree. Between 40% and 50% of Canadian teens aged 15–19 report having had sexual
intercourse (Boyce et al., 2006; Rotermann, 2008), and the proportion who have engaged in other forms of sexual acts such as oral sex is considerably higher. More than 80% of American adolescents report engaging in non-
intercourse sex acts before the age of 16 (Bauserman & Davis, 1996), and more than half of Canadian teens in Grade 11 report having experienced oral
sex (Boyce et al., 2006). Some teens turn to oral sex because they see it as less risky than intercourse, both for one’s health and social reputation (Halpern-Felsher et al., 2005).
Same-sex sexual encounters are also very common and typically occur by early
adolescence (Savin-Williams & Cohen, 2004), although contrary to stereotypes, such an experience is not an indication of whether a person identifies themselves as homosexual, or as having any other sexual orientation. In fact, the majority (60%) of people who identify themselves as heterosexual
have had at least one same-sex encounter (Remafedi et al., 1992). For many, this is part of the experimentation that comes with figuring out who you are and establishing an identity.
The process by which adolescents come to recognize their sexual orientation depends on many factors, including how they are perceived by their family and peers. Because of some people’s still-existing prejudices against non- heterosexual orientations, it is not uncommon for many people who don’t identify as heterosexual to experience some difficulty accepting their sexuality, and thus, to struggle with feelings of rejection toward themselves. However, this process is not always difficult or traumatic; it largely depends on how supportive family and other relationships can be. Nevertheless, despite these extra identity challenges, homosexuals have about the same level of psychological well-being as heterosexuals (Rieger & Savin-Williams, 2011).
Although sexual exploration is a normal part of adolescence, it can unfortunately be dangerous for many people. Research at the University of New Brunswick has shown that among Canadian teens in Grade 11, approximately 60% of both males and females reported having experienced psychological aggression
against them by their romantic relationship partner. About 40% experienced sexual aggression, generally in the form of being coerced or pressured into
having sex (Sears & Byers, 2010). In addition, each year in North America, millions of teens face the life upheaval of an unplanned pregnancy, sexually transmitted diseases, or simply having sex that they will later regret.
Overall, the emotional upheaval of relationships, from the ecstasy of attraction, to the heartbreak of being rejected or cheated on, to the loneliness one may feel in the absence of relationships, consumes a great deal of many teenagers’ attention and resources and is a central part of the often tumultuous experience of adolescence.
Module 10.3d Quiz:
Social Development: Identity and Relationships
Know . . .
1. The kind of person you are, the types of people you belong with, and the roles that you feel you should play in society are often referred to as your __________.
A. crowd
B. peer group
C. autonomy
D. identity
Understand . . .
2. For most teens, the most devastating experience would be
A. failing at an important competition.
B. being rejected by their friends.
C. being rejected on a first date.
D. having a physical injury.
Module 10.3 Summary

10.3a Know . . . the key terminology concerning adolescent development.
conventional morality
delay gratification
identity
menarche
postconventional morality
preconventional morality
primary sex traits
secondary sex traits
spermarche

10.3b Understand . . . the process of identity formation during adolescence.
A major challenge of adolescence is the formation of a personal identity, which involves exploring different values and behaviours, and seeking inclusion in different social groups. The eventual outcome, if navigated successfully, is a relatively stable and personally satisfying sense of self.

10.3c Understand . . . the importance of relationships in adolescence.
Teenagers undergo a general shift in their social attachments as family becomes less central and friends and intimate relationships take on increased significance. The failure to establish a sense of belonging is an important precursor to dysfunctional behaviours and violence.

10.3d Understand . . . the functions of moral emotions.
Contrary to theories of moral reasoning, recent research on moral emotions, such as disgust, suggests that these feelings are what lead to moral behaviour, and reasoning generally follows as a way of justifying the behaviour to oneself.

10.3e Apply . . . your understanding of the categories of moral reasoning.
Apply Activity: Read the following scenarios and identify which category of moral reasoning (preconventional, conventional, or postconventional) applies to each.
1. Jeff discovers that the security camera at his job is disabled. He decides it is okay to steal because there’s no way he’s going to get caught.
2. Margaret is aware that a classmate has been sending hostile text messages to various people at her school. Although she does not receive these messages, and she does not personally know any of the victims, Margaret reports the offending individual to school officials.

10.3f Analyze . . . the relationship between brain development and adolescent judgment and risk taking.
Many problems with judgment and decision making involve a kind of tug-of-war between emotional reward systems in the limbic areas of the brain, and the prefrontal cortex, which is involved in planning, reasoning, emotion, and impulse control. Because the prefrontal cortex is still developing during adolescence, particularly through myelination and synaptic pruning, it is often not sufficient to override the allure of immediate temptations, leading to failures to delay gratification.
Module 10.4 Adulthood and Aging
reppans/Alamy Stock Photo
Learning Objectives
10.4a Know . . . the key terminology concerning adulthood and aging.
10.4b Know . . . the key areas of growth experienced by emerging adults.
10.4c Understand . . . age-related disorders such as Alzheimer’s disease.
10.4d Understand . . . how cognitive abilities change with age.
10.4e Apply . . . effective communication principles to the challenge of improving your own relationships.
10.4f Analyze . . . the stereotype that old age is a time of unhappiness.
“Use it or lose it.” This is one of those sayings that you grow up hearing, and you think, “Yeah, whatever, I’m young and awesome; I’m never going to lose it.” But time goes by, and like it or not, the day will come when you may find yourself puffing at the top of a flight of stairs, or standing in the kitchen wondering why you’re there. You may wonder, what’s happened to me? Why do I feel so old?
We all know that if you stay physically active, your body will stay stronger and healthier as you age, maintaining better cardiovascular fitness, muscle tone, balance, and bone density. Thankfully, recent advances in neuroscience confirm that the same thing is true for the brain. If you use it, you’re less likely to lose it. This is important because, unfortunately, brain connections are exactly what people lose as they age, particularly from their 60s onward, resulting in less neural connectivity and reductions in grey and white matter volume. These neurological losses are accompanied by gradual declines in some types of cognitive functioning.
The fact that exercising your brain slows down the neural signs of aging —and even reduces the likelihood of developing age-related disorders such as Alzheimer’s disease—is great news. And even better news is that exercising your brain is actually fun! It’s not like spending countless hours on the brain equivalent of a treadmill, memorizing pi to 35 decimal places. Instead, neurological exercisers are those who regularly stay actively involved in things they love—games, sports, social activities, hobbies, and in general remaining lifelong learners. This makes getting old sound not so bad after all. . . .
Focus Questions
1. What are the key developmental challenges adults face as they age?
2. How does aging affect cognitive functioning?
Becoming an adult does not entail crossing any specific line. It’s not as clear-cut as adolescence; after all, puberty is kind of hard to miss. In Canada, you are considered to be an adult from a legal perspective at 18. Still, it’s questionable whether 18-year-olds are fully fledged adults; they have essentially the same lifestyle as 17-year-olds, often at home or in student housing, with relatively few responsibilities beyond brushing their teeth and dragging themselves to work or school. As time goes by, people get increasingly integrated into working society, begin careers, usually establish long-term relationships, pay bills, possibly have children, and in a variety of ways conform to the expectations and responsibilities of adulthood. As they move slowly from adolescence toward retirement and beyond, adults go through a number of changes—physically, socially, emotionally, cognitively, and neurologically. This module will examine these changes across the different stages of adult development.
From Adolescence through Middle Age
When we are children and adolescents, we often feel like we can’t wait to grow up. Many of you can likely remember how large and mature 18-year-olds seemed when you were younger. Eighteen-year-olds went to university, had jobs, and seemed so poised. Now that many of you are in this age range, you can see that this view of emerging adults is a bit naïve. That said, people in this age group have their entire adult lives in front of them. The adventure is beginning.
Emerging Adults
The time between adolescence and adulthood is a period of great personal
challenge and potential growth. Emerging adults confront many adaptive challenges; they may leave home for the first time; start college, university, or full-time work; become more financially responsible for themselves; commit to and cohabit with romantic partners; and, of course, deal with the endless crises of their friends.
Adults inhabit a much more complex world than children, and this becomes increasingly clear as the demands of life, and the need to be responsible for yourself, increase. How well individuals navigate these challenges is important for setting the stage of the next phases of life, and affects feelings of self-worth and confidence in handling the challenges of adulthood. On the other hand, adulthood also brings a huge amount of freedom. You make money, you can travel, eat what you want, and (usually) do what you want. You can settle into your identity as a human being, and you can become comfortable in your own skin. Of course, all of this freedom operates within a complex web of social relationships and responsibilities, and adulthood involves balancing these various factors over time.
Researchers at the University of Guelph conducted an in-depth study of the experiences of these emerging adults, identifying three main areas of personal
growth: relationships, new possibilities, and personal strengths (Gottlieb et al., 2007). Interestingly, these correspond perfectly to the domains of relatedness, autonomy, and competence that are widely viewed as key pillars of healthy
development throughout the lifespan (these are discussed in depth in Module 11.3 ).
In the relationships domain, most people in this study felt that they had grown in their abilities to trust others, to recruit support from others, and generally to be able to establish strong and intimate connections. This increased intimacy is an outgrowth of people learning to be themselves with others, to know who they are, and to connect in ways that accept and encourage people’s authenticity. The
domain of new possibilities reflects the greater freedom that emerging adults enjoy to choose activities that better fit their goals and interests, to broaden their horizons, and to actively search for what they want to do with their lives. The
domain of personal strength reflects the confidence young adults gain as they confront more serious life challenges and discover that they can handle them.
The emergence into adulthood is a time, therefore, of immense opportunity. As a person comes into their own, they can engage with the world that much more confidently and effectively. And that seems to be the story of adulthood: greater opportunities, greater challenges.
Early and Middle Adulthood
The first few decades of early adulthood are typically the healthiest and most vigorous times of life. People in their 20s to 40s are usually stronger, faster, and healthier than young children or older people. After adolescence, when one has finished growing, one enters a kind of plateau period of physical development in which the body changes quite slowly (aside from obvious exceptions, like pregnancy). For women, this period starts to shift at approximately age 50 with
the onset of menopause, the termination of the menstrual cycle and reproductive ability. The physical changes associated with menopause, particularly the reduction in estrogen, can result in symptoms such as hot flashes, a reduced sex drive, and mood swings. Psychologically, some women experience a period of adjustment, perhaps feeling like they are no longer “young” or as potentially worthwhile; these types of adjustment problems are common to many different major life changes, and as always, the severity of such symptoms varies widely among individuals. Men, on the other hand, don’t experience a physical change as substantial as menopause during middle adulthood, although testosterone production and sexual motivation typically decline.
Early and middle adulthood are also an important time for relationships, particularly of the romantic variety. This links back to Erik Erikson’s theory of
development across the lifespan (see Table 10.5). As mentioned in earlier modules, in each of Erikson’s stages the individual faces a specific developmental challenge, or crisis of development. If the individual successfully resolves this crisis and overcomes this challenge, they become better able to rise to the challenges of subsequent stages and move on in life, letting go of the specific issues that characterized the earlier stages. However, if the stage is not successfully resolved, lingering issues can interfere with the person’s subsequent development.
According to Erikson’s theory, the first four stages of development are completed
during infancy and childhood (see Module 10.2 ); the fifth stage takes place during adolescence (see Module 10.3 ). In the sixth stage, Young adulthood,
the individual must cope with the conflict between intimacy and isolation. This stage places emphasis on establishing and maintaining close relationships. The
following stage of Adulthood involves the tension of generativity vs. stagnation, during which the person either becomes productively engaged in the world, playing useful roles in family, work, and community, or else “stagnates,” becoming overly absorbed with their own life and failing to give back to the world in a useful way.
Putting these two stages together gives a decent picture of the central foci of an adult’s life. Adulthood is the challenge of balancing one’s own personal needs with one’s relationships, while also fulfilling family responsibilities and playing a variety of roles in society (depending on things like one’s career and one’s place in the community). A key part of these stages is marriage (or cohabitation), perhaps the most important relationship(s) of adulthood.
Love and Marriage
Although not all long-term committed relationships proceed to marriage, marriage remains the norm: 67% of Canadian families involve a married couple (with or without children), although this proportion has been dropping in recent years, from 70.5% in 2001. Common-law and lone-parent families each account for about 16% of families (Statistics Canada, 2012c).
Consistent with Erikson’s theorizing, being able to establish a committed, long- term relationship seems to be good for people (although not in all cases, such as abusive relationships). On average, being in such a relationship is associated
with greater health, longer life (Coombs, 1991; Koball et al., 2010), and increased happiness (Wayment & Peplau, 1995). Numerous factors are involved in these benefits. For example, married couples encourage each other to stay active and eat healthier diets, are more satisfied with their sex lives (and have sex more frequently than those who stay single, “swinging single” myths
notwithstanding), and enjoy greater financial security (Waite & Gallagher, 2000).
But is it really marriage that makes people happier? Or is it due to living together in a committed relationship? Many people believe that living together before marriage is harmful to a relationship, whereas others believe it is a wise thing to do before making the commitment to marry a person. Until a few years ago, research suggested that despite the beliefs of more progressively minded folks, cohabiting before marriage appeared to be associated with weaker relationships
in a variety of ways (e.g., Stack & Eshleman, 1998). However, a dramatic reversal of this opinion occurred after a large international study of relationships
across 27 different countries (Lee & Ono, 2012) showed that the reason people in common-law relationships seem less happy, on average, is actually cultural intolerance of these types of relationships. In cultures with more traditional gender roles, cohabiting outside of marriage is frowned upon, and couples who do so suffer a social cost. This negatively affects women in particular, whose happiness depends more heavily on family relationships and
interpersonal ties (Aldous & Ganey, 1999). In more egalitarian societies, common-law relationships are not judged as negatively, and consequently, there seems to be no cost to living with a partner before marriage. Indeed, many people would argue that it is a good idea, leading people to make better decisions when choosing a life partner.
Despite the promise of “until death do us part,” about 40% of Canadian
marriages end in divorce (Statistics Canada, 2004; see Figure 10.16 ). One of the key factors that determines whether a marriage will end, and the factor that we have the most control over, is how well partners in a relationship are able to communicate with each other, particularly when they are having a conflict. Several decades of behavioural studies by Dr. John Gottman looked at the communication patterns of couples and led to some key insights about what makes relationships break down and how relationship partners can prevent breakdowns from happening.
Figure 10.16 Marriage and Divorce Trends in Canada Starting in the 1960s, Canadian divorce rates began rising quickly. They have been fairly steady for the past 20 years. Source: Statistics Canada, Divorce cases in civil court, 2010/2011, Juristat Article, Catalogue no. 85-002-X, 2012.
Reproduced and distributed on an “as is” basis with the permission of Statistics Canada.
By observing a couple interacting in his wonderfully named “love lab,” Gottman has been able to predict with up to 94% accuracy whether a relationship will end
in divorce (e.g., Buehlman et al., 1992; Gottman & Levenson, 2002). Across multiple studies, certain patterns of behaviour are highly predictive of relationship
break-up. He calls them, rather dramatically, the Four Horsemen of the Apocalypse (Gottman & Levenson, 1992, 2002). They include:
Criticism: picking out flaws, expressing disappointments, correcting each other, and making negative comments about a spouse’s friends and family
Defensiveness: responding to perceived attacks with counter-attacks
Contempt: dismissive eye rolls, sarcastic comments, and a cutting tone of voice
Stonewalling: shutting down verbally and emotionally
Studying these four patterns of destructive communication is like studying a
trouble-shooting manual for the relationships of early and middle adulthood. Avoid these patterns and nurture their opposing tendencies (such as understanding, empathy, and acceptance), and your relationships will have a much better chance of being positive and fulfilling.
Parenting
One common (although by no means universal) aspect of intimate relationships is the raising of children and having something you identify as “a family” together. This is one of the most powerful routes by which people experience a deepening in their feelings of being connected to others. Certainly, whether a person is ready for it or not, parenting basically forces you to become less self-centred. All of a sudden, there is another being who is utterly dependent on you for its survival and its healthy development for many years.
The Four Horsemen of the (Relationship) Apocalypse. Learning to recognize and change these negative communication patterns can make many relationships better. Source: Recognizing the Four Horsemen of the (Relationship) Apocalypse. Reprinted with permission of the Gottman
Institute at www.gottman.com
The experience of becoming a parent, as with any other huge shift in one’s life, causes a person to reorganize their identity to some degree. Life is not just about
them anymore. And indeed, you would be miserable and feel terrible about yourself if you ignored your child, tending instead to your own completely independent needs.
Of course, making this transition—with the exhaustion, stress, and massive changes that accompany it—is not easy. As a result, research tends to show a rather sad pattern, but one worth examining nonetheless: within a fairly short period of time (usually around two years) of having children, parents typically
report that their marital satisfaction declines (Belsky & Rovine, 1990). Marital satisfaction is usually highest before the birth of the first child, then is reduced
until the children enter school (Cowan & Cowan, 1995; Shapiro et al., 2000), and not uncommonly, remains low until the children actually leave home as
young adults themselves (Glenn, 1990).
A major upside to this pattern of findings, of course, is that older adults are often poised to enjoy a rekindling of their relationship; their best years are still ahead of them, and they can settle into enjoying their relatively free time together. In fact,
the notion of parents suffering in their empty nest once their children leave home is largely a myth. Married older adults are just as likely to report being “very
satisfied” with marriage as newlyweds (Rollins, 1989). Of course, some parents no doubt take a fresh look at their relationship once it’s just the two of them again and discover they no longer have anything in common or don’t even like each other that much. But happily, the general trend is actually the opposite—couples find their relationships flourishing again. So, there can be a lot of things to look forward to as one gets older.
Module 10.4a Quiz:
From Adolescence through Middle Age
Know . . .
1. When one person in a relationship tends to withdraw and “shut down” when discussing difficult issues in the relationship, they are __________.
A. being abusive
B. stonewalling
C. guilt-tripping
D. being contemptuous
2. In Erikson’s theory of psychosocial development, what does generativity refer to?
A. The desire to generate an income
B. The desire to generate knowledge and learning for oneself
C. The desire to have offspring
D. The desire to have a positive impact on the world
3. Research that shows that people are more likely to get divorced if they cohabit before marriage is probably due to
A. self-reporting biases interfering with people accurately depicting the health of their relationships.
B. people in some cultures being punished through social and community sanctions if they live in a cohabiting relationship.
C. biased motivations on the part of the researchers, who asked specific questions that were designed to show what they wanted to find.
D. journal editors having a conservative bias and thus being more likely to publish studies that show “moral” findings, rather than ones that illustrate unconventional or non-traditional values.
Late Adulthood
The pursuit of happiness is a common theme in contemporary society, and certainly we can all relate to the desire to be happy. But how do we go about achieving “happiness” as we age, and are we generally successful?
Happiness and Relationships
This generally positive story about growing older gets even better when adults
begin to transition into the latter decades of life, especially when we consider perhaps the most personal and immediate part of one’s happiness—one’s own emotions. One of the biggest benefits to growing older is that the emotional turmoil of youth, with its dramatic ups and downs (passions, despair, anger, lust, and all the rest), often gives way to a smoother, more emotionally stable, and generally more positive experience. As a result, late adulthood is often a particularly enjoyable time of life. The Buddhist monk Thich Nhat Hanh has described youth as being like a chaotic mountain stream tumbling down the mountainside, whereas old age is when the stream has broadened into a serene river making the final leg of its journey to the ocean.
Developmental psychologists describe a similar type of personal development
through the lens of socioemotional selectivity theory, which describes how older people learn to select more positive and nourishing experiences for themselves. Older people seem better able to attend to positive experiences, and tend to take part in activities that emphasize positive
emotions and sharing meaningful connections with others (Carstensen et al., 1999). The net result of this wiser approach to life is that negative emotions often decline with age, while positive emotions actually increase in frequency (Figure 10.17 ). Simply put, older people are (often) happier (Charles & Carstensen, 2009)! This definitely gives us something to look forward to.
Figure 10.17 Emotion, Memory, and Aging Younger people have superior memory for whether they have seen positive, negative, or neutral pictures compared with older people. However, notice that younger people remember positive and negative pictures equally, whereas older
people are more likely to remember positive pictures (Charles et al., 2003). Source: Data from “At the Intersection of Emotion and Cognition: Aging and the Positivity Effect” by L. L. Carstensen & J. A. Mikels (2005). Current Directions in Psychological Science, 14(3).
Erikson’s theory of psychosocial development describes the final stage, from approximately age 65 onward, as Aging, the challenge of ego integrity versus despair. During this time, the older adult contemplates whether she has lived a full life and achieved major accomplishments, and can now enjoy the support of a lifetime of relationships and social roles. In contrast, if one only looks back on disappointments and failures, this will be a time of great personal struggle against feelings of despair and regret.
The full story of aging has a downside to it as well; it’s not all sunshine and rainbows. Older people experience great challenges: the deaths of their spouse and family members, the loss of close friends and acquaintances, the fading of their physical capabilities, the loss of personal freedoms such as driving or living without assistance, and inevitable health challenges as the body ages. Existentially speaking, older adults also must, sooner or later, face the growing awareness that their time on this earth is drawing to a close. It doesn’t take a lot of imagination to understand why younger people often assume that the elderly are unhappy and depressed as they face the imminent “dying of the light.” Certainly, depression and even suicide are not unknown to the elderly, although contrary to the stereotype of the unhappy, lonely old person, healthy older adults are no more likely to become depressed than are younger people. The reality is that as long as basic emotional and social needs are met, old age is often a very joyous time, again reflecting the greater wisdom with which older adults approach the challenges of their lives, making the best of things, focusing on what they can be grateful for, and letting things go that are negative, as much as
possible (Charles & Carstensen, 2009).
Older adults have had enough experience dealing with the slings and arrows of life that they’ve learned how to cope emotionally, how to see the glass as half full rather than half empty, and how to focus on the positives even as they face the negatives. The active cultivation of positive emotions has been shown to be a key resource that helps people cope with life’s challenges (Cohn et al., 2009; Garland et al., 2010). For example, research at Kwantlen University has shown that many older people respond even to the loss of a beloved spouse by focusing on positive emotions (Tweed & Tweed, 2011); this enhanced positive focus leads to better coping overall, such as less depressed mood, the experience of greater social support, and even the ability to provide more support to others in the community. This flies in the face of earlier theorists who argued that grief needed to be “fully processed” in order for people to recover (Bonanno, 2004), and that experiencing frequent positive emotions while grieving was actually a sign of pathology (Bowlby, 1980)!
In fact, one of the key lessons that life teaches is that many of the challenges a person faces carry their own rewards and hidden benefits. As people age, their suffering and loss end up serving as fertilizer for personal growth. In struggling to deal with the difficulties of life, people often find that they grow in many ways, such as shifting their priorities after realizing what really matters to them, feeling deeply grateful for their close relationships, and feeling deeply motivated to live authentically according to their own personal values and sense of what is right (Tedeschi & Calhoun, 2004). Older people therefore have ample opportunities for personal growth, and it is important to recognize how much the later years of life can be a supremely rich time for people to invest in their own growth, learning, and practice. Even as death approaches, the elderly can experience a deep enriching of the gratitude they feel for being alive (Frias et al., 2011).
The Eventual Decline of Aging
Of course, every story has its ending, and as much as we might like to avoid this topic, we also have to acknowledge that the later years of adulthood are accompanied by a certain amount of decline. The body declines and the mind eventually is not as sharp as it once was. Researchers have examined this in great detail and found that the brain, just like other physical systems, shows structural changes and some functional decline with age. These changes include reduced volume of white and grey matter of the cerebral cortex, as well as of the
memory-processing hippocampus (Allen et al., 2005). The prefrontal cortex and its connections to subcortical regions are also hit hard by aging (Raz, 2000). The reduced frontal lobe volume may explain why older adults sometimes lose their train of thought and why they sometimes say things that they wouldn’t have in the past (e.g., blunt comments, vulgarity). Because it is now common for people to live well into their 80s and beyond, it is increasingly important to understand these declines, which have many implications for how well older adults will be able to function in their everyday lives.
If one lives well and/or is lucky, one can get pretty much to the end of a natural lifespan with very little cognitive decline. However, there is a lot of variability in how well people will age, neurologically speaking. The negative end of the
spectrum is anchored by various neurodegenerative conditions. These are medical conditions of aging characterized by the loss of nerve cells and nervous system functioning, which generally worsen over time. Many older adults struggle with attending to the tasks of everyday life, which may indicate the onset
of dementia, a mild to severe disruption of mental functioning, memory loss, disorientation, and poor judgment and decision making. Approximately 14% of people older than 71 years of age have dementia.
Psych @ The Driver’s Seat
Thanks to technology, the current generation of elderly adults faces issues that previous generations never did. Take driving, for example. Many older adults depend on their cars to shop, maintain a social life, and keep appointments. Research, however, has shown that the cognitive and physical changes in old age may take a toll on driving skill. This decline presents a dilemma for many seniors and their families: How can individuals maintain the independence afforded by driving without endangering themselves and other drivers?
To address this problem, psychologist Karlene Ball developed an intervention called Useful Field of View (UFOV) Speed of Processing
training (Ball & Owsley, 1993). UFOV uses computer-based training exercises to increase the portion of the visual field that adults can quickly
process and respond to. Laboratory studies show that UFOV actually
increases the speed of cognitive processing for older adults (Ball & Owsley, 2000). Records from several U.S. states that have studied the UFOV show that drivers who completed the training were half as likely to have had an accident during the study period.
Nearly 10% of cases of dementia involve the more severe Alzheimer’s disease—a degenerative and terminal condition resulting in severe damage to the entire brain. Alzheimer’s disease rarely appears before age 60, and it usually lasts 7 to 10 years from onset to death (although some people with Alzheimer’s live much longer). Early symptoms include forgetfulness for recent events, poor judgment, and some mood and personality changes. As the disease progresses, people experience severe confusion and memory loss, eventually struggling to recognize even their closest family members. In the most advanced stages of Alzheimer’s disease, affected individuals may fail to recognize themselves and may lose control of basic bodily functions such as bowel and bladder control.
What accounts for such extensive deterioration of cognitive abilities? Alzheimer’s disease involves a buildup of proteins that clump together in the spaces between neurons, interrupting their normal activity. These are often referred to as
plaques. Another type of protein tangles within nerve cells, which severely disrupts their structural integrity and functioning (Figure 10.18 ). These are often referred to as neurofibrillary tangles (or simply as tangles). Many different research groups are currently searching for specific genes that are associated with Alzheimer’s disease. The genetic risk (i.e., the heritability of the disease) is very high for people who develop an early-onset form (age 30–60) of Alzheimer’s
disease (Bertram et al., 2010). In those individuals with later-onset (age 60+) disease, the genetic link is not as consistent.
Figure 10.18 How Alzheimer’s Disease Affects the Brain Advanced Alzheimer’s disease is marked by significant loss of both grey and white matter throughout the brain. The brain of a person with Alzheimer’s disease typically has a large buildup of a protein called beta-amyloid, which kills nerve cells. Also, tau proteins, which maintain the structure of nerve cells, are often found to be defective in the Alzheimer’s brain, resulting in neurofibrillary tangles. Source: Based on information from National Institute on Aging. (2008). Part 2: What happens to the brain in AD. In
Alzheimer’s Disease: Unraveling the mystery. U.S. Department of Health and Human Services. NIH Publication No. 08-3782.
Retrieved from https://www.nia.nih.gov/sites/default/files/alzheimers_disease_unraveling_the_mystery_2.pdf
Alzheimer’s disease illustrates a worst-case scenario of the aging brain. However, even in normally aging brains, structural changes occur that cause a variety of cognitive challenges, which increase as the person gets older.
Working the Scientific Literacy Model
Aging and Cognitive Change
How does the normal aging process affect cognitive abilities such as intelligence, learning, and memory? People commonly believe that a loss of cognitive abilities is an inevitable part of aging, even for those who do not develop dementia or Alzheimer’s disease. However, the reality of aging and cognition is not so simple.
What do we know about different cognitive abilities? There are many different cognitive abilities, including different memory and attentional abilities. One useful distinction is made between cognitive tasks that involve processes such as problem solving, reasoning, processing speed, and mental flexibility; these
tasks are said to involve fluid intelligence. Other tasks tap into crystallized intelligence, which is based on accumulated knowledge and skills (Module 9.2 ), such as recognizing famous people like David Suzuki or Justin Bieber. Although fluid intelligence reaches a peak during young adulthood and then slowly declines, crystallized intelligence remains largely intact into old age.
How can science explain age-related differences in cognitive abilities? Researchers have not yet fully solved the riddle of why some cognitive abilities decline with age. There are many different potential explanations. Neurological studies of brain function suggest two leading possibilities.
The first is that older adults sometimes use ineffective cognitive strategies, leading to lower levels of activation of relevant brain areas. This has been repeatedly found in various studies (e.g.,
Logan et al., 2002; Madden et al., 1996). Interestingly, it may be possible to enhance neural function in older people simply by
reminding them to use effective strategies. For example, Logan and her colleagues (2002) found that, compared to subjects in their 20s, older subjects (in their 70s and 80s) performed worse on a memory task and showed less activity in key frontal lobe areas. However, when given strategies to help them encode the information more deeply, the older adults were able to activate these brain areas to a greater extent, thus improving their memory for the information. This work suggests that a key to helping older adults resist the decline of their cognitive abilities
is to help them learn effective strategies for making better use of their cognitive resources.
A second possible explanation for reduced cognitive abilities in older people is that older brains show more general, non-specific
brain activation for a given task (Cabeza, 2002). They may do so either because they are compensating for deficits in one area by recruiting other areas, or possibly because they are less capable of limiting activation to the appropriate, specialized neural areas. Involving more widely distributed brain areas in a given task would generally result in slower processing speed, which could help to explain some of the cognitive deficits (e.g., fluid intelligence) seen in older adults.
Can we critically evaluate our assumptions about age-related cognitive changes? Although older people show declines on laboratory tests of some cognitive functions, we should guard against the stereotypic assumption that the elderly are somehow less intellectually capable than the rest of us. In most cultures, and for most of history, older people have been widely respected and honoured as wisdom keepers for their communities. Respect for one’s elders is, in fact, the historical norm; it is modern Western society’s tendency to disregard the perspectives of the elderly, assuming that they are out of touch and that their opinions are no longer relevant, that is the aberration.
The wisdom of elderly people is evident not only in their approach to emotional well-being, as we discussed earlier in this module, but also in how they deal with their own cognitive abilities. In everyday life, as opposed to most laboratory tests, the decline in cognitive abilities does not necessarily translate into a decline in practical skills, for at least two important reasons. The first is that while the episodic and working memory systems may not work as well, the procedural and semantic memory systems show a much
slower rate of decline with age (see Figure 10.19 ). Thus, older people’s retention of practical skills and general knowledge about the world remains largely intact for most of their lives.
Figure 10.19 Memory and Aging
Several types of memory systems exist, not all of which are equally affected by age. An older person’s ability to remember events, such as words that appeared on a list (episodic memory), is more likely to decline than his or her memory for facts and concepts (semantic memory).
The second reason the elderly fare better than might be expected from laboratory tests is that they learn to compensate for their reduced raw cognitive power by using their abilities more skillfully. For example, in a chess game, older players play as well as young players, despite the fact that they cannot remember chess positions as well as their young opponents. They compensate for this reduction in working memory during a game by more efficiently searching the chessboard for patterns
(Charness, 1981). Having more experience to draw upon in many domains of life gives older people an advantage because they will be better able to develop strategies that allow them to
process information more efficiently (Salthouse, 1987).
Why is this relevant? In a society that increasingly relegates its elderly to seniors’ residences, largely removing them from their families and the larger community, it is important to remember that older people actually retain their faculties much better than might be expected. This is especially true for older adults who practice specific cognition-enhancing behaviours. What keeps the aging brain sharp? It’s pretty simple really, as researchers at the University of Alberta and others have shown—staying physically active, practising cognitively challenging activities (and they don’t have to be crosswords and brain teaser puzzles; intrinsically enjoyable hobbies work just fine), and remaining socially connected and
active (Small et al., 2012; Stine-Morrow, 2007). In addition, diets low in saturated fats and rich in antioxidants, omega-3 fatty acids, and B vitamins help to maintain cognitive functioning and
neural plasticity (Mattson, 2000; Molteni et al., 2002). As a society, if we provide opportunities and resources for seniors to remain active, socially engaged, and well-nourished, they will be able to enjoy high-quality lives well into old age.
Module 10.4b Quiz: Late Adulthood

Understand . . .
1. Socioemotional selectivity theory describes how older adults
A. are better at socializing in general, because they have a lifetime of practice; thus, they tend to make friends very easily, and this keeps them functioning well.
B. are better at selecting emotions that are socially acceptable based on the current circumstance. This causes them much less stress and is why they are generally happier.
C. have usually invested so much of their lives in a few close relationships that now they have a network of support in those friends who were selected based on their tendency to be socially and emotionally supportive.
D. are better at paying attention to positive things, rather than excessively dwelling on the negatives.

2. Once someone is diagnosed with Alzheimer’s disease, they are likely to
A. experience escalating pain and a reduction of their physical capabilities.
B. exhibit emotional volatility and a tendency towards irrational, violent behaviour.
C. exhibit confused, forgetful behaviour and a general decline in cognitive abilities.
D. experience intense hallucinations, especially involving people who have died.

Apply . . .
3. Which of the following best describes the effects of aging on intelligence?
A. Fluid intelligence tends to decrease, but working memory tends to increase.
B. Fluid intelligence tends to decrease, but crystallized intelligence tends to increase.
C. Crystallized intelligence tends to increase, but the ability to skillfully use one’s abilities decreases.
D. Aging is unrelated to intelligence, except in the case of brain disorders and diseases such as dementia or Alzheimer’s disease.
Module 10.4 Summary

10.4a Know . . . the key terminology concerning adulthood and aging.
Alzheimer’s disease
dementia
menopause
socioemotional selectivity theory

10.4b Know . . . the key areas of growth experienced by emerging adults.
People making the transition from adolescence to adulthood face substantial life challenges that contribute to personal growth in three main areas: relationships (i.e., cultivating true intimacy and trust); new possibilities (i.e., exploring what they really want to do with their lives and choosing a compatible path that reflects their interests); and personal strengths (i.e., the skills and competencies that come from successfully facing challenges).

10.4c Understand . . . age-related disorders such as Alzheimer’s disease.
Alzheimer’s disease is a form of dementia that is characterized by significant decline in memory, cognition, and, eventually, basic bodily functioning. It seems to be caused by two different brain abnormalities—the buildup of proteins that clump together in the spaces between neurons, plus degeneration of a structural protein that forms tangles within nerve cells.

10.4d Understand . . . how cognitive abilities change with age.
Aging adults typically experience a general decline in cognitive abilities, especially those related to fluid intelligence, such as working memory. However, older adults also develop compensatory strategies that enable them to remain highly functional in their daily lives, despite their slow decline in processing capability.

10.4e Apply . . . effective communication principles to the challenge of improving your own relationships.
Apply Activity: In this module, you read about John Gottman’s research into the Four Horsemen of the (Relationship) Apocalypse. Identify which of the four relationship-harming behaviours are most apparent in each of the three descriptions below.
1. Molly and David are arguing—David feels that Molly spends too much of her time talking on the phone with her friends rather than spending time with him. Molly rolls her eyes and says “I didn’t realize you needed to be entertained 24 hours a day.”
2. Nicole is upset that her husband Greg isn’t putting in enough hours at his job to make a good income. When she talks to Greg about this, he becomes distant and ends the discussion.
3. Juan and Maria are having marital problems. When frustrated, Juan often complains about Maria’s mother and about how Maria’s friends are immature. This upsets Maria.

10.4f Analyze . . . the stereotype that old age is a time of unhappiness.
Research shows that older adults do face issues that might lead to unhappiness—for example, health problems, loss of loved ones, and reductions in personal freedom. However, such challenges often lead to growth and a deepened appreciation for life and other people. The result is that many older people become skilled at focusing on the positives of life and pay less attention to the negatives, leading to an increase in life satisfaction, rather than a decrease.
PSYC 106 Online Writing Assignment Questions
1. Does this study reflect basic or applied research? How do you know?
2. What level(s) of analysis (biological, psychological, environmental) are being employed in this study? Explain and identify the specific details of the study that show this.
3. What were the findings of previous research that led the researchers to do this study? What was the research question that they attempted to answer? What was (were) the researchers’ hypothesis(es)?
4. How was this study conducted? Was it an experiment or a non-experiment? How can you tell? What were the variables under investigation in this study? Identify the dependent and independent variables, if applicable.
5. What were the main findings of this study? How do they relate to the original study hypothesis(es)?
6. How do the researchers explain their results? What are some other possible explanations? What are some strengths of this study? Limitations?
7. What are the possible implications of these results in the real world?
Journal of Social and Clinical Psychology, Vol. 37, No. 10, 2018, pp. 751-768
© 2018 Guilford Publications, Inc.
NO MORE FOMO: LIMITING SOCIAL MEDIA DECREASES LONELINESS AND DEPRESSION
MELISSA G. HUNT, RACHEL MARX, COURTNEY LIPSON, AND JORDYN YOUNG University of Pennsylvania
Introduction: Given the breadth of correlational research linking social media use to worse well-being, we undertook an experimental study to investigate the potential causal role that social media plays in this relationship. Method: After a week of baseline monitoring, 143 undergraduates at the University of Pennsylvania were randomly assigned to either limit Facebook, Instagram and Snapchat use to 10 minutes, per platform, per day, or to use social media as usual for three weeks. Results: The limited use group showed significant reductions in loneliness and depression over three weeks compared to the control group. Both groups showed significant decreases in anxiety and fear of missing out over baseline, suggesting a benefit of increased self-monitoring. Discussion: Our findings strongly suggest that limiting social media use to approximately 30 minutes per day may lead to significant improvement in well-being.
Keywords: social media; social networking sites; Facebook; Snapchat; Instagram; well-being; depression; loneliness
Social Networking Sites (SNS) have become a ubiquitous part of the lives of young adults. As of March of 2018, 68% of adults in the United States had a Facebook account, and 75% of these people reported using Facebook on a daily basis. Moreover, 78% of young adults (ages 18–24) used Snapchat, while 71% of young adults used Instagram (Smith & Anderson, 2018). Widespread adoption of social media has prompted a flurry of correlational studies on the relationship between social media use and mental
health. Self-reported Facebook and Instagram usage have been found to correlate positively with symptoms of depression, both directly and indirectly (Donnelly & Kuss, 2016; Lup, Trub, & Rosenthal, 2015; Rosen, Whaling, Rab, Carrier, & Cheever, 2013; Tandoc, Ferrucci, & Duffy, 2015). Higher usage of Facebook has been found to be associated with lower self-esteem cross-sectionally (Kalpidou, Costin, & Morris, 2011) as well as greater loneliness (Song et al., 2014). Higher usage of Instagram is correlated with body image issues (Tiggemann & Slater, 2013).
In a large population-based study, Twenge and colleagues (Twenge, Joiner, Rogers, & Martin, 2017) found that time spent on screen activities was significantly correlated with more depressive symptoms and risk for suicide-related outcomes, although the correlations with SNS use specifically were quite small, and only significant for girls. A major limitation of that study was that the databases used suffered from restricted range in SNS use, with the highest category (almost every day) being endorsed by more than 85% of females in the samples (Daly, 2018). This simply cannot capture differences in use as they occur naturalistically. Checking Facebook for 5 minutes almost every day is surely different than spending hours a day on SNS platforms.
Two studies have used prospective, naturalistic designs. Using experience sampling, Kross and colleagues (2013) found that Facebook use predicts less satisfaction with life over time. In a two-week diary design, Steers, Wickham, & Acitelli (2014) found that the relationship between Facebook use and depressive symptoms was mediated by social comparisons. Indeed, several studies have demonstrated that social comparison and peer envy often play a major role in these findings (Tandoc et al., 2015; Verduyn et al., 2015).
Thus, there is considerable evidence that SNS use is associated with reductions in well-being. However, the vast majority of work done in this domain has been correlational in design, which does not allow for causal inferences. Two studies (Kross et al., 2013 and Steers et al., 2014) used prospective longitudinal designs, but were not experimental. It is quite possible that more depressed or lonely individuals use SNS more in an attempt to connect with others. Similarly, it is possible that individuals with lower self-esteem or poorer self-image are more prone to engage
in social comparison by spending time on SNS sites. Only experimental studies can address the direction of causality definitively.
In our review of the literature, we were able to find only two experimental studies, both of which examined only Facebook use. The first study found that subjects assigned to passively scroll through Facebook (as opposed to those assigned to actively post and comment) subsequently reported lower levels of well-being and more envy, indicating not only that Facebook impacts mental health but also that the way in which we engage with Facebook matters (Verduyn et al., 2015). It is reasonable to think that the longer one spends on social media, the more one will be engaging with it in a passive way (as opposed to actively posting content, commenting, etc.). In the second study, subjects who were randomly assigned to abstain from Facebook for a week demonstrated improved satisfaction with life and affect (Tromholt, 2016). While this study was a considerable improvement methodologically on prior work, the ecological validity of the study is somewhat suspect. First, the intervention lasted only one week. While it is interesting that subjects showed measurable increases in well-being over this short time, it is unclear whether this would have been sustainable. Second, many users have grown so attached to social media that a long-term intervention requiring complete abstention would be unrealistic; limiting SNS use seems more likely to be acceptable and sustainable. Third, this study relied upon self-report to measure compliance with study instructions—there was no objective measure of actual time spent on Facebook. Lastly, both of these studies only explored the effects of Facebook usage. While Facebook is the most widely used SNS among adults, many other sites, especially Snapchat and Instagram, attract large numbers of users and play a major role in these users’ lives; this is most notably true for young adults.
The current study was designed to be a rigorous, ecologically valid, experimental study of the impact on well-being of limiting (but not eliminating) the use of multiple SNS platforms over an extended period of time. We improve upon prior studies in several ways. First, the study is experimental, allowing for causal inferences to be made. Second, we gathered objective data on actual usage, both during a baseline phase (to account for the
effects of self-monitoring) and during the active intervention phase. Third, we included three major SNS platforms (Facebook, Snapchat, and Instagram). Fourth, we limited usage to 10 minutes per platform per day, as this seems far more realistic than asking people to abstain from SNS use completely. Many organizations, student groups, businesses, and so on rely on social media posts to communicate with members and customers about meeting times, events, etc. It is unrealistic to expect young people to forego this information stream entirely. Finally, we measured well-being at multiple time points, including before and after the initial self-monitoring baseline, at multiple time points throughout the intervention, and at one-month follow-up after the intervention formally ended.
METHODS
PARTICIPANTS
A total of 143 subjects (108 women, 35 men) were recruited from a pool of undergraduates at the University of Pennsylvania, and began the study on a rolling basis. Seventy-two subjects participated in the fall semester, and 71 in the spring. The subject pool consisted of students enrolled in psychology courses for which they could participate in studies to earn course credit. Subjects were required to have Facebook, Instagram, and Snapchat accounts, and to own an iPhone.
MEASURES
Subjective Well-Being Survey
To measure well-being, we used a battery consisting of seven validated scales. Given the lack of experimental research on our topic, we decided to use a wide variety of well-being constructs that have been found to correlate with social media usage. The survey also included a consent form and questions regarding demographic information (age, sex, and race). The scales comprising the subjective well-being survey are listed below.
Social Support. The Interpersonal Support and Evaluation List (ISEL; Cohen & Hoberman, 1983) consists of 20 items scored on a 0–3 scale (definitely false to definitely true). We modified item 8 slightly to make it specific to Philadelphia (If I wanted to go on a trip for a day to Center City, I would have a hard time finding someone to go with me). Items pertain to accessibility of social support and include statements such as “When I feel lonely, there are several people I can talk to” and “If I decide one afternoon that I would like to go to a movie that evening, I could easily find someone to go with me.” The ISEL has good construct validity and good internal consistency with α = 0.77 (Cohen, Mermelstein, Kamarck, & Hoberman, 1985).
Fear of Missing Out. The Fear of Missing Out Scale (FoMOs; Przybylski, Murayama, DeHaan, & Gladwell, 2013) is a validated measure of distress related to missing out on social experiences (α = .87). It consists of 10 items scored on a scale of 1 (not at all true of me) to 5 (extremely true of me); items include statements such as “I get anxious when I don’t know what my friends are up to,” “Sometimes, I wonder if I spend too much time keeping up with what is going on,” and “I fear others have more rewarding experiences than me.”
Loneliness. The UCLA Loneliness Scale (revised UCLA Loneliness Scale; Russell, Peplau, & Cutrona, 1980) measures perceived social isolation. The original version was revised to include reverse-scored items and consists of 20 items, scored on a scale of 1 (never) to 4 (often). Sample items include statements such as “No one really knows me well,” “My interests and ideas are not shared by those around me,” and “I feel in tune with the people around me” (reverse scored). The scale has good construct validity and internal consistency with α = 0.94 (Russell et al., 1980).
Anxiety. The Spielberger State-Trait Anxiety Inventory (STAI-S; Spielberger, Gorsuch, & Lushene, 1970) is a widely used measure of anxiety symptoms. The inventory consists of two instruction sets, which measure state (in-the-moment) and trait (general) anxiety. We only used the state anxiety version, which consists of 20 items such as “I feel worried” and “I feel calm” (reverse scored). Subjects can respond on a scale of 1 (not at all) to 4 (extremely so).
Depression. The Beck Depression Inventory (BDI-II; Beck, Steer, & Brown, 1996) is a standard clinical measure of depressive symptoms. It consists of 21 items covering the vegetative, affective, and cognitive symptoms of depression. Respondents can indicate the severity of each symptom on a scale of 0–3 (e.g., for the symptom loss of pleasure, one can respond: “I get as much pleasure as I ever did from the things I enjoy,” “I don’t enjoy things as much as I used to,” “I get very little pleasure from the things I used to enjoy,” or “I can’t get any pleasure from the things I used to enjoy”).
Self-Esteem. The Rosenberg Self-Esteem Scale (RSES; Rosenberg, 1979) assesses how one feels about oneself. It consists of 10 items, scored 0 (strongly disagree) to 3 (strongly agree), with higher scores indicating more positive feelings about oneself. Items include “I feel that I have a number of good qualities,” “I feel I do not have much to be proud of” (reverse scored), and “I take a positive attitude toward myself.”
Autonomy and Self-Acceptance. The Ryff Psychological Well-Being Scale (PWB; Ryff, 1989) operationalizes psychological well-being in 6 dimensions. We selected the dimensions of autonomy and self-acceptance, as these dimensions are most pertinent to the potential effects of social media. We utilized the 42-item version, selecting the 14 items belonging to these two dimensions. Items are scored on a scale of 1 (strongly disagree) to 6 (strongly agree), with higher scores indicating higher levels of well-being. Examples of items from the autonomy subscale include “My decisions are not usually influenced by what everyone else is doing” and “I tend to worry about what other people think of me” (reverse scored). Examples of items from the self-acceptance subscale include “I like most aspects of my personality” and “In many ways, I feel disappointed about my achievements in life” (reverse scored).
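To make the scoring conventions above concrete, here is a minimal sketch (not taken from the article) of how a questionnaire with reverse-keyed items, such as the RSES or the UCLA Loneliness Scale described above, might be scored. The function name, item values, and which items are reverse-keyed are hypothetical.

```python
# Illustrative scoring of a scale with reverse-keyed items (hypothetical items).
def score_scale(responses, reverse_items, min_val=0, max_val=3):
    """Sum item responses after flipping reverse-keyed items."""
    total = 0
    for item_number, response in enumerate(responses, start=1):
        if item_number in reverse_items:
            response = (max_val + min_val) - response  # reverse-score this item
        total += response
    return total

# Example: a 10-item scale scored 0-3 with items 3, 5, and 8 reverse-keyed.
print(score_scale([3, 2, 0, 3, 1, 2, 3, 0, 2, 3], reverse_items={3, 5, 8}))
```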
Objective Measure of Social Media Usage
To track usage of social media, we had subjects email screenshots of their iPhone battery usage at specified increments. iPhones automatically track the total minutes each application is actively
open on the screen. The battery screen allows users to display their usage for the past 24 hours or 7 days. We provided instructions on how to get to this screen with every reminder to send in a screenshot. In the spring semester of the study, subjects were also asked to estimate their daily usage of Facebook, Instagram, and Snapchat before starting the baseline self-monitoring period.
PROCEDURE
FALL SEMESTER
Subjects signed up via the online website for the University psychology subject pool. Upon signing up, they were directed to a secure Qualtrics platform where they saw the consent form, and then completed the baseline survey of mood and well-being measures. Subjects were then sent a welcome email describing the study in more detail. This email informed them that, starting that night, they would be sending in a screenshot of their battery screen displaying the past 24 hours of usage in minutes. They were told they would be doing this each night for the next four weeks. Subjects were told to use social media as usual until they received their next email. If they had signed up for the study but had not yet completed the baseline survey, they were sent a similar email detailing the study, with the added reminder to complete the baseline survey. They were not told to send in screenshots until they completed the baseline survey.
One week after completing the baseline survey, subjects were emailed their second survey. This survey was identical to the baseline survey, but excluded the BDI-II (it was assumed that depression would not fluctuate much on a week-by-week basis). Subjects were asked to send a screenshot of their battery usage, and then received their group assignment. The control group was instructed to continue to use social media as usual, while the intervention group was told to limit their usage on Facebook, Instagram and Snapchat to 10 minutes per platform per day.
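The random assignment to conditions described above could, in principle, be done with a few lines of code. The sketch below is only an illustration under assumed details (a fixed seed and a simple half/half split); the article does not report how the randomization was implemented.

```python
# Illustrative random assignment of subject IDs to the two conditions.
import random

def assign_conditions(subject_ids, seed=0):
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    ids = list(subject_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {sid: ("limited" if i < half else "control") for i, sid in enumerate(ids)}

print(assign_conditions(range(1, 11)))
```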
Subjects continued to send in nightly screenshots for the next 3 weeks. They also continued to take the survey at the end of each week (the surveys at the end of the 2nd and 3rd weeks did not include the BDI-II, but the survey at the end of the 4th week—
i.e. at the end of the intervention phase—did include the BDI-II). At the end of the fourth week, they were sent a wrap-up email after completing their survey and sending in their screenshot. This email explained that they would be receiving course credit shortly and that they were essentially done with the study, with the exception of a one-time follow-up that would be sent out around a month later. This follow-up included the survey (with the BDI-II), and a final screenshot of their usage for the past 24 hours.
SPRING SEMESTER
In the spring semester the procedure was essentially the same as in the fall, albeit with two changes. First, we decided to include the BDI-II in all surveys that were sent out. We regretted not having as much intermediate data on depression levels. Second, instead of having subjects send in screenshots every night, we instructed them to send in screenshots displaying the past 7 days of usage once a week. This was done for two reasons. First, although subjects were encouraged to send in screenshots at around the same time each night, subjects inevitably sent screenshots in earlier or later than the time they had sent in the screenshot the previous night. Having a screenshot sent in an hour early compromises the quality of data in the context of a 24-hour window much more than it does in the context of a 7-day window. Second, because the study lasted for four weeks, nightly screenshots were a significant logistical commitment for both the subjects and researchers. We expected that people would be more likely to send in all screenshots if they were just asked to send in 5 screenshots, as opposed to 29. In addition, it was more manageable for researchers to promptly follow up with subjects who did not send in a screenshot. This reduced error variance from subjects submitting screenshots at different times, or forgetting to send specific screenshots. Unfortunately, the battery usage app resets each time the phone is turned off. Thus, for a few subjects, we had to extrapolate weekly usage from fewer
than seven full days of battery usage. However, given that this is the first study to attempt to measure usage objectively (rather than relying on retrospective self-report), we are confident that our usage data are more reliable and valid than those of previous studies.
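As a rough illustration of the extrapolation just described, a weekly total can be estimated by scaling the observed daily average up to seven days. This is a sketch of the general idea, not the authors' actual procedure, and the numbers are hypothetical.

```python
# Scale an incomplete week of logged minutes up to a 7-day estimate.
def extrapolate_weekly(observed_minutes, observed_days):
    if observed_days <= 0:
        raise ValueError("need at least one observed day")
    return observed_minutes / observed_days * 7

print(extrapolate_weekly(observed_minutes=120, observed_days=5))  # -> 168.0
```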
RESULTS
CORRELATIONAL AND PROSPECTIVE RESULTS PRIOR TO RANDOMIZATION
We found that baseline depression, loneliness, anxiety, perceived social support, self-esteem, and well-being did not actually correlate with baseline social media use in the week following completion of the questionnaires. That is, more distressed individuals did not use social media more prospectively. Baseline Fear of Missing Out, however, did predict more actual social media use prospectively (r = .20, p < .05). Similarly, actual usage during the first week of baseline monitoring was not associated with well-being at the end of the week, controlling for baseline well-being. These results are somewhat at odds with prior research, which often finds an association between estimated, self-reported social media use and measures of well-being prospectively.
In the spring, we asked subjects to give us estimates of their use (essentially retrospective self-report data as is used in most correlational studies of social media use and well-being). Interestingly, estimated use was significantly negatively correlated with perceived social support (r = −.24, p < .05) and marginally negatively correlated with both self-esteem (r = −.23, p = .056) and overall well-being (r = −.21, p = .08). Estimated use and actual use were significantly, but only modestly correlated with each other (r = .31, p = .01). Eliminating three univariate outliers from the data (people who estimated over 900 minutes, or 15 hours of use per week) yielded even more modest results (r = .26, p < .05). That is, people were not very good at estimating their actual use, and retrospective self-report bias appears to explain at least some of the correlational findings.
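The kind of correlational check reported above can be illustrated with a short sketch: correlating estimated and objectively logged use, then re-running the correlation after trimming extreme estimates. This is not the authors' code; the file and column names are hypothetical.

```python
# Correlate retrospective estimates with objectively logged minutes,
# then repeat after dropping extreme estimates (> 900 minutes/week).
import pandas as pd
from scipy import stats

df = pd.read_csv("baseline_week.csv")  # hypothetical file: one row per subject

r, p = stats.pearsonr(df["estimated_minutes"], df["actual_minutes"])
print(f"estimated vs. actual use: r = {r:.2f}, p = {p:.3f}")

trimmed = df[df["estimated_minutes"] <= 900]
r_t, p_t = stats.pearsonr(trimmed["estimated_minutes"], trimmed["actual_minutes"])
print(f"after trimming outliers: r = {r_t:.2f}, p = {p_t:.3f}")
```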
EXPERIMENTAL RESULTS
MANIPULATION CHECK
First, we ensured that subjects in the experimental condition did indeed limit their usage by conducting an independent samples t-test at each week of the intervention. Although not every subject complied perfectly with the established time limit, on average the experimental group used significantly less social media than the control group for week one, t(117) = 5.69, p < .001, week two, t(119) = 6.516, p < .001, and week three, t(113) = 5.78, p < .001, of the intervention. On average, the experimental group also remained within the limit of 210 minutes per week at weeks one (M = 179, SD = 140), two (M = 166, SD = 149), and three (M = 176, SD = 155). See Figure 1.
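A sketch of this per-week manipulation check, using an independent-samples t-test on weekly minutes by condition, is shown below. It is an illustration only; the data frame and column names are assumptions, not the authors' code.

```python
# Independent-samples t-test on weekly minutes of use, by condition, per week.
import pandas as pd
from scipy import stats

usage = pd.read_csv("weekly_usage.csv")  # hypothetical columns: condition, week, minutes

for week in (1, 2, 3):
    wk = usage[usage["week"] == week]
    limited = wk.loc[wk["condition"] == "limited", "minutes"]
    control = wk.loc[wk["condition"] == "control", "minutes"]
    t, p = stats.ttest_ind(limited, control)
    print(f"week {week}: t = {t:.2f}, p = {p:.4f}, "
          f"limited M = {limited.mean():.0f}, control M = {control.mean():.0f}")
```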
FIGURE 1. Total weekly social media use over time by condition.

EFFECT OF CONDITION ON LONELINESS

We then ran an analysis of covariance to determine the effect of condition on loneliness. Controlling for baseline loneliness and actual usage, subjects in the experimental group scored significantly lower on the UCLA Loneliness Scale at the end of the intervention, F(1, 111) = 6.896, p = .01. See Figure 2.
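The ANCOVA just described (week-4 loneliness predicted by condition, controlling for baseline loneliness and actual usage) could be specified as in the sketch below. This is a hedged illustration using statsmodels, not the authors' analysis script; the variable names are hypothetical.

```python
# ANCOVA-style model: week-4 loneliness ~ condition + baseline loneliness + usage.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("outcomes.csv")  # hypothetical file: one row per subject

model = smf.ols(
    "loneliness_wk4 ~ C(condition) + loneliness_baseline + actual_minutes", data=df
).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-test for the condition effect
```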
FIGURE 2. Loneliness at week 4 by condition.

EFFECT OF CONDITION ON DEPRESSION

Next, we ran a univariate analysis of variance to assess the effect of group assignment on depression, controlling for baseline depression, actual usage, and the interaction of baseline depression and condition. There was a significant interaction between condition and baseline depression, F(1, 111) = 5.188, p < .05. To help with interpretation of the interaction effect, we split the sample into high and low baseline depression. Subjects were considered low in baseline depression if they scored below the clinical cut-off of 14 on the BDI, and high if they scored 14 or above. When analyzed this way, there were significant main effects of both baseline depression and condition on depressive symptoms at week 4: for High/Low baseline, F(1, 111) = 44.5, p < .001; for Condition, F(1, 111) = 4.5, p < .05. In sum, individuals high in baseline depression in the control group saw no change in mean BDI score over the course of the study (at baseline, mean BDI = 22.8; at Week 4, mean BDI = 22.83). In contrast, individuals high in baseline depression in the experimental group saw clinically significant declines in depressive symptoms, from a mean of 23 at baseline to a mean of 14.5 at Week 4. Individuals low in baseline depression in the experimental group saw a statistically, but not clinically, significant decline of a single point in mean BDI (from 5.1 at baseline to 4.1 at Week 4). Individuals low in baseline depression in the control group, on the other hand, showed neither statistically nor clinically significant change in depressive symptoms (from 5 at baseline to 4.67 at Week 4). See Figure 3.
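A sketch of the depression analysis is given below: a model with the condition-by-baseline-depression interaction, followed by the high/low split at the BDI cut-off of 14. Again, this is an illustration under assumed variable names, not the authors' code.

```python
# Interaction model, then a follow-up with baseline depression split at BDI >= 14.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("outcomes.csv")  # hypothetical file: one row per subject

interaction = smf.ols(
    "bdi_wk4 ~ C(condition) * bdi_baseline + actual_minutes", data=df
).fit()
print(sm.stats.anova_lm(interaction, typ=2))

df["baseline_high"] = (df["bdi_baseline"] >= 14).astype(int)  # clinical cut-off
followup = smf.ols("bdi_wk4 ~ C(condition) + C(baseline_high)", data=df).fit()
print(sm.stats.anova_lm(followup, typ=2))
```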
EFFECT OF CONDITION ON ALL OTHER MEASURES
After running analyses of covariance on interpersonal support, fear of missing out, anxiety, self-esteem, and psychological well-being, we found no significant differences between the two groups.
We did, however, see a slight, but statistically significant, decline from baseline to the end of the intervention in fear of missing out in both the control, t(46) = 3.278, p < .002, and experimental, t(65) = 3.568, p < .001, groups. Similarly, we observed a slight decline in anxiety in both the control, t(46) = 3.035, p < .004, and experimental, t(65) = 2.477, p < .016, groups.
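The within-group declines reported here correspond to paired (related-samples) t-tests on baseline versus end-of-intervention scores, run separately by condition, as sketched below with hypothetical column names.

```python
# Paired t-tests on baseline vs. week-4 FOMO, run separately for each condition.
import pandas as pd
from scipy import stats

df = pd.read_csv("outcomes.csv")  # hypothetical file: one row per subject
for cond in ("control", "limited"):
    sub = df[df["condition"] == cond]
    t, p = stats.ttest_rel(sub["fomo_baseline"], sub["fomo_wk4"])
    print(f"{cond}: FOMO decline t = {t:.2f}, p = {p:.3f}")
```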
FIGURE 3. Depressive symptoms by condition and baseline BDI.
FOLLOW-UP DATA
Unfortunately, we experienced significant attrition from the study at the final follow-up wave of data collection in both the fall and spring semesters. In total, we were able to collect complete follow-up data (including both objective use and well-being data) from only 30 individuals (21%). We deemed that sample size too small to provide reliable or meaningful results.
DISCUSSION
As hypothesized, experimentally limiting social media usage on a mobile phone to 10 minutes per platform per day for a full three weeks had a significant impact on well-being. Both loneliness and depressive symptoms declined in the experimental group. With respect to depression, the intervention was most impactful for those who started the study with higher levels of depression. Subjects who started out with moderately severe depressive symptoms saw declines down to the mild range by the simple expedient of limiting social media use for three weeks. Even subjects with lower levels of depression saw a statistically significant improvement as the result of cutting down on social media, although a mean decline of one point in BDI score is probably not clinically meaningful. As one subject shared with us, “Not comparing my life to the lives of others had a much stronger impact than I expected, and I felt a lot more positive about myself during those weeks.” Further, “I feel overall that social media is less important and I value it less than I did prior to the study.”
Throughout the four-week intervention, subjects in both groups also showed a significant decline in both fear of missing out and anxiety. We posit that this was a result of the self-monitoring inherent in the study. As one subject in the experimental group said, “I am much more conscious of my usage now. This was definitely a worthwhile study in which to partake.” Another noted, “It was easier than I thought to limit my usage. Afterwards I pretty much stopped using Snapchat because I realized it wasn’t something I missed.” Although there was no statistically significant decline in usage in the control group, even those subjects reported that self-monitoring impacted their awareness of their use. For example, one said, “The amount of time spent on social media is alarming and I will be more conscientious of this in the future.” Another reported, “I was in the control group and I was definitely more conscious that someone was monitoring my usage. I ended up using less and felt happier and like I could focus on school and not (be as) interested in what everyone is up to.”
Interestingly, our subjects did not show any improvement in social support, self-esteem, or psychological well-being. Perhaps these measures are truly unaffected by social media. It is also possible that the intervention was not long enough to produce any changes in these measures. Or, it could be that the time limit we imposed was either too restrictive or not restrictive enough to bring about positive change in these domains.
With the exception of fear of missing out, well-being at baseline did not predict actual social media use prospectively during the first week of self-monitoring. FOMO, however, did predict more usage, as might be expected. Similarly, actual use during the first week did not predict changes in well-being over that week, controlling for baseline. Estimated use, however, was negatively correlated with perceived social support, self-esteem, and overall well-being. That is, more distressed individuals believed that they used social media more than less distressed individuals, despite the fact that there were no differences in objective use. Since ours is the first study that we know of to collect objective use data, this highlights the importance of future research not relying on retrospective self-report or estimated use data.
LIMITATIONS
This study had several important limitations. While we did our best to monitor and limit social media usage, we were only able to do so on mobile phones (this was not an issue for Snapchat, which can only be used through the mobile application). While participants were instructed to only use Facebook, Instagram, and Snapchat through the applications on their phones, they still had the ability to use social media on their computers, use
friends’ phones, access the websites via the internet on their phone, etc. Furthermore, we could not actually turn someone’s social media off if they went over 10 minutes. While most people were compliant with the study instructions, there were individuals in the experimental group who used significantly more social media than they were supposed to.
Moreover, social media does not just include Facebook, Snapchat, and Instagram. While we only measured and manipulated these three platforms, participants could still opt to go on Twitter, Tumblr, Pinterest, Facebook Messenger, dating sites, and so on. Indeed, some subjects noted that they spent a lot more time on dating apps, perhaps as the result of limiting other platforms.
In addition, our sample was a convenience sample of University of Pennsylvania students who had iPhones. We excluded Android phone users only because tracking battery usage data would have required downloading a separate app for those users. However, informal surveys suggested that the vast majority of Penn students were iPhone users, so we are not unduly worried about the sample being biased in this regard. However, future studies should certainly include Android users.
Lastly, we suffered from significant attrition at follow-up, losing 79% of our subjects, largely because we were forced to grant the extra credit for participation prior to the follow-up data collection time point. As a result, there was no incentive for subjects to complete the lengthy battery or take the trouble to submit a screenshot of their usage. This precluded reliable analysis of post-intervention social media habits and well-being. Thus, we were not able to assess maintenance of gains in well-being or to determine whether people reverted to their old use patterns. Future studies should build in incentives for subjects to continue to participate so that this valuable data could be collected.
FUTURE DIRECTIONS
As our study was the first of its nature, there are many opportunities for further investigation. These findings certainly bear replication with a more diverse population. The study should also be replicated with a broader inclusion of social media platforms,
including Twitter, Pinterest, Tumblr, etc. Dating apps in particular might be a fruitful avenue of investigation, especially for individuals in their late teens to late twenties. Future researchers should also incentivize follow-up participation to decrease attrition. This will allow for critical analyses pertaining to habit maintenance.
Furthermore, moderators associated with social media use could be assessed further. These could include number of Facebook friends, Instagram followers, length of Snapchat streaks, and so on. These potential moderators could be analyzed in the context of ability to comply with the restrictions, as well as the success of the intervention.
Lastly, the length of the intervention and the length and nature of the limits imposed on usage could be explored in more detail going forward. It may be that there is an optimal level of use (similar to a dose-response curve) that could be determined. This would allow for a more nuanced understanding of the amount of social media that is adaptive for most users. Alternatively, one could also explore the utility and impact of apps that actually control or limit the use of other apps (such as App Detox, AntiSocial, and Off the Grid). Informally, however, many students shared with us that either they (or their parents) had tried such apps, but that they are so easy for tech-savvy young adults to circumvent that they didn’t really work. A better strategy might be apps that increase self-monitoring and awareness of use, such as In Moment and Space. Empirical investigation of their efficacy and impact might well be warranted.
CONCLUSION
Most of the prior research that has been done on social media and well-being has been correlational in nature. A few prospective and experimental studies have been done, but they have only focused on Facebook. Our study is the first ecologically valid, experimental investigation that examines multiple social media platforms and tracks actual usage objectively. The results from our experiment strongly suggest that limiting social media usage does have a direct and positive impact on subjective well-being over time, especially with respect to decreasing loneliness and depression. That is, ours is the first study to establish a clear causal link between decreasing social media use and improvements in loneliness and depression. It is ironic, but perhaps not surprising, that reducing social media, which promised to help us connect with others, actually helps people feel less lonely and depressed.
REFERENCES
Beck, A. T., Steer, R. A., & Brown, G. K. (1996). Manual for the Beck Depression Inventory–II. San Antonio, TX: Psychological Corporation.
Cohen, S., & Hoberman, H. (1983). Positive events and social supports as buffers of life change stress. Journal of Applied Social Psychology, 13, 99–125. https://doi.org/10.1111/j.1559-1816.1983.tb02325.x
Cohen, S., Mermelstein, R., Kamarck, T., & Hoberman, H. M. (1985). Measuring the functional components of social support. Social Support: Theory, Research and Applications, 24, 73–94. https://doi.org/10.1007/978-94-009-5115-0_5
Daly, M. (2018). Social media use may explain little of the recent rise in depressive symptoms among adolescent girls. Clinical Psychological Science, 6, 295.
Donnelly, E., & Kuss, D. J. (2016). Depression among users of Social Networking Sites (SNSs): The role of SNS addiction and increased usage. Journal of Addiction and Preventative Medicine, 1, 1–6. http://doi.org/10.19104/japm.2016.107
Kalpidou, M., Costin, D., & Morris, J. (2011). The relationship between Facebook and the well-being of undergraduate college students. Cyberpsychology, Behavior, and Social Networking, 14, 183–189. https://doi.org/10.1089/cyber.2010.0061
Kross, E., Verduyn, P., Demiralp, E., Park, J., Lee, D. S., Lin, N., . . . & Ybarra, O. (2013). Facebook use predicts declines in subjective well-being in young adults. PLOS ONE, 8, e69841. https://doi.org/10.1371/journal.pone.0069841
Lup, K., Trub, L., & Rosenthal, L. (2015). Instagram #Instasad?: Exploring associations among Instagram use, depressive symptoms, negative social comparison, and strangers followed. Cyberpsychology, Behavior, and Social Networking, 18, 247–252. https://doi.org/10.1089/cyber.2014.0560
Przybylski, A. K., Murayama, K., DeHaan, C. R., & Gladwell, V. (2013). Motivational, emotional, and behavioral correlates of fear of missing out. Computers in Human Behavior, 29, 1841–1848. https://doi.org/10.1016/j.chb.2013.02.014
Rosen, L. D., Whaling, K., Rab, S., Carrier, L. M., & Cheever, N. A. (2013). Is Facebook creating “iDisorders”? The link between clinical symptoms of psychiatric disorders and technology use, attitudes and anxiety. Computers in Human Behavior, 29, 1243–1254. https://doi.org/10.1016/j.chb.2012.11.012
Rosenberg, M. (1979). Conceiving the self. New York: Basic Books.
Russell, D., Peplau, L. A., & Cutrona, C. E. (1980). The revised UCLA Loneliness Scale: Concurrent and discriminant validity evidence. Journal of Personality and Social Psychology, 39, 472–480. https://doi.org/10.1037/0022-3514.39.3.472
Ryff, C. (1989). Happiness is everything, or is it? Explorations on the meaning of psychological well-being. Journal of Personality and Social Psychology, 57, 1069–1081. https://doi.org/10.1037/0022-3514.57.6.1069
Smith, A., & Anderson, M. (2018). Social media use in 2018. Pew Research Center. http://www.pewinternet.org/2018/03/01/social-media-use-in-2018/
Song, H., Zmyslinski-Seelig, A., Kim, J., Drent, A., Victor, A., Omori, K., & Allen, M. (2014). Does Facebook make you lonely?: A meta-analysis. Computers in Human Behavior, 36, 446–452.
Spielberger, C. D., Gorsuch, R. L., & Lushene. R. E. (1970). Manual for the State-Trait Anxiety Inventory. Palo Alto, CA: Consulting Psychologists Press.
Steers, M. N., Wickham, R. E., & Acitelli, L. K. (2014). Seeing everyone else’s high- light reels: How Facebook usage is linked to depressive symptoms. Jour- nal of Social and Clinical Psychology, 33, 701–731. https://doi.org/10.1521/ jscp.2014.33.8.701
Tandoc, E. C., Jr., Ferrucci, P., & Duffy, M. (2015). Facebook use, envy, and depres- sion among college students: Is Facebooking depressing? Computers in Hu- man Behavior, 43, 139–146. https://doi.org/10.1016/j.chb.2014.10.053
Tiggemann, M., & Slater, A. (2013). NetGirls: The Internet, Facebook, and body image concern in adolescent girls. International Journal of Eating Disorders, 46, 630–633. https://doi.org/10.1002/eat.22141
Tromholt, M. (2016). The Facebook experiment: Quitting Facebook leads to higher levels of well-being. Cyberpsychology, Behavior, and Social Networking, 19, 661– 666. https://doi.org/10.1089/cyber.2016.0259
Twenge, J. M., Joiner, T. E., Rogers, M. L., & Martin, G. N. (2017). Increases in de- pressive symptoms, suicide-related outcomes, and suicide rates among U.S. adolescents after 2010 and links to increased new media screen time. Clinical Psychological Science, 6, 3–17.
Verduyn, P., Lee, D. S., Park, J., Shablack, H., Orvell, A., Bayer, J., . . . & Kross, E. (2015). Passive Facebook usage undermines affective well-being: Experi- mental and longitudinal evidence. Journal of Experimental Psychology: Gen- eral, 144, 480–488. https://doi.org/10.1037/xge0000057
Project Proposal
MGT 210 - Human Resources Management
Table of Contents:
1. Introduction
2. Problem Statement
3. Research Questions
4. References
1. Introduction
Each of us is unique and has individual differences. In the context of diversity, these differences include race, gender, sexual orientation, socioeconomic status, age, physical ability, religious beliefs, and other ideological dimensions. In human resources, diversity begins with acceptance and respect, because an inclusive environment is needed for people to learn to accept and understand one another. Unfortunately, discrimination persists: although the Civil Rights Act of 1964 and the Equal Pay Act of 1963 legally forbid discrimination against women in hiring and in the workplace, many women still experience unfair treatment today. Solving these problems first requires people to carefully explore and think about these differences.
We define work-diversity conflict as conflict that arises from differences among individuals and between individuals and their organizations. Certain differences can lead to gaps, frustration, and conflict. This is an imperative management issue because it is present in almost every work environment; in recognition of the diversity of their members, many companies began developing diversity management programs as early as the late 1980s. In an organization, it is difficult to manage diverse individuals in a uniform way, and the diversity each person brings will directly or indirectly affect all aspects of the company, from production to marketing to company culture. If some employees are treated differently, they will not only develop negative attitudes and emotions in the workplace and become less efficient, but the company's internal working atmosphere and future development will also suffer. Conversely, only when we truly recognize these differences and learn to respect and value everyone can the power of diversity be released, and its positive effects benefit both employees and management.
2. Problem Statement
In this article, we will focus on analyzing the conflict between work and diversity and how it can expose employees to prejudice and disrespect. In addition, we will offer suggestions that may help companies reduce such conflicts and alleviate their negative effects, so that everyone feels valued and enjoys equal opportunity, respect, and a comfortable working environment.
3. Research Questions
Q1. What is the influence of racial diversity on companies?
The statistics below show why this question matters: diverse workforces consistently outperform less diverse ones. As the World Economic Forum states, 'the business case for diversity in the workplace is now overwhelming':
· Businesses with diverse management teams make 19% more revenue.
· 43% of businesses with diverse boards saw increased profits.
· Diverse companies are 70% more likely to capture new markets.
· Diverse teams make better decisions 87% of the time compared to individuals.
· 85% of CEOs with diverse and inclusive workforces said they noticed increased profits.
Q2. What are the causes of interethnic conflict in the workplace?
Census data show that the U.S. workforce is becoming more diverse, and experts predict this trend will only strengthen as the U.S. population changes. For many businesses this will be an advantage, because a more diverse workplace brings more diverse experience and a wider knowledge base. Conflicts may also arise, however, so it is wise to understand the causes of interethnic conflict in the workplace before it happens in order to deal with it effectively.
Discrimination and abuse can manifest in several ways, and the climate in an office matters a great deal in preventing interethnic conflict. Before employees begin to discriminate against one another over ethnicity, they may be abusive to each other in other ways. An employer must take a zero-tolerance approach to all abuse before it escalates. This is especially true for small businesses, which have fewer employees and must therefore work harder to foster a healthy and inclusive environment.
Oftentimes, workplace conflict continues and escalates because there is no mechanism in place for addressing or resolving it. If a business establishes procedures for addressing grievances early on, it can do much to reduce its level of conflict. This is particularly important in matters of race and ethnicity, where feelings are stronger and people are more sensitive to these aspects of diversity. By dealing with issues before they become unmanageable, a company can go a long way toward preventing serious problems.
Q3. How does ethnic conflict affect international peace and security?
Ethnic conflict is one of the major threats to international peace and security. Conflicts in the Balkans, Rwanda, Chechnya, Iraq, Indonesia, Sri Lanka, Myanmar, India, and Darfur, as well as in Israel, the West Bank, and the Gaza Strip, are among the most notorious and deadliest examples from the late 20th and early 21st centuries. The destabilization of provinces, states, and, in some cases, even whole regions is a common consequence of ethnic violence. Ethnic conflicts are often accompanied by gross human rights violations, such as genocide and crimes against humanity, and by economic decline, state failure, environmental problems, and refugee flows. Violent ethnic conflict leads to tremendous human suffering.
In several scholarly articles, Michael Edward Brown provided a useful approach to understanding the causes of ethnic conflict. In those articles, he distinguished between underlying causes and proximate causes. Underlying causes include structural factors, political factors, economic and social factors, and cultural and perceptual factors. Proximate causes embrace four levels of conflict triggers: internal mass-level factors (what Brown calls “bad domestic problems”), external mass-level factors (“bad neighborhoods”), external elite-level factors (“bad neighbors”), and internal elite-level factors (“bad leaders”). According to Brown, both underlying and proximate causes have to be present for ethnic conflict to evolve. This section first summarizes what Brown described as the “four main clusters of factors that make some places more predisposed to violence than others”—the underlying causes—and then presents the four catalysts, or triggers, that Brown identified as proximate causes.
Q4. How does racial conflict affect the physical and mental health of employees and their work?
Racial conflict at work can manifest itself in many ways in employees' lives. Of particular concern are employees' mental health problems at work and the physical health issues those problems can cause. To a certain extent, employees who experience racial discrimination and other conflicts at work may suffer more anxiety, fatigue, and even depression. These effects may seem to influence only mood and work efficiency in the short term, but in reality well-being is affected over the long term, because workplace anxiety can lead to other serious issues such as insomnia. These factors, in turn, may degrade the quality of employees' work. In essence, racial discrimination and other conflicts over race and ethnicity at work ultimately affect the quality of the individual's work and the quality of the organization as a whole.
References
Cronin, N. (2020). Racial diversity in the workplace: Boosting representation in leadership. Guider. https://www.guider-ai.com/blog/racial-diversity-in-the-workplace
Definition for diversity. (n.d.). Queensborough Community College. https://www.qcc.cuny.edu/diversity/definition.html
Reader, C. (2010). What are the causes of interethnic conflicts at the workplace? Chron. https://smallbusiness.chron.com/causes-interethnic-conflicts-workplace-18656.html
Reuter, T. K. (2015). Ethnic conflict. Britannica; 21st Century Political Science: A Reference Handbook (2011). https://www.britannica.com/topic/ethnic-conflict
