Chapter 7: Thinking and Intelligence

Introduction

Why is it so difficult to break habits—like reaching for your ringing phone even when you shouldn’t, such as when you’re driving? How does a person who has never seen or touched snow in real life develop an understanding of the concept of snow? How do young children acquire the ability to learn language with no formal instruction? Psychologists who study thinking explore questions like these.

Cognitive psychologists also study intelligence. What is intelligence, and how does it vary from person to person? Are “street smarts” a kind of intelligence, and if so, how do they relate to other types of intelligence? What does an IQ test really measure? These questions and more will be explored in this chapter as you study thinking and intelligence.

In other chapters, we discussed the cognitive processes of perception, learning, and memory. In this chapter, we will focus on high-level cognitive processes. As a part of this discussion, we will consider thinking and briefly explore the development and use of language. We will also discuss problem solving and creativity before ending with a discussion of how intelligence is measured and how our biology and environments interact to affect intelligence. After finishing this chapter, you will have a greater appreciation of the higher-level cognitive processes that contribute to our distinctiveness as a species.

What Is Cognition?

Learning Objectives

By the end of this section, you will be able to:

  • Describe cognition
  • Distinguish concepts and prototypes
  • Explain the difference between natural and artificial concepts

Imagine all of your thoughts as if they were physical entities, swirling rapidly inside your mind. How is it possible that the brain is able to move from one thought to the next in an organized, orderly fashion? The brain is endlessly perceiving, processing, planning, organizing, and remembering—it is always active. Yet, you don’t notice most of your brain’s activity as you move throughout your daily routine. This is only one facet of the complex processes involved in cognition. Simply put, cognition is thinking, and it encompasses the processes associated with perception, knowledge, problem solving, judgment, language, and memory. Scientists who study cognition are searching for ways to understand how we integrate, organize, and utilize our conscious cognitive experiences without being aware of all of the unconscious work that our brains are doing [for example, Kahneman, 2011].

COGNITION

Upon waking each morning, you begin thinking—contemplating the tasks that you must complete that day. In what order should you run your errands? Should you go to the bank, the cleaners, or the grocery store first? Can you get these things done before you head to class or will they need to wait until school is done? These thoughts are one example of cognition at work. Exceptionally complex, cognition is an essential feature of human consciousness, yet not all aspects of cognition are consciously experienced.

Cognitive psychology is the field of psychology dedicated to examining how people think. It attempts to explain how and why we think the way we do by studying the interactions among human thinking, emotion, creativity, language, and problem solving, in addition to other cognitive processes. Cognitive psychologists strive to determine and measure different types of intelligence, why some people are better at problem solving than others, and how emotional intelligence affects success in the workplace, among countless other topics. They also sometimes focus on how we organize thoughts and information gathered from our environments into meaningful categories of thought, which will be discussed later.

CONCEPTS AND PROTOTYPES

The human nervous system is capable of handling endless streams of information. The senses serve as the interface between the mind and the external environment, receiving stimuli and translating them into nerve impulses that are transmitted to the brain. The brain then processes this information and uses the relevant pieces to create thoughts, which can then be expressed through language or stored in memory for future use. To make this process more complex, the brain does not gather information from external environments only. When thoughts are formed, the brain also pulls information from emotions and memories [Figure 7.2]. Emotion and memory are powerful influences on both our thoughts and behaviors.

Figure 7.2 Sensations and information are received by our brains, filtered through emotions and memories, and processed to become thoughts.

In order to organize this staggering amount of information, the brain has developed a file cabinet of sorts in the mind. The different files stored in the file cabinet are called concepts. Concepts are categories or groupings of linguistic information, images, ideas, or memories, such as life experiences. Concepts are, in many ways, big ideas that are generated by observing details, and categorizing and combining these details into cognitive structures. You use concepts to see the relationships among the different elements of your experiences and to keep the information in your mind organized and accessible.

Concepts are informed by our semantic memory [you will learn more about semantic memory in a later chapter] and are present in every aspect of our lives; however, one of the easiest places to notice concepts is inside a classroom, where they are discussed explicitly. When you study United States history, for example, you learn about more than just individual events that have happened in America’s past. You absorb a large quantity of information by listening to and participating in discussions, examining maps, and reading first-hand accounts of people’s lives. Your brain analyzes these details and develops an overall understanding of American history. In the process, your brain gathers details that inform and refine your understanding of related concepts like democracy, power, and freedom.

Concepts can be complex and abstract, like justice, or more concrete, like types of birds. In psychology, for example, Piaget’s stages of development are abstract concepts. Some concepts, like tolerance, are agreed upon by many people, because they have been used in various ways over many years. Other concepts, like the characteristics of your ideal friend or your family’s birthday traditions, are personal and individualized. In this way, concepts touch every aspect of our lives, from our many daily routines to the guiding principles behind the way governments function.

Another technique used by your brain to organize information is the identification of prototypes for the concepts you have developed. A prototype is the best example or representation of a concept. For example, for the category of civil disobedience, your prototype could be Rosa Parks. Her peaceful resistance to segregation on a city bus in Montgomery, Alabama, is a recognizable example of civil disobedience.

SCHEMATA

A schema is a mental construct consisting of a cluster or collection of related concepts [Bartlett, 1932]. There are many different types of schemata, and they all have one thing in common: schemata are a method of organizing information that allows the brain to work more efficiently. When a schema is activated, the brain makes immediate assumptions about the person or object being observed.

There are several types of schemata. A role schema makes assumptions about how individuals in certain roles will behave [Callero, 1994]. For example, imagine you meet someone who introduces himself as a firefighter. When this happens, your brain automatically activates the “firefighter schema” and begins making assumptions that this person is brave, selfless, and community-oriented. Despite not knowing this person, you have already unknowingly made judgments about him. Schemata also help you fill in gaps in the information you receive from the world around you. While schemata allow for more efficient information processing, there can be problems with schemata, regardless of whether they are accurate: perhaps this particular firefighter is not brave at all; he just works as a firefighter to pay the bills while studying to become a children’s librarian.

An event schema, also known as a cognitive script, is a set of behaviors that can feel like a routine. Think about what you do when you walk into an elevator [Figure 7.5]. First, the doors open and you wait to let exiting passengers leave the elevator car. Then, you step into the elevator and turn around to face the doors, looking for the correct button to push. You never face the back of the elevator, do you? And when you’re riding in a crowded elevator and you can’t face the front, it feels uncomfortable, doesn’t it? Interestingly, event schemata can vary widely among different cultures and countries. For example, while it is quite common for people to greet one another with a handshake in the United States, in Tibet, you greet someone by sticking your tongue out at them, and in Belize, you bump fists [Cairns Regional Council, n.d.].

Figure 7.5 What event schema do you perform when riding in an elevator? [credit: “Gideon”/Flickr]

Language

Learning Objectives

By the end of this section, you will be able to:

  • Define language and demonstrate familiarity with the components of language
  • Understand how the use of language develops
  • Explain the relationship between language and thinking

Language is a communication system that involves using words and systematic rules to organize those words to transmit information from one individual to another. While language is a form of communication, not all communication is language. Many species communicate with one another through their postures, movements, odors, or vocalizations. This communication is crucial for species that need to interact and develop social relationships with their conspecifics. However, many people have asserted that it is language that makes humans unique among all of the animal species [Corballis & Suddendorf, 2007; Tomasello & Rakoczy, 2003]. This section will focus on what distinguishes language as a special form of communication, how the use of language develops, and how language affects the way we think.

COMPONENTS OF LANGUAGE

Language, be it spoken, signed, or written, has specific components: a lexicon and grammar. Lexicon refers to the words of a given language. Thus, lexicon is a language’s vocabulary. Grammar refers to the set of rules that are used to convey meaning through the use of the lexicon [Fernández & Cairns, 2011]. For instance, English grammar dictates that most verbs receive an “-ed” at the end to indicate past tense.

Words are formed by combining the various phonemes that make up the language. A phoneme [e.g., the sounds “ah” vs. “eh”] is a basic sound unit of a given language, and different languages have different sets of phonemes. Phonemes are combined to form morphemes, which are the smallest units of language that convey some type of meaning [e.g., “I” is both a phoneme and a morpheme]. We use semantics and syntax to construct language. Semantics and syntax are part of a language’s grammar. Semantics refers to the process by which we derive meaning from morphemes and words. Syntax refers to the way words are organized into sentences [Chomsky, 1965; Fernández & Cairns, 2011].

We apply the rules of grammar to organize the lexicon in novel and creative ways, which allow us to communicate information about both concrete and abstract concepts. We can talk about our immediate and observable surroundings as well as the surface of unseen planets. We can share our innermost thoughts, our plans for the future, and debate the value of a college education. We can provide detailed instructions for cooking a meal, fixing a car, or building a fire. The flexibility that language provides to relay vastly different types of information is a property that makes language so distinct as a mode of communication among humans.

LANGUAGE DEVELOPMENT

Given the remarkable complexity of a language, one might expect that mastering a language would be an especially arduous task; indeed, for those of us trying to learn a second language as adults, this might seem to be true. However, young children master language quickly and with relative ease. B. F. Skinner [1957] proposed that language is learned through reinforcement. Noam Chomsky [1965] criticized this behaviorist approach, asserting instead that the mechanisms underlying language acquisition are biologically determined. The use of language develops in the absence of formal instruction and appears to follow a very similar pattern in children from vastly different cultures and backgrounds. It would seem, therefore, that we are born with a biological predisposition to acquire a language [Chomsky, 1965; Fernández & Cairns, 2011]. Moreover, it appears that there is a critical period for language acquisition, such that this proficiency at acquiring language is maximal early in life; generally, as people age, the ease with which they acquire and master new languages diminishes [Johnson & Newport, 1989; Lenneberg, 1967; Singleton, 1995].

Children begin to learn about language from a very early age [Table 7.1]. In fact, it appears that this is occurring even before we are born. Newborns show preference for their mother’s voice and appear to be able to discriminate between the language spoken by their mother and other languages. Babies are also attuned to the languages being used around them and show preferences for videos of faces that are moving in synchrony with the audio of spoken language versus videos that do not synchronize with the audio [Blossom & Morgan, 2006; Pickens, 1994; Spelke & Cortelyou, 1981].

Table 7.1 Stages of Language and Communication Development

Stage | Age          | Developmental Language and Communication
------|--------------|-----------------------------------------
1     | 0–3 months   | Reflexive communication
2     | 3–8 months   | Reflexive communication; interest in others
3     | 8–13 months  | Intentional communication; sociability
4     | 12–18 months | First words
5     | 18–24 months | Simple sentences of two words
6     | 2–3 years    | Sentences of three or more words
7     | 3–5 years    | Complex sentences; has conversations

You may recall that each language has its own set of phonemes that are used to generate morphemes, words, and so on. Babies can discriminate among the sounds that make up a language [for example, they can tell the difference between the “s” in vision and the “ss” in fission]; early on, they can differentiate between the sounds of all human languages, even those that do not occur in the languages that are used in their environments. However, by the time that they are about 1 year old, they can only discriminate among those phonemes that are used in the language or languages in their environments [Jensen, 2011; Werker & Lalonde, 1988; Werker & Tees, 1984].

After the first few months of life, babies enter what is known as the babbling stage, during which time they tend to produce single syllables that are repeated over and over. As time passes, more variations appear in the syllables that they produce. During this time, it is unlikely that the babies are trying to communicate; they are just as likely to babble when they are alone as when they are with their caregivers [Fernández & Cairns, 2011]. Interestingly, babies who are raised in environments in which sign language is used will also begin to show babbling in the gestures of their hands during this stage [Petitto, Holowka, Sergio, Levy, & Ostry, 2004].

Generally, a child’s first word is uttered sometime between the ages of 1 year and 18 months, and for the next few months, the child will remain in the “one word” stage of language development. During this time, children know a number of words, but they only produce one-word utterances. The child’s early vocabulary is limited to familiar objects or events, often nouns. Although children in this stage only make one-word utterances, these words often carry larger meaning [Fernández & Cairns, 2011]. So, for example, a child saying “cookie” could be identifying a cookie or asking for a cookie.

As a child’s lexicon grows, she begins to utter simple sentences and to acquire new vocabulary at a very rapid pace. In addition, children begin to demonstrate a clear understanding of the specific rules that apply to their language[s]. Even the mistakes that children sometimes make provide evidence of just how much they understand about those rules. This is sometimes seen in the form of overgeneralization. In this context, overgeneralization refers to an extension of a language rule to an exception to the rule. For example, in English, it is usually the case that an “s” is added to the end of a word to indicate plurality. For example, we speak of one dog versus two dogs. Young children will overgeneralize this rule to cases that are exceptions to the “add an s to the end of the word” rule and say things like “those two gooses” or “three mouses.” Clearly, the rules of the language are understood, even if the exceptions to the rules are still being learned [Moskowitz, 1978].
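Overgeneralization can be thought of as applying a productive rule everywhere, including to the memorized exceptions the child has not yet learned. The following toy Python sketch illustrates this idea; the word lists and function names are illustrative assumptions, not part of the chapter, and no claim is made that children literally compute this way.

```python
# Toy illustration of overgeneralization: a child who has induced the
# regular "add -s" plural rule but has not yet learned the exceptions.
IRREGULAR_PLURALS = {"goose": "geese", "mouse": "mice"}  # adult forms

def overgeneralized_plural(noun):
    """Apply the regular rule everywhere, as a young child might."""
    return noun + "s"

def adult_plural(noun):
    """Apply the regular rule, but respect memorized exceptions."""
    return IRREGULAR_PLURALS.get(noun, noun + "s")

for noun in ["dog", "goose", "mouse"]:
    print(noun, "->", overgeneralized_plural(noun), "vs", adult_plural(noun))
```

Note that the child's "error" ("gooses," "mouses") is systematic, which is exactly why it counts as evidence that the rule itself has been learned.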

LANGUAGE AND THOUGHT

When we speak one language, we agree that words are representations of ideas, people, places, and events. The given language that children learn is connected to their culture and surroundings. But can words themselves shape the way we think about things? Psychologists have long investigated the question of whether language shapes thoughts and actions, or whether our thoughts and beliefs shape our language. Two researchers, Edward Sapir and Benjamin Lee Whorf, began this investigation in the 1940s. They wanted to understand how the language habits of a community encourage members of that community to interpret language in a particular manner [Sapir, 1941/1964]. Sapir and Whorf proposed that language determines thought, suggesting, for example, that a person whose community language did not have past-tense verbs would be challenged to think about the past [Whorf, 1956]. Researchers have since identified this view as too absolute, pointing out a lack of empiricism behind what Sapir and Whorf proposed [Abler, 2013; Boroditsky, 2011; van Troyer, 1994]. Today, psychologists continue to study and debate the relationship between language and thought.

Measures of Intelligence

Learning Objectives

By the end of this section, you will be able to:

  • Explain how intelligence tests are developed
  • Describe the history of the use of IQ tests
  • Describe the purposes and benefits of intelligence testing

While you’re likely familiar with the term “IQ” and associate it with the idea of intelligence, what does IQ really mean? IQ stands for intelligence quotient and describes a score earned on a test designed to measure intelligence. You’ve already learned that there are many ways psychologists describe intelligence [or more aptly, intelligences]. Similarly, IQ tests—the tools designed to measure intelligence—have been the subject of debate throughout their development and use.

When might an IQ test be used? What do we learn from the results, and how might people use this information? IQ tests are expensive to administer and must be given by a licensed psychologist. Intelligence testing has been considered both a bane and a boon for education and social policy. In this section, we will explore what intelligence tests measure, how they are scored, and how they were developed.

MEASURING INTELLIGENCE

It seems that the human understanding of intelligence is somewhat limited when we focus on traditional or academic-type intelligence. How then, can intelligence be measured? And when we measure intelligence, how do we ensure that we capture what we’re really trying to measure [in other words, that IQ tests function as valid measures of intelligence]? In the following paragraphs, we will explore how intelligence tests were developed and the history of their use.

The IQ test has been synonymous with intelligence for over a century. In the late 1800s, Sir Francis Galton developed the first broad test of intelligence [Flanagan & Kaufman, 2004]. Although he was not a psychologist, his contributions to the concepts of intelligence testing are still felt today [Gordon, 1995]. Reliable intelligence testing [you may recall from earlier chapters that reliability refers to a test’s ability to produce consistent results] began in earnest during the early 1900s with a researcher named Alfred Binet [Figure 7.13]. Binet was asked by the French government to develop an intelligence test to use on children to determine which ones might have difficulty in school; it included many verbally based tasks. American researchers soon realized the value of such testing. Lewis Terman, a Stanford professor, modified Binet’s work by standardizing the administration of the test and tested thousands of different-aged children to establish an average score for each age. As a result, the test was normed and standardized, which means that the test was administered consistently to a large enough representative sample of the population that the range of scores resulted in a bell curve [bell curves will be discussed later]. Standardization means that the manner of administration, scoring, and interpretation of results is consistent. Norming involves giving a test to a large population so data can be collected comparing groups, such as age groups. The resulting data provide norms, or referential scores, by which to interpret future scores. Norms are not expectations of what a given group should know but a demonstration of what that group does know. Norming and standardizing the test ensures that new scores are reliable. This new version of the test was called the Stanford-Binet Intelligence Scale [Terman, 1916]. Remarkably, an updated version of this test is still widely used today.
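The norming step described above amounts to collecting scores from each comparison group and recording what that group actually does score. A minimal Python sketch of the idea follows; the scores and age groups are invented for illustration, and real norming uses large representative samples rather than a handful of values.

```python
from statistics import mean, stdev

# Hypothetical raw test scores grouped by age. These numbers are
# illustrative only -- real norming samples are far larger.
scores_by_age = {
    8:  [21, 25, 19, 24, 23, 22],
    10: [29, 31, 27, 33, 30, 28],
    12: [36, 40, 35, 38, 41, 37],
}

# A norm is a description of what each group does score, on average --
# not a statement of what the group "should" score.
norms = {age: (mean(s), stdev(s)) for age, s in scores_by_age.items()}

for age, (m, sd) in norms.items():
    print(f"age {age}: mean = {m:.1f}, sd = {sd:.1f}")
```

A future test-taker's raw score can then be interpreted against the norm for their own age group, which is exactly how Terman's age-based averages were meant to work.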

Figure 7.13 [a] French psychologist Alfred Binet helped to develop intelligence testing. [b] This page is from a 1908 version of the Binet-Simon Intelligence Scale. Children being tested were asked which face, of each pair, was prettier.

In 1939, David Wechsler, a psychologist who spent part of his career working with World War I veterans, developed a new IQ test in the United States. Wechsler combined several subtests from other intelligence tests used between 1880 and World War I. These subtests tapped into a variety of verbal and nonverbal skills, because Wechsler believed that intelligence encompassed “the global capacity of a person to act purposefully, to think rationally, and to deal effectively with his environment” [Wechsler, 1958, p. 7]. This combination of subtests became one of the most extensively used intelligence tests in the history of psychology. Today, there are three intelligence tests credited to Wechsler, the Wechsler Adult Intelligence Scale, fourth edition [WAIS-IV], the Wechsler Intelligence Scale for Children [WISC-V], and the Wechsler Preschool and Primary Scale of Intelligence, fourth edition [WPPSI-IV] [Wechsler, 2012]. These tests are used widely in schools and communities throughout the United States, and they are periodically normed and standardized as a means of recalibration. The periodic recalibrations have led to an interesting observation known as the Flynn effect. Named after James Flynn, who was among the first to describe this trend, the Flynn effect refers to the observation that each generation has a significantly higher IQ than the last. Flynn himself argues, however, that increased IQ scores do not necessarily mean that younger generations are more intelligent per se [Flynn, Shaughnessy, & Fulgham, 2012]. As a part of the recalibration process, the WISC-V was given to thousands of children across the country, and children taking the test today are compared with their same-age peers.

THE BELL CURVE

The results of intelligence tests follow the bell curve, a graph in the general shape of a bell. When the bell curve is used in psychological testing, the graph demonstrates a normal distribution of a trait, in this case, intelligence, in the human population. Many human traits naturally follow the bell curve. For example, if you lined up all your female schoolmates according to height, it is likely that a large cluster of them would be the average height for an American woman: 5’4”–5’6”. This cluster would fall in the center of the bell curve, representing the average height for American women [Figure 7.14]. There would be fewer women who stand closer to 4’11”. The same would be true for women of above-average height: those who stand closer to 5’11”. The trick to finding a bell curve in nature is to use a large sample size. Without a large sample size, it is less likely that the bell curve will represent the wider population. A representative sample is a subset of the population that accurately represents the general population. If, for example, you measured the height of the women in your classroom only, you might not actually have a representative sample. Perhaps the women’s basketball team wanted to take this course together, and they are all in your class. Because basketball players tend to be taller than average, the women in your class may not be a good representative sample of the population of American women. But if your sample included all the women at your school, it is likely that their heights would form a natural bell curve.
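The sampling point above can be made concrete with a small simulation. In the Python sketch below, the population mean of 65 inches (about 5'5") and the standard deviation of 2.5 inches are illustrative assumptions, not measured data; the "basketball team" is simply a biased subset drawn from the tall end of the distribution.

```python
import random
from statistics import mean

random.seed(42)  # fixed seed so the simulation is reproducible

# Simulate a population of women's heights (inches) under an assumed
# normal distribution: mean 65, standard deviation 2.5.
population = [random.gauss(65, 2.5) for _ in range(100_000)]

# A biased sample: twelve women drawn only from the tall end of the
# distribution, standing in for the basketball-team example.
team = [h for h in population if h > 69][:12]

# A large random sample, by contrast, recovers the population's center.
random_sample = random.sample(population, 5000)

print(f"population mean:        {mean(population):.1f} in")
print(f"basketball-team mean:   {mean(team):.1f} in")
print(f"large random sample:    {mean(random_sample):.1f} in")
```

The biased sample's mean sits well above the population average, while the large random sample lands close to it, which is the chapter's point about needing a representative sample before a bell curve emerges.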

Figure 7.14 Are you of below-average, average, or above-average height?

The same principles apply to intelligence test scores. Individuals earn a score called an intelligence quotient [IQ]. Over the years, different types of IQ tests have evolved, but the way scores are interpreted remains the same. The average IQ score on an IQ test is 100. Standard deviations describe how data are dispersed in a population and give context to large data sets. The bell curve uses the standard deviation to show how all scores are dispersed from the average score [Figure 7.15]. In modern IQ testing, one standard deviation is 15 points. So a score of 85 would be described as “one standard deviation below the mean.” How would you describe a score of 115 and a score of 70? Any IQ score that falls within one standard deviation above and below the mean [between 85 and 115] is considered average, and 68% of the population has IQ scores in this range. An IQ score of 130 or above is considered a superior level.

Figure 7.15 The majority of people have an IQ score between 85 and 115.

Only 2.2% of the population has an IQ score below 70 [American Psychological Association [APA], 2013]. A score of 70 or below indicates significant cognitive delays, major deficits in adaptive functioning, and difficulty meeting “community standards of personal independence and social responsibility” when compared to same-aged peers [APA, 2013, p. 37]. An individual in this IQ range would be considered to have an intellectual disability and exhibit deficits in intellectual functioning and adaptive behavior [American Association on Intellectual and Developmental Disabilities, 2013]. Formerly known as mental retardation, the accepted term now is intellectual disability, and it has four subtypes: mild, moderate, severe, and profound [Table 7.5]. The Diagnostic and Statistical Manual of Mental Disorders lists criteria for each subgroup [APA, 2013].
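The 2.2% figure can be sanity-checked against an idealized normal model: a score of 70 sits two standard deviations below the mean, and the area under the bell curve below that point is about 2.3%, close to the cited value.

```python
from statistics import NormalDist

# Under an idealized normal model (mean 100, SD 15), the share of
# scores at or below 70 -- two standard deviations below the mean --
# is close to the 2.2% figure cited from the APA.
share_below_70 = NormalDist(100, 15).cdf(70)
print(f"share of scores at or below 70: {share_below_70:.1%}")
```

The small gap between the model's 2.3% and the reported 2.2% reflects the fact that real score distributions only approximate a perfect bell curve.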

Table 7.5 Characteristics of Cognitive Disorders

Intellectual Disability Subtype | Percentage of Intellectually Disabled Population | Description
--------------------------------|--------------------------------------------------|------------
Mild     | 85% | 3rd- to 6th-grade skill level in reading, writing, and math; may be employed and live independently
Moderate | 10% | Basic reading and writing skills; functional self-care skills; requires some oversight
Severe   | 5%  | Functional self-care skills; requires oversight of daily environment and activities
Profound |     |