Language and Cognition

Asked: Jan 3rd, 2019

Question description

“He never learned to speak more than a few words, but he developed some sensitivity to sounds and mastered table manners and polite comportment.” (Douthwaite, 2002, p. 21)

Here, Douthwaite describes historical accounts of a feral child discovered in Germany and taken in to live out his life under the care of "civilized" keepers. Psychologists and neurologists have long devoted attention to cases of "wild children," those who begin maturation outside of human society, with little or no human contact. Cases involving such children inform understanding of the cognitive processes inherent to language development. Consider how the effects of environmental deprivation compare to the effects of deafness on the development of language. Another influence on language production and comprehension is neurological disruption. For example, strokes—brain damage due to blockage of blood supply or hemorrhage—have helped researchers identify important language sites in the brain and their functional roles.

For this Discussion, consider influences of environmental deprivation, deafness, and neurological disruption on language acquisition, production, and comprehension.

With these thoughts in mind:

Post an explanation of how environmental deprivation, deafness, and neurological disruption (e.g., stroke or brain injury) might influence language acquisition, production, or comprehension. Provide examples for each to support your response. Support your response with at least 3 scholarly references. Use APA format. Write 2-3 paragraphs.

Neuron Review

Brain Mechanisms in Early Language Acquisition

Patricia K. Kuhl, Institute for Learning & Brain Sciences, University of Washington, Seattle, WA 98195, USA. DOI 10.1016/j.neuron.2010.08.038. Neuron 67, September 9, 2010.

The last decade has produced an explosion in neuroscience research examining young children's early processing of language. Noninvasive, safe functional brain measurements have now been proven feasible for use with children starting at birth. The phonetic level of language is especially accessible to experimental studies that document the innate state and the effect of learning on the brain. The neural signatures of learning at the phonetic level can be documented at a remarkably early point in development. Continuity in linguistic development from infants' earliest brain responses to phonetic stimuli is reflected in their language and prereading abilities in the second, third, and fifth year of life, a finding with theoretical and clinical impact. There is evidence that early mastery of the phonetic units of language requires learning in a social context. Neuroscience on early language learning is beginning to reveal the multiple brain systems that underlie the human language faculty.

Introduction

Neural and behavioral research studies show that exposure to language in the first year of life influences the brain's neural circuitry even before infants speak their first words. What do we know of the neural architecture underlying infants' remarkable capacity for language and the role of experience in shaping that neural circuitry? The goal of this review is to explore this topic, focusing on the data and arguments about infants' neural responses to the consonants and vowels that make up words. Infants' responses to these basic building blocks of speech—the phonemes used in the world's languages—provide an experimentally tractable window on the roles of nature and nurture in language acquisition.
Comparative studies at the phonetic level have allowed us to examine the uniqueness of humans' language processing abilities. Moreover, infants' responses to native and nonnative phonemes have documented the effects of experience as infants are bathed in a specific language. We are also beginning to discover how exposure to two languages early in infancy produces a bilingual brain. We focus here on when and how infants master the sound structure of their language(s), and the role of experience in explaining this important developmental change. As the data attest, infants' neural commitment to the elementary units of language begins early, and the review showcases the extent to which the tools of modern neuroscience are advancing our understanding of infants' uniquely human capacity for language.

Humans' capacity for speech and language provoked classic debates on nature versus nurture by strong proponents of nativism (Chomsky, 1959) and learning (Skinner, 1957). While we are far beyond these debates and informed by a great deal of data about infants, their innate predispositions, and their incredible abilities to learn once exposed to natural language (Kuhl, 2009; Saffran et al., 2006), we are still just breaking ground with regard to the neural mechanisms that underlie language development (see Friederici and Wartenburger, 2010; Kuhl and Rivera-Gaxiola, 2008). This decade may represent the dawn of a golden age with regard to the developmental neuroscience of language in humans.

Windows to the Young Brain

The last decade has produced rapid advances in noninvasive techniques that examine language processing in young children (Figure 1). They include electroencephalography (EEG)/event-related potentials (ERPs), magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and near-infrared spectroscopy (NIRS).
Event-related potentials (ERPs) have been widely used to study speech and language processing in infants and young children (for reviews, see Conboy et al., 2008a; Friederici, 2005; Kuhl, 2004). ERPs, a part of the EEG, reflect electrical activity that is time-locked to the presentation of a specific sensory stimulus (for example, syllables or words) or a cognitive process (recognition of a semantic violation within a sentence or phrase). By placing sensors on a child's scalp, the activity of neural networks firing in a coordinated and synchronous fashion in open field configurations can be measured, and voltage changes occurring as a function of cortical neural activity can be detected. ERPs provide precise time resolution (milliseconds), making them well suited for studying the high-speed and temporally ordered structure of human speech. ERP experiments can also be carried out in populations who cannot provide overt responses because of age or cognitive impairment. Spatial resolution of the source of brain activation is, however, limited.

Magnetoencephalography (MEG) is another brain imaging technique that tracks activity in the brain with exquisite temporal resolution. The SQUID (superconducting quantum interference device) sensors located within the MEG helmet measure the minute magnetic fields associated with electrical currents that are produced by the brain when it is performing sensory, motor, or cognitive tasks. MEG allows precise localization of the neural currents responsible for the sources of the magnetic fields. Cheour et al. (2004) and Imada et al. (2006) used new head-tracking methods and MEG to show phonetic discrimination in newborns and infants in the first year of life.

[Figure 1. Four techniques now used extensively with infants and young children to examine their responses to linguistic signals (from Kuhl and Rivera-Gaxiola, 2008).]
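In practice, the time-locked ERP described above is obtained by averaging many short EEG segments aligned to stimulus onset, so that activity unrelated to the stimulus tends to cancel out. A minimal sketch of that averaging step, with invented signal values (real pipelines use dedicated tools such as MNE-Python and much longer recordings):

```python
def epoch_average(eeg, stimulus_onsets, window):
    """Average EEG segments time-locked to stimulus onset.

    eeg: voltage samples from one scalp sensor.
    stimulus_onsets: sample indices at which the syllable was presented.
    window: number of samples to keep after each onset.
    Averaging across many presentations attenuates activity that is not
    time-locked to the stimulus, leaving the event-related potential.
    """
    epochs = [eeg[t:t + window] for t in stimulus_onsets if t + window <= len(eeg)]
    n = len(epochs)
    return [sum(samples) / n for samples in zip(*epochs)]

# Toy signal: a fixed two-sample response (1.0 then -2.0) repeated at each onset,
# surrounded by unrelated samples.
eeg = [0.1, 1.0, -2.0, -0.1, 1.0, -2.0, 0.1, 1.0, -2.0, -0.1]
onsets = [1, 4, 7]
print(epoch_average(eeg, onsets, window=2))  # -> [1.0, -2.0]
```

The same logic underlies the infant ERP studies discussed in this review; only the scale (dozens of sensors, hundreds of trials, baseline correction and filtering) differs.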
Sophisticated head-tracking software and hardware enables investigators to correct for infants' head movements, and allows the examination of multiple brain areas as infants listen to speech (Imada et al., 2006). MEG (as well as EEG) techniques are completely safe and noiseless. Magnetic resonance imaging (MRI) can be combined with MEG and/or EEG, providing static structural/anatomical pictures of the brain. Structural MRIs show anatomical differences in brain regions across the lifespan, and have recently been used to predict second-language phonetic learning in adults (Golestani and Pallier, 2007). Structural MRI measures in young infants identify the size of various brain structures, and these measures have been shown to be related to language abilities later in childhood (Ortiz-Mantilla et al., 2010). When structural MRI images are superimposed on the physiological activity detected by MEG or EEG, the spatial localization of brain activities recorded by these methods can be improved.

Functional magnetic resonance imaging (fMRI) is a popular method of neuroimaging in adults because it provides high spatial-resolution maps of neural activity across the entire brain (e.g., Gernsbacher and Kaschak, 2003). Unlike EEG and MEG, fMRI does not directly detect neural activity, but rather the changes in blood oxygenation that occur in response to neural activation. Neural events happen in milliseconds; however, the blood-oxygenation changes that they induce are spread out over several seconds, thereby severely limiting fMRI's temporal resolution. Few studies have attempted fMRI with infants because the technique requires infants to be perfectly still, and because the MRI device produces loud sounds, making it necessary to shield infants' ears.
fMRI studies allow precise localization of brain activity, and a few pioneering studies show remarkable similarity in the structures responsive to language in infants and adults (Dehaene-Lambertz et al., 2002, 2006).

Near-infrared spectroscopy (NIRS) also measures cerebral hemodynamic responses in relation to neural activity, but utilizes the absorption of light, which is sensitive to the concentration of hemoglobin, to measure activation (Aslin and Mehler, 2005). NIRS measures changes in blood oxy- and deoxyhemoglobin concentrations in the brain, as well as total blood volume changes in various regions of the cerebral cortex, using near-infrared light. The NIRS system can determine the activity in specific regions of the brain by continuously monitoring blood hemoglobin level. Reports have begun to appear on infants in the first two years of life, testing infant responses to phonemes as well as longer stretches of speech such as "motherese" and forward versus reversed sentences (Bortfeld et al., 2007; Homae et al., 2006; Peña et al., 2002; Taga and Asakawa, 2007). As with other hemodynamic techniques such as fMRI, NIRS typically does not provide good temporal resolution. However, event-related NIRS paradigms are being developed (Gratton and Fabiani, 2001). One of the most important potential uses of the NIRS technique is possible co-registration with other testing techniques such as EEG and MEG.

Neural Signatures of Early Learning

Perception of the phonetic units of speech—the vowels and consonants that make up words—is one of the most widely studied linguistic skills in infancy and adulthood. Phonetic perception, and the role of experience in learning it, is studied in newborns, during development as infants are exposed to a particular language, in adults from different cultures, in children with developmental disabilities, and in nonhuman animals. Phonetic perception studies provide critical tests of theories of language development and its evolution.
An extensive literature on developmental speech perception exists, and brain measures are adding substantially to our knowledge of phonetic development and learning (see Kuhl, 2004; Kuhl et al., 2008; Werker and Curtin, 2005). In the last decade, brain and behavioral studies indicate a very complex set of interacting brain systems in the initial acquisition of language, many of which appear to reflect adult language processing, even early in infancy (Dehaene-Lambertz et al., 2006). In adulthood, language is highly modularized, which accounts for the very specific patterns of language deficits seen in adult patients following stroke or other brain damage (P.K.K. and A. Damasio, Principles of Neural Science, 5th edition [McGraw-Hill], in press; E.R. Kandel, J.H. Schwartz, T.M. Jessell, S. Siegelbaum, and J. Hudspeth, eds.). Infants, however, must begin life with brain systems that allow them to acquire any and all languages to which they are exposed, and they can acquire language as either an auditory-vocal or a visual-manual code, on roughly the same timetable (Petitto and Marentette, 1991). We are in a nascent stage of understanding the brain mechanisms underlying infants' early flexibility with regard to the acquisition of language (their ability to acquire language by eye or by ear, and to acquire one or multiple languages), and also the reduction in this initial flexibility that occurs with age, which dramatically decreases our capacity to acquire a new language as adults (Newport, 1990). The infant brain is exquisitely poised to "crack the speech code" in a way that the adult brain cannot. Uncovering why this is the case is a very interesting puzzle. In this review I will also explore a current working hypothesis and its implications for brain development: that cracking the speech code requires infants to combine a powerful set of domain-general computational and cognitive skills with their equally extraordinary social skills.
Thus, the underlying brain systems must mutually influence one another during development. Experience with more than one language, for example, as in the case of people who are bilingual, is related to increases in particular cognitive skills, both in adults (Bialystok, 1991) and in children (Carlson and Meltzoff, 2008). Moreover, social interaction appears to be necessary for language acquisition, and an individual infant’s social behavior can be linked to their ability to learn new language material (Kuhl et al., 2003; B.T. Conboy et al., 2008, ‘‘Joint engagement with language tutors predicts learning of second-language phonetic stimuli,’’ presentation at the 16th International Conference on Infancy Studies, Vancouver). Regarding the social effects, I have suggested that the social brain—in ways we have yet to understand—‘‘gates’’ the computational mechanisms underlying learning in the domain of language (Kuhl, 2007). The assertion that social factors gate language learning explains not only how typically developing children acquire language, but also why children with autism exhibit twin deficits in social cognition and language, and why nonhuman animals with impressive computational abilities do not acquire language. Moreover, this gating hypothesis may explain why social factors play a far more significant role than previously realized in human learning across domains throughout our lifetimes (Meltzoff et al., 2009). Theories of social learning have traditionally emphasized the role of social factors in language acquisition (Bruner, 1983; Vygotsky, 1962; Tomasello, 2003a, 2003b). However, these models have emphasized the development of lexical understanding and the use of others’ communicative intentions to help understand the mapping between words and objects. 
The new data indicate that social interaction "gates" an even more basic aspect of language, the learning of its elementary phonetic units, and this suggests a more fundamental connection between the brain mechanisms underlying human social understanding and the origins of language than has previously been hypothesized. In the next decade, the methods of modern neuroscience will be used to explore how the integration of brain activity across the specialized brain systems involved in linguistic, social, and cognitive analyses takes place. These approaches, as well as others described here, will lead us toward a view of language acquisition in the human child that could be transformational.

The Learning Problem

Language learning is a deep puzzle that our theories and machines struggle to solve but children accomplish with ease. How do infants discover the sounds and words used in their particular language(s) when the most sophisticated computers cannot? What is it about the human mind that allows a young child, merely one year old, to understand the words that induce meaning in our collective minds, and to begin to use those words to convey their innermost thoughts and desires? A child's budding ability to express a thought through words is a breathtaking feat of the human mind.

Research on infants' phonetic perception in the first year of life shows how computational, cognitive, and social skills combine to form a very powerful learning mechanism. Interestingly, this mechanism does not resemble Skinner's operant conditioning and reinforcement model of learning, nor Chomsky's detailed view of parameter setting.

[Figure 2. The relationship between age of acquisition of a second language and language skill (adapted from Johnson and Newport, 1989).]
The learning processes that infants employ when learning from exposure to language are complex and multimodal, yet also child's play, in that they grow out of infants' heightened attention to items and events in the natural world: the faces, actions, and voices of other people.

Language Exhibits a "Critical Period" for Learning

A stage-setting concept for human language learning is the graph shown in Figure 2, redrawn from a study by Johnson and Newport (1989) on English grammar in native speakers of Korean learning English as a second language. The graph as rendered shows a simplified schematic of second-language competence as a function of the age of second-language acquisition. Figure 2 is surprising from the standpoint of more general human learning. In the domain of language, infants and young children are superior learners when compared to adults, in spite of adults' cognitive superiority. Language is one of the classic examples of a "critical" or "sensitive" period in neurobiology (Bruer, 2008; Johnson and Newport, 1989; Knudsen, 2004; Kuhl, 2004; Newport et al., 2001).

Scientists are generally in agreement that this learning curve is representative of data across a wide variety of second-language learning studies (Bialystok and Hakuta, 1994; Birdsong and Molis, 2001; Flege et al., 1999; Johnson and Newport, 1989; Kuhl et al., 2005a, 2008; Mayberry and Lock, 2003; Neville et al., 1997; Weber-Fox and Neville, 1999; Yeni-Komshian et al., 2000; though see Birdsong, 1992; White and Genesee, 1996). Moreover, not all aspects of language exhibit the same temporally defined critical "windows." The developmental timing of critical periods for learning the phonetic, lexical, and syntactic levels of language varies, though studies cannot yet document the precise timing at each individual level.
Studies indicate, for example, that the critical period for phonetic learning occurs prior to the end of the first year, whereas syntactic learning flourishes between 18 and 36 months of age. Vocabulary development "explodes" at 18 months of age, but does not appear to be as restricted by age as other aspects of language learning—one can learn new vocabulary items at any age. One goal of future research will be to document the "opening" and "closing" of critical periods for all levels of language and understand how they overlap and why they differ.

Given widespread agreement on the fact that we do not learn equally well over the lifespan, theory is currently focused on attempts to explain the phenomenon. What accounts for adults' inability to learn a new language with the facility of an infant? One of the candidate explanations was Lenneberg's hypothesis that development of the corpus callosum affected language learning (Lenneberg, 1967; Newport et al., 2001). More recent hypotheses take a different perspective. Newport raised a "less is more" hypothesis, which suggests that infants' limited cognitive capacities actually allow superior learning of the simplified language spoken to infants (Newport, 1990). Work in my laboratory led me to advance the concept of neural commitment, the idea that neural circuitry and overall architecture develops early in infancy to detect the phonetic and prosodic patterns of speech (Kuhl, 2004; Zhang et al., 2005, 2009). This architecture is designed to maximize the efficiency of processing for the language(s) experienced by the infant. Once established, the neural architecture arising from French or Tagalog, for example, impedes learning of new patterns that do not conform.
I will return to the concept of the critical period for language learning, and the role that computational, cognitive, and social skills may play in accounting for the relatively poor performance of adults attempting to learn a second language.

Focal Example: Phoneme Learning

The world's languages contain approximately 600 consonants and 200 vowels (Ladefoged, 2001). Each language uses a unique set of about 40 distinct elements, phonemes, which change the meaning of a word (e.g., from bat to pat in English). But phonemes are actually groups of non-identical sounds, phonetic units, which are functionally equivalent in the language. Japanese-learning infants have to group the phonetic units r and l into a single phonemic category (Japanese r), whereas English-learning infants must uphold the distinction to separate rake from lake. Similarly, Spanish-learning infants must distinguish phonetic units critical to Spanish words (bano and pano), whereas English-learning infants must combine them into a single category (English b). If infants were exposed only to the subset of phonetic units that will eventually be used phonemically to differentiate words in their language, the problem would be trivial. But infants are exposed to many more phonetic variants than will be used phonemically, and have to derive the appropriate groupings used in their specific language. The baby's task in the first year of life, therefore, is to make some progress in figuring out the composition of the 40-odd phonemic categories in their language(s) before trying to acquire words that depend on these elementary units.

Learning to produce the sounds that will characterize infants as speakers of their "mother tongue" is equally challenging, and is not completely mastered until the age of 8 years (Ferguson et al., 1992). Yet, by 10 months of age, differences can be discerned in the babbling of infants raised in different countries (de Boysson-Bardies, 1993), and in the laboratory, vocal imitation can be elicited by 20 weeks (Kuhl and Meltzoff, 1982). The speaking patterns we adopt early in life last a lifetime (Flege, 1991). My colleagues and I have suggested that this kind of indelible learning stems from a linkage between sensory and motor experience; sensory experience with a specific language establishes auditory patterns stored in memory that are unique to that language, and these representations guide infants' successive motor approximations until a match is achieved (Kuhl and Meltzoff, 1996). This ability to imitate vocally may also depend on the brain's social understanding mechanisms, which form a human mirroring system for seamless social interaction (Hari and Kujala, 2009); we will revisit the impact of the brain's social understanding systems later in this review.

[Figure 3. Effects of age and experience on phonetic discrimination: effects of age on discrimination of the American English /ra-la/ phonetic contrast by American and Japanese infants at 6–8 and 10–12 months of age; mean percent correct scores are shown with standard errors indicated (adapted from Kuhl et al., 2006).]

What enables the kind of learning we see in infants for speech? No machine in the world can derive the phonemic inventory of a language from natural language input (Rabiner and Juang, 1993), though models improve when exposed to "motherese," the linguistically simplified and acoustically exaggerated speech that adults universally use when speaking to infants (de Boer and Kuhl, 2003). The variability in speech input is simply too enormous; Japanese adults produce both English r- and l-like sounds, exposing Japanese infants to both sounds (Lotto et al., 2004; Werker et al., 2007). How do Japanese infants learn that these two sounds do not distinguish words in their language, and that these differences should be ignored?
Similarly, English speakers produce Spanish b and p, exposing American infants to both categories of sound (Abramson and Lisker, 1970). How do American infants learn that these sounds do not distinguish words in English? An important discovery in the 1970s was that infants initially hear all these phonetic differences (Eimas, 1975; Eimas et al., 1971; Lasky et al., 1975; Werker and Lalonde, 1988). What we must explain is how infants learn to group phonetic units into phonemic categories that make a difference in their language.

The Timing of Phonetic Learning

Another important discovery, in the 1980s, identified the timing of a crucial change in infant perception. The transition from an early universal perceptual ability to distinguish all the phonetic units of all languages to a more language-specific pattern of perception occurred very early in development—between 6 and 12 months of age (Werker and Tees, 1984), and initial work demonstrated that infants' perception of nonnative distinctions declines during the second half of the first year of life (Best and McRoberts, 2003; Rivera-Gaxiola et al., 2005; Tsao et al., 2006; Werker and Tees, 1984). Work in this laboratory also established a new fact: at the same time that nonnative perception declines, native-language speech perception shows a significant increase. Japanese infants' discrimination of English r-l declines between 8 and 10 months of age, while at the same time in development, American infants' discrimination of the same sounds shows an increase (Kuhl et al., 2006) (Figure 3).

Phonetic Learning Predicts the Rate of Language Growth

We argued that the increase observed in native-language phonetic perception represented a critical step in initial language learning and promoted language growth (Kuhl et al., 2006). To test this hypothesis, we designed a longitudinal study examining whether a measure of phonetic perception predicted children's language skills measured 18 months later.
The study demonstrated that infants' phonetic discrimination ability at 6 months of age was significantly correlated with their success in language learning at 13, 16, and 24 months of age (Tsao et al., 2004). However, we recognized that in this initial study the association we observed might be due to infants' cognitive skills, such as the ability to perform in the behavioral task, or to sensory abilities that affected auditory resolution of the differences in formant frequencies that underlie phonetic distinctions. To address these issues, we assessed both native and nonnative phonetic discrimination in 7-month-old infants, and used both a behavioral measure (Kuhl et al., 2005a) and an event-related potential measure, the mismatch negativity (MMN), to assess infants' performance (Kuhl et al., 2008). Using a neural measure removed potential cognitive effects on performance; the use of both native and nonnative contrasts addressed the sensory issue, since better sensory abilities would be expected to improve both native and nonnative speech discrimination. The native language neural commitment (NLNC) view suggested that future language measures would be associated with early performance on both native and nonnative contrasts, but in opposite directions. The results conformed to this prediction. When both native and nonnative phonetic discrimination was measured in the same infants at 7.5 months of age, better native-language perception predicted significantly higher language abilities between 18 and 30 months of age, whereas better nonnative phonetic perception at the same age predicted poorer language abilities at the same future points in time (Kuhl et al., 2005a, 2008).

As shown in Figure 4, the ERP measure at 7.5 months of age (Figure 4A) provided an MMN measure of speech discrimination for both native and nonnative contrasts; greater negativity of the MMN reflects greater discrimination (Figure 4B). Hierarchical linear growth modeling of vocabulary between 14 and 30 months for MMN values of +1 SD and −1 SD (Figure 4C) revealed that both native and nonnative phonetic discrimination significantly predict future language, but in opposite directions, with better native MMNs predicting more advanced future language development and better nonnative MMNs predicting less advanced future language development. The results are explained by NLNC: better native phonetic discrimination enhances infants' skills in detecting words, and this vaults them toward language, whereas better nonnative abilities indicate that infants remained at an earlier phase of development, still sensitive to all phonetic differences.

[Figure 4. Speech discrimination predicts vocabulary growth. (A) A 7.5-month-old infant wearing an ERP electrocap. (B) Infant ERP waveforms at one sensor location (CZ) for one infant, in response to a native (English) and a nonnative (Mandarin) phonetic contrast at 7.5 months. The mismatch negativity (MMN) is obtained by subtracting the standard waveform (black) from the deviant waveform (English, red; Mandarin, blue). This infant's response suggests that native-language learning has begun, because the MMN negativity in response to the native English contrast is considerably stronger than that to the nonnative contrast. (C) Hierarchical linear growth modeling of vocabulary growth between 14 and 30 months for MMN values of +1 SD and −1 SD on the native contrast at 7.5 months (left) and on the nonnative contrast at 7.5 months (right) (adapted from Kuhl et al., 2008).]
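The deviant-minus-standard subtraction that defines the MMN, as described above, can be sketched in a few lines. The waveform values below are invented for illustration only; real MMN analyses average over many trials and quantify negativity within a time window rather than taking a single minimum:

```python
def mismatch_response(standard_uv, deviant_uv):
    """Difference wave: deviant minus standard, sample by sample (microvolts)."""
    return [d - s for s, d in zip(standard_uv, deviant_uv)]

def mmn_peak(diff_wave):
    """Peak MMN: the most negative point of the difference wave. A more
    negative peak is taken to index stronger neural discrimination."""
    return min(diff_wave)

# Hypothetical averaged ERP amplitudes at one sensor (illustrative values only).
standard          = [0.0, 1.0,  0.5, -0.5, 0.0]
native_deviant    = [0.0, 0.5, -2.5, -1.5, 0.0]   # strong response to the change
nonnative_deviant = [0.0, 0.8, -0.5, -0.8, 0.0]   # weak response to the change

native_mmn = mmn_peak(mismatch_response(standard, native_deviant))
nonnative_mmn = mmn_peak(mismatch_response(standard, nonnative_deviant))
# The native contrast yields the stronger (more negative) MMN,
# the pattern the review associates with native-language learning.
print(native_mmn < nonnative_mmn)  # -> True
```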
Infants' ability to learn which phonetic units are relevant in the language(s) they are exposed to, while decreasing or inhibiting their attention to the phonetic units that do not distinguish words in their language, is the necessary step required to begin the path toward language. These data led to a theoretical argument that an implicit learning process commits the brain's neural circuitry to the properties of native-language speech, and that neural commitment has bidirectional effects: it increases learning for patterns (such as words) that are compatible with the learned phonetic structure, while decreasing perception of nonnative patterns that do not match the learned scheme (Kuhl, 2004).

Recent data indicate very long-term associations between infants' phonetic perception and future language and reading skills. Our studies show that the ability to discriminate two simple vowels at 6 months of age predicts language abilities and pre-reading skills such as rhyming at the age of 5 years, an association that holds regardless of socioeconomic status and the children's language skills at 2.5 years of age (Cardillo, 2010).

A Computational Solution to Phonetic Learning

A surprising new form of learning, referred to as "statistical learning" (Saffran et al., 1996), was discovered in the 1990s. Statistical learning is computational in nature, and reflects implicit rather than explicit learning. It relies on the ability to automatically pick up and learn from the statistical regularities that exist in the stream of sensory information we process, and it strongly influences both phonetic learning and early word learning. For example, data show that the developmental change in phonetic perception between the ages of 6 and 12 months is supported by infants' sensitivity to the distributional frequencies of the sounds in the language(s) they hear, and that this affects perception.
To illustrate, adult speakers of English and Japanese produce both English r- and l-like sounds, even though English speakers hear /r/ and /l/ as distinct and Japanese adults hear them as identical. Japanese infants are therefore exposed to both /r/ and /l/ sounds, even though they do not represent distinct categories in Japanese. The mere presence of a particular sound in the ambient language, therefore, does not account for infant learning. However, distributional analyses of English and Japanese show differential patterns of frequency: in English, /r/ and /l/ occur very frequently; in Japanese, the most frequent sound of this type is Japanese /r/, which is related to but distinct from both of the English variants. Can infants learn from this kind of distributional information in speech input?

A variety of studies show that infants' perception of phonetic categories is affected by distributional patterns in the sounds they hear. In one study using very simple stimuli and short-term exposure in the laboratory, 6- and 8-month-old infants were exposed for 2 min to 8 sounds that formed a continuum from /da/ to /ta/ (Maye et al., 2002; see also Maye et al., 2008). All infants heard all the stimuli on the continuum, but experienced different distributional frequencies of the sounds. A "bimodal" group heard more frequent presentations of stimuli at the ends of the continuum; a "unimodal" group heard more frequent presentations of stimuli from the middle of the continuum. After familiarization, infants in the bimodal group discriminated the /da/ and /ta/ sounds, whereas those in the unimodal group did not.
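The bimodal/unimodal logic of the Maye et al. design can be caricatured computationally: a learner that counts how often each continuum step occurs and looks for peaks in that exposure histogram would infer two categories from bimodal input and one from unimodal input. A toy sketch, with illustrative exposure counts that are not the actual stimulus frequencies from the study:

```python
def count_modes(exposure_counts):
    """Count local maxima in a familiarization histogram: a crude stand-in
    for the number of phonetic categories a distributional learner infers."""
    modes = 0
    for i, c in enumerate(exposure_counts):
        left = exposure_counts[i - 1] if i > 0 else 0
        right = exposure_counts[i + 1] if i < len(exposure_counts) - 1 else 0
        if c > left and c > right:
            modes += 1
    return modes

# Relative exposure frequencies for the 8 steps of a /da/-/ta/ continuum
# (invented values): bimodal peaks near the endpoints, unimodal in the middle.
bimodal  = [4, 8, 4, 2, 2, 4, 8, 4]
unimodal = [1, 2, 4, 8, 6, 4, 2, 1]
print(count_modes(bimodal))   # -> 2  (two categories: /da/ vs /ta/ discriminated)
print(count_modes(unimodal))  # -> 1  (one category: no discrimination)
```

Real distributional-learning models use density estimation or mixture models rather than raw mode counting, but the contrast between the two exposure regimes is the same.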
Furthermore, while previous studies show that infants integrate the auditory and visual instantiations of speech (Kuhl and Meltzoff, 1982; Patterson and Werker, 1999), more recent studies show that infants’ detection of statistical patterns in speech stimuli, like those used by Maye and her colleagues, is influenced both by the auditory event and by the sight of a face articulating the sounds. When exposed only to the ambiguous auditory stimuli in the middle of a speech continuum, infants discriminated the /da-ta/ contrast when each auditory stimulus was paired with the appropriate face articulating either /da/ or /ta/; discrimination did not occur if only one face was used with all auditory stimuli (Teinonen et al., 2008).

Cross-cultural studies also indicate that infants are sensitive to the statistical distribution of sounds they hear in natural language. Infants tested in Sweden and the United States at 6 months of age showed a unique response to vowel sounds that represent the distributional mean in productions of adults who speak the language (i.e., “prototypes”); this response was shown only for stimuli infants had been exposed to in natural language (native-vowel prototypes), not foreign-language vowel prototypes (Kuhl et al., 1992). Taken as a whole, these studies indicate that infants pick up the distributional frequency patterns in ambient speech, whether they experience them during short-term laboratory experiments or over months in natural environments, and can learn from them.

Statistical learning also supports word learning. Unlike written language, spoken language has no reliable markers to indicate word boundaries in typical phrases. How do infants find words? New experiments show that, before 8-month-old infants know the meaning of a single word, they detect likely word candidates through sensitivity to the transitional probabilities between adjacent syllables.
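The transitional-probability computation just described can be sketched in a few lines of code. This is an illustrative toy, not the procedure used in the studies reviewed: the three nonsense “words” and the boundary threshold are invented for the example, and TP(x → y) is estimated simply as count(xy) / count(x).

```python
# Toy word segmentation from transitional probabilities (TPs), in the spirit
# of Saffran et al. (1996). Syllable stream, "words", and threshold are
# hypothetical; a learner posits a boundary wherever the TP dips.

from collections import Counter

def segment(syllables, threshold=0.75):
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])   # occurrences as pair-initial syllable
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        tp = pair_counts[(a, b)] / first_counts[a]
        if tp < threshold:                    # low TP -> likely word boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A continuous stream built from three nonsense words, with no pauses:
stream = "go la bu bi da ku go la bu pa do ti bi da ku pa do ti go la bu".split()
print(segment(stream))
# -> ['golabu', 'bidaku', 'golabu', 'padoti', 'bidaku', 'padoti', 'golabu']
```

Within-word TPs here are 1.0 while TPs across word boundaries fall to 0.5, so thresholding the dips recovers the "words" — the statistic infants appear to exploit.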
In typical speech, as in the phrase “pretty baby,” the transitional probabilities between the two syllables within a word, such as those between “pre” and “tty” and between “ba” and “by,” are higher than those between syllables that cross word boundaries, such as “tty” and “ba.” Infants are sensitive to these probabilities. When exposed to a 2 min string of nonsense syllables, with no acoustic breaks or other cues to word boundaries, they treat syllables that have high transitional probabilities as “words” (Saffran et al., 1996). Recent findings show that even sleeping newborns detect this kind of statistical structure in speech, as shown in studies using event-related brain potentials (Teinonen et al., 2009). Statistical learning has been shown in nonhuman animals (Hauser et al., 2001), and in humans for stimuli outside the realm of speech, operating for musical and visual patterns in the same way as for speech (Fiser and Aslin, 2002; Kirkham et al., 2002; Saffran et al., 1999). Thus, a very basic implicit learning mechanism allows infants, from birth, to detect statistical structure in speech and in other signals. Infants’ sensitivity to this statistical structure can influence both phoneme and word learning.

Effects of Social Interaction on Computational Learning

As reviewed, infants show robust learning effects in statistical learning studies when tested in the laboratory with very simple stimuli (Maye et al., 2002, 2008; Saffran et al., 1996). However, complex natural language learning may challenge infants in a way that these experiments do not. Are there constraints on statistical learning as an explanation for natural language learning? A series of later studies suggests that this is the case.
Laboratory studies testing infant phonetic and word learning from exposure to a complex natural language suggest limits on statistical learning, and provide new information suggesting that social brain systems are integrally involved and, in fact, may be necessary to explain natural language learning. The new experiments tested infants in the following way: at 9 months of age, the age at which the initial universal pattern of infant perception has changed to one that is more language-specific, infants were exposed to a foreign language for the first time (Kuhl et al., 2003). Nine-month-old American infants listened to 4 different native speakers of Mandarin during 12 sessions scheduled over 4–5 weeks. The foreign-language “tutors” read books and played with toys in sessions that were unscripted. A control group was also exposed for 12 sessions but heard only English from native speakers. After infants in the experimental Mandarin exposure group and the English control group completed their sessions, all were tested with a Mandarin phonetic contrast that does not occur in English. Both behavioral and ERP methods were used. The results indicated that infants had a remarkable ability to learn from the “live-person” sessions – after exposure, they performed significantly better on the Mandarin contrast than the control group that heard only English. In fact, they performed equivalently to infants of the same age tested in Taiwan who had been listening to Mandarin for 10 months (Kuhl et al., 2003). The study revealed that infants can learn from first-time natural exposure to a foreign language at 9 months, and answered what was initially the experimental question: can infants learn the statistical structure of phonemes in a new language given first-time exposure at 9 months of age?
If infants required a long-term history of listening to that language—as would be the case if infants needed to build up statistical distributions over the initial 9 months of life—the answer to our question would have been no. However, the data clearly showed that infants are capable of learning at 9 months when exposed to a new language. Moreover, learning was durable. Infants returned to the laboratory for their behavioral discrimination tests between 2 and 12 days after the final language exposure session, and between 8 and 33 days for their ERP measurements. No “forgetting” of the Mandarin contrast occurred during the 2- to 33-day delay.

Figure 5. Social Interaction Facilitates Foreign Language Learning. The need for social interaction in language acquisition is shown by foreign-language learning experiments. Nine-month-old infants experienced 12 sessions of Mandarin Chinese through (A) natural interaction with a Chinese speaker (left) or the identical linguistic information delivered via television (right) or audiotape (data not shown). (B) Natural interaction resulted in significant learning of Mandarin phonemes when compared with a control group who participated in interaction using English (left). No learning occurred from television or audiotaped presentations (middle). Data for age-matched Chinese and American infants learning their native languages are shown for comparison (right) (adapted from Kuhl et al., 2003).

We were struck by the fact that infants exposed to Mandarin were socially very engaged in the language sessions and began to wonder about the role of social interaction in learning. Would infants learn if they were exposed to the same information in the absence of a human being, say, via television or an audiotape? If statistical learning is sufficient, the television and audio-only conditions should produce learning.
Infants who were exposed to the same foreign-language material at the same time and at the same rate, but via standard television or audiotape only, showed no learning—their performance equaled that of infants in the control group who had not been exposed to Mandarin at all (Figure 5). Thus, the presence of a human being interacting with the infant during language exposure, while not required for simpler statistical-learning tasks (Maye et al., 2002; Saffran et al., 1996), is critical for learning in complex natural language-learning situations in which infants heard an average of 33,000 Mandarin syllables from a total of four different talkers over a 4–5-week period (Kuhl et al., 2003).

Explaining the Effect of Social Interaction on Language Learning

The impact of social interaction on language learning (Kuhl et al., 2003) led to the development of the Social Gating Hypothesis (Kuhl, 2007). “Gating” suggests that social interaction creates a vastly different learning situation, one in which additional factors introduced by a social context influence learning. Gating could operate by increasing (1) attention and/or arousal, (2) information, (3) a sense of relationship, and/or (4) activation of brain mechanisms linking perception and action. Attention and arousal affect learning in a wide variety of domains (Posner, 2004), and could impact infant learning during exposure to a new language. Infant attention, measured in the original studies, was significantly higher in response to the live person than to either inanimate source (Kuhl et al., 2003). Attention has been shown to play a role in the statistical learning studies as well: “high-attender” 10-month-olds, measured by the amount of infant looking time, learned from bimodal stimulus distributions when “low-attenders” did not (Yoshida et al., 2006; see also Yoshida et al., 2010).
Heightened attention and arousal could produce an overall increase in the quantity or quality of the speech information that infants encode and remember. Recent data suggest a role for attention in adult second-language phonetic learning as well (Guion and Pederson, 2007).

A second hypothesis was raised to explain the effectiveness of social interaction – the live learning situation allowed the infants and tutors to interact, and this added contingent and reciprocal social behaviors that increased information that could foster learning. During live exposure, tutors focused their visual gaze on pictures in the books or on the toys as they spoke, and the infants’ gaze tended to follow the speaker’s gaze, as previously observed in social learning studies (Baldwin, 1995; Brooks and Meltzoff, 2002). Referential information is present in both the live and televised conditions, but it is more difficult to pick up via television, and is totally absent during audio-only presentations. Gaze following is a significant predictor of receptive vocabulary (Baldwin, 1995; Brooks and Meltzoff, 2005; Mundy and Gomes, 1998), and may help infants link the foreign speech to the objects they see. When 9-month-old infants follow a tutor’s line of regard in our foreign-language learning situation, the tutor’s specific meaningful social cues, such as eye gaze and pointing to an object of reference, might help infants segment word-like units from ongoing speech, thus facilitating phonetic learning of the sounds contained in those words. If this hypothesis is correct, then the degree to which infants interact and engage socially with the tutor in the social language-learning situation should correlate with learning. In studies testing this hypothesis, 9-month-old infants were exposed to Spanish (Conboy and Kuhl, 2010), extending the experiment to a new language.
Other changes in method expanded the tests of language learning to include both Spanish phonetic learning and Spanish word learning, and added measures of specific interactions between the tutor and the infant to examine whether interactive episodes could be related to learning of either phonemes or words. The results confirmed Spanish language learning, both of the phonetic units and of the lexical units of the language (Conboy and Kuhl, 2010). In addition, these studies answered a key question—does the degree of infants’ social engagement during the Spanish exposure sessions predict the degree of language learning as shown by ERP measures of Spanish phoneme discrimination? Our results (Figure 6) show that it does (Conboy et al., 2008a). Infants who shifted their gaze between the tutor’s eyes and newly introduced toys during the Spanish exposure sessions showed a more negative MMN (indicating greater neural discrimination) in response to the Spanish phonetic contrast. Infants who simply gazed at the tutor or at the toy, showing fewer gaze shifts, produced less negative MMN responses. The degree of infants’ social engagement during sessions predicted both phonetic and word learning—infants who were more socially engaged showed greater learning as reflected by ERP brain measures of both phonetic and word learning.

Figure 6. Social Engagement Predicts Foreign Language Learning

Language, Cognition, and Bilingual Language Experience

Specific cognitive abilities, particularly the executive control of attention and the ability to inhibit a pre-potent response (inhibitory control), are associated with exposure to more than one language. Bilingual adult speakers show enhanced executive control skills (Bialystok, 1999, 2001; Bialystok and Hakuta, 1994; Wang et al., 2009), a finding that has been extended to young school-aged bilingual children (Carlson and Meltzoff, 2008).
In monolingual infants, the decline in discrimination of nonnative contrasts (which promotes more rapid growth in language; see Figure 4C) is associated with enhanced inhibitory control, suggesting that domain-general cognitive mechanisms underlying attention may play a role in enhancing performance on native, and suppressing performance on nonnative, phonetic contrasts early in development (Conboy et al., 2008b; Kuhl et al., 2008). In support of this view, it is noteworthy that in the Spanish exposure studies, a median split of the post-exposure MMN phonetic discrimination data revealed that infants showing greater phonetic learning had higher cognitive control scores post-exposure. These same infants did not differ in their pre-exposure cognitive control tests (Conboy, Sommerville, and P.K.K., unpublished data). Taken as a whole, the data are consistent with the notion that cognitive skills are strongly linked to phonetic learning at the initial stage of phonetic development (Kuhl et al., 2008).

Figure 6. (A) Nine-month-old infants experienced 12 sessions of Spanish through natural interaction with a Spanish speaker. (B) The neural response to the Spanish phonetic contrast (d-t) and the proportion of gaze shifts during Spanish sessions were significantly correlated (from Conboy et al., unpublished data).

The “Social Brain” and Language Learning Mechanisms

While attention and the information provided by interaction with another may help explain social learning effects for language, it is also possible that social contexts are connected to language learning through even more fundamental mechanisms. Social interaction may activate brain mechanisms that invoke a sense of relationship between the self and other, as well as social understanding systems that link perception and action (Hari and Kujala, 2009).
Neuroscience research on shared neural systems for perception and action has a long tradition in speech research (Liberman and Mattingly, 1985), and interest in “mirror systems” for social cognition has re-invigorated this tradition (Kuhl and Meltzoff, 1996; Meltzoff and Decety, 2003; Pulvermuller, 2005; Rizzolatti, 2005; Rizzolatti and Craighero, 2004).

Figure 7. Perception-Action Brain Systems Respond to Speech in Infancy. (A) Neuromagnetic signals were recorded in newborns, 6-month-old infants (shown), and 12-month-old infants in the MEG machine while they listened to speech and nonspeech auditory signals. (B) Brain activation in response to speech recorded in auditory (top row) and motor (bottom row) brain regions showed no activation in the motor speech areas in the newborn in response to auditory speech, but increasing activity that was temporally synchronized between the auditory and motor brain regions in 6- and 12-month-old infants (from Imada et al., 2006).

Might the brain systems that link perception and production for speech be engaged when infants experience social interaction during language learning? The effects of Spanish language exposure extend to speech production, and provide evidence of an early coupling of sensory-motor learning in speech. The English-learning infants who were exposed to 12 sessions of Spanish (Conboy and Kuhl, 2010) showed subsequent changes in their patterns of vocalization (N. Ward et al., 2009, “Consequences of short-term language exposure in infancy on babbling,” poster presented at the 158th meeting of the Acoustical Society of America, San Antonio). When presented with language from a Spanish speaker (but not from an English speaker), a new pattern of infant vocalizations was evoked, one that reflected the prosodic patterns of Spanish rather than English.
This only occurred in response to Spanish, and only in infants who had been exposed to Spanish in the laboratory experiment. Neuroscience studies using speech and imaging techniques have the capacity to examine whether the brain systems involved in speech production are activated when infants listen to speech. Two new infant studies take a first step toward an answer to this developmental issue. Imada et al. (2006) used magnetoencephalography (MEG) to study newborns, 6-month-old infants, and 12-month-old infants while they listened to nonspeech signals, harmonics, and syllables (Figure 7). Dehaene-Lambertz and colleagues (2006) used fMRI to scan 3-month-old infants while they listened to sentences. Both studies show activation in brain areas responsible for speech production (inferior frontal cortex, Broca’s area) in response to auditorily presented speech. Imada et al. reported synchronized activation in response to speech in auditory and motor areas at 6 and 12 months, and Dehaene-Lambertz et al. reported activation in motor speech areas in response to sentences in 3-month-olds. Is activation of Broca’s area to the pure perception of speech present at birth? Newborns tested by Imada et al. (2006) showed no activation in motor speech areas for any signals, whereas auditory areas responded robustly to all signals, suggesting the possibility that perception-action linkages for speech develop by 3 months of age, as infants begin to produce vowel-like sounds. Using the tools of modern neuroscience, we can now ask how the brain systems responsible for speech perception and speech production forge links early in development, and whether these same brain areas are involved when language is presented socially, but not when language is presented through a disembodied source such as a television set.
Brain Rhythms, Cognitive Effects, and Language Learning

MEG studies will provide an opportunity to examine brain rhythms associated with broader cognitive abilities during speech learning. Brain oscillations in various frequency bands have been associated with cognitive abilities. Induced brain rhythms have been linked to attention and cognitive effort, and are of primary interest because MEG studies with adults have shown that cognitive effort is increased when processing nonnative speech (Zhang et al., 2005, 2009). In the adult MEG studies, participants listened to their native- and to nonnative-language sounds. The results indicated that when listening to native language, the brain’s activation was more focal, and faster, than when listening to nonnative-language sounds (Zhang et al., 2005). In other words, there was greater neural efficiency for native as opposed to nonnative speech processing. Training studies show that adults can improve nonnative phonetic perception when training occurs under more social learning conditions, and MEG measures before and after training indicate that neural efficiency increases after training (Zhang et al., 2009).

Similar patterns of neural inefficiency occur as young children learn words. Young children’s event-related brain potential responses are more diffuse and become more focally lateralized in the left hemisphere’s temporal regions as they develop (Conboy et al., 2008a; Durston et al., 2002; Mills et al., 1993, 1997; Tamm et al., 2002), and studies with young children with autism show this same pattern – more diffuse activation – when compared to typically developing children of the same age (Coffey-Corina et al., 2008). Brain rhythms may reflect these same processes in infants as they learn language. Brain oscillations in four frequency bands have been associated with cognitive effects: theta (4–7 Hz), alpha (8–12 Hz), beta (13–30 Hz), and gamma (30–100 Hz).
Resting gamma has been related to early language and cognitive skills in the first three years (Benasich et al., 2008). The induced theta rhythm has been linked to attention and cognitive effort, and will be of strong interest to speech researchers. Power in the theta band increases with memory load in adults tested in either verbal or nonverbal tasks (Gevins et al., 1997; Krause et al., 2000) and in 8-month-old infants tested in working memory tasks (Bell and Wolfe, 2007). Work examining brain rhythms in infants using speech stimuli is now underway using EEG with high-risk infants (C.R. Percaccio et al., 2010, “Native and nonnative speech-evoked responses in high-risk infant siblings,” abstracts of the International Meeting for Autism Research, May 2010, Philadelphia) and using MEG with typically developing infants (A.N. Bosseler et al., 2010, “Event-related fields and cortical rhythms to native and nonnative phonetic contrasts in infants and adults,” abstracts of the 17th International Conference on Biomagnetism), as they listen to native and nonnative speech. Comparisons between native and nonnative speech may allow us to examine whether there is increased cognitive effort associated with processing nonnative language, across ages and populations. We are also testing whether language presented in a social environment affects brain rhythms in a way that television and audiotape presentations do not. Neural efficiency is not observable with behavioral approaches – one promise of brain rhythms is that they provide the opportunity to compare the higher-level processes that likely underlie humans’ neural plasticity for language early in development in typical children, in children at risk for autism spectrum disorder, and in adults learning a second language. These kinds of studies may reveal the cortical dynamics underlying the “Critical Period” for language.
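Band power of the kind discussed above is conventionally quantified as spectral power summed within each frequency band. The sketch below is a generic illustration, not the analysis pipeline of the studies cited: it applies a plain discrete Fourier transform to a synthetic one-channel "signal" in which a 6 Hz theta component is assumed, for the example, to dominate a weaker 10 Hz alpha component.

```python
# Toy band-power computation (illustrative only; real MEG/EEG pipelines use
# windowing, FFTs, and artifact rejection). Band edges follow the text; the
# shared 30 Hz edge is counted in both beta and gamma in this sketch.

import cmath
import math

BANDS = {"theta": (4, 7), "alpha": (8, 12), "beta": (13, 30), "gamma": (30, 100)}

def band_power(signal, fs):
    """Return total spectral power per band for a real-valued signal."""
    n = len(signal)
    power = {name: 0.0 for name in BANDS}
    for k in range(1, n // 2):                 # skip DC; positive frequencies only
        freq = k * fs / n
        coef = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                   for i, x in enumerate(signal))
        for name, (lo, hi) in BANDS.items():
            if lo <= freq <= hi:
                power[name] += abs(coef) ** 2
    return power

# Synthetic 1 s "channel" at 128 Hz: strong 6 Hz (theta) + weak 10 Hz (alpha).
fs = 128
sig = [math.sin(2 * math.pi * 6 * t / fs) + 0.3 * math.sin(2 * math.pi * 10 * t / fs)
       for t in range(fs)]
p = band_power(sig, fs)
print(max(p, key=p.get))  # theta
```

With such per-band power values in hand, comparisons like "theta power during nonnative versus native speech" reduce to comparing the corresponding entries across conditions.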
These results underscore the importance of a social interest in speech early in development in both typical and atypical populations. Infants’ interest in “motherese,” the universal style with which adults address infants across cultures (Fernald and Simon, 1984; Grieser and Kuhl, 1988), provides a good metric of the value of a social interest in speech. The acoustic stretching in motherese, observed across languages, makes phonetic units more distinct from one another (Burnham et al., 2002; Englund, 2005; Kuhl et al., 1997; Liu et al., 2003, 2007). Mothers who use the exaggerated phonetic patterns to a greater extent when talking to their typically developing 2-month-old infants have infants who show significantly better performance in phonetic discrimination tasks when tested in the laboratory (Liu et al., 2003). New data show that the potential benefits of early motherese extend to the age of 5 years (Liu et al., 2009). Recent ERP studies indicate that the exaggerated patterns of motherese elicit in infants an enhanced N250 as well as increased neural synchronization at frontal-central-parietal sites (Zhang et al., personal communication). It is also noteworthy that children with Autism Spectrum Disorder (ASD) prefer to listen to non-speech rather than speech when given a choice, and this preference is strongly correlated with the children’s ERP brain responses to speech, as well as with the severity of their autistic symptoms (Kuhl et al., 2005b). Early speech measures may therefore provide an early biomarker of risk for ASD. Neuroscience studies that examine the coherence and causality of interaction between social and linguistic brain systems, in both typically developing children and children with ASD, will provide valuable new theoretical data as well as potentially improving the early diagnosis and treatment of children with autism.
Neurobiological Foundations of Communicative Learning

Humans are not the only species in which communicative learning is affected by social interaction (see Fitch et al., 2010, for review). Young zebra finches need visual interaction with a tutor bird to learn song in the laboratory (Eales, 1989). A zebra finch will override its innate preference for conspecific song if a Bengalese finch foster father feeds it, even when adult zebra finch males can be heard nearby (Immelmann, 1969). More recent data indicate that male zebra finches vary their songs across social contexts; songs produced when singing to females differ from those produced in isolation, and females prefer these “directed” songs (Woolley and Doupe, 2008). Moreover, gene expression in high-level auditory areas is involved in this kind of social context perception (Woolley and Doupe, 2008). White-crowned sparrows, which reject the audiotaped songs of alien species, learn the same alien songs when a live tutor sings them (Baptista and Petrinovich, 1986). In barn owls (Brainard and Knudsen, 1998) and white-crowned sparrows (Baptista and Petrinovich, 1986), a richer social environment extends the duration of the sensitive period for learning. Social contexts also advance song production in birds; male cowbirds respond to the social gestures and displays of females, which affect the rate, quality, and retention of song elements in their repertoires (West and King, 1988), and white-crowned sparrow tutors provide acoustic feedback that affects the repertoires of young birds (Nelson and Marler, 1994). Studies of the brain systems linking social and auditory-vocal learning in humans and birds may significantly advance theories in the near future (Doupe and Kuhl, 2008).

Neural Underpinnings of Cognitive and Social Influences on Language Learning

Our current model of neural commitment to language describes a significant role for cognitive processes such as attention in language learning (Kuhl et al., 2008).
Studies of brain rhythms in infants and other neuroscience research in the next decade promise to reveal the intricate relationships between language and cognitive processes. Language evolved to address a need for social communication, and evolution may have forged a link between language and the social brain in humans (Adolphs, 2003; Dunbar, 1998; Kuhl, 2007; Pulvermuller, 2005). Social interaction appears to be necessary for language learning in infants (Kuhl et al., 2003), and an individual infant’s social behavior is linked to their ability to learn new language material (Conboy et al., 2008a). In fact, social “gating” may explain why social factors play a far more significant role than previously realized in human learning across domains throughout our lifetimes (Meltzoff et al., 2009). If social factors “gate” computational learning, as proposed, infants would be protected from meaningless calculations – learning would be restricted to signals that derive from live humans rather than other sources (Doupe and Kuhl, 2008; Evans and Marler, 1995; Marler, 1991). Constraints of this kind appear to exist for infant imitation: when infants hear nonspeech sounds with the same frequency components as speech, they do not attempt to imitate them (Kuhl et al., 1991). Research has begun to appear on the development of the neural networks in humans that constitute the “social brain” and invoke a sense of relationship between the self and other, as well as on social understanding systems that link perception and action (Hari and Kujala, 2009).
Neuroscience studies using speech and imaging techniques are beginning to examine links between sensory and motor brain systems (Pulvermuller, 2005; Rizzolatti and Craighero, 2004), and the fact that MEG has now been demonstrated to be feasible for developmental studies of speech perception in infants during the first year of life (Imada et al., 2006) provides exciting opportunities. MEG studies of brain activation in infants during social versus nonsocial language experience will allow us to investigate cognitive effects via brain rhythms and also to examine whether social brain networks are activated differentially under the two conditions.

Many questions remain about the impact of cognitive skills and social interaction on natural speech and language learning. As reviewed, new data show the extensive interface between cognition and language and indicate that whether or not multiple languages are experienced in infancy affects cognitive brain systems. The idea that social interaction is integral to language learning has been raised previously for word learning; however, previous data and theorizing have not tied early phonetic learning to social factors. Doing so suggests a more fundamental connection between the motivation to learn socially and the mechanisms that enable language learning. Understanding how language learning, cognition, and social processing interact in development may ultimately explain the mechanisms underlying the critical period for language learning. Furthermore, understanding the mechanism underlying the critical period may help us develop methods that more effectively teach second languages to adult learners. Neuroscience studies over the next decade will lead the way on this theoretical work, and also advance our understanding of the practical results of training methods, both for adults learning new languages and for children with developmental disabilities struggling to learn their first language.
These advances will promote the science of learning in the domain of language and, potentially, shed light on human learning mechanisms more generally.

ACKNOWLEDGMENTS

The author and research reported here were supported by a grant from the National Science Foundation's Science of Learning Program to the University of Washington LIFE Center (SBE-0354453), and by grants from the National Institutes of Health (HD37954, HD55782, HD02274, DC04661).

REFERENCES

Abramson, A.S., and Lisker, L. (1970). Discriminability along the voicing continuum: cross-language tests. Proc. Int. Congr. Phon. Sci. 6, 569–573.
Adolphs, R. (2003). Cognitive neuroscience of human social behaviour. Nat. Rev. Neurosci. 4, 165–178.
Aslin, R.N., and Mehler, J. (2005). Near-infrared spectroscopy for functional studies of brain activity in human infants: promise, prospects, and challenges. J. Biomed. Opt. 10, 11009.
Baldwin, D.A. (1995). Understanding the link between joint attention and language. In Joint Attention: Its Origins and Role in Development, C. Moore and P.J. Dunham, eds. (Hillsdale, NJ: Lawrence Erlbaum Associates), pp. 131–158.
Baptista, L.F., and Petrinovich, L. (1986). Song development in the white-crowned sparrow: social factors and sex differences. Anim. Behav. 34, 1359–1371.
Bell, M.A., and Wolfe, C.D. (2007). Changes in brain functioning from infancy to early childhood: evidence from EEG power and coherence during working memory tasks. Dev. Neuropsychol. 31, 21–38.
Benasich, A.A., Gou, Z., Choudhury, N., and Harris, K.D. (2008). Early cognitive and language skills are linked to resting frontal gamma power across the first 3 years. Behav. Brain Res. 195, 215–222.
Best, C.C., and McRoberts, G.W. (2003). Infant perception of non-native consonant contrasts that adults assimilate in different ways. Lang. Speech 46, 183–216.
Bialystok, E. (1991). Language Processing in Bilingual Children (Cambridge University Press).
Bialystok, E. (1999). Cognitive complexity and attentional control in the bilingual mind. Child Dev. 70, 636–644.
Bialystok, E. (2001). Bilingualism in Development: Language, Literacy, and Cognition (New York: Cambridge University Press).
Bialystok, E., and Hakuta, K. (1994). In Other Words: The Science and Psychology of Second-Language Acquisition (New York: Basic Books).
Birdsong, D. (1992). Ultimate attainment in second language acquisition. Ling. Soc. Am. 68, 706–755.
Birdsong, D., and Molis, M. (2001). On the evidence for maturational constraints in second-language acquisition. J. Mem. Lang. 44, 235–249.
Bortfeld, H., Wruck, E., and Boas, D.A. (2007). Assessing infants' cortical response to speech using near-infrared spectroscopy. Neuroimage 34, 407–415.
Brainard, M.S., and Knudsen, E.I. (1998). Sensitive periods for visual calibration of the auditory space map in the barn owl optic tectum. J. Neurosci. 18, 3929–3942.
Brooks, R., and Meltzoff, A.N. (2002). The importance of eyes: how infants interpret adult looking behavior. Dev. Psychol. 38, 958–966.
Brooks, R., and Meltzoff, A.N. (2005). The development of gaze following and its relation to language. Dev. Sci. 8, 535–543.
Bruer, J.T. (2008). Critical periods in second language learning: distinguishing phenomena from explanation. In Brain, Behavior and Learning in Language and Reading Disorders, M. Mody and E. Silliman, eds. (New York: The Guilford Press), pp. 72–96.
Bruner, J. (1983). Child's Talk: Learning to Use Language (New York: W.W. Norton).
Burnham, D., Kitamura, C., and Vollmer-Conna, U. (2002). What's new, pussycat? On talking to babies and animals. Science 296, 1435.
Cardillo, G.C. (2010). Predicting the predictors: Individual differences in longitudinal relationships between infant phoneme perception, toddler vocabulary, and preschooler language and phonological awareness. Doctoral Dissertation, University of Washington.
Carlson, S.M., and Meltzoff, A.N. (2008). Bilingual experience and executive functioning in young children. Dev. Sci. 11, 282–298.
Cheour, M., Imada, T., Taulu, S., Ahonen, A., Salonen, J., and Kuhl, P.K. (2004). Magnetoencephalography is feasible for infant assessment of auditory discrimination. Exp. Neurol. 190 (Suppl 1), S44–S51.
Chomsky, N. (1959). Review of Skinner's Verbal Behavior. Language 35, 26–58.
Coffey-Corina, S., Padden, D., Kuhl, P.K., and Dawson, G. (2008). ERPs to words correlate with behavioral measures in children with Autism Spectrum Disorder. J. Acoust. Soc. Am. 123, 3742–3748.
Conboy, B.T., and Kuhl, P.K. (2010). Impact of second-language experience in infancy: brain measures of first- and second-language speech perception. Dev. Sci., in press. Published online June 28, 2010. 10.1111/j.1467-7687.2010.00973.x.
Conboy, B.T., Rivera-Gaxiola, M., Silva-Pereyra, J., and Kuhl, P.K. (2008a). Event-related potential studies of early language processing at the phoneme, word, and sentence levels. In Early Language Development: Volume 5. Bridging Brain and Behavior, Trends in Language Acquisition Research, A.D. Friederici and G. Thierry, eds. (Amsterdam, The Netherlands: John Benjamins), pp. 23–64.
Conboy, B.T., Sommerville, J.A., and Kuhl, P.K. (2008b). Cognitive control factors in speech perception at 11 months. Dev. Psychol. 44, 1505–1512.
de Boer, B., and Kuhl, P.K. (2003). Investigating the role of infant-directed speech with a computer model. ARLO 4, 129–134.
de Boysson-Bardies, B. (1993). Ontogeny of language-specific syllabic productions. In Developmental Neurocognition: Speech and Face Processing in the First Year of Life, B. de Boysson-Bardies, S. de Schonen, P. Jusczyk, P. McNeilage, and J. Morton, eds. (Dordrecht, Netherlands: Kluwer), pp. 353–363.
Dehaene-Lambertz, G., Dehaene, S., and Hertz-Pannier, L. (2002). Functional neuroimaging of speech perception in infants. Science 298, 2013–2015.
Dehaene-Lambertz, G., Hertz-Pannier, L., Dubois, J., Mériaux, S., Roche, A., Sigman, M., and Dehaene, S. (2006). Functional organization of perisylvian activation during presentation of sentences in preverbal infants. Proc. Natl. Acad. Sci. USA 103, 14240–14245.
Doupe, A.J., and Kuhl, P.K. (2008). Birdsong and human speech: common themes and mechanisms. In Neuroscience of Birdsong, H.P. Zeigler and P. Marler, eds. (Cambridge, England: Cambridge University Press), pp. 5–31.
Dunbar, R.I.M. (1998). The social brain hypothesis. Evol. Anthropol. 6, 178–190.
Durston, S., Thomas, K.M., Worden, M.S., Yang, Y., and Casey, B.J. (2002). The effect of preceding context on inhibition: an event-related fMRI study. Neuroimage 16, 449–453.
Eales, L.A. (1989). The influence of visual and vocal interaction on song learning in zebra finches. Anim. Behav. 37, 507–508.
Eimas, P.D. (1975). Auditory and phonetic coding of the cues for speech: discrimination of the /r–l/ distinction by young infants. Percept. Psychophys. 18, 341–347.
Eimas, P.D., Siqueland, E.R., Jusczyk, P., and Vigorito, J. (1971). Speech perception in infants. Science 171, 303–306.
Englund, K.T. (2005). Voice onset time in infant directed speech over the first six months. First Lang. 25, 219–234.
Evans, C.S., and Marler, P. (1995). Language and animal communication: parallels and contrasts. In Comparative Approaches to Cognitive Science: Complex Adaptive Systems, H.L. Roitblat and J.-A. Meyer, eds. (Cambridge, MA: MIT Press), pp. 341–382.
Ferguson, C.A., Menn, L., and Stoel-Gammon, C., eds. (1992). Phonological Development: Models, Research, Implications (Timonium, MD: York Press).
Fernald, A., and Simon, T. (1984). Expanded intonation contours in mothers' speech to newborns. Dev. Psychol. 20, 104–113.
Fiser, J., and Aslin, R.N. (2002). Statistical learning of new visual feature combinations by infants. Proc. Natl. Acad. Sci. USA 99, 15822–15826.
Fitch, W.T., Huber, L., and Bugnyar, T. (2010). Social cognition and the evolution of language: constructing cognitive phylogenies. Neuron 65, 795–814.
Flege, J.E. (1991). Age of learning affects the authenticity of voice-onset time (VOT) in stop consonants produced in a second language. J. Acoust. Soc. Am. 89, 395–411.
Flege, J.E., Yeni-Komshian, G.H., and Liu, S. (1999). Age constraints on second-language acquisition. J. Mem. Lang. 41, 78–104.
Friederici, A.D. (2005). Neurophysiological markers of early language acquisition: from syllables to sentences. Trends Cogn. Sci. 9, 481–488.
Friederici, A.D., and Wartenburger, I. (2010). Language and Brain. Wiley Interdisciplinary Reviews: Cognitive Science 1, 150–159.
Gernsbacher, M.A., and Kaschak, M.P. (2003). Neuroimaging studies of language production and comprehension. Annu. Rev. Psychol. 54, 91–114.
Gevins, A., Smith, M.E., McEvoy, L., and Yu, D. (1997). High-resolution EEG mapping of cortical activation related to working memory: effects of task difficulty, type of processing, and practice. Cereb. Cortex 7, 374–385.
Golestani, N., and Pallier, C. (2007). Anatomical correlates of foreign speech sound production. Cereb. Cortex 17, 929–934.
Gratton, G., and Fabiani, M. (2001). Shedding light on brain function: the event-related optical signal. Trends Cogn. Sci. 5, 357–363.
Grieser, D.L., and Kuhl, P.K. (1988). Maternal speech to infants in a tonal language: support for universal prosodic features in motherese. Dev. Psychol. 24, 14–20.
Guion, S.G., and Pederson, E. (2007). Investigating the role of attention in phonetic learning. In Language Experience in Second Language Speech Learning: In Honor of James Emil Flege, O.-S. Bohn and M. Munro, eds. (Amsterdam: John Benjamins), pp. 55–77.
Hari, R., and Kujala, M.V. (2009). Brain basis of human social interaction: from concepts to brain imaging. Physiol. Rev. 89, 453–479.
Hauser, M.D., Newport, E.L., and Aslin, R.N. (2001). Segmentation of the speech stream in a non-human primate: statistical learning in cotton-top tamarins. Cognition 78, B53–B64.
Homae, F., Watanabe, H., Nakano, T., Asakawa, K., and Taga, G. (2006). The right hemisphere of sleeping infant perceives sentential prosody. Neurosci. Res. 54, 276–280.
Imada, T., Zhang, Y., Cheour, M., Taulu, S., Ahonen, A., and Kuhl, P.K. (2006). Infant speech perception activates Broca's area: a developmental magnetoencephalography study. Neuroreport 17, 957–962.
Immelmann, K. (1969). Song development in the zebra finch and other estrildid finches. In Bird Vocalizations, R. Hinde, ed. (London: Cambridge University Press), pp. 61–74.
Johnson, J.S., and Newport, E.L. (1989). Critical period effects in second language learning: the influence of maturational state on the acquisition of English as a second language. Cognit. Psychol. 21, 60–99.
Kirkham, N.Z., Slemmer, J.A., and Johnson, S.P. (2002). Visual statistical learning in infancy: evidence for a domain general learning mechanism. Cognition 83, B35–B42.
Knudsen, E.I. (2004). Sensitive periods in the development of the brain and behavior. J. Cogn. Neurosci. 16, 1412–1425.
Krause, C.M., Sillanmäki, L., Koivisto, M., Saarela, C., Häggqvist, A., Laine, M., and Hämäläinen, H. (2000). The effects of memory load on event-related EEG desynchronization and synchronization. Clin. Neurophysiol. 111, 2071–2078.
Kuhl, P.K. (2004). Early language acquisition: cracking the speech code. Nat. Rev. Neurosci. 5, 831–843.
Kuhl, P.K. (2007). Is speech learning 'gated' by the social brain? Dev. Sci. 10, 110–120.
Kuhl, P.K. (2009). Early language acquisition: neural substrates and theoretical models. In The Cognitive Neurosciences IV, M.S. Gazzaniga, ed. (Cambridge, MA: MIT Press), pp. 837–854.
Kuhl, P.K., and Meltzoff, A.N. (1982). The bimodal perception of speech in infancy. Science 218, 1138–1141.
Kuhl, P.K., and Meltzoff, A.N. (1996). Infant vocalizations in response to speech: vocal imitation and developmental change. J. Acoust. Soc. Am. 100, 2425–2438.
Kuhl, P.K., and Rivera-Gaxiola, M. (2008). Neural substrates of language acquisition. Annu. Rev. Neurosci. 31, 511–534.
Kuhl, P.K., Williams, K.A., and Meltzoff, A.N. (1991). Cross-modal speech perception in adults and infants using nonspeech auditory stimuli. J. Exp. Psychol. Hum. Percept. Perform. 17, 829–840.
Kuhl, P.K., Williams, K.A., Lacerda, F., Stevens, K.N., and Lindblom, B. (1992). Linguistic experience alters phonetic perception in infants by 6 months of age. Science 255, 606–608.
Kuhl, P.K., Andruski, J.E., Chistovich, I.A., Chistovich, L.A., Kozhevnikova, E.V., Ryskina, V.L., Stolyarova, E.I., Sundberg, U., and Lacerda, F. (1997). Cross-language analysis of phonetic units in language addressed to infants. Science 277, 684–686.
Kuhl, P.K., Tsao, F.-M., and Liu, H.-M. (2003). Foreign-language experience in infancy: effects of short-term exposure and social interaction on phonetic learning. Proc. Natl. Acad. Sci. USA 100, 9096–9101.
Kuhl, P.K., Conboy, B.T., Padden, D., Nelson, T., and Pruitt, J. (2005a). Early speech perception and later language development: implications for the 'critical period.' Lang. Learn. Dev. 1, 237–264.
Kuhl, P.K., Coffey-Corina, S., Padden, D., and Dawson, G. (2005b). Links between social and linguistic processing of speech in preschool children with autism: behavioral and electrophysiological measures. Dev. Sci. 8, F1–F12.
Kuhl, P.K., Stevens, E., Hayashi, A., Deguchi, T., Kiritani, S., and Iverson, P. (2006). Infants show a facilitation effect for native language phonetic perception between 6 and 12 months. Dev. Sci. 9, F13–F21.
Kuhl, P.K., Conboy, B.T., Coffey-Corina, S., Padden, D., Rivera-Gaxiola, M., and Nelson, T. (2008). Phonetic learning as a pathway to language: new data and native language magnet theory expanded (NLM-e). Philos. Trans. R. Soc. Lond. B Biol. Sci. 363, 979–1000.
Ladefoged, P. (2001). Vowels and Consonants: An Introduction to the Sounds of Language (Oxford: Blackwell Publishers).
Lasky, R.E., Syrdal-Lasky, A., and Klein, R.E. (1975). VOT discrimination by four to six and a half month old infants from Spanish environments. J. Exp. Child Psychol. 20, 215–225.
Lenneberg, E. (1967). Biological Foundations of Language (New York: John Wiley and Sons).
Liberman, A.M., and Mattingly, I.G. (1985). The motor theory of speech perception revised. Cognition 21, 1–36.
Liu, H.-M., Kuhl, P.K., and Tsao, F.-M. (2003). An association between mothers' speech clarity and infants' speech discrimination skills. Dev. Sci. 6, F1–F10.
Liu, H.-M., Tsao, F.-M., and Kuhl, P.K. (2007). Acoustic analysis of lexical tone in Mandarin infant-directed speech. Dev. Psychol. 43, 912–917.
Liu, H.-M., Tsao, F.-M., and Kuhl, P.K. (2009). Age-related changes in acoustic modifications of Mandarin maternal speech to preverbal infants and five-year-old children: a longitudinal study. J. Child Lang. 36, 909–922.
Lotto, A.J., Sato, M., and Diehl, R. (2004). Mapping the task for the second language learner: the case of Japanese acquisition of /r/ and /l/. In From Sound to Sense, J. Slitka, S. Manuel, and M. Matthies, eds. (Cambridge, MA: MIT Press), pp. C181–C186.
Marler, P. (1991). The instinct to learn. In The Epigenesis of Mind: Essays on Biology and Cognition, S. Carey and R. Gelman, eds. (Hillsdale, NJ: Lawrence Erlbaum Associates), pp. 37–66.
Mayberry, R.I., and Lock, E. (2003). Age constraints on first versus second language acquisition: evidence for linguistic plasticity and epigenesis. Brain Lang. 87, 369–384.
Maye, J., Werker, J.F., and Gerken, L. (2002). Infant sensitivity to distributional information can affect phonetic discrimination. Cognition 82, B101–B111.
Maye, J., Weiss, D.J., and Aslin, R.N. (2008). Statistical phonetic learning in infants: facilitation and feature generalization. Dev. Sci. 11, 122–134.
Meltzoff, A.N., and Decety, J. (2003). What imitation tells us about social cognition: a rapprochement between developmental psychology and cognitive neuroscience. Philos. Trans. R. Soc. Lond. B Biol. Sci. 358, 491–500.
Meltzoff, A.N., Kuhl, P.K., Movellan, J., and Sejnowski, T. (2009). Foundations for a new science of learning. Science 325, 284–288.
Mills, D., Coffey-Corina, S., and Neville, H. (1993). Language acquisition and cerebral specialization in 20-month-old infants. J. Cogn. Neurosci. 5, 317–334.
Mills, D., Coffey-Corina, S., and Neville, H. (1997). Language comprehension and cerebral specialization from 13 to 20 months. Dev. Neuropsychol. 13, 397–445.
Mundy, P., and Gomes, A. (1998). Individual differences in joint attention skill development in the second year. Infant Behav. Dev. 21, 469–482.
Nelson, D.A., and Marler, P. (1994). Selection-based learning in bird song development. Proc. Natl. Acad. Sci. USA 91, 10498–10501.
Neville, H.J., Coffey, S.A., Lawson, D.S., Fischer, A., Emmorey, K., and Bellugi, U. (1997). Neural systems mediating American sign language: effects of sensory experience and age of acquisition. Brain Lang. 57, 285–308.
Newport, E. (1990). Maturational constraints on language learning. Cogn. Sci. 14, 11–28.
Newport, E.L., Bavelier, D., and Neville, H.J. (2001). Critical thinking about critical periods: perspectives on a critical period for language acquisition. In Language, Brain, and Cognitive Development: Essays in Honor of Jacques Mehler, E. Dupoux, ed. (Cambridge, MA: MIT Press), pp. 481–502.
Ortiz-Mantilla, S., Choe, M.-S., Flax, J., Grant, P.E., and Benasich, A.A. (2010). Associations between the size of the amygdala in infancy and language abilities during the preschool years in normally developing children. Neuroimage 49, 2791–2799.
Patterson, M.L., and Werker, J.F. (1999). Matching phonetic information in lips and voice is robust in 4.5-month-old infants. Infant Behav. Dev. 22, 237–247.
Peña, M., Bonatti, L.L., Nespor, M., and Mehler, J. (2002). Signal-driven computations in speech processing. Science 298, 604–607.
Petitto, L.A., and Marentette, P.F. (1991). Babbling in the manual mode: evidence for the ontogeny of language. Science 251, 1493–1496.
Posner, M.I., ed. (2004). Cognitive Neuroscience of Attention (New York: Guilford Press).
Pulvermuller, F. (2005). Brain mechanisms linking language to action. Nat. Rev. Neurosci. 6, 574–582.
Rabiner, L.R., and Juang, B.-H. (1993). Fundamentals of Speech Recognition (Englewood Cliffs, NJ: Prentice Hall).
Rivera-Gaxiola, M., Silva-Pereyra, J., and Kuhl, P.K. (2005). Brain potentials to native and non-native speech contrasts in 7- and 11-month-old American infants. Dev. Sci. 8, 162–172.
Rizzolatti, G. (2005). The mirror neuron system and imitation. In Perspectives on Imitation: From Neuroscience to Social Science – I: Mechanisms of Imitation and Imitation in Animals, S. Hurley and N. Chater, eds. (Cambridge, MA: MIT Press), pp. 55–76.
Rizzolatti, G., and Craighero, L. (2004). The mirror-neuron system. Annu. Rev. Neurosci. 27, 169–192.
Saffran, J.R., Aslin, R.N., and Newport, E.L. (1996). Statistical learning by 8-month-old infants. Science 274, 1926–1928.
Saffran, J.R., Johnson, E.K., Aslin, R.N., and Newport, E.L. (1999). Statistical learning of tone sequences by human infants and adults. Cognition 70, 27–52.
Saffran, J.R., Werker, J.F., and Werner, L.A. (2006). The infant's auditory world: hearing, speech, and the beginnings of language. In Handbook of Child Psychology: Volume 2, Cognition, Perception and Language VI, W. Damon and R.M. Lerner, series eds., R. Siegler and D. Kuhn, volume eds. (New York: Wiley), pp. 58–108.
Skinner, B.F. (1957). Verbal Behavior (New York: Appleton-Century-Crofts).
Taga, G., and Asakawa, K. (2007). Selectivity and localization of cortical response to auditory and visual stimulation in awake infants aged 2 to 4 months. Neuroimage 36, 1246–1252.
Tamm, L., Menon, V., and Reiss, A.L. (2002). Maturation of brain function associated with response inhibition. J. Am. Acad. Child Adolesc. Psychiatry 41, 1231–1238.
Teinonen, T., Aslin, R.N., Alku, P., and Csibra, G. (2008). Visual speech contributes to phonetic learning in 6-month-old infants. Cognition 108, 850–855.
Teinonen, T., Fellman, V., Näätänen, R., Alku, P., and Huotilainen, M. (2009). Statistical language learning in neonates revealed by event-related brain potentials. BMC Neurosci. 10, 21.
Tomasello, M. (2003a). Constructing a Language: A Usage-Based Theory of Language Acquisition (Cambridge, MA: Harvard University Press).
Tomasello, M. (2003b). The key is social cognition. In Language and Thought, D. Gentner and S. Kuczaj, eds. (Cambridge, MA: MIT Press), pp. 47–51.
Tsao, F.-M., Liu, H.-M., and Kuhl, P.K. (2004). Speech perception in infancy predicts language development in the second year of life: a longitudinal study. Child Dev. 75, 1067–1084.
Tsao, F.-M., Liu, H.-M., and Kuhl, P.K. (2006). Perception of native and nonnative affricate-fricative contrasts: cross-language tests on adults and infants. J. Acoust. Soc. Am. 120, 2285–2294.
Vygotsky, L.S. (1962). Thought and Language (Cambridge, MA: MIT Press).
Wang, Y., Kuhl, P.K., Chen, C., and Dong, Q. (2009). Sustained and transient language control in the bilingual brain. Neuroimage 47, 414–422.
Weber-Fox, C.M., and Neville, H.J. (1999). Functional neural subsystems are differentially affected by delays in second language immersion: ERP and behavioral evidence in bilinguals. In Second Language Acquisition and the Critical Period Hypothesis, D. Birdsong, ed. (Mahwah, NJ: Lawrence Erlbaum Associates), pp. 23–38.
Werker, J.F., and Curtin, S. (2005). PRIMIR: a developmental framework of infant speech processing. Lang. Learn. Dev. 1, 197–234.
Werker, J.F., and Lalonde, C. (1988). Cross-language speech perception: initial capabilities and developmental change. Dev. Psychol. 24, 672–683.
Werker, J.F., and Tees, R.C. (1984). Cross-language speech perception: evidence for perceptual reorganization during the first year of life. Infant Behav. Dev. 7, 49–63.
Werker, J.F., Pons, F., Dietrich, C., Kajikawa, S., Fais, L., and Amano, S. (2007). Infant-directed speech supports phonetic category learning in English and Japanese. Cognition 103, 147–162.
West, M.J., and King, A.P. (1988). Female visual displays affect the development of male song in the cowbird. Nature 334, 244–246.
White, L., and Genesee, F. (1996). How native is near-native? The issue of ultimate attainment in adult second language acquisition. Second Lang. Res. 12, 233–265.
Woolley, S.C., and Doupe, A.J. (2008). Social context-induced song variation affects female behavior and gene expression. PLoS Biol. 6, e62.
Yeni-Komshian, G.H., Flege, J.E., and Liu, S. (2000). Pronunciation proficiency in the first and second languages of Korean–English bilinguals. Bilingualism: Lang. Cogn. 3, 131–149.
Yoshida, K.A., Pons, F., Cady, J.C., and Werker, J.F. (2006). Distributional learning and attention in phonological development. Paper presented at International Conference on Infant Studies, Kyoto, Japan, 19–23 June.
Yoshida, K.A., Pons, F., Maye, J., and Werker, J.F. (2010). Distributional phonetic learning at 10 months of age. Infancy 15, 420–433.
Zhang, Y., Kuhl, P.K., Imada, T., Kotani, M., and Tohkura, Y. (2005). Effects of language experience: neural commitment to language-specific auditory patterns. Neuroimage 26, 703–720.
Zhang, Y., Kuhl, P.K., Imada, T., Iverson, P., Pruitt, J., Stevens, E.B., Kawakatsu, M., Tohkura, Y., and Nemoto, I. (2009). Neural signatures of phonetic learning in adulthood: a magnetoencephalography study. Neuroimage 46, 226–240.

Neuron 67, September 9, 2010 ©2010 Elsevier Inc.
This document is copyrighted by the American Psychological Association or one of its allied publishers. This article is intended solely for the personal use of the individual user and is not to be disseminated broadly.

Psychological Bulletin, 2007, Vol. 133, No. 4, 638–650. Copyright 2007 by the American Psychological Association. 0033-2909/07/$12.00 DOI: 10.1037/0033-2909.133.4.638

Age of Acquisition: Its Neural and Computational Mechanisms

Arturo E. Hernandez (University of Houston) and Ping Li (University of Richmond)

The acquisition of new skills over a life span is a remarkable human ability. This ability, however, is constrained by age of acquisition (AoA); that is, the age at which learning occurs significantly affects the outcome. This is most clearly reflected in domains such as language, music, and athletics. This article provides a perspective on the neural and computational mechanisms underlying AoA in language acquisition. The authors show how AoA modulates both monolingual lexical processing and bilingual language acquisition. They consider the conditions under which syntactic processing and semantic processing may be differentially sensitive to AoA effects in second-language acquisition. The authors conclude that AoA effects are pervasive and that the neural and computational mechanisms underlying learning and sensorimotor integration provide a general account of these effects.

Keywords: bilingualism, second language acquisition, age of acquisition, sensorimotor learning, computational modeling

Arturo E. Hernandez, Department of Psychology, University of Houston; Ping Li, Department of Psychology, University of Richmond. The writing of this article has been made possible by National Institutes of Health Grant 1 R03HD050313-01 to Arturo E. Hernandez and National Science Foundation Grant BCS-0131829 to Ping Li. We gratefully acknowledge our former mentor Elizabeth Bates, whose many insights on language and cognition have shaped the ideas presented here. We thank Michael Ullman, Leigh Leasure, and Brian MacWhinney for helpful comments related to the theoretical issues discussed here. We also thank Gedeon Deak and Catriona Morrison for comments on previous versions of this article. Correspondence concerning this article should be addressed to Arturo E. Hernandez, Department of Psychology, 126 Heyne Building, University of Houston, Houston, TX 77204-5022. E-mail:

Personal anecdotes and scientific evidence both confirm that it is important to learn a second language (L2) early. In the 125th anniversary Special Issue of Science (Kennedy & Norman, 2005), age of acquisition (AoA) and critical periods were included in a list of 100 important science questions to be addressed in the next few decades. Linguists, psychologists, and cognitive scientists have made significant progress in understanding AoA; however, important questions remain unanswered: What neural substrates underlie AoA, if any? Are AoA effects specific to L2 learning or are they present in language in general? And, how is AoA reflected in both linguistic and nonlinguistic domains? In this review, we approach these questions broadly and consider general computational and neural principles that may contribute to AoA effects in both linguistic and nonlinguistic domains.

What Is Age of Acquisition?

AoA, in its broadest sense, refers to the age at which a concept or skill is acquired. AoA effects have been addressed in at least three distinct literatures: the age at which skills are acquired in nonlinguistic domains, the age at which a lexical item is acquired in monolingual learners, and the age at which L2 learning begins. In the first and third literatures, researchers have attempted to understand how early versus late learning affects successful acquisition. This issue is often discussed in terms of a critical period, or sensitive period, of learning. In the second literature, researchers study the age at which lexical items are acquired in monolingual learners and how AoA affects the processing of these items. Do these three types of AoA effects share a common mechanism? If so, what might that mechanism be? Our review attempts to provide an integrated answer to these questions.

Age of Acquisition in Nonlinguistic Domains

AoA effects have been found in many nonlinguistic domains. For example, early deprivation or alteration of sensory input leads to impaired sensory perception in many species. The most well-known examples involve binocular deprivation during a critical period leading to a reduction in stereoscopic depth perception among cats, monkeys, rats, mice, ferrets, and humans (Banks, Aslin, & Letson, 1975; Fagiolini, Pizzorusso, Berardi, Domenici, & Maffei, 1994; Harwerth, Smith, Duncan, Crawford, & von Noorden, 1986; Huang et al., 1999; Issa, Trachtenberg, Chapman, Zahs, & Stryker, 1999; Olson & Freeman, 1980). Critical periods are also found in the calibration of auditory maps by visual input (Brainard & Knudsen, 1998). Moreover, sensory deprivation can lead to problems in the motor system. For example, disruption of binocular experience adversely affects smooth pursuit of moving objects and diminishes stability of the eyes when viewing stationary targets (Norcia, 1996). Hence, problems in the sensory domain lead to abnormalities of motor function.

AoA also affects song learning in birds. Learning generally occurs in three phases: sensory, sensorimotor, and crystallized (Brainard & Doupe, 2002). During the sensory period, a bird listens to the song of a tutor and forms a template in memory. Lack of exposure to adult song during this phase leads to irregular songs that contain some species-specific characteristics. During the sensorimotor phase, the bird learns to match the song to the template. Songs are fine-tuned through practice; auditory feedback is crucial during this time (note that sensory and sensorimotor phases may overlap for some birds). In the final, crystallized phase, birds are mature and can produce a species-specific song, but they often cannot learn new songs. The fact that early acquisition of birdsong is characterized by sensory and sensorimotor processing is of particular importance in this review (see Doupe, Perkel, Reiner, & Stern, 2005).

Finally, AoA effects have been observed in high-level nonlinguistic functions. AoA effects are found in musicians at both the behavioral and the neural levels. Absolute pitch appears to be learned by speakers of nontonal languages only before the age of 7 years (Deutsch, Henthorn, Marvin, & Xu, 2006; Trainor, 2005). In addition, the ability to synchronize motor responses to a visually presented flashing square differs significantly between groups of professional musicians as a function of AoA, even when these groups are matched for years of musical experience, years of formal training, and hours of current practice (Watanabe, Savion-Lemieux, & Penhune, 2007). At the neural level, early musical training correlates with the size of digit representations in motor regions of the cortex (Elbert, Pantev, Wienbruch, Rockstroh, & Taub, 1995). Similarly, Schlaug, Jancke, Huang, Staiger, and Steinmetz (1995) found that the anterior corpus callosum was larger in musicians than nonmusicians and largest in those who learned to play before the age of 7 years. Hence, AoA effects on both behavior and neural representations in the music domain appear to reflect sensorimotor processing.
AoA effects are generally considered evidence for critical periods, time windows within which learning outcomes are optimal and after which the ability to learn drastically decreases. Critical periods, however, may be only one instantiation of AoA effects. A crucial aspect of these effects in nonlinguistic domains is that they impact both sensory and motor systems (for further discussion of critical and sensitive periods, see Knudsen, 2004).

Age of Acquisition in Monolingual Individuals

Researchers discovered over 30 years ago that early learned words are processed differently than late learned words (Carroll & White, 1973; Gilhooly & Watson, 1981); only recently, however, has this difference attracted significant interest among psycholinguists as an AoA effect. Using a number of experimental paradigms, researchers have shown that the age of word acquisition significantly affects the speed and accuracy with which a word is accessed and processed (Barry, Morrison, & Ellis, 1997; Cuetos, Ellis, & Alvarez, 1999; Ellis & Morrison, 1998; Gerhand & Barry, 1998, 1999; Gilhooly & Gilhooly, 1979; Lewis, 1999; Meschyan & Hernandez, 2002; Morrison, Chappell, & Ellis, 1997; Morrison & Ellis, 1995, 2000). Early learned words typically elicit faster response times than late learned words in word reading, auditory and visual lexical decision, picture naming, and face recognition. Researchers have not, however, agreed on the exact mechanisms underlying this AoA effect. The controversy lies in the identification of the locus of AoA effects, in particular, with regard to whether AoA reflects endogenous properties of the lexicon or exogenous properties of the learning process. We now turn to the various theoretical accounts.

Theoretical accounts of age of acquisition. Brown and Watson (1987) proposed a phonological completeness hypothesis to account for AoA effects in word learning.
In this view, early learned words are stored and represented holistically, whereas late learned words are represented in a fragmented fashion and require reconstruction or reassembly before the whole phonological shape can be produced. Thus, early learned words are pronounced more quickly than late learned words. This hypothesis, however, has not been supported in a number of studies in several domains. First, the phonological completeness hypothesis has difficulty accounting for AoA effects in tasks that do not involve overt naming, such as face recognition (Moore & Valentine, 1998, 1999) and object processing (Moore, Smith-Spark, & Valentine, 2004). Second, reaction times are faster to early than to late learned words when participants are asked to perform a segmentation task (Monaghan & Ellis, 2002a). This pattern is contrary to what the hypothesis predicts. If late learned words are acquired in a fragmented form, they should be easier to segment than early learned words. These findings have led researchers to consider alternative hypotheses, in particular, hypotheses about whether lexical AoA effects are due to a more general mechanism.

Several general accounts of AoA effects have been proposed (for a recent review of the literature see Juhasz, 2005). The cumulative frequency hypothesis maintains that word frequency consists of additive effects across the lifetime of a word. Hence, early learned words will be encountered more times across many years of use than late learned words, even if they are low in frequency (Lewis, Gerhand, & Ellis, 2001). Lewis et al. have provided evidence for this hypothesis using mathematical modeling. However, research with older adults has not supported it: the hypothesis predicts that AoA effects should decrease as the language user becomes older.
The difference, for example, between words learned at age 3 years versus words learned at age 8 years should be large when the learner reaches age 14 years (these words have been encountered for 11 and 6 years, respectively), but the difference should be smaller when the learner reaches age 60 years (the same words have been encountered for 57 and 52 years, respectively). Morrison, Hirsh, Chappell, and Ellis (2002) found the standard AoA effect but also found that it did not increase with age. Such findings provide compelling evidence against the cumulative frequency hypothesis.1

Footnote 1: Recently, Zevin and Seidenberg (2004) have suggested a variant of the cumulative frequency hypothesis in which both cumulative frequency and frequency trajectory play an important role. Frequency trajectory, unlike cumulative frequency, refers to whether a word is encountered more frequently in childhood than in adulthood (e.g., potty, stroller) or vice versa (e.g., fax, merlot). In their view, AoA is difficult to quantify because it correlates highly with other types of information; hence, it may be impossible to isolate. According to Zevin and Seidenberg, frequency trajectory may be a more accurate measure of true AoA.

This document is copyrighted by the American Psychological Association or one of its allied publishers. This article is intended solely for the personal use of the individual user and is not to be disseminated broadly. 640 HERNANDEZ AND LI

The semantic locus hypothesis claims that early learned words have a semantic advantage over late learned words because they enter the representational network first and affect the semantic representations of later learned words (Brysbaert, Van Wijnendaele, & De Deyne, 2000; Steyvers & Tenenbaum, 2005). Brysbaert et al. (2000) found that participants generated associates faster to early learned words than to late learned words. Similarly, Morrison and Gibbons (2006) found AoA effects in a "living" versus "nonliving" semantic categorization task, but only for the "living" items. Research with neural networks has found that early learned words have more semantic connections to other words than do late learned words (Steyvers & Tenenbaum, 2005) and that early learned words establish a basic semantic structure that allows later word learning to accelerate (for a discussion of the "vocabulary spurt" in lexical acquisition, see Li, Zhao, & MacWhinney, in press). Hence, AoA effects may be due, at least in part, to differences in semantic processing. According to the semantic locus hypothesis, early learned words are conceptually more enriched than late learned words (e.g., they have more semantic connections to other words), and these representations affect later learning.2

In monolingual individuals, a linguistic form maps in a consistent and relatively straightforward manner to its corresponding conceptual representation. In bilingual individuals, however, each concept maps to two forms, one for each language. Thus, the semantic locus hypothesis suggests that AoA effects should transfer to a second language. Bilingual researchers have long argued for a unitary semantic store with separate lexical form representations for each language (Altarriba, 1992; Kroll & de Groot, 1997, 2005; Kroll & Tokowicz, 2005; Kroll, Tokowicz, & Nicol, 2001; Potter, So, von Eckardt, & Feldman, 1984; Schreuder & Weltens, 1993; Sholl, Sankaranarayanan, & Kroll, 1995). Furthermore, they have argued that connections between concepts and L2 lexical items are mediated initially through the first language. As proficiency (i.e., language ability) improves, connections between L2 and the conceptual store are strengthened.
The conceptual/semantic processing of L2 items should reflect the overall organization of the conceptual system because semantic processing occurs at the conceptual level.3 If AoA effects are purely a product of early items having more semantic connections to other items than do late items, then an L2 lexical item should inherit the L1's AoA. Empirical studies with L2 speakers, however, have not supported this prediction. Researchers have found that the speed of L2 lexical access is determined by the age at which words are acquired in the second language (L2 AoA) and not the age at which the corresponding words are learned in the native language (L1 AoA; Hirsh, Morrison, Gaset, & Carnicer, 2003; Izura & Ellis, 2004). Thus, these effects appear to be due to the order in which words enter a particular language, irrespective of when the language was learned (Hirsh et al., 2003). To account for this finding, the semantic locus hypothesis would have to assume separate semantic stores for each language. Researchers have not yet determined the exact mode of bilingual lexical representation (for reviews, see French & Jacquet, 2004; Kroll & Tokowicz, 2005); however, most of the evidence favors a single semantic store. Thus, it seems reasonable to assume that AoA exerts its effects at the lexical level rather than at the semantic level (for further discussion along these lines with monolingual individuals, see Belke, Brysbaert, Meyer, & Ghyselinck, 2005).

Computational accounts of age of acquisition. Some connectionist models have been designed to explicitly capture mechanisms of AoA (Ellis & Lambon Ralph, 2000; Li, Farkas, & MacWhinney, 2004; M. A. Smith, Cottrell, & Anderson, 2001). Ellis and Lambon Ralph trained an auto-associative network on sets of words that were introduced at different times.
They showed that the network displayed strong AoA effects, as indicated by lower recognition errors for early than for late learned words when the words were presented to the network in stages, that is, trained on one set of words before a second set was introduced. Using the same model without staged learning, M. A. Smith et al. (2001) showed that recognition errors decreased as a function of learning order; early learned words had lower final recognition errors than did late learned words. Li, Farkas, and MacWhinney (2004) further explored AoA effects using a self-organizing neural network relying on Hebbian learning. AoA effects appeared such that early and late acquired words showed structural differences in organization as a natural outcome of learning order. More recently, Lambon Ralph and Ehsan (2006) showed that their connectionist network could capture AoA effects as a function of the consistency or predictability in the input-to-output mapping relations: arbitrary mappings elicited larger AoA effects compared with less arbitrary mappings. In each case, AoA effects appeared to reflect increased rigidity (reduced plasticity) of the network as a result of the learning process. Early learned words influenced the structural organization of the distributed mental lexicon more than late learned words, and had better optimized representations (e.g., as captured by word density measures in Li, Farkas, & MacWhinney, 2004).4 These connectionist models provide a general account of AoA that is not specific to any particular domain (i.e., phonology, semantics, etc.); as such, the account is compatible with aspects of several hypotheses. For example, the semantic locus hypothesis also posits that early learned words help shape the (semantic) network. Similarly, the phonological completeness hypothesis conceptualizes early learned words as more complete than late learned words and posits that these words form a foundation for the less complete words acquired later. 
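The staged-learning logic these simulations share can be illustrated with a minimal sketch. This is not a reimplementation of any of the cited models: the network below is a plain linear auto-associator trained by the delta rule, and the pattern dimensions, set sizes, learning rate, and epoch counts are arbitrary choices for illustration. Random patterns stand in for word forms; "early" words are trained alone in a first stage before "late" words join.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n = 20, 10
early = rng.standard_normal((n, dim))   # patterns for words introduced first
late = rng.standard_normal((n, dim))    # patterns for words introduced later

W = np.zeros((dim, dim))                # weights of a linear auto-associator

def train(X, epochs, lr=0.05):
    """Gradient descent (delta rule) on the reconstruction error of X."""
    global W
    for _ in range(epochs):
        W -= lr * X.T @ (X @ W - X) / len(X)

def mse(X):
    """Mean squared reconstruction error for a set of patterns."""
    return float(np.mean((X @ W - X) ** 2))

train(early, 200)                       # stage 1: only the early vocabulary
train(np.vstack([early, late]), 200)    # stage 2: the full vocabulary

# Early items keep an advantage even though both sets trained in stage 2.
print(mse(early) < mse(late))  # expected: True
```

The early set's advantage here reflects only its training head start under a fixed learning rate; the cited models additionally build in mechanisms of entrenchment and reduced plasticity, which sharpen the same ordering effect.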
Hence, loss of plasticity may be a property of learning that is reflected in a number of domains.

Footnote 2: This is not true for some proposals based on statistical learning or neural networks. For example, Li et al. (in press) argued that semantic representations become enriched over time as a function of learning, much like filling holes in Swiss cheese; initially, there may be more holes than cheese (shallow representations), but the holes fill quickly as word context accumulates during learning (rich representations). This perspective, however, does not contradict the idea that early learned words establish the basic lexical–semantic structure.

Footnote 3: One could argue, however, that semantic structure is not equivalent to conceptual structure, with the former tied to specific properties of a given language and the latter more language independent (for further discussion, see Lyons, 1977). Most bilingual lexical memory research does not make this fine-grained distinction and considers semantic and conceptual structure at the same level.

Footnote 4: Zevin and Seidenberg (2002) have argued that AoA effects may be restricted to tasks in which early learned information does not aid in acquiring items introduced later. In the simulations discussed above, the networks must "memorize" each pattern. However, Zevin and Seidenberg have simulated reading acquisition and found that practice effects can diminish AoA effects. Hence, AoA effects may be robust for tasks such as object naming and face recognition but small for skilled tasks such as reading (for additional evidence in favor of this view, see Lambon Ralph & Ehsan, 2006; Monaghan & Ellis, 2002b). Studies of AoA effects in transparent orthographies, however, call this "mapping" hypothesis into question (Raman, 2006).

Figure 1. Neural activity associated with early and late learned words in monolingual German speakers. The blue-to-green scale represents areas of increased activity for early learned words; the red-to-yellow scale represents areas of increased activity for late learned words. BA = Brodmann's area; IFG = inferior frontal gyrus; ant. = anterior; lat. = lateral. From "Distinct brain representations for early and late learned words," by C. J. Fiebach, A. D. Friederici, K. Müller, D. Y. von Cramon, & A. E. Hernandez, 2003, Neuroimage, 42, p. 1631. Copyright 2003 by Elsevier. Adapted with permission.

Neuroimaging studies of age of acquisition. Relatively few studies have investigated the neural basis of AoA effects. Fiebach, Friederici, Müller, von Cramon, and Hernandez (2003) examined AoA with functional magnetic resonance imaging (fMRI), a technique that allows researchers to measure the oxygenation level of blood and thereby determine which neural areas are activated during a task. Participants were asked to make visual and auditory lexical decisions to words and pronounceable pseudowords during fMRI scanning. Results in both the visual and auditory modalities revealed increased activity for late relative to early learned words in the left inferior prefrontal cortex (IPFC; Brodmann's Area [BA] 45), extending to the lateral orbitofrontal cortex (BA 47/12). The precuneus was more activated for early learned than for late learned words (see Figure 1). In addition, increased activity in the region of the left temporal operculum near Heschl's gyrus was observed for early relative to late learned words in the visual modality. Because auditory association cortices were activated, Fiebach et al.
concluded that participants automatically coactivated auditory representations when making lexical decisions to early learned words that were visually presented, possibly to facilitate word recognition. The increase in inferior frontal activity during processing of late learned words is compatible with findings regarding the role of the left IPFC in semantic processing. The left IPFC appears to be critical in the effortful or strategic activation of information from the semantic knowledge system (Fiez, 1997; Thompson-Schill, D'Esposito, Aguirre, & Farah, 1997). Hence, processing of late learned words, at least when making lexical decisions, is likely to involve complex semantic retrieval or selection processes instantiated by inferior frontal brain areas. An interesting implication of this result is that semantics may play a strong role in learning words late in life, whereas auditory processing may play a strong role in learning words early in life. This makes sense especially in light of our hypothesis regarding early sensorimotor integration in L2 acquisition (see the section Integration of Age of Acquisition Effects Across Domains). Recent studies have extended Fiebach et al.'s (2003) research using word reading (Hernandez & Fiebach, 2006) and picture-naming tasks (Ellis, Burani, Izura, Bromiley, & Venneri, 2006). Ellis et al. (2006) found increased activity in separate portions of the middle occipital gyrus for early compared with late learned words, suggesting that both sets engage visual processing to a certain extent. Of particular interest was the fact that late learned words elicited activity in the fusiform gyrus, whereas early learned words elicited activity in the most inferior portions of the temporal lobe. Ellis et al. (2006) interpreted activity in the temporal pole for early learned items as reflecting the highly interconnected nature of these items.
This inference is based on evidence that damage to the temporal poles leads to semantic dementia (Rogers, Lambon Ralph, Hodges, & Patterson, 2004; Thompson, Patterson, & Hodges, 2003). The increase in activity for late learned compared with early learned items in the fusiform gyrus reflects an increased need for visual form processing (Devlin, Jamison, Gonnerman, & Matthews, 2006; Price & Devlin, 2003). These results seem consistent with the view that early learned items have more semantic interconnections than do late learned items, whereas late learned items require more visual form processing than do early learned items during picture naming. Hernandez and Fiebach (2006) asked participants to read single words during fMRI scanning. Increased activity to late as compared with early learned items was found in the left planum temporale (posterior superior temporal gyrus) and in the right globus pallidus, putamen, middle frontal gyrus (BA 9), and inferior frontal gyrus (BA 44). The authors suggested that late learned words engage brain areas in the left hemisphere that are involved in mapping phonological word representations and areas in the right hemisphere that aid articulatory and motor processing. These results implicate neuroanatomical substrates that may be associated with plasticity. In all of the studies reviewed above, processing of late learned items involved brain areas thought to support effortful retrieval: effortful semantic retrieval in lexical decision, articulatory and motor processing during reading, and visual form processing in picture naming. By contrast, early learned words appeared to be strongly connected to semantics in picture naming and to auditory word representations in lexical decision. Together, these results are consistent with the notion that the neural substrate of early learned words is at a basic level, whether semantic or auditory, depending on the task.
Late learned words build on these representations and require additional processing during lexical tasks.

Age of Acquisition in Second-Language Learning

The term AoA has also been used by scholars of L2 acquisition. The meaning of the term, however, is different when researchers use it to describe L2 learning than when they use it to describe L1 processing. In L1 processing, AoA refers to a stimulus property of linguistic items (early vs. late learned words), whereas in L2 learning AoA usually refers to a learner characteristic (early vs. late starting age for acquiring L2). In the L2 literature, AoA5 is often examined along with other learner characteristics, such as level of L2 proficiency.6

Footnote 5: Some authors (e.g., Johnson & Newport, 1989) have used "age of arrival" rather than AoA to indicate the age at which L2 acquisition begins. The former term is conceptually relevant to immigrant learners whose L2 learning coincides with their arrival in the target-language country, whereas the latter is a more general term. Here we use "L2 AoA" for consistency.

Footnote 6: Language proficiency can be defined as the degree of control one has over a language. Proficiency can be assessed in four domains: listening, speaking, reading, and writing. These skills, although interrelated, are independent in that one skill may develop separately from the others. Cummins (1983) has argued that language proficiency has two levels: basic interpersonal communicative skills (BICS) and cognitive and academic language proficiency (CALP). BICS involves personal, face-to-face, "context-embedded" communication and typically requires 2 years to acquire, whereas CALP involves skills in understanding and using language in academic ("context-reduced") settings and requires 5 to 7 years to acquire. Studies in the psycholinguistic and neuroimaging literature generally use some standardized test to assess proficiency; hence, proficiency here corresponds to CALP in Cummins's terminology.

Behavioral studies have long documented differences between early and late learners of a second language. They have consistently found an effect of AoA on the ultimate attainment of L2 (Flege, Munro, & MacKay, 1995; Flege, Yeni-Komshian, & Liu, 1999; Mackay & Flege, 2004; Munro, Flege, & MacKay, 1996). Although critical period effects in L2 learning are still being debated (Hakuta, Bialystok, & Wiley, 2003; Harley & Wang, 1997; Johnson & Newport, 1989; Liu, Bates, & Li, 1992; Snow & Hoefnagel-Höhle, 1978), researchers generally agree that late compared with early learning of L2 is associated with lower ultimate proficiency, even though some individuals may achieve native-like proficiency (Birdsong, 1992). Moreover, behavioral work by Hernandez and colleagues suggests that proficiency, and not AoA, determines naming latencies in lexical tasks when L2 acquisition occurs early in life (Hernandez, Bates, & Avila, 1996; Hernandez & Kohnert, 1999; Hernandez & Reyes, 2002; Kohnert, Hernandez, & Bates, 1998). This is consistent with the view that L2 AoA affects the processing of syntax, morphology, and phonology more than it affects lexical and semantic processing (Johnson & Newport, 1989; Weber-Fox & Neville, 1996). Evidence supporting the role of AoA in behavioral studies has been overwhelming; however, findings regarding the neural bases of L2 AoA effects have been mixed. First, language recovery in those with bilingual aphasia is not driven exclusively by L2 AoA (see Fabbro, 1999, for a review).
Second, recent fMRI studies have yielded conflicting results, with some finding that AoA determines patterns of neural activity and others finding that language proficiency is the primary determinant. AoA modulates neural activity during sentence comprehension when proficiency is not taken into account (Perani et al., 1996). AoA effects diminish or disappear, however, when early and late learners are equated on proficiency: proficient bilingual individuals, whether early or late learners, show strikingly similar neural responses for both L1 and L2, whereas less proficient bilingual individuals show different activation patterns for the two languages, more so in comprehension than in production (for a review, see Abutalebi, Cappa, & Perani, 2001; see also Chee, 2006; Perani et al., 1998). Proficiency also plays a role in semantic and lexical tasks (Chee, Hon, Lee, & Soon, 2001; Chee, Soon, Lee, & Pallier, 2004; Elston-Guettler, Paulmann, & Kotz, 2005; Mechelli et al., 2004; Meschyan & Hernandez, 2006; Xue, Dong, Jin, Zhang, & Wang, 2004). Hence, considerable evidence suggests that proficiency plays a crucial role in the neural activity underlying L2 processing. What remains unclear is how proficiency and AoA interact in the acquisition of different language processes, the topic of the next section.

Age of Acquisition, Proficiency, and Syntactic and Semantic Processing in Second-Language Learning

Age of Acquisition Versus Proficiency

Evidence for the relative importance of proficiency as opposed to AoA can be found in recent work with populations that are immersed in a second language early in life. One particularly intriguing finding involves Korean adoptees who experienced exclusive L2 immersion after being adopted by French families. The research shows no neural or behavioral trace of L1 even when L2 immersion occurs as late as age 8 years (Pallier et al., 2003; Ventureyra, Pallier, & Yoo, 2004).
Evidence to date is thus inconclusive as to whether AoA or proficiency determines the behavioral and neural patterns in L2 learning. The fact that proficiency seems more important than AoA in the neuroimaging research contradicts behavioral research in the L2 literature, as discussed above.

Syntactic Versus Semantic Processing

The lack of uniform support for AoA or proficiency as the primary determinant of neural activity may reflect the fact that both factors play a role, perhaps differently for different language processes, as suggested by Hernandez et al. (2004; see the discussion of regularity below). Indeed, some neuroimaging research has found that tasks involving syntactic processing show larger AoA effects than tasks involving semantic processing (Wartenburger et al., 2003; Weber-Fox & Neville, 1996). In a seminal study, Weber-Fox and Neville (1996) asked a group of Chinese–English bilingual individuals to read sentences that contained three different types of syntactic violations (phrase structure, specificity constraint, and subjacency constraint) and sentences that contained semantic violations. They used event-related potentials (ERPs) to measure participants' electrophysiological responses to a number of linguistic and nonlinguistic factors. Previous research has established that ERP components (e.g., the N400, the P600, and the left anterior negativity [LAN]) are sensitive to semantic and syntactic violations (Atchley et al., 2006; Friederici, Hahne, & Mecklinger, 1996; Hagoort, 2003; Hagoort & Brown, 1999; Kutas & Hillyard, 1980; Kutas & Van Petten, 1988; Osterhout, Allen, McLaughlin, & Inoue, 2002). Weber-Fox and Neville found differences in the timing and distribution of the ERPs for both semantic and syntactic violations when L2 learners were compared with native speakers. Differences between L2 learners and native speakers appeared at different ages depending on whether the violation was syntactic or semantic.
For syntactic violations, differences from native speakers appeared even in participants who learned English as early as age 2 years. However, differences in the ERPs to semantic violations appeared only in participants who learned English after the age of 11 years. Although objective proficiency was not measured, participants who learned English after age 16 rated themselves as less proficient in English than in Chinese. These results are consistent with the view that AoA plays an important role in determining the neural activity associated with grammatical violations, whereas proficiency plays an important role in determining the neural activity associated with semantic violations. To understand the neuroanatomical substrates that distinguish AoA and proficiency, Wartenburger et al. (2003) examined Italian–German bilingual individuals as they monitored sentences for morphosyntactic violations (number, gender, or case) or semantic violations during fMRI scanning. Three groups were tested: early-acquisition bilingual individuals with high proficiency in L2 (EAHP) and late-acquisition bilingual individuals with either high (LAHP) or low (LALP) proficiency in L2. Increased brain activity in L2 relative to L1 was seen in all three groups for both semantic and syntactic violations. Late learners, relative to early learners, showed increased neural activity in areas associated with motor planning and articulatory effort when processing grammatical violations, even when the two groups were matched on proficiency. In late learners, lower proficiency led to activity in areas closely associated with auditory and visual integration. A different pattern emerged for semantic processing: proficiency modulated activity in areas of the brain devoted to memory and executive processing. Here, unlike for syntactic processing, the difference was between the two late-learning groups (LAHP vs. LALP).
Together, these results suggest that AoA predicts activity in brain areas during syntactic processing, whereas proficiency predicts activity during semantic processing. Furthermore, the AoA effects observed for the former appear to involve areas of the brain underlying sensorimotor processing. A number of questions arise with regard to the finding that syntactic processing is more sensitive than semantic processing to AoA effects in bilingual individuals. First, it is unclear why this should be so. One possibility is that semantic processes are more similar across a bilingual's two languages than are syntactic processes (at least the ones tested in these studies). A second possibility is that some processing component of syntax is particularly affected by AoA. The underlying cause of stronger AoA effects in L2 syntactic processing than in L2 semantic processing may be revealed by considering various factors that modulate these effects.

Overlap Across Languages

As noted earlier, Zevin and Seidenberg (2002, 2004), as well as Monaghan and Ellis (2002b), argue that AoA effects should be large when early learning differs substantially from late learning (i.e., with little overlap between late and early learning). Although this hypothesis is inconsistent with findings in behavioral studies of AoA conducted with transparent orthographies (Raman, 2006), recent research suggests that it may play a role in L2 learning. Tokowicz and MacWhinney (2005) instructed English–Spanish bilingual individuals to make grammaticality judgments for sentences that varied in the ways in which syntactic functions overlapped. The first function involved tense marking (noun–verb agreement), which is similar in English and Spanish. The second function involved determiner–noun agreement (las casas vs. la casas): number in English, as in Spanish, is marked on the noun (houses), but Spanish, unlike English, requires determiner–noun agreement (la casa vs. las casas). The third function involved gender agreement, a function that is unique to Spanish (la casa vs. el casa). ERP data were collected as participants made judgments about the sentences. Across all sentences, an interaction between function type and grammaticality was found for the P600, an electrophysiological index that is sensitive to grammatical violations in sentence contexts. Specifically, the difference in ERPs to grammatical and ungrammatical sentences was significant for subject–verb agreement but not for determiner–noun agreement. This finding is consistent with the view that cross-language overlap modulates sensitivity to grammatical processing in L2.

Chen, Shu, Liu, Zhao, and Li (in press) recently tested proficient late Chinese bilingual learners' ability to detect subject–verb agreement violations in English (e.g., the price of the cars are too high). In Chinese, unlike English and other Indo-European languages, grammatical morphology does not mark case, gender, or number; thus, subject–verb agreement in sentences is not required. The acquisition of subject–verb agreement is a major obstacle for Chinese learners of English. ERP responses from the L2 Chinese and the native English participants clearly showed distinct patterns, even though behavioral responses did not. Native speakers showed a typical LAN/P600 biphasic pattern in response to agreement violations, whereas this pattern was absent in the L2 learners. Instead, L2 learners showed an N400/N600 response pattern, indicating that even proficient L2 learners differ from native speakers when processing syntactic features that are absent in their native language. Together, these studies suggest that the overlap between native and nonnative languages affects syntactic processing in L2.

Regularity

Recent research in the psycholinguistic literature has focused on the difference between regular and irregular morphological markings, in particular the processing of the English past tense. Researchers have debated whether irregular and regular verbs are processed in separate memory systems (Pinker, 1991; Pinker & Ullman, 2002; Ullman, 2001a, 2004) or in the same system but relying differentially on semantic or phonological processes (Bird, Ralph, Seidenberg, McClelland, & Patterson, 2003; McClelland & Patterson, 2002; Patterson, Ralph, Hodges, & McClelland, 2001). Although a large number of studies have investigated regularity in monolingual individuals, relatively few have examined the issue in bilingual individuals. These few studies have found that late learners have difficulty learning irregular items (Birdsong & Flege, 2001; Flege et al., 1999). Recent neuroimaging work by Hernandez et al. (2004) examined AoA effects on the processing of regular and irregular grammatical gender. Two groups of early Spanish learners, one with high proficiency and one with low proficiency, were compared with a group of late learners. The results revealed increased activity in the inferior frontal gyrus for all three groups; however, the locus of activity varied. Early high-proficiency learners showed increased activity in the anterior insula, BA 44/45, and BA 44/6 as a function of irregularity (see Figure 2a). Each of these areas plays an important role in language processing (for a review, see Hagoort, 2005). BA 44/45 is activated in tasks that involve syntactic processing (Dapretto, Bookheimer, & Mazziotta, 1999; Friederici, Opitz, & von Cramon, 2000; Kang, Constable, Gore, & Avrutin, 1999; Moro et al., 2001) and in studies comparing gender monitoring to semantic monitoring (Miceli et al., 2002).
Neuropsychological and neuroimaging studies have also demonstrated a link between the anterior insula and articulation (Ackermann & Riecker, 2004; Bates et al., 2003; Dronkers, 1996; Shuster & Lemieux, 2005). BA 44/6 is involved in phonological processing (Poldrack et al., 1999), and activity in this area is greater when German monolingual individuals are asked to generate a gender-marking determiner (der, die, or das) for a picture than when they are asked to name the picture (Heim, Opitz, & Friederici, 2002). These results are consistent with the view that gender decisions for irregularly marked items, relative to regularly marked ones, involve greater phonological and articulatory demands and more effortful syntactic computations. Early and late learners of Spanish with matched proficiency showed different patterns of activity, although increased activity for the irregular items was observed in both groups. The late English–Spanish bilingual individuals showed a more distributed area of increased activity encompassing inferior portions of the inferior frontal gyrus, extending from the anterior insula into BA 47 (see Figure 2b). The early Spanish–English bilingual individuals showed more focused activity in BA 44/45 than did late learners. Direct comparisons between groups revealed increased activity in BA 47 for the late bilingual individuals.

Figure 2. Difference between irregular and regular gender items for (a) monolingual Spanish speakers and (b) early and late learners of Spanish. Neural activity shown in blue-to-pink color variations represents results from late learners of Spanish; results from early learners are represented in red-to-yellow color variations.
The results confirm that AoA modulates neural activity on grammatical tasks. Furthermore, they indicate that these differences are graded in nature: Group differences for regular items are very small, whereas larger differences are observed for irregular items. In summary, grammatical functions differ in their sensitivity to AoA, and inconsistent or irregular patterns in the grammar may affect bilingual learners to a greater extent than monolingual learners, especially as they age (see further discussion below in the section Integration of Age of Acquisition Effects Across Domains).

A Declarative–Procedural Account

In the section Theoretical Accounts of Age of Acquisition, we described a number of theories that have been offered to account for AoA effects in monolingual individuals. Few theories, however, have been proposed to account for AoA effects in L2, except for a general learning-plasticity account based on computational modeling (see the section Computational Accounts of Age of Acquisition). Ullman and colleagues (Ullman, 2001a, 2005) have proposed that L2 acquisition and use are constrained by the declarative and procedural (DP) memory systems. In the DP model, lexical learning and processing involve the declarative memory system, whereas grammatical acquisition and processing involve rule-governed combinatorial processes in the procedural memory system. Memory research supports the existence of these two systems (Eichenbaum, 2001; Squire & Zola, 1996), and neuroimaging research suggests that they have distinct neural bases. The declarative memory system underlies knowledge about facts and events, appears to be specialized for relational binding, and depends on a network of brain structures including regions of the medial temporal lobe. The procedural memory system underlies motor and cognitive skills, including "habits," may be specialized for sequences, and depends on a network of brain structures that includes the frontal–basal ganglia circuitry.
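The division of labor the DP model posits for the past tense can be caricatured in a few lines of code. The sketch below is our own illustrative toy, not Ullman's model: the function name `past_tense`, the small irregular table, and the simplified "-ed" rule are all assumptions made for exposition. Irregular forms behave like declarative lookups of stored items; regular forms fall out of a productive, compositional rule.

```python
# Toy illustration of a dual-route (declarative/procedural) account of the
# English past tense. Irregulars are retrieved from stored memory (a lookup
# table standing in for declarative memory); regulars are composed by rule
# (standing in for the procedural system). Simplified for exposition.

IRREGULAR_PAST = {  # a small sample of memorized (declarative) forms
    "go": "went", "sing": "sang", "bring": "brought", "be": "was",
}

def past_tense(verb: str) -> str:
    # Route 1: a stored irregular form, when one exists, blocks the rule.
    if verb in IRREGULAR_PAST:
        return IRREGULAR_PAST[verb]
    # Route 2: the productive suffixation rule (simplified spelling).
    if verb.endswith("e"):
        return verb + "d"
    return verb + "ed"
```

On this toy, degrading the lookup table (declarative damage) spares regulars but loses irregulars, while degrading the rule spares irregulars, mirroring the kind of double dissociation the DP account appeals to; a late learner who memorizes "walked" as a chunk is, in these terms, routing a regular form through the lookup table.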
Ullman and colleagues have provided considerable evidence supporting their claim that grammatical and lexical processing rely on the procedural and declarative memory systems, respectively, including evidence from aphasia and from Alzheimer's and Parkinson's diseases (Ullman et al., 1997; Ullman, 2005). Ullman (2005) has also shown that procedural learning decreases with age, whereas declarative learning may actually improve with age, possibly because of increased sex-hormone levels. This framework sheds some light on why neural correlates of syntactic processing are more sensitive to AoA than are neural correlates of lexical processing. In the DP model, L2 learners, especially late learners, rely on declarative rather than procedural memory for grammatical processing. Evidence suggests that these learners memorize complex forms (e.g., "walked") as chunks, that is, forms that native speakers generally compose within the grammatical/procedural system. This reliance on declarative memory predicts that L2 learners will show more activity during grammatical processing than L1 learners in brain areas associated with declarative memory. As proficiency improves, L2 learners, especially early learners, should begin to rely on procedural memory; some recent evidence supports this prediction (Ullman, 2001b, 2005). The extent to which late learners can proceduralize their grammatical processing is still unclear, but late learners may rarely achieve L1-like levels of grammatical proficiency (Ullman, 2005).

Integration of Age of Acquisition Effects Across Domains

In the current review, we attempt to synthesize distinct literatures examining AoA in first- and second-language processing. It is worth noting, as we mentioned earlier, that no one would argue that AoA effects in L2 learning are the same as AoA effects involving vocabulary learning in monolingual individuals.
However, these two types of AoA effects show interesting parallels, raising the question of whether they rely, at least in part, on the same underlying mechanisms or processes. No current theory can explain both L1 and L2 AoA effects. Clearly, the DP model cannot account for AoA effects in monolingual lexical recognition and production. Similarly, the semantic-locus and phonological-completeness accounts do not predict L2 AoA effects as reflected in phonological, syntactic, and semantic processing. The best candidate for a general account of AoA comes from computational models that attribute AoA effects to plasticity and stability across the life span, as suggested by Ellis and Lambon Ralph (2000); Li, Farkas, and MacWhinney (2004); Seidenberg and Zevin (2006); and Smith, Cottrell, and Anderson (2001). According to this account, AoA effects are due to the interactive dynamics with which items are learned: Early learning determines the structure of knowledge and shapes later learning, not only in the monolingual lexicon but also in other domains, including phonological and grammatical development in both L1 and L2 (Kuhl, 2004). In other words, the learning process itself leads to AoA effects that are similar in L1 and L2. If developmental constraints on learning are the underlying cause of AoA effects, how can one predict which domains will be most sensitive to AoA? This amounts to asking whether domains that show AoA effects share characteristics. One important characteristic shared among the three domains showing AoA effects (nonlinguistic, L1, and L2) is the sensorimotor nature of processing. In L1, research suggests that early-learned words are preferentially accessed using auditory information (Fiebach & Friederici, 2004), providing indirect evidence of the importance of sensory information for these items. In L2, late learners show reductions in phonological abilities and clear nonnative accents.
In both cases, AoA is related to phonological processing, and in the latter, it is also related to motor planning and execution of speech. The association between AoA and sensorimotor processing is related to a broader theory that has been espoused by a number of scholars (Bates, Benigni, Bretherton, Camaioni, & Volterra, 1979; Lieberman, 2000; Zatorre, 1989), according to which language reflects a general sensorimotor ability in humans. As discussed earlier, the frontal–basal ganglia circuitry underlies the procedural memory system. Recent neuroimaging research has shown that the basal ganglia plays an important role in cognitive and linguistic functions, including sensory acquisition and discrimination (Gao et al., 1996), lexical decision (Li, Jin, & Tan, 2004), and the sequencing of articulatory activities. The development of the basal ganglia system is not well understood; however, it is possible that the crucial neural systems for sensorimotor learning and coordination, including the basal ganglia, undergo rapid organization and reorganization early in life, and that a loss of plasticity leads to difficulty in forming complex mappings later in life (Bates, 1999; Bates et al., 1997; Hensch, 2004; Pickett, Kuniholm, Protopapas, Friedman, & Lieberman, 1998). This view fits well with classic findings of maturational constraints in sensory processing (Frenkel & Bear, 2004; Hensch, 2004; Hubel & Wiesel, 1965; Knudsen & Knudsen, 1990; Pettigrew, 1972; Smith & Greene, 1963; Stafford, 1984; Tees, 1967; Wiesel & Hubel, 1965). If sensorimotor integration underlies AoA, then a unified account of linguistic and nonlinguistic patterns of development is possible.
In nonlinguistic domains, we reviewed the acquisition of musical abilities and birdsong. In both of these domains, evidence suggests that sensorimotor processing benefits from early exposure to the behavior of interest and that the frontal–basal ganglia circuitry plays a significant role in this process (see Doupe et al., 2005). In the domain of language, phonological processing, particularly the articulation of speech sounds, is a sensorimotor process, and the accuracy of both L1 and L2 pronunciation depends on the speaker's precise control and temporal coordination of articulatory actions in the speech apparatus (tongue, lips, jaw, larynx, etc.). According to the motor theory of speech perception (Liberman & Mattingly, 1985), the perception of speech is based on our neural representation of the articulatory gestures associated with the generation of sounds: All speech sounds involve an invariant set of motor commands that are internally represented for articulation. In this view, speech production is the mirror image of speech perception; it involves fine-grained, high-speed sensorimotor control of sequences of muscle movements (Browman & Goldstein, 1989). Such activities must, to some extent, engage the frontal–basal ganglia circuitry that underlies the dynamic coordination of sequenced activities. If speech perception and articulatory coordination develop early in life, as evidence seems to suggest (see Kuhl, 2000, 2004), then a sensorimotor account based on the dynamics of sequence acquisition may explain AoA effects in both L1 and L2. Our sensorimotor view sheds light on several recent findings. For example, Izura and Ellis (2004) found that AoA effects in L2 are predicted by the order in which items enter L2, but not L1. The authors suggest that this finding supports computational models in which early-learned items form initial links at the graphemic, phonological, and semantic levels; these links constrain the acquisition of later items.
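The core logic of the plasticity account, that items introduced early are learned under high plasticity and late items under reduced plasticity, can be shown with a deliberately minimal numerical sketch. This is our own toy construction, not one of the simulations cited above: the function `train`, the exponential decay schedule, and all parameter values are illustrative assumptions.

```python
# Minimal sketch of the plasticity/entrenchment account of AoA: the same
# error-driven learning rule is applied to an early- and a late-introduced
# item, with a globally decaying learning rate standing in for declining
# plasticity. Despite equal amounts of practice, the late item ends up
# learned less well. (Toy construction; parameters are arbitrary.)

def train(introduced_at, target=1.0, lr0=0.5, decay=0.97):
    """Delta-rule learning of one association, starting at time step
    `introduced_at`; returns the residual error after 100 updates."""
    w = 0.0
    for step in range(introduced_at, introduced_at + 100):  # 100 updates each
        lr = lr0 * (decay ** step)          # plasticity falls with "age"
        w += lr * (target - w)              # error-driven update
    return abs(target - w)

early_error = train(introduced_at=0)        # early-AoA item
late_error = train(introduced_at=100)       # late-AoA item
assert early_error < late_error             # late item never fully catches up
```

Note that nothing distinguishes the two items except when learning begins; the AoA effect falls out of the interaction between order of entry and declining plasticity, which is the qualitative point of the computational accounts discussed above.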
Their argument is compatible with the sensorimotor hypothesis proposed here. Words encountered in L2 lead to the formation of new phonological and articulatory traces; these sensorimotor traces are unique to each language and do not transfer from L1 to L2, especially when L2 learning occurs late in life. The establishment of lexical structures in L1 may also adversely affect the representation of the L2 lexicon, in addition to affecting phonology and articulation (see Hernandez, Li, & MacWhinney, 2005, for a review). Recent connectionist simulations that manipulate L1 and L2 AoA provide additional evidence about how the L2 lexicon, as a whole, may be affected when L2 learning is delayed: L1 will consolidate and will significantly (sometimes dramatically) affect the representation of L2. L2, for example, may become parasitic, with reduced lexical space and high rates of confusion during lexical retrieval (see Figure 3). These patterns may account for the observed "deficit" in lexical retrieval during word naming in L2 (Craik & Bialystok, 2006) and are consistent with the view that reduced plasticity and diminished structural reorganization underlie AoA effects.7 The sensorimotor hypothesis naturally accounts for the finding that syntax, especially morphosyntax, is more sensitive to AoA than semantics in both monolingual and bilingual individuals. Research suggests that young children use prosodic and phonological cues in early word learning (Morgan & Demuth, 1996a, 1996b). Most importantly, phonological cues appear to be crucial in the processing of syntax and morphosyntax (Christophe, Guasti, Nespor, Dupoux, & Van Ooyen, 1997; Jusczyk, Kemler Nelson, Morgan, & Demuth, 1996) and in lexical categorization (Shi, 2006; Shi, Morgan, & Allopenna, 1998).
If phonological processing abilities develop early, and the learning of syntax and morphosyntax relies heavily on these abilities, then it would be no surprise to find that sensorimotor abilities underlie syntactic and morphosyntactic AoA effects. These effects should be especially large for irregular items and items that do not have overlapping characteristics across languages, as they will tax the phonological system to a greater extent. In contrast, semantic processing relies on the conceptual overlap across languages and should transfer readily; hence, it should be less susceptible to AoA than are syntactic and morphosyntactic processes. Our working hypothesis is that AoA effects in nonlinguistic domains and in first- and second-language acquisition, although clearly distinct, share an underlying mechanism involving sensorimotor integration. This hypothesis is consistent with evidence about neural development. For example, Trainor (2005) showed that as an organism gains experience, its brain becomes more organized, leading to reduced plasticity because many connections have been functionally specified and are less open to change. This loss of plasticity can be described in neural terms by what some researchers call "experience-dependent synaptic change" (Bates, 1999) or "experience-mediated changes" (Trainor, 2005; Werker & Tees, 2005). Connectionist models provide computational principles that account for such changes (Elman, 1993; Elman, Bates, Johnson, & Karmiloff-Smith, 1996). Recent results from our self-organizing models clearly demonstrate such changes in mechanistic terms, as discussed earlier (Hernandez et al., 2005; Li & Farkas, 2002; Li, Farkas, & MacWhinney, 2004; Li, Zhao, & MacWhinney, 2007).

7 The functional organization of the bilingual lexicon in development (as simulated by our model) should not be confused with the issue of the neural representation of the two languages (as shown by fMRI work). Indeed, the early distinct representation of the two lexicons, as shown in Figure 3, would appear counterintuitive if it were pitted against the idea of a common neural machinery for both L1 and L2 in early or proficient bilingual individuals.

[Figure 3. Lexical organization as a function of early versus late learning of a second language (L2). Shaded areas indicate L2 (Chinese) representations. From "A self-organizing connectionist model of bilingual lexical development" (p. 2639) by X. Zhao & P. Li. In Proceedings of the 28th annual conference of the Cognitive Science Society (2006). Mahwah, NJ: Erlbaum.]

In conclusion, sensorimotor learning is an important milestone that determines AoA effects. Our sensorimotor account exemplifies the idea that learning shapes the course of development in monolingual, bilingual, and nonlinguistic domains. This view is consistent with recent views in developmental psychology and cognitive neuroscience that early learning leads to dedicated neural circuitry that affects the form of cognitive and neural structures at later stages of development (Elman, 2005; Elman et al., 1996; Kello, 2004; Kuhl, 2000, 2004; L. B. Smith & Thelen, 2003).

References

Abutalebi, J., Cappa, S. F., & Perani, D. (2001). The bilingual brain as revealed by functional neuroimaging. Bilingualism: Language and Cognition, 4, 179–190. Ackermann, H., & Riecker, A. (2004). The contribution of the insula to motor aspects of speech production: A review and a hypothesis. Brain and Language, 89, 320–328. Altarriba, J. (1992). The representation of translation equivalents in bilingual memory. In R. J. Harris (Ed.), Cognitive processing in bilinguals (Vol. 83, pp.
157–174). Amsterdam, the Netherlands: North-Holland. Atchley, R. A., Rice, M. L., Betz, S. K., Kwasny, K. M., Sereno, J. A., & Jongman, A. (2006). A comparison of semantic and syntactic event related potentials generated by children and adults. Brain and Language, 99, 236 –246. Banks, M. S., Aslin, R. N., & Letson, R. D. (1975). Sensitive period for the development of human binocular vision. Science, 190, 675– 677. Barry, C., Morrison, C. M., & Ellis, A. W. (1997). Naming the Snodgrass and Vanderwart pictures: Effects of age of acquisition, frequency and name agreement. Quarterly Journal of Experimental Psychology: Human Experimental Psychology, 50A, 560 –585. Bates, E. (1999). Plasticity, localization and language development. In S. Broman & J. M. Fletcher (Eds.), The changing nervous system: Neurobehavioral consequences of early brain disorders (pp. 214 –253). New York: Oxford University Press. Bates, E., Benigni, L., Bretherton, I., Camaioni, L., & Volterra, V. (1979). The emergence of symbols: Cognition and communication in infancy. New York: Academic Press. Bates, E., Thal, D., Trauner, D., Fenson, J., Aram, D., Eisele, J., et al. (1997). From first words to grammar in children with focal brain injury. Developmental Neuropsychology, 13, 275–343. Bates, E., Wilson, S. M., Saygin, A. P., Dick, F., Sereno, M. I., Knight, R. T., et al. (2003). Voxel-based lesion-symptom mapping. Nature Neuroscience, 6, 448 – 450. Belke, E., Brysbaert, M., Meyer, A. S., & Ghyselinck, M. (2005). Age of acquisition effects in picture naming: Evidence for a lexical-semantic competition hypothesis. Cognition, 96, B45–B54. Bird, H., Ralph, M. A. L., Seidenberg, M. S., McClelland, J. L., & Patterson, K. (2003). Deficits in phonology and past-tense morphology: What’s the connection? Journal of Memory and Language, 48, 502–526. Birdsong, D. (1992). Ultimate attainment in second language acquisition. Language, 68, 706 –755. Birdsong, D., & Flege, J. E. (2001). 
Regular–irregular dissociations in the acquisition of English as a second language. BUCLD 25: Proceedings of the 25th annual Boston University Conference on Language Development, Boston. Brainard, M. S., & Doupe, A. J. (2002). What songbirds teach us about learning. Nature, 417, 351–358. Brainard, M. S., & Knudsen, E. (1998). Sensitive periods for visual calibration of the auditory space map in the barn owl optic tectum. Journal of Neuroscience, 18, 3929–3942. Browman, C. P., & Goldstein, L. (1989). Articulatory gestures as phonological units. Phonology, 6, 201–251. Brown, G. D., & Watson, F. L. (1987). First in, first out: Word learning age and spoken word frequency as predictors of word familiarity and word naming latency. Memory & Cognition, 15, 208–216. Brysbaert, M., Van Wijnendaele, I., & De Deyne, S. (2000). Age-of-acquisition effects in semantic processing tasks. Acta Psychologica, 104, 215–226. Carroll, J. B., & White, M. N. (1973). Word frequency and age of acquisition as determiners of picture-naming latency. Quarterly Journal of Experimental Psychology, 25, 85–95. Chee, M. W. (2006). Language processing in bilinguals as revealed by functional neuroimaging: A contemporary synthesis. In P. Li, L. Tan, E. Bates, & O. Tzeng (Eds.), Handbook of East Asian psycholinguistics (Vol. 1, Chinese, pp. 287–295). Cambridge, United Kingdom: Cambridge University Press. Chee, M. W., Hon, N., Lee, H. L., & Soon, C. S. (2001). Relative language proficiency modulates BOLD signal change when bilinguals perform semantic judgments: Blood oxygen level dependent. Neuroimage, 13, 1155–1163. Chee, M. W., Soon, C. S., Lee, H. L., & Pallier, C. (2004). Left insula activation: A marker for language attainment in bilinguals.
Proceedings of the National Academy of Sciences of the United States of America, 101, 15265–15270. Chen, L., Shu, H., Liu, Y., Zhao, X., & Li, P. (in press). ERP signatures of subject–verb agreement in L2 learning. Bilingualism: Language and Cognition. Christophe, A., Guasti, T., Nespor, M., Dupoux, E., & Van Ooyen, B. (1997). Reflections on phonological bootstrapping: Its role for lexical and syntactic acquisition. Language and Cognitive Processes, 12, 585– 612. Craik, F., & Bialystok, E. (2006, November). Positive and negative effects of bilingualism on cognitive aging. Paper presented at the 47th annual meeting of the Psychonomic Society, Houston, TX. Cuetos, F., Ellis, A. W., & Alvarez, B. (1999). Naming times for the Snodgrass and Vanderwart pictures in Spanish. Behavior Research Methods, Instruments and Computers, 31, 650 – 658. Cummins, J. (1983). Language proficiency in academic achievement. In J. W. Oller (Ed.), Issues in language testing research (pp. 108 –130). Rowley, MA: Newbury House. Dapretto, M., Bookheimer, S., & Mazziotta, J. (1999). Form and content: Dissociating syntax and semantics in sentence comprehension. Neuron, 24, 427– 432. Deutsch, D., Henthorn, T., Marvin, E., & Xu, H. (2006). Absolute pitch among American and Chinese conservatory students: Prevalence differences, and evidence for a speech-related critical period. Journal of the Acoustical Society of America, 119, 719 –722. Devlin, J. T., Jamison, H. L., Gonnerman, L. M., & Matthews, P. M. (2006). The role of the posterior fusiform gyrus in reading. Journal of Cognitive Neuroscience, 18, 911–922. Doupe, A., Perkel, D., Reiner, A., & Stern, A. (2005). Birdbrains could teach basal ganglia research a new song. Trends in Neurosciences, 28, 353–363. Dronkers, N. F. (1996). A new brain region for coordinating speech articulation. Nature, 384, 159 –161. Eichenbaum, H. (2001). The hippocampus and declarative memory: Cognitive mechanisms and neural codes. 
Behavioural Brain Research, 127, 199–207. Elbert, T., Pantev, C., Wienbruch, C., Rockstroh, B., & Taub, E. (1995). Increased cortical representation of the fingers of the left hand in string players. Science, 270, 305–307. Ellis, A. W., Burani, C., Izura, C., Bromiley, A., & Venneri, A. (2006). Traces of vocabulary acquisition in the brain: Evidence from covert object naming. Neuroimage, 33, 958–968. Ellis, A. W., & Lambon Ralph, M. A. (2000). Age of acquisition effects in adult lexical processing reflect loss of plasticity in maturing systems: Insights from connectionist networks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1103–1123. Ellis, A. W., & Morrison, C. M. (1998). Real age-of-acquisition effects in lexical retrieval. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 515–523. Elman, J. L. (1993). Learning and development in neural networks: The importance of starting small. Cognition, 48, 71–99. Elman, J. L. (2005). Connectionist models of cognitive development: Where next? Trends in Cognitive Sciences, 9, 112–117. Elman, J. L., Bates, E. A., Johnson, M. H., & Karmiloff-Smith, A. (1996). Rethinking innateness: A connectionist perspective on development. Cambridge, MA: MIT Press. Elston-Guettler, K. E., Paulmann, S., & Kotz, S. A. (2005). Who's in control? Proficiency and L1 influence on L2 processing. Journal of Cognitive Neuroscience, 17, 1593–1610. Fabbro, F. (1999). The neurolinguistics of bilingualism: An introduction. Hove, England: Elsevier. Fagiolini, M., Pizzorusso, T., Berardi, N., Domenici, L., & Maffei, L. (1994). Functional postnatal development of the rat primary visual cortex and the role of visual experience: Dark rearing and monocular deprivation. Vision Research, 34, 709–720. Fiebach, C. J., & Friederici, A. D. (2004). Processing concrete words: fMRI evidence against a specific right-hemisphere involvement. Neuropsychologia, 42, 62–70. Fiebach, C. J., Friederici, A.
D., Müller, K., von Cramon, D. Y., & Hernandez, A. E. (2003). Distinct brain representations for early and late learned words. Neuroimage, 19, 1627–1637. Fiez, J. A. (1997). Phonology, semantics, and the role of the left inferior prefrontal cortex. Human Brain Mapping, 5, 79 – 83. Flege, J. E., Munro, M. J., & MacKay, I. R. A. (1995). Effects of age of second-language learning on the production of English consonants. Speech Communication, 16, 1–26. Flege, J. E., Yeni-Komshian, G. H., & Liu, S. (1999). Age constraints on second language acquisition. Journal of Memory and Language, 41, 78 –104. French, R. M., & Jacquet, M. (2004). Understanding bilingual memory: Models and data. Trends in Cognitive Sciences, 8, 87–93. Frenkel, M. Y., & Bear, M. F. (2004). How monocular deprivation shifts ocular dominance in visual cortex of young mice. Neuron, 44, 917–923. Friederici, A. D., Hahne, A., & Mecklinger, A. (1996). Temporal structure of syntactic parsing: Early and late event-related brain potential effects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 1219 –1248. Friederici, A. D., Opitz, B., & von Cramon, D. Y. (2000). Segregating semantic and syntactic aspects of processing in the human brain: An fMRI investigation of different word types. Cerebral Cortex, 10, 698 – 705. Gao, J. H., Parsons, L. M., Bower, J. M., Xiong, J., Li, J., & Fox, P. T. (1996). Cerebellum implicated in sensory acquisition and discrimination rather than motor control. Science, 272, 545–547. Gerhand, S., & Barry, C. (1998). Word frequency effects in oral reading are not merely age-of-acquisition effects in disguise. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 267–283. Gerhand, S., & Barry, C. (1999). Age-of-acquisition and frequency effects in speeded word naming. Cognition, 73, B27–B36. Gilhooly, K. J., & Gilhooly, M. L. (1979). Age-of-acquisition effects in lexical and episodic memory tasks. Memory & Cognition, 7, 214 –223. Gilhooly, K. 
J., & Watson, F. L. (1981). Word age-of-acquisition effects: A review. Current Psychological Reviews, 1, 269–286. Hagoort, P. (2003). Interplay between syntax and semantics during sentence comprehension: ERP effects of combining syntactic and semantic violations. Journal of Cognitive Neuroscience, 15, 883–899. Hagoort, P. (2005). On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9, 416–423. Hagoort, P., & Brown, C. M. (1999). Gender electrified: ERP evidence on the syntactic nature of gender processing. Journal of Psycholinguistic Research, 28. Hakuta, K., Bialystok, E., & Wiley, E. (2003). Critical evidence: A test of the critical-period hypothesis for second-language acquisition. Psychological Science, 14, 31–38. Harley, B., & Wang, W. (1997). The critical period hypothesis: Where are we now? In Tutorials in bilingualism: Psycholinguistic perspectives (pp. 1–50). Hillsdale, NJ: Erlbaum. Harwerth, R., Smith, E., Duncan, G., Crawford, M., & von Noorden, G. (1986). Multiple sensitive periods in the development of the primate visual system. Science, 232, 235–238. Heim, S., Opitz, B., & Friederici, A. D. (2002). Broca's area in the human brain is involved in the selection of grammatical gender for language production: Evidence from event-related functional magnetic resonance imaging. Neuroscience Letters, 328, 101–104. Hensch, T. K. (2004). Critical period regulation. Annual Review of Neuroscience, 27, 549–579. Hernandez, A. E., Bates, E., & Avila, L. X. (1996). Processing across the language boundary: A cross-modal priming study of Spanish–English bilinguals. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 846–864. Hernandez, A. E., & Fiebach, C. J. (2006).
The brain bases of reading late learned words: Evidence from functional MRI. Visual Cognition, 13, 1027–1043. Hernandez, A. E., & Kohnert, K. (1999). Aging and language switching in bilinguals. Aging, Neuropsychology and Cognition, 6, 69 – 83. Hernandez, A. E., Kotz, S. A., Hoffman, J., Valentin, V. V., Dapretto, M., & Bookheimer, S. Y. (2004). The neural correlates of grammatical gender decisions in Spanish. NeuroReport, 15, 863– 866. Hernandez, A., Li, P., & MacWhinney, B. (2005). The emergence of competing modules in bilingualism. Trends in Cognitive Sciences, 9, 220 –225. Hernandez, A. E., & Reyes, I. (2002). Within- and between-language priming differ: Evidence from repetition of pictures in Spanish–English bilinguals. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 726 –734. Hirsh, K. W., Morrison, C. M., Gaset, S., & Carnicer, E. (2003). Age of acquisition and speech production in L2. Bilingualism: Language and Cognition, 6, 117. Huang, Z., Kirkwood, A., Pizzorusso, T., Porciatti, V., Morales, B., Bear, M., et al. (1999). BDNF regulates the maturation of inhibition and the critical period of plasticity in mouse visual cortex. Cell, 98, 739 –755. Hubel, D. H., & Wiesel, T. N. (1965). Binocular interaction in striate cortex of kittens reared with artificial squint. Journal of Neurophysiology, 28, 1041–1059. Issa, N., Trachtenberg, J., Chapman, B., Zahs, K., & Stryker, M. (1999). The critical period for ocular dominance plasticity in the ferret’s visual cortex. Journal of Neuroscience, 19, 6965– 6978. Izura, C., & Ellis, A. W. (2004). Age of acquisition effects in translation judgement tasks. Journal of Memory and Language, 50, 165. Johnson, J. S., & Newport, E. L. (1989). Critical period effects in second language learning: The influence of maturational state on the acquisition of English as a second language. Cognitive Psychology, 21, 60 –99. Juhasz, B. J. (2005). Age-of-acquisition effects in word and picture identification. 
Psychological Bulletin, 131, 684 –712. Jusczyk, P. W., Kemler Nelson, D. G., Morgan, J. L., & Demuth, K. (1996). Syntactic units, prosody, and psychological reality during infancy. In J. L. Morgan & K. Demuth (Eds.), Signal to syntax: Bootstrapping from speech to grammar in early acquisition (pp. 389 – 408). Mahwah, NJ: Erlbaum. Kang, A. M., Constable, R. T., Gore, J. C., & Avrutin, S. (1999). An event-related fMRI study of implicit phrase-level syntactic and semantic processing. Neuroimage, 10, 555–561. Kello, C. T. (2004). Characterizing the evolutionary dynamics of language. Trends in Cognitive Sciences, 8, 392–394. Kennedy, D., & Norman, C. (Eds.). (2005, July 1). What don’t we know? [special issue]. Science, 309(5731). Knudsen, E. I. (2004). Sensitive periods in the development of the brain and behavior. Journal of Cognitive Neuroscience, 16, 1412–1425. Knudsen, E. I., & Knudsen, P. F. (1990). Sensitive and critical periods for visual calibration of sound localization by barn owls. Journal of Neuroscience, 10, 222–232. Kohnert, K. J., Hernandez, A. E., & Bates, E. (1998). Bilingual performance on the Boston Naming Test: Preliminary norms in Spanish and English. Brain and Language, 65, 422– 440. Kroll, J. F., & de Groot, A. M. B. (1997). Lexical and conceptual memory in the bilingual: Mapping form to meaning in two languages. In A. M. B. de Groot & J. F. Kroll (Eds.), Tutorials in bilingualism: Psycholinguistic perspectives (pp. 169 –199). Mahwah, NJ: Erlbaum. Kroll, J. F., & de Groot, A. M. B. (2005). Handbook of bilingualism: Psycholinguistic approaches. Oxford, England: Oxford University Press. Kroll, J. F., & Tokowicz, N. (2005). Models of bilingual representation and processing: Looking back and to the future. In J. F. Kroll & A. M. B. DeGroot (Eds.), Handbook of bilingualism: Psycholinguistic approaches (pp. 531–533). Oxford, England: Oxford University Press. Kroll, J. F., Tokowicz, N., & Nicol, J. L. (2001). 
The development of conceptual representation for words in a second language. In P. C. Muysken (Ed.), One mind, two languages: Bilingual language processing (pp. 49–71). Malden, MA: Blackwell. Kuhl, P. K. (2000). A new view of language acquisition. Proceedings of the National Academy of Sciences of the United States of America, 97, 11850–11857. Kuhl, P. K. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5, 831–841. Kutas, M., & Hillyard, S. A. (1980). Event-related brain potentials to semantically inappropriate and surprisingly large words. Biological Psychology, 11, 99–116. Kutas, M., & Van Petten, C. (1988). Event-related brain potential studies of language. In P. K. Ackles, J. R. Jennings, & M. G. H. Coles (Eds.), Advances in psychophysiology (Vol. 3, pp. 139–187). Greenwich, CT: JAI Press. Lambon Ralph, M., & Ehsan, S. (2006). Age of acquisition effects depend on the mapping between representations and the frequency of occurrence: Empirical and computational evidence. Visual Cognition, 13, 928–948. Lewis, M. B. (1999). Age of acquisition in face categorization: Is there an instance-based account? Cognition, 71, B23–B39. Lewis, M. B., Gerhand, S., & Ellis, H. D. (2001). Re-evaluating age-of-acquisition effects: Are they simply cumulative-frequency effects? Cognition, 78, 189–205. Li, P., & Farkas, I. (2002). A self-organizing connectionist model of bilingual processing. In R. H. J. Altarriba (Ed.), Bilingual sentence processing (pp. 59–85). North-Holland: Elsevier Science. Li, P., Farkas, I., & MacWhinney, B. (2004). Early lexical development in a self-organizing neural network. Neural Networks, 17, 1345–1362. Li, P., Jin, Z., & Tan, L. H. (2004). Neural representations of nouns and verbs in Chinese: An fMRI study. Neuroimage, 21, 1533–1541. Li, P., Zhao, X., & MacWhinney, B. (in press). Dynamic self-organization and early lexical development in children. Cognitive Science. Liberman, A. M., & Mattingly, I. G.
(1985). The motor theory of speech perception revised. Cognition, 21, 1–36. Lieberman, P. (2000). Human language and our reptilian brain: The subcortical bases of speech, syntax, and thought. Cambridge, MA: Harvard University Press. Liu, H., Bates, E., & Li, P. (1992). Sentence interpretation in bilingual speakers of English and Chinese. Applied Psycholinguistics, 13, 451– 484. Lyons, J. (1977). Semantics (Vol. 2). Cambridge, United Kingdom: Cambridge University Press. Mackay, I. R. A., & Flege, J. E. (2004). Effects of the age of second language learning on the duration of first and second language sentences: The role of suppression. Applied Psycholinguistics, 25, 373–396. McClelland, J. L., & Patterson, K. (2002). Rules or connections in past- This document is copyrighted by the American Psychological Association or one of its allied publishers. This article is intended solely for the personal use of the individual user and is not to be disseminated broadly. AGE OF ACQUISITION MECHANISMS tense inflections: What does the evidence rule out? Trends in Cognitive Sciences, 6, 465– 472. Mechelli, A., Crinion, J. T., Noppeney, U., O’Doherty, J., Ashburner, J., Frackowiak, R. S., et al. (2004). Neurolinguistics: Structural plasticity in the bilingual brain. Nature, 431, 757. Meschyan, G., & Hernandez, A. (2002). Age of acquisition and word frequency. Memory & Cognition, 30, 262–269. Meschyan, G., & Hernandez, A. (2006). Impact of language proficiency and orthographic transparency on bilingual word reading: An fMRI investigation. Neuroimage, 29, 1135–1140. Miceli, G., Turriziani, P., Caltagirone, C., Capasso, R., Tomaiuolo, F., & Caramazza, A. (2002). The neural correlates of grammatical gender: An fMRI investigation. Journal of Cognitive Neuroscience, 14, 618 – 628. Monaghan, J., & Ellis, A. W. (2002a). Age of acquisition and the completeness of phonological representations. Reading and Writing, 15, 759 –788. Monaghan, J., & Ellis, A. W. (2002b). 
What exactly interacts with spelling–sound consistency in word naming? Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 183–206. Moore, V., Smith-Spark, J. H., & Valentine, T. (2004). The effects of age of acquisition on object perception. European Journal of Cognitive Psychology, 16, 417– 439. Moore, V., & Valentine, T. (1998). The effect of age of acquisition on speed and accuracy of naming famous faces. Quarterly Journal of Experimental Psychology: Human Experimental Psychology, 51A, 485– 513. Moore, V., & Valentine, T. (1999). The effects of age of acquisition on processing famous faces and names: Exploring the locus and proposing a mechanism. In M. H. S. Stoness (Ed.), Proceedings of the 21st Annual Meeting of the Cognitive Science Society (pp. 749 –754). Vancouver, Canada: Erlbaum. Morgan, J. L., & Demuth, K. (1996a). Signal to syntax: An overview. In J. L. Morgan & K. Demuth (Eds.), Signal to syntax: Bootstrapping from speech to grammar in early acquisition (pp. 1–22). Mahwah, NJ: Erlbaum. Morgan, J. L., & Demuth, K. (Eds.). (1996b). Signal to syntax: Bootstrapping from speech to grammar in early acquisition. Mahwah, NJ: Erlbaum. Moro, A., Tettamanti, M., Perani, D., Donati, C., Cappa, S. F., & Fazio, F. (2001). Syntax and the brain: Disentangling grammar by selective anomalies. Neuroimage, 13, 110 –118. Morrison, C. M., Chappell, T. D., & Ellis, A. W. (1997). Age of acquisition norms for a large set of object names and their relation to adult estimates and other variables. Quarterly Journal of Experimental Psychology: Human Experimental Psychology, 50A, 528 –559. Morrison, C. M., & Ellis, A. W. (1995). Roles of word frequency and age of acquisition in word naming and lexical decision. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 116 –133. Morrison, C. M., & Ellis, A. W. (2000). Real age of acquisition effects in word naming and lexical decision. British Journal of Psychology, 91, 167–180. Morrison, C. 
M., & Gibbons, Z. C. (2006). Does age of acquisition affect semantic processing? Visual Cognition, 13, 949 –967. Morrison, C. M., Hirsh, K. W., Chappell, T., & Ellis, A. W. (2002). Age and age of acquisition: An evaluation of the cumulative frequency hypothesis. European Journal of Cognitive Psychology, 14, 435– 459. Munro, M. J., Flege, J. E., & MacKay, I. R. A. (1996). The effects of age of second language learning on the production of English vowels. Applied Psycholinguistics, 17, 313–334. Norcia, A. M. (1996). Abnormal motion processing and binocularity: Infantile esotropia as a model system for effects of early interruptions of binocularity. Eye, 10, 259 –265. Olson, C., & Freeman, R. (1980). Profile of the sensitive period for monocular deprivation in kittens. Experimental Brain Research, 39, 17–21. Osterhout, L., Allen, M. D., McLaughlin, J., & Inoue, K. (2002). Brain 649 potentials elicited by prose-embedded linguistic anomalies. Memory & Cognition, 30, 1304 –1312. Pallier, C., Dehaene, S., Poline, J. B., LeBihan, D., Argenti, A. M., Dupoux, E., et al. (2003). Brain imaging of language plasticity in adopted adults: Can a second language replace the first? Cerebral Cortex, 13, 155–161. Patterson, K., Ralph, M. A. L., Hodges, J. R., & McClelland, J. L. (2001). Deficits in irregular past-tense verb morphology associated with degraded semantic knowledge. Neuropsychologia, 39, 709 –724. Perani, D., Dehaene, S., Grassi, F., Cohen, L., Cappa, S., Dupoux, E., et al. (1996). Brain processing of native and foreign languages. NeuroReport, 7, 2439 –2444. Perani, D., Paulesu, E., Galles, N. S., Dupoux, E., Dehaene, S., Bettinardi, V., et al. (1998). The bilingual brain: Proficiency and age of acquisition of the second language. Brain, 121, 1841–1852. Pettigrew, J. D. (1972). The importance of early visual experience for neurons of the developing geniculostriate system. Investigative Ophthalmology, 11, 386 –394. Pickett, E. 
R., Kuniholm, E., Protopapas, A., Friedman, J., & Lieberman, P. (1998). Selective speech motor, syntax and cognitive deficits associated with bilateral damage to the putamen and the head of the caudate nucleus: A case study. Neuropsychologia, 36, 173–188. Pinker, S. (1991). Rules of language. Science, 253, 530 –535. Pinker, S., & Ullman, M. T. (2002). The past and future of the past tense. Trends in Cognitive Sciences, 6, 456. Poldrack, R. A., Wagner, A. D., Prull, M. W., Desmond, J. E., Glover, G. H., & Gabrieli, J. D. (1999). Functional specialization for semantic and phonological processing in the left inferior prefrontal cortex. Neuroimage, 10, 15–35. Potter, M. C., So, K., von Eckardt, B., & Feldman, L. B. (1984). Lexical and conceptual representation in beginning and proficient bilinguals. Journal of Verbal Learning and Verbal Behavior, 23, 23–38. Price, C. J., & Devlin, J. T. (2003). The myth of the visual word form area. Neuroimage, 19, 473– 481. Raman, I. (2006). On the age-of-acquisition effects in word naming and orthographic transparency: Mapping specific or universal? Visual Cognition, 13, 1044 –1053. Rogers, T. T., Lambon Ralph, M. A., Hodges, J. R., & Patterson, K. (2004). Natural selection: The impact of semantic impairment on lexical and object decision. Cognitive Neuropsychology, 21, 331–352. Schlaug, G., Jancke, L., Huang, Y., Staiger, J. F., & Steinmetz, H. (1995). Increased corpus callosum size in musicians. Neuropsychologia, 33, 1047–1055. Schreuder, R., & Weltens, B. (Eds.). (1993). The bilingual lexicon. Amsterdam, the Netherlands: John Benjamins. Seidenberg, M. S., & Zevin, J. D. (2006). Connectionist models in developmental cognitive neuroscience: Critical periods and the paradox of success. In Y. Munakata & M. Johnson (Eds.), Attention & Performance XXI: Processes of change in brain and cognitive development (pp. 585– 612). Oxford, England: Oxford University Press. Shi, R. (2006). Basic syntactic categories in early language development. 
In P. Li, L. H. Tan, E. Bates, & O. J. L. Tzeng (Eds.), Handbook of East Asian psycholinguistics (Vol. 1: Chinese, pp. 90 –102). Cambridge, United Kingdom: Cambridge University Press. Shi, R., Morgan, J., & Allopenna, P. (1998). Phonological and acoustic bases for earliest grammatical category assignment: A cross-linguistic perspective. Journal of Child Language, 25, 169 –201. Sholl, A., Sankaranarayanan, A., & Kroll, J. F. (1995). Transfer between picture naming and translation: A test of asymmetries in bilingual memory. Psychological Science, 6, 45– 49. Shuster, L. I., & Lemieux, S. K. (2005). An fMRI investigation of covertly and overtly produced mono- and multisyllabic words. Brain and Language, 93, 20 –31. Smith, K. U., & Greene, P. (1963). A critical period in maturation of This document is copyrighted by the American Psychological Association or one of its allied publishers. This article is intended solely for the personal use of the individual user and is not to be disseminated broadly. 650 HERNANDEZ AND LI performance with space-displaced vision. Perceptual Motor Skills, 17, 627– 639. Smith, L. B., & Thelen, E. (2003). Development as a dynamic system. Trends in Cognitive Sciences, 7, 343–348. Smith, M. A., Cottrell, G. W., & Anderson, K. (2001). The early word catches the weights. In T. K. Leen, T. G. Dietterich, & V. Tresp (Eds.), Advances in neural information processing systems (Vol. 13, pp. 52–58). Cambridge, MA: MIT Press. Snow, C. E., & Hoefnagel-Höhle, M. (1978). The critical period for language acquisition: Evidence from second language learning. Child Development, 49, 1114 –1128. Squire, L. R., & Zola, S. M. (1996). Structure and function of declarative and nondeclarative memory systems. Proceedings of the National Academy of Sciences of the United States of America, 93, 13515–13522. Stafford, C. A. (1984). Critical period plasticity for visual function: Definition in monocularly deprived rats using visually evoked potentials. 
Opthalmic Physiology, 4, 95–100. Steyvers, M., & Tenenbaum, J. B. (2005). The large-scale structure of semantic networks: Statistical analyses and a model of semantic growth. Cognitive Science, 29, 41–78. Tees, R. C. (1967). Effects of early auditory restriction in the rat on adult pattern discrimination. Journal of Comparative and Physiological Psychology, 63, 389 –393. Thompson, S. A., Patterson, K., & Hodges, J. R. (2003). Left/right asymmetry of atrophy in semantic dementia: Behavioral– cognitive implications. Neurology, 61, 1196 –1203. Thompson-Schill, S. L., D’Esposito, M., Aguirre, G. K., & Farah, M. J. (1997). Role of left inferior prefrontal cortex in retrieval of semantic knowledge: A reevaluation. Proceedings of the National Academy of Sciences of the United States of America, 94, 14792–14797. Tokowicz, N., & MacWhinney, B. (2005). Implicit vs. explicit measures of sensitivity to violations in L2 grammar: An event-related potential investigation. Studies in Second Language Acquisition, 27, 173–204. Trainor, L. J. (2005). Are there critical periods for musical development? Developmental Psychobiology, 46, 262–278. Ullman, M. (2001a). A neurocognitive perspective on language: The declarative/procedural model. Nature Reviews Neuroscience, 2, 717–726. Ullman, M. T. (2001b). The neural basis of lexicon and grammar in first and second language: The declarative/procedural model. Bilingualism: Language and Cognition, 4, 105–122. Ullman, M. T. (2004). Contributions of memory circuits to language: The declarative/procedural model. Cognition, 92, 231–270. Ullman, M. T. (2005). A cognitive neuroscience perspective on second language acquisition: The declarative/procedural model. In C. Sanz (Ed.), Mind and context in adult second language acquisition: Methods, theory, and practice (pp. 141–178). Washington, DC: Georgetown University Press. Ullman, M., Corkin, S., Coppola, M., Hickok, G., Growden, J., & Koroshetz, W. (1997). 
A neural dissociation within language: Evidence that the mental dictionary is part of declarative memory and that grammatical rules are processed by the procedural system. Journal of Cognitive Neuroscience, 9, 289 –299. Ventureyra, V. A. G., Pallier, C., & Yoo, H.-Y. (2004). The loss of first language phonetic perception in adopted Koreans. Journal of Neurolinguistics, 17, 79 –91. Wartenburger, I., Heekeren, H. R., Abutalebi, J., Cappa, S. F., Villringer, A., & Perani, D. (2003). Early setting of grammatical processing in the bilingual brain. Neuron, 37, 159 –170. Watanabe, D., Savion-Lemieux, T., & Penhune, V. B. (2007). The effect of early musical training on adult motor performance: Evidence for a sensitive period in motor learning. Experimental Brain Research, 176, 332–340. Weber-Fox, C., & Neville, H. J. (1996). Maturational constraints on functional specializations for language processing: ERP and behavioral evidence in bilingual speakers. Journal of Cognitive Neuroscience, 8, 231–256. Werker, J. F., & Tees, R. C. (2005). Speech perception as a window for understanding plasticity and commitment in language systems of the brain. Developmental Psychobiology, 46, 233–234. Wiesel, T. N., & Hubel, D. H. (1965). Extent of recovery from the effects of visual deprivation in kittens. Journal of Neurophysiology, 28, 1060 – 1072. Xue, G., Dong, Q., Jin, Z., Zhang, L., & Wang, Y. (2004). An fMRI study with semantic access in low proficiency second language learners. NeuroReport, 15, 791–796. Zatorre, R. J. (1989). On the representation of multiple languages in the brain: Old problems and new directions. Brain and Language, 36, 127–147. Zevin, J. D., & Seidenberg, M. S. (2002). Age of acquisition effects in word reading and other tasks. Journal of Memory and Language, 47, 1–29. Zevin, J. D., & Seidenberg, M. S. (2004). Age-of-acquisition effects in reading aloud: Tests of cumulative frequency and frequency trajectory. Memory & Cognition, 32, 31–38. Zhao, X., & Li, P. 
(2006). A self-organizing connectionist model of bilingual lexical development. In Proceedings of the 28th Annual Conference of the Cognitive Science Society (p. 2639). Mahwah, NJ: Erlbaum. Received June 19, 2006 Revision received December 29, 2006 Accepted January 3, 2007 䡲
The cognitive niche: Coevolution of intelligence, sociality, and language

Steven Pinker
Department of Psychology, Harvard University, Cambridge, MA 02138

Although Darwin insisted that human intelligence could be fully explained by the theory of evolution, the codiscoverer of natural selection, Alfred Russel Wallace, claimed that abstract intelligence was of no use to ancestral humans and could only be explained by intelligent design. Wallace's apparent paradox can be dissolved with two hypotheses about human cognition. One is that intelligence is an adaptation to a knowledge-using, socially interdependent lifestyle, the "cognitive niche." This embraces the ability to overcome the evolutionarily fixed defenses of plants and animals by applications of reasoning, including weapons, traps, coordinated driving of game, and detoxification of plants. Such reasoning exploits intuitive theories about different aspects of the world, such as objects, forces, paths, places, states, substances, and other people's beliefs and desires. The theory explains many zoologically unusual traits in Homo sapiens, including our complex toolkit, wide range of habitats and diets, extended childhoods and long lives, hypersociality, complex mating, division into cultures, and language (which multiplies the benefit of knowledge because know-how is useful not only for its practical benefits but as a trade good with others, enhancing the evolution of cooperation). The second hypothesis is that humans possess an ability of metaphorical abstraction, which allows them to coopt faculties that originally evolved for physical problem-solving and social coordination, apply them to abstract subject matter, and combine them productively. These abilities can help explain the emergence of abstract cognition without supernatural or exotic evolutionary forces and are in principle testable by analyses of statistical signs of selection in the human genome.
cognition | human evolution | metaphor

This paper results from the Arthur M. Sackler Colloquium of the National Academy of Sciences, "In the Light of Evolution IV: The Human Condition," held December 10–12, 2009, at the Arnold and Mabel Beckman Center of the National Academies of Sciences and Engineering in Irvine, CA. The complete program and audio files of most presentations are available on the NAS Web site. Author contributions: S.P. designed research, performed research, and wrote the paper. The author declares no conflict of interest. This article is a PNAS Direct Submission. PNAS | May 11, 2010 | vol. 107 | suppl. 2 | 8993–8999

The bicentennial of Darwin's birth and sesquicentennial of the publication of the Origin of Species have focused the world's attention on the breathtaking scope of the theory of natural selection, not least its application to the human mind. "Psychology will be based on a new foundation," Darwin famously wrote at the end of the Origin, "that of the necessary acquirement of each mental power and capacity by gradation. Light will be thrown on the origin of man and his history." Far less attention has been given to the codiscoverer of natural selection, Alfred Russel Wallace, despite his prodigious scientific genius, and it is unlikely that the bicentennial of his birth in 1823 will generate the same hoopla. One reason was that Wallace turned out to be less prescient about the power of natural selection as an explanation of adaptive complexity in the living world. In particular, Wallace notoriously claimed that the theory of evolution by natural selection was inadequate to explain human intelligence:

Our law, our government, and our science continually require us to reason through a variety of complicated phenomena to the expected result. Even our games, such as chess, compel us to exercise all these faculties in a remarkable degree. . . . A brain slightly larger than that of the gorilla would . . . fully have sufficed for the limited mental development of the savage; and we must therefore admit that the large brain he actually possesses could never have been solely developed by any of those laws of evolution, whose essence is, that they lead to a degree of organization exactly proportionate to the wants of each species, never beyond those wants. . . . Natural selection could only have endowed savage man with a brain a few degrees superior to that of an ape, whereas he actually possesses one very little inferior to that of a philosopher. (1, pp. 340, 343)

The upshot, claimed Wallace, was that "a superior intelligence has guided the development of man in a definite direction, and for a special purpose" (1, p. 359).

Few scientists today accept Wallace's creationism, teleology, or spiritualism. Nonetheless it is appropriate to engage the profound puzzle he raised; namely, why do humans have the ability to pursue abstract intellectual feats such as science, mathematics, philosophy, and law, given that opportunities to exercise these talents did not exist in the foraging lifestyle in which humans evolved and would not have parlayed themselves into advantages in survival and reproduction even if they did? I suggest that the puzzle can be resolved with two hypotheses. The first is that humans evolved to fill the "cognitive niche," a mode of survival characterized by manipulating the environment through causal reasoning and social cooperation. The second is that the psychological faculties that evolved to prosper in the cognitive niche can be coopted to abstract domains by processes of metaphorical abstraction and productive combination, both vividly manifested in human language.

The Cognitive Niche

The term cognitive niche was proposed by Tooby and DeVore (2) to explain the constellation of zoologically unusual features of modern Homo sapiens without resorting to exotic evolutionary mechanisms. Their account begins with the biological commonplace that organisms evolve at one another's expense. With the exception of fruit, virtually every food source of one animal is a body part of some other organism, which would just as soon keep that body part for itself. As a result, organisms evolve defenses against being eaten. Animals evolve speed, stealth, armor, and defensive maneuvers. Plants cannot defend themselves with their behavior, so they resort to chemical warfare, and have evolved a pharmacopeia of poisons, irritants, and bitter-tasting substances to deter herbivores with designs on their flesh. In response, eaters evolve measures to penetrate these defenses, such as offensive weapons, even greater speed or stealth, and organs such as the liver that detoxify plant poisons. This in turn selects for better defenses, selecting for better offenses, and so on, in a coevolutionary arms race, escalating over many generations of natural selection. Tooby and DeVore (2) suggest that humans exploit a cognitive niche in the world's ecosystems. In biology, a "niche" is sometimes defined as "the role an organism occupies in an ecosystem." The cognitive niche is a loose extension of this concept, based on the idea that in any ecosystem, the possibility exists for an organism to overtake other organisms' fixed defenses by cause-and-effect reasoning and cooperative action—to deploy information and inference, rather than particular features of physics and chemistry, to extract resources from other organisms in opposition to their adaptations to protect those resources. These inferences are played out internally in mental models of the world, governed by intuitive conceptions of physics, biology, and psychology, including the psychology of animals. It allows humans to invent tools, traps, and weapons, to extract poisons and drugs from other animals and plants, and to engage in coordinated action, for example, fanning out over a landscape to drive and concentrate game, in effect functioning like a huge superorganism.
These cognitive stratagems are devised on the fly in endless combination suitable to the local ecology. They arise by mental design and are deployed, tested, and fine-tuned by feedback in the lifetimes of individuals, rather than arising by random mutation and being tuned over generations by the slow feedback of differential survival and reproduction. Because humans develop offenses in real time that other organisms can defend themselves against only in evolutionary time, humans have a tremendous advantage in evolutionary arms races. Even before the current anthropogenic mass extinction, prehistoric humans are believed to have caused significant extinctions of large fauna whenever they first entered an ecosystem. The theory of the cognitive niche helps explain many zoologically unusual features of H. sapiens: traits that are universal across human cultures (3) but are either unique or hyperdeveloped (especially in combination) with respect to the rest of the animal kingdom. Three in particular make our species stand out. Technological Know-How. Humans use and depend upon many kinds of tools, which involve multiple parts and complicated methods of fabrication. The tools are deployed in extended sequences of behavior and are acquired both by individual discovery and learning from others. They are deployed to capture and kill animals, to process foods (including cooking, fermenting, soaking, peeling, and crushing them to remove toxins and increase the availability of nutrients), and to generate and administer medicinal drugs (4, 5). This reasoning is supported by “intuitive theories”—folk understandings of physics (in particular, objects, substances, and the forces that impinge on them), geometry (places, paths, and directions), biology (essences that give organisms their form and propel their growth, motion, and physiological processes), and psychology (internal, immaterial beliefs and desires) (6–10). Cooperation Among Nonkin. 
Humans cooperate with other humans: they trade goods, favors, know-how, and loyalty, and act collectively in child-rearing, gathering, hunting, and defense. This cooperation extends to other humans who are not related to them, in shifting partnerships, coalitions, and trading relationships, and thus must be explained not by kin selection but by mutualism or reciprocity (11). The evolution of cooperation by reciprocal altruism requires a number of cognitive adaptations, which in fact appear to be well-developed in humans (11). They include the recognition of individuals (12); episodic memory for their actions (13); an ability to classify those actions in terms of whether they violate a reciprocity contract (14, 15); a suite of moral emotions such as sympathy, gratitude, anger, guilt, and trust, which impel an individual to initiate cooperation, reward reciprocators, and punish cheaters (11, 16); and the drives to ascertain the competence, integrity, and generosity of others (through gossip and other forms of due diligence) and to burnish one's own reputation for these traits (17, 18). Because humans cooperate by at least three different kinds of relationship, governed by incompatible rules for the distribution of resources—reciprocal altruism, mutualistic sharing, deferring to dominant individuals—dyads can dynamically switch among kinds of relationship according to their history, kinship, social support, the resource at stake, and the context (19). The demands of this negotiation account for many of the complex aspects of human social life such as politeness, hypocrisy, ritual, and taboo (20, 21).

Grammatical Language. Although many animals communicate, humans appear to be unique in using an open-ended combinatorial system, grammatical language.
In grammatical language, signals (words) are arbitrarily paired with concepts, and can be rearranged in novel hierarchical configurations (phrases embedded within phrases) in such a way that the meaning of the sequence can be computed from the meanings of the individual symbols and the way that they are arranged (22–24). The semantic meanings of the symbols (nouns, verbs, prepositions, tense markers, and so on) are related to the basic cognitive categories that define intuitive theories: objects, substances, motion, causation, agency, space, time (9, 25). The syntactic arrangements serve to express relationships among these concepts such as who did what to whom, what is where, and what is true of what (9). Although every language must be learned, humans have an ability to coin, pool, and learn new words and rules and thus are not dependent on some other species as teachers (as is the case with apes), or even on a longstanding linguistic community, to develop and use language (26). Grammatical language has clear advantages in the transmission of information. Because it allows messages to be composed out of elements, rather than drawn from a finite repertoire, it confers the ability to express an unlimited number of novel messages (27, 28). Journalists say that when a dog bites a man, that is not news, but when a man bites a dog, that is news: the power of grammar is that it allows us to convey news, by arranging familiar words in novel combinations. Like other digital combinatorial systems in biology (RNA, DNA, proteins), language can generate vast numbers of structured combinations. The number of possible sentences (each corresponding to a distinct message) is proportional to the number of words that may appear in a position in a sentence raised to the power of the length of the sentence. 
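The growth rate just described, the number of word choices per position raised to the power of the sentence length, can be made concrete with a minimal Python sketch. The figures used below (ten choices per position, a 20-word sentence) are illustrative assumptions taken up in the estimate that follows, not measured values.

```python
def possible_sentences(choices_per_position: int, length: int) -> int:
    """Count distinct sentences if each of `length` positions can be
    filled by any of `choices_per_position` words. This is an
    idealization: real grammars constrain which words may follow which."""
    return choices_per_position ** length

# With roughly ten choices per position and a 20-word sentence:
print(possible_sentences(10, 20))  # prints 100000000000000000000, i.e., 10**20
```

Note that each added word position multiplies the count tenfold under these assumptions, which is why even modest sentence lengths yield astronomically many distinct messages.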
With an approximate geometric mean of ten choices available at every position in a sentence, one can estimate that a typical English speaker can easily produce or comprehend at least 10^20 distinct sentences (29). This in turn makes it possible for language users to share an unlimited number of messages concerning specific events (who did what to whom, when, where, and why), generalized expertise (to accomplish this, do that), and flexible social contracts (if you do this, I'll do that).

Anyone who is skeptical that sophisticated reasoning, collaboration, and communication can bring survival advantages in a prehistoric lifestyle need only read ethnographic accounts of hunting or gathering in contemporary foraging peoples. One of many examples of hunter-gatherer ingenuity can be found in this description from the anthropologist Napoleon Chagnon of how the Yanomamö hunt armadillo:

Armadillos live several feet underground in burrows that can run for many yards and have several entries. When the Yanomamö find an active burrow, as determined by the presence around the entry of a cloud of insects found nowhere else, they set about smoking out the armadillo. The best fuel for this purpose is a crusty material from old termite nests, which burns slowly and produces an intense heat and much heavy smoke. A pile of this material is ignited at the entry of the burrow, and the smoke is fanned inside. The other entries are soon detected by the smoke rising from them, and they are sealed with dirt. The men then spread out on hands and knees, holding their ears to the ground to listen for armadillo movements in the burrow. When they hear something, they dig there until they hit the burrow and, with luck, the animal. They might have to try several times, and it is hard work—they have to dig down two feet or more. On one occasion, after the hunters had dug several holes, all unsuccessful . . . one of them ripped down a large vine, tied a knot in the end of it, and put the knotted end into the entrance. Twirling the vine between his hands, he slowly pushed it into the hole as far as it would go. As his companions put their ears to the ground, he twirled the vine, causing the knot to make a noise, and the spot was marked. He broke off the vine at the burrow entrance, pulled out the piece in the hole, and laid it on the ground along the axis of the burrow. The others dug down at the place where they had heard the knot and found the armadillo on their first attempt, asphyxiated from the smoke. (30, pp. 78–79)

This jackpot was a reward for extraordinary feats of folk reasoning in taxonomy, physiology, physics, and geometry, some passed down from earlier generations, some improvised on the spot. And it depended on cooperative behavior among many individuals, coordinated by language.

Other Extreme Human Traits. Other zoologically unusual features of H. sapiens may be explained by the theory of the cognitive niche. The vast range of habitats and foods exploited by our species may in part have been facilitated by natural selection adapting the genes of local populations to ambient conditions such as solar radiation, diet, and disease (31–34). But these local adaptations pale in comparison with those made possible by human technology. The Inuit's colonization of high latitudes may have been facilitated by adaptive changes in body shape and skin pigmentation, but it depended much more on parkas, kayaks, mukluks, igloos, and harpoons. This underscores that the cognitive niche differs from many examples of niches discussed in biology in being defined not as a particular envelope of environmental variables (temperature, altitude, habitat type, and so on), nor as a particular combination of other organisms, but rather as the opportunity that any environment provides for exploitation via internal modeling of its causal contingencies.
Our extended childhoods may serve as an apprenticeship in a species that lives by its wits, and our long lives may reflect a tilt in the tradeoff between reproduction and somatic maintenance toward the latter so as to maximize the returns on the investment during childhood. The dependence of children's readiness for adulthood on their mastery of local culture and know-how may also shift the balance in male parental investment decisions between caring for existing offspring and seeking new mating opportunities. This in turn may have led to biparental care, long-term pair bonding, complex sexuality (such as female sexuality being unlinked from fertility, and sexual relationships subject to variation and negotiation), and multigeneration parental investment (35). Support for these hypotheses comes from the data of Kaplan (36), who has shown that among hunter-gatherers, prolonged childhood cannot pay off without long life spans. The men do not produce as many calories as they consume until age 18; their output then peaks at 32, plateaus through 45, then gently declines until 65. This shows that hunting is a knowledge-dependent skill, invested in during a long childhood and paid out over a long life. Finally, the division of humankind into cultures differing in language, customs, mores, diets, and so on, is a consequence of humans' dependence on learned information (words, recipes, tool styles, survival techniques, cooperative agreements, and customs) and their peripatetic natures. As splinter groups lose touch with their progenitors over time, the know-how and customs that the two groups accumulate will diverge from one another (37).

Hominid Evolution and the Cognitive Niche.
Given that the opportunity to exploit environments by technology and cooperation is independent of particular ecosystems, why was it Pliocene hominids that entered (or, more accurately, constructed) the cognitive niche and evolved sophisticated cognition, language, and sociality, rather than a population from some other taxon or epoch? This kind of historical question is difficult, perhaps impossible, to answer precisely because the unusualness of H. sapiens precludes statistical tests of correlations between the relevant traits and environments across species. But if we consider the cognitive niche as a suite of mutually reinforcing selection pressures, each of which exists individually in weaker form for other species, we can test whether variation in intelligence within a smaller range, together with a consideration of the traits that were likely possessed by extinct human ancestors, supports particular conjectures. Obviously any orthogenetic theory (such as Wallace's) stipulating that the emergence of our species was the goal of the evolutionary process is inconsistent with the known mechanisms of evolution. It is also apparent that intelligence, which depends on a large brain, is not a free good in evolution (38). Its costs include the metabolic demands of expensive neural tissue, compromises in the anatomy of the female pelvis necessary for bearing large-headed offspring, and the risks of harm from birth, falls, and the mutation and parasite load carried by such a complex organ. The proper framing of the question must ask which circumstances made the benefits of intelligence outweigh the costs. The hypothesis is that hominid ancestors, more so than any other species, had a collection of traits that tilted the payoffs toward further investment in intelligence. One enabling factor may have been the possession of prehensile hands (an adaptation to arboreality) in combination with bipedality (presumably an adaptation to locomotion).
We know from the fossil record that both preceded the expansion of the brain and the development of tool use (39). Perhaps the availability of precision manipulators meant that any enhanced ability to imagine how one might alter the environment could be parlayed into the manufacture and carrying of tools. A second contributor to the evolution of intelligence among hominid ancestors may have been an opportunistic diet that included meat and other hard-to-obtain sources of protein (5). Meat is not only a concentrated source of nutrients for a hungry brain but may have selected in turn for greater intelligence, because it requires more cleverness to outwit an animal than to outwit fruit or leaves. A third may have been group living, again with the possibility of positive feedback: groups allow acquired skills to be shared but also select for the social intelligence needed to prosper from cooperation without being exploited. Indirect support for the hypothesis that sociality and carnivory contributed to the evolution of human intelligence comes from comparative studies showing that greater intelligence across animal species is correlated with brain size, carnivory, group size, and extended childhoods and lifespans (40, 41). I am unaware of any review that has looked for a correlation between possession of prehensile appendages and intelligence, although it is tantalizing to learn that octopuses are highly intelligent (42).

Coevolution of Cognition, Language, and Sociality.
Many biologists argue that a niche is something that is constructed, rather than simply entered, by an organism (43, 44). An organism's behavior alters its physical surroundings, which affects the selection pressures, in turn selecting for additional adaptations to exploit that altered environment, and so on. A classic example is the way beavers generated an aquatic niche and evolved additional adaptations to thrive in it.
The particulars of a cognitive niche are similarly constructed, in the sense that initial increments in cooperation, communication, or know-how altered the social environment, and hence the selection pressures, for ancestral hominids. It is surely no coincidence that the psychological abilities underlying technological know-how, open-ended communication, and cooperation among nonkin are all hyperdeveloped in the same species; each enhances the value of the other two. (A similar feedback loop may connect intelligence with the life-history and behavioral-ecology variables mentioned in the preceding section.) An obvious interdependency connects language and know-how. The end product of learning survival skills is information stored in one's brain. Language is a means of transmitting that information to another brain. The ability to share information via language leverages the value of acquiring new knowledge and skills. One does not have to recapitulate the trial-and-error, lucky accidents, or strokes of genius of other individuals but can build on their discoveries, avoiding the proverbial waste of reinventing the wheel. Language not only lowers the cost of acquiring a complex skill but multiplies the benefit. The knowledge not only can be exploited to manipulate the environment, but it can be shared with kin and other cooperators. Indeed, among commodities, information is unusually conducive to being shared because it is what economists call a "nonrival good": it can be duplicated without loss. If I give you a fish (a rival good), I no longer have the fish; as the saying might have gone, you cannot eat your fish and have it. But if I teach you to fish, it does not mean that I am now amnesic for the skill of fishing; that valuable commodity now exists in twice as many copies.
Language can multiply this proliferation: for the minor cost of a few seconds of breath, a speaker can confer on a listener the invaluable benefit of a new bit of know-how. Crucially, a commodity that confers a high benefit on others at a low cost to the self is a key ingredient in the evolution of cooperation by reciprocal altruism, because both parties can profit from their exchange over the long run (11). The ability to share know-how through language thus may have been a major accelerant in the evolution of cooperation because it gives humans both the incentive and the means to cooperate. People can trade not only goods but know-how and favors, and the negotiations are not limited to what can be exchanged there and then but extend to goods and favors transferred at widely separated times. Language may foster cooperation, but it also depends on it, because there is no advantage in sharing information with adversaries (as we see in the expression "to be on speaking terms"). The inherent synergies among language, intelligence, sociality, enhanced paternal and grandmaternal investment, extended lives and childhoods, and diverse habitats and food sources suggest that these features cohere as a characterization of the cognitive niche, with enhancements in each serving as an additional selection pressure for the others. As far as timing is concerned, we would expect that the corresponding adaptations coevolved gradually, beginning with the first hominid species that possessed some minimal combination of preconditions (e.g., bipedality, group living, omnivory), increasing in complexity through the lineage of species that showed signs of tool use, cooperation, and anatomical adaptations to language, and exploding in behaviorally modern H. sapiens.

Evaluating the Theory of the Cognitive Niche
The theory of the cognitive niche, I believe, has several advantages as an explanation of the evolution of the human mind.
It incorporates facts about the cognitive, affective, and linguistic mechanisms discovered by modern scientific psychology rather than appealing to vague, prescientific black boxes like “symbolic behavior” or “culture.” To be specific: the cognitive adaptations comprise the “intuitive theories” of physics, biology, and psychology; the adaptations for cooperation comprise the moral emotions and mechanisms for remembering individuals and their actions; the linguistic adaptations comprise the combinatorial apparatus for grammar and the syntactic and phonological units that it manipulates. The selection pressures that the theory invokes are straightforward and do not depend on some highly specific behavior (e.g., using projectile weapons, keeping track of wandering children) or environment (e.g., a particular change in climate), none of which were likely to be in place over the millions of years in which modern humans evolved their large brains and complex tools. Instead it invokes the intrinsic advantages of know-how, cooperation, and communication that we recognize uncontroversially in the contemporary world. Science and technology, organizations (such as corporations, universities, armies, and governments), and communication media (such as the press, mail, telephones, television, radio, and the internet) are, respectively, just the exercise of cognition, sociality, and language writ large, and they singly and jointly enable the achievement of outcomes that would be impossible without them. The theory of the cognitive niche simply extrapolates these advantages backward in time and scale. Moreover, the theory requires no radical revision to evolutionary theory: neither the teleology and creationism of Wallace, nor mechanisms that are exotic, extreme, or invoked ad hoc for our species. 
Although grammatical language is unique to humans, and our intelligence and sociality are hyperdeveloped, it is not uncommon for natural selection to favor unique or extreme traits, such as the elephant's trunk, the narwhal's tusk, the whale's baleen, the platypus's duckbill, and the armadillo's armor. Given the undeniable practical advantages of reasoning, cooperation, and communication, it seems superfluous, when explaining the evolution of human mental mechanisms, to assign a primary role to macromutations, exaptation, runaway sexual selection, group selection, memetics, complexity theory, cultural evolution (other than what we call "history"), or gene–culture coevolution (other than the commonplace that the products of an organism's behavior are part of its selective environment). The theory can be tested more rigorously, moreover, using the family of relatively new techniques that detect "footprints of selection" in the human genome (by, for example, comparing rates of nonsynonymous and synonymous base pair substitutions or the amounts of variation in a gene within and across species) (32, 45, 46). The theory predicts that there are many genes that were selected in the lineage leading to modern humans whose effects are concentrated in intelligence, language, or sociality. Working backward, it predicts that any genes discovered in modern humans to have disproportionate effects in intelligence, language, or sociality (that is, that do not merely affect overall growth or health) will be found to have been a target of selection. This would differentiate the theory from those that invoke a single macromutation, or genetic changes that affected only global properties of the brain like overall size, or those that attribute all of the complexity and differentiation of human social, cognitive, or linguistic behavior to cultural evolution.
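As an illustrative aside (not part of the original paper), the first "footprints of selection" test mentioned above, the comparison of nonsynonymous and synonymous substitution rates (dN/dS), can be sketched in a few lines. The counts below are hypothetical, and the calculation omits the multiple-hit corrections (e.g., Nei-Gojobori counting or codon models) that real analyses apply.

```python
# Toy dN/dS calculation, a simplified version of the selection test
# described in the text. All counts are hypothetical.

def dn_ds(nonsyn_subs, syn_subs, nonsyn_sites, syn_sites):
    """Ratio of per-site nonsynonymous to synonymous substitution rates.
    ~1 suggests neutral drift, <1 purifying selection, >1 positive selection."""
    dn = nonsyn_subs / nonsyn_sites   # nonsynonymous substitutions per nonsynonymous site
    ds = syn_subs / syn_sites         # synonymous substitutions per synonymous site
    return dn / ds

# Hypothetical counts for a human-chimpanzee comparison of one gene:
ratio = dn_ds(nonsyn_subs=12, syn_subs=3, nonsyn_sites=600, syn_sites=200)
print(ratio)  # 0.02 / 0.015, i.e. about 1.33: a candidate for positive selection
```

A ratio near 1 is consistent with neutral drift, a ratio well below 1 indicates purifying selection, and a ratio above 1 is taken as a signature of positive selection, the kind of evidence cited below for genes such as FOXP2 and ASPM.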
It is not necessary that any of these genes affect just a single trait, that they be the only gene affecting the trait ("the altruism gene," "the grammar gene," and so on) or that they appear de novo in human evolution (as opposed to being functional changes in a gene found in other mammals). The only requirement is that they contribute to the modern human version of these traits. In practice, the genes may be identified as the normal versions of genes that cause disorders of cognition (e.g., retardation, thought disorders, major learning disabilities), disorders of sociality (e.g., autism, social phobia, antisocial personality disorder), or disorders of language (e.g., language delay, language impairment, stuttering, and dyslexia insofar as it is a consequence of phonological impairment). Alternatively, they may be identified as a family of alleles whose variants cause quantitative variation in intelligence, personality, emotion, or language. Several recent discoveries have supported these predictions. The gene for the transcription factor FOXP2 is monomorphic in normally developing humans, and when it is mutated it causes impairments in speech, grammar, and orofacial motor control (47, 48). The human version shows two differences from the version found in great apes, at least one of them functional, and the ape homolog shows only a single, nonfunctional difference from the one found in mice. The pattern of conservation and variation has been interpreted as evidence for a history of selection in the human lineage (49). In addition, several genes expressed in development of the auditory system differ in humans and chimpanzees and show signs of selection in the human lineage. Because the general auditory demands on humans and chimps are similar, it is likely that they were selected for their utility in the comprehension of speech (50).
And the human ASPM gene, which when mutated causes microcephaly and lowered intelligence, also shows signs of selection in the generations since our common ancestor with chimpanzees (51). It is likely that many more genes with cognitive, social, and linguistic effects will be identified in the coming years, and the theory of the cognitive niche predicts that most or all will turn out to be adaptively evolved.

Emergence of Science and Other Abstract Endeavors
Even if the evolution of powerful language and intelligence were explicable by the theory of the cognitive niche, one could ask, with Wallace, how cognitive mechanisms that were selected for physical and social reasoning could have enabled H. sapiens to engage in the highly abstract reasoning required in modern science, philosophy, government, commerce, and law. A key part of the answer is that, in fact, humans do not readily engage in these forms of reasoning (9, 10, 52). In most times, places, and stages of development, people's abilities in arithmetic consist of the exact quantities "one," "two," and "many," and an ability to estimate larger amounts approximately (53). Their intuitive physics corresponds to the medieval theory of impetus rather than to Newtonian mechanics (to say nothing of relativity or quantum theory) (54). Their intuitive biology consists of creationism, not evolution, of essentialism, not population genetics, and of vitalism, not mechanistic physiology (55). Their intuitive psychology is mind-body dualism, not neurobiological reductionism (56). Their political philosophy is based on kin, clan, tribe, and vendetta, not on the theory of the social contract (57). Their economics is based on tit-for-tat back-scratching and barter, not on money, interest, rent, and profit (58). And their morality is a mixture of intuitions of purity, authority, loyalty, conformity, and reciprocity, not the generalized notions of fairness and justice that we identify with moral reasoning (16).
Nonetheless, some humans were able to invent the different components of modern knowledge, and all are capable of learning them. So we still need an explanation of how our cognitive mechanisms are capable of embracing this abstract reasoning. The key may lie in a psycholinguistic phenomenon that may be called metaphorical abstraction (9, 59–61). Linguists such as Ray Jackendoff, George Lakoff, and Len Talmy have long noticed that constructions associated with concrete scenarios are often analogically extended to more abstract concepts. Consider these sentences:

1. a. The messenger went from Paris to Istanbul.
   b. The inheritance went to Fred.
   c. The light went from green to red.
   d. The meeting went from 3:00 to 4:00.

The first sentence (a) uses the verb go and the prepositions from and to in their usual spatial senses, indicating the motion of an object from a source to a goal. But in 1(b), the words are used to indicate a metaphorical motion, as if wealth moved in space from owner to owner. In 1(c) the words are being used to express a change of state: a kind of motion in state-space. And in 1(d) they convey a shift in time, as if scheduling an event was placing or moving it along a time line. A similar kind of extension may be seen in constructions expressing the use of force:

2. a. Rose forced the door to open.
   b. Rose forced Sadie to go.
   c. Rose forced herself to go.

2(a) conveys an instance of physical force, but 2(b) conveys a kind of metaphorical interpersonal force (a threat or wielding of authority), and 2(c) an intrapersonal force, as if the self were divided into agents and one part could restrain or impel another. Tacit metaphors involving space and force are ubiquitous in human languages. Moreover, they participate in the combinatorial apparatus of grammar and thus can be assembled into more complex units.
Many locutions concerning communication, for example, employ the complex metaphor of a sender (the communicator) putting an object (the idea) in a container (the message) and causing it to move to a recipient (the hearer or reader): We gather our ideas to put them into words, and if our words are not empty or hollow, we might get these ideas across to a listener, who can unpack our words to extract their content (62). These metaphors could be, of course, nothing but opaque constructions coined in rare acts of creation by past speakers and memorized uncomprehendingly by current ones. But several phenomena suggest that they reflect an ability of the human mind to readily connect abstract ideas with concrete scenarios. First, children occasionally make errors in their spontaneous speech, which suggest they grasp parallels between space and other domains and extend them in metaphors they could not have memorized from their parents. Examples include I putted part of the sleeve blue (change of location → change of state), Can I have any reading behind the dinner? (space → time), and My dolly is scrunched from someone . . . but not from me (source of motion → source of causation) (63, 64). Second, several experiments have shown that when people are engaged in simple spatial reasoning it interferes with their thoughts about time and possession (9). Third, adults often experience episodes of spontaneous reminding in which an idea was activated only because it shared an abstract conceptual structure with the reminder, rather than a concrete sensory feature. For example, an episode of a barber not cutting a man's hair short enough may remind him of a wife not cooking his steak well enough done. A futile attempt at evenly darkening successive regions of a photo in Photoshop may remind a person of a futile attempt to level a wobbly table by successively cutting slices off each of its legs (9, 65, 66).
This process of analogical reminding may be the real-time mental mechanism that allows cognitive structures for space, force, and other physical entities to be applied to more abstract subject matter. The value of metaphorical abstraction consists not in noticing a poetic similarity but in the fact that certain logical relationships that apply to space and force can be effectively carried over to abstract domains. The position of an object in space is logically similar to the value of a variable, and thus spatial thinking can be co-opted for propositional inferences. In the realm of space, if one knows that A moves from X to Y, one can deduce that A is now at Y, but was not at Y in the past. An isomorphic inference may be made in the realm of possession: If A is given by Michael to Lisa, it is now owned by Lisa, but was not owned by her in the past. A similar isomorphism allows reasoning about force to be co-opted for reasoning about abstract causation, because both support counterfactual inferences. If A forces B to move from X to Y, then if A had not forced it, B would still be at X. Similarly, if Michael forced Lisa to be polite to Sam, then if Michael had not forced her, she would not have been polite to Sam. The value of a variable (which is parallel to position in space) and the causation of change (which is parallel to the application of force) are the basic elements of scientific thinking. This suggests that a mind that evolved cognitive mechanisms for reasoning about space and force, an analogical memory that encourages concrete concepts to be applied to abstract ones with a similar logical structure, and mechanisms of productive combination that assemble them into complex hierarchical data structures, could engage in the mental activity required for modern science (9, 10, 67).
In this conception, the brain's ability to carry out metaphorical abstraction did not evolve to coin metaphors in language, but to multiply the opportunities for cognitive inference in domains other than those for which a cognitive model was originally adapted. Evidence from science education and the history of science suggests that structured analogies and other mental reassignments in which a concrete domain of cognition is attached to a new subject matter are crucial to the discovery and transmission of scientific and mathematical ideas (8, 68–70). Children learn to extend their primitive number sense beyond "one, two, many" by sensing the analogies among an increase in approximate magnitude, position along a line, and the order of number words in the counting sequence. To learn chemistry, people must stretch their intuitive physics and treat a natural substance not as having an essence but as consisting of microscopic objects and connectors. To understand biology, they put aside the intuitive notions of essences and vital forces and think of living things the way they think of tools, with a function and structure. To learn psychology and neuroscience, they must treat the mind not as an immaterial soul but as the organ of a living creature, as an artifact designed by natural selection, and as a collection of physical objects, neurons.

PNAS | May 11, 2010 | vol. 107 | suppl. 2

Wallace, recall, also wondered about the human ability to participate in modern institutions such as governments, universities, and corporations. But like humans' puzzling ability to do science, their puzzling ability to take part in modern organizations is partly a pseudoproblem, because in fact the rules of modern institutions do not come naturally to us. Sociality in natural environments is based on concepts and motives adapted to kinship, dominance, alliances, and reciprocity. Humans, when left to their own devices, tend to apply these mindsets within modern organizations.
The result is nepotism, cronyism, deference to authority, and polite consensus—all of which are appropriate to traditional small-scale societies but corrosive of modern ones. Just as successful science requires people to reassign their cognitive faculties in unprecedented ways, successful organizations require people to reassign their social faculties in evolutionarily unprecedented ways. In universities, for example, the mindset of communal sharing (which is naturally applied to food distribution within the family or village) must be applied to the commodity of ideas, which are treated as resources to be shared rather than, say, traits that reflect well on a person, or inherent wants that comrades must respect if they are to maintain their relationship. The evaluation of ideas also must be wrenched away from the mindset of authority: department chairs can demand larger offices or higher salaries but not that their colleagues and students acquiesce to their theories. These radically new rules for relationships are the basis for open debate and peer review in scholarship, and for the checks and balances and accounting systems found in other modern institutions (9).

Conclusion
The evolution of the human mind is such a profound mystery that it became the principal bone of contention between the two codiscoverers of the theory of natural selection. It has been an impetus to creationism and spiritualism in their day and in ours, and continues to be a source of proposed complications and elaborations of evolutionary theory. But in a year celebrating Darwin's life and work, it would be fitting to see if the most parsimonious application of his theory to the human mind is sufficient, namely that the mind, like other complex organs, owes its origin and design to natural selection. I have sketched a testable theory, rooted in cognitive science and evolutionary psychology, that suggests that it is.
According to this theory, hominids evolved to specialize in the cognitive niche, which is defined by reasoning about the causal structure of the world, cooperating with other individuals, and sharing that knowledge and negotiating those agreements via language. This triad of adaptations coevolved with one another and with life-history and sexual traits such as enhanced parental investment from both sexes and multiple generations, longer childhoods and lifespans, complex sexuality, and the accumulation of local knowledge and social conventions in distinct cultures. Although adaptations to the cognitive niche confer obvious advantages in any natural environment, they are insufficient for reasoning in modern institutions such as science and government. Over the course of history and in their own educations, people accommodate themselves to these new skills and bodies of knowledge via the process of metaphorical abstraction, in which cognitive schemas and social emotions that evolved for one domain can be pressed into service for another and assembled into increasingly complex mental structures.

1. Wallace AR (1870) The limits of natural selection as applied to man. Contributions to the Theory of Natural Selection: A Series of Essays, ed Wallace AR (MacMillan, New York).
2. Tooby J, DeVore I (1987) The reconstruction of hominid evolution through strategic modeling. The Evolution of Human Behavior: Primate Models, ed Kinzey WG (SUNY Press, Albany, NY).
3. Brown DE (1991) Human Universals (McGraw-Hill, New York).
4. Kingdon J (1993) Self-Made Man: Human Evolution from Eden to Extinction? (Wiley, New York).
5. Wrangham RW (2009) Catching Fire: How Cooking Made Us Human (Basic Books, New York).
6. Leslie AM (1994) ToMM, ToBY, and agency: Core architecture and domain specificity. Mapping the Mind: Domain Specificity in Cognition and Culture, eds Hirschfeld LA, Gelman SA (Cambridge Univ Press, New York).
7. Spelke ES, Breinlinger K, Macomber J, Jacobson K (1992) Origins of knowledge. Psychol Rev 99:605–632.
8. Carey S (2007) Origins of Concepts (MIT Press, Cambridge, MA).
9. Pinker S (2007) The Stuff of Thought: Language as a Window into Human Nature (Viking, New York).
10. Pinker S (1997) How the Mind Works (Norton, New York).
11. Trivers R (1971) The evolution of reciprocal altruism. Q Rev Biol 46:35–57.
12. Kanwisher N, Moscovitch M (2000) The cognitive neuroscience of face processing: An introduction. Cogn Neuropsychol 17:1–13.
13. Klein SB, Cosmides L, Tooby J, Chance S (2002) Decisions and the evolution of memory: Multiple systems, multiple functions. Psychol Rev 109:306–329.
14. Cosmides L, Tooby J (1992) Cognitive adaptations for social exchange. The Adapted Mind: Evolutionary Psychology and the Generation of Culture, eds Barkow JH, Cosmides L, Tooby J (Oxford Univ Press, New York).
15. Cosmides L, Tooby J (2010) Whence intelligence? Proc Natl Acad Sci USA.
16. Haidt J (2002) The moral emotions. Handbook of Affective Sciences, eds Davidson RJ, Scherer KR, Goldsmith HH (Oxford Univ Press, New York).
17. Nowak MA, Sigmund K (1998) Evolution of indirect reciprocity by image scoring. Nature 393:573–577.
18. Ridley M (1997) The Origins of Virtue: Human Instincts and the Evolution of Cooperation (Viking, New York), 1st American Ed.
19. Fiske AP (1991) Structures of Social Life: The Four Elementary Forms of Human Relations (Free Press, New York).
20. Pinker S, Nowak MA, Lee JJ (2008) The logic of indirect speech. Proc Natl Acad Sci USA 105:833–838.
21. Lee JJ, Pinker S (2010) Rationales for indirect speech: The theory of the strategic speaker. Psychol Rev.
22. Pinker S (1991) Rules of language. Science 253:530–535.
23. Jackendoff R (2002) Foundations of Language: Brain, Meaning, Grammar, Evolution (Oxford Univ Press, New York).
24. Chomsky N (1972) Language and Mind (Harcourt Brace, New York), Extended Ed.
25. Jackendoff R (1990) Semantic Structures (MIT Press, Cambridge, MA).
26. Senghas A, Kita S, Özyürek A (2004) Children creating core properties of language: Evidence from an emerging sign language in Nicaragua. Science 305:1779–1782.
27. Nowak MA, Plotkin JB, Jansen VA (2000) The evolution of syntactic communication. Nature 404:495–498.
28. Pinker S (1999) Words and Rules: The Ingredients of Language (HarperCollins, New York).
29. Miller GA, Selfridge J (1950) Verbal context and the recall of meaningful material. Am J Psychol 63:176–185.
30. Chagnon NA (1992) Yanomamö: The Last Days of Eden (Harcourt Brace, New York).
31. Di Rienzo A (2010) Human population diversity. Proc Natl Acad Sci USA.
32. Bustamente C (2010) Genomic footprints of natural selection. Proc Natl Acad Sci USA.
33. Jablonski N (2010) The skin that makes us human. Proc Natl Acad Sci USA.
34. Tishkoff S (2010) Paleo-demography from extant genetics. Proc Natl Acad Sci USA.
35. Hawkes K (2010) The evolution of human life history. Proc Natl Acad Sci USA.
36. Kaplan H, Robson AJ (2002) The emergence of humans: The coevolution of intelligence and longevity with intergenerational transfers. Proc Natl Acad Sci USA 99:10221–10226.
37. Richerson P (2010) How cultures evolve. Proc Natl Acad Sci USA.
38. Wallace D (2010) Peopling the planet: Out of Africa? Proc Natl Acad Sci USA.
39. Wood B (2010) Evolution of the hominids. Proc Natl Acad Sci USA.
40. Lee JJ (2007) A g beyond Homo sapiens? Some hints and suggestions. Intelligence 35:253–265.
41. Boyd R, Silk JB (2006) How Humans Evolved (Norton, New York), 4th Ed.
42. Mather JA (1995) Cognition in cephalopods. Adv Stud Behav 24:317–353.
43. Odling-Smee FJ, Laland KN, Feldman MW (2003) Niche Construction: The Neglected Process in Evolution (Princeton Univ Press, Princeton, NJ).
44. Lewontin RC (1984) Adaptation. Conceptual Issues in Evolutionary Biology, ed Sober E (MIT Press, Cambridge, MA).
45. Kreitman M (2000) Methods to detect selection in populations with applications to the human. Annu Rev Genomics Hum Genet 1:539–559.
46. Przeworski M, Hudson RR, Di Rienzo A (2000) Adjusting the focus on human variation. Trends Genet 16:296–302.
47. Vargha-Khadem F, et al. (1998) Neural basis of an inherited speech and language disorder. Proc Natl Acad Sci USA 95:12695–12700.
48. Lai CSL, Fisher SE, Hurst JA, Vargha-Khadem F, Monaco AP (2001) A novel forkhead-domain gene is mutated in a severe speech and language disorder. Nature 413:519–523.
49. Enard W, et al. (2002) Molecular evolution of FOXP2, a gene involved in speech and language. Nature 418:869–872.
50. Clark AG, et al. (2003) Inferring nonneutral evolution from human-chimp-mouse orthologous gene trios. Science 302:1960–1963.
51. Evans PD, et al. (2004) Adaptive evolution of ASPM, a major determinant of cortical size in humans. Hum Mol Genet 13:489–494.
52. Pinker S (2002) The Blank Slate: The Modern Denial of Human Nature (Viking, New York).
53. Carey S (2009) Origins of Concepts (MIT Press, Cambridge, MA).
54. McCloskey M (1983) Intuitive physics. Sci Am 248:122–130.
55. Atran S (1998) Folk biology and the anthropology of science: Cognitive universals and cultural particulars. Behav Brain Sci 21:547–609.
56. Bloom P (2003) Descartes' Baby: How the Science of Child Development Explains What Makes Us Human (Basic Books, New York).
57. Daly M, Wilson M (1988) Homicide (Aldine de Gruyter, Hawthorne, NY).
58. Fiske AP (2004) Four modes of constituting relationships: Consubstantial assimilation; space, magnitude, time, and force; concrete procedures; abstract symbolism. Relational Models Theory: A Contemporary Overview, ed Haslam N (Erlbaum Associates, Mahwah, NJ).
59. Lakoff G, Johnson M (1980) Metaphors We Live By (Univ of Chicago Press, Chicago).
60. Jackendoff R (1978) Grammar as evidence for conceptual structure. Linguistic Theory and Psychological Reality, eds Halle M, Bresnan J, Miller GA (MIT Press, Cambridge, MA).
61. Talmy L (2000) Force dynamics in language and cognition. Toward a Cognitive Semantics 1: Concept Structuring Systems (MIT Press, Cambridge, MA).
62. Reddy M (1993) The conduit metaphor: A case of frame conflict in our language about language. Metaphor and Thought, ed Ortony A (Cambridge Univ Press, New York), 2nd Ed.
63. Bowerman M (1983) Hidden meanings: The role of covert conceptual structures in children's development of language. The Acquisition of Symbolic Skills, eds Rogers DR, Sloboda JA (Plenum, New York).
64. Pinker S (1989) Learnability and Cognition: The Acquisition of Argument Structure (MIT Press, Cambridge, MA).
65. Hofstadter DR (1995) Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought (Basic Books, New York).
66. Schank RC (1982) Dynamic Memory: A Theory of Reminding and Learning in Computers and People (Cambridge Univ Press, New York).
67. Gentner D (2003) Why we're so smart. Language in Mind: Advances in the Study of Language and Thought, eds Gentner D, Goldin-Meadow S (MIT Press, Cambridge, MA), pp 195–235.
68. Gentner D, Jeziorski M (1989) Historical shifts in the use of analogy in science. The Psychology of Science: Contributions to Metascience, eds Gholson B, Shadish WR, Beimeyer RA, Houts A (Cambridge Univ Press, New York).
69. Spelke E (2003) What makes us smart? Core knowledge and natural language. Language in Mind: Advances in the Study of Language and Thought, eds Gentner D, Goldin-Meadow S (MIT Press, Cambridge, MA).
70. Boyd R (1993) Metaphor and theory change: What is "metaphor" a metaphor for? Metaphor and Thought, ed Ortony A (Cambridge Univ Press, New York), 2nd Ed.

Tutor Answer

School: Purdue University



Language and Cognition
Student’s name



Language and Cognition

The immediate environment in which an infant develops plays a significant role in language
and cognitive development. Ensuring that the environment in which a child grows is
conducive to language and cognitive development contributes substantially to the overall
development of the child. Ferguson, Cassells, MacAllister, and Evans (2013) note that,
irrespective of physical and mental status, every child has the ability to learn a specific
language. However, how a child develops depends on their ability to perceive and receive internal
and external stimuli. The presence, or lack thereof, of appropriate stimuli in the child’s
environment tends to limit or enhance their language and cognitive development. Environmental
deprivation, w...


Tutor went the extra mile to help me with this essay. Citations were a bit shaky, but I appreciated how well he handled APA style and how willing he was to change them even though I didn't specify. Got a B+, which is believable and acceptable.
