Overview
Language can cement relationships, start wars, and allow us to say what we want in our coffee. It is one of the few psychological behaviors that separate us from our evolutionary cousins, the great apes. The quest to understand language, how we use it, and how children learn it, is age-old. In the past 50 years, however, significant progress has been made in understanding how children acquire language by drawing upon a range of disciplines: linguistics, psychology, computer science, and biology. Our 1999 book, How Babies Talk: The Magic and Mystery of Language Development in the First Three Years of Life (Penguin Dutton), provided an overview of what is known about language development for the lay public.
Theory Development: The Emergentist Coalition Model (ECM)
Our focus has been mainly on grammatical development (how children learn the syntax of their language), lexical development (how children learn their language’s vocabulary), and segmentation (how children carve up the stream of speech they hear into clauses, phrases, and words). The “Emergentist Coalition Model” (Golinkoff, Hirsh-Pasek, & Hollich, 1999; Hirsh-Pasek, Golinkoff, & Hollich, 2000; Damonte, Johanson, Golinkoff, & Hirsh-Pasek, 2014) is a truly developmental account of how the process of language learning changes over the first two years of life. Using this theory as our guide, we produced a monograph on how children learn names for objects – count nouns (Hollich, Hirsh-Pasek, & Golinkoff, 2000). To become sophisticated word learners, children must take a speaker’s point of view to learn the names of all kinds of things in the environment – even things that children don’t necessarily find interesting or attractive (Pruden, Hennon, Hirsh-Pasek, & Golinkoff, 2006). There are some assumptions, though, that children seem to make at the beginning of word learning and that they will later have to relax. For example, at 12 months of age, children assume that words map to whole objects (e.g., Hollich, Golinkoff, & Hirsh-Pasek, 2007) and not to their attractive parts.
As the ECM is a model that emphasizes the availability of multiple cues to word learning, its tenets have implications for understanding and possibly remediating various types of language disorders. For example, we have used the Emergentist Coalition Model to guide our research on how children on the autism spectrum learn words (Parish-Morris, Hennon, Hirsh-Pasek, Golinkoff, & Tager-Flusberg, 2007). Children on the spectrum who are more sensitive to a speaker’s social cues have larger vocabularies than those who have difficulty inferring a speaker’s intent. These findings tell us that autistic children can learn words without being attuned to a speaker’s intent, but they will be limited in the number of words they can pick up incidentally.
The Intermodal Preferential Looking Paradigm (IPLP) and a New Language Screener
For over 30 years, my lab has been probing the process of language learning. Along the way we developed two methods to investigate children’s language comprehension, since children’s understanding of language often precedes their ability to talk. Both methods “trick” babies into showing us what they know by relying on what they look at (their visual fixation) as their response. One method, the Intermodal Preferential Looking Paradigm (IPLP), is in use in laboratories around the world and relies on the video presentation of simultaneous events. Children see, for example, a boat on one side of the screen and a shoe on the other, while they hear audio that matches only one of these images (as in, “Where’s the shoe?”). We measure whether children look longer at the shoe or at the boat. If they understand the item in question (here, “shoe”), they should look longer at the shoe than at the boat.
Another method uses the same basic premise but in “real time,” with a real person and real objects (the Interactive, or 3D, IPLP). We have used this method to study word learning in children as young as 10 months of age (Hirsh-Pasek & Golinkoff, 1996; Golinkoff & Hirsh-Pasek, 2008; Golinkoff, Ma, Song, & Hirsh-Pasek, 2013).
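To illustrate the looking-time logic behind these methods, here is a minimal sketch of how a single IPLP trial might be summarized. The data format, the function name (target_preference), and the 0.5 chance level are illustrative assumptions for exposition only, not the lab’s actual coding or analysis procedure.

```python
# Hypothetical sketch of summarizing one IPLP trial; the data format and
# chance level (0.5) are illustrative assumptions, not the lab's pipeline.

def target_preference(looks):
    """Return the proportion of looking time directed at the named (target) image.

    `looks` is a list of (side, duration_in_seconds) pairs coded from video,
    where side is 'target' (e.g., the shoe when the audio asks "Where's the shoe?")
    or 'distractor' (e.g., the boat).
    """
    target_time = sum(d for side, d in looks if side == 'target')
    total_time = sum(d for _, d in looks)
    return target_time / total_time if total_time > 0 else None

# Example: a child looks 3.2 s at the shoe and 1.4 s at the boat.
trial = [('target', 3.2), ('distractor', 1.4)]
score = target_preference(trial)
print(score)  # ~0.70; values reliably above 0.5 would suggest comprehension
```

In practice such scores would be averaged over many trials and infants before drawing conclusions; the single-trial example is only meant to make the comparison of looking times concrete.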
Recently, with funding from the Institute of Education Sciences, and with Jill de Villiers, Kathy Hirsh-Pasek, Aquiles Iglesias, and Mary Wilson, we developed a language screener for children between the ages of 3;0 and 6;0. Called the Quick Interactive Language Screener, or QUILS, it was brought out by Brookes Publishing in a version for children reared in English, and a version for children reared in Spanish and English will be released soon. It uses the route of comprehension to test both vocabulary and grammar – what children already know (product) and how they learn (process) – via a touch-screen response, and it takes only 15 minutes to administer by individuals without training in speech-language pathology. Importantly, because it was tested with over 600 monolingual and bilingual children, its results inform our understanding of acquisition in new ways (Pace, Luo, Morini, Golinkoff, de Villiers, Hirsh-Pasek, Iglesias, & Wilson, in preparation).
Segmentation of the Speech Stream and the Role of Infant-Directed Speech in Language Learning
Even before babies can learn the grammar of their language, they need to find the units that comprise their language in the stream of speech they hear. This is the problem of segmentation, one that we as adults can only imagine when we listen to people talking in a foreign language. Where do the words and sentences begin and end? Yet babies who cannot yet walk can find the units in their native language. With Heather Bortfeld and James Morgan, we discovered that one of the “wedges” 6-month-old babies use to break into the language stream is their own name. When they hear their name in a passage, they can remember the word that comes after it. This also works for “mommy” or “momma” – whatever moniker their mother uses to refer to herself (Bortfeld, Morgan, Golinkoff, & Rathbun, 2005) – but not for someone else’s name, even if it has the same number of syllables and stress pattern.
Infants learn better from infant-directed speech than from adult-directed speech. “Babytalk” is slower, more repetitive, and spoken at a higher pitch, attracting the infant’s attention. Our lab showed that word learning occurs more readily with infant-directed than with adult-directed speech (Ma, Golinkoff, Houston, & Hirsh-Pasek, 2011), a conclusion that earlier correlational studies could only suggest. Language learning is also facilitated when children learn from interactions that are contingent on their communicative bids. Even before language emerges, responding to children’s vocalizations or actions soon after they are produced builds a communication foundation that lays the groundwork for language (Golinkoff, Deniz Can, Soderstrom, & Hirsh-Pasek, 2015). Children can even learn from video chats when the conversations are contingent on children’s responses.
Some children cannot benefit from infant-directed speech because they are deaf or hard of hearing. With Giovanna Morini, Ph.D., I am studying how to maximize language development in children with hearing loss by uncovering whether or not they can process – and not just hear – speech (Levine, Strother-Garcia, Hirsh-Pasek, & Golinkoff, 2016).
Segmenting a Continuous World using Statistical Learning
To learn a language, infants must discern where events begin and end. They likely first use statistical learning to help find the joints at which to carve events, as their top-down knowledge is initially limited. Infants observe and engage in routines every day that are composed of reliable sequences of actions. The event of bath time may be followed by other events that also have predictable structure, like putting on pajamas or reading a book before bed. How do infants know when one event ends and another begins? Segmenting events into units is critical for anticipating future actions, imitating others, categorizing events, and learning words that label those actions. Segmenting events is a crucial step in mapping words – verbs – to the action patterns babies perceive in the world. It is also crucial for mapping prepositions to events. Our research (Roseberry, Göksun, Hirsh-Pasek, & Golinkoff, 2012) shows that as early as 10 months of age, babies are more interested in relations between objects (such as over and under) than in metrical changes between them (2 centimeters versus 4 centimeters over). These findings suggest that infants are attuned to relations that will later be expressed in language.
Infants Can Identify Components of Events: Conceptual Underpinnings of Language
To ask whether and when babies can formulate dynamic, language-relevant concepts that will surface in language as verbs and prepositions, we present babies with nonlinguistic events and use visual fixation as the dependent variable. In this line of research, inspired by linguists like Talmy, Jackendoff, and Langacker, we probe the concepts that underlie language (e.g., Pulverman, Hirsh-Pasek, Golinkoff, Pruden, & Salkind, 2006). For example, the languages of the world refer to how an action is performed (also called “manner”) and the “path” along which the action occurs (e.g., in a circle versus laterally). We have found that babies can discriminate between different kinds of actions and paths in events as early as 7 months of age (e.g., Pulverman, Song, Golinkoff, & Hirsh-Pasek, 2013) and can form concepts of these actions and paths around the end of the first year of life (e.g., Pruden, Göksun, Roseberry, Hirsh-Pasek, & Golinkoff, 2013; Pruden, Göksun, Roseberry, Hirsh-Pasek, & Golinkoff, 2012; Song, Pruden, Golinkoff, & Hirsh-Pasek, 2016). Babies can also find specific types of paths in events – even if those paths are not encoded in their native language (Göksun, Hirsh-Pasek, & Golinkoff, 2010; Göksun, Hirsh-Pasek, Golinkoff, Imai, Konishi, & Okada, 2011; Göksun, George, Hirsh-Pasek, & Golinkoff, 2013). Indeed, Konishi (2015) has shown that language plays a role in shaping infants’ attention to the components of events that will be expressed in their native language. Bilingual babies have the additional task of noting which language encodes which components (Konishi on bilingual).
A new finding from our laboratory (Konishi, Stahl, Golinkoff, & Hirsh-Pasek, 2016) suggests that babies who are better at forming categories of the components of events know more verbs when they are 27 to 33 months old. These exciting results suggest that the ability to discriminate and categorize nonlinguistic events predicts toddlers’ ability to find the essence of verb meaning (Golinkoff, Chung, Hirsh-Pasek, Liu, Bertenthal, Brand, Maguire, & Hennon, 2002). We have also studied whether infants can categorize the causes of events (Göksun, George, Hirsh-Pasek, & Golinkoff, 2013) and how preschoolers appear to rely on common labels to do so. Finally, infants, naïve adults, and ice-skating experts use the goals of events to help them find the beginnings and endings of continuous events (Levine, Hirsh-Pasek, Pace, & Golinkoff, under review; Levine, Strother-Garcia, Hirsh-Pasek, & Golinkoff, in press).
Naming Events: Verb Learning
The field has made significant inroads into understanding how children learn verbs – a more difficult task than learning nouns (Pulverman, Hirsh-Pasek, Golinkoff, Pruden, & Salkind, 2006). Verbs are the architectural centerpiece of the sentence, so learning verbs is tantamount to learning the grammar of one’s language. When we started our research on verbs, we were surprised at how hard they were for children to learn. Now, however, we know that a number of factors contribute to verb learning (e.g., Maguire, Hirsh-Pasek, & Golinkoff, 2006; Brandone, Pence, Golinkoff, & Hirsh-Pasek, 2007): grammatical information surrounding the verb (Imai, Li, Haryu, Hirsh-Pasek, Golinkoff, & Shigematsu, 2008); the ‘imageability’ of the action a verb names (Ma, Golinkoff, Hirsh-Pasek, McDonough, & Tardif, 2009; McDonough, Song, Hirsh-Pasek, Golinkoff, & Lannon, 2011); and the semantic concepts verbs encode (Golinkoff & Hirsh-Pasek, 2008). However, even children as old as 6 and 8 years struggle to extend familiar verbs to novel contexts.
Second Language Learning
Once we discovered that language appears to influence the way in which speakers encode events, we tested whether second language learners are hindered when their second language encodes different aspects of events than their first language. Song, Pulverman, Pepe, Golinkoff, and Hirsh-Pasek (2015) and Schneck, Pulverman, Golinkoff, and Hirsh-Pasek (under review) showed that second language learners experience interference from the predominant patterns (“lexicalization patterns”) of their first language in production and comprehension, respectively. Yet it is unknown whether this problem is unique to adult learners who are entrenched in the patterns of their first language. The number of children growing up in dual language environments is increasing in the United States. Despite the apparent benefits of speaking two languages, in the US knowing two languages is often associated with poverty. Yet early exposure to a rich language environment is crucial for dual language learners and their future academic success. Playful learning environments in which children are happily engaged afford opportunities to promote second language acquisition (Konishi, Kanero, Freeman, Golinkoff, & Hirsh-Pasek, 2014; McCabe, Tamis-LeMonda, Bornstein, Golinkoff, Hirsh-Pasek, Hoff, Kuchiro, Melzi, Mendelson, Paez, Song, & Wishard, 2013).
Learning Vocabulary is Crucial for Learning to Read
In 1995, Hart and Risley’s research became headline news. The language experience of children growing up in poverty was distinctly different from that of children growing up in working-class or professional families. On average, the number of words heard per hour in the welfare group was 616, compared to 1,251 in the working-class group and 2,153 in the professional group. Furthermore, disparities in early cognitive development, evident even during infancy, grow larger over time. The good news that lies beneath these statistics is that the trajectory of language development is malleable. With David Dickinson and Kathy Hirsh-Pasek, we have found that inviting low-income children to play with replicas that represent new vocabulary words strengthens their learning of those words (Toub, Hassinger-Das, Ilgaz, Weisberg, Nesbitt, Collins, Newman, Hadley, Rivera, Hirsh-Pasek, Golinkoff, Dickinson, & Nicolopoulou, in revision). Our research continues to explore the malleability of language development and the quality and quantity of linguistic input, offering suggestions for researchers, educators, and practitioners.
While Hart and Risley’s work has been taken to emphasize the quantity of the language input children hear, we (Hirsh-Pasek, Adamson, Bakeman, Owen, Golinkoff, Pace, Yust, & Suma, 2015) asked whether the quality of mother-child communication might matter even more for language development. Using a low-income sample of 2-year-olds, we bore out this prediction: Children’s language skill one year later was predicted by the “communication foundation” set up between parent and child.
The Effects of Technology on Language and Reading
How will the advent of new technology affect children’s language development and reading? What can new technology teach us about what matters for language development? In several studies we have asked whether children can learn language from television – if supported by an adult (Roseberry, Hirsh-Pasek, Parish-Morris, & Golinkoff, 2009) – and from interactive video programs like Skype (Roseberry, Hirsh-Pasek, & Golinkoff, 2014). We have also turned this question on its head and asked whether cell phone calls can disrupt toddlers’ word learning – they do (Reed, Hirsh-Pasek, & Golinkoff, 2017). The bottom line is that contingency between adults’ and children’s conversational contributions matters a great deal for language development. The absence of this contingency is likely a major source of the “video deficit” that makes it very difficult for children younger than 2.5 years to learn from screens.
Learning language is useful in its own right, but its importance for learning to read is often underestimated. With David Dickinson, we have spoken to this issue. Nothing can replace understanding a language and having deep vocabulary knowledge when ‘learning to read’ turns into ‘reading to learn.’