Language Acquisition

The way that humans come to learn a language, seemingly without even trying, is a remarkable phenomenon, and one that we often take for granted. From the earliest ages, infants are sensitive to many attributes of the language input they hear, and we are only beginning to understand how infants use these learning abilities to become language experts. In my research, I have been particularly interested in two issues.
First, what is the relationship between learning the structure of a language (the dependencies between its sounds, words, and phrases) and learning the meaning of those elements? How does learning about one influence the other?
Second, what brings about and explains the immense variability we observe in language acquisition? While almost all children become "experts" in their native language(s), some get there much more quickly than others, with consequences for many other aspects of their lives. What makes some individuals excellent language learners, while others lag behind? Relatedly, why are some words and language structures harder to learn than others?
I find these, and many other, questions about language acquisition fascinating, and much of my past and ongoing research addresses them.
[Image: baby in a headturn experiment; "doggy", "comb", "kitty"]

Theories and Models of Learning and Knowledge Representation

My research relies heavily on computational models of learning, representation, and action. Computational models force us to be clear about our theories and hypotheses. Models' successes and failures tell us a lot about the necessary givens for learning and representation - both in terms of the structure of the learning system, and in terms of the structure of environmental input.
I am especially interested in statistical and neurally-inspired models of learning. Some have argued that these models are doomed to fail - that they cannot explain critical facts about learning, such as the ability to learn and transfer abstract knowledge, or to learn complex phenomena like long-distance dependencies. A number of my research projects have been directed toward showing that these models are actually quite good at capturing these phenomena, and that when they do fail, they fail when and how humans do. But all models have their limitations, and finding them is a critical part of the scientific enterprise.
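As a toy illustration of the kind of statistic such models can exploit (this sketch is my own, not a model from any particular paper), consider transitional probabilities between syllables - a cue that infants famously use to find word boundaries in fluent speech:

```python
from collections import Counter

# Syllable stream from a tiny artificial language with three "words"
# (bi-da-ku, pa-do-ti, go-la-bu) concatenated without pauses.
stream = "bi da ku pa do ti go la bu bi da ku go la bu pa do ti".split()

pair_counts = Counter(zip(stream, stream[1:]))
syllable_counts = Counter(stream[:-1])

def transitional_probability(a, b):
    """P(b | a): how often syllable a is immediately followed by b."""
    return pair_counts[(a, b)] / syllable_counts[a]

# Transitions inside a word are more predictable than transitions
# across a word boundary - a purely statistical cue to segmentation.
print(transitional_probability("bi", "da"))  # within-word: 1.0
print(transitional_probability("ku", "pa"))  # across boundary: 0.5
```

A learner tracking nothing but these conditional statistics can begin to pull words out of continuous speech, which is one reason statistical models of learning are not so easily dismissed.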
I am also a strong advocate of an interactionist research program, in which computational models are built on experimental findings and then used to make predictions for future experiments. This back-and-forth between models and experiments allows our models to better reflect cognitive processes, and helps structure and drive the experimental research program. In many of my papers, you will see this reciprocity at work.
[Images: feed-forward network; network graph]

Meaning: Linguistic vs. World Knowledge

Humans acquire semantic knowledge by interacting with the world and discovering its structure and affordances. But we also acquire a good portion of our meaning through language, by talking about things, and by using language operationally to achieve goals.
A good portion of my research has investigated the semantic knowledge that is accessible to people purely through language statistics - word co-occurrences, words' distributional similarities, and the abstract representations we acquire through language use. It turns out that people can learn a surprising amount of structured semantic knowledge from patterns in language alone.
Even more interesting, there is usually a big difference between the semantic information one acquires by interacting with objects and events in the world and the semantic information that is emphasized in language. My work has shown that people are sensitive to these differences, and incredibly good at using the type of semantic knowledge (from language or from the world) that is appropriate in a particular situation.
[Image: yellow carrots]

Big Data Analyses of Cognitive Development, Social Cognition, and Clinical Psychology

Technological change has had a vast effect on the amount of data we have available, and the types of questions we can ask. I was doing "big data" research for almost a decade before it had a name.
  • What can children learn from the millions of words that they hear? Which aspects of caregivers' speech allow us to predict which children will have high and low vocabularies?
  • Are there statistical differences that predict ideological differences in political speech on cable news?
  • Are people sensitive to the ways in which proper names are used in language, and to the connotations those names carry?
  • Can the patterns of language use for those with mental disorders (such as schizophrenia) allow us to learn more about the progression of the disease, make differential diagnoses, and predict treatment success?
These are just some of the questions I have been involved in studying.
But "big data" is not just about mining large data sets for correlations. Big data, when guided by theoretically driven questions and computational models, allows us to ask and test previously unthinkable hypotheses. Access to immense amounts of data can also suggest explanations that never would have seemed plausible given smaller amounts of data. Increased statistical power (both as a scientist and as a statistical learner) is very, very important.
I am trained as a cognitive developmentalist. But recently, I have become very interested in applying these models and statistical analyses to phenomena in social and clinical psychology. How do periods of profound deafness and growing up with a cochlear implant affect vocabulary and the development of semantic memory? How do people suffering from Alzheimer's disease and schizophrenia differ in the ways they use language, and can this help us diagnose patients and better understand the progression of these diseases? These are some of the questions I am currently investigating.