Thoughts on Qualifying Exam

A week ago today I passed my Qualifying Exam. It felt surreal: it seemed only yesterday that I came across blog posts here and there about the grilling quals at Berkeley, but that was over two years ago when I was still an undergrad!

Looking back, I truly appreciate this experience; as everyone will tell you, it was a transformative journey. In the beginning, I naïvely wondered how one could possibly remember all the experiments, models, findings, theories, and so on from roughly a hundred papers. Although it seems obvious now, I wish I had realized sooner that quals are not at all a memory test or a trivia competition. I used to represent each paper I read as its own “island”: I sort of knew where the question came from, how sound the experimental design was, what evidential support the hypothesis received, and so forth. Quals pushed me to look beyond this “archipelago” of specific studies: I thought more deeply about the fundamental questions in the fields I’m interested in (active learning, social learning, probabilistic models of cognition…), learned to take a stand based on the available evidence, and practiced articulating my opinions in a concise, persuasive manner. Passing quals has boosted my confidence that I can become a qualified researcher in my chosen field.

Surely people love to paint a rosy picture in hindsight; I wouldn’t be honest if I said I didn’t worry during nearly two months of preparation. If you’re reading this paragraph and share the same feeling, I want to assure you that quals are not as terrifying as they look. Of course, they’re not easy, but no examiner will try to catch you by surprise. The questions I received for each topic were the most common and fundamental ones in the corresponding field. For instance, I was asked how the progress made by probabilistic models of cognition flows naturally from key features of these models, and which critiques of probabilistic models result from misunderstanding versus which are genuinely worrisome. Anyone who wishes to apply probabilistic models to their research simply cannot bypass these questions. I didn’t encounter any question that made me think “Shoot, why didn’t I think of that?” The oral exam was a lovely conversation in which my committee members genuinely wanted to hear my take on important issues. When I was unsure (e.g., is social information just another source of data, or does it enjoy a privileged epistemic status?), the four professors in the room were happy to work with me toward a promising speculation.

Tips

  1. Treat quals as a priceless opportunity to think about the fundamentals in your field and learn to use specific studies to argue for/against grand theories.
  2. Before compiling your reading lists, think about the major questions and issues in each topic, select papers around these questions/issues, write down tentative answers while reading, and gradually build on those answers along the way.
  3. Enjoy every exchange with your committee members during preparation and while taking the exam. After all, how often can you chat with the smartest people on earth about topics not necessarily tied to specific projects?


Qualifying Exam Reading List

Here’s a PDF version of my reading list and proposed questions.

Topic #1 Active Learning

Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? — Alan M. Turing, Computing Machinery and Intelligence (1950)

Introduction

Imagine how tiresome, if not hopeless, it would be to train a human-like machine if you had to feed it every bit of information it needs. How much easier it would be if the machine could direct its own learning, selecting or generating the right data to learn the right things! Even human infants are capable of this kind of active learning. But how?

As a prerequisite, to sift through a data-rich environment for useful information, an active learner needs a criterion for evaluating information. I open this topic with an overview of optimal experiment design and the sampling norms that quantify the usefulness of queries (illustrated in the toy sketch below). To start active learning, one needs to be motivated and to attend to information within her grasp; here, I include papers on curiosity and selective attention. Then, one needs to actually engage with the environment, generate useful data, and learn from those data; here, I include papers on three case studies: exploratory play, causal interventions, and question-asking. Finally, despite its remarkable potential, active learning does not guarantee that the learner will always find the best solution in the most efficient way; here, I include papers on the limitations and the adaptiveness of active learning.
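
To make these sampling norms concrete, here is a toy sketch in Python. The example and all numbers are my own invention (they do not come from any paper on this list); it computes the expected value of a single binary query under two norms discussed by Nelson (2005): expected information gain and probability gain.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def query_value(prior, p_yes_given_h, norm="information_gain"):
    """Expected usefulness of one binary query under a sampling norm.

    prior:         P(h) over a discrete hypothesis space.
    p_yes_given_h: P(answer = yes | h) for each hypothesis.
    """
    prior = np.asarray(prior, dtype=float)
    p_yes_given_h = np.asarray(p_yes_given_h, dtype=float)
    value = 0.0
    for p_ans_given_h in (p_yes_given_h, 1 - p_yes_given_h):  # yes, then no
        p_ans = np.sum(prior * p_ans_given_h)                 # P(answer)
        posterior = prior * p_ans_given_h / p_ans             # Bayes' rule
        if norm == "information_gain":
            # Reduction in uncertainty about the hypotheses.
            gain = entropy(prior) - entropy(posterior)
        else:
            # Probability gain: improvement in accuracy of the best guess.
            gain = posterior.max() - prior.max()
        value += p_ans * gain                                 # expectation over answers
    return value

# Three hypotheses; the query is highly diagnostic for h1 vs. the rest.
prior = [0.5, 0.3, 0.2]
p_yes_given_h = [0.9, 0.1, 0.1]
print(query_value(prior, p_yes_given_h, "information_gain"))
print(query_value(prior, p_yes_given_h, "probability_gain"))
```

On these made-up numbers the query looks valuable under both norms, but the norms need not agree in general; which norm best describes human inquiry is exactly the kind of question the papers below take up.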

Overview (4 papers)

Background & Major Issues (2 papers)


  1. Gureckis, T. M. & Markant, D. B. (2012) Self-directed learning: A cognitive and computational perspective. Perspectives on Psychological Science, 7, 464-481.
  2. Coenen, A., Nelson, J. D., & Gureckis, T. M. (under review). Asking the right questions about human inquiry.

Value of Information (2 papers)


  1. Nelson, J. D. (2005). Finding useful questions: On Bayesian diagnosticity, probability, impact, and information gain. Psychological Review, 112(4), 979-999.
  2. Nelson, J. D., McKenzie, C. R., Cottrell, G. W., & Sejnowski, T. J. (2010). Experience matters: Information acquisition optimizes probability gain. Psychological Science, 21(7), 960-969.

Case Studies (16 papers)

Attention & Curiosity (2 papers)


  1. Kidd, C., Piantadosi, S. T., & Aslin, R. N. (2012). The Goldilocks effect: Human infants allocate attention to visual sequences that are neither too simple nor too complex. PLoS ONE, 7(5), e36399.
  2. Gottlieb, J., Oudeyer, P. Y., Lopes, M., & Baranes, A. (2013). Information-seeking, curiosity, and attention: Computational and neural mechanisms. Trends in Cognitive Sciences, 17(11), 585-593.

Exploration & Play (7 papers)


  1. Bruner, J. (1961) The act of discovery. Harvard Educational Review, 31, 21-32.
  2. Sim, Z. L., & Xu, F. (2017). Learning higher-order generalizations through free play: Evidence from 2- and 3-year-old children. Developmental Psychology, 53(4), 642-651.
  3. Schulz, L. E. (2012). The origins of inquiry: Inductive inference and exploration in early childhood. Trends in Cognitive Sciences, 16, 382-389.
  4. Schulz, L. E., & Bonawitz, E. B. (2007). Serious fun: Preschoolers engage in more exploratory play when evidence is confounded. Developmental Psychology, 43(4), 1045-1050.
  5. Cook, C., Goodman, N., & Schulz, L. E. (2011). Where science starts: Spontaneous experiments in preschoolers’ exploratory play. Cognition, 120(3), 341-349.
  6. Bonawitz, E. B., van Schijndel, T. J. P., Friel, D., & Schulz, L. E. (2012). Children balance theories and evidence in exploration, explanation, and learning. Cognitive Psychology, 64(4), 215-234.
  7. Kretch, K. S., & Adolph, K. E. (2017). The organization of exploratory behaviors in infant locomotor planning. Developmental Science, 20(4), 1-17.

Causal Intervention (4 papers)


  1. Steyvers, M., Tenenbaum, J. B., Wagenmakers, E. J., & Blum, B. (2003). Inferring causal networks from observations and interventions. Cognitive Science, 27(3), 453-489.
  2. Schulz, L. E., Gopnik, A., & Glymour, C. (2007). Preschool children learn about causal structure from conditional interventions. Developmental Science, 10(3), 322-332.
  3. McCormack, T., Bramley, N. R., Frosch, C., Patrick, F., & Lagnado, D. A. (2016). Children’s use of interventions to learn causal structure. Journal of Experimental Child Psychology, 141, 1-22.
  4. Coenen, A., Rehder, B., & Gureckis, T. M. (2015). Strategies to intervene on causal systems are adaptively selected. Cognitive Psychology, 79, 102-133.

Question Asking (3 papers)


  1. Ruggeri, A., & Lombrozo, T. (2015). Children adapt their questions to achieve efficient search. Cognition, 143, 203-216.
  2. Ruggeri, A., Lombrozo, T., Griffiths, T. L., & Xu, F. (2016). Sources of developmental change in the efficiency of information search. Developmental Psychology, 52(12), 2159-2173.
  3. Mills, C. M., Legare, C. H., Grant, M. G., & Landrum, A. R. (2011). Determining who to question, what to ask, and how much information to ask for: The development of inquiry in young children. Journal of Experimental Child Psychology, 110(4), 539-560.

Adaptiveness & Limitations (9 papers)

Environmental Features & Task Demands (6 papers)


  1. Oaksford, M., & Chater, N. (1994). A rational analysis of the selection task as optimal data selection. Psychological Review, 101(4), 608-631.
  2. Navarro, D. J., & Perfors, P. F. (2011). Hypothesis generation, sparse categories, and the positive test strategy. Psychological Review, 118(1), 120-134.
  3. Nelson, J. D., Divjak, B., Gudmundsdottir, G., Martignon, L. F., & Meder, B. (2014). Children’s sequential information search is sensitive to environmental probabilities. Cognition, 130(1), 74-80.
  4. Wu, C. M., Meder, B., Filimon, F., & Nelson, J. D. (2017). Asking better questions: How presentation formats influence information search. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(8), 1274-1297.
  5. Markant, D., & Gureckis, T. M. (2012). Does the utility of information influence sampling behavior? In Proceedings of the 34th Annual Conference of the Cognitive Science Society.
  6. Coenen, A., & Gureckis, T. M. (2017). The distorting effect of deciding to stop sampling. In Proceedings of the 39th Annual Meeting of the Cognitive Science Society.

Cost of Sampling (3 papers)


  1. Juni, M. Z., Gureckis, T. M., & Maloney, L. T. (2016). Information sampling behavior with explicit sampling costs. Decision, 3(3), 147-168.
  2. Denrell, J., & March, J. G. (2001). Adaptation as information restriction: The hot stove effect. Organization Science, 12(5), 523-538.
  3. Bramley, N. R., Dayan, P., Griffiths, T. L., & Lagnado, D. A. (2017). Formalizing Neurath’s Ship: Approximate algorithms for online causal learning. Psychological Review, 124(3), 301-338.

Topic #2 Social Learning

If I have seen further, it is by standing on the shoulders of giants. — Isaac Newton, Letter to Hooke (1676)

Introduction

Learning doesn’t get far if each learner has to figure out everything on her own; even an extraordinary mind like Newton needed to stand on the “shoulders of giants”. Social learning, or learning from others, provides quick and cheap information. Moreover, cumulative culture gives learners access to more complex ideas and tools than any individual could create alone. However, social learning comes at a price. If the source is misinformed, so will the learner be; if the blind continue to lead the blind, nonadaptive behavior may spread and harm a population’s fitness. Under what conditions can social learning increase a population’s fitness, allowing it to ratchet up in technological complexity and preventing it from slipping back? Most importantly for developmental psychologists, how does selection pressure at the population level translate into the goals of individual learners, including the youngest humans?

I begin with the role culture plays in our species’ widespread success, then shift to the uniqueness and the roots of our culture before moving on to analyses of how social learning can contribute to cumulative culture. Just as analysis at the computational level sheds light on the goal of a cognitive system, analysis at the population level sheds light on the goal of an individual learner. For instance, one should be selective about when to learn from others, whom to learn from, and in what manner (e.g., faithfully vs. flexibly). Here, I include papers on epistemic trust, followed by case studies of social learning, including imitation/overimitation, pedagogy, testimony, language, and norms and conventions. I then turn to the mechanisms underlying social learning, in particular whether they differ from those of asocial learning. I conclude this topic with papers on how social learning may differ across cultures.

Overview of Social Learning (7 papers, 1 book)

The Role of Culture (3 papers, 1 book)


  1. Henrich, J. (2015). The secret of our success: How culture is driving human evolution, domesticating our species, and making us smarter. Princeton: Princeton University Press.
  2. Boyd, R., Richerson, P. J., & Henrich, J. (2011). The cultural niche: Why social learning is essential for human adaptation. Proceedings of the National Academy of Sciences, 108(26), 10918-10925.
  3. Pinker, S. (2010). The cognitive niche: Coevolution of intelligence, sociality, and language. Proceedings of the National Academy of Sciences, 107(Supplement 2), 8993-8999.
  4. Morgan, T. J. (2016). Testing the cognitive and cultural niche theories of human evolution. Current Anthropology, 57(3), 370-377.

“Uniqueness” & Roots (4 papers)


  1. Whiten, A., Caldwell, C. A., & Mesoudi, A. (2016). Cultural diffusion in humans and other animals. Current Opinion in Psychology, 8, 15-21.
  2. Dean, L. G., Vale, G. L., Laland, K. N., Flynn, E., & Kendal, R. L. (2014). Human cumulative culture: A comparative perspective. Biological Reviews, 89(2), 284-301.
  3. Hare, B. (2017). Survival of the friendliest: Homo sapiens evolved via selection for prosociality. Annual Review of Psychology, 68, 155-186.
  4. Purzycki, B. G., Apicella, C., Atkinson, Q. D., Cohen, E., McNamara, R. A., Willard, A. K., Xygalatas, D., Norenzayan, A., & Henrich, J. (2016). Moralistic gods, supernatural punishment and the expansion of human sociality. Nature, 530(7590), 327–330.

Learning from & about Others (22 papers)

Epistemic Trust (5 papers)


  1. Landrum, A. R., Eaves, B. S., Jr., & Shafto, P. (2015). Learning to trust and trusting to learn: A theoretical framework. Trends in Cognitive Sciences, 19, 109-111.
  2. Harris, P. L., Koenig, M. A., Corriveau, K. H., & Jaswal, V. K. (2018). Cognitive foundations of learning from testimony. Annual Review of Psychology, 69(1), 253–273.
  3. Kominsky, J. F., Langthorne, P., & Keil, F. C. (2016). The better part of not knowing: Virtuous ignorance. Developmental Psychology, 52(1), 31-45.
  4. Whalen, A., Griffiths, T. L., & Buchsbaum, D. (2017). Sensitivity to shared information in social learning. Cognitive Science, 42(1), 168-187.
  5. Kinzler, K. D., Corriveau, K. H., & Harris, P. L. (2011). Children’s selective trust in native-accented speakers. Developmental Science, 14(1), 106-111.

Case Studies (7 papers)


  1. Kalish, C. W., & Sabbagh, M. A. (2007). Conventionality and cognitive development: Learning to think the right way. New Directions for Child and Adolescent Development, 2007(115), 1-9.
  2. Clark, E. V. (2010). Learning a language the way it is: Conventionality and semantic domains. In B. C. Malt & P. Wolff (Eds.), Words and the mind: How words capture human experience. (pp. 243-265). New York, NY: Oxford University Press.
  3. Schmidt, M. F., Butler, L. P., Heinz, J., & Tomasello, M. (2016). Young children see a single action and infer a social norm: Promiscuous normativity in 3-year-olds. Psychological Science, 27(10), 1360-1370.
  4. Legare, C. H., Sobel, D. M., & Callanan, M. (2017). Causal learning is collaborative: Examining explanation and exploration in social contexts. Psychonomic Bulletin & Review, 24(5), 1548-1554.
  5. Bridgers, S., Buchsbaum, D., Seiver, E., Griffiths, T. L., & Gopnik, A. (2016). Children’s causal inferences from conflicting testimony and observations. Developmental Psychology, 52(1), 9-18.
  6. Butler, L. P., & Markman, E. M. (2014). Preschoolers use pedagogical cues to guide radical reorganization of category knowledge. Cognition, 130(1), 116-127.
  7. Rhodes, M., Leslie, S. J., & Tworek, C. M. (2012). Cultural transmission of social essentialism. Proceedings of the National Academy of Sciences, 109(34), 13526-13531.

Consequences (2 papers)


  1. Bonawitz, E., Shafto, P., Gweon, H., Goodman, N. D., Spelke, E., & Schulz, L. (2011). The double-edged sword of pedagogy: Instruction limits spontaneous exploration and discovery. Cognition, 120, 322–330.
  2. Lyons, D. E., Young, A. G., & Keil, F. C. (2007). The hidden structure of overimitation. Proceedings of the National Academy of Sciences, 104(50), 19751-19756.

Mechanisms (6 papers)


  1. Csibra, G., & Gergely, G. (2009). Natural pedagogy. Trends in Cognitive Sciences, 13(4), 148-153.
  2. Legare, C. H., & Nielsen, M. (2015). Imitation and innovation: The dual engines of cultural learning. Trends in Cognitive Sciences, 19(11), 688-699.
  3. Heyes, C. (2016). Who knows? Metacognitive social learning strategies. Trends in Cognitive Sciences, 20(3), 204-213.
  4. Schachner, A., & Carey, S. (2013). Reasoning about ‘irrational’ actions: When intentional movements cannot be explained, the movements themselves are seen as the goal. Cognition, 129(2), 309-327.
  5. Jara-Ettinger, J., Gweon, H., Schulz, L. E., & Tenenbaum, J. B. (2016). The naive utility calculus: Computational principles underlying commonsense psychology. Trends in Cognitive Sciences, 20(8), 589-604.
  6. Hu, J., Buchsbaum, D., Griffiths, T. & Xu, F. (2013) When does the majority rule? Preschoolers’ trust in majority informants varies by task domain. In Proceedings of the 35th Annual Conference of the Cognitive Science Society.

Cross-cultural Comparison (2 papers)


  1. Clegg, J. M., & Legare, C. H. (2016). A cross-cultural comparison of children’s imitative flexibility. Developmental Psychology, 52(9), 1435-1444.
  2. Rogoff, B., Moore, L., Najafi, B., Dexter, A., Correa-Chávez, M., Solís, J. (2007). Children’s development of cultural repertoires through participation in everyday routines and practices. In J. E. Grusec & P. D. Hastings (Eds.), Handbook of socialization: Theory and research (pp. 490-515). New York, NY, US: Guilford Press.

Topic #3 Probabilistic Models of Cognition

In order to understand bird flight, we have to understand aerodynamics; only then do the structure of feathers and the different shapes of birds’ wings make sense. — David Marr, Vision (1982)

Introduction

Lying at the heart of the mystery of human knowledge is the age-old question, how can we learn anything abstract and generalizable at all from concrete, transient, and noisy sensory input, let alone so much and so quickly?

Over the last three decades, probabilistic models of cognition have offered many exciting answers. Traditionally, they address questions at Marr’s computational level by elucidating what problem a cognitive system is trying to solve and offering an optimal solution to that problem under certain constraints, which allows us to understand the goal of learning as well as what can in principle be learned. This approach has proved highly fruitful and has lent new insight into challenging problems such as the origin of abstract knowledge (e.g., hierarchical Bayesian models), how structured knowledge can be combined with statistical evidence (e.g., theory-based Bayesian models), and one-shot learning of rich concepts (e.g., Bayesian program induction). Recently, probabilistic models such as rational process models have also begun to address questions at lower levels, asking what algorithms learners can use to approximate (often intractable) Bayesian inference or what heuristics they should choose for a given task. However, it’s worth noting that probabilistic models are not the only modeling framework in cognitive science, and they don’t go without criticism; we should be aware of the strengths and the weaknesses of different frameworks.
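
As a toy illustration of the contrast between these levels, here is a minimal Python sketch. The coin example and all numbers are my own invention; the exact posterior is the computational-level answer, and the few-sample approximation is only in the spirit of the sample-based rational process models covered below (e.g., Vul et al., 2014), not a reconstruction of any particular model.

```python
import random

# Hypotheses: a coin's bias is one of three values; we observe 8 heads in 10 flips.
hypotheses = [0.3, 0.5, 0.8]
prior = {h: 1 / 3 for h in hypotheses}
heads, flips = 8, 10

def likelihood(h):
    """P(data | h) for a sequence with `heads` heads out of `flips` flips."""
    return h ** heads * (1 - h) ** (flips - heads)

# Computational-level answer: the exact posterior via Bayes' rule.
z = sum(prior[h] * likelihood(h) for h in hypotheses)
posterior = {h: prior[h] * likelihood(h) / z for h in hypotheses}

# Process-level approximation: base beliefs on just a handful of posterior
# samples, in the spirit of sample-based rational process models.
samples = random.choices(hypotheses,
                         weights=[prior[h] * likelihood(h) for h in hypotheses],
                         k=5)
approximation = {h: samples.count(h) / len(samples) for h in hypotheses}

print(posterior)       # exact, roughly {0.3: 0.004, 0.5: 0.127, 0.8: 0.869}
print(approximation)   # noisy but cheap; varies from run to run
```

Run it a few times: the exact posterior never changes, while the five-sample approximation fluctuates. That trade-off between accuracy and cost is exactly what the algorithmic-level papers below examine.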

Here, I begin this topic with papers on rational analysis and comparisons among different modeling frameworks in cognitive science. I then focus on probabilistic models of cognition, turning to papers on the key concepts behind them, case studies on language and causality, and recent advances toward the algorithmic level. I conclude with papers critiquing probabilistic models, along with papers that respond to these critiques and point out alternative ways forward.

Overview (8 papers)

Rational Analysis (3 papers)


  1. Marr, D. (1982). The philosophy and the approach. In Vision (pp. 8-29). San Francisco, CA: Freeman.
  2. Anderson, J. R. (1990). Introduction. In The adaptive character of thought (pp. 1-40). Hillsdale, NJ: Erlbaum.
  3. Chater, N. & Oaksford, M. (1999). Ten years of the rational analysis of cognition. Trends in Cognitive Science, 3(2), 57-65.

Frameworks for Cognitive Modeling (5 papers)


  1. McClelland, J. L. (2009). The place of modeling in cognitive science. Topics in Cognitive Science, 1(1), 11-38.
  2. Bringsjord, S. (2008). Declarative/logic-based cognitive modeling. In R. Sun (Ed.), The Cambridge handbook of computational psychology (pp. 127–176). New York, NY: Cambridge University Press.
  3. Piantadosi, S. T., & Jacobs, R. A. (2016). Four problems solved by the probabilistic language of thought. Current Directions in Psychological Science, 25(1), 54-59.
  4. McClelland, J. L., Botvinick, M. M., Noelle, D. C., Plaut, D. C., Rogers, T. T., Seidenberg, M. S., & Smith, L. B. (2010). Letting structure emerge: Connectionist and dynamical systems approaches to cognition. Trends in Cognitive Sciences, 14(8), 348-356.
  5. Griffiths, T. L., Chater, N., Kemp, C., Perfors, A., & Tenenbaum, J. B. (2010). Probabilistic models of cognition: Exploring representations and inductive biases. Trends in Cognitive Sciences, 14(8), 357-364.

On the Computational Level (12 papers)

Foundation (4 papers)


  1. Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022), 1279-1285.
  2. Kemp, C., Perfors, A., & Tenenbaum, J. B. (2007). Learning overhypotheses with hierarchical Bayesian models. Developmental Science, 10(3), 307-321.
  3. Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266), 1332-1338.
  4. Perfors, A. (2012). Bayesian models of cognition: What’s built-in after all? Philosophy Compass, 7(2), 127-138.

Case Studies: Language (4 papers)


  1. Xu, F., & Tenenbaum, J. B. (2007). Word learning as Bayesian inference. Psychological Review, 114(2), 245–272.
  2. Perfors, A., Tenenbaum, J. B., & Regier, T. (2011). The learnability of abstract syntactic principles. Cognition, 118(3), 306-338.
  3. Frank, M. C., & Goodman, N. D. (2012). Predicting pragmatic reasoning in language games. Science, 336(6084), 998-998.
  4. Meylan, S.C., Frank, M.C., Roy, B.C., & Levy, R. (2017). The emergence of an abstract grammatical category in children’s early speech. Psychological Science, 28(2), 181-192.

Case Studies: Causality (4 papers)


  1. Griffiths, T. L., & Tenenbaum, J. B. (2007). From mere coincidences to meaningful discoveries. Cognition, 103(2), 180-226.
  2. Griffiths, T. L., & Tenenbaum, J. B. (2009). Theory-based causal induction. Psychological Review, 116(4), 661-716.
  3. Goodman, N. D., Ullman, T. D., & Tenenbaum, J. B. (2011). Learning a theory of causality. Psychological Review, 118(1), 110-119.
  4. Pacer, M. D. & Griffiths, T. L. (2011). A rational model of causal inference with continuous causes. In Advances in Neural Information Processing Systems 24.

Towards the Algorithmic Level (6 papers)

Foundation (2 papers)


  1. Griffiths, T. L., Lieder, F., & Goodman, N. D. (2015). Rational use of cognitive resources: Levels of analysis between the computational and the algorithmic. Topics in Cognitive Science, 7(2), 217-229.
  2. Gershman, S. J., Horvitz, E. J., & Tenenbaum, J. B. (2015). Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349(6245), 273-278.

Case Studies (4 papers)


  1. Vul, E., Goodman, N. D., Griffiths, T. L., & Tenenbaum, J. B. (2014). One and done? Optimal decisions from very few samples. Cognitive Science, 38(4), 599–637.
  2. Sanborn, A. N., Griffiths, T. L., & Navarro, D. J. (2010). Rational approximations to rational models: Alternative algorithms for category learning. Psychological Review, 117(4), 1144-1167.
  3. Bonawitz, E., Denison, S., Gopnik, A., & Griffiths, T. L. (2014). Win-stay, lose-sample: A simple sequential algorithm for approximating Bayesian inference. Cognitive Psychology, 74, 35-65.
  4. Lieder, F., & Griffiths, T. L. (2017). Strategy selection as rational metareasoning. Psychological Review, 124(6), 762-794.

Critiques & Responses (6 papers)

  1. Jones, M., & Love, B. C. (2011). Bayesian fundamentalism or enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition. Behavioral and Brain Sciences, 34, 169-231.
  2. Marcus, G. F., & Davis, E. (2013). How robust are probabilistic models of higher-level cognition? Psychological Science, 24(12), 2351-2360.
  3. Goodman, N. D., Frank, M. C., Griffiths, T. L., Tenenbaum, J. B., Battaglia, P. W., & Hamrick, J. B. (2015). Relevant and robust: A response to Marcus and Davis (2013). Psychological Science, 26(4), 539-541.
  4. Griffiths, T. L., Chater, N., Norris, D., & Pouget, A. (2012). How the Bayesians got their beliefs (and what those beliefs actually are): Comment on Bowers and Davis (2012). Psychological Bulletin, 138, 415-422.
  5. Frank, M. C. (2013). Throwing out the Bayesian baby with the optimal bathwater: Response to Endress (2013). Cognition, 128(3), 417-423.
  6. Tauber, S., Navarro, D. J., Perfors, A., & Steyvers, M. (2017). Bayesian models of cognition revisited: Setting optimality aside and letting data drive psychological theory. Psychological Review, 124(4), 410–441.

Sampling biases in developmental studies

Are parents who agree to let their children participate in scientific experiments really a representative sample of all human parents?

At some point, perhaps all developmental researchers have wondered: are parents who agree to let their children participate in scientific experiments really a representative sample of all human parents? Anecdotally, it seems to me that those who sign up for studies have exceptional passion for, or at least trust in, science, curiosity about how their children think, devotion to early learning and education, and so on. Do their children learn differently from children whose parents do not consent? The answer matters enormously for the generalizability of developmental studies: on the one hand, we can only test children whose parents agree for them to participate (how else are we going to pass IRB review?); on the other, we wish our results applied to all children.

Yue Yu and his colleagues at Rutgers University (Yu, Bonawitz, & Shafto, 2017) addressed this apparent conundrum with a clever design and a popular statistical technique.

The experiment

In the beginning, two experimenters secretly observe pairs of parents and children (“parent-child dyads”) at a racially diverse local zoo or playground. They keep track of both the quantity¹ and the quality² of parent-child interactions. A total of 109 dyads are observed.

Five minutes later, a third experimenter approaches the observed dyads, asking whether the parents want their children to take part in a study. 78 pairs are invited (the other 31 are excluded for various reasons), among which 59 agree and 19 refuse.

For those who agree, two experimenters introduce the children to a novel toy with five functions: “a tower that lights up when a button is pushed, a knob that produces a squeaking sound when squeezed, a lady bug pin light that flashes in three different patterns when pushed, a flower magnet that moves between three different places on the toy, and a turtle hidden in a pipe that is visible through a magnifying window” (p. 3 [emphasis added; changed to present tense]).

One experimenter, who claims to be knowledgeable about the toy, points to the tower (the target function) and says, “I’m asking you to think about: What does this button do?” Then the children are given the toy to play with until they get bored. Children’s test performance is coded using several well-established measures³.

The relevant finding

The question is: do children with and without parental consent differ in their test performance? The problem is that the researchers have no test data for children without consent. The solution is “model-based multiple imputation” (Rubin, 2004), a statistical technique for dealing with missing data.

The “magical” imputation process works in roughly five steps:

1. Yu et al. (2017) found that children’s pre-test interactions with their parents correlate with their later test performance. As a result, we can predict test performance reasonably well from pre-test parent-child interactions.

2. Let’s assume that the relationship between test performance and parent-child interactions is the same for consented and non-consented children.

3. So we can model this relationship based on the consented children

4. and use the model to predict non-consented children’s test performance from their interactions with their parents (which are available!).

5. Repeat Steps 1–4 100 times, each time adding random noise (see the sketch below).
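
To make the logic concrete, here is a minimal sketch of Steps 3–5 in Python. Everything in it is hypothetical (made-up data, ordinary least squares as the predictive model, seven interaction measures); it is my own illustration of model-based multiple imputation, not the authors’ code, and the real procedure also propagates uncertainty about the model itself rather than only adding prediction noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: rows are dyads, X holds pre-test parent-child interaction
# measures, y holds test performance (observed only for consented children).
X_consented = rng.normal(size=(59, 7))
true_beta = rng.normal(size=7)
y_consented = X_consented @ true_beta + rng.normal(scale=0.5, size=59)
X_nonconsented = rng.normal(size=(19, 7))

imputed_means = []
for _ in range(100):  # Step 5: repeat the imputation 100 times
    # Step 3: model the interaction-performance relationship in consented
    # dyads (here, ordinary least squares for illustration).
    beta, *_ = np.linalg.lstsq(X_consented, y_consented, rcond=None)
    residual_sd = np.std(y_consented - X_consented @ beta)
    # Step 4: predict non-consented children's performance from their observed
    # interactions, adding random noise so each imputation reflects the
    # uncertainty of the prediction.
    y_imputed = X_nonconsented @ beta + rng.normal(scale=residual_sd, size=19)
    imputed_means.append(y_imputed.mean())

# Compare imputed (non-consented) with observed (consented) performance
# across the 100 imputations.
print(np.mean(imputed_means) - y_consented.mean())
```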

Results of these 100 imputations suggest that non-consented children consistently differ from consented children in test performance, with mean differences ranging from .09 to .20.

In a nutshell, children with parental consent are a biased sample, and their test performance may actually differ from that of the population as a whole!


Reference

Yu, Y., Bonawitz, E., & Shafto, P. (2017). Inconvenient samples: Modeling the effects of non-consent by coupling observational and experimental results. In Proceedings of the 39th Annual Conference of the Cognitive Science Society.


Footnotes

  1. Including: 1) the length of dyadic activities, 2) the length of supervised activities, and 3) the length of unsupervised activities.
  2. Including (numbering continued from footnote 1): 4) the number of parents’ pedagogical questions, 5) the number of parents’ information-seeking questions, 6) the number of parents’ statements, and 7) the number of parents’ commands.
  3. Including: 1) the total time spent on the toy, 2) whether the target function is activated during the whole process, 3) the number of unique actions performed on the toy during the whole process, 4) the number of non-target functions activated during the whole process, 5) whether the target function is activated in the first minute, 6) the number of unique actions performed on the toy in the first minute, and 7) the number of non-target functions activated in the first minute.

Beginner’s CoCoSci list

  • I’ll come back to add comments on why I think these books, websites, lists, etc. are amazing when I get the chance.
  • Also, I’ll keep updating this list as I find or think of more.

Theory


  1. Probability Theory: The Logic of Science (E. T. Jaynes, 2003)
    • Above is THE book that lays the foundation for modern Bayesian probability theory. More exciting still for cognitive scientists, it is not just about how mathematicians make sense of data or how scientists make discoveries, but also about how the human mind makes sense of the world in an intuitive way. A must-read if you love “math on the mind”.
    • Link: Amazon
  2. Bayes’ Rule: A Tutorial Introduction to Bayesian Analysis (J. V. Stone, starting 2013)
  3. Information Theory: A Tutorial Introduction (J. V. Stone, 2015)
  4. A Mathematical Primer for Social Statistics (J. Fox, 2009)
    • A quick (and very readable) refresher on linear algebra and calculus, both of which are essential for understanding stats and building computational models. Highly recommended if you want to recover a reasonable working knowledge of math without going through 1000-page linear algebra and calculus textbooks for math majors (again).
    • Link: Amazon, Fox’s website 

Modeling

  1. Computational Modeling in Cognition: Principles and Practice (Lewandowsky & Farrell, 2010)
  2. Bayesian Cognitive Modeling: A Practical Course (Lee & Wagenmakers, 2014)
  3. Artificial Intelligence: A Modern Approach (3rd Edition) (Russell & Norvig, 2009)
    1. Link: Amazon, Berkeley website, GitHub
    2. Language: Python, LISP, Julia, Scala, Java, C#, Javascript
  4. The Cambridge Handbook of Computational Psychology (Sun, 2008)
  5. Probabilistic Models of Cognition (Goodman & Tenenbaum, online book)
  6. Statistical Rethinking: A Bayesian Course with Examples in R and Stan (McElreath, 2015)
  7. Foundational papers

Programming

  1. MATLAB/Octave
  2. R
  3. Python
  4. Church
  5. (formatting) LaTeX

Online Experiments

  1. MTurk
  2. psiTurk

Modeling + Cognitive Development

  1. Rational Constructivism in Cognitive Development (Xu & Kushnir, 2012)
  2. Causal Learning: Psychology, Philosophy, and Computation (Gopnik & Schulz, 2007)

Popular Science

  1. Algorithms to Live By (Christian & Griffiths, 2016)
  2. Thinking, Fast and Slow (Kahneman, 2013)

Reading lists, resources, blogs…

CoCoSci

  1. Josh Tenenbaum (MIT): resources
  2. Tom Griffiths (UC Berkeley): reading list, big data 
  3. Amy Perfors (University of Adelaide): general resources, course
  4. Dan Navarro (UNSW): resources 
  5. Noah Goodman (Stanford): resources
  6. Mike Frank (Stanford): past syllabi, blog
  7. Todd Gureckis (NYU): resources, blog
  8. Robert Jacobs (Rochester): Computational Cognition Cheat Sheets
  9. Garrison Cottrell (UCSD): Cognitive Modeling Greatest Hits, resources
  10. Rebecca Saxe (MIT): Theory of Mind resources
  11. Andreas Stuhlmüller (MIT): Ought, personal website
  12. Sharon Goldwater (University of Edinburgh): reading list
  13. ESSLLI summer school: 2016 (Composition in Probabilistic Language Understanding), 2014 (Probabilistic Programming Languages)
  14. Brendan O’Connor (UMass): AI and social science
  15. Monica Gates (UC Berkeley): science outreach
  16. Jessica Hamrick (UC Berkeley): qual reading notes
  17. Wai Keen Vong (Rutgers): blog
  18. Baxter Eaves (Rutgers): blog

Cognitive Development

  1. Samuel G. B. Johnson (Yale): research

Stats & Methodology

  1. Daniël Lakens (Eindhoven University of Technology): blog (the 20% statistician), personal
  2. Sanjay Srivastava (University of Oregon): blog (the hardest science, e.g., everything is fucked)
  3. Will Gervais (University of Kentucky): stats books
  4. Simine Vazire (UC Davis): blog (sometimes i’m wrong)
  5. Brian Nosek (Virginia): open science
  6. Ed Vul (UCSD): “voodoo correlation” (paper, book chapter)
  7. John Kruschke (Indiana University): blog (doing Bayesian data analysis)

Academia

  1. Lewandowsky and Ecker (UWA): research tools
  2. Brad Voytek (UCSD): lab philosophy
  3. Mike Pacer (UC Berkeley): qualifying exams
  4. The Professor Is In
    • Advice on how to build a career out of a Ph.D., inside or outside academia.
  5. Konrad Kording (Northwestern): resources (e.g., data skills, writing, productivity)
  6. Dredze (JHU) and Wallach (UMass): how to be a successful PhD student
  7. Matt Might (Utah): blog
  8. Tim Brady (UCSD): MTurk, journal ranking, related references
  9. Brian Scholl (Yale): musings

Miscellaneous

  1. Jordan Suchow (UC Berkeley): reading list
  2. Falk Lieder (UC Berkeley): practical rationality
  3. Monica Gates (UC Berkeley): blog
  4. Jessica Hamrick (UC Berkeley): blog
  5. Robert Hawkins (Stanford): website