Harnessing machine learning to understand change in social attitudes

Faculty Member: Professor Mahzarin Banaji

Attitudes, beliefs, and values are important determinants of behavior. During the 20th century, investigations into their nature formed the backbone of the social sciences. As evidence accumulated for the disparity between people’s reported attitudes (e.g., “I am egalitarian”) and their behavior (e.g., evidence of unequal treatment and biased decision-making), a question of measurement emerged: Are self-report measures limited both by concerns about social desirability and by a lack of conscious access to the contents of the mind? These questions have driven a search for alternative ways of measuring attitudes, beliefs, and values, resulting in a set of implicit measures of cognition. So far, research using one such measure, the Implicit Association Test, has revealed that many American adults and children exhibit negative implicit attitudes toward outgroups such as African Americans, Muslims, and the elderly. These biases have been shown to influence behavior in domains such as hiring and admissions, voting, and medical decision-making. In addition to using more traditional experimental methods to understand intergroup cognition, we seek to borrow tools from machine learning to investigate the formation and change of implicit biases. The questions we plan to explore include the following: (1) Do intergroup biases reflect the frequencies of co-occurrence between certain group members and certain objects, as seen in image sets (e.g., pictures of women paired with beauty products, or of Asians paired with computers)? (2) How can we use the text of children’s stories, as well as databases of speech produced by children, to understand the developmental trajectory of intergroup biases? (3) What can we learn about long-term trends in intergroup biases by analyzing word embeddings extracted from texts produced during different historical periods?
By drawing on machine learning methods, we hope to extend the study of implicit intergroup biases to ages (both developmental and historical) for which traditional methods of data collection are unavailable.
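To illustrate the kind of analysis envisioned in question (3), the sketch below computes a differential association score from word vectors, in the spirit of embedding-based bias measures: a target word's relative cosine similarity to one attribute versus another, compared across two groups. The four-dimensional vectors here are tiny hand-made stand-ins, not real embeddings; an actual analysis would load embeddings trained on era-specific corpora.

```python
import numpy as np

# Hypothetical toy vectors for illustration only; real studies would use
# embeddings trained on large corpora from each historical period.
embeddings = {
    "woman":   np.array([0.9, 0.1, 0.3, 0.0]),
    "man":     np.array([0.1, 0.9, 0.3, 0.0]),
    "beauty":  np.array([0.8, 0.2, 0.1, 0.1]),
    "science": np.array([0.2, 0.8, 0.1, 0.1]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(target, attr_a, attr_b):
    """How much closer `target` is to attribute A than to attribute B."""
    t = embeddings[target]
    return cosine(t, embeddings[attr_a]) - cosine(t, embeddings[attr_b])

# Differential association: positive values mean "woman" skews toward
# "beauty" (relative to "science") more than "man" does in these vectors.
bias = association("woman", "beauty", "science") - association("man", "beauty", "science")
print(bias > 0)
```

Tracking how such a score changes across embeddings trained on texts from different decades is one way to estimate long-term trends in intergroup associations.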

Description of RA Duties: Research assistants are expected to contribute to the project by participating in the selection, development, coding, and execution of machine learning algorithms.

Requirements: Extensive programming experience (especially in R, MATLAB, and/or Python) is necessary, and a background and/or interest in the substantive issues of social cognition is highly desirable. Applicants must be Harvard College undergraduates.

To Apply: Please email your résumé and a short cover letter to Benedek Kurdi at kurdi@g.harvard.edu.