Xuechunzi Bai

Department of Psychology
The University of Chicago
Publications

Our research focuses on stereotype bias in humans and machines, using four methodological approaches. You can browse by keyword: data-driven discovery, model-based mechanisms, large-scale behavioral experiments, and bias in AI.

Collective bias emerges even from rational social learning.
Bufan Gao, Xuechunzi Bai
arXiv, Under review [pdf]
Large language models develop novel social biases through adaptive exploration.
Addison Wu, Ryan Liu, Xuechunzi Bai, Tom Griffiths
arXiv, Under review [pdf]
Aligned but blind: Alignment increases implicit bias by reducing awareness of race.
Lihao Sun, Chengzhi Mao, Valentin Hofmann, Xuechunzi Bai
ACL Main, 2025 [pdf] [project]
Measuring machine learning harms from stereotypes requires understanding who is being harmed by which errors in what ways.
Angelina Wang, Xuechunzi Bai, Solon Barocas, Su Lin Blodgett
FAccT, 2025 [pdf]
Explicitly unbiased large language models still form biased associations.
Xuechunzi Bai, Angelina Wang, Ilia Sucholutsky, Tom Griffiths
PNAS, 2025 [pdf] [project] [slides]
Learning too much from too little: False face stereotypes emerge from a few exemplars and persist via insufficient sampling.
Xuechunzi Bai, Stefan Uddenberg, Brandon Labbree, Alex Todorov
JPSP, 2025 [pdf]
Costly exploration produces stereotypes with dimensions of warmth and competence.
Xuechunzi Bai, Tom Griffiths, Susan Fiske
JEP:G, 2025 [pdf] [model] [slides]
The limitations of machine learning models for predicting scientific replicability.
Molly Crockett, Xuechunzi Bai, Sayash Kapoor, Lisa Messeri, Arvind Narayanan
PNAS Letters, 2023 [pdf]
Humans perceive warmth and competence in Artificial Intelligence.
Kevin McKee, Xuechunzi Bai, Susan Fiske
iScience, 2023 [pdf]
Globally inaccurate stereotypes can result from locally adaptive exploration.
Xuechunzi Bai, Susan Fiske, Tom Griffiths
Psych Science, 2022 [pdf] [project] [slides]
A spontaneous stereotype content model: Taxonomy, properties, processes, and prediction.
Gandalf Nicolas, Xuechunzi Bai, Susan Fiske
JPSP, 2022 [pdf]
Warmth and competence in human-agent cooperation.
Kevin McKee, Xuechunzi Bai, Susan Fiske
AAMAS, 2022 [pdf]
Urban space and social cognition: The effect of urban space on intergroup perceptions.
Kim Knipprath, Maurice Crul, Ismintha Waldring, Xuechunzi Bai
ANNALS, 2021 [pdf]
Cosmopolitan morality trades off ingroup for the world, separating benefits and protection.
Xuechunzi Bai, Varun Gauri, Susan Fiske
PNAS, 2021 [pdf]
As diversity increases, people paradoxically perceive social groups as more similar.
Xuechunzi Bai, Miguel Ramos, Susan Fiske
PNAS, 2020 [pdf] [project] [slides]
Comprehensive stereotype content dictionaries using a semi-automated method.
Gandalf Nicolas, Xuechunzi Bai, Susan Fiske
EJSP, 2020 [pdf]
Vertical and horizontal inequality are status and power differences.
Susan Fiske, Xuechunzi Bai
Current Opinion, 2020 [pdf]
Stereotypes as historical accidents: Images of social class in post-communist versus capitalist societies.
Lucy Grigoryan, Xuechunzi Bai, Federica Durante, Susan Fiske, Marharyta Fabrykant, Anna Hakobjanyan, Nino Javakhishvili, Kamoliddin Kadirov, Marina Kotova, Ana Makashvili, Edona Maloku, Olga Morozova-Larina, Nozima Mullabaeva, Adil Samekin, Volha Verbilovich, Illia Yahiiaiev
PSPB, 2020 [pdf]
Exploring research methods blogs in psychology: Who posts what about whom.
Gandalf Nicolas, Xuechunzi Bai, Susan Fiske
PPS, 2019 [pdf]
Admired rich or resented rich? How two cultures vary in envy.
Sherry Wu, Xuechunzi Bai, Susan Fiske
JCCP, 2018 [pdf]