Social-Cognitive Bias & Machine Learning

Traditional measures of the content of social groups’ stereotypes differ from measures of stereotype salience (Nicolas, Bai, & Fiske, in prep.).

In a recent line of research, I have been interested in studying various social-cognitive biases in machine learning models. In particular, with Drs. Alexander Todorov (Psychology) and Arvind Narayanan (Computer Science), I am examining how computer vision models may reflect human intersectional biases. In an initial project, we are exploring whether emotion and gender are associated in current computer vision models, resulting in, for example, more "angry" classifications for male faces and more "happy" classifications for female faces. I have also started to look into associations between race, gender, and emotion in widely used pretrained models for natural language processing, such as word2vec and GloVe. Preliminary results suggest that some of the intersectional stereotypes observed in humans are indeed reflected in many currently available machine learning models.
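To illustrate the kind of association test involved, the sketch below probes pretrained GloVe vectors for a gender–emotion association using a simplified, WEAT-style difference of mean cosine similarities. The word lists, the gensim model name, and the scoring rule are illustrative assumptions, not the project's actual measures or stimuli.

```python
# Minimal sketch: probe pretrained GloVe vectors for a gender-emotion
# association via a simplified, WEAT-style score. Word lists and the
# scoring choice are illustrative assumptions, not the study's protocol.
import gensim.downloader as api
import numpy as np

# 100-dimensional GloVe vectors distributed through gensim's downloader.
glove = api.load("glove-wiki-gigaword-100")

male_terms = ["he", "him", "man", "male", "boy"]
female_terms = ["she", "her", "woman", "female", "girl"]
anger_terms = ["angry", "furious", "rage", "hostile"]
happy_terms = ["happy", "cheerful", "joyful", "smiling"]

def mean_similarity(targets, attributes):
    """Average cosine similarity over every target/attribute word pair."""
    return np.mean([glove.similarity(t, a) for t in targets for a in attributes])

# Positive values indicate anger words sit closer to male than female terms,
# and happiness words closer to female than male terms, respectively.
anger_bias = mean_similarity(male_terms, anger_terms) - mean_similarity(female_terms, anger_terms)
happy_bias = mean_similarity(female_terms, happy_terms) - mean_similarity(male_terms, happy_terms)

print(f"male-vs-female anger association:    {anger_bias:+.3f}")
print(f"female-vs-male happiness association: {happy_bias:+.3f}")
```

A fuller analysis would add permutation tests and effect sizes (as in the WEAT literature) rather than raw similarity differences, but the same cosine-similarity machinery underlies both.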

Gandalf Nicolas
Assistant Professor of Psychology