Data Blitz Sessions

Algorithms and Decision-Making

Early Career Researchers Data Blitz

October 26, 2020, 10:00 am PT / 1:00 pm ET


Schedule

10:00 - 10:05 am: Opening Statements and Introductions (Nate and Juliana)

10:05 - 10:13 am: Data Blitz Talk 1 + Q&A

10:13 - 10:21 am: Data Blitz Talk 2 + Q&A

10:21 - 10:29 am: Data Blitz Talk 3 + Q&A

10:29 - 10:37 am: Data Blitz Talk 4 + Q&A

10:37 - 10:45 am: Data Blitz Talk 5 + Q&A

10:45 - 10:53 am: Data Blitz Talk 6 + Q&A

10:53 - 11:03 am: Integrative Discussion (Nate and Juliana)

11:03 - 11:25 am: Virtual Roundtable Discussions

11:25 - 11:30 am: Closing Statements (Nate and Juliana)

Data Blitz Sessions

TALK 1: Deliberative vs. Intuitive Thinking Precludes Algorithmic Aversion

Heather Yang, MIT

Who is likely to resist algorithmic advice? We assess the hypothesis that cognitive style, as measured through the Cognitive Reflection Test, is associated with greater advice-seeking from algorithmic advisors. Across 11 online studies and over 2,400 participants, we find that individuals who rely on their intuition prefer more advice from human (vs. algorithmic) advisors. This relationship is partially mediated by perceptions of advisor accuracy: intuitive individuals believe that human advisors are, on average, more accurate than algorithmic ones. This work is the first to identify a non-demographic, individual-level difference that predicts preference for algorithmic vs. human advice.

TALK 2: Algorithmic Hiring: People Prefer to Have a Person Hire Them Instead of an Algorithm

Jennifer M. Logg, Georgetown University

Although more and more companies are using hiring algorithms, across 6 experiments (N = 4,016) we find that applicants chose a person over an algorithm to assess their application (Experiment 1: 70%). This preference weakened when the applicant pool was less competitive (Experiment 2: from 67% to 58%). However, people did prefer the algorithm when the hiring manager was a member of the applicant's out-group (Experiment 3: in-group: 69%; out-group: 39%). Applicants have such a strong preference for a person that an algorithm must reach 75% accuracy in its past hiring decisions before they prefer it.

TALK 3: Perceptions of Algorithms’ Capabilities to Assess Diversity

Teodora K. Tomova Shakur, New York University

We investigate perceptions of algorithms' capabilities to use different kinds of diversity characteristics when making hiring recommendations. Employing text analysis, we found that algorithms are perceived as less likely than humans to rely on deep-level diversity characteristics but more likely to rely on surface-level ones. Using samples of working adults, we discovered that the perception that algorithms neglect unique qualities explained perceptions of their inability to use deep-level diversity markers. Finally, we found that real information about algorithms' capability to detect uniqueness changed these perceptions. These findings reveal lay beliefs that algorithms are unlikely to grasp deep-level diversity, despite an increasing amount of evidence that they can.

TALK 4: The AI Invasion: How Workplace Artificial Intelligence Affects Career Preferences

Noah Castelo, University of Alberta

We explore when and why students and workers perceive AI as a helper vs. a competitor in the workplace, and how those perceptions affect career preferences. When AI's performance relative to humans is ambiguous, AI is perceived as more of a helper than a competitor, but this reverses when AI's performance is known to be high relative to humans. The relationship between performance and seeming like a competitor is stronger for AI than for other technologies or for other humans, partly due to AI's ability to threaten workers' sense of competence and autonomy. Finally, we explore how perceptions of AI interact with perceptions of the job in question to shape career preferences.

TALK 5: Algorithmic Face-ism: Uncovering and Mitigating Algorithmic Bias in Decision-Based Facial Recognition Systems

Hatim A. Rahman, Northwestern University

How do some of the most advanced machine learning facial recognition algorithms make important decisions, such as whom to hire or who is considered a leader? Existing research suggests that advances in machine learning methods can answer these questions by using the facial features of an image (facial morphology) to accurately and objectively predict answers. We show, however, that even after implementing some of the most common methods to control for algorithmic bias, such as judgment sampling and deep neural networks, bias still exists in decision-based facial recognition algorithms. Contrary to existing knowledge, we show that existing machine learning algorithms do not rely on facial morphology to make decisions. Instead, after accounting for common algorithmic biases, we found that these algorithms relied mostly on transient features, such as the image's overall illumination or background lighting. We identify the specific stages (sampling and model functioning) in which bias is likely to arise in facial recognition algorithms. These results suggest that decision-based facial recognition algorithms perpetuate bias in ways that researchers have thus far overlooked, with troubling implications for their use by governments, organizations, and researchers. We introduce the concept of algorithmic "face-ism," in which machine learning algorithms unfairly express an inherent preference for specific facial morphologies. This paper thus identifies how bias enters and influences leading decision-based facial recognition systems and demonstrates how previously taken-for-granted factors contribute to bias in automated decisions. We conclude by discussing how such biases can be mitigated in decision-based facial recognition algorithms.

TALK 6: Making Sense of Recommendations

Michael Yeomans, Imperial College London

Computer algorithms are increasingly being used to predict people's preferences and make recommendations. Although they are cheap to scale, their accuracy is unproven. Here, we compare computer recommender systems to human recommenders in a domain that affords humans many advantages: predicting which jokes people will find funny. We find that recommender systems outperform humans, even close relations. Yet people are averse to relying on these recommender systems. This aversion partly stems from the fact that people believe the human recommendation process is easier to understand. It is not enough for recommender systems to be accurate; they must also be understood.