CE26 - Innovation, travail 2020

Understanding and improving trust in scientific truths: an experimental approach – TrustSciTruths


We are conducting controlled experiments to identify the determinants of trust in scientific experts and processes. A first line of research focuses on the effectiveness and perception of algorithms used to recruit workers. A second examines how an adviser's gender and expert status affect whether their recommendations are followed. A third asks whether the desirability of a conclusion affects people's ability to reason toward it.

Understanding and improving trust in scientific truths: Algorithm aversion, expert perception and motivated reasoning

Recent advances in artificial intelligence and machine learning have transformed many fields, including those involving complex human decisions. Whereas automation mainly affected routine jobs, AI now has an impact on highly skilled professions, sometimes surpassing humans in tasks such as medical diagnosis, judicial decisions or recruitment. In areas where the public lacks the knowledge to make informed decisions, trust in experts is crucial: trust in scientists correlates with compliance with climate policies, and trust in health experts increases compliance with Covid-19 measures. Confidence in scientific truths also depends on the ability to analyze available information, which in turn shapes the interpretation of scientific findings. This research project explores trust in scientific truths by drawing on experimental economics and on online and field experiments. It is structured around three main axes.

Efficiency of, and aversion to, algorithms. We analyze public distrust of algorithmic decisions in recruitment. A first study examines the factors influencing preference for, or aversion to, algorithmic hiring, including transparency and the use of variables such as gender. A second study, in collaboration with a microfinance company, compares the effectiveness of algorithmic and managerial recruitment, using behavioral and psychological measures to predict productivity.

Trust in experts. The public's trust in experts plays a key role in adherence to scientific recommendations, particularly in the areas of climate and public health. However, the majority of experts in the media are men, which can affect the perception of their credibility. An experiment carried out in this project aims to identify whether trust in an advisor varies with gender and expert status.

Motivated reasoning and acceptance of scientific truths. Trust in scientific truths also rests on the ability to reason from available information, which influences how this information is interpreted and causally related. We investigate whether individuals' ability to reason is influenced by the desirability of the conclusion: in collaboration with Jérémy Celse, an experiment tests whether participants reason differently when the conclusion of a piece of reasoning is pleasant or unpleasant.

The overall aim of this project is to improve understanding of the determinants of trust in science and to identify levers for strengthening it, in contexts where scientific decisions have major societal implications.

To understand trust in science and scientific processes, it's not enough to observe what's going on in society. We need controlled experiments that clearly distinguish between cause and effect. Experimental economics is a method that helps us to test these questions in a structured framework, eliminating biases that could distort the results. In particular, through laboratory and online experiments, we can measure participants' beliefs and better understand how they perceive the reliability of scientific information.
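Belief measurement of the kind mentioned above is typically incentivized with a proper scoring rule. As an illustration only (this is our sketch, not the project's actual protocol), a quadratic scoring rule pays more for probability reports closer to the realized outcome, so a risk-neutral participant maximizes expected payoff by reporting their true belief:

```python
def quadratic_score(report: float, outcome: int, a: float = 1.0, b: float = 1.0) -> float:
    """Payoff for reporting probability `report` on a binary event
    that resolves to `outcome` (1 if it occurred, 0 otherwise)."""
    return a - b * (report - outcome) ** 2

def expected_payoff(report: float, belief: float) -> float:
    """Expected payoff of a report, given the participant's true belief."""
    return belief * quadratic_score(report, 1) + (1 - belief) * quadratic_score(report, 0)

# Truthful reporting is optimal: searching a grid of reports for a true
# belief of 0.7, the expected-payoff-maximizing report is the belief itself.
best = max((r / 100 for r in range(101)), key=lambda r: expected_payoff(r, 0.7))
```

This incentive-compatibility property is what lets experimenters treat elicited reports as measures of genuine beliefs rather than cheap talk.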

 

Our project is organized along three lines:

 

Trust in recruitment algorithms

We seek to understand why some people mistrust algorithms used to recruit employees. In a first online experiment, participants play the roles of workers and recruiters, and we analyze whether they prefer an algorithm or a human manager to make hiring decisions. For workers, we test the impact of transparency about the algorithm and the criteria it uses to select candidates. For managers, we also test the impact of transparency, and of the confidence managers have in the quality of their own hiring decisions.

 

We conduct a second experiment in the field, in collaboration with a microfinance company. In this context, we compare the effectiveness of recruitment decisions made by artificial intelligence (AI) using behavioral data on applicants with those made by managers. Candidates are randomly assigned to two groups: one in which the AI makes the final decision, the other in which human recruiters do.
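The random assignment step described above can be sketched as follows. This is a generic illustration (candidate IDs and group labels are invented for the example), not the study's actual code:

```python
import random

def assign_groups(candidate_ids, seed=42):
    """Randomly split candidates into an AI-decision group and a
    human-recruiter group of (near-)equal size."""
    rng = random.Random(seed)   # fixed seed keeps the assignment reproducible and auditable
    shuffled = list(candidate_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"ai_decides": shuffled[:half], "human_decides": shuffled[half:]}

groups = assign_groups([f"cand_{i:03d}" for i in range(100)])
```

Randomizing which recruiter (AI or human) makes the final call is what allows any later difference in hires' performance to be read causally rather than as a selection effect.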

 

Trust in advisers depending on their gender and expert status

Experts play a key role in the transmission of scientific knowledge, but women are far less present in the media than men. We ask whether this difference is linked to lower public confidence in recommendations made by women. To test this, we conduct an experiment in which we strictly control how a recommendation is expressed, which allows us to isolate the effect of gender and expert status on the trust placed in the recommendation.

 

Motivated reasoning

Finally, we explore whether individuals reason differently depending on whether the conclusion of a chain of reasoning is favorable or unfavorable to them. We use a logic problem inspired by the “Dirty Faces” game (Weber, 2001), in which participants must deduce information about themselves. By varying the experimental conditions, we test whether their reasoning ability changes with the conclusion the reasoning leads to.
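The inductive logic of the Dirty Faces game can be made concrete with a small simulation (our sketch, not the experimental software). With common knowledge that at least one face is dirty, a player who sees d dirty faces and observes d rounds of silence can conclude in round d + 1 that her own face is dirty; so with k dirty faces in total, all dirty-faced players announce in round k:

```python
def dirty_faces_rounds(faces):
    """faces: list of booleans, True = dirty. Returns, for each dirty-faced
    player, the round in which she can announce 'my face is dirty', assuming
    perfect reasoners and common knowledge that at least one face is dirty."""
    n, announced, rnd = len(faces), {}, 0
    while len(announced) < sum(faces):
        rnd += 1
        for i in range(n):
            if i in announced or not faces[i]:
                continue
            seen = sum(faces[j] for j in range(n) if j != i)  # dirty faces player i sees
            # If the `seen` dirty players stayed silent through round `seen`,
            # player i's own face must also be dirty.
            if rnd == seen + 1:
                announced[i] = rnd
    return announced
```

For example, with three players who are all dirty, nobody can announce before round 3: each player sees two dirty faces and needs two rounds of silence to rule out being clean. The experiment manipulates whether "dirty" is a desirable or undesirable conclusion for the participant, holding this logical structure fixed.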

 

Thanks to these experiments, our project aims to better understand why certain scientific decisions are accepted or rejected, and how to strengthen public confidence in science and technology.

 

 

The data collected for the first project show that the perception of recruitment algorithms varies according to the criteria they use. When workers have to choose between an algorithm and a manager to decide whether to hire them, their preference for the algorithm decreases if it takes gender into account in addition to past performance. This result reflects a sensitivity to non-discrimination principles, in line with regulations prohibiting profiling based on gender or other group characteristics.

 

On the managers' side, our results confirm a well-documented bias: managers delegate hiring decisions to the algorithm too rarely, overestimating their own ability to select the best candidates. However, when managers receive feedback on the quality of their decisions, they are more inclined to rely on the algorithm to improve the relevance of their hires.

 

In another study, conducted in collaboration with a microfinance company, we test the effectiveness of a recruitment algorithm based on psychological and behavioral measures combined with demographic variables. Despite a biased training sample (composed solely of employees recruited by human resources) and strategic responses from some candidates, a random forest algorithm accurately predicts the performance of new employees. In a field experiment with the same company, we test an algorithmic recruitment system: all candidates answer questions measuring various psychological and behavioral traits, and both our algorithm and the human resources department issue a recruitment recommendation for each candidate. Candidates selected by the algorithm performed better than the others, confirming the robustness of algorithmic predictions despite sampling biases and some candidates' attempts to influence their selection.
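A random-forest-style prediction of this kind can be illustrated with a deliberately minimal, pure-Python sketch: bootstrap-aggregated decision stumps on synthetic candidate data. The feature names and the data-generating rule are invented for the example; the actual study uses a full random forest on real application data:

```python
import random

random.seed(0)

def make_candidate():
    # Invented behavioral measures; performance mostly tracks the first one.
    conscientiousness = random.random()
    risk_tolerance = random.random()
    performed_well = 1 if conscientiousness + 0.3 * random.random() > 0.6 else 0
    return (conscientiousness, risk_tolerance), performed_well

data = [make_candidate() for _ in range(400)]
train, held_out = data[:300], data[300:]

def fit_stump(sample):
    """Pick the (feature, threshold, polarity) rule with fewest training errors."""
    best, best_err = None, len(sample) + 1
    for f in (0, 1):
        for t in (i / 20 for i in range(1, 20)):
            for pol in (0, 1):
                err = sum((int(x[f] > t) ^ pol) != y for x, y in sample)
                if err < best_err:
                    best, best_err = (f, t, pol), err
    return best

def stump_predict(stump, x):
    f, t, pol = stump
    return int(x[f] > t) ^ pol

def fit_forest(sample, n_trees=25):
    """Bagging: each stump is trained on a bootstrap resample of the data."""
    return [fit_stump([random.choice(sample) for _ in sample]) for _ in range(n_trees)]

def forest_predict(forest, x):
    # Majority vote across the ensemble.
    return int(sum(stump_predict(s, x) for s in forest) * 2 >= len(forest))

forest = fit_forest(train)
accuracy = sum(forest_predict(forest, x) == y for x, y in held_out) / len(held_out)
```

With this toy rule the ensemble recovers the productive feature and predicts held-out performance well above chance; the point is the structure (bootstrap resampling plus voting), not the particular numbers.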

 

In a second line of research, we examine the impact of gender and expert status on the trust placed in recommendations and on the choice of an advisor. Our results show that presenting an advisor as an expert reduces the gender gap in trust to the benefit of female advisors, suggesting that expert status can mitigate, or even reverse, certain prejudices. We also observe an asymmetry in the choice of advisors: women clearly prefer female advisors, while men show no clear preference. Finally, men who choose a female advisor are more inclined to follow her recommendations, suggesting that they differ from men who choose a male advisor.

We will continue this research, developing our current work through to publication.

 

In particular, our project with Jérémy Celse on motivated reasoning still needs work, and we hope it will pave the way for further collaborations. Our discussions have led us to explore related topics, such as confirmation bias and the tendency to ignore certain elements in reasoning. An interesting avenue would be to influence participants' beliefs using self-persuasion techniques (Schwardmann et al., 2022), which would enable us to study more precisely how beliefs influence the ability to reason.

 

In the longer term, we would also like to further investigate the use of algorithms and the trust placed in them. Our collaboration with Rustamdjan Hakimov and Dorothea Kübler, supported by ANR funding, is likely to continue. I've also started talking to a researcher specializing in human-computer interaction about a possible future collaboration.

As of 1 May 2022, the project on trust in experts (for which we have already collected data) has been presented in workshops, seminars and conferences (ASFE annual congress 2021, ASFEE conference 2021, ESA world meeting 2021).

The fact that many people do not believe in scientific truths has important, sometimes even dramatic, consequences. Since people's rationality is bounded, trust in experts and in processes set up by experts is necessary. Trust in experts may depend on who the expert is. Furthermore, trust in a process may be affected by whether we like its outcome or not. Our project is organized along four axes.
The first axis focuses on algorithm aversion: the distrust the public may have in a scientific procedure designed to reach an outcome, here the allocation of prospective students to universities.
The second axis investigates trust in experts depending on their gender and the gender stereotypes associated with the field of expertise. Are we more likely to follow a recommendation when it comes from a male expert than from a female expert when the area of expertise is perceived as masculine? Would one also rather receive a recommendation from a male expert than from a female expert? Do the answers to these questions change when the area of expertise is perceived as feminine?
The third axis examines the trust we place in human reasoning depending on whether we agree with its conclusion. Is it harder to perceive flaws in logical reasoning when one agrees with its conclusion than when one does not?
The fourth axis will propose a theoretical model aimed at a deeper understanding of the interplay and dynamics between trust in a scientific process (a matching algorithm) and the outcome of that process (an allocation of students to universities). It may also allow us to evaluate the performance of algorithms along a new dimension, distinguishing those for which a high-trust equilibrium exists from those for which trust is not a sustainable outcome.

Project coordination

Marie-Pierre Dargnies (Dauphine Recherches en Management)

The author of this summary is the project coordinator, who is responsible for the content of this summary. The ANR declines any responsibility as to its contents.

Partnership

DRM Dauphine Recherches en Management

ANR grant: 119,880 euros
Beginning and duration of the scientific project: October 2020 - 48 months
