
Cognitive Psychology

Volume 61, Issue 2, September 2010, Pages 87-105

Seeing is believing: Trustworthiness as a dynamic belief

https://doi.org/10.1016/j.cogpsych.2010.03.001

Abstract

Recent efforts to understand the mechanisms underlying human cooperation have focused on the notion of trust, with research illustrating that both initial impressions and previous interactions impact the amount of trust people place in a partner. Less is known, however, about how these two types of information interact in iterated exchanges. The present study examined how implicit initial trustworthiness information interacts with experienced trustworthiness in a repeated Trust Game. Consistent with our hypotheses, these two factors reliably influence behavior both independently and synergistically, in terms of both how much money players were willing to entrust to their partner and their post-game subjective ratings of trustworthiness. To further understand this interaction, we used Reinforcement Learning models to test several distinct processing hypotheses. These results suggest that trustworthiness is a belief about the probability of reciprocation, based initially on implicit judgments and then dynamically updated based on experiences. This study provides a novel quantitative framework to conceptualize the notion of trustworthiness.

Introduction

The success of human civilizations can be largely attributed to our remarkable ability to cooperate with other agents. Cooperative relationships, in which individuals often endure considerable risk, are built on the foundation of trust – a nebulous construct that is nevertheless intimately tied to both interpersonal (Rempel, Holmes, & Zanna, 1985) and economic prosperity (Zak & Knack, 2001). However, as anyone who has ever purchased a used car can attest, not everyone actually turns out to be trustworthy. Thus, the accurate inference of an individual’s level of trustworthiness is crucial for the development of a successful relationship. How then does one actually assess trustworthiness? Business people often attest to the importance of looking a future partner in the eye and physically shaking their hand before signing a contract. When physical meetings are not an option, people frequently rely on reputation, which in the world of online commerce has taken the form of buyer testimonials about a seller’s prior transactions on sites such as eBay. This is consistent with the notion that the best predictor of an individual’s level of trustworthiness is their behavior in previous interactions with us (Axelrod and Hamilton, 1981, King-Casas et al., 2005). We are more likely to invest trust in someone previously shown to be trustworthy than in someone who has previously betrayed us. Therefore, one useful model for inferring the trustworthiness of another is to make an initial assessment based on available information, and then update this judgment based on subsequent interactions.

Because trust is an amorphous construct, it is often difficult to measure and operationalize. From a psychological perspective, trust can be considered the degree to which an individual believes that a relationship partner will assist in attaining a specific interdependent goal (Simpson, 2007). While this definition can apply to any number of social interactions, consider an example of confiding in a colleague. Alex has been privately deliberating a decision and seeks feedback from Trevor. Alex is interested in Trevor’s perspective, but does not want a third colleague, David, to know. Trust, in this example, is Alex’s belief that Trevor will not divulge this sensitive information to David. Of course, trust is a broad concept that extends to many different aspects of social interaction and may depend on the assessment of a variety of factors, including honesty, competence, competitiveness, and greed. However, in order to study trust experimentally, it is necessary to have a good operationalization of it, even though this may be limiting.

The Trust Game is a task developed by behavioral economists to serve as a proxy for everyday situations involving trust, such as our example above. The Trust Game explicitly measures our psychological operationalization of trust, assessing the degree to which an individual is willing to incur a financial risk with a partner (Berg, Dickhaut, & McCabe, 1995). This simple game involves two players, A and B. Player A is endowed with an initial amount of money, say $10, and can choose to invest any amount of this endowment with B. The amount that Player A invests is multiplied by the experimenter by some factor, usually 3 or 4, and then Player B decides how much of this enlarged endowment, if any, they would like to return to Player A. The partner can choose to repay the investor’s trust by returning more money than was initially invested, or to abuse that trust by keeping all (or most) of the money. In this game, trust is operationally defined as the amount of money that a player invests in their partner, and trustworthiness is defined as the likelihood that the partner will reciprocate trust. Empirical work has shown that most investors are willing to transfer about half of their endowment. In turn, when the investment is multiplied by a factor of 3, partners are usually willing to reciprocate trust so that both players end up with approximately equal payoffs (Berg et al., 1995). This simple game provides a useful behavioral operationalization of trust, and also demonstrates that in general players exhibit both trust and trustworthiness, contrary to the standard predictions of economic game theory (Camerer, 2003). Additionally, this game can serve as a framework for experimental manipulations of social signals. In this study, we investigate two such signals: initial judgments of a partner and previous experience with that partner, and the degree to which each can alter decisions of trust and reciprocity.
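The payoff structure described above can be sketched in a few lines of code. This is a minimal illustration (the function name and the specific dollar amounts are ours, not from the original study):

```python
def trust_game_round(endowment, invested, multiplier, returned):
    """Compute both players' payoffs for one round of the Trust Game."""
    assert 0 <= invested <= endowment, "investment cannot exceed the endowment"
    pot = invested * multiplier                  # experimenter multiplies the investment
    assert 0 <= returned <= pot, "return cannot exceed the multiplied pot"
    payoff_a = endowment - invested + returned   # investor keeps the rest, plus any return
    payoff_b = pot - returned                    # trustee keeps what is not returned
    return payoff_a, payoff_b

# Full trust, fully reciprocated: investing the entire $10 endowment at a
# 3x multiplier and splitting the $30 pot leaves both players with $15.
print(trust_game_round(10, 10, 3, 15))   # -> (15, 15)
# Betrayal: the trustee keeps everything.
print(trust_game_round(10, 10, 3, 0))    # -> (0, 30)
```

The asymmetry is what makes the game a measure of trust: the investor can only gain from investing if the trustee chooses to reciprocate.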

Research has demonstrated that trustworthiness is often rapidly inferred from social signals, and can in turn influence behavior in the Trust Game. Trustworthiness judgments are influenced by brief social interactions (Frank, Gilovich, & Regan, 1993), as well as by information about an individual’s moral character (Delgado, Frank, & Phelps, 2005). Even more subtly, signals of trustworthiness can be detected from simply viewing faces (Winston, Strange, O’Doherty, & Dolan, 2002). Facial expressions can be processed outside of conscious awareness (Morris et al., 1998), and indeed, competence judgments about an individual can be made within 100 ms (Willis & Todorov, 2006) and affective judgments about an individual can be made as quickly as 140 ms (Pizzagalli et al., 2002). Individuals who are attractive or who appear happy are also more likely to be viewed as trustworthy (Scharlemann, Eckel, Kacelnik, & Wilson, 2001). Our group has recently investigated how initial impressions can influence trust, and demonstrated that implicit judgments of facial trustworthiness can predict the amount of financial risk a person is willing to take in a Trust Game (van ‘t Wout & Sanfey, 2008). In that study, normed ratings of player trustworthiness (as assessed by briefly viewing a photograph of each player) were a significant predictor of how much money these players were given in a standard one-shot Trust Game. This set of studies supports the notion that both explicit (e.g., information about a partner’s moral character) and implicit social signals (e.g., facial trustworthiness) can influence initial judgments of trustworthiness, and that these judgments can in turn impact the degree to which people actually place trust in other individuals in a meaningful social interaction.

Social signals can also be inferred from repeated interactions. The best predictor of whether a person will place trust in their partner in a given Trust Game round is whether or not this partner previously reciprocated trust (King-Casas et al., 2005). The process of placing trust when it has previously been reciprocated, but stopping once trust is abused, is often referred to as a tit-for-tat strategy, and has been demonstrated to be the optimal strategy for repeated interactions (Axelrod & Hamilton, 1981). Repeated interactions have also been shown to influence subjective ratings of moral character in a Prisoner’s Dilemma game (Singer, Kiebel, Winston, Dolan, & Frith, 2004) and in a Trust Game (Delgado et al., 2005). These findings suggest that in a repeated interaction, trustworthiness can be learned based on the history of a partner’s behavior.

One model of investor behavior in the context of a Trust Game therefore involves an initial judgment of trustworthiness based on available information, which is then updated based on subsequent interactions with that partner. One question that currently remains unanswered is how this initial assessment interacts with subsequent updating: for example, the degree to which each signal contributes to the final trust decision, and how concordant (e.g., a trustworthy face engaged in reciprocal behavior) and conflicting (e.g., a trustworthy face who does not reciprocate) information is handled. No study has directly investigated this question in the context of a Trust Game, though one experiment has provided preliminary evidence suggesting that initial judgments may influence the way information from repeated interactions is updated (Delgado et al., 2005). In this study, participants played a repeated Trust Game with three fictional characters. Prior to interacting with these purported partners, participants were given a short vignette describing the moral character of each partner. One character was depicted as “good”, one “neutral”, and a third as “bad”. The investigators observed that participants rated the “good” character as more trustworthy at the start of the game, and were in turn more likely to trust them. However, because all partners reciprocated 50% of the time, participants learned to trust the “good” partners less over time, and in fact began to “match” the 50% reinforcement probability (Herrnstein, 1961). At the conclusion of the game, even though participants trusted the “good” partner less than they did at the beginning, they still placed more trust in them than in either of the other partners, and were still investing more than 60% of the time. There are two possible interpretations of this finding from a reinforcement learning framework.
First, as suggested by the authors, the positive moral information may have biased the participants to ignore negative feedback, meaning they were unable to update the value of the partner after they were betrayed. An alternative interpretation is that the positive moral information increased the initial trust evaluation of the partner, but did not influence the way the participant interpreted the feedback. According to this interpretation, if given enough trials, the participant would have eventually learned the 50% reinforcement rate, though it would have taken longer compared to the neutral and negative partners. However, the design employed in this study makes it difficult to assess which hypothesis is more likely.

A useful method to examine the question of how initial judgment and experience interact is to employ mathematical models of behavior. Reinforcement learning (RL) is concerned with understanding how people learn from feedback in repeated interactions with the environment (Sutton & Barto, 1998). Assuming that the decision-maker is attempting to maximize his or her reward on each trial, one strategy is to predict the value of an environmental state, and then update these predictions based on the actual feedback received. One method for updating predicted values is to use the simple Rescorla–Wagner delta rule (Rescorla & Wagner, 1972), which quantifies on each trial the difference between the predicted value V_s(t) and the actual reward r_s, with this difference referred to as the prediction error:

δ = r_s − V_s(t)

The most straightforward way to learn the value of the relevant stimulus s is to update its predicted value in proportion to the current prediction error δ. The degree to which the prediction error influences the new value is scaled by a learning rate α, where 0 < α < 1:

V_s(t+1) = V_s(t) + α·δ

Thus, receiving rewards greater than expected will lead one to increase the value associated with a given stimulus. Conversely, receiving rewards that were less than expected will cause a decrease in that value. Using an RL approach, all stimuli have an initial starting reward value, which is updated via a learning rule. Because of its simplicity, this framework not only provides a very powerful way to understand how people learn from feedback, but also provides a principled way to understand how social signals influence learning in a repeated Trust Game.
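The delta-rule update above can be written directly as code. In this minimal sketch (names and parameter values are illustrative), reciprocation is coded as a reward of 1 and betrayal as 0, so the learned value tracks the partner's reciprocation probability:

```python
def rw_update(value, reward, alpha):
    """One Rescorla-Wagner step: V(t+1) = V(t) + alpha * (r - V(t))."""
    delta = reward - value        # prediction error
    return value + alpha * delta

# Outcomes with a mostly-reciprocating partner; alpha = 0.2.
v = 0.5                           # neutral starting value
for r in [1, 1, 0, 1, 1]:
    v = rw_update(v, r, 0.2)
print(round(v, 3))                # -> 0.708
```

Rewards above the current value push the estimate up and rewards below it push the estimate down, so with enough trials the value converges toward the partner's true reciprocation rate.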

Use of this framework to understand how people learn in a social context encourages very specific hypothesis testing, and has the potential to provide insight into the subtle processes involved in social learning. To date, relatively few studies have attempted to study social learning from an RL perspective (Behrens et al., 2008, King-Casas et al., 2005). However, some recent studies have begun to use modeling in conjunction with behavior to better understand how social decision-making develops. For example, one experiment (Hampton, Bossaerts, & O’Doherty, 2008) used computational modeling to provide insight into the process of mentalizing about another player’s strategy in a game known as the Inspection Game. Additionally, Apesteguia, Huck, and Oechssler (2007) demonstrated that when given the opportunity to view other players’ behavior in a game, people will often imitate the strategy that provides the highest payoff. Greater discrepancies between an individual’s payoff and another player’s payoff result in an increased likelihood of switching to the other strategy.

Finally, a few recent studies have utilized computational approaches to study how social advice can impact learning (Biele et al., 2009, Doll et al., 2009). In these studies, prior to a standard learning task, participants are given information (termed advice or instructions) from either another participant or from the experimenter about the optimal choice. These experiments have found evidence supporting the notion that social information leads to learning biases, namely that accurate information helps participants learn better, while inaccurate information impairs learning. Biele et al. (2009) found support for a model that assigned greater weight to outcomes consistent with the advice than to the same outcomes on unadvised choices. Doll et al. (2009) found that the best fit of the behavioral data was produced by a model that initialized instructed stimuli to a higher than normal starting value, reduced the impact of instruction-inconsistent outcomes, and increased the impact of instruction-consistent outcomes. These studies suggest that explicit information such as advice or moral information can impact not only initial expectations but also how people learn from feedback: outcomes consistent with the prior information are weighted more heavily in the value update, and outcomes inconsistent with it are weighted less heavily. However, no study to date has examined how implicit information impacts learning in an interactive social decision scenario.

The present study adapted the design of van ‘t Wout and Sanfey (2008) to examine how implicit initial trustworthiness information (i.e. facial features) interacts with experienced trustworthiness (i.e. the probability of reciprocation) in a repeated Trust Game. First, we expected to replicate our previous finding that facial trustworthiness influences initial financial risk-taking in a social context (van ‘t Wout & Sanfey, 2008). Second, we expected to replicate other work, which has demonstrated that previous experiences also influence behavior (Axelrod and Hamilton, 1981, King-Casas et al., 2005). Finally, and most importantly, we predicted that these two processes, facial trustworthiness and experienced trustworthiness, would interact such that partners that both look trustworthy and reciprocate frequently will be entrusted with the most money. To increase our construct validity, we employed multiple measurements of trustworthiness, which included behavior in the Trust Game as well as subjective ratings. To further characterize our behavioral findings, we used RL models to test three distinct processing hypotheses – (1) initialization, (2) confirmation bias, and (3) dynamic belief. The Initialization models (GL initialization & trust decay) posit that the implicit trustworthiness judgments influence behavior at the beginning of the game, but are eventually overridden by the player’s actual experiences (i.e. whether or not trust is reciprocated). The Confirmation Bias model proposes that initial implicit trustworthiness judgments influence the way feedback (i.e. non-reciprocated trust) is updated throughout the interactions (Biele et al., 2009, Delgado et al., 2005, Doll et al., 2009). This model assumes that learning is biased in the direction of the initial impressions. 
Finally, the Dynamic Belief model proposes that the facial trustworthiness judgment serves as an initial trustworthiness belief, which is continuously updated based on the player’s experience in the game. These beliefs, in turn, influence learning. This model equally emphasizes the initial judgment and experience and predicts that players will learn to give more money to partners that are trustworthy, and less money to partners that betray trust. By explicitly formalizing the potential mechanisms via these models, this study can increase our understanding of how trust is placed in social economic exchanges.
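To make the contrast between these hypotheses concrete, the following sketch (illustrative only, not the paper's fitted models; all names and parameter values are ours) compares a plain delta-rule learner, whose initial impression only sets the starting value, with a confirmation-biased learner, whose impression also scales the learning rate for impression-consistent versus inconsistent outcomes:

```python
def delta_rule(v, r, alpha):
    """Standard Rescorla-Wagner update: impression only sets v(0)."""
    return v + alpha * (r - v)

def biased_rule(v, r, alpha, bias, positive_face):
    """Confirmation bias: outcomes consistent with the initial impression
    get a larger learning rate; disconfirming outcomes get a smaller one."""
    confirms = (r == 1) == positive_face
    rate = alpha * (1 + bias) if confirms else alpha * (1 - bias)
    return v + rate * (r - v)

# A trustworthy-looking partner (v0 = 0.7) who then betrays repeatedly:
v_init, v_bias = 0.7, 0.7
for r in [0, 0, 0, 0, 0]:
    v_init = delta_rule(v_init, r, 0.2)
    v_bias = biased_rule(v_bias, r, 0.2, 0.5, positive_face=True)

# The plain learner converges toward the true (zero) reciprocation rate,
# while the biased learner discounts the betrayals and stays higher longer.
print(round(v_init, 3), round(v_bias, 3))   # -> 0.229 0.413
```

Under the initialization hypothesis the two learners would behave like `delta_rule` with different starting values; under confirmation bias the update itself is asymmetric, so with enough trials the models make distinguishable predictions about how quickly trust in a good-looking but unreliable partner decays.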


Participants

Sixty-four undergraduates were recruited from the psychology participant pool at the University of Arizona and received course credit for their participation in the experiment. Three participants were excluded after indicating during debriefing that they did not understand the experiment, leaving a total of 61 participants (mean age = 18.67, SD = 1.38; 79% female). All participants gave informed consent, and the study was approved by the local Institutional Review Board.

Trust Game

Participants played a

Behavioral data

Overall, as expected, we found a main effect of reciprocity, where participants gave more money overall to partners who reciprocated 80% of the time (mean = 5.64, SE = 0.18) as compared to partners who only reciprocated 20% of the time (mean = 3.42, SE = 0.20), repeated-measures ANOVA F(1, 60) = 125.70, p < 0.001, η² = 0.68. There was a trend for partner type that approached significance, F(2, 120) = 2.41, p = 0.09, where participants tended to invest more money in partners who looked more trustworthy

Discussion

This study investigated the processes underlying the decision to trust (or not trust) a partner in a consequential interaction. Previous research has reported that both initial impressions (Delgado et al., 2005, van ‘t Wout and Sanfey, 2008) and direct experience (King-Casas et al., 2005, Singer et al., 2004) play important roles in influencing judgments of trustworthiness. This experiment provides the first account of how these variables interact in a social interactive financial investment

Conclusion

Our study integrates theories and methods from psychology, economics, and reinforcement learning to gain a greater understanding of how high-level social cues such as trustworthiness are acquired and utilized in a consequential social decision. The findings suggest that trustworthiness judgments may serve as a risk belief (i.e. probability of reciprocation). This belief is based on initial judgments of perceived trustworthiness and is dynamically updated based on experiences through repeated

Acknowledgments

The authors thank Niko Warner and Carly Furgersen for their help in the collection of the data and Dr. Mike X. Cohen and three anonymous reviewers for their helpful comments.

References (48)

  • R. Adolphs et al. (1998). The human amygdala in social judgment. Nature.
  • H. Akaike (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control.
  • R. Axelrod et al. (1981). The evolution of cooperation. Science.
  • R.H. Baayen et al. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language.
  • G. Barron et al. (2003). Small feedback-based decisions and their limited correspondence to description-based decisions. Journal of Behavioral Decision Making.
  • T.E. Behrens et al. (2008). Associative learning of social value. Nature.
  • G. Biele et al. (2009). Computational models for the combination of advice and individual learning. Cognitive Science.
  • S.J. Blakemore et al. (2001). From the perception of action to the understanding of intention. Nature Reviews Neuroscience.
  • C. Camerer (2003). Behavioral game theory.
  • T.F. Coleman et al. (1996). An interior, trust region approach for nonlinear minimization subject to bounds. SIAM Journal on Optimization.
  • W.A. Cunningham et al. (2004). Separable neural components in the processing of black and white faces. Psychological Science.
  • N.D. Daw et al. (2006). Cortical substrates for exploratory decisions in humans. Nature.
  • M.R. Delgado et al. (2005). Perceptions of moral character modulate the neural systems of reward during the trust game. Nature Neuroscience.
  • M.J. Frank et al. (2007). Genetic triple dissociation reveals multiple roles for dopamine in reinforcement learning. Proceedings of the National Academy of Sciences of the United States of America.