MING: Good afternoon, my friends.
My name is Ming, and I'm delighted today
to introduce my friend the renowned social psychologist
David DeSteno and the author of this book, "The Truth About
Trust."
I first got to know David-- thank you, Dana.
I first got to know David through his highly innovative
work in studying the science of compassion, which
is a topic I'm very passionate about.
David and his lab are renowned for devising very creative methods for studying how emotional states affect behavior.
And they are also known for studying moral behavior
in real time.
Real time, not fake time like the other labs, real time.
David is most interested in figuring out how to foster prosocial behavior all around the world. And he told me that his work can be summarized in three words: vice and virtue.
Vice and virtue, there's one that I prefer over the other.
And with that, my friends, let's please welcome my friend
David DeSteno.
DAVID DESTENO: And thank you for having me.
It's a real honor to be with you here and share
this work with you.
And as you can probably guess by the title today,
I'm going to talk about trust.
I think it probably comes as no surprise to you
that issues and dilemmas of trust pervade our lives.
Trust determines who we want to work with, who we love and who we'll marry, who we trust to learn from, who we'll go to for support.
Now, we all can remember the big stuff,
the times trust really matters.
Is a new business partner going to be trustworthy,
or is he going to skim profits?
Is a spouse being faithful or unfaithful?
Is a child using drugs even when she swears, trust me.
Trust me.
Trust me.
I'm not.
But issues of trust aren't just about those potentially
momentous situations.
Issues of trust pervade our common daily life.
Can I trust my neighbor to remember to really feed my dog while I'm away, or am I going to come home to a hungry dog?
The mechanic-- is he really being honest when he says my car needs a new transmission?
Was the salesperson when I bought this suit really honest
when he told me it makes me look thin?
You can tell me.
I don't know.
I won't comment on that.
But whether it's big or whether it's small,
what all of these issues have in common is a simple dynamic.
They really depend on trust.
And we know that the more we trust individuals, the more we can gain by working together and cooperating.
But in reality, as you probably can guess,
trust is a double-edged sword.
Yes, we can gain more by working together.
That's why we have trust in the first place.
But trusting somebody also makes us vulnerable to that person.
It means that our outcomes are dependent on them being
competent, on them having integrity,
on them working with us.
And so given that trust is so central to human life,
you would hope, you would like to think that we really
understand how it works, that we can make really good decisions
about who we should trust or whether we're
going to be trustworthy ourselves.
But I'm here to tell you we're not really good at that. And until recently, the science underlying it hasn't been very good either.
And so in some ways, that's what led me to write this book.
As a scientist, I really wanted to work on correcting a lot of the misconceptions that are out there, to empower people to make better decisions, but also so that we can work together to nudge ourselves, and society, to become more trustworthy and cooperative overall.
So to do that, you have to start like you
would with anything else.
You have to get rid of the misconceptions
and figure out how trust really works.
And so that's what I want to talk about today in general.
In the book, I talk about lots of issues
that I'm not going to talk about today.
We talk about issues of how trust
affects learning and academic success.
One of the best predictors of a child's academic success
isn't how much they like their teacher.
It's how much they trust their teacher.
And they trust that the teacher is
competent in telling them and giving them information.
We talk about how trust affects our relationships
and especially romantic ones and how
it can function to smooth out the bumps in those relationships in ways that
operate even below our conscious awareness
to keep harmony with those we love.
In the book, I talk about how trust
is affected by power and money.
There's great work out there showing
that people's trustworthiness tracks socioeconomic status.
This is work by my friend Paul Piff.
He's a psychologist at Berkeley, where he shows that higher SES correlates with increased untrustworthiness.
But really it's not about being in the 1%.
It's not a birthright of the 1% that makes you untrustworthy.
It's simply about money and power
relative to those around you.
And so for any of you, if we put you in a position even for 10 minutes where you feel an elevated sense of power, it becomes a lot more difficult in some ways to actually be trustworthy.
And also, how and when can you trust yourself? It's already February. A lot of New Year's resolutions have gone by the wayside, so it's a good thing to know.
But today what I want to talk about is three broader themes.
And the first is what does it mean to be trustworthy
and how can we understand how trust operates within ourselves
and our own trustworthiness?
The second is can we actually detect
whether somebody else is going to be trustworthy?
In some ways, this has been the holy grail
of governmental research and security research.
And we've been pretty bad at it.
But I have some new data I want to share
with you that suggests we can do it.
And then finally, the question that's probably
closest to my heart in the work that I normally do,
which is, how can we increase trustworthiness and thereby increase our own resilience and the resilience of those around us?
So let's start with the first question.
Most of us, when we think about trust,
we think about it as this stable trait.
A person's trustworthy or they're not.
But I want to convince you that that's probably
not the best way to think about it.
That's not how it really works.
Growing up, we're given this idea. It's a typical motif, right? You see it in cartoons all the time.
There's an angel on one shoulder and a devil on the other,
and they whisper into your ears.
And if you grow up listening to the angel,
well then you're going to be a good person.
You're going to be trustworthy.
Everybody is going to love you.
Everything's going to be good.
There's just one problem with that.
And that is if you actually look at the scientific data,
it doesn't really hold up.
What we've learned over the past decade especially
in psychological science is that people's moral behavior
is a lot more variable than any of us would have expected.
And it's a lot more influenced by the situation.
And so if you want to control your own behavior
and predict the behavior of those around you,
you need to realize that it's not a stable trait.
You need to understand how it's affected by the situation.
And so in my model for understanding trustworthiness, it's better to think of it as a scale-- the old-school type with the plates that go up and down, as opposed to a digital one.
In any one moment, your mind, whether you know it or not,
is weighing two types of cost.
It's weighing costs and benefits in the short term
versus costs and benefits in the long term.
And those usually correlate with what's good for me in an expedient fashion right now versus what's good for me to do, even if it costs me in the moment, to build a reputation and to build social bonds in the long term.
And depending upon the situation, which decision you
choose can change from moment to moment.
You can think about it.
If my friend Ming loans me money,
in the moment if I don't pay him back, well, I'm ahead.
I've profited in the short term.
But long term, it's probably a poor decision
because he's not going to give me money again.
I'm going to get a reputation as being a cheater.
But if I can get away with it, my mind, unbeknownst to me and despite the moral codes I endorse, will try to push me to be a bit untrustworthy.
And so I want to suggest to all of you
who think this can't happen to me
and that you are completely honest
and trustworthy and wonderful, it can happen to any of us.
And let me show you an example of how it happens
and also why you probably don't think it's true of you
even though it is.
So the first issue is how do you study trustworthiness?
I can't really walk around with a clipboard and say, Cindy, are you a trustworthy person? Because people will probably do one of two things. Either they know they're not and they'll say, yes I am. Because who wants to say I'm not?
But what happens more frequently is
they think they are, and they predict they will be,
but when push comes to shove, time and again our behavior
isn't what we expect.
And so the way that we have to study trustworthiness is not
by asking people or looking at their past reputations
but by staging events in real time as opposed
to fake time where we can actually see when push comes
to shove, what will people actually do
when real rewards are on the line?
So let me give you an example of how we do this.
So we set up an experiment in our lab to look at this,
and it's rather simple.
We bring people into the lab.
These are normal community members
or even undergraduates from the Boston community
all known to be trustworthy people.
We bring them in and say, look, we've
got two tasks that need to be done.
One is really long and onerous-- it's these terrible logic problems, and circling the letter E, and random digit strings, and all the things that feel like a big waste of your time.
Or you can do a fun photo hunt on the computer.
Here's a coin. I want you to flip the coin, and whichever side you get will determine whether you do the photo hunt or the logic problems.
And whichever one you don't do, the person
sitting in the next room is going to get.
And we're going to trust you to do this the right way.
Is that OK?
They say sure.
And then we let them go.
What do you think happens?
A lot of people just assign themselves to the good task.
Any guesses for how many?
80%?
Close, 90%.
We've done this many, many times.
So it's not a fluke finding.
We've done it in our lab.
Other people have copied the methodology.
90% of people-- well they do one other thing.
Some of them don't flip the coin and just
say, oh, I got the good task when they come out.
Or some of them, because we have them on hidden video,
flip the coin repeatedly until they get the answer they want,
which is the same as not flipping at all.
But they feel better about themselves.
And these are people we asked beforehand: if you don't flip the coin, is that untrustworthy? They said, oh, it would be terribly untrustworthy.
But they do it.
And if you ask them when they come out,
we have them rate on a computer how trustworthily they
just acted.
So here higher numbers mean higher trust
on a one to seven scale.
So when they're judging themselves doing this,
they're above the midpoint.
So they say, yeah, it was OK.
I was trustworthy.
If you take those same people and you now
have them watch somebody else do this,
they condemn the person for it.
That person was not trustworthy.
When I did it, it was OK.
When that person did it, they're definitely not trustworthy.
Now, the interesting thing about this
is these were people who are normal people.
And so when we see people like Lance Armstrong or Bernie Madoff, you think, oh, there's something wrong. Those people are morally corrupt and untrustworthy.
No, well, yes what they did was untrustworthy.
But the same process-- on a smaller scale, of course,
and we can only study on a smaller scale in the lab--
happens with us.
It happens with any of you.
Now, the question is, well, why don't we realize this?
Why don't we learn to stop trusting ourselves?
Well, the reason why is our mind whitewashes our own behavior.
So if you ask these subjects, why did you not flip the coin? They'll say things. They'll create stories like, well, yeah, I should have, but today I was late for an appointment. And if I'm not there, somebody is depending on me. And so it was OK.
So they'll create all kinds of justifications for why it was OK for them in that situation and how it doesn't reflect on them being an untrustworthy person, even though they can be untrustworthy.
Now, in some ways that's a good thing. It has to be adaptive, because if any of us felt like we couldn't trust ourselves, the alternative is much worse. Because it means we're not going to save money for the future, because we know future us is going to go blow it at a casino.
We're not going to diet and take care of our health
because we assume three days from now
I'm going to gorge on ice cream or chocolate cake.
We're stuck with ourselves.
If somebody else is untrustworthy,
we can stop interacting with them.
We can't stop interacting with ourself,
and so we need to trust ourselves even
when we make mistakes.
So that's OK, but what I'm here to do is to help you try and learn this so that you can decrease the probability that you're actually going to make those mistakes. But what I haven't shown you yet is whether there's any evidence that people actually recognize that what they did was wrong.
So let me give you an example.
So in psychology we have this method called cognitive load. And it's a way to kind of tie up people's mental resources so they can't engage in rationalization.
And the way it works is we give them
random digit strings of numbers, say like seven digits.
And you have to remember these digits.
So what we're doing is you'll get a string of numbers,
and you'll have to say 7-6-5-4-1-0, 7-6-5-4-1-0,
and then you'll have to answer a question,
how trustworthily did you just act?
And you have to remember these numbers because I'm
going to have you enter them in a minute,
and you've got to get them right.
And so what this does is it ties up your mind.
It prevents your mind from engaging in rationalization.
So when we did this experiment again-- and again 90% of people did cheat even though they said they wouldn't-- what you find in the white bars on the bottom, those who were under cognitive load, is that there's no difference between how you judge yourself and how you judge others. And those ratings are significantly lower. You see yourself as less trustworthy than when you have the time to rationalize.
So the second that you're committing the transgression, your mind knows it. You feel it in your gut.
You feel that pang of guilt.
But what happens is you don't want
to think of yourself as untrustworthy.
And so your mind engages in this rationalization.
The good you tamps down the guilt so that it can preserve a view of you: well, I had a reason, and it's OK. And I am trustworthy.
So the point is to remember that all of us-- even if we think of ourselves as trustworthy, and I'm sure most of you in general are trustworthy-- have minds that are making these calculations.
Here, when we gave them anonymity-- or at least they thought they were anonymous; they didn't know we had them on hidden video-- their mind's impulses for short-term gain created a story. It pushed them to say, well, I can get away with it now. Even if not consciously, it just pushes them to make this decision in an impulsive way. And then they justify it, because they believe the long-term consequences are not there, because they believe they're anonymous.
Let's turn to the second question.
The second question is, can I trust you?
How do you determine the answer to that question about somebody?
Now, as we all know, human society
flourishes when we cooperate with each other
and when we trust each other.
The problem is if one person doesn't
uphold his or her end of the bargain,
that person can gain at the other's expense.
And so what you have is a very dynamic yet delicate balance that we have to navigate every day to optimize our outcomes.
If we make the wrong decision over and over again,
we're going to have a problem.
So here what we try to do is we try to use people's reputation.
Now, as I just told you, reputation
isn't a great predictor, and so often we're wrong.
But the problem that confronts us other times
is sometimes we have to decide if we're
going to trust somebody new who we don't know anything about.
And we don't know their reputation,
yet we're negotiating with them.
What do you do there?
You have the opportunity for establishing
a long-term relationship or you have the opportunity
for being screwed over in a way that you couldn't predict.
And if you're wrong, well, time and time again that's
going to cause you a lot of problems.
It's a very non-optimal outcome to be wrong.
So given all that, it would be nice
if we could actually detect if somebody else was
going to be trustworthy.
Now, as I said at the beginning of this talk,
people have been looking for the Holy Grail of what
signifies deception or untrustworthiness
for a long time.
Is it a true smile?
Does that mean I can trust you?
Is it shifty eyes?
Does that mean I can't trust you?
And the TSA spent $40 million on a program to look for these single microexpressions, a program that in GAO testimony before Congress was shown to be utterly useless.
And the problem is I think the reason why we haven't found how
we can detect trustworthiness is we've been going about it
in really the wrong way.
There is not going to be one marker.
There is not going to be one golden cue.
Cues to trustworthiness are going to be subtle and dynamic.
Why is that the case?
Well, it's very adaptive, if I'm standing here and you're looking at me and all of a sudden I see a major threat behind you, to show fear. Because that lets you know very quickly, even without turning around, that there's something dangerous there.
But trust, or untrustworthiness, isn't something that you want to communicate very easily.
Why?
I mean, imagine if you're a trustworthy person and you had a clear tell.
It's like walking around with a big T on your forehead
that says, I'm trustworthy.
What would happen?
Everybody would want to cooperate with you-- or rather, more of them would want to take advantage of you, because they'd know they could.
Or if you were untrustworthy and you walked around with a big U
on your forehead, well, everybody would ignore you.
And nobody would cooperate with you.
And your outcomes would be poor.
And so trust signals have to be played close to the vest.
We have to interact with each other.
I can get a feeling for you.
You can get a feeling for me.
And then we can decide and reveal our cards very slowly.
So they're going to be subtle and dynamic.
They're also going to be context dependent.
What signals trust in any one specific culture may vary.
What signals trust in any one situation may vary.
Think about it.
There's different kinds of trust.
There's integrity.
So can I trust that you're going to do
the best job you can to help me?
Are you meaning well toward me?
That's different than trusting your competence.
If you don't have the competence to help me,
all the intention in the world is going to be useless.
And so the cues I look for for competence versus integrity
may be very different, and we have to think about that.
But the main reason why I think we haven't
found the cues to trust is that they're going to occur in sets.
I mean, think about it, right?
If touching my face means I'm going to be untrustworthy, when I do this, am I doing it because I've got an itch or because I'm going to cheat you? You don't know from one thing alone. You can't tell.
The only way you can begin to read cues to trustworthiness
is to look for them occurring in sets so you can disambiguate
the meaning of any single one.
And that's what the field typically doesn't do.
And so I'm going to quickly tell you about two experiments
that we did to show how trust can be read.
The first one is kind of exploratory.
We threw out everything that we had known before,
and we simply started to try and identify what cues actually
predict real monetary trustworthy behavior
and to demonstrate that they do this in an accurate way.
And the second part was designed to actually confirm
in a very tightly controlled, highly precise way
that these are the cues that matter.
And I'll show you what I mean by that in a second.
We have an exploratory phase and a confirmatory phase.
So how did we do this? I will start with the exploratory phase.
What are candidates for signals related to trust?
Well, we brought 86 people into the lab
and we put them into dyads, which are groups of two.
The only requirement is you couldn't know the person
with whom you were now going to interact.
We gave them five minutes to have a get-to-know-you conversation. We gave them a list of topics to start, but they could talk about anything they wanted. And we told them they were going to play a game for real money, a game that pits self-interest against communal interest, against being trustworthy. And I'll show you how the game works in a second.
So we brought them in. Half the subjects simply sat across from each other at a table. And we had three cameras on them that were time-locked, so we could record every single gesture, every single cue they made.
Now, we also had another group of subjects who had their get-to-know-you conversation in separate rooms over Google Chat, or Gchat-- any type of internet chat.
And the logic for this is the same amount of information
is being exchanged in the conversation,
but in one condition you have access
to the person's nonverbal cues.
In the other you don't.
And then we brought them into separate rooms
if they weren't in separate rooms already.
And we said, you're going to play this game.
We gave each of them four tokens. And the tokens are worth $1 to the person who holds them but $2 to their partner if handed over. And so this game is called the give-some game.
And it's a nice analog for self-interest
versus communal interest.
Because if you want to be selfish, you can try and get the other person to give you all of his tokens while you give nothing. And that means you'll have $12 and he'll have nothing.
But the most trustworthy thing to do if you really
implicitly trust each other and want to benefit each other
is to exchange all you have at the same time,
because then you all started with four
and now you have eight.
And so we had people making real decisions
and we paid them accordingly.
And we also had them tell us what
they thought their partner was going to do.
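To make the payoff structure of the give-some game concrete, here is a minimal sketch in Python of the arithmetic just described: four tokens each, worth $1 if kept and $2 when handed to the partner. The names are illustrative only, not code from the study.

```python
TOKENS = 4          # tokens each player starts with
VALUE_KEPT = 1      # dollars per token you keep
VALUE_RECEIVED = 2  # dollars per token your partner hands over

def payoff(tokens_i_give, tokens_partner_gives):
    """Dollars I end up with, given how many tokens each side hands over."""
    kept = TOKENS - tokens_i_give
    return kept * VALUE_KEPT + tokens_partner_gives * VALUE_RECEIVED

print(payoff(0, 4))  # I keep everything, partner gives all: $12 for me, $0 for them
print(payoff(4, 4))  # both exchange everything: $8 each, up from the $4 each started with
print(payoff(4, 0))  # I give everything, partner keeps everything: $0 for me
```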
Now, the nice thing about it was whether or not
you talked to your partner over an internet chat
or face to face, the amount of trustworthy behavior
didn't change, which is nice.
I think it's because people are now becoming very used
to communicating over internet mediated platforms.
And so it's not like being face to face
made people more trustworthy.
There were people who were cheating and being cooperative at equal levels in both cases.
But here the axis is error, the amount that you were off. And so lower bars mean more accuracy in terms of absolute value. If you were in the presence of the other person, your guess for how much that person was going to be trustworthy or cheat you, in absolute dollars, was significantly more accurate.
So what this tells us is that people are picking up on a cue.
There is some information there that your mind is gleaning
from body language, whether you know it or not.
And so what we did next was we ran
models of all these possible combinations of cues
to see what would matter.
And the model that predicted untrustworthiness
the best consisted of four cues, touching your hands,
touching your face, crossing your arms, and leaning away.
If you think about it, what does this really mean?
Well, we know from the nonverbal literature
that fidgeting with your hands and touching
your face repeatedly is usually a marker of anxiety
and not feeling comfortable.
Crossing your arms and leaning away
is a marker of I don't want to affiliate with you.
Put them together, what does it mean?
It means, I don't really want to be with you.
I don't like you.
And I'm nervous because I'm going
to screw you over in a minute.
And so none of these cues predict it on their own,
but together they did.
So the more often you saw a partner show this set of cues,
the smaller number of tokens you expected that person
to share with you, which meant the more selfish you thought
that person was going to be.
And the more often you yourself, or any subject, emitted this set of four cues, the less trustworthy you actually were-- the more tokens you kept and tried to get from the other person without sharing.
And so in some sense, what we're showing is ground truth here.
These cues are predicting actual financial cheating
versus cooperative behavior.
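For readers who want to see the shape of that exploratory step, here is a minimal sketch, with hypothetical names, of regressing the tokens each person actually shared on how often they emitted each of the four cues. It illustrates the general approach only; it is not the lab's actual analysis code.

```python
import numpy as np
import statsmodels.api as sm

def fit_cue_model(cue_counts, tokens_shared):
    """cue_counts: (n_participants, 4) array of counts for hand-touching,
    face-touching, arm-crossing, and leaning away, coded from the videos.
    tokens_shared: (n_participants,) tokens actually given in the game.
    Negative coefficients mean more of a cue went with fewer tokens shared,
    i.e. less trustworthy behavior."""
    X = sm.add_constant(np.asarray(cue_counts))  # intercept plus the four cue predictors
    return sm.OLS(np.asarray(tokens_shared), X).fit()

# Usage, once the coded data are in hand:
# model = fit_cue_model(cue_counts, tokens_shared)
# print(model.summary())
```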
Now, the most interesting part about it
was if you asked our subjects, so what did you use?
They had no idea.
Or they would suggest it was other cues that
didn't predict anything.
Yeah, I showed you they were more accurate when
they saw the person.
And they adjusted their numbers and guesses accordingly.
So what this means is your mind is using these cues
even though you're not aware of what they are.
It's still building intuitions with them.
But how do you know those are the right cues?
People are doing lots of things.
How do I know that when I'm crossing my arms, it's not that my left pupil is dilating and that's the magic cue?
Well, being scientists, we needed to have precise control.
No matter what actor I had, I couldn't
get them to have exceedingly precise control
of every expression they're emitting.
So what do you do?
You need a robot.
So this robot, her name is Nexi.
She was designed and created by my collaborator Cynthia
Breazeal at MIT's Media Lab.
And so we simply used Nexi.
And I'll show you a video of it in a moment.
But the experiment was simple.
We repeated the same thing we did before,
except we replaced one of the people with the robot.
So now you're talking to this robot.
And the robot will emit the cues and express the cues
that we think signify untrustworthiness or not
in a very, very precisely controllable way.
The robot was controlled by two people, one
who was the voice of the robot, the other who would control
whether or not she made the cues.
Because you don't want the same person doing it,
because they might give cues in their vocal tone.
And because I know a lot of you are engineers,
I'll give you a quick idea of how this worked.
The one person, who you can see here sitting in front of the computer, has a webcam on her face. As she moves her head, that's picked up by the webcam, and the robot's head moves in real time.
She's wearing a mic here, so as she speaks,
it picks up the phonemes and the robot's mouth
moves in real time.
The next person controls whether the robot
gives these untrustworthy cues or other similar cues.
And the third person is our robot mischief person, who basically controls and monitors the system. Because every once in a while, it would go haywire and the robot would look like it's doing something crazy, like it's possessed or something. It would break down. But normally it worked wonderfully and just fine.
And so we had 65 participants from the community. For 31 of them, Nexi displayed the cues that we had seen signal untrustworthiness. For the others, she didn't.
Here's a picture of it crossing its arms.
Here's a picture of it touching its face.
I'll show you a video in a second.
So the first part is people have to get used to the fact
that they're talking to a robot.
So we had this kind of part where they just
acclimated to it.
And here's what that looks like.
[VIDEO PLAYBACK]
NEXI: So my name's Nexi.
What's your name?
KIM: My name's Kim.
NEXI: Kim?
It's very nice to meet you.
KIM: You too.
NEXI: To get started today, why don't I
tell you a little bit about myself?
KIM: OK.
NEXI: I was born and built at the MIT Media
Lab two years ago.
So I guess in human years, I'm pretty young.
But in robot years, that's more like being 20.
KIM: (NERVOUS LAUGH).
DAVID DESTENO: So you can see she's a little uncomfortable.
In fact, we had to put that black barrier on the bottom
because people were afraid it was
going to go Terminator on them and kill them.
So we needed that little barrier.
But they quickly acclimated to this, as you'll see.
[VIDEO PLAYBACK]
KIM: That's basically all I do for fun though.
I don't have a lot of time.
NEXI: Did you grow up in Upstate New York?
KIM: Yeah, I did until I was 18, when I moved out here.
NEXI: It seems like that must have
been a big transition for you.
KIM: It was.
It was a really big transition.
But I kind of decided that it wasn't the life that I wanted.
DAVID DESTENO: So they would self-disclose. We heard about pets dying and all these things. One person kept asking the robot if it believed in God. That person was hard. But for the most part, people behaved.
And just so you can tell what it looks like face on,
I'll just show you a 10-second clip.
[VIDEO PLAYBACK]
NEXI: We all share a big, open room.
There are lots of cords and gadgets.
So it's probably not like your house.
But it's home for me.
Why don't you tell me about where you're from?
MAN: Well, I was born in Lawrence, Massachusetts,
and I had a residency in Somerville right now.
I've been doing residential the past four months.
DAVID DESTENO: And so we then had
them play this game with the robot.
We told them, look, the robot's got an artificial intelligence algorithm by which it's going to decide how much money it wants to give you and how much money it thinks you're going to give it, based on how the interaction went. It didn't, but that's what we told them.
And then we asked them questions about how much they
trusted the robot, et cetera.
So what happened?
So to make a long story short, what happened is this-- for those of you who are mathematically inclined, these are standardized regression coefficients.
When Nexi made the cues that signaled untrustworthiness
in the human to human interactions,
people reported trusting it less.
Now, the important thing is they didn't
report liking it less, because I was worried,
oh, they just might think it's doing something weird.
No, they liked it equally, but they trusted it less.
Now, that's important, because to me that makes it real.
Because we all have friends that we
like who we wouldn't trust with our money.
And the less they trusted it, the fewer tokens they predicted Nexi would give them-- basically meaning they thought Nexi was going to be selfish and cheat them-- and the less money in that game they actually gave it.
And so what this tells us is that we
know these are the cues because we manipulated them
with exact precision here while nothing else was happening
or things were happening that we could control.
And so cues to trustworthiness can be assessed-- imperfectly, but better than chance. And so the TSA needs to start looking for cues in sets, in a context-dependent way.
But in some ways, the more interesting part of this
is that what it suggests is that technology is now good enough
that the mind will now use these cues to ascribe
moral intent to robots, or to avatars, or to virtual agents.
So while you may not get it from R2-D2, you will probably get it from Wall-E. See, I already hear the awws.
Wall-E is not human in the least,
but he has enough human characteristics in the eyes
and in the hands that he can move them
in a way that pings our mind's mental machinery
to make us feel trust, or warmth,
or compassion toward him.
So what does this mean?
It's a whole Pandora's box, because in some ways it's good. For people like Cynthia who want to design these robots-- she's working on smaller ones so that they can actually accompany kids for medical treatments where parents can't go; think radiation treatments for kids with cancer-- they can go with the children, and they'll seem more trustworthy, more comforting.
But like any other science, it's not good or bad.
It depends on the uses of the people who want them.
We all know trust sells, so if I'm
a marketer, what does this mean?
It means that I have the perfect trustworthy or untrustworthy
person that I can show you.
Because in a human, stuff leaks, no matter how much we're going to try and control it-- which is why we could pick up on untrustworthiness.
There is no leaking here.
We can control everything.
And so as we're conversing more and more with automated agents
and avatars, our trust is going to be manipulated in ways
that we could never have known before
or that our mind is not ready to defend against.
OK, finally, the last part of the talk,
how do we go about enhancing trustworthiness and enhancing
the compassion and resilience of each other?
To let you know just how powerful this can be, and how quickly trust and compassion can change, let me give you one of my favorite examples.
Some of you may know this story.
It's called the Christmas Eve truce of World War I.
So it was 1914, and the British were
fighting the Germans outside of Ypres, Belgium.
And it had been a long and a bloody battle.
And they were each in their trenches separated
by the no-man's land in between.
And on Christmas Eve, as the Brits
looked across the no-man's land, they
started to see lights appear.
And then they started to hear songs.
And at first, they didn't know what
they were because they were in German,
and they didn't speak German.
But then they soon recognized the melodies.
And what they were were Christmas carols.
And what happened next was amazing.
The men came out of their trenches,
and they started celebrating together.
They started exchanging trinkets.
They started talking about their families, showing pictures,
celebrating.
Now, these were men who hours ago
were trying to kill each other.
And no one would ever have trusted the other side before-- if I walked out, I was an open shot; I couldn't trust that you weren't going to shoot me. They had always shot at each other before, but here they were celebrating with each other in a very communal way. By their own words, it was very amazing: "Here we were laughing and chatting to men who only a few hours before we were trying to kill."
Now, if that's not a big change in how trustworthy somebody can
be, I don't know what is.
So the question is, how do we display such trust
and compassion in one moment and such cruelty the next?
Because if we can understand that,
then we can do something about it.
But to answer that question, we first have to address a different one: how do we identify who is worthy of help?
The world is full of more people than we could possibly help.
Not that we don't want to help them,
but it could be overwhelming.
And there's this phenomenon that we know of in psychology called
compassion fatigue, which is when you're confronted
with people over and over and over again who need help,
you begin to dial it down.
I have this experience, which I'm not proud of, when I go with my daughter to New York. We're walking by a homeless person, and she says, daddy, help this person. And then I realize that in that moment, I'm completely ignoring this person, because it's a common thing that I face all the time. And if I stopped to try and help every person, it would be overwhelming.
And so we have to understand how our mind goes about deciding whose pain is worth feeling, who it's worth helping, and who it's worth being trustworthy toward.
And once we understand that, then we can figure out,
OK, how do we increase the number of people
to whom we should feel that?
Well, one way that I think our mind does it is it
uses a simple metric.
And that metric is similarity.
So it comes back to-- this is Robert Trivers, who discovered reciprocal altruism, which is basically the answer to why we help people, in the biological sense: I scratch your back today, you'll scratch mine tomorrow.
When there's a lot of people who need my help, going back
to that equation of short-term versus long-term gain,
who should I help?
Who is it worth it for me to help?
What your mind, shaped by evolution, does is decide that the person who is more similar to me is the person it's worth helping, because that's more likely the person who's going to pay me back and be around later. At least initially, that's how it works.
And so what we wanted to do was to see
how deeply embedded this bias is.
If I said to you, an American soldier is on the battlefield and he comes across another American soldier and a member of the Taliban, and both of them are suffering the same wounds, who is he or she going to feel more compassion for?
And if I said the American soldier,
you might not find that surprising.
But what I want to argue is that it's not
dependent on longstanding conflict.
It's this unconscious computation
that your mind makes.
And so we tried to strip that down to as basic a level
as we could.
And we did that by using something
called motor synchrony, which is people basically moving
together in time.
You see it in the military.
You see it in conga lines.
You see it in lots of places.
You see it in lots of rituals.
And the idea is that if two people are moving together,
that's a marker that for here and now, their outcomes,
their purposes, their goals are joined.
And so we wanted to see if we could actually
show this effect at that level.
So we brought people into a lab, and they thought it was a music perception study. They sat across from each other. And there were sensors on the table, and they had earphones on.
They didn't talk.
All they had to do was tap the sensor as they heard the tones in their earphones.
And so it was constructed so that the two people would
either be tapping in time or completely
randomly and out of time.
That was it.
Then what happens is they see the person they were tapping with get cheated. This part is staged, but they don't know it. They believe it's real.
They see this person get cheated in a way that
makes that person have to do a lot of extra work
that they shouldn't have had to do.
And then what happens is we give them
a chance to decide if they want to go and help that person
and relieve that person's burden.
And that's what we look at.
So what happens?
We asked the people, how similar were you
to that person in the experiment?
The simple act of tapping your hands-- they didn't talk.
They didn't do anything-- made them
feel that they were more similar to the other person.
Now, if you ask them why, they'll create a story.
They'll say, oh, I think we were in the same class.
Or I think I've met this person somewhere
or we share the same goals.
They don't know.
They never talked to this person before.
None of that was true.
But because they had this intuition
that they felt more similar, they
had to create a story for it.
How much compassion did you feel for this person
when they got cheated and got stuck doing this onerous
work that they weren't supposed to do?
Remember, in both cases, the amount of suffering
is exactly the same.
Yet they feel more compassion for this person
if they were just tapping in time with them.
How many wanted to go help this person?
This I found truly amazing.
6 out of 34 people would say, oh,
I'll go help that person who was harmed
and cheated versus 17 of 35.
We had a threefold difference.
When you tapped your hands in time with this person,
50% of them said, I want to go help
this person who was wronged.
That's a huge effect if it's scalable.
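As a quick check on those numbers, here is a short Python snippet that computes the two helping rates reported above (6 of 34 without synchronous tapping, 17 of 35 with it) and runs a standard chi-square test of proportions. The counts come from the talk; the choice of test is my own.

```python
from scipy.stats import chi2_contingency

helped = [6, 17]                    # out-of-sync vs. in-sync tapping
totals = [34, 35]
did_not_help = [t - h for t, h in zip(totals, helped)]

for h, n in zip(helped, totals):
    print(f"{h}/{n} helped = {h / n:.0%}")   # roughly 18% vs. 49%, about threefold

chi2, p, dof, expected = chi2_contingency([helped, did_not_help])
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```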
How much time did they spend helping?
These are seconds.
So if you tapped your hands with this person,
you spent a lot more time knowing that everything you did
would relieve that person's burden.
And if you look at it-- again, these are regression coefficients-- if you tapped your hand in time with this person, yes, you felt more similar to them. And yes, you liked them more. But what actually predicted the compassion you felt? Not how much you liked them but how similar you felt to them. If you tapped with them, you felt more similar.
That predicted how much compassion
you felt toward them even though the level of suffering
was the same objectively.
And the amount of compassion you felt for them directly
predicted how much time, how much effort you
put into relieving their pain.
Now, what this suggests is that compassion and trustworthiness
are flexible.
Because if you're going to be trustworthy to me,
that means you're going to sacrifice
your own immediate outcomes to benefit me like these people
did here.
Can I trust you to help me? Can I trust you not to shoot me, back with the Brits and the Germans?
Where I live, what this means is trying
to solve some of the more contentious things
we have in Boston, which is Yankees versus Red Sox.
But what that means, basically, is not thinking about your new neighbor as the guy who roots for the dreaded Yankees.
Think about him as the guy who likes Starbucks
as much as you do.
If you can actually retrain your mind to find similarities
that you have with people, it will
increase your trustworthiness toward them and the compassion
that you feel toward them.
When you think about social media,
there are tremendous ways to do this.
We can use the computational power of social media
in ways to connect people that have never
been connected before.
Think about things like profiles on Facebook or other things.
We have vast knowledge of what people like and don't like,
what they've done or haven't done.
Perhaps what you can do is find what people in conflict
have in common very rapidly in the background
and surface that information to them.
And if you do, then it should function
in just the same way as tapping your hands.
There's nothing magic about tapping your hands.
We've done it with wearing the same wristband colors,
et cetera.
Anything that you can do to highlight similarity with someone will make your goals seem more joined, which will increase the compassion you feel toward them if they're suffering, which will increase how trustworthy you are toward them, even in ways that don't involve compassion.
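Here is a minimal sketch of what surfacing similarity could look like in code: given two people's profile interests, find the overlap and a simple similarity score. The data structures and the Jaccard-style score are my own illustration, not a reference to any existing product or API.

```python
def shared_interests(profile_a, profile_b):
    """Return the overlapping interests and a simple 0-1 similarity score."""
    a, b = set(profile_a), set(profile_b)
    common = a & b
    union = a | b
    score = len(common) / len(union) if union else 0.0
    return common, score

# Two people "in conflict" who nonetheless overlap on a few things.
fan_a = {"coffee", "hiking", "jazz", "Red Sox"}
fan_b = {"coffee", "hiking", "photography", "Yankees"}
common, score = shared_interests(fan_a, fan_b)
print(f"You both like: {', '.join(sorted(common))} (similarity {score:.2f})")
```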
So at Google, I'm really interested and open to talking with any of you who have ideas about using the computational power that you all have to nudge trustworthiness and compassion in the world.
But that's all kind of a top-down way. This is a way where we have to remind ourselves, OK, think about this person as similar to me. It would be nice if we had a way to make it automatic, a way that works from the bottom up, so that we don't have to stop and remind ourselves.
And one way that we can do this--
and I know it's an idea close to Ming's heart,
and I'm really honored to be here
to be able to talk with him about it-- is mindfulness.
If you read the paper or know anything
about mindfulness, what you'll know
is that it is enjoying a renaissance.
And we know it does all kinds of wonderful things.
And probably many of you have more experience
with it here thanks to Ming's course.
It will increase your creativity.
It will increase your productivity.
It's good for your health.
It'll lower your blood pressure.
It'll even increase your scores on standardized tests.
These are all good things.
But if you think about it, it's not
what it was originally designed for.
If you look at what Buddha said, or many of the other ancient meditation teachers-- well, this is a quote by Buddha: "I teach one thing and one only. That is suffering and the end of suffering."
There weren't LSATs and GMATs back then.
So all these other things that meditation does are wonderful,
and they're great.
But one of the main purposes was to foster compassion and end
suffering and to increase our being good to each other
and being trustworthy to each other.
And so what we decided to do was actually
put that idea to a test.
And so we brought people into the lab.
These were people who had never meditated before.
They were members of the Boston community.
And they were all equally interested in meditation and in doing a class for eight weeks.
We assigned half of them to actually take
a mindfulness class led by a Buddhist lama.
And they also would go home during the week
with MP3s created by the lama that they would practice.
The other half were put on a wait list.
So this way we had groups that were equally interested in meditation, because you might imagine that if we just
recruited people who wanted to meditate,
they might have been different types of people
in the first place.
So both groups were equally interested, but only half
of them actually got the course.
The other half got it after we did the measure.
After eight weeks, we brought them back to the lab.
Now, they thought they were coming
to have their memory, and their executive control,
and all these cognitive measures tested, which we did.
But before we did those, what we really were interested in
is what was going to happen in the waiting room.
And so in our waiting room, we had three chairs.
Two were filled by actors and the third was for the subject.
And so when the subject arrived, what did the subject do?
Well, all of them except one sat down.
We couldn't get that other guy to sit down no matter what.
But most of them sat down.
And then a third actor would enter the room.
This person was on crutches and had one of those foot boots you wear when your ankle is broken. And as she walked down the hall entering the room, she would kind of wince in pain and look noticeably uncomfortable.
And she'd enter the room, and there weren't any chairs.
And so she'd lean against the wall.
And the question was, what would the person do?
The actors were told to busy themselves in their iPhone
and to not pay attention.
Now, in psychology we call this the bystander effect, and it really limits helping.
If you're in a situation where you
see somebody in pain and other people aren't helping,
that tends to decrease anybody's odds of helping,
because you say, oh, it's not a big deal
or maybe I shouldn't help.
And so this situation is one that makes the odds of being counted on-- of being trustworthy enough to come and help-- as low as they possibly can be.
So what happens?
Among the people who were in the control group, only a very small percentage of them helped, like 16%.
Among those who meditated, 50% of them helped.
That's a threefold increase.
And that's a threefold increase in a situation that is designed to work the most against your willingness to help.
Now, if that can happen after eight weeks
and if that is scalable, that is a huge, huge effect
that you can count on other people.
You can trust them that they're going to help you.
Now, why does it work that way?
It works that way because one part of mindfulness
is this idea of equanimity.
And that means realizing that I am similar to you,
and you are similar to me.
Friends can become enemies, and enemies can become friends.
And what that does is it trains the mind
to see us all as valuable and interlinked.
And it breaks down the categories
that we put on each other of we're different in religion.
We're different in sports teams you like, et cetera.
And I think that's why it works.
And then it becomes automatic.
It does the same thing that my little tapping example
was doing.
And so when it comes down to it, really what I want to say
is that in the end, it's trust or it's dust.
And what I mean by that is without trust,
our ability to be resilient as a society is exceedingly low.
And so how can we build it up?
Anything we can do to nudge it up
is important to being resilient.
In the fall of 2012-- I don't know how many of you out here remember, but on the East Coast, superstorm Sandy hit New York.
And it was a devastating storm.
And there are neighborhoods that still aren't recovered.
But the AP did a great study.
Controlling for the amount of damage that occurred,
they looked at what was the single most important predictor
of a neighborhood's resilience.
The single most important predictor
was how much neighbors trusted each other.
How much they knew that they could count on the other person, that that person was going to have compassion for them, that they were going to work together.
The neighborhoods that were higher in trust
were the neighborhoods that got up and running
in terms of commerce, and support,
and social services faster than anything.
And that's why I say in the end, it really is trust or dust.
If we don't trust, we're harming everybody.
But of course there were people in neighborhoods who price
gouged and who did things they shouldn't.
And so really my message in the book is, yes, trusting is good.
We should all trust.
But trusting wisely is better.
And so it's my hope that for any of you who read this or come into contact with this work, it will empower you to think about the way trust actually works and the forces that impinge on it, to make better decisions about who you can trust but also about how to foster trustworthiness in yourself.
And I thank you so much for listening to me.
MING: Thank you, my friend.
We have time for questions.
Anybody have any questions?
AUDIENCE: Two fairly related questions,
in the earlier test about looking for cues,
so I'm just wondering how you came
to the domain of different things
you were looking for that could conceivably
be a cue in your analysis.
Because you could look at whether the pinkie is touching the hand, and whether the hands are touching, or not.
And then along with that, whether you were going into it starting off focused on physical cues, or if you were also considering the types of issues that were brought up in conversation, which could then play into talking with a robot or talking over the internet.
DAVID DESTENO: So let me do the second one first.
We were primarily interested in physical cues.
There's lots of work out there as well
on linguistics and the type of phrasing
that people use as well as vocal tone.
We weren't looking for those.
It doesn't mean that those don't matter. And so I'm not saying these are the only cues that matter, but these are sufficient to predict. The more cues we know about, the higher our accuracy will go.
But we were interested in the actual physical, biological
motion cues.
How did we get them?
We simply started with the brute-force method, looking for individual cues that had some predictive ability at all.
Because even if they're not predictive greatly
on their own, they have to have some predictive power
on their own.
And then what we would begin to do is assemble different subsets, just trying to maximize the amount of accuracy with which we could predict.
So it was a very bottom-up approach.
And then once we had that set of four, those predicted the greatest amount of variance in people's selfish monetary behavior.
Which is then why, again, it was really
important to use the robot.
Because you're right.
This was a correlational method.
Who knows what we could be picking up. Maybe every time I crossed my arms, it was my pinkie.
And so we could actually manipulate it with precision
with the robot to validate it.
MING: If you could put a pinkie down here,
it's not trustworthy.
That's what I learned from a movie.
AUDIENCE: Let's see if I can word this the right way.
It seems like the general conclusion
from the research or your conclusion
is you're saying we should be more trusting of others,
like sort of the hope for the better world.
And the question is, is there also then some drive for people
themselves to be trustworthy?
Like in the example you gave in the beginning
about people in a marriage, one cheated on the other,
is it to say, put that aside.
Trust that person.
Or is there some other conclusion in that sense?
DAVID DESTENO: It's a good question.
What we know from all the-- so people like Martin Nowak at Harvard, he's a straight mathematician, an evolutionary biologist.
And so they run these fantastic models.
And what we know is that if you are
untrustworthy in the short run, you will profit immensely.
But over time, that profit then starts to go down.
And so in the long run, people who are trustworthy
profit the most in terms of everything and even as a group.
And so we know that's the better outcome.
But if you can be untrustworthy and not get caught,
you're going to profit.
So how do we try to balance those?
And so what we're trying to do is
to make everybody want to be more
trustworthy but at the same time also make better decisions.
It will be impossible to have a world where
everybody is trustworthy.
Because if everybody's trustworthy,
you stop even looking and caring.
You just automatically say, yes, I'll trust you.
And then if there's a mutation or whatever that causes people
to be more untrustworthy, they're
going to profit like crazy.
Until everybody starts caring again, and so it's always going to be in an equilibrium.
The question is, can we increase the set point
for trustworthiness to a higher level?
And so it's about increasing your own trustworthiness
but about deciding if you can trust somebody else wisely.
So yes, if you know absolutely nothing,
it's better to trust than not trust in the long run
in terms of quantifying the benefits that can happen.
But it's certainly not as good as making
an informed correct decision.
And so my hope is to try and open people's eyes
to how trust really works so that you
can make better decisions.
AUDIENCE: So similar to that question, trust over time,
have you done any research into how analysis of trust
has to change, how much it needs to be
dependent on data changing?
Like the first study was on the initial get to know you,
how much I trust you as a person.
And then the question is later on, something happens.
How much should future events be added into that evaluation?
DAVID DESTENO: You mean at what point will I
change my judgment of whether I can trust you?
It's dependent on a lot of things.
It's often dependent on the magnitude of how much you have held up your end versus how much you haven't.
But I guess my argument is that you
need to look at each situation if it's important and new.
Because even somebody who has been always trustworthy,
if the costs and benefits change--
the reason they're always trustworthy is the cost
and benefits are rather stable.
Take that person and change the cost and benefits
either by dangling a reward that is immense in front of them
or giving them anonymity so they won't get caught,
like our people, and they'll change.
So I think there's not a clear time frame.
I think we all adjust at different rates
depending upon the magnitude.
But my message is no matter what you think, consider
the situation.
If it's somebody you always trusted,
consider has their power changed?
Has anything else changed?
Because they may not want to be untrustworthy, just like our subjects didn't, but they will be, and they'll construct a story for why.
AUDIENCE: Hi and thank you for coming.
I have a question actually kind of related to that.
So have you done any study in how people's relationship
long term potentially changes how sensitive or not sensitive
they are to social cues?
I can imagine someone who perhaps
does all of the social cues that you mentioned
in terms of lying, but perhaps like a brother and sister,
for example.
They've gotten numb to it over time,
and maybe they can't pick up on it anymore.
Or do you have a sort of sense of how long-term relationships
can change how people pick up on that?
DAVID DESTENO: Two things on that,
we haven't done that work, but what we know from the nonverbal literature in general is that people have what are often termed accents.
So there's a kind of panhuman way of doing it.
But then different cultures or even different families
or individuals will have modifications of that.
And so the longer you are with someone,
you can learn that for this person,
this is that person's tell in some ways.
And it will be some combination of these.
But other things added to that will increase your accuracy
for that person.
But another thing, in terms of the long term-- what does trust do? It's beneficial.
So there's great work done by Sandra Murray.
She's a psychologist who studies romantic relationships. And one thing that trust does in our relationships is it smooths out the bumps, as I said.
So we've probably all had times when our significant other does something and it makes us go, hmm-- whether you think the person's flirting with someone, or they're working late, or whatever it is.
Well, if you inherently trust this person,
at a very nonconscious level, that trust erases that hmm.
It just gives you an intuition that everything's fine.
And if you trust that intuition, that's good.
Because lots of times we'll do something where we're not trying to be untrustworthy. It's just an inadvertent thing.
But if a person interprets that as, oh, you're untrustworthy,
it can start you into kind of a death spiral.
And so the good thing about trusting someone
over the long time in a relationship
is that it helps smooth out those bumps
so there aren't mistakes made.
So that one person doesn't interpret the other person
as flirting or doing something with somebody else
that they shouldn't have.
Now, if they keep doing it, well, then you're
going to know it's real.
But that's a benefit of trust long term.
MING: Thank you, my friend.
So the book is "The Truth About Trust"
available where books are sold, also
available at the back of this room.
And David will be around to sign books.
And my friends, David DeSteno.
DAVID DESTENO: Thank you all.