[ Applause ]
-Thank you.
How many people have had security features
go nowhere at all?
I certainly have.
We all come up with ideas
that we think will improve security.
And many of them are good ideas.
During my PhD, I actually published some of these ideas.
But here's the problem.
Once the paper left the lab,
I was on to the next thing.
I didn't spend any time, invest any time,
in actually selling these features.
When I started at BlackBerry, I thought,
"Hey, I've got an easy shot.
I'll log a feature request, some magic will happen,
and voilà -- everybody will be using my feature."
But the real world doesn't work like that.
And security features die all the time.
What does it take to become a successful security feature?
Well, let's take a look at an example -- Stack Cookies.
There we go.
In November of '96, Aleph One wrote an article
for Phrack on "Smashing the Stack for Fun and Profit."
In that article, he goes through in detail
what it takes to exploit
a stack-based buffer overflow vulnerability,
trampling the return address.
Well, in January '98,
Crispin Cowan and a couple of other authors
present a paper at Usenix Security, "StackGuard."
Their goal is simple --
to protect that return address on the stack.
And the idea is also fairly simple, right?
You put a cookie on the stack.
Before you use the return address, you check the cookie.
If the cookie's good,
there's a high probability
that the return address is also good and you can use it.
If the cookie's bad, abort the program.
Don't touch the return address.
But that's not the end of their story.
In fact, that's just the beginning of their story.
In August '98,
they actually rewrite the patch, all right?
StackGuard v2.
And in May '99,
they're at Linux Expo
and they've actually got updated performance numbers.
They'd recompiled a lot of the Red Hat packages
to prove that the overhead
in doing stack cookies wasn't that high.
Next event on the timeline is January 2001,
where ProPolice,
which is done by some researchers
at IBM Research Japan, is released.
And this is another rewrite of the StackGuard patch.
Next event on our timeline
is actually Microsoft releasing Visual Studio in 2003.
Now, Microsoft, when they released Visual Studio
with stack smashing defenses,
didn't actually know about ProPolice,
they didn't actually know about StackGuard.
It was only after the release
that they found out that these things existed.
But that same year, Crispin and team,
they're at the GCC Developers Conference, all right?
And they're actually presenting the third version
of the patch that they've written.
So we're now at a total of four versions of this patch.
What's the end result of all of that work?
Well, in 2005, June,
GCC finally adopts ProPolice.
Now, this is also interesting
because the version they adopted wasn't actually ProPolice
written by IBM Research Japan
and it wasn't any of the three StackGuard patches.
It was actually another version of the patch
written by Richard Henderson at Red Hat.
Now, once the patch had been adopted into GCC,
the flood gates opened.
And Red Hat appears to be the first major distro
to include support for StackGuard, all right?
That isn't overly surprising considering the final patch
did come from someone at Red Hat.
Ubuntu was in October 2006,
SUSE was in December of 2006, Debian was in February of 2009,
and Arch was in August 2011.
Basically, Stack Cookies at that point
had passed a tipping point.
Now, the feature continued to get new improvements, all right?
And in May 2013, the strong option, -fstack-protector-strong, was added.
And adoption of that has been a lot quicker
than the original work.
Oh, and finally --
can't forget this --
in August 2013,
that original '98 Usenix paper
won the Usenix Test of Time award.
So, congratulations to those authors.
What opened the flood gates?
It's fairly obvious.
Adoption of the patch into GCC and Visual Studio.
Let's take a look at another feature.
I'm actually gonna use a feature developed internally
within BlackBerry that is shipping on our Priv,
which is our Android-based smartphone.
If you take a look at Android permissions,
there are five different permission levels --
none, normal, dangerous, signature, and system.
None -- well, that is what it sounds like.
Normal permissions are automatically granted
to any application that requests them.
Dangerous permissions can be accepted
or rejected by the end user.
Signature permissions are only available
to other applications that are signed with the same key.
And system permissions are restricted
to applications on the base system.
What happens if you, as a third party developer,
want to protect access to certain APIs?
Well, you've really got three options.
You've got normal, dangerous, and signature.
Normal, as I said, is available to all applications.
Dangerous puts enforcement in the hands of the end user.
We've got three talks on usable security tomorrow.
I don't think I'm giving anything away
by saying that prompting the end user
isn't always the best option, right?
The third option -- signature.
Well, signature might work for a small number of applications
because you have to sign them all with the same key.
But it doesn't scale.
We've also got the problem
that you have to install the application
defining the permission
before you install the application
requesting the permission.
So what did we do?
Our solution within the security research group at BlackBerry
was called "controlled open permissions."
It allows the application package
to be signed with different keys,
but it doesn't offload enforcement
on the end user.
The concept's simple.
The application requesting access to the content
sends a token to the application hosting the content.
The application hosting the content
uses a verification public key to verify the token.
If the token successfully verifies,
it sends the content back.
So, how do we go about getting developers internally
to use controlled open permissions?
Well, we were lucky, all right?
In our case, developers already realized
that they had a problem
and they were looking for a solution.
Not in all fields do people realize
that there even is a problem.
If you take a look at IoT,
a lot of people are just now realizing
that we have a big problem that needs solving.
So the first step for us
was actually convincing the developers
that this would solve their problem.
Selling your solution is entrepreneurship.
It's less about the technical solution
than it is about all that support and other stuff
around the solution.
For us, we had a short, concise description.
We had a prototype implementation.
And we had detailed notes on how to use
this prototype implementation.
Conference papers are often too long,
and you end up scaring away the exact people
that you want to try and convince.
We went to meetings.
And, in fact, we even came in on the weekends,
because in the transition of this research
from prototype to implementation,
the developers were using a hardware security module,
or hardware signing module,
that didn't support the signatures that were needed,
and that caused problems.
Now, the key is someone had to work through those problems.
And it gives you a lot of credibility
if you work with the end users
or the end developers
to deploy and fix some of these issues.
We all have --
oh, the end result of this is we actually got the feature
into the BlackBerry common infrastructure,
which is a set of common libraries
that all the BlackBerry-developed
applications will use on the Priv.
We all have a set of tools that we use for solving problems.
This includes developers.
It includes companies
who are looking to satisfy customer requests.
It includes product managers
who are looking to find new features
that customers are wanting.
In fact, it includes most problems of life,
including non-security related problems.
So, how do we get our feature or tool to be used?
Well, we need to make sure
that the right feature gets into the diaper bag.
Okay, why a diaper bag? Three reasons.
First, diaper bag has limited space.
We can't just keep throwing new features at the developer
and expect them to use them all.
How many static analysis tools are out there?
Lots.
How many do we expect the developer to use?
One if we're lucky, right?
The static analysis tool that they use
will be the one that they're most familiar with,
the one that was mandated by management,
or the one that everybody else seems to be using.
Second, parents don't go into parenting
with a five-day course on changing diapers.
You didn't think you'd hear about diapers in this talk,
did you?
All right, a lot of the time, computer science students
go into the business
without even a whole course on security.
And Zachary Peterson has a talk about this
on the third day of Enigma.
Both the diaper on the left
and the diaper on the right work.
But which one do you think is in more people's diaper bags?
People are comfortable with learning,
and they're comfortable with messing up once or twice.
But if they don't learn it quickly enough,
they will switch.
How many of us have had a lesson on using a compiler?
I'm gonna guess almost none of us.
There are 2,000 different command-line options for GCC.
That's a lot of options.
I only know a very, very, very small number of those.
We can't assume that people will use security tools directly.
In fact, I've seen cases where the security checks
were disabled in static analysis tools.
We need to make it as easy as possible.
And third, diaper bag
is already filled part-way with necessary pieces.
I wish I could tell my daughter to just go on the potty.
And then I could just throw everything out of the diaper bag
and use it to carry cooler things instead of diapers.
But it doesn't work like that.
We can't just tell a developer
to throw out everything that they have been using
and use this cool, new development environment.
We still have C,
even with the problems that it causes,
because of the benefits that it also provides.
The transition
from IPv4 to IPv6 has been really slow.
And in fact, I'm not sure that IPv4
is gonna be dead in my lifetime.
A developer needs a compiler.
They need a development environment.
They need certain libraries and support.
And that takes a large amount of space in the bag,
and it leaves a very limited amount of space
for other standalone tools.
Okay, so how do you go about getting your security tool
or feature into the diaper bag?
Well, option one --
convince the developer to throw something else out.
What are they gonna stop using?
Is there another tool that yours can replace entirely?
Does your tool require relearning,
or does it work similar to that old tool?
Can you convince them to switch compilers?
Is there another template library
that has better cross-site scripting protections
than the other one?
SSH is an example
of one tool trying to replace another.
It gave administrators things that were really useful,
like automatic log-ins and file transfer
and other things.
The resistance to switching
means you need to be much better,
not just a little bit better
than the tool you're looking to replace.
Option two -- make your tool small and easy to use.
And I'm not talking small in terms of lines of code.
I'm talking small in terms of cognitive load.
There is a small amount of space
in the diaper bag to add something new,
especially if it fits well with the current workflow.
Valgrind's an example of this.
Run your program under Valgrind,
and you get extra memory error checking.
Option three --
add your feature or tool
to something else that's already in there.
That's what happened when StackGuard got included in GCC.
And there's a whole field of compiler protections --
shadow stacks, new warnings,
new errors, those sorts of things --
that are being introduced into compilers.
Static analysis --
can you add your rule
to an existing static analysis tool,
or do you really need to create
a whole new static analysis tool kit?
Now, with static analysis,
just make sure you don't turn on everything all at once.
You don't want to overwhelm them.
And option four --
okay, and I know I'm cheating just a little bit --
don't put your feature in the diaper bag.
Make the baby carry it.
All right, developers operate within a specific environment.
So make your feature part of that environment.
ASLR -- address space layout randomization --
is an example of this.
On mobile platforms,
we have the sandboxing environment
that's provided by default to all the application developers.
If you take a look at most Android applications,
they're written in Java.
Why is that?
Because Java's the default development environment
that's provided to them.
If you're gonna create a new smartphone,
my suggestion -- don't do it in Ada.
Although Ada might be secure,
you're not gonna get that broad deployability
that you're looking for.
For StackGuard, getting the feature into GCC
was their tipping point.
And they didn't reach that tipping point
by releasing the GCC patch.
Remember, they released that GCC patch --
the first version of it -- back in '98.
You will not reach that tipping point
just by releasing a patch with your paper.
They reached the tipping point
because others understood the problem
and the value of their solution.
Microsoft released first
probably because the internal security team at Microsoft
had fewer people to convince.
SELinux is used partly
because it's part of the Linux environment.
But even they didn't get it in right away.
They had to create
the whole Linux Security Modules framework.
And they had to listen to the feedback
that they were getting,
take those suggestions seriously,
and work with them, not get discouraged.
For controlled open permissions within BlackBerry,
the tipping point was getting it
into the BlackBerry common infrastructure.
For many features, it's gonna be getting them
into that developer diaper bag or onto the baby.
However, that's not the only tipping point.
And there's two others.
The second is legislative changes.
Not all security features are used because they're elegant.
But promoting proper legislative frameworks
is a whole other talk
and one I'm not gonna get into.
And the third is public pressure.
HTTPS has long been a part of the web developer diaper bag,
but not everybody has used it.
Edward Snowden has done a lot to change this
and help people to re-evaluate the security features
that they are using.
And things like Let's Encrypt
are helping to provide even easier tools.
What if your feature isn't for end developers, though?
What if it's for users?
Well, the approach is the same,
but the target audience isn't.
You need to convince the people in key areas
that they have a problem
and you have the solution to that problem.
And remember, those people might not be security experts.
I know on a couple of occasions
I've had to take a deep breath and relax
in trying to explain and sell security.
PointGuard, which is some of the subsequent work
that Crispin did, died.
I've had security features die, as well.
Cause of death --
failure to pass that tipping point.
For StackGuard, it took probably about six months
to write the paper and initial prototype
and seven years to pass the tipping point.
Adding your feature to a tool that's already in the diaper bag
is probably going to be easier
than adding a new tool to the diaper bag.
Though that's just my personal experience.
And I haven't found any bullet-proof approach.
But things that help --
elevator pitch, brief summary, and collaborations.
And in fact, the closer the collaboration, the better.
It's been suggested that the most successful way
to transfer research into technology or into product
is to transfer the people who worked on the research.
It allows those most passionate about the research
to help those who are responsible
for its eventual deployment.
Our goal as security professionals
should be to change the world for the better.
So let's all get out there and build better diaper bags.
[ Applause ]