BORIS DEBIC: Welcome, everyone, to yet another
Authors@Google talk.
It is my distinct privilege today to host Dr. Ray
Kurzweil, and my good friend here, Peter Norvig, from our
Artificial Intelligence group, former director
of research at Google.
So just to give you a little bit of context why I am
hosting this talk.
When I was a kid and wrote my first lines of code in
elementary school, I saw a tremendous potential in that
toy that I was playing with.
And I said to all my friends, you know what?
One of these days, these are going to
be as smart as humans.
We just have to work a lot at it.
And they would say, oh, no, that's impossible.
How can you say something like that?
I really didn't have a good answer in those times.
I was just a kid.
But I told them, look, I mean it's all
built of atoms, right?
The CPUs in this thing, that's atoms, and our
brains, that's atoms.
So there's no theoretical
impossibility for this to happen.
Well, today, I'm very happy to host two guys who can explain
why this will happen in much more detail.
Please welcome Dr. Ray Kurzweil to Google.
[APPLAUSE]
PETER NORVIG: I think it's redundant to introduce Ray.
You all know him as an inventor, author, a futurist.
And you know, there was a book a few years back that accused
Xerox PARC of fumbling the future.
And I would say, to continue that metaphor, Ray has
intercepted the future and returned it for a touchdown,
multiple times.
He's done it with the flatbed scanner, with OCR, with
print-to-speech, text-to-speech, speech
recognition, music synthesis, and so on and so on.
I won't list all the honors, but he's been recognized by
Presidents Johnson, Clinton, and
Reagan, and by Bill Cullen.
Those of you who are younger, you'll have to Google that.
But let me put it this way.
Have you heard of Plato, Aristotle, Socrates?
Philosophers.
And Ray is a philosopher, too.
But more importantly, foremost, he's an engineer.
And when it comes to these tough questions of creating
the mind, philosophers are useful, but I'm putting my
money on the engineers.
[LAUGHTER]
[APPLAUSE]
RAY KURZWEIL: Well, thanks for that, Peter.
I-- can you hear me back there?
Yeah?
I agree with that.
In fact, I decided I wanted to be-- well, I called it an
inventor when I was five.
And I had this conceit.
I know what I'm going to be, and it kind of reflected my
family philosophy that if you have the right ideas, you can
overcome any problem.
And I particularly like coming here.
This is actually my third time at Authors@Google.
I was here in 2005.
I wouldn't exactly say Google was a young
upstart at that time.
It was, I think, about 4,000 people.
I did it in the lunchroom near here.
The spirit hasn't changed.
I think you're about 10 times the size.
40,000 is like the size of a small city.
But you're still actually a start-up compared to the
opportunity, because the world is increasingly based on
knowledge and information.
In fact, 65% of American workers are knowledge workers.
So the mission of organizing and providing intelligent
access to all the world's knowledge is the most
important task in the world, and Google is clearly the
leader in that.
And there's tremendous potential, because knowledge
is growing exponentially.
So I want to say a few words about exponential growth and
my law of accelerating returns, which was the primary
message of "The Singularity is Near." But I think Google is
actually a very good example of that exponential growth.
I happened to be on Moira Gunn's "Tech Nation" NPR
program yesterday, and she was reminiscing about her 2001
interview with Larry and Sergey, who came in with dark
suits and ties.
And they were trying to explain this cool computer
they were going to create.
And she didn't quite understand what it was.
And Larry said, well, it's going to be like HAL.
And then Sergey said, but it won't kill you, so.
[LAUGHTER]
RAY KURZWEIL: So I think we got the second part of that.
The first part of that we have, in the sense that Google
is pretty amazing in terms of finding information.
I'm amazed by it every hour.
But I think we can go further in that direction, and that's
what I'd like to talk about.
You all have these billions of pages and millions of books,
and very good access to them, but there's a lot of
information there that's reflected in
natural-language ideas.
And computers, now, I think can begin to understand those.
And that's something I'm working on.
That's something I talk about in this book.
And I'd like to share that idea with you.
First, I'll say a few words about the law of
accelerating returns.
I mentioned I decided to be an inventor when I was five.
I realized 30 years ago that the key to being
successful is timing.
Those inventors whose names you know are the ones who got
the timing right.
So Larry and Sergey had this great idea about
reverse-engineering the links on the internet to provide a
better search engine, but they did it at
exactly the right time.
And so in 1981, I was thinking, my project has to
make sense when I finish the project, and the world will be
a different place two, three, four years from now.
That was even true in '81.
It's even more true today.
Acceleration is another feature of the law of
accelerating returns.
Our first communication technology, spoken language,
took hundreds of thousands of years to develop.
Then people saw that stories were drifting.
People didn't always retell the story in the same way, so
we needed some record of it.
So we invented written language.
That took tens of thousands of years.
Then we needed more efficient ways of
producing written language.
The printing press actually took 400 years to
reach a mass audience.
I gave a speech at the University of Basel recently
on the occasion of its 550th anniversary.
It was founded 20 years after Gutenberg's invention, right
near the spot where he invented it.
And I said, well, you must have had some of his books
when you opened your doors.
And they said, yes, we got them very quickly.
It was only a century later.
I mean, that was the Google of that time.
It took maybe a century to find the right information.
So you didn't really find it in your lifetime.
It took 400 years for that really to reach an appreciable
number of people.
The telephone reached 25% of the US population in 50 years.
The cell phone did that in seven years.
Social networks--
wikis, blogs-- took about three years.
Go back three or four years ago, most people didn't use
social networks, wikis and blogs.
Ten years ago, most people didn't use search engines.
That sounds like ancient history, but it
wasn't so long ago.
And then we very quickly become dependent on these
brain extenders.
I mean, during that one-day SOPA strike, I felt like a
part of my brain had gone on strike.
Because there was a way around it, but I didn't know that
until the day came.
So I really felt like I'm going to lose part of my mind.
Yet this was not technology that I had
even a few years earlier.
What's driving this is the exponential growth of
information technology.
In 1981, I began to look at data, being an engineer.
But I started out with the common wisdom that you cannot
predict the future.
And that remains true as to which company, which standard
will succeed.
But if you measure the underlying properties of
information technology--
the first one I looked at, and a classic one, is the power of
computation per constant dollar--
so, calculations per second per constant dollar.
Or the number of bits we're moving around wirelessly, or
the number of bits on the internet, or the cost of
transmitting a bit, or the spatial resolution of
brain-scanning, or the amount of data we're downloading
about the brain, or the cost of sequencing a base pair of
DNA or a genome, or the amount of genetic data we're
sequencing--
I mean, these fundamental measures follow amazingly
predictable trajectories, really belying the common
wisdom that you cannot predict the future.
And what's predictable is that they grow exponentially.
And that is not intuitive.
Our intuition about the future is that it's linear, not
exponential.
If you ever wondered, why do I have a brain?
It's really to predict the future, so we could predict
the consequences of our actions and inactions.
So I'm walking along, and, OK, that animal's going that way
towards a rock, and I'm going this way.
We're going to meet in about 20 seconds up at that rock.
I think I'll go a different way.
That proved to be useful for survival.
That became hardwired in our brains.
Those predictors of the future are linear, and they work very
well for the kinds of situations we encountered when
our brains evolved thousands of years ago.
It's not appropriate for the progression of information
technology.
And I'd say the principal difference between myself and
my critics is they look at the current situation and they
make linear extrapolations.
So halfway through the Genome Project, seven years, 1% had
been completed, and mainstream scientists who were still
skeptical said, I told you this wasn't going to work.
Seven years, 1%?
It's going to take 700 years, like we said.
My reaction was, no, we're almost done.
[LAUGHTER]
RAY KURZWEIL: I mean, 1%, you're pretty much finished.
I mean that's--
you can try that with your product submission schedules.
[LAUGHTER]
RAY KURZWEIL: But over the next-- it had been doubling
every year.
There was reason to believe that would continue.
It was only seven doublings from 100%.
And that's exactly what happened.
It kept doubling and was finished seven years later.
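The arithmetic behind that remark can be sketched in a few lines of Python. The 1%-at-the-halfway-point figure is from the talk; the rest is just repeated doubling:

```python
# Genome Project sketch: 1% complete at the halfway point,
# with sequencing output doubling every year thereafter.
pct_complete = 1.0
doublings = 0
while pct_complete < 100.0:
    pct_complete *= 2   # another year of doubling capacity
    doublings += 1
print(doublings)  # 7 -- seven more doublings finish the project
```

Seven doublings take 1% to 128%, which is why "1%, you're pretty much finished" on an exponential trajectory.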
That has continued.
Up to the present day, the first genome
cost a billion dollars.
We're now down to under $10,000, and so on.
And it's true in every area of information technology.
Not everything--
I mean, transportation's not yet an information technology.
But industries are converting.
It's not just the gadgets we carry around.
Health and medicine has become an information technology.
I'll talk about that.
The world of physical things is going to become an
information technology as three-dimensional printing
gets going, and I'll touch on that.
It's worth just examining for a moment the difference
between linear progressions, which is our intuition, and
the reality of information technology, which is
exponential.
So a linear progression goes one, two, three, four.
An exponential one, which is information technology, goes
two, four, eight, sixteen.
Is that really so different?
Actually, it's not that different.
A linear progression is a good approximation of an
exponential one for a short period of time.
I mean, look at an exponential.
Take a little piece of it.
It looks like a straight line.
It's a very bad estimate over a long period of time.
At step 30, the linear progression's at 30.
At step 30, the exponential progression's at a billion.
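That step-30 comparison is easy to check directly; a two-line sketch:

```python
step = 30
linear = step            # linear progression 1, 2, 3, ... reaches 30 at step 30
exponential = 2 ** step  # exponential 2, 4, 8, ... reaches over a billion
print(linear, exponential)  # 30 vs 1073741824
```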
And that's not an idle speculation about the future.
This Android phone is several billion times more powerful
per constant dollar than the computer I used as an
undergraduate.
It's a million times cheaper, it's several thousand times
more powerful, in terms of computation, communication,
memory, and so on.
And it's also 100,000 times smaller.
That's another exponential progression.
And we'll do both of those things again
in the next 25 years.
So that gives you some idea of what will be feasible.
So this is what I wanted to cover.
Any questions on any of this?
[LAUGHTER]
Well, this was the first graph I had, in 1981.
So I don't know if you can see that, but I
had it through 1980.
And this is calculations per second per constant dollar.
It's a logarithmic scale, which I have to take some
pains to explain to many audiences.
But every labeled point on this y-axis is 100,000 times
greater than the level below it.
So this modest little uptick represents a trillions-fold
increase in the amount of computation you can get per
constant dollar over the last century, going back to the
1890 census.
Several billion-fold, just since I was a student.
People go, oh, Moore's law.
But Moore's law is actually just the part on the right.
That had actually only been underway for a little over a
decade when I did this estimate.
This started decades before Gordon Moore was even born.
In the 1950s, they were shrinking vacuum tubes, making them
smaller and smaller to keep this exponential growth going.
CBS predicted the election of Eisenhower with a vacuum-tube
based computer in 1952.
Remember that?
[LAUGHTER]
A few people here might remember it.
When I first talked to Google in 2005, I don't think anybody
remembered it.
But finally, that hit a wall.
Couldn't shrink the vacuum tubes anymore and keep the
vacuum, and that was the end of that paradigm.
But it was not the end of the ongoing exponential, it just
went to the fourth paradigm.
And people have been talking about the end of Moore's law,
but the sixth paradigm will be three-dimensional computing.
We've taken baby steps in that direction.
If you talk to Justin Rattner, the CTO of Intel, he'll show
you these experimental circuits they have that are
three-dimensional,
self-organizing molecular circuits.
Those will become practical in the teen years, before we run
out of steam with flat integrated circuits, which is
what Moore's law is all about.
But the most interesting thing about this is, just look at
how smooth and predictable a trajectory that is.
People say, well, it must have slowed down during the Great
Depression, or the recent recession--
neither of which is the case.
Did Google slow down during the recent recession?
I mean, these technologies continue because we're
creating the computers and the systems and the search engines
of 2013 and 2014 with the computers of 2012.
We couldn't do that in 2002.
We had computers of 2002, so we created
the systems of 2003.
That's why the technology builds on itself.
But it goes through thick and thin, through war and peace,
through boom times and recessions--
nothing seems to affect it.
And we could talk about natural limits, but I examine
that in "The Singularity is Near," and if you look at what
we know about the physics of computing, we do need a
certain amount of matter and energy to compute, to
remember, to transmit a bit, but it's very, very small.
And based on the limits that we understand that have been
demonstrated, we can go well into the century and develop
systems that are many trillions of times more
powerful than we have today.
So I won't dwell on these examples of electronics, but
you could buy one transistor for $1 in 1968.
I thought that was actually pretty cool, at the time,
because in the early '60s, I would hang out at the surplus
electronic shops on Canal Street in New York--
they're still there--
and buy something this big, a telephone relay that could
switch one bit, for $50.
And it was big and slow, 30-millisecond reset time.
I can actually get something much faster
and smaller for $1.
Today, you can get billions for $1.
And they're better, again, because they're smaller, so
the electrons have less distance to travel.
Cost for a transistor cycle is coming down
by half every year.
That's a measure of price/performance.
So the fact that you can buy an Android phone that's twice
as good as the one two years ago for half the price partly
is because Google is clever, but partly it's because of
this law of accelerating returns.
It's a 50% deflation rate.
We put some of that price/performance improvement
into better performance and some of it into lower prices.
So you get better products for lower costs.
And that's going to continue for a very long time.
The economists actually worry about deflation.
We had massive deflation during the Depression.
That was a different source.
It was not price/performance improvement.
It was the collapse of consumer confidence.
But they're still concerned as more and more of the economy
becomes information technology, like all of health
and medicine.
Peter's working on education becoming information
technology.
And if you can get the same stuff--
computes, bits of communication, base pairs of
DNA, physical things printed out on
three-dimensional printers--
for half the cost of a year ago, Economics 101 will say
that you will buy more.
But you're not going to double your consumption year after
year, because after all, how much do you need?
You'll reach a saturation point.
So maybe you'll increase your consumption 50%.
And so the size of the economy of these information
technologies will shrink, not as measured in bits, bytes,
and base pairs, but as measured in constant currency.
And for a variety of good reasons, that would not be a
good thing.
And that is not what is happening.
In fact, we more than double our consumption each year.
This is bits shipped, but I have 50 other consumption
graphs like this.
Every form of information technology has had an average
growth rate of 18% per year for the last 50 years in
currency, despite the fact that you can get twice as much
of it each year for the same price.
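Taken together, those two figures imply how fast unit consumption must be growing. A quick sketch of the arithmetic, using the 50% deflation rate and the 18% currency growth rate quoted above:

```python
price_factor = 0.5    # 50% deflation: each unit costs half what it did a year ago
dollar_growth = 1.18  # spending in constant currency still grows 18% per year
unit_growth = dollar_growth / price_factor
print(unit_growth)    # 2.36 -- unit consumption more than doubles every year
```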
And the reason for that is, as we reach certain points of
price/performance, whole new applications explode.
I mean search engines like we have now, or even like we had
10 years ago, weren't feasible 20 years ago.
Social networks--
there were social networks before three or four years
ago, but they didn't take off because you weren't even able
to upload one picture.
And when the price/performance reached a certain point, these
applications exploded.
And we have an insatiable appetite for information, for
knowledge--
which is really information that has
been shaped by meaning.
That's the mission of Google, is to turn information into
knowledge that people can access and benefit from.
So "Time Magazine" had a cover story on my law of
accelerating returns.
They wanted to put a particular computer they had
covered and were fond of on the chart.
I said, well, I don't know.
It might be below the chart, because sometimes people come
out with things that are not cost-effective, and then they
don't last in the marketplace.
This has just come out.
But it actually was on the curve.
It's the last point there.
This is a curve I laid out 30 years ago.
I laid it out through 2050.
But we're right where we should be.
This has been an amazingly predictable phenomenon.
Communication technology--
Martin Cooper is one of the faculty at Singularity
University.
He invented a product that you sell, the mobile phone.
And that's the number of bits of data we send around
wirelessly in the world.
So it's over the last century.
A century ago, this was Morse code over AM radio.
Today, it's 4G networks.
And again, this is a trillions-fold increase.
That's a logarithmic scale.
But look at how smooth a progression that is.
Internet data traffic.
This is a graph I had just the first few points of in the
early '80s.
It was the ARPAnet.
And I said, wow, this is going to be a world wide web
connecting hundreds of millions of people to each
other and to vast knowledge resources by the late '90s.
I wrote that in the '80s.
And people thought that was ridiculous, when the entire
defense budget could only tie together a few thousand
scientists.
But that's the power of exponential growth.
That is what happened.
That's the same data on the right, seen on a linear scale.
That's how we experience the world.
So to the casual observer, it looked like, whoa, the World
Wide Web is a new thing, came out of nowhere.
But you could see it coming.
And you can see revolutions coming if you look at these
progressions.
And that is what I advise young companies to do.
Because I get some business plans and do some mentoring,
and very often, these plans talk about the world three,
four years from now, like nothing is going to change.
And you only have to look at the last three or four years
to see that that's not correct.
I could talk for a long time about this phenomenon.
But we are turning health and medicine into an information
technology.
I mentioned the Genome Project.
But we can actually reprogram this outdated
software in our bodies.
How long do you go without updating your
Android phone software?
This is probably updating itself right now.
But I'm still walking around with software in my body that
evolved thousands of years ago-- like, for example, the
fat insulin receptor gene, which says, hold onto every
calorie 'cause the next hunting season may not work
out so well.
That was a good idea 1,000 years ago.
You worked all day to get a few calories.
There were no refrigerators, so you stored
them in your fat cells.
I'd like to tell my fat insulin receptor gene, you
don't need to do that anymore.
I'm confident the next hunting season will be good at the
supermarket.
[LAUGHTER]
RAY KURZWEIL: So that was actually tried in animal
experiments.
We have a number of ways of turning genes off, like RNA
interference.
And these animals ate ravenously and remained slim
and got the health benefits of caloric restriction while
doing the opposite.
They lived 20% longer.
They're working with a drug company to bring that to the
human market.
I'm on the board of a company that takes lung cells out of
the body of patients who have a disease caused
by a missing gene.
So if you're missing this gene, you probably will get
this terminal disease, pulmonary hypertension.
So they scrape out lung cells from the throat, add a gene in
vitro, and then inspect that it got done correctly,
replicate the cell several million-fold--
that's another new technology--
inject it back in the body, it goes through the bloodstream.
The body recognizes them as lung cells.
You've now added millions of cells with that patient's DNA,
but with the gene they're missing, and this has actually
cured this disease in successful human trials, and
it's doing its Phase III trial now before it gets approved.
There are hundreds of examples of reprogramming biology.
My father had a heart attack in 1961 that damaged his heart,
which is the case for 50% of all heart attack survivors.
He could hardly walk.
He died of that in 1970.
Up until very recently, there's nothing you could do
about it, because the heart does not
rejuvenate itself naturally.
You can now reprogram stem cells to rejuvenate the heart.
Now, I've talked to people who could hardly walk, and now
they're normal.
We are growing organs already.
Some of these simpler organs are being used in humans.
Other ones are now being implanted in animals, where we
lay down the scaffold with three-dimensional printers and
then use the three-dimensional printer to populate it with
stem cells and regrow, for example, a kidney.
So all of this is coming.
It's a complex area.
But the point is that health and medicine has become an
information technology, and therefore it's subject to this
law of accelerating returns.
So these technologies, which are already beginning to enter
clinical practice, they're going to be 1,000 times more
powerful in 10 years and a million times more
powerful in 20 years.
It gives you some idea of what's coming.
If I want to send you a music album or a movie or a book,
just a few years ago, I'd send you a FedEx package.
I can now send you a Gmail message with those products as
an attachment.
I can also send you these musical instruments, if you
have the three-dimensional printer.
And this is a revolution right before the storm.
They've been expensive.
They were hundreds of thousands of dollars, then tens of
thousands, now thousands.
They will, in a number of years, go sub-$1,000.
The resolution is improving at a rate of about 100-fold in
3-D volume per decade.
It's still at several microns.
It needs to be sub-micron.
The range of materials is increasing.
Ultimately, a substantial fraction of manufacturing will
be done this way, turning information files into
physical products.
Today, you can print out 70% of the parts you need with
your three-dimensional printer to create another
three-dimensional printer.
[LAUGHTER]
RAY KURZWEIL: That will be 100% in five to eight years.
So that brings me to the brain.
And I want to spend some time on that.
I've been thinking about this topic for 50 years, actually,
thinking about thinking.
I wrote a paper when I was 14--
that's 50 years ago--
that basically described the human brain as a large number
of pattern recognizers.
That was my Westinghouse Science Talent Search
submission, and I got to meet President Johnson.
And I did a program that did pattern recognition on musical
melodies and then wrote original music with the
patterns it had discovered.
So you could feed in Chopin, and it would write, then,
music like it was a student of Chopin or Mozart, and you
could recognize which composer had been analyzed with the
original music that it was composing.
And this book actually articulates a
very consistent thesis.
Pattern recognition is what we do well.
We're not very good at logical thinking.
Computers do a far better job of that.
One of the predictions I made in the early
'80s was that by '97--
actually, I said '98--
a computer would take the World Chess Championship.
I also predicted that when that happened, we would
immediately dismiss chess as being of any significance.
Both of those things happened in '97 when Deep
Blue defeated Kasparov.
And people said, well, of course that's true.
Chess is a logic game, and computers are logic machines,
so we would expect them to do a better job
than humans on chess.
But what they will never do is be able to understand the
vagaries and subtleties and ambiguities of human language.
So already we're seeing that being overturned.
And there's actually a pretty impressive range--
it's just a first step--
but an impressive range of language that you can say to
systems like Google Now, and it will understand you pretty
well, and actually begin to develop a model of who you
are, something that Siri doesn't do.
How many of you can answer this "Jeopardy" question?
"A long tiresome speech delivered by a frothy pie
topping." What is a meringue harangue?
[LAUGHTER]
So Watson got that correct.
The two humans who were the best human "Jeopardy" players
ever did not get it.
And Watson got a higher score than the best
two humans put together.
And there's a lot of
misunderstandings about Watson.
People say, well, it's not really doing any true
understanding of language, because it's just doing
statistical analysis of words.
Actually, what it does--
I mean, it actually has many different modules.
What the IBM engineers did is create a framework called
UIMA, which runs these different systems and is able
to analyze their strengths and weaknesses and combine them.
So actually, the engineers in charge of Watson don't
necessarily understand all of those modules.
The ones I think that are most effective are ones that are
statistical, but they're not just doing
statistics on word sequences.
They're building a hierarchical model with a
whole field of probabilities at different
levels of the hierarchy.
And if that does not represent a true understanding of the
material, then humans have no true understanding, either,
because that is how the neocortex works.
And another misconception is that every fact was sort of
programmed in some language like Lisp.
In fact, Watson got its knowledge by reading Wikipedia
and several other encyclopedias, 200 million
pages of natural language documents.
And it is true that it actually doesn't do as good a
job on any page as a human.
So you could read a page, and if you knew nothing about the
presidency, you'd conclude, wow, there's a 95% chance
Barack Obama's president, having read that one page.
And Watson will read it and come out with a conclusion,
oh, there's a 58% chance that Barack Obama is president.
So it didn't do as good a job of understanding that page.
But it has read 200 million pages, and maybe 100,000 of
those have to do with Barack Obama being president.
And it can then combine all those probabilities using
sound probability theory--
Bayes' theorem and so on--
and conclude that there's a 99.99% chance that Barack
Obama is president.
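The flavor of that combination can be sketched with a naive-Bayes accumulation in log-odds space. The 58%-per-page figure is from the talk; treating pages as independent and using a uniform prior are simplifying assumptions, not Watson's actual pipeline:

```python
from math import log, exp

def combine(estimates, prior=0.5):
    """Combine independent per-page probability estimates via Bayes in log-odds."""
    prior_log_odds = log(prior / (1 - prior))
    log_odds = prior_log_odds
    for p in estimates:
        # each page contributes its evidence relative to the prior
        log_odds += log(p / (1 - p)) - prior_log_odds
    return 1 / (1 + exp(-log_odds))

# Even twenty pages, each only 58% sure on its own, push the
# combined estimate near certainty.
print(combine([0.58] * 20))
```

With 100,000 weakly supporting pages, the combined probability saturates at essentially 1, which is the effect described above.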
It has total recall of those 200 million pages and can
analyze the cross-implications in three seconds.
It's just a first step, but that is the kind of capability
that we're leading to.
My vision of search engines in the not-too-distant future is
that they won't wait to be asked questions.
They'll be listening in on our conversations--
what we say, what we write, what we read, what we hear, if
you let them, and I believe people will, because it'll be
useful to have an intelligent assistant like this--
and it will anticipate your needs.
So suddenly, it might pop up and say, oh, just yesterday,
you were talking about, if only we could have a better
bioavailable form of phosphatidylcholine.
Well, here's a study that came out 36 minutes ago on just
that topic.
If it sees you struggling in a conversation to come up with
the name of that actress, right in your field of view on
your Google Glass, you'll get information about that
actress, not even having asked for it.
It can just see you needed that.
Obviously, that could be annoying if it's really
information you don't want.
That'll be the key.
But actually, we very much want this information.
I mean, people are constantly Googling something at dinner.
But we don't even want to have to put that information in.
An intelligent assistant should be
listening to what we say.
So some of the best evidence for the thesis I've come up
with on how the neocortex works has emerged just as I
was sending off the book.
Actually, four times I was about to send it to the
publisher and said, no, wait, this great
research just came out.
I've got to include this.
And we actually delayed the book as a result.
The publisher wasn't happy with that, but these were
great pieces of research to support the thesis.
The thesis is that there are modules in the brain that are
comprised of about 100 neurons, and that each one of
these recognizes a pattern and is capable of wiring itself,
literally with a wire, biological wire, an axon and a
dendrite, to other modules to create this hierarchy that the
neocortex represents.
And that hierarchy doesn't exist when
the brain is created.
Even before we're born, we start building this, one
conceptual layer at a time.
And that's actually the secret of human thought, the ability
to build these modules.
One piece of research that came out just as I was sending
off the book is that the neocortex is comprised of
these modules of about 100 neurons.
The wiring and structure of those 100
neurons is not plastic.
It's stable throughout life.
It is the connections between these modules which are
dynamic and plastic and are created.
And our neocortex creates our thoughts, but our thoughts
create our brain, in terms of these connections and the
patterns that each module learns.
This is different from neural nets, and I've never been a
fan of neural nets.
I was one of the pioneers of hierarchical hidden Markov
models in the '80s and '90s and used that for speech
recognition, and today, that is the dominant technique in
speech recognition, speech synthesis, character
recognition.
It's one of the popular techniques in natural language
understanding.
And it's really the closest mathematical equivalent to
what I'm talking about.
This 100-neuron module is more complex than one neuron in a
neural net.
It's capable of dynamically learning a pattern,
recognizing the pattern even if parts of it
are occluded or missing.
It can actually tell other pattern recognizers to expect
a pattern because it's almost recognized a pattern and
another part's coming, and so lower-level pattern
recognizers should be alert for that.
It's capable of creating these connections up
and down the hierarchy.
And that's much more complex than one
neuron in a neural net.
So the neural net is based on one neuron, either a model of
it that we have in synthetic neural nets or, in theory, the
neural net that the brain represents.
And that's not the right building block, either for AI
or for the brain.
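As a rough illustration of the module behavior described above, firing on a partial match and signaling lower levels to expect the missing parts, here is a toy sketch. The class, threshold, and names are invented for illustration; this is not Kurzweil's actual model:

```python
class PatternModule:
    """Toy pattern recognizer: fires on a partial match, reports what's missing."""

    def __init__(self, pattern, threshold=0.6):
        self.pattern = pattern       # sub-patterns this module expects
        self.threshold = threshold   # fire even when some parts are occluded

    def recognize(self, observed):
        hits = [p for p in self.pattern if p in observed]
        score = len(hits) / len(self.pattern)
        if score >= self.threshold:
            # signal lower-level recognizers to be alert for the missing parts
            missing = [p for p in self.pattern if p not in observed]
            return True, missing
        return False, []

m = PatternModule(["a", "p", "p", "l", "e"])
fired, expect = m.recognize(["a", "p", "l"])  # occluded input still fires
print(fired, expect)  # True ['e']
```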
There was this recent research at Google that showed an
ability to do image recognition with a neural net
without any labeling of the data.
It was impressive, but it only achieved about 15% accuracy.
I think a much better model is based on not having the neuron
as the building block.
The building blocks are these modules.
And we have about 30 billion neurons in the neocortex.
There's about 300 million of these pattern recognizers.
Now a word about the neocortex.
It is this part of the brain where we do
hierarchical thinking.
It can think in hierarchies, and it can solve problems in
hierarchies.
And it can see a solution to a problem and then reapply it in
situations that might be a little different.
And only mammals have a neocortex.
So 100 million years ago, these mammals emerged,
rodent-like creatures with a neocortex that was the size of
a postage stamp, about as thin as a postage stamp, flat and
smooth, and it covered the brain.
But it was capable of a certain amount of this
hierarchical thinking.
So these mammals could solve problems quickly, or could see
another member of its species solve a problem and learn it
in a matter of hours.
Animal species without a neocortex could learn, too,
but not in the course of one lifetime.
They had pre-programmed behaviors.
Those behaviors could evolve in biological evolution, but
that would take thousands of lifetimes.
So over thousands or tens of thousands of years, they could
gradually change their behavior.
And that was OK, because the environment
changed that slowly.
So there would be environmental changes that
required an accommodation in behavior over
thousands of years.
But then 65 million years ago, there was a cataclysmic event
that happened very quickly called the Cretaceous
extinction event.
And we see archaeological evidence of
that around the globe.
There's a layer that represents this catastrophic
change in the environment that happened very quickly.
And there are theories about that having
to do with a meteor.
But it's very clear that there was a sudden change in the
environment at that time.
And the animals that didn't have a neocortex and that
couldn't adjust quickly, thousands of those
species died out.
That's when the mammals took over their ecological niche of
small- and medium-sized animals.
So to anthropomorphize, biological evolution said,
wow, this neocortex is a pretty good design, and it
kept growing it in size through increasingly complex
mammal species.
By the time it got to primates, it's no longer a
smooth sheet.
It's got all these convolutions and ridges to
increase its surface area.
It's still a flat structure.
If you take the human neocortex, you can stretch it
out into a flat structure the size of a large table napkin.
It's about the same thickness.
It's still thin.
But it has so many convolutions and ridges, it
actually comprises 80% of the brain.
And that's where we do our hierarchical thinking.
So if you take a primate, it also has one with convolutions
and ridges, but the innovation in Homo sapiens is we have
this large forehead to squeeze in more of this neocortex.
And that greater quantity was the enabling factor for the
qualitative leap we had of being able to make inventions
like language and art and science and Nexus phones.
[LAUGHTER]
RAY KURZWEIL: So how does this work?
Well, for one thing, our ability to actually see inside
the brain and confirm these types of insights is growing
exponentially.
Different types of brain scanning are growing at an
exponential rate.
We can now see your brain create your thoughts.
We can see your thoughts create your brain.
We can see individual links and neural connections forming
in real time.
And another piece of research that came out just as I was
sending off the book is that at the beginning of life,
there is this very uniform wiring of the neocortex,
basically connections in waiting.
So you have one pattern recognizer, and it wants to
connect itself, let's say, to one at a
higher conceptual level.
It has to actually connect a wire.
There's actually a grid there, like avenues and streets of
Manhattan, and it finds the right avenue and the right
street and makes the final connections.
And we actually see that process in real time now,
inside a living brain.
And then it actually finalizes that connection.
And then the connections that are never used die away.
About half of the connections that exist in a newborn
actually go away by the time you're two years old.
So to take a simplified example of how this works,
these pattern recognizers learn patterns, and there are
different levels of the conceptual hierarchy.
And there's a lot of redundancy, which is one way
it deals with uncertainty and one way it can deal with
variations in patterns.
So I have a bunch of pattern recognizers that have learned
to recognize a crossbar in a capital A.
And that's all they care about.
Some exciting new technology or a pretty girl could walk
by, it doesn't care.
But when it sees a crossbar in a capital A,
it goes, whoa, crossbar!
[LAUGHTER]
And it sends up a signal--
I believe this is not on or off.
The whole system is a network of probabilities.
But it says there's a high probability we
have a crossbar here.
At that next higher level, it's getting different inputs,
and it might then fire with a high probability--
ah, capital A. And at a higher level, a pattern recognizer
might think, hm, there's a very good probability that the
word "apple" is printed here.
And in another part of the visual cortex, a pattern
recognizer might go, oh, an actual physical apple.
And in another region, a pattern recognizer might go,
oh, someone just said the word "apple." Go up a number of
levels further, where you're not getting input at a higher
level of conceptual hierarchy, so it's connected to multiple
senses, it may see a certain fabric, smell a certain
perfume, hear a certain voice, and say, oh, my wife has
entered the room.
At a much higher level, there are pattern recognizers that
go, oh, that was funny.
That was ironic.
She's pretty.
Those are actually no more complicated, except for the
fact that they exist at this very high level of the
conceptual hierarchy.
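To make the mechanism concrete, here is a toy sketch in Python of pattern recognizers passing probability signals up a conceptual hierarchy. The class, the averaging rule, the threshold, and all the numbers are illustrative assumptions, not Kurzweil's actual model; a real module would also weight its inputs and send expectation signals back down.

```python
# Toy sketch of pattern recognizers passing probability signals up a
# conceptual hierarchy. Names and numbers are illustrative only.

class PatternRecognizer:
    """Fires with some probability when its lower-level inputs fire."""

    def __init__(self, name, inputs, threshold=0.5):
        self.name = name
        self.inputs = inputs        # lower-level recognizers
        self.threshold = threshold  # how much evidence it needs to fire
        self.probability = 0.0      # leaves have this set externally

    def update(self):
        if not self.inputs:
            return self.probability
        # Average the evidence from below; tolerating a weak or missing
        # part is what lets it recognize partially occluded patterns.
        evidence = sum(r.update() for r in self.inputs) / len(self.inputs)
        self.probability = evidence if evidence >= self.threshold else 0.0
        return self.probability

# Strokes of a capital A feed the letter-level recognizer.
left = PatternRecognizer("left stroke", [])
right = PatternRecognizer("right stroke", [])
crossbar = PatternRecognizer("crossbar", [])
letter_a = PatternRecognizer("capital A", [left, right, crossbar])

left.probability, right.probability, crossbar.probability = 0.9, 0.8, 0.95
print(letter_a.update())    # high probability: likely an A

crossbar.probability = 0.0  # occlude the crossbar
print(letter_a.update())    # weaker signal, but the A still fires
```

Stacking more of these layers, with "apple" recognizers listening to letter recognizers, gives the flavor of the hierarchy described above.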
I talk in the book about the brain surgery of a young girl.
She was conscious, which you can be in brain surgery,
because there's no pain receptors in the brain.
Whenever they stimulated a particular point in her
neocortex, she would laugh.
And they thought they were triggering a laugh reflex, but
they quickly discovered, no, they're triggering the
perception of humor.
She just found everything hilarious when they
triggered that spot.
You guys are so funny, standing there, was her
typical comment.
But only when they were triggering that spot.
And these guys weren't funny.
[LAUGHTER]
They had one spot--
and we obviously have many of them--
but they had found one that would represent the
perception of humor.
Where does this hierarchy come from?
Well, we're not born with it, obviously.
That's what we're creating from the moment we're born, or
even before that.
I have a one-year-old grandson now, and he's laid down
several layers.
We can lay down, really, one conceptual layer at a time.
And eventually we run through the 300 million.
One of the reasons children can learn, say, a new language
so easily is that they have all this virgin neocortex.
By the time we're 20, it's really filled up.
But that doesn't mean we can't learn new things.
We have to forget something to learn something new.
We don't necessarily have to completely forget it, because
there's a lot of redundancy, and when we're first starting
to learn something, there's lots of redundancy and a lot
of the patterns are imperfect.
And over time, we can actually perfect that model and have
less redundancy and still have a good recognition.
So we can free up neocortical recognizers for a new subject.
But some people are better at that than others.
I mean, the rigidity that some people have in learning a new
idea is reflected in this ability or inability to learn
new material.
Now is 300 million a lot or a little?
It was a lot compared to other primates, who
have somewhat less.
And that was the enabling factor for science and art and
music and language and so on.
But it's also a big limitation, if you recognize
the limitations we have in learning new knowledge.
We ultimately will be able to expand the neocortex.
So I'm working now on synthetic neocortexes, not in
the near future to be directly connected to the brain, but I
think if you go out to the 2030s, we will
be able to do that.
And we actually don't have to put them inside the brain.
We just have to put the gateways to it in the brain.
If I do something interesting on this-- do a search, do a
language translation, ask Google Now a question--
it doesn't take place in this rectangular box.
It goes out to the cloud.
And if I suddenly need 1,000 processors or 10,000 for a
tenth of a second, the cloud provides that, to the limits
of the law of accelerating returns at that point in time.
Ultimately, we'll be able to do that with the brain and
have more than 300 million pattern recognizers, that run
faster, that can be backed up.
And that's where we're headed.
We'll have a greater quantity.
The last time we added a greater quantity, we got this
qualitative leap of creating art, science, and language.
And we'll be able to make another qualitative leap with
that expansion.
Already, these devices represent brain-expanders, but
we'll have much more powerful means of doing that.
So just a few comments.
Peter will appreciate this.
But we are destroying jobs at the bottom of the skill
ladder, creating new jobs at the top.
So we're investing more in education.
We spend 10 times as much on K-12 per capita in constant
dollars, compared to a century ago.
We had 50,000 college students in 1870.
We have 12 million today.
There's a big revolution coming, which Peter can tell
you about, in higher education.
It's fostered by this tremendous boom in both
intelligent computation and communication.
We've tripled the amount of education a child gets in the
developing world, doubled in the developed world, over the
last half-century.
Larry Page and I actually worked on a major energy study
for the National Academy of Engineering.
And the cost of solar energy--
both PV and total installed costs-- is coming down.
As a result, the total amount of solar energy is on an
exponential climb.
It's doubling every two years.
Right now it's 1%.
So people go, oh, 1%, that's a fringe player.
It's kind of a nice thing to do, but it's not really
significant.
Just the way that they dismissed the internet or the
Genome Project when they were at 1%.
From 1%, it's only seven doublings, at two years each, to 100%.
This was adopted by the National Academy of
Engineering.
I presented it recently to the prime minister of Israel.
And he was in my class at the Sloan School in the '70s, and
he said, Ray, do we have enough
sunlight to do this with?
And I said yes, we have 10,000 times more than we need.
After we double seven more times, we'll be
using one part in 10,000.
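The doubling arithmetic quoted here is easy to check with a few lines of code. This is just a sketch of the figures from the talk (a 1% starting share, doubling every two years), not an energy forecast:

```python
# Check the doubling arithmetic from the talk: starting from a 1% share
# of world energy and doubling every two years, how long until 100%?
share = 1.0    # percent of world energy from solar (figure from the talk)
years = 0
doublings = 0
while share < 100.0:
    share *= 2
    years += 2
    doublings += 1
print(doublings, years, share)  # 7 doublings, 14 years, ending at 128%
```

Seven doublings overshoot 100% slightly (1% times 2^7 is 128%), which is why seven is enough.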
So there's a whole other discussion about
resources in general.
We're running out of resources if we limit ourselves to
19th-century First Industrial Revolution technologies like
fossil fuels.
But in terms of water, energy, food--
with vertical agriculture, another looming revolution
coming over the next decade, we actually will
have a lot of resources.
So this is the progress we've made in longevity over the
last 1,000 years.
We've quadrupled life expectancy.
It's doubled in the last 200 years.
And this is from the linear progression
of health and medicine.
It's now become an information technology.
This'll go into high gear once we really master these
techniques of biotechnology.
There's many revolutions coming.
But the most important one is that what's unique about the
human species is that we have knowledge.
And there's many different ways to measure knowledge, but
no matter how you look at it, it's growing exponentially.
So we're doubling the amount of knowledge, by some
measures, at say every 13 months.
And that's actually what's hard to do.
We have a much better means already of finding knowledge
with Google and other tools.
That's going to get more and more powerful.
But we need that added intelligence in order to
actually continue this exponential growth of
information technology.
So Google is still very well-positioned for fantastic
growth in importance and success over the
next several decades.
Thank you very much.
[APPLAUSE]
BORIS DEBIC: Thank you, all.
We'll do a Q&A, and please use the audience microphone.
AUDIENCE: Hi, my name is Jason.
I actually work in PR, so I think a lot about perceptions
of this kind of progress.
And I'm thinking about how people have a tremendous
tendency to sort of take for granted whatever the next
progression is, or to sort of underestimate to correct for
whatever improvements there are.
What do you think about that, the fact that you see, if you
measure all these things--
and I'm thinking of Steven Pinker's work on violence
dropping over time as well.
People tend to sort of correct for that and take it for
granted, and say, well, dismiss it at each stage.
Do you think that is just sort of built in to us?
RAY KURZWEIL: People have an amazing ability to accept new
changes and then assume that the world's
always been that way.
If you described self-driving cars a decade ago, people
would dismiss that as science fiction.
Now that we have it, people shrug.
Well, it's not in everybody's hands, but actually, I've
talked to people who've ridden in the Google cars who
quickly gain more confidence in the AI driver
than in a human driver.
Maybe that's not saying much.
People very quickly then take it for granted.
I travel around the world.
I don't get that here in Silicon Valley, but as I go to
other parts of the world, there is a common perception
that the world's getting worse.
And a big subset of that school of thought is that
technology's responsible for it.
I'd like to show them this graph.
So this is the world in 1800.
And these are countries.
The x-axis is the wealth of nations, income per person.
On the y-axis is life expectancy.
And over the last 200 years, there's been dramatic
improvement in both.
A little bit of movement in the First Industrial
Revolution, but as you get to the 20th century, there's a
wind that carries all these nations towards the upper
right-hand corner of the graph.
And there's still a have/have-not divide, but the
countries that are worst off at the end of the process are
still far better off than the countries that were best-off
at the beginning.
And I shouldn't say "end of the process," because the
process actually is going to go into high gear as we get to
the more mature phases of AI and three-dimensional printing
and biotechnology and so on.
But people forget what the world was like three or four
years ago, before we had social networks
and wikis and blogs.
And during that SOPA strike, people were shocked that they
would have to do without these brain extenders, which we
didn't have just a few years ago.
So yes, people take changes for granted.
But also, they very readily adopt them.
You describe the world 20, 30 years from now, and people
say, well, I don't know if I want to opt in for that.
It doesn't happen that way.
It happens through thousands of product announcements and
research advances.
But when there's a somewhat better treatment for cancer,
there's no philosophical discussion.
Is it really a good thing to extend longevity?
People adopt and celebrate it if it works.
So we will continue to make this kind of progress.
I think it's a moral imperative that we do.
There are downsides.
That's a whole other discussion.
But overall, as you can see, life is continuing to get
better in all the ways that we can measure-- health, wealth,
education, so on.
AUDIENCE: You mentioned one of the great innovations of the
humans is having a lot more space up there for neocortex.
What about some of our Earth-mates, like whales?
They've got a lot more space up there.
RAY KURZWEIL: Right.
There are some other animals--
actually, the whale brain is bigger.
We have one other enabling factor, which is this
opposable appendage, which enabled us to take our ideas
and our visions and say wow, I could take that branch and I
could strip it of the leaves, and I could put a point on it,
and I could create this tool.
And then we had the opposable appendage to do that.
And then we had the tool to create other tools.
And these other species don't have that opposable appendage.
I mean, we see some clumsy ability to move things around,
say, by an elephant, which also has a big brain.
But it's actually not clear that the neocortex,
specifically, is bigger in a whale.
But it's pretty comparable.
They don't have this opposable appendage that enabled us.
So those two things enable us to create technology.
And technology has reshaped the world.
AUDIENCE: But then what about sort of deep thought, as
opposed to just being able to shape the world?
Right?
So taking us on a slightly different vector.
RAY KURZWEIL: It depends what you mean by deep thought.
I mean, the fact that we can develop these greater number
of levels of abstraction--
the neocortex, in most other mammals, is really devoted to
the challenges of being a raccoon or whatever.
And we've been able to actually then create these
abstract levels.
So we still have the old brain, and so the neocortex is
a great sublimator.
And it can take the sex and aggression of the old brain
and convert it into poetry and music.
And that then becomes an end in itself.
And we've really been the only species to master these
additional levels, which you would consider deep thought.
But it's in an extension of the neocortical hierarchy.
AUDIENCE: It seems pretty clear that the size of the pie
for 3-D printing is growing significantly, such that,
like, I've already made a couple
investments in that market.
And I'm wondering if based on your research, you've
identified any other markets where you see the size of the
pie growing so much, where if you make a broad play across
the industry, that it's nearly guaranteed to grow.
[LAUGHTER]
RAY KURZWEIL: I think search is very well-positioned.
[LAUGHTER]
RAY KURZWEIL: Even though it may seem to be saturated, its
role in our lives is not.
'Cause search is going to become much more intelligent.
Our knowledge bases continue to expand, and we can really
use this as an intelligent assistant to help guide us, to
actually help us solve problems and be more of an
assistant as we make search more intelligent.
And it's not just the way we traditionally think of search.
It's this whole world of knowledge.
And Google is very much committed to knowledge in all
of its different forms and in finding intelligent ways to
find that information and use it.
So that's very well-positioned.
Virtual reality is going to become a big deal.
Google has an interest in that.
The project Glass, Google Glass, will be a first step.
But ultimately, I mean, this is--
actually, I like the big screen, but it's actually
still pretty little.
It's still like looking at the world through a keyhole.
I've got this big screen--
AUDIENCE: Check out Ingress, if you haven't yet.
RAY KURZWEIL: Of real reality.
And we will be online all the time, with augmented reality,
and just used to looking at people and having pop-ups tell
us who they are.
And just telling us their name will be very useful.
That'll be a killer app.
[LAUGHTER]
AUDIENCE: Hi.
So I had a question.
Once we have these pattern recognizers that we can access
remotely, obviously, a best of breeds will emerge and
everyone will want to copy the best, most accurate, most
efficient one.
At that point, if I did that, would I still be me?
RAY KURZWEIL: I talk about that in the book.
There are three great philosophical questions--
consciousness, free will, and identity.
And you're asking about the identity issue.
And I think, in my view, identity comes from a
continuity of pattern.
People say, well, no, Ray.
You're this physical stuff.
You're flesh and blood.
That's actually not true.
I'm completely different physical stuff than I was six
months ago.
And I go through that in the book.
All these different cells die and are recreated.
OK.
The neurons persist, but the parts of the neuron, like the
tubules and the actin filaments and all of these,
turn over-- some in five hours, some in five days.
And we're completely different stuff a few months later.
So we're like a river.
Charles River goes by my office.
Is that still the same river it was yesterday?
It's completely different water, but the pattern has a
continuity, so we call it the same river.
We're the same thing.
Now we can augment that pattern by, say, introducing
non-biological parts to it.
And I think it's very clear if that's done in a continuous
manner, it's very analogous to what's happening naturally,
which is that we're constantly changing the stuff and
gradually changing the pattern, but there's a
continuity of pattern, and that's the
nature of our identity.
So I talk about that in that chapter.
AUDIENCE: Hi.
Could you comment on the progress in the field of
nanotechnology since you wrote "Singularity?"
RAY KURZWEIL: What was the last?
AUDIENCE: Could you just comment on the progress in the
field of nanotechnology since you wrote "The Singularity is
Near?"
RAY KURZWEIL: Well, there's been--
nanotechnology is a further-off revolution than
biotechnology.
But there have been advances in our ability to create small
structures, which are being applied, actually, to
electronic devices.
And electronics is clearly nanotechnology.
The feature sizes are approaching 20 nanometers,
which is like 100 carbon atoms.
We're starting to build three-dimensional structures.
So there's definitely been a lot of technology there.
MEMS, there are MEMS devices now that are under 100
nanometers, 'cause it's using the same technology as
semiconductors.
There are experiments with devices in the human body.
There are dozens of experiments of
blood-cell-sized devices that are nanoengineered doing
therapeutic interventions in animals.
I think that's a further evolution than the biotech.
Biotech is really here.
It's kind of on the experimental cutting edge.
Like if you've had a heart attack and want to fix your
heart, you actually can't do that here.
It's not FDA-approved.
It will be soon, but right now you have to go
to Israel or Thailand.
So it's kind of on the edge, but it's very close at hand.
Nanotechnology is still, I think, late 2020s for those
types of applications.
AUDIENCE: I hope this doesn't come across as a flaky
question, but--
RAY KURZWEIL: No question is flaky.
AUDIENCE: In your research, have you found the same law of
accelerating returns in happiness, fulfillment,
satisfaction?
RAY KURZWEIL: Well, this is actually a similar question to
the first one, in that our expectations
are constantly changing.
If you talk to a caveman or woman thousands of years ago,
they would say, gee, if I could just have a bigger
boulder to keep the animals out of my cave and prevent
this fire from going out, I would be happy.
Well, don't you want a better website?
[LAUGHTER]
RAY KURZWEIL: So we don't even know what we want until
somebody invents these ideas.
And our expectations of what should be
are constantly changing.
People who are poor today still generally have access to
refrigerators and to communications and clothing.
You go back several hundred years ago, even a middle-class
person only had one shirt before there was automation in
the textile industry.
So our expectations of what it takes to be happy change.
I think people are happier, because a much higher
percentage of the population gets part of their
satisfaction and definition in life from their work.
Not everybody, apparently.
I was interested by this French strike where they were
very upset at extending the retirement age from 60 to 62.
And I thought, gee, these people really must not like
their work.
But then I realized that I had retired when I was five,
because I'm really doing what I love to do.
And I think that should be the objective of work.
And many more people have the opportunity to do that.
Work done in the information sector, people really have a
passion for it, whereas 100 years ago, they were just glad
if they could earn a living.
But it's a moving frontier.
And I think that's a good thing, and that's part of what
propels humanity forward, is we're constantly
questing for more.
And more doesn't necessarily mean greater quantity of
physical things.
It could be just more music and more opportunity to have
relationships, which social networks gives us the
opportunity to do, and so on.
AUDIENCE: So with the increase in knowledge work, it requires
a lot of knowledge transfer between humans.
Do you envision any efficient methods of knowledge transfer
between humans, beyond language?
RAY KURZWEIL: Could you speak a little louder?
I'm missing some words.
AUDIENCE: Do you envision any efficient methods of knowledge
transfer between humans?
Not like reading books or anything, just beaming.
RAY KURZWEIL: Yeah, well, when we can have massively
distributed communication points in a neocortex, it
could provide a higher-bandwidth way of
communicating.
But we have to appreciate that there's actually a very
challenging translation job from one neocortex to another.
I talk about this in the book.
If you could actually get this information from someone
else's neocortex at any bandwidth, and even process it
quickly, you'd have no idea what it means, because a given
pattern recognizer, say, fires with a higher probability.
But you can only interpret that based on the ones that
are connected to it.
And each of those you can only understand by the ones
connected to it, all the way down the hierarchy.
You'd have to actually have a complete dump of most of their
neocortex to understand it.
And so just--
it's not like we would readily understand someone else's
neocortex, even if you could transfer that information
without translating it.
We have a translation mechanism, which is language.
So we could take thoughts from one neocortex, even though
it's very different from someone else's, because we've
each built this hierarchy, and actually communicate a thought
that the other person can understand.
That's what language enables us to do.
We could perhaps do some automatic translation, just
like we translate languages now, from one neocortex to
another and provide higher-bandwidth connection.
I mean, it's something we could speculate once we're
able to do that in the 2040s.
AUDIENCE: Excuse me, if you've already covered this.
I was way in the back, and it was a little hard to hear you,
but do we have software engineering stuff to model
these clusters of neurons and create these models already?
RAY KURZWEIL: Well, the closest that we've had is
these hierarchical hidden Markov models, which as I
mentioned, have become a common technique in AI.
They're missing certain things, in that generally the
hierarchy is fixed.
So I mean, I began pioneering this in the '80s, and we did
it for speech recognition, and then we added simple natural
language understanding and we had some fixed levels of
spectral features, phonemes, words, and then simple
syntactic structures.
But it was relatively fixed.
It could prune some elements, some of these recognizers, if
they weren't used.
But it didn't actually self-organize, in terms of
creating the connections, which is really the essence of
what the neocortex does.
If you want to get into a better level of natural
language understanding, you need to be able to do that,
because one of the features of language is that it doesn't
just have two or three fixed levels of hierarchy.
Language reflects the hierarchy of the neocortex.
It can have many different levels.
And you really need to model quite a few levels in order to
make semantic sense of language.
And we need to be able to
dynamically build that hierarchy.
But it's interesting, actually, that I think there's
a mathematical similarity between this hierarchical
hidden Markov model technique and what happens in the brain.
And it's not because we were trying to emulate the brain in
the '80s and '90s, because we didn't really understand--
we didn't have enough information to confirm that
that's how the brain works.
It's just that technique worked, and biological
evolution evolved neocortexes that way for the same reason.
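A flat hidden Markov model shows the core recursion; the hierarchical variant discussed here nests models like this inside each state, and would additionally learn the hierarchy itself. The states, symbols, and probabilities below are made-up illustration values, not from any real speech recognizer:

```python
# Minimal forward-algorithm sketch for a flat HMM. The hierarchical
# hidden Markov models described in the talk nest models like this one
# inside each state. All probabilities are illustrative.

states = ["vowel", "consonant"]
start = {"vowel": 0.5, "consonant": 0.5}
trans = {"vowel": {"vowel": 0.3, "consonant": 0.7},
         "consonant": {"vowel": 0.6, "consonant": 0.4}}
emit = {"vowel": {"a": 0.8, "p": 0.2},
        "consonant": {"a": 0.1, "p": 0.9}}

def forward(observations):
    """Total probability of an observation sequence under the HMM."""
    # Initialize with the start distribution and the first emission.
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        # Sum over all paths into each state, then emit the observation.
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][obs]
                 for s in states}
    return sum(alpha.values())

print(forward(["a", "p", "a"]))  # likelihood of the sound sequence a-p-a
```

The fixed levels Kurzweil mentions (spectral features, phonemes, words) would each be a layer of such models, with the higher layer's observations being the lower layer's recognized symbols.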
AUDIENCE: Speaking of assuming that the world will not change
a lot, I'd like you to comment on the non-technical aspect of
this change.
We all assume that 20 years from now we'll be living in a
stable democracy, with free market and
a capitalist economy.
Those changes that you predict, how much of that are
they going to change, politically and economically?
RAY KURZWEIL: Well, I do think the distributed communication
technologies we have is democratizing.
I discussed that in my first book, which I
wrote in the '80s.
I said the Soviet Union would be swept away by the
then-emerging social network, which was communication over
Teletype machines and fax machines, by this clandestine
network of hackers.
And so people heavily criticized that.
At that time, the Soviet Union was a mighty nuclear
superpower.
It's not going to get swept away by a
few Teletype machines.
But that's exactly what happened in the 1991 coup
against Gorbachev.
The authorities grabbed the central TV and radio station,
which had always worked in the past, 'cause it kept everybody
in the dark.
But now this clandestine network, this sort of first
social network, kept everybody in the know.
And it just swept away the totalitarian government.
And with the rise of the web, there was a great wave of
democratization in the late '90s.
We see the effect of social networks today.
It is democratizing for people to share knowledge at that
grassroots level, see how other people live and think.
It really is able to harness the wisdom of crowds rather
than the wisdom of a lynch mob.
And we've also democratized the tools of creativity.
So a kid with a notebook computer could start Facebook.
And a couple of kids in a late-night dorm room challenge
started Google.
And we see now younger kids doing quite dramatic things,
teenagers with tools that everybody has. A kid in Africa
with a smartphone has access to more knowledge than the
president of the United States had 15 years ago.
So these are having an impact on our economy, on society.
Here's a very dramatic demonstration of the political
power of this organized group of people who are able to
communicate.
The SOPA legislation was headed for bipartisan passage.
Both Democrats and Republicans were for it.
It was going to be passed, one of the few examples where
there was agreement on a piece of legislation.
Well, users saw that as a threat to the freedom on the
web and organized this demonstration.
Within hours, it was dead.
So I mean, just think of the tremendous political power
that was demonstrated there.
Google participated in that, but suddenly Wikipedia becomes
a great political power.
It just snaps its fingers, and--
So I think that these are very positive phenomena.
And it's affecting society.
It's affecting communication.
People criticize online education now because it's
missing a social component that you have with a campus.
But we can actually do a better job with social
networks and social communication online, because
we overcome the geographic barrier.
AUDIENCE: I'm struggling to find the exact words-- sorry.
But I wanted to ask you whether you see power--
not as in electronic power, but power as in control over
individuals--
as something that's exponentially accelerating, in
terms of the state or security apparatus versus freedom.
It seems like both are accelerating quite quickly,
and there's this tension between the power that's being
centralized versus of the individual.
RAY KURZWEIL: Well, you can imagine--
these tools can be used to spy and
invade privacy.
The recent scandal going on in Washington raises issues of
the privacy of emails and so on.
On the other hand, I think it's also been very
democratizing, as I mentioned.
I think it's led to greater freedom.
I think that trend has been more pronounced: the ability
of individuals to quickly organize around a set of ideas
they support, in the name of freedom.
And we've seen the democratizing effect of
decentralized electronic communication.
Privacy is certainly a very important issue here.
I think Google does a good job of it, but it's something that
has to be a high priority.
If any service like Facebook or something did not keep
faith with its users, in terms of these social issues, there
would be a reaction.
And it raises complicated issues.
Take privacy: it used to be enough to just close the
curtains in your bedroom, and now we have 1,000 virtual
windows on our lives.
Nonetheless, I think we're doing pretty well.
I almost never encounter someone who says, oh, my life
was ruined by the loss of privacy because of all these
new technologies.
Now, maybe those people don't talk to me.
But I think we're doing OK.
But it is making these once-routine issues much more
complicated.
AUDIENCE: So when you were talking about the digitization
of or the information age of manufacturing with printers,
3-D printers, I had a question about resources.
Like if you print with, like, hydrocarbons, for example,
then you might need an oil rig and a ship and a truck to get
the resources from the Earth into the printer, and that
takes a lot of time and a lot of fuel.
Whereas if you build with plants, then you need to farm
somewhere, and again, you need transport to where the
printers are.
So how do you see things changing?
RAY KURZWEIL: There aren't that many resources you need to
create these physical things.
By far, most hydrocarbons are burned as fossil fuels.
Yes, some of those products are now used
in chips, for example.
But that's a very small part of the output.
And if we can actually create the right products at the
destination in a distributed manner, and then also recycle
these materials, that's a pretty efficient
use of them.
Peter Diamandis has a book called "Abundance" that deals
with, in detail, this issue of energy, these kinds of
resources for three-dimensional printing--
water, food, building materials.
And as we adopt new technologies, we actually find
that there's a tremendous abundance of resources, like
10,000 times more sunlight than we need to meet all our
energy needs.
Larry Page was fond of pointing out that a mile or two down
that way, there's a lot of heat in the Earth, geothermal
energy, which is also thousands of times more than we need.
And there are a lot of other scenarios.
So as we find new 21st-century technologies, we
can tap these resources.
There are new water technologies, like Dean Kamen's
Slingshot machine, which are decentralized and can create
clean water very inexpensively, and vertical
agriculture to grow food in AI-controlled buildings,
recycling all the nutrients.
So instead of the wasteful and ecologically damaging
food-production techniques we use now, we can create
food very inexpensively.
Including in vitro-cloned meat--
I mean, why grow meat from animals when we only need a
small part of the animal?
We know how to, in fact, grow the muscle tissue, which is
what we want.
It's been demonstrated.
This can be done in AI-controlled buildings at
very low cost, ultimately.
AUDIENCE: But do you think that, say, a computer will be
able to be printed with resources that
were sourced locally?
RAY KURZWEIL: There's actually some experimental
three-dimensional printing systems that can print
electronics.
Being able to actually print electronics in a distributed
manner-- there are pros and cons to it.
An argument can be made that
computation and communication are very universal, so let's
have plants that really do that efficiently and then
customize it for people with software.
That's the model we're using now.
I mean, it's remarkable how powerful a computational
communication device you can get for very little money.
And that's continuing to improve.
BORIS DEBIC: I hope you all made some new neocortical
connections today which will be useful in your work and in
your lives.
And please join me in thanking Dr. Kurzweil.
[APPLAUSE]