Our algorithm takes a three-camera stereo video as input, as you can see in the top row,
and generates the depth and velocity maps of all objects in the scene.
The depth map is shown at the bottom left:
the closer an object is, the lighter grey it appears.
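The near-is-light convention mentioned above is typically implemented by normalizing inverse depth into a greyscale range. A minimal sketch of one such mapping (the depth bounds `d_min` and `d_max` are assumptions for illustration, not values from the video):

```python
import numpy as np

def depth_to_gray(depth, d_min=1.0, d_max=50.0):
    """Map metric depth to 8-bit grey: near objects light, far objects dark.

    Uses inverse-depth normalization so that the closest allowed depth
    (d_min) maps to 255 and the farthest (d_max) maps to 0.
    """
    d = np.clip(depth, d_min, d_max)
    norm = (1.0 / d - 1.0 / d_max) / (1.0 / d_min - 1.0 / d_max)
    return (norm * 255).astype(np.uint8)

# A 2x2 toy depth image in meters: nearest pixel becomes white, farthest black.
gray = depth_to_gray(np.array([[1.0, 50.0], [10.0, 25.0]]))
```

Inverse depth (rather than raw depth) is a common choice here because it spends more of the grey range on nearby objects, where stereo is most accurate.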
The velocity map is shown at the bottom right;
the small red lines indicate the motion of the image regions.
What makes our algorithm unique is that, instead of calculating the depth map for each frame independently,
we track the objects in the scene, so our calculation also uses temporal information from previous frames.
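The video does not say how the temporal fusion works internally, but the idea of combining a per-frame depth estimate with information from previous frames can be sketched with a simple recursive filter (the blending weight `alpha` is an assumed illustrative parameter, not the authors' method):

```python
import numpy as np

class TemporalDepthFilter:
    """Fuse noisy per-frame depth maps over time.

    Keeps a running estimate and blends each new frame into it,
    so the output is stabler than any single frame alone.
    """

    def __init__(self, alpha=0.3):
        self.alpha = alpha   # weight of the newest frame (assumption)
        self.state = None    # running fused depth estimate

    def update(self, depth):
        depth = np.asarray(depth, dtype=float)
        if self.state is None:
            self.state = depth.copy()          # first frame initializes state
        else:
            # Exponential smoothing: new = a * frame + (1 - a) * history.
            self.state = self.alpha * depth + (1.0 - self.alpha) * self.state
        return self.state
```

A real tracking system would also compensate for object and camera motion before blending; this sketch only shows why using previous frames reduces per-frame noise.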
One great advantage of this approach is that we can build a coherent model of the complete environment,
just like those built in computer games.
What you see now is not the virtual reality of a computer game.
It is a rendered view built up by our algorithm.
Since we have a complete, coherent world model, we can render the scene from any sequence of camera positions.
What you see here is just one sample camera path.
By the way, we used this model for the bouncing balls at the beginning of the clip.
The bounces of the balls were generated by applying the rules of physics within this reconstructed environment.
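Simulating a ball bouncing on the reconstructed ground can be done with a few lines of elementary physics. A toy sketch (gravity, restitution, and time step are illustrative assumptions; the ground is taken as the plane y = 0):

```python
def simulate_bounce(y0, v0=0.0, g=9.81, restitution=0.8, dt=0.01, steps=300):
    """Vertical bounce of a ball dropped from height y0 (meters).

    Integrates gravity with a fixed time step; on ground contact the
    velocity is reflected and scaled by the restitution coefficient,
    so each bounce is lower than the last.
    """
    y, v = y0, v0
    heights = []
    for _ in range(steps):
        v -= g * dt          # gravity accelerates the ball downward
        y += v * dt          # move the ball by its current velocity
        if y < 0.0:          # hit the ground plane
            y = 0.0
            v = -v * restitution  # reflect and damp the velocity
        heights.append(y)
    return heights
```

In the actual system the ground would come from the reconstructed depth model rather than a flat plane at y = 0, but the bounce logic is the same idea.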
This is called augmented reality.
Besides being a good toy, our technology can be used to solve many practical problems.
We will show five such applications in what follows.
You can already see the first one: road detection built on top of our technology.
As you can see, it is quite accurate. And remember,
since we have the velocities as well, we can build a complete, coherent model of the environment.
We can then see what this world model looks like with the road detection applied, and that is what you will see now.
So we are flying freely through this reconstructed world model. And if we fly up high enough,
we can see how our map matches Google Maps.