Wednesday, November 28, 2007

3D Reconstruction: UCSD's Latest Contribution


Augmented reality walkthroughs of a building or a city, online alignment of a camera network, and 3D navigation through a collection of photographs are just a few of the potential applications of an algorithm created in the computer science department at UC San Diego's Jacobs School of Engineering.
For this work, UCSD computer scientists earned one of three honorable mentions for the David Marr Prize, the best-paper award at ICCV, the International Conference on Computer Vision, one of the world's premier computer vision conferences, which took place last month in Rio de Janeiro, Brazil.

“The algorithm is very practical. We have performed real-life 3D reconstructions. In fact, the significance of the paper lies in our approach to designing a theoretically correct algorithm that also works well in practice,” explained Manmohan Chandraker, the first author on the award-winning ICCV paper and a fifth-year Ph.D. student in the Department of Computer Science and Engineering at UCSD’s Jacobs School of Engineering.

A longer story on this research is still in the works. In the meantime, check out this low-fidelity video that gives a taste of the work.
If you have trouble with the streaming file above, you can watch the video on YouTube.


1 comment:

Unknown said...

Interesting... couldn't watch it all due to stream issues.

Photosynth impressed me a lot though.

I see potential uses for these things in seemingly markerless outdoor tracking.

* Any densely populated area is likely to have a lot of pictures online (Flickr, etc.).

* Those pictures can be used to reconstruct a 3D model of the landscape, with the most distinctive features likely to be the most accurately constructed.

* This 3D model can then be matched back onto the real world, effectively using real landmarks as markers for AR tracking (a rough sketch of this step follows the list).

* The result should be good outdoor Augmented Reality syncing, in theory without the need for any GPS either.
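
For concreteness, here's a minimal Python sketch of that matching step (assuming OpenCV and NumPy; the landmarks, camera intrinsics and detections are all synthetic stand-ins I made up, nothing from the UCSD paper). It registers a camera against a reconstructed point cloud using PnP with RANSAC:

import numpy as np
import cv2

# Synthetic stand-ins for landmarks recovered by a photo-based reconstruction
# (world coordinates); a real system would load its reconstructed point cloud.
rng = np.random.default_rng(0)
landmarks_3d = rng.uniform(-5.0, 5.0, size=(50, 3)).astype(np.float32)
landmarks_3d[:, 2] += 15.0  # keep the points in front of the camera

# Assumed pinhole intrinsics for the live camera (would come from calibration).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Ground-truth pose, used here only to fabricate noisy 2D "feature detections".
rvec_true = np.array([0.05, -0.10, 0.02])
tvec_true = np.array([0.30, -0.20, 1.00])
detections_2d, _ = cv2.projectPoints(landmarks_3d, rvec_true, tvec_true, K, None)
detections_2d += rng.normal(0.0, 0.5, detections_2d.shape)  # detector noise

# Recover the camera pose from the 2D-3D matches; RANSAC rejects bad matches,
# which matching against crowd-sourced reconstructions produces in abundance.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    landmarks_3d, detections_2d.astype(np.float32), K, None,
    reprojectionError=3.0)
print("pose found:", ok, "| inliers:", 0 if inliers is None else len(inliers))
print("estimated translation:", tvec.ravel(), "vs true:", tvec_true)

The RANSAC wrapper is the important design choice: matching live features against a reconstruction built from internet photos yields plenty of outliers, and plain PnP would be thrown off by a single bad correspondence.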

Easier said than done, of course.