Tuesday, October 16, 2007

Context is Next



Looking at the photo above, you see a person on a tennis court, wielding a tennis racket and chasing a...lemon. Right?

Wrong. You don’t think it’s a lemon. You know it's a tennis ball.

A computer might not be so perceptive. A computer with the latest image labeling algorithms would have no problem making the following list of objects for the photo above: person, tennis racket, tennis court, lemon.

The only lemon I can imagine on this tennis court is in the water bottle of the line judge.

Computer scientists at UC San Diego and UCLA are looking to give automated image labeling systems a little more common sense. That common sense comes in the form of context, and they are squeezing some of it out of a little-known widget from Google Labs called Google Sets.

“We think our paper is the first to bring external semantic context to the problem of object recognition,” said computer science professor Serge Belongie from UC San Diego's Jacobs School of Engineering.

Belongie and his students (including Carolina Galleguillos -- the lead singer for the band Audition Lab) are presenting their "lemon blaster" this week at ICCV 2007 – the 11th IEEE International Conference on Computer Vision in Rio de Janeiro. The computer scientists show that Google Sets can be used to provide external contextual information to automated object identifiers. The context is added in a post-processing step that comes after the image is split into parts and labeled by a computer.
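To make the idea concrete, here is a minimal sketch of what context-based post-processing can look like. The recognizer scores, the context sets, and the function names below are all made up for illustration (Google Sets was a web tool, not a programming interface), and the researchers' actual formulation is more sophisticated, but the sketch shows the basic trade-off: pick labels that the recognizer likes and that make sense together.

```python
# A hypothetical sketch of context-based re-ranking of object labels.
# Per-segment scores and "related terms" sets are invented for illustration.

from itertools import product

# Hypothetical recognizer output: each image segment gets candidate labels
# with confidence scores (higher is better).
segment_scores = [
    {"person": 0.9, "mannequin": 0.2},
    {"tennis racket": 0.8, "guitar": 0.3},
    {"lemon": 0.6, "tennis ball": 0.5},   # the recognizer's "lemon" mistake
]

# Hypothetical context table: terms that tend to appear together,
# standing in for the lists a Google Sets-style query would return.
context_sets = [
    {"person", "tennis racket", "tennis ball", "tennis court"},
    {"lemon", "lime", "orange", "water bottle"},
]

def context_bonus(labels):
    """Count ordered pairs of chosen labels that share a context set."""
    bonus = 0
    for a, b in product(labels, labels):
        if a != b and any(a in s and b in s for s in context_sets):
            bonus += 1
    return bonus

def best_labeling(segment_scores, weight=0.1):
    """Pick one label per segment, trading off recognizer confidence
    against how well the chosen labels fit together."""
    best, best_score = None, float("-inf")
    for combo in product(*(scores.keys() for scores in segment_scores)):
        appearance = sum(scores[label]
                         for scores, label in zip(segment_scores, combo))
        total = appearance + weight * context_bonus(combo)
        if total > best_score:
            best, best_score = combo, total
    return best

print(best_labeling(segment_scores))
# -> ('person', 'tennis racket', 'tennis ball')
```

With enough weight on context, "tennis ball" beats "lemon" because it co-occurs with "person" and "tennis racket", even though its raw recognizer score is lower.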





A full press release will be available here, on the Jacobs School Web site.

A copy of the paper is available here.

Check out the write-up on the Wired Science blog.
