Snapshots from the UC San Diego Jacobs School of Engineering.
Wednesday, July 30, 2008
Flash Server and Nature paper
The press release for a Nature paper from the bioengineering lab of Jeff Hasty includes an embedded video of growing yeast cells (cooler than it sounds) that is streaming on our flashy new Flash server. It is the first press release video we have streamed this way, and we are hoping Flash will be a cross-platform solution to our video needs. Check it out!
http://www.jacobsschool.ucsd.edu/news/news_releases/release.sfe?id=760
Tuesday, July 29, 2008
Remote Control Face in Voice of San Diego
The Voice of San Diego ran a great story on Jacob Whitehill's computer science research. Whitehill is the guy who can turn his face into a remote control, thanks to a Web cam and some serious computer science can-do. The story is by Darryn Bennett.
Voice of San Diego, July 29 -- Jacob Whitehill, a third-year computer science graduate student at UCSD, and his colleagues are working to make a new generation of robots that would be effective and responsive teachers. They believe the key is to train them to recognize and respond to facial expressions, the way humans do naturally. Whitehill described the demonstration, part of his research at UCSD's Machine Perception Laboratory, as "almost like having a remote control built into your face."
Tuesday, July 22, 2008
Label Reading with a Purpose
The Calit2 Life blog ran a great story from Jacobs School computer science professor Serge Belongie. The post is republished below. This is an update on a story that I wrote about last fall, when Belongie and his collaborators presented their ideas at a conference.
Soylent Grid Is People!
One of the big challenges in solving large scale object recognition problems is the need to obtain vast amounts of labeled training data. Such data is essential for training computer vision systems based on statistical pattern recognition techniques, for which a single example image of an object is unfortunately not enough.
For my research group, this has been especially evident in our work on the Calit2 GroZi project, which has the goal of developing assistive technology for the visually impaired. This includes tasks such as recognizing products on grocery shelves and reading text in natural scenes. (Check out this YouTube video for a bit of background on the project.)
In the past, this type of labor-intensive data labeling task would fall on hapless grad students or undergrad volunteers. (As an example, last winter my TIES group and CSE graduate student Shiaokai Wang manually labeled all the text on hundreds of product packages, all for the meager reward of pizza and soda.)
Recently, however, a movement has emerged that harnesses Human Computation to solve such labeling tasks using a highly distributed network of human volunteers. As an example, CMU's reCAPTCHA system applies this principle to the task of transcribing old scanned documents, wherein the image quality is low enough to throw off conventional Optical Character Recognition (OCR) software.
Think of it like this. Every time you solve a CAPTCHA, i.e., those distorted words you have to type in at websites like MySpace and Hotmail to prove that you're not a spambot, you're using your powerful human intelligence to solve a small puzzle. Systems like reCAPTCHA, the Mechanical Turk, and the Soylent Grid (currently under development by Calit2 affiliate Stephan Steinbach, CSE graduate student and CISA3 project member Vincent Rabaud, visiting scholar Valentin Leonardi, and TIES summer scholar and ECE undergraduate Hourieh Fakourfar) seek to redirect this human problem-solving ability toward useful tasks.
Hourieh's summer project aims to adapt our fledgling Soylent Grid prototype to the text annotation task described above. A critical requirement for such a system to work is steady traffic of web visitors looking for content.
Some day, when the Soylent Grid is a household name, we'll have strategic partnerships set up with big-name websites that serve up 1000s of CAPTCHAs per hour. Until then, we've got our work cut out for us to find some traffic to get our experiment started. As a humble starting point, we're going to outfit the pdf links on my group's publications page so that people who click on the link get served a labeling task before they can download the pdf. From there, we plan to move on to bigger and better websites with increased levels of traffic.
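To make that concrete, here is a minimal sketch of what a labeling gate in front of a pdf link might look like. This is purely illustrative, not the actual Soylent Grid code: the Flask framework, the route names, and the in-memory annotation store are all stand-ins.

    # Hypothetical sketch: serve a labeling task before a pdf download.
    # Not the real Soylent Grid; names and storage are invented for illustration.
    from flask import Flask, redirect, request, send_file, session, url_for

    app = Flask(__name__)
    app.secret_key = "dev-only"  # placeholder; sessions track who has labeled

    annotations = []  # toy in-memory store for collected labels

    @app.route("/pub/<paper_id>.pdf")
    def download(paper_id):
        # Visitors who have not yet completed a labeling task get redirected.
        if not session.get("labeled"):
            return redirect(url_for("label_task", next=request.path))
        return send_file(f"pdfs/{paper_id}.pdf")

    @app.route("/label", methods=["GET", "POST"])
    def label_task():
        if request.method == "POST":
            annotations.append((request.form["image_id"], request.form["answer"]))
            session["labeled"] = True
            return redirect(request.args.get("next", "/"))
        # A bare-bones form showing the image to be labeled.
        return ('<form method="post"><img src="/static/task.png">'
                '<input type="hidden" name="image_id" value="task_001">'
                '<input name="answer"><input type="submit"></form>')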
Now you may ask, how do we prevent visitors from inputting nonsense instead of providing useful annotation? As with reCAPTCHA, the solution is to use a pair of images, one with known (ground truth) annotation, the other unknown. In this way, the visitor's response on the known example can be used to validate the response on the other example. Moreover, the response of multiple visitors on the same image can be pooled to form confidence levels, and when this level is high enough, an image can be moved from the "unknown" stack to the "known" stack.
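For the curious, that pooling scheme fits in a few lines of Python. The thresholds and names below are invented for illustration; the real system's parameters would have to come out of exactly the kind of experiments described next.

    # Illustrative sketch of the known/unknown validation scheme described above.
    from collections import defaultdict

    MIN_RESPONSES = 5            # assumed minimum number of visitor responses
    CONFIDENCE_THRESHOLD = 0.9   # assumed agreement level for promotion

    known_labels = {"img_001": "MILK"}    # ground-truth ("known") stack
    pending = defaultdict(list)           # "unknown" stack: image id -> answers

    def submit_pair(known_id, known_answer, unknown_id, unknown_answer):
        """Keep the unknown answer only if the known image was answered correctly."""
        if known_answer.strip().upper() != known_labels[known_id]:
            return False  # failed the ground-truth check; discard both answers
        pending[unknown_id].append(unknown_answer.strip().upper())
        promote_if_confident(unknown_id)
        return True

    def promote_if_confident(image_id):
        """Pool visitor responses; promote the image once agreement is high enough."""
        answers = pending[image_id]
        if len(answers) < MIN_RESPONSES:
            return
        top = max(set(answers), key=answers.count)
        if answers.count(top) / len(answers) >= CONFIDENCE_THRESHOLD:
            known_labels[image_id] = top   # move to the "known" stack
            del pending[image_id]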
Naturally, many questions remain. How do we make these labeling tasks sufficiently atomic and easy to complete so that the web visitor doesn't get frustrated? How much ground truth labeling is needed in a given image database to "prime the pump"? How do we deal with ambiguity in the labeling task or in the user input? Some initial thoughts on these and other questions are put forward in Stephan and Vincent's position paper from ICV'07, but there's nothing like a messy real-world experiment to get real-world answers to these questions!
Wednesday, July 9, 2008
Swurl of UC San Diego Alumni Activity
A pair of computer science BS/MS students from UC San Diego's Jacobs School of Engineering are abuzz in a Swurl of activity...literally.
Ryan Sit and Jonathan Neddenriep have a new startup called Swurl that promises to "Bring your web life together."
Check out the Swurl story on TechCrunch. According to the TechCrunch story, Ryan Sit explained that Swurl isn’t so much about keeping your friends constantly updated on your current activities (à la Friendfeed). Instead, Swurl is more like an automatically generated blog and scrapbook that you’ve created for your friends and family.
Ryan Sit's "diaper changing and server tuning" Swurl is here.
Ryan Sit is no stranger to startups. He is one of the founders of DropShots, a pioneering service that allows family and friends to share their photos and video online. Read the DropShots story in Pulse, the Jacobs School of Engineering alumni magazine.
Monday, July 7, 2008
Machine Perception Lab at UCSD in ABC News
Lee Dye from ABC wrote a great story about UC San Diego computer science PhD student Jacob Whitehill's automated facial expression recognition research. The first few paragraphs are below. Read the full story here.
A Computer That Can Read Your Mind
Facial Recognition Program Makes For Better Virtual Teachers
By LEE DYE
July 7, 2008 —
One of these days your computer will probably know what you are thinking before you know it yourself. The human face conveys emotions ranging from fear to confusion to lying, sometimes involuntarily, and scientists are figuring out how to make use of those expressions.
At the University of California at San Diego, for example, a graduate student has developed a program that will slow down or speed up a video based entirely on changes in his facial expressions, like a slight frown or a smile. The purpose of this particular program is to make robotic instructors more responsive to their students' needs, but there are many other potential applications for the work.
"The project I'm working on is how can we use machine perception, including things like facial expressions, to improve interactivity between students and teachers," said Jacob Whitehill, a computer science doctoral candidate. "That includes human teachers, and also robotic teachers, which is something our lab is increasingly interested in."
Whitehill has tested his ...
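The story doesn't include code, but the control idea is simple enough to sketch: map a per-frame expression score onto a playback-rate multiplier. Everything below is a hypothetical stand-in, not the Machine Perception Lab's actual system; detect_smile and the constants are invented for illustration.

    # Speculative sketch: drive video playback speed from a smile score in [0, 1].
    def playback_rate(smile_score, neutral=0.5, gain=1.5):
        """Above 'neutral' the video speeds up; below it (a frown or a look of
        confusion) it slows down so the viewer can catch up."""
        rate = 1.0 + gain * (smile_score - neutral)
        return max(0.25, min(2.0, rate))  # clamp to a sensible range

    # Hypothetical main loop; detect_smile() stands in for the lab's
    # expression-recognition system, which is not public here.
    # for frame in video_frames:
    #     player.set_speed(playback_rate(detect_smile(frame)))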
Wednesday, July 2, 2008
Undergrads are Bioinformatics Pioneers
UC San Diego bioinformatics undergrads published their pioneering work on "comparative proteogenomics" in the July 2008 issue of the prestigious journal Genome Research. Abstract here.
You can watch a video of two of the students here on YouTube. Right now, you can also read the release and watch the video on the Jacobs School news site.
Also, the Howard Hughes Medical Institute has a nice story about the same project. Much of the funding that enabled computer science professor Pavel Pevzner to put UC San Diego undergrads right at the cutting edge of bioinformatics research came from Howard Hughes. Read the grant announcement story here.
Tuesday, July 1, 2008
Calit2 Life
Calit2 just launched Calit2.Life -- a super cool group blog. Here is the press release. The initial roster of contributors includes roughly one dozen senior Calit2 personnel, communications staff and directors of affiliated research centers. Over time, more contributors will be added to ensure that Calit2.Life remains representative of people and activities on both campuses.
Check it out!