Electrical engineering professor Gert Lanckriet recently won an NSF CAREER grant for his work on developing automated ways to search, annotate and generally make sense of the ever-growing sea of digital music. Also, check out some cool related stories (and video!) from Lanckriet and his collaborators. More info on the funded project is below:
CAREER: An Integrated Framework for Multimodal Music Search and Discovery.
A revolution in music production and distribution has made millions of songs instantly available to virtually anyone on the Internet. However, a listener looking for "dark electronica with cello" or "music like U2's" without knowing a relevant artist or song name, or a musicologist wanting to search through large collections of unfamiliar ethnic music, faces serious challenges. Novel music search and discovery technologies are needed to help users find the content they want.
The non-text-based, multimodal character of Internet-wide information about music (audio clips, lyrics, web documents, images, band networks, etc.) poses a new and difficult challenge to existing database technology, which depends on unimodal, text-based data structures. This project addresses two fundamental research questions at the core of this challenge: (1) the automated annotation of (non-text-based) audio content with descriptive keywords; and (2) the automated integration of the heterogeneous content of multimodal databases, to improve music search and discovery on the Internet or in a personal database. The resulting architecture combines the automation and scalability of machine learning with the effectiveness of human computation, engaging music professionals and enthusiasts around the world.
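To make research question (1) concrete, here is a minimal, purely illustrative sketch of content-based auto-tagging: each clip is reduced to a fixed-length audio feature vector (for example, summary statistics of MFCCs), and one binary classifier per tag estimates how well that tag describes the clip. This is a generic textbook approach, not the project's actual system; the tag names, feature dimensions, and data below are invented stand-ins, and real feature vectors would come from actual audio analysis. A small late-fusion helper at the end hints at question (2): combining per-modality scores, say from audio and web text, into a single ranking.

```python
# Illustrative sketch only: auto-tagging audio clips with descriptive keywords.
# All tags, dimensions, and data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

TAGS = ["electronica", "cello", "dark", "upbeat"]
N_CLIPS, N_FEATURES = 200, 26   # e.g., mean + std of 13 MFCCs per clip

# Stand-ins for real audio features and human-supplied tag labels.
X = rng.normal(size=(N_CLIPS, N_FEATURES))           # clip feature vectors
Y = rng.integers(0, 2, size=(N_CLIPS, len(TAGS)))    # 1 = tag applies to clip

# One independent binary classifier per tag (a simple one-vs-rest scheme).
models = {tag: LogisticRegression(max_iter=1000).fit(X, Y[:, j])
          for j, tag in enumerate(TAGS)}

def annotate(clip_features, top_k=2):
    """Return the top_k most probable tags for an unseen clip."""
    scores = {tag: m.predict_proba(clip_features.reshape(1, -1))[0, 1]
              for tag, m in models.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

def fused_score(audio_prob, text_prob, w=0.5):
    """Late fusion: weighted combination of per-modality tag scores."""
    return w * audio_prob + (1 - w) * text_prob

new_clip = rng.normal(size=N_FEATURES)
print(annotate(new_clip))   # e.g., ['dark', 'cello'] (output varies)
```

In the framework described above, the labeled examples such classifiers learn from are exactly what human computation (music professionals and enthusiasts tagging songs) would supply at scale.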
The research addresses questions at the core of multimedia information retrieval in general, enabling the design of a new generation of expressive and flexible retrieval systems for multimodal databases, with applications to music discovery, video retrieval, indexing multimedia content on the home PC, and more.