Open Access
User preference-aware music video generation based on modeling scene moods
Author(s) -
Rajiv Ratn Shah,
Yi Yu,
Roger Zimmermann
Publication year - 2014
Publication title -
CiteSeerX (The Pennsylvania State University)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.1145/2557642.2579372
Subject(s) - computer science, multimedia, android (operating system), entertainment, cloud computing, mobile device, active listening, human–computer interaction, world wide web, art, communication, sociology, visual arts, operating system
Due to technical advances in mobile devices (e.g., smartphones, tablets) and wireless communications, people can now easily capture user-generated videos (UGVs) anywhere, anytime, and instantly share their real-life experiences via social websites. Enjoying videos has become a very popular form of entertainment. One challenge is that many mobile videos lack appealing audio in the soundtrack captured with the video. In this demonstration, to overcome this issue we propose a music video generation/creation system (an Android app and a backend system) that aims to make UGVs more attractive by generating scene-adaptive and user-preference-aware music tracks. Our system takes geographic categories, visual content, and user listening history into account. In particular, the sequences of geographic categories and visual features are integrated into an SVMhmm model to predict video scene moods. The music genre, as a user preference, is also exploited to personalize the recommended songs. We believe this is the first work that predicts scene moods from a real-world video dataset collected from users' daily outdoor recordings to facilitate user-preference-aware music video generation. Our experiments confirm that our system can effectively combine objective scene moods and individual music tastes to recommend appealing soundtracks for videos. Our Android app sends only recorded sensor data and a few keyframes of a UGV to a cloud service (the backend system) to retrieve recommended music tracks; it is therefore bandwidth-efficient, since the transmission of video data is not required for analysis.
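The recommendation step described above, combining an objective scene mood with the user's genre preference, could be sketched as a simple weighted ranking. This is an illustrative assumption, not the authors' actual implementation: the `Song` type, the `recommend` function, and the weight `alpha` are all hypothetical names introduced here.

```python
# Hypothetical sketch: rank candidate songs by a weighted sum of
# (a) how well the song's mood matches the predicted scene mood, and
# (b) how well its genre matches the user's listening-history preference.
from dataclasses import dataclass

@dataclass
class Song:
    title: str
    genre: str
    mood: str  # dominant mood label of the track

def recommend(songs, scene_mood, preferred_genres, alpha=0.7):
    """Return songs sorted by alpha * mood_match + (1 - alpha) * genre_match.

    alpha weights the objective scene mood against the user's genre
    preference; its value here is an assumption for illustration.
    """
    def score(song):
        mood_match = 1.0 if song.mood == scene_mood else 0.0
        genre_match = 1.0 if song.genre in preferred_genres else 0.0
        return alpha * mood_match + (1 - alpha) * genre_match
    return sorted(songs, key=score, reverse=True)

songs = [
    Song("Track A", "rock", "exciting"),
    Song("Track B", "classical", "calm"),
    Song("Track C", "pop", "exciting"),
]
ranked = recommend(songs, scene_mood="exciting", preferred_genres={"pop"})
print([s.title for s in ranked])  # Track C ranks first: it matches both mood and genre
```

In the paper's pipeline, the scene mood itself would come from the SVMhmm prediction over sequences of geographic categories and visual features; here it is simply passed in as a label.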
