Open Access
Capturing Visual Experiences
Author(s) - Brian Curless
Publication year - 2006
Publication title - CiteSeerX (The Pennsylvania State University)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.20.45
Subject(s) - computer science , computer vision , artificial intelligence , motion (physics) , parallax , computer graphics (images) , human–computer interaction , geometry , mathematics
Why do we take pictures and videos? Often, the answer is that we hope to capture moments in time, so that we can later recall and savor them. Digital cameras and camcorders are making it ever easier to record these moments, but something is often lost. Photographs freeze time and space, losing both the sense of motion in a scene and the freedom of motion available to the original viewer. Video is usually of lower resolution and finite duration, and it still gives up viewpoint freedom. Furthermore, the task of sorting through the reams of image and video data that an individual records is becoming simply burdensome.

In this talk, I will describe research aimed at helping the user to better capture and re-experience the moment. One approach is to build complex hardware that acquires an immersive representation of the scene, allowing virtual flythroughs and the like. The work I will describe is far less heavy-handed: it is based on handfuls of images and simple video captures, and it has the more modest goal of representing subtle effects such as rippling water and small parallax. In fact, I argue that these subtle effects are quite powerful and can better reflect the experience available to a person observing a scene than, say, arbitrary flythroughs.

The specific projects I will present span a range of inputs and outputs. From a single photograph of a natural setting, I will show how one can add pleasing motions such as swaying branches and rippling water [2]. By taking a handful of nearby photographs or attaching a multi-lens array to a camera, I will demonstrate how small parallax and synthetic aperture effects become available to the user [3]. Panoramas can be synthesized from a set of photos sharing an optical center; I will describe how this idea can be extended to panning videos to create panoramic video textures [1].
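The synthetic aperture effect mentioned above can be illustrated with a classic shift-and-add sketch: views from nearby viewpoints are aligned on a chosen focal plane and averaged, so objects on that plane stay sharp while the rest blurs. This is a minimal illustration under stated assumptions, not the actual method of [3]; the function name, the integer-shift approximation, and the pixel-offset parameterization are assumptions introduced here.

```python
import numpy as np

def synthetic_aperture(images, offsets, depth):
    """Shift-and-add refocusing sketch.

    images  : list of HxW arrays captured from nearby viewpoints
    offsets : per-image (dy, dx) camera offsets, in pixels at unit depth
    depth   : scalar selecting which scene plane is brought into focus
    """
    acc = np.zeros_like(images[0], dtype=float)
    for img, (dy, dx) in zip(images, offsets):
        # Integer shift aligning this view onto the common focal plane.
        shifted = np.roll(img, (round(dy * depth), round(dx * depth)),
                          axis=(0, 1))
        acc += shifted
    # Averaging the aligned views emulates a large synthetic aperture:
    # misaligned (off-plane) content averages out into blur.
    return acc / len(images)
```

Sweeping `depth` refocuses the result on different planes; in practice sub-pixel shifts (interpolation rather than `np.roll`) give smoother refocusing.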
Next, I will show some recent progress in combining the spatial resolution and ease of editing of photographs with the high temporal resolution of video. Finally, I will describe a new interface for video browsing that leverages the conventions of hand-drawn storyboards: a single image illustrates a video clip, and intuitive spatial dragging on that summary image explores the time axis of the video [4].