Generic interactive pixel‐level image editing
Author(s) -
Liang Y.,
Gan Y.,
Chen M.,
Gutierrez D.,
Muñoz A.
Publication year - 2019
Publication title -
Computer Graphics Forum
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.578
H-Index - 120
eISSN - 1467-8659
pISSN - 0167-7055
DOI - 10.1111/cgf.13813
Subject(s) - pixel, computer science, tone mapping, image editing, artificial intelligence, computer vision, heuristics, contrast (vision), image (mathematics), computer graphics (images), high dynamic range, dynamic range, operating system
Several image editing methods have been proposed over the past decades, achieving brilliant results. The most sophisticated of them, however, require additional information per pixel. For instance, dehazing requires a transmittance value per pixel, while depth-of-field blurring requires a depth or disparity value per pixel. This additional per-pixel value is obtained either through elaborate heuristics or through additional control over the capture hardware, which is very often tailored to the specific editing application. We instead propose a generic editing paradigm that can serve as the basis of several different applications. This paradigm generates both the needed per-pixel values and the resulting edit at interactive rates, with minimal user input that can be iteratively refined. Our key insight for obtaining per-pixel values at such speed is to cluster pixels into superpixels; however, instead of assigning a constant value per superpixel (which yields accuracy problems), we fit a mathematical expression for the pixel values within each superpixel: in our case, an order-two multinomial. This leads to a linear least-squares system, effectively enabling accurate per-pixel values at fast speeds. We illustrate this approach in three applications: depth-of-field blurring (from depth values), dehazing (from transmittance values) and tone mapping (from local brightness and contrast values), and our approach proves both interactive and accurate in all three. Our technique is also evaluated on a common dataset, where it compares favorably.
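The per-superpixel fit described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes a single superpixel with sparse samples of the target quantity (e.g. depth or transmittance), fits the order-two multinomial v(x, y) = a + bx + cy + dx² + exy + fy² by linear least squares, and evaluates it at every pixel. The function names (`fit_superpixel_multinomial`, `evaluate_multinomial`) are hypothetical; the paper's actual system presumably couples superpixels and user input into one larger least-squares problem.

```python
import numpy as np

def fit_superpixel_multinomial(coords, values):
    """Fit v(x, y) = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 to sparse samples
    inside one superpixel via linear least squares.

    coords: (n, 2) array of pixel (x, y) positions of the samples.
    values: (n,) array of the per-pixel quantity (e.g. depth, transmittance).
    """
    x, y = coords[:, 0], coords[:, 1]
    # Design matrix: one column per multinomial term.
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coeffs

def evaluate_multinomial(coeffs, coords):
    """Evaluate the fitted multinomial at every pixel of the superpixel,
    yielding a smoothly varying per-pixel value instead of a constant."""
    x, y = coords[:, 0], coords[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    return A @ coeffs

# Example: recover a smooth depth ramp inside one 32x32 superpixel
# from 12 sparse samples (synthetic data, for illustration only).
rng = np.random.default_rng(0)
samples = rng.uniform(0, 32, size=(12, 2))                   # sample positions
depth = 0.5 + 0.02 * samples[:, 0] + 0.01 * samples[:, 1]    # ground-truth ramp
coeffs = fit_superpixel_multinomial(samples, depth)
grid = np.stack(np.meshgrid(np.arange(32), np.arange(32)), -1).reshape(-1, 2)
dense = evaluate_multinomial(coeffs, grid.astype(float))     # per-pixel depth
```

A quadratic per superpixel keeps the system linear in its coefficients, which is what makes the least-squares solve (and hence interactive rates) plausible even with many superpixels.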
