
Image and video compression/decompression based on human visual perception system and transform coding
Author(s) - Chi Yung Fu
Publication year - 1997
Language(s) - English
Resource type - Reports
DOI - 10.2172/489146
Subject(s) - perception, decompression, human visual system model, artificial intelligence, computer science, computer vision, compression, computer graphics (images), image (mathematics), mathematics
The quantity of information has been growing exponentially, and the form and mix of information have been shifting toward images and video. However, neither storage media nor the available bandwidth can accommodate the vastly expanding requirements for image information. A vital, enabling technology here is compression/decompression. Our compression work is based on a combination of feature-based algorithms inspired by the human visual-perception system (HVS) and transform-based algorithms (such as our enhanced discrete cosine transform and wavelet transforms), together with vector quantization and neural networks. All our work was done on desktop workstations using the C++ programming language and commercially available software. During FY 1996, we explored and implemented enhanced feature-based algorithms, vector quantization, and neural-network-based compression technologies. For example, we improved the feature compression of our feature-based algorithms by a factor of two to ten, a substantial improvement. We also obtained promising results when applying neural networks to several video sequences. In addition, we investigated objective measures to characterize compression results, because traditional measures such as the peak signal-to-noise ratio (PSNR) do not fully characterize the results, since they do not take into account the details of human visual perception. We have successfully used our one-year LDRD funding as seed money to explore new research ideas and concepts; the results of this work have led us to obtain external funding from the DoD. At this point, we are seeking matching funds from DOE to match the DoD funding so that we can bring such technologies to fruition. 9 figs., 2 tabs.
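As context for the transform-coding component mentioned in the abstract, the sketch below shows the standard forward 2-D DCT-II on an 8x8 pixel block, the conventional transform that DCT-based coders build on. The report's "enhanced" DCT is not described here, so only the textbook transform is shown; the function name and array layout are illustrative assumptions, not taken from the report.

```cpp
#include <cmath>

constexpr int N = 8;  // block size used by typical DCT-based image coders

// Forward 2-D DCT-II (orthonormal form) of one N x N block of pixel values.
// out[u][v] receives the transform coefficient for spatial frequency (u, v).
void dct2d(const double in[N][N], double out[N][N])
{
    const double pi = std::acos(-1.0);
    for (int u = 0; u < N; ++u) {
        for (int v = 0; v < N; ++v) {
            double sum = 0.0;
            for (int x = 0; x < N; ++x) {
                for (int y = 0; y < N; ++y) {
                    sum += in[x][y]
                         * std::cos((2 * x + 1) * u * pi / (2.0 * N))
                         * std::cos((2 * y + 1) * v * pi / (2.0 * N));
                }
            }
            // Normalization factors: sqrt(1/N) for the DC term, sqrt(2/N) otherwise.
            const double au = (u == 0) ? std::sqrt(1.0 / N) : std::sqrt(2.0 / N);
            const double av = (v == 0) ? std::sqrt(1.0 / N) : std::sqrt(2.0 / N);
            out[u][v] = au * av * sum;
        }
    }
}
```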
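The abstract argues that PSNR alone does not capture perceptual quality. For reference, here is a minimal sketch of the conventional PSNR computation being criticized, for 8-bit grayscale images; the function name and flat pixel-array representation are assumptions made for illustration.

```cpp
#include <cmath>
#include <cstdint>
#include <limits>
#include <vector>

// Conventional peak signal-to-noise ratio (in dB) between an original
// 8-bit grayscale image and its reconstruction, both stored as flat
// pixel arrays of equal length.
double psnr(const std::vector<std::uint8_t>& original,
            const std::vector<std::uint8_t>& reconstructed)
{
    double mse = 0.0;
    for (std::size_t i = 0; i < original.size(); ++i) {
        const double diff = static_cast<double>(original[i]) -
                            static_cast<double>(reconstructed[i]);
        mse += diff * diff;
    }
    mse /= static_cast<double>(original.size());

    if (mse == 0.0) {
        // Identical images: PSNR is unbounded.
        return std::numeric_limits<double>::infinity();
    }
    const double peak = 255.0;  // maximum value of an 8-bit pixel
    return 10.0 * std::log10((peak * peak) / mse);
}
```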