Open Access
Neural network retrieval of cloud parameters of inhomogeneous clouds from multispectral and multiscale radiance data: Feasibility study
Author(s) -
Cornet Céline,
Isaka Harumi,
Guillemet Bernard,
Szczap Frédéric
Publication year - 2004
Publication title - Journal of Geophysical Research: Atmospheres
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.67
H-Index - 298
eISSN - 2156-2202
pISSN - 0148-0227
DOI - 10.1029/2003JD004186
Subject(s) - radiance, remote sensing, standard deviation, pixel, multispectral image, cloud computing, environmental science, computer science, physics, optics, mathematics, geology, statistics, operating system
In this paper, we investigated the feasibility of retrieving cloud parameters of inhomogeneous and fractional clouds from simulated multispectral and multiscale radiometric data using mapping neural networks (MNNs). A radiometric database prepared for neural network training consists of area-averaged radiance data for two pixel scales, (1 km × 1 km) and (0.25 km × 0.25 km). The cloud parameter retrieval assumes a vertically uniform, inhomogeneous, and fractional cloud defined by six parameters: the mean and standard deviation of optical thickness, the mean and standard deviation of effective radius, the fractional cloud cover, and the cloud top temperature, all defined at the scale of the cloud parameter retrieval. The retrieval procedure comprises two separate steps: the first handles the angular interpolation of the radiance data and their correction for surface reflection and thermal emission contributions; the second performs the cloud parameter retrieval itself from the interpolated and corrected radiance data. The input vector to the retrieval MNNs consists of 8 radiometric quantities together with necessary ancillary data such as surface temperature and ground albedo. The 8 radiometric quantities are 5 area-averaged radiances over the (1 km × 1 km) pixel and 3 standard deviations of radiance over the (1 km × 1 km) pixel estimated from the (0.25 km × 0.25 km) pixel radiances. After evaluating the performance of the neural networks trained for each step, we tested the whole retrieval procedure on three types of inhomogeneous and fractional clouds: flat-top bounded cascade clouds and flat-top and non-flat-top Gaussian process clouds. All the cloud parameters of these clouds can be retrieved with reasonable accuracy, although the retrieved mean and standard deviation of optical thickness of non-flat-top clouds exhibit some dispersion. Including the (0.25 km × 0.25 km) pixel radiance data as input vector components significantly improved the performance of the cloud parameter retrieval. Finally, we analyzed the consequences of some simplifying assumptions on the retrieved cloud parameters and discussed the perspectives of neural-network-based cloud parameter retrieval.
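The abstract specifies only the shape of the problem: an 8-component radiometric input (5 channel-mean radiances on the 1 km pixel plus 3 sub-pixel standard deviations computed from the sixteen 0.25 km pixels) with ancillary data such as surface temperature and ground albedo, mapped to 6 cloud parameters. The sketch below illustrates that layout with a generic one-hidden-layer feed-forward network; it is not the authors' MNN. The channel count, channel indices, hidden-layer size, and all function and variable names are hypothetical, and the network shown is untrained.

```python
import numpy as np

def build_input_vector(radiances_025km, mean_channels, std_channels,
                       surface_temperature, ground_albedo):
    """Assemble the retrieval input for one 1 km x 1 km pixel (sketch).

    radiances_025km : array of shape (n_channels, 4, 4) holding the sixteen
        0.25 km sub-pixel radiances for each spectral channel (assumed layout).
    mean_channels   : indices of the 5 channels whose 1 km area-averaged
        radiance enters the input vector.
    std_channels    : indices of the 3 channels whose sub-pixel standard
        deviation enters the input vector.
    """
    means = radiances_025km[mean_channels].mean(axis=(1, 2))   # 5 area-averaged radiances
    stds = radiances_025km[std_channels].std(axis=(1, 2))      # 3 sub-pixel standard deviations
    ancillary = np.array([surface_temperature, ground_albedo]) # example ancillary data
    return np.concatenate([means, stds, ancillary])

class MappingNN:
    """Minimal one-hidden-layer feed-forward network (illustrative only)."""

    def __init__(self, n_in, n_hidden=25, n_out=6, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))
        self.b2 = np.zeros(n_out)

    def __call__(self, x):
        h = np.tanh(self.W1 @ x + self.b1)   # hyperbolic-tangent hidden layer
        return self.W2 @ h + self.b2         # 6 outputs: mean/std of optical thickness,
                                             # mean/std of effective radius,
                                             # cloud fraction, cloud top temperature

# Hypothetical usage: 7 spectral channels, sixteen 0.25 km sub-pixels per 1 km pixel.
x = build_input_vector(np.random.rand(7, 4, 4),
                       mean_channels=[0, 1, 2, 3, 4],
                       std_channels=[1, 3, 4],
                       surface_temperature=288.0, ground_albedo=0.1)
params = MappingNN(n_in=x.size)(x)  # random weights; a real MNN would be trained on the radiometric database
```

In the paper's two-step procedure, a separate network performing the angular interpolation and the surface/thermal correction would precede this mapping; that step is omitted here because the abstract does not describe its inputs or outputs in enough detail to sketch.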