A method for cloud classification of AVHRR image data with fractal dimension
Ryoichi Kawada and Mikio Takagi
Institute of Industrial Science, University of Tokyo, Tokyo, Japan

Abstract

The need for automated cloud detection and classification arises from the massive quantities of data produced by satellites; only automated processing can handle this much data. In this paper, we present a method for multispectral cloud classification of Advanced Very High Resolution Radiometer (AVHRR) data, which uses fractal dimension for texture analysis as well as other features such as channel-1 visible reflectivity and channel-4 infrared brightness temperature. Features that measure texture are important, especially for night-time analysis of infrared data (visible features are not available for night-time analysis). These features provide information for distinguishing cumuliform clouds from stratiform clouds or clear regions. In order to represent different textures we calculated the fractal dimension of each pixel in the images, which was found to be more effective than local difference. Use of fractal dimension leads to correct interpretation of coast lines and tidal fronts, which are often mis-classified as clouds when difference is used. The classification is based on the maximum likelihood method, which uses features extracted from AVHRR data of NOAA (National Oceanic and Atmospheric Administration) satellites.

Introduction

In multispectral classification of satellite images, local texture such as variance, standard deviation, and difference is a very important feature, especially at night. However, use of these features often leads to false classification of coast lines and tidal fronts as clouds. In this study we used fractal dimension to represent different textures. Unlike the features described above, the fractal dimension values of border lines between different domains are not as large as those of clouds. Although the calculation time of fractal dimension is generally larger than that of the other features, we computed it pixel by pixel along local lines through the pixel considered, which is much faster than calculation over a block whose center point is the pixel. Thus, we obtained good results almost free of the problems above.

The images used are map images made from Advanced Very High Resolution Radiometer (AVHRR) data of NOAA meteorological satellites. Channel 1 (0.58~0.68 µm; visible) and channel 2 (0.725~1.1 µm; near IR) contain reflectivity, while channel 3 (3.55~3.93 µm; middle IR), channel 4 (10.3~11.3 µm; far IR), and channel 5 (11.5~12.5 µm; far IR) contain brightness temperature.

Fractal dimension
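The exact per-pixel estimator of the paper (eqs. (1)-(2)) is not reproduced here; as a rough illustration of computing a fractal-dimension texture feature along a short local line through each pixel, rather than over a full 2-D block, the following Python sketch uses a Higuchi-style 1-D estimate. The function names, window length, use of a single horizontal line, and the estimator itself are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def higuchi_fd(profile, k_max=4):
    """Fractal dimension of a 1-D brightness profile (Higuchi-style estimate)."""
    x = np.asarray(profile, dtype=float)
    n = len(x)
    lengths = []
    for k in range(1, k_max + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)                    # sub-sampled indices m, m+k, m+2k, ...
            if len(idx) < 2:
                continue
            diff = np.abs(np.diff(x[idx])).sum()        # curve length at scale k
            norm = (n - 1) / ((len(idx) - 1) * k)       # Higuchi normalisation factor
            lk.append(diff * norm / k)
        lengths.append(np.mean(lk))
    log_l = np.log(np.maximum(lengths, 1e-12))          # guard against flat (zero-length) profiles
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), log_l, 1)
    return slope                                        # slope of log L(k) vs log(1/k) gives FD

def fd_image(img, half_len=8):
    """Per-pixel FD along a horizontal line segment centred on each pixel (assumed geometry)."""
    h, w = img.shape
    out = np.ones((h, w))                               # image borders left at FD = 1 (smooth)
    for r in range(h):
        for c in range(half_len, w - half_len):
            out[r, c] = higuchi_fd(img[r, c - half_len:c + half_len + 1])
    return out
```

Because only a short 1-D segment is visited per pixel, the cost stays close to that of a simple local-difference feature, which is what makes the pixel-by-pixel FD calculation described above feasible.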
Figure 2: Original image (NOAA-10, 1987.5.9.1sh, ch1).

Figure 3: (a) The brightness of each pixel in this image represents the value of the local difference around the corresponding pixel in the fig. 2 image. The difference is calculated by eq. (3). (b) The brightness represents the fractal dimension of fig. 2.

Figure 4: Neighborhood pixels considered when calculating difference.

Maximum Likelihood Method

A point-wise maximum-likelihood classifier is used. The features are:

NOAA-10 (day) .......... ch1, ch4, ch1-ch2, ch3-ch4, FD
NOAA-9, 11 (night) .......... ch4, ch3-ch4, ch4-ch5, FD
NOAA-9, 11 (day) .......... ch1, ch4, ch1-ch2, ch3-ch4, ch4-ch5, FD

There are six classification classes: sea, land, high cloud, middle cloud, low cloud, and sun glint. The membership function for class c is defined by eq. (4) in terms of the class-conditional densities

$$
P_i(\mathbf{x}) = \frac{1}{(2\pi)^{N/2}\,\lvert\Sigma_i\rvert^{1/2}}\,\exp\!\left\{-\tfrac{1}{2}\,(\mathbf{x}-\mathbf{m}_i)^{T}\,\Sigma_i^{-1}\,(\mathbf{x}-\mathbf{m}_i)\right\} \qquad (5)
$$

where N is the dimension of the pixel vectors, m is the number of predefined classes (1 ≤ i ≤ m), m_i is the mean vector of class i, and Σ_i is the covariance matrix of class i. A sketch of such a classifier is given after the figure captions below.

Results and discussions

We performed the calculations on a Sequent S81 computer. The original map images have 512 x 512 pixels and have already been calibrated and transformed onto Mercator-projection planes. The distance between a pixel and its neighboring pixel is about 4 km. Each pixel has five data values: the reflectivity of ch1 and ch2 and the brightness temperature of ch3, ch4, and ch5.

Figs. 6(a) and 7-9 show original images and classified images. For comparison, the classification result obtained with difference instead of fractal dimension is shown in fig. 6(b) (difference is defined by eq. (3)). In fig. 6(a), classification is performed with fewer features, so the accuracy is rather lower than in the other results. However, because fractal dimension is used for texture representation, coast lines and tidal fronts are classified into their proper classes, although some pixels in the sea are misinterpreted as land. Fig. 6(b) is derived from the same original image as fig. 6(a), using difference instead of fractal dimension. In this result image, tidal fronts and coast lines are misinterpreted as low clouds. In fig. 7, which is a winter image, snow-covered land is mis-classified as low cloud; if a class for snow-covered land were added, this might be improved. In figs. 8 and 9, which are day images, the classification results are generally good. In fig. 9 sun glint is well detected. However, compared with the classification of night images, the calculation time is rather large, as described in the figure captions, because more features are used in the maximum likelihood classifier. In this classification, supervising data is made for each image, so the process is not completely automatic at this time. However, by making a database of supervising data for several different conditions, such as the four seasons and different times of day, automatic classification can be realized.

Figure 5: (a) Gray levels and classes for figs. 6-8. (b) Gray levels and classes for fig. 9.

Figure 6: Classified images of fig. 2. (a) The features are ch4, ch3-ch4, and FD. Elapsed time on the Sequent S81 computer is about 180 seconds. (b) The features are ch4, ch3-ch4, and difference by eq. (3). Elapsed time is about 90 seconds.

Figure 7: (a) Original image (NOAA-11, 1988.12.14.1h, ch4). (b) Classified image of (a). The features are ch4, ch3-ch4, ch4-ch5, and FD. Elapsed time is about 220 seconds.

Figure 8: (a) Original image (NOAA-10, 1987.5.9.6h, ch2). (b) Classified image of (a). The features are ch1, ch4, ch1-ch2, ch3-ch4, and FD. Elapsed time is about 270 seconds.

Figure 9: (a) Original image (NOAA-9, 1987.5.9.13h, ch2). (b) Classified image of (a). The features are ch1, ch1-ch2, ch3-ch4, ch4-ch5, and FD. Elapsed time is about 310 seconds.
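As referenced above, the following is a minimal sketch of a point-wise Gaussian maximum-likelihood classifier built on densities of the form of eq. (5), with class statistics estimated from supervising (training) pixels. The function names, data layout, and the use of plain log-likelihood comparison (rather than the paper's exact membership function of eq. (4)) are assumptions for illustration.

```python
import numpy as np

def fit_classes(feature_img, labelled_pixels):
    """Estimate mean vector and covariance matrix of each class from supervising pixels.

    feature_img     : (H, W, N) array of per-pixel feature vectors (e.g. ch4, ch3-ch4, FD)
    labelled_pixels : dict mapping class name -> list of (row, col) training coordinates
    """
    stats = {}
    for cls, coords in labelled_pixels.items():
        samples = np.array([feature_img[r, c] for r, c in coords])   # (n_samples, N)
        mean = samples.mean(axis=0)
        cov = np.cov(samples, rowvar=False)
        stats[cls] = (mean, np.linalg.inv(cov), np.linalg.det(cov))
    return stats

def log_density(x, mean, cov_inv, cov_det):
    """Log of the Gaussian density of eq. (5); sufficient for comparing classes."""
    d = x - mean
    n = len(x)
    return -0.5 * (d @ cov_inv @ d) - 0.5 * np.log(cov_det) - 0.5 * n * np.log(2 * np.pi)

def classify(feature_img, stats):
    """Assign every pixel to the class whose density gives the largest log-likelihood."""
    h, w, _ = feature_img.shape
    classes = list(stats)
    out = np.empty((h, w), dtype=object)
    for r in range(h):
        for c in range(w):
            x = feature_img[r, c]
            scores = [log_density(x, *stats[k]) for k in classes]
            out[r, c] = classes[int(np.argmax(scores))]
    return out

# Example usage (hypothetical training coordinates; class names from the paper's six classes):
# stats = fit_classes(features, {"sea": sea_px, "land": land_px, "low cloud": low_px, ...})
# label_map = classify(features, stats)
```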
Conclusion

This study shows that fractal dimension is generally more useful than difference for texture representation, especially for the classification of coast lines and tidal fronts. The calculation time of fractal dimension is larger than that of difference, but by calculating FD along local lines rather than over blocks, we reduced this cost to the extent that the FD calculation can be performed pixel by pixel, which leads to more accurate classification.