Selecting a Density Measure

I observed that the segmentation masks of very thin and very thick clouds often do not vary dramatically, producing similar cloud cover percentages for sky images that do not correspond to similar amounts of solar irradiance reaching the ground. However, when I compared the segmentation masks of the different cloud categories defined in SWIMCAT, I noticed a clear differentiation between the masks of thick clouds and those of the other, lower-density categories. Figure 1 below shows example images and segmentation masks for the four categories. I noted the increased variation and randomness in the thin cloud masks, a quality which can be measured using the Haralick entropy of the image [1]. A higher Haralick entropy corresponds to more regions where black and white border each other, and thus a high level of variation and randomness in the mask. The thick cloud image produces a very clean mask and therefore a very low entropy.

Figure 1: SWIMCAT Sample Images and Segmentation Masks


Computing Haralick Entropy

The Haralick entropy measure is the entropy of a gray level co-occurrence matrix (GLCM) corresponding to a given image. The GLCM of an image tracks how often different values of gray saturation occur together based on a pre-determined displacement vector. A simple example is shown below, based upon the 3×3 pixel image in Figure 2.

Figure 2: Sample Binary Image

For a binary image, there are only four possible co-occurrences across the displacement vector (shown as blue and orange in Figure 2). In this example, the displacement vector is [1, 1], representing a shift of 1 to the right and 1 down. Therefore, the GLCM will be of dimension 2 × 2. The possible combinations are:

  • Black – black
  • Black – white
  • White – black
  • White – white

The symmetric GLCM for this image is constructed as shown in Figure 3. Each value in the matrix tracks the number of times this combination appeared in the image.

Figure 3: Sample Symmetric GLCM

        White   Black
White     2       1
Black     1       4
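The counting procedure above can be sketched in a few lines. The exact pixels of Figure 2 are not reproduced in the text, so the image below is an assumed 3×3 binary image that is consistent with the counts in Figure 3; the function is a plain NumPy stand-in for the MATLAB routine described later.

```python
import numpy as np

# A 3x3 binary image consistent with the GLCM in Figure 3
# (1 = white, 0 = black; the exact Figure 2 pixels are assumed).
img = np.array([[1, 0, 0],
                [0, 1, 0],
                [0, 0, 0]])

def symmetric_glcm(image, offset, levels=2):
    """Count co-occurrences across offset = (rows down, cols right),
    adding each pair in both directions (symmetric GLCM)."""
    dr, dc = offset
    glcm = np.zeros((levels, levels), dtype=int)
    rows, cols = image.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            a, b = image[r, c], image[r + dr, c + dc]
            glcm[a, b] += 1
            glcm[b, a] += 1  # symmetric: count the reverse pair too
    return glcm

glcm = symmetric_glcm(img, offset=(1, 1))
# Row/column index 1 is white, index 0 is black:
# white-white = 2, white-black = black-white = 1, black-black = 4,
# matching Figure 3.
```

Because each pair is counted in both directions, the diagonal entries are always even and the matrix entries sum to twice the number of pixel pairs (here, 2 × 4 = 8).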

I used the MATLAB Image Processing Toolbox to compute the symmetric GLCM for the segmentation masks. Specifically, the call graycomatrix(image, 'NumLevels', 2, 'GrayLimits', [], 'Offset', [1,1], 'Symmetric', true) produces the matrix for the image stored in the variable image. Then, the GLCM entropy was calculated using

$$H = -\sum_{i=1}^{N} \sum_{j=1}^{N} P_{ij} \log_2 P_{ij},$$

where $P_{ij}$ refers to the element in the $i$th row and $j$th column of an $N \times N$ normalized, symmetric GLCM (terms with $P_{ij} = 0$ contribute zero).
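As a check on the entropy computation, this sketch normalizes the Figure 3 GLCM and evaluates the entropy sum. A base-2 logarithm is assumed: the entropy values reported below exceed ln 4 ≈ 1.39, the natural-log maximum for a 2×2 GLCM, so the natural log cannot be the base in use.

```python
import numpy as np

# GLCM from Figure 3; normalizing its entries gives the P_ij values
# used in the entropy sum. Base-2 log is assumed (see lead-in).
glcm = np.array([[2.0, 1.0],
                 [1.0, 4.0]])
P = glcm / glcm.sum()

# H = -sum_ij P_ij * log2(P_ij), skipping zero entries (0 * log 0 = 0)
nonzero = P[P > 0]
H = -np.sum(nonzero * np.log2(nonzero))
print(round(H, 4))  # 1.75 for this matrix
```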

Categorizing Using Haralick Entropies

Each of the 533 images in the four categories of interest in the SWIMCAT database was segmented using the Red-Blue Ratio Adaptive Thresholding algorithm. The entropy of each of these segmentation masks was computed as described and plotted below in Figure 4. For the purposes of understanding how cloud cover affects irradiance passing through the atmosphere, a division between thick and thin clouds is useful. In the context of the clusters below, that translates to a division between the thick clouds and the thin, patterned, and clear sky images. Visually, there is a reasonable dividing line near an entropy of 1.2. Below, an optimal threshold is found for separating the images based upon cloud density.

Figure 4: Haralick Entropy by Cloud Category

Segmented Image Entropy by Category

For a sample of this size, it is reasonable to assume a normal distribution for the two populations we are considering. The bell curves in Figure 5 show the distribution of entropies for the thick and thin cloud categories. The mean and standard deviation of the two populations are shown in Figure 6.

Figure 5: Distribution of Entropies

Figure 6: Population Statistics

                      Thin, Patterned, Clear Sky   Thick Cloud
Mean                  1.470684                     1.044588
Standard Deviation    0.221869                     0.145505
Frequency             0.74671                      0.253283

Using the two-sample t statistic for unknown means and unequal variances, t = 25.44 under the null hypothesis H0: μ1 = μ2. There are 134 degrees of freedom because the smaller sample contains n = 135 images. The population means can therefore be said to be significantly different at α = 0.001.
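The t statistic can be reproduced from the Figure 6 summary statistics. The sample sizes below (398 and 135) are not stated directly; they are inferred by applying the Figure 6 frequencies to the 533 images.

```python
import math

# Summary statistics from Figure 6; the sample sizes follow from the
# frequencies applied to the 533 images (inferred: 398 thin/patterned/
# clear, 135 thick).
m1, s1, n1 = 1.470684, 0.221869, 398   # thin, patterned, clear sky
m2, s2, n2 = 1.044588, 0.145505, 135   # thick cloud

# Two-sample t statistic for unknown, unequal variances (Welch form):
t = (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)
print(round(t, 2))  # 25.44, matching the value reported above
```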

Having shown that the populations of image entropy values are distinct, we now compute the threshold between the populations which results in the minimum error. The probability of error in categorizing an image is defined as:

P(error) = P(E > T | cloud = thick)×P(cloud = thick) + P( E < T | cloud = not thick)×P(cloud = not thick),

where E is the image entropy and T is the entropy threshold. This probability of categorization error was computed for a range of thresholds from 0.6 to 1.6, in steps of 0.01. The minimum error was found to be 0.1143 at a threshold value of 1.16. This error corresponds to the shaded region on the plot in Figure 5, where the optimal threshold is shown as a black line.
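Under the normal model, the threshold search can be sketched as follows. The normal CDF is built from math.erf, and the priors P(thick) and P(not thick) are taken to be the Figure 6 frequencies.

```python
import math

def norm_cdf(x, mu, sigma):
    """Normal CDF with mean mu and standard deviation sigma."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Normal parameters and priors from Figure 6.
mu_thin, sd_thin, p_thin = 1.470684, 0.221869, 0.746717
mu_thick, sd_thick, p_thick = 1.044588, 0.145505, 0.253283

best_T, best_err = None, float("inf")
for k in range(101):                      # thresholds 0.60 .. 1.60
    T = 0.6 + 0.01 * k
    # Thick clouds are misclassified when E > T; the thin/patterned/
    # clear group is misclassified when E < T.
    err = ((1.0 - norm_cdf(T, mu_thick, sd_thick)) * p_thick
           + norm_cdf(T, mu_thin, sd_thin) * p_thin)
    if err < best_err:
        best_T, best_err = T, err

print(best_T)                 # 1.16, as reported above
print(round(best_err, 4))     # ~0.114
```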


  1. Haralick, Robert M., et al. “Textural Features for Image Classification.” IEEE Transactions on Systems, Man, and Cybernetics, SMC-3, no. 6, 1973, pp. 610–621.