Introduction

I am particularly interested in this research because the main idea of the proposed method is strongly linked to statistics. Not only does the title of the paper mention a model, but the content also uses several statistical tools such as the least-squares method and likelihood. Furthermore, this is the first time I have encountered the cloud model, and I could not find many resources on it online, so I thought it would be worthwhile to bring it to the discussion.

In this research paper, the authors first introduce the advantages of the cloud model, followed by the basic principle of the normal cloud model. They then describe the proposed image segmentation method, and finish with experiments and a conclusion.

For this report, I will skip the first and final parts of the paper so that I can focus on the material I most want to discuss. At the end of the report, I list some unresolved questions in the discussion.

Normal Cloud Model

The cloud model combines the advantages of type-1 and type-2 fuzzy sets through the following characteristics:

  • Randomness (type-1).
  • Fuzziness or, equivalently, uncertainty (type-2).
  • Association of the above.

Next, I will focus on the normal cloud model. As its name suggests, it is a cloud model based on the normal distribution; as a result, the function below looks similar to the Gaussian distribution function.

Let \(U\) be a quantitative universal set such that \(x_i \in U\).

Let \(C\) be the qualitative concept related to \(U\); each \(x\) is a random realization of \(C\).

\(x \sim N(E_x, {E_n'}^2) , \ E_x = E(x) , \ {E_n'}^2 = Var(x)\)

\(E_n' \sim N(E_n, {H_e}^2) , \ {H_e}^2 = Var(E_n')\)

The certainty degree of \(x\) on \(C\) is:

\[\mu = \exp\left[-\frac{(x-E_x)^2}{2(E_n')^2}\right]\]

The entropy \(E_n\) is the uncertainty measure of each \(C\).

The hyper-entropy \(H_e\) is the uncertainty measure of \(E_n\).

We say the distribution of \(x\) on \(U\) is a normal cloud, and each \(x\) is a cloud drop.

Intuitively, the random variable \(x\) represents the randomness, while \(E_n\) and \(H_e\) represent the fuzziness, or uncertainty. \(\mu\) can be interpreted as the certainty degree, which in statistical language is roughly the probability that this value occurs under the particular concept \(C\).

\(C(25, 3, 0.3)\) is demonstrated in the figure below.
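To make the sampling process concrete, here is a minimal sketch of a forward normal cloud generator in Python. This is my own illustration rather than code from the paper; the function name `generate_cloud_drops` is hypothetical, and NumPy is assumed to be available. The parameter values match the cloud \(C(25, 3, 0.3)\) above.

```python
import numpy as np

def generate_cloud_drops(Ex, En, He, n=1000, rng=None):
    """Forward normal cloud generator: draw n cloud drops (x, mu) from C(Ex, En, He)."""
    rng = np.random.default_rng() if rng is None else rng
    # Step 1: for each drop, sample En' ~ N(En, He^2).
    En_prime = rng.normal(En, He, size=n)
    # Step 2: sample x ~ N(Ex, En'^2); abs() keeps the scale positive when He is large.
    x = rng.normal(Ex, np.abs(En_prime))
    # Step 3: certainty degree mu = exp(-(x - Ex)^2 / (2 En'^2)).
    mu = np.exp(-((x - Ex) ** 2) / (2 * En_prime ** 2))
    return x, mu

# Example: 2000 cloud drops from C(25, 3, 0.3), as in the figure.
x, mu = generate_cloud_drops(25, 3, 0.3, n=2000)
```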

The upper and lower bounds of the cloud follow the same idea as the usual three-sigma rule, i.e., approximately 99.7% of the observations lie in the \(\hat{\mu} \pm 3\hat{\sigma}\) interval.

\(y_{uc}(x; E_{x_i}, E_{n_i}, H_{e_i}) = \exp \left[-\frac {(x-E_{x_i})^2}{2(E_{n_i} + 3H_{e_i})^2} \right]\)

\(y_{lc}(x; E_{x_i}, E_{n_i}, H_{e_i}) = \exp \left[-\frac {(x-E_{x_i})^2}{2(E_{n_i} - 3H_{e_i})^2} \right]\)
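As a small illustration (my own sketch, not code from the paper, assuming the same NumPy setup as above), the two envelope curves can be evaluated directly from these formulas:

```python
import numpy as np

def upper_envelope(x, Ex, En, He):
    # y_uc: outer boundary of the cloud, with the width inflated by 3*He.
    return np.exp(-((x - Ex) ** 2) / (2 * (En + 3 * He) ** 2))

def lower_envelope(x, Ex, En, He):
    # y_lc: inner boundary of the cloud, with the width shrunk by 3*He (assumes En > 3*He).
    return np.exp(-((x - Ex) ** 2) / (2 * (En - 3 * He) ** 2))

# Envelope curves of C(25, 3, 0.3) over a grid of x values.
xs = np.linspace(15, 35, 201)
y_uc = upper_envelope(xs, 25, 3, 0.3)
y_lc = lower_envelope(xs, 25, 3, 0.3)
```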

Suppose a variable \(x = 23\); its most probable membership lies at the point (0.8, 0.341), which means that \(x = 23\) attains its maximum probability, 0.341, at the point whose membership degree equals 0.8.

Proposed Method

Let’s discuss this method using the sample image above and its corresponding grayscale histogram. It can be observed that the histogram has five peaks, which in turn reveals that the image consists of five classes \(C\).

  1. Locate the peaks and valleys of each class.

  2. Transform the frequency distribution into a possibility distribution by a bijective transformation.

  3. Fit a cloud model to each possibility distribution, with parameters determined by the least-squares method (a simplified sketch of steps 2 to 5 follows this list).

    • If the distribution is close to {0, 255}, a half-cloud is constructed.
    • If the distribution is asymmetric, two half-cloud models are combined with parameters \(E_{x_i}, E_{n_l}, E_{n_r}, H_{e_l}, H_{e_r}\), as demonstrated in the image below.

  4. Repeat until each distribution has its own fitted cloud model.

  5. Segment each pixel \(x_i\) according to the “principle of maximum membership degree”. Simply put, \(x_i\) is assigned to the class \(C_j\) that maximizes \(\mu_j(x_i)\), the certainty degree of \(x_i\) on \(C_j\), as shown in the image below.
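To tie the steps together, here is a minimal sketch of steps 2 to 5 in Python under my own simplifying assumptions: symmetric clouds only, hyper-entropy folded into a single effective width, SciPy's `curve_fit` as the least-squares solver, and peak/valley locations assumed to come from step 1. The function names `segment_by_max_membership` and `membership` are hypothetical; this is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def membership(x, Ex, En_eff):
    """Cloud-style membership curve; En_eff plays the role of En plus hyper-entropy effects."""
    return np.exp(-((x - Ex) ** 2) / (2 * En_eff ** 2))

def segment_by_max_membership(image, peaks, valleys):
    """Sketch of steps 2-5: fit one membership curve per histogram class, then
    label each pixel by the class whose membership degree is largest."""
    gray = np.arange(256)
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))

    params = []
    bounds = [0] + list(valleys) + [256]
    for i, peak in enumerate(peaks):
        lo, hi = bounds[i], bounds[i + 1]
        seg = hist[lo:hi].astype(float)
        # Step 2 (simplified): rescale the class frequencies so their maximum is 1,
        # giving a possibility distribution for this class.
        poss = seg / seg.max() if seg.max() > 0 else seg
        # Step 3 (simplified): least-squares fit of the membership curve.
        p0 = [peak, max((hi - lo) / 6.0, 1.0)]
        (Ex, En_eff), _ = curve_fit(membership, gray[lo:hi], poss, p0=p0)
        params.append((Ex, abs(En_eff)))

    # Step 5: principle of maximum membership degree.
    mus = np.stack([membership(image.astype(float), Ex, En_eff)
                    for Ex, En_eff in params])
    return np.argmax(mus, axis=0)

# Hypothetical usage with five classes, as in the sample histogram:
# labels = segment_by_max_membership(gray_image,
#                                    peaks=[20, 70, 120, 180, 230],
#                                    valleys=[45, 95, 150, 205])
```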

Discussion

  • What is fuzzy set theory?
  • What does entropy represent in normal distribution?
  • What is the universe of discourse?
  • I have not yet worked out how the bijective transformation to a possibility distribution is performed.