Exposure-based Weighted Dynamic Histogram Equalization for Image Contrast Enhancement

Yung-Yao Chen1,* and Shin-Anne Chen1

1 Graduate Institute of Automation Technology, National Taipei University of Technology, Taiwan

(Received 21 July 2014; Accepted 7 August 2014; Published on line 1 March 2015)
*Corresponding author: yungyaochen@mail.ntut.edu.tw
DOI: 10.5875/ausmt.v5i1.835

Abstract: Global histogram equalization (GHE) [1] is a common method used for improving image contrast. However, this technique tends to introduce unnecessary visual artifacts and cannot preserve overall brightness. Many studies have attempted to overcome these problems using partitioned-histogram (i.e., sub-histogram) equalization. An input image is first divided into sub-images. Individual histograms of the sub-images are then equalized independently, and all of the sub-images are ultimately integrated into one complete image. For example, exposure-based sub-image histogram equalization (ESIHE) [2] uses an exposure-related threshold to divide the original image into different intensity ranges (horizontal partitioning) and also uses the mean brightness as a threshold to clip the histogram (vertical partitioning).

This paper presents a novel method, called exposure-based weighted dynamic histogram equalization (EWDHE), which is an extension of ESIHE. This study makes three major contributions to the literature. First, an Otsu-based approach and a clustering performance measure are integrated to determine the optimal number of sub-histograms and the separating points. Second, an exposure-related parameter is used to automatically adapt the contrast limitation to avoid over-enhancement in some portions of the image. Third, a new weighted scale factor is proposed to resize the sub-histograms, which accounts for the sub-histogram ranges and individual pixel numbers of these ranges. Simulation results indicated that the proposed method outperformed state-of-the-art approaches in terms of contrast enhancement, brightness preservation, and entropy preservation.

Keywords: Image contrast enhancement; multilevel Otsu method; partitioned-histogram equalization.

Introduction

Image enhancement techniques have been proposed for many decades. Global histogram equalization (GHE) is a popular technique for image enhancement because it is simple and easy to implement [1]. This technique flattens the image’s entire density distribution and stretches the dynamic range of gray levels to achieve overall contrast enhancement. In GHE, the cumulative density function (CDF) of an image is used for transforming the gray levels of the original image to the levels of the enhanced image. However, the main drawback of GHE is that it usually results in excessive contrast enhancement, resulting in unpleasant artifacts in the processed image. The mean brightness of a GHEed image is always the middle gray level, regardless of the input mean brightness. This is an inappropriate property in certain applications that require brightness preservation, such as TV applications.
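The GHE transformation described above can be sketched in a few lines of NumPy. This is an illustrative implementation (not the authors' code), assuming an 8-bit grayscale input:

```python
import numpy as np

def global_histogram_equalization(img, L=256):
    """Equalize a grayscale image via the CDF of its gray-level histogram."""
    hist = np.bincount(img.ravel(), minlength=L)         # h(k)
    cdf = np.cumsum(hist) / img.size                     # cumulative density in [0, 1]
    transfer = np.round((L - 1) * cdf).astype(np.uint8)  # gray-level mapping
    return transfer[img]
```

Note that the transfer function depends only on the cumulative density function, which is why the output mean gravitates toward the middle gray level regardless of the input brightness.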

Many methods have been proposed to fulfill the brightness preservation requirement. Kim proposed brightness preserving bi-histogram equalization (BBHE) [3]. This well-known method was one of the earliest methods to address this problem. Bi-histogram equalization refers to the process that first separates the input image’s histogram into two sub-histograms. A similar GHE approach is then typically applied in each sub-histogram. In BBHE, the input image’s histogram is separated based on the input mean brightness, and the resulting new sub-histograms are independently equalized. BBHE can preserve the original brightness to a certain extent; that is, the mean brightness of a BBHEed image is between the middle gray level and input mean brightness. Similar to BBHE, Wang et al. proposed equal area dualistic sub-image histogram equalization (DSIHE) to separate the input image’s histogram into two parts [4]. Unlike BBHE, the separating point of DSIHE is the median of the input image’s brightness, thus it results in two sub-histograms that contain exactly the same number of pixels. DSIHE is meant to preserve brightness and yield maximum entropy after two independent sub-equalizations. Chen and Ramli proposed another bi-histogram equalization method, called minimum mean brightness error bi-histogram equalization (MMBEBHE) [5]. In MMBEBHE, the separating point is found by testing all possible gray levels of the input histogram and calculating the difference between the input mean brightness and the output mean brightness. The separating point is then chosen by enumerating the value that achieves the minimal difference.
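The bi-histogram idea can be made concrete with a minimal NumPy sketch of BBHE, in which the histogram is split at the input mean and each part is equalized into its own half of the gray-level range (helper names are our own; this is a simplified illustration, not a reference implementation):

```python
import numpy as np

def bbhe(img, L=256):
    """Brightness-preserving bi-histogram equalization (BBHE) sketch."""
    mean = int(np.mean(img))                              # separating point
    hist = np.bincount(img.ravel(), minlength=L).astype(float)

    out_map = np.zeros(L)
    # Lower sub-histogram: gray levels [0, mean] are equalized into [0, mean].
    lo = hist[:mean + 1]
    if lo.sum() > 0:
        cdf_lo = np.cumsum(lo) / lo.sum()
        out_map[:mean + 1] = cdf_lo * mean
    # Upper sub-histogram: levels [mean+1, L-1] are equalized into [mean+1, L-1].
    hi = hist[mean + 1:]
    if hi.sum() > 0:
        cdf_hi = np.cumsum(hi) / hi.sum()
        out_map[mean + 1:] = (mean + 1) + cdf_hi * (L - 2 - mean)
    return np.round(out_map).astype(np.uint8)[img]
```

Because each half of the output range is anchored at the input mean, the output mean stays between the middle gray level and the input mean, which is the brightness-preservation property described above.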

The partitioned-histogram equalization approach can be viewed as a more general version of the bi-histogram equalization approach in which the input image’s histogram can be partitioned into several pieces rather than just two. Wongsritong et al. proposed multi-peak histogram equalization with brightness preserving (MPHEBP) that first smoothes the input histogram with a mean filter, and divides the smoothed histogram based on the local maximums [6]. Ibrahim and Kong proposed an extension of MPHEBP, known as brightness preserving dynamic histogram equalization (BPDHE) [7]. In BPDHE, the number of sub-histograms is also determined by the local maximums of the smoothed input histogram. Each partition is assigned to a new dynamic range before performing the equalization process. However, the number of resultant sub-histograms is not fixed in these two methods because it depends on the size of the smoothing filter and the definition of the local maximum.

Another type of partitioned-histogram equalization approach performs the same bi-histogram equalization process iteratively. For example, Chen and Ramli also proposed an enhancement method, called recursive mean-separate histogram equalization (RMSHE) [8]. This technique applies BBHE through a number of iterations, $r$, which is predefined by the user. In the first iteration ($r=1$) of RMSHE, the input image’s histogram is divided into two sub-histograms based on the input mean brightness. In the second iteration ($r=2$), the mean brightness of each resultant sub-histogram is calculated, and each sub-histogram is then further divided into two parts based on its mean value. After $r$ iterations, the resulting ${{2}^{r}}$ sub-histograms are equalized independently and are used to compose the enhanced output image. Similarly, Sim et al. proposed a recursive sub-image histogram equalization (RSIHE) method that performs DSIHE iteratively [9]. RSIHE also produces ${{2}^{r}}$ sub-histograms but, unlike RMSHE, the separating points used for division in the $r$th iteration are defined as the median values, rather than the mean values, of the existing sub-histograms. After equalization, RSIHE preserves more image entropy information than RMSHE. For RMSHE and RSIHE, when the value of $r$ is large, the output mean converges to the input mean. Hence, when $r\ge 2$, RMSHE and RSIHE preserve brightness more effectively than simple bi-histogram equalization approaches.

However, for partitioned-histogram equalization approaches, finding the optimal number of partitions is crucial to producing significant enhancement. When the input histogram is partitioned into too many pieces before equalization, no apparent enhancement occurs. By contrast, when the input histogram is simply bisected (e.g., using the discussed bi-histogram equalization approaches), the brightness and information preservation properties are weaker. This weakness is more obvious for RMSHE and RSIHE because the input histogram can only be divided into a power-of-two number (i.e., ${{2}^{r}}$) of pieces, and it is still difficult to identify the optimal value of $r$. In this paper, the measure of enhancement (EME) defined in (15) is used to quantitatively measure the degree of image contrast. Figure 1(d) shows the EME performance of RMSHE and RSIHE. It clearly indicates that when the value of $r$ is larger, the contrast enhancement deteriorates.

The aforementioned methods mainly partition the input histogram in a horizontal direction (i.e., along different gray levels), and the level of enhancement is completely determined by the probability density functions of individual sub-histograms without any adjustment. However, the enhanced contrast of the image is usually associated with noise amplification, and noise amplification degrades image quality. Several enhancement techniques have been proposed to control the enhancement rate by clipping the histogram in a vertical direction (i.e., slightly modifying the original probability density function). Kim and Paik proposed gain-controllable clipped histogram equalization (GCCHE) [10], a generalization of BBHE and RMSHE. In GCCHE, the clipping threshold is determined based on the mean brightness and a pre-specified control gain. Singh and Kapoor proposed exposure-based sub-image histogram equalization (ESIHE), which uses an exposure-related threshold to bisect the input histogram and the mean brightness as a threshold to clip the histogram [2].

In addition to the drawbacks associated with the individual methods, the common challenge of partitioned-histogram equalization is finding the separating points and optimal number of partitions. This paper presents the exposure-based weighted dynamic histogram equalization (EWDHE) method for solving these contrast enhancement problems. The proposed EWDHE integrates an Otsu-based approach and a clustering performance measure to optimally define the number of separating points. Determined by the proposed weighted scale factor, the resultant sub-histograms are then rearranged to new dynamic ranges that occupy a complete dynamic range, from 0 to $L-1$. To avoid over-enhancement of noise, an exposure-related parameter is proposed to automatically adapt the contrast limitation. The remainder of this paper is organized as follows. Section 2 describes the proposed EWDHE method, Section 3 presents the experimental results, and Section 4 concludes the paper.

Exposure-based Weighted Dynamic Histogram Equalization

Assume that a digital image, $\mathbf{X}=\left\{ {{X}_{m,n}} \right\}$, is composed of $L$ discrete gray levels denoted as $\{0,1,\cdots ,L-1\}$, where ${{X}_{m,n}}$ represents the gray level of the pixel at $[m,n]$. Let $N$ denote the total number of pixels in image $\mathbf{X}$ and let $h(k)$ denote the number of occurrences of gray level $k$. A plot of $h(k)$ is the histogram of $\mathbf{X}$.

This section presents the EWDHE algorithm. The algorithm consists of four steps: determining the separating points, resizing the sub-histogram ranges, histogram clipping, and sub-histogram equalization. The following subsections describe each step.

Determination of separating points

Prior to the equalization process, finding the optimal number of sub-histograms and selecting the locations of the separating points are challenging tasks. This section proposes a solution to this problem based on multilevel threshold selection and a clustering performance metric. The modified Otsu’s method for one-dimensional multilevel threshold selection in [11] is applied, and the resultant thresholds are treated as the possible separating points, as shown in Fig. 2. Each non-zero bin of the input histogram is then assigned to a class, that is, a separate sub-histogram. The advantage of using the Otsu-based separation method is that it seeks to minimize within-class variance, which is similar to seeking the minimization of the total squared error of each sub-histogram, corresponding to the individual mean brightness. In other words, the method preserves overall brightness and the natural look of the input image. For bi-level thresholding, Otsu's method exhaustively searches for the threshold that minimizes the within-class variance, defined as a weighted sum of the variances of the two classes:

\[\sigma _{w}^{2}\left( t \right)={{w}_{1}}\left( t \right)\sigma _{1}^{2}\left( t \right)+{{w}_{2}}\left( t \right)\sigma _{2}^{2}\left( t \right),~0<t<L-1,\tag{1}\]

where weights ${{w}_{1}}$ and ${{w}_{2}}$ are the probabilities of the two classes being separated by threshold $t$, and $\sigma _{1}^{2}$ and $\sigma _{2}^{2}$ are the variances of these classes. Optimal threshold ${{t}^{*}}$ is selected to minimize within-class variance $\sigma _{w}^{2}$, that is

\[{{t}^{*}}=\text{arg}\underset{0<t<L-1}{\mathop{\text{min}}}\,\sigma _{w}^{2}\left( t \right).\tag{2}\]

Equation (2) can be extended to multilevel thresholding of an image. The optimal threshold set that divides the input histogram into m sub-histograms can be expressed by

\[\left\{ t_{1}^{*},t_{2}^{*},\cdots ,t_{m-1}^{*} \right\}=\text{arg}\underset{{{t}_{1}},\cdots ,{{t}_{m-1}}}{\mathop{\min }}\,\sigma _{w}^{2}\left( {{t}_{1}},{{t}_{2}},\cdots ,{{t}_{m-1}} \right),\]
\[0<t_{1}^{*}<t_{2}^{*}<\cdots <t_{m-1}^{*}<L-1.\tag{3}\]
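The multilevel minimization above can be illustrated with a brute-force NumPy sketch; an exhaustive search is only feasible for small histograms and few thresholds (the fast recursive algorithm of [11] is what makes the method practical), so this is a didactic sketch rather than the paper's algorithm:

```python
import numpy as np
from itertools import combinations

def multilevel_otsu(hist, m, L=256):
    """Choose the m-1 thresholds that minimize the within-class variance
    of the resulting m sub-histograms, by exhaustive search."""
    p = hist / hist.sum()                        # normalized histogram
    levels = np.arange(L)

    def class_var(lo, hi):                       # weighted variance w * sigma^2
        w = p[lo:hi + 1].sum()
        if w == 0:
            return 0.0
        mu = (levels[lo:hi + 1] * p[lo:hi + 1]).sum() / w
        return ((levels[lo:hi + 1] - mu) ** 2 * p[lo:hi + 1]).sum()

    best, best_t = np.inf, None
    for ts in combinations(range(1, L - 1), m - 1):
        bounds = [0] + [t + 1 for t in ts] + [L]
        sigma_w = sum(class_var(bounds[i], bounds[i + 1] - 1)
                      for i in range(m))
        if sigma_w < best:
            best, best_t = sigma_w, ts
    return best_t
```

For a bimodal histogram, any threshold placed between the two modes attains the minimal within-class variance, which matches the intuition that the separating points should fall in the valleys of the histogram.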

Using the thresholds in (3) as separating points, the corresponding sub-histogram ranges are denoted as follows: $H_{sub}^{1}=\left[ 0,{{t}_{1}} \right]$, $H_{sub}^{2}=\left[ {{t}_{1}}+1,{{t}_{2}} \right]$, $\cdots$, $H_{sub}^{m}=[{{t}_{m-1}}+1,L-1]$. However, the optimal number of sub-histograms, $m$, remains to be determined. Thus, a clustering performance metric is proposed,

\[\psi=\frac{\text{min}({{D}_{jk}})}{\text{max}(Va{{r}_{i}})},\tag{4}\]

where $Va{{r}_{i}}$ represents the variance of the $i$-th sub-histogram, and ${{D}_{jk}}$ represents the distance between the mean values of the $j$-th and $k$-th sub-histograms. When $m$ is too large, the enhancement of the image is very slight. In this paper, only four partition numbers ($m=2,3,4,5$) are tested, and the partition number that results in the largest $\psi$ is selected.
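The metric in (4) rewards partitions whose sub-histogram means are far apart relative to how spread out each sub-histogram is. A sketch of its computation (illustrative NumPy; `thresholds` are the separating points from (3)):

```python
import numpy as np

def clustering_metric(hist, thresholds, L=256):
    """psi = min pairwise distance between sub-histogram means
           / max sub-histogram variance, as in (4)."""
    bounds = [0] + [t + 1 for t in thresholds] + [L]
    levels = np.arange(L)
    means, variances = [], []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = hist[lo:hi].sum()
        if w == 0:                         # skip empty sub-histograms
            continue
        mu = (levels[lo:hi] * hist[lo:hi]).sum() / w
        means.append(mu)
        variances.append(((levels[lo:hi] - mu) ** 2 * hist[lo:hi]).sum() / w)
    d_min = min(abs(a - b) for i, a in enumerate(means) for b in means[i + 1:])
    return d_min / max(variances)
```

The candidate partition counts $m=2,\ldots,5$ would each be scored this way, and the $m$ with the largest $\psi$ kept.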

Resizing sub-histograms by weighted scale factor

An image with poor contrast is one whose histogram does not occupy the complete dynamic range, or in which most of the pixels are concentrated at a few low gray levels. Accordingly, a successful histogram equalization algorithm should prevent any dominating portion of the output image from occupying only a narrow grayscale range. Simply applying traditional histogram equalization in each sub-histogram does not ensure substantial enhancement because a sub-histogram with a small grayscale range may not be sufficiently enhanced or stretched. Therefore, a new sub-histogram range must be allotted according to the number of pixels that belong to each original sub-histogram, and the sub-histogram range that contains more pixels should be more widely stretched. In addition, a sub-histogram with a wider range but fewer pixels should be controllably narrowed to preserve its natural appearance. Hence, a new weighted scale factor is proposed to resize the sub-histograms, which accounts for all of the original sub-histogram ranges and the individual pixel numbers of these ranges, as shown in Fig. 3.

The $i$-th span, $spa{{n}_{i}}$, is defined as the dynamic grayscale range of the $i$-th sub-histogram in the input image. For example, the span of the first sub-histogram $H_{sub}^{1}=[0,{{t}_{1}}]$ is ${{t}_{1}}+1$. To determine the resized span of each sub-histogram in the output image, a weighted factor is computed for each sub-histogram instead of using the span directly. The corresponding $i$-th resized span, $R{{S}_{i}}$, is expressed by

\[facto{{r}_{i}}={{\alpha }_{1}}spa{{n}_{i}}+{{\alpha }_{2}}\log \left[ \frac{N}{m}+\frac{1}{m}\left( {{N}_{i}}-\frac{N}{m} \right) \right],\tag{5}\]

and

\[R{{S}_{i}}=\frac{facto{{r}_{i}}}{\mathop{\sum }_{k=1}^{m}facto{{r}_{k}}}\times \left( L-1 \right),\tag{6}\]

where $0\le {{\alpha }_{1}},{{\alpha }_{2}}\le 1$, $m$ is the optimal number of sub-histograms, ${{N}_{i}}$ is the number of pixels contained in sub-histogram $H_{sub}^{i}$, and $N$ is the total number of pixels in the input image.

The parameter $facto{{r}_{i}}$ accounts for both the dynamic grayscale range and the number of pixels contained in the $i$-th sub-histogram of the input image, where the weight pair (${{\alpha }_{1}}$, ${{\alpha }_{2}}$) determines how much emphasis should be placed on the individual pixel numbers of the sub-histograms. The weights satisfy the ${{\alpha }_{1}}+{{\alpha }_{2}}=1$ criterion, and when ${{\alpha }_{1}}=1$, only the value of the span is used; that is, the spans in the output image are not resized. In this paper, $\left( {{\alpha }_{1}},{{\alpha }_{2}} \right)=(0.8,0.2)$ is used, and experimental results indicated that for most images, ${{\alpha }_{1}}$ values ranging from $0.4$ to $1$ result in suitable enhancement. Moreover, the distribution of an ideally equalized image is a uniform distribution, which indicates that, in an ideal case, the multilevel Otsu method should equally divide the input histogram. Conversely, any two sub-histograms with the same span should contain an equal number of pixels. The bracketed term on the right-hand side of (5) reflects this property, that is,

\[\sum\nolimits_{i=1}^{m}\left[ \frac{N}{m}+\frac{1}{m}\left( {{N}_{i}}-\frac{N}{m} \right) \right]=N.\tag{7}\]
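Equations (5) and (6) can be sketched as follows (illustrative NumPy; `thresholds` are the separating points from (3), and the normalization in (6) guarantees the resized spans cover the full dynamic range):

```python
import numpy as np

def resized_spans(thresholds, hist, alpha1=0.8, L=256):
    """Compute the resized span RS_i of each sub-histogram from (5)-(6),
    weighting its grayscale span against its log-compressed pixel count."""
    alpha2 = 1.0 - alpha1                       # weights satisfy a1 + a2 = 1
    bounds = [0] + [t + 1 for t in thresholds] + [L]
    m = len(bounds) - 1
    N = hist.sum()
    factors = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        span = hi - lo                          # span_i of the sub-histogram
        Ni = hist[lo:hi].sum()                  # pixels in the sub-histogram
        factors.append(alpha1 * span +
                       alpha2 * np.log(N / m + (Ni - N / m) / m))   # (5)
    factors = np.asarray(factors)
    return factors / factors.sum() * (L - 1)    # (6): spans sum to L-1
```

For a uniform histogram bisected at the midpoint, both sub-histograms have equal spans and equal pixel counts, so the two resized spans come out identical, as the ideal-case argument above predicts.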

Histogram clipping by an exposure-related parameter

A parameter called the intensity exposure was defined in [12]. It ranges from $0$ to $1$ and represents the measure of intensity exposure of an image. When the value of exposure equals $0.5$, this leads to the most visually pleasing appearance of an image. For an input image, if the value of exposure is less than $0.5$, this implies that most of the image is under-exposed, and a lower exposure value reflects a more apparent degree of under-exposure. Conversely, if the value of exposure is more than $0.5$, this implies that most of the image is over-exposed. In both cases, the image exhibits poor contrast and requires contrast enhancement. The value of image intensity exposure can be expressed by

\[\text{exposure}=\frac{1}{L}\frac{\mathop{\sum }_{k=0}^{L-1}h(k)k}{\mathop{\sum }_{k=0}^{L-1}h(k)},\tag{8}\]

where $h(k)$ is the histogram of the image and $L$ is the total number of gray levels.

In [2], the exposure value was used to bisect the input histogram into two sub-histograms: a sub-histogram of the under-exposed region and a sub-histogram of the over-exposed region. However, in this paper, the exposure value is used to adjust the clipping threshold. Histogram clipping is intended to avoid over-enhancement and preserve the natural appearance of an image. Each bin of the input histogram that has a value greater than the clipping threshold is limited to the threshold. When performing the equalization process, an image with an exposure value that is too high or too low must be treated more carefully than an image with an exposure value near 0.5 (i.e., it requires a lower clipping threshold) to prevent excessive enhancement or other side effects.

The formula for clipping threshold ${T}_{c}$ is presented in (9) and (10) to calculate the clipped histogram:

\[{{T}_{c}}=\left[ 1+\exp \left( -\frac{\left| \text{exposure}-0.5 \right|}{\text{exposure}} \right) \right]\times T,\tag{9}\]
\[{{h}_{c}}\left( k \right)={{T}_{c}},\quad \text{if }h\left( k \right)\ge {{T}_{c}},\tag{10}\]

where $T$ is an average number of gray level occurrences, that is,

\[T=\frac{1}{L}\sum\nolimits_{k=0}^{L-1}{h(k)}.\tag{11}\]
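Equations (8) through (11) combine into a short clipping routine. The sketch below is an illustrative NumPy rendering of those formulas (not the authors' code):

```python
import numpy as np

def clip_histogram(hist, L=256):
    """Exposure-adaptive histogram clipping from (8)-(11)."""
    k = np.arange(L)
    exposure = (hist * k).sum() / (hist.sum() * L)           # (8)
    T = hist.sum() / L                                       # (11): mean bin height
    Tc = (1 + np.exp(-abs(exposure - 0.5) / exposure)) * T   # (9)
    return np.minimum(hist, Tc), Tc                          # (10)
```

When the exposure is near $0.5$, the exponential term approaches $1$ and ${{T}_{c}}\approx 2T$, so little clipping occurs; as the exposure moves toward either extreme, ${{T}_{c}}$ falls toward $T$ and the clipping becomes more aggressive, which is exactly the behavior motivated in the text.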

Equalizing sub-histograms independently

By applying the same concept used in other partitioned-histogram equalization approaches, the next step in EWDHE is to independently apply traditional histogram equalization to each sub-histogram. That is, the CDFs of individual sub-histograms are used to transform the gray levels of the original image to the levels of the enhanced image. From the threshold set $\left\{ t_{1}^{*},t_{2}^{*},\cdots ,t_{m-1}^{*} \right\}$ in (3), the input histogram is partitioned into m sub-histograms. The probability density functions (PDFs) are calculated from the clipped histogram:

\[PD{{F}_{1}}(k)={{h}_{c}}(k)/\sum\nolimits_{p=0}^{{{t}_{1}}}{{{h}_{c}}(p)},\]
\[\vdots\]
\[PD{{F}_{m}}(k)={{h}_{c}}(k)/\sum\nolimits_{p={{t}_{m-1}}+1}^{L-1}{{{h}_{c}}(p)}.\tag{12}\]

The corresponding CDFs of individual sub-histograms can be expressed by

\[CD{{F}_{1}}(k)=\sum\nolimits_{p=0}^{k}{PD{{F}_{1}}(p)},\]
\[\vdots\]
\[CD{{F}_{m}}(k)=\sum\nolimits_{p={{t}_{m-1}}+1}^{k}{PD{{F}_{m}}(p)}.\tag{13}\]

In addition to the CDFs, the values of the resized spans $R{{S}_{i}}$ in (6) are also considered. Let $[mi{{n}_{i}},ma{{x}_{i}}]$ denote the grayscale range of the $i$-th resized sub-histogram. The sum of the resized spans still equals the entire dynamic range, that is, $\sum\nolimits_{i=1}^{m}{R{{S}_{i}}=L-1}$. When gray level $k$ belongs to the $i$-th sub-histogram (i.e., $k\in H_{sub}^{i}$), the corresponding transfer function for histogram equalization can be expressed by

\[{{f}_{i}}\left( k \right)=mi{{n}_{i}}+\left( R{{S}_{i}}-1 \right)CD{{F}_{i}}\left( k \right).\tag{14}\]

Finally, an EWDHEed image is produced by a combination of individual transfer functions, and all sub-images are integrated into one complete image.
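The equalization step in (12) to (14) can be sketched as a piecewise transfer function built from the per-sub-histogram CDFs and the resized spans (illustrative NumPy; `h_c` is the clipped histogram from (10), `thresholds` the separating points from (3), and `spans` the resized spans from (6)):

```python
import numpy as np

def ewdhe_transfer(h_c, thresholds, spans, L=256):
    """Piecewise transfer function of (12)-(14): each sub-histogram's CDF
    is mapped into its resized range [min_i, max_i]."""
    bounds = [0] + [t + 1 for t in thresholds] + [L]
    f = np.zeros(L)
    start = 0.0                                   # min_i of the current range
    for (lo, hi), rs in zip(zip(bounds[:-1], bounds[1:]), spans):
        seg = h_c[lo:hi].astype(float)
        cdf = np.cumsum(seg) / seg.sum()          # (12)-(13): CDF_i over [lo, hi-1]
        f[lo:hi] = start + (rs - 1) * cdf         # (14)
        start += rs                               # min of the next resized range
    return np.round(f).astype(np.uint8)

# The output image is then simply f[img], applying each sub-histogram's
# transfer function to the pixels in its gray-level range.
```

Because the per-range CDFs are non-decreasing and each range starts where the previous one ends, the combined mapping is monotone, so the relative ordering of gray levels in the input is preserved.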

Simulation Results

This section presents a description of the implementation of the proposed EWDHE method and seven other histogram equalization methods, namely, GHE [1], BBHE [3], DSIHE [4], ESIHE [2], MMBEBHE [5], RMSHE [8], and RSIHE [9]. To implement RMSHE and RSIHE, the input histogram was divided into four sub-histograms by setting $r=2$. This is a common setting used in the literature for comparison because when $r>2$, image enhancement is insufficient, as shown in Fig. 1. To compare the proposed method with other existing methods, eight test images from the McGill Image Database were used [13]: Beach, Sunset, Sky, Street, Dandelion, Pot plant, Orchid, and Hillside. A visual quality comparison of four images (Beach, Sky, Dandelion, and Hillside) is shown in Figs. 4 to 7.

Image quality measures

To quantitatively evaluate the performance of EWDHE, three popular objective assessment approaches are used in this study: enhancement measure (EME) [14], absolute mean brightness error (AMBE) [5], and discrete entropy (DE) [2]. These measures are commonly used as image quality measures for different aspects of image enhancement. The EME, AMBE, and DE values calculated for the different methods are respectively shown in Tables 1, 2, and 3.

The first measure, EME, is defined as

\[EME(\mathbf{X})=\frac{1}{{{k}_{1}}{{k}_{2}}}\sum\nolimits_{i=1}^{{{k}_{1}}}{\sum\nolimits_{j=1}^{{{k}_{2}}}{20\ln \frac{\max ({{X}_{m,n}})}{\min ({{X}_{m,n}})}}},\tag{15}\]

where the image $\mathbf{X}$ is divided into ${{k}_{1}}\times {{k}_{2}}$ non-overlapping blocks of fixed size ($8\times 8$ in this paper). This measure of enhancement represents the degree of image contrast by computing the average ratio of the maximum to minimum intensities in each block over the entire image. High-contrast blocks lead to a high EME value, whereas for homogeneous blocks, the EME value is close to zero. A larger value of EME indicates that the image has higher contrast. However, it should be emphasized that the EME value is highly sensitive to noise. The second measure, AMBE, is defined as
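A direct NumPy rendering of (15) follows; the guard against a zero minimum is our own addition (the formula is undefined for blocks containing a zero intensity), and the natural logarithm matches the $\ln$ in (15):

```python
import numpy as np

def eme(img, block=8):
    """Measure of enhancement (15): average 20*ln(max/min) over blocks."""
    h, w = img.shape
    k1, k2 = h // block, w // block
    total = 0.0
    for i in range(k1):
        for j in range(k2):
            b = img[i*block:(i+1)*block, j*block:(j+1)*block].astype(float)
            lo = max(b.min(), 1.0)        # guard: avoid division by zero
            total += 20.0 * np.log(b.max() / lo)
    return total / (k1 * k2)
```

A perfectly homogeneous image scores zero, while any block containing both bright and dark pixels contributes positively, which is also why amplified noise inflates the EME value.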

\[AMBE=\left| E\left( \mathbf{X} \right)-E\left( \mathbf{Y} \right) \right|,\tag{16}\]

where $E(\mathbf{X})$ and $E(\mathbf{Y})$ are respectively the mean brightness values of input image $\mathbf{X}$ and output image $\mathbf{Y}$. A lower value of AMBE indicates better preservation of the original image brightness. The third measure, DE, is defined as

\[DE\left( \mathbf{X} \right)=-\sum\nolimits_{i=0}^{L-1}{p(i)\log p\left( i \right)},\tag{17}\]

where $p(i)$ is the probability of gray level $i$. This entropy is known as the Shannon entropy, which measures the uncertainty associated with gray levels and represents the average information content in an image. A higher DE value indicates that the image provides richer detail.
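The two remaining measures, (16) and (17), reduce to a few lines each. This sketch assumes a base-2 logarithm for the entropy (the formula in (17) leaves the base unspecified):

```python
import numpy as np

def ambe(x, y):
    """Absolute mean brightness error (16) between input x and output y."""
    return abs(x.mean() - y.mean())

def discrete_entropy(img, L=256):
    """Discrete (Shannon) entropy (17) of the gray-level distribution, in bits."""
    p = np.bincount(img.ravel(), minlength=L) / img.size
    p = p[p > 0]                       # 0 * log 0 is taken as 0
    return -(p * np.log2(p)).sum()
```

An enhancement that leaves the mean untouched gives AMBE of zero, and an image using only two equally likely gray levels carries exactly one bit of entropy per pixel.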

Objective assessment and visual comparison

Comparing the overall performance of different image enhancement methods is not an easy task. Although it is desirable for an enhancement method to outperform other methods in all aspects, a trade-off exists among the degree of enhancement, entropy preservation, brightness preservation, and other features. Some methods tend to focus on improving one image quality measure while ignoring the others. Figure 1 shows an excellent example of this, where improved brightness preservation is associated with a much poorer degree of enhancement.

In Table 1, a comparison of EME values shows that, on average, the proposed method outperformed the other methods used in the simulation, except for traditional GHE. Nevertheless, GHE tends to excessively enhance noise and cannot preserve mean brightness. As mentioned, the EME value is sensitive to noise. The over-amplification of noise generated by GHE results in a high EME value, but it also degrades the image quality. This explains why visual comparison is also relevant. Figure 4(b) shows why the GHEed image exhibited the highest EME value. The top part of Fig. 4(b) shows that the noise in the sky was over-amplified after GHE, thus increasing the average ratio of the maximal to minimal intensities in the image sub-blocks (i.e., a higher EME value). In Fig. 4(f), the MMBEBHE result exhibits the same noise over-amplification problem, but its EME value is not as high. This occurs because the foreground of the bottom part of the MMBEBHE image is still dim with low contrast. This result reflected the excellent performance of the proposed EWDHE; it achieved sufficient contrast enhancement and also controlled over-enhancement.

In Table 2, a comparison of the AMBE values shows that, on average, the proposed method outperforms the other methods used in the simulation, except for MMBEBHE. The main objective of MMBEBHE is to seek extreme brightness preservation, but this compromises other image qualities. Tables 1 and 3 show that for the test images, MMBEBHE exhibits the worst EME and DE values. In Table 3, a comparison of the entropy values shows that, on average, the proposed method outperforms the other methods used in the simulation, except for ESIHE. Similar to MMBEBHE, ESIHE can be viewed as another type of histogram equalization method that seeks to preserve extreme entropy. As an extension of ESIHE, the goal of the proposed method is not to place too much emphasis on entropy preservation, but to greatly improve the contrast enhancement and brightness preservation. In fact, for all eight test images, the proposed EWDHE method produced more favorable EME and AMBE results than ESIHE.

Conclusion

This paper proposes a novel histogram equalization method, called EWDHE. The simulation results indicated that EWDHE outperformed most other methods in terms of contrast enhancement (EME comparison), brightness preservation (AMBE comparison), and entropy preservation (DE comparison). In Tables 1, 2, and 3, the three assessment comparisons indicate that EWDHE produced the second best values of the eight histogram equalization methods. Although other methods ranked first in individual comparisons, each of these methods is associated with a clear disadvantage. EWDHE, on the other hand, achieved a favorable balance of the various image qualities. EWDHE also produced images with natural and sufficient contrast enhancement without over-amplifying the noise.

As an extension of ESIHE, EWDHE uses a method that resizes the sub-histogram range by introducing a new weighted scale factor. EWDHE uses an adaptive histogram clipping threshold that results in more controllable contrast enhancement. Both of these contributions mean that EWDHEed images exhibit much higher contrast, while efficiently preventing over-enhancement. In addition, unlike simple bisecting in ESIHE, EWDHE uses an Otsu-based multilevel threshold that makes the process of determining sub-histograms more flexible and better preserves brightness. Visual comparison and objective assessment demonstrated the superior performance and robustness of EWDHE for a variety of images.

Acknowledgement

This work was supported in part by the National Science Council, Taiwan, under grant NSC 102-2218-E-027-016-MY2.

References

  1. R. C. Gonzalez and R. E. Woods, "Digital Image Processing," 3rd ed., Prentice Hall, 2007.
  2. K. Singh and R. Kapoor, "Image enhancement using exposure based sub image histogram equalization," Pattern Recognition Letters, vol. 36, pp. 10-14, 2014.
    doi: 10.1016/j.patrec.2013.08.024
  3. Y.-T. Kim, "Contrast enhancement using brightness preserving bi-histogram equalization," IEEE Transactions on Consumer Electronics, vol. 43, no. 1, pp. 1-8, 1997.
    doi: 10.1109/30.580378
  4. Y. Wang, Q. Chen, and B. Zhang, "Image enhancement based on equal area dualistic sub-image histogram equalization method," IEEE Transactions on Consumer Electronics, vol. 45, no. 1, pp. 68-75, 1999.
    doi: 10.1109/30.754419
  5. S.-D. Chen and A. R. Ramli, "Minimum mean brightness error bi-histogram equalization in contrast enhancement," IEEE Transactions on Consumer Electronics, vol. 49, no. 4, pp. 1310-1319, 2003.
    doi: 10.1109/TCE.2003.1261234
  6. K. Wongsritong, K. Kittayaruasiriwat, F. Cheevasuvit, K. Dejhan, and A. Somboonkaew, "Contrast enhancement using multipeak histogram equalization with brightness preserving," in Proc. IEEE Asia-Pacific Conference on Circuits and Systems (APCCAS), 1998, pp. 455-458.
    doi: 10.1109/APCCAS.1998.743808
  7. H. Ibrahim and N. S. P. Kong, "Brightness preserving dynamic histogram equalization for image contrast enhancement," IEEE Transactions on Consumer Electronics, vol. 53, no. 4, pp. 1752-1758, 2007.
    doi: 10.1109/TCE.2007.4429280
  8. S.-D. Chen and A. R. Ramli, "Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation," IEEE Transactions on Consumer Electronics, vol. 49, no. 4, pp. 1301-1309, 2003.
    doi: 10.1109/TCE.2003.1261233
  9. K. S. Sim, C. P. Tso, and Y. Y. Tan, "Recursive sub-image histogram equalization applied to gray scale images," Pattern Recognition Letters, vol. 28, no. 10, pp. 1209-1221, 2007.
    doi: 10.1016/j.patrec.2007.02.003
  10. T. Kim and J. Paik, "Adaptive contrast enhancement using gain-controllable clipped histogram equalization," IEEE Transactions on Consumer Electronics, vol. 54, no. 4, pp. 1803-1810, 2008.
    doi: 10.1109/TCE.2008.4711238
  11. P. Liao, T. Chen, and P. Chung, "A fast algorithm for multilevel thresholding," Journal of Information Science and Engineering, vol. 17, pp. 713-727, Sept. 2001.
  12. M. Hanmandlu, O. P. Verma, N. K. Kumar, and M. Kulkarni, "A novel optimal fuzzy system for color image enhancement using bacterial foraging," IEEE Transactions on Instrumentation and Measurement, vol. 58, no. 8, pp. 2867-2879, 2009.
    doi: 10.1109/TIM.2009.2016371
  13. McGill Calibrated Colour Image Database. Available online: http://tabby.vision.mcgill.ca/html/browsedownload.html
  14. S. S. Agaian, B. Silver, and K. A. Panetta, "Transform coefficient histogram-based image enhancement algorithms using contrast entropy," IEEE Transactions on Image Processing, vol. 16, no. 3, pp. 741-758, 2007.
    doi: 10.1109/TIP.2006.888338


