PLoS ONE
Public Library of Science
Image-quality metric system for color filter array evaluation
Volume: 15, Issue: 5
DOI 10.1371/journal.pone.0232583
Abstract

A modern color filter array (CFA) output is rendered into the final output image using a demosaicing algorithm. During this process, the rendered image is affected by the optical and carrier crosstalk of the CFA pattern and by the demosaicing algorithm. Although many CFA patterns have been proposed thus far, and although image-quality (IQ) evaluation items using local characteristics or a specific domain have been created, an evaluation system capable of comprehensively assessing the IQ of each CFA pattern has yet to be developed. Hence, we present an IQ metric system to evaluate the IQ performance of CFA patterns. The proposed CFA evaluation system includes newly proposed metrics, such as the moiré robustness using the experimentally determined moiré starting point (MSP) and the achromatic reproduction (AR) error, as well as existing metrics, such as the color accuracy using CIELAB, the color reproduction error using spatial CIELAB, the structural information using the structural similarity, the image contrast based on MTF50, the structural and color distortion using the mean deviation similarity index (MDSI), and the perceptual similarity using the Haar wavelet-based perceptual similarity index (HaarPSI). Through our experiment, we confirmed that the proposed CFA evaluation system can assess the IQ of existing CFAs. Moreover, the proposed system can be used to design or evaluate new CFAs by automatically checking the individual performance of the metrics used.


1. Introduction

Improvements in structure and operation, techniques for reducing the pixel size, and a wide dynamic range in CMOS image sensors have recently become important issues for the development of smaller advanced cameras. The most representative CFA, the Bayer CFA, is often used to implement a single sensor for color images [1]. Camera manufacturers are developing CFAs with different colors and structures to improve the picture quality. Factors for evaluating camera quality include the color accuracy, color difference, image contrast, and dynamic range.

The development of new image sensors, particularly CFAs, means the development of CFA elements and structures that have a better IQ in terms of the human visual system (HVS) than previously developed CFAs. Therefore, successfully developing a new image sensor requires evaluation criteria for the image rendered through the image-processing pipeline. The pipeline typically consists of demosaicing, noise reduction, white balance, CFA interpolation, color conversion, and gamma correction for rendering the sensor data. However, an overall picture quality assessment of a newly developed CFA often occurs later than the design of the new CFA. In addition, an evaluation of the IQ after the CFA development is conducted by measuring the IQ evaluation items developed thus far, either one by one or in groups. This is because some CFAs are not applicable before the image-processing pipeline, or because there is no comprehensive HVS-based IQ evaluation system after the pipeline. A framework for analyzing the image characteristics of CFAs, the Image Systems Engineering Toolbox (ISET), was first developed by Wandell et al. [2-4]. ISET is camera simulation software that receives spectral information from scenes and illuminants and creates rendered images through optical modeling, such as camera lens and camera sensor simulation. The ISET software has been verified using data from various devices [5, 6]. In the present study, an IQ analysis of the major CFAs developed to date was conducted on this framework using existing and newly proposed metrics.

Color images are acquired through multiple sensors or a single sensor. Although multiple sensors can acquire high-quality images, they have problems in terms of size and price because they require as many sensors as the number of color planes constituting the color image. As a result, mobile devices acquire color images using a single sensor. To produce color images from a single sensor, an array of color filters is attached over the sensor [7, 8]. The color arrangement within a single sensor provides only a one-channel color signal at a particular location, and the color components of the other two channels are therefore lost. Thus, to obtain a full RGB three-channel color image from a single-sensor camera, the two color-channel signals lost at a particular location must be interpolated. This color interpolation process is called demosaicing [7]. In this paper, bilinear [9], Laplacian [10], adaptive Laplacian [11], and projection onto convex sets (POCS) [12] interpolation are considered for evaluating CFAs.
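To make the interpolation step concrete, the following minimal sketch demosaics a Bayer (RGGB) mosaic bilinearly by convolving each sparsely sampled color plane with a fixed averaging kernel. It illustrates the simplest of the methods above only; it is not the ISET implementation used in our simulations.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic):
    """Bilinear demosaicing of a Bayer (RGGB) mosaic.

    mosaic: 2D array sampled through a Bayer CFA.
    Returns an H x W x 3 RGB image. A minimal sketch, not the
    ISET implementation used in the paper.
    """
    h, w = mosaic.shape
    # Binary masks marking which pixel holds which color (RGGB tiling).
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Kernels that average the available neighbors of each missing sample.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    return np.stack([
        convolve(mosaic * r_mask, k_rb),   # R plane
        convolve(mosaic * g_mask, k_g),    # G plane
        convolve(mosaic * b_mask, k_rb),   # B plane
    ], axis=-1)
```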

Many CFA patterns with different primary colors have been developed to date, as highlighted in [13], namely, RGB [1, 14-17], RGBE [18, 19], CMY [16, 20-23], and RGBW [24-30] CFAs. The CFA pattern affects the resolution, sharpness, aliasing, reconstruction errors, and dynamic range of the sensor-captured image. The captured image is intrinsically affected by the optical features of the CFA pattern as well as the spectrometry and response characteristics of the image sensor. However, it is also related to the HVS.

The mean squared error (MSE), obtained by averaging the squared intensity differences between an original image and its reproduction, is the most widely used image-quality metric (IQM). IQMs based on the MSE are easy to calculate and have clear physical meanings. However, they do not match the perceived visual quality well (e.g., [31-36]). In addition, the root mean square (RMS) metric does not include any information about the device used to present the images. In other words, the RMS error value is not a calibrated value. Because display technologies intrinsically have nonlinear transfer functions, the displayed image looks different on each display; for example, each display has a different display gamma. Therefore, uncalibrated images are not suitable for measuring perceptual differences. For these reasons, HVS-based IQ evaluation metrics have long been studied.

To address the failure of MSE-based methods to reflect perceived quality, numerous IQMs have been proposed [37, 40-45]. CIELAB, for measuring color reproduction errors, was designed to approximate human-vision-based color discrimination and aspires to achieve perceptual uniformity. However, it turns out that color discrimination is determined by numerous factors, including the spatial pattern of the image and the visual processing [38, 39]. S-CIELAB, a spatial extension of CIELAB, was presented by adding a color separation and spatial filtering procedure to CIELAB to account for human spatial-color sensitivity [40, 41]. Meanwhile, the structural similarity (SSIM) was presented to treat image degradation as a perceived change in structural information [42]. The SSIM is based on the assumption that the HVS is highly adapted to extract structural information from the visual field. In addition, the modulation transfer function (MTF), or spatial frequency response (SFR), describes the image resolution and perceived sharpness as an objective assessment of the imaging performance of an optical system [43-45]. Recently, IQ assessment methods closely correlated with the HVS, such as the MDSI [46] and HaarPSI [47], were presented. The MDSI measures the structural and color distortion using a combination of the gradient similarity (GS) and the chromaticity similarity. The HaarPSI evaluates local as well as global similarities using the coefficients obtained from a Haar wavelet decomposition.

Gasparini et al. proposed a no-reference metric for measuring demosaicing artifacts through psycho-visual experiments [48]. Using a psycho-visual comparison test adopting a single- or double-stimulus method, they analyzed the subjective evaluation of the demosaicing artifacts. They then introduced a no-reference metric for demosaicing artifacts based on measures of blurriness and chromatic and achromatic distortions that fit the psycho-visual experiments. Whereas that method focuses on defining a no-reference metric for the subjective (perceptual) IQ assessment of demosaicing methods in a given CFA structure, this paper introduces a combination of proven metrics for the automatic and objective IQ evaluation of CFA structures as well as demosaicing methods. This paper proposes a CFA IQM system for quantitatively evaluating existing CFAs or CFAs to be developed in the future. The metrics for measuring the CFA performance include 1) the color error using the CIELAB color metric on the Macbeth Color Checker (MCC), 2) the color reproduction error (visible distortion) using S-CIELAB, 3) the SSIM, 4) the image contrast using MTF50 (SLANTED-BAR), 5) the moiré robustness using the MSP [49, 50], 6) the AR error using a GRAY-BAR, 7) the structural and color distortion using the MDSI, and 8) the perceptual similarity using the HaarPSI. CIELAB, S-CIELAB, SSIM, MTF50, MDSI, and HaarPSI are existing IQMs, whereas the MSP and AR are newly proposed metrics in this study.

2. Materials and methods

2.1. Kind of CFAs

Various CFAs used commercially or for research are shown in Fig 1. The mosaic of the Bayer CFA arranges RGB color filters on a square grid of photo-sensors; the pattern is 50% green, 25% red, and 25% blue [1]. First, the figure includes RGB1 (Yamanaka CFA [14]), RGB2 (Lukac CFA [15]), RGB3 (vertical stripe CFA [16]), RGB4 (diagonal stripe CFA [16]), and RGB5 (modified Bayer CFA [15, 17]). There are no known studies addressing the performance issues of the non-Bayer RGB CFAs (RGB1-RGB5) in such a comprehensive and systematic manner. In addition, CMY1 uses a CFA of secondary colors to allow more of the incident light to be detected rather than absorbed [16]. CMY2 (switchable CMY, RGBCY CFA [20]) has a pair of CMY CFAs that can switch between multiple sets of color primaries (namely, RGB, CMY, and RGBCY) in the same camera. These CFA shift structures and switchable primaries are known to be useful for improving the optimal color fidelity and signal-to-noise ratio in various types of scenes. CMY3 (CGMY CFA [21-23]) is a CFA pattern using subtractive colors, such as cyan, magenta, and yellow (C, M, Y), together with green to deal with low-light conditions. RGBW1 (RGB and white (W)) is a CFA pattern that includes a white (or transparent, panchromatic) filter element with high sensitivity [24-27]. Panchromatic pixels generate the luminance information, whereas chromatic pixels such as R, G, and B produce the color information. RGBW2 is a CFA in which RGB pixels and panchromatic pixels diagonally alternate in a minimal repeating unit of 4 × 4 pixels [28, 29]. RGBW3 [30] has first and second lines in which the filter elements for the luminance component are disposed in each line and are offset from the luminance filter elements in the adjacent line; the first line includes filter elements for two color components, and the second line includes filter elements for a single color component.

Fig 1. Test CFAs for performance comparison: (a)–(f) RGB-, (g)–(i) CMY-, and (j)–(l) RGBW-based CFAs.

2.2. Proposed CFA IQ evaluation system

To simulate the proposed CFA IQ evaluation system, we used ISET [5, 6] with bilinear [9], Laplacian [10], adaptive Laplacian [11], and POCS [12] demosaicing. As mentioned in section 1, the proposed CFA evaluation system consists of eight metrics for the respective evaluations of the color accuracy, color reproduction, structural information, image contrast, moiré phenomenon, and noise. Fig 2 shows the imaging pipeline of the proposed CFA IQ evaluation system. In the proposed system, the CFA structure and demosaicing method are changeable, and the CFA IQ evaluation results are plotted on polar coordinates.

Fig 2. Proposed CFA IQ evaluation system.

Fig 3 shows the test input images used for the proposed CFA IQ evaluation system: (a) SLANTED-BAR (ISO 12233 resolution chart) [51] for calculating the image contrast using MTF50, (b) MCC [52] for measuring the color error using CIELAB, (c) PUPPY [53] for measuring the color reproduction error (visible distortion) using S-CIELAB, analyzing the structural information using the SSIM, the structural and color distortion using the MDSI, and the perceptual similarity using the HaarPSI, (d) LINEAR-CHIRP for analyzing the moiré robustness using the MSP, and (e) GRAY-BAR for analyzing the noise robustness. Applying an appropriate and high-quality initial dataset is essential for accurately assessing the system performance. Of the test input images, PUPPY is the only multispectral scene. The ISET camera simulator calculates the sensor response of the multispectral scene and then computes the CIE XYZ value at each pixel location. The MCC color image was created based on the Gretag-MCC [52]. The rest of the test input images are color images created from algorithmically generated patterns. PUPPY has a 32-bit resolution and a spectral wavelength range of 400–700 nm, and is illuminated by a D65 illuminant with a mean luminance of 100 cd/m2 (the mean luminance is varied over 3, 6, 12, 50, 100, 200, and 400 cd/m2 for the S-CIELAB and SSIM experiments using PUPPY). The field of view (FOV) is 2° for the SLANTED-BAR, 30° for the MCC, and 10° for PUPPY, LINEAR-CHIRP, and GRAY-BAR. The resolution is 636 × 720 for the SLANTED-BAR, MCC, and PUPPY, and 500 × 500 for LINEAR-CHIRP and GRAY-BAR.
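For reference, a LINEAR-CHIRP-style chart of the kind described above can be generated as follows. The start and end frequencies are illustrative placeholders; the exact frequency ramp of the chart used in the system is not specified in the text.

```python
import numpy as np

def linear_chirp_chart(size=500, f0=0.5, f1=60.0):
    """Create a horizontal linear-chirp test chart: vertical black/white
    stripes whose spatial frequency grows linearly from f0 to f1 cycles
    across the image width. f0 and f1 are illustrative defaults.
    """
    x = np.linspace(0.0, 1.0, size)
    # Instantaneous frequency f(x) = f0 + (f1 - f0) * x; the phase is
    # its integral: 2*pi*(f0*x + 0.5*(f1 - f0)*x**2).
    phase = 2 * np.pi * (f0 * x + 0.5 * (f1 - f0) * x ** 2)
    line = 0.5 * (1 + np.sign(np.cos(phase)))  # hard black/white stripes
    return np.tile(line, (size, 1))            # replicate down the rows
```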

Fig 3. Test input images used in the proposed CFA IQ evaluation system: (a) SLANTED-BAR, (b) MCC, (c) PUPPY, (d) LINEAR-CHIRP, and (e) GRAY-BAR.

2.2.1. Color error using CIELAB with MCC

The Delta E metric, calculated in CIE L*a*b*, is one of the best-known perceptual color fidelity metrics [37]. The spectral power distribution derived from the radiant power emitted by two light sources is transformed into CIE XYZ values. CIE XYZ represents the spectral sensitivity of the three types of cone cells that are sensitive to the RGB primary colors; this means that CIE XYZ values are a device-invariant representation of color. The CIE XYZ values are transformed into the L*a*b* space, in which an equal perceptual color difference corresponds to an equal distance. The perceptual color difference between the reference (ideal) image and the rendered image can then be calculated by taking the Euclidean distance between the two images in the L*a*b* space [54]. The color difference is represented in ΔE* units. To evaluate the color accuracy of an ideal image (using an ideal sensor) and a CFA output image, the MCC shown in Fig 3(B) is rendered under the D65 illuminant in the proposed system. The CIELAB ΔE* is calculated as the Euclidean distance between two colors, where the three numerical values are L* for the lightness and a* and b* for the green-red and blue-yellow color components. The metrics for the MCC patches include the color error, the lightness error for the six gray patches, and the xy chromaticity.
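As a minimal sketch, the per-pixel ΔE* computation reduces to a Euclidean distance in L*a*b*, averaged over the image; the XYZ-to-L*a*b* conversion is assumed to have been done upstream (e.g., by the ISET pipeline).

```python
import numpy as np

def delta_e_ab(lab_ref, lab_test):
    """CIE 1976 color difference ΔE*ab between two images.

    lab_ref, lab_test: H x W x 3 arrays of L*, a*, b* values.
    Returns the mean per-pixel Euclidean distance in L*a*b* space.
    """
    diff = lab_ref.astype(float) - lab_test.astype(float)
    de_map = np.sqrt(np.sum(diff ** 2, axis=-1))  # per-pixel ΔE*
    return float(de_map.mean())
```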

2.2.2. Color reproduction error (visible distortion) using S-CIELAB

S-CIELAB Delta E describes how a spatial pattern causes a visual difference based on the assumption of a color-pattern separability, whereas the CIELAB Delta E metric estimates the magnitude of the difference between two color stimuli in a uniform color space. To apply the CIELAB Delta E metric to color images, the spatial patterns of the image are considered [40, 41, 55]. Fig 4 shows the S-CIELAB procedure. The S-CIELAB includes the color separation and spatial-filtering process convolving with kernels of different sizes and shapes before the CIELAB step. The S-CIELAB Delta E metric extends the CIELAB to include the spatial sensitivity, and represents the visibility of the distortion in an image.

Fig 4. S-CIELAB model.

S-CIELAB is largely composed of three steps. The first step is a color separation step, in which the original (ideal) image and test (rendered using a CFA) image are transformed into the luminance, red/green, and blue/yellow components [56]. The second step is a spatial filtering step in which the respective separated components are filtered using spatial filters based on the spatial sensitivity of the human eye. Finally, the third step is the CIE-XYZ transformation of the filtered components before the CIELAB step. S-CIELAB can obtain a ΔEs* map using the CIELAB color difference equation. The error map describes where the test image is visually distorted as compared to the original image.
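A rough sketch of the first two steps is given below, assuming CIE XYZ input. The opponent transform matrix follows commonly published S-CIELAB coefficients, but the Gaussian widths are placeholders: in the actual model, each channel's kernel is a sum of Gaussians whose spreads are calibrated by the viewing geometry (samples per degree).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Opponent-color transform (luminance, red-green, blue-yellow) from XYZ,
# following commonly published S-CIELAB coefficients.
XYZ_TO_OPP = np.array([[ 0.279,  0.720, -0.107],
                       [-0.449,  0.290, -0.077],
                       [ 0.086, -0.590,  0.501]])

def s_cielab_prefilter(xyz, sigmas=(0.5, 1.5, 2.0)):
    """Color separation and spatial filtering stages of S-CIELAB (sketch).

    xyz: H x W x 3 image in CIE XYZ. Each opponent channel is blurred
    with a single Gaussian here; the published model uses sums of
    Gaussians whose widths depend on samples per degree, so the sigmas
    are placeholders. Returns the filtered image back in XYZ, ready for
    the usual CIELAB difference computation.
    """
    opp = xyz.astype(float) @ XYZ_TO_OPP.T
    for c, sigma in enumerate(sigmas):
        opp[..., c] = gaussian_filter(opp[..., c], sigma)
    return opp @ np.linalg.inv(XYZ_TO_OPP).T
```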

The S-CIELAB difference between the original (ideal) image and the test (rendered) image estimates the reproduction error and visual distortion. Except for the three steps added in S-CIELAB, the reproduction error calculation of S-CIELAB is the same as that of CIELAB. The S-CIELAB difference describes the spatial sensitivity as well as the color sensitivity. The S-CIELAB difference is the same as the CIELAB for a uniform region, although it finds a visual difference more accurately than the CIELAB for a complex pattern region. The color difference of S-CIELAB is expressed in ΔEs* units in this study. To evaluate the color reproduction of a CFA output image and an ideal reproduction, we use the PUPPY image shown in Fig 3(C).

2.2.3. Structural information using SSIM

Under the hypothesis that the human visual system (HVS) is highly adapted to retrieve structural information from a scene, the SSIM was devised for quality assessment based on the degradation of structural information [42]. Specifically, the SSIM index is used for measuring the similarity between the original (ideal) image and the target image (the output image of a CFA). Metrics based on the peak signal-to-noise ratio and the MSE do not reflect the HVS. By contrast, the SSIM takes into account the perceived image degradation as a loss of structural information. The structural information of an image signifies the strong dependencies between pixels owing to their spatial closeness. Such spatial dependencies carry significant information regarding the structure of an object in any visual scene.

The system diagram of the SSIM is shown in Fig 5. The SSIM system separates the similarity measurement into three comparisons: luminance, contrast, and structure. For x (ideal image) and y (CFA output image), two nonnegative image signals, a luminance comparison function l(x,y), a contrast comparison function c(x,y), and a structural comparison function s(x,y) are calculated. Then, the three comparisons are combined, and the SSIM index between image x and y is obtained as follows:

$$\mathrm{SSIM}(x,y)=[l(x,y)]^{\alpha}\,[c(x,y)]^{\beta}\,[s(x,y)]^{\gamma}.$$

Fig 5. SSIM procedure.

In the case of α = β = γ = 1, a specific form of the SSIM index is as follows:

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+C_1)(2\sigma_{xy}+C_2)}{(\mu_x^2+\mu_y^2+C_1)(\sigma_x^2+\sigma_y^2+C_2)},$$

where the constants are $C_1=(K_1L)^2$ and $C_2=(K_2L)^2$. Here, L is the dynamic range of the pixel values (255 for 8-bit grayscale images), and K1 ≪ 1 and K2 ≪ 1. To evaluate the structural information of an ideal image and the output images of the CFAs, the PUPPY image shown in Fig 3(C) is used.

2.2.4. Image contrast using MTF50 by slanted-bar

The MTF is calculated using ISO 12233 (slanted-bar) with ISET [41, 42]. The MTF accurately describes the image contrast attenuation at each spatial frequency. To obtain the MTF of an imaging system, ISO 12233 examines a slanted edge for all color channels [45]. The luminance MTF can be derived by combining the respective MTFs of all color channels. The edge response function is the integral of the line spread function (LSF), so the LSF is recovered by differentiation. A Fourier transform of the LSF then provides the corresponding MTF. Accordingly, the method analyzes the edge response to compute the MTF through the LSF.

The slanted-bar method specified in ISO 12233 integrates the line measurements at the edge location. These measurements overcome the undersampling of the imaging system by constructing a super-sampled edge profile. Fig 6(A) shows a rectangular region near the slanted edge. The derivative along horizontal lines for all color channels is integrated into the edge response of the imaging system. From the edge response, the luminance MTF of the system, as well as the LSF and MTF of all color channels, are derived.

Fig 6. (a) Created slanted-bar and (b) its corresponding MTF50 curve.

Fig 6(B) shows the MTF for all color channels and the luminance MTF of the system. The horizontal and vertical axes indicate the spatial frequency in cycles per millimeter at the sensor surface and the contrast reduction, respectively. The contrast reduction on the vertical axis represents the SFR in the ISO standard. The red, green, and blue lines represent the MTF of the R, G, and B channels, and the black line represents the luminance MTF. The luminance MTF is calculated as the weighted sum (luminance = 0.3R + 0.6G + 0.1B) of the respective color channels. In addition, the Nyquist sampling frequency (cycles/mm), the MTF50, and the percent alias are shown in the figure. The MTF50 indicates the spatial frequency at which the luminance MTF falls to 0.5. The percent alias, namely the percentage of aliasing, is calculated as the area under the luminance MTF to the right of the Nyquist frequency. The Nyquist frequency is indicated by the vertical red line in the figure. To analyze the image contrast of a rendered image using a CFA, the SLANTED-BAR image shown in Fig 3(A) is applied.
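A simplified 1-D version of this computation is sketched below. It omits the 4× edge super-sampling of the full ISO 12233 procedure and assumes a single luminance profile across a near-vertical edge; the pixel pitch is an illustrative value.

```python
import numpy as np

def mtf50_from_edge(esf, dx_mm=0.0028):
    """MTF50 from an edge spread function (simplified, 1-D sketch).

    esf: luminance profile across a near-vertical edge (one row).
    dx_mm: pixel pitch in millimeters (2.8 um here, illustrative).
    The ISO 12233 method additionally super-samples the edge using the
    known edge slant; that step is omitted here.
    """
    lsf = np.diff(esf.astype(float))            # line spread function
    lsf *= np.hamming(lsf.size)                 # window to reduce leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                               # normalize DC to 1
    freqs = np.fft.rfftfreq(lsf.size, d=dx_mm)  # cycles per mm

    # First frequency where the MTF falls below 0.5, linearly interpolated.
    below = np.where(mtf < 0.5)[0]
    if below.size == 0:
        return float(freqs[-1])
    i = below[0]
    t = (0.5 - mtf[i - 1]) / (mtf[i] - mtf[i - 1])
    return float(freqs[i - 1] + t * (freqs[i] - freqs[i - 1]))
```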

2.2.5. Moiré robustness using MSP through a linear chirp

CFA-output images can be degraded by the appearance of a moiré pattern occurring in the digital imaging system. A color moiré is an artificial color banding that can appear in images with repetitive patterns of high spatial frequencies [49, 50]. The color moiré is the result of aliasing (image energy above the Nyquist frequency) in an image sensor. It is difficult to quantitatively estimate a moiré phenomenon because it is spatially irregular and its color band varies. Thus, we use a linear-chirp pattern (gradually narrowing the widths of the black and white stripes) running from a low to a high spatial frequency to quantitatively estimate the moiré robustness. The linear-chirp signal is a sinusoidal wave whose frequency increases linearly. Because a moiré phenomenon is an unintended color band, we analyze the chroma magnitude $\sqrt{(a^*)^2+(b^*)^2}$ (this ab color value is regarded as the moiré) for the central horizontal line of the linear-chirp pattern. In addition, it is filtered using a one-dimensional (1 × 5) mean filter to reduce the surrounding noise. Then, scanning from low to high frequency, the first point where the moiré value exceeds a threshold is regarded as the MSP, because a tiny moiré in a low-frequency region does not affect the human eye.

Fig 7 shows an example of a moiré measurement using a linear-chirp pattern. We can see that an unintended color band occurs in a high-frequency region of the CFA output image in Fig 7(A), even though the input linear-chirp pattern shown in Fig 3(D) consists only of black and white stripes. Fig 7(B) shows an ab color image for the CFA output image. In the ab color image, the low-frequency region has little unintentional color value and thus has little moiré phenomenon, whereas the high color value in the high-frequency region means that the moiré phenomenon is severe. Fig 7(C) and 7(D) represent the color value for the central horizontal line and the average color value of a 5 × 5 block for the central horizontal direction on the ab color image. We can confirm that the moiré value becomes higher toward a high frequency. As shown in Fig 7(C), the moiré is extremely irregular in nature regardless of the frequency band. Nevertheless, it can be seen in Fig 7(D) that the moiré increases gradually from low frequency to high frequency through the average color value of the block for the central horizontal direction. The blue and red curves indicate the respective resulting moiré curves by the ideal sensor and arbitrary CFA. We increased the block size to 15 × 15 to better understand the moiré characteristics of each CFA. Using a threshold for the average color value of the block, the MSP for each CFA is analyzed. In addition, the MSP is expressed in units of spatial frequency (cycles per degree).
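The MSP detection itself reduces to a threshold test on the smoothed chroma profile, as sketched below; the default threshold of 25 and the 1 × 15 mean window follow the values given in section 3, and the mapping from column index to cycles per degree (which depends on the chirp parameters and the FOV) is omitted.

```python
import numpy as np

def moire_starting_point(lab_row, threshold=25.0, win=15):
    """Moiré starting point (MSP) along the chirp's horizontal axis.

    lab_row: N x 3 array of L*, a*, b* values on the central horizontal
    line of the rendered LINEAR-CHIRP image (low frequency on the left).
    Returns the first column index where the mean-filtered chroma
    sqrt(a*^2 + b*^2) exceeds the threshold, or None if it never does.
    """
    chroma = np.sqrt(lab_row[:, 1] ** 2 + lab_row[:, 2] ** 2)
    kernel = np.ones(win) / win                  # 1 x win mean filter
    smoothed = np.convolve(chroma, kernel, mode='same')
    above = np.where(smoothed > threshold)[0]
    return int(above[0]) if above.size else None
```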

Fig 7. (a) CFA output image for the LINEAR-CHIRP image, (b) ab color ($\sqrt{(a^*)^2+(b^*)^2}$) image for the CFA output image, (c) color value for the central horizontal line, and (d) mean color value of a 5 × 5 block for the central horizontal direction on the ab color image.

2.2.6. Achromatic reproduction using gray-bar

The AR error of achromatic color is measured as the difference in luminance on the central horizontal line between the ideal and CFA output images for the GRAY-BAR shown in Fig 3(E). The brightness of the GRAY-BAR decreases from one vertical bar to the next across the image. The original GRAY-BAR image is degraded owing to the change in luminance and noise during the rendering process through the CFA. Compared with the output of the ideal sensor, the luminance of the central horizontal line of the image rendered by an arbitrary CFA contains a partial luminance change and white noise. The AR error for a CFA is therefore measured as the average difference in luminance on the central horizontal line between the GRAY-BAR images rendered by the ideal sensor and by the CFA.
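Since the AR error is simply a mean absolute luminance difference along one line, it can be sketched in a few lines; luminance images of the ideal and CFA renderings are assumed as inputs.

```python
import numpy as np

def ar_error(lum_ideal, lum_cfa):
    """Achromatic reproduction (AR) error on the GRAY-BAR chart:
    mean absolute luminance difference along the central horizontal
    line between the ideal-sensor rendering and the CFA rendering.
    """
    row = lum_ideal.shape[0] // 2
    return float(np.mean(np.abs(lum_ideal[row, :].astype(float)
                                - lum_cfa[row, :].astype(float))))
```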

2.2.7. Structural and color distortion using MDSI

The MDSI utilizes gradient and chrominance features to measure structural and color distortion [46]. A gradient-chromaticity similarity map is formed by combining these two similarity maps. First, for the reference (ideal) and distorted (CFA) images R and D, the GS is obtained by

$$GS(x)=\frac{2G_R(x)G_D(x)+C_1}{G_R^2(x)+G_D^2(x)+C_1},$$

where C1 is a constant to control the numerical stability. The gradient-color similarity (GCS) is calculated as follows:

$$\overline{GCS}(x)=\alpha\,\overline{GS}(x)+(1-\alpha)\,\overline{CS}(x),$$

where $\overline{GS}(x)$ and $\overline{CS}(x)$ denote the enhanced gradient and color similarity functions [46]. The MDSI is then defined through deviation pooling:

$$\mathrm{MDSI}=\left[\frac{1}{N}\sum_{i=1}^{N}\left|\,\overline{GCS}_i^{\,1/4}-\frac{1}{N}\sum_{i=1}^{N}\overline{GCS}_i^{\,1/4}\,\right|\right]^{1/4}.$$
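A simplified sketch of this computation is given below. It follows the structure of [46] (gradient similarity, chromaticity similarity on two opponent channels, and deviation pooling with ρ = 1/4), but the fused-image gradient term of the published method is omitted and the constants are indicative only.

```python
import numpy as np
from scipy.ndimage import prewitt

def mdsi(ref, dst, C1=140.0, C3=550.0, alpha=0.6):
    """Simplified MDSI sketch for RGB images in [0, 255] (cf. [46])."""
    ref = ref.astype(float); dst = dst.astype(float)

    def lum(img):   # luminance-like channel
        return 0.30 * img[..., 0] + 0.59 * img[..., 1] + 0.11 * img[..., 2]

    def grad_mag(img):  # Prewitt gradient magnitude
        return np.hypot(prewitt(img, axis=0), prewitt(img, axis=1))

    gR, gD = grad_mag(lum(ref)), grad_mag(lum(dst))
    gs = (2 * gR * gD + C1) / (gR ** 2 + gD ** 2 + C1)

    def chroma(img):  # two opponent chroma channels (H, M)
        h = 0.30 * img[..., 0] + 0.04 * img[..., 1] - 0.35 * img[..., 2]
        m = 0.34 * img[..., 0] - 0.60 * img[..., 1] + 0.17 * img[..., 2]
        return h, m

    hR, mR = chroma(ref); hD, mD = chroma(dst)
    cs = (2 * (hR * hD + mR * mD) + C3) / \
         (hR ** 2 + hD ** 2 + mR ** 2 + mD ** 2 + C3)

    # Combine and apply deviation pooling (clip avoids negative roots).
    gcs = np.clip(alpha * gs + (1 - alpha) * cs, 0.0, None)
    pooled = gcs ** 0.25                          # rho = 1/4
    return float(np.mean(np.abs(pooled - pooled.mean())) ** 0.25)
```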

2.2.8. Perceptual similarity using HaarPSI

The HaarPSI was presented for yielding full-reference IQ assessments [47]. The HaarPSI evaluates local as well as global similarities between two images using the coefficients obtained from a Haar wavelet decomposition. For two grayscale images f1 and f2, the local similarity is computed based on a 2D discrete Haar wavelet transform as follows:

$$HS^{(k)}_{f_1,f_2}[x]=l_\alpha\!\left(\frac{1}{2}\sum_{j=1}^{2}S\!\left(\left|\left(g_j^{(k)}*f_1\right)[x]\right|,\left|\left(g_j^{(k)}*f_2\right)[x]\right|,C\right)\right),$$

where C > 0, k ∈ {1, 2} selects either the horizontal or vertical Haar wavelet filters, S represents the similarity, and $l_\alpha$ is a logistic function. The HaarPSI is computed by

$$\mathrm{HaarPSI}_{f_1,f_2}=l_\alpha^{-1}\!\left(\frac{\sum_{x}\sum_{k=1}^{2}HS^{(k)}_{f_1,f_2}[x]\,W^{(k)}_{f_1,f_2}[x]}{\sum_{x}\sum_{k=1}^{2}W^{(k)}_{f_1,f_2}[x]}\right)^{2},$$

where W denotes a weight map derived from the response of a single low-frequency Haar wavelet filter.
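The sketch below follows this structure (two orientations, similarities from the first two Haar scales, weights from the third scale, and the logistic mapping $l_\alpha$); the filter normalization and border handling are simplifications relative to the published implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def haar_filter(scale, orientation):
    """2-D Haar-type filter g_j^(k) of size 2^scale (sketch)."""
    n = 2 ** scale
    f = np.ones((n, n)) / (n * n)
    f[: n // 2, :] *= -1          # sign split along one axis
    return f if orientation == 0 else f.T

def haarpsi(f1, f2, C=30.0, a=4.2):
    """Simplified HaarPSI sketch for grayscale images in [0, 255] (cf. [47])."""
    f1 = f1.astype(float); f2 = f2.astype(float)
    logistic = lambda x: 1.0 / (1.0 + np.exp(-a * x))
    num, den = 0.0, 0.0
    for k in (0, 1):              # horizontal / vertical orientations
        mags = [(np.abs(convolve2d(f1, haar_filter(j, k), mode='same')),
                 np.abs(convolve2d(f2, haar_filter(j, k), mode='same')))
                for j in (1, 2, 3)]
        # Local similarity HS^(k) from the first two scales.
        sim = sum((2 * m1 * m2 + C) / (m1 ** 2 + m2 ** 2 + C)
                  for m1, m2 in mags[:2]) / 2.0
        weight = np.maximum(mags[2][0], mags[2][1])   # scale-3 weight map W
        num += np.sum(logistic(sim) * weight)
        den += np.sum(weight)
    s = num / den                 # weighted mean of l_a(similarity)
    return float((np.log(s / (1.0 - s)) / a) ** 2)    # (l_a^{-1}(s))^2
```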

2.2.9. Normalization of metrics and their combination

The polar coordinate plot was selected to visualize the overall performance of the test CFAs. To evaluate the test CFAs quantitatively and visually, the eight measured metrics should be normalized to the dynamic range [0, 1]. The smaller the color difference ΔE by CIELAB with the MCC image, the color reproduction error ΔEs by S-CIELAB and the MDSI with the PUPPY image, and the AR error ΔN with the GRAY-BAR image are, the better the reproduction performance and the structural (and color) similarity of the color and luminance by the CFAs. By contrast, the larger the SSIM and HaarPSI values from the PUPPY image, the MTF50 value from the SLANTED-BAR image, and the MSP value from the LINEAR-CHIRP image are, the better the structural information preservation, perceptual similarity, image contrast, and moiré robustness of the CFAs. First, the ΔE, ΔEs, ΔN, and MDSI values measured using CIELAB, S-CIELAB, AR, and MDSI are normalized to give a higher score to a smaller difference value, as follows:

$$\begin{aligned}
n_{\Delta E} &= (\Delta E_{\max}-\Delta E)/(\Delta E_{\max}-\Delta E_{\min})\\
n_{\Delta E_s} &= (\Delta E_{s.\max}-\Delta E_s)/(\Delta E_{s.\max}-\Delta E_{s.\min})\\
n_{\Delta N} &= (\Delta N_{\max}-\Delta N)/(\Delta N_{\max}-\Delta N_{\min})\\
n_{MDSI} &= (MDSI_{\max}-MDSI)/(MDSI_{\max}-MDSI_{\min}),
\end{aligned}$$

where the min and max values of each of the above metrics are ΔEmax = 6, ΔEmin = 2.2, ΔEs.max = 9.5, ΔEs.min = 2.5, ΔNmax = 1.0, ΔNmin = 0.5, MDSImax = 0.5, and MDSImin = 0. By contrast, the measured SSIM, MTF50, MSP, and HaarPSI for the structural information, image contrast, moiré starting point, and perceptual similarity are normalized to give a higher score to a larger measurement value, as follows:
$$\begin{aligned}
n_{SSIM} &= (SSIM-SSIM_{\min})/(SSIM_{\max}-SSIM_{\min})\\
n_{MTF50} &= (MTF50-MTF50_{\min})/(MTF50_{\max}-MTF50_{\min})\\
n_{MSP} &= (MSP-MSP_{\min})/(MSP_{\max}-MSP_{\min})\\
n_{HaarPSI} &= (HaarPSI-HaarPSI_{\min})/(HaarPSI_{\max}-HaarPSI_{\min}),
\end{aligned}$$

where the min and max values of the respective metrics are SSIMmax = 1, SSIMmin = 0, MTF50max = 110, MTF50min = 30, MSPmax = 150, MSPmin = 20, HaarPSImax = 1, and HaarPSImin = 0. The SSIM is originally calculated within [0, 1]. The min and max values used to calculate the score of each metric were empirically derived by considering the distribution of the measured values of the test CFAs for each metric.
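Putting the normalization together, the overall score of a CFA is the average of the eight normalized scores, as sketched below with the ranges stated above. The raw values in the example are those of the Bayer CFA with bilinear demosaicing from Table 2.

```python
import numpy as np

# Empirical [min, max] ranges from section 2.2.9.
RANGES = {
    'dE':      (2.2,  6.0),
    'dEs':     (2.5,  9.5),
    'dN':      (0.5,  1.0),
    'MDSI':    (0.0,  0.5),
    'SSIM':    (0.0,  1.0),
    'MTF50':   (30.0, 110.0),
    'MSP':     (20.0, 150.0),
    'HaarPSI': (0.0,  1.0),
}
LOWER_IS_BETTER = {'dE', 'dEs', 'dN', 'MDSI'}

def cfa_score(measured):
    """Normalize raw metric values to [0, 1] (higher = better) and
    average them into a single CFA score, as used for the polar plot."""
    scores = {}
    for name, value in measured.items():
        lo, hi = RANGES[name]
        v = float(np.clip(value, lo, hi))
        scores[name] = (hi - v) / (hi - lo) if name in LOWER_IS_BETTER \
                       else (v - lo) / (hi - lo)
    return scores, sum(scores.values()) / len(scores)

# Example: raw values of the Bayer CFA with bilinear demosaicing (Table 2).
scores, total = cfa_score({'dE': 3.00, 'dEs': 3.58, 'dN': 0.62, 'MDSI': 0.34,
                           'SSIM': 0.73, 'MTF50': 79.20, 'MSP': 120.0,
                           'HaarPSI': 0.75})
```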

3. Results and discussion

We simulated the performances of all test CFAs with the eight metrics and the test input images. The bilinear, Laplacian, adaptive Laplacian, and POCS demosaicing methods are used for each CFA in the proposed system. Figs 8 to 15 and Table 1 show comparisons between the test CFAs using bilinear demosaicing, while Figs 16 and 17 and Tables 2 and 3 show overall comparisons between the test CFAs for all the demosaicing methods used in this paper.

Fig 8. Color difference error of 24 color patches for test CFAs.
Fig 9. xy chromaticity diagram of test CFAs for MCC.
Fig 10. Color reproduction error using S-CIELAB and structural information results using SSIM for test CFAs.
Fig 11. MTF50 of (a) R (red), (b) G (green), (c) B (blue), and (d) K (black) color for test CFAs.
Fig 12. Output image of test CFAs for LINEAR-CHIRP image, ab color image for output image, color value for central horizontal line of color image, and filtered color value using a one-dimensional (1 × 5) mean filter for the color value.
Fig 13. Mean color value curves using (a) 1 × 5 and (b) 1 × 15 mean filters and MSP results for test CFAs.
Fig 14. Luminance value curve and mean AR error value for test CFAs.
Fig 15. MDSI and HaarPSI results for test CFAs.
Fig 16. Polar coordinate visualization for test CFAs (magenta dotted line: bilinear, cyan dotted line: Laplacian, green dotted line: adaptive Laplacian, blue dotted line: POCS, and red solid line: average).
Fig 17. CFA ranks for (a) bilinear, (b) Laplacian, (c) adaptive Laplacian, (d) POCS, and (e) all demosaicing methods.
Table 1. Mean delta E for 24 color patches and mean delta L for 6 achromatic patches for test CFAs.

Metrics    Bayer   RGB1    RGB2    RGB3    RGB4    RGB5    CMY1    CMY2    CMY3    RGBW1   RGBW2   RGBW3
Mean ΔE    3.00    2.95    2.93    2.94    2.97    2.95    3.70    3.69    2.61    2.58    2.63    2.60
Mean ΔL    3.96    4.01    4.03    4.06    4.01    4.06    3.95    3.86    3.97    2.16    2.15    2.13
Table 2. Comparison results of all metrics for test CFAs according to demosaicing methods.

Metric     Bayer   RGB1    RGB2    RGB3    RGB4    RGB5    CMY1    CMY2    CMY3    RGBW1   RGBW2   RGBW3

Bilinear
ΔEs        3.58    3.25    3.47    3.50    3.24    3.46    5.28    5.28    6.09    3.61    3.79    3.80
SSIM       0.73    0.79    0.75    0.74    0.78    0.76    0.76    0.78    0.66    0.75    0.76    0.76
MSP        120.00  68.00   88.00   50.00   118.00  85.00   75.00   74.00   33.00   138.00  55.00   65.00
MTF50      79.20   75.00   77.60   60.80   74.40   75.20   70.90   78.80   70.00   74.20   60.00   65.20
ΔE         3.00    2.95    2.95    2.94    2.97    2.95    3.70    3.69    2.61    2.58    2.60    2.60
ΔL         0.62    0.67    0.67    0.66    0.59    0.67    0.72    0.74    0.80    0.63    0.62    0.63
MDSI       0.34    0.34    0.34    0.34    0.33    0.34    0.37    0.37    0.40    0.35    0.36    0.36
HaarPSI    0.75    0.75    0.75    0.74    0.75    0.75    0.68    0.69    0.60    0.70    0.68    0.68

Laplacian
ΔEs        3.67    3.38    3.75    5.10    4.14    4.50    4.87    4.41    5.86    2.79    3.34    4.13
SSIM       0.69    0.67    0.81    0.80    0.64    0.84    0.85    0.87    0.78    0.85    0.67    0.81
MSP        55      88      65      85      86      105     79      88      45      110     89      77
MTF50      98      84.2    94.40   77.0    81.52   67.80   69.8    65.6    74.00   83.00   89.00   87.60
ΔE         2.93    2.93    3.31    3.60    3.65    3.55    2.94    2.97    2.39    2.80    3.52    3.68
ΔL         0.75    0.79    0.63    0.78    0.77    0.87    0.61    0.54    0.59    0.84    0.75    0.59
MDSI       0.36    0.32    0.43    0.28    0.37    0.41    0.43    0.34    0.32    0.30    0.30    0.25
HaarPSI    0.66    0.75    0.68    0.68    0.65    0.69    0.63    0.72    0.69    0.66    0.71    0.72

Adaptive Laplacian
ΔEs        3.74    3.14    3.01    4.23    5.21    3.79    5.17    4.96    5.64    3.18    2.64    3.21
SSIM       0.68    0.65    0.76    0.64    0.59    0.79    0.89    0.78    0.86    0.63    0.87    0.70
MSP        147     78      89      92      95      93      85      87      69      95      93      89
MTF50      104.4   63.00   74.00   82.6    88.23   79.3    63      72.2    65.40   92.8    95.2    85.20
ΔE         2.95    3.12    3.51    3.12    3.14    3.15    3.24    3.31    3.34    3.11    3.83    2.97
ΔL         0.81    0.85    0.75    0.83    0.69    0.62    0.60    0.64    0.55    0.74    0.84    0.78
MDSI       0.36    0.41    0.37    0.31    0.41    0.29    0.35    0.38    0.30    0.25    0.23    0.31
HaarPSI    0.65    0.71    0.66    0.75    0.70    0.59    0.71    0.76    0.66    0.64    0.69    0.79

POCS
ΔEs        4.08    3.98    4.11    3.38    6.13    3.29    4.38    5.33    6.28    3.38    4.63    3.31
SSIM       0.63    0.76    0.63    0.85    0.71    0.88    0.68    0.86    0.68    0.72    0.77    0.79
MSP        70      89      96      98      91      85      71      69      75      115     75      95
MTF50      111.8   73.80   84.80   74.0    73.64   83.40   78.40   69.0    76.40   72.6    87.4    96.00
ΔE         2.94    3.21    2.54    3.33    2.89    2.79    2.94    3.20    2.53    2.56    2.75    3.15
ΔL         0.80    0.73    0.81    0.69    0.86    0.74    0.67    0.55    0.62    0.61    0.63    0.67
MDSI       0.35    0.39    0.39    0.23    0.35    3.36    0.46    0.31    0.42    0.32    0.27    0.25
HaarPSI    0.67    0.68    0.74    0.71    0.63    0.78    0.68    0.69    0.75    0.74    0.65    0.62
Table 3. Performance comparison of test CFAs by normalized metrics.

Demosaic             Bayer   RGB1    RGB2    RGB3    RGB4    RGB5    CMY1    CMY2    CMY3    RGBW1   RGBW2   RGBW3
Bilinear             0.7391  0.6853  0.6995  0.6648  0.7526  0.6958  0.5965  0.6064  0.5299  0.7552  0.6495  0.6637
Laplacian            0.6820  0.6755  0.6849  0.6235  0.6145  0.6082  0.6554  0.7066  0.6488  0.7096  0.6733  0.7047
Adaptive Laplacian   0.7195  0.5967  0.6515  0.6428  0.6463  0.6992  0.6637  0.6603  0.6495  0.6959  0.6973  0.6918
POCS                 0.6724  0.6486  0.6637  0.7096  0.5827  0.7149  0.6361  0.6671  0.6437  0.7438  0.6978  0.7354
Total                0.7033  0.6515  0.6749  0.6602  0.6490  0.6795  0.6379  0.6601  0.6180  0.7261  0.6795  0.6989

Fig 8 shows the color difference of the color patches for the test CFAs using the CIELAB metric and the MCC. In the CIELAB plane, the blue circles indicate the measured values, and the short red lines show the distance from each measurement to its ideal value. All of the test CFAs commonly show a larger color difference in the blue color region. In addition, CMY1 and CMY2 show a large color difference even in the red-yellow region. In the mean Delta E of Table 1, the RGBW CFAs showed the smallest color difference (2.58–2.63), and the color difference of the RGB CFAs (2.93–2.97) was smaller than that of CMY1 and CMY2 (3.69–3.70) but larger than that of CMY3 (2.61) and the RGBW CFAs. In the mean Delta L for the lightness (luminance) error, the RGBW CFAs were the smallest at 2.13–2.16, and the RGB CFAs have the highest lightness error at 4.01–4.06, whereas the CMY CFAs range over 3.86–3.97.

Fig 9 shows the chromaticity of the test CFAs for the MCC. As in the CIELAB color plane of the test CFAs mentioned above, all test CFAs show a larger color difference in the greenish-blue region in the chromaticity diagram as well. Bayer and the RGB CFAs have a larger color difference in the red (or pink) and greenish-blue regions, whereas CMY1 and CMY2 among the CMY CFAs have a larger color difference in the yellow, red, and greenish-blue regions. We can see that CMY3 and the RGBW CFAs among the test CFAs have a relatively smaller color difference over all colors of the MCC.

Fig 10 shows the color reproduction error (ΔEs*) using S-CIELAB and the structural information results using the SSIM for the test CFAs. S-CIELAB typically predicts a lower visibility of the color differences for textured regions. Qualitatively, these predictions are consistent with the measurements of human spatial-color sensitivities. In the color reproduction error, the RGB CFAs (3.24~3.50) performed better than the CMY CFAs (5.28~6.09) and RGBW CFAs (3.61~3.80). All of the CFAs commonly have a remarkable color difference for blue (see the blue block region at the lower-right corner in the respective output images). In addition, CMY1 and CMY2 show a particularly large color difference even in the red (puppy doll region in the output images) and yellow (yellow panel regions at the upper-left corner in the output images) color regions. These results are similar to the color error results of CIELAB described above because S-CIELAB is consistent with the basic CIELAB calculation for large uniform areas.

Because the SSIM measures the structural similarity of the luminance, contrast, and structure between an ideal image and the output image, the closer it is to 1, the better the IQ of the output image, which is contrary to CIELAB and S-CIELAB. All test CFAs obtained almost equally excellent SSIM values (0.73~0.79) except for CMY3. In the results of CIELAB and S-CIELAB, CMY3 showed a better (smaller) color error and color reproduction error for the bluish-green or greenish-blue (upper-center panel region in the output images) color. Thus, we can deduce that CMY3 obtained a poor SSIM value because the structural comparison function among the similarity functions of the SSIM has a lower value (i.e., poor structural similarity) compared to the luminance or contrast comparison functions.

Fig 11 shows the MTF50 of the R, G, B, and K (black) colors for the test CFAs. The MTF measures the contrast reproducibility, namely the ability to resolve the black and white vertical lines in the rendered SLANTED-BAR image. In general, the contrast reproducibility decreases as the spatial frequency increases. In the figure, the small vertical red line (at 227 cycles/mm) indicates the Nyquist frequency.

As described in section 2.2.4, the luminance MTF gives the G color the largest weight among the RGB colors. As a result, the overall distribution of the luminance MTF for the K color is similar to the MTF distribution of the G color. On the other hand, the weights of the R and B colors are smaller than that of the G color, and the MTF distributions of these two colors are similar to each other. Although a CFA may be composed of the same elements, the MTF results vary depending on the location and structure of the elements. In the MTF results for the R and B colors, RGB1, RGB2, and RGB4 among the RGB CFAs, CMY1 and CMY2 among the CMY CFAs, and RGBW1 among the RGBW CFAs show significant contrast reproducibility. In the MTF results for the G color, RGB1, RGB2, RGB4, and RGB5 among the RGB CFAs, CMY2 and CMY3 among the CMY CFAs, and RGBW2 and RGBW3 among the RGBW CFAs show a superior resolution capability. As a result, we confirmed that Bayer (79.20), RGB1 (75.00), RGB2 (77.60), RGB5 (75.20), CMY2 (78.80), and RGBW1 (74.20) show excellent contrast reproducibility according to the spatial frequency.

During the simulation of the moiré phenomenon, the illuminant used was D65, and the mean luminance was set to 100 cd/m2. Fig 12 shows the images output by the test CFAs, the ab color ($\sqrt{(a^*)^2+(b^*)^2}$) image calculated from each output image, the original color values for the central horizontal line of the ab color image, and the filtered color values obtained by applying a one-dimensional (1 × 5) mean filter to the original color values of the central horizontal line. The test CFAs cause unintended color bands, or moiré phenomena, within the high spatial frequency region of the sinusoidal linear-chirp pattern. The LINEAR-CHIRP image does not originally contain any color. However, rendering through a CFA causes unintended color to appear in the high spatial frequency band. In the ab color image, the darker red region indicates that the moiré is severe. The original color value for the central horizontal line of the ab color image contains considerable noise owing to the high-frequency effect. To analyze the moiré color band more quantitatively, a one-dimensional 1 × 5 mean filter was applied to the original color value for the central horizontal line of the ab color image. Bayer, RGB2, RGB4, RGB5, and RGBW1 show color values rising gently from the low to the high spatial frequency band, which indicates robustness to moiré compared to the other test CFAs. It is also noteworthy that, even though all CFAs include the same elements, the moiré pattern differs depending on the location and structure of the elements. RGB1 and RGB3 among the RGB CFAs, CMY2 and CMY3 among the CMY CFAs, and RGBW2 and RGBW3 among the RGBW CFAs already cause a moiré phenomenon within a relatively lower spatial frequency band than the other test CFAs.

Fig 13 shows the mean color value curves of the central horizontal line of the ab color image using (a) 1 × 5 and (b) 1 × 15 mean filters, and the MSP results for the test CFAs. As shown in Fig 13(A), as the spatial frequency increases, the moiré worsens. To evaluate the moiré characteristics more quantitatively, we plotted the mean color value curve using a 1 × 15 mean filter, as shown in Fig 13(B). In the figure, the small red horizontal line indicates the threshold applied to the filtered color value for detecting the MSP, where the moiré starts to occur. The threshold was empirically set to 25. While Bayer (120 cpd), RGB4 (118 cpd), and RGBW1 (138 cpd) show a high MSP performance, RGB3 (50 cpd), CMY3 (33 cpd), and RGBW2 (55 cpd) show a very poor MSP performance.

Fig 14 shows the luminance value curve for the central horizontal line of the output image. The respective gray-bar images rendered by each test CFA contain some noise; however, the overall condition is excellent. We can see the mean AR error value (i.e., the mean noise value), namely, the mean of the absolute difference of the luminance values of the central horizontal line between the gray-bar images rendered by the ideal imaging system and by the test CFAs. It should be noted that the mean AR error of CFAs with the same elements, unlike other metrics (for example, MTF50 or MSP), is similar despite changes in the structure and location of the elements. As a result, the mean AR error of the CMY CFAs (0.7527) is somewhat higher than that of the Bayer CFA (0.6234), the RGB CFAs (0.6511), and the RGBW CFAs (0.6289). In addition, RGB4 among the RGB CFAs has the lowest mean AR error, whereas CMY3 shows the highest.

Fig 15 shows the MDSI and HaarPSI results for the test CFAs. For the MDSI, the closer the value is to 0, the higher the structural and color similarity. All test CFAs show relatively good MDSI values. The closer the GCS is to 1, the higher the structural and color similarity between the original and rendered images. Most CFAs have a lower GCS near the indigo (or blue) color, which means that the similarity is lower in that area. The MDSI values are similar for each CFA; however, the GCS distributions differ (especially in the red and blue color regions). Based on this phenomenon, it can be deduced that the MDSI value may vary according to the color distribution of the input image. Across the test CFAs, the HaarPSI ranges from 0.60 to 0.75. All the rendered images show entirely high HaarPSI values because the degree of distortion is weak compared to the original image. In the perceptual similarity comparison, the RGB-based CFAs show higher HaarPSI values, while the CMY- and RGBW-based CFAs have rather lower HaarPSI values.

Table 2 shows the results of each metric for the test CFAs using different demosaicing methods. The red boxes represent the highest-scoring CFAs for each demosaicing method and metric. RGB4 for bilinear demosaicing, RGBW1 for Laplacian demosaicing, Bayer for adaptive Laplacian demosaicing, and RGB5 for POCS demosaicing have the highest number of red boxes (best metric scores). In addition, over all demosaicing methods, Bayer, RGB1–RGB5, CMY1–CMY3, and RGBW1–RGBW3 have seven, two, zero, one, three, three, one, one, four, seven, two, and two red boxes, respectively.

Fig 16 shows a polar coordinate visualization of the test CFAs through the proposed CFA IQ evaluation system. It can be seen that the performance of the test CFAs can be easily visualized. Table 3 shows the performance comparison of the test CFAs as the average of the normalized metrics mentioned in section 2.2.9. Bayer shows the best performance for adaptive Laplacian demosaicing, obtaining metric scores 0.0203~0.1228 higher than those of the other test CFAs. On the other hand, RGBW1 shows the best performance for bilinear, Laplacian, and POCS demosaicing, with metric scores 0.0026~0.2323, 0.003~0.1014, and 0.0084~0.1611 higher for the respective demosaicing methods. Over all the demosaicing methods used in this paper, the best CFA was RGBW1, which acquired scores 0.0228 to 0.1081 higher than the other test CFAs. Based on the analysis in Table 3, the CFA ranks for the respective and total demosaicing methods are shown in Fig 17. We can see that the metric score difference between the worst and the best CFA for each demosaicing method is significant.

As a result, the proposed CFA IQ evaluation system can be useful for analyzing the IQ characteristics of existing CFA structures or for evaluating the IQ when developing a CFA with a new structure. The respective metrics used in this paper are one example of how to analyze a CFA. The existing and proposed metrics used in this paper quantitatively and objectively evaluate the images rendered by CFAs. In future research, we will incorporate psychophysical (subjective) assessment factors into the CFA image quality assessment, considering various experimental conditions such as the participants (experts and non-experts), the experimental images, experimental settings such as the background illumination and gamma correction of a monitor, and online or in-situ site selection.

4. Conclusions

This paper presented a novel CFA IQ evaluation system that enables a comparative study of the IQMs of output images rendered through various CFA patterns. Although many CFA patterns have been developed over the past few decades, it remains a challenge to design and analyze new CFA patterns for improving the IQ and color reproduction. The proposed CFA evaluation system includes the newly devised metrics MSP and AR, as well as the existing metrics CIELAB, S-CIELAB, SSIM, MTF50, MDSI, and HaarPSI, to evaluate CFA patterns and demosaicing methods from the various perspectives of color accuracy, color reproduction error, AR, structural information, image contrast, moiré robustness, structural distortion, and perceptual similarity for the rendered output images. To analyze the CFA IQ performance more precisely, any parameters concerning the applied metrics can be modified, or novel quantitative metrics can be added to the evaluation system.

Acknowledgements

The author is grateful to Steven Lansel, software engineer at Oculus VR, Facebook Technologies; Munenori Fukunishi of the Division of Information Sciences, Chiba University, Japan; Prof. Brian A. Wandell of Stanford Psychology; and Joyce Farrell, Executive Director of the Stanford Center for Image Systems Engineering, Stanford University.

References

1. BE Bayer. Color imaging array. US Patent 3,971,065, 1975.
2. JE Farrell, F Xiao, PB Catrysse, BA Wandell. A simulation tool for evaluating digital camera image quality. Proc. SPIE. 2003;5294:124–131. doi: 10.1117/12.537474
3. JE Farrell, PB Catrysse, BA Wandell. Digital camera simulation. Applied Optics. 2012;51:A80–A90. doi: 10.1364/AO.51.000A80
4. SP Lansel. Local linear learned method for image and reflectance estimation. Ph.D. dissertation, Dept. Elect. Eng., Stanford Univ., Stanford, USA, 2011.
5. J Farrell, M Okincha, M Parmar. Sensor calibration and simulation. Proc. SPIE. 2008;6817:68170R-1–68170R-9. doi: 10.1117/12.767901
6. J Chen et al. Digital camera imaging system simulation. IEEE Trans Electron Devices. 2009;56:2496–2505. doi: 10.1109/TED.2009.2030995
7. J Adams, K Parulski, K Spaulding. Color processing in digital cameras. IEEE Micro. 1998;18:20–31. doi: 10.1109/40.743681
8. M Vrhel, E Saber, HJ Trussell. Color image generation and display technologies. IEEE Signal Process Mag. 2005;22:23–33. doi: 10.1109/MSP.2005.1407712
9. A Bovik. The Essential Guide to Image Processing. Academic Press: California, USA, 2009; pp. 43–68.
10. KH Chung, YH Chan. Low-complexity color demosaicing algorithm based on integrated gradients. Journal of Electronic Imaging. 2010;19:021104-1–021104-15. doi: 10.1117/1.3432484
11. JF Hamilton, JE Adams. Adaptive color plane interpolation in single sensor color electronic camera. US Patent 5,629,734, to Eastman Kodak Company (Rochester, NY), 1997.
12. BK Gunturk, Y Altunbasak, RM Mersereau. Color plane interpolation using alternating projections. IEEE Trans Image Process. 2002;11:997–1013. doi: 10.1109/TIP.2002.801121
13. R Lukac, KN Plataniotis. Color filter arrays for single-sensor imaging. IEEE 23rd Biennial Symposium on Communications. 2006;352–355. doi: 10.1109/BSC.2006.1644640
14. S Yamanaka. Solid state camera. US Patent 4,054,906, 1977.
15. R Lukac, KN Plataniotis. Color filter arrays: design and performance analysis. IEEE Trans. Consumer Electronics. 2005;51:1260–1267. doi: 10.1109/TCE.2005.1561853
16. K Hirakawa, PJ Wolfe. Spatio-spectral color filter array design for optimal image recovery. IEEE Trans. Image Process. 2008;17:1876–1890. doi: 10.1109/TIP.2008.2002164
17. FillFactory. Technology image sensor: the color filter array. Available online: http://www.fillfactory.com/htm/technology/htm/rgbfaq.htm (accessed on 27 Jan. 2018).
18. DPREVIEW. Sony Cyber-shot DSC-F828 review. Available online: https://www.dpreview.com/reviews/sonydscf828/16 (accessed on 27 April 2019).
19. DPREVIEW. Sony announce new RGBE CCD. Available online: https://www.dpreview.com/articles/1471104084/sonyrgbeccd (accessed on 27 April 2019).
20. B Sajadi, A Majumder, K Hiwada, A Maki, R Raskar. Switchable primaries using shiftable layers of color filter arrays. ACM Trans. Graph. 2011;30:65:1–65:10. doi: 10.1145/2010324.1964960
21. I Sato, K Ooi, K Saito, Y Takemura, T Shinohara. Color image pick-up apparatus. US Patent 4,390,895, 1983.
22. JF Hamilton, JE Adams, DM Orlicki. Particular pattern of pixels for a color filter array which is used to derive luminance and chrominance values. US Patent 6,330,029 B1, 2001.
23. University of London. Color filter arrays: a design methodology. Available online: https://pdfs.semanticscholar.org/481d/d0027a9fc899d099bbeccf78a2860c02985c.pdf (accessed on 27 April 2019).
24. M Parmar, BA Wandell. Interleaved imaging: an imaging system design inspired by rod-cone vision. Proc. SPIE. 2009;7250:725008. doi: 10.1117/12.806367
25. EB Gindele, AC Gallagher. Sparsely sampled image sensing device with color and luminance photosites. US Patent 6,476,865, 2002.
26. G Susanu. RGBW sensor array. US Patent 8,264,576, 2008.
27. T Sugiyama. Image-capturing apparatus. US Patent 7,746,394, 2010.
28. T Kijima, H Nakamura, J Compton, J Hamilton. Image sensor with improved light sensitivity. US Patent 2007/0177236, 2007.
29. M Kumar, EO Morales, JE Adams, W Hao. New digital camera sensor architecture for low light imaging. IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 Nov. 2009, pp. 2681–2684. doi: 10.1109/ICIP.2009.5414126
30. T Yamagami, T Sasaki, A Suga. Image signal processing apparatus having a color filter with offset luminance filter elements. US Patent 5,323,233, 1994.
31. B Girod. Digital Images and Human Vision, AB Watson ed. The MIT Press: Massachusetts, USA, 1993; pp. 207–220.
32. PC Teo, DJ Heeger. Perceptual image distortion. Proc. SPIE. 1994;2179:127–141. doi: 10.1117/12.172664
33. AM Eskicioglu, PS Fisher. Image quality measures and their performance. IEEE Trans. Communications. 1995;43:2959–2965. doi: 10.1109/26.477498
34. MP Eckert, AP Bradley. Perceptual quality metrics applied to still image compression. Signal Processing. 1998;70:177–200. doi: 10.1016/S0165-1684(98)00124-8
35. S Winkler. A perceptual distortion metric for digital color video. Proc. SPIE. 1999;3644:175–184. doi: 10.1117/12.348438
36. Z Wang, AC Bovik. A universal image quality index. IEEE Signal Processing Letters. 2002;9:81–84. doi: 10.1109/97.995823
37. RS Hunter. Accuracy, precision, and stability of new photo-electric color-difference meter. J. Opt. Soc. Am. 1948;38:1092–1106.
38. AB Poirson, BA Wandell. Appearance of colored patterns: pattern-color separability. Journal of the Optical Society of America A. 1993;10:2458–2470. doi: 10.1364/josaa.10.002458
39. AB Poirson, BA Wandell. Pattern-color separable pathways predict sensitivity to simple colored patterns. Vision Research. 1996;36:515–526. doi: 10.1016/0042-6989(96)89251-0
40. X Zhang, BA Wandell. A spatial extension of CIELAB for digital color-image reproduction. Society for Information Display Journal. 1997;5:61–63. doi: 10.1889/1.1985127
41. XM Zhang, JE Farrell, BA Wandell. Applications of a spatial extension to CIELAB. Proc. IS&T and SPIE Symposium on Electronic Imaging. 1997;3025:154–157.
42. Z Wang, AC Bovik, HR Sheikh, EP Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 2004;13:600–612. doi: 10.1109/tip.2003.819861
43. ImagEval Consulting LLC. ISET: calculating the system MTF using ISO 12233. Available online: http://www.imageval.com/ApplicationNotes/SlantedBarMTF.pdf (accessed on 20 Jan. 2018).
44. CAMBRIDGE in COLOUR. Lens quality: MTF, resolution & contrast. Available online: https://www.cambridgeincolour.com/tutorials/lens-quality-mtf-resolution.htm (accessed on 16 April 2019).
45. D Williams, PD Burns. Low-frequency MTF estimation for digital imaging devices using slanted edge analysis. Proc. SPIE-IS&T Electronic Imaging Symposium. 2003;5294:93–101. doi: 10.1117/12.532405
46. HZ Nafchi, A Shahkolaei, R Hedjam, M Cheriet. Mean deviation similarity index: efficient and reliable full-reference image quality evaluator. IEEE Access. 2016;4:5579–5590.
47. R Reisenhofer, S Bosse, G Kutyniok, T Wiegand. A Haar wavelet-based perceptual similarity index for image quality assessment. Signal Processing: Image Communication. 2018;61:33–43. doi: 10.1016/j.image.2017.11.001
48. F Gasparini, F Marini, R Schettini, M Guarnera. A no-reference metric for demosaicing artifacts that fits psycho-visual experiments. EURASIP Journal on Advances in Signal Processing. 2012;2012:1–15. doi: 10.1186/1687-6180-2012-123
49. G Oster, M Wasserman, C Zwerling. Theoretical interpretation of moiré patterns. Journal of the Optical Society of America. 1964;54:169–175. doi: 10.1364/JOSA.54.000169
50. F Liu, J Yang, H Yue. Moiré pattern removal from texture images via low-rank and sparse matrix decomposition. Visual Communications and Image Processing (VCIP), Singapore, 13–16 Dec. 2015, pp. 1–4. doi: 10.1109/VCIP.2015.7457907
51. D Williams. Benchmarking of the ISO 12233 slanted-edge spatial frequency response plug-in. Proc. IS&T's PICS Conference, Portland, 1998, pp. 133–136.
52.
54. W Mokrzycki, M Tatol. Color difference Delta E: a survey. Machine Graphics and Vision. 2011;20:383–411.
55. X Zhang, DA Silverstein, JE Farrell, BA Wandell. Color image quality metric S-CIELAB and its application on halftone texture visibility. IEEE Computer Society International Conference (COMPCON), San Jose, USA, 1997. doi: 10.1109/CMPCON.1997.584669
56. X Zhang, BA Wandell. Color image fidelity metrics evaluated using image distortion maps. Signal Processing. 1998;70:201–214. doi: 10.1016/S0165-1684(98)00125-X


6 Jan 2020

PONE-D-19-25883

Image Quality Metric System for Color Filter Array (CFA) Evaluation

PLOS ONE

Dear Dr. Bae,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

We would appreciate receiving your revised manuscript by Feb 20 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

    A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as a separate file and labeled 'Response to Reviewers'.
    A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as a separate file and labeled 'Revised Manuscript with Track Changes'.
    An unmarked version of your revised paper without tracked changes. This file should be uploaded as a separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Hocine Cherifi

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager. Please see the following video for instructions on linking an ORCID iD to your Editorial Manager account: https://www.youtube.com/watch?v=_xcclfuvtxQ

3. Please amend either the title on the online submission form (via Edit Submission) or the title in the manuscript so that they are identical.

4. Thank you for stating the following financial disclosure:

'No'

    • Please provide an amended Funding Statement that declares *all* the funding or sources of support received during this specific study (whether external or internal to your organization) as detailed online in our guide for authors at http://journals.plos.org/plosone/s/submit-now 
    • Please state what role the funders took in the study.  If any authors received a salary from any of your funders, please state which authors and which funder. If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

    • Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #2: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors propose a method to evaluate the mosaic of a CFA camera. The topic is very interesting and the proposal quite convincing, though the evaluation of the protocol and the conclusions are limited.

Hereafter are some comments:

You need to position your work vs. this article:

A no-reference metric for demosaicing artifacts that fits psycho-visual experiments

F Gasparini, F Marini, R Schettini, M Guarnera - EURASIP Journal on Advances in Signal Processing, 2012

It is not clear how the mosaics, in particular complex/pseudo-random mosaics or general mosaics (SFA), will impact the relevance of the method, in particular in the case of a color transform from N bands to colour.

Random color filter arrays are better than regular ones

P Amba, J Dias, D Alleysson - Color and Imaging Conference, 2016

and

Multispectral filter arrays: Recent advances and practical implementation

PJ Lapray, X Wang, JB Thomas, P Gouton - Sensors, 2014

Nor is the role of the algorithm definition clear:

Xin Li, Bahadir Gunturk, Lei Zhang, "Image demosaicing: a systematic survey," Proc. SPIE 6822, Visual Communications and Image Processing 2008, 68221J (28 January 2008);

and

Demosaicing of periodic and random color filter arrays by linear anisotropic diffusion

JB Thomas, I Farup - Color and Imaging Conference, 2018

I think this needs to be discussed, because the simulation and evaluation of your framework are limited to regular patterns and very basic demosaicing. It is very difficult to study the mosaic alone in this case and draw strong conclusions. In particular, moiré might also be limited by the use of a better algorithm combined with each mosaic, which limits the conclusions.

That does not impact the definition of the evaluation method though.

It is not very well described what are the characteristics of the spectral data used. This needs to be described because it may impair the simulation.

Besides, I encourage you to revise the references: there are many online-accessed documents. Some might be found in academic articles and properly cited rather than as a link; this must be fixed.

As well, you may provide more specific acknowledgement of what the cited persons have actually done to deserve it.

English and text could be improved.

Reviewer #2: The article states that the demosaicing method currently most frequently used, in terms of computational cost and time, is bilinear interpolation. On what basis is this claimed? There is, for example, nearest-neighbour interpolation, which is even faster and requires far fewer resources.
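
To make the reviewer's contrast concrete, the following is a minimal Python sketch of the two interpolation schemes on a Bayer RGGB mosaic; the function names, kernels, and even-dimension assumption are illustrative and do not reproduce the paper's implementation.

import numpy as np
from scipy.ndimage import convolve

def bayer_masks(h, w):
    # Binary sampling masks for an RGGB Bayer pattern.
    r = np.zeros((h, w))
    g = np.zeros((h, w))
    b = np.zeros((h, w))
    r[0::2, 0::2] = 1  # R on even rows, even columns
    g[0::2, 1::2] = 1  # G on even rows, odd columns
    g[1::2, 0::2] = 1  # G on odd rows, even columns
    b[1::2, 1::2] = 1  # B on odd rows, odd columns
    return r, g, b

def demosaic_nearest(raw):
    # Nearest-neighbour: replicate each 2x2 cell's samples (even height/width assumed).
    out = np.empty(raw.shape + (3,))
    out[..., 0] = np.repeat(np.repeat(raw[0::2, 0::2], 2, 0), 2, 1)
    out[..., 1] = np.repeat(np.repeat(raw[0::2, 1::2], 2, 0), 2, 1)
    out[..., 2] = np.repeat(np.repeat(raw[1::2, 1::2], 2, 0), 2, 1)
    return out

def demosaic_bilinear(raw):
    # Bilinear: average the available neighbours of each missing sample.
    r_m, g_m, b_m = bayer_masks(*raw.shape)
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0  # quarter-density R and B
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # half-density G
    return np.stack([convolve(raw * r_m, k_rb, mode='mirror'),
                     convolve(raw * g_m, k_g, mode='mirror'),
                     convolve(raw * b_m, k_rb, mode='mirror')], axis=-1)

raw = np.random.rand(8, 8)  # stand-in mosaic data
print(demosaic_nearest(raw).shape, demosaic_bilinear(raw).shape)  # (8, 8, 3) twice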

It would be worthwhile to see how the results obtained for different CFA systems behave for different demosaicing methods.

Consideration should be given to the statement: The pipeline typically consists of demosaicking, noise reduction, white balance, CFA interpolation, color conversion, and gamma correction for rendering the sensor data.

Is the formulation correct and is this the correct order of operations?

The proposed quality assessment system for different CFAs has been reduced to a selection of several metrics used to assess the quality of a digital image where a reference image is required. The article proposes visualization in the form of polar coordinates. For different CFA systems we obtain a different distribution; however, it is the subjective observer who must assess which of the measurements is more important to them.

The authors should look at other measures of image quality assessment closely correlated with the HVS. Examples of such metrics are DSCSI, MDSI, HPSI, etc.

Lee, Dohyoung, and Konstantinos N. Plataniotis. "Towards a full-reference quality assessment for color images using directional statistics." IEEE Transactions on image processing 24.11 (2015): 3950-3965.

Nafchi, Hossein Ziaei, et al. "Mean deviation similarity index: Efficient and reliable full-reference image quality evaluator." IEEE Access 4 (2016): 5579-5590.

Reisenhofer, Rafael, et al. "A Haar wavelet-based perceptual similarity index for image quality assessment." Signal Processing: Image Communication 61 (2018): 33-43.

It is also worthwhile to examine the articles:

Frackiewicz, Mariusz, and Henryk Palus. "Toward a perceptual image quality assessment of color quantized images." Tenth International Conference on Machine Vision (ICMV 2017). Vol. 10696. International Society for Optics and Photonics, 2018.

Frackiewicz, Mariusz, and Henryk Palus. "New image quality metric used for the assessment of color quantization algorithms." Ninth International Conference on Machine Vision (ICMV 2016). Vol. 10341. International Society for Optics and Photonics, 2017.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.


20 Feb 2020

Revised text based on the reviewers' advice. Please refer to the attachment files (Response to Reviewers). Thank you for your valuable review.

Submitted filename: Response to Reviewers_2.docx

11 Mar 2020

PONE-D-19-25883R1

Image Quality Metric System for Color Filter Array Evaluation

PLOS ONE

Dear Dr. Bae,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

We would appreciate receiving your revised manuscript by Apr 25 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

    A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as a separate file and labeled 'Response to Reviewers'.
    A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as a separate file and labeled 'Revised Manuscript with Track Changes'.
    An unmarked version of your revised paper without tracked changes. This file should be uploaded as a separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Hocine Cherifi

Academic Editor

PLOS ONE

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: No

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The work is improved; however, there are still quite strong limitations:

2 major conceptual concerns:

1-It is not only the demosaicing that seems to be evaluated, but the whole imaging pipeline, as shown in Fig 2. At least

2-Generally the writing part should be reworked and be more structured, accurate and concise.

Less major comments:

1-Do you really need to introduce all the formulas for the IQMs?

2-The results should be supported by a psychophysical experiment, OR why you do not do one should be very clearly stated (in this case you seem to assert that the recent IQMs correlate with perception, but then you use many metrics that do not correlate very well with perception).

3-An exercise of quantitative summary/analysis of the results would be welcome.

4-I invite you to revisit my first rounds of comments.

Minor comments:

1-Figs are not readable because they are too small.

2-The PUPPY image and other images are not referenced, and it is not clear whether the simulation was spectral or in colour only.

In general I would like this work to be published, but would appreciate it being more compact. Besides, I am not sure that PLOS ONE is the right venue for this technical development (up to the editors to take this decision though).

Reviewer #2: Are there statistically significant differences between the CFA systems tested for quality indices?

In Table 1 there are identical values for HPSI, MDSI, and others for different CFA systems; maybe you need to use a different way of presenting the results?

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.


1 Apr 2020

Reviewer #1: The work is improved; however, there are still quite strong limitations:

2 major conceptual concerns:

1-It is not only the demosaicing that seems to be evaluated, but the whole imaging pipeline, as shown in Fig 2. At least

Sol) The entire image pipeline, including the demosaicing part in Figure 2, has been modified as shown below, and the explanation has also been added.

-------------------------------------------------------------------------------------------------------------

Figure 2. Proposed CFA image-quality evaluation system.

Figure 2 shows the imaging pipeline for the proposed CFA IQ evaluation system. In the proposed system, the CFA structure and demosaicing method are changeable, and the CFA IQ evaluation results are plotted on the polar coordinates.

2-Generally the writing part should be reworked and be more structured, accurate and concise.

Sol) The figures and writing parts of the whole paper have been revised. Duplicates have been removed from the paper, and the paper has been organized to be more concise.

Less major comments:

1-Do you really need to introduce all the formulas for the IQMs?

Sol) The formulas for the IQMs have been changed according to your advice.

---------------------------------------------------------------------------------------------------------

(change) equations 1 to 24 -> equations 1 to 9

(deleted) The existing equations 1-4, 8-11, and 15 have been deleted.

(integrated) The equations 17-20 and 21-24 have been changed to equations 8 and 9 respectively.

2-The results should be supported by a psychophysical experiment OR why you do not do it should be very clearly stated (in this case you seem to enforce that the recent IQM correlate with perception, but then you use many metrics that do not correlate very well with perception).

Sol) Right, this is a good point. We are considering a psychophysical (subjective) experiment. The figure below is an example of a program tool for comparing the quality of images rendered by a CFA. The tool sequentially shows pairs of images generated by the test CFAs, and the experiment participants select the preferred image from each pair. Whenever images are selected, all images are sorted in order of preference by bubble sorting (a minimal sketch of this procedure is given below the figure placeholder). Currently, we are developing a prototype of this tool, and we are considering the participants (experts and non-experts), the experimental images, experimental settings such as the background illumination and gamma correction of the monitor, and online or field experiment sites. Although the results of these subjective experiments are not reflected in the current paper, they will be included in future image-quality metrics. Considering the current situation, the following was included at the end of Section 3 (Results and Discussion).

(Line 508) The existing and proposed metrics used in this paper evaluate the images rendered by CFAs quantitatively and objectively. In future research, we will incorporate psychophysical (subjective) assessment factors into the CFA image-quality assessment, considering various experimental factors such as the participants (experts and non-experts), the experimental images, experimental settings such as the background illumination and gamma correction of the monitor, and online or in-situ site selection.

<Application for comparing images rendered by CFA>
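
The pairwise selection and bubble sorting described in the response above can be sketched as follows; this Python sketch is a minimal illustration under the stated design, and prefers() merely stands in for the participant's on-screen choice between two displayed images.

def preference_sort(images, prefers):
    # Bubble-sort the images so that the most preferred come first.
    # prefers(a, b) shows the pair to the participant and returns True
    # when image a is preferred over image b.
    ranked = list(images)
    n = len(ranked)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if prefers(ranked[j + 1], ranked[j]):
                ranked[j], ranked[j + 1] = ranked[j + 1], ranked[j]
    return ranked

# Stand-in "participant" that prefers the image with the higher mock score.
mock_images = [("Bayer", 0.71), ("RGBW1", 0.83), ("CMY", 0.64)]
order = preference_sort(mock_images, lambda a, b: a[1] > b[1])
print([name for name, _ in order])  # ['RGBW1', 'Bayer', 'CMY']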

3-An exercise of quantitative summary/analysis of the results would be welcome.

Sol) A quantitative analysis was conducted and Figure 17 was added as follows.

-------------------------------------------------------------------------------------------------------------

(Line 490) Bayer shows the best performance for adaptive Laplacian demosaicing and obtained higher metric scores of 0.0203~0.1228 compared to the other test CFAs. On the other hand, RGBW1 shows the best performance for bilinear, Laplacian, and POCS demosaicing and received superior metric scores of 0.0026~0.2323, 0.003~0.1014, and 0.0084~0.1611 for the respective demosaicing methods. Over all the demosaicing methods used in this paper, the best CFA was RGBW1, which acquired higher scores of 0.0228 to 0.1081 compared to the other test CFAs. Based on the analysis in Table 3, the CFA ranks for the respective and total demosaicing methods are shown in Figure 17. We can see that the metric score difference between the worst and the best CFA for each demosaicing method is significant.

Figure 17. CFA rank for (a) Bilinear, (b) Laplacian, (c) Adaptive Laplacian, (d) POCS, and (e) Total demosaicing.
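
For illustration only, the rank computation described in the quoted passage can be sketched in Python as below; the score values are made-up placeholders, not the values reported in Table 3.

# Hypothetical per-CFA metric scores for two demosaicing methods
# (placeholders, not the paper's Table 3 data).
scores = {
    "bilinear": {"Bayer": 0.78, "RGBW1": 0.81, "CMY": 0.70},
    "laplacian": {"Bayer": 0.80, "RGBW1": 0.83, "CMY": 0.72},
}

def rank_cfas(per_cfa):
    # Sort CFAs from best (highest score) to worst.
    return sorted(per_cfa, key=per_cfa.get, reverse=True)

for method, per_cfa in scores.items():
    ranking = rank_cfas(per_cfa)
    margin = per_cfa[ranking[0]] - per_cfa[ranking[-1]]
    print(f"{method}: {ranking}, best-worst margin = {margin:.4f}")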

4-I invite you to revisit my first rounds of comments.

Sol) The revisit is as follows.

--------------------------------------------------------------------------------------------------------------

You need to position your work vs. this article:

[48] Gasparini F, Marini F, Schettini R, Guarnera M. A no-reference metric for demosaicing artifacts that fits psycho-visual experiments. EURASIP Journal on Advances in Signal Processing 2012;2012;1-15. https://doi.org/10.1186/1687-6180-2012-123.

--------------------------------------------------------------------------------------------------------------

(Line 95) Gasparini et al. proposed a no-reference metric for measuring demosaicing artifacts through psycho-visual experiments [48]. Using a psycho-visual comparison test adopting a single- or double-stimulus method, it analyzes the subjective evaluation of the demosaicing artifacts. It then introduces a no-reference metric for demosaicing artifacts based on measures of blurriness and chromatic and achromatic distortions that are able to fit the psycho-visual experiments. While that method focuses on a no-reference metric definition of subjective (perceptual) IQ assessment for demosaicing methods in a given CFA structure, this paper introduces a combination of proven metrics for automatic and objective IQ evaluation of CFA structures as well as demosaicing methods.

Minor comments:

1-Figs are not readable because they are too small.

Sol) Figures 11, 12, 13, 14, 15, and 18 have been modified for visibility.

2-The PUPPY image and other images are not referenced, and it is not clear whether the simulation was spectral or in colour only.

Sol) The test input images are referenced as follows.

------------------------------------------------------------------------------------------------------------

a) (Line 149) SLANTED-BAR (ISO 12233 resolution chart) [51] for calculating the image contrast using MTF50

b) (Line 150) MCC [52] for measuring the color error using CIELAB

c) (Line 150) PUPPY [53] for measuring the color reproduction error (visible distortion)

[51] Williams D. Benchmarking of the ISO 12233 slanted-edge spatial frequency response plug-in. Proc. IS&T's PICS Conference. Portland, May 1998, 133-136.

[52] ColorChecker Charts. Available online: https://www.webcitation.org/671Lyp9Bu?url=http://xritephoto.com/documents/literature/en/ColorData-1p_EN.pdf (accessed on 29 March 2020).

[53] Multispectral. https://github.com/ISET/isetcam/blob/master/data/images/multispectral/StuffedAnimals_tungsten-hdrs.mat (accessed on 29 March 2020).

(Line 155) Of the test input images, PUPPY is the only multispectral scene. The sensor response of the multispectral scene is calculated, and the CIE XYZ value at each pixel location is then computed by the ISET camera simulator. The MCC color image was created based on the Gretag-MCC [52]. The rest of the test input images are color images created from patterns generated algorithmically.
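
A minimal Python sketch of this spectral-to-XYZ step follows, assuming per-pixel spectral radiance sampled at the same wavelengths as the color-matching functions; the random arrays below merely stand in for a real multispectral scene and for the CIE 1931 functions.

import numpy as np

def spectral_to_xyz(radiance, wave, cmf):
    # radiance: (H, W, N) spectral radiance at the N wavelengths in wave
    # cmf:      (N, 3) color-matching functions [xbar, ybar, zbar]
    # XYZ(x, y) = sum_k radiance(x, y, k) * cmf(k, :) * dlambda
    dlam = wave[1] - wave[0]  # uniform wavelength sampling assumed
    return np.tensordot(radiance, cmf, axes=([2], [0])) * dlam

wave = np.arange(400, 701, 10.0)            # 400-700 nm in 10 nm steps
radiance = np.random.rand(4, 4, wave.size)  # stand-in multispectral scene
cmf = np.random.rand(wave.size, 3)          # stand-in for the CIE 1931 CMFs
print(spectral_to_xyz(radiance, wave, cmf).shape)  # (4, 4, 3)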

In general I would like this work to be published, but would appreciate it being more compact. Besides, I am not sure that PLOS ONE is the right venue for this technical development (up to the editors to take this decision though).

<I appreciate your valuable comments!>

***********************************************************************

Reviewer #2: Are there statistically significant differences between the CFA systems tested for quality indices?

Sol) We can see that the metric score difference between the worst and the best CFA for each demosaicing method is significant. A quantitative analysis was conducted and Figure 17 was added as follows.

-------------------------------------------------------------------------------------------------------------

(Line 490) Bayer shows the best performance for adaptive Laplacian demosaicing and obtained higher metric scores of 0.0203~0.1228 compared to the other test CFAs. On the other hand, RGBW1 shows the best performance for bilinear, Laplacian, and POCS demosaicing and received superior metric scores of 0.0026~0.2323, 0.003~0.1014, and 0.0084~0.1611 for the respective demosaicing methods. Over all the demosaicing methods used in this paper, the best CFA was RGBW1, which acquired higher scores of 0.0228 to 0.1081 compared to the other test CFAs. Based on the analysis in Table 3, the CFA ranks for the respective and total demosaicing methods are shown in Figure 17. We can see that the metric score difference between the worst and the best CFA for each demosaicing method is significant.

Figure 17. CFA rank for (a) Bilinear, (b) Laplacian, (c) Adaptive Laplacian, (d) POCS, and (e) Total demosaicing.

In Table 1 there are identical values for HPSI, MDSI, and others for different CFA systems; maybe you need to use a different way of presenting the results?

Sol) The analysis was conducted as follows.

----------------------------------------------------------------------------------------------------------

(Line 468) MDSI values are similar for each CFA; however, the GCS distribution is different (especially in the red and blue color regions). Based on this phenomenon, it can be deduced that the MDSI value may vary according to the color distribution of the input image. For each test CFA, HaarPSI ranges from 0.60 to 0.75. All rendered images show generally high HaarPSI values because the degree of distortion is weak compared to the original image.

<I appreciate your valuable comments!>

Submitted filename: Response to Reviewers_2(R2).docx

20 Apr 2020

Image Quality Metric System for Color Filter Array Evaluation

PONE-D-19-25883R2

Dear Dr. Bae,

We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.

Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.

Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

With kind regards,

Hocine Cherifi

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thanks for answering my comments.

The article is OK for publication. The experiment is described clearly, which was one of my major concern.

Reviewer #2: The author took into account previous comments and advice. I don't have any additional suggestions for work.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No


23 Apr 2020

PONE-D-19-25883R2

Image-Quality Metric System for Color Filter Array Evaluation

Dear Dr. Bae:

I am pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

For any other questions or concerns, please email plosone@plos.org.

Thank you for submitting your work to PLOS ONE.

With kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Hocine Cherifi

Academic Editor

PLOS ONE
