ResearchPad - interpolation https://www.researchpad.co © 2020 Newgen KnowledgeWorks

<![CDATA[Image-quality metric system for color filter array evaluation]]> https://www.researchpad.co/article/elastic_article_7704

A modern color filter array (CFA) output is rendered into the final output image using a demosaicing algorithm. During this process, the rendered image is affected by optical and carrier cross talk of the CFA pattern and by the demosaicing algorithm. Although many CFA patterns have been proposed, and image-quality (IQ) evaluation items based on local characteristics or specific domains have been created, an evaluation system capable of comprehensively evaluating the IQ of each CFA pattern has yet to be developed. Hence, we present an IQ metric system to evaluate the IQ performance of CFA patterns. The proposed CFA evaluation system includes newly proposed metrics, such as moiré robustness using the experimentally determined moiré starting point (MSP) and the achromatic reproduction (AR) error, as well as existing metrics such as color accuracy using CIELAB, color reproduction error using spatial CIELAB, structural information using the structural similarity, image contrast based on MTF50, structural and color distortion using the mean deviation similarity index (MDSI), and perceptual similarity using the Haar wavelet-based perceptual similarity index (HaarPSI). Through our experiments, we confirmed that the proposed CFA evaluation system can assess the IQ of existing CFAs. Moreover, the proposed system can be used to design or evaluate new CFAs by automatically checking their individual performance on each metric.
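As a rough illustration of how two of the full-reference metrics listed above can be computed, the sketch below evaluates the mean CIEDE2000 color difference and the structural similarity between a reference image and a demosaiced result using scikit-image. The function names and the random placeholder images are assumptions for illustration and are not part of the proposed evaluation system.

```python
# Sketch (not the proposed system): two of the listed metrics, CIELAB color
# error and structural similarity, between a reference and a demosaiced image.
import numpy as np
from skimage.color import rgb2lab, rgb2gray, deltaE_ciede2000
from skimage.metrics import structural_similarity

def cielab_color_error(reference_rgb, demosaiced_rgb):
    """Mean CIEDE2000 color difference between two RGB float images in [0, 1]."""
    delta_e = deltaE_ciede2000(rgb2lab(reference_rgb), rgb2lab(demosaiced_rgb))
    return float(delta_e.mean())

def structural_score(reference_rgb, demosaiced_rgb):
    """Structural similarity computed on the luminance channel."""
    return structural_similarity(rgb2gray(reference_rgb),
                                 rgb2gray(demosaiced_rgb),
                                 data_range=1.0)

if __name__ == "__main__":
    ref = np.random.rand(64, 64, 3)                                  # placeholder reference
    test = np.clip(ref + 0.02 * np.random.randn(64, 64, 3), 0, 1)    # placeholder demosaiced result
    print("mean dE2000:", cielab_color_error(ref, test))
    print("SSIM       :", structural_score(ref, test))
```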

<![CDATA[High capacity reversible data hiding with interpolation and adaptive embedding]]> https://www.researchpad.co/article/5c897722d5eed0c4847d2525

A new Interpolation based Reversible Data Hiding (IRDH) scheme is reported in this paper. The embedding capacity required of an IRDH scheme usually varies across its applications to digital images, video, multimedia, big data and biological data. Existing IRDH schemes disregard this important consideration and do not offer good embedding rate-distortion performance for payloads of varying size. To meet this varying capacity requirement with our proposed adaptive embedding, we formulate a capacity control parameter and use it to determine a minimum set of embeddable bits in a pixel. Additionally, we use a logical (or bit-wise) correlation between the embeddable pixel and estimated versions of an embedded pixel. Thereby, a wide range between the upper and lower limits of the embedding capacity is maintained, while a given capacity requirement within that range is attained with better embedded-image quality. Computational modeling of all new processes of the scheme is presented, and the performance of the scheme is evaluated on a set of popular test images. Experimental results show that the proposed scheme achieves significantly better embedding rate-distortion performance than prominent IRDH schemes.
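The interpolation step that IRDH schemes have in common can be sketched as follows: pixels between a grid of reference pixels are estimated by neighbour averaging, and the estimation error bounds how many payload bits each interpolated pixel could carry. This is a generic illustration under those assumptions, not the authors' adaptive embedding with its capacity control parameter or bit-wise correlation step.

```python
# Generic sketch of the interpolation step common to IRDH schemes (not the
# authors' adaptive method): estimate in-between pixels from reference pixels
# and derive a candidate per-pixel bit capacity from the estimation error.
import numpy as np

def interpolation_capacity_map(image):
    """Return per-pixel candidate bit capacities derived from interpolation error."""
    img = image.astype(np.int32)
    estimate = img.copy()
    # Estimate every second column from its horizontal neighbours,
    # and every second row from its vertical neighbours.
    estimate[:, 1:-1:2] = (img[:, :-2:2] + img[:, 2::2]) // 2
    estimate[1:-1:2, :] = (img[:-2:2, :] + img[2::2, :]) // 2
    error = np.abs(img - estimate)
    capacity = np.zeros_like(error)
    mask = error >= 2
    capacity[mask] = np.floor(np.log2(error[mask])).astype(np.int32)
    return capacity

if __name__ == "__main__":
    cover = (np.random.rand(8, 8) * 255).astype(np.uint8)   # placeholder cover image
    print("total candidate capacity (bits):", int(interpolation_capacity_map(cover).sum()))
```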

<![CDATA[RaCaT: An open source and easy to use radiomics calculator tool]]> https://www.researchpad.co/article/5c76fe64d5eed0c484e5b9d0

Purpose

The widely known field of radiomics aims to provide extensive image-based phenotyping of, e.g., tumors using a wide variety of feature values extracted from medical images. It is therefore of utmost importance that feature values calculated by different institutes follow the same feature definitions. For this purpose, the imaging biomarker standardization initiative (IBSI) provides detailed mathematical feature descriptions, as well as (mathematical) test phantoms and corresponding reference feature values. We present here an easy-to-use radiomic feature calculator, RaCaT, which calculates a large number of radiomic features for all kinds of medical images, in compliance with the standard.

Methods

The calculator is implemented in C++ and comes as a standalone executable. It can therefore be easily integrated into any programming language and can also be called from the command line. No programming skills are required to use the calculator. The software architecture is highly modularized so that it is easily extendible. The user can also download the source code, adapt it if needed, and build the calculator from source. The calculated feature values are compliant with the ones provided by the IBSI standard. Source code, example files for the software configuration, and documentation can be found online on GitHub (https://github.com/ellipfaehlerUMCG/RaCat).

Results

The comparison with the standard values shows that all calculated features, as well as the image preprocessing steps, comply with the IBSI standard. The performance is also demonstrated on clinical examples.

Conclusions

The authors successfully implemented an easy-to-use radiomics calculator that can be called from any programming language or from the command line. Image preprocessing, feature settings, and calculations can be adjusted by the user.

<![CDATA[Virtual supersampling as post-processing step preserves the trabecular bone morphometry in human peripheral quantitative computed tomography scans]]> https://www.researchpad.co/article/5c6dc9e5d5eed0c48452a446

In the clinical field of diagnosis and monitoring of bone diseases, high-resolution peripheral quantitative computed tomography (HR-pQCT) is an important imaging modality. It provides a resolution at which quantitative bone morphometry can be extracted in vivo in patients. HR-pQCT is known to yield slight differences in morphometric indices compared to the current standard approach, micro-computed tomography (micro-CT). The most obvious reason for this is the restriction of the radiation dose and, with it, the lower image resolution. With advances in micro-CT evaluation techniques such as patient-specific remodeling simulations or dynamic bone morphometry, a higher image resolution would potentially also allow the application of such novel evaluation techniques to clinical HR-pQCT measurements. Virtual supersampling was considered as a post-processing step to increase the image resolution of HR-pQCT scans. The hypothesis was that this technique preserves the structural bone morphometry. Supersampling from 82 μm to a virtual 41 μm by trilinear interpolation of the grayscale values of 42 human cadaveric forearms resulted in strong correlations of structural parameters (R2: 0.96–1.00). BV/TV was slightly overestimated (4.3%, R2: 1.00) compared to the HR-pQCT resolution. Tb.N was overestimated (7.47%; R2: 0.99) and Tb.Th was slightly underestimated (-4.20%; R2: 0.98). The technique was reproducible, with PE%CV between 1.96% (SMI) and 7.88% (Conn.D). In a clinical setting with 205 human forearms, with or without fracture, measured at 82 μm resolution with HR-pQCT, the technique was sensitive to changes between groups in all parameters (p < 0.05) except trabecular thickness. In conclusion, we demonstrated that supersampling preserves the bone morphometry from HR-pQCT scans and is reproducible and sensitive to changes between groups. Supersampling can be used to investigate the resolution dependency of HR-pQCT images and to gain more insight into this imaging modality.
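A minimal sketch of the supersampling step, assuming the HR-pQCT grayscale volume is available as a 3D NumPy array: trilinear interpolation (order 1) doubles the grid density from 82 μm to a virtual 41 μm voxel size. The morphometric evaluation pipeline itself is not shown.

```python
# Minimal sketch of the virtual supersampling step: trilinear interpolation
# (order=1) of the grayscale volume doubles the grid density along each axis.
import numpy as np
from scipy.ndimage import zoom

def supersample(volume_82um, factor=2):
    """Upsample a grayscale volume by linear interpolation along each axis."""
    return zoom(volume_82um.astype(np.float32), zoom=factor, order=1)

if __name__ == "__main__":
    vol = np.random.rand(40, 40, 40).astype(np.float32)   # placeholder scan volume
    print(vol.shape, "->", supersample(vol).shape)         # (40, 40, 40) -> (80, 80, 80)
```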

<![CDATA[Precise frequency synchronization detection method based on the group quantization stepping law]]> https://www.researchpad.co/article/5c61e8e0d5eed0c48496f2f7

A precise frequency synchronization detection method is proposed based on the group quantization stepping law. Based on different-frequency group quantization phase processing, high-precision frequency synchronization can be achieved by quantizing and measuring the phase comparison results. When repeated phase differences in the quantized phase comparison results are used as the start and stop signals of the counter gate, the time interval between identical phase differences, i.e., one group period, serves as the gate time. By measuring and analyzing the quantized phase comparison results, the ±1-word counting error of the traditional frequency synchronization detection method is overcome, and the system response time is significantly shortened. The experimental results show that the proposed frequency synchronization detection method is effective. The measurement resolution is notably stable, and frequency stability better than the E-12/s level can be obtained. The method is superior to the traditional frequency synchronization detection method in many aspects, such as system reliability and stability, detection speed, development cost, power consumption and volume.

<![CDATA[A quadratic trigonometric spline for curve modeling]]> https://www.researchpad.co/article/5c40f762d5eed0c48438600c

A curve modeling technique has been established with a view to its applications in various disciplines of science, engineering and design. It is a new spline method using piecewise quadratic trigonometric functions, and it possesses error bounds of order 3. The proposed curve model also possesses favorable geometric properties. The proposed spline method achieves C2 smoothness and produces a Quadratic Trigonometric Spline (QTS) intended for applications in curve design and control. Having four control points in its piecewise description, it provides a C2 quadratic trigonometric alternative to the traditional cubic polynomial spline (CPS). A comparative analysis of QTS and CPS verifies QTS as the better alternative to CPS, and a timing analysis shows that QTS is computationally more efficient than CPS.
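For intuition, a quadratic trigonometric curve segment can be built from the blending functions (1 - sin t)^2, 2 sin t (1 - sin t) and sin^2 t on t in [0, pi/2], which are non-negative and sum to one. The sketch below uses this generic three-point quadratic trigonometric basis; it is not the paper's exact four-control-point C2 QTS construction.

```python
# Generic quadratic trigonometric curve segment (illustrative only, not the
# paper's C2 QTS basis): three control points are blended with trigonometric
# weights that are non-negative and form a partition of unity on [0, pi/2].
import numpy as np

def quad_trig_segment(p0, p1, p2, num=100):
    """Evaluate a quadratic trigonometric blend of three 2D control points."""
    t = np.linspace(0.0, np.pi / 2.0, num)
    s = np.sin(t)
    b0, b1, b2 = (1 - s) ** 2, 2 * s * (1 - s), s ** 2
    return np.outer(b0, p0) + np.outer(b1, p1) + np.outer(b2, p2)

if __name__ == "__main__":
    curve = quad_trig_segment([0, 0], [1, 2], [3, 0])
    print(curve[0], curve[-1])   # starts at p0, ends at p2
```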

<![CDATA[Unequal error protection technique for video streaming over MIMO-OFDM systems]]> https://www.researchpad.co/article/5c478c5ad5eed0c484bd1c4d

In this paper, a novel unequal error protection (UEP) technique is proposed for video streaming over multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) systems. Based on the concepts of hierarchical quadrature amplitude modulation (HQAM) UEP and multi-antenna UEP, the proposed technique combines the relative protection levels (PLs) of constellation symbols with the differentiated PLs of the transmit antennas. In the proposed technique, standard square quadrature amplitude modulation (QAM) constellations are used instead of HQAM, so that the QAM mapper at the transmitter side and the soft decision calculation at the receiver side remain unchanged while the UEP benefit of HQAM is retained. The superior performance of the proposed technique is explained by the improved matching between data of various priorities and data paths with various PLs. The assumed video compression method is H.264/AVC, which is known to be commercially successful. The IEEE 802.16m system is adopted as the data transmission system. With the aid of realistic simulations in strict accordance with the standards of IEEE 802.16m systems and H.264/AVC video compression, the proposed HQAM/multi-antenna UEP technique is shown to improve the video quality significantly for a given average bit error rate when compared with previous techniques.
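The bit-allocation idea behind retaining HQAM-like UEP with a standard square QAM mapper can be sketched as follows: in Gray-mapped square 16-QAM the quadrant (sign) bits have a larger average distance to their decision boundaries than the amplitude bits, so high-priority video bits are steered to those positions while low-priority bits fill the rest. This is a simplified illustration of the allocation step only; the QAM mapper, the MIMO-OFDM chain, the antenna-level PLs and the H.264/AVC priority classification are assumed and omitted.

```python
# Simplified sketch of a UEP bit-allocation step (allocation only). In
# Gray-mapped square 16-QAM the sign-selecting bits (assumed here to be the
# even positions) are better protected than the magnitude-selecting bits,
# so high-priority bits are steered to those positions.
def allocate_bits(high_priority, low_priority, bits_per_symbol=4):
    """Interleave two priority streams into per-symbol bit groups."""
    symbols = []
    hp, lp = list(high_priority), list(low_priority)
    while hp or lp:
        group = []
        for pos in range(bits_per_symbol):
            source = hp if (pos % 2 == 0 and hp) or not lp else lp
            group.append(source.pop(0) if source else 0)   # pad with 0 if a stream runs out
        symbols.append(group)
    return symbols

if __name__ == "__main__":
    print(allocate_bits([1, 1, 0, 1], [0, 0, 1, 0, 1, 1]))
```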

<![CDATA[Two-dimensional local Fourier image reconstruction via domain decomposition Fourier continuation method]]> https://www.researchpad.co/article/5c3fa5aed5eed0c484ca744f

The MRI image is obtained in the spatial domain from given Fourier coefficients in the frequency domain. Obtaining a high-resolution image is costly because it requires high-frequency Fourier data, whereas low-frequency Fourier data is less costly and is effective if the image is smooth. However, Gibbs ringing, if present, prevails when only low-frequency Fourier data is used. We propose an efficient and accurate local reconstruction method based on low-frequency Fourier data that yields a sharp image profile near local edges. The proposed method utilizes only a small number of image data points in the local area, so the method is efficient. Furthermore, the method is accurate because it minimizes the global effects on the reconstruction near weak edges that appear in many other global methods, for which all the image data is used for the reconstruction. To apply the Fourier method locally to local non-periodic data, the proposed method builds on the Fourier continuation method. This work is an extension of our previous 1D Fourier domain decomposition method to 2D Fourier data. The proposed method first divides the MRI image in the spatial domain into many subdomains and applies the Fourier continuation method for a smooth periodic extension of the subdomain of interest. It then reconstructs the local image based on L2 minimization regularized by the L1 norm of edge sparsity to sharpen the image near edges. Our numerical results suggest that the proposed method should be applied in a dimension-by-dimension manner rather than in a global manner, for both the quality of the reconstruction and computational efficiency. The numerical results show that the proposed method is effective when a local reconstruction is sought and that the solution is free of Gibbs oscillations.
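The problem setup, reconstruction from low-frequency Fourier data only, can be reproduced in a few lines: keeping the central block of Fourier coefficients of a sharp-edged test image and applying a zero-filled inverse FFT exhibits the Gibbs ringing discussed above. This baseline is shown for context; it is not the proposed Fourier continuation / L1-regularized local method.

```python
# Baseline illustration of the problem setup (not the proposed method): keep
# only the central low-frequency Fourier coefficients of a piecewise-constant
# test image and reconstruct by zero-filled inverse FFT, which produces the
# Gibbs ringing discussed above.
import numpy as np

def lowpass_reconstruction(image, keep=16):
    """Reconstruct from the central (2*keep) x (2*keep) block of Fourier coefficients."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    mask = np.zeros_like(spectrum)
    c0, c1 = image.shape[0] // 2, image.shape[1] // 2
    mask[c0 - keep:c0 + keep, c1 - keep:c1 + keep] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

if __name__ == "__main__":
    img = np.zeros((128, 128))
    img[32:96, 32:96] = 1.0                              # sharp-edged test square
    recon = lowpass_reconstruction(img)
    print("Gibbs overshoot:", float(recon.max() - 1.0))  # > 0 near the edges
```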

<![CDATA[Implementation and assessment of the black body bias correction in quantitative neutron imaging]]> https://www.researchpad.co/article/5c390ba1d5eed0c48491d96e

We describe in this paper the experimental procedure, the data treatment and the quantification of the black body correction: an experimental approach to compensate for scattering and systematic biases in quantitative neutron imaging based on experimental data. The correction algorithm consists of two steps: estimation of the scattering component and correction using an enhanced normalization formula. The method incorporates correction terms into the image normalization procedure, which usually only includes open beam and dark current images (open beam correction). Our aim is to show its efficiency and reproducibility: we detail the data treatment procedures and quantitatively investigate the effect of the correction. Its implementation is included in the open source CT reconstruction software MuhRec. The performance of the proposed algorithm is demonstrated using simulated and experimental CT datasets acquired at the ICON and NEUTRA beamlines at the Paul Scherrer Institut.
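A minimal sketch of how such correction terms enter the normalization, assuming the scattering component has already been estimated (for example, interpolated from the black body regions): the estimated scattering images are subtracted from both the projection and the open-beam image before the usual dark-current-corrected division. The exact terms of the published formula may differ.

```python
# Illustrative scattering-corrected normalization (assumed form; the exact
# terms of the published black body correction formula may differ).
import numpy as np

def normalize(projection, open_beam, dark_current, scatter_proj, scatter_ob):
    """Transmission image with dark current and estimated scattering removed."""
    numerator = projection - dark_current - scatter_proj
    denominator = open_beam - dark_current - scatter_ob
    return np.clip(numerator / np.maximum(denominator, 1e-6), 0.0, None)
```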

<![CDATA[Accurate, robust and harmonized implementation of morpho-functional imaging in treatment planning for personalized radiotherapy]]> https://www.researchpad.co/article/5c3fa589d5eed0c484ca5858

In this work we present a methodology for using harmonized PET/CT imaging in a dose painting by numbers (DPBN) approach by means of a robust and accurate treatment planning system. Image processing and treatment planning were performed with a Matlab-based platform, called CARMEN, which includes a full Monte Carlo simulation. A linear programming formulation was developed for voxel-by-voxel robust optimization, and a specific direct aperture optimization was designed for efficient adaptive radiotherapy implementation. The DPBN approach with our methodology was tested for its ability to reduce the uncertainties associated with both the absolute and the relative value of the information in the functional image. For the same H&N case, a single robust treatment was planned for dose prescription maps corresponding to standardized uptake value distributions from two different image reconstruction protocols: one to fulfill EARL accreditation for harmonization of [18F]FDG PET/CT images, and the other to use the highest available spatial resolution. A robust treatment was also planned to fulfill dose prescription maps corresponding to both approaches, dose painting by contours based on volumes and our voxel-by-voxel DPBN. Adaptive planning was also carried out to check the suitability of our proposal.

The different plans were robust enough to cover a range of scenarios for implementing harmonization strategies while using the highest available resolution. Robustness with respect to the discretization level of the dose prescription, whether based on contours or on numbers, was also achieved. All plans showed excellent quality index histograms and quality factors below 2%. An efficient solution for adaptive radiotherapy based directly on changes in the functional image was obtained. We showed that, by using a voxel-by-voxel DPBN approach, it is possible to overcome the typical drawbacks linked to PET/CT images, giving the clinical specialist enough confidence to routinely implement functional imaging for personalized radiotherapy.
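The voxel-by-voxel prescription step of DPBN is often written as a linear mapping from tracer uptake to dose. The sketch below uses that common form with illustrative dose and SUV window parameters; it is not necessarily the exact prescription implemented in CARMEN.

```python
# Common linear DPBN prescription map from SUV to voxel dose (illustrative
# parameter values; not necessarily the exact mapping used in CARMEN).
import numpy as np

def dpbn_prescription(suv, suv_min, suv_max, dose_min=66.0, dose_max=80.0):
    """Linearly map standardized uptake values to a prescribed dose per voxel (Gy)."""
    fraction = np.clip((suv - suv_min) / (suv_max - suv_min), 0.0, 1.0)
    return dose_min + fraction * (dose_max - dose_min)
```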

<![CDATA[Increasing the Depth of Current Understanding: Sensitivity Testing of Deep-Sea Larval Dispersal Models for Ecologists]]> https://www.researchpad.co/article/5989d9e8ab0ee8fa60b6bfaa

Larval dispersal is an important ecological process of great interest to conservation and the establishment of marine protected areas. Increasing numbers of studies are turning to biophysical models to simulate dispersal patterns, including in the deep-sea, but for many ecologists unassisted by a physical oceanographer, a model can present as a black box. Sensitivity testing offers a means to test the models’ abilities and limitations and is a starting point for all modelling efforts. The aim of this study is to illustrate a sensitivity testing process for the unassisted ecologist, through a deep-sea case study example, and demonstrate how sensitivity testing can be used to determine optimal model settings, assess model adequacy, and inform ecological interpretation of model outputs. Five input parameters are tested (timestep of particle simulator (TS), horizontal (HS) and vertical separation (VS) of release points, release frequency (RF), and temporal range (TR) of simulations) using a commonly employed pairing of models. The procedures used are relevant to all marine larval dispersal models. It is shown how the results of these tests can inform the future set up and interpretation of ecological studies in this area. For example, an optimal arrangement of release locations spanning a release area could be deduced; the increased depth range spanned in deep-sea studies may necessitate the stratification of dispersal simulations with different numbers of release locations at different depths; no fewer than 52 releases per year should be used unless biologically informed; three years of simulations chosen based on climatic extremes may provide results with 90% similarity to five years of simulation; and this model setup is not appropriate for simulating rare dispersal events. A step-by-step process, summarising advice on the sensitivity testing procedure, is provided to inform all future unassisted ecologists looking to run a larval dispersal simulation.
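In code, a sensitivity test of this kind reduces to sweeping each input parameter over a plausible range, re-running the model, and comparing a summary statistic of the dispersal output. The skeleton below shows only that bookkeeping; run_dispersal_model is a hypothetical placeholder for the biophysical model pairing, and the parameter values are illustrative.

```python
# Bookkeeping skeleton for a one-at-a-time sensitivity test. The function
# run_dispersal_model is a hypothetical placeholder for the biophysical model
# pairing; it returns a dummy statistic so the sweep is runnable.
def run_dispersal_model(ts, hs, vs, rf, tr):
    """Placeholder: would return e.g. a settlement or connectivity statistic."""
    return rf * tr / ts   # dummy value standing in for the model output

def sensitivity_sweep(baseline, grids):
    results = {}
    for name, values in grids.items():             # vary one parameter at a time
        for value in values:
            params = dict(baseline, **{name: value})
            results[(name, value)] = run_dispersal_model(**params)
    return results

if __name__ == "__main__":
    baseline = {"ts": 3600, "hs": 1000, "vs": 50, "rf": 52, "tr": 5}   # illustrative values
    grids = {"rf": [12, 26, 52, 104], "tr": [1, 3, 5]}
    for key, value in sensitivity_sweep(baseline, grids).items():
        print(key, value)
```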

<![CDATA[Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models]]> https://www.researchpad.co/article/5989daadab0ee8fa60baa1c0

This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system in which membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Since membership functions act as interpolation kernels, the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the problem of modeling static nonlinearities, since it is capable of modeling both a function and its inverse. Three case studies from the literature are presented: a single-input single-output (SISO) system, a MISO system and a MIMO system. Good results are obtained for performance metrics such as set-point tracking, control variation and robustness. The results demonstrate the applicability of the proposed method to modeling Hammerstein nonlinearities and their inverse functions for the implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC).
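The interpolation mechanism can be illustrated for the linear-kernel special case: with triangular membership functions on a unit hypercube, the conjunction (product) of memberships reduces to multilinear interpolation between the 2^N corner values. The sketch below shows only that special case, not the full Takagi-Sugeno FLHI with alternative kernels or the inverse-function modeling.

```python
# Illustration of the hypercube-interpolation idea for the linear-kernel case:
# with triangular membership functions, the product of memberships yields
# multilinear interpolation between the 2^N corner values of a unit hypercube.
from itertools import product
import numpy as np

def hypercube_interpolate(u, corner_values):
    """u: point in [0,1]^N; corner_values: dict mapping corner tuples to values."""
    u = np.asarray(u, dtype=float)
    result = 0.0
    for corner in product((0, 1), repeat=len(u)):
        # membership of u in the rule anchored at this corner (product t-norm)
        weight = np.prod([ui if c else 1.0 - ui for ui, c in zip(u, corner)])
        result += weight * corner_values[corner]
    return result

if __name__ == "__main__":
    corners = {(0, 0): 0.0, (1, 0): 1.0, (0, 1): 2.0, (1, 1): 3.0}
    print(hypercube_interpolate([0.5, 0.5], corners))   # 1.5, the bilinear value
```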

<![CDATA[Refining particle positions using circular symmetry]]> https://www.researchpad.co/article/5989db52ab0ee8fa60bdc628

Particle and object tracking is gaining attention in industrial applications and is commonly applied in colloidal, biophysical, ecological, and micro-fluidic research. Reliable tracking information depends heavily on the system under study and on algorithms that correctly determine particle positions between images. However, in a real environmental context, with noise from particulate or dissolved matter in water and low, fluctuating light conditions, many algorithms fail to obtain reliable information. We propose a new algorithm, the Circular Symmetry algorithm (C-Sym), for detecting the position of a circular particle with high accuracy and precision in noisy conditions. The algorithm takes advantage of the spatial symmetry of the particle, allowing for subpixel accuracy. We compare the proposed algorithm with four different methods using both synthetic and experimental datasets. The results show that C-Sym is the most accurate and precise algorithm when tracking micro-particles in all tested conditions, and it has the potential for use in applications including tracking biota in their environment.

<![CDATA[Gridding discretization-based multiple stability switching delay search algorithm: The movement of a human being on a controlled swaying bow]]> https://www.researchpad.co/article/5989db5dab0ee8fa60be0459

Delay represents a significant phenomenon in the dynamics of many human-related systems, including biological ones. It has, among other things, a decisive impact on system stability, and the study of this influence is often mathematically demanding. This paper presents a computationally simple numerical gridding algorithm for determining stability-margin delay values in multiple-delay linear systems. The characteristic quasi-polynomial, whose roots decide stability, is subjected to iterative discretization by means of a pre-warped bilinear transformation. Then, a linear and a quadratic interpolation are applied to obtain the associated characteristic polynomial with integer powers. The roots of the associated characteristic polynomial are closely related to the estimated roots of the original characteristic quasi-polynomial, which agree with the system's eigenvalues. Since the stability border is crossed by the leading root, the switching root locus is refined using the regula falsi interpolation method. Our methodology is implemented on, and verified by, a numerical bio-cybernetic example of the stabilization of a human being's movement on a controlled swaying bow. The advantage of the proposed algorithm lies in the rapid computation of polynomial zeros by means of standard programs for technical computing, in the low level of mathematical knowledge required, and in the sufficiently high precision of the root loci estimation. The relationship to the direct-search QuasiPolynomial (mapping) Rootfinder algorithm and the computational complexity are discussed as well. The algorithm is also applicable to systems with non-commensurate delays.
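The final refinement mentioned above, locating the delay at which the leading root crosses the stability border, uses the standard regula falsi iteration. The sketch below shows that generic root-refinement step for a scalar function (here, hypothetically, the real part of the leading root as a function of the delay); the gridding and discretization machinery is not included.

```python
# Generic regula falsi refinement, as used for the switching-delay search:
# given a scalar function f (hypothetically, the real part of the leading
# characteristic root as a function of the delay) that changes sign on [a, b],
# interpolate linearly to locate the crossing.
def regula_falsi(f, a, b, tol=1e-10, max_iter=100):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f must change sign on [a, b]"
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)      # linear-interpolation (secant) point
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

if __name__ == "__main__":
    print(regula_falsi(lambda x: x ** 2 - 2.0, 0.0, 2.0))   # ~1.41421356
```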

<![CDATA[Deformable registration of 3D ultrasound volumes using automatic landmark generation]]> https://www.researchpad.co/article/5c95523ed5eed0c4846f322e

Ultrasound (US) image registration is an important task, e.g., in computer-aided surgery. Due to tissue deformation occurring between pre-operative and interventional images, deformable registration is often necessary. We present a registration method focused on surface structures (i.e., saliencies) of soft tissues such as organ capsules or vessels. The main concept follows the idea of representative landmarks (so-called leading points), which represent saliencies in each image in a certain region of interest. The determination of the deformation was based on a geometric model assuming that saliencies can locally be described by planes. These planes were calculated from the landmarks using two-dimensional linear regression. Once corresponding regions in both images were found, a displacement vector field representing the local deformation was computed. Finally, the deformed image was warped to match the pre-operative image. For error calculation we used a phantom representing the urinary bladder and the prostate, which could be deformed to mimic tissue deformation. The error was calculated using corresponding landmarks in both images. The resulting target registration error of this procedure amounted to 1.63 mm. For patient data, a full deformable registration was performed on two 3D-US images of the abdomen. The resulting mean distance error was 2.10 ± 0.66 mm, compared to an error of 2.75 ± 0.57 mm for a simple rigid registration; a two-sided paired t-test gave p < 0.001. We conclude that the method improves the results of rigid registration considerably. Provided an appropriate choice of the filter, there are many possible fields of application.
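The local plane fit can be sketched as an ordinary least-squares regression of z = a*x + b*y + c on the landmark coordinates of a saliency patch; variable names and the example points are illustrative.

```python
# Sketch of the local plane fit used to model a saliency patch: fit
# z = a*x + b*y + c to landmark coordinates by two-dimensional linear
# regression (ordinary least squares). Example points are illustrative.
import numpy as np

def fit_plane(landmarks):
    """landmarks: (N, 3) array of x, y, z positions; returns (a, b, c)."""
    x, y, z = landmarks[:, 0], landmarks[:, 1], landmarks[:, 2]
    design = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(design, z, rcond=None)
    return a, b, c

if __name__ == "__main__":
    pts = np.array([[0, 0, 1.0], [1, 0, 2.0], [0, 1, 3.0], [1, 1, 4.1]])
    print(fit_plane(pts))
```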

<![CDATA[Collecting Kinematic Data on a Ski Track with Optoelectronic Stereophotogrammetry: A Methodological Study Assessing the Feasibility of Bringing the Biomechanics Lab to the Field]]> https://www.researchpad.co/article/5989da0fab0ee8fa60b79259

In the laboratory, optoelectronic stereophotogrammetry is one of the most commonly used motion capture systems; particularly, when position- or orientation-related analyses of human movements are intended. However, for many applied research questions, field experiments are indispensable, and it is not a priori clear whether optoelectronic stereophotogrammetric systems can be expected to perform similarly to in-lab experiments. This study aimed to assess the instrumental errors of kinematic data collected on a ski track using optoelectronic stereophotogrammetry, and to investigate the magnitudes of additional skiing-specific errors and soft tissue/suit artifacts. During a field experiment, the kinematic data of different static and dynamic tasks were captured by the use of 24 infrared-cameras. The distances between three passive markers attached to a rigid bar were stereophotogrammetrically reconstructed and, subsequently, were compared to the manufacturer-specified exact values. While at rest or skiing at low speed, the optoelectronic stereophotogrammetric system’s accuracy and precision for determining inter-marker distances were found to be comparable to those known for in-lab experiments (< 1 mm). However, when measuring a skier’s kinematics under “typical” skiing conditions (i.e., high speeds, inclined/angulated postures and moderate snow spraying), additional errors were found to occur for distances between equipment-fixed markers (total measurement errors: 2.3 ± 2.2 mm). Moreover, for distances between skin-fixed markers, such as the anterior hip markers, additional artifacts were observed (total measurement errors: 8.3 ± 7.1 mm). In summary, these values can be considered sufficient for the detection of meaningful position- or orientation-related differences in alpine skiing. However, it must be emphasized that the use of optoelectronic stereophotogrammetry on a ski track is seriously constrained by limited practical usability, small-sized capture volumes and the occurrence of extensive snow spraying (which results in marker obscuration). The latter limitation possibly might be overcome by the use of more sophisticated cluster-based marker sets.
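The instrumental-error figures above come down to comparing reconstructed inter-marker distances against the manufacturer-specified value frame by frame. A minimal version of that bookkeeping is sketched below; the marker trajectories and the nominal distance are placeholders.

```python
# Minimal sketch of the inter-marker distance error computation: compare the
# reconstructed distance between two rigid-bar markers with the nominal value
# over all frames. Trajectories and the nominal distance are placeholders.
import numpy as np

def distance_errors(marker_a, marker_b, nominal_mm):
    """marker_a, marker_b: (frames, 3) positions in mm; returns per-frame error in mm."""
    measured = np.linalg.norm(marker_a - marker_b, axis=1)
    return measured - nominal_mm

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.normal(0.0, 0.5, (100, 3))                               # placeholder noise, mm
    b = np.array([200.0, 0.0, 0.0]) + rng.normal(0.0, 0.5, (100, 3))
    err = distance_errors(a, b, 200.0)
    print(f"{err.mean():.2f} +/- {err.std():.2f} mm")
```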

<![CDATA[Lagrange Interpolation Learning Particle Swarm Optimization]]> https://www.researchpad.co/article/5989da33ab0ee8fa60b85697

In recent years, comprehensive learning particle swarm optimization (CLPSO) has attracted the attention of many scholars for use in solving multimodal problems, as it is excellent at preserving the particles' diversity and thus preventing premature convergence. However, CLPSO exhibits low solution accuracy. To address this issue, we propose a novel algorithm called LILPSO. First, the algorithm introduces a Lagrange interpolation method to perform a local search around the global best point (gbest). Second, to obtain a better exemplar, the gbest and the historical best points (pbest) of two other particles are chosen for Lagrange interpolation to construct a new exemplar, which replaces CLPSO's comparison method. Numerical experiments conducted on various functions demonstrate the superiority of this algorithm, and the two methods are shown to be efficient at accelerating convergence without leading the particles to premature convergence.
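The local search can be illustrated with the classic three-point quadratic (Lagrange) interpolation step: fit a parabola through three candidate points and move to its vertex. This is the generic interpolation formula only, not the full LILPSO exemplar-construction rule.

```python
# Generic three-point quadratic (Lagrange) interpolation step, as used for a
# local search around a best point: fit a parabola through (x1,f1), (x2,f2),
# (x3,f3) and return the abscissa of its vertex. Not the full LILPSO update.
def quadratic_interpolation_min(x1, f1, x2, f2, x3, f3):
    num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    if den == 0:
        return x2                      # degenerate (collinear) case
    return x2 - 0.5 * num / den

if __name__ == "__main__":
    f = lambda x: (x - 1.3) ** 2
    print(quadratic_interpolation_min(0.0, f(0.0), 1.0, f(1.0), 2.0, f(2.0)))  # 1.3
```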

<![CDATA[Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation]]> https://www.researchpad.co/article/5989db0dab0ee8fa60bcae7e

After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t′, n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations, such as a strict limit on the threshold values, a large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update the secret shadows and use a two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate that our schemes can adjust the threshold safely.
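These constructions build on classic Shamir (t, n) secret sharing, in which shares are points on a random polynomial of degree t - 1 and the secret is recovered by Lagrange interpolation at zero over a prime field. The compact sketch below shows only that underlying primitive; the threshold-changing and two-variable one-way-function machinery is not included.

```python
# Underlying primitive only: Shamir (t, n) secret sharing over a prime field,
# with recovery by Lagrange interpolation at x = 0. The threshold-changing
# machinery of the schemes above is not shown. Requires Python 3.8+ for pow(-1).
import random

PRIME = 2 ** 127 - 1   # a Mersenne prime large enough for demo secrets

def make_shares(secret, t, n, prime=PRIME):
    coeffs = [secret] + [random.randrange(prime) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, prime) for k, c in enumerate(coeffs)) % prime)
            for x in range(1, n + 1)]

def recover(shares, prime=PRIME):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % prime
                den = den * (xi - xj) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

if __name__ == "__main__":
    shares = make_shares(123456789, t=3, n=5)
    print(recover(shares[:3]))   # any 3 of the 5 shares recover the secret
```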

<![CDATA[Breaking Snake Camouflage: Humans Detect Snakes More Accurately than Other Animals under Less Discernible Visual Conditions]]> https://www.researchpad.co/article/5989d9faab0ee8fa60b719de

Humans and non-human primates are extremely sensitive to snakes as exemplified by their ability to detect pictures of snakes more quickly than those of other animals. These findings are consistent with the Snake Detection Theory, which hypothesizes that as predators, snakes were a major source of evolutionary selection that favored expansion of the visual system of primates for rapid snake detection. Many snakes use camouflage to conceal themselves from both prey and their own predators, making it very challenging to detect them. If snakes have acted as a selective pressure on primate visual systems, they should be more easily detected than other animals under difficult visual conditions. Here we tested whether humans discerned images of snakes more accurately than those of non-threatening animals (e.g., birds, cats, or fish) under conditions of less perceptual information by presenting a series of degraded images with the Random Image Structure Evolution technique (interpolation of random noise). We find that participants recognize mosaic images of snakes, which were regarded as functionally equivalent to camouflage, more accurately than those of other animals under dissolved conditions. The present study supports the Snake Detection Theory by showing that humans have a visual system that accurately recognizes snakes under less discernible visual conditions.

<![CDATA[A new, high-resolution global mass coral bleaching database]]> https://www.researchpad.co/article/5989db59ab0ee8fa60bdf129

Episodes of mass coral bleaching have been reported in recent decades and have raised concerns about the future of coral reefs on a warming planet. Despite the efforts to enhance and coordinate coral reef monitoring within and across countries, our knowledge of the geographic extent of mass coral bleaching over the past few decades is incomplete. Existing databases, like ReefBase, are limited by the voluntary nature of contributions, geographical biases in data collection, and the variations in the spatial scale of bleaching reports. In this study, we have developed the first-ever gridded, global-scale historical coral bleaching database. First, we conducted a targeted search for bleaching reports not included in ReefBase by personally contacting scientists and divers conducting monitoring in under-reported locations and by extracting data from the literature. This search increased the number of observed bleaching reports by 79%, from 4146 to 7429. Second, we employed spatial interpolation techniques to develop annual 0.04° × 0.04° latitude-longitude global maps of the probability that bleaching occurred for 1985 through 2010. Initial results indicate that the area of coral reefs with a more likely than not (>50%) or likely (>66%) probability of bleaching was eight times higher in the second half of the assessed time period, after the 1997/1998 El Niño. The results also indicate that annual maximum Degree Heating Weeks, a measure of thermal stress, for coral reefs with a high probability of bleaching increased over time. The database will help the scientific community more accurately assess the change in the frequency of mass coral bleaching events, validate methods of predicting mass coral bleaching, and test whether coral reefs are adjusting to rising ocean temperatures.
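The gridding step, turning scattered bleaching reports into a regular 0.04° probability grid, can be sketched with generic scattered-data interpolation. The coordinates and probabilities below are placeholders, and the database itself uses more sophisticated spatial interpolation than this linear example.

```python
# Generic sketch of gridding scattered bleaching reports onto a 0.04 degree
# latitude-longitude grid (placeholder data; the database uses more
# sophisticated spatial interpolation than this linear example).
import numpy as np
from scipy.interpolate import griddata

def grid_bleaching_probability(lons, lats, probs, resolution=0.04):
    lon_grid = np.arange(lons.min(), lons.max() + resolution, resolution)
    lat_grid = np.arange(lats.min(), lats.max() + resolution, resolution)
    glon, glat = np.meshgrid(lon_grid, lat_grid)
    gridded = griddata((lons, lats), probs, (glon, glat), method="linear")
    return lon_grid, lat_grid, gridded

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    lons, lats = rng.uniform(145, 147, 50), rng.uniform(-18, -16, 50)  # placeholder reports
    probs = rng.uniform(0, 1, 50)                                      # placeholder probabilities
    print(grid_bleaching_probability(lons, lats, probs)[2].shape)
```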
