ResearchPad - numerical-analysis https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks <![CDATA[Image-quality metric system for color filter array evaluation]]> https://www.researchpad.co/article/elastic_article_7704 A modern color filter array (CFA) output is rendered into the final output image using a demosaicing algorithm. During this process, the rendered image is affected by optical and carrier cross talk of the CFA pattern and by the demosaicing algorithm. Although many CFA patterns have been proposed, and although individual image-quality (IQ) evaluation items based on local characteristics or specific domains have been created, no evaluation system has yet been developed that comprehensively assesses the IQ of each CFA pattern. Hence, we present an IQ metric system to evaluate the IQ performance of CFA patterns. The proposed CFA evaluation system includes newly proposed metrics, such as moiré robustness based on the experimentally determined moiré starting point (MSP) and the achromatic reproduction (AR) error, as well as existing metrics: color accuracy using CIELAB, color reproduction error using spatial CIELAB, structural information using the structural similarity, image contrast based on MTF50, structural and color distortion using the mean deviation similarity index (MDSI), and perceptual similarity using the Haar wavelet-based perceptual similarity index (HaarPSI). Through our experiments, we confirmed that the proposed CFA evaluation system can assess the IQ of existing CFAs. Moreover, the proposed system can be used to design or evaluate new CFAs by automatically checking the individual performance of each metric used.

]]>
<![CDATA[High capacity reversible data hiding with interpolation and adaptive embedding]]> https://www.researchpad.co/article/5c897722d5eed0c4847d2525

A new interpolation-based reversible data hiding (IRDH) scheme is reported in this paper. The embedding-capacity requirement of an IRDH scheme usually varies with the application, whether digital image, video, multimedia, big data, or biological data. Disregarding this important consideration, existing IRDH schemes do not offer good embedding rate-distortion performance for payloads of varying size. To meet this varying capacity requirement with our proposed adaptive embedding, we formulate a capacity-control parameter and use it to determine a minimum set of embeddable bits in a pixel. Additionally, we use a logical (bit-wise) correlation between the embeddable pixel and estimated versions of an embedded pixel. Thereby, while a wide range between the upper and lower limits of the embedding capacity is maintained, a given capacity requirement within those limits is also attained with better embedded-image quality. Computational modeling of all new processes of the scheme is presented, and the performance of the scheme is evaluated on a set of popular test images. In our experiments, the proposed scheme recorded significantly better embedding rate-distortion performance than prominent IRDH schemes.
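A much-simplified sketch of interpolation-based embedding (not the authors' exact scheme): even-indexed pixels act as reference samples, odd-indexed pixels are rebuilt by linear interpolation plus up to k payload bits, and the capacity-control parameter k trades capacity against distortion.

```python
def embed_bits(cover_row, bits, k):
    """Hide up to k bits per interpolated pixel (simplified IRDH-style sketch).

    Even-indexed pixels are kept intact as reference samples; each odd-indexed
    pixel is replaced by the interpolation of its neighbours plus the payload
    value, so the reference image is recoverable (reversibility).
    """
    stego = list(cover_row)
    payload = list(bits)
    for i in range(1, len(stego) - 1, 2):
        if len(payload) < k:
            break
        interp = (stego[i - 1] + stego[i + 1]) // 2   # linear interpolation
        chunk, payload = payload[:k], payload[k:]
        stego[i] = interp + int(''.join(map(str, chunk)), 2)  # additive embedding
    return stego

def extract_bits(stego_row, k, n_bits):
    """Recover the payload: the reference pixels are unmodified, so the
    interpolated value can be recomputed and subtracted."""
    bits = []
    for i in range(1, len(stego_row) - 1, 2):
        if len(bits) >= n_bits:
            break
        value = stego_row[i] - (stego_row[i - 1] + stego_row[i + 1]) // 2
        bits.extend(int(b) for b in format(value, '0{}b'.format(k)))
    return bits[:n_bits]
```

Raising k increases capacity but also the maximum deviation from the interpolated value, which is the rate-distortion trade-off the adaptive scheme controls.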

]]>
<![CDATA[A general dose-response relationship for chronic chemical and other health stressors and mixtures based on an emergent illness severity model]]> https://www.researchpad.co/article/5c706744d5eed0c4847c6cf4

Current efforts to assess human health response to chemicals based on high-throughput in vitro assay data on intra-cellular changes have been hindered for some illnesses by lack of information on higher-level extracellular, inter-organ, and organism-level interactions. However, a dose-response function (DRF), informed by various levels of information including apical health response, can represent a template for convergent top-down, bottom-up analysis. In this paper, a general DRF for chronic chemical and other health stressors and mixtures is derived based on a general first-order model previously derived and demonstrated for illness progression. The derivation accounts for essential autocorrelation among initiating event magnitudes along a toxicological mode of action, typical of complex processes in general, and reveals the inverse relationship between the minimum illness-inducing dose and the illness severity per unit dose (both variable across a population). The resulting emergent DRF is theoretically scale-inclusive and amenable to low-dose extrapolation. The two-parameter single-toxicant version can be monotonic or sigmoidal, and is demonstrated to be preferable to traditional models (multistage, lognormal, generalized linear) for the published cancer and non-cancer datasets analyzed: chloroform (induced liver necrosis in female mice); bromate (induced dysplastic foci in male inbred rats); and 2-acetylaminofluorene (induced liver neoplasms and bladder carcinomas in 20,328 female mice). Common- and dissimilar-mode mixture models are demonstrated against orthogonal data on toluene/benzene mixtures (mortality in Japanese medaka, Oryzias latipes, following embryonic exposure). Findings support previous empirical demonstration, and also reveal how a chemical with a typical monotonically-increasing DRF can display a J-shaped DRF when a second, antagonistic common-mode chemical is present.
Overall, the general DRF derived here based on an autocorrelated first-order model appears to provide both a strong theoretical/biological basis for, as well as an accurate statistical description of, a diverse, albeit small, sample of observed dose-response data. The further generalizability of this conclusion can be tested in future analyses comparing with traditional modeling approaches across a broader range of datasets.

]]>
<![CDATA[RaCaT: An open source and easy to use radiomics calculator tool]]> https://www.researchpad.co/article/5c76fe64d5eed0c484e5b9d0

Purpose

The widely known field of ‘radiomics’ aims to provide extensive image-based phenotyping of, e.g., tumors using a wide variety of feature values extracted from medical images. It is therefore of utmost importance that feature values calculated by different institutes follow the same feature definitions. For this purpose, the image biomarker standardization initiative (IBSI) provides detailed mathematical feature descriptions, as well as (mathematical) test phantoms and corresponding reference feature values. We present here an easy-to-use radiomic feature calculator, RaCaT, which calculates a large number of radiomic features for all kinds of medical images, in compliance with the standard.

Methods

The calculator is implemented in C++ and comes as a standalone executable. It can therefore be easily called from any programming language, as well as from the command line, and no programming skills are required to use it. The software architecture is highly modular, so it is easily extensible. Users can also download the source code, adapt it if needed, and build the calculator from source. The calculated feature values comply with those provided by the IBSI standard. Source code, example files for the software configuration, and documentation can be found online on GitHub (https://github.com/ellipfaehlerUMCG/RaCat).

Results

The comparison with the standard values shows that all calculated features, as well as the image preprocessing steps, comply with the IBSI standard. The performance is also demonstrated on clinical examples.

Conclusions

The authors successfully implemented an easy-to-use radiomics calculator that can be called from any programming language or from the command line. Image preprocessing, feature settings, and calculations can be adjusted by the user.

]]>
<![CDATA[Virtual supersampling as post-processing step preserves the trabecular bone morphometry in human peripheral quantitative computed tomography scans]]> https://www.researchpad.co/article/5c6dc9e5d5eed0c48452a446

In the clinical field of diagnosis and monitoring of bone diseases, high-resolution peripheral quantitative computed tomography (HR-pQCT) is an important imaging modality. It provides a resolution at which quantitative bone morphometry can be extracted in vivo in patients. It is known that HR-pQCT yields slightly different morphometric indices compared to the current standard approach, micro-computed tomography (micro-CT). The most obvious reason for this is the restriction of the radiation dose and, with it, the lower image resolution. Given advances in micro-CT evaluation techniques such as patient-specific remodeling simulations and dynamic bone morphometry, a higher image resolution would potentially allow the application of such novel evaluation techniques to clinical HR-pQCT measurements as well. Virtual supersampling was therefore considered as a post-processing step to increase the image resolution of HR-pQCT scans, with the hypothesis that this technique preserves the structural bone morphometry. Supersampling from 82 μm to a virtual 41 μm by trilinear interpolation of the grayscale values of 42 human cadaveric forearms resulted in strong correlations of structural parameters (R2: 0.96–1.00). BV/TV was slightly overestimated (4.3%, R2: 1.00) compared to the HR-pQCT resolution. Tb.N was overestimated (7.47%; R2: 0.99) and Tb.Th was slightly underestimated (-4.20%; R2: 0.98). The technique was reproducible, with PE%CV between 1.96% (SMI) and 7.88% (Conn.D). In a clinical setting with 205 human forearms, with or without fracture, measured with HR-pQCT at 82 μm resolution, the technique was sensitive to changes between groups in all parameters (p < 0.05) except trabecular thickness. In conclusion, we demonstrated that supersampling preserves the bone morphometry of HR-pQCT scans and is reproducible and sensitive to changes between groups. Supersampling can be used to investigate the resolution dependence of HR-pQCT images and to gain more insight into this imaging modality.
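Assuming a SciPy-based workflow (our assumption; the paper does not name an implementation), the supersampling step amounts to linear interpolation of the grayscale values along each axis:

```python
import numpy as np
from scipy import ndimage

def supersample(volume, factor=2):
    """Virtually supersample a grayscale CT volume by trilinear interpolation
    (order-1 spline), e.g. from 82 um to 41 um voxels when factor=2."""
    return ndimage.zoom(volume.astype(np.float32), factor, order=1)
```

Because the interpolation is linear, no gray values outside the original range are created; morphometric indices are then computed on the finer grid.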

]]>
<![CDATA[Precise frequency synchronization detection method based on the group quantization stepping law]]> https://www.researchpad.co/article/5c61e8e0d5eed0c48496f2f7

A precise frequency synchronization detection method is proposed based on the group quantization stepping law. Based on different-frequency group quantization phase processing, high-precision frequency synchronization can be achieved by measuring the quantized phase comparison results. If any repeated phase difference in the quantized phase comparison results is used as the start and stop signal of the counter gate, the time interval between identical phase differences is a group period, used as the gate time. By measuring and analyzing the quantized phase comparison results, the ±1-word counting error of the traditional frequency synchronization detection method is overcome, and the system response time is significantly shortened. The experimental results show that the proposed frequency synchronization detection method is effective: the measurement resolution is notably stable, and a frequency stability better than the E-12/s level can be obtained. The method is superior to the traditional frequency synchronization detection method in many aspects, such as system reliability and stability, detection speed, development cost, power consumption, and volume.

]]>
<![CDATA[The natural selection of words: Finding the features of fitness]]> https://www.researchpad.co/article/5c58d625d5eed0c484031768

We introduce a dataset for studying the evolution of words, constructed from WordNet and the Google Books Ngram Corpus. The dataset tracks the evolution of 4,000 synonym sets (synsets), containing 9,000 English words, from 1800 AD to 2000 AD. We present a supervised learning algorithm that is able to predict the future leader of a synset: the word in the synset that will have the highest frequency. The algorithm uses features based on a word’s length, the characters in the word, and the historical frequencies of the word. It can predict change of leadership (including the identity of the new leader) fifty years in the future, with an F-score considerably above random guessing. Analysis of the learned models provides insight into the causes of change in the leader of a synset. The algorithm confirms observations linguists have made, such as the trend to replace the -ise suffix with -ize, the rivalry between the -ity and -ness suffixes, and the struggle between economy (shorter words are easier to remember and to write) and clarity (longer words are more distinctive and less likely to be confused with one another). The results indicate that integration of the Google Books Ngram Corpus with WordNet has significant potential for improving our understanding of how language evolves.
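A minimal sketch of the feature-based prediction idea (the feature set is abridged and the linear scoring with hand-set weights is our simplification; the paper trains a supervised model on historical data):

```python
def word_features(word, freq_history):
    """Features in the spirit of the paper: word length, a suffix indicator,
    and the recent frequency level and trend (hypothetical simplification)."""
    return {
        'length': len(word),
        'ends_ize': word.endswith('ize'),          # -ise vs -ize rivalry
        'current_freq': freq_history[-1],
        'trend': freq_history[-1] - freq_history[0],
    }

def predict_leader(synset, weights):
    """Score each word in the synset with a linear model and return the
    predicted future leader (the word expected to become most frequent)."""
    def score(item):
        word, hist = item
        f = word_features(word, hist)
        return sum(weights[k] * float(v) for k, v in f.items())
    return max(synset.items(), key=score)[0]
```

With weights favoring an upward frequency trend and the -ize suffix, the sketch reproduces the kind of leadership change the paper predicts.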

]]>
<![CDATA[A quadratic trigonometric spline for curve modeling]]> https://www.researchpad.co/article/5c40f762d5eed0c48438600c

A curve modeling technique has been established with a view to its applications in various disciplines of science, engineering, and design. It is a new spline method using piecewise quadratic trigonometric functions, and it possesses error bounds of order 3. The proposed curve model also has favorable geometric properties. The proposed spline method achieves C2 smoothness and produces a quadratic trigonometric spline (QTS) for curve design and control. Because it has four control points in its piecewise description, it provides a C2 quadratic trigonometric alternative to the traditional cubic polynomial spline (CPS). A comparative analysis verifies the QTS as a better alternative to the CPS, and a timing analysis shows the QTS to be more computationally efficient than the CPS.

]]>
<![CDATA[Unequal error protection technique for video streaming over MIMO-OFDM systems]]> https://www.researchpad.co/article/5c478c5ad5eed0c484bd1c4d

In this paper, a novel unequal error protection (UEP) technique is proposed for video streaming over multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) systems. Based on the concepts of hierarchical quadrature amplitude modulation (HQAM) UEP and multi-antenna UEP, the proposed technique combines the relative protection levels (PLs) of constellation symbols with the differentiated PLs of the transmit antennas. In the proposed technique, standard square quadrature amplitude modulation (QAM) constellations are used instead of HQAM, so that the QAM mapper at the transmitter side and the soft-decision calculation at the receiver side remain unchanged, but the UEP benefit of HQAM is retained. The superior performance of the proposed technique is explained by the improved matching between data of various priorities and data paths with various PLs. The assumed video compression method is H.264/AVC, which is known to be commercially successful. The IEEE 802.16m system is adopted as the data transmission system. With the aid of realistic simulations in strict accordance with the IEEE 802.16m and H.264/AVC standards, the proposed HQAM/multi-antenna UEP technique is shown to improve the video quality significantly for a given average bit error rate when compared with previous techniques.
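The key trick, retaining HQAM-style UEP while keeping a standard Gray-coded square QAM mapper, can be sketched as follows: in Gray-coded 16-QAM, the first bit of each I/Q axis decides the sign of the coordinate and is therefore more error-resilient than the second, so high-priority bits are placed in those positions (a simplified illustration, not the paper's exact mapping):

```python
# Gray mapping of two bits to one 4-PAM axis of a square 16-QAM constellation;
# the first (sign-deciding) bit is the more error-resilient position.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def map_uep_16qam(high_priority, low_priority):
    """Assign high-priority bits to the sign-deciding positions of Gray-coded
    16-QAM symbols; two HP bits and two LP bits per complex symbol."""
    symbols = []
    for hp_i, hp_q, lp_i, lp_q in zip(high_priority[0::2], high_priority[1::2],
                                      low_priority[0::2], low_priority[1::2]):
        i = GRAY_PAM4[(hp_i, lp_i)]   # HP bit picks the quadrant (sign)
        q = GRAY_PAM4[(hp_q, lp_q)]   # LP bit picks the level within it
        symbols.append(complex(i, q))
    return symbols
```

Since the mapper remains a standard Gray 16-QAM table, the receiver's soft-decision calculation needs no change, which is the point made in the abstract.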

]]>
<![CDATA[Two-dimensional local Fourier image reconstruction via domain decomposition Fourier continuation method]]> https://www.researchpad.co/article/5c3fa5aed5eed0c484ca744f

The MRI image is obtained in the spatial domain from Fourier coefficients given in the frequency domain. Obtaining a high-resolution image is costly because it requires high-frequency Fourier data, whereas low-frequency Fourier data is less costly to acquire and is effective if the image is smooth. However, Gibbs ringing, if present, prevails when only low-frequency Fourier data is used. We propose an efficient and accurate local reconstruction method using low-frequency Fourier data that yields a sharp image profile near local edges. The proposed method utilizes only a small number of image data in the local area, and is thus efficient. Furthermore, the method is accurate because it minimizes the global effects on the reconstruction near weak edges that appear in many global methods, for which all the image data is used in the reconstruction. To apply the Fourier method locally to local non-periodic data, the proposed method builds on the Fourier continuation method. This work extends our previous 1D Fourier domain decomposition method to 2D Fourier data. The proposed method first divides the MRI image in the spatial domain into many subdomains and applies the Fourier continuation method to obtain a smooth periodic extension of the subdomain of interest. It then reconstructs the local image by L2 minimization regularized by the L1 norm of edge sparsity, which sharpens the image near edges. Our numerical results suggest that the proposed method should be applied in a dimension-by-dimension manner rather than globally, for both reconstruction quality and computational efficiency. The numerical results show that the proposed method is effective when a local reconstruction is sought and that the solution is free of Gibbs oscillations.
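The Gibbs ringing that motivates the method is easy to reproduce: a Fourier series truncated to low frequencies overshoots near a jump by roughly 9% of the jump height regardless of how many modes are kept (a generic illustration, not the paper's reconstruction):

```python
import numpy as np

def truncated_fourier_square(n_modes, n_pts=2048):
    """Partial Fourier sum of a unit square wave on [0, 2*pi); truncation at
    low frequencies produces the Gibbs overshoot near the jump at x = 0."""
    x = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
    s = np.zeros_like(x)
    for k in range(1, n_modes + 1, 2):          # odd harmonics only
        s += (4 / np.pi) * np.sin(k * x) / k
    return x, s
```

Away from the jump the partial sum is close to the true value ±1, but the overshoot near the edge does not decay with more modes; it only narrows, which is why a plain truncated Fourier reconstruction cannot remove the ringing.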

]]>
<![CDATA[Implementation and assessment of the black body bias correction in quantitative neutron imaging]]> https://www.researchpad.co/article/5c390ba1d5eed0c48491d96e

We describe in this paper the experimental procedure, the data treatment and the quantification of the black body correction: an experimental approach to compensate for scattering and systematic biases in quantitative neutron imaging based on experimental data. The correction algorithm consists of two steps: estimation of the scattering component, and correction using an enhanced normalization formula. The method incorporates correction terms into the image normalization procedure, which usually only includes open-beam and dark-current images (open-beam correction). Our aim is to show its efficiency and reproducibility: we detail the data treatment procedures and quantitatively investigate the effect of the correction. Its implementation is included in the open source CT reconstruction software MuhRec. The performance of the proposed algorithm is demonstrated using simulated and experimental CT datasets acquired at the ICON and NEUTRA beamlines at the Paul Scherrer Institut.
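A sketch of the enhanced normalization idea (our reading of the two-step correction; the published formula may carry additional bias terms): the estimated scattering component is subtracted from both the projection and the open-beam image before the usual dark-current-corrected division:

```python
import numpy as np

def corrected_transmission(img, open_beam, dark, scatter_img, scatter_ob):
    """Enhanced normalization with a scatter term (black-body correction
    sketch): subtract the dark current and the estimated scattering component
    from both the sample projection and the open-beam image, then divide."""
    num = img - dark - scatter_img
    den = open_beam - dark - scatter_ob
    return num / den
```

Without the two scatter terms this reduces to the standard open-beam correction (img - dark) / (open_beam - dark), which overestimates the transmission whenever scattered neutrons reach the detector.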

]]>
<![CDATA[Accurate, robust and harmonized implementation of morpho-functional imaging in treatment planning for personalized radiotherapy]]> https://www.researchpad.co/article/5c3fa589d5eed0c484ca5858

In this work we present a methodology for using harmonized PET/CT imaging in a dose painting by number (DPBN) approach by means of a robust and accurate treatment planning system. Image processing and treatment planning were performed with a Matlab-based platform, called CARMEN, which includes a full Monte Carlo simulation. A linear programming formulation was developed for voxel-by-voxel robust optimization, and a specific direct aperture optimization was designed for an efficient adaptive radiotherapy implementation. The DPBN approach with our methodology was tested for its ability to reduce the uncertainties associated with both the absolute and the relative value of the information in the functional image. For the same H&N case, a single robust treatment was planned for dose prescription maps corresponding to standardized uptake value distributions from two different image reconstruction protocols: one fulfilling the EARL accreditation for harmonization of [18F]FDG PET/CT images, and the other using the highest available spatial resolution. A robust treatment was also planned to fulfill dose prescription maps corresponding to both approaches, volume-based dose painting by contours and our voxel-by-voxel DPBN. Adaptive planning was also carried out to check the suitability of our proposal.

The different plans were robust enough to cover a range of scenarios for implementing harmonization strategies while using the highest available resolution. Robustness to the discretization level of the dose prescription, whether by contours or by numbers, was also achieved. All plans showed excellent quality-index histograms and quality factors below 2%. An efficient solution for adaptive radiotherapy based directly on changes in the functional image was obtained. We showed that a voxel-by-voxel DPBN approach can overcome the typical drawbacks of PET/CT images, giving the clinical specialist enough confidence for the routine implementation of functional imaging for personalized radiotherapy.

]]>
<![CDATA[Current and Future Distribution of the Lone Star Tick, Amblyomma americanum (L.) (Acari: Ixodidae) in North America]]> https://www.researchpad.co/article/5c36679cd5eed0c4841a5d65

Acarological surveys in areas outside the currently believed leading edge of the distribution of lone star ticks (Amblyomma americanum), coupled with recent reports of their identification in previously uninvaded areas in the public health literature, suggest that this species is more broadly distributed in North America than currently understood. Therefore, we evaluated its potential geographic extent under present and future conditions using an ecological niche modeling approach based on the museum records available for this species at the Walter Reed Biosystematics Unit (WRBU). The median prediction of the best-fitting model indicated that lone star ticks are currently likely to be present in broader regions across the Eastern Seaboard, as well as in the Upper Midwest, where this species could be expanding its range. Further northward and westward expansion of these ticks can be expected as a result of ongoing climate change, under both low- and high-emissions scenarios.

]]>
<![CDATA[Seismic site classification and amplification of shallow bedrock sites]]> https://www.researchpad.co/article/5c2d2ec3d5eed0c484d9b7b5

This study develops empirical correlations between the average standard penetration resistance up to bedrock depth (NSPTR), the average shear-wave velocity up to bedrock depth (VSR), and the average shear-wave velocity over the top 30 m (VS30) for shallow sites (bedrock at a depth of less than 25 m). A total of 63 shallow sites were assessed for penetration resistance up to bedrock using Standard Penetration Tests (SPT) and for dynamic soil properties, i.e., shear-wave velocity (VS) from Multichannel Analysis of Surface Waves. The study shows that the 30 m averaged shear-wave velocity exceeds the average velocity up to bedrock depth at shallow-bedrock sites because of the inclusion of rock velocities. Furthermore, NSPTR and VSR were correlated with VS30. This is the first attempt to develop empirical relationships of this kind for seismic site classification. These correlations are useful for seismic site classification in regions with Standard Penetration Test (NSPT) values but limited VS measurements. In addition, surface and bedrock motion recordings of 12 selected KiK-net shallow sites were collected, and amplifications were estimated from the respective peak ground accelerations and spectral accelerations and then related to the average shear-wave velocity up to bedrock and over 30 m. The results show that the amplification correlates better with VSR than with VS30 for shallow sites; more data can be added to strengthen this correlation.
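Both averages follow the standard travel-time definition VS = depth / Σ(h_i / v_i). A small sketch (with hypothetical layer values) also reproduces the observation above that VS30 exceeds VSR at a shallow-bedrock site once fast rock velocities enter the 30 m window:

```python
def travel_time_average_vs(thicknesses, velocities, depth=30.0):
    """Time-averaged shear-wave velocity over the top `depth` metres:
    V_S = depth / sum(h_i / v_i), the standard definition behind VS30."""
    t, z = 0.0, 0.0
    for h, v in zip(thicknesses, velocities):
        h_used = min(h, depth - z)   # truncate the layer at the target depth
        t += h_used / v              # accumulate vertical travel time
        z += h_used
        if z >= depth:
            break
    if z < depth:
        raise ValueError('profile shallower than requested depth')
    return depth / t
```

For a site with 20 m of soil at 200 m/s over rock at 800 m/s (hypothetical values), VSR averages only the soil, while VS30 includes 10 m of rock and is therefore higher.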

]]>
<![CDATA[Universal health insurance, health inequality and oral cancer in Taiwan]]> https://www.researchpad.co/article/5bd2323f40307c60de5e996e

Introduction

The introduction of universal health insurance coverage aims to provide equal accessibility and affordability of health care, but whether such a policy eliminates health inequalities has not been conclusively determined. This research aims to examine the healthcare outcomes of oral cancer and determine whether the universal coverage system in Taiwan has reduced health inequality.

Methods

Linking the databases of the National Cancer Registry with the National Mortality Registry in Taiwan, we stratified patients with oral squamous cell carcinoma by gender and income to estimate the incidence rate, cumulative incidence rate aged from 20 to 79 (CIR20-79), life expectancy, and expected years of life lost (EYLL). The difficulties with asymmetries and short follow-up periods were resolved through applying survival analysis extrapolation methods.

Results

While the population as a whole showed a general improvement in life expectancy after the introduction of the NHI, the estimated changes in EYLL for the high-, middle-, and low-income female patients were +0.3, -0.5, and -7 years, respectively, indicating a reduction in health inequality. Improvements for the male patients were unremarkable. The CIR20-79 of oral cancer did not drop in the disadvantaged groups as it did in those with higher incomes.

Conclusions

Universal coverage alone may not reduce health inequality across different income groups for oral cancer unless effective preventive measures are implemented for economically disadvantaged regions.

]]>
<![CDATA[Wind Data Mining by Kohonen Neural Networks]]> https://www.researchpad.co/article/5989dae1ab0ee8fa60bbc1ae

Time series of Circulation Weather Types (CWT), including daily averaged wind direction and vorticity, are self-classified by similarity using Kohonen Neural Networks (KNN). It is shown that a KNN is able to map by similarity all 7300 five-day CWT sequences during the period 1975–94 in London, United Kingdom. As a first result, it gives the most probable wind sequences preceding each of the 27 CWT Lamb classes in that period. Conversely, as a second result, the observed diffuse correlation between five-day CWT sequences and the CWT of the sixth day over the long 20-year period can be generalized to predict the latter from the preceding CWT sequence in a different test period, such as 1995, as both time series are similar. Although the average prediction error is comparable to that obtained by standard forecasting methods, the KNN approach gives complementary results, as they depend only on an objective classification of observed CWT data, without any model assumption. The 27 CWT of the Lamb Catalogue were coded with binary three-dimensional vectors pointing to the faces, edges, and vertices of a “wind cube,” so that similar CWTs were represented by nearby vectors.
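The self-classification step can be sketched with a minimal one-dimensional Kohonen map (illustrative only; the paper's map size, input coding, and training schedule differ):

```python
import numpy as np

def train_som(data, n_units=4, epochs=200, lr0=0.5, seed=0):
    """Minimal 1-D Kohonen self-organizing map: units compete for each sample,
    and the winner and its neighbours move towards it, so similar inputs end
    up mapped to nearby units."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(data.min(), data.max(), size=(n_units, data.shape[1]))
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                   # decaying learning rate
        radius = max(1.0, n_units / 2 * (1 - epoch / epochs))
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))  # best matching unit
            d = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-(d ** 2) / (2 * radius ** 2))       # neighbourhood kernel
            w += lr * h[:, None] * (x - w)
    return w

def classify(w, x):
    """Return the index of the unit (class) closest to the input vector."""
    return int(np.argmin(np.linalg.norm(w - x, axis=1)))
```

Applied to coded CWT sequence vectors, the trained units play the role of the self-organized sequence classes described above.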

]]>
<![CDATA[Increasing the Depth of Current Understanding: Sensitivity Testing of Deep-Sea Larval Dispersal Models for Ecologists]]> https://www.researchpad.co/article/5989d9e8ab0ee8fa60b6bfaa

Larval dispersal is an important ecological process of great interest to conservation and the establishment of marine protected areas. Increasing numbers of studies are turning to biophysical models to simulate dispersal patterns, including in the deep-sea, but for many ecologists unassisted by a physical oceanographer, a model can present as a black box. Sensitivity testing offers a means to test the models’ abilities and limitations and is a starting point for all modelling efforts. The aim of this study is to illustrate a sensitivity testing process for the unassisted ecologist, through a deep-sea case study example, and demonstrate how sensitivity testing can be used to determine optimal model settings, assess model adequacy, and inform ecological interpretation of model outputs. Five input parameters are tested (timestep of particle simulator (TS), horizontal (HS) and vertical separation (VS) of release points, release frequency (RF), and temporal range (TR) of simulations) using a commonly employed pairing of models. The procedures used are relevant to all marine larval dispersal models. It is shown how the results of these tests can inform the future set up and interpretation of ecological studies in this area. For example, an optimal arrangement of release locations spanning a release area could be deduced; the increased depth range spanned in deep-sea studies may necessitate the stratification of dispersal simulations with different numbers of release locations at different depths; no fewer than 52 releases per year should be used unless biologically informed; three years of simulations chosen based on climatic extremes may provide results with 90% similarity to five years of simulation; and this model setup is not appropriate for simulating rare dispersal events. A step-by-step process, summarising advice on the sensitivity testing procedure, is provided to inform all future unassisted ecologists looking to run a larval dispersal simulation.

]]>
<![CDATA[Biomechanics of the Chick Embryonic Heart Outflow Tract at HH18 Using 4D Optical Coherence Tomography Imaging and Computational Modeling]]> https://www.researchpad.co/article/5989db41ab0ee8fa60bd6f7f

During developmental stages, biomechanical stimuli on cardiac cells modulate genetic programs, and deviations from normal stimuli can lead to cardiac defects. Therefore, it is important to characterize normal cardiac biomechanical stimuli during early developmental stages. Using the chicken embryo model of cardiac development, we focused on characterizing biomechanical stimuli on the Hamburger–Hamilton (HH) 18 chick cardiac outflow tract (OFT), the distal portion of the heart from which a large portion of defects observed in humans originate. To characterize biomechanical stimuli in the OFT, we used a combination of in vivo optical coherence tomography (OCT) imaging, physiological measurements and computational fluid dynamics (CFD) modeling. We found that, at HH18, the proximal portion of the OFT wall undergoes larger circumferential strains than its distal portion, while the distal portion of the OFT wall undergoes larger wall stresses. Maximal wall shear stresses were generally found on the surface of endocardial cushions, which are protrusions of extracellular matrix onto the OFT lumen that later during development give rise to cardiac septa and valves. The non-uniform spatial and temporal distributions of stresses and strains in the OFT walls provide biomechanical cues to cardiac cells that likely aid in the extensive differential growth and remodeling patterns observed during normal development.

]]>
<![CDATA[Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models]]> https://www.researchpad.co/article/5989daadab0ee8fa60baa1c0

This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system in which membership functions act as the kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Because the membership functions act as interpolation kernels, their choice determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the problem of modeling static nonlinearities, since it is capable of modeling both a function and its inverse. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO system, and a MIMO system. Good results are obtained for performance metrics such as set-point tracking, control variation, and robustness. The results demonstrate the applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for the implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC).
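The kernel idea can be sketched in one dimension: with triangular membership functions on the knots, adjacent membership degrees sum to one and the Takagi-Sugeno weighted average reduces exactly to linear interpolation (our simplified reading of FLHI; the actual tool conjoins memberships over an N-dimensional hypercube and supports other kernels):

```python
def triangular_membership(x, centers):
    """Degrees of membership of x in triangular fuzzy sets centred on the
    knots; inside each interval the two adjacent memberships sum to 1."""
    mu = [0.0] * len(centers)
    for i in range(len(centers) - 1):
        a, b = centers[i], centers[i + 1]
        if a <= x <= b:
            mu[i] = (b - x) / (b - a)
            mu[i + 1] = (x - a) / (b - a)
            break
    return mu

def flhi_interpolate(x, centers, values):
    """Takagi-Sugeno style inference: weight each rule consequent by its
    membership degree; with triangular kernels this is linear interpolation."""
    mu = triangular_membership(x, centers)
    return sum(m * v for m, v in zip(mu, values)) / sum(mu)
```

Swapping the triangular kernel for a cubic or Lanczos kernel changes the interpolation characteristics without changing the inference structure, which is the flexibility the abstract describes.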

]]>
<![CDATA[Application of Differential Evolution Algorithm on Self-Potential Data]]> https://www.researchpad.co/article/5989d9d2ab0ee8fa60b647b8

Differential evolution (DE) is a population-based evolutionary algorithm widely used for solving multidimensional global optimization problems over continuous spaces, and it has been successfully applied to several kinds of problems. In this paper, differential evolution is used for the quantitative interpretation of self-potential data in geophysics. Six parameters are estimated, including the electrical dipole moment, the depth of the source, the distance from the origin, the polarization angle, and the regional coefficients. This study considers three kinds of data from Turkey: noise-free synthetic data, contaminated synthetic data, and a field example. The evolution of the misfit and of the corresponding model parameters is tracked across generations. We then show the variation of the parameters in the vicinity of the low-misfit area, and how the frequency distribution of each parameter relates to the number of DE iterations. Experimental results show that DE solves the quantitative interpretation of self-potential data efficiently compared with previous methods.
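The optimizer itself is standard; a textbook DE/rand/1/bin sketch is shown below (generic form with hypothetical control parameters, minimizing a simple sphere misfit rather than the self-potential forward model used in the paper):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=1):
    """Minimal DE/rand/1/bin: mutate with a scaled difference of two random
    members, crossover with the target vector, keep the trial if not worse."""
    random.seed(seed)
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            jrand = random.randrange(dim)   # ensure at least one mutated gene
            trial = []
            for j in range(dim):
                if random.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clamp to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = f(trial)
            if tc <= cost[i]:               # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]
```

In the paper's setting, f would be the misfit between observed and modeled self-potential anomalies, and the six model parameters would form the search vector.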

]]>