ResearchPad - polynomials https://www.researchpad.co
<![CDATA[Genetic algorithm-based personalized models of human cardiac action potential]]> https://www.researchpad.co/article/elastic_article_7669

We present a novel modification of the genetic algorithm (GA) which determines personalized parameters of a cardiomyocyte electrophysiology model based on a set of experimental human action potentials (APs) recorded at different heart rates. In order to find the steady-state solution, the algorithm performs a simultaneous search in the parameter and slow-variable spaces. We demonstrate that several GA modifications are required for effective convergence. First, we used Cauchy mutation along a random direction in the parameter space. Second, a relatively large number of elite organisms (6–10% of the population passed on to the new generation) was required for effective convergence. Test runs with synthetic APs as input data indicate that the algorithm error is low for high-amplitude ionic currents (1.6±1.6% for IKr, 3.2±3.5% for IK1, 3.9±3.5% for INa, 8.2±6.3% for ICaL). An experimental signal-to-noise ratio above 28 dB was required for high-quality GA performance. The GA was validated against optical mapping recordings of human ventricular APs and the mRNA expression profiles of donor hearts. In particular, GA output parameters were rescaled in proportion to the ratio of mRNA levels between patients. We demonstrate that the mRNA-based models predict the dependence of the AP waveform on heart rate with high precision. The latter also provides a novel technique of model personalization that makes it possible to map a gene expression profile to cardiac function.
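
A minimal sketch of the two GA modifications highlighted above (Cauchy mutation along a random direction, plus a sizeable elite fraction). The population size, mutation scale, and toy fitness function are placeholders, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def cauchy_mutate(params, scale=0.05):
    """Mutate along a random unit direction with a Cauchy-distributed step."""
    direction = rng.normal(size=params.shape)
    direction /= np.linalg.norm(direction)
    step = scale * rng.standard_cauchy()      # heavy tails allow rare long jumps
    return params + step * direction

def evolve(population, fitness, elite_frac=0.08, n_gen=200):
    """Minimal GA loop: carry the elite over, refill by mutating elite parents."""
    n_elite = max(1, int(elite_frac * len(population)))
    for _ in range(n_gen):
        population.sort(key=fitness)          # lower error = better
        elite = population[:n_elite]
        children = [cauchy_mutate(elite[rng.integers(n_elite)])
                    for _ in range(len(population) - n_elite)]
        population = elite + children
    return min(population, key=fitness)

# Toy usage: recover a hidden 4-parameter vector from a quadratic error
target = np.array([1.0, -0.5, 2.0, 0.3])
error = lambda p: float(np.sum((p - target) ** 2))
best = evolve([rng.normal(size=4) for _ in range(100)], error)
print(np.round(best, 2))
```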

]]>
<![CDATA[Lean back and wait for the alarm? Testing an automated alarm system for nosocomial outbreaks to provide support for infection control professionals]]> https://www.researchpad.co/article/N4571fdc0-2a2e-4467-acc9-eeadc2652757

Introduction

Outbreaks of communicable diseases in hospitals need to be quickly detected in order to enable immediate control. The increasing digitalization of hospital data processing offers potential solutions for automated outbreak detection systems (AODS). Our goal was to assess a newly developed AODS.

Methods

Our AODS was based on the diagnostic results of routine clinical microbiological examinations. The system prospectively counted detections per bacterial pathogen over time for the years 2016 and 2017; the baseline covered the years 2013–2015. The comparative analysis was based on six different mathematical algorithms (normal/Poisson and score prediction intervals, the early aberration reporting system, negative binomial CUSUMs, and the Farrington algorithm). The automatically detected clusters were then compared with the results of our manual outbreak detection system.
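
For intuition, here is a bare-bones sketch of the simplest algorithm family named above, an upper Poisson prediction bound on per-pathogen counts derived from baseline years. It is an illustration only, not the study's implementation, and the counts are invented:

```python
import numpy as np

def poisson_alarm(baseline_counts, current_counts, z=1.96):
    """Flag time bins whose count exceeds an approximate upper
    Poisson prediction bound derived from the baseline mean."""
    lam = np.mean(baseline_counts)            # baseline mean per bin
    upper = lam + z * np.sqrt(lam)            # normal approximation to Poisson
    return [i for i, c in enumerate(current_counts) if c > upper]

# Hypothetical weekly detections of one pathogen
baseline = [2, 1, 3, 2, 0, 2, 1, 2, 3, 1]     # 2013-2015 baseline bins
current  = [1, 2, 8, 2, 1]                    # 2016 bins; bin 2 spikes
print(poisson_alarm(baseline, current))       # -> [2]
```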

Results

During the analysis period, 14 different hospital outbreaks were detected by conventional manual outbreak detection. Based on the pathogens’ overall incidence, outbreaks were divided into two categories: outbreaks with rarely detected pathogens (sporadic) and outbreaks with frequently detected pathogens (endemic). For outbreaks with sporadic pathogens, the detection rate of our AODS ranged from 83% to 100%; every algorithm detected 6 of the 7 such outbreaks. The AODS identified outbreaks with an endemic pathogen at a detection rate of 33% to 100%. For endemic pathogens, the results varied with the epidemiological characteristics of each outbreak and pathogen.

Conclusion

An AODS for hospitals based on routine microbiological data is feasible and can provide relevant benefits for infection control teams. It offers timely automated notification of suspected pathogen clusters, especially for sporadically occurring pathogens. However, outbreaks of endemically detected pathogens require further individual, pathogen-specific and setting-specific adjustments.

]]>
<![CDATA[Indirect treatment comparisons including network meta-analysis: Lenvatinib plus everolimus for the second-line treatment of advanced/metastatic renal cell carcinoma]]> https://www.researchpad.co/article/5c8823dbd5eed0c484639163

Background

In the absence of clinical trials providing direct efficacy results, this study compares different methods of indirect treatment comparison (ITC) and their respective impacts on efficacy estimates for lenvatinib (LEN) plus everolimus (EVE) combination therapy compared with other second-line treatments for advanced/metastatic renal cell carcinoma (a/mRCC).

Methods

Using EVE alone as the common comparator, the Bucher method for ITC compared LEN + EVE with cabozantinib (CAB), nivolumab (NIV), placebo (PBO) and axitinib (AXI). Hazard ratios (HRs) for overall survival (OS) and progression-free survival (PFS) were used to estimate the impact of applying three versions of the LEN + EVE trial data in separate ITCs. Finally, to overcome exchangeability bias and potential violations of the proportional hazards assumption, a network meta-analysis using fractional polynomials was performed.
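
The Bucher method combines two trial-versus-common-comparator hazard ratios on the log scale. The sketch below shows the arithmetic with made-up numbers, not the study's data:

```python
import math

def bucher_itc(hr_a_vs_c, se_log_a, hr_b_vs_c, se_log_b, z=1.96):
    """Indirect HR of A vs B through common comparator C:
    log HR_AB = log HR_AC - log HR_BC; SEs add in quadrature."""
    log_hr = math.log(hr_a_vs_c) - math.log(hr_b_vs_c)
    se = math.sqrt(se_log_a ** 2 + se_log_b ** 2)
    ci = (math.exp(log_hr - z * se), math.exp(log_hr + z * se))
    return math.exp(log_hr), ci

# Hypothetical inputs: LEN + EVE vs EVE, and NIV vs EVE (illustrative only)
print(bucher_itc(0.45, 0.20, 0.85, 0.15))
```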

Results

The Bucher ITCs demonstrated LEN + EVE superiority over EVE for PFS, indirect superiority to NIV, AXI, and PBO, and no difference from CAB. For OS, LEN + EVE was superior to EVE and indirectly superior to PBO when applying the original HOPE 205 data. Using European Medicines Agency data, LEN + EVE was directly superior to EVE for OS. Fractional polynomial HRs for PFS and OS substantially overlapped with the Bucher estimates, demonstrating LEN + EVE superiority over EVE alone, NIV, and CAB. However, these results were not statistically significant, as the credible intervals for the HRs crossed 1.0.

Conclusions

Comparing three Bucher ITCs, LEN + EVE demonstrated superior PFS when indirectly compared to NIV, AXI, and PBO, and mixed results for OS. While fractional polynomial modelling for PFS and OS failed to find statistically significant differences in LEN + EVE efficacy, the overall HR trends were comparable.

]]>
<![CDATA[Security analysis of elliptic curves with embedding degree 1 proposed in PLOS ONE 2016]]> https://www.researchpad.co/article/5c75ac88d5eed0c484d089b5

Wang et al. proposed a method for obtaining elliptic curves with embedding degree 1 for securing critical infrastructures, and presented several elliptic curves generated by their method with torsion points of 160-bit and 189-bit orders. They also presented some experimental results and claimed that their implementation of an elliptic curve generated with their method is faster than an implementation for embedded devices presented by Bertoni et al. In this paper, we point out that the security and efficiency claims given by Wang et al. are flawed. Specifically, we show that it is possible to solve the finite field discrete logarithm problems defined over their elliptic curves in practice. For the elliptic curves with torsion points of 160-bit orders generated by Wang et al., their instances of the finite field discrete logarithm problem can be solved in around 4 hours using a standard desktop PC. For the torsion points of 189-bit orders, their instances can be solved in around 10 days using two standard desktop PCs. The hardness of the finite field discrete logarithm problem is one of the most important foundations of their security; therefore, their elliptic curves should not be used for cryptographic purposes.
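
As a scale model of why an undersized discrete-logarithm instance is weak, here is a baby-step giant-step solver that cracks toy field sizes in seconds (the 160- and 189-bit instances above required index-calculus-class methods and hours to days of computation; this is background illustration, not the paper's attack):

```python
import math

def bsgs(g, h, p):
    """Solve g^x = h (mod p) in O(sqrt(p)) time and space."""
    m = math.isqrt(p - 1) + 1
    table = {pow(g, j, p): j for j in range(m)}   # baby steps g^j
    factor = pow(g, -m, p)                        # g^(-m) mod p
    gamma = h
    for i in range(m):                            # giant steps h * g^(-im)
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * factor % p
    return None

# Toy instance over a tiny prime field, nothing like 160-bit security
p, g, x = 1000003, 2, 123456
assert bsgs(g, pow(g, x, p), p) == x
```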

]]>
<![CDATA[Reliability of a new analysis to compute time to stabilization following a single leg drop jump landing in children]]> https://www.researchpad.co/article/5c6c75d0d5eed0c4843d024a

Although a number of different methods have been proposed to assess the time to stabilization (TTS), none is reliable in every axis, and no tests of this type have been carried out on children. The purpose of this study was thus to develop a new computational method to obtain the TTS using a time-scale (frequency) approach [i.e. continuous wavelet transformation (WAV)] in children. Thirty normally developed children (mean age 10.16 years, SD = 1.52) participated in the study. Every participant performed 30 single-leg drop jump landings with the dominant lower limb (barefoot) on a force plate from three different heights (15 cm, 20 cm and 25 cm). Five signals were used to compute the TTS: i) raw, ii) root mean squared, iii) sequential average processing, iv) the fitting curve of the signal using an unbounded third-order polynomial fit, and v) WAV. The reliability of the TTS was determined by computing both the Intraclass Correlation Coefficient (ICC) and the Standard Error of Measurement (SEM). In the antero-posterior and vertical axes, the values obtained with the WAV signal from all heights were similar to those obtained by raw, root mean squared and sequential average processing. The values obtained for the medio-lateral axis were relatively small. The WAV method provided substantial-to-good ICC values and low SEM for almost all axes and heights. The results of the current study thus suggest the WAV method could be used to compute overall TTS when studying children’s dynamic postural stability.
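
As a simplified stand-in for the TTS computation (the sequential-average variant, method iii above, rather than the paper's preferred WAV signal), the sketch below defines stabilization as the last time the processed force signal leaves a band around its final value. Thresholds and the synthetic landing are illustrative assumptions:

```python
import numpy as np

def time_to_stabilization(force, fs, band=0.05):
    """TTS: last time the sequential average leaves +/- band
    (as a fraction of the signal range) around its final value."""
    seq_avg = np.cumsum(force) / np.arange(1, len(force) + 1)
    tol = band * (force.max() - force.min())
    outside = np.abs(seq_avg - seq_avg[-1]) > tol
    last = np.max(np.nonzero(outside)[0]) if outside.any() else 0
    return (last + 1) / fs   # seconds from landing

# Synthetic landing: decaying oscillation settling to body weight (N)
fs = 1000
t = np.arange(0, 3, 1 / fs)
force = 700 + 300 * np.exp(-3 * t) * np.cos(2 * np.pi * 8 * t)
print(round(time_to_stabilization(force, fs), 3))
```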

]]>
<![CDATA[Systematically false positives in early warning signal analysis]]> https://www.researchpad.co/article/5c648ce2d5eed0c484c819e6

Many systems in scientific fields such as medicine, ecology, economics and climate science exhibit so-called critical transitions, through which the system abruptly changes from one state to a different state. Typical examples are epileptic seizures, changes in the climate system and catastrophic shifts in ecosystems. In order to predict imminent critical transitions, a mathematical apparatus called early warning signals has been developed, and this method is used successfully in many scientific areas. However, not all critical transitions can be detected by this approach (false negatives), and the appearance of early warning signals does not necessarily prove that a critical transition is imminent (false positives). Furthermore, there are whole classes of systems that always show early warning signals, even though they do not feature critical transitions. In this study we identify such classes in order to provide a safeguard against misinterpretation of the results of an early warning signal analysis of such systems. Furthermore, we discuss strategies to avoid such systematic false positives and test our theoretical insights by applying them to real-world data.
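
For reference, a bare-bones early warning signal analysis computes rolling variance and lag-1 autocorrelation, reading a joint upward trend as a warning. The random-walk example below deliberately illustrates the false-positive problem discussed above: such a process tends to trend in both indicators without any critical transition. Window length and data are arbitrary choices:

```python
import numpy as np

def ews(x, win=200):
    """Rolling variance and lag-1 autocorrelation (the classic EWS pair)."""
    var, ac1 = [], []
    for i in range(len(x) - win):
        w = x[i:i + win]
        var.append(np.var(w))
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=2000))   # random walk: no transition exists
v, a = ews(walk)
print(v[-1] > v[0], a[-1] > a[0])         # often True, True: a false positive
```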

]]>
<![CDATA[Resolution invariant wavelet features of melanoma studied by SVM classifiers]]> https://www.researchpad.co/article/5c648cd2d5eed0c484c81893

This article addresses computer-aided diagnosis of melanoma skin cancer. We derive wavelet-based features of melanoma from dermoscopic images of pigmented skin lesions and apply binary C-SVM classifiers to discriminate malignant melanoma from dysplastic nevus. The aim of this research is to select the most efficient model of the SVM classifier for various image resolutions and to search for the best resolution-invariant wavelet bases. We show the AUC as a function of the wavelet number and of SVM kernels optimized by Bayesian search for two independent data sets. Our results are consistent with previous experiments discriminating melanoma in dermoscopy images with ensembling and feed-forward neural networks.
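
A minimal version of such a feature pipeline, assuming PyWavelets and scikit-learn and synthetic stand-in images: per-subband wavelet energies fed to a C-SVM. The wavelet choice, feature set, and data are placeholders, not the study's:

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energies(img, wavelet="db4", level=3):
    """Energy of each detail subband of a 2D wavelet decomposition."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    feats = []
    for cH, cV, cD in coeffs[1:]:         # skip the approximation subband
        feats += [np.sum(cH**2), np.sum(cV**2), np.sum(cD**2)]
    return np.log1p(feats)

rng = np.random.default_rng(2)
X = np.array([wavelet_energies(rng.normal(scale=s, size=(64, 64)))
              for s in [1.0] * 40 + [1.5] * 40])   # two synthetic "classes"
y = np.array([0] * 40 + [1] * 40)
clf = SVC(C=1.0, kernel="rbf").fit(X, y)
print(clf.score(X, y))
```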

]]>
<![CDATA[A fast threshold segmentation method for froth image base on the pixel distribution characteristic]]> https://www.researchpad.co/article/5c40f77bd5eed0c484386242

As camera resolution increases, the number of pixels contained in a froth image grows, which brings many challenges to image segmentation. Froth size and distribution are important indices in froth flotation, and the segmentation of froth images has long been a problem in building flotation models. In segmenting froth images, Otsu's method is usually used to obtain a binary image for classification, and it can produce a satisfactory segmentation result. However, the between-class variance must be computed for every gray level, which takes a long time for froth images with a large number of pixels. To solve this problem, an improved method is proposed in this paper. Most froth images share the pixel distribution characteristic that the gray histogram curve has a sawtooth shape. The proposed method fits the gray histogram curve with a polynomial and takes the characteristics of the histogram's valleys into consideration in Otsu's method. Two performance comparison methods are introduced and used. An experimental comparison between Otsu's method and the proposed method shows that the proposed method achieves satisfactory image segmentation with a low computing time.
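
A sketch of the idea as described, with hypothetical parameter choices: fit a low-order polynomial to the gray histogram, locate its valleys, and evaluate Otsu's between-class variance only near those valleys instead of at all 256 levels:

```python
import numpy as np

def otsu_variance(hist, t):
    """Between-class variance of thresholding the histogram at level t."""
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = (np.arange(t) * p[:t]).sum() / w0
    mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def fast_otsu(hist, degree=8, radius=5):
    """Search only around valleys of a polynomial fit to the histogram."""
    x = np.arange(256)
    fit = np.polyval(np.polyfit(x, hist, degree), x)
    valleys = [t for t in range(1, 255)
               if fit[t] < fit[t - 1] and fit[t] < fit[t + 1]]
    candidates = {c for v in valleys
                  for c in range(max(1, v - radius), min(255, v + radius))}
    candidates = candidates or set(range(1, 255))   # fallback: full search
    return max(candidates, key=lambda t: otsu_variance(hist, t))

# Hypothetical bimodal gray histogram
rng = np.random.default_rng(6)
pixels = np.concatenate([rng.normal(80, 10, 5000), rng.normal(180, 12, 5000)])
hist = np.histogram(pixels, bins=256, range=(0, 256))[0].astype(float)
print(fast_otsu(hist))   # threshold near the valley between the two modes
```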

]]>
<![CDATA[A quadratic trigonometric spline for curve modeling]]> https://www.researchpad.co/article/5c40f762d5eed0c48438600c

A curve modeling technique has been established with a view to its applications in various disciplines of science, engineering and design. It is a new spline method using piecewise quadratic trigonometric functions, and it possesses error bounds of order 3. The proposed curve model also possesses favorable geometric properties. The method achieves C2 smoothness and produces a Quadratic Trigonometric Spline (QTS) intended for applications in curve design and control. Having four control points in its piecewise description, it provides a C2 quadratic trigonometric alternative to the traditional cubic polynomial spline (CPS). A comparative analysis verifies the QTS as a better alternative to the CPS, and a timing analysis shows the QTS to be more computationally efficient than the CPS.
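
The paper's basis functions are not reproduced here; to give a flavor of the construction, this sketch evaluates a quadratic Bernstein blend under a trigonometric reparameterization s = sin²(πt/2), which keeps endpoint interpolation. It is an illustration of trigonometric blending in general, not the QTS basis itself:

```python
import numpy as np

def trig_quadratic_point(p0, p1, p2, t):
    """Quadratic Bernstein blend with trigonometric parameter
    s = sin^2(pi*t/2); interpolates p0 at t=0 and p2 at t=1."""
    s = np.sin(np.pi * t / 2) ** 2
    b0, b1, b2 = (1 - s) ** 2, 2 * s * (1 - s), s ** 2
    return b0 * np.asarray(p0) + b1 * np.asarray(p1) + b2 * np.asarray(p2)

# One piece of a control polygon
pts = [np.array([0.0, 0.0]), np.array([1.0, 2.0]), np.array([2.0, 0.0])]
curve = np.array([trig_quadratic_point(*pts, t) for t in np.linspace(0, 1, 50)])
print(curve[0], curve[-1])   # endpoints coincide with p0 and p2
```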

]]>
<![CDATA[Two-dimensional local Fourier image reconstruction via domain decomposition Fourier continuation method]]> https://www.researchpad.co/article/5c3fa5aed5eed0c484ca744f

The MRI image is obtained in the spatial domain from Fourier coefficients given in the frequency domain. Obtaining a high-resolution image is costly because it requires high-frequency Fourier data, while low-frequency Fourier data is less costly and is effective if the image is smooth. However, Gibbs ringing, if present, persists when only low-frequency Fourier data is used. We propose an efficient and accurate local reconstruction method using low-frequency Fourier data that yields a sharp image profile near local edges. The proposed method uses only a small amount of image data in the local area, so it is efficient. Furthermore, the method is accurate because it minimizes the global effects on the reconstruction near weak edges that appear in many other, global methods, for which all the image data is used for the reconstruction. To apply the Fourier method locally to local non-periodic data, the proposed method builds on the Fourier continuation method. This work is an extension of our previous 1D Fourier domain decomposition method to 2D Fourier data. The proposed method first divides the MRI image in the spatial domain into many subdomains and applies the Fourier continuation method to obtain a smooth periodic extension of the subdomain of interest. It then reconstructs the local image by L2 minimization regularized by the L1 norm of edge sparsity to sharpen the image near edges. Our numerical results suggest that the proposed method should be applied in a dimension-by-dimension manner rather than globally, for both reconstruction quality and computational efficiency. The numerical results show that the proposed method is effective when a local reconstruction is sought and that the solution is free of Gibbs oscillations.
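
To see the problem the method targets, the sketch below reconstructs a 1D step edge from low-frequency Fourier coefficients only; the overshoot near the edge is the Gibbs ringing that Fourier continuation and L1 edge regularization are designed to suppress. This is background illustration, not the proposed method:

```python
import numpy as np

n = 512
x = np.linspace(0, 1, n, endpoint=False)
signal = (x > 0.5).astype(float)              # sharp edge

coeffs = np.fft.fft(signal)
k_max = 16                                    # keep low frequencies only
coeffs[k_max + 1 : n - k_max] = 0.0
low_res = np.real(np.fft.ifft(coeffs))

print(signal.max(), round(low_res.max(), 3))  # overshoot > 1: Gibbs ringing
```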

]]>
<![CDATA[Implementation and assessment of the black body bias correction in quantitative neutron imaging]]> https://www.researchpad.co/article/5c390ba1d5eed0c48491d96e

In this paper we describe the experimental procedure, the data treatment and the quantification of the black body correction: an experimental approach to compensate for scattering and systematic biases in quantitative neutron imaging. The correction algorithm consists of two steps: estimation of the scattering component and correction using an enhanced normalization formula. The method incorporates correction terms into the image normalization procedure, which usually includes only open beam and dark current images (open beam correction). Our aim is to show its efficiency and reproducibility: we detail the data treatment procedures and quantitatively investigate the effect of the correction. The implementation is included in the open source CT reconstruction software MuhRec. The performance of the proposed algorithm is demonstrated using simulated and experimental CT datasets acquired at the ICON and NEUTRA beamlines at the Paul Scherrer Institut.
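
In schematic form, the enhanced normalization adds a scatter estimate to the usual open-beam/dark-current formula. A plausible pixel-wise reading (array names and the exact form are assumptions, not the paper's expression):

```python
import numpy as np

def normalize(sample, open_beam, dark, s_sample, s_open):
    """Scatter-corrected transmission image.
    sample/open_beam/dark: raw projections; s_*: estimated scatter
    components, e.g. interpolated from black-body dot measurements."""
    num = sample - dark - s_sample
    den = open_beam - dark - s_open
    return np.clip(num, 0, None) / np.clip(den, 1e-6, None)

# Hypothetical flat-field example
img = normalize(np.full((4, 4), 900.0), np.full((4, 4), 1200.0),
                np.full((4, 4), 100.0), np.full((4, 4), 50.0),
                np.full((4, 4), 60.0))
print(img[0, 0])   # (900-100-50)/(1200-100-60) = 0.721...
```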

]]>
<![CDATA[Clostridium butyricum population balance model: Predicting dynamic metabolic flux distributions using an objective function related to extracellular glycerol content]]> https://www.researchpad.co/article/5c25450ad5eed0c48442bd72

Background

Extensive experimentation has been conducted to increase 1,3-propanediol (PDO) production using Clostridium butyricum cultures in glycerol, but computational predictions are limited. Previously, we reconstructed the genome-scale metabolic (GSM) model iCbu641, the first such model of a PDO-producing Clostridium strain, which was validated at steady state using flux balance analysis (FBA). However, the predictive ability of FBA is limited for batch and fed-batch cultures, which are the most commonly employed industrial processes.

Results

We used the iCbu641 GSM model to develop a dynamic flux balance analysis (DFBA) approach to predict the PDO production of the Colombian strain Clostridium sp. IBUN 158B. First, we compared the predictions of the dynamic optimization approach (DOA), the static optimization approach (SOA), and the direct approach (DA). We found no differences between the approaches, but the DOA simulation took nearly 5000 times as long as the SOA and DA simulations. Experimental results under glycerol limitation and glycerol excess allowed us to validate the dynamic predictions of growth, glycerol consumption, and PDO formation. These results indicated a 4.4% error in PDO prediction and therefore validated the previously proposed objective functions. We performed two global sensitivity analyses, finding that the kinetic input parameters of the glycerol uptake flux had the most significant effect on PDO predictions. The other input parameters evaluated during the global sensitivity analysis were the biomass composition (precursors and macromolecules), death constants, and the kinetic parameters of the acetic acid secretion flux. These last input parameters, all obtained from other Clostridium butyricum cultures, were used to develop a population balance model (PBM). Finally, we simulated fed-batch cultures, predicting a final PDO production near 66 g/L, almost three times the PDO predicted in the best batch culture.
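
For readers new to DFBA, the static optimization approach alternates an FBA solve with an Euler update of the extracellular state. The toy below uses a two-flux LP in place of the iCbu641 model; the yields, kinetics, and units are invented for illustration:

```python
from scipy.optimize import linprog

def fba(v_gly):
    """Toy stand-in for the genome-scale FBA solve: maximize growth flux
    subject to a glycerol carbon balance and redox coupling to PDO."""
    # variables [v_bio, v_pdo]:
    #   0.12*v_bio + 0.9*v_pdo <= v_gly   (carbon available from glycerol)
    #   1.5*v_bio  - v_pdo     <= 0       (PDO formation regenerates NAD+)
    res = linprog(c=[-1.0, 0.0],
                  A_ub=[[0.12, 0.9], [1.5, -1.0]], b_ub=[v_gly, 0.0],
                  bounds=[(0, None), (0, None)])
    return res.x

# Static optimization approach: alternate the FBA solve with an Euler step
dt, X, G, P = 0.05, 0.1, 50.0, 0.0        # h, gDW/L, g/L, g/L (hypothetical)
for _ in range(int(60 / dt)):
    if G <= 0:
        break
    v_gly = 0.3 * G / (G + 2.0)           # Michaelis-Menten uptake, g/gDW/h
    v_gly = min(v_gly, G / (X * dt))      # do not overdraw the glycerol pool
    v_bio, v_pdo = fba(v_gly)
    X += v_bio * X * dt                   # biomass growth
    G -= v_gly * X * dt                   # substrate consumption
    P += 0.9 * v_pdo * X * dt             # PDO accumulation
print(round(X, 2), round(G, 2), round(P, 1))
```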

Conclusions

We developed and validated a dynamic approach to predict PDO production using the iCbu641 GSM model and the previously proposed objective functions. This validated approach was used to propose a population balance model and then to predict increased PDO production in fed-batch cultures. This dynamic model could therefore be used to evaluate different scenarios, including integration with downstream processes to assess techno-economic feasibility, reducing the time and costs associated with experimentation.

]]>
<![CDATA[Early-life environmental exposures and childhood growth: A comparison of statistical methods]]> https://www.researchpad.co/article/5c215137d5eed0c4843f9318

A growing literature suggests that environmental exposure during key developmental periods could have harmful impacts on human growth and development. Understanding and estimating the relationship between early-life exposure and human growth is vital to studying the adverse health impacts of environmental exposure. We compare two statistical tools, mixed-effects models with interaction terms and growth mixture models, used to measure the association between exposure and change over time in the context of non-linear growth and non-monotonic relationships between exposure and growth. We illustrate their strengths and weaknesses through a real data example and a simulation study. The data example, which focuses on the relationship between phthalates and the body mass index growth of children, indicates that the conclusions from the two models can differ. The simulation study provides a broader understanding of the robustness of these models in detecting the relationships between exposure and growth that could be observed. Data-driven growth mixture models are more robust to non-monotonic growth and stochastic relationships, but at the expense of interpretability. We offer concrete modeling strategies for estimating complex relationships with growth patterns.
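
The first of the two tools, a mixed-effects growth model with an exposure-by-age interaction, might be specified as follows with statsmodels. The variable names and simulated data are illustrative assumptions about the setup, not the paper's exact model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data: repeated BMI measures per child
rng = np.random.default_rng(3)
n, t = 200, 6
df = pd.DataFrame({
    "id": np.repeat(np.arange(n), t),
    "age": np.tile(np.arange(t), n).astype(float),
    "exposure": np.repeat(rng.normal(size=n), t),   # e.g. log phthalate level
})
df["bmi"] = (15 + 0.4 * df.age + 0.1 * df.exposure * df.age
             + rng.normal(0, 0.5, size=n * t))

# Random intercept per child; exposure modifies the age slope
model = smf.mixedlm("bmi ~ age * exposure", df, groups=df["id"]).fit()
print(model.params["age:exposure"])   # estimated interaction effect
```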

]]>
<![CDATA[Patterns of brown bear damages on apiaries and management recommendations in the Cantabrian Mountains, Spain]]> https://www.researchpad.co/article/5c0841c1d5eed0c484fcaa7a

Large carnivores are often persecuted due to conflict with human activities, making their conservation in human-modified landscapes very challenging. Conflict-related scenarios are increasing worldwide, due to the expansion of human activities or the recovery of carnivore populations. In general, brown bears Ursus arctos avoid humans and their settlements, but they may use some areas close to people or human infrastructure. Bear damage in human-modified landscapes may be related to the availability of food resources of human origin, such as beehives. However, the association of damage events with factors that may predispose bears to cause damage has rarely been investigated. We investigated bear damage to apiaries in the Cantabrian Mountains (Spain), an area with a relatively high density of bears. Our analyses included spatial, temporal and environmental factors and damage prevention measures as factors that may influence the occurrence and intensity of damage. In 2006–2008, we located 61 damaged apiaries, comprising 435 damaged beehives, in the study area (346 km2). The probability of an apiary being attacked was positively related to both the intensity of the damage suffered the year before and the distance to the nearest damaged apiary, and negatively related to the number of prevention measures employed as well as the intensity of the damage suffered by the nearest damaged apiary. The intensity of damage to apiaries was positively related to the size of the apiary and to vegetation cover in the surroundings, and negatively related to the number of human settlements. Minimizing bear damage to apiaries seems feasible by applying and maintaining proper prevention measures, especially before an attack occurs, and by selecting appropriate locations for beehives (e.g. away from forest areas). This applies to areas currently occupied by bears and to neighbouring areas where dispersing individuals may expand their range.

]]>
<![CDATA[A searchable personal health records framework with fine-grained access control in cloud-fog computing]]> https://www.researchpad.co/article/5c2400a1d5eed0c484098466

Fog computing extends cloud computing to the edge of the network so as to reduce latency and network congestion. However, existing encryption schemes are rarely suited to the fog environment, resulting in high computational and storage overhead. To meet terminal devices' demand for local information and address the shortcomings of the cloud computing framework in supporting mobile applications, we propose, taking a hospital scenario as an example, a searchable personal health records framework with fine-grained access control in cloud-fog computing. The proposed framework combines attribute-based encryption (ABE) and searchable encryption (SE) to implement keyword search and fine-grained access control. When the keyword index and trapdoor match, the cloud service provider returns only the relevant search results to the user, thus achieving a more accurate search. At the same time, the scheme is multi-authority, and the key leakage problem is solved by dividing the task of distributing user secret keys. Moreover, in the proposed scheme, we securely outsource part of the encryption and decryption operations to the fog node, which is effective both for local resources and for resource-constrained mobile devices. Our scheme is proven secure under the decisional q-parallel bilinear Diffie-Hellman exponent (q-DBDHE) assumption and the decisional bilinear Diffie-Hellman (DBDH) assumption. Simulation experiments show that our scheme is efficient in the cloud-fog environment.
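
The keyword-search half of the scheme can be pictured with a toy symmetric searchable index, with HMAC tags standing in for the pairing-based index/trapdoor match. This illustrates only the matching flow, not the ABE construction or its security properties:

```python
import hmac, hashlib

KEY = b"shared-secret"   # in the real scheme, keys come from the authorities

def index_entry(keyword):
    """Server-side searchable tag for a document keyword."""
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).digest()

def trapdoor(keyword):
    """User-generated search token; matches iff the keywords are equal."""
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).digest()

records = {"rec1": {index_entry("diabetes"), index_entry("insulin")},
           "rec2": {index_entry("fracture")}}
td = trapdoor("diabetes")
print([rid for rid, tags in records.items() if td in tags])   # ['rec1']
```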

]]>
<![CDATA[Introducing chaos behavior to kernel relevance vector machine (RVM) for four-class EEG classification]]> https://www.researchpad.co/article/5b4a0345463d7e3e7a97116d

This paper proposes a chaos kernel function for the relevance vector machine (RVM) in EEG signal classification, an important component of Brain-Computer Interfaces (BCIs). The novel kernel function is derived from a chaotic system, inspired by the fact that human brain signals exhibit chaotic characteristics and behaviors. By introducing chaotic dynamics into the kernel function, the RVM gains a higher classification capacity. The proposed method is validated within the framework of a one-versus-one common spatial pattern (OVO-CSP) classifier to classify motor imagery (MI) of four movements in a publicly accessible dataset. To illustrate the performance of the proposed kernel function, Gaussian and polynomial kernel functions are considered for comparison. Experimental results show that the proposed kernel function achieved higher accuracy than the Gaussian and polynomial kernel functions, which indicates that accounting for chaotic behavior is helpful in EEG signal classification.
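
One way to picture a "chaos kernel" (a hypothetical construction in the spirit of the paper, not its actual function): drive each feature vector through a few logistic-map iterations and take a Gaussian kernel in the iterated space. The resulting Gram matrix can be used wherever a precomputed kernel is accepted:

```python
import numpy as np

def logistic_iterates(x, r=3.9, n_iter=5):
    """Drive features through the chaotic logistic map x <- r*x*(1-x)."""
    z = (x - x.min()) / (np.ptp(x) + 1e-12)   # squash into (0, 1)
    for _ in range(n_iter):
        z = r * z * (1 - z)
    return z

def chaos_kernel(A, B, gamma=1.0):
    """Gram matrix: Gaussian kernel on logistic-map-transformed inputs."""
    A2 = np.apply_along_axis(logistic_iterates, 1, A)
    B2 = np.apply_along_axis(logistic_iterates, 1, B)
    d2 = ((A2[:, None, :] - B2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Usable as a callable/precomputed kernel, e.g. sklearn's SVC(kernel=chaos_kernel)
X = np.random.default_rng(4).normal(size=(10, 8))
K = chaos_kernel(X, X)
print(K.shape, np.allclose(K, K.T))
```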

]]>
<![CDATA[Enzyme sequestration by the substrate: An analysis in the deterministic and stochastic domains]]> https://www.researchpad.co/article/5b07d0f1463d7e0d4a37a6f4

This paper is concerned with the potential multistability of protein concentrations in the cell: that is, situations where one protein, or a family of proteins, may sit at one of two or more different steady-state concentrations in otherwise identical cells, despite their being in the same environment. For models of multisite protein phosphorylation, for example, it has been shown that in the presence of excess substrate the achievable number of stable steady states can increase linearly with the number of phosphosites available. In this paper, we analyse the consequences of adding enzyme docking to these and similar models, with the resultant sequestration of phosphatase and kinase by the fully unphosphorylated and fully phosphorylated substrates, respectively. In the large-molecule-numbers limit, where deterministic analysis is applicable, we prove that there are always values for these rates of sequestration which, when exceeded, limit the extent of multistability. For the models considered here, these values are much smaller than the affinity of the enzymes to the substrate when it is in a modifiable state. As substrate-enzyme sequestration is increased, we further prove that the number of steady states is inevitably reduced to one. For smaller molecule numbers a stochastic analysis is more appropriate, where multistability in the large-molecule-numbers limit can manifest itself as multimodality of the probability distribution, the system spending periods of time in the vicinity of one mode before jumping to another. Here, we find that substrate-enzyme sequestration can induce bimodality even in systems where only a single steady state can exist at large numbers. To facilitate this analysis, we develop a weakly chained diagonally dominant M-matrix formulation of the Chemical Master Equation, allowing greater insight into the way particular mechanisms, like enzyme sequestration, can shape probability distributions and exhibit different behaviour across different regimes.
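
As a small-scale illustration of a bimodal Chemical Master Equation stationary distribution: for a one-dimensional birth-death chain, the stationary solution follows from detailed balance, and Schlögl-type rates (a standard bistable toy, used here as a hypothetical stand-in for the enzyme-sequestration system) make it bimodal:

```python
import numpy as np

# Schlogl-model propensities (hypothetical parameter choice)
k1, k2, k3, k4 = 0.15, 1.5e-3, 20.0, 3.5
birth = lambda n: k1 * n * (n - 1) / 2 + k3
death = lambda n: k2 * n * (n - 1) * (n - 2) / 6 + k4 * n

N = 400                                 # truncated copy-number space
p = np.zeros(N + 1)
p[0] = 1.0
for n in range(N):                      # detailed balance on a chain:
    p[n + 1] = p[n] * birth(n) / death(n + 1)   # p(n+1)/p(n) = b(n)/d(n+1)
p /= p.sum()

modes = [n for n in range(1, N) if p[n] > p[n - 1] and p[n] > p[n + 1]]
print(modes)    # two local maxima: a bimodal stationary distribution
```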

]]>
<![CDATA[A Prediction Model of the Capillary Pressure J-Function]]> https://www.researchpad.co/article/5989dab7ab0ee8fa60bad23d

The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived from a capillary bundle model; however, the dependence of the J-function on the saturation Sw is not well understood. A prediction model is presented based on a capillary pressure model, and the resulting J-function prediction model is a power function rather than an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and more representative results.
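
The Leverett J-function and a power-law fit of the kind described can be sketched as follows. The core data, constants, and the negative exponent are invented for illustration, and curve_fit stands in for whatever regression the authors used:

```python
import numpy as np
from scipy.optimize import curve_fit

def leverett_j(pc, k, phi, sigma=0.072, theta=0.0):
    """Leverett J(Sw) = Pc * sqrt(k/phi) / (sigma * cos(theta)), dimensionless."""
    return pc * np.sqrt(k / phi) / (sigma * np.cos(theta))

power_model = lambda sw, a, b: a * sw ** (-b)    # power-law form of J(Sw)

# Hypothetical core data: capillary pressure (Pa) at each water saturation
sw = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
pc = np.array([9.0, 5.1, 3.4, 2.5, 1.9, 1.5, 1.3]) * 1e3
j = leverett_j(pc, k=1e-13, phi=0.2)             # k in m^2, phi as a fraction

(a, b), _ = curve_fit(power_model, sw, j, p0=[1.0, 1.0])
print(round(a, 4), round(b, 3))                  # fitted J(Sw) = a * Sw^-b
```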

]]>
<![CDATA[Increasing the Depth of Current Understanding: Sensitivity Testing of Deep-Sea Larval Dispersal Models for Ecologists]]> https://www.researchpad.co/article/5989d9e8ab0ee8fa60b6bfaa

Larval dispersal is an important ecological process of great interest to conservation and the establishment of marine protected areas. Increasing numbers of studies are turning to biophysical models to simulate dispersal patterns, including in the deep sea, but for many ecologists unassisted by a physical oceanographer, a model can present as a black box. Sensitivity testing offers a means to test a model's abilities and limitations and is a starting point for all modelling efforts. The aim of this study is to illustrate a sensitivity testing process for the unassisted ecologist through a deep-sea case study, and to demonstrate how sensitivity testing can be used to determine optimal model settings, assess model adequacy, and inform ecological interpretation of model outputs. Five input parameters are tested (timestep of the particle simulator (TS), horizontal (HS) and vertical (VS) separation of release points, release frequency (RF), and temporal range (TR) of simulations) using a commonly employed pairing of models; the procedures used are relevant to all marine larval dispersal models. We show how the results of these tests can inform the future set-up and interpretation of ecological studies in this area. For example, an optimal arrangement of release locations spanning a release area could be deduced; the increased depth range spanned in deep-sea studies may necessitate stratifying dispersal simulations with different numbers of release locations at different depths; no fewer than 52 releases per year should be used unless a different number is biologically informed; three years of simulations chosen on the basis of climatic extremes may provide results with 90% similarity to five years of simulation; and this model setup is not appropriate for simulating rare dispersal events. A step-by-step process summarising advice on the sensitivity testing procedure is provided to inform all future unassisted ecologists looking to run a larval dispersal simulation.
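
In outline, each sensitivity test is the same loop: vary one input parameter, rerun the dispersal simulation, and score the similarity of the output against a reference run. A generic skeleton, in which the simulator call is a random stand-in and the density-overlap score is a placeholder for whatever similarity metric the study pairing uses:

```python
import numpy as np

def run_dispersal(spacing_m, releases_per_year, years, seed=0):
    """Placeholder for the particle-tracking run: returns particle
    settlement positions (random stand-in data, not a real model)."""
    rng = np.random.default_rng(seed + releases_per_year + years)
    n = releases_per_year * years * 10
    return rng.normal(scale=spacing_m, size=(n, 2))

def similarity(a, b, bins=20, extent=5000):
    """Overlap of two settlement-density histograms (1.0 = identical)."""
    box = [[-extent, extent], [-extent, extent]]
    ha, _, _ = np.histogram2d(a[:, 0], a[:, 1], bins=bins, range=box)
    hb, _, _ = np.histogram2d(b[:, 0], b[:, 1], bins=bins, range=box)
    return 1 - 0.5 * np.abs(ha / ha.sum() - hb / hb.sum()).sum()

reference = run_dispersal(1000, 52, 5)
for rf in (12, 26, 52, 104):          # sweep release frequency (RF)
    score = similarity(reference, run_dispersal(1000, rf, 5))
    print(rf, round(score, 3))
```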

]]>
<![CDATA[Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models]]> https://www.researchpad.co/article/5989daadab0ee8fa60baa1c0

This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in the control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system in which membership functions act as the kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Because membership functions act as interpolation kernels, the choice of membership function determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the problem of modeling static nonlinearities, since it is capable of modeling both a function and its inverse. Three case studies from the literature are presented: a single-input single-output (SISO) system, a MISO system, and a MIMO system. Good results are obtained with regard to performance metrics such as set-point tracking, control variation and robustness. The results demonstrate the applicability of the proposed method to modeling Hammerstein nonlinearities and their inverse functions for the implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC).
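
The core idea of membership functions acting as interpolation kernels can be shown in one dimension: with triangular memberships on a uniform grid, the Takagi-Sugeno weighted average reduces to linear interpolation, and swapping in another membership shape changes the interpolation character. A sketch with an assumed kernel shape, not the paper's construction:

```python
import numpy as np

def triangular(u, center, width):
    """Triangular membership function on a uniform grid."""
    return np.clip(1 - np.abs(u - center) / width, 0, 1)

def flhi_1d(grid_x, grid_y, x, membership=triangular):
    """Takagi-Sugeno style interpolation: membership-weighted average
    of the grid values (weights sum to 1 on a uniform grid)."""
    width = grid_x[1] - grid_x[0]
    w = np.array([membership(x, c, width) for c in grid_x])
    return (w * grid_y).sum() / w.sum()

xs = np.linspace(0, 1, 5)
ys = xs ** 2                       # sampled static nonlinearity
print(flhi_1d(xs, ys, 0.3))        # linear interp between grid points: 0.1
```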

]]>