ResearchPad - kernel-methods https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks
<![CDATA[Association test using Copy Number Profile Curves (CONCUR) enhances power in rare copy number variant analysis]]> https://www.researchpad.co/article/elastic_article_14642

Copy number variants (CNVs) comprise a large proportion of variation in human genomes. Large rare CNVs, especially those disrupting genes or changing the dosages of genes, can carry relatively strong risks for neurodevelopmental and neuropsychiatric disorders. Kernel-based association methods have been developed for the analysis of rare CNVs and have proven to be a valuable tool. Kernel methods model the collective effect of rare CNVs using flexible kernel functions that capture the characteristics of CNVs and measure the CNV similarity of individual pairs. Typically, kernels are created by summarizing similarity within an artificially defined “CNV locus” and then collapsing across all loci. In this work, we propose a new kernel-based test, CONCUR, which is based on the CNV location information contained in standard processing of the variants and obviates the need for arbitrarily defined CNV loci. CONCUR quantifies similarity between individual pairs as the common area under their copy number profile curves and is designed to detect CNV dosage, length and dosage-length interaction effects. In simulation studies and real data analysis, we demonstrate the ability of the CONCUR test to detect CNV effects under diverse CNV architectures with power and robustness over existing methods.
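As a rough illustration of the shared-area idea (not the published CONCUR implementation), the sketch below represents each individual's copy-number profile as a piecewise-constant curve over genomic positions and takes the pairwise kernel value to be the area lying under both curves, computed as the integral of the pointwise minimum. The segment encoding, the diploid baseline of 2, and the example CNVs are simplifying assumptions.

```python
import numpy as np

def profile_curve(segments, grid):
    """Piecewise-constant copy-number profile evaluated on a position grid.

    segments: list of (start, end, copy_number) tuples for one individual;
    positions not covered by any segment default to the diploid state (2).
    """
    curve = np.full(grid.shape, 2.0)
    for start, end, cn in segments:
        curve[(grid >= start) & (grid < end)] = cn
    return curve

def concur_like_kernel(seg_i, seg_j, grid):
    """Toy shared-area similarity: area lying under both profile curves.

    For two non-negative step functions this is the integral of their
    pointwise minimum (trapezoid rule on the grid).
    """
    ci = profile_curve(seg_i, grid)
    cj = profile_curve(seg_j, grid)
    return np.trapz(np.minimum(ci, cj), grid)

# Hypothetical example: two individuals with CNVs on a 1 Mb region.
grid = np.linspace(0, 1_000_000, 10_001)
ind_a = [(100_000, 300_000, 3)]                          # duplication
ind_b = [(250_000, 400_000, 3), (600_000, 650_000, 1)]   # duplication + deletion
print(concur_like_kernel(ind_a, ind_b, grid))
```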

]]>
<![CDATA[A new method for modelling biological invasions from early spread data accounting for anthropogenic dispersal]]> https://www.researchpad.co/article/5c06f05bd5eed0c484c6d7d6

Biological invasions are one of the major causes of biodiversity loss worldwide. Although human-aided (anthropogenic) dispersal is the key element in the spread of invasive species, no framework published so far accounts for its peculiar characteristics, such as very rapid dispersal and independence from the existing species distribution. We present a new method for modelling biological invasions using historical spatio-temporal records. This method first discriminates between data points of anthropogenic origin and those originating from natural dispersal, and then estimates the natural dispersal kernel. We use the expectation-maximisation algorithm for the first step; we then use Ripley’s K-function as a spatial similarity metric to estimate the dispersal kernel. This is done while accounting for habitat suitability and providing estimates of the inference precision. Tests on simulated data show good accuracy and precision for this method, even in the presence of challenging, but realistic, limitations in the invasion time series, such as gaps in the survey times and small numbers of records. We also provide a real-world application of our method to the case of Litoria frogs in New Zealand. This method is widely applicable across the fields of biological invasions, epidemics and climate-change-induced range shifts, and provides a valuable contribution to the management of such issues. Functions to implement this methodology are made available as the R package Biolinv (https://cran.r-project.org/package=Biolinv).
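For the spatial-similarity step, Ripley's K-function has a simple empirical estimator. The snippet below is a minimal, uncorrected version for points in a rectangular window (no edge correction, which the Biolinv package may well handle differently); it is an illustrative sketch, not the package's code.

```python
import numpy as np
from scipy.spatial.distance import pdist

def ripley_k(points, radii, area):
    """Naive Ripley's K estimate (no edge correction).

    K(r) = area / (n * (n - 1)) * number of ordered pairs at distance <= r.
    """
    n = len(points)
    d = pdist(points)                               # unordered pairwise distances
    counts = np.array([(d <= r).sum() for r in radii])
    return area * 2.0 * counts / (n * (n - 1))      # x2 to count ordered pairs

# Hypothetical invasion records in a 10 km x 10 km window.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(200, 2))
radii = np.linspace(0.1, 3.0, 30)
k_hat = ripley_k(pts, radii, area=100.0)
# Under complete spatial randomness, K(r) is approximately pi * r**2.
```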

]]>
<![CDATA[Introducing chaos behavior to kernel relevance vector machine (RVM) for four-class EEG classification]]> https://www.researchpad.co/article/5b4a0345463d7e3e7a97116d

This paper presents a chaos-based kernel function for the relevance vector machine (RVM) in EEG signal classification, an important component of brain-computer interfaces (BCIs). The novel kernel function is derived from a chaotic system, inspired by the fact that human brain signals exhibit chaotic characteristics and behaviors. By introducing chaotic dynamics into the kernel function, the RVM gains higher classification capacity. The proposed method is validated within the framework of a one-versus-one common spatial pattern (OVO-CSP) classifier to classify motor imagery (MI) of four movements in a publicly accessible dataset. To illustrate the performance of the proposed kernel function, Gaussian and polynomial kernel functions are considered for comparison. Experimental results show that the proposed kernel function achieved higher accuracy than the Gaussian and polynomial kernel functions, indicating that accounting for chaotic behavior is helpful in EEG signal classification.
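The abstract does not give the exact form of the chaos kernel, so the snippet below is only a schematic of the general idea: pass each feature vector through a few iterations of a chaotic map (here the logistic map, a placeholder choice) and apply a Gaussian kernel to the transformed features. Because it is a Gaussian kernel on a deterministic feature map, it remains a valid positive-definite kernel; the paper's actual construction may differ substantially.

```python
import numpy as np

def logistic_map_features(x, r=3.9, n_iter=5):
    """Iterate the logistic map on features rescaled into (0, 1). Placeholder map."""
    z = (x - x.min()) / (np.ptp(x) + 1e-12) * 0.98 + 0.01
    for _ in range(n_iter):
        z = r * z * (1.0 - z)
    return z

def chaos_kernel(X, Y, gamma=1.0, r=3.9, n_iter=5):
    """Gaussian kernel evaluated on chaotically transformed features."""
    PX = np.array([logistic_map_features(x, r, n_iter) for x in X])
    PY = np.array([logistic_map_features(y, r, n_iter) for y in Y])
    sq = ((PX[:, None, :] - PY[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Toy usage on random feature vectors standing in for CSP features.
rng = np.random.default_rng(0)
K = chaos_kernel(rng.normal(size=(5, 8)), rng.normal(size=(5, 8)))
```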

]]>
<![CDATA[Nonmechanistic forecasts of seasonal influenza with iterative one-week-ahead distributions]]> https://www.researchpad.co/article/5b30b251463d7e0b2827fd22

Accurate and reliable forecasts of seasonal epidemics of infectious disease can assist in the design of countermeasures and increase public awareness and preparedness. This article describes two main contributions we made recently toward this goal: a novel approach to probabilistic modeling of surveillance time series based on “delta densities”, and an optimization scheme for combining output from multiple forecasting methods into an adaptively weighted ensemble. Delta densities describe the probability distribution of the change between one observation and the next, conditioned on available data; chaining together nonparametric estimates of these distributions yields a model for an entire trajectory. Corresponding distributional forecasts cover more observed events than alternatives that treat the whole season as a unit, and improve upon multiple evaluation metrics when extracting key targets of interest to public health officials. Adaptively weighted ensembles integrate the results of multiple forecasting methods, such as delta density, using weights that can change from situation to situation. We treat selection of optimal weightings across forecasting methods as a separate estimation task, and describe an estimation procedure based on optimizing cross-validation performance. We consider some details of the data generation process, including data revisions and holiday effects, both in the construction of these forecasting methods and when performing retrospective evaluation. The delta density method and an adaptively weighted ensemble of other forecasting methods each improve significantly on the next best ensemble component when applied separately, and achieve even better cross-validated performance when used in conjunction. We submitted real-time forecasts based on these contributions as part of CDC’s 2015/2016 FluSight Collaborative Comparison. Among the fourteen submissions that season, this system was ranked by CDC as the most accurate.
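As a toy illustration of the delta-density idea (not the authors' implementation, which conditions on available data and handles revisions, holidays and ensembling), one can estimate the distribution of one-week-ahead changes nonparametrically and chain samples from it into whole trajectories:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Hypothetical historical weekly surveillance series (several past seasons).
history = np.abs(np.cumsum(rng.normal(0, 0.3, size=300))) + 1.0
deltas = np.diff(history)                       # week-to-week changes

delta_density = gaussian_kde(deltas)            # nonparametric (unconditional) delta density

def simulate_trajectories(last_value, horizon, n_sim=1000):
    """Chain sampled one-week-ahead deltas into full forecast trajectories."""
    steps = delta_density.resample(horizon * n_sim, seed=rng).reshape(n_sim, horizon)
    return last_value + np.cumsum(steps, axis=1)

trajs = simulate_trajectories(history[-1], horizon=4)
point_forecast = trajs.mean(axis=0)                 # point forecast per week ahead
intervals = np.percentile(trajs, [5, 95], axis=0)   # 90% prediction intervals
```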

]]>
<![CDATA[Instance-based generalization for human judgments about uncertainty]]> https://www.researchpad.co/article/5b28b5e2463d7e1340e24748

While previous studies have shown that human behavior adjusts in response to uncertainty, it is still not well understood how uncertainty is estimated and represented. As probability distributions are high-dimensional objects, only constrained families of distributions with a small number of parameters can be specified from finite data. However, it is unknown which structural assumptions the brain uses to estimate them. We introduce a novel paradigm that requires human participants of either sex to explicitly estimate the dispersion of a distribution over future observations. Judgments are based on a very small sample from a centered, normally distributed random variable that was suggested by the framing of the task. This probability density estimation task could optimally be solved by inferring the dispersion parameter of a normal distribution. We find that although behavior closely tracks uncertainty on a trial-by-trial basis and resists explanation by simple heuristics, it is hardly consistent with parametric inference of a normal distribution. Despite the transparency of the simple generating process, participants estimate a distribution biased towards the observed instances while still strongly generalizing beyond the sample. The inferred internal distributions can be well approximated by a nonparametric mixture of spatially extended basis distributions. Thus, our results suggest that fluctuations have an excessive effect on human uncertainty judgments because of representations that adapt overly flexibly to the sample. This may be more useful under more general conditions, in structurally uncertain environments.
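The "nonparametric mixture of spatially extended basis distributions" can be read as a kernel-density-style mixture built from the few observed instances. A minimal sketch is below; the basis width is an arbitrary value here, whereas in the study it would be a fitted quantity.

```python
import numpy as np
from scipy.stats import norm

def instance_based_density(sample, width, grid):
    """Equal-weight mixture of Gaussian basis functions centred on the observed instances."""
    comps = norm.pdf(grid[:, None], loc=sample[None, :], scale=width)
    return comps.mean(axis=1)

# Hypothetical trial: five draws from a centred normal variable.
sample = np.array([-0.8, -0.1, 0.2, 0.5, 1.1])
grid = np.linspace(-4, 4, 401)
density = instance_based_density(sample, width=0.6, grid=grid)
```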

]]>
<![CDATA[Semi-Supervised Feature Transformation for Tissue Image Classification]]> https://www.researchpad.co/article/5989da08ab0ee8fa60b76a30

Various systems have been proposed to support biological image analysis, with the intent of decreasing false annotations and reducing the heavy burden on biologists. These systems generally comprise a feature extraction method and a classification method. Task-oriented methods for feature extraction leverage images characteristic of each problem and are very effective at improving classification accuracy. However, it is difficult to use such feature extraction methods across versatile tasks in practice, because few biologists specialize in computer vision or pattern recognition and can design task-oriented methods themselves. Thus, to improve the usability of these supporting systems, it would be useful to develop a method that can automatically transform general-purpose image features into an effective form for the task of interest. In this paper, we propose a semi-supervised feature transformation method, which is formulated as a natural coupling of principal component analysis (PCA) and linear discriminant analysis (LDA) in the framework of graph embedding. Compared with other feature transformation methods, our method showed favorable classification performance in biological image analysis.
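One way to read such a PCA/LDA coupling (the paper casts it in the graph-embedding framework; the particular mixing weight and regularisation below are assumptions, not the authors' formulation) is as a generalized eigenproblem that blends the labelled between-/within-class scatter of LDA with the total scatter used by PCA on all data, labelled and unlabelled:

```python
import numpy as np
from scipy.linalg import eigh

def semi_supervised_transform(X_lab, y_lab, X_unlab, n_components=2, alpha=0.5):
    """Toy PCA/LDA coupling: maximise alpha*between-class + (1-alpha)*total scatter
    relative to the (regularised) within-class scatter."""
    X_all = np.vstack([X_lab, X_unlab])
    mean_all = X_all.mean(axis=0)
    St = (X_all - mean_all).T @ (X_all - mean_all)            # total scatter (PCA part)

    Sb = np.zeros_like(St)
    Sw = np.zeros_like(St)
    for c in np.unique(y_lab):
        Xc = X_lab[y_lab == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)  # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)                            # within-class scatter

    A = alpha * Sb + (1.0 - alpha) * St
    B = Sw + 1e-6 * np.eye(St.shape[0])                       # regularised denominator
    vals, vecs = eigh(A, B)                                   # generalized eigenproblem
    return vecs[:, np.argsort(vals)[::-1][:n_components]]     # top projection directions
```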

]]>
<![CDATA[Coronary Stent Artifact Reduction with an Edge-Enhancing Reconstruction Kernel – A Prospective Cross-Sectional Study with 256-Slice CT]]> https://www.researchpad.co/article/5989dac5ab0ee8fa60bb2272

Purpose

Metallic artifacts can result in an artificial thickening of the coronary stent wall which can significantly impair computed tomography (CT) imaging in patients with coronary stents. The objective of this study is to assess in vivo visualization of coronary stent wall and lumen with an edge-enhancing CT reconstruction kernel, as compared to a standard kernel.

Methods

This is a prospective cross-sectional study involving the assessment of 71 coronary stents (24 patients), with blinded observers. After 256-slice CT angiography, image reconstruction was done with medium-smooth and edge-enhancing kernels. Stent wall thickness was measured with both orthogonal and circumference methods, averaging thickness from diameter and circumference measurements, respectively. Image quality was assessed quantitatively using objective parameters (noise, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR)), as well as visually using a 5-point Likert scale.

Results

Stent wall thickness was decreased with the edge-enhancing kernel in comparison to the standard kernel, either with the orthogonal (0.97 ± 0.02 versus 1.09 ± 0.03 mm, respectively; p < 0.001) or the circumference method (1.13 ± 0.02 versus 1.21 ± 0.02 mm, respectively; p = 0.001). The edge-enhancing kernel generated less overestimation from nominal thickness compared to the standard kernel, both with the orthogonal (0.89 ± 0.19 versus 1.00 ± 0.26 mm, respectively; p < 0.001) and the circumference (1.06 ± 0.26 versus 1.13 ± 0.31 mm, respectively; p = 0.005) methods. The edge-enhancing kernel was associated with lower SNR and CNR, as well as higher background noise (all p < 0.001), in comparison to the medium-smooth kernel. Stent visual scores were higher with the edge-enhancing kernel (p < 0.001).

Conclusion

In vivo 256-slice CT assessment of coronary stents shows that the edge-enhancing CT reconstruction kernel generates thinner stent walls, less overestimation from nominal thickness, and better image quality scores than the standard kernel.

]]>
<![CDATA[Multi-level, multi-scale resource selection functions and resistance surfaces for conservation planning: Pumas as a case study]]> https://www.researchpad.co/article/5989db5eab0ee8fa60be0b83

The importance of examining multiple hierarchical levels when modeling resource use for wildlife has been acknowledged for decades. Multi-level resource selection functions have recently been promoted as a method to synthesize resource use across nested organizational levels into a single predictive surface. Analyzing multiple scales of selection within each hierarchical level further strengthens multi-level resource selection functions. We extend this multi-level, multi-scale framework to modeling resistance for wildlife by combining multi-scale resistance surfaces from two data types, genetic and movement. Resistance estimation has typically been conducted with one of these data types, or compared between the two. However, we contend it is not an either/or issue and that resistance may be better modeled using a combination of resistance surfaces that represent processes at different hierarchical levels. Resistance surfaces estimated from genetic data characterize temporally broad-scale dispersal and successful breeding over generations, whereas resistance surfaces estimated from movement data represent fine-scale travel and contextualized movement decisions. We used telemetry and genetic data from a long-term study on pumas (Puma concolor) in a highly developed landscape in southern California to develop a multi-level, multi-scale resource selection function and a multi-level, multi-scale resistance surface. We used these multi-level, multi-scale surfaces to identify resource-use patches and resistant kernel corridors. Across levels, we found that pumas avoided urban and agricultural areas and roads, and preferred riparian areas and more rugged terrain. For other landscape features, selection differed among levels, as did the scales of selection for each feature. With these results, we developed a conservation plan for one of the most isolated puma populations in the U.S. Our approach captured a wide spectrum of ecological relationships for a population, resulted in effective conservation planning, and can be readily applied to other wildlife species.

]]>
<![CDATA[Explaining Support Vector Machines: A Color Based Nomogram]]> https://www.researchpad.co/article/5989daa2ab0ee8fa60ba6257

Problem setting

Support vector machines (SVMs) are very popular tools for classification, regression and other problems. Thanks to the large choice of kernels they can be applied with, a wide variety of data can be analysed using these tools. Machine learning owes its popularity to the good performance of the resulting models. However, interpreting the models is far from obvious, especially when non-linear kernels are used. Hence, the methods are often used as black boxes. As a consequence, the use of SVMs is less supported in areas where interpretability is important and where people are held responsible for the decisions made by models.

Objective

In this work, we investigate whether SVMs using linear, polynomial and RBF kernels can be explained such that interpretations for model-based decisions can be provided. We further indicate when SVMs can be explained and in which situations interpretation of SVMs is (hitherto) not possible. Here, explainability is defined as the ability to produce the final decision based on a sum of contributions which depend on one single or at most two input variables.

Results

Our experiments on simulated and real-life data show that explainability of an SVM depends on the chosen parameter values (degree of polynomial kernel, width of RBF kernel and regularization constant). When several combinations of parameter values yield the same cross-validation performance, combinations with a lower polynomial degree or a larger kernel width have a higher chance of being explainable.

Conclusions

This work summarizes SVM classifiers obtained with linear, polynomial and RBF kernels in a single plot. Linear and polynomial kernels up to the second degree are represented exactly. For other kernels an indication of the reliability of the approximation is presented. The complete methodology is available as an R package and two apps and a movie are provided to illustrate the possibilities offered by the method.
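For the linear kernel the decomposition into single-variable contributions is exact: the decision value is f(x) = Σ_j w_j x_j + b, so each term w_j x_j can be drawn as one bar or colour band of the nomogram. A minimal scikit-learn sketch of that decomposition is below; the authors' method is an R package with its own colour mapping, so this is only an analogy, not their implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
svm = SVC(kernel="linear", C=1.0).fit(X, y)

w = svm.coef_.ravel()          # one weight per input variable
b = svm.intercept_[0]

x_new = X[0]
contributions = w * x_new      # per-variable contributions to the decision value
decision = contributions.sum() + b
assert np.isclose(decision, svm.decision_function(x_new[None, :])[0])
```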

]]>
<![CDATA[Remembered or Forgotten?—An EEG-Based Computational Prediction Approach]]> https://www.researchpad.co/article/5989d9f2ab0ee8fa60b6ef2e

Prediction of memory performance (remembered or forgotten) has various potential applications, not only for knowledge learning but also for disease diagnosis. Recently, subsequent memory effects (SMEs)—the statistical differences in electroencephalography (EEG) signals before or during learning between subsequently remembered and forgotten events—have been found. This finding indicates that EEG signals convey information relevant to memory performance. In this paper, based on SMEs, we propose a computational approach to predict the memory performance of an event from EEG signals. We devise a convolutional neural network for EEG, called ConvEEGNN, to predict subsequently remembered and forgotten events from EEG recorded during the memory process. With ConvEEGNN, prediction of memory performance is achieved by integrating two main stages: feature extraction and classification. To verify the proposed approach, we employ an auditory memory task to collect EEG signals from scalp electrodes. For ConvEEGNN, the average prediction accuracy was 72.07% using EEG data from the pre-stimulus and during-stimulus periods, outperforming other approaches. We observed that signals from the pre-stimulus period and those from the during-stimulus period made comparable contributions to memory performance. Furthermore, the connection weights of the ConvEEGNN network can reveal prominent channels, which are consistent with the distribution of SMEs reported previously.
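The abstract does not spell out the ConvEEGNN architecture, so the PyTorch sketch below is only a generic stand-in for the two stages it describes: temporal convolution for feature extraction followed by a classifier for the remembered/forgotten decision. Layer sizes, channel counts and the sampling assumptions are all hypothetical.

```python
import torch
import torch.nn as nn

class ToyEEGNet(nn.Module):
    """Generic EEG CNN: temporal convolutions + pooling, then a two-class head."""
    def __init__(self, n_channels=32, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),  # temporal filters
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.classifier = nn.Linear(32 * 8, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = ToyEEGNet()
dummy = torch.randn(4, 32, 512)            # 4 trials, 32 electrodes, 512 time points
logits = model(dummy)                      # shape (4, 2): remembered vs forgotten
```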

]]>
<![CDATA[PBMDA: A novel and effective path-based computational model for miRNA-disease association prediction]]> https://www.researchpad.co/article/5989db54ab0ee8fa60bdcf97

In recent years, an increasing number of studies have shown that microRNAs (miRNAs) play critical roles in many fundamental and important biological processes. However, as one class of pathogenic factors, miRNAs' contribution to the molecular mechanisms underlying human complex diseases is still not completely understood. Predicting potential miRNA-disease associations makes important contributions to understanding the pathogenesis of diseases, developing new drugs, and formulating individualized diagnosis and treatment for diverse human complex diseases. Instead of depending only on expensive and time-consuming biological experiments, computational prediction models are effective for predicting potential miRNA-disease associations, prioritizing candidate miRNAs for the investigated diseases, and selecting those miRNAs with higher association probabilities for further experimental validation. In this study, the Path-Based MiRNA-Disease Association (PBMDA) prediction model is proposed by integrating known human miRNA-disease associations, miRNA functional similarity, disease semantic similarity, and Gaussian interaction profile kernel similarity for miRNAs and diseases. The model constructs a heterogeneous graph consisting of three interlinked sub-graphs and adopts a depth-first search algorithm to infer potential miRNA-disease associations. As a result, PBMDA achieved reliable performance in the frameworks of both local and global LOOCV (AUCs of 0.8341 and 0.9169, respectively) and 5-fold cross-validation (average AUC of 0.9172). In case studies of three important human diseases, 88% (Esophageal Neoplasms), 88% (Kidney Neoplasms) and 90% (Colon Neoplasms) of the top-50 predicted miRNAs have been manually confirmed by previous experimental reports in the literature. The comparison between PBMDA and previous models in these case studies further demonstrates that PBMDA could serve as a powerful computational tool to accelerate the identification of disease-miRNA associations.
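Among the similarity sources PBMDA integrates, the Gaussian interaction profile (GIP) kernel has a standard closed form: each miRNA's interaction profile is its row of the known association matrix, and the kernel is a Gaussian of the profile distance with a bandwidth normalised by the mean squared profile norm. A small sketch is below (toy association matrix; the functional/semantic similarities and the path-based scoring are not shown).

```python
import numpy as np

def gip_kernel(assoc, gamma_prime=1.0):
    """Gaussian interaction profile kernel over the rows of a binary
    association matrix (e.g. miRNAs x diseases)."""
    norms_sq = (assoc ** 2).sum(axis=1)
    gamma = gamma_prime / norms_sq.mean()                 # bandwidth normalisation
    sq_dist = norms_sq[:, None] + norms_sq[None, :] - 2.0 * assoc @ assoc.T
    return np.exp(-gamma * np.maximum(sq_dist, 0.0))

# Toy example: 5 miRNAs, 4 diseases.
A = np.array([[1, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 1],
              [1, 1, 1, 0]], dtype=float)
K_mirna = gip_kernel(A)            # the disease-side kernel would use A.T instead
```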

]]>
<![CDATA[Inferring epidemiological parameters from phylogenies using regression-ABC: A comparative study]]> https://www.researchpad.co/article/5989db53ab0ee8fa60bdcdb5

Inferring epidemiological parameters such as the basic reproduction number (R0) from time-scaled phylogenies is a timely challenge. Most current approaches rely on likelihood functions, which raise specific issues that range from computing these functions to finding their maxima numerically. Here, we present a new regression-based Approximate Bayesian Computation (ABC) approach, which we base on a large variety of summary statistics intended to capture the information contained in the phylogeny and its corresponding lineage-through-time plot. The regression step involves the Least Absolute Shrinkage and Selection Operator (LASSO) method, which is a robust machine learning technique. It allows us to readily deal with the large number of summary statistics, while avoiding resorting to Markov Chain Monte Carlo (MCMC) techniques. To compare our approach to existing ones, we simulated target trees under a variety of epidemiological models and settings, and inferred parameters of interest using the same priors. We found that, for large phylogenies, the accuracy of our regression-ABC is comparable to that of likelihood-based approaches involving birth-death processes implemented in BEAST2. Our approach even outperformed these when inferring the host population size with a Susceptible-Infected-Removed epidemiological model. It also clearly outperformed a recent kernel-ABC approach when assuming a Susceptible-Infected epidemiological model with two host types. Lastly, by re-analyzing data from the early stages of the recent Ebola epidemic in Sierra Leone, we showed that regression-ABC provides more realistic estimates for the duration parameters (latency and infectiousness) than the likelihood-based method. Overall, ABC based on a large variety of summary statistics and a regression method able to perform variable selection and avoid overfitting is a promising approach to analyze large phylogenies.
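In its simplest form, regression-ABC can be sketched in a few lines: draw parameters from the prior, simulate data and reduce each simulation to summary statistics, regress parameters on the statistics (here with cross-validated LASSO, as in the paper's regression step), and apply the fitted regression to the observed statistics. The simulator and statistics below are toy placeholders, not phylogeny simulations, and the local-adjustment refinements of full regression-ABC are omitted.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)

def simulate_summaries(theta, n=200):
    """Toy stochastic simulator returning a vector of summary statistics."""
    y = rng.exponential(scale=1.0 / theta, size=n)
    return np.array([y.mean(), y.std(), np.median(y), np.percentile(y, 90)])

# 1) Draw parameters from the prior and simulate summary statistics.
prior_draws = rng.uniform(0.5, 5.0, size=2000)
S = np.array([simulate_summaries(t) for t in prior_draws])

# 2) Regression step: LASSO from summary statistics to the parameter.
reg = LassoCV(cv=5).fit(S, prior_draws)

# 3) Apply the fitted regression to the observed summary statistics.
observed = simulate_summaries(2.0)            # pretend these are the real data
theta_hat = reg.predict(observed[None, :])[0]
```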

]]>
<![CDATA[Phe362Tyr in AChE: A Major Factor Responsible for Azamethiphos Resistance in Lepeophtheirus salmonis in Norway]]> https://www.researchpad.co/article/5989da32ab0ee8fa60b85126

Organophosphates (OPs) are one of the major treatments used against the salmon louse (Lepeophtheirus salmonis) in Norwegian salmonid aquaculture. The use of OPs since the late 1970s has resulted in widespread resistance in the parasite. Recently, we reported a single mutation (Phe362Tyr) in acetylcholinesterase (AChE) as the major mechanism behind resistance towards OPs in the salmon louse. The present study was carried out to validate this mechanism at the field level. A total of 6658 salmon louse samples were collected from 56 different fish farms across the Norwegian coast, from Vest Agder in the south to Finnmark in the north. All samples were genotyped for the Phe362Tyr mutation using a TaqMan probe assay. A strong association was observed between areas with frequent use of the OP azamethiphos and the Phe362Tyr mutation. This was confirmed at 15 sites where results from independently conducted bioassays and genotyping of parasites correlated well. Furthermore, genotyping of surviving and moribund parasites from six bioassay experiments demonstrated a highly significant negative correlation between the frequency of resistance alleles and the probability of dying when exposed to azamethiphos in a bioassay. Based on these observations, we conclude that the Phe362Tyr mutation is a major factor responsible for OP resistance in the salmon louse on Norwegian fish farms.

]]>
<![CDATA[Turning Simulation into Estimation: Generalized Exchange Algorithms for Exponential Family Models]]> https://www.researchpad.co/article/5989d9f2ab0ee8fa60b6ed94

The Single Variable Exchange algorithm is based on a simple idea: any model that can be simulated can be estimated by producing draws from the posterior distribution. We build on this idea by framing the Exchange algorithm as a mixture of Metropolis transition kernels and proposing strategies that automatically select the more efficient transition kernels. In this manner we achieve significant improvements in the convergence rate and autocorrelation of the Markov chain without requiring anything more than the ability to simulate from the model. Our focus is on statistical models in the exponential family, and we use two simple models from educational measurement to illustrate the contribution.
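For a one-parameter exponential family f(y|θ) ∝ exp(θ·s(y)), the intractable normalising constants cancel in the exchange acceptance ratio, leaving only prior, proposal and sufficient-statistic terms. The sketch below shows one Single Variable Exchange update with a symmetric random-walk proposal and a flat prior, for a toy model where simulation is easy; the paper's mixture of transition kernels and its educational-measurement models are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy one-parameter exponential family: f(y | theta) ∝ exp(theta * sum(y)),
# which for theta < 0 is an i.i.d. Exponential(rate = -theta) sample.
def simulate(theta, n):
    return rng.exponential(scale=-1.0 / theta, size=n)

def suff_stat(y):
    return y.sum()

def exchange_update(theta, y, step=0.2, lo=-10.0, hi=-0.05):
    """One Single Variable Exchange step: symmetric random-walk proposal,
    flat prior on (lo, hi); the normalising constants cancel."""
    theta_prop = theta + rng.normal(0.0, step)
    if not (lo < theta_prop < hi):
        return theta                                    # prior density is zero
    y_aux = simulate(theta_prop, len(y))                # auxiliary data at the proposal
    log_ratio = (theta_prop - theta) * (suff_stat(y) - suff_stat(y_aux))
    return theta_prop if np.log(rng.uniform()) < log_ratio else theta

# Hypothetical observed data (true theta = -2) and a short chain.
y_obs = rng.exponential(scale=0.5, size=50)
theta, chain = -1.0, []
for _ in range(5000):
    theta = exchange_update(theta, y_obs)
    chain.append(theta)
# The posterior mean of theta should settle near -2.
```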

]]>
<![CDATA[Permissible Home Range Estimation (PHRE) in Restricted Habitats: A New Algorithm and an Evaluation for Sea Otters]]> https://www.researchpad.co/article/5989da41ab0ee8fa60b89e80

Parametric and nonparametric kernel methods dominate studies of animal home ranges and space use. Most existing methods are unable to incorporate information about the underlying physical environment, leading to poor performance in excluding areas that are not used. Using radio-telemetry data from sea otters, we developed and evaluated a new algorithm for estimating home ranges (hereafter Permissible Home Range Estimation, or “PHRE”) that reflects habitat suitability. We began by transforming sighting locations into relevant landscape features (for sea otters, coastal position and distance from shore). Then, we generated a bivariate kernel probability density function in landscape space and back-transformed this to geographic space in order to define a permissible home range. Compared to two commonly used home range estimation methods, kernel densities and local convex hulls, PHRE better excluded unused areas and required a smaller sample size. Our PHRE method is applicable to species whose ranges are restricted by complex physical boundaries or environmental gradients and will improve understanding of habitat-use requirements and, ultimately, aid in conservation efforts.
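Schematically, PHRE has three steps: map each sighting into landscape coordinates, fit a bivariate kernel density there, and evaluate that density back in geographic space for every permissible cell. A compact sketch with scipy follows; the transformation functions are placeholders standing in for the coastal-position and distance-from-shore covariates used for sea otters.

```python
import numpy as np
from scipy.stats import gaussian_kde

def to_landscape(xy, coastline_position, distance_from_shore):
    """Placeholder transform from geographic coordinates to landscape features."""
    return np.column_stack([coastline_position(xy), distance_from_shore(xy)])

def phre_like_density(sightings_xy, candidate_xy, coastline_position, distance_from_shore):
    """Fit a bivariate KDE in landscape space and evaluate candidate geographic cells."""
    feats = to_landscape(sightings_xy, coastline_position, distance_from_shore)
    kde = gaussian_kde(feats.T)                          # density in landscape space
    cand = to_landscape(candidate_xy, coastline_position, distance_from_shore)
    return kde(cand.T)                                   # back-mapped use intensity

# Toy illustration: a straight coastline along y = 0 with made-up transforms.
coast_pos = lambda xy: xy[:, 0]                          # position along the coast
dist_shore = lambda xy: np.abs(xy[:, 1])                 # distance from the shore
rng = np.random.default_rng(3)
sightings = np.column_stack([rng.uniform(0, 10, 100), np.abs(rng.normal(0, 0.5, 100))])
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 2, 20))
cells = np.column_stack([gx.ravel(), gy.ravel()])
use = phre_like_density(sightings, cells, coast_pos, dist_shore)
```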

]]>
<![CDATA[Discriminating between HuR and TTP binding sites using the k-spectrum kernel method]]> https://www.researchpad.co/article/5989db50ab0ee8fa60bdc17e

Background

The RNA-binding proteins (RBPs) human antigen R (HuR) and Tristetraprolin (TTP) are known to exhibit competitive binding but have opposing effects on the bound messenger RNA (mRNA). How cells discriminate between the two proteins is an interesting problem. Machine learning approaches, such as support vector machines (SVMs), may be useful in the identification of discriminative features. However, these methods have yet to be applied to studies of RNA-binding protein motifs.

Results

Applying the k-spectrum kernel to a support vector machine (SVM), we first verified the published binding sites of both HuR and TTP. Additional feature engineering highlighted the U-rich binding preference of HuR and the AU-rich binding preference of TTP. Domain adaptation along with multi-task learning was used to predict the common binding sites.

Conclusion

The distinction between HuR and TTP binding appears to rest on subtle sequence-content features. HuR prefers strongly U-rich sequences, whereas TTP prefers AU-rich sequences; with increasing A content, sequences are more likely to be bound only by TTP. Our model is consistent with competitive binding of the two proteins, particularly for intermediate, AU-balanced sequences. This suggests that fine changes in the A/U balance within an untranslated region (UTR) can alter the binding and subsequent stability of the message. Both feature engineering and domain adaptation emphasized the extent to which these proteins recognize similar general sequence features. This work suggests that the k-spectrum kernel method could be useful when studying RNA-binding proteins, and that domain adaptation techniques such as feature augmentation could be employed, particularly when examining RBPs with similar binding preferences.
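The k-spectrum kernel itself is simple to state: represent each sequence by its vector of k-mer counts and take inner products of those vectors, which an SVM can then consume as a precomputed Gram matrix. A minimal sketch follows (toy sequences, k = 3; the study's feature engineering and domain-adaptation steps are not shown).

```python
from collections import Counter

import numpy as np
from sklearn.svm import SVC

def kmer_counts(seq, k=3):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(seqs_a, seqs_b, k=3):
    """Gram matrix of inner products between k-mer count vectors."""
    ca = [kmer_counts(s, k) for s in seqs_a]
    cb = [kmer_counts(s, k) for s in seqs_b]
    K = np.zeros((len(ca), len(cb)))
    for i, x in enumerate(ca):
        for j, y in enumerate(cb):
            K[i, j] = sum(x[m] * y[m] for m in x if m in y)
    return K

# Toy U-rich vs AU-rich sequences standing in for HuR- vs TTP-bound sites.
seqs = ["UUUUUUAUUUUU", "UUUAUUUUUAUU", "AUUUAAUUUAUA", "UAUUUAAAUUUA"]
labels = [0, 0, 1, 1]
K_train = spectrum_kernel(seqs, seqs)
clf = SVC(kernel="precomputed").fit(K_train, labels)
K_test = spectrum_kernel(["UUUUAUUUUUUU"], seqs)
print(clf.predict(K_test))
```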

]]>
<![CDATA[Model Constrained by Visual Hierarchy Improves Prediction of Neural Responses to Natural Scenes]]> https://www.researchpad.co/article/5989da27ab0ee8fa60b81063

Accurate estimation of neuronal receptive fields is essential for understanding sensory processing in the early visual system. Yet the characterization of receptive fields remains incomplete, especially with regard to natural visual stimuli and complete populations of cortical neurons. While previous work has incorporated known structural properties of the early visual system, such as lateral connectivity, or imposed simple-cell-like receptive field structure, no study has exploited the fact that nearby V1 neurons share common feed-forward input from the thalamus and other upstream cortical neurons. We introduce a new method for estimating receptive fields simultaneously for a population of V1 neurons, using a model-based analysis incorporating knowledge of the feed-forward visual hierarchy. We assume that a population of V1 neurons shares a common pool of thalamic inputs and consists of two layers of simple- and complex-like V1 neurons. When fit to recordings of a local population of mouse layer 2/3 V1 neurons, our model offers an accurate description of their responses to natural images and a significant improvement in prediction power over the current state-of-the-art methods. We show that the responses of a large local population of V1 neurons with locally diverse receptive fields can be described with a surprisingly limited number of thalamic inputs, consistent with recent experimental findings. Our structural model not only offers an improved functional characterization of V1 neurons, but also provides a framework for studying the relationship between connectivity and function in visual cortical areas.
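As a schematic of the kind of hierarchy the model assumes (not the fitted model itself: the filters, pooling rules and nonlinearities here are random placeholders), a shared bank of thalamic-like linear filters feeds rectified "simple" units, whose squared outputs are pooled by "complex" units:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_thalamic, n_simple, n_complex = 256, 12, 20, 8

# Shared pool of thalamic-like linear filters over the image (placeholder filters).
W_thal = rng.normal(0, 1.0 / np.sqrt(n_pixels), size=(n_thalamic, n_pixels))
# Simple cells: rectified linear combinations of the shared thalamic responses.
W_simple = rng.normal(0, 0.5, size=(n_simple, n_thalamic))
# Complex cells: pooling of squared simple-cell outputs.
W_complex = np.abs(rng.normal(0, 0.5, size=(n_complex, n_simple)))

def population_response(image):
    thalamic = W_thal @ image.ravel()
    simple = np.maximum(W_simple @ thalamic, 0.0)        # rectification
    complex_ = W_complex @ simple ** 2                   # energy-style pooling
    return np.concatenate([simple, complex_])            # modelled V1 population

responses = population_response(rng.normal(size=(16, 16)))  # one image patch
```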

]]>
<![CDATA[Local structure preserving sparse coding for infrared target recognition]]> https://www.researchpad.co/article/5989db50ab0ee8fa60bdbfbf

Sparse coding performs well in image classification. However, robust target recognition requires a large set of comprehensive template images, and the sparse learning process is complex. We incorporate sparsity into a template-matching concept to construct a local sparse structure matching (LSSM) model for general infrared target recognition. A local structure preserving sparse coding (LSPSc) formulation is proposed to simultaneously preserve the local sparse and structural information of objects. By adding a spatial local structure constraint to the classical sparse coding algorithm, LSPSc can improve the stability of the sparse representation of targets and inhibit background interference in infrared images. Furthermore, a kernel LSPSc (K-LSPSc) formulation is proposed, which extends LSPSc to kernel space to weaken the influence of the linear structure constraint in nonlinear natural data. Because of their anti-interference and fault-tolerant capabilities, both LSPSc- and K-LSPSc-based LSSM can perform target identification based on a simple template set, which needs only several images containing enough local sparse structures to learn a sufficient sparse structure dictionary of a target class. Specifically, this LSSM approach performs stably in target detection under scene, shape and occlusion variations. High performance is demonstrated on several datasets, indicating robust infrared target recognition in diverse environments and imaging conditions.
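The abstract does not state the exact LSPSc objective, but "adding a spatial local structure constraint to the classical sparse coding algorithm" suggests a graph-regularised form along the following lines, written here only as one plausible reading: W would encode spatial proximity between local patches, the alpha_i are their codes, and the kernel variant K-LSPSc would replace inner products with kernel evaluations.

```latex
\min_{\{\boldsymbol{\alpha}_i\}} \;
  \sum_i \lVert \mathbf{x}_i - \mathbf{D}\boldsymbol{\alpha}_i \rVert_2^2
  + \lambda \sum_i \lVert \boldsymbol{\alpha}_i \rVert_1
  + \beta \sum_{i,j} W_{ij}\, \lVert \boldsymbol{\alpha}_i - \boldsymbol{\alpha}_j \rVert_2^2
```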

]]>
<![CDATA[MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods]]> https://www.researchpad.co/article/5989da83ab0ee8fa60b9b89d

An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods.
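A heavily simplified sketch of the alternation between artifact removal and data consistency is below (2-D single-coil case, scikit-learn's KernelPCA as a stand-in for the block-array transform, and no block matching; the published method's grouping, thresholding schedule and k-t sampling are not reproduced).

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def kpca_denoise(image, patch_size=(8, 8), n_components=8):
    """Project image patches onto leading kernel-PCA components and rebuild."""
    patches = extract_patches_2d(image, patch_size)
    flat = patches.reshape(len(patches), -1)
    kpca = KernelPCA(n_components=n_components, kernel="rbf", fit_inverse_transform=True)
    coded = kpca.fit_transform(flat)
    restored = kpca.inverse_transform(coded).reshape(patches.shape)
    return reconstruct_from_patches_2d(restored, image.shape)

def data_consistency(image, kspace_acquired, mask):
    """Re-impose the acquired k-space samples on the current estimate."""
    k = np.fft.fft2(image)
    k[mask] = kspace_acquired[mask]
    return np.real(np.fft.ifft2(k))

# Toy iterative loop: alternate artifact removal and data consistency.
rng = np.random.default_rng(0)
truth = rng.normal(size=(32, 32))
mask = rng.uniform(size=truth.shape) < 0.4            # 40% of k-space "acquired"
kspace = np.fft.fft2(truth)
estimate = np.real(np.fft.ifft2(kspace * mask))        # zero-filled reconstruction
for _ in range(3):
    estimate = kpca_denoise(estimate)
    estimate = data_consistency(estimate, kspace, mask)
```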

]]>
<![CDATA[Evaluating methods for estimating home ranges using GPS collars: A comparison using proboscis monkeys (Nasalis larvatus)]]> https://www.researchpad.co/article/5989db50ab0ee8fa60bdc158

The development of GPS tags for tracking wildlife has revolutionised the study of home ranges, habitat use and behaviour. Concomitantly, there have been rapid developments in methods for estimating habitat use from GPS data. In combination, these changes can make it challenging to choose the best methods for estimating home ranges. In primatology, this issue has received little attention, as there have been few GPS collar-based studies to date. However, as advancing technology is making collaring studies more feasible, the analysis needs to advance alongside the technology. Here, using a high-quality GPS collaring data set from 10 proboscis monkeys (Nasalis larvatus), we aimed to: 1) compare home range estimates from the most commonly used method in primatology, the grid-cell method, with three recent methods designed for large and/or temporally correlated GPS data sets; 2) evaluate how well these methods identify known physical barriers (e.g. rivers); and 3) test the robustness of the different methods to data containing either less frequent or random losses of GPS fixes. Biased random bridges had the best overall performance, combining a high level of agreement between the raw data and the estimated utilisation distribution with a relatively low sensitivity to reduced fix frequency or loss of data. This method estimated the home range of proboscis monkeys to be 24–165 ha (mean 80.89 ha). The grid-cell method and approaches based on local convex hulls had some advantages, including simplicity and excellent barrier identification, respectively, but lower overall performance. With the most suitable model, or combination of models, it is possible to understand more fully the patterns, causes, and potential consequences of disturbances to an animal, and this understanding can be used to assist in the management and restoration of degraded landscapes.

]]>