ResearchPad - statistical-noise https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks
<![CDATA[Robust pollution source parameter identification based on the artificial bee colony algorithm using a wireless sensor network]]> https://www.researchpad.co/article/elastic_article_14751 Pollution source parameter identification (PSPI) is significant for pollution control, since it can provide important information and save a lot of time for subsequent pollution elimination work. For solving the PSPI problem, a large number of pollution sensor nodes can be rapidly deployed to cover a large area and form a wireless sensor network (WSN). Based on the measurements of the WSN, least-squares estimation methods can solve the PSPI problem by searching for the solution that minimizes the sum of squared measurement noises. They are independent of the measurement noise distribution, i.e., robust to the noise distribution. When searching for the least-squares solution, population-based parallel search techniques can usually overcome the premature convergence problem that can stagnate single-point search algorithms. In this paper, we adapt the relatively new artificial bee colony (ABC) algorithm to solve the WSN-based PSPI problem and verify its feasibility and robustness. Extensive simulation results show that the ABC and the particle swarm optimization (PSO) algorithm obtained similar identification results in the same simulation scenario. Moreover, the ABC and the PSO achieved much better performance than a traditionally used single-point search algorithm, the trust-region reflective algorithm.
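
As a concrete illustration of the least-squares formulation and the bee-colony search described above, the following minimal sketch minimizes the sum of squared residuals with an ABC-style loop. The forward model predict(), the three-parameter source description (position and strength) and all tuning constants are hypothetical placeholders, not the paper's implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def predict(theta, sensors):
        # Hypothetical forward model: a source at (x0, y0) with strength q,
        # concentration decaying with squared distance (placeholder only).
        x0, y0, q = theta
        d2 = (sensors[:, 0] - x0) ** 2 + (sensors[:, 1] - y0) ** 2
        return q / (1.0 + d2)

    def sse(theta, sensors, z):
        # Least-squares objective: sum of squared measurement residuals.
        return np.sum((predict(theta, sensors) - z) ** 2)

    def abc_search(sensors, z, lo, hi, n_food=20, limit=30, iters=200):
        dim = len(lo)
        food = rng.uniform(lo, hi, size=(n_food, dim))       # candidate solutions
        cost = np.array([sse(f, sensors, z) for f in food])
        trials = np.zeros(n_food, dtype=int)
        for _ in range(iters):
            for phase in ("employed", "onlooker"):
                if phase == "employed":
                    idx = np.arange(n_food)
                else:                                        # fitness-proportional choice
                    fit = 1.0 / (1.0 + cost)
                    idx = rng.choice(n_food, n_food, p=fit / fit.sum())
                for i in idx:
                    k = (i + 1 + rng.integers(n_food - 1)) % n_food  # partner != i
                    j = rng.integers(dim)                    # perturb one coordinate
                    cand = food[i].copy()
                    cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
                    cand = np.clip(cand, lo, hi)
                    c = sse(cand, sensors, z)
                    if c < cost[i]:                          # greedy selection
                        food[i], cost[i], trials[i] = cand, c, 0
                    else:
                        trials[i] += 1
            stale = trials > limit                           # scouts replace stale sources
            food[stale] = rng.uniform(lo, hi, size=(int(stale.sum()), dim))
            cost[stale] = [sse(f, sensors, z) for f in food[stale]]
            trials[stale] = 0
        return food[np.argmin(cost)]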

]]>
<![CDATA[Evaluation of functional methods of joint centre determination for quasi-planar movement]]> https://www.researchpad.co/article/5c605a9ed5eed0c4847cd32a

Functional methods identify joint centres as the centre of rotation (CoR) between two adjacent segments during an ad-hoc movement. These methods have been used to functionally determine the hip joint centre in gait analysis and have shown advantages over predictive regression techniques. However, the current implementation of functional methods hinders their application in clinical settings when subjects have difficulty performing multi-plane movements over the required range. In this study, we systematically investigated whether functional methods can be used to localise the CoR during a quasi-planar movement. The effects of the following factors were analysed: the algorithm, the range and speed of the movement, marker cluster location, marker cluster size and distance to the joint centre. A mechanical linkage was used to isolate the factors of interest and give insight into variations in the implementation of functional methods. Our results showed that the algorithms and cluster locations significantly affected the estimation results. For all algorithms, a significantly positive relationship was observed between CoR errors and the medial-lateral distance of the proximal cluster from the joint centre, while the distal marker clusters were best located as close as possible to the joint centre. By optimising the analytical and experimental factors, the transformation algorithms achieved a root mean square error (RMSE) of 5.3 mm, while the sphere fitting methods yielded the best estimation with an RMSE of 2.6 mm. The transformation algorithms performed better in the presence of random noise and simulated soft tissue artefacts.
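
The sphere-fitting family of functional methods mentioned above reduces, in its simplest algebraic form, to a linear least-squares problem: a marker rotating about a fixed joint centre traces part of a sphere. A minimal sketch under that idealisation (not any specific algorithm evaluated in the study):

    import numpy as np

    def fit_cor_sphere(points):
        # Algebraic sphere fit: estimate the centre of rotation (CoR) as the
        # centre of the sphere traced by a marker's positions (N x 3 array).
        # Uses ||p||^2 = 2 p.c + (r^2 - ||c||^2), which is linear in (c, k).
        A = np.hstack([2.0 * points, np.ones((len(points), 1))])
        b = np.sum(points ** 2, axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        centre, k = sol[:3], sol[3]
        radius = np.sqrt(k + centre @ centre)
        return centre, radius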

]]>
<![CDATA[Neural responses to natural and model-matched stimuli reveal distinct computations in primary and nonprimary auditory cortex]]> https://www.researchpad.co/article/5c0ed73fd5eed0c484f13d8e

A central goal of sensory neuroscience is to construct models that can explain neural responses to natural stimuli. As a consequence, sensory models are often tested by comparing neural responses to natural stimuli with model responses to those stimuli. One challenge is that distinct model features are often correlated across natural stimuli, and thus model features can predict neural responses even if they do not in fact drive them. Here, we propose a simple alternative for testing a sensory model: we synthesize a stimulus that yields the same model response as each of a set of natural stimuli, and test whether the natural and “model-matched” stimuli elicit the same neural responses. We used this approach to test whether a common model of auditory cortex—in which spectrogram-like peripheral input is processed by linear spectrotemporal filters—can explain fMRI responses in humans to natural sounds. Prior studies have shown that this model has good predictive power throughout auditory cortex, but this finding could reflect feature correlations in natural stimuli. We observed that fMRI responses to natural and model-matched stimuli were nearly equivalent in primary auditory cortex (PAC) but that nonprimary regions, including those selective for music or speech, showed highly divergent responses to the two sound sets. This dissociation between primary and nonprimary regions was less clear from model predictions due to the influence of feature correlations across natural stimuli. Our results provide a signature of hierarchical organization in human auditory cortex, and suggest that nonprimary regions compute higher-order stimulus properties that are not well captured by traditional models. Our methodology enables stronger tests of sensory models and could be broadly applied in other domains.
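
The synthesis step can be illustrated with a toy version of the model-matching logic: starting from noise, descend the gradient of the mismatch between the model responses to the synthetic and natural stimuli. The random filter bank F below stands in for a spectrotemporal model; it is an illustrative assumption, not the study's actual auditory model.

    import numpy as np

    rng = np.random.default_rng(1)
    F = rng.standard_normal((32, 256)) / 16.0       # hypothetical linear model filters

    def model_response(x):
        # Model response: energy of each linear filter output (one value per filter).
        return (F @ x) ** 2

    def synthesize_matched(x_nat, steps=5000, lr=0.05):
        # Gradient descent from noise toward a stimulus whose model response
        # matches that of the natural stimulus (a minimal metamer-style sketch).
        target = model_response(x_nat)
        x = rng.standard_normal(x_nat.shape)
        for _ in range(steps):
            r = F @ x
            err = r ** 2 - target                   # per-filter response mismatch
            grad = 2.0 * F.T @ (err * r)            # gradient of 0.5*sum(err**2)
            x -= lr * grad / len(target)
        return x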

]]>
<![CDATA[Deepbinner: Demultiplexing barcoded Oxford Nanopore reads with deep convolutional neural networks]]> https://www.researchpad.co/article/5bfdb372d5eed0c4845c990b

Multiplexing, the simultaneous sequencing of multiple barcoded DNA samples on a single flow cell, has made Oxford Nanopore sequencing cost-effective for small genomes. However, it depends on the ability to sort the resulting sequencing reads by barcode, and current demultiplexing tools fail to classify many reads. Here we present Deepbinner, a tool for Oxford Nanopore demultiplexing that uses a deep neural network to classify reads based on the raw electrical read signal. This ‘signal-space’ approach allows for greater accuracy than existing ‘base-space’ tools (Albacore and Porechop) for which signals must first be converted to DNA base calls, itself a complex problem that can introduce noise into the barcode sequence. To assess Deepbinner and existing tools, we performed multiplex sequencing on 12 amplicons chosen for their distinguishability. This allowed us to establish a ground truth classification for each read based on internal sequence alone. Deepbinner had the lowest rate of unclassified reads (7.8%) and the highest demultiplexing precision (98.5% of classified reads were correctly assigned). It can be used alone (to maximise the number of classified reads) or in conjunction with other demultiplexers (to maximise precision and minimise false positive classifications). We also found cross-sample chimeric reads (0.3%) and evidence of barcode switching (0.3%) in our dataset, which likely arise during library preparation and may be detrimental for quantitative studies that use multiplexing. Deepbinner is open source (GPLv3) and available at https://github.com/rrwick/Deepbinner.
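
A toy stand-in for the signal-space idea: a small 1D convolutional network that maps a fixed-length window of raw signal to barcode classes. The architecture below (layer sizes, window length) is invented for illustration and is far smaller than Deepbinner's actual network.

    import torch
    import torch.nn as nn

    class SignalDemuxNet(nn.Module):
        # Minimal 1D CNN mapping a raw-signal window to barcode classes.
        def __init__(self, n_barcodes=12, window=1024):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(4),
            )
            # Extra output class for "no confident barcode" (unclassified).
            self.classify = nn.Linear(32 * (window // 16), n_barcodes + 1)

        def forward(self, x):                   # x: (batch, 1, window) raw signal
            h = self.features(x)
            return self.classify(h.flatten(1))  # logits over barcode bins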

]]>
<![CDATA[Two-stage motion artefact reduction algorithm for electrocardiogram using weighted adaptive noise cancelling and recursive Hampel filter]]> https://www.researchpad.co/article/5bfdb380d5eed0c4845c9fc3

The presence of motion artefacts in ECG signals can lead to misleading interpretation of cardiovascular status. Recently, reducing motion artefacts in ECG signals has gained the interest of many researchers. Because the motion artefact overlaps with the ECG signal, it is difficult to reduce the artefact without distorting the original signal. However, adaptive noise cancellers have been shown to be effective in reducing motion artefacts if an appropriate noise reference, correlated with the noise in the ECG signal, is available. Unfortunately, the noise reference is not always correlated with the motion artefact, and filtering with such a reference may further contaminate the ECG signal. In this paper, a two-stage motion artefact reduction algorithm is proposed, with one method for each stage. The weighted adaptive noise filtering method (WAF), proposed for the first stage, uses the acceleration derivative as the motion artefact reference and the Pearson correlation coefficient between acceleration and ECG signal as a weighting factor. In the second stage, a recursive Hampel filter-based estimation method (RHFBE) is proposed for estimating ECG signal segments, based on the spatial correlation of the ECG segment component obtained from successive ECG signals. A real-world dataset is used to evaluate the effectiveness of the proposed methods compared to the conventional adaptive filter. The results show a promising enhancement in reducing motion artefacts from ECG signals recorded by a cost-effective single-lead ECG sensor during several activities performed by different subjects.
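
A minimal sketch of the first-stage idea, assuming an LMS-style adaptive canceller whose output is scaled by the Pearson correlation between the ECG and the acceleration reference; the paper's exact WAF update rule may differ.

    import numpy as np

    def weighted_lms_cancel(ecg, accel, mu=0.01, order=8):
        # LMS adaptive noise cancelling with the acceleration signal as the
        # noise reference, scaled by its Pearson correlation with the ECG.
        w = np.zeros(order)
        out = np.copy(ecg)
        rho = abs(np.corrcoef(ecg, accel)[0, 1])   # weighting factor in [0, 1]
        for n in range(order, len(ecg)):
            x = accel[n - order:n][::-1]           # reference tap vector
            noise_hat = w @ x                      # estimated motion artefact
            e = ecg[n] - rho * noise_hat           # weighted cancellation
            w += mu * e * x                        # LMS weight update
            out[n] = e
        return out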

]]>
<![CDATA[Effects of Physiological Internal Noise on Model Predictions of Concurrent Vowel Identification for Normal-Hearing Listeners]]> https://www.researchpad.co/article/5989dad7ab0ee8fa60bb84df

Previous studies have shown that concurrent vowel identification improves with increasing temporal onset asynchrony of the vowels, even if the vowels have the same fundamental frequency. The current study investigated the neural processing potentially underlying concurrent vowel perception. The individual vowel stimuli from a previously published study were used as inputs for a phenomenological auditory-nerve (AN) model. Spectrotemporal representations of simulated neural excitation patterns (i.e., neurograms) were constructed and then matched quantitatively with the neurograms of the single vowels using the Neurogram Similarity Index Measure (NSIM). A novel computational decision model was used to predict concurrent vowel identification. To facilitate optimal matches between the model predictions and the behavioral human data, internal noise was added either during neurogram generation or during neurogram matching with the NSIM procedure. The best fit to the behavioral data was achieved with a signal-to-noise ratio (SNR) of 8 dB for internal noise added during neurogram generation, but with a much smaller amount of internal noise (an SNR of 60 dB) for noise added at the level of the NSIM computations. The results suggest that accurate modeling of concurrent vowel data from listeners with normal hearing may partly depend on the amount of internal noise and on where that noise is hypothesized to occur during the concurrent vowel identification process.
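
Injecting internal noise at a prescribed SNR, as done at both model stages here, amounts to scaling Gaussian noise to the signal power; a minimal sketch:

    import numpy as np

    def add_noise_at_snr(signal, snr_db, rng=np.random.default_rng()):
        # Add Gaussian "internal noise" so the result has the requested SNR
        # in dB (e.g. 8 dB = heavy noise, 60 dB = very light noise).
        p_signal = np.mean(signal ** 2)
        p_noise = p_signal / (10.0 ** (snr_db / 10.0))
        return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)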

]]>
<![CDATA[Estimation of measurement error in plasma HIV-1 RNA assays near their limit of quantification]]> https://www.researchpad.co/article/5989db51ab0ee8fa60bdc45c

Background

Plasma HIV-1 RNA levels (pVLs), routinely used for clinical management, are influenced by measurement error (ME) due to physiologic and assay variation.

Objective

To assess the ME of the COBAS HIV-1 Ampliprep AMPLICOR MONITOR ultrasensitive assay version 1.5 and the COBAS Ampliprep Taqman HIV-1 assay versions 1.0 and 2.0 close to their lower limit of detection. Secondly, to examine whether there was any evidence that pVL measurements closest to the lower limit of quantification, where clinical decisions are made, were susceptible to a higher degree of random noise than the remaining range.

Methods

We analysed longitudinal pVL of treatment-naïve patients from British Columbia, Canada, during their first six months on treatment, for time periods when each assay was uniquely available: Period 1 (Amplicor): 08/03/2000–01/02/2008; Period 2 (Taqman v1.0): 07/01/2010–07/03/2012; Period 3 (Taqman v2.0): 08/03/2012–30/06/2014. ME was estimated via generalized additive mixed effects models, adjusting for several clinical and demographic variables and follow-up time.

Results

The ME associated with each assay was approximately 0.5 log10 copies/mL. The number of pVL measurements at a given pVL value was not randomly distributed; values ≤250 copies/mL were strongly and systematically overrepresented in all assays, with the prevalence decreasing monotonically as the pVL increased. Model residuals for pVL ≤250 copies/mL were approximately three times higher than those for the higher range, and pVL measurements in this range could not be modelled effectively due to considerable random noise in the data.

Conclusions

Although the ME was stable across assays, there is a substantial increase in random noise when measuring pVL close to the lower limit of detection. These findings have important clinical implications, especially in the range where key clinical decisions are made. Thus, pVL values ≤250 copies/mL should not be taken as the “truth”, and repeat pVL measurement is encouraged to confirm viral suppression.

]]>
<![CDATA[A Method for Non-Rigid Face Alignment via Combining Local and Holistic Matching]]> https://www.researchpad.co/article/5989daf1ab0ee8fa60bc12bb

We propose a method for non-rigid face alignment that needs only a single template, such as using a person’s smiling face to match their surprised face. First, to be robust to outliers caused by complex geometric deformations, a new local feature matching method called K Patch Pairs (K-PP) is proposed. Specifically, inspired by state-of-the-art similarity measures used in template matching, K-PP finds the mutual K nearest neighbors between two images. A weight matrix is then introduced to balance the similarity against the number of local matches. Second, we propose a modified Lucas-Kanade algorithm combined with a local matching constraint to solve the non-rigid face alignment, so that a holistic face representation and local features can be jointly modeled in the objective function. Our method thus combines the flexibility of local matching with the robustness of holistic fitting. Furthermore, we show that the optimization problem can be efficiently solved by the inverse compositional algorithm. Comparisons with conventional methods demonstrate our superiority in terms of both accuracy and robustness.
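
The mutual-K-nearest-neighbour test at the heart of K-PP is easy to state in code. The brute-force sketch below assumes one descriptor vector per patch and Euclidean distances; the paper's actual similarity measure and weight matrix are not reproduced.

    import numpy as np

    def mutual_knn_pairs(desc_a, desc_b, k=5):
        # Return index pairs (i, j) such that patch j is among the k nearest
        # neighbours of patch i AND vice versa (the mutual-kNN idea in K-PP).
        d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
        nn_ab = np.argsort(d, axis=1)[:, :k]       # k nearest in B for each A
        nn_ba = np.argsort(d, axis=0)[:k, :].T     # k nearest in A for each B
        return [(i, j) for i in range(len(desc_a)) for j in nn_ab[i]
                if i in nn_ba[j]]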

]]>
<![CDATA[Robust information propagation through noisy neural circuits]]> https://www.researchpad.co/article/5989db5aab0ee8fa60bdf260

Sensory neurons give highly variable responses to stimulation, which can limit the amount of stimulus information available to downstream circuits. Much work has investigated the factors that affect the amount of information encoded in these population responses, leading to insights about the role of covariability among neurons, tuning curve shape, etc. However, the informativeness of neural responses is not the only relevant feature of population codes; of potentially equal importance is how robustly that information propagates to downstream structures. For instance, to quantify the retina’s performance, one must consider not only the informativeness of the optic nerve responses, but also the amount of information that survives the spike-generating nonlinearity and noise corruption in the next stage of processing, the lateral geniculate nucleus. Our study identifies the set of covariance structures for the upstream cells that optimize the ability of information to propagate through noisy, nonlinear circuits. Within this optimal family are covariances with “differential correlations”, which are known to reduce the information encoded in neural population activities. Thus, covariance structures that maximize information in neural population codes, and those that maximize the ability of this information to propagate, can be very different. Moreover, redundancy is neither necessary nor sufficient to make population codes robust against corruption by noise: redundant codes can be very fragile, and synergistic codes can—in some cases—optimize robustness against noise.
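
For reference, the linear Fisher information that quantifies encoded stimulus information, and the standard result for the information-limiting ("differential") correlations mentioned above, can be written as follows (these are the textbook forms, not equations taken from this paper):

    I_F(s) = \mathbf{f}'(s)^{\top} \Sigma^{-1} \mathbf{f}'(s), \qquad
    \Sigma = \Sigma_0 + \epsilon\, \mathbf{f}'\mathbf{f}'^{\top}
    \;\Longrightarrow\;
    I_F = \frac{I_0}{1 + \epsilon I_0} \le \frac{1}{\epsilon},

where \mathbf{f}'(s) is the vector of tuning-curve derivatives, \Sigma the response covariance, and I_0 = \mathbf{f}'^{\top}\Sigma_0^{-1}\mathbf{f}' the information under \Sigma_0. The saturation at 1/\epsilon is why differential correlations limit encoded information, even though, as the abstract notes, they can belong to the family that optimizes propagation.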

]]>
<![CDATA[Suppressing Respiration Effects when Geometric Distortion Is Corrected Dynamically by Phase Labeling for Additional Coordinate Encoding (PLACE) during Functional MRI]]> https://www.researchpad.co/article/5989d9f5ab0ee8fa60b6fd9e

Echo planar imaging (EPI) suffers from geometric distortions caused by magnetic field inhomogeneities, which can be time-varying as a result of the small head movements that occur over seconds and minutes during fMRI experiments; this is known as “dynamic geometric distortion”. Phase Labeling for Additional Coordinate Encoding (PLACE) is a promising technique for geometric distortion correction without loss of temporal resolution, and in principle it can correct motion-induced dynamic geometric distortion. PLACE requires at least two EPI images of the same anatomy, ideally acquired with no variation in the magnetic field inhomogeneities. However, head motion and lung ventilation during the respiratory cycle can change the magnetic field inhomogeneities within the EPI pair used for PLACE. In this work, we exploited dynamic off-resonance in k-space (DORK) and averaging to correct for magnetic field inhomogeneity changes within the EPI pair, and hence propose a combined technique (DORK+PLACE+averaging) to mitigate dynamic geometric distortion in EPI-based fMRI while preserving temporal resolution. The performance of the combined DORK, PLACE and averaging technique was characterized through several imaging experiments involving test phantoms and six healthy adult volunteers. Phantom data show reduced temporal standard deviation of fMRI signal intensities after use of the combined dynamic PLACE, DORK and averaging compared to standard processing and static geometric distortion correction. The combined technique also substantially improved the temporal standard deviation and activation maps obtained from human fMRI data in comparison to the results obtained by standard processing and static geometric distortion correction, highlighting the utility of the approach.

]]>
<![CDATA[A New Variational Approach for Multiplicative Noise and Blur Removal]]> https://www.researchpad.co/article/5989db53ab0ee8fa60bdcb12

This paper proposes a new variational model for joint multiplicative denoising and deblurring. It combines a total generalized variation filter (which has been shown to reduce blocky effects by accounting for higher-order smoothness) with a shearlet transform (which effectively preserves anisotropic image features such as sharp edges and curves). The new model takes advantage of both regularizers, since it is able to minimize staircase effects while preserving sharp edges, textures and other fine image details. The existence and uniqueness of a solution to the proposed variational model are also discussed. The resulting energy functional is solved using the alternating direction method of multipliers (ADMM). Numerical experiments show that the proposed model achieves satisfactory restoration results, both visually and quantitatively, in removing blur (motion, Gaussian, disk, and Moffat) and multiplicative noise (Gaussian, Gamma, or Rayleigh). A comparison with other recent methods in this field is provided as well. The proposed model can also be applied to restoring both single- and multi-channel images contaminated with multiplicative noise, and it permits cross-channel blurs when the underlying image has more than one channel. Numerical tests on color images are conducted to demonstrate the effectiveness of the proposed model.
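
The paper's TGV-plus-shearlet functional is involved, but the core difficulty of multiplicative noise can be shown with a much simpler baseline: take logarithms so the noise becomes approximately additive, denoise, and exponentiate back. A sketch using an off-the-shelf total-variation denoiser (illustrative only, not the proposed model):

    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def remove_multiplicative_noise(image, weight=0.15, eps=1e-6):
        # Log transform turns multiplicative noise into (approximately)
        # additive noise, which a TV denoiser can then suppress.
        log_img = np.log(np.maximum(image, eps))   # multiplicative -> additive
        log_den = denoise_tv_chambolle(log_img, weight=weight)
        return np.exp(log_den)                     # back to intensity domain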

]]>
<![CDATA[How to Distinguish Conformational Selection and Induced Fit Based on Chemical Relaxation Rates]]> https://www.researchpad.co/article/5989da50ab0ee8fa60b8da57

Protein binding often involves conformational changes. Important questions are whether a conformational change occurs prior to a binding event (‘conformational selection’) or after a binding event (‘induced fit’), and how conformational transition rates can be obtained from experiments. In this article, we present general results for the chemical relaxation rates of conformational-selection and induced-fit binding processes that hold for all concentrations of proteins and ligands and, thus, go beyond the standard pseudo-first-order approximation of large ligand concentration. These results make it possible to distinguish conformational-selection from induced-fit processes—also in cases in which such a distinction is not possible under pseudo-first-order conditions—and to extract conformational transition rates of proteins from chemical relaxation data.
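
For orientation, the familiar pseudo-first-order limits that this article generalizes are, when binding equilibrates much faster than the conformational transition (with k_+ and k_- the forward and backward conformational rates and K_d the ligand dissociation constant; these are the standard textbook expressions, not the paper's general results):

    k_{\mathrm{obs}}^{\mathrm{IF}} = k_- + k_+\,\frac{[L]}{[L] + K_d}
    \quad\text{(increases with } [L]\text{)}, \qquad
    k_{\mathrm{obs}}^{\mathrm{CS}} = k_+ + k_-\,\frac{K_d}{[L] + K_d}
    \quad\text{(decreases with } [L]\text{)}.

The opposite dependence of the observed relaxation rate on ligand concentration is the classic diagnostic; the article's general results recover these forms in the appropriate limits while remaining valid at all protein and ligand concentrations.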

]]>
<![CDATA[A No-Reference Adaptive Blockiness Measure for JPEG Compressed Images]]> https://www.researchpad.co/article/5989da71ab0ee8fa60b9516c

Digital images have been extensively used in education, research, and entertainment. Many of these images, taken by consumer cameras, are compressed by the JPEG algorithm for efficient storage and transmission. Blocking artifacts are a well-known problem caused by this algorithm. Effective measurement of blocking artifacts plays an important role in the design, optimization, and evaluation of image compression algorithms. In this paper, we propose a no-reference objective blockiness measure that adapts to the high-frequency content of an image. The difference of entropies across blocks and the variation of block-boundary pixel values in edge images are used to calculate the blockiness level in areas with low and high frequency content, respectively. Extensive experimental results show that the proposed measure is effective and stable across a wide variety of images. It is robust to image noise and can be used for real-world image quality monitoring and control.
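
A drastically simplified sketch of boundary-based blockiness scoring: compare pixel jumps across 8x8 JPEG block boundaries with jumps elsewhere. The paper's measure additionally uses entropy differences and adapts to frequency content; none of that is reproduced here.

    import numpy as np

    def boundary_blockiness(gray, block=8):
        # Crude blockiness score: mean absolute jump across JPEG block
        # boundaries divided by the mean jump elsewhere (>1 suggests blocking).
        dh = np.abs(np.diff(gray.astype(float), axis=1))    # horizontal jumps
        cols = np.arange(dh.shape[1])
        on_boundary = (cols % block) == block - 1           # columns 7|8, 15|16, ...
        return dh[:, on_boundary].mean() / (dh[:, ~on_boundary].mean() + 1e-9)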

]]>
<![CDATA[Criticality meets learning: Criticality signatures in a self-organizing recurrent neural network]]> https://www.researchpad.co/article/5989db5cab0ee8fa60be009c

Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it was not designed to exhibit criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms, and it reproduces a number of biological findings on neural variability and on the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, the onset of external input transiently changes the slope of the avalanche distributions, matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model’s performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN’s spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences.
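
The avalanche analysis behind such criticality signatures can be sketched in a few lines: segment a binned activity trace into avalanches and examine the size distribution on log-log axes. This is an illustrative simplification; rigorous power-law fitting uses maximum-likelihood methods rather than histogram regression.

    import numpy as np

    def avalanche_sizes(activity, theta=0):
        # Split a binned activity trace into avalanches (runs of bins above
        # theta) and return the total activity of each run.
        active = activity > theta
        sizes, s = [], 0
        for a, x in zip(active, activity):
            if a:
                s += x
            elif s > 0:
                sizes.append(s)
                s = 0
        if s > 0:
            sizes.append(s)
        return np.array(sizes)

    def powerlaw_slope(sizes, bins=30):
        # Rough log-log regression of the size histogram; a slope near -1.5
        # is the classic criticality signature.
        edges = np.logspace(0, np.log10(sizes.max()), bins)
        hist, _ = np.histogram(sizes, bins=edges)
        centers = np.sqrt(edges[:-1] * edges[1:])
        keep = hist > 0
        return np.polyfit(np.log10(centers[keep]), np.log10(hist[keep]), 1)[0]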

]]>
<![CDATA[A Secure and Efficient Scalable Secret Image Sharing Scheme with Flexible Shadow Sizes]]> https://www.researchpad.co/article/5989db28ab0ee8fa60bd0a50

In a general (k, n) scalable secret image sharing (SSIS) scheme, the secret image is shared among n participants, and any k or more participants can reconstruct it. Scalability means that the amount of information in the reconstructed image scales in proportion to the number of participants. In most existing SSIS schemes, the size of each image shadow is relatively large, and the dealer has no flexible control strategy to adjust it to meet the demands of different applications. Moreover, almost all existing SSIS schemes are not applicable in noisy environments. To address these deficiencies, in this paper we present a novel SSIS scheme based on compressed sensing, a technique that has been widely used in many fields such as image processing, wireless communication and medical imaging. Our scheme is flexible: the dealer can achieve a compromise between the size of each shadow and the quality of the reconstructed image. In addition, our scheme has many other advantages, including smooth scalability, noise resilience, and high security. The experimental results and the comparison with similar works demonstrate the feasibility and superiority of our scheme.
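
The compressed-sensing core of such a scheme can be illustrated in a few lines: a shadow is a set of random linear measurements of a sparse secret, and the shadow size is simply the number of measurements m, which the dealer can trade off against reconstruction quality. A toy sketch using orthogonal matching pursuit (illustrative only; the paper's sharing construction and security layer are not reproduced):

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(2)
    n, m, k = 256, 96, 10                            # signal length, measurements, sparsity
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
    y = Phi @ x                                      # compressed measurements ("shadow")

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(Phi, y)
    x_hat = omp.coef_                                # recovery improves as m grows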

]]>
<![CDATA[Let’s Not Waste Time: Using Temporal Information in Clustered Activity Estimation with Spatial Adjacency Restrictions (CAESAR) for Parcellating FMRI Data]]> https://www.researchpad.co/article/5989da2fab0ee8fa60b83e08

We have proposed a Bayesian approach for functional parcellation of whole-brain FMRI measurements, which we call Clustered Activity Estimation with Spatial Adjacency Restrictions (CAESAR). We use distance-dependent Chinese restaurant processes (dd-CRPs) to define a flexible prior which partitions the voxel measurements into clusters whose number and shapes are unknown a priori. With dd-CRPs we can conveniently implement spatial constraints to ensure that our parcellations remain spatially contiguous and thereby physiologically meaningful. In the present work, we extend CAESAR by using Gaussian process (GP) priors to model the temporally smooth haemodynamic signals that give rise to the measured FMRI data. A challenge for GP inference in our setting is the cubic scaling with respect to the number of time points, which can become computationally prohibitive for FMRI measurements, which potentially consist of long time series. As a solution we describe an efficient implementation that is practically as fast as the corresponding time-independent non-GP model on typically-sized FMRI data sets. We also employ a population Monte Carlo algorithm that can significantly speed up convergence compared to traditional single-chain methods. First we illustrate the benefits of CAESAR and the GP priors with simulated experiments. Next, we demonstrate our approach by parcellating resting-state FMRI data measured from twenty participants, taken from the Human Connectome Project data repository. Results show that CAESAR affords highly robust and scalable whole-brain clustering of FMRI timecourses.

]]>
<![CDATA[Optimization of Proton CT Detector System and Image Reconstruction Algorithm for On-Line Proton Therapy]]> https://www.researchpad.co/article/5989da46ab0ee8fa60b8bc63

The purposes of this study were to optimize a proton computed tomography (pCT) system for proton range verification and to validate the pCT image reconstruction algorithm on projection images generated with the optimized parameters. For this purpose, we developed a new pCT scanner using the Geometry and Tracking (GEANT) 4.9.6 simulation toolkit. GEANT4 simulations were performed to optimize the geometric parameters representing the detector thickness and the distance between the detectors. The system consisted of four silicon strip detectors for particle tracking and a calorimeter to measure the residual energies of the individual protons. The optimized pCT system design was then adjusted to ensure that the solution of a compressed sensing (CS)-based convex optimization problem would converge to the desired pCT images after a reasonable number of iterative corrections. In particular, we used a total variation-based formulation that exploits prior knowledge about the minimal variation of proton attenuation characteristics in the human body. Examinations performed using our CS algorithm showed that high-quality pCT images could be reconstructed from sets of 72 projections within 20 iterations, without the streaks or noise that can be caused by under-sampling and proton starvation. Moreover, the images yielded by this CS algorithm were of higher quality than those obtained using other reconstruction algorithms. The optimized pCT scanner system demonstrated the potential to perform high-quality pCT during on-line image-guided proton therapy, without increasing the imaging dose, by applying our CS-based pCT reconstruction algorithm. These results suggest that our optimized detector system and CS-based reconstruction algorithm are potentially useful for on-line proton therapy.

]]>
<![CDATA[Systematic Design of a Metal Ion Biosensor: A Multi-Objective Optimization Approach]]> https://www.researchpad.co/article/5989da77ab0ee8fa60b971c2

With the recent industrial expansion, heavy metals and other pollutants have increasingly contaminated our living surroundings. Heavy metals, being non-degradable, tend to accumulate in the food chain, resulting in potentially damaging toxicity to organisms. Thus, techniques to detect metal ions have gradually begun to receive attention. Recent progress in synthetic biology offers an alternative means for metal ion detection via promoter elements derived from microorganisms. To make the design easier, it is necessary to develop a systematic design method for evaluating and selecting adequate components to achieve a desired detection performance. A multi-objective (MO) H2/H∞ performance criterion is derived here for the design specifications of a metal ion biosensor, to achieve H2 optimal matching of a desired input/output (I/O) response and simultaneous H∞ optimal filtering of intrinsic parameter fluctuations and external cellular noise. According to the two design specifications, a Takagi-Sugeno (T-S) fuzzy model is employed to interpolate several local linear stochastic systems to approximate the nonlinear stochastic metal ion biosensor system, so that the multi-objective H2/H∞ design of the metal ion biosensor can be solved via an associated linear matrix inequality (LMI)-constrained multi-objective design problem. The analysis and design of a metal ion biosensor with optimal I/O response matching and optimal noise filtering ability can then be achieved by solving the multi-objective problem under a set of LMIs. Moreover, a multi-objective evolutionary algorithm (MOEA)-based library search method is employed to find adequate components from the corresponding libraries to solve the LMI-constrained MO H2/H∞ design problem. It is a useful tool for the design of metal ion biosensors, particularly regarding the tradeoffs between the design factors under consideration.

]]>
<![CDATA[Elucidation of molecular kinetic schemes from macroscopic traces using system identification]]> https://www.researchpad.co/article/5989db54ab0ee8fa60bdd040

Overall cellular responses to biologically relevant stimuli are mediated by networks of simpler lower-level processes. Although information about some of these processes can now be obtained by visualizing and recording events at the molecular level, this is still possible only in especially favorable cases. Therefore, the development of methods to extract the dynamics and relationships between the different lower-level (microscopic) processes from the overall (macroscopic) response remains a crucial challenge in the understanding of many aspects of physiology. Here we have devised a hybrid computational-analytical method to accomplish this task, the SYStems-based MOLecular kinetic scheme Extractor (SYSMOLE). SYSMOLE utilizes system-identification input-output analysis to obtain a transfer function between the stimulus and the overall cellular response in the Laplace-transformed domain. It then derives a Markov-chain state molecular kinetic scheme uniquely associated with the transfer function by means of a classification procedure and an analytical step that imposes general biological constraints. We first tested SYSMOLE with synthetic data and evaluated its performance in terms of its rate of convergence to the correct molecular kinetic scheme and its robustness to noise. We then examined its performance on real experimental traces by analyzing macroscopic calcium-current traces elicited by membrane depolarization. SYSMOLE derived the correct, previously known molecular kinetic scheme describing the activation and inactivation of the underlying calcium channels and correctly identified the accepted mechanism of action of nifedipine, a calcium-channel blocker clinically used in patients with cardiovascular disease. Finally, we applied SYSMOLE to study the pharmacology of a new class of glutamate antipsychotic drugs and their crosstalk mechanism through a heteromeric complex of G protein-coupled receptors. Our results indicate that our methodology can be successfully applied to accurately derive molecular kinetic schemes from experimental macroscopic traces, and we anticipate that it may be useful in the study of a wide variety of biological systems.
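
The system-identification step can be given a flavor with a generic discrete-time sketch: fit a low-order input-output (ARX) model by least squares. SYSMOLE itself works with Laplace-domain transfer functions and then maps them to Markov-chain kinetic schemes; the code below is only a stand-in for that first identification stage.

    import numpy as np

    def fit_arx(u, y, na=2, nb=2):
        # Least-squares ARX fit: y[t] = sum_i a_i y[t-i] + sum_j b_j u[t-j],
        # a generic system-identification step (discrete-time analogue, for
        # illustration only).
        n = max(na, nb)
        rows = []
        for t in range(n, len(y)):
            rows.append(np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]]))
        Phi = np.array(rows)
        theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
        return theta[:na], theta[na:]     # AR coefficients, input coefficients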

]]>
<![CDATA[Adaptation to random and systematic errors: Comparison of amputee and non-amputee control interfaces with varying levels of process noise]]> https://www.researchpad.co/article/5989db50ab0ee8fa60bdbecc

The objective of this study was to understand how people adapt to errors when using a myoelectric control interface. We compared adaptation across 1) non-amputee subjects using joint angle, joint torque, and myoelectric control interfaces, and 2) amputee subjects using myoelectric control interfaces with residual and intact limbs (five total control interface conditions). We measured trial-by-trial adaptation to self-generated errors and random perturbations during a virtual, single degree-of-freedom task with two levels of feedback uncertainty, and evaluated adaptation by fitting a hierarchical Kalman filter model. We have two main results. First, adaptation to random perturbations was similar across all control interfaces, whereas adaptation to self-generated errors differed. These patterns matched predictions of our model, which was fit to each control interface by changing the process noise parameter that represented system variability. Second, in amputee subjects, we found similar adaptation rates and error levels between residual and intact limbs. These results link prosthesis control to broader areas of motor learning and adaptation and provide a useful model of adaptation with myoelectric control. The model of adaptation will help us understand and solve prosthesis control challenges, such as providing additional sensory feedback.
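
A single-level version of the Kalman-filter account of adaptation is easy to write down; the paper fits a hierarchical variant, so the sketch below (with hypothetical noise parameters q and r) only conveys the core update: the adaptation rate is the Kalman gain, which grows with process noise and shrinks with feedback uncertainty.

    import numpy as np

    def kalman_adaptation(errors, q=0.05, r=1.0):
        # Scalar Kalman filter for trial-by-trial adaptation: the internal
        # estimate moves toward each observed error by a fraction equal to
        # the Kalman gain. Larger process noise q -> faster adaptation;
        # larger observation noise r (feedback uncertainty) -> slower.
        x, p = 0.0, 1.0                  # state estimate and its variance
        estimates = []
        for e in errors:
            p += q                       # predict: variance grows by process noise
            k = p / (p + r)              # Kalman gain
            x += k * (e - x)             # correct toward the observed error
            p *= (1.0 - k)
            estimates.append(x)
        return np.array(estimates)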

]]>