ResearchPad - signal-filtering Default RSS Feed en-us © 2020 Newgen KnowledgeWorks

<![CDATA[Genuine cross-frequency coupling networks in human resting-state electrophysiological recordings]]>

Genuine interareal cross-frequency coupling (CFC) can be identified from human resting-state activity using magnetoencephalography, stereoelectroencephalography, and novel network approaches. CFC couples slow theta and alpha oscillations to faster oscillations across brain regions.

<![CDATA[ECG-based prediction algorithm for imminent malignant ventricular arrhythmias using decision tree]]> Early prediction of malignant ventricular arrhythmia (MVA) is useful to avoid delay in rescue operations. Recently, researchers have developed several algorithms to predict MVA using various features derived from the electrocardiogram (ECG). However, several issues in MVA prediction remain unresolved: the effect of the number of ECG features on prediction performance is unclear, an alert of an imminent MVA may arrive very late, and the performance of algorithms predicting MVA minutes before onset is uncertain. To overcome these problems, this research conducts an in-depth study of the number and types of ECG features implemented in a decision tree classifier. In addition, this research investigates the algorithm’s execution time before the occurrence of MVA to minimize delays in warnings. Lastly, this research studies both the sensitivity and specificity of the algorithm to reveal the performance of MVA prediction from interval to interval. To strengthen the analysis, classifiers such as support vector machine and naive Bayes are also examined for comparison. Three phases were required to achieve these objectives: a literature review of existing relevant studies; the design and development of four modules for predicting MVA; and rigorous experiments on the feature selection and classification modules. The results show that eight ECG features with a decision tree classifier achieved good prediction performance in terms of execution time and sensitivity. The highest sensitivity and specificity were 95% and 90% respectively, in the fourth 5-minute interval (15.1–20 minutes) preceding the onset of an arrhythmia event.
Such results imply that the fourth 5-minute interval would be the best time to perform prediction.
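The classification step described above can be sketched with a standard decision tree. This is an illustrative reconstruction, not the authors' code: the eight ECG features and all data below are simulated stand-ins.

```python
# Hypothetical sketch: classifying 5-minute ECG feature windows with a
# decision tree. Feature values are simulated, not the study's dataset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated feature matrix: 8 ECG-derived features per 5-minute window.
n = 200
X_normal = rng.normal(0.0, 1.0, size=(n, 8))
X_pre_mva = rng.normal(1.0, 1.0, size=(n, 8))   # shifted to mimic pre-arrhythmia drift
X = np.vstack([X_normal, X_pre_mva])
y = np.array([0] * n + [1] * n)                 # 1 = window precedes an MVA event

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
sens = ((pred == 1) & (y_te == 1)).sum() / (y_te == 1).sum()
spec = ((pred == 0) & (y_te == 0)).sum() / (y_te == 0).sum()
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```

Sensitivity and specificity are reported separately, mirroring how the abstract evaluates each 5-minute interval before onset.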

<![CDATA[Reliability of a new analysis to compute time to stabilization following a single leg drop jump landing in children]]>

Although a number of different methods have been proposed to assess the time to stabilization (TTS), none is reliable in every axis, and no tests of this type have been carried out on children. The purpose of this study was thus to develop a new computational method to obtain TTS using a time-scale (frequency) approach [i.e. continuous wavelet transformation (WAV)] in children. Thirty normally-developed children (mean age 10.16 years, SD = 1.52) participated in the study. Every participant performed 30 single-leg drop jump landings with the dominant lower limb (barefoot) on a force plate from three different heights (15 cm, 20 cm and 25 cm). Five signals were used to compute the TTS: i) raw, ii) root mean squared, iii) sequential average processing, iv) the fitting curve of the signal using an unbounded third-order polynomial fit, and v) WAV. The reliability of the TTS was determined by computing both the Intraclass Correlation Coefficient (ICC) and the Standard Error of Measurement (SEM). In the antero-posterior and vertical axes, the values obtained with the WAV signal from all heights were similar to those obtained by the raw, root mean squared and sequential average processing signals. The values obtained for the medio-lateral axis were relatively small. The WAV provided substantial-to-good ICC values and low SEM for almost all axes and heights. The results of the current study thus suggest the WAV method could be used to compute overall TTS when studying children’s dynamic postural stability.
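To make the TTS idea concrete, here is a minimal sketch of one of the simpler signal variants compared in the study, sequential average processing, on a synthetic landing trace. The stabilization threshold used here is an assumption for demonstration, not the paper's criterion.

```python
# Illustrative sketch (not the paper's exact algorithm): time to stabilization
# (TTS) from a ground-reaction-force trace via sequential average processing.
import numpy as np

def tts_sequential_average(force, fs, tol=0.05):
    """Return TTS in seconds: the first time after which the sequential
    average of the signal stays within `tol` of its final value."""
    seq_avg = np.cumsum(force) / np.arange(1, len(force) + 1)
    final = seq_avg[-1]
    outside = np.abs(seq_avg - final) > tol * max(abs(final), 1e-12)
    if not outside.any():
        return 0.0
    last_outside = int(np.flatnonzero(outside)[-1])
    return (last_outside + 1) / fs

# Synthetic landing: a decaying oscillation settling onto body weight (~700 N).
fs = 1000
t = np.arange(0, 3.0, 1 / fs)
force = 700 + 400 * np.exp(-3 * t) * np.cos(2 * np.pi * 5 * t)
print(f"TTS = {tts_sequential_average(force, fs):.3f} s")
```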

<![CDATA[Minimal force transmission between human thumb and index finger muscles under passive conditions]]>

It has been hypothesized that force can be transmitted between adjacent muscles. Intermuscle force transmission violates the assumption that muscles act in mechanical isolation, and implies that predictions from biomechanical models are in error due to mechanical interactions between muscles, but the functional relevance of intermuscle force transmission is unclear. To investigate intermuscle force transmission between human flexor pollicis longus and the index finger part of flexor digitorum profundus, we compared finger flexion force produced by passive thumb flexion after one of three conditioning protocols: passive thumb flexion-extension cycling, thumb flexion maximal voluntary contraction (MVC), and thumb extension stretch. Finger flexion force increased after all three conditions. Compared to passive thumb flexion-extension cycling, change in finger flexion force was less after thumb extension stretch (mean difference 0.028 N, 95% CI 0.005 to 0.051 N), but not after thumb flexion MVC (0.007 N, 95% CI -0.020 to 0.033 N). As muscle conditioning changed finger flexion force produced by passive thumb flexion, the change in force is likely due to intermuscle force transmission. Thus, intermuscle force transmission resulting from passive stretch of an adjacent muscle is probably small enough to be ignored.

<![CDATA[A large scale screening study with a SMR-based BCI: Categorization of BCI users and differences in their SMR activity]]>

Brain-Computer Interfaces (BCIs) are inefficient for a non-negligible part of the population, estimated at around 25%. To understand this phenomenon in Sensorimotor Rhythm (SMR) based BCIs, data from a large-scale screening study conducted on 80 novice participants with the Berlin BCI system and its standard machine-learning approach were investigated. Each participant performed one BCI session with resting-state electroencephalography, Motor Observation, Motor Execution and Motor Imagery recordings and 128 electrodes. A significant portion of the participants (40%) could not achieve BCI control (feedback performance > 70%). Based on the performance of the calibration and feedback runs, BCI users were stratified into three groups. Analyses were performed to detect and elucidate the differences in the SMR activity of these groups. Statistics on reactive frequencies, task prevalence and classification results are reported. Based on their SMR activity, a systematic list of potential reasons for performance drops, and thus hints for possible improvements of BCI experimental design, is also given. The categorization of BCI users has several advantages, allowing researchers 1) to select subjects for further analyses as well as for testing new BCI paradigms or algorithms, 2) to adopt a better subject-dependent training strategy, and 3) to compare different studies more easily.

<![CDATA[Ambulatory assessment of phonotraumatic vocal hyperfunction using glottal airflow measures estimated from neck-surface acceleration]]>

Phonotraumatic vocal hyperfunction (PVH) is associated with chronic misuse and/or abuse of voice that can result in lesions such as vocal fold nodules. Clinical aerodynamic assessment of vocal function has recently been shown to differentiate between patients with PVH and healthy controls, providing meaningful insight into the pathophysiological mechanisms associated with these disorders. However, current clinical assessment of PVH is incomplete because it cannot objectively identify the type and extent of detrimental phonatory function associated with PVH during daily voice use. The current study sought to address this issue by incorporating, for the first time in a comprehensive ambulatory assessment, glottal airflow parameters estimated from a neck-mounted accelerometer and recorded to a smartphone-based voice monitor. We tested this approach on 48 patients with vocal fold nodules and 48 matched healthy-control subjects who each wore the voice monitor for a week. Seven glottal airflow features were estimated every 50 ms using an impedance-based inverse filtering scheme, and seven high-order summary statistics of each feature were computed every 5 minutes over voiced segments. Based on univariate hypothesis testing, eight glottal airflow summary statistics were found to be statistically different between the patient and healthy-control groups. L1-regularized logistic regression for a supervised classification task yielded a mean (standard deviation) area under the ROC curve of 0.82 (0.25) and an accuracy of 0.83 (0.14). These results outperform the state of the art for the same classification task and provide a new avenue to improve the assessment and treatment of hyperfunctional voice disorders.
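The classification stage named above, L1-regularised logistic regression over per-window summary statistics, can be sketched as follows. The data are simulated stand-ins for the glottal airflow summary statistics; the dimensions (96 subjects, 7 features × 7 statistics, 8 informative statistics) merely echo the abstract.

```python
# Hedged sketch: sparse (L1) logistic regression separating two groups from
# summary-statistic feature vectors. Simulated data, not the study's dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n, d = 96, 49                      # 96 subjects, 7 features x 7 statistics
X = rng.normal(size=(n, d))
y = np.array([0] * 48 + [1] * 48)  # 0 = control, 1 = patient
X[y == 1, :8] += 0.9               # assume 8 statistics carry group information

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
auc = roc_auc_score(y, clf.decision_function(X))
n_kept = int((clf.coef_ != 0).sum())
print(f"train AUC={auc:.2f}, non-zero coefficients: {n_kept}/{d}")
```

The L1 penalty zeroes out most coefficients, which is one way such a model can concentrate on the handful of statistically different statistics.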

<![CDATA[Poincaré plot analysis of cerebral blood flow signals: Feature extraction and classification methods for apnea detection]]>


Rheoencephalography is a simple and inexpensive technique for cerebral blood flow assessment; however, it is not used in clinical practice because its correlation with clinical conditions has not yet been extensively demonstrated. The present study investigates the ability of Poincaré Plot descriptors from rheoencephalography signals to detect apneas in volunteers.


A group of 16 subjects participated in the study. Rheoencephalography data from baseline and apnea periods were recorded and Poincaré Plot descriptors were extracted from the reconstructed attractors with different time lags (τ). Among the set of extracted features, those presenting significant differences between baseline and apnea recordings were used as inputs to four different classifiers to optimize the apnea detection.


Three features showed significant differences between apnea and baseline signals: the Poincaré Plot ratio (SDratio), its correlation (R) and the Complex Correlation Measure (CCM). Those differences were optimized for time lags smaller than those recommended in previous works for other biomedical signals, all of them being lower than the threshold established by the position of the inflection point in the CCM curves. The classifier showing the best performance was the classification tree, with 81% accuracy and an area under the curve of the receiver operating characteristic of 0.927. This performance was obtained using a single input parameter, either SDratio or R.
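For readers unfamiliar with the descriptors named above, here is a minimal numerical sketch. SD1 and SD2 are the dispersions across and along the identity line of the lag-τ scatter plot; the ratio and the lagged correlation R follow directly (conventions for the ratio's orientation vary; this is one common definition, not necessarily the paper's exact formulation).

```python
# Minimal sketch of Poincaré Plot descriptors for a lag-tau scatter plot.
import numpy as np

def poincare_descriptors(x, tau=1):
    a, b = x[:-tau], x[tau:]            # plot x[n+tau] against x[n]
    d1 = (a - b) / np.sqrt(2)           # distance from the identity line
    d2 = (a + b) / np.sqrt(2)           # position along the identity line
    sd1, sd2 = np.std(d1), np.std(d2)
    r = np.corrcoef(a, b)[0, 1]         # lagged correlation R
    return sd1, sd2, sd2 / sd1, r

rng = np.random.default_rng(2)
t = np.arange(0, 10, 0.01)
signal = np.sin(2 * np.pi * 1.0 * t) + 0.1 * rng.normal(size=t.size)
sd1, sd2, sd_ratio, r = poincare_descriptors(signal, tau=5)
print(f"SD1={sd1:.3f} SD2={sd2:.3f} SDratio={sd_ratio:.2f} R={r:.2f}")
```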


Poincaré Plot features extracted from the attractors of rheoencephalographic signals were able to track cerebral blood flow changes provoked by breath holding. Even though further validation with independent datasets is needed, those results suggest that nonlinear analysis of rheoencephalography might be a useful approach to assess the correlation of cerebral impedance with clinical changes.

<![CDATA[Detection and analysis of spatiotemporal patterns in brain activity]]>

There is growing evidence that population-level brain activity is often organized into propagating waves that are structured in both space and time. Such spatiotemporal patterns have been linked to brain function and observed across multiple recording methodologies and scales. The ability to detect and analyze these patterns is thus essential for understanding the working mechanisms of neural circuits. Here we present a mathematical and computational framework for the identification and analysis of multiple classes of wave patterns in neural population-level recordings. By drawing a conceptual link between spatiotemporal patterns found in the brain and coherent structures such as vortices found in turbulent flows, we introduce velocity vector fields to characterize neural population activity. These vector fields are calculated for both phase and amplitude of oscillatory neural signals by adapting optical flow estimation methods from the field of computer vision. Based on these velocity vector fields, we then introduce order parameters and critical point analysis to detect and characterize a diverse range of propagating wave patterns, including planar waves, sources, sinks, spiral waves, and saddle patterns. We also introduce a novel vector field decomposition method that extracts the dominant spatiotemporal structures in a recording. This enables neural data to be represented by the activity of a small number of independent spatiotemporal modes, providing an alternative to existing dimensionality reduction techniques which separate space and time components. We demonstrate the capabilities of the framework and toolbox with simulated data, local field potentials from marmoset visual cortex and optical voltage recordings from whole mouse cortex, and we show that pattern dynamics are non-random and are modulated by the presence of visual stimuli. These methods are implemented in a MATLAB toolbox, which is freely available under an open-source licensing agreement.
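One building block of such a framework, a spatial velocity-like field derived from instantaneous phase, can be illustrated on a toy plane wave. This is a conceptual stand-in, not the toolbox's optical-flow estimator: here the phase map comes from the Hilbert transform and the field from wrap-safe spatial phase differences.

```python
# Toy sketch: phase map of an oscillatory array recording, and its spatial
# gradient as a direction-of-propagation indicator for a plane wave.
import numpy as np
from scipy.signal import hilbert

fs, f = 1000, 10
t = np.arange(0, 1.0, 1 / fs)
x = np.arange(8)

# 8x8 electrode grid recording a plane wave travelling along x:
# each channel is sin(2*pi*f*t - 0.5 * column_index).
phase_xt = 2 * np.pi * f * t[None, None, :] - 0.5 * x[None, :, None]
lfp = np.tile(np.sin(phase_xt), (8, 1, 1))            # (rows, cols, time)

phase = np.angle(hilbert(lfp, axis=-1))               # instantaneous phase per channel
z = np.exp(1j * phase[..., 500])                      # snapshot at t = 0.5 s

# Wrap-safe spatial phase differences: the angle of z[n+1] * conj(z[n]).
gx = np.angle(z[:, 1:] * np.conj(z[:, :-1]))          # gradient along x
gy = np.angle(z[1:, :] * np.conj(z[:-1, :]))          # gradient along y

# For a plane wave the gradient is uniform and points along propagation.
print(f"d(phase)/dx ~ {gx.mean():.2f} rad/electrode, d(phase)/dy ~ {gy.mean():.2f}")
```

Sources, sinks, spirals and saddles are then detectable as critical points where such a vector field vanishes, which is the analysis the abstract describes.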

<![CDATA[Nil effects of μ-rhythm phase-dependent burst-rTMS on cortical excitability in humans: A resting-state EEG and TMS-EEG study]]>

Repetitive transcranial magnetic stimulation (rTMS) can induce excitability changes of a stimulated brain area through synaptic plasticity mechanisms. High-frequency (100 Hz) triplets of rTMS synchronized to the negative but not the positive peak of the ongoing sensorimotor μ-rhythm, isolated from concurrently acquired electroencephalography (EEG), resulted in a reproducible long-term potentiation-like increase of motor evoked potential (MEP) amplitude, an index of corticospinal excitability (Zrenner et al. 2018, Brain Stimul 11:374–389). Here, we analyzed the EEG and TMS-EEG data from Zrenner et al. (2018) to investigate the effects of μ-rhythm-phase-dependent burst-rTMS on EEG-based measures of cortical excitability. We used resting-state EEG to assess μ- and β-power in the motor cortex ipsi- and contralateral to the stimulation, and single-pulse TMS-evoked and induced EEG responses in the stimulated motor cortex. We found that μ-rhythm-phase-dependent burst-rTMS did not significantly change any of these EEG measures, despite the presence of a significant differential and reproducible effect on MEP amplitude. We conclude that EEG measures of cortical excitability do not reflect corticospinal excitability as measured by MEP amplitude. Most likely this is explained by the fact that rTMS induces complex changes at the molecular and synaptic level, towards both excitation and inhibition, that cannot be differentiated at the macroscopic level by EEG.

<![CDATA[Neural responses to natural and model-matched stimuli reveal distinct computations in primary and nonprimary auditory cortex]]>

A central goal of sensory neuroscience is to construct models that can explain neural responses to natural stimuli. As a consequence, sensory models are often tested by comparing neural responses to natural stimuli with model responses to those stimuli. One challenge is that distinct model features are often correlated across natural stimuli, and thus model features can predict neural responses even if they do not in fact drive them. Here, we propose a simple alternative for testing a sensory model: we synthesize a stimulus that yields the same model response as each of a set of natural stimuli, and test whether the natural and “model-matched” stimuli elicit the same neural responses. We used this approach to test whether a common model of auditory cortex—in which spectrogram-like peripheral input is processed by linear spectrotemporal filters—can explain fMRI responses in humans to natural sounds. Prior studies have shown that this model has good predictive power throughout auditory cortex, but this finding could reflect feature correlations in natural stimuli. We observed that fMRI responses to natural and model-matched stimuli were nearly equivalent in primary auditory cortex (PAC) but that nonprimary regions, including those selective for music or speech, showed highly divergent responses to the two sound sets. This dissociation between primary and nonprimary regions was less clear from model predictions due to the influence of feature correlations across natural stimuli. Our results provide a signature of hierarchical organization in human auditory cortex, and suggest that nonprimary regions compute higher-order stimulus properties that are not well captured by traditional models. Our methodology enables stronger tests of sensory models and could be broadly applied in other domains.

<![CDATA[Altering alpha-frequency brain oscillations with rapid analog feedback-driven neurostimulation]]>

Oscillations of the brain’s local field potential (LFP) may coordinate neural ensembles and brain networks. It has been difficult to causally test this model or to translate its implications into treatments, because there are few reliable ways to alter LFP oscillations. We developed a closed-loop analog circuit to enhance brain oscillations by feeding them back into cortex through phase-locked transcranial electrical stimulation. We tested the system in a rhesus macaque with chronically implanted electrode arrays, targeting 8–15 Hz (alpha) oscillations. Ten seconds of stimulation increased alpha oscillatory power for up to 1 second after stimulation offset. In contrast, open-loop stimulation decreased alpha power. There was no effect in the neighboring 15–30 Hz (beta) LFP rhythm or on a neighboring array that did not participate in closed-loop feedback. Analog closed-loop neurostimulation might thus be a useful strategy for altering brain oscillations, both for basic research and the treatment of neuro-psychiatric disease.
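The phase-locking idea can be illustrated offline (the actual system used a real-time analog circuit, which the sketch below does not reproduce): band-pass the LFP to the alpha band, extract instantaneous phase with the Hilbert transform, and mark samples where a phase-locked stimulus would be triggered. Triggering at phase zero is an arbitrary choice for illustration.

```python
# Offline sketch of phase-locked triggering on an alpha-band LFP.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000
t = np.arange(0, 5.0, 1 / fs)
rng = np.random.default_rng(3)
lfp = np.sin(2 * np.pi * 11 * t) + 0.5 * rng.normal(size=t.size)  # 11 Hz alpha + noise

b, a = butter(4, [8, 15], btype="bandpass", fs=fs)  # 8-15 Hz alpha band
alpha = filtfilt(b, a, lfp)
phase = np.angle(hilbert(alpha))                    # instantaneous phase in (-pi, pi]

# Trigger whenever the phase crosses zero going upward (one pulse per cycle).
triggers = np.nonzero((phase[:-1] < 0) & (phase[1:] >= 0))[0]
print(f"{len(triggers)} triggers over {t[-1]:.1f} s (~11 per second expected)")
```

In a real closed-loop system this phase estimate must be causal and low-latency, which is precisely why the authors used an analog feedback circuit rather than offline filtering.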

<![CDATA[Two-stage motion artefact reduction algorithm for electrocardiogram using weighted adaptive noise cancelling and recursive Hampel filter]]>

The presence of motion artefacts in ECG signals can cause misleading interpretation of cardiovascular status. Recently, reducing the motion artefact in ECG signals has gained the interest of many researchers. Due to the overlapping nature of the motion artefact with the ECG signal, it is difficult to reduce motion artefact without distorting the original ECG signal. However, an adaptive noise canceler has been shown to be effective in reducing motion artefacts if an appropriate noise reference, correlated with the noise in the ECG signal, is available. Unfortunately, the noise reference is not always correlated with the motion artefact, and filtering with such a noise reference may further contaminate the ECG signal. In this paper, a two-stage filtering motion artefact reduction algorithm is proposed, with one method for each stage. The weighted adaptive noise filtering method (WAF) is proposed for the first stage. The acceleration derivative is used as the motion artefact reference, and the Pearson correlation coefficient between acceleration and ECG signal is used as a weighting factor. In the second stage, a recursive Hampel filter-based estimation method (RHFBE) is proposed for estimating the ECG signal segments, based on the spatial correlation of the ECG segment component obtained from successive ECG signals. A real-world dataset is used to evaluate the effectiveness of the proposed methods compared to the conventional adaptive filter. The results show a promising enhancement in terms of reducing motion artefacts from ECG signals recorded by a cost-effective single-lead ECG sensor during several activities of different subjects.
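The outlier-rejection idea behind the second stage can be sketched with a plain (non-recursive) Hampel filter: replace any sample that deviates from the local median by more than k scaled MADs with that median. The recursive variant in the paper additionally feeds filtered samples back into the window; the simple form below is only an illustration of the principle.

```python
# Minimal non-recursive Hampel filter on a synthetic ECG-like trace with
# injected motion-artefact spikes. Illustrative, not the paper's RHFBE.
import numpy as np

def hampel(x, half_window=5, k=3.0):
    y = x.copy()
    for i in range(len(x)):
        lo, hi = max(0, i - half_window), min(len(x), i + half_window + 1)
        window = x[lo:hi]
        med = np.median(window)
        mad = 1.4826 * np.median(np.abs(window - med))  # ~std for Gaussian data
        if mad > 0 and abs(x[i] - med) > k * mad:
            y[i] = med                                   # replace the outlier
    return y

t = np.arange(0, 1, 0.001)
ecg_like = np.sin(2 * np.pi * 1.2 * t)
ecg_like[200] += 5.0        # inject motion-artefact spikes
ecg_like[600] -= 4.0
cleaned = hampel(ecg_like)
print(f"max |sample| before: {np.abs(ecg_like).max():.1f}, after: {np.abs(cleaned).max():.1f}")
```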

<![CDATA[Adaptive feature detection from differential processing in parallel retinal pathways]]>

To transmit information efficiently in a changing environment, the retina adapts to visual contrast by adjusting its gain, latency and mean response. Additionally, the temporal frequency selectivity, or bandwidth, changes to encode the absolute intensity when the stimulus environment is noisy, and intensity differences when noise is low. We show that the On pathway of On-Off retinal amacrine and ganglion cells is required to change temporal bandwidth but not other adaptive properties. This remarkably specific adaptive mechanism arises from differential effects of contrast on the On and Off pathways. We analyzed a biophysical model fit only to a cell’s membrane potential, and verified pharmacologically that it accurately revealed the two pathways. We conclude that changes in bandwidth arise mostly from differences in synaptic threshold in the two pathways, rather than from synaptic release dynamics, as has previously been proposed to underlie contrast adaptation. Different efficient codes are selected by different thresholds in two independently adapting neural pathways.

<![CDATA[Scalable preprocessing of high volume environmental acoustic data for bioacoustic monitoring]]>

In this work, we examine the problem of efficiently preprocessing and denoising high volume environmental acoustic data, which is a necessary step in many bird monitoring tasks. Preprocessing is typically made up of multiple steps which are considered separately from each other. These are often resource intensive, particularly because the volume of data involved is high. We focus on addressing two challenges within this problem: how to combine existing preprocessing tasks while maximising the effectiveness of each step, and how to process this pipeline quickly and efficiently, so that it can be used to process high volumes of acoustic data. We describe a distributed system designed specifically for this problem, utilising a master-slave model with data parallelisation. By investigating the impact of individual preprocessing tasks on each other, and their execution times, we determine an efficient and accurate order for preprocessing tasks within the distributed system. We find that, using a single core, our pipeline executes 1.40 times faster compared to manually executing all preprocessing tasks. We then apply our pipeline in the distributed system and evaluate its performance. We find that our system is capable of preprocessing bird acoustic recordings at a rate of 174.8 seconds of audio per second of real time with 32 cores over 8 virtual machines, which is 21.76 times faster than a serial process.
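The master-slave, data-parallel structure can be sketched as below, using a thread pool in place of the paper's distributed virtual machines and placeholder preprocessing steps; the real system farms out recordings across worker machines, but the shape of the computation is the same.

```python
# Toy sketch: a "master" distributes whole recordings to a worker pool,
# each worker runs the full fixed-order preprocessing chain on one recording.
# The steps are placeholders, not the paper's actual preprocessing tasks.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def preprocess(recording):
    """Placeholder pipeline: resample -> denoise -> segment, in one pass."""
    x = recording[::2]                          # crude downsampling stand-in
    x = np.convolve(x, np.ones(5) / 5, "same")  # smoothing as a denoising stand-in
    return np.array_split(x, 4)                 # segment for downstream analysis

rng = np.random.default_rng(4)
recordings = [rng.normal(size=8000) for _ in range(8)]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(preprocess, recordings))

print(f"processed {len(results)} recordings, {len(results[0])} segments each")
```

Running the whole chain inside one worker call, rather than one pool per step, avoids shuffling intermediate data between tasks, which is the ordering question the paper investigates.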

<![CDATA[The positioning of the asymmetric septum during sporulation in Bacillus subtilis]]>

Probably one of the most controversial questions about the cell division of Bacillus subtilis, a rod-shaped bacterium, concerns the mechanism that ensures correct division septum placement: at mid-cell during vegetative growth but closer to one end during sporulation. In general, bacteria multiply by binary fission, in which the division septum forms almost exactly at the cell centre. How the division machinery achieves such accuracy is a question of continuing interest. We understand in some detail how this is achieved during vegetative growth in Escherichia coli and B. subtilis, where two main negative regulators, nucleoid occlusion and the Min system, help to determine the division site, but we still do not know exactly how the asymmetric septation site is determined during sporulation in B. subtilis. Clearly, the inhibitory effects of the nucleoid occlusion and Min system on polar division have to be overcome. We evaluated the positioning of the asymmetric septum and its accuracy by statistical analysis of the site of septation. We also clarified the role of SpoIIE, RefZ and MinCD on the accuracy of this process. We determined that the sporulation septum forms approximately 1/6 of a cell length from one of the cell poles with high precision and that SpoIIE, RefZ and MinCD have a crucial role in precisely localizing the sporulation septum. Our results strongly support the idea that asymmetric septum formation is a very precise and highly controlled process regulated by a still unknown mechanism.

<![CDATA[Introducing chaos behavior to kernel relevance vector machine (RVM) for four-class EEG classification]]>

This paper introduces a chaos kernel function for the relevance vector machine (RVM) in EEG signal classification, an important component of Brain-Computer Interfaces (BCIs). The novel kernel function is derived from a chaotic system, inspired by the fact that human brain signals exhibit chaotic characteristics and behaviors. Introducing chaotic dynamics into the kernel function enables the RVM to achieve higher classification capacity. The proposed method is validated within the framework of a one-versus-one common spatial pattern (OVO-CSP) classifier to classify motor imagery (MI) of four movements in a publicly accessible dataset. To illustrate the performance of the proposed kernel function, Gaussian and polynomial kernel functions are considered for comparison. Experimental results show that the proposed kernel function achieved higher accuracy than the Gaussian and polynomial kernel functions, which indicates that accounting for chaotic behavior is helpful in EEG signal classification.
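To make the kernel-substitution idea concrete, here is a hypothetical sketch; it is not the authors' kernel, and it uses an SVM rather than an RVM (scikit-learn ships no RVM). A Gaussian kernel is modulated by a term obtained by iterating the logistic map, a classic chaotic system. The modulated kernel is not guaranteed to be positive semidefinite; the example only demonstrates the mechanics of plugging a custom kernel into a kernel machine.

```python
# Hypothetical chaos-flavoured kernel plugged into a kernel machine.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.svm import SVC

def logistic_map_weight(z0, r=3.9, n_iter=20):
    """Iterate the logistic map z <- r*z*(1-z) from values in (0, 1)."""
    z = np.clip(z0, 1e-6, 1 - 1e-6)
    for _ in range(n_iter):
        z = r * z * (1 - z)
    return z

def chaos_kernel(X, Y, gamma=0.5):
    d2 = cdist(X, Y, "sqeuclidean")
    base = np.exp(-gamma * d2)                      # ordinary Gaussian kernel
    # small chaotic modulation; symmetric since it depends only on d2
    return base * (1 + 0.1 * logistic_map_weight(np.exp(-d2)))

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
clf = SVC(kernel=chaos_kernel).fit(X, y)            # callable-kernel mechanism
print(f"training accuracy: {clf.score(X, y):.2f}")
```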

<![CDATA[The Effects of Sensorineural Hearing Impairment on Asynchronous Glimpsing of Speech]]>

In a previous study with normal-hearing listeners, we evaluated consonant identification masked by two or more spectrally contiguous bands of noise, with asynchronous square-wave modulation applied to neighboring bands. Speech recognition thresholds were 5.1–8.5 dB better when neighboring bands were presented to different ears (dichotic) than when all bands were presented to one ear (monaural), depending on the spectral width of the frequency bands. This dichotic advantage was interpreted as reflecting masking release from peripheral spread of masking from neighboring frequency bands. The present study evaluated this effect in listeners with sensorineural hearing loss, a population more susceptible to spread of masking. Speech perception (vowel-consonant-vowel stimuli, as in /aBa/) was measured in the presence of fluctuating noise that was either modulated synchronously across frequency or asynchronously. Hearing-impaired listeners (n = 9) and normal-hearing controls were tested at either the same intensity (n = 7) or same sensation level (n = 8). Hearing-impaired listeners had mild-to-moderate hearing loss and symmetrical, flat audiometric thresholds. While all groups of listeners performed better in the dichotic than monaural condition, this effect was smaller for the hearing-impaired (3.5 dB) and equivalent-sensation-level controls (3.3 dB) than controls tested at the same intensity (11.0 dB). The present study is consistent with the idea that dichotic presentation can improve speech-in-noise listening for hearing-impaired listeners, and may be enhanced when combined with amplification.

<![CDATA[Impaired Cerebral Autoregulation Is Associated with Brain Atrophy and Worse Functional Status in Chronic Ischemic Stroke]]>

Dynamic cerebral autoregulation (dCA) is impaired following stroke. However, the relationship between dCA, brain atrophy, and functional outcomes following stroke remains unclear. In this study, we aimed to determine whether impairment of dCA is associated with atrophy in specific regions or globally, thereby affecting daily functions in stroke patients.

We performed a retrospective analysis of 33 subjects with chronic infarctions in the middle cerebral artery territory, and 109 age-matched non-stroke subjects. dCA was assessed via the phase relationship between arterial blood pressure and cerebral blood flow velocity. Brain tissue volumes were quantified from MRI. Functional status was assessed by gait speed, instrumental activities of daily living (IADL), modified Rankin Scale, and NIH Stroke Scale.

Compared to the non-stroke group, stroke subjects showed degraded dCA bilaterally, and showed gray matter atrophy in the frontal, parietal and temporal lobes ipsilateral to the infarct. In stroke subjects, better dCA was associated with less temporal lobe gray matter atrophy on the infarcted side (p = 0.029), faster gait speed (p = 0.018) and lower IADL score (p = 0.002). Our results indicate that better dynamic cerebral perfusion regulation is associated with less atrophy and better long-term functional status in older adults with chronic ischemic infarctions.

<![CDATA[A New Generation of FRET Sensors for Robust Measurement of Gαi1, Gαi2 and Gαi3 Activation Kinetics in Single Cells]]>

G-protein coupled receptors (GPCRs) can activate a heterotrimeric G-protein complex with subsecond kinetics. Genetically encoded biosensors based on Förster resonance energy transfer (FRET) are ideally suited for the study of such fast signaling events in single living cells. Here we report on the construction and characterization of three FRET biosensors for the measurement of Gαi1, Gαi2 and Gαi3 activation. To enable quantitative long-term imaging of FRET biosensors with high dynamic range, fluorescent proteins with enhanced photophysical properties are required. Therefore, we use the currently brightest and most photostable CFP variant, mTurquoise2, as the donor, fused to the Gαi subunit, and cp173Venus, fused to the Gγ2 subunit, as the acceptor. The Gαi FRET biosensor constructs are expressed together with Gβ1 from a single plasmid, providing preferred relative expression levels with reduced variation in mammalian cells. The Gαi FRET sensors showed a robust response to activation of endogenous or over-expressed alpha-2A-adrenergic receptors, which was inhibited by pertussis toxin. Moreover, we observed activation of the Gαi FRET sensor in single cells upon stimulation of several GPCRs, including the LPA2, M3 and BK2 receptors. Furthermore, we show that the sensors are well suited to extract kinetic parameters from fast measurements in the millisecond time range. This new generation of FRET biosensors for Gαi1, Gαi2 and Gαi3 activation will be valuable for live-cell measurements that probe Gαi activation.

<![CDATA[Birdsong Denoising Using Wavelets]]>

Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is the low signal-to-noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order-of-magnitude improvement in noise reduction on natural noisy bird recordings.
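The core of wavelet-based denoising can be illustrated in a few lines. The study combines a full wavelet packet decomposition with band-pass/low-pass filtering; the simplified sketch below uses only a single-level Haar transform with soft thresholding on a synthetic "whistle plus noise" signal, so it stands in for the idea rather than the method.

```python
# Simplified wavelet-thresholding denoising: one-level Haar transform,
# soft-threshold the detail (high-frequency) coefficients, invert.
import numpy as np

def haar_denoise(x, threshold):
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)        # low-pass half
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)        # high-pass half
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0)
    y = np.empty_like(x)                             # inverse Haar transform
    y[0::2] = (approx + detail) / np.sqrt(2)
    y[1::2] = (approx - detail) / np.sqrt(2)
    return y

rng = np.random.default_rng(6)
fs = 8000
t = np.arange(0, 0.5, 1 / fs)
clean = np.sin(2 * np.pi * 200 * t) * np.exp(-2 * t)   # decaying whistle
noisy = clean + 0.3 * rng.normal(size=t.size)
denoised = haar_denoise(noisy, threshold=0.3)          # threshold ~ noise std

snr = lambda ref, sig: 10 * np.log10(np.sum(ref**2) / np.sum((ref - sig)**2))
print(f"SNR before: {snr(clean, noisy):.1f} dB, after: {snr(clean, denoised):.1f} dB")
```

A full wavelet packet decomposition repeats this split on both halves recursively, giving the finer frequency tiling that lets band-limited birdsong be separated from broadband noise.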