ResearchPad - computational-neuroscience https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks

<![CDATA[Innovative machine learning approach and evaluation campaign for predicting the subjective feeling of work-life balance among employees]]> https://www.researchpad.co/article/elastic_article_14744

At present, many researchers hope that artificial intelligence, and machine learning in particular, will improve aspects of everyday life for individuals, cities and whole nations alike. For example, it has been speculated that machine learning could soon relieve employees of part of their duties, which may improve processes or help to find the most effective ways of performing tasks. Consequently, in the long run, it would help to enhance employees’ work-life balance (WLB), and workers’ overall quality of life would improve too. But what happens if machine learning itself is employed to search for ways of achieving work-life balance? The authors of the paper therefore used a machine learning tool to search for the factors that influence the subjective feeling of one’s work-life balance; the results could help to predict and prevent work-life imbalance in the future. To this end, data provided by an exceptionally large group of 800 employees were used, one of the largest samples in comparable Polish studies to date, and one of the first in which so many employees were analysed with an artificial neural network. To enable replication, the specific setup of the study and a description of the dataset are provided. After analysing the data and conducting several experiments, correlations between several factors and work-life balance were identified: the most significant was the relation between the feeling of balance and actual working hours; shifting this variable caused the tool to predict a switch from balance to imbalance, and vice versa. Other factors that proved significant for predicted WLB are the amount of weekly free time the employee has for themselves, working only at weekends, being self-employed, and the subjective assessment of one’s financial status. In the study, the dataset is balanced, the most important features are selected with the SelectKBest algorithm, and an artificial neural network with two hidden layers of 50 and 25 neurons, ReLU activations and the Adam optimizer is constructed and trained on 90% of the dataset. In tests, it predicts WLB from the prepared dataset and the selected features with 81% accuracy.
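A minimal scikit-learn sketch of the pipeline outlined above is given below, assuming placeholder data in place of the 800-employee survey; the number of selected features (k = 10) and other preprocessing details are illustrative assumptions, not the paper's exact settings.

```python
# Minimal sketch of the described pipeline (assumptions: synthetic data stands in
# for the 800-employee survey; k=10 selected features is illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Placeholder data: replace with the balanced survey dataset.
X, y = make_classification(n_samples=800, n_features=30, n_informative=8,
                           weights=[0.5, 0.5], random_state=0)

# 90/10 train/test split, as in the study.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, stratify=y, random_state=0)

# SelectKBest feature selection followed by a 2-hidden-layer (50, 25) ReLU/Adam network.
model = make_pipeline(
    StandardScaler(),
    SelectKBest(score_func=f_classif, k=10),
    MLPClassifier(hidden_layer_sizes=(50, 25), activation="relu",
                  solver="adam", max_iter=1000, random_state=0),
)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```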

]]>
<![CDATA[Forecasting the monthly incidence rate of brucellosis in west of Iran using time series and data mining from 2010 to 2019]]> https://www.researchpad.co/article/elastic_article_13811

The identification of statistical models for the accurate and timely forecasting of infectious disease outbreaks is very important for the healthcare system. This study was therefore conducted to assess and compare the performance of four machine-learning methods in modeling and forecasting brucellosis time series data based on climatic parameters.

Methods

In this cohort study, human brucellosis cases and climatic parameters were analyzed on a monthly basis for Qazvin province (located in northwestern Iran) over a period of 9 years (2010–2018). The data were split into training (80%) and testing (20%) subsets. Artificial neural network methods (radial basis function and multilayer perceptron), support vector machine and random forest models were fitted to each set. Performance of the models was assessed using the Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Root Error (MARE), and R2 criteria.

Results

The incidence rate of brucellosis in Qazvin province was 27.43 per 100,000 during 2010–2019. Based on our results, the RMSE (0.22), MAE (0.175) and MARE (0.007) values were smaller for the multilayer perceptron neural network than for the other three models. Moreover, the R2 value (0.99) was higher for this model. Therefore, the multilayer perceptron neural network exhibited the best performance in forecasting the studied data. Average wind speed and mean temperature were the most influential climatic parameters for the incidence of this disease.

Conclusions

The multilayer perceptron neural network can be used as an effective method for detecting the behavioral trend of brucellosis over time. Nevertheless, further studies focusing on the application and comparison of these methods are needed to identify the most appropriate forecasting method for this disease.

]]>
<![CDATA[Predicting 30-day hospital readmissions using artificial neural networks with medical code embedding]]> https://www.researchpad.co/article/N1f40719a-4631-45e6-bedb-5cf8a42ecf53

Reducing unplanned readmissions is a major focus of current hospital quality efforts. In order to avoid unfair penalization, administrators and policymakers use prediction models built from healthcare claims data to risk-adjust hospital performance. Regression-based models are a commonly utilized method for such risk-standardization across hospitals; however, these models often suffer from limited accuracy. In this study, we compare four prediction models for unplanned patient readmission for patients hospitalized with acute myocardial infarction (AMI), congestive heart failure (HF), and pneumonia (PNA) within the Nationwide Readmissions Database in 2014. We evaluated hierarchical logistic regression and compared its performance with gradient boosting and two models that utilize artificial neural networks. We show that unsupervised Global Vectors for Word Representation (GloVe) embeddings of administrative claims data, combined with artificial neural network classification models, improve prediction of 30-day readmission. Our best models increased the AUC for prediction of 30-day readmissions from 0.68 to 0.72 for AMI, 0.60 to 0.64 for HF, and 0.63 to 0.68 for PNA compared to hierarchical logistic regression. Furthermore, risk-standardized hospital readmission rates calculated from our artificial neural network model that employed embeddings led to reclassification of approximately 10% of hospitals across categories of hospital performance. This finding suggests that prediction models incorporating these methods classify hospitals differently than traditional regression-based approaches and that their role in assessing hospital performance warrants further investigation.
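The embedding-plus-classifier idea can be sketched as follows; the random embedding matrix below stands in for unsupervised GloVe vectors trained on claims sequences, and the code vocabulary, embedding dimension and network size are illustrative assumptions.

```python
# Minimal sketch of the embedding-plus-classifier idea (assumptions: the embedding
# matrix here is random and stands in for unsupervised GloVe vectors trained on
# claims; vocabulary size, dimensions and the network size are illustrative only).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_codes, emb_dim, n_admissions = 500, 50, 5000

# Stand-in for pretrained GloVe vectors over diagnosis/procedure codes.
code_embeddings = rng.normal(size=(n_codes, emb_dim))

# Each admission is a bag of codes; represent it as the mean of its code embeddings.
admissions = [rng.choice(n_codes, size=rng.integers(3, 15), replace=False)
              for _ in range(n_admissions)]
X = np.vstack([code_embeddings[codes].mean(axis=0) for codes in admissions])
y = rng.integers(0, 2, size=n_admissions)  # placeholder 30-day readmission labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```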

]]>
<![CDATA[Analysis on urban densification dynamics and future modes in southeastern Wisconsin, USA]]> https://www.researchpad.co/article/5c89775dd5eed0c4847d2b08

Urban change (urbanization) has dominated land change science for several decades. However, few studies have focused on what many scholars call the urban densification process (i.e., urban intensity expansion), despite its importance to planning and its subsequent impacts on the environment and local economies. This paper documents past urban densification patterns and uses this information to predict future densification trends in southeastern Wisconsin (SEWI), drawing on a rich dataset from the United States and adapting the well-known Land Transformation Model (LTM) for this purpose. Urban densification is a significant and progressive process that often accompanies urbanization more generally. An increasing proportion of lower density areas, rather than higher density areas, was the main characteristic of urban densification in SEWI from 2001 to 2011. We believe that improving urban land use efficiency to maintain rational densification is an effective means toward a sustainable urban landscape. Multiple goodness-of-fit metrics demonstrated that the reconfigured LTM performed relatively well in simulating urban densification patterns in 2006 and 2011, enabling us to forecast densification to 2016 and 2021. The predicted future urban densification patterns are likely to be characterized by higher densities continuing to increase at the expense of lower densities. We argue that detailed categories of urban density and specific relevant predictor variables are indispensable for densification prediction. Our study provides researchers working in land change science with important insights into modeling the urban densification process. The outcome of this model can help planners identify the current trajectory of urban development, enabling them to take informed action to promote planning objectives that benefit sustainable urbanization.

]]>
<![CDATA[Applications of artificial neural networks in health care organizational decision-making: A scoping review]]> https://www.researchpad.co/article/5c75ac5bd5eed0c484d08619

Health care organizations are leveraging machine-learning techniques, such as artificial neural networks (ANN), to improve delivery of care at a reduced cost. Applications of ANN to diagnosis are well-known; however, ANN are increasingly used to inform health care management decisions. We provide a seminal review of the applications of ANN to health care organizational decision-making. We screened 3,397 articles from six databases with coverage of Health Administration, Computer Science and Business Administration. We extracted study characteristics, aim, methodology and context (including level of analysis) from 80 articles meeting the inclusion criteria. Articles were published from 1997–2018 and originated from 24 countries, with a plurality of papers (26 articles) published by authors from the United States. Types of ANN used included ANN (36 articles), feed-forward networks (25 articles) and hybrid models (23 articles); reported accuracy varied from 50% to 100%. The majority of ANN informed decision-making at the micro level (61 articles), between patients and health care providers. Fewer ANN were deployed for intra-organizational (meso-level; 29 articles) and system, policy or inter-organizational (macro-level; 10 articles) decision-making. Our review identifies key characteristics and drivers of market uptake of ANN for health care organizational decision-making to guide further adoption of this technique.

]]>
<![CDATA[Evidence of a trans-kingdom plant disease complex between a fungus and plant-parasitic nematodes]]> https://www.researchpad.co/article/5c6dca20d5eed0c48452a801

Disease prediction tools improve management efforts for many plant diseases. Prediction and downstream prevention demand information about disease etiology, which can be complicated for some diseases, like those caused by soilborne microorganisms. Fortunately, the availability of machine learning methods has enabled researchers to elucidate complex relationships between hosts and pathogens without invoking difficult-to-satisfy assumptions. The etiology of a destructive plant disease, Verticillium wilt of mint, caused by the fungus Verticillium dahliae, was reevaluated with several supervised machine learning methods. Specifically, the objective of this research was to identify drivers of wilt in commercial mint fields, describe the relationships between these drivers, and predict wilt. Soil samples were collected from commercial mint fields. Wilt foci, V. dahliae, and plant-parasitic nematodes that can exacerbate wilt were quantified. Multiple linear regression, a generalized additive model, random forest, and an artificial neural network were fit to the data, validated with 10-fold cross-validation, and measures of explanatory and predictive performance were compared. All models selected nematodes within the genus Pratylenchus as the most important predictor of wilt. The fungus after which this disease is named, V. dahliae, was the fourth most important predictor of wilt, after crop age and cultivar. All models explained around 50% of the total variation (R2 ≤ 0.46) and exhibited comparable predictive error (RMSE ≤ 1.21). Collectively, these models revealed that quantitative relationships among the two pathogens, mint cultivar and crop age are required to explain wilt. The ascendance of Pratylenchus spp. in predicting symptoms of a disease assumed to be caused primarily by V. dahliae exposes the underestimated contribution of these nematodes to wilt. This research provides a foundation on which predictive forecasting tools can be developed for mint growers and reminds us of the lessons that can be learned by revisiting assumptions about disease etiology.
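One of the compared models (the random forest) with 10-fold cross-validation and a permutation-based importance ranking can be sketched as below; the synthetic data, predictor encoding and hyperparameters are illustrative assumptions rather than the study's values.

```python
# Minimal sketch of one of the compared models (random forest) with 10-fold
# cross-validation and permutation importance; data and hyperparameters are
# illustrative assumptions, not the study's values.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score, KFold
from sklearn.inspection import permutation_importance

# Placeholder for soil-sample predictors such as Pratylenchus counts, V. dahliae
# density, crop age and cultivar (encoded numerically), with wilt foci as the response.
X, y = make_regression(n_samples=120, n_features=6, noise=5.0, random_state=0)

rf = RandomForestRegressor(n_estimators=500, random_state=0)

# 10-fold cross-validated RMSE, as used to compare the models.
cv = KFold(n_splits=10, shuffle=True, random_state=0)
rmse = -cross_val_score(rf, X, y, cv=cv, scoring="neg_root_mean_squared_error")
print("mean CV RMSE:", rmse.mean())

# Variable importance on the full fit (the study ranked Pratylenchus first).
rf.fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
print("importances:", imp.importances_mean)
```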

]]>
<![CDATA[STRFs in primary auditory cortex emerge from masking-based statistics of natural sounds]]> https://www.researchpad.co/article/5c4a3057d5eed0c4844bfd7a

We investigate how the neural processing in auditory cortex is shaped by the statistics of natural sounds. Hypothesising that auditory cortex (A1) represents the structural primitives out of which sounds are composed, we employ a statistical model to extract such components. The inputs to the model are cochleagrams, which approximate the non-linear transformations a sound undergoes from the outer ear, through the cochlea, to the auditory nerve. Cochleagram components do not superimpose linearly, but rather according to a rule which can be approximated using the max function. This is a consequence of the compression inherent in the cochleagram and the sparsity of natural sounds. Furthermore, cochleagrams do not have negative values. Cochleagrams are therefore not matched well by the assumptions of standard linear approaches such as sparse coding or ICA. We therefore consider a new encoding approach for natural sounds, which combines a model of early auditory processing with maximal causes analysis (MCA), a sparse coding model which captures both the non-linear combination rule and the non-negativity of the data. An efficient truncated EM algorithm is used to fit the MCA model to cochleagram data. We characterize the generative fields (GFs) inferred by MCA with respect to in vivo neural responses in A1 by applying reverse correlation to estimate the spectro-temporal receptive fields (STRFs) implied by the learned GFs. Despite the GFs being non-negative, the STRF estimates are found to contain both positive and negative subfields, where the negative subfields can be attributed to explaining-away effects as captured by the applied inference method. A direct comparison with ferret A1 shows many similar forms, and the spectral and temporal modulation tuning of both ferret and model STRFs shows similar ranges over the population. In summary, our model represents an alternative to linear approaches for biological auditory encoding that captures salient data properties and links inhibitory subfields to explaining-away effects.
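The reverse-correlation step used to obtain STRF estimates can be illustrated with a minimal spike-triggered-average sketch; the cochleagram and spike train below are synthetic stand-ins, not the MCA model output or ferret recordings.

```python
# Minimal sketch of STRF estimation by reverse correlation (spike-triggered
# averaging) on a cochleagram; inputs here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_time, n_lags = 32, 20000, 40   # frequency channels, time bins, STRF history

cochleagram = rng.random((n_freq, n_time))          # placeholder non-negative input
spikes = (rng.random(n_time) < 0.02).astype(float)  # placeholder spike train

# Spike-triggered average: mean stimulus patch preceding each spike.
strf = np.zeros((n_freq, n_lags))
spike_times = np.nonzero(spikes)[0]
spike_times = spike_times[spike_times >= n_lags]
for t in spike_times:
    strf += cochleagram[:, t - n_lags:t]
strf /= len(spike_times)
strf -= cochleagram.mean()   # subtract the mean so excitatory/inhibitory subfields show

print("estimated STRF shape:", strf.shape)
```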

]]>
<![CDATA[Resolution invariant wavelet features of melanoma studied by SVM classifiers]]> https://www.researchpad.co/article/5c648cd2d5eed0c484c81893

This article concerns computer-aided diagnosis of melanoma skin cancer. We derive wavelet-based features of melanoma from dermoscopic images of pigmented skin lesions and apply binary C-SVM classifiers to discriminate malignant melanoma from dysplastic nevus. The aim of this research is to select the most efficient SVM classifier model for various image resolutions and to search for the best resolution-invariant wavelet bases. We report AUC as a function of the wavelet number and of SVM kernels optimized by Bayesian search for two independent data sets. Our results are compatible with previous experiments discriminating melanoma in dermoscopy images using ensembles and feed-forward neural networks.
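A minimal sketch of wavelet-energy features feeding a C-SVM classifier is shown below (assuming PyWavelets is available); the 'db2' basis, decomposition level and grid search are illustrative stand-ins for the wavelet selection and Bayesian optimization described above, and the image patches are synthetic.

```python
# Minimal sketch of wavelet-energy features plus a C-SVM classifier (assumptions:
# synthetic grayscale patches stand in for dermoscopic images, 'db2' at 3 levels is
# an arbitrary wavelet choice, and GridSearchCV replaces the Bayesian search).
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def wavelet_energy_features(img, wavelet="db2", level=3):
    """Energy of each wavelet subband as a texture descriptor."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    feats = [np.mean(coeffs[0] ** 2)]
    for cH, cV, cD in coeffs[1:]:
        feats += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
    return np.array(feats)

# Placeholder data: 200 image patches, binary labels (melanoma vs dysplastic nevus).
images = rng.random((200, 64, 64))
y = rng.integers(0, 2, size=200)
X = np.vstack([wavelet_energy_features(im) for im in images])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
grid = GridSearchCV(SVC(probability=True),
                    {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=5)
grid.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, grid.best_estimator_.predict_proba(X_te)[:, 1]))
```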

]]>
<![CDATA[A low-threshold potassium current enhances sparseness and reliability in a model of avian auditory cortex]]> https://www.researchpad.co/article/5c58d63bd5eed0c484031922

Birdsong is a complex vocal communication signal, and like humans, birds need to discriminate between similar sequences of sound with different meanings. The caudal mesopallium (CM) is a cortical-level auditory area implicated in song discrimination. CM neurons respond sparsely to conspecific song and are tolerant of production variability. Intracellular recordings in CM have identified a diversity of intrinsic membrane dynamics, which could contribute to the emergence of these higher-order functional properties. We investigated this hypothesis using a novel linear-dynamical cascade model that incorporated detailed biophysical dynamics to simulate auditory responses to birdsong. Neuron models that included a low-threshold potassium current present in a subset of CM neurons showed increased selectivity and coding efficiency relative to models without this current. These results demonstrate the impact of intrinsic dynamics on sensory coding and the importance of including the biophysical characteristics of neural populations in simulation studies.

]]>
<![CDATA[Coding of low-level position and orientation information in human naturalistic vision]]> https://www.researchpad.co/article/5c6b266cd5eed0c484289a44

Orientation and position of small image segments are considered to be two fundamental low-level attributes in early visual processing, yet their encoding in complex natural stimuli is underexplored. By measuring the just-noticeable differences in noise perturbation, we investigated how orientation and position information of a large number of local elements (Gabors) were encoded separately or jointly. Importantly, the Gabors composed various classes of naturalistic stimuli that were equated by all low-level attributes and differed only in their higher-order configural complexity and familiarity. Although unable to consciously tell apart the type of perturbation, observers detected orientation and position noise significantly differently. Furthermore, when the Gabors were perturbed by both types of noise simultaneously, performance adhered to a reliability-based optimal probabilistic combination of individual attribute noises. Our results suggest that orientation and position are independently coded and probabilistically combined for naturalistic stimuli at the earliest stage of visual processing.
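The reliability-based optimal combination referred to above is the standard prediction that precisions (inverse variances) of the individual attribute noises add; a short illustrative calculation with made-up threshold values follows.

```python
# Illustration of the reliability-based (inverse-variance) combination prediction:
# if orientation and position noise are detected with thresholds s_ori and s_pos,
# the optimal combined threshold satisfies 1/s_comb^2 = 1/s_ori^2 + 1/s_pos^2.
# The numbers below are made up for illustration only.
import numpy as np

s_ori, s_pos = 0.30, 0.40          # hypothetical single-attribute noise thresholds
s_comb = 1.0 / np.sqrt(1.0 / s_ori**2 + 1.0 / s_pos**2)
print(f"predicted combined threshold: {s_comb:.3f}")   # lower than either alone
```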

]]>
<![CDATA[An ecologically constrained procedure for sensitivity analysis of Artificial Neural Networks and other empirical models]]> https://www.researchpad.co/article/5c5b5252d5eed0c4842bc656

Sensitivity analysis applied to Artificial Neural Networks (ANNs), as well as to other types of empirical ecological models, allows assessing the importance of environmental predictive variables in affecting species distribution or other target variables. However, approaches that only consider values of the environmental variables that are likely to be observed in real-world conditions, given the underlying ecological relationships with other variables, have not yet been proposed. Here, a constrained sensitivity analysis procedure is presented, which evaluates the importance of the environmental variables considering only their plausible changes, thereby exploring only ecologically meaningful scenarios. To demonstrate the procedure, we applied it to an ANN model predicting fish species richness, as identifying relationships between environmental variables and fish species occurrence in river ecosystems is a recurring topic in freshwater ecology. Results showed that several environmental variables played a less relevant role in driving the model output when the sensitivity analysis allowed them to vary only within an ecologically meaningful range of values, i.e., avoiding values that the model would never encounter in its practical applications. By comparing percent changes in MSE between constrained and unconstrained sensitivity analyses, the relative importance of environmental variables was found to differ, with habitat descriptors and urbanization factors playing a more relevant role according to the constrained procedure. The ecologically constrained procedure can be applied to any sensitivity analysis method for ANNs, and it can also be applied to other types of empirical ecological models.
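The constrained procedure can be sketched as below: each predictor is perturbed only within bounds derived from the data (here simplified to marginal percentiles rather than the paper's ecological constraints), and the resulting change in MSE is recorded; the model and data are placeholders.

```python
# Minimal sketch of a constrained sensitivity analysis: each predictor is perturbed
# only within bounds taken from the observed data (marginal 5th-95th percentiles as
# a simplification of the paper's ecological constraints), and the change in MSE is
# recorded. Model, data and bounds are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=300, n_features=5, noise=10.0, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X, y)
base_mse = mean_squared_error(y, model.predict(X))

rng = np.random.default_rng(0)
for j in range(X.shape[1]):
    lo, hi = np.percentile(X[:, j], [5, 95])         # plausible (constrained) range
    X_pert = X.copy()
    X_pert[:, j] = rng.uniform(lo, hi, size=len(X))  # perturb within constraints only
    delta = mean_squared_error(y, model.predict(X_pert)) - base_mse
    print(f"variable {j}: constrained MSE increase = {delta:.2f}")
```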

]]>
<![CDATA[Identifying Parkinson's disease and parkinsonism cases using routinely collected healthcare data: A systematic review]]> https://www.researchpad.co/article/5c5ca30bd5eed0c48441f045

Background

Population-based, prospective studies can provide important insights into Parkinson’s disease (PD) and other parkinsonian disorders. Participant follow-up in such studies is often achieved through linkage to routinely collected healthcare datasets. We systematically reviewed the published literature on the accuracy of these datasets for this purpose.

Methods

We searched four electronic databases for published studies that compared PD and parkinsonism cases identified using routinely collected data to a reference standard. We extracted study characteristics and two accuracy measures: positive predictive value (PPV) and/or sensitivity.
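For reference, the two extracted accuracy measures have the standard definitions illustrated below with made-up counts.

```python
# Standard definitions of the two extracted accuracy measures, with made-up counts:
# PPV = TP / (TP + FP): of the cases identified in the routine dataset, the fraction
#       that were true PD/parkinsonism cases according to the reference standard;
# sensitivity = TP / (TP + FN): of the true cases, the fraction the dataset identified.
tp, fp, fn = 80, 20, 40           # hypothetical counts against a reference standard
ppv = tp / (tp + fp)              # 0.80
sensitivity = tp / (tp + fn)      # ~0.67
print(f"PPV = {ppv:.2f}, sensitivity = {sensitivity:.2f}")
```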

Results

We identified 18 articles, resulting in 27 measures of PPV and 14 of sensitivity. For PD, PPV ranged from 56% to 90% in hospital datasets, 53% to 87% in prescription datasets, and 81% to 90% in primary care datasets, and was 67% in mortality datasets. Combining diagnostic and medication codes increased PPV. For parkinsonism, PPV ranged from 36% to 88% in hospital datasets and 40% to 74% in prescription datasets, and was 94% in mortality datasets. Sensitivity ranged from 15% to 73% in single datasets for PD and from 43% to 63% in single datasets for parkinsonism.

Conclusions

In many settings, routinely collected datasets generate good PPVs and reasonable sensitivities for identifying PD and parkinsonism cases. However, given the wide range of identified accuracy estimates, we recommend cohorts conduct their own context-specific validation studies if existing evidence is lacking. Further research is warranted to investigate primary care and medication datasets, and to develop algorithms that balance a high PPV with acceptable sensitivity.

]]>
<![CDATA[A machine learning approach of predicting high potential archers by means of physical fitness indicators]]> https://www.researchpad.co/article/5c37b7c7d5eed0c484490cdc

k-nearest neighbour (k-NN) has been shown to be an effective learning algorithm for classification and prediction. However, the application of k-NN for prediction and classification in specific sports is still in its infancy. The present study classified and predicted high and low potential archers from a set of physical fitness variables trained on a variation of k-NN algorithms and logistic regression. Fifty youth archers (mean age ± SD: 17.0 ± 0.56 years) drawn from various archery programmes completed a one-end archery shooting score test. Standard fitness measurements of handgrip strength, vertical jump, standing broad jump, static balance, upper muscle strength and core muscle strength were conducted. Multiple linear regression was utilised to ascertain the significant variables that affect the shooting score; the analysis demonstrated that core muscle strength and vertical jump were statistically significant. Hierarchical agglomerative cluster analysis (HACA) was used to cluster the archers based on the significant variables identified. k-NN model variations, i.e., fine, medium, coarse, cosine, cubic and weighted functions, as well as logistic regression, were trained on the significant performance variables. The HACA clustered the archers into high potential archers (HPA) and low potential archers (LPA). The weighted k-NN outperformed all the tested models, as it demonstrated reasonably good classification on the evaluated indicators with an accuracy of 82.5 ± 4.75% for the prediction of the HPA and the LPA. Moreover, the performance of the classifiers was further investigated against fresh data, which also indicates the efficacy of the weighted k-NN model. These findings could be valuable to coaches and sports managers for recognising high potential archers from the few selected physical fitness performance indicators identified, which would subsequently save cost, time and energy in a talent identification programme.
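A minimal sketch of the cluster-then-classify idea follows: hierarchical agglomerative clustering assigns high/low potential labels from the significant fitness variables, and a distance-weighted k-NN is trained to predict them; the synthetic data, number of neighbours and cross-validation setup are illustrative assumptions.

```python
# Minimal sketch of the clustering-then-classification idea: hierarchical
# agglomerative clustering labels archers as high/low potential from the significant
# fitness variables, then a distance-weighted k-NN learns to predict those labels.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder for the significant variables (e.g. core muscle strength, vertical jump).
X = rng.normal(size=(50, 2))

# HACA-style grouping into two clusters: high vs low potential archers.
labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X)

# Distance-weighted k-NN classifier, evaluated with cross-validation.
knn = KNeighborsClassifier(n_neighbors=5, weights="distance")
acc = cross_val_score(knn, X, labels, cv=5)
print("mean CV accuracy:", acc.mean())
```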

]]>
<![CDATA[Adaptation and spectral enhancement at auditory temporal perceptual boundaries - Measurements via temporal precision of auditory brainstem responses]]> https://www.researchpad.co/article/5c2544c6d5eed0c48442b871

In human and animal auditory perception the perceived quality of sound streams changes depending on the duration of inter-sound intervals (ISIs). Here, we studied whether adaptation and the precision of temporal coding in the auditory periphery reproduce general perceptual boundaries in the time domain near 20, 100, and 400 ms ISIs, the physiological origin of which is unknown. In four experiments, we recorded auditory brainstem responses with five wave peaks (P1–P5) in response to acoustic models of communication calls of house mice, which perceive these calls with the mentioned boundaries. The newly introduced measure of average standard deviations of wave latencies of individual animals indicates the waves’ temporal precision (latency jitter), mostly in the range of 30–100 μs, very similar to the latency jitter of single neurons. Adaptation effects on response latencies and latency jitter were measured for ISIs of 10–1000 ms. Adaptation decreased with increasing ISI duration following exponential or linear (on a logarithmic scale) functions in the range of up to about 200 ms ISIs. Adaptation effects were specific to each processing level in the auditory system. The perceptual boundaries near 20–30 and 100 ms ISIs were reflected in significant adaptation of latencies together with increases of latency jitter at P2–P5 for ISIs < ~30 ms and at P5 for ISIs < ~100 ms, respectively. Adaptation effects occurred when frequencies in a sound stream were within the same critical band. Ongoing low-frequency components/formants in a sound enhanced the coding (decreased latencies) of high-frequency components/formants when the frequencies concerned different critical bands. The results are discussed in the context of coding multi-harmonic sounds and stop-consonant-vowel pairs in the auditory brainstem. Furthermore, latency data at P1 (cochlea level) offer a reasonable value for the base-to-apex cochlear travel time in the mouse (0.342 ms) that has not been determined experimentally.
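The latency-jitter measure can be illustrated with a short sketch: for one animal, the standard deviation of each wave's peak latency across repeated presentations, averaged over P1–P5; the latencies below are synthetic placeholders.

```python
# Minimal sketch of the latency-jitter measure described above: for one animal, the
# standard deviation of each ABR wave's peak latency across repeated presentations,
# averaged over waves P1-P5. The latencies below are synthetic placeholders (in ms).
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_waves = 100, 5                               # repeated presentations, waves P1-P5
mean_latencies = np.array([1.2, 2.0, 2.8, 3.7, 4.6])     # hypothetical mean latencies (ms)

# Simulated single-trial peak latencies with ~0.05 ms jitter per wave.
latencies = mean_latencies + rng.normal(scale=0.05, size=(n_trials, n_waves))

per_wave_jitter = latencies.std(axis=0, ddof=1)          # SD of latency for each wave
avg_jitter_us = per_wave_jitter.mean() * 1000            # average jitter in microseconds
print(f"average latency jitter: {avg_jitter_us:.1f} µs")
```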

]]>
<![CDATA[A stochastic framework to model axon interactions within growing neuronal populations]]> https://www.researchpad.co/article/5c0ed74ed5eed0c484f13eaa

The confined and crowded environment of developing brains imposes spatial constraints on neuronal cells that have evolved individual and collective strategies to optimize their growth. These include organizing neurons into populations extending their axons to common target territories. How individual axons interact with each other within such populations to optimize innervation is currently unclear and difficult to analyze experimentally in vivo. Here, we developed a stochastic model of 3D axon growth that takes into account spatial environmental constraints, physical interactions between neighboring axons, and branch formation. This general, predictive and robust model, when fed with parameters estimated on real neurons from the Drosophila brain, enabled the study of the mechanistic principles underlying the growth of axonal populations. First, it provided a novel explanation for the diversity of growth and branching patterns observed in vivo within populations of genetically identical neurons. Second, it uncovered that axon branching could be a strategy optimizing the overall growth of axons competing with others in contexts of high axonal density. The flexibility of this framework will make it possible to investigate the rules underlying axon growth and regeneration in the context of various neuronal populations.
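A toy sketch in the spirit of this framework is given below: a biased random walk in 3D with stochastic turning, a hard spatial boundary and random branch formation; the step size, branching probability and bounds are arbitrary illustrative choices, not the parameters estimated from Drosophila neurons.

```python
# Toy sketch of a stochastic 3D axon-growth process with branching; all parameters
# are arbitrary illustrative choices, not the paper's estimated values.
import numpy as np

rng = np.random.default_rng(0)

def grow_axon(start, n_steps=200, step=1.0, p_branch=0.02, bound=50.0, max_branches=20):
    """Stochastic growth cone: random turning, spatial constraint, branch formation."""
    branches = []
    active = [(np.array(start, dtype=float), rng.normal(size=3))]
    while active and len(branches) < max_branches:
        pos, direction = active.pop()
        path = [pos.copy()]
        for _ in range(n_steps):
            direction = direction + 0.3 * rng.normal(size=3)      # stochastic turning
            direction /= np.linalg.norm(direction)
            pos = np.clip(pos + step * direction, -bound, bound)  # spatial constraint
            path.append(pos.copy())
            if rng.random() < p_branch:                           # branch formation
                active.append((pos.copy(), rng.normal(size=3)))
        branches.append(np.array(path))
    return branches

branches = grow_axon(start=(0, 0, 0))
print("number of branches grown:", len(branches))
```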

]]>
<![CDATA[Automatic classification of pediatric pneumonia based on lung ultrasound pattern recognition]]> https://www.researchpad.co/article/5c117b51d5eed0c484698a67

Pneumonia is one of the major causes of child mortality, yet with a timely diagnosis it is usually curable with antibiotic therapy. In many developing regions, diagnosing pneumonia remains a challenge due to shortages of medical resources. Lung ultrasound has proved to be a useful tool to detect lung consolidation as evidence of pneumonia. However, diagnosis of pneumonia by ultrasound has limitations: it is operator-dependent, and it needs to be carried out and interpreted by trained personnel. Pattern recognition and image analysis are potential tools to enable automatic diagnosis of pneumonia consolidation without requiring an expert analyst. This paper presents a method for automatic classification of pneumonia using ultrasound imaging of the lungs and pattern recognition. The approach presented here is based on the analysis of brightness distribution patterns present in rectangular segments (here called “characteristic vectors”) from the ultrasound digital images. In a first step we identified and eliminated the skin and subcutaneous tissue (fat and muscle) in lung ultrasound frames, and the “characteristic vectors” were then analyzed using standard neural networks. We analyzed 60 lung ultrasound frames corresponding to 21 children under age 5 years (15 children with pneumonia confirmed by clinical examination and X-rays, and 6 children with no pulmonary disease) from a hospital-based population in Lima, Peru. Lung ultrasound images were obtained using an Ultrasonix ultrasound device. A total of 1450 positive (pneumonia) and 1605 negative (normal lung) vectors were analyzed with standard neural networks and used to create an algorithm to differentiate lung infiltrates from healthy lung. A neural network was trained using this algorithm and was able to correctly identify pneumonia infiltrates with 90.9% sensitivity and 100% specificity. This approach may be used to develop operator-independent computer algorithms for pneumonia diagnosis using ultrasound in young children.
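A minimal sketch of the classification step follows: brightness-based feature vectors feed a small neural network, and sensitivity and specificity are computed from the confusion matrix; the data are synthetic placeholders rather than the Lima ultrasound frames.

```python
# Minimal sketch of the classification step: brightness-distribution features from
# rectangular image segments ("characteristic vectors") feed a small neural network,
# and sensitivity/specificity are read off the confusion matrix. Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n_vectors, vec_len = 3000, 64

# Placeholder characteristic vectors: brightness profiles of rectangular segments.
X = rng.random((n_vectors, vec_len))
y = rng.integers(0, 2, size=n_vectors)   # 1 = pneumonia consolidation, 0 = normal lung

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```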

]]>
<![CDATA[Estimation of paddy rice leaf area index using machine learning methods based on hyperspectral data from multi-year experiments]]> https://www.researchpad.co/article/5c117b6fd5eed0c484699303

The performance of three machine learning methods (support vector regression, random forests and artificial neural network) for estimating the leaf area index (LAI) of paddy rice was evaluated in this study. Traditional univariate regression models involving narrowband NDVI with optimized band combinations, as well as linear multivariate calibration partial least squares regression models, were also evaluated for comparison. A four-year field-collected dataset was used to test the robustness of the LAI estimation models against temporal variation. The partial least squares regression and the three machine learning methods were built on the raw hyperspectral reflectance and the first derivative separately. Two different rules were used to determine the models’ key parameters. The results showed that the combination of the red edge and NIR bands (766 nm and 830 nm) as well as the combination of SWIR bands (1114 nm and 1190 nm) were optimal for producing the narrowband NDVI. The models built on the first derivative spectra yielded more accurate results than the corresponding models built on the raw spectra. Properly selected model parameters resulted in accuracy and robustness comparable to the empirically optimal parameters and significantly reduced the model complexity. The machine learning methods were more accurate and robust than the VI methods and partial least squares regression. When validating the calibrated models against the standalone validation dataset, the VI method yielded a validation RMSE value of 1.17 for NDVI(766,830) and 1.01 for NDVI(1114,1190), while the best models for the partial least squares, support vector machine, random forest and artificial neural network methods yielded validation RMSE values of 0.84, 0.82, 0.67 and 0.84, respectively. The RF models built on the first derivative spectra with mtry = 10 showed the highest potential for estimating the LAI of paddy rice.
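Two of the elements above can be sketched briefly: the narrowband NDVI from the selected 766/830 nm pair, and a random forest on first-derivative spectra with mtry = 10 (max_features in scikit-learn); the reflectance and LAI values are synthetic placeholders for the multi-year field dataset.

```python
# Minimal sketch of a narrowband NDVI from the selected band pair (766 nm, 830 nm)
# and a random forest on first-derivative spectra with mtry = 10. Reflectance data
# and LAI values are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
wavelengths = np.arange(400, 2401, 2)                 # hypothetical 2 nm sampling
spectra = rng.random((200, wavelengths.size))         # placeholder canopy reflectance
lai = rng.uniform(0.5, 7.0, size=200)                 # placeholder LAI values

# Narrowband NDVI from the red-edge/NIR pair identified as optimal in the study.
i766, i830 = np.argmin(abs(wavelengths - 766)), np.argmin(abs(wavelengths - 830))
ndvi = (spectra[:, i830] - spectra[:, i766]) / (spectra[:, i830] + spectra[:, i766])
print("NDVI(766,830) range:", ndvi.min(), ndvi.max())

# Random forest on first-derivative spectra, mtry = 10.
d1 = np.gradient(spectra, wavelengths, axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(d1, lai, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, max_features=10, random_state=0).fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, rf.predict(X_te)))
print("validation RMSE:", rmse)
```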

]]>
<![CDATA[Characterising risk of in-hospital mortality following cardiac arrest using machine learning: A retrospective international registry study]]> https://www.researchpad.co/article/5c0ae470d5eed0c484589b1f

Background

Resuscitated cardiac arrest is associated with high mortality; however, the ability to estimate risk of adverse outcomes using existing illness severity scores is limited. Using in-hospital data available within the first 24 hours of admission, we aimed to develop more accurate models of risk prediction using both logistic regression (LR) and machine learning (ML) techniques, with a combination of demographic, physiologic, and biochemical information.

Methods and findings

Patient-level data were extracted from the Australian and New Zealand Intensive Care Society (ANZICS) Adult Patient Database for patients who had experienced a cardiac arrest within 24 hours prior to admission to an intensive care unit (ICU) during the period January 2006 to December 2016. The primary outcome was in-hospital mortality. The models were trained and tested on a dataset (split 90:10) including age, lowest and highest physiologic variables during the first 24 hours, and key past medical history. LR and 5 ML approaches (gradient boosting machine [GBM], support vector classifier [SVC], random forest [RF], artificial neural network [ANN], and an ensemble) were compared to the APACHE III and Australian and New Zealand Risk of Death (ANZROD) predictions. In all, 39,566 patients from 186 ICUs were analysed. Mean (±SD) age was 61 ± 17 years; 65% were male. Overall in-hospital mortality was 45.5%. Models were evaluated in the test set. The APACHE III and ANZROD scores demonstrated good discrimination (area under the receiver operating characteristic curve [AUROC] = 0.80 [95% CI 0.79–0.82] and 0.81 [95% CI 0.8–0.82], respectively) and modest calibration (Brier score 0.19 for both), which was slightly improved by LR (AUROC = 0.82 [95% CI 0.81–0.83], DeLong test, p < 0.001). Discrimination was significantly improved using ML models (ensemble and GBM AUROCs = 0.87 [95% CI 0.86–0.88], DeLong test, p < 0.001), with an improvement in performance (Brier score reduction of 22%). Explainability models were created to assist in identifying the physiologic features that most contributed to an individual patient’s survival. Key limitations include the absence of pre-hospital data and absence of external validation.
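A minimal sketch of the core comparison, logistic regression versus a gradient boosting machine scored by AUROC and Brier score on a held-out 10% split, is shown below; the synthetic features stand in for the demographic, physiologic and biochemical inputs.

```python
# Minimal sketch of the model comparison: logistic regression versus a gradient
# boosting machine, scored by AUROC and Brier score on a held-out 10% split.
# Synthetic features stand in for the demographic/physiologic/biochemical inputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, brier_score_loss

X, y = make_classification(n_samples=20000, n_features=30, n_informative=12,
                           weights=[0.55, 0.45], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, stratify=y, random_state=0)

for name, model in [("LR", LogisticRegression(max_iter=1000)),
                    ("GBM", GradientBoostingClassifier(random_state=0))]:
    prob = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{name}: AUROC = {roc_auc_score(y_te, prob):.3f}, "
          f"Brier = {brier_score_loss(y_te, prob):.3f}")
```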

Conclusions

ML approaches significantly enhance predictive discrimination for mortality following cardiac arrest compared to existing illness severity scores and LR, without the use of pre-hospital data. The discriminative ability of these ML models requires validation in external cohorts to establish generalisability.

]]>
<![CDATA[Serial representation of items during working memory maintenance at letter-selective cortical sites]]> https://www.researchpad.co/article/5b8acde540307c144d0de055

A key component of working memory is the ability to remember multiple items simultaneously. To understand how the human brain maintains multiple items in memory, we examined direct brain recordings of neural oscillations from neurosurgical patients as they performed a working memory task. We analyzed the data to identify the neural representations of individual memory items, locating recording sites whose broadband gamma activity varied according to the identity of the letter a subject viewed. Next, we tested a previously proposed model of working memory, which hypothesized that the neural representations of individual memory items occur sequentially at different phases of the theta/alpha cycle. Consistent with this model, the phase of the theta/alpha oscillation at which stimulus-related gamma activity occurred during maintenance reflected the order of list presentation. These results suggest that working memory is organized by a cortical phase code coordinated by coupled theta/alpha and gamma oscillations and, more broadly, provide support for the serial representation of items in working memory.
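The phase-coding analysis can be illustrated with a short sketch: band-pass filtering and the Hilbert transform give the theta/alpha phase and the broadband gamma envelope, and the phase is read off at moments of strong gamma activity; the signal, sampling rate and band edges are illustrative assumptions, not the patients' recordings.

```python
# Minimal sketch of the phase-coding analysis idea: extract the theta/alpha phase via
# the Hilbert transform and read off that phase at moments of high gamma amplitude.
# The signal, sampling rate and band edges are illustrative placeholders.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                        # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
lfp = np.sin(2 * np.pi * 7 * t) + 0.5 * rng.normal(size=t.size)   # placeholder recording

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

theta_phase = np.angle(hilbert(bandpass(lfp, 4, 12, fs)))     # theta/alpha phase
gamma_amp = np.abs(hilbert(bandpass(lfp, 70, 150, fs)))       # broadband gamma envelope

# Phase at the strongest gamma events: the model predicts item-specific phases.
top = np.argsort(gamma_amp)[-100:]
mean_phase = np.angle(np.mean(np.exp(1j * theta_phase[top])))
print("mean theta/alpha phase at gamma peaks (rad):", mean_phase)
```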

]]>
<![CDATA[<i>PLoS Computational Biology</i> Issue Image | Vol. 14(11) November 2018]]> https://www.researchpad.co/article/5c0ae482d5eed0c484589d9c

Moth antennal neurons adjust their encoding optimally with respect to pheromone fluctuations

Sensory neural systems of living organisms encode representations of their environment with remarkable efficiency. This is manifested, for example, in the way male moths perform long-distance searches for their females by tracking pheromone plumes. In the study "Moth olfactory receptor neurons adjust their encoding efficiency to temporal statistics of pheromone fluctuations", Levakova et al. analyzed responses of pheromone-specific antennal neurons to naturalistic stimulation. It was shown that the coding accuracy and the stimulus distribution stand in the optimal relationship predicted by both information theory and statistical estimation theory.

Image Credit: Marie Levakova

]]>