ResearchPad - management-engineering https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks
<![CDATA[iterb-PPse: Identification of transcriptional terminators in bacterial by incorporating nucleotide properties into PseKNC]]> https://www.researchpad.co/article/elastic_article_14750

A terminator is a DNA sequence that signals RNA polymerase to stop transcription. Identifying terminators correctly can improve genome annotation; more importantly, it has considerable application value in disease diagnosis and therapy. However, accurate prediction methods are scarce and urgently needed. We therefore propose a prediction method, "iterb-PPse", which incorporates 47 nucleotide properties into PseKNC-I and PseKNC-II and uses Extreme Gradient Boosting to predict terminators in Escherichia coli and Bacillus subtilis. Alongside existing methods, we employ three new feature extraction methods, K-pwm, Base-content, and Nucleotidepro, to encode the raw samples. A two-step method was applied to select features. When identifying terminators with the optimized features, we compared five single models as well as 16 ensemble models. As a result, our method achieved an accuracy of 99.88% on the benchmark dataset in 100 rounds of five-fold cross-validation, higher than the existing state-of-the-art predictor iTerm-PseKNC. Its prediction accuracy on two independent datasets reached 94.24% and 99.45%, respectively. For the convenience of users, we developed software of the same name based on "iterb-PPse". The open software and source code of "iterb-PPse" are available at https://github.com/Sarahyouzi/iterb-PPse.
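A minimal sketch of the final classification stage (Extreme Gradient Boosting with five-fold cross-validation), assuming feature extraction has already produced a numeric matrix; the random data below are placeholders, not the benchmark set or the authors' released code:

```python
# Sketch of the classification step: XGBoost with five-fold cross-validation.
# Feature extraction (PseKNC, K-pwm, Base-content, Nucleotidepro) is assumed
# to have produced X already; the random data are stand-ins.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 47))          # placeholder feature matrix
y = rng.integers(0, 2, size=500)        # 1 = terminator, 0 = non-terminator

clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"5-fold CV accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```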

]]>
<![CDATA[Longitudinal analysis of cost and dental utilization patterns for older adults in outpatient and long-term care settings in Minnesota]]> https://www.researchpad.co/article/elastic_article_14553

Dental utilization patterns and the costs of providing comprehensive oral healthcare for older adults in different settings have not been examined.

Methods

Retrospective longitudinal cohort data from Apple Tree Dental (ATD) were analyzed (N = 1,159 total; 503 outpatients, 656 long-term care residents) to describe oral health status at presentation, service utilization patterns, and care costs. Generalized estimating equation (GEE) repeated measures analysis identified significant contributors to service cost over the three-year study period.

Results

Cohort mean age was 74 years (range = 55–104); the outpatient (OP) group was younger than the long-term care (LTC) group. Just over half (56%) had Medicaid, 22% had other insurance, and 22% self-paid. Most (72%) had functional dentitions (20+ teeth), 15% had impaired dentitions (9–19 teeth), 6% had severe tooth loss (1–8 teeth), and 7% were edentulous (OP = 2%, LTC = 11%). More in the OP group had functional dentition (83% vs. 63% LTC). The number of appointments declined from 5.0 in Year 1 (OP = 5.7, LTC = 4.4) to 3.3 in Year 3 (OP = 3.6, LTC = 3.0). The average cost of providing dental services was $1,375/year over three years (OP = $1,427, LTC = $1,336), and costs declined each year, from an average of $1,959 (OP = $2,068, LTC = $1,876) in Year 1 to $1,016 (OP = $989, LTC = $1,037) by Year 3. Patients with functional dentition at presentation were significantly less costly than those with 1–19 teeth, while edentulous patients showed the lowest cost and utilization. Year in treatment, insurance type, dentition type, and a problem-focused first exam were significantly associated with year-over-year cost change in both OP and LTC patients.

Conclusion

Costs for providing comprehensive dental care in OP and LTC settings were similar, modest, and declined over time. Dentate patients with functional dentition and edentulous patients were less costly to treat. LTC patients had lower utilization than OP patients. Care patterns shifted over time toward more preventive care and fewer restorative care visits.

]]>
<![CDATA[ECG-based prediction algorithm for imminent malignant ventricular arrhythmias using decision tree]]> https://www.researchpad.co/article/elastic_article_14548

Spontaneous prediction of malignant ventricular arrhythmia (MVA) is useful for avoiding delays in rescue operations. Researchers have recently developed several algorithms to predict MVA using various features derived from the electrocardiogram (ECG). However, several issues remain unresolved: the effect of the number of ECG features on prediction is unclear, an alert for an occurring MVA may arrive very late, and the performance of algorithms predicting MVA minutes before onset is uncertain. To overcome these problems, this research conducts an in-depth study of the number and types of ECG features implemented in a decision tree classifier. In addition, it investigates the algorithm's execution time before the occurrence of MVA to minimize delays in warnings. Lastly, it studies both the sensitivity and specificity of the algorithm to reveal the performance of MVA prediction algorithms over time.
To strengthen the analysis, several other classifiers, such as support vector machine and naive Bayes, are also examined for comparison. Three phases were required to achieve these objectives. The first phase was a literature review of existing relevant studies. The second phase dealt with the design and development of four modules for predicting MVA. Rigorous experiments were performed in the feature selection and classification modules. The results show that eight ECG features with a decision tree classifier achieved good prediction performance in terms of execution time and sensitivity. In addition, the highest sensitivity and specificity were 95% and 90% respectively, in the fourth 5-minute interval (15.1–20 minutes) preceding the onset of an arrhythmia event. These results imply that the fourth 5-minute interval would be the best time to perform prediction.
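A minimal sketch of the classification step under the abstract's eight-feature, decision-tree design; the feature values and labels below are simulated stand-ins for the study's ECG dataset:

```python
# Sketch of MVA prediction as decision-tree classification of ECG feature
# windows, scored by sensitivity and specificity as in the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 8))            # eight ECG-derived features per window
y = rng.integers(0, 2, size=400)         # 1 = MVA within the horizon, 0 = none

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
tree = DecisionTreeClassifier(max_depth=5, random_state=1).fit(X_tr, y_tr)

y_hat = tree.predict(X_te)
sens = recall_score(y_te, y_hat)                  # sensitivity
spec = recall_score(y_te, y_hat, pos_label=0)     # specificity
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```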

]]>
<![CDATA[Improvement of electrocardiographic diagnostic accuracy of left ventricular hypertrophy using a Machine Learning approach]]> https://www.researchpad.co/article/elastic_article_14491 The electrocardiogram (ECG) is the most common tool used to predict left ventricular hypertrophy (LVH). However, it is limited by its low accuracy (<60%) and sensitivity (30%). We hypothesized that the Machine Learning (ML) C5.0 algorithm could optimize the ECG for predicting LVH by echocardiography (Echo) while also establishing ECG-LVH phenotypes. We used Echo as the reference diagnostic tool to detect LVH and measured the ECG abnormalities found in Echo-LVH. We included 432 patients (power = 99%). Of these, 202 patients (46.7%) had Echo-LVH and 240 (55.6%) were male. We included a wide range of ventricular masses and Echo-LVH severities, classified as mild (n = 77, 38.1%), moderate (n = 50, 24.7%) and severe (n = 75, 37.1%). Data were divided into training/testing sets (80%/20%) and we applied logistic regression analysis to the ECG measurements. The logistic regression model with the best ability to identify Echo-LVH was introduced into the C5.0 ML algorithm. We created multiple decision trees and selected the tree with the highest performance. The resulting five-level binary decision tree used only six predictive variables and had an accuracy of 71.4% (95%CI, 65.5–80.2), a sensitivity of 79.6%, a specificity of 53%, a positive predictive value of 66.6% and a negative predictive value of 69.3%. Internal validation reached a mean accuracy of 71.4% (64.4–78.5). Our results were reproduced in a second validation group with similar diagnostic accuracy: 73.3% (95%CI, 65.5–80.2), sensitivity 81.6%, specificity 69.3%, positive predictive value 56.3% and negative predictive value 88.6%. We calculated the Romhilt-Estes multilevel score and compared it to our model; the Romhilt-Estes system had an accuracy of 61.3% (95%CI, 56.5–65.9), a sensitivity of 23.2% and a specificity of 94.8%, with similar results in the external validation group. In conclusion, the C5.0 ML algorithm surpassed the accuracy of current ECG criteria in detecting Echo-LVH. Our new criteria hinge on ECG abnormalities that identify high-risk patients and provide some insight into electrogenesis in Echo-LVH.
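C5.0 itself ships as an R/See5 implementation; the sketch below substitutes scikit-learn's entropy-based decision tree as a rough Python analogue of the two-stage pipeline (logistic screening, then tree induction). All data are placeholders, not the study's ECG measurements:

```python
# Rough analogue of the two-stage pipeline: logistic regression screens ECG
# measurements, then an entropy-based tree (standing in for C5.0) is induced.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(432, 20))       # ECG measurements (placeholder)
y = rng.integers(0, 2, size=432)     # 1 = Echo-LVH

screen = SelectFromModel(LogisticRegression(max_iter=1000)).fit(X, y)
X_sel = screen.transform(X)          # keep measurements flagged by the LR model

tree = DecisionTreeClassifier(criterion="entropy", max_depth=5,
                              random_state=2).fit(X_sel, y)
print("selected feature indices:", np.flatnonzero(screen.get_support()))
print("training accuracy:", round(tree.score(X_sel, y), 3))
```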

]]>
<![CDATA[Oxycodone versus morphine for cancer pain titration: A systematic review and pharmacoeconomic evaluation]]> https://www.researchpad.co/article/N5c0f7a4c-4090-42ec-ba95-57e120b0c99c

Objective

To evaluate the efficacy, safety and cost-effectiveness of Oxycodone Hydrochloride Controlled-release Tablets (CR oxycodone) and Morphine Sulfate Sustained-release Tablets (SR morphine) for moderate to severe cancer pain titration.

Methods

Randomized controlled trials meeting the inclusion criteria were searched in the Medline, Cochrane Library, PubMed, EMbase, CNKI, VIP and WanFang databases from their dates of establishment to June 2019. Efficacy and safety data were extracted from the included literature. The pain control rate was calculated to estimate efficacy. Meta-analysis was conducted with RevMan 5.1.4. A decision tree model was built to simulate the cancer pain titration process. The initial doses in the CR oxycodone and SR morphine groups were 20 mg and 30 mg, respectively. Oral immediate-release morphine was administered to treat breakthrough pain. The incremental cost-effectiveness ratio was calculated with TreeAge Pro 2019.

Results

Nineteen studies (1,680 patients) were included. Meta-analysis showed that the pain control rates of CR oxycodone and SR morphine were 86.00% and 82.98%, respectively. The costs of CR oxycodone and SR morphine were $23.27 and $13.31. The incremental cost-effectiveness ratio was approximately $329.76 per unit of effectiveness. At a willingness-to-pay threshold of $8,836, CR oxycodone was cost-effective, while the corresponding probability of being cost-effective at a willingness-to-pay threshold of $300 was 31.6%. One-way sensitivity analysis confirmed the robustness of the results.
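The reported ICER follows directly from the quoted costs and pain control rates; a worked check:

```python
# Worked ICER arithmetic from the reported results: incremental cost divided
# by incremental effectiveness (pain control rate). Values are those quoted
# in the abstract.
cost_oxy, cost_mor = 23.27, 13.31        # USD per titration course
eff_oxy, eff_mor = 0.8600, 0.8298        # pain control rates

icer = (cost_oxy - cost_mor) / (eff_oxy - eff_mor)
print(f"ICER = ${icer:.2f} per unit of effectiveness")   # ~ $329.76
```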

Conclusions

CR oxycodone could be a cost-effective option compared with SR morphine for moderate to severe cancer pain titration in China, according to the threshold defined by the WHO.

]]>
<![CDATA[Distinguishing moral hazard from access for high-cost healthcare under insurance]]> https://www.researchpad.co/article/N9aa1c21e-eb0c-47d9-9336-743c9eef5b98

Context

Health policy has long been preoccupied with the problem that health insurance stimulates spending (“moral hazard”). However, much health spending is costly healthcare that uninsured individuals could not otherwise access. Field studies comparing those with more or less insurance cannot disentangle moral hazard from access. Moreover, studies of patients consuming routine low-dollar healthcare are not informative about the high-dollar healthcare that drives most aggregate healthcare spending in the United States.

Methods

We test indemnities as an alternative theory-driven counterfactual. Such conditional cash transfers would maintain an opportunity cost for patients, unlike standard insurance, but also guarantee access to the care. Since indemnities do not exist in U.S. healthcare, we fielded two blinded vignette-based survey experiments with 3,000 respondents, randomized to eight clinical vignettes and three insurance types. Our replication uses a population that is weighted to national demographics on three dimensions.

Findings

Most or all of the spending due to insurance would occur even under an indemnity. The waste attributable to moral hazard is undetectable.

Conclusions

For high-cost care, policymakers should be more concerned about the foregone efficient spending of those lacking full insurance than about the wasteful spending that occurs with full insurance.

]]>
<![CDATA[Use of non-insulin diabetes medicines after insulin initiation: A retrospective cohort study]]> https://www.researchpad.co/article/5c6dc9a1d5eed0c484529f41

Background

Clinical guidelines recommend that metformin be continued after insulin is initiated among patients with type 2 diabetes, yet little is known regarding how often metformin or other non-insulin diabetes medications are continued in this setting.

Methods

We conducted a retrospective cohort study to characterize rates and use patterns of six classes of non-insulin diabetes medications: biguanides (metformin), sulfonylureas, thiazolidinediones (TZDs), glucagon-like peptide 1 (GLP1) receptor agonists, dipeptidyl peptidase 4 (DPP4) inhibitors, and sodium-glucose co-transporter 2 (SGLT2) inhibitors, among patients with type 2 diabetes initiating insulin. We used the 2010–2015 MarketScan Commercial Claims and Encounters data, examining 72,971 patients with type 2 diabetes aged 18–65 years who initiated insulin and had filled a prescription for a non-insulin diabetes medication in the 90 days prior to insulin initiation. Our primary outcome was the proportion of patients refilling the various non-insulin diabetes medications during the first 90 days after insulin initiation. We also used time-to-event analysis to characterize the time to discontinuation of specific medication classes.

Results

Metformin was the most common non-insulin medication used prior to insulin initiation (N = 53,017, 72.7%), followed by sulfonylureas (N = 25,439, 34.9%) and DPP4 inhibitors (N = 8,540, 11.7%). More than four out of five patients (N = 65,902, 84.7%) refilled prescriptions for any non-insulin diabetes medications within 90 days after insulin initiation. Within that period, metformin remained the most common medication with the highest continuation rate of 84.6%, followed by SGLT2 inhibitors (81.9%) and TZDs (79.3%). Sulfonylureas were the least likely medications to be continued (73.6% continuation) though they remained the second most common medication class used after insulin initiation. The median time to discontinuation varied by therapeutic class from the longest time to discontinuation of 26.4 months among metformin users to the shortest (3.0 months) among SGLT2 inhibitor users.
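A minimal sketch of the time-to-discontinuation analysis as a Kaplan-Meier estimate (using the lifelines package); durations and censoring flags are simulated stand-ins for the claims data:

```python
# Kaplan-Meier estimate of time to discontinuation for one medication class.
# The exponential durations below are placeholders loosely scaled to the
# 26.4-month metformin median reported in the abstract.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(3)
months_to_stop = rng.exponential(scale=26.4, size=1000)
observed = rng.random(1000) < 0.7        # False = censored at end of enrollment

kmf = KaplanMeierFitter()
kmf.fit(months_to_stop, event_observed=observed, label="metformin")
print("median time to discontinuation (months):", kmf.median_survival_time_)
```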

Conclusion

While metformin was commonly continued among commercially insured adults starting insulin, rates of continuation of other non-insulin diabetes medications were also high. Further studies are needed to determine the comparative effectiveness and safety of continuing insulin secretagogues and newer diabetes medications after insulin initiation.

]]>
<![CDATA[Automated localization and quality control of the aorta in cine CMR can significantly accelerate processing of the UK Biobank population data]]> https://www.researchpad.co/article/5c6f151bd5eed0c48467adda

Introduction

Aortic distensibility can be calculated using semi-automated methods to segment the aortic lumen on cine CMR (Cardiovascular Magnetic Resonance) images. However, these methods require visual quality control and manual localization of the region of interest (ROI) of ascending (AA) and proximal descending (PDA) aorta, which limit the analysis in large-scale population-based studies. Using 5100 scans from UK Biobank, this study sought to develop and validate a fully automated method to 1) detect and locate the ROIs of AA and PDA, and 2) provide a quality control mechanism.

Methods

The automated AA and PDA detection-localization algorithm followed these steps: 1) foreground segmentation; 2) detection of candidate ROIs by Circular Hough Transform (CHT); 3) spatial, histogram and shape feature extraction for candidate ROIs; 4) AA and PDA detection using Random Forest (RF); 5) quality control based on RF detection probability. To provide the ground truth, overall image quality (IQ = 0–3 from poor to good) and aortic locations were visually assessed by 13 observers. The automated algorithm was trained on 1200 scans and Dice Similarity Coefficient (DSC) was used to calculate the agreement between ground truth and automatically detected ROIs.
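A minimal sketch of the Circular Hough Transform step and the DSC evaluation, assuming OpenCV; the image is a synthetic stand-in for a cine CMR slice:

```python
import cv2
import numpy as np

img = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(img, (40, 64), 12, 255, -1)            # fake aortic cross-section

# Step 2 of the pipeline: candidate ROIs via the Circular Hough Transform.
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=50, param2=10, minRadius=5, maxRadius=30)
print("candidate ROIs found:", 0 if circles is None else circles.shape[1])

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

pred = np.zeros_like(img)
cv2.circle(pred, (42, 64), 12, 255, -1)           # slightly offset detection
print("DSC vs. ground truth:", round(dice(img > 0, pred > 0), 3))
```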

Results

The automated algorithm was tested on 3900 scans. Detection accuracy was 99.4% for AA and 99.8% for PDA. Aorta localization showed excellent agreement with the ground truth, with DSC ≥ 0.9 in 94.8% of AA (DSC = 0.97 ± 0.04) and 99.5% of PDA cases (DSC = 0.98 ± 0.03). AA×PDA detection probabilities could discriminate scans with IQ ≥ 1 from those severely corrupted by artefacts (AUC = 90.6%). If scans with detection probability < 0.75 were excluded (350 scans), the algorithm was able to correctly detect and localize AA and PDA in all the remaining 3550 scans (100% accuracy).

Conclusion

The proposed method for automated AA and PDA localization was extremely accurate and the automatically derived detection probabilities provided a robust mechanism to detect low quality scans for further human review. Applying the proposed localization and quality control techniques promises at least a ten-fold reduction in human involvement without sacrificing any accuracy.

]]>
<![CDATA[Selection of the optimal trading model for stock investment in different industries]]> https://www.researchpad.co/article/5c6dc9d9d5eed0c48452a2e0

In general, stock prices within the same industry follow a similar trend, but those of different industries do not. When investing in stocks across industries, one should select the optimal model from the many candidate trading models for each industry, because no single model is likely to capture the stock trends of all industries. However, such a study has not yet been carried out. In this paper, we first select 424 S&P 500 index component stocks (SPICS) and 185 CSI 300 index component stocks (CSICS) as research objects over 2010 to 2017, and divide each set into nine industries, such as finance and energy. Second, we apply 12 widely used machine learning algorithms to generate stock trading signals in the different industries and execute back-testing based on the trading signals. Third, we use a non-parametric statistical test to evaluate whether there are significant differences among the trading performance evaluation indicators (PEI) of different models in the same industry. Finally, we propose a series of rules to select the optimal model for stock investment in every industry. The analytical results on SPICS and CSICS show that we can find the optimal trading model for each industry based on the statistical tests and the rules. Most importantly, the PEI of the best algorithms can be significantly better than that of the benchmark index and the “Buy and Hold” strategy. Therefore, the algorithms can be used to profit from industry stock trading.
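A minimal sketch of the model-comparison step, assuming a Friedman test over model PEIs within one industry (the abstract does not name the specific non-parametric test used); returns are simulated:

```python
# Non-parametric test of whether the PEIs of several trading models differ
# within one industry. Rows = stocks, columns = PEI of 3 competing models.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(4)
pei = rng.normal(loc=[0.05, 0.06, 0.02], scale=0.02, size=(30, 3))

stat, p = friedmanchisquare(pei[:, 0], pei[:, 1], pei[:, 2])
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
# A small p suggests at least one model's PEI differs; post-hoc pairwise
# tests would then pick the industry's optimal model.
```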

]]>
<![CDATA[Development and validation of clinical prediction models to distinguish influenza from other viruses causing acute respiratory infections in children and adults]]> https://www.researchpad.co/article/5c6b26add5eed0c484289e58

Predictive models have been developed for influenza but have seldom been validated. Typically, they have focused on patients meeting a definition of infection that includes fever; less is known about how models perform when more symptoms are considered. We therefore aimed to create and internally validate predictive scores based on acute respiratory infection (ARI) symptoms to diagnose influenza virus infection as confirmed by polymerase chain reaction (PCR) from respiratory specimens. Data from a completed trial studying the indirect effect of influenza immunization in Hutterite communities were randomly split into two independent groups for model derivation and validation. We applied different multivariable modelling techniques and constructed Receiver Operating Characteristic (ROC) curves to determine predictive indexes at different cut-points. From 2008 to 2011, 3,288 first seasonal ARI episodes and 321 (9.8%) influenza-positive events occurred in 2,202 individuals. In children up to 17 years, the significant predictors of influenza virus infection were fever, chills, and cough, along with being aged 6 years or older. In adults, the presence of chills and cough, but not fever, was highly specific for influenza virus infection (sensitivity 30%, specificity 96%). Performance of the models in the validation set was not significantly different. The predictors were consistently significant irrespective of the multivariable technique. Symptomatic predictors of influenza virus infection vary between children and adults. The scores could assist clinicians in their test-and-treat decisions, but the results need to be externally validated before application in clinical practice.
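A minimal sketch of score derivation and ROC-based cut-point reading; the symptom indicators and PCR labels are simulated stand-ins for the trial data:

```python
# Derive a symptom-based score with logistic regression, then read
# sensitivity/specificity off the ROC curve at candidate cut-points.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(5)
X = rng.integers(0, 2, size=(1000, 3))   # columns: fever, chills, cough
y = rng.integers(0, 2, size=1000)        # PCR-confirmed influenza

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]
fpr, tpr, thresholds = roc_curve(y, p)
print("AUC:", round(roc_auc_score(y, p), 3))
# sensitivity = tpr and specificity = 1 - fpr at each threshold
```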

]]>
<![CDATA[Role of insurance in determining utilization of healthcare and financial risk protection in India]]> https://www.researchpad.co/article/5c633943d5eed0c484ae6374

Background

Universal health coverage has become a policy goal in most developing economies. First, we assess the association of health insurance (HI) schemes in general, and RSBY (National Health Insurance Scheme) in particular, with the extent and pattern of healthcare utilization. Second, we assess the relationship of HI and RSBY with out-of-pocket (OOP) expenditures and financial risk protection (FRP).

Methods

A cross-sectional study was undertaken to interview 62,335 individuals from 12,134 households in eight districts of three states in India: Gujarat, Haryana and Uttar Pradesh (UP). Data on socio-demographic characteristics, assets, education, occupation, consumption expenditure, illness in the last 15 days or hospitalization during the last 365 days, treatment sought and its OOP expenditure were collected. We computed catastrophic health expenditure (CHE) as the indicator of FRP. Hospitalization rate, choice of care provider and CHE were regressed to assess their association with insurance status and type of insurance scheme, after adjusting for other covariates.
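A minimal sketch of the CHE flag. The abstract does not state its threshold; the 40%-of-capacity-to-pay convention used below is an assumption, and all figures are illustrative:

```python
# Flag catastrophic health expenditure (CHE): OOP spending above 40% of a
# household's non-food (capacity-to-pay) expenditure. Threshold is assumed.
import pandas as pd

households = pd.DataFrame({
    "oop_spend":   [500, 12000, 0, 30000],      # INR, illustrative
    "consumption": [40000, 35000, 50000, 60000],
    "food_spend":  [20000, 18000, 30000, 25000],
})

capacity_to_pay = households["consumption"] - households["food_spend"]
households["che"] = households["oop_spend"] > 0.40 * capacity_to_pay
print(households)
```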

Results

Mean OOP expenditures for outpatient care among the insured and uninsured were INR 961 (USD 16) and INR 840 (USD 14), respectively, and INR 32,573 (USD 543) and INR 24,788 (USD 413) for an episode of hospitalization. The prevalence of CHE for hospitalization was 28% among the insured and 26% among the uninsured population. In multivariate analysis, no significant association was observed between hospitalization rate, choice of care provider or CHE and insurance status, or RSBY in particular.

Conclusion

Health insurance in its present form does not seem to provide the requisite improvement in access to care or financial risk protection.

]]>
<![CDATA[Multi-sensor movement analysis for transport safety and health applications]]> https://www.researchpad.co/article/5c5ca2c0d5eed0c48441ea09

Recent increases in the use of, and applications for, wearable technology have opened up many new avenues of research. In this paper, we consider the use of lifelogging and GPS data to extend fine-grained movement analysis for applications in health and safety. We first design a framework to solve the problem of indoor and outdoor movement detection from sensor readings associated with images captured by a lifelogging wearable device. Second, we propose a set of measures related to hazard on the road network, derived from the combination of GPS movement data, road network data and sensor readings from a wearable device. Third, we identify the relationship between different socio-demographic groups and patterns of indoor physical activity and sedentary behaviour routines, as well as disturbance levels in different road settings.

]]>
<![CDATA[In silico identification of critical proteins associated with learning process and immune system for Down syndrome]]> https://www.researchpad.co/article/5c58d652d5eed0c484031b96

Understanding the expression levels of proteins and their interactions is key to diagnosing and explaining Down syndrome, the most prevalent cause of intellectual disability in humans. In previous studies, the expression levels of 77 proteins obtained from normal genotype control mice and from trisomic Ts65Dn mice were analyzed after training in contextual fear conditioning, with and without injection of the drug memantine, using statistical methods and machine learning techniques. Recent studies have also pointed out a possible link between Down syndrome and the immune system. Thus, the research presented in this paper aims at the in silico identification of proteins that are significant to the learning process and the immune system, and at deriving the most accurate model for the classification of mice. Features are selected by a forward feature selection method after a preprocessing step on the dataset. Deep neural network, gradient boosting tree, support vector machine and random forest classification methods are then applied to assess accuracy. We observe that the selected feature subsets not only yield more accurate classification results but are also composed of protein responses that are important for the learning and memory process and the immune system.
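A minimal sketch of forward feature selection wrapped around one of the named classifiers (random forest); the 77-protein expression matrix is simulated, not the mice dataset:

```python
# Wrapper-style forward feature selection over protein expression levels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(6)
X = rng.normal(size=(150, 77))         # expression levels of 77 proteins
y = rng.integers(0, 2, size=150)       # e.g. control vs. trisomic class

rf = RandomForestClassifier(n_estimators=50, random_state=6)
sfs = SequentialFeatureSelector(rf, n_features_to_select=5,
                                direction="forward", cv=3).fit(X, y)
print("selected protein indices:", np.flatnonzero(sfs.get_support()))
```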

]]>
<![CDATA[Using computer-vision and machine learning to automate facial coding of positive and negative affect intensity]]> https://www.researchpad.co/article/5c633970d5eed0c484ae6711

Facial expressions are fundamental to interpersonal communication, including social interaction, and allow people of different ages, cultures, and languages to quickly and reliably convey emotional information. Historically, facial expression research has followed from discrete emotion theories, which posit a limited number of distinct affective states that are represented with specific patterns of facial action. Much less work has focused on dimensional features of emotion, particularly positive and negative affect intensity. This is likely, in part, because achieving inter-rater reliability for facial action and affect intensity ratings is painstaking and labor-intensive. We use computer-vision and machine learning (CVML) to identify patterns of facial actions in 4,648 video recordings of 125 human participants, which show strong correspondences to positive and negative affect intensity ratings obtained from highly trained coders. Our results show that CVML can both (1) determine the importance of different facial actions that human coders use to derive positive and negative affective ratings when combined with interpretable machine learning methods, and (2) efficiently automate positive and negative affect intensity coding on large facial expression databases. Further, we show that CVML can be applied to individual human judges to infer which facial actions they use to generate perceptual emotion ratings from facial expressions.
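A minimal sketch of the rating-prediction stage, assuming facial action unit (AU) intensities have already been extracted upstream (e.g., by an OpenFace-style tool, an assumption here); data are simulated:

```python
# Interpretable model mapping AU intensities to affect intensity ratings.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
au = rng.uniform(0, 5, size=(1000, 17))    # 17 AU intensities per recording
rating = au[:, 11] * 0.8 + rng.normal(0, 0.5, 1000)  # simulated coder rating

model = RandomForestRegressor(n_estimators=200, random_state=7).fit(au, rating)
# Feature importances indicate which AUs drive the ratings, i.e. the paper's
# point (1): recovering the facial actions that human coders rely on.
print("most informative AU index:", int(np.argmax(model.feature_importances_)))
```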

]]>
<![CDATA[A conditional model predicting the 10-year annual extra mortality risk compared to the general population: a large population-based study in Dutch breast cancer patients]]> https://www.researchpad.co/article/5c536c65d5eed0c484a49ea5

Objective

Many cancer survivors face difficulties in obtaining life insurance; raised premiums and declined applications are common. We generated a prediction model estimating the conditional extra mortality risk of breast cancer patients in the Netherlands. This model can be used by life insurers to accurately estimate the additional risk of an individual patient, conditional on the years survived.

Methodology

All women diagnosed with stage I-III breast cancer in 2005–2006, treated with surgery, were selected from the Netherlands Cancer Registry. For all stages separately, multivariable logistic regression was used to estimate annual mortality risks, conditional on the years survived, until 10 years after diagnosis, resulting in 30 models. The conditional extra mortality risk was calculated by subtracting mortality rates of the general Dutch population from the patient mortality rates, matched by age, gender and year. The final model was internally and externally validated, and tested by life insurers.
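A minimal sketch of the extra-mortality subtraction, with illustrative (not Registry-derived) annual risks:

```python
# Conditional extra mortality: annual patient mortality (from a stage-specific
# logistic model) minus the age/sex/year-matched general-population rate.
patient_annual_risk = {1: 0.030, 2: 0.025, 3: 0.022}  # P(death in year k | alive)
population_rate     = {1: 0.010, 2: 0.011, 3: 0.012}  # matched Dutch rates

extra = {yr: patient_annual_risk[yr] - population_rate[yr]
         for yr in patient_annual_risk}
for yr, e in extra.items():
    print(f"year {yr} after diagnosis: extra mortality = {e:.3f}")
```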

Results

We included 23,234 patients: 10,101 stage I, 9,868 stage II and 3,265 stage III. The final models included age, tumor stage, nodal stage, lateralization, location within the breast, grade, multifocality, hormonal receptor status, HER2 status, type of surgery, axillary lymph node dissection, radiotherapy, (neo)adjuvant systemic therapy and targeted therapy. All models showed good calibration and discrimination. Testing of the model by life insurers showed that insurability using the newly developed model increased by 13%, ranging from 0% to 24% among subgroups.

Conclusion

The final model provides accurate conditional extra mortality risks of breast cancer patients, which can be used by life insurers to make more reliable calculations. The model is expected to increase breast cancer patients’ insurability and transparency among life insurers.

]]>
<![CDATA[Readmission risk and costs of firearm injuries in the United States, 2010-2015]]> https://www.researchpad.co/article/5c536c37d5eed0c484a49bcc

Background

In 2015 there were 36,252 firearm-related deaths and 84,997 nonfatal injuries in the United States. The longitudinal burden of these injuries through readmissions is currently underestimated. We aimed to determine the 6-month readmission risk and hospital costs for patients injured by firearms.

Methods

We used the Nationwide Readmission Database 2010–2015 to assess the frequency of readmissions at 6 months, and hospital costs associated with readmissions for patients with firearm-related injuries. We produced nationally representative estimates of readmission risks and costs.

Results

Of patients discharged following a firearm injury, 15.6% were readmitted within 6 months. The average annual cost of inpatient hospitalizations for firearm injury was over $911 million, 9.5% of which was due to readmissions. Medicare and Medicaid covered 45.2% of total costs for the 5 years, and uninsured patients were responsible for 20.1%.

Conclusions

From 2010 to 2015, the average total cost of hospitalization for firearm injuries was $32,700 per patient, almost 10% of which was due to readmissions within 6 months. Government insurance programs and the uninsured shouldered most of this cost.

]]>
<![CDATA[Survey of suspected dysphagia prevalence in home-dwelling older people using the 10-Item Eating Assessment Tool (EAT-10)]]> https://www.researchpad.co/article/5c5217d3d5eed0c48479462b

Objective

This study was carried out to determine the prevalence of suspected dysphagia and its features in both independent and dependent older people living at home.

Materials and methods

The 10-Item Eating Assessment Tool (EAT-10) questionnaire was sent to 1,000 independent older people and 2,000 dependent older people living at home in a municipal district of Tokyo, Japan. Participants were selected by stratified randomization according to age and care level. We set the EAT-10 cut-off at a score of ≥3, and the percentage of participants with an EAT-10 score ≥3 was defined as the prevalence of suspected dysphagia. The chi-square test was used to analyze prevalence in each group. The distribution of EAT-10 scores was analyzed, and comparisons among items, age groups, and care levels to identify symptom features were performed using the Kruskal-Wallis and Mann-Whitney U tests.
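A minimal sketch of the cut-off scoring and one of the group comparisons; the EAT-10 totals are simulated stand-ins for the questionnaire responses:

```python
# Flag suspected dysphagia at EAT-10 >= 3, estimate prevalence per group,
# and compare score distributions between groups.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(8)
independent = rng.poisson(1.5, size=510)   # EAT-10 totals, independent group
dependent = rng.poisson(3.5, size=886)     # dependent (care-receiving) group

for name, scores in [("independent", independent), ("dependent", dependent)]:
    print(name, "prevalence:", round((scores >= 3).mean() * 100, 1), "%")

stat, p = mannwhitneyu(independent, dependent, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.0f}, p = {p:.2g}")
```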

Results

Valid responses were received from 510 independent older people aged 65 years or older (mean age 75.0 ± 7.2) and 886 dependent older people (mean age 82.3 ± 6.7). The prevalences of suspected dysphagia were 25.1% and 53.8%, respectively, and showed significant increases with advancing age and care level. In both groups, many older people assigned high scores to the item about coughing, whereas individuals requiring high-level care assigned higher scores to the items about not only coughing but also swallowing of solids and quality of life.

Conclusion

Among independent older people, approximately one in four showed suspected dysphagia, and coughing was the most commonly perceived symptom. Among dependent older people, approximately one in two showed suspected dysphagia, and their characteristic symptoms were coughing, difficulty swallowing solids, and psychological burden.

]]>
<![CDATA[Can diabetes patients seeking a second hospital get better care? Results from nested case–control study]]> https://www.researchpad.co/article/5c50c495d5eed0c4845e8986

This study investigates the effect of the number of medical institutions visited on the risk of death. We used a nested case-control design drawing on the National Health Insurance Service-Senior database from 2002 to 2013. Cases were defined as deaths among outpatients with a first diagnosis of diabetes mellitus (E10-E14) after entry into the base cohort; controls were selected by incidence density sampling and matched to cases on age and sex. Our main results were estimated by conditional logistic regression for the nested case-control design. Of the 55,558 subjects in the final study sample, 9,313 (16.8%) were cases and 46,245 (83.2%) were controls. With each one-point increase in the number of hospitals per unit of medical utilization, the risk of death increased significantly by 4.1% (odds ratio (OR): 1.041, 95% confidence interval (CI): 1.039–1.043). Considering both medical utilization and number of hospitals, odds ratios for death among those with high medical utilization (OR: 1.065, 95% CI: 1.059–1.070) and a high number of hospitals (OR: 1.049, 95% CI: 1.041–1.058) were significantly higher than among those with low medical utilization (OR: 1.040, 95% CI: 1.037–1.043) and a low number of hospitals (OR: 1.029, 95% CI: 1.027–1.032), respectively. The number of medical institutions visited was significantly associated with the risk of death. Diabetic patients should therefore be warned about the potential mortality risk of seeking care across an excessive number of institutions.
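A minimal sketch of conditional logistic regression on matched case-control sets, using statsmodels; the matched data and the hospitals-per-utilization covariate are simulated stand-ins:

```python
# Conditional logistic regression with case-control strata as groups.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(9)
n_sets = 300
df = pd.DataFrame({
    "set_id": np.repeat(np.arange(n_sets), 5),       # 1 case + 4 controls
    "death": np.tile([1, 0, 0, 0, 0], n_sets),
    "n_hospitals": rng.poisson(3, n_sets * 5).astype(float),
})

fit = ConditionalLogit(df["death"], df[["n_hospitals"]],
                       groups=df["set_id"]).fit()
print("odds ratio per extra hospital:",
      round(float(np.exp(fit.params["n_hospitals"])), 3))
```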

]]>
<![CDATA[Computational prediction of diagnosis and feature selection on mesothelioma patient health records]]> https://www.researchpad.co/article/5c40f7e0d5eed0c484386b51

Background

Mesothelioma is a cancer of the mesothelial lining, most often of the lungs (pleura), that kills thousands of people worldwide annually, especially those with exposure to asbestos. Diagnosis of mesothelioma in patients often requires time-consuming imaging techniques and biopsies. Machine learning can provide a more effective, cheaper, and faster patient diagnosis and feature selection from clinical data in patient records.

Methods and findings

We analyzed a dataset of health records of 324 patients from Turkey with mesothelioma symptoms. The patients had prior asbestos exposure and displayed symptoms consistent with mesothelioma. We compared probabilistic neural network, perceptron-based neural network, random forest, one rule, and decision tree classifiers to predict diagnoses from the patient records. We measured classifier performance with standard confusion matrix scores such as the Matthews correlation coefficient (MCC). Random forest outperformed all models tried, obtaining MCC = +0.37 on the complete imbalanced dataset and MCC = +0.64 on the under-sampled balanced dataset. We then employed random forest feature selection to identify the two dataset traits most associated with mesothelioma: lung side and platelet count. These two factors proved so predictive that a decision tree focusing on them alone achieved the second-best accuracy on complete-dataset diagnosis prediction (MCC = +0.28), outperforming all other methods and even a decision tree applied to all features.
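A minimal sketch of the evaluation and feature-ranking steps (random forest scored by MCC); the 324-record dataset itself is not bundled, so data are simulated:

```python
# Random forest diagnosis prediction scored with the Matthews correlation
# coefficient, followed by importance-based feature ranking.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(10)
X = rng.normal(size=(324, 34))        # clinical record features (placeholder)
y = rng.integers(0, 2, size=324)      # 1 = mesothelioma diagnosis

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=10)
rf = RandomForestClassifier(n_estimators=300, random_state=10).fit(X_tr, y_tr)

print("MCC:", round(matthews_corrcoef(y_te, rf.predict(X_te)), 2))
top2 = np.argsort(rf.feature_importances_)[-2:]
print("two most important feature indices:", top2)  # e.g. lung side, platelets
```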

Conclusions

Our results show that machine learning can predict diagnoses of patients with mesothelioma symptoms with high accuracy, sensitivity, and specificity in a few minutes. Additionally, random forest can efficiently select the most important features of this clinical dataset (lung side and platelet count) in a few seconds. The importance of lung side (where pleural plaques occur) and blood platelet count in mesothelioma diagnosis indicates that physicians should focus on these two features when reading records of patients with mesothelioma symptoms. Moreover, doctors can use our approach to predict a patient's diagnosis when only lung side and platelet data are available.

]]>
<![CDATA[Attachment strength and on-farm die-off rate of Escherichia coli on watermelon surfaces]]> https://www.researchpad.co/article/5c3e4f8fd5eed0c484d76beb

Pre-harvest contamination of produce has been a major food safety focus. Insight into the behavior of enteric pathogens on produce under pre-harvest conditions will aid in developing pre-harvest and post-harvest risk management strategies. In this study, the attachment strength (SR) and die-off rate of E. coli on the surface of watermelon fruits, and the efficacy of aqueous chlorine treatment against the strongly attached E. coli population, were investigated. Watermelon seedlings were transplanted into eighteen plots. Prior to harvest, a cocktail of generic E. coli (ATCC 23716, 25922 and 11775) was inoculated onto the surface of the watermelon fruits (n = 162), and the attachment strength (SR) values and daily die-off rates were examined for up to 6 days by attachment assay. After 120 h, watermelon samples were treated with aqueous chlorine (150 ppm free chlorine for 3 min). The SR value of E. coli cells on watermelon surfaces increased significantly (P<0.05) from 0.04 to 0.99 in the first 24 h, primarily due to the decrease in the loosely attached population, given that the population of strongly attached cells remained constant. Thereafter, there was no significant change in SR values up to 120 h. The daily die-off rate of E. coli ranged from -0.12 to 1.3 log CFU/cm2. The chlorine treatment reduced E. coli levels by 4.2 log CFU/cm2 (initial level 5.6 log CFU/cm2) and 0.62 log CFU/cm2 (initial level 1.8 log CFU/cm2) on watermelons with attachment times of 30 min and 120 h, respectively. Overall, our findings revealed that the E. coli population on watermelon surfaces declined over time in an agricultural environment. Microbial contamination during pre-harvest stages may promote the formation of strongly attached cells on produce surfaces, which could reduce the efficacy of post-harvest washing and sanitation techniques.
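A minimal sketch of the two tracked quantities. The SR definition below (strongly attached fraction of total attached cells) is an assumption consistent with the reported 0.04 to 0.99 range; all counts are illustrative:

```python
# Attachment strength SR (assumed: strongly attached / total attached) and
# daily die-off rate as the drop in log10 counts between days.
import numpy as np

loose, strong = 4.8e5, 2.0e4            # CFU/cm^2, loosely vs. strongly attached
sr = strong / (loose + strong)          # ~0.04, as at inoculation
print("attachment strength SR:", round(sr, 2))

counts = np.array([5.6, 5.1, 4.4, 3.9])  # log10 CFU/cm^2 on days 0-3
die_off = -np.diff(counts)               # daily die-off rate, log CFU/cm^2
print("daily die-off (log CFU/cm^2):", die_off)
```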

]]>