ResearchPad - data-acquisition https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks <![CDATA[Genuine cross-frequency coupling networks in human resting-state electrophysiological recordings]]> https://www.researchpad.co/article/elastic_article_15766 Genuine interareal cross-frequency coupling (CFC) can be identified from human resting state activity using magnetoencephalography, stereoelectroencephalography, and novel network approaches. CFC couples slow theta and alpha oscillations to faster oscillations across brain regions.

]]>
<![CDATA[Direct comparison of activation maps during galvanic vestibular stimulation: A hybrid H<sub>2</sub>[<sup>15</sup> O] PET—BOLD MRI activation study]]> https://www.researchpad.co/article/elastic_article_14749 Previous unimodal PET and fMRI studies in humans revealed a reproducible vestibular brain activation pattern, but with variations in its weighting and expansiveness. Hybrid studies minimizing methodological variations at baseline conditions are rare and still lacking for task-based designs. Thus, we applied for the first time hybrid 3T PET-MRI scanning (Siemens mMR) with galvanic vestibular stimulation (GVS) in healthy volunteers in order to directly compare H215O-PET and BOLD MRI responses. List mode PET acquisition started with the injection of 750 MBq H215O simultaneously with MRI EPI sequences. Group-level statistical parametric maps were generated for GVS vs. rest contrasts of PET, MR-onset (event-related), and MR-block. All contrasts showed a similar bilateral vestibular activation pattern with remarkable proximity of activation foci. Both BOLD contrasts yielded more widespread bilateral activation clusters than PET; no area showed contradictory signal responses. PET still confirmed the right-hemispheric lateralization of the vestibular system, whereas BOLD-onset revealed only a tendency. The concept of reciprocal inhibitory visual-vestibular interaction was confirmed by PET signal decreases in primary and secondary visual cortices, and by BOLD-block decreases in secondary visual areas. In conclusion, MRI activation maps contained a mixture of CBF effects, as measured using H215O-PET, and additional non-CBF effects, and the activation-deactivation pattern of the BOLD-block appears more similar to that of H215O-PET than to that of the BOLD-onset.

]]>
<![CDATA[Applicability of personal laser scanning in forestry inventory]]> https://www.researchpad.co/article/5c803c6ad5eed0c484ad8913

Light Detection and Ranging (LiDAR) technology has been widely used in forestry surveys in the form of airborne laser scanning (ALS), terrestrial laser scanning (TLS), and mobile laser scanning (MLS). However, the acquisition of important basic tree parameters (e.g., diameter at breast height and tree position) in forest inventory still suffers from low measurement efficiency and weak GNSS signals under the canopy. A personal laser scanning (PLS) device combined with SLAM technology, being lightweight and highly mobile, provides an effective solution for forest inventory under complex conditions. This study proposes a new method that fits a polygonal cylinder to the point cloud data obtained by a PLS device in order to estimate the diameter of the trunk. The point cloud data of tree trunks of different thicknesses were modeled using different fitting methods. The rate of correct tree trunk detection was 93.3%, and the total deviation of the estimated tree diameter at breast height (DBH) was -1.26 cm. The root mean square errors (RMSEs) of the extracted DBH and tree position estimates were 1.58 cm and 26 cm, respectively. The survey efficiency of the PLS device was 30 m2/min per investigator, compared with 0.91 m2/min for the field survey. The test demonstrated that the PLS device combined with the SLAM algorithm provides an efficient and convenient solution for forest inventory.
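As an illustrative sketch (not the authors' polygonal-cylinder implementation), a least-squares algebraic circle fit to a horizontal slice of trunk points at breast height recovers the DBH; the synthetic trunk radius and noise level below are assumptions:

```python
import numpy as np

def fit_circle_dbh(points):
    """Estimate DBH from a breast-height slice of trunk points
    using an algebraic (Kasa) circle fit: x^2 + y^2 = 2a*x + 2b*y + c."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)  # circle radius
    return 2 * r                    # diameter at breast height

# synthetic trunk slice: circle of radius 0.15 m plus 2 mm ranging noise
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
pts = np.column_stack([0.15 * np.cos(theta), 0.15 * np.sin(theta)])
pts += rng.normal(0, 0.002, pts.shape)
dbh = fit_circle_dbh(pts)
```

A robust pipeline would first segment individual stems and reject outliers before fitting.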

]]>
<![CDATA[Cross-comparative analysis of evacuation behavior after earthquakes using mobile phone data]]> https://www.researchpad.co/article/5c76fe4dd5eed0c484e5b867

Despite the importance of predicting evacuation mobility dynamics after large-scale disasters for effective first response and disaster relief, our general understanding of evacuation behavior remains limited because of the lack of empirical evidence on the evacuation movement of individuals across multiple disaster instances. Here we investigate the GPS trajectories of more than 1 million anonymized mobile phone users whose positions were tracked for a period of 2 months before and after four major earthquakes that occurred in Japan. Through a cross-comparative analysis of the four disaster instances, we find that, in contrast to the assumed complexity of evacuation decision-making mechanisms in crisis situations, an individual's evacuation probability is strongly dependent on the seismic intensity that they experience. In fact, we show that the evacuation probabilities in all earthquakes collapse into a similar pattern, with a critical threshold at around seismic intensity 5.5. This indicates that despite the diversity in earthquake profiles and urban characteristics, evacuation behavior is similarly dependent on seismic intensity. Moreover, we found that the probability density functions of the distances that individuals evacuate do not depend on the seismic intensity they experience. These insights from empirical analysis of evacuation from multiple earthquake instances using large-scale mobility data contribute to a deeper understanding of how people react to earthquakes, and can potentially assist decision makers in simulating and predicting the number of evacuees in urban areas with little computational time and cost. This can be achieved by utilizing only the information on population density distribution and seismic intensity distribution, which can be observed instantaneously after the shock.
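The last point can be sketched as a simple calculation: combine a sigmoid evacuation-probability curve with a critical threshold near intensity 5.5 and a gridded population map to get an expected evacuee count. The sigmoid form, its steepness, and the toy grid values are assumptions, not the paper's fitted model:

```python
import numpy as np

def evacuation_probability(intensity, threshold=5.5, steepness=3.0):
    """Hypothetical sigmoid: probability of evacuating as a function
    of the seismic intensity experienced (threshold ~5.5)."""
    return 1.0 / (1.0 + np.exp(-steepness * (intensity - threshold)))

def expected_evacuees(population, intensity):
    """Expected evacuees over grid cells, given per-cell population
    and per-cell seismic intensity."""
    return float(np.sum(population * evacuation_probability(intensity)))

pop = np.array([1000, 5000, 2000])   # people per grid cell (toy values)
shindo = np.array([4.0, 5.5, 6.5])   # seismic intensity per cell
n_evac = expected_evacuees(pop, shindo)
```

In practice the curve would be estimated from the observed collapse of evacuation probabilities across the four earthquakes.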

]]>
<![CDATA[A rapid methods development workflow for high-throughput quantitative proteomic applications]]> https://www.researchpad.co/article/5c6f152ed5eed0c48467ae98

Recent improvements in the speed and sensitivity of liquid chromatography-mass spectrometry systems have driven significant progress toward system-wide characterization of the proteome of many species. These efforts create large proteomic datasets that provide insight into biological processes and identify diagnostic proteins whose abundance changes significantly under different experimental conditions. Yet, these system-wide experiments are typically the starting point for hypothesis-driven, follow-up experiments to elucidate the extent of the phenomenon or the utility of the diagnostic marker, wherein many samples must be analyzed. Transitioning from a few discovery experiments to quantitative analyses of hundreds of samples requires significant resources, both to develop sensitive and specific methods and to analyze samples in a high-throughput manner. To aid these efforts, we developed a workflow that uses data acquired from discovery proteomic experiments, retention time prediction, and standard-flow chromatography to rapidly develop targeted proteomic assays. We demonstrated this workflow by developing MRM assays to quantify proteins of multiple metabolic pathways from multiple microbes under different experimental conditions. With this workflow, one can also target peptides in scheduled/dynamic acquisition methods starting from a shotgun proteomic dataset downloaded from an online repository, validate the method with appropriate control samples or standard peptides, and begin analyzing hundreds of samples in only a few minutes.
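The scheduling idea can be illustrated with a minimal sketch: given predicted retention times, acquire each peptide's transitions only inside a narrow window around its predicted RT. The peptide sequences, RT values, and window width here are hypothetical placeholders, not values from the study:

```python
def mrm_schedule(peptides, window=1.5):
    """Build a scheduled/dynamic MRM table: monitor each peptide's
    transitions only within +/- `window` minutes of its predicted
    retention time (RT), freeing cycle time for other targets."""
    return [
        {"peptide": seq, "start": rt - window, "end": rt + window}
        for seq, rt in peptides
    ]

# hypothetical target peptides with predicted retention times (min)
targets = [("LVNELTEFAK", 12.4), ("YLYEIAR", 8.9), ("DDNPNLPR", 5.2)]
table = mrm_schedule(targets)
```

Narrower windows allow more concurrent targets but demand more accurate RT prediction.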

]]>
<![CDATA[Optimal threshold of three-dimensional echocardiographic fully automated software for quantification of left ventricular volumes and ejection fraction: Comparison with cardiac magnetic resonance disk-area summation method and feature tracking method]]> https://www.researchpad.co/article/5c58d637d5eed0c4840318e4

Aims

Novel fully automated left chamber quantification software for three-dimensional echocardiography (3DE) has the potential for reliable measurement of left ventricular (LV) volumes and ejection fraction (LVEF). However, the optimal setting of the global LV endocardial border threshold has not been established.

Methods and results

We analyzed LV volumes and LVEF using fully automated left chamber quantification software (Dynamic HeartModelA.I., Philips Medical Systems) in 65 patients who had undergone both 3DE and cardiac magnetic resonance (CMR) examinations on the same day. We recorded LV end-diastolic volume (LVEDV) and LV end-systolic volume (LVESV) while varying the global LV border threshold setting from 0 to 100 points in increments of 10 points. These values were compared with the corresponding CMR values obtained with the disk-area summation method and the feature tracking (FT) method. Coverage probability (CP) was calculated as an index of accuracy and reliability. The fully automated software provided LV volumes and LVEF in 57 patients (feasibility: 88%). LVEDV and LVESV increased steadily with increasing border threshold and reached minimal bias at a threshold setting of 80 against the CMR disk-area summation method and 90 against the CMR FT method. The corresponding CP of LVEF was 0.74 against the disk-area summation method and 0.84 against the FT method.
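Coverage probability is commonly defined as the fraction of paired measurements falling within a chosen tolerance of the reference; a minimal sketch of that definition follows, with hypothetical LVEF values and tolerance (the study's exact CP criterion is not restated here):

```python
import numpy as np

def coverage_probability(measured, reference, tolerance):
    """Fraction of paired measurements within +/- tolerance of the
    reference value (one common definition of CP)."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.mean(np.abs(measured - reference) <= tolerance))

lvef_3de = [55, 60, 48, 35, 62]   # hypothetical LVEF by 3DE (%)
lvef_cmr = [57, 59, 50, 40, 61]   # hypothetical LVEF by CMR (%)
cp = coverage_probability(lvef_3de, lvef_cmr, tolerance=3)
```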

Conclusions

Using CMR values as a reference, the LV endocardial border threshold can be set at around 80 to 90, with the same threshold value at end-diastole and end-systole, to approximate LVEDV, LVESV and LVEF with clinically acceptable CP values for LVEF.

]]>
<![CDATA[Accurate and reproducible enumeration of T-, B-, and NK lymphocytes using the BD FACSLyric 10-color system: A multisite clinical evaluation]]> https://www.researchpad.co/article/5c58d639d5eed0c4840318fc

Clinical flow cytometry is a reliable methodology for whole blood cell phenotyping for different applications. The BD FACSLyric™ system comprises a flow cytometer available in different optical configurations, BD FACSuite™ Clinical software, and an optional BD FACS™ Universal Loader. BD FACSuite Clinical software used with BD™ FC Beads and BD CS&T Beads enables universal setup for performance QC, instrument control, data acquisition/storage, online/offline data analysis, and instrument standardization. BD Biosciences sponsored the clinical evaluation of the BD FACSLyric 10-color configuration at seven clinical sites using delinked and de-identified blood specimens from HIV-infected and uninfected subjects to enumerate T-, B-, and NK-lymphocytes with the BD Multitest™ reagents (BD Multitest IMK kit and BD Multitest 6-color TBNK). Samples were analyzed on the BD FACSLyric system with BD FACSuite Clinical software, and on the BD FACSCanto™ II system with BD FACSCanto clinical software and BD FACS 7-Color Setup beads. For equivalency between methods, data (n = 362) were analyzed with Deming regression for absolute count and percentage of lymphocytes. Results gave R2 ≥0.98, with slope values ≥0.96 and slopes ranging between 0.90 and 1.05. The percent (%) bias values were <10% for T- and NK cells and <15% for B-cells. The between-site (n = 4) total precision was tested for 5 days (2 runs/day) and gave a %coefficient of variation below 10% for absolute cell counts. The stability claims were confirmed (n = 186) for the two BD Multitest reagents. The reference intervals were re-established in male and female adults (n = 134). The analysis by gender showed statistically significant differences for CD3+ and CD4+ T-cell counts and %CD4. In summary, the BD FACSLyric and the BD FACSCanto II systems generated comparable measurements of T-, B-, and NK-cells using BD Multitest assays.
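Deming regression differs from ordinary least squares in allowing measurement error in both methods, which is why it is used for method-equivalency studies like this one. A minimal sketch, assuming equal error variance in both instruments (delta = 1) and hypothetical paired counts:

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Deming regression slope and intercept; delta is the ratio of
    the error variances of y and x (1.0 = equal error in both)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxx = np.mean((x - mx) ** 2)
    syy = np.mean((y - my) ** 2)
    sxy = np.mean((x - mx) * (y - my))
    slope = (syy - delta * sxx +
             np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

# hypothetical paired absolute CD4 counts from the two cytometers
x = [400, 650, 820, 1020, 1250]
y = [410, 640, 830, 1015, 1260]
slope, intercept = deming(x, y)
```

A slope near 1 and an intercept near 0 indicate the two methods agree, as reported for the two systems here.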

]]>
<![CDATA[Parametrical modelling for texture characterization—A novel approach applied to ultrasound thyroid segmentation]]> https://www.researchpad.co/article/5c59fefed5eed0c4841358a3

Texture analysis is an important topic in ultrasound (US) image analysis for structure segmentation and tissue classification. In this work, a novel approach for US image texture feature extraction is presented. It is based on parametrical modelling of a signal version of the US image, so that the image can be processed as data resulting from a dynamical process. Because of the predictive characteristics of such a model representation, good estimates of texture features can be obtained from less data than commonly used methods require, allowing higher robustness to low signal-to-noise ratios and a more localized US image analysis. The usability of the proposed approach was demonstrated by extracting texture features for segmenting the thyroid in US images. The results showed that features corresponding to energy ratios between different modelled texture frequency bands clearly distinguish thyroid from non-thyroid texture. A simple k-means clustering algorithm was used to classify US image patches as belonging to the thyroid or not. Thyroid segmentation was performed on two different datasets, obtaining Dice coefficients over 85%.
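The final clustering step can be sketched generically: a two-cluster k-means over per-patch feature vectors (here stand-ins for the paper's band-energy ratios; the feature values and deterministic initialization are assumptions of this sketch):

```python
import numpy as np

def kmeans_2(X, iters=50):
    """Minimal 2-cluster k-means over patch feature vectors
    (e.g., energy ratios between modelled frequency bands)."""
    X = np.asarray(X, float)
    centers = np.array([X[0], X[-1]])  # deterministic init for this sketch
    for _ in range(iters):
        # assign each patch to its nearest center, then update centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels

# hypothetical energy-ratio features for thyroid vs. non-thyroid patches
rng = np.random.default_rng(0)
thyroid = rng.normal([0.8, 0.2], 0.05, (20, 2))
other = rng.normal([0.3, 0.7], 0.05, (20, 2))
labels = kmeans_2(np.vstack([thyroid, other]))
```

With well-separated band-energy ratios, the two clusters map directly onto thyroid and non-thyroid patches.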

]]>
<![CDATA[Spontaneous eye movements during focused-attention mindfulness meditation]]> https://www.researchpad.co/article/5c536b75d5eed0c484a48aab

Oculometric measures have proven to be useful markers of mind-wandering during visual tasks such as reading. However, little is known about ocular activity during mindfulness meditation, a mental practice that naturally involves mind-wandering episodes. To explore this issue, we extracted closed-eye ocular movement measurements via a covert technique (EEG recordings) from expert meditators during two repetitions of a 7-minute mindfulness meditation session focusing on the breath, and two repetitions of a 7-minute instructed mind-wandering task. Power spectral density was estimated for both the vertical and horizontal components of eye movements. The results show a significantly smaller average amplitude of eye movements in the delta band (1–4 Hz) during mindfulness meditation than during instructed mind-wandering. Moreover, participants’ meditation expertise correlated significantly with this average amplitude during both tasks, with more experienced meditators generally moving their eyes less than less experienced meditators. These findings suggest the potential use of this measure to detect mind-wandering episodes during mindfulness meditation and to assess meditation performance.
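The core measure — power of an eye-movement trace in the 1–4 Hz delta band — can be sketched with a plain FFT periodogram; the sampling rate, signal composition, and scaling below are assumptions, not the study's exact spectral estimator:

```python
import numpy as np

def delta_band_power(trace, fs, lo=1.0, hi=4.0):
    """Power of an eye-movement trace in the 1-4 Hz (delta) band,
    from a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(trace)) ** 2 / (fs * len(trace))
    band = (freqs >= lo) & (freqs <= hi)
    return float(np.sum(psd[band]) * (freqs[1] - freqs[0]))

fs = 250                                 # Hz, assumed sampling rate
t = np.arange(0, 60, 1 / fs)
slow = np.sin(2 * np.pi * 2 * t)         # 2 Hz drift (inside delta band)
fast = 0.1 * np.sin(2 * np.pi * 20 * t)  # 20 Hz component (outside band)
p_meditation = delta_band_power(0.5 * slow + fast, fs)  # smaller slow drift
p_wandering = delta_band_power(slow + fast, fs)         # larger slow drift
```

The comparison mirrors the reported effect: smaller delta-band eye-movement power during meditation than during mind-wandering.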

]]>
<![CDATA[Tensor framelet based iterative image reconstruction algorithm for low-dose multislice helical CT]]> https://www.researchpad.co/article/5c424392d5eed0c4845e0633

In this study, we investigate the feasibility of improving image quality for low-dose multislice helical computed tomography (CT) via iterative reconstruction with tensor framelet (TF) regularization. The TF-based algorithm is a high-order generalization of isotropic total variation regularization. It is implemented on a GPU platform for fast parallel X-ray forward and backward projections, taking the flying focal spot into account. The solution algorithm for image reconstruction is based on the alternating direction method of multipliers, also known as the split Bregman method. The proposed method is validated using experimental data from a Siemens SOMATOM Definition 64-slice helical CT scanner, in comparison with the FDK, Katsevich, and total variation (TV) algorithms. To test the algorithm's performance with low-dose data, ACR and Rando phantoms were scanned at different dosages and the data were equally undersampled by various factors. The proposed method is robust for low-dose data at a 25% undersampling factor. Quantitative metrics demonstrated that the proposed algorithm achieves superior results over the other existing methods.
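At the heart of split Bregman / ADMM iterations for l1-type penalties (TV and framelet alike) is the soft-thresholding ("shrinkage") proximal step; a minimal sketch of that single step (not the full reconstruction loop):

```python
import numpy as np

def shrink(x, lam):
    """Soft-thresholding: the closed-form proximal operator of the
    l1 norm, applied per coefficient in each split Bregman iteration."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# coefficients below the threshold lam are zeroed; the rest shrink toward 0
v = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
shrunk = shrink(v, 1.0)
```

In the full algorithm this step alternates with a quadratic data-fidelity solve involving the forward and backward projectors.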

]]>
<![CDATA[Validation of the extended thrombolysis in cerebral infarction score in a real world cohort]]> https://www.researchpad.co/article/5c40f781d5eed0c4843862a0

Background

A thrombolysis in cerebral infarction (TICI) score of 2b is defined as a good recanalization result, although the reperfusion may cover only 50% of the affected territory. An additional mTICI2c category was introduced to further differentiate between mTICI scores. Despite the new mTICI2c category, mTICI2b still covers a range of 50–90% reperfusion, which might be too imprecise to predict neurological improvement after therapy.

Aim

To compare the 7-point “expanded TICI” (eTICI) scale with the traditional mTICI scale with regard to predicting functional independence at 90 days.

Methods

Retrospective review of 225 patients with large artery occlusion. Angiograms were graded by 2 readers according to the 7-point eTICI score (0% = eTICI0; reduced clot = eTICI1; 1–49% = eTICI2a; 50–66% = eTICI2b50; 67–89% = eTICI2b67; 90–99% = eTICI2c; complete reperfusion = eTICI3) and the conventional mTICI score. The ability of the eTICI and mTICI scores to predict favorable outcome at 90 days was compared.

Results

In the ROC analysis, eTICI was the better predictor of favorable outcome (p-value 0.047). Additionally, the eTICI scores 2b50, 2b67 and 2c (former mTICI2b) were significantly superior at predicting the probability of a favorable outcome at 90 days after endovascular therapy, with a p-value of 0.033 (probabilities of 17% for eTICI2b50, 24% for eTICI2b67 and 54% for eTICI2c vs. 36% for mTICI2b).

Conclusions

The 7-point eTICI allows for a more accurate outcome prediction compared to the mTICI score because it refines the broad range of former mTICI2b results.

]]>
<![CDATA[Improved and semi-automated reductive β-elimination workflow for higher throughput protein O-glycosylation analysis]]> https://www.researchpad.co/article/5c61b7c0d5eed0c484937ef2

Protein O-glycosylation has been shown to be critical for a wide range of biological processes, resulting in increased interest in studying alterations in the O-glycosylation patterns of biological samples as disease biomarkers, as well as for patient stratification and personalized medicine. Given the complexity of O-glycans, a large number of samples often has to be analysed in order to obtain conclusive results. However, most O-glycan analysis work done so far has been performed using glycoanalytical technologies that are not suitable for the analysis of large sample sets, mainly due to limitations in sample throughput and affordability of the methods. Here we report a largely automated system for O-glycan analysis. We adapted the reductive β-elimination release of O-glycans to a 96-well plate format and transferred the protocol onto a liquid handling robot. The workflow includes O-glycan release, purification and derivatization through permethylation, followed by MALDI-TOF-MS. The method was validated according to the ICH Q2 (R1) guidelines for the validation of analytical procedures. The semi-automated reductive β-elimination system enabled the characterization and relative quantitation of O-glycans from commercially available standards. Results of the semi-automated method were in good agreement with those of the conventional manual in-solution method, even outperforming it in terms of repeatability. Release of O-glycans for 96 samples was achieved within 2.5 hours, and automated data acquisition on the MALDI-TOF-MS took less than 1 minute per sample. This largely automated workflow for O-glycosylation analysis was shown to produce rapid, accurate and reliable data, and has the potential to be applied to O-glycan characterization of biological samples and biopharmaceuticals, as well as to biomarker discovery.

]]>
<![CDATA[Altered reward processing following an acute social stressor in adolescents]]> https://www.researchpad.co/article/5c390bd4d5eed0c48491e907

Altered reward processing is a transdiagnostic factor implicated in a wide range of psychiatric disorders. While prior animal and adult research has shown that stress contributes to reward dysfunction, less is known about how stress impacts reward processing in youth. Towards addressing this gap, the present study probed neural activation associated with reward processing following an acute stressor. Healthy adolescents (n = 40) completed a clinical assessment, and fMRI data were acquired while participants completed a monetary guessing task under a no-stress condition and then under a stress condition. Based on prior literature, analyses focused on a priori defined regions-of-interest, specifically the striatum (win trials) and dorsal anterior cingulate cortex [dACC] and insula (loss trials). Two main findings emerged. First, reward-related neural activation (i.e., striatum) was blunted in the stress relative to the no-stress condition. Second, the stress condition also contributed to blunted neural response following reward in loss-related regions (i.e., dACC, anterior insula); however, there were no changes in loss sensitivity. These results highlight the importance of conceptualizing neural vulnerability within the presence of stress, as this may clarify risk for mental disorders during a critical period of development.

]]>
<![CDATA[Implementation and assessment of the black body bias correction in quantitative neutron imaging]]> https://www.researchpad.co/article/5c390ba1d5eed0c48491d96e

We describe in this paper the experimental procedure, the data treatment and the quantification of the black body correction: an experimental approach to compensate for scattering and systematic biases in quantitative neutron imaging based on experimental data. The correction algorithm consists of two steps: estimation of the scattering component, and correction using an enhanced normalization formula. The method incorporates correction terms into the image normalization procedure, which usually includes only open beam and dark current images (open beam correction). Our aim is to show its efficiency and reproducibility: we detail the data treatment procedures and quantitatively investigate the effect of the correction. Its implementation is included in the open source CT reconstruction software MuhRec. The performance of the proposed algorithm is demonstrated using simulated and experimental CT datasets acquired at the ICON and NEUTRA beamlines at the Paul Scherrer Institut.
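The idea of the enhanced normalization can be sketched as follows: subtract the dark current and an estimated scattering component from both the sample and open-beam images before forming the transmission image. The function shape and toy values are a simplification of this sketch, not the paper's exact formula:

```python
import numpy as np

def normalize(img, ob, dc, s_img=0.0, s_ob=0.0):
    """Enhanced normalization sketch: transmission image with dark
    current (dc) and estimated scattering (s_img, s_ob, e.g. from a
    black-body grid) removed from sample (img) and open beam (ob)."""
    return (img - dc - s_img) / (ob - dc - s_ob)

# toy 2x2 images with uniform counts
img = np.full((2, 2), 120.0)
ob = np.full((2, 2), 210.0)
dc = np.full((2, 2), 10.0)
t_plain = normalize(img, ob, dc)                         # open-beam correction only
t_corr = normalize(img, ob, dc, s_img=20.0, s_ob=20.0)   # with scattering terms
```

Ignoring the scattering term biases the transmission upward, which propagates into quantitative (e.g., CT) results.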

]]>
<![CDATA[cellSTORM—Cost-effective super-resolution on a cellphone using dSTORM]]> https://www.researchpad.co/article/5c3fa5b7d5eed0c484ca7b36

High optical resolution in microscopy usually goes along with costly hardware components, such as lenses, mechanical setups and cameras. Several studies have shown that single-molecule localization microscopy can be made affordable by relying on off-the-shelf optical components and industry-grade CMOS cameras. Recent technological advances have yielded consumer-grade camera devices with surprisingly good performance, and the camera sensors of smartphones have benefited from this development. Combined with their computing power, smartphones provide a fantastic opportunity for “imaging on a budget”. Here we show that a consumer cellphone is capable of optical super-resolution imaging by (direct) Stochastic Optical Reconstruction Microscopy (dSTORM), achieving an optical resolution better than 80 nm. In addition to standard reconstruction algorithms, we used a trained image-to-image generative adversarial network (GAN) to reconstruct video sequences directly on the smartphone under conditions where traditional algorithms provide sub-optimal localization performance. We believe that “cellSTORM” paves the way to making super-resolution microscopy not only affordable but widely available, thanks to the ubiquity of cellphone cameras.

]]>
<![CDATA[Improving efficiency in neuroimaging research through application of Lean principles]]> https://www.researchpad.co/article/5c23ff87d5eed0c484092575

Introduction

“Lean” is a set of management principles that focus on increasing value and efficiency by reducing or avoiding waste (e.g., overproduction, defects, inventory, transportation, waiting, motion, overprocessing). It has been applied to manufacturing, education, and health care, leading to optimized process flow, increased efficiency and increased team empowerment. However, to date, it has not been applied to neuroimaging research.

Methods

Lean principles, such as Value stream mapping (i.e., a tool for identifying the workflow steps on which to focus improvement efforts), 5S (i.e., an organizational method to boost workplace efficiency and efficacy) and Root-cause analysis (i.e., a problem-solving approach to identify key points of failure in a system), were applied to an ongoing, large neuroimaging study that included seven research visits per participant. All team members participated in a half-day exercise in which the entire project flow was visualized and areas of inefficiency were identified. Changes focused on removing obstacles, standardization, optimal arrangement of equipment and root-cause analysis. A process for continuous improvement was also implemented. Total experiment time was recorded before the implementation of Lean for two participants and after the implementation of Lean for two participants. Satisfaction of team members was assessed anonymously on a 5-point Likert scale, ranging from much worsened to much improved.

Results

All team members (N = 6) considered the overall experience of conducting an experiment much improved after the implementation of Lean. Five out of six team members rated the reduction in experiment time as much improved, with the remaining team member considering it somewhat improved. Average experiment time was reduced by 13% after the implementation of Lean (from 142 and 147 minutes to 124 and 128 minutes).

Discussion

Lean principles can be successfully applied to neuroimaging research. Training in Lean principles for junior research scientists is recommended.

]]>
<![CDATA[Comparison of fluctuations in global network topology of modeled and empirical brain functional connectivity]]> https://www.researchpad.co/article/5bb3df3d40307c54ff8ce8f3

Dynamic models of large-scale brain activity have been used for reproducing many empirical findings on human brain functional connectivity. Features that have been shown to be reproducible by comparing modeled to empirical data include functional connectivity measured over several minutes of resting-state functional magnetic resonance imaging, as well as its time-resolved fluctuations on a time scale of tens of seconds. However, comparison of modeled and empirical data has not yet been conducted for fluctuations in the global network topology of functional connectivity, such as fluctuations between segregated and integrated topology or between high- and low-modularity topology. Since these global network-level fluctuations have been shown to be related to human cognition and behavior, there is an emerging need to clarify their reproducibility with computational models. To address this problem, we directly compared fluctuations in the global network topology of functional connectivity between modeled and empirical data, and clarified the degree to which a stationary model of spontaneous brain dynamics can reproduce the empirically observed fluctuations. Modeled fluctuations were simulated using a system of coupled phase oscillators wired according to brain structural connectivity. By performing a model parameter search, we found that modeled fluctuations in global metrics quantifying network integration and modularity reached more than 80% of the magnitudes of those observed in the empirical data. Temporal properties of network states determined from fluctuations in these metrics were also found to be reproducible, although their spatial patterns in functional connectivity did not perfectly match. These results suggest that stationary models simulating resting-state activity can reproduce the magnitude of empirical fluctuations in segregation and integration, whereas additional factors, such as active mechanisms controlling non-stationary dynamics and/or greater accuracy of mapping brain structural connectivity, would be necessary to fully reproduce the spatial patterning associated with these fluctuations.
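A system of coupled phase oscillators wired by a connectivity matrix can be sketched as a generic Kuramoto model; the toy all-to-all "connectome", natural frequencies, coupling strength, and plain Euler integration below are assumptions, not the study's parameterization:

```python
import numpy as np

def simulate_kuramoto(C, omega, K=1.0, dt=0.01, steps=2000, seed=0):
    """Euler simulation of phase oscillators coupled through a
    connectivity matrix C: dtheta_i/dt = omega_i + K * sum_j C_ij
    * sin(theta_j - theta_i)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, len(omega))
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]   # diff[i, j] = theta_j - theta_i
        theta = theta + dt * (omega + K * np.sum(C * np.sin(diff), axis=1))
    return theta

n = 10
C = np.ones((n, n)) / n                                 # toy uniform coupling
omega = np.random.default_rng(1).normal(10.0, 0.1, n)   # natural frequencies (rad/s)
theta = simulate_kuramoto(C, omega, K=5.0)
order = abs(np.mean(np.exp(1j * theta)))                # synchrony order parameter
```

In the study, C would be the empirical structural connectome and the network metrics (integration, modularity) would be computed on functional connectivity derived from the simulated phases.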

]]>
<![CDATA[System for automatic gait analysis based on a single RGB-D camera]]> https://www.researchpad.co/article/5b6dda0e463d7e7491b405e8

Human gait analysis provides valuable information regarding the way of walking of a given subject. Low-cost RGB-D cameras, such as the Microsoft Kinect, are able to estimate the 3-D position of several body joints without requiring the use of markers. This 3-D information can be used to perform objective gait analysis in an affordable, portable, and non-intrusive way. In this contribution, we present a system for fully automatic gait analysis using a single RGB-D camera, namely the second version of the Kinect. Our system does not require any manual intervention (except for starting/stopping the data acquisition), since it firstly recognizes whether the subject is walking or not, and identifies the different gait cycles only when walking is detected. For each gait cycle, it then computes several gait parameters, which can provide useful information in various contexts, such as sports, healthcare, and biometric identification. The activity recognition is performed by a predictive model that distinguishes between three activities (walking, standing and marching), and between two postures of the subject (facing the sensor, and facing away from it). The model was built using a multilayer perceptron algorithm and several measures extracted from 3-D joint data, achieving an overall accuracy and F1 score of 98%. For gait cycle detection, we implemented an algorithm that estimates the instants corresponding to left and right heel strikes, relying on the distance between ankles, and the velocity of left and right ankles. The algorithm achieved errors for heel strike instant and stride duration estimation of 15 ± 25 ms and 1 ± 29 ms (walking towards the sensor), and 12 ± 23 ms and 2 ± 24 ms (walking away from the sensor). Our gait cycle detection solution can be used with any other RGB-D camera that provides the 3-D position of the main body joints.
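The heel-strike rule based on inter-ankle distance can be sketched as a local-maximum search over that distance signal; the synthetic 30 fps trajectories and stride frequency below are assumptions standing in for real Kinect joint data:

```python
import numpy as np

def heel_strike_frames(left_ankle, right_ankle):
    """Candidate heel-strike frames: local maxima of the distance
    between the two ankles (a simplified form of the rule above;
    the full method also uses ankle velocities)."""
    d = np.linalg.norm(left_ankle - right_ankle, axis=1)
    interior = (d[1:-1] > d[:-2]) & (d[1:-1] > d[2:])
    return np.where(interior)[0] + 1

# synthetic 30 fps walk: inter-ankle separation oscillates with the stride
fps, stride_hz = 30, 0.9
t = np.arange(0, 4, 1 / fps)
left = np.column_stack([0.3 * np.sin(2 * np.pi * stride_hz * t),
                        np.zeros_like(t)])
right = -left
frames = heel_strike_frames(left, right)
```

Stride duration then follows from the frame gap between alternate heel strikes divided by the frame rate.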

]]>
<![CDATA[The gap between cause-of-death statistics and Household Registration reports in Shandong, China during 2011-2013: Evaluation and adjustment for underreporting in the mortality data for 262 subcounty level populations]]> https://www.researchpad.co/article/5b49cac8463d7e33e4eac05b

Underreporting is a quality concern in mortality statistics. The purpose of this study was to assess and adjust for underreporting in population-based cause-of-death statistics. The total population (96 million) of Shandong, China was divided into 262 subcounty-level populations geographically and by residence type (urban/rural). For each subpopulation, the total number of deaths during the years 2011–2013 was determined using data from the Household Registration System (HRS), and was used as a reference to assess the underreporting rate (UR) in the cause-of-death data from the Shandong Death Registration System (SDRS). It was estimated that 454,615 deaths, or 21.5% (95% CI: 21.4–21.5%), were unreported. Underreporting was more pronounced in rural (22.1%) versus urban communities (20.0%), in economically underdeveloped regions (32% versus 16% in the least disadvantaged areas), and in newly included sites with no prior experience in cause-of-death reporting (24% versus 17%). Geographic variation was large, with URs at the prefectural level ranging from 11.2% to 43.7%. A stratified analysis showed that the UR was higher in rural populations in high-income regions, but higher in urban communities in middle- and low-income regions. An adjustment factor (AF) was calculated for each of the 262 subpopulations (ranging from 0.9 to 2.5, with an average of 1.27). The total mortality rate was adjusted from 6.03 to 7.67 deaths per 1000 persons. Underreporting in the SDRS varies greatly between areas and populations and is related to residence type, prior experience and local economy. Correcting underreporting at a local level is needed, especially for comparative analyses across geographical areas or populations.
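The arithmetic connecting UR, AF and the adjusted rate can be sketched directly; note the study computes AF per subpopulation, whereas this back-of-envelope version applies the overall UR, so it only approximately reproduces the reported 7.67:

```python
def underreporting_rate(reported, reference):
    """UR: share of reference (HRS) deaths missing from the registry."""
    return 1.0 - reported / reference

def adjustment_factor(ur):
    """AF scales reported counts back up to the reference total:
    AF = 1 / (1 - UR)."""
    return 1.0 / (1.0 - ur)

ur = 0.215                   # overall UR from the study
af = adjustment_factor(ur)   # ~1.27, matching the reported average AF
adjusted = 6.03 * af         # crude rate 6.03 per 1000 -> ~7.68 per 1000
```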

]]>
<![CDATA[Evidence of cyclical light/dark-regulated expression of freezing tolerance in young winter wheat plants]]> https://www.researchpad.co/article/5b498fa8463d7e0897c6e01c

The ability of winter wheat (Triticum aestivum L.) plants to develop freezing tolerance through cold acclimation is a complex trait that responds to many environmental cues including day length and temperature. A large part of the freezing tolerance is conditioned by the C-repeat binding factor (CBF) gene regulon. We investigated whether the level of freezing tolerance of 12 winter wheat lines varied throughout the day and night in plants grown under a constant low temperature and a 12-hour photoperiod. Freezing tolerance was significantly greater (P<0.0001) when exposure to subfreezing temperatures began at the midpoint of the light period, or the midpoint of the dark period, compared to the end of either period, with an average of 21.3% improvement in survival. Thus, freezing survival was related to the photoperiod, but cycled from low, to high, to low within each 12-hour light period and within each 12-hour dark period, indicating ultradian cyclic variation of freezing tolerance. Quantitative real-time PCR analysis of expression levels of CBF genes 14 and 15 indicated that expression of these two genes also varied cyclically, but essentially 180° out of phase with each other. Proton nuclear magnetic resonance (1H-NMR) analysis showed that the chemical composition of the wheat plants' cellular fluid varied diurnally, with consistent separation of the light and dark phases of growth. A compound identified as glutamine was consistently found in greater concentration in a strongly freezing-tolerant wheat line, compared to moderately and poorly freezing-tolerant lines. The glutamine also varied in ultradian fashion in the freezing-tolerant wheat line, consistent with the ultradian variation in freezing tolerance, but did not vary in the less-tolerant lines.
These results suggest at least two distinct signaling pathways, one conditioning freezing tolerance in the light, and one conditioning freezing tolerance in the dark; both are at least partially under the control of the CBF regulon.

]]>