ResearchPad - methodology https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks <![CDATA[High-throughput method for detection and quantification of lesions on leaf scale based on trypan blue staining and digital image analysis]]> https://www.researchpad.co/article/elastic_article_12715 Field-grown leafy vegetables can be damaged by biotic and abiotic factors, or mechanically damaged by farming practices. Available methods to evaluate leaf tissue damage mainly rely on colour differentiation between healthy and damaged tissues. Alternatively, sophisticated equipment such as microscopy and hyperspectral cameras can be employed. Depending on the causal factor, colour change in the wounded area is not always induced and, by the time symptoms become visible, a plant can already be severely affected. To accurately detect and quantify damage on leaf scale, including microlesions, reliable differentiation between healthy and damaged tissue is essential. We stained whole leaves with trypan blue dye, which traverses compromised cell membranes but is not absorbed in viable cells, followed by automated quantification of damage on leaf scale. Results: We present a robust, fast and sensitive method for leaf-scale visualisation, accurate automated extraction and measurement of damaged area on leaves of leafy vegetables. The image analysis pipeline we developed automatically identifies leaf area and individual stained (lesion) areas down to cell level. As proof of principle, we tested the methodology for damage detection and quantification on two field-grown leafy vegetable species, spinach and Swiss chard. Conclusions: Our novel lesion quantification method can be used for detection of large (macro) or single-cell (micro) lesions on leaf scale, enabling quantification of lesions at any stage and without requiring symptoms to be in the visible spectrum. Quantifying the wounded area on leaf scale is necessary for generating prediction models for economic losses and produce shelf-life. In addition, risk assessments are based on accurate prediction of the relationship between leaf damage and infection rates by opportunistic pathogens, and our method helps determine the severity of leaf damage at fine resolution. ]]> <![CDATA[Transfer posterior error probability estimation for peptide identification]]> https://www.researchpad.co/article/elastic_article_12116 In shotgun proteomics, database searching of tandem mass spectra results in a great number of peptide-spectrum matches (PSMs), many of which are false positives. Quality control of PSMs is a multiple hypothesis testing problem, and the false discovery rate (FDR) or the posterior error probability (PEP) is the commonly used statistical confidence measure. PEP, also called local FDR, can evaluate the confidence of individual PSMs and thus is more desirable than FDR, which evaluates the global confidence of a collection of PSMs. Estimation of PEP can be achieved by decomposing the null and alternative distributions of PSM scores as long as the given data are sufficient. However, in many proteomic studies, only a group (subset) of PSMs, e.g. those with specific post-translational modifications, is of interest. The group can be very small, making direct PEP estimation from the group data inaccurate, especially in the high-score region where the score threshold is set.
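For orientation, the direct decomposition mentioned above can be sketched as fitting a two-component mixture to PSM scores; this minimal illustration is not the authors' transfer-PEP algorithm, and the score distributions and null proportion below are simulated assumptions.

```python
# Minimal illustration (not the transfer-PEP algorithm): estimate PEP (local FDR)
# for PSM scores by decomposing them into null and alternative components.
# The simulated score distributions below are assumptions for the example only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
null_scores = rng.normal(loc=20.0, scale=5.0, size=8000)   # incorrect PSMs
alt_scores = rng.normal(loc=40.0, scale=8.0, size=2000)    # correct PSMs
scores = np.concatenate([null_scores, alt_scores])

# In practice the null is often estimated from decoy matches; here the simulation
# provides the labels, so both components are fitted directly.
pi0 = len(null_scores) / len(scores)                        # null proportion
f0 = norm(null_scores.mean(), null_scores.std())            # null density
f1 = norm(alt_scores.mean(), alt_scores.std())              # alternative density

def pep(score):
    """Posterior error probability: P(PSM is incorrect | score)."""
    num = pi0 * f0.pdf(score)
    return num / (num + (1 - pi0) * f1.pdf(score))

for s in (25, 35, 45):
    print(f"score={s}: PEP={pep(s):.3f}")
```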
Using the whole set of PSMs to estimate the group PEP is not appropriate either, because the null and/or alternative distributions of the group can be very different from those of the combined scores. Results: The transfer PEP algorithm is proposed to more accurately estimate the PEPs of peptide identifications in small groups. Transfer PEP derives the group null distribution through its empirical relationship with the combined null distribution, and estimates the group alternative distribution, as well as the null proportion, using an iterative semi-parametric method. Validated on both simulated data and real proteomic data, transfer PEP showed remarkably higher accuracy than the direct combined and separate PEP estimation methods. Conclusions: We presented a novel approach to group PEP estimation for small groups and implemented it for the peptide identification problem in proteomics. The methodology of the approach is in principle applicable to small-group PEP estimation problems in other fields. ]]> <![CDATA[An automated aquatic rack system for rearing marine invertebrates]]> https://www.researchpad.co/article/elastic_article_12087 One hundred years ago, marine organisms were the dominant systems for the study of developmental biology. The challenges in rearing these organisms outside of a marine setting ultimately contributed to a shift towards work on a smaller number of so-called model systems. Those animals are typically non-marine organisms with advantages afforded by short life cycles, high fecundity, and relative ease of laboratory culture. However, a full understanding of biodiversity, evolution, and anthropogenic effects on biological systems requires a broader survey of development in the animal kingdom. To this day, marine organisms remain relatively understudied, particularly the members of the Lophotrochozoa (Spiralia), which include well over one third of the metazoan phyla (such as the annelids, mollusks and flatworms) and exhibit a tremendous diversity of body plans and developmental modes. To facilitate studies of this group, we have previously described the development and culture of one lophotrochozoan representative, the slipper snail Crepidula atrasolea, which is easy to rear in recirculating marine aquaria. Lab-based culture and rearing of larger populations of animals remain a general challenge for many marine organisms, particularly for inland laboratories. Results: Here, we describe the development of an automated marine aquatic rack system for the high-density culture of marine species, which is particularly well suited for rearing filter-feeding animals. Based on existing freshwater recirculating aquatic rack systems, our system is specific to the needs of marine organisms and incorporates robust filtration measures to eliminate wastes, reducing the need for regular water changes. In addition, this system incorporates sensors and associated equipment for automated assessment and adjustment of water quality. An automated feeding system permits precise delivery of liquid food (e.g., phytoplankton) throughout the day, mimicking real-life feeding conditions that contribute to increased growth rates and fecundity. Conclusion: This automated system makes laboratory culture of marine animals feasible for both large and small research groups, significantly reducing the time, labor, and overall costs needed to rear these organisms.
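As a rough illustration of the kind of scheduled dosing such a feeding system performs, the sketch below splits a daily ration into evenly spaced doses; the ration, dose count, and pump rate are invented numbers, not values from the paper.

```python
# Toy sketch of splitting a daily phytoplankton ration into timed doses, as an
# automated feeder might do; all numbers are illustrative assumptions.
from datetime import datetime, timedelta

daily_ration_ml = 480.0      # total liquid feed per tank per day (assumed)
doses_per_day = 12           # evenly spaced feedings (assumed)
pump_rate_ml_per_s = 2.0     # delivery rate of the dosing pump (assumed)

dose_ml = daily_ration_ml / doses_per_day
interval = timedelta(hours=24 / doses_per_day)
start = datetime(2020, 1, 1, 0, 0)

schedule = [(start + i * interval, dose_ml, dose_ml / pump_rate_ml_per_s)
            for i in range(doses_per_day)]
for t, ml, secs in schedule:
    print(f"{t:%H:%M}  dispense {ml:.0f} mL over {secs:.0f} s")
```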
]]> <![CDATA[Detecting PCOS susceptibility loci from genome-wide association studies via iterative trend correlation based feature screening]]> https://www.researchpad.co/article/elastic_article_12077 Feature screening plays a critical role in handling ultrahigh-dimensional data analyses when the number of features exponentially exceeds the number of observations. It is increasingly common in biomedical research to have a case-control (binary) response and extremely large-scale categorical features. However, approaches that consider such data types are limited in the extant literature. In this article, we propose a new feature screening approach based on iterative trend correlation (ITC-SIS, for short) to detect important susceptibility loci that are associated with polycystic ovary syndrome (PCOS) affection status by screening 731,442 SNP features collected from genome-wide association studies. Results: We prove that the trend correlation based screening approach satisfies the theoretical strong screening consistency property under a set of reasonable conditions, which provides appealing theoretical support for its strong performance. We demonstrate that the finite sample performance of ITC-SIS is accurate and fast through various simulation designs. Conclusion: ITC-SIS serves as a good alternative method to detect disease susceptibility loci for clinical genomic data. ]]> <![CDATA[Parametric mapping of cellular morphology in plant tissue sections by gray level granulometry]]> https://www.researchpad.co/article/elastic_article_11993 The cellular morphology of plant organs is strongly related to other physical properties such as shape, size, growth, mechanical properties or chemical composition. Cell morphology often varies depending on the type of tissue, or on the distance to a specific tissue. A common challenge in quantitative plant histology is to quantify not only the cellular morphology, but also its variations within the image or the organ. Image texture analysis is a fundamental tool in many areas of image analysis and has proven efficient for plant histology, but only at the scale of the whole image. Results: This work presents a method that generates a parametric mapping of cellular morphology within images of plant tissues. It is based on gray level granulometry from mathematical morphology for extracting image texture features, and on a centroidal Voronoi diagram for generating a partition of the image. The resulting granulometric curves can be interpreted either through multivariate data analysis or by using summary features corresponding to the local average cell size. The resulting parametric maps describe the variations of cellular morphology within the organ. Conclusions: We propose a methodology for the quantification of cellular morphology and of its variations within images of tissue sections. The results should help in understanding how cellular morphology is related to genotypic and/or environmental variations, and clarify the relationships between cellular morphology and the chemical composition of cell walls. ]]> <![CDATA[Power analysis for RNA-Seq differential expression studies using generalized linear mixed effects models]]> https://www.researchpad.co/article/elastic_article_9713 Power analysis has become an essential step in the experimental design of current biomedical research. Complex designs allowing diverse correlation structures are commonly used in RNA-Seq experiments.
However, the field currently lacks statistical methods to calculate sample size and estimate power for RNA-Seq differential expression studies using such designs. To fill this gap, simulation-based methods have a great advantage in providing numerical solutions, since theoretical distributions of test statistics are typically unavailable for such designs. Results: In this paper, we propose a novel simulation-based procedure for power estimation of differential expression employing generalized linear mixed effects models for correlated expression data. We also propose a new procedure for power estimation of differential expression using a bivariate negative binomial distribution for paired designs. We compare the performance of both the likelihood ratio test and the Wald test under a variety of simulation scenarios with the proposed procedures. The simulated distribution was used to estimate the null distribution of test statistics in order to achieve the desired false positive control and was compared to the asymptotic chi-square distribution. In addition, we applied the procedure for paired designs to the TCGA breast cancer data set. Conclusions: In summary, we provide a framework for power estimation of RNA-Seq differential expression under complex experimental designs. Simulation results demonstrate that both of the proposed procedures properly control the false positive rate at the nominal level. ]]> <![CDATA[Graphing and reporting heterogeneous treatment effects through reference classes]]> https://www.researchpad.co/article/elastic_article_9625 Exploration and modelling of heterogeneous treatment effects as a function of baseline covariates is an important aspect of precision medicine in randomised controlled trials (RCTs). Randomisation generally guarantees the internal validity of an RCT, but heterogeneity in treatment effect can reduce external validity. Estimation of heterogeneous treatment effects is usually done via a predictive model for individual outcomes, in which one searches for interactions between treatment allocation and important patient baseline covariates. However, such models are prone to overfitting and multiple testing, and typically demand a transformation of the outcome measurement, for example, from the absolute risk in the original RCT to the log-odds of risk in the predictive model. Methods: We show how reference classes derived from baseline covariates can be used to explore heterogeneous treatment effects via a two-stage approach. We first estimate a risk score that captures, on a single dimension, some of the heterogeneity in outcomes of the trial population. Heterogeneity in the treatment effect can then be explored via reweighting schemes along this axis of variation. This two-stage approach bypasses the search for interactions with multiple covariates, thus protecting against multiple testing. It also allows for exploration of heterogeneous treatment effects on the original outcome scale of the RCT.
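A minimal sketch of this two-stage idea follows: a baseline risk model is fitted, and the absolute treatment effect is then examined across strata of the resulting risk score. The data, model, and quartile stratification are assumptions for illustration, not the authors' implementation (which uses reweighting schemes rather than simple strata).

```python
# Minimal sketch of the two-stage idea: (1) model baseline risk from covariates,
# (2) examine the absolute treatment effect across strata of that risk score.
# Data, model, and stratification choices are assumptions for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4000
x = rng.normal(size=(n, 3))                          # baseline covariates
treat = rng.integers(0, 2, size=n)                   # randomised allocation
base_logit = -2.0 + x @ np.array([0.8, 0.5, -0.4])
effect = -0.5 * (1 + 0.5 * x[:, 0])                  # effect varies with risk
p = 1 / (1 + np.exp(-(base_logit + treat * effect)))
y = rng.binomial(1, p)                               # binary outcome (e.g. death)

# Stage 1: baseline risk score (here fitted in the control arm only).
risk_model = LogisticRegression().fit(x[treat == 0], y[treat == 0])
risk = risk_model.predict_proba(x)[:, 1]

# Stage 2: absolute risk difference within quartiles of the risk score.
df = pd.DataFrame({"y": y, "treat": treat, "stratum": pd.qcut(risk, 4, labels=False)})
for s, grp in df.groupby("stratum"):
    rd = grp.loc[grp.treat == 1, "y"].mean() - grp.loc[grp.treat == 0, "y"].mean()
    print(f"risk quartile {s}: absolute risk difference = {rd:+.3f}")
```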
This approach would typically be applied to multivariable models of baseline risk to assess the stability of average treatment effects with respect to the distribution of risk in the population studied. Case study: We illustrate this approach using the single largest randomised treatment trial in severe falciparum malaria and demonstrate how the estimated treatment effect, in terms of absolute mortality risk reduction, increases considerably in higher risk strata. Conclusions: ‘Local’ and ‘tilting’ reweighting schemes based on ranking patients by baseline risk can be used as a general approach for exploring, graphing and reporting heterogeneity of treatment effect in RCTs. Trial registration: ISRCTN clinical trials registry: ISRCTN50258054. Prospectively registered on 22 July 2005. ]]> <![CDATA[The design and statistical aspects of VIETNARMS: a strategic post-licensing trial of multiple oral direct-acting antiviral hepatitis C treatment strategies in Vietnam]]> https://www.researchpad.co/article/elastic_article_9415 Eliminating hepatitis C is hampered by the costs of direct-acting antiviral treatment and the need to treat hard-to-reach populations. Access could be widened by shortening or simplifying treatment, but limited research means it is unclear which approaches could achieve sufficiently high cure rates to be acceptable. We present the statistical aspects of a multi-arm trial designed to test multiple strategies simultaneously, with a monitoring mechanism to detect and quickly stop individual randomly assigned groups with unacceptably low cure rates. Methods: The VIETNARMS trial will factorially randomly assign patients to two drug regimens, three treatment-shortening strategies or control, and adjunctive ribavirin or no adjunctive ribavirin with the shortening strategies (14 randomly assigned groups). We will use Bayesian monitoring at interim analyses to detect and stop recruitment into unsuccessful strategies, defined by more than 0.95 posterior probability that the true cure rate is less than 90% for the individual randomly assigned group (non-comparative). Final comparisons will be non-inferiority for regimens (margin 5%) and strategies (margin 10%) and superiority for adjunctive ribavirin. Here, we tested the operating characteristics of the stopping guideline for individual randomly assigned groups, planned interim analysis timings and explored power at the final analysis. Results: A beta(4.5, 0.5) prior for the true cure rate produces less than 0.05 probability of incorrectly stopping an individual randomly assigned group with a true cure rate of more than 90%. Groups with very low cure rates (<60%) are very likely (>0.9 probability) to stop after about 25% of patients are recruited. Groups with moderately low cure rates (80%) are likely to stop (0.7 probability) before overall recruitment finishes. Interim analyses 7, 10, 13 and 18 months after recruitment commences provide good probabilities of stopping inferior individual randomly assigned groups. For an overall true cure rate of 95%, power is more than 90% to confirm non-inferiority in the regimen and strategy comparisons, regardless of the control cure rate, and to detect a 5% absolute difference in the ribavirin comparison. Conclusions: The operating characteristics of the stopping guideline are appropriate, and interim analyses can be timed to detect individual randomly assigned groups that are highly likely to have suboptimal performance at various stages.
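To make the stopping rule concrete, the sketch below evaluates the Beta-binomial posterior check described in the Methods; the prior and thresholds are as stated above, while the interim cure/failure counts are invented for illustration.

```python
# Sketch of the non-comparative stopping check: with a Beta(4.5, 0.5) prior on the
# true cure rate, stop a group if P(cure rate < 0.90 | data) > 0.95.
# The interim cure/failure counts are invented for illustration.
from scipy.stats import beta

prior_a, prior_b = 4.5, 0.5
threshold_rate, stop_prob = 0.90, 0.95

def should_stop(cures, failures):
    post = beta(prior_a + cures, prior_b + failures)   # conjugate Beta posterior
    p_below = post.cdf(threshold_rate)                 # P(true cure rate < 90%)
    return p_below, p_below > stop_prob

for cures, failures in [(18, 2), (22, 8), (12, 8)]:
    p, stop = should_stop(cures, failures)
    print(f"{cures} cures / {failures} failures: P(<90%) = {p:.3f}, stop = {stop}")
```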
Therefore, our design is suitable for evaluating treatment-shortening or -simplifying strategies. Trial registration: ISRCTN registry: ISRCTN61522291. Registered on 4 October 2019. ]]> <![CDATA[Visualization of spatial gene expression in plants by modified RNAscope fluorescent in situ hybridization]]> https://www.researchpad.co/article/elastic_article_9258 In situ analysis of biomarkers such as DNA, RNA and proteins is important for research and diagnostic purposes. At the RNA level, plant gene expression studies rely on qPCR, RNAseq and probe-based in situ hybridization (ISH). However, in ISH experiments the poor stability of RNA and RNA-based probes commonly results in poor detection or poor reproducibility. Recently, the development and availability of the RNAscope RNA-ISH method addressed these problems through novel signal amplification and background suppression. This method is capable of simultaneous detection of multiple target RNAs down to the single-molecule level in individual cells, allowing researchers to study spatio-temporal patterning of gene expression. However, this method has not been optimized for plant-specific gene expression studies, and has therefore been poorly utilized, even though it would allow for fluorescent multiplex detection. Here we provide a step-by-step method for sample collection and pretreatment optimization to perform the RNAscope assay in leaf tissues of the model monocot plant barley. We show the spatial distribution pattern of HvGAPDH and of the low-expressed disease resistance gene Rpg1 in leaf tissue sections of barley and discuss precautions that should be followed during image analysis. Results: We show the ubiquitous expression pattern of HvGAPDH and the expression pattern of Rpg1, predominantly in subsidiary cells associated with stomatal guard cells, in barley leaf sections, and describe the improved RNAscope methodology suitable for plant tissues using confocal laser microscopy. By addressing problems in sample collection and incorporating additional sample baking steps, we significantly reduced section detachment and experiment failure. Further, by reducing the time of protease treatment, we minimized sample disintegration due to over-digestion of barley tissues. Conclusions: RNAscope multiplex fluorescent RNA-ISH detection is well described and adapted for animal tissue samples; however, due to morphological and structural differences in plant tissues, the standard protocol is insufficient and requires optimization. Utilizing barley-specific HvGAPDH and Rpg1 RNA probes, we report an optimized method that can be used for RNAscope detection to determine the spatial expression and semi-quantification of target RNAs. This optimized method will be immensely useful in other plant species, such as the widely utilized Arabidopsis. ]]> <![CDATA[PretiMeth: precise prediction models for DNA methylation based on single methylation mark]]> https://www.researchpad.co/article/elastic_article_9105 The computational prediction of methylation levels at single-CpG resolution is a promising way to explore the methylation levels of CpGs not covered by existing array techniques, especially for the 450K BeadChip array data, of which huge reserves exist. General prediction models concentrate on improving the overall prediction accuracy for the bulk of CpG loci while neglecting whether each locus is precisely predicted.
This leads to limited application of the prediction results, especially when performing downstream analyses with high precision requirements. Results: Here we report PretiMeth, a method for constructing precise prediction models for each single CpG locus. PretiMeth uses a logistic regression algorithm to build a prediction model for each locus of interest. Only the one DNA methylation feature that shares the most similar methylation pattern with the CpG locus to be predicted is applied in the model. We found that PretiMeth outperformed other algorithms in prediction accuracy and remained robust across platforms and cell types. Furthermore, PretiMeth was applied to The Cancer Genome Atlas (TCGA) data; the intensive analysis based on the precise prediction results showed that several CpG loci and genes (differentially methylated between the tumor and normal samples) are worthy of further biological validation. Conclusion: The precise prediction of single CpG loci is important for both methylation array data expansion and downstream analysis of the prediction results. PretiMeth achieved precise modeling for each CpG locus by using only one significant feature, which also suggests that our precise prediction models could probably be used as a reference in probe set design when DNA methylation BeadChips are updated. PretiMeth is provided as an open source tool via https://github.com/JxTang-bioinformatics/PretiMeth. ]]> <![CDATA[CSN: unsupervised approach for inferring biological networks based on the genome alone]]> https://www.researchpad.co/article/elastic_article_8972 Most organisms cannot be cultivated, as they live in unique ecological conditions that cannot be mimicked in the lab. Understanding the functionality of those organisms’ genes and their interactions by performing large-scale measurements of transcription levels, protein-protein interactions or metabolism is extremely difficult and, in some cases, impossible. Thus, efficient algorithms for deciphering genome functionality based only on genomic sequences, with no other experimental measurements, are needed. Results: In this study, we describe a novel algorithm that infers gene networks, which we name Common Substring Networks (CSNs). The algorithm enables inferring novel regulatory relations among genes based only on the genomic sequence of a given organism and partial homolog/ortholog-based functional annotation. It can specifically infer the functional annotation of genes with unknown homology. This approach is based on the assumption that related genes, not necessarily homologs, tend to share sub-sequences, which may be related to common regulatory mechanisms, similar functionality of encoded proteins, common evolutionary history, and more. We demonstrate that CSNs, which are based on the S. cerevisiae and E. coli genomes, have properties similar to ‘traditional’ biological networks inferred from experiments. Highly expressed genes tend to have higher-degree nodes in the CSN, genes with similar protein functionality tend to be closer, and the CSN graph exhibits a power-law degree distribution. Also, we show how the CSN can be used for predicting gene interactions and functions. Conclusions: The reported results suggest that ‘silent’ code inside the transcript can help to predict central features of biological networks and gene function.
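As a toy illustration of the underlying idea (linking genes that share long subsequences), the sketch below connects genes whose sequences share a sufficiently long common substring; the sequences and length threshold are invented, and the real CSN algorithm is considerably more involved.

```python
# Toy illustration of the underlying idea: connect genes that share a sufficiently
# long common substring. The real CSN algorithm is more involved; the sequences
# and length threshold here are invented for the example.
from difflib import SequenceMatcher
from itertools import combinations

genes = {
    "geneA": "ATGGCGTTTAACGGTTCAGATCTGAAA",
    "geneB": "TTTAACGGTTCAGATCCCCAAAGGG",
    "geneC": "ATGCCCGGGAAATTTCCCGGGTTT",
}
min_shared = 12  # minimum shared-substring length to draw an edge (assumed)

edges = []
for (na, sa), (nb, sb) in combinations(genes.items(), 2):
    match = SequenceMatcher(None, sa, sb).find_longest_match(0, len(sa), 0, len(sb))
    if match.size >= min_shared:
        edges.append((na, nb, match.size))

print("edges:", edges)  # geneA and geneB share a long substring, so they are linked
```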
This approach can help researchers to understand the genomes of novel microorganisms, analyze metagenomic data, and decipher new gene functions. Availability: Our MATLAB implementation of CSN is available at https://www.cs.tau.ac.il/~tamirtul/CSN-Autogen ]]> <![CDATA[Development of a surgical procedure for removal of a placentome from a pregnant ewe during gestation]]> https://www.researchpad.co/article/elastic_article_8495 In recent decades, there has been a growing interest in the impact of insults during pregnancy on postnatal health and disease. It is known that changes in placental development can impact fetal growth and subsequent susceptibility to adult-onset diseases; however, a method to collect sufficient placental tissue for both histological and gene expression analyses during gestation without compromising the pregnancy has not been described. The ewe is an established biomedical model for the study of fetal development. Due to its cotyledonary placental type, the sheep has potential for surgical removal of materno-fetal exchange tissues, i.e., placentomes. A novel surgical procedure was developed in well-fed control ewes to excise a single placentome at mid-gestation. Results: A follow-up study was performed in a cohort of nutrient-restricted ewes to investigate rapid placental changes in response to undernutrition. The surgery averaged 19 min, and there were no viability differences between control and sham ewes. Nutrient-restricted fetuses were smaller than controls (4.7 ± 0.1 kg vs. 5.6 ± 0.2 kg; P < 0.05), with greater dam weight loss (− 32.4 ± 1.3 kg vs. 14.2 ± 2.2 kg; P < 0.01), and smaller placentomes at necropsy (5.7 ± 0.3 g vs. 7.2 ± 0.9 g; P < 0.05). Weight of sampled placentomes and placentome numbers did not differ. Conclusions: With this technique, gestational studies in the sheep model will provide insight into the onset and complexity of changes in gene expression in placentomes resulting from undernutrition (as described in our study), overnutrition, alcohol or substance abuse, and environmental or disease factors of relevance and concern regarding the reproductive health and developmental origins of health and disease in humans and in animals. ]]> <![CDATA[Water–fat separation in spiral magnetic resonance fingerprinting for high temporal resolution tissue relaxation time quantification in muscle]]> https://www.researchpad.co/article/elastic_article_7049 Purpose: To minimize the known biases introduced by fat in rapid T1 and T2 quantification in muscle using a single‐run magnetic resonance fingerprinting (MRF) water–fat separation sequence. Methods: The single‐run MRF acquisition uses an alternating in‐phase/out‐of‐phase TE pattern to achieve water–fat separation based on a 2‐point DIXON method. Conjugate phase reconstruction and fat deblurring were applied to correct for B0 inhomogeneities and chemical shift blurring. Water and fat signals were matched to the on‐resonance MRF dictionary. The method was first tested in a multicompartment phantom. To test whether the approach is capable of measuring small in vivo dynamic changes in relaxation times, experiments were run in 9 healthy volunteers; parameter values were compared with and without water–fat separation during muscle recovery after plantar flexion exercise. Results: Phantom results show the robustness of the water–fat resolving MRF approach to undersampling.
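For context, a minimal sketch of the basic 2-point DIXON separation that the in-phase/out-of-phase TE scheme builds on is shown below; it omits the conjugate phase (B0) correction and the MRF dictionary matching, and the voxel values are synthetic.

```python
# Sketch of basic 2-point DIXON separation underlying the in-phase/out-of-phase
# TE scheme: water = (IP + OP) / 2, fat = (IP - OP) / 2. This ignores the conjugate
# phase (B0) correction and MRF dictionary matching; the voxel values are synthetic.
water_true, fat_true = 0.8, 0.3        # assumed voxel signal fractions
in_phase = water_true + fat_true        # fat aligned with water at this TE
out_phase = water_true - fat_true       # fat opposed to water at this TE

water_est = 0.5 * (in_phase + out_phase)
fat_est = 0.5 * (in_phase - out_phase)
print(f"water = {water_est:.2f}, fat = {fat_est:.2f}")
```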
Parameter maps in volunteers show a significant (P < .01) increase in T1 (105 ± 94 ms) and decrease in T2 (14 ± 6 ms) when using water–fat‐separated MRF, suggesting improved parameter quantification by reducing the well‐known biases introduced by fat. Exercise results showed smooth T1 and T2 recovery curves. Conclusion: Water–fat separation using conjugate phase reconstruction is possible within a single‐run MRF scan. This technique can be used to rapidly map relaxation times in studies requiring dynamic scanning, in which the presence of fat is problematic. ]]> <![CDATA[Assessment and correction of macroscopic field variations in 2D spoiled gradient‐echo sequences]]> https://www.researchpad.co/article/elastic_article_6835 Purpose: To model and correct the dephasing effects in the gradient‐echo signal for arbitrary RF excitation pulses with large flip angles in the presence of macroscopic field variations. Methods: The dephasing of the spoiled 2D gradient‐echo signal was modeled using a numerical solution of the Bloch equations to calculate the magnitude and phase of the transverse magnetization across the slice profile. Additionally, regional variations of the transmit RF field and slice profile scaling due to macroscopic field gradients were included. Simulations, phantom, and in vivo measurements at 3 T were conducted for R2∗ and myelin water fraction (MWF) mapping. Results: The influence of macroscopic field gradients on R2∗ and myelin water fraction estimation can be substantially reduced by applying the proposed model. Moreover, it was shown that the dephasing over time for flip angles of 60° or greater also depends on the polarity of the slice‐selection gradient because of phase variation along the slice profile. Conclusion: Substantial improvements in R2∗ accuracy and myelin water fraction mapping coverage can be achieved using the proposed model if higher flip angles are required. In this context, we demonstrated that the phase along the slice profile and the polarity of the slice‐selection gradient are essential for proper modeling of the gradient‐echo signal in the presence of macroscopic field variations. ]]> <![CDATA[Modeling an equivalent b‐value in diffusion‐weighted steady‐state free precession]]> https://www.researchpad.co/article/elastic_article_6791 Purpose: Diffusion‐weighted steady‐state free precession (DW‐SSFP) is shown to provide a means to probe non‐Gaussian diffusion through manipulation of the flip angle. A framework is presented to define an effective b‐value in DW‐SSFP. Theory: The DW‐SSFP signal is a summation of coherence pathways with different b‐values. The relative contribution of each pathway is dictated by the flip angle. This leads to an apparent diffusion coefficient (ADC) estimate that depends on the flip angle in non‐Gaussian diffusion regimes. By acquiring DW‐SSFP data at multiple flip angles and modeling the variation in ADC for a given form of non‐Gaussianity, the ADC can be estimated at a well‐defined effective b‐value. Methods: A gamma distribution is used to model non‐Gaussian diffusion, embedded in the Buxton signal model for DW‐SSFP. Monte‐Carlo simulations of non‐Gaussian diffusion in DW‐SSFP and diffusion‐weighted spin‐echo sequences are used to verify the proposed framework. Dependence of ADC on flip angle in DW‐SSFP is verified with experimental measurements in a whole, human postmortem brain. Results: Monte‐Carlo simulations reveal excellent agreement between ADCs estimated with diffusion‐weighted spin‐echo and the proposed framework.
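The sketch below illustrates why the apparent ADC becomes b-value dependent when diffusivities follow a gamma distribution, using the generic closed-form signal for a gamma mixture rather than the DW-SSFP (Buxton) signal model; the parameter values are illustrative, chosen to be of the same order as the estimates reported for the postmortem corpus callosum.

```python
# Why the ADC becomes b-value dependent under a gamma distribution of diffusivities:
# for D ~ Gamma(shape=k, scale=theta), E[exp(-b*D)] = (1 + b*theta)**(-k), so the
# apparent ADC = -ln(S/S0)/b varies with b. This is a generic illustration, not the
# DW-SSFP (Buxton) signal model; parameter values are assumed.
import numpy as np

mean_d, std_d = 1.5e-4, 2.1e-4             # mm^2/s, roughly the reported scale
k = (mean_d / std_d) ** 2                   # gamma shape
theta = std_d ** 2 / mean_d                 # gamma scale

for b in (500.0, 1000.0, 3000.0, 8000.0):  # s/mm^2
    signal = (1.0 + b * theta) ** (-k)      # E[exp(-b*D)] for gamma-distributed D
    adc = -np.log(signal) / b
    print(f"b = {b:5.0f} s/mm^2: apparent ADC = {adc:.2e} mm^2/s")
```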
Experimental ADC estimates vary as a function of flip angle over the corpus callosum of the postmortem brain, estimating the mean and standard deviation of the gamma distribution as 1.50·10⁻⁴ mm²/s and 2.10·10⁻⁴ mm²/s. Conclusion: DW‐SSFP can be used to investigate non‐Gaussian diffusion by varying the flip angle. By fitting a model of non‐Gaussian diffusion, the ADC in DW‐SSFP can be estimated at an effective b‐value, comparable to more conventional diffusion sequences. ]]> <![CDATA[Improving PCASL at ultra‐high field using a VERSE‐guided parallel transmission strategy]]> https://www.researchpad.co/article/elastic_article_6774 Purpose: To improve the labeling efficiency of pseudo‐continuous arterial spin labeling (PCASL) at 7T using parallel transmission (pTx). Methods: Five healthy subjects were scanned on an 8‐channel‐transmit 7T human MRI scanner. Time‐of‐flight (TOF) angiography was acquired to identify regions of interest (ROIs) around the 4 major feeding arteries to the brain, and B1+ and B0 maps were acquired in the labeling plane for tagging pulse design. Complex weights of the labeling pulses for each of the 8 transmit channels were calculated to produce a homogeneous radiofrequency (RF)‐shimmed labeling across the ROIs. Variable‐Rate Selective Excitation (VERSE) pulses were also implemented as part of the labeling pulse train. Whole‐brain perfusion‐weighted images were acquired under conditions of RF shimming, VERSE with RF shimming, and standard circularly polarized (CP) mode. The same subjects were scanned on a 3T scanner for comparison. Results: In simulation, VERSE with RF shimming improved the flip‐angles across the ROIs in the labeling plane by 90% compared with CP mode. VERSE with RF shimming improved the temporal signal‐to‐noise ratio by 375% compared with CP mode, but did not outperform a matched 3T sequence with a matched flip‐angle. Conclusion: We have demonstrated improved PCASL tagging at 7T using VERSE with RF shimming on a commercial head coil under conservative SAR limits. However, improvements of 7T over 3T may require strategies with less conservative SAR restrictions. ]]> <![CDATA[Magnetization transfer and frequency distribution effects in the SSFP ellipse]]> https://www.researchpad.co/article/elastic_article_6701 Purpose: To demonstrate that quantitative magnetization transfer (qMT) parameters can be extracted from steady‐state free‐precession (SSFP) data with no external T1 map or banding artifacts. Methods: SSFP images with multiple MT weightings were acquired and qMT parameters fitted with a two‐stage elliptical signal model. Results: Monte Carlo simulations and data from a 3T scanner indicated that most qMT parameters could be recovered with reasonable accuracy. Systematic deviations from theory were observed in white matter, consistent with previous literature on frequency distribution effects. Conclusions: qMT parameters can be extracted from SSFP data alone, in a manner robust to banding artifacts, despite several confounds. ]]> <![CDATA[A pragmatic method for costing implementation strategies using time-driven activity-based costing]]> https://www.researchpad.co/article/N496ec226-da7e-421d-b28f-981b6ca98375 Implementation strategies increase the adoption of evidence-based practices, but they require resources. Although information about implementation costs is critical for decision-makers with budget constraints, cost information is not typically reported in the literature.
This is at least partly due to a need for clearly defined, standardized costing methods that can be integrated into implementation effectiveness evaluation efforts. Methods: We present a pragmatic approach to systematically estimating the detailed, specific resource use and costs of implementation strategies that combines time-driven activity-based costing (TDABC), a business accounting method based on process mapping and known for its practicality, with a leading implementation science framework developed by Proctor and colleagues, which guides the specification and reporting of implementation strategies. We illustrate the application of this method using a case study with synthetic data. Results: This step-by-step method produces a clear map of the implementation process by specifying the names, actions, actors, and temporality of each implementation strategy; determining the frequency and duration of each action associated with individual strategies; and assigning a dollar value to the resources that each action consumes. The method provides transparent and granular cost estimation, allowing a cost comparison of different implementation strategies. The resulting data allow researchers and stakeholders to understand how specific components of an implementation strategy influence its overall cost. Conclusion: TDABC can serve as a pragmatic method for estimating resource use and costs associated with distinct implementation strategies and their individual components. Our use of the Proctor framework for the process mapping stage of TDABC provides a way to incorporate cost estimation into implementation evaluation and may reduce the burden associated with economic evaluations in implementation science. ]]> <![CDATA[A random forest based computational model for predicting novel lncRNA-disease associations]]> https://www.researchpad.co/article/Neaf3fca6-41a2-4cac-978b-5db6a45f1097

Background

Accumulated evidence shows that the abnormal regulation of long non-coding RNA (lncRNA) is associated with various human diseases. Accurately identifying disease-associated lncRNAs is helpful for studying the mechanisms of lncRNAs in diseases and exploring new therapies for diseases. Many lncRNA-disease association (LDA) prediction models have been implemented by integrating multiple kinds of data resources. However, most of the existing models ignore the interference of noisy and redundant information among these data resources.

Results

To improve the ability of LDA prediction models, we implemented a random forest and feature selection based LDA prediction model (RFLDA for short). First, the RFLDA integrates the experiment-supported miRNA-disease associations (MDAs) and LDAs, the disease semantic similarity (DSS), the lncRNA functional similarity (LFS) and the lncRNA-miRNA interactions (LMI) as input features. Then, the RFLDA chooses the most useful features to train the prediction model via feature selection based on the random forest variable importance score, which takes into account not only the effect of each individual feature on the prediction results but also the joint effects of multiple features. Finally, a random forest regression model is trained to score potential lncRNA-disease associations. With an area under the receiver operating characteristic curve (AUC) of 0.976 and an area under the precision-recall curve (AUPR) of 0.779 under 5-fold cross-validation, the performance of the RFLDA is better than that of several state-of-the-art LDA prediction models. Moreover, case studies on three cancers demonstrate that 43 of the 45 lncRNAs predicted by the RFLDA are validated by experimental data, and the other two predicted lncRNAs are supported by other LDA prediction models.
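A generic sketch of this workflow (random forest importance scoring, feature selection, retraining a regressor to score pairs) is shown below; it uses synthetic stand-in features and is not the authors' implementation.

```python
# Generic sketch of the RFLDA-style workflow: score candidate features with random
# forest variable importance, keep the most useful ones, and retrain a regressor
# to score lncRNA-disease pairs. Synthetic data stands in for the real similarity
# and interaction features; this is not the authors' implementation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n_pairs, n_features = 2000, 60
X = rng.normal(size=(n_pairs, n_features))           # stand-in feature matrix
y = (X[:, :5].sum(axis=1) + 0.3 * rng.normal(size=n_pairs) > 1.5).astype(float)

full_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importance = full_model.feature_importances_

keep = np.argsort(importance)[::-1][:10]              # retain the top-ranked features
selected_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:, keep], y)

scores = selected_model.predict(X[:, keep])            # association scores in [0, 1]
print("top features:", keep.tolist())
print("first five association scores:", np.round(scores[:5], 3))
```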

Conclusions

Cross-validation and case studies indicate that the RFLDA has excellent ability to identify potential disease-associated lncRNAs.

]]>
<![CDATA[Novel approach in whole genome mining and transcriptome analysis reveal conserved RiPPs in Trichoderma spp]]> https://www.researchpad.co/article/N07e4cbdc-b3ae-49a3-9bb6-563996407d27

Background

Ribosomally synthesized and post-translationally modified peptides (RiPPs) are a highly diverse group of secondary metabolites (SMs) of bacterial and fungal origin. While RiPPs have been intensively studied in bacteria, little is known about fungal RiPPs. In fungi, only six classes of RiPPs have been described. Current strategies for genome mining are based on these six known classes. However, the genes involved in the biosynthesis of these RiPPs are normally organized in biosynthetic gene clusters (BGCs) in fungi.

Results

Here we describe a comprehensive strategy to mine fungal genomes for RiPPs by combining and adapting existing tools (e.g., antiSMASH and RiPPMiner), followed by extensive manual curation based on conserved domain identification, (comparative) phylogenetic analysis, and RNA-Seq data. Deploying this strategy, we could successfully rediscover already known fungal RiPPs. Further, we analysed four fungal genomes from the genus Trichoderma. Using our unconventional mining approach, we were able to find novel potential RiPP BGCs in Trichoderma.
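A rough sketch of the curation logic is shown below: candidate gene IDs reported by two mining tools are intersected and then filtered by RNA-Seq expression support. The IDs, TPM values, and cutoff are invented, and the snippet is not a wrapper around antiSMASH or RiPPMiner; their outputs are modelled as plain ID sets.

```python
# Rough sketch of the curation logic: intersect candidate gene IDs reported by two
# mining tools, then keep candidates with RNA-Seq expression support. The tool
# outputs are modelled as plain ID sets and the TPM table is invented.
antismash_hits = {"TR_00123", "TR_00456", "TR_00789"}
rippminer_hits = {"TR_00456", "TR_00789", "TR_01011"}
tpm = {"TR_00123": 0.2, "TR_00456": 35.4, "TR_00789": 3.1, "TR_01011": 0.0}

min_tpm = 1.0  # assumed expression cutoff supporting that a cluster is transcribed
candidates = antismash_hits & rippminer_hits
supported = sorted(g for g in candidates if tpm.get(g, 0.0) >= min_tpm)
print("candidates from both tools:", sorted(candidates))
print("candidates with RNA-Seq support:", supported)
```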

Conclusion

We demonstrate that this unusual mining approach, using tools developed for bacteria, can be applied to fungi when carefully curated. Our study is the first report of the potential of Trichoderma to produce RiPPs; the detected clusters encode novel, uncharacterized RiPPs. The method described in our study will lead to further mining efforts in all subdivisions of the fungal kingdom.

]]>