ResearchPad - careers-in-research Default RSS Feed en-us © 2020 Newgen KnowledgeWorks

<![CDATA[Predicting the impact of patient and private provider behavior on diagnostic delay for pulmonary tuberculosis patients in India: A simulation modeling study]]>

India contributes more than a quarter of the 10 million global tuberculosis (TB) cases every year. Several studies capture the long, circuitous care pathways followed by TB patients until their diagnosis. However, these studies do not quantify the link between diagnostic delay and underlying patient and provider behavioral characteristics.
What did the researchers do and find?
We developed a quantitative simulation model to estimate the impact of the behavioral characteristics of patients and providers on diagnostic delay, and we estimated the parameters of this model using data from detailed interviews of 76 patients from Mumbai and 64 patients from Patna. We found that earlier test ordering by providers would yield a much larger reduction in diagnostic delay than increasing their diagnostic accuracy.
What do these findings mean?
Policy-makers and implementing agencies should encourage early test-ordering behavior by providers to reduce diagnostic delay and, consequently, disease transmission.

<![CDATA[What makes an effective grants peer reviewer? An exploratory study of the necessary skills]]>

This exploratory mixed-methods study describes the skills required to be an effective peer reviewer on review panels conducted for federal agencies that fund research, and examines how reviewer experience and the use of technology within such panels affect reviewer skill development. Two review panel formats are considered: in-person face-to-face and virtual video conference. Data were collected through interviews with seven program officers and five expert peer review panelists, and through surveys of 51 respondents.
Results include the skills reviewers consider necessary for effective review panel participation, their assessment of the relative importance of these skills, how those skills are learned, and how review format affects skill development and improvement. Results are discussed relative to the peer review literature and with consideration of the professional skills needed by successful scientists and peer reviewers.

<![CDATA[Beware of vested interests: Epistemic vigilance improves reasoning about scientific evidence (for some people)]]>

In public disputes, stakeholders sometimes misrepresent statistics or other types of scientific evidence to support their claims. This is problematic in part because citizens often have neither the motivation nor the cognitive skills to judge the meaning of statistics accurately and thus run the risk of being misinformed. This study reports an experiment investigating the conditions under which people become vigilant towards a source’s claim and therefore reason more carefully about the supporting evidence. Participants were presented with a claim made by either a vested-interest or a neutral source, together with statistical evidence that the source cited as supporting the claim. This statistical evidence actually contradicted the source’s claim, but it was presented as a contingency table, a format that is typically difficult for people to interpret correctly. When the source was a lobbyist arguing for his company’s product, people were better at interpreting the evidence than when the same source argued against the product. This was not the case for a different vested-interest source or for the neutral source. Further, while all sources were rated as less trustworthy when participants realized that the source had misrepresented the evidence, only for the lobbyist source was this seen as a deliberate attempt at deception. Implications for research on epistemic trust, source credibility effects, and science communication are discussed.
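The study's central manipulation hinges on contingency tables being easy to misread: a large raw cell count can point the opposite way from the conditional proportions that actually determine what the evidence says. A minimal sketch, with hypothetical numbers rather than the study's actual materials, of the comparison a correct reading requires:

```python
# Illustrative only: a hypothetical 2x2 contingency table of the kind the study
# describes. Whether the evidence supports a claim ("the product works") hinges
# on comparing conditional proportions, not raw cell counts.

def supports_claim(improved_with, not_improved_with,
                   improved_without, not_improved_without):
    """Return True if the outcome rate is higher WITH the product than without."""
    rate_with = improved_with / (improved_with + not_improved_with)
    rate_without = improved_without / (improved_without + not_improved_without)
    return rate_with > rate_without

# A table where the large top-left cell misleads: 160 of 200 improved with the
# product (80%), but 70 of 80 improved without it (87.5%) -- so the evidence
# actually contradicts the claim despite the impressive-looking 160.
print(supports_claim(160, 40, 70, 10))  # False
```

The intuitive but wrong reading focuses on the 160 "improved with product" cell; the correct reading divides each row by its total before comparing.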

<![CDATA[Will COVID-19 become the next neglected tropical disease?]]>

<![CDATA[Height of overburden fracture based on key strata theory in longwall face]]>

Among the three overburden zones in longwall coal mining (the caving zone, the fracture zone, and the continuous deformation zone), the continuous deformation zone is often assumed to be continuous and free of cracks, so continuum mechanics can be used to calculate the subsidence of the overburden strata. Longwall coal mining, however, can induce wide cracks at the surface and may therefore cause the continuous deformation zone to fracture. In this paper, whether cracks occur in the continuous deformation zone, the height of overburden fracture in a longwall face, and the subsidence and deformation of strata at different fracture penetration ratios were studied by means of physical simulation, theoretical analysis, and numerical simulation. The results show that: (1) A rock stratum starts to fracture once it has subsided by only tens of millimeters, and the height of fracture development reaches the full height of the working-face overburden. (2) With an increasing fracture penetration ratio, the subsidence of the key strata remains basically unchanged; the surface deformation range and the maximum compression deformation decrease, while the maximum horizontal movement and the maximum horizontal tensile deformation increase. Therefore, the subsidence of overburden strata that have fractured but not yet broken can be calculated with the continuum mechanics method.

<![CDATA[On the value of preprints: An early career researcher perspective]]>

Peer-reviewed journal publication is the main means for academic researchers in the life sciences to create a permanent public record of their work. These publications are also the de facto currency for career progress, with a strong link between journal brand recognition and perceived value. The current peer-review process can lead to long delays between submission and publication, with cycles of rejection, revision, and resubmission causing redundant peer review. This situation creates unique challenges for early career researchers (ECRs), who rely heavily on timely publication of their work to gain recognition for their efforts. Today, ECRs face a changing academic landscape, including the increased interdisciplinarity of life sciences research, expansion of the researcher population, and consequent shifts in employer and funding demands. The publication of preprints, publicly available scientific manuscripts posted on dedicated preprint servers prior to journal-managed peer review, can play a key role in addressing these ECR challenges. Preprinting benefits include rapid dissemination of academic work, open access, establishing priority or concurrence, receiving feedback, and facilitating collaborations. Although there is a growing appreciation for and adoption of preprints, a minority of all articles in life sciences and medicine are preprinted. The current low rate of preprint submissions in life sciences and ECR concerns regarding preprinting need to be addressed. We provide a perspective from an interdisciplinary group of ECRs on the value of preprints and advocate their wide adoption to advance knowledge and facilitate career development.

<![CDATA[DeephESC 2.0: Deep Generative Multi Adversarial Networks for improving the classification of hESC]]>

Human embryonic stem cells (hESC), derived from blastocysts, provide unique cellular models for numerous potential applications. They hold great promise for the treatment of diseases such as Parkinson’s, Huntington’s, and diabetes mellitus. hESC are a reliable developmental model for early embryonic growth because of their ability to divide indefinitely (pluripotency) and to differentiate, or functionally change, into any adult cell type. Their adaptation to toxicological studies is particularly attractive, as pluripotent stem cells can be used to model various stages of prenatal development. Automated detection and classification of human embryonic stem cells in videos is of great interest to biologists for quantified analysis of the various states of hESC in experimental work. Currently, video annotation is done by hand, a process that is very time consuming and exhausting. To address this problem, this paper introduces DeephESC 2.0, an automated machine learning approach consisting of two parts: (a) Generative Multi Adversarial Networks (GMAN) for generating synthetic images of hESC, and (b) a hierarchical classification system consisting of Convolutional Neural Networks (CNN) and Triplet CNNs to classify phase-contrast hESC images into six classes: cell clusters, debris, unattached cells, attached cells, dynamically blebbing cells, and apoptotically blebbing cells. The approach is entirely non-invasive and does not require any chemical treatment or staining of hESC. DeephESC 2.0 classifies hESC images with an accuracy of 93.23%, outperforming state-of-the-art approaches by at least 20%. Furthermore, DeephESC 2.0 can generate a large number of synthetic images for augmenting the dataset. Experimental results show that training DeephESC 2.0 exclusively on a large set of synthetic images improves the performance of the classifier on the original images from 93.23% to 94.46%.
This paper also evaluates the quality of the generated synthetic images using the Structural SIMilarity (SSIM) index, the Peak Signal-to-Noise Ratio (PSNR), and statistical p-value metrics, and compares them with state-of-the-art approaches for generating synthetic images. DeephESC 2.0 saves hundreds of hours of manual labor that would otherwise be spent manually or semi-manually annotating additional videos.
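SSIM and PSNR are standard, formula-defined image-similarity metrics, so they can be sketched compactly. The following is a minimal NumPy sketch of the textbook formulas, not the authors' evaluation code; production libraries compute SSIM over sliding local windows rather than the single global window used here.

```python
import numpy as np

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(a, b, max_val=255.0):
    """Global (single-window) SSIM; libraries average this over local windows."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # stabilizers
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255)
print(round(psnr(img, noisy), 1))       # higher dB = closer images
print(round(ssim_global(img, img), 3))  # identical images give 1.0
```

Higher PSNR and an SSIM approaching 1.0 both indicate that a synthetic image is statistically close to a real one, which is how such metrics are used to judge generated images.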

<![CDATA[Late-life mortality is underestimated because of data errors]]>

Knowledge of the true mortality trajectory at extreme old ages is important for biologists who test their theories of aging with demographic data. Studies using both simulation and direct age validation found that longevity records for ages 105 years and older are often incorrect and may lead to spurious mortality deceleration and mortality plateaus. After age 105 years, longevity claims should be considered extraordinary claims that require extraordinary evidence. Traditional methods of data cleaning and data quality control are simply not sufficient. New, stricter methodologies of data quality control need to be developed and tested. Until then, all mortality estimates for ages above 105 years should be treated with caution.

<![CDATA[Reliable novelty: New should not trump true]]>

Although a case can be made for rewarding scientists for risky, novel science rather than for incremental, reliable science, novelty without reliability ceases to be science. The currently available evidence suggests that the most prestigious journals are no better at detecting unreliable science than other journals. In fact, some of the most convincing studies show a negative correlation, with the most prestigious journals publishing the least reliable science. With the credibility of science increasingly under siege, how much longer can we afford to reward novelty at the expense of reliability? Here, I argue for replacing the legacy journals with a modern information infrastructure that is governed by scholars. This infrastructure would allow renewed focus on scientific reliability, with improved sort, filter, and discovery functionalities, at massive cost savings. If these savings were invested in additional infrastructure for research data and scientific code and/or software, scientific reliability would receive additional support, and funding woes—for, e.g., biological databases—would be a concern of the past.

<![CDATA[A proposal for the future of scientific publishing in the life sciences]]>

Science advances through rich, scholarly discussion. More than ever before, digital tools allow us to take that dialogue online. To chart a new future for open publishing, we must consider alternatives to the core features of the legacy print publishing system, such as an access paywall and editorial selection before publication. Although journals have their strengths, the traditional approach of selecting articles before publication (“curate first, publish second”) forces a focus on “getting into the right journals,” which can delay dissemination of scientific work, create opportunity costs for pushing science forward, and promote undesirable behaviors among scientists and the institutions that evaluate them. We believe that a “publish first, curate second” approach with the following features would be a strong alternative: authors decide when and what to publish; peer review reports are published, either anonymously or with attribution; and curation occurs after publication, incorporating community feedback and expert judgment to select articles for target audiences and to evaluate whether scientific work has stood the test of time. These proposed changes could optimize publishing practices for the digital age, emphasizing transparency, peer-mediated improvement, and post-publication appraisal of scientific articles.

<![CDATA[Gender and cultural bias in student evaluations: Why representation matters]]>

Gendered and racial inequalities persist in even the most progressive of workplaces. There is increasing evidence to suggest that all aspects of employment, from hiring to performance evaluation to promotion, are affected by gender and cultural background. In higher education, bias in performance evaluation has been posited as one of the reasons why few women make it to the upper echelons of the academic hierarchy. With unprecedented access to institution-wide student survey data from a large public university in Australia, we investigated the role of conscious or unconscious bias in terms of gender and cultural background. We found potential bias against women and teachers with non-English speaking backgrounds. Our findings suggest that bias may decrease with better representation of minority groups in the university workforce. Our findings have implications for society beyond the academy, as over 40% of the Australian population now go to university, and graduates may carry these biases with them into the workforce.

<![CDATA[Developing a modern data workflow for regularly updated data]]>

Over the past decade, biology has undergone a data revolution in how researchers collect data and the amount of data being collected. An emerging challenge that has received limited attention in biology is managing, working with, and providing access to data under continual active collection. Regularly updated data present unique challenges in quality assurance and control, data publication, archiving, and reproducibility. We developed a workflow for a long-term ecological study that addresses many of the challenges associated with managing this type of data. We do this by leveraging existing tools to 1) perform quality assurance and control; 2) import, restructure, version, and archive data; 3) rapidly publish new data in ways that ensure appropriate credit to all contributors; and 4) automate most steps in the data pipeline to reduce the time and effort required by researchers. The workflow leverages tools from software development, including version control and continuous integration, to create a modern data management system that automates the pipeline.
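Quality assurance on regularly updated data typically means validating every new batch of records automatically before it enters the archive. A minimal sketch of such a check, of the kind that can run on each data update in continuous integration; the column names and validity rules here are hypothetical, not the study's actual schema:

```python
import csv
import io

# Hypothetical required columns for an ecological survey table.
REQUIRED = ["plot", "species", "weight_g"]

def validate_rows(csv_text):
    """Return a list of (line_number, message) problems; an empty list means pass."""
    problems = []
    reader = csv.DictReader(io.StringIO(csv_text))
    missing = [c for c in REQUIRED if c not in (reader.fieldnames or [])]
    if missing:
        return [(0, f"missing columns: {missing}")]
    for i, row in enumerate(reader, start=2):  # header occupies line 1
        if not row["species"]:
            problems.append((i, "empty species code"))
        try:
            w = float(row["weight_g"])
            if not (0 < w < 1000):  # hypothetical plausibility bound
                problems.append((i, f"weight out of range: {w}"))
        except ValueError:
            problems.append((i, f"non-numeric weight: {row['weight_g']!r}"))
    return problems

good = "plot,species,weight_g\n1,DM,42.0\n"
bad = "plot,species,weight_g\n1,DM,-5\n2,,abc\n"
print(validate_rows(good))       # []
print(len(validate_rows(bad)))   # 3 problems found
```

Wiring a script like this into a continuous-integration job makes the QA step automatic: a data update that fails validation never reaches the published archive.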

<![CDATA[Open notebook science can maximize impact for rare disease projects]]>

Transparency lies at the heart of the open lab notebook movement. Open notebook scientists publish laboratory experiments and findings in the public domain in real time, without restrictions or omissions. Research on rare diseases is especially amenable to the open notebook model because it can both increase scientific impact and serve as a mechanism to engage patient groups in the scientific process. Here, I outline and describe my own success with my open notebook project, LabScribbles, as well as other efforts included in the initiative.

<![CDATA[Maintenance and inspection as risk factors in helicopter accidents: Analysis and recommendations]]>

In this work, we establish that maintenance and inspection are a risk factor in helicopter accidents. Between 2005 and 2015, flawed maintenance and inspection were causal factors in 14% to 21% of helicopter accidents in the U.S. civil fleet. For these maintenance-related accidents, we examined the incubation time from when the maintenance error was committed to when it resulted in an accident. We found a significant clustering of maintenance accidents within a small number of flight-hours after maintenance was performed: 31% of these accidents occurred within the first 10 flight-hours. This is reminiscent of infant mortality in reliability engineering, and we characterize it as maintenance error infant mortality. The last quartile of maintenance-related accidents occurred more than 60 flight-hours after maintenance and inspection. We then examined the “physics of failures” underlying maintenance-related accidents and analyzed the prevalence of different types of maintenance errors in helicopter accidents. We found, for instance, that the improper or incomplete (re)assembly or installation of a part accounted for the majority of maintenance errors, with 57% of such cases, and that within this category, incorrect torquing of the B-nut and incomplete assembly of critical linkages were the most prevalent errors. We also found that within the category of failure to perform a required preventive maintenance and inspection task, most maintenance programs were executed in compliance with neither federal regulations nor the manufacturer's maintenance plan. Maintenance-related accidents are particularly hurtful for the rotorcraft community, and they can be eliminated. This is a reachable objective when technical competence meets organizational proficiency and the collective will of all the stakeholders in this community.
We conclude with a set of recommendations based on our findings, which borrow from the ideas underlying the defense-in-depth safety principle to address this disquieting problem.
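The summary statistics the study reports on incubation times, the share of accidents within the first 10 flight-hours and the quartiles of the distribution, are simple to compute once the times are tabulated. A sketch using hypothetical incubation times (flight-hours between maintenance and accident), not the study's accident data:

```python
# Illustrative only: hypothetical incubation times in flight-hours, chosen to
# show early clustering ("infant mortality") plus a long right tail.
def incubation_summary(hours):
    """Share of accidents within 10 flight-hours, plus nearest-rank quartiles."""
    hours = sorted(hours)
    n = len(hours)
    within_10 = sum(1 for h in hours if h <= 10) / n

    def quantile(q):  # simple nearest-rank quantile
        return hours[min(n - 1, int(q * n))]

    return {"share_within_10h": within_10,
            "q1": quantile(0.25),
            "median": quantile(0.5),
            "q3": quantile(0.75)}

sample = [1, 2, 3, 4, 5, 8, 9, 15, 25, 40, 55, 70, 90, 120, 150, 200]
s = incubation_summary(sample)
print(s["share_within_10h"])  # 0.4375 -- heavy clustering right after maintenance
print(s["q3"])                # last quartile begins long after maintenance
```

The same two numbers drive the study's interpretation: a large early share signals maintenance-error infant mortality, while a distant upper quartile shows that some errors incubate for a long time before causing an accident.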

<![CDATA[Faculty perceptions and knowledge of career development of trainees in biomedical science: What do we (think we) know?]]>

The Broadening Experiences in Scientific Training (BEST) program is an NIH-funded effort testing the impact of career development interventions (e.g., internships, workshops, classes) on biomedical trainees (graduate students and postdoctoral fellows). BEST Programs seek to increase trainees’ knowledge, skills, and confidence to explore and pursue expanded career options, and to increase training in new skills that enable multiple career pathways. Faculty mentors are vital to a trainee’s professional development, but data on how faculty members view the value of, and the time spent on, career development for biomedical trainees are lacking. Seven BEST institutions investigated this issue by conducting faculty surveys during their BEST experiments. The intent of the surveys was to understand faculty perceptions of professional and career development for their trainees. Two complementary surveys were employed, one designed by Michigan State University (MSU) and the other by Vanderbilt University. Faculty members (592) across five institutions responded to the MSU survey; 225 faculty members from two institutions responded to the Vanderbilt University survey. Participating faculty were largely tenure track and male; approximately one-third had spent time in a professional position outside of academia. Respondents felt a sense of urgency about introducing broad career activities for trainees, given a recognized shortage of tenure-track positions. They reported believing that career development needs differ between graduate students and postdoctoral fellows, and they indicated that they actively mentor trainees in career development. However, faculty were uncertain whether they actually have the knowledge or training to do so effectively. Faculty perceived that trainees themselves lack the skills that are of interest to non-academic employers; thus, there is a need for exposure to and training in such skills.
Faculty stated unequivocally that institutional support for career development is important and needed. BEST Programs were considered beneficial to trainees, but awareness of local BEST Programs and of the national BEST Consortium was low at the time the surveys were administered at some institutions. It is our hope that the work presented here will increase awareness of the national BEST effort and of the need for further career development for biomedical trainees.

<![CDATA[Comparison of two bovine serum pregnancy tests in detection of artificial insemination pregnancies and pregnancy loss in beef cattle]]>

Blood tests for early detection of pregnancy in cattle, based on pregnancy-associated glycoproteins (PAGs), are commercially available. The objective of these studies was to compare the accuracy of blood tests with transrectal ultrasonography in detecting artificial insemination (AI) pregnancies, and to compare the accuracy of blood tests in predicting pregnancy loss. Beef cattle from six herds were synchronized using a recommended CIDR-based protocol (Study 1: n = 460; Study 2: n = 472). Pregnancy status was determined by transrectal ultrasonography between days 28 and 40 following AI; blood samples were collected at this time. In Study 2, a final pregnancy determination was performed at the end of the breeding season to determine pregnancy loss. Each serum sample was examined for PAG concentrations using a microtiter plate reader and/or scored by two technicians blinded to pregnancy status and pregnancy loss. For Study 1, Cohen’s kappa statistics were calculated to assess the agreement between each test and transrectal ultrasonography. For Study 2, data were analyzed using the GLIMMIX procedure of SAS with herd as a random effect, and with loss, age, and their interaction included in the model. Agreement was good to very good for each test. There was no difference (P = 0.79) in sensitivity, but there were differences (P < 0.01) in the specificity of the assays (88%, 64%, 87%, 90%) and in the overall percentage correct (93%, 84%, 93%, 93%). There were effects of pregnancy loss (P = 0.04), age (P = 0.0002), and their interaction (P = 0.06) on PAG concentrations. In conclusion, both pregnancy tests were accurate at detecting AI pregnancies and were in very good agreement with transrectal ultrasonography. Both tests detected differences in PAGs between females that maintained and those that lost pregnancies; however, prediction proved difficult, as most females were above the threshold and would have been considered pregnant on the day of testing.
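Cohen's kappa, used here to assess test-versus-ultrasonography agreement, corrects observed agreement for the agreement expected by chance. A sketch of the standard 2x2 formula with hypothetical counts, not the study's data:

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = both positive, b = test +/reference -, c = test -/reference +,
    d = both negative."""
    n = a + b + c + d
    po = (a + d) / n                        # observed agreement
    p_pos = ((a + b) / n) * ((a + c) / n)   # chance agreement on "pregnant"
    p_neg = ((c + d) / n) * ((b + d) / n)   # chance agreement on "open"
    pe = p_pos + p_neg                      # total chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical blood-test vs. ultrasound counts for 460 head.
k = cohens_kappa(250, 10, 15, 185)
print(round(k, 2))  # values above ~0.8 are conventionally rated "very good"
```

Kappa of 1.0 means perfect agreement; 0 means agreement no better than chance, which is why it is preferred over raw percent agreement when prevalence is unbalanced.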

<![CDATA[CD4-T cell enumeration in human immunodeficiency virus (HIV)-infected patients: A laboratory performance evaluation of Muse Auto CD4/CD4% system by World Health Organization prequalification of in vitro diagnostics]]>


CD4 T-cell counts are still widely used to assess treatment eligibility and follow-up of HIV-infected patients. The World Health Organization (WHO) prequalification of in vitro diagnostics requested a manufacturer-independent laboratory evaluation, at the Institute of Tropical Medicine (ITM), Antwerp, Belgium, of the analytical performance of the Muse Auto CD4/CD4% system (Millipore), a new small capillary-flow cytometer dedicated to counting absolute CD4 T-cell numbers and percentages in venous blood samples from HIV-infected patients.


Two hundred and fifty (250) patients were recruited from the HIV outpatient clinic at ITM. Accuracy and precision of CD4 T-cell counting on fresh EDTA-anticoagulated venous blood samples were assessed in the laboratory on a Muse Auto CD4/CD4% system. Extensive precision analyses were performed both on fresh blood and on normal and low stabilized whole-blood controls. Accuracy (bias) was assessed by comparing results from the Muse Auto CD4/CD4% system to the reference (single-platform FACSCalibur). Clinical misclassification was measured at the 500, 350, 200, and 100 cells/μL thresholds.


Intra-assay precision was < 5%, and inter-assay precision was < 9%. CD4 T-cell counts measured on the Muse Auto CD4/CD4% system and on the reference instrument yielded regression slopes of 0.97 for absolute counts and 1.03 for CD4 T-cell percentages, with a correlation coefficient of 0.99 for both. The average absolute bias compared with the reference was negligible (4 cells/μL, or 0.5%). The average absolute bias on CD4 T-cell percentages was < 1%. Clinical misclassification at the different CD4 T-cell thresholds was small, resulting in sensitivities and specificities equal to or greater than 90% at all thresholds except 100 cells/μL (sensitivity = 87%). All samples could be analyzed, as no repetitive rejection errors were recorded.


The Muse Auto CD4/CD4% System performed very well on fresh venous blood samples and met all WHO acceptance criteria for analytical performance of CD4 technologies.
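The clinical-misclassification analysis above reduces to computing sensitivity and specificity at each treatment threshold, treating "below threshold on the reference instrument" as the condition to detect. A minimal sketch with hypothetical paired counts, not the evaluation's data:

```python
def threshold_performance(test_counts, ref_counts, threshold):
    """Sensitivity and specificity of a CD4 counter against a reference method
    at a clinical threshold; 'below threshold' is the condition to detect."""
    tp = fp = tn = fn = 0
    for test, ref in zip(test_counts, ref_counts):
        if ref < threshold:               # reference-positive (needs treatment)
            tp += test < threshold
            fn += test >= threshold
        else:                             # reference-negative
            tn += test >= threshold
            fp += test < threshold
    sens = tp / (tp + fn) if tp + fn else None
    spec = tn / (tn + fp) if tn + fp else None
    return sens, spec

# Hypothetical paired CD4 counts (cells/uL): reference vs. evaluated instrument.
ref  = [90, 150, 180, 250, 320, 400, 520, 610, 700, 80]
test = [95, 140, 210, 240, 310, 420, 500, 590, 720, 85]
sens, spec = threshold_performance(test, ref, 200)
print(sens, spec)  # one reference-positive sample (180 -> 210) is misclassified
```

Repeating the calculation at the 500, 350, 200, and 100 cells/μL thresholds yields exactly the per-threshold sensitivity/specificity table that the WHO acceptance criteria are checked against.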

<![CDATA[The PLOS ONE collection on machine learning in health and biomedicine: Towards open code and open data]]>

Recent years have seen a surge of studies in machine learning in health and biomedicine, driven by the digitalization of healthcare environments and increasingly accessible computer systems for conducting analyses. Many of us believe that these developments will lead to significant improvements in patient care. As in many academic disciplines, however, progress is hampered by a lack of code and data sharing. In bringing together this PLOS ONE collection on machine learning in health and biomedicine, we sought to focus on the importance of reproducibility, making it a requirement, as far as possible, for authors to share data and code alongside their papers.

<![CDATA[Enabling precision medicine via standard communication of HTS provenance, analysis, and results]]>

A personalized approach based on a patient's or pathogen’s unique genomic sequence is the foundation of precision medicine. Genomic findings must be robust and reproducible, and experimental data capture should adhere to the findable, accessible, interoperable, and reusable (FAIR) guiding principles. Moreover, effective precision medicine requires standardized reporting that extends beyond wet-lab procedures to computational methods. The BioCompute framework enables standardized reporting of genomic sequence data provenance, including a provenance domain, usability domain, execution domain, verification kit, and error domain. This framework facilitates communication and promotes interoperability. Bioinformatics computation instances that employ the BioCompute framework are easily relayed, repeated if needed, and compared by scientists, regulators, test developers, and clinicians. Easing the burden of performing these tasks greatly extends the range of practical application. Large clinical trials, precision medicine, and regulatory submissions require a set of agreed-upon standards that ensures efficient communication and documentation of genomic analyses. The BioCompute paradigm and the resulting BioCompute Objects (BCOs) offer that standard and are freely accessible as a GitHub organization following the principles for collaborative open standards development. With high-throughput sequencing (HTS) studies communicated using BCOs, regulatory agencies (e.g., the Food and Drug Administration [FDA]), diagnostic test developers, researchers, and clinicians can expand collaboration to drive innovation in precision medicine, potentially decreasing the time and cost associated with next-generation sequencing workflow exchange, reporting, and regulatory review.
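The framework's core idea, grouping an analysis description into named domains inside one machine-readable record, can be illustrated with a small sketch. The structure below is a simplified illustration with placeholder contents, not the authoritative BCO schema, which is defined by the BioCompute standard itself:

```python
import json

# Minimal sketch of a BioCompute-style record grouping an analysis description
# into the domains named by the framework. All field contents are hypothetical
# placeholders; the real schema is defined by the BioCompute specification.
bco = {
    "provenance_domain": {"name": "variant-calling-demo", "version": "1.0"},
    "usability_domain": ["Illustrative description of what the analysis is for"],
    "execution_domain": {"script": "run_pipeline.sh",
                         "environment_variables": {"THREADS": "8"}},
    "verification_kit": ["checksum of the validation dataset (placeholder)"],
    "error_domain": {"empirical_error": "false-negative rate (placeholder)"},
}

# Because the record is plain JSON, it can be relayed, archived, and compared
# losslessly between researchers, test developers, and regulators.
serialized = json.dumps(bco, indent=2, sort_keys=True)
restored = json.loads(serialized)
print(restored["provenance_domain"]["name"])  # variant-calling-demo
```

Serializing the whole analysis description as one structured object is what lets a recipient re-run or audit a computation rather than reconstructing it from prose methods sections.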

<![CDATA[Crowdfunding scientific research: Descriptive insights and correlates of funding success]]>

Crowdfunding has gained traction as a mechanism to raise resources for entrepreneurial and artistic projects, yet there is little systematic evidence on the potential of crowdfunding for scientific research. We first briefly review prior research on crowdfunding and give an overview of dedicated platforms for crowdfunding research. We then analyze data from over 700 campaigns on the largest dedicated platform. Our descriptive analysis provides insights into the creators seeking funding, the projects they seek funding for, and the campaigns themselves. We then examine how these characteristics relate to fundraising success. The findings highlight important differences between crowdfunding and traditional funding mechanisms for research, including heavy use by students and other junior investigators but also relatively small project sizes. Students and junior investigators are more likely to succeed than senior scientists, and women have higher success rates than men. Conventional signals of quality, including scientists’ prior publications, have little relationship with funding success, suggesting that the crowd may apply different decision criteria than traditional funding agencies. Our results highlight significant opportunities for crowdfunding in the context of science while also pointing towards unique challenges. We relate our findings to research on the economics of science and on crowdfunding, and we discuss connections with other emerging mechanisms for involving the public in scientific research.