ResearchPad - research-reporting-guidelines https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks <![CDATA[The prevalence of hepatitis C virus in hemodialysis patients in Pakistan: A systematic review and meta-analysis]]> https://www.researchpad.co/article/elastic_article_14616

Background

Hepatitis C virus (HCV) infection is one of the most common bloodborne viral infections reported in Pakistan. Frequent dialysis treatment of hemodialysis patients exposes them to a high risk of HCV infection. The main purpose of this paper is to quantify the prevalence of HCV in hemodialysis patients through a systematic review and meta-analysis.

Methods

We systematically searched PubMed, Medline, EMBASE, Pakistani Journals Online and Web of Science to identify studies published between 1 January 1995 and 30 October 2019, reporting on the prevalence of HCV infection in hemodialysis patients. Meta-analysis was performed using a random-effects model to obtain pooled estimates. A funnel plot was used in conjunction with Egger’s regression test for asymmetry and to assess publication bias. Meta-regression and subgroup analyses were used to identify potential sources of heterogeneity among the included studies. This review was registered on PROSPERO (registration number CRD42019159345).

Results

Out of 248 potential studies, 19 studies involving 3446 hemodialysis patients were included in the meta-analysis. The pooled prevalence of HCV in hemodialysis patients in Pakistan was 32.33% (95% CI: 25.73–39.30; I2 = 94.3%, p < 0.01).
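A pooled prevalence like the one above comes from a random-effects model; a minimal sketch of DerSimonian-Laird pooling of logit-transformed prevalences follows. The three study estimates and variances are hypothetical, not the 19 included studies.

```python
import math

def dersimonian_laird(estimates, variances):
    """Pool study estimates with the DerSimonian-Laird random-effects model."""
    w = [1.0 / v for v in variances]                       # fixed-effect (inverse-variance) weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    # Cochran's Q and the method-of-moments between-study variance tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # random-effects weights incorporate the between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0    # I^2 heterogeneity (%)
    return pooled, se, i2

# Hypothetical logit-prevalences and their variances for three studies
est = [-0.9, -0.6, -1.2]
var = [0.04, 0.05, 0.03]
pooled, se, i2 = dersimonian_laird(est, var)
prev = 1.0 / (1.0 + math.exp(-pooled))                     # back-transform to a proportion
```

The logit transform keeps the pooled proportion inside (0, 1) after back-transformation; an exact prevalence meta-analysis would also need the per-study sample sizes to compute the variances.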
The subgroup analysis showed that the prevalence of HCV among hemodialysis patients was significantly higher in Punjab (37.52%; 95% CI: 26.66–49.03; I2 = 94.5%, p < 0.01) than in Baluchistan (34.42%; 95% CI: 14.95–57.05; I2 = 91.3%, p < 0.01), Sindh (27.11%; 95% CI: 15.81–40.12; I2 = 94.5%, p < 0.01) and Khyber Pakhtunkhwa (22.61%; 95% CI: 17.45–28.2; I2 = 78.6%, p < 0.01).

Conclusions

In this study, we found a high prevalence (32.33%) of HCV infection in hemodialysis patients in Pakistan. Clinically, hemodialysis patients require more attention and resources than the general population. Preventive interventions are urgently needed to decrease the high risk of HCV infection in hemodialysis patients in Pakistan. ]]> <![CDATA[Would you like to participate in this trial? The practice of informed consent in intrapartum research in the last 30 years]]> https://www.researchpad.co/article/Na45ec8a9-d35b-4ecd-a654-0f10371697fd

Background

Informed consent is the cornerstone of the ethical conduct and protection of the rights and wellbeing of participants in clinical research. Therefore, it is important to identify the most appropriate moments for the participants to be informed and to give consent, so that they are able to make a responsible and autonomous decision. However, the optimal timing of consent in clinical research during the intrapartum period remains controversial, and currently, there is no clear guidance.

Objective

We aimed to describe practices of informed consent in intrapartum care clinical research in the last three decades, as reported in uterotonics for postpartum haemorrhage prevention trials.

Methods

This is a secondary analysis of the studies included in the Cochrane review entitled “Uterotonic agents for preventing postpartum haemorrhage: a network meta-analysis” published in 2018. All the reports included in the Cochrane network meta-analysis were eligible for inclusion in this analysis, except for those reported in languages other than English, French or Spanish. We extracted and synthesized data on the time each of the components of the informed consent process occurred.

Results

We assessed data from 192 of the 196 studies included in the Cochrane review. The majority of studies (59.9%, 115 studies) reported that women were informed about the study, without specifying the timing. When reported, the timing was most often at admission to the facility for childbirth. Most of the studies reported that consent was sought, but only 59.9% reported the timing, which in most cases was at admission for childbirth. Among these, 32 studies obtained consent in the active phase of labour, 17 in the latent phase, and in 10 studies the labour status was unknown. Women were consented antenatally in 6 studies, and in 8 studies consent was obtained either during antenatal care or at admission. Most of the studies did not specify who sought the informed consent.

Conclusion

Practices of informed consent in trials on use of uterotonics for prevention of postpartum haemorrhage showed variability and substandard reporting. Informed consent sought at admission for childbirth was the most frequent approach implemented in these trials.

]]>
<![CDATA[Description of network meta-analysis geometry: A metrics design study]]> https://www.researchpad.co/article/5c76fe29d5eed0c484e5b60f

Background

The conduct and reporting of network meta-analysis (NMA), including the presentation of the network plot, should be transparent. We aimed to propose metrics adapted from graph theory and the social network analysis literature to numerically describe NMA geometry.

Methods

A previously performed systematic review of NMAs of pharmacological interventions provided the study sample. Data on the presentation of the graphs were collected. Network plots were reproduced using Gephi 0.9.1. Eleven geometric metrics were tested. Spearman’s test for non-parametric correlation analyses and the Bland-Altman and Lin’s concordance tests were performed (IBM SPSS Statistics 24.0).

Results

From the 477 identified NMAs, only 167 graphs could be reproduced, because only these provided enough information on the plot characteristics. The median number of nodes was 8 (IQR 6–11) and of edges 10 (IQR 6–16), with a median of 22 included studies (IQR 13–35). Metrics such as density (median 0.39, range 0.07–1.00), median thickness (2.0, IQR 1.0–3.0), percentage of common comparators (median 68%), and percentage of strong edges (median 53%) were found to contribute to the description of NMA geometry. Mean thickness, average weighted degree and average path length produced results similar to other metrics, but they can lead to misleading conclusions.
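Several of these metrics come straight from graph theory. Network density, for instance, is the share of all possible pairwise treatment comparisons that were actually made directly. A minimal sketch, using the review's median node and edge counts as illustrative inputs:

```python
def nma_density(nodes: int, edges: int) -> float:
    """Density of an undirected network plot: observed direct comparisons
    (edges) divided by all possible pairwise comparisons between treatments (nodes)."""
    possible = nodes * (nodes - 1) / 2
    return edges / possible

# The median network in the review had 8 nodes and 10 edges:
d = nma_density(8, 10)   # 10 of 28 possible comparisons, roughly 0.36
```

A density of 1.0 would mean a fully connected network in which every pair of treatments has been compared head-to-head.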

Conclusions

We suggest the incorporation of seven simple metrics to report NMA geometry. Editors and peer-reviewers should ensure that guidelines for NMA reporting are strictly followed before publication.

]]>
<![CDATA[Prevalence and determinants of antenatal depression in Ethiopia: A systematic review and meta-analysis]]> https://www.researchpad.co/article/5c75ac84d5eed0c484d0895e

Background

Maternal depression is the most prevalent psychiatric disorder during pregnancy; it can alter fetal development and have a lasting impact on the offspring's neurological and behavioral development. However, no review has been conducted to report the consolidated magnitude of antenatal depression (AND) in Ethiopia. Therefore, this review aimed to systematically summarize the existing evidence on the epidemiology of AND in Ethiopia.

Methods

Following the PRISMA guidelines, we systematically reviewed and meta-analyzed studies that examined the prevalence of AND and its associated factors, identified from three electronic databases (PubMed, EMBASE, and SCOPUS). We used predefined inclusion criteria to screen the identified studies. Both qualitative and quantitative analyses were employed. Heterogeneity across the studies was evaluated using the Q and I² tests. Publication bias was assessed by funnel plot and Egger’s regression test.

Results

In this review, a total of 193 studies were initially identified and evaluated. Of these, five eligible articles were included in the final analysis. In our meta-analysis, the pooled prevalence of AND in Ethiopia was 21.28% (95% CI; 15.96–27.78). The prevalence of AND was highest in the third trimester of pregnancy at 32.10%; it was 19.13% in the first trimester and 18.86% in the second. The prevalence of AND was 26.48% and 18.28% as measured by the Beck Depression Inventory (BDI) and the Edinburgh Postnatal Depression Scale (EPDS), respectively. Moreover, the prevalence of AND was 15.50% in community-based studies and 25.77% in institution-based studies. In our qualitative synthesis, we found that pregnant women who had a history of stillbirth, complications during pregnancy, a previous history of depression, no ANC follow-up, irregular ANC follow-up, dissatisfaction with ANC follow-up, or a monthly income below 1500 Ethiopian birr were at greater risk of developing AND. We also found that women who experienced partner violence during pregnancy, food insecurity, or medium or low social support, and those who were unmarried, aged 20–29, housewives or farmers, were at higher risk of developing AND.

Conclusion and recommendations

Our meta-analysis found that the pooled prevalence of AND in Ethiopia was 21.28%. The prevalence of AND was higher in the third trimester of pregnancy than in the first and second trimesters, and higher in studies using the BDI than in those using the EPDS. Studies on the magnitude of AND, as well as its possible determinants, in each trimester of pregnancy with representative sample sizes are recommended. Screening for depression in pregnant women in the perinatal setting should be considered, backed by the integration of family planning and mental health services. The use of a validated, standard instrument to assess AND is warranted.

Systematic review registration

The protocol for this systematic review and meta-analysis was registered at PROSPERO (record ID=CRD42017076521, 06 December 2017)

]]>
<![CDATA[A systematic review of the quality of distal radius systematic reviews: Methodology and reporting assessment]]> https://www.researchpad.co/article/5c52187cd5eed0c484798844

Background

Many systematic reviews (SRs) have been published about the various treatments for distal radius fractures (DRF). The heterogeneity of SR results may come from the misuse of SR methods, and literature overviews have demonstrated that SRs should be considered with caution, as they may not always be synonymous with high-quality standards. Our objective was to evaluate the quality of published SRs on the treatment of DRF using validated methodology and reporting assessment tools.

Methods

The methods utilized in this review were previously published in the PROSPERO database. We considered SRs of surgical and nonsurgical interventions for acute DRF in adults. A comprehensive search strategy was performed in the MEDLINE database (inception to May 2017), and we manually searched the grey literature for non-indexed research. Data were independently extracted by two authors. We assessed SR internal validity and reporting using AMSTAR (Assessing the Methodological Quality of Systematic Reviews) and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). Scores were calculated as the sum of reported items. We also extracted article characteristics and provided Spearman’s correlation measurements.

Results

Forty-one articles fulfilled the eligibility criteria. The mean PRISMA score was 15.90 (95% CI, 13.9–17.89) and the mean AMSTAR score was 6.48 (95% CI, 5.72–7.23). SRs that considered only RCTs had better AMSTAR [7.56 (2.1) vs. 5.62 (2.3); p = 0.014] and PRISMA scores [18.61 (5.22) vs. 13.93 (6.47); p = 0.027]. The presence of meta-analysis in the SRs was associated with higher PRISMA [19.17 (4.75) vs. 10.21 (4.51); p = 0.001] and AMSTAR scores [7.68 (1.9) vs. 4.39 (1.66); p = 0.001]. Journal impact factor and declaration of conflict of interest did not change PRISMA or AMSTAR scores. We found substantial inter-observer agreement for PRISMA (0.82, 95% CI 0.62–0.94; p = 0.01) and AMSTAR (0.65, 95% CI 0.43–0.81; p = 0.01), and moderate correlation between PRISMA and AMSTAR scores (0.83, 95% CI 0.62–0.92; p = 0.01).
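The inter-tool correlation reported above is Spearman's rank correlation. A minimal stdlib sketch (the no-ties formula; real score data with tied values would need tie-aware midranks, and the scores below are hypothetical):

```python
def spearman_rho(x, y):
    """Spearman's rank correlation for samples without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical PRISMA and AMSTAR scores for five reviews
prisma = [15, 20, 10, 22, 17]
amstar = [6, 8, 4, 9, 7]
rho = spearman_rho(prisma, amstar)   # ranks agree perfectly here, so rho = 1.0
```

Because it operates on ranks, Spearman's rho captures any monotone association between the two scores, not just a linear one.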

Conclusions

DRF RCT-only SRs have better PRISMA and AMSTAR scores. These tools have substantial inter-observer agreement and moderate inter-tool correlation. We exposed the current research panorama and pointed out some factors that can contribute to improvements on the topic.

]]>
<![CDATA[One thousand simple rules]]> https://www.researchpad.co/article/5c254554d5eed0c48442c516 ]]> <![CDATA[Assessment of the ergonomic risk from saddle and conventional seats in dentistry: A systematic review and meta-analysis]]> https://www.researchpad.co/article/5c215184d5eed0c4843fa8ee

Objective

This study aimed to verify whether the saddle seat provides lower ergonomic risk than conventional seats in dentistry.

Methods

This review followed the PRISMA statement and a protocol was created and registered in PROSPERO (CRD42017074918). Six electronic databases were searched as primary study sources. The "grey literature" was included to prevent selection and publication biases. The risk of bias among the studies included was assessed with the Joanna Briggs Institute Critical Appraisal Tool for Systematic Reviews. Meta-analysis was performed to estimate the effect of seat type on the ergonomic risk score in dentistry. The heterogeneity among studies was assessed using I2 statistics.

Results

The search resulted in 3147 records, from which two were considered eligible for this review. Both studies were conducted with a total of 150 second-year dental students who were starting their laboratory activities using phantom heads. Saddle seats were associated with a significantly lower ergonomic risk than conventional seats [right side (mean difference = -3.18; 95% CI = -4.96, -1.40; p < 0.001) and left side (mean difference = -3.12; 95% CI = -4.56, -1.68; p < 0.001)], indicating posture improvement.

Conclusion

The two eligible studies for this review provide moderate evidence that saddle seats provided lower ergonomic risk than conventional seats in the examined population of dental students.

]]>
<![CDATA[Improving the Transparency of Prognosis Research: The Role of Reporting, Data Sharing, Registration, and Protocols]]> https://www.researchpad.co/article/5989d9d1ab0ee8fa60b64516

George Peat and colleagues review and discuss current approaches to transparency and published debates and concerns about efforts to standardize prognosis research practice, and make five recommendations.

Please see later in the article for the Editors' Summary

]]>
<![CDATA[Center of Excellence in Research Reporting in Neurosurgery - Diagnostic Ontology]]> https://www.researchpad.co/article/5989da75ab0ee8fa60b9639e

Motivation: Evidence-based medicine (EBM) in the field of neurosurgery relies on diagnostic studies, since randomized controlled trials (RCTs) are uncommon. However, diagnostic study reporting is less standardized, which increases the difficulty of reliably aggregating results. Although there have been several initiatives to standardize reporting, they have proven to be sub-optimal. Additionally, there is no central repository for storing and retrieving related articles. Results: In our approach we formulate a computational diagnostic ontology containing 91 elements, including classes and sub-classes, which are required to conduct systematic reviews with meta-analysis (SR-MA) of diagnostic studies and which will assist in the standardized reporting of diagnostic articles. An SR-MA aggregates several studies to reach one conclusion for a particular research question. We also report a high percentage of agreement among five observers who annotated 13 articles using the diagnostic ontology in an interobserver agreement test. Moreover, we extend our existing repository CERR-N to include diagnostic studies. Availability: The ontology is available for download as an .owl file at: http://bioportal.bioontology.org/ontologies/3013.

]]>
<![CDATA[Did the reporting of prognostic studies of tumour markers improve since the introduction of REMARK guideline? A comparison of reporting in published articles]]> https://www.researchpad.co/article/5989db5fab0ee8fa60be10a2

Although biomarkers are perceived as highly relevant for future clinical practice, few biomarkers reach clinical utility for several reasons. Among them, poor reporting of studies is one of the major problems. To aid improvement, reporting guidelines like REMARK for tumour marker prognostic (TMP) studies were introduced several years ago. The aims of this project were to assess whether reporting quality of TMP-studies improved in comparison to a previously conducted study assessing reporting quality of TMP-studies (PRE-study) and to assess whether articles citing REMARK (citing group) are better reported, in comparison to articles not citing REMARK (not-citing group).

For the POST-study, recent articles citing and not citing REMARK (53 each) were identified in selected journals through a systematic literature search and evaluated in the same way as in the PRE-study. Ten of the 20 items of the REMARK checklist were evaluated and used to define an overall score of reporting quality.

The observed overall scores were 53.4% (range: 10%–90%) for the PRE-study, 57.7% (range: 20%–100%) for the not-citing group and 58.1% (range: 30%–100%) for the citing group of the POST-study. While there was no difference between the two groups of the POST-study, the POST-study showed a slight but not relevant improvement in reporting relative to the PRE-study. Not all articles in the citing group cited REMARK appropriately. Irrespective of whether REMARK was cited, the overall score was slightly higher for articles published in journals requesting adherence to REMARK than for those published in journals not requesting it: 59.9% versus 51.9%, respectively.

Several years after the introduction of REMARK, many key items of TMP-studies are still very poorly reported. A combined effort is needed from authors, editors, reviewers and methodologists to improve the current situation. Good reporting is not just nice to have but is essential for any research to be useful.

]]>
<![CDATA[Quality of Reporting of Bioequivalence Trials Comparing Generic to Brand Name Drugs: A Methodological Systematic Review]]> https://www.researchpad.co/article/5989da7aab0ee8fa60b983ac

Background

Generic drugs are used by millions of patients for economic reasons, so their evaluation must be highly transparent.

Objective

To assess the quality of reporting of bioequivalence trials comparing generic to brand-name drugs.

Methodology/Principal Findings

PubMed was searched for reports of bioequivalence trials comparing generic to brand-name drugs published between January 2005 and December 2008. Articles were included if the aim of the study was to assess the bioequivalence of generic and brand-name drugs. We excluded case studies, pharmaco-economic evaluations, and validation dosage assays of drugs. We evaluated whether important information about funding, methodology, location of trials, and participants was reported. We also assessed whether the criteria required by the Food and Drug Administration (FDA) and the European Medicines Agency (EMA) to conclude bioequivalence were reported and whether the conclusions were in agreement with the results. We identified 134 potentially relevant articles but eliminated 55 because the brand-name or generic status of the reference drug was unknown. Thus, we evaluated 79 articles. The funding source and location of the trial were reported in 41% and 56% of articles, respectively. The type of statistical analysis was reported in 94% of articles, but the methods used to generate the randomization sequence and to conceal allocation were reported in only 15% and 5%, respectively. In total, 65 reports of single-dose trials (89%) concluded bioequivalence. Of these, 20 (31%) did not report the 3 criteria within the limits required by the FDA and 11 (17%) did not report the 2 criteria within the limits required by the EMA.
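The regulatory criterion behind these counts is the average-bioequivalence rule: the 90% confidence interval for the geometric mean ratio (test/reference) of exposure measures such as AUC and Cmax must lie entirely within 0.80–1.25. A sketch with hypothetical summary statistics, using the 1.645 normal quantile as a stand-in for the exact t quantile a real analysis would use:

```python
import math

def bioequivalent(mean_log_ratio: float, se: float, z90: float = 1.645) -> bool:
    """Average bioequivalence: the 90% CI of the geometric mean ratio,
    built on the log scale and back-transformed, must sit within 0.80-1.25."""
    lo = math.exp(mean_log_ratio - z90 * se)
    hi = math.exp(mean_log_ratio + z90 * se)
    return 0.80 <= lo and hi <= 1.25

# Hypothetical crossover-trial summary: mean log(test/reference) AUC and its SE
print(bioequivalent(0.05, 0.06))   # CI roughly (0.95, 1.16) -> True
print(bioequivalent(0.15, 0.10))   # upper limit exceeds 1.25 -> False
```

Working on the log scale is why the acceptance limits are asymmetric around 1: 0.80 and 1.25 are reciprocals, so swapping test and reference gives the same verdict.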

Conclusions/Significance

Important information for judging the validity and relevance of results is frequently missing in published reports of trials assessing generic drugs. The quality of reporting of such trials is in need of improvement.

]]>
<![CDATA[Statistical Reporting Errors and Collaboration on Statistical Analyses in Psychological Science]]> https://www.researchpad.co/article/5989da11ab0ee8fa60b79bc2

Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this ‘co-piloting’ currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting results is quite uncommon among psychologists, while data sharing among co-authors seems reasonably but not completely standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors.
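The automated procedure described above recomputes each p-value from its reported test statistic and flags disagreements. A simplified sketch for a two-sided z test (the actual checking tools also handle t, F, and χ² statistics, which require the corresponding distributions; the tolerance value here is illustrative):

```python
import math

def p_from_z(z: float) -> float:
    """Two-sided p-value for a z statistic, via the complementary error function."""
    return math.erfc(abs(z) / math.sqrt(2))

def inconsistent(reported_p: float, z: float, tol: float = 0.005) -> bool:
    """Flag a reported p-value that disagrees with its test statistic
    beyond a rounding tolerance."""
    return abs(p_from_z(z) - reported_p) > tol

print(inconsistent(0.046, 2.0))    # recomputed p is about 0.0455 -> consistent
print(inconsistent(0.02, 2.0))     # off by about 0.025 -> flagged
```

A "gross" inconsistency in the article's sense is one where the recomputed p-value falls on the other side of the 0.05 threshold from the reported one, which the second call above would also exhibit.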

]]>
<![CDATA[On the Lack of Consensus over the Meaning of Openness: An Empirical Study]]> https://www.researchpad.co/article/5989dacfab0ee8fa60bb58d0

This study set out to explore the views and motivations of those involved in a number of recent and current advocacy efforts (such as open science, computational provenance, and reproducible research) aimed at making science and scientific artifacts accessible to a wider audience. Using a exploratory approach, the study tested whether a consensus exists among advocates of these initiatives about the key concepts, exploring the meanings that scientists attach to the various mechanisms for sharing their work, and the social context in which this takes place. The study used a purposive sampling strategy to target scientists who have been active participants in these advocacy efforts, and an open-ended questionnaire to collect detailed opinions on the topics of reproducibility, credibility, scooping, data sharing, results sharing, and the effectiveness of the peer review process. We found evidence of a lack of agreement on the meaning of key terminology, and a lack of consensus on some of the broader goals of these advocacy efforts. These results can be explained through a closer examination of the divergent goals and approaches adopted by different advocacy efforts. We suggest that the scientific community could benefit from a broader discussion of what it means to make scientific research more accessible and how this might best be achieved.

]]>
<![CDATA[Sample Size Requirements for Studies of Treatment Effects on Beta-Cell Function in Newly Diagnosed Type 1 Diabetes]]> https://www.researchpad.co/article/5989da41ab0ee8fa60b8a001

Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects, allowing sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8–12 years of age, adolescents (13–17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13–17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for translating analyses of log(x+1) and square-root transformed values back into the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo.
These results provide the information needed to accurately evaluate the sample size for studies of new agents to preserve C-peptide levels in newly diagnosed type 1 diabetes.
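The sample-size reasoning above follows the standard two-sample formula: per-group n = 2((z_alpha/2 + z_beta) * sigma / delta)^2, where sigma is the residual SD of the (transformed) C-peptide AUC and delta the treatment difference to detect. A hedged sketch with illustrative values, not the TrialNet estimates:

```python
import math

def n_per_group(sd: float, delta: float,
                z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Per-group sample size for a two-sided two-sample comparison of means
    (defaults: alpha = 0.05, power = 80%), rounded up."""
    return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Illustrative: residual SD 0.45 on the log(x+1) scale, detect a difference of 0.20
n = n_per_group(0.45, 0.20)
# Greater residual variation (as among the 13-17-year-olds) raises the requirement:
n_teen = n_per_group(0.60, 0.20)
```

Because n scales with the square of sigma/delta, the larger residual variation in adolescents inflates the required sample size substantially, which is the qualitative point the abstract makes.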

]]>
<![CDATA[Completeness and Changes in Registered Data and Reporting Bias of Randomized Controlled Trials in ICMJE Journals after Trial Registration Policy]]> https://www.researchpad.co/article/5989da9aab0ee8fa60ba33d8

Objective

We assessed the adequacy of randomized controlled trial (RCT) registration, changes to registration data and reporting completeness for articles in ICMJE journals during the 2.5 years after the registration requirement policy took effect.

Methods

For a set of 149 reports of 152 RCTs with a ClinicalTrials.gov registration number, published from September 2005 to April 2008, we evaluated the completeness of 9 items from the WHO 20-item Minimum Data Set relevant to assessing trial quality. We also assessed changes to the registration elements at the Archive site of ClinicalTrials.gov and compared published and registry data.

Results

RCTs were mostly registered before the 13 September 2005 deadline (n = 101, 66.4%); 118 (77.6%) started recruitment before and 31 (20.4%) after registration. At the time of registration, the 152 RCTs had a total of 224 missing registry fields, most commonly ‘Key secondary outcomes’ (44.1% of RCTs) and ‘Primary outcome’ (38.8%). RCTs with post-registration recruitment more often had missing Minimum Data Set items than RCTs with pre-registration recruitment: 24/31 (77.4%) vs. 57/118 (48.3%) (χ²(1) = 7.255, P = 0.007). Major changes in the data entries were found for 31 (25.2%) RCTs. The number of RCTs with differences between registered and published data ranged from 21 (13.8%) for Study type to 118 (77.6%) for Target sample size.
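The χ²(1) statistic can be reproduced from the 2×2 counts in the abstract (57 of 118 pre-registration vs. 24 of 31 post-registration RCTs with missing items) using a continuity-corrected chi-square test; that the original authors applied the Yates correction is an assumption, though the numbers match:

```python
def yates_chi2(a, b, c, d):
    """Chi-square statistic with Yates continuity correction
    for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (abs(a * d - b * c) - n / 2) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Rows: pre- vs. post-registration recruitment; columns: missing vs. complete items
chi2 = yates_chi2(57, 61, 24, 7)
print(round(chi2, 3))   # 7.255, matching the reported statistic
```

The statistic exceeds the 3.84 critical value for one degree of freedom at the 0.05 level, consistent with the reported P = 0.007.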

Conclusions

ICMJE journals published RCTs with proper registration but the registration data were often not adequate, underwent substantial changes in the registry over time and differed in registered and published data. Editors need to establish quality control procedures in the journals so that they continue to contribute to the increased transparency of clinical trials.

]]>
<![CDATA[Systematic Omics Analysis Review (SOAR) Tool to Support Risk Assessment]]> https://www.researchpad.co/article/5989da7cab0ee8fa60b98b79

Environmental health risk assessors are challenged to understand and incorporate new data streams as the field of toxicology continues to adopt new molecular and systems biology technologies. Systematic screening reviews can help risk assessors and assessment teams determine which studies to consider for inclusion in a human health assessment. A tool for systematic reviews should be standardized and transparent in order to consistently determine which studies meet minimum quality criteria prior to performing in-depth analyses of the data. The Systematic Omics Analysis Review (SOAR) tool is focused on assisting risk assessment support teams in performing systematic reviews of transcriptomic studies. SOAR is a spreadsheet tool of 35 objective questions developed by domain experts, focused on transcriptomic microarray studies, and including four main topics: test system, test substance, experimental design, and microarray data. The tool will be used as a guide to identify studies that meet basic published quality criteria, such as those defined by the Minimum Information About a Microarray Experiment standard and the Toxicological Data Reliability Assessment Tool. Seven scientists were recruited to test the tool by using it to independently rate 15 published manuscripts that study chemical exposures with microarrays. Using their feedback, questions were weighted based on importance of the information and a suitability cutoff was set for each of the four topic sections. The final validation resulted in 100% agreement between the users on four separate manuscripts, showing that the SOAR tool may be used to facilitate the standardized and transparent screening of microarray literature for environmental human health risk assessment.

]]>
<![CDATA[Misconduct Policies in High-Impact Biomedical Journals]]> https://www.researchpad.co/article/5989dab4ab0ee8fa60bac3dc

Background

It is not clear which research misconduct policies are adopted by biomedical journals. This study assessed the prevalence and content of the most influential biomedical journals' policies on misconduct and of their procedures for handling and responding to allegations of misconduct.

Methods

We conducted a cross-sectional study of misconduct policies of 399 high-impact biomedical journals in 27 biomedical categories of the Journal Citation Reports in December 2011. Journal websites were reviewed for information relevant to misconduct policies.

Results

Of 399 journals, 140 (35.1%) provided explicit definitions of misconduct. Falsification was explicitly mentioned by 113 (28.3%) journals, fabrication by 104 (26.1%), plagiarism by 224 (56.1%), duplication by 242 (60.7%) and image manipulation by 154 (38.6%). Procedures for responding to misconduct were described on 179 (44.9%) websites, including retraction (30.8%) and expression of concern (16.3%). Plagiarism-checking services were used by 112 (28.1%) journals. The prevalence of all types of misconduct policies was higher in journals that endorsed any policy from editors’ associations, the Office of Research Integrity or professional societies than in those that did not state adherence to these policy-producing bodies. Elsevier and Wiley-Blackwell had the most journals included (22.6% and 14.8%, respectively), with Wiley journals having a greater prevalence of misconduct definitions and of policies on falsification, fabrication and expression of concern, and Elsevier of plagiarism-checking services.

Conclusions

Only a third of top-ranking peer-reviewed journals had publicly-available definitions of misconduct and less than a half described procedures for handling allegations of misconduct. As endorsement of international policies from policy-producing bodies was positively associated with implementation of policies and procedures, journals and their publishers should standardize their policies globally in order to increase public trust in the integrity of the published record in biomedicine.

]]>
<![CDATA[Reporting Recommendations for Tumor Marker Prognostic Studies (REMARK): Explanation and Elaboration]]> https://www.researchpad.co/article/5989da94ab0ee8fa60ba164f

The REMARK “elaboration and explanation” guideline, by Doug Altman and colleagues, provides a detailed reference for authors on important issues to consider when designing, conducting, and analyzing tumor marker prognostic studies.

]]>
<![CDATA[Results and Outcome Reporting In ClinicalTrials.gov, What Makes it Happen?]]> https://www.researchpad.co/article/5989da39ab0ee8fa60b872bc

Background

At the end of the past century there were multiple concerns regarding the lack of transparency in the conduct of clinical trials, as well as ethical and scientific issues affecting trial design and reporting. In 2000, the ClinicalTrials.gov data repository was developed and deployed to serve the public and scientific communities with valid data on clinical trials. Later, in order to increase the completeness of deposited data and the transparency of medical research, a set of requirements was imposed, making results deposition compulsory in multiple cases.

Methods

We investigated the efficiency of results deposition and outcome reporting, which factors have a positive impact on the provision of information of interest and which make it more difficult, and whether efficiency depends on the kind of institution sponsoring a trial. Data from the ClinicalTrials.gov repository were classified according to the trial sponsor's type of institution. Odds ratios for results and outcome reporting were calculated by sponsor class.
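The odds-ratio comparison can be sketched for a single sponsor class against the rest, with a Wald 95% CI computed on the log scale. The counts below are hypothetical, not drawn from the article:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    rows = sponsor class A vs. others, columns = results deposited vs. not."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical: 300/500 industry-sponsored trials deposited results vs. 200/600 others
or_, ci = odds_ratio(300, 200, 200, 400)
```

A CI that excludes 1 indicates a sponsor-class difference in the odds of depositing results; the interval is asymmetric around the point estimate because it is symmetric on the log scale.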

Results

As of 01/01/2012, 118,602 clinical trial data deposits had been made to the repository, coming from 9,068 different sources. Of these, 35,344 (29.8%) are flagged as FDA-regulated and 25,151 (21.2%) as controlled by Section 801. Despite multiple regulatory requirements, only about 35% of trials had clinical study results deposited; the maximum, 55.56% of trials with results, was observed for trials completed in 2008.

Conclusions

The imposed requirements had the most positive impact on results deposition for hospitals and clinics. Health care companies showed much higher efficiency than the other investigated classes, both in the fraction of trials with results and in providing at least one outcome for their trials. They also deposited results more often than others when not strictly required, particularly in the case of non-interventional studies.

]]>
<![CDATA[The Devil Is in the Details: Incomplete Reporting in Preclinical Animal Research]]> https://www.researchpad.co/article/5989d9e5ab0ee8fa60b6ae86

Incomplete reporting of study methods and results has become a focal point for failures in the reproducibility and translation of findings from preclinical research. Here we demonstrate that incomplete reporting of preclinical research is not limited to a few elements of research design, but rather is a broader problem that extends to the reporting of the methods and results. We evaluated 47 preclinical research studies from a systematic review of acute lung injury that use mesenchymal stem cells (MSCs) as a treatment. We operationalized the ARRIVE (Animal Research: Reporting of In Vivo Experiments) reporting guidelines for pre-clinical studies into 109 discrete reporting sub-items and extracted 5,123 data elements. Overall, studies reported less than half (47%) of all sub-items (median 51 items; range 37–64). Across all studies, the Methods Section reported less than half (45%) and the Results Section reported less than a third (29%). There was no association between journal impact factor and completeness of reporting, which suggests that incomplete reporting of preclinical research occurs across all journals regardless of their perceived prestige. Incomplete reporting of methods and results will impede attempts to replicate research findings and maximize the value of preclinical studies.

]]>