ResearchPad - government-laboratories https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks
<![CDATA[The perceived impact of isoniazid resistance on outcome of first-line rifampicin-throughout regimens is largely due to missed rifampicin resistance]]> https://www.researchpad.co/article/elastic_article_15716

Meta-analyses of the impact of isoniazid-resistant tuberculosis informed the World Health Organization recommendation of a levofloxacin-strengthened rifampicin-based regimen. We estimated the effect of initial rifampicin resistance (Rr) and/or isoniazid resistance (Hr) on treatment failure or relapse. We also determined the frequency of missed initial and acquired Rr to estimate the impact of true Hr.

Methods

Retrospective analysis of 7,291 treatment episodes with known initial isoniazid and rifampicin status, obtained from individual patient databases maintained by the Damien Foundation Bangladesh over 20 years. Drug susceptibility test results were confirmed by the programme’s designated supra-national tuberculosis laboratory. To detect missed Rr among isolates routinely classified as Hr, rpoB gene sequencing was done on a random sample and on a sample selected for suspected missed Rr.

Results

Initial Hr caused a large excess of recurrences after the 8-month regimen for new cases (rifampicin for two months), but had little impact on rifampicin-throughout regimens (6-month new-case regimen: 3.8%, OR 0.8, 95%CI 0.3–2.8; 8-month retreatment regimen: 7.3%, OR 1.8, 95%CI 1.3–2.6). Rr was missed in 7.6% of randomly selected "Hr" strains. Acquired Rr was frequent among recurrences on rifampicin-throughout regimens, particularly after the retreatment regimen (31.9%). It was higher in mono-Hr (29.3%; aOR 3.5, 95%CI 1.5–8.5) and poly-Hr (53.3%; aOR 10.2, 95%CI 4.4–23.7) than in susceptible tuberculosis, but virtually absent after the 8-month new-case regimen. Comparing Bangladesh (low Rr prevalence) with a high-Rr-prevalence setting, true Hr corrected for missed Rr caused only 2–3 treatment failures per 1000 TB cases (of whom 27% were retreatments) in both.

Conclusions

Our analysis reveals a non-negligible extent of misclassification as isoniazid resistance of what is actually missed multidrug-resistant tuberculosis. Recommending for such cases a “strengthened” regimen containing a fluoroquinolone opens a direct route to extensive resistance, while offering little benefit given the minor role of true Hr tuberculosis in rifampicin-throughout first-line regimens.

]]>
<![CDATA[A twenty-eight-year laboratory-based retrospective trend analysis of malaria in Dakar, Senegal]]> https://www.researchpad.co/article/elastic_article_14736

Health facility-based records offer a rich source of information for understanding trends and changes in malaria cases over time. This study aimed to determine the changes in malaria occurrence over the last 28 years, from 1989 to 2016, in Dakar, Senegal.

Methods

Laboratory-suspected and confirmed malaria records from 1989 to 2016 were reviewed from the registers of the Laboratory of Parasitology and Mycology of Aristide Le Dantec Hospital. Interrupted time series (ITS) analysis was used to estimate the changes by comparing malaria cases post-intervention (2006–2016) with those of the pre-intervention (1989–2005) period.

Results

A total of 5,876 laboratory-confirmed malaria cases were reported out of 29,852 tested cases, a total slide positivity rate (SPR) of 19.7%. Malaria case counts exhibited a fluctuating trend, with major peaks in 1995 and 2003 (SPR of 42.3% and 42.5%, respectively). Overall, a remarkable decline in the total number of laboratory-confirmed malaria cases was observed over the last 28 years. P. falciparum was almost the only reported species, accounting for 99.98% of cases. The highest SPR was observed in the under-five age group during the pre-intervention period, shifting to the 6–15-year age group in subsequent years. Two major malaria peak seasons were observed: one in September during the pre-intervention period and one in November during the post-intervention period. The ITS analysis showed a dramatic decline of 83.6% in SPR following the scale-up of interventions in 2006.

Conclusion

A remarkable decline in laboratory-confirmed malaria cases in Dakar over 28 years was observed. The period of rapid decline in malaria SPR coincided with the scale-up of interventions beginning in 2006 with the introduction of ACTs, followed by the widespread introduction in 2008 of insecticide-treated bed nets. Robust surveillance data should be maintained in the context of malaria elimination efforts.

]]>
<![CDATA[Comparison of triglyceride glucose index and related parameters to predict insulin resistance in Korean adults: An analysis of the 2007-2010 Korean National Health and Nutrition Examination Survey]]> https://www.researchpad.co/article/5c990272d5eed0c484b97e62

The triglyceride glucose (TyG) index, a product of triglyceride and fasting glucose, is a reliable marker for insulin resistance (IR). Obesity is also known to be closely related to IR. Recently, the efficiency of TyG-related markers that combine obesity markers with the TyG index has been studied; however, earlier studies were limited in number and their results were inconsistent. Therefore, in this study, we investigated the efficiency of several combinations of the TyG index and obesity indices, namely body mass index (BMI), waist circumference (WC), and waist-to-height ratio (WHtR), in reflecting IR. Data were obtained from the Korean National Health and Nutrition Examination Survey for 2007–2010. A total of 11,149 subjects (4,777 men and 6,372 women) were included. IR was defined as a homeostasis model assessment for IR (HOMA-IR) value above the 75th percentile for each sex. Logistic regression analysis was performed, after adjusting for confounding factors, to compare and identify the associations of the 4 parameters (TyG index, TyG-BMI, TyG-WC, and TyG-WHtR) with IR. For each parameter, odds ratios (ORs) and 95% confidence intervals (CIs) of quartiles 2–4 were calculated, with quartile 1 as the reference. A receiver operating characteristic (ROC) curve analysis was conducted to evaluate the ability of each parameter to predict IR. The adjusted ORs of quartile 4 in comparison with quartile 1 (95% CIs) for IR were 7.60 (6.52–8.87) for the TyG index, 12.82 (10.89–15.10) for TyG-BMI, 16.29 (13.70–19.38) for TyG-WC, and 14.86 (12.53–17.62) for TyG-WHtR. The areas under the ROC curve were 0.690 for the TyG index, 0.748 for TyG-BMI, 0.731 for TyG-WC, and 0.733 for TyG-WHtR. In conclusion, TyG-BMI was found to be superior to the other parameters for IR prediction. We propose TyG-BMI as an alternative marker for assessing IR in clinical settings.
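As a reference for how these markers are derived, a minimal sketch using the commonly published formulas, which the abstract does not spell out: TyG = ln[triglycerides (mg/dL) × fasting glucose (mg/dL) / 2], and each combined marker is the TyG index multiplied by the corresponding obesity index.

```python
import math

def tyg(tg_mgdl: float, glucose_mgdl: float) -> float:
    """TyG index: ln(triglycerides [mg/dL] x fasting glucose [mg/dL] / 2)."""
    return math.log(tg_mgdl * glucose_mgdl / 2)

def tyg_bmi(tg: float, glu: float, weight_kg: float, height_m: float) -> float:
    """TyG-BMI: TyG index multiplied by body mass index (kg/m^2)."""
    return tyg(tg, glu) * (weight_kg / height_m ** 2)

def tyg_wc(tg: float, glu: float, waist_cm: float) -> float:
    """TyG-WC: TyG index multiplied by waist circumference (cm)."""
    return tyg(tg, glu) * waist_cm

def tyg_whtr(tg: float, glu: float, waist_cm: float, height_cm: float) -> float:
    """TyG-WHtR: TyG index multiplied by waist-to-height ratio."""
    return tyg(tg, glu) * (waist_cm / height_cm)

# Example: TG 150 mg/dL, fasting glucose 100 mg/dL -> tyg = ln(7500) ~ 8.92
```

The HOMA-IR outcome used to define IR is a separate insulin-based index and is not reproduced here.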

]]>
<![CDATA[Evaluation of direct costs associated with alveolar and cystic echinococcosis in Austria]]> https://www.researchpad.co/article/5c5ca2d2d5eed0c48441eb7a

Background

Cystic echinococcosis (CE) is a globally occurring zoonosis, whereas alveolar echinococcosis (AE) is endemic only in certain parts of the Northern Hemisphere. The socioeconomic impact of human echinococcosis has been shown to be considerable in highly endemic regions. However, detailed data on direct healthcare-related costs associated with CE and AE are scarce for high income countries. The aim of this study was to evaluate direct costs of human disease caused by CE and AE in Austria.

Methods

Clinical data from a registry maintained at a national reference center for echinococcosis at the Medical University of Vienna were obtained for the years 2012–2014. These data were used in conjunction with epidemiological data from Austria’s national disease reporting system and diagnostic reference laboratory for echinococcosis to assess nationwide costs attributable to CE and AE.

Results

In Austria, total modelled direct costs were 486,598€ (95%CI 341,825€–631,372€) per year for CE, and 683,824€ (95%CI 469,161€–898,486€) for AE. Median costs per patient with AE from diagnosis until the end of a 10-year follow-up period were 30,832€ (25th–75th percentile: 23,197€–31,220€) for inoperable and 62,777€ (25th–75th percentile: 60,806€–67,867€) for operable patients. Median costs per patient with CE from diagnosis until the end of the 10-year follow-up were 16,253€ (25th–75th percentile: 8,555€–24,832€) for patients with active and 1,786€ (25th–75th percentile: 736€–2,146€) for patients with inactive cyst stages. The first year after inclusion was the most cost-intensive year in the observed period, with hospitalizations and albendazole therapy the main contributors to direct costs.

Conclusions

This study provides detailed information on direct healthcare-related costs associated with CE and AE in Austria, which may reflect trends for other high-income countries. Surgery and albendazole therapy (the latter owing to surprisingly high drug prices) were identified as important cost-drivers. These data will be important for cost-effectiveness analyses of possible prevention programs.

]]>
<![CDATA[BRCA Challenge: BRCA Exchange as a global resource for variants in BRCA1 and BRCA2]]> https://www.researchpad.co/article/5c2d2eb3d5eed0c484d9b2c0

The BRCA Challenge is a long-term data-sharing project initiated within the Global Alliance for Genomics and Health (GA4GH) to aggregate BRCA1 and BRCA2 data to support highly collaborative research activities. Its goal is to generate an informed and current understanding of the impact of genetic variation on cancer risk across the iconic cancer predisposition genes, BRCA1 and BRCA2. Initially, reported variants in BRCA1 and BRCA2 available from public databases were integrated into a single, newly created site, www.brcaexchange.org. The purpose of the BRCA Exchange is to provide the community with a reliable and easily accessible record of variants interpreted for a high-penetrance phenotype. More than 20,000 variants have been aggregated, three times the number found in the next-largest public database at the project’s outset, of which approximately 7,250 have expert classifications. The data set is based on shared information from existing clinical databases—Breast Cancer Information Core (BIC), ClinVar, and the Leiden Open Variation Database (LOVD)—as well as population databases, all linked to a single point of access. The BRCA Challenge has brought together the existing international Evidence-based Network for the Interpretation of Germline Mutant Alleles (ENIGMA) consortium expert panel, along with expert clinicians, diagnosticians, researchers, and database providers, all with a common goal of advancing our understanding of BRCA1 and BRCA2 variation. Ongoing work includes direct contact with national centers with access to BRCA1 and BRCA2 diagnostic data to encourage data sharing, development of methods suitable for extraction of genetic variation at the level of individual laboratory reports, and engagement with participant communities to enable a more comprehensive understanding of the clinical significance of genetic variation in BRCA1 and BRCA2.

]]>
<![CDATA[Laboratory challenges of Plasmodium species identification in Aceh Province, Indonesia, a malaria elimination setting with newly discovered P. knowlesi]]> https://www.researchpad.co/article/5c0ae437d5eed0c4845892c2

The discovery of the life-threatening zoonotic infection Plasmodium knowlesi has added to the challenges of prompt and accurate malaria diagnosis and surveillance. In this study from Aceh Province, Indonesia, a malaria elimination setting where P. knowlesi endemicity was not previously known, we report the laboratory investigation and difficulties encountered when using molecular detection methods for quality assurance of microscopically identified clinical cases. From 2014 to 2015, 20 (49%) P. falciparum, 16 (39%) P. vivax, 3 (7%) P. malariae, and 2 (5%) indeterminate species were identified by microscopy from four sentinel health facilities. At a provincial-level reference laboratory, loop-mediated isothermal amplification (LAMP), a field-friendly molecular method, was performed and confirmed Plasmodium in all samples, though further species identification was limited by the unavailability of non-falciparum species-specific testing on the platform used. At a national reference laboratory, several molecular methods, including nested PCR (nPCR) targeting the small subunit (18S) ribosomal RNA, nPCR targeting the cytochrome-b (cytb) gene, a P. knowlesi-specific nPCR, and finally sequencing, were necessary to ultimately classify the samples as 19 (46%) P. knowlesi, 8 (20%) P. falciparum, and 14 (34%) P. vivax. Microscopy was unable to identify, or misclassified, up to 56% of confirmed cases, including all cases of P. knowlesi. With the nPCR methods targeting the four human-only species, P. knowlesi was missed (18S rRNA method) or showed cross-reactivity with P. vivax (cytb method). To facilitate diagnosis and management of potentially fatal P. knowlesi infection, and surveillance for elimination of human-only malaria in Indonesia and other affected settings, new detection methods are needed for testing at the point of care and in local reference laboratories.

]]>
<![CDATA[Cervical cancer screening in Sweden 2014-2016]]> https://www.researchpad.co/article/5c215143d5eed0c4843f9669

Background

To enable incremental optimization of screening, regular reporting of quality indicators is required.

Aim

To report key quality indicators and basic statistics about cervical screening in Sweden.

Methods

We collected individual level data on all cervical cytologies, histopathologies, human papillomavirus tests and all invitations for cervical screening in Sweden during 2013–2016.

Results

There were over 2,278,000 cervical samples collected in Sweden in 2014–2016. Organized samples (resulting from an invitation) constituted 69% of samples. The screening test coverage of all resident women aged 23–60 was 82%. The coverage has slowly increased for >10 years. There is large variability between counties (from 71% to 92%) over time. There were 25,725 women with high-grade lesions in cytology during 2013–2015. Only 96% of these women had a follow-up histopathology within a year. Cervical cancer incidence showed an increasing trend.

Conclusion

Key quality indicators such as population coverage and follow-up rates were stable or improving, but there was nevertheless an unexplained cervical cancer increase.

]]>
<![CDATA[Distribution of HCV genotypes in Belgium from 2008 to 2015]]> https://www.researchpad.co/article/5c117b69d5eed0c484699195

Background

The knowledge of circulating HCV genotypes and subtypes in a country is crucial to guide antiviral therapy and to understand local epidemiology. Studies investigating circulating HCV genotypes and their trends have been conducted in Belgium. However, these studies are outdated, lack nationwide representativeness, or were not conducted in the general population.

Methods

In order to determine the distribution of the circulating HCV genotypes in Belgium, we conducted a multicentre study with all 19 Belgian laboratories performing reimbursed HCV genotyping assays. Available genotype and subtype data were collected for the period 2008 to 2015. Furthermore, a limited number of other variables were collected: demographic characteristics of the patients and the laboratory technique used to determine the HCV genotype.

Results

For the study period, 11,033 unique records collected by the participating laboratories were used for further investigation.

HCV genotype 1 was the most prevalent (53.6%) genotype in Belgium, with G1a and G1b representing 19.7% and 31.6%, respectively. Genotype 3 was the next most prevalent (22.0%). Genotypes 4, 2, and 5 were responsible for 16.1%, 6.2%, and 1.9% of HCV infections, respectively; genotypes 6 and 7 comprised the remaining <1%. Throughout the years, a stable distribution was observed for most genotypes. Only genotype 5 showed a decrease over the years of analysis: 3.6% in 2008, 2.3% in 2009, and 1.6% in the remaining years.
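Percentages like these reduce to a frequency count over the per-record genotype and sex fields; a minimal sketch with a hypothetical record layout (not taken from the study's actual databases):

```python
from collections import Counter

def genotype_distribution(records):
    """records: iterable of (genotype, sex) tuples, e.g. ("1b", "M").
    Returns ({genotype: percentage of records}, overall M:F ratio)."""
    genotypes = Counter(g for g, _ in records)
    sexes = Counter(s for _, s in records)
    total = sum(genotypes.values())
    dist = {g: 100 * n / total for g, n in genotypes.items()}
    mf_ratio = sexes["M"] / sexes["F"] if sexes["F"] else float("inf")
    return dist, mf_ratio

# Tiny illustrative record set
dist, mf = genotype_distribution([("1b", "M"), ("1a", "F"), ("3", "M"), ("3", "M")])
# dist["3"] == 50.0, mf == 3.0
```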

The overall M:F ratio was 1.59 and was mainly driven by the high M:F ratio of 3.03 for patients infected with genotype 3. Patients infected with genotype 3 are also younger (mean age 41.7 years) than patients infected with other genotypes (mean age above 50 years for all genotypes). The patients for whom a genotyping assay was performed in 2008 were younger than those from 2015.

Geographical distribution demonstrates that an important number of genotyped HCV patients live outside the Belgian metropolitan cities.

Conclusion

This national monitoring study provided a clear and objective view of the circulating HCV genotypes in Belgium and will help health authorities establish cost-effectiveness determinations before implementing new treatment strategies.

This baseline characterization of the circulating genotypes is indispensable for a continuous surveillance, especially for the investigation of the possible impact of migration from endemic regions and prior to the increasing use of highly potent direct-acting antiviral (DAA) agents.

]]>
<![CDATA[Importance of real-time RT-PCR to supplement the laboratory diagnosis in the measles elimination program in China]]> https://www.researchpad.co/article/5c0ae44dd5eed0c484589580

In addition to high vaccination coverage, timely and accurate laboratory confirmation of measles cases is critical to interrupt measles transmission. To evaluate the role of real-time reverse transcription-polymerase chain reaction (RT-PCR) in the diagnosis of measles cases, 46,363 suspected measles cases with rash and 395 suspected measles cases without rash were analyzed in this study; the cases were obtained from the Chinese measles surveillance system (MSS) during 2014–2017 and were tested in parallel by measles-specific IgM enzyme-linked immunosorbent assay (ELISA) and real-time RT-PCR. Some IgM-negative measles cases were identified only by real-time RT-PCR. The proportion of these IgM-negative, viral nucleic acid-positive measles cases was high among cases with a measles vaccination history, cases without rash symptoms, and cases whose specimens were collected within 3 days of onset. In the 0–3 day collection group, this proportion was up to 14.4% for measles cases with rash and 40% for measles cases without rash. Moreover, the proportion of IgM-negative, nucleic acid-positive measles cases increased gradually with the number of measles vaccine doses received. Therefore, integrated with IgM ELISA, real-time RT-PCR would greatly improve the accurate diagnosis of measles and avoid missing cases, especially during the first few days after onset, when patients are highly contagious, and in cases of secondary vaccine failure. In conclusion, our study reconfirmed that IgM ELISA is the gold-standard assay for measles case confirmation. However, real-time RT-PCR should be introduced to supplement the laboratory diagnosis, especially in pre-elimination and/or elimination settings wherever appropriate.

]]>
<![CDATA[Cryptococcal antigen positivity combined with the percentage of HIV-seropositive samples with CD4 counts <100 cells/μl identifies districts in South Africa with advanced burden of disease]]> https://www.researchpad.co/article/5b297b7d463d7e06147f8d6f

Introduction

Cryptococcal meningitis (CM) is an opportunistic fungal disease with high mortality among HIV-positive patients with severe immunosuppression (CD4 count <100 cells/μl). Reflex screening for cryptococcal antigen (CrAg) in remnant blood samples was initially piloted at selected CD4 testing laboratories of the National Health Laboratory Service (NHLS) prior to the implementation of a national screening programme using a lateral flow assay (LFA) (IMMY, Norman, OK, USA). The aim of this study was to assess CrAg positivity nationally, per province and per district, in combination with the percentage of CD4 samples with a count <100 cells/μl, to identify areas with an advanced HIV/CrAg disease burden.

Methods

CrAg and CD4 laboratory result data were extracted from the NHLS corporate data warehouse. Monthly test volumes were used to assess CrAg test volumes and coverage, while bubble charts were used to display the relationship between CD4 <100 cells/μl, CrAg positivity and number of positive CrAg samples by district. ArcGIS software was used to spatially report CrAg positivity.

Results

CrAg screening coverage was stable at around 96% after November 2016. The percentage of samples with a CD4 <100 cells/μl and CrAg positivity were also stable over the study period, at 10% and ~5% respectively. The highest CrAg positivity was reported for the KwaZulu-Natal province (7.3%), which also had the lowest percentage of samples with a CD4 <100 cells/μl (7.2%). Uthungulu and Umkhanyakude districts had the highest CrAg positivity (9.3% and 8.9% respectively). Ethekwini and Johannesburg Metro districts contributed 22% of the total number of CrAg-positive samples tested across South Africa for the period reported.

Conclusion

Existing CD4 testing services were used to rapidly scale up CrAg reflex testing in South Africa. Districts with advanced HIV and CrAg disease burden were identified that need further investigation of patient management interventions.

]]>
<![CDATA[Molecular Genetics External Quality Assessment Pilot Scheme for Irinotecan-Related UGT1A1 Genotyping in China]]> https://www.researchpad.co/article/5989d9f3ab0ee8fa60b6f0e8

Irinotecan is widely used in the treatment of solid tumors, especially colorectal and lung cancer. Molecular testing for UGT1A1 genotyping is increasingly required in China for optimal irinotecan administration. To determine the performance of laboratories across the whole UGT1A1 testing process and to ensure the consistency and accuracy of test results, the National Center for Clinical Laboratories conducted an external quality assessment program for UGT1A1*28 genotyping in 2015. The panel, which comprised four samples with known mutations and six wild-type samples, was distributed to 45 laboratories that test for UGT1A1*28 polymorphisms. Participating laboratories were allowed to perform polymorphism analysis using their routine methods. The accuracy of the genotyping and reporting of results was analyzed. Other information from the individual laboratories, including the number of samples tested each month, accreditation/certification status, and test methodology, was reviewed. Forty-four of the 45 participants reported the correct results for all samples. There was only one genotyping error, corresponding to an analytical sensitivity of 99.44% (179/180 challenges; 95% confidence interval: 96.94−99.99%) and an analytical specificity of 100% (270/270 challenges; 95% confidence interval: 98.64−100%). Both commercial kits and laboratory-developed tests were commonly used, and pyrosequencing was the main methodology (n = 26, 57.8%). The style of the written reports varied widely, and many reports lacked information. In summary, this first UGT1A1 genotyping external quality assessment demonstrated that good-quality UGT1A1 genotype analysis was performed in the majority of the pharmacogenetic testing centers investigated. However, greater education on the reporting of UGT1A1 genetic testing results is needed.
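The reported sensitivity and specificity intervals are exact binomial (Clopper-Pearson) confidence limits. As an illustrative sketch (not the assessment program's actual software), they can be reproduced by bisection on the binomial CDF:

```python
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(x: int, n: int, alpha: float = 0.05):
    """Exact two-sided binomial confidence interval for x successes in n trials."""
    def bisect(too_small):
        lo, hi = 0.0, 1.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if too_small(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # Lower limit solves P(X >= x | p) = alpha/2; upper solves P(X <= x | p) = alpha/2.
    lower = 0.0 if x == 0 else bisect(lambda p: 1 - binom_cdf(x - 1, n, p) < alpha / 2)
    upper = 1.0 if x == n else bisect(lambda p: binom_cdf(x, n, p) >= alpha / 2)
    return lower, upper

# Analytical sensitivity: 179/180 correct challenges
lo, hi = clopper_pearson(179, 180)
# lo ~ 0.9694, hi ~ 0.9999, matching the reported 96.94-99.99% interval
```

For the 270/270 specificity result, the lower limit reduces to 0.025^(1/270) ≈ 98.64%, matching the reported interval.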

]]>
<![CDATA[Specimen origin, type and testing laboratory are linked to longer turnaround times for HIV viral load testing in Malawi]]> https://www.researchpad.co/article/5989db4fab0ee8fa60bdbc2c

Background

Efforts to reach UNAIDS’ treatment and viral suppression targets have increased demand for viral load (VL) testing and strained existing laboratory networks, affecting turnaround time. Longer VL turnaround times delay both initiation of formal adherence counseling and switches to second-line therapy for persons failing treatment and contribute to poorer health outcomes.

Methods

We utilized descriptive statistics and logistic regression to analyze VL testing data collected in Malawi between January 2013 and March 2016. The primary outcomes assessed were greater-than-median pretest phase turnaround time (days elapsed from specimen collection to receipt at the laboratory) and greater-than-median test phase turnaround time (days from receipt to testing).

Results

The median number of days between specimen collection and testing increased 3-fold between 2013 (8 days, interquartile range (IQR) = 6–16) and 2015 (24 days, IQR = 13–39) (p<0.001). Multivariable analysis indicated that the odds of longer pretest phase turnaround time were significantly higher for specimen collection districts without laboratories capable of conducting viral load tests (adjusted odds ratio (aOR) = 5.16; 95% confidence interval (CI) = 5.04–5.27), as well as for Malawi’s Northern and Southern regions. Longer test phase turnaround time was significantly associated with the use of dried blood spots instead of plasma (aOR = 2.30; 95% CI = 2.23–2.37) and with certain testing months and testing laboratories.
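The two outcome variables are simple to derive from specimen timestamps; a sketch of the pre-test phase computation (the dates below are hypothetical, not from the study data):

```python
from datetime import date
from statistics import median

def pretest_days(collected: date, received: date) -> int:
    """Pre-test phase turnaround: days from specimen collection to lab receipt."""
    return (received - collected).days

def greater_than_median_flags(days):
    """Binary outcome used in the logistic regression: 1 if above the median."""
    m = median(days)
    return [int(d > m) for d in days], m

days = [pretest_days(date(2015, 3, 1), date(2015, 3, 25)),
        pretest_days(date(2015, 3, 1), date(2015, 3, 14)),
        pretest_days(date(2015, 3, 1), date(2015, 3, 9))]
flags, m = greater_than_median_flags(days)  # days = [24, 13, 8], median = 13
```

The test phase outcome follows the same pattern with receipt and testing dates.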

Conclusion

Increased turnaround time for VL testing appeared to be driven in part by categorical factors specific to the phase of turnaround time assessed. Given the implications of longer turnaround time and the global effort to scale up VL testing, addressing these factors via increasing efficiencies, improving quality management systems and generally strengthening the VL spectrum should be considered essential components of controlling the HIV epidemic.

]]>
<![CDATA[Diagnosis of Persistent Fever in the Tropics: Set of Standard Operating Procedures Used in the NIDIAG Febrile Syndrome Study]]> https://www.researchpad.co/article/5989da41ab0ee8fa60b8a2f5

In resource-limited settings, the scarcity of skilled personnel and adequate laboratory facilities makes the differential diagnosis of fevers complex [15]. Febrile illnesses are diagnosed clinically in most rural centers, and both Rapid Diagnostic Tests (RDTs) and clinical algorithms can be valuable aids to health workers and facilitate therapeutic decisions [6,7]. The persistent fever syndrome targeted by NIDIAG is defined as presence of fever for at least one week. The NIDIAG clinical research consortium focused on potentially severe and treatable infections and therefore targeted the following conditions as differential diagnosis of persistent fever: visceral leishmaniasis (VL), human African trypanosomiasis (HAT), enteric (typhoid and paratyphoid) fever, brucellosis, melioidosis, leptospirosis, malaria, tuberculosis, amoebic liver abscess, relapsing fever, HIV/AIDS, rickettsiosis, and other infectious diseases (e.g., pneumonia). From January 2013 to October 2014, a prospective clinical phase III diagnostic accuracy study was conducted in one site in Cambodia, two sites in Nepal, two sites in Democratic Republic of the Congo (DRC), and one site in Sudan (clinicaltrials.gov no. NCT01766830). The study objectives were to (1) determine the prevalence of the target diseases in patients presenting with persistent fever, (2) assess the predictive value of clinical and first-line laboratory features, and (3) assess the diagnostic accuracy of several RDTs for the diagnosis of the different target conditions.

]]>
<![CDATA[Performance Evaluation of the Becton Dickinson FACSPresto™ Near-Patient CD4 Instrument in a Laboratory and Typical Field Clinic Setting in South Africa]]> https://www.researchpad.co/article/5989d9e0ab0ee8fa60b6952f

Background

The BD-FACSPresto CD4 is a new, point-of-care (POC) instrument utilising finger-stick capillary blood sampling. This study evaluated its performance against predicate CD4 testing in South Africa.

Methods

Phase-I testing: HIV+ patient samples (n = 214) were analysed on the Presto under ideal laboratory conditions using venous blood. During Phase-II, 135 patients were capillary-bled for CD4 testing on the FACSPresto, performed according to manufacturer instructions. Comparative statistical analyses against the predicate PLG/CD4 method and industry standards were done using GraphPad Prism 6. These included Bland-Altman with 95% limits of agreement (LOA) and percentage similarity with coefficient of variation (%CV) analyses for absolute CD4 count (cells/μl) and CD4 percentage of lymphocytes (CD4%).

Results

In Phase-I, 179/217 samples yielded reportable results with the Presto using venous-blood-filled cartridges. Compared to predicate, a mean bias of 40.4±45.8 (LOA of -49.2 to 130.2) and %similarity (%CV) of 106.1%±7.75 (7.3%) was noted for absolute CD4 counts. In the Phase-II field study, 118/135 capillary-bled Presto samples yielded CD4 results. Compared to predicate, a mean bias of 50.2±92.8 (LOA of -131.7 to 232) with %similarity (%CV) of 105%±10.8 (10.3%), and a mean bias of -2.87±2.7 (LOA of -8.2 to 2.5) with similarity of 94.7±6.5% (6.83%), were noted for absolute CD4 and CD4% respectively. No clinically significant differences were indicated for either parameter between the two sampling methods.
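The agreement statistics reported here follow standard definitions: bias is the mean of the paired differences, the 95% limits of agreement are bias ± 1.96 SD, and percentage similarity (one definition used in CD4 method-comparison studies) is the mean of ((new + reference)/2)/reference × 100. A sketch with made-up paired counts:

```python
from statistics import mean, stdev

def bland_altman(new, ref):
    """Mean bias and 95% limits of agreement (bias +/- 1.96 SD of differences)."""
    diffs = [n - r for n, r in zip(new, ref)]
    bias, sd = mean(diffs), stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def percent_similarity(new, ref):
    """Percentage similarity: mean of ((new + ref) / 2) / ref * 100."""
    return mean(((n + r) / 2) / r * 100 for n, r in zip(new, ref))

# Hypothetical paired CD4 counts (cells/uL): new instrument vs. predicate
presto = [410, 522, 298, 640]
predicate = [400, 500, 310, 600]
bias, (loa_lo, loa_hi) = bland_altman(presto, predicate)
ps = percent_similarity(presto, predicate)
```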

Conclusion

The Presto showed close agreement with the predicate methods, irrespective of venous or capillary blood sampling. A consistent, clinically insignificant over-estimation (5–7%) of counts against PLG/CD4, and equivalency to FACSCount, was noted. Further field studies are awaited to confirm longer-term use.

]]>
<![CDATA[HIV Testing among Patients with Presumptive Tuberculosis: How Do We Implement in a Routine Programmatic Setting? Results of a Large Operational Research from India]]> https://www.researchpad.co/article/5989da85ab0ee8fa60b9c1c3

Background

In March 2012, the World Health Organization recommended that HIV testing be offered to all patients with presumptive TB (previously called TB suspects). How this is best implemented and monitored in routine health care settings in India was not known. An operational research study was conducted in Karnataka State (South India, population 64 million, accounting for 10% of India’s HIV burden) to test processes and document the results and challenges of screening presumptive TB patients for HIV within routine health care settings.

Methods

In this cross-sectional study conducted between January-March 2012, all presumptive TB patients attending public sector sputum microscopy centres state-wide were offered HIV testing by the laboratory technician, and referred to the nearest public sector HIV counselling and testing services, usually within the same facility. The HIV status of the patients was recorded in the routine TB laboratory form and TB laboratory register. The laboratory register was compiled to obtain the number of presumptive TB patients whose HIV status was ascertained, and the number found HIV positive. Aggregate data on reasons for non-testing were compiled at district level.

Results

Overall, 115,308 patients with presumptive TB were examined by sputum smear microscopy at 645 microscopy centres state-wide. Of these, HIV status was ascertained for 62,847 (55%), among whom 7,559 (12%) were HIV-positive; of these, 3,034 (40%) were newly diagnosed. Reasons for non-testing were reported for 37,700 (72%) of the 52,461 patients without HIV testing; non-availability of testing services at the site of sputum collection was cited by health staff for 54% of these patients. Only 4% of patients opted out of HIV testing.

Conclusion

Offering HIV testing routinely to presumptive TB patients detected large numbers of previously-undetected instances of HIV infection. Several operational challenges were noted which provide useful lessons for improving uptake of HIV testing in this important group.

]]>
<![CDATA[National Approaches to Monitoring Population Salt Intake: A Trade-Off between Accuracy and Practicality?]]> https://www.researchpad.co/article/5989db17ab0ee8fa60bcd555

Aims

There is strong evidence that diets high in salt are bad for health and that salt reduction strategies are cost-effective. However, whilst it is clear that most people are eating too much salt, obtaining an accurate assessment of population salt intake is not straightforward, particularly in resource-poor settings. The objective of this study is to identify the approaches governments are taking to monitoring salt intake, with the ultimate goal of identifying what actions are needed to address challenges to monitoring salt intake, especially in low- and middle-income countries.

Methods and Results

A written survey was issued to governments to establish the details of their monitoring methods. Of the 30 countries that reported conducting formal government salt monitoring activities, 73% were high-income countries. Fewer than half of the 30 countries used the most accurate method of assessing salt intake, 24-hour urine collection, and only two of these were developing countries. The remainder mainly relied on estimates from dietary surveys.

Conclusions

The study identified a strong need to establish more practical ways of assessing salt intake, as well as for technical support and advice to ensure that low- and middle-income countries can implement salt monitoring activities effectively.

]]>
<![CDATA[The Challenges of Conducting Clinical Research on Neglected Tropical Diseases in Remote Endemic Areas in Sudan]]> https://www.researchpad.co/article/5989dac2ab0ee8fa60bb0fcc ]]> <![CDATA[Invasive Group B Streptococcal Disease in South Africa: Importance of Surveillance Methodology]]> https://www.researchpad.co/article/5989da0fab0ee8fa60b79247

Data on the burden of neonatal group B streptococcal (GBS) invasive disease are needed to refine prevention policies. Differences in surveillance methods and case ascertainment can lead to varying disease burden estimates. We compared the findings of laboratory-based passive surveillance for GBS disease across South Africa and, for one province, compared these to real-time, systematic, clinical surveillance in a population-defined region of Johannesburg, Soweto. Passive surveillance identified a total of 799 early-onset disease (EOD, <7 days of age) and 818 late-onset disease (LOD, 7–89 days of age) cases nationwide. The passive surveillance provincial incidence ranged from 0.00 to 1.23/1000 live births for EOD and from 0.03 to 1.04/1000 live births for LOD. The passive surveillance rates for Soweto were not significantly different from those of the systematic surveillance (EOD 1.23 [95%CI 1.06–1.43] vs. 1.50 [95%CI 1.30–1.71], rate ratio 0.82 [95%CI 0.67–1.01]; LOD 1.04 [95%CI 0.90–1.23] vs. 1.22 [95%CI 1.05–1.42], rate ratio 0.85 [95%CI 0.68–1.07]). A review of the few cases missed by the passive system in Soweto suggested that missing key identifiers, such as date of birth, resulted in their omission during the electronic data extraction process. Our analysis suggests that passive surveillance provides a modestly lower estimate of invasive GBS rates than real-time sentinel-site systematic surveillance; however, this is unlikely to explain the provincial variability in the incidence of invasive GBS disease in South Africa. The variability possibly reflects invasive GBS disease going undiagnosed owing to issues related to access to healthcare, poor laboratory capacity, varying diagnostic procedures, or empiric antibiotic treatment of neonates with suspected sepsis without attempting to make a microbiological diagnosis.
An efficacious GBS vaccine for pregnant women, when available, could be used as a probe to better quantify the burden of invasive GBS disease in low- and middle-resource settings such as ours. Our study indicates that passive systems are valuable for monitoring trends over time as long as they are interpreted with caution; active systems give more detailed information and will gain representativeness when expanded to other surveillance sites.

]]>
<![CDATA[A Study on the Analytical Sensitivity of 6 BSE Tests Used by the Canadian BSE Reference Laboratory]]> https://www.researchpad.co/article/5989da02ab0ee8fa60b749b0

Bovine spongiform encephalopathy (BSE) surveillance programs have been employed in numerous countries to monitor BSE prevalence and to protect animal and human health. Since 1999, the European Commission (EC) has authorized the evaluation and approval of 20 molecular-based tests for the rapid detection of the pathological prion protein (PrPsc) in BSE infection. The diagnostic sensitivity, convenience, and speed of these tests have made molecular diagnostics the preferred method for BSE surveillance. The aim of this study was to determine the analytical sensitivity of 4 commercially available BSE rapid-test kits: the Prionics®-Check WESTERN, the Prionics®-Check PrioSTRIP™, the BioRad® TeSeE™ ELISA, and the IDEXX® HerdChek™ EIA. The performance of these tests was then compared to that of 2 confirmatory tests, the BioRad® TeSeE™ Western Blot and the modified Scrapie Associated Fibrils (SAF)/OIE Immunoblot. One 50% w/v homogenate was made in ddH2O from experimentally generated C-type BSE brain tissue. Homogenates were serially diluted through a background of BSE-negative brainstem homogenate. Masses of both positive and negative tissue in each dilution were calculated to maintain the appropriate tissue amount for each test platform, and concentrated homogenization buffer was added accordingly to maintain the correct buffer conditions for each test. ELISA-based tests were evaluated using their respective software/detection platforms; blot protocols were evaluated by manual measurement of blot signal density. Detection limits were determined from the points at which fitted curves intersected the manufacturers' positive/negative criteria. The confirmatory SAF Immunoblot displayed the highest analytical sensitivity, followed by the IDEXX® HerdChek™ EIA, the BioRad® TeSeE™ Western Blot, the BioRad® TeSeE™ ELISA, the Prionics®-Check PrioSTRIP™, and the Prionics®-Check WESTERN™, respectively.
Although the tests performed at different levels of sensitivity, the most and least sensitive of the rapid tests were separated by only 2 logs in analytical sensitivity, meeting European performance requirements. All rapid tests appear suitable for targeted BSE surveillance programs, as implemented in Canada.

]]>
<![CDATA[Analysis and Presentation of Cumulative Antimicrobial Susceptibility Test Data – The Influence of Different Parameters in a Routine Clinical Microbiology Laboratory]]> https://www.researchpad.co/article/5989db03ab0ee8fa60bc745f

Introduction

Many clinical microbiology laboratories report cumulative antimicrobial susceptibility testing (cAST) data on a regular basis. The criteria for generating cAST reports, however, are often obscure and inconsistent. Although the CLSI has published a guideline for the analysis and presentation of cAST data, no national guidelines directed at clinical microbiology laboratories are available in Europe. We therefore sought to describe the influence of different parameters on the process of cAST data analysis in the setting of a German routine clinical microbiology laboratory over 2 consecutive years.

Material and Methods

We developed several program scripts to assess the consequences of different algorithms for calculating cumulative antibiograms from the data collected in our clinical microbiology laboratory in 2013 and 2014.

Results

One of the most pronounced effects was caused by the exclusion of screening cultures for multi-drug resistant organisms, which in some cases decreased the MRSA rate to one third of its original value. Depending on the handling of duplicate isolates, i.e. isolates of the same species recovered from successive cultures of the same patient during the period analyzed, we recorded differences in resistance rates of up to 5 percentage points for S. aureus, E. coli and K. pneumoniae, and up to 10 percentage points for P. aeruginosa. Stratification by site of care and specimen type, testing of antimicrobials selectively on resistant isolates, changes in interpretation rules, and analysis at genus level instead of species level resulted in further changes to the calculated antimicrobial resistance rates.
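The duplicate-isolate effect described above can be illustrated with a minimal sketch: counting every isolate versus keeping only the first isolate per patient and species can yield very different resistance rates. The patient IDs, species, and susceptibility results below are invented for the example.

```python
def resistance_rate(records):
    """Percentage of records flagged resistant ('R')."""
    resistant = sum(1 for _, _, result in records if result == "R")
    return 100.0 * resistant / len(records)

def first_isolate_per_patient(records):
    """Keep only the first isolate of each species per patient."""
    seen, kept = set(), []
    for patient, species, result in records:
        if (patient, species) not in seen:
            seen.add((patient, species))
            kept.append((patient, species, result))
    return kept

# Hypothetical isolate records: (patient_id, species, result);
# one patient contributes three successive resistant cultures.
isolates = [
    ("p1", "E. coli", "R"), ("p1", "E. coli", "R"), ("p1", "E. coli", "R"),
    ("p2", "E. coli", "S"), ("p3", "E. coli", "S"), ("p4", "E. coli", "S"),
]

print(resistance_rate(isolates))                             # 50.0
print(resistance_rate(first_isolate_per_patient(isolates)))  # 25.0
```

Repeat cultures of a resistant strain inflate the all-isolates rate (50%) relative to the first-isolate-per-patient rate (25%), which is why the handling of duplicates must be reported alongside any cumulative antibiogram.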

Conclusion

The choice of parameters for cAST data analysis may have a substantial influence on calculated antimicrobial resistance rates. Consequently, comparability of cAST reports from different clinical microbiology laboratories may be limited. We suggest that laboratories communicate the strategy used for cAST data analysis as long as national guidelines for standardized cAST data analysis and reporting do not exist in Europe.

]]>