ResearchPad - digestive-system-procedures https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks
<![CDATA[A pilot study of ex-vivo MRI-PDFF of donor livers for assessment of steatosis and predicting early graft dysfunction]]> https://www.researchpad.co/article/elastic_article_14544

The utility of ex vivo magnetic resonance imaging proton density fat fraction (MRI-PDFF) in donor liver fat quantification is unknown.

Purpose

To evaluate the diagnostic accuracy, and the utility in predicting early allograft dysfunction (EAD), of ex vivo MRI-PDFF measurement of fat in deceased donor livers, using histology as the gold standard.

Methods

We performed ex vivo, 1.5 Tesla MRI-PDFF on 33 human deceased donor livers before implantation, en route to the operating room. After the exclusion of 4 images (technical errors), 29 MRI images were evaluable. Histology was evaluable in 27 of 29 patients. EAD was defined as a peak aminotransferase value >2000 IU/mL during the first week, or an INR ≥1.6 or bilirubin ≥10 mg/dL at day 7.

Results

MRI-PDFF values showed a strong positive correlation (Pearson’s correlation coefficient) with histology when macro-steatosis was included (r = 0.78, 95% confidence interval 0.57–0.89, p<0.0001). The correlation was even stronger when macro- plus micro-steatosis was included (r = 0.87, 95% confidence interval 0.72–0.94, p<0.0001). EAD was noted in 7 (25%) subjects. The area under the curve (AUC) for predicting EAD was 73% (95% CI: 48–99) for macro-steatosis (histology) and 76% (95% CI: 49–100) for macro- plus micro-steatosis. The AUC for PDFF values in predicting EAD was 67% (95% CI: 35–98).
Comparison of the ROC curves in a multivariate model revealed that adding MRI-PDFF values to macro-steatosis increased the model’s ability to predict EAD (AUC: 79%, 95% CI: 59–99), and that the addition of macro- plus micro-steatosis based on histology predicted EAD even better (AUC: 90%, 95% CI: 79–100, P = 0.054).

Conclusion

In this pilot study, MRI-PDFF imaging showed potential utility in quantifying hepatic steatosis during ex vivo donor liver evaluation, and the ability to predict EAD related to severe allograft steatosis in the recipient.

]]>
<![CDATA[Newborn body composition after maternal bariatric surgery]]> https://www.researchpad.co/article/elastic_article_13862

In pregnancy after Roux-en-Y gastric bypass (RYGB), there is an increased risk of low birthweight in the offspring. The present study examined how offspring body composition was affected by RYGB.

Material and methods

Mother-newborn dyads, where the mothers had undergone RYGB, were included. The main outcome measure was neonatal body composition, assessed by dual-energy X-ray absorptiometry (DXA) scanning within 48 hours after birth. In a statistical model, offspring born after RYGB were compared with a reference material of offspring, and analyses were made to estimate the effect of maternal pre-pregnancy body mass index (BMI), gestational weight gain, parity, gestational age at birth and newborn sex on newborn body composition. Further analyses estimated the impact of maternal weight loss before pregnancy and of other effects of bariatric surgery, respectively. The study was performed at a university hospital between October 2012 and December 2013.

Results

We included 25 mother-newborn dyads where the mothers had undergone RYGB and compared them to a reference material of 311 mother-newborn dyads with comparable pre-pregnancy BMI. Offspring born to mothers after RYGB had lower birthweight (335 g, p<0.001), fat-free mass (268 g, p<0.001) and fat percentage (2.8%, p<0.001) compared with the reference material.
Only 2% of the average reduction in newborn fat-free mass could be attributed to maternal pre-pregnancy weight loss, whereas other effects of RYGB accounted for 98%. Regarding the reduction in fat mass, 52% was attributed to weight loss and 47% to other effects of surgery.

Conclusion

Offspring born after maternal bariatric surgery had lower birthweight, fat-free mass and fat percentage when compared with a reference material. RYGB itself, and not the pre-pregnancy weight loss, seems to have had the greatest impact on fetal growth.

]]>
<![CDATA[Improvement of steatotic rat liver function with a defatting cocktail during <i>ex situ</i> normothermic machine perfusion is not directly related to liver fat content]]> https://www.researchpad.co/article/elastic_article_13803

There is a significant organ shortage in the field of liver transplantation, partly due to a high discard rate of steatotic livers from donors. These organs are known to function poorly if transplanted but make up a significant portion of the available pool of donated livers. This study demonstrates the ability to improve the function of steatotic rat livers using a combination of ex situ machine perfusion and a “defatting” drug cocktail. After 6 hours of perfusion, defatted livers demonstrated lower perfusate lactate levels and improved bile quality, as demonstrated by higher bile bicarbonate and lower bile lactate. Furthermore, defatting was associated with decreased gene expression of pro-inflammatory cytokines and increased expression of enzymes involved in mitochondrial fatty acid oxidation. Rehabilitation of marginal or discarded steatotic livers using machine perfusion and tailored drug therapy could significantly increase the supply of donor livers for transplantation.

]]>
<![CDATA[Distinguishing moral hazard from access for high-cost healthcare under insurance]]> https://www.researchpad.co/article/N9aa1c21e-eb0c-47d9-9336-743c9eef5b98

Context

Health policy has long been preoccupied with the problem that health insurance stimulates spending (“moral hazard”). However, much health spending is costly healthcare that uninsured individuals could not otherwise access. Field studies comparing those with more or less insurance cannot disaggregate moral hazard from access. Moreover, studies of patients consuming routine low-dollar healthcare are not informative about the high-dollar healthcare that drives most aggregate healthcare spending in the United States.

Methods

We test indemnities as an alternative theory-driven counterfactual. Such conditional cash transfers would maintain an opportunity cost for patients, unlike standard insurance, but also guarantee access to the care. Since indemnities do not exist in U.S. healthcare, we fielded two blinded vignette-based survey experiments with 3,000 respondents, randomized to eight clinical vignettes and three insurance types. Our replication uses a population that is weighted to national demographics on three dimensions.

Findings

Most or all of the spending due to insurance would occur even under an indemnity. The waste attributable to moral hazard is undetectable.

Conclusions

For high-cost care, policymakers should be more concerned about the foregone efficient spending for those lacking full insurance, rather than the wasteful spending that occurs with full insurance.

]]>
<![CDATA[Regeneration of esophagus using a scaffold-free biomimetic structure created with bio-three-dimensional printing]]> https://www.researchpad.co/article/5c8c1978d5eed0c484b4d71e

Various strategies have been attempted to replace esophageal defects with natural or artificial substitutes using tissue engineering. However, these methods have not yet reached clinical application because of the high risks related to their immunogenicity or insufficient biocompatibility. In this study, we developed a scaffold-free structure with a mixture of cell types using bio-three-dimensional (3D) printing technology and assessed its characteristics in vitro and in vivo after transplantation into rats. Normal human dermal fibroblasts, human esophageal smooth muscle cells, human bone marrow-derived mesenchymal stem cells, and human umbilical vein endothelial cells were purchased and used as cell sources. After the preparation of multicellular spheroids, esophageal-like tube structures were prepared by bio-3D printing. The structures were matured in a bioreactor and transplanted into 10–12-week-old F344 male rats as esophageal grafts under general anesthesia. Mechanical and histochemical assessments of the structures were performed. Among the 4 types of structures evaluated, those with the larger proportion of mesenchymal stem cells tended to show greater strength and expansion on mechanical testing and highly expressed α-smooth muscle actin and vascular endothelial growth factor on immunohistochemistry. Therefore, the structure with the larger proportion of mesenchymal stem cells was selected for transplantation. The scaffold-free structures had sufficient strength for transplantation between the esophagus and stomach using silicone stents. The structures were maintained in vivo for 30 days after transplantation. Smooth muscle cells were maintained, and flat epithelium extended and covered the inner surface of the lumen. Food had also passed through the structure. These results suggest that esophagus-like scaffold-free tubular structures created using bio-3D printing could hold promise as a substitute for the repair of esophageal defects.

]]>
<![CDATA[Influence of donor liver telomere and G-tail on clinical outcome after living donor liver transplantation]]> https://www.researchpad.co/article/5c8acccbd5eed0c484990005

It has been reported that donor age affects patient outcomes after liver transplantation, and that telomere length is associated with age. However, to our knowledge, the impact of donor age and donor liver telomere length in liver transplantation has not been well investigated. This study aimed to clarify the influence of the lengths of the telomere and G-tail from donor livers on the outcomes of living donors and recipients after living donor liver transplantation. The lengths of the telomere and G-tail derived from blood samples and liver tissues of 55 living donors were measured using the hybridization protection assay. Telomere length from blood samples was inversely correlated with age, whereas G-tail length from blood samples and telomere and G-tail lengths from liver tissues were not correlated with age. Age and telomere and G-tail lengths from blood did not affect postoperative liver failure or early liver regeneration of donors. On the other hand, the longer the liver telomere, the poorer the liver regeneration tended to be, with a significant difference in donors who underwent right hemihepatectomy. We found that the survival rate of recipients who received liver grafts with longer telomeres was inferior to that of those who received liver grafts with shorter ones. An elderly donor, a longer liver telomere, and a higher Model for End-Stage Liver Disease score were identified as independent risk factors for recipient survival after transplantation. In conclusion, telomere shortening in healthy liver does not correlate with age, whereas longer liver telomeres negatively influence donor liver regeneration and recipient survival after living donor liver transplantation. These results can direct future studies and investigations on telomere shortening in the clinical and experimental transplant setting.

]]>
<![CDATA[Early predictors of outcomes of hospitalization for cirrhosis and assessment of the impact of race and ethnicity at safety-net hospitals]]> https://www.researchpad.co/article/5c897747d5eed0c4847d28e2

Background

Safety-net hospitals provide care for racially/ethnically diverse and disadvantaged urban populations. Their hospitalized patients with cirrhosis are relatively understudied and may be vulnerable to poor outcomes and racial/ethnic disparities.

Aims

To examine the outcomes of patients with cirrhosis hospitalized at regionally diverse safety-net hospitals and the impact of race/ethnicity.

Methods

A study of patients with cirrhosis hospitalized at 4 safety-net hospitals in 2012 was conducted. Demographic and clinical factors and outcomes were compared between centers and racial/ethnic groups. Study endpoints included mortality and 30-day readmission.

Results

In 2012, 733 of 1,212 patients with cirrhosis were hospitalized for liver-related indications (median age 55 years, 65% male). The cohort was racially diverse (43% White, 25% Black, 22% Hispanic, 3% Asian), with cirrhosis related to alcohol and viral hepatitis in 635 (87%) patients. Patients were hospitalized mainly for ascites (35%), hepatic encephalopathy (20%) and gastrointestinal bleeding (GIB) (17%). Fifty-four (7%) patients died during hospitalization and 145 (21%) survivors were readmitted within 30 days. Mortality rates ranged from 4 to 15% by center (p = .007) and from 3 to 10% by race/ethnicity (p = .03), but 30-day readmission rates were similar. Mortality was associated with Model for End-stage Liver Disease (MELD) score, acute-on-chronic liver failure, hepatocellular carcinoma, sodium and white blood cell count. Thirty-day readmission was associated with MELD score and a Charlson Comorbidity Index >4, with lower risk for GIB. We did not observe geographic or racial/ethnic differences in hospital outcomes in the risk-adjusted analysis.

Conclusions

Hospital mortality and 30-day readmission in patients with cirrhosis at safety-net hospitals are associated with disease severity and comorbidities, with lower readmissions in patients admitted for GIB. Despite geographic and racial/ethnic differences in hospital mortality, these factors were not independently associated with mortality.

]]>
<![CDATA[Combination of colonoscopy and magnetic resonance enterography is more useful for clinical decision making than colonoscopy alone in patients with complicated Crohn's disease]]> https://www.researchpad.co/article/5c76fe3fd5eed0c484e5b78a

Background/aims

The small bowel is affected in more than half of patients with Crohn’s disease (CD) at the time of diagnosis, and small bowel involvement has a negative impact on the long-term outcome. Active lesions in the small intestine are reportedly common even in patients in clinical remission. This study was performed to compare the findings of magnetic resonance enterography (MRE) and ileocolonoscopy.

Methods

A single-center retrospective study was conducted in 50 patients (60 imaging series) with CD, for whom MRE was additionally performed during the bowel preparation for subsequent ileocolonoscopy. Endoscopic remission was defined as a Simple Endoscopic Score for CD (SES-CD) of <5. MRE remission was defined as a Magnetic Resonance Index of Activity (MaRIA) score of <50. The time to treatment escalation was assessed by the log-rank test.
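Both remission definitions above are simple threshold rules; as a minimal illustrative sketch (the helper names are mine, not the study's), they amount to:

```python
def endoscopic_remission(ses_cd: int) -> bool:
    """Endoscopic remission: Simple Endoscopic Score for CD (SES-CD) < 5."""
    return ses_cd < 5

def mre_remission(maria: float) -> bool:
    """MRE remission: Magnetic Resonance Index of Activity (MaRIA) < 50."""
    return maria < 50

# A patient can be in endoscopic remission yet show active disease on MRE,
# the discordance this study quantifies (7/29 patients):
print(endoscopic_remission(3), mre_remission(62))  # True False
```

The point of the sketch is only that the two instruments can disagree, which is why the study asks whether MRE adds prognostic information beyond ileocolonoscopy.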

Results

Importantly, 7 of 29 patients (24.1%) with endoscopic remission had a MaRIA score of ≥50. Both SES-CD and MaRIA score correlated with the need for treatment escalation (P = 0.025 and P = 0.009, respectively). MRE predicted the need for treatment escalation even in patients with endoscopic remission. Although there was no correlation between SES-CD and MaRIA score in patients with stricturing/penetrating disease or insufficient ileal insertion (<10 cm), a high MaRIA score still correlated with the need for treatment escalation in stricturing or penetrating disease (P = 0.0306).

Conclusions

The MaRIA score predicts the need for treatment escalation even in patients with endoscopic remission, indicating that the addition of MRE to conventional ileocolonoscopy can provide a useful, noninvasive tool for monitoring CD, especially in stricturing or penetrating disease.

]]>
<![CDATA[Risk factors for small bowel bleeding in an overt gastrointestinal bleeding presentation after negative upper and lower endoscopy]]> https://www.researchpad.co/article/5c76fe6dd5eed0c484e5ba31

Introduction

A small bowel source is suspected when evaluation of overt gastrointestinal (GI) bleeding with upper and lower endoscopy is negative. Video capsule endoscopy (VCE) is the recommended next diagnostic test for small bowel bleeding sources. However, clinical or endoscopic predictive factors for small bowel bleeding in the setting of an overt bleeding presentation are unknown. We aimed to define predictive factors for positive VCE among individuals presenting with overt bleeding and a suspected small bowel source.

Methods

We included consecutive inpatient VCE performed between September 1, 2012 and September 1, 2015 for melena or hematochezia at two tertiary centers. All patients had undergone esophagogastroduodenoscopy (EGD) and colonoscopy prior to VCE. Patient demographics, medication use, and endoscopic findings were retrospectively recorded. VCE findings were graded based on the P0–P2 grading system. The primary outcome of interest was a positive (P2) VCE. The secondary outcome of interest was the performance of a therapeutic intervention. Data were analyzed with the Fisher exact test for dichotomous variables and logistic regression.

Results

Two hundred forty-three VCE were reviewed, and 117 were included in the final analysis. A positive VCE (P2) was identified in 35 (29.9%) cases. In univariate analysis, a positive VCE was inversely associated with presence of diverticula on preceding colonoscopy (OR: 0.44, 95% CI: 0.2–0.99), while identification of blood on terminal ileal examination was associated with a positive VCE (OR: 5.18, 95% CI: 1.51–17.76). In multivariate analysis, only blood identified on terminal ileal examination remained a significant risk factor for positive VCE (OR: 6.13, 95% CI: 1.57–23.81). Blood on terminal ileal examination was also predictive of therapeutic intervention in both univariate (OR: 4.46, 95% CI: 1.3–15.2) and multivariate analysis (OR: 5.04, 95% CI: 1.25–20.32).

Conclusion

Among patients presenting with overt bleeding but negative upper and lower endoscopy, the presence of blood on examination of the terminal ileum is strongly associated with a small bowel bleeding source as well as with small bowel therapeutic intervention. Presence of diverticula on colonoscopy is inversely associated with a positive VCE and therapeutic intervention in univariate analysis.

]]>
<![CDATA[Combining liver stiffness with hyaluronic acid provides superior prognostic performance in chronic hepatitis C]]> https://www.researchpad.co/article/5c6b26b4d5eed0c484289ee1

Background

Non-invasive methods are the first choice for liver fibrosis evaluation in chronic liver diseases, but few studies investigate the ability of combined methods to predict outcomes.

Methods

A total of 591 chronic hepatitis C patients with baseline liver stiffness measurement (LSM) by FibroScan and hyaluronic acid measurements were identified retrospectively. The patients were grouped by baseline LSM: <10 kPa, 10–16.9 kPa, and 17–75 kPa. Primary outcomes were all-cause mortality and liver-related mortality, analyzed using Cox regression and competing risk regression models, respectively.

Results

Median follow-up was 46.1 months. The prevalence of cirrhosis at baseline was 107/591 (18.1%). Median LSM was 6.8 kPa (IQR 5.3–11.6). By group, 404/591 (68.4%) had an LSM <10 kPa, 100/591 (16.9%) had an LSM of 10–16.9 kPa, and 87/591 (14.7%) had an LSM of 17–75 kPa. There were 69 deaths, 27 from liver-related disease; 26 patients developed cirrhosis and 30 developed complications of cirrhosis. The mortality rate in the 17–75 kPa group was 9.7/100 person-years, compared to 2.2/100 and 1.1/100 person-years in the 10–16.9 kPa and <10 kPa groups (p<0.005). Liver-related mortality increased approximately 10-fold from group to group (p<0.005). Cirrhotic complications occurred almost exclusively in the 17–75 kPa group, with an incidence of 10.3/100 person-years, compared to 1.8/100 and 0.2/100 person-years in the 10–16.9 kPa and <10 kPa groups (p<0.005). Median hyaluronic acid in the 17–75 kPa group was approximately 200 ng/mL. Patients with an LSM of 17–75 kPa had significantly higher risks of death, liver-related death, and complications of cirrhosis if their baseline hyaluronic acid was ≥200 ng/mL, with hazard ratios of 3.25 (95% CI 1.48–7.25), 7.7 (95% CI 2.32–28), and 3.2 (95% CI 1.35–7.39), respectively.
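The stratification described above combines the baseline LSM group with a hyaluronic acid cut-off; a minimal sketch of the reported cut-offs (function names are illustrative, not from the paper):

```python
def lsm_group(lsm_kpa: float) -> str:
    """Assign the baseline LSM group used in the study (FibroScan, kPa)."""
    if lsm_kpa < 10:
        return "<10 kPa"
    if lsm_kpa < 17:
        return "10-16.9 kPa"
    return "17-75 kPa"

def high_risk(lsm_kpa: float, hyaluronic_acid_ng_ml: float) -> bool:
    """Highest-risk stratum: LSM 17-75 kPa AND hyaluronic acid >= 200 ng/mL."""
    return lsm_group(lsm_kpa) == "17-75 kPa" and hyaluronic_acid_ng_ml >= 200

print(lsm_group(6.8))          # <10 kPa  (the cohort's median LSM)
print(high_risk(25.0, 250.0))  # True
print(high_risk(25.0, 150.0))  # False
```

The sketch captures why combining the two markers is informative: within the same stiffness group, the hyaluronic acid value further separates risk.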

Conclusions

The combination of LSM and circulating hyaluronic acid measurements significantly improved prognostic ability, relative to LSM alone. Combined static and dynamic markers of liver fibrosis could provide superior risk prediction.

]]>
<![CDATA[The influence of sleep apnea syndrome and intermittent hypoxia in carotid adventitial vasa vasorum]]> https://www.researchpad.co/article/5c63397cd5eed0c484ae689b

Subjects with sleep apnea-hypopnea syndrome (SAHS) show an increased carotid intima-media thickness. However, no data exist about earlier markers of atheromatous disease, such as the proliferation and expansion of the adventitial vasa vasorum (VV) to the avascular intima, in this setting. Our aim was to assess carotid VV density and its relationship with sleep parameters in a cohort of obese patients without prior vascular events. A total of 55 subjects evaluated for bariatric surgery were prospectively recruited. A non-attended respiratory polygraphy was performed. The apnea-hypopnea index (AHI) and the cumulative percentage of time spent with oxygen saturation below 90% (CT90) were assessed. Serum concentrations of soluble intercellular adhesion molecule 1, P-selectin, lipocalin-2 and soluble vascular cell adhesion molecule 1 (sVCAM-1) were measured. Contrast-enhanced carotid ultrasound was used to assess the VV density. Patients with SAHS (80%) showed a higher adventitial VV density (0.801±0.125 vs. 0.697±0.082, p = 0.005) and higher levels of sVCAM-1 (745.2±137.8 vs. 643.3±122.7 ng/ml, p = 0.035) than subjects with an AHI lower than 10 events/hour. In addition, a positive association existed between mean VV density and AHI (r = 0.445, p = 0.001) and CT90 (r = 0.399, p = 0.005). Finally, in the multiple linear regression analysis, female sex, fasting plasma glucose and AHI (but not CT90) were the only variables independently associated with the mean adventitial VV density (R2 = 0.327). In conclusion, a high VV density is present in obese subjects with SAHS, and chronic intermittent hypoxia emerges as an independent risk factor for the development of this early step of atheromatous disease.

]]>
<![CDATA[Anatomical location-based nodal staging system is superior to the 7th edition of the American Joint Committee on Cancer staging system among patients with surgically resected, histologically low-grade gastric cancer: A single institutional experience]]> https://www.researchpad.co/article/5c63395fd5eed0c484ae6576

Background

A hybrid topographic and numeric lymph node (LN) staging system for gastric cancer, which was recently proposed by Japanese experts as a simple method with a prognostic predictive power comparable to the N staging of the American Joint Committee on Cancer (AJCC) Tumor-node-metastasis classification, has not yet been validated in other Asian countries. This study aimed to examine the prognostic predictability of the hybrid staging system with the current AJCC staging system in gastric cancer.

Methods

Overall, 400 patients with gastric cancer who underwent surgery at Changhua Christian Hospital from January 2007 to December 2017 were included in the study. Univariate and multivariate analyses were performed to identify prognostic factors for gastric cancer-related death. Homogeneity and discrimination abilities of the two staging systems were compared using likelihood ratio chi-square test, linear trend chi-square test, Harrell’s c-index, and bootstrap analysis.

Results

One-third of the LN-positive patients were reclassified under the new N and stage categories. The concordance rates between the two staging systems were 0.810 for overall stage and 0.729 for N stage. Harrell’s c-indices for stage and N stage were higher in the 7th AJCC staging system than in the hybrid staging system (c-index for stage, 0.771 vs 0.764; c-index for nodal stage, 0.713 vs 0.705). Stratification of the patients according to histological grade revealed that Harrell’s c-indices for the stage and N stage of the hybrid staging system were comparable with those of the 7th AJCC staging system (c-index for AJCC stage vs hybrid stage, 0.800 vs 0.791; c-index for AJCC N stage vs hybrid N stage, 0.746 vs 0.734) among patients with histologically lower-grade gastric cancer. By likelihood ratio test, linear trend test, and bootstrap analysis, the performance of the new nodal staging system was better than that of the 7th AJCC staging system in the low-grade group.

Conclusions

The hybrid anatomical location-based classification may have better prognostic predictive ability than the 7th AJCC staging system for LN metastasis of low-grade gastric cancer. Further studies involving different ethnic populations are necessary for the validation of the new staging system.

]]>
<![CDATA[Surgical approach and the impact of epidural analgesia on survival after esophagectomy for cancer: A population-based retrospective cohort study]]> https://www.researchpad.co/article/5c50c449d5eed0c4845e8444

Background

Esophagectomy for esophageal cancer carries high morbidity and mortality, particularly in older patients. Transthoracic esophagectomy allows formal lymphadenectomy, but leads to greater perioperative morbidity and pain than transhiatal esophagectomy. Epidural analgesia may attenuate the stress response and be less immunosuppressive than opioids, potentially affecting long-term outcomes. These potential benefits may be more pronounced for transthoracic esophagectomy due to its greater physiologic impact. We evaluated the impact of epidural analgesia on survival and recurrence after transthoracic versus transhiatal esophagectomy.

Methods

A retrospective cohort study was performed using the linked Surveillance, Epidemiology and End Results (SEER)-Medicare database. Patients aged ≥66 years with locoregional esophageal cancer diagnosed 1994–2009 who underwent esophagectomy were identified, with follow-up through December 31, 2013. Epidural receipt and surgical approach were identified from Medicare claims. Survival analyses adjusting for hospital esophagectomy volume, surgical approach, and epidural use were performed. A subgroup analysis restricted to esophageal adenocarcinoma patients was performed.

Results

Among 1,921 patients, 38% (n = 730) underwent transhiatal esophagectomy and 62% (n = 1,191) underwent transthoracic esophagectomy. Overall, 61% (n = 1,169) received epidurals and 39% (n = 752) did not. Epidural analgesia was associated with the transthoracic approach and higher-volume hospitals. Patients with epidural analgesia had better 90-day survival. Five-year survival was higher with transhiatal esophagectomy (37.2%) than transthoracic esophagectomy (31.0%, p = 0.006). Among transthoracic esophagectomy patients, epidural analgesia was associated with improved 5-year survival (33.5% epidural versus 26.5% non-epidural, p = 0.012; hazard ratio 0.81, 95% confidence interval [0.70, 0.93]). Among the subgroup of esophageal adenocarcinoma patients undergoing transthoracic esophagectomy, epidural analgesia remained associated with improved 5-year survival (hazard ratio 0.81, 95% confidence interval [0.67, 0.96]); this survival benefit persisted in sensitivity analyses adjusting for propensity to receive an epidural.

Conclusion

Among patients undergoing transthoracic esophagectomy, including a subgroup restricted to esophageal adenocarcinoma, epidural analgesia was associated with improved survival even after adjusting for other factors.

]]>
<![CDATA[A new formula to calculate the resection limit in hepatectomy based on Gd-EOB-DTPA-enhanced magnetic resonance imaging]]> https://www.researchpad.co/article/5c644885d5eed0c484c2e84f

Background and aim

Dynamic magnetic resonance imaging with gadolinium-ethoxybenzyl-diethylenetriamine pentaacetic acid (EOB-MRI) can be used not only to detect liver tumors but also to estimate liver function. The aim of this study was to establish a new EOB-MRI-based formula to determine the resection limit in patients undergoing hepatectomy.

Methods

Twenty-eight patients with a normal liver (NL group) and five with an unresectable cirrhotic liver (UL group) who underwent EOB-MRI were included. Standardized liver function (SLF) was calculated based on the signal intensity (SI), the volume of each subsegment (S1–S8), and body surface area. A formula defining the resection limit was devised based on the difference in the SLF values of patients in the NL and UL groups. The formula was validated in 50 patients who underwent EOB-MRI and hepatectomy.

Results

The average SLF value in the NL and UL groups was 2038 and 962 FV/m2, respectively. The difference (1076 FV/m2) was taken to correspond to a resection volume of 70%. Thus, the resection limit for hepatectomy was calculated in proportion to 70%: resection limit (%) = 70×(SLF−962)/1076. The one patient who underwent hepatectomy beyond the resection limit died of liver failure. In the other 49 patients, in whom the resection volume was less than the resection limit, the procedures were performed safely.
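As a worked illustration of this formula (a sketch based on the values reported above, not the authors' software):

```python
def resection_limit_percent(slf: float) -> float:
    """Resection limit (%) from standardized liver function (SLF, FV/m2).

    Anchors from the study: unresectable cirrhotic group mean = 962 FV/m2,
    normal-liver group mean = 2038 FV/m2; the difference (1076 FV/m2)
    corresponds to a 70% resection volume.
    """
    return 70 * (slf - 962) / 1076

# A liver with the normal-group average function maps to the conventional
# 70% limit; poorer function yields a proportionally smaller limit.
print(round(resection_limit_percent(2038), 1))  # 70.0
print(round(resection_limit_percent(1500), 1))  # 35.0
```

The second call shows the intended behavior: an SLF halfway between the two anchors permits only half the conventional resection volume.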

Conclusions

Our formula for the resection limit based on EOB-MRI can improve the safety of hepatectomy.

]]>
<![CDATA[Fecal microbiota transplantation for treatment of recurrent C. difficile infection: An updated randomized controlled trial meta-analysis]]> https://www.researchpad.co/article/5c5217d8d5eed0c4847946a2

Objectives

Although systematic evaluation has confirmed the efficacy of fresh fecal microbiota transplantation (FMT) for the treatment of recurrent, refractory, and/or relapsing C. difficile infection (RCDI), it lacks the support of well-designed randomized controlled trials (RCTs), and the latest guidelines do not optimize the management of FMT. In this paper, we focus on an in-depth study of fresh FMT and of the number of fecal infusions to guide clinical practice.

Methods

We reviewed studies in PubMed, Medline, Embase, the Cochrane Library and Cochrane Central written in English. The retrieval period was from the establishment of the databases to September 20, 2018. The retrieval targets were published RCTs of RCDI treated with fresh FMT. The intervention group was the fresh FMT group, while the control group included antibiotic therapy, placebo, frozen FMT, or capsules. The primary and secondary outcomes were clinical remission of diarrhea without relapse after 8–17 weeks and the occurrence of severe adverse events, respectively. Subgroup analysis examined the effect of single versus multiple fecal infusions. Two authors independently completed the information extraction and assessed the risk of bias and overall quality of the evidence.

Results

Eight randomized controlled trials met the inclusion criteria, involving 537 patients (273 in the fresh FMT group and 264 in the control group). The recurrence rate of clinical diarrhea in the fresh FMT group was 11.0% (30/273), significantly lower than in the control group (24.6%, 65/264; P < 0.05); the pooled relative risk (RR) was 0.38 (95%CI: 0.16–0.87; I2 = 67%; P = 0.02) favoring fresh FMT, and because clinical heterogeneity was significant, a random-effects model was used. However, there was no significant difference between fresh FMT and either antibiotic treatment/frozen feces transplanted by enema (RR = 1.07; 95%CI: 0.64–1.80; I2 = 0%; P = 0.79) or capsules/frozen feces transplanted by colonoscopy (RR = 0.42; 95%CI: 0.05–3.94; I2 = 43%; P = 0.45). The subgroup analysis showed that FMT with multiple infusions effectively and significantly (RR = 0.24; 95%CI: 0.10–0.58; I2 = 0%; P = 0.001) improved the clinical diarrhea remission rate. Most mild to moderate adverse events caused by FMT were self-limited and quickly alleviated; no severe adverse events were attributable to FMT.

Conclusions

Overall, the use of fresh feces for bacterial transplantation showed the best efficacy for RCDI compared to antibiotic therapy or placebo. Fecal delivery by enema was not ideal, but capsules or frozen feces delivered by colonoscopy could be an alternative to fresh FMT. For patients with severe RCDI, multiple fecal transplants can effectively improve the diarrhea remission rate. Future research should focus on how to standardize the production of capsules or frozen feces to better guide the clinical management of RCDI patients by FMT.

]]>
<![CDATA[Cytomegalovirus viral load parameters associated with earlier initiation of pre-emptive therapy after solid organ transplantation]]> https://www.researchpad.co/article/5c6448bcd5eed0c484c2ecea

Background

Human cytomegalovirus (HCMV) can be managed by monitoring HCMV DNA in the blood and giving valganciclovir when viral load exceeds a defined value. We hypothesised that such pre-emptive therapy should occur earlier than the standard 3000 genomes/ml (2520 IU/ml) when a seropositive donor transmitted virus to a seronegative recipient (D+R-) following solid organ transplantation (SOT).

Methods

Our local protocol was changed so that D+R- SOT patients commenced valganciclovir once the viral load exceeded 200 genomes/ml (168 IU/ml; new protocol). The decision point remained at 3000 genomes/ml (old protocol) for the other two patient subgroups (D+R+, D-R+). Virological outcomes were assessed three years later, when 74 D+R- patients treated under the old protocol could be compared with 67 treated under the new one. The primary outcomes were changes in peak viral load, duration of viraemia and duration of treatment in the D+R- group. The secondary outcome was the proportion of D+R- patients who developed subsequent viraemia episodes.

Findings

In the D+R- patients, the median peak viral load was significantly reduced on the new protocol compared with the old (30,774 to 11,135 genomes/ml, p = 0.0215), but the duration of viraemia and duration of treatment were not. Early treatment increased subsequent episodes of viraemia from 33/58 (57%) to 36/49 (73%) of patients (p = 0.0743), with a significant increase (p = 0.0072) in those episodes that required treatment (16/58; 27% versus 26/49; 53%). Median peak viral load increased significantly (2,103 to 3,934 genomes/ml, p = 0.0249) in the D+R+ but not in the D-R+ patient subgroup. There was no change in duration of viraemia or duration of treatment for any patient subgroup.

Interpretation

Pre-emptive therapy initiated at the first sign of viraemia post-transplant significantly reduced the peak viral load but increased later episodes of viraemia, consistent with the hypothesis of reduced antigenic stimulation of the immune system.

]]>
<![CDATA[The intestinal fatty acid-binding protein as a marker for intestinal damage in gastroschisis]]> https://www.researchpad.co/article/5c466552d5eed0c48451897c

Background/Purpose

We analyzed the capacity of urinary intestinal fatty acid-binding protein (I-FABP) to quantify the degree of mucosal injury in neonates with gastroschisis (GS) and to predict the speed of their clinical recovery after surgery.

Methods

In this prospective study, we collected urine during the first 48 h after surgery from neonates operated on for GS between 2012 and 2015. Neonates whose surgery did not involve gut mucosa served as controls for simple GS, and neonates operated on for intestinal atresia served as controls for complex GS. I-FABP levels were measured by ELISA.

Results

Urinary I-FABP after surgery was significantly higher in GS newborns than in controls, and higher in complex GS than in simple GS. I-FABP predicted subsequent operation for ileus in patients with complex GS. Both methods of abdominal wall closure (i.e. primary closure and stepwise reconstruction) led to similar I-FABP levels. No single static I-FABP value was useful for outcome prediction. A steep postoperative decrease in I-FABP was associated with faster recovery, but it could not predict an early start of minimal enteral feeding, full enteral feeding, or length of hospitalization.

Conclusion

Urinary I-FABP reflects mucosal damage in gastroschisis but has only limited predictive value for patient outcome.

]]>
<![CDATA[Post-chemoradiotherapy FDG PET with qualitative interpretation criteria for outcome stratification in esophageal squamous cell carcinoma]]> https://www.researchpad.co/article/5c3d0175d5eed0c48403b89e

Objectives

Post-chemoradiotherapy (CRT) FDG PET is a useful prognosticator in esophageal cancer. However, the diverse interpretation criteria used in previous publications preclude worldwide multicenter comparisons, let alone a universal practice guideline. We aimed to validate a simple qualitative interpretation criterion of post-CRT FDG PET for outcome stratification and to compare it with other criteria.

Methods

The post-CRT FDG PET scans of 114 patients with esophageal squamous cell carcinoma (ESCC) were independently interpreted using a qualitative 4-point scale (Qual4PS) that identified focal esophageal FDG uptake greater than liver uptake as residual tumor. Cohen’s κ coefficient (κ) was used to measure interobserver agreement for Qual4PS. The Kaplan-Meier method and Cox proportional hazards regression analyses were used for survival analysis. The other criteria comprised a different qualitative approach (QualBK), maximal standardized uptake values (SUVmax3.4, SUVmax2.5), the relative change in SUVmax between pre- and post-CRT FDG PET (ΔSUVmax), mean standardized uptake value (SUVmean), metabolic volume (MV) and total lesion glycolysis (TLG).

Results

Overall interobserver agreement on the Qual4PS criterion was excellent (κ: 0.95). Except for QualBK, SUVmax2.5, and TLG, all other criteria were significant predictors of overall survival (OS). Multivariable analysis showed that only Qual4PS (HR: 15.41; P = 0.005) and AJCC stage (HR: 2.47; P = 0.007) were significant independent variables. The 2-year OS rate of Qual4PS(‒) patients undergoing CRT alone (68.4%) was not significantly different from that of patients undergoing trimodality therapy (62.5%), but the 2-year OS rate of Qual4PS(+) patients undergoing CRT alone (10.0%) was significantly lower than that of patients undergoing trimodality therapy (42.1%).

Conclusions

The Qual4PS criterion is reproducible for assessing the response of ESCC to CRT, and valuable for predicting survival. It may add value to response-adapted treatment for ESCC patients, and help to decide whether surgery is warranted after CRT.

]]>
<![CDATA[Risk interval analysis of emergency room visits following colonoscopy in patients with inflammatory bowel disease]]> https://www.researchpad.co/article/5c3fa587d5eed0c484ca5700

Background and aims

Prior studies suggest that colonoscopy may exacerbate inflammatory bowel disease (IBD) symptoms. Thus, our study aimed to determine risk of emergency room (ER) visits associated with colonoscopy among IBD patients and evaluate potential modifiers of this risk.

Methods

The study population included IBD patients in the Multi-Payer Claims Database who were >20 years old and had a colonoscopy from 2007–2010. We used a self-controlled risk interval design and mixed-effects Poisson regression models to calculate risk ratios (RR) and 95% confidence intervals (CI) comparing the incidence of ER visits in the 1–4 weeks following colonoscopy (risk interval) to the incidence of ER visits in the 7–10 weeks after colonoscopy (control interval). We also conducted stratified analyses by patient characteristics, bowel preparation type, and medication.

Results

There were 212,205 IBD patients with at least 1 colonoscopy from 2007–2010, and 3,699 had an ER visit during the risk and/or control interval. The risk of an ER visit was higher in the 4-week risk interval following colonoscopy compared to the control interval (RR = 1.24; 95% CI: 1.17–1.32). The effect was strongest in those <41 years old (RR = 1.60; 95% CI: 1.21–2.11), in women (RR = 1.32; 95% CI: 1.21–1.44), and in those with sodium phosphate bowel preparation (RR = 2.09; 95% CI: 1.02–4.29). Patients using immunomodulators had no increased risk of ER visits (RR = 0.75; 95% CI: 0.35–1.59).

Conclusions

Our results suggest that there is an increased risk of ER visits following colonoscopy among IBD patients, but that immunomodulators and mild bowel preparation agents may mitigate this risk.

]]>
<![CDATA[Devising focused strategies to improve organ donor registrations: A cross-sectional study among professional drivers in coastal South India]]> https://www.researchpad.co/article/5c26973fd5eed0c48470f06b

Background

In India, an estimated 500,000 people die annually because of the non-availability of organs. Given the large proportion of brain deaths amongst road accident victims, any improvement in organ donation practices in this cohort could help address the deficit. In this study, we identify potential areas for intervention to improve organ donation amongst professional drivers, a population at higher risk of road accidents.

Methods

A total of 300 participants were surveyed using a structured, orally administered questionnaire assessing knowledge, attitudes and practices regarding organ donation. Multivariate analysis was performed to identify key variables affecting intent to practice organ donation.

Results

Nearly half of our participants had unsatisfactory knowledge and attitude scores. Knowledge and attitude were positively correlated, rs(298) = .247, p < .001, and better scores were associated with a higher likelihood of intent to practice organ donation [AOR: 2.23 (1.26–3.94), p = .006; AOR: 12.164 (6.85–21.59), p < .001, respectively]. Lack of family support and fear of donated organs being used for medical research were the key barriers [AOR: 0.43 (0.19–0.97), p = .04; AOR: 0.27 (0.09–0.85), p = .02, respectively].

Conclusion

Targeted health-education, behaviour change communication, and legal interventions, in conjunction, are key to improving organ donor registrations.

]]>