ResearchPad - procurement
https://www.researchpad.co

Process evaluation of health system costing – Experience from CHSI study in India
https://www.researchpad.co/article/elastic_article_14482

A national study, 'Costing of healthcare services in India' (CHSI), aimed at generating reliable healthcare cost estimates for health technology assessment and price-setting, is being undertaken in India. CHSI sampled 52 public and 40 private hospitals in 13 states and used a mixed micro-costing approach. This paper aims to outline the process, challenges, and critical lessons of cost data collection in order to feed methodological and quality improvement of data collection.

Methods

An exploratory survey with three components (an online semi-structured questionnaire, a group discussion, and a review of monitoring data) was conducted amongst the CHSI data collection teams. There were qualitative and quantitative components. Difficulty in obtaining individual data items was rated on a Likert scale.

Results

The mean time taken to complete cost data collection in one department/speciality was 7.86 (±0.51) months, the majority of which was spent on data entry and resolution of data issues. Data collection was most difficult for determination of equipment usage (mean difficulty score 6.59±0.52), consumables prices (6.09±0.58), equipment prices (6.05±0.72), and furniture prices (5.64±0.68). Human resources and drugs & consumables contributed 78% of total cost but only 31% of data collection time, whereas furniture, overheads, and equipment consumed 51% of the time while contributing only 9% of total cost. Seeking multiple permissions, the absence of electronic records, and multiple sources of data were key challenges causing delays.

Conclusions

Micro-costing is time and resource intensive. Addressing key issues prior to data collection would ease the process of data collection, improve the quality of estimates, and aid priority setting. Electronic health records and the availability of a national cost database would facilitate the conduct of costing studies.

Assessment of Restored Kidney Transplantation Including the Use of Wider Criteria for Accepting Renal Donors After Cancer Excision
https://www.researchpad.co/article/N4a61dd48-86f7-4c56-8051-02e3201e14ad

Background.

The transplantation of kidneys after cancer excision (restored kidney transplantation, RKT) warrants further evaluation as a source of kidneys for transplantation. We determined whether kidneys from which larger cancers have been excised can be safely transplanted, the risks of adverse events from RKT, and whether RKT confers a survival advantage for patients waiting for transplantation.

Methods.

In a retrospective cohort study, 23 dialysis patients awaiting transplantation underwent RKT at John Hunter Hospital, Australia, between 2008 and 2015. Patients were >60 years old and accepted onto the National Organ Matching Service. This RKT group was divided by donor renal cancer size into ≤30 mm and >30 to ≤50 mm subgroups. Adverse event profiles for RKT recipients were compared with those of 22 standard live-donor recipients using logistic regression analyses. Recipient and transplant survival for RKT were compared with 2050 controls from the Australia and New Zealand Dialysis and Transplant Registry using Cox regression models. To increase statistical power for the survival analyses, data from 25 RKT recipients from Princess Alexandra Hospital, Brisbane were added, giving a total of 48 RKT recipients.

Results.

There were no significant differences in mortality, transplant failure, or adverse events between the two cancer groups. RKT increased the risk of adverse events (odds ratio: 6.48 [2.92–15.44]; P < 0.001). RKT reduced mortality risk by 30% (hazard ratio [HR]: 0.70 [0.36–1.07]; P = 0.299) compared with patients remaining on the transplant list, who may or may not have been subsequently transplanted. Mortality risk was significantly higher for those remaining on dialysis than for RKT recipients (HR: 2.86 [1.43–5.72]; P = 0.003). Transplant survival for RKT was reduced compared with control deceased donor transplants (HR: 0.42 [0.21–0.83]; P = 0.013) and live donor transplants (HR: 0.33 [0.02–0.86]; P = 0.023).
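A minimal sketch of the kind of Cox regression comparison reported above, fit on a toy patient-level table; the column names and values are hypothetical and this is not the study's analysis code.

```python
# Hedged sketch: Cox proportional hazards comparison of RKT recipients vs controls.
import pandas as pd
from lifelines import CoxPHFitter

# One row per patient: follow-up time (months), death indicator, and group flag.
df = pd.DataFrame({
    "time_months": [12, 30, 48, 60, 24, 18, 54, 36],
    "died":        [0,  1,  0,  0,  1,  1,  0,  1],
    "rkt":         [1,  1,  1,  1,  0,  0,  0,  0],  # 1 = restored kidney transplant
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="died")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])
```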

Conclusions.

The use of larger, carefully selected cancer-resected kidneys for transplantation appears safe and effective. RKT confers a possible survival advantage compared with waiting for transplantation and increased survival compared with remaining on dialysis, but reduced transplant survival.

Epidemiology and Comorbidity Burden of Organ Donor Referrals in Australia
https://www.researchpad.co/article/Ne2a4d70e-d283-4ac6-908e-f51d96bdf185

Background.

Increasing organ donation rates in Australia have been exceeded by a rise in potential donor referrals not proceeding to donate. Referral evaluation is resource-intensive. We sought to characterize organ donor referrals in New South Wales, Australia, and identify predictors of referrals not proceeding to donation.

Methods.

We performed a cohort study of NSW Organ and Tissue Donation Service logs from 2010 to 2015, describing the prevalence and impact of comorbidities on referral outcome. Logistic regression was used to identify comorbidities influencing outcome and to predict the probability of donation.

Results.

Of 2977 referrals, 669 (22%) donated and 2308 (78%) did not. Despite increasing donation rates, the proportion of referrals proceeding to donation declined from 2010 to 2015. Among referrals, the prevalence of all comorbidities except cerebrovascular disease increased and was higher among nondonors. A history of cardiac disease, age ≥65 years, chronic kidney or liver disease, malignancy, and absence of cerebrovascular disease were all significantly (P < 0.01) associated with nondonation. Hypertension and diabetes did not significantly impact outcome. The predicted probability of donation varied from <1% to 54% depending on the comorbidity burden of the referral.
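The model described above can be sketched as a logistic regression of donation outcome on comorbidity indicators, followed by a predicted probability for a given comorbidity profile. The data below are simulated and the variable names are assumptions, not the Organ and Tissue Donation Service dataset.

```python
# Hedged sketch: logistic regression of donation vs comorbidities, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
referrals = pd.DataFrame({
    "cardiac":         rng.integers(0, 2, n),
    "age_65_plus":     rng.integers(0, 2, n),
    "malignancy":      rng.integers(0, 2, n),
    "cerebrovascular": rng.integers(0, 2, n),
})
# Simulate outcomes roughly consistent with the reported directions of effect.
lin_pred = (-0.5 - 1.0 * referrals.cardiac - 0.8 * referrals.age_65_plus
            - 1.2 * referrals.malignancy + 1.0 * referrals.cerebrovascular)
referrals["donated"] = rng.binomial(1, 1 / (1 + np.exp(-lin_pred)))

model = smf.logit("donated ~ cardiac + age_65_plus + malignancy + cerebrovascular",
                  data=referrals).fit(disp=False)
print(np.exp(model.params))  # odds ratios

# Predicted probability of donation for a referral aged >=65 with cardiac disease.
print(model.predict(pd.DataFrame({"cardiac": [1], "age_65_plus": [1],
                                  "malignancy": [0], "cerebrovascular": [0]})))
```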

Conclusions.

Comorbidity burden among donor referrals is increasing. The presence of particular comorbidities may significantly impact referral outcome. A better understanding of referral characteristics associated with nondonation may improve the efficiency of the referral process in the context of encouraging routine referrals.

Microbial contamination and tissue procurement location: A conventional operating room is not mandatory. An observational study
https://www.researchpad.co/article/5c3e4f47d5eed0c484d73ac0

Background

Standard operating rooms (SOR) are assumed to be the best place to prevent microbial contamination when performing tissue procurement. However, mobilizing an operating room is time-consuming and costly if no organ retrieval is performed. In such cases, non-operating dedicated rooms (NODR) are usually recommended by European guidelines for tissue harvesting. Performing tissue retrieval in the intensive care unit (ICU), when possible, might be considered, as it allows a faster and simpler procedure.

Objective

Our primary objective was to study the relationship between the risk of microbial contamination and the location (ICU, SOR or NODR) of the tissue retrieval in heart-beating and non-heart-beating deceased donors.

Materials and methods

We retrospectively reviewed the files of all deceased donors from the local tissue banks of Montpellier and Marseille from January 2007 to December 2014. The primary endpoint was microbial contamination of the grafts. We built a multivariate regression model and used generalized estimating equations (GEE) to take into account the clustered structure of our data.

Results

A total of 2535 grafts from 1027 donors were analyzed. Retrieval took place in a SOR for 1189 grafts, in a hospital mortuary (NODR) for 996, and in an ICU for 350. Microbial contamination was detected in 285 grafts (11%). The multivariate analysis found that retrieval in a hospital mortuary was associated with a lower risk of contamination (OR 0.43, 95% CI [0.2–0.91], p = 0.03). Procurement performed in the ICU was not associated with a significantly increased risk (OR 0.62, 95% CI [0.26–1.48], p = 0.4).
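A minimal sketch of a donor-clustered binomial GEE of the type described, using simulated data and hypothetical variable names rather than the tissue-bank records.

```python
# Hedged sketch: GEE with exchangeable correlation, clustered by donor.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_donors, grafts_per_donor = 200, 3
df = pd.DataFrame({
    "donor_id": np.repeat(np.arange(n_donors), grafts_per_donor),
    "location": rng.choice(["SOR", "NODR", "ICU"], n_donors * grafts_per_donor),
})
# Simulated contamination outcome, lower for NODR (illustrative only).
df["contaminated"] = rng.binomial(1, np.where(df.location == "NODR", 0.06, 0.12))

gee = smf.gee("contaminated ~ C(location, Treatment(reference='SOR'))",
              groups="donor_id", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(np.exp(gee.params))  # odds ratios for NODR and ICU relative to SOR
```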

Conclusion

According to our results, performing tissue procurement in dedicated non-sterile rooms could decrease the rate of allograft tissue contamination. This study also suggests that, in daily clinical practice, transferring patients from the ICU to a SOR for tissue procurement could be avoided, as it does not lead to less microbial contamination.

High seroprevalence of Strongyloides stercoralis among individuals from endemic areas considered for solid organ transplant donation: A retrospective serum-bank based study
https://www.researchpad.co/article/5c09940cd5eed0c4842ae1af

Background

Strongyloides stercoralis is a parasite with worldwide distribution that can be transmitted from solid organ transplant (SOT) donors to recipients. We determined the serological prevalence of S. stercoralis among deceased individuals from endemic areas considered for SOT donation, using our institution’s serum bank.

Methodology

This was a retrospective study including all deceased potential donors from areas endemic for strongyloidiasis who were considered for SOT between January 2004 and December 2014 in a tertiary care hospital. The commercial IVD-ELISA serological test was used to determine the serological prevalence of S. stercoralis.

Principal findings

Among 1025 deceased individuals during the study period, 90 were from areas endemic for strongyloidiasis. Serum samples were available for 65 of them, and 6 tested positive for S. stercoralis (9.23%). Only one of these deceased candidates ultimately became a donor, and the infection was not transmitted.
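As a quick illustrative check (not part of the original analysis), the reported seroprevalence and a 95% confidence interval can be computed as follows.

```python
# Wilson confidence interval for the 6/65 seroprevalence quoted above.
from statsmodels.stats.proportion import proportion_confint

positive, tested = 6, 65
prevalence = positive / tested
low, high = proportion_confint(positive, tested, alpha=0.05, method="wilson")
print(f"Seroprevalence: {prevalence:.2%} (95% CI {low:.2%}-{high:.2%})")
```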

Conclusions

Among deceased individuals from endemic areas considered for SOT donation, the seroprevalence of strongyloidiasis was high. This highlights the importance of adhering to current recommendations on screening for S. stercoralis among potential SOT donors at high risk of the infection, together with the need to develop a rapid diagnostic test to fully implement these screening strategies.

Oral Cholera Vaccination Delivery Cost in Low- and Middle-Income Countries: An Analysis Based on Systematic Review
https://www.researchpad.co/article/5989db0cab0ee8fa60bcaac2

Background

Use of the oral cholera vaccine (OCV) is a vital short-term strategy to control cholera in endemic areas with poor water and sanitation infrastructure. Identifying, estimating, and categorizing the delivery costs of OCV campaigns is useful for analyzing cost-effectiveness, understanding vaccine affordability, and supporting planning and decision making by program managers and policy makers.

Objectives

To review and re-estimate oral cholera vaccination program costs and propose a new standardized categorization that can help in collation, analysis, and comparison of delivery costs across countries.

Data sources

Peer-reviewed publications listed in the PubMed database, Google Scholar, and World Health Organization (WHO) websites, as well as unpublished data from organizations involved in oral cholera vaccination.

Study eligibility criteria

Publications and reports containing oral cholera vaccination delivery costs for campaigns conducted in low- and middle-income countries, based on the World Bank classification. Searches were limited to studies in humans published before December 31, 2014.

Participants

No participants were involved; only cost data were collected.

Intervention

Oral cholera vaccination and cost estimation.

Study appraisal and synthesis method

A systematic review was conducted using pre-defined inclusion and exclusion criteria. Cost items were categorized into four main cost groups: vaccination program preparation, vaccine administration, adverse events following immunization, and vaccine procurement; the first three groups constitute the vaccine delivery costs. Costs were re-estimated in 2014 US dollars (US$) and international dollars (I$).

Results

Ten studies were identified and included in the analysis. The vaccine delivery costs ranged from US$0.36 to US$6.32 (US$2014), equivalent to I$0.99 to I$16.81 (I$2014). The vaccine procurement costs ranged from US$0.29 to US$29.70 (US$2014), equivalent to I$0.72 to I$78.96 (I$2014). Delivery costs were lowest in routine immunization systems, at US$0.36 (US$2014), equivalent to I$0.99 (I$2014).
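A hedged sketch of the kind of price-year and currency adjustment described: inflate a local-currency cost to 2014 prices, then convert to US$ at the market exchange rate and to international dollars using a PPP conversion factor. The deflator and conversion factors below are placeholders, not values from the review.

```python
# Illustrative re-estimation of a campaign cost in 2014 US$ and 2014 I$.
def to_2014_dollars(cost_lcu, cpi_cost_year, cpi_2014,
                    lcu_per_usd_2014, lcu_per_intl_dollar_2014):
    """Return (US$2014, I$2014) for a cost expressed in local currency units (LCU)."""
    cost_lcu_2014 = cost_lcu * (cpi_2014 / cpi_cost_year)   # inflate to 2014 prices
    usd_2014 = cost_lcu_2014 / lcu_per_usd_2014             # market exchange rate
    intl_2014 = cost_lcu_2014 / lcu_per_intl_dollar_2014    # PPP conversion factor
    return usd_2014, intl_2014

# Example with made-up numbers: a 2011 campaign cost of 50 local currency units.
print(to_2014_dollars(cost_lcu=50, cpi_cost_year=95.0, cpi_2014=110.0,
                      lcu_per_usd_2014=85.0, lcu_per_intl_dollar_2014=32.0))
```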

Limitations

The reported cost categories are not standardized at the point of collection and may lead to misclassification. Costs for some OCV campaigns are not available, and the analysis does not include direct and indirect costs to vaccine recipients.

Conclusions and implications of key findings

Vaccine delivery cost estimation is needed for budgeting and economic analysis of vaccination programs. The cost categorization methodology presented in this study is helpful in collecting OCV delivery costs in a standardized manner, comparing delivery costs, planning vaccination campaigns and informing decision-making.

Are life-saving anticancer drugs reaching all patients? Patterns and discrepancies of trastuzumab use in the European Union and the USA
https://www.researchpad.co/article/5989db50ab0ee8fa60bdbf20

Background

The development of trastuzumab is considered to be one of the greatest improvements in breast cancer treatment in recent years. This study aims to evaluate changes in the uptake of trastuzumab over the last 12 years and to determine whether its use is proportional to patient needs in the European Union and the USA.

Methods

Using national registry data, the number of new HER2-positive breast cancer cases per year was estimated. The number of likely trastuzumab treatments per year was estimated using trastuzumab procurement data for each country.

Results

Western Europe and the USA showed increasing procurement of trastuzumab over the years studied, reaching use proportional to need a few years after its marketing authorization in the early 2000s. After approval in the adjuvant setting in 2006, underuse of trastuzumab was observed relative to the increased number of patients in need of treatment; proportional use was again reached after a couple of years. Few countries in Eastern Europe acquired the needed quantity of trastuzumab, with procurement levels starting to increase only after approval in the adjuvant setting in 2006.
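One way to make the notion of "proportional use" concrete is to compare the number of treatment courses implied by procurement with the estimated number of new HER2-positive cases, as sketched below; the per-course dose and the country figures are assumptions for illustration, not data from the paper.

```python
# Illustrative proportional-use calculation (assumed per-course dose, made-up figures).
def proportional_use_ratio(mg_procured_per_year, new_her2_cases_per_year,
                           mg_per_treatment_course=8_000):
    """Ratio of implied treatment courses to estimated patients in need (1.0 = proportional)."""
    courses = mg_procured_per_year / mg_per_treatment_course
    return courses / new_her2_cases_per_year

# Hypothetical country-year: 32 kg of trastuzumab procured, 4000 estimated new HER2+ cases.
print(f"{proportional_use_ratio(32_000_000, 4000):.2f}")  # -> 1.00
```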

Conclusion

Significant differences in trastuzumab procurement are observed between Western Europe, the USA, and Eastern Europe, with the latter region acquiring insufficient amounts of the drug to treat all patients in need.

Speeding Access to Vaccines and Medicines in Low- and Middle-Income Countries: A Case for Change and a Framework for Optimized Product Market Authorization
https://www.researchpad.co/article/5989db39ab0ee8fa60bd42c4

Background

The United Nations Millennium Development Goals galvanized global efforts to alleviate suffering of the world’s poorest people through unprecedented public-private partnerships. Donor aid agencies have demonstrably saved millions of lives that might otherwise have been lost to disease through increased access to quality-assured vaccines and medicines. Yet, the introduction of these health interventions in low- and middle-income countries (LMICs) continues to face a time lag due to factors which remain poorly understood.

Methods and Findings

A recurring theme from our partnership engagements was that an optimized regulatory process would contribute to improved access to quality health products. Therefore, we investigated the current system for medicine and vaccine registration in LMICs as part of our comprehensive regulatory strategy. Here, we report a fact base of the registration timelines for vaccines and drugs used to treat certain communicable diseases in LMICs. We worked with a broad set of stakeholders, including the World Health Organization’s prequalification team, national regulatory authorities, manufacturers, procurers, and other experts, and collected data on the timelines between first submission and last approval of applications for product registration in sub-Saharan Africa. We focused on countries with the highest burden of communicable disease and the greatest need for the products studied. The data showed a typical lag of 4 to 7 years between the first regulatory submission, which was usually to a regulatory agency in a high-income country, and the final approval in sub-Saharan Africa. Two of the three typical registration steps that products undergo before delivery in these countries involve lengthy timelines. Failure to leverage or rely on the findings from reviews already performed by competent regulatory authorities, disparate requirements for product approval across countries, and lengthy timelines by manufacturers in responding to regulatory queries were key underlying factors for the delays.

Conclusions

We propose a series of measures, developed in close collaboration with key stakeholders, that could be taken to reduce registration time and to make safe, effective medicines more quickly available in countries where they are most needed. Many of these recommendations are being implemented by the responsible stakeholders, including the WHO prequalification team and the national regulatory authorities in sub-Saharan Africa. Those efforts will be the focus of subsequent publications by the pertinent groups.

Progress and Challenges in Scaling Up Laboratory Monitoring of HIV Treatment
https://www.researchpad.co/article/5989d9edab0ee8fa60b6d08e

In a perspective on Habiyambere and colleagues, Peter Kilmarx and Raiva Simbi discuss the disconnect between HIV testing instrument capacity and utilization.

Social network analysis of obsidian artefacts and Māori interaction in northern Aotearoa New Zealand
https://www.researchpad.co/article/5c9405a3d5eed0c484538ebb

Over the span of some 700 years, the colonizing populations of Aotearoa New Zealand grew, with subsequent changes in levels of interaction and social affiliation. Historical accounts document that Māori society transformed from relatively autonomous village-based groups into larger territorial lineages, which later formed even larger geo-political tribal associations. These shifts have not been well documented in the archaeological record, but social network analysis (SNA) of pXRF-sourced obsidian recovered from 15 archaeological sites documents variable levels of similarity and affiliation. Three site communities and two source communities are defined based on the differential proportions of obsidian from 13 distinct sources. Distance and travel time between archaeological sites and obsidian sources were not the defining factors for obsidian source selection and community membership; rather, social considerations are implicated. Some archaeological sites incorporated material from far-off sources, and in some instances geographically close sites contained material from different sources and were assigned to different communities. The analytical site communities constitute relational identifications that partially correspond to categorical identities of current Māori iwi (tribal) territories and boundaries. Based on very limited temporal information, these site communities are thought to have coalesced sometime after AD 1500. By incorporating previously published and unpublished data, the SNA of obsidian artefacts defined robust network communities that reflect differential levels of Māori interaction and affiliation.
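A sketch of the general SNA workflow described above, not the authors' code: compute pairwise assemblage similarity from obsidian source proportions, keep ties above a threshold, and detect network communities. Site names, sources, proportions, and the threshold are invented for illustration.

```python
# Hedged sketch: similarity network of sites from obsidian source proportions.
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Proportion of each obsidian source in each site assemblage (rows sum to 1).
sites = {
    "Site_A": {"Mayor": 0.7, "Tuhua": 0.2, "Taupo": 0.1},
    "Site_B": {"Mayor": 0.6, "Tuhua": 0.3, "Taupo": 0.1},
    "Site_C": {"Mayor": 0.1, "Tuhua": 0.2, "Taupo": 0.7},
    "Site_D": {"Mayor": 0.2, "Tuhua": 0.1, "Taupo": 0.7},
}

def brainerd_robinson(p, q):
    """Brainerd-Robinson similarity between two assemblages, rescaled to 0-1."""
    sources = set(p) | set(q)
    return 1 - 0.5 * sum(abs(p.get(s, 0) - q.get(s, 0)) for s in sources)

G = nx.Graph()
for a, b in itertools.combinations(sites, 2):
    sim = brainerd_robinson(sites[a], sites[b])
    if sim >= 0.5:  # drop weak ties from the network
        G.add_edge(a, b, weight=sim)

communities = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])  # e.g. [['Site_A', 'Site_B'], ['Site_C', 'Site_D']]
```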

Trends in medicines procurement by the Brazilian federal government from 2006 to 2013
https://www.researchpad.co/article/5989db53ab0ee8fa60bdcb91

The costs of medicines pose a growing burden on healthcare systems worldwide. A comprehensive understanding of current procurement processes provides strong support for the development of effective policies. This study examined Brazilian Federal Government pharmaceutical procurement data provided by the Integrated System for the Administration of General Services (SIASG) database from 2006 to 2013. Medicine purchases were aggregated by volume and expenditure for each year. Data on expenditure were adjusted for inflation using the Extended National Consumer Price Index (IPCA) for December 31, 2013. Lorenz distribution curves were used to study the cumulative proportion of purchased therapeutic classes. Expenditure variance analysis was performed to determine the impact of each factor, price and/or volume, on total expenditure variation. Annual expenditure on medicines increased 2.72 times, while the purchased volume of drugs increased 1.99 times. A limited number of therapeutic classes dominated expenditure each year. Drugs for infectious diseases drove the increase in expenditures from 2006 to 2009 but were replaced by antineoplastic and immunomodulating agents beginning in 2010. Immunosuppressants (L04) accounted for one third of purchases since 2010, showing the most substantial growth in expenditures during the period (a 250-fold increase). The overwhelming price-related increase in expenditures caused by these medicines is bound to have a relevant impact on the sustainability of the pharmaceutical supply system. We observed increasing trends in expenditures, especially in specific therapeutic classes. We propose the development and implementation of better medicine procurement systems, and strategies to allow for monitoring of product price, effectiveness, and safety. This must be done with ongoing assessment of pharmaceutical innovations, therapeutic value, and budget impact.
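The expenditure variance (price/volume) analysis mentioned above can be illustrated with a simple decomposition of the change in spending on one therapeutic class into a volume effect and a price effect; the figures are invented, not SIASG data.

```python
# Illustrative price/volume decomposition of an expenditure change.
def decompose_expenditure_change(p0, q0, p1, q1):
    """Split the change in spending (p1*q1 - p0*q0) into a volume effect valued at
    base prices and a price effect valued at final volume."""
    volume_effect = p0 * (q1 - q0)   # change had prices stayed at the base level
    price_effect = (p1 - p0) * q1    # remaining change attributable to prices
    return volume_effect, price_effect

# Hypothetical class: unit price rises from 10 to 25, volume from 1.0M to 1.2M units.
vol, pri = decompose_expenditure_change(p0=10, q0=1_000_000, p1=25, q1=1_200_000)
print(f"volume effect: {vol:,.0f}  price effect: {pri:,.0f}  total change: {vol + pri:,.0f}")
```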

Association between Food Insecurity and Procurement Methods among People Living with HIV in a High Resource Setting
https://www.researchpad.co/article/5989db4dab0ee8fa60bdafd7

Objective

People living with HIV in high-resource settings suffer severe levels of food insecurity; however, limited evidence exists regarding dietary intake and sub-components that characterize food insecurity (i.e. food quantity, quality, safety or procurement) in this population. We examined the prevalence and characteristics of food insecurity among people living with HIV across British Columbia, Canada.

Design

This cross-sectional analysis was conducted within a national community-based research initiative.

Methods

Food security was measured using the Health Canada Household Food Security Survey Module. Logistic regression was used to determine key independent predictors of food insecurity, controlling for potential confounders.

Results

Of 262 participants, 192 (73%) reported food insecurity. Sub-components associated with food insecurity in bivariate analysis included: protein consumption below the recommended daily intake (RDI) (p = 0.046); being sick from spoiled/unsafe food in the past six months (p = 0.010); and procurement of food using non-traditional methods (p < 0.05). In multivariable analyses, factors significantly associated with food insecurity included: procurement of food using non-traditional methods [AOR = 11.11, 95% CI: 4.79–25.68, p < 0.001]; younger age [AOR = 0.92, 95% CI: 0.86–0.96, p < 0.001]; unstable housing [AOR = 4.46, 95% CI: 1.15–17.36, p = 0.031]; household gross annual income [AOR = 4.49, 95% CI: 1.74–11.60, p = 0.002]; and symptoms of depression [AOR = 2.73, 95% CI: 1.25–5.96, p = 0.012].

Conclusions

Food insecurity among people living with HIV in British Columbia is characterized by poor dietary quality and non-traditional food procurement methods. Notably, participants who reported procuring food in non-traditional ways were over 10 times more likely to be food insecure. These findings suggest a need for tailored food security and social support interventions in this setting.

Long-term results after transplantation of pediatric liver grafts from donation after circulatory death donors
https://www.researchpad.co/article/5989db51ab0ee8fa60bdc494

Background

Liver grafts from donation after circulatory death (DCD) donors are increasingly accepted as an extension of the organ pool for transplantation. There are few data on the outcomes of liver transplantation with DCD grafts from pediatric donors. The objective of this study was to assess the outcome of liver transplantation with pediatric DCD grafts and to compare this with the outcome after transplantation of livers from pediatric donation after brain death (DBD) donors.

Method

All transplantations performed with a liver from a pediatric donor (≤16 years) in the Netherlands between 2002 and 2015 were included. Patient survival, graft survival, and complication rates were compared between DCD and DBD liver transplantation.

Results

In total, 74 liver transplantations with pediatric grafts were performed: 20 (27%) DCD and 54 (73%) DBD. The median donor warm ischemia time (DWIT) was 24 min (range 15–43 min). The patient survival rate at 10 years was 78% for recipients of DCD grafts and 89% for DBD grafts (p = 0.32). The graft survival rate at 10 years was 65% in recipients of DCD versus 76% in DBD grafts (p = 0.20). If donor livers with a DWIT ≥30 min (n = 4) had been declined for transplantation, the 10-year graft survival rate after DCD transplantation would have been 81%. The rate of non-anastomotic biliary strictures was 5% in DCD and 4% in DBD grafts (p = 1.00). Other complication rates were also similar between the two groups.
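A minimal sketch of the kind of survival comparison reported above (Kaplan-Meier estimates with a log-rank test), using made-up follow-up data rather than the Dutch registry records.

```python
# Hedged sketch: Kaplan-Meier graft survival by donor type with a log-rank test.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

grafts = pd.DataFrame({
    "years":  [1.0, 3.5, 10.0, 2.0, 8.0, 10.0, 6.0, 10.0, 4.0, 10.0],
    "failed": [1,   1,   0,    1,   0,   0,    1,   0,    1,   0],
    "dcd":    [1,   1,   1,    1,   1,   0,    0,   0,    0,   0],
})

km = KaplanMeierFitter()
for flag, label in [(1, "DCD"), (0, "DBD")]:
    sub = grafts[grafts.dcd == flag]
    km.fit(sub.years, event_observed=sub.failed, label=label)
    print(label, "10-year graft survival:", km.survival_function_at_times(10).iloc[0])

result = logrank_test(grafts.years[grafts.dcd == 1], grafts.years[grafts.dcd == 0],
                      event_observed_A=grafts.failed[grafts.dcd == 1],
                      event_observed_B=grafts.failed[grafts.dcd == 0])
print("log-rank p =", result.p_value)
```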

Conclusions

Transplantation of livers from pediatric DCD donors results in good long-term outcomes, especially when the DWIT is kept ≤30 min. Patient and graft survival rates are not significantly different between recipients of a pediatric DCD or DBD liver. Moreover, the incidence of non-anastomotic biliary strictures after transplantation of pediatric DCD livers is remarkably low.

Nurse Staffing Calculation in the Emergency Department - Performance-Oriented Calculation Based on the Manchester Triage System at the University Hospital Bonn
https://www.researchpad.co/article/5989da09ab0ee8fa60b76cc7

Background

To date, there are no valid statistics regarding the number of full-time staff necessary for nursing care in emergency departments in Europe.

Material and Methods

Staff requirement calculations were performed using state-of-the-art procedures that take both fluctuating patient volume and individual staff shortfall rates into consideration. In a longitudinal observational study, the average nursing staff engagement time per patient was assessed for 503 patients. On this basis, a full-time staffing requirement was estimated for the five priority levels of the Manchester Triage System (MTS), taking into account specific workload fluctuations (50th–95th percentiles).

Results

Patients classified into the red MTS category (n = 35) required the most engagement time, with an average of 97.93 min per patient. On weighted average, nursing staff were required for 85.07 min for orange MTS category patients (n = 118) and 40.95 min for patients in the yellow MTS category (n = 181), while the two MTS categories with the least acute patients, green (n = 129) and blue (n = 40), required 23.18 min and 14.99 min of engagement time per patient, respectively. Individual staff shortfall due to sick days and vacation time was 20.87% of total working hours. Extrapolating this to the 21,899 emergency patients seen in 2010, one nurse can see 67–123 emergency patients per month (50th–95th percentile). The calculated full-time staffing requirement, depending on the percentile, was 14.8 to 27.1.
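The staffing arithmetic described above can be reproduced roughly as follows, using the engagement times and patient counts reported in the abstract; the case-mix scaling and the productive hours per nurse are simplifying assumptions, so the result only approximates the published range.

```python
# Worked example of the performance-oriented staffing calculation (simplified).
minutes_per_patient = {"red": 97.93, "orange": 85.07, "yellow": 40.95,
                       "green": 23.18, "blue": 14.99}
sample_counts = {"red": 35, "orange": 118, "yellow": 181, "green": 129, "blue": 40}
annual_patients = 21_899              # emergency patients in 2010
shortfall = 0.2087                    # sick days and vacation time
net_hours_per_fte_per_year = 1_600    # assumed productive hours per nurse (assumption)

# Scale the observed case mix up to the annual patient volume.
sample_total = sum(sample_counts.values())
annual_minutes = sum(minutes_per_patient[c] * sample_counts[c] / sample_total * annual_patients
                     for c in minutes_per_patient)
required_hours = annual_minutes / 60 / (1 - shortfall)   # inflate for staff shortfall
print(f"Full-time nurses required: {required_hours / net_hours_per_fte_per_year:.1f}")
```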

Conclusion

Performance-oriented staff planning offers an objective instrument for calculation of the full-time nursing staff required in emergency departments.

Medicine shortages in Fiji: A qualitative exploration of stakeholders’ views
https://www.researchpad.co/article/5989db5cab0ee8fa60be030d

Objectives

Medicine access is a human right; yet, concerningly, there are international instances of shortages. Quantitative data have allowed WHO to propose global solutions; however, an individualised understanding of specific regions is still required to work towards national solutions. Fiji has an established issue with medication supply, and the aim of this study was to use qualitative methods to gain a fuller understanding of this context.

Methods

Semi-structured interviews were used to gain the perspectives of key stakeholders involved in the Fijian medicine supply chain with regard to the causes, impacts, and possible solutions of medicine shortages. Thematic analysis was used to analyse the interview data.

Results

In total, 48 stakeholders participated. The information was synthesised into three main themes (causes, impacts, and solutions) and related sub-themes: political, system, and patient causes; adverse health effects on patients, professional dissatisfaction, monetary loss, and loss of faith in the health system; and workarounds, operational improvements, government intervention, and education and training.

Conclusions

The situation in Fiji is not dissimilar to other instances of shortages around the world, and hence international solutions such as those proposed by WHO are feasible; however, they must be modified to suit the Fijian context in order to work.

Effects of the Share 35 Rule on Waitlist and Liver Transplantation Outcomes for Patients with Hepatocellular Carcinoma
https://www.researchpad.co/article/5989db53ab0ee8fa60bdca1b

Introduction

Several studies have investigated the effects of implementing the “Share 35” policy; however, none have investigated what effect this policy change has had on waitlist and liver transplantation (LT) outcomes for hepatocellular carcinoma (HCC).

Methods

Data were obtained from the UNOS database and a comparison of the 2 years post-Share 35 with data from the 2 years pre-Share 35 was performed.

Results

In the pre-Share 35 era, 23% of LTs were performed for HCC exceptions, compared with 22% of LTs in the post-Share 35 era (p = 0.21). No difference in wait time for HCC patients was seen in any of the UNOS regions between the two eras. Competing risk analysis demonstrated that HCC candidates in the post-Share 35 era were more likely to die or be delisted as “too sick” within 15 months of waiting (7.2% vs. 5.3%; p = 0.005). Comparing the two eras, a higher proportion of ECD (p < 0.001) and DCD (p < 0.001) livers were used for patients transplanted for HCC, while lower-DRI organs were used for patients transplanted with a MELD ≥35 (p = 0.007).

Conclusion

No significant change in wait time for patients listed for HCC was seen following implementation of “Share 35”. Transplant program behavior has changed, resulting in the use of a higher proportion of ECD and DCD liver grafts for patients with HCC. A higher rate of waitlist mortality was observed in patients with HCC in the post-Share 35 era.

A Model for Good Governance of Healthcare Technology Management in the Public Sector: Learning from Evidence-Informed Policy Development and Implementation in Benin
https://www.researchpad.co/article/5989d9f9ab0ee8fa60b71571

Good governance (GG) is an important concept that has evolved as a set of normative principles for low- and middle-income countries (LMICs) to strengthen the functional capacity of their public bodies, and as a conditional prerequisite to receive donor funding. Although much is written on good governance, very little is known about how to implement it. This paper documents the process of developing a strategy to implement a GG model for Health Technology Management (HTM) in the public health sector, based on lessons learned from twenty years of experience in policy development and implementation in Benin. The model comprises six phases: (i) preparatory analysis, assessing the effects of previous policies and characterizing the HTM system; (ii) stakeholder identification and problem analysis, making explicit the perceptions of problems by a diverse range of actors, and assessing their ability to solve these problems; (iii) shared analysis and visioning, delineating the root causes of problems and hypothesizing solutions; (iv) development of policy instruments for pilot testing, based on quick-win solutions to understand the system’s responses to change; (v) policy development and validation, translating the consensus solutions identified by stakeholders into a policy; and (vi) policy implementation and evaluation, implementing the policy through a cycle of planning, action, observation and reflection. The policy development process can be characterized as bottom-up, with a central focus on the participation of diverse stakeholder groups. Interactive and analytical tools of action research were used to integrate knowledge amongst actor groups, identify consensus solutions and develop the policy in a way that satisfies criteria of GG. This model could be useful for other LMICs where resources are constrained and the majority of healthcare technologies are imported.

Assessing the Impact of U.S. Food Assistance Delivery Policies on Child Mortality in Northern Kenya
https://www.researchpad.co/article/5989da78ab0ee8fa60b97462

The U.S. is the main country in the world that delivers its food assistance primarily via transoceanic shipments of commodity-based in-kind food. This approach is costlier and less timely than cash-based assistance, which includes cash transfers, food vouchers, and local and regional procurement, where food is bought in or nearby the recipient country. The U.S.’s approach is exacerbated by a requirement that half of its transoceanic food shipments need to be sent on U.S.-flag vessels. We estimate the effect of these U.S. food assistance distribution policies on child mortality in northern Kenya by formulating and optimizing a supply chain model. In our model, monthly orders of transoceanic shipments and cash-based interventions are chosen to minimize child mortality subject to an annual budget constraint and to policy constraints on the allowable proportions of cash-based interventions and non-US-flag shipments. By varying the restrictiveness of these policy constraints, we assess the impact of possible changes in U.S. food aid policies on child mortality. The model includes an existing regression model that uses household survey data and geospatial data to forecast the mean mid-upper-arm circumference Z scores among children in a community, and allows food assistance to increase Z scores, and Z scores to influence mortality rates. We find that cash-based interventions are a much more powerful policy lever than the U.S.-flag vessel requirement: switching to cash-based interventions reduces child mortality from 4.4% to 3.7% (a 16.2% relative reduction) in our model, whereas eliminating the U.S.-flag vessel restriction without increasing the use of cash-based interventions generates a relative reduction in child mortality of only 1.1%. The great majority of the gains achieved by cash-based interventions are due to their reduced cost, not their reduced delivery lead times; i.e., the reduction of shipping expenses allows for more food to be delivered, which reduces child mortality.
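To make the budget-allocation logic concrete, here is a deliberately simplified linear-programming sketch, not the paper's supply-chain model: it allocates an annual budget between cash-based assistance and transoceanic in-kind shipments to maximize food delivered (a crude stand-in for the mortality objective), under a policy cap on the cash-based share. All figures are placeholders.

```python
# Toy budget-allocation LP under a cash-based share cap (illustrative numbers only).
from scipy.optimize import linprog

budget = 10_000_000                           # annual budget in US$
kg_per_dollar = {"cash": 2.0, "ship": 1.2}    # assumed delivery efficiency per dollar
cash_share_cap = 0.20                         # policy cap: at most 20% cash-based

# Variables: [dollars_cash, dollars_ship]; linprog minimizes, so negate the objective.
res = linprog(
    c=[-kg_per_dollar["cash"], -kg_per_dollar["ship"]],
    A_ub=[[1, 1],              # total spending within budget
          [1, 0]],             # cash-based spending within the policy cap
    b_ub=[budget, cash_share_cap * budget],
    bounds=[(0, None), (0, None)],
)
dollars_cash, dollars_ship = res.x
print(f"cash: ${dollars_cash:,.0f}, in-kind: ${dollars_ship:,.0f}, "
      f"food delivered: {-res.fun:,.0f} kg")
```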

The Impact of Inventory Management on Stock-Outs of Essential Drugs in Sub-Saharan Africa: Secondary Analysis of a Field Experiment in Zambia
https://www.researchpad.co/article/5989da0bab0ee8fa60b779ad

Objective

To characterize the impact of widespread inventory management policies on stock-outs of essential drugs in Zambia’s health clinics and develop related recommendations.

Methods

Daily clinic storeroom stock levels of artemether-lumefantrine (AL) products in 2009–2010 were captured in 145 facilities through photography and manual transcription of paper forms, then used to determine historical stock-out levels and estimate demand patterns. Delivery lead-times and estimates of monthly facility accessibility were obtained through worker surveys. A simulation model was constructed and validated for predictive accuracy against historical stock-outs, then used to evaluate various changes potentially affecting product availability.

Findings

While almost no stock-outs of AL products were observed during Q4 2009, consistent with the primary analysis, up to 30% of surveyed facilities stocked out of some AL product during Q1 2010, despite ample inventory being simultaneously available at the national warehouse. Simulation experiments closely reproduced these results and linked them to the use of average past monthly issues and the failure to capture lead-time variability in current inventory control policies. Several inventory policy enhancements currently recommended by USAID | DELIVER were found to have limited impact on product availability.
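A deliberately simplified simulation sketch of the stock-out mechanism described above: an order-up-to policy whose target is based on average past monthly issues, with a two-month delivery lead time, compared against the same policy with an added buffer. Demand figures and parameters are illustrative, not the Zambian field data.

```python
# Toy clinic inventory simulation: issues-based ordering with a delivery lead time.
import statistics

monthly_demand = [80, 70, 60, 60, 70, 90, 120, 150, 160, 140, 110, 90]  # seasonal peak

def simulate(buffer_months=0.0, lead_months=2, review_months=1):
    stock, stockout_months = 200.0, 0
    pipeline = [0.0] * lead_months          # orders in transit, oldest first
    history = [100.0, 100.0, 100.0]         # past monthly issues used for forecasting
    for demand in monthly_demand:
        stock += pipeline.pop(0)            # receive the order placed lead_months ago
        avg_issues = statistics.mean(history[-3:])
        target = avg_issues * (lead_months + review_months + buffer_months)
        on_order = stock + sum(pipeline)
        pipeline.append(max(target - on_order, 0.0))   # place this month's order
        stockout_months += demand > stock
        issued = min(stock, demand)         # issues understate demand when stocked out
        stock -= issued
        history.append(issued)
    return stockout_months

print("stock-out months, no buffer:  ", simulate(buffer_months=0.0))
print("stock-out months, 2-mo buffer:", simulate(buffer_months=2.0))
```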

Conclusions

Inventory control policies widely recommended and used for distributing medicines in sub-Saharan Africa directly account for a substantial fraction of stock-outs observed in common situations involving demand seasonality and facility access interruptions. Developing central capabilities in peripheral demand forecasting and inventory control is critical. More rigorous independent peer-reviewed research on pharmaceutical supply chain management in low-income countries is needed.
