875 results for Cost of maintenance
Abstract:
Background Cost-effectiveness studies have increasingly been part of decision processes for incorporating new vaccines into the Brazilian National Immunisation Program. This study aimed to evaluate the cost-effectiveness of the 10-valent pneumococcal conjugate vaccine (PCV10) in the universal childhood immunisation programme in Brazil. Methods A decision-tree analytical model based on the ProVac Initiative pneumococcus model was used, following 25 successive cohorts from birth until 5 years of age. Two strategies were compared: (1) status quo and (2) universal childhood immunisation programme with PCV10. Epidemiological and cost estimates for pneumococcal disease were based on National Health Information Systems and the literature. A 'top-down' costing approach was employed. Costs are reported in 2004 Brazilian reals. Costs and benefits were discounted at 3%. Results 25 years after implementing the PCV10 immunisation programme, 10 226 deaths, 360 657 disability-adjusted life years (DALYs), 433 808 hospitalisations and 5 117 109 outpatient visits would be avoided. The cost of the immunisation programme would be R$10 674 478 765, and the expected savings on direct medical costs and family costs would be R$1 036 958 639 and R$209 919 404, respectively. This resulted in an incremental cost-effectiveness ratio of R$778 145/death avoided and R$22 066/DALY avoided from the societal perspective. Conclusion The PCV10 universal infant immunisation programme is a cost-effective intervention (1-3 times GDP per capita/DALY avoided). Owing to the uncertainty in burden-of-disease data, as well as unclear long-term vaccine effects, surveillance systems to monitor the long-term effects of this programme will be essential.
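For orientation, the ICER reported above is the programme's net cost (vaccination costs minus averted direct medical and family costs) divided by the health effects averted, with both streams discounted at 3%. The sketch below shows that arithmetic in minimal form; all values and the yearly breakdown are invented for illustration, since reproducing the published figures exactly would require the model's internal cohort-by-cohort discounting.

```python
# Minimal sketch of the ICER arithmetic used in cost-effectiveness models of
# this kind. All values below are illustrative, not taken from the ProVac
# model or the Brazilian data.

def discounted(stream, rate=0.03):
    """Present value of a yearly stream of costs or effects at `rate`."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

def icer(programme_cost, averted_costs, effects_averted):
    """Net cost per health effect averted (societal perspective)."""
    return (programme_cost - averted_costs) / effects_averted

years = 25
cost = discounted([100.0] * years)       # yearly vaccination costs (hypothetical)
savings = discounted([12.0] * years)     # averted medical + family costs
dalys = discounted([3.0] * years)        # DALYs averted per year
print(f"ICER: {icer(cost, savings, dalys):.2f} per DALY averted")
```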
Abstract:
Asset Management (AM) is a set of procedures operable at the strategic-tactical-operational level for the management of a physical asset's performance, associated risks and costs within its whole life-cycle. AM combines the engineering, managerial and informatics points of view. In addition to internal drivers, AM is driven by the demands of customers (social pull) and regulators (environmental mandates and economic considerations). AM can follow either a top-down or a bottom-up approach. Considering rehabilitation planning at the bottom-up level, the main issue is to rehabilitate the right pipe at the right time with the right technique. Finding the right pipe may be possible and practicable, but determining the timeliness of the rehabilitation and choosing the technique with which to rehabilitate is far less straightforward. It is a truism that rehabilitating an asset too early is unwise, just as doing it too late may entail extra expenses en route, in addition to the cost of the rehabilitation exercise per se. One is confronted with a typical 'Hamlet-esque' dilemma: 'to repair or not to repair'; or, put another way, 'to replace or not to replace'. The decision in this case is governed by three factors, not necessarily interrelated: quality of customer service, costs and budget over the life cycle of the asset in question. The goal of replacement planning is to find the juncture in the asset's life cycle where the cost of replacement is balanced by the rising maintenance costs and the declining level of service. System maintenance aims at improving performance and keeping the asset in good working condition for as long as possible. Effective planning is used to target maintenance activities to meet these goals and minimize costly exigencies. The main objective of this dissertation is to develop a process model for asset replacement planning. The aim of the model is to determine the optimal pipe replacement year by comparing, over time, the annual operating and maintenance costs of the existing asset with the annuity of the investment in a new equivalent pipe at the best market price. It is proposed that risk cost provides an appropriate framework for deciding the balance between investment in replacing an asset and operational expenditure on maintaining it. The model describes a practical approach to estimating when an asset should be replaced. A comprehensive list of criteria to be considered is outlined, the main criterion being a vis-à-vis comparison of maintenance and replacement expenditures. The costs of maintaining the assets should be described by a cost function related to the asset type, the risks to the safety of people and property owing to the declining condition of the asset, and the predicted frequency of failures. The cost functions reflect the condition of the existing asset at the time the decision to maintain or replace is taken: age, level of deterioration and risk of failure. The process model is applied to the wastewater network of Oslo, the capital city of Norway, and uses available real-world information to forecast life-cycle costs of maintenance and rehabilitation strategies and to support infrastructure management decisions. The case study provides insight into the various definitions of 'asset lifetime': service life, economic life and physical life.
The results recommend that one common lifetime value should not be applied to all the pipelines in the stock for long-term investment planning; rather, it would be wiser to define different values for different cohorts of pipelines, to reduce the uncertainties associated with generalisations made for simplification. It is envisaged that the more criteria the municipality is able to include when estimating maintenance costs for the existing assets, the more precise the estimation of the expected service life will be. The ability to include social costs makes it possible to compute the asset life based not only on its physical characterisation, but also on the sensitivity of network areas to the social impact of failures. This type of economic analysis is very sensitive to model parameters that are difficult to determine accurately. The main value of this approach is the effort to demonstrate that it is possible to include, in decision-making, factors such as the cost of the risk associated with a decline in the level of performance, the level of this deterioration and the asset's depreciation rate, without taking age as the sole criterion for replacement decisions.
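The core comparison the dissertation describes, replacing a pipe in the first year in which its annual operating, maintenance and risk costs exceed the annuity of investing in a new equivalent pipe, can be sketched as follows. The cost function and all parameter values are illustrative assumptions, not values from the Oslo case study.

```python
# Sketch of the replacement-timing rule described above: replace the pipe in
# the first year where its annual O&M plus expected risk cost exceeds the
# annuity of a new equivalent pipe. All parameters are illustrative.

def annuity(investment, rate, lifetime):
    """Equal annual payment equivalent to `investment` over `lifetime` years."""
    return investment * rate / (1 - (1 + rate) ** -lifetime)

def annual_cost(age, base=500.0, growth=0.06, failure_loss=200.0, failure_rate=0.02):
    """O&M plus expected risk cost, both rising with asset age."""
    maintenance = base * (1 + growth) ** age
    expected_risk = failure_loss * failure_rate * age  # failures grow with age
    return maintenance + expected_risk

new_pipe_annuity = annuity(investment=20_000.0, rate=0.04, lifetime=60)
replacement_year = next(
    age for age in range(1, 200) if annual_cost(age) > new_pipe_annuity
)
print(f"Annuity of new pipe: {new_pipe_annuity:.0f}/yr; replace at age {replacement_year}")
```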
Abstract:
How to evaluate the cost-effectiveness of repair/retrofit interventions versus demolition/replacement, and what level of shaking intensity the chosen repair/retrofit technique can sustain, are open questions affecting the pre-earthquake prevention, post-earthquake emergency and reconstruction phases. The (mis)conception that the cost of retrofit interventions increases linearly with the achieved seismic performance (%NBS) often discourages stakeholders from considering repair/retrofit options in a post-earthquake damage situation. Similarly, in the pre-earthquake phase, the minimum (by-law) level of %NBS might be targeted, leading in some cases to no action. Furthermore, the performance measure compelling owners to take action, the %NBS, is generally evaluated deterministically. Because it does not directly reflect epistemic and aleatory uncertainties, the assessment can result in misleading confidence in the expected performance. The present study aims to contribute to the delicate decision-making process of repair/retrofit versus demolition/replacement by developing a framework to assist stakeholders in evaluating the long-term losses and benefits of an increment in their initial investment (targeted retrofit level), and by highlighting the uncertainties hidden behind a deterministic approach. For a pre-1970 case-study building, different retrofit solutions are considered, targeting different levels of %NBS, and the actual probability of reaching Collapse under a suite of ground motions is evaluated, providing a correlation between %NBS and risk. Both simplified and probabilistic loss modelling are then undertaken to study the relationship between %NBS and expected direct and indirect losses.
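The trade-off such a framework evaluates can be illustrated with a toy calculation: each option carries an upfront cost and an assumed annual collapse probability that falls as the targeted %NBS rises, and options are compared on upfront cost plus the present value of expected losses. All figures below are invented placeholders, not results from the case-study building.

```python
# Illustrative sketch of the retrofit vs demolition/replacement trade-off:
# upfront cost plus present value of expected annual losses. All numbers are
# invented for illustration.

def pv_expected_loss(p_annual, loss, rate=0.05, horizon=50):
    """NPV of the expected annual loss over the planning horizon."""
    return sum(p_annual * loss / (1 + rate) ** t for t in range(1, horizon + 1))

options = {
    # name: (upfront cost, assumed annual collapse probability)
    "no action (34%NBS)": (0.0, 2e-3),
    "retrofit to 67%NBS": (1.5e6, 6e-4),
    "retrofit to 100%NBS": (2.8e6, 2e-4),
    "demolish and replace": (6.0e6, 1e-4),
}

collapse_loss = 1.2e7  # direct + indirect losses given collapse (assumed)
for name, (upfront, p) in options.items():
    total = upfront + pv_expected_loss(p, collapse_loss)
    print(f"{name}: expected lifetime cost = {total:,.0f}")
```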
Tackling of unhealthy diets, physical inactivity, and obesity: health effects and cost-effectiveness
Abstract:
The obesity epidemic is spreading to low-income and middle-income countries as a result of new dietary habits and sedentary ways of life, fuelling chronic diseases and premature mortality. In this report we present an assessment of public health strategies designed to tackle behavioural risk factors for chronic diseases that are closely linked with obesity, including aspects of diet and physical inactivity, in Brazil, China, India, Mexico, Russia, and South Africa. England was included for comparative purposes. Several population-based prevention policies can be expected to generate substantial health gains while entirely or largely paying for themselves through future reductions of health-care expenditures. These strategies include health information and communication strategies that improve population awareness about the benefits of healthy eating and physical activity; fiscal measures that increase the price of unhealthy food content or reduce the cost of healthy foods rich in fibre; and regulatory measures that improve nutritional information or restrict the marketing of unhealthy foods to children. A package of measures for the prevention of chronic diseases would deliver substantial health gains, with a very favourable cost-effectiveness profile.
Abstract:
Long-term side-effects and the cost of HIV treatment motivate the development of simplified maintenance strategies. Monotherapy with ritonavir-boosted lopinavir (LPV/r-MT) is the most widely studied strategy. However, the efficacy of LPV/r-MT in compartments remains to be shown.
Abstract:
The economic burden associated with osteoporosis is considerable. As such, cost-effectiveness analyses are important contributors to the diagnostic and therapeutic decision-making process. The aim of this study was to review the cost effectiveness of treating post-menopausal osteoporosis with bisphosphonates and identify the key factors that influence the cost effectiveness of such treatment in the Swiss setting. A systematic search of databases (MEDLINE, EMBASE and the Cochrane Library) was conducted to identify published literature on the cost effectiveness of bisphosphonates in post-menopausal osteoporosis in the Swiss setting. Outcomes were compared with similar studies in Western European countries. Three cost-effectiveness studies of bisphosphonates in this patient population were identified; all were from a healthcare payer perspective. Outcomes showed that, relative to no treatment, treatment with oral bisphosphonates was predicted to be cost saving for most women aged ≥70 years with osteoporosis or at least one risk factor for fracture, and cost effective for women aged ≥75 years without prior fracture when used as a component of a population-based screen-and-treat programme. Results were most sensitive to changes in fracture risk, cost of fractures, cost of treatment, nursing home admissions and adherence with treatment. Swiss results were generally comparable to those in other European settings. Assuming similar clinical efficacy, lowering treatment cost (through the use of price-reduced brand-name or generic drugs) and/or improving adherence should both contribute to further improving the cost effectiveness of bisphosphonates in women with post-menopausal osteoporosis. Published evidence indicates that bisphosphonates are estimated to be similarly cost effective or cost saving in most treatment scenarios of post-menopausal osteoporosis in Switzerland and in neighbouring European countries.
Abstract:
The Michigan Department of Transportation (MDOT) is evaluating upgrading its portion of the Wolverine Line between Chicago and Detroit to accommodate high speed rail. This will entail upgrading the track to allow trains to run at speeds in excess of 110 miles per hour (mph). An important component of this upgrade will be to assess the requirement for ballast material for high speed rail. In the event that the existing ballast materials do not meet specifications for higher speed trains, additional ballast will be required. The purpose of this study, therefore, is to investigate the current MDOT railroad ballast quality specifications and compare them with national and international specifications for use on high speed rail lines. The study found that while MDOT has quality specifications for railroad ballast, it does not have any for high speed rail. In addition, the American Railway Engineering and Maintenance-of-Way Association (AREMA), while also having specifications for railroad ballast, does not have specific specifications for high speed rail lines. The AREMA aggregate specifications for ballast include the following tests: (1) LA Abrasion, (2) Percent Moisture Absorption, (3) Flat and Elongated Particles, and (4) Sulfate Soundness. Internationally, some countries do require a higher standard for high speed rail, such as the Los Angeles (LA) Abrasion test applied with a more demanding performance threshold, and the Micro-Deval test, which is used to determine the maximum speed at which a high speed train can operate. Since there are no existing MDOT ballast specifications for high speed rail, it is assumed that aggregate ballast specifications for the Wolverine Line will use the higher international specifications. The Wolverine Line, however, is located in southern Michigan, a region of sedimentary rocks that generally do not meet the existing MDOT ballast specifications. The investigation found that there were only 12 quarries in Michigan that meet the MDOT specification. Of these 12 quarries, six were igneous or metamorphic rock quarries, while six were carbonate quarries. Of the six carbonate quarries, four were located in the Lower Peninsula and two in the Upper Peninsula. Two of the carbonate quarries were located in close proximity to the Wolverine Line, while the remaining quarries were at a significant haulage distance. In either case, the cost of haulage becomes an important consideration. In this regard, four of the quarries have lake terminals allowing water transportation to downstate ports. The Upper Peninsula also has a significant amount of metal mining in both igneous and metamorphic rock, generating significant amounts of waste rock that could be used as ballast material. The main drawback, however, is the distance to the Wolverine Line. One potential source is Cliffs Natural Resources, which operates two large surface mines in the Marquette area with rail and water transportation to both Lake Superior and Lake Michigan. Both mines extract rock with a compressive strength far in excess of most ballast materials used in the United States, which would make excellent ballast material. Discussions with Cliffs, however, indicated that, owing to environmental concerns, they would most likely not be interested in producing a ballast material.
In the United States, carbonate aggregates, while used for ballast, often do not meet ballast specifications, in addition to the problem of particle degradation that can lead to fouling and cementation issues. Thus, many carbonate aggregate quarries in close proximity to railroads are not used. Since Michigan has a significant number of carbonate quarries, the research also investigated the dynamic properties of aggregate as a possible additional test of aggregate ballast quality. The dynamic strength of a material can be assessed using a split Hopkinson pressure bar (SHPB). The SHPB has traditionally been used to assess the dynamic properties of metals, but over the past 20 years it has also been used to assess the dynamic properties of brittle materials such as ceramics and rock. In addition, the wear properties of metals have been related to their dynamic properties. Wear, or breakdown, of railroad ballast is one of the main problems with ballast material, owing to the dynamic loading generated by trains, which will be significantly higher for high speed rail. Previous research has indicated that the Port Inland quarry along Lake Michigan in the southern Upper Peninsula has significant dynamic properties that might make it potentially usable as an aggregate for high speed rail. The dynamic strength testing conducted in this research indicates that the Port Inland limestone in fact has a dynamic strength close to that of igneous rocks and much higher than that of other carbonate rocks in the Great Lakes region. It is recommended that further research be conducted to investigate the Port Inland limestone as a high speed ballast material.
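For reference, SHPB gauge records are commonly reduced to specimen stress and strain with the classical one-wave analysis, as in the sketch below; the formulas are standard, but the signals, bar properties and specimen dimensions shown are placeholders rather than data from this study.

```python
# Sketch of the standard one-wave SHPB data reduction: specimen stress from
# the transmitted wave, strain rate from the reflected wave, strain by time
# integration. Signals and dimensions below are placeholders.
import numpy as np

def shpb_one_wave(eps_transmitted, eps_reflected, dt,
                  E_bar, A_bar, A_spec, L_spec, c_bar):
    """Return specimen stress, strain-rate and strain histories."""
    stress = E_bar * (A_bar / A_spec) * eps_transmitted   # sigma_s(t)
    strain_rate = -2.0 * c_bar / L_spec * eps_reflected   # d(eps_s)/dt
    strain = np.cumsum(strain_rate) * dt                  # eps_s(t)
    return stress, strain_rate, strain

# Placeholder gauge signals (in practice, from the bar strain gauges).
t = np.linspace(0.0, 200e-6, 2000)
dt = t[1] - t[0]
eps_t = 1e-3 * np.sin(np.pi * t / t[-1]) ** 2
eps_r = -5e-4 * np.sin(np.pi * t / t[-1]) ** 2

stress, rate, strain = shpb_one_wave(
    eps_t, eps_r, dt,
    E_bar=200e9, A_bar=2.85e-4, A_spec=1.9e-4, L_spec=0.02, c_bar=5000.0)
print(f"Peak specimen stress = {stress.max() / 1e6:.0f} MPa")
```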
Abstract:
BACKGROUND The Fractional Flow Reserve Versus Angiography for Multivessel Evaluation (FAME) 2 trial demonstrated a significant reduction in subsequent coronary revascularization among patients with stable angina and at least 1 coronary lesion with a fractional flow reserve ≤0.80 who were randomized to percutaneous coronary intervention (PCI) compared with best medical therapy. The economic and quality-of-life implications of PCI in the setting of an abnormal fractional flow reserve are unknown. METHODS AND RESULTS We calculated the cost of the index hospitalization based on initial resource use and follow-up costs based on Medicare reimbursements. We assessed patient utility using the EQ-5D health survey with US weights at baseline and 1 month and projected quality-adjusted life-years assuming a linear decline over 3 years in the 1-month utility improvements. We calculated the incremental cost-effectiveness ratio based on cumulative costs over 12 months. Initial costs were significantly higher for PCI in the setting of an abnormal fractional flow reserve than with medical therapy ($9927 versus $3900, P<0.001), but the $6027 difference narrowed over 1-year follow-up to $2883 (P<0.001), mostly because of the cost of subsequent revascularization procedures. Patient utility was improved more at 1 month with PCI than with medical therapy (0.054 versus 0.001 units, P<0.001). The incremental cost-effectiveness ratio of PCI was $36 000 per quality-adjusted life-year, which was robust in bootstrap replications and in sensitivity analyses. CONCLUSIONS PCI of coronary lesions with reduced fractional flow reserve improves outcomes and appears economically attractive compared with best medical therapy among patients with stable angina.
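The reported ICER can be approximately reconstructed from the figures in the abstract: a 0.053-unit utility gain at 1 month, assumed to decline linearly to zero over 3 years, contributes a triangular QALY gain, and dividing the 12-month cost difference by that gain lands near the published $36,000 per quality-adjusted life-year. This is a back-of-the-envelope check, not the trial's bootstrap analysis of patient-level data.

```python
# Back-of-the-envelope reconstruction of the reported ICER from figures in
# the abstract (not the trial's patient-level bootstrap analysis).
utility_gain = 0.054 - 0.001        # 1-month utility improvement, PCI vs MT
qaly_gain = utility_gain * 3 / 2    # triangle: linear decline to 0 over 3 yr
cost_diff_12mo = 2883               # cumulative 12-month cost difference ($)
icer = cost_diff_12mo / qaly_gain
print(f"QALY gain = {qaly_gain:.4f}; ICER = ${icer:,.0f} per QALY")
# prints roughly $36,000 per QALY, in line with the reported value
```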
Abstract:
QUESTION UNDER STUDY The aim of this study was to evaluate the cost-effectiveness of ticagrelor and generic clopidogrel as add-on therapy to acetylsalicylic acid (ASA) in patients with acute coronary syndrome (ACS), from a Swiss perspective. METHODS Based on the PLATelet inhibition and patient Outcomes (PLATO) trial, one-year mean healthcare costs per patient treated with ticagrelor or generic clopidogrel were analysed from a payer perspective in 2011. A two-part decision-analytic model estimated treatment costs, quality-adjusted life years (QALYs), life years and the cost-effectiveness of ticagrelor and generic clopidogrel in patients with ACS over up to a lifetime, at a discount rate of 2.5% per annum. Sensitivity analyses were performed. RESULTS Over a patient's lifetime, treatment with ticagrelor generates an additional 0.1694 QALYs and 0.1999 life years at a cost of CHF 260 compared with generic clopidogrel. This results in an incremental cost-effectiveness ratio (ICER) of CHF 1,536 per QALY and CHF 1,301 per life year gained. Ticagrelor dominated generic clopidogrel over the five-year and one-year periods, with treatment generating cost savings of CHF 224 and CHF 372 while gaining 0.0461 and 0.0051 QALYs and 0.0517 and 0.0062 life years, respectively. Univariate sensitivity analyses confirmed the dominant position of ticagrelor in the first five years, and probabilistic sensitivity analyses showed a high probability of cost-effectiveness over a lifetime. CONCLUSION During the first five years after ACS, treatment with ticagrelor dominates generic clopidogrel in Switzerland. Over a patient's lifetime, ticagrelor is highly cost-effective compared with generic clopidogrel, with ICERs well below commonly accepted willingness-to-pay thresholds.
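A small sketch of the decision logic behind these results: when an intervention is both cheaper and more effective it dominates its comparator; otherwise an ICER is reported. The figures below are those quoted in the abstract, and the computed lifetime ICER agrees with the reported CHF 1,536 per QALY to within rounding.

```python
# ICER / dominance classification using the incremental figures quoted in
# the abstract; agreement with the published ICER is to within rounding.
def assess(delta_cost, delta_qaly):
    """Classify an intervention versus its comparator."""
    if delta_cost <= 0 and delta_qaly > 0:
        return "dominant (cheaper and more effective)"
    return f"ICER = CHF {delta_cost / delta_qaly:,.0f} per QALY"

print("lifetime:", assess(260.0, 0.1694))   # about CHF 1,535/QALY (reported 1,536)
print("5 years: ", assess(-224.0, 0.0461))  # dominant
print("1 year:  ", assess(-372.0, 0.0051))  # dominant
```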
Abstract:
Congenital Adrenal Hyperplasia (CAH), due to 21-hydroxylase deficiency, has an estimated incidence of 1:15,000 births and can result in death, salt-wasting crisis or impaired growth. It has been proposed that early diagnosis and treatment of infants detected by newborn screening for CAH will decrease the incidence of mortality and morbidity in the affected population. The Texas Department of Health (TDH) began mandatory screening for CAH in June 1989, and Texas is one of fourteen states to provide neonatal screening for the disorder. The purpose of this study was to describe the cost and effect of screening for CAH in Texas during 1994 and to compare cases first detected by screening with cases first detected clinically between January 1, 1990 and December 31, 1994. This study used a longitudinal descriptive research design. The data were secondary, previously collected by the Texas Department of Health. Along with the descriptive study, an economic analysis was done. The cost of the program was defined, measured and valued for four phases of screening: specimen collection, specimen testing, follow-up and diagnostic evaluation. There were 103 infants with Classical CAH diagnosed during the study, and 71 of the cases had the more serious Salt-Wasting form of the disease. Of the infants diagnosed with Classical CAH, 60% of the cases were first detected by screening and 40% were first detected because of clinical findings before the screening results were returned. The base-case cost of adding newborn screening to an existing program (excluding the cost of specimen collection) was $357,989 per 100,000 infants. The cost per case of Classical CAH diagnosed, based on the number of infants first detected by screening in 1994, was $126,892. There were 42 infants diagnosed with the more benign Nonclassical form of the disease. When these cases were included in the total, the cost per case of Congenital Adrenal Hyperplasia diagnosed was $87,848.
Abstract:
Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci of a given effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs. Nine linkage disequilibrium tests were examined by simulation. Five tests involved selecting isolated unrelated individuals, while four involved selecting parent-child trios (TDT). All nine tests were found to identify disequilibrium at the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency increased the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased it. Discordant sampling was used for several of the tests; it was found that the more stringent the sampling, the greater the power to detect disequilibrium in a sample of given size. The power to detect disequilibrium was not affected by the presence of polygenic effects. When the trait locus had more than two trait alleles, the power of the tests reached a maximum of less than one. For the simulation methods used here, when there were more than two trait alleles there was a probability, equal to one minus the heterozygosity of the marker locus, that both trait alleles were in disequilibrium with the same marker allele, rendering the marker uninformative for disequilibrium. The five tests using isolated unrelated individuals were found to have excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency and increased disequilibrium. Polygenic effects did not affect the error rates. The TDT (Transmission Disequilibrium Test)-based tests were not liable to any increase in error rates. For all sample ascertainment costs, for recent mutations (<100 generations), linkage disequilibrium tests were less expensive to carry out than the variance component test. Candidate gene scans saved even more money. The use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
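For context, the classic biallelic TDT referenced above is a McNemar-type test comparing how often heterozygous parents transmit an allele (b) versus do not transmit it (c); the thesis studies quantitative-trait variants of this idea. A minimal sketch with invented counts:

```python
# Classic biallelic TDT statistic: among heterozygous parents, compare
# transmissions (b) with non-transmissions (c) of the allele of interest.
# The counts below are invented for illustration.
from scipy.stats import chi2

def tdt(b, c):
    """TDT chi-square statistic and p-value (1 degree of freedom)."""
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

stat, p = tdt(b=62, c=38)
print(f"TDT chi-square = {stat:.2f}, p = {p:.4f}")
```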
Abstract:
PURPOSE To evaluate and compare the costs of MRI-guided and CT-guided cervical nerve root infiltration for the minimally invasive treatment of radicular neck pain. MATERIALS AND METHODS Between September 2009 and April 2012, 22 patients (9 men, 13 women; mean age: 48.2 years) underwent MRI-guided (1.0 Tesla, Panorama HFO, Philips) single-site periradicular cervical nerve root infiltration with 40 mg triamcinolone acetonide. A further 64 patients (34 men, 30 women; mean age: 50.3 years) were treated under CT fluoroscopic guidance (Somatom Definition 64, Siemens). The mean overall costs were calculated as the sum of the prorated costs of equipment use (purchase, depreciation, maintenance, and energy costs), personnel costs and expenditure for disposables that were identified for MRI- and CT-guided procedures. Additionally, the cost of ultrasound guidance was calculated. RESULTS The mean intervention time was 24.9 min. (range: 12 - 36 min.) for MRI-guided infiltration and 19.7 min. (range: 5 - 54 min.) for CT-guided infiltration. The average total costs per patient were EUR 240 for MRI-guided interventions and EUR 124 for CT-guided interventions. These were (MRI/CT guidance) EUR 150/60 for equipment use, EUR 46/40 for personnel, and EUR 44/25 for disposables. The mean overall cost of ultrasound guidance was EUR 76. CONCLUSION Cervical nerve root infiltration using MRI guidance is still about twice as expensive as infiltration using CT guidance. However, since it does not involve radiation exposure for patients and personnel, MRI-guided nerve root infiltration may become a promising alternative to the CT-guided procedure, especially since a further price decrease is expected for MRI devices and MR-compatible disposables. In contrast, ultrasound remains the less expensive method for nerve root infiltration guidance.
Abstract:
BACKGROUND The cost-effectiveness of routine viral load (VL) monitoring of HIV-infected patients on antiretroviral therapy (ART) depends on various factors that differ between settings and across time. Low-cost point-of-care (POC) tests for VL are in development and may make routine VL monitoring affordable in resource-limited settings. We developed a software tool to study the cost-effectiveness of switching to second-line ART with different monitoring strategies, and focused on POC-VL monitoring. METHODS We used a mathematical model to simulate cohorts of patients from start of ART until death. We modeled 13 strategies (no 2nd-line, clinical, CD4 (with or without targeted VL), POC-VL, and laboratory-based VL monitoring, with different frequencies). We included a scenario with identical failure rates across strategies, and one in which routine VL monitoring reduces the risk of failure. We compared lifetime costs and averted disability-adjusted life-years (DALYs). We calculated incremental cost-effectiveness ratios (ICER). We developed an Excel tool to update the results of the model for varying unit costs and cohort characteristics, and conducted several sensitivity analyses varying the input costs. RESULTS Introducing 2nd-line ART had an ICER of US$1651-1766/DALY averted. Compared with clinical monitoring, the ICER of CD4 monitoring was US$1896-US$5488/DALY averted and VL monitoring US$951-US$5813/DALY averted. We found no difference between POC- and laboratory-based VL monitoring, except for the highest measurement frequency (every 6 months), where laboratory-based testing was more effective. Targeted VL monitoring was on the cost-effectiveness frontier only if the difference between 1st- and 2nd-line costs remained large, and if we assumed that routine VL monitoring does not prevent failure. CONCLUSION Compared with the less expensive strategies, the cost-effectiveness of routine VL monitoring essentially depends on the cost of 2nd-line ART. Our Excel tool is useful for determining optimal monitoring strategies for specific settings, with specific sex- and age-distributions and unit costs.
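The 'cost-effectiveness frontier' mentioned in the results can be computed with a standard procedure: sort strategies by cost, drop strongly dominated strategies (more costly yet averting fewer DALYs), remove extended dominance so that incremental ICERs increase along the frontier, and report the remaining ICERs. The sketch below uses invented per-patient strategy data, not outputs of the authors' model or Excel tool.

```python
# Standard construction of a cost-effectiveness frontier. Per-patient costs
# and DALYs averted below are invented for illustration.

def frontier(strategies):
    """strategies: (name, lifetime_cost, dalys_averted) tuples.
    Returns the non-dominated strategies in order of increasing cost."""
    # Keep only strategies strictly more effective than every cheaper one
    # (removes strong dominance).
    kept = []
    for s in sorted(strategies, key=lambda s: s[1]):
        if not kept or s[2] > kept[-1][2]:
            kept.append(s)
    # Remove extended dominance: incremental ICERs must increase.
    changed = True
    while changed:
        changed = False
        for i in range(1, len(kept) - 1):
            lo, mid, hi = kept[i - 1], kept[i], kept[i + 1]
            if (mid[1] - lo[1]) / (mid[2] - lo[2]) > \
               (hi[1] - mid[1]) / (hi[2] - mid[2]):
                del kept[i]      # mid is extendedly dominated
                changed = True
                break
    return kept

strategies = [
    ("no 2nd-line ART", 1000.0, 0.00),
    ("clinical monitoring", 1400.0, 0.20),
    ("CD4 monitoring", 1900.0, 0.28),
    ("POC-VL monitoring", 2100.0, 0.45),
]
f = frontier(strategies)
for prev, cur in zip(f, f[1:]):
    icer = (cur[1] - prev[1]) / (cur[2] - prev[2])
    print(f"{cur[0]} vs {prev[0]}: ${icer:,.0f} per DALY averted")
```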
Abstract:
PURPOSE Rituximab maintenance therapy has been shown to improve progression-free survival in patients with follicular lymphoma; however, the optimal duration of maintenance treatment remains unknown. PATIENTS AND METHODS Two hundred seventy patients with untreated, relapsed, stable, or chemotherapy-resistant follicular lymphoma were treated with four doses of rituximab monotherapy in weekly intervals (375 mg/m(2)). Patients achieving at least a partial response were randomly assigned to receive maintenance therapy with one infusion of rituximab every 2 months, either on a short-term schedule (four administrations) or a long-term schedule (maximum of 5 years or until disease progression or unacceptable toxicity). The primary end point was event-free survival (EFS). Progression-free survival, overall survival (OS), and toxicity were secondary end points. Comparisons between the two arms were performed using the log-rank test for survival end points. RESULTS One hundred sixty-five patients were randomly assigned to the short-term (n = 82) or long-term (n = 83) maintenance arms. Because of the low event rate, the final analysis was performed after 95 events had occurred, which was before the targeted event number of 99 had been reached. At a median follow-up period of 6.4 years, the median EFS was 3.4 years (95% CI, 2.1 to 5.3) in the short-term arm and 5.3 years (95% CI, 3.5 to not available) in the long-term arm (P = .14). Patients in the long-term arm experienced more adverse effects than did those in the short-term arm, with 76% v 50% of patients with at least one adverse event (P < .001), five versus one patient with grade 3 and 4 infections, and three versus zero patients discontinuing treatment because of unacceptable toxicity, respectively. There was no difference in OS between the two groups. CONCLUSION Long-term rituximab maintenance therapy does not improve EFS, which was the primary end point of this trial, or OS, and was associated with increased toxicity.
Abstract:
BACKGROUND Limitations in the primary studies constitute one important factor to be considered in the grading of recommendations assessment, development, and evaluation (GRADE) system of rating quality of evidence. However, in network meta-analysis (NMA), such evaluation poses a special challenge because each network estimate receives different amounts of contribution from various studies via direct as well as indirect routes, and because some biases have directions whose repercussions in the network can be complicated. FINDINGS In this report we use the NMA of maintenance pharmacotherapy of bipolar disorder (17 interventions, 33 studies) and demonstrate how to quantitatively evaluate the impact of study limitations using netweight, a Stata command for NMA. For each network estimate, the percentage of contributions from direct comparisons at high, moderate or low risk of bias was quantified. This method has proven flexible enough to accommodate complex biases with direction, such as the one due to the enrichment design seen in some trials of bipolar maintenance pharmacotherapy. CONCLUSIONS Using netweight, therefore, we can evaluate in a transparent and quantitative manner how the study limitations of individual studies in the NMA affect the quality of evidence of each network estimate, even when such limitations have clear directions.