969 results for sensitivity analyses
Abstract:
Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population, because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency.
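The classical part of this analysis is a short eigenvalue computation: the growth rate lambda is the dominant eigenvalue of the projection matrix, sensitivities follow from its left and right eigenvectors, and elasticities rescale them. The minimal sketch below uses a hypothetical 2x2 matrix and hypothetical costs (not the honeyeater or koala parameterizations) and caricatures the paper's cost-aware idea by dividing each sensitivity by a marginal cost per vital rate.

```python
import numpy as np

# Hypothetical 2-stage projection matrix: row 1 fecundities, row 2 survivals
A = np.array([[0.0, 1.2],
              [0.3, 0.85]])

lam_r, W = np.linalg.eig(A)
k = int(np.argmax(lam_r.real))
lam = lam_r[k].real                          # asymptotic growth rate lambda
w = W[:, k].real                             # right eigenvector: stable stage structure

lam_l, V = np.linalg.eig(A.T)
v = V[:, int(np.argmax(lam_l.real))].real    # left eigenvector: reproductive values

S = np.outer(v, w) / (v @ w)   # sensitivities d(lambda)/d(a_ij)
E = (A / lam) * S              # elasticities: proportional contributions to lambda

cost = np.array([[np.inf, 2.0],   # hypothetical cost of a unit change in each rate;
                 [1.0, 5.0]])     # inf marks a transition that cannot be managed
priority = S / cost               # crude cost-aware ranking (illustrative only)
print(lam, E, priority, sep="\n")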
Abstract:
This paper analyses the most important mechanisms of the Hungarian economy using a medium-sized quarterly macroeconomic model developed jointly by the Economic Policy Department of the Ministry of Finance and the Institute of Economics of the Hungarian Academy of Sciences. After introducing the fundamental principles of the modelling and the building blocks of the model, the authors use a scenario analysis to examine the effects of the main factors driving the macroeconomic and budgetary processes. The sources of uncertainty, defined in a broad sense, are categorized in three groups: changes in the external environment (e.g. the exchange rate), uncertainties in the behaviour of economic agents (e.g. in the speed of wage adjustment or the extent of consumption smoothing), and economic policy decisions (e.g. an increase in public sector wages). The macroeconomic consequences of these uncertainties are shown not to be independent of each other; for instance, the effect of an exchange rate shock is influenced by the speed of wage adjustment.
Abstract:
A general framework for an ecological model of the English Channel was described in the first of this pair of papers. In this study, it was used to investigate the sensitivity of the model to various factors: model structure, parameter values, boundary conditions and forcing variables. These sensitivity analyses show how important the quota formulation is for phytoplankton growth, particularly for dinoflagellates. They also stress the major influence of variables and parameters related to nitrogen. The role played by rivers, particularly the Seine, was investigated. Their influence on overall English Channel phytoplankton production appears relatively low, even though nutrient inputs determine the intensity of blooms in the Bay of Seine. The geographical position of the Seine's estuary makes it important for fluxes through the Straits of Dover. Finally, the multi-annual study highlights the general stability of the English Channel ecosystem. These global considerations are discussed and further improvements to the model are proposed.
Abstract:
The criterion, based on thermodynamic theory, that the climatic system tends to extremize some function has motivated several studies. In particular, special attention has been devoted to the possibility that the climate reaches an extremal rate of planetary entropy production. Because both radiative and material effects contribute to total planetary entropy production, climatic simulations obtained at the extremal rates of total, radiative or material entropy production are of interest in order to elucidate which of the three extremal assumptions agrees best with current data. In the present paper, these results have been obtained by applying a 2-dimensional (2-D) horizontal energy-balance box model with a few independent variables (surface temperature, cloud cover and material heat fluxes). In addition, climatic simulations for current conditions assuming a fixed cloud cover have been obtained. Finally, sensitivity analyses for both the variable-cloud and fixed-cloud models have been carried out.
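The extremal-entropy-production idea can be illustrated on a much smaller toy than the paper's 2-D box model: a two-box energy balance model in the Paltridge/Lorenz spirit, where the meridional heat flux F is chosen to maximize the entropy production of the transport. The linearized OLR coefficients and absorbed-solar values below are illustrative textbook-style numbers, not the paper's.

```python
import numpy as np

A, B = 203.3, 2.09      # linearized OLR: I = A + B*T (W m^-2, T in deg C)
S1, S2 = 300.0, 170.0   # absorbed solar in the warm and cold box (W m^-2)

def entropy_production(F):
    # Steady-state box temperatures given meridional heat flux F (W m^-2)
    T1 = (S1 - F - A) / B + 273.15   # warm box (K)
    T2 = (S2 + F - A) / B + 273.15   # cold box (K)
    return F * (1.0 / T2 - 1.0 / T1) # entropy production by heat transport

F_grid = np.linspace(0.0, 60.0, 601)
sigma = np.array([entropy_production(F) for F in F_grid])
F_mep = F_grid[np.argmax(sigma)]
print(f"MEP flux ~ {F_mep:.1f} W m^-2, sigma_max = {sigma.max():.4e} W m^-2 K^-1")
```

The maximum lies at an interior flux: too little transport produces a large temperature contrast but moves little heat, too much transport erases the contrast, and entropy production peaks in between.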
Abstract:
To estimate the possible direct effect of birth weight on blood pressure, it is conventional to condition on the mediator, current weight. Such conditioning can induce bias. Our aim was to assess the potential biasing effect of U, an unmeasured common cause of current weight and blood pressure, on the estimate of the controlled direct effect of birth weight on blood pressure, with the help of sensitivity analyses. We used data from a school-based study conducted in Switzerland in 2005-2006 (n = 3,762; mean age = 12.3 years). A small negative association was observed between birth weight and systolic blood pressure (linear regression coefficient β_bw = -0.3 mmHg/kg, 95% confidence interval: -0.9, 0.3). The association was strengthened upon adjustment for current weight (β_bw|C = -1.5 mmHg/kg, 95% confidence interval: -2.1, -0.9). Sensitivity analyses revealed that the negative conditional association was explained by U only if U was relatively strongly associated with blood pressure and if there was a large difference in the prevalence of U between low-birth-weight and normal-birth-weight children. This weakens the hypothesis that the negative relationship between birth weight and blood pressure arises only from collider-stratification bias induced by conditioning on current weight.
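A minimal sketch of this kind of sensitivity analysis, assuming a binary exposure (low vs normal birth weight) and a binary unmeasured confounder U, using the simple VanderWeele-style bias formula bias = gamma * (P(U=1 | low bw) - P(U=1 | normal bw)), which is subtracted from the observed conditional estimate. All grid values are illustrative, not the paper's.

```python
import numpy as np

observed = -1.5                         # observed conditional estimate (mmHg)
gamma = np.arange(-10.0, 0.0, 1.0)      # effect of U on SBP (mmHg); here U lowers SBP
prev_diff = np.arange(0.0, 0.55, 0.05)  # excess prevalence of U in the low-bw group

G, D = np.meshgrid(gamma, prev_diff)
corrected = observed - G * D            # bias-corrected controlled direct effect

# Combinations of (gamma, prevalence difference) that would fully explain
# the observed negative association, i.e. drive the corrected effect to ~0:
explains = np.isclose(corrected, 0.0, atol=0.1)
for g, d in zip(G[explains], D[explains]):
    print(f"gamma={g:+.0f} mmHg, prevalence difference={d:.2f} -> effect ~ 0")
```

Only combinations with a strong U-blood-pressure effect and a large prevalence difference appear in the output, mirroring the abstract's conclusion.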
Parametric Sensitivity Analysis of the Most Recent Computational Models of Rabbit Cardiac Pacemaking
Abstract:
The cellular basis of cardiac pacemaking activity, and specifically the quantitative contributions of particular mechanisms, is still debated. Reliable computational models of sinoatrial nodal (SAN) cells may provide mechanistic insights, but competing models are built from different data sets and with different underlying assumptions. To understand quantitative differences between alternative models, we performed thorough parameter sensitivity analyses of the SAN models of Maltsev & Lakatta (2009) and Severi et al. (2012). Model parameters were randomized to generate a population of cell models with different properties; simulations with each set of random parameters generated 14 quantitative outputs that characterized cellular activity; and regression methods were used to analyze the population's behavior. Clear differences between the two models were observed at every step of the analysis. Specifically: (1) SR Ca2+ pump activity had a greater effect on SAN cell cycle length (CL) in the Maltsev model; (2) conversely, parameters describing the funny current (I_f) had a greater effect on CL in the Severi model; (3) changes in rapid delayed rectifier conductance (G_Kr) had opposite effects on action potential amplitude in the two models; (4) within the population, a greater percentage of model cells failed to exhibit action potentials in the Maltsev model (27%) than in the Severi model (7%), implying greater robustness in the latter; (5) confirming this initial impression, bifurcation analyses indicated that smaller relative changes in G_Kr or Na+-K+ pump activity led to failed action potentials in the Maltsev model. Overall, the results suggest experimental tests that can distinguish between models and alternative hypotheses, and the analysis offers strategies for developing anti-arrhythmic pharmaceuticals by predicting their effect on pacemaking activity.
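The randomize-simulate-regress pipeline itself is compact. The sketch below shows it with a hypothetical stand-in for the model (the real Maltsev and Severi ODE systems are far too large to reproduce here) and illustrative parameter names; regressing log-outputs on log-parameters yields dimensionless sensitivities, as in population-based regression analyses of this kind.

```python
import numpy as np

rng = np.random.default_rng(1)
names = ["P_SR_pump", "g_f", "g_Kr", "I_NaK_max"]  # illustrative parameter names
n_cells = 500

# Lognormal random scale factors applied to each baseline parameter
scale = rng.lognormal(mean=0.0, sigma=0.2, size=(n_cells, len(names)))

def run_model(s):
    # Hypothetical stand-in returning cycle length (ms); a real analysis
    # would integrate a SAN action-potential model with scaled parameters.
    return 330.0 * s[2]**0.3 / (s[0]**0.2 * s[1]**0.5) + rng.normal(0, 2)

cl = np.array([run_model(s) for s in scale])

# Regression of log(CL) on log(parameter scales): each coefficient is the
# percent change in CL per percent change in that parameter.
X = np.column_stack([np.ones(n_cells), np.log(scale)])
coef, *_ = np.linalg.lstsq(X, np.log(cl), rcond=None)
for name, b in zip(names, coef[1:]):
    print(f"{name:>10s}: sensitivity {b:+.2f}")
```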
Abstract:
Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As uncertainty in emission projections is inevitable and traditional statistical treatments of uncertainty are highly time- and resource-consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the “with measures” scenario for Spain, specifically to the 12 sectors with the highest greenhouse gas and air pollutant emissions. Examples of the methodology's application to two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors, and the methodology was tested against a recomputed emission trend under a low economic-growth perspective and against official figures for 2010, showing very good performance. Implications: A solid understanding and quantification of the uncertainties in atmospheric emission inventories and projections provides useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is interest in how they can be managed to derive robust policy conclusions. Taking this into account, a method that uses sensitivity analysis as a source of information to derive nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.
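A minimal sketch of the nonstatistical-band idea: project sectoral emissions from driving factors, perturb those factors, and take the envelope of the resulting trajectories as the uncertainty band. The sectors, base emissions, growth rates and perturbation size below are illustrative placeholders, not the Spanish inventory's inputs.

```python
years = list(range(2012, 2021))

sectors = {
    # sector: (base-year emissions in kt, assumed annual activity growth)
    "power_plants": (60.0, 0.010),
    "agriculture_livestock": (40.0, 0.005),
}
perturb = 0.02   # +/- perturbation applied to each driving factor (growth rate)

def trajectory(e0, growth):
    return [e0 * (1.0 + growth) ** (y - years[0]) for y in years]

central = [sum(trajectory(e0, g)[i] for e0, g in sectors.values())
           for i in range(len(years))]
upper = [sum(trajectory(e0, g + perturb)[i] for e0, g in sectors.values())
         for i in range(len(years))]
lower = [sum(trajectory(e0, g - perturb)[i] for e0, g in sectors.values())
         for i in range(len(years))]

for y, lo, c, hi in zip(years, lower, central, upper):
    print(f"{y}: {lo:6.1f} - {c:6.1f} - {hi:6.1f} kt")
```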
Abstract:
The author surveys the changes that have taken place in applied multi-sector modelling, from linear programming models to quantified general equilibrium models. After a brief historical retrospective, he presents the common and differing characteristics of general equilibrium models by comparing them with national-economy-level models based on linear programming methods. He also shows how general equilibrium models can be used to analyse the consistency of economic policy targets, to investigate trade-off possibilities among the targets and, in general, to carry out sensitivity analyses of economic policy plans. The discussion of the theoretical and methodological questions is illustrated with the aid of a quantified general equilibrium model.
Abstract:
We present a method of estimating HIV incidence rates in epidemic situations from data on age-specific prevalence and changes in the overall prevalence over time. The method is applied to women attending antenatal clinics in Hlabisa, a rural district of KwaZulu/Natal, South Africa, where transmission of HIV is overwhelmingly through heterosexual contact. A model which gives age-specific prevalence rates in the presence of a progressing epidemic is fitted to prevalence data for 1998 using maximum likelihood methods and used to derive the age-specific incidence. Error estimates are obtained using a Monte Carlo procedure. Although the method is quite general, some simplifying assumptions are made concerning the form of the risk function, and sensitivity analyses are performed to explore the importance of these assumptions. The analysis shows that in 1998 the annual incidence of infection per susceptible woman increased from 5.4 per cent (3.3-8.5 per cent; here and elsewhere ranges give 95 per cent confidence limits) at age 15 years to 24.5 per cent (20.6-29.1 per cent) at age 22 years and declined to 1.3 per cent (0.5-2.9 per cent) at age 50 years; standardized to a uniform age distribution, the overall incidence per susceptible woman aged 15 to 59 was 11.4 per cent (10.0-13.1 per cent); per woman in the population it was 8.4 per cent (7.3-9.5 per cent). Standardized to the age distribution of the female population, the average incidence per woman was 9.6 per cent (8.4-11.0 per cent); standardized to the age distribution of women attending antenatal clinics, it was 11.3 per cent (9.8-13.3 per cent). The estimated incidence depends on the values used for the epidemic growth rate and the AIDS-related mortality. To ensure that, for this population, errors in these two parameters change the age-specific estimates of the annual incidence by less than the standard deviation of those estimates, the AIDS-related mortality should be known to within +/-50 per cent and the epidemic growth rate to within +/-25 per cent, both of which conditions are met. In the absence of cohort studies to measure the incidence of HIV infection directly, useful estimates of the age-specific incidence can be obtained from cross-sectional, age-specific prevalence data and repeat cross-sectional data on the overall prevalence of HIV infection. Several assumptions were made because of the lack of data, but sensitivity analyses show that they are unlikely to affect the overall estimates significantly. These estimates are important in assessing the magnitude of the public health problem, for designing vaccine trials and for evaluating the impact of interventions. Copyright (C) 2001 John Wiley & Sons, Ltd.
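The core prevalence-to-incidence step can be sketched under the crude simplification of ignoring AIDS-related mortality and epidemic growth, in which the incidence per susceptible is lambda(a) = P'(a) / (1 - P(a)). The prevalence values below are made up for illustration; the paper's full method corrects for both effects.

```python
import numpy as np

age = np.arange(15, 51, 5)                     # ages (years)
prev = np.array([0.04, 0.21, 0.34, 0.36,
                 0.30, 0.24, 0.18, 0.14])      # hypothetical prevalence by age

dP = np.gradient(prev, age)                    # numerical derivative P'(a)
lam = dP / (1.0 - prev)                        # crude incidence per susceptible

for a, l in zip(age, lam):
    print(f"age {a}: incidence ~ {100*l:5.1f} per 100 susceptible person-years")
```

On this made-up curve the formula goes negative at ages where prevalence declines, which is exactly why the corrections for mortality and epidemic growth in the paper are needed.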
Abstract:
Objective: To outline the major methodological issues appropriate to the use of the population impact number (PIN) and the disease impact number (DIN) in health policy decision making. Design: Review of literature and calculation of PIN and DIN statistics in different settings. Setting: Previously proposed extensions to the number needed to treat (NNT): the DIN and the PIN, which give a population perspective to this measure. Main results: The PIN and DIN allow the population impact of different interventions to be compared, either within the same disease or across different diseases or conditions. The primary studies used for relative risk estimates should have outcomes, time periods and comparison groups that are congruent and relevant to the local setting. These need to be combined with local data on disease rates and population size. Depending on the particular problem, the target may be disease incidence or prevalence, and the effect of interest may be either the incremental impact or the total impact of each intervention. For practical application, it will be important to use sensitivity analyses to determine plausible intervals for the impact numbers. Conclusions: Attention to various methodological issues will permit the DIN and PIN to be used to help health policy makers assign a population perspective to measures of risk.
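The arithmetic behind these measures is short. The sketch below follows one commonly cited formulation (NNT = 1/ARR, with DIN and PIN rescaling the denominator to the diseased population and the whole population); all input values are illustrative, and the sensitivity loop mirrors the abstract's advice to report plausible intervals.

```python
def nnt(arr):
    """Number needed to treat: 1 / absolute risk reduction."""
    return 1.0 / arr

def din(arr, p_exposed):
    """Disease impact number: people with the disease per event prevented,
    given the proportion of the diseased exposed to the intervention."""
    return 1.0 / (arr * p_exposed)

def pin(arr, p_exposed, p_disease):
    """Population impact number: whole-population denominator per event
    prevented, given also the disease prevalence in the population."""
    return 1.0 / (arr * p_exposed * p_disease)

# Hypothetical intervention: ARR 2%, 60% of patients reachable, 5% prevalence
print(nnt(0.02), din(0.02, 0.6), pin(0.02, 0.6, 0.05))

# Sensitivity analysis: plausible interval for the PIN as the ARR varies
for arr in (0.01, 0.02, 0.03):
    print(f"ARR={arr:.2f}: PIN={pin(arr, 0.6, 0.05):,.0f}")
```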
Abstract:
OBJECTIVE To estimate the budget impact of incorporating positron emission tomography (PET) in mediastinal and distant staging of non-small cell lung cancer. METHODS The estimates were calculated by the epidemiological method for the years 2014 to 2018. Nationwide data were used for incidence; data on the distribution of the disease's prevalence and on the technologies' accuracy came from the literature; data on the costs involved were taken from a micro-costing study and from the Brazilian Unified Health System (SUS) database. Two strategies for using PET were analyzed: offering it to all newly diagnosed patients, and a restricted offer to those with negative results in previous computed tomography (CT) exams. Univariate and extreme-scenario sensitivity analyses were conducted to evaluate the influence of sources of uncertainty in the parameters used. RESULTS The incorporation of PET-CT in SUS would imply the need for additional resources of 158.1 BRL (98.2 USD) million for the restricted offer and 202.7 BRL (125.9 USD) million for the inclusive offer over five years, a difference of 44.6 BRL (27.7 USD) million between the two offer strategies within that period. In absolute terms, the total budget impact of its incorporation in SUS over five years would be 555 BRL (345 USD) and 600 BRL (372.8 USD) million, respectively. The cost of the PET-CT procedure was the most influential parameter in the results. In the most optimistic scenario, the additional budget impact would be reduced to 86.9 BRL (54 USD) and 103.8 BRL (64.5 USD) million, considering PET-CT for negative CT and PET-CT for all, respectively. CONCLUSIONS The incorporation of PET in the clinical staging of non-small cell lung cancer seems financially feasible considering the large budget of the Brazilian Ministry of Health. The potential reduction in the number of unnecessary surgeries may allow the available resources to be allocated more efficiently.
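The epidemiological method reduces to simple arithmetic: eligible cases times offer fraction times unit cost, summed over the horizon, with univariate sensitivity on the most influential parameter. The sketch below uses illustrative placeholder numbers, not the paper's Brazilian inputs, and omits downstream staging costs for brevity.

```python
years = range(2014, 2019)
incident_cases = 25_000    # hypothetical annual NSCLC cases eligible for staging
p_ct_negative = 0.55       # hypothetical share with negative CT (restricted offer)
unit_cost_pet = 3_000.0    # hypothetical PET-CT unit cost (BRL)

def budget_impact(offer_fraction, unit_cost):
    return sum(incident_cases * offer_fraction * unit_cost for _ in years)

inclusive = budget_impact(1.0, unit_cost_pet)
restricted = budget_impact(p_ct_negative, unit_cost_pet)
print(f"inclusive:  {inclusive / 1e6:8.1f} M BRL over 5 years")
print(f"restricted: {restricted / 1e6:8.1f} M BRL over 5 years")

# Univariate sensitivity on the most influential parameter, the PET-CT cost
for c in (2_000.0, 3_000.0, 4_000.0):
    print(f"unit cost {c:.0f}: restricted "
          f"{budget_impact(p_ct_negative, c) / 1e6:.1f} M BRL")
```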
Abstract:
In the smart grid context, distributed energy resources management plays an important role in power system operation. Battery electric vehicles and plug-in hybrid electric vehicles should be important resources in future distribution network operation. It is therefore important to develop adequate methodologies to schedule the electric vehicles' charge and discharge processes, avoiding network congestion and providing ancillary services. This paper proposes the participation of plug-in hybrid electric vehicles in fuel-shifting demand response programs. Two services are proposed, namely fuel shifting and fuel discharging. The fuel-shifting program consists of replacing electric energy with fossil fuel in plug-in hybrid electric vehicles' daily trips, and the fuel-discharging program consists of using their internal combustion engines to generate electricity and inject it into the network. These programs are included in an energy resources management algorithm that integrates the management of other resources. The paper presents a case study considering a 37-bus distribution network with 25 distributed generators, 1908 consumers, and 2430 plug-in vehicles. Two scenarios are tested, namely a scenario with high photovoltaic generation and a scenario without photovoltaic generation. A sensitivity analysis is performed to evaluate when each energy resource is required.
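The fuel-shifting decision can be caricatured with a tiny greedy schedule: in each period, the fleet's trip energy is served from the grid only while the grid price beats fuel and the feeder has spare capacity, and the remainder is shifted to the combustion engine. All demands, capacities and prices below are illustrative; the paper embeds this choice in a full energy resources management optimization rather than a greedy rule.

```python
periods = list(range(24))
demand = [120 if 7 <= t <= 9 or 17 <= t <= 20 else 40 for t in periods]  # kWh-eq
spare_capacity = [60 if 18 <= t <= 21 else 150 for t in periods]         # kWh
price_grid = [0.25 if 8 <= t <= 21 else 0.12 for t in periods]           # per kWh
PRICE_FUEL = 0.30                                                        # per kWh-eq

total_cost = 0.0
for t in periods:
    if price_grid[t] <= PRICE_FUEL:
        grid = min(demand[t], spare_capacity[t])  # charge from the grid while it
    else:                                         # is cheaper and the feeder has room
        grid = 0.0
    fuel = demand[t] - grid                       # remainder shifted to fossil fuel
    total_cost += grid * price_grid[t] + fuel * PRICE_FUEL
print(f"fleet energy cost: {total_cost:.2f}")
```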
Abstract:
BACKGROUND: High-grade gliomas are aggressive, incurable tumors characterized by extensive diffuse invasion of the normal brain parenchyma. Novel therapies at best prolong survival; their costs are formidable and the benefit is marginal. Economic restrictions thus require knowledge of the cost-effectiveness of treatments. Here, we show the cost-effectiveness of enhanced resections in malignant glioma surgery using a well-characterized tool for intraoperative tumor visualization, 5-aminolevulinic acid (5-ALA). OBJECTIVE: To evaluate the cost-effectiveness of 5-ALA fluorescence-guided neurosurgery compared with white-light surgery in adult patients with newly diagnosed high-grade glioma, adopting the perspective of the Portuguese National Health Service. METHODS: We used a Markov model (cohort simulation). Transition probabilities were estimated using data from 1 randomized clinical trial and 1 noninterventional prospective study. Utility values and resource use were obtained from published literature and expert opinion. Unit costs were taken from official Portuguese reimbursement lists (2012 values). The health outcomes considered were quality-adjusted life-years, life-years, and progression-free life-years. Extensive 1-way and probabilistic sensitivity analyses were performed. RESULTS: The incremental cost-effectiveness ratios are below €10 000 for all evaluated outcomes, at around €9100 per quality-adjusted life-year gained, €6700 per life-year gained, and €8800 per progression-free life-year gained. The probability that 5-ALA fluorescence-guided surgery is cost-effective at a threshold of €20 000 is 96.0% for the quality-adjusted life-year, 99.6% for the life-year, and 98.8% for the progression-free life-year. CONCLUSION: 5-ALA fluorescence-guided surgery appears to be cost-effective in newly diagnosed high-grade gliomas compared with white-light surgery. This example demonstrates that cost-effectiveness analyses for malignant glioma surgery are feasible on the basis of existing data.
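A Markov cohort model of this kind can be sketched in a few lines: a cohort moves among stable, progressed and dead states each cycle, accumulating costs and QALYs per strategy, and the incremental cost-effectiveness ratio (ICER) is the cost difference over the QALY difference. All transition probabilities, utilities and costs below are illustrative placeholders, not the paper's Portuguese NHS inputs.

```python
import numpy as np

def run_cohort(p_progress, p_die_stable, p_die_prog, cost0, n_cycles=120):
    # Monthly transition matrix over states (stable, progressed, dead)
    P = np.array([[1 - p_progress - p_die_stable, p_progress, p_die_stable],
                  [0.0, 1 - p_die_prog, p_die_prog],
                  [0.0, 0.0, 1.0]])
    utility = np.array([0.85, 0.60, 0.0])   # QALY weight per state
    cost = np.array([500.0, 1500.0, 0.0])   # cost per state per cycle
    state = np.array([1.0, 0.0, 0.0])       # cohort starts progression-free
    qalys, costs = 0.0, cost0               # cost0: up-front surgery cost
    for _ in range(n_cycles):
        state = state @ P
        qalys += state @ utility / 12.0     # monthly cycles -> years
        costs += state @ cost
    return qalys, costs

q_wl, c_wl = run_cohort(0.08, 0.01, 0.06, 8_000.0)    # white-light surgery
q_ala, c_ala = run_cohort(0.06, 0.01, 0.06, 9_500.0)  # 5-ALA guided (slower progression)

icer = (c_ala - c_wl) / (q_ala - q_wl)
print(f"ICER = {icer:,.0f} EUR per QALY gained")
```

A probabilistic sensitivity analysis would rerun this with the inputs drawn from distributions and report the share of draws falling under the willingness-to-pay threshold.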