944 results for Polytopic uncertainty


Relevance: 20.00%

Abstract:

A protocol is described using lipid mutants and thiol-specific chemical reagents to study lipid-dependent and host-specific membrane protein topogenesis by the substituted-cysteine accessibility method as applied to transmembrane domains (SCAM). SCAM is adapted to follow changes in membrane protein topology as a function of changes in membrane lipid composition. The strategy described can be adapted to any membrane system.

Relevance: 20.00%

Abstract:

This study compared four alternative approaches (Taylor, Fieller, percentile bootstrap, and bias-corrected bootstrap methods) to estimating confidence intervals (CIs) around the cost-effectiveness (CE) ratio. The study consisted of two components: (1) a Monte Carlo simulation was conducted to identify characteristics of hypothetical cost-effectiveness data sets that might lead one CI estimation technique to outperform another; and (2) these results were matched to the characteristics of an extant data set derived from the National AIDS Demonstration Research (NADR) project. The four methods were used to calculate CIs for this data set, and the results were then compared. The main performance criterion in the simulation study was the percentage of times the estimated CIs contained the "true" CE ratio. A secondary criterion was the average width of the confidence intervals. For the bootstrap methods, bias was estimated. Simulation results for the Taylor and Fieller methods indicated that the CIs estimated using the Taylor series method contained the true CE ratio more often than did those obtained using the Fieller method, but the opposite was true when the correlation was positive and the CV of effectiveness was high for each value of the CV of costs. Similarly, the CIs obtained by applying the Taylor series method to the NADR data set were wider than those obtained using the Fieller method for positive correlation values and for values for which the CV of effectiveness was not equal to 30% for each value of the CV of costs. The general trend for the bootstrap methods was that the percentage of times the true CE ratio was contained in the CIs was higher for the percentile method for higher values of the CV of effectiveness, given the correlation between average costs and effects and the CV of effectiveness. The results for the data set indicated that the bias-corrected CIs were wider than the percentile-method CIs.
This result was in accordance with the prediction derived from the simulation experiment. Generally, the bootstrap methods are more favorable for the parameter specifications investigated in this study. However, the Taylor method is preferred for a low CV of effect, and the percentile method is more favorable for a higher CV of effect.
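The percentile-bootstrap approach compared above can be sketched in a few lines. This is a minimal illustration with simulated cost and effect data; the distributions, sample size, and variable names are hypothetical, not the NADR data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-patient incremental costs and effects (illustrative only).
costs = rng.normal(1000.0, 300.0, size=200)
effects = rng.normal(0.5, 0.2, size=200)

def ce_ratio(c, e):
    """Cost-effectiveness ratio: mean cost per unit of mean effect."""
    return c.mean() / e.mean()

# Percentile bootstrap: resample patients with replacement, recompute the
# CE ratio, and take the empirical 2.5th/97.5th percentiles as the 95% CI.
n = len(costs)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(ce_ratio(costs[idx], effects[idx]))
boot = np.array(boot)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"CE ratio: {ce_ratio(costs, effects):.1f}, 95% CI: [{lo:.1f}, {hi:.1f}]")
```

The bias-corrected variant additionally shifts the percentile cut-points according to the fraction of bootstrap replicates falling below the point estimate.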

Relevance: 20.00%

Abstract:

The uncertainty on the calorimeter energy response to jets of particles is derived for the ATLAS experiment at the Large Hadron Collider (LHC). First, the calorimeter response to single isolated charged hadrons is measured and compared to the Monte Carlo simulation using proton-proton collisions at centre-of-mass energies of √s = 900 GeV and 7 TeV collected during 2009 and 2010. Then, using the decay of K_S and Λ particles, the calorimeter response to specific types of particles (positively and negatively charged pions, protons, and anti-protons) is measured and compared to the Monte Carlo predictions. Finally, the jet energy scale uncertainty is determined by propagating the response uncertainty for single charged and neutral particles to jets. The response uncertainty is 2-5% for central isolated hadrons and 1-3% for the final calorimeter jet energy scale.
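The propagation step can be illustrated with a deliberately simplified toy; this is not the ATLAS procedure, and all numbers below are invented. Each jet constituent's response is shifted coherently by its single-particle uncertainty, and the resulting jet energy scale shift is read off:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical jet: constituent energies (GeV) and per-particle fractional
# response uncertainties (larger for soft hadrons, purely illustrative).
energies = rng.exponential(5.0, size=20)
frac_unc = np.where(energies < 3.0, 0.05, 0.02)  # 5% soft, 2% hard

# Shift every constituent's response coherently upward by its uncertainty
# and compare the shifted jet energy to the nominal one.
nominal = energies.sum()
shifted_up = (energies * (1.0 + frac_unc)).sum()
jes_unc = shifted_up / nominal - 1.0  # fractional jet energy scale shift
print(f"toy JES uncertainty: {jes_unc:.3%}")
```

In the actual measurement the uncertainties are derived per particle type and momentum from the single-hadron and K_S/Λ studies described above, and neutral particles are treated separately.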

Relevance: 20.00%

Abstract:

Stepwise uncertainty reduction (SUR) strategies aim at constructing a sequence of points for evaluating a function f in such a way that the residual uncertainty about a quantity of interest progressively decreases to zero. Using such strategies in the framework of Gaussian process modeling has been shown to be efficient for estimating the volume of excursion of f above a fixed threshold. However, SUR strategies remain cumbersome to use in practice because of their high computational complexity and the fact that they deliver a single point at each iteration. In this article we introduce several multipoint sampling criteria, allowing the selection of batches of points at which f can be evaluated in parallel. Such criteria are of particular interest when f is costly to evaluate and several CPUs are simultaneously available. We also drastically reduce the computational cost of these strategies through the use of closed-form formulas. We illustrate their performance in various numerical experiments, including a nuclear safety test case. Basic notions about kriging, auxiliary problems, complexity calculations, R code, and data are available online as supplementary materials.
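A much-simplified 1D sketch conveys the flavor of such criteria; the kernel, threshold, and greedy batch rule below are illustrative stand-ins, not the closed-form multipoint criteria of the article:

```python
import numpy as np
from math import erf

# Fit a Gaussian process (unit-variance squared-exponential kernel) to a few
# evaluations of f, compute the pointwise probability that f exceeds a
# threshold T, and pick a batch of points where the excursion classification
# is most ambiguous.

def k(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

f = lambda x: np.sin(6 * x)           # toy function under study
T = 0.5                               # excursion threshold
X = np.array([0.05, 0.3, 0.6, 0.95])  # points evaluated so far
y = f(X)

grid = np.linspace(0.0, 1.0, 200)
K = k(X, X) + 1e-8 * np.eye(len(X))   # jitter for numerical stability
Ks = k(grid, X)
mean = Ks @ np.linalg.solve(K, y)
var = 1.0 - np.einsum("ij,ij->i", Ks, np.linalg.solve(K, Ks.T).T)
sd = np.sqrt(np.maximum(var, 1e-12))

norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / np.sqrt(2.0))))
p = norm_cdf((mean - T) / sd)         # P(f(x) > T) under the GP
ambiguity = p * (1.0 - p)             # peaks where excursion is uncertain

batch = np.sort(grid[np.argsort(ambiguity)[-3:]])  # 3 points for parallel runs
print("next batch:", batch)
```

A genuine SUR criterion would instead minimize the expected future uncertainty on the excursion volume over candidate batches; the article's closed-form formulas are what make that computation affordable.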

Relevance: 20.00%

Abstract:

Multi-objective optimization algorithms aim at finding Pareto-optimal solutions. Recovering the Pareto front or Pareto set from a limited number of function evaluations is a challenging problem. A popular approach in the case of expensive-to-evaluate functions is to appeal to metamodels. Kriging has been shown to be efficient as a basis for sequential multi-objective optimization, notably through infill sampling criteria balancing exploitation and exploration, such as the Expected Hypervolume Improvement. Here we consider Kriging metamodels not only for selecting new points, but as a tool for estimating the whole Pareto front and quantifying how much uncertainty remains on it at any stage of Kriging-based multi-objective optimization algorithms. Our approach relies on the Gaussian process interpretation of Kriging and builds upon conditional simulations. Using concepts from random set theory, we propose to adapt the Vorob'ev expectation and deviation to capture the variability of the set of non-dominated points. Numerical experiments illustrate the potential of the proposed workflow, and it is shown on examples how Gaussian process simulations and the estimated Vorob'ev deviation can be used to monitor the ability of Kriging-based multi-objective optimization algorithms to accurately learn the Pareto front.
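The basic object being estimated, the set of non-dominated points, is easy to state in code. The following minimal sketch (bi-objective minimization, invented data) extracts a Pareto front; the article's contribution is to quantify, via conditional simulations and the Vorob'ev expectation, the uncertainty that remains on this set:

```python
import numpy as np

def non_dominated(points):
    """Return the points not dominated by any other point (minimization:
    q dominates p if q <= p in every objective and q < p in at least one)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(
            np.all(pts <= p, axis=1) & np.any(pts < p, axis=1)
        )
        if not dominated:
            keep.append(i)
    return pts[keep]

pts = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
front = non_dominated(pts)
print(front)  # [3., 3.] is dominated by [2., 2.] and drops out
```

In the Kriging-based setting, this extraction is applied to each conditional simulation of the objectives, yielding a random set of fronts whose Vorob'ev expectation and deviation summarize the remaining uncertainty.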

Relevance: 20.00%

Abstract:

Modern policy-making is increasingly influenced by different types of uncertainty. Political actors are supposed to behave differently in the context of uncertainty than in "usual" decision-making processes. Actors exchange information in order to convince other actors and decision-makers, to coordinate their lobbying activities and form coalitions, and to gather information and learn about the substantive issue. The literature suggests that preference similarity, social trust, perceived power, and functional interdependence are particularly important drivers of information exchange. We assume that social trust, as well as being connected to scientific actors, is more important under uncertainty than in a setting with less uncertainty. To investigate information exchange under uncertainty, we analyze the case of unconventional shale gas development in the UK from 2008 to 2014. Our study relies on statistical analyses of survey data on a diverse set of actors dealing with shale gas development and regulation in the UK.

Relevance: 20.00%

Abstract:

The paper addresses the question of which factors drive the formation of policy preferences when uncertainties remain about the causes and effects of the problem at stake. To answer this question, we examine policy preferences for reducing aquatic micropollutants, a specific case of water protection policy, across different actor groups (e.g., state, science, target groups). We contrast two types of policy preferences: (a) preventive or source-directed policies, which mitigate pollution in order to avoid contact with water; and (b) reactive or end-of-pipe policies, which filter water already contaminated by pollutants. In a second step, we analyze the drivers of actors' policy preferences by focusing on three sets of explanations: participation, affectedness, and international collaborations. The analysis of our survey data, qualitative interviews, and regression analysis of the Swiss political elite show that participation in the policy-making process leads to knowledge exchange and reduces uncertainties about the policy problem, which promotes preferences for preventive policies. Likewise, actors who are affected by the consequences of micropollutants, such as consumer or environmental associations, opt for anticipatory policies. Interestingly, we find that uncertainties about the effectiveness of preventive policies can promote preferences for end-of-pipe policies. While preventive measures often rely on (uncertain) behavioral changes by target groups, reactive policies are more reliable when it comes to fulfilling defined policy goals. Finally, we find that in a transboundary water management context, actors with international collaborations prefer policies that produce immediate and reliable outcomes.

Relevance: 20.00%

Abstract:

Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
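A compact numerical sketch of this workflow, with invented data and a plain principal component decomposition standing in for FPCA (all names, dimensions, and model forms below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Learning set: run both solvers on a few realizations. Here the "exact"
# responses are amplitude-varying curves and the "proxy" is a biased,
# noisy version of them (purely synthetic stand-ins).
t = np.linspace(0.0, 1.0, 50)
n_learn = 30
amps = rng.normal(1.0, 0.2, size=(n_learn, 1))
exact = amps * np.sin(2 * np.pi * t)
proxy = 0.8 * exact + 0.05 * rng.normal(size=exact.shape)

def pca_scores(curves, n_comp=3):
    """Center the curves and project onto the leading principal components
    (discrete analogue of FPCA)."""
    mean = curves.mean(axis=0)
    _, _, Vt = np.linalg.svd(curves - mean, full_matrices=False)
    basis = Vt[:n_comp]
    return (curves - mean) @ basis.T, mean, basis

px, p_mean, p_basis = pca_scores(proxy)
ex, e_mean, e_basis = pca_scores(exact)

# Error model: linear map from proxy scores to exact scores (least squares).
W, *_ = np.linalg.lstsq(px, ex, rcond=None)

# Predict the exact response of a new realization from its proxy run only.
new_exact = 1.3 * np.sin(2 * np.pi * t)
new_proxy = 0.8 * new_exact + 0.05 * rng.normal(size=t.shape)
pred = ((new_proxy - p_mean) @ p_basis.T) @ W @ e_basis + e_mean
rmse = np.sqrt(np.mean((pred - new_exact) ** 2))
print(f"prediction RMSE: {rmse:.3f}")
```

In practice the learning set comes from running both solvers on a subset of geostatistical realizations, and the number of retained components is chosen from the explained variance, which is also where the diagnostic of the error model's quality comes from.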

Relevance: 20.00%

Abstract:

Energy shocks like the Fukushima accident can have important political consequences. This article examines their impact on collaboration patterns between collective actors in policy processes. It argues that external shocks create both behavioral uncertainty, meaning that actors do not know about other actors' preferences, and policy uncertainty on the choice and consequences of policy instruments. The context of uncertainty interacts with classical drivers of actor collaboration in policy processes. The analysis is based on a dataset comprising interview and survey data on political actors in two subsequent policy processes in Switzerland and Exponential Random Graph Models for network data. Results first show that under uncertainty, collaboration of actors in policy processes is less based on similar preferences than in stable contexts, but trust and knowledge of other actors are more important. Second, under uncertainty, scientific actors are not preferred collaboration partners.

Relevance: 20.00%

Abstract:

This paper examines the role of uncertainty and imperfect local knowledge in foreign direct investment (FDI). The main idea comes from the literature on investment under uncertainty, such as Pindyck (1991) and Dixit and Pindyck (1994). We empirically test the "value of waiting" with a dataset on FDI. Many factors (e.g., political and economic regulations), as well as uncertainty and the risks due to imperfect local knowledge, determine the attractiveness of FDI. The uncertainty and irreversibility of FDI link the time interval between permission and actual execution of such FDI with explanatory variables, including information on foreign (home) countries and domestic industries. Common factors, such as regulatory change and external shocks, may affect the uncertainty when foreign investors make irreversible FDI decisions. We derive testable hypotheses from models of investment under uncertainty to determine the possible factors that induce delays in FDI, using Korean data from 1962 to 2001.
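The "value of waiting" logic can be made concrete with a two-period toy in the spirit of the examples in Dixit and Pindyck (1994); all numbers are illustrative, not the Korean FDI data:

```python
# An irreversible investment costs I now and yields a cash flow that is
# currently P, but next period moves to P_up or P_down (equal probability)
# and stays there forever.
I = 1600.0          # sunk investment cost
P = 200.0           # current annual cash flow
P_up, P_down = 300.0, 100.0
r = 0.10            # discount rate

# Invest now: receive P today, then a perpetuity at the expected cash flow.
expected_P = 0.5 * (P_up + P_down)
v_now = P + expected_P / r - I

# Wait one period: invest only if the cash flow moves up, so the bad
# outcome is avoided; the payoff is discounted by one period.
v_up = P_up + P_up / r - I        # value of investing at t=1 in the up state
v_wait = 0.5 * max(v_up, 0.0) / (1 + r)

print(f"invest now: {v_now:.0f}, wait: {v_wait:.0f}")
print("waiting is optimal" if v_wait > v_now else "investing now is optimal")
```

Even though investing now has a positive expected NPV, waiting is worth more here; the difference is the option value of delay, which is the mechanism the paper invokes to explain the gap between FDI permission and execution.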