927 results for parametric uncertainty
Abstract:
This paper presents a neuroscientific study of aesthetic judgments on written texts. In an fMRI experiment, participants read a number of proverbs without explicitly evaluating them. In a post-scan session, they rated each item for familiarity and beauty. These individual ratings were correlated with the functional data to investigate the neural correlates of implicit aesthetic judgments. We identified clusters in which BOLD activity was correlated with individual post-scan beauty ratings, indicating that some spontaneous aesthetic evaluation takes place during reading even when not required by the task. Positive correlations were found in the ventral striatum and in medial prefrontal cortex, likely reflecting the rewarding nature of aesthetically pleasing sentences. In contrast, negative correlations were observed in the classic left frontotemporal reading network. Midline structures and bilateral temporo-parietal regions correlated positively with familiarity, suggesting a shift from the task network towards the default network with increasing familiarity.
Abstract:
We study the tuning curve of entangled photons generated by type-0 spontaneous parametric down-conversion in a periodically poled potassium titanyl phosphate crystal. We demonstrate the X-shaped spatiotemporal structure of the spectrum by means of measurements and numerical simulations. Experiments for different pump waists, crystal temperatures, and crystal lengths are in good agreement with numerical simulations.
Abstract:
This study compared four alternative approaches (Taylor, Fieller, percentile bootstrap, and bias-corrected bootstrap methods) to estimating confidence intervals (CIs) around the cost-effectiveness (CE) ratio. The study consisted of two components: (1) a Monte Carlo simulation conducted to identify characteristics of hypothetical cost-effectiveness data sets that might lead one CI estimation technique to outperform another, and (2) an application to an extant data set derived from the National AIDS Demonstration Research (NADR) project, whose characteristics were matched to the simulation results. The four methods were used to calculate CIs for this data set, and the results were then compared. The main performance criterion in the simulation study was the percentage of times the estimated CIs contained the "true" CE ratio. A secondary criterion was the average width of the confidence intervals. For the bootstrap methods, bias was also estimated. Simulation results for the Taylor and Fieller methods indicated that the CIs estimated using the Taylor series method contained the true CE ratio more often than did those obtained using the Fieller method, but the opposite was true when the correlation was positive and the CV of effectiveness was high for each value of the CV of costs. Similarly, the CIs obtained by applying the Taylor series method to the NADR data set were wider than those obtained using the Fieller method for positive correlation values and for values for which the CV of effectiveness was not equal to 30% for each value of the CV of costs. The general trend for the bootstrap methods was that the percentage of times the true CE ratio was contained in the CIs was higher for the percentile method for higher values of the CV of effectiveness, given the correlation between average costs and effects and the CV of effectiveness. The results for the data set indicated that the bias-corrected CIs were wider than the percentile method CIs, in accordance with the prediction derived from the simulation experiment. Generally, the bootstrap methods are more favorable for the parameter specifications investigated in this study. However, the Taylor method is preferred for a low CV of effect, and the percentile method is more favorable for a higher CV of effect.
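As a rough illustration of the two bootstrap approaches compared in this abstract, the sketch below computes percentile and bias-corrected percentile CIs for a cost-effectiveness ratio from paired (cost, effect) data. The synthetic data, the number of replicates, and the normal-approximation bias correction are illustrative assumptions, not the NADR analysis.

```python
# Minimal sketch: percentile and bias-corrected bootstrap CIs for a CE ratio.
# The synthetic data and settings below are illustrative assumptions only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200
costs = rng.gamma(shape=2.0, scale=500.0, size=n)        # hypothetical incremental costs
effects = 0.002 * costs + rng.normal(0.5, 0.3, size=n)   # hypothetical incremental effects

def ce_ratio(c, e):
    return c.mean() / e.mean()

theta_hat = ce_ratio(costs, effects)

B = 2000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)          # resample subjects with replacement
    boot[b] = ce_ratio(costs[idx], effects[idx])

# Percentile CI: empirical 2.5% and 97.5% quantiles of the bootstrap distribution.
perc_ci = np.quantile(boot, [0.025, 0.975])

# Bias-corrected CI: adjust the quantile levels by z0, the normal quantile of the
# fraction of bootstrap replicates that fall below the point estimate.
z0 = norm.ppf((boot < theta_hat).mean())
alpha = 0.05
lo = norm.cdf(2 * z0 + norm.ppf(alpha / 2))
hi = norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2))
bc_ci = np.quantile(boot, [lo, hi])

print(f"CE ratio estimate: {theta_hat:.1f}")
print(f"Percentile 95% CI:     [{perc_ci[0]:.1f}, {perc_ci[1]:.1f}]")
print(f"Bias-corrected 95% CI: [{bc_ci[0]:.1f}, {bc_ci[1]:.1f}]")
```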
Abstract:
We address under what conditions a magma generated by partial melting at 100 km depth in the mantle wedge above a subduction zone can reach the crust in dikes before stalling. We also address under what conditions primitive basaltic magma (Mg# > 60) can be delivered from this depth to the crust. We employ linear elastic fracture mechanics with magma solidification theory and perform a parametric sensitivity analysis. All dikes are initiated at a depth of 100 km in the thermal core of the wedge, and the Moho is fixed at 35 km depth. We consider a range of melt solidus temperatures (800-1100 °C), viscosities (10-100 Pa s), and densities (2400-2700 kg m^-3). We also consider a range of host rock fracture toughness values (50-300 MPa m^1/2) and dike lengths (2-5 km) and two thermal structures for the mantle wedge (1260 and 1400 °C at 100 km depth and 760 and 900 °C at 35 km depth). For the given parameter space, many dikes can reach the Moho in less than a few hundred hours, well within the time constraints provided by U-series isotope disequilibria studies. Increasing the temperature in the mantle wedge, or increasing the dike length, allows additional dikes to propagate to the Moho. We conclude that some dikes with vertical lengths near their critical lengths and relatively high solidus temperatures will stall in the mantle before reaching the Moho, and these may be returned by corner flow to depths where they can melt under hydrous conditions. Thus, a chemical signature in arc lavas suggesting partial melting of slab basalts may be partly influenced by these recycled dikes. Alternatively, dikes with lengths well above their critical lengths can easily deliver primitive magmas to the crust, particularly if the mantle wedge is relatively hot. Dike transport remains a viable primary mechanism of magma ascent in convergent tectonic settings, but the potential for less rapid mechanisms making an important contribution increases as the mantle temperature at the Moho approaches the solidus temperature of the magma.
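A toy parameter sweep in the spirit of the sensitivity analysis described above is sketched below. It assumes a simple buoyancy-driven stress-intensity scaling K ≈ c Δρ g L^(3/2), where the order-one prefactor c and the fixed host-rock density are assumptions; it only checks whether a dike of a given length exceeds the critical length set by fracture toughness, not the full solidification model.

```python
# Toy sweep: which (magma density, fracture toughness, dike length) combinations
# give a buoyant dike longer than its critical length?  The stress-intensity
# scaling K ~ c * drho * g * L**1.5 and the prefactor c are simplifying assumptions.
import numpy as np

g = 9.81                       # m s^-2
rho_mantle = 3300.0            # kg m^-3, assumed host-rock density
c = 0.5                        # order-one geometric prefactor (assumption)

magma_densities = np.array([2400.0, 2550.0, 2700.0])   # kg m^-3 (range from the study)
toughness = np.array([50.0, 150.0, 300.0]) * 1e6        # Pa m^0.5 (50-300 MPa m^1/2)
dike_lengths = np.array([2e3, 3e3, 4e3, 5e3])            # m (2-5 km)

for rho_m in magma_densities:
    drho = rho_mantle - rho_m                            # buoyancy contrast
    for Kc in toughness:
        # Critical length: solve c * drho * g * L_c**1.5 = Kc for L_c.
        L_crit = (Kc / (c * drho * g)) ** (2.0 / 3.0)
        propagating = dike_lengths[dike_lengths > L_crit] / 1e3
        print(f"rho_magma={rho_m:.0f} kg/m3, Kc={Kc/1e6:.0f} MPa m^0.5: "
              f"L_crit={L_crit/1e3:.2f} km, propagating lengths (km): {propagating}")
```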
Abstract:
The uncertainty on the calorimeter energy response to jets of particles is derived for the ATLAS experiment at the Large Hadron Collider (LHC). First, the calorimeter response to single isolated charged hadrons is measured and compared to the Monte Carlo simulation using proton-proton collisions at centre-of-mass energies of √s = 900 GeV and 7 TeV collected during 2009 and 2010. Then, using the decay of K_S and Λ particles, the calorimeter response to specific types of particles (positively and negatively charged pions, protons, and anti-protons) is measured and compared to the Monte Carlo predictions. Finally, the jet energy scale uncertainty is determined by propagating the response uncertainty for single charged and neutral particles to jets. The response uncertainty is 2-5% for central isolated hadrons and 1-3% for the final calorimeter jet energy scale.
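The propagation step can be illustrated with a toy pseudo-experiment: shift every jet constituent's calorimeter response coherently within a per-particle uncertainty and record the resulting jet energy shift. The particle content and the per-particle uncertainty values below are placeholders, not the ATLAS measurements.

```python
# Sketch of propagating single-particle response uncertainties to a jet energy scale.
# Particle mixture and fractional uncertainties are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def toy_jet(n_particles=15):
    """A toy jet: per-particle energies (GeV) and a crude type label."""
    energies = rng.exponential(scale=5.0, size=n_particles)
    kinds = rng.choice(["charged_hadron", "neutral_hadron", "photon"],
                       p=[0.6, 0.25, 0.15], size=n_particles)
    return energies, kinds

# Placeholder fractional response uncertainties per particle type (assumptions).
resp_uncertainty = {"charged_hadron": 0.03, "neutral_hadron": 0.10, "photon": 0.01}

energies, kinds = toy_jet()
nominal = energies.sum()

n_pseudo = 1000
jes_shifts = []
for _ in range(n_pseudo):
    # One coherent nuisance parameter per particle type (fully correlated shift).
    shift = {k: rng.normal(0.0, s) for k, s in resp_uncertainty.items()}
    varied = energies * np.array([1.0 + shift[k] for k in kinds])
    jes_shifts.append(varied.sum() / nominal - 1.0)

print(f"Toy jet energy scale uncertainty: {np.std(jes_shifts):.3%}")
```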
Abstract:
Many techniques based on data drawn by the ranked set sampling (RSS) scheme assume that the ranking of observations is perfect. It is therefore essential to develop methods for testing this assumption. In this article, we propose a parametric location-scale-free test for assessing the assumption of perfect ranking. The results of a simulation study in the two special cases of the normal and exponential distributions indicate that the proposed test performs well in comparison with its leading competitors.
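The abstract does not spell out the proposed test statistic, so the sketch below only illustrates the sampling scheme it applies to: ranked set samples drawn under perfect ranking versus judgment ranking through a noisy concomitant variable. The correlation parameter and the distributions used are assumptions for illustration.

```python
# Minimal sketch of ranked set sampling (RSS) with perfect vs. imperfect ranking.
# Ranking quality is controlled through a concomitant variable X correlated with Y.
import numpy as np

rng = np.random.default_rng(2)

def rss_sample(m, r, rho=1.0):
    """Draw an RSS sample with set size m and r cycles.

    rho = 1.0 gives perfect ranking; rho < 1 ranks on a noisy concomitant,
    so the wrong unit may be selected (imperfect ranking)."""
    sample = []
    for _ in range(r):
        for i in range(m):
            y = rng.normal(size=m)                              # a fresh set of m units
            x = rho * y + np.sqrt(1 - rho**2) * rng.normal(size=m)
            order = np.argsort(x)                               # judgment ranking via X
            sample.append(y[order[i]])                          # keep the i-th ranked unit
    return np.array(sample)

perfect = rss_sample(m=4, r=25, rho=1.0)
imperfect = rss_sample(m=4, r=25, rho=0.6)
print("mean/sd under perfect ranking:  ", perfect.mean().round(3), perfect.std().round(3))
print("mean/sd under imperfect ranking:", imperfect.mean().round(3), imperfect.std().round(3))
```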
Abstract:
Stepwise uncertainty reduction (SUR) strategies aim at constructing a sequence of points for evaluating a function f in such a way that the residual uncertainty about a quantity of interest progressively decreases to zero. Using such strategies in the framework of Gaussian process modeling has been shown to be efficient for estimating the volume of excursion of f above a fixed threshold. However, SUR strategies remain cumbersome to use in practice because of their high computational complexity, and the fact that they deliver a single point at each iteration. In this article we introduce several multipoint sampling criteria, allowing the selection of batches of points at which f can be evaluated in parallel. Such criteria are of particular interest when f is costly to evaluate and several CPUs are simultaneously available. We also manage to drastically reduce the computational cost of these strategies through the use of closed form formulas. We illustrate their performances in various numerical experiments, including a nuclear safety test case. Basic notions about kriging, auxiliary problems, complexity calculations, R code, and data are available online as supplementary materials.
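The setting can be illustrated with a simplified Monte Carlo stand-in for the closed-form multipoint criteria of the paper: fit a Gaussian process to a few evaluations of f, compute the excursion probability above the threshold, and pick a batch of points where that probability is most uncertain. The toy function, the greedy diversity rule, and the scikit-learn surrogate are assumptions; they are not the authors' criteria or code.

```python
# Simplified illustration: GP surrogate, excursion probability above a threshold,
# and a naive batch of q points chosen where the excursion indicator is most uncertain.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from scipy.stats import norm

def f(x):                                   # toy expensive function (assumption)
    return np.sin(6 * x) + 0.5 * x

threshold = 0.6
rng = np.random.default_rng(3)

X_train = rng.uniform(0, 1, size=(8, 1))
y_train = f(X_train).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-8).fit(X_train, y_train)

grid = np.linspace(0, 1, 500).reshape(-1, 1)
mean, sd = gp.predict(grid, return_std=True)
p_exc = norm.cdf((mean - threshold) / np.maximum(sd, 1e-12))   # excursion probability

# Residual uncertainty on the excursion volume: integral of the Bernoulli variance.
residual_uncertainty = np.mean(p_exc * (1 - p_exc))
print(f"Estimated excursion volume: {p_exc.mean():.3f}, "
      f"residual uncertainty: {residual_uncertainty:.4f}")

# Naive batch: q points with the largest pointwise variance p(1-p), kept apart.
q, batch = 3, []
scores = p_exc * (1 - p_exc)
for _ in range(q):
    i = int(np.argmax(scores))
    batch.append(grid[i, 0])
    scores[np.abs(grid[:, 0] - grid[i, 0]) < 0.05] = 0.0        # crude diversity constraint
print("Batch of points to evaluate in parallel:", np.round(batch, 3))
```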
Abstract:
Multi-objective optimization algorithms aim at finding Pareto-optimal solutions. Recovering Pareto fronts or Pareto sets from a limited number of function evaluations is a challenging problem. A popular approach in the case of expensive-to-evaluate functions is to appeal to metamodels. Kriging has been shown to be efficient as a basis for sequential multi-objective optimization, notably through infill sampling criteria that balance exploitation and exploration, such as the Expected Hypervolume Improvement. Here we consider Kriging metamodels not only for selecting new points, but as a tool for estimating the whole Pareto front and quantifying how much uncertainty remains on it at any stage of Kriging-based multi-objective optimization algorithms. Our approach relies on the Gaussian process interpretation of Kriging and builds upon conditional simulations. Using concepts from random set theory, we propose to adapt the Vorob’ev expectation and deviation to capture the variability of the set of non-dominated points. Numerical experiments illustrate the potential of the proposed workflow, and it is shown on examples how Gaussian process simulations and the estimated Vorob’ev deviation can be used to monitor the ability of Kriging-based multi-objective optimization algorithms to accurately learn the Pareto front.
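A minimal sketch of this workflow is given below: two toy objectives on a one-dimensional design space are modeled by independent Gaussian processes (an assumption; the paper works with general Kriging models), conditional simulations are drawn, the empirical attainment probability is estimated on an objective-space grid, and a Vorob'ev-style threshold set and deviation are computed from it. The grid sizes and simulation count are illustrative choices.

```python
# Sketch: Pareto-front uncertainty from GP conditional simulations (minimization).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
f1 = lambda x: np.sin(4 * x)                 # toy objectives (assumptions)
f2 = lambda x: np.cos(3 * x) + 0.3 * x

X_train = rng.uniform(0, 1, size=(10, 1))
gps = [GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-8).fit(X_train, f(X_train).ravel())
       for f in (f1, f2)]

X_grid = np.linspace(0, 1, 100).reshape(-1, 1)
n_sim = 200
sims1 = gps[0].sample_y(X_grid, n_samples=n_sim, random_state=1)   # shape (100, n_sim)
sims2 = gps[1].sample_y(X_grid, n_samples=n_sim, random_state=2)

# Attainment indicator on an objective-space grid: (z1, z2) is attained by a
# simulation if some design point dominates it (both simulated objectives smaller).
z1 = np.linspace(sims1.min(), sims1.max(), 60)
z2 = np.linspace(sims2.min(), sims2.max(), 60)
Z1, Z2 = np.meshgrid(z1, z2, indexing="ij")
attained = np.zeros((n_sim, 60, 60), dtype=bool)
for s in range(n_sim):
    dominated = (sims1[:, s][:, None, None] <= Z1) & (sims2[:, s][:, None, None] <= Z2)
    attained[s] = dominated.any(axis=0)

attain_prob = attained.mean(axis=0)          # empirical attainment function

# Vorob'ev-style threshold: level beta* whose quantile set matches the mean attained volume.
mean_volume = attained.mean()
betas = np.linspace(0.01, 0.99, 99)
volumes = np.array([(attain_prob >= b).mean() for b in betas])
beta_star = betas[np.argmin(np.abs(volumes - mean_volume))]
vorobev_set = attain_prob >= beta_star

# Vorob'ev deviation: expected volume of the symmetric difference with the quantile set.
deviation = np.mean([(attained[s] ^ vorobev_set).mean() for s in range(n_sim)])
print(f"beta* = {beta_star:.2f}, Vorob'ev deviation (relative volume) = {deviation:.3f}")
```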
Abstract:
Modern policy-making is increasingly influenced by different types of uncertainty. Political actors are expected to behave differently in the context of uncertainty than in “usual” decision-making processes. Actors exchange information in order to convince other actors and decision-makers, to coordinate their lobbying activities and form coalitions, and to gather information and learn about the substantive issue. The literature suggests that preference similarity, social trust, perceived power, and functional interdependence are particularly important drivers of information exchange. We assume that social trust, as well as being connected to scientific actors, is more important under uncertainty than in a setting with less uncertainty. To investigate information exchange under uncertainty, we analyze the case of unconventional shale gas development in the UK from 2008 to 2014. Our study will rely on statistical analyses of survey data on a diverse set of actors dealing with shale gas development and regulation in the UK.
Abstract:
The paper addresses the question of which factors drive the formation of policy preferences when uncertainties remain about the causes and effects of the problem at stake. To answer this question, we examine policy preferences for reducing aquatic micropollutants, a specific case of water protection policy, across different actor groups (e.g. state, science, target groups). Here, we contrast two types of policy preferences: a) preventive or source-directed policies, which mitigate pollution in order to avoid its contact with water; and b) reactive or end-of-pipe policies, which filter water already contaminated by pollutants. In a second step, we analyze the drivers of actors’ policy preferences by focusing on three sets of explanations: participation, affectedness, and international collaborations. Our analysis of survey data, qualitative interviews, and regressions on the Swiss political elite shows that participation in the policy-making process leads to knowledge exchange and reduces uncertainties about the policy problem, which promotes preferences for preventive policies. Likewise, actors who are affected by the consequences of micropollutants, such as consumer or environmental associations, opt for anticipatory policies. Interestingly, we find that uncertainties about the effectiveness of preventive policies can promote preferences for end-of-pipe policies. While preventive measures often rely on (uncertain) behavioral changes of target groups, reactive policies are more reliable when it comes to fulfilling defined policy goals. Finally, we find that in a transboundary water management context, actors with international collaborations prefer policies that produce immediate and reliable outcomes.
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
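A compact sketch of this error-model idea follows: on a small learning set where both solvers are run, reduce the dimension of the exact response curves and learn a map from proxy curves to the retained scores, then reconstruct the exact response for new realizations from the proxy alone. Ordinary PCA on discretized curves stands in for FPCA, and the analytic "solvers" and the random forest regressor are illustrative assumptions, not the method prescribed by the paper.

```python
# Sketch of a proxy-to-exact error model: PCA (stand-in for FPCA) on the exact
# curves, plus a regression from proxy curves to the retained PCA scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 50)                    # discretization of the response curve

def exact_model(a, b):                       # stand-in for the exact solver (assumption)
    return a * np.exp(-3 * t) + b * t

def proxy_model(a, b):                       # cheaper, biased approximation (assumption)
    return a * (1 - 3 * t + 4.5 * t**2) + 0.9 * b * t

# Learning set: realizations for which both exact and proxy solvers are run.
params = rng.uniform(0.5, 1.5, size=(40, 2))
Y_exact = np.array([exact_model(a, b) for a, b in params])
Y_proxy = np.array([proxy_model(a, b) for a, b in params])

# Dimension reduction of the exact curves (PCA as a discretized stand-in for FPCA).
pca = PCA(n_components=3).fit(Y_exact)
scores = pca.transform(Y_exact)

# Error model: predict the exact-curve scores from the proxy curves.
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(Y_proxy, scores)

# New realizations: only the proxy is run; the exact response is reconstructed.
new_params = rng.uniform(0.5, 1.5, size=(10, 2))
Y_proxy_new = np.array([proxy_model(a, b) for a, b in new_params])
Y_pred = pca.inverse_transform(reg.predict(Y_proxy_new))
Y_true = np.array([exact_model(a, b) for a, b in new_params])
print("RMSE of corrected prediction:", np.sqrt(np.mean((Y_pred - Y_true) ** 2)).round(4))
print("RMSE of raw proxy:           ", np.sqrt(np.mean((Y_proxy_new - Y_true) ** 2)).round(4))
```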