928 results for "sources of uncertainty"
Abstract:
We analyse the role of time variation in coefficients and other sources of uncertainty in exchange rate forecasting regressions. Our techniques incorporate the notion that the relevant set of predictors, and their corresponding weights, changes over time. We find that predictive models which allow for sudden, rather than smooth, changes in coefficients significantly beat the random walk benchmark in an out-of-sample forecasting exercise. Using an innovative variance decomposition scheme, we identify uncertainty in the estimation of coefficients and uncertainty about the precise degree of coefficient variability as the main factors hindering the models' forecasting performance. The uncertainty surrounding the choice of predictor, in contrast, is small.
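The abstract does not spell out the estimator, but the intuition is easy to demonstrate. Below is a minimal sketch, not the authors' method: a regression whose coefficient breaks abruptly is forecast with a rolling window and compared against the random walk; all data and parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 400
x = rng.normal(size=T)                             # a single predictor
beta = np.where(np.arange(T) < T // 2, 0.8, -0.8)  # sudden break in the coefficient
dy = beta * x + rng.normal(scale=0.5, size=T)      # exchange-rate returns

window = 60
errs_model, errs_rw = [], []
for t in range(window, T):
    xw, yw = x[t - window:t], dy[t - window:t]
    b = (xw @ yw) / (xw @ xw)            # OLS slope on the recent window only
    errs_model.append(dy[t] - b * x[t])  # one-step-ahead forecast error
    errs_rw.append(dy[t])                # the random walk predicts a zero return

rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
print(f"rolling-window RMSE: {rmse(errs_model):.3f}, random-walk RMSE: {rmse(errs_rw):.3f}")
```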
Abstract:
The evidence for anthropogenic climate change continues to strengthen, and concerns about severe weather events are increasing. As a result, scientific interest is rapidly shifting from detection and attribution of global climate change to prediction of its impacts at the regional scale. However, nearly everything we have any confidence in when it comes to climate change is related to global patterns of surface temperature, which are primarily controlled by thermodynamics. In contrast, we have much less confidence in atmospheric circulation aspects of climate change, which are primarily controlled by dynamics and exert a strong control on regional climate. Model projections of circulation-related fields, including precipitation, show a wide range of possible outcomes, even on centennial timescales. Sources of uncertainty include low-frequency chaotic variability and the sensitivity to model error of the circulation response to climate forcing. As the circulation response to external forcing appears to project strongly onto existing patterns of variability, knowledge of errors in the dynamics of variability may provide some constraints on model projections. Nevertheless, higher scientific confidence in circulation-related aspects of climate change will be difficult to obtain. For effective decision-making, it is necessary to move to a more explicitly probabilistic, risk-based approach.
Abstract:
Participants in contingent valuation studies may be uncertain about a number of aspects of the policy and survey context. The uncertainty management model of fairness judgments states that individuals will evaluate a policy in terms of its fairness when they do not know whether they can trust the relevant managing authority, or when they experience uncertainty due to insufficient knowledge of the general issues surrounding the environmental policy. Similarly, some researchers have suggested that participants who do not know how to answer willingness-to-pay (WTP) questions convey their general attitudes toward the public good rather than report well-defined economic preferences. These contentions were investigated in a sample of 840 residents in four urban catchments across Australia who were interviewed about their WTP for stormwater pollution abatement. Four sources of uncertainty were measured: amount of prior issue-related thought, trustworthiness of the water authority, insufficient scenario information, and WTP response uncertainty. A logistic regression model was estimated in each subsample to test the main effects of the uncertainty sources on WTP as well as their interactions with fairness and proenvironmental attitudes. The results supported the uncertainty management model in only one of the four samples. Similarly, proenvironmental attitudes rarely interacted significantly with uncertainty, and did so in ways that were more complex than hypothesised. It was concluded that uncertain individuals were generally not more likely than other participants to draw on either fairness evaluations or proenvironmental attitudes when making decisions about paying for stormwater pollution abatement.
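A hedged sketch of the kind of specification described, using statsmodels. The synthetic data and the variable names (trust, info, fairness, proenv, wtp_yes) are assumptions, not the study's actual data or model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 840
df = pd.DataFrame({
    "trust": rng.normal(size=n),     # trust in the water authority
    "info": rng.normal(size=n),      # perceived sufficiency of scenario information
    "fairness": rng.normal(size=n),
    "proenv": rng.normal(size=n),
})
logit_p = 0.5 * df.fairness + 0.3 * df.trust * df.fairness
df["wtp_yes"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # binary WTP response

# Main effects of the uncertainty sources plus their interactions
model = smf.logit("wtp_yes ~ trust * fairness + info * proenv", data=df).fit(disp=0)
print(model.summary())
```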
Abstract:
Uncertainties as to the future supply costs of nonrenewable natural resources, such as oil and gas, raise the issue of the choice of supply sources. In a perfectly deterministic world, an efficient use of multiple sources of supply requires that any given market exhaust the supply it can draw from a low-cost source before moving on to a higher-cost one; supply sources should be exploited in strict sequence of increasing marginal cost, with a high-cost source left untouched as long as a less costly source is available. We find that this may not be the efficient thing to do in a stochastic world. We show that there exist conditions under which it can be efficient to use a risky supply source in order to conserve a cheaper non-risky source. The benefit of doing so is that it leaves open the possibility of using the cheaper source instead of the risky one in the event that the latter's cost conditions suddenly deteriorate. There are also conditions under which it will be efficient to use a more costly non-risky source while a less costly risky source is still available. The reason is that this conserves the less costly risky source for use in the event of a possible future drop in its cost.
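A toy two-period example (the numbers are invented, not from the paper) makes the first claim concrete: drawing first on a risky source that is currently more expensive can beat the "cheapest first" rule, because it conserves the cheap safe source as a hedge against a jump in the risky source's cost.

```python
C_SAFE = 5.0         # safe source: constant unit cost
C_RISKY_NOW = 6.0    # risky source: cost today
C_RISKY_BAD = 100.0  # risky source: cost if conditions deteriorate
P_JUMP = 0.5         # probability the risky cost jumps next period
# Demand is one unit per period for two periods; each source holds one unit.

cost_safe_first = C_SAFE + (P_JUMP * C_RISKY_BAD + (1 - P_JUMP) * C_RISKY_NOW)
cost_risky_first = C_RISKY_NOW + C_SAFE
print(f"use safe source first : expected cost {cost_safe_first:.1f}")   # 58.0
print(f"use risky source first: expected cost {cost_risky_first:.1f}")  # 11.0
```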
Abstract:
Laser trackers have been widely used in many industries to meet increasingly high accuracy requirements. In laser tracker measurement, it is complex and difficult to perform an accurate error analysis and uncertainty evaluation. This paper first reviews the working principle of single-beam laser trackers and the state of the art of the key technologies, from both industrial and academic efforts, followed by a comprehensive analysis of uncertainty sources. A generic laser tracker modelling method is formulated and the framework of a virtual laser system (VLS) is proposed. The VLS can be used for measurement planning, measurement accuracy optimization and uncertainty evaluation. The completed virtual laser tracking system should take into consideration all the uncertainty sources affecting coordinate measurement and establish an uncertainty model that behaves identically to the real system.
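As a sketch of one ingredient of such a virtual instrument, the snippet below propagates assumed range and angle uncertainties of a single-beam tracker through the spherical-to-Cartesian conversion by Monte Carlo. The uncertainty magnitudes are illustrative, not calibrated values for any real tracker.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
r = rng.normal(5.0, 10e-6, N)             # range: 5 m, 10 um standard uncertainty
az = rng.normal(np.radians(30), 2e-6, N)  # azimuth: 2 urad standard uncertainty
el = rng.normal(np.radians(10), 2e-6, N)  # elevation: 2 urad standard uncertainty

x = r * np.cos(el) * np.cos(az)
y = r * np.cos(el) * np.sin(az)
z = r * np.sin(el)

for name, c in (("x", x), ("y", y), ("z", z)):
    print(f"u({name}) = {np.std(c) * 1e6:.1f} um")
```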
Abstract:
Purpose - There has been much research on manufacturing flexibility, but supply chain flexibility is still an under-investigated area. This paper focuses on supply flexibility, the aspects of flexibility related to the upstream supply chain. Our purpose is to investigate why and how firms increase supply flexibility.
Methodology/Approach - An exploratory multiple case study was conducted. We analyzed seven Spanish manufacturers from different sectors (automotive, apparel, electronics and electrical equipment).
Findings - The results show that there are several major reasons why firms need supply flexibility (manufacturing schedule fluctuations, JIT purchasing, manufacturing slack capacity, low level of parts commonality, demand volatility, demand seasonality and forecast accuracy), and that companies increase this type of flexibility by implementing two main strategies: increasing suppliers' responsiveness capability and flexible sourcing. The results also suggest that the supply flexibility strategy selected depends on two factors: supplier searching and switching costs, and the type of uncertainty (mix, volume or delivery).
Research limitations - This paper has some limitations common to all case studies, such as the subjectivity of the analysis and the questionable generalizability of the results (since the sample of firms is not statistically significant).
Implications - Our study contributes to the existing literature by empirically investigating the main reasons why companies need to increase supply flexibility and how they do so, and by suggesting some factors that could influence the selection of a particular supply flexibility strategy.
Abstract:
The two central goals of this master's thesis are to serve as a guidebook on the determination of uncertainty in efficiency measurements and to investigate sources of uncertainty in efficiency measurements in the field of electric drives through a literature review, mathematical modeling and experimental means. The influence of individual sources of uncertainty on the total instrumental uncertainty is investigated with the help of mathematical models derived for a balance calorimeter and a direct air-cooled calorimeter. The losses of a frequency converter and an induction motor are measured with the input-output method and a balance calorimeter at 50% and 100% loads. A software tool linking features of Matlab and Excel is created to process measurement data, calculate uncertainties, and calculate and visualize results. The uncertainties are combined with both the worst-case and the realistic perturbation method, and distributions of uncertainty by source are shown based on the experimental results. A comparison of the calculated uncertainties suggests that the balance calorimeter determines losses more accurately than the input-output method: at induction motor efficiencies of 93% or higher, the relative uncertainty (realistic perturbation method, 95% level of confidence) is 1.46% for the balance calorimeter compared with 3.78-12.74% for the input-output method. As some principles in uncertainty analysis are open to interpretation, the views and decisions of the analyst can have a noticeable influence on the uncertainty in the measurement result.
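The two combination rules named above can be stated compactly. A minimal sketch for a generic measurement with sensitivity coefficients c_i and input standard uncertainties u_i (values invented), taking the realistic perturbation method as a root-sum-square combination:

```python
import numpy as np

c = np.array([1.0, -0.8, 2.5])    # sensitivity coefficients dy/dx_i (assumed)
u = np.array([0.02, 0.05, 0.01])  # standard uncertainties of the inputs (assumed)

u_worst_case = np.sum(np.abs(c * u))            # contributions added in magnitude
u_realistic = np.sqrt(np.sum((c * u) ** 2))     # contributions combined in quadrature
print(f"worst case: {u_worst_case:.4f}   realistic (RSS): {u_realistic:.4f}")
```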
Abstract:
In this thesis I propose a novel method to estimate the dose and injection-to-meal time for low-risk intensive insulin therapy. This dosage-aid system uses an optimization algorithm to determine the insulin dose and injection-to-meal time that minimizes the risk of postprandial hyper- and hypoglycaemia in type 1 diabetic patients. To this end, the algorithm applies a methodology that quantifies the risk of experiencing different grades of hypo- or hyperglycaemia in the postprandial state induced by insulin therapy according to an individual patient’s parameters. This methodology is based on modal interval analysis (MIA). Applying MIA, the postprandial glucose level is predicted with consideration of intra-patient variability and other sources of uncertainty. A worst-case approach is then used to calculate the risk index. In this way, a safer prediction of possible hyper- and hypoglycaemic episodes induced by the insulin therapy tested can be calculated in terms of these uncertainties.
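Modal interval analysis itself is far more general, but a deliberately naive corner-evaluation sketch conveys the worst-case idea. The toy linear model, the parameter intervals and the thresholds below are all invented; corner evaluation suffices here only because the model is monotonic in each parameter.

```python
from itertools import product

G0, carbs = 120.0, 60.0  # starting glucose (mg/dL), meal carbohydrate (g)
a_iv = (2.0, 3.0)        # interval: glucose rise per gram of carbohydrate
b_iv = (35.0, 50.0)      # interval: glucose drop per unit of insulin
dose = 4.0               # candidate insulin dose (units)

# Evaluate the model at every corner of the uncertain-parameter box
bounds = [G0 + a * carbs - b * dose for a, b in product(a_iv, b_iv)]
g_min, g_max = min(bounds), max(bounds)
print(f"postprandial glucose in [{g_min:.0f}, {g_max:.0f}] mg/dL")
if g_min < 70:
    print("worst case: hypoglycaemia risk")
if g_max > 180:
    print("worst case: hyperglycaemia risk")
```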
Abstract:
We separate and quantify the sources of uncertainty in projections of regional (~2,500 km) precipitation changes for the twenty-first century using the CMIP3 multi-model ensemble, allowing a direct comparison with a similar analysis for regional temperature changes. For decadal means of seasonal mean precipitation, internal variability is the dominant uncertainty for predictions of the first decade everywhere, and for many regions until the third decade ahead. Model uncertainty is generally the dominant source of uncertainty for longer lead times. Scenario uncertainty is found to be small or negligible for all regions and lead times, apart from close to the poles at the end of the century. For the global mean, model uncertainty dominates at all lead times. The signal-to-noise ratio (S/N) of the precipitation projections is highest at the poles but less than 1 almost everywhere else, and is far lower than for temperature projections. In particular, the tropics have the highest S/N for temperature, but the lowest for precipitation. We also estimate a ‘potential S/N’ by assuming that model uncertainty could be reduced to zero, and show that, for regional precipitation, the gains in S/N are fairly modest, especially for predictions of the next few decades. This finding suggests that adaptation decisions will need to be made in the context of high uncertainty concerning regional changes in precipitation. The potential to narrow uncertainty in regional temperature projections is far greater. These conclusions on S/N are for the current generation of models; the real signal may be larger or smaller than the CMIP3 multi-model mean. Also note that the S/N for extreme precipitation, which is more relevant for many climate impacts, may be larger than for the seasonal mean precipitation considered here.
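This style of partitioning can be sketched on a synthetic ensemble shaped (model, scenario, year): the forced response of each run is approximated by a polynomial fit in time, the residual is taken as internal variability, and model and scenario uncertainties are spreads of the fits. The shapes, trends and noise levels below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n_mod, n_scen, n_yr = 10, 3, 100
years = np.arange(n_yr)
forced = years / n_yr * (1 + 0.3 * rng.normal(size=(n_mod, n_scen, 1)))
data = forced + rng.normal(scale=0.3, size=(n_mod, n_scen, n_yr))

# Forced response per (model, scenario): fourth-order polynomial fit in time
fits = np.empty_like(data)
for m in range(n_mod):
    for s in range(n_scen):
        fits[m, s] = np.polyval(np.polyfit(years, data[m, s], 4), years)

var_internal = np.var(data - fits)                    # residuals: internal variability
var_model = np.mean(np.var(fits, axis=0), axis=0)     # spread across models, per year
var_scenario = np.var(np.mean(fits, axis=0), axis=0)  # spread of multi-model means
total = var_internal + var_model + var_scenario
for name, v in (("internal", var_internal), ("model", var_model), ("scenario", var_scenario)):
    print(f"{name:>8}: {float(np.mean(v / total)):.0%} of total variance (time mean)")
```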
Abstract:
The observed dramatic decrease in September sea ice extent (SIE) has been widely discussed in the scientific literature. Though there is qualitative agreement between observations and ensemble members of the Third Coupled Model Intercomparison Project (CMIP3), it is concerning that the observed trend (1979–2010) is not captured by any ensemble member. The potential sources of this discrepancy include: observational uncertainty, physical model limitations and vigorous natural climate variability. The latter has received less attention and is difficult to assess using the relatively short observational sea ice records. In this study multi-centennial pre-industrial control simulations with five CMIP3 climate models are used to investigate the role that the Arctic oscillation (AO), the Atlantic multi-decadal oscillation (AMO) and the Atlantic meridional overturning circulation (AMOC) play in decadal sea ice variability. Further, we use the models to determine the impact that these sources of variability have had on SIE over both the era of satellite observation (1979–2010) and an extended observational record (1953–2010). There is little evidence of a relationship between the AO and SIE in the models. However, we find that both the AMO and AMOC indices are significantly correlated with SIE in all the models considered. Using sensitivity statistics derived from the models, assuming a linear relationship, we attribute 0.5–3.1%/decade of the 10.1%/decade decline in September SIE (1979–2010) to AMO driven variability.
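The linear attribution step described here can be sketched as follows: estimate the sensitivity of September SIE to the AMO from a long control run, then scale by an observed AMO change. All series and numbers below are synthetic placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(4)
amo_ctrl = rng.normal(size=500)                    # control-run AMO index
sie_ctrl = -0.8 * amo_ctrl + rng.normal(size=500)  # control-run SIE anomaly (%)

sensitivity = np.polyfit(amo_ctrl, sie_ctrl, 1)[0]  # % SIE per unit AMO
amo_obs_trend = 0.15                                # assumed observed AMO trend per decade
attributed = sensitivity * amo_obs_trend            # AMO-driven part of the SIE trend
print(f"sensitivity: {sensitivity:.2f} %/AMO-unit -> attributed: {attributed:.2f} %/decade")
```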
Abstract:
There is large diversity in simulated aerosol forcing among models that participated in the fifth Coupled Model Intercomparison Project (CMIP5), particularly related to aerosol interactions with clouds. Here we use the reported model data and fitted aerosol-cloud relations to separate the main sources of inter-model diversity in the magnitude of the cloud albedo effect. There is large diversity in the global load and spatial distribution of sulfate aerosol, as well as in global-mean cloud-top effective radius. The use of different parameterizations of aerosol-cloud interactions makes the largest contribution to diversity in modeled radiative forcing (up to -39%, +48% about the mean estimate). Uncertainty in pre-industrial sulfate load also makes a substantial contribution (-15%, +61% about the mean estimate), with smaller contributions from inter-model differences in the historical change in sulfate load and in mean cloud fraction.
Abstract:
Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
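As a minimal illustration of the hierarchy the authors advocate, the simulation below makes the three levels explicit: a parameter model, a process model for latent abundance, and a data model for imperfect counts. All numbers are invented, and a real analysis would fit such a model by MCMC or a similar method rather than merely simulate it.

```python
import numpy as np

rng = np.random.default_rng(5)

# Parameter model: prior uncertainty about growth rate and detection probability
growth = rng.normal(1.02, 0.01)
p_detect = rng.beta(8, 2)

# Process model: latent abundance evolves with environmental noise
n_years, N = 30, [100.0]
for _ in range(n_years - 1):
    N.append(N[-1] * growth * rng.lognormal(0, 0.1))

# Data model: observed counts are imperfect samples of the latent state
counts = rng.binomial(np.round(N).astype(int), p_detect)
print(counts[:10])
```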
Abstract:
An analytical method for evaluating the uncertainty of the performance of active antenna arrays across the whole spatial spectrum is presented. Since array processing algorithms based on spatial reference are widely used to track moving targets, it is essential to be aware of the impact of the uncertainty sources on the antenna response. Furthermore, the estimation of the direction of arrival (DOA) depends on the array uncertainty. The aim of the uncertainty analysis is to provide an exhaustive characterization of the behavior of the active antenna array in relation to its main uncertainty sources. The result of this analysis helps in selecting the proper calibration technique to implement. An illustrative example for a triangular antenna array used for satellite tracking is presented, showing the suitability of the proposed method for carrying out an efficient characterization of an active antenna array.
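A Monte Carlo sketch of how per-element gain and phase uncertainties perturb an array's spatial response. A uniform linear array is used for simplicity (the paper's example is triangular), and all error levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n_el, d = 8, 0.5                                    # elements, spacing in wavelengths
theta = np.radians(np.linspace(-90, 90, 361))
a = np.exp(2j * np.pi * d * np.outer(np.sin(theta), np.arange(n_el)))  # steering matrix
w0 = np.ones(n_el)                                  # nominal broadside weights

patterns = []
for _ in range(200):
    gain = 1 + 0.05 * rng.normal(size=n_el)         # 5% amplitude error per element
    phase = np.radians(2) * rng.normal(size=n_el)   # 2 deg phase error per element
    w = w0 * gain * np.exp(1j * phase)
    patterns.append(20 * np.log10(np.abs(a @ w) / n_el))

spread = np.ptp(patterns, axis=0)                   # per-angle spread across draws
print(f"maximum pattern spread across the spatial spectrum: {spread.max():.2f} dB")
```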
Abstract:
Data provided by 7380 middle managers from 60 nations are used to determine whether demographic variables are correlated with managers’ reliance on vertical sources of guidance in different nations and whether these correlations differ depending on national culture characteristics. Significant effects of Hofstede’s national culture scores, age, gender, organization ownership and department function are found. After these main effects have been discounted, significant although weak interactions are found, indicating that demographic effects are stronger in individualist, low power distance nations than elsewhere. Significant non-predicted interaction effects of uncertainty avoidance and masculinity-femininity are also obtained. The implications for theory and practice of the use of demographic attributes in understanding effective management procedures in various parts of the world are discussed.