858 results for Robust Probabilistic Model, Dyslexic Users, Rewriting, Question-Answering
Abstract:
In this paper we introduce a new testing procedure for evaluating the rationality of fixed-event forecasts based on a pseudo-maximum likelihood estimator. The procedure is designed to be robust to departures from the normality assumption. A model is introduced to show that such departures are likely when forecasters experience a credibility loss when they make large changes to their forecasts. The test is illustrated using monthly fixed-event forecasts produced by four UK institutions. Use of the robust test leads to the conclusion that certain forecasts are rational, whereas use of the Gaussian-based test implies that these forecasts are irrational. The difference in the results is due to the nature of the underlying data. Copyright © 2001 John Wiley & Sons, Ltd.
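As an illustration of the kind of rationality check applied to fixed-event forecasts, the sketch below regresses forecast revisions on their own lags and uses a heteroskedasticity-robust covariance estimator. It is a minimal, hypothetical example with simulated data, not the pseudo-maximum likelihood procedure developed in the paper.

```python
# Minimal sketch of a revision-based rationality check for fixed-event forecasts.
# NOT the paper's pseudo-maximum likelihood test: it simply regresses current
# forecast revisions on lagged revisions with a heteroskedasticity-robust
# covariance, so a non-zero slope would signal a departure from rationality.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# hypothetical monthly forecasts of a single fixed event (e.g. annual GDP growth)
forecasts = 2.5 + np.cumsum(rng.normal(0.0, 0.2, size=24))
revisions = np.diff(forecasts)            # month-to-month forecast revisions

y = revisions[1:]                         # current revision
X = sm.add_constant(revisions[:-1])       # lagged revision plus intercept
fit = sm.OLS(y, X).fit(cov_type="HC1")    # robust standard errors

print(fit.params, fit.pvalues)
# Under rationality, the coefficient on the lagged revision should be
# statistically indistinguishable from zero.
```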
Abstract:
As the calibration and evaluation of flood inundation models are a prerequisite for their successful application, there is a clear need to ensure that the performance measures that quantify how well models match the available observations are fit for purpose. This paper evaluates the binary pattern performance measures that are frequently used to compare flood inundation models with observations of flood extent. This evaluation considers whether these measures are able to calibrate and evaluate model predictions in a credible and consistent way, i.e. whether they identify the underlying model behaviour for a number of different purposes, such as comparing models of floods of different magnitudes or on different catchments. Through theoretical examples, it is shown that the binary pattern measures are not consistent for floods of different sizes, such that for the same vertical error in water level, a model of a flood of large magnitude appears to perform better than a model of a smaller magnitude flood. Further, the commonly used Critical Success Index (usually referred to as F<2>) is biased in favour of overprediction of the flood extent, and is also biased towards correctly predicting areas of the domain with smaller topographic gradients. Consequently, it is recommended that future studies consider carefully the implications of reporting conclusions using these performance measures. Additionally, future research should consider whether a more robust and consistent analysis could be achieved by using elevation comparison methods instead.
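As a concrete illustration of the binary pattern measures discussed above, the sketch below derives the Critical Success Index from a pair of binary flood-extent grids. The grids are invented for illustration; only the standard definition of the index (hits divided by hits plus misses plus false alarms) is assumed.

```python
# Illustrative computation of the Critical Success Index (CSI) from binary
# observed and modelled flood-extent maps. The example grids are hypothetical;
# in practice these would come from a flood inundation model and an observed
# extent (e.g. derived from satellite imagery).
import numpy as np

observed = np.array([[1, 1, 0, 0],
                     [1, 1, 1, 0],
                     [0, 1, 1, 0]], dtype=bool)
modelled = np.array([[1, 1, 1, 0],
                     [1, 1, 1, 1],
                     [0, 0, 1, 0]], dtype=bool)

hits         = np.sum(observed & modelled)    # wet in both
false_alarms = np.sum(~observed & modelled)   # wet in model only (overprediction)
misses       = np.sum(observed & ~modelled)   # wet in observation only

csi = hits / (hits + misses + false_alarms)
print(f"hits={hits}, misses={misses}, false_alarms={false_alarms}, CSI={csi:.2f}")
# Because false alarms and misses are weighted equally while the flooded area
# itself grows with event magnitude, the same vertical error can yield different
# CSI values for floods of different sizes, which is the consistency problem
# raised above.
```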
Abstract:
We present an efficient method of combining wide-angle neutron scattering data with detailed atomistic models, allowing us to perform a quantitative and qualitative mapping of the organisation of the chain conformation in both glass and liquid phases. The structural refinement method presented in this work is based on the exploitation of the intrachain features of the diffraction pattern and its intimate linkage with atomistic models through the use of internal coordinates for bond lengths, valence angles and torsion rotations. Atomic connectivity is defined through these coordinates, which are in turn assigned by pre-defined probability distributions, thus allowing the models in question to be built stochastically. Incremental variation of these coordinates allows for the construction of models that minimise the differences between the observed and calculated structure factors. We present a series of neutron scattering data for 1,2-polybutadiene over the temperature range 120-400 K. Analysis of the experimental data yields bond lengths for C-C and C=C of 1.54 Å and 1.35 Å respectively. Valence angles of the backbone were found to be 112°, and the torsion distributions are characterised by five rotational states, a three-fold trans-skew± for the backbone and gauche± for the vinyl group. Rotational states of the vinyl group were found to be equally populated, indicating a largely atactic chain. The two backbone torsion angles exhibit different temperature dependence of their trans populations, with one of them adopting an almost all-trans sequence. Consequently the resulting configuration leads to a rather persistent chain, as indicated by the value of the characteristic ratio extrapolated from the model. We compare our results with theoretical predictions, computer simulations, RIS models and previously reported experimental results.
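To make the stochastic model-building step more tangible, the following sketch grows a toy backbone by sampling torsion angles from an assumed three-state distribution while holding the bond length and valence angle at the refined values quoted above. The state angles, weights and geometric construction are illustrative assumptions, not the refinement code used in this work.

```python
# Toy stochastic chain builder: atoms are placed from internal coordinates
# (bond length, valence angle, torsion), with torsions drawn from an assumed
# three-state (trans / skew+ / skew-) distribution. Illustrative only.
import numpy as np

def place_atom(a, b, c, bond, angle, torsion):
    """Place the next atom given the three preceding atoms and the internal
    coordinates (bond length, valence angle, torsion), angles in radians."""
    bc = c - b
    bc /= np.linalg.norm(bc)
    n = np.cross(b - a, bc)
    n /= np.linalg.norm(n)
    m = np.cross(n, bc)
    d = np.array([-bond * np.cos(angle),
                   bond * np.sin(angle) * np.cos(torsion),
                   bond * np.sin(angle) * np.sin(torsion)])
    return c + d[0] * bc + d[1] * m + d[2] * n

rng = np.random.default_rng(1)
BOND = 1.54                                   # refined C-C bond length (Angstrom)
ANGLE = np.deg2rad(112.0)                     # refined backbone valence angle
STATES = np.deg2rad([180.0, 120.0, -120.0])   # assumed trans / skew+ / skew- angles
WEIGHTS = [0.6, 0.2, 0.2]                     # assumed state populations

# seed with three atoms, then grow the chain stochastically
chain = [np.zeros(3),
         np.array([BOND, 0.0, 0.0]),
         np.array([BOND + BOND * np.cos(np.pi - ANGLE), BOND * np.sin(np.pi - ANGLE), 0.0])]
for _ in range(100):
    chain.append(place_atom(chain[-3], chain[-2], chain[-1],
                            BOND, ANGLE, rng.choice(STATES, p=WEIGHTS)))
chain = np.array(chain)
print("end-to-end distance (Angstrom):", round(float(np.linalg.norm(chain[-1] - chain[0])), 2))
```

In a refinement setting, many such chains would be generated and the internal-coordinate distributions varied incrementally until the calculated structure factor matches the measured one.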
Abstract:
Mineral dust aerosols in the atmosphere have the potential to affect the global climate by influencing the radiative balance of the atmosphere and the supply of micronutrients to the ocean. Ice and marine sediment cores indicate that dust deposition from the atmosphere was at some locations 2–20 times greater during glacial periods, raising the possibility that mineral aerosols might have contributed to climate change on glacial-interglacial time scales. To address this question, we have used linked terrestrial biosphere, dust source, and atmospheric transport models to simulate the dust cycle in the atmosphere for current and last glacial maximum (LGM) climates. We obtain a 2.5-fold higher dust loading in the entire atmosphere and a twenty-fold higher loading in high latitudes at the LGM relative to present. Comparisons to a compilation of atmospheric dust deposition flux estimates for the LGM and present in marine sediment and ice cores show that the simulated flux ratios are broadly in agreement with observations; differences suggest where further improvements in the simple dust model could be made. The simulated increase in high-latitude dustiness depends on the expansion of unvegetated areas, especially in the high latitudes and in central Asia, caused by a combination of increased aridity and low atmospheric [CO2]. The existence of these dust source areas at the LGM is supported by pollen data and loess distribution in the northern continents. These results point to a role for vegetation feedbacks, including climate effects and physiological effects of low [CO2], in modulating the atmospheric distribution of dust.
Abstract:
There has been recent interest in sensory systems that are able to display a response which is proportional to a fold change in stimulus concentration, a feature referred to as fold-change detection (FCD). Here, we demonstrate FCD in a recent whole-pathway mathematical model of Escherichia coli chemotaxis. FCD is shown to hold for each protein in the signalling cascade and to be robust to variation in kinetic rates and protein concentrations. Using a sensitivity analysis, we find that only variations in the number of receptors within a signalling team lead to the model not exhibiting FCD. We also discuss the ability of a cell with multiple receptor types to display FCD and explain how a particular receptor configuration may be used to elucidate the two experimentally determined regimes of FCD behaviour. All findings are discussed with respect to the experimental literature.
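The FCD property can be checked numerically with a much simpler toy system than the whole-pathway chemotaxis model: drive an adaptive circuit with a step of a given fold change at two very different background concentrations and compare the responses. The sketch below uses a generic incoherent feedforward-style model that is known to exhibit FCD; it is purely illustrative and is not the model analysed in the paper.

```python
# Toy demonstration of fold-change detection (FCD) with a generic incoherent
# feedforward-loop model (NOT the whole-pathway chemotaxis model above):
#   dx/dt = u - x          (internal variable tracks the input)
#   dy/dt = u / x - y      (output responds to the ratio of input to internal state)
# If the system shows FCD, the response y(t) to a step u0 -> F*u0 is identical
# for any background u0, because only the fold change F matters.
import numpy as np
from scipy.integrate import solve_ivp

def simulate_step(u0, fold, t_end=20.0):
    def rhs(t, state):
        x, y = state
        u = u0 if t < 5.0 else fold * u0     # input step at t = 5
        return [u - x, u / x - y]
    # start at the pre-step steady state: x = u0, y = 1
    sol = solve_ivp(rhs, (0.0, t_end), [u0, 1.0], dense_output=True, max_step=0.01)
    t = np.linspace(0.0, t_end, 500)
    return t, sol.sol(t)[1]

t, y_small = simulate_step(u0=1.0, fold=3.0)
_, y_large = simulate_step(u0=100.0, fold=3.0)   # 100x the background, same fold change
print("max |difference| between responses:", np.max(np.abs(y_small - y_large)))
# The difference should be close to zero: the response depends only on the fold change.
```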
Abstract:
What is the relationship between magnitude judgments relying on directly available characteristics versus probabilistic cues? Question frame was manipulated in a comparative judgment task previously assumed to involve inference across a probabilistic mental model (e.g., “which city is largest” – the “larger” question – versus “which city is smallest” – the “smaller” question). Participants identified either the largest or smallest city (Experiments 1a, 2) or the richest or poorest person (Experiment 1b) in a three-alternative forced choice (3-AFC) task (Experiment 1) or 2-AFC task (Experiment 2). Response times revealed an interaction between question frame and the number of options recognized. When asked the smaller question, response times were shorter when none of the options were recognized. The opposite pattern was found when asked the larger question: response time was shorter when all options were recognized. These task-stimulus congruity results in judgment under uncertainty are consistent with, and predicted by, theories of magnitude comparison which make use of deductive inferences from declarative knowledge.
Abstract:
Most developers of behavior change support systems (BCSS) employ ad hoc procedures in their designs. This paper presents a novel discussion of how analyzing the relationship between attitude toward target behavior, current behavior, and attitude toward changing or maintaining behavior can facilitate the design of BCSS. We describe the Three-Dimensional Relationship between Attitude and Behavior (3D-RAB) model and demonstrate how it can be used to categorize users based on variations in levels of cognitive dissonance. The proposed model seeks to provide a method for analyzing the user context in the persuasive systems design model, and it is evaluated using existing BCSS. We identified that although designers seem to address the various cognitive states, this is not done purposefully or in a methodical fashion, which implies that many existing applications target users not considered at the design phase. As a result of this work, it is suggested that designers apply the 3D-RAB model in order to design solutions for targeted users.
Abstract:
Analysis of the forecasts and hindcasts from the ECMWF 32-day forecast model reveals that there is statistically significant skill in predicting weekly mean wind speeds over areas of Europe at lead times of at least 14–20 days. Previous research on wind speed predictability has focused on the short- to medium-range time scales, typically finding that forecasts lose all skill by the later part of the medium-range forecast. To the authors’ knowledge, this research is the first to look beyond the medium-range time scale by taking weekly mean wind speeds, instead of averages at hourly or daily resolution, for the ECMWF monthly forecasting system. It is shown that the operational forecasts have high levels of correlation (~0.6) with observations over the winters of 2008–12 for some areas of Europe. Hindcasts covering 20 winters show a more modest level of correlation but are still skillful. Additional analysis examines the probabilistic skill for the United Kingdom with the application of wind power forecasting in mind. It is also shown that there is forecast “value” for end users operating in a simple cost/loss ratio decision-making framework. End users that are sensitive to winter wind speed variability over the United Kingdom, Germany, and some other areas of Europe should therefore consider forecasts beyond the medium-range time scale, as it is clear that useful information is contained within the forecast.
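The forecast "value" mentioned above is typically computed in the standard static cost/loss framework. The sketch below shows that calculation for a user with a given cost/loss ratio, using an invented contingency table of binary event forecasts; it is a generic illustration, not the verification set-up of the paper.

```python
# Relative economic value in the simple static cost/loss framework, for a user who
# pays a protection cost C whenever action is taken and suffers a loss L when an
# adverse event occurs unprotected. The contingency-table counts are invented.
def relative_value(hits, false_alarms, misses, correct_rejections, cost_loss_ratio):
    n = hits + false_alarms + misses + correct_rejections
    base_rate = (hits + misses) / n          # climatological event frequency
    alpha = cost_loss_ratio                  # C / L (0 < alpha < 1)

    # expenses expressed as a fraction of the loss L
    expense_forecast = (hits + false_alarms) / n * alpha + misses / n * 1.0
    expense_climate = min(alpha, base_rate)  # always protect vs. never protect
    expense_perfect = base_rate * alpha

    return (expense_climate - expense_forecast) / (expense_climate - expense_perfect)

# Hypothetical verification of weekly-mean wind-speed event forecasts
print(relative_value(hits=18, false_alarms=7, misses=6, correct_rejections=69,
                     cost_loss_ratio=0.3))
# Positive values mean the end user saves money by acting on the forecast rather
# than on climatology alone; the value varies with the user's cost/loss ratio.
```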
Abstract:
The MATLAB model is contained within the compressed folders (versions are available as .zip and .tgz). This model uses MERRA reanalysis data (>34 years available) to estimate the hourly aggregated wind power generation for a predefined (fixed) distribution of wind farms. A ready-made example is included for the wind farm distribution of Great Britain, April 2014 ("CF.dat"). This consists of an hourly time series of GB-total capacity factor spanning the period 1980-2013 inclusive. Given the global nature of reanalysis data, the model can be applied to any specified distribution of wind farms in any region of the world. Users are, however, strongly advised to bear in mind the limitations of reanalysis data when using this model/data. This is discussed in our paper: Cannon, Brayshaw, Methven, Coker, Lenaghan. "Using reanalysis data to quantify extreme wind power generation statistics: a 33 year case study in Great Britain". Submitted to Renewable Energy in March 2014. Additional information about the model is contained in the model code itself, in the accompanying ReadMe file, and on our website: http://www.met.reading.ac.uk/~energymet/data/Cannon2014/
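As a quick illustration of how the example output might be inspected, the sketch below (in Python rather than MATLAB) summarises a capacity-factor time series. It assumes "CF.dat" is a plain-text column of hourly values; check the accompanying ReadMe for the actual file format before relying on this.

```python
# Minimal sketch of post-processing the example output. Assumes "CF.dat" is a
# plain-text column of hourly GB-aggregated capacity factors (1980-2013); the
# real file format is documented in the accompanying ReadMe.
import numpy as np

cf = np.loadtxt("CF.dat")                  # hourly capacity factor time series
print("hours loaded:                   ", cf.size)
print("mean capacity factor:           ", cf.mean())
print("1st percentile (low-wind hours):", np.percentile(cf, 1))
print("fraction of hours below 5% CF:  ", np.mean(cf < 0.05))
```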
Abstract:
The study of workarounds (WA) has increased in importance due to their impact on patient safety and efficiency. However, there are no adequate theories to explain the motivation to create and use a workaround in a healthcare setting. Although theories of technology acceptance help to explain the reasons for accepting or rejecting a technology, they fail to explain the drivers for alternatives. Workarounds also involve creators and performers who have different motivations. Models such as the Theory of Planned Behaviour (TPB) or the Theory of Reasoned Action (TRA) can help to explain the role of workaround users, but they do not explain the dynamics of workaround creators. Our aim is to develop a theoretical foundation for explaining workaround motivation, combining behaviour models with norms that relate to sanctions, to provide an integrated Workaround Motivation Model (WAMM). The development of the WAMM model is explained in this paper on the basis of workaround cases, as part of further research to establish the model.
Abstract:
Persuasive technologies have been extensively applied in the context of e-commerce for the purposes of marketing, enhancing system credibility, and motivating users to adopt the systems. However, the impact of persuasion on consumers' decisions to purchase online has not been investigated previously. This study reviews theories of technology acceptance and identifies their limitation in not considering the effect of persuasive technologies when determining user acceptance of online technology. The study proposes a theoretical model that considers the effect of persuasive technologies on consumer acceptance of e-commerce websites, alongside other related variables, i.e. trust and technological attributes. Moreover, the paper proposes a model based on UTAUT2 that contains the relevant contributing factors, including the concept of perceived persuasiveness.
Abstract:
Although over a hundred thermal indices can be used for assessing thermal health hazards, many ignore the human heat budget, physiology and clothing. The Universal Thermal Climate Index (UTCI) addresses these shortcomings by using an advanced thermo-physiological model. This paper assesses the potential of using the UTCI for forecasting thermal health hazards. Traditionally, such hazard forecasting has had two further limitations: it has been narrowly focused on a particular region or nation and has relied on the use of single ‘deterministic’ forecasts. Here, the UTCI is computed on a global scale, which is essential for international health-hazard warnings and disaster preparedness, and it is provided as a probabilistic forecast. It is shown that probabilistic UTCI forecasts are superior in skill to deterministic forecasts and that, despite global variations, the UTCI forecast is skilful for lead times up to 10 days. The paper also demonstrates the utility of probabilistic UTCI forecasts using the example of the 2010 heat wave in Russia.
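As a minimal illustration of how a probabilistic UTCI forecast can feed a health-hazard warning, the sketch below estimates an exceedance probability from an ensemble of UTCI values. The ensemble members are invented, and the 32 °C threshold is the commonly quoted lower bound of the 'strong heat stress' category on the UTCI assessment scale; this is not the verification methodology of the paper.

```python
# Converting an ensemble of UTCI forecasts into an exceedance probability for a
# heat-stress threshold. The ensemble values are invented; 32 degC is the commonly
# used lower bound of the "strong heat stress" category on the UTCI assessment scale.
import numpy as np

utci_ensemble = np.array([29.5, 31.2, 33.0, 34.1, 30.8, 35.6, 32.4, 28.9,
                          33.7, 36.2, 31.9, 34.8, 30.1, 33.2, 35.0, 29.8])  # degC
threshold = 32.0
prob_exceed = np.mean(utci_ensemble > threshold)
print(f"P(UTCI > {threshold} degC) = {prob_exceed:.2f}")
# A deterministic forecast gives only a single yes/no answer; the ensemble
# probability lets users with different risk tolerances set their own warning level.
```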
Abstract:
Windstorms are a main feature of the European climate and exert strong socioeconomic impacts. Considerable effort has been made in developing and enhancing models to simulate the intensification of windstorms, the resulting footprints, and the associated impacts. Simulated wind or gust speeds usually differ from observations, as regional climate models have biases and cannot capture all local effects. An approach to adjust regional climate model (RCM) simulations of wind and wind gust toward observations is introduced. For this purpose, 100 windstorms are selected, and observations from 173 (111) test sites of the German Weather Service are considered for wind (gust) speed. Theoretical Weibull distributions are fitted to the observed and simulated wind and gust speeds, and the distribution parameters of the observations are interpolated onto the RCM computational grid. A probability mapping approach is applied to relate the distributions and to correct the modeled footprints. The results are achieved not only for single test sites but for an area-wide regular grid. The approach is validated using root-mean-square errors on an event and site basis, documenting that the method is generally able to adjust the RCM output toward observations. For gust speeds, an improvement is reached for 88 of 100 events and at about 64% of the test sites. For wind, 99 of 100 improved events and ~84% improved sites are obtained. This gives confidence in the potential of the introduced approach for many applications, in particular those considering wind data.
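The probability mapping step can be illustrated with a generic quantile-mapping sketch between two fitted Weibull distributions: simulated values are passed through the model CDF and then through the inverse of the observed CDF. The shape and scale parameters below are invented; the paper's actual fitting and spatial interpolation are not reproduced here.

```python
# Generic probability (quantile) mapping between Weibull distributions fitted to
# simulated and observed gust speeds: x_corrected = F_obs^-1( F_model(x) ).
# The shape/scale parameters below are invented for illustration only.
import numpy as np
from scipy.stats import weibull_min

# hypothetical fitted parameters (shape k, scale lambda) at one grid point / site
k_model, lam_model = 1.8, 9.0     # RCM-simulated gust speeds (m/s)
k_obs,   lam_obs   = 2.1, 11.5    # observed gust speeds (m/s)

gusts_model = np.array([6.0, 12.0, 18.0, 24.0, 30.0])   # simulated footprint values

# map through the model CDF, then invert the observed CDF
probs = weibull_min.cdf(gusts_model, k_model, scale=lam_model)
gusts_corrected = weibull_min.ppf(probs, k_obs, scale=lam_obs)
print(np.round(gusts_corrected, 1))
```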
Abstract:
We use a stratosphere–troposphere composition–climate model with interactive sulfur chemistry and aerosol microphysics to investigate the effect of the 1991 Mount Pinatubo eruption on stratospheric aerosol properties. Satellite measurements indicate that shortly after the eruption, between 14 and 23 Tg of SO2 (7 to 11.5 Tg of sulfur) was present in the tropical stratosphere. Best estimates of the peak global stratospheric aerosol burden are in the range 19 to 26 Tg, or 3.7 to 6.7 Tg of sulfur assuming a composition of between 59 and 77 % H2SO4. In light of this large uncertainty range, we performed two main simulations with 10 and 20 Tg of SO2 injected into the tropical lower stratosphere. Simulated stratospheric aerosol properties through the 1991 to 1995 period are compared against a range of available satellite and in situ measurements. Stratospheric aerosol optical depth (sAOD) and effective radius from both simulations show good qualitative agreement with the observations, with the timing of peak sAOD and the decay timescale matching the observations well in the tropics and mid-latitudes. However, injecting 20 Tg gives a stratospheric aerosol mass burden that is a factor of 2 too high compared to the satellite data, with consequent strong high biases in simulated sAOD and surface area density; the 10 Tg injection is in much better agreement. Our model cannot explain the large fraction of the injected sulfur that the satellite-derived SO2 and aerosol burdens indicate was removed within the first few months after the eruption. We suggest either that there is an additional loss pathway for the SO2 not included in our model (e.g. via accommodation into ash or ice in the volcanic cloud) or that a larger proportion of the injected sulfur was removed via cross-tropopause transport than in our simulations. We also critically evaluate the simulated evolution of the particle size distribution, comparing in detail to balloon-borne optical particle counter (OPC) measurements from Laramie, Wyoming, USA (41° N). Overall, the model captures remarkably well the complex variations in particle concentration profiles across the different OPC size channels. However, for the 19 to 27 km injection height range used here, both runs have a modest high bias in the lowermost stratosphere for the finest particles (radii less than 250 nm), and the decay timescale for these particles is longer in the model, with a much later return to background conditions. Also, whereas the 10 Tg run compared best to the satellite measurements, a significant low bias is apparent in the coarser size channels in the volcanically perturbed lower stratosphere. Overall, our results suggest that, with appropriate calibration, aerosol microphysics models are capable of capturing the observed variation in particle size distribution in the stratosphere across both volcanically perturbed and quiescent conditions. Furthermore, additional sensitivity simulations suggest that predictions with the models are robust to uncertainties in sub-grid particle formation and nucleation rates in the stratosphere.
Abstract:
The 2008-2009 financial crisis and related organizational and economic failures have meant that financial organizations are faced with a ‘tsunami’ of new regulatory obligations. This environment presents new managerial challenges, as organizations are forced to engage in complex and costly remediation projects with short deadlines. Drawing from a longitudinal study conducted with nine financial institutions over twelve years, this paper identifies nine IS capabilities which underpin activities for managing regulatory-themed governance, risk and compliance efforts. The research shows that many firms are now focused on meeting the regulators’ deadlines at the expense of developing a strategic, enterprise-wide, connected approach to compliance. Consequently, executives are in danger of implementing siloed compliance solutions within business functions. By evaluating the maturity of their IS capabilities which underpin regulatory adherence, managers have an opportunity to develop robust operational architectures and so are better positioned to face the challenges derived from shifting regulatory landscapes.