111 results for Robust Probabilistic Model, Dyslexic Users, Rewriting, Question-Answering
Abstract:
Aircraft systems are highly nonlinear and time-varying. High-performance aircraft at high angles of incidence experience undesired coupling of the lateral and longitudinal variables, resulting in departure from normal controlled flight. The aim of this work is to construct a robust closed-loop control that optimally extends the stable and decoupled flight envelope. Nonlinear analysis methods are needed for the study of such systems. Previously, bifurcation techniques have been used mainly to analyze open-loop nonlinear aircraft models and to investigate control effects on dynamic behavior. In this work, linear feedback control designs calculated by eigenstructure assignment methods are investigated for a simple aircraft model at a fixed flight condition. Bifurcation analysis in conjunction with linear control design methods is shown to aid control law design for the nonlinear system.
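To fix ideas, here is a minimal sketch of the linear feedback step for a hypothetical linearized model x' = Ax + Bu; plain pole placement is used as a simple stand-in for full eigenstructure assignment, and the matrices and desired poles are illustrative assumptions, not the paper's model:

    # Minimal sketch: linear state-feedback design for an assumed
    # linearized aircraft model x' = A x + B u at one flight condition.
    import numpy as np
    from scipy.signal import place_poles

    A = np.array([[-0.5,  1.0,  0.0],
                  [-2.0, -0.8,  0.4],
                  [ 0.0,  1.0, -0.3]])
    B = np.array([[0.0, 0.1],
                  [1.2, 0.0],
                  [0.0, 0.9]])

    # Assign stable, well-separated closed-loop eigenvalues
    K = place_poles(A, B, np.array([-1.0, -2.0, -3.0])).gain_matrix

    # Closed-loop dynamics x' = (A - B K) x; verify the assigned spectrum
    print(np.linalg.eigvals(A - B @ K))

Bifurcation analysis would then be applied to the closed-loop nonlinear model to check how far such a locally designed law extends the stable, decoupled envelope.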
Abstract:
The potential for spatial dependence in models of voter turnout, although plausible from a theoretical perspective, has not been adequately addressed in the literature. Using recent advances in Bayesian computation, we formulate and estimate the previously unutilized spatial Durbin error model and apply it to the question of whether spillovers and unobserved spatial dependence in voter turnout matter from an empirical perspective. Formal Bayesian model comparison techniques are employed to compare the normal linear model, the spatially lagged X model (SLX), the spatial Durbin model, and the spatial Durbin error model. The results overwhelmingly support the spatial Durbin error model as the appropriate empirical model.
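For reference, the standard specification of the spatial Durbin error model, in the conventional notation (y the turnout vector, X the covariates, W the spatial weight matrix; the notation is the usual textbook one, not taken from the abstract):

    y = X\beta + W X\theta + u,
    \qquad u = \lambda W u + \varepsilon,
    \qquad \varepsilon \sim N(0, \sigma^2 I_n).

Setting \lambda = 0 recovers the SLX model, while the spatial Durbin model instead places the spatial lag on the dependent variable, y = \rho W y + X\beta + W X\theta + \varepsilon.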
Abstract:
As with any technology system, analysis and design issues are among the fundamental challenges in persuasive technology. Currently, the Persuasive Systems Development (PSD) framework is considered the most comprehensive framework for the design and evaluation of persuasive systems. However, the framework provides limited detail to guide the selection of appropriate techniques as the nature of users, or their use of a system, varies over time. In light of this, we propose a model for analysing and implementing behavioural change in persuasive technology, called the 3D-RAB model. The 3D-RAB model represents the three-dimensional relationships between attitude towards behaviour, attitude towards change or maintaining a change, and current behaviour, and distinguishes variable levels in a user’s cognitive state. As such, it provides a framework which could be used to select appropriate techniques for persuasive technology.
Abstract:
Logistic models are studied as a tool to convert dynamical forecast information (deterministic and ensemble) into probability forecasts. A logistic model is obtained by setting the logarithmic odds ratio equal to a linear combination of the inputs. As with any statistical model, logistic models will suffer from overfitting if the number of inputs is comparable to the number of forecast instances. Computational approaches to avoid overfitting by regularization are discussed, and efficient techniques for model assessment and selection are presented. A logit version of the lasso (originally a linear regression technique) is discussed. In lasso models, less important inputs are identified and their coefficients are set to zero, providing an efficient and automatic model reduction procedure. For the same reason, lasso models are particularly appealing for diagnostic purposes.
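A minimal sketch of such a lasso logit in Python, with synthetic placeholder data standing in for the forecast inputs (the variable names and penalty strength are illustrative assumptions):

    # L1-penalized (lasso) logistic model converting forecast inputs
    # into probability forecasts; the data here are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))       # e.g. ensemble mean, spread, members
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

    # log-odds = linear combination of inputs; the L1 penalty drives the
    # coefficients of unimportant inputs exactly to zero (model reduction)
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    model.fit(X, y)
    print("retained inputs:", np.flatnonzero(model.coef_[0]))
    probs = model.predict_proba(X)[:, 1]  # probability forecasts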
Abstract:
The fourth assessment report of the Intergovernmental Panel on Climate Change (IPCC) includes a comparison of observation-based and modeling-based estimates of the aerosol direct radiative forcing. In this comparison, satellite-based studies suggest a more negative aerosol direct radiative forcing than modeling studies. A previous satellite-based study, part of the IPCC comparison, uses aerosol optical depths and accumulation-mode fractions retrieved by the Moderate Resolution Imaging Spectroradiometer (MODIS) in collection 4. The latest version of the MODIS products, collection 5, improves the aerosol retrievals. Using these products, the direct forcing in the shortwave spectrum defined with respect to present-day natural aerosols is now estimated at −1.30 and −0.65 W m−2 on a global clear-sky and all-sky average, respectively, for 2002. These values are still significantly more negative than the numbers reported by modeling studies. By accounting for differences between present-day natural and preindustrial aerosol concentrations and for sampling biases, and by investigating the impact of differences in the zonal distribution of anthropogenic aerosols, good agreement is reached between the direct forcing derived from MODIS and from the Hadley Centre climate model HadGEM2-A over clear-sky oceans. The results also suggest that satellite estimates of anthropogenic aerosol optical depth over land should be coupled with a robust validation strategy in order to refine the observation-based estimate of the aerosol direct radiative forcing. In addition, the complex problem of deriving the aerosol direct radiative forcing when aerosols are located above clouds still needs to be addressed.
Abstract:
We present a model of market participation in which the presence of non-negligible fixed costs leads to random censoring of the traditional double-hurdle model. Fixed costs arise when household resources must be devoted a priori to the decision to participate in the market. These costs, usually of time, are manifested in non-negligible minimum efficient supplies and a supply correspondence that requires modification of the traditional Tobit regression. The costs also complicate econometric estimation of household behavior. These complications are overcome by application of the Gibbs sampler. The algorithm thus derived provides robust estimates of the fixed-costs double-hurdle model. The model and procedures are demonstrated in an application to milk market participation in the Ethiopian highlands.
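To illustrate the data-augmentation idea behind such samplers, here is a minimal Gibbs sketch for a standard Tobit model (left-censored at zero) under diffuse priors; it shows only the augmentation mechanics and is not the authors' fixed-costs double-hurdle sampler:

    # Gibbs sampling with data augmentation for a simple Tobit model.
    import numpy as np
    from scipy.stats import truncnorm, invgamma

    rng = np.random.default_rng(1)
    n, k = 400, 2
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = np.maximum(X @ np.array([-0.5, 1.0]) + rng.normal(size=n), 0.0)
    cens = y == 0.0

    beta, sigma2 = np.zeros(k), 1.0
    XtX_inv = np.linalg.inv(X.T @ X)
    draws = []
    for it in range(2000):
        # 1) augment censored observations with latent y* below zero
        mu, sd = X @ beta, np.sqrt(sigma2)
        ystar = y.copy()
        ystar[cens] = truncnorm.rvs(-np.inf, (0.0 - mu[cens]) / sd,
                                    loc=mu[cens], scale=sd, random_state=rng)
        # 2) beta | y*, sigma2 ~ N(b_hat, sigma2 (X'X)^-1)
        b_hat = XtX_inv @ X.T @ ystar
        beta = rng.multivariate_normal(b_hat, sigma2 * XtX_inv)
        # 3) sigma2 | beta, y* ~ Inverse-Gamma(n/2, SSR/2)
        resid = ystar - X @ beta
        sigma2 = invgamma.rvs(n / 2, scale=resid @ resid / 2, random_state=rng)
        draws.append(beta)

    print("posterior mean of beta:", np.mean(draws[500:], axis=0))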
Abstract:
Climate models consistently predict a strengthened Brewer–Dobson circulation in response to greenhouse gas (GHG)-induced climate change. Although the predicted circulation changes are clearly the result of changes in stratospheric wave drag, the mechanism behind the wave-drag changes remains unclear. Here, simulations from a chemistry–climate model are analyzed to show that the changes in resolved wave drag are largely explainable in terms of a simple and robust dynamical mechanism, namely changes in the location of critical layers within the subtropical lower stratosphere, which are known from observations to control the spatial distribution of Rossby wave breaking. In particular, the strengthening of the upper flanks of the subtropical jets that is robustly expected from GHG-induced tropospheric warming pushes the critical layers (and the associated regions of wave drag) upward, allowing more wave activity to penetrate into the subtropical lower stratosphere. Because the subtropics represent the critical region for wave driving of the Brewer–Dobson circulation, the circulation is thereby strengthened. Transient planetary-scale waves and synoptic-scale waves generated by baroclinic instability are both found to play a crucial role in this process. Changes in stationary planetary wave drag are not so important because they largely occur away from subtropical latitudes.
Abstract:
The surface mass balance for Greenland and Antarctica has been calculated using model data from an AMIP-type experiment for the period 1979–2001, using the ECHAM5 spectral transform model at different triangular truncations. There is a significant reduction in the calculated ablation for the highest model resolution, T319, with an equivalent grid distance of ca. 40 km. As a consequence, the T319 model has a positive surface mass balance for both ice sheets during the period. For Greenland, the models at lower resolution, T106 and T63, on the other hand, have a much stronger ablation leading to a negative surface mass balance. Calculations have also been undertaken for a climate change experiment using the IPCC scenario A1B, with a T213 resolution (corresponding to a grid distance of some 60 km), comparing two 30-year periods from the end of the twentieth century and the end of the twenty-first century, respectively. For Greenland there is a change of 495 km³/year, going from a positive to a negative surface mass balance, corresponding to a sea level rise of 1.4 mm/year. For Antarctica there is an increase in the positive surface mass balance of 285 km³/year, corresponding to a sea level fall of 0.8 mm/year. The surface mass balance changes of the two ice sheets together lead to a sea level rise of 7 cm at the end of this century compared to the end of the twentieth century. Other possible mass losses, such as those due to changes in the calving of icebergs, are not considered. It appears that such changes must increase significantly, and by several times more than the surface mass balance changes, if the ice sheets are to make a major contribution to sea level rise this century. The model calculations indicate large inter-annual variations in all relevant parameters, making it impossible to identify robust trends from the examined periods at the end of the twentieth century. The calculated inter-annual variations are similar in magnitude to observations. The 30-year trend in SMB at the end of the twenty-first century is, however, significant. The increase in precipitation on the ice sheets follows the Clausius-Clapeyron relation closely and is the main reason for the increase in the surface mass balance of Antarctica. On Greenland, precipitation in the form of snow is gradually starting to decrease and cannot compensate for the increase in ablation. Another factor is the proportionally higher temperature increase on Greenland, leading to a larger ablation. It follows that the ablation from a modest temperature increase will not be sufficient to offset the increase in accumulation, but this changes once the temperature increase goes beyond a critical limit. Calculations show that such a limit for Greenland might well be passed during this century. For Antarctica this will take much longer, probably well into the following centuries.
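The Clausius-Clapeyron relation invoked above, in its standard form (L the latent heat of vaporization, R_v the gas constant for water vapour):

    \frac{d e_s}{d T} = \frac{L\, e_s}{R_v T^2}
    \quad\Longrightarrow\quad
    e_s(T) \approx e_s(T_0)\, \exp\!\left[\frac{L}{R_v}\left(\frac{1}{T_0} - \frac{1}{T}\right)\right],

which gives roughly a 7% increase in saturation vapour pressure per kelvin at typical tropospheric temperatures, consistent with the precipitation scaling described above.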
Abstract:
We analyze here the polar stratospheric temperatures in an ensemble of three 150-year integrations of the Canadian Middle Atmosphere Model (CMAM), an interactive chemistry-climate model which simulates ozone depletion and recovery, as well as climate change. A key motivation is to understand possible mechanisms for the observed trend in the extent of conditions favourable for polar stratospheric cloud (PSC) formation in the Arctic winter lower stratosphere. We find that in the Antarctic winter lower stratosphere, the low temperature extremes required for PSC formation increase in the model as ozone is depleted, but remain steady through the twenty-first century as the warming from ozone recovery roughly balances the cooling from climate change. Thus, ozone depletion itself plays a major role in the Antarctic trends in low temperature extremes. The model trend in low temperature extremes in the Arctic through the latter half of the twentieth century is weaker and less statistically robust than the observed trend. It is not projected to continue into the future. Ozone depletion in the Arctic is weaker in the CMAM than in observations, which may account for the weak past trend in low temperature extremes. In the future, radiative cooling in the Arctic winter due to climate change is more than compensated by an increase in dynamically driven downwelling over the pole.
Abstract:
A necessary condition for a good probabilistic forecast is that the forecast system is shown to be reliable: forecast probabilities should equal observed probabilities verified over a large number of cases. As climate change trends are now emerging from the natural variability, we can apply this concept to climate predictions and compute the reliability of simulated local and regional temperature and precipitation trends (1950–2011) in a recent multi-model ensemble of climate model simulations prepared for the Intergovernmental Panel on Climate Change (IPCC) fifth assessment report (AR5). With only a single verification time, the verification is over the spatial dimension. The local temperature trends appear to be reliable. However, when the global mean climate response is factored out, the ensemble is overconfident: the observed trend is outside the range of modelled trends in many more regions than would be expected by the model estimate of natural variability and model spread. Precipitation trends are overconfident for all trend definitions. This implies that for near-term local climate forecasts the CMIP5 ensemble cannot simply be used as a reliable probabilistic forecast.
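A minimal sketch of the reliability computation in Python, with synthetic placeholders standing in for the trend forecasts and verifying observations (the bin count and variable names are illustrative assumptions):

    # Bin forecast probabilities and compare each bin's mean forecast
    # with the observed event frequency (a reliability diagram in text form).
    import numpy as np

    rng = np.random.default_rng(2)
    p_forecast = rng.uniform(0, 1, size=5000)   # forecast probabilities
    outcome = (rng.uniform(0, 1, size=5000) < p_forecast).astype(int)

    bins = np.linspace(0, 1, 11)
    idx = np.clip(np.digitize(p_forecast, bins) - 1, 0, 9)
    for b in range(10):
        in_bin = idx == b
        if in_bin.any():
            print(f"forecast {p_forecast[in_bin].mean():.2f}  "
                  f"observed {outcome[in_bin].mean():.2f}  n={in_bin.sum()}")

For a reliable system the two columns agree in every bin; an overconfident ensemble, as diagnosed above, produces observed frequencies systematically less extreme than the forecast probabilities.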
Abstract:
Mean field models (MFMs) of cortical tissue incorporate salient, average features of neural masses in order to model activity at the population level, thereby linking microscopic physiology to macroscopic observations, e.g., with the electroencephalogram (EEG). One of the common aspects of MFM descriptions is the presence of a high-dimensional parameter space capturing neurobiological attributes deemed relevant to the brain dynamics of interest. We study the physiological parameter space of a MFM of electrocortical activity and discover robust correlations between physiological attributes of the model cortex and its dynamical features. These correlations are revealed by the study of bifurcation plots, which show that the model responses to changes in inhibition belong to two archetypal categories or “families”. After investigating and characterizing them in depth, we discuss their essential differences in terms of four important aspects: power responses with respect to the modeled action of anesthetics, reaction to exogenous stimuli such as thalamic input, distributions of model parameters, and oscillatory repertoires when inhibition is enhanced. Furthermore, while the complexity of sustained periodic orbits differs significantly between families, we are able to show how metamorphoses between the families can be brought about by exogenous stimuli. Here we unveil links between measurable physiological attributes of the brain and dynamical patterns that are not accessible by linear methods. They instead emerge when the nonlinear structure of parameter space is partitioned according to bifurcation responses. We call this general method “metabifurcation analysis”. The partitioning cannot be achieved by the investigation of only a small number of parameter sets and is instead the result of an automated bifurcation analysis of a representative sample of 73,454 physiologically admissible parameter sets. Our approach generalizes straightforwardly and is well suited to probing the dynamics of other models with large and complex parameter spaces.
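The scan itself can be sketched generically: sample parameter sets, sweep an inhibition-like control parameter, and group parameter sets by their sequence of dynamical responses. The toy below uses a FitzHugh-Nagumo oscillator as a stand-in for the far richer cortical MFM; all names, ranges, and thresholds are illustrative assumptions:

    # Toy automated bifurcation-response scan over sampled parameter sets.
    import numpy as np
    from scipy.integrate import solve_ivp

    def response_signature(a, b, sweep):
        kinds = []
        for I in sweep:                        # sweep the control parameter
            rhs = lambda t, v: [v[0] - v[0] ** 3 / 3 - v[1] + I,
                                (v[0] + a - b * v[1]) / 12.5]
            sol = solve_ivp(rhs, (0, 400), [0.0, 0.0],
                            t_eval=np.linspace(300, 400, 500))
            amp = sol.y[0].max() - sol.y[0].min()  # post-transient amplitude
            kinds.append("osc" if amp > 0.5 else "steady")
        return tuple(kinds)

    rng = np.random.default_rng(3)
    sweep = np.linspace(0.0, 1.0, 5)
    families = {}
    for _ in range(20):                 # tiny sample; the study used 73,454
        a, b = rng.uniform(0.3, 1.0), rng.uniform(0.2, 1.4)
        families.setdefault(response_signature(a, b, sweep), []).append((a, b))
    print({k: len(v) for k, v in families.items()})

Parameter sets sharing a response signature fall into the same "family", which is the grouping idea, in miniature, behind the metabifurcation analysis described above.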
Abstract:
There are several scoring rules that one can choose from in order to score probabilistic forecasting models or estimate model parameters. Whilst it is generally agreed that proper scoring rules are preferable, there is no clear criterion for preferring one proper scoring rule above another. This manuscript compares and contrasts some commonly used proper scoring rules and provides guidance on scoring rule selection. In particular, it is shown that the logarithmic scoring rule prefers erring with more uncertainty, the spherical scoring rule prefers erring with lower uncertainty, whereas the other scoring rules are indifferent to either option.
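A minimal sketch of three commonly used proper scoring rules for a categorical forecast p when category k materializes (the positively oriented convention, larger is better, is an assumption of this sketch):

    # Logarithmic, spherical, and (negated) Brier scoring rules.
    import numpy as np

    def logarithmic_score(p, k):
        return np.log(p[k])

    def spherical_score(p, k):
        return p[k] / np.linalg.norm(p)

    def brier_score(p, k):
        e = np.zeros_like(p)
        e[k] = 1.0
        return -np.sum((p - e) ** 2)   # negated so that larger is better

    p = np.array([0.7, 0.2, 0.1])
    print(logarithmic_score(p, 0), spherical_score(p, 0), brier_score(p, 0))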
Abstract:
Three wind gust estimation (WGE) methods implemented in the numerical weather prediction (NWP) model COSMO-CLM are evaluated with respect to their forecast quality using skill scores. Two methods estimate gusts locally from the mean wind speed and the turbulence state of the atmosphere, while the third considers the mixing-down of high momentum within the planetary boundary layer (WGE Brasseur). One hundred and fifty-eight windstorms from the last four decades are simulated, and the results are compared with gust observations at 37 stations in Germany. Skill scores reveal that the local WGE methods show an overall better behaviour, whilst WGE Brasseur performs less well except in mountain regions. The turbulent kinetic energy (TKE) WGE method introduced here permits a probabilistic interpretation, using statistical characteristics of gusts at observational sites to assess uncertainty. The WGE TKE formulation has the advantage of a ‘native’ interpretation of wind gusts as the result of the local appearance of TKE. The inclusion of a probabilistic WGE TKE approach in NWP models thus has several advantages over other methods, as it has the potential to estimate the uncertainty of gusts at observational sites.
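A common generic form of such a local, TKE-based gust estimate, shown here only to fix ideas (the notation and the tuning factor α are assumptions, not the paper's exact formulation):

    v_{\mathrm{gust}} = \overline{v} + \alpha \sqrt{2\, e_{\mathrm{TKE}}},
    \qquad e_{\mathrm{TKE}} = \tfrac{1}{2}\, \overline{u_i' u_i'},

so that larger boundary-layer TKE directly widens the gust distribution around the mean wind, which is what permits the probabilistic reading described above.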
Abstract:
We report numerical results from a study of balance dynamics using a simple model of atmospheric motion that is designed to help address the question of why balance dynamics is so stable. The non-autonomous Hamiltonian model has a chaotic slow degree of freedom (representing vortical modes) coupled to one or two linear fast oscillators (representing inertia-gravity waves). The system is said to be balanced when the fast and slow degrees of freedom are separated. We find adiabatic invariants that drift slowly in time. This drift is consistent with a random-walk behaviour at a speed which qualitatively scales, even for modest time scale separations, as the upper bound given by Neishtadt’s and Nekhoroshev’s theorems. Moreover, a similar type of scaling is observed for solutions obtained using a singular perturbation (‘slaving’) technique in resonant cases where Nekhoroshev’s theorem does not apply. We present evidence that the smaller Lyapunov exponents of the system scale exponentially as well. The results suggest that the observed stability of nearly-slow motion is a consequence of the approximate adiabatic invariance of the fast motion.
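Schematically, the Nekhoroshev-type bound behind this scaling states that the action I of the fast oscillator is nearly conserved over exponentially long times (the constants a, b, c, C, T_0 are generic, not the paper's):

    |I(t) - I(0)| \le C\, \varepsilon^{b}
    \quad \text{for} \quad
    |t| \le T_0 \exp\!\left(c\, \varepsilon^{-a}\right),

where ε measures the time-scale separation between the slow vortical mode and the fast oscillators.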
Abstract:
The present article addresses the following question: what variables condition syntactic transfer? Evidence is provided in support of the position that third language (L3) transfer is selective, whereby, at least under certain conditions, it is driven by the typological proximity of the target L3 measured against the other previously acquired linguistic systems (cf. Rothman and Cabrelli Amaro, 2007, 2010; Rothman, 2010; Montrul et al., 2011). To show this, we compare data in the domain of adjectival interpretation between successful first language (L1) Italian learners of English as a second language (L2) at the low to intermediate proficiency level of L3 Spanish, and successful L1 English learners of L2 Spanish at the same levels for L3 Brazilian Portuguese. The data show that, irrespective of the L1 or the L2, these L3 learners demonstrate target knowledge of subtle adjectival semantic nuances obtained via noun-raising, which English lacks and the other languages share. We maintain that such knowledge is transferred to the L3 from Italian (L1) and Spanish (L2) respectively in light of important differences between the L3 learners herein compared to what is known of the L2 Spanish performance of L1 English speakers at the same level of proficiency (see, for example, Judy et al., 2008; Rothman et al., 2010). While the present data are consistent with Flynn et al.’s (2004) Cumulative Enhancement Model, we discuss why a coupling of these data with evidence from other recent L3 studies suggests necessary modifications to this model, offering in its stead the Typological Primacy Model (TPM) for multilingual transfer.