939 results for measurement error models
Abstract:
The potential for spatial dependence in models of voter turnout, although plausible from a theoretical perspective, has not been adequately addressed in the literature. Using recent advances in Bayesian computation, we formulate and estimate the previously unutilized spatial Durbin error model and apply this model to the question of whether spillovers and unobserved spatial dependence in voter turnout matter from an empirical perspective. Formal Bayesian model comparison techniques are employed to compare the normal linear model, the spatially lagged X model (SLX), the spatial Durbin model, and the spatial Durbin error model. The results overwhelmingly support the spatial Durbin error model as the appropriate empirical model.
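As a concrete illustration of the model this abstract refers to, the sketch below simulates data from the spatial Durbin error model (SDEM), y = Xβ + WXθ + u with u = λWu + ε, on a small lattice and recovers the parameters by profiling the likelihood over λ. The profile-likelihood fit is a simple stand-in for the paper's Bayesian MCMC estimation, and the lattice, weights construction and parameter values are illustrative assumptions rather than anything from the study.

```python
# Minimal numpy sketch of the spatial Durbin error model (SDEM) structure,
#   y = X b + W X t + u,   u = lam * W u + e,
# simulated on a small lattice and fitted by profiling the concentrated
# log-likelihood over lam (a stand-in for the paper's Bayesian estimation).
import numpy as np

rng = np.random.default_rng(0)

# --- row-standardised rook-contiguity weights on an m x m lattice ---
m = 15
n = m * m
W = np.zeros((n, n))
for i in range(m):
    for j in range(m):
        k = i * m + j
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < m and 0 <= jj < m:
                W[k, ii * m + jj] = 1.0
W /= W.sum(axis=1, keepdims=True)

# --- simulate from the SDEM data-generating process ---
X = rng.normal(size=(n, 2))
beta, theta, lam = np.array([1.0, -0.5]), np.array([0.3, 0.2]), 0.6
e = rng.normal(scale=0.5, size=n)
u = np.linalg.solve(np.eye(n) - lam * W, e)        # u = (I - lam W)^{-1} e
y = X @ beta + W @ X @ theta + u

# --- fit by concentrated (profile) likelihood over lam ---
Z = np.column_stack([np.ones(n), X, W @ X])        # intercept, X, WX

def profile_loglik(lam_val):
    A = np.eye(n) - lam_val * W
    yt, Zt = A @ y, A @ Z                          # spatially filtered data
    b = np.linalg.lstsq(Zt, yt, rcond=None)[0]
    s2 = np.mean((yt - Zt @ b) ** 2)
    return np.linalg.slogdet(A)[1] - 0.5 * n * np.log(s2), b

grid = np.linspace(-0.9, 0.9, 181)
lls = [profile_loglik(lv)[0] for lv in grid]
lam_hat = grid[int(np.argmax(lls))]
_, coef_hat = profile_loglik(lam_hat)
print("lambda_hat =", round(lam_hat, 2))
print("coefficients (const, beta, theta) =", np.round(coef_hat, 2))
```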
Abstract:
We examine differential equations where nonlinearity is a result of the advection part of the total derivative or the use of quadratic algebraic constraints between state variables (such as the ideal gas law). We show that these types of nonlinearity can be accounted for in the tangent linear model by a suitable choice of the linearization trajectory. Using this optimal linearization trajectory, we show that the tangent linear model can be used to reproduce the exact nonlinear error growth of perturbations for more than 200 days in a quasi-geostrophic model and more than (the equivalent of) 150 days in the Lorenz 96 model. We introduce an iterative method, purely based on tangent linear integrations, that converges to this optimal linearization trajectory. The main conclusion from this article is that this iterative method can be used to account for nonlinearity in estimation problems without using the nonlinear model. We demonstrate this by performing forecast sensitivity experiments in the Lorenz 96 model and show that we are able to estimate analysis increments that improve the two-day forecast using only four backward integrations with the tangent linear model. Copyright © 2011 Royal Meteorological Society
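The key property the abstract exploits can be demonstrated compactly in the Lorenz 96 model: because its tendencies are quadratic, the tangent linear model (TLM) evaluated along the average of the reference and perturbed trajectories reproduces the nonlinear perturbation growth exactly at the continuous level, and to round-off when the three trajectories are stepped together as below. The forcing, step size and run length are illustrative choices, not the paper's settings, and the iterative construction of the linearization trajectory is not reproduced here.

```python
# Sketch: in Lorenz 96 the TLM linearised about the mid-trajectory (x + y) / 2
# tracks the nonlinear perturbation y - x to round-off, because the tendencies
# are quadratic.  All settings are illustrative.
import numpy as np

F, N = 8.0, 40                      # standard Lorenz 96 forcing and dimension

def l96(x):
    """Lorenz 96 tendency dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def l96_tlm(dx, xlin):
    """TLM tendency: the Jacobian of l96 at xlin applied to dx."""
    return ((np.roll(dx, -1) - np.roll(dx, 2)) * np.roll(xlin, 1)
            + (np.roll(xlin, -1) - np.roll(xlin, 2)) * np.roll(dx, 1) - dx)

def rk4(f, s, dt):
    k1 = f(s); k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def coupled(s):
    """Tendency of (reference x, perturbed y, TLM perturbation d) stacked,
    with the TLM linearised about the mid-trajectory (x + y) / 2."""
    x, y, d = np.split(s, 3)
    return np.concatenate([l96(x), l96(y), l96_tlm(d, 0.5 * (x + y))])

rng = np.random.default_rng(1)
x0 = F + rng.normal(scale=1.0, size=N)         # random initial state near F
d0 = 1e-3 * rng.normal(size=N)                 # small initial perturbation
s = np.concatenate([x0, x0 + d0, d0])

dt, nsteps = 0.01, 2000                        # 20 model-time units
for _ in range(nsteps):
    s = rk4(coupled, s, dt)

x, y, d = np.split(s, 3)
print("nonlinear perturbation norm :", np.linalg.norm(y - x))
print("TLM perturbation norm       :", np.linalg.norm(d))
print("relative mismatch           :",
      np.linalg.norm(d - (y - x)) / np.linalg.norm(y - x))
```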
Abstract:
This study presents a model intercomparison of four regional climate models (RCMs) and one variable resolution atmospheric general circulation model (AGCM) applied over Europe with special focus on the hydrological cycle and the surface energy budget. The models simulated the 15 years from 1979 to 1993 by using quasi-observed boundary conditions derived from ECMWF re-analyses (ERA). The model intercomparison focuses on two large catchments representing two different climate conditions covering two areas of major research interest within Europe. The first is the Danube catchment, which represents a continental climate dominated by advection from the surrounding land areas. It is used to analyse the common model error of a too dry and too warm simulation of the summertime climate of southeastern Europe. This summer warming and drying problem is seen in many RCMs, and to a lesser extent in GCMs. The second area is the Baltic Sea catchment, which represents a maritime climate dominated by advection from the ocean and from the Baltic Sea. This catchment is the subject of many studies within Europe and is also covered by the BALTEX program. The observed data used are monthly mean surface air temperature, precipitation and river discharge. For all models, these are used to estimate mean monthly biases of all components of the hydrological cycle over land. In addition, the mean monthly deviations of the surface energy fluxes from ERA data are computed. Atmospheric moisture fluxes from ERA are compared with those of one model to provide an independent estimate of the convergence bias derived from the observed data. These comparisons help to add weight to some of the inferred estimates and explain some of the discrepancies between them. An evaluation of these biases and deviations suggests possible sources of error in each of the models. For the Danube catchment, systematic errors in the dynamics cause the prominent summer drying problem for three of the RCMs, while for the fourth RCM this is related to deficiencies in the land surface parametrization. The AGCM does not show this drying problem. For the Baltic Sea catchment, all models similarly overestimate the precipitation throughout the year except during the summer. This model deficit is probably caused by the internal model parametrizations, such as the large-scale condensation and the convection schemes.
Abstract:
The estimation of the long-term wind resource at a prospective site based on a relatively short on-site measurement campaign is an indispensable task in the development of a commercial wind farm. The typical industry approach is based on the measure-correlate-predict (MCP) method where a relational model between the site wind velocity data and the data obtained from a suitable reference site is built from concurrent records. In a subsequent step, a long-term prediction for the prospective site is obtained from a combination of the relational model and the historic reference data. In the present paper, a systematic study is presented where three new MCP models, together with two published reference models (a simple linear regression and the variance ratio method), have been evaluated based on concurrent synthetic wind speed time series for two sites, simulating the prospective and the reference site. The synthetic method has the advantage of generating time series with the desired statistical properties, including Weibull scale and shape factors, required to evaluate the five methods under all plausible conditions. In this work, first a systematic discussion of the statistical fundamentals behind MCP methods is provided and three new models, one based on a nonlinear regression and two (termed kernel methods) derived from the use of conditional probability density functions, are proposed. All models are evaluated by using five metrics under a wide range of values of the correlation coefficient, the Weibull scale, and the Weibull shape factor. Only one of the models, a kernel method based on bivariate Weibull probability functions, is capable of accurately predicting all performance metrics studied.
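For readers unfamiliar with MCP, the sketch below applies the two published reference models named in the abstract, a simple linear regression and the variance ratio method, to synthetic correlated Weibull wind-speed series. The way the synthetic series are generated (correlated Gaussians mapped to Weibull margins) and all numerical settings are illustrative assumptions, not the paper's generator or test matrix.

```python
# Two reference MCP approaches on synthetic correlated Weibull wind speeds:
# (1) simple linear regression and (2) the variance ratio method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, rho = 20000, 0.8                              # sample size, target correlation

# correlated standard normals -> uniforms -> Weibull margins for both sites
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = stats.norm.cdf(z)
ref  = stats.weibull_min.ppf(u[:, 0], c=2.0, scale=8.0)   # reference site
site = stats.weibull_min.ppf(u[:, 1], c=1.9, scale=6.5)   # prospective site

# split into a short "concurrent" period and a long "historic" period
n_con = 2000
ref_c, site_c, ref_h, site_h = ref[:n_con], site[:n_con], ref[n_con:], site[n_con:]

# (1) simple linear regression:  site ~ a + b * ref
b, a = np.polyfit(ref_c, site_c, 1)
pred_lr = a + b * ref_h

# (2) variance ratio method: match mean and standard deviation rather than
# minimising squared error, which preserves the predicted variance
mu_r, mu_s = ref_c.mean(), site_c.mean()
vr = site_c.std() / ref_c.std()
pred_vr = mu_s + vr * (ref_h - mu_r)

for name, pred in [("linear regression", pred_lr), ("variance ratio", pred_vr)]:
    print(f"{name:18s} mean bias {pred.mean() - site_h.mean():+6.3f} "
          f"std bias {pred.std() - site_h.std():+6.3f}")
```

The variance ratio step illustrates why it is a common MCP baseline: a least-squares regression shrinks the predicted variance by the correlation coefficient, while matching the standard deviation preserves the spread that matters for Weibull shape estimation.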
Abstract:
We evaluated the accuracy of six watershed models of nitrogen export in streams (kg km−2 yr−1) developed for use in large watersheds and representing various empirical and quasi-empirical approaches described in the literature. These models differ in their methods of calibration and have varying levels of spatial resolution and process complexity, which potentially affect the accuracy (bias and precision) of the model predictions of nitrogen export and source contributions to export. Using stream monitoring data and detailed estimates of the natural and cultural sources of nitrogen for 16 watersheds in the northeastern United States (drainage areas of 475 to 70,000 km2), we assessed the accuracy of the model predictions of total nitrogen and nitrate-nitrogen export. The model validation included the use of an error modeling technique to identify biases caused by model deficiencies in quantifying nitrogen sources and biogeochemical processes affecting the transport of nitrogen in watersheds. Most models predicted stream nitrogen export to within 50% of the measured export in a majority of the watersheds. Prediction errors were negatively correlated with cultivated land area, indicating that the watershed models tended to overpredict export in less agricultural and more forested watersheds and underpredict in more agricultural basins. The magnitude of these biases differed appreciably among the models. Those models having more detailed descriptions of nitrogen sources, land and water attenuation of nitrogen, and water flow paths were found to have considerably lower bias and higher precision in their predictions of nitrogen export.
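The evaluation statistics described here are simple to compute; the sketch below shows the pattern on synthetic placeholder data: per-watershed relative prediction errors, the share of watersheds predicted within 50% of the measured export, and the correlation of the errors with cultivated land fraction. None of the numbers correspond to the study's 16 watersheds.

```python
# Toy evaluation of a watershed nitrogen-export model: relative errors, the
# fraction of watersheds within 50% of measurements, and the error/land-use
# correlation.  All data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(7)
n_sheds = 16
cultivated = rng.uniform(0.05, 0.6, n_sheds)           # fraction of cultivated land
measured = 200 + 2500 * cultivated * rng.lognormal(0, 0.2, n_sheds)   # kg km-2 yr-1

# mimic a model that overpredicts in forested basins and underpredicts in
# agricultural ones (the bias pattern reported in the abstract)
predicted = measured * (1.3 - 0.8 * cultivated) * rng.lognormal(0, 0.15, n_sheds)

rel_err = (predicted - measured) / measured
within_50 = np.mean(np.abs(rel_err) < 0.5)
corr = np.corrcoef(rel_err, cultivated)[0, 1]

print(f"watersheds within 50% of measured export: {within_50:.0%}")
print(f"correlation of prediction error with cultivated fraction: {corr:+.2f}")
```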
Abstract:
The ground-based Atmospheric Radiation Measurement Program (ARM) and NASA Aerosol Robotic Network (AERONET) routinely monitor clouds using zenith radiances at visible and near-infrared wavelengths. Using the transmittance calculated from such measurements, we have developed a new retrieval method for cloud effective droplet size and conducted extensive tests for non-precipitating liquid water clouds. The underlying principle is to combine a liquid-water-absorbing wavelength (i.e., 1640 nm) with a non-water-absorbing wavelength for acquiring information on cloud droplet size and optical depth. For simulated stratocumulus clouds with liquid water path less than 300 g m−2 and horizontal resolution of 201 m, the retrieval method underestimates the mean effective radius by 0.8 μm, with a root-mean-squared error of 1.7 μm and a relative deviation of 13%. For actual observations with a liquid water path less than 450 g m−2 at the ARM Oklahoma site during 2007–2008, our 1.5-min-averaged retrievals are generally larger by around 1 μm than those from combined ground-based cloud radar and microwave radiometer at a 5-min temporal resolution. We also compared our retrievals to those from combined shortwave flux and microwave observations for relatively homogeneous clouds, showing that the bias between these two retrieval sets is negligible, but the error of 2.6 μm and the relative deviation of 22% are larger than those found in our simulation case. Finally, the transmittance-based cloud effective droplet radii agree to better than 11% with satellite observations and have a negative bias of 1 μm. Overall, the retrieval method provides reasonable cloud effective radius estimates, which can enhance the cloud products of both ARM and AERONET.
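A toy version of the retrieval principle, pairing a non-absorbing visible channel (sensitive mainly to optical depth) with the absorbing 1640 nm channel (also sensitive to droplet size) and inverting the pair through a lookup table, is sketched below. The two "transmittance" functions are crude analytic stand-ins, not a radiative-transfer model, and the grids and noise level are arbitrary.

```python
# Toy two-channel lookup-table retrieval of cloud optical depth and effective
# radius from zenith transmittances.  The forward functions are illustrative
# stand-ins, not a real radiative-transfer model.
import numpy as np

def t_vis(tau):
    """Toy non-absorbing zenith transmittance: decreases with optical depth."""
    return 1.0 / (1.0 + 0.75 * tau)

def t_swir(tau, re):
    """Toy 1640 nm transmittance: extra absorption that grows with droplet size."""
    return t_vis(tau) * np.exp(-0.012 * re * np.sqrt(tau))

# build a lookup table over plausible optical depths and effective radii
taus = np.linspace(2, 80, 200)
res  = np.linspace(2, 25, 120)
TAU, RE = np.meshgrid(taus, res, indexing="ij")
table = np.stack([t_vis(TAU), t_swir(TAU, RE)], axis=-1)

def retrieve(obs_vis, obs_swir):
    """Nearest-neighbour inversion of the two-channel lookup table."""
    cost = (table[..., 0] - obs_vis) ** 2 + (table[..., 1] - obs_swir) ** 2
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    return taus[i], res[j]

# simulate one "observation" with small noise and retrieve it
true_tau, true_re = 30.0, 9.0
rng = np.random.default_rng(3)
obs = np.array([t_vis(true_tau), t_swir(true_tau, true_re)])
obs *= 1 + 0.005 * rng.normal(size=2)
tau_hat, re_hat = retrieve(*obs)
print(f"true  tau={true_tau:5.1f}  re={true_re:4.1f} um")
print(f"found tau={tau_hat:5.1f}  re={re_hat:4.1f} um")
```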
Abstract:
Proactive motion in hand tracking and in finger bending, in which the body motion occurs prior to the reference signal, was reported by preceding researchers when the target signals were shown to the subjects at relatively high speeds or high frequencies. These phenomena indicate that the human sensory-motor system tends to choose an anticipatory mode rather than a reactive mode when the target motion is relatively fast. The present research was undertaken to study what kind of mode appears in the sensory-motor system when two persons were asked to track the hand position of their partner at various mean tracking frequencies. The experimental results showed that a transition from a mutual error-correction mode to a synchronization mode occurred in the same region of tracking frequency as the transition from a reactive error-correction mode to a proactive anticipatory mode in the mechanical target tracking experiments. The present research indicated that synchronization of body motion occurred only when both subjects of the pair operated in a proactive anticipatory mode. We also presented mathematical models to explain the behavior of the error-correction mode and the synchronization mode.
Abstract:
Current methods and techniques used in designing organisational performance measurement systems do not consider the multiple aspects of business processes or the semantics of data generated during the lifecycle of a product. In this paper, we propose an organisational performance measurement systems design model that is based on the semantics of an organisation, its business processes and the product lifecycle. Organisational performance measurement is examined from both academic and practice disciplines. The multi-discipline approach is used as a research tool to explore the weaknesses of current models that are used to design organisational performance measurement systems. This helped identify the gaps in research and practice concerning the issues and challenges in designing information systems for measuring the performance of an organisation. The knowledge sources investigated include ongoing and completed research project reports; scientific and management literature; and practitioners’ magazines.
Abstract:
Simulations of ozone loss rates using a three-dimensional chemical transport model and a box model during recent Antarctic and Arctic winters are compared with experimental loss rates. The study focused on the Antarctic winter 2003, during which the first Antarctic Match campaign was organized, and on the Arctic winters 1999/2000 and 2002/2003. The maximum ozone loss rates retrieved by the Match technique for the winters and levels studied reached 6 ppbv/sunlit hour, and both types of simulations could generally reproduce the observations at the 2-sigma error bar level. In some cases, for example for the Arctic winter 2002/2003 at the 475 K level, an excellent agreement within the 1-sigma standard deviation level was obtained. An overestimation was also found with the box model simulation at some isentropic levels for the Antarctic winter and the Arctic winter 1999/2000, indicating an overestimation of chlorine activation in the model. Loss rates in the Antarctic show signs of saturation in September, which have to be considered in the comparison. Sensitivity tests were performed with the box model in order to assess the impact of the kinetic parameters of the ClO-Cl2O2 catalytic cycle and of the total bromine content on the ozone loss rate. These tests resulted in a maximum change in ozone loss rates of 1.2 ppbv/sunlit hour, generally in high solar zenith angle conditions. In some cases, a better agreement was achieved with the fastest photolysis of Cl2O2 and an additional source of total inorganic bromine, but at the expense of an overestimation of the smaller ozone loss rates derived later in the winter.
Abstract:
Geomagnetic activity has long been known to exhibit approximately 27 day periodicity, resulting from solar wind structures repeating each solar rotation. Thus a very simple near-Earth solar wind forecast is 27 day persistence, wherein the near-Earth solar wind conditions today are assumed to be identical to those 27 days previously. Effective use of such a persistence model as a forecast tool, however, requires the performance and uncertainty to be fully characterized. The first half of this study determines which solar wind parameters can be reliably forecast by persistence and how the forecast skill varies with the solar cycle. The second half of the study shows how persistence can provide a useful benchmark for more sophisticated forecast schemes, namely physics-based numerical models. Point-by-point assessment methods, such as correlation and mean-square error, find persistence skill comparable to numerical models during solar minimum, despite the 27 day lead time of persistence forecasts, versus 2–5 days for numerical schemes. At solar maximum, however, the dynamic nature of the corona means 27 day persistence is no longer a good approximation and skill scores suggest persistence is out-performed by numerical models for almost all solar wind parameters. But point-by-point assessment techniques are not always a reliable indicator of usefulness as a forecast tool. An event-based assessment method, which focusses on key solar wind structures, finds persistence to be the most valuable forecast throughout the solar cycle. This reiterates the fact that the means of assessing the “best” forecast model must be specifically tailored to its intended use.
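The persistence benchmark itself is trivial to implement, which is part of its appeal; the sketch below builds a 27 day persistence forecast for a synthetic daily solar wind speed series and scores it point-by-point with correlation, mean-square error and an MSE skill score against climatology. The synthetic series is an illustrative stand-in for real near-Earth solar wind data.

```python
# 27 day persistence forecast of a synthetic daily "solar wind speed" series,
# assessed with point-by-point metrics.  The series is an illustrative toy.
import numpy as np

rng = np.random.default_rng(0)
n_days, period = 3000, 27

# synthetic series: a recurrent 27 day pattern plus slowly evolving noise,
# so persistence captures part but not all of the variability
t = np.arange(n_days)
recurrent = 400 + 80 * np.sin(2 * np.pi * t / period)
evolving = np.cumsum(rng.normal(scale=5, size=n_days))
speed = recurrent + evolving + rng.normal(scale=20, size=n_days)

# 27 day persistence forecast and its point-by-point skill
obs = speed[period:]
forecast = speed[:-period]
corr = np.corrcoef(obs, forecast)[0, 1]
mse = np.mean((obs - forecast) ** 2)
mse_clim = np.mean((obs - obs.mean()) ** 2)        # climatology benchmark
skill = 1.0 - mse / mse_clim                        # MSE skill score vs climatology

print(f"correlation: {corr:.2f}   MSE: {mse:.0f}   skill vs climatology: {skill:.2f}")
```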
Abstract:
The main uncertainty in anthropogenic forcing of the Earth’s climate stems from pollution aerosols, particularly their “indirect effect” whereby aerosols modify cloud properties. We develop a new methodology to derive a measurement-based estimate using almost exclusively information from an Earth radiation budget instrument (CERES) and a radiometer (MODIS). We derive a statistical relationship between planetary albedo and cloud properties, and, further, between the cloud properties and column aerosol concentration. Combining these relationships with a data set of satellite-derived anthropogenic aerosol fraction, we estimate an anthropogenic radiative forcing of −0.9 ± 0.4 W m−2 for the aerosol direct effect and of −0.2 ± 0.1 W m−2 for the cloud albedo effect. Because of uncertainties in both satellite data and the method, the uncertainty of this result is likely larger than the values given here, which correspond only to the quantifiable error estimates. The results nevertheless indicate that current global climate models may overestimate the cloud albedo effect.
Abstract:
We consider the impact of data revisions on the forecast performance of a SETAR regime-switching model of U.S. output growth. The impact of data uncertainty in real-time forecasting will affect a model's forecast performance via the effect on the model parameter estimates as well as via the forecast being conditioned on data measured with error. We find that benchmark revisions do affect the performance of the non-linear model of the growth rate, and that the performance relative to a linear comparator deteriorates in real time compared to a pseudo out-of-sample forecasting exercise.
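For reference, the sketch below shows the mechanics of a two-regime SETAR model of the kind the abstract uses for output growth: the autoregressive dynamics switch when a lagged value crosses a threshold, which is found by grid search. The simulated series, lag order, delay and grid are illustrative choices, not the paper's specification.

```python
# Two-regime SETAR(1) model: simulate, fit by grid-searching the threshold,
# and produce a one-step forecast.  All settings are illustrative.
import numpy as np

rng = np.random.default_rng(5)

# simulate a two-regime SETAR(1) process with threshold 0 on y_{t-1}
n = 600
y = np.zeros(n)
for t in range(1, n):
    if y[t - 1] <= 0.0:
        y[t] = -0.2 + 0.7 * y[t - 1] + rng.normal(scale=0.5)   # lower regime
    else:
        y[t] = 0.5 + 0.2 * y[t - 1] + rng.normal(scale=0.5)    # upper regime

def fit_setar(series, delay=1):
    """Grid-search the threshold; fit an AR(1) by least squares in each regime."""
    y_t, y_lag = series[delay:], series[:-delay]
    best = None
    for thr in np.quantile(y_lag, np.linspace(0.15, 0.85, 71)):
        params, sse = [], 0.0
        for mask in (y_lag <= thr, y_lag > thr):
            Xr = np.column_stack([np.ones(mask.sum()), y_lag[mask]])
            b = np.linalg.lstsq(Xr, y_t[mask], rcond=None)[0]
            params.append(b)
            sse += np.sum((y_t[mask] - Xr @ b) ** 2)
        if best is None or sse < best[0]:
            best = (sse, thr, params)
    return best[1], best[2]

thr_hat, (low, high) = fit_setar(y)
print(f"threshold  : {thr_hat:+.2f}")
print(f"lower regime (const, AR1): {low.round(2)}")
print(f"upper regime (const, AR1): {high.round(2)}")

# one-step forecast conditioned on the latest observation
regime = low if y[-1] <= thr_hat else high
print("one-step forecast:", round(regime[0] + regime[1] * y[-1], 2))
```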
Abstract:
We examine how the accuracy of real-time forecasts from models that include autoregressive terms can be improved by estimating the models on ‘lightly revised’ data instead of using data from the latest-available vintage. The benefits of estimating autoregressive models on lightly revised data are related to the nature of the data revision process and the underlying process for the true values. Empirically, we find improvements in root mean square forecasting error of 2–4% when forecasting output growth and inflation with univariate models, and of 8% with multivariate models. We show that multiple-vintage models, which explicitly model data revisions, require large estimation samples to deliver competitive forecasts. Copyright © 2012 John Wiley & Sons, Ltd.
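A stylised version of the comparison can be set out in a few lines: estimate an AR(1) either on "lightly revised" data (each observation taken from the vintage two periods after its first release) or on the latest-available vintage, and compare real-time root mean square forecast errors. The revision process simulated below (measurement error that halves with each vintage) and the choice of evaluation vintage are illustrative assumptions, not the paper's data or design.

```python
# Stylised real-time forecasting exercise: AR(1) estimated on lightly revised
# data versus the latest-available vintage.  The revision process and the
# evaluation vintage are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(11)
T, phi = 400, 0.6

# true growth process and a triangle of data vintages: vintage v's estimate of
# observation t is truth plus measurement error that shrinks as v - t grows
truth = np.zeros(T)
for t in range(1, T):
    truth[t] = 0.5 + phi * truth[t - 1] + rng.normal(scale=0.8)
noise0 = rng.normal(scale=0.6, size=T)

def vintage_value(t, v):
    """Value of observation t as published in vintage v (v >= t)."""
    return truth[t] + noise0[t] * 0.5 ** (v - t)

def ar1_forecast(series):
    """Fit y_t = c + phi*y_{t-1} by least squares and forecast one step ahead."""
    X = np.column_stack([np.ones(len(series) - 1), series[:-1]])
    c, p = np.linalg.lstsq(X, series[1:], rcond=None)[0]
    return c + p * series[-1]

errs = {"lightly revised": [], "latest vintage": []}
for v in range(100, T - 1):                        # v = forecast origin / current vintage
    obs = np.arange(v + 1)
    light  = np.array([vintage_value(t, min(t + 2, v)) for t in obs])
    latest = np.array([vintage_value(t, v) for t in obs])
    # evaluate against the value of period v+1 after two revisions
    # (one of several possible choices of target vintage)
    target = vintage_value(v + 1, v + 3)
    errs["lightly revised"].append(ar1_forecast(light) - target)
    errs["latest vintage"].append(ar1_forecast(latest) - target)

for name, e in errs.items():
    print(f"{name:16s} RMSFE: {np.sqrt(np.mean(np.square(e))):.3f}")
```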
Abstract:
In this paper, ensembles of forecasts (of up to six hours) are studied from a convection-permitting model with a representation of model error due to unresolved processes. The ensemble prediction system (EPS) used is an experimental convection-permitting version of the UK Met Office’s 24-member Global and Regional Ensemble Prediction System (MOGREPS). The method of representing model error variability, which perturbs parameters within the model’s parameterisation schemes, has been modified and we investigate the impact of applying this scheme in different ways. These are: a control ensemble where all ensemble members have the same parameter values; an ensemble where the parameters are different between members, but fixed in time; and ensembles where the parameters are updated randomly every 30 or 60 min. The choice of parameters and their ranges of variability have been determined from expert opinion and parameter sensitivity tests. A case of frontal rain over the southern UK has been chosen, which has a multi-banded rainfall structure. The consequences of including model error variability in the case studied are mixed and are summarised as follows. The multiple banding, evident in the radar, is not captured for any single member. However, the single band is positioned in some members where a secondary band is present in the radar. This is found for all ensembles studied. Adding model error variability with fixed parameters in time does increase the ensemble spread for near-surface variables like wind and temperature, but can actually decrease the spread of the rainfall. Perturbing the parameters periodically throughout the forecast does not further increase the spread and exhibits “jumpiness” in the spread at times when the parameters are perturbed. Adding model error variability gives an improvement in forecast skill after the first 2–3 h of the forecast for near-surface temperature and relative humidity. For precipitation skill scores, adding model error variability has the effect of improving the skill in the first 1–2 h of the forecast, but then of reducing the skill after that. Complementary experiments were performed where the only difference between members was the set of parameter values (i.e. no initial condition variability). The resulting spread was found to be significantly less than the spread from initial condition variability alone.
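The random-parameter idea can be illustrated with a toy chaotic model standing in for the convection-permitting system: ensemble members carry a perturbed parameter that is either fixed for the whole forecast or redrawn at regular intervals, on top of initial-condition perturbations, and the resulting ensemble spread is compared. Lorenz 96 with perturbed forcing, the parameter range, the redraw interval and the ensemble size below are all illustrative, not MOGREPS settings.

```python
# Toy random-parameter ensemble: members of Lorenz 96 run with a perturbed
# forcing that is either fixed per member or redrawn at intervals, on top of
# initial-condition perturbations; the mean ensemble spread is reported.
import numpy as np

N, F0 = 40, 8.0

def l96(x, F):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, F, dt=0.01):
    k1 = l96(x, F); k2 = l96(x + 0.5 * dt * k1, F)
    k3 = l96(x + 0.5 * dt * k2, F); k4 = l96(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def run_ensemble(redraw_every=None, n_mem=12, n_steps=600, seed=0):
    """Integrate an ensemble; redraw the perturbed forcing every
    `redraw_every` steps, or keep it fixed per member if None."""
    rng = np.random.default_rng(seed)
    base = F0 + rng.normal(size=N)                        # shared analysis state
    members = base + 0.1 * rng.normal(size=(n_mem, N))    # IC perturbations
    F = F0 + rng.uniform(-1.0, 1.0, size=n_mem)           # parameter per member
    for step in range(n_steps):
        if redraw_every and step % redraw_every == 0:
            F = F0 + rng.uniform(-1.0, 1.0, size=n_mem)
        members = np.array([rk4_step(m, f) for m, f in zip(members, F)])
    return members.std(axis=0).mean()                     # mean ensemble spread

print("spread, fixed parameters   :", round(run_ensemble(redraw_every=None), 2))
print("spread, redrawn parameters :", round(run_ensemble(redraw_every=50), 2))
```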