905 results for Errors in variables models
Inference for nonparametric high-frequency estimators with an application to time variation in betas
Abstract:
We consider the problem of conducting inference on nonparametric high-frequency estimators without knowing their asymptotic variances. We prove that a multivariate subsampling method achieves this goal under general conditions that were not previously available in the literature. We suggest a procedure for a data-driven choice of the bandwidth parameters. Our simulation study indicates that the subsampling method is much more robust than the plug-in method based on the asymptotic expression for the variance. Importantly, the subsampling method reliably estimates the variability of the Two Scale estimator even when its parameters are chosen to minimize the finite sample Mean Squared Error; in contrast, the plug-in estimator substantially underestimates the sampling uncertainty. By construction, the subsampling method delivers estimates of the variance-covariance matrices that are always positive semi-definite. We use the subsampling method to study the dynamics of financial betas of six stocks on the NYSE. We document significant variation in betas within the year 2006, and find that tick data captures more variation in betas than data sampled at moderate frequencies such as every five or twenty minutes. To capture this variation we estimate a simple dynamic model for betas. The variance estimation is also important for the correction of the errors-in-variables bias in such models. We find that the bias corrections are substantial, and that betas are more persistent than the naive estimators would lead one to believe.
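As a rough illustration of the subsampling idea (not the paper's estimators), the sketch below computes a toy realized-beta statistic on a full intraday sample and on overlapping sub-blocks, then rescales the dispersion of the block estimates under an assumed sqrt(n) convergence rate to obtain a variance estimate; the block length and the simulated data are illustrative assumptions.

```python
# Sketch of subsampling-based variance estimation for a statistic computed
# from high-frequency data.  The realized-beta statistic, sqrt(n) rate and
# block length are illustrative assumptions, not the paper's estimators.
import numpy as np

def realized_beta(stock_ret, market_ret):
    """Toy beta: realized covariance over realized market variance."""
    return np.sum(stock_ret * market_ret) / np.sum(market_ret ** 2)

def subsample_variance(stock_ret, market_ret, block_len):
    """Estimate Var(beta_hat) from overlapping blocks of length block_len,
    assuming the estimator converges at rate sqrt(n)."""
    n = len(stock_ret)
    beta_full = realized_beta(stock_ret, market_ret)
    betas = np.array([
        realized_beta(stock_ret[i:i + block_len], market_ret[i:i + block_len])
        for i in range(n - block_len + 1)
    ])
    # Var(beta_hat_n) ~ (b / n) * mean((beta_b - beta_n)^2) for a sqrt(n) rate.
    return (block_len / n) * np.mean((betas - beta_full) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5000
    market = 0.001 * rng.standard_normal(n)                  # intraday market returns
    stock = 1.2 * market + 0.001 * rng.standard_normal(n)    # true beta = 1.2
    beta_hat = realized_beta(stock, market)
    se = np.sqrt(subsample_variance(stock, market, block_len=250))
    print(f"beta = {beta_hat:.3f}, subsampling s.e. = {se:.3f}")
```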
Abstract:
The results of an investigation on the limits of the random errors contained in the basic data of Physical Oceanography and their propagation through the computational procedures are presented in this thesis. It also suggests a method which increases the reliability of the derived results. The thesis is presented in eight chapters, including the introductory chapter. Chapter 2 discusses the general theory of errors relevant to the propagation of errors in Physical Oceanographic computations. The error components contained in the independent oceanographic variables, namely temperature, salinity and depth, are delineated and quantified in chapter 3. Chapter 4 discusses and derives the magnitude of errors in the computation of the dependent oceanographic variables (density in situ, sigma-t, specific volume and specific volume anomaly) due to the propagation of errors contained in the independent oceanographic variables. The errors propagated into the computed values of the derived quantities, namely dynamic depth and relative currents, are estimated and presented in chapter 5. Chapter 6 reviews the existing methods for the identification of the level of no motion and suggests a method for the identification of a reliable zero reference level. Chapter 7 discusses the available methods for the extension of the zero reference level into shallow regions of the oceans and suggests a new, more reliable method. A procedure of graphical smoothing of dynamic topographies between the error limits to provide more reliable results is also suggested in this chapter. Chapter 8 deals with the computation of the geostrophic current from these smoothed values of dynamic heights, with reference to the selected zero reference level. The summary and conclusions are also presented in this chapter.
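The general error-propagation machinery referred to in chapter 2 can be sketched, under simplifying assumptions, as first-order propagation of independent random errors through a computation; the simplified density formula and the error magnitudes below are illustrative placeholders, not the values derived in the thesis.

```python
# First-order propagation of independent random errors:
# sigma_f^2 ~ sum_i (df/dx_i)^2 * sigma_i^2.  The simplified density
# formula and the error magnitudes are illustrative only.
import numpy as np

def density(temperature_c, salinity_psu, depth_m):
    """Crude linearised seawater density (kg/m^3); placeholder formula."""
    return (1027.0 - 0.15 * (temperature_c - 10.0)
            + 0.78 * (salinity_psu - 35.0) + 0.0045 * depth_m)

def propagate(func, values, sigmas, eps=1e-5):
    """Standard error of func(*values) given independent errors `sigmas`,
    using central-difference derivatives."""
    values = np.asarray(values, dtype=float)
    var = 0.0
    for i, sigma in enumerate(sigmas):
        step = np.zeros_like(values)
        step[i] = eps
        deriv = (func(*(values + step)) - func(*(values - step))) / (2 * eps)
        var += (deriv * sigma) ** 2
    return np.sqrt(var)

if __name__ == "__main__":
    t, s, z = 12.0, 35.2, 500.0              # temperature (C), salinity, depth (m)
    sig_t, sig_s, sig_z = 0.02, 0.02, 5.0    # assumed measurement errors
    rho = density(t, s, z)
    sigma_rho = propagate(density, (t, s, z), (sig_t, sig_s, sig_z))
    print(f"density = {rho:.3f} +/- {sigma_rho:.3f} kg/m^3")
```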
Abstract:
Several previous studies have attempted to assess the sublimation depth-scales of ice particles from clouds into clear air. Upon examining the sublimation depth-scales in the Met Office Unified Model (MetUM), it was found that the MetUM has evaporation depth-scales 2–3 times larger than radar observations. Similar results can be seen in the European Centre for Medium-Range Weather Forecasts (ECMWF), Regional Atmospheric Climate Model (RACMO) and Météo-France models. In this study, we use radar simulation (converting model variables into radar observations) and one-dimensional explicit microphysics numerical modelling to test and diagnose the cause of the deep sublimation depth-scales in the forecast model. The MetUM data and parametrization scheme are used to predict terminal velocity, which can be compared with the observed Doppler velocity. This can then be used to test the hypotheses as to why the sublimation depth-scale is too large within the MetUM: turbulence could lead to dry-air entrainment and higher evaporation rates; the particle density may be wrong; the particle capacitance may be too high, leading to incorrect evaporation rates; or the humidity within the sublimating layer may be incorrectly represented. We show that the most likely cause of deep sublimation zones is an incorrect representation of model humidity in the layer. This is tested further by using a one-dimensional explicit microphysics model, which tests the sensitivity of ice sublimation to key atmospheric variables and is capable of including sonde and radar measurements to simulate real cases. Results suggest that the MetUM grid resolution at ice cloud altitudes is not sufficient to maintain the sharp drop in humidity that is observed in the sublimation zone.
Abstract:
Models often underestimate blocking in the Atlantic and Pacific basins, and this can lead to errors in both weather and climate predictions. Horizontal resolution is often cited as the main culprit for blocking errors, due to poorly resolved small-scale variability whose upscale effects help to maintain blocks. Although these processes are important for blocking, the authors show that much of the blocking error diagnosed using common methods of analysis and current climate models is directly attributable to the climatological bias of the model. This explains a large proportion of the diagnosed blocking error in models used in the recent Intergovernmental Panel on Climate Change report. Furthermore, greatly improved statistics are obtained by diagnosing blocking using climate model data corrected to account for mean model biases. To the extent that mean biases may be corrected in low-resolution models, this suggests that such models may be able to generate greatly improved levels of atmospheric blocking.
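A schematic sketch of the bias-correction step described above: the mean model bias (model climatology minus observed climatology) is removed from the model field before a blocking index is applied. The one-dimensional Tibaldi-Molteni-style index, the chosen latitudes and the toy numbers are assumptions made purely for illustration, not the diagnostics of the paper.

```python
# Schematic sketch of diagnosing blocking after removing the mean model bias.
# The Tibaldi-Molteni-style index, latitudes and toy values are assumptions.
import numpy as np

PHI_N, PHI_0, PHI_S = 80.0, 60.0, 40.0  # latitudes used by the index (deg N)

def blocked(z500):
    """1D blocking test at one longitude from Z500 (m) at the three latitudes,
    ordered (phi_S, phi_0, phi_N)."""
    z_s, z_0, z_n = z500
    ghgs = (z_0 - z_s) / (PHI_0 - PHI_S)   # southern gradient (m per deg lat)
    ghgn = (z_n - z_0) / (PHI_N - PHI_0)   # northern gradient
    return (ghgs > 0.0) and (ghgn < -10.0)

def bias_corrected(model_field, model_clim, reanalysis_clim):
    """Remove the mean model bias before applying the blocking diagnostic."""
    return model_field - (model_clim - reanalysis_clim)

if __name__ == "__main__":
    # Toy daily Z500 profile (phi_S, phi_0, phi_N) and climatologies (m).
    model_day = np.array([5720.0, 5600.0, 5420.0])
    model_clim = np.array([5750.0, 5480.0, 5280.0])
    reanalysis_clim = np.array([5650.0, 5560.0, 5320.0])
    print("raw model blocked?      ", blocked(model_day))
    print("bias-corrected blocked? ",
          blocked(bias_corrected(model_day, model_clim, reanalysis_clim)))
```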
Abstract:
For data assimilation in numerical weather prediction, the initial forecast-error covariance matrix Pf is required. For variational assimilation it is particularly important to prescribe an accurate initial matrix Pf, since Pf is either static (in the 3D-Var case) or constant at the beginning of each assimilation window (in the 4D-Var case). At large scales the atmospheric flow is well approximated by hydrostatic balance and this balance is strongly enforced in the initial matrix Pf used in operational variational assimilation systems such as that of the Met Office. However, at convective scales this balance does not necessarily hold any more. Here we examine the extent to which hydrostatic balance is valid in the vertical forecast-error covariances for high-resolution models in order to determine whether there is a need to relax this balance constraint in convective-scale data assimilation. We use the Met Office Global and Regional Ensemble Prediction System (MOGREPS) and a 1.5 km resolution version of the Unified Model for a case study characterized by the presence of convective activity. An ensemble of high-resolution forecasts valid up to three hours after the onset of convection is produced. We show that at 1.5 km resolution hydrostatic balance does not hold for forecast errors in regions of convection. This indicates that in the presence of convection hydrostatic balance should not be enforced in the covariance matrix used for variational data assimilation at this scale. The results show the need to investigate covariance models that may be better suited for convective-scale data assimilation. Finally, we give a measure of the balance present in the forecast perturbations as a function of the horizontal scale (from 3–90 km) using a set of diagnostics. Copyright © 2012 Royal Meteorological Society and British Crown Copyright, the Met Office
Abstract:
Nearly all chemistry–climate models (CCMs) have a systematic bias of a delayed springtime breakdown of the Southern Hemisphere (SH) stratospheric polar vortex, implying insufficient stratospheric wave drag. In this study the Canadian Middle Atmosphere Model (CMAM) and the CMAM Data Assimilation System (CMAM-DAS) are used to investigate the cause of this bias. Zonal wind analysis increments from CMAM-DAS reveal systematic negative values in the stratosphere near 60°S in winter and early spring. These are interpreted as indicating a bias in the model physics, namely, missing gravity wave drag (GWD). The negative analysis increments remain at a nearly constant height during winter and descend as the vortex weakens, much like orographic GWD. This region is also where current orographic GWD parameterizations have a gap in wave drag, which is suggested to be unrealistic because of missing effects in those parameterizations. These findings motivate a pair of free-running CMAM simulations to assess the impact of extra orographic GWD at 60°S. The control simulation exhibits the cold-pole bias and delayed vortex breakdown seen in the CCMs. In the simulation with extra GWD, the cold-pole bias is significantly reduced and the vortex breaks down earlier. Changes in resolved wave drag in the stratosphere also occur in response to the extra GWD, which reduce stratospheric SH polar-cap temperature biases in late spring and early summer. Reducing the dynamical biases, however, results in degraded Antarctic column ozone. This suggests that CCMs that obtain realistic column ozone in the presence of an overly strong and persistent vortex may be doing so through compensating errors.
Abstract:
The present study investigates the growth of error in baroclinic waves. It is found that stable or neutral waves are particularly sensitive to errors in the initial condition. Short stable waves are mainly sensitive to phase errors and the ultra-long waves to amplitude errors. Analysis simulation experiments have indicated that the amplitudes of the very long waves usually become too small in the free atmosphere, owing to the sparse and very irregular distribution of upper-air observations. This also applies to the four-dimensional data assimilation experiments, since the amplitudes of the very long waves are usually underpredicted. The numerical experiments reported here show that if the very long waves have these kinds of amplitude errors in the upper troposphere or lower stratosphere, the error is rapidly propagated (within a day or two) to the surface and to the lower troposphere.
Abstract:
SST errors in the tropical Atlantic are large and systematic in current coupled general-circulation models. We analyse the growth of these errors in the region of the south-eastern tropical Atlantic in initialised decadal hindcast integrations for three of the models participating in the Coupled Model Intercomparison Project 5. A variety of causes for the initial bias development are identified, but in all cases considered ocean-atmosphere coupling is found to be crucial for their maintenance. This coupling involves an oceanic "bridge" between the Equator and the Benguela-Angola coastal seas, which communicates sub-surface ocean anomalies and links SSTs in the south-eastern tropical Atlantic to the winds over the Equator. The resulting coupling between SSTs, winds and precipitation represents a positive feedback for warm SST errors in the south-eastern tropical Atlantic.
Abstract:
Considerable progress has taken place in numerical weather prediction over the last decade. It has been possible to extend predictive skill in the extra-tropics of the Northern Hemisphere during winter from less than five days to seven days. Similar improvements, albeit at a lower level, have taken place in the Southern Hemisphere. Another example of improvement in the forecasts is the prediction of intense synoptic phenomena such as cyclogenesis, which on the whole is quite successful with the most advanced operational models (Bengtsson, 1989; Gadd and Kruze, 1988). A careful examination shows that there is no single cause for the improvements in predictive skill; instead, they are due to several different factors encompassing the forecasting system as a whole (Bengtsson, 1985). In this paper we focus our attention on the role of data assimilation and the effect it may have on reducing the initial error and hence improving the forecast. The first part of the paper contains a theoretical discussion of error growth in simple data assimilation systems, following Leith (1983). In the second part we apply the results to actual forecast data from ECMWF. The potential for further forecast improvements within the framework of the present observing system in the two hemispheres will be discussed.
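In the spirit of the first, theoretical part of the paper, the following sketch iterates a simple scalar error-growth and assimilation cycle of the kind associated with Leith (1983): error variance grows logistically between analyses, and each analysis optimally combines the forecast with observations of fixed error variance. All coefficients are illustrative assumptions, not values from the paper.

```python
# Sketch of a scalar forecast-error growth / assimilation cycle.
# Doubling time, observation error and cycle length are assumed values.
import numpy as np

def grow(var, hours, doubling_time_h=36.0, saturation_var=1.0):
    """Logistic growth of error variance over `hours` (hourly Euler steps)."""
    rate = np.log(2.0) / doubling_time_h
    for _ in range(int(hours)):
        var += rate * var * (1.0 - var / saturation_var)
    return var

def analyse(forecast_var, obs_var):
    """Optimal (scalar) combination of forecast and observation error variances."""
    return forecast_var * obs_var / (forecast_var + obs_var)

if __name__ == "__main__":
    var = 0.25            # initial analysis error variance (fraction of saturation)
    obs_var = 0.15        # observation error variance
    for cycle in range(10):
        var_f = grow(var, hours=6.0)       # 6 h forecast between analyses
        var = analyse(var_f, obs_var)      # assimilation step
        print(f"cycle {cycle + 1:2d}: forecast var = {var_f:.3f}, analysis var = {var:.3f}")
```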
Abstract:
The aim of this article is to discuss the estimation of the systematic risk in capital asset pricing models with heavy-tailed error distributions used to explain asset returns. Diagnostic methods for assessing departures from the model assumptions, as well as the influence of observations on the parameter estimates, are also presented. It may be shown that outlying observations are downweighted in the maximum likelihood equations of linear models with heavy-tailed error distributions, such as the Student-t, power exponential, logistic II, and so on. This robustness aspect may also be extended to influential observations. An application in which the systematic risk estimate of Microsoft is compared under normal and heavy-tailed errors is presented for illustration.
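As a hedged illustration of how heavy-tailed errors downweight outliers when estimating systematic risk, the sketch below fits a market-model regression by maximum likelihood under Student-t errors and compares the resulting beta with ordinary least squares; the simulated data, the fixed degrees of freedom and the optimiser settings are assumptions for illustration only.

```python
# Sketch: market-model beta under Student-t errors by maximum likelihood,
# compared with OLS.  Data, degrees of freedom and optimiser are assumptions.
import numpy as np
from scipy import optimize, stats

def neg_loglik(params, r_asset, r_market, df=4.0):
    """Negative log-likelihood of r_asset = alpha + beta*r_market + t_df error."""
    alpha, beta, log_scale = params
    scale = np.exp(log_scale)
    resid = r_asset - alpha - beta * r_market
    return -np.sum(stats.t.logpdf(resid, df=df, scale=scale))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 500
    r_market = 0.01 * rng.standard_normal(n)
    r_asset = 0.001 + 1.1 * r_market + 0.01 * rng.standard_t(3, size=n)  # heavy tails
    r_asset[::50] += 0.15                                # a few gross outliers

    # OLS estimate of beta (sensitive to the outliers).
    beta_ols = np.polyfit(r_market, r_asset, 1)[0]

    # Student-t maximum likelihood (downweights the outliers).
    res = optimize.minimize(neg_loglik, x0=[0.0, 1.0, np.log(0.01)],
                            args=(r_asset, r_market), method="Nelder-Mead")
    beta_t = res.x[1]
    print(f"OLS beta = {beta_ols:.3f}, Student-t ML beta = {beta_t:.3f}")
```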
Abstract:
The search for better performance in structural systems has led to more refined models, involving the analysis of a growing number of details, which must be correctly formulated in order to define a representative model of the real system. Representative models demand a detailed description of the design and the search for new techniques of evaluation and analysis. Model updating is one of these techniques; it can be used to improve the predictive capabilities of computer-based models. This paper presents an FRF-based finite element model updating procedure whose updating variables are physical parameters of the model. It includes damping effects in the updating procedure, assuming both proportional and non-proportional damping mechanisms. The updating parameters are defined at the element level or over macro regions of the model, so the parameters are adjusted locally, facilitating the physical interpretation of the adjustment of the model. Different tests on simulated and experimental data are discussed with the aim of evaluating the characteristics and potential of the methodology.
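A schematic sketch of the FRF-based updating idea on a toy problem: physical parameters (element stiffnesses and a proportional damping coefficient) of a small lumped-parameter model are adjusted so that its predicted frequency response function matches a synthetic "measured" one. The 2-DOF system and the optimiser are assumptions for illustration, not the paper's finite element formulation.

```python
# Sketch of FRF-based updating of physical parameters on a 2-DOF toy model.
# The system, parameter set and optimiser are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

M = np.diag([1.0, 1.0])                     # known mass matrix (kg)
OMEGAS = np.linspace(1.0, 60.0, 200)        # excitation frequencies (rad/s)

def frf(params, omegas):
    """Receptance H_11(w) of a 2-DOF chain with stiffnesses k1, k2 and
    stiffness-proportional damping C = beta * K."""
    k1, k2, beta = params
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    C = beta * K
    h = []
    for w in omegas:
        Z = K - w**2 * M + 1j * w * C        # dynamic stiffness matrix
        h.append(np.linalg.inv(Z)[0, 0])
    return np.array(h)

def residual(params, h_measured, omegas):
    diff = frf(params, omegas) - h_measured
    return np.concatenate([diff.real, diff.imag])

if __name__ == "__main__":
    true = [1000.0, 800.0, 1e-3]
    h_meas = frf(true, OMEGAS)              # synthetic "measured" FRF
    start = [1300.0, 600.0, 5e-4]           # initial (erroneous) model
    fit = least_squares(residual, start, args=(h_meas, OMEGAS))
    print("updated parameters:", np.round(fit.x, 4))
```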
Abstract:
Systematic errors can have a significant effect on the GPS observables. In medium and long baselines the major systematic error sources are ionospheric and tropospheric refraction and GPS satellite orbit errors; in short baselines, multipath is more relevant. These errors degrade the accuracy of the positioning accomplished by GPS, which is a critical problem for high-precision GPS positioning applications. Recently, a method has been suggested to mitigate these errors: the semiparametric model and the penalised least squares technique. It uses a natural cubic spline to model the errors as functions which vary smoothly in time. The systematic error functions, ambiguities and station coordinates are estimated simultaneously. As a result, the ambiguities and the station coordinates are estimated with better reliability and accuracy than with the conventional least squares method.
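A simplified sketch of the penalised least-squares idea: a slowly varying systematic error is represented by a discretised smooth curve with a roughness penalty and is estimated jointly with the parameter of interest. The toy observation model below is an illustrative assumption, not the GPS double-difference model.

```python
# Sketch of penalised least squares with a smooth systematic error term.
# The toy model y_t = a*z_t + g(t) + noise is an assumption for illustration.
import numpy as np

def fit_semiparametric(y, z, lam):
    """Jointly estimate the scalar parameter a and the smooth error g(t) by
    minimising ||y - a*z - g||^2 + lam * ||D2 g||^2."""
    n = len(y)
    # Design matrix for the unknowns [a, g_1, ..., g_n].
    A = np.hstack([z[:, None], np.eye(n)])
    # Second-difference operator acting on g only (roughness penalty).
    D2 = np.diff(np.eye(n), n=2, axis=0)
    P = np.zeros((n + 1, n + 1))
    P[1:, 1:] = lam * D2.T @ D2
    theta = np.linalg.solve(A.T @ A + P, A.T @ y)
    return theta[0], theta[1:]                 # a_hat, g_hat

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n = 300
    t = np.linspace(0.0, 1.0, n)
    z = rng.standard_normal(n)                 # rapidly varying known regressor
    g_true = 0.02 * np.sin(2 * np.pi * t)      # smooth systematic error
    y = 0.5 * z + g_true + 0.005 * rng.standard_normal(n)
    a_hat, g_hat = fit_semiparametric(y, z, lam=1e3)
    print(f"a_hat = {a_hat:.4f} (true 0.5); "
          f"max |g_hat - g_true| = {np.max(np.abs(g_hat - g_true)):.4f}")
```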
Abstract:
The GPS observables are subject to several errors. Among them, the systematic ones have great impact, because they degrade the accuracy of the accomplished positioning. These errors are mainly related to GPS satellite orbits, multipath and atmospheric effects. Lately, a method has been suggested to mitigate these errors: the semiparametric model and the penalised least squares technique (PLS). In this method, the errors are modeled as functions varying smoothly in time. This amounts to changing the stochastic model, into which the error functions are incorporated, and the results obtained are similar to those achieved by changing the functional model. As a result, the ambiguities and the station coordinates are estimated with better reliability and accuracy than with the conventional least squares method (CLS). In general, the solution requires a shorter data interval, minimizing costs. The method's performance was analyzed in two experiments using data from single-frequency receivers. The first was carried out on a short baseline, where the main error was multipath. In the second experiment, a baseline of 102 km was used; in this case, the predominant errors were due to ionospheric and tropospheric refraction. In the first experiment, using 5 minutes of data collection, the largest coordinate discrepancies in relation to the ground truth reached 1.6 cm and 3.3 cm in the h coordinate for the PLS and the CLS, respectively. In the second, also using 5 minutes of data, the discrepancies were 27 cm in h for the PLS and 175 cm in h for the CLS. In these tests, it was also possible to verify a considerable improvement in the ambiguity resolution using the PLS in relation to the CLS, with a reduced data collection time interval. © Springer-Verlag Berlin Heidelberg 2007.
Abstract:
The fact that there is a complex and bidirectional communication between the immune and nervous systems has been well demonstrated. Lipopolysaccharide (LPS), a component of gram-negative bacteria, is widely used to systemically stimulate the immune system and generate profound physiological and behavioural changes, also known as sickness behaviour (e.g. anhedonia, lethargy, loss of appetite, anxiety, sleepiness). Different ethological tools have been used to analyse the behavioural modifications induced by LPS; however, many researchers analysed only individual tests, a single LPS dose or a unique ethological parameter, thus leading to disagreements regarding the data. In the present study, we investigated the effects of different doses of LPS (10, 50, 200 and 500 μg/kg, i.p.) in young male Wistar rats (weighing 180–200 g; 8–9 weeks old) on the ethological and spatiotemporal parameters of the elevated plus maze, light-dark box, elevated T maze and open-field tests and on the emission of ultrasound vocalizations. There was a dose-dependent increase in anxiety-like behaviours caused by LPS, forming an inverted-U curve that peaked at the 200 μg/kg dose. However, these anxiety-like behaviours were detected only by complementary ethological analysis (stretching, grooming, immobility responses and alarm calls), and these reactions seem to be a very sensitive tool in assessing the first signs of sickness behaviour. In summary, the present work clearly showed that there are resting and alertness reactions induced by opposite neuroimmune mechanisms (neuroimmune bias) that could lead to anxiety behaviours, suggesting that misinterpretation of the data can occur when only a few ethological variables or a single dose of LPS is analysed. Finally, it is hypothesized that this bias is an evolutionary tool that increases the animals' safety while the body recovers from a systemic infection.
Abstract:
The Assimilation in the Unstable Subspace (AUS) was introduced by Trevisan and Uboldi in 2004, and developed by Trevisan, Uboldi and Carrassi, to minimize the analysis and forecast errors by exploiting the flow-dependent instabilities of the forecast-analysis cycle system, which may be thought of as a system forced by observations. In the AUS scheme the assimilation is obtained by confining the analysis increment to the unstable subspace of the forecast-analysis cycle system, so that it has the same structure as the dominant instabilities of the system. The unstable subspace is estimated by Breeding on the Data Assimilation System (BDAS). AUS-BDAS has already been tested in realistic models and observational configurations, including a Quasi-Geostrophic model and a high-dimensional, primitive equation ocean model; the experiments include both fixed and "adaptive" observations. In these contexts, the AUS-BDAS approach greatly reduces the analysis error, with reasonable computational costs for data assimilation compared, for example, with a prohibitively expensive full Extended Kalman Filter. This is a follow-up study in which we revisit the AUS-BDAS approach in the more basic, highly nonlinear Lorenz 1963 convective model. We run observation system simulation experiments in a perfect model setting, and also with two types of model error: random and systematic. In the different configurations examined, and in a perfect model setting, AUS once again shows better efficiency than other advanced data assimilation schemes. In the present study, we develop an iterative scheme that leads to a significant improvement of the overall assimilation performance with respect also to standard AUS. In particular, it boosts the efficiency with which regime changes are tracked, at a low computational cost. Other data assimilation schemes need estimates of ad hoc parameters, which have to be tuned for the specific model at hand. In Numerical Weather Prediction models, the tuning of parameters, and in particular the estimation of the model error covariance matrix, may turn out to be quite difficult. Our proposed approach, instead, may be easier to implement in operational models.
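As a toy illustration of confining the analysis increment to an estimated unstable direction, the sketch below applies the idea to the Lorenz (1963) model: a bred perturbation is maintained alongside the forecast, and each analysis uses a background covariance restricted to that single direction. This is a schematic sketch of the concept, not the AUS-BDAS algorithm of the paper; all parameter values are assumptions.

```python
# Toy Lorenz-63 cycle with the analysis increment confined to one bred
# direction.  Cycle length, amplitudes and error variances are assumptions.
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt=0.01):
    k1 = lorenz63(state)
    k2 = lorenz63(state + 0.5 * dt * k1)
    k3 = lorenz63(state + 0.5 * dt * k2)
    k4 = lorenz63(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def run(steps, state):
    for _ in range(steps):
        state = rk4_step(state)
    return state

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    steps_per_cycle, bred_size, obs_var, sigma_b2 = 25, 0.1, 1.0, 4.0
    H = np.array([1.0, 0.0, 0.0])               # observe the x component only
    truth = np.array([1.0, 1.0, 1.0])
    analysis = truth + rng.standard_normal(3)   # erroneous initial analysis
    bred = analysis + bred_size * rng.standard_normal(3)
    for cycle in range(200):
        truth = run(steps_per_cycle, truth)
        forecast = run(steps_per_cycle, analysis)
        bred = run(steps_per_cycle, bred)
        # Bred vector: rescaled difference between companion and forecast runs.
        e = bred - forecast
        e /= np.linalg.norm(e)
        obs = truth[0] + np.sqrt(obs_var) * rng.standard_normal()
        # Analysis increment confined to e: background covariance sigma_b^2 * e e^T.
        he = H @ e
        gain = sigma_b2 * e * he / (sigma_b2 * he**2 + obs_var)
        analysis = forecast + gain * (obs - forecast[0])
        bred = analysis + bred_size * e         # re-seed the breeding cycle
    print("final analysis error:", np.linalg.norm(analysis - truth))
```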