56 results for Serial correlation
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
Theoretical models suggest that decisions about diet, weight and health status are endogenous within a utility maximization framework. In this article, we model these behavioural relationships in a fixed-effect panel setting using a simultaneous equation system, with a view to determining whether economic variables can explain the trends in calorie consumption, obesity and health in Organization for Economic Cooperation and Development (OECD) countries and the large differences among the countries. The empirical model shows that progress in medical treatment and health expenditure mitigates mortality from diet-related diseases, despite rising obesity rates. While the model accounts for endogeneity and serial correlation, results are affected by data limitations.
Abstract:
This paper derives exact discrete time representations for data generated by a continuous time autoregressive moving average (ARMA) system with mixed stock and flow data. The representations for systems comprised entirely of stocks or of flows are also given. In each case the discrete time representations are shown to be of ARMA form, the orders depending on those of the continuous time system. Three examples and applications are also provided, two of which concern the stationary ARMA(2, 1) model with stock variables (with applications to sunspot data and a short-term interest rate) and one concerning the nonstationary ARMA(2, 1) model with a flow variable (with an application to U.S. nondurable consumers’ expenditure). In all three examples the presence of an MA(1) component in the continuous time system has a dramatic impact on eradicating unaccounted-for serial correlation that is present in the discrete time version of the ARMA(2, 0) specification, even though the form of the discrete time model is ARMA(2, 1) for both models.
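As an illustrative sketch (coefficients and sample size are arbitrary assumptions, not taken from the paper), the effect described above — unaccounted-for serial correlation left in the residuals when the MA(1) component is ignored and a pure AR(2) is fitted — can be reproduced in Python:

```python
import numpy as np

def lag1_autocorr(r):
    """Sample lag-1 autocorrelation of a residual series."""
    r = r - r.mean()
    return float(np.dot(r[1:], r[:-1]) / np.dot(r, r))

rng = np.random.default_rng(0)

# Simulate a stationary ARMA(2,1) process (illustrative coefficients)
n, a1, a2, b1 = 5000, 1.2, -0.5, 0.6
e = rng.standard_normal(n)
x = np.zeros(n)
for t in range(2, n):
    x[t] = a1 * x[t-1] + a2 * x[t-2] + e[t] + b1 * e[t-1]

# Fit a mis-specified AR(2) = ARMA(2,0) by ordinary least squares
X = np.column_stack([x[1:-1], x[:-2]])
y = x[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef

# The residuals retain serial correlation that the omitted MA(1) term
# would have absorbed
print(round(lag1_autocorr(resid), 3))
```

The residual lag-1 autocorrelation is visibly nonzero here, which is the discrete-time symptom the abstract describes: the correct exact discrete representation is ARMA(2,1), and dropping the MA part leaves correlated residuals.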
Abstract:
We introduce an algorithm (called REDFITmc2) for spectrum estimation in the presence of timescale errors. It is based on the Lomb-Scargle periodogram for unevenly spaced time series, in combination with Welch's Overlapped Segment Averaging procedure, bootstrap bias correction and persistence estimation. The timescale errors are modelled parametrically and included in the simulations for determining (1) the upper levels of the spectrum of the red-noise AR(1) alternative and (2) the uncertainty of the frequency of a spectral peak. Application of REDFITmc2 to ice core and stalagmite records of palaeoclimate allowed a more realistic evaluation of spectral peaks than when this source of uncertainty is ignored. The results support qualitatively the intuition that stronger effects on the spectrum estimate (decreased detectability and increased frequency uncertainty) occur at higher frequencies. The surplus information brought by algorithm REDFITmc2 is that those effects are quantified. Regarding timescale construction, not only the fixpoints, dating errors and the functional form of the age-depth model play a role: the joint distribution of all time points (serial correlation, stratigraphic order) also determines spectrum estimation.
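A minimal numpy sketch of the classical Lomb-Scargle periodogram at the core of REDFIT-type methods may help; it omits the WOSA segmenting, bias correction, persistence estimation and timescale-error simulation of the paper, and the test signal is an assumed synthetic sinusoid, not a palaeoclimate record:

```python
import numpy as np

def lomb_scargle(t, x, freqs):
    """Classical Lomb-Scargle periodogram for an unevenly spaced series."""
    x = x - x.mean()
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        # Time offset tau makes the estimate invariant to time shifts
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power[i] = 0.5 * ((x @ c) ** 2 / (c @ c) + (x @ s) ** 2 / (s @ s))
    return power

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, 200))   # uneven sampling times
x = np.sin(2 * np.pi * 0.1 * t) + 0.3 * rng.standard_normal(200)
freqs = np.linspace(0.01, 0.5, 500)
print(freqs[np.argmax(lomb_scargle(t, x, freqs))])  # peak near 0.1
```

The key property, and the reason it underlies spectrum estimation for ice core and stalagmite records, is that it needs no interpolation onto an even grid.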
Abstract:
The clustering in time (seriality) of extratropical cyclones is responsible for large cumulative insured losses in western Europe, though surprisingly little scientific attention has been given to this important property. This study investigates and quantifies the seriality of extratropical cyclones in the Northern Hemisphere using a point-process approach. A possible mechanism for serial clustering is the time-varying effect of the large-scale flow on individual cyclone tracks. Another mechanism is the generation by one parent cyclone of one or more offspring through secondary cyclogenesis. A long cyclone-track database was constructed for extended October to March winters from 1950 to 2003 using 6-h analyses of 850-mb relative vorticity derived from the NCEP-NCAR reanalysis. A dispersion statistic based on the variance-to-mean ratio of monthly cyclone counts was used as a measure of clustering. It reveals extensive regions of statistically significant clustering in the European exit region of the North Atlantic storm track and over the central North Pacific. Monthly cyclone counts were regressed on time-varying teleconnection indices with a log-linear Poisson model. Five independent teleconnection patterns were found to be significant factors over Europe: the North Atlantic Oscillation (NAO), the east Atlantic pattern, the Scandinavian pattern, the east Atlantic/western Russian pattern, and the polar Eurasian pattern. The NAO alone is not sufficient for explaining the variability of cyclone counts in the North Atlantic region and western Europe. Rate dependence on time-varying teleconnection indices accounts for the variability in monthly cyclone counts, and a cluster process did not need to be invoked.
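The dispersion statistic above — the variance-to-mean ratio of monthly counts, which equals 1 for a Poisson process and exceeds 1 under clustering — can be sketched in Python. The data here are assumed synthetic counts, not the cyclone-track database; a gamma-varying Poisson rate stands in for the time-varying large-scale flow:

```python
import numpy as np

def dispersion(counts):
    """Variance-to-mean ratio of event counts: 1 under a homogeneous
    Poisson process, > 1 indicates clustering (overdispersion)."""
    counts = np.asarray(counts, dtype=float)
    return float(counts.var(ddof=1) / counts.mean())

rng = np.random.default_rng(2)

# Unclustered benchmark: constant-rate Poisson monthly counts
poisson_counts = rng.poisson(5.0, 1000)

# Clustered counts: the Poisson rate itself varies from month to month,
# a simple stand-in for rate modulation by the large-scale flow
clustered = rng.poisson(rng.gamma(2.0, 2.5, 1000))

print(round(dispersion(poisson_counts), 2), round(dispersion(clustered), 2))
```

The second mechanism in the abstract (a log-linear Poisson regression of counts on teleconnection indices) amounts to modelling exactly this rate variation, which is why no explicit cluster process was needed once the indices were included.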
Abstract:
The authors propose a bit-serial pipeline used to perform the genetic operators in a hardware genetic algorithm. The bit-serial nature of the dataflow allows the operators to be pipelined, resulting in an architecture which is area-efficient, easily scaled and independent of the lengths of the chromosomes. An FPGA implementation of the device achieves a throughput of >25 million genes per second.
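As a software analogy only (not the paper's FPGA design), the bit-serial idea — each operator touches a single bit per clock cycle, so the datapath never depends on chromosome length — can be sketched in Python; the crossover point and mutation rate are hypothetical parameters:

```python
import random

def bit_serial_crossover_mutate(parent_a, parent_b, cut, p_mut, rng):
    """Stream two parent chromosomes one bit per 'cycle', applying
    one-point crossover and mutation to the bits in flight."""
    child_a, child_b = [], []
    for i, (a, b) in enumerate(zip(parent_a, parent_b)):
        if i >= cut:                  # crossover: swap the two bit streams
            a, b = b, a
        if rng.random() < p_mut:      # mutation acts on the current bit only
            a ^= 1
        if rng.random() < p_mut:
            b ^= 1
        child_a.append(a)
        child_b.append(b)
    return child_a, child_b

rng = random.Random(3)
pa, pb = [1] * 8, [0] * 8
ca, cb = bit_serial_crossover_mutate(pa, pb, cut=3, p_mut=0.0, rng=rng)
print(ca, cb)   # [1, 1, 1, 0, 0, 0, 0, 0] [0, 0, 0, 1, 1, 1, 1, 1]
```

Because every stage consumes and produces one bit per step, stages chain into a pipeline whose area is fixed regardless of chromosome length, which is the scalability property the abstract highlights.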
Abstract:
We present the results of a study of solar wind velocity and magnetic field correlation lengths over the last 35 years. The correlation length of the magnetic field magnitude λ_|B| increases on average by a factor of two at solar maxima compared to solar minima. The correlation lengths of the components of the magnetic field λ_{B_XYZ} and of the velocity λ_{V_YZ} do not show this change and have similar values, indicating a continual turbulent correlation length of around 1.4×10⁶ km. A linear relation between λ_|B|, VB², and Kp suggests that the former is related to the total magnetic energy in the solar wind and to an estimate of the average size of geoeffective structures, which is, in turn, proportional to VB². By looking at the distribution of daily correlation lengths we show that the solar minimum values of λ_|B| correspond to the turbulent outer scale. A tail of larger λ_|B| values is present at solar maximum, causing the increase in mean value.
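As a hedged aside, a correlation length of the kind discussed above is commonly estimated as the e-folding scale of the autocorrelation function (the paper's exact estimator may differ). A minimal numpy sketch on a synthetic AR(1) series, rather than solar wind data:

```python
import numpy as np

def efolding_length(x, dt=1.0):
    """Correlation length as the first lag at which the sample
    autocorrelation function drops below 1/e (a common convention)."""
    x = x - x.mean()
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / np.dot(x, x)
    below = np.nonzero(acf < 1.0 / np.e)[0]
    return dt * float(below[0]) if below.size else dt * float(n)

rng = np.random.default_rng(4)
phi, n = 0.95, 20000
x = np.zeros(n)
for t in range(1, n):           # AR(1) with a known e-folding scale
    x[t] = phi * x[t - 1] + rng.standard_normal()

print(efolding_length(x))       # theory: -1/ln(0.95) ≈ 19.5 lags
```

Multiplying the e-folding time by the mean flow speed converts such a temporal scale into the spatial correlation lengths (of order 10⁶ km) quoted in the abstract.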
Abstract:
The relationship between the magnetic field intensity and speed of solar wind events is examined using ∼3 years of data from the ACE spacecraft. No preselection of coronal mass ejections (CMEs) or magnetic clouds is carried out. The correlation between the field intensity and maximum speed is shown to increase significantly when |B| > 18 nT for 3 hours or more. Of the 24 events satisfying this criterion, 50% are magnetic clouds, the remaining half having no ordered field structure. A weaker correlation also exists between southward magnetic field and speed. Sixteen of the events are associated with halo CMEs leaving the Sun 2 to 4 days prior to the leading edge of the events arriving at ACE. Events selected by speed thresholds show no significant correlation, suggesting different relations between field intensity and speed for fast solar wind streams and ICMEs.
Abstract:
The paper concerns the design and analysis of serial dilution assays to estimate the infectivity of a sample of tissue when it is assumed that the sample contains a finite number of indivisible infectious units such that a subsample will be infectious if it contains one or more of these units. The aim of the study is to estimate the number of infectious units in the original sample. The standard approach to the analysis of data from such a study is based on the assumption of independence of aliquots both at the same dilution level and at different dilution levels, so that the numbers of infectious units in the aliquots follow independent Poisson distributions. An alternative approach is based on calculation of the expected value of the total number of samples tested that are not infectious. We derive the likelihood for the data on the basis of the discrete number of infectious units, enabling calculation of the maximum likelihood estimate and likelihood-based confidence intervals. We use the exact probabilities that are obtained to compare the maximum likelihood estimate with those given by the other methods in terms of bias and standard error and to compare the coverage of the confidence intervals. We show that the methods have very similar properties and conclude that for practical use the method that is based on the Poisson assumption is to be recommended, since it can be implemented by using standard statistical software. Finally we consider the design of serial dilution assays, concluding that it is important that neither the dilution factor nor the number of samples that remain untested should be too large.
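The Poisson-based analysis the paper recommends can be sketched as follows. The dilution scheme, sample sizes and true rate are illustrative assumptions, and the maximisation uses a simple grid search rather than standard statistical software:

```python
import math
import random

def poisson_dilution_mle(outcomes, dilutions, grid=None):
    """MLE under the Poisson assumption: an aliquot at relative dilution d
    contains Poisson(lam * d) infectious units, and is infectious iff that
    count is >= 1.  Maximises the binary-outcome log-likelihood on a grid."""
    if grid is None:
        grid = [0.1 * k for k in range(1, 10001)]   # candidate lam values

    def loglik(lam):
        ll = 0.0
        for y, d in zip(outcomes, dilutions):
            if y:                                   # infectious aliquot
                ll += math.log(1.0 - math.exp(-lam * d))
            else:                                   # sterile aliquot
                ll += -lam * d
        return ll

    return max(grid, key=loglik)

rng = random.Random(5)
true_lam = 120.0                 # infectious units in the original sample
# Twenty aliquots at each tenfold dilution level 10^-1 .. 10^-4
dilutions = [10 ** -k for k in range(1, 5) for _ in range(20)]
outcomes = [1 if rng.random() < 1.0 - math.exp(-true_lam * d) else 0
            for d in dilutions]

print(poisson_dilution_mle(outcomes, dilutions))
```

The design caution in the abstract shows up directly here: with too large a dilution factor, the informative levels (those where some but not all aliquots are infectious) are skipped and the likelihood becomes flat over a wide range of rates.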