980 results for Autoregressive moving average (ARMA)


Relevância:

100.00%

Publicador:

Resumo:

The new farm bill enacted by Congress in June 2008 includes a new revenue-based safety net, the Average Crop Revenue Election (ACRE) Program, which will be available to producers beginning with the 2009 crop year. This analysis of the mechanics of ACRE, and of the yields and prices relevant to it, can help producers assess whether ACRE will be a good choice for this crop year and beyond.

Resumo:

Background: In the last 20 years there has been a worldwide increase in the incidence of allergic respiratory diseases, and exposure to air pollution has been discussed as one of the factors associated with this increase. The objective of this study was to investigate the effects of air pollution on peak expiratory flow (PEF) and FEV1 in children with and without allergic sensitization.

Methods: Ninety-six children were followed from April to July 2004 with spirometry measurements. They were tested for allergic sensitization (IgE, skin prick test, eosinophilia) and asked about allergic symptoms. Air pollution, temperature, and relative humidity data were available.

Results: Decrements in PEF were observed with the previous 24-hr average exposure to air pollution, as well as with 3- to 10-day average exposures, and were associated mainly with PM10, NO2, and O3 in all three categories of allergic sensitization. Even though allergy-sensitized children tended to present larger decrements in the PEF measurements, these were not statistically different from those of the non-sensitized children. Decrements in FEV1 were observed mainly with the previous 24-hr average exposure and the 3-day moving average.

Conclusions: Decrements in PEF associated with air pollution were observed in children independent of their allergic sensitization status. Daily exposure to air pollution may drive a chronic inflammatory process that impairs lung growth and, later, lung function in adulthood. Am. J. Ind. Med. 55:1087-1098, 2012. (c) 2012 Wiley Periodicals, Inc.
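As a rough illustration of the exposure metrics used in studies like this one, the following sketch computes a trailing k-day moving average of a pollutant series. The PM10 values are invented for illustration; the study's actual data are not reproduced here.

```python
def trailing_average(series, window):
    """Trailing moving average over `window` days; None until enough history."""
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)  # not enough preceding days yet
        else:
            out.append(sum(series[i + 1 - window:i + 1]) / window)
    return out

# Invented daily PM10 concentrations (ug/m3)
pm10 = [40.0, 55.0, 70.0, 52.0, 48.0]
print(trailing_average(pm10, 3))  # -> [None, None, 55.0, 59.0, 56.666...]
```

Each day's value averages that day and the two before it, mirroring how a "3-day moving average exposure" pairs with a same-day lung-function measurement.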

Resumo:

In this article, we present a new control chart for monitoring the covariance matrix in a bivariate process. In this method, the n observations of the two variables are treated as if they came from a single variable (a sample of 2n observations), and a sample variance is calculated. This statistic is used to build a new control chart, the VMIX chart. The performance of the new chart was compared with its main competitors: the generalized sample variance chart, the likelihood ratio test, Nagao's test, the probability integral transformation v(t), and the recently proposed VMAX chart. Among these, only the VMAX chart was competitive with the VMIX chart. For shifts in both variances the VMIX chart outperformed VMAX; however, VMAX showed better performance for large shifts (higher than 10%) in a single variance.
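The pooled statistic described above can be sketched as follows. This is a minimal illustration of treating n paired observations of the two variables as one sample of 2n values and computing its sample variance; the full VMIX procedure (control limits, run rules) is not reproduced here.

```python
def pooled_sample_variance(x, y):
    """Treat n paired observations of two variables as a single
    sample of 2n observations and return its sample variance."""
    z = list(x) + list(y)          # pool the two variables into 2n values
    m = sum(z) / len(z)            # pooled mean
    return sum((v - m) ** 2 for v in z) / (len(z) - 1)

# Invented paired measurements of the two quality variables
x = [1.0, 2.0, 3.0]
y = [2.0, 4.0, 6.0]
print(pooled_sample_variance(x, y))  # -> 3.2
```

This single scalar per sample is what gets plotted against control limits; a shift in either variance inflates it.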

Resumo:

Quality concepts are increasingly essential for the survival of the agricultural enterprise, since improving field operations is necessary to obtain economically, environmentally, and socially viable results. One of the dimensions of quality is conformance, that is, the guarantee that what was planned is executed exactly so as to meet customer requirements for a given product or service. The objectives of this work were to evaluate the longitudinal seed distribution of a rotating internal ring seeder and to propose the Exponentially Weighted Moving Average (EWMA, in the original Portuguese MMEP) statistical methodology as an alternative for seeding quality control when the data are not normally distributed. The results showed that the EWMA is adequate for evaluating the quality of the longitudinal seed distribution, since it agreed with the descriptive statistics, which qualifies it for the evaluation of non-normal distributions.
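The Exponentially Weighted Moving Average (EWMA, the MMEP referred to above) can be sketched as the recursion z_t = lambda * x_t + (1 - lambda) * z_{t-1}. The smoothing constant and the seed-spacing values below are illustrative assumptions, not data from the study.

```python
def ewma(series, lam=0.2):
    """EWMA recursion: z_t = lam * x_t + (1 - lam) * z_{t-1},
    seeded with the first observation."""
    z = series[0]
    out = [z]
    for x in series[1:]:
        z = lam * x + (1 - lam) * z  # exponentially discount older values
        out.append(z)
    return out

# Invented seed spacings (cm) measured along the row
spacings_cm = [10.0, 12.0, 9.0, 11.0]
print(ewma(spacings_cm, lam=0.5))  # -> [10.0, 11.0, 10.0, 10.5]
```

Because each point averages over the whole history with geometric weights, the EWMA chart is less sensitive to the shape of the underlying distribution than a raw individuals chart, which is why it suits the non-normal spacing data described above.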

Resumo:

This paper defines and compares several models for describing excess influenza and pneumonia mortality in Houston. First, the methodology used by the Center for Disease Control is examined, along with several variations of it. All of the models examined highlight the difficulty of omitting epidemic weeks.

In an attempt to find a better method of describing expected and epidemic mortality, time series methods are examined. Grouping the data in four-week periods, truncating the series to adjust for epidemic periods, and seasonally adjusting the series y(t) (equation omitted in the original abstract) is the best method examined. The resulting series w(t) is stationary, and a moving average model, MA(1), gives a good fit for forecasting influenza and pneumonia mortality in Houston.

Influenza morbidity, other causes of death, sex, race, age, climate variables, environmental factors, and school absenteeism are all examined in terms of their relationship to influenza and pneumonia mortality. Both influenza morbidity and ischemic heart disease mortality show a very strong relationship that remains when seasonal trends are removed from the data. However, when the three series are modeled jointly, it is clear that the simple MA(1) model of the truncated, seasonally adjusted four-week data gives a better forecast.
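An MA(1) process of the kind fitted above, y_t = mu + e_t + theta * e_{t-1} with white-noise shocks e_t, can be simulated with a short sketch. The parameter values are illustrative, not estimates from the Houston data.

```python
import random

def simulate_ma1(mu, theta, n, seed=1):
    """Simulate y_t = mu + e_t + theta * e_{t-1} with standard normal shocks."""
    rng = random.Random(seed)     # seeded for reproducibility
    e_prev = rng.gauss(0, 1)      # shock entering the first observation
    ys = []
    for _ in range(n):
        e = rng.gauss(0, 1)
        ys.append(mu + e + theta * e_prev)
        e_prev = e
    return ys

# Illustrative parameters: mean four-week mortality level 50, theta = 0.4
series = simulate_ma1(mu=50.0, theta=0.4, n=8)
print(len(series))  # -> 8
```

The defining feature of MA(1), autocorrelation that cuts off after lag 1, is what makes it a natural fit for a seasonally adjusted, truncated series like w(t).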

Resumo:

A high-resolution record of foraminiferal fragmentation (a dissolution indicator) for the last 250 k.y. (isotopic Stages 1 to 7) is identified in the upper 61.9 m of Ocean Drilling Program (ODP) Hole 828A, west Vanuatu. This record is comparable in detail to the atmospheric CO2 record and the δ18O stack. Phase shifts between preservation spikes and maximum ice volumes (δ18O of Globigerinoides sacculifer) are analogous to those on Ontong Java Plateau. Accelerator mass spectrometry (AMS) 14C dating of a sample taken at the base of dissolution cycle B1, together with the position of the last glacial maximum, indicates a lag of ~8 k.y. in the Vanuatu region for the last glacial termination. When dissolution spikes are compared with minimum ice volumes there is no phase shift for the last two glacial terminations. The difference between Vanuatu and Ontong Java Plateau may be explained by local CO2 sinks and the interplay between intermediate and deep water masses. Terrigenous input increasingly affected sediment of Hole 828A on the North d'Entrecasteaux Ridge (NDR) as it approached Espiritu Santo Island. Mud and silt suspended in mid-water flows became important after 125 ka, while turbidites bypassed the New Hebrides Trench only towards the last glacial maximum (LGM). Terrigenous supply seems to have affected the lysocline profile, which changed from an "open ocean" to a "near continent" type, thus favoring dissolution. Fragmentation of planktonic foraminifers is a more sensitive indicator of lysocline variations than foraminiferal susceptibility to dissolution, the foraminiferal dissolution index, the abundance of benthic foraminifers, or CaCO3 content. The modern foraminiferal lysocline for the neighboring area (between 10°S and 30°S, and 160°E and 180°E) is found at 3.1 km below sea level, whereas off west Vanuatu it is shallower.
The past lysocline was deeper than 3086 m during intervals of dissolution minima, and ranged from ~2550 to 3000 m during intervals of dissolution maxima. The high sedimentation rates (on the order of 10 to 50 cm/k.y.) found in Hole 828A offer great potential for future high-resolution studies, either in this hole or at other western localities along the NDR. Areas of high sedimentation near continental regions have often been disregarded for paleoceanographic and/or paleoclimatic studies. Nonetheless, conditions analogous to those found in Hole 828A are expected to occur in many trench areas around the world, where mid-water flows may have preserved as yet undiscovered fine high-resolution sedimentary records.

Resumo:

We present tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray-scale curves from images at pixel resolution. The PEAK tool uses the gray-scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a smoothed curve. The zero-crossing algorithm counts every positive and negative halfway-passage of the curve through a wide moving average, separating the record into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the curve into its frequency components before counting positive and negative passages. We applied the new methods successfully to tree rings, to well-dated and already manually counted marine varves from Saanich Inlet, and to marine laminae from the Antarctic continental margin. In combination with AMS 14C dating, we found convincing evidence that laminations in Weddell Sea sites represent varves, deposited continuously over several millennia during the last glacial maximum. The new tools offer several advantages over previous methods. The counting procedures are based on a moving average generated from gray-scale curves instead of manual counting. Hence, results are highly objective and rely on reproducible mathematical criteria. Also, the PEAK tool measures the thickness of each year or season. Since all information required is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
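The zero-crossing idea, counting halfway passages of the gray-scale curve through a wide moving average, can be sketched as follows. The window length and the toy curve are illustrative; the PEAK tool's actual implementation may differ in detail.

```python
def count_halfway_passages(gray, window):
    """Count sign changes of (curve - trailing moving average),
    i.e. passages between bright and dark intervals."""
    crossings = 0
    prev_sign = 0
    for i in range(window - 1, len(gray)):
        ma = sum(gray[i + 1 - window:i + 1]) / window  # wide moving average
        diff = gray[i] - ma
        sign = (diff > 0) - (diff < 0)
        if sign != 0 and prev_sign != 0 and sign != prev_sign:
            crossings += 1        # curve crossed its moving average
        if sign != 0:
            prev_sign = sign
    return crossings

# Toy gray-scale curve alternating bright/dark every sample
curve = [1, 3, 1, 3, 1, 3, 1, 3]
print(count_halfway_passages(curve, 2))  # -> 6
```

Each positive and negative passage marks one bright or dark interval, so a couplet (one year) corresponds to two crossings, which is the "seasonal resolution" mentioned above.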

Resumo:

The Drake Passage (DP) is the major geographic constriction for the Antarctic Circumpolar Current (ACC) and exerts a strong control on the exchange of physical, chemical, and biological properties between the Atlantic, Pacific, and Indian Ocean basins. Resolving changes in the flow of circumpolar water masses through this gateway is, therefore, crucial for advancing our understanding of the Southern Ocean's role in global ocean and climate variability. Here, we reconstruct changes in DP throughflow dynamics over the past 65,000 y based on grain size and geochemical properties of sediment records from the southernmost continental margin of South America. Combined with published sediment records from the Scotia Sea, we argue for a considerable total reduction of DP transport and reveal an up to ~40% decrease in flow speed along the northernmost ACC pathway entering the DP during glacial times. Superimposed on this long-term decrease are high-amplitude, millennial-scale variations, which parallel Southern Ocean and Antarctic temperature patterns. The glacial intervals of strong weakening of the ACC entering the DP imply an enhanced export of northern ACC surface and intermediate waters into the South Pacific Gyre and reduced Pacific-Atlantic exchange through the DP ("cold water route"). We conclude that changes in DP throughflow play a critical role in the global meridional overturning circulation and interbasin exchange in the Southern Ocean, most likely regulated by variations in the westerly wind field and changes in Antarctic sea ice extent.

Resumo:

ABSTRACT: The purpose of this thesis was to investigate the offensive performance of elite handball teams when handball is considered as a complex non-linear dynamical system. A time-dependent dynamic approach was adopted to assess teams' performance during the game. The overall sample comprised the 240 games played in the 2011-2012 season of the men's Spanish Professional Handball League (ASOBAL League). In the subsequent analyses, only close games (final goal difference ≤ 5; n = 142) were considered. Match status, game location, quality of opposition, and game period were incorporated into the analysis as situational variables. Three studies composed the core of the thesis. In the first study, we analyzed the game-scoring coordination between the time series representing the scoring processes of the two opposing teams throughout the game. Autocorrelation, cross-correlation, double moving average, and the Hilbert transform were used for the analysis. The scoring processes of the teams presented high consistency across all the games, as well as strong in-phase modes of coordination in all the game contexts. The only differences were found when controlling for the game period. The coordination in the scoring processes of the teams was significantly lower in the 1st and 2nd periods (0-10 min and 10-20 min), showing clearly increasing coordination as the game progressed. This suggests that the first 20 minutes are those that break the game-scoring. In the second study, we analyzed the temporal effects (immediate, short-term, and medium-term) of team timeouts on teams' scoring performance. Multiple linear regression models were used for the analysis. The results showed increments of 0.59, 1.40, and 1.85 goals over the periods comprising the first, third, and fifth post-timeout ball possessions for the teams that requested the timeout.
Conversely, significant negative effects on goals scored were found for the opposing teams, with decrements of 0.59, 1.43, and 2.04 goals over the same periods, respectively. The influence of situational variables on scoring performance was registered only in certain game periods. Finally, in the third study, we analyzed the temporal effects of player exclusions (two-minute suspensions) on teams' scoring performance, both for the teams suffering the exclusion (numerical inferiority) and for their opponents (numerical superiority). Multiple linear regression models were again used. The results showed significant negative effects on the number of goals scored by the teams with one player fewer, with decrements of 0.25, 0.40, 0.61, 0.62, and 0.57 goals for the periods comprising the first, second, third, fourth, and fifth minutes before and after the exclusion. For the opposing teams, the results showed positive effects, with increments of the same magnitude in the same game periods. This trend was not affected by match status, game location, quality of opposition, or game period. The scoring increments were smaller than might be expected from a 2-minute numerical playing superiority. Psychological theories such as "choking" under pressure, in situations where good performance is expected, could help explain this finding. The final chapters of the thesis enumerate the main conclusions and underline the main practical applications that arise from the three studies. Lastly, limitations and future research directions are described.
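The coordination analysis above rests on measures such as the cross-correlation between the two teams' scoring series. A minimal zero-lag Pearson-correlation sketch, with invented cumulative scores rather than data from the ASOBAL sample, looks like this:

```python
def cross_correlation(a, b):
    """Zero-lag Pearson correlation between two equal-length scoring series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# Invented cumulative goals for two teams over six intervals
home = [0, 1, 2, 2, 3, 4]
away = [0, 1, 1, 2, 3, 3]
print(round(cross_correlation(home, away), 3))  # -> 0.934
```

A value near 1 corresponds to the strong in-phase coordination reported above; repeating the calculation at nonzero lags would reveal whether one team's scoring leads the other's.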

Resumo:

This paper addresses the determination of the realized thermal niche and the effects of climate change on the range distribution of two brown trout populations inhabiting two streams in the Duero River basin (Iberian Peninsula), at the edge of the natural distribution area of this species. To reach these goals, new methodological developments were applied to improve the reliability of forecasts. Water temperature data were collected using 11 thermographs located along the altitudinal gradient, and they were used to model the relationship between stream temperature and air temperature along the river continuum. Trout abundance was studied using electrofishing at 37 sites to determine the current distribution. The RCP4.5 and RCP8.5 climate change scenarios adopted by the Intergovernmental Panel on Climate Change for its Fifth Assessment Report were used for the simulations and local downscaling in this study. We found more reliable results using the daily mean stream temperature than the daily maximum temperature, together with their respective seven-day moving averages, to determine the distribution thresholds. The observed limits of the summer distribution of brown trout were linked to thresholds between 18.1°C and 18.7°C. These temperatures characterize a realized thermal niche narrower than the physiological thermal range. In the most unfavourable climate change scenario, the thermal habitat loss of brown trout increases to 38% (Cega stream) and 11% (Pirón stream) in the upstream direction by the end of the century; however, at the Cega stream, the range reduction could reach 56% due to the opening of a "warm window" in the piedmont reach.
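The threshold logic described above, seven-day moving averages of daily mean stream temperature compared against a distribution limit near 18.7°C, can be sketched as follows. The temperature series is invented for illustration; the study's downscaled scenario temperatures are not reproduced here.

```python
def seven_day_means(daily_mean_temp):
    """Trailing 7-day moving averages of daily mean stream temperature."""
    return [sum(daily_mean_temp[i - 6:i + 1]) / 7
            for i in range(6, len(daily_mean_temp))]

def exceeds_threshold(daily_mean_temp, threshold_c=18.7):
    """True if any 7-day mean exceeds the distribution-limit threshold,
    flagging a reach as thermally unsuitable in summer."""
    return any(m > threshold_c for m in seven_day_means(daily_mean_temp))

# Invented summer series (deg C) for a piedmont reach
temps = [18.0, 18.2, 18.5, 18.8, 19.0, 19.2, 19.4, 19.5]
print(exceeds_threshold(temps))  # -> True
```

Applying this test reach by reach along the altitudinal gradient is, in spirit, how a "warm window" shows up: a downstream stretch whose 7-day means cross the threshold while reaches above and below it stay suitable.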

Resumo:

We argue that, even given an infinitely long data sequence, it is impossible (with any test statistic) to distinguish perfectly between linear and nonlinear processes (including slightly noisy chaotic processes). Our approach is to consider the set of moving-average (linear) processes and study its closure under a suitable metric. We give a precise characterization of this closure, which is unexpectedly large: it contains nonergodic processes that are Poisson sums of independent and identically distributed copies of a stationary process. Proofs of these results will appear elsewhere.

Resumo:

The analysis of heart rate variability (HRV) uses time series containing the intervals between successive heartbeats in order to assess autonomic regulation of the cardiovascular system. These series are obtained from the electrocardiogram (ECG) signal, whose analysis can be affected by different types of artifacts, leading to incorrect interpretations of the HRV signals. The classic approach to dealing with these artifacts is to use correction methods, some of them based on interpolation, substitution, or statistical techniques. However, few studies show the accuracy and performance of these correction methods on real HRV signals. This study aims to determine the performance of some linear and non-linear correction methods on HRV signals with induced artifacts, by quantification of their linear and nonlinear HRV parameters. As part of the methodology, ECG signals of rats measured by telemetry were used to generate real heart rate variability signals free of errors. In these series, missing points (beats) were simulated in different quantities in order to emulate a real experimental situation as accurately as possible. To compare recovery efficiency, deletion (DEL), linear interpolation (LI), cubic spline interpolation (CI), moving average window (MAW), and nonlinear predictive interpolation (NPI) were used as correction methods for the series with induced artifacts. The accuracy of each correction method was assessed through the mean value of the series (AVNN), the standard deviation (SDNN), the root mean square of successive differences between heartbeats (RMSSD), Lomb's periodogram (LSP), detrended fluctuation analysis (DFA), multiscale entropy (MSE), and symbolic dynamics (SD), measured on each HRV signal with and without artifacts.

The results show that at low levels of missing points the performance of all correction techniques is very similar, with very close values for each HRV parameter. However, at higher levels of losses only the NPI method yields HRV parameters with low error values and few significant differences compared to the values calculated for the same signals without missing points.
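Of the correction methods compared, linear interpolation (LI) is the simplest to sketch: each missing beat interval is filled in along the straight line between its nearest known neighbors. The RR values below are invented, and the study's NPI method is considerably more elaborate than this.

```python
def interpolate_missing(rr):
    """Linearly interpolate missing (None) values in an RR-interval series.
    Assumes the series starts and ends with known values."""
    out = list(rr)
    for i, v in enumerate(out):
        if v is None:
            lo = i - 1                     # nearest known value to the left
            while out[lo] is None:
                lo -= 1
            hi = i + 1                     # nearest known value to the right
            while out[hi] is None:
                hi += 1
            frac = (i - lo) / (hi - lo)    # position between the neighbors
            out[i] = out[lo] + frac * (out[hi] - out[lo])
    return out

# Invented RR intervals (ms) with simulated missing beats
rr = [800.0, None, 820.0, None, None, 850.0]
print(interpolate_missing(rr))
```

The sketch fills gaps left to right, so a run of missing beats is interpolated using any already-filled neighbor; this smoothing is exactly why LI tends to underestimate short-term variability measures such as RMSSD at high loss levels.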

Resumo:

2000 Mathematics Subject Classification: 60G70, 60F12, 60G10.

Resumo:

2010 Mathematics Subject Classification: 62F10, 62F12.

Resumo:

Physiological signals, which are controlled by the autonomic nervous system (ANS), can be used to detect the affective state of computer users and therefore find applications in medicine and engineering. The pupil diameter (PD) seems to provide a strong indication of the affective state, as found by previous research, but it has not yet been investigated fully.

In this study, new approaches based on monitoring and processing the PD signal for off-line and on-line affective assessment ("relaxation" vs. "stress") are proposed. Wavelet denoising and Kalman filtering methods are first used to remove abrupt changes in the raw PD signal. Then three features (PDmean, PDmax, and PDWalsh) are extracted from the preprocessed PD signal for the affective state classification. In order to select more relevant and reliable physiological data for further analysis, two types of data selection methods are applied, based on the paired t-test and on subject self-evaluation, respectively. In addition, five different kinds of classifiers are implemented on the selected data, achieving average accuracies up to 86.43% and 87.20%, respectively. Finally, the receiver operating characteristic (ROC) curve is utilized to investigate the discriminating potential of each individual feature by evaluating the area under the ROC curve, which reaches values above 0.90.

For the on-line affective assessment, a hard threshold is implemented first in order to remove eye blinks from the PD signal, and then a moving average window is utilized to obtain a representative value, PDr, for every one-second interval of PD. The on-line affective assessment algorithm comprises three main steps: preparation, feature-based decision voting, and affective determination.

The final results show accuracies of 72.30% and 73.55% for the data subsets chosen using the two data selection methods (paired t-test and subject self-evaluation, respectively). To further analyze the efficiency of affective recognition through the PD signal, the galvanic skin response (GSR) was also monitored and processed. The highest affective assessment classification rate obtained from GSR processing was only 63.57% (based on the off-line processing algorithm). The overall results confirm that the PD signal should be considered one of the most powerful physiological signals to involve in future automated real-time affective recognition systems, especially for detecting the "relaxation" vs. "stress" states.
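The on-line preprocessing step described above, a hard threshold to drop blink artifacts followed by a per-second representative value PDr, can be sketched as follows. The threshold value, sampling rate, and samples are illustrative assumptions, not parameters from the study.

```python
def representative_pd(samples, rate_hz, blink_threshold=2.0):
    """Drop blink artifacts (PD below a hard threshold), then average the
    surviving samples within each one-second window to get PDr."""
    pdr = []
    for start in range(0, len(samples), rate_hz):
        window = [s for s in samples[start:start + rate_hz]
                  if s >= blink_threshold]       # hard threshold: remove blinks
        pdr.append(sum(window) / len(window) if window else None)
    return pdr

# Invented PD samples (mm) at 4 Hz; the near-zero values mimic blinks
samples = [3.0, 3.2, 0.1, 3.4, 3.6, 3.8, 0.0, 4.0]
print(representative_pd(samples, rate_hz=4))
```

Each PDr value then feeds the three-step on-line algorithm (preparation, feature-based decision voting, affective determination), so one classification decision is available per second of recording.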