954 results for Nonlinear Time Series Analysis
Abstract:
Prediction of stock market valuation is of common interest to all market participants. A theoretically sound market valuation can be obtained by discounting the future earnings of equities to the present. Competing valuation models seek variables that explain equity market valuation and that could be used to predict it. In this paper we test the contemporaneous relationship between stock prices, forward-looking earnings and long-term government bond yields. We test this so-called Fed model in a long- and short-term time series analysis, using the cointegration framework to examine the dynamics of the relationship. The data span four decades of varying market conditions in the United States between 1964 and 2007. The empirical results of our analysis do not support the Fed model. We show that long-term government bonds do not play a statistically significant role in this relationship. The effect of the forward earnings yield on stock market prices is significant, and we therefore suggest using standard valuation ratios when trying to predict the future paths of equity prices. Changes in long-term government bond yields also have no significant short-term impact on stock prices.
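The cointegration framework used above can be sketched in miniature with the Engle-Granger two-step procedure on synthetic data. Everything below — the series, the cointegrating slope of 0.5, and the noise level — is an invented illustration, not the paper's data or specification.

```python
# A minimal Engle-Granger two-step cointegration sketch on synthetic data
# (pure stdlib; all numbers are illustrative assumptions).
import random

def ols(y, x):
    """Least-squares intercept and slope of y = a + b*x."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def dickey_fuller_stat(e):
    """t-statistic of rho in  de_t = rho * e_{t-1} + u_t  (no constant)."""
    de = [e[t] - e[t - 1] for t in range(1, len(e))]
    lag = e[:-1]
    sxx = sum(v * v for v in lag)
    rho = sum(l * d for l, d in zip(lag, de)) / sxx
    resid = [d - rho * l for d, l in zip(lag, de)]
    s2 = sum(r * r for r in resid) / (len(resid) - 1)
    return rho / (s2 / sxx) ** 0.5

random.seed(0)
x = [0.0]                                   # x: a random walk (I(1))
for _ in range(499):
    x.append(x[-1] + random.gauss(0, 1))
y = [2 + 0.5 * xi + random.gauss(0, 0.3) for xi in x]   # cointegrated with x

a, b = ols(y, x)                            # step 1: levels regression
e = [yi - a - b * xi for xi, yi in zip(x, y)]
t_stat = dickey_fuller_stat(e)              # step 2: unit-root test on residuals
# Compare t_stat with Engle-Granger critical values (roughly -3.34 at the
# 5% level for two variables); a more negative value rejects "no cointegration".
```

A strongly negative residual test statistic, as produced here, is the signature of a cointegrated pair; for non-cointegrated series the residuals themselves would behave like a random walk.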
Abstract:
Electricity spot prices have always been a demanding data set for time series analysis, mostly because of the non-storability of electricity. This feature, which distinguishes electric power from other commodities, causes pronounced price spikes. Moreover, the last several years in the financial world suggest that 'spiky' behaviour of time series is no longer an exception but rather a regular phenomenon. The purpose of this paper is to seek patterns and relations within electricity price outliers and to verify how they affect the overall statistics of the data. The study uses techniques such as the classical Box-Jenkins approach, DFT series smoothing and GARCH models. The results obtained for two geographically distinct price series show that patterns in the occurrence of outliers are not straightforward. Additionally, there seems to be no rule that would predict the appearance of a spike from volatility, while the reverse effect is quite prominent. It is concluded that spikes cannot be predicted from the price series alone; geographical and meteorological variables probably need to be included in the modeling.
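As a toy companion to the outlier analysis described above, the sketch below flags spikes as points far above a rolling robust level. The window length, threshold, and synthetic series are assumptions for illustration, not the paper's method.

```python
# Hypothetical spike flagging on a synthetic price series: a point is
# flagged when it exceeds the rolling median of the previous `window`
# observations by k robust standard deviations (MAD-based).
import random
import statistics

def flag_spikes(prices, window=48, k=3.0):
    spikes = []
    for t in range(window, len(prices)):
        past = prices[t - window:t]
        med = statistics.median(past)
        mad = statistics.median(abs(p - med) for p in past) or 1e-9
        robust_sd = 1.4826 * mad          # MAD-to-sigma factor for normal data
        if prices[t] > med + k * robust_sd:
            spikes.append(t)
    return spikes

random.seed(1)
prices = [30 + random.gauss(0, 2) for _ in range(200)]   # calm synthetic market
prices[120] += 40                                        # one injected spike
spikes = flag_spikes(prices)
```

The median/MAD pair is used instead of mean/standard deviation so that the spike itself does not inflate the threshold once it enters the window.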
Abstract:
The ecological fallacy (EF) is a common problem regional scientists have to deal with when using aggregated data in their analyses. Although a large number of studies consider different aspects of this problem, little attention has been paid to the potential negative effects of the EF in a time series context. Using Spanish regional unemployment data, this paper shows that EF effects are observed not only at the cross-section level but also in a time series framework. The empirical evidence shows that analytical regional configurations are the least susceptible to time effects relative to both normative and random regional configurations, while normative configurations are an improvement over random ones.
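The aggregation problem the abstract describes can be shown in miniature: with invented numbers, two regions each exhibit a negative within-region relationship between two variables, yet pooling the raw observations yields a positive one.

```python
# Toy ecological-fallacy illustration (all numbers invented): the sign of
# a relationship at the disaggregated level reverses once data are pooled.

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Region A: high levels of x, with y falling as x rises within the region.
xa, ya = [10, 11, 12, 13], [5, 4, 3, 2]
# Region B: low levels of x, same negative within-region slope.
xb, yb = [1, 2, 3, 4], [1.5, 1.0, 0.5, 0.0]

within_a = corr(xa, ya)          # negative within each region
pooled = corr(xa + xb, ya + yb)  # positive once regions are pooled
```

The between-region difference in levels dominates the pooled correlation, which is exactly the kind of inference error that different regional configurations are more or less susceptible to.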
Abstract:
In the power market, electricity prices play an important role at the economic level. The behavior of a price series may change over time in its mean value or its volatility, which is usually known as a structural break; it may also change for a period of time before reverting to its original behavior or switching to another style of behavior, which is typically termed a regime shift or regime switch. Our task in this thesis is to develop an electricity price time series model that captures fat-tailed distributions, can explain this behavior, and can be analyzed for better understanding. For the NordPool data used, the obtained Markov regime-switching model operates on two regimes: regular and non-regular. Three criteria are considered: a price difference criterion, a capacity/flow difference criterion, and a spikes-in-Finland criterion. The suitability of GARCH modeling for simulating multi-regime behavior is also studied.
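A two-regime Markov-switching process of the kind the thesis estimates can be simulated in a few lines. The transition probabilities, the mean-reverting base dynamics, and the spike distribution below are illustrative assumptions, not the fitted NordPool model.

```python
# Hypothetical two-regime Markov-switching price simulation:
# a mean-reverting "regular" regime and a heavy-tailed "spike" regime.
import random

random.seed(2)
P_TO_BASE = {'base': 0.95, 'spike': 0.60}   # P(next state = base | current)

def next_state(state):
    return 'base' if random.random() < P_TO_BASE[state] else 'spike'

state, price = 'base', 30.0
path, states = [], []
for _ in range(1000):
    state = next_state(state)
    if state == 'base':
        price += 0.3 * (30.0 - price) + random.gauss(0, 1)  # mean reversion
    else:
        price = 30.0 + abs(random.gauss(0, 1)) * 25         # fat-tailed spike
    path.append(price)
    states.append(state)

spike_share = states.count('spike') / len(states)
# Stationary spike probability is 0.05 / (0.05 + 0.60), roughly 0.077.
```

Even this toy chain reproduces the qualitative features the abstract mentions: long calm stretches punctuated by short clusters of extreme prices.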
Abstract:
Financial analysts play an important role in financial markets, especially by conveying information through earnings forecasts. Analysts typically disagree to some extent in their earnings forecasts, and it is precisely this disagreement among analysts that this thesis studies. When a firm reports losses, disagreement about the firm's future tends to increase. Intuitively, it is easy to interpret this as increased uncertainty. This is also what one finds when studying analyst reports: analysts appear to become more uncertain when firms start making losses, and it is precisely then that disagreement among analysts also increases. The mathematical-theoretical models describing analysts' decision processes, however, have the opposite implication: increased disagreement among analysts can only arise if the analysts become more certain at the individual level, with asymmetric information as the driving force. This thesis resolves the contradiction between increased certainty and increased uncertainty as the driver of dispersion in analyst forecasts. Once the amount of public information made available through earnings reports is taken into account, the models of analysts' decision processes cannot generate the levels of forecast dispersion observed in the data. The conclusion is therefore that the underlying theoretical models of forecast dispersion are partly deficient, and that forecast dispersion more likely stems from increased uncertainty among analysts, in line with what analysts in fact state in their reports. The results are important because an understanding of uncertainty around, for example, earnings announcements contributes to a general understanding of the financial reporting environment, which in turn is of great importance for price formation in financial markets. Moreover, increased forecast dispersion is typically used as an indicator of increased information asymmetry in accounting research, a practice that this thesis thereby calls into question.
Abstract:
The aim of the present study was to determine the effect of the volume and composition of fluid replacement on the physical performance of male football referees. Ten referees were evaluated during three official matches. In one match the participants consumed mineral water ad libitum; in the others they consumed a pre-determined volume of mineral water or a carbohydrate-electrolyte solution (6.4% carbohydrate and 22 mM Na+) equivalent to 1% of their baseline body mass (half before the match and half during the interval). Total water loss, sweat rate and physical performance during the match were measured. When rehydrated ad libitum (pre-match and at half time), participants lost 1.97 ± 0.18% of their pre-match body mass (2.14 ± 0.19 L). This loss was significantly reduced when they consumed a pre-determined volume of fluid. Sweat rate was significantly reduced when the referees ingested a pre-determined volume of the carbohydrate-electrolyte solution (0.72 ± 0.12 vs 1.16 ± 0.11 L/h ad libitum). The high percentage (74.1%) of movements at low speed (walking, jogging) observed when they ingested fluid ad libitum was significantly reduced to 71% with mineral water and to 69.9% with the carbohydrate solution. An increase in the percentage of movement spent in backward running was observed when they consumed a pre-determined volume of the carbohydrate solution (7.7 ± 0.5 vs 5.5 ± 0.5% ad libitum). The improved hydration status achieved with the carbohydrate-electrolyte solution reduced the time spent in low-speed movements and increased the time spent in activities demanding high energy expenditure.
Abstract:
This study aimed to examine the time course of endothelial function after a single handgrip exercise session combined with blood flow restriction in healthy young men. Nine participants (28 ± 5.8 years) completed a single session of bilateral dynamic handgrip exercise (20 min at 60% of the maximum voluntary contraction). To induce blood flow restriction, a cuff was placed 2 cm below the antecubital fossa in the experimental arm. This cuff was inflated to 80 mmHg before initiation of exercise and maintained for the duration of the protocol. The experimental and control arms were randomly selected for all subjects. Brachial artery flow-mediated dilation (FMD) and blood flow velocity profiles were assessed using Doppler ultrasonography before initiation of the exercise and at 15 and 60 min after its cessation. Blood flow velocity profiles were also assessed during exercise. There was a significant increase in FMD 15 min after exercise in the control arm compared with before exercise (64.09% ± 16.59%, P=0.001), but there was no change in the experimental arm (-12.48% ± 12.64%, P=0.252). FMD values at 15 min post-exercise were significantly higher for the control arm than for the experimental arm (P=0.004). FMD returned to near baseline values at 60 min after exercise, with no significant difference between arms (P=0.424). A single handgrip exercise bout provoked an acute increase in FMD 15 min after exercise, returning to near baseline values at 60 min. This response was blunted by the addition of an inflated pneumatic cuff to the exercising arm.
Abstract:
Time series analysis has gone through different developmental stages leading up to the current modern approaches. These can be broadly categorized as classical and modern time series analysis. In the classical approach, the basic aim is to describe the major behaviour of the series without necessarily dealing with its underlying structure. In contrast, the modern approach strives to summarize the behaviour of the series through its underlying structure so that the series can be represented explicitly; in other words, it studies the series structurally. The components that make up the observations, such as the trend, seasonality, regression and disturbance terms, are modelled explicitly before everything is put together into a single state space model, which gives a natural interpretation of the series. The aim of this diploma thesis is to apply the modern approach to time series analysis known as the state space approach, more specifically the dynamic linear model, to trend analysis of Ionosonde measurement data. The data are a time series of the peak height of the F2 layer, denoted hmF2, which is the height of high electron density. In addition, the work investigates the connection between solar activity and the peak height of the F2 layer. Based on the results, the peak height of the F2 layer decreased during the observation period and shows a nonlinear positive correlation with solar activity.
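The local level model, the simplest dynamic linear model of the kind applied here, can be filtered with a few lines of Kalman recursions. The variances and the synthetic trend-plus-noise series below are assumptions for illustration, not the hmF2 data.

```python
# Kalman filter for the local level model
#   y_t = mu_t + eps_t,   mu_t = mu_{t-1} + xi_t,
# a minimal instance of the state space / dynamic linear model approach.
import random

def kalman_local_level(y, var_eps=1.0, var_xi=0.1, m0=0.0, p0=1e6):
    m, p, level = m0, p0, []
    for obs in y:
        p = p + var_xi               # predict: state variance grows
        k = p / (p + var_eps)        # Kalman gain
        m = m + k * (obs - m)        # update with the new observation
        p = (1 - k) * p
        level.append(m)
    return level

random.seed(3)
true = [0.05 * t for t in range(200)]              # slowly rising level
y = [mu + random.gauss(0, 1) for mu in true]       # noisy observations
level = kalman_local_level(y)
# The filtered level tracks the underlying trend through the noise.
```

The diffuse initial variance `p0` lets the first observations dominate the starting estimate, which is the usual way to initialize a local level filter when nothing is known about the initial state.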
Abstract:
We consider the problem of testing whether the observations X1, ..., Xn of a time series are independent with unspecified (possibly nonidentical) distributions symmetric about a common known median. Various bounds on the distributions of serial correlation coefficients are proposed: exponential bounds, Eaton-type bounds, Chebyshev bounds and Berry-Esséen-Zolotarev bounds. The bounds are exact in finite samples, distribution-free and easy to compute. The performance of the bounds is evaluated and compared with traditional serial dependence tests in a simulation experiment. The procedures proposed are applied to U.S. data on interest rates (commercial paper rate).
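The flavour of such distribution-free bounds can be sketched with the crudest member of the family: under independence, Var(r1) is approximately 1/n, so Chebyshev's inequality bounds the tail probability of the lag-1 serial correlation. Both the approximation and the synthetic series below are illustrative; the paper's exact finite-sample bounds are sharper.

```python
# Lag-1 serial correlation with a crude Chebyshev-type tail bound under
# the independence null, using the approximation Var(r1) ~ 1/n. This is
# a sketch of the idea only, not the exact bounds derived in the paper.
import random

def lag1_corr(x):
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, n))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def chebyshev_bound(r1, n):
    """Upper bound on P(|r_1| >= |r1|) under independence, capped at 1."""
    return 1.0 if r1 == 0 else min(1.0, 1.0 / (n * r1 * r1))

random.seed(4)
iid = [random.gauss(0, 1) for _ in range(500)]      # independent series
ar = [0.0]                                          # AR(1) with phi = 0.8
for _ in range(499):
    ar.append(0.8 * ar[-1] + random.gauss(0, 1))

p_iid = chebyshev_bound(lag1_corr(iid), 500)   # uninformative (near 1)
p_ar = chebyshev_bound(lag1_corr(ar), 500)     # small: dependence flagged
```

Like the bounds in the paper, this quantity is distribution-free and trivial to compute; unlike them, it is very conservative, which is the price of using only a second-moment inequality.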
Abstract:
Therapeutic drug monitoring is recommended for dose adjustment of immunosuppressive agents. A growing number of studies support the relevance of the area under the concentration-time curve (AUC) as a biomarker for therapeutic monitoring of cyclosporine (CsA) in hematopoietic stem cell transplantation. However, for reasons inherent to the method of calculating the AUC, its use in the clinical setting is impractical. Limited sampling strategies, based on regression approaches (R-LSS) or Bayesian approaches (B-LSS), are practical alternatives for satisfactory estimation of the AUC. For these methodologies to be applied effectively, however, their design must accommodate clinical reality, notably by requiring a minimal number of concentrations spread over a short sampling period. Particular attention should also be paid to ensuring their adequate development and validation. It is also important to note that irregularity in the timing of blood sample collection can have a non-negligible impact on the predictive performance of R-LSS; to date, this impact has not been studied. This doctoral thesis addresses these issues in order to allow accurate and practical estimation of the AUC. The studies were conducted in the context of CsA use in pediatric patients who had undergone hematopoietic stem cell transplantation. First, multiple regression and population pharmacokinetic (Pop-PK) approaches were used constructively to develop and adequately validate LSS. Then, several Pop-PK models were evaluated, keeping in mind their intended use in the context of AUC estimation.
The performance of B-LSS targeting different versions of the AUC was also studied. Finally, the impact of deviations between actual blood sampling times and the planned nominal times on the predictive performance of R-LSS was quantified using a simulation approach covering diverse, realistic scenarios of potential errors in the blood sampling schedule. This work first led to the development of R-LSS and B-LSS with satisfactory clinical performance that are also practical, as they involve 4 or fewer sampling points obtained within 4 hours post-dose. The Pop-PK analysis retained a two-compartment structural model with a lag time. However, the final model, notably with covariates, did not improve the performance of B-LSS compared with the structural models without covariates. In addition, we demonstrated that B-LSS perform better for the AUC derived from simulated concentrations excluding residual errors, which we termed the "underlying AUC", than for the observed AUC calculated directly from measured concentrations. Finally, our results showed that irregularity in blood sampling times has a substantial impact on the predictive performance of R-LSS; this impact depends on the number of samples required, but even more on the duration of the sampling process involved. We also showed that sampling-time errors committed when the concentration is changing rapidly are those that most affect the predictive power of R-LSS. More interestingly, we highlighted that even though different R-LSS can perform similarly when based on nominal times, their tolerance to sampling-time errors can differ widely.
Indeed, adequate consideration of the impact of these errors can lead to more reliable selection and use of R-LSS. Through an in-depth investigation of different aspects underlying limited sampling strategies, this thesis has provided notable methodological improvements and proposed new avenues to ensure their reliable and informed use, while promoting their suitability for clinical practice.
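To see why limited sampling strategies need a calibration step rather than direct integration, consider a naive trapezoidal AUC computed from four early samples on a synthetic concentration curve. The one-compartment parameters and time points below are invented for illustration, not the CsA schedules of the thesis.

```python
# Trapezoidal AUC from a 4-point, 4-hour limited schedule versus a dense
# 0-12 h profile, on a hypothetical one-compartment absorption curve.
import math

def conc(t, ka=1.2, ke=0.2, scale=10.0):
    """Concentration at time t (arbitrary units, invented parameters)."""
    return scale * (math.exp(-ke * t) - math.exp(-ka * t))

def trapezoid_auc(times, values):
    return sum((values[i] + values[i + 1]) / 2.0 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

dense_t = [i * 0.1 for i in range(121)]            # fine grid over 0-12 h
dense_auc = trapezoid_auc(dense_t, [conc(t) for t in dense_t])

lss_t = [0.0, 1.0, 2.0, 4.0]                       # 4 samples within 4 h
lss_auc = trapezoid_auc(lss_t, [conc(t) for t in lss_t])
ratio = lss_auc / dense_auc                        # only part is captured
```

Because the truncated window misses the elimination tail, the raw four-point trapezoid recovers only part of the full AUC; this is the gap that R-LSS regression equations or B-LSS Bayesian estimation are built to bridge, which is why their development and validation matter as much as the sampling schedule itself.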