919 results for autoregressive distributed lag model

Relevance: 100.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies


A Work Project, presented as part of the requirements for the Award of a Master's Degree in Finance from the NOVA – School of Business and Economics


We investigate the dynamic and asymmetric dependence structure between equity portfolios from the US and UK. We demonstrate the statistical significance of dynamic asymmetric copula models in modelling and forecasting market risk. First, we construct "high-minus-low" equity portfolios sorted on beta, coskewness, and cokurtosis. We find substantial evidence of dynamic and asymmetric dependence between characteristic-sorted portfolios. Second, we consider a dynamic asymmetric copula model by combining the generalized hyperbolic skewed t copula with the generalized autoregressive score (GAS) model to capture both the multivariate non-normality and the dynamic and asymmetric dependence between equity portfolios. We demonstrate its usefulness by evaluating the forecasting performance of Value-at-Risk and Expected Shortfall for the high-minus-low portfolios. From back-testing, we find consistent and robust evidence that our dynamic asymmetric copula model provides the most accurate forecasts, indicating the importance of incorporating the dynamic and asymmetric dependence structure in risk management.
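The GAS copula model itself is beyond a short sketch, but the back-testing step the abstract describes can be illustrated with a minimal historical-simulation Value-at-Risk check. All numbers and function names below are illustrative assumptions, not the authors' model:

```python
def historical_var(returns, alpha=0.05):
    """Empirical alpha-quantile of the return sample, used as the VaR threshold."""
    s = sorted(returns)
    k = max(0, int(alpha * len(s)) - 1)
    return s[k]  # a negative number: the loss level exceeded with prob. ~alpha

def backtest_violations(returns, var_level):
    """Count days on which the realized return breaches the VaR threshold."""
    return sum(1 for r in returns if r < var_level)

# Toy daily returns (illustrative only)
rets = [0.01, -0.02, 0.005, -0.035, 0.012, -0.001, 0.02, -0.015, 0.003, -0.04]
var5 = historical_var(rets, alpha=0.05)
hits = backtest_violations(rets, var5)
```

A back-test then compares the observed violation count against the expected count (alpha times the sample size), which is the logic behind standard coverage tests.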


A network of 25 sonic stage sensors was deployed in the Squaw Creek basin upstream from Ames, Iowa, to determine whether the state-of-the-art distributed hydrological model CUENCAS can produce reliable information for all road crossings, including those that cross small creeks draining basins as small as 1 sq. mile. A hydraulic model was implemented for the major tributaries of the Squaw Creek where IFC sonic instruments were deployed, and it was coupled to CUENCAS to validate the predictions made at small tributaries in the basin. This study demonstrates that the predictions made by the hydrological model at internal locations in the basin are as accurate as the predictions made at the outlet of the basin. Final rating curves based on surveyed cross sections were developed for the 22 IFC bridge sites that are currently operating, and routine forecasts are provided at those locations (see IFIS). Rating curves were developed for 60 additional bridge locations in the basin; however, we do not use those rating curves for routine forecasts because the LiDAR-derived cross sections are not sufficiently accurate. The results of our work form the basis for two papers that have been submitted for publication to the Journal of Hydrologic Engineering. Peer review of our work will give a strong footing to our ability to expand our results from the pilot Squaw Creek basin to all basins in Iowa.
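A rating curve of the kind developed for the bridge sites maps sensed stage to discharge, commonly with a power law. A minimal sketch, with entirely hypothetical parameters (the study's fitted coefficients are not given here):

```python
def rating_curve(stage_m, a, h0, b):
    """Power-law stage-discharge relation Q = a * (stage - h0)**b, in m^3/s.
    h0 is the stage of zero flow; parameters are site-specific and fitted."""
    return a * max(stage_m - h0, 0.0) ** b

# Hypothetical parameters for one bridge site, stage 2.5 m above datum
q = rating_curve(2.5, a=12.0, h0=0.5, b=1.6)
```

In practice a, h0 and b are fitted to surveyed cross sections and measured flows, which is why the LiDAR-only sites were excluded from routine forecasting.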


This work is devoted to the problem of reconstructing the basis weight structure of the paper web with black-box techniques. The data that is analyzed comes from a real paper machine and is collected by an off-line scanner. The principal mathematical tool used in this work is Autoregressive Moving Average (ARMA) modelling. When coupled with the Discrete Fourier Transform (DFT), it gives a very flexible and interesting tool for analyzing properties of the paper web. Both ARMA and DFT are independently used to represent the given signal in a simplified version of our algorithm, but the final goal is to combine the two together. The Ljung-Box Q-statistic lack-of-fit test combined with the Root Mean Squared Error coefficient gives a tool to separate significant signals from noise.
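The DFT side of such an analysis can be sketched in a few lines: transform a scanned profile and locate the dominant periodic component. This is a generic illustration of the DFT, not the thesis's combined ARMA/DFT algorithm; the synthetic "scan" is an assumption:

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) discrete Fourier transform, adequate for short scans."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

# Synthetic basis-weight scan containing one periodic component at bin 4
N = 64
scan = [math.sin(2 * math.pi * 4 * n / N) for n in range(N)]
spectrum = dft(scan)
# Dominant frequency bin (search positive frequencies below Nyquist)
peak_bin = max(range(1, N // 2), key=lambda k: abs(spectrum[k]))
```

The recovered `peak_bin` identifies the periodic structure; in the thesis's setting such components would correspond to machine-induced variation in the web.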


Already in ancient Greece, Hippocrates postulated that disease showed a seasonal pattern characterised by excess winter mortality. Since then, several studies have confirmed this finding, and it was generally accepted that the increase in winter mortality was mostly due to respiratory infections and seasonal influenza. More recently, it was shown that cardiovascular disease (CVD) mortality also displays such seasonality, and that the magnitude of the seasonal effect increases from the poles to the equator. The recent study by Yang et al. assessed CVD mortality attributable to ambient temperature using daily data from 15 cities in China for the years 2007-2013, including nearly two million CVD deaths. High temperature variability between and within cities can be observed (figure 1). The authors used sophisticated statistical methodology to account for the complex temperature-mortality relationship: first, distributed lag non-linear models combined with quasi-Poisson regression to obtain city-specific estimates, taking into account temperature, relative humidity and atmospheric pressure; then, a meta-analysis to obtain the pooled estimates. The results confirm the winter excess mortality reported by the Eurowinter and other groups, and show the magnitude of the mortality burden attributable to ambient temperature.
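The core data structure behind any distributed lag model is a design matrix in which each row holds the current and lagged exposure values. A minimal sketch of that construction (the temperature values are invented; the full non-linear cross-basis of Yang et al. is far richer):

```python
def lag_matrix(series, max_lag):
    """Design rows for a distributed lag model: for each usable time t,
    the exposure at lags 0..max_lag (row = [x_t, x_{t-1}, ..., x_{t-max_lag}])."""
    return [[series[t - l] for l in range(max_lag + 1)]
            for t in range(max_lag, len(series))]

daily_temps = [10, 12, 11, 9, 8, 7]   # illustrative daily mean temperatures (deg C)
X = lag_matrix(daily_temps, max_lag=2)
```

Each row of `X` would then enter a regression (quasi-Poisson in the study) so that delayed temperature effects on mortality can be estimated jointly.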


We investigate the importance of the labour mobility of inventors, as well as the scale, extent and density of their collaborative research networks, for regional innovation outcomes. To do so, we apply a knowledge production function framework at the regional level and include inventors’ networks and their labour mobility as regressors. Our empirical approach takes full account of spatial interactions by estimating a spatial lag model together, where necessary, with a spatial error model. In addition, standard errors are calculated using spatial heteroskedasticity and autocorrelation consistent estimators to ensure their robustness in the presence of spatial error autocorrelation and heteroskedasticity of unknown form. Our results point to the existence of a robust positive correlation between intraregional labour mobility and regional innovation, whilst the relationship with networks is less clear. However, networking across regions positively correlates with a region’s innovation intensity.
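The spatially lagged regressor in such a model is the neighbourhood average of the outcome, built from a row-standardized weights matrix. A minimal sketch with an invented three-region contiguity structure (not the paper's data):

```python
def spatial_lag(W, y):
    """Row-standardize a binary contiguity matrix W and return the spatial
    lag W y: each region's average of y over its neighbours."""
    lag = []
    for row in W:
        s = sum(row)
        lag.append(sum(w * yj for w, yj in zip(row, y)) / s if s else 0.0)
    return lag

# Three regions in a line: 0-1 and 1-2 are neighbours
W = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
y = [2.0, 4.0, 6.0]
Wy = spatial_lag(W, y)
```

In a spatial lag model, `Wy` enters the right-hand side of the regression, so each region's innovation intensity depends on its neighbours' outcomes.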


Identification of the order of an Autoregressive Moving Average (ARMA) model by the usual graphical method is subjective. Hence, there is a need for a technique that identifies the order without graphical investigation of the series autocorrelations. To avoid this subjectivity, this thesis determines the order of the ARMA model using Reversible Jump Markov Chain Monte Carlo (RJMCMC). RJMCMC selects the model from a set of candidates on the basis of goodness of fit, the standard deviation of the errors, and the frequency of accepted proposals. Alongside a detailed treatment of the classical Box-Jenkins modelling methodology, the integration of MCMC algorithms is examined through parameter estimation and model fitting of ARMA models. This helps to verify how well the MCMC algorithms handle ARMA models, by comparing their results with the graphical method. The MCMC approach produced better results than the classical time series approach.
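A full RJMCMC sampler is too involved for a short sketch, so here, as a hedged point of comparison only, is the classical automatic route to the same decision: AR order selection by AIC via the Levinson-Durbin recursion. This is a deliberately swapped-in baseline, not the thesis's RJMCMC method, and the simulated series is illustrative:

```python
import math
import random

def autocov(x, lag):
    """Biased sample autocovariance at a given lag."""
    n, m = len(x), sum(x) / len(x)
    return sum((x[t] - m) * (x[t - lag] - m) for t in range(lag, n)) / n

def ar_order_by_aic(x, max_order):
    """Levinson-Durbin recursion over AR orders 0..max_order; minimize AIC."""
    n = len(x)
    c = [autocov(x, k) for k in range(max_order + 1)]
    err, phi = c[0], []          # prediction error variance, AR coefficients
    aic = {0: n * math.log(err)}
    for p in range(1, max_order + 1):
        k = (c[p] - sum(phi[j] * c[p - 1 - j] for j in range(p - 1))) / err
        phi = [phi[j] - k * phi[p - 2 - j] for j in range(p - 1)] + [k]
        err *= 1 - k * k
        aic[p] = n * math.log(err) + 2 * p
    return min(aic, key=aic.get), aic

# Simulate an AR(2) process x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + e_t
rng = random.Random(7)
x = [0.0, 0.0]
for _ in range(500):
    x.append(0.6 * x[-1] - 0.3 * x[-2] + rng.gauss(0, 1))
best, aic = ar_order_by_aic(x[2:], max_order=5)
```

RJMCMC replaces this deterministic criterion with a Markov chain that jumps between model orders, so the output is a posterior over orders rather than a single minimizer.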


This work is devoted to the analysis of signal variation in the Cross-Direction and Machine-Direction measurements from the paper web. The data that we possess comes from a real paper machine. The goal of the work is to reconstruct the basis weight structure of the paper and to predict its behaviour into the future. The resulting synthetic data is needed for simulation of the paper web. The main idea that we used for describing the basis weight variation in the Cross-Direction is the Empirical Orthogonal Functions (EOF) algorithm, which is closely related to the Principal Component Analysis (PCA) method. Signal forecasting in time is based on time series analysis. The two principal mathematical procedures that we used in the work are Autoregressive Moving Average (ARMA) modelling and the Ornstein–Uhlenbeck (OU) process.
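The OU process mentioned above admits an exact discrete-time simulation, which is how synthetic mean-reverting signals are typically generated. A minimal sketch with invented parameters (the work's fitted values are not given here):

```python
import math
import random

def simulate_ou(theta, mu, sigma, x0, dt, n, rng):
    """Exact discretization of an Ornstein-Uhlenbeck process:
    X_{t+dt} = mu + (X_t - mu) * exp(-theta*dt) + Gaussian noise with the
    matching conditional variance sigma^2 * (1 - exp(-2*theta*dt)) / (2*theta)."""
    a = math.exp(-theta * dt)
    sd = sigma * math.sqrt((1 - a * a) / (2 * theta))
    path = [x0]
    for _ in range(n):
        path.append(mu + (path[-1] - mu) * a + sd * rng.gauss(0, 1))
    return path

rng = random.Random(0)
path = simulate_ou(theta=2.0, mu=5.0, sigma=0.5, x0=0.0, dt=0.1, n=500, rng=rng)
```

Because the discretization is exact, the simulated path reverts to the mean `mu` at rate `theta` regardless of step size, which makes the OU process convenient for generating synthetic web data.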


Heart rate variability (HRV) provides important information about cardiac autonomic modulation. Since it is a noninvasive and inexpensive method, HRV has been used to evaluate several parameters of cardiovascular health. However, the internal reproducibility of this method has been challenged in some studies. Our aim was to determine the intra-individual reproducibility of HRV parameters in short-term recordings obtained in supine and orthostatic positions. Electrocardiographic (ECG) recordings were obtained from 30 healthy subjects (20-49 years, 14 men) using a digital apparatus (sampling rate = 250 Hz). ECG was recorded for 10 min in the supine position and for 10 min in the orthostatic position. The procedure was repeated 2-3 h later. Time and frequency domain analyses were performed. Frequency domain analysis included low (LF, 0.04-0.15 Hz) and high frequency (HF, 0.15-0.4 Hz) bands. Power spectral analysis was performed by the autoregressive method and the model order was set at 16. Intra-subject agreement was assessed by linear regression analysis, a test of difference in variances and limits of agreement. Most HRV measures (pNN50, RMSSD, LF, HF, and LF/HF ratio) were reproducible independent of body position. Better correlation indexes (r > 0.6) were obtained in the orthostatic position. Bland-Altman plots revealed that most values were inside the agreement limits, indicating concordance between measures. Only SDNN and NNv in the supine position were not reproducible. Our results showed reproducibility of HRV parameters when recorded in the same individual with a short time between two exams. The increased sympathetic activity occurring in the orthostatic position probably facilitates reproducibility of the HRV indexes.
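Two of the time-domain measures named above, RMSSD and pNN50, are simple functions of successive RR-interval differences. A minimal sketch on an invented RR series (not the study's data):

```python
def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences, in ms."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

def pnn50(rr_ms):
    """Percentage of successive RR differences whose magnitude exceeds 50 ms."""
    diffs = [abs(b - a) for a, b in zip(rr_ms, rr_ms[1:])]
    return 100.0 * sum(d > 50 for d in diffs) / len(diffs)

rr = [800, 810, 790, 860, 795, 805]   # illustrative RR intervals (ms)
```

Both indexes emphasize beat-to-beat (high-frequency) variability, which is why they track the HF band from the spectral analysis.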


Maintenance of thermal homeostasis in rats fed a high-fat diet (HFD) is associated with changes in their thermal balance. The thermodynamic relationship between heat dissipation and energy storage is altered by the ingestion of a high-energy diet. Core temperature records, in humans and rodents, exhibit time series characteristics, such as autoreference and stationarity, that lend themselves to stochastic analysis. To identify this change, we used, for the first time, a stochastic autoregressive model, whose concepts match those of the physiological systems involved, and applied it to male HFD rats compared with age-matched male controls on a standard diet (n=7 per group). By analyzing a recorded temperature time series, we were able to identify when thermal homeostasis was affected by the new diet. The autoregressive time series model (AR model) was used to predict the occurrence of thermal homeostasis, and it proved very effective in distinguishing such a physiological disorder. We therefore infer from the results of our study that maximum entropy distribution, as a means of stochastic characterization of temperature time series records, may become an important early tool in the diagnosis and prevention of metabolic diseases, given its ability to detect small variations in the thermal profile.
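The detection idea, fit an autoregressive model to a baseline temperature series and watch its one-step-ahead residuals, can be sketched with an AR(1) fit by ordinary least squares. The readings below are invented and the model order is simplified relative to the study:

```python
def fit_ar1(x):
    """OLS estimates of (c, phi) in the AR(1) model x[t] = c + phi*x[t-1] + e[t]."""
    xl, y = x[:-1], x[1:]
    n = len(xl)
    mx, my = sum(xl) / n, sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(xl, y))
           / sum((a - mx) ** 2 for a in xl))
    return my - phi * mx, phi

def one_step_residuals(x, c, phi):
    """Deviation of each reading from its AR(1) prediction; persistently
    large residuals flag a departure from the fitted thermal profile."""
    return [x[t] - (c + phi * x[t - 1]) for t in range(1, len(x))]

temps = [37.0, 37.2, 37.1, 37.3, 37.2, 37.4]   # illustrative core readings (deg C)
c, phi = fit_ar1(temps)
res = one_step_residuals(temps, c, phi)
```

In the study's setting, a diet-induced change in thermoregulation would show up as a systematic shift in these residuals relative to the baseline fit.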


Traditional psychometric theory and practice classify people according to broad ability dimensions but do not examine how these mental processes occur. Hunt and Lansman (1975) proposed a 'distributed memory' model of cognitive processes with emphasis on how to describe individual differences, based on the assumption that each individual possesses the same components. It is in the quality of these components that individual differences arise. Carroll (1974) expands Hunt's model to include a production system (after Newell and Simon, 1973) and a response system. He developed a framework of factor analytic (FA) factors for the purpose of describing how individual differences may arise from them. This scheme is to be used in the analysis of psychometric tests. Recent advances in the field of information processing are examined and include: 1) Hunt's development of differences between subjects designated as high or low verbal; 2) Miller's pursuit of the magic number seven, plus or minus two; 3) Ferguson's examination of transfer and abilities; and 4) Brown's discoveries concerning strategy teaching and retardates. In order to examine possible sources of individual differences arising from cognitive tasks, traditional psychometric tests were searched for a suitable perceptual task which could be varied slightly and administered to gauge learning effects produced by controlling independent variables. It also had to be suitable for analysis using Carroll's framework. The Coding Task (a symbol substitution test) found in the Performance Scale of the WISC was chosen. Two experiments were devised to test the following hypotheses: 1) High verbals should be able to complete significantly more items on the Symbol Substitution Task than low verbals (Hunt and Lansman, 1975). 2) Having previous practice on a task, where strategies involved in the task may be identified, increases the amount of output on a similar task (Carroll, 1974). 3) There should be a substantial decrease in the amount of output as the load on STM is increased (Miller, 1956). 4) Repeated measures should produce an increase in output over trials, and where individual differences in previously acquired abilities are involved, these should differentiate individuals over trials (Ferguson, 1956). 5) Teaching slow learners a rehearsal strategy would improve their learning such that their learning would resemble that of normals on the same task (Brown, 1974).

In the first experiment 60 subjects were divided into high and low verbal groups, each further divided randomly into a practice group and a non-practice group. Five subjects in each group were assigned randomly to work on a five-, seven- or nine-digit code throughout the experiment. The practice group was given three trials of two minutes each on the practice code (designed to eliminate transfer effects due to symbol similarity) and then three trials of two minutes each on the actual SST task. The non-practice group was given three trials of two minutes each on the same actual SST task. Results were analyzed using a four-way analysis of variance. In the second experiment 18 slow learners were divided randomly into two groups: one group receiving planned strategy practice, the other receiving random practice. Both groups worked on the actual code to be used later in the actual task. Within each group subjects were randomly assigned to work on a five-, seven- or nine-digit code throughout. Both practice and actual tests consisted of three trials of two minutes each. Results were analyzed using a three-way analysis of variance. It was found in the first experiment that: 1) high or low verbal ability by itself did not produce significantly different results; however, in interaction with the other independent variables, a difference in performance was noted; 2) the previous practice variable was significant over all segments of the experiment: those who received previous practice scored significantly higher than those without it; 3) increasing the size of the load on STM severely restricts performance; 4) the effect of repeated trials proved to be beneficial, with gains generally made on each successive trial within each group; 5) in the second experiment, slow learners who were allowed to practice randomly performed better on the actual task than subjects who were taught the code by means of a planned strategy. Upon analysis using the Carroll scheme, individual differences were noted in the ability to develop strategies of storing, searching and retrieving items from STM, and in adopting necessary rehearsals for retention in STM. While these strategies may benefit some, it was found that for others they may be harmful. Temporal aspects and perceptual speed were also found to be sources of variance within individuals. Generally it was found that the largest single factor influencing learning on this task was the repeated measures. What enables gains to be made varies with individuals. Environmental factors, specific abilities, strategy development, previous learning, amount of load on STM, and perceptual and temporal parameters all influence learning, and these have serious implications for educational programs.


Recent advances in wavelength selective switches (WSS) have fostered the development of multi-degree, colorless and directionless reconfigurable optical add/drop multiplexers (ROADM), regarded as highly promising equipment for future wavelength division multiplexing (WDM) mesh networks. However, their asymmetric switching property complicates the routing and wavelength assignment (RWA) problem, and most existing RWA algorithms do not take this asymmetry into account. Service interruptions caused by equipment failures on the lightpaths (the output of solving the RWA problem) result in the loss of large amounts of data. Research is therefore essential to ensure the survivability of optical networks, that is, the continuity of service, particularly in the event of equipment failures. Most earlier publications focused on protection schemes that guarantee traffic rerouting in the event of a single link failure. However, designing protection against a single link failure is not always sufficient for the survivability of WDM networks, given the many other failure types that are becoming common nowadays, such as equipment breakdowns or simultaneous failures of two or three links. In addition, there are considerable challenges in protecting large multi-domain optical networks composed of single-domain networks interconnected by inter-domain links, where the internal topological details of a domain are generally not shared externally.

This thesis aims to propose large-scale optimization models and solutions to the problems mentioned above. These models generate optimal or near-optimal solutions with mathematically proven optimality gaps. To do so, we rely on the column generation technique to solve the underlying large-scale linear programs. Regarding provisioning in optical networks, we propose a new integer linear programming (ILP) model for the RWA problem that maximizes the number of accepted requests (GoS - Grade of Service). The resulting model is a large-scale ILP, which yields exact solutions for fairly large RWA instances, assuming that all nodes are asymmetric and come with a given switching connectivity matrix. We then modify the model and propose a solution to the RWA problem that finds the best switching matrix for a given number of ports and switching connections, while satisfying/maximizing the GoS. Regarding the protection of single-domain networks, we propose solutions supporting protection against multiple failures. Specifically, we develop protection of a single-domain network against multiple failures using Failure Independent Path Protecting (FIPP) p-cycles and Failure Dependent Path Protecting (FDPP) protection. We then propose a new flow-based formulation for FDPP p-cycles subject to multiple failures.

The new model raises a scalability issue, as it has an exponential number of constraints owing to certain subtour elimination constraints. Consequently, to solve this problem efficiently, we examine: (i) a hierarchical decomposition of the auxiliary (pricing) problem in the decomposition model; (ii) heuristics to handle the large number of constraints efficiently. Regarding protection in multi-domain networks, we propose schemes for protection against single link failures. First, an optimization model is proposed for a centralized protection scheme, assuming that the network management is aware of all details of the domains' physical topologies. We then propose a distributed optimization model for protection in multi-domain optical networks, a much more realistic formulation because it is based on the assumption of distributed network management. Next, we add shared bandwidth in order to reduce the cost of protection. More precisely, the bandwidth of each intra-domain link is shared between FIPP p-cycles and p-cycles in a first study, then between the paths for link/path protection in a second study. Finally, we recommend parallel strategies for solving large multi-domain optical networks. The results of this study enable the design of an efficient protection scheme for a very large multi-domain network (45 domains), the largest examined in the literature, with both a centralized and a distributed scheme.
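The thesis's ILP and column-generation models are far beyond a short sketch; as a hedged contrast, here is the classical greedy first-fit heuristic for the same RWA decision, a deliberately simple baseline, not the thesis's method. The link names and demand routes are invented:

```python
def first_fit_rwa(demands, n_wavelengths):
    """Greedy first-fit RWA baseline: each demand (a tuple of link names on
    its precomputed route) takes the lowest wavelength free on every link;
    None marks a blocked request. Ignores switching asymmetry entirely."""
    used = set()                     # (link, wavelength) pairs in service
    assignment = []
    for links in demands:
        for w in range(n_wavelengths):
            if all((l, w) not in used for l in links):
                used.update((l, w) for l in links)
                assignment.append(w)
                break
        else:
            assignment.append(None)  # blocked: no common free wavelength
    return assignment

# Hypothetical demands over links AB, BC, CD
demands = [("AB", "BC"), ("BC", "CD"), ("AB",)]
lambdas = first_fit_rwa(demands, n_wavelengths=2)
```

An ILP formulation instead optimizes all assignments jointly (maximizing accepted requests), which is exactly what such a greedy pass cannot guarantee.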


This research quantitatively evaluates the water retention capacity and flood control function of forest catchments using hydrological data from large flood events that occurred after serious droughts. The study sites are the Oodo Dam and Sameura Dam catchments in Japan. The kinematic wave model, which considers saturated and unsaturated sub-surface soil zones, is used for the rainfall-runoff analysis. The results show that the possible storage volume of the Oodo Dam catchment was 162.26 MCM in 2005, while that of the Sameura Dam catchment was 102.83 MCM in 2005 and 102.64 MCM in 2007. The flood control function of the Oodo Dam catchment was 173 mm in water depth in 2005, while that of the Sameura Dam catchment was 114 mm in 2005 and 126 mm in 2007. This indicates that the Oodo Dam catchment retained more than twice the dam's storage capacity (78.4 mm), while the Sameura Dam catchment retained about one-fifth of its storage capacity (693 mm).
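The conversion between the two units the abstract uses, an area-averaged water depth in mm and a storage volume in million cubic metres (MCM), is simple arithmetic. The catchment area below is hypothetical (the abstract does not state the areas):

```python
def depth_mm_to_mcm(depth_mm, area_km2):
    """Convert an area-averaged water depth (mm) over a catchment into a
    volume in million cubic metres: mm * km^2 / 1000 = MCM."""
    return depth_mm * area_km2 / 1000.0

# Hypothetical 900 km^2 catchment holding 173 mm of storage depth
vol_mcm = depth_mm_to_mcm(173, 900)
```

The same relation, applied in reverse with the true catchment areas, links the reported MCM storage volumes to the mm flood-control depths.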