934 results for calibration of rainfall-runoff models


Relevance: 100.00%

Abstract:

The purpose of this Master’s thesis was to study business model development in the Finnish newspaper industry during the next ten years through scenario planning. The objective was to see how the business models will develop amidst the many changes in the industry, what factors are driving the change, what the implications of these changes are for the players in the industry, and how the Finnish newspaper companies should evolve in order to succeed in the future. In this thesis the business model change is studied across all the elements of business models, as it was discovered that the industry too often focuses on changes in only a few of those elements, and a broader view can provide valuable information for the companies. The results revealed that the industry will be affected by many changes during the next ten years. Scenario planning provides a good tool for analyzing this change and for developing valuable options for businesses. After conducting a series of interviews and identifying the forces driving the change, four different scenarios were developed, centered on the role that newspapers will take and the level at which they will provide content in the future. These scenarios indicated that there is a variety of ways in which the business models may develop and that companies should start making decisions proactively in order to succeed. As the business model elements are interdependent, changes made in one element will affect the whole model, making these decisions about the role and level of content important for the companies. In the future, the Finnish newspaper industry is likely to include many different kinds of business models, some of which may be drastically different from the current ones and some of which may remain similar while taking better account of the new kind of media environment.

Relevance: 100.00%

Abstract:

The cosmological standard view is based on the assumptions of homogeneity, isotropy and general relativistic gravitational interaction. These alone are not sufficient to describe the current cosmological observations of the accelerated expansion of space. Although general relativity has been tested to extreme accuracy in describing local gravitational phenomena, there is a strong demand for modifying either the energy content of the universe or the gravitational interaction itself to account for the accelerated expansion. By adding a non-luminous matter component and a constant energy component with negative pressure, the observations can be explained within general relativity. Gravitation, cosmological models and their observational phenomenology are discussed in this thesis. Several classes of dark energy models motivated by theories outside the standard formulation of physics were studied, with emphasis on the observational interpretation. All cosmological models that seek to explain the cosmological observations must also conform to local phenomena, which poses stringent conditions on physically viable cosmological models. Predictions from a supergravity quintessence model were compared to Type Ia supernova data, and several metric gravity models were tested against local experimental results. Polytropic stellar configurations of solar-type, white dwarf and neutron stars were studied numerically with modified gravity models, the main interest being the spacetime around the stars. The results shed light on the viability of the studied cosmological models.
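
As a non-relativistic point of reference for the polytropic stellar configurations mentioned above, the sketch below integrates the classical Newtonian Lane–Emden equation for assumed polytropic indices. It is only an illustrative baseline, not the relativistic or modified-gravity calculation performed in the thesis.

```python
# Minimal sketch: Newtonian Lane-Emden equation for a polytrope of index n.
import numpy as np
from scipy.integrate import solve_ivp

def lane_emden(xi, y, n):
    theta, dtheta = y
    # theta is clipped at zero so non-integer n stays real near the surface
    return [dtheta, -max(theta, 0.0) ** n - 2.0 * dtheta / xi]

def solve_polytrope(n, xi_max=20.0):
    # Series start just off the centre to avoid the coordinate singularity at xi = 0:
    # theta ~ 1 - xi^2/6, dtheta ~ -xi/3
    xi0 = 1e-6
    y0 = [1.0 - xi0**2 / 6.0, -xi0 / 3.0]
    surface = lambda xi, y, n: y[0]          # integration stops where theta crosses zero
    surface.terminal = True
    sol = solve_ivp(lane_emden, (xi0, xi_max), y0, args=(n,),
                    events=surface, rtol=1e-8, atol=1e-10)
    return sol.t_events[0][0]                # xi_1, the dimensionless stellar radius

for n in (1.5, 3.0):                         # roughly white-dwarf-like and solar-like indices
    print(f"n = {n}: surface at xi_1 = {solve_polytrope(n):.4f}")
```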

Relevance: 100.00%

Abstract:

The serious neuropsychological repercussions of hepatic encephalopathy have led to the creation of several experimental models in order to better understand the pathogenesis of the disease. In the present investigation, two possible causes of hepatic encephalopathy, cholestasis and portal hypertension, were chosen to study the behavioral impairments caused by the disease using an object recognition task. This working memory test is based on a paradigm of spontaneous delayed non-matching to sample and was performed 60 days after surgery. Male Wistar rats (225-250 g) were divided into three groups: two experimental groups, microsurgical cholestasis (N = 20) and extrahepatic portal hypertension (N = 20), and a control group (N = 20). A mild alteration of the recognition memory occurred in rats with cholestasis compared to control rats and portal hypertensive rats. The latter group showed the poorest performance on the basis of the behavioral indexes tested. In particular, only the control group spent significantly more time exploring novel objects compared to familiar ones (P < 0.001). In addition, the portal hypertension group spent the shortest time exploring both the novel and familiar objects (P < 0.001). These results suggest that the existence of portosystemic collateral circulation per se may be responsible for subclinical encephalopathy.

Relevance: 100.00%

Abstract:

The two main objectives of Bayesian inference are to estimate parameters and states. In this thesis, we are interested in how this can be done in the framework of state-space models when there is a complete or partial lack of knowledge of the initial state of a continuous nonlinear dynamical system. In the literature, similar problems have been referred to as diffuse initialization problems. The first objective is addressed by extending the previously developed diffuse initialization Kalman filtering techniques for discrete systems to continuous systems. The second objective, parameter estimation, is addressed using MCMC methods with a likelihood function obtained from the diffuse filtering. These methods are applied to data collected from the 1995 Ebola outbreak in Kikwit, DRC, in order to estimate the parameters of the system.
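
A discrete-time, linear Gaussian sketch of the filtered-likelihood idea is given below. It only mimics a diffuse prior by inflating the initial covariance, whereas the thesis treats the exact diffuse limit and continuous nonlinear dynamics; the function name, `kappa` default and all matrices are assumptions of mine.

```python
# Minimal sketch: Kalman filter log-likelihood with an approximately diffuse
# initial state (large prior covariance), usable inside an MCMC sampler.
import numpy as np

def kalman_loglik(y, A, H, Q, R, m0=None, kappa=1e7):
    """y: (T, p) observations; A, H, Q, R: state-space matrices.
    kappa inflates the initial covariance to mimic a diffuse prior."""
    n = A.shape[0]
    m = np.zeros(n) if m0 is None else m0
    P = kappa * np.eye(n)                   # diffuse-ish initial covariance
    loglik = 0.0
    for yt in y:
        # Predict
        m = A @ m
        P = A @ P @ A.T + Q
        # Update
        v = yt - H @ m                      # innovation
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        m = m + K @ v
        P = P - K @ S @ K.T
        loglik += -0.5 * (len(yt) * np.log(2 * np.pi)
                          + np.linalg.slogdet(S)[1]
                          + v @ np.linalg.solve(S, v))
    return loglik
```

This filtered log-likelihood can then be plugged into a standard random-walk Metropolis sampler over the parameters, which is the role the diffuse filtering likelihood plays in the thesis.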

Relevance: 100.00%

Abstract:

The freezing times of fruit pulp models packed and conditioned in multi-layered boxes were evaluated under conditions similar to those employed commercially. Estimating the freezing time is difficult because of the significant voids in the boxes, whose influence may be analyzed by various methods. In this study, freezing times estimated with models described in the literature were compared with experimental measurements obtained by collecting time/temperature data. The results show that the airflow through the packages is a significant parameter for freezing time estimation. When the presence of preferential channels was considered, the predicted freezing time could be 10% lower than the experimental values, depending on the method. The isotherms traced as a function of the location of the samples inside the boxes showed a displacement of the thermal center in relation to the geometric center of the product.
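
As an illustration of the kind of literature model involved, the sketch below evaluates Plank's classical freezing-time estimate under assumed, generic property values. It is not the specific models or packaging geometry evaluated in the study, and it ignores the void and airflow effects that the results show to be significant.

```python
# Minimal sketch: Plank's freezing-time estimate with assumed product properties.
def plank_freezing_time(rho, latent_heat, T_freeze, T_medium, D, h, k, shape="slab"):
    """Freezing time in seconds.
    rho: product density [kg/m^3]; latent_heat: [J/kg];
    T_freeze, T_medium: [degC]; D: characteristic thickness [m];
    h: surface heat-transfer coefficient [W/m^2 K]; k: frozen-product conductivity [W/m K]."""
    P, R = {"slab": (1/2, 1/8), "cylinder": (1/4, 1/16), "sphere": (1/6, 1/24)}[shape]
    return rho * latent_heat / (T_freeze - T_medium) * (P * D / h + R * D**2 / k)

# Illustrative (assumed) properties for a fruit-pulp-like product in a 2 cm slab pack:
t = plank_freezing_time(rho=1000, latent_heat=300e3, T_freeze=-1.0, T_medium=-30.0,
                        D=0.02, h=25.0, k=1.6)
print(f"Estimated freezing time: {t / 3600:.1f} h")
```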

Relevance: 100.00%

Abstract:

The aim of this work was to calibrate the material properties, including strength and strain values, for the different material zones of ultra-high strength steel (UHSS) welded joints under monotonic static loading. UHSS is heat sensitive and softens due to the heat input of welding; the affected region is the heat-affected zone (HAZ). Cylindrical specimens were cut from welded joints of Strenx® 960 MC and Strenx® Tube 960 MH and examined by tensile testing, and the hardness values across the specimens’ cross sections were measured. Initial material properties were obtained from correlations between hardness and strength. Specimens of the same size, with the same material zones as the real specimens, were created and defined in the finite element method (FEM) software Abaqus 6.14-1, with loading and boundary conditions defined according to the tensile tests. Using the initial material properties derived from the hardness–strength correlations (true stress–strain values) as the main Abaqus input, FEM was used to simulate the tensile test. By comparing the FEM results with the measured tensile test results, the initial material properties were revised and reused as software input until the FEM and tensile test results deviated minimally, i.e. until the properties were fully calibrated. Two different types of S960 were used: 960 MC plates and a structural hollow section 960 MH X-joint, welded with Böhler™ X96 filler material. In welded joints the following zones typically appear: weld (WEL), coarse-grained (HCG) and fine-grained (HFG) heat-affected zone, annealed zone, and base material (BaM). The results showed that the HAZ is softened by the heat input of welding; for all specimens the softened zone’s strength is reduced, making it the weakest zone, where fracture occurs under loading. The stress concentration of a notched specimen can represent the properties of the notched zone. With the material properties calibrated by combining the two hardness–strength correlations, the load–displacement curves from the FEM model match the experiments.
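
A minimal, self-contained sketch of the calibration loop described above is given below. It is not the thesis workflow, which drives Abaqus: here the zone strengths are initialised from an assumed 3.2·HV hardness-strength rule of thumb and a single correction factor is adjusted until a crude surrogate tensile simulation reproduces an assumed measured peak force. All hardness values, the target force and the surrogate model are illustrative assumptions.

```python
# Minimal sketch of iterative zone-strength calibration against a tensile test.
import numpy as np
from scipy.optimize import minimize_scalar

def strength_from_hardness(hv):
    # Rule-of-thumb correlation assumed here: Rm ~ 3.2 * HV [MPa].
    return 3.2 * hv

def simulate_peak_force(zone_strengths_mpa, area_mm2=78.5):
    # Surrogate for the FEM tensile simulation: the joint fails at its weakest zone.
    return min(zone_strengths_mpa.values()) * area_mm2 / 1000.0   # peak force [kN]

zone_hv = {"WEL": 380, "HCG": 330, "HFG": 310, "annealed": 270, "BaM": 350}  # assumed HV values
measured_peak_force_kn = 70.0                                               # assumed test value

def error(scale):
    strengths = {z: scale * strength_from_hardness(hv) for z, hv in zone_hv.items()}
    return (simulate_peak_force(strengths) - measured_peak_force_kn) ** 2

res = minimize_scalar(error, bounds=(0.5, 1.5), method="bounded")
print(f"calibrated correction factor on the hardness-strength correlation: {res.x:.3f}")
```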

Relevance: 100.00%

Abstract:

This Master’s thesis analyses the effectiveness of different hedging models for the BRICS (Brazil, Russia, India, China, and South Africa) countries. Hedging performance is examined by comparing two dynamic hedging models to a conventional OLS regression-based model. The dynamic hedging models employed are Constant Conditional Correlation (CCC) GARCH(1,1) and Dynamic Conditional Correlation (DCC) GARCH(1,1) with Student’s t-distribution. In order to capture both the Great Moderation and the latest financial crisis, the sample period extends from 2003 to 2014. To determine whether the dynamic models outperform the conventional one, the reduction of portfolio variance for in-sample data with contemporaneous hedge ratios is first determined, and the holding period of the portfolios is then extended to one and two days. In addition, the accuracy of the hedge ratio forecasts is examined on the basis of out-of-sample variance reduction. The results are mixed and suggest that dynamic hedging models may not provide enough benefit to justify the more demanding estimation and daily portfolio adjustment. In this sense, the results are consistent with the existing literature.
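
The benchmark static hedge can be summarised in a few lines. The sketch below (my own illustration on synthetic data, not the thesis code) computes the OLS hedge ratio Cov(spot, futures)/Var(futures) and the in-sample variance reduction; the CCC/DCC GARCH alternatives replace this constant ratio with a time-varying one built from conditional variances and correlations.

```python
# Minimal sketch: OLS hedge ratio and hedging effectiveness (variance reduction).
import numpy as np

def ols_hedge_ratio(spot_returns, futures_returns):
    cov = np.cov(spot_returns, futures_returns)
    return cov[0, 1] / cov[1, 1]

def variance_reduction(spot_returns, futures_returns, hedge_ratio):
    hedged = spot_returns - hedge_ratio * futures_returns
    return 1.0 - np.var(hedged) / np.var(spot_returns)

# Synthetic stand-in for correlated index/futures returns:
rng = np.random.default_rng(0)
f = rng.normal(0, 0.01, 2500)
s = 0.9 * f + rng.normal(0, 0.004, 2500)
h = ols_hedge_ratio(s, f)
print(f"hedge ratio: {h:.3f}, variance reduction: {variance_reduction(s, f, h):.1%}")
```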

Relevance: 100.00%

Abstract:

This thesis concerns the analysis of epidemic models. We adopt the Bayesian paradigm and develop suitable Markov chain Monte Carlo (MCMC) algorithms. This is done by considering the 1995 Ebola outbreak in the Democratic Republic of Congo (former Zaïre) as a case study for SEIR epidemic models. We model the Ebola epidemic deterministically using ODEs and stochastically through SDEs in order to take into account a possible bias in each compartment. Since the model has unknown parameters, we use different methods to estimate them, such as least squares, maximum likelihood and MCMC. The motivation for choosing MCMC over the other methods in this thesis is its ability to tackle complicated nonlinear problems with a large number of parameters. First, in the deterministic Ebola model, we compute the likelihood function with the sum-of-squared-residuals method and estimate the parameters using the least squares (LSQ) and MCMC methods. We sample the parameters and then use them to calculate the basic reproduction number and to study the disease-free equilibrium. From the sampled posterior chain, we run convergence diagnostics and confirm the viability of the model. The results show that the Ebola model fits the observed onset data with high precision, and all the unknown model parameters are well identified. Second, we convert the ODE model into an SDE Ebola model. We compute the likelihood function using the extended Kalman filter (EKF) and estimate the parameters again. The motivation for the SDE formulation is to consider the impact of modelling errors; moreover, the EKF approach allows us to formulate a filtered likelihood for the parameters of such a stochastic model. We use the MCMC procedure to obtain the posterior distributions of the parameters in the drift and diffusion parts of the SDE Ebola model. In this thesis, we analyse two cases: (1) the model error covariance matrix of the dynamic noise is close to zero, i.e. only a small amount of stochasticity is added to the model; the results are then similar to those obtained from the deterministic Ebola model, even though the methods of computing the likelihood function are different; (2) the model error covariance matrix is clearly different from zero, i.e. considerable stochasticity is introduced into the Ebola model, which accounts for the situation where we know the model is not exact. As a result, we obtain parameter posteriors with larger variances, and the model predictions consequently show larger uncertainties, in accordance with the assumption of an incomplete model.
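
A minimal sketch of the deterministic step is given below (assumed parameter values, a synthetic stand-in for the onset data, and a Gaussian likelihood built from the sum of squared residuals; not the thesis code): a SEIR ODE model is fitted with a short random-walk Metropolis sampler, and R0 ≈ β/γ is read off the chain.

```python
# Minimal sketch: SEIR ODE model, SSR likelihood and random-walk Metropolis.
import numpy as np
from scipy.integrate import solve_ivp

def seir_rhs(t, y, beta, sigma, gamma):
    S, E, I, R = y
    N = S + E + I + R
    return [-beta * S * I / N,
            beta * S * I / N - sigma * E,
            sigma * E - gamma * I,
            gamma * I]

def daily_onsets(theta, t_obs, y0):
    beta, sigma, gamma = theta
    sol = solve_ivp(seir_rhs, (t_obs[0], t_obs[-1]), y0,
                    args=(beta, sigma, gamma), t_eval=t_obs)
    return sigma * sol.y[1]                      # onset rate ~ flow from E to I

def ssr(theta, t_obs, y_obs, y0):
    return np.sum((daily_onsets(theta, t_obs, y0) - y_obs) ** 2)

# Synthetic stand-in for the onset series (assumed small effective population):
t_obs = np.arange(0.0, 150.0)
y0 = [1000.0, 0.0, 1.0, 0.0]
true = np.array([0.35, 1 / 5.3, 1 / 5.6])        # beta, sigma, gamma (assumed values)
rng = np.random.default_rng(1)
y_obs = daily_onsets(true, t_obs, y0) + rng.normal(0, 1.0, t_obs.size)

# Random-walk Metropolis on a Gaussian likelihood built from the SSR:
theta, s2 = true.copy(), 1.0
ssr_curr = ssr(theta, t_obs, y_obs, y0)
chain = []
for _ in range(2000):
    prop = theta + rng.normal(0, 0.005, 3)
    if np.all(prop > 0):
        ssr_prop = ssr(prop, t_obs, y_obs, y0)
        if np.log(rng.uniform()) < -(ssr_prop - ssr_curr) / (2 * s2):
            theta, ssr_curr = prop, ssr_prop
    chain.append(theta.copy())
chain = np.array(chain)
print("posterior means (beta, sigma, gamma):", chain.mean(axis=0))
print("basic reproduction number R0 ~ beta/gamma:", np.mean(chain[:, 0] / chain[:, 2]))
```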

Relevance: 100.00%

Abstract:

In this paper, we propose exact inference procedures for asset pricing models that can be formulated in the framework of a multivariate linear regression (CAPM), allowing for stable error distributions. The normality assumption on the distribution of stock returns is usually rejected in empirical studies, due to excess kurtosis and asymmetry. To model such data, we propose a comprehensive statistical approach which allows for alternative, possibly asymmetric, heavy-tailed distributions without the use of large-sample approximations. The methods suggested are based on Monte Carlo test techniques. Goodness-of-fit tests are formally incorporated to ensure that the error distributions considered are empirically sustainable, and from these, exact confidence sets for the unknown tail area and asymmetry parameters of the stable error distribution are derived. Tests for the efficiency of the market portfolio (zero intercepts) which explicitly allow for the presence of (unknown) nuisance parameters in the stable error distribution are also derived. The methods proposed are applied to monthly returns on 12 portfolios of the New York Stock Exchange over the period 1926-1995 (5-year subperiods). We find that stable, possibly skewed, distributions provide a statistically significant improvement in goodness-of-fit and lead to fewer rejections of the efficiency hypothesis.
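
The core of a Monte Carlo test can be illustrated in a few lines. The sketch below is my own toy version, not the paper's statistic: a zero-intercept CAPM regression is simulated under the null with alpha-stable errors (assumed tail and skewness parameters), and the observed intercept t-statistic is ranked among the simulated ones to give an exact finite-sample p-value. In the paper, the unknown nuisance tail/asymmetry parameters are treated explicitly rather than fixed as they are here.

```python
# Minimal sketch: Monte Carlo p-value for a zero-intercept test with stable errors.
import numpy as np
from scipy.stats import levy_stable

def mc_pvalue(stat_obs, stat_fn, simulate_null, n_rep=99, rng=None):
    rng = rng or np.random.default_rng(0)
    sims = np.array([stat_fn(simulate_null(rng)) for _ in range(n_rep)])
    # Monte Carlo p-value: rank of the observed statistic among the simulated ones.
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

T = 120
market = np.random.default_rng(1).normal(0.005, 0.04, T)

def simulate_null(rng):
    # Null model: zero intercept, beta 1.1, alpha-stable errors (assumed alpha, beta).
    return 1.1 * market + 0.01 * levy_stable.rvs(1.7, -0.2, size=T, random_state=rng)

def intercept_tstat(returns):
    X = np.column_stack([np.ones(T), market])
    coef, res, *_ = np.linalg.lstsq(X, returns, rcond=None)
    s2 = res[0] / (T - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
    return abs(coef[0]) / se

observed = simulate_null(np.random.default_rng(2)) + 0.002   # data with a small mispricing
print("MC p-value for zero intercept:",
      mc_pvalue(intercept_tstat(observed), intercept_tstat, simulate_null))
```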

Relevance: 100.00%

Abstract:

My thesis consists of three chapters related to the estimation of state-space and stochastic volatility models. In the first article, we develop a computationally efficient state-smoothing procedure for linear Gaussian state-space models. We show how to exploit the particular structure of state-space models in order to draw the latent states efficiently. We analyse the computational efficiency of methods based on the Kalman filter, the Cholesky factor algorithm and our new method, using operation counts and computational experiments. We show that for many important cases our method is more efficient. The gains are particularly large when the dimension of the observed variables is large or when repeated draws of the states are needed for the same parameter values. As an application, we consider a multivariate Poisson model with time-varying intensities, which is used to analyse transaction count data from financial markets. In the second chapter, we propose a new technique for analysing multivariate stochastic volatility models. The proposed method is based on drawing the volatility efficiently from its conditional density given the parameters and the data. Our methodology applies to models with several types of cross-sectional dependence. We can model time-varying conditional correlation matrices by incorporating factors into the returns equation, where the factors are independent stochastic volatility processes. We can incorporate copulas to allow conditional dependence of the returns given the volatility, permitting different Student-t marginals with return-specific degrees of freedom to capture the heterogeneity of the returns. The volatility is drawn as a block in the time dimension and one at a time in the cross-sectional dimension. We apply the method introduced by McCausland (2012) to obtain a good approximation of the conditional posterior distribution of the volatility of one return given the volatilities of the other returns, the parameters and the dynamic correlations. The model is evaluated using real data for ten exchange rates. We report results for univariate stochastic volatility models and for two multivariate models. In the third chapter, we evaluate the information contributed by realized volatility to the estimation and forecasting of volatility when prices are measured with and without error, using stochastic volatility models. We take the point of view of an investor for whom volatility is an unknown latent variable and realized volatility is a sample quantity that contains information about it. We use Bayesian Markov chain Monte Carlo methods to estimate the models, which allow us to form not only posterior densities of the volatility but also predictive densities of future volatility. We compare volatility forecasts, and the hit rates of forecasts, that do and do not use the information contained in realized volatility. This approach differs from those in the existing empirical literature, which mostly restrict themselves to documenting the ability of realized volatility to forecast itself. We present empirical applications using daily returns of indices and exchange rates. The competing models are applied to the second half of 2008, a notable period in the recent financial crisis.
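
The first chapter's theme of drawing latent states efficiently can be illustrated with a toy precision-based sampler for a Gaussian local-level model (my own sketch in the spirit of Cholesky-factor/precision methods, not the thesis algorithm; the model, variances and values are assumptions): the joint posterior of the states is Gaussian with a banded precision matrix, so a draw requires only one Cholesky factorisation and two triangular solves.

```python
# Minimal sketch: precision-based draw of the latent states of a local-level model.
import numpy as np

def sample_states(y, sigma_obs, sigma_state, rng):
    """Draw alpha_{1:T} | y for y_t = alpha_t + noise, alpha_t a random walk."""
    T = len(y)
    q, r = 1.0 / sigma_state**2, 1.0 / sigma_obs**2
    # Banded (tridiagonal) posterior precision: observation precision plus
    # the random-walk smoothness penalty q * D'D.
    Omega = np.zeros((T, T))
    idx = np.arange(T)
    Omega[idx, idx] = r + 2 * q
    Omega[0, 0] = Omega[-1, -1] = r + q
    Omega[idx[:-1], idx[1:]] = Omega[idx[1:], idx[:-1]] = -q
    c = r * y                                     # canonical mean vector
    L = np.linalg.cholesky(Omega)
    mean = np.linalg.solve(L.T, np.linalg.solve(L, c))
    # Draw: mean + L^{-T} z with z ~ N(0, I) has covariance Omega^{-1}.
    return mean + np.linalg.solve(L.T, rng.standard_normal(T))

rng = np.random.default_rng(3)
true_states = np.cumsum(rng.normal(0, 0.1, 300))
y = true_states + rng.normal(0, 0.5, 300)
draw = sample_states(y, sigma_obs=0.5, sigma_state=0.1, rng=rng)
print("posterior draw RMSE vs truth:", np.sqrt(np.mean((draw - true_states) ** 2)))
```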

Relevance: 100.00%

Abstract:

This study is about the analysis of some queueing models related to the N-policy, under which a single server is turned on when the queue size reaches a certain number N and turned off when the system becomes empty; of interest is the optimal value of N. In one model the operating policy is the usual N-policy but with random N; a second model considers a similar system. The study also analyses a tandem queue with two servers, where the first server is assumed to be a specialized one. Under an N-policy, the server remains on vacation after becoming idle until N units have accumulated for the first time. A modified version of the N-policy for an M/M/1 queueing system is considered as well. The novel feature of this model is that a busy service unit prevents new customers from accessing servers further down the line. Finally, a queueing model consisting of two servers connected in series with a finite intermediate waiting room of capacity k is studied, again assuming that server I is a specialized server. For this model, the steady-state probability vector and the stability condition are obtained using the matrix-geometric method.
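
A minimal sketch (my own illustration, not the thesis analysis, which is analytical): a discrete-event simulation of an M/M/1 queue under the classical N-policy, useful as a numerical check on steady-state results. For these values the classical decomposition E[L] = ρ/(1−ρ) + (N−1)/2 suggests a long-run mean of about 3.5 customers in the system.

```python
# Minimal sketch: discrete-event simulation of an M/M/1 queue under the N-policy.
import random

def simulate_n_policy(lam, mu, N, horizon=200_000.0, seed=0):
    rng = random.Random(seed)
    t, queue, server_on = 0.0, 0, False
    next_arrival = rng.expovariate(lam)
    next_departure = float("inf")
    area = 0.0                                   # time-integral of the number in system
    while t < horizon:
        t_next = min(next_arrival, next_departure)
        area += queue * (t_next - t)
        t = t_next
        if t == next_arrival:
            queue += 1
            next_arrival = t + rng.expovariate(lam)
            if not server_on and queue >= N:     # N-policy: switch the server on
                server_on = True
                next_departure = t + rng.expovariate(mu)
        else:
            queue -= 1
            if queue == 0:                       # system empty: switch the server off
                server_on = False
                next_departure = float("inf")
            else:
                next_departure = t + rng.expovariate(mu)
    return area / t                              # long-run mean number in system

print("mean number in system, N = 5:", simulate_n_policy(lam=0.6, mu=1.0, N=5))
```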

Relevance: 100.00%

Abstract:

The Mann–Kendall non-parametric test was employed for observational trend detection of monthly, seasonal and annual precipitation of five meteorological subdivisions of Central Northeast India (CNE India) for different 30-year normal periods (NP), viz. 1889–1918 (NP1), 1919–1948 (NP2), 1949–1978 (NP3) and 1979–2008 (NP4). The trends of maximum and minimum temperatures were also investigated. The slopes of the trend lines were determined using least-squares linear fitting. Morlet wavelet analysis was applied to monthly rainfall during June–September, total monsoon-season rainfall and annual rainfall in order to identify periodicities and to test their significance using the power spectrum method. The inferences drawn from the analyses will be helpful to policy managers, planners and agricultural scientists in working out irrigation and water management options under various possible climatic eventualities for the region. The long-term (1889–2008) mean annual rainfall of CNE India is 1,195.1 mm with a standard deviation of 134.1 mm and a coefficient of variation of 11%. There is a significant decreasing trend of 4.6 mm/year for Jharkhand and 3.2 mm/year for CNE India. Since rice is the important kharif crop (May–October) in this region, the decreasing trend of rainfall during the month of July may delay or affect the transplanting/vegetative phase of the crop, and assured irrigation is very much needed to tackle drought situations. During the month of December, all the meteorological subdivisions except Jharkhand show a significant decreasing trend of rainfall during the recent normal period NP4. The decrease of rainfall during December may hamper the sowing of wheat, the important rabi crop (November–March) in most parts of this region. Maximum temperature shows a significant rising trend of 0.008°C/year (at the 0.01 level) during the monsoon season and 0.014°C/year (at the 0.01 level) during the post-monsoon season over the period 1914–2003. The annual maximum temperature also shows a significant increasing trend of 0.008°C/year (at the 0.01 level) during the same period. Minimum temperature shows a significant rising trend of 0.012°C/year (at the 0.01 level) during the post-monsoon season and a significant falling trend of 0.002°C/year (at the 0.05 level) during the monsoon season. A significant 4–8 year peak periodicity band has been noticed during September over Western UP, and a 30–34 year periodicity has been observed during July over the Bihar subdivision. However, as far as CNE India as a whole is concerned, no significant periodicity has been noticed in any of the time series.
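
For reference, a minimal sketch of the Mann–Kendall test (my own illustration, using the normal approximation without a tie correction) together with a least-squares slope, applied to a synthetic annual rainfall series with roughly the CNE India mean and variability:

```python
# Minimal sketch: Mann-Kendall trend test and least-squares trend slope.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Return the Mann-Kendall S statistic, the Z score (normal approximation,
    no tie correction) and the two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z, 2 * (1 - norm.cdf(abs(z)))

def trend_slope(x):
    years = np.arange(len(x))
    return np.polyfit(years, x, 1)[0]            # least-squares slope, units per year

# Synthetic stand-in for an annual rainfall series with a weak decreasing trend:
rng = np.random.default_rng(4)
rain = 1195.0 - 3.0 * np.arange(120) + rng.normal(0, 134.0, 120)
s, z, p = mann_kendall(rain)
print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.3f}, slope = {trend_slope(rain):.1f} mm/year")
```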