10 results for PROPORTIONAL HAZARD AND ACCELERATED FAILURE MODELS

in Helda - Digital Repository of the University of Helsinki


Relevance: 100.00%

Abstract:

This paper examines how volatility in financial markets can best be modelled. Specifically, it investigates how well linear and nonlinear volatility models absorb skewness and kurtosis. The examination covers the Nordic stock markets of Finland, Sweden, Norway and Denmark. Different linear and nonlinear models are applied, and the results indicate that a linear model can almost always be used to model the series under investigation, even though nonlinear models perform slightly better in some cases. These results indicate that the markets under study are exposed to asymmetric patterns only to a limited degree. Negative shocks generally have a more prominent effect on the markets, but these effects are not particularly strong. However, in terms of absorbing skewness and kurtosis, nonlinear models outperform linear ones.
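The linear versus nonlinear distinction in this abstract typically corresponds to symmetric GARCH-type models versus asymmetric variants such as GJR-GARCH, in which negative shocks receive extra weight. The abstract names no specific model, so the choice of GARCH(1,1)/GJR-GARCH(1,1) below, and all parameter values, are illustrative assumptions; this is a minimal sketch of the two conditional-variance recursions, not the paper's estimation:

```python
# Conditional-variance recursions: symmetric GARCH(1,1) (a "linear" model)
# vs. asymmetric GJR-GARCH(1,1), where negative shocks get extra weight.
# Parameter values are illustrative, not estimated from Nordic data.

def garch_variance(returns, omega=0.05, alpha=0.1, beta=0.85):
    """Symmetric GARCH(1,1): h_t = omega + alpha*r_{t-1}**2 + beta*h_{t-1}."""
    h = [omega / (1 - alpha - beta)]  # start at the unconditional variance
    for r in returns[:-1]:
        h.append(omega + alpha * r**2 + beta * h[-1])
    return h

def gjr_variance(returns, omega=0.05, alpha=0.05, gamma=0.1, beta=0.85):
    """GJR-GARCH(1,1): negative returns contribute an extra gamma*r**2 term."""
    h = [omega / (1 - alpha - gamma / 2 - beta)]
    for r in returns[:-1]:
        leverage = gamma * r**2 if r < 0 else 0.0
        h.append(omega + alpha * r**2 + leverage + beta * h[-1])
    return h

returns = [0.5, -1.2, 0.3, -0.8, 1.0]   # invented return series
h_sym = garch_variance(returns)
h_asym = gjr_variance(returns)
# After the negative shock at t=1, the asymmetric model raises the
# next-period variance more than the symmetric model does.
```

The asymmetric term is what lets the model capture the abstract's finding that negative shocks have a more prominent, though not very strong, effect.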

Relevance: 100.00%

Abstract:

This thesis presents an interdisciplinary analysis of how models and simulations function in the production of scientific knowledge. The work is informed by three scholarly traditions: studies on models and simulations in the philosophy of science, so-called micro-sociological laboratory studies within science and technology studies, and cultural-historical activity theory. Methodologically, I adopt a naturalist epistemology and combine philosophical analysis with a qualitative, empirical case study of infectious-disease modelling. The study maintains a dual perspective throughout the analysis: it specifies the modelling practices and examines the models as objects of research. The research questions addressed are: 1) How are models constructed and what functions do they have in the production of scientific knowledge? 2) What is interdisciplinarity in model construction? 3) How do models become a general research tool and why is this process problematic? The core argument is that mediating models as investigative instruments (cf. Morgan and Morrison 1999) take questions as a starting point, and hence their construction is intentionally guided. This argument applies the interrogative model of inquiry (e.g., Sintonen 2005; Hintikka 1981), which conceives of all knowledge acquisition as a process of seeking answers to questions. The first question addresses simulation models as Artificial Nature, which is manipulated in order to answer the questions that initiated the model building. This account develops further the "epistemology of simulation" (cf. Winsberg 2003) by showing the interrelatedness of researchers and their objects in the process of modelling. The second question clarifies why interdisciplinary research collaboration is demanding and difficult to maintain.
The nature of the impediments to disciplinary interaction is examined by introducing the idea of object-oriented interdisciplinarity, which provides an analytical framework to study the changes in the degree of interdisciplinarity, the tools and research practices developed to support the collaboration, and the mode of collaboration in relation to the historically mutable object of research. As my interest is in models as interdisciplinary objects, the third research problem asks how we might characterise these objects, what is typical of them, and what kinds of changes happen in the process of modelling. Here I examine the tension between specified, question-oriented models and more general models, and suggest that the specified models form a group of their own. I call these Tailor-made models, in opposition to the process of building a simulation platform that aims at generalisability and utility for health policy. This tension also underlines the challenge of applying research results (or methods and tools) to discuss and solve problems in decision-making processes.

Relevance: 100.00%

Abstract:

Glaucoma is a multifactorial long-term ocular neuropathy associated with progressive loss of the visual field, structural abnormalities of the retinal nerve fibre layer, and optic disc changes. Like arterial hypertension, it is usually a symptomless disease, but if left untreated it leads to visual disability and eventual blindness. All therapies currently in use aim to lower intraocular pressure (IOP) in order to minimize cell death. Drugs with new mechanisms of action could protect glaucomatous eyes against blindness. The renin-angiotensin system (RAS) is known to regulate systemic blood pressure, and compounds acting on it are in wide clinical use in the treatment of hypertension and heart failure, but not yet in ophthalmological use. There are only a few previous studies concerning intraocular RAS, though evidence is accumulating that drugs antagonizing RAS can also lower IOP, the only treatable risk factor in glaucoma. The main aim of this experimental study was to clarify the expression of the renin-angiotensin system in eye tissues and to test its potential oculohypotensive effects and mechanisms. In addition, the possible relationship between the development of hypertension and IOP was evaluated in animal models. In conclusion, a novel angiotensin receptor type (Mas), as well as the ACE2 enzyme that produces agonists for Mas, were described for the first time in the eye structures participating in the regulation of IOP. In addition, a Mas receptor agonist significantly reduced even normal IOP, and the effect was abolished by a specific receptor antagonist. Intraocular, local RAS would thus seem to be involved in the regulation of IOP, probably even more so in pathological conditions such as glaucoma, though there was no unambiguous relationship between arterial and ocular hypertension. The findings suggest the potential as antiglaucomatous drugs of agents which increase ACE2 activity and the formation of angiotensin (1-7), or which activate Mas receptors.

Relevance: 100.00%

Abstract:

This work develops methods to account for shoot structure in models of coniferous canopy radiative transfer. Shoot structure, as it varies along the light gradient inside the canopy, affects the efficiency of light interception per unit needle area, foliage biomass, or foliage nitrogen. The clumping of needles in the shoot volume also causes a notable amount of multiple scattering of light within coniferous shoots. The effect of shoot structure on light interception is treated in the context of canopy-level photosynthesis and resource use models, and the phenomenon of within-shoot multiple scattering in the context of physical canopy reflectance models for remote sensing purposes. Light interception. A method for estimating the amount of PAR (Photosynthetically Active Radiation) intercepted by a conifer shoot is presented. The method combines modelling of the directional distribution of radiation above the canopy, fish-eye photographs taken at shoot locations to measure canopy gap fraction, and geometrical measurements of shoot orientation and structure. Data on light availability, shoot and needle structure, and nitrogen content have been collected from canopies of Pacific silver fir (Abies amabilis (Dougl.) Forbes) and Norway spruce (Picea abies (L.) Karst.). Shoot structure acclimated to the light gradient inside the canopy so that more shaded shoots had better light interception efficiency. Light interception efficiency of shoots varied about two-fold per needle area, about four-fold per needle dry mass, and about five-fold per nitrogen content. Comparison of fertilized and control stands of Norway spruce indicated that light interception efficiency is not greatly affected by fertilization. Light scattering. The structure of coniferous shoots gives rise to multiple scattering of light between the needles of the shoot. Using geometric models of shoots, multiple scattering was studied by photon tracing simulations.
Based on the simulation results, the dependence of the scattering coefficient of a shoot on the scattering coefficient of its needles is shown to follow a simple one-parameter model. The single parameter, termed the recollision probability, describes the degree of clumping of the needles in the shoot, is wavelength independent, and can be connected to previously used clumping indices. By using the recollision probability to correct for within-shoot multiple scattering, canopy radiative transfer models that have used leaves as basic elements can instead use shoots as basic elements, and can thus be applied to coniferous forests. Preliminary testing of this approach seems to explain, at least partially, why coniferous forests appear darker than broadleaved forests in satellite data.
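The one-parameter model above can be sketched concretely. The abstract does not state the exact expression, so the closed form used here is an assumption: if a scattered photon hits another needle of the same shoot with recollision probability p, and each needle interaction scatters a fraction w, summing over scattering orders gives a geometric series that closes to w(1-p)/(1-pw):

```python
# One-parameter recollision model for within-shoot multiple scattering.
# A photon scattered by a needle interacts with another needle of the
# same shoot with probability p (the recollision probability); each
# interaction scatters a fraction w (the needle scattering coefficient).

def shoot_scattering(w, p, orders=None):
    """Shoot-level scattering coefficient from the needle coefficient w.

    With infinitely many scattering orders, the series
    w*(1-p) * sum_k (p*w)**k closes to w*(1-p) / (1 - p*w).
    """
    if orders is None:
        return w * (1 - p) / (1 - p * w)
    # Finite-order partial sum, to check convergence to the closed form.
    return sum(w * (1 - p) * (p * w) ** k for k in range(orders))

w_needle = 0.55   # illustrative needle scattering coefficient (assumption)
p_shoot = 0.4     # illustrative recollision probability (assumption)

closed = shoot_scattering(w_needle, p_shoot)
partial = shoot_scattering(w_needle, p_shoot, orders=50)
# Clumping (p > 0) lowers the effective shoot-level scattering
# coefficient below the needle value, consistent with coniferous
# canopies appearing darker than broadleaved ones.
```

Note that p is a purely geometric quantity, which is why the same parameter can be wavelength independent while the resulting shoot coefficient still varies with wavelength through w.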

Relevance: 100.00%

Abstract:

This dissertation consists of four essays on questions in empirical labour economics. The first essay examines the effect of the level of unemployment benefits on re-employment in Finland. In 2003, earnings-related unemployment benefits were increased for workers with long employment histories. The increase averaged 15% and applied to the first 150 days of unemployment. The effect of the increase is estimated by comparing re-employment probabilities between the group that received the increase and a comparison group, before and after the reform. The results show that the benefit increase significantly lowered the probability of re-employment, on average by about 16%. The effect is largest at the beginning of the unemployment spell and disappears once entitlement to the increased earnings-related benefit ends. The second essay studies the long-term costs of unemployment in Finland, focusing on the deep recession of 1991-1993. During the recession, plant closures increased substantially and the unemployment rate rose by more than 13 percentage points. The study compares prime-working-age men who became unemployed through plant closures during the recession with men who remained employed. The effect of unemployment is examined over a six-year follow-up period. In 1999, the annual earnings of the group that had experienced unemployment during the recession were on average 25% lower than those of the comparison group. The earnings loss was due to both lower employment and lower wage levels. The third essay examines the unemployment problem caused by Finland's recession of the early 1990s by studying the determinants of unemployment duration at the individual level. The focus is on how changes in the composition of unemployment and in labour demand affected average duration. It is often assumed that those who become unemployed in a recession are harder to re-employ than average, which in itself would lengthen the average duration of unemployment.
The results show that the macro-level demand effect was central to unemployment duration, while compositional changes had only a small duration-increasing effect during the recession. The final essay studies the effect of business-cycle fluctuations on the incidence of workplace accidents. The study uses Swedish individual-level hospital records linked to a population database. These data make it possible to examine alternative explanations for the increase in accidents during booms, which has been attributed to, for example, stress or haste. The results show that workplace accidents are cyclical, but only for certain groups. Variation in the composition of the workforce may explain part of the cyclicality of women's accidents. For men, only less severe accidents are cyclical, which may reflect strategic behaviour.
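The first essay's identification strategy is a before/after comparison between a treated and a comparison group, i.e. a difference-in-differences design. A minimal sketch of the estimator with invented group means (none of the numbers below come from the dissertation):

```python
# Difference-in-differences (DiD) with illustrative group means.
# Inputs: mean re-employment rates before / after the 2003 reform for
# the treated group (benefit increase) and the comparison group.
# All numbers are invented for illustration.

def did(treated_before, treated_after, control_before, control_after):
    """DiD estimate: (change in treated group) minus (change in controls).

    The control group's change absorbs common time trends, so the
    difference of differences isolates the reform's effect under the
    parallel-trends assumption.
    """
    return (treated_after - treated_before) - (control_after - control_before)

effect = did(treated_before=0.40, treated_after=0.35,
             control_before=0.41, control_after=0.40)
# (-0.05) - (-0.01) = -0.04: the treated group's re-employment rate
# fell 4 percentage points more than the comparison group's.
```

The design rests on the parallel-trends assumption: absent the reform, both groups' re-employment rates would have moved in parallel.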

Relevance: 100.00%

Abstract:

This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of static and dynamic probit models in forecasting U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are considered with simulation experiments. Results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts and also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2-4 to the case of a bivariate model.
A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained and the bivariate model is found to outperform the univariate models in terms of predictive power.
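An autoregressive probit keeps a lag of the linear index inside the link function, so predicted recession probabilities evolve persistently rather than resetting each period. A minimal sketch of that recursion using the standard normal CDF (the coefficient values and predictor series are illustrative assumptions, not the thesis estimates):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def autoregressive_probit(x, omega=-0.5, alpha=0.8, beta=-0.6, pi0=0.0):
    """Dynamic probit with an autoregressive linear index:

        pi_t = omega + alpha * pi_{t-1} + beta * x_{t-1}
        P(y_t = 1) = Phi(pi_t)

    Coefficients are illustrative, not estimated from the data
    used in the thesis.
    """
    pi, probs = pi0, []
    for x_lag in x:
        pi = omega + alpha * pi + beta * x_lag
        probs.append(norm_cdf(pi))
    return probs

# x: a lagged financial predictor such as the term spread (invented values).
spread = [1.2, 0.8, 0.1, -0.4, -0.5]
recession_probs = autoregressive_probit(spread)
# With beta < 0, the negative spread values late in the sample push the
# index up, so the implied recession probability recovers from its minimum.
```

The alpha term is what distinguishes this from a static probit: past values of the index keep influencing current probabilities, giving the persistent recession-probability paths that the forecast comparisons exploit.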

Relevance: 100.00%

Abstract:

The question at issue in this dissertation is the epistemic role played by ecological generalizations and models. I investigate and analyze such properties of generalizations as lawlikeness, invariance, and stability, and I ask which of these properties are relevant in the context of scientific explanations. I claim that invariant and stable generalizations provide generalizable and reliable causal explanations in ecology. An invariant generalization continues to hold or be valid under a special change, called an intervention, that changes the value of its variables. Whether a generalization remains invariant under interventions is the criterion that determines whether it is explanatory. A generalization can be invariant and explanatory regardless of its lawlike status. Stability concerns the extent to which a generalization continues to hold across possible background conditions. The more stable a generalization, the less its truth depends on background conditions. Although it is invariance rather than stability of generalizations that furnishes us with explanatory generalizations, stability has an important function in this context of explanations: it furnishes us with the extrapolability and reliability of scientific explanations. I also discuss non-empirical investigations of models that I call robustness and sensitivity analyses. I call sensitivity analyses those investigations in which one model is studied with regard to its stability conditions by making changes and variations to the values of the model's parameters. As a general definition of robustness analyses I propose investigations of variations in the modelling assumptions of different models of the same phenomenon, in which the focus is on whether they produce similar or convergent results.
Robustness and sensitivity analyses are powerful tools for studying the conditions and assumptions under which models break down, and they are especially powerful in pointing out the reasons why they do so. They show which conditions or assumptions the results of models depend on. Key words: ecology, generalizations, invariance, lawlikeness, philosophy of science, robustness, explanation, models, stability

Relevance: 100.00%

Abstract:

Ecology and evolutionary biology is the study of life on this planet. One of the many methods applied to answering the great diversity of questions regarding the lives and characteristics of individual organisms is the use of mathematical models. Such models are used in a wide variety of ways. Some help us to reason, functioning as aids to, or substitutes for, our own fallible logic, thus making argumentation and thinking clearer. Models which help our reasoning can lead to conceptual clarification; by expressing ideas in algebraic terms, the relationship between different concepts becomes clearer. Other mathematical models are used to better understand yet more complicated models, or to develop mathematical tools for their analysis. Though they help us to reason and serve as tools in the craftsmanship of science, many models do not tell us much about the real biological phenomena we are, at least initially, interested in. The main reason for this is that any mathematical model is a simplification of the real world, reducing the complexity and variety of interactions and the idiosyncrasies of individual organisms. What such models can tell us, however, both is and has been very valuable throughout the history of ecology and evolution. Minimally, a model simplifying the complex world can tell us that the patterns produced in a model could, in principle, also be produced in the real world. We can never know how different a simplified mathematical representation is from the real world, but the similarity models strive for gives us confidence that their results could apply. This thesis deals with a variety of different models, used for different purposes. One model deals with how one can measure and analyse invasions, the expanding phase of invasive species. Earlier analyses claim to have shown that such invasions can be a regulated phenomenon, i.e. that higher invasion speeds at a given point in time will lead to a reduction in speed.
Two simple mathematical models show that analyses based on this particular measure of invasion speed need not provide evidence of regulation. In the context of dispersal evolution, two models acting as proofs of principle are presented. Parent-offspring conflict emerges when the evolutionary optima of adaptive behaviour differ between parents and offspring. We show that the evolution of dispersal distances can entail such a conflict, and that under parental control of dispersal (as, for example, in higher plants) wider dispersal kernels are optimal. We also show that dispersal homeostasis can be optimal: in a setting where dispersal decisions (to leave or stay in a natal patch) are made, strategies that divide their seeds or eggs into fixed fractions that disperse or not, as opposed to randomizing the decision for each seed, can prevail. We also present a model of the evolution of bet-hedging strategies: evolutionary adaptations that occur despite their fitness, on average, being lower than that of a competing strategy. Such strategies can win in the long run because their reduction in mean fitness is coupled with a reduced variance in fitness, and fitness is of a multiplicative nature across generations and therefore sensitive to variability. This model is used for conceptual clarification: by developing a population-genetic model with uncertain fitness and expressing genotypic variance in fitness as a product of individual-level variance and correlations between individuals of a genotype, we arrive at expressions that intuitively reflect two of the main categorizations of bet-hedging strategies, conservative vs. diversifying and within- vs. between-generation bet-hedging. In addition, this model shows that these divisions are in fact false dichotomies.
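The multiplicative-fitness argument behind bet-hedging can be made concrete: across generations fitness multiplies, so a genotype's long-run growth is governed by the geometric, not the arithmetic, mean of its per-generation fitness. A toy sketch with invented fitness values (the numbers are illustrative, not from the thesis):

```python
# Bet-hedging toy example: a strategy with lower arithmetic mean fitness
# but much lower variance can have the higher geometric mean fitness,
# and so wins in the long run under multiplicative growth.
# All fitness values are invented for illustration.
import math

def arithmetic_mean(ws):
    return sum(ws) / len(ws)

def geometric_mean(ws):
    """Geometric mean = exp(mean of log fitness); this is what governs
    long-run multiplicative growth across generations."""
    return math.exp(sum(math.log(w) for w in ws) / len(ws))

# Per-generation fitness in two equally likely environments:
risky = [1.60, 0.55]        # higher mean, high variance
bet_hedger = [1.15, 0.95]   # lower mean, much lower variance

# The risky strategy has the higher arithmetic mean, but the bet-hedger
# has the higher geometric mean: its lineage grows on average over many
# generations, while the risky lineage shrinks.
```

This is the sense in which a strategy can be favoured "despite its fitness, on average, being lower": the variance penalty enters through the log in the geometric mean, so trading mean for variance can raise long-run growth.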