13 results for Interest rates -- Australia -- Mathematical models.

in Helda - Digital Repository of University of Helsinki


Relevance: 100.00%

Abstract:

In cardiac myocytes (heart muscle cells), the coupling of the electrical signal known as the action potential to the contraction of the heart depends crucially on calcium-induced calcium release (CICR) in a microdomain known as the dyad. During CICR, the peak number of free calcium ions (Ca) present in the dyad is small, typically estimated to lie within the range 1-100. Since the free Ca ions mediate CICR, noise in Ca signaling due to the small number of free calcium ions influences excitation-contraction (EC) coupling gain. Noise in Ca signaling is only one of the noise types influencing cardiac myocytes; for example, the ion channels that play a central role in action potential propagation are stochastic machines, each of which gates more or less randomly, producing gating noise in membrane currents. How various noise sources influence the macroscopic properties of a myocyte, and how noise is attenuated or exploited, are largely open questions. In this thesis, the impact of noise on CICR, EC coupling and, more generally, the macroscopic properties of a cardiac myocyte is investigated at multiple levels of detail using mathematical models. Complementing the investigation of the impact of noise, computationally efficient yet spatially detailed models of CICR are developed. The results of this thesis show that (1) gating noise due to the high-activity mode of the L-type calcium channels playing a major role in CICR may induce early after-depolarizations associated with polymorphic tachycardia, a frequent precursor to sudden cardiac death in heart failure patients; (2) an increased level of voltage noise typically increases action potential duration and skews the distribution of action potential durations toward long durations in cardiac myocytes; and (3) while a small number of Ca ions mediate CICR, excitation-contraction coupling is robust against this noise source, partly due to the shape of the ryanodine receptor protein structures present in the cardiac dyad.
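The scale of this small-number stochasticity can be illustrated with a toy model (not one of the thesis's models; the two-state channel and its transition probabilities are illustrative assumptions): with N independent stochastic channels, fluctuations in the open fraction shrink roughly as 1/sqrt(N), so a microdomain governed by tens of channels is far noisier than a whole-cell average.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_open_fraction(n_channels, p_open=0.3, p_close=0.5, n_steps=2000):
    """Simulate n_channels independent two-state (closed/open) channels.

    Each time step, a closed channel opens with probability p_open and
    an open channel closes with probability p_close. Returns the time
    series of the fraction of channels that are open.
    """
    state = np.zeros(n_channels, dtype=bool)
    fractions = np.empty(n_steps)
    for t in range(n_steps):
        u = rng.random(n_channels)
        # Open channels stay open unless they close; closed ones may open.
        state = np.where(state, u >= p_close, u < p_open)
        fractions[t] = state.mean()
    return fractions

# Gating noise shrinks roughly as 1/sqrt(N): a small ensemble produces
# a much noisier "current" than a large one around the same mean.
small = simulate_open_fraction(20)
large = simulate_open_fraction(2000)
print(small.mean(), small.std())
print(large.mean(), large.std())
```

The steady-state open fraction in this sketch is p_open / (p_open + p_close); only the fluctuations around it differ between the two ensemble sizes.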

Relevance: 100.00%

Abstract:

This study compared the profitability of single-tree selection cuttings and small gap cuttings, which promote uneven-aged forest structure, with forest management according to silvicultural recommendations in Central Finland. Selection cuttings and small gap cuttings are methods that can increase the small-scale habitat variation characteristic of the disturbance dynamics of natural forests, which makes them especially suitable for special sites valued for biodiversity, landscape, or multiple forest uses. They usually lead gradually to an uneven-aged forest in which the diameter-class distribution of the growing stock resembles an inverted letter J. The economic profitability of uneven-aged forest management is supported by the avoidance of regeneration costs and by regularly recurring harvests concentrated on saw logs. However, the suitability of the method for Finnish conditions is considered uncertain. This study examined the conversion of an even-aged forest into an uneven-aged one over a 40-year transition period in the Isojärvi environmental value forest managed by Metsähallitus in Kuhmoinen. The study material consisted of 405 spruce-dominated even-aged stands, comprising 636 hectares of forest land. Forest development was simulated using tree-level growth models, and the treatments were simulated in five-year periods with the SIMO forest planning software. The simulations were used to determine, for each treatment scenario, the harvest removals by timber assortment, the discounted cash flows, and the change in growing stock capital over the study period. Unit costs of harvesting were calculated with the help of an automated monitoring system in which mobile phones installed in the forest machines collected acceleration data, GPS location data, and input data on machine use with the MobiDoc2 application. Finally, net present values were calculated for each treatment scenario with an expectation value formula describing the timber production value of the forest, from which the discounted harvesting costs were subtracted.
According to the results, at a 3 percent interest rate the NPV of selection cutting was about 91 % (7420 €/ha) and that of small gap cutting about 99 % (8076 €/ha) of the NPV of management according to silvicultural recommendations (8176 €/ha). Comparative statics showed that raising the interest rate to 5 percent did not substantially increase the differences in net present values. The unit harvesting costs of selection cuttings were 0.8 €/m³ lower than those of thinnings and 7.2 €/m³ higher than those of regeneration cuttings. The unit costs of small gap cuttings were 0.7 €/m³ higher than those of regeneration cuttings. Based on the results, it is inevitable that the transition from an even-aged to an uneven-aged forest causes economic losses, even when the cuttings are heavy and carried out in an advanced thinning-stage stand. The magnitude of the loss is the opportunity cost of maintaining continuous forest cover.
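The NPV comparison at different interest rates can be reproduced in miniature. The helper below is the standard discounting formula; the per-hectare cash-flow profiles are purely illustrative placeholders, not the study's data.

```python
def npv(cash_flows, rate):
    """Net present value of (year, cash flow in EUR/ha) pairs."""
    return sum(cf / (1.0 + rate) ** t for t, cf in cash_flows)

# Hypothetical cash-flow profiles (EUR/ha): a clear-cut rotation with
# regeneration costs up front versus periodic selection cuttings.
rotation_regime = [(0, -1200.0), (30, 2500.0), (45, 3000.0), (70, 9000.0)]
selection_regime = [(10, 2200.0), (25, 2200.0), (40, 2200.0),
                    (55, 2200.0), (70, 2200.0)]

for r in (0.03, 0.05):
    print(r, round(npv(rotation_regime, r)), round(npv(selection_regime, r)))
```

Raising the rate penalizes the regimes differently because their cash flows arrive at different times, which is why the comparison is reported at both 3 and 5 percent.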

Relevance: 100.00%

Abstract:

This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting the U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are considered with simulation experiments. The results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of the U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts and also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2-4 to the case of a bivariate model.
A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained and the bivariate model is found to outperform the univariate models in terms of predictive power.
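The flavour of such a dynamic specification can be sketched with a generic autoregressive probit recursion. The coefficients and the interest-rate series below are made up for illustration; they are not the thesis's estimated models.

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def autoregressive_probit_path(x, omega, alpha, beta, pi0=0.0):
    """Recession probabilities from an autoregressive probit:
    pi_t = omega + alpha * pi_{t-1} + beta * x_t, with P(y_t = 1) = Phi(pi_t).
    """
    probs, pi = [], pi0
    for xt in x:
        pi = omega + alpha * pi + beta * xt
        probs.append(Phi(pi))
    return probs

# A falling term spread (an interest-rate predictor) pushes the implied
# recession probability up; values are purely illustrative.
spread = [2.0, 1.5, 0.5, -0.5, -1.0]
probs = autoregressive_probit_path(spread, omega=-0.5, alpha=0.6, beta=-0.8)
print([round(p, 3) for p in probs])
```

The lag term alpha * pi_{t-1} is what distinguishes the dynamic model from the static probit, letting past forecast levels carry information forward.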

Relevance: 100.00%

Abstract:

This thesis presents an interdisciplinary analysis of how models and simulations function in the production of scientific knowledge. The work is informed by three scholarly traditions: studies on models and simulations in the philosophy of science, so-called micro-sociological laboratory studies within science and technology studies, and cultural-historical activity theory. Methodologically, I adopt a naturalist epistemology and combine philosophical analysis with a qualitative, empirical case study of infectious-disease modelling. The study maintains a dual perspective throughout the analysis: it specifies the modelling practices and examines the models as objects of research. The research questions addressed in this study are: 1) How are models constructed and what functions do they have in the production of scientific knowledge? 2) What is interdisciplinarity in model construction? 3) How do models become a general research tool and why is this process problematic? The core argument is that mediating models as investigative instruments (cf. Morgan and Morrison 1999) take questions as a starting point, and hence their construction is intentionally guided. This argument applies the interrogative model of inquiry (e.g., Sintonen 2005; Hintikka 1981), which conceives of all knowledge acquisition as a process of seeking answers to questions. The first question addresses simulation models as Artificial Nature, which is manipulated in order to answer the questions that initiated the model building. This account develops further the "epistemology of simulation" (cf. Winsberg 2003) by showing the interrelatedness of researchers and their objects in the process of modelling. The second question clarifies why interdisciplinary research collaboration is demanding and difficult to maintain.
The nature of the impediments to disciplinary interaction is examined by introducing the idea of object-oriented interdisciplinarity, which provides an analytical framework for studying the changes in the degree of interdisciplinarity, the tools and research practices developed to support the collaboration, and the mode of collaboration in relation to the historically mutable object of research. As my interest is in models as interdisciplinary objects, the third research problem seeks to answer the question of how we might characterise these objects, what is typical of them, and what kinds of changes happen in the process of modelling. Here I examine the tension between specified, question-oriented models and more general models, and suggest that the specified models form a group of their own. I call these Tailor-made models, in opposition to the process of building a simulation platform that aims at generalisability and utility for health policy. This tension also underlines the challenge of applying research results (or methods and tools) to discuss and solve problems in decision-making processes.

Relevance: 100.00%

Abstract:

The aim of this study was to investigate the effects of location, site type, regeneration method and precommercial thinning on the characteristics and development of young, even-aged, pure Scots pine stands. In addition, the effects of the timing and intensity of the first commercial thinning on yield and profitability over the rotation period were studied. The stand characteristics and external quality of young Scots pine stands, and the stand-level growth models, were based on extensive inventory data of the Finnish Forest Research Institute for young Scots pine stands (3 measurement times, 192 stands). The effect of precommercial thinning on stand development was examined on the basis of long-term experiments (13 stands, 169 plots). The effects of the timing and intensity of the first commercial thinning on yield and profitability were studied on the basis of measurements made in first commercial thinnings (27 stands of Metsähallitus), and further stand development was modeled using the MOTTI simulator. The thesis was based on four articles and a summary. Stand-level growth models were developed for young, even-aged Scots pine stands. The models reliably predicted development up until the first commercial thinning stage. The stand density of young Scots pine stands in Finland was moderately low compared to the target values. In addition, the external quality of the pines was low on average. The low stand density and poor external quality will result in the need for quality tree selection in thinnings if high-quality sawn timber is required. In Northern Finland, only 20% of the dominant trees were classified as normal. This will lead to a situation where external quality remains relatively poor up until the end of the rotation. Early and light precommercial thinning (at a dominant height of 3 m, to a density of 3000 trees per hectare) increased the thinning removal by 40% compared to late and more intensive precommercial thinning (at 7 metres, to a density of 2000 trees per hectare).
A model for the effect of precommercial thinning on merchantable thinning removal at the first commercial thinning was developed for forest management planning purposes. When the recommended time of the first commercial thinning was delayed from a dominant height of 12 m to 16 m, or by ten years, the yield of merchantable wood was doubled. Simultaneously, the present value of the stumpage revenues (at a 4% interest rate) increased on average by 65% (330 € per hectare). Variation in stumpage prices or interest rates did not affect the final results. Without exception, delaying the first commercial thinning by ten years seemed to be the most profitable method. This presupposes that precommercial thinning has been carried out at the right time and that tree quality aspects need not be specially considered. Furthermore, the wood yield and economic outcome over the entire rotation were similar regardless of whether the first thinning was performed at the currently recommended time or ten years later.

Relevance: 100.00%

Abstract:

Ecology and evolutionary biology is the study of life on this planet. One of the many methods applied to answering the great diversity of questions regarding the lives and characteristics of individual organisms is the utilization of mathematical models. Such models are used in a wide variety of ways. Some help us to reason, functioning as aids to, or substitutes for, our own fallible logic, thus making argumentation and thinking clearer. Models which help our reasoning can lead to conceptual clarification: by expressing ideas in algebraic terms, the relationships between different concepts become clearer. Other mathematical models are used to better understand yet more complicated models, or to develop mathematical tools for their analysis. Though they help us to reason and serve as tools in the craftsmanship of science, many models do not tell us much about the real biological phenomena we are, at least initially, interested in. The main reason for this is that any mathematical model is a simplification of the real world, reducing the complexity and variety of interactions and the idiosyncrasies of individual organisms. What such models can tell us, however, both is and has been very valuable throughout the history of ecology and evolution. Minimally, a model simplifying the complex world can tell us that, in principle, the patterns produced in the model could also be produced in the real world. We can never know how different a simplified mathematical representation is from the real world, but the similarity models strive for gives us confidence that their results could apply. This thesis deals with a variety of different models, used for different purposes. One model deals with how one can measure and analyse invasions, the expanding phase of invasive species. Earlier analyses claim to have shown that such invasions can be a regulated phenomenon, in that higher invasion speeds at a given point in time will lead to a reduction in speed.
Two simple mathematical models show that analyses of this particular measure of invasion speed need not be evidence of regulation. In the context of dispersal evolution, two models acting as proofs of principle are presented. Parent-offspring conflict emerges when there are different evolutionary optima for adaptive behavior for parents and offspring. We show that the evolution of dispersal distances can entail such a conflict, and that under parental control of dispersal (as, for example, in higher plants) wider dispersal kernels are optimal. We also show that dispersal homeostasis can be optimal: in a setting where dispersal decisions (to leave or stay in a natal patch) are made, strategies that divide their seeds or eggs into fixed fractions that disperse or not, as opposed to randomizing the decision for each seed, can prevail. We also present a model of the evolution of bet-hedging strategies: evolutionary adaptations that occur despite their fitness, on average, being lower than that of a competing strategy. Such strategies can win in the long run because they couple a reduced variance in fitness with a reduction in mean fitness, and fitness is of a multiplicative nature across generations and therefore sensitive to variability. This model is used for conceptual clarification, by developing a population genetic model with uncertain fitness and expressing genotypic variance in fitness as a product of individual-level variance and correlations between individuals of a genotype. We arrive at expressions that intuitively reflect two of the main categorizations of bet-hedging strategies: conservative vs. diversifying and within- vs. between-generation bet hedging. In addition, this model shows that these divisions are in fact false dichotomies.
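The multiplicative-fitness argument behind bet hedging can be checked in a few lines. Because fitness multiplies across generations, long-run growth is governed by the geometric mean, not the arithmetic mean; the fitness values below are illustrative, not from the thesis.

```python
from math import log

def log_growth_rate(fitness_values):
    """Long-run (log of geometric mean) growth rate when each listed
    environment occurs equally often across generations."""
    return sum(log(w) for w in fitness_values) / len(fitness_values)

specialist = [0.4, 2.2]  # arithmetic mean 1.30, high variance
hedger = [1.0, 1.3]      # arithmetic mean 1.15, low variance

# The hedger has lower average fitness per generation...
print(sum(specialist) / 2, sum(hedger) / 2)
# ...but a higher long-run growth rate, because fitness multiplies
# across generations and is therefore sensitive to variability.
print(log_growth_rate(hedger) > log_growth_rate(specialist))
```

A positive log growth rate means the lineage grows in the long run; the variance-prone specialist here actually declines despite its higher arithmetic mean.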

Relevance: 100.00%

Abstract:

The aim of this dissertation is to provide conceptual tools for the social scientist for clarifying, evaluating and comparing explanations of social phenomena based on formal mathematical models. The focus is on relatively simple theoretical models and simulations, not statistical models. These studies apply a theory of explanation according to which explanation is about tracing objective relations of dependence, knowledge of which enables answers to contrastive why- and how-questions. This theory is developed further by delineating criteria for evaluating competing explanations and by applying the theory to social scientific modelling practices and to the key concepts of equilibrium and mechanism. The dissertation comprises an introductory essay and six published original research articles. The main theses about model-based explanations in the social sciences argued for in the articles are the following. 1) The concept of explanatory power, often used to argue for the superiority of one explanation over another, encompasses five dimensions which are partially independent and involve some systematic trade-offs. 2) Not all equilibrium explanations causally explain the obtaining of the final equilibrium state from the multiple possible initial states. Instead, they often constitutively explain the macro property of the system with the micro properties of the parts (together with their organization). 3) There is an important ambivalence in the concept of mechanism used in many model-based explanations, and this difference corresponds to a difference between two alternative research heuristics. 4) Whether unrealistic assumptions in a model (such as a rational choice model) are detrimental to an explanation provided by the model depends on whether the representation of the explanatory dependency in the model is itself dependent on the particular unrealistic assumptions.
Thus evaluating whether a literally false assumption in a model is problematic requires specifying exactly what is supposed to be explained and by what. 5) The question of whether an explanatory relationship depends on particular false assumptions can be explored with the process of derivational robustness analysis, and the importance of robustness analysis accounts for some of the puzzling features of the tradition of model-building in economics. 6) The fact that economists have been relatively reluctant to use true agent-based simulations to formulate explanations can partially be explained by the specific ideal of scientific understanding implicit in the practice of orthodox economics.

Relevance: 100.00%

Abstract:

Environmental variation is a fact of life for all the species on earth: for any population of any particular species, the local environmental conditions are liable to vary in both time and space. In today's world, anthropogenic activity is causing habitat loss and fragmentation for many species, which may profoundly alter the characteristics of environmental variation in the remaining habitat. Previous research indicates that, as habitat is lost, the spatial configuration of the remaining habitat will increasingly affect the dynamics by which populations are governed. Through the use of mathematical models, this thesis asks how environmental variation interacts with species properties to influence population dynamics, local adaptation, and dispersal evolution. More specifically, we couple continuous-time, continuous-space stochastic population dynamic models to landscape models. We manipulate environmental variation via parameters such as mean patch size, patch density, and patch longevity. Among other findings, we show that a mixture of high- and low-quality habitat is commonly better for a population than uniformly mediocre habitat. This conclusion is justified by purely ecological arguments, yet the positive effects of landscape heterogeneity may be enhanced further by local adaptation and by the evolution of short-ranged dispersal. The predicted evolutionary responses to environmental variation are complex, however, since they involve numerous conflicting factors. We discuss why the species that have high levels of local adaptation within their ranges may not be the same species that benefit from local adaptation during range expansion. We show how habitat loss can lead to either increased or decreased selection for dispersal, depending on the type of habitat and the manner in which it is lost. To study the models, we further develop a recent analytical method, perturbation expansion, to enable the incorporation of environmental variation.
Within this context, we use two methods to address evolutionary dynamics: adaptive dynamics, which assumes that mutations occur infrequently so that the ecological and evolutionary timescales can be separated, and genotype distributions, which assume that mutations are more frequent. The two approaches generally lead to similar predictions; exceptionally, however, we show how the evolutionary response of dispersal behaviour to habitat turnover may depend qualitatively on the mutation rate.

Relevance: 100.00%

Abstract:

The integrated European debt capital market has undoubtedly broadened the possibilities for companies to access funding from the public and challenged investors to cope with the ever-increasing complexity of its market participants. Well into the Euro era, it is clear that the unified market has created potential for all involved parties, where investment opportunities are able to meet a supply of funds from a broad geographical area now summoned under a single currency. Europe's traditionally heavy dependency on bank lending as a source of debt capital has thus been easing, as corporate residents are able to tap a deep and liquid capital market to satisfy their funding needs. As national barriers eroded with the inauguration of the Euro and interest rates for the EMU members converged towards overall lower yields, a new source of debt capital emerged for the vast majority of corporate residents under the new currency, offering an alternative to the traditionally more maturity-restricted bank debt. With increased sophistication came also an improved knowledge and understanding of the market and its participants. Further, investors became more willing to bear credit risk, which opened the market to firms of ever lower creditworthiness. In the process, the market as a whole saw a change in the profile of issuers, as non-financial firms increasingly sought their funding directly from the bond market. This thesis consists of three separate empirical studies of how corporates fund themselves on the European debt capital markets. The analysis focuses on a firm's access to and behaviour on the capital market subsequent to the decision to raise capital through the issuance of arm's length debt on the bond market. The specific areas considered contribute to our knowledge in the fields of corporate finance and financial markets by explicitly examining firms' primary market activities within the new market area.
The first essay explores how reputation of an issuer affects its debt issuance. Essay two examines the choice of interest rate exposure on newly issued debt and the third and final essay explores pricing anomalies on corporate debt issues.

Relevance: 100.00%

Abstract:

The likelihood ratio test of cointegration rank is the most widely used test for cointegration. Many studies have shown that its finite sample distribution is not well approximated by the limiting distribution. This article introduces bootstrap and fast double bootstrap (FDB) algorithms for the likelihood ratio test and evaluates them by Monte Carlo simulation experiments. It finds that the performance of the bootstrap test is very good. The more sophisticated FDB produces a further improvement in cases where the performance of the asymptotic test is very unsatisfactory and the ordinary bootstrap does not work as well as it might. Furthermore, the Monte Carlo simulations provide a number of guidelines on when the bootstrap and FDB tests can be expected to work well. Finally, the tests are applied to US interest rates and international stock price series. It is found that the asymptotic test tends to overestimate the cointegration rank, while the bootstrap and FDB tests choose the correct cointegration rank.
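The core of any such bootstrap test is the same loop: simulate data under the null, recompute the statistic, and rank the observed statistic within the simulated ones. The sketch below shows that skeleton on a deliberately simple location test; the cointegration-rank version would replace the toy statistic with the LR statistic and the null simulator with the restricted VAR, which is beyond the scope of an abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_p_value(data, statistic, simulate_null, n_boot=999):
    """Generic bootstrap test: the p-value is the rank of the observed
    statistic among statistics computed on null-simulated data sets."""
    observed = statistic(data)
    null_stats = np.array([statistic(simulate_null()) for _ in range(n_boot)])
    return (1 + np.sum(null_stats >= observed)) / (n_boot + 1)

# Toy stand-in: test "mean = 0" for a unit-variance sample that in fact
# has mean 1, so the test should reject.
data = rng.normal(1.0, 1.0, size=100)
stat = lambda x: abs(np.mean(x)) * np.sqrt(len(x))
p = bootstrap_p_value(data, stat, lambda: rng.normal(0.0, 1.0, size=100))
print(p)
```

The `(1 + count) / (n_boot + 1)` convention keeps the p-value strictly positive and gives exact size when the statistic is continuous.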

Relevance: 100.00%

Abstract:

Bootstrap likelihood ratio tests of cointegration rank are commonly used because they tend to have rejection probabilities that are closer to the nominal level than the rejection probabilities of the corresponding asymptotic tests. The effect of bootstrapping the test on its power is largely unknown. We show that a new computationally inexpensive procedure can be applied to the estimation of the power function of the bootstrap test of cointegration rank. The bootstrap test is found to have a power function close to that of the level-adjusted asymptotic test. The bootstrap test estimates the level-adjusted power of the asymptotic test highly accurately. The bootstrap test may have low power to reject the null hypothesis of cointegration rank zero, or underestimate the cointegration rank. An empirical application to Euribor interest rates is provided as an illustration of the findings.
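The FDB correction itself is cheap once the statistics exist: one second-level statistic per first-level bootstrap sample suffices to adjust the critical value. Below is a sketch of that adjustment step in the Davidson-MacKinnon style; the |N(0,1)| draws are placeholders standing in for the LR statistics, not output of any cointegration procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

def fdb_p_value(observed, first_level, second_level):
    """Fast double bootstrap p-value: first_level[j] is the statistic
    from bootstrap sample j; second_level[j] is the statistic from one
    second-level sample drawn from bootstrap sample j."""
    first_level = np.asarray(first_level)
    second_level = np.asarray(second_level)
    p1 = np.mean(first_level >= observed)      # ordinary bootstrap p-value
    q = np.quantile(second_level, 1.0 - p1)    # FDB-corrected critical value
    return float(np.mean(first_level >= q))

# Placeholder statistics standing in for first- and second-level LR values.
stats1 = np.abs(rng.normal(size=999))
stats2 = np.abs(rng.normal(size=999))
print(fdb_p_value(2.0, stats1, stats2))
```

The appeal is the cost: a full double bootstrap needs B second-level samples per first-level sample, while the FDB needs only one.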

Relevance: 100.00%

Abstract:

Habitat fragmentation produces patches of suitable habitat surrounded by unfavourable matrix habitat. A species may persist in such a fragmented landscape in an equilibrium between the extinctions and recolonizations of local populations, thus forming a metapopulation. Migration between local populations is necessary for the long-term persistence of a metapopulation. The Glanville fritillary butterfly (Melitaea cinxia) forms a metapopulation in the Åland islands in Finland. There is migration between the populations, the extent of which is affected by several environmental factors and by variation in the phenotype of individual butterflies. Different allelic forms of the glycolytic enzyme phosphoglucose isomerase (Pgi) have been identified as a possible genetic factor influencing flight performance and migration rate in this species. The frequency of a certain Pgi allele, Pgi-f, follows the same pattern in relation to population age and connectivity as migration propensity. Furthermore, variation in flight metabolic performance, which is likely to affect migration propensity, has been linked to genetic variation in Pgi or a closely linked locus. The aim of this study was to investigate the association between Pgi genotype and migration propensity in the Glanville fritillary at both the individual and population levels using a statistical modelling approach. A mark-release-recapture (MRR) study was conducted in a habitat patch network of M. cinxia in Åland to collect data on the movements of individual butterflies. Larval samples from the study area were also collected for population-level examinations. Each butterfly and larva was genotyped at the Pgi locus. The MRR data were used to parameterise two mathematical models of migration: the Virtual Migration model (VM) and a spatially explicit diffusion model. The numbers of emigrants predicted by the VM model were compared with the observed numbers of emigrants from populations with high and low frequencies of Pgi-f.
Posterior predictive data sets were simulated based on the parameters of the diffusion model. Lack of fit between the observed and model-predicted values of several movement descriptors was detected, and the effect of Pgi genotype on the deviations was assessed by randomizations incorporating the genotype information. This study revealed a possible difference between the two sexes in the effect of Pgi genotype on migration propensity in the Glanville fritillary. Females with and males without the Pgi-f allele moved more between habitat patches, which is probably related to differences in the function of flight in the two sexes. Females may use their high flight capacity to migrate between habitat patches to find suitable oviposition sites, whereas males may use it to acquire mates by keeping a territory and fighting off intruding males, possibly causing the latter to emigrate. The results were consistent across different movement descriptors and at both the individual and population levels. The effect of Pgi is likely to be dependent on the structure of the landscape and the prevailing environmental conditions.
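The randomization logic can be sketched generically: shuffle the genotype labels many times and ask how often a label-shuffled difference in movement matches or exceeds the observed one. The data below are invented for illustration; the actual study randomized genotype labels over deviations from the diffusion-model predictions, not raw move counts.

```python
import numpy as np

rng = np.random.default_rng(4)

def randomization_p_value(moves, has_allele, n_perm=4999):
    """One-sided permutation test: is mean movement higher for carriers
    of the allele than expected if genotype labels were random?"""
    moves = np.asarray(moves, dtype=float)
    has_allele = np.asarray(has_allele, dtype=bool)
    observed = moves[has_allele].mean() - moves[~has_allele].mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(has_allele)
        if moves[perm].mean() - moves[~perm].mean() >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Invented per-individual patch-to-patch move counts, by genotype.
moves = [5, 6, 7, 8, 6, 7, 1, 2, 1, 3, 2, 2]
has_pgi_f = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
print(randomization_p_value(moves, has_pgi_f))
```

Because only labels are shuffled, the test conditions on the observed movement values and needs no distributional assumptions about them.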