959 results for Defeasible conditional
Abstract:
We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of STMs represent the probability of growth from one size class to another, given a time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments for the future length given an individual’s previous length measurement and time at liberty are then derived. We moment match the true conditional distributions with skewed-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated tag–recapture data and tag–recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
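As a rough sketch of the moment-matching step described above (an illustration only, not the authors' exact estimator: the moment values, size-class boundaries and units below are hypothetical), the skew-normal parameters follow in closed form from a given mean, variance and skewness, and one row of the STM is then obtained by integrating the fitted density over the size classes:

```python
# Sketch: moment-match a skew-normal to (assumed given) moments of future
# length, then integrate it over size-class bins to fill one STM row.
import numpy as np
from scipy.stats import skewnorm

def skewnorm_from_moments(mean, var, skew):
    """Closed-form moment matching; requires |skew| below ~0.995,
    the skew-normal's maximum attainable skewness."""
    t = (2.0 * abs(skew) / (4.0 - np.pi)) ** (2.0 / 3.0)
    m2 = t / (1.0 + t)                    # m2 = (delta*sqrt(2/pi))**2
    m = np.sign(skew) * np.sqrt(m2)
    delta = m * np.sqrt(np.pi / 2.0)
    a = delta / np.sqrt(1.0 - delta**2)   # scipy's shape parameter
    scale = np.sqrt(var / (1.0 - m2))     # matches the variance
    loc = mean - scale * m                # matches the mean
    return a, loc, scale

# Hypothetical conditional moments of future length (mm); in the paper
# these are derived from the fitted von Bertalanffy growth model.
a, loc, scale = skewnorm_from_moments(mean=32.0, var=4.0, skew=0.5)

bins = np.arange(20.0, 46.0, 2.0)         # size-class boundaries (mm)
cdf = skewnorm.cdf(bins, a, loc=loc, scale=scale)
row = np.diff(cdf)                        # P(moving into each size class)
row /= row.sum()                          # renormalise over the classes kept
print(np.round(row, 3))
```

In practice one would also truncate or renormalise the row at the current size class, so that no mass is assigned to negative growth.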
Abstract:
VHF nighttime scintillations, recorded during a high solar activity period at a meridian chain of stations covering a magnetic latitude belt of 3°–21°N (420 km subionospheric points), are analyzed to investigate the influence of equatorial spread F irregularities on the occurrence of scintillation at latitudes away from the equator. Observations show that saturated amplitude scintillations start abruptly about one and a half hours after ground sunset and that their onset is almost simultaneous at stations whose subionospheric points lie within 12°N of the magnetic equator, but is delayed by 15 min to 4 hours at the station whose subionospheric point is at 21°N magnetic latitude. In addition, the occurrence of postsunset scintillations at all the stations is found to be conditional on their prior occurrence at the equatorial station: if no postsunset scintillation activity is seen at the equatorial station, no scintillations are seen at the other stations either. The occurrence of scintillations is explained as caused by rising plasma bubbles and associated irregularities over the magnetic equator and the subsequent mapping of these irregularities down the magnetic field lines to the F region of higher latitudes through some effectively instantaneous mechanism; hence an equatorial control is established on the generation of postsunset scintillation-producing irregularities in the entire low-latitude belt.
Abstract:
Correlations between oil and agricultural commodities have varied over recent decades, shaped by renewable fuels policy and turbulent economic conditions. We estimate smooth transition conditional correlation models for 12 agricultural commodities and WTI crude oil. While a structural change in correlations occurred concurrently with the introduction of biofuel policy, oil and food price levels are also key influences: high correlation between biofuel feedstocks and oil is more likely to occur when food and oil price levels are high. Correlation with oil returns is strong for biofuel feedstocks but not for other agricultural futures, suggesting limited contagion from energy to the broader food market.
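For reference, a generic first-order smooth transition conditional correlation specification of the kind estimated in this literature can be sketched as follows (the notation is generic and the choice of transition variable is an assumption here, not a reproduction of the paper's specification):

\[ \rho_t \;=\; \bigl(1 - G(s_t)\bigr)\rho^{(1)} + G(s_t)\,\rho^{(2)}, \qquad G(s_t) \;=\; \frac{1}{1 + \exp\{-\gamma (s_t - c)\}}, \quad \gamma > 0, \]

where \(\rho^{(1)}\) and \(\rho^{(2)}\) are the correlations in the two extreme regimes, and the logistic function \(G\) moves the conditional correlation smoothly between them as the transition variable \(s_t\) (for example, a food or oil price level) passes the location \(c\), at a speed governed by \(\gamma\).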
Abstract:
Sequential firings with fixed time delays are frequently observed in simultaneous recordings from multiple neurons. Such temporal patterns are potentially indicative of underlying microcircuits and it is important to know when a repeatedly occurring pattern is statistically significant. These sequences are typically identified through correlation counts. In this paper we present a method for assessing the significance of such correlations. We specify the null hypothesis in terms of a bound on the conditional probabilities that characterize the influence of one neuron on another. This method of testing significance is more general than the currently available methods since under our null hypothesis we do not assume that the spiking processes of different neurons are independent. The structure of our null hypothesis also allows us to rank order the detected patterns. We demonstrate our method on simulated spike trains.
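As a minimal sketch of the kind of test this implies, under simplifying assumptions (binned spike trains, a single two-neuron pattern at one fixed delay, and a hypothetical bound p0; the paper's procedure is more general and also supports ranking the detected patterns): under the null hypothesis that P(B fires at t+d | A fired at t) ≤ p0, the coincidence count is stochastically dominated by a binomial distribution, so its upper tail gives a conservative p-value:

```python
# Sketch: conservative significance test for an "A then B after d bins"
# sequence under a null bound p0 on the conditional firing probability.
import numpy as np
from scipy.stats import binom

def sequence_pvalue(spikes_a, spikes_b, delay, p0):
    """spikes_a, spikes_b: boolean arrays over time bins; delay in bins."""
    n = int(spikes_a[:-delay].sum())                       # times A fired
    k = int((spikes_a[:-delay] & spikes_b[delay:]).sum())  # A then B
    return k, n, binom.sf(k - 1, n, p0)                    # P(K >= k | H0)

rng = np.random.default_rng(0)
T = 20_000
a = rng.random(T) < 0.02                      # background firing of A
b = rng.random(T) < 0.01                      # background firing of B
b[3:] |= a[:-3] & (rng.random(T - 3) < 0.3)   # B echoes A after 3 bins

k, n, p = sequence_pvalue(a, b, delay=3, p0=0.05)
print(f"{k} coincidences in {n} opportunities, p = {p:.2e}")
```

Note that the null here only bounds the conditional probability; it does not assume the two spike trains are independent, which is the generality the abstract emphasizes.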
Abstract:
Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where parametric assumptions in standard statistical models would be too rigid, more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood may be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox developed herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that this same reasoning can also be applied under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design. This is illustrated using a generic two-phase cohort sampling design as an example. The alternative approaches presented for the analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied to a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible also in this case.
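For a concrete, simplified version of the full versus conditional likelihood distinction (generic notation, assuming Bernoulli selection into the completely observed set S with a known, outcome-dependent probability \(\pi(y)\); the thesis treats a more general two-phase design):

\[ L_{\mathrm{full}}(\theta) \;=\; \prod_{i \in S} \pi(y_i)\, f(y_i;\theta) \;\prod_{i \notin S} \int \bigl(1 - \pi(u)\bigr) f(u;\theta)\, du, \qquad L_{\mathrm{cond}}(\theta) \;=\; \prod_{i \in S} \frac{\pi(y_i)\, f(y_i;\theta)}{\int \pi(u)\, f(u;\theta)\, du}. \]

The full likelihood keeps a contribution from every unit, whereas the conditional likelihood uses only the completely observed units, each contribution being conditioned on the rule that placed the unit in S.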
Abstract:
Frictions are factors that hinder the trading of securities in financial markets. Typical frictions include limited market depth, transaction costs, lack of infinite divisibility of securities, and taxes. Conventional models used in mathematical finance often gloss over these issues, which affect almost all financial markets, by arguing that the impact of frictions is negligible and, consequently, that the frictionless models are valid approximations. This dissertation consists of three research papers related to the study of the validity of such approximations in two distinct modeling problems. Models of price dynamics that are based on diffusion processes, i.e., continuous strong Markov processes, are widely used in the frictionless scenario. The first paper establishes that diffusion models can indeed be understood as approximations of price dynamics in markets with frictions. This is achieved by introducing an agent-based model of a financial market where finitely many agents trade a financial security, the price of which evolves according to price impacts generated by trades. It is shown that, if the number of agents is large, then under certain assumptions the price process of the security, which is a pure-jump process, can be approximated by a one-dimensional diffusion process. In a slightly extended model, in which agents may exhibit herd behavior, the approximating diffusion model turns out to be a stochastic volatility model. Finally, it is shown that when the agents' tendency to herd is strong, logarithmic returns in the approximating stochastic volatility model are heavy-tailed. The remaining papers are related to no-arbitrage criteria and superhedging in continuous-time option pricing models under small-transaction-cost asymptotics. Guasoni, Rásonyi, and Schachermayer have recently shown that, in such a setting, any financial security admits no arbitrage opportunities and there exist no feasible superhedging strategies for European call and put options written on it, as long as its price process is continuous and has the so-called conditional full support (CFS) property. Motivated by this result, CFS is established for certain stochastic integrals and a subclass of Brownian semistationary processes in the two papers. As a consequence, a wide range of possibly non-Markovian local and stochastic volatility models have the CFS property.
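For reference, the conditional full support property mentioned here is usually stated as follows (generic notation): a continuous, adapted, positive process \((S_t)\) on \([0,T]\) has CFS if, for every \(t \in [0,T)\),

\[ \operatorname{supp}\,\mathrm{Law}\bigl((S_u)_{u\in[t,T]} \,\big|\, \mathcal{F}_t\bigr) \;=\; C_{S_t}\bigl([t,T]\bigr) \quad \text{a.s.,} \]

where \(C_x([t,T])\) denotes the set of positive continuous functions on \([t,T]\) started at \(x\). Informally: whatever the past, the conditional law of the future price path assigns positive probability to every neighbourhood of every continuous trajectory.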
Abstract:
This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models such as GARCH, ACD and CARR models. They are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. Compared to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that take values on the real line. In the multivariate context asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and the modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and the unconditional distribution. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients. The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are found in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results obtained is established when the information content of the volatility forecasts derived is examined.
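The generic first-order multiplicative error model behind this nesting can be sketched as follows (generic notation, not the thesis's exact specifications):

\[ x_t \;=\; \mu_t\,\varepsilon_t, \qquad \mu_t \;=\; \omega + \alpha\, x_{t-1} + \beta\, \mu_{t-1}, \qquad \varepsilon_t \overset{iid}{\sim} D^{+}(1), \]

where \(x_t\) is a non-negative variable, \(\mu_t\) its conditional mean, and \(D^{+}(1)\) some unit-mean distribution on the positive half-line. Taking \(x_t\) to be squared returns, trade durations or daily price ranges recovers GARCH, ACD and CARR models respectively, and the choice of \(D^{+}\) (for example, the inverse gamma introduced in Chapter 3) governs the distributional asymmetry.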
Abstract:
Stochastic filtering is, in general, the estimation of indirectly observed states given observed data; in the context of a probability space, the conditional expectation given the observations is among the most accurate such estimates. In this thesis I present the theory of filtering for two different kinds of observation process: the first is a diffusion process, discussed in the first chapter, while the third chapter introduces the second, a counting process. The majority of the fundamental results of stochastic filtering are stated in the form of equations, such as the unnormalized Zakai equation, which leads to the Kushner-Stratonovich equation. The latter, also known as the normalized Zakai equation or, equivalently, the Fujisaki-Kallianpur-Kunita (FKK) equation, shows how the estimate differs depending on whether the observation process is a diffusion or a counting process. I also present an example for the linear Gaussian case, which is the basis for constructing the so-called Kalman-Bucy filter. As the unnormalized and normalized Zakai equations are stated in terms of the conditional distribution, a density for these distributions is developed through the equations and stated in Kushner's theorem. Kushner's theorem, however, takes the form of a stochastic partial differential equation whose solution must be verified to exist and to be unique; this is covered in the second chapter.
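For reference, the Kushner-Stratonovich (FKK) equation referred to above is usually written as follows for diffusion-type observations \(dY_t = h(X_t)\,dt + dW_t\), with \(L\) the generator of the signal and \(\pi_t(f) = E[f(X_t) \mid \mathcal{F}^Y_t]\) (generic notation; unit observation noise is assumed here):

\[ d\pi_t(f) \;=\; \pi_t(Lf)\,dt \;+\; \bigl(\pi_t(fh) - \pi_t(f)\,\pi_t(h)\bigr)\,\bigl(dY_t - \pi_t(h)\,dt\bigr), \]

where \(dY_t - \pi_t(h)\,dt\) is the innovation process. In the linear Gaussian case this system closes on the conditional mean and covariance and reduces to the Kalman-Bucy filter.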
Abstract:
The aim of this thesis is to analyse the key ecumenical dialogues between Methodists and Lutherans from the perspective of Arminian soteriology and Methodist theology in general. The primary research question is defined as: "To what extent do the dialogues under analysis relate to Arminian soteriology?" By answering this question, new knowledge is sought on the current soteriological position of the Methodist-Lutheran dialogues, contemporary Methodist theology, and the commonalities between the Lutheran and Arminian understandings of soteriology. In this way the soteriological picture of the Methodist-Lutheran discussions is clarified. The dialogues under analysis were selected on the basis of versatility. Firstly, the sole dialogue at the world organisation level was chosen: The Church – Community of Grace. Additionally, the document World Methodist Council and the Joint Declaration on the Doctrine of Justification is analysed as a supporting document. Secondly, a document concerning the discussions between two main-line churches in the United States of America was selected: Confessing Our Faith Together. Thirdly, two dialogues between non-main-line Methodist churches and main-line Lutheran national churches in Europe were chosen: Fellowship of Grace from Norway and Kristuksesta osalliset from Finland. The theoretical approach of the research conducted in this thesis is systematic analysis. The Remonstrant articles of Arminian soteriology are utilised as an analysis tool to examine the soteriological positions of the dialogues. New knowledge is sought by analysing the stances of the dialogues concerning the doctrines of partial depravity, conditional election, universal atonement, resistible grace and conditional perseverance of the saints. In this way information is also provided for approaching the Calvinist-Arminian controversy from new perspectives. The results of this thesis show that the current soteriological position of the Methodist-Lutheran dialogues is closer to Arminianism than Calvinism. The dialogues relate to Arminian soteriology especially concerning the doctrines of universal atonement, resistible grace and conditional perseverance of the saints. The commonalities between the Lutheran and Arminian understandings of soteriology exist mainly in these three doctrines, as they are uniformly favoured in the dialogues. The most discussed area of soteriology is human depravity, in which the largest diversity of stances occurs as well. On the other hand, divine election is the least discussed topic. The overall perspective which the results of the analysis provide indicates that the Lutherans could approach the Calvinist churches together with the Methodists with a wider theological perspective and understanding when soteriological issues are considered as principal. Human depravity is identified as the area of soteriology which requires the most work in future ecumenical dialogues. However, the detected Lutheran hybrid notion of depravity (a Calvinist-Arminian mixture) appears to provide a useful new perspective for Calvinist-Arminian ecumenism and offers potentially fruitful considerations for future ecumenical dialogues.
Abstract:
This paper describes a detailed study of the structure of turbulence in boundary layers along mildly curved convex and concave surfaces. The surface curvature studied corresponds to δ/Rw = ± 0·01, δ being the boundary-layer thickness and Rw the radius of curvature of the wall, taken as positive for convex and negative for concave curvature. Measurements of turbulent energy balance, autocorrelations, auto- and cross-power spectra, amplitude probability distributions and conditional correlations are reported. It is observed that even mild curvature has very strong effects on the various aspects of the turbulent structure. For example, convex curvature suppresses the diffusion of turbulent energy away from the wall, reduces drastically the integral time scales and shifts the spectral distributions of turbulent energy and Reynolds shear stress towards high wavenumbers. Exactly opposite effects, though generally of a smaller magnitude, are produced by concave wall curvature. It is also found that curvature of either sign affects the v fluctuations more strongly than the u fluctuations and that curvature effects are more significant in the outer region of the boundary layer than in the region close to the wall. The data on the conditional correlations are used to study, in detail, the mechanism of turbulent transport in curved boundary layers.
Abstract:
Volatility is central to options pricing and risk management. It reflects the uncertainty of investors and the inherent instability of the economy. Time series methods are among the most widely applied scientific methods used to analyze and predict volatility. Very frequently sampled data contain much valuable information about the different elements of volatility and may ultimately reveal the reasons for time-varying volatility. The use of such ultra-high-frequency data is common to all three essays of the dissertation. The dissertation belongs to the field of financial econometrics. The first essay uses wavelet methods to study the time-varying behavior of scaling laws and long memory in the five-minute volatility series of Nokia on the Helsinki Stock Exchange around the burst of the IT bubble. The essay is motivated by earlier findings which suggest that different scaling laws may apply to intraday time scales and to larger time scales, implying that the so-called annualized volatility depends on the data sampling frequency. The empirical results confirm the appearance of time-varying long memory and different scaling laws that, for a significant part, can be attributed to investor irrationality and to an intraday volatility periodicity called the New York effect. The findings have potentially important consequences for options pricing and risk management, which commonly assume constant memory and scaling. The second essay investigates modelling the duration between trades in stock markets. Durations convey information about investor intentions and provide an alternative view of volatility. Generalizations of standard autoregressive conditional duration (ACD) models are developed to meet needs observed in previous applications of the standard models. According to the empirical results, based on data of actively traded stocks on the New York Stock Exchange and the Helsinki Stock Exchange, the proposed generalization clearly outperforms the standard models and also performs well in comparison to another recently proposed alternative to the standard models. The distribution used to derive the generalization may also prove valuable in other areas of risk management. The third essay studies empirically the effect of decimalization on volatility and market microstructure noise. Decimalization refers to the change from fractional pricing to decimal pricing; it was carried out on the New York Stock Exchange in January 2001. The methods used here are more accurate than in the earlier studies and put more weight on market microstructure. The main result is that decimalization decreased observed volatility by reducing noise variance, especially for the highly active stocks. The results help risk management and market mechanism design.
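A standard way to see why the noise term matters at ultra-high frequency (a textbook-style sketch, not a result specific to this dissertation): if the observed log-price is the efficient price plus i.i.d. mean-zero microstructure noise \(\varepsilon\), then realized variance computed from \(n\) intraday returns satisfies

\[ \mathbb{E}\bigl[\mathrm{RV}^{(n)}\bigr] \;=\; \int_0^1 \sigma_t^2\,dt \;+\; 2\,n\,\mathbb{E}[\varepsilon^2], \]

so the noise contribution grows linearly in the sampling frequency. A reduction in noise variance, such as that attributed to decimalization in the third essay, therefore lowers observed volatility most strongly for finely sampled, actively traded stocks.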
Abstract:
This study comprises an introductory section and three essays analysing Russia's economic transition from the early 1990s up to the present. The papers present a combination of theoretical and empirical analysis of some of the key issues Russia has faced during its somewhat troublesome transformation from a state-controlled command economy to a market-based economy. The first essay analyses fiscal competition for mobile capital between identical regions in a transition country. A standard tax competition framework is extended to account for two features of a transition economy: the presence of two sectors, old and new, which differ in productivity, and a non-benevolent regional decision-maker. It is shown that in the very early phase of transition, when the old sector clearly dominates, consumers in a transition economy may be better off in a competitive equilibrium. Decision-makers, on the other hand, will prefer to coordinate their fiscal policies. The second essay uses annual data for 1992-2003 to examine income dispersion and convergence across 76 Russian regions. Wide disparities in income levels have indeed emerged during the transition period. Dispersion has increased most among the initially better-off regions, whereas for the initially poorer regions no clear trend of divergence or convergence could be established. Further, some evidence, albeit not highly robust, was found of both unconditional and conditional convergence, especially among the initially richer regions. Finally, it is observed that there is much less evidence of convergence after the economic crisis of 1998. The third essay analyses industrial firms' engagement in the provision of infrastructure services, such as heating, electricity and road maintenance. Using a unique dataset of 404 large and medium-sized industrial enterprises in 40 regions of Russia, the essay examines public infrastructure provision by Russian industrial enterprises. It is found that engagement in infrastructure provision, as proxied by district heating production, is to a large degree a Soviet legacy. Secondly, firms providing district heating to users outside their plant area are more likely to have close and multidimensional relations with the local public sector.
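The unconditional and conditional convergence tests mentioned in the second essay are conventionally based on a growth regression of the form (generic notation, not necessarily the essay's exact specification):

\[ \frac{1}{T}\,\log\frac{y_{i,T}}{y_{i,0}} \;=\; \alpha + \beta\,\log y_{i,0} + \gamma' X_i + \epsilon_i, \]

where \(y_{i,t}\) is per-capita income in region \(i\); a negative \(\beta\) indicates convergence, and unconditional convergence corresponds to the restriction \(\gamma = 0\) (no region-specific controls \(X_i\)).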
Abstract:
Measurements of both the velocity and the temperature field have been made in the thermal layer that grows inside a turbulent boundary layer which is subjected to a small step change in surface heat flux. Upstream of the step, the wall heat flux is zero and the velocity boundary layer is nearly self-preserving. The thermal-layer measurements are discussed in the context of a self-preserving analysis for the temperature disturbance which grows underneath a thick external turbulent boundary layer. A logarithmic mean temperature profile is established downstream of the step but the budget for the mean-square temperature fluctuations shows that, in the inner region of the thermal layer, the production and dissipation of temperature fluctuations are not quite equal at the furthest downstream measurement station. The measurements for both the mean and the fluctuating temperature field indicate that the relaxation distance for the thermal layer is quite large, of the order of 1000θ₀, where θ₀ is the momentum thickness of the boundary layer at the step. Statistics of the thermal-layer interface and conditionally sampled measurements with respect to this interface are presented. Measurements of the temperature intermittency factor indicate that the interface is normally distributed with respect to its mean position. Near the step, the passive heat contaminant acts as an effective marker of the organized turbulence structure that has been observed in the wall region of a boundary layer. Accordingly, conditional averages of Reynolds stresses and heat fluxes measured in the heated part of the flow are considerably larger than the conventional averages when the temperature intermittency factor is small.
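The link between conventional and conditional averages noted in the last sentence follows from the usual zone decomposition (standard intermittency notation, not specific to this paper): for any instantaneous quantity \(Q\),

\[ \overline{Q} \;=\; \gamma\,\langle Q\rangle_{\mathrm{turb}} + (1-\gamma)\,\langle Q\rangle_{\mathrm{non}}, \]

where \(\gamma\) is the (temperature) intermittency factor and the brackets denote averages conditioned on the heated turbulent and unheated zones. When \(\gamma\) is small and the unheated-zone contribution is negligible, the conventional average \(\overline{Q}\) is therefore much smaller than the turbulent-zone conditional average.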
Abstract:
Transposons are mobile elements of genetic material that are able to move in the genomes of their host organisms using a special form of recombination called transposition. Bacteriophage Mu was the first transposon for which a cell-free in vitro transposition reaction was developed. Subsequently, the reaction has been refined, and the minimal Mu in vitro reaction is useful in the generation of comprehensive libraries of mutant DNA molecules that can be used in a variety of applications. To date, the functional genetics applications of Mu in vitro technology have targeted either plasmids or genomic regions and entire genomes of viruses cloned into specific vectors. This study expands the use of Mu in vitro transposition in functional genetics and genomics by describing novel methods applicable to the targeted transgenesis of the mouse and the whole-genome analysis of bacteriophages. The methods described here are rapid, efficient, and easily applicable to a wide variety of organisms, demonstrating the potential of the Mu transposition technology in the functional analysis of genes and genomes. First, an easy-to-use, rapid strategy to generate constructs for the targeted mutagenesis of mouse genes was developed. To test the strategy, a gene encoding a neuronal K+/Cl- cotransporter was mutagenised. After a highly efficient transpositional mutagenesis, the mutagenised gene fragments were cloned into a vector backbone and transferred into bacterial cells. These constructs were screened with PCR using an effective 3D matrix system. In addition to traditional knock-out constructs, the method developed yields hypomorphic alleles that lead to reduced expression of the target gene in transgenic mice and have since been used in a follow-up study. Moreover, a scheme is devised to rapidly derive conditional alleles from these constructs. Next, an efficient strategy for the whole-genome analysis of bacteriophages was developed, based on the transpositional mutagenesis of uncloned, infective virus genomes and their subsequent transfer into susceptible host cells. Mutant viruses able to produce viable progeny were collected and their transposon integration sites determined to map genomic regions nonessential to the viral life cycle. This method, applied here to three very different bacteriophages, PRD1, ΦYeO3-12, and PM2, does not require the target genome to be cloned and is directly applicable to all DNA and RNA viruses that have infective genomes. The method developed yielded valuable novel information on the three bacteriophages studied, and the whole-genome data can be complemented with concomitant studies on individual genes. Moreover, end-modified transposons constructed for this study can be used to manipulate genomes devoid of suitable restriction sites.
Abstract:
Cell proliferation, transcription and metabolism are regulated by complex, partly overlapping signaling networks involving proteins in various subcellular compartments. The objective of this study was to increase our knowledge of such regulatory networks and their interrelationships through analysis of MrpL55, Vig, and Mat1, three gene products implicated in the regulation of cell cycle, transcription, and metabolism. Genome-wide and biochemical in vitro studies have previously revealed MrpL55 as a component of the large subunit of the mitochondrial ribosome and demonstrated a possible role for the protein in cell cycle regulation. Vig has been implicated in heterochromatin formation and identified as a constituent of the RNAi-induced silencing complex (RISC) involved in cell cycle regulation and RNAi-directed transcriptional gene silencing (TGS) coupled to RNA polymerase II (RNAPII) transcription. Mat1 has been characterized as a regulatory subunit of the cyclin-dependent kinase 7 (Cdk7) complex, which phosphorylates and regulates critical targets involved in cell cycle progression, energy metabolism and transcription by RNAPII. The first part of the study explored whether mRpL55 is required for cell viability or involved in the regulation of energy metabolism and cell proliferation. The results revealed a dynamic requirement for the essential Drosophila mRpL55 gene during development and suggested a function of MrpL55 in cell cycle control, either at the G1/S or the G2/M transition, prior to cell differentiation. This first in vivo characterization of a metazoan-specific constituent of the large subunit of the mitochondrial ribosome also provided compelling evidence of the interconnection of nuclear and mitochondrial genomes, as well as of the complex functions of the evolutionarily young, metazoan-specific mitochondrial ribosomal proteins. In studies on Drosophila RISC complex regulation, it was noted that Vig, a protein involved in heterochromatin formation, unlike the other analyzed RISC-associated proteins Argonaute2 and R2D2, is dynamically phosphorylated in a dsRNA-independent manner. Vig displays similarity with a known in vivo substrate of protein kinase C (PKC), the human chromatin remodeling factor Ki-1/57, and is efficiently phosphorylated by PKC on multiple sites in vitro. These results suggest that the function of the RISC complex protein Vig in RNAi-directed TGS and chromatin modification may be regulated through dsRNA-independent phosphorylation by PKC. In the third part of this study the role of Mat1 in regulating RNAPII transcription was investigated using cultured murine immortal fibroblasts carrying a conditional allele of Mat1. The results demonstrated that phosphorylation of the carboxy-terminal domain (CTD) of the large subunit of RNAPII in the heptapeptide YSPTSPS repeat was reduced over 10-fold on Serine-5, and subsequently on Serine-2, in Mat1-/- cells. Occupancy of the hypophosphorylated RNAPII in gene bodies was detectably decreased, whereas capping, splicing, histone methylation and mRNA levels were generally not affected. However, a subset of transcripts was repressed in the absence of Mat1 and associated with decreased occupancy of RNAPII at promoters as well as defective capping. The results identify the Cdk7-CycH-Mat1 kinase submodule of TFIIH as a stimulatory, non-essential regulator of transcriptional elongation and a gene-specific essential factor for stable binding of RNAPII at the promoter region and for capping.
The results of these studies suggest important roles for both MrpL55 and Mat1 in cell cycle progression and their possible interplay at the G2/M stage in undifferentiated cells. The identified function of Mat1 and of the TFIIH kinase complex in gene-specific transcriptional repression invites further studies regarding a possible link to Vig and RISC-mediated transcriptional gene silencing.