34 results for Conditional Distribution

in Helda - Digital Repository of University of Helsinki


Relevance:

60.00%

Publisher:

Abstract:

Stochastic filtering is, in general, the estimation of indirectly observed states given observed data. In the setting of a probability space, this amounts to working with conditional expectations, which provide the most accurate estimates given the observations. In this thesis, I present the theory of filtering for two different kinds of observation process: the first is a diffusion process, discussed in the first chapter, while the third chapter treats the second, a counting process. Most of the fundamental results of stochastic filtering are stated in the form of equations, such as the unnormalized Zakai equation, which leads to the Kushner-Stratonovich equation. The latter, also known as the normalized Zakai equation or, equivalently, the Fujisaki-Kallianpur-Kunita (FKK) equation, highlights how the estimate differs between a diffusion observation process and a counting observation process. I also give an example for the linear Gaussian case, which underlies the construction of the so-called Kalman-Bucy filter. Since the unnormalized and normalized Zakai equations are expressed in terms of the conditional distribution, a density for these distributions is derived from them and stated in Kushner's theorem. However, Kushner's theorem takes the form of a stochastic partial differential equation whose existence and uniqueness of solution must be verified; this is covered in the second chapter.
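
The linear Gaussian case mentioned above is what underlies the Kalman-Bucy filter. As a rough illustration, the sketch below implements its discrete-time analogue (the standard Kalman filter) for a scalar state observed through noise; the model, parameter values and function name are assumptions made for illustration, not material from the thesis.

```python
import numpy as np

def kalman_filter(ys, a=0.9, c=1.0, q=0.1, r=0.5, m0=0.0, p0=1.0):
    """Scalar Kalman filter for x_t = a*x_{t-1} + w_t, y_t = c*x_t + v_t,
    with w_t ~ N(0, q) and v_t ~ N(0, r); returns filtered means and variances."""
    m, p = m0, p0
    means, variances = [], []
    for y in ys:
        # Predict: propagate the conditional distribution through the state equation
        m_pred = a * m
        p_pred = a * a * p + q
        # Update: condition on the new observation (Bayes' rule for Gaussians)
        k = p_pred * c / (c * c * p_pred + r)   # Kalman gain
        m = m_pred + k * (y - c * m_pred)       # conditional mean E[x_t | y_1..y_t]
        p = (1.0 - k * c) * p_pred              # conditional variance
        means.append(m)
        variances.append(p)
    return np.array(means), np.array(variances)

# Illustrative usage on simulated data
rng = np.random.default_rng(0)
x, ys = 0.0, []
for _ in range(100):
    x = 0.9 * x + rng.normal(scale=np.sqrt(0.1))
    ys.append(1.0 * x + rng.normal(scale=np.sqrt(0.5)))
means, variances = kalman_filter(np.array(ys))
```

The Kalman-Bucy filter treated in the thesis is the continuous-time counterpart of this predict-update recursion, with the conditional mean and variance evolving according to differential equations.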

Relevance:

30.00%

Publisher:

Abstract:

A better understanding of stock price changes is important in guiding many economic activities. Since prices rarely change without good reason, many researchers have searched for related explanatory variables. This book seeks answers from prices themselves by relating price changes to their conditional moments, on the grounds that prices are the product of a complex psychological and economic process and that their conditional moments ultimately derive from these psychological and economic shocks. Using information about conditional moments is therefore an attractive alternative to using other selective financial variables in explaining price changes. The first paper examines the relation between the conditional mean and the conditional variance using information about moments in three types of conditional distribution; it finds that the significance of the estimated mean-variance ratio can be affected by the assumed distributions and by time variation in skewness. The second paper decomposes conditional industry volatility into a concurrent market component and an industry-specific component; it finds that market volatility is on average responsible for a rather small share of total industry volatility (6 to 9 percent in the UK and 2 to 3 percent in Germany). The third paper studies heteroskedasticity in stock returns through an ARCH process supplemented with a set of conditioning information variables; it finds that the heteroskedasticity in stock returns takes several forms, including deterministic changes in variances due to seasonal factors, random adjustments in variances due to market and macro factors, and ARCH effects driven by past information. The fourth paper examines the role of higher moments, especially skewness and kurtosis, in determining expected returns; it finds that total skewness and total kurtosis are more relevant non-beta risk measures and that they are costly to diversify, either because their desirable parts may be eliminated or because diversification strategies based on them are unsustainable.
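
As a concrete illustration of relating price changes to their conditional moments, the sketch below implements a simple GARCH(1,1)-in-mean style recursion in which the conditional variance also drives the conditional mean; the linear mean-variance relation, the parameter values and the function name are illustrative assumptions, not the specifications estimated in the book.

```python
import numpy as np

def garch_in_mean(returns, omega=0.05, alpha=0.08, beta=0.90, lam=0.1):
    """Conditional variance h_t = omega + alpha*e_{t-1}^2 + beta*h_{t-1};
    conditional mean mu_t = lam * h_t (a simple mean-variance relation)."""
    h = np.var(returns)            # initialize with the sample variance
    e_prev = 0.0
    mus, hs = [], []
    for r in returns:
        h = omega + alpha * e_prev ** 2 + beta * h   # conditional variance
        mu = lam * h                                  # conditional mean tied to the variance
        e_prev = r - mu                               # residual fed into the next step
        mus.append(mu)
        hs.append(h)
    return np.array(mus), np.array(hs)

# Illustrative usage on simulated returns
rng = np.random.default_rng(1)
rets = rng.normal(scale=1.0, size=250)
mu_t, h_t = garch_in_mean(rets)
```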

Relevance:

20.00%

Publisher:

Abstract:

Basement membranes are specialized sheets of extracellular matrix found in contact with epithelia, endothelia, and certain isolated cells. They support tissue architecture and regulate cell behaviour. Laminins are among the main constituents of basement membranes. Due to differences between laminin isoforms, laminins confer structural and functional diversity to basement membranes. The first aim of this study was to gain insights into the potential functions of the then least characterized laminins, alpha4 chain laminins, by evaluating their distribution in human tissues. We thus created a monoclonal antibody specific for laminin alpha4 chain. By immunohistochemistry, alpha4 chain laminins were primarily localized to basement membranes of blood vessel endothelia, skeletal, heart, and smooth muscle cells, nerves, and adipocytes. In addition, alpha4 chain laminins were found in the region of certain epithelial basement membranes in the epidermis, salivary gland, pancreas, esophagus, stomach, intestine, and kidney. Because of the consistent presence of alpha4 chain laminins in endothelial basement membranes of blood vessels, we evaluated the potential roles of endothelial laminins in blood vessels, lymphatic vessels, and carcinomas. Human endothelial cells produced alpha4 and alpha5 chain laminins. In quantitative and morphological adhesion assays, human endothelial cells barely adhered to alpha4 chain-containing laminin-411. The weak interaction of endothelial cells with laminin-411 appeared to be mediated by alpha6beta1 integrin. The alpha5 chain-containing laminin-511 promoted endothelial cell adhesion better than laminin-411, but it did not promote the formation of cell-extracellular matrix adhesion complexes. The adhesion of endothelial cells to laminin-511 appeared to be mediated by Lutheran glycoprotein together with beta1 and alphavbeta3 integrins. The results suggest that these laminins may induce a migratory phenotype in endothelial cells. In lymphatic capillaries, endothelial basement membranes showed immunoreactivity for laminin alpha4, beta1, beta2, and gamma1 chains, type IV and XVIII collagens, and nidogen-1. Considering the assumed inability of alpha4 chain laminins to polymerize and to promote basement membrane assembly, the findings may in part explain the incomplete basement membrane formation in these vessels. Lymphatic capillaries of ovarian carcinomas showed immunoreactivity also for laminin alpha5 chain and its receptor Lutheran glycoprotein, emphasizing a difference between normal and ovarian carcinoma lymphatic capillaries. In renal cell carcinomas, immunoreactivity for laminin alpha4 chain was found in stroma and basement membranes of blood vessels. In most tumours, immunoreactivity for laminin alpha4 chain was also observed in the basement membrane region of tumour cell islets. Renal carcinoma cells produced alpha4 chain laminins. Laminin-411 did not promote adhesion of renal carcinoma cells, but inhibited their adhesion to fibronectin. Renal carcinoma cells migrated more on laminin-411 than on fibronectin. The results suggest that alpha4 chain laminins have a counteradhesive function, and may thus have a role in detachment and invasion of renal carcinoma cells.

Relevance:

20.00%

Publisher:

Abstract:

Nephrin is a transmembrane protein belonging to the immunoglobulin superfamily and is expressed primarily in podocytes, which are highly differentiated epithelial cells needed for primary urine formation in the kidney. Mutations leading to nephrin loss abrogate podocyte morphology and result in massive protein loss into the urine and consequent early death in humans carrying specific mutations in this gene. The disease phenotype is closely replicated in the corresponding mouse models. The purpose of this thesis was to generate novel inducible mouse lines that allow targeted gene deletion in a time- and tissue-specific manner. A proof-of-principle model of successful gene therapy for this disease was generated, in which podocyte-specific transgene replacement rescued gene-deficient mice from perinatal lethality. Furthermore, the phenotypic consequences of nephrin restoration in the kidney, and of nephrin deficiency in the testis, brain and pancreas, were investigated in the rescued mice. A novel podocyte-specific construct was produced using standard cloning techniques to provide an inducible tool for in vitro and in vivo gene targeting. Using modified constructs and microinjection procedures, two novel transgenic mouse lines were generated. First, a mouse line with doxycycline-inducible expression of Cre recombinase, allowing podocyte-specific gene deletion, was generated. Second, a mouse line with doxycycline-inducible expression of rat nephrin, allowing podocyte-specific nephrin overexpression, was made. Furthermore, it was possible to rescue nephrin-deficient mice from perinatal lethality by cross-breeding them with the mouse line with inducible rat nephrin expression, which restored the missing endogenous nephrin only in the kidney after doxycycline treatment. The rescued mice were smaller, were infertile, showed genital malformations and developed distinct histological abnormalities in the kidney with an altered molecular composition of the podocytes. Histological changes were also found in the testis, cerebellum and pancreas. The expression of another molecule with limited tissue expression, densin, was localized to the plasma membranes of Sertoli cells in the testis by immunofluorescence staining. Densin may be an essential adherens junction protein between Sertoli cells and developing germ cells, and these junctions share a similar protein assembly with kidney podocytes. This single, binary conditional construct serves as a cost- and time-efficient tool for increasing the understanding of podocyte-specific key proteins in health and disease. The results verified tightly controlled, inducible, podocyte-specific transgene expression in vitro and in vivo, as expected. These novel mouse lines with doxycycline-inducible Cre recombinase and rat nephrin expression will be useful for conditional gene targeting of essential podocyte proteins and for studying their functions in detail in adult mice. This is important for future diagnostic and pharmacological development platforms.

Relevance:

20.00%

Publisher:

Abstract:

Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well matter little if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where the parametric assumptions of standard statistical models would be too rigid, more flexible model formulation, combined with fully probabilistic inference, can be achieved with a hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two apply hierarchical Bayesian modeling. Because maximum likelihood can be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results from the original articles with the tools developed here, to show that they remain justifiable under this more general framework. The assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions, and it is argued that the same reasoning applies under sampling from a finite population. The main emphasis is on probability-based inference under incomplete observation due to study design, illustrated with a generic two-phase cohort sampling design. The alternative approaches presented for the analysis of such a design are full likelihood, which uses all observed information, and conditional likelihood, which is restricted to a completely observed set and conditions on the rule that generated that set. Conditional likelihood inference is also applied to a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates together with a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible in this case as well.
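
The remark that maximum likelihood can be read as a Gaussian approximation at the posterior mode under a flat prior is easy to check numerically; the binomial example below, with all of its numbers, is an illustrative assumption and not taken from the thesis.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

# Binomial data: y successes out of n trials; flat prior on theta in (0, 1)
y, n = 37, 120

def neg_log_lik(theta):
    return -(y * np.log(theta) + (n - y) * np.log(1.0 - theta))

# Posterior mode (= MLE under the flat prior)
res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
theta_hat = res.x

# Gaussian (Laplace) approximation from the curvature of the log-likelihood at the mode
obs_info = y / theta_hat**2 + (n - y) / (1.0 - theta_hat) ** 2   # observed information
laplace = stats.norm(loc=theta_hat, scale=np.sqrt(1.0 / obs_info))

# Exact posterior under the flat (uniform) prior is Beta(y + 1, n - y + 1)
exact = stats.beta(y + 1, n - y + 1)

# Compare 95% intervals from the two approaches
print("Laplace/ML interval:", laplace.ppf([0.025, 0.975]))
print("Exact posterior:    ", exact.ppf([0.025, 0.975]))
```

With this much data the Laplace/ML interval and the exact posterior interval nearly coincide, which is the practical point made above: the choice between the two modes of inference matters little when the likelihood dominates the prior.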

Relevance:

20.00%

Publisher:

Abstract:

A composition operator is a linear operator that precomposes any given function with another function, which is held fixed and called the symbol of the composition operator. This dissertation studies such operators and questions related to their theory in the case when the functions to be composed are analytic in the unit disc of the complex plane. Thus the subject of the dissertation lies at the intersection of analytic function theory and operator theory. The work contains three research articles. The first article is concerned with the value distribution of analytic functions. In the literature there are two different conditions which characterize when a composition operator is compact on the Hardy spaces of the unit disc. One condition is in terms of the classical Nevanlinna counting function, defined inside the disc, and the other condition involves a family of certain measures called the Aleksandrov (or Clark) measures and supported on the boundary of the disc. The article explains the connection between these two approaches from a function-theoretic point of view. It is shown that the Aleksandrov measures can be interpreted as a kind of boundary limit of the Nevanlinna counting function as one approaches the boundary from within the disc. The other two articles investigate the compactness properties of the difference of two composition operators, a question that bears on the structure of the set of all composition operators. The second article considers this question on the Hardy and related spaces of the disc, and employs Aleksandrov measures as its main tool. The results obtained generalize those existing for the case of a single composition operator. However, there are some peculiarities which do not occur in the theory of a single operator. The third article studies the compactness of the difference operator on the Bloch and Lipschitz spaces, improving and extending results given in the previous literature. Moreover, in this connection one obtains a general result which characterizes the compactness and weak compactness of the difference of two weighted composition operators on certain weighted Hardy-type spaces.
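
For orientation, the objects referred to above can be written out explicitly. The display below collects the standard definitions and Shapiro's Hardy-space compactness criterion as they are commonly stated in the literature; it is background material, not a result of the dissertation.

```latex
% Composition operator with analytic symbol \varphi : \mathbb{D} \to \mathbb{D}
C_\varphi f = f \circ \varphi .

% Nevanlinna counting function of \varphi, defined inside the disc
N_\varphi(w) = \sum_{z \in \varphi^{-1}(w)} \log \frac{1}{|z|},
  \qquad w \in \mathbb{D} \setminus \{\varphi(0)\}.

% Shapiro's criterion: C_\varphi is compact on the Hardy space H^2 if and only if
\lim_{|w| \to 1^-} \frac{N_\varphi(w)}{\log (1/|w|)} = 0 .

% Aleksandrov (Clark) measures \mu_\alpha, \alpha \in \partial\mathbb{D},
% supported on the boundary and defined by the Poisson-type representation
\frac{1 - |\varphi(z)|^2}{|\alpha - \varphi(z)|^2}
  = \int_{\partial\mathbb{D}} \frac{1 - |z|^2}{|\zeta - z|^2} \, d\mu_\alpha(\zeta),
  \qquad z \in \mathbb{D} .
```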

Relevance:

20.00%

Publisher:

Abstract:

This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Data of this kind can be modeled with so-called multiplicative error models (MEM), which nest several well-known time series models such as GARCH, ACD and CARR models and are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these well-documented phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables taking values on the real line. In the multivariate context, asymmetries can be observed in the marginal distributions as well as in the relationships between the modeled variables. New methods are proposed for all of these cases. Chapter 2 considers GARCH models and the modeling of the returns of two stock market indices. The chapter introduces the generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and the unconditional distribution. In particular, two special cases of the GARCH-GH model that describe the data most accurately are proposed; they are found to improve the fit of the model compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed in Value-at-Risk applications. Chapter 3 provides both theoretical and empirical contributions. In this chapter the mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. Conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients. The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are demonstrated in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence between the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results is established by examining the information content of the derived volatility forecasts.
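
The multiplicative error structure underlying these models can be sketched in a few lines: a non-negative variable, such as a daily price range, equals its conditional mean times a positive i.i.d. error, and the conditional mean follows a CARR/ACD-type recursion. The parameter values and the unit-mean exponential error below are illustrative assumptions, not the specifications estimated in the thesis.

```python
import numpy as np

def simulate_mem(n, omega=0.1, alpha=0.2, beta=0.7, seed=0):
    """Simulate x_t = mu_t * eps_t with mu_t = omega + alpha*x_{t-1} + beta*mu_{t-1},
    where eps_t is i.i.d., non-negative and has unit mean (here: standard exponential)."""
    rng = np.random.default_rng(seed)
    mu = omega / (1.0 - alpha - beta)   # start from the unconditional mean
    x_prev = mu
    xs, mus = [], []
    for _ in range(n):
        mu = omega + alpha * x_prev + beta * mu   # conditional mean recursion
        x = mu * rng.exponential(1.0)             # multiplicative, unit-mean error
        xs.append(x)
        mus.append(mu)
        x_prev = x
    return np.array(xs), np.array(mus)

# Illustrative usage: simulate 500 "daily ranges" and their conditional means
ranges, cond_means = simulate_mem(500)
```

Interpreting x_t as a squared return, a trade duration or a daily range gives recursions of GARCH, ACD and CARR type respectively, which is the sense in which the MEM class nests these models.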

Relevance:

20.00%

Publisher:

Abstract:

Volatility is central in options pricing and risk management. It reflects the uncertainty of investors and the inherent instability of the economy. Time series methods are among the most widely applied scientific methods to analyze and predict volatility. Very frequently sampled data contain much valuable information about the different elements of volatility and may ultimately reveal the reasons for time-varying volatility. The use of such ultra-high-frequency data is common to all three essays of the dissertation, which belongs to the field of financial econometrics. The first essay uses wavelet methods to study the time-varying behavior of scaling laws and long memory in the five-minute volatility series of Nokia on the Helsinki Stock Exchange around the bursting of the IT bubble. The essay is motivated by earlier findings which suggest that different scaling laws may apply to intraday time scales and to larger time scales, implying that the so-called annualized volatility depends on the data sampling frequency. The empirical results confirm the appearance of time-varying long memory and of different scaling laws that, to a significant extent, can be attributed to investor irrationality and to an intraday volatility periodicity called the New York effect. The findings have potentially important consequences for options pricing and risk management, which commonly assume constant memory and scaling. The second essay investigates modelling the durations between trades in stock markets. Durations convey information about investor intentions and provide an alternative view of volatility. Generalizations of standard autoregressive conditional duration (ACD) models are developed to meet needs observed in previous applications of the standard models. According to the empirical results, based on data on actively traded stocks on the New York Stock Exchange and the Helsinki Stock Exchange, the proposed generalization clearly outperforms the standard models and also performs well in comparison to another recently proposed alternative. The distribution used to derive the generalization may also prove valuable in other areas of risk management. The third essay studies empirically the effect of decimalization on volatility and market microstructure noise. Decimalization refers to the change from fractional to decimal pricing and was carried out on the New York Stock Exchange in January 2001. The methods used here are more accurate than those in earlier studies and put more weight on market microstructure. The main result is that decimalization decreased observed volatility by reducing noise variance, especially for the most actively traded stocks. The results are helpful for risk management and for the design of market mechanisms.
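
The dependence of annualized volatility on the sampling frequency can be illustrated with a short computation: realized volatility is estimated from log returns sampled at different intervals and annualized with the usual square-root-of-time rule. The simulated series and all parameter values below are assumptions made for illustration, not the Nokia data analysed in the essay.

```python
import numpy as np

def annualized_vol(prices, step, periods_per_year):
    """Realized volatility from log returns sampled every `step` observations,
    annualized with the square-root-of-time rule."""
    sampled = prices[::step]
    rets = np.diff(np.log(sampled))
    return rets.std(ddof=1) * np.sqrt(periods_per_year / step)

# Illustrative usage: one "year" of five-minute prices (assume ~25,000 observations)
rng = np.random.default_rng(2)
log_p = np.cumsum(rng.normal(scale=0.001, size=25_000))
prices = np.exp(log_p)

for step in (1, 6, 78):   # five-minute, half-hour, roughly daily sampling
    print(step, annualized_vol(prices, step, periods_per_year=25_000))
```

For this i.i.d. random-walk simulation the three estimates should roughly agree, reflecting benchmark square-root scaling; under long memory or the intraday periodicity documented in the essay they would diverge, which is why the annualized figure can depend on the sampling frequency.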

Relevance:

20.00%

Publisher:

Abstract:

In aquatic systems, the ability of both predator and prey to detect each other may be impaired by turbidity. This could lead to significant changes in the trophic interactions of lake food webs. Most fish use vision for predation, and the location of prey can be strongly influenced by light level and by the clarity of the water. Turbidity is an optical property of water caused by light being scattered and absorbed by particles and molecules. It is highly variable in lakes, owing to seasonal changes in suspended sediments, algal blooms and wind-driven resuspension of sediments, especially in shallow waters. There is evidence that human activity has increased erosion, leading to increased turbidity in aquatic systems. Turbidity could also play a significant role in the distribution of fish: it can act as cover for small fish and reduce predation risk. Diel horizontal migration by fish is common in shallow lakes and is considered a consequence either of optimal foraging behaviour or of a trade-off between foraging and predator avoidance. In turbid lakes, diel horizontal migration patterns could differ, since turbidity can itself act as a refuge and affect predator-prey interactions. Laboratory experiments were conducted with perch (Perca fluviatilis L.) and white bream (Abramis björkna (L.)) to clarify the effects of turbidity on their feeding. Additionally, to clarify the effects of turbidity on a predator preying on different types of prey, pikeperch larvae (Sander lucioperca (L.)), Daphnia pulex (Leydig), Sida crystallina (O.F. Müller) and Chaoborus flavicans (Meigen) were used as prey in different experiments. To clarify the role of turbidity in the distribution and diel horizontal migration of perch, roach (Rutilus rutilus (L.)) and white bream, field studies were conducted in shallow turbid lakes. A clear and a turbid shallow lake were compared to investigate the distribution of perch and roach over a 15-year study period. The feeding efficiency of perch and white bream was not significantly affected by increases in clay turbidity up to 50 NTU. The perch experiments with pikeperch larvae suggested that clay turbidity could act as a refuge, especially at turbidity levels above 50 NTU. Perch experiments with different prey types suggested that pikeperch larvae are probably better able to use turbidity as a refuge than Daphnia, and that an increase in turbidity probably has a stronger effect on perch preying on plant-attached prey. The main findings of the thesis show that turbidity can play a significant role in the distribution of fish. Perch and roach could use turbidity as a refuge when macrophytes disappear, while small perch may also use high turbidity as a refuge when macrophytes are present. Floating-leaved macrophytes are probably good refuges for small fish in clay-turbid lakes, providing a certain level of turbidity and a structure that is not too complex. The results shed light on predator-prey interactions in turbid environments. The turbidity of water should be taken into account when studying the diel horizontal migrations and distribution of fish in shallow lakes.

Relevance:

20.00%

Publisher:

Abstract:

In this study we used electrospray ionization mass spectrometry to determine the phospholipid class and molecular species compositions of bacteriophages PM2, PRD1, Bam35 and phi6 as well as of their hosts. To obtain compositional data on the individual leaflets, the phospholipid transbilayer distribution in the viral membranes was also studied. We found that 1) the membranes of all the studied bacteriophages are enriched in phosphatidylglycerol (PG) compared to the host membranes, 2) the molecular species compositions of the phage and host membranes are similar, and 3) the phospholipids in the viral membranes are distributed asymmetrically, with PG enriched in the outer leaflet and phosphatidylethanolamine in the inner one (except in Bam35). Alternative models for the selective incorporation of phospholipids into phages and for the origins of the asymmetric phospholipid transbilayer distribution are discussed. Notably, the present data are also useful for constructing high-resolution structural models of bacteriophages, since diffraction methods cannot provide a detailed structure of the membrane owing to the high mobility of the lipids and the lack of symmetric organization of the membrane proteins.

Relevance:

20.00%

Publisher:

Abstract:

Climate change contributes directly or indirectly to changes in species distributions, and there is very high confidence that recent climate warming is already affecting ecosystems. The Arctic has experienced the greatest regional warming in recent decades, and the trend is continuing. However, studies on northern ecosystems are scarce compared to more southerly regions. A better understanding of past and present environmental change is needed in order to forecast the future. Multivariate methods were used to explore the distributional patterns of chironomids in 50 shallow (≤ 10 m) lakes in relation to 24 variables determined in northern Fennoscandia, in the ecotonal area from the boreal forest in the south to the orohemiarctic zone in the north. The highest taxon richness was noted at middle elevations, around 400 m a.s.l. Significantly lower values were observed in cold lakes situated in the tundra zone. Lake water alkalinity had the strongest positive correlation with taxon richness. Many taxa had a preference for lakes in either the tundra area or the forested area. The variation in the chironomid abundance data was best correlated with sediment organic content (LOI), lake water total organic carbon content, pH and air temperature, LOI being the strongest variable. Three major lake groups were separated on the basis of their chironomid assemblages: (i) small and shallow organic-rich lakes, (ii) large and base-rich lakes, and (iii) cold and clear oligotrophic tundra lakes. The environmental variables best discriminating the lake groups were LOI, taxon richness, and Mg. When repeated, this kind of approach could be useful and efficient in monitoring the effects of global change on species ranges. Many species of fast-spreading insects, including chironomids, show a remarkable ability to track environmental changes. Based on this ability, past environmental conditions have been reconstructed from their chitinous remains in lake sediment profiles. In order to study the Holocene environmental history of subarctic aquatic systems, and to reconstruct past temperatures quantitatively at or near the treeline, long sediment cores covering the last 10,000 years (the Holocene) were collected from three lakes. Lower temperature values than expected on the basis of the presence of pine in the catchment during the mid-Holocene were reconstructed from a lake with a great water volume and depth; this lake provided a thermal refuge for profundal, cold-adapted taxa during the warm period. In a shallow lake, the decrease in the reconstructed temperatures during the late Holocene may reflect an indirect response of the midges to climate change through, for example, pH change. The results from the three lakes indicated that the response of chironomids to climate has been more or less indirect. However, concurrent shifts in the assemblages of chironomids and vegetation in two lakes during the Holocene indicated that the midges, together with the terrestrial vegetation, responded to the same ultimate cause, which was most likely Holocene climate change. This was also supported by the similarity of the long-term trends in the faunal succession of the chironomid assemblages in several lakes in the area. In northern Finnish Lapland the distribution of chironomids was significantly correlated with physical and limnological factors that are most likely to change as a result of future climate change. The indirect and individualistic response of aquatic systems to past climate change, as reconstructed from the chironomid assemblages, suggests that in the future the lake ecosystems of the north will not respond to global climate change in a single predictable way. Lakes in the north may respond to global climate change in various ways that depend on the initial characteristics of the catchment area and of the lake.