77 results for conditional models
Abstract:
Nephrin is a transmembrane protein belonging to the immunoglobulin superfamily and is expressed primarily in podocytes, which are highly differentiated epithelial cells needed for primary urine formation in the kidney. Mutations leading to nephrin loss abrogate podocyte morphology and result in massive protein loss into urine and consequent early death in humans carrying specific mutations in this gene. The disease phenotype is closely replicated in the respective mouse models. The purpose of this thesis was to generate novel inducible mouse-lines that allow targeted gene deletion in a time- and tissue-specific manner. A proof-of-principle model for successful gene therapy for this disease was generated, which allowed podocyte-specific transgene replacement to rescue gene-deficient mice from perinatal lethality. Furthermore, the phenotypic consequences of nephrin restoration in the kidney and of nephrin deficiency in the testis, brain and pancreas of rescued mice were investigated. A novel podocyte-specific construct was achieved using standard cloning techniques to provide an inducible tool for in vitro and in vivo gene targeting. Using modified constructs and microinjection procedures, two novel transgenic mouse-lines were generated. First, a mouse-line with doxycycline-inducible expression of Cre recombinase, which allows podocyte-specific gene deletion, was generated. Second, a mouse-line with doxycycline-inducible expression of rat nephrin, which allows podocyte-specific nephrin overexpression, was made. Furthermore, it was possible to rescue nephrin-deficient mice from perinatal lethality by cross-breeding them with the mouse-line with inducible rat nephrin expression, which restored the missing endogenous nephrin only in the kidney after doxycycline treatment. The rescued mice were smaller and infertile, showed genital malformations, and developed distinct histological abnormalities in the kidney with an altered molecular composition of the podocytes.
Histological changes were also found in the testis, cerebellum and pancreas. The expression of another molecule with limited tissue expression, densin, was localized to the plasma membranes of Sertoli cells in the testis by immunofluorescence staining. Densin may be an essential adherens junction protein between Sertoli cells and developing germ cells, and these junctions share a similar protein assembly with kidney podocytes. This single, binary conditional construct serves as a cost- and time-efficient tool to increase the understanding of podocyte-specific key proteins in health and disease. The results verified tightly controlled, inducible podocyte-specific transgene expression in vitro and in vivo, as expected. These novel mouse-lines with doxycycline-inducible Cre recombinase and rat nephrin expression will be useful for conditional gene targeting of essential podocyte proteins and for studying their functions in detail in adult mice. This is important for future diagnostic and pharmacologic development platforms.
Abstract:
This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models such as the GARCH, ACD and CARR models. They are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis, asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that take values on the real line. In the multivariate context, asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods are proposed for all these cases. Chapter 2 considers GARCH models and the modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and unconditional distribution. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients.
The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are found in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis, a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results obtained is established when the information content of the derived volatility forecasts is examined.
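The MEM recursion that nests GARCH, ACD and CARR can be made concrete with a short simulation. The sketch below is my own illustration under assumed parameter values, not code or estimates from the thesis: a conditional mean lambda_t = omega + alpha*x_{t-1} + beta*lambda_{t-1} drives a non-negative series x_t = lambda_t * eps_t with unit-mean multiplicative errors.

```python
import numpy as np

# Illustrative multiplicative error model (MEM) simulation.
# With unit-mean exponential errors this recursion is a CARR(1,1) for a
# non-negative series such as a daily price range; feeding in squared
# returns instead recovers a GARCH-type variance recursion.

def simulate_mem(omega=0.1, alpha=0.2, beta=0.7, n=1000, seed=0):
    rng = np.random.default_rng(seed)
    lam = np.empty(n)
    x = np.empty(n)
    lam[0] = omega / (1.0 - alpha - beta)   # unconditional mean
    x[0] = lam[0] * rng.exponential(1.0)    # unit-mean exponential error
    for t in range(1, n):
        lam[t] = omega + alpha * x[t - 1] + beta * lam[t - 1]
        x[t] = lam[t] * rng.exponential(1.0)
    return x, lam

x, lam = simulate_mem()
```

With alpha + beta < 1 the process is stationary with long-run mean omega / (1 - alpha - beta), which is 1.0 for these assumed values; the sample mean of x should be close to that.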
Abstract:
This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance-type tests and are theoretically sound in that they properly take the uncertainty caused by parameter estimation into account. In Chapter 2 a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange Multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized, and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived based on it.
Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4 the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered so that critical bounds for histogram-type plots as well as Quantile-Quantile and Probability-Probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite-sample size and power properties of the derived tests and also how the tests and related graphical tools based on residuals are applied in practice.
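The definition of quantile residuals used here fits in a few lines. The sketch below is my illustration, not the thesis code: each observation is pushed through its fitted conditional CDF and then through the standard normal quantile function, so that under a correctly specified model the results behave like independent standard normal variates.

```python
import numpy as np
from scipy.stats import norm

def quantile_residuals(y, cond_cdf):
    """Quantile residuals: r_t = Phi^{-1}(F(y_t | past; theta_hat)).

    cond_cdf(t, y_t) must return the fitted conditional CDF of the model,
    evaluated at observation y_t.
    """
    u = np.array([cond_cdf(t, yt) for t, yt in enumerate(y)])
    return norm.ppf(u)

# Example: under the true Gaussian model the quantile residuals reduce
# exactly to the standardized observations.
rng = np.random.default_rng(1)
y = 2.0 + 0.5 * rng.standard_normal(500)
r = quantile_residuals(y, lambda t, yt: norm.cdf(yt, loc=2.0, scale=0.5))
```

Diagnostic tests of the kind derived in Chapters 2 and 3 then check r for non-normality, autocorrelation and conditional heteroscedasticity.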
Abstract:
This paper examines how volatility in financial markets can preferably be modeled. The examination investigates how well models for volatility, both linear and nonlinear, absorb skewness and kurtosis. The examination is done on the Nordic stock markets of Finland, Sweden, Norway and Denmark. Different linear and nonlinear models are applied, and the results indicate that a linear model can almost always be used for modeling the series under investigation, even though nonlinear models perform slightly better in some cases. These results indicate that the markets under study are exposed to asymmetric patterns only to a certain degree. Negative shocks generally have a more prominent effect on the markets, but these effects are not very strong. However, in terms of absorbing skewness and kurtosis, nonlinear models outperform linear ones.
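The asymmetric effect of negative shocks described above can be sketched with the news-impact curve of a GJR-GARCH(1,1) model, one common nonlinear specification; the parameter values below are illustrative assumptions, not estimates from the paper.

```python
# Hedged sketch of the leverage/asymmetry effect: next-period conditional
# variance responds more to a negative shock than to a positive one of
# the same size when gamma > 0.
#   sigma2_next = omega + (alpha + gamma * 1[eps < 0]) * eps**2 + beta * sigma2

def news_impact(eps, omega=0.05, alpha=0.05, gamma=0.10, beta=0.85, sigma2=1.0):
    return omega + (alpha + gamma * (eps < 0)) * eps**2 + beta * sigma2

neg, pos = news_impact(-1.0), news_impact(1.0)  # neg > pos when gamma > 0
```

Setting gamma = 0 collapses this to a symmetric (linear) GARCH news-impact curve, which is the comparison underlying the linear-versus-nonlinear results above.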
Models as epistemic artefacts: Toward a non-representationalist account of scientific representation
Abstract:
This thesis presents an interdisciplinary analysis of how models and simulations function in the production of scientific knowledge. The work is informed by three scholarly traditions: studies on models and simulations in the philosophy of science, so-called micro-sociological laboratory studies within science and technology studies, and cultural-historical activity theory. Methodologically, I adopt a naturalist epistemology and combine philosophical analysis with a qualitative, empirical case study of infectious-disease modelling. This study maintains a dual perspective throughout the analysis: it specifies the modelling practices and examines the models as objects of research. The research questions addressed in this study are: 1) How are models constructed and what functions do they have in the production of scientific knowledge? 2) What is interdisciplinarity in model construction? 3) How do models become a general research tool and why is this process problematic? The core argument is that mediating models, as investigative instruments (cf. Morgan and Morrison 1999), take questions as a starting point, and hence their construction is intentionally guided. This argument applies the interrogative model of inquiry (e.g., Sintonen 2005; Hintikka 1981), which conceives of all knowledge acquisition as a process of seeking answers to questions. The first question addresses simulation models as Artificial Nature, which is manipulated in order to answer the questions that initiated the model building. This account further develops the "epistemology of simulation" (cf. Winsberg 2003) by showing the interrelatedness of researchers and their objects in the process of modelling. The second question clarifies why interdisciplinary research collaboration is demanding and difficult to maintain.
The nature of the impediments to disciplinary interaction is examined by introducing the idea of object-oriented interdisciplinarity, which provides an analytical framework for studying the changes in the degree of interdisciplinarity, the tools and research practices developed to support the collaboration, and the mode of collaboration in relation to the historically mutable object of research. As my interest is in models as interdisciplinary objects, the third research problem seeks to answer the question of how we might characterise these objects, what is typical of them, and what kinds of changes happen in the process of modelling. Here I examine the tension between specified, question-oriented models and more general models, and suggest that the specified models form a group of their own. I call these Tailor-made models, in opposition to the process of building a simulation platform that aims at generalisability and utility for health policy. This tension also underlines the challenge of applying research results (or methods and tools) to discuss and solve problems in decision-making processes.
Abstract:
Hypertension, obesity, dyslipidemia and dysglycemia constitute the metabolic syndrome, a major public health concern associated with cardiovascular mortality. High dietary salt (NaCl) is the most important dietary risk factor for elevated blood pressure. The kidney has a major role in salt-sensitive hypertension and is vulnerable to the harmful effects of increased blood pressure. Elevated serum urate is a common finding in these disorders. As dysregulation of urate excretion is associated with cardiovascular diseases, the present studies aimed to clarify the role of xanthine oxidoreductase (XOR), i.e. xanthine dehydrogenase (XDH) and its post-translational isoform xanthine oxidase (XO), in cardiovascular diseases. XOR yields urate from hypoxanthine and xanthine. Low oxygen levels, among other factors, upregulate XOR. In the present studies, higher renal XOR activity was found in hypertension-prone rats than in controls. Furthermore, NaCl intake increased renal XOR dose-dependently. To clarify whether XOR has any causal role in hypertension, rats were kept on NaCl diets for different periods of time, with or without the XOR inhibitor allopurinol. While allopurinol did not alleviate hypertension, it prevented left ventricular and renal hypertrophy. Nitric oxide synthases (NOS) produce nitric oxide (NO), which mediates vasodilatation. A paucity of NO, produced by NOS inhibition, aggravated hypertension and induced renal XOR, whereas an NO-generating drug alleviated salt-induced hypertension without changes in renal XOR. The Zucker fa/fa rat is an animal model of the metabolic syndrome. These rats developed substantial obesity and modest hypertension and showed increased hepatic and renal XOR activities. XOR was modified by diet and antihypertensive treatment. Cyclosporine (CsA) is a fungal peptide and one of the first-line immunosuppressive drugs used in the management of organ transplantation. Nephrotoxicity ensues from high doses, resulting in hypertension, and limits CsA use.
CsA increased renal XO substantially in salt-sensitive rats on a high-NaCl diet, indicating a possible role for this reactive oxygen species-generating isoform in CsA nephrotoxicity. Renal hypoxia, common to these rodent models of hypertension and obesity, is one of the plausible XOR-inducing factors. Although XOR inhibition did not prevent hypertension, the present experimental data indicate that XOR plays a role in the pathology of salt-induced cardiac and renal hypertrophy.
Abstract:
In genetic epidemiology, population-based disease registries are commonly used to collect genotype or other risk factor information concerning affected subjects and their relatives. This work presents two new approaches to the statistical inference of ascertained data: conditional and full likelihood approaches for diseases with a variable age-at-onset phenotype, using familial data obtained from a population-based registry of incident cases. The aim is to obtain statistically reliable estimates of the general population parameters. The statistical analysis of familial data with variable age at onset becomes more complicated when some of the study subjects are non-susceptible, that is, these subjects never get the disease. A statistical model for variable age at onset with long-term survivors is proposed for studies of familial aggregation, using a latent variable approach, as well as for prospective genetic association studies with candidate genes. In addition, we explore the possibility of a genetic explanation for the observed increase in the incidence of Type 1 diabetes (T1D) in Finland in recent decades and the hypothesis of non-Mendelian transmission of T1D-associated genes. Both classical and Bayesian statistical inference were used in the modelling and estimation. Although this work contains five studies with different statistical models, they all concern data obtained from nationwide registries of T1D and the genetics of T1D. In the analyses of T1D data, non-Mendelian transmission of T1D susceptibility alleles was not observed. In addition, non-Mendelian transmission of T1D susceptibility genes did not provide a plausible explanation for the increase in T1D incidence in Finland. Instead, the Human Leucocyte Antigen associations with T1D were confirmed in a population-based analysis which combines T1D registry information, a reference sample of healthy subjects, and birth cohort information of the Finnish population.
Finally, substantial familial variation in susceptibility to T1D nephropathy was observed. The presented studies show the benefits of sophisticated statistical modelling in exploring risk factors for complex diseases.
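The long-term-survivor (non-susceptible) idea can be illustrated with a minimal mixture likelihood. This is my own simplified sketch, assuming an exponential onset distribution for susceptible subjects, not the actual model of the thesis: a fraction p never develops the disease, an observed onset at age t contributes (1 - p) * f0(t), and a subject still disease-free at age t contributes p + (1 - p) * S0(t).

```python
import numpy as np

def cure_loglik(p, lam, ages, onset):
    """Log-likelihood of an age-at-onset model with long-term survivors.

    p     -- non-susceptible (cure) fraction, 0 < p < 1
    lam   -- exponential onset rate for susceptible subjects (assumption)
    ages  -- age at onset (if onset) or age at last follow-up (if not)
    onset -- boolean: True if the subject developed the disease
    """
    ages = np.asarray(ages, dtype=float)
    onset = np.asarray(onset, dtype=bool)
    s0 = np.exp(-lam * ages)   # survivor function of susceptibles
    f0 = lam * s0              # onset density of susceptibles
    ll_event = np.log((1 - p) * f0[onset])        # observed onsets
    ll_cens = np.log(p + (1 - p) * s0[~onset])    # disease-free subjects
    return ll_event.sum() + ll_cens.sum()
```

In the thesis this kind of likelihood is combined with corrections for registry-based ascertainment of familial data; the sketch only shows the structural role of the non-susceptible fraction, e.g. a subject disease-free at a very old age contributes essentially log(p).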
Abstract:
Gene therapy is a promising novel approach for treating cancers resistant to, or escaping, currently available modalities. Treatment approaches are based on taking advantage of molecular differences between normal and tumor cells. Various strategies are currently in clinical development, with adenoviruses as the most popular vehicle. Recent developments include improved targeting strategies for gene delivery to tumor cells, using tumor-specific promoters or infectivity enhancement. Replication-competent agents, which allow improved tumor penetration and local amplification of the anti-tumor effect, are also a rapidly developing field. Adenoviral cancer gene therapy approaches lack cross-resistance with other treatment options, and therefore synergistic effects are possible. This study focused on the development of adenoviral vectors suitable for the treatment of various gynecologic cancer types, describing the development of the field from non-replicating adenoviral vectors to multiply modified conditionally replicating viruses. Transcriptional targeting of gynecologic cancer cells using the promoter of vascular endothelial growth factor receptor type 1 (flt-1) was evaluated. Flt-1 is not expressed in the liver and is thus an ideal promoter for transcriptional targeting of adenoviruses. Our studies implied that the flt-1 promoter is active in teratocarcinomas and is therefore a good candidate for the development of oncolytic adenoviruses to treat this often problematic disease with poor outcome. A tropism-modified conditionally replicating adenovirus (CRAd), Ad5-Δ24RGD, was studied in gynecologic cancers. Ad5-Δ24RGD is an adenovirus selectively replication-competent in cells defective in the p16/Rb pathway, which include many or most tumor cells. The fiber of Ad5-Δ24RGD contains an integrin-binding arginine-glycine-aspartic acid motif (RGD-4C), allowing coxsackie-adenovirus receptor (CAR)-independent infection of cancer cells.
This approach is attractive because expression levels of CAR are highly variable and often low on primary gynecological cancer cells. Oncolysis could be shown for a wide variety of ovarian and cervical cancer cell lines as well as for primary ovarian cancer cell spheroids, a novel system developed for in vitro analysis of CRAds on primary tumor substrates. Biodistribution was evaluated, and preclinical safety data were obtained by demonstrating the lack of replication in human peripheral blood mononuclear cells. The efficacy of Ad5-Δ24RGD was shown in different orthotopic murine models, including a highly aggressive intraperitoneal model of disseminated ovarian cancer, where Ad5-Δ24RGD resulted in complete eradication of intraperitoneal disease in half of the mice. To further improve the selectivity and specificity of CRAds, triple-targeted oncolytic adenoviruses were cloned, featuring the cyclo-oxygenase-2 (cox-2) promoter, E1A transcomplementation and serotype chimerism. These viruses were evaluated on ovarian cancer cells for specificity and oncolytic potency with regard to two different cox-2 promoter versions and three different variants of E1A (wild type, delta24 and delta2delta24). Ad5/3cox2Ld24 emerged as the best combination due to enhanced selectivity without loss of potency in vitro or in an aggressive intraperitoneal orthotopic ovarian tumor model. In summary, the preclinical therapeutic efficacy of the CRAds tested in this study, taken together with promising biodistribution and safety data, suggests that these CRAds are interesting candidates for translation into clinical trials for gynecologic cancer.
Abstract:
This dissertation examines the short- and long-run impacts of timber prices and other factors affecting non-industrial private forest (NIPF) owners' timber harvesting and timber stocking decisions. The utility-based Faustmann model provides testable hypotheses about the exogenous variables retained in the timber supply analysis. The timber stock function, derived from a two-period biomass harvesting model, is estimated using a two-step GMM estimator based on balanced panel data from 1983 to 1991. Timber supply functions are estimated using a Tobit model adjusted for heteroscedasticity and non-normality of errors, based on panel data from 1994 to 1998. The results show that if specification analysis of the Tobit model is ignored, inconsistency and bias can have a marked effect on the parameter estimates. The empirical results show that the owner's age is the single most important factor determining timber stock, while timber price is the single most important factor in the harvesting decision. The results of the timber supply estimations can be interpreted using the utility-based Faustmann model of a forest owner who values growing timber in situ.
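The censoring structure behind the Tobit estimation can be written down directly. The following is a generic textbook Tobit log-likelihood under homoscedastic normal errors, a sketch of the starting point rather than the dissertation's adjusted specification (which corrects for heteroscedasticity and non-normality): owners who do not harvest contribute the probability that latent supply is non-positive, while harvesting owners contribute a normal density.

```python
import numpy as np
from scipy.stats import norm

def tobit_loglik(beta, sigma, X, y):
    """Standard Tobit log-likelihood with censoring at zero.

    Latent supply: y* = X @ beta + e,  e ~ N(0, sigma^2); we observe
    y = max(y*, 0). Censored observations (y <= 0) contribute
    log Phi(-x'beta / sigma); uncensored ones contribute the log normal
    density of y at mean x'beta.
    """
    xb = X @ beta
    cens = y <= 0
    ll = np.where(
        cens,
        norm.logcdf(-xb / sigma),
        norm.logpdf(y, loc=xb, scale=sigma),
    )
    return ll.sum()
```

Maximizing this over (beta, sigma) gives the baseline estimator; ignoring the censoring and running least squares on the positive harvests is exactly the kind of misspecification that produces the inconsistency and bias noted above.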
Abstract:
An important safety aspect to be considered when foods are enriched with phytosterols and phytostanols is the oxidative stability of these lipid compounds, i.e. their resistance to oxidation and thus to the formation of oxidation products. This study concentrated on producing scientific data to support this safety evaluation process. In the absence of an official method for analyzing phytosterol/stanol oxidation products, we first developed a new gas chromatographic-mass spectrometric (GC-MS) method. We then investigated factors affecting these compounds' oxidative stability in lipid-based food models in order to identify critical conditions under which significant oxidation reactions may occur. Finally, the oxidative stability of phytosterols and stanols in enriched foods during processing and storage was evaluated. The enriched foods covered a range of commercially available phytosterol/stanol ingredients, different heat treatments during food processing, and different multiphase food structures. The GC-MS method was a powerful tool for measuring oxidative stability. Data obtained in the food model studies revealed that the critical factors for the formation and distribution of the main secondary oxidation products were sterol structure, reaction temperature, reaction time, and lipid matrix composition. Under all conditions studied, phytostanols, as saturated compounds, were more stable than unsaturated phytosterols. In addition, esterification made phytosterols more reactive than free sterols at low temperatures, while at high temperatures the situation was reversed. Generally, oxidation reactions were more significant at temperatures above 100°C. At lower temperatures, the significance of these reactions increased with increasing reaction time.
The effect of lipid matrix composition was dependent on temperature; at temperatures above 140°C, phytosterols were more stable in an unsaturated lipid matrix, whereas below 140°C they were more stable in a saturated lipid matrix. At 140°C, phytosterols oxidized at the same rate in both matrices. Regardless of temperature, phytostanols oxidized more in an unsaturated lipid matrix. Generally, the distribution of oxidation products seemed to be associated with the phase of overall oxidation. 7-ketophytosterols accumulated when oxidation had not yet reached the dynamic state. Once this state was attained, the major products were 5,6-epoxyphytosterols and 7-hydroxyphytosterols. The changes observed in phytostanol oxidation products were not as informative, since all stanol oxides quantified represented hydroxyl compounds. The formation of these secondary oxidation products did not account for all of the phytosterol/stanol losses observed during the heating experiments, indicating the presence of dimeric, oligomeric or other oxidation products, especially when free phytosterols and stanols were heated at high temperatures. Commercially available phytosterol/stanol ingredients were stable during such food processes as spray-drying and ultra-high-temperature (UHT)-type heating and subsequent long-term storage. Pan-frying, however, induced phytosterol oxidation and was classified as a rather deteriorative process. Overall, the findings indicated that although phytosterols and stanols are stable under normal food processing conditions, attention should be paid to their use in frying. Complex interactions with other food constituents also suggested that when new phytosterol-enriched foods are developed, their oxidative stability must first be established. The results presented here will assist in choosing safe conditions for phytosterol/stanol enrichment.
Abstract:
Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where the parametric assumptions of standard statistical models would be too rigid, more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood may be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox developed herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that the same reasoning can also be applied under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design. This is illustrated using a generic two-phase cohort sampling design as an example.
The alternative approaches presented for analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied for a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible also in this case.
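The monotonic regression component can be illustrated with its simplest frequentist counterpart. The thesis formulates a Bayesian non-parametric monotonic regression; the code below is only a point-estimate analogue of my own, the pool-adjacent-violators algorithm (PAVA), which fits the least-squares non-decreasing curve to responses ordered by a single covariate.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit to y,
    assumed already sorted by the covariate."""
    vals, wts = [], []
    for v in y:
        vals.append(float(v))
        wts.append(1.0)
        # merge backwards while the monotonicity constraint is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-2] + wts[-1]
            m = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / w
            vals[-2:] = [m]
            wts[-2:] = [w]
    fit = []
    for v, w in zip(vals, wts):
        fit.extend([v] * int(w))
    return np.array(fit)

# pava([1, 3, 2, 4]) pools the violating pair (3, 2) into its mean,
# giving the non-decreasing fit [1, 2.5, 2.5, 4].
```

A Bayesian version would instead place a prior over monotone functions and report posterior predictive distributions, which is what makes the approach usable for the optimal sequential treatment application described above.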