967 results for "Discrete Conditional Phase-type model"
Abstract:
Recurrent wheezing or asthma is a common problem in children that has increased considerably in prevalence in the past few decades. The causes and underlying mechanisms are poorly understood, and it is thought that a number of distinct diseases causing similar symptoms are involved. Due to the lack of a biologically founded classification system, children are classified according to their observed disease-related features (symptoms, signs, measurements) into phenotypes. The objectives of this PhD project were a) to develop tools for analysing phenotypic variation of a disease, and b) to examine phenotypic variability of wheezing among children by applying these tools to existing epidemiological data. A combination of graphical methods (multivariate correspondence analysis) and statistical models (latent variable models) was used. In a first phase, a model for discrete variability (latent class model) was applied to data on symptoms and measurements from an epidemiological study to identify distinct phenotypes of wheezing. In a second phase, the modelling framework was expanded to include continuous variability (e.g. along a severity gradient) and combinations of discrete and continuous variability (factor models and factor mixture models). The third phase focused on validating the methods using simulation studies. The main body of this thesis consists of 5 articles (3 published, 1 submitted and 1 to be submitted) including applications, methodological contributions and a review. The main findings and contributions were: 1) The application of a latent class model to epidemiological data (symptoms and physiological measurements) yielded plausible phenotypes of wheezing with distinguishing characteristics that have previously been used as phenotype-defining characteristics. 2) A method was proposed for including responses to conditional questions (e.g. questions on severity or triggers of wheezing asked only of children with wheeze) in multivariate modelling. 3) A panel of clinicians was set up to agree on a plausible model for wheezing diseases. The model can be used to generate datasets for testing the modelling approach. 4) A critical review of methods for defining and validating phenotypes of wheeze in children was conducted. 5) The simulation studies showed that a parsimonious parameterisation of the models is required to identify the true underlying structure of the data. The developed approach can deal with some challenges of real-life cohort data such as variables of mixed mode (continuous and categorical), missing data and conditional questions. If carefully applied, the approach can be used to identify whether the underlying phenotypic variation is discrete (classes), continuous (factors) or a combination of these. These methods could help improve the precision of research into causes and mechanisms and contribute to the development of a new classification of wheezing disorders in children and other diseases which are difficult to classify.
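The latent class step described above can be illustrated with a minimal EM sketch for binary symptom indicators. This is an illustrative toy (synthetic data, two classes, three binary items), not the thesis's actual model or software; all names and parameter values are invented for the example.

```python
import numpy as np

def latent_class_em(X, n_classes=2, n_iter=200, seed=0):
    """Minimal EM for a latent class model with binary items.

    X : (n_subjects, n_items) array of 0/1 responses.
    Returns class weights pi (K,), item-response probabilities
    theta (K, J) and posterior class memberships post (n, K).
    """
    rng = np.random.default_rng(seed)
    n, n_items = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)
    theta = rng.uniform(0.25, 0.75, size=(n_classes, n_items))
    for _ in range(n_iter):
        # E-step: posterior P(class | responses) from Bernoulli likelihoods
        log_lik = (X[:, None, :] * np.log(theta)
                   + (1 - X[:, None, :]) * np.log(1 - theta)).sum(axis=2)
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and item probabilities
        # (light Beta(0.5, 0.5) smoothing keeps theta strictly inside (0, 1))
        pi = post.mean(axis=0)
        theta = (post.T @ X + 0.5) / (post.sum(axis=0)[:, None] + 1.0)
    return pi, theta, post

# Synthetic data: two "phenotypes" with high vs. low symptom probabilities
rng = np.random.default_rng(1)
true_p = np.array([[0.9, 0.8, 0.7],    # class 0: frequent symptoms
                   [0.1, 0.2, 0.1]])   # class 1: rare symptoms
z = rng.integers(0, 2, size=400)       # true (unobserved) class labels
X = (rng.random((400, 3)) < true_p[z]).astype(int)

pi, theta, post = latent_class_em(X)
```

The posterior memberships `post` play the role of the phenotype assignments discussed in the abstract; factor and factor mixture models extend the same E/M structure with continuous latent variables.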
Abstract:
The absolute sign of local polarity in relation to the biological growth direction has been investigated for tooth cementum using phase-sensitive second harmonic generation microscopy (PS-SHGM) and a crystal of 2-cyclooctylamino-5-nitropyridine (COANP) as a nonlinear optic (NLO) reference material. A second harmonic generation (SHG) response was found in two directions of cementum: radial (acellular extrinsic fibers oriented more or less perpendicular to the root surface) and circumferential (cellular intrinsic fibers oriented more or less parallel to the surface). A mono-polar state was demonstrated for acellular extrinsic cementum. However, along the different parts of cementum in the circumferential direction, two corresponding domains were observed featuring an opposite sign of polarity, indicative of a bi-polar microscopic state of cellular intrinsic cementum. The phase information showed that the orientation of radial collagen fibrils of cementum is regularly organized, with the donor (D) groups pointing to the surface. Circumferential collagen molecules feature orientational disorder and are oriented up and down in a random manner, showing acceptor or donor groups at the surface of cementum. Considering that the cementum continues to grow in thickness throughout life, we can conclude that the cementum grows circumferentially in two opposite directions and radially in one direction. A Markov chain type model for polarity formation in the direction of growth predicts D-groups preferably appearing at the fiber front.
Abstract:
While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks and gene regulatory networks, there have been few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications have been restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications has remained unknown.
In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and the Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.
Meanwhile, RET-based sampling units (RSU) can be constructed to accelerate probabilistic algorithms for wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor / GPU as specialized functional units or organized as a discrete accelerator to bring substantial speedups and power savings.
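The phase-type view of RET exciton dynamics described above can be made concrete. For a CTMC with transient sub-generator T and initial distribution α, the time to absorption (here, photon emission) is phase-type with moments E[τ] = α(−T)⁻¹1 and E[τ²] = 2α(−T)⁻²1. The numpy sketch below uses an invented two-state chain, not one of the dissertation's fabricated networks:

```python
import numpy as np

# Sub-generator T of a toy two-state CTMC: an exciton hops from
# chromophore 1 to 2 at rate 3.0 and is absorbed (photon emitted)
# from state 2 at rate 3.0, so the emission time is Erlang(2, 3.0).
T = np.array([[-3.0,  3.0],
              [ 0.0, -3.0]])
alpha = np.array([1.0, 0.0])     # excitation always starts on chromophore 1
ones = np.ones(2)

# Phase-type moments: E[tau] = alpha (-T)^{-1} 1,  E[tau^2] = 2 alpha (-T)^{-2} 1
M = np.linalg.inv(-T)
mean = alpha @ M @ ones          # analytic mean, 2/3 for Erlang(2, 3)
second = 2 * alpha @ M @ M @ ones

# Monte Carlo check: an Erlang(2, 3) sample is a sum of two Exp(3) hop times
rng = np.random.default_rng(0)
samples = rng.exponential(scale=1.0 / 3.0, size=(100_000, 2)).sum(axis=1)
```

The direct mapping mentioned in the abstract is visible here: the off-diagonal entries of T correspond to energy-transfer rates set by the chromophore geometry, and reprogramming the geometry reprograms the sampling distribution.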
Abstract:
In this paper, we investigate output accuracy for a Discrete Event Simulation (DES) model and an Agent Based Simulation (ABS) model. The purpose of this investigation is to find out which of these simulation techniques is better suited for modelling human reactive behaviour in the retail sector. In order to study the output accuracy of both models, we carried out a validation experiment in which we compared the results from our simulation models to the performance of a real system. Our experiment used a large UK department store as a case study, in which we sought an efficient implementation of management policy in the store's fitting room using DES and ABS. Overall, we found that both simulation models were a good representation of the real system when modelling human reactive behaviour.
Abstract:
The efficiency of airport airside operations is often compromised by unplanned disruptive events of different kinds, such as bad weather, strikes or technical failures, which negatively influence the punctuality and regularity of operations, causing serious delays and unexpected congestion. Such events may impose significant impacts and economic losses on passengers, airlines and airport operators, and their consequences may propagate through the air network across different airports. In order to identify strategies to cope with such events and minimize their impacts, it is crucial to understand how disruptive events affect airports' performance. The research field concerned with the risk of severe air transport network disruptions and their impact on society centres on the concepts of vulnerability and resilience. The main objective of this project is to provide a framework for evaluating performance losses and consequences due to unexpected disruptions affecting airport airside operations, supporting the development of a methodology for estimating vulnerability and resilience indicators for airport airside operations. The proposed methodology comprises three phases. In the first phase, airside operations are modelled in both the baseline and disrupted scenarios. The model includes all main airside processes and takes into consideration the uncertainties and dynamics of the system. In the second phase, the model is implemented using a generic simulation software package, AnyLogic. Vulnerability is evaluated by taking into consideration the costs related to flight delays, cancellations and diversions; resilience is determined as a function of the loss of capacity during the entire period of disruption. In the third phase, a Bayesian Network is built in which the uncertain variables refer to airport characteristics and disruption type.
The Bayesian Network expresses the conditional dependence among these variables and makes it possible to predict the impacts of disruptions on an airside system, identifying the elements that most influence system resilience.
Abstract:
We study the dynamics of the adoption of new products by agents with continuous opinions and discrete actions (CODA). The model is such that refusal to adopt a new idea or product is increasingly weighted by neighboring agents as evidence against the product. Under these rules, we study the distribution of adoption times and the final proportion of adopters in the population. We compare the case where initial adopters are clustered to the case where they are randomly scattered around the social network and investigate small-world effects on the final proportion of adopters. The model predicts a fat-tailed distribution for late adopters, which is verified by empirical data. (C) 2009 Elsevier B.V. All rights reserved.
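A CODA-style dynamic of the kind summarized above can be sketched in a few lines: each agent holds a continuous (log-odds) opinion, acts discretely via its sign, and updates its opinion from its neighbours' observed actions. This toy on a ring network uses invented parameters and omits the paper's specific weighting of refusals:

```python
import numpy as np

def coda_step(nu, step=0.2):
    """One synchronous CODA-style update on a ring network: every agent
    observes the discrete actions (sign of the continuous opinion nu) of
    its two neighbours and shifts its own log-odds opinion accordingly."""
    actions = np.sign(nu)
    neighbour_pressure = np.roll(actions, 1) + np.roll(actions, -1)
    return nu + step * neighbour_pressure

rng = np.random.default_rng(42)
nu = rng.normal(0.0, 1.0, size=200)   # initial continuous opinions (log-odds)
for _ in range(50):
    nu = coda_step(nu)
adopters = float(np.mean(nu > 0))     # final proportion of adopters
```

Clustered versus scattered initial adopters, and small-world rewiring of the ring, can be compared by changing the initial `nu` layout and the neighbourhood structure.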
Abstract:
Warranty is an important element of marketing new products, as a better warranty signals higher product quality and provides greater assurance to customers. Servicing a warranty involves additional costs to the manufacturer, and this cost depends on product reliability and warranty terms. Product reliability is influenced by the decisions made during the design and manufacturing of the product. As such, warranty is very important in the context of new products. Product warranty has received the attention of researchers from many different disciplines, and the literature on warranties is vast. This paper carries out a review of the literature that has appeared in the last ten years. It highlights issues of interest to manufacturers in the context of managing new products from an overall business perspective. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
Recent literature has shown that many classical pricing models (Black and Scholes, Heston, etc.) and risk measures (VaR, CVaR, etc.) may lead to "pathological meaningless situations", since traders can build sequences of portfolios whose risk level tends to −infinity and whose expected return tends to +infinity, i.e., (risk = −infinity, return = +infinity). Such a sequence of strategies may be called a "good deal". This paper focuses on the risk measures VaR and CVaR and analyzes this caveat in a discrete-time complete pricing model. Under quite general conditions the explicit expression of a good deal is given, and its sensitivity with respect to some possible measurement errors is provided as well. We point out that a critical property is the absence of short sales. In such a case we first construct a "shadow riskless asset" (SRA) without short sales, and then the good deal is given by borrowing more and more money so as to invest in the SRA. It is also shown that the SRA is of interest in itself, even if there are short-selling restrictions.
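In symbols, the "good deal" pathology described above can be stated as the existence of a sequence of attainable pay-offs whose risk diverges to −∞ while expected return diverges to +∞ (a schematic statement, not the paper's exact formulation):

```latex
% Schematic definition: a "good deal" is a sequence of attainable
% pay-offs (y_n) in the set Y of reachable strategies such that
\exists\,(y_n)_{n\in\mathbb{N}}\subset Y:\qquad
\lim_{n\to\infty}\rho(y_n)=-\infty
\quad\text{and}\quad
\lim_{n\to\infty}\mathbb{E}[y_n]=+\infty,
% where rho is the risk measure in use (VaR or CVaR at a fixed level).
```

The paper's point is that without short-sale restrictions such sequences can exist in standard models, so reported (risk, return) pairs lose meaning.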
Abstract:
This paper analyses the effects of tariffs on an international economy with a monopolistic sector with two firms, located in two countries, each producing a homogeneous good for both home consumption and export to the other, identical country. We consider a game among governments and firms. First, each government imposes a tariff on imports; we then consider two timings of the firms' decisions: simultaneous (Cournot-type model) and sequential (Stackelberg-type model). We also compare the results obtained in each model.
Abstract:
This paper provides evidence on the sources of co-movement in monthly US and UK stock price movements by investigating the role of macroeconomic and financial variables in a bivariate system with time-varying conditional correlations. Cross-country communality in response is uncovered, with changes in the US Federal Funds rate, UK bond yields and oil prices having similar negative effects in both markets. Other variables also play a role, especially for the UK market. These effects do not, however, explain the marked increase in cross-market correlations observed from around 2000, which we attribute to time variation in the correlations of shocks to these markets. A regime-switching smooth transition model captures this time variation well and shows that the correlations increase dramatically around 1999-2000.
JEL classifications: C32, C51, G15
Keywords: international stock returns, DCC-GARCH model, smooth transition conditional correlation GARCH model, model evaluation.
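The smooth transition correlation idea can be sketched with a logistic transition function of (rescaled) time that moves the conditional correlation between two regimes. The parameter values below are illustrative, not the paper's estimates:

```python
import numpy as np

def stcc_correlation(t, rho_low=0.3, rho_high=0.8, gamma=25.0, c=0.5):
    """Smooth transition conditional correlation as a function of rescaled
    time t in [0, 1]: a logistic function G moves the correlation from an
    early regime rho_low to a late regime rho_high. gamma controls the
    speed of the transition and c its location in the sample."""
    G = 1.0 / (1.0 + np.exp(-gamma * (np.asarray(t, dtype=float) - c)))
    return rho_low + (rho_high - rho_low) * G

t = np.linspace(0.0, 1.0, 101)   # sample period rescaled to [0, 1]
rho = stcc_correlation(t)        # correlation path rising around t = c
```

In the paper's setting, c would sit near the 1999-2000 break and the estimated gamma governs how sharply the cross-market correlation shifts to the higher regime.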
Abstract:
Iron plays an important role in most biological functions. However, excess iron results in the production of reactive oxygen species (ROS), which can substantially contribute to the pathology of various diseases. Ferritin H scavenges excess iron and stores it in a non-toxic form, potentially preventing this damage. Ferritin H targeting in mice has been attempted before; however, a straight knockout was lethal at an early embryonic stage. To study the role of iron and its storage protein ferritin, and to further elucidate ferritin H functions, we created a conditional ferritin H knockout mouse model using the classical Cre-LoxP system. The first exon, along with the promoter region of the ferritin H gene, was flanked by loxP sites (floxed).
Embryonic lethality of the constitutive ferritin H deletion was confirmed by crossing the floxed mice with mice expressing nestin-Cre1 as a transgene. Almost complete deletion was observed in the liver (> 99%) and spleen (> 88%) upon induction of Cre by injecting polyI-polyC in Fth Lox/Lox; MxCre mice. These tissues also lost a substantial fraction of their iron stores. This provides the first in vivo evidence that ferritin H is required for iron storage, that ferritin H and L functions are not redundant, and that ferritin L cannot perform the iron storage function alone. Hepcidin mRNA expression was induced after 10 days in the livers of deleted mice and, simultaneously, mRNA expression of iron absorption related genes (DMT1, ferroportin, Dcytb1 and hephaestin) was repressed in the duodenum only. Hepcidin expression is inversely correlated with that of duodenal iron absorption related genes. This is in agreement with previous studies. However, we also show that this repression happens only in the intestine. This leads to the conclusion that either hepcidin has a specific receptor in the duodenum or the iron absorption related genes have a duodenum-specific transcription factor that is responsive to hepcidin. No repression of DMT1 and ferroportin was observed in spleen macrophages upon hepcidin induction. Ferritin H deletion resulted in increased cell death in the liver and disruption of the normal architecture of the spleen. Immunohistology showed reduced B lymphocytes in the spleen, which points towards a role of ferritin H and iron homeostasis in immunity. In conclusion, the ferritin H conditional knockout mouse model provides us with an invaluable tool to study the in vivo role of ferritin H in iron homeostasis, ROS-mediated damage, apoptosis and immunity.
Abstract:
Time-dependent correlation functions and the spectrum of the transmitted light are calculated for absorptive optical bistability taking into account phase fluctuations of the driving laser. These fluctuations are modeled by an extended phase-diffusion model which introduces non-Markovian effects. The spectrum is obtained as a superposition of Lorentzians. It shows qualitative differences with respect to the usual calculation in which phase fluctuations of the driving laser are neglected.
Abstract:
A phase-field model for dealing with dynamic instabilities in membranes is presented. We use it to study the curvature-driven pearling instability in vesicles induced by the anchorage of amphiphilic polymers on the membrane. Within this model, we obtain the morphological changes reported in recent experiments. The formation of a homogeneous pearled structure is achieved by successive pearling of an initial cylindrical tube from the tip. For a high enough concentration of anchors, we show theoretically that the homogeneous pearled shape is energetically less favorable than an inhomogeneous one, with a large sphere connected to an array of smaller spheres.
Abstract:
Executive Summary. The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model to address some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings.
The empirical investigation intended to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than realized returns from portfolio strategies optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate those that result from optimization only with respect to, for example, the Treynor ratio and Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls over a range of quantiles.
Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio returns distribution that second-order stochastically dominates those obtained from virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
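The second-order check described above (pointwise comparison of absolute Lorenz curves, i.e. sequences of expected shortfalls) can be sketched as follows, with toy return vectors rather than the chapter's data:

```python
import numpy as np

def absolute_lorenz(returns):
    """Absolute Lorenz curve: cumulative averages of sorted returns,
    equivalently the sequence of expected shortfalls at levels k/n."""
    r = np.sort(np.asarray(returns, dtype=float))
    return np.cumsum(r) / len(r)

def ssd_dominates(a, b):
    """For equal-length samples, A second-order stochastically dominates B
    iff A's absolute Lorenz curve lies pointwise at or above B's."""
    return bool(np.all(absolute_lorenz(a) >= absolute_lorenz(b)))

# Toy example: portfolio A pays 1% more than B in every state,
# so A must dominate B at second order.
b = np.array([-0.03, -0.01, 0.00, 0.02, 0.04])
a = b + 0.01
a_dominates_b = ssd_dominates(a, b)
```

The same pointwise comparison, applied to realized returns from the aggregated strategy versus each single-measure strategy, is what underlies the chapter's conclusion.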
Abstract:
We study the influence of disorder strength on the interface roughening process in a phase-field model with locally conserved dynamics. We consider two cases, where the mobility coefficient multiplying the locally conserved current is either constant throughout the system (the two-sided model) or becomes zero in the phase into which the interface advances (the one-sided model). In the limit of weak disorder, both models are completely equivalent and can reproduce the physical process of a fluid diffusively invading a porous medium, where super-rough scaling of the interface fluctuations occurs. On the other hand, increasing disorder causes the scaling properties to change to intrinsic anomalous scaling. In the limit of strong disorder this behavior prevails for the one-sided model, whereas for the two-sided case nucleation of domains in front of the invading front is observed.