984 results for Probability models


Relevance:

20.00%

Publisher:

Abstract:

Cloud computing and its three facets (Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS)) are terms that denote new developments in the software industry. In particular, PaaS solutions, also referred to as cloud platforms, are changing the way software is produced, distributed, consumed, and priced. Software vendors have started considering cloud platforms as a strategic option but are struggling to redefine their offerings to embrace PaaS. In contrast to SaaS and IaaS, PaaS allows for value co-creation with partners who develop complementary components and applications. It thus requires multisided business models that bring together two or more distinct customer segments. Understanding how to design PaaS business models that establish a flourishing ecosystem is crucial for software vendors. This doctoral thesis addresses this issue in three interrelated research parts. First, based on case study research, the thesis provides a deeper understanding of current PaaS business models and their evolution. Second, it analyses and simulates consumers' preferences regarding PaaS business models, using a conjoint approach to determine what drives the choice of cloud platforms. Finally, building on the previous research outcomes, the third part introduces a design theory for the emerging class of PaaS business models, grounded in an extensive action design research study with a large European software vendor. Understanding PaaS business models from both a market and a consumer perspective will, together with the design theory, inform and guide decision makers in their business model innovation plans. It also closes research gaps related to PaaS business model design and, more generally, to platform business models.

Relevance:

20.00%

Publisher:

Abstract:

Gene-on-gene regulations are key components of every living organism. Abstract dynamical models of genetic regulatory networks help explain the genome's evolvability and robustness. These properties can be attributed to the structural topology of the graph formed by genes, as vertices, and regulatory interactions, as edges. Moreover, the actual interactions of each gene are believed to play a key role in the stability of the structure. As biological knowledge has advanced, effort has been devoted to developing update functions for Boolean models that incorporate it. We combine real-life gene interaction networks with novel update functions in a Boolean model. We use two sub-networks of biological organisms, the yeast cell cycle and the mouse embryonic stem cell, as topological support for our system. On these structures, we replace the original random update functions with a novel threshold-based dynamic function in which the promoting and repressing effect of each interaction is considered. We use a third real-life regulatory network, along with its inferred Boolean update functions, to validate the proposed update function. Results of this validation hint at increased biological plausibility of the threshold-based function. To investigate the dynamical behavior of this new model, we used Derrida plots to visualize the phase transition between order and chaos through the critical regime. We complement the qualitative nature of Derrida plots with an alternative measure, the criticality distance, which also allows regimes to be discriminated quantitatively. Simulations on both real-life genetic regulatory networks show that there exists a set of parameters that allows the systems to operate in the critical region. This new model includes experimentally derived biological information and recent discoveries, which makes it potentially useful for guiding experimental research. The update function confers additional realism to the model while reducing the complexity and solution space, thus making it easier to investigate.
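To make the threshold rule concrete, here is a minimal Python sketch of a synchronous threshold-based Boolean update in the spirit described above; the network, the signed weights, and the tie-handling rule are illustrative assumptions, not the thesis's actual parameters.

```python
import numpy as np

# Signed interaction matrix W (illustrative): W[j, i] = +1 if gene i
# promotes gene j, -1 if gene i represses gene j, 0 if there is no edge.
W = np.array([
    [ 0,  1, -1],
    [ 1,  0,  0],
    [-1,  1,  0],
])

def threshold_update(state, W, theta=0):
    """Synchronous update: gene j turns ON when the summed signed input
    from its regulators exceeds theta, stays unchanged on a tie, and
    turns OFF otherwise (the tie rule is an assumption of this sketch)."""
    signal = W @ state
    new_state = state.copy()
    new_state[signal > theta] = 1
    new_state[signal < theta] = 0
    return new_state

state = np.array([1, 0, 1])
for _ in range(5):
    state = threshold_update(state, W)
    print(state)
```

Because each gene's next state depends only on a weighted tally of its promoting and repressing inputs, the rule sharply restricts the space of possible update functions compared with random Boolean functions, which is the complexity reduction mentioned above.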

Relevance:

20.00%

Publisher:

Abstract:

See the abstract at the beginning of the document in the attached file.

Relevance:

20.00%

Publisher:

Abstract:

Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has grown to account for up to 70% of the trading volume on some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, owing to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data at minutely sampling frequency from the Dow Jones Industrial Average (DJIA).

Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been dismissed by financial theoreticians, but we show that they can be properly cast as well-defined scientific predictors if the signals they generate pass the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, the event is F_t-measurable. The concept of pairs trading, or market-neutral strategy, is on the other hand fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, through a co-integration framework, to stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck SDE and its variations.

A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor and yet lack any economic value, being useless from a practical point of view. This is why this project could not be complete without a backtest of the strategies mentioned. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving. For this reason we emphasize the calibration of the strategies' parameters to adapt to the prevailing market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be performed at high sampling frequency to constantly track the current market situation.

As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The data sources used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics in this project were implemented in MATLAB from scratch as part of this thesis; no other mathematical or statistical software was used.
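As an illustration of the mean-reversion machinery mentioned above, the following Python sketch simulates a spread as a discretized Ornstein-Uhlenbeck process and trades on its z-score. The parameter values and the entry rule are illustrative assumptions, not the thesis's calibrated MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# OU dynamics dX = kappa*(mu - X) dt + sigma dW (illustrative parameters)
kappa, mu, sigma, dt, n = 5.0, 0.0, 0.2, 1 / 252, 2000

# Euler-Maruyama discretization of the OU spread
x = np.empty(n)
x[0] = mu
for t in range(1, n):
    x[t] = x[t - 1] + kappa * (mu - x[t - 1]) * dt \
           + sigma * np.sqrt(dt) * rng.standard_normal()

# Simple entry rule: short the spread when rich, long when cheap
z = (x - x.mean()) / x.std()
position = np.where(z > 1.5, -1, np.where(z < -1.5, 1, 0))
pnl = (position[:-1] * np.diff(x)).sum()
print(f"toy backtest P&L on the simulated spread: {pnl:.4f}")
```

Note that this toy uses the full-sample mean and standard deviation, which introduces look-ahead bias; a realistic backtest would use rolling estimates, which is precisely why the calibration of parameters to prevailing market conditions is emphasized above.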

Relevance:

20.00%

Publisher:

Abstract:

Power law distributions, a well-known model in the theory of real random variables, characterize a wide variety of natural and man-made phenomena. The intensity of earthquakes, word frequencies, solar flares, and the sizes of power outages are all distributed according to a power law. Recently, given the wide usage of power laws in the scientific community, several articles have been published criticizing the statistical methods used to estimate power law behaviour and establishing new estimation techniques of proven reliability. The main object of the present study is to develop a deeper understanding of this kind of distribution and its analysis, and to introduce the half-lives of radioactive isotopes as a new candidate for a natural phenomenon following a power law distribution, as well as a "canonical laboratory" in which to test statistical methods appropriate for long-tailed distributions.
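The reliable estimation techniques alluded to above include the maximum-likelihood estimator for the exponent of a continuous power law (in the style of Clauset, Shalizi and Newman), alpha_hat = 1 + n / sum(ln(x_i / x_min)). A minimal Python sketch, assuming x_min is known (in practice it is itself estimated, e.g. by minimizing the Kolmogorov-Smirnov distance):

```python
import numpy as np

def fit_power_law_exponent(x, x_min):
    x = np.asarray(x, dtype=float)
    tail = x[x >= x_min]                 # keep only the power-law tail
    n = tail.size
    alpha = 1.0 + n / np.log(tail / x_min).sum()
    sigma = (alpha - 1.0) / np.sqrt(n)   # standard error of the MLE
    return alpha, sigma

# Sanity check on synthetic data drawn from a pure power law
rng = np.random.default_rng(1)
true_alpha, x_min = 2.5, 1.0
samples = x_min * (1 - rng.random(10_000)) ** (-1 / (true_alpha - 1))
print(fit_power_law_exponent(samples, x_min))  # roughly (2.5, 0.015)
```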

Relevance:

20.00%

Publisher:

Abstract:

Two hypotheses for how conditions for larval mosquitoes affect vectorial capacity make opposite predictions about the relationship of adult size and frequency of infection with vector-borne pathogens. Competition among larvae produces small adult females. The competition-susceptibility hypothesis postulates that small females are more susceptible to infection and predicts frequency of infection should decrease with size. The competition-longevity hypothesis postulates that small females have lower longevity and lower probability of becoming competent to transmit the pathogen and thus predicts frequency of infection should increase with size. We tested these hypotheses for Aedes aegypti in Rio de Janeiro, Brazil, during a dengue outbreak. In the laboratory, longevity increases with size, then decreases at the largest sizes. For field-collected females, generalised linear mixed model comparisons showed that a model with a linear increase of frequency of dengue with size produced the best Akaike’s information criterion with a correction for small sample sizes (AICc). Consensus prediction of three competing models indicated that frequency of infection increases monotonically with female size, consistent with the competition-longevity hypothesis. Site frequency of infection was not significantly related to site mean size of females. Thus, our data indicate that uncrowded, low competition conditions for larvae produce the females that are most likely to be important vectors of dengue. More generally, ecological conditions, particularly crowding and intraspecific competition among larvae, are likely to affect vector-borne pathogen transmission in nature, in this case via effects on longevity of resulting adults. Heterogeneity among individual vectors in likelihood of infection is a generally important outcome of ecological conditions impacting vectors as larvae.
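For readers unfamiliar with the criterion, AICc is AIC with a small-sample correction, AICc = AIC + 2k(k+1)/(n-k-1) where k is the number of parameters and n the sample size. A minimal Python sketch of this kind of model comparison, with placeholder log-likelihoods and sample size rather than the study's actual fits:

```python
def aicc(log_lik, k, n):
    """Small-sample-corrected Akaike information criterion."""
    aic = 2 * k - 2 * log_lik
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Illustrative placeholders, not the study's fitted models
models = {
    "intercept only":    {"log_lik": -410.2, "k": 2},
    "linear in size":    {"log_lik": -401.7, "k": 3},
    "quadratic in size": {"log_lik": -401.1, "k": 4},
}
n = 350  # hypothetical number of field-collected females
for name, m in models.items():
    print(name, round(aicc(m["log_lik"], m["k"], n), 2))
# The model with the lowest AICc is preferred.
```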

Relevance:

20.00%

Publisher:

Abstract:

In this paper we propose a parsimonious regime-switching approach to model the correlations between assets, the threshold conditional correlation (TCC) model. This method allows the dynamics of the correlations to change from one state (or regime) to another as a function of observable transition variables. Our model is similar in spirit to Silvennoinen and Teräsvirta (2009) and Pelletier (2006) but with the appealing feature that it does not suffer from the curse of dimensionality. In particular, estimation of the parameters of the TCC involves a simple grid search procedure. In addition, it is easy to guarantee a positive definite correlation matrix because the TCC estimator is given by the sample correlation matrix, which is positive definite by construction. The methodology is illustrated by evaluating the behaviour of international equities, government bonds and major exchange rates, first separately and then jointly. We also test and allow for different parts of the correlation matrix to be governed by different transition variables. For this, we estimate a multi-threshold TCC specification. Further, we evaluate the economic performance of the TCC model against a constant conditional correlation (CCC) estimator using a Diebold-Mariano type test. We conclude that threshold correlation modelling gives rise to a significant reduction in portfolio variance.
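A minimal Python sketch of the threshold mechanics described above: split the sample by whether the transition variable is below or above a candidate threshold, use the within-regime sample correlation matrices (positive definite by construction), and grid-search the threshold by likelihood. The synthetic data and the single-threshold, two-regime setup are illustrative assumptions, not the paper's exact TCC estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: two assets whose correlation rises when the transition
# variable z (say, lagged market volatility) is high.
n = 1000
z = rng.random(n)
rho = np.where(z > 0.6, 0.8, 0.2)
e1 = rng.standard_normal(n)
e2 = rho * e1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
X = np.column_stack([e1, e2])

def gaussian_loglik(X, R):
    """Zero-mean multivariate normal log-likelihood with correlation R."""
    Ri = np.linalg.inv(R)
    _, logdet = np.linalg.slogdet(R)
    quad = np.einsum("ij,jk,ik->i", X, Ri, X)
    return -0.5 * (X.shape[1] * np.log(2 * np.pi) + logdet + quad).sum()

best = None
for c in np.quantile(z, np.linspace(0.15, 0.85, 29)):  # threshold grid
    lo, hi = X[z <= c], X[z > c]
    ll = gaussian_loglik(lo, np.corrcoef(lo.T)) + \
         gaussian_loglik(hi, np.corrcoef(hi.T))
    if best is None or ll > best[0]:
        best = (ll, c)
print(f"estimated threshold: {best[1]:.2f} (true switch at z = 0.60)")
```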

Relevance:

20.00%

Publisher:

Abstract:

This paper studies the limits of discrete time repeated games with public monitoring. We solve and characterize the Abreu, Milgrom and Pearce (1991) problem. We find that in the "bad" ("good") news model the lower (higher) magnitude events suggest cooperation, i.e., zero punishment probability, while the higher (lower) magnitude events suggest defection, i.e., punishment with probability one. Public correlation is used to connect these two sets of signals and to make the enforceability constraint bind. The dynamic and limit behavior of the punishment probabilities for variations in ... (the discount rate) and ... (the time interval) is characterized, as well as the limit payoffs for all these scenarios (we also introduce uncertainty in the time domain). The obtained ... limits are, to the best of my knowledge, new. The obtained ... limits coincide with Fudenberg and Levine (2007) and Fudenberg and Olszewski (2011), with the exception that we clearly state the precise informational conditions that cause the limit to converge from above, to converge from below, or to degenerate. JEL: C73, D82, D86. KEYWORDS: Repeated Games, Frequent Monitoring, Random Public Monitoring, Moral Hazard, Stochastic Processes.

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND Understanding of the genetic basis of type 2 diabetes (T2D) has progressed rapidly, but the interactions between common genetic variants and lifestyle risk factors have not been systematically investigated in studies with adequate statistical power. Therefore, we aimed to quantify the combined effects of genetic and lifestyle factors on risk of T2D in order to inform strategies for prevention. METHODS AND FINDINGS The InterAct study includes 12,403 incident T2D cases and a representative sub-cohort of 16,154 individuals from a cohort of 340,234 European participants with 3.99 million person-years of follow-up. We studied the combined effects of an additive genetic T2D risk score and modifiable and non-modifiable risk factors using Prentice-weighted Cox regression and random effects meta-analysis methods. The effect of the genetic score was significantly greater in younger individuals (p for interaction = 1.20×10⁻⁴). Relative genetic risk (per standard deviation [4.4 risk alleles]) was also larger in participants who were leaner, both in terms of body mass index (p for interaction = 1.50×10⁻³) and waist circumference (p for interaction = 7.49×10⁻⁹). Examination of absolute risks by strata showed the importance of obesity for T2D risk. The 10-y cumulative incidence of T2D rose from 0.25% to 0.89% across extreme quartiles of the genetic score in normal weight individuals, compared to 4.22% to 7.99% in obese individuals. We detected no significant interactions between the genetic score and sex, diabetes family history, physical activity, or dietary habits assessed by a Mediterranean diet score. CONCLUSIONS The relative effect of a T2D genetic risk score is greater in younger and leaner participants. However, this sub-group is at low absolute risk and would not be a logical target for preventive interventions. The high absolute risk associated with obesity at any level of genetic risk highlights the importance of universal rather than targeted approaches to lifestyle intervention.
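The contrast between relative and absolute risk can be made concrete with the numbers quoted above (10-y cumulative incidence across extreme genetic-score quartiles):

```python
# Arithmetic directly from the incidences quoted in the abstract (in %)
normal_lo, normal_hi = 0.25, 0.89   # normal-weight individuals
obese_lo, obese_hi   = 4.22, 7.99   # obese individuals

print(normal_hi / normal_lo)   # ~3.6-fold relative increase (normal weight)
print(obese_hi / obese_lo)     # ~1.9-fold relative increase (obese)
print(normal_hi - normal_lo)   # 0.64 percentage-point absolute increase
print(obese_hi - obese_lo)     # 3.77 percentage-point absolute increase
```

The genetic score multiplies risk more in the lean group, yet the absolute burden it adds is several times larger in the obese group, which is why the conclusions favour universal rather than genetically targeted intervention.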

Relevance:

20.00%

Publisher:

Abstract:

Functional divergence between homologous proteins is expected to affect amino acid sequences in two main ways, which can be considered as proxies of biochemical divergence: a "covarion-like" pattern of correlated changes in evolutionary rates, and switches in conserved residues ("conserved but different"). Although these patterns have been used in case studies, a large-scale analysis is needed to estimate their frequency and distribution. We use a phylogenomic framework of animal genes to answer three questions: 1) What is the prevalence of such patterns? 2) Can we link such patterns at the amino acid level with selection inferred at the codon level? 3) Are patterns different between paralogs and orthologs? We find that covarion-like patterns are more frequently detected than "conserved but different," but that only the latter are correlated with signal for positive selection. Finally, there is no obvious difference in patterns between orthologs and paralogs.

Relevance:

20.00%

Publisher:

Abstract:

Prevention of Trypanosoma cruzi infection in mammals likely depends on either preventing the invading trypomastigotes from infecting host cells or the rapid recognition and killing of the newly infected cells by T. cruzi-specific T cells. We show here that multiple rounds of infection and cure (by drug therapy) fail to protect mice from reinfection, despite the generation of potent T cell responses. This disappointing result is similar to that obtained with many other vaccine protocols used in attempts to protect animals from T. cruzi infection. We have previously shown that immune recognition of T. cruzi infection is significantly delayed both at the systemic level and at the level of the infected host cell. The systemic delay appears to be the result of a stealth infection process that fails to trigger substantial innate recognition mechanisms, while the delay at the cellular level is related to the immunodominance of highly variable gene family proteins, in particular those of the trans-sialidase family. Here we discuss how these previous studies and the new findings herein impact our thoughts on the potential of prophylactic vaccination to serve a productive role in the prevention of T. cruzi infection and Chagas disease.

Relevance:

20.00%

Publisher:

Abstract:

INTRODUCTION According to genome-wide association (GWA) studies as well as candidate gene approaches, Behçet's disease (BD) is associated with the human leukocyte antigen (HLA)-A and HLA-B gene regions. HLA-B51 has been consistently associated with the disease, but the role of other HLA class I molecules remains controversial. Recently, variants in non-HLA genes have also been associated with BD. The aims of this study were to further investigate the influence of the HLA region in BD and to explore the relationship with non-HLA genes recently described as associated with the disease in other populations. METHODS This study included 304 BD patients and 313 ethnically matched controls. HLA-A and HLA-B low-resolution typing was carried out by PCR-SSOP Luminex. Eleven tag single nucleotide polymorphisms (SNPs) located outside the HLA region, previously described as associated with the disease in GWA studies and having a minor allele frequency in Caucasians greater than 0.15, were genotyped using TaqMan assays. Phenotypic and genotypic frequencies were estimated by direct counting, and distributions were compared using the χ² test. RESULTS In addition to HLA-B*51, HLA-B*57 was found to be a risk factor in BD, whereas B*35 was found to be protective. Other HLA-A and B specificities were suggestive of association with the disease as risk (A*02 and A*24) or protective factors (A*03 and B*58). Regarding the non-HLA genes, the three SNPs located in IL23R and one of the SNPs in IL10 were found to be significantly associated with susceptibility to BD in our population. CONCLUSION Different HLA specificities are associated with Behçet's disease in addition to B*51. Other non-HLA genes, such as IL23R and IL10, play a role in susceptibility to the disease.
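A minimal Python sketch of the case-control comparison described above: a 2×2 table of allele carriers versus non-carriers in patients and controls, compared with the χ² test. The row totals match the study's sample sizes (304 patients, 313 controls), but the carrier counts are made-up placeholders, not the study's data.

```python
from scipy.stats import chi2_contingency

# Rows: BD patients, controls; columns: carriers, non-carriers
# (counts are hypothetical placeholders)
table = [
    [95, 209],   # 304 BD patients
    [52, 261],   # 313 controls
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4g}, dof = {dof}")
```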

Relevance:

20.00%

Publisher:

Abstract:

Gene expression data from microarrays are being applied to predict preclinical and clinical endpoints, but the reliability of these predictions has not been established. In the MAQC-II project, 36 independent teams analyzed six microarray data sets to generate predictive models for classifying a sample with respect to one of 13 endpoints indicative of lung or liver toxicity in rodents, or of breast cancer, multiple myeloma or neuroblastoma in humans. In total, >30,000 models were built using many combinations of analytical methods. The teams generated predictive models without knowing the biological meaning of some of the endpoints and, to mimic clinical reality, tested the models on data that had not been used for training. We found that model performance depended largely on the endpoint and team proficiency and that different approaches generated models of similar performance. The conclusions and recommendations from MAQC-II should be useful for regulatory agencies, study committees and independent investigators that evaluate methods for global gene expression analysis.
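The validation discipline MAQC-II highlights, scoring a model only on samples never seen during training, looks like this in a minimal Python sketch. Synthetic data stands in for expression profiles, and the classifier, metric, and parameters are illustrative choices, not the MAQC-II pipeline itself.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef

# Synthetic "expression" data: 300 samples, 1000 features, few informative
X, y = make_classification(n_samples=300, n_features=1000,
                           n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Fit on the training portion only
model = LogisticRegression(penalty="l2", C=0.1, max_iter=1000)
model.fit(X_train, y_train)

# Report performance exclusively on held-out samples
print("MCC on held-out samples:",
      round(matthews_corrcoef(y_test, model.predict(X_test)), 3))
```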

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND & AIMS Hy's Law, which states that hepatocellular drug-induced liver injury (DILI) with jaundice indicates a serious reaction, is used widely to determine risk for acute liver failure (ALF). We aimed to optimize the definition of Hy's Law and to develop a model for predicting ALF in patients with DILI. METHODS We collected data from 771 patients with DILI (805 episodes) from the Spanish DILI registry, from April 1994 through August 2012. We analyzed data collected at DILI recognition and at the time of peak levels of alanine aminotransferase (ALT) and total bilirubin (TBL). RESULTS Of the 771 patients with DILI, 32 developed ALF. Hepatocellular injury, female sex, high levels of TBL, and a high ratio of aspartate aminotransferase (AST) to ALT were independent risk factors for ALF. We compared 3 ways to use Hy's Law to predict which patients would develop ALF; all included TBL greater than 2-fold the upper limit of normal (×ULN) and either an ALT level greater than 3 ×ULN, a ratio (R) value (ALT ×ULN / alkaline phosphatase ×ULN) of 5 or greater, or a new ratio (nR) value ((ALT or AST, whichever produced the higher ×ULN value) / alkaline phosphatase ×ULN) of 5 or greater. At recognition of DILI, the R- and nR-based models identified patients who developed ALF with 67% and 63% specificity, respectively, whereas use of only the ALT level identified them with 44% specificity. However, the ALT level and the nR model each identified patients who developed ALF with 90% sensitivity, whereas the R criteria identified them with 83% sensitivity. An equal number of patients who did and did not develop ALF had alkaline phosphatase levels greater than 2 ×ULN. An algorithm based on an AST level greater than 17.3 ×ULN, TBL greater than 6.6 ×ULN, and AST:ALT greater than 1.5 identified patients who developed ALF with 82% specificity and 80% sensitivity. CONCLUSIONS When applied at DILI recognition, the nR criteria for Hy's Law provide the best balance of sensitivity and specificity, whereas our new composite algorithm provides additional specificity in predicting the ultimate development of ALF.
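The nR criterion lends itself to a direct implementation. A minimal Python sketch of the decision rule as defined above, with lab values expressed as multiples of their upper limits of normal (×ULN); the example values are hypothetical.

```python
def nR(alt_xuln, ast_xuln, alp_xuln):
    """New ratio: worst transaminase elevation over ALP elevation."""
    return max(alt_xuln, ast_xuln) / alp_xuln

def new_hys_law_flag(alt_xuln, ast_xuln, alp_xuln, tbl_xuln):
    """Flag for ALF risk at DILI recognition: TBL > 2 xULN and nR >= 5."""
    return tbl_xuln > 2 and nR(alt_xuln, ast_xuln, alp_xuln) >= 5

# Hypothetical patient: hepatocellular injury with jaundice
print(new_hys_law_flag(alt_xuln=8.0, ast_xuln=21.0, alp_xuln=1.4,
                       tbl_xuln=3.2))   # True: nR = 15 >= 5 and TBL > 2
```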