930 results for pattern-mixture model


Relevance: 90.00%

In this paper, we investigate potential symmetries of a simplified model for reacting mixtures. We find new similarity reductions and a wider class of solutions through this approach. Further, we explore an invertible mapping which linearizes the reacting mixture model.

Relevance: 90.00%

The beta-Birnbaum-Saunders (Cordeiro and Lemonte, 2011) and Birnbaum-Saunders (Birnbaum and Saunders, 1969a) distributions have been used quite effectively to model failure times for materials subject to fatigue and to model lifetime data. We define the log-beta-Birnbaum-Saunders distribution as the distribution of the logarithm of a beta-Birnbaum-Saunders variable, and derive explicit expressions for its generating function and moments. We propose a new log-beta-Birnbaum-Saunders regression model that can be applied to censored data and used effectively in survival analysis. We obtain the maximum likelihood estimates of the model parameters for censored data and investigate influence diagnostics. The new location-scale regression model is modified to allow for the possibility that long-term survivors are present in the data. Its usefulness is illustrated by means of two real data sets.

Relevance: 90.00%

Background: An important challenge for transcript-counting methods such as Serial Analysis of Gene Expression (SAGE), "Digital Northern", or Massively Parallel Signature Sequencing (MPSS) is to carry out statistical analyses that account for within-class variability, i.e., variability due to intrinsic biological differences among sampled individuals of the same class, and not only variability due to technical sampling error. Results: We introduce a Bayesian model that accounts for within-class variability by means of a mixture distribution. We show that the previously available approaches of aggregation in pools ("pseudo-libraries") and the Beta-Binomial model are particular cases of the mixture model. We illustrate our method with a brain tumor vs. normal comparison using SAGE data from public databases. We show examples of tags regarded as differentially expressed with high significance if the within-class variability is ignored, but clearly not so significant if one accounts for it. Conclusion: Using available information about biological replicates, one can transform a list of candidate transcripts showing differential expression into a more reliable one. Our method is freely available under the GPL/GNU copyleft, through a user-friendly web-based online tool or as R scripts at the supplemental web-site.
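One way to see the within-class variability issue, sketched here with assumed numbers (not the paper's model or data): under technical sampling error alone a tag count is binomial, while letting the tag's abundance vary across biological replicates (a beta-binomial, itself a special case of a mixture) inflates the spread.

```python
from scipy.stats import betabinom, binom

n = 50000     # total tags sequenced in one library (assumed)
p = 0.001     # average relative abundance of the tag of interest (assumed)

# Technical sampling error only: Binomial(n, p).
sd_binom = binom.std(n, p)

# With between-replicate (within-class) variability: Beta-Binomial(n, a, b),
# with a and b chosen so the mean a / (a + b) equals p but the spread is wider.
a, b = 2.0, 1998.0
sd_bb = betabinom.std(n, a, b)
```

A significance call calibrated to the binomial spread alone would treat deviations of a few `sd_binom` as strong evidence, even though such deviations are unremarkable on the beta-binomial scale.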

Relevance: 90.00%

Mixture modeling is commonly used to model categorical latent variables that represent subpopulations in which population membership is unknown but can be inferred from the data. More recently, finite mixture models have been applied to time-to-event data. However, the commonly used survival mixture model assumes that the effects of the covariates on failure times differ across latent classes while the covariate distribution is homogeneous. The aim of this dissertation is to develop a method to examine time-to-event data in the presence of unobserved heterogeneity within a mixture-modeling framework. A joint model is developed that incorporates the latent survival trajectory along with the observed information for the joint analysis of a time-to-event variable, its discrete and continuous covariates, and a latent class variable. It is assumed that both the effects of covariates on survival times and the distribution of the covariates vary across latent classes. The unobservable survival trajectories are identified by estimating the probability that a subject belongs to a particular class given the observed information. We applied this method to a Hodgkin lymphoma study with long-term follow-up and observed four distinct latent classes in terms of long-term survival and distributions of prognostic factors. Our results from simulation studies and from the Hodgkin lymphoma study demonstrate the superiority of the joint model over the conventional survival model. This flexible inference method provides more accurate estimation and accommodates unobservable heterogeneity among individuals while taking interactions between covariates into consideration.

Relevance: 90.00%

Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.
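The highest-posterior assignment step described above can be sketched with a hand-rolled EM fit of a g = 2 normal mixture to synthetic one-dimensional data; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic continuous data from two well-separated subpopulations.
x = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(6.0, 1.0, 150)])

# Parameters of a g = 2 normal mixture: means, standard deviations, weights.
mu = np.array([x.min(), x.max()])
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

def normal_pdf(v, m, s):
    return np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

for _ in range(100):
    # E-step: estimated posterior probability of each component membership.
    dens = pi[None, :] * normal_pdf(x[:, None], mu[None, :], sigma[None, :])
    tau = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update the component parameters from the posteriors.
    nk = tau.sum(axis=0)
    pi = nk / len(x)
    mu = (tau * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((tau * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk)

# Outright clustering: assign each observation to its highest-posterior component.
labels = tau.argmax(axis=1)
```

The same logic generalizes to multivariate normal components and, for mixed data, to products of normal and multinomial component densities.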

Relevance: 90.00%

An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local FDR (false discovery rate) is provided for each gene. An attractive feature of the mixture model approach is that it provides a framework for the estimation of the prior probability that a gene is not differentially expressed, and this probability can subsequently be used in forming a decision rule. The rule can also be formed to take the false negative rate into account. We apply this approach to a well-known publicly available data set on breast cancer, and discuss our findings with reference to other approaches.
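A rough sketch of the two-group mixture idea behind the local FDR, with both component densities and the prior probability pi0 taken as known purely for illustration; in practice they are estimated from the data:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
# Synthetic test statistics: 90% null genes (N(0,1)), 10% differentially
# expressed (N(3,1)). pi0 is the prior prob. of not being diff. expressed.
pi0 = 0.9
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])

# Two-group mixture density f(z) = pi0*f0(z) + (1 - pi0)*f1(z).
f0 = norm.pdf(z, 0, 1)
f1 = norm.pdf(z, 3, 1)
f = pi0 * f0 + (1 - pi0) * f1

local_fdr = pi0 * f0 / f    # posterior prob. a gene is not diff. expressed
called = local_fdr < 0.2    # decision rule: flag genes with small local FDR
```

Genes whose local FDR falls below the cut-off are flagged, and the average local FDR over the flagged genes estimates the implied global FDR of that selection.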

Relevance: 90.00%

An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local false discovery rate is provided for each gene, and it can be implemented so that the implied global false discovery rate is bounded as with the Benjamini-Hochberg methodology based on tail areas. The latter procedure is too conservative, unless it is modified according to the prior probability that a gene is not differentially expressed. An attractive feature of the mixture model approach is that it provides a framework for the estimation of this probability and its subsequent use in forming a decision rule. The rule can also be formed to take the false negative rate into account.


Relevance: 90.00%

This paper presents a way to describe design patterns rigorously based on role concepts. Rigorous pattern descriptions are a key aspect for patterns to be used as rules for model evolution in the MDA context, for example. We formalize the role concepts commonly used in defining design patterns as a role metamodel using Object-Z. Given this role metamodel, individual design patterns are specified generically as a formal pattern role model using Object-Z. We also formalize the properties that must be captured in a class model when a design pattern is deployed. These properties are defined generically in terms of role bindings from a pattern role model to a class model. Our work provides a precise but abstract approach for pattern definition and also provides a precise basis for checking the validity of pattern usage in designs.

Relevance: 90.00%

In this paper, we present a framework for pattern-based model evolution approaches in the MDA context. In the framework, users define patterns using a pattern modeling language that is designed to describe software design patterns, and they can use the patterns as rules to evolve their model. In the framework, design model evolution takes place via two steps. The first step is a binding process of selecting a pattern and defining where and how to apply the pattern in the model. The second step is an automatic model transformation that actually evolves the model according to the binding information and the pattern rule. The pattern modeling language is defined in terms of a MOF-based role metamodel, implemented using an existing modeling framework, EMF, and incorporated as a plugin to the Eclipse modeling environment. The model evolution process is also implemented as an Eclipse plugin. With these two plugins, we provide an integrated framework in which patterns can be defined and validated, and models evolved based on them, all within a single modeling environment.

Relevance: 90.00%

2000 Mathematics Subject Classification: 62F15.

Relevance: 90.00%

Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production.
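A deliberately stripped-down sketch of the combination idea: collision counts from carcass searches are modelled as Poisson with a rate proportional to the density index, and the fitted rate then predicts collisions from the index alone. The distributions, the single proportionality parameter, and all numbers are assumptions for illustration; the actual model is richer (e.g. it also uses wind speed as a predictor).

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated acoustic bat-activity index at 30 turbines with carcass searches.
activity = rng.gamma(2.0, 50.0, size=30)
true_rate_per_activity = 0.01                 # assumed, for the simulation only
carcasses = rng.poisson(true_rate_per_activity * activity)

# Poisson maximum-likelihood estimate of the per-activity collision rate
# (closed form when the rate is a single proportionality constant).
rate_hat = carcasses.sum() / activity.sum()

# Predict the collision rate at a turbine with no carcass searches,
# from its activity index alone.
new_activity = 120.0
predicted = rate_hat * new_activity
```

With covariates such as wind speed the closed form disappears, and the rate would instead be fitted as a Poisson regression.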

Relevance: 90.00%

Survival models are widely applied in engineering to model time-to-event data, where censoring is a common issue. With or without parametric models, heterogeneous data may not be fitted well by a single distribution. The present study relies on survival data for critical pumps, where traditional parametric regression can be improved upon. Using an empirical method to split the censored data into two subgroups, so that separate models can be fitted to each, we mix two distinct distributions following a mixture-model approach. We conclude that this is a good method for fitting data that do not follow a single standard parametric distribution, yielding reliable parameter estimates. A constant-cumulative-hazard policy was also used to determine optimal inspection times from the fitted mixture model, which can be compared against current maintenance policies to decide whether changes should be introduced.
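A sketch of fitting a two-component parametric mixture to censored failure times by maximizing the censored-data likelihood: events contribute the mixture density, censored observations the mixture survival function. Exponential components are used here for brevity (the study's components could be, e.g., Weibull), and all data and values are simulated:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 400
comp = rng.random(n) < 0.6                    # latent subgroup membership
t = np.where(comp, rng.exponential(10.0, n),  # slow-failing subgroup
                   rng.exponential(0.5, n))   # fast-failing subgroup
c = 15.0                                      # administrative censoring time
event = t <= c
y = np.minimum(t, c)

def negloglik(theta):
    # theta = (logit of mixing weight, log rate 1, log rate 2)
    p = 1.0 / (1.0 + np.exp(-theta[0]))
    l1, l2 = np.exp(theta[1:])
    # Events contribute the mixture density, censored times the survival fn.
    dens = p * l1 * np.exp(-l1 * y) + (1 - p) * l2 * np.exp(-l2 * y)
    surv = p * np.exp(-l1 * y) + (1 - p) * np.exp(-l2 * y)
    return -(np.log(dens[event]).sum() + np.log(surv[~event]).sum())

x0 = np.array([0.0, np.log(0.2), np.log(1.0)])
res = minimize(negloglik, x0, method="Nelder-Mead")
p_hat = 1.0 / (1.0 + np.exp(-res.x[0]))       # fitted mixing weight
rates_hat = np.exp(res.x[1:])                 # fitted component hazard rates
```

The transformed parameterization (logit weight, log rates) keeps the optimization unconstrained; an EM algorithm over the latent subgroup labels is the usual alternative.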

Relevance: 90.00%

Intermediate-complexity general circulation models are a fundamental tool to investigate the role of internal and external variability within the general circulation of the atmosphere and ocean. The model used in this thesis is an intermediate-complexity atmospheric general circulation model (SPEEDY) coupled to a state-of-the-art modelling framework for the ocean (NEMO). We assess to which extent the model allows a realistic simulation of the most prominent natural mode of variability at interannual time scales: the El Niño-Southern Oscillation (ENSO). To a good approximation, the model represents the ENSO-induced Sea Surface Temperature (SST) pattern in the equatorial Pacific, despite a cold tongue-like bias. The model underestimates (overestimates) the typical ENSO spatial variability during the winter (summer) seasons. The mid-latitude response to ENSO reveals that the typical poleward stationary Rossby wave train is reasonably well represented. The spectral decomposition of ENSO features a spectrum that lacks periodicity at high frequencies and is overly periodic at interannual timescales. We then implemented an idealised transient mean-state change in the SPEEDY model. A warmer climate is simulated by an alteration of the parametrized radiative fluxes that corresponds to doubled carbon dioxide absorptivity. Results indicate that the globally averaged surface air temperature increases by 0.76 K. Regionally, the induced signal on the SST field features a significant warming over the central-western Pacific and an El Niño-like warming in the subtropics. In general, the model features a weakening of the tropical Walker circulation and a poleward expansion of the local Hadley cell. This response is also detected in a poleward rearrangement of the tropical convective rainfall pattern. The model configuration implemented here provides a valid theoretical support for future studies on climate sensitivity and forced modes of variability under mean-state changes.

Relevance: 80.00%

Since the beginning of human occupation on the south-central coast of Santa Catarina, Brazil, the interplay between natural and anthropic processes has shaped a strongly domesticated landscape, marked by the massive construction of shell mounds (sambaquis) of monumental dimensions and by their millennia-long permanence. On the coastal plain between Passagem da Barra (municipality of Laguna) and Figueirinha lake (municipality of Jaguaruna), 76 sambaquis were mapped, 48 of which have been dated. The systematic survey of sites and dates made it possible to identify spatial distribution patterns among the sambaquis of the region with respect to the sedimentary context at the time of construction, stratigraphy, and age. In this way, the region's sites were found to exhibit five geological-geomorphological location contexts, three stratigraphic patterns, and four phases of sambaqui occupation based on the number of sites and the dominant construction pattern. The integrated model of sedimentary evolution and time-space distribution of sambaquis indicates that these sites were built in areas that had already emerged and were rarely flooded, and that inland sites, away from the lagoon bodies, may not have been preserved or may not be exposed owing to the continuous silting that characterized the region after the Holocene transgression maximum. The data integration proposed here highlights the importance of integrated approaches between archaeology and the geosciences in the study of landscape evolution.