917 results for Hierarchical two-stage model


Relevance: 100.00%

Abstract:

Gene expression in living systems is inherently stochastic and tends to produce varying numbers of proteins over repeated cycles of transcription and translation. In this paper, an expression is derived for the steady-state protein number distribution starting from a two-stage kinetic model of the gene expression process involving p proteins and r mRNAs. The derivation is based on an exact path integral evaluation of the joint distribution, P(p, r, t), of p and r at time t, which can be expressed in terms of the coupled Langevin equations for p and r that represent the two-stage model in continuum form. The steady-state distribution of p alone, P(p), is obtained from P(p, r, t) (a bivariate Gaussian) by integrating out the r degrees of freedom and taking the limit t → ∞. P(p) is found to be proportional to the product of a Gaussian and a complementary error function. It provides a generally satisfactory fit to simulation data on the same two-stage process when the translational efficiency (a measure of intrinsic noise levels in the system) is relatively low; it is less successful as a model of the data when the translational efficiency, and hence the noise level, is high.
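
As a rough illustration of the two-stage process described above, the sketch below runs a standard Gillespie simulation of transcription, translation, and decay, and defines the Gaussian-times-erfc form as a candidate fit. The rate constants (and hence the translational efficiency) are invented for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)

# Invented rates: transcription, mRNA decay, translation per mRNA, protein decay.
k_r, g_r, k_p, g_p = 1.0, 1.0, 5.0, 0.1

def gillespie_two_stage(t_end=300.0):
    """Protein count after simulating the two-stage model up to t_end."""
    t, r, p = 0.0, 0, 0
    while t < t_end:
        rates = np.array([k_r, g_r * r, k_p * r, g_p * p])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        event = rng.choice(4, p=rates / total)
        if event == 0:
            r += 1          # transcription
        elif event == 1:
            r -= 1          # mRNA degradation
        elif event == 2:
            p += 1          # translation
        else:
            p -= 1          # protein degradation
    return p

samples = np.array([gillespie_two_stage() for _ in range(100)])
print("mean protein number:", samples.mean())
print("Fano factor (variance/mean):", samples.var() / samples.mean())

def candidate_form(p, mu, sigma, a, b):
    """Gaussian times complementary error function, up to normalization."""
    return np.exp(-(p - mu) ** 2 / (2 * sigma ** 2)) * erfc(a * (p - b))
```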

Relevance: 100.00%

Abstract:

Most CRM work focuses on consumer applications. This paper addresses the operational adoption issues facing organisations deploying CRM practices. Organisations face a plethora of challenges when adopting CRM, yet previous research is limited to examining the CRM adoption process at either the individual/employee level or the organisational level. Hence, in this paper the myriad organisational, marketing, and technical antecedents that appear to impinge upon employee perceptions and organisational implementation of CRM are structured in a two-stage model. Using a stratified sample of ten organisations across four sectors, seven hypotheses are tested on data collected from 301 practitioners, and the two-stage model is analysed using structural equation modelling. Findings reveal that CRM implementation relates to employee perceptions of CRM. This paper deepens our understanding of the organisational practices needed to adopt CRM so that an organisation can properly profit from its expected benefits.

Relevance: 100.00%

Abstract:

The visual system pools information from local samples to calculate textural properties. We used a novel stimulus to investigate how signals are combined to improve estimates of global orientation. Stimuli were 29 × 29 element arrays of 4 c/deg log Gabors, spaced 1° apart. A proportion of these elements had a coherent orientation (horizontal/vertical), with the remainder assigned random orientations. The observer's task was to identify the global orientation. The spatial configuration of the signal was modulated by a checkerboard pattern of square checks containing potential signal elements. The other locations contained either randomly oriented elements ("noise check") or were blank ("blank check"). The distribution of signal elements was manipulated by varying the size and location of the checks within a fixed-diameter stimulus. An ideal detector would pool responses only from potential signal elements. Humans did this for medium check sizes, and for large check sizes when a signal was presented in the fovea. For small check sizes, however, the pooling occurred indiscriminately over relevant and irrelevant locations. For these check sizes, thresholds for the noise check and blank check conditions were similar, suggesting that the limiting noise is not induced by the response to the noise elements. The results are described by a model that filters the stimulus at the potential target orientations and then combines the signals over space in two stages. The first is a mandatory integration of local signals over a fixed area, limited by internal noise at each location. The second is a task-dependent combination of the outputs from the first stage. © 2014 ARVO.
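
A minimal sketch of the two-stage pooling idea in this abstract, assuming a simplified stimulus (element orientations on a grid) and invented window and noise parameters; it is not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(coherence=0.3, n=29, window=3, local_noise=0.5):
    """Decision variable for one trial: positive values favour 'horizontal'."""
    # A fraction `coherence` of the elements is horizontal (0 rad); the rest random.
    theta = rng.uniform(0, np.pi, (n, n))
    theta[rng.random((n, n)) < coherence] = 0.0

    # Oriented-filter response: +1 for horizontal, -1 for vertical elements.
    evidence = np.cos(2 * theta)

    # Stage 1: mandatory pooling over fixed local windows, with independent
    # internal noise added at each pooled location.
    k = n // window
    blocks = evidence[:k * window, :k * window].reshape(k, window, k, window)
    pooled = blocks.mean(axis=(1, 3)) + rng.normal(0.0, local_noise, (k, k))

    # Stage 2: task-dependent combination of the stage-1 outputs -- here a
    # plain sum over all locations (an observer that cannot exclude
    # irrelevant regions).
    return pooled.sum()

responses = [simulate_trial() > 0 for _ in range(200)]
print("proportion of 'horizontal' responses:", np.mean(responses))
```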

Relevance: 100.00%

Abstract:

I provide choice-theoretic foundations for a simple two-stage model, called transitive shortlist methods, where choices are made by sequentially applying a pair of transitive preferences (or rationales) to eliminate inferior alternatives. Despite its simplicity, the model accommodates a wide range of choice phenomena including the status quo bias, framing, homophily, compromise, and limited willpower. I establish that the model can be succinctly characterized in terms of some well-documented context effects in choice. I also show that the underlying rationales are straightforward to determine from readily observable reversals in choice. Finally, I highlight the usefulness of these results in a variety of applications.
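
The shortlisting procedure itself is simple enough to sketch directly: keep the alternatives that survive a first transitive rationale, then choose from that shortlist with a second one. The two example rationales below are invented for illustration.

```python
def maximal(menu, rationale):
    """Alternatives in `menu` not strictly dominated under `rationale`,
    given as a set of (better, worse) pairs assumed to be transitive."""
    return {x for x in menu if not any((y, x) in rationale for y in menu)}

def shortlist_choice(menu, rationale_1, rationale_2):
    """Two-stage choice: shortlist with rationale_1, then choose with rationale_2."""
    return maximal(maximal(menu, rationale_1), rationale_2)

# Invented example: the first rationale eliminates c in favour of a; the
# second ranks b above a and c, and a above c (both transitive).
P1 = {("a", "c")}
P2 = {("b", "a"), ("b", "c"), ("a", "c")}
print(shortlist_choice({"a", "b", "c"}, P1, P2))   # -> {'b'}
```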

Relevance: 100.00%

Abstract:

Topic modeling has been widely utilized in information retrieval, text mining, text classification, and related fields. Most existing statistical topic modeling methods, such as LDA and pLSA, represent a topic by selecting single words from the multinomial word distribution over that topic. This term-based representation has two main shortcomings: first, popular or common words occur frequently across different topics, making the topics ambiguous; second, single words lack the coherent semantic meaning needed to represent topics accurately. To overcome these problems, this paper proposes a two-stage model that combines text mining and pattern mining with statistical modeling to generate more discriminative and semantically rich topic representations. Experiments show that the optimized topic representations generated by the proposed methods outperform the typical statistical topic modeling method LDA in terms of accuracy and certainty.
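
A skeleton of the two-stage idea, assuming standard scikit-learn LDA for stage one and a simple frequent word-pair count for the pattern-based stage two; this is not the authors' exact pattern-mining algorithm, and the tiny corpus is invented.

```python
from itertools import combinations
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny invented corpus, purely for illustration.
docs = [
    "gene expression noise model protein",
    "protein noise stochastic gene model",
    "topic model text mining pattern",
    "pattern mining text topic representation",
]

# Stage 1: a standard LDA topic model.
vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topic = lda.transform(X).argmax(axis=1)

# Stage 2: mine frequent word pairs from the documents assigned to each topic
# and keep the most frequent ones as a pattern-based topic representation.
for topic in range(2):
    pair_counts = Counter()
    for i, assigned in enumerate(doc_topic):
        if assigned == topic:
            words = sorted(set(docs[i].split()))
            pair_counts.update(combinations(words, 2))
    print(f"topic {topic}:", pair_counts.most_common(3))
```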

Relevance: 100.00%

Abstract:

This study investigates three questions related to medical practice variation. First, it tests whether average length of stay varies across Portuguese National Health Service hospitals when controlling for differences in patients' characteristics. Second, it examines hospital-level characteristics to determine whether they can explain differences in average length of stay across hospitals. Finally, it proposes a best-practice average length of stay for each of the six episodes of care analyzed. The analysis uses administrative data from the Diagnosis-Related Groups data set for 2012, replicating a hierarchical two-stage model with hospital fixed effects. The results show that, after taking patients' characteristics into account, variation in average length of stay across hospitals exists, and this variation cannot be explained by hospital-level characteristics.
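
A hedged sketch of a hierarchical two-stage set-up of this kind: a patient-level regression with hospital fixed effects, followed by a hospital-level regression of the estimated effects on hospital characteristics. The variable names and the simulated data are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_hosp, n_pat = 20, 2000

# Simulated patient-level data (all names and effect sizes are invented).
hosp = rng.integers(0, n_hosp, n_pat)
age = rng.normal(60, 15, n_pat)
severity = rng.normal(0, 1, n_pat)
hosp_effect = rng.normal(0, 0.5, n_hosp)
los = 5 + 0.03 * age + 1.2 * severity + hosp_effect[hosp] + rng.normal(0, 1, n_pat)
patients = pd.DataFrame({"los": los, "age": age, "severity": severity, "hospital": hosp})

# Stage 1: patient-level regression with hospital fixed effects.
stage1 = smf.ols("los ~ age + severity + C(hospital)", data=patients).fit()

# Collect one estimated effect per hospital (the baseline hospital is 0).
effects = pd.Series(0.0, index=range(n_hosp))
for h in range(1, n_hosp):
    effects[h] = stage1.params[f"C(hospital)[T.{h}]"]

# Stage 2: regress the estimated hospital effects on a hospital-level
# characteristic (an invented 'teaching hospital' indicator).
hospitals = pd.DataFrame({"effect": effects, "teaching": rng.integers(0, 2, n_hosp)})
stage2 = smf.ols("effect ~ teaching", data=hospitals).fit()
print(stage2.params)
```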

Relevance: 100.00%

Abstract:

Information mismatch and information overload are two fundamental issues that influence the effectiveness of information filtering systems. Although both term-based and pattern-based approaches have been proposed to address these issues, neither approach alone can provide a satisfactory decision about which information is relevant. This paper presents a novel two-stage decision model to address both issues. The first stage is a rough analysis model that addresses the overload problem; the second stage is a pattern taxonomy mining model that addresses the mismatch problem. The experimental results on RCV1 and TREC filtering topics show that the proposed model significantly outperforms state-of-the-art filtering systems.
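
A loose skeleton of a two-stage filtering decision of this kind: a coarse first-stage filter to reduce overload, then a pattern-based ranking of the survivors. The rough-set analysis and pattern-taxonomy mining of the actual model are not reproduced; the scoring below is deliberately simple and the data are invented.

```python
def stage1_filter(docs, topic_terms, threshold=2):
    """Keep documents sharing at least `threshold` terms with the topic profile."""
    return [d for d in docs if len(set(d.split()) & topic_terms) >= threshold]

def stage2_rank(docs, patterns):
    """Rank the surviving documents by how many term patterns they fully contain."""
    def pattern_score(doc):
        words = set(doc.split())
        return sum(1 for pattern in patterns if pattern <= words)
    return sorted(docs, key=pattern_score, reverse=True)

# Invented topic profile, patterns, and documents.
topic_terms = {"tensor", "model", "recommendation"}
patterns = [{"tensor", "model"}, {"recommendation", "ranking"}]
docs = [
    "tensor model ranking recommendation",
    "cooking recipes and travel notes",
    "tensor decomposition model",
]
print(stage2_rank(stage1_filter(docs, topic_terms), patterns))
```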

Relevance: 100.00%

Abstract:

In a tag-based recommender system, the multi-dimensional correlations should be modeled effectively to find quality recommendations. Recently, a few researchers have used tensor models in recommendation to represent and analyze the latent relationships inherent in multi-dimensional data. A common approach is to build the tensor model, decompose it, and then directly use the reconstructed tensor to generate recommendations based on the maximum values of the tensor elements. To improve accuracy and scalability, we propose an implementation of the n-mode block-striped (matrix) product for scalable tensor reconstruction, together with a probabilistic ranking of the candidate items generated from the reconstructed tensor. Testing on real-world datasets demonstrates that the proposed method outperforms the benchmark methods in terms of recommendation accuracy and scalability.
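
The core reconstruction operation here is the standard n-mode (tensor-times-matrix) product; the sketch below shows a plain numpy version of it applied to a toy Tucker-style reconstruction. The block-striped, scalable implementation and the probabilistic ranking are not reproduced, and the shapes are illustrative.

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Multiply `tensor` by `matrix` along the given mode:
    result[..., j, ...] = sum_k matrix[j, k] * tensor[..., k, ...]."""
    return np.moveaxis(np.tensordot(matrix, tensor, axes=(1, mode)), 0, mode)

# Toy Tucker-style reconstruction of a (user x item x tag) tensor:
# core tensor multiplied by one factor matrix per mode.
rng = np.random.default_rng(3)
core = rng.normal(size=(2, 2, 2))
factors = [rng.normal(size=(4, 2)),   # users
           rng.normal(size=(5, 2)),   # items
           rng.normal(size=(3, 2))]   # tags

recon = core
for mode, factor in enumerate(factors):
    recon = mode_n_product(recon, factor, mode)
print(recon.shape)   # (4, 5, 3): a candidate score for every (user, item, tag)
```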

Relevance: 100.00%

Abstract:

A numerical modelling technique has been developed for predicting the detailed performance of a double-inlet, two-stage pulse tube refrigerator. The pressure variations in the compressor, pulse tube, and reservoir were derived, assuming the stroke volume variation of the compressor to be sinusoidal. Relationships for the mass flow rates, volume flow rates, and temperature as functions of time and position were developed. The predicted refrigeration powers were calculated by accounting for the effect of void volumes and the phase shift between pressure and mass flow rate. These results are compared with the experimental results for a specific pulse tube refrigerator configuration and with an existing theoretical model, and the analysis shows that the theoretical predictions are in good agreement with each other.

Relevance: 100.00%

Abstract:

It is convenient and effective to solve nonlinear problems with a model that has a linear-in-the-parameters (LITP) structure. However, the nonlinear parameters (e.g., the width of a Gaussian function) of each model term need to be pre-determined, either from expert experience or through exhaustive search. An alternative approach is to optimize them with a gradient-based technique (e.g., Newton's method). Unfortunately, all of these methods still require a substantial amount of computation. Recently, the extreme learning machine (ELM) has shown its advantages in terms of fast learning from data, but the sparsity of the constructed model cannot be guaranteed. This paper proposes a novel algorithm for the automatic construction of a nonlinear system model based on the extreme learning machine. This is achieved by effectively integrating the ELM and leave-one-out (LOO) cross-validation with our two-stage stepwise construction procedure [1]. The main objective is to improve the compactness and generalization capability of the model constructed by the ELM method. Numerical analysis shows that the proposed algorithm involves only about half the computation of the orthogonal least squares (OLS) based method. Simulation examples are included to confirm the efficacy and superiority of the proposed technique.
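
A minimal sketch of the two ingredients named above, an ELM-style random hidden layer and a leave-one-out error computed via the PRESS formula, used here to compare candidate model sizes. It is not the paper's two-stage construction algorithm, and the toy regression problem is invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def elm_hidden_layer(X, n_hidden, rng):
    """Random sigmoid hidden layer of an extreme learning machine."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def loo_mse(H, y):
    """Leave-one-out MSE of least squares on design H via the PRESS formula."""
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    residuals = y - H @ beta
    hat_diag = np.einsum("ij,ji->i", H, np.linalg.pinv(H))
    return np.mean((residuals / (1.0 - hat_diag)) ** 2)

# Invented toy regression problem.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Use the LOO error to compare candidate numbers of hidden nodes.
for n_hidden in (5, 20, 80):
    H = elm_hidden_layer(X, n_hidden, rng)
    print(n_hidden, "hidden nodes -> LOO MSE:", loo_mse(H, y))
```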

Relevance: 100.00%

Abstract:

This paper investigates the construction of linear-in-the-parameters (LITP) models for multi-output regression problems. Most existing stepwise forward algorithms choose the regressor terms one by one, each time maximizing the model error reduction ratio. The drawback is that such procedures cannot guarantee a sparse model, especially under highly noisy learning conditions. The main objective of this paper is to improve the sparsity and generalization capability of a model for multi-output regression problems while reducing the computational complexity. This is achieved by proposing a novel multi-output two-stage locally regularized model construction (MTLRMC) method using the extreme learning machine (ELM). In this new algorithm, the nonlinear parameters in each term, such as the width of the Gaussian function and the power of a polynomial term, are first determined by the ELM. An initial multi-output LITP model is then generated according to the termination criteria in the first stage. In the second stage, the significance of each selected regressor is checked and insignificant regressors are replaced. The proposed method can produce an optimized compact model by using the regularized parameters. Further, to reduce the computational complexity, a proper regression context is used to allow fast implementation of the proposed method. Simulation results confirm the effectiveness of the proposed technique. © 2013 Elsevier B.V.

Relevance: 100.00%

Abstract:

Predictors of random effects are usually based on the popular mixed effects (ME) model, developed under the assumption that the sample is obtained from a conceptually infinite population; such predictors are employed even when the actual population is finite. Two alternatives that incorporate the finite nature of the population are obtained from the superpopulation model proposed by Scott and Smith (1969, Estimation in multi-stage surveys, J. Amer. Statist. Assoc. 64, 830-840) or from the finite population mixed model recently proposed by Stanek and Singer (2004, Predicting random effects from finite population clustered samples with response error, J. Amer. Statist. Assoc. 99, 1119-1130). Predictors derived under the latter model, with the additional assumptions that all variance components are known and that within-cluster variances are equal, have smaller mean squared error (MSE) than competitors based on either the ME model or Scott and Smith's model. As population variances are rarely known, we propose method-of-moments estimators to obtain empirical predictors and conduct a simulation study to evaluate their performance. The results suggest that the finite population mixed model empirical predictor is more stable than its competitors since, in terms of MSE, it is either the best or the second best, and when it is second best, its performance lies within acceptable limits. When both the cluster and unit intra-class correlation coefficients are very high (e.g., 0.95 or more), the performance of the empirical predictors derived under the three models is similar. © 2007 Elsevier B.V. All rights reserved.
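
A small sketch of the ANOVA-type method-of-moments estimators that such empirical predictors plug in, for a balanced one-way (cluster/unit) design with simulated data; the balanced design and the numbers are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(5)
n_clusters, m = 30, 8              # clusters and units per cluster (balanced)
sigma_b2, sigma_e2 = 4.0, 1.0      # true between- and within-cluster variances

# Simulated clustered data.
cluster_effects = rng.normal(0, np.sqrt(sigma_b2), n_clusters)
y = cluster_effects[:, None] + rng.normal(0, np.sqrt(sigma_e2), (n_clusters, m))

# One-way ANOVA mean squares.
cluster_means = y.mean(axis=1)
msw = ((y - cluster_means[:, None]) ** 2).sum() / (n_clusters * (m - 1))
msb = m * ((cluster_means - y.mean()) ** 2).sum() / (n_clusters - 1)

# Method-of-moments estimators: E[MSW] = sigma_e2, E[MSB] = sigma_e2 + m * sigma_b2.
sigma_e2_hat = msw
sigma_b2_hat = max((msb - msw) / m, 0.0)
print("within-cluster variance estimate:", sigma_e2_hat)
print("between-cluster variance estimate:", sigma_b2_hat)
print("intra-class correlation estimate:", sigma_b2_hat / (sigma_b2_hat + sigma_e2_hat))
```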