978 results for Implementation models


Relevance:

30.00%

Publisher:

Abstract:

This article presents important properties of standard discrete distributions and their conjugate densities. The Bernoulli and Poisson processes are described as generators of such discrete models, and a characterization of distributions by mixtures is also introduced. The article adopts a novel singular notation and representation. Singular representations are unusual in statistical texts; nevertheless, the singular notation makes it simpler to extend and generalize theoretical results, and it greatly facilitates numerical and computational implementation.
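
As a minimal illustration of the conjugacy mentioned above (the article's singular notation is not reproduced here), the following Python sketch performs the standard Beta-Bernoulli and Gamma-Poisson posterior updates; all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Beta-Bernoulli: the Beta prior is conjugate to the Bernoulli likelihood.
theta_true = 0.3
x = rng.binomial(1, theta_true, size=50)           # Bernoulli observations
a0, b0 = 1.0, 1.0                                  # Beta(a0, b0) prior
a_post, b_post = a0 + x.sum(), b0 + len(x) - x.sum()
print("Beta posterior mean:", a_post / (a_post + b_post))

# Gamma-Poisson: the Gamma prior is conjugate to the Poisson likelihood.
lam_true = 4.0
y = rng.poisson(lam_true, size=50)                 # Poisson counts
c0, d0 = 1.0, 1.0                                  # Gamma(shape c0, rate d0) prior
c_post, d_post = c0 + y.sum(), d0 + len(y)
print("Gamma posterior mean:", c_post / d_post)
```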

Relevance:

30.00%

Publisher:

Abstract:

The Birnbaum-Saunders regression model is commonly used in reliability studies. We derive a simple matrix formula for the second-order covariances of the maximum-likelihood estimators in this class of models. The formula is quite suitable for computer implementation, since it involves only simple operations on matrices and vectors. Simulation results show that the second-order covariances can be quite pronounced for small to moderate sample sizes. We also present empirical applications.
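
The paper's second-order covariance formula is not reproduced here. For context only, a sketch of the usual first-order approach, assuming SciPy: fit the two-parameter Birnbaum-Saunders (fatigue-life) distribution by maximum likelihood and read an approximate covariance matrix from the inverse Hessian; sample size and parameter values are hypothetical.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)

# Simulate Birnbaum-Saunders (fatigue-life) data: shape alpha, scale beta.
alpha_true, beta_true = 0.5, 2.0
t = stats.fatiguelife.rvs(alpha_true, scale=beta_true, size=200, random_state=rng)

def negloglik(params):
    alpha, beta = params
    if alpha <= 0 or beta <= 0:
        return np.inf
    return -np.sum(stats.fatiguelife.logpdf(t, alpha, scale=beta))

fit = optimize.minimize(negloglik, x0=[1.0, 1.0], method="BFGS")

# First-order (asymptotic) covariance of the MLEs: inverse observed information,
# here approximated by the BFGS estimate of the inverse Hessian.
print("MLEs:", fit.x)
print("approximate first-order covariance:\n", fit.hess_inv)
```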

Relevance:

30.00%

Publisher:

Abstract:

The family of distributions proposed by Birnbaum and Saunders (1969) can be used to model lifetime data and is widely applied to model failure times of fatiguing materials. We give a simple matrix formula of order n^(-1/2), where n is the sample size, for the skewness of the distributions of the maximum likelihood estimates of the parameters in the Birnbaum-Saunders nonlinear regression models recently introduced by Lemonte and Cordeiro (2009). The formula is quite suitable for computer implementation, since it involves only simple operations on matrices and vectors, and it yields closed-form skewness expressions for a wide range of nonlinear regression models. Empirical and real applications are analyzed and discussed. (C) 2010 Elsevier B.V. All rights reserved.
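
The matrix skewness formula itself is not reproduced here. As context for the n^(-1/2) order claim, a small Monte Carlo sketch with a stand-in model (the exponential-rate MLE, not the Birnbaum-Saunders regression case) shows the estimated skewness shrinking roughly like n^(-1/2).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Skewness of an MLE typically vanishes at rate n^(-1/2): check with the
# exponential-rate MLE (lambda_hat = 1 / sample mean) as a stand-in model.
for n in (10, 40, 160, 640):
    mles = 1.0 / rng.exponential(scale=1.0, size=(5000, n)).mean(axis=1)
    sk = stats.skew(mles)
    print(f"n={n:4d}  skewness={sk:.3f}  skewness*sqrt(n)={sk * np.sqrt(n):.2f}")
```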

Relevance:

30.00%

Publisher:

Abstract:

Random effect models have been widely applied in many fields of research. However, models with uncertain design matrices for the random effects have received little attention. In some applications with such problems, an expectation method has been used for simplicity, but this method discards the extra information carried by the uncertainty in the design matrix. A closed-form solution to this problem is generally difficult to attain, so we propose a two-step algorithm for estimating the parameters, especially the variance components in the model. The implementation is based on Monte Carlo approximation and a Newton-Raphson-based EM algorithm. As an example, a simulated genetics dataset was analyzed. The results show that the proportion of the total variance explained by the random effects is accurately estimated by the proposed algorithm, whereas it is severely underestimated by the expectation method. By introducing heuristic search and optimization methods, the algorithm could be further developed to infer the 'model-based' best design matrix and the corresponding best estimates.
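
The abstract's two-step Monte Carlo/Newton-Raphson EM for uncertain design matrices is not reproduced here. As background, a minimal sketch of a standard EM for variance components in a simple random-intercept model with a known design matrix, assuming NumPy; all values are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a one-way random-intercept model: y_ij = mu + a_i + e_ij.
m, n = 50, 8                       # groups, observations per group
mu, s2a, s2e = 2.0, 1.5, 1.0       # true parameters
a = rng.normal(0, np.sqrt(s2a), m)
y = mu + a[:, None] + rng.normal(0, np.sqrt(s2e), (m, n))

# EM for (mu, sigma_a^2, sigma_e^2), treating the random effects as missing data.
mu_h, s2a_h, s2e_h = y.mean(), 1.0, 1.0
for _ in range(200):
    # E-step: posterior mean/variance of each a_i given y and current parameters.
    v = 1.0 / (1.0 / s2a_h + n / s2e_h)
    b = v * n * (y.mean(axis=1) - mu_h) / s2e_h
    # M-step: maximize the expected complete-data log-likelihood.
    mu_h = (y - b[:, None]).mean()
    s2a_h = np.mean(b**2 + v)
    s2e_h = np.mean((y - mu_h - b[:, None])**2 + v)

print("estimates (mu, s2a, s2e):", mu_h, s2a_h, s2e_h)
print("variance ratio s2a/(s2a+s2e):", s2a_h / (s2a_h + s2e_h))
```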

Relevance:

30.00%

Publisher:

Abstract:

In this article we use factor models to describe a certain class of covariance structures for financial time series models. More specifically, we concentrate on situations where the factor variances are modeled by a multivariate stochastic volatility structure. We build on previous work by allowing the factor loadings, in the factor model structure, to be time-varying and to capture changes in asset weights over time, motivated by applications with multiple time series of daily exchange rates. We explore and discuss potential extensions of the models presented here for prediction. This discussion leads to open issues on real-time implementation and natural model comparisons.
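
As a purely illustrative sketch (not the estimation procedure of the article), the following simulates one common factor with log-AR(1) stochastic volatility and slowly time-varying loadings, assuming NumPy; all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate a single common factor with log-AR(1) stochastic volatility and
# slowly time-varying loadings, then build p return series from it.
T, p = 1000, 4
h = np.zeros(T)                                   # log-variance of the factor
for t in range(1, T):
    h[t] = -0.5 + 0.95 * (h[t - 1] + 0.5) + 0.2 * rng.normal()
f = np.exp(h / 2) * rng.normal(size=T)            # common factor

loadings = 0.8 + 0.3 * np.sin(np.linspace(0, 2 * np.pi, T))[:, None] * np.ones((1, p))
returns = loadings * f[:, None] + 0.5 * rng.normal(size=(T, p))   # idiosyncratic noise

print("sample correlations of the simulated series:\n", np.corrcoef(returns.T).round(2))
```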

Relevance:

30.00%

Publisher:

Abstract:

The past decade has witnessed a series of (well accepted and defined) financial crisis periods in the world economy. Most of these events are country specific and eventually spread across neighboring countries, with the concept of vicinity extrapolating geographic maps and entering contagion maps. Unfortunately, what contagion represents and how to measure it are still unanswered questions. In this article we measure the transmission of shocks by cross-market correlation coefficients, following Forbes and Rigobon's (2000) notion of shift-contagion. Our main contribution relies upon the use of traditional factor model techniques combined with stochastic volatility models to study the dependence among Latin American stock price indexes and the North American index. More specifically, we concentrate on situations where the factor variances are modeled by a multivariate stochastic volatility structure. From a theoretical perspective, we improve currently available methodology by allowing the factor loadings, in the factor model structure, to be time-varying and to capture changes in the series' weights over time. By doing this, we believe that changes and interventions experienced by those five countries are well accommodated by our models, which learn and adapt reasonably fast to those economic and idiosyncratic shocks. We show empirically that the time-varying covariance structure can be modeled by one or two common factors and that some sort of contagion is present in most of the series' covariances during periods of economic instability, or crisis. Open issues on real-time implementation and natural model comparisons are thoroughly discussed.
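
The shift-contagion idea rests on correcting crisis-period correlations for the mechanical effect of higher volatility. Below is a minimal sketch of the Forbes-Rigobon heteroskedasticity adjustment, which is the building block behind that notion (the article's factor/stochastic-volatility machinery is not shown); the numbers are hypothetical.

```python
import numpy as np

def forbes_rigobon_adjusted(rho, var_crisis, var_tranquil):
    """Heteroskedasticity-adjusted correlation in the spirit of Forbes and Rigobon.

    rho is the correlation measured in the crisis period and delta the relative
    increase in the source market's variance; the adjustment removes the
    mechanical rise in correlation caused by higher volatility alone.
    """
    delta = var_crisis / var_tranquil - 1.0
    return rho / np.sqrt(1.0 + delta * (1.0 - rho**2))

# Toy numbers: crisis-period correlation 0.8, source-market variance tripled.
print(forbes_rigobon_adjusted(0.8, var_crisis=3.0, var_tranquil=1.0))
```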

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, more than half of software development projects fail to meet the final users' expectations. One of the main causes is insufficient knowledge about the organization of the enterprise to be supported by the respective information system. The DEMO methodology (Design and Engineering Methodology for Organizations) has proved to be a well-defined method for specifying, through models and diagrams, the essence of any organization at a high level of abstraction. However, the methodology is independent of the implementation platform and offers no way of saving and propagating changes from the organization models to the implemented software in a runtime environment. The Universal Enterprise Adaptive Object Model (UEAOM) is a conceptual schema used as the basis for a wiki system that allows the modeling of any organization, independently of its implementation, as well as the aforementioned change propagation at runtime. Based on DEMO and UEAOM, this project aims to develop efficient and standardized methods for the automatic conversion of DEMO ontological models, represented according to the UEAOM specification, into BPMN (Business Process Model and Notation) process models with clear, unambiguous semantics, in order to facilitate the creation of processes that are almost ready to be executed on workflow systems supporting BPMN.
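
A heavily simplified, hypothetical sketch of what one conversion step could look like: mapping the coordination acts of a DEMO transaction to BPMN tasks chained by sequence flows, emitted as BPMN 2.0 XML with Python's standard library. The element names, identifiers and transaction steps used here are illustrative only; the project's actual converter is not reproduced.

```python
import xml.etree.ElementTree as ET

BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

def bpmn(tag):
    return f"{{{BPMN_NS}}}{tag}"   # ElementTree "Clark notation" for namespaced tags

# Hypothetical, simplified mapping: each step of a DEMO transaction pattern
# becomes a BPMN task, chained by sequence flows. A real converter would also
# handle gateways, actor roles (lanes), and revocation patterns.
demo_transaction = ["request", "promise", "execute", "state", "accept"]

definitions = ET.Element(bpmn("definitions"))
process = ET.SubElement(definitions, bpmn("process"),
                        {"id": "T01", "isExecutable": "false"})
prev_id = None
for i, step in enumerate(demo_transaction, start=1):
    task_id = f"task_{i}"
    ET.SubElement(process, bpmn("task"), {"id": task_id, "name": f"T01/{step}"})
    if prev_id is not None:
        ET.SubElement(process, bpmn("sequenceFlow"),
                      {"id": f"flow_{i}", "sourceRef": prev_id, "targetRef": task_id})
    prev_id = task_id

print(ET.tostring(definitions, encoding="unicode"))
```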

Relevance:

30.00%

Publisher:

Abstract:

We consider the implementation of CP violation in the context of 331 models. In particular, we treat a model where only three scalar triplets are needed to give all fermions a mass while keeping the neutrinos massless. In this case all CP violation is provided by the scalar sector.

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Reactive-optimisation procedures are responsible for the minimisation of online power losses in interconnected systems. These procedures are performed separately at each control centre and involve external network representations. If total losses can be minimised by the implementation of calculated local control actions, the entire system benefits economically, but such control actions generally result in a certain degree of inaccuracy, owing to errors in the modelling of the external system. Since these errors are inevitable, they must at least be maintained within tolerable limits by external-modelling approaches. Care must be taken to avoid unrealistic loss minimisation, as the local-control actions adopted can lead the system to points of operation which will be less economical for the interconnected system as a whole. The evaluation of the economic impact of the external modelling during reactive-optimisation procedures in interconnected systems, in terms of both the amount of losses and constraint violations, becomes important in this context. In the paper, an analytical approach is proposed for such an evaluation. Case studies using data from the Brazilian South-Southeast system (810 buses) have been carried out to compare two different external-modelling approaches, both derived from the equivalent-optimal-power-flow (EOPF) model. Results obtained show that, depending on the external-model representation adopted, the loss representation can be flawed. Results also suggest some modelling features that should be adopted in the EOPF model to enhance the economy of the overall system.
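
As a toy illustration only (not the EOPF external-modelling evaluation described above), the sketch below minimises approximate I²R losses on a single feeder over a bounded local reactive injection, using SciPy; all per-unit values are hypothetical.

```python
from scipy.optimize import minimize_scalar

# Toy illustration: losses on a single feeder serving a load P + jQ, with a
# local shunt compensator injecting reactive power q at the load bus.
# Line losses are approximated as R * (P^2 + (Q - q)^2) / V^2.
P, Q, V, R = 1.0, 0.6, 1.0, 0.05       # per-unit values (hypothetical)

def losses(q):
    return R * (P**2 + (Q - q)**2) / V**2

# The injection limit (0.5 pu) binds here, so the optimum sits on the constraint.
res = minimize_scalar(losses, bounds=(0.0, 0.5), method="bounded")
print(f"best local injection q = {res.x:.3f} pu, losses = {res.fun:.4f} pu")
```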

Relevance:

30.00%

Publisher:

Abstract:

Linear mixed effects models are frequently used to analyse longitudinal data, owing to their flexibility in modelling the covariance structure between and within observations. Furthermore, they easily handle unbalanced data, whether with respect to the number of observations per subject or per time period, and accommodate varying time intervals between observations. In most applications of mixed models in the biological sciences, a normal distribution is assumed both for the random effects and for the residuals. This, however, makes inferences vulnerable to the presence of outliers. Here, linear mixed models employing thick-tailed distributions for robust inferences in longitudinal data analysis are described. Specific distributions discussed include the Student-t, the slash and the contaminated normal. A Bayesian framework is adopted, and the Gibbs sampler and Metropolis-Hastings algorithms are used to carry out the posterior analyses. An example with data on orthodontic distance growth in children is discussed to illustrate the methodology. Analyses based on the Student-t distribution and on the usual Gaussian assumption are contrasted. The thick-tailed distributions provide an appealing robust alternative to the Gaussian process for modelling distributions of the random effects and of residuals in linear mixed models, and the MCMC implementation allows the computations to be performed in a flexible manner.
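
A stripped-down sketch of the scale-mixture idea behind such models: a Gibbs sampler for a linear regression with Student-t errors, with the degrees of freedom fixed and no random effects (the article's longitudinal mixed models and the slash and contaminated-normal cases are not covered). Assumes NumPy; data are simulated.

```python
import numpy as np

rng = np.random.default_rng(5)

# Student-t errors as a scale mixture of normals:
# y_i | lam_i ~ N(x_i' beta, sigma2 / lam_i),  lam_i ~ Gamma(nu/2, rate=nu/2).
n, nu = 200, 4.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.standard_t(nu, size=n)     # heavy-tailed residuals

beta, sigma2, lam = np.zeros(2), 1.0, np.ones(n)
draws = []
for it in range(3000):
    # beta | rest ~ N((X'LX)^-1 X'Ly, sigma2 (X'LX)^-1), with L = diag(lam)
    XtL = X.T * lam
    V = np.linalg.inv(XtL @ X)
    beta = rng.multivariate_normal(V @ (XtL @ y), sigma2 * V)
    # sigma2 | rest ~ Inv-Gamma(n/2, sum(lam_i r_i^2)/2)  (improper 1/sigma2 prior)
    r = y - X @ beta
    sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / np.sum(lam * r**2))
    # lam_i | rest ~ Gamma((nu+1)/2, rate=(nu + r_i^2/sigma2)/2)
    lam = rng.gamma((nu + 1.0) / 2.0, 2.0 / (nu + r**2 / sigma2))
    if it >= 1000:
        draws.append(beta)

print("posterior mean of beta:", np.mean(draws, axis=0))
```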

Relevance:

30.00%

Publisher:

Abstract:

The iterative quadratic maximum likelihood (IQML) method and the method of direction estimation (MODE) are well-known high-resolution direction-of-arrival (DOA) estimation methods. Their solutions lead to a constrained optimization problem. The usual linear constraint exhibits poor performance for certain DOA values. This work proposes a new linear constraint applicable to both DOA methods and compares its performance with two others: the unit-norm constraint and the usual linear constraint. It is shown that the proposed alternative performs better than the other constraints. The resulting computational complexity is also investigated.
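
IQML and MODE themselves are not reproduced here. For context, a standard MUSIC pseudo-spectrum sketch for a half-wavelength uniform linear array, assuming NumPy and SciPy; the array size, snapshot count and source directions are hypothetical.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(6)

# MUSIC for a uniform linear array with half-wavelength spacing, shown for
# context; IQML and MODE instead solve a constrained optimization problem.
M, N, K = 8, 200, 2                               # sensors, snapshots, sources
true_doas = np.deg2rad([-10.0, 25.0])

def steering(theta):
    return np.exp(-1j * np.pi * np.arange(M)[:, None] * np.sin(theta))

A = steering(true_doas)                           # M x K steering matrix
S = (rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) / np.sqrt(2)
Xs = A @ S + noise

R = Xs @ Xs.conj().T / N                          # sample covariance
eigval, eigvec = np.linalg.eigh(R)
En = eigvec[:, :M - K]                            # noise subspace (smallest eigenvalues)

grid = np.deg2rad(np.linspace(-90, 90, 721))
a = steering(grid)
spectrum = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)
peaks, _ = find_peaks(spectrum)
top = peaks[np.argsort(spectrum[peaks])[-K:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[top])))
```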

Relevance:

30.00%

Publisher:

Abstract:

This study analyzes the particularities of Brazilian radio networks that adopt the all-news format and briefly presents the main national and international experiences in implementing the model, together with its conceptualization. Besides the bibliographic review, we use a multiple-case study, taking the CBN and BandNews FM networks as empirical objects. We also apply methodological procedures of systematic non-participant observation, supplemented by interviews and surveys. We conclude that the different all-news programming models and network organizations influence the processes of production, the structure of information, the broadcasting language, and therefore the stations' profiles. © 2011 Copyright Taylor and Francis Group, LLC.

Relevance:

30.00%

Publisher:

Abstract:

Background: The sequencing and publication of the cattle genome and the identification of single nucleotide polymorphism (SNP) molecular markers have provided new tools for animal genetic evaluation and genomically enhanced selection. These new tools aim to increase the accuracy and scope of selection while decreasing the generation interval. The objective of this study was to evaluate the gain in accuracy obtained by using genomic information (Clarifide® - Pfizer) in the genetic evaluation of Brazilian Nellore cattle.

Review: The application of genome-wide association studies (GWAS) is recognized as one of the most practical approaches to modern genetic improvement. Genomic selection is perhaps most suited to the improvement of traits with low heritability in zebu cattle. The primary interest in livestock genomics has been to estimate the effects of all the markers on the chip, conduct cross-validation to determine accuracy, and apply the resulting information in GWAS either alone [9] or in combination with bull test and pedigree-based genetic evaluation data. The cost of SNP50K genotyping, however, limits the commercial application of GWAS based on all the SNPs on the chip. Nevertheless, reasonable predictability and accuracy can be achieved in GWAS by using an assay that contains an optimally selected predictive subset of markers, as opposed to all the SNPs on the chip. The best way to integrate genomic information into genetic improvement programs is to include it in traditional genetic evaluations. This approach combines traditional expected progeny differences, based on phenotype and pedigree, with genomic breeding values based on the markers. Including the different sources of information in a multiple-trait genetic evaluation model for within-breed dairy cattle selection has been producing excellent results. However, given the wide genetic diversity of zebu breeds, the high-density panel used for genomic selection in dairy cattle (Illumina BovineSNP50 array) appears insufficient for across-breed genomic predictions and selection in beef cattle. Today there is only one breed-specific targeted SNP panel, with genomic predictions developed using animals across the entire population of the Nellore breed (www.pfizersaudeanimal.com), which enables genomically enhanced selection. Genomic profiles are a way to enhance our current selection tools and achieve more accurate predictions for younger animals.

Material and Methods: We analyzed age at first calving (AFC), accumulated productivity (ACP), stayability (STAY) and heifer pregnancy at 30 months (HP30) in Nellore cattle, fitting two different animal models: 1) a traditional single-trait model, and 2) a two-trait model in which the genomic breeding value, or molecular value prediction (MVP), was included as a correlated trait. All mixed-model analyses were performed using the statistical software ASREML 3.0.

Results: Genetic correlation estimates between AFC, ACP, STAY, HP30 and their respective MVPs ranged from 0.29 to 0.46. The results also showed increases of 56%, 36%, 62% and 19% in the estimated accuracy of AFC, ACP, STAY and HP30, respectively, when MVP information was included in the animal model.

Conclusion: Depending on the trait, the integration of MVP information into the genetic evaluation increased accuracy by 19% to 62% compared with the traditional genetic evaluation. GE-EPD will be an effective tool to enable faster genetic improvement through more dependable selection of young animals.
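
As a toy, selection-index style illustration of why a correlated MVP raises accuracy (not the two-trait animal model fitted in ASREML above), the sketch below compares the accuracy of a linear predictor of the true breeding value using the phenotype alone versus the phenotype plus MVP; the heritability and MVP-accuracy values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate true breeding values u, a phenotype with heritability h2, and an
# MVP whose correlation with u is r_mvp; compare in-sample best-linear-predictor
# accuracies (correlation between the prediction and u).
n, h2, r_mvp = 20000, 0.15, 0.4
u = rng.normal(size=n)                                        # true breeding values
y = u + rng.normal(scale=np.sqrt(1 / h2 - 1), size=n)         # phenotype, heritability h2
m = r_mvp * u + np.sqrt(1 - r_mvp**2) * rng.normal(size=n)    # MVP with accuracy r_mvp

def accuracy(features):
    X = np.column_stack([np.ones(n)] + features)
    u_hat = X @ np.linalg.lstsq(X, u, rcond=None)[0]
    return np.corrcoef(u_hat, u)[0, 1]

print("accuracy, phenotype only:  ", round(accuracy([y]), 3))
print("accuracy, phenotype + MVP: ", round(accuracy([y, m]), 3))
```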

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)