971 results for Parameter-estimation


Relevance: 60.00%

Abstract:

In this paper, we propose a new two-parameter lifetime distribution with increasing failure rate, the complementary exponential geometric distribution, which is complementary to the exponential geometric model proposed by Adamidis and Loukas (1998). The new distribution arises in a latent complementary risks scenario, in which the lifetime associated with a particular risk is not observable; rather, we observe only the maximum lifetime value among all risks. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulas for its reliability and failure rate functions, its moments (including the mean and variance), coefficient of variation, and modal value. Parameter estimation is based on the usual maximum likelihood approach. We report the results of a misspecification simulation study performed in order to assess the extent of misspecification errors when testing the exponential geometric distribution against our complementary one under different sample sizes and censoring percentages. The methodology is illustrated on four real datasets; we also compare both modeling approaches. (C) 2011 Elsevier B.V. All rights reserved.
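The abstract pins down both the construction (the maximum of a geometric number of exponential lifetimes) and the estimation method (maximum likelihood), which admits a compact numerical sketch. The Python snippet below is illustrative only: the density is derived from that stated construction rather than quoted from the paper, and names such as `neg_loglik` are ours.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Simulate from the complementary-risks construction: M ~ Geometric(theta)
# latent exponential(lam) lifetimes, of which we observe only the maximum.
theta_true, lam_true, n = 0.3, 1.5, 500
M = rng.geometric(theta_true, size=n)
x = np.array([rng.exponential(1.0 / lam_true, size=m).max() for m in M])

def neg_loglik(params, x):
    """Negative log-likelihood of the CEG density
    f(x) = lam * theta * exp(-lam x) / (theta + (1 - theta) exp(-lam x))**2,
    obtained by differentiating F(x) = theta (1 - e^{-lam x}) /
    (theta + (1 - theta) e^{-lam x})."""
    lam, theta = params
    if lam <= 0 or not 0 < theta < 1:
        return np.inf
    u = np.exp(-lam * x)
    return -np.sum(np.log(lam * theta * u) - 2 * np.log(theta + (1 - theta) * u))

fit = minimize(neg_loglik, x0=[1.0, 0.5], args=(x,), method="Nelder-Mead")
print("MLE (lam, theta):", fit.x)
```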

Relevance: 60.00%

Abstract:

Conventional procedures employed in the modeling of the viscoelastic properties of polymers rely on the determination of the polymer's discrete relaxation spectrum from experimentally obtained data. In the past decades, several analytical regression techniques have been proposed to determine an explicit equation which describes the measured spectra. Taking a different approach, the procedure introduced herein constitutes a simulation-based computational optimization technique built on a non-deterministic search method arising from the field of evolutionary computation. Instead of comparing numerical results, the purpose of this paper is to highlight some subtle differences between both strategies and to focus on which properties of the exploited technique emerge as new possibilities for the field. To illustrate this, the essayed cases show how the employed technique can outperform conventional approaches in terms of fitting quality. Moreover, in some instances, it produces equivalent results with much fewer fitting parameters, which is convenient for computational simulation applications. The problem formulation and the rationale of the highlighted method are discussed herein and constitute the main intended contribution. (C) 2009 Wiley Periodicals, Inc. J Appl Polym Sci 113: 122-135, 2009
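As a hedged illustration of the simulation-based strategy, the sketch below fits a two-mode Prony series (a discrete relaxation spectrum) with SciPy's differential evolution, a standard evolutionary optimizer standing in for the paper's method, whose exact operators the abstract does not specify; the data and bounds are invented for the example.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Synthetic relaxation-modulus data from a two-mode Prony series
# G(t) = sum_i g_i * exp(-t / tau_i), plus noise.
rng = np.random.default_rng(0)
t = np.logspace(-2, 2, 50)
g_true, tau_true = [2.0, 0.5], [0.1, 10.0]
G_data = sum(g * np.exp(-t / tau) for g, tau in zip(g_true, tau_true))
G_data += rng.normal(0, 0.01, t.size)

def sse(params):
    """Sum of squared errors of a two-mode Prony-series fit."""
    g1, g2, tau1, tau2 = params
    G_fit = g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)
    return np.sum((G_fit - G_data) ** 2)

bounds = [(0, 5), (0, 5), (1e-3, 1e2), (1e-3, 1e3)]
result = differential_evolution(sse, bounds, seed=1)
print(result.x)  # recovered (g1, g2, tau1, tau2)
```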

Relevance: 60.00%

Abstract:

In this paper we present a novel approach for multispectral image contextual classification by combining iterative combinatorial optimization algorithms. The pixel-wise decision rule is defined using a Bayesian approach to combine two MRF models: a Gaussian Markov Random Field (GMRF) for the observations (likelihood) and a Potts model for the a priori knowledge, to regularize the solution in the presence of noisy data. Hence, the classification problem is stated according to a Maximum a Posteriori (MAP) framework. In order to approximate the MAP solution we apply several combinatorial optimization methods using multiple simultaneous initializations, making the solution less sensitive to the initial conditions and reducing both computational cost and time in comparison to Simulated Annealing, often unfeasible in many real image processing applications. Markov Random Field model parameters are estimated by the Maximum Pseudo-Likelihood (MPL) approach, avoiding manual adjustments in the choice of the regularization parameters. Asymptotic evaluations assess the accuracy of the proposed parameter estimation procedure. To test and evaluate the proposed classification method, we adopt metrics for quantitative performance assessment (Cohen's Kappa coefficient), allowing a robust and accurate statistical analysis. The obtained results clearly show that combining sub-optimal contextual algorithms significantly improves the classification performance, indicating the effectiveness of the proposed methodology. (C) 2010 Elsevier B.V. All rights reserved.
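For concreteness, here is a minimal sketch of one sub-optimal combinatorial optimizer of the kind being combined, iterated conditional modes (ICM), under a unit-variance Gaussian likelihood and a Potts prior. The paper combines several such algorithms with multiple simultaneous initializations and estimates the MRF parameters by MPL; this sketch fixes the parameters by hand instead.

```python
import numpy as np

def icm(obs, K, beta, n_iter=10):
    """Iterated Conditional Modes for MAP labeling with a Gaussian
    likelihood (unit variance, class means 0..K-1) and a Potts prior.
    obs: 2-D array of noisy pixel values; beta: Potts regularization."""
    labels = np.rint(obs).clip(0, K - 1).astype(int)  # ML initialization
    H, W = obs.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                best_k, best_e = labels[i, j], np.inf
                for k in range(K):
                    e = 0.5 * (obs[i, j] - k) ** 2  # likelihood term
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != k:
                            e += beta  # Potts penalty for disagreeing neighbors
                    if e < best_e:
                        best_k, best_e = k, e
                labels[i, j] = best_k
    return labels

# Usage: a 3-class noisy image made of vertical stripes.
rng = np.random.default_rng(0)
truth = np.repeat(np.arange(3), 20)[None, :] * np.ones((30, 1))
noisy = truth + rng.normal(0, 0.7, truth.shape)
print((icm(noisy, K=3, beta=1.0) == truth).mean())  # pixel accuracy
```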

Relevance: 60.00%

Abstract:

Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is interest in studying latent variables. These latent variables are directly considered in Item Response Models (IRM) and are usually called latent traits. A usual assumption for parameter estimation in IRM, considering one group of examinees, is that the latent traits are random variables following a standard normal distribution. However, many works suggest that this assumption does not hold in many cases. Furthermore, when this assumption does not hold, the parameter estimates tend to be biased and misleading inferences can be drawn. Therefore, it is important to model the distribution of the latent traits properly. In this paper we present an alternative model for the latent traits based on the so-called skew-normal distribution; see Genton (2004). We use the centred parameterization, which was proposed by Azzalini (1985). This approach ensures the identifiability of the model, as pointed out by Azevedo et al. (2009b). Also, a Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm was built for parameter estimation using an augmented data approach. A simulation study was performed in order to assess parameter recovery under the proposed model and estimation method, as well as the effect of the asymmetry level of the latent trait distribution on parameter estimation. A comparison of our approach with other estimation methods (which assume symmetric normality for the latent trait distribution) was also considered. The results indicated that our proposed algorithm properly recovers all parameters. Specifically, the greater the asymmetry level, the better the performance of our approach compared with the others, mainly in the presence of small sample sizes (numbers of examinees). Furthermore, we analyzed a real data set which presents indications of asymmetry in the latent trait distribution. The results obtained using our approach confirmed the presence of strong negative asymmetry in the latent trait distribution. (C) 2010 Elsevier B.V. All rights reserved.
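The centred parameterization can be made concrete with a short sketch. The mapping below inverts the standard skew-normal moment formulas (mean, standard deviation, skewness) to the direct parameters; it follows from textbook SN identities rather than from the paper's code, so treat it as an assumption-laden illustration.

```python
import numpy as np
from scipy.stats import skewnorm

def centred_to_direct(mu, sigma, gamma1):
    """Map centred skew-normal parameters (mean, sd, skewness) to the
    direct parameters (xi, omega, alpha). Standard inversion of the SN
    moment formulas; requires |gamma1| below about 0.9953."""
    b = np.sqrt(2 / np.pi)
    r = np.sign(gamma1) * (2 * abs(gamma1) / (4 - np.pi)) ** (1 / 3)
    m = r / np.sqrt(1 + r**2)          # m = delta * sqrt(2/pi)
    delta = m / b
    alpha = delta / np.sqrt(1 - delta**2)
    omega = sigma / np.sqrt(1 - m**2)
    xi = mu - omega * m
    return xi, omega, alpha

# Check: simulate with the direct parameters, recover the centred ones.
xi, omega, alpha = centred_to_direct(0.0, 1.0, -0.6)
x = skewnorm.rvs(alpha, loc=xi, scale=omega, size=200_000, random_state=1)
print(x.mean(), x.std(), ((x - x.mean())**3).mean() / x.std()**3)
```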

Relevance: 60.00%

Abstract:

Scale mixtures of the skew-normal (SMSN) distribution form a class of asymmetric thick-tailed distributions that includes the skew-normal (SN) distribution as a special case. The main advantage of this class of distributions is that its members are easy to simulate and have a convenient hierarchical representation, facilitating the implementation of the expectation-maximization algorithm for maximum-likelihood estimation. In this paper, we assume an SMSN distribution for the unobserved value of the covariates and a symmetric scale mixture of the normal distribution for the error term of the model. This provides a robust alternative for parameter estimation in multivariate measurement error models. Specific distributions examined include univariate and multivariate versions of the SN, skew-t, skew-slash, and skew-contaminated normal distributions. The results and methods are applied to a real data set.
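The "easy to simulate" claim follows from the hierarchical representation X = ξ + ωZ/√W, with Z standard skew-normal and W a positive mixing variable. A minimal sketch for the skew-t member of the class (our choice of example; the mixing law differs for the other members):

```python
import numpy as np

def rvs_skew_t(xi, omega, alpha, nu, size, rng):
    """Simulate skew-t variates through the SMSN hierarchy:
    X = xi + omega * Z / sqrt(W), with Z standard skew-normal(alpha)
    and W ~ Gamma(nu/2, rate=nu/2)."""
    delta = alpha / np.sqrt(1 + alpha**2)
    u0 = np.abs(rng.standard_normal(size))
    u1 = rng.standard_normal(size)
    z = delta * u0 + np.sqrt(1 - delta**2) * u1   # skew-normal(0, 1, alpha)
    w = rng.gamma(shape=nu / 2, scale=2 / nu, size=size)
    return xi + omega * z / np.sqrt(w)

rng = np.random.default_rng(7)
x = rvs_skew_t(xi=0.0, omega=1.0, alpha=3.0, nu=4, size=100_000, rng=rng)
print(x.mean(), x.std())  # heavy-tailed, right-skewed sample
```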

Relevance: 60.00%

Abstract:

We consider a Bayesian approach to the nonlinear regression model in which the normal distribution on the error term is replaced by skewed distributions that account for both skewness and heavy tails, or skewness alone. The type of data considered in this paper concerns repeated measurements taken in time on a set of individuals. Such multiple observations on the same individual generally produce serially correlated outcomes; thus, our model additionally allows for correlation between observations made on the same individual. We illustrate the procedure using a data set on the growth curves of a clinical measurement for a group of pregnant women from an obstetrics clinic in Santiago, Chile. Parameter estimation and prediction were carried out using appropriate posterior simulation schemes based on Markov chain Monte Carlo methods. Besides the deviance information criterion (DIC) and the conditional predictive ordinate (CPO), we suggest the use of proper scoring rules based on the posterior predictive distribution for comparing models. For our data set, all these criteria chose the skew-t model as the best model for the errors. The DIC and CPO criteria are also validated, for the model proposed here, through a simulation study. A conclusion of this study is that the DIC criterion is not trustworthy for this kind of complex model.
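The CPO criterion has a convenient Monte Carlo form, CPO_i = (M^{-1} Σ_m 1/f(y_i | θ^{(m)}))^{-1}, computable directly from posterior draws. A minimal sketch, using a normal likelihood as a stand-in for the skewed error densities of the paper and fabricated draws in place of a real MCMC run:

```python
import numpy as np
from scipy.stats import norm

def cpo(y, mu_draws, sigma_draws):
    """Conditional predictive ordinate from M posterior draws via the
    harmonic-mean identity; f is illustrated with a normal likelihood."""
    # dens[m, i] = f(y_i | theta^(m))
    dens = norm.pdf(y[None, :], loc=mu_draws[:, None],
                    scale=sigma_draws[:, None])
    return 1.0 / np.mean(1.0 / dens, axis=0)

rng = np.random.default_rng(3)
y = rng.normal(1.0, 2.0, size=50)
mu_draws = rng.normal(1.0, 0.1, size=2000)        # stand-in posterior draws
sigma_draws = np.abs(rng.normal(2.0, 0.1, 2000))
lpml = np.log(cpo(y, mu_draws, sigma_draws)).sum()
print(lpml)  # log pseudo-marginal likelihood; higher is better
```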

Relevance: 60.00%

Abstract:

We introduce in this paper the class of linear models with first-order autoregressive elliptical errors. The score functions and the Fisher information matrices are derived for the parameters of interest, and an iterative process is proposed for the parameter estimation. Some robustness aspects of the maximum likelihood estimates are discussed. The normal curvatures of local influence are also derived for some usual perturbation schemes, and diagnostic graphics to assess the sensitivity of the maximum likelihood estimates are proposed. The methodology is applied to analyse the daily log excess returns on Microsoft stock, whose empirical distribution appears to exhibit AR(1) dependence and heavy-tailed errors. (C) 2008 Elsevier B.V. All rights reserved.
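As a numerical companion, the sketch below maximizes the conditional likelihood of a location model with AR(1) Student-t errors (the Student-t being one member of the elliptical family); the paper instead derives score functions and Fisher information for an iterative scheme, which we do not reproduce.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

rng = np.random.default_rng(5)
n, phi_true, nu_true = 600, 0.4, 5.0
eps = student_t.rvs(nu_true, size=n, random_state=rng)
u = np.zeros(n)
for i in range(1, n):             # AR(1) heavy-tailed errors
    u[i] = phi_true * u[i - 1] + eps[i]
y = 0.1 + u                       # constant mean plus AR(1)-t noise

def neg_loglik(params):
    """Conditional likelihood of y_t = beta + u_t, u_t = phi*u_{t-1} + eps_t,
    eps_t ~ t_nu (a Student-t stand-in for the elliptical family)."""
    beta, phi, log_nu = params
    res = (y - beta)[1:] - phi * (y - beta)[:-1]   # innovations
    return -np.sum(student_t.logpdf(res, df=np.exp(log_nu)))

fit = minimize(neg_loglik, x0=[0.0, 0.0, np.log(8.0)], method="Nelder-Mead")
print(fit.x[:2], np.exp(fit.x[2]))  # beta, phi, nu
```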

Relevance: 60.00%

Abstract:

The main objective of this paper is to study a logarithm extension of the bimodal skew-normal model introduced by Elal-Olivero et al. [1]. The model can then be seen as an alternative to the log-normal model typically used for fitting positive data. We study some basic properties, such as the distribution function and moments, and discuss maximum likelihood for parameter estimation. We report the results of an application to a real data set on nickel concentration in soil samples. Model fitting comparisons with several alternative models indicate that the proposed model presents the best fit, so it can be quite useful in real applications involving chemical concentration data. Copyright (C) 2011 John Wiley & Sons, Ltd.
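The "logarithm extension" rests on a change of variables: if Y has density g on the real line and X = e^Y, then f_X(x) = g(log x)/x for x > 0. A minimal sketch, using the plain skew-normal as a stand-in base density since the bimodal skew-normal of Elal-Olivero et al. [1] is not reproduced in the abstract:

```python
import numpy as np
from scipy.stats import skewnorm

def log_version_pdf(x, base_logpdf):
    """Change of variables: if Y has density g on the real line and
    X = exp(Y), then f_X(x) = g(log x) / x for x > 0."""
    x = np.asarray(x, dtype=float)
    safe = np.where(x > 0, x, 1.0)          # guard against log of x <= 0
    return np.where(x > 0, np.exp(base_logpdf(np.log(safe))) / safe, 0.0)

# Illustration with a plain skew-normal base log-density, standing in
# for the bimodal skew-normal of the paper:
x = np.linspace(0.1, 10.0, 5)
print(log_version_pdf(x, lambda y: skewnorm.logpdf(y, a=2.0)))
```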

Relevance: 60.00%

Abstract:

For the last two decades, researchers have been working on developing systems that can assist drivers in the best way possible and make driving safe. Computer vision has played a crucial part in the design of these systems. With the introduction of vision techniques, various autonomous and robust real-time traffic automation systems have been designed, such as traffic monitoring, traffic-related parameter estimation, and intelligent vehicles. Among these, the automatic detection and recognition of road signs has become an interesting research topic. Such a system can inform drivers about signs they do not recognize before passing them. The aim of this research project is to present an intelligent road sign recognition system based on a state-of-the-art technique, the Support Vector Machine. The project is an extension of the work done at the ITS research platform at Dalarna University [25]. The focus of this research work is on the recognition of road signs under analysis. When classifying an image, its location, size, and orientation in the image plane are irrelevant features, and one way to remove this ambiguity is to extract features which are invariant under the above-mentioned transformations. These invariant features are then used in a Support Vector Machine for classification. The Support Vector Machine is a supervised learning machine that solves problems in higher dimensions with the help of kernel functions and is best known for classification problems.
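One standard choice of features invariant to location, size, and in-plane orientation is the set of seven Hu moments; the thesis's exact features are not specified in the abstract, so the sketch below should be read as an assumption. The `images`/`labels` training set is hypothetical.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def invariant_features(gray_image):
    """Hu moments: seven features invariant to translation, scale, and
    in-plane rotation, matching the invariance requirements above."""
    hu = cv2.HuMoments(cv2.moments(gray_image)).flatten()
    # Log-scaling compresses the moments' huge dynamic range.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def train_sign_classifier(images, labels):
    """images: list of grayscale sign crops (uint8 arrays);
    labels: corresponding sign classes (hypothetical data)."""
    X = np.array([invariant_features(img) for img in images])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # kernelized SVM
    clf.fit(X, labels)
    return clf
```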

Relevance: 60.00%

Abstract:

This thesis contributes to the heuristic optimization of the p-median problem and to the study of Swedish population redistribution.

The p-median model is the most representative model in location analysis. When facilities are located to serve a population geographically distributed over Q demand points, the p-median model systematically considers all the demand points, so that each demand point has an effect on the location decision. However, a series of questions arise. How do we measure the distances? Does the number of facilities to be located have a strong impact on the result? What scale of the network is suitable? How good is our solution? We have scrutinized many issues of this kind. The reason we are interested in these questions is that there is considerable uncertainty in the solutions: we cannot guarantee that a solution is good enough for decision making. The technique of heuristic optimization is formulated in the thesis.

Swedish population redistribution is examined by a spatio-temporal covariance model. A descriptive analysis is not always enough to describe the moving effects from the neighbouring population; a correlation or covariance analysis shows the tendencies more explicitly. Similarly, an optimization technique for the parameter estimation is required and is executed within the framework of statistical modeling.
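As an illustration of heuristic optimization for the p-median model, here is a sketch of the classical Teitz-Bart vertex-substitution heuristic (our choice of example; the thesis's specific heuristics are not named in the abstract):

```python
import numpy as np

def teitz_bart(dist, p, rng):
    """Vertex-substitution heuristic for the p-median problem.
    dist[i, j] = distance from demand point i to candidate facility j;
    returns p facility indices (a local optimum, not guaranteed global)."""
    n = dist.shape[1]
    chosen = list(rng.choice(n, size=p, replace=False))
    cost = dist[:, chosen].min(axis=1).sum()
    improved = True
    while improved:
        improved = False
        for out in list(chosen):
            for cand in range(n):
                if cand in chosen:
                    continue
                trial = [cand if c == out else c for c in chosen]
                trial_cost = dist[:, trial].min(axis=1).sum()
                if trial_cost < cost:            # accept improving swap
                    chosen, cost, improved = trial, trial_cost, True
    return chosen, cost

rng = np.random.default_rng(11)
pts = rng.uniform(0, 100, size=(60, 2))          # demand points = candidates
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
print(teitz_bart(dist, p=4, rng=rng))
```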

Relevance: 60.00%

Abstract:

This paper proposes a spatial-temporal downscaling approach to the construction of intensity-duration-frequency (IDF) relations at a local site in the context of climate change and variability. More specifically, the proposed approach combines a spatial downscaling method, which links large-scale climate variables given by General Circulation Model (GCM) simulations with daily extreme precipitation at a site, with a temporal downscaling procedure that describes the relationships between daily and sub-daily extreme precipitation based on the scaling Generalized Extreme Value (GEV) distribution. The feasibility and accuracy of the suggested method were assessed using rainfall data available at eight stations in Quebec (Canada) for the 1961-2000 period and climate simulations under four different climate change scenarios provided by the Canadian (CGCM3) and UK (HadCM3) GCM models. The results of this application indicate that it is feasible to link sub-daily extreme rainfalls at a local site with large-scale GCM-based daily climate predictors in order to construct IDF relations for the present (1961-1990) and future (2020s, 2050s, and 2080s) periods at a given site under different climate change scenarios. In addition, it was found that annual maximum rainfalls downscaled from HadCM3 displayed smaller changes in the future, while values estimated from CGCM3 indicated a large increasing trend for future periods. This result demonstrates the presence of high uncertainty in climate simulations provided by different GCMs. In summary, the proposed spatial-temporal downscaling method provides an essential tool for the estimation of the extreme rainfalls required for various climate-related impact assessment studies for a given region.
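A hedged sketch of the temporal-downscaling idea: fit a GEV to daily annual maxima, then carry quantiles to sub-daily durations through a simple-scaling relation q_d = q_D (d/D)^η. The data and the scaling exponent below are invented for illustration; the paper estimates these from the Quebec stations.

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical series of annual-maximum daily rainfall depths (mm).
rng = np.random.default_rng(2024)
daily_ams = genextreme.rvs(-0.1, loc=40, scale=12, size=40, random_state=rng)

# Fit the GEV distribution to the daily annual maxima.
shape, loc, scale = genextreme.fit(daily_ams)

# Temporal downscaling by scale invariance: rainfall-depth quantiles at
# duration d follow q_d = q_D * (d / D)**eta; eta is illustrative here.
eta = 0.5
T = np.array([2.0, 10.0, 50.0, 100.0])           # return periods (years)
q_daily = genextreme.ppf(1 - 1 / T, shape, loc, scale)
q_1h = q_daily * (1.0 / 24.0) ** eta             # 1-hour depths
print(np.c_[T, q_daily, q_1h])
```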

Relevance: 60.00%

Abstract:

In natural scenes, spectrally very similar classes occur rather frequently, i.e., the mean vectors are very close. In situations like this, low-dimensional data (LandSat-TM, Spot) do not allow an accurate classification of the scene. On the other hand, it is known that high-dimensional data [FUK 90] make the separation of these classes possible, provided the covariance matrices are sufficiently distinct. In this case, the practical problem that arises is the estimation of the parameters that characterize the distribution of each class. As the dimensionality of the data grows, the number of parameters to be estimated increases, especially in the covariance matrix. However, it is known that, in the real world, the amount of available training samples is frequently very limited, causing problems in the estimation of the parameters required by the classifier and therefore degrading the accuracy of the classification process as the dimensionality of the data increases. The Hughes effect, as this phenomenon is called, is already well known in the scientific community, and studies have been carried out with the aim of mitigating it. Among the alternatives proposed to mitigate the Hughes effect are covariance matrix regularization techniques. Regularization techniques for the estimation of the class covariance matrices therefore become an interesting topic of study, as does the behavior of these techniques on high-dimensional digital image data in remote sensing, such as the data provided by the AVIRIS sensor. In this study, remote sensing is contextualized, the AVIRIS sensor system is described, the principles of linear (LDA), quadratic (QDA), and regularized (RDA) discriminant analysis are presented, and practical experiments with these methods using real sensor data are reported. The results show that, with a limited number of training samples, the covariance matrix regularization techniques were effective in reducing the Hughes effect. Regarding accuracy, in some cases the quadratic model remains the best despite the Hughes effect, and in other cases the regularization method is superior, besides attenuating this effect. This dissertation is organized as follows: the first chapter introduces remote sensing (electromagnetic radiation, the electromagnetic spectrum, spectral bands, spectral signatures) and also describes the concepts and operation of the AVIRIS hyperspectral sensor and the basics of pattern recognition and the statistical approach. The second chapter reviews the literature on the problems associated with data dimensionality, describes the parametric techniques mentioned above and the QDA, LDA, and RDA methods, and reports tests carried out with other types of data and their results. The third chapter covers the methodology applied to the available hyperspectral data. The fourth chapter presents the tests and experiments with Regularized Discriminant Analysis (RDA) on hyperspectral images obtained by the AVIRIS sensor. The fifth chapter presents the conclusions and final analysis.
The scientific contribution of this study lies in applying the covariance matrix regularization methods originally proposed by Friedman [FRI 89] for the classification of high-dimensional data (synthetic data, oenology data) to the specific case of high-dimensional remote sensing data (hyperspectral images). The main conclusion of this dissertation is that the RDA method is useful in the classification of high-dimensional image data with classes whose spectral characteristics are very close.
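Friedman's RDA estimate regularizes each class covariance in two steps: shrinkage toward the pooled covariance (controlled by λ) and then toward a multiple of the identity (controlled by γ). A minimal sketch with synthetic data; the pooled covariance is a stand-in:

```python
import numpy as np

def rda_covariance(S_k, n_k, S_pool, n, lam, gamma):
    """Friedman's regularized covariance estimate [FRI 89]: shrink the
    class covariance toward the pooled covariance (lam), then toward a
    multiple of the identity (gamma)."""
    p = S_k.shape[0]
    S_lam = ((1 - lam) * n_k * S_k + lam * n * S_pool) / \
            ((1 - lam) * n_k + lam * n)
    return (1 - gamma) * S_lam + gamma * (np.trace(S_lam) / p) * np.eye(p)

# Illustration: few samples (n_k=15) in high dimension (p=50) make the
# raw class covariance singular; RDA yields a well-conditioned estimate.
rng = np.random.default_rng(9)
p, n_k, n = 50, 15, 300
X_k = rng.normal(size=(n_k, p))
S_k = np.cov(X_k, rowvar=False)
S_pool = np.eye(p)                       # stand-in pooled covariance
S_reg = rda_covariance(S_k, n_k, S_pool, n, lam=0.5, gamma=0.3)
print(np.linalg.cond(S_k), np.linalg.cond(S_reg))
```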

Relevance: 60.00%

Abstract:

The traditional representation of the term structure of interest rates by three latent factors (level, slope, and curvature) was originally formulated by Charles R. Nelson and Andrew F. Siegel in 1987. Since then, several applications based on this class of models have been developed by academics and market practitioners, above all with the intention of anticipating movements in yield curves. At the same time, recent studies such as Diebold, Piazzesi and Rudebusch (2010), Diebold, Rudebusch and Aruoba (2006), Pooter, Ravazzolo and van Dijk (2010), and Li, Niu and Zeng (2012) suggest that incorporating macroeconomic information into term-structure models can provide greater predictive power. In this work, the dynamic version of the Nelson-Siegel model, as proposed by Diebold and Li (2006), was compared with an analogous model in which exogenous macroeconomic variables are included. In parallel, two different parameter estimation methods were tested: the traditional two-step approach (Two-Step DNS), and estimation with the Extended Kalman Filter, which allows the parameters to be estimated recursively each time new information is added to the system. Regarding the models tested, the results are rather inconclusive, pointing to only a marginal improvement in the in-sample and out-of-sample estimates when the exogenous variables are included. The use of the Extended Kalman Filter, on the other hand, showed more consistent results when compared with the two-step method for practically all the time horizons studied.
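The first step of the two-step DNS estimation can be sketched compactly: with the decay parameter λ fixed (0.0609 per month, as in Diebold and Li), each date's yield curve is regressed by OLS on the fixed Nelson-Siegel loadings. The panel below is fabricated for illustration; the second step (time-series models for the factors, or the Extended Kalman Filter variant) is not shown.

```python
import numpy as np

def ns_loadings(taus, lam=0.0609):
    """Nelson-Siegel factor loadings at maturities `taus` (months):
    level, slope, and curvature columns; lam fixed as in Diebold-Li."""
    x = lam * taus
    slope = (1 - np.exp(-x)) / x
    curv = slope - np.exp(-x)
    return np.column_stack([np.ones_like(taus), slope, curv])

def two_step_factors(yields, taus, lam=0.0609):
    """Step 1 of the two-step DNS: cross-sectional OLS of each date's
    yield curve on the fixed loadings, giving beta_t = (level, slope,
    curvature) per date."""
    L = ns_loadings(taus, lam)
    betas, *_ = np.linalg.lstsq(L, yields.T, rcond=None)
    return betas.T        # one row of factors per date

# Hypothetical panel: 100 dates x 8 maturities (in months).
taus = np.array([3, 6, 12, 24, 36, 60, 84, 120], dtype=float)
rng = np.random.default_rng(1)
true_betas = np.array([5.0, -1.5, 1.0]) + 0.2 * rng.standard_normal((100, 3))
yields = true_betas @ ns_loadings(taus).T + 0.02 * rng.standard_normal((100, 8))
print(two_step_factors(yields, taus)[:3])
```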

Relevance: 60.00%

Abstract:

When estimating policy parameters, also known as treatment effects, the assignment-to-treatment mechanism almost always causes endogeneity and thus biases many of these policy parameter estimates. Additionally, heterogeneity in program impacts is more likely to be the norm than the exception for most social programs. In situations where these issues are present, the Marginal Treatment Effect (MTE) parameter estimation makes use of an instrument to avoid assignment bias and simultaneously to account for heterogeneous effects across individuals. Although this parameter is point identified in the literature, the assumptions required for identification may be strong. Given that, we use weaker assumptions in order to partially identify the MTE, i.e., to establish a methodology for MTE bounds estimation, implementing it computationally and showing results from Monte Carlo simulations. The partial identification we perform requires the MTE to be a monotone function of the propensity score, which is a reasonable assumption in several economic examples, and the simulation results show it is possible to obtain informative bounds even in restricted cases where point identification is lost. Additionally, in situations where the estimated bounds are not informative and traditional point identification is lost, we suggest a more generic method to point estimate the MTE using the Moore-Penrose pseudo-inverse matrix, achieving better results than traditional methods.
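Under the local-IV framework the MTE is the derivative of E[Y | P = p] with respect to the propensity score. A minimal sketch of the pseudo-inverse-based point estimation idea, with a polynomial basis and simulated data standing in for the paper's setup:

```python
import numpy as np

# Fit E[Y | P = p] with a polynomial basis, solving the least-squares
# system via the Moore-Penrose pseudo-inverse, then differentiate the
# fitted polynomial to obtain MTE(u) = d E[Y | P = p] / dp at p = u.
rng = np.random.default_rng(8)
p = rng.uniform(0.05, 0.95, size=5000)           # propensity scores
y = 1.0 + 2.0 * p - 1.5 * p**2 + rng.normal(0, 0.3, p.size)

deg = 3
B = np.vander(p, deg + 1, increasing=True)        # [1, p, p^2, p^3]
coef = np.linalg.pinv(B) @ y                      # pseudo-inverse solve

u = np.linspace(0.1, 0.9, 5)
dB = np.vander(u, deg, increasing=True) * np.arange(1, deg + 1)
print(dB @ coef[1:])                              # approx. 2 - 3u
```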

Relevance: 60.00%

Abstract:

The oscillations present in control loops can cause damage in the petrochemical industry. Canceling, or even preventing, such oscillations could save large amounts of money. Studies have identified that one of the causes of these oscillations is the nonlinearity present in industrial process actuators. The objective of this study is to develop a methodology for removing the harmful effects of these nonlinearities. A parameter estimation method is proposed for the Hammerstein model, whose nonlinearity is represented by a dead zone or backlash. The estimated parameters are used to construct inverse compensation models. A simulated level system, in which the valve controlling the inflow has a nonlinearity, was used as a test platform. The results and a describing-function analysis show an improvement in the system response.
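A minimal sketch of dead-zone inverse compensation (parameter values invented; the paper estimates them from data via the Hammerstein model):

```python
import numpy as np

def dead_zone(u, m, br, bl):
    """Dead-zone nonlinearity with slope m and right/left break points
    br > 0 > bl: output is zero inside the band [bl, br]."""
    return np.where(u > br, m * (u - br),
           np.where(u < bl, m * (u - bl), 0.0))

def dead_zone_inverse(v, m, br, bl):
    """Inverse compensator: pre-distorts the controller output v so that
    dead_zone(dead_zone_inverse(v)) ~= v, using the estimated parameters
    (m, br, bl) in place of the true ones."""
    return np.where(v > 0, v / m + br,
           np.where(v < 0, v / m + bl, 0.0))

# Usage: the compensated actuator behaves (ideally) as the identity.
v = np.linspace(-2, 2, 9)
print(dead_zone(dead_zone_inverse(v, 1.5, 0.4, -0.3), 1.5, 0.4, -0.3))
```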