947 results for Estimation methods
Abstract:
This work proposes a new technique for phasor estimation in microprocessor-based numerical relays for distance protection of transmission lines, based on the recursive least squares method and called modified random walking least squares. Phasor estimation methods have their performance compromised mainly by the exponentially decaying DC component present in fault currents. In order to reduce the influence of the DC component, a morphological filter (MF) was combined with the least squares method and applied prior to the phasor estimation process. The presented method is implemented in MATLAB® and its performance is compared with the one-cycle Fourier technique and with conventional phasor estimation methods also based on the least squares algorithm. The least squares based methods used for comparison with the proposed method were: recursive with forgetting factor, covariance resetting, and random walking. The performance analysis of the techniques was carried out using synthetic signals and signals obtained from simulations in the Alternative Transients Program (ATP). When compared to the other phasor estimation methods, the proposed method showed satisfactory results in terms of estimation speed, steady-state oscillation, and overshoot. The performance of the presented method was then analyzed under variations of the fault parameters (resistance, distance, inception angle, and fault type); the results did not show significant variations in performance. In addition, the apparent impedance trajectory and the estimated fault distance were analyzed, and the presented method showed better results than the one-cycle Fourier algorithm.
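As a rough illustration of the least squares phasor estimation idea discussed in this abstract (not the authors' exact algorithm), the sketch below fits the in-phase and quadrature components of a fundamental-frequency phasor to a windowed current signal by ordinary least squares; the sampling rate, window length, and signal parameters are illustrative assumptions.

```python
import numpy as np

# Illustrative assumptions: 60 Hz system, 16 samples per cycle,
# a fault current with an exponentially decaying DC offset.
f0, fs = 60.0, 60.0 * 16          # fundamental and sampling frequencies
n = np.arange(2 * 16)             # two cycles of samples
t = n / fs
true_mag, true_ang = 10.0, np.deg2rad(30.0)
signal = true_mag * np.cos(2 * np.pi * f0 * t + true_ang) \
         + 5.0 * np.exp(-t / 0.02)          # decaying DC component

def ls_phasor(window, t_window, f0):
    """Least squares fit of cosine, sine and a constant term to one data window."""
    H = np.column_stack([np.cos(2 * np.pi * f0 * t_window),
                         -np.sin(2 * np.pi * f0 * t_window),
                         np.ones_like(t_window)])          # crude constant DC term
    coef, *_ = np.linalg.lstsq(H, window, rcond=None)
    re, im = coef[0], coef[1]
    return np.hypot(re, im), np.degrees(np.arctan2(im, re))

mag, ang = ls_phasor(signal[:16], t[:16], f0)
print(f"estimated magnitude {mag:.2f}, angle {ang:.1f} deg")
```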
Abstract:
In survival analysis, long-term (cure rate) models allow for the estimation of the cure fraction, which represents the portion of the population immune to the event of interest. Here we address classical and Bayesian estimation based on mixture models and promotion time models, using different distributions (exponential, Weibull, and Pareto) to model the failure time. The database used to illustrate the implementations is described in Kersey et al. (1987) and consists of a group of leukemia patients who underwent a certain type of transplant. The specific implementations used were numerical optimization by BFGS as implemented in R (base::optim), Laplace approximation (own implementation), and Gibbs sampling as implemented in WinBUGS. We describe the main features of the models used, the estimation methods, and the computational aspects. We also discuss how different prior information can affect the Bayesian estimates.
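To make the classical estimation route concrete, here is a minimal sketch (in Python rather than R, and on synthetic data instead of the Kersey et al. data) of maximizing the log-likelihood of a standard mixture cure model with exponential failure times by BFGS; the parameterization and data are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic right-censored data: cured subjects never fail, the others fail at Exp(rate) times.
n, true_pi, true_rate = 300, 0.4, 0.5
cured = rng.random(n) < true_pi
latent = rng.exponential(1.0 / true_rate, n)
censor = rng.exponential(4.0, n)
time = np.where(cured, censor, np.minimum(latent, censor))
event = (~cured) & (latent <= censor)          # True = observed failure

def neg_loglik(params, t, d):
    """Mixture cure model: S_pop(t) = pi + (1 - pi) * exp(-rate * t)."""
    pi = 1.0 / (1.0 + np.exp(-params[0]))      # cure fraction on the logit scale
    rate = np.exp(params[1])                    # failure rate on the log scale
    f = (1 - pi) * rate * np.exp(-rate * t)     # density contribution of susceptibles
    S = pi + (1 - pi) * np.exp(-rate * t)       # population survival function
    return -np.sum(d * np.log(f) + (1 - d) * np.log(S))

fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(time, event), method="BFGS")
pi_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
print(f"estimated cure fraction {pi_hat:.2f}, rate {np.exp(fit.x[1]):.2f}")
```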
Abstract:
The clay-fraction minerals goethite and hematite are iron (Fe) oxides that act as pedoenvironmental indicators and strongly influence the physical and chemical attributes of the soil. Knowledge of the spatial patterns of these oxides helps in understanding their cause-and-effect interrelations with soil attributes. Accordingly, the quality of the spatial estimates produced can change the results obtained and, consequently, the interpretation of the spatial patterns. The present study aimed to evaluate the performance of the geostatistical methods of ordinary kriging (OK) estimation and sequential Gaussian simulation (SGS) in the spatial characterization of the contents of the Fe oxides goethite (Gt) and hematite (Hm) in a concave and a convex landform. In each landform, 121 soil samples of an Argissolo were collected at points regularly spaced 10 m apart. The Fe oxide contents were obtained by X-ray diffraction. The data were submitted to geostatistical analysis through variogram modeling and subsequent interpolation by OK and SGS. OK did not reflect the true variability of the Fe oxides hematite and goethite, proving inappropriate for the spatial characterization of Fe oxide contents. Thus, the use of SGS is preferable to kriging when the high and low values must be preserved in the spatial estimates. The performance of the geostatistical methods was influenced by the landforms. E-type maps should be recommended instead of OK maps for the Fe oxides, since they are richer in detail and more practical than OK for defining homogeneous zones for site-specific management, especially in the concave landform.
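For readers unfamiliar with the two interpolators compared above, the following is a minimal numpy sketch of ordinary kriging at a single target location under an assumed exponential variogram; the sample coordinates, values, and variogram parameters are made-up illustrations, not the study's data. (Sequential Gaussian simulation would, in addition, draw a random residual from the kriging distribution and proceed point by point along a random path.)

```python
import numpy as np

def exp_variogram(h, nugget=0.0, sill=1.0, rng_param=30.0):
    """Exponential variogram model gamma(h) with illustrative parameters."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng_param))

def ordinary_kriging(coords, values, target):
    """Ordinary kriging estimate at one target point."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d)
    A[n, n] = 0.0                                   # Lagrange multiplier entry
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(A, b)[:n]                   # kriging weights
    return float(w @ values)

# Hypothetical goethite contents (g/kg) at four sample points on a small grid.
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
values = np.array([2.1, 2.6, 1.8, 2.4])
print(ordinary_kriging(coords, values, np.array([5.0, 5.0])))
```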
Abstract:
The iterative quadratic maximum likelihood (IQML) method and the method of direction estimation (MODE) are well-known high-resolution direction-of-arrival (DOA) estimation methods. Their solutions lead to a constrained optimization problem. The usual linear constraint presents poor performance for certain DOA values. This work proposes a new linear constraint applicable to both DOA methods and compares its performance with two others: the unit-norm constraint and the usual linear constraint. It is shown that the proposed alternative performs better than the other constraints. The resulting computational complexity is also investigated.
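As background for the constraint discussion, here is a small sketch of the root-to-angle mapping that IQML/MODE-type methods rely on: the DOAs are encoded in the roots of a polynomial with coefficient vector b, on which constraints such as a unit first coefficient (the usual linear constraint) or unit norm are imposed. The half-wavelength ULA geometry and the example coefficients are assumptions for illustration, not the proposed constraint.

```python
import numpy as np

def doas_from_polynomial(b):
    """Map roots of b(z) to angles for a half-wavelength-spaced uniform linear array.

    Roots on the unit circle satisfy z_k = exp(j * pi * sin(theta_k)).
    """
    roots = np.roots(b)
    roots = roots / np.abs(roots)                 # project onto the unit circle
    return np.degrees(np.arcsin(np.angle(roots) / np.pi))

# Build b for two hypothetical sources at -10 and 25 degrees, then recover them.
angles = np.deg2rad([-10.0, 25.0])
zeros = np.exp(1j * np.pi * np.sin(angles))
b = np.poly(zeros)                                # polynomial coefficients, b[0] = 1
print(np.sort(doas_from_polynomial(b)))           # approximately [-10., 25.]

# The two classical normalizations discussed in the abstract:
b_linear = b / b[0]                               # usual linear constraint (b_0 = 1)
b_unit = b / np.linalg.norm(b)                    # unit-norm constraint
```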
Abstract:
Includes bibliography
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Pós-graduação em Matemática Aplicada e Computacional - FCT
Abstract:
By means of a meta-analysis, this article sets out to estimate average values for the income and price elasticities of gasoline demand and to analyse the reasons for the variation in the elasticities reported in the literature. The findings show that there is publication bias, that the volatility of elasticity estimates is not due to sampling errors alone, and that there are systematic factors explaining these differences. The income and price elasticities of gasoline demand differ between the short and long run and by region, and they also depend on whether the estimation includes the vehicle fleet and the prices of substitute goods, as well as on the data types and the estimation methods used. The low price elasticity found suggests that a fuel tax will be inadequate to control rising consumption in a context of rapid economic growth.
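As a reminder of what an individual elasticity estimate entering such a meta-analysis looks like, here is a minimal sketch of estimating the price elasticity of demand as the slope of a log-log regression; the synthetic data and the simple OLS specification (ignoring income, the vehicle fleet, and dynamics) are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic gasoline demand data with an assumed true price elasticity of -0.3.
price = rng.uniform(2.0, 6.0, 200)
quantity = np.exp(5.0 - 0.3 * np.log(price) + rng.normal(0.0, 0.05, 200))

# OLS on the log-log specification: log q = a + e * log p, where e is the elasticity.
X = np.column_stack([np.ones_like(price), np.log(price)])
coef, *_ = np.linalg.lstsq(X, np.log(quantity), rcond=None)
print(f"estimated price elasticity: {coef[1]:.2f}")   # close to -0.3
```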
Abstract:
Pós-graduação em Agronomia (Ciência do Solo) - FCAV
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The evaluation of soil permeability throughout the weathering profile is one of the most important features to be considered in environmental studies. This study, developed from field testing and from analysis of the data by geostatistical methods, aims at mapping the permeability around the Ribeirão Claro river, with the intent of simulating an accident with toxic liquids, a situation in which soil permeability is of fundamental importance. Another purpose of the research was to determine the minimum time that, in the event of an accident, a potential contaminant would take to reach the water table and be carried to the nearest drainage, in this case the Ribeirão Claro river, which constitutes fundamental information. The studied area of approximately 4 km² is located within the UNESP-Rio Claro campus and consists of colluvial soil of the Rio Claro Formation superimposed on residual soil of the Corumbataí Formation. The method used to determine permeability was the concentric-cylinder test, performed on a sampling grid of 64 points spaced 5 m E-W and 10 m N-S. Samples were collected at the permeability test locations for laboratory determination of the percentage of fines, and particle size, statistical, and geostatistical analyses were performed on these data. Histograms supported the statistical study, while semivariograms supported the geostatistical estimation methods. Based on the comparison between the maps and the data obtained, it was determined that the percentage of fines in the surficial colluvial soil has little influence on permeability, whereas proximity to the Ribeirão Claro river, in the eastern portion, is a factor that influences the distribution of permeability values.
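The following sketch illustrates the empirical semivariogram computation that underlies the geostatistical step described above; the random sample locations, values, and lag bins are illustrative assumptions, not the field data.

```python
import numpy as np

def empirical_semivariogram(coords, values, bins):
    """Classical (Matheron) estimator: gamma(h) = half the mean squared difference per lag bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sqdiff = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)        # count each pair of points once
    d, sqdiff = d[iu], sqdiff[iu]
    centers, gammas = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (d >= lo) & (d < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            gammas.append(0.5 * sqdiff[mask].mean())
    return np.array(centers), np.array(gammas)

# Hypothetical log-permeability values at 64 random locations in a 60 m x 80 m area.
rng = np.random.default_rng(2)
coords = rng.uniform([0, 0], [60, 80], size=(64, 2))
values = rng.normal(size=64)
h, gamma = empirical_semivariogram(coords, values, bins=np.arange(0, 60, 10))
print(np.round(h, 1), np.round(gamma, 2))
```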
Abstract:
The aim of this work is to study some of the density estimation techniques and to apply them to the segmentation of medical images. Medical images are used to help in the diagnosis of tumor diseases as well as to plan and deliver treatment. A computer image is an array of values representing colors on some scale; the smallest element of the image to which a value can be assigned is called a pixel. Segmentation is the process of dividing the image into portions through the classification of each pixel. The simplest form of classification is by thresholding, given the number of portions and the threshold values. Another method is to construct a histogram of the pixel values and assign a portion to each peak, with the threshold taken as the mean between two peaks. Since the histogram does not form a smooth curve, it is difficult to distinguish true peaks from random variation. Density estimation methods allow the estimation of a smooth curve, and the image data can be considered a mixture of different densities. In this project, parametric and nonparametric methods for density estimation are addressed and some of them are applied to CT image data.
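A minimal sketch of the idea described above, finding a threshold between peaks of a smoothed pixel-intensity density, using a Gaussian kernel density estimate on simulated two-class pixel values; the simulated intensities, the known class modes, and the default bandwidth are assumptions for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Simulated pixel intensities from two tissue classes (a mixture of two densities).
pixels = np.concatenate([rng.normal(60, 8, 5000), rng.normal(130, 12, 3000)])

# Smooth density estimate instead of a raw histogram.
kde = gaussian_kde(pixels)
grid = np.linspace(pixels.min(), pixels.max(), 512)
density = kde(grid)

# Threshold at the valley (density minimum) between the two peaks.
between = (grid > 60) & (grid < 130)               # search between the assumed class modes
threshold = grid[between][np.argmin(density[between])]
segmented = pixels > threshold                      # simple two-class segmentation
print(f"threshold = {threshold:.1f}")
```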
Abstract:
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is an interest in studying latent variables (or latent traits). Usually such latent traits are assumed to be random variables and a convenient distribution is assigned to them. A very common choice for such a distribution has been the standard normal. Recently, Azevedo et al. [Bayesian inference for a skew-normal IRT model under the centred parameterization, Comput. Stat. Data Anal. 55 (2011), pp. 353-365] proposed a skew-normal distribution under the centred parameterization (SNCP), as studied in [R. B. Arellano-Valle and A. Azzalini, The centred parametrization for the multivariate skew-normal distribution, J. Multivariate Anal. 99(7) (2008), pp. 1362-1382], to model the latent trait distribution. This approach allows one to represent any asymmetric behaviour of the latent trait distribution. They also developed a Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm based on the density of the SNCP and showed that the algorithm recovers all parameters properly. Their results indicated that, in the presence of asymmetry, the proposed model and estimation algorithm perform better than the usual model and estimation methods. Our main goal in this paper is to propose another type of MHWGS algorithm, based on a stochastic representation (hierarchical structure) of the SNCP studied in [N. Henze, A probabilistic representation of the skew-normal distribution, Scand. J. Statist. 13 (1986), pp. 271-275]. Our algorithm has only one Metropolis-Hastings step, as opposed to the algorithm developed by Azevedo et al., which has two such steps. This not only makes the implementation easier but also reduces the number of proposal densities to be used, which can be a problem in the implementation of MHWGS algorithms, as can be seen in [R.J. Patz and B.W. Junker, A straightforward approach to Markov Chain Monte Carlo methods for item response models, J. Educ. Behav. Stat. 24(2) (1999), pp. 146-178; R. J. Patz and B. W. Junker, The applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses, J. Educ. Behav. Stat. 24(4) (1999), pp. 342-366; A. Gelman, G.O. Roberts, and W.R. Gilks, Efficient Metropolis jumping rules, Bayesian Stat. 5 (1996), pp. 599-607]. Moreover, we consider a modified beta prior (which generalizes the one considered in [3]) and a Jeffreys prior for the asymmetry parameter. Furthermore, we study the sensitivity of such priors as well as the use of different kernel densities for this parameter. Finally, we assess the impact of the number of examinees, the number of items, and the asymmetry level on parameter recovery. Results of the simulation study indicated that our approach performs as well as that in [3] in terms of parameter recovery, mainly when the Jeffreys prior is used. They also indicated that the asymmetry level has the highest impact on parameter recovery, even though it is relatively small. A real data analysis is considered jointly with the development of model fit assessment tools, and the results are compared with those obtained by Azevedo et al. The results indicate that the hierarchical approach allows us to implement MCMC algorithms more easily, facilitates convergence diagnostics, and can be very useful for fitting more complex skew IRT models.
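The stochastic representation of the skew-normal distribution that underlies the proposed hierarchical MHWGS algorithm (Henze, 1986) is easy to illustrate: a skew-normal variate can be written as a combination of a half-normal and an independent normal variate. The sketch below, with an illustrative shape parameter, only checks this representation by simulation; it is not the IRT estimation algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(4)

# Henze (1986): if Z1, Z2 ~ N(0, 1) are independent and delta = lambda / sqrt(1 + lambda^2),
# then X = delta * |Z1| + sqrt(1 - delta^2) * Z2 is standard skew-normal with shape lambda.
lam = 3.0                                  # illustrative asymmetry (shape) parameter
delta = lam / np.sqrt(1.0 + lam ** 2)

z1 = rng.standard_normal(200_000)
z2 = rng.standard_normal(200_000)
x = delta * np.abs(z1) + np.sqrt(1.0 - delta ** 2) * z2

# The theoretical mean of a standard skew-normal variate is delta * sqrt(2 / pi).
print(x.mean(), delta * np.sqrt(2.0 / np.pi))   # the two numbers should be close
```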
Abstract:
The main aim of this Ph.D. dissertation is the study of clustering dependent data by means of copula functions, with particular emphasis on microarray data. Copula functions are a popular multivariate modeling tool in every field where multivariate dependence is of great interest, but their use in clustering has not yet been investigated. The first part of this work contains a review of the literature on clustering methods, copula functions, and microarray experiments. The attention focuses on the K-means (Hartigan, 1975; Hartigan and Wong, 1979), the hierarchical (Everitt, 1974), and the model-based (Fraley and Raftery, 1998, 1999, 2000, 2007) clustering techniques, since their performance is compared. Then, the probabilistic interpretation of Sklar's theorem (Sklar, 1959), estimation methods for copulas such as Inference for Margins (Joe and Xu, 1996), and the Archimedean and elliptical copula families are presented. Finally, applications of clustering methods and copulas to genetic and microarray experiments are highlighted. The second part contains the original contribution proposed. A simulation study is performed in order to evaluate the performance of the K-means and hierarchical bottom-up clustering methods in identifying clusters according to the dependence structure of the data generating process. Different simulations are performed by varying different conditions (e.g., the kind of margins (distinct, overlapping, or nested) and the value of the dependence parameter), and the results are evaluated by means of different measures of performance. In light of the simulation results and of the limits of the two investigated clustering methods, a new clustering algorithm based on copula functions ('CoClust' in brief) is proposed. The basic idea, the iterative procedure of the CoClust, and a description of the R functions written, together with their output, are given. The CoClust algorithm is tested on simulated data (varying the number of clusters, the copula models, the dependence parameter value, and the degree of overlap of margins) and is compared with model-based clustering using different measures of performance, such as the percentage of well-identified numbers of clusters and the percentage of non-rejection of H0 on the dependence parameter. It is shown that the CoClust algorithm makes it possible to overcome all observed limits of the other investigated clustering techniques and is able to identify clusters according to the dependence structure of the data, independently of the degree of overlap of margins and the strength of the dependence. The CoClust uses a criterion based on the maximized log-likelihood function of the copula and can virtually account for any possible dependence relationship between observations. Many peculiar characteristics of the CoClust are shown, e.g. its capability of identifying the true number of clusters and the fact that it does not require a starting classification. Finally, the CoClust algorithm is applied to the real microarray data of Hedenfalk et al. (2001), both to the gene expressions observed in three different cancer samples and to the columns (tumor samples) of the whole data matrix.
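To give a flavor of the kind of objective a copula-based clustering criterion maximizes, here is a minimal sketch of evaluating a Gaussian copula log-likelihood for one candidate grouping of variables, using pseudo-observations (rescaled ranks) as margins; the Gaussian copula family, the rank-based margins, and the synthetic data are illustrative assumptions and not the dissertation's CoClust implementation.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_loglik(data):
    """Log-likelihood of a Gaussian copula fitted to the columns of `data`."""
    n, p = data.shape
    # Pseudo-observations: rescaled ranks in (0, 1), a common margin-free device.
    u = (np.argsort(np.argsort(data, axis=0), axis=0) + 1) / (n + 1)
    z = norm.ppf(u)
    R = np.corrcoef(z, rowvar=False)                 # copula correlation estimate
    R_inv = np.linalg.inv(R)
    _, logdet = np.linalg.slogdet(R)
    quad = np.einsum("ij,jk,ik->i", z, R_inv - np.eye(p), z)
    return -0.5 * (n * logdet + quad.sum())

# Two dependent columns and one independent column of synthetic data.
rng = np.random.default_rng(5)
base = rng.standard_normal(500)
data = np.column_stack([base, 0.8 * base + 0.6 * rng.standard_normal(500),
                        rng.standard_normal(500)])
print(gaussian_copula_loglik(data[:, :2]))           # strongly dependent pair: large log-likelihood
print(gaussian_copula_loglik(data[:, [0, 2]]))       # nearly independent pair: close to zero
```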
Abstract:
Estimation of the number of mixture components (k) is an unsolved problem. Available methods for estimating k include bootstrapping the likelihood ratio test statistic and optimizing a variety of validity functionals such as AIC, BIC/MDL, and ICOMP. We investigate the minimization of the distance between the fitted mixture model and the true density as a method for estimating k. The distances considered are Kullback-Leibler (KL) and L2. We estimate these distances using cross validation. A reliable estimate of k is obtained by voting among B estimates of k corresponding to B cross validation estimates of distance. This estimation method with the KL distance is very similar to the Monte Carlo cross-validated likelihood methods discussed by Smyth (2000). With focus on univariate normal mixtures, we present simulation studies that compare the cross-validated distance method with AIC, BIC/MDL, and ICOMP. We also apply the cross validation estimate of distance approach, along with the AIC, BIC/MDL, and ICOMP approaches, to data from an osteoporosis drug trial in order to find groups that respond differentially to treatment.
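A minimal sketch of the cross-validated KL-distance idea for univariate normal mixtures: up to a constant that does not depend on the fitted model, the KL distance to the true density is minimized by maximizing the held-out log-likelihood, so k can be chosen by cross-validation. The synthetic data, the fold scheme, and the use of scikit-learn's GaussianMixture are illustrative assumptions, not the paper's exact procedure (which also includes voting over B repeated estimates).

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

rng = np.random.default_rng(6)
# Univariate sample from a two-component normal mixture.
x = np.concatenate([rng.normal(-2.0, 1.0, 300),
                    rng.normal(2.0, 1.0, 300)]).reshape(-1, 1)

def cv_loglik(x, k, n_splits=5):
    """Average held-out log-likelihood per observation for a k-component mixture."""
    scores = []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(x):
        gm = GaussianMixture(n_components=k, random_state=0).fit(x[train])
        scores.append(gm.score(x[test]))        # mean log-likelihood on the held-out fold
    return np.mean(scores)

scores = {k: cv_loglik(x, k) for k in range(1, 6)}
print(scores)
print("selected k:", max(scores, key=scores.get))   # typically 2 for this sample
```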