145 results for Probabilidade e Estatística Aplicadas


Relevance:

90.00%

Publisher:

Abstract:

This work presents a brief discussion of methods for estimating the parameters of the Generalized Pareto distribution (GPD). The following techniques are addressed: Moments (MOM), Maximum Likelihood (MLE), Biased Probability Weighted Moments (PWMB), Unbiased Probability Weighted Moments (PWMU), Minimum Density Power Divergence (MDPD), Median (MED), Pickands (PICKANDS), Maximum Penalized Likelihood (MPLE), Maximum Goodness-of-Fit (MGF), and Maximum Entropy (POME), the last being the focus of this manuscript. As an illustration, the Generalized Pareto distribution was fitted to a sequence of intraplate earthquakes that occurred in the city of João Câmara, in the northeastern region of Brazil, which was monitored continuously for two years (1987 and 1988). MLE and POME proved to be the most efficient methods, presenting essentially the same mean squared errors. Based on a threshold of 1.5 degrees, the seismic risk for the city was estimated, along with the return levels for earthquakes of magnitude 1.5, 2.0, 2.5, and 3.0 degrees, and for the most intense earthquake ever recorded in the city, which occurred in November 1986 with a magnitude of about 5.2.
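
Since the abstract turns on threshold exceedances, MLE fitting, and return levels, a minimal sketch of that workflow may be useful. It uses synthetic data and SciPy's genpareto in place of the João Câmara catalogue; all parameter values are hypothetical.

```python
# A minimal sketch (not the thesis code): maximum-likelihood fitting of a
# Generalized Pareto distribution to exceedances over a threshold.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
magnitudes = rng.gumbel(loc=2.0, scale=0.5, size=2000)  # hypothetical catalogue

threshold = 1.5
excesses = magnitudes[magnitudes > threshold] - threshold

# MLE fit of the GPD to the excesses (floc=0 since excesses start at zero)
shape, loc, scale = genpareto.fit(excesses, floc=0)

# Return level: the magnitude exceeded on average once every m exceedances
m = 100
return_level = threshold + genpareto.ppf(1 - 1 / m, shape, loc=0, scale=scale)
print(f"shape={shape:.3f}, scale={scale:.3f}, "
      f"{m}-exceedance return level={return_level:.2f}")
```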

Relevance:

90.00%

Publisher:

Abstract:

Two-level factorial designs are widely used in industrial experimentation. However, a design with many factors requires a large number of runs, and replicating every treatment may not be feasible given limitations of resources and time. In these cases, unreplicated designs are used. With only one replicate, however, there is no internal estimate of experimental error on which to base judgments about the significance of the observed effects. One possible solution to this problem is to use normal plots or half-normal plots of the effects. Many experimenters use the normal plot, while others prefer the half-normal plot, often in both cases without justification. The controversy about the use of these two graphical techniques motivates this work, since there is no record of a formal procedure or statistical test that indicates which one is best. The choice between the two plots seems to be a subjective issue. The central objective of this master's thesis is therefore to perform an experimental comparative study of the normal plot and the half-normal plot in the context of the analysis of unreplicated 2^k factorial experiments. The study involves the construction of simulated scenarios in which the performance of the plots in detecting significant effects and identifying outliers is evaluated, in order to address the following questions: Can one plot be better than the other? In which situations? What information does one plot add to the analysis of the experiment that might complement that provided by the other? What are the restrictions on the use of these plots? In this way, the work confronts the two techniques, examining them simultaneously in order to identify similarities, differences, or relationships that contribute to a theoretical reference to justify or aid the experimenter's decision about which of the two graphical techniques to use, and why. The simulation results show that the half-normal plot is better for judging the significance of the effects, while the normal plot is recommended for detecting outliers in the data.
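
As a sketch of the graphical technique under comparison, the following generates an unreplicated 2^3 design (with hypothetical active effects A and AB), estimates the effects by contrasts, and draws a half-normal plot; it is an illustration, not the thesis's simulation code.

```python
# Half-normal plot of factorial effects for a simulated unreplicated 2^3 design.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from itertools import product

rng = np.random.default_rng(1)

# Full 2^3 design in coded units (-1/+1)
X = np.array(list(product([-1, 1], repeat=3)), dtype=float)
A, B, C = X[:, 0], X[:, 1], X[:, 2]

# Hypothetical true model: A and the AB interaction are active
y = 10 + 4 * A + 3 * A * B + rng.normal(scale=1.0, size=8)

# Effect estimate for a +/-1 contrast column x: 2 * (x . y) / n
columns = {"A": A, "B": B, "C": C,
           "AB": A * B, "AC": A * C, "BC": B * C, "ABC": A * B * C}
effects = {name: 2 * (col @ y) / len(y) for name, col in columns.items()}

# Half-normal plot: sorted |effects| against half-normal quantiles
names, vals = zip(*sorted(effects.items(), key=lambda kv: abs(kv[1])))
abs_effects = np.abs(vals)
k = len(abs_effects)
quantiles = stats.halfnorm.ppf((np.arange(1, k + 1) - 0.5) / k)

plt.scatter(quantiles, abs_effects)
for q, e, name in zip(quantiles, abs_effects, names):
    plt.annotate(name, (q, e))
plt.xlabel("half-normal quantiles")
plt.ylabel("|effect|")
plt.show()
```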

Relevance:

90.00%

Publisher:

Abstract:

Survival models deal with the modeling of time-to-event data. In some situations, however, part of the population may no longer be subject to the event. Models that take this fact into account are called cure rate models. There are few studies about hypothesis tests in cure rate models. Recently, a new test statistic, the gradient statistic, has been proposed. It shares the same asymptotic properties with the classic large-sample tests: the likelihood ratio, score, and Wald tests. Simulation studies have been carried out to explore the behavior of the gradient statistic in finite samples and to compare it with the classic statistics in different models. The main objective of this work is to study and compare the performance of the gradient test and the likelihood ratio test in cure rate models. We first describe the models and present the main asymptotic properties of the tests. We then perform a simulation study based on the promotion time model with Weibull distribution to assess the performance of the tests in finite samples. An application is presented to illustrate the concepts studied.
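
For reference, the two statistics compared here can be written as follows for a simple null hypothesis (a sketch in generic notation, under the usual regularity conditions):

```latex
% Likelihood ratio and gradient statistics for H_0: \theta = \theta_0,
% where \ell is the log-likelihood, U(\theta) = \partial\ell/\partial\theta
% is the score function, and \hat\theta is the unrestricted MLE.
\[
  W_{LR} = 2\{\ell(\hat\theta) - \ell(\theta_0)\},
  \qquad
  W_G = U(\theta_0)^{\top}(\hat\theta - \theta_0).
\]
% Both converge in distribution to \chi^2_p under H_0; the gradient
% statistic requires neither the information matrix nor its inverse.
```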

Relevance:

90.00%

Publisher:

Abstract:

In the work reported here we present theoretical and numerical results on a risk model with interest rate and proportional reinsurance, based on the article "Inequalities for the ruin probability in a controlled discrete-time risk process" by Rosario Romera and Maikol Diasparra (see [5]). Recursive and integral equations, as well as upper bounds for the ruin probability, are given under three different approaches, namely the classical Lundberg inequality, the inductive approach, and the martingale approach. Non-parametric density estimation techniques are used to derive upper bounds for the ruin probability, and the algorithms used in the simulation are presented.
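
A minimal Monte Carlo sketch of the kind of discrete-time risk process studied here (not the thesis's algorithms, and with hypothetical parameters): the surplus earns interest each period and the insurer retains a proportional share of premiums and claims.

```python
# Monte Carlo estimate of a finite-horizon ruin probability in a
# discrete-time risk process with interest and proportional reinsurance.
import numpy as np

rng = np.random.default_rng(7)

def ruin_probability(u0=10.0, premium=1.0, retention=0.7, interest=0.02,
                     horizon=50, n_paths=100_000):
    """Fraction of simulated paths whose surplus ever falls below zero.

    With proportional reinsurance, the insurer keeps the share `retention`
    of each claim and of the premium income.
    """
    ruined = 0
    for _ in range(n_paths):
        u = u0
        for _ in range(horizon):
            claim = rng.exponential(scale=0.8)  # hypothetical claim law
            u = u * (1 + interest) + retention * (premium - claim)
            if u < 0:
                ruined += 1
                break
    return ruined / n_paths

print(f"estimated ruin probability: {ruin_probability():.4f}")
```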

Relevance:

80.00%

Publisher:

Abstract:

In this work, we study and compare two percolation algorithms, one elaborated by Elias and the other by Newman and Ziff, using theoretical tools of algorithm complexity as well as an experimental comparison. The work is divided into three chapters. The first covers the definitions and theorems necessary for a more formal mathematical study of percolation. The second presents the techniques used to estimate the complexity of algorithms, namely worst-case, best-case, and average-case analysis. We use the worst-case technique to estimate the complexity of both algorithms and thus compare them. The last chapter describes several characteristics of each algorithm; through the theoretical complexity estimates and a comparison of the execution time of the most important part of each one, we compare these two important algorithms that simulate percolation.
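
For concreteness, here is a minimal sketch of the union-find core that gives the Newman-Ziff approach its near-linear cost on an L x L site-percolation lattice; it is an illustration, not the code analyzed in the thesis.

```python
# Newman-Ziff-style site percolation: occupy sites one at a time in random
# order and merge clusters with weighted union + path halving.
import numpy as np

rng = np.random.default_rng(0)
L = 64
N = L * L
parent = np.arange(N)                 # union-find forest
size = np.ones(N, dtype=int)          # cluster sizes at the roots
occupied = np.zeros(N, dtype=bool)

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

def union(a, b):
    ra, rb = find(a), find(b)
    if ra == rb:
        return
    if size[ra] < size[rb]:            # weighted union
        ra, rb = rb, ra
    parent[rb] = ra
    size[ra] += size[rb]

def neighbors(i):
    x, y = i % L, i // L
    if x > 0: yield i - 1
    if x < L - 1: yield i + 1
    if y > 0: yield i - L
    if y < L - 1: yield i + L

largest = 0
growth = []                            # largest cluster after each added site
for site in rng.permutation(N):
    occupied[site] = True
    for nb in neighbors(site):
        if occupied[nb]:
            union(site, nb)
    largest = max(largest, size[find(site)])
    growth.append(largest)

print("largest cluster after all sites:", growth[-1])  # equals N
```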

Relevance:

80.00%

Publisher:

Abstract:

In this work, we study the survival cure rate model proposed by Yakovlev et al. (1993), based on a structure of competing risks for the event of interest, together with the approach proposed by Chen et al. (1999), in which covariates are introduced to model the amount of risk. We focus on covariates measured with error, considering the corrected score method in order to obtain consistent estimators. A simulation study evaluates the behavior of the estimators obtained by this method in finite samples. The simulation aims to identify the impact of measurement error not only on the regression coefficients of the covariates measured with error (Mizoi et al. 2007) but also on the coefficients of covariates measured without error. We also verify the adequacy of the piecewise exponential distribution for the cure rate model with measurement error. Finally, applications of the model to real data are presented.
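
For context, the promotion time cure model underlying this setting is commonly written as follows (a sketch in generic notation, assuming a Poisson number of latent competing causes):

```latex
% Promotion time (Yakovlev/Chen-type) cure model: N_i latent competing
% causes with N_i ~ Poisson(\theta(x_i)), each with latency distribution
% F(t); the observed time is the minimum of the promotion times.
\[
  S_{\mathrm{pop}}(t \mid x_i) = \exp\{-\theta(x_i)\,F(t)\},
  \qquad
  \theta(x_i) = \exp(x_i^{\top}\beta),
\]
% so the cure fraction S_pop(\infty | x_i) = exp\{-\theta(x_i)\} is positive.
```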

Relevance:

80.00%

Publisher:

Abstract:

In this work we present an exposition of the mathematical theory of compactly supported orthogonal wavelets in the context of multiresolution analysis. These wavelets are particularly attractive because they lead to a stable and very efficient algorithm, the Fast Wavelet Transform (FWT). One of our objectives is to develop efficient algorithms for calculating the wavelet coefficients (FWT) through Mallat's pyramid algorithm, and to discuss its connection with filter banks. We also study the concept of multiresolution analysis, the setting in which wavelets can be understood and built naturally, an important step in the passage from the mathematical universe (continuous domain) to the universe of representation (discrete domain).
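
As an illustration of one stage of Mallat's pyramid algorithm, the sketch below applies the Haar analysis filter pair (a minimal stand-in for the compactly supported Daubechies filters discussed in the text); it is not the thesis's implementation.

```python
# One analysis stage of the FWT: convolve with a low-pass and a high-pass
# filter, then keep every second sample, splitting the signal into
# approximation and detail parts.
import numpy as np

# Haar orthonormal filter pair (a higher-order Daubechies pair could be
# substituted here)
h = np.array([1, 1]) / np.sqrt(2)   # low-pass (scaling) filter
g = np.array([1, -1]) / np.sqrt(2)  # high-pass (wavelet) filter

def dwt_step(signal):
    """One pyramid step: filter, then downsample by 2."""
    approx = np.convolve(signal, h[::-1])[1::2]
    detail = np.convolve(signal, g[::-1])[1::2]
    return approx, detail

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = dwt_step(x)
print("approximation:", a)  # pairwise averages, scaled by sqrt(2)
print("detail:       ", d)  # pairwise differences, scaled by sqrt(2)
```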

Relevance:

80.00%

Publisher:

Abstract:

In this work we study the survival cure rate model proposed by Yakovlev (1993), considered in a competing risks setting. Covariates are introduced to model the cure rate, and we allow some covariates to have missing values. We consider only the case in which the missing covariates are categorical, and we implement the EM algorithm via the method of weights for maximum likelihood estimation. We present a Monte Carlo simulation experiment to compare the properties of the estimators based on this method with those of the estimators under the complete-case scenario. In this experiment we also evaluate the impact on the parameter estimates of increasing the proportion of immune and censored individuals relative to the non-immune ones. We illustrate the proposed methodology with a real data set involving the time until graduation for the undergraduate course in Statistics at the Universidade Federal do Rio Grande do Norte.
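
For reference, the method of weights gives the EM algorithm's E-step the following generic form (a sketch in our notation, not the thesis's):

```latex
% Each subject i with a missing categorical covariate is expanded into one
% pseudo-observation per possible level z, weighted by its posterior
% probability given the observed data and the current parameter value.
\[
  Q(\psi \mid \psi^{(m)})
  = \sum_{i=1}^{n} \sum_{z \in \mathcal{Z}}
    w_{iz}^{(m)} \,\log L_i(\psi;\, t_i, \delta_i, z),
  \qquad
  w_{iz}^{(m)} = P\!\left(Z_i = z \mid t_i, \delta_i, \psi^{(m)}\right),
\]
% and the M-step maximizes this weighted complete-data log-likelihood.
```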

Relevance:

80.00%

Publisher:

Abstract:

This work presents improvement strategies for a successful evolutionary metaheuristic for the Asymmetric Traveling Salesman Problem, namely a memetic algorithm designed mainly for this problem. Basically, the improvement applies the intensification techniques known as Path-Relinking and Vocabulary Building. Furthermore, the latter is used in two different ways, in order to evaluate the effects of the improvement on the evolutionary metaheuristic. The methods were implemented in C++, and the experiments were run on instances from the TSPLIB library; the results show that the proposed procedures were successful in the tests performed.
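
As a sketch of the Path-Relinking idea applied to tours (in Python for illustration; the thesis's implementation is in C++), the routine below walks from an initial tour toward a guiding tour, one position at a time, keeping the best intermediate tour found along the path.

```python
# Path-relinking between two TSP tours over a hypothetical asymmetric
# distance matrix.
import numpy as np

def tour_length(tour, dist):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def path_relinking(initial, guiding, dist):
    current = list(initial)
    best, best_len = list(current), tour_length(current, dist)
    for pos, city in enumerate(guiding):
        if current[pos] != city:
            j = current.index(city)        # move `city` into position `pos`
            current[pos], current[j] = current[j], current[pos]
            length = tour_length(current, dist)
            if length < best_len:          # keep the best intermediate tour
                best, best_len = list(current), length
    return best, best_len

rng = np.random.default_rng(3)
n = 12
dist = rng.uniform(1, 10, size=(n, n))     # asymmetric: dist[i,j] != dist[j,i]
np.fill_diagonal(dist, 0)

t1 = list(rng.permutation(n))
t2 = list(rng.permutation(n))
best, best_len = path_relinking(t1, t2, dist)
print("best tour length along the path:", round(best_len, 2))
```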

Relevance:

80.00%

Publisher:

Abstract:

In this work we present the principal fractals, their characteristics, properties, and classification, comparing them to elements of Euclidean geometry. We show the importance of fractal geometry in the analysis of several elements of our society. We emphasize the importance of an appropriate definition of dimension for these objects, because the usual definition is not satisfactory for them. As instruments to obtain these dimensions we present the box-counting method, the Hausdorff-Besicovitch dimension, and the scale method. We also study the percolation process on the square lattice, comparing it to percolation on the multifractal object Qmf, where we observe some differences between the two processes. We analyze the histogram of percolating lattices versus the site occupation probability p, among other numerical simulations. Finally, we show that we can estimate the fractal dimension of the percolation cluster and that percolation on a multifractal support is in the same universality class as standard percolation. We observe that the area of the blocks of Qmf is variable and that pc is a function of p, which is related to the anisotropy of Qmf.
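
A minimal sketch of the box-counting method mentioned above, applied to a chaos-game Sierpinski triangle whose dimension is known in closed form (log 3 / log 2 ≈ 1.585); this is an illustration, not the thesis's simulations.

```python
# Box-counting dimension estimate: the slope of log N(boxes occupied)
# versus log(boxes per side) estimates the fractal dimension.
import numpy as np

rng = np.random.default_rng(5)

# Chaos-game construction of the Sierpinski triangle
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
p = np.array([0.1, 0.1])
points = []
for _ in range(200_000):
    p = (p + vertices[rng.integers(3)]) / 2   # jump halfway to a random vertex
    points.append(p)
points = np.array(points)

sizes = [2 ** k for k in range(2, 9)]         # boxes per side: 4 ... 256
counts = []
for n_boxes in sizes:
    boxes = np.floor(points * n_boxes).astype(int)
    counts.append(len(np.unique(boxes, axis=0)))   # occupied boxes

slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
print(f"box-counting dimension estimate: {slope:.3f} "
      f"(theory: {np.log(3) / np.log(2):.3f})")
```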

Relevance:

80.00%

Publisher:

Abstract:

We present in this work two estimation methods for accelerated failure time models with random effects for grouped survival data. The first method, implemented in the SAS software through the NLMIXED procedure, uses an adaptive Gauss-Hermite quadrature to determine the marginalized likelihood. The second method, implemented in the free software R, is based on penalized likelihood to estimate the parameters of the model. In the first case we describe the main theoretical aspects and, in the second, we briefly present the adopted approach together with a simulation study to investigate the performance of the method. We illustrate the models using real data on the operating time of oil wells from the Potiguar Basin (RN/CE).
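
A minimal sketch of the Gauss-Hermite marginalization step (not the SAS NLMIXED or R code used in the thesis): a normal random effect is integrated out of a cluster likelihood via the change of variable b = sqrt(2)*sigma*x. The frailty example at the end is hypothetical.

```python
# Gauss-Hermite quadrature approximation of
#   int L(b) * N(b; 0, sigma^2) db.
import numpy as np
from numpy.polynomial.hermite import hermgauss

def marginal_likelihood(conditional_lik, sigma, n_nodes=20):
    """Approximate int conditional_lik(b) N(b; 0, sigma^2) db."""
    nodes, weights = hermgauss(n_nodes)       # nodes for weight exp(-x^2)
    b = np.sqrt(2.0) * sigma * nodes
    return np.sum(weights * conditional_lik(b)) / np.sqrt(np.pi)

# Sanity check: with L(b) = 1 the marginal likelihood must equal 1
print(marginal_likelihood(lambda b: np.ones_like(b), sigma=0.5))  # ~1.0

# Hypothetical cluster: two exponential times sharing a log-normal frailty
times = np.array([1.2, 0.7])
lik = lambda b: np.prod(np.exp(b) * np.exp(-np.exp(b) * times[:, None]), axis=0)
print(marginal_likelihood(lik, sigma=0.5))
```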

Relevance:

80.00%

Publisher:

Abstract:

Present-day weather forecast models usually cannot provide realistic descriptions of local, and particularly extreme, weather conditions. However, for lead times of a few days, they provide reliable forecasts of the atmospheric circulation that encompasses the subscale processes leading to extremes. Hence, forecasts of extreme events can only be achieved through a combination of dynamical and statistical analysis methods, where a stable and significant statistical model, based on prior physical reasoning, establishes a posterior statistical-dynamical relation between the local extremes and the large-scale circulation. Here we present the development and application of such a statistical model calibration on the basis of extreme value theory, in order to derive probabilistic forecasts of extreme local temperature. The downscaling is applied to NCEP/NCAR reanalysis data in order to derive estimates of daily temperature at weather stations in the northeastern region of Brazil.
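
As a sketch of the extreme-value building block of such a calibration (synthetic data, not the NCEP/NCAR pipeline), the following fits a GEV distribution to annual temperature maxima and computes a return level.

```python
# GEV fit to block maxima and a return-level calculation.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(11)

# Hypothetical station record: 40 years of daily temperatures
daily = rng.normal(loc=30.0, scale=2.5, size=(40, 365))
annual_max = daily.max(axis=1)

# Note: scipy's genextreme uses the shape sign convention c = -xi
c, loc, scale = genextreme.fit(annual_max)

# 20-year return level: exceeded with probability 1/20 in any given year
return_level = genextreme.ppf(1 - 1 / 20, c, loc=loc, scale=scale)
print(f"20-year return level: {return_level:.1f} degrees C")
```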

Relevance:

80.00%

Publisher:

Abstract:

In this work we study the asymptotic unbiasedness and the strong and uniform strong consistency of a class of kernel estimators fn of a density function f taking values on a k-dimensional sphere.
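
One common form of such a kernel density estimator on the sphere is the following (a sketch in generic notation; the exact class studied in the thesis may differ):

```latex
% For observations X_1, ..., X_n on the unit sphere S^k, a kernel K, and a
% bandwidth sequence h_n -> 0,
\[
  f_n(x) = \frac{1}{n\,c_{h_n}(K)}
           \sum_{i=1}^{n} K\!\left(\frac{1 - x^{\top} X_i}{h_n^{2}}\right),
  \qquad x \in S^{k},
\]
% where c_{h_n}(K) is the constant that normalizes f_n to integrate to one
% over the sphere.
```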

Relevance:

80.00%

Publisher:

Abstract:

In this work we have elaborated a spline-based method for solving initial value problems involving ordinary differential equations, with emphasis on linear equations. The method can be seen as an alternative to traditional solvers such as Runge-Kutta, and in the linear time-invariant case it avoids root calculations. The method is then applied to a central problem of control theory, namely the step response problem for linear ODEs with possibly varying coefficients, where root calculations do not apply. We have implemented an efficient algorithm that uses exclusively matrix-vector operations. The working interval (up to the settling time) is determined through a calculation of the least stable mode, using a modified power method. Several variants of the method were compared by simulation. For general linear problems with a fine grid, the proposed method compares favorably with the Euler method. In the time-invariant case, where the alternative is root calculation, we have indications that the proposed method is competitive for equations of sufficiently high order.
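
As a sketch of the power-method idea used to locate the least stable mode (a plain shifted power iteration, not the thesis's modified variant; the system below is hypothetical):

```python
# Estimate the least stable mode of a linear system via shifted power
# iteration on its companion matrix.
import numpy as np

def power_iteration(A, iters=500, tol=1e-12):
    """Return the dominant eigenvalue of A via the Rayleigh quotient."""
    rng = np.random.default_rng(0)
    v = rng.normal(size=A.shape[0])
    lam = 0.0
    for _ in range(iters):
        w = A @ v
        v_new = w / np.linalg.norm(w)
        lam_new = v_new @ A @ v_new        # Rayleigh quotient
        if abs(lam_new - lam) < tol:
            break
        v, lam = v_new, lam_new
    return lam

# Companion matrix of y''' + 6y'' + 11y' + 6y = 0 (modes -1, -2, -3)
A = np.array([[0.0,  1.0,  0.0],
              [0.0,  0.0,  1.0],
              [-6.0, -11.0, -6.0]])

# The dominant eigenvalue of (A - s I) is the eigenvalue of A farthest from
# the shift s; with s large and negative, that is the least stable (slowest)
# mode of a stable system, which fixes the settling time.
s = -10.0
lam_shifted = power_iteration(A - s * np.eye(3))
print("least stable mode estimate:", lam_shifted + s)   # ~ -1
```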

Relevance:

80.00%

Publisher:

Abstract:

In this work, the paper by Campos and Dorea [3] is presented in detail. In that article, a kernel estimator was applied to a sequence of independent and identically distributed random variables with general state space. In Chapter 2, the estimator's properties, such as asymptotic unbiasedness, consistency in quadratic mean, strong consistency, and asymptotic normality, are verified. In Chapter 3, using the R software, numerical experiments are developed in order to give a visual idea of the estimation process.
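
A minimal sketch of the kind of numerical experiment described, written in Python for illustration rather than the R code used in the thesis: compare a Gaussian kernel density estimate with the true density of an i.i.d. sample.

```python
# Kernel density estimation experiment on simulated i.i.d. data.
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(2024)
sample = rng.normal(loc=0.0, scale=1.0, size=500)   # i.i.d. N(0, 1) data

kde = gaussian_kde(sample)                          # Gaussian kernel estimator
grid = np.linspace(-4, 4, 201)

max_abs_error = np.max(np.abs(kde(grid) - norm.pdf(grid)))
print(f"sup-norm error on the grid: {max_abs_error:.4f}")  # shrinks as n grows
```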