998 results for Probabilidade e Estatística Aplicadas


Relevance:

100.00%

Publisher:

Abstract:

This article presents an exercise in meta-comprehension of what has been researched on the teaching of probability and statistics in Brazil. The research was based on the work on this subject presented at the Third International Symposium for Research in Mathematics Education (III SIPEM). Articles were selected from the proceedings of the event and analyzed hermeneutically, following the procedures of phenomenology. We observed no evidence of clustering of research on this topic by region or institution, and we also emphasize that research on the teaching of probability and statistics needs to advance toward a theoretical discussion that transcends the subjects being studied and makes broader and deeper links between theory and practice. Findings also indicate that this sub-area of research in mathematics education is still in the process of constituting itself.

Relevance:

100.00%

Publisher:

Abstract:

Coffee is one of the main products of Brazilian agriculture, and Brazil is currently its largest producer and exporter. Knowing the growth pattern of a fruit can assist in the development of the crop, indicating, for example, the periods of greatest increase in fruit weight and the optimum harvest time, which is essential to improve the management and quality of coffee. Some authors indicate that the growth curve of the coffee fruit has a double sigmoid shape; however, this claim rests on visual observation alone, without the use of regression models. The aims of this study were: i) to determine whether the growth pattern of the coffee fruit is really double sigmoidal; ii) to propose a new approach to weighted importance resampling to estimate the parameters of regression models and to select the most suitable double sigmoidal model to describe the growth of coffee fruits; iii) to study the effect of the spatial arrangement of the crop on the growth curve of coffee fruits. The first article aimed to determine whether the growth pattern of the coffee fruit is really double sigmoidal. The double Gompertz and double Logistic models fitted significantly better than simple sigmoidal models, confirming that the growth pattern of coffee fruits is indeed double sigmoidal. In the second article we propose using an approximation of the likelihood as the candidate distribution in weighted importance resampling, in order to simplify obtaining samples from the marginal distribution of each parameter. This technique was effective, since it provided parameters with practical interpretation at low computational cost; it can therefore be used to estimate parameters of double sigmoidal growth curves. The nonlinear double Logistic model was the most appropriate to describe the growth curve of coffee fruits. The third article aimed to verify the influence of different planting alignments and sun-exposure faces on the fruit growth curve. A difference between the growth rates in the two stages of fruit development was identified, regardless of the exposure face. Although differences in coffee productivity and quality have been reported, there was no difference between the growth curves for the different planting alignments studied here.
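For readers who want a concrete starting point, the sketch below fits a double Logistic curve by nonlinear least squares; the parameterization, data, and starting values are illustrative assumptions and are not taken from the study, which uses a Bayesian weighted importance resampling approach.

```python
# Minimal sketch of fitting a double Logistic growth curve by nonlinear least
# squares (a simple frequentist stand-in for the weighted importance resampling
# used in the study). The parameterization and data below are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, a1, k1, t1, a2, k2, t2):
    """Sum of two logistic phases: early expansion plus final filling."""
    return a1 / (1 + np.exp(-k1 * (t - t1))) + a2 / (1 + np.exp(-k2 * (t - t2)))

# Hypothetical data: days after flowering vs. fruit fresh mass (g).
t = np.array([30, 60, 90, 120, 150, 180, 210, 240])
y = np.array([0.1, 0.4, 0.7, 0.8, 0.9, 1.3, 1.7, 1.8])

p0 = [0.8, 0.08, 60, 1.0, 0.08, 180]          # rough starting values
params, cov = curve_fit(double_logistic, t, y, p0=p0, maxfev=10000)
print(dict(zip(["a1", "k1", "t1", "a2", "k2", "t2"], params.round(3))))
```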

Relevance:

100.00%

Publisher:

Abstract:

This work consists of two parts: the first contains the theory used, and the second contains two articles. The first article examines two models from the class of generalized linear models for analyzing a mixture experiment that studied the effect of different diets, composed of fat, carbohydrate, and fiber, on tumor expression in the mammary glands of female rats, measured as the proportion of rats that showed tumor expression under a given diet. Mixture experiments are characterized by collinearity effects and small sample sizes. In this sense, assuming normality for the response to be maximized or minimized may be inadequate. Given this, the main characteristics of the logistic regression and simplex models are addressed. The models were compared using the AIC, BIC and ICOMP model selection criteria, simulated envelope plots for the residuals of the fitted models, and odds ratio plots with their respective confidence intervals for each mixture component. The first article concluded that the simplex regression model showed better goodness of fit and narrower confidence intervals for the odds ratios. The second article presents the Boosted Simplex Regression model, the boosting version of the simplex regression model, as an alternative to increase the precision of the confidence intervals for the odds ratio of each mixture component. For this, the Monte Carlo method was used to construct the confidence intervals. Moreover, the simulated envelope plot for the residuals of a model fitted via the boosting algorithm is presented in an innovative way. It was concluded that the Boosted Simplex Regression model was fitted successfully and that its confidence intervals for the odds ratios were accurate and slightly more precise than those of the maximum likelihood version.
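As a rough illustration of the binomial (logistic) GLM alternative discussed in the first article, a fit of a tumor-proportion response on mixture components might look like the sketch below; the simplex and boosted simplex models are not available in mainstream Python libraries, and all variable names and data here are hypothetical.

```python
# Sketch of a binomial GLM for a mixture experiment with a proportion response.
# Data and variable names are hypothetical, not taken from the dissertation.
import numpy as np
import statsmodels.api as sm

# Hypothetical mixture proportions (each row sums to 1) and tumor counts.
fat   = np.array([0.2, 0.4, 0.6, 0.3, 0.5, 0.2])
carb  = np.array([0.6, 0.4, 0.2, 0.5, 0.2, 0.4])
fiber = np.array([0.2, 0.2, 0.2, 0.2, 0.3, 0.4])
tumor = np.array([12, 18, 25, 15, 21, 14])      # rats with tumor expression
total = np.full(6, 30)                          # rats per diet

# Two-column response (successes, failures); the intercept is omitted,
# as in a Scheffé-type mixture linear predictor.
endog = np.column_stack([tumor, total - tumor])
exog = np.column_stack([fat, carb, fiber])

fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
print(fit.params)                               # coefficients on the log-odds scale
print(np.exp(fit.conf_int()))                   # exponentiated CIs (odds-ratio scale)
```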

Relevance:

90.00%

Publisher:

Abstract:

The FGV/DAPP researcher João Victor participated, during July, in the 21st SINAPE (Simpósio Nacional de Probabilidade e Estatística), held in Natal, the main scientific meeting of the Brazilian statistical community. Over the course of a week, the DAPP researcher attended talks and short courses and presented his project on tools for formatting and checking survey microdata (Ferramentas para Formatação e Verificação de Microdados de Pesquisas), under the supervision of the current president-elect of the International Statistical Institute, Pedro Luis do Nascimento Silva.

Relevance:

90.00%

Publisher:

Abstract:

This work provides a brief discussion of methods for estimating the parameters of the Generalized Pareto distribution (GPD). The following techniques are addressed: method of Moments (moments), Maximum Likelihood (MLE), Biased Probability Weighted Moments (PWMB), Unbiased Probability Weighted Moments (PWMU), Minimum Density Power Divergence (MDPD), Median (MED), Pickands (PICKANDS), Maximum Penalized Likelihood (MPLE), Maximum Goodness-of-Fit (MGF), and the Maximum Entropy (POME) technique, the focus of this manuscript. By way of illustration, the Generalized Pareto distribution was fitted to a sequence of intraplate earthquakes that occurred in the city of João Câmara, in the northeastern region of Brazil, which was monitored continuously for two years (1987 and 1988). It was found that MLE and POME were the most efficient methods, yielding essentially the same mean squared errors. Based on a threshold of 1.5 degrees, the seismic risk for the city was estimated, along with the return levels for earthquakes of magnitude 1.5, 2.0, 2.5 and 3.0 degrees and for the most intense earthquake ever registered in the city, which occurred in November 1986 with a magnitude of about 5.2 degrees.
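A minimal sketch of the peaks-over-threshold MLE fit mentioned in the abstract, using SciPy's generalized Pareto distribution, is given below; the exceedance data, parameter values, and return-level computation are illustrative assumptions, not the study's actual data or results.

```python
# Sketch of a GPD maximum likelihood fit over a threshold; data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1987)
threshold = 1.5
magnitudes = threshold + stats.genpareto.rvs(c=0.1, scale=0.4,
                                             size=500, random_state=rng)

# Fit shape (c) and scale by maximum likelihood, with the location fixed at
# zero for the exceedances, as usual in peaks-over-threshold analysis.
exceedances = magnitudes - threshold
c_hat, loc_hat, scale_hat = stats.genpareto.fit(exceedances, floc=0.0)

# Return level: magnitude exceeded on average once every m exceedances.
m = 100
return_level = threshold + stats.genpareto.ppf(1 - 1 / m, c_hat, scale=scale_hat)
print(c_hat, scale_hat, return_level)
```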

Relevance:

90.00%

Publisher:

Abstract:

Two-level factorial designs are widely used in industrial experimentation. However, a design with many factors requires a large number of runs, and replicating the treatments many times may not be feasible given limitations of resources and time, making the experiment expensive. In these cases, unreplicated designs are used. But with only one replicate there is no internal estimate of experimental error with which to judge the significance of the observed effects. One possible solution to this problem is to use normal plots or half-normal plots of the effects. Many experimenters use the normal plot, while others prefer the half-normal plot, often, in both cases, without justification. The controversy about the use of these two graphical techniques motivates this work, since there is no record of a formal procedure or statistical test that indicates which one is best. The choice between the two plots seems to be a subjective issue. The central objective of this master's thesis is therefore to carry out an experimental comparative study of the normal plot and the half-normal plot in the context of the analysis of unreplicated 2^k factorial experiments. This study involves the construction of simulated scenarios, in which the performance of the plots in detecting significant effects and identifying outliers is evaluated in order to answer the following questions: Can one plot be better than the other? In which situations? What information does one plot add to the analysis of the experiment that might complement that provided by the other? What are the restrictions on the use of these plots? In doing so, this work intends to confront the two techniques, examining them simultaneously in order to identify similarities, differences, or relationships that contribute to building a theoretical reference that justifies, or aids, the experimenter's decision about which of the two graphical techniques to use and why. The simulation results show that the half-normal plot is better for judging the effects, while the normal plot is recommended for detecting outliers in the data.
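As an illustration of the technique under comparison, the sketch below computes the seven effects of a hypothetical unreplicated 2^3 design and draws a half-normal plot of their absolute values; the response values are invented and the plotting positions follow a common (i − 0.5)/n convention.

```python
# Half-normal plot of effects from a hypothetical unreplicated 2^3 factorial.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Design matrix in standard order for factors A, B, C (coded -1/+1).
levels = np.array([[a, b, c] for c in (-1, 1) for b in (-1, 1) for a in (-1, 1)])
A, B, C = levels[:, 0], levels[:, 1], levels[:, 2]
X = np.column_stack([A, B, C, A * B, A * C, B * C, A * B * C])
labels = np.array(["A", "B", "C", "AB", "AC", "BC", "ABC"])

y = np.array([60, 72, 54, 68, 52, 83, 45, 80])   # hypothetical responses

# Each effect is its contrast divided by half the number of runs (2^(k-1) = 4).
effects = X.T @ y / (len(y) / 2)

# Ordered |effects| against half-normal quantiles at (i - 0.5)/n.
order = np.argsort(np.abs(effects))
abs_eff = np.abs(effects)[order]
n = len(abs_eff)
q = stats.halfnorm.ppf((np.arange(1, n + 1) - 0.5) / n)

plt.scatter(q, abs_eff)
for qi, ei, lab in zip(q, abs_eff, labels[order]):
    plt.annotate(lab, (qi, ei))
plt.xlabel("Half-normal quantiles")
plt.ylabel("|effect|")
plt.show()
```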

Relevance:

90.00%

Publisher:

Abstract:

Survival models deal with the modeling of time-to-event data. In some situations, however, part of the population may no longer be subject to the event. Models that take this fact into account are called cure rate models. There are few studies about hypothesis tests in cure rate models. Recently a new test statistic, the gradient statistic, has been proposed; it shares the same asymptotic properties as the classic large-sample tests: the likelihood ratio, score, and Wald tests. Some simulation studies have been carried out to explore the behavior of the gradient statistic in finite samples and to compare it with the classic statistics in different models. The main objective of this work is to study and compare the performance of the gradient test and the likelihood ratio test in cure rate models. We first describe the models and present the main asymptotic properties of the tests. We then perform a simulation study based on the promotion time model with Weibull distribution to assess the performance of the tests in finite samples. An application is presented to illustrate the concepts studied.
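For reference, in standard notation (not copied from the thesis), the promotion time cure model of Yakovlev et al. (1993) and Chen et al. (1999) and the gradient statistic can be written as:

```latex
% Population survival with cure fraction e^{-\theta(x)}; F is a proper c.d.f.
% (a Weibull c.d.f. in the simulation study described above).
S_{\mathrm{pop}}(t \mid x) = \exp\{-\theta(x)\, F(t)\}, \qquad
\theta(x) = \exp(x^{\top}\beta), \qquad
\lim_{t \to \infty} S_{\mathrm{pop}}(t \mid x) = e^{-\theta(x)}.

% Gradient statistic, with U the score function, \hat\theta the unrestricted
% and \tilde\theta the restricted maximum likelihood estimators:
S_G = U(\tilde{\theta})^{\top} \, (\hat{\theta} - \tilde{\theta}).
```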

Relevance:

90.00%

Publisher:

Abstract:

In the work reported here, we present theoretical and numerical results on a risk model with interest rate and proportional reinsurance, based on the article "Inequalities for the ruin probability in a controlled discrete-time risk process" by Rosario Romera and Maikol Diasparra (see [5]). Recursive and integral equations, as well as upper bounds for the ruin probability, are given under three different approaches, namely the classical Lundberg inequality, the inductive approach, and the martingale approach. Non-parametric density estimation techniques are used to derive upper bounds for the ruin probability, and the algorithms used in the simulation are presented.
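As a point of reference for the first approach, the classical Lundberg bound on the ruin probability takes the form below, where R is the adjustment coefficient and u the initial surplus (standard notation, not taken from [5]):

```latex
% Classical Lundberg inequality: the ruin probability decays exponentially
% in the initial surplus u, at a rate given by the adjustment coefficient R.
\psi(u) \le e^{-R u}, \qquad u \ge 0.
```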

Relevance:

80.00%

Publisher:

Abstract:

Final Master's dissertation for obtaining the degree of Master in Mechanical Engineering, in the Maintenance and Production profile.

Relevance:

80.00%

Publisher:

Abstract:

The objective of this work is to present the theoretical basis for the problem of learning from examples, following refs. [14], [15] and [16]. Learning from examples can be viewed as the regression problem of approximating a multivalued function from a set of sparse data. This problem is ill-posed, and the classical way of solving it is through regularization theory. Classical regularization theory, as considered here, formulates this regression problem as the variational problem of finding the function f that minimizes the functional $Q[f] = \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - f(x_i)\bigr)^2 + \lambda \lVert f \rVert_K^2$, where $\lVert f \rVert_K^2$ is the norm in a special Hilbert space, called a Reproducing Kernel Hilbert Space (RKHS) and denoted IH, defined by the positive function K, n is the number of example points, and $\lambda$ is the regularization parameter. Under general conditions, the solution of the equation is given by $f(x) = \sum_{i=1}^{n} c_i K(x, x_i)$. The theory presented in this work is in fact the foundation for a more general theory that justifies regularized functionals for learning from an infinite set of data and that can be used to extend the classical regularization framework considerably, effectively combining a functional-analysis perspective with modern advances in probability and statistics theory.
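A minimal numerical sketch of this regularized solution is given below, assuming a Gaussian kernel; the data, kernel, and parameter values are illustrative and not part of the original work.

```python
# Sketch of the regularized solution f(x) = sum_i c_i K(x, x_i): for the
# functional Q[f] above, the coefficients solve (K + n*lambda*I) c = y.
# Kernel choice, data, and lambda are illustrative assumptions.
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    """Positive-definite kernel K(x, z) defining the RKHS."""
    return np.exp(-np.subtract.outer(x, z) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
x_train = np.linspace(0, 2 * np.pi, 30)
y_train = np.sin(x_train) + rng.normal(scale=0.2, size=x_train.size)

lam = 0.1                                               # regularization parameter
n = x_train.size
K = gaussian_kernel(x_train, x_train)
c = np.linalg.solve(K + n * lam * np.eye(n), y_train)   # representer coefficients

x_new = np.linspace(0, 2 * np.pi, 200)
f_new = gaussian_kernel(x_new, x_train) @ c             # f(x) = sum_i c_i K(x, x_i)
print(f_new[:5])
```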

Relevance:

80.00%

Publisher:

Abstract:

This thesis deals with queueing systems, studying their behavior over time and in steady state. The thesis consists of three main chapters. First, some basic concepts of probability, statistics, and stochastic processes are presented, together with the conditions and characteristics required to form a queueing system. We then develop several types of Markovian queueing systems, studying various characteristics of each model, among them the expected number of customers in the system and in the queue and the expected time a customer waits in the system and in the queue once the system is in equilibrium; some graphs and comparisons are also presented. Finally, we give a less detailed treatment of some non-Markovian queueing systems, again attempting to determine the characteristics obtained for the Markovian models.
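For the simplest Markovian system (M/M/1, with arrival rate λ, service rate μ, and traffic intensity ρ = λ/μ < 1), the steady-state quantities mentioned above take the standard closed forms below; these are textbook results, not taken from the thesis.

```latex
% Expected number of customers in the system (L) and in the queue (L_q),
% and expected waiting times in the system (W) and in the queue (W_q):
L = \frac{\rho}{1 - \rho}, \qquad
L_q = \frac{\rho^{2}}{1 - \rho}, \qquad
W = \frac{1}{\mu - \lambda}, \qquad
W_q = \frac{\rho}{\mu - \lambda}.
```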

Relevance:

80.00%

Publisher:

Abstract:

In this work we study and compare two percolation algorithms, one developed by Elias and the other by Newman and Ziff, using theoretical tools of algorithm complexity and an additional algorithm that performs an experimental comparison. The work is divided into three chapters. The first presents the definitions and theorems needed for a more formal mathematical study of percolation. The second presents the techniques used to estimate the complexity of the algorithms, namely worst-case, best-case, and average-case analysis; we use worst-case analysis to estimate the complexity of both algorithms and thus compare them. The last chapter discusses several characteristics of each algorithm and, through the theoretical complexity estimates and the comparison of the execution times of the most important part of each one, compares these important algorithms that simulate percolation.
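A minimal sketch of the weighted union-find step at the heart of the Newman-Ziff site-percolation algorithm is shown below; the lattice bookkeeping is simplified, and details such as spanning detection, bond percolation, and the convolution over occupation probability are omitted.

```python
# Sketch of the union-find (weighted, with path compression) core of the
# Newman-Ziff site-percolation algorithm on a periodic square lattice.
import random

L = 32                     # lattice side; N = L*L sites
N = L * L
EMPTY = -N - 1
parent = [EMPTY] * N       # EMPTY = unoccupied; negative values = -cluster size

def find(i):
    """Find the root of occupied site i, halving the path on the way up."""
    while parent[i] >= 0:
        if parent[parent[i]] >= 0:
            parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(i, j):
    """Merge the clusters of occupied sites i and j (smaller into larger)."""
    ri, rj = find(i), find(j)
    if ri == rj:
        return
    if parent[ri] <= parent[rj]:        # ri holds the larger cluster
        parent[ri] += parent[rj]
        parent[rj] = ri
    else:
        parent[rj] += parent[ri]
        parent[ri] = rj

def neighbors(i):
    x, y = i % L, i // L
    return [((x + 1) % L) + y * L, ((x - 1) % L) + y * L,
            x + ((y + 1) % L) * L, x + ((y - 1) % L) * L]

# Occupy sites one at a time in random order, merging with occupied neighbors.
order = list(range(N))
random.shuffle(order)
for site in order:
    parent[site] = -1                   # new cluster of size 1
    for nb in neighbors(site):
        if parent[nb] != EMPTY:
            union(site, nb)
```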

Relevance:

80.00%

Publisher:

Abstract:

In this work, we study the survival cure rate model proposed by Yakovlev et al. (1993), based on a structure of competing risks concurring to cause the event of interest, and the approach proposed by Chen et al. (1999), in which covariates are introduced to model the amount of risk. We focus on covariates measured with error, considering the use of the corrected score method in order to obtain consistent estimators. A simulation study is carried out to evaluate the behavior of the estimators obtained by this method in finite samples. The simulation aims to identify the impact of measurement error not only on the regression coefficients of the covariates measured with error (Mizoi et al., 2007) but also on the coefficients of the covariates measured without error. We also verify the adequacy of the piecewise exponential distribution for the cure rate model with measurement error. Finally, the model is applied to real data.
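For context, the corrected score idea (Nakamura, 1990) used here can be summarized in standard notation, not copied from the thesis: given a true covariate X observed only through W = X + u, one seeks a corrected score U* satisfying the condition below.

```latex
% The corrected score U^*, computed from the observed (Y, W), has the true
% score U as its conditional expectation, so solving U^*(\beta; Y, W) = 0
% yields consistent estimators of \beta despite the measurement error.
E\{\,U^{*}(\beta; Y, W) \mid Y, X\,\} = U(\beta; Y, X).
```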