801 results for CNPQ::CIENCIAS EXATAS E DA TERRA::MATEMATICA::MATEMATICA APLICADA


Relevance:

100.00%

Publisher:

Abstract:

Genetic Algorithms (GA) and Simulated Annealing (SA) are algorithms designed to find the maximum or minimum of a function that represents some characteristic of the process being modeled. Both algorithms have mechanisms that allow them to escape local optima; however, they evolve over time in completely different ways. In its search process, SA works with a single point, always generating from it a new candidate solution that is tested and may or may not be accepted, whereas GA works with a set of points, called a population, from which it generates another population that is always accepted. What the two algorithms have in common is that the way the next point or the next population is generated obeys stochastic properties. In this work we show that the mathematical theory describing the evolution of these algorithms is the theory of Markov chains. The GA is described by a homogeneous Markov chain, while the SA is described by a non-homogeneous Markov chain. Finally, some computational examples comparing the performance of the two algorithms are presented
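As a concrete illustration of the non-homogeneous chain behind SA, here is a minimal sketch: the acceptance rule depends on the step index through the temperature, which is exactly what makes the chain non-homogeneous. The objective function, cooling schedule, proposal width and step count are illustrative assumptions, not taken from the thesis.

```python
import math
import random

def simulated_annealing(f, x0, steps=5000, t0=1.0, seed=0):
    """Minimize f by SA: the acceptance rule changes with the step
    index through the temperature, so the chain is non-homogeneous."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(1, steps + 1):
        t = t0 / math.log(k + 1)            # logarithmic cooling schedule
        y = x + rng.gauss(0.0, 0.5)         # propose a neighbouring point
        fy = f(y)
        # accept improvements always, worse moves with prob exp(-delta/t)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# toy multimodal objective whose global minimum sits at x = 0
f = lambda x: x * x + 10.0 * (1.0 - math.cos(x))
xmin, fmin = simulated_annealing(f, x0=8.0)
```

A homogeneous-chain GA would differ precisely in that its transition kernel (selection, crossover, mutation) does not change with the iteration index.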

Abstract:

In this work, we present an application of risk theory in the following scenario: in each period of time there is a change in the capital of the insurance company, and the outcome of a two-state Markov chain establishes whether the company pays a benefit to one of its policyholders or receives an amount c > 0 paid by someone buying a new policy. At the end we determine, once again by the recursive equation for the expectation, the ruin time for this company
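A simulation sketch of this scenario, with a two-state chain driving the capital increments (the benefit, premium and transition probabilities below are illustrative assumptions, not values from the thesis):

```python
import random

def ruin_time(u, b, c, p_stay, horizon=10_000, seed=1):
    """Simulate the discrete-time surplus process: a two-state Markov
    chain decides, each period, whether the company pays a benefit b
    or receives an amount c.  Returns the ruin time (or None)."""
    rng = random.Random(seed)
    state = 0                      # state 0 -> receive c, state 1 -> pay b
    capital = u
    for t in range(1, horizon + 1):
        capital += c if state == 0 else -b
        if capital < 0:
            return t               # first period with negative capital
        # two-state transition: stay with prob p_stay, switch otherwise
        if rng.random() >= p_stay:
            state = 1 - state
    return None                    # no ruin within the simulated horizon

# illustrative parameters: initial capital 10, benefit 3, premium 1
t_ruin = ruin_time(u=10, b=3, c=1, p_stay=0.6)
```

Averaging `ruin_time` over many seeds gives a Monte Carlo counterpart of the expected ruin time computed recursively in the work.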

Abstract:

In Percolation Theory, functions such as the probability that a given site belongs to the infinite cluster, the average cluster size, etc., are described through power laws and critical exponents. This dissertation uses a method called Finite Size Scaling to provide an estimate of those exponents. The dissertation is divided into four parts. The first briefly presents the main results of Site Percolation Theory in dimension d = 2. In addition, some quantities that are important for determining the critical exponents and for understanding the phase transitions are defined. The second gives an introduction to the fractal concept, its dimension and classification. With the basis of our study established, the third part presents the Scaling Theory, which relates the critical exponents to the quantities described in Chapter 2. In the last part, through the Finite Size Scaling method, we determine two of the critical exponents and, based on them, use the scaling relations of the previous chapter to determine the remaining critical exponents
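Finite-size quantities of the kind fed into a Finite Size Scaling analysis can be estimated by direct Monte Carlo. The sketch below estimates the probability that a site-percolation configuration on an L x L grid contains a top-to-bottom spanning cluster (a finite-size proxy for the infinite cluster); grid size, occupation probabilities and trial count are illustrative assumptions.

```python
import random
from collections import deque

def spans(grid):
    """BFS from occupied sites in the top row; True if an occupied
    cluster reaches the bottom row (finite-size spanning event)."""
    n = len(grid)
    seen = set((0, j) for j in range(n) if grid[0][j])
    queue = deque(seen)
    while queue:
        i, j = queue.popleft()
        if i == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and grid[a][b] and (a, b) not in seen:
                seen.add((a, b))
                queue.append((a, b))
    return False

def spanning_probability(p, n=24, trials=200, seed=2):
    """Monte Carlo estimate of the spanning probability at occupation p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        hits += spans(grid)
    return hits / trials

# below and above the d = 2 site-percolation threshold (~0.593)
low, high = spanning_probability(0.45), spanning_probability(0.70)
```

Repeating this for several grid sizes L and collapsing the curves is precisely the Finite Size Scaling step used to extract the exponents.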

Abstract:

We consider prediction techniques based on accelerated failure time models with random effects for correlated survival data. Besides the Bayesian approach through the empirical Bayes estimator, we also discuss the use of a classical predictor, the Empirical Best Linear Unbiased Predictor (EBLUP). In order to illustrate the use of these predictors, we consider applications to a real data set from the oil industry. More specifically, the data set involves the mean time between failures of petroleum-well equipment in the Bacia Potiguar. The goal of this study is to predict the risk/probability of failure in order to support a preventive maintenance program. The results show that both methods are suitable for predicting future failures, leading to good decisions regarding the allocation and economy of resources for preventive maintenance.

Abstract:

In this work, we study the strong consistency of a class of estimators of the transition density of a Markov chain with general state space E ⊂ R^d. The strong consistency of the estimators of the transition density is obtained from the strong consistency of the kernel estimators of both the marginal density p(·) of the chain and the joint density q(·, ·). The Markov chain is assumed to be homogeneous, uniformly ergodic and to possess a stationary density p(·)
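A minimal sketch of such a kernel estimator: the transition density is estimated as the ratio of the kernel estimate of the joint density of consecutive pairs to that of the marginal density. The Gaussian kernel, the bandwidth and the AR(1) chain standing in for a general ergodic chain are illustrative assumptions.

```python
import math
import random

def gauss_kernel(u, h):
    """Gaussian kernel with bandwidth h."""
    return math.exp(-0.5 * (u / h) ** 2) / (h * math.sqrt(2 * math.pi))

def transition_density(chain, x, y, h=0.3):
    """Kernel estimate q_hat(x, y) / p_hat(x) of the transition density:
    joint density of consecutive pairs over the marginal density."""
    pairs = list(zip(chain[:-1], chain[1:]))
    q_hat = sum(gauss_kernel(x - a, h) * gauss_kernel(y - b, h)
                for a, b in pairs) / len(pairs)
    p_hat = sum(gauss_kernel(x - a, h) for a in chain[:-1]) / len(pairs)
    return q_hat / p_hat if p_hat > 0 else 0.0

# AR(1) chain as an illustrative ergodic Markov chain on R
rng = random.Random(3)
chain = [0.0]
for _ in range(2000):
    chain.append(0.5 * chain[-1] + rng.gauss(0.0, 1.0))

dens = transition_density(chain, x=0.0, y=0.0)
```

For this chain the true transition density at (0, 0) is the standard normal density at 0, about 0.399, which the estimate should approach as the sample grows and h shrinks.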

Abstract:

In production lines, the entire process is subject to unexpected events that may degrade production quality and thus cause losses to the manufacturer. Identifying such causes and removing them is the task of process management. The on-line control system consists of the periodic inspection of every m-th produced item. Once any of those items is qualified as non-conforming, it is assumed that a change in the non-conforming fraction of the items has occurred, and the process is then stopped for adjustment. This work is an extension of Quinino & Ho (2010), and its main objective is to monitor a process through the on-line quality control of the number of non-conformities in the inspected item. The decision strategy used to verify whether the process is under control is directly associated with the limits of the control chart for the number of non-conformities of the process. A policy of preventive adjustments is incorporated in order to enlarge the conforming fraction of the process. With the help of the R software, a sensitivity analysis of the proposed model is carried out, showing in which situations it is most advantageous to execute the preventive adjustment
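The decision rule described above can be sketched with a standard c-chart for counts of non-conformities: a center line at the average count with three-sigma Poisson limits. The inspection history and the counts below are illustrative, not data from Quinino & Ho (2010).

```python
def c_chart_limits(counts):
    """Control limits for the number of non-conformities per inspected
    item (c chart): center line c_bar with 3-sigma Poisson limits."""
    c_bar = sum(counts) / len(counts)
    half = 3 * c_bar ** 0.5            # Poisson: variance equals the mean
    return max(0.0, c_bar - half), c_bar, c_bar + half

def out_of_control(count, limits):
    """True when a count falls outside the control limits."""
    lcl, _, ucl = limits
    return count < lcl or count > ucl

# illustrative in-control history and one suspicious inspection
history = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]
limits = c_chart_limits(history)
alarm = out_of_control(12, limits)
```

In the on-line scheme, an alarm triggers the stop-and-adjust step, while the preventive-adjustment policy acts even without an alarm, after a fixed number of inspections.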

Abstract:

Two-level factorial designs are widely used in industrial experimentation. However, a design with many factors requires a large number of runs to perform the experiment, and many replications of the treatments may not be feasible, considering limitations of resources and time, which makes the experiment expensive. In such cases, unreplicated designs are used. With only one replicate, however, there is no internal estimate of the experimental error with which to judge the significance of the observed effects. One possible solution to this problem is to use normal plots or half-normal plots of the effects. Many experimenters use the normal plot, while others prefer the half-normal plot, often, in both cases, without justification. The controversy about the use of these two graphical techniques motivates this work, since there is no record of a formal procedure or statistical test that indicates which one is best. The choice between the two plots seems to be a subjective issue. The central objective of this master's thesis is, then, to perform an experimental comparative study of the normal plot and the half-normal plot in the context of the analysis of unreplicated 2^k factorial experiments. This study involves the construction of simulated scenarios, in which the performance of the plots in detecting significant effects and in identifying outliers is evaluated, in order to answer the following questions: Can one plot be better than the other? In which situations? What kind of information does one plot add to the analysis of the experiment that might complement the information provided by the other? What are the restrictions on the use of these plots?
With this, the work confronts the two techniques, examining them simultaneously in order to identify similarities, differences or relationships that contribute to the construction of a theoretical reference that justifies, or aids, the experimenter's decision about which of the two graphical techniques to use and the reason for that use. The simulation results show that the half-normal plot is better for judging the effects, while the normal plot is recommended for detecting outliers in the data
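Both plots rest on the same ingredients: contrast-based effects from the unreplicated design, then ordered (absolute) effects plotted against (half-)normal quantiles, with points far off the line flagged as significant. A minimal half-normal computation; the 2^3 response below is invented for illustration, with factor A made dominant on purpose.

```python
import math
from itertools import combinations, product
from statistics import NormalDist

def effects_2k(y, names):
    """Main and interaction effects of an unreplicated 2^k design.
    Runs in y follow product() order: the last factor changes fastest."""
    k = len(names)
    rows = list(product([-1, 1], repeat=k))
    eff = {}
    for r in range(1, k + 1):
        for combo in combinations(range(k), r):
            label = "".join(names[i] for i in combo)
            contrast = sum(yi * math.prod(row[i] for i in combo)
                           for yi, row in zip(y, rows))
            eff[label] = 2 * contrast / len(rows)
    return eff

def half_normal_coords(eff):
    """Ordered |effects| paired with half-normal quantiles; points far
    above the line through the origin flag significant effects."""
    m = len(eff)
    nd = NormalDist()
    ordered = sorted(eff.items(), key=lambda kv: abs(kv[1]))
    return [(nd.inv_cdf(0.5 + 0.5 * (i - 0.5) / m), lab, abs(v))
            for i, (lab, v) in enumerate(ordered, start=1)]

# illustrative 2^3 response dominated by factor A
y = [10, 11, 9, 10, 20, 21, 19, 20]
eff = effects_2k(y, "ABC")
coords = half_normal_coords(eff)
```

The normal plot uses the signed effects against full normal quantiles instead; that signed view is what makes it more informative for outliers, as the simulation study concludes.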

Abstract:

In survival analysis, the response is usually the time until the occurrence of an event of interest, called the failure time. The main characteristic of survival data is the presence of censoring, which is a partial observation of the response. Several models occupy an important position in this setting by properly fitting many practical situations, among which we can mention the Weibull model. Marshall-Olkin extended distributions offer a generalization of a baseline distribution that enables greater flexibility in fitting lifetime data. This work presents a simulation study that compares the gradient test and the likelihood ratio test using the Marshall-Olkin extended Weibull distribution. As a result, there is only a small advantage for the likelihood ratio test

Abstract:

In this work we study accelerated failure time generalized Gamma regression models under a unified approach. The models attempt to estimate simultaneously the effects of covariates on the acceleration/deceleration of the timing of a given event and on the surviving fraction. The method is implemented in the free statistical software R. Finally, the model is applied to a real dataset concerning the time until the return of the disease in patients diagnosed with breast cancer

Abstract:

The problem treated in this dissertation is to establish boundedness for the iterates of an iterative algorithm that applies, at each step, an orthogonal projection onto a straight line chosen from a fixed (possibly infinite) family of lines, allowing arbitrary order in applying the projections. This problem was analyzed in a paper by Barany et al. in 1994, which found a necessary and sufficient condition in the case d = 2 and further analyzed the case d > 2 under some technical conditions. However, that paper uses non-trivial intuitive arguments and its proofs lack sufficient rigor. In this dissertation we discuss and strengthen the results of that paper, in order to complete and simplify its proofs
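The iteration itself is easy to state in code. The sketch below uses an illustrative family of three lines in R^2 that all pass through the origin, so boundedness is immediate (each projection is norm non-increasing); the non-trivial cases studied in the dissertation involve families without such a common point.

```python
import math
import random

def project(x, a, u):
    """Orthogonal projection of x onto the line {a + t*u}, u a unit vector."""
    t = sum((xi - ai) * ui for xi, ai, ui in zip(x, a, u))
    return [ai + t * ui for ai, ui in zip(a, u)]

# illustrative family: three lines in R^2 that all contain the origin
s = math.sqrt(0.5)
lines = [
    ([0.0, 0.0], [1.0, 0.0]),   # the x-axis
    ([0.0, 1.0], [0.0, 1.0]),   # the y-axis (written with a shifted base point)
    ([0.0, 0.0], [s, s]),       # the line y = x
]

rng = random.Random(4)
x = [5.0, -3.0]
norms = []
for _ in range(1000):
    a, u = lines[rng.randrange(len(lines))]   # arbitrary order of projections
    x = project(x, a, u)
    norms.append(math.hypot(*x))
```

Here the iterates not only stay bounded but contract toward the common point; the question answered by Barany et al. is when boundedness survives for arbitrary families and arbitrary projection orders.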

Abstract:

This work proposes a new control chart to monitor a process mean, employing a combined npx-X control chart. Basically, the procedure consists of splitting the sample of size n into two sub-samples of sizes n1 and n2 determined by an optimization search. The sampling occurs in two stages. In the first stage, the units of the sub-sample of size n1 are evaluated by attributes and plotted on the npx control chart. If this chart signals, the units of the second sub-sample are measured and the monitored statistic is plotted on the X control chart (second stage). If both control charts signal, the process is stopped for adjustment. The possibility of not inspecting all n items by variables may reduce not only the cost but also the time spent examining the sampled items. The performances of the current proposal and of the individual X and npx control charts are compared. In this study, the proposed procedure presents many competitive options relative to the X control chart for a given sample size n and shift from the target mean. The average time to signal (ATS) of the current proposal being lower than the values calculated for an individual X control chart points out that the combined control chart is an efficient tool for monitoring the process mean.
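A minimal two-stage decision rule in the spirit of the combined npx-X chart. All constants below are illustrative; in the actual proposal, n1, n2 and the chart limits come from an optimization search.

```python
from statistics import mean

def combined_chart(attr_units, measured, np_limit, mu0, sigma, L=3.0):
    """Two-stage combined npx-X decision rule (constants illustrative):
    stage 1 counts non-conforming units (1s) in sub-sample n1 against
    the npx limit; only if it signals are the n2 units of stage 2
    measured and their mean compared with mu0 +/- L*sigma/sqrt(n2)."""
    if sum(attr_units) <= np_limit:
        return False                    # stage 1 silent: no adjustment
    n2 = len(measured)
    half = L * sigma / n2 ** 0.5
    xbar = mean(measured)
    return xbar < mu0 - half or xbar > mu0 + half

# illustrative call: 2 of 5 attribute units non-conforming, mean shifted up
adjust = combined_chart([1, 1, 0, 0, 0], [10.9, 11.2, 10.8, 11.1],
                        np_limit=1, mu0=10.0, sigma=0.5)
```

Because the n2 measurements are taken only when stage 1 signals, most samples cost only the cheap attribute inspection, which is the source of the cost and time savings claimed above.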

Abstract:

Survival models deal with the modeling of time-to-event data. However, in some situations part of the population may no longer be subject to the event. Models that take this fact into account are called cure rate models. There are few studies about hypothesis tests in cure rate models. Recently, a new test statistic, the gradient statistic, has been proposed. It shares the same asymptotic properties as the classic large-sample tests: the likelihood ratio, score and Wald tests. Some simulation studies have been carried out to explore the behavior of the gradient statistic in finite samples and to compare it with the classic statistics in different models. The main objective of this work is to study and compare the performance of the gradient test and the likelihood ratio test in cure rate models. We first describe the models and present the main asymptotic properties of the tests. We then perform a simulation study based on the promotion time model with Weibull distribution to assess the performance of the tests in finite samples. An application is presented to illustrate the studied concepts
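The relation between the two statistics is easy to show in a toy model. Below, a test on the rate of an exponential sample stands in for the promotion time cure rate model actually used in the work: `grad` is the gradient statistic U(lam0)*(lam_hat - lam0) and `lr` is the likelihood ratio statistic; both are asymptotically chi-squared with one degree of freedom under H0.

```python
import math
import random

def gradient_and_lr(x, lam0):
    """Gradient statistic U(lam0)*(lam_hat - lam0) and likelihood-ratio
    statistic 2*(l(lam_hat) - l(lam0)) for H0: rate = lam0 in an
    exponential model (a simple stand-in, not the cure rate model)."""
    n, s = len(x), sum(x)
    lam_hat = n / s                       # MLE of the exponential rate
    score0 = n / lam0 - s                 # score function at lam0
    grad = score0 * (lam_hat - lam0)
    loglik = lambda lam: n * math.log(lam) - lam * s
    lr = 2 * (loglik(lam_hat) - loglik(lam0))
    return grad, lr

rng = random.Random(5)
x = [rng.expovariate(1.0) for _ in range(200)]   # data generated under H0
grad, lr = gradient_and_lr(x, lam0=1.0)
```

Note that the gradient statistic needs neither the information matrix (unlike Wald and score) nor a second likelihood maximization beyond the MLE, which is its practical appeal in cure rate models.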

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

Abstract:

In the work reported here, we present theoretical and numerical results about a risk model with interest rate and proportional reinsurance, based on the article "Inequalities for the ruin probability in a controlled discrete-time risk process" by Rosario Romera and Maikol Diasparra (see [5]). Recursive and integral equations, as well as upper bounds for the ruin probability, are given considering three different approaches, namely the classical Lundberg inequality, the inductive approach and the martingale approach. Non-parametric density estimation techniques are used to derive upper bounds for the ruin probability, and the algorithms used in the simulation are presented
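In its simplest discrete-time form, the classical Lundberg approach reduces to finding the adjustment coefficient R > 0 solving E[exp(-R * step)] = 1, where `step` is the per-period capital increment, and then bounding the ruin probability by psi(u) <= exp(-R*u). A sketch with an illustrative two-point step distribution (the model in the work also involves interest and reinsurance, which this omits):

```python
import math

def adjustment_coefficient(step_dist, lo=1e-6, hi=10.0, tol=1e-10):
    """Solve E[exp(-R * step)] = 1 for R > 0 by bisection, where
    step_dist lists (increment, probability) pairs."""
    def g(r):
        return sum(p * math.exp(-r * s) for s, p in step_dist) - 1.0
    # g(0) = 0 and g'(0) = -E[step] < 0 under positive loading,
    # and g is convex, so the positive root is bracketed once g(hi) > 0
    while g(hi) < 0:
        hi *= 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# illustrative step: premium 1 each period, claim 2 with probability 0.4
steps = [(1.0, 0.6), (-1.0, 0.4)]      # (increment, probability)
R = adjustment_coefficient(steps)
bound = math.exp(-R * 10)              # Lundberg bound psi(10) <= e^{-10R}
```

For this two-point step the root is available in closed form, R = ln(1.5), which makes the bisection easy to check; with an estimated claim density, the expectation inside `g` is replaced by an integral against the kernel estimate, as done in the work.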

Abstract:

In this work we present a mathematical and computational model of electrokinetic phenomena in an electrically charged porous medium. We consider the porous medium as composed of three different scales (nanoscopic, microscopic and macroscopic). On the microscopic scale the domain is composed of a porous matrix and a solid phase. The pores are filled with an aqueous phase consisting of fully diluted ionic solutes, and the solid matrix consists of electrically charged particles. Initially we present the mathematical model that governs the electrical double layer, in order to quantify the electric potential, the electric charge density, the ion adsorption and the chemical adsorption at the nanoscopic scale. Then we derive the microscopic model, in which the adsorption of ions due to the electric double layer, the protonation/deprotonation reactions and the zeta potential obtained in the nanoscopic modeling appear at the microscopic scale through interface conditions in the Stokes and Nernst-Planck equations, which govern, respectively, the movement of the aqueous solution and the transport of ions. We carry out the upscaling of the nano/microscopic problem using the homogenization technique for periodic structures, deducing the macroscopic model together with the respective cell problems for the effective parameters of the macroscopic equations. Considering a clayey porous medium consisting of kaolinite clay plates distributed in parallel, we rewrite the macroscopic model in a one-dimensional version. Finally, using a sequential algorithm, we discretize the macroscopic model via the finite element method, together with the iterative Picard method for the nonlinear terms. Numerical simulations in the transient regime with variable pH in the one-dimensional case are obtained, aiming at the computational modeling of the electroremediation process of contaminated clay soils