926 results for CNPQ::CIENCIAS EXATAS E DA TERRA::PROBABILIDADE E ESTATISTICA
Abstract:
Oil-water separation by flotation is characterized by the interaction between the liquid and gas phases. To understand this process, it is necessary to analyze the physical and chemical properties that govern flotation, defining the nature of the forces acting on the particles. Interface chemistry plays an important role in flotation technology: by dispersing a gas phase into a liquid mixture, the particles of interest attach to air bubbles and are carried to a surface layer, where they can be physically separated. Through the study of the interfacial interactions involved in the system used in this work, it was possible to apply the results to a mathematical model able to determine the probability of flotation, from a perspective oriented to petroleum emulsions such as oil-in-water. The flotation probability terms correlate the collision and attachment between oil particles and air bubbles: the more collisions, the higher the probability of flotation. The attachment probability was analyzed through the Freundlich adsorption isotherm, which represents the probability of adhesion between air bubbles and oil particles. The mathematical scheme for flotation involved the injected air flow, the bubble size and number per second, the volume of the flotation cell, the viscosity of the medium and the concentration of demulsifier. The results showed that the flotation agent developed from castor oil, after variation of pH, salt content, temperature, concentration and water-oil ratio, achieved efficient extraction of oil from water, up to 95%, using concentrations of around 11 ppm of demulsifier. The best results were compared with other commercial products, coded "W" and "Z"; the demulsifying power of Agflot was observed to be equivalent to that of commercial product "W" and superior to that of commercial product "Z".
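The abstract's flotation-probability model is not reproduced in detail, but the two ingredients it names, a collision/attachment decomposition and a Freundlich adsorption isotherm, can be sketched as follows (the function names, parameters and the simple product form are illustrative assumptions, not the thesis's actual equations):

def freundlich(c, k_f, n):
    """Freundlich adsorption isotherm q = K_F * c**(1/n), used here as a
    stand-in for the attachment (adhesion) term described in the abstract."""
    return k_f * c ** (1.0 / n)

def flotation_probability(p_collision, p_attachment):
    """Overall flotation probability as the product of the collision and
    attachment probabilities -- a common decomposition in flotation modelling;
    how each factor depends on air flow, bubble size, cell volume, viscosity
    and demulsifier dose is specific to the thesis and not reproduced here."""
    return p_collision * p_attachment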
Abstract:
Image segmentation aims to subdivide an image into constituent regions or objects that carry some relevant semantic content. This subdivision can also be applied to videos; in that case, the objects appear across the several frames that compose the video. Segmenting an image becomes more complex when it is composed of objects defined by textural features, where color information alone is not a good descriptor. Fuzzy Segmentation is a region-growing segmentation algorithm that uses affinity functions to assign to each element of an image a grade of membership (between 0 and 1) for each object. This work presents a modification of the Fuzzy Segmentation algorithm aimed at improving its time and space complexity. The algorithm was adapted to segment color videos, treating them as 3D volumes. To perform segmentation on videos, either a conventional color model or a hybrid model, obtained by a method that chooses the best channels, was used. The Fuzzy Segmentation algorithm was also applied to texture segmentation by using adaptive affinity functions defined for each object texture. Two types of affinity function were used: one defined using the normal (Gaussian) probability distribution and the other using the Skew Divergence. The latter, a variation of the Kullback-Leibler Divergence, is a measure of the difference between two probability distributions. Finally, the algorithm was tested on some videos and also on texture mosaic images composed of images from the Brodatz album.
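As a point of reference, one common formulation of the Skew Divergence (Lee, 2001) smooths the second distribution with the first before applying the Kullback-Leibler Divergence; a minimal NumPy sketch is given below (the parameterization actually used in the dissertation may differ):

import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def skew_divergence(p, q, alpha=0.99):
    """Skew Divergence: KL of p against a mixture of q and p, which remains
    finite even when q has zero-probability entries."""
    q_mix = alpha * np.asarray(q, float) + (1.0 - alpha) * np.asarray(p, float)
    return kl_divergence(p, q_mix)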
Abstract:
The monitoring of earth dams makes use of visual inspection and instrumentation to identify and characterize the deterioration that compromises the safety of earth dams and associated structures. Visual inspection is subjective and can lead to misinterpretation or omission of important information, and some problems are detected too late. Instrumentation is efficient, but certain technical or operational issues can impose restrictions. Thereby, visual inspection and instrumentation together can still leave gaps in the information. Geophysics offers consolidated, low-cost methods that are non-invasive and non-destructive; they have strong potential and can be used alongside instrumentation. Where visual inspection and instrumentation do not provide all the necessary information, geophysical methods can provide more complete and relevant information. To test these ideas, geophysical acquisitions were performed using ground-penetrating radar (GPR), electrical resistivity, seismic refraction and Refraction Microtremor (ReMi) on the dike of the dam at Sant Llorenç de Montgai, located in the province of Lleida, 145 km from Barcelona, Catalonia. The results confirmed that each of the geophysical methods used responded satisfactorily to the conditions of the earth dike, the anomalies present and the geological features found, such as alluvium and carbonate and evaporite rocks. It was also confirmed that these methods, when used in an integrated manner, reduce the ambiguities of the individual interpretations. They enable improved imaging of the interior of the dikes and of the major geological features, thus inspecting the embankment and its foundation. Consequently, the results obtained in this study demonstrate that these geophysical methods are sufficiently effective for inspecting earth dams and are an important complement to the instrumentation and visual inspection used for dam safety.
Abstract:
In this work, we develop and discuss a scale-free complex network, that is, a network whose connectivity distribution follows a power law. Our work can be summarized as follows: for didactic purposes we begin with random networks, which are related to real and artificial situations, and then comment on scale-free networks as proposed by Barabási-Albert (BA). After that, we discuss an extension of this model in which Bianconi and Barabási (BB) include a quality (fitness) parameter. We also discuss the affinity model (see Almeida et al.). Finally, we present our model, an extension of the affinity model, and the corresponding results. To accomplish this task we modify the preferential attachment rule of the BB model, introducing a factor that weights the attachment probability between the sites of the network. This quantity is given by the difference between the quality of the new site and the quality of the existing ones. This new parameter produces interesting new results: the connectivity distribution follows a special power law, with an appropriate exponent. The temporal evolution of the site connectivity is also computed. In addition, we show the results obtained, via numerical simulation, for the average shortest path and the clustering coefficient of the network generated by our model, that is, by the affinity model.
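A minimal sketch of growth with a fitness/affinity-weighted preferential attachment of the kind the abstract describes is given below; the precise form of the affinity factor, here taken as 1 − |η_new − η_i|, and all parameters are illustrative assumptions rather than the model's actual definition:

import random

def grow_affinity_network(n, m=2, seed_size=3):
    """Each new node j, with quality eta_j drawn uniformly in [0, 1], links to m
    existing nodes chosen with probability proportional to
    degree_i * eta_i * (1 - |eta_j - eta_i|)."""
    eta = [random.random() for _ in range(seed_size)]
    adj = {i: set(range(seed_size)) - {i} for i in range(seed_size)}
    for j in range(seed_size, n):
        eta_j = random.random()
        nodes = list(adj)
        weights = [len(adj[i]) * eta[i] * (1.0 - abs(eta_j - eta[i])) for i in nodes]
        targets = set()
        while len(targets) < m:
            targets.add(random.choices(nodes, weights=weights, k=1)[0])
        adj[j] = targets
        for i in targets:
            adj[i].add(j)
        eta.append(eta_j)
    return adj, eta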
Abstract:
In this work, the study of some complex systems is carried out using two distinct procedures. In the first part, we study the use of the wavelet transform in the analysis and characterization of (multi)fractal time series. We test the reliability of the Wavelet Transform Modulus Maxima (WTMM) method with respect to the multifractal formalism through the calculation of the singularity spectrum of time series whose fractality is well known a priori. Next, we use the WTMM method to study the fractality of lung crackle sounds, a biological time series. Since crackle sounds are due to the opening of pulmonary airways (bronchi, bronchioles and alveoli) that were initially closed, we can obtain information on the cascade of airway openings of the whole lung. Since this phenomenon is associated with the architecture of the pulmonary tree, which displays fractal geometry, the analysis and fractal characterization of this noise may provide important parameters for comparison between healthy lungs and lungs affected by disorders that alter the geometry of the lung tree, such as the obstructive and parenchymal degenerative diseases that occur, for example, in pulmonary emphysema. In the second part, we study a site percolation model on square lattices in which the percolating cluster grows governed by a control rule corresponding to an automatic search method. In this percolation model, which has characteristics of self-organized criticality, the automatic search does not rely on Leath's algorithm; it uses the following control rule: p_{t+1} = p_t + k(R_c − R_t), where p is the site occupation probability, k is a kinetic parameter with 0 < k < 1, and R is the fraction of finite L × L square lattices that percolate. This rule provides a time series corresponding to the dynamical evolution of the system, in particular of the occupation probability p. We then perform a scaling analysis of the signal obtained in this way. The model enables the study of the automatic search method used for site percolation on square lattices, evaluating the dynamics of its parameters as the system approaches the critical point. It shows that the scaling of the time elapsed until the system reaches the critical point and of t_cor, the time required for the system to lose its correlations, are both inversely proportional to k, the kinetic parameter of the control rule. We also verify that the system then exhibits two different time scales: one in which it shows 1/f noise, indicating that it is strongly correlated, and another in which it shows white noise, indicating that the correlations have been lost. Over long time intervals the dynamics of the system shows ergodicity.
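A minimal sketch of the control rule described above, with the spanning criterion, lattice size and R_c chosen purely for illustration, could look like this:

import numpy as np
from scipy.ndimage import label

def spans(p, L, rng):
    """True if an L x L site-percolation lattice at occupation probability p
    has a cluster connecting the top and bottom rows."""
    grid = rng.random((L, L)) < p
    labels, _ = label(grid)
    top = set(labels[0][labels[0] > 0])
    bottom = set(labels[-1][labels[-1] > 0])
    return bool(top & bottom)

def automatic_search(k=0.1, L=64, R_c=0.5, steps=200, trials=20, p0=0.2, seed=0):
    """Drive p with the rule p_{t+1} = p_t + k (R_c - R_t), where R_t is the
    fraction of lattices that percolate at the current p."""
    rng = np.random.default_rng(seed)
    p, history = p0, [p0]
    for _ in range(steps):
        R_t = sum(spans(p, L, rng) for _ in range(trials)) / trials
        p = float(np.clip(p + k * (R_c - R_t), 0.0, 1.0))
        history.append(p)
    return np.array(history)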
Abstract:
Complex systems have stimulated much interest in the scientific community over the last twenty years. Examples in this area are the Domany-Kinzel cellular automaton and the contact process, which are studied in the first chapter of this thesis. We determine the critical behavior of these systems using the spontaneous-search method and short-time dynamics (STD). Our results confirm that the DKCA and the CP belong to the universality class of Directed Percolation. In the second chapter, we study particle diffusion in two models of stochastic sandpiles. We characterize the diffusion through the diffusion constant D, defined through the relation ⟨x²⟩ = 2Dt. The results of our simulations, using finite-size scaling and STD, show that the diffusion constant can be used to study critical properties. Both models belong to the universality class of Conserved Directed Percolation. We also study the mean-square particle displacement as a function of time and characterize its dependence on the initial configuration and on the particle density. In the third chapter, we introduce a computational model, called Geographic Percolation, to study watersheds, fractals with applications in various areas of science. In this model, the sites of a lattice are assigned values between 0 and 1 following a given probability distribution; we order these values, always keeping their locations, and search for the site that makes the lattice percolate. Once this site is found, we remove it from the lattice and search for the next site needed for the lattice to percolate again. We repeat these steps until the lattice is completely occupied. We study the model in 2 and 3 dimensions, and compare the two-dimensional case with lattices formed from real data (Alps and Himalayas).
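For the diffusion-constant estimate, the relation ⟨x²⟩ = 2Dt suggests a straightforward fit of the mean-square displacement against time; a one-dimensional sketch (the sandpile dynamics themselves are not reproduced here) is:

import numpy as np

def diffusion_constant(positions, dt=1.0):
    """Estimate D from <x(t)^2> = 2 D t by a linear least-squares fit of the
    mean-square displacement; `positions` has shape (n_particles, n_steps)."""
    disp2 = (positions - positions[:, :1]) ** 2
    msd = disp2.mean(axis=0)
    t = np.arange(positions.shape[1]) * dt
    slope = np.polyfit(t, msd, 1)[0]
    return slope / 2.0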
Abstract:
In this work we study a new risk model for a firm that is sensitive to its credit quality, proposed by Yang (2003). Recursive equations are obtained for the finite-time ruin probability and for the distribution of the time of ruin, as well as Volterra-type integral equation systems for the ultimate ruin probability, the severity of ruin and the distribution of the surplus before and after ruin.
Abstract:
On-line process control for attributes consists of inspecting a single item out of every m produced. If the examined item conforms, production continues; otherwise, the process is stopped for adjustment. However, in many practical situations the interest lies in monitoring the number of non-conformities among the examined items. In this case, if the number of non-conformities exceeds an upper control limit, the process must be stopped and adjusted. The contribution of this paper is to propose a control system based on the number of non-conformities of the inspected item. Employing properties of an ergodic Markov chain, an expression for the expected cost per item of the control system is obtained and minimized with respect to two parameters: the sampling interval and the upper control limit for the number of non-conformities of the examined item. Numerical examples illustrate the proposed procedure.
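The expected-cost computation rests on a standard property of ergodic Markov chains: the long-run average cost per item equals the per-state costs weighted by the stationary distribution. A hedged sketch is shown below; `build_chain`, the cost vector and the grid search are placeholders, since the paper's actual chain and cost expression are not reproduced here:

import numpy as np

def stationary_distribution(P):
    """Stationary distribution of an ergodic Markov chain with transition matrix P."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def expected_cost_per_item(m, ucl, build_chain):
    """Long-run expected cost per produced item for sampling interval m and
    upper control limit ucl; `build_chain(m, ucl)` must return the transition
    matrix and the vector of per-item costs for each state of the chain."""
    P, cost = build_chain(m, ucl)
    return float(stationary_distribution(P) @ cost)

# The optimal pair (m, ucl) would then come from a grid search, e.g.:
# best = min(((m, u) for m in range(1, 51) for u in range(0, 11)),
#            key=lambda mu: expected_cost_per_item(*mu, build_chain))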
Abstract:
This work studies the asymptotic behavior of Pearson's (1900) statistic, which is the theoretical basis of the well-known chi-square test, also commonly denoted the χ² test. We first study the behavior of the distribution of Pearson's (1900) chi-square statistic for a sample {X1, X2, ..., Xn} when n → ∞ and p_i = p_i0 for all n. We then detail the arguments used in Billingsley (1960), which prove the convergence in distribution of a statistic, similar to Pearson's, based on a sample from a stationary, ergodic Markov chain with finite state space S.
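For reference, Pearson's statistic compares observed and expected category counts, X² = Σ_i (O_i − E_i)²/E_i, and is compared with a chi-square distribution; a minimal sketch (illustrative only, not the Markov-chain version studied in the second part) is:

import numpy as np
from scipy.stats import chi2

def pearson_chi2(observed, expected_probs):
    """Pearson chi-square statistic and asymptotic p-value for a multinomial
    sample with hypothesized cell probabilities `expected_probs`."""
    observed = np.asarray(observed, float)
    expected = observed.sum() * np.asarray(expected_probs, float)
    stat = float(((observed - expected) ** 2 / expected).sum())
    df = len(observed) - 1
    return stat, float(chi2.sf(stat, df))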
Abstract:
In this work we study Hidden Markov Models with both finite and general state spaces. In the finite case, the forward and backward algorithms are considered and the probability of a given observed sequence is computed. Next, we use the EM algorithm to estimate the model parameters. In the general case, kernel estimators are used to build a sequence of estimators that converges in L1-norm to the density function of the observable process.
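A minimal sketch of the forward recursion for a finite-state HMM, which yields the probability of an observed sequence, is shown below (notation and variable names are illustrative, not the dissertation's):

import numpy as np

def forward(pi, A, B, obs):
    """Probability of the observation sequence `obs` under an HMM with initial
    distribution pi (N,), transition matrix A (N, N) and emission matrix B (N, M);
    obs is a sequence of observation indices."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())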
Abstract:
In this work we studied the consistency of a class of kernel estimates of a density f(·) for Markov chains with general state space E ⊂ R^d. This study is divided into two parts: in the first, f(·) is a stationary density of the chain, and in the second, f(x)ν(dx) is the limit distribution of a geometrically ergodic chain.
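The kernel estimators in question have the usual form f̂_n(x) = (n h^d)^{-1} Σ_i K((x − X_i)/h) evaluated along the observed chain; a one-dimensional Gaussian-kernel sketch (the general-state-space case E ⊂ R^d only changes the normalization) is:

import numpy as np

def kde(chain, x, h):
    """Gaussian kernel density estimate at points `x` from an observed chain
    {X_1, ..., X_n}: f_n(x) = (1/(n h)) * sum_i K((x - X_i)/h), 1-d case."""
    chain = np.asarray(chain, float)[:, None]
    x = np.asarray(x, float)[None, :]
    u = (x - chain) / h
    return np.exp(-0.5 * u ** 2).sum(axis=0) / (chain.shape[0] * h * np.sqrt(2 * np.pi))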
Abstract:
Genetic Algorithms (GA) and Simulated Annealing (SA) are algorithms built to find the maximum or minimum of a function that represents some characteristic of the process being modeled. These algorithms have mechanisms that allow them to escape from local optima; however, their evolution in time proceeds in completely different ways. In its search process, SA works with a single point, always generating from it a new candidate solution that is tested and may or may not be accepted, whereas GA works with a set of points, called a population, from which it generates another population that is always accepted. What the two algorithms have in common is that the way the next point or the next population is generated obeys stochastic properties. In this work we show that the mathematical theory describing the evolution of these algorithms is the theory of Markov chains: GA is described by a homogeneous Markov chain while SA is described by a non-homogeneous Markov chain. Finally, some computational examples are presented comparing the performance of the two algorithms.
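As an illustration of the two search mechanisms compared above, a minimal Simulated Annealing sketch is given below; the cooling schedule and acceptance rule are the textbook Metropolis form, not necessarily the variants used in the computational examples:

import math, random

def simulated_annealing(f, x0, neighbor, T0=1.0, cooling=0.995, steps=10_000):
    """Minimize f starting from x0.  A candidate drawn from neighbor(x) is
    accepted with the Metropolis probability, so uphill moves are sometimes
    taken -- the mechanism that lets SA escape local optima.  Because the
    temperature changes at every step, the resulting Markov chain is
    non-homogeneous, as stated in the abstract."""
    x, fx, T = x0, f(x0), T0
    for _ in range(steps):
        y = neighbor(x)
        fy = f(y)
        if fy <= fx or random.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
        T *= cooling
    return x, fx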
Abstract:
In this work, we present an application of risk theory in the following scenario: in each period of time the capital of the insurance company changes, and the outcome of a two-state Markov chain establishes whether the company pays a benefit to one of its policyholders or receives a premium c > 0 paid by someone buying a new policy. At the end we determine, once again by means of a recursive equation for the expectation, the time of ruin for this company.
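A Monte Carlo sketch of the scenario described above is given below; the premium c, the benefit amount, the transition matrix and the ruin criterion are all illustrative assumptions, and the recursive-expectation calculation itself is not reproduced:

import numpy as np

def finite_time_ruin_probability(u0, c, b, P, horizon, n_sims=100_000, seed=0):
    """Estimate the probability that the surplus falls below zero within
    `horizon` periods when a two-state Markov chain (2x2 transition matrix P)
    decides, each period, whether the company receives a premium c > 0
    (state 0) or pays a benefit b > 0 (state 1)."""
    rng = np.random.default_rng(seed)
    ruined = 0
    for _ in range(n_sims):
        u, s = u0, 0
        for _ in range(horizon):
            s = rng.choice(2, p=P[s])
            u += c if s == 0 else -b
            if u < 0:
                ruined += 1
                break
    return ruined / n_sims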
Abstract:
In percolation theory, functions such as the probability that a given site belongs to the infinite cluster, the average cluster size, etc., are described through power laws and critical exponents. This dissertation uses a method called finite-size scaling to provide estimates of those exponents. The dissertation is divided into four parts. The first briefly presents the main results of site percolation theory in d = 2 dimensions; in addition, some quantities important for the determination of the critical exponents and for understanding the phase transitions are defined. The second gives an introduction to the fractal concept, dimension and classification. With the basis of our study established, the third part discusses scaling theory, which relates the critical exponents to the quantities described in Chapter 2. In the last part, through the finite-size scaling method, we determine two of the critical exponents and, based on them, use the scaling relations of the previous chapter to determine the remaining critical exponents.
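For reference, the standard finite-size-scaling relations that such an analysis relies on (textbook percolation results, not quoted from the dissertation) read, in LaTeX notation:

P_\infty(p, L) \simeq L^{-\beta/\nu}\, \tilde{F}\!\left[(p - p_c)\, L^{1/\nu}\right],
\qquad \xi \sim |p - p_c|^{-\nu},

so that at $p = p_c$ the order parameter decays as $P_\infty \sim L^{-\beta/\nu}$ and the ratio $\beta/\nu$ can be read off from simulations at several lattice sizes $L$.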
Abstract:
We consider prediction techniques based on accelerated failure time models with random effects for correlated survival data. Besides the Bayesian approach through the empirical Bayes estimator, we also discuss the use of a classical predictor, the Empirical Best Linear Unbiased Predictor (EBLUP). To illustrate the use of these predictors, we consider applications to a real data set from the oil industry. More specifically, the data set involves the mean time between failures of petroleum-well equipment of the Bacia Potiguar. The goal of this study is to predict the risk/probability of failure in order to support a preventive maintenance program. The results show that both methods are suitable for predicting future failures, supporting good decisions regarding the allocation and economy of resources for preventive maintenance.