117 results for Probabilidade geometrica
Resumo:
Bayesian networks are powerful tools, as they represent probability distributions as graphs and can handle the uncertainties of real systems. Since the last decade there has been special interest in learning network structures from data. However, learning the best network structure is an NP-hard problem, so many heuristic algorithms for generating network structures from data have been created. Many of these algorithms use score metrics to generate the network model. This thesis compares three of the most used score metrics. The K2 algorithm and two benchmark networks, ASIA and ALARM, were used to carry out the comparison. Results show that score metrics whose hyperparameters strengthen the tendency to select simpler network structures perform better than score metrics with a weaker tendency to do so, for both the Heckerman-Geiger and the modified MDL metrics. The Heckerman-Geiger Bayesian score metric works better than MDL with large datasets, and MDL works better than Heckerman-Geiger with small datasets. The modified MDL gives results similar to Heckerman-Geiger for large datasets and close to MDL for small datasets, with a stronger tendency to select simpler network structures.
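A minimal sketch of the kind of decomposable, penalized score discussed above (an MDL/BIC-style metric for one node of a candidate structure), not the thesis's implementation of the Heckerman-Geiger or modified MDL metrics; the function and variable names and the tiny binary dataset are assumptions for illustration.

```python
# Sketch of an MDL-style node score: log-likelihood minus a complexity penalty.
# Hypothetical names and data; binary variables assumed.
import math
from collections import Counter

def mdl_node_score(data, node, parents, n_states=2):
    """Penalized log-likelihood of `node` given `parents` for rows of `data`."""
    n = len(data)
    joint = Counter(tuple(row[p] for p in parents) + (row[node],) for row in data)
    parent_cfg = Counter(tuple(row[p] for p in parents) for row in data)
    loglik = sum(c * math.log(c / parent_cfg[key[:-1]]) for key, c in joint.items())
    n_params = (n_states - 1) * (n_states ** len(parents))   # free parameters of the CPT
    penalty = 0.5 * math.log(n) * n_params                   # MDL / BIC complexity term
    return loglik - penalty

# Tiny example: does adding X (column 0) as a parent of Y (column 1) improve the score?
data = [(0, 0), (0, 0), (1, 1), (1, 1), (1, 0), (0, 1)]
print(mdl_node_score(data, node=1, parents=()),    # Y with no parents
      mdl_node_score(data, node=1, parents=(0,)))  # Y with parent X
```

A structure-search procedure such as K2 would call a score like this for each candidate parent set and keep the parents that maximize it; a heavier penalty term corresponds to the stronger preference for simpler structures described in the abstract.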
Resumo:
The competitiveness of trade, driven by the greater availability of products with lower quality and cost, has created a new reality of industrial production with small margins for error. Deviations from specification during production cannot be ruled out; uncertainties can occur statistically. Consumers, in Brazil and worldwide, are supported by consumer protection codes in lawsuits against poor product quality. An automobile is composed of various systems and thousands of constituent parts, which increases the likelihood of failure. The dynamic and safety systems are critical with respect to the consequences of possible failures. Investigating a failure offers the possibility of learning and contributing to various improvements. The main purpose of this work is to develop a systematic, specific methodology for investigating the root cause of the failure that occurred at an axle end of the front suspension of an automobile, and to perform comparative analyses between data from the fractured part and the design information. The research was based on a failure generated in an automotive suspension system involved in a mechanical lawsuit that resulted in property and personal damages. In investigations concerning the analysis of mechanical failures, knowledge of materials engineering plays a crucial role, since it enables the application of materials-characterization techniques, relating the technical attributes required of a given part to the structure of its manufacturing material, thus providing a greater scientific contribution to the work. The specific methodology developed follows its own flowchart. In the early phase, the data in the records and information on those involved were collected. The following laboratory analyses were performed: macrography of the fracture; micrography of the initial and final fracture with SEM (Scanning Electron Microscopy); phase analysis with optical microscopy; Brinell hardness and Vickers microhardness measurements; quantitative and qualitative chemical analysis using X-ray fluorescence and optical spectroscopy for carbon; and a qualitative study of the state of stress. Field data were also collected. In the analyses, the values measured on the fractured and stock parts were compared with the design values. After the investigation, it was concluded that: the developed methodology systematized the investigation and enabled cross-checking of data, thus minimizing the probability of diagnostic error; the morphology of the fracture indicates failure by the fatigue mechanism at a geometrically propitious location, a stress concentrator; the part was subjected to low stresses, judging by the sectional area of the final fracture; the manufacturing material of the fractured part has low ductility; the component fractured earlier than recommended by the manufacturer; the percentages of C, Si, Mn and Cr in the fractured part differ from the design values; the upper-limit hardness value of the fractured part is higher than the design value; and there is no manufacturing uniformity between the stock and fractured parts. The work will contribute to optimizing the conduct of actions in mechanical engineering judicial expert investigations.
Resumo:
The oil industry's need to produce with maximum efficiency, without neglecting safety and environmental aspects, encourages process optimization. It drives companies to seek excellence in the acquisition of equipment, ensuring quality without compromising the safety of facilities and people. Knowing the reliability of equipment, and what it represents for a system, is fundamental to a production strategy that seeks the maximum return on investment. Reliability analysis techniques have been increasingly applied in industry as a strategy for predicting failure likelihood and ensuring the integrity of processes. Several reliability theories underlie the decision to use stochastic calculations to estimate equipment failure. This dissertation proposes two techniques combining qualitative data (expert opinion) and quantitative data (OREDA, the North Sea oil companies' failure database), applied to a centrifugal pump of a water injection system for secondary oil recovery in two scenarios. The data were processed in commercial reliability software. As a result of this hybridization, it was possible to determine the pump life cycle and the impact on production if it fails. The technique guides the best maintenance policy, an important tool for strategic decisions on asset management.
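As a minimal sketch of the quantitative step described above — not the dissertation's procedure or its commercial software — the snippet below fits a two-parameter Weibull model to hypothetical pump failure times and reads off the reliability R(t) and the mean time to failure; the failure-time values are made up.

```python
# Fit a Weibull life model to assumed centrifugal-pump failure times (hours)
# and compute survival probability and mean time to failure.
import numpy as np
from scipy import stats

failure_hours = np.array([1200., 2300., 3100., 4600., 5200., 7800., 9100.])  # made-up data

shape, loc, scale = stats.weibull_min.fit(failure_hours, floc=0)  # beta, 0, eta
weibull = stats.weibull_min(shape, loc, scale)

t = 4000.0
print(f"beta = {shape:.2f}  eta = {scale:.0f} h")
print(f"R({t:.0f} h) = {weibull.sf(t):.3f}")   # probability the pump survives past t
print(f"MTTF = {weibull.mean():.0f} h")        # expected life of the pump
```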
Resumo:
In the urban areas of cities, the disposal of effluents from treatment plants is a growing problem because sewage is collected in large volumes. Hydroponic cultivation therefore becomes important, owing to its intensive character, as a reuse alternative. The objective of this work is to improve the water-nutrient relation of the hydroponic cultivation of green fodder (FVH) using treated sewage. Fodder was produced with corn (Zea mays L.), double hybrid AG1051, in the experimental field of the Federal University of Rio Grande do Norte (UFRN), in the city of Natal, RN, Brazil. The treated, essentially domestic effluent came from an anaerobic reactor, of the two-chamber decant-digester type in series, followed by submerged anaerobic filters. The experimental hydroponic system consisted of 8 beds, bounded by walls of hollow ceramic brick, each measuring 2.5 meters in length by 1.0 meter in width, with a slope of 4% (m/m) in the longitudinal direction, carefully leveled so as not to allow preferential paths in the flow. With these dimensions, the useful sowing area was 2 square meters. The cultivation beds were waterproofed (bottom and sides) with white plastic canvas 200 micrometers thick. The inflow and outflow of the effluent in the beds were controlled, with cycles of 12.68 minutes, of which 1.18 minutes under irrigation. The treatments were: T1, 24 hours/day under irrigation with a flow of 2 L/min; T2, 12 hours/day with a flow of 4 L/min; T3, 12 hours/day with a flow of 2 L/min; and T4, 16 hours/day with a flow of 3 L/min. The evapotranspirometric demand and the influent and effluent of the hydroponic system were evaluated, in order to characterize and monitor physico-chemical parameters such as pH, temperature, electrical conductivity and fecal coliforms. The latter was analyzed at 11 days after sowing (DAS) and at 14 DAS; the others were analyzed daily. Sowing took place on February 21, 2007 (first experiment) and April 10, 2007 (second experiment). The sowing density was 2 kg of seeds, germinated for 48 hours beforehand, per square meter of bed. The statistical design was completely randomized with two replicates, in two experiments. The Tukey means test was applied at five percent probability. The cultivation cycle was 14 DAS, with a maximum evapotranspirometric demand, reached by T1, of 67.44 mm/day. For the parameters analyzed, such as green-matter mass (kg), productivity (kg/m2) and the ratio of FVH produced per kg of seed used in sowing, the best result was obtained by T1, reaching up to 19.01 kg/m2 of cultivation. Without significant difference, T4 presented high values with 16 hours under the irrigation cycle. Treatments 2 and 3, with 12 hours under the irrigation cycle, obtained results inferior to the other treatments. As a treatment system, it was efficient in reducing salinity: T1 obtained a maximum average reduction of 62.5%, at 7 DAS, of the salts entering the system being absorbed by the crop. The FVH cultivation also reduced the microbiological load; significant reduction percentages were reached, up to 90.23% of colony-forming units (CFU), thus establishing the hydroponic system as a good alternative for treating effluents from high-efficiency reactors.
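A minimal sketch of the statistical comparison mentioned above (Tukey's test at 5% probability across treatments T1-T4); the replicate yield values below are hypothetical, not the experiment's data.

```python
# Pairwise comparison of assumed green-matter yields (kg/m2) for four irrigation
# treatments with Tukey's HSD at the 5% significance level.
from scipy import stats

t1 = [19.0, 18.6, 18.9]   # 24 h/day, 2 L/min  (made-up replicate values)
t2 = [14.1, 13.8, 14.5]   # 12 h/day, 4 L/min
t3 = [13.9, 14.2, 13.6]   # 12 h/day, 2 L/min
t4 = [17.8, 18.1, 17.5]   # 16 h/day, 3 L/min

result = stats.tukey_hsd(t1, t2, t3, t4)
print(result)                  # pairwise mean differences, CIs and p-values
print(result.pvalue < 0.05)    # which treatment pairs differ at the 5% level
```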
Resumo:
The state of Rio Grande do Norte, with an extremely irregular rainfall regime, needs to broaden and refine research on its own hydro-climatic conditions in order to obtain trustworthy results that can minimize the adversities imposed by these conditions and allow better planning of the economic and subsistence activities that in some way rely on the multiple uses of the state's water resources. Accordingly, the daily values observed in the pluviometric series of 166 stations, with 45 uninterrupted years of historical data, were fitted to the incomplete gamma function to determine the probability of rain in each of the 36 ten-day periods into which the year was divided. The α and β parameters of this function were obtained by the maximum likelihood method, allowing, in the end, the temporal and spatial distribution of rainfall at the 75% probability level to be analyzed. The potential evapotranspiration values were calculated by the Linacre method and, through the SURFER software, were confronted with the dependable rainfall, thus producing the spatialization of the potential water availability, whose values can be obtained for any ten-day period of the year, municipality and/or region of the state of Rio Grande do Norte. With the identification of the main meteorological systems acting on the state, we sought to better understand how these systems interfere, through the irregular rainfall regime, in the diverse climatic situations over most of Rio Grande do Norte and in the regional water balance. Finally, with these data and the generated maps, we verified that the space-time distribution of rainfall and of potential water availability is heterogeneous throughout the state, mainly in the West and Central regions, within the potiguar semi-arid zone, which, after the rainy season, suffers dry spells and prolonged drought during the rest of the year.
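A minimal sketch of the statistical step described above for a single station, with made-up data: fit a two-parameter gamma distribution (α, β) to ten-day rainfall totals by maximum likelihood and read the 75%-probability (dependable) rainfall, i.e. the depth equalled or exceeded in 75% of the years.

```python
# Gamma fit of ten-day rainfall totals and the 75%-dependable rainfall.
import numpy as np
from scipy import stats

dekad_rain_mm = np.array([12., 35., 8., 60., 22., 41., 15., 75., 30., 18.])  # hypothetical totals

alpha, loc, beta = stats.gamma.fit(dekad_rain_mm, floc=0)   # MLE with location fixed at zero
dependable_rain = stats.gamma(alpha, loc, beta).ppf(0.25)   # exceeded with 75% probability

print(f"alpha = {alpha:.2f}  beta = {beta:.1f} mm")
print(f"75%-dependable rainfall: {dependable_rain:.1f} mm per ten-day period")
```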
Resumo:
The Pitimbu River Watershed (PRW), within the metropolitan area of the Potiguar capital, State of Rio Grande do Norte, serves, among other purposes, human supply and animal watering. This watershed is extremely important because, besides supplying approximately 30% of the freshwater of part of Natal (the South, East and West Zones), it contributes to the equilibrium of the riparian ecosystem. In this context, this study aims to evaluate the dynamics of urban development in the PRW, applying cellular automata as a modeling instrument, and to simulate future urban scenarios between 2014 and 2033 using the SLEUTH simulation program. In the calibration phase, urban footprints for the years 1984, 1992, 2004 and 2013 were used, with a resolution of 100 meters. The simulation showed a predominance of organic growth, expanding the urbanized area of the PRW from existing urban centers. Spontaneous growth occurred throughout the full extent of the watershed, although the probability of effective growth did not exceed 21%. A 68% increase was observed for the period between 2014 and 2033, corresponding to an expansion area of 1,778 ha. By 2033, the area around the source of the Pitimbu River and the surroundings of the Jiqui Lake will have grown by more than 78%. Finally, an exogenous (outside-in) urban growth tendency was observed in the watershed. As a result of this growth, water resources will become scarcer.
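The sketch below is not SLEUTH itself, only a toy cellular-automaton illustration of the two growth behaviors the study relies on: spontaneous growth (random urbanization anywhere) and organic growth (urbanization at the edge of existing urban cells). Grid size, probabilities and seed placement are assumptions.

```python
# Toy urban-growth cellular automaton on a binary grid.
import numpy as np

rng = np.random.default_rng(0)

def step(urban, p_spontaneous=0.001, p_organic=0.20):
    new = urban.copy()
    # spontaneous growth: any non-urban cell may urbanize with small probability
    new |= (~urban) & (rng.random(urban.shape) < p_spontaneous)
    # organic growth: non-urban cells touching an urban neighbor urbanize more easily
    padded = np.pad(urban, 1)
    neighbors = sum(np.roll(np.roll(padded, dy, 0), dx, 1)[1:-1, 1:-1]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    new |= (~urban) & (neighbors > 0) & (rng.random(urban.shape) < p_organic)
    return new

urban = np.zeros((100, 100), dtype=bool)
urban[50, 50] = True                 # a single seed "urban center"
for year in range(20):               # one step per simulated year
    urban = step(urban)
print("urban cells after 20 steps:", int(urban.sum()))
```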
Resumo:
This work, entitled "The Transcendental Arguments: Kant and Hume's Problem", has as its main objective to interpret Kant's answer to Hume's problem in the light of the conjunction of the themes of causality and induction, which is equivalent to a skeptical-naturalist reading of the latter. In this sense, this initiative complements the treatment given in our master's dissertation, where the same issue was discussed from a merely skeptical reading of Hume by Kant and only causality was examined. Among the specific objectives, we list the following: a) critical philosophy fulfills three basic functions: a founding one, a negative one, and one that defends the practical use of reason, here named defensive; b) the Kantian solution of Hume's problem in the first Critique fulfills the founding and negative functions of the critique of reason; c) the Kantian treatment of the theme of induction in the other Critiques fulfills the defensive function of the critique of reason; d) the evidence for Kant's answer to Hume's problem is more consistent when these three functions or moments of criticism are satisfied. The basic structure of the work consists of three parts. In the first, the genesis of Hume's problem, our intention is to reconstruct Hume's problem, analyzing it from the perspective of the two definitions of cause, where the dilution of the first definition into the second corresponds to the reduction of knowledge to psychological probability, following the so-called naturalization of causal relations. In the second, Legality and Causality, it is argued that, when Hume is considered under the skeptical-naturalist option, Kant is not entitled to respond with the transcendental argument of the Second Analogy (A ⊢ B), a claim rooted in the position of contemporary thinkers such as Strawson and Allison. In the third part, Purpose and Induction, it is admitted that Kant responds to Hume at the level of the regulative use of reason, although the development of this argument exceeds the limits of the founding function of criticism. This is articulated, in both the Introduction and the Concluding Remarks, through the defensive [and negative] function of criticism. In this context, based on the use of the so-called transcendental arguments projected throughout the critical trilogy, we provide a solution to a recurring issue that appears at several points in our text, concerning the existence and/or the necessity of empirical causal laws. In this light, our thesis is that transcendental arguments offer an apodictic solution to Hume's skeptical-naturalist problem only when a practical project in which the interest of reason is ensured is at stake, as will be shown in our final considerations.
Resumo:
This work aims to interpret and analyze the problem of induction from a viewpoint founded on set theory and probability theory, as a basis for resolving its negative philosophical implications for systems of inductive logic in general. Given the importance of the problem and the relatively recent developments in these fields of knowledge (early 20th century), as well as the visible relations between them and the process of inductive inference, a relatively unexplored and promising field of possibilities has been opened. The key point of the study consists in modeling the information acquisition process using concepts of set theory, followed by a treatment using probability theory. Throughout the study, two major obstacles to the probabilistic justification were identified: the problem of defining the concept of probability and that of defining rationality, as well as the subtle connection between the two. This finding called for greater care in choosing the criterion of rationality to be adopted, in order to facilitate the treatment of the problem through specific situations without losing its original characteristics, so that the conclusions can be extended to classic cases such as the question of the continuity of the sunrise.
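A worked toy illustration of the classic sunrise question mentioned above — our own, not the author's formalism: under a uniform prior on the unknown probability of a sunrise, Laplace's rule of succession gives (n + 1) / (n + 2) as the posterior probability of one more sunrise after n consecutive observations.

```python
# Laplace's rule of succession for the sunrise question.
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Posterior predictive probability of one more success under a uniform prior."""
    return Fraction(successes + 1, trials + 2)

for n in (1, 10, 10_000):
    print(n, "sunrises observed ->", rule_of_succession(n, n), "probability of another")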
Resumo:
In this work, through computational simulations, we identify the physical phenomena associated with the growth and dynamics of polymers as complex systems exhibiting nonlinear behavior, chaos and self-organized criticality, among others. In the first chapter we begin with a brief introduction describing some basic concepts important to the understanding of our work. Chapter 2 describes our study of the distribution of segments in a branched polymer. Based on calculations similar to those used for linear polymer chains, we use the Branched Polymer Growth Model (BPGM) proposed by Lucena et al. and analyze the probability distribution of the monomers in a branched polymer in 2 dimensions, unknown until now. In the following chapter we study the universality class of the branched polymers generated by the BPGM. Using 3-dimensional computational simulations of the model proposed by Lucena et al., we calculate some critical dimensions (the fractal, minimum and chemical dimensions) to try to elucidate the question of the universality class. Also in this chapter, we describe a new model for the simulation of branched polymers, developed by us in order to save computational effort. Next, in Chapter 4, we study the chaotic behavior of the growth of polymers generated by the BPGM. We start from critically organized polymers and use a technique very similar to the one used to study damage spreading in phase transitions of Ising models, the Hamming distance. We find that the Hamming distance for branched polymers behaves as a power law, indicating a non-extensive character of the growth dynamics. In Chapter 5 we analyze the molecular motion of polymer chains in the presence of obstacles and potential gradients. We use a generalized reptation model to study the diffusion of linear polymers in disordered media. We investigate the time evolution of these chains on square lattices and measure the characteristic transport times t. We close this dissertation with a chapter containing the general conclusion of our work (Chapter 6), two appendices (Appendices A and B) containing the basic phenomenology of some concepts used throughout this thesis (fractals and percolation, respectively), and a third and final appendix (Appendix C) containing a description of a computer program to simulate the growth of branched polymers on a square lattice.
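A loose sketch of branched growth on a square lattice — not the BPGM implementation used in the thesis: from a seed monomer, each active tip occupies one free neighbor and, with an assumed branching probability b, a second one. All parameter values and names are ours.

```python
# Toy branched-polymer growth on a square lattice with branching probability b.
import random

def grow_branched_polymer(size=101, b=0.3, max_monomers=2000, seed=1):
    random.seed(seed)
    occupied = {(size // 2, size // 2)}
    active = [(size // 2, size // 2)]
    while active and len(occupied) < max_monomers:
        tip = active.pop(random.randrange(len(active)))
        free = [(tip[0] + dx, tip[1] + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (tip[0] + dx, tip[1] + dy) not in occupied
                and 0 <= tip[0] + dx < size and 0 <= tip[1] + dy < size]
        random.shuffle(free)
        n_new = min(len(free), 2 if random.random() < b else 1)  # branch or just extend
        for site in free[:n_new]:
            occupied.add(site)
            active.append(site)
    return occupied

polymer = grow_branched_polymer()
print("monomers grown:", len(polymer))
```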
Resumo:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Resumo:
The complex behavior of a wide variety of phenomena that are of interest to physicists, chemists, and engineers has been quantitatively characterized by using the ideas of fractal and multifractal distributions, which correspond in a unique way to the geometrical shape and dynamical properties of the systems under study. In this thesis we present the space of fractals and the Hausdorff-Besicovitch, box-counting and scaling methods to calculate the fractal dimension of a set. We also investigate percolation phenomena in multifractal objects that are built in a simple way. The central object of our analysis is a multifractal object that we call Qmf. In these objects the multifractality comes directly from the geometric tiling. We identify some differences between percolation in the proposed multifractals and in a regular lattice. There are basically two sources of these differences. The first is related to the coordination number, c, which changes along the multifractal. The second comes from the way the weight of each cell in the multifractal affects the percolation cluster. We use many samples of finite-size lattices and draw the histogram of percolating lattices against the site occupation probability p. Depending on a parameter ρ characterizing the multifractal and on the lattice size L, the histogram can have two peaks. We observe that the occupation probability at the percolation threshold, pc, for the multifractal is lower than that for the square lattice. We compute the fractal dimension of the percolating cluster and the critical exponent β. Despite the topological differences, we find that percolation on a multifractal support is in the same universality class as standard percolation. The area and the number of neighbors of the blocks of Qmf show a non-trivial behavior. A general view of the object Qmf shows an anisotropy. The value of pc is a function of ρ, which is related to this anisotropy. We investigate the relation between pc and the average number of neighbors of the blocks, as well as the anisotropy of Qmf. We likewise study the distribution of shortest paths in percolation systems at the percolation threshold in two dimensions (2D). We study paths from one given point to multiple other points. In oil recovery terminology, the given single point can be mapped to an injection well (injector) and the multiple other points to production wells (producers). In the standard case of one injection well and one production well separated by a Euclidean distance r, the distribution of shortest path lengths l, P(l|r), shows a power-law behavior with exponent g_l = 2.14 in 2D. Here we analyze the situation of one injector and an array A of producers. Symmetric arrays of producers lead to one peak in the distribution P(l|A), the probability that the shortest path between the injector and any of the producers is l, while asymmetric configurations lead to several peaks in the distribution. We analyze configurations in which the injector is outside and inside the set of producers. The peak in P(l|A) for symmetric arrays decays faster than in the standard case. For very long paths all the studied arrays exhibit a power-law behavior with exponent g ≈ g_l.
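A minimal sketch of the kind of measurement described above, on a plain square lattice rather than the multifractal support Qmf: for each occupation probability p, the fraction of random samples containing a cluster that spans the lattice from top to bottom. Lattice size, sample count and p values are assumptions.

```python
# Fraction of spanning (percolating) samples versus site occupation probability p.
import numpy as np
from scipy import ndimage

def spans(lattice):
    labels, _ = ndimage.label(lattice)                       # 4-connected clusters
    return bool(set(labels[0][labels[0] > 0]) & set(labels[-1][labels[-1] > 0]))

rng = np.random.default_rng(42)
L, samples = 64, 200
for p in (0.55, 0.59, 0.63):
    hits = sum(spans(rng.random((L, L)) < p) for _ in range(samples))
    print(f"p = {p:.2f}: spanning fraction = {hits / samples:.2f}")
```

Repeating this over a fine grid of p values gives the histogram of percolating lattices against p discussed in the abstract; on the multifractal support the same measurement is performed over the tiles of Qmf instead of square cells.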
Resumo:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Resumo:
A linear chain does not present a phase transition at any finite temperature in a one-dimensional system when only first-neighbor interactions are considered. An example is the Ising ferromagnet, whose critical temperature lies at zero degrees. Analogously, in disordered geometrical systems such as percolation, the critical point is given by a critical probability equal to one. However, this situation can be drastically changed if we consider long-range bonds, replacing the uniform bond probability by a function that decays with the distance r between sites as a power law, p(r) ∝ r^(-α). In this kind of distribution the limit α → ∞ corresponds to the usual first-neighbor bond case, while α = 0 corresponds to the well-known "molecular field" situation. In this thesis we study the behavior of pc as a function of α for bond percolation, especially in d = 1. Our goal was to check a conjecture proposed by Tsallis in the context of his generalized statistics (a generalization of the Boltzmann-Gibbs statistics). By this conjecture, the scaling laws that depend on the size of the system N vary, in fact, with the quantity
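A toy sketch of the model discussed above (our own reduced version, not the thesis code): one-dimensional bond percolation in which a bond between sites a distance r apart is occupied with probability min(1, p·r^(-α)); α → ∞ recovers nearest-neighbor bonds and α = 0 the "molecular field" limit. Chain length and parameter values are assumptions.

```python
# 1D long-range bond percolation: largest cluster fraction versus alpha.
import random

def largest_cluster_fraction(n=200, p=0.6, alpha=1.5, seed=0):
    random.seed(seed)
    parent = list(range(n))               # union-find forest over the n sites
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < min(1.0, p * (j - i) ** (-alpha)):
                parent[find(i)] = find(j)  # occupy the long-range bond
    sizes = {}
    for k in range(n):
        r = find(k)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

for alpha in (0.5, 1.5, 3.0):
    print(f"alpha = {alpha}: largest cluster fraction = "
          f"{largest_cluster_fraction(alpha=alpha):.2f}")
```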
Resumo:
In this work we study and compare two percolation algorithms, one developed by Elias and the other by Newman and Ziff, using theoretical tools of algorithm complexity analysis as well as a program that performs an experimental comparison. The work is divided into three chapters. The first presents some definitions and theorems necessary for a more formal mathematical study of percolation. The second presents the techniques used to estimate algorithm complexity, namely worst-case, best-case and average-case analysis. We use the worst-case technique to estimate the complexity of both algorithms and thus compare them. The last chapter shows several characteristics of each algorithm and, through the theoretical estimate of their complexity and the comparison between the execution times of the most important part of each, we compare these important algorithms that simulate percolation.
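A minimal illustration of the union-find idea at the heart of the Newman-Ziff algorithm, not the implementations compared in this work: sites of an L x L lattice are occupied one at a time in random order, clusters are merged as neighbors meet, and every occupation number from 0 to L·L is visited in a single sweep. Lattice size and reporting points are assumptions.

```python
# Newman-Ziff-style single sweep over all occupation numbers with union-find.
import random

L = 32
N = L * L
parent = [-1] * N          # -1 = unoccupied; otherwise union-find parent pointer
sizes = {}                 # cluster size keyed by root site

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

order = list(range(N))
random.seed(0)
random.shuffle(order)

largest = 0
for step, site in enumerate(order, start=1):
    parent[site] = site
    sizes[site] = 1
    x, y = site % L, site // L
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < L and 0 <= ny < L:
            nb = ny * L + nx
            if parent[nb] != -1:                 # neighbor already occupied: merge
                a, b = find(site), find(nb)
                if a != b:
                    sizes[a] += sizes.pop(b)
                    parent[b] = a
    largest = max(largest, sizes[find(site)])
    if step in (N // 2, int(0.593 * N), N):
        print(f"occupied fraction {step / N:.3f}: largest cluster {largest}")
```

Because each site is added exactly once and merges use near-constant-time union-find operations, the whole sweep runs in nearly linear time in N, which is the efficiency argument usually made for the Newman-Ziff approach over restarting the simulation for every occupation probability.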