15 results for "Método das K-Médias" (K-means method)

at Universidade Federal do Rio Grande do Norte (UFRN)


Relevance:

80.00%

Publisher:

Abstract:

The main objective of this study is to apply recently developed statistical-physics methods to time series analysis, particularly to the electrical induction profiles of oil wells, in order to study the petrophysical similarity of those wells in a spatial distribution. For this we used the DFA (Detrended Fluctuation Analysis) method, in order to determine whether this technique can be used to characterize the fields spatially. After obtaining the DFA values for all wells, we applied cluster analysis using the non-hierarchical method known as K-means. Usually based on the Euclidean distance, K-means consists of dividing the N elements of a data matrix into k groups, so that the similarity between elements belonging to different groups is as small as possible. In order to test whether a dataset generated by the K-means method, or a randomly generated dataset, forms spatial patterns, we created the parameter Ω (neighborhood index). High values of Ω reveal more aggregated data, while low values of Ω indicate scattered data or data without spatial correlation. We thus concluded that the DFA data from the 54 wells are grouped and can be used to characterize fields spatially. Applying the contour-level technique, we confirmed the results obtained by K-means, showing that DFA is effective for performing spatial analysis.
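The clustering step can be illustrated with a minimal K-means sketch in Python. This is illustrative only: the data values are hypothetical, and the thesis code is not reproduced here.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal K-means (Euclidean distance): partition the rows of X into k groups."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each element to the nearest center
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        # move each center to the mean of its group (keep it if the group is empty)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Hypothetical DFA values for six wells: three around 0.56, three around 0.81
dfa = np.array([[0.55], [0.57], [0.81], [0.83], [0.56], [0.80]])
labels, _ = kmeans(dfa, k=2)
# wells with similar DFA values end up in the same group
```

In the study the elements would be the per-well DFA values (and the grouping would then be checked for spatial aggregation via Ω); here plain synthetic numbers stand in for them.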

Relevance:

80.00%

Publisher:

Abstract:

In recent years, the DFA technique introduced by Peng has become established as an important tool for detecting long-range autocorrelation in non-stationary time series. It has been successfully applied in areas such as econophysics, biophysics, medicine, physics and climatology. In this study, we used DFA to obtain the Hurst exponent (H) of the electric density profile (RHOB) of 53 wells from the Namorado School Field. We want to know whether H can be used to characterize the field spatially. Two cases arise: in the first, the set of H values reflects the local geology, with geographically closer wells showing similar H, so that H can be used in geostatistical procedures; in the second, each well has its own H, the information from the wells is uncorrelated, and the profiles show only random fluctuations in H with no spatial structure. Cluster analysis is a widely used statistical method; here we use the non-hierarchical k-means method. In order to verify whether a dataset generated by the k-means method shows spatial patterns, we created the parameter Ω (neighborhood index): high Ω indicates more aggregated data, while low Ω indicates dispersed data or data without spatial correlation. With the help of this index and the Monte Carlo method, we verified that randomly clustered data show a distribution of Ω lower than the Ω of the actual clusters. We thus conclude that the H data obtained from the 53 wells are grouped and can be used to characterize spatial patterns. The analysis of contour levels confirmed the k-means results.
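A minimal sketch of how a DFA exponent can be estimated is shown below. This is illustrative only: the box scales and first-order (linear) detrending are assumptions, not the parameters used in the study.

```python
import numpy as np

def dfa_exponent(series, scales):
    """DFA sketch: integrate the series, detrend it linearly in boxes of size n,
    and return the slope of log F(n) versus log n."""
    y = np.cumsum(series - np.mean(series))          # integrated profile
    F = []
    for n in scales:
        n_boxes = len(y) // n
        sq = []
        for i in range(n_boxes):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)             # local linear trend
            sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(sq)))               # RMS fluctuation at scale n
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(42)
white = rng.standard_normal(4096)
h = dfa_exponent(white, scales=[8, 16, 32, 64, 128])
# for uncorrelated noise the exponent should come out close to 0.5
```

Applied to a well-log profile instead of synthetic noise, the returned exponent plays the role of H in the study.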

Relevance:

80.00%

Publisher:

Abstract:

The use of non-human primates in scientific research has contributed significantly to the biomedical area and, in the case of Callithrix jacchus, has provided important evidence on physiological mechanisms that help explain its biology, making the species a valuable experimental model for different pathologies. However, raising non-human primates in captivity for long periods is accompanied by behavioral disorders and chronic diseases, as well as progressive weight loss in most of the animals. The Primatology Center of the Universidade Federal do Rio Grande do Norte (UFRN) has housed a colony of C. jacchus for nearly 30 years, and during this period the animals have been weighed systematically to detect possible alterations in their clinical condition. This procedure has generated a large volume of data on the weight of animals at different age ranges, data of great importance for studying this variable from different perspectives. Accordingly, this work presents three studies using weight data collected over 15 years (1985-2000) as a way of verifying the health status and development of the animals. The first study produced the first article, which describes the histopathological findings of animals with a probable diagnosis of permanent wasting marmoset syndrome (WMS). All of these animals were carriers of trematode parasites (Platynosomum spp.) and had obstruction of the hepatobiliary system; it is suggested that this agent is one of the etiological factors of the syndrome. In the second article, the analysis focused on comparing the environmental profile and cortisol levels of animals with normal weight-curve evolution and those with WMS. We observed a marked decrease in locomotion, increased use of the lower strata of the cage, and hypocortisolemia. The latter is likely associated with an adaptation of the mechanisms that make up the hypothalamic-pituitary-adrenal axis, as observed in other mammals under conditions of chronic malnutrition.
Finally, in the third study, the animals with weight alterations were excluded from the sample and, using computational tools (K-means and SOM) in an unsupervised way, we propose new ontogenetic development classes for C. jacchus. The classes were redimensioned from five to eight: infant I, infant II, infant III, juvenile I, juvenile II, sub-adult, young adult and elderly adult, in order to provide a classification more suitable for detailed studies that require better control over animal development.

Relevance:

80.00%

Publisher:

Abstract:

The main goal of this work is to investigate the suitability of applying cluster ensemble techniques (ensembles or committees) to gene expression data. More specifically, we develop experiments with three different cluster ensemble methods that have been used in many works in the literature: the co-association matrix, relabeling and voting, and ensembles based on graph partitioning. The inputs for these methods are the partitions generated by three clustering algorithms representing different paradigms: k-means, Expectation-Maximization (EM), and the hierarchical method with average linkage. These algorithms have been widely applied to gene expression data. In general, the results of our experiments indicate that the cluster ensemble methods perform better than the individual techniques. This happens mainly for the heterogeneous ensembles, that is, ensembles built from base partitions generated by different clustering algorithms.
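The co-association idea can be sketched as follows. This is an illustrative minimal version: the 0.5 threshold and the connected-components consensus step are assumptions, not necessarily the variant used in the experiments.

```python
import numpy as np

def coassociation(partitions):
    """Co-association matrix: entry (i, j) is the fraction of base partitions
    that place elements i and j in the same cluster."""
    n = len(partitions[0])
    M = np.zeros((n, n))
    for labels in partitions:
        labels = np.asarray(labels)
        M += (labels[:, None] == labels[None, :]).astype(float)
    return M / len(partitions)

def consensus(M, threshold=0.5):
    """Consensus partition: connected components of the graph linking pairs
    whose co-association exceeds the threshold."""
    n = len(M)
    label = [-1] * n
    cur = 0
    for i in range(n):
        if label[i] == -1:
            stack, label[i] = [i], cur
            while stack:
                u = stack.pop()
                for v in range(n):
                    if label[v] == -1 and M[u, v] > threshold:
                        label[v] = cur
                        stack.append(v)
            cur += 1
    return label

# Three hypothetical base partitions (e.g. from k-means, EM and average linkage);
# note that cluster ids need not agree across partitions
parts = [[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0]]
final = consensus(coassociation(parts))
```

Because only co-membership is counted, the label-switching in the third base partition does not affect the consensus.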

Relevance:

80.00%

Publisher:

Abstract:

Peng was the first to work with the DFA (Detrended Fluctuation Analysis) technique, a tool capable of detecting long-range autocorrelation in non-stationary time series. In this study, the DFA technique is used to obtain the Hurst exponent (H) of the electric neutron porosity profile of 52 oil wells in the Namorado Field, located in the Campos Basin, Brazil. The purpose is to determine whether the Hurst exponent can be used to characterize the spatial distribution of the wells, that is, whether wells with close values of H are also spatially close together. In this work we used both the hierarchical clustering method and the non-hierarchical k-means method, and compared the two to see which provides the better result. From this, the parameter Ω (neighborhood index) was created, which checks whether a dataset generated by the k-means method, or at random, in fact forms spatial patterns. High values of Ω indicate that the data are aggregated, while low values of Ω indicate that the data are scattered (no spatial correlation). Using the Monte Carlo method, we showed that randomly combined data show a distribution of Ω below the empirical value. Therefore, the empirical H values obtained from the 52 wells are geographically grouped. Comparing the contour curves with the results obtained by k-means confirmed that the method is effective for correlating wells in a spatial distribution.
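The Ω test can be illustrated with a toy Monte Carlo experiment. The neighborhood index below is a hypothetical stand-in based on nearest-neighbor label agreement; the thesis defines its own Ω, which is not reproduced here.

```python
import numpy as np

def omega(coords, labels, m=3):
    """Hypothetical neighborhood index: for each well, the fraction of its m
    nearest neighbors carrying the same cluster label, averaged over wells."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # a well is not its own neighbor
    neigh = np.argsort(d, axis=1)[:, :m]
    return np.mean(labels[neigh] == labels[:, None])

rng = np.random.default_rng(1)
# two spatially separated groups of synthetic "wells"
coords = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(10, 1, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)

observed = omega(coords, labels)
# Monte Carlo null: the index under random relabelling of the same wells
null = [omega(coords, rng.permutation(labels)) for _ in range(200)]
aggregated = observed > np.quantile(null, 0.95)
```

As in the study, the empirical index is compared with its distribution under random labelling: a value above the null distribution indicates spatial aggregation.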

Relevance:

80.00%

Publisher:

Abstract:

The main objective of this study is to apply recently developed statistical-physics methods to time series analysis, particularly to the electrical induction profiles of oil wells, in order to study the petrophysical similarity of those wells in a spatial distribution. For this we used the DFA (Detrended Fluctuation Analysis) method, in order to determine whether this technique can be used to characterize the fields spatially. After obtaining the DFA values for all wells, we applied cluster analysis using the non-hierarchical method known as K-means. Usually based on the Euclidean distance, K-means consists of dividing the N elements of a data matrix into k groups, so that the similarity between elements belonging to different groups is as small as possible. In order to test whether a dataset generated by the K-means method, or a randomly generated dataset, forms spatial patterns, we created the parameter Ω (neighborhood index). High values of Ω reveal more aggregated data, while low values of Ω indicate scattered data or data without spatial correlation. We thus concluded that the DFA data from the 54 wells are grouped and can be used to characterize fields spatially. Applying the contour-level technique, we confirmed the results obtained by K-means, showing that DFA is effective for performing spatial analysis.

Relevance:

80.00%

Publisher:

Abstract:

In recent years, the DFA technique introduced by Peng has become established as an important tool for detecting long-range autocorrelation in non-stationary time series. It has been successfully applied in areas such as econophysics, biophysics, medicine, physics and climatology. In this study, we used DFA to obtain the Hurst exponent (H) of the electric density profile (RHOB) of 53 wells from the Namorado School Field. We want to know whether H can be used to characterize the field spatially. Two cases arise: in the first, the set of H values reflects the local geology, with geographically closer wells showing similar H, so that H can be used in geostatistical procedures; in the second, each well has its own H, the information from the wells is uncorrelated, and the profiles show only random fluctuations in H with no spatial structure. Cluster analysis is a widely used statistical method; here we use the non-hierarchical k-means method. In order to verify whether a dataset generated by the k-means method shows spatial patterns, we created the parameter Ω (neighborhood index): high Ω indicates more aggregated data, while low Ω indicates dispersed data or data without spatial correlation. With the help of this index and the Monte Carlo method, we verified that randomly clustered data show a distribution of Ω lower than the Ω of the actual clusters. We thus conclude that the H data obtained from the 53 wells are grouped and can be used to characterize spatial patterns. The analysis of contour levels confirmed the k-means results.

Relevance:

30.00%

Publisher:

Abstract:

The Six Sigma methodology is a program aimed at the continuous improvement of business processes, targeting customer satisfaction as well as increased financial and operational gains. Considering that more than 99% of Brazilian companies have up to five hundred employees, this study investigates how Six Sigma can be applied in small and medium-sized companies. The study was conducted based on a literature review and on the application of the ideas in a medium-sized newspaper company, which produces newspapers that circulate daily in the state of Rio Grande do Norte. The results of the research point to the viability of using the methodology in this market segment, and suggest a method that can be used in similar companies.

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this work is to study the application of microstrip patch antennas and the use of superconducting linear and planar phased arrays. A study of the main theories that explain superconductivity is presented: the BCS theory, the London equations and the Two-Fluid Model, which supported the implementation of the superconducting microstrip antennas. Linear and planar phased-array configurations were analyzed, considering factors such as the phase criteria and the spacing between the elements that make up the arrays, in order to minimize losses due to secondary lobes. The antenna used has a rectangular patch of the superconducting material Sn5InCa2Ba4Cu10Oy and was analyzed by the Transverse Transmission Line (TTL) method applied in the Fourier transform domain (FTD). The TTL is a full-wave method that obtains the electromagnetic fields in terms of the transverse components of the structure. The superconducting patch is included through a complex resistive boundary condition. Results are obtained for the resonant frequency as a function of the antenna parameters, and for the E-plane and H-plane radiation patterns of M-element phased arrays in linear and planar configurations, for different values of phase and spacing between the elements.

Relevance:

30.00%

Publisher:

Abstract:

In this work, a new online algorithm is proposed for solving the k-Server Problem (KSP). Its performance is compared with that of other algorithms in the literature, namely the Harmonic and Work Function algorithms, which have been shown to be competitive and are therefore meaningful benchmarks. An algorithm that performs efficiently relative to them tends to be competitive as well, although proving this fact is beyond the scope of the present work. The proposed algorithm for the KSP is based on reinforcement learning techniques. The problem was modeled as a multi-stage decision process, to which the Q-Learning algorithm is applied, one of the most popular methods for establishing optimal policies in this type of decision problem. However, the size of the storage structure used by reinforcement learning to obtain the optimal policy grows with the number of states and actions, which in turn is proportional to the number n of nodes and k of servers. Analyzing this growth shows that it is exponential, limiting the application of the method to smaller problems, where the number of nodes and servers is small. This problem, known as the curse of dimensionality, was introduced by Bellman and implies that an algorithm cannot be executed for certain instances of a problem, because the computational resources needed to obtain its output are exhausted. To avoid restricting the proposed solution, based exclusively on reinforcement learning, to small-scale applications, an alternative solution is proposed for more realistic problems involving a larger number of nodes and servers.
This alternative solution is hierarchical and uses two methods for solving the KSP: reinforcement learning, applied to a reduced number of nodes obtained from an aggregation process, and a greedy method, applied to the subsets of nodes resulting from the aggregation, where the criterion for scheduling the servers is the smallest distance to the demand location.
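The Q-learning formulation can be sketched on a toy instance. Everything here is an illustrative assumption (nodes on a line, unit distances, uniform random demands, arbitrary hyperparameters), not the setup evaluated in the thesis.

```python
import random
from collections import defaultdict

def train_q(n=4, k=2, episodes=500, steps=20, alpha=0.2, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning sketch for a toy k-server instance: n nodes on a line,
    serving cost = distance moved by the chosen server (reward = -cost)."""
    rng = random.Random(seed)
    Q = defaultdict(float)                      # Q[((servers, demand), action)]
    for _ in range(episodes):
        servers = tuple(sorted(rng.sample(range(n), k)))
        demand = rng.randrange(n)
        for _ in range(steps):
            next_demand = rng.randrange(n)
            if demand not in servers:           # otherwise the request is free
                state = (servers, demand)
                # epsilon-greedy choice of which server serves the demand
                a = (rng.randrange(k) if rng.random() < eps
                     else max(range(k), key=lambda x: Q[state, x]))
                cost = abs(servers[a] - demand)
                new_servers = tuple(sorted(servers[:a] + (demand,) + servers[a + 1:]))
                # one-step Q-learning update, bootstrapping on the next request
                target = -cost + gamma * max(Q[(new_servers, next_demand), x]
                                             for x in range(k))
                Q[state, a] += alpha * (target - Q[state, a])
                servers = new_servers
            demand = next_demand
    return Q

Q = train_q()
# greedy policy for servers at nodes 0 and 3 with a demand at node 1:
# with enough training, moving the nearer server (action 0) should be preferred
best = max(range(2), key=lambda x: Q[((0, 3), 1), x])
```

The exponential blow-up mentioned in the abstract is visible here: the table is indexed by every (server configuration, demand) pair, which grows combinatorially in n and k.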

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, classifying proteins into structural classes, which concerns the inference of patterns in their 3D conformation, is one of the most important open problems in Molecular Biology. The main reason is that the function of a protein is intrinsically related to its spatial conformation, yet such conformations are very difficult to obtain experimentally in the laboratory. This problem has therefore drawn the attention of many researchers in Bioinformatics. Considering the great difference between the number of protein sequences already known and the number of three-dimensional structures determined experimentally, the demand for automated techniques for the structural classification of proteins is very high. In this context, computational tools, especially Machine Learning (ML) techniques, have become essential for dealing with this problem. In this work, ML techniques are used in the recognition of protein structural classes: Decision Trees, k-Nearest Neighbor, Naive Bayes, Support Vector Machine and Neural Networks. These methods were chosen because they represent different learning paradigms and have been widely used in the Bioinformatics literature. Aiming to improve the performance of these individual classifiers, homogeneous (Bagging and Boosting) and heterogeneous (Voting, Stacking and StackingC) multi-classification systems are used. Moreover, since the protein database used in this work presents the problem of imbalanced classes, techniques for artificial class balancing (Random Undersampling, Tomek Links, CNN, NCL and OSS) are used to minimize this problem. To evaluate the ML methods, a cross-validation procedure is applied, where the accuracy of the classifiers is measured as the mean classification error rate on independent test sets. These means are compared, two by two, by a hypothesis test in order to evaluate whether there is a statistically significant difference between them.
With respect to the results obtained with the individual classifiers, Support Vector Machine presented the best accuracy. The multi-classification systems (homogeneous and heterogeneous) showed, in general, performance superior or similar to that achieved by the individual classifiers, especially Boosting with Decision Trees and StackingC with Linear Regression as meta-classifier. The Voting method, despite its simplicity, proved adequate for the problem addressed in this work. The techniques for class balancing, on the other hand, did not produce a significant improvement in the global classification error; nevertheless, they did improve the classification error for the minority class, and in this context the NCL technique proved the most appropriate.
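The voting combiner, the simplest of the heterogeneous systems mentioned, can be sketched as follows; the base-classifier outputs here are hypothetical placeholders.

```python
from collections import Counter

def majority_vote(predictions):
    """Voting combiner: for each instance, return the class predicted by the
    majority of the base classifiers (ties broken by first-seen order)."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

# hypothetical outputs of three base classifiers on five test instances
svm_pred  = ["a", "b", "b", "a", "b"]
knn_pred  = ["a", "b", "a", "a", "a"]
tree_pred = ["b", "b", "b", "a", "b"]
final = majority_vote([svm_pred, knn_pred, tree_pred])
# final == ["a", "b", "b", "a", "b"]
```

Each base classifier may come from a different learning paradigm (SVM, k-NN, decision tree, and so on), which is what makes the ensemble heterogeneous.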

Relevance:

30.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

Relevance:

30.00%

Publisher:

Abstract:

This study evaluated an auxiliary diagnostic method (DIAGNOdent®) for predicting dentin caries in primary teeth, through histological and microhardness validation, checking for possible differences between the values obtained with the auxiliary method and determining a cut-off point for stopping the removal of carious tissue. The sample consisted of 15 children presenting 21 carious cavities, treated and analyzed from before the restoration until the extraction of the tooth. The specimens were analyzed with DIAGNOdent® before the cavities were opened, after the removal of the carious tissue, and after the cavities were reopened following tooth extraction. They then received metallographic preparation for microhardness testing and optical microscopy, which served to validate the laser fluorescence measurements. There was a significant difference between the DIAGNOdent® values found before the opening of the cavity and the other measurements (p < 0.0001), with a significant correlation (r = 0.432; p = 0.019) only between the measurements obtained before the opening of the cavities and those at the end of the removal of the carious tissue. For microhardness, the axial means were significantly higher than the pulpal means, with a correlation between pulpal microhardness and the DIAGNOdent® values after reopening (r = -0.472; p = 0.002). Values of 15.38% for sensitivity, 100% for specificity, 100% for positive predictive value and 71.79% for negative predictive value were obtained using a laser fluorescence cut-off point of 30, with pulpal microhardness as the gold standard. Taking the mean of the laser fluorescence values obtained after the removal of the carious tissue and adding one standard deviation, the method indicated 19 as the cut-off point for stopping dentin removal.
It was concluded that, under the conditions analyzed, the auxiliary diagnostic method (DIAGNOdent®) is accurate in predicting dentin caries in primary teeth. In addition, the method confirmed that the usual standard of dentin removal ensures the removal of carious tissue.

Relevance:

30.00%

Publisher:

Abstract:

The objective of this research was to evaluate the passivity and strain induced in infrastructures screwed onto abutments, made by CAD/CAM technology, and to compare these samples with parts manufactured by conventional casting. Using CAD/CAM technology, 4 samples were made from zirconia (Zircad) and 4 samples were manufactured from cobalt-chrome (CoCrcad). The control group consisted of 4 specimens of cobalt-chrome made by one-piece casting (CoCrci), for a total of 12 infrastructures. To evaluate passivity, the infrastructures were installed on the abutments, one end was tightened, and the vertical gap between the infrastructure and the prosthetic abutment was measured with scanning electron microscopy (250×). The mean strain in these infrastructures was analyzed via the photoelasticity test. A significant difference (p = 0.000) in passivity was observed between the control group (CoCrci) and the sample groups (CoCrcad and Zircad). CoCrcad exhibited the best passivity value (48.76 ± 13.45 μm) and CoCrci the worst (187.55 ± 103.63 μm), while Zircad presented an intermediate value (103.81 ± 43.15 μm). When compared to the other groups, CoCrci showed the highest mean strain around the implants (17.19 ± 7.22 kPa). It was concluded that the zirconia infrastructures made by CAD/CAM showed a higher vertical marginal misfit than those made of cobalt-chromium alloy with the same methodology; however, the strain generated in the implants was similar. CAD/CAM technology is more accurate, in terms of the passivity and mean strain of infrastructures screwed onto abutments, than conventional manufacturing techniques.

Relevance:

30.00%

Publisher:

Abstract:

This thesis starts from the proposition that postmodernity is the very cultural manifestation of late capitalism. The research was concerned with the publication of private life in the news feeds of the Facebook online social network. We used netnography, a method built in action that combines study skills with the immersion of the researcher in the investigated field; netnography is an alternative methodology for the study of communication threads in cyber environments. We note that there is an exhibition of the self in a networked environment that reproduces the properties of the society of the spectacle, with emphasis on the fact that this exhibition is made, and is wanted, by the individual himself, who acts, allegorically, as the window dresser of his own spectacle. The subject reveals himself, shows himself, and is induced to show and display himself at the same time. It is a large-scale exhibition of the events of private life; it is more than spectacle, surpassing the Debordian sense and approaching exhibitionism in the Freudian sense: a subject in a new mode of existence. At the moment of posting in public, the person violates his own privacy; it is the desecration of intimacy turned into stardom. We found that these new behavioral forms of sharing human experiences, mediated by the technologies typical of the informational era, appear as a major mark of sociability, meshed with the ongoing technological revolution.