10 results for Images - Computational methods

at Universidade Federal do Rio Grande do Norte (UFRN)


Relevance:

80.00%

Abstract:

Computational methods have been increasingly used to aid in the characterization of molecular biological systems, especially when these are relevant to human health. Ibuprofen is a nonsteroidal anti-inflammatory drug in broad clinical use. Once in the bloodstream, most of the ibuprofen becomes bound to human serum albumin, the major protein of blood plasma, decreasing its bioavailability and requiring larger doses to produce its anti-inflammatory action. This study aimed to characterize, through the interaction energy, how ibuprofen binds to albumin and to establish which amino acids and molecular interactions are mainly involved in the process. For this purpose, an in silico study was conducted using quantum mechanical calculations based on Density Functional Theory (DFT), with the Generalized Gradient Approximation (GGA) to describe exchange and correlation effects. The interaction energy between the ligand and each amino acid of the binding site was calculated using the molecular fractionation with conjugate caps (MFCC) method. Besides the energies, we determined the distances, the types of molecular interaction, and the atomic groups involved. The theoretical models used were satisfactory and gave a more accurate description when the dielectric constant ε = 40 was employed. The findings corroborate the literature in identifying Sudlow site I (I-FA3) as the primary binding site and site I-FA6 as the secondary one. They differ, however, in identifying the most important amino acids, which, in order of decreasing interaction energy, are Arg410, Lys414, Ser489, Leu453, and Tyr411 for site I-FA3, and Leu481, Ser480, Lys351, Val482, and Arg209 for site I-FA6. The quantification of the interaction energies and the description of the most important amino acids open new avenues for studies aimed at modifying the structure of ibuprofen so as to decrease its interaction with albumin and consequently increase its distribution.
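
As a rough illustration of the bookkeeping behind the MFCC scheme (not the thesis's actual pipeline; the fragment energies below are hypothetical placeholders that a DFT/GGA code would supply), a minimal Python sketch of the per-residue interaction energy with the conjugate-cap correction:

```python
# Hedged sketch: MFCC bookkeeping for the ligand-residue interaction energy.
# Assumes per-fragment total energies (in kcal/mol) were already computed
# with a DFT/GGA code; all numeric values below are hypothetical.

def mfcc_interaction(e_lig_capres, e_capres, e_lig_concap, e_concap, e_lig):
    """E(L-Ri) = [E(L + C*RiC*) - E(C*RiC*) - E(L)]
               - [E(L + C*C*)   - E(C*C*)   - E(L)]  (conjugate-cap correction)."""
    return (e_lig_capres - e_capres - e_lig) - (e_lig_concap - e_concap - e_lig)

# Hypothetical energies for one residue of the binding site:
energies = {
    "Arg410": dict(e_lig_capres=-1000.0, e_capres=-980.0,
                   e_lig_concap=-620.0, e_concap=-600.5, e_lig=-12.0),
}
for res, e in sorted(energies.items(),
                     key=lambda kv: mfcc_interaction(**kv[1])):
    print(f"{res}: {mfcc_interaction(**e):+.2f} kcal/mol")
```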

Relevance:

80.00%

Abstract:

Although it has been suggested that the retinal vasculature is a diffusion-limited aggregation (DLA) fractal, no study has been dedicated to standardizing its fractal analysis. The aims of this project were to standardize a method to estimate the fractal dimensions of the retinal vasculature and to characterize their normal values; to determine whether this estimation depends on skeletonization and on the segmentation and calculation methods; to assess the suitability of the DLA model; and to determine the usefulness of log-log graphs in characterizing the vasculature's fractality. To achieve these aims, the information, mass-radius, and box-counting dimensions of the vasculatures of 20 eyes were compared when the vessels were segmented manually or computationally; the fractal dimensions of the vasculatures of 60 eyes of healthy volunteers were compared with those of 40 DLA models; and the log-log graphs obtained were compared with those of known fractals and of non-fractals. The main results were: the fractal dimensions of vascular trees depended on the segmentation and dimension-calculation methods, but there was no difference between manual segmentation and the scale-space, multithreshold, and wavelet computational methods; the means of the information and box-counting dimensions for arteriolar trees were 1.29, against 1.34 and 1.35 for the venular trees; the dimensions of the DLA models were higher than those of the vessels; and the log-log graphs were straight, but with varying local slopes, both for vascular trees and for fractals and non-fractals. These results lead to the following conclusions: the estimation of the fractal dimensions of the retinal vasculature depends on its skeletonization and on the segmentation and calculation methods; log-log graphs are not suitable as a fractality test; the means of the information and box-counting dimensions for the normal eyes were 1.47 and 1.43, respectively; and the DLA model with optic disc seeding is not sufficient for modeling the retinal vascularization.
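
For concreteness, a minimal box-counting sketch (numpy only); the input image and box sizes are placeholders, and the thesis's standardized protocol, the skeletonization step, and the information and mass-radius estimators are not reproduced here:

```python
# A minimal box-counting sketch, assuming `img` is a binary 2-D array of the
# skeletonized vascular tree (1 = vessel pixel).
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32, 64)):
    counts = []
    for s in sizes:
        # Trim so the image tiles exactly into s x s boxes.
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        boxes = img[:h, :w].reshape(h // s, s, w // s, s)
        # Count boxes containing at least one vessel pixel.
        counts.append((boxes.max(axis=(1, 3)) > 0).sum())
    # Slope of log(count) vs log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
demo = (rng.random((256, 256)) < 0.1).astype(np.uint8)  # placeholder image
print(f"D_box ~ {box_counting_dimension(demo):.2f}")
```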

Relevance:

80.00%

Abstract:

The usual Ashkin-Teller (AT) model is obtained as a superposition of two Ising models coupled through a four-spin interaction term. In two dimensions the AT model displays a line of fixed points along which the exponents vary continuously. On this line the model becomes soluble via a mapping onto the Baxter model. Such richness of multicritical behavior led Grest and Widom to introduce the N-color Ashkin-Teller model (N-AT). Those authors analyzed the model extensively, in both the isotropic and the anisotropic cases, by several analytical and computational methods. In the present work we define a more general version of the 3-color Ashkin-Teller model by introducing a 6-spin interaction term. We investigate the symmetry structure presented by our model, together with an analysis of the possible phase diagrams obtained by real-space renormalization group techniques. The phase diagrams are obtained at finite temperature in the region where ferromagnetic behavior is predominant. Through the use of the transmissivity concept we obtain the recursion relations on some periodic as well as aperiodic hierarchical lattices. In a first analysis we consider the two-color Ashkin-Teller model in order to obtain results which could serve as a guide to our main purpose. In the anisotropic case the model had previously been studied on the Wheatstone bridge by Claudionor Bezerra in his Master's dissertation. By using more appropriate computational resources, we obtained isomorphic critical surfaces that were described in Bezerra's work but not properly identified there. We also analyzed the isotropic version on an aperiodic hierarchical lattice and showed how the geometric fluctuations are affected by such aperiodicity, along with its consequences for the corresponding critical behavior. These analyses were carried out using appropriate definitions of the transmissivities. Finally, we considered the modified 3-AT model with a 6-spin coupling. With the inclusion of this term the model becomes more attractive from the symmetry point of view. For some hierarchical lattices we derived general recursion relations for the anisotropic version of the model (3-AAT), from which the corresponding equations for the isotropic version (3-IAT) can be obtained. The 3-IAT was studied extensively in the whole region where the ferromagnetic couplings are dominant. The fixed points and the respective critical exponents were determined. By analyzing the attraction basins of these fixed points we were able to find the three-parameter phase diagram (temperature × 4-spin coupling × 6-spin coupling). We identified fixed points corresponding to the universality classes of the Ising and the 4- and 8-state Potts models. We also obtained a fixed point which seems to be a reminiscence of a 6-state Potts fixed point, as well as a possible indication of the existence of a Baxter line. Some unstable fixed points which do not belong to any of the aforementioned q-state Potts universality classes were also found.
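
To illustrate the transmissivity RG technique in its simplest setting, here is a hedged sketch for the plain Ising model on a b = 2 diamond hierarchical lattice, used as a stand-in: the actual 2-AT and 3-AT recursions couple several transmissivities, but the fixed-point and eigenvalue analysis proceeds along the same lines.

```python
# Simplified stand-in for the transmissivity RG: Ising model on a b = 2
# diamond hierarchical lattice, where t = tanh(J/kT) renormalizes as
# t' = 2 t^2 / (1 + t^4) (series bonds multiply; parallel branches compose).
import numpy as np

def rg_step(t):
    return 2 * t**2 / (1 + t**4)

# Locate the nontrivial (unstable) fixed point by bisection on (0, 1).
lo, hi = 0.1, 0.9
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if rg_step(mid) < mid else (lo, mid)
t_star = 0.5 * (lo + hi)

# Thermal eigenvalue and correlation-length exponent nu = ln b / ln lambda.
eps = 1e-7
lam = (rg_step(t_star + eps) - rg_step(t_star - eps)) / (2 * eps)
print(f"t* = {t_star:.4f}, lambda = {lam:.4f}, nu = {np.log(2)/np.log(lam):.3f}")
```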

Relevance:

80.00%

Abstract:

An important problem faced by the oil industry is the distribution of multiple oil products through pipelines. Distribution is done in a network composed of refineries (source nodes), storage parks (intermediate nodes), and terminals (demand nodes) interconnected by a set of pipelines transporting oil and derivatives between adjacent areas. Constraints related to storage limits, delivery deadlines, source availability, and sending and receiving limits, among others, must be satisfied. Some researchers treat this problem from a discrete viewpoint, in which the flow in the network is seen as the sending of batches. Usually there is no separation device between batches of different products, and the losses due to interfaces may be significant. Minimizing delivery time is a typical objective adopted by engineers when scheduling product shipments in pipeline networks. However, the costs incurred due to losses at interfaces cannot be disregarded. The cost also depends on pumping expenses, which are mostly due to the cost of electricity. Since the industrial electricity tariff varies over the day, pumping at different time periods has different costs. This work presents an experimental investigation of computational methods designed to deal with the problem of distributing oil derivatives in networks considering three minimization objectives simultaneously: delivery time, losses due to interfaces, and electricity cost. The problem is NP-hard and is addressed with hybrid evolutionary algorithms. The hybridizations are mainly focused on Transgenetic Algorithms and classical multi-objective evolutionary algorithm architectures such as MOEA/D, NSGA2, and SPEA2. Three architectures, named MOTA/D, NSTA, and SPETA, are applied to the problem. An experimental study compares the algorithms on thirty test cases. To analyse the results obtained with the algorithms, Pareto-compliant quality indicators are used, and the significance of the results is evaluated with non-parametric statistical tests.
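
As a minimal illustration of the Pareto machinery shared by MOEA/D, NSGA2, and SPEA2 (and by the hybrids studied here), the sketch below checks dominance and extracts the non-dominated front for the three objectives; the schedules and their objective values are hypothetical placeholders.

```python
# Hedged sketch: Pareto dominance for the three minimization objectives
# (delivery time, interface losses, electricity cost).
from typing import List, Tuple

Obj = Tuple[float, float, float]  # (time, interface_loss, energy_cost)

def dominates(a: Obj, b: Obj) -> bool:
    """a dominates b if it is no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Obj]) -> List[Obj]:
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

schedules = [(10.0, 3.2, 500.0), (12.0, 2.1, 480.0), (11.0, 3.5, 510.0)]
print(pareto_front(schedules))  # -> first two; the third is dominated
```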

Relevance:

30.00%

Abstract:

The use of maps obtained from digitally processed orbital remote sensing images has become fundamental to optimizing the conservation and monitoring of coral reefs. However, the accuracy reached in the mapping of submerged areas is limited by variation in the water column, which degrades the signal received by the orbital sensor and introduces errors into the final result of the classification. The limited capacity of traditional methods based on conventional statistical techniques to solve problems of inter-class confusion motivated the search for alternative strategies in the area of Computational Intelligence. In this work, an ensemble of classifiers was built based on the combination of Support Vector Machines and a Minimum Distance Classifier, with the objective of classifying remotely sensed images of a coral reef ecosystem. The system is composed of three stages, through which the classification is progressively refined. Patterns that received an ambiguous classification at a given stage of the process were re-evaluated in the subsequent stage. An unambiguous prediction for all the data was achieved through the reduction or elimination of false positives. The images were classified into five bottom types: deep water; underwater corals; inter-tidal corals; algal bottom; and sandy bottom. The highest overall accuracy (89%) was obtained with the SVM with polynomial kernel. The accuracy of the classified image was compared, through the use of an error matrix, with the results obtained by applying other classification methods based on a single classifier (a neural network and the k-means algorithm). Finally, the comparison of the results demonstrated the potential of the ensemble of classifiers as a tool for classifying images of submerged areas subject to the noise caused by atmospheric effects and the water column.
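
A hedged sketch of the staged-ensemble idea, assuming scikit-learn and synthetic data in place of the actual multispectral imagery: a polynomial-kernel SVM classifies first, and a minimum-distance (nearest-centroid) classifier re-evaluates the ambiguous patterns.

```python
# Hedged two-stage sketch: low-confidence SVM predictions are passed to a
# minimum-distance classifier; the features and labels here are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                 # placeholder spectral features
y = rng.integers(0, 5, size=300)              # 5 bottom-type labels

svm = SVC(kernel="poly", degree=3, probability=True).fit(X, y)
mdc = NearestCentroid().fit(X, y)

proba = svm.predict_proba(X)
pred = svm.predict(X)
ambiguous = proba.max(axis=1) < 0.6           # low-confidence patterns
pred[ambiguous] = mdc.predict(X[ambiguous])   # second-stage re-evaluation
print(f"{ambiguous.sum()} patterns re-evaluated in stage 2")
```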

Relevance:

30.00%

Abstract:

Skin cancer is the most common of all cancers, and the increase in its incidence is due, in part, to people's behavior regarding sun exposure. In Brazil, non-melanoma skin cancer is the most frequent in the majority of regions. Dermatoscopy and videodermatoscopy are the main examinations for the diagnosis of dermatological skin diseases. The field involving the use of computational tools to support or follow medical diagnosis of dermatological lesions is quite recent. Several methods have been proposed for the automatic classification of skin pathologies from images. The present work presents a new intelligent methodology for the analysis and classification of skin cancer images, based on digital image processing techniques for the extraction of color, shape, and texture features, using the Wavelet Packet Transform (WPT) and the learning technique known as the Support Vector Machine (SVM). The Wavelet Packet Transform is applied for the extraction of texture features from the images. The WPT consists of a set of basis functions that represent the image in different frequency bands, each with a distinct resolution corresponding to each scale. Moreover, the color features of the lesion, which depend on the visual context and are influenced by the surrounding colors, are also computed, as are the shape attributes, through Fourier descriptors. The Support Vector Machine is used for the classification task; it is based on the structural risk minimization principle from statistical learning theory. The SVM constructs optimal hyperplanes that separate the classes. The generated hyperplane is determined by a subset of the samples, called support vectors. For the database used in this work, the results revealed good performance, with an overall accuracy of 92.73% for melanoma and 86% for non-melanoma and benign lesions. The extracted descriptors and the SVM classifier constitute a method capable of recognizing and classifying the analyzed skin lesions.
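
A hedged sketch of the texture branch of such a pipeline, assuming PyWavelets and scikit-learn, with synthetic images standing in for the dermatoscopic database; the color and Fourier shape descriptors are omitted here.

```python
# Hedged sketch: texture features as the energy of each sub-band of a
# 2-level Wavelet Packet Transform, fed to an SVM.
import numpy as np
import pywt
from sklearn.svm import SVC

def wpt_energy_features(img, wavelet="db2", level=2):
    wp = pywt.WaveletPacket2D(data=img, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level)                      # all sub-bands at `level`
    return np.array([np.sum(n.data ** 2) for n in nodes])  # sub-band energies

rng = np.random.default_rng(0)
imgs = rng.normal(size=(40, 64, 64))                 # placeholder lesion images
labels = rng.integers(0, 2, size=40)                 # e.g., melanoma vs. benign
X = np.array([wpt_energy_features(im) for im in imgs])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```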

Relevance:

30.00%

Abstract:

With the rapid growth of databases of various types (text, multimedia, etc.), there is a need for methods to order, access, and retrieve data in a simple and fast way. Image databases, in addition to these needs, require a representation of the images in which their semantic content is considered. Accordingly, several proposals, such as retrieval based on textual annotations, have been made. In the annotation approach, retrieval is based on comparing the textual description a user provides with the descriptions of the images stored in the database. Among its drawbacks, the textual description is very dependent on the observer, in addition to the computational effort required to describe all the images in the database. Another approach is content-based image retrieval (CBIR), where each image is represented by low-level features such as color, shape, and texture. In this sense, the results in the area of CBIR have been very promising. However, the representation of image semantics by low-level features is an open problem. New feature-extraction algorithms as well as new indexing methods have been proposed in the literature, but these algorithms have become increasingly complex. It is thus natural to ask: is there a relationship between the semantics and the low-level features extracted from an image? If there is, which descriptors best represent the semantics? This leads to a further question: how should descriptors be used to represent the content of the images? The work presented in this thesis proposes a method to analyze the relationship between low-level descriptors and semantics in an attempt to answer these questions. Furthermore, it was observed that there are three possibilities for indexing images: using composite feature vectors; using parallel and independent index structures (one per descriptor or set of descriptors); and using feature vectors ordered sequentially. The first two have been widely studied and applied in the literature, but there was no record of the third ever having been explored. This thesis therefore also proposes indexing with a sequential structure of descriptors, in which the order of the descriptors is based on the relationship between each descriptor and the users' semantics. Finally, the index proposed in this thesis proved better than the traditional approaches; it was also shown experimentally that the order in this sequence matters, and that there is a direct relationship between this order and the relationship of the low-level descriptors with the users' semantics.
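
A hedged sketch of the sequential-indexing idea: descriptors, ordered by their (assumed, precomputed) relationship with user semantics, are applied one after another, each pruning the candidate set before the next. The descriptors, pruning fraction, and database below are toy placeholders.

```python
# Hedged sketch: sequential retrieval over an ordered list of descriptors,
# each stage keeping only the closest fraction of the remaining candidates.
import numpy as np

def sequential_search(query, db, descriptors, keep=0.5):
    """descriptors: list of (name, extractor), sorted by semantic relevance."""
    candidates = list(range(len(db)))
    for _, extract in descriptors:
        q = extract(query)
        dists = [np.linalg.norm(extract(db[i]) - q) for i in candidates]
        order = np.argsort(dists)
        candidates = [candidates[j]
                      for j in order[:max(1, int(len(order) * keep))]]
    return candidates

# Hypothetical descriptors: a color histogram first (assumed most semantic),
# then a crude texture statistic.
color = lambda im: np.histogram(im, bins=8, range=(0, 1))[0].astype(float)
texture = lambda im: np.array([im.std(), np.abs(np.diff(im, axis=0)).mean()])
rng = np.random.default_rng(0)
db = [rng.random((32, 32)) for _ in range(100)]
print(sequential_search(db[0], db, [("color", color), ("texture", texture)]))
```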

Relevance:

30.00%

Abstract:

Conventional methods for solving the nonlinear blind source separation problem generally use a series of restrictions to obtain the solution, often leading to an imperfect separation of the original sources and a high computational cost. In this work, we propose an alternative measure of independence based on information theory and use artificial intelligence tools to solve linear and, subsequently, nonlinear blind source separation problems. In the linear model, we apply genetic algorithms with Rényi negentropy as the measure of independence to find a separation matrix from linear mixtures of waveform, audio, and image signals. A comparison is made with two Independent Component Analysis algorithms widespread in the literature. Subsequently, we use the same measure of independence as the cost function in the genetic algorithm to recover source signals mixed by nonlinear functions, using a radial basis function artificial neural network. Genetic algorithms are powerful tools for global search, and therefore well suited for use in blind source separation problems. Tests and analyses are carried out through computer simulations.
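
A hedged sketch of the linear case, assuming whitened two-channel mixtures so that the separation matrix reduces to a rotation: a toy (mu+lambda)-style evolutionary search (a stand-in for the full genetic algorithm) minimizes the sum of marginal quadratic Rényi entropies estimated with a Gaussian Parzen window, which plays the role of the negentropy-based independence measure here.

```python
import numpy as np

def renyi_h2(x, sigma=0.25):
    """Quadratic Renyi entropy via a Gaussian Parzen-window estimate."""
    d = x[:, None] - x[None, :]
    return -np.log(np.mean(np.exp(-d**2 / (4 * sigma**2))))

def cost(theta, Z):
    c, s = np.cos(theta), np.sin(theta)
    Y = np.array([[c, -s], [s, c]]) @ Z          # candidate separation
    return renyi_h2(Y[0]) + renyi_h2(Y[1])       # sum of marginal entropies

rng = np.random.default_rng(0)
S = np.vstack([np.sign(rng.normal(size=400)),    # toy sources
               rng.uniform(-1, 1, 400)])
Z = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S       # linear mixture
Z -= Z.mean(axis=1, keepdims=True)
d, U = np.linalg.eigh(Z @ Z.T / Z.shape[1])
Z = (U.T * (1 / np.sqrt(d))[:, None]) @ Z        # whitening: D^-1/2 U^T Z

pop = rng.uniform(0, np.pi / 2, 20)              # population of angles
for _ in range(30):
    pop = np.concatenate([pop, pop + rng.normal(0, 0.05, 20)])  # mutation
    pop = pop[np.argsort([cost(t, Z) for t in pop])][:20]       # selection
print(f"best theta = {pop[0]:.3f} rad")
```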

Relevance:

30.00%

Abstract:

In Simultaneous Localization and Mapping (SLAM), a robot placed at an unknown location in an arbitrary environment must be able to build a representation of this environment (a map) and localize itself within it simultaneously, using only information captured by the robot's sensors and known control signals. Recently, driven by advances in computing power, works in this area have proposed the use of a video camera as the sensor, giving rise to Visual SLAM. There are several approaches to it, and the vast majority work by extracting features from the environment, computing the necessary correspondences, and from these estimating the required parameters. This work presents a monocular Visual SLAM system that uses direct image registration to compute the image reprojection error, together with optimization methods that minimize this error, thereby obtaining the robot pose and the map of the environment directly from the pixels of the images. The feature extraction and matching steps are thus not needed, enabling our system to work well in environments where traditional approaches have difficulty. Moreover, by addressing the SLAM problem as proposed in this work, we avoid a very common problem of traditional approaches, known as error propagation. Owing to the high computational cost of this approach, several types of optimization methods were tested in order to find a good balance between accurate estimates and processing time. The results presented in this work show the success of this system in different environments.
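
A hedged sketch of direct registration in its simplest form, a translation-only warp optimized by Gauss-Newton on the photometric error (the thesis estimates full pose and map; scipy and synthetic images are assumed here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as warp_shift

def direct_align(I_ref, I_cur, n_iter=50):
    p = np.zeros(2)                               # translation (dy, dx)
    gy, gx = np.gradient(I_ref)                   # photometric Jacobian
    J = np.stack([gy.ravel(), gx.ravel()], axis=1)
    H = J.T @ J                                   # Gauss-Newton Hessian
    for _ in range(n_iter):
        r = (warp_shift(I_cur, -p, order=1) - I_ref).ravel()  # residual
        dp = -np.linalg.solve(H, J.T @ r)         # normal equations
        p += dp
        if np.linalg.norm(dp) < 1e-4:
            break
    return p

rng = np.random.default_rng(0)
I = gaussian_filter(rng.random((64, 64)), 3)      # smooth synthetic image
I2 = warp_shift(I, (1.5, -2.0), order=1)          # shifted "current frame"
print(direct_align(I, I2))                        # recovers ~[ 1.5, -2.0]
```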

Relevance:

30.00%

Abstract:

Remote sensing is a technology of great importance, allowing the capture of data from the Earth's surface that are used for various purposes, including environmental monitoring, tracking the use of natural resources, geological prospecting, and disaster monitoring. One of the main applications of remote sensing is the generation of thematic maps and the subsequent survey of areas from images generated by orbital or sub-orbital sensors. Pattern classification methods are used in the implementation of computational routines that automate this activity. Artificial neural networks present themselves as viable alternatives to traditional statistical classifiers, mainly for applications whose data show high dimensionality, such as those from hyperspectral sensors. The main goal of this work is to develop a classifier based on radial basis function neural networks and Growing Neural Gas, which presents some advantages over individual neural networks. The main idea is to use Growing Neural Gas's incremental characteristics to determine the number and placement of the radial basis function network's centers, in order to obtain a highly effective classifier. To demonstrate the performance of the classifier, three case studies are presented along with their results.
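
A hedged sketch of the main idea: a compact Growing Neural Gas growth loop (edge bookkeeping and pruning omitted for brevity, with the nearest unit standing in for the topological neighbor) places the prototypes, which then become the centers of an RBF network whose output weights are solved by least squares. All data and parameters are toy placeholders.

```python
import numpy as np

def gng_centers(X, max_nodes=12, lam=100, eps_b=0.05, eps_n=0.005,
                alpha=0.5, decay=0.995, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    W = list(X[rng.choice(len(X), 2, replace=False)])  # two initial units
    err = [0.0, 0.0]
    for t in range(1, steps + 1):
        x = X[rng.integers(len(X))]
        d = [np.sum((w - x) ** 2) for w in W]
        s1, s2 = np.argsort(d)[:2]                 # winner and runner-up
        err[s1] += d[s1]
        W[s1] = W[s1] + eps_b * (x - W[s1])        # move winner toward x
        W[s2] = W[s2] + eps_n * (x - W[s2])        # nudge the runner-up
        if t % lam == 0 and len(W) < max_nodes:    # grow where error is high
            q = int(np.argmax(err))
            dq = [np.sum((W[q] - w) ** 2) if j != q else np.inf
                  for j, w in enumerate(W)]
            f = int(np.argmin(dq))                 # nearest unit (simplified)
            W.append(0.5 * (W[q] + W[f]))
            err[q] *= alpha; err[f] *= alpha
            err.append(err[q])
        err = [e * decay for e in err]
    return np.array(W)

def rbf_fit(X, y_onehot, C, sigma):
    Phi = np.exp(-((X[:, None, :] - C[None]) ** 2).sum(-1) / (2 * sigma**2))
    return np.linalg.lstsq(Phi, y_onehot, rcond=None)[0]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, (60, 2)) for m in ([0, 0], [2, 2], [0, 2])])
y = np.repeat(np.eye(3), 60, axis=0)               # one-hot class targets
C = gng_centers(X)                                 # GNG units as RBF centers
Wout = rbf_fit(X, y, C, sigma=0.5)
Phi = np.exp(-((X[:, None, :] - C[None]) ** 2).sum(-1) / 0.5)
print((np.argmax(Phi @ Wout, 1) == np.argmax(y, 1)).mean())  # training acc
```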