124 results for multitemporal image processing
at the Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
Digital images are used to solve day-to-day problems in many fields. In medicine, computer systems have improved diagnosis and the interpretation of medical data, and dentistry is no different: procedures assisted by computers increasingly support dentists in their tasks. Within this context, the area of dentistry known as public oral health is responsible for diagnosing and treating the oral health of a population. To this end, visual oral inspections are performed to gather information on the oral health status of a given population. From this collection of information, also known as an epidemiological survey, the dentist can plan and evaluate actions for the different problems identified. This procedure has limiting factors, such as the limited number of qualified professionals available to perform these tasks and varying interpretations of diagnoses, among others. This context motivated the idea of using intelligent systems techniques to support these tasks. This work therefore proposes an intelligent system able to segment, count, and classify teeth in occlusal intraoral digital photographic images. The proposed system combines machine learning and digital image processing techniques. We first perform a color-based segmentation of the regions of interest (teeth and non-teeth) using a Support Vector Machine (SVM). After these regions are identified, techniques based on morphological operators, such as erosion and the watershed transform, are used for counting the teeth and detecting their boundaries, respectively. With the tooth borders detected, it is possible to compute Fourier descriptors of their shape along with position descriptors. The teeth are then classified by type using an SVM with the one-against-all strategy for the multiclass problem. The multiclass classification problem was approached in two ways: the first approach considered three classes (molar, premolar, and non-tooth), while the second considered five classes (molar, premolar, canine, incisor, and non-tooth). The system presented satisfactory performance in segmenting, counting, and classifying the teeth present in the images.
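As an illustration of the pipeline described above, the following sketch combines a pixel-wise SVM color classifier with erosion-based counting and the watershed transform, using Python with scikit-learn and OpenCV. The training files, image name, and structuring-element sizes are assumptions for the example, not details taken from the thesis.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# Hypothetical training data: RGB pixel samples labeled tooth (1) / non-tooth (0).
X_train = np.load("pixel_samples_rgb.npy")   # assumed file, shape (n, 3)
y_train = np.load("pixel_labels.npy")        # assumed file, shape (n,)

# 1) Color-based segmentation: classify every pixel with an SVM.
svm = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
image = cv2.imread("occlusal.jpg")           # assumed input image
mask = svm.predict(image.reshape(-1, 3)).reshape(image.shape[:2]).astype(np.uint8)

# 2) Erosion separates touching teeth so connected components can be counted.
kernel = np.ones((5, 5), np.uint8)
sure_fg = cv2.erode(mask, kernel, iterations=2)
sure_bg = cv2.dilate(mask, kernel, iterations=3)
unknown = cv2.subtract(sure_bg, sure_fg)
n_teeth, markers = cv2.connectedComponents(sure_fg)
print("approximate tooth count:", n_teeth - 1)   # label 0 is the background

# 3) Watershed on the color image recovers the tooth boundaries.
markers = markers + 1                # background seed becomes 1, teeth 2..n
markers[unknown == 1] = 0            # 0 marks the region watershed must resolve
markers = cv2.watershed(image, markers)
boundaries = (markers == -1)         # watershed lines = tooth borders
```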
Abstract:
This work discusses the importance of image compression for industry: processing and storing images is a standing challenge at Petrobras, where the goal is to optimize storage time and store the largest possible number of images and data. We present an interactive system for processing and storing images in the wavelet domain, together with an interface for digital image processing. The proposal is based on the Peano function and the 1D wavelet transform. The storage system aims to optimize computational space, both for storage and for transmission of images: the Peano function is applied to linearize each image, and the 1D wavelet transform then decomposes the resulting signal. These operations extract the information relevant to storing an image at a lower computational cost and with a very small margin of error when the original and processed images are compared, i.e., little quality is lost when the proposed processing system is applied. The results obtained from the information extracted from the images are displayed in a graphical interface, through which the user can view and analyze the output of the programs directly on screen, without having to deal with the source code. The graphical user interface and the programs for image processing via the Peano function and the 1D wavelet transform were developed in the Java language, allowing a direct exchange of information between them and the user.
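The thesis implements this pipeline in Java; the sketch below reproduces the linearize-then-decompose idea in Python with PyWavelets, using a simple boustrophedon (snake) scan as a stand-in for the Peano curve and a hard threshold that keeps only the largest coefficients. The placeholder image, wavelet choice, and retention rate are assumptions.

```python
import numpy as np
import pywt

def snake_scan(img):
    """Linearize a 2D image row by row, reversing every other row.
    A stand-in for the Peano curve: like a space-filling scan, it keeps
    horizontally adjacent pixels adjacent in the 1D signal."""
    out = img.copy()
    out[1::2] = out[1::2, ::-1]
    return out.ravel()

def compress_1d(signal, keep=0.1, wavelet="haar", level=4):
    """Decompose the linearized signal and keep only the largest
    fraction `keep` of wavelet coefficients (the rest are zeroed)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat = np.concatenate(coeffs)
    thresh = np.quantile(np.abs(flat), 1.0 - keep)
    return [pywt.threshold(c, thresh, mode="hard") for c in coeffs]

# Usage: compress and reconstruct, then measure the error.
img = np.random.rand(256, 256)               # placeholder image
sig = snake_scan(img)
coeffs = compress_1d(sig, keep=0.1)
rec = pywt.waverec(coeffs, "haar")[: sig.size]
print("MSE:", np.mean((sig - rec) ** 2))
```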
Abstract:
With the rapid growth of databases of various types (text, multimedia, etc.), there is a need for methods to order, access, and retrieve data in a simple and fast way. Image databases, beyond these needs, require a representation of the images in which the characteristics of their semantic content are considered. Accordingly, several proposals have been made, such as retrieval based on textual annotations. In the annotation approach, retrieval is based on comparing the textual description a user gives of an image with the descriptions of the images stored in the database. Among its drawbacks, the textual description is highly dependent on the observer, in addition to the computational effort required to describe every image in the database. Another approach is content-based image retrieval (CBIR), where each image is represented by low-level features such as color, shape, and texture. Results in the CBIR area have been very promising; however, representing image semantics by low-level features remains an open problem. New feature extraction algorithms as well as new indexing methods have been proposed in the literature, but these algorithms are becoming increasingly complex. It is therefore natural to ask: is there a relationship between the semantics of an image and the low-level features extracted from it? If such a relationship exists, which descriptors best represent the semantics? And this leads to a further question: how should descriptors be used to represent the content of the images? The work presented in this thesis proposes a method to analyze the relationship between low-level descriptors and semantics in an attempt to answer these questions. It was also observed that there are three ways to index images: using composite feature vectors, using parallel and independent index structures (one for each descriptor or set of descriptors), and using feature vectors arranged in sequential order. The first two have been widely studied and applied in the literature, but there was no record of the third ever having been explored. This thesis therefore also proposes an index that uses a sequential structure of descriptors, where the order of the descriptors is based on the relationship between each descriptor and the users' semantics. Finally, the index proposed in this thesis proved better than traditional approaches, and it was shown experimentally that the order of the sequence matters and that there is a direct relationship between this order and the correlation of the low-level descriptors with the users' semantics.
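A hedged sketch of the sequential-indexing idea: descriptors are applied one after another, ordered by their assumed relation to user semantics, and each stage prunes the candidate set before the next descriptor runs. The descriptor names, relevance order, and pool sizes are illustrative assumptions, not the thesis's measured values.

```python
import numpy as np

# Hypothetical database: per-descriptor feature matrices for 1000 images.
# The dict order is assumed to follow each descriptor's relation to user
# semantics (strongest first), as the thesis proposes.
descriptors = {
    "color_histogram": np.random.rand(1000, 64),   # strongest (illustrative)
    "texture":         np.random.rand(1000, 32),
    "shape":           np.random.rand(1000, 16),   # weakest (illustrative)
}

def sequential_search(query, descriptors, pool_sizes=(200, 50, 10)):
    """Search the sequential index: each descriptor re-ranks only the
    candidates that survived the previous stage."""
    candidates = np.arange(len(next(iter(descriptors.values()))))
    for (name, feats), k in zip(descriptors.items(), pool_sizes):
        d = np.linalg.norm(feats[candidates] - query[name], axis=1)
        candidates = candidates[np.argsort(d)[:k]]  # keep the k best so far
    return candidates

query = {name: np.random.rand(f.shape[1]) for name, f in descriptors.items()}
print("top results:", sequential_search(query, descriptors))
```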
Abstract:
We propose a multi-resolution, coarse-to-fine approach to stereo matching in which the first match may happen at a different depth (resolution level) for each pixel. The proposed technique can attenuate several problems faced by constant-depth algorithms, making it possible to reduce either the number of errors or the number of comparisons needed to obtain equivalent results. Several experiments were performed to demonstrate the method's efficiency, including a comparison with the traditional plain correlation technique, in which the variable-depth multi-resolution matching proposed here produced better results in less processing time.
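A minimal coarse-to-fine sketch using OpenCV, assuming a Gaussian pyramid and simple block matching; the per-pixel variable starting depth of the thesis is simplified here to a fixed number of levels with a coarse-estimate fallback.

```python
import cv2
import numpy as np

def coarse_to_fine_disparity(left, right, levels=3, max_disp=64):
    """Match at the coarsest pyramid level first, then refine: the coarse
    estimate fills in pixels where finer-level matching fails."""
    pyr_l, pyr_r = [left], [right]
    for _ in range(levels - 1):                     # build Gaussian pyramids
        pyr_l.append(cv2.pyrDown(pyr_l[-1]))
        pyr_r.append(cv2.pyrDown(pyr_r[-1]))

    disp = None
    for lvl in reversed(range(levels)):             # coarse -> fine
        nd = max(16, ((max_disp >> lvl) + 15) // 16 * 16)
        bm = cv2.StereoBM_create(numDisparities=nd, blockSize=9)
        d = bm.compute(pyr_l[lvl], pyr_r[lvl]).astype(np.float32) / 16.0
        if disp is not None:
            up = cv2.resize(disp, (d.shape[1], d.shape[0])) * 2.0
            d = np.where(d > 0, d, up)              # fall back to coarse guess
        disp = d
    return disp

# Usage (rectified 8-bit grayscale pair; file names are placeholders):
# disp = coarse_to_fine_disparity(cv2.imread("l.png", 0), cv2.imread("r.png", 0))
```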
Abstract:
Image segmentation is one of the image processing problems that deserves special attention from the scientific community. This work studies unsupervised clustering and pattern recognition methods applicable to medical image segmentation. Methods based on Natural Computing have proven very attractive for such tasks and are studied here to verify their applicability to medical image segmentation. The following methods are implemented: GKA (Genetic K-means Algorithm), GFCMA (Genetic FCM Algorithm), PSOKA (a clustering algorithm based on PSO and K-means), and PSOFCM (a clustering algorithm based on PSO and FCM). In addition, clustering validity indexes are used as quantitative measures to evaluate the results produced by the algorithms. Visual and qualitative evaluations are also carried out, mainly using data from the BrainWeb brain simulator as ground truth.
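A minimal sketch of the PSO-plus-K-means idea behind PSOKA: each particle encodes a full set of k centroids and the swarm minimizes the K-means objective globally. The inertia and acceleration parameters, and the use of raw gray levels as features, are assumptions for illustration, not the thesis's configuration.

```python
import numpy as np

def sse(centroids, X):
    """Fitness: within-cluster sum of squared errors for a centroid set."""
    d = np.linalg.norm(X[:, None, :] - centroids[None], axis=2)
    return np.sum(np.min(d, axis=1) ** 2)

def pso_kmeans(X, k, n_particles=10, iters=50, w=0.7, c1=1.5, c2=1.5):
    """PSOKA-style sketch: PSO searches for the centroid set that
    minimizes the K-means objective."""
    rng = np.random.default_rng(0)
    pos = X[rng.integers(0, len(X), (n_particles, k))]   # init from data
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([sse(p, X) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([sse(p, X) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest

# Usage: cluster gray-level intensities of a (simulated) brain slice.
img = np.random.rand(64, 64)                  # placeholder for BrainWeb data
X = img.reshape(-1, 1)
centroids = pso_kmeans(X, k=3)
labels = np.argmin(np.abs(X - centroids.ravel()), axis=1).reshape(img.shape)
```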
Abstract:
This work proposes a method to localize a simple humanoid robot, without embedded sensors, using images taken from an external camera and image processing techniques. Once the robot is localized relative to the camera, and assuming the position of the camera relative to the world is known, we can compute the position of the robot relative to the world. To move the camera through the workspace, we use a second, wheeled mobile robot with a precise localization system and mount the camera on it. Once the humanoid is localized in the workspace, the necessary actions can be taken to move it; simultaneously, the camera robot is moved so that it keeps capturing good images of the humanoid. The main contributions of this work are: the idea of using another mobile robot to aid the navigation of a humanoid robot that lacks advanced embedded electronics; the choice of intrinsic and extrinsic calibration methods appropriate to the task, especially its real-time part; and the collaborative algorithm for simultaneous navigation of the two robots.
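A hedged sketch of the pose chain described above: the humanoid's pose in the camera frame is estimated from known marker points with cv2.solvePnP, then composed with the camera-in-world pose reported by the wheeled robot. The intrinsics, marker layout, and pixel detections are placeholder values, not calibration data from the thesis.

```python
import cv2
import numpy as np

# Assumed intrinsics from offline calibration (e.g., cv2.calibrateCamera).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist = np.zeros(5)                          # assume negligible distortion

# Known 3D corners of a planar marker on the humanoid, in its own frame,
# and their detected 2D pixel positions in the current image (illustrative).
obj_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0]],
                   np.float32)
img_pts = np.array([[322, 245], [380, 248], [383, 190], [325, 188]],
                   np.float32)

# Extrinsics: humanoid pose in the *camera* frame.
ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
R_cam, _ = cv2.Rodrigues(rvec)

# Camera pose in the *world* frame, supplied by the wheeled robot's
# precise localization system (values here are placeholders).
R_wc = np.eye(3)
t_wc = np.array([[1.0], [2.0], [0.5]])

# Chain the transforms: humanoid position in the world frame.
t_world = R_wc @ tvec + t_wc
print("humanoid position (world):", t_world.ravel())
```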
Abstract:
Image compression consists of representing an image with a small amount of data without losing visual quality. Data compression is important when large images are used, for example satellite images. Full-color digital images typically use 24 bits to specify the color of each pixel, 8 bits for each of the primary components red, green, and blue (RGB). Compressing an image with three or more bands (multispectral) is fundamental to reducing transmission, processing, and storage time; many applications depend on image compression, including medical imaging, satellite imaging, and sensing. In this work a new method for compressing color images is proposed, based on a measure of the information in each band. The technique, called Self-Adaptive Compression (SAC), compresses each band of the image with a different threshold so as to preserve information: it applies strong compression to highly redundant bands, i.e., those carrying less information, and soft compression to bands with a greater amount of information. Two image transforms are used: the Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA). The first step is to convert the data into new, decorrelated bands with PCA; the DCT is then applied to each band. Information is lost when a threshold discards coefficients. This threshold is computed from two elements: the PCA result and a user parameter that defines the compression rate. The system produces three different thresholds, one for each band of the image, proportional to its amount of information. Image reconstruction applies the inverse DCT and inverse PCA. SAC was compared with the JPEG (Joint Photographic Experts Group) standard and with YIQ compression, obtaining better results in terms of MSE (Mean Squared Error). Tests showed that SAC gives better quality under strong compression, with two advantages: (a) being adaptive, it is sensitive to the image type, i.e., it presents good results for diverse kinds of images (synthetic, landscapes, people, etc.); and (b) it needs only one user parameter, so very little human intervention is required.
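A sketch of the SAC idea under stated assumptions: PCA decorrelates the RGB bands, each principal band is transformed with the DCT, and the per-band threshold is scaled by that band's explained variance so that less informative bands are compressed harder. The exact threshold formula of the thesis is not reproduced here; the quantile rule below is an illustrative stand-in.

```python
import numpy as np
from scipy.fft import dctn, idctn

def sac_like_compress(img, rate=0.05):
    """Decorrelate RGB bands with PCA, DCT each principal band, and zero
    small coefficients with a per-band, information-scaled threshold."""
    h, w, _ = img.shape
    X = img.reshape(-1, 3).astype(np.float64)
    mean = X.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov((X - mean).T))
    order = eigval.argsort()[::-1]                # most informative first
    eigval, eigvec = eigval[order], eigvec[:, order]
    bands = ((X - mean) @ eigvec).T.reshape(3, h, w)

    rec_bands = np.empty_like(bands)
    for i in range(3):
        c = dctn(bands[i], norm="ortho")
        # Less informative bands (small eigenvalue) keep fewer coefficients.
        keep = rate * eigval[i] / eigval.sum()
        t = np.quantile(np.abs(c), 1.0 - keep)
        c[np.abs(c) < t] = 0.0                    # discard small coefficients
        rec_bands[i] = idctn(c, norm="ortho")

    rec = rec_bands.reshape(3, -1).T @ eigvec.T + mean
    return rec.reshape(h, w, 3)

# Usage: out = sac_like_compress(np.asarray(rgb_image), rate=0.05)
```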
Abstract:
A 3D binary image is considered well-composed if, and only if, the union of the faces shared by the foreground and background voxels of the image is a surface in R³. Well-composed images have desirable topological properties that allow us to simplify and optimize algorithms widely used in computer graphics, computer vision, and image processing. These advantages have fostered the development of algorithms for repairing two-dimensional (2D) and three-dimensional (3D) images that are not well-composed, known as repairing algorithms. In this dissertation, we propose two repairing algorithms, one randomized and one deterministic. Both are capable of making topological repairs in 3D binary images, producing well-composed images similar to the originals. The key idea behind both algorithms is to iteratively change the assigned color of some points in the input image from 0 (background) to 1 (foreground) until the image becomes well-composed. The points whose colors are changed are chosen according to their values in the fuzzy connectivity map resulting from the image segmentation process; the use of this map ensures that the subset of points chosen by the algorithm at any given iteration is the one with the least affinity to the background among all possible choices.
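The sketch below illustrates the repair loop in a 2D simplification (the dissertation addresses the 3D case): critical 2x2 configurations, where foreground pixels touch only diagonally, are located, and at each iteration the background pixel with the highest fuzzy-connectivity value is flipped to foreground. The fuzzy map is assumed to come from a prior fuzzy segmentation.

```python
import numpy as np

def critical_configs(img):
    """Find 2x2 blocks where foreground pixels meet only diagonally --
    the 2D critical configurations that break well-composedness."""
    a, b = img[:-1, :-1], img[:-1, 1:]
    c, d = img[1:, :-1], img[1:, 1:]
    diag1 = (a == 1) & (d == 1) & (b == 0) & (c == 0)
    diag2 = (b == 1) & (c == 1) & (a == 0) & (d == 0)
    return np.argwhere(diag1 | diag2)     # top-left corner of each block

def repair(img, fuzzy_map, max_iters=10000):
    """Flip, at each step, the background pixel of a critical configuration
    with the highest fuzzy-connectivity value (least background affinity)."""
    img = img.copy()
    for _ in range(max_iters):
        crit = critical_configs(img)
        if len(crit) == 0:
            return img                    # image is now well-composed
        best, best_val = None, -1.0
        for (i, j) in crit:
            for (di, dj) in [(0, 0), (0, 1), (1, 0), (1, 1)]:
                p = (i + di, j + dj)
                if img[p] == 0 and fuzzy_map[p] > best_val:
                    best, best_val = p, fuzzy_map[p]
        img[best] = 1                     # background -> foreground
    return img
```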
Abstract:
Image segmentation is the process of subdividing an image into constituent regions or objects that share similar features. In video segmentation, beyond subdividing each frame into objects with similar features, there is a consistency requirement among the segmentations of successive frames. Fuzzy segmentation is a region-growing technique that assigns to each element of an image (which may have been corrupted by noise and/or shading) a grade of membership between 0 and 1 in an object. In this work we present an application that uses a fuzzy segmentation algorithm to identify and select particles in micrographs, and an extension of the algorithm to perform video segmentation. Here, a video shot is treated as a three-dimensional volume in which different z slices are occupied by different frames of the shot. The volume is interactively segmented from selected seed elements, which determine the affinity functions based on their motion and color properties. The color information can be extracted from a specific color space, or from three channels of a set of color models selected according to the correlation of the information across all channels. The motion information is provided in the form of dense optical flow maps. Finally, segmentations of real and synthetic videos and their application in a non-photorealistic rendering (NPR) tool are presented.
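A minimal sketch of the affinity computation under stated assumptions: dense optical flow is estimated with OpenCV's Farnebäck method, and color and motion are mapped into a [0, 1] affinity toward a seed element. The exponential weighting and its scale constants are illustrative choices, not the thesis's affinity functions.

```python
import cv2
import numpy as np

def affinity_volume(frames, seed):
    """Build per-pixel affinity to a seed from color similarity and dense
    optical flow, for a 3D (t, y, x) fuzzy segmentation volume."""
    vol = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    seed_color = frames[0][seed].astype(np.float32)   # seed = (y, x)
    for f in frames[1:]:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        motion = np.linalg.norm(flow, axis=2)
        color_d = np.linalg.norm(f.astype(np.float32) - seed_color, axis=2)
        # Affinity in [0, 1]: high for pixels whose color matches the seed
        # and whose motion is small (an illustrative assumption).
        vol.append(np.exp(-color_d / 50.0) * np.exp(-motion / 5.0))
        prev = gray
    return np.stack(vol)      # affinity slices feed the region growing
```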
Abstract:
Image segmentation is the process of labeling pixels as belonging to different objects, an important step in many image processing systems. This work proposes a clustering method for segmenting color digital images with textural features, by reducing the dimensionality of the color image histograms and using the Skew Divergence to compute the fuzzy affinity functions. This approach is appropriate for segmenting images with colorful textural features, such as geological, dermoscopic, and other natural images containing mountains, grass, or forests. Furthermore, experimental results of colored texture clustering on images of sedimentary porous aquifer rocks are presented and analyzed in terms of precision to verify the method's effectiveness.
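The Skew Divergence itself has a standard closed form, sketched below together with a hypothetical local-histogram affinity; the window size, bin count, and the exponential mapping to an affinity are assumptions for the example, not the thesis's parameters.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence between two normalized histograms."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def skew_divergence(p, q, alpha=0.99):
    """Skew Divergence: KL(p || alpha*q + (1-alpha)*p). Mixing a little of
    p into q keeps the divergence finite even when q has empty bins."""
    return kl(p, alpha * q + (1.0 - alpha) * p)

def local_histogram(img, y, x, win=7, bins=16):
    """Normalized gray-level histogram of a window around (y, x)."""
    h = win // 2
    patch = img[max(0, y - h):y + h + 1, max(0, x - h):x + h + 1]
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return hist / hist.sum()

# Usage sketch: affinity between two pixels from their local textures.
img = (np.random.rand(64, 64) * 255).astype(np.uint8)   # placeholder image
p, q = local_histogram(img, 10, 10), local_histogram(img, 40, 40)
print("affinity:", np.exp(-skew_divergence(p, q)))      # assumed mapping
```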
Abstract:
This work analyzes the behavior of several algorithms commonly found in the stereo correspondence literature when applied to full HD images (1920x1080 pixels), in order to establish, within the precision versus runtime trade-off, the applications for which each method is best suited. The images are obtained by a system composed of a stereo camera coupled to a computer via a capture board. The OpenCV library is used for the computer vision and image processing operations involved. The algorithms discussed are a block-matching search using the Sum of Absolute Differences (SAD), a global technique based on energy minimization via graph cuts, and a so-called semi-global matching technique. The analysis criteria are processing time, heap memory consumption, and the mean absolute error of the generated disparity maps.
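A hedged sketch of the runtime comparison with current OpenCV: SAD block matching (StereoBM) and semi-global matching (StereoSGBM) are timed on an assumed full-HD pair. The graph-cuts matcher used in the study belonged to OpenCV's legacy API and is absent from modern releases, so it is omitted here; file names and parameters are illustrative.

```python
import time
import cv2
import numpy as np

left = cv2.imread("left_1080p.png", cv2.IMREAD_GRAYSCALE)    # assumed files
right = cv2.imread("right_1080p.png", cv2.IMREAD_GRAYSCALE)

matchers = {
    "SAD block matching": cv2.StereoBM_create(numDisparities=128,
                                              blockSize=21),
    "semi-global (SGBM)": cv2.StereoSGBM_create(minDisparity=0,
                                                numDisparities=128,
                                                blockSize=5),
}

for name, m in matchers.items():
    t0 = time.perf_counter()
    disp = m.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> px
    dt = time.perf_counter() - t0
    print(f"{name}: {dt:.2f} s, disparity range "
          f"[{disp.min():.1f}, {disp.max():.1f}]")
```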
Abstract:
Objective: A study in rats to evaluate the effect of basic Fibroblast Growth Factor (bFGF) on the healing of the abdominal aponeurosis. Methods: Twenty Wistar rats were randomly divided into two equal groups. The animals were anesthetized with sodium pentobarbital at a dose of 20 mg/kg intraperitoneally and submitted to a 4 cm midline laparotomy, whose aponeurotic layer was sutured with 5-0 mononylon. In group I, a 5 mg dose of bFGF was applied over the aponeurosis suture; in group II (control), 0.9% saline solution was applied over the suture line. After 7 days of observation, the animals were killed with an anesthetic overdose. The aponeurotic layer, 1.5 cm wide, was submitted to tensile strength testing on an EMIC MF500 testing machine. Biopsies of the suture zones were processed and stained with HE and Masson's trichrome. The histopathological findings were quantified with a digital image capture and processing system (Image-Pro Plus). The data were analyzed with the t-test at a 0.05 significance level. Results: In the animals of group I (experimental), the suture zone of the aponeurotic layer withstood a load of 1,103 ± 103.39 gf, and the quantification of the histopathological data of this group reached a mean density of 226 ± 29.32. In group II (control), the load supported by the suture zone was 791.1 ± 92.77 gf; comparison of the mean tensile strengths of the two groups revealed a significant difference (p < 0.01). Histopathological examination of the slides of this group revealed a mean density of 114.1 ± 17.01, corresponding to a significant difference when the means of the two groups were compared (p < 0.01). Conclusion: The data allow us to conclude that bFGF contributed to increasing the strength of the sutured aponeurosis and to improving the histopathological parameters of healing.
Abstract:
This dissertation presents a cooperative virtual multimedia environment for use in the medical field over a TCP/IP computer network. The Virtual Diagnosis Room environment makes it possible to perform cooperative tasks using classical image processing, synchronous and asynchronous text conversation (chat), and content markup, in order to produce remote cooperative diagnoses. The dissertation also describes the tool and its functions in detail, which enable interaction among users, along with implementation details, contributions, and weaknesses of this work.