959 results for Techniques: Image Processing


Relevance:

90.00%

Publisher:

Abstract:

In this paper, a colour texture segmentation method which unifies region and boundary information is proposed. The algorithm uses a coarse detection of the perceptual (colour and texture) edges of the image to adequately place and initialise a set of active regions. The colour texture of each region is modelled by combining non-parametric kernel density estimation (which captures the colour behaviour) with classical co-occurrence matrix based texture features. Region information is thus defined, and accurate boundary information can be extracted to guide the segmentation process. The regions then compete concurrently for the image pixels in order to segment the whole image, taking both information sources into account. Finally, experimental results are presented which demonstrate the performance of the proposed method.
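To make the two region descriptors concrete, here is a minimal Python sketch (not the authors' code; function names and parameter choices are hypothetical) pairing a kernel density estimate of a region's colours with co-occurrence-matrix texture features:

```python
import numpy as np
from scipy.stats import gaussian_kde
from skimage.feature import graycomatrix, graycoprops

def region_colour_model(rgb_pixels):
    """Non-parametric colour model of one active region: a kernel
    density estimate over its RGB samples ((n, 3) float array)."""
    return gaussian_kde(rgb_pixels.T)        # gaussian_kde wants (dims, n)

def region_texture_features(gray_patch):
    """Classical co-occurrence (GLCM) texture features of the region;
    gray_patch must be a uint8 image."""
    glcm = graycomatrix(gray_patch, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean()
                     for p in ('contrast', 'homogeneity', 'energy')])

# During region competition, a candidate pixel's colour likelihood under
# a region is then kde(pixel_rgb.reshape(3, 1)).
```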

Relevance:

90.00%

Publisher:

Abstract:

An unsupervised approach to image segmentation which fuses region and boundary information is presented. The proposed approach takes advantage of the combined use of three different strategies: the guidance of seed placement, the control of the decision criterion, and boundary refinement. The new algorithm uses the boundary information to initialise a set of active regions which compete for the pixels in order to segment the whole image, as illustrated by the sketch below. The method is implemented on a multiresolution representation which ensures noise robustness as well as computational efficiency. The accuracy of the segmentation results has been proven through an objective comparative evaluation of the method.
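One plausible reading of the boundary-guided seed placement (a sketch only, not the paper's exact procedure; the quantile and area thresholds are made up) is to initialise active regions in homogeneous areas, i.e. far from boundary evidence:

```python
import numpy as np
from skimage.filters import sobel
from skimage.measure import label, regionprops

def place_seeds(gray, grad_quantile=0.3, min_area=50):
    """Initialise active regions in the interior of homogeneous areas:
    keep pixels whose gradient magnitude falls below a low quantile,
    then label the connected low-gradient components as seeds."""
    grad = sobel(gray)
    homogeneous = grad < np.quantile(grad, grad_quantile)
    seeds = label(homogeneous)
    # drop tiny components, which rarely make stable seeds
    keep = [r.label for r in regionprops(seeds) if r.area >= min_area]
    return np.where(np.isin(seeds, keep), seeds, 0)
```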

Relevance:

90.00%

Publisher:

Abstract:

This work investigates the performance of recent feature-based matching techniques when applied to the registration of underwater images. The matching methods are tested in combination with different contrast-enhancing pre-processing of the images. Based on experiments covering the artifacts and deformations that typically dominate underwater imagery, the best-performing pre-processing, detection, and description methods are identified.
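The abstract does not name the tested detectors and descriptors; as an assumption, the sketch below uses ORB from scikit-image with CLAHE as one contrast-enhancing pre-processing, just to show the shape of such an experiment:

```python
from skimage import exposure
from skimage.feature import ORB, match_descriptors

def match_pair(img1, img2, enhance=True, n_keypoints=500):
    """Match one grayscale underwater image pair, optionally after
    CLAHE contrast enhancement, and return the number of cross-checked
    correspondences (a crude proxy for registration quality)."""
    if enhance:
        img1 = exposure.equalize_adapthist(img1)   # CLAHE
        img2 = exposure.equalize_adapthist(img2)
    orb1, orb2 = ORB(n_keypoints=n_keypoints), ORB(n_keypoints=n_keypoints)
    orb1.detect_and_extract(img1)
    orb2.detect_and_extract(img2)
    return len(match_descriptors(orb1.descriptors, orb2.descriptors,
                                 cross_check=True))
```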

Relevance:

90.00%

Publisher:

Abstract:

We present a new approach to modelling and classifying breast parenchymal tissue. Given a mammogram, we first discover the distribution of the different tissue densities in an unsupervised manner, and then use this tissue distribution to perform the classification. We achieve this using a classifier based on local descriptors and probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature. We studied the influence of different descriptors, such as texton and SIFT features, at the classification stage, showing that textons outperform SIFT in all cases. Moreover, we demonstrate that pLSA automatically extracts meaningful latent aspects, generating a compact density-based tissue representation that is useful for mammogram classification. We show the results of tissue classification over the MIAS and DDSM datasets, and compare our method with previous approaches on the same datasets, showing the better performance of our proposal.
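pLSA is not available in scikit-learn, so the sketch below (a minimal EM fit, with hypothetical names; not the authors' implementation) fits pLSA to a mammograms-by-texton-labels count matrix. The per-document topic mixture P(z|d) is the compact representation that would be fed to a classifier:

```python
import numpy as np

def plsa(counts, n_topics, n_iter=100, seed=0):
    """Fit pLSA by EM on a documents x words count matrix."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_z_d = rng.random((n_docs, n_topics)); p_z_d /= p_z_d.sum(1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words)); p_w_z /= p_w_z.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w) for every (document, word) pair
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]         # (d, z, w)
        post = joint / (joint.sum(1, keepdims=True) + 1e-12)
        # M-step: re-estimate P(w|z) and P(z|d) from weighted counts
        nzw = counts[:, None, :] * post                       # (d, z, w)
        p_w_z = nzw.sum(0); p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
        p_z_d = nzw.sum(2); p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
    return p_z_d, p_w_z   # P(z|d): compact representation; P(w|z): aspects
```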

Relevance:

90.00%

Publisher:

Abstract:

Breast cancer is one of the leading causes of death among women in developed countries. It is treated most effectively when detected early, which makes imaging techniques very important. Ultrasound is one of the most widely used imaging techniques after X-rays. When processing ultrasound images, experts in this field face a series of limitations when applying filters to the images with certain tools. One of these limitations is the lack of interactivity these tools offer. To overcome it, an interactive tool has been developed that allows the parameter map to be explored while visualising the result of the filtering in real time, in a dynamic and intuitive way. This tool has been developed within the medical image visualisation environment MeVisLab. MeVisLab is a powerful, modular environment for the development of image processing algorithms, visualisation, and interaction methods, especially focused on medical imaging. In addition to basic image processing and visualisation modules, it includes advanced segmentation and registration algorithms and many morphological and functional image analyses. An experiment was carried out with four experts who, using the developed tool, chose the parameters they considered suitable for filtering a series of ultrasound images. The experiment used filters that MeVisLab already implements: the Bilateral Filter, Anisotropic Diffusion, and a combination of a Median and a Mean filter. From this experiment, the captured parameters were studied and a series of estimators were proposed that should be suitable in most cases for the pre-processing of ultrasound images.
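Outside MeVisLab, the interaction idea can be sketched in a few lines of Python (an illustration only, assuming a bilateral filter on a stand-in grayscale image; slider ranges are arbitrary):

```python
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider
from skimage import data, img_as_float
from skimage.restoration import denoise_bilateral

img = img_as_float(data.camera())    # stand-in for an ultrasound image

fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.25)
shown = ax.imshow(denoise_bilateral(img, sigma_color=0.1,
                                    sigma_spatial=3), cmap='gray')

s_color = Slider(fig.add_axes([0.2, 0.10, 0.6, 0.03]),
                 'sigma_color', 0.01, 0.5, valinit=0.1)
s_space = Slider(fig.add_axes([0.2, 0.05, 0.6, 0.03]),
                 'sigma_spatial', 1.0, 10.0, valinit=3.0)

def update(_):
    # re-filter and redraw whenever a slider moves
    shown.set_data(denoise_bilateral(img, sigma_color=s_color.val,
                                     sigma_spatial=s_space.val))
    fig.canvas.draw_idle()

s_color.on_changed(update)
s_space.on_changed(update)
plt.show()
```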

Relevance:

90.00%

Publisher:

Abstract:

This thesis deals with distance transforms, a fundamental issue in image processing and computer vision. Two new distance transforms for gray-level images are presented, and as a new application, they are applied to gray-level image compression.

The new distance transforms are both extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (the DTOCS) and a real-valued distance transform (the EDTOCS) on gray-level images. Both the DTOCS and the EDTOCS require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass algorithm has to be applied to the image more than once, typically 3 to 10 times. Different types of kernels can be adopted.

It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance functions (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not: it gives a weighted version of the chessboard distance map in which the weights are not constant but are the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The EDTOCS differs from the DTOCS in how it calculates these gray-level differences: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented.

Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare, and this thesis introduces a new application area for them. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as control points, and the second group compares the DTOCS distance with the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally, and it is shown to be independent of the number of control points, i.e. of the compression ratio. Also a new morphological image decompression scheme is presented, the 8 kernels' method. Several decompressed images are shown; the best results are obtained using the Delaunay triangulation, and the obtained image quality equals that of the DCT images with a 4 x 4
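A minimal sketch of the two-pass idea follows, assuming the common DTOCS formulation in which the local step cost between 8-neighbors is the absolute gray-value difference plus one (slow pure-Python loops, for illustration only):

```python
import numpy as np

def dtocs(gray, region):
    """Two-pass DTOCS sketch: chessboard distance weighted by
    gray-level differences (local cost |dG| + 1).
    gray: 2-D gray-level array; region: boolean mask, True where the
    distance is computed, False marks the zero-distance background.
    For complicated images the two passes may need to be repeated
    (the thesis reports typically 3 to 10 rounds)."""
    d = np.where(region, np.inf, 0.0)
    h, w = gray.shape
    # causal neighbors of the forward and backward raster scans
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]
    for offsets, rows, cols in (
        (fwd, range(h), range(w)),
        (bwd, range(h - 1, -1, -1), range(w - 1, -1, -1)),
    ):
        for i in rows:
            for j in cols:
                if not region[i, j]:
                    continue
                for di, dj in offsets:
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        step = abs(float(gray[i, j]) - float(gray[ni, nj])) + 1.0
                        if d[ni, nj] + step < d[i, j]:
                            d[i, j] = d[ni, nj] + step
    return d
```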

Relevance:

90.00%

Publisher:

Abstract:

In image processing, segmentation algorithms constitute one of the main focuses of research. In this paper, new image segmentation algorithms based on a hard version of the information bottleneck method are presented. The objective of this method is to extract a compact representation of one variable, considered the input, with minimal loss of mutual information with respect to another variable, considered the output. First, we introduce a split-and-merge algorithm based on the definition of an information channel between a set of regions of the image (input) and the intensity histogram bins (output). From this channel, the maximization of the mutual information gain is used to optimize the image partitioning. Then, the merging of the regions obtained in the previous phase is carried out by minimizing the loss of mutual information. From the inversion of the above channel, we also present a new histogram clustering algorithm based on the minimization of the mutual information loss, where the input variable now represents the histogram bins and the output is given by the set of regions obtained from the split-and-merge algorithm. Finally, we introduce two new clustering algorithms which show how the information bottleneck method can be applied to the registration channel obtained when two multimodal images are correctly aligned. Experiments on 2-D and 3-D images show the behavior of the proposed algorithms.
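The core quantities are easy to state in code. Below is a minimal sketch (hypothetical helper names) of the channel's mutual information and of the greedy merging criterion, the loss of I(regions; bins) when two region rows of the joint table are merged:

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) of a channel given its joint distribution p(x, y)."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (px * py)[nz])))

def merge_loss(p_xy, i, j):
    """Mutual information lost by merging regions i and j, i.e. the
    quantity minimized by the merging phase."""
    merged = np.delete(p_xy, j, axis=0)
    merged[i if i < j else i - 1] = p_xy[i] + p_xy[j]
    return mutual_information(p_xy) - mutual_information(merged)
```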

Relevance:

90.00%

Publisher:

Abstract:

Robotic platforms have advanced greatly in terms of their remote sensing capabilities, including the ability to obtain optical information using cameras. Alongside these advances, visual mapping has become a very active research area, as it enables the mapping of areas inaccessible to humans. This requires efficient data processing to increase both the quality of the final mosaic and the computational efficiency. In this paper, we propose an efficient image mosaicing algorithm for large-area visual mapping in underwater environments using multiple underwater robots. Our method identifies overlapping image pairs across the trajectories carried out by the different robots during the topology estimation process, a cornerstone for efficiently mapping large areas of the seafloor. We present comparative results based on challenging real underwater datasets which simulate multi-robot mapping.
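As an assumption about what the first cut of topology estimation might look like (the paper's actual criterion is not given here), one can pool the estimated image-centre positions from all robots' trajectories and propose only the geometrically plausible pairs for expensive image matching:

```python
import numpy as np
from scipy.spatial import cKDTree

def candidate_overlaps(centers, footprint_radius):
    """Propose image pairs whose footprints could overlap.
    centers: (n, 2) array of estimated image-centre coordinates in
    metres, pooled over every robot's trajectory; pairs closer than
    two footprint radii are candidates for image matching."""
    tree = cKDTree(np.asarray(centers))
    return sorted(tree.query_pairs(r=2.0 * footprint_radius))
```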

Relevance:

90.00%

Publisher:

Abstract:

Scientific visualization studies and defines algorithms and data structures that make data sets comprehensible through images. In medical applications, the data to be interpreted come from different acquisition devices and are represented in a voxel model. The usefulness of this voxel model depends on being able to view it from the ideal point of view, that is, the one that conveys the most information. In addition, the Magic Mirrors technique makes it possible to view the voxel model from several points of view at once, showing a different property value in each mirror. In this project we implement an algorithm that determines the ideal point of view for visualising a voxel model, as well as the ideal points of view for the mirrors, in order to obtain as much information as possible about the voxel model. The algorithm relies on information theory to decide which visualization is best. It also determines the optimal colour assignment for the voxel model.
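One standard information-theoretic measure used in this line of work is viewpoint entropy, the Shannon entropy of the projected-area fractions of the visible voxels/faces; the project's exact measure may differ, so take this as an illustrative sketch:

```python
import numpy as np

def viewpoint_entropy(visible_areas):
    """Shannon entropy of the fractions of the projection covered by
    each visible element; a higher value means the view reveals more."""
    p = np.asarray(visible_areas, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def best_viewpoint(area_table):
    """Pick the candidate view with maximal entropy.
    area_table: one per-view array of visible areas per candidate."""
    scores = [viewpoint_entropy(a) for a in area_table]
    return int(np.argmax(scores)), scores
```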

Relevance:

90.00%

Publisher:

Abstract:

Diabetes is a rapidly increasing worldwide problem characterised by defective metabolism of glucose that causes long-term dysfunction and failure of various organs. The most common complication of diabetes is diabetic retinopathy (DR), one of the primary causes of blindness and visual impairment in adults. The rapid increase of diabetes pushes the limits of current DR screening capabilities, for which digital imaging of the eye fundus (retinal imaging) together with automatic or semi-automatic image analysis algorithms provides a potential solution.

In this work, the use of colour in the detection of diabetic retinopathy is statistically studied using a supervised algorithm based on one-class classification and Gaussian mixture model estimation. The presented algorithm distinguishes a certain diabetic lesion type from all other possible objects in eye fundus images by estimating only the probability density function of that lesion type. For training and ground truth estimation, the algorithm combines the manual annotations of several experts, for which the best practices were experimentally selected. By assessing the algorithm's performance in experiments with colour space selection, illuminance and colour correction, and background class information, the use of colour in the detection of diabetic retinopathy was quantitatively evaluated.

Another contribution of this work is a benchmarking framework for eye fundus image analysis algorithms, needed for the development of automatic DR detection algorithms. The framework provides guidelines on how to construct a benchmarking database comprising true patient images, ground truth, and an evaluation protocol. The evaluation is based on standard receiver operating characteristic analysis and follows medical decision-making practice, providing protocols for image- and pixel-based evaluations. During the work, two public medical image databases with ground truth were published: DIARETDB0 and DIARETDB1. The framework, the DR databases, and the final algorithm are made public on the web to set baseline results for the automatic detection of diabetic retinopathy. Although it deviates from the general context of the thesis, a simple and effective optic disc localisation method is also presented, since normal eye fundus structures are fundamental in the characterisation of DR.
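The one-class idea, estimating only the lesion class's density and thresholding its likelihood, can be sketched with scikit-learn's Gaussian mixture (hypothetical function names; the component count and the 5% operating point are assumptions, not the thesis's settings):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_lesion_model(lesion_pixels, n_components=4):
    """One-class model: estimate the pdf of a single lesion type's
    colours with a Gaussian mixture (pixels as an (n, 3) array in
    the chosen colour space)."""
    return GaussianMixture(n_components=n_components,
                           covariance_type='full').fit(lesion_pixels)

def detect(gmm, image_pixels, train_pixels, quantile=0.05):
    """Flag pixels whose log-likelihood under the lesion model exceeds
    a threshold taken from the training-score distribution."""
    thr = np.quantile(gmm.score_samples(train_pixels), quantile)
    return gmm.score_samples(image_pixels) >= thr
```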

Relevance:

90.00%

Publisher:

Abstract:

This research proposes a methodology for assessing broiler breeder response to changes in the rearing thermal environment. Continuous video recording of the analyzed flock may offer compelling evidence of thermal comfort, as well as other indications of welfare. An algorithm for classifying specific broiler breeder behaviors was developed. Videos were recorded over three boxes in which 30 breeders were reared; the boxes were mounted inside an environmental chamber where the ambient temperature varied from cold to hot. Digital images were processed based on the number of pixels, according to their light-intensity variation and binary contrast, yielding a sequence of behaviors related to welfare. The system used standard x, y coordinates, where x represents the horizontal distance from the top left of the work area to a point P, and y the vertical distance. The video images were observed, and a grid was developed to identify the areas where the birds stayed and the time they spent at each place. The sequence was analyzed frame by frame, comparing the data against adopted thermoneutral rearing standards. The grid mask was overlaid on the real bird image; the resulting image allows the visualization of clusters, as birds in a flock behave in certain patterns. An algorithm indicating the breeder response to the thermal environment was developed.
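A minimal sketch of the grid-based occupancy analysis described above (assuming binary segmentation by a simple intensity threshold; the grid size and threshold are placeholders, not the study's values):

```python
import numpy as np

def occupancy_grid(frames, n_rows, n_cols, threshold):
    """Accumulate where birds spend time: threshold each gray frame,
    split it into an n_rows x n_cols grid, and count bird pixels per
    cell over the whole video sequence."""
    grid = np.zeros((n_rows, n_cols))
    for frame in frames:                   # iterable of 2-D gray arrays
        binary = frame > threshold         # bird pixels by intensity
        h, w = binary.shape
        for r in range(n_rows):
            for c in range(n_cols):
                cell = binary[r * h // n_rows:(r + 1) * h // n_rows,
                              c * w // n_cols:(c + 1) * w // n_cols]
                grid[r, c] += cell.sum()
    return grid / grid.sum()               # fraction of occupancy per cell
```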

Relevance:

90.00%

Publisher:

Abstract:

The authors thoroughly report the development, technical aspects, and performance of the first navigated liver resections in Brazil, by laparotomy and laparoscopy, performed at the National Cancer Institute, Ministry of Health, using a surgical navigator.

Relevance:

90.00%

Publisher:

Abstract:

The papermaking industry has been continuously developing intelligent solutions to characterize the raw materials it uses, to control the manufacturing process in a robust way, and to guarantee the desired quality of the end product. With much-improved imaging techniques and image-based analysis methods, it has become possible to look inside the manufacturing pipeline and propose more effective alternatives to human expertise. This study focuses on the development of image analysis methods for the pulping process of papermaking. Pulping starts with wood disintegration and forming the fiber suspension, which is subsequently bleached, mixed with additives and chemicals, and finally dried and shipped to the papermaking mills. At each stage of the process it is important to analyze the properties of the raw material to guarantee product quality.

To evaluate the properties of fibers, the main component of the pulp suspension, a framework for fiber characterization based on microscopic images is proposed as the first contribution of this thesis. The framework allows the computation of fiber length and curl index, which correlate well with ground truth values. The bubble detection method, the second contribution, was developed to estimate the gas volume at the delignification stage of the pulping process based on high-resolution in-line imaging. The gas volume was estimated accurately and the solution enabled just-in-time process termination, although the accurate estimation of bubble size categories remained challenging. As the third contribution, optical flow computation was studied and the methods were successfully applied to pulp flow velocity estimation based on double-exposed images. Finally, a framework for classifying dirt particles in dried pulp sheets, including semisynthetic ground truth generation, feature selection, and a performance comparison of state-of-the-art classification techniques, was proposed as the fourth contribution and successfully tested on semisynthetic and real-world pulp sheet images. Together, these four contributions assist in developing integrated, factory-level, vision-based process control.
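For the first contribution, a common way to derive fiber length and curl from a microscopic image is via the fiber skeleton; the sketch below assumes one popular definition of the curl index (curl = L/d - 1, length over end-to-end distance), which may differ from the thesis's exact formulation:

```python
import numpy as np
from skimage.morphology import skeletonize

def fibre_length_and_curl(fibre_mask):
    """Fibre descriptors from a single-fibre binary mask: skeletonize,
    take the skeleton pixel count as an approximate contour length,
    and compare it with the end-to-end chord to obtain a curl index."""
    skel = skeletonize(fibre_mask)
    ys, xs = np.nonzero(skel)
    length = len(xs)                  # crude length in pixels
    pts = np.column_stack([xs, ys]).astype(float)
    # end-to-end distance: the two skeleton points farthest apart
    d = max(np.linalg.norm(p - q) for p in pts for q in pts)
    return length, length / d - 1.0
```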

Relevance:

90.00%

Publisher:

Abstract:

Weed mapping is a useful tool for site-specific herbicide applications. The objectives of this study were (1) to determine the percentage of land area covered by weeds in no-till and conventionally tilled fields of common bean using digital image processing and geostatistics, and (2) to compare two types of cameras. Two digital cameras (color and infrared) and a differential GPS were affixed to a center pivot structure for image acquisition. Sample field images were acquired in a regular grid pattern, and the images were processed to estimate the percentage of weed cover. After calculating the georeferenced weed percentage values, maps were constructed using geostatistical techniques. Based on the results, color images are recommended for mapping the percentage of weed cover in no-till systems, while infrared images are recommended for weed mapping in conventional tillage systems.
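The abstract does not detail the image-processing step; a common sketch for estimating the percentage of vegetation cover in a colour field image (an assumption, not the study's method) uses the excess-green index with Otsu thresholding:

```python
import numpy as np
from skimage.filters import threshold_otsu

def weed_cover_percentage(rgb):
    """Percentage of the image covered by green vegetation: compute the
    excess-green index ExG = 2g - r - b on chromatic coordinates, then
    threshold it with Otsu and report the covered fraction."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=2) + 1e-9                 # avoid division by zero
    r, g, b = (rgb[..., i] / s for i in range(3))
    exg = 2 * g - r - b
    mask = exg > threshold_otsu(exg)
    return 100.0 * mask.mean()
```

The per-image percentages, georeferenced with the GPS positions, would then feed the geostatistical interpolation that produces the weed map.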

Relevance:

90.00%

Publisher:

Abstract:

This thesis presents a framework for the segmentation of clustered, overlapping convex objects. The proposed approach is based on a three-step framework addressing the tasks of seed point extraction, contour evidence extraction, and contour estimation. The state-of-the-art techniques for each step were studied and evaluated using synthetic and real microscopic image data, and based on the evaluation results, a method combining the best performer in each step was presented. In the proposed method, the Fast Radial Symmetry transform, an edge-to-marker association algorithm, and ellipse fitting are employed for seed point extraction, contour evidence extraction, and contour estimation, respectively. Using synthetic and real image data, the proposed method was evaluated and compared with two competing methods, and the results showed a promising improvement over them, with high segmentation and size-distribution estimation accuracy.
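The last step, recovering a full outline from partial contour evidence by ellipse fitting, can be sketched with scikit-image (hypothetical function name; the fitting model itself is the one the abstract names):

```python
import numpy as np
from skimage.measure import EllipseModel

def estimate_contour(evidence_points):
    """Contour estimation: fit an ellipse to the visible contour
    evidence of one object, recovering its full (occluded) outline.
    evidence_points: (n, 2) array of (x, y) edge points."""
    model = EllipseModel()
    if not model.estimate(evidence_points):
        return None                       # degenerate evidence
    t = np.linspace(0, 2 * np.pi, 100)
    return model.predict_xy(t)            # (100, 2) points on the ellipse
```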