46 results for colour-based segmentation

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance: 100.00%

Abstract:

The mechanisms underlying the parsing of a spatial distribution of velocity vectors into two adjacent (spatially segregated) or overlapping (transparent) motion surfaces were examined using random dot kinematograms. Parsing might occur using either of two principles. Surfaces might be defined on the basis of similarity of motion vectors and then sharp perceptual boundaries drawn between different surfaces (continuity-based segmentation). Alternatively, detection of a high gradient of direction or speed separating the motion surfaces might drive the process (discontinuity-based segmentation). To establish which method is used, we examined the effect of blurring the motion direction gradient. In the case of a sharp direction gradient, each dot had one of two directions differing by 135°. With a shallow gradient, most dots had one of two directions but the directions of the remainder spanned the range between one motion-defined surface and the other. In the spatial segregation case the gradient defined a central boundary separating two regions. In the transparent version the dots were randomly positioned. In both cases all dots moved with the same speed and existed for only two frames before being randomly replaced. The ability of observers to parse the motion distribution was measured in terms of their ability to discriminate the direction of one of the two surfaces. Performance was hardly affected by spreading the gradient over at least 25% of the dots (corresponding to a 1° strip in the segregation case). We conclude that detection of sharp velocity gradients is not necessary for distinguishing different motion surfaces.
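For readers who want to see the gradient manipulation concretely, the following is a minimal sketch (ours, not the authors' stimulus code) of how dot directions might be assigned in the sharp versus blurred gradient conditions, using the 135° separation and the 25% blur fraction quoted above:

import numpy as np

def dot_directions(n_dots, separation_deg=135.0, blur_fraction=0.25):
    """Assign a motion direction (degrees) to each dot. With blur_fraction = 0
    every dot takes one of two directions separated by separation_deg (sharp
    gradient); otherwise that fraction of dots is given directions spanning
    the range in between (shallow gradient)."""
    directions = np.random.choice([0.0, separation_deg], size=n_dots)
    n_blur = int(blur_fraction * n_dots)
    blur_idx = np.random.choice(n_dots, size=n_blur, replace=False)
    directions[blur_idx] = np.random.uniform(0.0, separation_deg, size=n_blur)
    return directions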

Relevance: 100.00%

Abstract:

Colour-based particle filters have been used extensively in the literature, giving rise to multiple applications. However, tracking coloured objects through time has an important drawback, since the way in which the camera perceives the colour of the object can change. Simple updates are often used to address this problem, but they imply a risk of distorting the model and losing the target. In this paper, a joint image/characteristic-space tracking is proposed, which updates the model simultaneously with the object location. In order to avoid the curse of dimensionality, a Rao-Blackwellised particle filter has been used. Using this technique, the hypotheses are evaluated depending on the difference between the model and the current target appearance during the updating stage. Convincing results have been obtained in sequences under both sudden and gradual changes in illumination conditions. Crown Copyright © 2010. Published by Elsevier B.V. All rights reserved.
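As a rough illustration of the tracking-with-model-update idea, the sketch below implements a plain colour-histogram particle filter with a cautiously blended reference model. It is a simplification under our own assumptions (patch size, motion noise, blending rate alpha), not the paper's Rao-Blackwellised joint location/appearance filter, which additionally conditions the update on the model/appearance difference:

import numpy as np

def colour_histogram(patch, bins=8):
    """Normalised joint RGB histogram of an image patch (H x W x 3, uint8)."""
    hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=(bins,) * 3,
                             range=[(0, 256)] * 3)
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-12)

def bhattacharyya(p, q):
    """Similarity between two normalised histograms (1 = identical)."""
    return np.sum(np.sqrt(p * q))

def track_frame(frame, particles, model, patch_size=(24, 24),
                motion_std=5.0, alpha=0.05):
    """One filtering step: diffuse particles, weight them by colour similarity
    to the reference model, resample, and cautiously update the model."""
    h, w = patch_size
    H, W, _ = frame.shape

    # 1. Propagate particles (x, y) with a random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 0, W - w)
    particles[:, 1] = np.clip(particles[:, 1], 0, H - h)

    # 2. Weight each hypothesis by how well its patch matches the colour model.
    weights = np.empty(len(particles))
    for i, (x, y) in enumerate(particles.astype(int)):
        patch = frame[y:y + h, x:x + w]
        weights[i] = bhattacharyya(colour_histogram(patch), model)
    weights /= weights.sum()

    # 3. Resample and take the mean of the survivors as the location estimate.
    idx = np.random.choice(len(particles), len(particles), p=weights)
    particles = particles[idx]
    estimate = particles.mean(axis=0).astype(int)

    # 4. Slowly blend the model towards the current appearance so that
    #    gradual illumination changes do not destroy the reference.
    x, y = estimate
    current = colour_histogram(frame[y:y + h, x:x + w])
    model = (1 - alpha) * model + alpha * current
    return particles, model / model.sum(), estimate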

Relevance: 100.00%

Abstract:

In this paper, we propose a multi-camera application capable of processing high-resolution images and extracting features based on color patterns on graphics processing units (GPUs). The goal is to work in real time under the uncontrolled environment of a sports event such as a football match. Since football players present diverse and complex color patterns, a Gaussian Mixture Model (GMM) is applied as the segmentation paradigm in order to analyze live sport images and video. Optimization techniques have also been applied to the C++ implementation using profiling tools focused on high performance. Time-consuming tasks were implemented on NVIDIA's CUDA platform, and later restructured and enhanced, speeding up the whole process significantly. Our resulting code is around 4-11 times faster on a low-cost GPU than a highly optimized C++ version on a central processing unit (CPU) over the same data. Real-time performance has been achieved, processing up to 64 frames per second. An important conclusion derived from our study is the scalability of the application with the number of cores on the GPU. © 2011 Springer-Verlag.
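The following sketch shows the basic GMM colour-segmentation step on the CPU with scikit-learn; the component count and likelihood threshold are illustrative assumptions of ours, and the paper's contribution is the profiled, CUDA-accelerated version of this kind of pipeline rather than this simple form:

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_colour_model(sample_pixels, n_components=5):
    """Fit a Gaussian mixture to RGB samples taken from a player's kit.
    sample_pixels: (N, 3) float array of colour values in [0, 255]."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0)
    gmm.fit(sample_pixels)
    return gmm

def segment(frame, gmm, threshold=-12.0):
    """Label pixels whose colour is well explained by the mixture.
    frame: (H, W, 3) uint8 image; returns a boolean mask (H, W).
    The threshold is an arbitrary illustrative value."""
    pixels = frame.reshape(-1, 3).astype(np.float64)
    log_likelihood = gmm.score_samples(pixels)   # per-pixel log p(colour)
    return (log_likelihood > threshold).reshape(frame.shape[:2])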

Relevance: 100.00%

Abstract:

Dissolved CO2 measurements are usually made using a Severinghaus electrode, which is bulky and can suffer from electrical interference. In contrast, optical sensors for gaseous CO2, whilst not suffering these problems, are mainly used for making gaseous (not dissolved) CO2 measurements, due to dye leaching and protonation, especially at high ionic strengths (>0.01 M) and acidity (<pH 4). This is usually prevented by coating the sensor with a gas-permeable, but ion-impermeable, membrane (GPM). Herein, we introduce a highly sensitive, colourimetric-based, plastic film sensor for the measurement of both gaseous and dissolved CO2, in which a pH-sensitive dye, thymol blue (TB), is coated onto particles of hydrophilic silica to create a CO2-sensitive, TB-based pigment, which is then extruded into low density polyethylene (LDPE) to create a GPM-free, i.e. naked, TB plastic sensor film for gaseous and dissolved CO2 measurements. When used for making dissolved CO2 measurements, the hydrophobic nature of the LDPE renders the film: (i) indifferent to ionic strength, (ii) highly resistant to acid attack and (iii) stable when stored under ambient (dark) conditions for >8 months, with no loss of colour or function. Here, the performance of the TB plastic film is primarily assessed as a dissolved CO2 sensor in highly saline (3.5 wt%) water. The TB film is blue in the absence of CO2 and yellow in its presence, exhibiting a 50% transition in its colour at ca. 0.18% CO2. This new type of CO2 sensor has great potential in the monitoring of CO2 levels in the hydrosphere, as well as elsewhere, e.g. food packaging and possibly patient monitoring.
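As a back-of-the-envelope illustration only: assuming a simple first-order dye-protonation response (an assumption on our part, not stated in the abstract), the measured fraction of blue (deprotonated) dye could be mapped to %CO2 so that the 50% colour transition falls at ca. 0.18% CO2:

def percent_co2(fraction_blue, alpha=1.0 / 0.18):
    """Invert an assumed response curve fraction_blue = 1 / (1 + alpha * %CO2);
    alpha is chosen so that fraction_blue = 0.5 corresponds to 0.18% CO2.
    fraction_blue must lie in (0, 1]."""
    return (1.0 / fraction_blue - 1.0) / alpha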

Relevance: 90.00%

Abstract:

Tissue microarray (TMA) is a high-throughput analysis tool used to identify new diagnostic and prognostic markers in human cancers. However, standard automated methods for tumour detection on both routine histochemical and immunohistochemistry (IHC) images remain underdeveloped. This paper presents a robust automated tumour cell segmentation model that can be applied to both routine histochemical tissue slides and IHC slides, and that delivers finer pixel-based segmentation than the blob- or area-based segmentation of existing approaches. The presented technique greatly improves the process of TMA construction and plays an important role in automated IHC quantification for biomarker analysis, where excluding stroma areas is critical. Under the finest, pixel-based evaluation (instead of area-based or object-based evaluation), the experimental results show that the proposed method achieves 80% and 78% accuracy on two different types of pathological virtual slides, i.e., routine histochemical H&E and IHC images, respectively. The presented technique greatly reduces labour-intensive workloads for pathologists, substantially speeds up the process of TMA construction, and opens the possibility of fully automated IHC quantification.
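A pixel-based evaluation of the kind reported above can be sketched as follows; it is a generic comparison of predicted and ground-truth masks, and the function names and the added precision/recall measure are ours:

import numpy as np

def pixel_accuracy(predicted, truth):
    """Fraction of pixels whose label (tumour / non-tumour) agrees with the
    ground-truth annotation. Both inputs are boolean (H, W) masks."""
    assert predicted.shape == truth.shape
    return np.mean(predicted == truth)

def precision_recall(predicted, truth):
    """Per-pixel precision and recall for the tumour class."""
    tp = np.logical_and(predicted, truth).sum()
    fp = np.logical_and(predicted, ~truth).sum()
    fn = np.logical_and(~predicted, truth).sum()
    return tp / (tp + fp + 1e-12), tp / (tp + fn + 1e-12)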

Relevance: 80.00%

Abstract:

This paper introduces an automated computer-assisted system for the diagnosis of cervical intraepithelial neoplasia (CIN) using ultra-large cervical histological digital slides. The system contains two parts: the segmentation of the squamous epithelium and the diagnosis of CIN. For the segmentation, a multiresolution method is developed to reduce processing time. The squamous epithelium layer is first segmented at a low (2X) resolution, and the boundaries are then fine-tuned at a higher (20X) resolution. The block-based segmentation method uses robust texture feature vectors in combination with support vector machines (SVMs) to perform classification, and medical rules are finally applied. In testing, segmentation of 31 digital slides achieves 94.25% accuracy. For the diagnosis of CIN, changes in nuclear structure and morphology along lines perpendicular to the main axis of the squamous epithelium are quantified and classified. Using a multi-category SVM, perpendicular lines are classified as Normal, CIN I, CIN II, or CIN III. The robustness of the system in terms of regional diagnosis is measured against pathologists' diagnoses, and inter-observer variability between two pathologists is considered. Initial results suggest that the system has potential as a tool both to assist in pathologists' diagnoses and in training.
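The block-based texture-plus-SVM classification at the core of the segmentation stage might look roughly like the sketch below; the block size, the three texture statistics and the SVM settings are illustrative assumptions of ours, and the 2X-to-20X boundary refinement and medical rules are omitted:

import numpy as np
from sklearn.svm import SVC

def block_features(gray, block=32):
    """Split a greyscale slide region into blocks and compute simple texture
    statistics per block (mean, std, gradient energy). Returns the
    (n_blocks, 3) feature array and the block-grid shape."""
    H, W = gray.shape
    gy, gx = np.gradient(gray.astype(np.float64))
    grad = np.hypot(gx, gy)
    feats, rows, cols = [], H // block, W // block
    for r in range(rows):
        for c in range(cols):
            tile = gray[r * block:(r + 1) * block, c * block:(c + 1) * block]
            gtile = grad[r * block:(r + 1) * block, c * block:(c + 1) * block]
            feats.append([tile.mean(), tile.std(), gtile.mean()])
    return np.array(feats), (rows, cols)

def train_epithelium_classifier(features, labels):
    """labels: 1 = squamous epithelium block, 0 = other tissue/background."""
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(features, labels)
    return clf

def classify_blocks(clf, features, grid_shape):
    """Return a coarse epithelium mask at block resolution."""
    return clf.predict(features).reshape(grid_shape)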

Relevance: 40.00%

Abstract:

The importance and use of text extraction from camera-based coloured scene images is rapidly increasing. Text within a camera-grabbed image can contain a large amount of metadata about that scene, and such metadata can be useful for identification, indexing and retrieval purposes. While the segmentation and recognition of text from document images is quite successful, the detection of coloured scene text is a new challenge for camera-based images. Common problems for text extraction from camera-based images are the lack of prior knowledge of text features such as colour, font, size and orientation, as well as of the location of the probable text regions. In this paper, we document the development of a fully automatic and robust text segmentation technique that can be used on any type of camera-grabbed frame, be it a single image or video. A new algorithm is proposed which can overcome the current problems of text segmentation by exploiting text appearance in terms of colour and spatial distribution. When the new text extraction technique was tested on a variety of camera-based images, it was found to outperform existing techniques. The proposed technique also overcomes problems that can arise from an unconstrained, complex background. The novelty of the work arises from the fact that this is the first time colour and spatial information have been used simultaneously for the purpose of text extraction.
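One simple way to combine colour and spatial information when proposing candidate text regions, in the spirit of (but not identical to) the proposed algorithm, is sketched below; the colour count and the area bounds are assumptions of ours:

import numpy as np
from scipy.ndimage import label
from sklearn.cluster import KMeans

def candidate_text_regions(image, n_colours=6, min_area=30, max_area=5000):
    """Cluster pixel colours, then keep connected components of each colour
    layer whose size is plausible for characters. image: (H, W, 3) uint8;
    returns a boolean mask of candidate text pixels."""
    H, W, _ = image.shape
    pixels = image.reshape(-1, 3).astype(np.float64)
    colour_labels = KMeans(n_clusters=n_colours, n_init=4,
                           random_state=0).fit_predict(pixels).reshape(H, W)

    candidates = np.zeros((H, W), dtype=bool)
    for k in range(n_colours):
        layer = colour_labels == k
        components, n = label(layer)              # spatial grouping
        for comp_id in range(1, n + 1):
            area = np.sum(components == comp_id)
            if min_area <= area <= max_area:      # character-sized blobs only
                candidates |= components == comp_id
    return candidates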

Relevance: 40.00%

Abstract:

A novel image segmentation method based on a constraint satisfaction neural network (CSNN) is presented. The new method uses CSNN-based relaxation but with a modified scanning scheme over the image: in the first level of the algorithm, pixels are visited at widely spaced intervals with wide neighbourhoods, and the intervals and neighbourhood sizes are reduced in the following stages. This scheme contributes to the rapid and consistent formation of more regular segments. A cluster validity index for determining the number of segments is also added, turning the proposed method into a fully automatic unsupervised segmentation scheme. The results are compared quantitatively by means of a novel segmentation evaluation criterion, and they are promising.
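A simplified stand-in for the modified scanning scheme is sketched below: a coarse-to-fine relaxation that first visits widely spaced pixels with wide neighbourhoods and then shrinks both, assigning each visited pixel the majority label of its neighbourhood. The schedule values are illustrative, and the actual CSNN update is more elaborate:

import numpy as np

def relax_labels(labels, schedule=((8, 12), (4, 6), (1, 2)), sweeps=3):
    """labels: (H, W) array of non-negative integer cluster assignments.
    schedule: (stride, radius) pairs, from coarse to fine."""
    H, W = labels.shape
    out = labels.copy()
    for stride, radius in schedule:
        for _ in range(sweeps):
            for y in range(0, H, stride):
                for x in range(0, W, stride):
                    y0, y1 = max(0, y - radius), min(H, y + radius + 1)
                    x0, x1 = max(0, x - radius), min(W, x + radius + 1)
                    window = out[y0:y1, x0:x1]
                    # Assign the majority label of the neighbourhood,
                    # which encourages spatially regular segments.
                    out[y, x] = np.bincount(window.ravel()).argmax()
    return out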