978 results for Automatic Image Annotation


Relevance:

30.00%

Publisher:

Abstract:

Color segmentation of images usually requires manual selection and classification of samples to train the system. This paper presents an automatic system that performs these tasks without the need for lengthy training, providing a useful tool to detect and identify figures. In real situations the training process must be repeated whenever lighting conditions change, or when the colors of the figures and the background change within the same scenario, so a fast training method is valuable. A direct application of this method is the detection and identification of football players.
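A minimal sketch of what such automatic sample training could look like, assuming a k-means clustering step over the pixels of a training frame; the paper does not specify this exact algorithm, and the function names below are illustrative.

```python
# Hypothetical sketch: k-means replaces manual colour-sample selection.
import numpy as np
from sklearn.cluster import KMeans

def train_color_model(frame_rgb: np.ndarray, n_colors: int = 4) -> KMeans:
    """Cluster the pixels of one training frame into colour classes."""
    pixels = frame_rgb.reshape(-1, 3).astype(np.float32)
    return KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)

def segment(frame_rgb: np.ndarray, model: KMeans) -> np.ndarray:
    """Label every pixel with its colour class (e.g. shirt vs pitch)."""
    labels = model.predict(frame_rgb.reshape(-1, 3).astype(np.float32))
    return labels.reshape(frame_rgb.shape[:2])

# Retraining after a lighting change is just another call to
# train_color_model on a freshly grabbed frame.
```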

Relevance:

30.00%

Publisher:

Abstract:

Realising high-performance image and signal processing applications on modern FPGAs presents a challenging implementation problem due to the large data frames streaming through these systems. Specifically, to meet the high bandwidth and data storage demands of these applications, complex hierarchical memory architectures must be manually specified at the Register Transfer Level (RTL). Automated approaches which convert high-level operation descriptions, for instance in the form of C programs, to an FPGA architecture are unable to realise such architectures automatically. This paper presents a solution to this problem: a compiler that automatically derives such memory architectures from a C program. By transforming the input C program into a unique dataflow modelling dialect, known as Valved Dataflow (VDF), a mapping and synthesis approach developed for this dialect can be exploited to automatically create high-performance image and video processing architectures. Memory-intensive C kernels for Motion Estimation (CIF frames at 30 fps), Matrix Multiplication (128x128 at 500 iter/sec) and Sobel Edge Detection (720p at 30 fps), which are unrealisable by current state-of-the-art C-based synthesis tools, are automatically derived from a C description of the algorithm.
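As a rough illustration of the dataflow idea only (not the paper's formal VDF semantics, which are defined by the authors), the toy Python below models a dataflow graph whose edges carry explicit buffer capacities, so that an actor may fire only when tokens and buffer space are available; it is the sizing of such buffers that a compiler would derive for the FPGA memory hierarchy.

```python
# Toy model only: a dataflow graph with explicit buffer ("valve")
# capacities on its edges; not the paper's formal VDF dialect.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Edge:
    capacity: int                # buffer depth a compiler would synthesise
    fifo: deque = field(default_factory=deque)

    def can_push(self) -> bool:
        return len(self.fifo) < self.capacity

@dataclass
class Actor:
    name: str
    inputs: list
    outputs: list

    def fire(self, kernel) -> bool:
        # Fire only when every input holds a token and every output has
        # room; buffer sizing therefore constrains the schedule.
        if all(e.fifo for e in self.inputs) and all(o.can_push() for o in self.outputs):
            tokens = [e.fifo.popleft() for e in self.inputs]
            for out_edge, value in zip(self.outputs, kernel(*tokens)):
                out_edge.fifo.append(value)
            return True
        return False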

Relevance:

30.00%

Publisher:

Abstract:

Background: Barrett's oesophagus (BO) is a well-recognized precursor of the majority of cases of oesophageal adenocarcinoma (OAC). Endoscopic surveillance of BO patients is frequently undertaken in an attempt to detect early OAC, high grade dysplasia (HGD) or low grade dysplasia (LGD). However, histological interpretation and grading of dysplasia is subjective and poorly reproducible. The alternative techniques of flow cytometry and cytology-preparation image cytometry require large amounts of tissue and specialist expertise, which are not widely available for frontline health care.
Methods: This study combined whole slide imaging with DNA image cytometry to provide a novel method for the detection and quantification of abnormal DNA content. 20 cases were evaluated, including 8 Barrett's specialised intestinal metaplasia (SIM), 6 LGD and 6 HGD. Feulgen-stained oesophageal sections (1µm thickness) were digitally scanned in their entirety and evaluated to select regions of interest and abnormalities. Barrett's mucosa was then chosen interactively for automatic nuclei segmentation, in which irrelevant cell types are ignored. The combined DNA content histogram for all selected image regions was then obtained. In addition, histogram measurements, including the 5c exceeding ratio (xER-5C), the 2c deviation index (2cDI) and the DNA grade of malignancy (DNA-MG), were computed.
Results: The histogram measurements xER-5C, 2cDI and DNA-MG were shown to be effective in differentiating SIM from HGD, SIM from LGD, and LGD from HGD. All three measurements successfully discriminated SIM from HGD cases with statistical significance (xER-5C: p = 0.0041; 2cDI: p = 0.0151; DNA-MG: p = 0.0057). Statistical significance was also achieved in differentiating SIM from LGD samples (xER-5C: p = 0.0019; 2cDI: p = 0.0023; DNA-MG: p = 0.0030). Furthermore, the differences between LGD and HGD cases are statistically significant (xER-5C: p = 0.0289; 2cDI: p = 0.0486; DNA-MG: p = 0.0384).
Conclusion: Whole slide image cytometry is a novel and effective method for the detection and quantification of abnormal DNA content in BO. Compared to manual histological review, the proposed method is more objective and reproducible. Compared to flow cytometry and cytology-preparation image cytometry, the current method is low cost, simple to use and requires only a single 1µm tissue section. Whole slide image cytometry could assist the routine clinical diagnosis of dysplasia in BO, which is relevant to the risk of future progression to OAC.
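For concreteness, here is a sketch of two of the histogram measurements named above, using their textbook definitions on DNA content values expressed in c units (2c = normal diploid content); DNA-MG is omitted because it additionally involves calibration constants not given here.

```python
import numpy as np

def xer_5c(dna_c: np.ndarray) -> float:
    """5c exceeding ratio: fraction of nuclei with DNA content above 5c."""
    return float(np.mean(dna_c > 5.0))

def two_c_deviation_index(dna_c: np.ndarray) -> float:
    """2c deviation index: mean squared deviation of DNA content from 2c."""
    return float(np.mean((dna_c - 2.0) ** 2))

# Example: a near-diploid population with a small aneuploid tail.
rng = np.random.default_rng(0)
dna = np.concatenate([rng.normal(2.0, 0.1, 950), rng.normal(6.0, 0.5, 50)])
print(xer_5c(dna), two_c_deviation_index(dna))
```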

Relevance:

30.00%

Publisher:

Abstract:

In this paper we propose a novel automated glaucoma detection framework for mass screening that operates on inexpensive retinal cameras. The proposed methodology is based on the assumption that discriminative features for glaucoma diagnosis can be extracted from optic nerve head structures, such as the cup-to-disc ratio or the neuro-retinal rim variation. After automatically segmenting the cup and the optic disc, these features are fed into a machine learning classifier. Experiments were performed using two different datasets, and the results show that the proposed technique provides better performance than approaches based on appearance. A main advantage of our approach is that it requires only a few training samples to provide high accuracy over several different glaucoma stages.
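A hedged sketch of the feature-plus-classifier stage: computing a vertical cup-to-disc ratio from binary segmentation masks and feeding it, together with other features, to an SVM. The exact feature set and classifier used in the paper are not reproduced here, and the values below are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def vertical_extent(mask: np.ndarray) -> int:
    """Height in pixels of the foreground region in a binary mask."""
    rows = np.where(mask.any(axis=1))[0]
    return int(rows[-1] - rows[0] + 1) if rows.size else 0

def cup_to_disc_ratio(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    disc = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc if disc else 0.0

# Features per eye, e.g. [CDR, rim-variation summary]; a small training
# set suffices, which is the advantage claimed in the abstract.
X_train = np.array([[0.35, 0.10], [0.75, 0.45], [0.40, 0.12], [0.80, 0.50]])
y_train = np.array([0, 1, 0, 1])          # 0 = healthy, 1 = glaucoma
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(clf.predict([[0.70, 0.40]]))
```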

Relevance:

30.00%

Publisher:

Abstract:

We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time series in the four Pan-STARRS1 photometric bands gP1, rP1, iP1, and zP1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model; and one stochastic light-curve model, the Ornstein-Uhlenbeck process, to fit the variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm to these statistics to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the cluster centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SVs and BL transients occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image-difference detected sources in the first 2.5 yr of the PS1 MDS into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe, based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
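The model-comparison step can be sketched as follows: fit a deterministic burst model to a difference-flux series and score it with the corrected Akaike information criterion (AICc). This is a simplified stand-in; the Ornstein-Uhlenbeck fit, the cross-validation likelihoods and the K-means stage are omitted, and the data below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_burst(t, amp, t0, width):
    """One of the three deterministic BL models: a Gaussian flare."""
    return amp * np.exp(-0.5 * ((t - t0) / width) ** 2)

def aicc(residuals: np.ndarray, k: int) -> float:
    """Corrected AIC for a least-squares fit with k free parameters."""
    n = residuals.size
    rss = float(np.sum(residuals ** 2))
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction

rng = np.random.default_rng(1)
t = np.linspace(0.0, 100.0, 60)
flux = gaussian_burst(t, 5.0, 40.0, 8.0) + rng.normal(0.0, 0.3, t.size)
popt, _ = curve_fit(gaussian_burst, t, flux, p0=[1.0, 50.0, 10.0])
print(aicc(flux - gaussian_burst(t, *popt), k=3))
```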

Relevance:

30.00%

Publisher:

Abstract:

The primary visual cortex employs simple, complex and end-stopped cells to create a scale space of 1D singularities (lines and edges) and of 2D singularities (line and edge junctions and crossings, called keypoints). In this paper we show first results of a biological model which attributes information about the local image structure to keypoints at all scales, i.e. junction type (L, T, +) and main line/edge orientations. Keypoint annotation in combination with coarse-to-fine scale processing facilitates various processes, such as image matching (stereo and optical flow), object segregation and object tracking.
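Purely as a computational analogue (not the biological model itself), one could detect keypoints and guess a junction type from the number of dominant local edge directions around each point, roughly as below; the mapping from direction counts to L/T/+ is a simplification.

```python
import numpy as np
from skimage.feature import corner_harris, corner_peaks

def junction_type(patch: np.ndarray, bins: int = 16) -> str:
    """Crude guess: 2 dominant edge directions -> L, 3 -> T, 4 -> +."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                      # directions in (-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    n_dir = int(np.sum(hist > 0.5 * hist.max()))
    return {2: "L", 3: "T", 4: "+"}.get(n_dir, "?")

def annotate_keypoints(img: np.ndarray, r: int = 4):
    """Return (row, col, junction type) for each detected corner."""
    pts = corner_peaks(corner_harris(img), min_distance=5)
    return [(y, x, junction_type(img[y - r:y + r + 1, x - r:x + r + 1]))
            for y, x in pts
            if r <= y < img.shape[0] - r and r <= x < img.shape[1] - r]
```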

Relevance:

30.00%

Publisher:

Abstract:

In this work, a comprehensive review of the automatic analysis of proteomics and genomics images is presented. Special emphasis is given to a particularly complex image produced by a technique called Two-Dimensional Gel Electrophoresis (2-DE), containing thousands of spots (or blobs). Automatic methods for the detection, segmentation and matching of blob-like features are discussed and proposed. In particular, a very robust procedure was achieved for processing 2-DE images, consisting mainly of two steps: (a) a trustworthy new approach for the automatic detection and segmentation of spots, based on the Watershed Transform, without any foreknowledge of spot shape or size and without user intervention; and (b) a new method for spot matching, based on image registration, that performs well under either global or local distortions. The results of the proposed methods are compared to state-of-the-art academic and commercial products.
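A rough sketch of step (a), marker-controlled watershed spot detection on a gel image where spots are dark on a light background; the parameter choices are illustrative, not those of the paper.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def detect_spots(gel: np.ndarray, min_distance: int = 5) -> np.ndarray:
    """Label each gel spot with a distinct integer via watershed."""
    inverted = gel.max() - gel.astype(float)      # dark spots become peaks
    coords = peak_local_max(inverted, min_distance=min_distance)
    markers = np.zeros(gel.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    # Flood from the markers; restrict growth to plausibly-foreground pixels.
    return watershed(-inverted, markers, mask=inverted > inverted.mean())
```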

Relevance:

30.00%

Publisher:

Abstract:

Doctoral thesis, Informatics (Bioinformatics), Universidade de Lisboa, Faculdade de Ciências, 2015

Relevance:

30.00%

Publisher:

Abstract:

Difficult tracheal intubation assessment is an important research topic in anesthesia, as failed intubations are an important cause of mortality in anesthetic practice. The modified Mallampati score is widely used, alone or in conjunction with other criteria, to predict the difficulty of intubation. This work presents an automatic method to assess the modified Mallampati score from an image of a patient with the mouth wide open. For this purpose we propose a method based on active appearance models (AAM) and use linear support vector machines (SVM) to select a subset of relevant features obtained with the AAM. This feature selection step proves essential, as it drastically improves classification performance, which is obtained using an SVM with an RBF kernel and majority voting. We test our method on images of 100 patients undergoing elective surgery, achieve 97.9% accuracy in the leave-one-out cross-validation test, and thus provide a key element of an automatic difficult intubation assessment system.
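A sketch of the classification stage under stated assumptions: linear-SVM weights rank the AAM features, and an RBF-kernel SVM is scored with leave-one-out cross-validation. AAM feature extraction and the majority-voting step are assumed done elsewhere; in a strict protocol the feature selection would be nested inside the cross-validation loop.

```python
import numpy as np
from sklearn.svm import SVC, LinearSVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

def select_features(X: np.ndarray, y: np.ndarray, n_keep: int = 20) -> np.ndarray:
    """Keep the n_keep features with the largest linear-SVM |weights|."""
    weights = np.abs(LinearSVC(dual=False).fit(X, y).coef_).ravel()
    return np.argsort(weights)[-n_keep:]

def loo_accuracy(X: np.ndarray, y: np.ndarray) -> float:
    """Leave-one-out accuracy of an RBF SVM on the selected features."""
    idx = select_features(X, y)
    return cross_val_score(SVC(kernel="rbf"), X[:, idx], y,
                           cv=LeaveOneOut()).mean()
```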

Relevance:

30.00%

Publisher:

Abstract:

Semantic role annotation is a task that assigns role labels such as Agent, Patient, Instrument, Location, Destination, etc. to the various participants, actants or circumstants (arguments or adjuncts), of a predicative lexical unit. This task requires rich lexical resources or large corpora of sentences annotated manually by linguists, on which certain automated approaches (statistical or machine learning) can rely. Previous work in this field has focused mainly on English, which has rich resources such as PropBank, VerbNet and FrameNet that have been used to feed automated annotation systems. Annotation in other languages, for which no manually annotated corpus is available, often relies on the English FrameNet. A resource such as the English FrameNet is indispensable for automated annotation systems, and the manual annotation of thousands of sentences by linguists is a tedious and time-consuming task. In this thesis we propose an automatic system to assist linguists in this task, who could then limit themselves to validating the annotations proposed by the system. In our work we consider only verbs, which are more likely than nouns to be accompanied by actants realised in sentences. These verbs are specialised terms from computing and the Internet (e.g. accéder, configurer, naviguer, télécharger) whose actantial structure has been manually enriched with semantic roles. The actantial structure of the verbal lexical units is described according to the principles of Mel'čuk's Explanatory Combinatorial Lexicology (ECL) and draws partially (as far as semantic roles are concerned) on the notion of Frame Element as described in Fillmore's theory of Frame Semantics (FS). What these two theories have in common is that they both lead to the construction of dictionaries different from those produced by traditional approaches. The computing and Internet verbal lexical units that were annotated manually in multiple contexts constitute our specialised corpus. Our system, which automatically assigns semantic roles to actants, is based on rules or on classifiers trained on more than 2,300 contexts. We are limited to a restricted list of roles, since some roles in our corpus do not have enough manually annotated examples. In our system we handled only the roles Patient, Agent and Destination, each of which has more than 300 examples. We created a class named Other, in which we gathered the remaining roles, each with fewer than 100 annotated examples. We subdivided the annotation task into sub-tasks: identifying the participants, actants and circumstants, and assigning semantic roles only to the actants that contribute to the meaning of the verbal lexical unit. We submitted the sentences of our corpus to the syntactic parser Syntex in order to extract the syntactic information describing the various participants of a verbal lexical unit in a sentence. This information served as features in our learning model.
We proposed two techniques for participant identification: a rule-based technique, for which we extracted some thirty rules, and a technique based on machine learning. The same techniques were used for the task of distinguishing actants from circumstants. For the task of assigning semantic roles to actants, we proposed a semi-supervised clustering method over the instances, which we compared to the semantic role classification method. We used CHAMÉLÉON, an agglomerative (bottom-up) hierarchical algorithm.
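As a toy sketch of the role-assignment step only: classify candidate actants into Patient, Agent, Destination or Other from shallow syntactic features, standing in for the Syntex-derived features used in the thesis; the feature names and training examples below are invented for illustration.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical shallow syntactic features per candidate actant.
train = [
    ({"dep": "subj", "prep": "", "before_verb": True},  "Agent"),
    ({"dep": "obj",  "prep": "", "before_verb": False}, "Patient"),
    ({"dep": "pobj", "prep": "vers", "before_verb": False}, "Destination"),
    ({"dep": "pobj", "prep": "avec", "before_verb": False}, "Other"),
]
X = [features for features, _ in train]
y = [role for _, role in train]

# One-hot encode the categorical features, then fit a linear classifier.
model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, y)
print(model.predict([{"dep": "subj", "prep": "", "before_verb": True}]))
```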

Relevance:

30.00%

Publisher:

Abstract:

Cerebral gliomas are the most prevalent primary brain tumors and are classified broadly into low and high grades according to the degree of malignancy. High-grade gliomas are highly malignant, carry a poor prognosis, and patients survive less than eighteen months after diagnosis. Low-grade gliomas are slow growing, least malignant and have a better response to therapy. To date, histological grading is used as the standard technique for diagnosis, treatment planning and survival prediction. The main objective of this thesis is to propose novel methods for the automatic extraction of low- and high-grade glioma and other brain tissues, grade detection techniques for glioma using conventional magnetic resonance imaging (MRI) modalities, and 3D modelling of glioma from segmented tumor slices in order to assess the growth rate of tumors. Two new methods are developed for extracting tumor regions, of which the second, named the Adaptive Gray level Algebraic set Segmentation Algorithm (AGASA), can also extract white matter and grey matter from T1 FLAIR and T2-weighted images. The methods were validated against manual ground truth images and showed promising results. They were also compared with the widely used fuzzy c-means clustering technique, and the robustness of the algorithm with respect to noise was checked at different noise levels. Image texture can provide significant information on the (ab)normality of tissue, and this thesis extends this idea to tumor texture grading and detection. Based on thresholds of discriminant first-order and gray-level co-occurrence matrix based second-order statistical features, three feature sets were formulated and a decision system was developed for grade detection of glioma from the conventional T2-weighted MRI modality. Quantitative performance analysis using the ROC curve showed 99.03% accuracy in distinguishing between advanced (aggressive) and early-stage (non-aggressive) malignant glioma. The developed brain texture analysis techniques can improve the physician's ability to detect and analyse pathologies, leading to a more reliable diagnosis and treatment of disease. The segmented tumors were also used for volumetric modelling, which can give an idea of the growth rate of a tumor; this can be used for assessing response to therapy and patient prognosis.
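A sketch of the second-order texture extraction mentioned above, using a gray-level co-occurrence matrix (GLCM) as implemented in scikit-image (the graycomatrix/graycoprops naming of version 0.19 and later); the thesis's discriminant thresholds and decision system are not reproduced.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi: np.ndarray) -> dict:
    """roi: 8-bit (uint8) tumor region cropped from a T2-weighted slice."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    # Average each property over the distance/angle combinations.
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```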

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, different techniques for image analysis of high-density microarrays have been investigated. Most existing image analysis techniques require prior knowledge of image-specific parameters and direct user intervention for microarray image quantification. The objective of this research was to develop a fully automated image analysis method capable of accurately quantifying the intensity information from high-density microarray images. The method should be robust against the noise and contamination that commonly occur at different stages of microarray development.

Relevance:

30.00%

Publisher:

Abstract:

As the technologies for the fabrication of high-quality microarrays advance rapidly, quantification of microarray data becomes a major task. Gridding is the first step in the analysis of microarray images, locating the subarrays and the individual spots within each subarray. For accurate gridding of high-density microarray images in the presence of contamination and background noise, precise calculation of parameters is essential. This paper presents an accurate, fully automatic gridding method for locating subarrays and individual spots using the intensity projection profile of the most suitable subimage. The method is capable of processing the image without any user intervention and, unlike many other commercial and academic packages, does not demand any input parameters. According to the results obtained, the accuracy of our algorithm is between 95% and 100% for microarray images with a coefficient of variation of less than two. Experimental results show that the method is capable of gridding microarray images with irregular spots, varying surface intensity distribution and more than 50% contamination.
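A minimal sketch of gridding by intensity projection profiles: rows and columns of spots appear as peaks in the row- and column-wise intensity sums, and grid lines are placed at the valleys between them. The parameters are illustrative, not those of the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def grid_cuts(img: np.ndarray, axis: int, min_gap: int = 8) -> np.ndarray:
    """Cut positions between spot rows (axis=1 sums across columns,
    giving one value per image row) or between spot columns (axis=0)."""
    profile = img.sum(axis=axis).astype(float)
    # Valleys of the profile are peaks of its negation.
    valleys, _ = find_peaks(-profile, distance=min_gap)
    return valleys

# Horizontal grid lines: grid_cuts(img, axis=1);
# vertical grid lines:   grid_cuts(img, axis=0).
```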

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a novel framework for the automatic segmentation of primary tumors and their boundaries from brain MRIs using morphological filtering techniques. The method uses T2-weighted and T1 FLAIR images. This approach is very simple, more accurate and less time-consuming than existing methods. It was tested on fifty patients with different tumor types, shapes, image intensities and sizes, and produced good results, which were validated against ground truth images by the radiologist. Segmentation of the tumor and boundary detection are important because they can be used for surgical planning, treatment planning, textural analysis, 3-dimensional modelling and volumetric analysis.
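A hedged sketch of a morphological extraction pipeline of this general kind (threshold, opening/closing, largest-component selection); the paper's exact filter sequence and thresholds are not specified here.

```python
import numpy as np
from scipy import ndimage as ndi

def segment_tumor(slice_img: np.ndarray, thresh: float) -> np.ndarray:
    """Binary tumor mask from one axial slice (hyperintense on T2/FLAIR)."""
    mask = slice_img > thresh
    mask = ndi.binary_opening(mask, iterations=2)   # remove small bright noise
    mask = ndi.binary_closing(mask, iterations=2)   # fill small holes
    labels, n = ndi.label(mask)
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)    # keep the largest blob
```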

Relevance:

30.00%

Publisher:

Abstract:

This work presents an efficient method for volume rendering of glioma tumors from segmented 2D MRI datasets with user-interactive control, replacing the manual segmentation required by state-of-the-art methods. The most common primary brain tumors are gliomas, which evolve from the cerebral supportive cells. For clinical follow-up, evaluation of the pre-operative tumor volume is essential. Tumor portions were automatically segmented from 2D MR images using morphological filtering techniques. These segmented tumor slices were propagated and modelled with the software package. The 3D modelled tumor consists of the gray-level values of the original image with the exact tumor boundary. Axial slices of FLAIR and T2-weighted images were used for extracting tumors. Volumetric assessment of tumor volume by manual segmentation of its outlines is a time-consuming process and is prone to error; these defects are overcome in this method. We verified the performance of our method on several sets of MRI scans. The 3D modelling was also performed from the segmented 2D slices with the help of a medical software package called 3D-DOCTOR for verification purposes. The results were validated against ground truth models by the radiologist.
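A sketch of the volumetric step under simple assumptions: stack the segmented 2D masks into a 3D array, estimate the tumor volume from voxel counts, and extract a surface mesh with marching cubes. 3D-DOCTOR itself is a commercial package and is not scripted here, and the voxel spacing below is hypothetical.

```python
import numpy as np
from skimage import measure

def tumor_volume_ml(masks, spacing=(1.0, 1.0, 5.0)) -> float:
    """masks: list of 2D boolean arrays (one per axial slice);
    spacing: voxel size in mm (row, col, slice) -- illustrative values."""
    vol = np.stack(masks, axis=-1)
    voxel_mm3 = spacing[0] * spacing[1] * spacing[2]
    return float(vol.sum()) * voxel_mm3 / 1000.0    # mm^3 -> millilitres

def surface_mesh(masks, spacing=(1.0, 1.0, 5.0)):
    """Triangulated tumor surface for rendering, via marching cubes."""
    vol = np.stack(masks, axis=-1).astype(np.uint8)
    verts, faces, normals, values = measure.marching_cubes(
        vol, level=0.5, spacing=spacing)
    return verts, faces
```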