906 results for Text feature extraction
Abstract:
This paper presents a novel approach to water pollution detection from remotely sensed images acquired by a low-platform-mounted visible-band camera. We examine the feasibility of unsupervised segmentation for labelling slick regions (oily spills on the water surface). Adaptive and non-adaptive filtering are combined with density modelling of the obtained textural features. Particular effort is concentrated on textural feature extraction from raw intensity images using filter banks, and on adaptive feature extraction from the resulting output coefficients. Segmentation in the extracted feature space is achieved using Gaussian mixture models (GMMs).
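A minimal sketch of the pipeline this abstract describes, assuming a Gabor filter bank for the textural features and a two-component GMM (slick vs. open water); the filter parameters, feature set, model order and test image are illustrative choices, not the authors' configuration:

```python
# Sketch: filter-bank texture features + GMM segmentation (illustrative).
import numpy as np
from skimage import data, filters
from sklearn.mixture import GaussianMixture

image = data.camera().astype(float)  # stand-in for a sea-surface image

# Texture features: magnitude responses of a small Gabor filter bank.
features = []
for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    real, _ = filters.gabor(image, frequency=0.2, theta=theta)
    features.append(np.abs(real))
X = np.stack(features, axis=-1).reshape(-1, len(features))

# Unsupervised segmentation: fit a 2-component GMM (slick vs. water)
# and label each pixel with its most likely component.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X).reshape(image.shape)
```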
Abstract:
This study considers the application of image analysis in petrography and investigates the possibilities for advancing existing techniques by introducing feature extraction and analysis capabilities of a higher level than those currently employed. The aim is to construct relevant, useful descriptions of crystal form and inter-crystal relations in polycrystalline igneous rock sections. Such descriptions cannot be derived until the 'ownership' of boundaries between adjacent crystals has been established: this is the fundamental problem of crystal boundary assignment. An analysis of this problem establishes key image features which reveal boundary ownership; a set of explicit analysis rules is presented. A petrographic image analysis scheme based on these principles is outlined and the implementation of key components of the scheme considered. An algorithm for the extraction and symbolic representation of image structural information is developed. A new multiscale analysis algorithm which produces a hierarchical description of the linear and near-linear structure on a contour is presented in detail. Novel techniques for symmetry analysis are developed. The analyses considered contribute both to the solution of the boundary assignment problem and to the construction of geologically useful descriptions of crystal form. The analysis scheme which is developed employs grouping principles such as collinearity, parallelism, symmetry and continuity, so providing a link between this study and more general work in perceptual grouping and intermediate level computer vision. Consequently, the techniques developed in this study may be expected to find wider application beyond the petrographic domain.
Abstract:
A recent novel approach to the visualisation and analysis of datasets, one which is particularly applicable to those of high dimension, is discussed in the context of real applications. A feed-forward neural network is utilised to effect a topographic, structure-preserving, dimension-reducing transformation of the data, with an additional facility to incorporate different degrees of associated subjective information. The properties of this transformation are illustrated on synthetic and real datasets, including the 1992 UK Research Assessment Exercise for funding in higher education. The method is compared and contrasted with established techniques for feature extraction, and related to topographic mappings, the Sammon projection and the statistical field of multidimensional scaling.
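For context, a compact sketch of the Sammon projection, one of the reference techniques the abstract relates the network to; the paper's own method instead trains a feed-forward network to realise the mapping. The dataset and optimiser settings below are illustrative:

```python
# Sketch: Sammon mapping by direct minimisation of the Sammon stress.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist
from sklearn.datasets import load_iris

X = load_iris().data            # stand-in high-dimensional dataset
D = pdist(X) + 1e-12            # pairwise distances in the input space

def sammon_stress(y_flat):
    # Stress penalises distortion of small distances most heavily.
    d = pdist(y_flat.reshape(-1, 2)) + 1e-12
    return np.sum((D - d) ** 2 / D) / D.sum()

rng = np.random.default_rng(0)
y0 = rng.normal(size=X.shape[0] * 2)
result = minimize(sammon_stress, y0, method="L-BFGS-B",
                  options={"maxiter": 50})
Y = result.x.reshape(-1, 2)     # 2-D layout preserving local structure
```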
Abstract:
The process of automatic handwriting investigation in forensic science is described. The general scheme of a computer-based handwriting analysis system is used to point out the basic problems of image enhancement and segmentation, feature extraction and decision-making. Factors that may compromise the accuracy of the expert's conclusions are underlined, and directions for future investigation are outlined.
Abstract:
After many years of scholarly study, manuscript collections continue to be an important source of novel information for scholars, concerning both the history of earlier times and the development of cultural documentation over the centuries. The D-SCRIBE project aims to support and facilitate current and future efforts in manuscript digitization and processing. It strives toward the creation of a comprehensive software product which can assist content holders in turning an archive of manuscripts into a digital collection using automated methods. In this paper, we focus on the problem of recognizing early Christian Greek manuscripts. We propose a novel digital image binarization scheme for low-quality historical documents, allowing efficient further exploitation of their content. Based on the existence of closed cavity regions in the majority of characters and character ligatures in these scripts, we propose a novel, segmentation-free, fast and efficient technique that assists the recognition procedure by tracing and recognizing the most frequently appearing characters or character ligatures.
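The paper's specific binarization scheme is not reproduced here; as a hedged stand-in, the sketch below pairs Sauvola adaptive thresholding with hole filling to locate the kind of closed cavity regions the technique keys on:

```python
# Sketch: adaptive binarization + closed-cavity detection (illustrative
# stand-in for the paper's scheme, not its actual algorithm).
import numpy as np
from scipy import ndimage
from skimage import data, filters

page = data.page()  # stand-in for a degraded manuscript image
ink = page < filters.threshold_sauvola(page, window_size=25)

# Closed cavities = background pixels fully enclosed by ink strokes.
cavities = ndimage.binary_fill_holes(ink) & ~ink
n_cavities = ndimage.label(cavities)[1]
print(f"{n_cavities} closed cavity regions found")
```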
Abstract:
In this paper, a novel method for edge detection, an application of digital image processing, is developed. Fuzzy logic, a key concept of artificial intelligence, is used to implement fuzzy relative-pixel-value algorithms that find and highlight all the edges associated with an image by checking relative pixel values, thereby bridging the concepts of digital image processing and artificial intelligence. The image is exhaustively scanned using a windowing technique, and each window is subjected to a set of fuzzy conditions that compare pixel values with those of adjacent pixels to check the pixel magnitude gradient within the window. After the fuzzy conditions are tested, appropriate values are assigned to the pixels in the window under test, producing an image with all the associated edges highlighted.
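A minimal sketch of the windowed fuzzy-edge idea, assuming a piecewise-linear membership function on neighbour intensity differences; the membership shape and defuzzification threshold are illustrative, not the paper's:

```python
# Sketch: fuzzy edge membership from relative pixel values (illustrative).
import numpy as np
from skimage import data

img = data.camera().astype(float)

def edge_membership(diff, low=10.0, high=60.0):
    """Piecewise-linear fuzzy membership: 0 below `low`, 1 above `high`."""
    return np.clip((diff - low) / (high - low), 0.0, 1.0)

# Compare each pixel with its 8 neighbours; keep the strongest membership.
out = np.zeros_like(img)
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        if dy == 0 and dx == 0:
            continue
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        out = np.maximum(out, edge_membership(np.abs(img - shifted)))

edges = out > 0.5  # defuzzify: pixels with strong edge membership
```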
Abstract:
It is well established that accent recognition can be as accurate as 95% when the signals are noise-free, using feature extraction techniques such as mel-frequency cepstral coefficients and binary classifiers such as discriminant analysis, support vector machines and k-nearest neighbours. In this paper, we demonstrate that predictive performance can be reduced by as much as 15% when the signals are noisy. Specifically, we perturb the signals with different levels of white noise and show that, as the noise becomes stronger, the out-of-sample predictive performance deteriorates from 95% to 80%, even though in-sample prediction gives overly optimistic results. ACM Computing Classification System (1998): C.3, C.5.1, H.1.2, H.2.4, G.3.
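A sketch of the perturbation experiment under stated assumptions: white Gaussian noise is added at a chosen SNR and MFCCs are extracted with librosa; the example clip, SNR level and number of coefficients are illustrative, not the paper's settings:

```python
# Sketch: noise perturbation + MFCC extraction (illustrative).
import numpy as np
import librosa

y, sr = librosa.load(librosa.example("trumpet"))  # stand-in for a speech clip

def add_white_noise(signal, snr_db):
    """Perturb `signal` with Gaussian white noise at the given SNR (dB)."""
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return signal + np.random.default_rng(0).normal(
        scale=np.sqrt(noise_power), size=signal.shape)

noisy = add_white_noise(y, snr_db=10)
mfcc_clean = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
mfcc_noisy = librosa.feature.mfcc(y=noisy, sr=sr, n_mfcc=13)
```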
Abstract:
Good estimates of ecosystem complexity are essential for a number of ecological tasks: from biodiversity estimation, to forest structure variable retrieval, to feature extraction by edge detection and the generation of multifractal surfaces as neutral models for, e.g., feature-change assessment. Hence, measuring ecological complexity over space becomes crucial in macroecology and geography. Many geospatial tools have been advocated in spatial ecology to estimate ecosystem complexity and its changes over space and time. Among these tools, free and open source options especially offer opportunities to guarantee the robustness of algorithms and reproducibility. In this paper we summarize the most straightforward measures of spatial complexity available in the free and open source software GRASS GIS, relating them to key ecological patterns and processes.
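The GRASS GIS modules themselves are not shown here; as a plain-Python illustration, the sketch below computes one of the simplest spatial-complexity measures of this family, moving-window Shannon entropy over a synthetic land-cover raster:

```python
# Sketch: moving-window Shannon entropy as a spatial-complexity measure.
import numpy as np
from skimage.util import view_as_windows

rng = np.random.default_rng(0)
raster = rng.integers(0, 5, size=(60, 60))  # stand-in land-cover raster

def shannon(window):
    # Shannon entropy of the class frequencies inside one window.
    _, counts = np.unique(window, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

windows = view_as_windows(raster, (5, 5))   # all 5x5 neighbourhoods
entropy = np.array([[shannon(w) for w in row] for row in windows])
```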
Abstract:
Lung cancer is one of the most common types of cancer and has the highest mortality rate. Patient survival is highly correlated with early detection. Computed tomography serves the early detection of lung cancer tremendously by offering a minimally invasive medical diagnostic tool. However, the large amount of data per examination makes interpretation difficult, leading to nodules being missed by human radiologists. This thesis presents the development of a computer-aided detection (CADe) tool for the detection of lung nodules in computed tomography studies. The system, called LCD-OpenPACS (Lung Cancer Detection - OpenPACS), is to be integrated into the OpenPACS system and meets all the requirements for use in the workflow of health facilities belonging to SUS (the Brazilian public health system). LCD-OpenPACS makes use of image processing techniques (region growing and watershed), feature extraction (Histogram of Oriented Gradients), dimensionality reduction (Principal Component Analysis) and a classifier (Support Vector Machine). The system was tested on 220 cases, totaling 296 pulmonary nodules, achieving a sensitivity of 94.4% with 7.04 false positives per case. The total processing time was approximately 10 minutes per case. The system detects pulmonary nodules (solitary, juxtavascular, ground-glass opacity and juxtapleural) between 3 mm and 30 mm.
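A condensed sketch of the learning stages named in the abstract (HOG features, PCA reduction, SVM classification); the segmentation front end (region growing/watershed) is omitted, and the patches and labels below are synthetic stand-ins for nodule candidates:

```python
# Sketch: HOG -> PCA -> SVM classification stage of a CADe pipeline.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
patches = rng.random((100, 32, 32))    # candidate ROIs from segmentation (synthetic)
labels = rng.integers(0, 2, size=100)  # 1 = nodule, 0 = false candidate (synthetic)

# Histogram of Oriented Gradients per candidate patch.
X = np.array([hog(p, pixels_per_cell=(8, 8)) for p in patches])

# Dimensionality reduction + classifier, as in the abstract.
clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(X, labels)
```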
Abstract:
Background: Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists.
Results: The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention.
Conclusions: Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling, manual quality control and corrections. Although primarily developed for analyzing images of diatom valves originating from automated microscopy, SHERPA can also be useful for other object detection, segmentation and outline-based identification problems.
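A sketch of one stage of such an outline-based workflow: trace an object's outline and reduce it to a compact, scale-normalised shape signature suitable for template matching. The Fourier-descriptor choice is illustrative; SHERPA extracts a wider range of outline descriptors:

```python
# Sketch: outline tracing + Fourier shape descriptor (illustrative).
import numpy as np
from skimage import draw, measure

# Synthetic "valve": a filled ellipse on a blank frame.
img = np.zeros((200, 200))
rr, cc = draw.ellipse(100, 100, 40, 70)
img[rr, cc] = 1.0

contour = measure.find_contours(img, 0.5)[0]           # (N, 2) outline points
z = contour[:, 1] + 1j * contour[:, 0]                 # complex boundary signal
coeffs = np.fft.fft(z - z.mean())                      # centre, then transform
descriptor = np.abs(coeffs[1:11]) / np.abs(coeffs[1])  # scale-invariant signature
```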
Abstract:
Content-based image retrieval is important for various purposes, such as disease diagnosis from computed tomography. The social and economic relevance of image retrieval systems has created the need for their improvement. In this context, content-based image retrieval systems comprise two stages: feature extraction and similarity measurement. The similarity stage remains a challenge due to the wide variety of similarity measurement functions, which can be combined with the different techniques present in the retrieval process and return results that are not always the most satisfactory. The most common functions used to measure similarity are the Euclidean and cosine distances, but some researchers have noted limitations in these conventional proximity functions at the similarity-search step. For that reason, the Bregman divergences (Kullback-Leibler and I-Generalized) have attracted the attention of researchers due to their flexibility in similarity analysis. Thus, the aim of this research was to conduct a comparative study of the Bregman divergences against the Euclidean and cosine functions in the similarity step of content-based image retrieval, examining the advantages and disadvantages of each function. To this end, a content-based image retrieval system was built in two stages, offline and online, using the BSM, FISM, BoVW and BoVW-SPM approaches. With this system, three groups of experiments were created using the Caltech101, Oxford and UK-bench databases. The performance of the content-based image retrieval system with the different similarity functions was tested through the evaluation measures Mean Average Precision, normalized Discounted Cumulative Gain, precision at k, and precision x recall. Finally, this study shows that the Bregman divergences (Kullback-Leibler and Generalized) obtain better results than the Euclidean and cosine measures, with significant gains for content-based image retrieval.
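A minimal sketch of the similarity functions compared in the study, applied to L1-normalised feature histograms such as BoVW vectors; the smoothing constant is an illustrative detail:

```python
# Sketch: the four (dis)similarity functions compared in the study.
import numpy as np

def kullback_leibler(p, q, eps=1e-10):
    """Bregman divergence generated by negative entropy."""
    p, q = p + eps, q + eps
    return np.sum(p * np.log(p / q))

def i_generalized(p, q, eps=1e-10):
    """Generalized I-divergence: KL form without requiring unit mass."""
    p, q = p + eps, q + eps
    return np.sum(p * np.log(p / q) - p + q)

def euclidean(p, q):
    return np.linalg.norm(p - q)

def cosine_distance(p, q):
    return 1 - np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))

h1 = np.array([0.5, 0.3, 0.2])   # query image histogram (toy example)
h2 = np.array([0.4, 0.4, 0.2])   # database image histogram (toy example)
print(kullback_leibler(h1, h2), i_generalized(h1, h2),
      euclidean(h1, h2), cosine_distance(h1, h2))
```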
Abstract:
A number of studies in the areas of Biomedical Engineering and the Health Sciences have employed machine learning tools to develop methods capable of identifying patterns in different sets of data. Despite having been virtually eliminated in many countries of the developed world, Hansen's disease still affects a large part of the population in countries such as India and Brazil. In this context, this research proposes to develop a method that makes it possible to understand, in the future, how Hansen's disease affects facial muscles. Using surface electromyography, a system was adapted to capture signals from the largest possible number of facial muscles. We first reviewed the literature to learn how researchers around the globe have been working with diseases that affect the peripheral nervous system and how electromyography has contributed to the understanding of these diseases. From these data, a protocol was proposed for collecting facial surface electromyographic (sEMG) signals with a high signal-to-noise ratio. After collecting the signals, we sought a method for visualizing this information in a way that made it possible to guarantee satisfactory results. After confirming the method's efficiency, we investigated which information could be extracted from the electromyographic signal representing the collected data. Since no studies demonstrating which information could contribute to a better understanding of this pathology were found in the literature, amplitude, frequency and entropy parameters were extracted from the signal, and feature selection was performed to find the features that best distinguish a healthy individual from a pathological one. Next, we sought to identify the classifier that best discriminates individuals from different groups, as well as the set of parameters of this classifier that yields the best outcome. The protocol proposed in this study, together with the adaptation of disposable electrodes available on the market, proved effective and suitable for use in different studies aiming to collect facial electromyography data. The feature selection algorithm also showed that not all features extracted from the signal are significant for data classification, some being more relevant than others. The Support Vector Machine (SVM) classifier proved efficient when the kernel function adequate to the muscle under analysis was used; each investigated muscle presented different results when the classifier used linear, radial and polynomial kernel functions. Even though we have focused on Hansen's disease, the method applied here can be used to study facial electromyography in other pathologies.
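A hedged sketch of the feature extraction and classifier comparison described here: amplitude (RMS), frequency (median frequency) and entropy features per signal window, then SVMs with linear, radial and polynomial kernels compared by cross-validation; signals, labels and window lengths are synthetic stand-ins:

```python
# Sketch: sEMG feature extraction + SVM kernel comparison (illustrative).
import numpy as np
from scipy.signal import welch
from scipy.stats import entropy
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
fs = 1000  # sampling rate in Hz (assumed)

def features(segment):
    rms = np.sqrt(np.mean(segment ** 2))                # amplitude feature
    f, pxx = welch(segment, fs=fs)
    median_freq = f[np.searchsorted(np.cumsum(pxx), pxx.sum() / 2)]
    hist, _ = np.histogram(segment, bins=32, density=True)
    return [rms, median_freq, entropy(hist + 1e-12)]    # entropy feature

X = np.array([features(rng.normal(size=fs)) for _ in range(60)])
y = rng.integers(0, 2, size=60)  # healthy vs. pathological (synthetic labels)

for kernel in ("linear", "rbf", "poly"):
    score = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
    print(kernel, round(score, 2))
```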
Abstract:
Purpose: This study has two goals. The first is to investigate the feasibility of using classic textural feature extraction for radiotherapy response assessment in a unique cohort of early-stage breast cancer patients who received single-dose preoperative radiotherapy. The second is to investigate the clinical feasibility of using classic texture features as potential biomarkers, supplementary to the regional apparent diffusion coefficient, in gynecological cancer radiotherapy response assessment.
Methods and Materials: For the breast cancer study, 15 patients with early-stage breast cancer were enrolled in this retrospective study. Each patient received a single-fraction radiation treatment, and DWI and DCE-MRI scans were conducted before and after the radiotherapy. DWI scans were acquired using a spin-echo EPI sequence with diffusion weighting factors of b = 0 and b = 500 s/mm2, and apparent diffusion coefficient (ADC) maps were calculated. DCE-MRI scans were acquired using a T1-weighted 3D SPGR sequence with a temporal resolution of about 1 minute. The contrast agent (CA) was injected intravenously at a dose of 0.1 mmol/kg bodyweight at 2 ml/s. Two parameters, the volume transfer constant (Ktrans) and the rate constant (kep), were analyzed using the two-compartment Tofts pharmacokinetic model. For the pharmacokinetic parametric maps and ADC maps, 33 textural features were generated from the clinical target volume (CTV) in a 3D fashion using the classic gray level co-occurrence matrix (GLCOM) and gray level run length matrix (GLRLM). The Wilcoxon signed-rank test was used to determine the significance of each texture feature's change after the radiotherapy. The significance level was set to 0.05 with Bonferroni correction.
For the gynecological cancer study, 12 female patients with gynecologic cancer treated with fractionated external beam radiotherapy (EBRT) combined with high-dose-rate (HDR) intracavitary brachytherapy were studied. Each patient first received the EBRT treatment followed by five fractions of HDR treatment. Before EBRT and before each fraction of brachytherapy, diffusion-weighted MRI (DWI-MRI) and CT scans were acquired. DWI scans were acquired in the sagittal plane using a spin-echo echo-planar imaging sequence with weighting factors of b = 500 s/mm2 and b = 1000 s/mm2; one set of b = 0 s/mm2 images was also acquired. ADC maps were calculated using a linear least-squares fitting method. Distributed diffusion coefficient (DDC) maps and the stretching parameter α were also calculated. For the ADC and DDC maps, 33 classic texture features were generated using the classic gray level run length matrix (GLRLM) and gray level co-occurrence matrix (GLCOM) from the high-risk clinical target volume (HR-CTV). The Wilcoxon signed-rank test was applied to determine the significance of each feature's numerical change after radiotherapy. The significance level was set to 0.05 with multiple-comparison correction where applicable.
Results: For the breast cancer study, regarding the ADC maps calculated from DWI-MRI, 24 out of 33 CTV features changed significantly after the radiotherapy. For the DCE-MRI pharmacokinetic parameters, all 33 CTV features of Ktrans and all 33 features of kep changed significantly.
For the gynecological cancer study, regarding the ADC maps, 28 out of 33 HR-CTV texture features showed significant changes after the EBRT treatment, and 28 out of 33 showed significant changes after the HDR treatments; the features that changed significantly after HDR were the same as those that changed after EBRT. Likewise, 28 out of 33 HR-CTV texture features showed significant changes over the whole radiotherapy course, and these were again the same features as after the HDR treatments.
Conclusion: Initial results indicate that certain classic texture features are sensitive to radiation-induced changes. Classic texture features with significant numerical changes could be used to monitor radiotherapy effect, suggesting that certain texture features might serve as biomarkers, supplementary to ADC and DDC, for assessing radiotherapy response in breast and gynecological cancer.
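An illustrative reconstruction of two building blocks from the Methods above: a two-point ADC map, ADC = ln(S0/Sb)/b, and a GLCM texture feature compared pre/post treatment with the Wilcoxon signed-rank test; all data, grid sizes and the single GLCM feature shown are synthetic stand-ins:

```python
# Sketch: two-point ADC map + GLCM texture feature + Wilcoxon test.
import numpy as np
from scipy.stats import wilcoxon
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
b = 500.0                                    # s/mm^2
S0 = rng.uniform(200, 400, size=(64, 64))    # b = 0 image (synthetic)
Sb = S0 * np.exp(-b * rng.uniform(0.5e-3, 2e-3, size=(64, 64)))
adc = np.log(S0 / Sb) / b                    # ADC map (mm^2/s)

def glcm_contrast(img):
    # Quantise to 256 gray levels, then compute one GLCM feature.
    q = (255 * (img - img.min()) / (np.ptp(img) + 1e-12)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=256)
    return graycoprops(glcm, "contrast")[0, 0]

pre = [glcm_contrast(rng.random((64, 64))) for _ in range(12)]
post = [glcm_contrast(rng.random((64, 64)) * 1.2) for _ in range(12)]
stat, p = wilcoxon(pre, post)                # paired pre/post comparison
```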
Abstract:
Current state-of-the-art techniques for landmine detection in ground-penetrating radar (GPR) utilize statistical methods to identify characteristics of a landmine response. This research makes use of 2-D slices of data in which subsurface landmine responses have hyperbolic shapes. Various methods from the field of visual image processing are adapted to the 2-D GPR data, producing superior landmine detection results. The research goes on to develop a physics-based GPR augmentation method motivated by current advances in visual object detection; this GPR-specific augmentation is used to mitigate issues caused by insufficient training sets, and the work shows that augmentation improves detection performance under training conditions that are normally very difficult. Finally, this work introduces convolutional neural networks as a method to learn feature extraction parameters; these learned convolutional features outperform hand-designed features in GPR detection tasks. The methods developed and presented here, both borrowed from and motivated by the substantial body of work in visual image processing, show an improvement in overall detection performance and introduce a way to improve the robustness of statistical classification.
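A hedged sketch of learned convolutional feature extraction for 2-D GPR patches: a small CNN whose convolutional layers replace hand-designed features; the architecture, patch size and class count are illustrative, not the thesis's design:

```python
# Sketch: small CNN as a learned feature extractor for 2-D GPR slices.
import torch
import torch.nn as nn

class GPRPatchNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(        # learned feature extraction
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(      # mine vs. clutter score
            nn.Flatten(), nn.Linear(32 * 8 * 8, 2))

    def forward(self, x):
        return self.classifier(self.features(x))

net = GPRPatchNet()
patch = torch.randn(4, 1, 32, 32)  # batch of 2-D GPR slices (synthetic)
logits = net(patch)                # shape (4, 2)
```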