901 results for Classification image technique


Relevance:

30.00%

Publisher:

Abstract:

Over the past few decades, applications of infrared sensors have advanced considerably around the world. One difficulty remains, however: objects are often not clear enough, or cannot easily be distinguished, in the image obtained of the observed scene. Infrared image enhancement has played an important role in the development of infrared computer vision, image processing, and non-destructive testing, among other technologies. This thesis addresses infrared image enhancement techniques in two respects: the processing of a single infrared image in the hybrid spatial-frequency domain, and the fusion of infrared and visible images using the non-subsampled contourlet transform (NSCT). Image fusion can be seen as a continuation of the single infrared image enhancement model, since it combines infrared and visible images into a single image that represents and enhances all the useful information and features of the source images; a single image cannot contain all the relevant or available information, because of the restrictions inherent in any single imaging sensor. We first survey the development of infrared image enhancement techniques; we then turn to single infrared image enhancement and propose a hybrid-domain enhancement scheme with an improved-threshold fuzzy evaluation method, which achieves higher image quality and improves human visual perception. The infrared and visible image fusion techniques are built on an accurate registration of the source images acquired by the different sensors. The SURF-RANSAC algorithm is applied for registration throughout this research, yielding very accurately registered images and clear benefits for the fusion processing. For the fusion of infrared and visible images, a series of advanced and effective approaches is proposed. A standard multi-channel NSCT-based fusion method is presented as a reference for the fusion approaches that follow. A joint fusion approach, combining the Adaptive-Gaussian NSCT and the wavelet transform (WT), is proposed and yields fusion results better than those obtained with general non-adaptive methods. An NSCT-based fusion approach employing compressed sensing (CS) and total variation (TV) to sample coefficients sparsely and reconstruct the fused coefficients accurately is then proposed; it obtains much better fusion results through pre-enhancement of the infrared image and by reducing redundant information in the fusion coefficients. Finally, an NSCT-based fusion procedure using a fast iterative-shrinking compressed sensing (FISCS) technique is proposed to compress the decomposed coefficients and reconstruct the fused coefficients during the fusion process, leading to better results obtained faster and more efficiently.
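
As a concrete illustration of the registration step named above, a minimal sketch of SURF feature matching with RANSAC outlier rejection might look as follows; it assumes opencv-contrib-python (SURF lives in cv2.xfeatures2d), and the file names are placeholders, not the thesis data.

```python
# Hedged sketch of SURF + RANSAC registration between an infrared and a
# visible image; "infrared.png" and "visible.png" are placeholder inputs.
import cv2
import numpy as np

ir = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)
vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp_ir, des_ir = surf.detectAndCompute(ir, None)
kp_vis, des_vis = surf.detectAndCompute(vis, None)

# Match descriptors and keep the best correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = sorted(matcher.match(des_ir, des_vis), key=lambda m: m.distance)[:100]

src = np.float32([kp_ir[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_vis[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects outlier matches while estimating the homography.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
registered = cv2.warpPerspective(ir, H, (vis.shape[1], vis.shape[0]))
```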

Relevance:

30.00%

Publisher:

Abstract:

In computer vision, training a model that performs classification effectively is highly dependent on the extracted features and the number of training instances. Conventionally, feature detection and extraction are performed by a domain expert who, in many cases, is expensive to employ and hard to find. Therefore, image descriptors have emerged to automate these tasks. However, designing an image descriptor still requires domain-expert intervention. Moreover, the majority of machine learning algorithms require a large number of training examples to perform well. However, labelled data is not always available or easy to acquire, and dealing with a large dataset can dramatically slow down the training process. In this paper, we propose a novel Genetic Programming-based method that automatically synthesises a descriptor using only two training instances per class. The proposed method combines arithmetic operators to evolve a model that takes an image and generates a feature vector. The performance of the proposed method is assessed using six datasets for texture classification with different degrees of rotation, and is compared with seven domain-expert-designed descriptors. The results show that the proposed method is robust to rotation, and has significantly outperformed, or achieved a comparable performance to, the baseline methods.
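
The evolved programs themselves are not given in the abstract; the sketch below only illustrates the general idea of a descriptor built from arithmetic operators over a sliding window, with a hand-written stand-in for an evolved program and a histogram as the feature vector.

```python
# Illustrative sketch (not the authors' evolved descriptor): apply a small
# arithmetic program to each sliding window and histogram the responses
# into a fixed-length, rotation-tolerant feature vector.
import numpy as np

def window_response(win):
    # Hand-written stand-in for an evolved arithmetic program: combines
    # min, max and mean of the window with subtraction and division.
    return (win.max() - win.min()) / (win.mean() + 1e-8)

def describe(image, win=3, bins=16):
    h, w = image.shape
    responses = [window_response(image[r:r + win, c:c + win])
                 for r in range(h - win + 1)
                 for c in range(w - win + 1)]
    hist, _ = np.histogram(responses, bins=bins, range=(0.0, 4.0))
    return hist / max(hist.sum(), 1)  # normalized feature vector

rng = np.random.default_rng(0)
texture = rng.random((32, 32))  # placeholder texture patch
print(describe(texture))
```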

Relevance:

30.00%

Publisher:

Abstract:

The use of remote sensing for monitoring submerged aquatic vegetation (SAV) in fluvial environments has been limited by the spatial and spectral resolution of available image data. The absorption of light in water also complicates the use of common image analysis methods. This paper presents the results of a study that uses very high resolution (VHR) image data, collected with a near-infrared-sensitive DSLR camera, to map the distribution of SAV species at three sites along the Desselse Nete, a lowland river in Flanders, Belgium. Plant species, including Ranunculus aquatilis L., Callitriche obtusangula Le Gall, Potamogeton natans L., Sparganium emersum L. and Potamogeton crispus L., were classified from the data using Object-Based Image Analysis (OBIA) and expert knowledge. A classification rule set based on a combination of both spectral and structural image variation (e.g. texture and shape) was developed for images from two sites. A comparison of the classifications with manually delineated ground truth maps resulted in 61% overall accuracy for both sites. Application of the rule set to a third validation image resulted in 53% overall accuracy. These consistent results show promise for species-level mapping in such biodiverse environments, but also prompt a discussion on the assessment of classification accuracy.
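
A rule set of the kind described can be pictured as a cascade of spectral and structural tests applied to each segmented object; the thresholds, attributes, and class assignments below are invented for illustration and are not the study's actual rules.

```python
# Illustrative OBIA-style rule set: assign a species class to a segmented
# object from its mean NIR reflectance, a texture measure, and a shape
# measure. All thresholds are hypothetical.
def classify_object(nir_mean, texture_std, compactness):
    if nir_mean > 0.55 and texture_std < 0.05:
        return "Potamogeton natans"     # bright, smooth floating leaves
    if nir_mean > 0.40 and compactness > 0.60:
        return "Sparganium emersum"     # emergent, compact object shapes
    if texture_std > 0.10:
        return "Ranunculus aquatilis"   # finely textured submerged canopy
    return "water / unclassified"

print(classify_object(0.62, 0.03, 0.45))  # -> "Potamogeton natans"
```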

Relevance:

30.00%

Publisher:

Abstract:

Hallux rigidus (HR) affects the first metatarsophalangeal joint (MTPJ) in 35% to 60% of the population over 65 years, and there are multiple treatment options. The radiological stage of the deformity determines the procedure to be performed: in the early stages, cheilectomy and corrective osteotomy are performed, whereas for more advanced grades the surgeon chooses joint-destructive techniques, namely arthrodesis and arthroplasty. This final degree project focuses on destructive techniques for the first MTPJ, aiming to clarify which procedure produces better results according to several parameters: outcomes on the American Orthopaedic Foot and Ankle Society Hallux Metatarsophalangeal-Interphalangeal scale (AOFAS), range of motion (ROM) of the first MTPJ, and radiological classification. Regarding implant arthroplasty, this review discusses which materials and designs produce the best results in relation to patient characteristics such as age, inflammatory joint disease, and the viability and durability of the implant. The conclusion of this review is that AOFAS scores decrease after arthrodesis owing to loss of mobility, but the two techniques show similar effectiveness; the choice of technique should therefore be determined by weighing various factors and patient characteristics. Keywords: hallux rigidus; hallux rigidus and surgical treatment; hallux rigidus arthrodesis; hallux rigidus arthroplasty; hallux rigidus (arthroplasty and arthrodesis).

Relevance:

30.00%

Publisher:

Abstract:

With the increasing spatial resolution of satellite optical sensors, new strategies must be developed to classify remote sensing images. The abundance of detail in these images greatly reduces the effectiveness of spectral classifications, and many textural classification methods, notably statistical approaches, are no longer suitable. Structural approaches, by contrast, offer an interesting opening: these object-oriented approaches study the structure of the image in order to interpret its meaning. An algorithm of this type is proposed in the first part of this thesis. Based on the detection and analysis of keypoints (KPC: KeyPoint-based Classification), it offers an effective solution to the problem of classifying images of very high spatial resolution. The classifications performed on the data demonstrate in particular its ability to differentiate visually similar textures. Furthermore, it has been shown in the literature that evidential fusion, based on Dempster-Shafer theory, is well suited to remote sensing images because of its ability to integrate concepts such as ambiguity and uncertainty. Few studies, however, have examined the application of this theory to complex textural data such as that produced by structural classifications. The second part of this thesis aims to fill this gap by studying the fusion of multi-scale KPC classifications with Dempster-Shafer theory. The tests carried out show that this multi-scale approach improves the final classification when the initial image is of low quality. Moreover, the study highlights the potential improvement brought by estimating the reliability of the intermediate classifications, and provides avenues for carrying out these estimates.
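
The evidential-fusion step rests on Dempster's rule of combination; the sketch below shows that rule for two mass functions over the same frame of discernment, with hypothetical class labels standing in for multi-scale KPC outputs.

```python
# Minimal sketch of Dempster's rule of combination. Mass functions are
# given as {frozenset(labels): mass}; labels here are illustrative only.
from itertools import product

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on contradictory evidence
    # Normalize by 1 - K, where K is the total conflict.
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two scales of a hypothetical KPC classifier voting on two textures.
scale1 = {frozenset({"urban"}): 0.6, frozenset({"urban", "forest"}): 0.4}
scale2 = {frozenset({"urban"}): 0.5, frozenset({"forest"}): 0.3,
          frozenset({"urban", "forest"}): 0.2}
print(dempster_combine(scale1, scale2))
```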

Relevance:

30.00%

Publisher:

Abstract:

Hyperspectral imaging (HSI) provides spatial and spectral information about the emissivity of a material's surface, which can be used for mineral identification. This requires a reference material, or endmember, which in mineralogy is the purest form of a mineral. The main objective of this project is mineral identification by hyperspectral imaging. The hyperspectral information was recorded from the energy reflected from the mineral surface. Solar energy is the energy source in remote sensing hyperspectral imaging, whereas a heating element is the energy source used in the laboratory experiments. In the first stage of this work, the spectral signatures of pure minerals are obtained with the hyperspectral camera, which measures the radiation reflected from the mineral surface. In this project, two series of experiments were conducted in different wavelength ranges (0.4 to 1 µm and 7.7 to 11.8 µm). In the second part of this project, the spectral signatures obtained from individual samples are compared with spectral signatures from the ASTER hyperspectral library. In the third part, three different hyperspectral classification methods are considered: the Spectral Angle Mapper (SAM), Spectral Information Divergence (SID), and Normalized Cross-Correlation (NCC). Finally, a machine learning system, the Extreme Learning Machine (ELM), is used to identify the minerals. Two types of samples were used in this project. The ELM system is divided into two phases, training and testing. In the training phase, the signature of a single mineral sample is fed into the system; in the testing phase, the spectral signatures of the various minerals entered during training are compared against mixed mineral samples in order to identify them.
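
Of the three classifiers listed, the Spectral Angle Mapper is the simplest to write down: it assigns each pixel to the endmember with the smallest spectral angle. The sketch below uses made-up five-band signatures, not actual ASTER library entries.

```python
# Minimal Spectral Angle Mapper (SAM) sketch: the angle between a pixel
# spectrum and each library endmember; the smallest angle wins.
import numpy as np

def spectral_angle(pixel, endmember):
    cos = np.dot(pixel, endmember) / (
        np.linalg.norm(pixel) * np.linalg.norm(endmember))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # angle in radians

def sam_classify(pixel, library):
    """library: {mineral_name: reference_spectrum}; returns best match."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

# Hypothetical 5-band signatures, for illustration only.
library = {"quartz": np.array([0.9, 0.8, 0.7, 0.6, 0.5]),
           "calcite": np.array([0.3, 0.5, 0.7, 0.8, 0.9])}
print(sam_classify(np.array([0.85, 0.78, 0.66, 0.58, 0.52]), library))
```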

Relevance:

30.00%

Publisher:

Abstract:

In order to optimize frontal detection in sea surface temperature fields at 4 km resolution, a combined statistical and expert-based approach is applied to test different spatial smoothings of the data prior to the detection process. Fronts are usually detected at 1 km resolution using the histogram-based, single image edge detection (SIED) algorithm developed by Cayula and Cornillon in 1992, with a standard preliminary smoothing using a median filter and a 3 × 3 pixel kernel. Here, detections are performed in three study regions (off Morocco, the Mozambique Channel, and north-western Australia) and across the Indian Ocean basin using the combination of multiple windows (CMW) method developed by Nieto, Demarcq and McClatchie in 2012, which improves on the original Cayula and Cornillon algorithm. Detections at 4 km and 1 km resolution are compared. Fronts are divided into two intensity classes ("weak" and "strong") according to their thermal gradient. A preliminary smoothing is applied prior to the detection using different convolutions: three types of filters (median, average and Gaussian) combined with four kernel sizes (3 × 3, 5 × 5, 7 × 7, and 9 × 9 pixels) and three detection window sizes (16 × 16, 24 × 24 and 32 × 32 pixels), to test the effect of these smoothing combinations on reducing the background noise of the data and therefore on improving the frontal detection. The performance of the combinations on 4 km data is evaluated using two criteria: detection efficiency and front length. We find that the optimal combination of preliminary smoothing parameters for enhancing detection efficiency while preserving front length includes a median filter, a 16 × 16 pixel window size, and a 5 × 5 pixel kernel for strong fronts or a 7 × 7 pixel kernel for weak fronts. Results show an improvement in detection performance (from largest to smallest window size) of 71% for strong fronts and 120% for weak fronts. Despite the small window used (16 × 16 pixels), the length of the fronts is preserved relative to that found with 1 km data. This optimal preliminary smoothing and the CMW detection algorithm on 4 km sea surface temperature data are then used to describe the spatial distribution of the monthly frequencies of occurrence for both strong and weak fronts across the Indian Ocean basin. In general, strong fronts are observed in coastal areas, whereas weak fronts, with some seasonal exceptions, are mainly located in the open ocean. This study shows that adequate noise reduction by a preliminary smoothing of the data considerably improves the frontal detection efficiency as well as the overall quality of the results. Consequently, the use of 4 km data enables frontal detections similar to those from 1 km data (using a standard median 3 × 3 convolution) in terms of detectability, length and location. This method, using 4 km data, is easily applicable to large regions or at the global scale, with far fewer constraints on data manipulation and processing time relative to 1 km data.
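
A minimal sketch of the preliminary smoothing combinations (median, average, and Gaussian filters at several kernel sizes) is given below; a Sobel gradient magnitude stands in for the SIED/CMW detection step, which is not reproduced here, and the SST field is synthetic.

```python
# Sketch of the preliminary smoothing step on a synthetic 4 km SST field.
import numpy as np
from scipy import ndimage

sst = np.random.default_rng(1).normal(20.0, 0.5, (256, 256))  # fake SST [C]

# Median filter at the four tested kernel sizes, plus one average and one
# Gaussian convolution for comparison.
smoothed = {("median", k): ndimage.median_filter(sst, size=k)
            for k in (3, 5, 7, 9)}
smoothed[("average", 5)] = ndimage.uniform_filter(sst, size=5)
smoothed[("gaussian", 5)] = ndimage.gaussian_filter(sst, sigma=5 / 3.0)

# Crude frontal-intensity proxy: gradient magnitude after the 5 x 5 median.
gx = ndimage.sobel(smoothed[("median", 5)], axis=0)
gy = ndimage.sobel(smoothed[("median", 5)], axis=1)
magnitude = np.hypot(gx, gy)
strong_fronts = magnitude > np.percentile(magnitude, 99)  # "strong" class
```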

Relevance:

30.00%

Publisher:

Abstract:

To gain a better understanding of fluid-structure interaction, especially when dealing with a flow around an arbitrarily moving body, it is essential to develop measurement tools enabling the instantaneous detection of a moving deformable interface during the flow measurements. A particularly useful application is the determination of the unsteady turbulent flow velocity field around a moving porous fishing net structure, which is of great interest for selectivity and also for numerical code validation, which needs a realistic database. To this end, a representative piece of fishing net structure is used to investigate both the Turbulent Boundary Layer (TBL) developing over the horizontal porous moving fishing net structure and the turbulent flow passing through the moving porous structure. For this investigation, Time Resolved PIV measurements are carried out and combined with a motion tracking technique allowing the measurement of the instantaneous motion of the deformable fishing net during the PIV measurements. Once the two-dimensional motion of the porous structure is obtained, the PIV velocity measurements are analyzed in connection with the detected motion. Finally, the TBL is characterized and the effect of the structure motion on the volumetric flow rate passing through the moving porous structure is clearly demonstrated.
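
As a rough illustration of the final analysis step, not the authors' processing chain, one can sample the PIV velocity field along the detected net position and integrate across it to estimate the flow rate per unit span; the grid, velocities, and interface below are synthetic placeholders.

```python
# Illustrative sketch: integrate PIV velocities across a detected moving
# interface to estimate volumetric flow rate per unit span.
import numpy as np

x = np.linspace(0.0, 0.5, 64)                       # streamwise grid [m]
y = np.linspace(0.0, 0.2, 48)                       # wall-normal grid [m]
v = np.full((48, 64), 0.05)                         # wall-normal velocity [m/s]
net_y = 0.10 + 0.01 * np.sin(2 * np.pi * x / 0.5)   # detected net line [m]

# Sample v at the interface and integrate along x.
rows = np.searchsorted(y, net_y)
v_at_net = v[rows, np.arange(x.size)]
flow_rate = np.trapz(v_at_net, x)                   # [m^2/s] per unit span
print(f"flow rate per unit span: {flow_rate:.4f} m^2/s")
```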

Relevance:

30.00%

Publisher:

Abstract:

Close similarities have been found between the otoliths of sea-caught and laboratory-reared larvae of the common sole Solea solea (L.), given appropriate temperatures and nourishment of the latter. However, from hatching to mouth formation and during metamorphosis, sole otoliths have proven difficult to read because the increments may be less regular and of low contrast. In this study, the growth increments in otoliths of larvae reared at 12 °C were counted by light microscopy to test the hypothesis of daily deposition, with some results verified using scanning electron microscopy (SEM), and by image analysis, in order to compare the reliability of the two methods in age estimation. Age was first estimated (in days posthatch) from light micrographs of whole mounted otoliths. Counts were initiated from the increment formed at the time of mouth opening (Day 4). The average incremental deposition rate was consistent with the daily hypothesis. However, the light-micrograph readings tended to underestimate the mean ages of the larvae. Errors were probably associated with the low-contrast increments: those deposited after mouth formation during the transition to first feeding, and those deposited from the onset of eye migration (about 20 d posthatch) during metamorphosis. SEM failed to resolve these low-contrast areas accurately because of poor etching. A method using image analysis was applied to a subsample of micrograph-counted otoliths. The image analysis was supported by a pattern-recognition algorithm (Growth Demodulation Algorithm, GDA). On each otolith, the GDA method fitted the growth pattern of these larval otoliths to averaged data from different radial profiles, in order to demodulate the exponential trend of the signal before spectral analysis (Fast Fourier Transform, FFT). This second method allowed both more precise designation of increments, particularly in low-contrast areas, and more accurate readings, but increased the error in mean age estimation. The variability is probably due to a still rough perception of otolith increments by the GDA method, counting being achieved through a theoretical exponential pattern and mean estimates being given by the FFT. Although this error variability was greater than expected, the method offers improvements in both speed and accuracy of otolith readings.
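
The GDA itself is not specified in the abstract; the sketch below only illustrates the demodulate-then-FFT idea it rests on: divide out a fitted exponential trend from a (synthetic) radial intensity profile, then read the increment count from the dominant spectral peak.

```python
# Demodulation-plus-FFT sketch on a synthetic otolith radial profile:
# 25 increments riding on an exponential growth trend.
import numpy as np

r = np.linspace(0.0, 100.0, 1024)                   # radius [um]
n_increments = 25
profile = np.exp(0.02 * r) * (
    1.0 + 0.2 * np.sin(2 * np.pi * n_increments * r / r[-1]))

# Demodulate the exponential trend by dividing out a log-linear fit...
slope, intercept = np.polyfit(r, np.log(profile), 1)
detrended = profile / np.exp(slope * r + intercept)

# ...then estimate the increment count from the dominant FFT peak.
spectrum = np.abs(np.fft.rfft(detrended - detrended.mean()))
print("estimated increment count:", np.argmax(spectrum[1:]) + 1)
```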

Relevance:

30.00%

Publisher:

Abstract:

The size of online image datasets is constantly increasing. For an image dataset with millions of images, image retrieval becomes a seemingly intractable problem for exhaustive similarity search algorithms. Hashing methods, which encode high-dimensional descriptors into compact binary strings, have become very popular because of their high efficiency in search and storage. In the first part, we propose a multimodal retrieval method based on latent feature models. The procedure consists of a nonparametric Bayesian framework for learning underlying semantically meaningful abstract features in a multimodal dataset, a probabilistic retrieval model that allows cross-modal queries, and an extension model for relevance feedback. In the second part, we focus on supervised hashing with kernels. We describe a flexible hashing procedure that treats binary codes and pairwise semantic similarity as latent and observed variables, respectively, in a probabilistic model based on Gaussian processes for binary classification, and we present a scalable inference algorithm with the sparse pseudo-input Gaussian process (SPGP) model and distributed computing. In the last part, we define an incremental hashing strategy for dynamic databases where new images are added frequently. The method is based on a two-stage classification framework using binary and multi-class SVMs. The proposed method also enforces balance in the binary codes through an imbalance penalty, yielding higher-quality codes. We learn hash functions by an efficient algorithm in which the NP-hard problem of finding optimal binary codes is solved via cyclic coordinate descent and the SVMs are trained in a parallelized, incremental manner. For modifications such as adding images from an unseen class, we propose an incremental procedure for effective and efficient updates to the previous hash functions. Experiments on three large-scale image datasets demonstrate that the incremental strategy is capable of efficiently updating hash functions to the same retrieval performance as hashing from scratch.
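
The learned hash functions described above cannot be reconstructed from the abstract alone; the sketch below shows the generic hashing-retrieval pattern they plug into, using random-hyperplane codes and Hamming ranking as a stand-in for the learned GP/SVM hash functions.

```python
# Minimal hashing-based retrieval sketch: encode descriptors as binary
# codes via random hyperplanes, then rank database items by Hamming
# distance to the query code.
import numpy as np

rng = np.random.default_rng(0)
d, bits, n = 128, 32, 10_000
database = rng.normal(size=(n, d))        # placeholder image descriptors
planes = rng.normal(size=(d, bits))       # one random hyperplane per bit

def encode(x):
    return (x @ planes > 0).astype(np.uint8)  # sign of each projection

codes = encode(database)

def search(query, k=5):
    q = encode(query)
    hamming = (codes != q).sum(axis=1)    # distance in code space
    return np.argsort(hamming)[:k]

print(search(database[42]))  # the query's own index should rank first
```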

Relevance:

30.00%

Publisher:

Abstract:

Master's dissertation, Universidade de Brasília, Faculdade Gama, Programa de Pós-Graduação em Engenharia Biomédica (Graduate Program in Biomedical Engineering), 2016.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES: Due to the high prevalence of renal failure among transcatheter aortic valve replacement (TAVR) candidates, a non-contrast MR technique is desirable for pre-procedural planning. We sought to evaluate the feasibility of a novel, non-contrast, free-breathing, self-navigated three-dimensional (SN3D) MR sequence for imaging the aorta from its root to the iliofemoral run-off, in comparison with non-contrast two-dimensional balanced steady-state free-precession (2D-bSSFP) imaging. METHODS: SN3D [field of view (FOV), 220-370 mm³; slice thickness, 1.15 mm; repetition/echo time (TR/TE), 3.1/1.5 ms; flip angle, 115°] and 2D-bSSFP acquisitions (FOV, 340 mm; slice thickness, 6 mm; TR/TE, 2.3/1.1 ms; flip angle, 77°) were performed in 10 healthy subjects (all male; mean age, 30.3 ± 4.3 years) using a 1.5-T MRI system. Aortic root measurements and qualitative image ratings (four-point Likert scale) were compared. RESULTS: The mean effective aortic annulus diameter was similar for 2D-bSSFP and SN3D (26.7 ± 0.7 vs. 26.1 ± 0.9 mm, p = 0.23). The mean image quality of 2D-bSSFP (4; IQR 3-4) was rated slightly higher (p = 0.03) than that of SN3D (3; IQR 2-4). The mean total acquisition time for SN3D imaging was 12.8 ± 2.4 min. CONCLUSIONS: Our results suggest that the novel SN3D sequence allows rapid, free-breathing assessment of the aortic root and the aortoiliofemoral system without administration of contrast medium. KEY POINTS: • The prevalence of renal failure is high among TAVR candidates. • Non-contrast 3D MR angiography allows TAVR procedure planning. • The self-navigated sequence provides significantly reduced scanning time.

Relevance:

30.00%

Publisher:

Abstract:

The paper catalogues the procedures and steps involved in agroclimatic classification. These vary from conventional descriptive methods to modern computer-based numerical techniques. There are three mutually independent numerical classification techniques, namely ordination, cluster analysis, and the minimum spanning tree, and under each technique several grouping methods exist. The choice of numerical classification procedure differs with the type of data set. In the case of numerical continuous data sets with both positive and negative values, the simplest and least controversial procedures are the unweighted pair-group method (UPGMA) and the weighted pair-group method (WPGMA) among the clustering techniques, with the similarity measure obtained from either the Gower metric or the standardized Euclidean metric. Where the number of attributes is large, these can be reduced by ordination to fewer new attributes defined by the principal components or coordinates. The first few components or coordinates explain the maximum variance in the data matrix, so the revised attributes are less affected by noise in the data set. It is possible to check for misclassifications using the minimum spanning tree.
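
Under the stated assumptions (standardized Euclidean distances and average linkage, which is UPGMA in SciPy's terminology), the workflow sketched above might be coded as follows; the station-by-attribute matrix is a random placeholder.

```python
# UPGMA clustering of climatic attributes plus a PCA-style ordination.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
stations = rng.normal(size=(30, 8))   # 30 sites x 8 climatic attributes

# Standardized Euclidean metric, then UPGMA = "average" linkage.
dist = pdist(stations, metric="seuclidean")
tree = linkage(dist, method="average")
groups = fcluster(tree, t=4, criterion="maxclust")  # cut into 4 groups

# Ordination: project onto the first two principal components.
centered = stations - stations.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T
print(groups)
print(scores[:3])
```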