993 results for Complex Images
Abstract:
The first and second authors gratefully acknowledge the support of the PhD grants with references SFRH/BD/28817/2006 and SFRH/PROTEC/49517/2009, respectively, from Fundação para a Ciência e Tecnologia (FCT). This work was partially done in the scope of the project “Methodologies to Analyze Organs from Complex Medical Images – Applications to Female Pelvic Cavity”, with reference PTDC/EEA-CRO/103320/2008, financially supported by FCT.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Palmer previously proposed a classification system of triangular fibrocartilage complex (TFCC) injuries that proved to be useful in directing clinical management. However, dorsal peripheral tears (variants of class 1C) were not described and have rarely been reported in the literature since. We herewith present a rare case of bucket-handle tear of the TFCC. To our knowledge, this is the first case demonstrating partial separation of both the palmar and dorsal distal radioulnar ligaments (DRULs) from the articular disc. The particular wrist magnetic resonance (MR) arthrographic findings of this unusual complex peripheral TFCC tear (a variant of both class 1B and 1C) were nicely appreciated upon sagittal reformatted images.
Abstract:
Methods are presented to map complex fiber architectures in tissues by imaging the 3D spectra of tissue water diffusion with MR. First, theoretical considerations show why and under what conditions diffusion contrast is positive. Using this result, spin displacement spectra that are conventionally phase-encoded can be accurately reconstructed by a Fourier transform of the measured signal's modulus. Second, studies of in vitro and in vivo samples demonstrate correspondence between the orientational maxima of the diffusion spectrum and those of the fiber orientation density at each location. In specimens with complex muscular tissue, such as the tongue, diffusion spectrum images show characteristic local heterogeneities of fiber architectures, including angular dispersion and intersection. Cerebral diffusion spectra acquired in normal human subjects resolve known white matter tracts and tract intersections. Finally, the relation between the presented model-free imaging technique and other available diffusion MRI schemes is discussed.
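The modulus-reconstruction result above can be illustrated with a one-dimensional toy (an assumption for exposition, not the full 3D MR experiment): because the q-space signal of a symmetric, positive displacement spectrum is itself real and positive, a Fourier transform of the signal's modulus recovers the spectrum.

```python
import numpy as np

# Toy 1D sketch: the q-space signal of a Gaussian displacement spectrum
# is real and positive, so |S(q)| = S(q) and the Fourier transform of the
# modulus recovers the displacement spectrum P(r).
q = np.fft.fftfreq(128)                          # hypothetical q-space samples
sigma = 5.0                                      # assumed displacement width (a.u.)
signal = np.exp(-2 * (np.pi * q * sigma) ** 2)   # FT of a Gaussian P(r)
spectrum = np.real(np.fft.fftshift(np.fft.fft(np.abs(signal))))
spectrum /= spectrum.sum()                       # normalize to a density
# the recovered spectrum peaks at zero displacement (center index 64)
```

In the imaging setting the same operation is applied per voxel to the 3D sampled q-space signal, and the orientational maxima of the resulting spectrum are compared with the fiber orientation density.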
Abstract:
Nowadays, the joint exploitation of images acquired daily by remote sensing instruments and of images available from archives allows detailed monitoring of the transitions occurring at the surface of the Earth. These modifications of the land cover generate spectral discrepancies that can be detected via the analysis of remote sensing images. Independently of the origin of the images and of the type of surface change, correct processing of such data requires the adoption of flexible, robust, and possibly nonlinear methods, to correctly account for the complex statistical relationships characterizing the pixels of the images. This thesis deals with the development and application of advanced statistical methods for multi-temporal optical remote sensing image processing tasks. Three different families of machine learning models have been explored and fundamental solutions for change detection problems are provided. In the first part, change detection with user supervision is considered. In a first application, a nonlinear classifier is applied with the intent of precisely delineating flooded regions from a pair of images. In a second case study, the spatial context of each pixel is injected into another nonlinear classifier to obtain a precise mapping of new urban structures. In both cases, the user provides the classifier with examples of what they believe has or has not changed. In the second part, a completely automatic and unsupervised method for precise binary detection of changes is proposed. The technique allows very accurate mapping without any user intervention, which is particularly useful when the readiness and reaction time of the system are a crucial constraint. In the third part, the problem of statistical distributions shifting between acquisitions is studied, and two approaches that transform the pair of bi-temporal images to reduce their differences unrelated to changes in land cover are investigated.
The methods align the distributions of the images, so that the pixel-wise comparison can be carried out with higher accuracy. Furthermore, the second method can deal with images from different sensors, regardless of the dimensionality of the data or the spectral information content. This opens the door to possible solutions for a crucial problem in the field: detecting changes when the images have been acquired by two different sensors.
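The supervised setting of the first part can be sketched with a toy example (synthetic pixel pairs, and an RBF support vector classifier standing in for the thesis's nonlinear classifier): each pixel is described by its values at the two dates, and user-labeled examples of changed/unchanged pixels train the classifier to produce a binary change map.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic two-date pixel pairs: unchanged pixels keep similar values,
# changed pixels jump between date 1 and date 2.
rng = np.random.default_rng(0)
n = 200
unchanged = rng.normal(0.3, 0.05, (n, 2))
changed = np.column_stack([rng.normal(0.3, 0.05, n),
                           rng.normal(0.7, 0.05, n)])
X = np.vstack([unchanged, changed])
y = np.r_[np.zeros(n), np.ones(n)]             # user-supplied labels

# A nonlinear (RBF) classifier separates "changed" from "unchanged".
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
change_map = clf.predict(X)                    # per-pixel binary change map
```

The same scheme extends to the urban-mapping case study by appending spatial-context features (e.g. neighborhood statistics) to each pixel's feature vector.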
Abstract:
Real-world images are complex objects, difficult to describe but at the same time possessing a high degree of redundancy. A recent study [1] on the statistical properties of natural images reveals that they can be viewed through different partitions which are essentially fractal in nature. One particular fractal component, related to the most singular (sharpest) transitions in the image, seems to be highly informative about the whole scene. In this paper we show how to decompose an image into its fractal components. We see that the most singular component is related to (but not coincident with) the edges of the objects present in the scene. We propose a new, simple method to reconstruct the image from the information contained in that most informative component. We show that the quality of the reconstruction depends strongly on the ability to extract the relevant edges when determining the most singular set. We discuss the results from the perspective of coding, proposing this method as a starting point for future developments.
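A loose numerical sketch of the idea (an assumed stand-in, not the paper's exact singularity measure): approximate the "most singular" set by the pixels whose gradient magnitude falls in the top few percent, i.e. the sharpest transitions.

```python
import numpy as np

# Toy image with one sharp vertical edge.
img = np.zeros((64, 64))
img[:, 32:] = 1.0

# Gradient magnitude as a crude proxy for local singularity strength.
gy, gx = np.gradient(img)
grad = np.hypot(gx, gy)

# Keep the pixels above the 95th percentile of gradient magnitude:
# for this toy image, exactly the two columns straddling the edge.
threshold = np.quantile(grad, 0.95)
most_singular = grad > threshold
```

On natural images this set traces (but, as the paper notes, does not coincide with) object edges; the reconstruction step then propagates the intensity information carried by this set back over the whole image.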
Abstract:
Due to the advances in sensor networks and remote sensing technologies, the acquisition and storage rates of meteorological and climatological data increase every day and call for novel and efficient processing algorithms. A fundamental problem of data analysis and modeling is the spatial prediction of meteorological variables in complex orography, which serves, among other purposes, extended climatological analyses, the assimilation of data into numerical weather prediction models, the preparation of inputs to hydrological models, and the real-time monitoring and short-term forecasting of weather. In this thesis, a new framework for spatial estimation is proposed by taking advantage of a class of algorithms emerging from statistical learning theory. Nonparametric kernel-based methods for nonlinear data classification, regression and target detection, known as support vector machines (SVM), are adapted for the mapping of meteorological variables in complex orography. With the advent of high-resolution digital elevation models, the field of spatial prediction met new horizons. By exploiting image processing tools along with physical heuristics, a large number of terrain features accounting for the topographic conditions at multiple spatial scales can be extracted. Such features are highly relevant for the mapping of meteorological variables because they control a considerable part of the spatial variability of meteorological fields in the complex Alpine orography. For instance, patterns of orographic rainfall, wind speed and cold air pools are known to be correlated with particular terrain forms, e.g. convex/concave surfaces and upwind sides of mountain slopes.
Kernel-based methods are employed to learn the nonlinear statistical dependence linking the multidimensional space of geographical and topographic explanatory variables to the variable of interest, that is, the wind speed measured at the weather stations or the occurrence of orographic rainfall patterns extracted from sequences of radar images. Compared to low-dimensional models integrating only the geographical coordinates, the proposed framework opens a way to regionalize meteorological variables which are multidimensional in nature and rarely show spatial auto-correlation in the original space, making the use of classical geostatistics cumbersome. The challenges explored during the thesis are manifold. First, the complexity of the models is optimized to impose appropriate smoothness properties and reduce the impact of noisy measurements. Secondly, a multiple kernel extension of SVM is considered to select the multiscale features which explain most of the spatial variability of wind speed. Then, SVM target detection methods are implemented to describe the orographic conditions which cause persistent and stationary rainfall patterns. Finally, the optimal splitting of the data is studied to estimate realistic performances and confidence intervals characterizing the uncertainty of predictions. The resulting maps of average wind speed find applications in renewable resource assessment and open a route to decreasing the temporal scale of analysis to meet hydrological requirements. Furthermore, the maps depicting the susceptibility to orographic rainfall enhancement can be used to improve current radar-based quantitative precipitation estimation and forecasting systems and to generate stochastic ensembles of precipitation fields conditioned upon the orography.
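The kernel regression step can be sketched as follows (a hedged toy: synthetic stations with assumed geographic and topographic features, and an RBF support vector machine standing in for the thesis's full framework):

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic "stations": (x, y, elevation, slope) descriptors and a wind
# speed that depends nonlinearly on the terrain features (assumed truth).
rng = np.random.default_rng(1)
n = 300
features = rng.uniform(0, 1, (n, 4))
wind = 2.0 + 3.0 * features[:, 2] ** 2 + np.sin(4 * features[:, 3])
wind += rng.normal(0, 0.1, n)                  # noisy measurements

# Fit an RBF support vector regression on 200 stations, then predict
# the wind speed at 100 unvisited sites.
model = SVR(kernel="rbf", C=10.0).fit(features[:200], wind[:200])
pred = model.predict(features[200:])
```

In the thesis's setting the feature vector would contain multiscale terrain descriptors derived from the digital elevation model, and the model complexity (kernel width, regularization) would be tuned to control smoothness.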
Abstract:
Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and the limitations of remote sensing data per se. This paper reviews a decade of experiments related to land use/cover classification in the Brazilian Amazon. Through comprehensive analysis of the classification results, it is concluded that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporating suitable textural images into multispectral bands and using segmentation-based methods are valuable ways to improve land use/cover classification, especially for high-spatial-resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Among the available classification algorithms, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results, although they often require more time for parameter optimization. Proper use of hierarchical methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data.
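The maximum likelihood classifier mentioned above can be sketched in a few lines (a minimal illustration on synthetic spectral signatures, not the paper's actual Amazon data): fit one multivariate Gaussian per land-cover class from training pixels, then assign each pixel to the class with the highest log-likelihood.

```python
import numpy as np

# Toy (red, NIR) spectral signatures for two hypothetical classes.
rng = np.random.default_rng(2)
forest = rng.normal([0.1, 0.6], 0.05, (100, 2))
pasture = rng.normal([0.3, 0.4], 0.05, (100, 2))
classes = [forest, pasture]

def ml_classify(pixels, classes):
    """Assign each pixel to the class with highest Gaussian log-likelihood."""
    scores = []
    for c in classes:
        mu, cov = c.mean(0), np.cov(c.T)
        diff = pixels - mu
        inv = np.linalg.inv(cov)
        # log-likelihood up to a constant: -0.5 * (log|Cov| + d' Cov^-1 d)
        scores.append(-0.5 * (np.log(np.linalg.det(cov))
                              + np.einsum("ij,jk,ik->i", diff, inv, diff)))
    return np.argmax(scores, axis=0)

labels = ml_classify(np.vstack(classes), classes)
```

Classification tree analysis would replace the per-class Gaussian assumption with learned axis-aligned splits, which is why it can outperform the parametric classifier on complex class distributions.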
Abstract:
The modern generation of Cherenkov telescopes has revealed a new population of gamma-ray sources in the Galaxy. Some of them have been identified with previously known X-ray binary systems, while others remain without clear counterparts at lower energies. Our initial goal here was to report on extensive radio observations of the first extended and yet unidentified source, namely TeV J2032+4130. This object was originally detected by the HEGRA telescope in the direction of the Cygnus OB2 region, and its nature has been a matter of debate in recent years. The situation has become more complex with new TeV detections by the Whipple and MILAGRO telescopes in the same field, which could be consistent with the historic HEGRA source, although a different origin cannot be ruled out. Aims. We aim to pursue the radio exploration of the TeV J2032+4130 position that we initiated in a previous paper, now taking into account the latest results from the Whipple and MILAGRO TeV telescopes. The data presented here are an extended follow-up of our previous work. Methods. Our investigation is mostly based on interferometric radio observations with the Giant Metrewave Radio Telescope (GMRT) close to Pune (India) and the Very Large Array (VLA) in New Mexico (USA). We also conducted near-infrared observations with the 3.5 m telescope and the OMEGA2000 camera at the Centro Astronómico Hispano Alemán (CAHA) in Almería (Spain). Results. We present deep radio maps centered on the TeV J2032+4130 position at different wavelengths. In particular, our 49 and 20 cm maps cover a field of view larger than half a degree that fully includes the Whipple position and the peak of the MILAGRO emission. Our most important result is a catalogue of 153 radio sources detected at 49 cm within the GMRT antenna primary beam, with a full width at half maximum (FWHM) of 43 arcminutes.
Among them, peculiar sources inside the Whipple error ellipse are discussed in detail, including a likely double-double radio galaxy and a one-sided jet source of possible blazar nature. This last object adds another alternative counterpart to be considered for the HEGRA, Whipple, and MILAGRO emissions. Moreover, our multi-configuration VLA images reveal the non-thermal extended emission previously reported by us with improved angular resolution. Its non-thermal spectral index is also confirmed thanks to matching-beam observations at the 20 and 6 cm wavelengths.
Abstract:
The management and conservation of coastal waters in the Baltic is challenged by a number of complex environmental problems, including eutrophication and habitat degradation. Demands for a more holistic, integrated and adaptive framework of ecosystem-based management emphasize the importance of appropriate information on the status and changes of the aquatic ecosystems. The thesis focuses on the spatiotemporal aspects of environmental monitoring in the extensive and geomorphologically complex coastal region of SW Finland, where the acquisition of spatially and temporally representative monitoring data is inherently challenging. Furthermore, the region is subject to multiple human interests and uses. A holistic geographical approach is emphasized, as it is ultimately the physical conditions that set the frame for any human activity. Characteristics of the coastal environment were examined using water quality data from the database of the Finnish environmental administration and Landsat TM/ETM+ images. A basic feature of the complex aquatic environment in the Archipelago Sea is its high spatial and temporal variability; this foregrounds the importance of geographical information as a basis of environmental assessments. While evidence of a consistent water turbidity pattern was observed, the coastal hydrodynamic realm is also characterized by high spatial and temporal variability. It is therefore also crucial to consider the spatial and temporal representativeness of field monitoring data. Remote sensing may facilitate evaluation of hydrodynamic conditions in the coastal region and the spatial extrapolation of in situ data despite their restrictions. Additionally, remotely sensed images can be used in the mapping of many of those coastal habitats that need to be considered in environmental management. 
With regard to surface water monitoring, only a small fraction of the data currently stored in the Hertta-PIVET register can be used effectively in scientific studies and environmental assessments. Long-term consistent data collection from established sampling stations should be emphasized, but research-type seasonal assessments producing abundant data should also be encouraged; thus a more comprehensive coordination of field work efforts is called for. The integration of remote sensing and various field measurement techniques would be especially useful in the complex coastal waters. The integration and development of monitoring systems in Finnish coastal areas also require further scientific assessment of monitoring practices. A holistic approach to the gathering and management of environmental monitoring data could be a cost-effective way of serving a multitude of information needs, and would fit the holistic, ecosystem-based management regimes that are currently being strongly promoted in Europe.
Abstract:
In computational neuroscience, it has been hypothesized that the visual system, from the retina up to at least the primary visual cortex, continuously fits a probabilistic model with latent variables to its stream of perceptions. Neither the exact model nor the exact fitting method is known, but existing algorithms for fitting such models need to form conditional estimates of the latent variables. This can help us understand why the visual system might fit such a model: if the model is appropriate, these conditional estimates can also form an excellent representation for analyzing the semantic content of the perceived images. The work presented here uses image classification performance (discrimination between common object categories) as a basis for comparing models of the visual system, and algorithms for fitting these models (viewed as probability densities) to images. This thesis (a) shows that models based on the complex cells of visual area V1 generalize better from labeled training examples than conventional neural networks, whose hidden units are more akin to V1 simple cells; (b) presents a new interpretation of complex-cell-based models of the visual system as probability distributions, together with new algorithms for fitting them to data; and (c) shows that these models form representations that are better for image classification after being trained as probability models.
Two additional technical innovations that made this work possible are also described: a random search algorithm for selecting hyper-parameters, and a compiler for matrix mathematical expressions that can optimize them for both central (CPU) and graphics (GPU) processors.
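The random-search idea mentioned above can be sketched as follows (the score function here is a hypothetical stand-in for a real model's validation score): sample hyper-parameter configurations at random and keep the best one seen.

```python
import random

def validation_score(lr, n_hidden):
    # Assumed toy objective, peaking near lr=0.01 and n_hidden=100;
    # in practice this would be a model's accuracy on held-out data.
    return -((lr - 0.01) ** 2 * 1e4 + ((n_hidden - 100) / 100) ** 2)

random.seed(0)
best, best_score = None, float("-inf")
for _ in range(200):
    lr = 10 ** random.uniform(-4, 0)        # log-uniform learning rate
    n_hidden = random.randint(10, 500)      # uniform hidden-layer size
    score = validation_score(lr, n_hidden)
    if score > best_score:
        best, best_score = (lr, n_hidden), score
```

Sampling independently per trial, rather than on a grid, is what makes the search efficient when only a few hyper-parameters actually matter.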
Abstract:
The Ministère des Ressources Naturelles et de la Faune (MRNF) commissioned the geomatics company SYNETIX inc. of Montréal and the remote sensing laboratory of the Université de Montréal to develop an application dedicated to the automatic detection and updating of the road network on 1:20,000 topographic maps from high-spatial-resolution optical imagery. To this end, the contractors undertook the adaptation of the SIGMA0 software package, which they had jointly developed for cartographic updating from satellite images with a resolution of about 5 metres. The product derived from SIGMA0 was a module named SIGMA-ROUTES, whose road detection principle rests on sweeping a filter along the road vectors of the existing cartography. The filter responses on very-high-resolution colour images of great radiometric complexity (aerial photographs) lead to the assignment of the labels intact, suspect, gone, or new to the detected road segments. The general objective of this project is to evaluate the correctness of the status assignments by quantifying performance on the basis of the total distances detected in agreement with the reference, and by carrying out a spatial analysis of the inconsistencies. The sequence of trials first targets the effect of resolution on the agreement rate and, secondly, the gains expected from a succession of enhancement treatments intended to make these images more suitable for road network extraction. The overall approach first involves characterizing a test site in the Sherbrooke region comprising 40 km of roads of various categories, from forest trails to wide collectors, over an area of 2.8 km2.
A ground-truth map of the communication routes allowed us to establish reference data from a visual detection, against which the SIGMA-ROUTES detection results are compared. Our results confirm that the radiometric complexity of high-resolution images in urban areas benefits from pre-processing such as segmentation and histogram compensation that homogenize road surfaces. We also observe that performance is hypersensitive to variations in resolution: moving between our three resolutions (84, 168 and 210 cm) changes the detection rate by nearly 15% on the total distances in agreement with the reference, and spatially splits long intact vectors into several portions alternating between the intact, suspect, and gone statuses. Detection of existing roads in agreement with the reference reached 78% with our most effective combination of resolution and image pre-processing. Chronic detection problems were identified, including several segments left unassigned and ignored by the process. There is also an overestimation of false detections labelled suspect when they should be identified as intact. Based on the linear measurements and the spatial analyses of the detections, we estimate that assignment of the intact status should reach 90% agreement with the reference after various adjustments to the algorithm. Detection of new roads was a failure regardless of resolution or image enhancement. The search for new segments, which relies on locating potential starting points of new roads connected to existing roads, generates a runaway of false detections wandering between non-road entities. In connection with these inconsistencies, we isolated numerous false detections of new roads generated parallel to roads previously assigned intact.
Finally, we suggest a procedure that takes advantage of certain enhanced images while integrating human intervention at a few pivotal phases of the process.
Abstract:
This paper explains the Genetic Algorithm (GA) evolution of an optimized wavelet that surpasses the cdf9/7 wavelet for fingerprint compression and reconstruction. Optimized wavelets have been evolved in previous works in the literature, but they are highly computationally complex and time consuming. Therefore, in this work, a simple approach is taken to reduce the computational complexity of the evolution algorithm. A training set comprising three cropped 32x32 images performed much better than the coefficients reported in the literature. An average improvement of 1.0059 dB in PSNR over the classical cdf9/7 wavelet was achieved across the 80 fingerprint images, and the computational speed was increased by 90.18%. The coefficients evolved for a compression ratio (CR) of 16:1 also yielded better average PSNR for other CRs. Improvement in average PSNR was observed for degraded and noisy images as well.
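The PSNR figure of merit used above is standard and can be computed directly (sketched here for 8-bit images with a toy gradient image, not the paper's fingerprint data): PSNR = 10·log10(MAX² / MSE), with MAX = 255.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# Toy 32x32 gradient image and a "reconstruction" with a uniform +4 error.
img = np.tile(np.arange(32, dtype=np.uint8) * 8, (32, 1))
noisy = np.clip(img.astype(int) + 4, 0, 255).astype(np.uint8)
quality = psnr(img, noisy)   # MSE = 16, so about 36.1 dB
```

A 1 dB average PSNR gain, as reported, corresponds to roughly a 21% reduction in mean squared reconstruction error.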
Abstract:
Segmentation of medical imagery is a challenging problem due to the complexity of the images, as well as to the absence of anatomical models that fully capture the possible deformations in each structure. Brain tissue is a particularly complex structure, and its segmentation is an important step for studies in temporal change detection of morphology, as well as for 3D visualization in surgical planning. In this paper, we present a method for segmenting brain tissue from magnetic resonance images that combines three existing techniques from the computer vision literature: EM segmentation, binary morphology, and active contour models. Each of these techniques has been customized for the problem of brain tissue segmentation so that the resulting method is more robust than its components. Finally, we present the results of a parallel implementation of this method on IBM's Power Visualization System supercomputer for a database of 20 brain scans, each with 256x256x124 voxels, and validate them against segmentations generated by neuroanatomy experts.
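The EM segmentation component can be illustrated with a minimal stand-in (synthetic intensities and scikit-learn's EM-fitted Gaussian mixture, not the paper's customized algorithm): model the intensity histogram as a mixture of per-tissue Gaussians and label each voxel by its most probable component.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical intensity samples for two tissue classes in a toy "scan".
rng = np.random.default_rng(3)
tissue_a = rng.normal(60, 8, 2000)
tissue_b = rng.normal(120, 10, 2000)
intensities = np.r_[tissue_a, tissue_b].reshape(-1, 1)

# EM fits the two-component mixture; predict() gives per-voxel labels.
gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities)
labels = gmm.predict(intensities)
```

In the full pipeline, binary morphology would then clean up the resulting label volume, and active contours would refine the tissue boundaries.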
Abstract:
This paper describes a new method for reconstructing a 3D surface using a small number, e.g. 10, of 2D photographic images. The images are taken from different viewing directions by a perspective camera with full prior knowledge of the camera configurations. The reconstructed object's surface is represented as a set of triangular facets. We empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points cluster closely on highly curved parts of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not undersampled or underrepresented, because surfaces or contours should be sampled more densely where their curvature is high. The more complex the contour's shape, the greater the number of points required, and the greater the number of points automatically generated by the proposed method. Given that the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape or curvature of the surface, regardless of the size of the surface or of the object.
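The uniform distribution of viewing directions assumed above can be generated with a Fibonacci sphere; this is one common construction for near-uniform points on a sphere, not necessarily the authors' (unstated) sampling scheme.

```python
import numpy as np

def fibonacci_sphere(n):
    """Return n near-uniformly distributed unit vectors on the sphere."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i      # golden-angle longitude steps
    z = 1.0 - 2.0 * (i + 0.5) / n               # evenly spaced heights
    r = np.sqrt(1.0 - z ** 2)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

views = fibonacci_sphere(10)   # e.g. ten camera viewing directions
```

Each row is a unit vector that could serve as a camera viewing direction on the object's viewing sphere.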