949 results for Retinal image quality metric
Abstract:
Limestone-based (karstic) freshwater wetlands of the Everglades, Belize, Mexico, and Jamaica are distinctive in having a high biomass of CaCO3-rich periphyton mats. Diatoms are common components of these mats and show predictable responses to environmental variation, making them good candidates for assessing nutrient enrichment in these naturally ultraoligotrophic wetlands. However, aside from in the Everglades of southern Florida, very little research has been done to document the diatoms and their environmental preferences in karstic Caribbean wetlands, which are increasingly threatened by eutrophication. We identified diatoms in periphyton mats collected during wet and dry periods from the Everglades and similar freshwater karstic wetlands in Belize, Mexico, and Jamaica. We compared diatom assemblage composition and diversity among locations and periods, and the effect of the limiting nutrient, P, on species composition among locations. We used periphyton-mat total P (TP) as a metric of availability. A total of 176 diatom species in 45 genera were recorded from the 4 locations. Twenty-three of these species, including 9 that are considered indicative of Everglades diatom flora, were found in all 4 locations. In Everglades and Caribbean sites, we identified assemblages and indicator species associated with low and high periphyton-mat TP and calculated TP optima and tolerances for each indicator species. TP optima and tolerances of indicator species differed between the Everglades and the Caribbean, but weighted averaging models predicted periphyton-mat TP concentrations from diatom assemblages at Everglades (R2 = 0.56) and Caribbean (R2 = 0.85) locations. These results show that diatoms can be effective indicators of water quality in karstic wetlands of the Caribbean, but application of regionally generated transfer functions to distant sites provides less reliable estimates than locally developed functions.
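A minimal sketch of the weighted-averaging approach behind such diatom-based transfer functions, computing taxon TP optima and inferring site TP from assemblage data; the example values are hypothetical and classical deshrinking is omitted.

```python
import numpy as np

def wa_optima(abundance, tp):
    """Weighted-averaging TP optimum per taxon: abundance-weighted mean of site TP."""
    # abundance: sites x taxa relative abundances; tp: site TP values
    return (abundance * tp[:, None]).sum(axis=0) / abundance.sum(axis=0)

def wa_predict(abundance, optima):
    """Infer site TP as the abundance-weighted mean of taxon optima."""
    return (abundance * optima[None, :]).sum(axis=1) / abundance.sum(axis=1)

# Hypothetical example: 3 sites x 2 taxa
counts = np.array([[0.8, 0.2], [0.5, 0.5], [0.1, 0.9]])
tp = np.array([120.0, 300.0, 800.0])   # periphyton-mat TP (e.g., ug/g)
opt = wa_optima(counts, tp)
print(wa_predict(counts, opt))         # inferred TP; deshrinking omitted
```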
Abstract:
Medical imaging technology and applications are continuously evolving, dealing with images of increasing spatial and temporal resolution, which allow easier and more accurate medical diagnosis. However, this increase in resolution demands a growing amount of data to be stored and transmitted. Despite the high coding efficiency achieved by the most recent image and video coding standards in lossy compression, they are not well suited for quality-critical medical image compression, where either near-lossless or lossless coding is required. In this dissertation, two different approaches to improve lossless coding of volumetric medical images, such as Magnetic Resonance and Computed Tomography, were studied and implemented using the latest High Efficiency Video Coding (HEVC) standard. In the first approach, the use of geometric transformations to perform inter-slice prediction was investigated. In the second approach, a pixel-wise prediction technique based on Least-Squares prediction that exploits inter-slice redundancy was proposed to extend the current HEVC lossless tools. Experimental results show a bitrate reduction between 45% and 49% when compared with DICOM-recommended encoders, and of 13.7% when compared with standard HEVC.
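A minimal sketch of a least-squares pixel-wise predictor that exploits inter-slice redundancy, assuming a causal context in the current slice plus the co-located pixel in the previous slice; the context shape and training window are illustrative, not the configuration integrated into HEVC.

```python
import numpy as np

def ls_predict_pixel(cur, prev, y, x, win=8):
    """Least-squares prediction of cur[y, x] from a causal context in the
    current slice and the co-located pixel in the previous slice.
    Minimal sketch; context and training-window choices are illustrative."""
    feats, targets = [], []
    for j in range(max(1, y - win), y):
        for i in range(max(1, x - win), min(cur.shape[1] - 1, x + win)):
            feats.append([cur[j, i - 1], cur[j - 1, i], prev[j, i], 1.0])
            targets.append(cur[j, i])
    if not feats:
        return float(prev[y, x])                       # fall back to co-located pixel
    A, b = np.asarray(feats), np.asarray(targets)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)          # local LS weights
    ctx = np.array([cur[y, x - 1], cur[y - 1, x], prev[y, x], 1.0])
    return float(ctx @ w)                               # predicted pixel value
```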
Abstract:
The importance of non-destructive techniques (NDT) in structural health monitoring programmes has become increasingly apparent in recent times. The quality of the measured data, often affected by various environmental conditions, can be a guiding factor for the usefulness and prediction efficiency of the various detection and monitoring methods used in this regard. Often, preprocessing the acquired data with respect to the affecting environmental parameters can improve the information quality and lead to a significantly more efficient and correct prediction process. The improvement can be directly related to the final decision-making policy about a structure or a network of structures and is compatible with general probabilistic frameworks of such assessment and decision-making programmes. This paper considers a preprocessing technique employed in an image-analysis-based structural health monitoring methodology to identify submarine pitting corrosion in the presence of variable luminosity, contrast and noise affecting the quality of images. Preprocessing the gray-level threshold of the various images is observed to bring about a significant improvement in damage detection as compared to an automatically computed gray-level threshold. The case-dependent adjustments of the threshold make it possible to obtain the best information from an existing image. The corresponding improvements are reported qualitatively in the present study.
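A minimal sketch of the kind of gray-level threshold adjustment discussed above, comparing an automatically computed (Otsu) threshold with a case-dependent manual offset; the file name and offset value are hypothetical.

```python
import numpy as np
from skimage import io, filters

# Hypothetical file name; the adjustment offset is case-dependent.
img = io.imread("pitting_sample.png", as_gray=True)

t_auto = filters.threshold_otsu(img)         # automatically computed threshold
t_adj = t_auto - 0.05                         # manual, case-dependent adjustment
mask_auto = img < t_auto                      # candidate pit (dark) regions
mask_adj = img < t_adj
print(f"auto={t_auto:.3f} adjusted={t_adj:.3f} "
      f"pit pixels: {mask_auto.sum()} vs {mask_adj.sum()}")
```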
Abstract:
Advancements in retinal imaging technologies have drastically improved the quality of eye care in the past couple of decades. Scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) are two examples of critical imaging modalities for the diagnosis of retinal pathologies. However, current-generation SLO and OCT systems have limitations in diagnostic capability due to the following factors: the use of bulky tabletop systems, monochromatic imaging, and resolution degradation due to ocular aberrations and diffraction.
Bulky tabletop SLO and OCT systems are incapable of imaging patients that are supine, under anesthesia, or otherwise unable to maintain the required posture and fixation. Monochromatic SLO and OCT imaging prevents the identification of various color-specific diagnostic markers visible with color fundus photography like those of neovascular age-related macular degeneration. Resolution degradation due to ocular aberrations and diffraction has prevented the imaging of photoreceptors close to the fovea without the use of adaptive optics (AO), which require bulky and expensive components that limit the potential for widespread clinical use.
In this dissertation, techniques for extending the diagnostic capability of SLO and OCT systems are developed. These techniques include design strategies for miniaturizing and combining SLO and OCT to permit multi-modal, lightweight handheld probes to extend high quality retinal imaging to pediatric eye care. In addition, a method for extending true color retinal imaging to SLO to enable high-contrast, depth-resolved, high-fidelity color fundus imaging is demonstrated using a supercontinuum light source. Finally, the development and combination of SLO with a super-resolution confocal microscopy technique known as optical photon reassignment (OPRA) is demonstrated to enable high-resolution imaging of retinal photoreceptors without the use of adaptive optics.
Abstract:
In security and surveillance applications, there is an increasing need to process image data efficiently and effectively, either at the source or in a large data network. Whilst the Field-Programmable Gate Array (FPGA) has been seen as a key technology for enabling this, the design process has been viewed as problematic in terms of the time and effort needed for implementation and verification. The work here proposes a different approach, using optimized FPGA-based soft-core processors that allow the user to exploit task- and data-level parallelism to achieve the quality of dedicated FPGA implementations whilst reducing design time. The paper also reports some preliminary progress on the design flow to program the structure. An implementation of a Histogram of Gradients algorithm is also reported, which shows that a performance of 328 fps can be achieved with this design approach, whilst avoiding the long design time, verification and debugging steps associated with conventional FPGA implementations.
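For reference, a minimal software sketch of a Histogram of Gradients (HOG) feature computation using scikit-image is shown below; the parameters are illustrative and unrelated to the reported FPGA design.

```python
from skimage.feature import hog
from skimage import data

# Compute HOG features on a sample grayscale image; parameter values are
# illustrative defaults, not those of the FPGA implementation.
image = data.camera()
features = hog(image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")
print(features.shape)
```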
Abstract:
Digital Image Processing is a rapidly evolving field with growing applications in Science and Engineering. It involves changing the nature of an image in order to either improve its pictorial information for human interpretation or render it more suitable for autonomous machine perception. One of the major areas of image processing for human vision applications is image enhancement. The principal goal of image enhancement is to improve the visual quality of an image, typically by taking advantage of the response of the human visual system. Image enhancement methods are usually carried out in the pixel domain. Transform-domain methods can often provide another way to interpret and understand image contents. A transform selected for this purpose should have low computational complexity. A sequency-ordered arrangement of unique MRT (Mapped Real Transform) coefficients gives rise to an integer-to-integer transform, named Sequency-based unique MRT (SMRT), suitable for image processing applications. The development of the SMRT from the UMRT (Unique MRT), forward and inverse SMRT algorithms, and the basis functions are introduced. A few properties of the SMRT are explored and its scope in lossless text compression is presented.
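To illustrate sequency ordering in an integer-to-integer transform, the sketch below sequency-orders a Walsh-Hadamard matrix; it is not the SMRT itself, only an analogous construction showing how rows can be rearranged by their number of sign changes.

```python
import numpy as np
from scipy.linalg import hadamard

def sequency_ordered_hadamard(n):
    """Walsh-Hadamard matrix reordered by sequency (sign changes per row).
    Illustrates sequency ordering only; this is not the SMRT."""
    H = hadamard(n)
    sign_changes = (np.diff(np.sign(H), axis=1) != 0).sum(axis=1)
    return H[np.argsort(sign_changes)]

x = np.array([3, 1, 4, 1, 5, 9, 2, 6])
W = sequency_ordered_hadamard(8)
y = W @ x                       # integer coefficients (scaling omitted)
print(y, (W.T @ y) // 8)        # inverse recovers x up to the 1/N factor
```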
Abstract:
BACKGROUND The presence of traumatic dental injuries and malocclusions can have a negative impact on the quality of life of young children and their parents, affecting their oral health and well-being. The aim of this study was to assess the impact of traumatic dental injuries and anterior malocclusion traits on the Oral Health-Related Quality of Life (OHRQoL) of children between 2 and 5 years old. METHODS Parents of 260 children answered the six domains of the Early Childhood Oral Health Impact Scale (ECOHIS) on their perception of the OHRQoL (outcome). Two calibrated dentists assessed the types of traumatic dental injuries (Kappa = 0.9) and the presence of anterior malocclusion traits (Kappa = 1.0). OHRQoL was measured using the ECOHIS. Poisson regression was used to associate the type of traumatic dental injury and the presence of anterior malocclusion traits with the outcome. RESULTS The presence of anterior malocclusion traits showed no negative impact on the overall OHRQoL mean score or on any domain. Only complicated traumatic dental injuries showed a negative impact on the symptoms (p = 0.005), psychological (p = 0.029), self-image/social interaction (p = 0.004) and family function (p = 0.018) domains and on the overall OHRQoL mean score (p = 0.002). The presence of complicated traumatic dental injuries showed an increased negative impact on the children's quality of life (RR = 1.89; 95% CI = 1.36, 2.63; p < 0.001). CONCLUSIONS Complicated traumatic dental injuries have a negative impact on the OHRQoL of preschool children and their parents, but anterior malocclusion traits do not.
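A minimal sketch of a Poisson regression producing rate ratios of the kind reported above, using statsmodels on simulated data; the variable names and values are hypothetical, and robust (sandwich) errors are assumed.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical simulated data: 'ecohis' is a total ECOHIS score and
# 'complicated_tdi' flags complicated traumatic dental injuries.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ecohis": rng.poisson(3, 260),
    "complicated_tdi": rng.binomial(1, 0.2, 260),
})
X = sm.add_constant(df[["complicated_tdi"]])
model = sm.GLM(df["ecohis"], X, family=sm.families.Poisson())
res = model.fit(cov_type="HC0")            # robust (sandwich) standard errors
print(np.exp(res.params))                   # rate ratios (RR)
print(np.exp(res.conf_int()))               # 95% CIs for the RRs
```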
Abstract:
Because of its vast extent, the Canadian North presents several logistical challenges for the profitable exploitation of its mineral resources. Remote Predictive Mapping (TéléCartographie Prédictive, TCP) aims to facilitate the localization of deposits by producing maps of geological potential. Elevation data are needed to generate these maps, but the data currently available north of the 60th parallel are not optimal, mainly because they are derived from contour lines with variable contour intervals and values rounded to the metre. At the same time, it is essential to know the vertical accuracy of elevation data in order to use them adequately, taking into account the constraints related to that accuracy. The project presented here addresses these two issues in order to improve the quality of elevation data and help refine the predictive mapping carried out by TCP in the Canadian North, for a study area located in the Northwest Territories. The first objective was to produce control points allowing a precise assessment of the vertical accuracy of elevation data. The second objective was to produce an improved elevation model for the study area. The thesis first presents a filtering method for the Global Land and Surface Altimetry Data (GLA14) of the ICESat (Ice, Cloud and land Elevation Satellite) mission. The filtering is based on a series of indicators computed from information available in the GLA14 data and from terrain conditions. These indicators make it possible to eliminate potentially contaminated elevation points. The points are filtered according to the quality of the computed attitude, signal saturation, instrument noise, atmospheric conditions, slope, and the number of echoes. The document then describes a method for producing improved Digital Surface Models (DSMs) by stereo-radargrammetry (SRG) with Radarsat-2 (RS-2). The first part of the adopted methodology consists of stereo-restitution of DSMs from pairs of RS-2 images, without control points. The accuracy of the preliminary DSMs thus produced is computed from the control points obtained by filtering the GLA14 data and analysed as a function of the combinations of incidence angles used for the stereo-restitution. Selections of preliminary DSMs are then assembled to produce 5 DSMs, each covering the entire study area. These DSMs are analysed to identify the optimal selection for the area of interest. The indicators selected for the filtering method were validated as effective and complementary, except for the indicator based on the signal-to-noise ratio, which was redundant with the indicator based on gain. Otherwise, each indicator filtered out points exclusively. The filtering method reduced the root-mean-square error on elevation by 19% when compared with the Canadian Digital Elevation Data (DNEC). Despite a 69% rejection rate after filtering, the initial density of the GLA14 data preserved a homogeneous spatial distribution. From the 136 preliminary DSMs analysed, no combination of incidence angles of the acquired RS-2 images could be identified as ideal for SRG, owing to the large variability in vertical accuracy.
However, the analysis indicated that the images should ideally be acquired at temperatures below 0°C to minimise radiometric disparities between scenes. The results also confirmed that slope is the main factor influencing the accuracy of DSMs produced by SRG. The best vertical accuracy, 4 m, was achieved by assembling configurations with the same look direction. However, configurations with opposite look directions, in addition to yielding an accuracy of the same order (5 m), reduced the number of images used by 30% relative to the number initially acquired. Consequently, the use of opposite-look images could increase the efficiency of SRG projects by shortening the acquisition period. The elevation data produced could in turn help improve TCP results, increase the performance of the Canadian mining industry and, ultimately, improve the quality of life of the citizens of Northern Canada.
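A minimal sketch of indicator-based point filtering of the kind described above; the column names, flags and thresholds are hypothetical, not the actual GLA14 fields and cut-offs used in the thesis.

```python
import pandas as pd

# Hypothetical file, column names and thresholds for illustration only.
pts = pd.read_csv("gla14_points.csv")

keep = (
    (pts["attitude_quality"] == 0)        # good attitude solution
    & (pts["saturation_flag"] == 0)       # no signal saturation
    & (pts["gain"] < 200)                 # instrument-noise proxy
    & (pts["cloud_flag"] == 0)            # clear atmospheric conditions
    & (pts["slope_deg"] < 10)             # gentle terrain
    & (pts["n_echoes"] == 1)              # single return
)
filtered = pts[keep]
print(f"rejected {100 * (1 - keep.mean()):.1f}% of points")
```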
Abstract:
Finding rare events in multidimensional data is an important detection problem with applications in many fields, such as risk estimation in the insurance industry, finance, flood prediction, medical diagnosis, quality assurance, security, and safety in transportation. The occurrence of such anomalies is so infrequent that there is usually not enough training data to learn an accurate statistical model of the anomaly class. In some cases, such events may never have been observed, so the only information available is a set of normal samples and an assumed pairwise similarity function. Such a metric may only be known up to a certain number of unspecified parameters, which would either need to be learned from training data or fixed by a domain expert. Sometimes the anomalous condition may be formulated algebraically, such as a measure exceeding a predefined threshold, but nuisance variables may complicate the estimation of such a measure. Change detection methods used in time series analysis are not easily extendable to the multidimensional case, where discontinuities are not localized to a single point. On the other hand, in higher dimensions data exhibits more complex interdependencies, and there is redundancy that can be exploited to adaptively model the normal data. In the first part of this dissertation, we review the theoretical framework for anomaly detection in images and previous anomaly detection work done in the context of crack detection and detection of anomalous components in railway tracks. In the second part, we propose new anomaly detection algorithms. The fact that curvilinear discontinuities in images are sparse with respect to the frame of shearlets allows us to pose this anomaly detection problem as basis pursuit optimization. We therefore pose the problem of detecting curvilinear anomalies in noisy textured images as a blind source separation problem under sparsity constraints, and propose an iterative shrinkage algorithm to solve it. Taking advantage of the parallel nature of this algorithm, we describe how the method can be accelerated using graphics processing units (GPUs). Then, we propose a new method for finding defective components on railway tracks using cameras mounted on a train. We describe how to extract features and use a combination of classifiers to solve this problem. We then scale anomaly detection to bigger datasets with complex interdependencies. We show that the anomaly detection problem fits naturally into the multitask learning framework: the first task consists of learning a compact representation of the good samples, while the second task consists of learning the anomaly detector. Using deep convolutional neural networks, we show that it is possible to train a deep model with a limited number of anomalous examples. In sequential detection problems, the presence of time-variant nuisance parameters affects the detection performance. In the last part of this dissertation, we present a method for adaptively estimating the threshold of sequential detectors using Extreme Value Theory within a Bayesian framework. Finally, conclusions on the results obtained are provided, followed by a discussion of possible future work.
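A minimal sketch of the iterative shrinkage idea behind the sparsity-constrained separation, here applied to a toy random dictionary rather than a shearlet frame; variable names and parameters are illustrative.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1.
    Generic sketch of the sparse-recovery step; the dissertation uses a
    shearlet frame rather than this toy dictionary."""
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
x_true = np.zeros(256)
x_true[[10, 50, 200]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(64)
print(np.flatnonzero(np.abs(ista(A, y)) > 0.5))   # recovered support
```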
Abstract:
Nanotechnology has revolutionised humanity's capability in building microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and more complex from the chemical perspective, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite the effectiveness of this tool, it can only provide 2-dimensional projection (shadow) images of the 3D structure, leaving the 3-dimensional information hidden, which can lead to incomplete or erroneous characterisation. One very promising inspection method is Electron Tomography (ET), which is rapidly becoming an important tool to explore the 3D nano-world. ET provides (sub-)nanometre resolution in all three dimensions of the sample under investigation. However, the fidelity of the ET tomogram achieved by current ET reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality assessment investigation was conducted to provide a quantitative analysis of the quality of the main established ET reconstruction algorithms and to study the influence of the experimental conditions on the quality of the reconstructed ET tomogram. Regularly shaped nanoparticles were used as a ground truth for this study. It is concluded that the fidelity of the post-reconstruction quantitative analysis and segmentation is limited mainly by the fidelity of the reconstructed ET tomogram. This motivates the development of an improved tomographic reconstruction process. In this thesis, a novel ET method is proposed, named dictionary learning electron tomography (DLET). DLET is based on the recent mathematical theory of compressed sensing (CS), which exploits the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) in an adaptive way and reconstructs the tomogram simultaneously from highly undersampled tilt series. In this method, the sparsity is applied to overlapping image patches, favouring local structures. Furthermore, the dictionary is adapted to the specific tomogram instance, thereby favouring better sparsity and consequently higher-quality reconstructions. The reconstruction algorithm is based on an alternating procedure that learns the sparsifying dictionary and employs it to remove artifacts and noise in one step, and then restores the tomogram data in the other step. Simulation and real ET experiments on several morphologies are performed with a variety of setups. The reconstruction results validate its efficiency in both noiseless and noisy cases and show that it yields an improved reconstruction quality with fast convergence. The proposed method enables the recovery of high-fidelity information without the need to worry about which sparsifying transform to select or whether the images used strictly follow the pre-conditions of a certain transform (e.g. strictly piecewise constant for Total Variation minimisation). This also avoids artifacts that can be introduced by specific sparsifying transforms (e.g. the staircase artifacts that may result when using Total Variation minimisation).
Moreover, this thesis shows how reliable elementally sensitive tomography using EELS is possible with the aid of both the appropriate use of dual electron energy loss spectroscopy (DualEELS) and the DLET compressed sensing algorithm, making the best use of the limited data volume and signal-to-noise inherent in core-loss electron energy loss spectroscopy (EELS) from nanoparticles of an industrially important material. Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
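A minimal sketch of the patch-based dictionary-learning idea that underlies DLET, using scikit-learn on a toy 2D slice; the coupling with tomographic data consistency and the exact algorithmic details of DLET are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                               reconstruct_from_patches_2d)

# Toy example: denoise a synthetic slice with a dictionary learned from its
# own overlapping patches, favouring local structures.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[20:44, 20:44] = 1.0
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

patches = extract_patches_2d(noisy, (8, 8))
flat = patches.reshape(len(patches), -1)
mean = flat.mean(axis=1, keepdims=True)

dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=4, random_state=0)
codes = dico.fit_transform(flat - mean)               # sparse codes over learned atoms
denoised_patches = (codes @ dico.components_ + mean).reshape(patches.shape)
denoised = reconstruct_from_patches_2d(denoised_patches, noisy.shape)
print(float(np.abs(denoised - clean).mean()))         # mean absolute error
```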
Abstract:
The size of online image datasets is constantly increasing. For an image dataset with millions of images, image retrieval becomes a seemingly intractable problem for exhaustive similarity-search algorithms. Hashing methods, which encode high-dimensional descriptors into compact binary strings, have become very popular because of their high efficiency in search and storage. In the first part, we propose a multimodal retrieval method based on latent feature models. The procedure consists of a nonparametric Bayesian framework for learning underlying, semantically meaningful abstract features in a multimodal dataset, a probabilistic retrieval model that allows cross-modal queries, and an extension model for relevance feedback. In the second part, we focus on supervised hashing with kernels. We describe a flexible hashing procedure that treats binary codes and pairwise semantic similarity as latent and observed variables, respectively, in a probabilistic model based on Gaussian processes for binary classification. We present a scalable inference algorithm with the sparse pseudo-input Gaussian process (SPGP) model and distributed computing. In the last part, we define an incremental hashing strategy for dynamic databases where new images are added frequently. The method is based on a two-stage classification framework using binary and multi-class SVMs. The proposed method also enforces balance in the binary codes through an imbalance penalty to obtain higher-quality binary codes. We learn hash functions with an efficient algorithm in which the NP-hard problem of finding optimal binary codes is solved via cyclic coordinate descent and SVMs are trained in a parallelized, incremental manner. For modifications such as adding images from an unseen class, we propose an incremental procedure for effective and efficient updates to the previous hash functions. Experiments on three large-scale image datasets demonstrate that the incremental strategy is capable of efficiently updating hash functions to the same retrieval performance as hashing from scratch.
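For context, a minimal random-hyperplane hashing and Hamming-search baseline is sketched below; it is not the Gaussian-process or incremental method proposed in the thesis, only an illustration of encoding descriptors into compact binary codes and searching them.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_lsh(dim, n_bits=64):
    """Random hyperplanes as a simple, data-independent hashing baseline."""
    return rng.standard_normal((dim, n_bits))

def encode(X, W):
    """Encode descriptors into compact binary codes by thresholding projections."""
    return (X @ W > 0).astype(np.uint8)

def hamming_search(query_code, db_codes, k=5):
    """Return indices of the k database codes closest in Hamming distance."""
    d = (db_codes != query_code).sum(axis=1)
    return np.argsort(d)[:k]

X = rng.standard_normal((10000, 128))     # toy descriptor database
W = fit_lsh(128)
codes = encode(X, W)
print(hamming_search(encode(X[:1], W)[0], codes))
```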
Abstract:
Image (video) retrieval is the problem of retrieving images (videos) similar to a query. Images (videos) are represented in an input (feature) space, and similar images (videos) are obtained by finding nearest neighbours in that representation space. Numerous input representations, both real-valued and binary, have been proposed for conducting faster retrieval. In this thesis, we present techniques that obtain improved input representations for retrieval in both supervised and unsupervised settings for images and videos. Supervised retrieval is the well-known problem of retrieving images of the same class as the query. We address the practical aspects of achieving faster retrieval with binary codes as input representations for the supervised setting in the first part, where binary codes are used as addresses into hash tables. In practice, using binary codes as addresses does not guarantee fast retrieval, as similar images are not mapped to the same binary code (address). We address this problem by presenting an efficient supervised hashing (binary encoding) method that aims to explicitly map all images of the same class to, ideally, a unique binary code. We refer to the binary codes of the images as 'Semantic Binary Codes' and to the unique code for all same-class images as the 'Class Binary Code'. We also propose a new class-based Hamming metric that dramatically reduces retrieval times for larger databases, where the Hamming distance is computed only to the class binary codes. We also propose a deep semantic binary code model, by replacing the output layer of a popular convolutional neural network (AlexNet) with the class binary codes, and show that the hashing functions learned in this way outperform the state of the art while providing fast retrieval times. In the second part, we address the problem of supervised retrieval by taking into account the relationship between classes. For a given query image, we want to retrieve images that preserve the relative order, i.e., we want to retrieve all same-class images first, then images of related classes, before images of different classes. We learn such relationship-aware binary codes by minimizing the discrepancy between the inner products of the binary codes and the similarities between the classes. We calculate the similarity between classes using output embedding vectors, which are vector representations of classes. Our method deviates from other supervised binary encoding schemes, as it is the first to use output embeddings for learning hashing functions. We also introduce new performance metrics that take into account the related-class retrieval results and show significant gains over the state of the art. High-dimensional descriptors like Fisher Vectors or the Vector of Locally Aggregated Descriptors have been shown to improve the performance of many computer vision applications, including retrieval. In the third part, we discuss an unsupervised technique for compressing high-dimensional vectors into high-dimensional binary codes to reduce storage complexity. In this approach, we deviate from adopting traditional hyperplane hashing functions and instead learn hyperspherical hashing functions. The proposed method overcomes the computational challenges of directly applying the spherical hashing algorithm, which is intractable for compressing high-dimensional vectors.
A practical hierarchical model that utilizes divide-and-conquer techniques, using the Random Select and Adjust (RSA) procedure, to compress such high-dimensional vectors is presented. We show that our proposed high-dimensional binary codes outperform the binary codes obtained using traditional hyperplane methods at higher compression ratios. In the last part of the thesis, we propose a retrieval-based solution to the zero-shot event classification problem, a setting where no training videos are available for the event. To do this, we learn a generic set of concept detectors and represent both videos and query events in the concept space. We then compute the similarity between the query event and the videos in the concept space, and videos similar to the query event are classified as belonging to the event. We show that we significantly boost performance using concept features from other modalities.
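A minimal sketch of the class-based Hamming metric idea described above, in which distances are computed only to the class binary codes and database images are then returned in order of class distance; the codes and labels here are random stand-ins, not learned semantic binary codes.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bits, n_classes = 32, 10
class_codes = rng.integers(0, 2, (n_classes, n_bits), dtype=np.uint8)  # stand-ins
db_labels = rng.integers(0, n_classes, 5000)                           # database labels

def retrieve(query_code, k=100):
    """Rank database items by the Hamming distance of the query to each
    class binary code (one distance per class, not per item)."""
    d = (class_codes != query_code).sum(axis=1)
    order = np.argsort(d)                                # closest classes first
    ranked = np.concatenate([np.flatnonzero(db_labels == c) for c in order])
    return ranked[:k]

query = rng.integers(0, 2, n_bits, dtype=np.uint8)
print(retrieve(query)[:10])
```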
Abstract:
Buses are considered a slow, low-comfort and low-reliability transport system, hence their negative and poor image. In the framework of the 3iBS project (2012), several examples of innovative and/or effective solutions regarding the Level of Service (LoS) were analysed, aiming to provide operators, practitioners and policy makers with a set of Good Practice Guidelines to strengthen the competitiveness of the bus in the urban environment. The identification of the key indicators regarding vehicles, infrastructure and operation was made possible through the analysis of a set of case studies, among which Barcelona (Spain), Cagliari (Italy), London (United Kingdom), Paris and Nantes (France). A cross-comparison between the case studies was carried out to contrast the level of achievement of the different criteria considered. The information provided on regulatory, financial and technical issues allows the identification of a number of specific factors influencing the implementation of a high-quality transport scheme, and sets the basis for the elaboration of a set of Guidelines for the implementation of an intelligent, innovative and integrated bus system, including the main barriers to be tackled.
Abstract:
The assessment of water quality has changed markedly worldwide over recent years, especially in Europe due to the implementation of the Water Framework Directive. Fish were considered a key element in this context, and several fish-based multi-metric indices have been proposed. In this study, we propose a multi-metric index, the Estuarine Fish Assessment Index (EFAI), developed for Portuguese estuaries and designed for the overall assessment of transitional waters, which can also be applied at the water body level within an estuary. The EFAI integrates seven metrics: species richness, percentage of marine migrants, number of species and abundance of estuarine resident species, number of species and abundance of piscivorous species, status of diadromous species, status of introduced species, and status of disturbance-sensitive species. Fish sampling surveys were conducted in 2006, 2009 and 2010, using beam trawl, in 13 estuarine systems along the Portuguese coast. Most of the metrics presented high variability among the transitional systems surveyed. According to the EFAI values, Portuguese estuaries presented a "Good" water quality status (except the Douro in one particular year). The assessments in different years were generally concordant, with a few exceptions. The relationship between the EFAI and the Anthropogenic Pressure Index (API) was not significant, but a negative and significant correlation was registered between the EFAI and the expert-judgement pressure index, at both the estuary and water body level. The ordination analysis performed to evaluate similarities among North-East Atlantic Geographical Intercalibration Group (NEAGIG) fish-based indices revealed four main groups: the French index, since it is substantially different from all the other indices (it uses only four metrics based on densities); the indices from Ireland, the United Kingdom and Spain (Asturias and Cantabria); the Dutch and German indices; and the indices of Belgium, Portugal and Spain (Basque Country). The need for detailed studies, including comparative approaches, on several aspects of these assessment tools, especially regarding their response to anthropogenic pressures, was stressed. (C) 2011 Elsevier Ltd. All rights reserved.
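A purely hypothetical sketch of how a multi-metric index such as the EFAI can aggregate per-metric scores into an overall quality class; the actual EFAI scoring rules and class boundaries are not given in the abstract and are not reproduced here.

```python
import numpy as np

# Hypothetical scores: each of the seven metrics rated 1 (bad) to 5 (high).
metric_scores = {
    "species_richness": 4,
    "marine_migrants_pct": 5,
    "estuarine_residents": 4,
    "piscivorous_species": 3,
    "diadromous_status": 4,
    "introduced_status": 5,
    "sensitive_status": 4,
}
mean_score = np.mean(list(metric_scores.values()))
# Illustrative class boundaries, not the EFAI's.
classes = [(4.5, "High"), (3.5, "Good"), (2.5, "Moderate"), (1.5, "Poor")]
status = next((name for cut, name in classes if mean_score >= cut), "Bad")
print(mean_score, status)
```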