965 results for "images processing"


Relevance:

30.00%

Publisher:

Abstract:

ACM Computing Classification System (1998): I.7, I.7.5.

Relevance:

30.00%

Publisher:

Abstract:

Carte du Ciel (French for "map of the sky") was part of an extensive international astronomical project of the 19th century whose goal was to map the entire visible sky. The results of this vast effort were collected in the form of astrographic plates and their paper reproductions, called astrographic maps, which are widely distributed among observatories and astronomical institutes around the world. Our goal is to design methods and algorithms to automatically extract data from digitized Carte du Ciel astrographic maps. This paper examines the image processing and pattern recognition techniques that can be adopted for the automatic extraction of astronomical data from stars' triple expositions, which can aid the detection of variable stars in Carte du Ciel maps.
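
As a rough illustration of the kind of pattern recognition involved, the sketch below groups bright connected components into candidate triple expositions. The thresholding approach, the `max_spacing` parameter, and all function names are assumptions made for illustration, not the paper's actual pipeline.

```python
# Sketch: locate candidate star triplets in a digitized astrographic map.
# Assumes each star appears as three nearby expositions of similar brightness.
import numpy as np
from scipy import ndimage

def find_triplets(image, threshold, max_spacing=10.0):
    """Return centroids and index groups that may be triple expositions."""
    labels, n = ndimage.label(image > threshold)   # candidate expositions
    centroids = np.array(ndimage.center_of_mass(image, labels, range(1, n + 1)))
    found = set()
    for i, c in enumerate(centroids):
        d = np.linalg.norm(centroids - c, axis=1)  # distances to all centroids
        if len(d) < 3:
            break
        near = np.argsort(d)[1:3]                  # the two nearest neighbours
        if np.all(d[near] < max_spacing):
            found.add(tuple(sorted((i, *near))))   # dedupe symmetric triplets
    return centroids, sorted(found)
```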

Relevance:

30.00%

Publisher:

Abstract:

This research pursued the conceptualization, implementation, and verification of a system that enhances digital information displayed on an LCD panel for users with visual refractive errors. The target user group comprises individuals with moderate to severe visual aberrations for which conventional means of compensation, such as glasses or contact lenses, do not improve vision. This research is based on a priori knowledge of the user's visual aberration, as measured by a wavefront analyzer. With this information it is possible to generate images that, when displayed to the user, counteract his or her visual aberration. The method described in this dissertation advances techniques for providing such compensation by integrating spatial information in the image, eliminating some of the shortcomings inherent in using display devices such as monitors or LCD panels. Additionally, physiological considerations are discussed and integrated into the compensation method. To give a realistic sense of the performance of the methods described, they were tested by mathematical simulation in software, with a single-lens, high-resolution CCD camera that models an aberrated eye, and finally with human subjects having various forms of visual aberration. Experiments were conducted on these systems and the collected data were evaluated statistically. The experimental results revealed that the pre-compensation method produced a statistically significant improvement in vision for all of the systems. Although significant, the improvement was not as large as expected in the human subject tests. Further analysis suggests that even under the controlled conditions employed for testing with human subjects, the characterization of the eye may be changing. This would require real-time monitoring of relevant variables (e.g. pupil diameter) and continuous adjustment of the pre-compensation process to yield maximum viewing enhancement.
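
A minimal sketch of the core idea follows: pre-distorting an image with the inverse of the eye's point spread function (PSF) derived from wavefront data. This is a generic, regularized inverse filter shown for illustration only; the dissertation's method additionally integrates spatial information and physiological constraints.

```python
# Sketch: frequency-domain pre-compensation. `psf` is assumed to come from
# wavefront-analyzer measurements of the user's eye; `k` is an illustrative
# regularization constant limiting noise amplification near zeros of H.
import numpy as np

def precompensate(image, psf, k=0.01):
    """Pre-distort `image` so that blurring by `psf` approximates the original."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + k)          # regularized inverse filter
    out = np.real(np.fft.ifft2(np.fft.fft2(image) * W))
    # Displays cannot show values outside [0, 1]; this clipping is one of the
    # shortcomings the dissertation's spatial-information approach targets.
    return np.clip(out, 0.0, 1.0)
```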

Relevance:

30.00%

Publisher:

Abstract:

This thesis reports on a novel method to build a 3-D model of the above-water portion of icebergs using surface imaging. The goal is to work towards the automation of iceberg surveys, allowing an Autonomous Surface Craft (ASC) to acquire shape and size information. After data and image collection, the core software algorithm comprises three parts: occluding contour finding, volume intersection, and parameter estimation. A software module is designed that could run on the ASC to perform automatic, fast processing of above-water surface image data to determine iceberg shape and size. The resolution of the method is calculated using data from the iceberg database of the Program of Energy Research and Development (PERD). The method was investigated using data from field trials conducted during the summer of 2014, in which 8 icebergs were surveyed over 3 expeditions, and the results were analyzed to determine iceberg characteristics. Limitations of the method, including its accuracy, are addressed. A surface imaging system and a LIDAR system were developed in 2015 to profile the above-water portion of icebergs.
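
The volume intersection step is a shape-from-silhouette computation; the sketch below carves a voxel grid against each occluding contour. The `project` callback and all names are illustrative assumptions, not the thesis implementation.

```python
# Sketch: volume intersection (shape-from-silhouette) on a voxel grid.
import numpy as np

def carve(voxels_xyz, silhouettes, project):
    """Keep voxels whose projection falls inside every occluding contour.

    voxels_xyz  -- (N, 3) candidate voxel centres
    silhouettes -- list of binary masks, one per view
    project     -- project(points, view_index) -> (N, 2) pixel coordinates
    """
    keep = np.ones(len(voxels_xyz), dtype=bool)
    for v, mask in enumerate(silhouettes):
        uv = np.round(project(voxels_xyz, v)).astype(int)
        h, w = mask.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(len(voxels_xyz), dtype=bool)
        hit[inside] = mask[uv[inside, 1], uv[inside, 0]] > 0
        keep &= hit          # outside any silhouette -> cannot be iceberg
    return voxels_xyz[keep]
```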

Relevance:

30.00%

Publisher:

Abstract:

Background: Light microscopic analysis of diatom frustules is widely used in both basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, large numbers of frustules usually need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they have not become widespread in diatom analysis. While methodological reports exist for a wide variety of image segmentation, diatom identification and feature extraction methods, no single implementation has combined a subset of these into a readily applicable workflow accessible to diatomists.

Results: The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation through object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and support for high-throughput analyses with minimal manual intervention.

Conclusions: Tested with several diatom datasets from different sources and of various compositions, SHERPA proved able to successfully analyze large numbers of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods with selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines to support quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; and minimizing the need for, while still enabling, manual quality control and corrections. Although primarily developed for analyzing images of diatom valves originating from automated microscopy, SHERPA can also be useful for other object detection, segmentation and outline-based identification problems.
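
SHERPA's first distinctive feature, running several segmentation methods and keeping the best result per object, might look roughly like the sketch below. The three thresholding methods and the outline-length quality score are placeholder assumptions; SHERPA's actual scoring ranks outlines against its template library.

```python
# Sketch: try multiple segmentation methods, keep the best outline per image.
import numpy as np
from skimage import filters, measure

def best_outline(gray):
    candidates = []
    for method in (filters.threshold_otsu, filters.threshold_li,
                   filters.threshold_yen):
        binary = gray < method(gray)      # valves assumed darker than background
        contours = measure.find_contours(binary.astype(float), 0.5)
        if not contours:
            continue
        outline = max(contours, key=len)  # largest contour in this segmentation
        score = len(outline)              # placeholder quality score
        candidates.append((score, method.__name__, outline))
    return max(candidates)[2] if candidates else None
```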

Relevance:

30.00%

Publisher:

Abstract:

Medical imaging technologies are experiencing growth in both usage and image resolution, namely in diagnostic systems that require large sets of images, like CT or MRI. Furthermore, legal restrictions impose that these scans be archived for several years. These facts have increased storage costs in medical image databases and institutions, so a demand for more efficient compression tools, for both archiving and communication, is arising. Currently the DICOM standard, which makes recommendations for medical communications and imaging compression, recommends lossless encoders such as JPEG, RLE, JPEG-LS and JPEG2000. However, none of these encoders includes inter-slice prediction in its algorithm. This dissertation presents research on medical image compression using the MRP encoder, one of the most efficient lossless image compression algorithms. Several processing techniques are proposed to adapt the input medical images to the encoder's characteristics. Two of these techniques, changing the alignment of slices for compression and a pixel-wise difference predictor, increased the compression efficiency of MRP by up to 27.9%. Inter-slice prediction support was also added to MRP, using uni- and bi-directional techniques, and the pixel-wise difference predictor was integrated into the algorithm. Overall, the compression efficiency of MRP was improved by 46.1%. These techniques allow compression ratio savings of 57.1% compared to DICOM encoders and 33.2% compared to HEVC RExt Random Access, making MRP the most efficient of the encoders under study.
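
The pixel-wise difference idea exploits the strong correlation between consecutive slices: encoding residuals instead of raw slices concentrates the symbol distribution the entropy coder sees. Below is a minimal, losslessly invertible sketch; feeding these residuals to MRP is this sketch's assumption, not the dissertation's exact pipeline.

```python
# Sketch: pixel-wise inter-slice differencing for a (slices, H, W) volume.
import numpy as np

def to_residuals(volume):
    """First slice as-is, then per-pixel differences between adjacent slices."""
    residuals = np.empty_like(volume)
    residuals[0] = volume[0]
    residuals[1:] = volume[1:] - volume[:-1]
    return residuals

def from_residuals(residuals):
    """Exact inverse: cumulative sum along the slice axis (lossless)."""
    return np.cumsum(residuals, axis=0, dtype=residuals.dtype)
```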

Relevance:

30.00%

Publisher:

Abstract:

This study examined the properties of ERP effects elicited by unattended (spatially uncued) objects using a short-lag repetition-priming paradigm. Same or different common objects were presented in yoked prime-probe trials, either as intact images or as slightly scrambled (half-split) versions. Behaviourally, only objects in a familiar (intact) view showed priming. An enhanced negativity was observed at parietal and occipito-parietal electrode sites within the time window of the posterior N250 after the repetition of intact, but not split, images. An additional post-hoc N2pc analysis of the prime display indicated that this result could not be attributed to differences in salience between familiar intact and split views. These results demonstrate that spatially unattended objects undergo visual processing, but only if shown in familiar views, indicating a role for holistic processing of objects that is independent of attention.

Relevance:

30.00%

Publisher:

Abstract:

Coupled map lattices (CML) can describe many relaxation and optimization algorithms currently used in image processing. We recently introduced the "plastic-CML" as a paradigm to extract (segment) objects in an image. Here, the image is treated as a set of forces applied to a metal sheet, which is allowed to undergo plastic deformation parallel to the applied forces. In this paper we present an analysis of our plastic-CML in one and two dimensions, deriving the nature and stability of its stationary solutions. We also detail how to use the CML in image processing and how to set the system parameters, and present examples of it at work. We conclude that the plastic-CML is able to segment images with large amounts of noise and a large dynamic range of pixel values, and is suitable for very-large-scale integration (VLSI) implementation.
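
For readers unfamiliar with CMLs: each lattice site applies a local map and is diffusively coupled to its neighbours. The sketch below is the standard diffusive CML update, not the plastic-CML itself, whose plasticity rule (letting the "sheet" yield at object boundaries) is defined in the paper.

```python
# Sketch: one synchronous step of a 1-D diffusively coupled map lattice,
# x_{t+1}(i) = (1 - eps) f(x_t(i)) + (eps/2) [f(x_t(i-1)) + f(x_t(i+1))].
import numpy as np

def cml_step(x, f, eps=0.3):
    fx = f(x)
    return (1.0 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))

# Example: pure diffusion (identity map) relaxes a noisy signal.
x = np.random.rand(256)
for _ in range(100):
    x = cml_step(x, lambda v: v)
```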

Relevance:

30.00%

Publisher:

Abstract:

Finding rare events in multidimensional data is an important detection problem with applications in many fields, such as risk estimation in the insurance industry, finance, flood prediction, medical diagnosis, quality assurance, security, and safety in transportation. The occurrence of such anomalies is so infrequent that there is usually not enough training data to learn an accurate statistical model of the anomaly class. In some cases, such events may never have been observed, so the only available information is a set of normal samples and an assumed pairwise similarity function. Such a metric may only be known up to a certain number of unspecified parameters, which would either need to be learned from training data or fixed by a domain expert. Sometimes the anomalous condition may be formulated algebraically, such as a measure exceeding a predefined threshold, but nuisance variables may complicate the estimation of such a measure. Change detection methods used in time series analysis are not easily extendable to the multidimensional case, where discontinuities are not localized to a single point. On the other hand, in higher dimensions data exhibits more complex interdependencies, and there is redundancy that can be exploited to adaptively model the normal data.

In the first part of this dissertation, we review the theoretical framework for anomaly detection in images and previous anomaly detection work done in the context of crack detection and detection of anomalous components in railway tracks.

In the second part, we propose new anomaly detection algorithms. The fact that curvilinear discontinuities in images are sparse with respect to the frame of shearlets allows us to cast this anomaly detection problem as basis pursuit optimization. We therefore formulate the detection of curvilinear anomalies in noisy textured images as a blind source separation problem under sparsity constraints, and propose an iterative shrinkage algorithm to solve it. Taking advantage of the parallel nature of this algorithm, we describe how it can be accelerated using graphics processing units (GPUs). We then propose a new method for finding defective components on railway tracks using cameras mounted on a train, describing how to extract features and use a combination of classifiers to solve this problem. Next, we scale anomaly detection to bigger datasets with complex interdependencies. We show that the anomaly detection problem fits naturally in the multitask learning framework: the first task learns a compact representation of the good samples, while the second learns the anomaly detector. Using deep convolutional neural networks, we show that it is possible to train a deep model with a limited number of anomalous examples.

In sequential detection problems, the presence of time-variant nuisance parameters affects detection performance. In the last part of this dissertation, we present a method for adaptively estimating the threshold of sequential detectors using Extreme Value Theory within a Bayesian framework. Finally, conclusions on the results obtained are provided, followed by a discussion of possible future work.
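
The iterative shrinkage family the dissertation builds on looks, in its simplest (ISTA) form, like the sketch below. Here `A` is an abstract synthesis operator standing in for a shearlet frame; the dense-matrix form and step-size choice are illustrative assumptions.

```python
# Sketch: ISTA for min_c 0.5*||A c - y||^2 + lam*||c||_1.
import numpy as np

def soft(x, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ c - y)         # gradient of the data-fit term
        c = soft(c - grad / L, lam / L)  # gradient step, then shrinkage
    return c
```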

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we demonstrate a digital signal processing (DSP) algorithm for improving the spatial resolution of images captured by CMOS cameras. The basic approach is to reconstruct a high-resolution (HR) image from a sequence of shifted low-resolution (LR) images. The aliasing relationship between the Fourier transforms of discrete and continuous images in the frequency domain is used to map the LR images to an HR image, and the method of projection onto convex sets (POCS) is applied to find the best estimate of pixel matching from the LR images to the reconstructed HR image. Computer simulations and preliminary experimental results have shown that the algorithm works effectively for post-capture processing in CMOS cameras. It can also be applied to HR digital image reconstruction wherever the shift information of the LR image sequence is known.
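
A simplified spatial-domain illustration of the POCS idea follows: the HR estimate is repeatedly projected onto the constraint set defined by each shifted LR observation. The paper itself works through the Fourier aliasing relation; integer shifts, the decimation model and all names here are assumptions of this sketch.

```python
# Sketch: one POCS sweep over a set of shifted LR observations.
import numpy as np

def pocs_sweep(hr, lr_images, shifts, factor):
    """hr: HR estimate; shifts: known integer (dy, dx) per LR image."""
    for lr, (dy, dx) in zip(lr_images, shifts):
        sim = np.roll(hr, (dy, dx), axis=(0, 1))[::factor, ::factor]
        err = lr - sim                    # violation of this LR constraint
        up = np.zeros_like(hr)
        up[::factor, ::factor] = err      # back-project the residual
        hr = hr + np.roll(up, (-dy, -dx), axis=(0, 1))
    return hr
```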

Relevance:

30.00%

Publisher:

Abstract:

Humans have a remarkable ability to extract information from visual data. Through a learning process that starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can describe a scene with reasonable precision, naming its main components. Usually this is done by extracting low-level features such as edges, shapes and textures, and associating them with high-level meanings; in this way, a semantic description of the scene is produced. An example is the human capacity to recognize and describe other people's physical and behavioural characteristics, or biometrics. Soft biometrics likewise represent inherent characteristics of the human body and behaviour, but do not allow unique identification of a person. The computer vision field aims to develop methods capable of performing visual interpretation with performance similar to humans. This thesis proposes computer vision methods that extract high-level information from images in the form of soft biometrics. The problem is approached with both unsupervised and supervised learning methods. The first approach seeks to group images by automatically learning feature extraction, combining convolution techniques, evolutionary computing and clustering; the images employed in this approach contain faces and people. The second approach employs convolutional neural networks, which can operate on raw images, learning both the feature extraction and classification processes. Here, images are classified according to gender and clothing, the latter divided into upper and lower parts of the human body. The first approach, tested with different image datasets, obtained an accuracy of approximately 80% for faces versus non-faces and 70% for people versus non-people. The second, tested using images and videos, obtained an accuracy of about 70% for gender, 80% for upper-body clothing and 90% for lower-body clothing. The results of these case studies show that the proposed methods are promising, enabling automatic high-level image annotation. This opens possibilities for applications in diverse areas such as content-based image and video search and automatic video surveillance, reducing the human effort required for manual annotation and monitoring.
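
A small convolutional classifier of the kind used in the second approach (raw image in, soft-biometric label out) might be sketched as below. The layer sizes, input resolution and class count are illustrative assumptions; the thesis architecture is not reproduced here.

```python
# Sketch: a compact CNN for a soft-biometric label such as gender.
import torch
import torch.nn as nn

class SoftBioNet(nn.Module):
    def __init__(self, n_classes=2):                    # e.g. gender: 2 classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)  # assumes 64x64 input

    def forward(self, x):                               # x: (batch, 3, 64, 64)
        return self.head(self.features(x).flatten(1))
```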

Relevance:

30.00%

Publisher:

Abstract:

Master's dissertation, Natural Language Processing and Language Industries, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2014.

Relevance:

30.00%

Publisher:

Abstract:

Of all the important small and medium-sized blood vessels of the human body, the retinal blood vessels are the only deep capillaries that can be directly observed by a non-traumatic method. Retinal vascular morphology, such as vessel diameter, shape and distribution, is influenced by systemic diseases (Martinez-Perez, Hughes, Thom and Parker 2007). Digital fundus photography and analysis of retinal vascular morphology can therefore be used to relate morphological changes to diabetes for the diagnosis of disease. We aim to develop a retinal image processing system that can analyze retinal images and provide helpful information for diagnosis. © 2013 Springer-Verlag.
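
One plausible front end for such a system is vessel enhancement on the green channel of the fundus image, where vessel contrast is highest. The sketch below uses skimage's Frangi vesselness filter; the filter choice and threshold rule are assumptions for illustration, not the system described above.

```python
# Sketch: crude vessel segmentation from an RGB fundus photograph.
import numpy as np
from skimage.filters import frangi

def vessel_mask(fundus_rgb):
    green = fundus_rgb[:, :, 1].astype(float) / 255.0
    # skimage's frangi detects dark ridges by default (black_ridges=True),
    # which matches vessels appearing dark on the green channel.
    vesselness = frangi(green)
    return vesselness > vesselness.mean() + 2.0 * vesselness.std()
```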