Abstract:
The University of Cambridge is unusual in that its Department of Engineering is a single department which covers virtually all branches of engineering under one roof. In their first two years of study, our undergraduates study the full breadth of engineering topics and then have to choose a specialization area for the final two years. Here we describe part of a course, given towards the end of their second year, which is designed to entice these students to specialize in signal processing and information engineering topics for years 3 and 4. The course is based around a photo editor and an image search application, and it requires no prior knowledge of the z-transform or of 2-D signal processing. It does assume some knowledge of 1-D convolution and basic Fourier methods, and some prior exposure to Matlab. The subject of this paper, the photo editor, is written in standard Matlab m-files which are fully visible to the students and help them to see how specific algorithms are implemented in detail. © 2011 IEEE.
Abstract:
A novel method is defined for modelling the statistics of 2-D photographic images, useful in image restoration. The new method is based on the Dual Tree Complex Wavelet Transform (DT-CWT), but a phase rotation is applied to the coefficients to create complex coefficients whose phase is shift-invariant at multiscale edge and ridge features. This is in addition to the magnitude shift invariance already achieved by the DT-CWT. The increased correlation between coefficients adjacent in space and scale provides an improved mechanism for signal estimation. © 2006 IEEE.
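To make the magnitude shift-invariance that the method builds on concrete, the following is a minimal sketch (not the paper's code), assuming the open-source dtcwt Python package: it compares DT-CWT coefficient magnitudes of an image and a one-pixel-shifted copy. The paper's additional phase-rotation step is only indicated by a comment.

# A minimal sketch, assuming the open-source `dtcwt` Python package (not the
# paper's implementation). It illustrates the magnitude shift-invariance of
# the DT-CWT that the paper builds on; the paper's additional phase-rotation
# step is not reproduced here.
import numpy as np
import dtcwt

def highpass_subbands(image, nlevels=4):
    """Forward DT-CWT and return the complex highpass subbands per level."""
    transform = dtcwt.Transform2d()
    pyramid = transform.forward(image.astype(float), nlevels=nlevels)
    return pyramid.highpasses  # tuple of (H, W, 6) complex arrays

rng = np.random.default_rng(0)
img = rng.standard_normal((128, 128))
shifted = np.roll(img, shift=1, axis=1)          # one-pixel horizontal shift

for lvl, (a, b) in enumerate(zip(highpass_subbands(img),
                                 highpass_subbands(shifted)), start=1):
    # Magnitudes change little under small shifts, unlike the coefficients
    # of a critically sampled real wavelet transform.
    rel_change = np.abs(np.abs(a) - np.abs(b)).mean() / np.abs(a).mean()
    print(f"level {lvl}: mean relative magnitude change = {rel_change:.3f}")

# The paper's contribution is a further per-subband phase rotation so that
# the *phase* is also shift-invariant at edge/ridge features; that rotation
# depends on subband orientation and is not shown in this sketch.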
Abstract:
The use of mixture-model techniques for motion estimation and image sequence segmentation was discussed. Issues such as the modeling of occlusion and uncovering, determining the relative depth of the objects in a scene, and estimating the number of objects in a scene were also investigated. The segmentation algorithm was found to be computationally demanding, but the computational requirements were reduced once the motion parameters and the segmentation of the frame were initialized. The method provided a stable description, in which the addition and removal of objects from the description corresponded to the entry and exit of objects from the scene.
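As a generic illustration of the mixture-model idea (not the paper's algorithm, which also models occlusion, depth ordering, and the number of objects), the sketch below assigns pixels to motion layers by fitting a Gaussian mixture to a synthetic per-pixel motion field with scikit-learn.

# A minimal, generic sketch of mixture-model motion segmentation: pixels are
# assigned to motion "layers" by fitting a Gaussian mixture to per-pixel
# motion vectors. The synthetic flow field below stands in for an estimated
# motion field; it is not the paper's method.
import numpy as np
from sklearn.mixture import GaussianMixture

h, w = 60, 80
flow = np.zeros((h, w, 2))
flow[:, :w // 2] = (1.0, 0.0)     # left half: object moving right
flow[:, w // 2:] = (0.0, -0.5)    # right half: background moving up
flow += 0.1 * np.random.default_rng(0).standard_normal(flow.shape)

# Two motion models; in practice the number of objects is itself estimated,
# as the abstract above discusses.
gmm = GaussianMixture(n_components=2, random_state=0)
labels = gmm.fit_predict(flow.reshape(-1, 2)).reshape(h, w)

print("segment sizes:", np.bincount(labels.ravel()))
print("mean motion per segment:", gmm.means_)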
Abstract:
The Particle Image Velocimetry (PIV) technique is an image processing tool for obtaining instantaneous velocity measurements during an experiment. The basic principle of PIV analysis is to divide the image into small patches and to track the locations of the individual patches across consecutive images with the help of cross-correlation functions. This paper focuses on the application of PIV analysis to dynamic centrifuge tests on small-scale tunnels in loose, dry sand. Digital images were captured during the application of the earthquake loading on tunnel models using a fast digital camera capable of taking digital images at 1000 frames per second at 1 Megapixel resolution. This paper discusses the effectiveness of the existing methods used to conduct PIV analyses on dynamic centrifuge tests. Results indicate that PIV analysis in dynamic testing requires special measures in order to obtain reasonable deformation data. Nevertheless, it was possible to observe interesting mechanisms in the behaviour of the tunnels from the PIV analyses. © 2010 Taylor & Francis Group, London.
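The cross-correlation step at the heart of the patch matching can be sketched as follows (an illustration only, not the PIV software used in the centrifuge tests): the displacement of a patch between consecutive frames is taken as the location of the peak of the cross-correlation between the patch and the corresponding window in the next frame.

# A minimal sketch of the cross-correlation step used in PIV patch matching.
import numpy as np
from scipy.signal import correlate2d

def patch_displacement(frame_a, frame_b, y, x, size=32):
    """Estimate the (dy, dx) displacement of the patch at (y, x) between frames."""
    patch = frame_a[y:y + size, x:x + size]
    window = frame_b[y:y + size, x:x + size]
    # Subtract means so the correlation peak reflects pattern match, not brightness.
    corr = correlate2d(window - window.mean(), patch - patch.mean(), mode="same")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = (size // 2, size // 2)
    return peak[0] - centre[0], peak[1] - centre[1]

rng = np.random.default_rng(1)
frame_a = rng.random((128, 128))
frame_b = np.roll(frame_a, shift=(2, 3), axis=(0, 1))  # known 2 px down, 3 px right
print(patch_displacement(frame_a, frame_b, 48, 48))     # expected displacement (2, 3)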
Abstract:
Reconstruction of an image from a set of projections has been adapted to generate multidimensional nuclear magnetic resonance (NMR) spectra, which have discrete features that are relatively sparsely distributed in space. For this reason, a reliable reconstruction can be made from a small number of projections. This new concept is called Projection Reconstruction NMR (PR-NMR). In this paper, multidimensional NMR spectra are reconstructed by Reversible Jump Markov Chain Monte Carlo (RJMCMC). This statistical method generates samples under the assumption that each peak is described by a small number of parameters: the position of the peak centre, the peak amplitude, and the peak width. In order to determine the number and shape of the peaks, RJMCMC uses several moves: birth, death, merge, split, and invariant updating. The reconstruction schemes are tested on a set of six projections derived from the three-dimensional 700 MHz HNCO spectrum of the protein HasA.
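A heavily simplified, one-dimensional toy sketch of the reversible-jump idea is given below; it uses only birth and death moves and an ad-hoc acceptance rule, so it stands in for, rather than reproduces, the PR-NMR sampler (whose priors, proposal densities, and merge/split and invariant-updating moves are omitted here).

# A toy 1-D birth/death sampler over Gaussian peaks, parameterized as in the
# abstract (centre, amplitude, width). This is an illustrative sketch only,
# not the paper's PR-NMR sampler.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
# Synthetic "spectrum": two Gaussian peaks plus noise.
truth = (1.0 * np.exp(-0.5 * ((x - 0.3) / 0.02) ** 2)
         + 0.6 * np.exp(-0.5 * ((x - 0.7) / 0.03) ** 2))
data = truth + 0.05 * rng.standard_normal(x.size)

def model(peaks):
    """Each peak is (centre, amplitude, width)."""
    y = np.zeros_like(x)
    for c, a, w in peaks:
        y += a * np.exp(-0.5 * ((x - c) / w) ** 2)
    return y

def log_like(peaks, sigma=0.05):
    r = data - model(peaks)
    return -0.5 * np.sum((r / sigma) ** 2)

peaks = []                       # start with an empty model
for it in range(5000):
    proposal = [list(p) for p in peaks]
    if not peaks or rng.random() < 0.5:
        # Birth: propose a new peak with parameters drawn from broad ranges.
        proposal.append([rng.random(), rng.uniform(0.1, 1.5), rng.uniform(0.01, 0.05)])
    else:
        # Death: remove a randomly chosen peak.
        proposal.pop(rng.integers(len(proposal)))
    # Toy acceptance rule: likelihood ratio with a mild penalty per peak,
    # standing in for the prior and Jacobian terms of true RJMCMC.
    log_alpha = log_like(proposal) - log_like(peaks) - 3.0 * (len(proposal) - len(peaks))
    if np.log(rng.random()) < log_alpha:
        peaks = proposal

print("number of peaks in final state:", len(peaks))
print("centres:", sorted(round(p[0], 2) for p in peaks))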
Abstract:
Several research studies have recently been initiated to investigate the use of construction site images for automated infrastructure inspection, progress monitoring, etc. In these studies, it is always necessary to extract material regions (concrete or steel) from the images. Existing methods make use of a material's characteristic color/texture ranges for material information retrieval, but they do not sufficiently discuss how to find appropriate color/texture ranges. As a result, users have to define appropriate ranges by themselves, which is difficult for those who do not have a sufficient image processing background. This paper presents a novel method of identifying concrete material regions using machine learning techniques. Under the method, each construction site image is first divided into regions through image segmentation. Then, the visual features of each region are calculated and classified with a pre-trained classifier. The output value determines whether or not the region is composed of concrete. The method was implemented in C++ and tested on hundreds of construction site images. The results were compared with those of manual classification to indicate the method's validity.
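The pipeline described above, segmentation followed by per-region features and a pre-trained classifier, might be sketched in Python as follows (the paper's implementation was in C++; the segmentation routine, features, and training data here are illustrative assumptions, not the paper's choices).

# A minimal Python sketch of the segment / extract features / classify
# pipeline. The feature set and classifier are placeholders; `train_X`,
# `train_y`, and `site_image` are assumed to exist and are hypothetical.
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2gray
from sklearn.svm import SVC

def region_features(image, mask):
    """Mean colour plus a crude texture measure (grey-level std) for one region."""
    pixels = image[mask]
    grey = rgb2gray(image)[mask]
    return np.concatenate([pixels.mean(axis=0), [grey.std()]])

def classify_regions(image, classifier, n_segments=200):
    """Return the segmentation map and per-region labels (1 = concrete, 0 = not)."""
    segments = slic(image, n_segments=n_segments, start_label=0)
    labels = {}
    for seg_id in np.unique(segments):
        feats = region_features(image, segments == seg_id)
        labels[seg_id] = int(classifier.predict(feats[None, :])[0])
    return segments, labels

# Hypothetical pre-trained classifier; in practice train_X / train_y would
# come from manually labelled construction-site regions.
# classifier = SVC().fit(train_X, train_y)
# segments, labels = classify_regions(site_image, classifier)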
Abstract:
Calibration of a camera system is a necessary step in any stereo metric process. It correlates all cameras to a common coordinate system by measuring the intrinsic and extrinsic parameters of each camera. Currently, manual calibration of a camera system is the only way to achieve calibration in civil engineering operations that require stereo metric processes (photogrammetry, videogrammetry, vision-based asset tracking, etc.). This type of calibration, however, is time-consuming and labor-intensive. Furthermore, in civil engineering operations, camera systems are exposed to open, busy sites. In these conditions, the position of presumably stationary cameras can easily be changed by external factors such as wind or vibrations, or by an unintentional push/touch from personnel on site. In such cases manual calibration must be repeated. In order to address this issue, several self-calibration algorithms have been proposed. These algorithms use projective geometry, the absolute conic, the Kruppa equations, and variations of these to produce processes that achieve calibration. However, most of these methods do not consider all the constraints of a camera system, such as camera intrinsic constraints, scene constraints, camera motion, or varying camera intrinsic properties. This paper presents a novel method that takes all of these constraints into consideration to auto-calibrate cameras using an image alignment algorithm originally meant for vision-based tracking. In this method, image frames taken from the cameras are used to calculate the fundamental matrix, which gives the epipolar constraints. The intrinsic and extrinsic properties of the cameras are acquired from this calculation. Test results are presented in this paper with recommendations for further improvement.
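The fundamental-matrix step mentioned above can be sketched with OpenCV as follows (this is not the paper's image-alignment-based method; it only shows how matched points between two frames yield the fundamental matrix and, when the intrinsics are already known, the relative pose). Recovering the intrinsics themselves, i.e. full self-calibration, is more involved and not shown.

# A minimal OpenCV sketch of estimating epipolar geometry from two frames.
import numpy as np
import cv2

def relative_pose(img1, img2, K):
    """Estimate rotation R and translation direction t between two frames,
    assuming the intrinsic matrix K is already known."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # The fundamental matrix encodes the epipolar constraint x2^T F x1 = 0.
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

    # With known intrinsics, E = K^T F K, and the pose can be recovered from E.
    E = K.T @ F @ K
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t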