972 results for Spectral resolution
Abstract:
Near-infrared spectroscopy is an underutilised technique for the study of minerals. The technique can determine water content, hydroxyl groups and transition metals. In this paper we show the application of NIR spectroscopy to the study of selected minerals. The structure and spectral properties of two Cu-tellurite minerals, graemite and teineite, are compared with those of the bismuth-containing tellurite mineral smirnite by the application of NIR and IR spectroscopy. The positions of the Cu2+ bands and their splitting in the electronic spectra of the tellurites are consistent with distorted octahedral geometry. The spectral pattern of smirnite resembles that of graemite, and the observed band at 10855 cm-1 with a weak shoulder at 7920 cm-1 is assigned to the Cu2+ ion. Any transition metal impurities may be identified by their bands in this spectral region. Three prominent bands observed in the region 7200-6500 cm-1 are overtones of water, while the weak bands observed near 6200 cm-1 in the tellurites may be attributed to hydrogen bonding between (TeO3)2- and H2O. The observation of a number of bands centred at around 7200 cm-1 confirms molecular water in the tellurite minerals. A number of overlapping bands at low wavenumbers, 4500-4000 cm-1, result from combination modes of the (TeO3)2- ion. The appearance of the most intense peak at 5200 cm-1 with a pair of weak bands near 6000 cm-1 is a common feature in all the spectra and is related to combinations of the OH vibrations of water molecules and the bending vibration ν2 (δ H2O). The bending vibration δ H2O observed in the IR spectra shows a single band for smirnite at 1610 cm-1. The resolution of this band into a number of components is evidence for non-equivalent types of molecular water in graemite and teineite. (TeO3)2- stretching vibrations are characterized by three main absorptions at 1080, 780 and 695 cm-1.
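The band positions above are quoted in wavenumbers (cm-1). As a quick illustrative aid (not part of the paper), the standard conversion to wavelength is λ(nm) = 10^7 / ν(cm-1); the values below are the band positions quoted in the abstract.

```python
# Convert NIR band positions from wavenumber (cm^-1) to wavelength (nm)
# using lambda_nm = 1e7 / nu_cm1. Band values are taken from the abstract.
def wavenumber_to_nm(nu_cm1):
    """Wavelength in nm for a band at nu_cm1 (cm^-1)."""
    return 1e7 / nu_cm1

bands = {"Cu2+ main": 10855, "Cu2+ shoulder": 7920, "water overtone": 7200}
for name, nu in bands.items():
    print(f"{name}: {nu} cm^-1 -> {wavenumber_to_nm(nu):.0f} nm")
```

For example, the Cu2+ band at 10855 cm-1 corresponds to roughly 921 nm, placing it in the near-infrared as stated.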
Abstract:
The extraction of road features from remotely sensed imagery has been a topic of great interest within the photogrammetry and remote sensing communities for over three decades. The majority of the early work focused only on linear feature detection approaches, with restrictive assumptions on image resolution and road appearance. The wide availability of high-resolution digital aerial images now makes it possible to extract sub-road features, e.g. road pavement markings. In this paper, we focus on the automatic extraction of road lane markings, which are required by various lane-based vehicle applications such as autonomous vehicle navigation and lane departure warning. The proposed approach consists of three phases: i) road centerline extraction from a low-resolution image, ii) road surface detection in the original image, and iii) pavement marking extraction on the generated road surface. The proposed method was tested on an aerial imagery dataset of the Bruce Highway, Queensland, and the results demonstrate the efficiency of our approach.
Abstract:
With the increasing resolution of remote sensing images, road networks appear as continuous, homogeneous regions with a certain width rather than as the traditional thin lines. Therefore, road network extraction from large-scale images amounts to reliable road surface detection rather than road line extraction. In this paper, a novel automatic road network detection approach based on the combination of homogram segmentation and mathematical morphology is proposed, comprising three main steps: (i) the image is classified using homogram segmentation to roughly identify the road network regions; (ii) morphological opening and closing are employed to fill tiny holes and filter out small road branches; and (iii) the extracted road surface is further thinned by a thinning approach, pruned by a proposed method, and finally simplified with the Douglas-Peucker algorithm. Results from QuickBird images and aerial photos demonstrate the correctness and efficiency of the proposed process.
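The final simplification step uses the Douglas-Peucker algorithm, which is standard and easy to sketch: recursively keep any point whose distance from the chord between the segment endpoints exceeds a tolerance. A minimal pure-Python version (illustrative only; the paper's thinning and pruning steps are not shown):

```python
import math

def _point_line_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def douglas_peucker(points, epsilon):
    """Simplify a polyline, keeping points farther than epsilon from the chord."""
    if len(points) < 3:
        return list(points)
    # Find the interior point with the maximum distance from the chord.
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _point_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax > epsilon:
        # Recurse on the two halves and merge (shared point appears once).
        left = douglas_peucker(points[:idx + 1], epsilon)
        right = douglas_peucker(points[idx:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

# A nearly straight road centreline collapses to its endpoints:
line = [(0, 0), (1, 0.05), (2, -0.04), (3, 0.02), (4, 0)]
print(douglas_peucker(line, 0.1))  # -> [(0, 0), (4, 0)]
```

A genuine corner (a point far from the chord) survives simplification, which is what preserves road junction geometry.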
Abstract:
Surveillance systems such as object tracking and abandoned object detection systems typically rely on a single modality of colour video for their input. These systems work well in controlled conditions but often fail when low lighting, shadowing, smoke, dust or unstable backgrounds are present, or when the objects of interest are a similar colour to the background. Thermal images are not affected by lighting changes or shadowing, and are not overtly affected by smoke, dust or unstable backgrounds. However, thermal images lack colour information, which makes distinguishing between different people or objects of interest within the same scene difficult.

By using modalities from both the visible and thermal infrared spectra, we are able to obtain more information from a scene and overcome the problems associated with using either modality individually. We evaluate four approaches for fusing visual and thermal images for use in a person tracking system (two early fusion methods, one mid fusion method and one late fusion method), in order to determine the most appropriate method for fusing multiple modalities. We also evaluate two of these approaches for use in abandoned object detection, and propose an abandoned object detection routine that utilises multiple modalities. To aid in the tracking and fusion of the modalities we propose a modified condensation filter that can dynamically change the particle count and features used according to the needs of the system.

We compare tracking and abandoned object detection performance for the proposed fusion schemes and for the visual and thermal domains on their own. Testing is conducted using the OTCBVS database to evaluate object tracking, and data captured in-house to evaluate the abandoned object detection. Our results show that significant improvement can be achieved, and that a middle fusion scheme is most effective.
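Early (pixel-level) fusion, one of the four schemes compared above, can be sketched as a weighted combination of registered visual and thermal frames. This is a generic weighted-average sketch under the assumption of pre-registered, normalised frames, not the thesis's exact scheme:

```python
import numpy as np

def early_fuse(visual_gray, thermal, alpha=0.5):
    """Pixel-level (early) fusion of registered visual and thermal frames.

    Both inputs are float arrays in [0, 1] of the same shape; alpha weights
    the visual modality. Illustrative sketch only.
    """
    assert visual_gray.shape == thermal.shape
    return alpha * visual_gray + (1.0 - alpha) * thermal

rng = np.random.default_rng(0)
vis = rng.random((4, 4))   # stand-in for a greyscale visual frame
thm = rng.random((4, 4))   # stand-in for a registered thermal frame
fused = early_fuse(vis, thm, alpha=0.6)
print(fused.shape)  # (4, 4)
```

Mid and late fusion differ in where the combination happens: mid fusion merges extracted features, and late fusion merges per-modality tracking decisions.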
Abstract:
Accurate road lane information is crucial for advanced vehicle navigation and safety applications. With the increasing availability of very high resolution (VHR) imagery of astonishing quality from digital airborne sources, automatically extracting road details from aerial images would greatly facilitate data acquisition and significantly reduce the cost of data collection and updates. In this paper, we propose an effective approach to detect road lanes from aerial images using image analysis procedures. The algorithm starts by constructing a Digital Surface Model (DSM) and true orthophotos from the stereo images. Next, a maximum likelihood clustering algorithm is used to separate roads from other ground objects. After the detection of the road surface, the road traffic and lane lines are further detected using texture enhancement and morphological operations. Finally, the generated road network is evaluated to test the performance of the proposed approach, using datasets provided by the Queensland Department of Main Roads. The experimental results prove the effectiveness of our approach.
Abstract:
The highly variable flagellin-encoding flaA gene has long been used for genotyping Campylobacter jejuni and Campylobacter coli. High-resolution melting (HRM) analysis is emerging as an efficient and robust method for discriminating DNA sequence variants. The objective of this study was to apply HRM analysis to flaA-based genotyping. The initial aim was to identify a suitable flaA fragment. It was found that the PCR primers commonly used to amplify the flaA short variable repeat (SVR) yielded a mixed PCR product unsuitable for HRM analysis. However, a PCR primer set composed of the upstream primer used to amplify the fragment used for flaA restriction fragment length polymorphism (RFLP) analysis and the downstream primer used for flaA SVR amplification generated a very pure PCR product, and this primer set was used for the remainder of the study. Eighty-seven C. jejuni and 15 C. coli isolates were analyzed by flaA HRM and also by partial flaA sequencing. There were 47 flaA sequence variants, and all were resolved by HRM analysis. The isolates used had previously also been genotyped using single-nucleotide polymorphisms (SNPs), binary markers, CRISPR HRM, and flaA RFLP. flaA HRM analysis provided resolving power multiplicative to the SNPs, binary markers, and CRISPR HRM, and largely concordant with flaA RFLP. It was concluded that HRM analysis is a promising approach to genotyping based on highly variable genes.
Abstract:
This paper presents an evaluation of airborne sensors for use in vegetation management in powerline corridors. Three integral stages in the management process are addressed: the detection of trees, relative positioning with respect to the nearest powerline, and vegetation height estimation. Image data, including multi-spectral and high-resolution imagery, are analyzed along with LiDAR data captured from fixed-wing aircraft. Ground truth data are then used to establish the accuracy and reliability of each sensor, thus providing a quantitative comparison of sensor options. Tree detection was achieved through crown delineation using a Pulse-Coupled Neural Network (PCNN) and morphological reconstruction applied to multi-spectral imagery. Testing showed a detection rate of 96%, while the accuracy in correctly segmenting groups of trees and single trees was 75%. Relative positioning using LiDAR achieved an RMSE of 1.4 m and 2.1 m for cross-track distance and along-track position respectively, while direct georeferencing achieved an RMSE of 3.1 m in both instances. The estimation of pole and tree heights measured with LiDAR had an RMSE of 0.4 m and 0.9 m respectively, while stereo matching achieved 1.5 m and 2.9 m. Overall, only a small number of poles were missed, with detection rates of 98% and 95% for LiDAR and stereo matching respectively.
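The positioning accuracies above are reported as RMSE. For reference, RMSE is the square root of the mean squared difference between estimates and ground truth; a minimal sketch with hypothetical cross-track errors (the values below are illustrative, not the paper's data):

```python
import math

def rmse(estimates, truths):
    """Root-mean-square error between estimated and ground-truth values."""
    n = len(estimates)
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(estimates, truths)) / n)

# Hypothetical cross-track distances in metres (estimate vs. ground truth):
est = [10.2, 8.9, 12.4, 7.7]
ref = [9.0, 9.5, 11.0, 8.5]
print(round(rmse(est, ref), 2))  # 1.05
```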
Abstract:
Natural iowaite, a light-green magnesium–ferric oxychloride mineral originating from Australia, has been characterized by EPR, optical, IR, and Raman spectroscopy. The optical spectrum exhibits a number of electronic bands due to both Fe(III) and Mn(II) ions in iowaite. From the EPR studies, the g values are calculated for Fe(III), and the g and A values for Mn(II). The EPR and optical absorption studies confirm that Fe(III) and Mn(II) are in distorted octahedral geometry. The bands that appear in both the NIR and Raman spectra are due to overtones and combinations of water and carbonate vibrations. Thus EPR, optical, and Raman spectroscopy have proven most useful for the study of the chemistry of natural iowaite and of chemical changes in the mineral.
Abstract:
This paper first presents an extended ambiguity resolution model that deals with an ill-posed problem and with constraints among the estimated parameters. In the extended model, a regularization criterion is used instead of traditional least squares in order to better estimate the float ambiguities. The existing models can be derived from this general model. Secondly, the paper examines the existing ambiguity searching methods from four aspects: exclusion of nuisance integer candidates based on the available integer constraints; integer rounding; integer bootstrapping; and integer least squares estimation. Finally, the paper systematically addresses the similarities and differences between the generalized TCAR and decorrelation methods from both theoretical and practical aspects.
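Of the searching methods listed, integer bootstrapping sits between plain rounding and full integer least squares: each float ambiguity is rounded only after being conditioned on the residuals of the ambiguities already fixed. A minimal sketch, assuming a unit lower-triangular factor L from the decomposition Q = L D L^T of the float-ambiguity covariance (a generic Teunissen-style illustration, not the paper's extended model):

```python
import numpy as np

def integer_bootstrap(a_float, L):
    """Sequential integer bootstrapping sketch.

    a_float : float ambiguity vector.
    L       : unit lower-triangular factor of the covariance Q = L D L^T.
    Each component is conditioned on the rounding residuals of the
    components already fixed, then rounded.
    """
    n = len(a_float)
    z = np.zeros(n)
    cond = np.array(a_float, dtype=float)
    for i in range(n):
        for j in range(i):
            # subtract the correlated effect of already-fixed ambiguities
            cond[i] -= L[i, j] * (cond[j] - z[j])
        z[i] = round(cond[i])
    return z.astype(int)

# With correlation, bootstrapping can fix a different (better-conditioned)
# integer than componentwise rounding would:
a = np.array([1.4, 2.6])
L = np.array([[1.0, 0.0], [0.9, 1.0]])
print(integer_bootstrap(a, L))   # conditioned rounding
```

Here plain rounding gives (1, 3), while conditioning the second ambiguity on the first's residual shifts it to 2.24, which rounds to 2.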
Abstract:
Identifying an individual from surveillance video is a difficult, time-consuming and labour-intensive process. The proposed system aims to streamline this process by filtering out unwanted scenes and enhancing an individual's face through super-resolution. An automatic face recognition system is then used to identify the subject or to present the human operator with likely matches from a database. A person tracker is used to speed up the subject detection and super-resolution processes by tracking moving subjects and cropping a region of interest around the subject's face, reducing the number and size of the image frames to be super-resolved. In this paper, experiments demonstrate how the optical flow super-resolution method used improves surveillance imagery both for visual inspection and for automatic face recognition on Eigenface and Elastic Bunch Graph Matching systems. The optical flow based method has also been benchmarked against the "hallucination" algorithm, interpolation methods and the original low-resolution images. Results show that both super-resolution algorithms improved recognition rates significantly. Although the hallucination method resulted in slightly higher recognition rates, the optical flow method produced fewer artifacts and more visually correct images suitable for human consumption.
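The core idea of multi-frame super-resolution is that several low-resolution frames, each offset by a sub-pixel amount, jointly sample a finer grid. A toy shift-and-add sketch illustrates this (it is not the optical-flow method of the paper, and assumes the sub-pixel shifts are already known):

```python
import numpy as np

def shift_and_add_sr(frames, shifts, scale=2):
    """Toy multi-frame super-resolution by shift-and-add.

    Each low-res frame is placed onto a grid `scale` times finer at its
    known sub-pixel (dy, dx) shift; overlapping samples are averaged.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = (np.arange(h) * scale + int(round(dy * scale))) % (h * scale)
        xs = (np.arange(w) * scale + int(round(dx * scale))) % (w * scale)
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    cnt[cnt == 0] = 1    # leave unobserved cells at zero
    return acc / cnt

# Four quarter-pixel-shifted 4x4 frames fill an 8x8 grid completely:
rng = np.random.default_rng(2)
lows = [rng.random((4, 4)) for _ in range(4)]
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
hi_res = shift_and_add_sr(lows, shifts, scale=2)
print(hi_res.shape)  # (8, 8)
```

The optical-flow approach in the paper estimates those shifts densely per pixel instead of assuming global known offsets.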
Abstract:
There is a need in industry for a commodity polyethylene film with controllable degradation properties that will degrade in an environmentally neutral way, for applications such as shopping bags and packaging film. Additives such as starch have been shown to accelerate the degradation of plastic films; however, control of degradation is required so that the film retains its mechanical properties during storage and use, and then degrades when no longer required. By the addition of a photocatalyst it is hoped that the polymer film will break down on exposure to sunlight. Furthermore, it is desired that the polymer film will degrade in the dark after a short initial exposure to sunlight. Research has been undertaken into the photo- and thermo-oxidative degradation processes of 25 µm thick LLDPE (linear low density polyethylene) film containing titania from different manufacturers. Films were aged in a suntest or in an oven at 50 °C, and the formation of oxidation products was followed using IR spectroscopy. Degussa P25, Kronos 1002, and various organically modified and doped titanias of the Sachtleben Hombitan and Huntsman Tioxide types incorporated into LLDPE films were assessed for photoactivity. Degussa P25 was found to be the most photoactive under UVA and UVC exposure. Surface modification of titania was found to reduce photoactivity. Crystal phase is thought to be among the most important factors when assessing the photoactivity of titania as a photocatalyst for degradation. Pre-irradiation with UVA or UVC for 24 hours of film containing 3% Degussa P25 titania, prior to aging in an oven, resulted in embrittlement in ca. 200 days. The multivariate data analysis technique PCA (principal component analysis) was used as an exploratory tool to investigate the IR spectral data. Oxidation products formed in similar relative concentrations across all samples, confirming that titania was catalysing the oxidation of the LLDPE film without changing the oxidation pathway.
PCA was also employed to compare rates of degradation in different films. PCA enabled the discovery of water vapour trapped inside cavities formed by oxidation around titania particles. Imaging ATR/FTIR spectroscopy with high lateral resolution was used in a novel experiment to examine the heterogeneous nature of the oxidation of a model polymer compound caused by the presence of titania particles. A model polymer containing Degussa P25 titania was solvent-cast onto the internal reflection element of the imaging ATR/FTIR instrument, and its oxidation under UVC was examined over time. Sensitisation of 5 µm domains by titania resulted in areas of relatively high oxidation product concentration. The suitability of transmission IR with a synchrotron light source for the study of polymer film oxidation was assessed at the Australian Synchrotron in Melbourne, Australia. Challenges such as interference fringes and poor signal-to-noise ratio need to be addressed before this can become a routine technique.
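PCA on IR spectral data, as used above, amounts to an SVD of the mean-centred spectra matrix; scores then summarise each spectrum in a few components. A minimal sketch on synthetic data (illustrative only; the thesis's spectra and preprocessing are not reproduced):

```python
import numpy as np

def pca_scores(X, n_components=2):
    """PCA of a spectra matrix X (rows = spectra, columns = wavenumber
    channels) via SVD of the mean-centred data.
    Returns (scores, loadings)."""
    Xc = X - X.mean(axis=0)                       # mean-centre each channel
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    loadings = Vt[:n_components]
    return scores, loadings

rng = np.random.default_rng(1)
X = rng.random((6, 20))          # 6 synthetic "IR spectra", 20 channels
scores, loadings = pca_scores(X)
print(scores.shape, loadings.shape)  # (6, 2) (2, 20)
```

Plotting the first two score columns against ageing time is the usual way such exploratory comparisons of degradation rates are made.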
Abstract:
A voglite mineral sample from the Volrite Canyon #1 mine, Frey Point, White Canyon Mine District, San Juan County, Utah, USA is used in the present study. An EPR study on the powdered sample confirms the presence of Mn(II) and Cu(II). The optical absorption spectral results are attributed to Cu(II) in a distorted octahedral environment. The NIR results indicate the presence of water fundamentals.
Abstract:
Currently the Bachelor of Design is the generic degree offered to the four disciplines of Architecture, Landscape Architecture, Industrial Design, and Interior Design within the School of Design at the Queensland University of Technology. Regardless of discipline, Digital Communication is a core unit taken by the 600 first-year students entering the Bachelor of Design degree. Within the design disciplines, the communication of the designer's intentions is achieved primarily through the use of graphic images, with written information considered supportive or secondary. As such, Digital Communication attempts to educate learners in the fundamentals of this graphic design communication using a generic digital or software tool. Past iterations of the unit have not acknowledged the subtle differences in design communication between the design disciplines involved, and have used a single generic software tool. Following a review of the unit in 2008, it was decided that a single generic software tool was no longer entirely sufficient. This decision was based on the recognition that discipline-specific digital tools were increasingly emerging, and on an expressed student desire, and apparent aptitude, to learn these discipline-specific tools. As a result, the unit was reconstructed in 2009 to offer both discipline-specific and generic software instruction, as elected by the student. This paper, apart from offering the general context and pedagogy of the existing and restructured units, will more importantly offer research data that validates the changes made to the unit. Most significant of this new data are the results of surveys that compare actual student aptitude with the desire to learn discipline-specific tools. This is done through an examination of student self-efficacy in problem resolution and technological prowess, both generally and specifically within the unit.
More traditional means of validation are also presented, including the results of the generic university-wide Learning Experience Survey of the unit, as well as a comparison between the assessment results of the restructured unit and those of the previous year.
Abstract:
Robust image hashing seeks to transform a given input image into a shorter hashed version using a key-dependent non-invertible transform. These image hashes can be used for watermarking, image integrity authentication, or image indexing for fast retrieval. This paper introduces a new method of generating image hashes based on extracting higher order spectral features from the Radon projection of an input image. The feature extraction process is non-invertible and non-linear, and different hashes can be produced from the same image through the use of random permutations of the input. We show that the transform is robust to typical image transformations such as JPEG compression, noise, scaling, rotation, smoothing and cropping. We evaluate our system using a verification-style framework based on calculating false-match and false-non-match likelihoods using the publicly available Uncompressed Colour Image Database (UCID) of 1320 images. We also compare our results to Swaminathan's Fourier-Mellin based hashing method, showing at least 1% EER improvement under noise, scaling and sharpening.
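The ingredients of such a scheme are projection-based features plus a key-dependent scrambling. A toy stand-in (not the paper's higher-order-spectra method): use row and column sums, which are the Radon projections at 0 and 90 degrees, normalise them, and derive hash bits from keyed random pairings of projection bins.

```python
import numpy as np

def simple_projection_hash(image, key, n_bits=32):
    """Keyed image-hash sketch: Radon projections at 0 and 90 degrees
    (column and row sums), normalised, then compared pairwise under a
    key-dependent permutation to produce bits. Toy illustration only."""
    img = image.astype(float)
    feat = np.concatenate([img.sum(axis=0), img.sum(axis=1)])
    feat = (feat - feat.mean()) / (feat.std() + 1e-12)  # scale/offset invariance
    rng = np.random.default_rng(key)                     # key-dependent permutation
    idx = rng.permutation(len(feat))
    a, b = idx[:n_bits], idx[n_bits:2 * n_bits]
    return (feat[a] > feat[b]).astype(np.uint8)          # ordinal comparisons -> bits

rng = np.random.default_rng(0)
img = rng.random((64, 64))
h1 = simple_projection_hash(img, key=42)
h2 = simple_projection_hash(img + rng.normal(0, 0.01, img.shape), key=42)
print((h1 != h2).mean())  # only a small fraction of bits flip under mild noise
```

The comparisons are non-invertible (only orderings survive), and changing the key changes which bins are compared, which is what lets different hashes be issued for the same image.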
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed for use by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image, according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
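The subband decomposition underlying all of this can be illustrated with one level of the simplest wavelet, the 2-D Haar transform, which splits an image into approximation (LL) and detail (LH, HL, HH) subbands. A minimal sketch (the thesis uses a tailored wavelet-packet structure, not plain one-level Haar):

```python
import numpy as np

def haar2d_level(img):
    """One level of a 2-D Haar wavelet decomposition (sketch).
    Returns the four subbands LL, LH, HL, HH for an even-sized image."""
    a = img.astype(float)
    # rows: average / difference of adjacent horizontal pixel pairs
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # columns: average / difference of adjacent vertical pairs
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

img = np.arange(16, dtype=float).reshape(4, 4)
LL, LH, HL, HH = haar2d_level(img)
print(LL.shape)  # (2, 2)
```

A wavelet packet further decomposes the detail subbands as well; the thesis fixes that packet tree in advance based on the criteria listed above.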
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while accounting for the i.i.d. nonuniform distribution of the wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of the reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of the reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms.
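The generalized Gaussian model mentioned above is usually fitted by matching a ratio of moments to the shape parameter β: for a GGD, E|x| / sqrt(E[x²]) = Γ(2/β) / sqrt(Γ(1/β) Γ(3/β)). A moment-matching sketch (a standard approach; the thesis formulates a least-squares variant, which is not reproduced here):

```python
import math
import random

def ggd_ratio(beta):
    # M(beta) = Gamma(2/b) / sqrt(Gamma(1/b) * Gamma(3/b)); increasing in beta.
    return math.gamma(2 / beta) / math.sqrt(math.gamma(1 / beta) * math.gamma(3 / beta))

def estimate_shape(samples):
    """Moment-matching estimate of the GGD shape parameter beta:
    match the sample ratio E|x| / sqrt(E[x^2]) to M(beta) by bisection."""
    m1 = sum(abs(x) for x in samples) / len(samples)
    m2 = sum(x * x for x in samples) / len(samples)
    r = m1 / math.sqrt(m2)
    lo, hi = 0.1, 5.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(0)
gauss = [random.gauss(0, 1) for _ in range(20000)]
print(round(estimate_shape(gauss), 2))  # close to 2 for Gaussian data
```

β = 2 recovers the Gaussian and β = 1 the Laplacian; wavelet detail coefficients of natural images typically fit β well below 1, which is what motivates the non-uniform quantizer design.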
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.