Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed for use by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean-square criterion, as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least-squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and the scaling factor. In the lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made, and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.
Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while accounting for the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without needing to project them onto the outermost lattice shell, while maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine the lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of the reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of the reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local dominant ridge directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating-average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
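As an illustration of the coefficient-modelling step, the sketch below fits a generalized Gaussian distribution (GGD) to a subband of wavelet coefficients. It uses the standard moment-matching estimator rather than the least-squares formulation described above, and all function names are illustrative.

```python
# Moment-matching estimate of generalized Gaussian distribution (GGD)
# parameters for a band of wavelet coefficients.
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_ratio(beta):
    """Theoretical (E|x|)^2 / E[x^2] for a zero-mean GGD with shape beta."""
    return gamma(2.0 / beta) ** 2 / (gamma(1.0 / beta) * gamma(3.0 / beta))

def fit_ggd(coeffs):
    """Return (shape beta, scale alpha) matched to the sample moments."""
    x = np.asarray(coeffs, dtype=float)
    rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    # ggd_ratio is monotone in beta, so a bracketed root-finder suffices.
    beta = brentq(lambda b: ggd_ratio(b) - rho, 0.1, 5.0)
    alpha = np.sqrt(np.mean(x ** 2) * gamma(1.0 / beta) / gamma(3.0 / beta))
    return beta, alpha

# Example: a Laplacian-like source (beta near 1), typical of detail subbands.
rng = np.random.default_rng(0)
band = rng.laplace(scale=2.0, size=50_000)
print(fit_ggd(band))   # approximately (1.0, 2.0)
```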
Abstract:
The task addressed in this thesis is the automatic alignment of an ensemble of misaligned images in an unsupervised manner. This is especially useful in computer vision applications where annotations of the shape of an object of interest present in a collection of images are required. Performing this task manually is a slow, tedious, expensive, and error-prone process which hinders the progress of research laboratories and businesses. Most recently, the unsupervised removal of geometric variation present in a collection of images has been referred to as congealing, based on the seminal work of Learned-Miller [21]. The only assumptions made in congealing are that the parametric nature of the misalignment is known a priori (e.g. translation, similarity, affine, etc.) and that the object of interest is guaranteed to be present in each image. The capability to congeal an ensemble of misaligned images stemming from the same object class has numerous applications in object recognition, detection, and tracking. This thesis concerns itself with the construction of a congealing algorithm, titled least-squares congealing, which is inspired by the well-known image-to-image alignment algorithm developed by Lucas and Kanade [24]. The algorithm is shown to have superior performance characteristics when compared to previously established methods: canonical congealing by Learned-Miller [21] and stochastic congealing by Zöllei [39].
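A translation-only sketch of the joint-alignment idea is given below: each image is iteratively aligned, in Lucas-Kanade fashion, toward the evolving ensemble mean. This is a minimal illustration assuming pure translation; the thesis algorithm itself is not reproduced here.

```python
# Minimal translation-only congealing: Gauss-Newton alignment of each
# image toward the current ensemble mean, repeated until convergence.
import numpy as np
from scipy.ndimage import shift as nd_shift

def congeal_translations(images, n_iters=30):
    """images: (N, H, W) float array. Returns per-image (dy, dx) offsets."""
    params = np.zeros((len(images), 2))            # current (dy, dx) per image
    for _ in range(n_iters):
        warped = np.stack([nd_shift(im, -p, order=1)
                           for im, p in zip(images, params)])
        template = warped.mean(axis=0)             # evolving ensemble mean
        for i, w in enumerate(warped):
            gy, gx = np.gradient(w)                # steepest-descent images
            J = np.stack([gy.ravel(), gx.ravel()], axis=1)
            err = (template - w).ravel()
            dp, *_ = np.linalg.lstsq(J, err, rcond=None)
            params[i] += dp                        # Gauss-Newton update
        params -= params.mean(axis=0)              # pin the mean so the
    return params                                  # ensemble cannot drift
```

The mean-subtraction of the parameters at each iteration removes the degenerate solution in which all images translate together.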
Abstract:
This thesis addresses the problem of detecting and describing the same scene points in different wide-angle images taken by the same camera at different viewpoints. This is a core competency of many vision-based localisation tasks including visual odometry and visual place recognition. Wide-angle cameras have a large field of view that can exceed a full hemisphere, and the images they produce contain severe radial distortion. When compared to traditional narrow field of view perspective cameras, more accurate estimates of camera egomotion can be found using the images obtained with wide-angle cameras. The ability to accurately estimate camera egomotion is a fundamental primitive of visual odometry, and this is one of the reasons for the increased popularity in the use of wide-angle cameras for this task. Their large field of view also enables them to capture images of the same regions in a scene taken at very different viewpoints, and this makes them suited for visual place recognition. However, the ability to estimate the camera egomotion and recognise the same scene in two different images is dependent on the ability to reliably detect and describe the same scene points, or ‘keypoints’, in the images. Most algorithms used for this purpose are designed almost exclusively for perspective images. Applying algorithms designed for perspective images directly to wide-angle images is problematic as no account is made for the image distortion. The primary contribution of this thesis is the development of two novel keypoint detectors, and a method of keypoint description, designed for wide-angle images. Both reformulate the Scale-Invariant Feature Transform (SIFT) as an image processing operation on the sphere. As the image captured by any central projection wide-angle camera can be mapped to the sphere, applying these variants to an image on the sphere enables keypoints to be detected in a manner that is invariant to image distortion. Each of the variants is required to find the scale-space representation of an image on the sphere, and they differ in the approaches they use to do this. Extensive experiments using real and synthetically generated wide-angle images are used to validate the two new keypoint detectors and the method of keypoint description. The better of the two new keypoint detectors is applied to vision-based localisation tasks including visual odometry and visual place recognition using outdoor wide-angle image sequences. As part of this work, the effect of keypoint coordinate selection on the accuracy of egomotion estimates using the Direct Linear Transform (DLT) is investigated, and a simple weighting scheme is proposed which attempts to account for the uncertainty of keypoint positions during detection. A word reliability metric is also developed for use within a visual ‘bag of words’ approach to place recognition.
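As a minimal illustration of the mapping that underlies these detectors, the sketch below lifts image pixels onto the unit sphere under an assumed equidistant fisheye model (r = f·θ); a real camera would require its own calibrated central-projection model instead.

```python
# Lift wide-angle image pixels onto the unit sphere, the preprocessing
# step that lets sphere-based SIFT variants operate independently of
# radial distortion. Equidistant model r = f * theta assumed.
import numpy as np

def pixels_to_sphere(uv, cx, cy, f):
    """uv: (N, 2) pixel coordinates. Returns (N, 3) unit vectors."""
    du, dv = uv[:, 0] - cx, uv[:, 1] - cy
    r = np.hypot(du, dv)
    theta = r / f                      # angle from the optical axis
    phi = np.arctan2(dv, du)           # azimuth around the axis
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=1)
```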
Abstract:
In this study, the feasibility of difference imaging for improving the contrast of electronic portal imaging device (EPID) images is investigated. The difference imaging technique consists of the acquisition of two EPID images (with and without the placement of an additional layer of attenuating medium on the surface of the EPID) and the subtraction of one of these images from the other. The resulting difference image shows improved contrast, compared to a standard EPID image, since it is generated by lower-energy photons. Results of this study show that, firstly, this method can produce images exhibiting greater contrast than is seen in standard megavoltage EPID images and that, secondly, the optimal thickness of attenuating material for producing a maximum contrast enhancement may vary with phantom thickness and composition. Further studies of the possibilities and limitations of the difference imaging technique, and the physics behind it, are therefore recommended.
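The core of the technique is a per-pixel subtraction of the two co-registered acquisitions; a minimal sketch, with illustrative names and a simple Weber-style contrast measure, is given below.

```python
# Difference imaging as per-pixel subtraction of two EPID acquisitions.
import numpy as np

def difference_image(open_img, attenuated_img):
    """Subtract the attenuated acquisition from the open one."""
    # The added layer preferentially removes part of the beam spectrum,
    # so the difference isolates the lower-energy contribution.
    return open_img.astype(float) - attenuated_img.astype(float)

def weber_contrast(img, roi_object, roi_background):
    """Contrast between an object region and a background region."""
    m_obj = img[roi_object].mean()
    m_bg = img[roi_background].mean()
    return abs(m_obj - m_bg) / m_bg
```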
Abstract:
To understand human behavior, it is important to know under what conditions people deviate from selfish rationality. This study explores the interaction of natural survival instincts and internalized social norms using data on the sinking of the Titanic and the Lusitania. We show that time pressure appears to be crucial when explaining behavior under extreme conditions of life and death. Even though the two vessels and the composition of their passengers were quite similar, the behavior of the individuals on board was dramatically different. On the Lusitania, selfish behavior dominated (which corresponds to the classical homo oeconomicus); on the Titanic, social norms and social status (class) dominated, which contradicts standard economics. This difference could be attributed to the fact that the Lusitania sank in 18 minutes, creating a situation in which the short-run flight impulse dominates behavior. On the slowly sinking Titanic (2 hours, 40 minutes), there was time for socially determined behavioral patterns to re-emerge. To our knowledge, this is the first time that these shipping disasters have been analyzed in a comparative manner with advanced statistical (econometric) techniques using individual data of the passengers and crew. Understanding human behavior under extreme conditions offers insight into how strongly behavior can vary with external conditions.
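For illustration only, a hypothetical sketch of the kind of individual-level model such a comparison rests on is shown below: a logit of survival on passenger covariates with a ship indicator. The dataset and column names are invented; the study's actual specification is not reproduced here.

```python
# Hypothetical individual-level survival model comparing the two ships.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("passengers.csv")   # invented combined passenger/crew dataset

# Logit of survival on individual covariates; the interaction term lets
# class effects differ between the two ships.
model = smf.logit("survived ~ age + C(gender) + C(passenger_class) * C(ship)",
                  data=df).fit()
print(model.summary())
```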
Abstract:
The ideal dermal matrix should be able to provide the right biological and physical environment to ensure homogeneous cell and extracellular matrix (ECM) distribution, as well as the right size and morphology of the neo-tissue required. Four natural and synthetic 3D matrices were evaluated in vitro as dermal matrices, namely (1) equine collagen foam, TissuFleece®, (2) acellular dermal replacement, Alloderm®, (3) knitted poly(lactic-co-glycolic acid) (10:90)–poly(ε-caprolactone) (PLGA–PCL) mesh, and (4) a chitosan scaffold. Human dermal fibroblasts were cultured on the specimens over 3 weeks. Cell morphology, distribution and viability were assessed by electron microscopy, histology and confocal laser microscopy. Metabolic activity and DNA synthesis were analysed via MTS metabolic assay and [3H]-thymidine uptake, while ECM protein expression was determined by immunohistochemistry. TissuFleece®, Alloderm® and PLGA–PCL mesh supported cell attachment, proliferation and neo-tissue formation. However, TissuFleece® contracted to 10% of its original size, while Alloderm® supported cell proliferation predominantly on the surface of the material. PLGA–PCL mesh promoted more homogeneous cell distribution and tissue formation. Chitosan scaffolds did not support cell attachment and proliferation. These results demonstrate that physical characteristics, including porosity and the mechanical stability to withstand cell contraction forces, are important in determining the success of a dermal matrix material.
Abstract:
Compressed natural gas (CNG) engines are thought to be less harmful to the environment than conventional diesel engines, especially in terms of particle emissions. Although this is true with respect to particulate matter (PM) emissions, results of particle number (PN) emission comparisons have been inconclusive. In this study, results of on-road and dynamometer studies of buses were used to derive several important conclusions. We show that, although PN emissions from CNG buses are significantly lower than from diesel buses at low engine power, they become comparable at high power. For diesel buses, PN emissions are not significantly different between acceleration and operation at steady maximum power. However, the corresponding PN emissions from CNG buses when accelerating are an order of magnitude greater than when operating at steady maximum power. During acceleration under heavy load, PN emissions from CNG buses are an order of magnitude higher than from diesel buses. The particles emitted from CNG buses are too small to contribute to PM10 emissions or to a reduction in visibility, and may consist of semivolatile nanoparticles.
Abstract:
Motor vehicle emission factors are generally derived from driving tests mimicking steady-state conditions or transient drive cycles. However, neither of these test conditions completely represents real-world driving conditions. In particular, they fail to determine emissions generated during the accelerating phase, a condition in which urban buses spend much of their time. In this study, we analyse and compare the results of time-dependent emission measurements conducted on diesel and compressed natural gas (CNG) buses during an urban driving cycle on a chassis dynamometer, and we derive power-law expressions relating carbon dioxide (CO2) emission factors to the instantaneous speed while accelerating from rest. Emissions during acceleration are compared with those during steady-speed operation. These results have important implications for emission modelling, particularly under congested traffic conditions.
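A minimal sketch of the fitting step described above is given below: a power law EF(v) = a·v^b is fitted to time-resolved speed and emission-factor measurements by linear least squares in log-log space. Variable names are illustrative.

```python
# Fit a power law EF(v) = a * v**b to time-resolved measurements.
import numpy as np

def fit_power_law(speed, emission_factor):
    """Return (a, b) such that emission_factor ~ a * speed**b."""
    speed = np.asarray(speed, dtype=float)
    emission_factor = np.asarray(emission_factor, dtype=float)
    mask = (speed > 0) & (emission_factor > 0)       # log needs positives
    b, log_a = np.polyfit(np.log(speed[mask]),
                          np.log(emission_factor[mask]), 1)
    return np.exp(log_a), b
```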
Abstract:
This chapter investigates Shock Control Bumps (SCBs) on a Natural Laminar Flow (NLF) aerofoil, the RAE 5243, for Active Flow Control (AFC). An SCB is used to decelerate the supersonic flow on the suction/pressure side of a transonic aerofoil, delaying the occurrence of the shock or weakening its strength. Such an AFC technique significantly reduces the total drag at transonic speeds. This chapter considers SCB shape design optimisation at two boundary-layer transition positions (0 and 45%) using an Euler solver coupled with viscous boundary-layer effects and robust Evolutionary Algorithms (EAs). The optimisation method is based on a canonical Evolution Strategy (ES) algorithm and incorporates the concepts of hierarchical topology and parallel asynchronous evaluation of candidate solutions. Two test cases are considered in the numerical experiments: in the first, the transition point occurs at the leading edge; in the second, it is fixed at 45% of the wing chord. Numerical results are presented, and it is demonstrated that an optimal SCB design can be found that significantly reduces transonic wave drag and improves the lift-to-drag (L/D) ratio when compared to the baseline aerofoil design.
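A toy (mu, lambda) evolution strategy of the kind the chapter builds on is sketched below, applied to a stand-in objective. The real optimiser couples the ES to the Euler/boundary-layer solver, uses a hierarchical topology, and evaluates candidates asynchronously; none of that machinery is reproduced here.

```python
# Minimal (mu, lambda) evolution strategy minimising an objective.
import numpy as np

def evolution_strategy(objective, dim, mu=5, lam=20, sigma=0.1,
                       n_gens=100, seed=0):
    rng = np.random.default_rng(seed)
    parents = rng.uniform(-1, 1, size=(mu, dim))   # e.g. bump height/position/width
    for _ in range(n_gens):
        # Each offspring is a Gaussian mutation of a random parent.
        idx = rng.integers(0, mu, size=lam)
        offspring = parents[idx] + sigma * rng.standard_normal((lam, dim))
        scores = np.array([objective(x) for x in offspring])
        parents = offspring[np.argsort(scores)[:mu]]  # comma selection: keep best mu
    return parents[0]

# Stand-in objective; a real evaluation would return drag (or -L/D) from CFD.
best = evolution_strategy(lambda x: np.sum((x - 0.3) ** 2), dim=3)
print(best)
```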
Abstract:
Rice grassy stunt virus is a member of the genus Tenuivirus, is persistently transmitted by the brown planthopper, and has occurred in rice plants in South, Southeast, and East Asia (similar to North and South America). We determined the complete nucleotide (nt) sequences of RNAs 1 (9760 nt), 2 (4069 nt), 3 (3127 nt), 4 (2909 nt), 5 (2704 nt), and 6 (2590 nt) of a southern Philippine isolate from South Cotabato and compared them with those of a northern Philippine isolate from Laguna (Toriyama et al., 1997, 1998). The numbers of nucleotides in the terminal untranslated regions and open reading frames were identical between the two isolates, except for the 5′ untranslated region of the complementary strand of RNA 4. Overall nucleotide differences between the two isolates were only 0.08% in RNA 1, 0.58% in RNA 4, and 0.26% in RNA 5, whereas they were 2.19% in RNA 2, 8.38% in RNA 3, and 3.63% in RNA 6. In the intergenic regions, the two isolates differed by 9.12% in RNA 2, 11.6% in RNA 3, and 6.86% in RNA 6, with multiple consecutive nucleotide deletions/insertions, whereas they differed by only 0.78% in RNA 4 and 0.34% in RNA 5. The nucleotide variation in the intergenic region of RNA 6 within the South Cotabato isolate was only 0.33%. These differences in the accumulation of mutations among individual RNA segments indicate that there was genetic reassortment between the two geographical isolates; RNAs 1, 4, and 5 of the two isolates came from a common ancestor, whereas RNAs 2, 3, and 6 were from two different ancestors.
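The per-segment divergence figures reduce to a per-site comparison of aligned sequences; a minimal sketch, assuming equal-length, gap-free alignments, is shown below.

```python
# Percentage nucleotide difference between two aligned sequences.
def percent_difference(seq_a, seq_b):
    """Percentage of aligned positions at which the sequences differ."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    mismatches = sum(a != b for a, b in zip(seq_a, seq_b))
    return 100.0 * mismatches / len(seq_a)

print(percent_difference("ACGTACGT", "ACGTTCGA"))  # 25.0
```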