276 results for Quantization
Abstract:
The problem of selecting an appropriate wavelet filter is always present in signal compression based on the wavelet transform. In this report, we propose a method to select a wavelet filter from a predefined set of filters for the compression of spectra from a multispectral image. The wavelet filter selection is based on Learning Vector Quantization (LVQ). In the training phase, the best wavelet filter for each spectrum of the test images is found by a careful compression-decompression evaluation. Certain spectral features are used to characterize the pixel spectra. The LVQ is used to form the best wavelet filter class for different types of spectra from multispectral images. When a new image is to be compressed, a set of spectra from that image is selected, the spectra are classified by the trained LVQ, and the filter associated with the largest class is selected for the compression of every spectrum of the multispectral image. The results show that in almost every case our method finds the most suitable wavelet filter from the predefined set.
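As a hedged illustration of the classifier this abstract relies on, here is a minimal LVQ1 sketch; the feature vectors, class count and learning rate are illustrative assumptions, not the report's actual spectral features or filter set:

```python
import numpy as np

def train_lvq1(X, y, n_classes, lr=0.1, epochs=20, seed=0):
    """LVQ1: one prototype per class; the winning prototype is pulled
    toward a same-class sample and pushed away from a different-class one."""
    rng = np.random.default_rng(seed)
    # Initialize each prototype at its class mean.
    protos = np.array([X[y == c].mean(axis=0) for c in range(n_classes)])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = int(np.argmin(np.linalg.norm(protos - X[i], axis=1)))  # winner
            sign = 1.0 if j == y[i] else -1.0
            protos[j] += sign * lr * (X[i] - protos[j])
    return protos

def classify(protos, x):
    """Index of the nearest prototype = predicted wavelet-filter class."""
    return int(np.argmin(np.linalg.norm(protos - x, axis=1)))
```

In the paper's setting each class would correspond to one wavelet filter, and the filter of the most populated class among a new image's sample spectra would be applied to the whole image.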
Abstract:
Technological progress has made a huge amount of data available at increasing spatial and spectral resolutions; the compression of hyperspectral data is therefore an area of active research. In some fields the original quality of a hyperspectral image cannot be compromised, and in these cases lossless compression is mandatory. The main goal of this thesis is to provide improved methods for the lossless compression of hyperspectral images. Both prediction- and transform-based methods are studied. Two kinds of prediction-based methods are studied: in the first, the spectra of a hyperspectral image are first clustered and an optimized linear predictor is calculated for each cluster; in the second, the linear prediction coefficients are not fixed but are recalculated for each pixel. A parallel implementation of the above-mentioned linear prediction method is also presented. Two transform-based methods are presented as well: Vector Quantization (VQ) used together with a new coding of the residual image, and a new back end for a compression method utilizing Principal Component Analysis (PCA) and the Integer Wavelet Transform (IWT). The performance of the compression methods is compared to that of other compression methods; the results show that the proposed linear prediction methods outperform the previous ones. In addition, a novel fast exact nearest-neighbor search method is developed and used to speed up the Linde-Buzo-Gray (LBG) clustering method.
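The first prediction scheme can be sketched as follows: given an (assumed) clustering of the spectra, fit one least-squares linear predictor per cluster and keep only the prediction residuals, which a lossless coder would then entropy-code. The predictor order and regression layout are assumptions for illustration, not the thesis's exact design:

```python
import numpy as np

def fit_cluster_predictors(spectra, labels, order=2):
    """Fit, for each cluster, least-squares coefficients that predict
    band b of a spectrum from its `order` preceding bands."""
    B = spectra.shape[1]
    coeffs = {}
    for c in np.unique(labels):
        S = spectra[labels == c]
        # Stack one regression row per (spectrum, band) pair.
        X = np.concatenate([S[:, b - order:b] for b in range(order, B)])
        t = np.concatenate([S[:, b] for b in range(order, B)])
        coeffs[c], *_ = np.linalg.lstsq(X, t, rcond=None)
    return coeffs

def residuals(spectrum, a, order=2):
    """Per-band prediction residuals; a lossless coder would entropy-code
    these (typically small) values instead of the raw samples."""
    r = np.asarray(spectrum, dtype=float).copy()
    for b in range(order, len(r)):
        r[b] = spectrum[b] - spectrum[b - order:b] @ a
    return r
```

The first `order` bands are left unpredicted so the decoder can bootstrap the reconstruction band by band.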
Abstract:
The purpose of this thesis is to present a new approach to the lossy compression of multispectral images. The proposed algorithm combines quantization and clustering: clustering is investigated for compression of the spatial dimension, and vector quantization is applied for compression of the spectral dimension. The algorithm compresses multispectral images in two stages. During the first stage the class etalons are defined; in other words, each uniform area located inside the image is assigned a class number. Pixels not yet assigned to any cluster are handled during the second pass and assigned to the closest etalons. Finally, the compressed image is represented by a flat index image pointing into a codebook of etalons, so the decompression stage is likewise immediate. The proposed method has been tested on satellite multispectral images from different sources; numerical results and illustrative examples of the method are presented as well.
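The index-image/codebook representation described above can be sketched in a few lines (the codebook of etalons is assumed to be given; the thesis's etalon-construction stage is not reproduced here):

```python
import numpy as np

def vq_compress(pixels, codebook):
    """Replace each pixel spectrum by the index of its nearest etalon."""
    d = np.linalg.norm(pixels[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(d, axis=1).astype(np.uint8)  # the flat index image

def vq_decompress(indices, codebook):
    """Decompression is a single table lookup per pixel."""
    return codebook[indices]
```

Storage drops from one full spectrum per pixel to one small integer per pixel plus the shared codebook, which is what makes the decompression stage essentially instant.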
Abstract:
This thesis studies the influence of uniaxial deformation on the photoluminescence of GaAs/AlGaAs quantum well structures. Uniaxial deformation was applied along [110], and the polarization ratio of the photoluminescence was measured at T = 77 K and 300 K. The physical origin of the photoluminescence lines in the spectrum was determined, and the energy band splitting between the heavy- and light-hole states was estimated. It was found that the dependences of the polarization ratio on uniaxial deformation differ for bulk GaAs and GaAs/AlGaAs. The two observed lines in the photoluminescence spectrum are induced by the recombination of free electrons to the valence-band energy sublevels corresponding to heavy and light holes. Those sublevels are split by the combination of size quantization and external pressure, and the splitting energy was estimated. A method is also shown that allows the energy splitting of the sublevels to be determined at room temperature and at comparatively low uniaxial deformation, where the other method for determining the splitting becomes impossible.
Abstract:
The quantum harmonic oscillator is described by the Hermite equation.¹ The asymptotic solution is predominantly used to obtain its analytical solutions. Wave functions (solutions) are quadratically integrable when taken as the product of the convergent asymptotic solution (a Gaussian function) and a Hermite polynomial,¹ whose degree provides the associated quantum number. Solving the equation numerically, quantization is observed when a control real variable is "tuned" to integer values. This can be interpreted by graphical reading of Y(x) and |Y(x)|², without other mathematical analysis, and proves useful for teaching the fundamentals of quantum chemistry to undergraduates.
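The numerical experiment the abstract describes can be sketched with a standard shooting method for the dimensionless equation y'' = (x² − E)y: integrating across the domain and tuning E, the boundary value changes sign only at the eigenvalues E = 2n + 1, which is where the "control variable hits an integer" behaviour appears. This is a generic sketch under the usual nondimensionalization, not necessarily the paper's exact procedure:

```python
import numpy as np

def shoot(E, L=6.0, n=2000):
    """Integrate y'' = (x^2 - E) y from x = -L (y = 0, tiny slope)
    to x = +L with fixed-step RK4, returning y(+L)."""
    h = 2.0 * L / n
    x, y, dy = -L, 0.0, 1e-6
    f = lambda x, y, dy: (dy, (x * x - E) * y)
    for _ in range(n):
        k1y, k1d = f(x, y, dy)
        k2y, k2d = f(x + h / 2, y + h / 2 * k1y, dy + h / 2 * k1d)
        k3y, k3d = f(x + h / 2, y + h / 2 * k2y, dy + h / 2 * k2d)
        k4y, k4d = f(x + h, y + h * k3y, dy + h * k3d)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        dy += h / 6 * (k1d + 2 * k2d + 2 * k3d + k4d)
        x += h
    return y

def find_level(lo, hi, iters=60):
    """Bisection on E: y(+L) changes sign exactly when E crosses an
    eigenvalue, so bracketing one eigenvalue pins it down."""
    flo = shoot(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fm = shoot(mid)
        if flo * fm <= 0:
            hi = mid
        else:
            lo, flo = mid, fm
    return 0.5 * (lo + hi)
```

For any E that is not an eigenvalue the integrated wave function diverges at the boundary, which is exactly what a student sees when reading Y(x) off a plot.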
Abstract:
An investigation of galvanomagnetic effects in the GaAs/Mn/GaAs/In0.15Ga0.85As/GaAs nanostructure is presented. This nanostructure is classified as a diluted magnetic semiconductor (DMS). The temperature dependence of the transverse magnetoresistivity of the sample was studied. The anomalous Hall effect was detected and subtracted from the total Hall component. Special attention was paid to the measurements of Shubnikov-de Haas oscillations, which exist only when the magnetic field is aligned perpendicularly to the plane of the sample; this confirms the two-dimensional character of the hole energy spectrum in the quantum well. Important characteristics such as the cyclotron mass, the Fermi energy and the Dingle temperature were calculated from the experimental Shubnikov-de Haas data, and the hole concentration and hole mobility in the quantum well were also estimated. At 4.2 K, spin splitting of the maxima of the transverse resistivity was observed and the g-factor was calculated for that case. The Dingle temperature was obtained by two different approaches; comparison of the values shows that the broadening of the Landau levels in the investigated structure is defined mainly by the scattering of charge carriers on defects of the crystal lattice.
Abstract:
Object detection is a fundamental task of computer vision that is used as a core part of many industrial and scientific applications, for example in robotics, where objects need to be correctly detected and localized prior to being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative) and (iii) the amount of spatial information used in the object model (model-free, using no spatial information, or model-based, with an explicit spatial model of the object). Although some existing methods report good performance in the detection of certain objects, the results tend to be application specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide information beyond the object location, for example its pose. The object class model, i.e. the appearance of the object parts and their spatial variance (constellation), is explicitly modelled in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed into part probabilities by an unsupervised Gaussian Mixture Model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the object parts is learned in an object canonical space that removes geometric variations from the part appearance model.
Robustness to pose variations is achieved by object pose quantization, which is more efficient than the previously used scale and orientation shifts in the Gabor feature space. The resulting generative object detector is characterized by high recall with low precision, i.e. it produces a large number of false positive detections. A discriminative classifier is therefore used to prune the false positive candidate detections produced by the generative detector, improving its precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
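The "mixture of 2D Gaussians" spatial model amounts to evaluating how likely a detected part location is under a weighted sum of Gaussian densities. A minimal density evaluation (with hypothetical fixed parameters; the thesis learns these from data) looks like:

```python
import numpy as np

def gauss2d(x, mu, cov):
    """Density of a 2D Gaussian at point x."""
    d = x - mu
    return np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / \
        (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))

def constellation_pdf(x, weights, mus, covs):
    """Likelihood of a part location x under a mixture of 2D Gaussians."""
    return sum(w * gauss2d(x, m, c) for w, m, c in zip(weights, mus, covs))
```

Thresholding such likelihoods over candidate part configurations is what lets a generative model accept plausible constellations and reject implausible ones.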
Abstract:
This thesis surveys the principal halftoning methods, from analog screening to direct binary search by way of ordered dither, with particular attention to error diffusion. These methods are compared from the modern perspective of structure awareness. A new error-diffusion halftoning method is presented and subjected to various evaluations. The proposed method aims to be original, simple, as capable of preserving the structural character of images as the state-of-the-art method, and faster than the latter by two to three orders of magnitude. First, the image is decomposed into characteristic local frequencies. Then the base behaviour of the proposed method is given. Next, a carefully chosen set of parameters makes it possible to modify this behaviour so as to match the different local frequency characters. Finally, a calibration determines the right parameters to associate with each possible frequency. Once the algorithm is assembled, any image can be processed very quickly: each pixel is attached to its own frequency, this frequency serves as an index into the calibration table, the appropriate diffusion parameters are retrieved, and the output colour chosen for the pixel contributes, in expectation, to emphasizing the structure it belongs to.
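For readers unfamiliar with error diffusion, here is the classic Floyd-Steinberg baseline that the thesis's frequency-adaptive method builds upon; this is the textbook algorithm with its standard 7/16, 3/16, 5/16, 1/16 weights, not the proposed method:

```python
import numpy as np

def floyd_steinberg(img):
    """Classic Floyd-Steinberg error diffusion: binarize each pixel in
    raster order and push the quantization error onto the unprocessed
    neighbours (right, lower-left, below, lower-right)."""
    out = np.asarray(img, dtype=float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out
```

Because the error is redistributed rather than discarded, the mean tone of the binary output tracks the mean tone of the input, which is the property the proposed method's per-frequency diffusion parameters modulate.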
Abstract:
We review the prerequisites in differential geometry needed for a first approach to the theory of geometric quantization: basic notions of symplectic geometry, Lie groups and Lie algebras, Lie group actions, principal G-bundles, connections, associated bundles and almost-complex structures. This leads to a deeper study of Hermitian line bundles, including an existence condition for a prequantum bundle on a symplectic manifold. With these tools in hand, we then begin the study of geometric quantization, step by step. We introduce the theory of prequantization, i.e. the construction of operators associated with classical observables and the construction of a Hilbert space. Major problems surface in the concrete application of prequantization: the operators are not those expected from first quantization, and the Hilbert space obtained is too big. A first correction, polarization, eliminates some problems but greatly restricts the set of classical observables that can be quantized. This thesis is not a complete survey of geometric quantization, nor does it aim to be: it covers neither the metaplectic correction nor the BKS kernel. It is a companion text for readers being introduced to geometric quantization. On the one hand, it introduces differential-geometry concepts taken for granted in (Woodhouse [21]) and (Sniatycki [18]), i.e. principal G-bundles and associated bundles. On the other, it adds detail to some brief proofs given in those two references.
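The prequantization step mentioned above is the standard Kostant-Souriau assignment, which sends a classical observable f to an operator on sections s of the prequantum line bundle (notation and sign conventions here are the usual ones, not necessarily the memoir's):

```latex
\hat{f}\, s \;=\; -\,i\hbar\,\nabla_{X_f}\, s \;+\; f\, s,
\qquad \iota_{X_f}\,\omega \;=\; \mathrm{d}f,
```

where \(\nabla\) is the prequantum connection (whose curvature is proportional to the symplectic form \(\omega\)) and \(X_f\) is the Hamiltonian vector field of f. It is this construction whose operators and Hilbert space turn out to be "too big", motivating the polarization correction.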
Abstract:
This thesis deals with some aspects of the physics of the early universe, such as phase transitions, bubble nucleation and the primordial density perturbations that lead to the formation of structures in the universe. Quantum aspects of the gravitational interaction play an essential role in theoretical high-energy physics. The questions of quantum gravity are naturally connected with the early universe and Grand Unification Theories. In spite of numerous efforts, many problems of quantum gravity remain unsolved. Under these circumstances, the consideration of different quantum gravity models is an inevitable stage in studying the quantum aspects of the gravitational interaction. The important role of the gravitationally coupled scalar field in the physics of the early universe is discussed in this thesis. The study shows that the scalar-gravitational coupling and the scalar curvature played a crucial role in determining the nature of the phase transitions that took place in the early universe. The key idea in studying structure formation in the universe is that of gravitational instability.
Abstract:
The thesis introduces the octree and addresses the full range of problems encountered while building an imaging system based on octrees. An efficient bottom-up recursive algorithm and its iterative counterpart are given for the raster-to-octree conversion of CAT scan slices; to improve the speed of generating the octree from the slices, the possibility of exploiting the inherent parallelism in the conversion program is explored. An octree node storing the volume information of a cube often stores only the average density, which can lead to a "patchy" distribution of density during image reconstruction. To alleviate this problem, the thesis explores the use of VQ to represent the information contained within a cube. Given the ease of compressing the information during the generation of octrees from CAT scan slices, the use of wavelet transforms is proposed to generate the compressed information in a cube; the modified algorithm for generating octrees from the slices is shown to accommodate the wavelet compression easily. Rendering the information stored in the octree is a complex task, chiefly because of the requirement to display volumetric information: rays traced through each cube in the octree sum up the density en route, accounting for the opacities and transparencies produced by variations in density.
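The bottom-up collapse that makes octrees compact can be sketched as follows (a generic sketch on binary occupancy values; the thesis works with density data, wavelet/VQ payloads and CAT-scan slice input, none of which are modelled here):

```python
import numpy as np

def build_octree(vol):
    """Octree of a cubic volume (side a power of two): a cube collapses
    to a single leaf when all of its voxels share one value; otherwise
    it splits into eight half-size octants, recursively."""
    if np.all(vol == vol.flat[0]):
        return ('leaf', float(vol.flat[0]))
    h = vol.shape[0] // 2
    kids = [build_octree(vol[i:i + h, j:j + h, k:k + h])
            for i in (0, h) for j in (0, h) for k in (0, h)]
    return ('node', kids)

def count_leaves(tree):
    """Number of leaf cubes; far fewer than voxels for coherent volumes."""
    return 1 if tree[0] == 'leaf' else sum(count_leaves(c) for c in tree[1])
```

Replacing the exact-equality test with a similarity criterion, and the stored leaf value with a VQ index or wavelet coefficients, is where the thesis's contributions plug in.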
Abstract:
This thesis, entitled "Studies on Quasinormal Modes and Late-time Tails in Black Hole Spacetimes", probes the signatures of these new theories through the evolution of field perturbations in the black hole spacetimes of the theory. Chapter 1 gives a general introduction to black holes and their perturbation formalism; various concepts in the area covered by the thesis are also elucidated there. Chapter 2 describes the evolution of massive, charged scalar field perturbations around a Reissner-Nordstrom black hole surrounded by a static, spherically symmetric quintessence. Chapter 3 covers the evolution of massless scalar, electromagnetic and gravitational fields around a spherically symmetric black hole whose asymptotics are defined by the quintessence, with special interest in the late-time behavior. Chapter 4 examines the evolution of the Dirac field around a Schwarzschild black hole surrounded by quintessence; detailed numerical simulations are performed to analyze the behavior of the field on different surfaces of constant radius. Chapter 5 is dedicated to the study of the evolution of massless fields around the black hole geometry in HL gravity.
Abstract:
In 1931 Dirac studied the motion of an electron in the field of a magnetic monopole and found that the quantization of electric charge can be explained by postulating the mere existence of a magnetic monopole. Since 1974 there has been a resurgence of interest in magnetic monopoles due to the work of 't Hooft and Polyakov, who independently observed that monopoles can exist as finite-energy, topologically stable solutions of certain spontaneously broken gauge theories. The thesis, "Studies on Magnetic Monopole Solutions of Non-abelian Gauge Theories and Related Problems", reports a systematic investigation of classical solutions of non-abelian gauge theories, with special emphasis on magnetic monopoles and on dyons, which possess both electric and magnetic charges. The formation of bound states of a dyon with fermions and bosons is also studied in detail. The thesis opens with an account of a new derivation of a relationship between the magnetic charge of a dyon and the topology of the gauge fields associated with it. Although this formula has been reported earlier in the literature, the present method has two distinct advantages: first, it does not depend either on the mechanism of symmetry breaking or on the nature of the residual symmetry group; secondly, the results can be generalized to finite-temperature monopoles.