480 results for BRST Quantization


Relevance: 10.00%

Abstract:

In this paper we present pyktree, an implementation of the K-tree algorithm in the Python programming language. The K-tree algorithm provides highly balanced search trees for vector quantization that scale up to very large data sets. Pyktree is highly modular and well suited for rapid prototyping of novel distance measures and centroid representations. It is easy to install and provides a Python package for library use as well as command-line tools.
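To illustrate the idea behind a K-tree, here is a minimal sketch of a two-level k-means tree, where a query descends from a coarse top-level codebook to a refined child codebook. This is a toy illustration of hierarchical vector quantization in plain NumPy, not pyktree's actual API; every name below is hypothetical.

```python
# Minimal sketch of the K-tree idea: a two-level k-means tree for vector
# quantization, in plain NumPy. Illustrative only -- this is NOT pyktree's
# API; every name below is hypothetical.
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: returns (centroids, assignments)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest centroid, then recompute means.
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def build_two_level_tree(X, k=4):
    """Top level partitions the data; each partition gets its own codebook."""
    top, labels = kmeans(X, k)
    children = [kmeans(X[labels == j], k)[0] for j in range(k)]
    return top, children

def quantize(x, top, children):
    """Descend the tree: nearest top-level centroid, then nearest child."""
    j = np.linalg.norm(top - x, axis=1).argmin()
    child = children[j]
    return child[np.linalg.norm(child - x, axis=1).argmin()]

X = np.random.default_rng(1).normal(size=(1000, 8))
top, children = build_two_level_tree(X)
print(quantize(X[0], top, children))
```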

Relevance: 10.00%

Abstract:

Signal Processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time, and these signals are referred to as analog signals. Prior to the onset of digital computers, Analog Signal Processing (ASP) and analog systems were the only tools for dealing with analog signals. Although ASP and analog systems are still widely used, Digital Signal Processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size and cost. In addition, DSP can solve problems that cannot be solved using ASP, such as the spectral analysis of multicomponent signals, adaptive filtering, and operations at very low frequencies. Following the developments in engineering which occurred in the 1980s and 1990s, DSP became one of the world's fastest growing industries. Since that time DSP has not only impacted on traditional areas of electrical engineering, but has had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering, control engineering and various other applications. This book is based on the lecture notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr. Amin Z. Sadik (at QUT & RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, Z, discrete-time Fourier, and discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. The design and implementation of digital systems (such as integrators, differentiators, resonators and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II, which is suitable for an advanced signal processing course, considers selected signal processing systems and techniques. Core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.
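As a small taste of the material such a course covers, the sketch below samples a sinusoid and applies a uniform quantizer, then measures the signal-to-noise ratio. The parameter choices are illustrative and not taken from the book.

```python
# Minimal sketch of two core DSP steps: sampling an "analog" signal and
# uniform quantization. Parameter choices are illustrative only.
import numpy as np

fs = 8000                          # sampling rate in Hz
t = np.arange(0, 0.01, 1 / fs)     # 10 ms of samples
x = np.sin(2 * np.pi * 440 * t)    # a 440 Hz tone standing in for an analog input

bits = 4
levels = 2 ** bits
# Uniform mid-tread quantizer over [-1, 1): round to the nearest level.
step = 2.0 / levels
xq = np.clip(np.round(x / step) * step, -1.0, 1.0 - step)

snr_db = 10 * np.log10(np.mean(x**2) / np.mean((x - xq)**2))
print(f"{bits}-bit quantization SNR: {snr_db:.1f} dB")  # roughly 6 dB per bit
```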

Relevance: 10.00%

Abstract:

Probabilistic topic models have recently been used for activity analysis in video processing, due to their strong capacity to model both local activities and interactions in crowded scenes. In those applications, a video sequence is divided into a collection of uniform non-overlapping video clips, and the high-dimensional continuous inputs are quantized into a bag of discrete visual words. The hard division into video clips and the hard assignment of visual words lead to problems when an activity is split over multiple clips, or when the most appropriate visual word for quantization is unclear. In this paper, we propose a novel algorithm which makes use of a soft histogram technique to compensate for the loss of information in the quantization process, and a soft cut technique in the temporal domain to overcome problems caused by separating an activity into two video clips. In the detection process, we also apply a soft decision strategy to detect unusual events. We show that the proposed soft decision approach outperforms its hard decision counterpart in both local and global activity modelling.
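A minimal sketch of the soft-histogram idea follows: rather than giving each feature's full vote to its nearest visual word, the vote is spread over nearby words with distance-based weights. The Gaussian weighting used here is an assumption for illustration; the paper's exact scheme may differ.

```python
# Sketch of soft quantization of features into visual words: each feature
# contributes one unit of mass, spread over nearby codewords by distance.
# Illustrative only; the paper's exact weighting may differ.
import numpy as np

def soft_histogram(features, codebook, sigma=1.0):
    """Accumulate distance-weighted votes over visual words."""
    d = np.linalg.norm(features[:, None] - codebook[None], axis=2)
    w = np.exp(-d**2 / (2 * sigma**2))      # Gaussian kernel weights
    w /= w.sum(axis=1, keepdims=True)       # each feature contributes mass 1
    return w.sum(axis=0)                    # histogram over the codebook

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 32))        # 16 visual words, 32-D features
features = rng.normal(size=(100, 32))
print(soft_histogram(features, codebook))
```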

Relevance: 10.00%

Abstract:

The increasing demand for mobile video has attracted much attention from both industry and researchers. To satisfy users and to facilitate the use of mobile video, it is necessary to provide optimal quality to the users. As a result, quality of experience (QoE) has become an important focus in measuring the overall quality perceived by end-users, from the aspects of both objective system performance and subjective experience. However, due to the complexity of user experience and the diversity of resources (such as videos, networks and mobile devices), it is still challenging to develop QoE models for mobile video that can represent how user-perceived value varies with changing conditions. Previous QoE modelling research has two main limitations: aspects influencing QoE are insufficiently considered, and acceptability as the user value is seldom studied. Focusing on these QoE modelling issues, two aims are defined in this thesis: (i) investigating the key influencing factors of mobile video QoE; and (ii) establishing QoE prediction models based on the relationships between user acceptability and the influencing factors, in order to help provide optimal mobile video quality. To achieve the first goal, a comprehensive user study was conducted. It investigated the main impacts on user acceptance: video encoding parameters such as quantization parameter, spatial resolution, frame rate, and encoding bitrate; video content type; mobile device display resolution; and user profiles including gender, preference for video content, and prior viewing experience. Results from both quantitative and qualitative analysis revealed the significance of these factors, as well as how and why they influenced user acceptance of mobile video quality. Based on the results of the user study, statistical techniques were used to generate a set of QoE models that predict the subjective acceptability of mobile video quality from a group of measurable influencing factors, including encoding parameters and bitrate, content type, and mobile device display resolution. By applying the proposed QoE models in a mobile video delivery system, optimal decisions can be made for determining proper video coding parameters and for delivering the most suitable quality to users. This would lead to a consistent user experience across different mobile video content and efficient resource allocation. The findings of this research enhance the understanding of user experience in the field of mobile video, which will benefit mobile video design and research. This thesis presents a way of modelling QoE by emphasising user acceptability of mobile video quality, which provides a strong connection between technical parameters and user-desired quality. Managing QoE based on acceptability promises the potential for adapting to resource limitations and achieving an optimal QoE in the provision of mobile video content.
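To make the modelling form concrete, here is a toy sketch that fits a binary acceptability model to synthetic data. The thesis only says statistical techniques were used, so the choice of logistic regression, the coefficients, and the synthetic data below are all assumptions for illustration.

```python
# Toy illustration of the modelling form only: predict binary acceptability
# from measurable encoding factors. The thesis fits models to real user-study
# data; the synthetic data, coefficients, and the choice of logistic
# regression here are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
bitrate_kbps = rng.uniform(100, 2000, n)
frame_rate = rng.choice([12.5, 25.0], n)
resolution = rng.choice([240, 360, 480], n)   # vertical pixels

# Synthetic ground truth: acceptability rises with bitrate and resolution.
logit = 0.004 * bitrate_kbps + 0.01 * resolution + 0.05 * frame_rate - 8
accepted = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([bitrate_kbps, frame_rate, resolution])
model = LogisticRegression(max_iter=1000).fit(X, accepted)
# Predicted probability that a 500 kbps, 25 fps, 360p clip is acceptable:
print(model.predict_proba([[500.0, 25.0, 360.0]])[0, 1])
```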

Relevance: 10.00%

Abstract:

Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing efficient comparisons in the hash domain, and non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed with deterministic (non-changing) inputs such as files and passwords. For such data types a one-bit or one-character change can be significant, and as a result the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security aspects required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training on both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, an essential requirement for non-invertibility, and is designed to produce features better suited for quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded, and displays improved hashing performance compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
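The generic pipeline described above (features, linear randomization, learnt quantization thresholds, binary encoding) can be sketched in a few lines. This is a toy illustration, not any algorithm from the dissertation; the median-based threshold training is an assumption.

```python
# Sketch of a generic robust-hash pipeline: features -> keyed random
# projection (the linear randomization stage the text critiques) ->
# threshold quantization -> binary encoding. Illustrative only; the
# per-dimension median thresholds stand in for "quantizer training".
import numpy as np

rng = np.random.default_rng(0)

def train_hasher(training_features, n_bits=64, key=42):
    """Learn a keyed random projection and per-bit quantization thresholds."""
    proj = np.random.default_rng(key).normal(
        size=(training_features.shape[1], n_bits))
    projected = training_features @ proj
    thresholds = np.median(projected, axis=0)   # the learnt quantizer
    return proj, thresholds

def robust_hash(features, proj, thresholds):
    """Project, then binarize against the learnt thresholds."""
    return (features @ proj > thresholds).astype(np.uint8)

train = rng.normal(size=(1000, 128))            # stand-in image features
proj, thr = train_hasher(train)

x = rng.normal(size=128)
x_noisy = x + 0.05 * rng.normal(size=128)       # a minor perturbation
h1, h2 = robust_hash(x, proj, thr), robust_hash(x_noisy, proj, thr)
print("Hamming distance:", np.count_nonzero(h1 != h2), "of 64 bits")
```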

Relevance: 10.00%

Abstract:

This is a reply to "Comment on 'Online Estimation of Allan Variance Parameters'" by James C. Wilcox, published in the Journal of Guidance, Control, and Dynamics, Vol. 24, No. 3, May–June 2001. Our statement "Modern gyros provide angular rate measurements directly, and hence, angular quantization is meaningless", made in the original paper, should first be read with the accompanying sentences in the paragraph. The meaning of the sentence would perhaps have been clearer if written "...

Relevance: 10.00%

Abstract:

In this paper, expressions for the convolution multiplication properties of DCT-IV and DST-IV are derived, starting from equivalent DFT representations. Using these expressions, methods for implementing linear filtering through block convolution in the DCT-IV and DST-IV domains are proposed. The techniques developed for DCT-IV and DST-IV are further extended to the MDCT and MDST, where the filter implementation is near exact for symmetric filters and approximate for non-symmetric filters. No additional overlapping is required for implementing symmetric filtering in the MDCT domain, and hence the proposed algorithm is computationally competitive with DFT-based systems. Moreover, the inherent 50% overlap between adjacent frames used in the MDCT/MDST domain reduces the blocking artifacts due to block processing or quantization. The techniques are computationally efficient for symmetric filters and provide a new alternative to DFT-based convolution.
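For readers unfamiliar with the DCT-IV, the snippet below checks the transform's definition against SciPy's fast implementation. It illustrates only the transform itself, not the paper's block-convolution method.

```python
# Numerical sanity check of the DCT-IV definition: compare a direct
# evaluation of the sum against scipy's fast implementation (unnormalized
# convention). This is not the paper's block-convolution algorithm.
import numpy as np
from scipy.fft import dct

N = 16
x = np.random.default_rng(0).normal(size=N)

# Direct DCT-IV: X[k] = 2 * sum_n x[n] * cos(pi*(2n+1)*(2k+1)/(4N))
n = np.arange(N)
k = n[:, None]
X_direct = 2 * (x * np.cos(np.pi * (2 * n + 1) * (2 * k + 1) / (4 * N))).sum(axis=1)

X_fast = dct(x, type=4)               # scipy's DCT-IV, same convention
print(np.allclose(X_direct, X_fast))  # True
```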

Relevance: 10.00%

Abstract:

Complex amplitude encoded in any digital hologram must undergo quantization, usually in either polar or rectangular format. In this paper these two schemes are compared under the constraints and conditions inherent in digital holography. For Fourier transform holograms, when the spectrum is levelled through phase coding, the rectangular format is shown to be optimal. In the absence of phase coding, and also if the amplitude spectrum has a large dynamic range, the polar format may be preferable.
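The trade-off can be seen in a toy experiment: quantize a complex spectrum in rectangular (real/imaginary) versus polar (amplitude/phase) form with the same bits per component and compare the mean squared errors. This sketch ignores the holography-specific constraints the paper analyzes.

```python
# Toy comparison of rectangular vs polar quantization of complex amplitudes
# under the same bit budget. Illustrative only.
import numpy as np

def uniform_quantize(v, lo, hi, bits):
    """Uniform scalar quantizer on [lo, hi] with 2**bits levels."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((v - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

rng = np.random.default_rng(0)
z = rng.normal(size=1000) + 1j * rng.normal(size=1000)   # stand-in spectrum
bits = 4
lim = np.abs(z).max()

# Rectangular: quantize real and imaginary parts.
z_rect = (uniform_quantize(z.real, -lim, lim, bits)
          + 1j * uniform_quantize(z.imag, -lim, lim, bits))

# Polar: quantize amplitude and phase.
r = uniform_quantize(np.abs(z), 0, lim, bits)
phi = uniform_quantize(np.angle(z), -np.pi, np.pi, bits)
z_polar = r * np.exp(1j * phi)

for name, zq in [("rectangular", z_rect), ("polar", z_polar)]:
    print(name, np.mean(np.abs(z - zq) ** 2))
```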

Relevance: 10.00%

Abstract:

We analyze aspects of symmetry breaking for Moyal spacetimes within a quantization scheme which preserves the twisted Poincaré symmetry. Towards this purpose, we develop the Lehmann–Symanzik–Zimmermann (LSZ) approach for Moyal spacetimes. The latter gives a formula for scattering amplitudes on these spacetimes which can be obtained from the corresponding ones on the commutative spacetime. This formula applies in the presence of spontaneous breakdown of symmetries as well. We also derive Goldstone's theorem on Moyal spacetime. The formalism developed here can be directly applied to the twisted standard model.
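For background, the Moyal spacetime referred to here is defined by the star product below; this is standard textbook material, not a result of the paper itself.

```latex
% Standard Moyal star product and the resulting coordinate noncommutativity
% (textbook background, not a result of this paper):
(f \star g)(x) \;=\; f(x)\,
  \exp\!\Big(\tfrac{i}{2}\,\overleftarrow{\partial}_{\mu}\,
  \theta^{\mu\nu}\,\overrightarrow{\partial}_{\nu}\Big)\, g(x),
\qquad
[x^{\mu}, x^{\nu}]_{\star} = i\,\theta^{\mu\nu}.
```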

Relevance: 10.00%

Abstract:

Hereditary non-polyposis colorectal carcinoma (HNPCC; Lynch syndrome) is among the most common hereditary cancers in man and a model of cancers arising through deficient DNA mismatch repair (MMR). It is inherited in a dominant manner, with predisposing germline mutations in the MMR genes, mainly MLH1, MSH2, MSH6 and PMS2. Both copies of the MMR gene need to be inactivated for cancer development. Since Lynch syndrome family members are born with one defective copy of one of the MMR genes in their germline, they only need to acquire a so-called "second hit" to inactivate the MMR gene; hence, they usually develop cancer at an early age. MMR gene inactivation leads to accumulation of mutations, particularly in short repeat tracts known as microsatellites, causing microsatellite instability (MSI). MSI is the hallmark of Lynch syndrome tumors, but is present in approximately 15% of sporadic tumors as well. There are several possible mechanisms of somatic inactivation (i.e., the "second hit") of MMR genes, for instance deletion of the wild-type copy, leading to loss of heterozygosity (LOH); methylation of promoter regions necessary for gene transcription; or mitotic recombination or gene conversion. In the present study, LOH was found to be the most frequent mechanism of somatic inactivation in Lynch syndrome tumors carrying germline mutations in the MMR genes. We also studied MLH1/MSH2 deletion carriers and found that somatic mutations identical to the ones in the germline occurred frequently in colorectal cancers and were also present in extracolonic Lynch syndrome-associated tumors. Chromosome-specific marker analysis implied that gene conversion, rather than mitotic recombination or deletion of the respective gene locus, accounted for wild-type inactivation. Lynch syndrome patients are predisposed to certain types of cancers, the most common ones being colorectal, endometrial and gastric cancer. Gastric cancer and uroepithelial tumors of the bladder and ureter were observed to be true Lynch syndrome tumors, with MMR deficiency as the driving force of tumorigenesis. Brain tumors and kidney carcinoma, on the other hand, were mostly microsatellite-stable (MSS), implying the possibility of alternative routes of tumor development. These results have possible implications for clinical cancer surveillance. In about one-third of families suspected of Lynch syndrome, mutations in MMR genes are not found, and we therefore looked for alternative mechanisms of predisposition. According to our results, large genomic deletions, mainly in MSH2, and germline epimutations in MLH1 together explain a significant fraction of point-mutation-negative families suspected of Lynch syndrome, and are associated with characteristic clinical and family features. Our findings have important implications for the diagnosis and management of Lynch syndrome families.

Relevance: 10.00%

Abstract:

Quantization formats of four digital holographic codes (Lohmann, Lee, Burckhardt and Hsueh-Sawchuk) are evaluated. A quantitative assessment is made from errors in both the Fourier transform and image domains. In general, small errors in the Fourier amplitude or phase alone do not guarantee high image fidelity. From quantization considerations, the Lee hologram is shown to be the best choice for randomly phase coded objects. When phase coding is not feasible, the Lohmann hologram is preferable as it is easier to plot.
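The point that Fourier-domain error alone does not determine image fidelity can be illustrated with a toy experiment: quantize only the amplitude, or only the phase, of an array's Fourier transform and compare the resulting image-domain errors. The bit depth and test data below are arbitrary assumptions.

```python
# Toy demonstration: the same Fourier-domain quantization budget yields very
# different image-domain errors depending on whether amplitude or phase
# absorbs it. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))                 # stand-in for an image
F = np.fft.fft2(img)
amp, phase = np.abs(F), np.angle(F)

def quantize(v, lo, hi, bits=6):
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((v - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

# Quantize only the amplitude, then only the phase, and reconstruct.
rec_amp = np.fft.ifft2(quantize(amp, 0, amp.max()) * np.exp(1j * phase)).real
rec_phase = np.fft.ifft2(amp * np.exp(1j * quantize(phase, -np.pi, np.pi))).real

for name, rec in [("amplitude quantized", rec_amp), ("phase quantized", rec_phase)]:
    print(name, np.mean((img - rec) ** 2))
```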

Relevance: 10.00%

Abstract:

Age estimation from facial images is receiving increasing attention for applications such as age-based access control and age-adaptive targeted marketing. Since even humans can be led into error by the complex biological processes involved, finding a robust method remains a research challenge today. In this paper, we propose a new framework for the integration of Active Appearance Models (AAM), Local Binary Patterns (LBP), Gabor wavelets (GW) and Local Phase Quantization (LPQ), in order to obtain a highly discriminative feature representation which is able to model shape, appearance, wrinkles and skin spots. In addition, this paper proposes a novel flexible hierarchical age estimation approach consisting of a multi-class Support Vector Machine (SVM) to classify a subject into an age group, followed by Support Vector Regression (SVR) to estimate a specific age. Errors that may occur in the classification step, caused by the hard boundaries between age classes, are compensated for in the specific age estimation by a flexible overlapping of the age ranges. The performance of the proposed approach was evaluated on the FG-NET Aging and MORPH Album 2 datasets, and mean absolute errors (MAE) of 4.50 and 5.86 years were achieved, respectively. The robustness of the proposed approach was also evaluated on a merge of both datasets, and an MAE of 5.20 years was achieved. Furthermore, we have also compared age estimation by humans with the proposed approach, and the results show that the machine outperforms humans. The proposed approach is competitive with the current state of the art, and the local phase features provide additional robustness to blur, lighting and expression variation.
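A minimal sketch of the hierarchical classify-then-regress scheme follows, using scikit-learn's SVC and SVR on synthetic data. The group boundaries, overlap widths and features are illustrative assumptions, not the paper's settings.

```python
# Sketch of hierarchical age estimation: a multi-class SVM assigns an age
# group, then a per-group SVR gives a specific age, with regressors trained
# on overlapping ranges to soften boundary errors. Synthetic data only.
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
n = 600
ages = rng.uniform(0, 70, n)
X = np.column_stack([ages + rng.normal(0, 5, n),     # stand-in features
                     np.sqrt(ages) + rng.normal(0, 1, n)])

groups = np.digitize(ages, [20, 45])                 # hard labels: 0, 1, 2
bounds = [(0, 20), (15, 45), (40, 70)]               # overlapped training ranges

clf = SVC().fit(X, groups)
regs = []
for lo, hi in bounds:
    mask = (ages >= lo) & (ages <= hi)
    regs.append(SVR().fit(X[mask], ages[mask]))

def estimate_age(x):
    g = clf.predict([x])[0]          # coarse: which age group?
    return regs[g].predict([x])[0]   # fine: specific age within the group

print(f"true {ages[0]:.1f}, estimated {estimate_age(X[0]):.1f}")
```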

Relevance: 10.00%

Abstract:

Using an analysis-by-synthesis (AbS) approach, we develop a soft-decision-based switched vector quantization (VQ) method for high-quality and low-complexity coding of wideband speech line spectral frequency (LSF) parameters. For each switching region, a low-complexity transform domain split VQ (TrSVQ) is designed. The overall rate-distortion (R/D) performance optimality of the new switched quantizer is addressed in the Gaussian mixture model (GMM) based parametric framework. In the AbS approach, the reduction of quantization complexity is achieved through the use of nearest neighbor (NN) TrSVQs and splitting the transform domain vector into a higher number of subvectors. Compared to current LSF quantization methods, the new method is shown to provide a competitive or better trade-off between R/D performance and complexity.
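The split-VQ building block behind TrSVQ can be sketched briefly: split each vector into subvectors and quantize each against its own small codebook, which keeps nearest-neighbour search cheap. The codebook training by random sampling below is a placeholder (a real coder would use LBG/k-means), and the dimensions are illustrative.

```python
# Sketch of split vector quantization: each subvector has its own small
# codebook, so nearest-neighbour search stays cheap. Illustrative only;
# random-sampling "training" stands in for LBG/k-means.
import numpy as np

rng = np.random.default_rng(0)

def train_split_vq(data, n_splits=2, codebook_size=32):
    """One codebook per subvector, trained here by random sampling."""
    subs = np.array_split(data, n_splits, axis=1)
    return [s[rng.choice(len(s), codebook_size, replace=False)] for s in subs]

def quantize_split(x, codebooks):
    parts = np.array_split(x, len(codebooks))
    out = []
    for part, cb in zip(parts, codebooks):
        out.append(cb[np.linalg.norm(cb - part, axis=1).argmin()])
    return np.concatenate(out)

data = rng.normal(size=(2000, 10))        # stand-in 10-D LSF-like vectors
codebooks = train_split_vq(data)
x = data[0]
print(np.linalg.norm(x - quantize_split(x, codebooks)))
```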

Relevance: 10.00%

Abstract:

This master's thesis explores some of the most recent developments in noncommutative quantum field theory. This old theme, first suggested by Heisenberg in the late 1940s, has had a renaissance during the last decade due to the firmly held belief that space-time becomes noncommutative at small distances, and also due to the discovery that string theory in a background field gives rise to noncommutative field theory as an effective low-energy limit. This has led to interesting attempts to create a noncommutative standard model, a noncommutative minimal supersymmetric standard model, noncommutative gravity theories, etc. This thesis reviews themes and problems such as UV/IR mixing, charge quantization, how to deal with the noncommutative symmetries, how to solve the Seiberg-Witten map, its connection to fluid mechanics, and the problem of constructing general coordinate transformations to obtain a theory of noncommutative gravity. An emphasis has been put on presenting both the group-theoretical results and the string-theoretical ones, so that a comparison of the two can be made.

Relevance: 10.00%

Abstract:

A simplified yet analytical treatment of a few ballistic properties of III-V quantum wire transistors is presented, considering the band non-parabolicity of the electrons in accordance with Kane's energy band model and using the Bohr-Sommerfeld technique. The confinement of the electrons in the vertical and lateral directions is modeled by infinite triangular and square well potentials, respectively, giving rise to two-dimensional electron confinement. It is shown that the quantum gate capacitance, the drain currents and the channel conductance in such systems are oscillatory functions of the applied gate and drain voltages in the strong inversion regime. The formation of subbands due to the electrical and structural quantization leads to the discreteness in the characteristics of such 1D ballistic transistors. A comparison is also made between the self-consistent numerical solution of the Poisson-Schrödinger equations and the analytical results obtained with the Bohr-Sommerfeld method. The results derived in this paper for all the energy band models reduce to well-known results under certain limiting conditions, which establishes the mathematical compatibility of our generalized theoretical formalism.
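For reference, the Bohr-Sommerfeld condition used in such semiclassical treatments has the standard textbook form below; the triangular-well energy levels quoted are the usual WKB result for a linear potential with a hard wall, not expressions taken from the paper.

```latex
% Textbook Bohr-Sommerfeld condition (background, not the paper's derivation);
% the Maslov-type constant gamma depends on the boundary conditions:
\oint p \,\mathrm{d}q = 2\pi\hbar\,(n + \gamma), \qquad n = 0, 1, 2, \ldots
% gamma = 1/2 for two smooth turning points; gamma = 3/4 for a triangular
% well V(x) = eFx bounded by a hard wall at x = 0, which gives
E_n \approx \left(\frac{\hbar^{2}}{2m^{*}}\right)^{1/3}
      \left[\frac{3\pi eF}{2}\left(n + \tfrac{3}{4}\right)\right]^{2/3}.
```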