962 results for Average Entropy


Relevance: 60.00%

Abstract:

With growing success in experimental implementations, it is critical to identify a gold standard for quantum information processing: a single measure of distance that can be used to compare and contrast different experiments. We enumerate a set of criteria that such a distance measure must satisfy to be both experimentally and theoretically meaningful. We then assess a wide range of possible measures against these criteria, before making a recommendation as to the best measures to use in characterizing quantum information processing.
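For context, the snippet below computes two distance measures that commonly appear in such comparisons, the trace distance and the Uhlmann fidelity between density matrices; it is an illustrative sketch only (assuming NumPy and SciPy) and does not reproduce the paper's criteria or its recommendation.

```python
# Illustrative candidate distance measures between quantum states (2x2 density
# matrices). Not the paper's recommended measure -- just common examples.
import numpy as np
from scipy.linalg import sqrtm

def trace_distance(rho, sigma):
    """D(rho, sigma) = 0.5 * Tr|rho - sigma| (sum of |eigenvalues| of the difference)."""
    eigvals = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(eigvals))

def fidelity(rho, sigma):
    """Uhlmann fidelity F = (Tr sqrt( sqrt(rho) sigma sqrt(rho) ))^2."""
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

rho = np.array([[0.9, 0.0], [0.0, 0.1]])    # nearly pure |0>-like state
sigma = np.array([[0.5, 0.4], [0.4, 0.5]])  # noisy |+>-like state
print(trace_distance(rho, sigma), fidelity(rho, sigma))
```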

Relevance: 40.00%

Abstract:

Approximate entropy (ApEn) of blood pressure (BP) can be easily measured with software analysing 24-h ambulatory BP monitoring (ABPM), but the clinical value of this measure is unknown. In a prospective study we investigated whether ApEn of BP predicts, in addition to the average and variability of BP, the risk of hypertensive crisis. In 57 patients with known hypertension we measured ApEn, average and variability of systolic and diastolic BP based on 24-h ABPM. Eight of these fifty-seven patients developed hypertensive crisis during follow-up (mean follow-up duration 726 days). In bivariate regression analysis, ApEn of systolic BP (P<0.01), average systolic BP (P=0.02) and average diastolic BP (P=0.03) were significant predictors of hypertensive crisis. The incidence rate ratio of hypertensive crisis was 14.0 (95% confidence interval (CI) 1.8, 631.5; P<0.01) for high ApEn of systolic BP compared with low values. In multivariable regression analysis, ApEn of systolic BP (P=0.01) and average diastolic BP (P<0.01) were independent predictors of hypertensive crisis. A combination of these two measures had a positive predictive value of 75% and a negative predictive value of 91%. ApEn, combined with other measures of 24-h ABPM, is a potentially powerful predictor of hypertensive crisis. If confirmed in independent samples, these findings would have major clinical implications, since measures predicting the risk of hypertensive crisis identify patients requiring intensive follow-up and intensified therapy.
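For reference, approximate entropy itself is a simple statistic; a minimal sketch of the standard Pincus-style computation (not the ABPM vendor software used in the study, and with a purely synthetic BP series) might look like this:

```python
import numpy as np

def approximate_entropy(u, m=2, r=None):
    """ApEn(m, r) of a 1-D series u: phi(m) - phi(m+1), self-matches included."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    if r is None:
        r = 0.2 * np.std(u)          # a common default: 20% of the series SD

    def phi(m):
        x = np.array([u[i:i + m] for i in range(n - m + 1)])       # all length-m templates
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)  # Chebyshev distances
        c = np.mean(d <= r, axis=1)                                # fraction of close templates
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# synthetic stand-in for 24-h systolic BP sampled every 30 min
bp = 120 + 10 * np.sin(np.linspace(0, 6 * np.pi, 48)) + np.random.randn(48)
print(approximate_entropy(bp))
```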

Relevance: 30.00%

Abstract:

This work provides a methodology for synthesizing isolated multi-component, high entropy alloy nanoparticles. A wet chemical synthesis technique was used to synthesize NiFeCrCuCo nanoparticles. The as-synthesized nanoparticles were spherical with an average size of 26.7 +/- 3.3 nm. The average composition of the as-synthesized nanoparticle dispersion was 26 +/- 2 at% Cr, 14 +/- 2 at% Fe, 10 +/- 0.6 at% Co, 25 +/- 0.1 at% Ni and 25 +/- 1.1 at% Cu. Compositional analysis, conducted using line-profile analysis and compositional mapping at the single-nanoparticle level, revealed a fairly uniform distribution of all five component elements within the nanoparticle volume. Electron diffraction analysis clearly revealed that the structure of the as-synthesized nanoparticles was face-centered cubic.
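The "high entropy" label is conventionally justified by the ideal configurational entropy of mixing, ΔS_mix = -R Σ x_i ln x_i; the short calculation below applies that textbook formula to the average composition reported above (an illustration, not a computation from the paper):

```python
import math

R = 8.314  # gas constant, J/(mol K)

# average measured composition from the abstract, as atomic fractions
x = {"Cr": 0.26, "Fe": 0.14, "Co": 0.10, "Ni": 0.25, "Cu": 0.25}

# ideal configurational entropy of mixing: dS_mix = -R * sum(x_i ln x_i)
dS_mix = -R * sum(xi * math.log(xi) for xi in x.values())
print(f"dS_mix = {dS_mix:.2f} J/(mol K)  ({dS_mix / R:.2f} R)")
# equimolar five-component reference value: ln(5) R ~ 1.61 R
```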

Relevance: 30.00%

Abstract:

Motivated by accurate average-case analysis, MOdular Quantitative Analysis (MOQA) has been developed at the Centre for Efficiency Oriented Languages (CEOL). In essence, MOQA allows the programmer to determine the average running time of a broad class of programs directly from the code in a (semi-)automated way. The MOQA approach has the property of randomness preservation, which means that applying any operation to a random structure results in an output isomorphic to one or more random structures; this is the key to systematic timing. Based on original MOQA research, we discuss the design and implementation of a new domain-specific scripting language based on randomness-preserving operations and random structures. It is designed to facilitate compositional timing by systematically tracking the distributions of inputs and outputs. The notion of a labelled partial order (LPO) is the basic data type in the language. The programmer uses built-in MOQA operations together with restricted control flow statements to design MOQA programs. This MOQA language is formally specified both syntactically and semantically in this thesis. A practical language interpreter implementation is provided and discussed. By analysing new algorithms and data restructuring operations, we demonstrate the wide applicability of the MOQA approach. We also extend MOQA theory to a number of other domains besides average-case analysis, and show the strong connection between MOQA and parallel computing, reversible computing and data entropy analysis.
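As a rough, hypothetical illustration of the basic data type (this is plain Python, not the MOQA language or its interpreter), an LPO can be viewed as labelled nodes plus order constraints, and the associated random structure as the uniform distribution over the label arrangements consistent with those constraints:

```python
# Hypothetical LPO representation: nodes plus (a, b) pairs meaning "a below b".
# The random structure over an LPO corresponds to its linear extensions,
# counted here by brute force for a tiny example.
from itertools import permutations

def linear_extensions(nodes, below):
    """Count orderings of `nodes` respecting every (a, b) constraint (a before b)."""
    count = 0
    for perm in permutations(nodes):
        pos = {v: i for i, v in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in below):
            count += 1
    return count

# a three-node "V" shape: root r below two incomparable nodes x and y
print(linear_extensions(["r", "x", "y"], {("r", "x"), ("r", "y")}))  # -> 2
```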

Relevance: 30.00%

Abstract:

I analyze two inequalities on entropy and information, one due to von Neumann and a more recent one due to Schiffer, and show that the relevant quantities in these inequalities are related by special doubly stochastic matrices (DSMs). I then use a generalization of the first inequality to prove algebraically a generalization of Schiffer's inequality to arbitrary DSMs. I also give a second interpretation to the latter inequality, determine its domain of applicability, and illustrate it using Zeeman splitting. This example shows that symmetric (degenerate) systems have less entropy than the corresponding split systems when compared at the same average energy. This seemingly counter-intuitive result is explained thermodynamically.
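A small numerical illustration of the underlying mechanism, the standard fact that acting on a probability vector with a doubly stochastic matrix cannot decrease its Shannon entropy (background only, not the specific inequalities analysed in the paper):

```python
import numpy as np

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))   # in nats

# a doubly stochastic matrix: every row and every column sums to 1
D = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])

p = np.array([0.7, 0.2, 0.1])   # original distribution
q = D @ p                       # mixed distribution (majorized by p)

print(shannon_entropy(p), shannon_entropy(q))  # entropy does not decrease
```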

Relevance: 30.00%

Abstract:

A quantum random walk on the integers exhibits pseudo memory effects, in that its probability distribution after N steps is determined by reshuffling the first N distributions that arise in a classical random walk with the same initial distribution. In a classical walk, entropy increase can be regarded as a consequence of the majorization ordering of successive distributions. The Lorenz curves of successive distributions for a symmetric quantum walk reveal no majorization ordering in general. Nevertheless, entropy can increase, and computer experiments show that it does so on average. Varying the stages at which the quantum coin system is traced out leads to new quantum walks, including a symmetric walk for which majorization ordering is valid but the spreading rate exceeds that of the usual symmetric quantum walk.
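For the classical side of the comparison, the sketch below (illustrative code with assumed helper names, not taken from the paper) propagates a symmetric classical walk on the integers, checks that each distribution majorizes its successor, and confirms the resulting entropy increase:

```python
import numpy as np

def step(p):
    """One step of a symmetric classical walk: move left or right with probability 1/2."""
    q = np.zeros(len(p) + 2)
    q[:-2] += 0.5 * p   # mass moved one site to the left
    q[2:] += 0.5 * p    # mass moved one site to the right
    return q

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def majorizes(p, q):
    """True if p majorizes q: sorted partial sums of p dominate those of q."""
    a = np.sort(np.pad(p, (0, max(0, len(q) - len(p)))))[::-1].cumsum()
    b = np.sort(np.pad(q, (0, max(0, len(p) - len(q)))))[::-1].cumsum()
    return bool(np.all(a >= b - 1e-12))

p = np.array([1.0])                      # walker starts at the origin
for n in range(1, 6):
    p_next = step(p)
    print(n, round(entropy(p_next), 3), majorizes(p, p_next))
    p = p_next
```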

Relevance: 30.00%

Abstract:

The problem addressed concerns the determination of the average number of successive attempts of guessing a word of a certain length consisting of letters with given probabilities of occurrence. Both first- and second-order approximations to a natural language are considered. The guessing strategy used is guessing words in decreasing order of probability. When word and alphabet sizes are large, approximations are necessary in order to estimate the number of guesses. Several kinds of approximations are discussed, demonstrating moderate requirements regarding both memory and central processing unit (CPU) time. When considering realistic sizes of alphabets and words (100), the number of guesses can be estimated within minutes with reasonable accuracy (a few percent) and may therefore constitute an alternative to, e.g., various entropy expressions. For many probability distributions, the density of the logarithm of probability products is close to a normal distribution. For those cases, it is possible to derive an analytical expression for the average number of guesses. The proportion of guesses needed on average compared to the total number decreases almost exponentially with the word length. The leading term in an asymptotic expansion can be used to estimate the number of guesses for large word lengths. Comparisons with analytical lower bounds and entropy expressions are also provided.
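At toy scale the quantity can be computed exactly; the sketch below (illustrative only, with a made-up three-letter alphabet) evaluates the average number of guesses under the decreasing-probability strategy for a first-order i.i.d. letter model, alongside 2^H as one simple entropy-based figure of the kind the paper compares against:

```python
import numpy as np

def average_guesses(p):
    """Expected number of guesses when candidates are tried in decreasing probability."""
    p = np.sort(np.asarray(p, dtype=float))[::-1]
    return np.sum(np.arange(1, len(p) + 1) * p)

def entropy_bits(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# first-order model: letters drawn i.i.d.; the alphabet is kept tiny so the
# exact enumeration of all 3**6 words stays cheap
letters = np.array([0.5, 0.3, 0.2])
L = 6
word_probs = letters
for _ in range(L - 1):
    word_probs = np.outer(word_probs, letters).ravel()

print(average_guesses(word_probs), 2 ** entropy_bits(word_probs))
```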

Relevance: 20.00%

Abstract:

This paper proposes a novel relative entropy rate (RER) based approach for multiple HMM (MHMM) approximation of a class of discrete-time uncertain processes. Under different uncertainty assumptions, the model design problem is posed either as a min-max optimisation problem or as a stochastic minimisation problem on the RER between joint laws describing the state and output processes (rather than the more usual RER between output processes). A suitable filter is proposed for which performance results are established that bound conditional mean estimation performance and show that estimation performance improves as the RER is reduced. These filter consistency and convergence bounds are the first results characterising multiple HMM approximation performance and suggest that joint RER concepts provide a useful model selection criterion. The proposed model design process and MHMM filter are demonstrated on an important image-processing dim-target detection problem.
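To make the RER notion concrete, the sketch below computes the relative entropy rate between two fully observed Markov chains using the standard closed form; the joint-law RER between HMMs used in the paper requires further machinery and is not reproduced here (NumPy assumed):

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a Markov transition matrix P (rows sum to 1)."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return pi / pi.sum()

def relative_entropy_rate(P, Q):
    """RER between Markov chains: sum_i pi_i sum_j P_ij log(P_ij / Q_ij)."""
    pi = stationary(P)
    rer = 0.0
    for i in range(P.shape[0]):
        for j in range(P.shape[1]):
            if P[i, j] > 0:
                rer += pi[i] * P[i, j] * np.log(P[i, j] / Q[i, j])
    return rer

P = np.array([[0.9, 0.1], [0.2, 0.8]])   # "true" chain
Q = np.array([[0.7, 0.3], [0.3, 0.7]])   # candidate approximating model
print(relative_entropy_rate(P, Q))
```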

Relevance: 20.00%

Abstract:

The importance of agriculture in many countries has tended to decline as their economies move from a resource base to a manufacturing industry base. Although the level of agricultural production in first world countries has increased over the past two decades, this increase has generally been at a lower rate than in other sectors of their economies. Despite this growth in secondary and high-technology industries, developed countries have continued to encourage and support their agricultural industries, through both tariffs and price support. Following pressure from developing economies, particularly through the World Trade Organisation (WTO), the GATT Uruguay Round and the Cairns Group, developed countries are now in various stages of winding back or de-coupling agricultural support within their economies. A major concern of farmers in protected agricultural markets is the impact of free-market trade in agricultural commodities on farm incomes, profitability and land values. This paper will analyse both the capital and income performance of the NSW rural land market over the period 1990-1999. The analysis will be based on several rural land use classifications and will compare the total return from rural properties based on the farm income generated by both the average farmer and those farmers considered to be in the top 20% of the various land use areas. The analysis will provide a comprehensive overview of rural production in a free trade economy.

Relevance: 20.00%

Abstract:

In this chapter we propose clipping with amplitude and phase corrections to reduce the peak-to-average power ratio (PAR) of orthogonal frequency division multiplexed (OFDM) signals in the high-speed wireless local area networks defined in the IEEE 802.11a physical layer. The proposed techniques can be implemented with a small modification at the transmitter, and the receiver remains standard compliant. PAR reduction of as much as 4 dB can be achieved by selecting a suitable clipping ratio and a correction factor depending on the constellation used. Out-of-band noise (OBN) is also reduced.
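A minimal sketch of the baseline effect, plain amplitude clipping of an 802.11a-style OFDM symbol at a chosen clipping ratio (the chapter's amplitude and phase correction factors are not reproduced here; the subcarrier mapping and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# one OFDM symbol: 52 QPSK-modulated subcarriers, IFFT size 64 (802.11a-like)
bits = rng.integers(0, 2, size=(52, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
X = np.zeros(64, dtype=complex)
X[1:27] = qpsk[:26]      # positive-frequency subcarriers
X[38:64] = qpsk[26:]     # negative-frequency subcarriers
x = np.fft.ifft(X)

# amplitude clipping at a chosen clipping ratio (phase preserved)
cr = 1.4                                    # clipping ratio relative to RMS amplitude
a = cr * np.sqrt(np.mean(np.abs(x) ** 2))   # clipping threshold
mag = np.abs(x)
clipped = np.where(mag > a, a * x / np.maximum(mag, 1e-12), x)

print(papr_db(x), papr_db(clipped))
```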

Relevance: 20.00%

Abstract:

Parallel combinatory orthogonal frequency division multiplexing (PC-OFDM) yields a lower maximum peak-to-average power ratio (PAR), higher bandwidth efficiency and a lower bit error rate (BER) on Gaussian channels compared to OFDM systems. However, PC-OFDM does not significantly improve the statistics of the PAR. In this chapter, the use of a set of fixed permutations to improve the statistics of the PAR of a PC-OFDM signal is presented. For this technique, interleavers are used to produce K-1 permuted sequences from the same information sequence. The sequence with the lowest PAR among the K sequences is chosen for transmission. The PAR of a PC-OFDM signal can be further reduced by 3-4 dB by this technique. Mathematical expressions for the complementary cumulative density function (CCDF) of the PAR of a PC-OFDM signal and an interleaved PC-OFDM signal are also presented.
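A sketch of the selection mechanism, using random permutations as stand-ins for the fixed interleavers and ignoring the PC mapping and the signalling of the chosen permutation to the receiver:

```python
import numpy as np

rng = np.random.default_rng(1)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# illustrative stand-in for a frequency-domain symbol: 64 QPSK values
sym = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=64) / np.sqrt(2)

K = 8                                                    # number of candidate sequences
perms = [np.arange(64)] + [rng.permutation(64) for _ in range(K - 1)]

# apply each interleaver, transform to the time domain, keep the lowest-PAR candidate
candidates = [np.fft.ifft(sym[p]) for p in perms]
best = min(candidates, key=papr_db)

print([round(papr_db(c), 2) for c in candidates], "-> selected:", round(papr_db(best), 2))
```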

Relevance: 20.00%

Abstract:

Uninhabited aerial vehicles (UAVs) are a cutting-edge technology that is at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications and routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered to be one of the most important issues to be addressed, given its safety-critical nature. The collision avoidance problem can be roughly organised into three areas: 1) Sense; 2) Detect; and 3) Avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm that is capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning. This translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm. The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts for target heading angle estimation. In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit the temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy-related concepts. The filter design process is posed as a min-max optimisation problem based on a joint RER cost criterion.
We provide proof that this joint RER cost criterion provides a bound on the conditional mean estimate (CME) performance of our MHMM filter, and this in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time-consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operation conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400 m to 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning ahead of impact that approaches the 12.5-second response time recommended for human pilots. Furthermore, readily available graphics processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation. There is also scope for further improvement in performance through code optimisations. Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue that is currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple HMM filtering approach and a novel RER-based multiple filter design process. The utility of our multiple HMM filtering approach and RER concepts, however, extends beyond the target detection problem. This is demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
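As a rough, self-contained illustration of the image-enhancement idea, the sketch below applies a generic white top-hat morphological filter to a synthetic frame with an injected dim target; the thesis compares specific candidate morphological filters and follows them with MHMM temporal filtering, neither of which is reproduced here (NumPy and SciPy assumed):

```python
import numpy as np
from scipy.ndimage import grey_opening

rng = np.random.default_rng(2)

# synthetic sky frame: smooth background ramp, sensor noise, and a dim 2x2 "target"
frame = np.linspace(100, 140, 120)[None, :] * np.ones((90, 1))
frame += rng.normal(0, 1.0, frame.shape)
frame[40:42, 60:62] += 6.0        # faint point-like target at (row 40, col 60)

# white top-hat: original minus morphological opening; the opening removes small
# bright features, so the residual highlights point targets over the background
tophat = frame - grey_opening(frame, size=(5, 5))

peak = np.unravel_index(np.argmax(tophat), tophat.shape)
print("brightest residual at", peak)   # typically lands on the injected target
```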

Relevance: 20.00%

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
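As a small illustration of the coefficient-modelling step, the sketch below estimates the generalized Gaussian shape parameter of a wavelet subband by moment matching, a simpler stand-in for the least-squares fit and the fixed wavelet-packet structure described above (NumPy, SciPy and PyWavelets assumed; the image is synthetic):

```python
import numpy as np
import pywt
from scipy.special import gamma
from scipy.optimize import brentq

def gg_shape(coeffs):
    """Estimate the generalized Gaussian shape parameter by matching the moment
    ratio E|x| / sqrt(E[x^2]) (valid while the ratio stays below ~0.87)."""
    c = np.asarray(coeffs, dtype=float).ravel()
    rho = np.mean(np.abs(c)) / np.sqrt(np.mean(c ** 2))
    f = lambda b: gamma(2 / b) / np.sqrt(gamma(1 / b) * gamma(3 / b)) - rho
    return brentq(f, 0.05, 10.0)

rng = np.random.default_rng(3)
img = rng.normal(size=(128, 128))             # synthetic stand-in for a fingerprint image
coeffs = pywt.wavedec2(img, "db4", level=3)   # ordinary 3-level 2-D wavelet decomposition
cH, cV, cD = coeffs[1]                        # detail subbands at the coarsest level
print(round(gg_shape(cD), 2))                 # ~2 for Gaussian input; typically <1 for real subbands
```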

Relevance: 20.00%

Abstract:

This paper argues for a model of adaptive design for sustainable architecture within a framework of entropy evolution. The spectrum of sustainable architecture consists of the efficient use of energy and material resources over the life-cycle of buildings, the active involvement of occupants in micro-climate control within the building, and the natural environment as the physical context. The interactions amongst all these parameters compose a complex system of sustainable architectural design, for which conventional linear and fragmented design technologies are insufficient to indicate holistic and ongoing environmental performance. The latest interpretation of the Second Law of Thermodynamics provides a microscopic formulation of the entropy evolution of complex open systems. It offers a design framework in which an adaptive system evolves towards optimization in an open system; here, the adaptive system evolves towards the optimization of building environmental performance. The paper concludes that adaptive modelling within entropy evolution is a design alternative for sustainable architecture.