82 results for Continuous Variable Systems


Relevance: 30.00%

Abstract:

With the advent of new video standards such as MPEG-4 Part 10 / H.264 (formerly H.26L), demands for advanced video coding, particularly in the area of variable block size video motion estimation (VBSME), are increasing. In this paper, we propose a new one-dimensional (1-D) very large-scale integration architecture for full-search VBSME (FSVBSME). The VBS sum of absolute differences (SAD) computation is performed by re-using the results of smaller sub-block computations, which are distributed and combined by incorporating a shuffling mechanism within each processing element. Whereas a conventional 1-D architecture can process only one motion vector (MV), the new architecture can process up to 41 MV sub-blocks (within a macroblock) in the same number of clock cycles.
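As a rough illustration of the SAD-reuse idea (a software sketch only, not the proposed hardware datapath; the names and partition convention below are assumptions), the sixteen 4x4 SADs of a macroblock can be summed hierarchically to give the SADs of all 41 variable-size partitions for one candidate motion vector:

```python
import numpy as np

def vbs_sads(cur, ref):
    """Sketch of SAD reuse: compute the sixteen 4x4 SADs of a 16x16
    macroblock once, then sum them to obtain the SADs of all 41
    variable-size sub-blocks for a single candidate MV."""
    # Level 0: 4x4 grid of 4x4-block SADs (partition names are "width x height")
    sad4 = np.array([[np.abs(cur[4*i:4*i+4, 4*j:4*j+4].astype(int) -
                             ref[4*i:4*i+4, 4*j:4*j+4].astype(int)).sum()
                      for j in range(4)] for i in range(4)])
    sads = {'4x4': sad4}                                         # 16 blocks
    sads['4x8']   = sad4[0::2, :] + sad4[1::2, :]                #  8 blocks
    sads['8x4']   = sad4[:, 0::2] + sad4[:, 1::2]                #  8 blocks
    sads['8x8']   = sads['8x4'][0::2, :] + sads['8x4'][1::2, :]  #  4 blocks
    sads['8x16']  = sads['8x8'][0, :] + sads['8x8'][1, :]        #  2 blocks
    sads['16x8']  = sads['8x8'][:, 0] + sads['8x8'][:, 1]        #  2 blocks
    sads['16x16'] = sads['8x8'].sum()                            #  1 block -> 41 total
    return sads

# Toy usage with random 16x16 current and reference blocks
rng = np.random.default_rng(0)
cur, ref = rng.integers(0, 256, (16, 16)), rng.integers(0, 256, (16, 16))
print(vbs_sads(cur, ref)['16x16'])
```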

Relevance: 30.00%

Abstract:

Recently Ziman et al. [Phys. Rev. A 65, 042105 (2002)] introduced the concept of a universal quantum homogenizer, a quantum machine that takes as input a given (system) qubit initially in an arbitrary state rho and a set of N reservoir qubits initially prepared in the state xi. The homogenizer realizes, in the limiting sense, the transformation such that at the output each qubit is in an arbitrarily small neighborhood of the state xi, irrespective of the initial states of the system and the reservoir qubits. In this paper we generalize the concept of quantum homogenization to qudits, that is, to d-dimensional quantum systems. We prove that the partial-swap operation induces a contractive map whose fixed point is the original state of the reservoir. We propose an optical realization of quantum homogenization for Gaussian states and prove that an incoming state of a photon field is homogenized in an array of beam splitters. Using Simon's criterion, we study entanglement between the outgoing beams from the beam splitters and derive an inseparability condition for a pair of output beams as a function of the degree of squeezing in the input beams.
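A minimal numerical sketch of the qubit case (assuming the partial-swap unitary P = cos(eta) I + i sin(eta) SWAP; the function name and the choice eta = 0.3 are illustrative) shows the contraction of the system state toward the reservoir state xi under repeated collisions:

```python
import numpy as np

def partial_swap_channel(rho, xi, eta):
    """One collision of the system qubit (rho) with a fresh reservoir qubit (xi)
    via the partial-swap unitary P = cos(eta) I + i sin(eta) SWAP."""
    SWAP = np.array([[1, 0, 0, 0],
                     [0, 0, 1, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1]], dtype=complex)
    P = np.cos(eta) * np.eye(4) + 1j * np.sin(eta) * SWAP
    joint = P @ np.kron(rho, xi) @ P.conj().T
    # Partial trace over the reservoir qubit (second subsystem)
    return joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# System starts in |0><0|, reservoir qubits in the maximally mixed state
rho = np.array([[1, 0], [0, 0]], dtype=complex)
xi = 0.5 * np.eye(2, dtype=complex)
for _ in range(50):                      # 50 reservoir qubits
    rho = partial_swap_channel(rho, xi, eta=0.3)
print(np.round(rho, 4))                  # approaches xi: the map is contractive
```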

Relevance: 30.00%

Abstract:

A new search-space-updating technique for genetic algorithms is proposed for continuous optimisation problems. Rather than gradually reducing the search space during the evolution process with a fixed reduction rate set ‘a priori’, the upper and lower boundaries for each variable in the objective function are dynamically adjusted based on its distribution statistics. To test its effectiveness, the technique is applied to a number of benchmark optimisation problems and compared with three other techniques, namely the genetic algorithm with parameter space size adjustment (GAPSSA) technique [A.B. Djurišic, Elite genetic algorithms with adaptive mutations for solving continuous optimization problems – application to modeling of the optical constants of solids, Optics Communications 151 (1998) 147–159], the successive zooming genetic algorithm (SZGA) [Y. Kwon, S. Kwon, S. Jin, J. Kim, Convergence enhanced genetic algorithm with successive zooming method for solving continuous optimization problems, Computers and Structures 81 (2003) 1715–1725] and a simple GA. The tests show that for well-posed problems, existing search-space-updating techniques perform well in terms of convergence speed and solution precision; however, for some ill-posed problems these techniques are statistically inferior to a simple GA. All the tests show that the proposed search-space-updating technique is statistically superior to its counterparts.
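A minimal sketch of one way such a statistics-driven boundary update could look (the mean ± k·standard-deviation rule below is an assumption for illustration, not the authors' exact update):

```python
import numpy as np

def update_bounds(pop, lower, upper, k=2.0):
    """Hypothetical per-variable search-space update: re-centre each variable's
    bounds on the population mean and scale them to k standard deviations,
    clipped to the original box."""
    mu, sigma = pop.mean(axis=0), pop.std(axis=0)
    new_lower = np.maximum(lower, mu - k * sigma)
    new_upper = np.minimum(upper, mu + k * sigma)
    return new_lower, new_upper

# Toy usage: a population of 30 individuals in a 5-D search space
rng = np.random.default_rng(0)
lower, upper = -5.0 * np.ones(5), 5.0 * np.ones(5)
pop = rng.uniform(lower, upper, size=(30, 5))
lower, upper = update_bounds(pop, lower, upper)   # applied once per generation
```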

Relevance: 30.00%

Abstract:

A continuous forward algorithm (CFA) is proposed for nonlinear modelling and identification using radial basis function (RBF) neural networks. The problem considered here is simultaneous network construction and parameter optimization, well known to be a hard mixed-integer problem. The proposed algorithm performs these two tasks within an integrated analytic framework and offers two important advantages. First, the model performance can be significantly improved through continuous parameter optimization. Secondly, the neural representation can be built without generating and storing all candidate regressors, leading to significantly reduced memory usage and computational complexity. Computational complexity analysis and simulation results confirm the effectiveness of the proposed algorithm.
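A greatly simplified caricature of forward RBF construction with continuous tuning of each new unit (not the authors' CFA; the Nelder-Mead optimiser, Gaussian basis and fixed unit count are all assumptions made for the sketch):

```python
import numpy as np
from scipy.optimize import minimize

def rbf(x, c, w):
    """Gaussian basis function with centre c and width w."""
    return np.exp(-((x - c) ** 2) / (2.0 * w ** 2))

def grow_rbf_network(x, y, n_units=5):
    """Add RBF units one at a time; each new unit's centre and width are tuned
    by continuous optimisation of the current residual, so no full candidate
    regressor matrix is ever formed."""
    centres, widths, Phi = [], [], np.ones((len(x), 1))   # start with a bias column
    for _ in range(n_units):
        theta = np.linalg.lstsq(Phi, y, rcond=None)[0]
        resid = y - Phi @ theta
        def leftover(p):                       # error remaining after adding unit p
            col = rbf(x, p[0], abs(p[1]) + 1e-3)
            a = col @ resid / (col @ col + 1e-12)
            return np.sum((resid - a * col) ** 2)
        best = minimize(leftover, x0=[x[np.argmax(np.abs(resid))], 0.5],
                        method="Nelder-Mead")
        centres.append(best.x[0]); widths.append(abs(best.x[1]) + 1e-3)
        Phi = np.column_stack([Phi, rbf(x, centres[-1], widths[-1])])
    theta = np.linalg.lstsq(Phi, y, rcond=None)[0]
    return centres, widths, theta

x = np.linspace(-3, 3, 200)
y = np.sin(2 * x) + 0.05 * np.random.default_rng(0).normal(size=200)
centres, widths, theta = grow_rbf_network(x, y)
```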

Relevance: 30.00%

Abstract:

Treasure et al. (2004) recently proposed a new subspace-monitoring technique, based on the N4SID algorithm, within the multivariate statistical process control framework. This dynamic-monitoring method requires considerably fewer variables to be analysed than dynamic principal component analysis (PCA). The contribution charts and variable reconstruction traditionally employed for static PCA are analysed in a dynamic context. Both may be affected by the ratio of the number of retained components to the total number of analysed variables; particular problems arise if this ratio is large, and a new reconstruction chart is introduced to overcome them. The utility of such a dynamic contribution chart and variable reconstruction is shown in a simulation and by application to industrial data from a distillation unit.
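For reference, the static-PCA diagnostics that are extended here to the dynamic setting can be sketched as follows (standard SPE contributions and single-variable reconstruction; the function and variable names are illustrative):

```python
import numpy as np

def pca_model(X, n_comp):
    """Fit a mean-centred PCA model and return the mean and loading matrix P."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return X.mean(axis=0), Vt[:n_comp].T

def spe_contributions(x, mean, P):
    """Per-variable contributions to the squared prediction error (SPE)."""
    r = (np.eye(len(x)) - P @ P.T) @ (x - mean)   # residual of one sample
    return r ** 2                                 # SPE is the sum of these

def reconstruct_variable(x, mean, P, j):
    """Adjust variable j along its direction so the SPE of the corrected
    sample is minimised (standard reconstruction-based diagnosis)."""
    Ctil = np.eye(len(x)) - P @ P.T               # residual projection
    xi = np.zeros(len(x)); xi[j] = 1.0
    f = xi @ Ctil @ (x - mean) / (xi @ Ctil @ xi)
    x_rec = x.copy(); x_rec[j] -= f
    return x_rec

# Toy usage: large SPE contributions flag suspect variables in a new sample
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
mean, P = pca_model(X, n_comp=2)
contrib = spe_contributions(X[0] + np.array([0, 0, 0, 4.0, 0, 0]), mean, P)
```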

Relevance: 30.00%

Abstract:

A conventional local model (LM) network consists of a set of affine local models blended together using appropriate weighting functions. Such networks have poor interpretability since the dynamics of the blended network are only weakly related to the underlying local models. In contrast, velocity-based LM networks employ strictly linear local models to provide a transparent framework for nonlinear modelling in which the global dynamics are a simple linear combination of the local model dynamics. A novel approach for constructing continuous-time velocity-based networks from plant data is presented. Key issues including continuous-time parameter estimation, correct realisation of the velocity-based local models and avoidance of the input derivative are all addressed. Application results are reported for the highly nonlinear simulated continuous stirred tank reactor process.
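A minimal sketch of the velocity-based blending idea (Euler integration, Gaussian validity functions and the explicit input derivative are simplifying assumptions; the paper specifically addresses how to avoid the input derivative):

```python
import numpy as np

def weights(phi, centres, sigma=1.0):
    """Normalised Gaussian validity functions rho_i(phi)."""
    r = np.exp(-0.5 * ((phi - centres) / sigma) ** 2)
    return r / r.sum()

def velocity_lm_step(w, x, u_dot, phi, centres, A_list, B_list, dt):
    """One Euler step of a velocity-based LM network: the velocity state
    w = dx/dt is driven by a blend of strictly linear local models,
    dw/dt = sum_i rho_i(phi) * (A_i w + B_i du/dt), so the global dynamics
    are a simple weighted combination of the local model dynamics."""
    rho = weights(phi, centres)
    w_dot = sum(r * (A @ w + B * u_dot)
                for r, A, B in zip(rho, A_list, B_list))
    return w + dt * w_dot, x + dt * w

# Two hypothetical first-order local models blended over a scalar scheduling variable
A_list = [np.array([[-1.0]]), np.array([[-3.0]])]
B_list = [np.array([1.0]), np.array([2.0])]
w, x = np.array([0.0]), np.array([0.5])
for _ in range(100):
    w, x = velocity_lm_step(w, x, u_dot=0.1, phi=float(x[0]),
                            centres=np.array([0.0, 1.0]),
                            A_list=A_list, B_list=B_list, dt=0.01)
```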

Relevance: 30.00%

Abstract:

This paper discusses the design of gain-scheduled sampled-data controllers for continuous-time polytopic linear parameter-varying systems. The scheduling variables are assumed to be available only at the sampling instants, and a bound on the time-variation of the scheduling parameters is also assumed to be known. The resulting gain-scheduled controllers improve the maximum achievable delay bound over previous constant-gain designs in the literature.
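The scheduling step itself amounts to a convex combination of vertex gains evaluated from the sampled parameter. A toy sketch for a single scalar parameter and an invented discretised plant (the gains, plant and sampling period are assumptions, not taken from the paper):

```python
import numpy as np

def scheduled_gain(theta, theta_min, theta_max, K1, K2):
    """Polytopic gain scheduling for one scalar parameter: interpolate the
    vertex gains K1 (at theta_min) and K2 (at theta_max) using the convex
    coordinate of theta, measured only at the sampling instants."""
    a = np.clip((theta - theta_min) / (theta_max - theta_min), 0.0, 1.0)
    return (1.0 - a) * K1 + a * K2

# Hypothetical vertex gains and a discretised double-integrator plant
K1, K2 = np.array([[-1.0, -0.5]]), np.array([[-2.0, -0.8]])
T = 0.1
Ad = np.array([[1.0, T], [0.0, 1.0]])
Bd = np.array([[T**2 / 2.0], [T]])
x = np.array([1.0, 0.0])
for k in range(50):
    theta_k = 0.5 + 0.4 * np.sin(0.3 * k)                # scheduling variable at sample k
    u = scheduled_gain(theta_k, 0.0, 1.0, K1, K2) @ x    # held (ZOH) over one period
    x = Ad @ x + Bd @ u                                  # plant response over one period
```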

Relevance: 30.00%

Abstract:

The least-mean-fourth (LMF) algorithm is known for its fast convergence and lower steady-state error, especially in sub-Gaussian noise environments. Recent work on normalised versions of the LMF algorithm has further enhanced its stability and performance in both Gaussian and sub-Gaussian noise environments. For example, the recently developed normalised LMF (XE-NLMF) algorithm is normalised by the mixed signal and error powers and weighted by a fixed mixed-power parameter. Unfortunately, this algorithm depends on the selection of this mixing parameter. In this work, a time-varying mixed-power parameter technique is introduced to overcome this dependency. A convergence analysis, transient analysis and steady-state behaviour of the proposed algorithm are derived and verified through simulations. An enhancement in performance is obtained through the use of this technique in two different scenarios. Moreover, the tracking analysis of the proposed algorithm is carried out in the presence of two sources of nonstationarity: (1) carrier frequency offset between transmitter and receiver and (2) random variations in the environment. Close agreement between analysis and simulation results is obtained. The results show that, unlike in the stationary case, the steady-state excess mean-square error is not a monotonically increasing function of the step size.
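A sketch of a normalised LMF update with a mixed-power denominator; the rule used below to vary the mixing parameter is a placeholder assumption, not the update derived in this work:

```python
import numpy as np

def xe_nlmf_sketch(x, d, M=8, mu=0.1, eps=1e-6):
    """Normalised-LMF filter with a mixed normalisation
    delta*||u||^2 + (1-delta)*e^2; delta is varied by a simple (assumed) rule."""
    w = np.zeros(M)
    delta = 0.5                                   # mixed-power parameter
    for n in range(M - 1, len(x)):
        u = x[n - M + 1:n + 1][::-1]              # regressor, most recent sample first
        e = d[n] - w @ u                          # a priori error
        norm = delta * (u @ u) + (1 - delta) * e * e + eps
        w += mu * (e ** 3) * u / norm             # LMF (fourth-moment) update
        # hypothetical time-varying mixing: lean on whichever power dominates
        delta = (u @ u) / (u @ u + e * e + eps)
    return w

# Toy system identification of an unknown 8-tap channel in sub-Gaussian noise
rng = np.random.default_rng(2)
h = rng.normal(size=8)
x = rng.normal(size=5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.uniform(-1, 1, len(x))
w_hat = xe_nlmf_sketch(x, d)
```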

Relevance: 30.00%

Abstract:

For some time there has been considerable interest in variable step-size methods for adaptive filtering. Recently, a few stochastic gradient algorithms have been proposed that are based on cost functions with an exponential dependence on the chosen error. However, we have found that the cost function based on the exponential of the squared error does not always converge satisfactorily. In this paper we modify this cost function in order to improve the convergence of the exponentiated cost function, and the novel ECVSS (exponentiated convex variable step-size) stochastic gradient algorithm is obtained. The proposed technique has attractive properties in both stationary and abrupt-change situations.
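To illustrate the general idea of an exponentiated, convex variable step-size rule (the exact ECVSS cost is not reproduced here; the saturating-exponential rule below is an assumed stand-in chosen only to show large errors speeding adaptation and small errors reducing misadjustment):

```python
import numpy as np

def ecvss_like_lms(x, d, M=8, mu_max=0.5, beta=2.0, eps=1e-6):
    """Variable step-size LMS sketch: the step size is a bounded, exponentiated
    function of the instantaneous squared error."""
    w = np.zeros(M)
    for n in range(M - 1, len(x)):
        u = x[n - M + 1:n + 1][::-1]                  # regressor, most recent first
        e = d[n] - w @ u
        mu = mu_max * (1.0 - np.exp(-beta * e * e))   # saturating step size
        w += mu * e * u / (u @ u + eps)               # normalised gradient update
    return w

# Toy usage on an unknown 8-tap channel
rng = np.random.default_rng(3)
x = rng.normal(size=4000)
d = np.convolve(x, rng.normal(size=8))[:len(x)] + 0.01 * rng.normal(size=len(x))
w_hat = ecvss_like_lms(x, d)
```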

Relevance: 30.00%

Abstract:

A theory of strongly interacting Fermi systems of a few particles is developed. At high excitation energies (a few times the single-particle level spacing) these systems are characterized by an extreme degree of complexity due to strong mixing of the shell-model-based many-particle basis states by the residual two-body interaction. This regime can be described as many-body quantum chaos. Practically, it occurs when the excitation energy of the system is greater than a few single-particle level spacings near the Fermi energy. Physical examples of such systems are compound nuclei, heavy open-shell atoms (e.g. rare earths) and multicharged ions, molecules, clusters and quantum dots in solids. The main quantity of the theory is the strength function which describes spreading of the eigenstates over many-particle basis states (determinants) constructed using the shell-model orbital basis. A nonlinear equation for the strength function is derived, which enables one to describe the eigenstates without diagonalization of the Hamiltonian matrix. We show how to use this approach to calculate mean orbital occupation numbers and matrix elements between chaotic eigenstates and introduce typically statistical variables such as temperature in an isolated microscopic Fermi system of a few particles.
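The strength function referred to here is commonly approximated by a Breit-Wigner (spreading) profile; a standard form of this approximation (not the paper's full self-consistent nonlinear equation) is:

```latex
% Strength function of a basis state |k> with unperturbed energy E_k over
% exact eigenstates |nu>; Gamma_k is the spreading width and Delta_k an
% energy shift. The Breit-Wigner form is a standard approximation only.
\[
  F_k(E) \;=\; \sum_\nu \bigl|\langle k \mid \nu \rangle\bigr|^2 \,\delta(E - E_\nu)
  \;\approx\; \frac{1}{2\pi}\,
  \frac{\Gamma_k(E)}{\bigl(E - E_k - \Delta_k(E)\bigr)^2 + \Gamma_k(E)^2/4}.
\]
```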

Relevance: 30.00%

Abstract:

The field of surface polariton physics really took off with the prism coupling techniques developed by Kretschmann and Raether, and by Otto. This article reports on the construction and operation of a rotatable, in vacuo, variable-temperature Otto coupler with a coupling gap that can be varied by remote control. The specific design attributes of the system offer additional advantages over standard Otto systems: (i) temperature variation (ambient to 85 K), and (ii) the use of a valuable additional reference point, namely the gap-independent reflectance at the Brewster angle at any given, fixed temperature. The instrument is placed firmly in a historical context of developments in the field. The efficacy of the coupler is demonstrated by sample attenuated total reflectance results on films of platinum, niobium, and yttrium barium copper oxide and on aluminum/gallium arsenide (Al/GaAs) Schottky diode structures.
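The Brewster-angle reference point mentioned above follows from the standard relation for p-polarised light at the prism/gap interface (here n_1 and n_2 denote the refractive indices of the prism and the gap medium; the notation is ours, not the article's):

```latex
% Brewster angle at the prism (n_1) / gap (n_2) interface; at this angle the
% p-polarised reflectance of that interface vanishes independently of the
% coupling gap, giving the fixed-temperature reference point described above.
\[
  \theta_B = \arctan\!\left(\frac{n_2}{n_1}\right)
\]
```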

Relevance: 30.00%

Abstract:

The validity of load estimates from intermittent, instantaneous grab sampling is dependent on adequate spatial coverage by monitoring networks and a sampling frequency that reflects the variability in the system under study. Catchments with a flashy hydrology due to surface runoff pose a particular challenge, as intense short-duration rainfall events may account for a significant portion of the total diffuse transfer of pollution from soil to water in any hydrological year. This can also be exacerbated by the presence of strong background pollution signals from point sources during low flows. In this paper, a range of sampling methodologies and load estimation techniques are applied to phosphorus data from such a surface-water-dominated river system, instrumented at three sub-catchments (ranging from 3 to 5 km2 in area) with near-continuous monitoring stations. Systematic and Monte Carlo approaches were applied to simulate grab sampling using multiple strategies and to calculate an estimated load, Le, based on established load estimation methods. Comparison with the actual load, Lt, revealed significant average underestimation, of up to 60%, and high variability for all feasible sampling approaches. Further analysis of the time series provides an insight into these observations, revealing peak frequencies and power-law scaling in the distributions of P concentration, discharge and load associated with surface runoff and background transfers. Results indicate that only near-continuous monitoring that reflects the rapid temporal changes in these river systems is adequate for comparative monitoring and evaluation purposes. While the implications of this analysis may be more tenable to small-scale flashy systems, this represents an appropriate scale in terms of evaluating catchment mitigation strategies such as agri-environmental policies for managing diffuse P transfers in complex landscapes.
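A Monte Carlo sketch of the grab-sampling comparison (the flow-weighted estimator, weekly sampling interval and synthetic record below are illustrative assumptions, not the paper's exact methods or data):

```python
import numpy as np

def simulate_grab_sampling(conc, flow, dt_hours, interval_hours,
                           n_trials=1000, seed=0):
    """Monte Carlo grab-sampling sketch: Lt is the 'true' load from the
    near-continuous record; Le is estimated from samples taken every
    interval_hours (random start) with a flow-weighted estimator."""
    rng = np.random.default_rng(seed)
    Lt = np.sum(conc * flow) * dt_hours * 3600.0        # true load from full record
    step = int(round(interval_hours / dt_hours))
    ratios = []
    for _ in range(n_trials):
        idx = np.arange(rng.integers(0, step), len(conc), step)
        # flow-weighted mean concentration from the samples, scaled by the
        # total flow volume of the full record
        fw_conc = np.sum(conc[idx] * flow[idx]) / np.sum(flow[idx])
        Le = fw_conc * flow.mean() * len(conc) * dt_hours * 3600.0
        ratios.append(Le / Lt)
    return np.array(ratios)                              # distribution of Le/Lt

# Synthetic 30-day flashy record at 10-minute resolution (purely illustrative)
t = np.arange(0, 30 * 24, 10 / 60.0)
flow = 1.0 + 5.0 * (np.sin(0.5 * t) > 0.98) * np.random.default_rng(4).random(len(t))
conc = 0.05 + 0.4 * (flow - 1.0)                 # concentration rises with storm flow
print(np.mean(simulate_grab_sampling(conc, flow, dt_hours=10/60.0,
                                     interval_hours=24 * 7)))
```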