120 results for Invariant polynomials
Abstract:
A simple parameter-adaptive controller design methodology is introduced in which steady-state servo tracking properties provide the major control objective. This is achieved without cancellation of process zeros, and hence the underlying design can be applied to non-minimum-phase systems. As with other self-tuning algorithms, the design (user-specified) polynomials of the proposed algorithm define the performance capabilities of the resulting controller. However, with the appropriate definition of these polynomials, the synthesis technique can be shown to admit different adaptive control strategies, e.g. self-tuning PID and self-tuning pole-placement controllers. The algorithm can therefore be thought of as an embodiment of other self-tuning design techniques. The performances of some of the resulting controllers are illustrated using simulation examples and an on-line application to an experimental apparatus.
Abstract:
The problem of identification of a nonlinear dynamic system is considered. A two-layer neural network is used for the solution of the problem. Systems disturbed by unmeasurable noise are considered, although the disturbance is known to be a random piecewise-polynomial process. Absorption polynomials and nonquadratic loss functions are used to reduce the effect of this disturbance on the estimates of the optimal memory of the neural-network model.
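The abstract does not give the estimator's equations, so the following is only a minimal numpy sketch of the general idea: a two-layer network identifying a nonlinear input-output model, trained with a nonquadratic (Huber-type) loss so that heavy-tailed disturbances have a bounded influence on the estimates. The simulated system, regressor choice, layer sizes and loss are assumptions for illustration, not the paper's absorption-polynomial construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def huber_grad(e, delta=1.0):
    """Gradient of the Huber (nonquadratic) loss with respect to the error e."""
    return np.where(np.abs(e) <= delta, e, delta * np.sign(e))

# hypothetical nonlinear system y(k) = f(y(k-1), u(k-1)) plus heavy-tailed disturbance
N = 2000
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.6 * np.sin(y[k - 1]) + 0.4 * u[k - 1] ** 3
y += 0.1 * rng.standard_t(df=2, size=N)          # unmeasured, heavy-tailed noise

# regressor phi(k) = [y(k-1), u(k-1), 1] and one-step-ahead target
phi = np.column_stack([y[:-1], u[:-1], np.ones(N - 1)])
target = y[1:]

# two-layer network: y_hat = w2 . tanh(W1 phi), trained by gradient descent
n_hidden = 10
W1 = 0.1 * rng.standard_normal((n_hidden, phi.shape[1]))
w2 = 0.1 * rng.standard_normal(n_hidden)
lr = 0.01
for epoch in range(200):
    h = np.tanh(phi @ W1.T)
    e = h @ w2 - target                          # prediction errors
    g = huber_grad(e) / len(e)                   # robust, bounded error gradient
    w2 -= lr * (h.T @ g)
    W1 -= lr * ((np.outer(g, w2) * (1.0 - h ** 2)).T @ phi)

h = np.tanh(phi @ W1.T)
print("robust fit, mean absolute error:", np.mean(np.abs(h @ w2 - target)))
```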
Abstract:
Differential geometry is used to investigate the structure of neural-network-based control systems. The key aspect is relative order—an invariant property of dynamic systems. Finite relative order allows the specification of a minimal architecture for a recurrent network. Any system with finite relative order has a left inverse. It is shown that a recurrent network with finite relative order has a local inverse that is also a recurrent network with the same weights. The results have implications for the use of recurrent networks in the inverse-model-based control of nonlinear systems.
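For the special case of a linear state-space system, the relative order referred to here reduces to the index of the first nonzero Markov parameter C A^(r-1) B. The sketch below computes it for that linear case only; it does not reproduce the differential-geometric treatment of the paper.

```python
import numpy as np

def relative_order(A, B, C, tol=1e-10, max_order=None):
    """Relative order of the linear system x+ = A x + B u, y = C x:
    the smallest r with C A^(r-1) B != 0 (first nonzero Markov parameter)."""
    max_order = max_order or A.shape[0]
    M = np.array(B, dtype=float)
    for r in range(1, max_order + 1):
        if np.abs(C @ M).max() > tol:
            return r
        M = A @ M
    return None          # no finite relative order found up to max_order

# example: two pure delays in front of a first-order lag -> relative order 3
A = np.array([[0.5, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])
print(relative_order(A, B, C))   # -> 3
```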
Abstract:
We show that for any sample size, any size of the test, and any weights matrix outside a small class of exceptions, there exists a positive measure set of regression spaces such that the power of the Cliff-Ord test vanishes as the autocorrelation increases in a spatial error model. This result extends to the tests that define the Gaussian power envelope of all invariant tests for residual spatial autocorrelation. In most cases, the regression spaces such that the problem occurs depend on the size of the test, but there also exist regression spaces such that the power vanishes regardless of the size. A characterization of such particularly hostile regression spaces is provided.
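For reference, the Cliff-Ord statistic being analysed is, in its simplest form, a Moran-type quadratic ratio in the OLS residuals of the spatial error model. A minimal sketch follows; the rook-contiguity weights matrix, the row-standardization and the toy regression are illustrative assumptions.

```python
import numpy as np

def cliff_ord_statistic(y, X, W):
    """Moran-type Cliff-Ord statistic I = (n/S0) * e'We / e'e on OLS residuals."""
    n = len(y)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    S0 = W.sum()
    return (n / S0) * (e @ W @ e) / (e @ e)

# toy example: regular 10 x 10 lattice with rook-contiguity weights
rng = np.random.default_rng(1)
m = 10
n = m * m
W = np.zeros((n, n))
for i in range(m):
    for j in range(m):
        k = i * m + j
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            ii, jj = i + di, j + dj
            if 0 <= ii < m and 0 <= jj < m:
                W[k, ii * m + jj] = 1.0
W /= W.sum(axis=1, keepdims=True)        # row-standardize

X = np.column_stack([np.ones(n), rng.standard_normal(n)])
rho = 0.8                                # spatial autocorrelation of the errors
u = np.linalg.solve(np.eye(n) - rho * W, rng.standard_normal(n))
y = X @ np.array([1.0, 2.0]) + u
print("Cliff-Ord I on residuals:", cliff_ord_statistic(y, X, W))
```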
Abstract:
The polar winter stratospheric vortex is a coherent structure that undergoes different types of deformation that can be revealed by geometric invariant moments. Three moments are used to study sudden stratospheric warmings: the aspect ratio, the centroid latitude, and the area of the vortex, computed from stratospheric data from the 40-yr ECMWF Re-Analysis (ERA-40) project. Hierarchical clustering combined with data image visualization techniques is used as well. Using the gap statistic, three optimal clusters are obtained based on the three geometric moments considered here. The 850-K potential vorticity field, as well as the vertical profiles of polar temperature and zonal wind, provides evidence that the clusters represent, respectively, the undisturbed (U), displaced (D), and split (S) states of the polar vortex. This systematic method for identifying and characterizing the state of the polar vortex using objective methods is useful as a tool for analyzing observations and as a test of the ability of climate models to simulate the observations. The method correctly identifies all previously identified major warmings and also identifies significant minor warmings where the atmosphere is substantially disturbed but does not quite meet the criteria to qualify as a major stratospheric warming.
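As an illustration of the three moment diagnostics named above (area, centroid latitude, aspect ratio), the sketch below computes them from a binary vortex mask on a regular latitude-longitude grid. The PV thresholding into a mask, the equatorial-plane projection and the toy elliptical vortex are simplifying assumptions, not the ERA-40 processing chain of the study.

```python
import numpy as np

R = 6.371e6  # Earth radius, m

def vortex_moments(mask, lat_deg, lon_deg):
    """Area, centroid latitude and aspect ratio of a binary vortex mask.

    mask             : (nlat, nlon) boolean array (e.g. PV above a chosen threshold)
    lat_deg, lon_deg : 1-D grid latitudes / longitudes in degrees
    """
    lat, lon = np.deg2rad(lat_deg), np.deg2rad(lon_deg)
    dlat, dlon = abs(lat[1] - lat[0]), abs(lon[1] - lon[0])
    LAT, LON = np.meshgrid(lat, lon, indexing="ij")

    w = R**2 * np.cos(LAT) * dlat * dlon * mask      # spherical grid-cell areas
    area = w.sum()

    # project onto the equatorial plane (view from above the pole)
    x = R * np.cos(LAT) * np.cos(LON)
    y = R * np.cos(LAT) * np.sin(LON)
    xc, yc = (w * x).sum() / area, (w * y).sum() / area
    centroid_lat = np.rad2deg(np.arccos(min(np.hypot(xc, yc) / R, 1.0)))

    # aspect ratio from the eigenvalues of the second-order moment matrix
    dx, dy = x - xc, y - yc
    J = np.array([[(w * dx * dx).sum(), (w * dx * dy).sum()],
                  [(w * dx * dy).sum(), (w * dy * dy).sum()]]) / area
    e = np.sort(np.linalg.eigvalsh(J))
    return area, centroid_lat, np.sqrt(e[1] / e[0])

# toy usage: an elliptical "vortex" displaced off the pole
lat_deg = np.arange(89.5, 0, -1.0)
lon_deg = np.arange(0, 360, 1.0)
LAT, LON = np.meshgrid(np.deg2rad(lat_deg), np.deg2rad(lon_deg), indexing="ij")
X, Y = R * np.cos(LAT) * np.cos(LON), R * np.cos(LAT) * np.sin(LON)
x0 = R * np.cos(np.deg2rad(80.0))
mask = ((X - x0) / (0.25 * R))**2 + (Y / (0.10 * R))**2 < 1.0
print(vortex_moments(mask, lat_deg, lon_deg))
```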
Abstract:
The Stochastic Diffusion Search (SDS) was developed as a solution to the best-fit search problem. Thus, as a special case it is capable of solving the transform invariant pattern recognition problem. SDS is efficient and, although inherently probabilistic, produces very reliable solutions in widely ranging search conditions. However, to date a systematic formal investigation of its properties has not been carried out. This thesis addresses this problem. The thesis reports results pertaining to the global convergence of SDS as well as characterising its time complexity. However, the main emphasis of the work is on the resource allocation aspect of Stochastic Diffusion Search operations. The thesis introduces a novel model of the algorithm, generalising an Ehrenfest Urn Model from statistical physics. This approach makes it possible to obtain a thorough characterisation of the response of the algorithm in terms of the parameters describing the search conditions in the case of a unique best-fit pattern in the search space. This model is further generalised in order to account for different search conditions: two solutions in the search space and search for a unique solution in a noisy search space. An approximate solution in the case of two alternative solutions is also proposed and compared with predictions of the extended Ehrenfest Urn model. The analysis performed enabled a quantitative characterisation of the Stochastic Diffusion Search in terms of exploration and exploitation of the search space. It appeared that SDS is biased towards the latter mode of operation. This novel perspective on the Stochastic Diffusion Search led to an investigation of extensions of the standard SDS which would strike a different balance between these two modes of search space processing. Thus, two novel algorithms were derived from the standard Stochastic Diffusion Search, ‘context-free’ and ‘context-sensitive’ SDS, and their properties were analysed with respect to resource allocation. It appeared that they shared some of the desired features of their predecessor but also possessed some properties not present in the classic SDS. The theory developed in the thesis was illustrated throughout with carefully chosen simulations of a best-fit search for a string pattern, a simple but representative domain, enabling careful control of search conditions.
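The best-fit string search used throughout the thesis as an illustrative domain lends itself to a compact sketch of standard SDS: agents hold hypotheses about where the pattern starts, each tests one randomly chosen component per iteration, and inactive agents either copy a randomly polled active agent (exploitation) or resample at random (exploration). The agent count, iteration budget and cluster read-out below are arbitrary illustrative choices.

```python
import random

def sds_best_fit(search_space, pattern, n_agents=100, iterations=200, seed=0):
    """Standard Stochastic Diffusion Search for the best-fit position of
    `pattern` inside `search_space` (both plain strings)."""
    rng = random.Random(seed)
    positions = range(len(search_space) - len(pattern) + 1)
    hypotheses = [rng.choice(positions) for _ in range(n_agents)]
    active = [False] * n_agents

    for _ in range(iterations):
        # test phase: each agent checks one randomly chosen pattern component
        for a in range(n_agents):
            i = rng.randrange(len(pattern))
            active[a] = search_space[hypotheses[a] + i] == pattern[i]
        # diffusion phase: inactive agents poll a random agent
        for a in range(n_agents):
            if not active[a]:
                other = rng.randrange(n_agents)
                if active[other]:
                    hypotheses[a] = hypotheses[other]      # exploitation
                else:
                    hypotheses[a] = rng.choice(positions)  # exploration
    # return the hypothesis supported by the largest cluster of agents
    return max(set(hypotheses), key=hypotheses.count)

# noisy search space containing two imperfect copies and one exact copy of the pattern
space = "xxxxhelxoxxxxxxhxlloxxxxxxxxxxhelloxxxxxxx"
print(sds_best_fit(space, "hello"))   # expected to converge to index 30
```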
Abstract:
Stochastic Diffusion Search is an efficient probabilistic best-fit search technique, capable of transformation invariant pattern matching. Although inherently parallel in operation, it is difficult to implement efficiently in hardware, as it requires full inter-agent connectivity. This paper describes a lattice implementation which, while qualitatively retaining the properties of the original algorithm, restricts connectivity, enabling simpler implementation on parallel hardware. Diffusion times are examined for different network topologies, ranging from ordered lattices through small-world networks to random graphs.
Abstract:
Higher order cumulant analysis is applied to the blind equalization of linear time-invariant (LTI) nonminimum-phase channels. The channel model is moving-average based. To identify the moving-average parameters of channels, a higher-order cumulant fitting approach is adopted in which a novel relay algorithm is proposed to obtain the global solution. In addition, the technique incorporates model order determination. The transmitted data are considered as independent and identically distributed random variables over some discrete finite set (e.g., the set {±1, ±3}). A transformation scheme is suggested so that third-order cumulant analysis can be applied to this type of data. Simulation examples verify the feasibility and potential of the algorithm. Performance is compared with that of the noncumulant-based Sato scheme in terms of the steady-state MSE and convergence rate.
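The raw material of such cumulant-fitting schemes is the sample third-order cumulant of the channel output and its closed-form relation to the MA taps, c3(tau1, tau2) = gamma3 * sum_k h(k) h(k+tau1) h(k+tau2). The sketch below only estimates these cumulants on a toy nonminimum-phase MA(2) channel driven by a skewed input and compares them with the theoretical values; the relay fitting algorithm and the data transformation scheme of the paper are not reproduced.

```python
import numpy as np

def third_order_cumulant(x, tau1, tau2):
    """Sample third-order cumulant c3(tau1, tau2) of a zero-mean series x."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x) - max(tau1, tau2)
    return np.mean(x[:n] * x[tau1:tau1 + n] * x[tau2:tau2 + n])

rng = np.random.default_rng(0)
h = np.array([1.0, -2.5, 1.0])            # MA(2) channel, one zero outside the unit circle
s = rng.exponential(1.0, 50_000) - 1.0    # skewed, zero-mean iid input (third cumulant = 2)
x = np.convolve(s, h, mode="valid")       # noiseless channel output

for t1, t2 in [(0, 0), (1, 0), (1, 1), (2, 1)]:
    est = third_order_cumulant(x, t1, t2)
    theo = 2.0 * sum(h[k] * h[k + t1] * h[k + t2]
                     for k in range(len(h)) if k + max(t1, t2) < len(h))
    print(f"c3({t1},{t2}): sample {est:+.2f}   theory {theo:+.2f}")
```

The sample estimates should approach the theoretical values as the record length grows, which is what the cumulant-fitting step of such identification schemes relies on.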
Abstract:
This paper introduces a new blind equalisation algorithm for pulse amplitude modulation (PAM) data transmitted through nonminimum-phase (NMP) channels. The algorithm itself is based on a noncausal AR model of communication channels and the second- and fourth-order cumulants of the received data series, where only the diagonal slices of the cumulants are used. The AR parameters are adjusted at each sample by using a successive over-relaxation (SOR) scheme, a variant of the ordinary LMS scheme, but with a faster convergence rate and a greater robustness to the selection of the ‘step-size’ in iterations. Computer simulations are implemented for both linear time-invariant (LTI) and linear time-variant (LTV) NMP channels, and the results show that the algorithm proposed in this paper has a fast convergence rate and a potential capability to track LTV NMP channels.
Abstract:
This paper addresses the problem of tracking line segments in on-line handwriting obtained through a digitizer tablet. The approach is based on Kalman filtering to model linear portions of on-line handwriting, particularly handwritten numerals, and to detect abrupt changes in handwriting direction that indicate a model change. This approach uses a Kalman filter framework constrained by a normalized line equation, where quadratic terms are linearized through a first-order Taylor expansion. The modeling is then carried out under the assumption that the state is deterministic and time-invariant, while the detection relies on a double-thresholding mechanism which tests for a violation of this assumption. The first threshold is based on the kinetics of the handwriting layout. The second one takes into account the jump in angle between the previously observed direction of the layout and its current direction. The method proposed enables real-time processing. To illustrate the methodology proposed, some results obtained from handwritten numerals are presented.
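One way to realize such a constrained, time-invariant Kalman tracker is sketched below: the line is kept in normal form so the unit-norm constraint holds by construction, the point-to-line distance serves as a linearized pseudo-measurement, and two thresholds (normalized innovation and direction jump) decide when a new segment should be started. The parameterization, the threshold values, and the initialization from two points are illustrative assumptions, not necessarily the authors' choices.

```python
import numpy as np

class LineSegmentTracker:
    """EKF-style tracker for straight strokes in a pen trajectory.

    The line is kept in normal form x*cos(theta) + y*sin(theta) = rho, so the
    unit-norm constraint is satisfied by construction.  The state (theta, rho)
    is treated as deterministic and time-invariant; a break is declared when
    either the normalized innovation or the jump in stroke direction is large.
    """

    def __init__(self, p0, p1, meas_var=1.0, innov_thresh=9.0, angle_thresh=0.5):
        d = np.asarray(p1, float) - np.asarray(p0, float)
        theta = np.arctan2(d[0], -d[1])            # normal is perpendicular to d
        rho = p0[0] * np.cos(theta) + p0[1] * np.sin(theta)
        self.x = np.array([theta, rho])            # line-parameter state
        self.P = np.diag([0.1, 10.0])              # initial covariance
        self.R = meas_var
        self.innov_thresh = innov_thresh
        self.angle_thresh = angle_thresh
        self.last_dir = np.arctan2(d[1], d[0])
        self.last_pt = np.asarray(p1, float)

    def update(self, pt):
        """Return True if `pt` continues the current segment, False on a break."""
        px, py = float(pt[0]), float(pt[1])
        theta, rho = self.x
        # pseudo-measurement: signed distance of the point to the line should be 0
        z = px * np.cos(theta) + py * np.sin(theta) - rho
        H = np.array([-px * np.sin(theta) + py * np.cos(theta), -1.0])
        S = H @ self.P @ H + self.R                # innovation variance
        # threshold 1: normalized innovation; threshold 2: jump in stroke direction
        step = np.array([px, py]) - self.last_pt
        cur_dir = np.arctan2(step[1], step[0])
        dir_jump = abs(np.angle(np.exp(1j * (cur_dir - self.last_dir))))
        if z * z / S > self.innov_thresh or dir_jump > self.angle_thresh:
            return False                           # caller starts a new segment
        K = self.P @ H / S                         # Kalman gain
        self.x = self.x - K * z
        self.P = self.P - np.outer(K, H) @ self.P
        self.last_dir, self.last_pt = cur_dir, np.array([px, py])
        return True

# toy stroke: nearly collinear points, then a sharp change of direction
pts = [(0, 0), (1, 1), (2, 2), (3, 3.1), (4, 4), (5, 9)]
trk = LineSegmentTracker(pts[0], pts[1])
print([trk.update(p) for p in pts[2:]])            # -> [True, True, True, False]
```

On a detected break, the caller would re-initialize a fresh tracker from the two most recent points.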
Abstract:
Smooth trajectories are essential for safe interaction between a human and a haptic interface. Different methods and strategies have been introduced to create such smooth trajectories. This paper studies the creation of human-like movements in haptic interfaces, based on the study of human arm motion. These motions are intended to retrain the upper-limb movements of patients who have lost manipulation function following a stroke. We present a model that uses higher-degree polynomials to define a trajectory and control the robot arm to achieve minimum-jerk movements. The paper also studies different methods that can be derived from polynomials to create more realistic human-like movements for therapeutic purposes.
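The standard minimum-jerk point-to-point profile is a fifth-degree polynomial in normalized time. A short sketch of this kind of trajectory generator is given below, assuming zero velocity and acceleration at both endpoints; the reach distance, duration and sampling rate are arbitrary example values.

```python
import numpy as np

def minimum_jerk(q0, qf, T, t):
    """Minimum-jerk position profile between q0 and qf over duration T.

    Fifth-degree polynomial with zero velocity and acceleration at both ends:
        q(t) = q0 + (qf - q0) * (10 tau^3 - 15 tau^4 + 6 tau^5),  tau = t / T
    """
    tau = np.clip(np.asarray(t, float) / T, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return q0 + (qf - q0) * s

# sample a 2-second reach from 0 to 0.3 m at 100 Hz and check the peak speed
t = np.linspace(0.0, 2.0, 201)
q = minimum_jerk(0.0, 0.3, 2.0, t)
v = np.gradient(q, t)
print(f"peak speed ~ {v.max():.3f} m/s (analytic 1.875 * 0.3 / 2 = {1.875 * 0.3 / 2:.3f})")
```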
Abstract:
The Routh-stability method is employed to reduce the order of discrete-time system transfer functions. It is shown that the Routh approximant is well suited to reduce both the denominator and the numerator polynomials, although alternative methods, such as Padé-Markov approximation, are also used to fit the model numerator coefficients.
Abstract:
Identifying a periodic time-series model from environmental records, without imposing the positivity of the growth rate, does not necessarily respect the time order of the data observations. Consequently, subsequent observations, sampled in the environmental archive, can be inverted on the time axis, resulting in a non-physical signal model. In this paper an optimization technique with linear constraints on the signal model parameters is proposed that prevents time inversions. The activation conditions for this constrained optimization are based upon the physical constraint on the growth rate, namely, that it cannot take values smaller than zero. The actual constraints are defined for polynomials and first-order splines as basis functions for the nonlinear contribution in the distance-time relationship. The method is compared with an existing method that eliminates the time inversions, and its noise sensitivity is tested by means of Monte Carlo simulations. Finally, the usefulness of the method is demonstrated on measurements of the vessel density in a mangrove tree, Rhizophora mucronata, and of Mg/Ca ratios in a bivalve, Mytilus trossulus.
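The core idea of forbidding time inversions through linear constraints can be illustrated with a much simpler constrained least-squares fit: write the fitted time axis as a free offset plus non-negative increments and solve the bounded problem directly. This toy (using scipy's lsq_linear on synthetic data) is not the paper's periodic signal model with polynomial or spline basis functions; it only demonstrates the constraint mechanism.

```python
import numpy as np
from scipy.optimize import lsq_linear

def monotone_time_fit(distance, time_obs):
    """Least-squares fit of a time axis to noisy observations with linear
    constraints that forbid time inversions (growth rate >= 0).

    The fitted time at sample i is t0 plus a sum of non-negative increments,
    i.e. a linearly constrained reformulation of 'time must not decrease
    with distance along the archive'."""
    order = np.argsort(distance)
    t = np.asarray(time_obs, float)[order]
    n = len(t)
    # design matrix: column 0 -> free offset t0, columns 1..n-1 -> increments
    A = np.zeros((n, n))
    A[:, 0] = 1.0
    A[1:, 1:] = np.tril(np.ones((n - 1, n - 1)))
    lb = np.r_[-np.inf, np.zeros(n - 1)]          # increments must be >= 0
    res = lsq_linear(A, t, bounds=(lb, np.full(n, np.inf)))
    fitted = A @ res.x
    out = np.empty(n)
    out[order] = fitted
    return out

# noisy, occasionally inverted synthetic "time vs. distance" record
rng = np.random.default_rng(3)
d = np.linspace(0.0, 10.0, 40)
true_t = 2.0 * d + np.sin(d)                      # monotone truth
obs_t = true_t + rng.normal(0.0, 0.8, d.size)     # noise can create inversions
fit_t = monotone_time_fit(d, obs_t)
print("inversions before:", int(np.sum(np.diff(obs_t) < 0)),
      " after:", int(np.sum(np.diff(fit_t) < 0)))
```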
Abstract:
Previous work has demonstrated that observed and modeled climates show a near-time-invariant ratio of mean land to mean ocean surface temperature change under transient and equilibrium global warming. This study confirms this in a range of atmospheric models coupled to perturbed sea surface temperatures (SSTs), slab (thermodynamics only) oceans, and a fully coupled ocean. Away from equilibrium, it is found that the atmospheric processes that maintain the ratio cause a land-to-ocean heat transport anomaly that can be approximated using a two-box energy balance model. When climate is forced by increasing atmospheric CO2 concentration, the heat transport anomaly moves heat from land to ocean, constraining the land to warm in step with the ocean surface, despite the small heat capacity of the land. The heat transport anomaly is strongly related to the top-of-atmosphere radiative flux imbalance, and hence it tends to a small value as equilibrium is approached. In contrast, when climate is forced by prescribing changes in SSTs, the heat transport anomaly replaces ‘missing’ radiative forcing over land by moving heat from ocean to land, warming the land surface. The heat transport anomaly remains substantial in steady state. These results are consistent with earlier studies that found that both land and ocean surface temperature changes may be approximated as local responses to global mean radiative forcing. The modeled heat transport anomaly has large impacts on surface heat fluxes but small impacts on precipitation, circulation, and cloud radiative forcing compared with the impacts of surface temperature change. No substantial nonlinearities are found in these atmospheric variables when the effects of forcing and surface temperature change are added.
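The two-box energy balance approximation mentioned above can be written in a few lines: a land box and an ocean box, forced equally, coupled by a heat transport term that relaxes the land warming toward a fixed multiple of the ocean warming. All parameter values below (heat capacities, feedbacks, the target ratio, the coupling strength, the land/ocean area ratio) are illustrative guesses, not the paper's fitted numbers.

```python
# illustrative two-box (land / ocean) energy balance model under CO2-like forcing
F        = 3.7    # radiative forcing, W m-2
lam_land = 1.0    # land feedback parameter, W m-2 K-1
lam_ocn  = 1.3    # ocean feedback parameter, W m-2 K-1
C_land   = 2.0    # effective land heat capacity, W yr m-2 K-1 (small)
C_ocn    = 30.0   # effective ocean heat capacity, W yr m-2 K-1
k        = 2.5    # coupling strength of the land-ocean heat transport anomaly
phi      = 1.5    # land/ocean warming ratio the atmosphere tends to maintain
f        = 0.4    # land area / ocean area

steps_per_year, years = 20, 200
dt = 1.0 / steps_per_year
T_land = T_ocn = 0.0
for step in range(1, years * steps_per_year + 1):
    H = k * (T_land - phi * T_ocn)           # land-to-ocean heat transport anomaly
    T_land += dt / C_land * (F - lam_land * T_land - H)
    T_ocn  += dt / C_ocn  * (F - lam_ocn  * T_ocn  + f * H)
    if step in (5 * steps_per_year, 20 * steps_per_year, years * steps_per_year):
        yr = step // steps_per_year
        print(f"year {yr:3d}:  T_land/T_ocn = {T_land / T_ocn:4.2f},  H = {H:+5.2f} W m-2")
```

With these toy numbers the land/ocean warming ratio stays close to the prescribed value throughout, while the transport anomaly is largest early in the run and shrinks as the system approaches equilibrium, qualitatively mirroring the CO2-forced case described above.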
Abstract:
This study uses a bootstrap methodology to explicitly distinguish between skill and luck for 80 Real Estate Investment Trust Mutual Funds in the period January 1995 to May 2008. The methodology successfully captures non-normality in the idiosyncratic risk of the funds. Using unconditional, beta conditional and alpha-beta conditional estimation models, the results indicate that all but one fund demonstrates poor skill. Tests of robustness show that this finding is largely invariant to REIT market conditions and maturity.
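The skill-versus-luck bootstrap used in this kind of study resamples each fund's residuals under the null of zero alpha and asks whether the fund's actual alpha t-statistic could plausibly have arisen by chance. A single-factor, single-fund sketch follows; the factor model, sample length, and i.i.d. residual resampling are simplifying assumptions relative to the unconditional and conditional models of the paper.

```python
import numpy as np

def bootstrap_alpha_pvalue(r_fund, r_bench, n_boot=2000, seed=0):
    """Bootstrap p-value for the null 'alpha = 0: apparent skill is luck'.

    Fits r_fund = alpha + beta * r_bench + eps by OLS, rebuilds n_boot
    artificial return series with alpha forced to zero and resampled residuals,
    and compares the actual t(alpha) with the luck-only distribution."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(r_bench), r_bench])

    def t_alpha(y):
        coef = np.linalg.lstsq(X, y, rcond=None)[0]
        e = y - X @ coef
        s2 = e @ e / (len(y) - 2)
        cov = s2 * np.linalg.inv(X.T @ X)
        return coef[0] / np.sqrt(cov[0, 0]), coef, e

    t_obs, coef, resid = t_alpha(r_fund)
    t_null = np.empty(n_boot)
    for b in range(n_boot):
        e_star = rng.choice(resid, size=len(resid), replace=True)
        y_star = coef[1] * r_bench + e_star           # alpha forced to zero
        t_null[b], _, _ = t_alpha(y_star)
    return t_obs, np.mean(t_null >= t_obs)            # right-tail "skill" p-value

# toy data: a fund with no true skill (alpha = 0) over roughly 13 years of monthly returns
rng = np.random.default_rng(42)
r_bench = rng.normal(0.006, 0.04, 160)
r_fund = 0.0 + 0.9 * r_bench + rng.normal(0.0, 0.02, 160)
t_obs, p = bootstrap_alpha_pvalue(r_fund, r_bench)
print(f"t(alpha) = {t_obs:.2f},  bootstrap p-value (skill) = {p:.3f}")
```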