111 results for System Identification
at Queensland University of Technology - ePrints Archive
Abstract:
Fast thrust changes are important for authoritative control of VTOL micro air vehicles. Fixed-pitch rotors that alter thrust by varying rotor speed require high-bandwidth control systems to provide adequate performance. We develop a feedback compensator for a brushless hobby motor driving a custom rotor suitable for UAVs. The system plant is identified using step excitation experiments. The aerodynamic operating conditions of these rotors are unusual, so experiments are performed to characterise expected load disturbances. The plant and load models lead to a proportional controller design capable of significantly decreasing rise time and the propagation of disturbances, subject to bus voltage constraints.
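The effect of a proportional loop on a step-identified first-order plant can be sketched as follows. The plant gain, time constant and controller gain here are illustrative assumptions, not the values identified in the paper:

```python
# Sketch: proportional speed control of a first-order plant identified
# from a step response. K, tau and Kp are illustrative values only.
import math

K, tau = 40.0, 0.12     # assumed plant G(s) = K/(tau*s + 1) from step tests

def closed_loop_tau(Kp):
    """Closed-loop time constant with proportional gain Kp."""
    return tau / (1.0 + Kp * K)

def rise_time_10_90(tc):
    """10-90% rise time of a first-order response: tc * ln 9."""
    return tc * math.log(9.0)

Kp = 0.5                # gain kept modest, standing in for bus-voltage limits
print(f"open-loop rise time:   {1e3 * rise_time_10_90(tau):.1f} ms")
print(f"closed-loop rise time: {1e3 * rise_time_10_90(closed_loop_tau(Kp)):.1f} ms")
```

The closed-loop time constant shrinks by the factor 1 + Kp*K, which is the mechanism behind the reported rise-time reduction.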
Abstract:
This paper describes system identification, estimation and control of translational motion and heading angle for a cost-effective open-source quadcopter — the MikroKopter. The dynamics of its built-in sensors, roll and pitch attitude controller, and system latencies are determined and used to design a computationally inexpensive multi-rate velocity estimator that fuses data from the built-in inertial sensors and a low-rate onboard laser range finder. Control is performed using a nested loop structure that is also computationally inexpensive and incorporates different sensors. Experimental results for the estimator and closed-loop positioning are presented and compared with ground truth from a motion capture system.
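A one-axis caricature of such a multi-rate estimator: integrate high-rate accelerometer data, then blend in velocities derived from low-rate range measurements. The rates, gain and constant-velocity scenario are illustrative assumptions, not MikroKopter parameters:

```python
# Sketch: multi-rate velocity estimation, 100 Hz IMU prediction with a
# 10 Hz range-derived correction. All values are illustrative.
import random

random.seed(0)
dt_imu, laser_every = 0.01, 10      # 100 Hz IMU, laser every 10th step (10 Hz)
L = 0.2                             # blend gain (assumed)

v_true = 1.0                        # constant true velocity
v_est, pos_true, last_pos = 0.0, 0.0, 0.0
for k in range(1, 501):
    accel = random.gauss(0.0, 0.05)         # noisy accelerometer, true accel = 0
    v_est += accel * dt_imu                 # high-rate dead-reckoning step
    pos_true += v_true * dt_imu
    if k % laser_every == 0:                # low-rate laser-derived velocity
        v_meas = (pos_true - last_pos) / (laser_every * dt_imu)
        last_pos = pos_true
        v_est += L * (v_meas - v_est)       # correction step

print(f"estimated velocity: {v_est:.2f} m/s (true {v_true:.2f})")
```

The low-rate correction bounds the drift that pure accelerometer integration would accumulate.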
Abstract:
The motion response of marine structures in waves can be studied using finite-dimensional linear-time-invariant approximating models. These models, obtained using system identification with data computed by hydrodynamic codes, find application in offshore training simulators, hardware-in-the-loop simulators for positioning control testing, and also in initial designs of wave-energy conversion devices. Different proposals have appeared in the literature to address the identification problem in both time and frequency domains, and recent work has highlighted the superiority of the frequency-domain methods. This paper summarises practical frequency-domain estimation algorithms that use constraints on model structure and parameters to refine the search for approximating parametric models. Practical issues associated with the identification are discussed, including the influence of radiation model accuracy on force-to-motion models, which are usually the ultimate modelling objective. The illustration examples in the paper are obtained using a freely available MATLAB toolbox developed by the authors, which implements the estimation algorithms described.
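The core of frequency-domain estimation is fitting a parametric model to frequency-response samples. A minimal sketch, assuming a first-order model and noiseless synthetic data (not the constrained algorithms of the toolbox), linearises the fit by working with 1/H:

```python
# Sketch: fit H(jw) = K/(1 + jw*tau) to FRF samples by linear least
# squares on 1/H = 1/K + jw*tau/K. Synthetic, noiseless data.
K_true, tau_true = 2.0, 0.5
freqs = [0.1 * n for n in range(1, 50)]
H = [K_true / (1.0 + 1j * w * tau_true) for w in freqs]   # FRF samples

inv = [1.0 / h for h in H]
K_est = 1.0 / (sum(z.real for z in inv) / len(inv))       # Re(1/H) = 1/K
slope = sum(z.imag * w for z, w in zip(inv, freqs)) / sum(w * w for w in freqs)
tau_est = slope * K_est                                   # Im(1/H) = w*tau/K
print(f"K ≈ {K_est:.3f}, tau ≈ {tau_est:.3f}")
```

Real algorithms fit higher-order rational models and weight out noise, but the linearise-and-solve pattern is the same.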
Abstract:
Low voltage distribution networks feature a high degree of load unbalance, and the addition of rooftop photovoltaics is driving further unbalance in the network. Single-phase consumers are distributed across the phases, but even if the consumer distribution was well balanced when the network was constructed, changes will occur over time. Distribution transformer losses are increased by unbalanced loadings. The estimation of transformer losses is a necessary part of the routine upgrading and replacement of transformers, and identifying the phase connection of each household allows a precise estimation of the phase loadings and total transformer loss. This paper presents a new technique, with preliminary test results, for automatically identifying the phase of each customer by correlating voltage information from the utility's transformer system with voltage information from customer smart meters. The technique is novel in that it is based purely upon time series of electrical voltage measurements taken at the household and at the distribution transformer. Experimental results using a combination of electrical power and current measurements from real smart meter datasets demonstrate the performance of the technique.
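The correlation idea can be sketched in a few lines: match each customer's voltage time series to the transformer phase with which it correlates best. The series below are synthetic (independent fluctuations per phase plus meter noise), not real smart-meter data:

```python
# Sketch: phase identification by correlating a customer voltage series
# against the three transformer phase voltages. Synthetic data only.
import random

random.seed(1)
T = 200
phase_v = {p: [230 + random.gauss(0, 1.5) for _ in range(T)] for p in "ABC"}

def customer_series(phase, drop=2.0, noise=0.3):
    """Customer voltage: the phase voltage minus a feeder drop, plus meter noise."""
    return [v - drop + random.gauss(0, noise) for v in phase_v[phase]]

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

cust = customer_series("B")
best = max("ABC", key=lambda p: pearson(cust, phase_v[p]))
print(f"identified phase: {best}")
```

The customer's series inherits the fluctuations of its own phase, so the correct phase dominates the correlation even through the feeder drop and meter noise.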
Abstract:
In this thesis, a new technique has been developed for determining the composition of a collection of loads including induction motors. The application would be to provide a representation of the dynamic electrical load of Brisbane so that the ability of the power system to survive a given fault can be predicted. Most of the work on load modelling to date has been on post-disturbance analysis, not on continuous on-line models for loads. The post-disturbance methods are unsuitable for load modelling where the aim is to determine the control action or a safety margin for a specific disturbance. This thesis is therefore based on on-line load models. Dr. Tania Parveen considers 10 induction motors with different power ratings, inertia and torque damping constants to validate the approach, and their composite models are developed with different percentage contributions for each motor. This thesis also shows how measurements of a composite load respond to normal power system variations, and how this information can be used to decompose the load continuously and to characterise it in terms of the sizes and proportions of its motor loads.
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean-square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project them onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
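Fitting the generalized Gaussian shape parameter can be sketched with moment matching, a common stand-in for the least-squares fit described in the abstract: the ratio m1²/m2 of absolute-moment to second moment is a monotone function of the shape parameter, so it can be inverted by bisection. The sample data here are plain Gaussian draws (shape exactly 2), not real wavelet coefficients:

```python
# Sketch: generalized Gaussian shape estimation by moment matching.
# For shape beta, m1^2/m2 = gamma(2/beta)^2 / (gamma(1/beta)*gamma(3/beta)).
import math, random

def moment_ratio(beta):
    """m1^2/m2 for a generalized Gaussian with shape beta (monotone in beta)."""
    g = math.gamma
    return g(2.0 / beta) ** 2 / (g(1.0 / beta) * g(3.0 / beta))

def fit_shape(samples, lo=0.2, hi=5.0):
    """Match the sample moment ratio by bisection on the monotone curve."""
    m1 = sum(abs(x) for x in samples) / len(samples)
    m2 = sum(x * x for x in samples) / len(samples)
    target = m1 * m1 / m2
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if moment_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(2)
coeffs = [random.gauss(0.0, 1.0) for _ in range(20000)]   # Gaussian: shape = 2
print(f"estimated shape parameter: {fit_shape(coeffs):.2f}")
```

For Gaussian samples the ratio is 2/π ≈ 0.637, and the bisection recovers a shape near 2; heavier-tailed wavelet coefficients give smaller ratios and smaller shape values.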
Abstract:
In the ocean science community, researchers have begun employing novel sensor platforms as integral pieces in oceanographic data collection, which have significantly advanced the study and prediction of complex and dynamic ocean phenomena. These innovative tools are able to provide scientists with data at unprecedented spatiotemporal resolutions. This paper focuses on the newly developed Wave Glider platform from Liquid Robotics. This vehicle produces forward motion by harvesting abundant natural energy from ocean waves, and provides a persistent ocean presence for detailed ocean observation. This study is targeted at determining a kinematic model for offline planning that provides an accurate estimation of the vehicle speed for a desired heading and set of environmental parameters. Given the significant wave height, ocean surface and subsurface currents, and wind speed and direction, we formulate a system identification problem that provides the vehicle's speed over a range of possible headings.
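A toy version of such a kinematic identification: model speed as a linear function of significant wave height and the along-track current component, and fit the coefficients by least squares. The model form, coefficients and synthetic trials are illustrative assumptions, not the Wave Glider model from the paper:

```python
# Sketch: fit speed = a*Hs + b*(current along heading) by least squares
# from synthetic trials. All values are illustrative.
import math, random

random.seed(3)
a_true, b_true = 0.45, 1.0      # assumed wave-height and current coefficients

def observed_speed(hs, cur, rel_bearing):
    """Synthetic trial: wave-driven speed plus along-track current, plus noise."""
    return a_true * hs + b_true * cur * math.cos(rel_bearing) + random.gauss(0, 0.02)

rows = []
for _ in range(300):
    hs = random.uniform(0.5, 3.0)
    cur = random.uniform(0.0, 0.6)
    rb = random.uniform(0.0, 2 * math.pi)
    rows.append((hs, cur * math.cos(rb), observed_speed(hs, cur, rb)))

# least squares for y = a*x1 + b*x2 via 2x2 normal equations
s11 = sum(x1 * x1 for x1, _, _ in rows)
s12 = sum(x1 * x2 for x1, x2, _ in rows)
s22 = sum(x2 * x2 for _, x2, _ in rows)
r1 = sum(x1 * y for x1, _, y in rows)
r2 = sum(x2 * y for _, x2, y in rows)
det = s11 * s22 - s12 * s12
a_est = (s22 * r1 - s12 * r2) / det
b_est = (s11 * r2 - s12 * r1) / det
print(f"fitted: {a_est:.2f} m/s per metre of Hs, current coupling {b_est:.2f}")
```

Once fitted, evaluating the model over all headings gives the speed-versus-direction profile an offline planner needs.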
Abstract:
This paper discusses the principal domains of auto- and cross-trispectra. It is shown that the cumulant and moment based trispectra are identical except on certain planes in trifrequency space. If these planes are avoided, their principal domains can be derived by considering the regions of symmetry of the fourth order spectral moment. The fourth order averaged periodogram will then serve as an estimate for both cumulant and moment trispectra. Statistics of estimates of normalised trispectra or tricoherence are also discussed.
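The fourth-order averaged periodogram mentioned above can be sketched directly: average X(k1)X(k2)X(k3)X*(k1+k2+k3) over segments. With phase-coupled sinusoids the segment phases cancel and the estimate is large; with independent phases it averages toward zero. Bins, segment count and signals are illustrative:

```python
# Sketch: trispectrum estimation via the fourth-order averaged
# periodogram, comparing phase-coupled and uncoupled sinusoids.
import cmath, math, random

random.seed(4)
N, segs = 64, 40
k1, k2, k3 = 4, 7, 10                 # k1 + k2 + k3 = 21

def dft_bin(x, k):
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))

def trispec(coupled):
    """Average X[k1]X[k2]X[k3]conj(X[k1+k2+k3]) over random-phase segments."""
    acc = 0j
    for _ in range(segs):
        p1, p2, p3 = (random.uniform(0, 2 * math.pi) for _ in range(3))
        p4 = (p1 + p2 + p3) if coupled else random.uniform(0, 2 * math.pi)
        x = [math.cos(2 * math.pi * k1 * n / N + p1)
             + math.cos(2 * math.pi * k2 * n / N + p2)
             + math.cos(2 * math.pi * k3 * n / N + p3)
             + math.cos(2 * math.pi * (k1 + k2 + k3) * n / N + p4)
             for n in range(N)]
        acc += (dft_bin(x, k1) * dft_bin(x, k2) * dft_bin(x, k3)
                * dft_bin(x, k1 + k2 + k3).conjugate())
    return abs(acc / segs)

t_coupled, t_uncoupled = trispec(True), trispec(False)
print(f"phase-coupled: {t_coupled:.0f}, uncoupled: {t_uncoupled:.0f}")
```

Normalising such an estimate by the relevant power spectra gives the tricoherence statistics the paper discusses.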
Abstract:
Time-domain models of marine structures based on frequency domain data are usually built upon the Cummins equation. This type of model is a vector integro-differential equation which involves convolution terms. These convolution terms are not convenient for analysis and design of motion control systems. In addition, these models are not efficient with respect to simulation time or ease of implementation in standard simulation packages. For these reasons, different methods have been proposed in the literature as approximate alternative representations of the convolutions. Because the convolution is a linear operation, different approaches can be followed to obtain an approximately equivalent linear system in the form of either transfer function or state-space models. This process involves the use of system identification, and several options are available depending on how the identification problem is posed. This raises the question of whether one method is better than the others. This paper therefore has three objectives. The first objective is to revisit some of the methods for replacing the convolutions, which have been reported in different areas of analysis of marine systems: hydrodynamics, wave energy conversion, and motion control systems. The second objective is to compare the different methods in terms of complexity and performance. For this purpose, a model for the response in the vertical plane of a modern containership is considered. The third objective is to describe the implementation of the resulting model in the standard simulation environment Matlab/Simulink.
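The replacement idea can be illustrated with the simplest possible kernel. For k(t) = a·exp(-b·t), the convolution μ(t) = ∫ k(t-s)u(s)ds satisfies μ' = -b·μ + a·u, so a single state reproduces it exactly; fitting higher-order state-space models to identified kernels generalises this. The kernel and input below are illustrative, not a containership model:

```python
# Sketch: a one-state linear system replacing a convolution term.
# Kernel k(t) = a*exp(-b*t) and input u(t) = sin(3t) are illustrative.
import math

a, b = 2.0, 1.5
dt, T = 0.001, 4.0
steps = int(T / dt)
u = [math.sin(3.0 * n * dt) for n in range(steps)]

# direct convolution quadrature: mu(T) ≈ sum_m k((N-1-m)*dt) * u[m] * dt
conv = sum(a * math.exp(-b * (steps - 1 - m) * dt) * u[m] * dt
           for m in range(steps))

# one-state replacement: mu' = -b*mu + a*u (forward Euler)
mu = 0.0
for n in range(steps):
    mu += dt * (-b * mu + a * u[n])

print(f"convolution: {conv:.3f}, state-space: {mu:.3f}")
```

The state recursion costs O(1) per step, whereas the convolution sum grows with the history length, which is exactly the simulation-efficiency argument made above.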
Abstract:
This paper presents the application of a statistical method for model structure selection of lift-drag and viscous damping components in ship manoeuvring models. The damping model is posed as a family of linear stochastic models, which is postulated based on previous work in the literature. Then a nested test of hypothesis problem is considered. The testing reduces to a recursive comparison of two competing models, for which optimal tests in the Neyman sense exist. The method yields a preferred model structure and its initial parameter estimates. Alternatively, the method can give a reduced set of likely models. Using simulated data we study how the selection method performs when there is both uncorrelated and correlated noise in the measurements. The first case is related to instrumentation noise, whereas the second case is related to spurious wave-induced motion often present during sea trials. We then consider the model structure selection of a modern high-speed trimaran ferry from full scale trial data.
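One step of such a nested comparison can be sketched with an F statistic between a reduced and a fuller damping model. The model forms (linear damping versus an added v|v| term), the simulated data and the ≈3.84 critical value are illustrative, not the paper's manoeuvring models or its Neyman-optimal tests:

```python
# Sketch: nested model comparison, y = a*v versus y = a*v + c*v*|v|,
# decided by an F statistic on the drop in residual sum of squares.
import random

random.seed(5)
n = 200
v = [random.uniform(-2.0, 2.0) for _ in range(n)]
y = [1.2 * vi + 0.8 * vi * abs(vi) + random.gauss(0, 0.1) for vi in v]

def fit_rss(X):
    """Least-squares residual sum of squares for 1 or 2 feature columns."""
    p = len(X)
    A = [[sum(X[i][k] * X[j][k] for k in range(n)) for j in range(p)]
         for i in range(p)]
    c = [sum(X[i][k] * y[k] for k in range(n)) for i in range(p)]
    if p == 1:
        theta = [c[0] / A[0][0]]
    else:
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        theta = [(A[1][1] * c[0] - A[0][1] * c[1]) / det,
                 (A[0][0] * c[1] - A[1][0] * c[0]) / det]
    return sum((y[k] - sum(theta[i] * X[i][k] for i in range(p))) ** 2
               for k in range(n))

rss0 = fit_rss([v])                                   # reduced: linear term only
rss1 = fit_rss([v, [vi * abs(vi) for vi in v]])       # full: add the v|v| term
F = (rss0 - rss1) / (rss1 / (n - 2))
print(f"F = {F:.1f} -> {'keep' if F > 3.84 else 'drop'} the v|v| term")
```

Recursing this accept/reject decision through a family of nested candidates yields a preferred structure, which is the shape of the selection procedure described above.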
Abstract:
Commodity price modeling is normally approached in terms of structural time-series models, in which the different components (states) have a financial interpretation. The parameters of these models can be estimated using maximum likelihood. This approach results in a non-linear parameter estimation problem and thus a key issue is how to obtain reliable initial estimates. In this paper, we focus on the initial parameter estimation problem for the Schwartz-Smith two-factor model commonly used in asset valuation. We propose the use of a two-step method. The first step considers a univariate model based only on the spot price and uses a transfer function model to obtain initial estimates of the fundamental parameters. The second step uses the estimates obtained in the first step to initialize a re-parameterized state-space-innovations based estimator, which includes information related to future prices. The second step refines the estimates obtained in the first step and also gives estimates of the remaining parameters in the model. This paper is part tutorial in nature and gives an introduction to aspects of commodity price modeling and the associated parameter estimation problem.
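The flavour of the first step can be sketched on the short-term factor alone: an Ornstein-Uhlenbeck state sampled discretely is an AR(1), so an AR(1) regression on (de-trended) log spot prices gives an initial mean-reversion estimate. The simulation below uses only a short-term factor with illustrative parameters; it is not the full two-factor estimator:

```python
# Sketch: initial estimate of the short-term mean-reversion rate kappa
# from an AR(1) fit, using kappa = -ln(phi)/dt. Illustrative values.
import math, random

random.seed(6)
kappa_true, dt, sigma = 1.5, 1 / 52, 0.3      # weekly sampling
phi = math.exp(-kappa_true * dt)
chi = [0.0]
for _ in range(2000):
    chi.append(phi * chi[-1] + sigma * math.sqrt(dt) * random.gauss(0, 1))

# AR(1) slope by least squares, then map back to continuous time
num = sum(chi[t] * chi[t + 1] for t in range(len(chi) - 1))
den = sum(chi[t] * chi[t] for t in range(len(chi) - 1))
phi_hat = num / den
kappa_hat = -math.log(phi_hat) / dt
print(f"kappa ≈ {kappa_hat:.2f} (true {kappa_true})")
```

Such a rough estimate is exactly the kind of initial value the second, state-space-innovations step then refines with futures-price information.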
Abstract:
This thesis presents an approach for vertical infrastructure inspection using a vertical take-off and landing (VTOL) unmanned aerial vehicle and shared autonomy. Inspecting vertical structures such as light and power distribution poles is a difficult task. There are challenges involved with developing such an inspection system, such as flying in close proximity to a target while maintaining a fixed stand-off distance from it. The contributions of this thesis fall into three main areas. Firstly, an approach to vehicle dynamic modeling is evaluated in simulation and experiments. Secondly, EKF-based state estimators are demonstrated, as well as estimator-free approaches such as image-based visual servoing (IBVS), validated with motion capture ground truth data. Thirdly, an integrated pole inspection system comprising a VTOL platform with human-in-the-loop control (shared autonomy) is demonstrated. These contributions are comprehensively explained through a series of published papers.