21 results for Dynamic state
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
In this brief, a hybrid filter algorithm is developed to deal with the state estimation (SE) problem for power systems by taking into account the impact of phasor measurement units (PMUs). Our aim is to include PMU measurements when designing dynamic state estimators for power systems with traditional measurements. Because data dropouts inevitably occur in the transmission channels that carry traditional measurements from the meters to the control center, the missing-measurement phenomenon is also tackled in the state estimator design. In the framework of the extended Kalman filter (EKF) algorithm, the PMU measurements are treated as inequality constraints on the states with the aid of a statistical criterion, and the addressed SE problem then becomes a constrained optimization one based on the probability-maximization method. The resulting constrained optimization problem is solved using the particle swarm optimization algorithm together with the penalty function approach. The proposed algorithm is applied to estimate the states of power systems with both traditional and PMU measurements in the presence of probabilistic data missing. Extensive simulations are carried out on the IEEE 14-bus test system, and it is shown that the proposed algorithm gives much improved estimation performance over the traditional EKF method.
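The constrained update described above can be sketched as follows: a standard EKF measurement update, followed by enforcing PMU-derived inequality constraints on the state through a penalty function minimized by a small particle swarm. This is a minimal illustration only, not the authors' implementation; the function name, the box-constraint form of the PMU bounds, and all PSO parameters are assumptions.

```python
import numpy as np

def ekf_update_with_pmu_penalty(x_pred, P_pred, z, h, H, R,
                                pmu_lo, pmu_hi, n_particles=40,
                                iters=100, rho=1e4, rng=None):
    """Hypothetical sketch: EKF update, then PMU inequality constraints
    pmu_lo <= x <= pmu_hi enforced via a penalty function minimized by
    particle swarm optimization (PSO)."""
    rng = np.random.default_rng() if rng is None else rng
    # Standard EKF measurement update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_ekf = x_pred + K @ (z - h(x_pred))
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    # Penalized objective: stay close to the EKF estimate (Mahalanobis
    # distance) while penalizing violations of the PMU constraints.
    P_inv = np.linalg.inv(P)
    def cost(x):
        d = x - x_ekf
        viol = np.maximum(0.0, pmu_lo - x) + np.maximum(0.0, x - pmu_hi)
        return d @ P_inv @ d + rho * np.sum(viol ** 2)
    # Minimal PSO over the state space
    dim = len(x_ekf)
    pos = x_ekf + 0.5 * rng.standard_normal((n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        c = np.array([cost(p) for p in pos])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], c[better]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, P
```

With an unconstrained EKF estimate that violates an upper bound, the penalized optimum sits essentially on the constraint boundary, which is the behaviour the penalty-function approach is designed to produce.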
Abstract:
In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement units are subject to quantization described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with such introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.
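The logarithmic quantizer mentioned above has the useful property that its error is sector-bounded, which is what allows the quantization error to be treated as a norm-bounded uncertainty. A minimal sketch (parameter values are illustrative, not the paper's):

```python
import numpy as np

def log_quantizer(v, rho=0.5, u0=1.0):
    """Sketch of a logarithmic quantizer with levels u_i = u0 * rho**i.
    Its error is sector-bounded: |q(v) - v| <= delta * |v| with
    delta = (1 - rho) / (1 + rho), which lets the quantization error be
    treated as a norm-bounded uncertainty in the filter design."""
    v = np.asarray(v, dtype=float)
    delta = (1 - rho) / (1 + rho)
    q = np.zeros_like(v)
    nz = v != 0
    mag = np.abs(v[nz])
    # Select the quantization level whose sector contains |v|
    i = np.floor(np.log(mag * (1 - delta) / u0) / np.log(rho))
    q[nz] = np.sign(v[nz]) * u0 * rho ** i
    return q
```

A denser level set (rho closer to 1) gives a smaller sector bound delta, i.e. a smaller norm-bounded uncertainty for the estimator to absorb.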
Abstract:
In this paper, we introduce an efficient method for particle selection when tracking objects in complex scenes. First, we improve the proposal distribution function of the tracking algorithm by including the current observation, reducing the cost of evaluating particles with a very low likelihood. In addition, we use a partitioned sampling approach to decompose the dynamic state into several stages. This makes it possible to deal with high-dimensional states without an excessive computational cost. To represent the color distribution, the appearance of the tracked object is modelled by sampled pixels. Based on this representation, the probability of any observation is estimated using non-parametric techniques in color space. As a result, we obtain a Probability color Density Image (PDI) in which each pixel indicates its membership of the target color model. In this way, the evaluation of all particles is accelerated by computing the likelihood p(z|x) using the Integral Image of the PDI.
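The O(1) rectangle-sum trick behind the final step can be sketched as follows: once the Integral Image of the PDI is computed, scoring a particle's rectangular region needs only four lookups instead of a per-pixel sum. (A generic sketch; the function names and the use of the mean probability as the region score are assumptions.)

```python
import numpy as np

def integral_image(pdi):
    """Cumulative 2-D sum: any rectangle sum then costs four lookups."""
    return np.cumsum(np.cumsum(pdi, axis=0), axis=1)

def region_likelihood(ii, top, left, h, w):
    """Mean target-colour probability inside a particle's h-by-w box,
    evaluated in O(1) from the integral image ii."""
    b, r = top + h - 1, left + w - 1
    total = ii[b, r]
    if top > 0:
        total -= ii[top - 1, r]
    if left > 0:
        total -= ii[b, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total / (h * w)
```

Because the integral image is built once per frame, evaluating hundreds of particles becomes a constant-time operation per particle regardless of region size.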
Abstract:
Even though computational power used for structural analysis is ever increasing, there is still a fundamental need for testing in structural engineering, either for validation of complex numerical models or to assess material behaviour. In addition to analysis of structures using scale models, many structural engineers are aware to some extent of cyclic and shake-table test methods, but less so of ‘hybrid testing’. The latter is a combination of physical testing (e.g. hydraulic actuators) and computational modelling (e.g. finite element modelling). Over the past 40 years, hybrid testing of engineering structures has developed from concept through to maturity to become a reliable and accurate dynamic testing technique. The hybrid test method provides users with some additional benefits that standard dynamic testing methods do not, and the method is more cost-effective in comparison to shake-table testing. This article aims to provide the reader with a basic understanding of the hybrid test method, including its contextual development and potential as a dynamic testing technique.
Abstract:
This paper presents a new strategy, “state-by-state transient screening”, for kinetic characterization of states of a multicomponent catalyst as applied to TAP pulse-response experiments. The key idea is to perform an insignificant chemical perturbation of the catalytic system so that the known essential characteristics of the catalyst (e.g. oxidation degree) do not change during the experiment. Two types of catalytic substances can be distinguished: catalyst state substances, which determine the catalyst state, and catalyst dynamic substances, which are created by the perturbation. The general methodological and theoretical framework for multi-pulse TAP experiments is developed, and the general model for a one-pulse TAP experiment is solved. The primary kinetic characteristics, basic kinetic coefficients, are extracted from diffusion–reaction data and calculated as functions of experimentally measured exit-flow moments without assumptions regarding the detailed kinetic mechanism. The new strategy presented in this paper provides essential information, which can be a basis for developing a detailed reaction mechanism. The theoretical results are illustrated using furan oxidation over a VPO catalyst.
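The moment computation underlying the basic kinetic coefficients can be sketched generically: the raw time moments of a measured exit-flow curve are obtained by numerical quadrature, with no assumption about the detailed kinetic mechanism. (The mapping from moments to the paper's specific coefficients is not reproduced here; the function name is illustrative.)

```python
import numpy as np

def exit_flow_moments(t, flow, n_max=2):
    """Raw time moments M_n = integral of t**n * F(t) dt of a TAP
    exit-flow curve F(t), computed by trapezoidal quadrature."""
    t = np.asarray(t, dtype=float)
    flow = np.asarray(flow, dtype=float)
    moments = []
    for n in range(n_max + 1):
        y = flow * t ** n
        # Trapezoidal rule on a possibly non-uniform time grid
        moments.append(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))
    return moments
```

For an exponentially decaying exit flow F(t) = exp(-t), the zeroth, first and second moments are 1, 1 and 2 respectively, which the quadrature reproduces on a sufficiently fine grid.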
Abstract:
This paper points out a serious flaw in dynamic multivariate statistical process control (MSPC). The principal component analysis of a linear time series model that is employed to capture auto- and cross-correlation in recorded data may produce a considerable number of variables to be analysed. To give a dynamic representation of the data (based on variable correlation) and circumvent the production of a large time-series structure, a linear state space model is used here instead. The paper demonstrates that, by incorporating a state space model, the number of variables to be analysed dynamically can be considerably reduced compared to conventional dynamic MSPC techniques.
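The dimensionality argument can be illustrated with a generic Hotelling T² statistic computed on state estimates: with a state space model of order n, the monitored vector has n components, whereas a lagged-variable PCA model of m outputs with L lags analyses m(L+1) columns. (A generic monitoring sketch, not the paper's specific formulation.)

```python
import numpy as np

def hotelling_t2(X):
    """Hotelling T^2 for each sample (row) of X. Applied to Kalman-filter
    state estimates, the monitored dimension is the model order n rather
    than the m*(lags+1) columns of a lagged-data PCA model."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)          # sample covariance (N-1 denominator)
    S_inv = np.linalg.inv(np.atleast_2d(S))
    D = X - mu
    # Per-sample Mahalanobis distance squared: d_i' S^-1 d_i
    return np.einsum('ij,jk,ik->i', D, S_inv, D)
```

A useful sanity check: with the sample covariance, the T² values of N samples in p dimensions always sum to exactly (N-1)·p.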
Abstract:
Extreme states of matter such as warm dense matter (WDM) and dense strongly coupled plasmas (DSCP) play a key role in many high-energy-density experiments; however, creating WDM and DSCP in a manner that can be quantified is not readily feasible. In this paper, isochoric heating of matter by intense heavy ion beams in spherical symmetry is investigated for WDM and DSCP research: the heating times are long (100 ns), the samples are macroscopically large (mm-size) and the symmetry is advantageous for diagnostic purposes. A dynamic confinement scheme in spherical symmetry is proposed which accommodates ion beam heating times that are long even on the hydrodynamic time scale of the target response. A particular selection of low-Z target tamper and x-ray probe radiation parameters makes it possible to identify the x-ray scattering from the target material and use it for independent measurements of the charge state Z* of the material under study.
Abstract:
This paper introduces a novel modelling framework for identifying dynamic models of systems that are under feedback control. These models are identified under closed-loop conditions and produce a joint representation that includes both the plant and controller models in state space form. The joint plant/controller model is identified using subspace model identification (SMI), after which the plant model is separated from the identified one. Compared to previous research, this work (i) proposes a new modelling framework for identifying closed-loop systems, (ii) introduces a generic structure to represent the controller and (iii) explains how the new framework gives rise to a simplified determination of the plant models. In contrast, the use of the conventional modelling approach renders the separation of the plant model a difficult task. The benefits of using the new modelling method are demonstrated using a number of application studies.
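The core subspace idea can be sketched with its simplest classical instance: recovering a state-space realization from a Hankel matrix of impulse-response data via an SVD (the Ho-Kalman step). This is a deliberately simplified, open-loop stand-in for the SMI method of the paper, which works from closed-loop input/output data instead.

```python
import numpy as np

def ho_kalman(markov, n):
    """Recover a state-space realization (A, B, C) of order n from
    impulse-response Markov parameters markov[k] = C A^k B (SISO case)
    via a Hankel-matrix SVD -- the classical Ho-Kalman realization step."""
    N = len(markov) // 2
    H0 = np.array([[markov[i + j] for j in range(N)] for i in range(N)])
    H1 = np.array([[markov[i + j + 1] for j in range(N)] for i in range(N)])
    U, s, Vt = np.linalg.svd(H0)
    sqrt_s = np.sqrt(s[:n])
    O = U[:, :n] * sqrt_s                # extended observability matrix
    Ctrb = (Vt[:n, :].T * sqrt_s).T      # extended controllability matrix
    A = np.linalg.pinv(O) @ H1 @ np.linalg.pinv(Ctrb)
    B = Ctrb[:, :1]
    C = O[:1, :]
    return A, B, C
```

Splitting the singular values evenly between the observability and controllability factors yields a balanced realization; the identified (A, B, C) reproduces the measured Markov parameters.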
Abstract:
From the late 1970s onwards, the view that government intervention could provide a means of overcoming market failure in advanced economies was increasingly questioned. For some, intervention was to be discouraged because it interfered with individual liberty. For others, what was problematic was the welfare economist's assumption of an autonomous state acting in the public interest. Finally, there was the issue of the state's ability to achieve what it set out to do. Government failure, it was argued, was just as pervasive as market failure and no antidote to it. This paper critically evaluates such arguments in relation to competition, industrial change, innovation, and competitive advantage in production.
Abstract:
Deformation localisation is the main reason for material failure in cold forging of titanium alloys and is thus closely related to the production yield of cold forging. To study the influence of process parameters on dynamic compression, a numerical dynamic compression model for titanium alloys has been constructed that takes into account material constitutive behaviour, physical parameters and process parameters. By adjusting the process parameters, the severity of strain localisation and the stress state in the localised zone can be controlled, thus enhancing the compression performance of titanium alloys.
Abstract:
This paper investigates a dynamic buffer management scheme for QoS control of multimedia services in beyond-3G wireless systems. The scheme is studied in the context of the state-of-the-art 3.5G system, i.e. the High Speed Downlink Packet Access (HSDPA), which enhances 3G UMTS to support high-speed packet switched services. Unlike earlier systems, UMTS-evolved systems from HSDPA onwards incorporate mechanisms such as packet scheduling and HARQ in the base station, necessitating data buffering at the air interface. This introduces a potential bottleneck to end-to-end communication. Hence, buffer management at the air interface is crucial for end-to-end QoS support of multimedia services with multiplexed parallel diverse flows, such as video and data in the same end-user session. The dynamic buffer management scheme for HSDPA multimedia sessions with aggregated real-time and non-real-time flows is investigated via extensive HSDPA simulations. The impact of the scheme on end-to-end traffic performance is evaluated with an example multimedia session comprising a real-time streaming flow concurrent with a TCP-based non-real-time flow. Results demonstrate that the scheme can guarantee the end-to-end QoS of the real-time streaming flow, whilst simultaneously protecting the non-real-time flow from starvation, resulting in improved end-to-end throughput performance.
Abstract:
The proliferation of mobile devices in society accessing data via the ‘cloud’ is imposing a dramatic increase in the amount of information to be stored on the hard disk drives (HDDs) used in servers. Forecasts are that areal densities will need to increase by as much as 35% compound per annum, and that by 2020 cloud storage capacity will be around 7 zettabytes, corresponding to areal densities of 2 Tb/in². This requires increased performance from the magnetic pole of the electromagnetic writer in the read/write head of the HDD. Current state-of-the-art writing is undertaken by a morphologically complex magnetic pole of sub-100 nm dimensions, in an environment of engineered magnetic shields, and it needs to deliver a strong directional magnetic field to areas on the recording media of around 50 nm x 13 nm. This points to the need for a method to perform direct quantitative measurements of the magnetic field generated by the write pole at the nanometer scale. Here we report the complete in situ quantitative mapping of the magnetic field generated by a functioning write pole in operation using electron holography. Opportunistically, it points the way towards a new nanoscale magnetic field source to further develop in situ transmission electron microscopy.
Abstract:
In this paper the evolution of a time-domain dynamic identification technique based on a statistical moment approach is presented. This technique can be used for structures under base random excitation in both the linear and the nonlinear state. By applying Itô stochastic calculus, special algebraic equations can be obtained that depend on the statistical moments of the response of the system to be identified. Such equations can be used for the dynamic identification of the mechanical parameters and of the input. The above equations, differently from many techniques in the literature, show the possibility of identifying the dissipation characteristics independently of the input. The paper first presents the original formulation of this technique, applicable to nonlinear systems and based on the use of a restricted class of potential models. A second formulation of the technique, applicable to every kind of linear system and based on the use of a class of linear models characterized by a mass-proportional damping matrix, is then described.
Abstract:
One of the most widely used techniques in computer vision for foreground detection is to model each background pixel as a Mixture of Gaussians (MoG). While this is effective for a static camera with a fixed or a slowly varying background, it fails to handle any fast, dynamic movement in the background. In this paper, we propose a generalised framework, called region-based MoG (RMoG), that takes into consideration neighbouring pixels while generating the model of the observed scene. The model equations are derived from Expectation Maximisation theory for batch mode, and stochastic approximation is used for online mode updates. We evaluate our region-based approach against ten sequences containing dynamic backgrounds, and show that the region-based approach provides a performance improvement over the traditional single pixel MoG. For feature and region sizes that are equal, the effect of increasing the learning rate is to reduce both true and false positives. Comparison with four state-of-the-art approaches shows that RMoG outperforms the others in reducing false positives whilst still maintaining reasonable foreground definition. Lastly, using the ChangeDetection (CDNet 2014) benchmark, we evaluated RMoG against numerous surveillance scenes and found it to be amongst the leading performers for dynamic background scenes, whilst providing comparable performance for other commonly occurring surveillance scenes.
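The per-pixel baseline that RMoG generalizes can be sketched as follows: each pixel keeps a small mixture of Gaussians that is updated online, and a pixel is foreground when its value matches none of the components ranked as background. The parameter values and the scalar-intensity simplification are illustrative; RMoG additionally lets neighbouring pixels contribute to each pixel's mixture.

```python
import numpy as np

class PixelMoG:
    """Minimal per-pixel Mixture-of-Gaussians background model for a
    scalar intensity (a Stauffer-Grimson-style baseline)."""

    def __init__(self, k=3, alpha=0.05, var0=15.0, match_sigmas=2.5,
                 bg_ratio=0.7):
        self.alpha, self.var0 = alpha, var0
        self.match_sigmas, self.bg_ratio = match_sigmas, bg_ratio
        self.w = np.full(k, 1.0 / k)     # component weights
        self.mu = np.zeros(k)            # component means
        self.var = np.full(k, var0)      # component variances

    def update(self, x):
        """Online update with pixel value x; returns True if x is foreground."""
        d2 = (x - self.mu) ** 2
        matches = d2 < (self.match_sigmas ** 2) * self.var
        self.w *= (1 - self.alpha)
        if matches.any():
            # Update the closest matching component
            m = int(np.argmin(np.where(matches, d2 / self.var, np.inf)))
            self.w[m] += self.alpha
            self.mu[m] += self.alpha * (x - self.mu[m])
            self.var[m] += self.alpha * (d2[m] - self.var[m])
        else:
            # No match: replace the weakest component with a new one at x
            m = int(np.argmin(self.w))
            self.w[m], self.mu[m], self.var[m] = self.alpha, x, self.var0
        self.w /= self.w.sum()
        # Background components: highest w/sigma ranking covering bg_ratio mass
        order = np.argsort(-self.w / np.sqrt(self.var))
        cum = np.cumsum(self.w[order])
        bg = set(order[: int(np.searchsorted(cum, self.bg_ratio)) + 1])
        return not (matches.any() and m in bg)
```

After enough frames of a stable background, the dominant component absorbs nearly all the weight, so a sudden intensity change is flagged as foreground while the background resumes matching immediately afterwards.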