44 results for Reliability models in discrete time
Abstract:
This paper presents a second-order sliding mode observer (SOSMO) design for discrete-time uncertain linear multi-output systems. The design procedure is effective for both matched and unmatched bounded uncertainties and/or disturbances. A second-order sliding function and the corresponding sliding manifold for the discrete-time system are defined along the lines of their continuous-time counterparts. A boundary-layer concept is employed to avoid switching across the defined sliding manifold, and the sliding trajectory is confined to the boundary layer once it converges to it. The condition for the existence of a convergent quasi-sliding mode (QSM) is derived. The observer estimation errors satisfying the given stability conditions converge to an ultimate finite bound (within the specified boundary layer) of thickness O(T^2), where T is the sampling period. A relation between the sliding mode gain and the boundary layer is established for the existence of second-order discrete sliding motion. The design strategy is simple to apply and is demonstrated on three examples with different classes of disturbances (matched and unmatched) to show its effectiveness. Simulation results showing robustness to measurement noise are given for the SOSMO, and its performance is compared with a pseudo-linear Kalman filter (PLKF). (C) 2013 Published by Elsevier Ltd. on behalf of The Franklin Institute
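As a rough illustration of the boundary-layer idea described above, the sketch below implements a generic discrete-time sliding-mode observer in which the discontinuous sign term is replaced by a saturation inside a boundary layer of width phi. It is a first-order illustration only, not the paper's SOSMO; the plant matrices A, B, C, the observer gain L, the switching gain rho, and the disturbance are all assumed values chosen for the example.

```python
# Minimal sketch of a discrete-time sliding-mode observer with a
# boundary-layer (saturated) switching term.  All matrices, gains, and
# the disturbance are illustrative assumptions, not taken from the paper.
import numpy as np

# Plant: x[k+1] = A x[k] + B u[k] + d[k],  y[k] = C x[k]
A = np.array([[1.0, 0.1],
              [0.0, 0.95]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.4],
              [0.8]])          # observer gain (assumed, makes A - L C stable)
rho, phi = 0.05, 0.01          # switching gain and boundary-layer width (assumed)

def sat(s, phi):
    """Saturation replaces sign() inside the boundary layer |s| <= phi."""
    return np.clip(s / phi, -1.0, 1.0)

x = np.array([[1.0], [0.5]])   # true state
xh = np.zeros((2, 1))          # observer state
for k in range(200):
    u = np.array([[np.sin(0.05 * k)]])
    d = np.array([[0.0], [0.02 * np.sin(0.3 * k)]])   # matched disturbance
    y = C @ x
    s = y - C @ xh                                    # output estimation error
    # Observer update: linear correction plus saturated switching term
    xh = A @ xh + B @ u + L @ s + rho * L @ sat(s, phi)
    x = A @ x + B @ u + d

print("final estimation error:", (x - xh).ravel())
```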
Abstract:
The performance of prediction models is often based on "abstract metrics" that estimate the model's ability to limit residual errors between the observed and predicted values. However, meaningful evaluation and selection of prediction models for end-user domains require holistic and application-sensitive performance measures. Inspired by energy consumption prediction models used in the emerging "big data" domain of Smart Power Grids, we propose a suite of performance measures to rationally compare models along the dimensions of scale independence, reliability, volatility and cost. We include both application-independent and application-dependent measures, the latter parameterized to allow customization by domain experts to fit their scenario. While our measures are generalizable to other domains, we offer an empirical analysis using real energy use data for three Smart Grid applications: planning, customer education and demand response, which are relevant for energy sustainability. Our results underscore the value of the proposed measures to offer a deeper insight into models' behavior and their impact on real applications, which benefit both data mining researchers and practitioners.
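For concreteness, the sketch below computes a few measures of the kind discussed above on hypothetical energy-use data: a scale-independent error (CVRMSE), MAPE, and a tolerance-based "reliability" fraction. These particular formulas are common stand-ins chosen for illustration and are not necessarily the measures defined in the paper.

```python
# Illustrative accuracy measures for an energy-consumption prediction model.
# The formulas (CVRMSE, MAPE, a tolerance-based "reliability" fraction) and
# the sample data are assumptions, not the paper's exact measure suite.
import numpy as np

def cvrmse(obs, pred):
    """Coefficient of variation of RMSE: a scale-independent error."""
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    return rmse / np.mean(obs)

def mape(obs, pred):
    """Mean absolute percentage error."""
    return np.mean(np.abs((obs - pred) / obs))

def reliability(obs, pred, tol=0.10):
    """Fraction of predictions whose relative error stays within `tol`."""
    return np.mean(np.abs((obs - pred) / obs) <= tol)

# Hypothetical hourly energy use (kWh) and model predictions
obs  = np.array([12.1, 14.3, 13.8, 15.0, 16.2, 14.9])
pred = np.array([11.8, 14.9, 13.1, 15.4, 15.7, 15.3])
print(f"CVRMSE={cvrmse(obs, pred):.3f}  MAPE={mape(obs, pred):.3f}  "
      f"reliability(10%)={reliability(obs, pred):.2f}")
```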
Evolution in the time series of vortex velocity fluctuations across different regimes of vortex flow
Abstract:
Investigations of vortex velocity fluctuations in the time domain have revealed the presence of low-frequency velocity fluctuations which evolve with the different driven phases of the vortex state in a single crystal of 2H-NbSe2. The observation of velocity fluctuations with a characteristic low frequency is associated with the onset of the nonlinear nature of vortex flow deep in the driven elastic vortex state. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
We investigate a model containing two species of one-dimensional fermions interacting via a gauge field determined by the positions of all particles of the opposite species. The model can be solved exactly via a simple unitary transformation. Nevertheless, correlation functions exhibit nontrivial interaction-dependent exponents. A similar model defined on a lattice is introduced and solved. Various generalizations, e.g., to the case of internal symmetries of the fermions, are discussed. The present treatment also clarifies certain aspects of Luttinger's original solution of the "Luttinger model."
Abstract:
We present a randomized and a deterministic data structure for maintaining a dynamic family of sequences under equality tests of pairs of sequences and creations of new sequences by joining or splitting existing sequences. Both data structures support equality tests in O(1) time. The randomized version supports new sequence creations in O(log^2 n) expected time, where n is the length of the sequence created. The deterministic solution supports sequence creations in O(log n (log m log* m + log n)) time for the m-th operation.
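A simple way to see how O(1) equality tests can coexist with fast sequence creation is via randomized fingerprints. The sketch below keeps a polynomial hash per sequence so that joins combine hashes cheaply and equality is a constant-time fingerprint comparison; it illustrates the general idea only and is not the paper's data structure, which also supports splits and has a deterministic variant.

```python
# Minimal sketch of randomized sequence fingerprints supporting O(1)
# equality tests and fast concatenation (illustration only).
import random

P = (1 << 61) - 1                 # a large prime modulus
R = random.randrange(2, P)        # random base, fixed for the structure's lifetime

class Seq:
    def __init__(self, h, length):
        self.h = h                # polynomial hash of the sequence
        self.len = length         # number of symbols

    @staticmethod
    def single(symbol: int) -> "Seq":
        return Seq(symbol % P, 1)

    def join(self, other: "Seq") -> "Seq":
        """Concatenate: H(AB) = H(A) * R^|B| + H(B)  (mod P)."""
        h = (self.h * pow(R, other.len, P) + other.h) % P
        return Seq(h, self.len + other.len)

    def equals(self, other: "Seq") -> bool:
        """O(1) equality test; errs with probability O(len / P)."""
        return self.len == other.len and self.h == other.h

a = Seq.single(3).join(Seq.single(7)).join(Seq.single(1))   # [3, 7, 1]
b = Seq.single(3).join(Seq.single(7).join(Seq.single(1)))   # [3, 7, 1], built differently
print(a.equals(b))   # True: the hash is associative under join
```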
Abstract:
The soil moisture characteristic (SMC) forms an important input to mathematical models of water and solute transport in the unsaturated-soil zone. Owing to their simplicity and ease of use, texture-based regression models are commonly used to estimate the SMC from basic soil properties. In this study, the performances of six such regression models were evaluated on three soils. Moisture characteristics generated by the regression models were statistically compared with the characteristics developed independently from laboratory and in-situ retention data of the soil profiles. Results of the statistical performance evaluation, while providing useful information on the errors involved in estimating the SMC, also highlighted the importance of the nature of the data set underlying the regression models. Among the models evaluated, the one possessing an underlying data set of in-situ measurements was found to be the best estimator of the in-situ SMC for all the soils. Considerable errors arose when a textural model based on laboratory data was used to estimate the field retention characteristics of unsaturated soils.
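The kind of statistical comparison described above can be illustrated in a few lines: moisture contents estimated by two hypothetical regression (pedotransfer) models are scored against measured retention data using RMSE and mean error. All numbers below are invented for illustration and are not from the study.

```python
# Hypothetical evaluation of two regression-model estimates of the soil
# moisture characteristic against measured retention data (illustrative only).
import numpy as np

# Measured volumetric water content at a few matric potentials
measured = np.array([0.42, 0.36, 0.30, 0.24, 0.18])
models = {
    "lab-based model":     np.array([0.40, 0.33, 0.26, 0.20, 0.14]),
    "in-situ-based model": np.array([0.43, 0.36, 0.29, 0.23, 0.18]),
}

for name, est in models.items():
    rmse = np.sqrt(np.mean((est - measured) ** 2))
    bias = np.mean(est - measured)
    print(f"{name}: RMSE={rmse:.3f}, mean error={bias:+.3f}")
```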
Abstract:
The statistical thermodynamics of adsorption in caged zeolites is developed by treating the zeolite as an ensemble of M identical cages or subsystems. Within each cage, adsorption is assumed to occur onto a lattice of n identical sites. Expressions for the average occupancy per cage are obtained by minimizing the Helmholtz free energy in the canonical ensemble subject to the constraints of constant M and constant number of adsorbates N. Adsorbate-adsorbate interactions in the Bragg-Williams or mean-field approximation are treated in two ways. The local mean field approximation (LMFA) is based on the local cage occupancy and the global mean field approximation (GMFA) is based on the average coverage of the ensemble. The GMFA is shown to be equivalent in formulation to treating the zeolite as a collection of interacting single-site subsystems. In contrast, the treatment in the LMFA retains the description of the zeolite as an ensemble of identical cages, whose thermodynamic properties are conveniently derived in the grand canonical ensemble. For a z-coordinated lattice within the zeolite cage, with epsilon_aa as the adsorbate-adsorbate interaction parameter, the comparisons for different values of epsilon_aa* = z*epsilon_aa/2kT, and number of sites per cage, n, illustrate that for -1
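As a worked illustration of a global mean-field (GMFA-style) treatment, the sketch below solves the standard Bragg-Williams lattice-gas isotherm, lam = theta/(1-theta) * exp(2*eps_star*theta), self-consistently for the coverage theta, with eps_star the reduced interaction parameter defined above. The sign convention, the activity values, and the use of this particular isotherm are assumptions made for illustration; the paper's LMFA/GMFA expressions for caged zeolites are more detailed.

```python
# Minimal Bragg-Williams mean-field isotherm (GMFA-style illustration):
#     lam = theta/(1-theta) * exp(2*eps_star*theta),
# with eps_star = z*eps_aa/(2kT).  Sign convention and parameter values
# are assumptions, not the paper's specific results.
import numpy as np
from scipy.optimize import brentq

def coverage(lam, eps_star):
    """Solve the mean-field isotherm for theta in (0, 1)."""
    f = lambda th: np.log(lam) - np.log(th / (1.0 - th)) - 2.0 * eps_star * th
    return brentq(f, 1e-12, 1.0 - 1e-12)

for eps_star in (0.0, -0.5, -1.0):     # 0 = Langmuir limit; negative = attractive
    thetas = [coverage(lam, eps_star) for lam in (0.1, 1.0, 10.0)]
    print(eps_star, [f"{t:.3f}" for t in thetas])
```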
Abstract:
In this paper we discuss recent progress in spectral finite element modeling of complex structures and its application in a real-time structural health monitoring (SHM) system based on a sensor-actuator network and near-real-time computation of the Damage Force Indicator (DFI) vector. A waveguide network formalism is developed by mapping the original variational problem into a variational problem involving product spaces of 1D waveguides. Numerical convergence is studied using an h(lambda)-refinement scheme, where lambda is the wavelength of interest. Computational issues towards successful implementation of this method within an SHM system are discussed.
Abstract:
In the determination of the response time of u.h.v. damped capacitive impulse voltage dividers using the CIGRE IMR-1MS group (1) method and the arrangement suggested by the International Electrotechnical Commission (the IEC square loop), the surge impedance of the connecting lead has been found to influence the accuracy of determination. To avoid this difficulty, a new graphical procedure is proposed. As this method uses only those data points which can be determined with good accuracy, errors in response-time area evaluation do not influence the result.
Abstract:
We present an extensive study of Mott insulator (MI) and superfluid (SF) shells in Bose-Hubbard (BH) models for bosons in optical lattices with harmonic traps. For this we apply the inhomogeneous mean-field theory developed by Sheshadri et al. [Phys. Rev. Lett. 75, 4075 (1995)]. Our results for the BH model with one type of spinless bosons agree quantitatively with quantum Monte Carlo simulations. Our approach is numerically less intensive than such simulations, so we are able to perform calculations on experimentally realistic, large three-dimensional systems, explore a wide range of parameter values, and make direct contact with a variety of experimental measurements. We also extend our inhomogeneous mean-field theory to study BH models with harmonic traps and (a) two species of bosons or (b) spin-1 bosons. With two species of bosons, we obtain rich phase diagrams with a variety of SF and MI phases and associated shells when we include a quadratic confining potential. For the spin-1 BH model, we show, in a representative case, that the system can display alternating shells of polar SF and MI phases, and we make interesting predictions for experiments in such systems.
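To make the mean-field approach concrete, the sketch below runs a single-site Gutzwiller-type self-consistency for the Bose-Hubbard model in a harmonic trap, using a local chemical potential mu(r) = mu0 - Vtrap*r^2 at each radius. This is a local-density-style simplification written for illustration, not the authors' full inhomogeneous implementation, and all parameter values are assumptions; Mott-like sites show psi close to zero with near-integer density, while superfluid sites have psi > 0.

```python
# Single-site Gutzwiller-type mean-field sketch for the Bose-Hubbard model
# in a harmonic trap (illustrative parameters; not the paper's code).
import numpy as np

nmax, z, t, U, mu0, Vtrap = 6, 6, 0.02, 1.0, 0.8, 0.002
b = np.diag(np.sqrt(np.arange(1, nmax + 1)), k=1)       # annihilation operator
n_op = np.diag(np.arange(nmax + 1)).astype(float)        # number operator

def site_psi_and_n(mu, psi0=0.1, iters=200):
    """Self-consistent order parameter <b> and density <n> for one site."""
    psi = psi0
    for _ in range(iters):
        h = (-z * t * psi * (b + b.T) + z * t * psi**2 * np.eye(nmax + 1)
             + 0.5 * U * (n_op @ (n_op - np.eye(nmax + 1))) - mu * n_op)
        w, v = np.linalg.eigh(h)
        g = v[:, 0]                      # mean-field ground state
        psi_new = g @ (b @ g)
        if abs(psi_new - psi) < 1e-10:
            psi = psi_new
            break
        psi = psi_new
    return psi, g @ (n_op @ g)

# Radial profile: local chemical potential decreases away from the trap centre
for r in range(0, 25, 4):
    psi, dens = site_psi_and_n(mu0 - Vtrap * r**2)
    print(f"r={r:2d}  psi={psi: .3f}  <n>={dens:.3f}")
```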
Abstract:
Fix a prime p. Given a positive integer k, a vector of positive integers Delta = (Delta_1, Delta_2, ..., Delta_k) and a function Gamma : F_p^k -> F_p, we say that a function P : F_p^n -> F_p is (k, Delta, Gamma)-structured if there exist polynomials P_1, P_2, ..., P_k : F_p^n -> F_p with each deg(P_i) <= Delta_i such that for all x in F_p^n, P(x) = Gamma(P_1(x), P_2(x), ..., P_k(x)). For instance, an n-variate polynomial over the field F_p of total degree d factors nontrivially exactly when it is (2, (d-1, d-1), prod)-structured, where prod(a, b) = a·b. We show that if p > d, then for any fixed k, Delta, Gamma, we can decide whether a given polynomial P(x_1, x_2, ..., x_n) of degree d is (k, Delta, Gamma)-structured and, if so, find a witnessing decomposition. The algorithm takes poly(n) time. Our approach is based on higher-order Fourier analysis.
Abstract:
The ability of Coupled General Circulation Models (CGCMs) participating in the Intergovernmental Panel on Climate Change's fourth assessment report (IPCC AR4) for the 20th century climate (20C3M scenario) to simulate the daily precipitation over the Indian region is explored. The skill is evaluated on a 2.5° x 2.5° grid square compared with the India Meteorological Department's (IMD) gridded dataset, and every GCM is ranked for each of these grids based on its skill score. Skill scores (SSs) are estimated from the probability density functions (PDFs) obtained from observed IMD datasets and GCM simulations. The methodology takes into account (high) extreme precipitation events simulated by GCMs. The results are analyzed and presented for three categories and six zones. The three categories are the monsoon season (JJASO - June to October), the non-monsoon season (JFMAMND - January to May, November, December) and the entire year ("Annual"). The six precipitation zones are peninsular, west central, northwest, northeast, central northeast India, and the hilly region. Sensitivity analysis was performed for three spatial scales, 2.5° grid square, zones, and all of India, in the three categories. The models were ranked based on the SS. The category JFMAMND had a higher SS than the JJASO category. The northwest zone had higher SSs, whereas the peninsular and hilly regions had lower SSs. No single GCM can be identified as the best for all categories and zones. Some models consistently outperformed the model ensemble, and one model had particularly poor performance. Results show that most models underestimated the daily precipitation rates in the 0-1 mm/day range and overestimated them in the 1-15 mm/day range.
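A common way to turn two PDFs into a single skill score is to measure their overlap (e.g., the Perkins overlap score), which equals 1 for identical distributions and 0 for disjoint ones. The sketch below computes such an overlap score for hypothetical observed and simulated daily rainfall; whether this matches the paper's exact SS definition, as well as the bin choices and sample data, are assumptions made for illustration.

```python
# PDF-overlap skill score between observed and simulated daily rainfall:
# bin both samples identically and sum the bin-wise minima of the two
# normalized frequencies (illustrative data and bins).
import numpy as np

def pdf_skill_score(obs, sim, bins):
    """Shared area of the two binned PDFs (1 = identical, 0 = no overlap)."""
    p_obs, _ = np.histogram(obs, bins=bins)
    p_sim, _ = np.histogram(sim, bins=bins)
    p_obs = p_obs / p_obs.sum()
    p_sim = p_sim / p_sim.sum()
    return np.minimum(p_obs, p_sim).sum()

rng = np.random.default_rng(0)
obs = rng.gamma(shape=0.6, scale=8.0, size=5000)    # hypothetical observed mm/day
sim = rng.gamma(shape=0.8, scale=6.0, size=5000)    # hypothetical GCM output
bins = np.concatenate(([0, 1], np.arange(5, 105, 5)))   # finer resolution at low rates
print(f"skill score = {pdf_skill_score(obs, sim, bins):.3f}")
```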
Abstract:
The trapezoidal rule, which is a special case of the Newmark family of algorithms, is one of the most widely used methods for transient hyperbolic problems. In this work, we show that this rule conserves linear and angular momenta and energy in the case of undamped linear elastodynamics problems, and an "energy-like measure" in the case of undamped acoustic problems. These conservation properties thus provide a rational basis for using this algorithm. In linear elastodynamics problems, variants of the trapezoidal rule that incorporate "high-frequency" dissipation are often used, since the higher frequencies, which are not approximated properly by the standard displacement-based approach, often result in unphysical behavior. Instead of modifying the trapezoidal algorithm, we propose using a hybrid finite element framework for constructing the stiffness matrix. Hybrid finite elements, which are based on a two-field variational formulation involving displacements and stresses, are known to approximate the eigenvalues much more accurately than the standard displacement-based approach, thereby either bypassing or reducing the need for high-frequency dissipation. We show this by means of several examples, where we compare the numerical solutions obtained using the displacement-based and hybrid approaches against analytical solutions.
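For reference, the sketch below advances an undamped two-degree-of-freedom linear system with the trapezoidal rule (Newmark with beta = 1/4, gamma = 1/2) and checks that the discrete energy is preserved to round-off, which is the conservation property discussed above. The mass and stiffness matrices are illustrative and are not taken from the paper's examples.

```python
# Trapezoidal rule (Newmark, beta = 1/4, gamma = 1/2) for the undamped
# linear system  M a + K u = 0, with an energy-conservation check.
import numpy as np

M = np.diag([1.0, 2.0])
K = np.array([[3.0, -1.0],
              [-1.0, 2.0]])
dt, nsteps = 0.05, 400
beta, gamma = 0.25, 0.5

u = np.array([1.0, 0.0])
v = np.zeros(2)
a = np.linalg.solve(M, -K @ u)                 # initial acceleration

Keff = M / (beta * dt**2) + K                  # effective stiffness (no damping)
energy0 = 0.5 * v @ M @ v + 0.5 * u @ K @ u
for _ in range(nsteps):
    rhs = M @ (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
    u_new = np.linalg.solve(Keff, rhs)
    a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
    v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
    u, v, a = u_new, v_new, a_new

energy = 0.5 * v @ M @ v + 0.5 * u @ K @ u
print(f"relative energy drift: {abs(energy - energy0) / energy0:.2e}")   # ~ round-off
```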