894 results for "robust extended Kalman filter"
Abstract:
In this paper we approach the problem of computing the characteristic polynomial of a matrix from a combinatorial viewpoint. We present several combinatorial characterizations of the coefficients of the characteristic polynomial in terms of walks and closed walks of different kinds in the underlying graph. We develop algorithms based on these characterizations and show that they tally with well-known algorithms arrived at independently from linear-algebraic considerations.
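The abstract does not spell out the paper's algorithms, but the best-known link between closed walks and the characteristic polynomial is that tr(A^k) of an adjacency matrix A counts the closed walks of length k, and Newton's identities turn these power sums into the polynomial's coefficients. The sketch below illustrates that connection only (it is not the paper's method); the function name and example graph are illustrative.

```python
import numpy as np

def char_poly_from_closed_walks(A):
    """Coefficients of det(xI - A), in decreasing degree, computed from
    closed-walk counts tr(A^k) via Newton's identities."""
    n = A.shape[0]
    # p[k] = tr(A^k) = number of closed walks of length k (for a 0/1 adjacency matrix)
    p = np.zeros(n + 1)
    Ak = np.eye(n)
    for k in range(1, n + 1):
        Ak = Ak @ A
        p[k] = np.trace(Ak)
    # Newton's identities: k*e_k = sum_{i=1}^{k} (-1)^{i-1} e_{k-i} p_i
    e = np.zeros(n + 1)
    e[0] = 1.0
    for k in range(1, n + 1):
        e[k] = sum((-1) ** (i - 1) * e[k - i] * p[i] for i in range(1, k + 1)) / k
    # coefficient of x^{n-k} in det(xI - A) is (-1)^k e_k
    return [(-1) ** k * e[k] for k in range(n + 1)]

# Example: path graph on 3 vertices -> x^3 - 2x
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(char_poly_from_closed_walks(A))  # [1.0, 0.0, -2.0, 0.0]
```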
Abstract:
Electric power utilities have been installing distribution automation systems (DAS) over the recent past for better management and control of distribution networks. The success of DAS largely depends on the availability of a reliable control-centre database and thus requires an efficient state estimation (SE) solution technique. This paper presents an efficient and robust three-phase SE algorithm for radial distribution networks. The method exploits the radial nature of the network and uses a forward and backward propagation scheme to estimate the line flows, node voltages and loads at each node from the measured quantities. SE cannot be executed without an adequate number of measurements; the extension of the method to network observability analysis and bad data detection is also discussed. The proposed method has been tested on several practical distribution networks of various voltage levels and with high R/X line ratios. Results for a typical network are presented for illustration. © 2000 Elsevier Science S.A. All rights reserved.
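The paper's three-phase estimator is not reproduced here; as a hedged illustration of the forward/backward propagation idea it builds on, the sketch below runs a minimal single-phase backward/forward sweep on a small radial feeder. The feeder data (parent, z, s_load) are made-up per-unit values for the example.

```python
import numpy as np

# Minimal single-phase backward/forward sweep on a radial feeder (illustrative data, per unit).
# Nodes are numbered so that each node i > 0 has a unique parent[i] closer to the source (node 0).
parent = [-1, 0, 1, 1]                                               # node 0 is the substation
z = np.array([0, 0.02 + 0.04j, 0.03 + 0.03j, 0.025 + 0.02j])         # impedance of branch parent[i]->i
s_load = np.array([0, 0.10 + 0.05j, 0.08 + 0.04j, 0.06 + 0.03j])     # complex load at each node

V = np.ones(len(parent), dtype=complex)        # flat start
for _ in range(20):
    # Backward sweep: start from the load currents and accumulate branch currents toward the source.
    I_branch = np.conj(s_load / V)
    for i in range(len(parent) - 1, 0, -1):
        I_branch[parent[i]] += I_branch[i]     # add child branch current to the parent branch
    # Forward sweep: update voltages from the source toward the leaves.
    V_new = V.copy()
    V_new[0] = 1.0 + 0j
    for i in range(1, len(parent)):
        V_new[i] = V_new[parent[i]] - z[i] * I_branch[i]
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

print(np.abs(V))   # converged node voltage magnitudes
```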
Abstract:
This paper proposes a new approach to the state estimation problem, aimed at producing a robust estimator that rejects bad data even when they are associated with leverage-point measurements. This is achieved by solving a sequence of Linear Programming (LP) problems. Optimization is carried out via a new algorithm that combines an "upper bound optimization technique" with "an improved algorithm for discrete linear approximation". In this LP formulation, constraints corresponding to bounds on the state variables are included in addition to the constraints corresponding to the measurement set, which makes the LP problem more effective at rejecting bad data, even when they are associated with leverage-point measurements. Results of the proposed estimator on the IEEE 39-bus system and a 24-bus EHV equivalent system of the southern Indian grid are presented for illustrative purposes.
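The paper's combination of the upper-bound optimization technique and the improved discrete linear approximation algorithm is not reproduced here. As a hedged illustration of the underlying idea, a least-absolute-value (LAV) state estimator with bounds on the state variables can be written as a single LP and handed to an off-the-shelf solver; the measurement model H, z and the bounds below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def lav_state_estimate(H, z, x_lo, x_hi):
    """LAV estimate: minimise sum |z - Hx| subject to x_lo <= x <= x_hi.
    Posed as an LP by splitting each residual into r_plus - r_minus (both nonnegative)."""
    m, n = H.shape
    # Decision vector: [x (n), r_plus (m), r_minus (m)]; cost only on the residual parts.
    c = np.concatenate([np.zeros(n), np.ones(m), np.ones(m)])
    # Equality constraints: H x + r_plus - r_minus = z
    A_eq = np.hstack([H, np.eye(m), -np.eye(m)])
    bounds = [(lo, hi) for lo, hi in zip(x_lo, x_hi)] + [(0, None)] * (2 * m)
    res = linprog(c, A_eq=A_eq, b_eq=z, bounds=bounds, method="highs")
    return res.x[:n]

# Toy illustration: hypothetical linearised measurement model with one gross error in z[3].
rng = np.random.default_rng(0)
x_true = np.array([1.0, -0.5])
H = rng.normal(size=(6, 2))
z = H @ x_true + 0.01 * rng.normal(size=6)
z[3] += 5.0                                        # bad datum
print(lav_state_estimate(H, z, [-2, -2], [2, 2]))  # typically close to x_true despite the bad datum
```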
Abstract:
We develop an inhomogeneous mean-field theory for the extended Bose-Hubbard model with a quadratic confining potential. In the absence of this potential, our mean-field theory yields the phase diagram of the homogeneous extended Bose-Hubbard model. This phase diagram shows a superfluid (SF) phase and lobes of Mott-insulator (MI), density-wave (DW), and supersolid (SS) phases in the plane of the chemical potential μ and the on-site repulsion U; we present phase diagrams for representative values of V, the repulsive energy for bosons on nearest-neighbor sites. We demonstrate that, when the confining potential is present, the superfluid and density-wave order parameters are nonuniform; in particular, we obtain, for a few representative parameter values, spherical shells of SF, MI, DW, and SS phases. We explore the implications of our study for experiments on cold-atom dipolar condensates in optical lattices in a confining potential.
Abstract:
Four new three-dimensional Mn2+-containing compounds have been prepared via a hydrothermal reaction between Mn(CH3COO)2·4H2O, sulfodibenzoic acid (H2SDBA), imidazole, alkali hydroxide and water at 220 °C for 1 day. The compounds (1-4) have Mn5 clusters connected by SDBA, forming the three-dimensional structure. A time- and temperature-dependent study of the synthesis mixture revealed the formation of a one-dimensional compound, Mn(SDBA)(H2O)2, at lower temperatures (T ≤ 180 °C). The stabilization of the fcu-related topology in the compounds is noteworthy. Magnetic studies indicate strong antiferromagnetic interactions between the Mn2+ ions within the clusters in the temperature range 75-300 K. The rare participation of a sulfonyl group in the bonding is important and can pave the way for the design of new structures.
Abstract:
Accurate estimation of mass transport parameters is necessary for the overall design and evaluation of waste disposal facilities. The mass transport parameters, such as the effective diffusion coefficient, retardation factor and diffusion-accessible porosity, are estimated from observed diffusion data by inverse analysis. Recently, the particle swarm optimization (PSO) algorithm has been used to develop an inverse model for estimating these parameters, alleviating existing limitations of the inverse analysis. However, a PSO solver yields different solutions in successive runs because of the stochastic nature of the algorithm and the presence of multiple optimum solutions, so the mean solution over independent runs differs significantly from the best solution. In this paper, two variants of the PSO algorithm are proposed to improve the performance of the inverse analysis. The proposed algorithms use a perturbation equation for the gbest particle to gather information around the gbest region of the search space, and catfish particles in alternate iterations to improve exploration. A performance comparison of the developed solvers on synthetic test data for two different diffusion problems reveals that one of the proposed solvers, CPPSO, significantly improves overall performance, with improved best, worst and mean fitness values. The developed solver is further used to estimate transport parameters from 12 sets of experimentally observed diffusion data covering three diffusion problems, and the estimates are compared with published values from the literature. The proposed solver is quick, simple and robust across different diffusion problems. (C) 2012 Elsevier Ltd. All rights reserved.
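The CPPSO variant (gbest perturbation plus catfish particles) is not reproduced here; the sketch below is a minimal standard gbest PSO, i.e. the baseline such variants extend, applied to a hypothetical diffusion-parameter misfit. All data, bounds and parameter names are illustrative.

```python
import numpy as np

def pso(objective, lb, ub, n_particles=30, n_iter=200, w=0.72, c1=1.49, c2=1.49, seed=0):
    """Minimal global-best PSO (the baseline that variants such as CPPSO extend)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    x = rng.uniform(lb, ub, size=(n_particles, dim))          # positions
    v = np.zeros_like(x)                                       # velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()                       # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # inertia + cognitive + social terms
        x = np.clip(x + v, lb, ub)                             # keep particles inside the bounds
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Hypothetical least-squares misfit between "observed" and model-predicted diffusion data.
def misfit(theta):                    # theta = (rate-like parameter, scale-like parameter)
    observed = np.array([1.0, 0.8, 0.55])
    predicted = theta[1] * np.exp(-theta[0] * np.array([1.0, 2.0, 3.0]))
    return np.sum((observed - predicted) ** 2)

print(pso(misfit, lb=[0.0, 0.1], ub=[1.0, 2.0]))
```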
Abstract:
In 2003, Babin et al. theoretically predicted (J. Appl. Phys. 94:4244, 2003) that fabrication of organic-inorganic hybrid materials would probably be required to implement structures with multiple photonic band gaps. In tune with their prediction, we report the synthesis of such an inorganic-organic nanocomposite, comprising Cu4O3-CuO-C thin films, that experimentally exhibits the highest number of photonic band gaps of any known material (as many as eleven) in the near infrared. Contrary to the report by Wang et al. (Appl. Phys. Lett. 84:1629, 2004) that photonic crystals with multiple stop gaps require a highly correlated structural arrangement, such as multilayers of variable thicknesses, we demonstrate the experimental realization of multiple stop gaps in completely randomized structures comprising inorganic oxide nanocrystals (Cu4O3 and CuO) randomly embedded in a randomly porous carbonaceous matrix. We report the one-step synthesis of such nanostructured films by metalorganic chemical vapor deposition using a single-source metalorganic precursor, Cu4(deaH)(dea)(OAc)5·(CH3)2CO. The films, displaying multiple (4/9/11) photonic band gaps with equal transmission losses in the infrared, are promising materials for applications as multiple-channel photonic-band-gap-based filters for WDM technology.
Abstract:
Monitoring and visualizing specimens at large penetration depths is a challenge. At depths of hundreds of microns, several physical effects (such as scattering, PSF distortion and noise) deteriorate image quality and prohibit a detailed study of key biological phenomena. In this study, we use a Bessel-like beam in conjunction with an orthogonal detection system to achieve depth imaging. A Bessel-like penetrating diffractionless beam is generated by engineering the back aperture of the excitation objective. The proposed excitation scheme allows continuous scanning by simply translating the detection PSF. This type of imaging system is beneficial for obtaining depth information from any desired specimen layer, including nanoparticle tracking in thick tissue. As demonstrated by imaging fluorescent-polymer-tagged CaCO3 particles and yeast cells in a tissue-like gel matrix, the system offers a penetration depth of up to 650 μm. This achievement will advance the fields of fluorescence imaging and deep nanoparticle tracking.
Abstract:
Ensuring reliable operation over an extended period of time is one of the biggest challenges facing present-day electronic systems. The increased vulnerability of components to atmospheric particle strikes poses a major threat to attaining the reliability required for mission-critical applications. Various soft error mitigation methodologies exist to address this reliability challenge. A general solution is to arrive at a soft error mitigation methodology with an acceptable implementation overhead and error tolerance level. This implementation overhead can then be reduced by taking advantage of various derating effects, such as logical derating, electrical derating and timing-window derating, and/or by making use of application redundancy, e.g., redundancy in the firmware/software executing on the robust hardware so designed. In this paper, we analyze the impact of various derating factors and show how they can be profitably employed to reduce the hardware overhead needed to implement a given level of soft error robustness. This analysis is performed on a set of benchmark circuits using the delayed-capture methodology. Experimental results show up to 23% reduction in hardware overhead when individual and combined derating factors are considered.
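The paper's derating analysis is not reproduced here. A common back-of-the-envelope model, assumed purely for illustration, multiplies a raw per-flip-flop upset rate by the product of the logical, electrical and timing-window derating factors to obtain an effective soft-error rate, which in turn bounds how many flip-flops must be hardened to meet a robustness target; every number below is made up.

```python
# Back-of-the-envelope soft-error budgeting with derating factors (illustrative numbers only).
raw_fit_per_flop = 1e-3          # assumed raw upset rate per flip-flop (FIT)
deratings = {
    "logical": 0.35,             # fraction of upsets that propagate to an observable output
    "electrical": 0.60,          # fraction of pulses not attenuated along the path
    "timing_window": 0.25,       # fraction of pulses that arrive inside the latching window
}

effective_fit_per_flop = raw_fit_per_flop
for name, factor in deratings.items():
    effective_fit_per_flop *= factor

n_flops = 50_000
unprotected_fit = n_flops * effective_fit_per_flop
target_fit = 1.0                 # assumed system-level budget

# Fraction of flip-flops to harden, assuming hardened flops contribute ~0 FIT.
fraction_to_harden = max(0.0, 1.0 - target_fit / unprotected_fit)
print(f"effective FIT/flop = {effective_fit_per_flop:.2e}")
print(f"unprotected system FIT = {unprotected_fit:.2f}")
print(f"fraction of flops to harden = {fraction_to_harden:.1%}")
```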
Abstract:
The Turkevich-Frens synthesis starting conditions are expanded, with gold salt concentrations up to 2 mM and citrate/gold(III) molar ratios up to 18:1. For each concentration of the initial gold salt solution, the citrate/gold(III) molar ratio is systematically varied from 2:1 to 18:1 and both the size and the size distribution of the resulting gold nanoparticles are compared. This study reveals a different nanoparticle size evolution for gold salt solutions below 0.8 mM than for those above 0.8 mM. For [Au3+] < 0.8 mM, both the size and the size distribution vary substantially with the citrate/gold(III) ratio, and both display plateaux that evolve inversely with [Au3+] at larger ratios. Conversely, for [Au3+] >= 0.8 mM, the size and size distribution of the synthesized gold nanoparticles rise continuously as the citrate/gold(III) ratio is increased. A starting gold salt concentration of 0.6 mM leads to the formation of the most monodisperse gold nanoparticles (polydispersity index < 0.1) over a wide range of citrate/gold(III) molar ratios (from 4:1 to 18:1). Using a model for the formation of gold nanoparticles by the citrate method, the experimental trends in size could be qualitatively predicted: the simulations showed that the destabilizing effect of increased electrolyte concentration at high initial [Au3+] is compensated by a slight increase in the zeta potential of the gold nanoparticles, producing concentrated dispersions of small gold nanoparticles.
Abstract:
The implementation of semiconductor circuits and systems in nanotechnology makes it possible to achieve higher speed, lower voltage levels and smaller area. An unintended and undesirable result of this scaling is that it makes integrated circuits susceptible to soft errors, normally caused by alpha particle or neutron hits. These radiation strikes, which result in bit upsets referred to as single-event upsets (SEU), are of increasing concern for reliable circuit operation in the field, and storage elements are the worst hit by this phenomenon. As we scale down further, there is growing interest in the reliability of circuits and systems, in addition to the performance, power and area aspects. In this paper we propose an improved 12T SEU-tolerant SRAM cell design. The proposed SRAM cell is economical in terms of area overhead and is easier to fabricate than earlier designs. Simulation results show that the proposed cell is highly robust: it does not flip even for a transient pulse with 62 times the critical charge (Qcrit) of a standard 6T SRAM cell.
Abstract:
In this paper we study the problem of designing SVM classifiers when the kernel matrix, K, is affected by uncertainty. Specifically, K is modeled as a positive affine combination of given positive semidefinite kernels, with the coefficients ranging in a norm-bounded uncertainty set. We treat the problem using the Robust Optimization methodology. This reduces the uncertain SVM problem to a deterministic conic quadratic problem which can, in principle, be solved by a polynomial-time Interior Point (IP) algorithm. However, for large-scale classification problems, IP methods become intractable and one has to resort to first-order gradient-type methods. The strategy used here is to reformulate the robust counterpart of the uncertain SVM problem as a saddle-point problem and employ a special gradient scheme which works directly on the convex-concave saddle function. The algorithm is a simplified version of a general scheme due to Juditsky and Nemirovski (2011). It achieves an O(1/T^2) reduction of the initial error after T iterations. A comprehensive empirical study on both synthetic data and real-world protein structure data sets shows that the proposed formulations achieve the desired robustness, and that the saddle-point-based algorithm significantly outperforms the IP method.
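The Juditsky-Nemirovski-style scheme used in the paper is not reproduced here. The sketch below is a much-simplified projected gradient ascent-descent on the convex-concave saddle function f(alpha, eta) = sum(alpha) - 0.5 (alpha*y)^T (sum_j eta_j K_j) (alpha*y), assuming a no-bias SVM dual and a simplex constraint on the kernel weights for brevity; the data and kernels are hypothetical.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def robust_svm_saddle(kernels, y, C=1.0, steps=500, lr_a=1e-2, lr_e=1e-2):
    """Simplified ascent on alpha (box [0, C], no-bias dual) and descent on the
    kernel weights eta (simplex), so eta drifts toward the worst-case kernel mix."""
    m, p = len(y), len(kernels)
    alpha, eta = np.zeros(m), np.ones(p) / p
    for _ in range(steps):
        K = sum(e * Kj for e, Kj in zip(eta, kernels))
        ay = alpha * y
        grad_alpha = 1.0 - y * (K @ ay)                                   # ascent direction
        grad_eta = np.array([-0.5 * ay @ Kj @ ay for Kj in kernels])      # descent direction
        alpha = np.clip(alpha + lr_a * grad_alpha, 0.0, C)
        eta = project_simplex(eta - lr_e * grad_eta)
    return alpha, eta

# Toy example with two candidate kernels (hypothetical data).
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=40))
K_lin = X @ X.T
K_rbf = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
alpha, eta = robust_svm_saddle([K_lin, K_rbf], y)
print("kernel weights eta =", eta)
```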
Abstract:
Data prefetchers identify and exploit any regularity present in the history/training stream to predict future references and prefetch them into the cache. The training information used is typically the primary misses seen at a particular cache level, which is a filtered version of the accesses seen by the cache. In this work we demonstrate that extending the training information to include secondary misses and hits, along with primary misses, helps improve prefetcher performance. In addition to empirical evaluation, we use the information-theoretic metric entropy to quantify the regularity present in extended histories. Entropy measurements indicate that extended histories are more regular than the default primary-miss-only training stream, and they help corroborate our empirical findings. With extended histories, further benefits can be achieved by also triggering prefetches on secondary misses. In this paper we explore the design space of extended prefetch histories and alternative prefetch trigger points for delta-correlation prefetchers. We observe that different prefetch schemes benefit to different extents from extended histories and alternative trigger points, and the best-performing design point varies on a per-benchmark basis. To meet these requirements, we propose a simple adaptive scheme that identifies the best-performing design point for a benchmark-prefetcher combination at runtime. On SPEC2000 benchmarks, using all L2 accesses as the prefetcher history improves performance, in terms of both IPC and misses reduced, over techniques that use only primary misses as history. The adaptive scheme improves the performance of the CZone prefetcher over the baseline by 4.6% on average. These performance gains are accompanied by a moderate reduction in memory traffic requirements.
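The paper's exact entropy methodology is not detailed in the abstract; as a hedged sketch, the Shannon entropy of the first-order delta stream (the quantity a delta-correlation prefetcher learns from) can be computed as below, with lower entropy indicating a more regular, more prefetchable history. The example histories are made up.

```python
import numpy as np
from collections import Counter

def delta_entropy(addresses):
    """Shannon entropy (bits) of the first-order delta stream of an access history.
    Lower entropy means a more regular stream, i.e. an easier target for delta correlation."""
    deltas = np.diff(np.asarray(addresses, dtype=np.int64))
    counts = np.array(list(Counter(deltas.tolist()).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# Illustrative comparison: a full stride-64 access stream vs. an irregularly filtered view of it.
full_history = list(range(0, 640, 64))                   # hits + secondary + primary misses
miss_only = [full_history[i] for i in (0, 2, 3, 7, 8)]   # e.g. what survives cache filtering
print(delta_entropy(full_history))   # 0.0 bits: perfectly regular deltas
print(delta_entropy(miss_only))      # > 0 bits: filtering destroyed the regularity
```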
Abstract:
This paper considers the problem of weak signal detection in the presence of navigation data bits for Global Navigation Satellite System (GNSS) receivers. Typically, a set of partial coherent integration outputs is non-coherently accumulated to combat the effects of model uncertainties, such as the presence of navigation data bits and/or frequency uncertainty, resulting in a sub-optimal test statistic. In this work, the test statistic for weak signal detection in the presence of navigation data bits is derived from the likelihood ratio. It is highlighted that averaging the likelihood-ratio-based test statistic over the prior distributions of the unknown data bits and the carrier phase uncertainty leads to the conventional Post Detection Integration (PDI) technique for detection. To improve performance in the presence of model uncertainties, a novel cyclostationarity-based sub-optimal PDI technique is proposed. The test statistic is analytically characterized and shown to be robust to navigation data bits and to frequency, phase and noise uncertainties. Monte Carlo simulation results confirm the theoretical analysis and illustrate the superior performance of the proposed detector in the presence of model uncertainties.
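The proposed cyclostationarity-based statistic is not reproduced here; the sketch below shows only the conventional non-coherent PDI baseline it is compared against: coherently sum the prompt correlator output within each block, then accumulate squared magnitudes across blocks so that unknown data-bit signs and carrier phase do not cancel the signal. The block length, signal level and noise model are illustrative.

```python
import numpy as np

def noncoherent_pdi(correlator_out, block_len):
    """Conventional non-coherent PDI: partial coherent integration per block,
    followed by |.|^2 accumulation across blocks (the detection test statistic)."""
    x = np.asarray(correlator_out)
    n_blocks = x.size // block_len
    blocks = x[: n_blocks * block_len].reshape(n_blocks, block_len)
    coherent = blocks.sum(axis=1)               # partial coherent integration per block
    return np.sum(np.abs(coherent) ** 2)        # non-coherent accumulation

# Toy example: 1 ms correlator outputs, 20 ms data bits with random signs, weak signal in noise.
rng = np.random.default_rng(2)
n_ms, amp = 200, 0.5
bits = rng.choice([-1.0, 1.0], size=n_ms // 20).repeat(20)     # unknown navigation bits
signal = amp * bits * np.exp(1j * 0.7)                          # unknown carrier phase
noise = (rng.normal(size=n_ms) + 1j * rng.normal(size=n_ms)) / np.sqrt(2)
T_h1 = noncoherent_pdi(signal + noise, block_len=20)   # signal present
T_h0 = noncoherent_pdi(noise, block_len=20)            # noise only
print(T_h1, T_h0)   # compare against a threshold set for a target false-alarm rate
```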
Abstract:
The design of a dual-band bandpass filter employing a microstrip line with a defected ground structure is presented in this paper. A dual-band filter at 2.45 GHz and 3.5 GHz (covering WLAN and WiMAX), with 6% bandwidth at each frequency, has been designed. Apertures in the ground plane were used to improve the stop-band rejection characteristics and the coupling levels of the filter. Measured results of the experimental filter were compared against simulation results for validation.