278 results for Summed estimation scales
Abstract:
Many problems of state estimation in structural dynamics permit a partitioning of the system states into nonlinear and conditionally linear substructures. This enables part of the problem to be solved exactly, using the Kalman filter, and the remainder using Monte Carlo simulations. The present study develops an algorithm that combines sequential importance sampling based particle filtering with Kalman filtering, applicable to a fairly general form of process equations, and demonstrates the application of the substructuring scheme to problems of hidden state estimation in structures with local nonlinearities, response sensitivity model updating in nonlinear systems, and characterization of residual displacements in instrumented inelastic structures. The paper also demonstrates theoretically that the sampling variance associated with the substructuring scheme does not exceed the sampling variance of Monte Carlo filtering without substructuring. (C) 2012 Elsevier Ltd. All rights reserved.
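A minimal sketch of the kind of marginalized (Rao-Blackwellized) particle filter described above, in which a nonlinear substate is tracked with particles while a conditionally linear substate is handled exactly by a per-particle Kalman filter. The toy model, noise levels, and function names below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative conditionally linear model (assumed, not the paper's equations):
#   x_{k+1} = 0.5 x_k + 25 x_k / (1 + x_k^2) + w,  w ~ N(0, Qx)   (nonlinear substate -> particles)
#   z_{k+1} = 0.95 z_k + v,                        v ~ N(0, Qz)   (conditionally linear -> Kalman)
#   y_k     = 0.05 x_k^2 + z_k + e,                e ~ N(0, R)
Qx, Qz, R = 1.0, 0.1, 0.5
N = 500                                   # number of particles

def f(x): return 0.5 * x + 25.0 * x / (1.0 + x**2)
def h(x): return 0.05 * x**2

x = rng.normal(0.0, 1.0, N)               # particle values of the nonlinear substate
m, P = np.zeros(N), np.ones(N)            # per-particle Kalman mean/variance of the linear substate
w = np.full(N, 1.0 / N)                   # importance weights

def rbpf_step(y, x, m, P, w):
    # 1) Propagate the nonlinear substate by sampling (sequential importance sampling)
    x = f(x) + rng.normal(0.0, np.sqrt(Qx), N)
    # 2) Kalman prediction of the conditionally linear substate
    m_pred, P_pred = 0.95 * m, 0.95**2 * P + Qz
    # 3) Weight update with the marginal likelihood p(y | particle history)
    S = P_pred + R
    innov = y - h(x) - m_pred
    w = w * np.exp(-0.5 * innov**2 / S) / np.sqrt(2.0 * np.pi * S)
    w = w / w.sum()
    # 4) Exact Kalman measurement update of z, conditioned on each particle
    K = P_pred / S
    m, P = m_pred + K * innov, (1.0 - K) * P_pred
    # 5) Resample when the effective sample size degenerates
    if 1.0 / np.sum(w**2) < N / 2:
        idx = rng.choice(N, N, p=w)
        x, m, P, w = x[idx], m[idx], P[idx], np.full(N, 1.0 / N)
    return x, m, P, w

# Run the filter on synthetic data generated from the same model
x_true, z_true = 0.1, 0.0
for k in range(50):
    x_true = f(x_true) + rng.normal(0.0, np.sqrt(Qx))
    z_true = 0.95 * z_true + rng.normal(0.0, np.sqrt(Qz))
    y = h(x_true) + z_true + rng.normal(0.0, np.sqrt(R))
    x, m, P, w = rbpf_step(y, x, m, P, w)

print("posterior mean of x:", np.sum(w * x), " true x:", x_true)
print("posterior mean of z:", np.sum(w * m), " true z:", z_true)
```

The Kalman update in step 4 is the part of the problem solved exactly; only the nonlinear substate is sampled, which is what keeps the sampling variance from exceeding that of plain Monte Carlo filtering.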
Abstract:
We demonstrate quantitative optical property and elastic property imaging from ultrasound-assisted optical tomography data. The measurements, namely the modulation depth M and phase φ of the speckle pattern, are shown to be sensitively dependent on these properties of the object in the insonified focal region of the ultrasound (US) transducer. We demonstrate that Young's modulus (E) can be recovered from the resonance observed in plots of M versus ω (the US frequency), and the optical absorption (μa) and scattering (μs) coefficients from the measured differential phase changes. All experimental observations are also verified using Monte Carlo simulations. (c) 2012 Society of Photo-Optical Instrumentation Engineers (SPIE). [DOI: 10.1117/1.JBO.17.10.101507]
Abstract:
The presence of new matter fields charged under the Standard Model gauge group at intermediate scales below the Grand Unification scale modifies the renormalization group evolution of the gauge couplings. This can in turn significantly change the running of the Minimal Supersymmetric Standard Model parameters, in particular the gaugino and the scalar masses. In the absence of new large Yukawa couplings we can parameterise all the intermediate scale models in terms of only two parameters controlling the size of the unified gauge coupling. As a consequence of the modified running, the low energy spectrum can be strongly affected with interesting phenomenological consequences. In particular, we show that scalar over gaugino mass ratios tend to increase and the regions of the parameter space with neutralino Dark Matter compatible with cosmological observations get drastically modified. Moreover, we discuss some observables that can be used to test the intermediate scale physics at the LHC in a wide class of models.
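As a concrete illustration of the mechanism, the sketch below runs the one-loop MSSM gauge couplings up to the GUT scale with extra vector-like matter in complete SU(5) multiplets above an assumed intermediate scale; complete multiplets shift all three beta coefficients equally, so unification is preserved while the unified coupling grows. The input couplings, intermediate scale, and multiplet count are illustrative assumptions, not the paper's benchmark points.

```python
import numpy as np

# One-loop running of the inverse gauge couplings alpha_i^{-1}(mu) in the MSSM,
# with n5 extra vector-like 5 + 5bar pairs above an intermediate scale M_I.
MZ, MGUT = 91.19, 2.0e16                      # GeV
alpha_inv_MZ = np.array([59.0, 29.6, 8.5])    # approximate alpha_1^-1, alpha_2^-1, alpha_3^-1 at MZ
b_mssm = np.array([33.0 / 5.0, 1.0, -3.0])    # one-loop MSSM beta coefficients

def alpha_inv(mu, n5=0, M_I=1.0e10):
    """alpha_i^{-1}(mu) = alpha_i^{-1}(MZ) - b_i/(2 pi) ln(mu/MZ), with b_i -> b_i + n5 above M_I."""
    run = alpha_inv_MZ - b_mssm / (2.0 * np.pi) * np.log(mu / MZ)
    if mu > M_I:
        run -= n5 / (2.0 * np.pi) * np.log(mu / M_I)   # each complete 5 + 5bar pair adds 1 to every b_i
    return run

for n5 in (0, 2, 4):
    a_gut = alpha_inv(MGUT, n5=n5)
    print(f"n5 = {n5}:  alpha_i^-1(M_GUT) ~ {np.round(a_gut, 1)}  ->  alpha_GUT ~ {1.0 / a_gut.mean():.3f}")
```

The larger unified coupling feeds back into the gaugino and scalar mass running, which is the effect the abstract parameterises with essentially two quantities.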
Abstract:
Epoxy resin bonded mica splittings are the insulation of choice for machine stators. However, this system is relatively weak under time-varying mechanical stress, in particular vibration, which causes delamination of the mica and debonding of the mica from the resin matrix. The situation is accentuated under the combined action of electrical, thermal and mechanical stress. Physical and probabilistic models for the failure of such systems have been proposed earlier by one of the authors of this paper. This paper presents a pragmatic accelerated failure data acquisition and analysis paradigm under multi-factor coupled (electrical and thermal) stress. The parameters of the phenomenological model so developed are estimated based on sound statistical treatment of the failure data.
Abstract:
Structural Support Vector Machines (SSVMs) have become a popular tool in machine learning for predicting structured objects like parse trees, Part-of-Speech (POS) label sequences and image segments. Various efficient algorithmic techniques have been proposed for training SSVMs for large datasets. The typical SSVM formulation contains a regularizer term and a composite loss term. The loss term is usually composed of the Linear Maximum Error (LME) associated with the training examples. Other alternatives for the loss term are yet to be explored for SSVMs. We formulate a new SSVM with Linear Summed Error (LSE) loss term and propose efficient algorithms to train the new SSVM formulation using primal cutting-plane method and sequential dual coordinate descent method. Numerical experiments on benchmark datasets demonstrate that the sequential dual coordinate descent method is faster than the cutting-plane method and reaches the steady-state generalization performance faster. It is thus a useful alternative for training SSVMs when linear summed error is used.
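To make the difference between the two loss terms concrete, the sketch below contrasts a max-type loss (one largest margin violation per example, as in LME) with a loss summed over all violating outputs (as in LSE), using multiclass classification as the simplest structured-output case. The toy data and the plain subgradient training loop are assumptions for illustration, not the paper's cutting-plane or sequential dual coordinate descent solvers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy multiclass problem (the simplest structured-output space): 3 classes, 2-D features,
# with one weight vector per class playing the role of the joint feature map.
n, d, k = 90, 2, 3
means = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
y = rng.integers(0, k, n)
X = means[y] + rng.normal(size=(n, d))

def violations(W):
    """Margin violation max(0, 1 + w_c.x - w_y.x) for every example and every wrong label c."""
    scores = X @ W.T
    margins = 1.0 + scores - scores[np.arange(n), y][:, None]
    margins[np.arange(n), y] = 0.0
    return np.maximum(margins, 0.0)

def train(mode, lam=0.1, steps=400, lr=0.01):
    W = np.zeros((k, d))
    for _ in range(steps):
        viol = violations(W)
        if mode == "LME":                      # keep only the single worst label per example
            keep = np.zeros_like(viol)
            keep[np.arange(n), viol.argmax(axis=1)] = 1.0
            active = keep * (viol > 0)
        else:                                  # "LSE": every violating label contributes
            active = (viol > 0).astype(float)
        G = lam * W
        counts = active.sum(axis=1)            # number of active wrong labels per example
        for c in range(k):
            G[c] = G[c] + active[:, c] @ X                 # +x_i for each violating label c
            G[c] = G[c] - counts[y == c] @ X[y == c]       # -x_i for the true label, once per violator
        W = W - lr * G
    return W

for mode in ("LME", "LSE"):
    W = train(mode)
    viol = violations(W)
    loss = viol.max(axis=1).sum() if mode == "LME" else viol.sum()
    acc = (np.argmax(X @ W.T, axis=1) == y).mean()
    print(f"{mode}: loss = {loss:.2f}, train accuracy = {acc:.2f}")
```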
Abstract:
This study uses precipitation estimates from the Tropical Rainfall Measuring Mission to quantify the spatial and temporal scales of northward propagation of convection over the Indian monsoon region during boreal summer. Propagating modes of convective systems on intraseasonal time scales, such as the Madden-Julian oscillation, can interact with the intertropical convergence zone and bring active and break spells of the Indian summer monsoon. Wavelet analysis was used to quantify the spatial extent (scale) and center of these propagating convective bands, as well as the time period associated with different spatial scales. Results presented here suggest that during a good monsoon year the spatial scale of this oscillation is about 30 degrees, centered around 10 degrees N. During weak monsoon years, the scale of propagation decreases and the center shifts farther south, closer to the equator. A strong linear relationship is obtained between the center/scale of the convective wave bands and the intensity of monsoon precipitation over the Indian landmass on the interannual time scale. Moreover, the spatial scale and its center during break monsoon spells were found to be similar to those of an overall weak monsoon year. Based on this analysis, a new index is proposed to quantify the spatial scales associated with propagating convective bands. The automated wavelet-based technique developed here can be used to study meridional propagation of convection in large volumes of data from observations and model simulations. The information so obtained can be related to the interannual and intraseasonal variation of Indian monsoon precipitation.
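A rough sketch of this kind of wavelet-based scale detection, applied to a synthetic meridional profile of precipitation anomalies. The Mexican-hat wavelet, the synthetic band of roughly 30-degree extent centred near 10N, and the latitude grid are illustrative assumptions, not the study's data or exact wavelet.

```python
import numpy as np

rng = np.random.default_rng(2)

# Latitude grid and a synthetic meridional precipitation-anomaly profile: a convective
# band of roughly 30-degree extent centred near 10N plus noise (a stand-in for TRMM data).
lat = np.arange(-30.0, 40.0, 0.5)
band = np.exp(-0.5 * ((lat - 10.0) / (30.0 / 2.355)) ** 2)      # FWHM of about 30 degrees
signal = band + 0.1 * rng.normal(size=lat.size)

def mexican_hat(t, s):
    """Mexican-hat (Ricker) wavelet of scale s evaluated at offsets t (degrees)."""
    u = t / s
    return (1.0 - u**2) * np.exp(-0.5 * u**2) / np.sqrt(s)

# Continuous wavelet transform along latitude: inner products with the translated, scaled wavelet
dlat = lat[1] - lat[0]
anom = signal - signal.mean()
scales = np.arange(2.0, 40.0, 1.0)                               # candidate scales in degrees
power = np.zeros((scales.size, lat.size))
for i, s in enumerate(scales):
    W = mexican_hat(lat[None, :] - lat[:, None], s) * dlat       # rows = centre latitude
    power[i] = (W @ anom) ** 2

# The dominant convective band: location of maximum wavelet power in the (scale, latitude) plane
i_s, i_lat = np.unravel_index(np.argmax(power), power.shape)
print(f"dominant wavelet scale ~ {scales[i_s]:.0f} deg, centred near lat = {lat[i_lat]:.1f} deg")
```

Applying the same transform to each time step of a precipitation dataset, and tracking the (scale, centre) of maximum power, gives the kind of propagation index the abstract describes.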
Abstract:
Most existing WCET estimation methods directly estimate the execution time, ET, in cycles. We propose to study ET as a product of two factors, ET = IC * CPI, where IC is the instruction count and CPI is the cycles per instruction. Estimating ET directly may lead to a highly pessimistic estimate, since these methods may implicitly be using both the worst-case IC and the worst-case CPI. We hypothesize that there exists a functional relationship between CPI and IC such that CPI = f(IC). This is ascertained by computing the covariance matrix and studying the scatter plots of CPI versus IC. IC and CPI values are obtained by running benchmarks with a large number of inputs using the cycle-accurate architectural simulator SimpleScalar on two different architectures. It is shown that the benchmarks can be grouped into different classes based on the CPI versus IC relationship. For some benchmarks, such as FFT and FIR, both IC and CPI are almost constant irrespective of the input. Other benchmarks exhibit a direct or an inverse relationship between CPI and IC; in such cases, one can predict CPI for a given IC as CPI = f(IC). We derive the theoretical worst-case IC for a program, denoted SWIC, using integer linear programming (ILP) and estimate WCET as SWIC * f(SWIC). However, if CPI decreases sharply with IC, then the measured maximum cycle count is observed to be a better estimate. For certain other benchmarks, the CPI versus IC relationship is either random or CPI remains constant with varying IC; in such cases, WCET is estimated as the product of SWIC and the measured maximum CPI. The proposed method is observed to yield tighter WCET estimates than Chronos, a static WCET analyzer, for most benchmarks on the two architectures considered in this paper.
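The core arithmetic can be sketched as follows: fit the CPI-versus-IC relationship f from measured runs, evaluate WCET ≈ SWIC * f(SWIC), and fall back to the measured maximum cycles or SWIC times the maximum measured CPI when the relationship is unusable. The correlation threshold, the linear form of f, and the example numbers are assumptions for illustration.

```python
import numpy as np

def estimate_wcet(ic, cpi, swic, corr_threshold=0.7):
    """WCET estimate from measured (IC, CPI) pairs and the ILP-derived worst-case IC (SWIC).

    ic, cpi : arrays of instruction counts and cycles-per-instruction from measured runs
    swic    : static worst-case instruction count obtained from integer linear programming
    """
    ic, cpi = np.asarray(ic, float), np.asarray(cpi, float)
    r = np.corrcoef(ic, cpi)[0, 1] if ic.std() > 0 and cpi.std() > 0 else 0.0

    if abs(r) >= corr_threshold:
        # CPI is well explained by IC: fit a simple linear model f and extrapolate to SWIC.
        slope, intercept = np.polyfit(ic, cpi, 1)
        cpi_at_swic = slope * swic + intercept
        if slope < 0 and cpi_at_swic < cpi.min():
            # CPI falls sharply with IC: extrapolation is optimistic, so fall back
            # to the maximum measured cycle count.
            return (ic * cpi).max()
        return swic * cpi_at_swic
    # Relationship is random (or CPI is flat): use the maximum observed CPI.
    return swic * cpi.max()

# Illustrative measurements (not from the paper): CPI grows mildly with IC.
ic = np.array([1.0e5, 1.5e5, 2.0e5, 2.6e5, 3.1e5])
cpi = np.array([1.10, 1.14, 1.18, 1.23, 1.26])
print("WCET estimate (cycles):", estimate_wcet(ic, cpi, swic=4.0e5))
```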
Abstract:
In a cooperative system with an amplify-and-forward (AF) relay, the cascaded channel training protocol enables the destination to estimate the source-destination channel gain and the product of the source-relay (SR) and relay-destination (RD) channel gains using only two pilot transmissions from the source. Notably, the destination does not require a separate estimate of the SR channel. We develop a new expression for the symbol error probability (SEP) of AF relaying when imperfect channel state information (CSI) is acquired using the above training protocol. A tight SEP upper bound is also derived; it shows that full diversity is achieved, albeit at a high signal-to-noise ratio (SNR). Our analysis uses fewer simplifying assumptions and leads to expressions that are accurate even at low SNRs and differ from those in the literature. For instance, it does not approximate the estimate of the product of the SR and RD channel gains by the product of the estimates of the SR and RD channel gains. We show that cascaded channel estimation often outperforms a channel estimation protocol that incurs a greater training overhead by forwarding a quantized estimate of the SR channel gain to the destination. The extent of pilot power boosting, if allowed, that is required to improve performance is also quantified.
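A highly simplified Monte Carlo sketch of AF relaying with cascaded-channel training: one pilot lets the destination form a least-squares estimate of the direct channel and a second, relayed pilot gives an estimate of the overall cascaded gain, both of which are then used for maximal-ratio combining of BPSK symbols. The pilot structure, amplification gain, combining rule, and the assumption that the effective relayed-path noise variance is known are textbook simplifications, not the paper's exact protocol or analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

def cn(*shape):
    """Unit-variance circularly symmetric complex Gaussian samples."""
    return (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2.0)

def sep_af_cascaded(snr_db, n_sym=200_000):
    P, N0 = 10.0 ** (snr_db / 10.0), 1.0
    # Independent Rayleigh-fading source-destination, source-relay, relay-destination channels
    h_sd, h_sr, h_rd = cn(n_sym), cn(n_sym), cn(n_sym)
    g = np.sqrt(P / (P * np.abs(h_sr) ** 2 + N0))              # AF amplification gain
    h_casc = g * h_sr * h_rd                                   # cascaded channel seen at the destination
    var_eff = (g ** 2 * np.abs(h_rd) ** 2 + 1.0) * N0          # relayed-path noise variance (assumed known)

    # Training: one pilot on the direct link and one via the relay (cascaded-channel training)
    pilot = np.sqrt(P)
    h_sd_hat = (h_sd * pilot + np.sqrt(N0) * cn(n_sym)) / pilot
    h_casc_hat = (h_casc * pilot + np.sqrt(var_eff) * cn(n_sym)) / pilot

    # Data: one BPSK symbol over both links, maximal-ratio combining with the estimated channels
    s = np.sqrt(P) * np.sign(rng.normal(size=n_sym))
    y_sd = h_sd * s + np.sqrt(N0) * cn(n_sym)
    y_rd = h_casc * s + np.sqrt(var_eff) * cn(n_sym)
    z = np.conj(h_sd_hat) * y_sd / N0 + np.conj(h_casc_hat) * y_rd / var_eff
    return np.mean(np.sign(z.real) != np.sign(s))

for snr_db in (0, 10, 20):
    print(f"SNR {snr_db:2d} dB: simulated SEP ~ {sep_af_cascaded(snr_db):.3e}")
```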
Abstract:
We consider channel estimation for the design of a linear equalizer with a finite number of coefficients, in the context of a classical linear intersymbol-interference channel with additive Gaussian noise. Previous literature has shown that Minimum Bit Error Rate (MBER) based detection outperforms Minimum Mean Squared Error (MMSE) based detection. We pose the channel estimation problem as a detection problem and propose a novel algorithm to estimate the channel based on the MBER framework for BPSK signals. It is shown that the proposed algorithm reduces the BER compared to MMSE-based channel estimation when used with either MMSE or MBER detection.
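For context, the conventional baseline that the abstract compares against can be sketched as a least-squares channel estimate from a training block followed by an MMSE linear equalizer design; this is not the proposed MBER-based channel estimator, and the channel taps, training length, and equalizer length below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(4)

# Illustrative 3-tap ISI channel, BPSK training sequence, and AWGN level (assumed values)
h_true = np.array([0.41, 0.82, 0.41])
L, N_train, N0 = 3, 64, 0.1
x = np.sign(rng.normal(size=N_train))                        # known BPSK training symbols
y = np.convolve(x, h_true)[:N_train] + np.sqrt(N0) * rng.normal(size=N_train)

# Least-squares channel estimate from the training block (y ~ X @ h)
X = toeplitz(x, np.r_[x[0], np.zeros(L - 1)])                # N_train x L convolution matrix
h_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# MMSE linear equalizer of length K designed from the estimated channel; the taps act on the
# K most recent received samples and recover the symbol at the chosen delay.
K = 7
H = toeplitz(np.r_[h_hat[0], np.zeros(K - 1)],               # K x (K+L-1) channel matrix:
             np.r_[h_hat, np.zeros(K - 1)])                  # H[i, j] = h_hat[j - i]
delay = (K + L - 1) // 2
w = np.linalg.solve(H @ H.T + N0 * np.eye(K), H[:, delay])   # MMSE equalizer taps (unit-power BPSK)

print("estimated channel:", np.round(h_hat, 3))
print("equalizer taps   :", np.round(w, 3))
```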
Abstract:
This paper proposes an algorithm for joint data detection and tracking of the dominant singular mode of a time-varying channel at the transmitter and receiver of a time-division duplex multiple-input multiple-output beamforming system. The proposed method is a modified expectation maximization algorithm that uses an initial estimate to blindly track the dominant modes of the channel at the transmitter and the receiver, while simultaneously detecting the unknown data. Furthermore, the estimates are constrained to lie within a confidence interval of the previous estimate in order to improve the tracking performance and mitigate the effect of error propagation. Monte Carlo simulation results for the symbol error rate and the mean square inner product between the estimated and the true singular vector are plotted to show the performance benefits offered by the proposed method compared to existing techniques.
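A small sketch of the underlying idea of tracking a dominant singular mode of a slowly varying MIMO channel: one power-iteration update per block refines the transmit and receive singular vectors, and each update is kept close to the previous estimate to limit error propagation. This is a generic tracker under assumed Gauss-Markov channel dynamics, not the paper's joint EM data detection algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

def cn(*shape):
    return (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2.0)

nt, nr, n_blocks, rho = 4, 4, 200, 0.99          # antennas, blocks, channel correlation (assumed)
H = cn(nr, nt)
U0, _, Vh0 = np.linalg.svd(H)
u, v = U0[:, 0], Vh0.conj().T[:, 0]              # initial estimate of the dominant singular mode

alpha = 0.7                                      # weight keeping each update near the previous estimate
inner = []
for _ in range(n_blocks):
    # Gauss-Markov time variation of the channel (illustrative dynamics)
    H = rho * H + np.sqrt(1.0 - rho**2) * cn(nr, nt)

    # One power-iteration step toward the current dominant right/left singular vectors
    v_new = H.conj().T @ (H @ v); v_new /= np.linalg.norm(v_new)
    u_new = H @ v_new;            u_new /= np.linalg.norm(u_new)

    # Align phases (singular vectors are defined up to a phase), then constrain the update
    v_new *= np.exp(-1j * np.angle(np.vdot(v, v_new)))
    u_new *= np.exp(-1j * np.angle(np.vdot(u, u_new)))
    v = alpha * v + (1.0 - alpha) * v_new; v /= np.linalg.norm(v)
    u = alpha * u + (1.0 - alpha) * u_new; u /= np.linalg.norm(u)

    # Quality metric: inner product with the true dominant right singular vector
    v_true = np.linalg.svd(H)[2].conj().T[:, 0]
    inner.append(np.abs(np.vdot(v, v_true)))

print("mean |<v_hat, v_true>| over the last 50 blocks:", round(float(np.mean(inner[-50:])), 3))
```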
Abstract:
This paper presents methodologies for incorporating phasor measurements into a conventional state estimator. The angle measurements obtained from Phasor Measurement Units are handled as angle difference measurements rather than being incorporated directly. Handling them in this manner overcomes the problems arising from the choice of reference bus. Current measurements obtained from Phasor Measurement Units are treated as equivalent pseudo-voltage measurements at the neighboring buses. Two solution approaches, namely the normal equations approach and the linear programming approach, are presented to show how the Phasor Measurement Unit measurements can be handled. A comparative evaluation of both approaches is also presented. Test results on the IEEE 14-bus system are presented to validate both approaches.
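Converting a PMU branch-current phasor into an equivalent pseudo-voltage measurement at the neighboring bus amounts to one application of the branch voltage-drop equation. The sketch below illustrates this for a simple series branch (shunt charging neglected), using illustrative per-unit values rather than the paper's IEEE 14-bus data.

```python
import numpy as np

def pseudo_voltage(v_from, i_branch, z_series):
    """Pseudo-voltage phasor at the far-end bus of a series branch.

    v_from   : complex voltage phasor measured by the PMU at the sending bus (p.u.)
    i_branch : complex current phasor measured on the branch, leaving the sending bus (p.u.)
    z_series : series impedance of the branch r + jx (p.u.); shunt charging is ignored
    """
    return v_from - z_series * i_branch

# Illustrative per-unit values (not from the IEEE 14-bus case)
v_pmu = 1.02 * np.exp(1j * np.deg2rad(-3.0))       # PMU voltage phasor at the metered bus
i_pmu = 0.45 * np.exp(1j * np.deg2rad(-25.0))      # PMU current phasor on the outgoing branch
z = 0.02 + 0.06j                                   # branch series impedance

v_far = pseudo_voltage(v_pmu, i_pmu, z)
print(f"pseudo-voltage: |V| = {abs(v_far):.4f} p.u., angle = {np.degrees(np.angle(v_far)):.2f} deg")

# Angle-difference measurement between the two buses (avoids the reference-bus issue)
theta_diff = np.angle(v_pmu) - np.angle(v_far)
print(f"angle difference used by the estimator: {np.degrees(theta_diff):.2f} deg")
```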
Abstract:
Some bulk metallic glasses (BMGs) exhibit high crack initiation toughness due to shear band mediated plastic flow at the crack tip, and yet do not display additional resistance to crack growth owing to the lack of a microstructure. Thus, at crack initiation, the fracture behavior of BMGs transitions from that of ductile alloys to that of brittle ceramics. In this paper, we attempt to understand the physics behind the characteristic length from the notch root at which this transition occurs, through testing of four-point bend specimens made of a nominally ductile Zr-based BMG in three different structural states. In the as-cast state, both symmetric (mode I) and asymmetric (mixed mode) bend specimens are tested. The process of shear band mediated plastic flow followed by crack initiation at the notch root was monitored through in situ imaging. Results show that stable crack growth occurs inside a dominant shear band through a distance of approximately 60 μm, irrespective of the structural state and mode mixity, before attaining criticality. Detailed finite element simulations show that this length corresponds to the distance from the notch root over which a positive hydrostatic stress gradient prevails. The mean ridge heights on the fractured surfaces are found to correlate with the toughness of the BMG. The Argon and Salama model, which is based on the meniscus instability phenomenon at the notch root, is modified to explain the experimentally observed physics of fracture in ductile BMGs. (C) 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Abstract:
Traction insulators are solid core insulators widely used for railway electrification. Constant exposure to the detrimental effects of vandalism and mechanical vibrations begets faults such as shorting of sheds or cracks in the sheds. Due to a fault in one or two sheds, the stress on the remaining healthy sheds increases; combined with atmospheric pollution, this stress may lead to a flashover of the insulator. Presently, owing to the non-availability of electric stress data for these insulators, a simulation study is carried out to find the potential and electric field distributions for the most widely used traction insulators in the country. The results of the potential and electric field stress obtained for normal and fault-imposed insulators are presented.
Abstract:
The goal of speech enhancement algorithms is to provide an estimate of clean speech starting from noisy observations. The often-employed cost function is the mean square error (MSE). However, the MSE can never be computed in practice. Therefore, it becomes necessary to find practical alternatives to the MSE. In image denoising problems, the cost function (also referred to as risk) is often replaced by an unbiased estimator. Motivated by this approach, we reformulate the problem of speech enhancement from the perspective of risk minimization. Some recent contributions in risk estimation have employed Stein's unbiased risk estimator (SURE) together with a parametric denoising function, which is a linear expansion of thresholds (LET). We show that the first-order case of SURE-LET results in a Wiener-filter type solution if the denoising function is made frequency-dependent. We also provide enhancement results obtained with both techniques and characterize the improvement by means of local as well as global SNR calculations.
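A minimal sketch of the Wiener-filter type, frequency-dependent gain that the first-order frequency-dependent solution reduces to: each short-time spectral coefficient is scaled by an SNR-dependent factor computed from the noisy frame and a noise power estimate. The framing parameters, the noise-PSD estimate, the gain floor, and the synthetic test signal are assumptions for illustration, not the paper's exact estimator.

```python
import numpy as np

def wiener_gain_enhance(noisy, noise_psd, frame=512, hop=256, gain_floor=0.05):
    """Frame-wise enhancement with a frequency-dependent Wiener-type gain.

    noisy     : 1-D array of noisy speech samples
    noise_psd : per-bin noise power estimate (length frame//2 + 1), e.g. from a noise-only segment
    """
    window = np.hanning(frame)
    out = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame + 1, hop):
        seg = noisy[start:start + frame] * window
        spec = np.fft.rfft(seg)
        snr_prior = np.maximum(np.abs(spec) ** 2 / noise_psd - 1.0, 0.0)   # crude SNR estimate per bin
        gain = np.maximum(snr_prior / (snr_prior + 1.0), gain_floor)       # Wiener-type gain per bin
        out[start:start + frame] += np.fft.irfft(gain * spec, n=frame) * window   # overlap-add
        norm[start:start + frame] += window ** 2
    return out / np.maximum(norm, 1e-8)

# Illustrative use: a synthetic "clean" tone buried in white noise (a stand-in for speech)
rng = np.random.default_rng(6)
fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
noise = 0.3 * rng.normal(size=t.size)
noisy = clean + noise

noise_psd = np.abs(np.fft.rfft(noise[:512] * np.hanning(512))) ** 2        # noise-only estimate
enhanced = wiener_gain_enhance(noisy, noise_psd)
snr = lambda ref, sig: 10 * np.log10(np.sum(ref ** 2) / np.sum((ref - sig) ** 2))
print(f"global SNR: noisy {snr(clean, noisy):.1f} dB -> enhanced {snr(clean, enhanced):.1f} dB")
```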