157 results for Error estimate.
Abstract:
This paper presents a novel algorithm for compression of single-lead Electrocardiogram (ECG) signals. The method is based on pole-zero modelling of the Discrete Cosine Transformed (DCT) signal. An extension is proposed to the well-known Steiglitz-McBride algorithm to model the higher frequency components of the input signal more accurately. This is achieved by weighting the error function minimized by the algorithm to estimate the model parameters. The data compression achieved by the parametric model is further enhanced by Differential Pulse Code Modulation (DPCM) of the model parameters. The method accomplishes a compression ratio in the range of 1:20 to 1:40, which far exceeds that achieved by most current methods.
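A minimal sketch of the DPCM stage mentioned above, assuming the pole-zero model parameters arrive as a float array; the quantizer step and the parameter values are hypothetical, not taken from the paper.

```python
import numpy as np

def dpcm_encode(params, step=0.01):
    """Quantize the difference between each parameter and the
    reconstruction of the previous one (quantizer inside the loop)."""
    codes = np.empty(len(params), dtype=int)
    prev = 0.0
    for i, p in enumerate(params):
        codes[i] = round((p - prev) / step)   # transmit small integer codes
        prev += codes[i] * step               # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, step=0.01):
    return np.cumsum(codes) * step

params = np.array([1.82, 1.79, 1.75, 1.74, 1.70])  # made-up model coefficients
codes = dpcm_encode(params)
print(codes, dpcm_decode(codes))
```

Because successive model parameters are correlated, the differences need fewer bits than the raw values, which is where the extra compression comes from.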
Abstract:
Evaluation of the probability of error in decision feedback equalizers is difficult due to the presence of a hard limiter in the feedback path. This paper derives upper and lower bounds on the probability of a single error and of multiple error patterns. The bounds are fairly tight and can also be used to select proper tap gains for the equalizer.
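A Monte Carlo sketch of why the hard limiter makes exact evaluation difficult (this is an illustration, not the paper's bounds): a hypothetical two-tap channel with a one-tap decision feedback equalizer, compared against the ideal-feedback error probability, which lower-bounds the true one.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
h0, h1 = 1.0, 0.5        # main tap and one postcursor (hypothetical values)
sigma = 0.4              # noise standard deviation
n = 100_000
a = rng.choice([-1.0, 1.0], size=n)           # BPSK symbols
noise = rng.normal(0.0, sigma, size=n)

errors = 0
prev_hat = a[0]                                # assume the first decision is correct
for k in range(1, n):
    r = h0*a[k] + h1*a[k-1] + noise[k]         # received sample with postcursor ISI
    y = r - h1*prev_hat                        # feedback cancels ISI only if the previous decision was right
    a_hat = 1.0 if y >= 0 else -1.0            # hard limiter in the feedback path
    errors += (a_hat != a[k])
    prev_hat = a_hat

# With ideal (error-free) feedback, P(e) = Q(h0/sigma): a simple lower bound.
q = 0.5 * erfc(h0 / (sigma * sqrt(2)))
print(f"simulated P(e) = {errors/(n-1):.4f}, ideal-feedback bound = {q:.4f}")
```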
Abstract:
Upper bounds on the probability of error due to co-channel interference are proposed in this correspondence. The bounds are easy to compute and can be fairly tight.
Abstract:
In this paper, we address the design of codes which achieve modulation diversity in block fading single-input single-output (SISO) channels with signal quantization at the receiver. With an unquantized receiver, coding based on algebraic rotations is known to achieve maximum modulation coding diversity. On the other hand, with a quantized receiver, algebraic rotations may not guarantee gains in diversity. Through analysis, we propose specific rotations which result in the codewords having equidistant component-wise projections. We show that the proposed coding scheme achieves maximum modulation diversity with a low-complexity minimum distance decoder and perfect channel knowledge. Relaxing the perfect channel knowledge assumption, we propose a novel channel training/estimation technique to estimate the channel. We show that our coding/training/estimation scheme and minimum distance decoding achieve an error probability performance similar to that achieved with perfect channel knowledge.
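A sketch of a rotated constellation with full modulation diversity. The angle below (half of arctan 2, whose tangent is irrational, which suffices for full diversity on integer constellations) is a standard illustrative choice, not the specific equidistant-projection rotation derived in the paper.

```python
import numpy as np
from itertools import product

theta = 0.5 * np.arctan(2.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pam = np.array([-3.0, -1.0, 1.0, 3.0])                 # 4-PAM per dimension
points = np.array([R @ np.array(p) for p in product(pam, pam)])

# Full modulation diversity <=> any two codewords differ in every coordinate.
min_sep = min(np.abs(points[i] - points[j]).min()
              for i in range(len(points)) for j in range(i + 1, len(points)))
print(f"minimum component-wise separation after rotation: {min_sep:.4f}")
```

A strictly positive minimum separation confirms that erasing or deeply fading one coordinate still leaves the codewords distinguishable in the other, which is the property quantization can destroy for poorly chosen rotations.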
Abstract:
The questions one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) what the cost is in terms of the amount of computation and the amount of storage used to obtain them. Absolutely error-free quantities, and the completely errorless computations carried out in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including their real-valued inputs, are exact, the computations we perform on a digital computer or in an embedded form are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not of assumption, is not less than 0.005 per cent. By error we here mean relative error-bounds. The fact that the exact error is never known, under any circumstances and in any context, implies that the term error denotes nothing but error-bounds. Further, in engineering computations it is the relative error, or equivalently the relative error-bounds (and not the absolute error), that is supremely important in conveying the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of these three factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and do get results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, other numerical and statistical computations, and PAC (probably approximately correct) learning models. It characterizes the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz. the amounts of computation and storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the use of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
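A toy illustration of the talk's emphasis on relative error-bounds, using the standard first-order propagation rules; the 0.005 per cent floor is taken from the abstract itself.

```python
# First-order propagation of relative error-bounds, taking the abstract's
# instrument floor of 0.005 per cent (5e-5) for every measured input.
r = 5e-5                       # relative error-bound of each input
rel_bound_product = r + r      # x*y (or x/y): relative bounds add
rel_bound_sum = r              # x+y with x, y > 0: bounded by the larger input bound
print(f"x*y : {rel_bound_product:.1e}")
print(f"x+y : {rel_bound_sum:.1e}")
```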
Abstract:
We address the problem of estimating the instantaneous frequency (IF) of a real-valued constant-amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation with a low-order polynomial over a short segment of the signal. This involves choosing the window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF, which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratios (SNRs).
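A sketch of zero-crossing-based IF estimation with a local low-order polynomial fit. A fixed window is used here; the paper's adaptive intersection-of-confidence-intervals rule is not implemented, and the chirp is a made-up test signal.

```python
import numpy as np

fs = 8000.0
t = np.arange(0.0, 1.0, 1.0/fs)
x = np.cos(2*np.pi*(500*t + 300*t**2))        # IF = 500 + 600 t Hz

# zero-crossing instants by linear interpolation between samples
i = np.where(np.signbit(x[:-1]) != np.signbit(x[1:]))[0]
tz = t[i] - x[i]*(t[i+1] - t[i])/(x[i+1] - x[i])

# the phase advances by pi between crossings -> local frequency estimate
f_est = 0.5/np.diff(tz)
tc = 0.5*(tz[:-1] + tz[1:])

# first-order polynomial fit over a short segment centred at t = 0.5 s
win = (tc > 0.45) & (tc < 0.55)
coeffs = np.polyfit(tc[win], f_est[win], 1)
print(f"IF at t = 0.5 s: true 800.0 Hz, estimated {np.polyval(coeffs, 0.5):.1f} Hz")
```

Shrinking the window reduces the polynomial-approximation bias but leaves fewer crossings to average, which is exactly the bias-variance trade-off the adaptive window rule resolves.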
Abstract:
The realistic estimation of the dynamic characteristics for a known set of loading conditions continues to be difficult despite many contributions in the past. The design of a machine foundation is generally made on the basis of limiting amplitude or resonant frequency. These parameters are in turn dependent on the dynamic characteristics of the soil, viz. the shear modulus/stiffness and damping. The work reported herein is an attempt to relate statistically the shear modulus of a soil to its resonant amplitude under a known set of static and dynamic loading conditions as well as wide-ranging soil conditions. The two parameters have been statistically related with a good correlation coefficient and a low standard error of estimate.
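A sketch of the statistical step described: regress shear modulus on resonant amplitude and report the correlation coefficient and the standard error of estimate. The data are synthetic stand-ins, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
amp = rng.uniform(5, 50, 40)                    # resonant amplitude (microns)
G = 120 - 1.8*amp + rng.normal(0, 5, 40)        # shear modulus (MPa), made-up relation

slope, intercept = np.polyfit(amp, G, 1)
pred = slope*amp + intercept
r = np.corrcoef(amp, G)[0, 1]
see = np.sqrt(np.sum((G - pred)**2) / (len(G) - 2))  # standard error of estimate
print(f"r = {r:.2f}, standard error of estimate = {see:.2f} MPa")
```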
Abstract:
A low-strain shear modulus plays a fundamental role in earthquake geotechnical engineering in estimating the ground response parameters for seismic microzonation. A large number of site response studies are carried out using standard penetration test (SPT) data, relying on the existing correlations between SPT N values and shear modulus. The purpose of this paper is to review the available empirical correlations between shear modulus and SPT N values and to generate a new correlation by combining new data obtained by the author with the previously available data. The review shows that only a few authors used measured density and shear wave velocity to estimate the shear modulus related to the SPT N values; others assumed a constant density for all shear wave velocities. Many authors used SPT N values of less than 1 and more than 100 to generate correlations by extrapolation or assumption, but such N values have limited practical application: values below 1 cannot be measured, and values above 100 are not recorded. Most of the existing correlations were developed from studies carried out in Japan, where N values are measured with a hammer energy of 78%; these may not be directly applicable to other regions because of the variation in SPT hammer energy. A new correlation has been generated using the measured values from Japan and India, eliminating the assumed and extrapolated data. This correlation has a higher regression coefficient and a lower standard error. Finally, modification factors are suggested for regions where the hammer energy differs from 78%.
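A sketch of the hammer-energy adjustment the paper calls for: N values measured at a different energy ratio are scaled to the 78% basis of Japanese rigs before a correlation is applied. The correlation constants a and b below are placeholders, not the paper's fitted values.

```python
def n_at_78(n_measured, energy_ratio_percent):
    """Energy-normalize a measured SPT N value to the 78% basis."""
    return n_measured * energy_ratio_percent / 78.0

def shear_modulus_mpa(n78, a=15.0, b=0.68):
    """Power-law correlation G = a * N^b (a, b hypothetical)."""
    return a * n78 ** b

n = 25                                # N measured with 60% hammer energy
print(shear_modulus_mpa(n_at_78(n, 60)))
```

The scaling rests on the standard observation that, for the same soil, the measured N value is inversely proportional to the energy actually delivered to the rods.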
Abstract:
This paper proposes a current-error space-vector-based hysteresis controller with online computation of the boundary for two-level inverter-fed induction motor (IM) drives. The proposed hysteresis controller retains all the advantages of conventional current-error space-vector-based hysteresis controllers, such as quick transient response, simplicity, and adjacent-voltage-vector switching. A major advantage of a voltage-source-inverter-fed drive based on the proposed controller is that the phase voltage frequency spectrum produced is exactly similar to that of a constant-switching-frequency space-vector pulsewidth-modulated (SVPWM) inverter. In the proposed hysteresis controller, the stator voltages along the alpha- and beta-axes are estimated during zero and active voltage vector periods using the current errors along the alpha- and beta-axes and a steady-state model of the IM. The hysteresis boundary is then computed online from the estimated stator voltages. The proposed scheme is simple and capable of taking the inverter up to six-step-mode operation if demanded by the drive system. The proposed hysteresis-controller-based inverter-fed drive scheme is experimentally verified, and its steady-state and transient performance is extensively tested. The experimental results show a constant frequency spectrum for the phase voltage, similar to that of a constant-frequency SVPWM inverter-fed drive.
Abstract:
In this study, an effort has been made to study heavy rainfall events during cyclonic storms over the Indian Ocean. The estimate is based on microwave observations from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). A regional scattering index (SI), developed for the Indian region from brightness temperature measurements at 19, 21 and 85 GHz, and the polarization-corrected temperature (PCT) at 85 GHz are utilized in this study. The PCT and SI are collocated against the Precipitation Radar (PR) onboard TRMM to establish a relationship between rainfall rate, PCT and SI. Retrieval techniques using both linear and nonlinear regressions have been developed utilizing SI, PCT and the combination of SI and PCT, and the results have been compared with PR observations. A nonlinear algorithm using the combination of SI and PCT is found to be more accurate than a linear algorithm or a nonlinear algorithm using either SI or PCT alone. Statistical comparison with PR gives correlation coefficients (CC) of 0.68, 0.66 and 0.70, and root mean square errors (RMSE) of 1.78, 1.96 and 1.68 mm/h, for SI, PCT and the combination of SI and PCT respectively, using linear regression. With nonlinear regression, the CC are 0.73, 0.71 and 0.79 and the RMSE are 1.64, 1.95 and 1.54 mm/h, respectively. For high rain events (above 10 mm/h), linear regression gives CC of 0.58, 0.59 and 0.60 and RMSE of 5.07, 5.47 and 5.03 mm/h, while nonlinear regression yields CC of 0.66, 0.64 and 0.71 and RMSE of 4.68, 5.78 and 4.02 mm/h, for SI, PCT and the combination of SI and PCT, respectively.
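A sketch of the retrieval comparison: fit a linear and a simple nonlinear (quadratic-terms) regression of rain rate on SI and PCT, scoring both with the correlation coefficient (CC) and RMSE. The data are synthetic stand-ins for the collocated TMI/PR sample used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
si = rng.uniform(0, 40, 500)                      # scattering index (K)
pct = 280 - 1.5*si + rng.normal(0, 3, 500)        # 85-GHz PCT (K)
rain = 0.05*si**1.5 + rng.normal(0, 1, 500)       # PR rain rate (mm/h)

def fit_and_score(X, y):
    A = np.column_stack([X, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ coef
    return np.corrcoef(pred, y)[0, 1], np.sqrt(np.mean((pred - y)**2))

lin = fit_and_score(np.column_stack([si, pct]), rain)
nonlin = fit_and_score(np.column_stack([si, pct, si**2, pct**2, si*pct]), rain)
print(f"linear:    CC = {lin[0]:.2f}, RMSE = {lin[1]:.2f} mm/h")
print(f"nonlinear: CC = {nonlin[0]:.2f}, RMSE = {nonlin[1]:.2f} mm/h")
```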
Abstract:
Ensuring reliable operation over an extended period of time is one of the biggest challenges facing present-day electronic systems. The increased vulnerability of components to atmospheric particle strikes poses a big threat to attaining the reliability required for various mission-critical applications. Various soft error mitigation methodologies exist to address this reliability challenge. A general solution is to arrive at a soft error mitigation methodology with an acceptable implementation overhead and error tolerance level. This implementation overhead can then be reduced by taking advantage of various derating effects, such as logical derating, electrical derating and timing window derating, and/or by making use of application redundancy, e.g., redundancy in the firmware/software executing on the so-designed robust hardware. In this paper, we analyze the impact of various derating factors and show how they can be profitably employed to reduce the hardware overhead needed to implement a given level of soft error robustness. This analysis is performed on a set of benchmark circuits using the delayed capture methodology. Experimental results show up to 23% reduction in the hardware overhead when considering individual and combined derating factors.
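A minimal sketch of how derating factors compound: the raw soft error rate of a node is scaled by the probability that a particle strike is logically observable, electrically propagated, and latched. All factor values below are hypothetical.

```python
raw_ser = 1e-6              # raw upsets per node per hour (hypothetical)
logical_derating = 0.35     # fraction of strikes on paths that affect the output
electrical_derating = 0.60  # fraction of pulses not attenuated before a latch
timing_derating = 0.25      # fraction of pulses arriving inside the latching window

effective_ser = raw_ser * logical_derating * electrical_derating * timing_derating
print(f"effective SER = {effective_ser:.2e} per node-hour "
      f"({effective_ser/raw_ser:.1%} of raw)")
```

The smaller the effective rate, the fewer nodes need explicit hardening to meet a robustness target, which is how derating buys back hardware overhead.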
Abstract:
Structural Support Vector Machines (SSVMs) have become a popular tool in machine learning for predicting structured objects like parse trees, part-of-speech (POS) label sequences and image segments. Various efficient algorithmic techniques have been proposed for training SSVMs on large datasets. The typical SSVM formulation contains a regularizer term and a composite loss term, the latter usually composed of the Linear Maximum Error (LME) associated with the training examples. Other alternatives for the loss term are yet to be explored for SSVMs. We formulate a new SSVM with a Linear Summed Error (LSE) loss term and propose efficient algorithms to train the new formulation using a primal cutting-plane method and a sequential dual coordinate descent method. Numerical experiments on benchmark datasets demonstrate that the sequential dual coordinate descent method is faster than the cutting-plane method and reaches steady-state generalization performance sooner. It is thus a useful alternative for training SSVMs when the linear summed error is used.
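A purely schematic toy contrast between a max-type and a summed-type composite loss, in the spirit of the LME/LSE distinction above; h[i, y] holds a made-up hinge term for candidate structure y of training example i.

```python
import numpy as np

h = np.array([[0.0, 0.7, 0.2],
              [0.0, 0.0, 1.3]])
lme_loss = h.max(axis=1).sum()   # each example charged only for its worst structure
lse_loss = h.sum(axis=1).sum()   # each example charged for every violating structure
print(f"LME-style loss = {lme_loss}, LSE-style loss = {lse_loss}")
```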
Abstract:
This paper discusses the use of Jason-2 radar altimeter measurements to estimate the Ganga-Brahmaputra surface freshwater flux into the Bay of Bengal for the period mid-2008 to December 2011. A previous estimate was generated for 1993-2008 using TOPEX-Poseidon, ERS-2 and ENVISAT, and is now extended using Jason-2. To take full advantage of the newly available in situ rating curves, the processing scheme is adapted and the adjustments to the methodology are discussed here. First, using a large sample of in situ river height measurements, we estimate the standard error of Jason-2-derived water levels over the Ganga and the Brahmaputra to be 0.28 m and 0.19 m respectively, or less than ~4% of the annual peak-to-peak variations of these two rivers. Using the in situ rating curves between water levels and river discharges, we show that Jason-2 accurately infers Ganga and Brahmaputra instantaneous discharges for 2008-2011, with mean errors ranging from ~2180 m³/s (6.5%) for the Brahmaputra to ~1458 m³/s (13%) for the Ganga. The combined Ganga-Brahmaputra monthly discharges meet the requirement of acceptable accuracy (15-20%), with a mean error of ~16% for 2009-2011 and ~17% for 1993-2011. The Ganga-Brahmaputra monthly discharge at the river mouths is then presented, showing a marked interannual variability with a standard deviation of ~12500 m³/s, much larger than the uncertainty of the data set. Finally, using in situ sea surface salinity observations, we illustrate the possible impact of an extreme continental freshwater discharge event on the northern Bay of Bengal, as observed in 2008.
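A sketch of the altimetry-to-discharge step: a Jason-2 water level is turned into discharge through an in situ rating curve of the common form Q = a*(h - h0)^b. The curve parameters are placeholders, not the gauged Ganga/Brahmaputra curves; the 0.28 m level error is the paper's Ganga estimate.

```python
def discharge_m3s(h, a=950.0, h0=1.2, b=1.8):
    """Rating curve Q = a*(h - h0)^b (parameters hypothetical)."""
    return a * max(h - h0, 0.0) ** b

h = 5.6                                   # altimeter water level (m)
q = discharge_m3s(h)
dq = discharge_m3s(h + 0.28) - q          # propagate the level error through the curve
print(f"Q ~ {q:.0f} m^3/s, level-error contribution ~ +/-{dq:.0f} m^3/s")
```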
Abstract:
Density-functional calculations are performed to explore the relationship between the work function and Young's modulus of RhSi, and to estimate the p-type Schottky barrier height (p-SBH) at the Si/RhSi(010) interface. It is shown that the Young's modulus and the work function of RhSi satisfy the generic sextic relation proposed recently for elemental metals. The calculated p-SBH at the Si/RhSi interface is found to differ by only 0.04 eV between the opposite limits of no pinning and strong pinning. We find that the p-SBH is reduced by as much as 0.28 eV due to vacancies at the interface. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4761994]
Abstract:
Motivated by applications to distributed storage, Gopalan et al. recently introduced the interesting notion of information-symbol locality in a linear code. By this it is meant that each message symbol appears in a parity-check equation of small Hamming weight, thereby enabling recovery of the message symbol by examining a small number of other code symbols. This notion is expanded to the case when all code symbols, not just the message symbols, are covered by such "local" parities. In this paper, we extend the results of Gopalan et al. so as to permit recovery of an erased code symbol even in the presence of errors in local parity symbols. We present tight bounds on the minimum distance of such codes and exhibit codes that are optimal with respect to the local error-correction property. As a corollary, we obtain an upper bound on the minimum distance of a concatenated code.
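For context, the information-locality bound of Gopalan et al. that this paper extends is the following known result; the paper's generalization to local error correction modifies it further.

```latex
% Gopalan et al.: for an [n, k] linear code whose every information symbol
% has locality r (is recoverable from at most r other code symbols),
d_{\min} \;\le\; n - k - \left\lceil \frac{k}{r} \right\rceil + 2 .
```

Setting r = k recovers the classical Singleton bound, so the ceiling term quantifies the minimum-distance price paid for locality.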