103 results for absolute error

at Indian Institute of Science - Bangalore - India


Relevance:

70.00%

Publisher:

Abstract:

The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) how much the cost is, in terms of the amount of computation and the amount of storage used in obtaining the outputs. The absolutely error-free quantities, as well as the completely errorless computations carried out in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including the input real quantities, are exact, all the computations that we perform on a digital computer or in embedded form are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we mean relative error bounds. The fact that the exact error is never known, under any circumstances and in any context, implies that the term error is nothing but error bounds. Further, in engineering computations it is the relative error or, equivalently, the relative error bounds (and not the absolute error) that is supremely important in conveying the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus, if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of the three foregoing factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems and obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the use of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
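As a minimal sketch of the role the 0.005 per cent instrument floor plays (the function names and the worked example below are illustrative, not from the talk), first-order propagation of relative error bounds for a product of measured quantities looks like this:

# Minimal sketch (illustrative, not from the talk): first-order propagation of
# relative error bounds, assuming each measured input carries at least the
# 0.005 per cent relative error bound that the abstract hypothesises.

MIN_REL_BOUND = 0.005 / 100.0  # 0.005 per cent, as a fraction

def rel_bound(stated_bound: float) -> float:
    """Never report a relative error bound below the instrument floor."""
    return max(abs(stated_bound), MIN_REL_BOUND)

def product_rel_bound(*bounds: float) -> float:
    """To first order, relative bounds add under multiplication/division."""
    return sum(rel_bound(b) for b in bounds)

if __name__ == "__main__":
    # Hypothetical example: power P = V * I from a voltmeter and an ammeter,
    # each specified at 0.01% relative error.
    bound_P = product_rel_bound(0.0001, 0.0001)
    print(f"relative error bound on P: {bound_P:.6%}")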

Relevance:

60.00%

Publisher:

Abstract:

This study investigates the potential of a Relevance Vector Machine (RVM)-based approach to predict the ultimate capacity of laterally loaded piles in clay. The RVM is a sparse, approximate Bayesian kernel method; it can be seen as a probabilistic version of the support vector machine. It provides much sparser regressors without compromising performance, and its kernel bases give a small but worthwhile improvement in performance. The RVM model outperforms the two other models considered on the root-mean-square error (RMSE) and mean absolute error (MAE) performance criteria. It also estimates the prediction variance. The results presented in this paper clearly highlight that the RVM is a robust tool for the prediction of the ultimate capacity of laterally loaded piles in clay.
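For reference, the two performance criteria named in the abstract can be computed as in the sketch below (the pile-capacity numbers are hypothetical placeholders):

# Minimal sketch (illustrative, not from the paper): RMSE and MAE with NumPy.
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root-mean-square error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error."""
    return float(np.mean(np.abs(y_true - y_pred)))

if __name__ == "__main__":
    # Hypothetical measured vs. predicted ultimate capacities (kN).
    measured = np.array([120.0, 95.0, 210.0, 160.0])
    predicted = np.array([115.0, 101.0, 205.0, 170.0])
    print(f"RMSE = {rmse(measured, predicted):.2f} kN")
    print(f"MAE  = {mae(measured, predicted):.2f} kN")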

Relevance:

60.00%

Publisher:

Abstract:

In computational molecular biology, the aim of restriction mapping is to locate the restriction sites of a given enzyme on a DNA molecule. Double digest and partial digest are two well-studied techniques for restriction mapping. While double digest is NP-complete, there is no known polynomial-time algorithm for partial digest. Another disadvantage of these techniques is that there can be multiple solutions for the reconstruction. In this paper, we study a simple technique called labeled partial digest for restriction mapping. We give a fast polynomial-time (O(n^2 log n) worst-case) algorithm for finding all the n sites of a DNA molecule using this technique. An important advantage of the algorithm is the unique reconstruction of the DNA molecule from the digest. The technique is also robust in handling the errors in fragment lengths which arise in the laboratory. We give a robust O(n^4) worst-case algorithm that can provably tolerate an absolute error of O(Δ/n) (where Δ is the minimum inter-site distance), while still giving a unique reconstruction. We test our theoretical results by simulating the performance of the algorithm on a real DNA molecule. Motivated by the similarity to the labeled partial digest problem, we address a related problem of interest, the de novo peptide sequencing problem (ACM-SIAM Symposium on Discrete Algorithms (SODA), 2000, pp. 389-398), which arises in the reconstruction of the peptide sequence of a protein molecule. We give a simple and efficient algorithm for the problem without using dynamic programming. The algorithm runs in time O(k log k), where k is the number of ions, and is an improvement over the algorithm in Chen et al.
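The unique-reconstruction idea can be illustrated with a toy consistency check (an illustrative simplification over assumed inputs, not the paper's O(n^2 log n) or O(n^4) algorithms): if each fragment carries the labeled end, distances from the left end directly give candidate site positions, and distances from the right end confirm them within an absolute tolerance.

# Toy sketch (assumed inputs): recover site positions from left-labeled fragment
# lengths and confirm them against the right-labeled lengths within a tolerance.
def reconstruct_sites(left_dists, right_dists, total_length, tol):
    sites = sorted(left_dists)                    # candidate positions from the left label
    mirrored = sorted(total_length - d for d in right_dists)
    if len(sites) != len(mirrored):
        raise ValueError("inconsistent digests")
    for a, b in zip(sites, mirrored):
        if abs(a - b) > tol:                      # cross-check within the error tolerance
            raise ValueError(f"sites {a} and {b} disagree beyond tolerance")
    # Average the two noisy estimates of each site position.
    return [(a + b) / 2 for a, b in zip(sites, mirrored)]

if __name__ == "__main__":
    # Hypothetical molecule of length 100 with sites near 20, 45 and 70.
    print(reconstruct_sites([20.4, 44.8, 70.1], [29.7, 55.3, 80.2], 100, tol=1.0))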

Relevance:

60.00%

Publisher:

Abstract:

In this paper, a fractional-order proportional-integral controller is developed for a miniature air vehicle for rectilinear path following and trajectory tracking. The controller is implemented by constructing a vector field surrounding the path to be followed, which is then used to generate course commands for the miniature air vehicle. The fractional-order proportional-integral controller is simulated using the fundamentals of fractional calculus, and the results for this controller are compared with those obtained for a proportional controller and a proportional-integral controller. In order to analyse the performance of the controllers, four performance metrics have been selected, namely (maximum) overshoot, control effort, settling time, and the integral of time-weighted absolute error (ITAE). A comparison of the nominal as well as the robust performances of these controllers indicates that the fractional-order proportional-integral controller exhibits the best performance in terms of ITAE while showing comparable performance in all other aspects.
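The ITAE criterion used for the comparison can be evaluated numerically from a sampled tracking-error signal, as in the sketch below (the error response shown is a stand-in, not a result from the paper):

# Minimal sketch: ITAE = integral of t * |e(t)| dt, via the trapezoidal rule.
import numpy as np

def itae(t: np.ndarray, error: np.ndarray) -> float:
    """Integral of time-weighted absolute error."""
    return float(np.trapz(t * np.abs(error), t))

if __name__ == "__main__":
    # Hypothetical cross-track error decaying after a step change in the path.
    t = np.linspace(0.0, 10.0, 1001)
    error = np.exp(-0.8 * t)        # a stand-in tracking-error response
    print(f"ITAE = {itae(t, error):.3f}")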

Relevance:

60.00%

Publisher:

Abstract:

This work considers the identification of the available whitespace, i.e., the regions that do not contain any existing transmitter, within a given geographical area. To this end, n sensors are deployed at random locations within the area. These sensors detect the presence of a transmitter within their radio range r_s using a binary sensing model, and their individual decisions are combined to estimate the available whitespace. The limiting behaviour of the recovered whitespace as a function of n and r_s is analysed. It is shown that both the fraction of the available whitespace that the nodes fail to recover and their radio range optimally scale as log(n)/n as n grows large. The problem of minimizing the sum absolute error in transmitter localization is also analysed, and the corresponding optimal scaling of the radio range and the necessary minimum transmitter separation is determined.
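A toy Monte Carlo sketch of the recovery question follows (this is not the paper's sensing model or its log(n)/n result; the unit-square geometry, the range rule and the declaration rule below are all assumptions made purely for illustration): with no transmitter present, a point can only be declared whitespace if some sensor within range r_s has sensed it, so the unrecovered fraction here is simply the uncovered area.

# Toy Monte Carlo (assumed model): fraction of a unit square left uncovered by
# n randomly placed sensors of radio range r_s.
import numpy as np

def unrecovered_fraction(n_sensors: int, r_s: float, n_test: int = 2000,
                         rng=np.random.default_rng(0)) -> float:
    sensors = rng.random((n_sensors, 2))          # sensor locations on the unit square
    test = rng.random((n_test, 2))                # random test points
    # Distance from each test point to its nearest sensor.
    d = np.min(np.linalg.norm(test[:, None, :] - sensors[None, :, :], axis=2), axis=1)
    return float(np.mean(d > r_s))

if __name__ == "__main__":
    for n in (100, 400, 1600):
        r = np.sqrt(np.log(n) / n)                # an assumed range rule for this toy only
        print(n, round(unrecovered_fraction(n, r), 4))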

Relevance:

60.00%

Publisher:

Abstract:

Computing the maximum of sensor readings arises in several environmental, health, and industrial monitoring applications of wireless sensor networks (WSNs). We characterize several novel design trade-offs that arise when green energy-harvesting (EH) WSNs, which promise perpetual lifetimes, are deployed for this purpose. The nodes harvest renewable energy from the environment for communicating their readings to a fusion node, which then periodically estimates the maximum. For a randomized transmission schedule in which a pre-specified number of randomly selected nodes transmit in a sensor data collection round, we analyse the mean absolute error (MAE), defined as the mean of the absolute difference between the maximum and the fusion node's estimate in each round. We optimize the transmit power and the number of scheduled nodes to minimize the MAE, both when the nodes have channel state information (CSI) and when they do not. Our results highlight how the optimal system operation depends on the EH rate, the availability and cost of acquiring CSI, quantization, and the size of the scheduled subset. Our analysis applies to a general class of sensor reading and EH random processes.
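The MAE metric for the randomized schedule can be illustrated with a small simulation (assumed Gaussian readings, no channel or energy-harvesting model; this only shows how the error of the subset maximum shrinks as more nodes are scheduled):

# Toy sketch: MAE of estimating the maximum of n readings from a randomly
# scheduled subset of k nodes, averaged over many collection rounds.
import numpy as np

def mae_of_max(n: int, k: int, rounds: int = 10000,
               rng=np.random.default_rng(1)) -> float:
    errors = []
    for _ in range(rounds):
        readings = rng.standard_normal(n)                 # one collection round
        scheduled = rng.choice(n, size=k, replace=False)  # randomized schedule
        errors.append(abs(readings.max() - readings[scheduled].max()))
    return float(np.mean(errors))

if __name__ == "__main__":
    for k in (1, 4, 16, 64):
        print(f"k = {k:3d}: MAE = {mae_of_max(100, k):.3f}")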

Relevance:

60.00%

Publisher:

Abstract:

We propose a multiple-initialization-based spectral peak tracking (MISPT) technique for heart rate monitoring from the photoplethysmography (PPG) signal. MISPT is applied to the PPG signal after removing the motion artifact using an adaptive noise cancellation filter. MISPT yields several estimates of the heart rate trajectory from the spectrogram of the denoised PPG signal, which are finally combined using a novel measure called trajectory strength. Multiple initializations help in correcting erroneous heart rate trajectories, unlike typical spectral peak tracking (SPT), which uses only a single initialization. Experiments on PPG data from 12 subjects recorded during intensive physical exercise show that MISPT-based heart rate monitoring indeed yields a better heart rate estimate than SPT with a single initialization. On the 12 datasets, MISPT results in an average absolute error of 1.11 BPM, which is lower than the 1.28 BPM obtained by the state-of-the-art online heart rate monitoring algorithm.
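The single-initialization baseline that MISPT improves upon can be sketched as follows (illustrative assumptions throughout: the search-window width, the fallback rule, and the trajectory-strength definition mentioned in the closing comment are not taken from the paper):

# Minimal sketch: single-initialization spectral peak tracking on a spectrogram,
# picking in each frame the strongest peak near the previous heart-rate estimate.
import numpy as np

def track_heart_rate(spectrogram: np.ndarray, freqs_bpm: np.ndarray,
                     init_bpm: float, window_bpm: float = 10.0):
    """spectrogram: (n_frames, n_bins) magnitudes; freqs_bpm: bin centres in BPM."""
    trajectory, prev = [], init_bpm
    for frame in spectrogram:
        idx = np.flatnonzero(np.abs(freqs_bpm - prev) <= window_bpm)
        if idx.size == 0:                          # fall back to the nearest bin
            idx = np.array([np.argmin(np.abs(freqs_bpm - prev))])
        prev = freqs_bpm[idx[np.argmax(frame[idx])]]  # strongest nearby peak
        trajectory.append(prev)
    return np.array(trajectory)

# MISPT would run this from several initial BPM values and keep the trajectory with
# the highest "trajectory strength" (e.g. summed peak magnitude -- an assumption here).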

Relevance:

20.00%

Publisher:

Abstract:

Error estimates for the error reproducing kernel method (ERKM) are provided. The ERKM is a mesh-free functional approximation scheme [A. Shaw, D. Roy, A NURBS-based error reproducing kernel method with applications in solid mechanics, Computational Mechanics (2006), to appear (available online)], wherein a targeted function and its derivatives are first approximated via non-uniform rational B-spline (NURBS) basis functions. Errors in the NURBS approximation are then reproduced via a family of non-NURBS basis functions, constructed using a polynomial reproduction condition, and added to the NURBS approximation of the function obtained in the first step. In addition to the derivation of error estimates, convergence studies are undertaken for a couple of test boundary value problems with known exact solutions. The ERKM is next applied to a one-dimensional Burgers equation where time evolution leads to a breakdown of the continuous solution and the appearance of a shock. Many available mesh-free schemes appear to be unable to capture this shock without numerical instability. However, given that any desired order of continuity is achievable through NURBS approximations, the ERKM can accurately approximate even functions with discontinuous derivatives. Moreover, due to the variation-diminishing property of NURBS, it has advantages in representing sharp changes in gradients. This paper is focused on demonstrating this ability of the ERKM via some numerical examples. Comparisons of some of the results with those obtained via the standard form of the reproducing kernel particle method (RKPM) demonstrate the relative numerical advantages and accuracy of the ERKM.
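The two-step structure described above can be conveyed with a much simpler stand-in (an illustration only: plain polynomial fits are used below in place of the NURBS and the non-NURBS reproducing bases): approximate the function once, then approximate the residual with a second basis and add the two.

# Illustrative sketch of the "approximate, then reproduce the error" idea.
import numpy as np

x = np.linspace(0.0, 1.0, 41)
f = np.sin(2 * np.pi * x) + 0.3 * x**2          # target function samples

# Step 1: a coarse primary approximation (low-order basis).
primary = np.polynomial.Polynomial.fit(x, f, deg=3)

# Step 2: reproduce the remaining error with a second basis and add it back.
residual = f - primary(x)
correction = np.polynomial.Polynomial.fit(x, residual, deg=7)

erkm_like = primary(x) + correction(x)
print("max abs error, primary only :", np.max(np.abs(f - primary(x))))
print("max abs error, corrected    :", np.max(np.abs(f - erkm_like)))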

Relevance:

20.00%

Publisher:

Abstract:

Drop formation at conical tips, which is of relevance to metallurgists, is investigated based on the principle of minimization of free energy using the variational approach. The dimensionless governing equations for the drop profiles are solved numerically using the fourth-order Runge-Kutta method. For different cone angles, the theoretical plots of X_T and Z_T versus their ratio are statistically analysed, where X_T and Z_T are the dimensionless x and z coordinates of the drop profile at a plane at the conical tip, perpendicular to the axis of symmetry. Based on the mathematical description of these curves, an absolute method is proposed for the determination of the surface tension of liquids, which is shown to be preferable to the earlier pendant-drop profile methods.
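As a rough illustration of the kind of computation involved, the sketch below integrates a standard dimensionless axisymmetric drop-profile system of the Bashforth-Adams type with a hand-written fourth-order Runge-Kutta step (an assumed form; the paper's exact governing equations and the boundary conditions at the conical tip may differ).

# Illustrative RK4 integration of an assumed dimensionless drop-profile system:
#   dx/ds = cos(phi), dz/ds = sin(phi), dphi/ds = 2 + beta*z - sin(phi)/x,
# parametrised by arc length s from the drop apex.
import numpy as np

def rhs(y, beta):
    x, z, phi = y
    curvature = 2.0 + beta * z - (np.sin(phi) / x if x > 1e-12 else 1.0)  # apex limit
    return np.array([np.cos(phi), np.sin(phi), curvature])

def rk4_profile(beta=0.5, ds=1e-3, s_max=3.0):
    y, s, profile = np.array([1e-12, 0.0, 0.0]), 0.0, []
    while s < s_max:
        k1 = rhs(y, beta)
        k2 = rhs(y + ds / 2 * k1, beta)
        k3 = rhs(y + ds / 2 * k2, beta)
        k4 = rhs(y + ds * k3, beta)
        y = y + ds / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        s += ds
        profile.append((y[0], y[1]))           # (x, z) points along the profile
    return np.array(profile)

print(rk4_profile()[-1])                        # end point of the integrated profile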

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we discuss measurements of the spectral surface reflectance (ρ_s(λ)) in the wavelength range 350-2500 nm made using a spectroradiometer onboard a low-flying aircraft over Bangalore (12.95° N, 77.65° E), an urban site in southern India. The large discrepancies in the retrieval of aerosol properties over land by the Moderate-Resolution Imaging Spectroradiometer (MODIS), which could be attributed to inaccurate estimation of the surface reflectance at many sites in India and elsewhere, provided the motivation for this paper. The aim of this paper was to verify the surface reflectance relationships assumed by the MODIS aerosol algorithm for estimating the surface reflectance in the visible channels (470 and 660 nm) from the surface reflectance at 2100 nm for aerosol retrieval over land. The variety of surfaces observed in this paper includes green and dry vegetation, bare land, and urban surfaces. The measured reflectance data were first corrected for the radiative effects of the atmosphere lying between the ground and the aircraft using the Second Simulation of Satellite Signal in the Solar Spectrum (6S) radiative transfer code. The corrected surface reflectance in MODIS's blue (ρ_s(470)), red (ρ_s(660)), and shortwave-infrared (SWIR) channel (ρ_s(2100)) was linearly correlated. We found that the slope of the reflectance relationship between 660 and 2100 nm derived from the forward-scattering data was 0.53 with an intercept of 0.07, whereas the slope for the relationship between the reflectances at 470 and 660 nm was 0.85. These values are much higher than the slope (~0.49) for either wavelength assumed by the MODIS aerosol algorithm over this region. The reflectance relationship for the backward-scattering data has a slope of 0.39, with an intercept of 0.08, for 660 nm, and a slope of 0.65, with an intercept of 0.08, for 470 nm. The large values of the intercept (which is very small in the MODIS reflectance relationships) result in larger values of the absolute surface reflectance in the visible channels. The discrepancy between the measured and assumed surface reflectances could lead to errors in the aerosol retrieval. The reflectance ratio ρ_s(660)/ρ_s(2100) showed a clear dependence on NDVI_SWIR, with the ratio increasing from 0.5 to 1 as NDVI_SWIR increased from 0 to 0.5. The high correlation between the reflectances at the SWIR wavelengths (2100, 1640, and 1240 nm) indicated an opportunity to derive the surface reflectance and, possibly, aerosol properties at these wavelengths. More experiments are needed to characterize the surface reflectance and the associated inhomogeneity of land surfaces, which play a critical role in the remote sensing of aerosols over land.
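The slope/intercept comparison at the heart of the study is a simple linear fit between channels, sketched below with hypothetical reflectance values (not data from the paper):

# Illustrative sketch: fit the slope and intercept of a visible-vs-SWIR surface
# reflectance relationship, rho_s(660) ~ slope * rho_s(2100) + intercept.
import numpy as np

# Hypothetical corrected surface reflectances for a handful of surface types.
rho_2100 = np.array([0.05, 0.10, 0.18, 0.25, 0.32])
rho_660  = np.array([0.10, 0.12, 0.16, 0.20, 0.24])

slope, intercept = np.polyfit(rho_2100, rho_660, deg=1)
print(f"rho_s(660) ~ {slope:.2f} * rho_s(2100) + {intercept:.2f}")

# The MODIS aerosol algorithm over this region assumes a slope near 0.49 with a
# very small intercept, so a fitted intercept of ~0.07-0.08 is a meaningful difference.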

Relevance:

20.00%

Publisher:

Abstract:

A residual-based strategy to estimate the local truncation error in a finite volume framework for steady compressible flows is proposed. This estimator, referred to as the -parameter, is derived from the imbalance arising from the use of an exact operator on the numerical solution of the conservation laws. The behaviour of the residual estimator for linear and non-linear hyperbolic problems is systematically analysed, and the relationship of the residual to the global error is also studied. The -parameter is used to derive a target length scale and, consequently, to devise a suitable criterion for refinement/derefinement. This strategy, devoid of any user-defined parameters, is validated using two standard test cases involving smooth flows. A hybrid adaptive strategy, based on both error indicators and the -parameter, is also developed for flows involving shocks. Numerical studies on several compressible flow cases show that the adaptive algorithm performs very well in both two and three dimensions.
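The basic mechanism, applying an exact (continuous) operator to the discrete solution and using the resulting imbalance to flag cells, can be sketched for a 1D steady conservation law (a simplified stand-in for the paper's finite-volume estimator; the flux and the threshold rule below are illustrative assumptions):

# Illustrative sketch: evaluate the residual of the exact steady operator
# d/dx f(u) = 0 on a discrete solution, then flag cells with a large residual.
import numpy as np

def flux(u):
    return 0.5 * u**2                       # Burgers-type flux, as an example

def residual_indicator(u, dx, threshold=0.1):
    f = flux(u)
    r = np.zeros_like(u)
    r[1:-1] = (f[2:] - f[:-2]) / (2.0 * dx)  # central-difference d f(u)/dx on interior cells
    refine = np.abs(r) > threshold * (np.max(np.abs(r)) + 1e-30)
    return r, refine

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 101)
    u = np.tanh(20 * (x - 0.5))             # a numerical solution with a steep layer
    r, refine = residual_indicator(u, x[1] - x[0])
    print("cells flagged for refinement:", int(refine.sum()))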

Relevance:

20.00%

Publisher:

Abstract:

A simple error-detecting and error-correcting procedure is described for nonbinary symbol words; the error position is located using the Hamming method, and the correct symbol is substituted using a modulo-check procedure.
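One plausible realization of this idea (an illustration only, not necessarily the paper's exact scheme) keeps, for each bit of the symbol index, a modulo-q sum over the symbols whose index has that bit set; a single corrupted symbol then reveals both its position and its magnitude.

# Illustrative single-error correction for q-ary symbol words (assumed scheme):
# modulo-Q check sums over Hamming-style position groups.
Q = 10  # symbol alphabet size (e.g. decimal digits)

def checks(word):
    """For each index bit b, the modulo-Q sum of symbols at positions with bit b set."""
    nbits = max(1, len(word).bit_length())
    return [sum(s for p, s in enumerate(word, start=1) if (p >> b) & 1) % Q
            for b in range(nbits)]

def correct_single_error(received, sent_checks):
    """Locate and fix at most one corrupted symbol using the stored check sums."""
    syndrome = [(r - s) % Q for r, s in zip(checks(received), sent_checks)]
    if all(d == 0 for d in syndrome):
        return received                                   # no error detected
    position = sum(1 << b for b, d in enumerate(syndrome) if d != 0)  # Hamming-style location
    magnitude = next(d for d in syndrome if d != 0)       # modulo check gives the error value
    fixed = list(received)
    fixed[position - 1] = (fixed[position - 1] - magnitude) % Q
    return fixed

if __name__ == "__main__":
    word = [3, 1, 4, 1, 5, 9, 2]
    stored = checks(word)
    corrupted = word.copy()
    corrupted[4] = (corrupted[4] + 7) % Q                 # inject an error at position 5
    print(correct_single_error(corrupted, stored) == word)  # True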

Relevance:

20.00%

Publisher:

Abstract:

Doss and Agarwal [1] discovered the "redoxokinetic effect", which is now familiarly known as faradaic rectification. Subsequently, the theory and applications of faradaic rectification due to a single electrode reaction have been developed by several workers [2-5]. The theory and application of faradaic rectification in the case of a corrosion cell sustaining mixed electrode reactions on a corroding metal was reported recently [6,7]. This led to the development of a new electrochemical method of corrosion rate determination. It was shown that changes in the instantaneous corrosion rates of a metal are readily evaluated by faradaic rectification measurements at the corrosion potential of the metal in a given medium. The aim of the present work is to show that absolute values of the instantaneous corrosion rates may also be obtained by the new method under certain conditions. The practical advantages that arise from this development are pointed out.

Relevance:

20.00%

Publisher:

Abstract:

A module containing all the functional components required for the digital absolute positioning of one axis of a machine tool has been designed and constructed. The circuit realization makes use of integrated-circuit elements.

Relevance:

20.00%

Publisher:

Abstract:

With technology scaling, vulnerability to soft errors in random logic is increasing, and there is a need for on-line error detection and protection for logic gates even at sea level. The error checker is the key element of an on-line detection mechanism. We compare three different checkers for error detection from the point of view of area, power, and false error detection rates. We find that the double sampling checker (used in Razor) is the simplest and the most area- and power-efficient, but suffers from very high false detection rates of 1.15 times the actual error rates. We also find that the alternative approaches of triple sampling and the integrate-and-sample (I&S) method can be designed to have zero false detection rates, but at increased area, power, and implementation complexity. The triple sampling method has about 1.74 times the area and twice the power compared with the double sampling method, and also needs a complex clock generation scheme. The I&S method needs about 16% more power and 0.58 times the area of double sampling, but comes with more stringent implementation constraints, as it requires the detection of small voltage swings.
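As a purely behavioural illustration of the two simplest checkers (the actual designs are circuit-level; the sampling model below is an assumption), double sampling flags any mismatch between the nominal and the delayed sample, while triple sampling can majority-vote so that an isolated glitch on one sample need not raise a false error.

# Behavioural sketch (not a circuit model): error flags raised by a double
# sampling checker vs. a triple sampling checker on captured samples.
def double_sampling_error(nominal: int, delayed: int) -> bool:
    # Razor-style: any disagreement between the two samples is flagged as an error.
    return nominal != delayed

def triple_sampling_error(early: int, nominal: int, late: int) -> bool:
    # Flag an error only if the nominal sample disagrees with the majority value.
    majority = 1 if (early + nominal + late) >= 2 else 0
    return nominal != majority

if __name__ == "__main__":
    # A glitch on the delayed/late sample alone: double sampling raises a (false)
    # error, while the majority vote masks it.
    print(double_sampling_error(1, 0), triple_sampling_error(1, 1, 0))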