52 results for False confession

at Indian Institute of Science - Bangalore - India


Relevance: 20.00%

Abstract:

With technology scaling, vulnerability to soft errors in random logic is increasing. There is a need for on-line error detection and protection for logic gates even at sea level. The error checker is the key element of an on-line detection mechanism. We compare three different checkers for error detection in terms of area, power and false error detection rate. We find that the double sampling checker (used in Razor) is the simplest and most area- and power-efficient, but suffers from a very high false detection rate of 1.15 times the actual error rate. We also find that the alternate approaches, triple sampling and the integrate and sample (I&S) method, can be designed to have zero false detection rates, but at increased area, power and implementation complexity. The triple sampling method has about 1.74 times the area and twice the power of the double sampling method and also needs a complex clock generation scheme. The I&S method needs about 16% more power with 0.58 times the area of double sampling, but comes with more stringent implementation constraints, as it requires detection of small voltage swings.
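As a back-of-the-envelope summary, the trade-off can be tabulated programmatically; the figures below are simply the numbers quoted in this abstract, normalized to the double sampling checker.

```python
def checker_tradeoffs():
    """Area, power and false-detection figures from the abstract,
    normalized to the double sampling (Razor-style) checker.
    false_rate is the false-detection rate as a multiple of the
    actual error rate."""
    return {
        "double sampling":      {"area": 1.00, "power": 1.00, "false_rate": 1.15},
        "triple sampling":      {"area": 1.74, "power": 2.00, "false_rate": 0.00},
        "integrate and sample": {"area": 0.58, "power": 1.16, "false_rate": 0.00},
    }

def total_flags(actual_errors, checker="double sampling"):
    """Total detections raised = actual errors + false detections."""
    rate = checker_tradeoffs()[checker]["false_rate"]
    return actual_errors * (1.0 + rate)
```

For double sampling, 100 actual errors would thus raise about 215 detections in total, which is the practical cost of its otherwise minimal area and power.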

Relevance: 20.00%

Abstract:

This paper addresses the problem of maximum margin classification given the moments of the class conditional densities and the target false positive and false negative error rates. Using Chebyshev inequalities, the problem can be posed as a second order cone programming problem. The dual of the formulation leads to a geometric optimization problem, that of computing the distance between two ellipsoids, which is solved by an iterative algorithm. The formulation is extended to non-linear classifiers using kernel methods. The resultant classifiers are applied to the classification of unbalanced datasets with asymmetric misclassification costs. Experimental results on benchmark datasets show the efficacy of the proposed method.
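A minimal sketch of the Chebyshev-based constraint behind such formulations (the function name and feasibility-check framing are illustrative, not the paper's notation): for any distribution with mean mu and covariance Sigma, a multivariate Chebyshev bound guarantees w·x ≥ b with probability at least eta whenever w·mu − b ≥ kappa(eta)·sqrt(wᵀΣw), with kappa(eta) = sqrt(eta/(1−eta)) — exactly a second-order cone constraint.

```python
import numpy as np

def soc_slack(w, b, mu, Sigma, eta):
    """Slack of the Chebyshev second-order cone constraint for the class
    whose points should satisfy w.x >= b with probability >= eta.
    Non-negative slack means the moment-based error-rate bound is met."""
    kappa = np.sqrt(eta / (1.0 - eta))
    return float(w @ mu - b - kappa * np.sqrt(w @ Sigma @ w))
```

For example, a class with mean (2, 0) and identity covariance, separated by the hyperplane x1 = 0, meets a 50% guarantee (kappa = 1) with slack 1.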

Relevance: 10.00%

Abstract:

We propose a robust method for mosaicing of document images using features derived from connected components. Each connected component is described using the Angular Radial Transform (ART). To ensure geometric consistency during feature matching, the ART coefficients of a connected component are augmented with those of its two nearest neighbors. The proposed method addresses two critical issues often encountered in correspondence matching: (i) the stability of features and (ii) robustness against false matches due to multiple instances of characters in a document image. The use of connected components guarantees stable localization across images. The augmented features ensure successful correspondence matching even in the presence of multiple similar regions within the page. We illustrate the effectiveness of the proposed method on camera-captured document images exhibiting large variations in viewpoint, illumination and scale.
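The neighbor augmentation can be sketched as follows; generic descriptor vectors stand in for the actual ART coefficients, and the two-nearest-neighbor choice follows the abstract.

```python
import numpy as np

def augment_descriptors(centroids, descriptors):
    """For each connected component, concatenate its descriptor with those
    of its two nearest neighbors (by centroid distance), so that a match
    requires the local neighborhood to agree as well, suppressing false
    matches between repeated characters."""
    centroids = np.asarray(centroids, dtype=float)
    descriptors = np.asarray(descriptors, dtype=float)
    augmented = []
    for i in range(len(centroids)):
        d = np.linalg.norm(centroids - centroids[i], axis=1)
        d[i] = np.inf                      # exclude the component itself
        nn = np.argsort(d)[:2]             # indices of the two nearest neighbors
        augmented.append(np.concatenate([descriptors[i],
                                         descriptors[nn[0]],
                                         descriptors[nn[1]]]))
    return np.stack(augmented)
```

Two identical characters in different words then get different augmented descriptors, because their neighborhoods differ.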

Relevance: 10.00%

Abstract:

A critical assessment of a published paper (by Agrawal) is presented. The procedure proposed and used by Agrawal to distinguish a false compensation effect from a true one is shown not to be correct.

Relevance: 10.00%

Abstract:

We consider the problem of tracking a maneuvering target in clutter. In such an environment, missed detections and false alarms make it impossible to decide, with certainty, the origin of received echoes. Processing radar returns in cluttered environments consists of three functions: 1) target detection and plot formation, 2) plot-to-track association, and 3) track updating. Two inadequacies of present approaches are that 1) optimization of detection characteristics has not been considered, and 2) the features that can be used in the plot-to-track correlation process are restricted to a specific class. This paper presents a new approach to overcome these limitations. The approach facilitates tracking of a maneuvering target in clutter and improves tracking performance for weak targets.

Relevance: 10.00%

Abstract:

The paper presents a new approach to improve the detection and tracking performance of a track-while-scan (TWS) radar. The contribution consists of three parts. In Part 1 the scope of various papers in this field is reviewed. In Part 2, a new approach for integrating the detection and tracking functions is presented. It shows how a priori information from the TWS computer can be used to improve detection. A new multitarget tracking algorithm has also been developed. It is specifically oriented towards solving the combinatorial problems in multitarget tracking. In Part 3, analytical derivations are presented for quantitatively assessing, a priori, the performance of a track-while-scan radar system (true track initiation, false track initiation, true track continuation and false track deletion characteristics). Simulation results are also shown.

Relevance: 10.00%

Abstract:

The paper presents, in three parts, a new approach to improve the detection and tracking performance of a track-while-scan radar. Part 1 presents a review of the current status of the subject. Part 2 details the new approach. It shows how a priori information provided by the tracker can be used to improve detection. It also presents a new multitarget tracking algorithm. In the present Part, analytical derivations are presented for assessing, a priori, the performance of the TWS radar system. True track initiation, false track initiation, true track continuation and false track deletion characteristics have been studied. It indicates how the various thresholds can be chosen by the designer to optimise performance. Simulation results are also presented.

Relevance: 10.00%

Abstract:

Typhoid fever is becoming an ever-increasing threat in developing countries. We have improved considerably upon the existing PCR-based diagnosis method by designing primers against a region that is unique to Salmonella enterica subsp. enterica serovar Typhi and Salmonella enterica subsp. enterica serovar Paratyphi A, corresponding to the STY0312 gene in S. Typhi and its homolog SPA2476 in S. Paratyphi A. An additional set of primers amplifies another region in S. Typhi CT18 and S. Typhi Ty2, corresponding to the region between genes STY0313 and STY0316, which is absent in S. Paratyphi A. The possibility of a false-negative result arising from mutation in hypervariable genes has been reduced by targeting a gene unique to typhoidal Salmonella serovars as the diagnostic marker. The amplified region has been tested for genomic stability by amplifying it from clinical isolates of patients from various geographical locations in India, showing that this region is potentially stable. This set of primers can also differentiate between S. Typhi CT18, S. Typhi Ty2, and S. Paratyphi A, which have stable deletions in this specific locus. The PCR assay designed in this study has a sensitivity of 95%, compared to only 63% for the Widal test. In certain cases, the PCR assay was even more sensitive than the blood culture test, as PCR-based detection can also detect dead bacteria.

Relevance: 10.00%

Abstract:

Various intrusion detection systems (IDSs) reported in the literature have shown distinct preferences for detecting a certain class of attack with improved accuracy, while performing moderately on the other classes. In view of the enormous computing power available in present-day processors, deploying multiple IDSs in the same network to obtain best-of-breed solutions has been attempted earlier. The paper presented here addresses the problem of optimizing the performance of IDSs using sensor fusion with multiple sensors. The trade-off between the detection rate and false alarms with multiple sensors is highlighted. It is illustrated that the performance of the detector is better when the fusion threshold is determined according to the Chebyshev inequality. In the proposed data-dependent decision (DD) fusion method, the performance optimization of the individual IDSs is first addressed. A neural network supervised learner has been designed to determine the weights of individual IDSs depending on their reliability in detecting a certain attack. The final stage of this DD fusion architecture is a sensor fusion unit which performs the weighted aggregation in order to make an appropriate decision. This paper theoretically models the fusion of IDSs for the purpose of demonstrating the improvement in performance, supplemented with an empirical evaluation.
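The weighted aggregation stage can be sketched as below; the weights would come from the neural-network learner described above, and are fixed here only for illustration.

```python
import numpy as np

def dd_fused_decision(scores, weights, threshold):
    """Weighted aggregation of per-IDS alert scores (each in [0, 1]).
    `weights` reflect each IDS's reliability for the attack class at hand;
    the fused detector fires when the weighted mean crosses `threshold`."""
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    fused = float(weights @ scores / weights.sum())
    return fused >= threshold, fused
```

A sensor known to be reliable for the current attack class thus dominates the decision, while an unreliable sensor's false alarms are attenuated.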

Relevance: 10.00%

Abstract:

The finite-difference form of the basic conservation equations in laminar film boiling has been solved by the false-transient method. By a judicious choice of the coordinate system, the vapour-liquid interface is fitted to the grid system. Central differencing is used for diffusion terms, upwind differencing for convection terms, and explicit differencing for transient terms. Since an explicit method is used, the time step in the false-transient method is constrained by numerical instability. In the present problem the limits on the time step are imposed by conditions in the vapour region, while the rate of convergence of the finite-difference equations depends on conditions in the liquid region. The rate of convergence was accelerated by using the over-relaxation technique in the liquid region. The results obtained compare well with previous work and experimental data available in the literature.
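A one-dimensional analogue of the scheme (steady conduction rather than film boiling, to keep the sketch short): march an explicit pseudo-transient update to steady state, with the time step capped by the explicit stability limit discussed above.

```python
import numpy as np

def false_transient_1d(n=21, tol=1e-8, max_iter=100000):
    """Steady 1-D conduction d2T/dx2 = 0 with T(0)=0, T(1)=1, solved by
    marching a pseudo-transient explicit scheme until the solution stops
    changing. dt is capped at the explicit stability limit dx^2/2,
    mirroring the time-step constraint of the false-transient method."""
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    dt = 0.5 * dx * dx                      # explicit stability limit
    T = np.zeros(n)
    T[-1] = 1.0                             # Dirichlet boundary conditions
    for _ in range(max_iter):
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + dt * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
        if np.max(np.abs(Tn - T)) < tol:    # pseudo-steady state reached
            return x, Tn
        T = Tn
    return x, T
```

The converged solution is the linear steady profile T(x) = x; in the paper's two-phase setting the same idea is applied region by region, with over-relaxation accelerating the slowly converging liquid side.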

Relevance: 10.00%

Abstract:

The significance of treating rainfall as a chaotic system instead of a stochastic system, for a better understanding of the underlying dynamics, has been taken up by various recent studies. However, an important limitation of all these approaches is the dependence on a single method for identifying the chaotic nature and the parameters involved. Many of these approaches aim only at analyzing the chaotic nature, not at prediction. In the present study, an attempt is made to identify chaos using various techniques, and prediction is also done by generating ensembles in order to quantify the uncertainty involved. Daily rainfall data of three regions with contrasting characteristics (mainly in the spatial area covered), Malaprabha, Mahanadi and All-India, for the period 1955-2000 are used for the study. Auto-correlation and mutual information methods are used to determine the delay time for the phase space reconstruction. The optimum embedding dimension is determined using the correlation dimension, the false nearest neighbour algorithm, and also nonlinear prediction methods. The low embedding dimensions obtained from these methods indicate the existence of low dimensional chaos in the three rainfall series. The correlation dimension method is applied to the phase-randomized and first-derivative versions of the data series to check whether the saturation of the dimension is due to the inherent linear correlation structure or to low dimensional dynamics. The positive Lyapunov exponents obtained prove the exponential divergence of the trajectories and hence the unpredictability. A surrogate data test is also done to further confirm the nonlinear structure of the rainfall series. A range of plausible parameters is used for generating an ensemble of predictions of rainfall for each year separately for the period 1996-2000, using the data up to the preceding year.
To analyze the sensitivity to initial conditions, predictions are made from two different months in a year, viz., from the beginning of January and of June. The reasonably good predictions obtained indicate the efficiency of the nonlinear prediction method for predicting the rainfall series. Also, the rank probability skill score and the rank histograms show that the ensembles generated are reliable, with good spread and skill. A comparison of results for the three regions indicates that although all are chaotic in nature, spatial averaging over a large area can increase the dimension and improve the predictability, thus destroying the chaotic nature.
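The phase-space reconstruction underlying these analyses is a time-delay embedding; a minimal version is below, where the delay `tau` would come from the autocorrelation/mutual-information methods and the dimension `dim` from the false-nearest-neighbour or correlation-dimension tests named in the abstract.

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Reconstruct phase-space vectors
    [x(t), x(t+tau), ..., x(t+(dim-1)*tau)] from a scalar series.
    Returns an (N, dim) array of state vectors."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])
```

Nonlinear prediction then amounts to finding near neighbours of the current state vector in this reconstructed space and following their trajectories forward.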

Relevance: 10.00%

Abstract:

The motivation behind the fusion of Intrusion Detection Systems was the realization that, with increasing traffic and increasingly complex attacks, none of the present-day stand-alone Intrusion Detection Systems can meet the demand for a very high detection rate together with an extremely low false positive rate. Multi-sensor fusion can meet these requirements by refining the combined response of different Intrusion Detection Systems. In this paper, we show a design technique for sensor fusion that best utilizes the useful responses from multiple sensors by an appropriate adjustment of the fusion threshold. The threshold is generally chosen according to past experience or by an expert system. In this paper, we show that choosing the threshold bounds according to the Chebyshev inequality performs better. This approach also helps to solve the problem of scalability and has the advantage of failsafe capability. This paper theoretically models the fusion of Intrusion Detection Systems for the purpose of proving the improvement in performance, supplemented with an empirical evaluation. The combination of complementary sensors is shown to detect more attacks than the individual components. Since the individual sensors chosen detect sufficiently different attacks, their results can be merged for improved performance. The combination is done in different ways: (i) taking all the alarms from each system and avoiding duplications, (ii) taking alarms from each system by fixing threshold bounds, and (iii) rule-based fusion with a priori knowledge of the individual sensor performance. A number of evaluation metrics are used, and the results indicate an overall enhancement in the performance of the combined detector using sensor fusion incorporating the threshold bounds, and significantly better performance using simple rule-based fusion.
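The threshold-bound idea can be sketched with the one-sided (Cantelli) form of the Chebyshev inequality; the benign-score mean/variance and the target false-alarm rate below are illustrative inputs, not values from the paper.

```python
import math

def chebyshev_upper_threshold(mu, sigma, alpha):
    """Smallest threshold t* such that, for ANY distribution of benign
    fused scores with mean `mu` and std `sigma`, the false-alarm rate is
    at most `alpha`. Cantelli's inequality gives
    P(X - mu >= t) <= sigma^2 / (sigma^2 + t^2),
    which falls to alpha once t = sigma * sqrt((1 - alpha) / alpha)."""
    if not 0.0 < alpha < 1.0:
        raise ValueError("alpha must be in (0, 1)")
    return mu + sigma * math.sqrt((1.0 - alpha) / alpha)
```

Because the bound is distribution-free, it holds however the benign traffic is actually distributed, at the cost of being conservative compared to a threshold tuned to a known distribution.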

Relevance: 10.00%

Abstract:

Spike detection in neural recordings is the initial step in the creation of brain-machine interfaces. The Teager energy operator (TEO) treats a spike as an increase in the `local' energy and detects this increase. The performance of the TEO in detecting action potential spikes suffers from its sensitivity to spike frequency in the presence of the noise found in microelectrode array (MEA) recordings. The multiresolution TEO (mTEO) overcomes this shortcoming by tuning the parameter k to an optimal value m so as to match the frequency of the spike. In this paper, we present an algorithm for the mTEO using the multiresolution structure of wavelets, along with built-in lowpass filtering of the subband signals. The algorithm is efficient and can be implemented for real-time processing of neural signals for spike detection. Its performance is tested on a simulated neural signal with 10 spike templates obtained from [14]. The background noise is modeled as a colored Gaussian random process. Using the noise standard deviation and autocorrelation functions obtained from recorded data, the background noise was simulated by an autoregressive (AR(5)) filter. The simulations show a spike detection accuracy of 90% and above, with less than 5% false positives at an SNR of 2.35 dB, compared to the 80% accuracy and 10% false positives reported in [6] on simulated neural signals.
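The core operator is compact; the multiresolution variant simply widens the lag (the subband lowpass filtering the paper adds around it is omitted from this sketch):

```python
import numpy as np

def mteo(x, k=1):
    """Multiresolution Teager energy operator:
    psi_k[x](n) = x(n)^2 - x(n-k) * x(n+k); k = 1 is the classic TEO.
    Larger k tunes the operator towards lower spike frequencies.
    Boundary samples, where the lag window falls outside the signal,
    are left at zero."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[k:-k] = x[k:-k] ** 2 - x[:-2 * k] * x[2 * k:]
    return psi
```

For a pure tone x(n) = A·sin(Ωn), psi_1 is the constant A²·sin²(Ω), growing with both amplitude and frequency — the `local' energy increase the abstract refers to; spikes are then flagged by thresholding psi.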

Relevance: 10.00%

Abstract:

The move towards IT outsourcing is the first step towards an environment where compute infrastructure is treated as a service. In utility computing, this IT service has to honor Service Level Agreements (SLAs) in order to meet the desired Quality of Service (QoS) guarantees. Such an environment requires reliable services in order to maximize the utilization of resources and to decrease the Total Cost of Ownership (TCO). This reliability cannot come at the cost of resource duplication, since duplication increases the TCO of the data center and hence the cost per compute unit. In this paper, we look into projecting the impact of hardware failures on SLAs and into the techniques required to take proactive recovery steps when a failure is predicted. By maintaining health vectors of all hardware and system resources, we predict the failure probability of resources at runtime, based on observed hardware error/failure events. This in turn drives an availability-aware middleware to take proactive action, even before the application is affected, in case the system and the application have low recoverability. The proposed framework has been prototyped on a system running HP-UX. Our offline analysis of the prediction system on hardware error logs indicates no more than 10% false positives. To the best of our knowledge, this work is the first of its kind to perform an end-to-end analysis of the impact of a hardware fault on application SLAs in a live system.
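A toy version of the decision logic is sketched below; the functional form of the predictor and the thresholds are assumptions for illustration only, not the paper's actual model.

```python
import math

def failure_probability(error_counts, weights):
    """Toy mapping from a resource's health vector (per-type error-event
    counts) to a failure probability, p = 1 - exp(-sum(w_i * c_i)):
    zero observed events give p = 0, and p rises monotonically towards 1
    as weighted error counts accumulate."""
    s = sum(w * c for w, c in zip(weights, error_counts))
    return 1.0 - math.exp(-s)

def take_proactive_action(p_fail, recoverability, p_max=0.2):
    """Trigger proactive recovery (e.g. migration) before the application
    is affected, when failure is likely AND the system/application pair
    has low recoverability. Both thresholds are illustrative."""
    return p_fail > p_max and recoverability < 0.5
```

The point of the second predicate is the paper's observation that proactive action is only worth taking when the application cannot recover cheaply on its own.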