133 results for decision error


Relevance: 20.00%

Publisher:

Abstract:

This study describes the design and implementation of a decision support system (DSS) for the assessment of Mini, Micro and Small Schemes. The design links a set of modelling, manipulation, spatial-analysis and display tools to a structured database that can store both observed and simulated data. The main hypothesis is that this tool can form the core of a practical methodology that produces more resilient decisions in less time, and that it can be used by decision-making bodies to assess the impacts of various scenarios (e.g., changes in land-use pattern) and to review the costs and benefits of decisions to be made. It also offers a means of entering, accessing and interpreting information for the purpose of sound decision making. The overall objective of this DSS is thus the development of a set of tools aimed at transforming data into information and aiding decisions at different scales.

Relevance: 20.00%

Publisher:

Abstract:

Electricity appears to be the energy carrier of choice for modern economies, since growth in electricity demand has outpaced growth in the demand for fuels. To make accurate and efficient decisions in electricity distribution, a decision maker (DM) requires sector-wise and location-wise electricity consumption information in order to predict the requirement for electricity. To this end, an interactive computer-based Decision Support System (DSS) has been developed to compile, analyse and present the data at disaggregated levels for regional energy planning. This provides the precise information needed to make timely decisions in transmission and distribution planning, leading to increased efficiency and productivity. This paper discusses the design and implementation of a DSS that facilitates analysis of electricity consumption at various hierarchical levels (division, taluk, subdivision, feeder) for selected periods. The DSS is validated with data from the transmission and distribution systems of Kolar district in Karnataka State, India.
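
As a rough illustration of the disaggregated analysis described above, here is a minimal Python sketch that aggregates consumption records at a chosen level of the division/taluk/subdivision/feeder hierarchy. The record layout and values are hypothetical, not the actual schema of this DSS.

```python
from collections import defaultdict

# Hypothetical consumption records: (division, taluk, subdivision, feeder, kWh).
# The field layout and values are illustrative, not the system's real schema.
records = [
    ("Kolar", "Malur", "SD-1", "F-101", 1250.0),
    ("Kolar", "Malur", "SD-1", "F-102", 980.5),
    ("Kolar", "Bangarpet", "SD-2", "F-201", 2110.0),
]

def aggregate(records, level):
    """Sum consumption at a chosen hierarchical level.
    level: 1=division, 2=taluk, 3=subdivision, 4=feeder."""
    totals = defaultdict(float)
    for rec in records:
        key = rec[:level]          # prefix of the hierarchy identifies the unit
        totals[key] += rec[-1]     # accumulate kWh
    return dict(totals)

print(aggregate(records, 2))       # taluk-level totals
```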

Relevance: 20.00%

Publisher:

Abstract:

A class of model reference adaptive control systems that makes use of an augmented error signal was introduced by Monopoli. Convergence problems in this attractive class of systems are investigated in this paper using concepts from hyperstability theory. It is shown that the condition on the linear part of the system has to be stronger than the one given earlier. A boundedness condition on the input to the linear part of the system is taken into account in the analysis; this condition appears to have been missed in previous applications of hyperstability theory. Sufficient conditions for the convergence of the adaptive gain to the desired value are also given.

Relevance: 20.00%

Publisher:

Abstract:

Upper bounds on the probability of error due to co-channel interference are proposed in this correspondence. The bounds are easy to compute and can be fairly tight.

Relevance: 20.00%

Publisher:

Abstract:

The design and operation of the minimum-cost classifier, where the total cost is the sum of the measurement cost and the classification cost, are computationally complex. In view of the difficulties associated with this approach, this paper proposes designing the decision tree directly from a set of labelled samples. The feature space is first partitioned to transform the problem into one of discrete features. The resulting problem is solved by a dynamic programming algorithm over an explicitly ordered state space of all outcomes of all feature subsets. The solution procedure is very general and is applicable to any minimum-cost pattern classification problem in which each feature has a finite number of outcomes. These techniques are applied to (i) voiced/unvoiced/silence classification of speech and (ii) spoken vowel recognition. The resulting decision trees are operationally very efficient and yield attractive classification accuracies.
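
To make the dynamic-programming idea concrete, here is a minimal Python sketch that computes the minimum expected cost (measurement cost plus misclassification cost) by recursing over outcomes of discretized features. The toy samples, costs and state encoding are illustrative; the paper's explicit state-space ordering and its speech/vowel applications are not reproduced.

```python
from functools import lru_cache

# Toy minimum-cost tree design over discretized features; all data and costs
# below are invented for illustration.
SAMPLES = [({"f1": 0, "f2": 1}, "A"), ({"f1": 1, "f2": 1}, "B"),
           ({"f1": 0, "f2": 0}, "A"), ({"f1": 1, "f2": 0}, "A")]
MEAS_COST = {"f1": 1.0, "f2": 2.0}      # cost of measuring each feature
OUTCOMES = {"f1": (0, 1), "f2": (0, 1)}
MISCLASS_COST = 5.0                      # cost of one classification error

def consistent(state):
    """Samples whose measured features agree with the (feature, outcome) pairs."""
    return [s for s in SAMPLES if all(s[0][f] == v for f, v in state)]

@lru_cache(maxsize=None)
def min_cost(state):
    """Minimum expected cost per sample, given the measurements made so far."""
    subset = consistent(state)
    if not subset:
        return 0.0
    # Option 1: stop now and classify with the majority label.
    labels = [lab for _, lab in subset]
    majority = max(set(labels), key=labels.count)
    best = MISCLASS_COST * sum(lab != majority for lab in labels) / len(subset)
    # Option 2: measure one more feature, then recurse on each outcome.
    measured = {f for f, _ in state}
    for f in MEAS_COST:
        if f in measured:
            continue
        cost = MEAS_COST[f]
        for v in OUTCOMES[f]:
            child = tuple(sorted(state + ((f, v),)))
            cost += (len(consistent(child)) / len(subset)) * min_cost(child)
        best = min(best, cost)
    return best

print(min_cost(()))   # expected cost of the optimal tree for the toy data
```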

Relevance: 20.00%

Publisher:

Abstract:

Process control rules may be specified using decision tables. Such a specification is superior when the logical decisions to be taken dominate the control task. In this paper we give a method for detecting redundancies, incompleteness and contradictions in such specifications. Using this technique thus ensures the validity of the specifications.
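
A minimal Python sketch of this kind of check, assuming a toy decision-table encoding (condition tuples with None as "don't care"): it enumerates condition combinations and flags incompleteness (no matching rule), contradictions (matching rules with different actions) and redundancies (several rules giving the same action). The encoding and rules are illustrative, not the paper's method in detail.

```python
from itertools import product

# A toy decision table: each rule is (condition tuple, action), where None
# means "don't care". The rules and encoding are illustrative only.
RULES = [
    ((True, None),  "open_valve"),
    ((True, False), "open_valve"),    # redundant: already covered by rule 0
    ((False, True), "close_valve"),
    ((False, True), "sound_alarm"),   # contradicts rule 2
]

def matches(cond, case):
    return all(c is None or c == x for c, x in zip(cond, case))

def check(rules, n_conditions):
    """Enumerate all condition combinations and classify the problems found."""
    unmatched, contradictions, redundancies = [], [], []
    for case in product((True, False), repeat=n_conditions):
        hits = [i for i, (cond, _) in enumerate(rules) if matches(cond, case)]
        actions = {rules[i][1] for i in hits}
        if not hits:
            unmatched.append(case)               # incompleteness
        elif len(actions) > 1:
            contradictions.append((case, hits))  # conflicting actions
        elif len(hits) > 1:
            redundancies.append((case, hits))    # same action from several rules
    return unmatched, contradictions, redundancies

print(check(RULES, 2))
```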

Relevance: 20.00%

Publisher:

Abstract:

The questions that one should answer in engineering computations (deterministic, probabilistic/randomized, as well as heuristic) are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used in obtaining the outputs. Absolutely error-free quantities, as well as the completely errorless computations performed in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including their input real quantities, are exact, the computations that we carry out on a digital computer or in embedded form are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here, by error we imply relative error bounds. The fact that the exact error is never known under any circumstances and in any context implies that the term error means nothing but error bounds. Further, in engineering computations it is the relative error, or equivalently the relative error bounds (and not the absolute error), that is supremely important in providing information about the quality of the results/outputs.

Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus, if we discover any inconsistency or near-inconsistency in a mathematical model, it is certainly due to any or all of these three factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems and obtain results that can be useful in real-world situations.

The talk considers several deterministic, probabilistic and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It characterizes the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (where possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the use of interval arithmetic. Further, the interdependence among the error, the confidence and the cost is discussed.
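
As a small concrete illustration of relative error bounds (the example is ours, not the talk's): if several measured factors each carry a relative error bound, the bound on their product compounds the factor bounds, which for small bounds is close to their sum.

```python
# A minimal illustration of relative error bounds, using the 0.005 per cent
# measurement-error floor mentioned above; the example is illustrative.
REL_BOUND = 5e-5  # 0.005 per cent, expressed as a fraction

def product_rel_bound(*rel_bounds):
    """Relative error bound for a product of factors with bounds r1, r2, ...:
    (1 + r1)(1 + r2)... - 1, close to r1 + r2 + ... when the ri are small."""
    out = 1.0
    for r in rel_bounds:
        out *= (1.0 + r)
    return out - 1.0

# Multiplying three quantities, each measured to 0.005 per cent:
print(product_rel_bound(REL_BOUND, REL_BOUND, REL_BOUND))  # ~1.5e-4, i.e. 0.015%
```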

Relevance: 20.00%

Publisher:

Abstract:

In this paper, we present a new algorithm for learning oblique decision trees. Most current decision tree algorithms rely on impurity measures to assess the goodness of hyperplanes at each node while learning the tree in a top-down fashion. These impurity measures do not properly capture the geometric structure in the data. Motivated by this, our algorithm assesses hyperplanes in a way that takes the geometric structure of the data into account. At each node of the decision tree, we find the clustering hyperplanes for both classes and use their angle bisectors as the split rule at that node. We show through empirical studies that this idea leads to small decision trees and better performance. We also present some analysis showing that the angle bisectors of the clustering hyperplanes used as split rules at each node are solutions of an interesting optimization problem, and hence argue that this is a principled method of learning a decision tree.
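
A small numpy sketch of the split rule described above, assuming the clustering hyperplane for each class has already been found (the fitting step that produces them is the paper's and is not shown): normalizing the two hyperplanes and adding or subtracting them gives the two angle bisectors, one of which serves as the split.

```python
import numpy as np

def angle_bisectors(w1, b1, w2, b2):
    """Bisecting hyperplanes of w1.x + b1 = 0 and w2.x + b2 = 0.
    A point equidistant from both satisfies |w1.x+b1|/||w1|| = |w2.x+b2|/||w2||,
    which yields the two bisectors below; one of them is used as the split."""
    n1, n2 = np.linalg.norm(w1), np.linalg.norm(w2)
    plus  = (w1 / n1 + w2 / n2, b1 / n1 + b2 / n2)
    minus = (w1 / n1 - w2 / n2, b1 / n1 - b2 / n2)
    return plus, minus

# Illustrative clustering hyperplanes for the two classes:
plus, minus = angle_bisectors(np.array([1.0, 0.0]), -1.0,
                              np.array([0.0, 1.0]), -1.0)
print("bisector 1:", plus)
print("bisector 2:", minus)
```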

Relevance: 20.00%

Publisher:

Abstract:

We develop an online actor-critic reinforcement learning algorithm with function approximation for a problem of control under inequality constraints. We consider the long-run average-cost Markov decision process (MDP) framework, in which both the objective and the constraint functions are suitable policy-dependent long-run averages of certain sample-path functions. The Lagrange multiplier method is used to handle the inequality constraints. We prove the asymptotic almost-sure convergence of our algorithm to a locally optimal solution. We also provide results of numerical experiments on a problem of routing in a multi-stage queueing network with constraints on long-run average queue lengths. We observe that our algorithm exhibits good performance in this setting and converges to a feasible point.
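
A schematic Python sketch of the Lagrange-multiplier handling described above (the actor-critic machinery itself is abstracted away, and all names, values and step sizes are illustrative): the constrained average-cost problem is relaxed to a single cost c + λ(g - C), and λ is updated by projected ascent on a slower timescale.

```python
import numpy as np

# Schematic two-timescale handling of one inequality constraint g <= C; the
# actor-critic update itself is abstracted away and all values are illustrative.
lam = 0.0          # Lagrange multiplier
C = 10.0           # bound on the long-run average constraint (e.g., queue length)
eta_lam = 0.001    # multiplier step size, slower than the actor/critic steps

def relaxed_cost(cost_sample, constraint_sample, lam, C):
    """Per-sample Lagrangian cost fed to the actor-critic."""
    return cost_sample + lam * (constraint_sample - C)

rng = np.random.default_rng(0)
for _ in range(10000):
    # ... actor-critic update using relaxed_cost(...) would go here ...
    g_hat = rng.uniform(0.0, 20.0)                 # stand-in constraint sample
    lam = max(0.0, lam + eta_lam * (g_hat - C))    # projected dual ascent
print("final multiplier:", lam)
```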

Relevance: 20.00%

Publisher:

Abstract:

This paper proposes a current-error space-vector-based hysteresis controller with online computation of the boundary for two-level inverter-fed induction motor (IM) drives. The proposed hysteresis controller has all the advantages of conventional current-error space-vector-based hysteresis controllers, such as quick transient response, simplicity and adjacent-voltage-vector switching. A major advantage of a voltage-source-inverter-fed drive based on the proposed controller is that the phase voltage frequency spectrum produced is essentially identical to that of a constant-switching-frequency space-vector pulsewidth-modulated (SVPWM) inverter. In the proposed hysteresis controller, the stator voltages along the alpha- and beta-axes are estimated during zero and active voltage vector periods using the current errors along the alpha- and beta-axes and a steady-state model of the IM. Online computation of the hysteresis boundary is carried out using these estimated stator voltages. The proposed scheme is simple and capable of taking the inverter up to six-step-mode operation if demanded by the drive system. The proposed hysteresis-controller-based inverter-fed drive scheme is experimentally verified, and its steady-state and transient performance is extensively tested. The experimental results show a constant frequency spectrum for the phase voltage, similar to that of a constant-switching-frequency SVPWM inverter-fed drive.
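
A highly simplified Python sketch of the hysteresis principle involved, with a circular boundary standing in for the paper's online-computed one (which relies on the estimated stator voltages and is not reproduced here); the sector logic is illustrative only.

```python
import numpy as np

# Switch only when the current-error space vector leaves the boundary; here a
# circle stands in for the online-computed boundary of the paper.
def needs_switch(i_ref_ab, i_meas_ab, boundary):
    err = np.asarray(i_ref_ab) - np.asarray(i_meas_ab)  # error in alpha-beta plane
    return np.hypot(err[0], err[1]) > boundary, err

hit, err = needs_switch((10.0, 0.0), (9.2, 0.6), boundary=0.5)
if hit:
    # Pick an adjacent voltage vector from the error direction (illustrative).
    sector = int(np.degrees(np.arctan2(err[1], err[0])) // 60) % 6
    print("error left the boundary; switch using sector", sector)
```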

Relevance: 20.00%

Publisher:

Abstract:

Ensuring reliable operation over an extended period of time is one of the biggest challenges facing present-day electronic systems. The increased vulnerability of components to atmospheric particle strikes poses a serious threat to attaining the reliability required for various mission-critical applications. Various soft error mitigation methodologies exist to address this reliability challenge. A general solution is to arrive at a soft error mitigation methodology with an acceptable implementation overhead and error tolerance level. This implementation overhead can then be reduced by taking advantage of various derating effects, such as logical derating, electrical derating and timing-window derating, and/or by making use of application redundancy, e.g., redundancy in the firmware/software executing on the robust hardware so designed. In this paper, we analyze the impact of various derating factors and show how they can be profitably employed to reduce the hardware overhead needed to implement a given level of soft error robustness. This analysis is performed on a set of benchmark circuits using the delayed-capture methodology. Experimental results show up to a 23% reduction in the hardware overhead when individual and combined derating factors are considered.
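
Illustrative arithmetic for how derating factors compound (the factor values are invented, not the paper's benchmark numbers): each derating term multiplies down the fraction of raw particle-induced faults that become observable errors, which is what leaves room to cut the hardening overhead.

```python
# Illustrative arithmetic only: derating factors multiply down the fraction of
# raw particle-induced faults that become observable errors. All factor values
# below are invented, not taken from the paper's benchmarks.
raw_fault_rate = 1e-6        # faults per cycle reaching the logic (assumed)
logical_derating = 0.4       # fraction not logically masked
electrical_derating = 0.6    # fraction not electrically attenuated
timing_derating = 0.3        # fraction latched within the vulnerable window

effective_ser = (raw_fault_rate * logical_derating
                 * electrical_derating * timing_derating)
print(f"effective soft error rate: {effective_ser:.2e} errors/cycle")
# Hardening only needs to close the gap between effective_ser and the target
# rate, which is how accounting for derating lowers the hardware overhead.
```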

Relevance: 20.00%

Publisher:

Abstract:

The use of mutagenic drugs to drive HIV-1 past its error threshold presents a novel intervention strategy, suggested by quasispecies theory, that may be less susceptible to failure via viral mutation-induced emergence of drug resistance than current strategies. The error threshold of HIV-1, μ_c, however, is not known. Applying quasispecies theory to determine μ_c poses significant challenges: whereas quasispecies theory considers the asexual reproduction of an infinitely large population of haploid individuals, HIV-1 is diploid, undergoes recombination, and is estimated to have a small effective population size in vivo. We performed population genetics-based stochastic simulations of the within-host evolution of HIV-1 and estimated the structure of the HIV-1 quasispecies and μ_c. We found that with small mutation rates, the quasispecies was dominated by genomes with few mutations. Upon increasing the mutation rate, a sharp error catastrophe occurred where the quasispecies became delocalized in sequence space. Using parameter values that quantitatively captured data on viral diversification in HIV-1 patients, we estimated μ_c to be 7 × 10^-5 to 1 × 10^-4 substitutions/site/replication, approximately 2- to 6-fold higher than the natural mutation rate of HIV-1, suggesting that HIV-1 survives close to its error threshold and may be readily susceptible to mutagenic drugs. The latter estimate was weakly dependent on the within-host effective population size of HIV-1. With large population sizes and in the absence of recombination, our simulations converged to quasispecies theory, bridging the gap between quasispecies theory and population genetics-based approaches to describing HIV-1 evolution. Further, μ_c increased with the recombination rate, rendering HIV-1 less susceptible to error catastrophe and thus elucidating an added benefit of recombination to HIV-1. Our estimate of μ_c may serve as a quantitative guideline for the use of mutagenic drugs against HIV-1.
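
A bare-bones Python sketch in the same spirit, assuming a haploid Wright-Fisher model with multiplicative fitness; it is a stand-in for, not a reproduction of, the paper's HIV-1 simulations, which add diploidy, recombination and patient-calibrated parameters. Raising the mutation rate drives up the equilibrium mutation load, the qualitative behaviour behind the error catastrophe.

```python
import numpy as np

# Haploid Wright-Fisher sketch of mutation load versus mutation rate; all
# parameter values are illustrative only.
rng = np.random.default_rng(0)
L, N, GENS, s = 100, 1000, 200, 0.05   # genome length, population, generations, cost/mutation

def mean_load(mu):
    k = np.zeros(N, dtype=int)                    # mutations carried by each individual
    for _ in range(GENS):
        w = (1.0 - s) ** k                        # multiplicative fitness
        parents = rng.choice(N, size=N, p=w / w.sum())
        k = np.minimum(k[parents] + rng.poisson(mu * L, N), L)
    return k.mean()

for mu in (1e-5, 1e-4, 1e-3, 1e-2):               # per-site mutation rates
    print(f"mu = {mu:.0e}  mean mutation load = {mean_load(mu):.1f}")
```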

Relevance: 20.00%

Publisher:

Abstract:

Structural Support Vector Machines (SSVMs) have become a popular tool in machine learning for predicting structured objects such as parse trees, part-of-speech (POS) label sequences and image segments. Various efficient algorithmic techniques have been proposed for training SSVMs on large datasets. The typical SSVM formulation contains a regularizer term and a composite loss term. The loss term is usually composed of the Linear Maximum Error (LME) associated with the training examples; other alternatives for the loss term are yet to be explored for SSVMs. We formulate a new SSVM with a Linear Summed Error (LSE) loss term and propose efficient algorithms to train the new formulation using a primal cutting-plane method and a sequential dual coordinate descent method. Numerical experiments on benchmark datasets demonstrate that the sequential dual coordinate descent method is faster than the cutting-plane method and reaches steady-state generalization performance sooner. It is thus a useful alternative for training SSVMs when the linear summed error is used.
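
In standard SSVM notation (joint feature map φ, label loss Δ), the two loss choices can be sketched as follows; this is a hedged reconstruction, and the paper's exact formulation may differ in details such as margin scaling.

```latex
% Hedged reconstruction in standard SSVM notation ([.]_+ denotes the hinge):
% LME keeps only the worst violation per example, while LSE sums the
% violations over all competing outputs.
\ell_{\mathrm{LME}}(w; x_i, y_i) = \max_{y \neq y_i}
  \bigl[\Delta(y_i, y) + w^\top \phi(x_i, y) - w^\top \phi(x_i, y_i)\bigr]_+ ,
\qquad
\ell_{\mathrm{LSE}}(w; x_i, y_i) = \sum_{y \neq y_i}
  \bigl[\Delta(y_i, y) + w^\top \phi(x_i, y) - w^\top \phi(x_i, y_i)\bigr]_+ .
```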

Relevance: 20.00%

Publisher:

Abstract:

Objects viewed through transparent sheets with residual non-parallelism and irregularity appear shifted and distorted. This distortion is measured in terms of the angular and binocular deviation of an object viewed through the transparent sheet. The angular and binocular deviations introduced are particularly important in the context of aircraft windscreens and canopies, as they can interfere with the decision making of pilots, especially while landing, leading to accidents. In this work, we have developed an instrument to measure both the angular and binocular deviations introduced by transparent sheets. This instrument is especially useful in the qualification of aircraft windscreens and canopies. It measures the deviation in the geometrical shadow cast by a periodic dot pattern trans-illuminated by the distorted light beam from the transparent test specimen, compared with a reference pattern. Accurate quantification of the shift in the pattern is obtained by cross-correlating the reference shadow pattern with the specimen shadow pattern and measuring the location of the correlation peak. The developed instrument is easy to use and computes both angular and binocular deviation with an accuracy better than ±0.1 mrad (≈0.036 mrad) and excellent repeatability, with an error of less than 2%. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4769756]
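
A minimal numpy/scipy sketch of the correlation-peak measurement, using a synthetic dot pattern and an integer-pixel shift; a real instrument would need calibrated optics and sub-pixel peak interpolation (not shown) to reach the quoted accuracy.

```python
import numpy as np
from scipy.signal import correlate2d

# Synthetic demonstration of shift recovery via the correlation peak.
ref = np.zeros((64, 64))
ref[8::16, 8::16] = 1.0                          # periodic dot pattern
spec = np.roll(ref, shift=(2, 3), axis=(0, 1))   # specimen pattern, shifted

corr = correlate2d(spec, ref, mode="same")
peak = np.unravel_index(np.argmax(corr), corr.shape)
center = (corr.shape[0] // 2, corr.shape[1] // 2)
shift = (peak[0] - center[0], peak[1] - center[1])
print("measured shift (pixels):", shift)         # -> (2, 3)
```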