55 results for Decision Taking
at Indian Institute of Science - Bangalore - India
Abstract:
Management of large projects, especially those involving a major R&D component and requiring knowledge from diverse specialised and sophisticated fields, may be classified as a semi-structured problem. In such problems there is some knowledge about the nature of the work involved, but there are also uncertainties associated with emerging technologies. In order to draw up a plan and schedule of activities for such a large and complex project, the project manager faces a host of complex decisions, such as when to start an activity and how long it is likely to continue. An Intelligent Decision Support System (IDSS) that aids the manager in decision making and in drawing up a feasible schedule of activities, while taking into consideration the constraints of resources and time, will have a considerable impact on the efficient management of the project. This report discusses the design of an IDSS that supports the project from the planning phase through the scheduling phase. The IDSS uses a new project scheduling tool, the Project Influence Graph (PIG).
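The abstract does not detail the Project Influence Graph, but the scheduling problem it targets can be illustrated. Below is a minimal sketch, assuming simple finish-to-start precedence constraints (a CPM-style earliest-start pass); it is not the PIG itself, and all activity names and durations are hypothetical.

```python
from collections import defaultdict, deque

def earliest_start_schedule(durations, precedences):
    """Compute earliest start times for activities linked by
    finish-to-start precedence constraints (a simple CPM-style pass).

    durations:   {activity: duration}
    precedences: list of (before, after) pairs
    """
    succ = defaultdict(list)
    indeg = {a: 0 for a in durations}
    for before, after in precedences:
        succ[before].append(after)
        indeg[after] += 1

    start = {a: 0 for a in durations}
    ready = deque(a for a, d in indeg.items() if d == 0)
    while ready:
        a = ready.popleft()
        for b in succ[a]:
            # b cannot start before a finishes
            start[b] = max(start[b], start[a] + durations[a])
            indeg[b] -= 1
            if indeg[b] == 0:
                ready.append(b)
    return start

# Example: activity A precedes both B and C.
print(earliest_start_schedule({"A": 3, "B": 2, "C": 4},
                              [("A", "B"), ("A", "C")]))
```

A real IDSS would layer resource constraints and uncertainty handling on top of such a pass; this shows only the precedence skeleton.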
Abstract:
Due to the inherent feedback in a decision feedback equalizer (DFE), the minimum mean square error (MMSE) or Wiener solution is not known exactly. The main difficulty in such analysis is the propagation of decision errors, which occur because of the feedback. Thus, in the literature, these errors are neglected while designing and/or analyzing DFEs. A closed-form expression is then obtained for the Wiener solution, which we refer to as the ideal DFE (IDFE). DFEs have also been designed using an iterative and computationally efficient alternative, the least mean square (LMS) algorithm. However, again due to the feedback involved, no analysis of an LMS-DFE has been available so far. In this paper we theoretically analyze a DFE taking the decision errors into account. We study its performance at steady state. We then study an LMS-DFE and show the proximity of the LMS-DFE attractors to the optimal DFE Wiener filter (obtained after considering the decision errors) at high signal-to-noise ratios (SNR). Further, via simulations we demonstrate that, even at moderate SNRs, an LMS-DFE is close to the MSE-optimal DFE. Finally, we compare the LMS-DFE attractors with the IDFE via simulations. We show that an LMS equalizer outperforms the IDFE. In fact, the performance improvement is very significant (up to 33%) even at high SNRs, where an IDFE is believed to be closer to the optimal one. Towards the end, we briefly discuss the tracking properties of the LMS-DFE.
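As a concrete illustration of the structure being analyzed, here is a minimal NumPy sketch of an LMS-adapted DFE for BPSK in which past slicer decisions (not true symbols) are fed back, so decision errors can propagate as discussed above. The channel, tap counts, and step size are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def lms_dfe(x, d, n_ff=5, n_fb=3, mu=0.01):
    """LMS-adapted decision feedback equalizer for BPSK symbols.

    x : received samples, d : transmitted +/-1 symbols (training),
    n_ff/n_fb : feedforward/feedback tap counts, mu : step size.
    Past *decisions* are fed back, so decision errors propagate.
    """
    w = np.zeros(n_ff)           # feedforward taps
    b = np.zeros(n_fb)           # feedback taps
    past = np.zeros(n_fb)        # previous slicer decisions
    errs = []
    for n in range(n_ff, len(x)):
        u = x[n - n_ff + 1:n + 1][::-1]      # feedforward regressor
        y = w @ u - b @ past                 # equalizer output
        dec = np.sign(y) if y != 0 else 1.0  # slicer decision
        e = d[n] - y                         # training error
        w += mu * e * u                      # LMS tap updates
        b -= mu * e * past
        past = np.roll(past, 1)
        past[0] = dec
        errs.append(e**2)
    return w, b, np.mean(errs)

# Toy ISI channel plus noise.
d = rng.choice([-1.0, 1.0], size=5000)
x = np.convolve(d, [1.0, 0.5, 0.2])[:len(d)] + 0.1 * rng.standard_normal(len(d))
print("average squared error:", lms_dfe(x, d)[2])
```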
Abstract:
To detect errors in decision tables one needs to decide whether a given set of constraints is feasible. This paper describes an algorithm to do so when the constraints are linear in variables that take only integer values. Decision tables with such constraints occur frequently in business data processing and in nonnumeric applications. The aim of the algorithm is to exploit the abundance of very simple constraints that occur in typical decision table contexts. Essentially, the algorithm is a backtrack procedure in which the solution space is pruned using the set of simple constraints. After some simplifications, the simple constraints are captured in an acyclic directed graph with weighted edges. Further, only those partial vectors are considered for extension which can be extended to assignments that at least satisfy the simple constraints. This is how pruning of the solution space is achieved. For every partial assignment considered, the graph representation of the simple constraints provides a lower bound for each variable that is not yet assigned a value. These lower bounds play a vital role in the algorithm, and they are obtained efficiently by updating older lower bounds. The present algorithm also incorporates an idea by which it can be checked whether or not an (m - 2)-ary vector can be extended to a solution vector of m components, thereby reducing backtracking by one component.
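The paper's exact graph-based bound-updating machinery is not reproduced here, but the core idea, extending only those partial vectors that can still satisfy the simple constraints, can be sketched. The following toy feasibility checker assumes the simple constraints have the difference form x_j >= x_i + w; everything else about the instance is hypothetical.

```python
def feasible(n_vars, domains, simple, general):
    """Backtracking feasibility check for integer constraints.

    domains : list of (lo, hi) bounds per variable
    simple  : list of (i, j, w) meaning x[j] >= x[i] + w
    general : predicate over a complete assignment (remaining constraints)
    A partial vector is extended only if it can still satisfy every
    simple constraint, which prunes the search space.
    """
    def extend(partial):
        k = len(partial)
        if k == n_vars:
            return partial if general(partial) else None
        lo, hi = domains[k]
        # tighten the lower bound of x[k] from already-assigned variables
        for i, j, w in simple:
            if j == k and i < k:
                lo = max(lo, partial[i] + w)
        for v in range(lo, hi + 1):
            # discard v if it already violates a simple constraint
            if any(i == k and j < k and partial[j] < v + w
                   for i, j, w in simple):
                continue
            sol = extend(partial + [v])
            if sol is not None:
                return sol
        return None

    return extend([])

# x1 >= x0 + 2, x0 + x1 even, both variables in [0, 5]
print(feasible(2, [(0, 5), (0, 5)],
               [(0, 1, 2)], lambda x: (x[0] + x[1]) % 2 == 0))
```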
Abstract:
The literature on the subject of the present investigation is somewhat meagre. A rotary converter or synchronous motor not provided with any special starting devices forms, when started from the alternating current side, a type of induction motor whose stator is provided with a polyphase winding, and whose rotor has a single-phase (or single magnetic axis) winding.
Abstract:
The minimum cost classifier, when general cost functions are associated with the tasks of feature measurement and classification, is formulated as a decision graph which does not reject class labels at intermediate stages. Noting its complexity, a heuristic procedure to simplify this scheme to a binary decision tree is presented. The optimization of the binary tree in this context is carried out using dynamic programming. This technique is applied to voiced-unvoiced-silence classification in speech processing.
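As a rough illustration of trading measurement cost against classification cost, here is a toy dynamic program over subsets of measured features: at each state one either classifies now or pays to measure one more feature. It ignores outcome-dependent branching, so it is far simpler than the paper's decision-graph formulation, and all costs are invented.

```python
from functools import lru_cache

FEATURE_COST = {"f1": 1.0, "f2": 2.0, "f3": 0.5}
# assumed residual misclassification cost given the measured feature set
ERROR_COST = {
    frozenset(): 10.0,
    frozenset({"f1"}): 6.0, frozenset({"f2"}): 4.0, frozenset({"f3"}): 8.0,
    frozenset({"f1", "f2"}): 2.0, frozenset({"f1", "f3"}): 5.0,
    frozenset({"f2", "f3"}): 3.0,
    frozenset({"f1", "f2", "f3"}): 1.0,
}

@lru_cache(maxsize=None)
def best_cost(measured):
    """Minimum expected cost: either classify now (pay the residual
    misclassification cost) or pay to measure one more feature."""
    stop = ERROR_COST[measured]
    go = [FEATURE_COST[f] + best_cost(measured | {f})
          for f in FEATURE_COST if f not in measured]
    return min([stop] + go)

print(best_cost(frozenset()))   # optimal expected total cost
```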
Abstract:
The statistical minimum risk pattern recognition problem, when the classification costs are random variables of unknown statistics, is considered. Using medical diagnosis as a possible application, the problem of learning the optimal decision scheme is studied for a two-class, two-action case, as a first step. This reduces to the problem of learning the optimum threshold (for taking the appropriate action) on the a posteriori probability of one class. A recursive procedure for updating an estimate of the threshold is proposed. The estimation procedure does not require knowledge of the actual class labels of the sample patterns in the design set. The adaptive scheme of using the present threshold estimate to take action on the next sample is shown to converge, in probability, to the optimum. The results of a computer simulation study of three learning schemes demonstrate the theoretically predictable salient features of the adaptive scheme.
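The paper's label-free recursion is not reproduced here. As a simpler stand-in, the sketch below recursively estimates the mean misclassification costs and plugs them into the standard Bayes threshold t = m0/(m0 + m1) for a two-class, two-action problem with cost-free correct actions; the cost distributions, and the assumption that cost samples from both classes are observable, are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: misclassification costs are random with unknown
# means m0, m1; the Bayes-optimal threshold on the posterior
# p = P(class 1 | x) is then t* = m0 / (m0 + m1).
m0_true, m1_true = 3.0, 1.0
t_star = m0_true / (m0_true + m1_true)

t = 0.5                      # initial threshold estimate
m0_hat = m1_hat = 1.0        # running cost-mean estimates
for n in range(1, 20001):
    # observe one random cost realization from each class
    c0 = rng.exponential(m0_true)
    c1 = rng.exponential(m1_true)
    m0_hat += (c0 - m0_hat) / n      # recursive mean updates
    m1_hat += (c1 - m1_hat) / n
    t = m0_hat / (m0_hat + m1_hat)   # plug-in threshold estimate

print(f"estimated threshold {t:.3f}, optimum {t_star:.3f}")
```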
Abstract:
Optimal bang-coast maintenance policies for a machine subject to failure are considered. The approach utilizes a semi-Markov model for the system. A simplified model for modifying the probability of machine failure with maintenance is employed. A numerical example is presented to illustrate the procedure and results.
Abstract:
In this paper we present a novel algorithm for learning oblique decision trees. Most of the current decision tree algorithms rely on impurity measures to assess goodness of hyperplanes at each node. These impurity measures do not properly capture the geometric structures in the data. Motivated by this, our algorithm uses a strategy, based on some recent variants of SVM, to assess the hyperplanes in such a way that the geometric structure in the data is taken into account. We show through empirical studies that our method is effective.
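A minimal sketch of the general recipe, growing a tree in which each internal node splits on a learned hyperplane, is given below, using scikit-learn's LinearSVC as a stand-in for the SVM variants the paper actually uses; binary integer labels (0/1) and all hyperparameters are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

class ObliqueTree:
    """Toy oblique decision tree: each internal node splits on the
    hyperplane learned by a linear SVM, not on a single feature."""

    def __init__(self, max_depth=3, min_leaf=5):
        self.max_depth, self.min_leaf = max_depth, min_leaf

    def fit(self, X, y, depth=0):
        if depth == self.max_depth or len(np.unique(y)) == 1 or len(y) < self.min_leaf:
            self.label = np.bincount(y).argmax()   # leaf: majority class
            self.svm = None
            return self
        self.svm = LinearSVC(dual=False).fit(X, y)
        side = self.svm.decision_function(X) > 0   # hyperplane split
        if side.all() or (~side).all():            # degenerate split -> leaf
            self.label, self.svm = np.bincount(y).argmax(), None
            return self
        self.left = ObliqueTree(self.max_depth, self.min_leaf).fit(X[~side], y[~side], depth + 1)
        self.right = ObliqueTree(self.max_depth, self.min_leaf).fit(X[side], y[side], depth + 1)
        return self

    def predict_one(self, x):
        if self.svm is None:
            return self.label
        branch = self.right if self.svm.decision_function([x])[0] > 0 else self.left
        return branch.predict_one(x)
```

Usage would be `ObliqueTree().fit(X, y)` followed by `predict_one` per sample; the paper's contribution lies in *how* the hyperplane is assessed, which this sketch deliberately delegates to a stock SVM.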
Abstract:
We develop a simulation based algorithm for finite horizon Markov decision processes with finite state and finite action space. Illustrative numerical experiments with the proposed algorithm are shown for problems in flow control of communication networks and capacity switching in semiconductor fabrication.
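The paper's specific simulation-based algorithm is not reproduced here; as a generic illustration of the setting, the sketch below runs Monte Carlo backward induction on a hypothetical two-state, two-action finite-horizon problem, estimating stage-wise Q-values purely from simulator calls.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(s, a):
    """Hypothetical simulator: returns (next_state, cost) for a toy
    2-state, 2-action flow-control-like model."""
    p = 0.8 if a == 0 else 0.3           # chance of moving to state 1
    s_next = 1 if rng.random() < p else 0
    cost = s + (0.5 if a == 1 else 0.0)  # holding cost + control cost
    return s_next, cost

def mc_backward_induction(n_states=2, n_actions=2, horizon=5, n_sim=2000):
    """Estimate the optimal finite-horizon policy purely from simulation:
    at each stage, estimate Q(s, a) by averaging sampled one-step costs
    plus the previously computed cost-to-go."""
    V = np.zeros(n_states)               # terminal cost-to-go
    policy = []
    for stage in reversed(range(horizon)):
        Q = np.zeros((n_states, n_actions))
        for s in range(n_states):
            for a in range(n_actions):
                total = 0.0
                for _ in range(n_sim):
                    s_next, c = simulate(s, a)
                    total += c + V[s_next]
                Q[s, a] = total / n_sim
        V = Q.min(axis=1)
        policy.append(Q.argmin(axis=1))
    return list(reversed(policy)), V

pol, V0 = mc_backward_induction()
print("stage-0 policy per state:", pol[0], "value:", V0)
```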
Abstract:
Control centers (CC) play a very important role in power system operation. An overall view of the system, with information about all existing resources and needs, is implemented through SCADA (supervisory control and data acquisition) and an EMS (energy management system). As advanced technologies have made their way into the utility environment, operators are flooded with a huge amount of data. The last decade has seen extensive application of AI techniques, knowledge-based systems, and artificial neural networks in this area. This paper focuses on the need to develop an intelligent decision support system to assist the operator in making proper decisions. The requirements for realizing such a system are identified for the effective operation and energy management of the southern grid in India. The application of Petri nets leading to a decision support system is illustrated using a 24-bus system that is part of the southern grid.
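To make the Petri net machinery concrete, here is a minimal marking/firing sketch; the places and transition are hypothetical and have nothing to do with the paper's 24-bus model.

```python
def enabled(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume input tokens, produce output tokens."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical fragment: an alarm plus an idle operator enables a
# 'diagnose' transition that produces a pending decision.
marking = {"alarm": 1, "operator_idle": 1}
pre = {"alarm": 1, "operator_idle": 1}
post = {"decision_pending": 1}
if enabled(marking, pre):
    marking = fire(marking, pre, post)
print(marking)  # {'alarm': 0, 'operator_idle': 0, 'decision_pending': 1}
```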
Abstract:
We present the theoretical foundations for the multiple rendezvous problem, involving the design of local control strategies that enable groups of visibility-limited mobile agents to split into subgroups, exhibit simultaneous taxis behavior towards, and eventually rendezvous at, multiple unknown locations of interest. The theoretical results are proved under a certain restricted set of assumptions. The algorithm used to solve the above problem is based on a glowworm swarm optimization (GSO) technique, developed earlier, that finds multiple optima of multimodal objective functions. The significant difference between our work and most earlier approaches to agreement problems is the agents' use of a virtual local-decision domain to compute their movements. The range of the virtual domain is adaptive in nature and is bounded above by the maximum sensor/visibility range of the agent. We introduce a new decision domain update rule that enhances the rate of convergence by a factor of approximately two. We use illustrative simulations to support the algorithmic correctness and theoretical findings of the paper.
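The standard GSO updates (luciferin decay and reinforcement, probabilistic movement toward brighter neighbors, and the adaptive decision-domain rule bounded by the sensor range) can be sketched as below; parameter values and the two-peak objective are illustrative, and this is the basic rule rather than the paper's improved update.

```python
import numpy as np

rng = np.random.default_rng(3)

def gso_step(x, lum, r_d, J, rho=0.4, gamma=0.6, step=0.03,
             r_s=3.0, beta=0.08, n_t=5):
    """One glowworm swarm optimization iteration.

    x : (N, 2) agent positions, lum : luciferin levels,
    r_d : per-agent decision-domain radii, J : objective function.
    """
    # luciferin decay plus reinforcement by the objective value
    lum = (1 - rho) * lum + gamma * np.apply_along_axis(J, 1, x)
    new_x = x.copy()
    for i in range(len(x)):
        d = np.linalg.norm(x - x[i], axis=1)
        nbrs = np.where((d > 0) & (d < r_d[i]) & (lum > lum[i]))[0]
        if len(nbrs):
            # probabilistic choice of a brighter neighbor, then a step toward it
            p = (lum[nbrs] - lum[i]) / (lum[nbrs] - lum[i]).sum()
            j = rng.choice(nbrs, p=p)
            new_x[i] += step * (x[j] - x[i]) / np.linalg.norm(x[j] - x[i])
        # adaptive decision-domain update, bounded by the sensor range r_s
        r_d[i] = min(r_s, max(0.0, r_d[i] + beta * (n_t - len(nbrs))))
    return new_x, lum, r_d

# Toy multimodal objective with two peaks (two rendezvous locations).
J = lambda p: np.exp(-np.sum((p - 2)**2)) + np.exp(-np.sum((p + 2)**2))
x = rng.uniform(-3, 3, size=(30, 2))
lum, r_d = np.full(30, 5.0), np.full(30, 2.0)
for _ in range(100):
    x, lum, r_d = gso_step(x, lum, r_d, J)
```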
Abstract:
Production scheduling in a flexible manufacturing system (FMS) is a real-time combinatorial optimization problem that has been proved to be NP-complete. Solving this problem needs on-line monitoring of plan execution and requires real-time decision making in selecting alternative routings, assigning required resources, and rescheduling when failures occur in the system. Expert systems provide a natural framework for solving this kind of NP-complete problem. In this paper an expert system with a novel parallel heuristic approach is implemented for automatic short-term dynamic scheduling of an FMS. The principal features of the expert system include easy rescheduling, on-line plan execution, load balancing, an on-line garbage collection process, and the use of advanced knowledge representation schemes. Its effectiveness is demonstrated with two examples.
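The paper's rule base is not reproduced here, but a rule-style dispatcher with alternative routings, load balancing, and queuing for rescheduling on machine failure can be sketched; the jobs, machines, and dispatching rule below are all hypothetical.

```python
def dispatch(job_ops, machine_up, machine_load):
    """Rule-style dispatcher: for each pending operation, pick the least
    loaded *available* machine among its alternative routings; jobs whose
    machines are all down are left queued for rescheduling."""
    schedule, queued = [], []
    for job, alternatives in job_ops.items():
        candidates = [m for m in alternatives if machine_up[m]]
        if not candidates:
            queued.append(job)                        # reschedule later
            continue
        m = min(candidates, key=lambda mc: machine_load[mc])  # load balancing
        machine_load[m] += 1
        schedule.append((job, m))
    return schedule, queued

jobs = {"J1": ["M1", "M2"], "J2": ["M2"], "J3": ["M3"]}
up = {"M1": True, "M2": True, "M3": False}            # M3 has failed
print(dispatch(jobs, up, {"M1": 2, "M2": 0, "M3": 1}))
# -> ([('J1', 'M2'), ('J2', 'M2')], ['J3'])
```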
Abstract:
Whether HIV-1 evolution in infected individuals is dominated by deterministic or stochastic effects remains unclear because current estimates of the effective population size of HIV-1 in vivo, N_e, vary widely. Models assuming HIV-1 evolution to be neutral estimate N_e ~ 10^2-10^4, smaller than the inverse mutation rate of HIV-1 (~10^5), implying the predominance of stochastic forces. In contrast, a model that includes selection estimates N_e > 10^5, suggesting that deterministic forces would hold sway. The consequent uncertainty in the nature of HIV-1 evolution compromises our ability to describe disease progression and outcomes of therapy. We perform detailed bit-string simulations of viral evolution that consider large genome lengths and incorporate the key evolutionary processes underlying the genomic diversification of HIV-1 in infected individuals, namely mutation, multiple infections of cells, recombination, selection, and epistatic interactions between multiple loci. Our simulations describe quantitatively the evolution of HIV-1 diversity and divergence in patients. From comparisons of our simulations with patient data, we estimate N_e ~ 10^3-10^4, implying predominantly stochastic evolution. Interestingly, we find that N_e and the viral generation time are correlated with the disease progression time, presenting a route to a priori prediction of disease progression in patients. Further, we show that the previous estimate of N_e > 10^5 decreases as the assumed frequencies of multiple infections of cells and recombination increase. Our simulations with N_e ~ 10^3-10^4 may be employed to estimate markers of disease progression and outcomes of therapy that depend on the evolution of viral diversity and divergence.
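A heavily scaled-down sketch of one bit-string evolution step, Wright-Fisher reproduction with fitness-proportional selection, single-crossover recombination, and per-locus mutation, is given below; the rates, population size, genome length, and fitness model are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)

L, N, MU, REC = 100, 500, 1e-3, 0.1   # genome length, pop size, rates

def step(pop):
    """One Wright-Fisher generation with selection, recombination
    (single crossover between two random parents), and mutation."""
    fitness = np.exp(-0.05 * pop.sum(axis=1))        # deleterious 1-bits
    parents = rng.choice(len(pop), size=(N, 2),
                         p=fitness / fitness.sum())
    child = pop[parents[:, 0]].copy()
    # recombination: copy a suffix of the genome from the second parent
    rec = rng.random(N) < REC
    cut = rng.integers(1, L, size=N)
    for i in np.where(rec)[0]:
        child[i, cut[i]:] = pop[parents[i, 1], cut[i]:]
    child ^= (rng.random((N, L)) < MU)               # mutation flips bits
    return child

pop = np.zeros((N, L), dtype=bool)
for gen in range(200):
    pop = step(pop)
# fraction of mutant (1) alleles after 200 generations
print("mean allele frequency:", pop.mean())
```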
Abstract:
In this paper we present a novel macroblock mode decision algorithm to speed up H.264/SVC intra-frame encoding. We replace the complex mode-decision calculations with a classifier that has been trained specifically to minimize the reduction in RD performance, which results in a significant speedup in encoding. The results show that machine learning has great potential here and can reduce complexity substantially with negligible impact on quality: the proposed method reduces encoding time to about 70% in the base layer and up to 50% in the enhancement layer of the reference implementation, with a negligible loss in quality.
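A minimal sketch of the approach, replacing the exhaustive RD mode search with a single call to a trained classifier, is shown below, using scikit-learn's DecisionTreeClassifier as a stand-in; the block features, the two-mode label set, and the synthetic training labels are all assumptions for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)

# Hypothetical macroblock features: mean, variance, and horizontal and
# vertical gradient energy of the 16x16 luma block.
def mb_features(block):
    gy, gx = np.gradient(block.astype(float))
    return [block.mean(), block.var(), (gx**2).mean(), (gy**2).mean()]

# Offline: label training blocks with the mode the full RD search would
# choose (labels synthesized here for illustration).
X_train = rng.integers(0, 256, size=(1000, 16, 16))
y_train = (X_train.var(axis=(1, 2)) > 5000).astype(int)  # 0: Intra16x16, 1: Intra4x4
clf = DecisionTreeClassifier(max_depth=4).fit(
    [mb_features(b) for b in X_train], y_train)

# Online: one classifier call replaces the exhaustive mode search.
block = rng.integers(0, 256, size=(16, 16))
mode = clf.predict([mb_features(block)])[0]
print("predicted mode:", ["Intra16x16", "Intra4x4"][mode])
```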