168 results for Probabilistic Algorithms
Abstract:
A spanning tree T of a graph G is said to be a tree t-spanner if the distance between any two vertices in T is at most t times their distance in G. A graph that has a tree t-spanner is called a tree t-spanner admissible graph. The problem of deciding whether a graph is tree t-spanner admissible is NP-complete for any fixed t >= 4 and is linearly solvable for t <= 2. The case t = 3 remains open. A chordal graph is called a 2-sep chordal graph if all of its minimal a-b vertex separators for every pair of non-adjacent vertices a and b are of size two. It is known that not all 2-sep chordal graphs admit tree 3-spanners. This paper presents a structural characterization and a linear time recognition algorithm for tree 3-spanner admissible 2-sep chordal graphs. Finally, a linear time algorithm to construct a tree 3-spanner of a tree 3-spanner admissible 2-sep chordal graph is proposed.
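To make the definition concrete, the sketch below checks whether a given spanning tree is a tree t-spanner by comparing all pairwise distances. It is a brute-force test of the definition (O(n(n+m)) with BFS), not the paper's linear-time recognition algorithm; graphs are assumed connected and unweighted, given as adjacency dictionaries, and all names are illustrative.

```python
from collections import deque

def bfs_dist(adj, src):
    """Unweighted single-source distances by BFS; adj maps vertex -> neighbours."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_tree_t_spanner(g_adj, t_adj, t):
    """Brute-force check of the definition: dist_T(u, v) <= t * dist_G(u, v)
    for all vertex pairs."""
    for u in g_adj:
        dg, dt = bfs_dist(g_adj, u), bfs_dist(t_adj, u)
        if any(dt[v] > t * dg[v] for v in g_adj if v != u):
            return False
    return True
```

A useful shortcut in practice: it suffices to check dist_T(u, v) <= t for every edge (u, v) of G, since tree distances along any G-path obey the triangle inequality.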
Abstract:
A model comprising several servers, each equipped with its own queue and possibly a different service speed, is considered. Each server receives a dedicated arrival stream of jobs; there is also a stream of generic jobs that arrive at a job scheduler and can be individually allocated to any of the servers. It is shown that if the arrival streams are all Poisson and all jobs have the same exponentially distributed service requirements, the probabilistic splitting of the generic stream that minimizes the average job response time is the one that balances the server idle times in a weighted least-squares sense, where the weighting coefficients are related to the service speeds of the servers. The corresponding result holds for non-exponentially distributed service times if the service speeds are all equal. This result is used to develop adaptive quasi-static algorithms for allocating jobs in the generic arrival stream when the load parameters are unknown. The algorithms utilize server idle-time measurements, which are sent periodically to the central job scheduler. A model is developed for these measurements, and the above result is used to cast the problem as one of finding a projection of the root of an affine function when only noisy values of the function can be observed.
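The flavour of the quasi-static scheme can be sketched as follows, assuming the scheduler periodically receives per-server idle-time measurements; the update rule and its weighting are a hedged illustration of "balance the weighted idle times", not the paper's exact recursion.

```python
import numpy as np

def update_split(p, idle, speeds, step):
    """One quasi-static adjustment of the splitting probabilities: servers
    whose weighted idle time exceeds the average receive a larger share of
    the generic stream. The weighting by service speed mirrors the
    least-squares balance condition; the exact recursion in the paper differs."""
    x = np.asarray(speeds, float) * np.asarray(idle, float)  # weighted idle times
    p = np.asarray(p, float) + step * (x - x.mean())
    p = np.clip(p, 0.0, None)
    return p / p.sum()                                       # renormalise to a distribution
```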
Abstract:
IEEE 802.16 standards for Wireless Metropolitan Area Networks (WMANs) include a mesh mode of operation for improving the coverage and throughput of the network. In this paper, we consider the problem of routing and centralized scheduling for such networks. We first fix the routing, which reduces the network to a tree. We then present a finite horizon dynamic programming framework, from which we obtain various scheduling algorithms depending on the cost function. Finally, we consider simpler suboptimal algorithms and compare their performance.
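A minimal backward-induction skeleton for such a finite horizon dynamic program is sketched below. It assumes deterministic transitions for brevity (a stochastic model would take an expectation over next states); the state space, cost, and transition functions are placeholders that the paper instantiates for centralized scheduling on the routing tree.

```python
def backward_induction(horizon, states, actions, step_cost, transition):
    """Finite-horizon dynamic program solved by backward induction.
    states is an iterable of states, actions(s) lists feasible actions,
    step_cost(k, s, a) is the stage cost, transition(k, s, a) the next state."""
    V = {s: 0.0 for s in states}                 # terminal cost
    policy = [dict() for _ in range(horizon)]
    for k in reversed(range(horizon)):
        V_new = {}
        for s in states:
            best_a, best_c = None, float("inf")
            for a in actions(s):
                c = step_cost(k, s, a) + V[transition(k, s, a)]
                if c < best_c:
                    best_a, best_c = a, c
            V_new[s], policy[k][s] = best_c, best_a
        V = V_new
    return V, policy
```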
Abstract:
We consider the problem of computing an approximate minimum cycle basis of an undirected non-negative edge-weighted graph G with m edges and n vertices; the extension to directed graphs is also discussed. In this problem, a {0,1} incidence vector is associated with each cycle and the vector space over F_2 generated by these vectors is the cycle space of G. A set of cycles is called a cycle basis of G if it forms a basis for its cycle space. A cycle basis where the sum of the weights of the cycles is minimum is called a minimum cycle basis of G. Cycle bases of low weight are useful in a number of contexts, e.g. the analysis of electrical networks, structural engineering, chemistry, and surface reconstruction. Although in most such applications any cycle basis can be used, a low weight cycle basis often translates to better performance and/or numerical stability. Although the problem can be solved exactly in polynomial time, we design approximation algorithms since the exact algorithms may be too expensive for some practical applications. We present two new algorithms to compute an approximate minimum cycle basis. For any integer k >= 1, we give (2k-1)-approximation algorithms with expected running time O(kmn^(1+2/k) + mn^((1+1/k)(omega-1))) and deterministic running time O(n^(3+2/k)), respectively. Here omega is the best exponent of matrix multiplication; it is presently known that omega < 2.376. Both algorithms are o(m^omega) for dense graphs. This is the first time that an algorithm which computes sparse cycle bases with a guarantee drops below the Theta(m^omega) bound. We also present a 2-approximation algorithm with expected running time O(m^omega sqrt(n log n)), a linear time 2-approximation algorithm for planar graphs, and an O(n^3) time 2.42-approximation algorithm for the complete Euclidean graph in the plane.
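For orientation, any spanning tree already yields a valid cycle basis: each non-tree edge closes one fundamental cycle, and these m - n + 1 cycles are independent over F_2. The sketch below computes such a basis; it carries no weight guarantee, which is precisely what separates it from the approximation algorithms of the paper. It assumes a connected graph on vertices 0..n-1.

```python
def fundamental_cycle_basis(n, edges):
    """Cycle basis from the fundamental cycles of a DFS spanning tree;
    each cycle is returned as a set of edge indices."""
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))

    parent = {0: (None, None)}             # vertex -> (parent vertex, tree edge id)
    tree, stack = set(), [0]
    while stack:                            # build a DFS spanning tree
        u = stack.pop()
        for v, i in adj[u]:
            if v not in parent:
                parent[v] = (u, i)
                tree.add(i)
                stack.append(v)

    def tree_path_to_root(u):               # ids of tree edges on the u -> root path
        s = set()
        while parent[u][0] is not None:
            s.add(parent[u][1])
            u = parent[u][0]
        return s

    basis = []
    for i in range(len(edges)):
        if i not in tree:                   # each chord closes one fundamental cycle
            u, v = edges[i]
            basis.append({i} | (tree_path_to_root(u) ^ tree_path_to_root(v)))
    return basis
```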
Abstract:
Reduction of the execution time of a job through equitable distribution of the work load among the processors in a distributed system is the goal of load balancing. The performance of static and dynamic load balancing algorithms for the extended hypercube is discussed. Threshold algorithms are well-known algorithms for dynamic load balancing in distributed systems. An extension of the threshold algorithm, called the multilevel threshold algorithm, is proposed. The hierarchical interconnection network of the extended hypercube is suitable for implementing the proposed algorithm. The new algorithm has been implemented on a transputer-based system, and its performance on an extended hypercube is compared with that on mesh and binary hypercube networks.
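A sketch of the classic single-level threshold policy is given below for reference; the multilevel variant proposed in the paper organizes this decision along the hierarchy of the extended hypercube. All parameters (threshold, probe count) are illustrative.

```python
import random

def place_job(local, loads, neighbors, threshold, probes=3):
    """Single-level threshold policy: keep the job if the local load is
    below the threshold, otherwise probe a few neighbors and transfer to
    the first underloaded one; keep it locally if none is found."""
    if loads[local] < threshold:
        loads[local] += 1
        return local
    for nbr in random.sample(neighbors[local], min(probes, len(neighbors[local]))):
        if loads[nbr] < threshold:
            loads[nbr] += 1
            return nbr
    loads[local] += 1
    return local
```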
Abstract:
This paper considers a multi-person discrete game with random payoffs. The distribution of the random payoff is unknown to the players and further none of the players know the strategies or the actual moves of other players. A class of absolutely expedient learning algorithms for the game based on a decentralised team of Learning Automata is presented. These algorithms correspond, in some sense, to rational behaviour on the part of the players. All stable stationary points of the algorithm are shown to be Nash equilibria for the game. It is also shown that under some additional constraints on the game, the team will always converge to a Nash equilibrium.
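A standard absolutely expedient scheme of this kind is the linear reward-inaction (L_R-I) update, sketched below for a single automaton; in the team setting of the paper, each player runs one such automaton on its own action probabilities, seeing only its own (normalised) payoff. The function names are illustrative.

```python
import random

def lri_update(p, chosen, reward, step):
    """Linear reward-inaction (L_R-I) update: on a favourable response the
    chosen action's probability moves up and the rest scale down; when
    reward = 0 nothing changes. reward is assumed normalised to [0, 1],
    and the updated vector still sums to one."""
    return [pi + step * reward * ((1.0 if i == chosen else 0.0) - pi)
            for i, pi in enumerate(p)]

def choose_action(p):
    """Sample an action from the current probability vector."""
    return random.choices(range(len(p)), weights=p)[0]
```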
Abstract:
We develop four algorithms for simulation-based optimization under multiple inequality constraints. Both the cost and the constraint functions are considered to be long-run averages of certain state-dependent single-stage functions. We pose the problem in the simulation optimization framework using the Lagrange multiplier method. Two of our algorithms estimate only the gradient of the Lagrangian, while the other two estimate both its gradient and Hessian. In the process, we also develop various new estimators for the gradient and Hessian. All our algorithms use two simulations each. Two of the algorithms are based on the smoothed functional (SF) technique, while the other two are based on the simultaneous perturbation stochastic approximation (SPSA) method. We prove the convergence of our algorithms and show numerical experiments on a setting involving an open Jackson network. The Newton-based SF algorithm shows the best overall performance.
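For reference, the standard two-simulation SPSA gradient estimate looks as follows; this is the textbook estimator, shown under the assumption that f returns a noisy sample of the long-run average cost, and it is not the paper's full Lagrangian-based algorithms.

```python
import numpy as np

def spsa_gradient(f, theta, delta=0.1):
    """Two-simulation SPSA gradient estimate at theta: perturb all
    coordinates at once with a random sign vector and difference the two
    resulting (noisy) cost samples."""
    d = np.random.choice([-1.0, 1.0], size=theta.shape)   # Rademacher perturbation
    y_plus, y_minus = f(theta + delta * d), f(theta - delta * d)
    return (y_plus - y_minus) / (2.0 * delta * d)
```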
Abstract:
In this paper we address a scheduling problem for minimising total weighted tardiness. The motivation for the paper comes from the automobile gear manufacturing process. We consider the bottleneck operation of the heat treatment stage of gear manufacturing. Real life scenarios like unequal release times, incompatible job families, non-identical job sizes and allowance for job splitting have been considered. A mathematical model taking into account dynamic starting conditions has been developed. Due to the NP-hard nature of the problem, a few heuristic algorithms have been proposed. The performance of the proposed heuristic algorithms is evaluated: (a) in comparison with the optimal solution for small size problem instances, and (b) in comparison with an 'estimated optimal solution' for large size problem instances. Extensive computational analyses reveal that the proposed heuristic algorithms consistently obtain near-optimal solutions (that is, statistically estimated ones) in very reasonable computational time.
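The objective itself is easy to state in code. The sketch below evaluates the total weighted tardiness of a job sequence on a single machine with release times; it deliberately omits the batching, incompatible job families and job splitting that the paper's heuristics handle.

```python
def total_weighted_tardiness(sequence, proc, due, weight, release):
    """Total weighted tardiness of a sequence on one machine with release
    times: each job waits for its release, is processed, and contributes
    weight * max(0, completion - due)."""
    t, cost = 0, 0
    for j in sequence:
        t = max(t, release[j]) + proc[j]        # wait for release, then process
        cost += weight[j] * max(0, t - due[j])  # tardiness = lateness clipped at 0
    return cost
```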
Abstract:
This paper considers the problem of spectrum sensing in cognitive radio networks when the primary user employs Orthogonal Frequency Division Multiplexing (OFDM). We specifically consider the scenario in which the channel between the primary and a secondary user is frequency selective. We develop cooperative sequential detection algorithms based on energy detectors. We modify the detectors to mitigate the effects of some common model uncertainties such as timing and frequency offset, IQ-imbalance, and uncertainty in noise and transmit power. The performance of the proposed algorithms is studied via simulations. We show that the performance of the energy detector is not affected by the frequency selective channel. We also provide a theoretical analysis for some of our algorithms.
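As a baseline, a non-sequential energy detector can be sketched as follows, assuming complex circularly-symmetric Gaussian noise with known variance and a Gaussian approximation for large sample counts; the cooperative sequential detectors in the paper build on this statistic.

```python
import numpy as np
from statistics import NormalDist

def energy_detect(samples, noise_var, pfa=0.05):
    """Declare the primary present when the average received energy exceeds
    a threshold set for the target false-alarm rate. Under H0 the statistic
    has mean noise_var and standard deviation about noise_var / sqrt(n)."""
    n = len(samples)
    stat = np.mean(np.abs(samples) ** 2)
    z = NormalDist().inv_cdf(1.0 - pfa)
    thresh = noise_var * (1.0 + z / np.sqrt(n))
    return stat > thresh
```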
Abstract:
Genetic algorithms provide an alternative to traditional optimization techniques by using directed random searches to locate optimal solutions in complex landscapes. We introduce the art and science of genetic algorithms and survey current issues in GA theory and practice. We do not present a detailed study; instead, we offer a quick guide into the labyrinth of GA research. First, we draw the analogy between genetic algorithms and the search processes in nature. Then we describe the genetic algorithm that Holland introduced in 1975 and the workings of GAs. After a survey of techniques proposed as improvements to Holland's GA and of some radically different approaches, we survey the advances in GA theory related to modeling, dynamics, and deception.
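A minimal Holland-style GA, with fitness-proportional selection, one-point crossover and bitwise mutation, can be sketched as follows; all parameter values are illustrative and the fitness function is assumed non-negative (and not identically zero).

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=50, gens=100,
                      p_cross=0.7, p_mut=0.01):
    """Minimal GA over bitstrings: fitness-proportional selection,
    one-point crossover, bitwise mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        scores = [fitness(ind) for ind in pop]
        nxt = []
        while len(nxt) < pop_size:
            a, b = (random.choices(pop, weights=scores)[0][:] for _ in range(2))
            if random.random() < p_cross:           # one-point crossover
                cut = random.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):                    # bitwise mutation
                for i in range(n_bits):
                    if random.random() < p_mut:
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)
```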
Abstract:
This paper studies the problem of constructing robust classifiers when the training data is plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain data points are classified correctly with high probability. Unfortunately, such a CCP turns out to be intractable. The key novelty is in employing Bernstein bounding schemes to relax the CCP as a convex second order cone program whose solution is guaranteed to satisfy the probabilistic constraint. Prior to this work, only Chebyshev based relaxations were exploited in learning algorithms. Bernstein bounds employ richer partial information and hence can be far less conservative than Chebyshev bounds. Due to this efficient modeling of uncertainty, the resulting classifiers achieve higher classification margins and hence better generalization. Methodologies for classifying uncertain test data points and error measures for evaluating classifiers robust to uncertain data are discussed. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle data uncertainty and outperform the state of the art in many cases.
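The shape of such a relaxation can be sketched with a generic conic solver. The code below shows the simpler Chebyshev-style second order cone constraint in a hard-margin form, using the cvxpy library and a hypothetical helper robust_separator; the paper's Bernstein relaxation is tighter but yields a program of the same conic shape.

```python
import cvxpy as cp

def robust_separator(mu, sigma_sqrt, y, kappa):
    """Hard-margin SOCP sketch: each uncertain point, summarised by a mean
    mu[i] and a covariance square-root sigma_sqrt[i], must be classified
    correctly with a margin that grows with its uncertainty. kappa encodes
    the required probability level."""
    n, d = mu.shape
    w, b = cp.Variable(d), cp.Variable()
    cons = [y[i] * (mu[i] @ w + b) >= 1 + kappa * cp.norm(sigma_sqrt[i] @ w)
            for i in range(n)]
    prob = cp.Problem(cp.Minimize(cp.norm(w)), cons)
    prob.solve()
    return w.value, b.value
```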
Abstract:
For a class of distributed recursive algorithms, it is shown that a stochastic approximation-like tapering stepsize routine suppresses the effects of interprocessor delays.
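A tapering stepsize here means a schedule a(k) with sum a(k) infinite and sum a(k)^2 finite; a minimal sketch of such a schedule (the constants are illustrative):

```python
def tapering_steps(a=1.0, alpha=0.75):
    """Stepsizes a(k) = a / (k + 1)^alpha; any 0.5 < alpha <= 1 gives
    sum a(k) = infinity and sum a(k)^2 < infinity, the kind of tapering
    schedule the result refers to."""
    k = 0
    while True:
        yield a / (k + 1) ** alpha
        k += 1
```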
Abstract:
Two new line clipping algorithms, the opposite-corner algorithm and the perpendicular-distance algorithm, that are based on simple geometric observations are presented. These algorithms do not require computation of outcodes, nor do they depend on the parametric representations of the lines. It is shown that the opposite-corner algorithm performs consistently better than an algorithm due to Nicholl, Lee, and Nicholl, which is claimed to be better than the classic Cohen-Sutherland algorithm and the more recent Liang-Barsky algorithm. The pseudo-code of the opposite-corner algorithm is provided in the Appendix.
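For contrast, the outcode computation that the new algorithms avoid is the heart of Cohen-Sutherland clipping; a sketch:

```python
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    """Cohen-Sutherland region code of a point against the clip rectangle."""
    code = INSIDE
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

# Trivial accept: both endpoint codes are 0; trivial reject: their bitwise AND != 0.
```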
Abstract:
In this paper, we give a method for probabilistic assignment in the Realistic Abductive Reasoning Model. The knowledge is assumed to be represented in the form of causal chaining, namely a hyper-bipartite network. The hyper-bipartite network is the most generalized form of knowledge representation for which, so far, there has been no way of assigning probability to the explanations. First, the inference mechanism using the realistic abductive reasoning model is briefly described, and then a probability is assigned to each of the explanations so as to pick out the explanations in decreasing order of plausibility.
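One simple way to order explanations by plausibility, shown purely as an illustration, is to score each explanation by the product of its causal-link probabilities under an independence assumption; the paper's assignment over hyper-bipartite networks is more general.

```python
from math import prod

def rank_explanations(explanations, link_prob):
    """Rank candidate explanations in decreasing order of plausibility,
    scoring each as the product of its causal-link probabilities.
    explanations is a list of sets of link ids; link_prob maps link id
    to a probability."""
    return sorted(explanations,
                  key=lambda expl: prod(link_prob[e] for e in expl),
                  reverse=True)
```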