49 results for unconditional guarantees


Relevance: 20.00%

Abstract:

This paper reports new results concerning the capabilities of a family of service disciplines aimed at providing per-connection end-to-end delay (and throughput) guarantees in high-speed networks. This family consists of the class of rate-controlled service disciplines, in which traffic from a connection is reshaped at every hop on its path to conform to specific traffic characteristics. When used together with a scheduling policy at each node, this reshaping enables the network to provide end-to-end delay guarantees to individual connections. The main advantages of this family of service disciplines are their implementation simplicity and flexibility. On the other hand, because the delay guarantees provided are based on summing worst-case delays at each node, it has also been argued that the resulting bounds are very conservative, which may more than offset the benefits. In particular, other service disciplines, such as those based on Fair Queueing or Generalized Processor Sharing (GPS), have been shown to provide much tighter delay bounds. As a result, these disciplines, although more complex from an implementation point of view, have been considered for the purpose of providing end-to-end guarantees in high-speed networks. In this paper, we show that through "proper" selection of the reshaping to which we subject the traffic of a connection, the penalty incurred by computing end-to-end delay bounds based on worst cases at each node can be alleviated. Specifically, we show how rate-controlled service disciplines can be designed to outperform the Rate Proportional Processor Sharing (RPPS) service discipline. Based on these findings, we believe that rate-controlled service disciplines provide a very powerful and practical solution to the problem of providing end-to-end guarantees in high-speed networks.
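
As a rough illustration of the two bounding styles contrasted above, the sketch below compares a sum of per-hop worst-case delays with the textbook Parekh-Gallager end-to-end bound for PGPS, for a single leaky-bucket-shaped connection. The formulas are the standard ones from the literature, not this paper's derivation, and all parameter values are hypothetical.

```python
# Illustrative comparison (not the paper's derivation) of two end-to-end delay bounds
# for a connection shaped by a leaky bucket (sigma, rho), served at rate g >= rho,
# crossing K hops with link capacities C[k] and maximum packet size L.
sigma = 10_000 * 8        # burst size [bits] (hypothetical)
rho   = 1e6               # sustained rate [bit/s]
g     = 2e6               # reserved per-hop service rate [bit/s]
L     = 1500 * 8          # maximum packet size [bits]
C     = [100e6] * 5       # link capacities of the K hops [bit/s]
K     = len(C)

# (1) Rate-controlled style: sum of crude per-hop worst-case delays, assuming the
#     traffic is reshaped to (sigma, rho) before each hop.
bound_sum = sum(sigma / g + L / Ck for Ck in C)

# (2) GPS/RPPS style: the Parekh-Gallager end-to-end bound for the same connection.
bound_pgps = sigma / g + (K - 1) * L / g + sum(L / Ck for Ck in C)

print(f"sum of per-hop bounds : {bound_sum * 1e3:.2f} ms")
print(f"PGPS end-to-end bound : {bound_pgps * 1e3:.2f} ms")
```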

Relevance: 20.00%

Abstract:

We consider the problem of providing mean delay and average throughput guarantees in random-access fading wireless channels using the CSMA/CA algorithm. This problem becomes much more challenging when the scheduling is distributed, as is the case in a typical wireless local area network. We model the CSMA network using a novel queueing-network-based approach. The optimal per-device throughput and the throughput-optimal policy in an M-device network are obtained. We provide a simple contention control algorithm that adapts the attempt probability based on the network load, and we obtain bounds on the packet transmission delay. The information we make use of is the number of devices in the network and the (delayed) queue length at each device. The proposed algorithms stay within the requirements of the IEEE 802.11 standard.
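
To make the idea of adapting the attempt probability to the network load concrete, here is a generic slotted random-access sketch that sets the attempt probability to roughly one over the number of backlogged devices, the classical heuristic. This is only an illustration of the mechanism; it is not the paper's algorithm, and the scenario below is hypothetical.

```python
import random

def attempt_probability(n_backlogged):
    """Classic heuristic: with n backlogged devices, an attempt probability
    near 1/n maximizes the per-slot success probability.  Illustrative only,
    not the adaptation rule of the paper."""
    return 1.0 / max(n_backlogged, 1)

def simulate_slot(queues):
    """One contention slot: every backlogged device attempts independently with
    probability p; the slot succeeds iff exactly one device transmits."""
    backlogged = [i for i, q in enumerate(queues) if q > 0]
    p = attempt_probability(len(backlogged))
    transmitters = [i for i in backlogged if random.random() < p]
    if len(transmitters) == 1:
        queues[transmitters[0]] -= 1
        return True      # success
    return False         # idle slot or collision

# Hypothetical example: 10 devices, each with 5 queued packets.
queues = [5] * 10
slots = 0
while any(queues):
    slots += 1
    simulate_slot(queues)
print(f"drained 50 packets in {slots} contention slots")
```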

Relevance: 10.00%

Abstract:

We propose a robust method for mosaicing of document images using features derived from connected components. Each connected component is described using the Angular Radial Transform (ART). To ensure geometric consistency during feature matching, the ART coefficients of a connected component are augmented with those of its two nearest neighbors. The proposed method addresses two critical issues often encountered in correspondence matching: (i) the stability of features, and (ii) robustness against false matches due to multiple instances of characters in a document image. The use of connected components guarantees stable localization across images. The augmented features ensure successful correspondence matching even in the presence of multiple similar regions within the page. We illustrate the effectiveness of the proposed method on camera-captured document images exhibiting large variations in viewpoint, illumination, and scale.
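
A minimal sketch of the augmentation-and-matching idea described above: each component's descriptor is concatenated with those of its two nearest neighbors before nearest-neighbor matching with a ratio test. The descriptors here are generic placeholders standing in for ART coefficients (an actual ART implementation is not included), and all names and thresholds are illustrative assumptions.

```python
import numpy as np

def augment_descriptors(centroids, descriptors):
    """Concatenate each connected component's descriptor with those of its two
    nearest neighbours (by centroid distance), mirroring the geometric-consistency
    idea in the abstract.  `descriptors` stands in for ART coefficients."""
    centroids = np.asarray(centroids, dtype=float)
    descriptors = np.asarray(descriptors, dtype=float)
    dists = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    augmented = []
    for i in range(len(centroids)):
        nn = np.argsort(dists[i])[:2]                 # two nearest neighbours
        augmented.append(np.concatenate([descriptors[i],
                                         descriptors[nn[0]],
                                         descriptors[nn[1]]]))
    return np.vstack(augmented)

def match(aug_a, aug_b, ratio=0.8):
    """Nearest-neighbour matching with a Lowe-style ratio test, which helps
    suppress false matches caused by repeated characters."""
    matches = []
    for i, d in enumerate(aug_a):
        dists = np.linalg.norm(aug_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches
```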

Relevance: 10.00%

Abstract:

Protocols for secure archival storage are becoming increasingly important as the use of digital storage for sensitive documents gains wider practice. Wong et al. [8] combined verifiable secret sharing with proactive secret sharing without reconstruction and proposed a verifiable secret redistribution protocol for long-term storage. However, their protocol requires that each of the receivers be honest during redistribution. We proposed [3] an extension to their protocol wherein we relaxed the requirement that all recipients be honest to the condition that only a simple majority among the recipients need be honest during the (re)distribution processes. Further, both of these protocols make use of Feldman's approach for achieving integrity during the (re)distribution processes. In this paper, we present a revised version of our earlier protocol and its adaptation to incorporate Pedersen's approach instead of Feldman's, thereby achieving information-theoretic secrecy while retaining integrity guarantees.
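
The contrast between the two commitment styles named above can be sketched in a few lines. The toy parameters below are for demonstration only (not a secure instantiation), and the sketch shows only the commitments used to make Shamir-style shares verifiable, not the redistribution protocol itself.

```python
# Toy contrast between Feldman and Pedersen commitments to a polynomial
# coefficient, as used to add verifiability to Shamir-style sharing.
p, q = 23, 11          # q divides p - 1; both prime (toy sizes only)
g, h = 4, 9            # generators of the order-q subgroup of Z_p^*
                       # (in a real system, log_g(h) must be unknown to everyone)

def feldman_commit(coeff):
    """Feldman: C = g^a mod p.  Binding, but only computationally hiding,
    since C leaks g^a."""
    return pow(g, coeff, p)

def pedersen_commit(coeff, blind):
    """Pedersen: C = g^a * h^r mod p.  Binding under discrete log, and
    information-theoretically hiding thanks to the random blinding r."""
    return (pow(g, coeff, p) * pow(h, blind, p)) % p

secret = 7             # a polynomial coefficient in Z_q (hypothetical)
print("Feldman commitment :", feldman_commit(secret))
print("Pedersen commitment:", pedersen_commit(secret, blind=3))
# A verifier later checks each share f(i) against the published coefficient
# commitments, e.g. for Feldman: g^{f(i)} == prod_j C_j^{i^j} (mod p).
```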

Relevance: 10.00%

Abstract:

Cooperation among unrelated individuals is an enduring evolutionary riddle, and a number of possible solutions have been suggested. Most of these suggestions attempt to refine cooperative strategies, while little attention is given to the fact that novel defection strategies can also evolve in the population. Especially in the presence of punishment of defectors and public knowledge of the strategies employed by the players, a defecting strategy that avoids being punished by selectively cooperating only with the punishers can gain a selective benefit over unconditional defectors. Furthermore, if punishment ensures cooperation from such discriminating defectors, defectors who punish other defectors can evolve as well. We show that such discriminating and punishing defectors can evolve in the population by natural selection in a Prisoner’s Dilemma game scenario, even if discrimination is a costly act. These refined defection strategies destabilize unconditional defectors. They themselves are, however, unstable in the population. Discriminating defectors give a selective benefit to the punishers in the presence of non-punishers by cooperating with them and defecting with others. However, since these players also defect with other discriminators, they suffer a fitness loss in the pure population. Among the punishers, punishing cooperators always benefit, in contrast to punishing defectors, as the latter not only defect with other punishing defectors but also punish them and get punished. As a consequence of both these scenarios, punishing cooperators become stabilized in the population. We thus show, ironically, that refined defection strategies stabilize cooperation. Furthermore, cooperation stabilized by such defectors can work under a wide range of initial conditions and is robust to mistakes.
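
A toy numerical illustration of why discriminating in favour of punishers can pay, using hypothetical donation-game payoffs that are not taken from the paper:

```python
# Hypothetical one-shot payoffs for a donation-game style Prisoner's Dilemma:
# cooperation gives the partner b at personal cost c; a punisher imposes a fine
# beta on any partner that defected against it.  All numbers are illustrative.
b, c, beta = 3.0, 1.0, 4.0

def payoff_vs_punishing_cooperator(my_move):
    """Focal player's payoff against a punishing cooperator."""
    if my_move == "C":
        return b - c          # mutual cooperation
    return b - beta           # exploit the cooperator, then get fined

print("unconditional defector vs punisher :", payoff_vs_punishing_cooperator("D"))  # -1.0
print("discriminating defector vs punisher:", payoff_vs_punishing_cooperator("C"))  #  2.0
# Against non-punishers both defector types defect and earn b, so discrimination
# pays whenever the fine beta exceeds the cost c of cooperating.
```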

Relevance: 10.00%

Abstract:

Extending the work of the preceding paper, the relativistic front form for Maxwell's equations of electromagnetism is developed and shown to be particularly suited to the description of paraxial waves. The generators of the Poincaré group, in a form applicable directly to the electric and magnetic field vectors, are derived. It is shown that the effect of a thin lens on a paraxial electromagnetic wave is given by a six-dimensional transformation matrix constructed out of certain special generators of the Poincaré group. The method of construction guarantees that the free propagation of such waves, as well as their transmission through ideal optical systems, can be described in terms of the metaplectic group, exactly as found for scalar waves by Bacry and Cadilhac. An alternative formulation in terms of a vector potential is also constructed. It is chosen in a gauge suggested by the front form and by the requirement that the lens transformation matrix act locally in space. Pencils of light with accompanying polarization are defined for statistical states in terms of the two-point correlation function of the vector potential. Their propagation and transmission through lenses are briefly considered in the paraxial limit. This paper extends Fourier optics and completes it by formulating it for the Maxwell field. We stress that the derivations depend explicitly on the "henochromatic" idealization, as well as on the identification of the ideal lens with a quadratic phase shift, and are heuristic to this extent.
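
For orientation, the familiar scalar-paraxial analogue of the lens and free-propagation maps mentioned above is the 2x2 ray-transfer (ABCD) formalism. The sketch below shows only that standard scalar analogue; it is not the paper's six-dimensional construction acting on the Maxwell field.

```python
import numpy as np

# Standard scalar paraxial ray-transfer (ABCD) matrices -- the familiar analogue
# of the six-dimensional lens matrices constructed in the paper (not reproduced).

def free_propagation(d):
    """Free propagation over a distance d."""
    return np.array([[1.0, d],
                     [0.0, 1.0]])

def thin_lens(f):
    """Ideal thin lens of focal length f (a quadratic phase shift in wave optics)."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

# A ray is (height y, paraxial angle theta); elements compose by matrix
# multiplication with the rightmost element acting first.
system = free_propagation(0.2) @ thin_lens(0.1) @ free_propagation(0.2)
ray_in = np.array([1e-3, 0.0])        # parallel ray, 1 mm off axis (hypothetical)
print("output ray:", system @ ray_in)
```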

Relevance: 10.00%

Abstract:

The maximum independent set problem is NP-complete even when restricted to planar graphs, cubic planar graphs, or triangle-free graphs. The problem of finding an absolute approximation also remains NP-complete. Various polynomial-time approximation algorithms for planar graphs have been proposed that guarantee a fixed worst-case ratio between the size of the independent set obtained and the maximum independent set size. We present in this paper a simple and efficient O(|V|) algorithm that guarantees a ratio of 1/2 for planar triangle-free graphs. The algorithm differs completely from other approaches in that it collects groups of independent vertices at a time. Certain bounds we obtain in this paper relate to some interesting questions in the theory of extremal graphs.
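
To make the notion of an approximation with a worst-case ratio concrete, here is a generic minimum-degree greedy heuristic for independent sets. It is emphatically not the O(|V|) group-collecting algorithm of the paper; the example graph is hypothetical.

```python
# Generic minimum-degree greedy heuristic for an independent set (NOT the
# paper's algorithm), shown only to illustrate what an approximation computes.

def greedy_independent_set(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    remaining = {v: set(nbrs) for v, nbrs in adj.items()}
    independent = set()
    while remaining:
        # pick a vertex of minimum degree in the surviving graph
        v = min(remaining, key=lambda u: len(remaining[u]))
        independent.add(v)
        removed = {v} | remaining[v]
        for u in removed:
            remaining.pop(u, None)
        for nbrs in remaining.values():
            nbrs -= removed
    return independent

# Hypothetical example: a 6-cycle (planar and triangle-free); the maximum
# independent set has size 3, which the greedy heuristic attains here.
cycle6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(greedy_independent_set(cycle6))
```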

Relevance: 10.00%

Abstract:

Multiaction learning automata, which update their action probabilities on the basis of the responses they receive from an environment, are considered in this paper. The automata update the probabilities according to whether the environment responds with a reward or a penalty. A learning automaton is said to possess ergodicity of the mean if the mean action probability is the state probability (or unconditional probability) of an ergodic Markov chain. In an earlier paper [11] we considered the problem of a two-action learning automaton being ergodic in the mean (EM). The family of such automata was characterized completely by proving necessary and sufficient conditions for automata to be EM. In this paper, we generalize the results of [11] and obtain necessary and sufficient conditions for the multiaction learning automaton to be EM. These conditions involve two families of probability updating functions. It is shown that, for the automaton to be EM, the two families must be linearly dependent. The vector defining the linear dependence is the only vector parameter controlling the rate of convergence of the automaton. Further, a technique for reducing the variance of the limiting distribution is discussed. Just as in the two-action case, it is shown that the set of absolutely expedient schemes and the set of schemes which possess ergodicity of the mean are mutually disjoint.
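
As a concrete instance of the kind of probability-updating scheme studied in this line of work, the sketch below implements the classical multiaction linear reward-penalty (L_R-P) rule, a well-known ergodic (non-absorbing) scheme. It is given only to make the updating mechanism tangible, not as the paper's characterization; the environment below is hypothetical.

```python
import random

def lrp_update(p, chosen, reward, a=0.05, b=0.05):
    """One step of the multiaction linear reward-penalty (L_R-P) scheme.
    p      : current action-probability vector (sums to 1)
    chosen : index of the action just performed
    reward : True if the environment rewarded the action, False for a penalty
    a, b   : reward and penalty step sizes
    """
    r = len(p)
    q = list(p)
    if reward:
        for j in range(r):
            q[j] = (1 - a) * p[j]
        q[chosen] = p[chosen] + a * (1 - p[chosen])
    else:
        for j in range(r):
            q[j] = b / (r - 1) + (1 - b) * p[j]
        q[chosen] = (1 - b) * p[chosen]
    return q

# Hypothetical stationary environment: reward probability of each action.
d = [0.7, 0.5, 0.2]
p = [1 / 3] * 3
for _ in range(5000):
    i = random.choices(range(3), weights=p)[0]
    p = lrp_update(p, i, reward=(random.random() < d[i]))
print([round(x, 3) for x in p])
```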

Relevance: 10.00%

Abstract:

Euler–Bernoulli beams are distributed-parameter systems governed by a non-linear partial differential equation (PDE) of motion. This paper presents a vibration control approach for such beams that directly utilizes the non-linear PDE of motion and is hence free from approximation errors (such as those introduced by model reduction or linearization). Two state feedback controllers are presented based on a newly developed optimal dynamic inversion technique, which leads to closed-form solutions for the control variable. In one formulation a continuous controller structure is assumed in the spatial domain, whereas in the other the control force is assumed to be applied through a finite number of discrete actuators located at predefined locations in the spatial domain. An unconditionally stable implicit finite-difference technique is used to solve the PDE with the control actions applied. Numerical simulation studies show that the beam vibration can be effectively suppressed using either of the two formulations.
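
A minimal sketch of the kind of unconditionally stable implicit finite-difference scheme mentioned above, applied to the linear, uncontrolled Euler-Bernoulli beam with pinned ends. The paper's nonlinear PDE, controllers, and boundary conditions are not reproduced, and all parameter values are hypothetical.

```python
import numpy as np

# Implicit finite-difference sketch for the LINEAR Euler-Bernoulli beam
#   rho*A * w_tt + E*I * w_xxxx = f(x, t),   w = w_xx = 0 at both ends,
# with the stiffness term taken at the new time level, which makes the time
# stepping unconditionally stable.  Illustration only, not the paper's model.

L, N = 1.0, 50                  # beam length [m], number of intervals (hypothetical)
EI, rhoA = 10.0, 1.0            # bending stiffness and mass per unit length
dx, dt, steps = L / N, 1e-3, 2000

n = N - 1                       # interior nodes (w = 0 at both ends)
D4 = (np.diag(6.0 * np.ones(n)) +
      np.diag(-4.0 * np.ones(n - 1), 1) + np.diag(-4.0 * np.ones(n - 1), -1) +
      np.diag(np.ones(n - 2), 2) + np.diag(np.ones(n - 2), -2)) / dx**4
D4[0, 0] = D4[-1, -1] = 5.0 / dx**4      # pinned-end (w_xx = 0) modification

A = rhoA / dt**2 * np.eye(n) + EI * D4   # constant implicit system matrix

x = np.linspace(dx, L - dx, n)
w_prev = w = 0.01 * np.sin(np.pi * x / L)   # initial deflection, zero velocity
for _ in range(steps):
    rhs = rhoA / dt**2 * (2 * w - w_prev)   # no external forcing in this sketch
    w_prev, w = w, np.linalg.solve(A, rhs)

print("max deflection after simulation:", float(np.max(np.abs(w))))
```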

Relevance: 10.00%

Abstract:

In this paper, we first describe a framework to model the sponsored search auction on the web as a mechanism design problem. Using this framework, we describe two well-known mechanisms for sponsored search auctions: Generalized Second Price (GSP) and Vickrey-Clarke-Groves (VCG). We then derive a new mechanism for the sponsored search auction which we call the optimal (OPT) mechanism. The OPT mechanism maximizes the search engine's expected revenue while achieving Bayesian incentive compatibility and individual rationality of the advertisers. We then undertake a detailed comparative study of the mechanisms GSP, VCG, and OPT. We compute and compare the expected revenue earned by the search engine under the three mechanisms when the advertisers are symmetric and certain special conditions are satisfied. We also compare the three mechanisms in terms of incentive compatibility, individual rationality, and computational complexity. Note to Practitioners: The advertiser-supported web site is one of the successful business models in the emerging web landscape. When an Internet user enters a keyword (i.e., a search phrase) into a search engine, the user gets back a page of results containing the links most relevant to the query as well as sponsored links (also called paid advertisement links). When a sponsored link is clicked, the user is directed to the corresponding advertiser's web page. The advertiser pays the search engine in some appropriate manner for sending the user to its web page. Against every search performed by any user on any keyword, the search engine faces the problem of matching a set of advertisers to the sponsored slots. In addition, the search engine also needs to decide on a price to be charged to each advertiser. Due to increasing demand for Internet advertising space, most search engines currently use auction mechanisms for this purpose. These are called sponsored search auctions. A significant percentage of the revenue of Internet giants such as Google, Yahoo!, and MSN comes from sponsored search auctions. In this paper, we study two auction mechanisms, GSP and VCG, which are quite popular in the sponsored search auction context, and pursue the objective of designing a mechanism that is superior to these two. In particular, we propose a new mechanism which we call the OPT mechanism. This mechanism maximizes the search engine's expected revenue subject to achieving Bayesian incentive compatibility and individual rationality. Bayesian incentive compatibility guarantees that it is optimal for each advertiser to bid his/her true value provided that all other agents also bid their respective true values. Individual rationality ensures that the agents participate voluntarily in the auction, since they are assured of a non-negative payoff by doing so.
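
The sketch below computes the textbook per-click payments under GSP and VCG for a small position auction with slot click-through rates, to make the comparison above concrete. The OPT mechanism of the paper is not reproduced, and the bids and click-through rates are hypothetical.

```python
# Textbook GSP and VCG payment rules for a sponsored-search position auction.
# Bids are per-click values, ctr[k] is the click-through rate of slot k, and
# bidders are assigned to slots in decreasing order of bid.

def gsp_payments(bids, ctr):
    """GSP: the bidder in slot k pays, per click, the next-highest bid."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    pay = {}
    for k in range(min(len(ctr), len(order))):
        pay[order[k]] = bids[order[k + 1]] if k + 1 < len(order) else 0.0
    return pay

def vcg_payments(bids, ctr):
    """VCG: the bidder in slot k pays the externality it imposes on the others,
    i.e. sum over lower slots of (ctr drop) * (displaced bidder's bid), divided
    by its own ctr to express the payment per click."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    K = min(len(ctr), len(order))
    pay = {}
    for k in range(K):
        ext = 0.0
        for j in range(k, K):
            ctr_next = ctr[j + 1] if j + 1 < len(ctr) else 0.0
            bid_next = bids[order[j + 1]] if j + 1 < len(order) else 0.0
            ext += (ctr[j] - ctr_next) * bid_next
        pay[order[k]] = ext / ctr[k]
    return pay

bids = [4.0, 3.0, 2.0, 1.0]     # advertisers' per-click bids (hypothetical)
ctr = [0.3, 0.2, 0.1]           # slot click-through rates (hypothetical)
print("GSP per-click payments:", gsp_payments(bids, ctr))
print("VCG per-click payments:", vcg_payments(bids, ctr))
```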

Relevance: 10.00%

Abstract:

Combining the advanced techniques of optimal dynamic inversion and model-following neuro-adaptive control design, an innovative technique is presented to design an automatic drug administration strategy for effective treatment of chronic myelogenous leukemia (CML). A recently developed nonlinear mathematical model for cell dynamics is used to design the controller (medication dosage). First, a nominal controller is designed based on the principle of optimal dynamic inversion. This controller can effectively treat nominal model patients (patients who can be described by the mathematical model used here with the nominal parameter values). However, since the system parameters for a realistic patient can differ from those of the nominal model, simulation studies for such patients indicate that the nominal controller is either inefficient or, worse, ineffective; i.e., the trajectory of the number of cancer cells either shows unsatisfactory transient behavior or grows in an unstable manner. Hence, to make the drug dosage history more realistic and patient-specific, the nominal controller is augmented with a model-following neuro-adaptive controller. In this adaptive approach, a neural network trained online facilitates the adaptive controller. The training process of the neural network is based on Lyapunov stability theory, which guarantees both stability of the cancer cell dynamics and boundedness of the network weights. Simulation studies show that this adaptive control design approach is very effective in treating CML for realistic patients. Sufficient generality is retained in the mathematical developments so that the technique can be applied to other similar nonlinear control design problems as well.
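
The core dynamic-inversion step can be illustrated on a toy scalar system: the control input is solved for algebraically so that the tracking error follows a prescribed stable dynamics. The system below is a hypothetical placeholder, not the paper's CML cell-dynamics model or its neuro-adaptive augmentation.

```python
# Toy illustration of dynamic inversion on a scalar system x_dot = f(x) + g(x)*u
# (hypothetical f, g; nothing to do with the paper's CML model).  We pick u so
# that the tracking error e = x - x_ref obeys the prescribed dynamics e_dot = -k*e.

f = lambda x: x * (1.0 - x)        # hypothetical drift (logistic growth)
g = lambda x: -x                   # hypothetical control effectiveness
x_ref, k = 0.1, 2.0                # target state and error-decay rate

def control(x):
    """Dynamic inversion: solve f(x) + g(x)*u = -k*(x - x_ref) for u."""
    return (-k * (x - x_ref) - f(x)) / g(x)

x, dt = 0.9, 1e-3                  # initial state, Euler integration step
for _ in range(5000):
    u = control(x)
    x += dt * (f(x) + g(x) * u)
print("state after 5 s:", round(x, 4))   # converges toward x_ref = 0.1
```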

Relevance: 10.00%

Abstract:

We consider systems composed of a base system with multiple “features” or “controllers”, each of which independently advises the system on how to react to input events so as to conform to its individual specification. We propose a methodology for developing such systems in a way that guarantees the “maximal” use of each feature. The methodology is based on the notion of “conflict-tolerant” features, which are designed to continue offering advice even when their advice has been overridden in the past. We give a simple priority-based composition scheme for such features, which ensures that each feature is maximally utilized. We also provide a formal framework for specifying, verifying, and synthesizing such features. In particular, we obtain a compositional technique for verifying systems developed in this framework.
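
A small sketch of the priority-based composition idea: every feature is consulted on every input event, even if its advice was overridden earlier, and the composed controller follows the highest-priority feature that offers advice. The interface and the feature names below are made up for illustration; this is not the paper's formal framework.

```python
from typing import Callable, List, Optional

class Feature:
    """A 'conflict-tolerant' feature keeps producing advice on every event,
    whether or not its earlier advice was followed."""
    def __init__(self, name: str, advise: Callable[[str], Optional[str]]):
        self.name = name
        self.advise = advise        # maps an input event to advice, or None

def composed_controller(features: List[Feature], event: str) -> str:
    """Ask every feature, then follow the highest-priority one with advice
    (features are listed in decreasing priority)."""
    advice = [(f.name, f.advise(event)) for f in features]
    for name, a in advice:
        if a is not None:
            return f"{name}: {a}"
    return "base system default"

# Hypothetical cruise-control style features.
collision_avoidance = Feature("collision_avoidance",
                              lambda e: "brake" if e == "obstacle" else None)
speed_keeper = Feature("speed_keeper",
                       lambda e: "accelerate" if e in ("obstacle", "clear_road") else None)

for event in ["clear_road", "obstacle"]:
    print(event, "->", composed_controller([collision_avoidance, speed_keeper], event))
```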

Relevance: 10.00%

Abstract:

This paper addresses the problem of detecting and resolving conflicts due to timing constraints imposed by features in real-time systems. We consider systems composed of a base system with multiple features or controllers, each of which independently advises the system on how to react to input events so as to conform to its individual specification. We propose a methodology for developing such systems in a modular manner, based on the notion of conflict-tolerant features, which are designed to continue offering advice even when their advice has been overridden in the past. We give a simple priority-based scheme for composing such features, which guarantees the maximal use of each feature. We provide a formal framework for specifying such features, and a compositional technique for verifying systems developed in this framework.

Relevance: 10.00%

Abstract:

We consider the problem of centralized routing and scheduling for IEEE 802.16 mesh networks so as to provide Quality of Service (QoS) to individual real-time and interactive data applications. We first obtain an optimal and fair routing and scheduling policy for the aggregate demands of different source-destination pairs. We then present scheduling algorithms which provide per-flow QoS guarantees while utilizing the network resources efficiently. Our algorithms are also scalable: they do not require per-flow processing and queueing, and their computational requirements are modest. We have verified our algorithms via extensive simulations.

Relevance: 10.00%

Abstract:

This paper addresses the problem of detecting and resolving conflicts due to timing constraints imposed by features in real-time and hybrid systems. We consider systems composed of a base system with multiple features or controllers, each of which independently advises the system on how to react to input events so as to conform to its individual specification. We propose a methodology for developing such systems in a modular manner, based on the notion of conflict-tolerant features, which are designed to continue offering advice even when their advice has been overridden in the past. We give a simple priority-based scheme for composing such features, which guarantees the maximal use of each feature. We provide a formal framework for specifying such features, and a compositional technique for verifying systems developed in this framework.