160 results for Traffic Conflict Techniques
at Indian Institute of Science - Bangalore - India
Abstract:
In recent years, there has been an upsurge of research interest in cooperative wireless communications in both academia and industry. This article presents a simple overview of the pivotal topics in both mobile station (MS)- and base station (BS)-assisted cooperation in the context of cellular radio systems. Owing to the ever-increasing amount of literature in this particular field, this article is by no means exhaustive, but is intended to serve as a roadmap by assembling a representative sample of recent results and to stimulate further research. The emphasis is initially on relay-based cooperation relying on network coding, followed by the design of cross-layer cooperative protocols conceived for MS cooperation and the concept of coalition network element (CNE)-assisted BS cooperation. Then, a range of complexity and backhaul traffic reduction techniques that have been proposed for BS cooperation are reviewed. A more detailed discussion is provided in the context of MS cooperation concerning the pros and cons of dispensing with high-complexity, power-hungry channel estimation. Finally, generalized design guidelines conceived for cooperative wireless communications are presented.
Abstract:
Prediction of variable bit rate compressed video traffic is critical to dynamic allocation of resources in a network. In this paper, we propose a technique for preprocessing the dataset used for training a video traffic predictor. The technique involves identifying the noisy instances in the data using a fuzzy inference system. We focus on three prediction techniques, namely, linear regression, neural networks and support vector regression, and analyze their performance on H.264 video traces. Our experimental results reveal that data preprocessing greatly improves the performance of linear regression and the neural network, but is not effective on support vector regression.
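To make the preprocessing idea concrete, here is a minimal sketch in the spirit of the abstract: train a predictor, flag suspect training instances, retrain on the cleaned set, and compare. The paper's fuzzy inference system is replaced here by a simple residual-threshold filter, and the frame-size trace, window length and model settings are illustrative assumptions, not the paper's setup.

```python
# Sketch of the train-filter-retrain idea: a plain residual-threshold
# filter stands in for the paper's fuzzy inference system, and the
# frame-size trace is synthetic rather than a real H.264 trace.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

rng = np.random.default_rng(0)
trace = 1000 + 300 * np.sin(np.arange(2000) / 12.0) + rng.normal(0, 60, 2000)
trace[rng.choice(2000, 40, replace=False)] += 800   # inject "noisy" frames

def windows(x, p=8):
    """Past p frame sizes -> next frame size."""
    X = np.stack([x[i:i + p] for i in range(len(x) - p)])
    return X, x[p:]

X, y = windows(trace)
lin = LinearRegression().fit(X, y)

# Stand-in for the fuzzy inference system: drop training instances
# whose residual under a first-pass fit is implausibly large.
resid = np.abs(y - lin.predict(X))
keep = resid < 3 * resid.std()

for name, model in [("linear", LinearRegression()), ("svr", SVR(C=10.0))]:
    raw = model.fit(X, y).predict(X)
    filt = model.fit(X[keep], y[keep]).predict(X)
    print(name,
          "raw MSE %.0f" % np.mean((raw - y) ** 2),
          "filtered MSE %.0f" % np.mean((filt - y) ** 2))
```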
Abstract:
The random early detection (RED) technique has seen a lot of research over the years. However, the functional relationship between RED performance and its parameters, viz., queue weight (w_q), marking probability (max_p), minimum threshold (min_th) and maximum threshold (max_th), is not analytically available. In this paper, we formulate a probabilistic constrained optimization problem by assuming a nonlinear relationship between the RED average queue length and its parameters. This problem involves all the RED parameters as the variables of the optimization problem. We use the barrier and the penalty function approaches for its solution. However, as above, the exact functional relationship between the barrier and penalty objective functions and the optimization variables is not known, but noisy samples of these are available for different parameter values. Thus, for obtaining the gradient and Hessian of the objective, we use certain recently developed simultaneous perturbation stochastic approximation (SPSA) based estimates of these. We propose two four-timescale stochastic approximation algorithms based on certain modified second-order SPSA updates for finding the optimum RED parameters. We present the results of detailed simulation experiments conducted over different network topologies and network/traffic conditions/settings, comparing the performance of our algorithms with variants of RED and a few other well-known active queue management (AQM) techniques discussed in the literature.
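The key enabler named in the abstract is SPSA, which estimates a full gradient from just two noisy function evaluations. Below is a minimal single-timescale, first-order SPSA sketch for tuning RED-style parameters; the paper's four-timescale second-order algorithms are not reproduced, and the noisy quadratic (with made-up `target` optimum) stands in for the real barrier/penalty objective, whose samples would come from a network simulation.

```python
# Minimal first-order SPSA sketch for tuning (w_q, max_p, min_th, max_th).
# The objective below is a noisy quadratic stand-in, not a RED simulation.
import numpy as np

rng = np.random.default_rng(1)
target = np.array([0.002, 0.1, 5.0, 15.0])        # hypothetical optimum

def noisy_cost(theta):
    """Stand-in objective; only noisy samples are assumed available."""
    return np.sum((theta - target) ** 2) + rng.normal(0, 0.1)

theta = np.array([0.01, 0.5, 2.0, 30.0])           # w_q, max_p, min_th, max_th
for k in range(2000):
    a = 1.0 / (k + 20)                             # decaying step size
    c = 0.5 / (k + 1) ** 0.25                      # decaying perturbation size
    delta = rng.choice([-1.0, 1.0], size=4)        # Rademacher perturbation
    # Two measurements estimate the whole 4-dim gradient at once.
    g = (noisy_cost(theta + c * delta) - noisy_cost(theta - c * delta)) / (2 * c * delta)
    theta -= a * g
print(np.round(theta, 3))                          # should drift toward target
```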
Abstract:
Remote sensing provides a lucid and effective means for crop coverage identification. Crop coverage identification is a very important technique, as it provides vital information on the type and extent of crop cultivated in a particular area. This information has immense potential in planning for further cultivation activities and for optimal usage of the available fertile land. As the frontiers of space technology advance, the knowledge derived from satellite data has also grown in sophistication. Further, image classification forms the core of the solution to the crop coverage identification problem. No single classifier can satisfactorily solve all the basic crop cover mapping problems of a cultivated region. We present in this paper the experimental results of multiple classification techniques for the problem of crop cover mapping of a cultivated region. A detailed comparison of algorithms inspired by the social behaviour of insects with a conventional statistical method for crop classification is presented: the Maximum Likelihood Classifier (MLC), Particle Swarm Optimisation (PSO) and Ant Colony Optimisation (ACO) techniques. High-resolution satellite imagery has been used for the experiments.
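As a concrete reference point, here is a minimal sketch of the conventional statistical baseline named in the abstract, the Maximum Likelihood Classifier: each crop class is modelled as a multivariate Gaussian over pixel spectra, and a pixel is assigned to the class with highest log-likelihood. The band values, class count and training pixels are synthetic stand-ins for real satellite data.

```python
# Gaussian maximum likelihood classification of pixel spectra.
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical training pixels: 3 crop classes, 4 spectral bands.
means = np.array([[0.2, 0.4, 0.3, 0.6],
                  [0.5, 0.3, 0.6, 0.2],
                  [0.7, 0.7, 0.2, 0.4]])
train = {c: means[c] + 0.05 * rng.normal(size=(200, 4)) for c in range(3)}

# Fit a per-class Gaussian: sample mean and covariance.
params = {}
for c, X in train.items():
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    params[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])

def classify(pixel):
    def loglik(c):
        mu, cov_inv, logdet = params[c]
        d = pixel - mu
        return -0.5 * (logdet + d @ cov_inv @ d)   # constant term dropped
    return max(range(3), key=loglik)

print(classify(np.array([0.5, 0.3, 0.6, 0.2])))    # expect class 1
```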
Abstract:
Studies of valence bands and core levels of solids by photoelectron spectroscopy are described at length. Satellite phenomena in the core level spectra have been discussed in some detail and it has been pointed out that the intensity of satellites appearing next to metal and ligand core levels critically depends on the metal-ligand overlap. Use of photoelectron spectroscopy in investigating metal-insulator transitions and spin-state transitions in solids is examined. It is shown that relative intensities of metal Auger lines in transition metal oxides and other systems provide valuable information on the valence bands. Occurrence of interatomic Auger transitions in competition with intraatomic transitions is discussed. Applications of electron energy loss spectroscopy and other techniques of electron spectroscopy in the study of gas-solid interactions are briefly presented.
Abstract:
In this article, several basic swarming laws for Unmanned Aerial Vehicles (UAVs) are developed for both the two-dimensional (2D) plane and three-dimensional (3D) space. The effects of these basic laws on the group behaviour of swarms of UAVs are studied. It is shown that when the cohesion rule is applied, an equilibrium condition is reached in which all the UAVs settle at the same altitude on a circle of constant radius. It is also proved analytically that this equilibrium condition is stable for all values of velocity and acceleration. A decentralised autonomous decision-making approach that achieves collision avoidance without any central authority is also proposed in this article. Algorithms are developed with the help of these swarming laws for two types of collision avoidance, Group-wise and Individual, in the 2D plane and 3D space. The effects of various parameters on both types of collision avoidance schemes are studied through extensive simulations.
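A minimal sketch of a cohesion rule of the kind studied here: each UAV accelerates toward the swarm centroid, with its speed capped. The gains, time step and speed limit are illustrative assumptions, not the article's exact laws, and the sketch does not reproduce its stability proof.

```python
# Cohesion rule sketch: accelerate toward the swarm centroid.
import numpy as np

rng = np.random.default_rng(3)
pos = rng.uniform(-50, 50, size=(10, 3))    # 10 UAVs in 3D space
vel = np.zeros((10, 3))
k_coh, v_max, dt = 0.5, 5.0, 0.1            # illustrative gain, cap, step

for _ in range(2000):
    centroid = pos.mean(axis=0)
    acc = k_coh * (centroid - pos)          # cohesion: steer toward centroid
    vel = vel + dt * acc
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = np.where(speed > v_max, vel * v_max / speed, vel)  # cap speed
    pos = pos + dt * vel

# Distances to the centroid collapse from the initial spread to a small
# bounded band; the article proves the stronger result that the UAVs
# settle on a circle of constant radius at a common altitude.
print(np.round(np.linalg.norm(pos - pos.mean(axis=0), axis=1), 1))
```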
Abstract:
Lateral or transaxial truncation of cone-beam data can occur either due to the field-of-view limitation of the scanning apparatus or due to region-of-interest tomography. In this paper, we suggest two new methods to handle lateral truncation in helical scan CT. It is seen that reconstruction with laterally truncated projection data, assuming it to be complete, gives severe artifacts which even penetrate into the field of view. A row-by-row data completion approach using linear prediction is introduced for helical scan truncated data. An extension of this technique, known as the windowed linear prediction approach, is also introduced. The efficacy of the two techniques is shown using simulations with standard phantoms. A quantitative image quality measure of the resulting reconstructed images is used to evaluate the performance of the proposed methods against an extension of a standard existing technique.
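The row-by-row completion idea can be illustrated with a short sketch: fit an autoregressive (linear) predictor to the interior of a detector row and extrapolate the truncated samples, instead of reconstructing from data assumed complete. The synthetic row and AR order are illustrative assumptions; the windowed variant is not shown.

```python
# Row completion by linear prediction on a synthetic detector row.
import numpy as np

def ar_coeffs(x, p):
    """Least-squares fit of an order-p autoregressive predictor."""
    X = np.stack([x[i:i + p] for i in range(len(x) - p)])
    y = x[p:]
    return np.linalg.lstsq(X, y, rcond=None)[0]

def extend_row(row, n_missing, p=10):
    """Extrapolate n_missing truncated samples past the row's edge."""
    a = ar_coeffs(row, p)
    out = list(row)
    for _ in range(n_missing):
        out.append(float(np.dot(a, out[-p:])))
    return np.array(out)

# Synthetic projection row (a damped oscillation, which an AR model
# captures well), truncated on the right.
t = np.linspace(0, 1, 200)
full = np.exp(-2 * t) * np.cos(8 * np.pi * t)
truncated = full[:150]
completed = extend_row(truncated, 50)
print("max completion error: %.2e" % np.abs(completed[150:] - full[150:]).max())
```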
Abstract:
In this paper we study two problems in feedback stabilization. The first is the simultaneous stabilization problem, which can be stated as follows: given plants G_0, G_1, ..., G_l, does there exist a single compensator C that stabilizes all of them? The second is that of stabilization by a stable compensator, or more generally, a "least unstable" compensator: given a plant G, we would like to know whether or not there exists a stable compensator C that stabilizes G; if not, what is the smallest number of right half-plane poles (counted according to their McMillan degree) that any stabilizing compensator must have? We show that the two problems are equivalent in the following sense. The problem of simultaneously stabilizing l + 1 plants can be reduced to the problem of simultaneously stabilizing l plants using a stable compensator, which in turn can be stated as the following purely algebraic problem: given 2l matrices A_1, ..., A_l, B_1, ..., B_l, where A_i, B_i are right-coprime for all i, does there exist a matrix M such that A_i + M B_i is unimodular for all i? Conversely, the problem of simultaneously stabilizing l plants using a stable compensator can be formulated as one of simultaneously stabilizing l + 1 plants. The problem of determining whether or not there exists an M such that A + BM is unimodular, given a right-coprime pair (A, B), turns out to be a special case of a question concerning a matrix division algorithm in a proper Euclidean domain. We give an answer to this question, and we believe this result might be of some independent interest. We show that, given two n × m plants G_0 and G_1, we can generically stabilize them simultaneously provided either n or m is greater than one. In contrast, simultaneous stabilizability of two single-input-single-output plants, g_0 and g_1, is not generic.
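For readability, the abstract's chain of reductions can be restated in display form (notation as above):

```latex
% The abstract's chain of reductions, in display form.
\begin{align*}
  &\text{simultaneously stabilize } G_0, G_1, \ldots, G_l \text{ by one compensator } C \\
  \iff\; &\text{simultaneously stabilize } l \text{ plants by a \emph{stable} compensator} \\
  \iff\; &\exists\, M \;\text{such that}\; A_i + M B_i \text{ is unimodular for } i = 1, \ldots, l,
\end{align*}
```

where the (A_i, B_i) are the right-coprime matrix pairs obtained from the plants.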
Abstract:
In this paper we develop compilation techniques for the realization of applications described in a High Level Language (HLL) onto a Runtime Reconfigurable Architecture. The compiler determines Hyper Operations (HyperOps) that are subgraphs of a data flow graph (of an application) and comprise elementary operations that have a strong producer-consumer relationship. These HyperOps are hosted on computation structures that are provisioned on demand at runtime. We also report compiler optimizations that collectively reduce the overheads of data-driven computations in runtime reconfigurable architectures. On average, HyperOps offer a 44% reduction in total execution time and an 18% reduction in management overheads as compared to using basic blocks as coarse grained operations. We show that HyperOps formed using our compiler are suitable to support data flow software pipelining.
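A toy sketch of the HyperOp-formation idea as the abstract describes it: cluster the operations of a data flow graph along strong producer-consumer edges. The edge weights, threshold and union-find merging below are stand-ins for the compiler's actual partitioning heuristics, which the abstract does not detail.

```python
# Group dataflow operations into HyperOp-like clusters by merging
# along heavily-weighted producer-consumer edges.

def form_hyperops(n_ops, edges, threshold):
    """edges: (producer, consumer, weight); weight = values communicated."""
    parent = list(range(n_ops))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if w >= threshold:                  # strong producer-consumer link
            parent[find(u)] = find(v)       # merge into one cluster

    groups = {}
    for op in range(n_ops):
        groups.setdefault(find(op), []).append(op)
    return list(groups.values())

# Tiny example: ops 0-1-2 communicate heavily, 3-4 heavily, 2-3 barely.
edges = [(0, 1, 8), (1, 2, 6), (2, 3, 1), (3, 4, 7)]
print(form_hyperops(5, edges, threshold=4))   # -> [[0, 1, 2], [3, 4]]
```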
Abstract:
Sets of multivalued dependencies (MVDs) having conflict-free covers are important to the theory and design of relational databases [2,12,15,16]. Their desirable properties motivate the problem of testing a set M of MVDs for the existence of a conflict-free cover. In [8], Goodman and Tay have proposed an approach based on the possible equivalence of M to a single (acyclic) join dependency (JD). We remark that their characterization does not lend an insight into the nature of such sets of MVDs. Here, we use notions that are intrinsic to MVDs to develop a new characterization. Our approach proceeds in two stages. In the first stage, we use the notion of “split-free” sets of MVDs and obtain a characterization of sets M of MVDs having split-free covers. In the second, we use the notion of “intersection” of MVDs to arrive at a necessary and sufficient condition for a split-free set of MVDs to be conflict-free. Based on our characterizations, we also give polynomial-time algorithms for testing whether M has split-free and conflict-free covers. The highlight of our approach is the clear insight it provides into the nature of sets of MVDs having conflict-free covers. Less emphasis is given in this paper to the actual efficiency of the algorithms. Finally, as a bonus, we derive a desirable property of split-free sets of MVDs, thereby showing that they are interesting in their own right.
Abstract:
Multi-access techniques are widely used in computer networking and distributed multiprocessor systems. On-the-fly arbitration schemes permit one of the many contenders to access the medium without collisions. Serial arbitration is cost effective but is slow and hence unsuitable for high-speed multiprocessor environments supporting very high data transfer rates. A fully parallel arbitration scheme takes less time but is not practically realisable for large numbers of contenders. In this paper, a generalised parallel-serial scheme is proposed which significantly reduces the arbitration time and is practically realisable.
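A hedged sketch of a generic two-level parallel-serial arbiter of the kind the abstract generalises: groups of contenders are polled serially, and a parallel priority encoder resolves contention within the winning group. The group size and fixed priority order are illustrative; the paper's generalised scheme is not reproduced here.

```python
# Two-level parallel-serial arbitration over grouped contenders.

def arbitrate(requests, group_size):
    """requests: list of booleans, one per contender. Returns winner id."""
    n_groups = (len(requests) + group_size - 1) // group_size
    for g in range(n_groups):                  # serial scan over groups
        lo = g * group_size
        group = requests[lo:lo + group_size]
        if any(group):                         # parallel OR within the group
            return lo + group.index(True)      # parallel priority encoding
    return None                                # no contender requesting

reqs = [False] * 16
reqs[11] = reqs[14] = True
# 16 contenders in 4 groups of 4: three serial group polls (g = 0, 1, 2)
# plus one parallel step, instead of up to 16 serial contender polls.
print(arbitrate(reqs, group_size=4))           # -> 11
```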
Abstract:
The network scenario is that of an infrastructure IEEE 802.11 WLAN with a single AP with which several stations (STAs) are associated. The AP has a finite size buffer for storing packets. In this scenario, we consider TCP controlled upload and download file transfers between the STAs and a server on the wireline LAN (e.g., 100 Mbps Ethernet) to which the AP is connected. In such a situation, it is known (see, for example, [3], [9]) that, because of packet loss due to finite buffers at the AP, upload file transfers obtain larger throughputs than download transfers. We provide an analytical model for estimating the upload and download throughputs as a function of the buffer size at the AP. We provide models for the undelayed and delayed ACK cases for a TCP that performs loss recovery only by timeout, and also for TCP Reno.
Abstract:
In our earlier work [1] we proposed WLAN Manager (WM), a centralised controller for QoS management of infrastructure WLANs based on the IEEE 802.11 DCF standards. The WM approach is based on queueing and scheduling packets in a device that sits in the path of all traffic flowing between the APs and the wireline LAN; it requires no changes to the AP or the STAs, and can be viewed as implementing a "Split-MAC" architecture. The objectives of WM were to manage various TCP performance related issues (such as the throughput "anomaly" when STAs associate with an AP at mixed PHY rates, and the upload-download unfairness induced by finite AP buffers), and also to serve as the controller for VoIP admission control and handovers, and for other QoS management measures. In this paper we report our experiences in implementing the proposals in [1]: the insights gained, new control techniques developed, and the effectiveness of the WM approach in managing TCP performance in an infrastructure WLAN. We report results from a hybrid experiment where a physical WM manages actual TCP controlled packet flows between a server and clients, with the WLAN being simulated, and also from a small physical testbed with an actual AP.
Abstract:
The heat capacity of a substance is related to the structure and constitution of the material, and its measurement is a standard technique of physical investigation. In this review, the classical methods are first analyzed briefly and their recent extensions are summarized. The merits and demerits of these methods are pointed out. The newer techniques, such as the a.c. method, the relaxation method, pulse methods and laser flash calorimetry, developed to extend heat capacity measurements to newer classes of materials and to extreme conditions of sample geometry, pressure and temperature, are comprehensively reviewed. Examples of recent work and details of the experimental systems are provided for each method. The introduction of automation in control systems for the monitoring of the experiments and for data processing is also discussed. Two hundred and eight references and 18 figures are used to illustrate the various techniques.
Abstract:
Three new procedures for the extrapolation of series coefficients from a given power series expansion are proposed. They are based on (i) a novel resummation identity, (ii) a parametrised Euler transformation (PET) and (iii) a modified PET. Several examples taken from the Ising model series expansions, ferrimagnetic systems, etc., are illustrated. Apart from these applications, the higher order virial coefficients for hard spheres and hard discs have also been evaluated using the new techniques, and these are compared with the estimates obtained by other methods. A satisfactory agreement is revealed between the two.
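For orientation, technique (ii) generalises the classical (unparametrised) Euler transformation, sketched below for an alternating series: with the difference operator D a_n = a_n - a_{n+1}, one has sum_{n>=0} (-1)^n a_n = sum_{k>=0} (D^k a)_0 / 2^{k+1}, which typically converges far faster. The parametrisation and the paper's modified PET are not reproduced here.

```python
# Classical Euler transformation: with D a_n = a_n - a_{n+1},
#   sum_{n>=0} (-1)^n a_n = sum_{k>=0} (D^k a)_0 / 2^{k+1}.
# Demonstrated on 1 - 1/2 + 1/3 - ... = ln 2.
import math

def euler_transform(a, n_terms):
    """a: callable n -> a_n. Transformed partial sum with n_terms terms."""
    row = [a(n) for n in range(n_terms)]
    total = 0.0
    for k in range(n_terms):
        total += row[0] / 2.0 ** (k + 1)               # (D^k a)_0 / 2^{k+1}
        row = [row[i] - row[i + 1] for i in range(len(row) - 1)]
    return total

direct = sum((-1) ** n / (n + 1) for n in range(20))
transformed = euler_transform(lambda n: 1.0 / (n + 1), 20)
# 20 transformed terms (~1e-8 error) beat 20 raw terms (~2.5e-2 error).
print(transformed - math.log(2), direct - math.log(2))
```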