Abstract:
We propose to compress weighted graphs (networks), motivated by the observation that large networks of social, biological, or other relations can be complex to handle and visualize. In the process also known as graph simplification, nodes and (unweighted) edges are grouped into supernodes and superedges, respectively, to obtain a smaller graph. We propose models and algorithms for weighted graphs. The interpretation (i.e. decompression) of a compressed, weighted graph is that a pair of original nodes is connected by an edge if their supernodes are connected by one, and that the weight of an edge is approximated by the weight of the superedge. The compression problem now consists of choosing supernodes, superedges, and superedge weights so that the approximation error is minimized while the amount of compression is maximized. In this paper, we formulate this task as the 'simple weighted graph compression problem'. We then propose a much wider class of tasks under the name of the 'generalized weighted graph compression problem'. The generalized task extends the optimization to preserve longer-range connectivities between nodes, not just individual edge weights. We study the properties of these problems and propose a range of algorithms to solve them, with different balances between complexity and quality of the result. We evaluate the problems and algorithms experimentally on real networks. The results indicate that weighted graphs can be compressed efficiently with relatively little compression error.
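For illustration, a minimal Python sketch of the decompression rule and the resulting approximation error, assuming a mapping from original nodes to supernodes and a table of superedge weights; the names and the squared-error measure are illustrative choices, not the paper's exact formulation.

```python
# Minimal sketch of the decompression rule for a compressed weighted graph.
# `supernode` maps each original node to its supernode; `super_w` holds
# superedge weights keyed by an unordered pair of supernodes.

def decompressed_weight(u, v, supernode, super_w):
    """Weight assigned to (u, v) after decompression: the weight of the
    superedge joining the two supernodes, or 0 if there is no superedge."""
    return super_w.get(frozenset((supernode[u], supernode[v])), 0.0)

def approximation_error(edges, supernode, super_w):
    """Sum of squared differences between original and decompressed weights."""
    return sum((w - decompressed_weight(u, v, supernode, super_w)) ** 2
               for u, v, w in edges)

# Example: group nodes 1 and 2 into supernode 'A'; the superedge weight is
# the mean of the grouped edge weights, leaving a small residual error.
edges = [(1, 3, 2.0), (2, 3, 2.4)]
supernode = {1: 'A', 2: 'A', 3: 'B'}
super_w = {frozenset(('A', 'B')): 2.2}
print(approximation_error(edges, supernode, super_w))  # 0.08
```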
Abstract:
Three-dimensional clipping is a critical component of the 3D graphics pipeline. A new 3D clipping algorithm is presented in this paper. An efficient 2D clipping routine reported earlier has been used as a submodule. This algorithm uses a new classification scheme for lines of all possible orientations with respect to a rectangular parallelepiped view volume. The performance of this algorithm has been evaluated using exact arithmetic operation counts. It is shown that our algorithm requires fewer arithmetic operations than the Cyrus-Beck 3D clipping algorithm in all cases. It is also shown that for lines that intersect the clipping volume, our algorithm performs better than the Liang-Barsky 3D clipping algorithm.
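For context, a short Python sketch of parametric slab clipping against an axis-aligned box, in the Liang-Barsky / Cyrus-Beck style that serves as the comparison baseline; this is not the paper's new classification-based algorithm.

```python
def clip_segment(p0, p1, box_min, box_max):
    """Clip the segment p0->p1 to the box [box_min, box_max] (3-tuples).
    Returns the clipped endpoints, or None if the segment misses the box."""
    t0, t1 = 0.0, 1.0
    for axis in range(3):
        d = p1[axis] - p0[axis]
        # Two slab planes per axis: the min face and the max face.
        for p, q in ((-d, p0[axis] - box_min[axis]),
                     ( d, box_max[axis] - p0[axis])):
            if p == 0.0:
                if q < 0.0:
                    return None        # parallel to the slab and outside it
            else:
                t = q / p
                if p < 0.0:
                    t0 = max(t0, t)    # potentially entering intersection
                else:
                    t1 = min(t1, t)    # potentially leaving intersection
                if t0 > t1:
                    return None
    lerp = lambda t: tuple(p0[i] + t * (p1[i] - p0[i]) for i in range(3))
    return lerp(t0), lerp(t1)

# A segment crossing the unit cube along x is trimmed to the faces x=0 and x=1.
print(clip_segment((-1, 0.5, 0.5), (2, 0.5, 0.5), (0, 0, 0), (1, 1, 1)))
```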
Abstract:
We consider the problem of optimally scheduling a processor executing a multilayer protocol in an intelligent Network Interface Controller (NIC). In particular, we assume a typical LAN environment with a class 4 transport service, a connectionless network service, and a class 1 link level protocol. We develop a queuing model for the problem. In the most general case this becomes a cyclic queuing network in which some queues have dedicated servers and the others share a common schedulable server. We use sample path arguments and Markov decision theory to determine optimal service schedules. The optimal throughputs are compared with those obtained with simple policies. The optimal policy yields up to 25% improvement in some cases. In some other cases, the optimal policy does only slightly better than much simpler policies.
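To make the scheduling question concrete, here is a toy discrete-time simulation, not the paper's queuing model, comparing the mean backlog under two simple policies for a single schedulable server shared by two protocol-processing queues; the arrival and service probabilities are placeholders.

```python
import random

def simulate(policy, steps=100_000, p_arrival=(0.3, 0.3), p_service=0.8):
    queues = [0, 0]
    backlog = 0
    for t in range(steps):
        for i, p in enumerate(p_arrival):          # packet arrivals
            if random.random() < p:
                queues[i] += 1
        i = policy(queues, t)                      # schedule the shared server
        if queues[i] > 0 and random.random() < p_service:
            queues[i] -= 1
        backlog += sum(queues)
    return backlog / steps                         # mean number of waiting packets

longest_queue = lambda q, t: 0 if q[0] >= q[1] else 1
round_robin   = lambda q, t: t % 2
print(simulate(longest_queue), simulate(round_robin))   # smaller is better
```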
Analyzing Cache Performance Bottlenecks of STM Applications and Addressing Them with the Compiler's Help
Abstract:
Software transactional memory (STM) is a promising programming paradigm for shared memory multithreaded programs as an alternative to traditional lock based synchronization. However, adoption of STM in mainstream software has been quite low due to its considerable overheads and its poor cache/memory performance. In this paper, we perform a detailed study of the cache behavior of STM applications and quantify the impact of different STM factors on the cache misses experienced by the applications. Based on our analysis, we propose a compiler-driven Lock-Data Colocation (LDC) scheme, targeted at reducing the cache overheads of STM. We show that LDC is effective in improving the cache behavior of STM applications by reducing the d-cache miss latency and improving execution time performance.
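A rough illustration of the colocation idea, not the paper's compiler pass: in a split layout the STM lock word lives in a separate table, so a transactional access touches two unrelated cache lines; in a colocated layout the lock word sits next to the field it guards, so both land in the same 64-byte line. The field names and sizes below are hypothetical.

```python
import ctypes

class SplitData(ctypes.Structure):
    _fields_ = [("balance", ctypes.c_int64)]        # lock word kept elsewhere

class ColocatedData(ctypes.Structure):
    _fields_ = [("lock_word", ctypes.c_int64),      # STM ownership record
                ("balance",   ctypes.c_int64)]      # guarded data, adjacent

# 16 bytes total, so both fields fit comfortably in one 64-byte cache line.
print(ctypes.sizeof(ColocatedData))
print(ColocatedData.lock_word.offset, ColocatedData.balance.offset)  # 0 8
```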
Abstract:
In this work, we propose a new organization for the last level shared cache of a multicore system. Our design is based on the observation that the Next-Use distance, measured in terms of intervening misses between the eviction of a line and its next use, for lines brought in by a given delinquent PC falls within a predictable range of values. We exploit this correlation to improve the performance of shared caches in multicore architectures by proposing the NUcache organization.
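A toy software model of the Next-Use idea, assuming per-PC histories of the number of intervening misses between a line's eviction and its reuse; the names are illustrative and this is not the NUcache hardware organization. The prediction could then steer how long lines fetched by a given delinquent PC are retained.

```python
from collections import defaultdict, deque

class NextUsePredictor:
    def __init__(self, history_len=8):
        self.history = defaultdict(lambda: deque(maxlen=history_len))
        self.brought_by = {}      # line -> delinquent PC that fetched it
        self.evicted_at = {}      # line -> global miss count at its eviction
        self.miss_count = 0

    def on_miss(self, pc, line):
        """A demand miss on `line` caused by the instruction at `pc`."""
        self.miss_count += 1
        if line in self.evicted_at:
            # The line is reused after eviction: record its Next-Use distance
            # against the PC that originally brought it in.
            dist = self.miss_count - self.evicted_at.pop(line)
            self.history[self.brought_by[line]].append(dist)
        self.brought_by[line] = pc

    def on_evict(self, line):
        self.evicted_at[line] = self.miss_count

    def predicted_distance(self, pc):
        h = self.history[pc]
        return sum(h) / len(h) if h else None
```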
Abstract:
Channel assignment in multi-channel multi-radio wireless networks poses a significant challenge due to the scarcity of channels available in the wireless spectrum. Further, additional care has to be taken to account for the interference characteristics of the nodes in the network, especially when nodes are in different collision domains. This work views the problem of channel assignment in multi-channel multi-radio networks with multiple collision domains as a non-cooperative game in which the objective of each player is to maximize its individual utility by minimizing its interference. Necessary and sufficient conditions are derived for a channel assignment to be a Nash Equilibrium (NE), and the efficiency of the NE is analyzed by deriving a lower bound on the price of anarchy of this game. A new fairness measure in the multiple collision domain context is proposed, and necessary and sufficient conditions for NE outcomes to be fair are derived. The equilibrium conditions are then applied to solve the channel assignment problem through three algorithms, based on perfect/imperfect information, which rely on explicit communication between the players for arriving at an NE. A no-regret learning algorithm, the Freund and Schapire Informed algorithm, which has the additional advantage of low overhead in terms of information exchange, is proposed and its convergence to stabilizing outcomes is studied. New performance metrics are proposed and extensive simulations are carried out in MATLAB to obtain a thorough understanding of the performance of these algorithms on various topologies with respect to these metrics. It was observed that the proposed algorithms achieve good convergence to an NE, resulting in efficient channel assignment strategies.
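As a sketch of the no-regret component, here is a generic full-information multiplicative-weights (Hedge-style) update of the kind underlying the Freund and Schapire algorithm; the utility model and parameters are illustrative, not the paper's exact formulation.

```python
import math, random

def hedge_channel_selection(num_channels, utility_of, rounds=1000, eta=0.1):
    """utility_of(c) is the payoff channel c would have yielded this round,
    e.g. 1 minus the normalized interference observed on it."""
    weights = [1.0] * num_channels
    for _ in range(rounds):
        total = sum(weights)
        probs = [w / total for w in weights]
        # The radio plays a channel drawn from its current mixed strategy;
        # in the full game, the other players' payoffs depend on this choice.
        played = random.choices(range(num_channels), weights=probs)[0]
        # Informed update: every channel's weight grows exponentially with
        # the payoff it would have obtained.
        for c in range(num_channels):
            weights[c] *= math.exp(eta * utility_of(c))
    total = sum(weights)
    return [w / total for w in weights]

# Example with a fixed (hypothetical) interference level per channel: the
# strategy concentrates on channel 2, which suffers the least interference.
interference = [0.7, 0.5, 0.1]
print(hedge_channel_selection(3, lambda c: 1.0 - interference[c]))
```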
Abstract:
We develop four algorithms for simulation-based optimization under multiple inequality constraints. Both the cost and the constraint functions are considered to be long-run averages of certain state-dependent single-stage functions. We pose the problem in the simulation optimization framework by using the Lagrange multiplier method. Two of our algorithms estimate only the gradient of the Lagrangian, while the other two estimate both its gradient and its Hessian. In the process, we also develop various new estimators for the gradient and Hessian. All our algorithms use two simulations each. Two of these algorithms are based on the smoothed functional (SF) technique, while the other two are based on the simultaneous perturbation stochastic approximation (SPSA) method. We prove the convergence of our algorithms and show numerical experiments on a setting involving an open Jackson network. The Newton-based SF algorithm is seen to show the best overall performance.
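A minimal sketch of a two-simulation SPSA gradient estimate applied to a Lagrangian of the form cost + lambda * constraint; the step sizes, example functions, and the fixed multiplier are placeholders rather than the paper's setting, which also updates the Lagrange multiplier and develops Hessian estimates.

```python
import random

def spsa_gradient(L, theta, delta=0.1):
    """Estimate grad L(theta) from two perturbed simulations."""
    perturb = [random.choice((-1.0, 1.0)) for _ in theta]     # Rademacher
    plus  = [t + delta * p for t, p in zip(theta, perturb)]
    minus = [t - delta * p for t, p in zip(theta, perturb)]
    diff = L(plus) - L(minus)
    return [diff / (2.0 * delta * p) for p in perturb]

def constrained_descent(cost, constraint, theta, steps=500, a=0.02, lam=2.0):
    L = lambda th: cost(th) + lam * constraint(th)      # fixed-multiplier Lagrangian
    for _ in range(steps):
        g = spsa_gradient(L, theta)
        theta = [t - a * gi for t, gi in zip(theta, g)]
    return theta

# Example: minimize ||theta||^2 with a soft constraint sum(theta) >= 1.
theta = constrained_descent(cost=lambda th: sum(t * t for t in th),
                            constraint=lambda th: max(0.0, 1.0 - sum(th)),
                            theta=[0.0, 0.0])
print(theta)      # drifts toward the constraint boundary, near (0.5, 0.5)
```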
Abstract:
In the area of testing communication systems, the interfaces between the systems to be tested and their testers have a great impact on test generation and fault detectability. Several types of such interfaces have been standardized by the International Organization for Standardization (ISO). A general distributed test architecture, containing distributed interfaces, has been presented in the literature for testing distributed systems based on the Open Distributed Processing (ODP) Basic Reference Model (BRM); it is a generalized version of the ISO distributed test architecture. We study in this paper the issue of test selection with respect to such a test architecture. In particular, we consider communication systems that can be modeled by finite state machines with several distributed interfaces, called ports. A test generation method is developed for generating test sequences for such finite state machines, based on the idea of synchronizable test sequences. Starting from the initial effort by Sarikaya, a certain amount of work has been done on generating test sequences for finite state machines with respect to the ISO distributed test architecture, all based on the idea of modifying existing test generation methods to produce synchronizable test sequences. However, none of these works studies the fault coverage provided by its method. We investigate the issue of fault coverage and point out that the methods given in the literature for the distributed test architecture cannot ensure the same fault coverage as the corresponding original testing methods. We also study the limitations of fault detectability in the distributed test architecture.
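A small sketch of the synchronizability check commonly used for multi-port test sequences: consecutive transitions cause no synchronization problem if the tester supplying the next input took part in the previous transition, either by giving its input or by receiving one of its outputs. The transition encoding here is an illustrative simplification.

```python
def is_synchronizable(sequence):
    """sequence: list of transitions (input_port, set_of_output_ports)."""
    for prev, curr in zip(sequence, sequence[1:]):
        prev_in, prev_outs = prev
        curr_in, _ = curr
        if curr_in != prev_in and curr_in not in prev_outs:
            return False          # that tester cannot know when to send
    return True

# Port 2 observes an output of the first transition, so it may apply the next input.
print(is_synchronizable([(1, {1, 2}), (2, {2})]))   # True
print(is_synchronizable([(1, {1}),    (2, {2})]))   # False
```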
Abstract:
Within two months of the first report on quasicrystals in PRL in November 1984, Indian research in this area, which had seen a 'premature discovery' in 1978, got under way. In the past nine years these efforts have led to original discoveries relating to new types of quasicrystalline phases as well as extensive investigations involving tiling theory, hyperspace, positron annihilation and electrical properties. These researches have been multi-institutional and multi-disciplinary. Enlightened and generous funding was extended by DST from 1986 by recognizing it as a thrust area in basic research via SERC and US-India Funds. International recognition, subjective though it is, in the form of citation of Indian papers, invited lectures and reviews, books, as well as membership of the International Advisory Committee has followed and is among the highest in the fields of condensed matter science covered at the Bangalore meeting. Future directions pertaining to the exploration of mechanical and electronic properties as well as structures beyond the quasicrystalline order will be pointed out.
Abstract:
Recently, it was found that a reduction in atmospheric CO2 concentration leads to a temporary increase in global precipitation. We use the Hadley Center coupled atmosphere-ocean model, HadCM3L, to demonstrate that this precipitation increase is a consequence of precipitation sensitivity to changes in atmospheric CO2 concentrations through fast tropospheric adjustment processes. Slow ocean cooling explains the longer-term decrease in precipitation. Increased CO2 tends to suppress evaporation/precipitation whereas increased temperatures tend to increase evaporation/precipitation. When the enhanced CO2 forcing is removed, global precipitation increases temporarily, but this increase is not observed when a similar negative radiative forcing is applied as a reduction of solar intensity. Therefore, transient precipitation increase following a reduction in CO2-radiative forcing is a consequence of the specific character of CO2 forcing and is not a general feature associated with decreases in radiative forcing. Citation: Cao, L., G. Bala, and K. Caldeira (2011), Why is there a short-term increase in global precipitation in response to diminished CO2 forcing?, Geophys. Res. Lett., 38, L06703, doi:10.1029/2011GL046713.
Abstract:
An (alpha, beta)-spanner of an unweighted graph G is a subgraph H that distorts distances in G by at most a multiplicative factor of alpha and an additive term beta. It is well known that any graph contains a (multiplicative) (2k-1, 0)-spanner of size O(n^(1+1/k)) and an (additive) (1, 2)-spanner of size O(n^(3/2)). However, no other additive spanners are known to exist. In this article we develop a couple of new techniques for constructing (alpha, beta)-spanners. Our first result is an additive (1, 6)-spanner of size O(n^(4/3)). The construction algorithm can be understood as an economical agent that assigns costs and values to paths in the graph, purchasing affordable paths and ignoring expensive ones, which are intuitively well approximated by paths already purchased. We show that this path-buying algorithm can be parameterized in different ways to yield other sparseness-distortion tradeoffs. Our second result addresses the question of which (alpha, beta)-spanners can be computed efficiently, ideally in linear time. We show that, for any k, a (k, k-1)-spanner with size O(kn^(1+1/k)) can be found in linear time, and, further, that in a distributed network the algorithm terminates in a constant number of rounds. Previous spanner constructions with similar performance had roughly twice the multiplicative distortion.
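For background, a sketch of the classical greedy construction of a (2k-1, 0)-spanner for an unweighted graph, which is the well-known baseline the abstract refers to, not the paper's path-buying algorithm: an edge is kept only if the spanner does not already connect its endpoints within distance 2k-1.

```python
from collections import deque

def bfs_distance(adj, src, dst, limit):
    """Shortest-path distance from src to dst in adj, or None if it exceeds limit."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return dist[u]
        if dist[u] == limit:
            continue
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None

def greedy_spanner(nodes, edges, k):
    adj = {u: set() for u in nodes}
    spanner = []
    for u, v in edges:
        if bfs_distance(adj, u, v, 2 * k - 1) is None:   # no short path yet
            spanner.append((u, v))
            adj[u].add(v)
            adj[v].add(u)
    return spanner

# Example: with k=2 the chords of the 4-cycle are dropped, the tree edges kept.
print(greedy_spanner(range(4), [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)], k=2))
```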
Abstract:
This paper presents novel algorithms and applications for a particular class of mixed-norm regularization based Multiple Kernel Learning (MKL) formulations. The formulations assume that the given kernels are grouped and employ l_1 norm regularization for promoting sparsity within the RKHS norms of each group and l_s, s >= 2, norm regularization for promoting non-sparse combinations across groups. Various sparsity levels in combining the kernels can be achieved by varying the grouping of kernels; hence we name the formulations Variable Sparsity Kernel Learning (VSKL) formulations. While previous attempts have a non-convex formulation, here we present a convex formulation which admits efficient Mirror-Descent (MD) based solving techniques. The proposed MD based algorithm optimizes over a product of simplices and has a computational complexity of O(m^2 n_tot log(n_max) / epsilon^2), where m is the number of training data points, n_max and n_tot are the maximum number of kernels in any group and the total number of kernels, respectively, and epsilon is the error in approximating the objective. A detailed proof of convergence of the algorithm is also presented. Experimental results show that the VSKL formulations are well suited for multi-modal learning tasks like object categorization. Results also show that the MD based algorithm outperforms state-of-the-art MKL solvers in terms of computational efficiency.
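As an illustration of the building block, a generic entropic mirror-descent (exponentiated-gradient) update on a probability simplex, the kind of step that solvers optimizing over products of simplices rely on; this is a sketch under simplified assumptions, not the VSKL solver itself.

```python
import math

def md_simplex_step(x, grad, eta):
    """One mirror-descent step on the simplex with step size eta."""
    y = [xi * math.exp(-eta * gi) for xi, gi in zip(x, grad)]
    s = sum(y)
    return [yi / s for yi in y]

def minimize_on_simplex(grad_fn, dim, steps=500, eta=0.1):
    x = [1.0 / dim] * dim                     # start from the uniform point
    for _ in range(steps):
        x = md_simplex_step(x, grad_fn(x), eta)
    return x

# Example: minimize sum_i c_i * x_i^2 over the simplex; mass shifts toward
# coordinates with smaller c_i.
c = [3.0, 1.0, 2.0]
print(minimize_on_simplex(lambda x: [2 * ci * xi for ci, xi in zip(c, x)], 3))
```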