991 results for DL-PCBs


Relevance:

10.00%

Publisher:

Abstract:

The idea of ubiquity and seamless connectivity in networks has been gaining importance recently because of the emergence of mobile devices with added capabilities such as multiple interfaces and greater processing power. The success of ubiquitous applications depends on how effectively the user is provided with seamless connectivity. In a ubiquitous application, seamless connectivity encompasses the smooth migration of a user between networks and the automatic provision of context-based information at all times. In this work, we propose a seamless connectivity scheme, in the true sense of ubiquitous networks, that provides a user with smooth migration and automatically delivers information based on his/her context without re-registration with the foreign network. The scheme uses Ubi-SubSystems (USS) and Soft-Switches (SS) to maintain the ubiquitous application resources and the users. The scheme has been tested on a ubiquitous touring system with several sets of tourist spots and users.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we consider the problem of selecting, for any given positive integer k, the top-k nodes in a social network, based on a measure appropriate for the network. This problem is relevant in many settings, such as analysis of co-authorship networks, diffusion of information, and viral marketing. However, in most situations, this problem turns out to be NP-hard. The existing approaches for solving it are based on approximation algorithms and assume that the objective function is sub-modular. In this paper, we propose a novel and intuitive algorithm, based on the Shapley value, for efficiently computing an approximate solution to this problem. Our algorithm does not rely on the sub-modularity of the underlying objective function and is therefore a general approach. We demonstrate the efficacy of the algorithm using a co-authorship data set from e-print arXiv (www.arxiv.org) comprising 8,361 authors.
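
As a rough illustration of the general idea (not the authors' algorithm), the sketch below estimates Shapley values of nodes by Monte Carlo sampling of random permutations and then returns the k nodes with the largest estimates. The one-hop coverage objective, the toy graph, and the sample count are assumptions made purely for illustration.

```python
import random
from collections import defaultdict

def coverage(graph, nodes):
    """Illustrative objective: number of vertices within one hop of the chosen set."""
    covered = set(nodes)
    for v in nodes:
        covered.update(graph[v])
    return len(covered)

def shapley_top_k(graph, k, samples=200, seed=0):
    """Monte Carlo estimate of each node's Shapley value for `coverage`,
    followed by selection of the k nodes with the largest estimates."""
    rng = random.Random(seed)
    vertices = list(graph)
    value = defaultdict(float)
    for _ in range(samples):
        rng.shuffle(vertices)
        chosen, prev = [], 0
        for v in vertices:
            chosen.append(v)
            cur = coverage(graph, chosen)
            value[v] += cur - prev          # marginal contribution of v in this permutation
            prev = cur
    return sorted(vertices, key=lambda v: value[v], reverse=True)[:k]

# Toy undirected graph as an adjacency dict.
g = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2, 5}, 5: {4, 6}, 6: {5}}
print(shapley_top_k(g, k=2))
```

Note that nothing in the sketch assumes sub-modularity of the objective; only evaluations of the objective on node subsets are required.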

Relevance:

10.00%

Publisher:

Abstract:

Multiple Clock Domain (MCD) processors provide an attractive solution to the increasingly challenging problems of clock distribution and power dissipation. They allow a chip to be partitioned into different clock domains, and each domain's frequency (voltage) to be configured independently. This flexibility adds new dimensions to the Dynamic Voltage and Frequency Scaling (DVFS) problem, while providing better scope for saving energy and meeting performance demands. In this paper, we propose a compiler-directed approach for MCD-DVFS. We build a formal Petri-net-based program performance model, parameterized by the settings of microarchitectural components and resource configurations, and integrate it with our compiler passes for frequency selection. Our model estimates the performance impact of a frequency setting, unlike the existing best techniques, which rely on weaker indicators of domain performance such as queue occupancies (used by online methods) and slack manifestation for a particular frequency setting (software-based methods). We evaluate our method with subsets of the SPECFP2000, MediaBench and MiBench benchmarks. Our mean energy savings are 60.39% (versus 33.91% for the best software technique) in a memory-constrained system for cache-miss-dominated benchmarks, and we meet the performance demands. Our ED² improves by 22.11% (versus 18.34%) for the other benchmarks. For a CPU with restricted frequency settings, our energy consumption is within 4.69% of the optimal.
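
The following deliberately simplified sketch conveys only the flavor of frequency selection under a performance constraint; the linear slowdown model, the frequency list and the slack value are assumptions, and the paper's Petri-net performance model is not reproduced here.

```python
# Toy frequency selection for one clock domain: pick the lowest frequency
# whose predicted slowdown stays within an allowed performance slack.
F_MAX = 2.0                          # GHz, assumed maximum domain frequency
FREQS = [0.8, 1.0, 1.2, 1.5, 2.0]    # assumed available frequency settings

def predicted_time(base_time, domain_fraction, f):
    """Assumed model: only the fraction of time spent in this domain scales
    with frequency; the rest (e.g. memory stalls) does not."""
    return base_time * ((1 - domain_fraction) + domain_fraction * F_MAX / f)

def select_frequency(base_time, domain_fraction, slack=0.10):
    """Lowest frequency whose predicted time is within (1 + slack) of base."""
    for f in sorted(FREQS):
        if predicted_time(base_time, domain_fraction, f) <= (1 + slack) * base_time:
            return f
    return F_MAX

# A domain on the critical path only 5% of the time can run at the lowest setting.
print(select_frequency(base_time=1.0, domain_fraction=0.05))   # -> 0.8
```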

Relevance:

10.00%

Publisher:

Abstract:

Given an unweighted undirected or directed graph with n vertices, m edges and edge connectivity c, we present a new deterministic algorithm for edge splitting. Our algorithm splits off any specified subset S of vertices satisfying standard conditions (even degree for the undirected case and in-degree ≥ out-degree for the directed case) while maintaining connectivity c for vertices outside S in Õ(m + nc²) time for an undirected graph and Õ(mc) time for a directed graph. This improves the current best deterministic time bounds due to Gabow [8], who splits off a single vertex in Õ(nc² + m) time for an undirected graph and Õ(mc) time for a directed graph. Further, for appropriate ranges of n, c and |S|, it improves the current best randomized bounds due to Benczúr and Karger [2], who split off a single vertex in an undirected graph in Õ(n²) Monte Carlo time. We give two applications of our edge splitting algorithms. Our first application is a sub-quadratic (in n) algorithm to construct Edmonds' arborescences. A classical result of Edmonds [5] shows that an unweighted directed graph with c edge-disjoint paths from any particular vertex r to every other vertex has exactly c edge-disjoint arborescences rooted at r. For a c edge-connected unweighted undirected graph, the same theorem holds on the digraph obtained by replacing each undirected edge by two directed edges, one in each direction. The current fastest construction of these arborescences, by Gabow [7], takes Õ(n²c²) time. Our algorithm takes Õ(nc³ + m) time for the undirected case and Õ(nc⁴ + mc) time for the directed case. The second application of our splitting algorithm is a new Steiner edge connectivity algorithm for undirected graphs which matches the best known bound of Õ(nc² + m) time due to Bhalgat et al. [3]. Finally, our algorithm can also be viewed as an alternative proof for existential edge splitting theorems due to Lovász [9] and Mader [11].
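
To make the splitting-off operation itself concrete, here is a small brute-force demonstration (not the paper's algorithm): it replaces the edge pair (u, s), (s, w) incident to an even-degree vertex s by the shortcut edge (u, w) and checks that pairwise edge connectivity among the remaining vertices is unchanged. The particular graph and the use of networkx are assumptions for illustration.

```python
import itertools
import networkx as nx

def split_off(G, s, u, w):
    """Return a copy of G with edges (u, s), (s, w) replaced by (u, w)."""
    H = G.copy()
    H.remove_edge(u, s)
    H.remove_edge(s, w)
    H.add_edge(u, w)
    return H

def min_pairwise_connectivity(G, exclude):
    """Minimum local edge connectivity over all vertex pairs avoiding `exclude`."""
    rest = [v for v in G if v != exclude]
    return min(nx.edge_connectivity(G, x, y)
               for x, y in itertools.combinations(rest, 2))

# 4-cycle 1-2-3-4 plus a degree-2 vertex s attached to vertices 1 and 3.
G = nx.cycle_graph([1, 2, 3, 4])
G.add_edges_from([('s', 1), ('s', 3)])

before = min_pairwise_connectivity(G, exclude='s')
after = min_pairwise_connectivity(split_off(G, 's', 1, 3), exclude='s')
print(before, after)   # connectivity among vertices other than s is preserved (2 2)
```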

Relevance:

10.00%

Publisher:

Abstract:

The Java Memory Model (JMM) provides a semantics of Java multithreading for any implementation platform. The JMM is defined in a declarative fashion, with an allowed program execution characterized in terms of the existence of "commit sequences" (roughly, the order in which the actions in the execution are committed). In this work, we develop OpMM, an operational under-approximation of the JMM. The immediate motivation of this work lies in integrating a formal specification of the JMM with software model checkers. We show how our operational memory model description can be integrated into a Java PathFinder (JPF) style model checker for Java programs.

Relevance:

10.00%

Publisher:

Abstract:

Estimates of predicate selectivities by database query optimizers often differ significantly from those actually encountered during query execution, leading to poor plan choices and inflated response times. In this paper, we investigate mitigating this problem by replacing selectivity-error-sensitive plan choices with alternative plans that provide robust performance. Our approach is based on the recent observation that even the complex and dense "plan diagrams" associated with industrial-strength optimizers can be efficiently reduced to "anorexic" equivalents featuring only a few plans, without materially impacting query processing quality. Extensive experimentation with a rich set of TPC-H and TPC-DS-based query templates in a variety of database environments indicates that plan diagram reduction typically retains plans that are substantially resistant to selectivity errors on the base relations. However, it can sometimes also be severely counter-productive, with the replacements performing much worse. We address this problem through a generalized mathematical characterization of plan cost behavior over the parameter space, which yields efficient criteria for deciding when it is safe to reduce. Our strategies are fully non-invasive and have been implemented in the Picasso optimizer visualization tool.
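
The sketch below is a toy rendering of cost-threshold-based plan diagram reduction on a one-dimensional parameter space: a point's plan is replaced by a more frequent plan whenever the substitute's cost at that point stays within a factor (1 + λ) of optimal. The plan cost functions, the λ value and the replacement preference are assumptions for illustration, not the paper's characterization of when reduction is safe.

```python
import numpy as np

# Assumed plan-cost functions over a 1-D selectivity parameter.
costs = {
    "A": lambda sel: 10 + 100 * sel,   # cheap at low selectivity
    "B": lambda sel: 40 + 10 * sel,    # cheap at high selectivity
    "C": lambda sel: 25 + 40 * sel,    # optimal only in a narrow band
}

def plan_diagram(points):
    """Optimal plan at each parameter point (the 'plan diagram')."""
    return [min(costs, key=lambda p: costs[p](s)) for s in points]

def reduce_diagram(points, diagram, lam=0.2):
    """Swap a point's plan for another whose cost there is within (1 + lam)
    of optimal, preferring plans that already cover many points."""
    freq = {p: diagram.count(p) for p in costs}
    reduced = []
    for s, best in zip(points, diagram):
        ok = [p for p in costs if costs[p](s) <= (1 + lam) * costs[best](s)]
        reduced.append(max(ok, key=lambda p: freq[p]))
    return reduced

points = np.linspace(0.0, 1.0, 11)
full = plan_diagram(points)
print(full)
print(reduce_diagram(points, full))   # the narrow-band plan C gets swallowed
```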

Relevance:

10.00%

Publisher:

Abstract:

Given a parametrized n-dimensional SQL query template and a choice of query optimizer, a plan diagram is a color-coded pictorial enumeration of the execution plan choices of the optimizer over the query parameter space. These diagrams have proved to be a powerful metaphor for the analysis and redesign of modern optimizers, and are gaining currency in diverse industrial and academic institutions. However, their utility is adversely impacted by the impractically large computational overheads incurred when standard brute-force exhaustive approaches are used for producing fine-grained diagrams on high-dimensional query templates. In this paper, we investigate strategies for efficiently producing close approximations to complex plan diagrams. Our techniques are customized to the features available in the optimizer's API, ranging from the generic optimizers that provide only the optimal plan for a query, to those that also support costing of sub-optimal plans and enumerating rank-ordered lists of plans. The techniques collectively feature both random and grid sampling, as well as inference techniques based on nearest-neighbor classifiers, parametric query optimization and plan cost monotonicity. Extensive experimentation with a representative set of TPC-H and TPC-DS-based query templates on industrial-strength optimizers indicates that our techniques are capable of delivering 90% accurate diagrams while incurring less than 15% of the computational overheads of the exhaustive approach. In fact, for full-featured optimizers, we can guarantee zero error with less than 10% overheads. These approximation techniques have been implemented in the publicly available Picasso optimizer visualization tool.
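
As a minimal sketch of one of the techniques mentioned above (grid sampling combined with nearest-neighbor inference), the code below builds a fine-grained plan diagram from a mock optimizer, samples it on a coarse grid, infers the unsampled points from their nearest sampled neighbor, and reports the resulting accuracy. The mock optimizer, resolution and sampling step are assumptions; a real diagram would come from an optimizer's API.

```python
import numpy as np

def mock_optimizer(x, y):
    """Stand-in for an optimizer call: returns a plan label for selectivities (x, y)."""
    if x + y < 0.5:
        return 0
    return 1 if x > y else 2

res = 100                                    # fine-grained diagram resolution
xs = ys = np.linspace(0, 1, res)
exact = np.array([[mock_optimizer(x, y) for x in xs] for y in ys])

step = 10                                    # sample only every 10th point per axis
sampled = [(i, j) for i in range(0, res, step) for j in range(0, res, step)]
approx = np.empty_like(exact)
for i in range(res):
    for j in range(res):
        # nearest-neighbor inference from the sampled grid points
        si, sj = min(sampled, key=lambda p: (p[0] - i) ** 2 + (p[1] - j) ** 2)
        approx[i, j] = exact[si, sj]

accuracy = (approx == exact).mean()
print(f"optimizer calls: {len(sampled)} of {res * res}, accuracy: {accuracy:.1%}")
```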

Relevance:

10.00%

Publisher:

Abstract:

Digest caches have been proposed as an effective method to speed up packet classification in network processors. In this paper, we show that the presence of a large number of small flows and a few large flows in the Internet has an adverse impact on the performance of these digest caches. In the Internet, a few large flows transfer a majority of the packets, whereas the contribution of several small flows to the total number of packets transferred is small. In such a scenario, the LRU cache replacement policy, which gives maximum priority to the most recently accessed digest, tends to evict digests belonging to the few large flows. We propose a new cache management algorithm called Saturating Priority (SP) which aims at improving the performance of digest caches in network processors by exploiting the disparity between the number of flows and the number of packets transferred. Our experimental results demonstrate that SP performs better than the widely used LRU cache replacement policy in size-constrained caches. Further, we characterize the misses experienced by flow identifiers in digest caches.
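
The simulation below illustrates the underlying effect on synthetic traffic: under a trace with many one-packet flows and a few heavy flows, plain LRU lets the one-packet flows displace heavy-hitter digests, whereas a hypothetical saturating-counter eviction policy (an assumption for illustration, not necessarily the paper's SP algorithm) retains them. The trace shape, cache size and counter limit are all assumptions.

```python
import random
from collections import OrderedDict

def make_trace(n_packets=50_000, n_small=20_000, n_large=20, seed=1):
    """Synthetic trace: a few large flows send most packets, many small
    flows send one packet each (illustrative, not real Internet data)."""
    rng = random.Random(seed)
    trace = [("small", i) for i in range(n_small)]
    trace += [("large", rng.randrange(n_large))
              for _ in range(n_packets - n_small)]
    rng.shuffle(trace)
    return trace

def lru_hits(trace, size):
    cache, hits = OrderedDict(), 0
    for flow in trace:
        if flow in cache:
            hits += 1
            cache.move_to_end(flow)
        else:
            if len(cache) >= size:
                cache.popitem(last=False)          # evict least recently used
            cache[flow] = True
    return hits / len(trace)

def priority_hits(trace, size, max_count=7):
    """Hypothetical saturating-counter policy: evict the entry with the lowest
    hit counter, so one-packet flows cannot displace heavy hitters."""
    count, hits = {}, 0
    for flow in trace:
        if flow in count:
            hits += 1
            count[flow] = min(count[flow] + 1, max_count)
        else:
            if len(count) >= size:
                victim = min(count, key=count.get)
                del count[victim]
            count[flow] = 0
    return hits / len(trace)

trace = make_trace()
print(f"LRU: {lru_hits(trace, 24):.2%}  priority: {priority_hits(trace, 24):.2%}")
```

In this synthetic setting the counter-based policy keeps the heavy-hitter digests resident and achieves a noticeably higher hit rate than LRU in the small cache.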

Relevance:

10.00%

Publisher:

Abstract:

MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. In order to ensure the required data movement for dependencies across basic blocks, we propose a data flow analysis and edge splitting strategy. Thus, our compiler automatically handles the composition of kernels, the mapping of kernels to the CPU and GPU, scheduling, and the insertion of required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data-parallel benchmarks over native execution of MATLAB.

Relevance:

10.00%

Publisher:

Abstract:

In this thesis we address the problem of multi-agent search. We formulate two deploy-and-search strategies based on optimal deployment of agents in the search space so as to maximize the search effectiveness in a single step. We show that a variation of the centroidal Voronoi configuration is the optimal deployment. When the agents have sensors with different capabilities, the problem becomes heterogeneous in nature. We introduce a new concept, namely the generalized Voronoi partition, in order to formulate and solve the heterogeneous multi-agent search problem. We address a few theoretical issues such as optimality of deployment, convergence, and spatial distributedness of the control law and the search strategies. Simulation experiments are carried out to compare the performance of the proposed strategies with a few simple search strategies.
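
For intuition about the homogeneous case, the sketch below runs a standard Lloyd-style iteration that drives identical agents toward a centroidal Voronoi configuration over a sampled unit square. The uniform density, the number of agents and the iteration count are assumptions; the thesis's control laws and the generalized Voronoi partition for heterogeneous sensors are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
search_space = rng.random((2000, 2))        # sample points of a unit-square search space
agents = rng.random((5, 2))                 # initial positions of 5 identical agents

# Lloyd-style iteration: assign each point to its nearest agent (a Voronoi
# partition of the samples) and move each agent to the centroid of its cell.
for _ in range(50):
    d = np.linalg.norm(search_space[:, None, :] - agents[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    for i in range(len(agents)):
        cell = search_space[nearest == i]
        if len(cell):
            agents[i] = cell.mean(axis=0)

print(np.round(agents, 3))                  # approximately a centroidal Voronoi configuration
```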

Relevance:

10.00%

Publisher:

Abstract:

Transfer function coefficients (TFC) are widely used to test linear analog circuits for parametric and catastrophic faults. This paper presents closed-form expressions for an upper bound on the defect level (DL) and a lower bound on the fault coverage (FC) achievable with the TFC-based test method. The computed bounds have been tested and validated on several benchmark circuits. Further, application of these bounds to scalable RC ladder networks reveals a number of interesting characteristics. The approach adopted here is general and can be extended to find DL and FC bounds for other parametric test methods for linear and non-linear circuits.
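
As a toy illustration of TFC-based testing (not the paper's bounds), consider a first-order RC low-pass with H(s) = 1 / (1 + sRC): its single coefficient a1 = RC is measured and compared against a tolerance band. The component values, tolerances and fault size below are assumptions chosen only to show how a test escape arises, which is what the defect level quantifies.

```python
R_NOM, C_NOM = 1e3, 1e-6           # assumed nominal 1 kΩ, 1 µF  ->  a1 = 1 ms
A1_NOM = R_NOM * C_NOM
TEST_TOL = 0.10                     # test passes if a1 is within ±10% of nominal
SPEC_TOL = 0.05                     # circuit is actually good only within ±5%

def tfc_test(r, c):
    """Pass/fail decision of the coefficient-based test."""
    return abs(r * c - A1_NOM) <= TEST_TOL * A1_NOM

def meets_spec(r, c):
    """Whether the circuit truly meets its (tighter) specification."""
    return abs(r * c - A1_NOM) <= SPEC_TOL * A1_NOM

# An 8% drift in R keeps a1 inside the test band even though the circuit
# violates its specification: a test escape of the kind that drives defect level.
r_faulty = 1.08 * R_NOM
print(tfc_test(r_faulty, C_NOM), meets_spec(r_faulty, C_NOM))   # True False
```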

Relevance:

10.00%

Publisher:

Abstract:

In this paper we explore an implementation of a high-throughput, streaming application on REDEFINE-v2, an enhancement of REDEFINE. REDEFINE is a polymorphic ASIC combining the flexibility of a programmable solution with the execution speed of an ASIC. In REDEFINE, Compute Elements are arranged in an 8x8 grid connected via a Network on Chip (NoC) called RECONNECT, to realize the various macrofunctional blocks of an equivalent ASIC. For a 1024-point FFT we carry out an application-architecture design space exploration by examining various characterizations of the Compute Elements in terms of the size of the instruction store. We further study the impact of using application-specific, vectorized FUs. By setting up different partitions of the FFT algorithm for persistent execution on REDEFINE-v2, we derive the benefits of pipelined execution for higher performance. The impact of the REDEFINE-v2 micro-architecture for an arbitrary N-point FFT (N > 4096) is also analyzed. We report the various algorithm-architecture tradeoffs in terms of area and execution speed compared with an ASIC implementation. In addition, we compare the performance gain with respect to a GPP.

Relevance:

10.00%

Publisher:

Abstract:

We consider a fluid queue in discrete time with a random service rate. Such a queue has been used in several recent studies on wireless networks where packets can be arbitrarily fragmented. We provide conditions for the finiteness of the moments of the stationary delay, derive its Laplace-Stieltjes transform, and obtain various approximations under heavy traffic. The results are extended to the case where the wireless link can transmit in only a few slots during a frame.
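
A minimal simulation of this kind of queue follows the Lindley recursion Q[n+1] = max(Q[n] + A[n] - S[n], 0) and empirically estimates the first two moments of the stationary backlog. The exponential arrival and service distributions and the horizon are assumptions for illustration; the paper's analytical moment conditions and heavy-traffic approximations are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200_000
arrivals = rng.exponential(scale=0.8, size=T)   # fluid arriving in each slot
service = rng.exponential(scale=1.0, size=T)    # random service available per slot

# Discrete-time fluid queue: Q[n+1] = max(Q[n] + A[n] - S[n], 0).
q = np.empty(T)
q[0] = 0.0
for n in range(T - 1):
    q[n + 1] = max(q[n] + arrivals[n] - service[n], 0.0)

steady = q[T // 10:]                            # discard warm-up slots
print(f"mean queue: {steady.mean():.2f}, second moment: {(steady**2).mean():.2f}")
```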

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of scheduling a wireless channel (server) among several queues. Each queue has its own link (transmission) rate. The link rate of a queue can vary randomly from slot to slot. The queue lengths and channel states of all users are known at the beginning of each slot. We show the existence of an optimal policy that minimizes the long-term (discounted) average sum of queue lengths. The optimal policy, in general, needs to be computed numerically. We then identify a greedy (one-step optimal) policy, MAX-TRANS, which is easy to implement and does not require the channel and traffic statistics. The cost of this policy is close to optimal and better than that of other well-known policies (when stable), although it is not throughput optimal for asymmetric systems. We (approximately) identify its stability region and obtain approximations for its mean queue lengths and mean delays. We also modify this policy to make it throughput optimal while retaining good performance.
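
The simulation below uses one plausible reading of such a greedy one-step rule: in each slot, serve the queue that can transmit the most data, i.e. the one maximizing min(queue length, link rate). The arrival rates, rate distribution and horizon are assumptions for illustration; the precise MAX-TRANS definition and its analysis are in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 4, 100_000
mean_arrivals = np.array([0.3, 0.4, 0.4, 0.5])      # assumed packets per slot, per queue
q = np.zeros(N)
total_q = 0.0

for _ in range(T):
    q += rng.poisson(mean_arrivals)                 # random arrivals this slot
    rates = rng.integers(1, 5, size=N)              # random link rate per queue (1..4)
    # Greedy one-step rule: serve the queue that can move the most data now.
    served = np.argmax(np.minimum(q, rates))
    q[served] = max(q[served] - rates[served], 0.0)
    total_q += q.sum()

print("time-average total queue length:", round(total_q / T, 2))
```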

Relevance:

10.00%

Publisher:

Abstract:

In phase-encoded optical CDMA (OCDMA), spreading is achieved by encoding the phase of the signal spectrum. Here, a mathematical model for the output signal of a phase-encoded OCDMA system is first derived. This is shown to lead to a performance metric for the design of spreading sequences for asynchronous transmission. Generalized bent functions are used to construct a family of efficient phase-encoding sequences. It is shown how M-ary modulation of these spreading sequences is possible. The problem of designing efficient phase-encoded sequences is then related to the problem of minimizing the PMEPR (peak-to-mean envelope power ratio) in an OFDM communication system.
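
To make the PMEPR connection tangible, the sketch below computes the PMEPR of a unit-amplitude phase sequence applied across OFDM-style carriers via an oversampled IDFT, comparing random phases with a chirp-like quadratic phase sequence. The quadratic sequence is only a stand-in chosen because it is known to flatten the envelope; it is not the generalized-bent-function construction described above, and the sequence length and oversampling factor are assumptions.

```python
import numpy as np

def pmepr(phases, oversample=8):
    """Peak-to-mean envelope power ratio of a unit-amplitude phase sequence
    placed on len(phases) carriers, via an oversampled IDFT."""
    n = len(phases)
    padded = np.zeros(n * oversample, dtype=complex)
    padded[:n] = np.exp(1j * phases)
    power = np.abs(np.fft.ifft(padded)) ** 2     # (scaled) envelope power samples
    return power.max() / power.mean()            # scale cancels in the ratio

n = 64
rng = np.random.default_rng(0)
random_phases = rng.uniform(0, 2 * np.pi, n)
quadratic_phases = np.pi * np.arange(n) ** 2 / n  # chirp-like quadratic phase sequence

print(f"random phases:    PMEPR = {pmepr(random_phases):.2f}")
print(f"quadratic phases: PMEPR = {pmepr(quadratic_phases):.2f}")
```

The quadratic phases produce a much flatter envelope than random phases, mirroring the link the abstract draws between well-designed spreading sequences and low PMEPR.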