893 results for "running reward"
Abstract:
We describe a QCD-motivated model for total cross-sections which uses the eikonal representation and incorporates QCD mini-jets to drive the rise of the cross-section with energy, while the impact-parameter distribution is obtained through the Fourier transform of the transverse-momentum distribution of soft gluons emitted in the parton-parton interactions that give rise to mini-jets in the final state. A singular but integrable expression for the running coupling constant in the infrared region is part of this model.
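For reference, the eikonal representation the abstract refers to can be written schematically as below (a purely absorptive form with a generic overlap function A(b,s); the detailed impact-parameter distribution and mini-jet cross-section are specified in the paper itself):

```latex
% Schematic eikonal mini-jet form of the total cross-section.
% n(b,s) is the average number of parton-parton collisions at impact parameter b,
% built from an overlap function A(b,s) and the soft plus mini-jet cross-sections.
\[
\sigma_{\mathrm{tot}}(s) \;=\; 2\int d^{2}\vec{b}\,\Bigl[\,1 - e^{-n(b,s)/2}\,\Bigr],
\qquad
n(b,s) \;=\; A(b,s)\,\bigl[\sigma_{\mathrm{soft}}(s) + \sigma_{\mathrm{jet}}(s)\bigr].
\]
```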
Abstract:
Given an undirected unweighted graph G = (V, E) and an integer k ≥ 1, we consider the problem of computing the edge connectivities of all those (s, t) vertex pairs whose edge connectivity is at most k. We present an algorithm with expected running time Õ(m + nk^3) for this problem, where |V| = n and |E| = m. Our output is a weighted tree T whose nodes are the sets V_1, V_2, ..., V_l of a partition of V, with the property that the edge connectivity in G between any two vertices s ∈ V_i and t ∈ V_j, for i ≠ j, is equal to the weight of the lightest edge on the path between V_i and V_j in T. Also, two vertices s and t belong to the same V_i, for some i, if and only if they have an edge connectivity greater than k. Currently, the best algorithm for this problem needs to compute all-pairs min-cuts in an O(nk) edge graph; this takes Õ(m + n^(5/2) k · min{k^(1/2), n^(1/6)}) time. Our algorithm is much faster for small values of k; in fact, it is faster whenever k is o(n^(5/6)). Our algorithm yields the useful corollary that in Õ(m + nc^3) time, where c is the size of the global min-cut, we can compute the edge connectivities of all those pairs of vertices whose edge connectivity is at most αc for some constant α. We also present an Õ(m + n) Monte Carlo algorithm for the approximate version of this problem. This algorithm is applicable to weighted graphs as well. Our algorithm, with some modifications, also solves another problem called the minimum T-cut problem. Given T ⊆ V of even cardinality, we present an Õ(m + nk^3) algorithm to compute a minimum cut that splits T into two odd-cardinality components, where k is the size of this cut.
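To illustrate how such an output tree is used, here is a minimal query sketch (hypothetical data structures, not the paper's code): the (s, t) edge connectivity is the weight of the lightest edge on the tree path between the classes containing s and t, and it exceeds k exactly when the two vertices share a class.

```python
import networkx as nx

def query_connectivity(T, cls, s, t):
    """T: weighted tree on the partition classes (edge attribute 'weight');
    cls: maps each vertex of G to its partition class (a node of T)."""
    if cls[s] == cls[t]:
        return None  # edge connectivity of (s, t) is greater than k
    path = nx.shortest_path(T, cls[s], cls[t])  # the unique tree path
    return min(T[a][b]["weight"] for a, b in zip(path, path[1:]))
```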
Abstract:
We present a fast algorithm for computing a Gomory-Hu tree or cut tree for an unweighted undirected graph G = (V, E). The expected running time of our algorithm is Õ(mc), where |E| = m and c is the maximum u-v edge connectivity over u, v ∈ V. When the input graph is also simple (i.e., it has no parallel edges), the u-v edge connectivity for each pair of vertices u and v is at most n-1; so the expected running time of our algorithm for simple unweighted graphs is Õ(mn). All the algorithms currently known for constructing a Gomory-Hu tree [8,9] use n-1 minimum s-t cut (i.e., max flow) subroutines. This, in conjunction with the current fastest Õ(n^(20/9)) max flow algorithm due to Karger and Levine [11], yields the current best running time of Õ(n^(20/9) · n) for Gomory-Hu tree construction on simple unweighted graphs with m edges and n vertices. Thus we present the first Õ(mn) algorithm for constructing a Gomory-Hu tree for simple unweighted graphs. We do not use a max flow subroutine here; we present an efficient tree packing algorithm for computing Steiner edge connectivity and use this algorithm as our main subroutine. The advantage in using a tree packing algorithm for constructing a Gomory-Hu tree is that the work done in computing a minimum Steiner cut for a Steiner set S ⊆ V can be reused for computing a minimum Steiner cut for certain Steiner sets S' ⊆ S.
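For context, a small usage sketch of the classical max-flow based Gomory-Hu construction (the approach the abstract improves upon, here via networkx, not the tree-packing algorithm itself): with unit capacities, the u-v edge connectivity of a simple unweighted graph is the minimum-weight edge on the u-v path in the cut tree.

```python
import networkx as nx

G = nx.karate_club_graph()                    # simple, unweighted, connected
nx.set_edge_attributes(G, 1, "capacity")      # unit capacities = edge connectivities
T = nx.gomory_hu_tree(G, capacity="capacity") # uses n-1 max-flow computations

def edge_connectivity(T, u, v):
    """u-v edge connectivity = lightest edge on the u-v path in the cut tree."""
    path = nx.shortest_path(T, u, v)
    return min(T[a][b]["weight"] for a, b in zip(path, path[1:]))

print(edge_connectivity(T, 0, 33))
```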
Abstract:
Let G = (V, E) be a weighted undirected graph with non-negative edge weights. We consider the problem of efficiently computing approximate distances between all pairs of vertices in G. While many efficient algorithms are known for this problem in unweighted graphs, not many results are known for this problem in weighted graphs. Zwick [14] showed that for any fixed ε > 0, stretch (1 + ε) distances between all pairs of vertices in a weighted directed graph on n vertices can be computed in Õ(n^ω) time, where ω < 2.376 is the exponent of matrix multiplication and n is the number of vertices. It is known that finding distances of stretch less than 2 between all pairs of vertices in G is at least as hard as Boolean matrix multiplication of two n×n matrices. It is also known that all-pairs stretch 3 distances can be computed in Õ(n^2) time and all-pairs stretch 7/3 distances can be computed in Õ(n^(7/3)) time. Here we consider efficient algorithms for the problem of computing all-pairs stretch (2+ε) distances in G, for any 0 < ε < 1. We show that all-pairs stretch (2+ε) distances for any fixed ε > 0 in G can be computed in expected time O(n^(9/4) log n). This algorithm uses a fast rectangular matrix multiplication subroutine. We also present a combinatorial algorithm (that is, it does not use fast matrix multiplication) with expected running time O(n^(9/4)) for computing all-pairs stretch 5/2 distances in G.
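For clarity, the stretch guarantee discussed above can be stated as follows (notation assumed here: δ(u,v) is the true distance and δ̂(u,v) the reported estimate):

```latex
\[
\delta(u,v) \;\le\; \hat{\delta}(u,v) \;\le\; (2+\varepsilon)\,\delta(u,v)
\qquad \text{for all } u,v \in V,\ \ 0 < \varepsilon < 1 .
\]
```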
Abstract:
We develop a simulation-based, two-timescale actor-critic algorithm for infinite-horizon Markov decision processes with finite state and action spaces, under a discounted reward criterion. The algorithm is of the gradient ascent type and performs a search in the space of stationary randomized policies. The algorithm uses certain simultaneous deterministic perturbation stochastic approximation (SDPSA) gradient estimates for enhanced performance. We show an application of our algorithm to a problem of mortgage refinancing. Our algorithm obtains the optimal refinancing strategies in a computationally efficient manner.
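As a purely illustrative sketch of the simultaneous-perturbation idea (not the paper's algorithm: the environment, policy parameterization and step-size schedules below are placeholders, random ±1 perturbations stand in for the deterministic perturbation sequences of SDPSA, and the critic recursion is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_policy(theta, s):
    """Stationary randomized policy: softmax over per-state action preferences."""
    prefs = theta[s]
    p = np.exp(prefs - prefs.max())
    return p / p.sum()

def rollout_return(theta, env_step, s0, gamma=0.95, horizon=200):
    """Monte Carlo estimate of the discounted return from s0 under theta.
    env_step(s, a) -> (next_state, reward) is a placeholder environment."""
    s, G, disc = s0, 0.0, 1.0
    for _ in range(horizon):
        probs = softmax_policy(theta, s)
        a = rng.choice(len(probs), p=probs)
        s, r = env_step(s, a)
        G += disc * r
        disc *= gamma
    return G

def spsa_policy_search(env_step, n_states, n_actions, s0, iters=2000):
    """Gradient ascent on the discounted return with a simultaneous-perturbation
    gradient estimate: two perturbed simulations per iteration."""
    theta = np.zeros((n_states, n_actions))
    for n in range(1, iters + 1):
        a_n = 1.0 / n              # step size for the policy update
        delta = 0.1 / n ** 0.25    # perturbation magnitude
        Delta = rng.choice([-1.0, 1.0], size=theta.shape)
        J_plus = rollout_return(theta + delta * Delta, env_step, s0)
        J_minus = rollout_return(theta - delta * Delta, env_step, s0)
        grad_est = (J_plus - J_minus) / (2.0 * delta) * (1.0 / Delta)
        theta += a_n * grad_est    # ascent in policy-parameter space
    return theta
```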
Abstract:
In recent years, there has been a trend to run metallic pipelines carrying petroleum products and high-voltage AC power lines parallel to each other in a relatively narrow strip of land. Due to this sharing of the right-of-way, the overhead AC power line electric field may induce voltages on metallic pipelines running in close vicinity, leading to serious adverse effects. In this paper, the induced voltages on metallic pipelines running in close vicinity of high-voltage power transmission lines have been computed. Before computing the induced voltages, an optimum configuration of the phase conductors, based on the lowest conductor surface gradient and the lowest field under the transmission line, has been arrived at. This paper reports the conductor surface field gradients calculated for the various configurations. The electric fields under the transmission line for single-circuit and double-circuit (various phase arrangements) configurations have also been analyzed. Based on the above results, an optimum configuration giving the lowest field under the power line as well as the lowest conductor surface gradient has been arrived at, and for this configuration the induced voltage on the pipeline has been computed using the Charge Simulation Method (CSM). For comparison, induced voltages on the pipeline have also been computed for the various other phase configurations.
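To illustrate the Charge Simulation Method in its simplest form (a generic free-space point-charge sketch, not the line-charge model, image charges or geometry used in the paper): fictitious charges are placed inside the conductors, their values are obtained by enforcing the known potentials at matching points on the conductor surfaces, and the resulting charges are then used to evaluate the potential or field anywhere else.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def solve_csm(charge_pts, contour_pts, contour_potentials):
    """Charge Simulation Method with point charges in free space.
    charge_pts:         (n, 3) positions of fictitious charges inside conductors
    contour_pts:        (n, 3) matching points on the conductor surfaces
    contour_potentials: (n,)   known potentials at the matching points
    Returns the fictitious charge values q solving P q = V."""
    r = np.linalg.norm(contour_pts[:, None, :] - charge_pts[None, :, :], axis=2)
    P = 1.0 / (4.0 * np.pi * EPS0 * r)   # potential coefficient matrix
    return np.linalg.solve(P, contour_potentials)

def potential_at(points, charge_pts, q):
    """Potential produced by the fictitious charges at arbitrary points."""
    r = np.linalg.norm(points[:, None, :] - charge_pts[None, :, :], axis=2)
    return (1.0 / (4.0 * np.pi * EPS0 * r)) @ q
```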
Abstract:
Long-running multi-physics coupled parallel applications have gained prominence in recent years. The high computational requirements and long durations of simulations of these applications necessitate the use of multiple systems of a Grid for execution. In this paper, we have built an adaptive middleware framework for execution of long-running multi-physics coupled applications across multiple batch systems of a Grid. Our framework, apart from coordinating the executions of the component jobs of an application on different batch systems, also automatically resubmits the jobs multiple times to the batch queues to continue and sustain long-running executions. As the set of active batch systems available for execution changes, our framework performs migration and rescheduling of components using a robust rescheduling decision algorithm. We have used our framework for improving the application throughput of a prominent long-running multi-component application for climate modeling, the Community Climate System Model (CCSM). Our real multi-site experiments with CCSM indicate that Grid executions can lead to improved application throughput for climate models.
Abstract:
This paper proposes a Petri net model for a commercial network processor (the Intel IXP architecture), which is a multithreaded multiprocessor architecture. We consider and model three different applications, viz., IPv4 forwarding, network address translation, and IP security, running on the IXP 2400/2850. A salient feature of the Petri net model is its ability to model the application, the architecture and their interaction in great detail. The model is validated using the Intel proprietary tool (SDK 3.51 for the IXP architecture) over a range of configurations. We conduct a detailed performance evaluation, identify the bottleneck resource, propose a few architectural extensions and evaluate them in detail.
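As background on the formalism, a Petri net is just places, transitions and a marking, with a transition firing when all of its input places hold enough tokens; a toy interpreter (generic, not the paper's IXP model, which additionally captures timing and resource contention) might look like this:

```python
import random

def enabled(marking, pre):
    """A transition is enabled if every input place holds the required tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume input tokens, produce output tokens."""
    for p, n in pre.items():
        marking[p] -= n
    for p, n in post.items():
        marking[p] = marking.get(p, 0) + n

def simulate(marking, transitions, steps=100):
    """transitions: list of (pre, post) dicts mapping places to token counts."""
    for _ in range(steps):
        choices = [t for t in transitions if enabled(marking, t[0])]
        if not choices:
            break                      # no transition enabled (deadlock)
        pre, post = random.choice(choices)
        fire(marking, pre, post)
    return marking
```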
Abstract:
We consider the problem of computing a minimum cycle basis in a directed graph G. The input to this problem is a directed graph whose arcs have positive weights. In this problem a {-1, 0, 1} incidence vector is associated with each cycle and the vector space over Q generated by these vectors is the cycle space of G. A set of cycles is called a cycle basis of G if it forms a basis for its cycle space. A cycle basis where the sum of weights of the cycles is minimum is called a minimum cycle basis of G. The current fastest algorithm for computing a minimum cycle basis in a directed graph with m arcs and n vertices runs in O(m^(ω+1) n) time (where ω < 2.376 is the exponent of matrix multiplication). If one allows randomization, then an Õ(m^3 n) algorithm is known for this problem. In this paper we present a simple Õ(m^2 n) randomized algorithm for this problem. The problem of computing a minimum cycle basis in an undirected graph has been well-studied. In this problem a {0, 1} incidence vector is associated with each cycle and the vector space over F_2 generated by these vectors is the cycle space of the graph. The fastest known algorithm for computing a minimum cycle basis in an undirected graph runs in O(m^2 n + m n^2 log n) time, and our randomized algorithm for directed graphs almost matches this running time.
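For the well-studied undirected case mentioned at the end, a quick usage sketch with networkx (this is not the paper's randomized directed-graph algorithm):

```python
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    (1, 2, 1.0), (2, 3, 1.0), (3, 1, 2.0),
    (3, 4, 1.0), (4, 5, 1.0), (5, 3, 2.0),
])
# Each cycle is returned as a list of vertices; a connected graph has
# m - n + 1 basis cycles, and the total weight of the basis is minimized.
basis = nx.minimum_cycle_basis(G, weight="weight")
print(basis)
```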
Abstract:
In developing countries, a high rate of growth in the demand for electric energy is felt, and so the addition of new generating units becomes inevitable. In deregulated power systems, private generating stations are encouraged to add new generation. Some of the factors considered while placing a new generating unit are: availability of resources, ease of transmitting power, distance from the load centre, etc. Finding the most appropriate locations for generation expansion can be done by running repeated power flows and carrying out system studies such as analyzing the voltage profile, voltage stability, loss analysis, etc. In this paper a new methodology is proposed which mainly considers the existing network topology. A concept of T-index is introduced in this paper, which considers the electrical distances between generator and load nodes. This index is used for ranking the most significant new generation expansion locations and also indicates the amount of permissible generation that can be installed at these new locations. This concept facilitates medium- and long-term planning of power generation expansions within the available transmission corridors. Studies carried out on an EHV equivalent 10-bus system and the IEEE 30-bus system are presented for illustration purposes.
Abstract:
There has been substantial public debate recently on a host of issues such as climate change, genetically modified crops and nuclear power; a common theme running through these issues involves science and public policy. This note will be based broadly on three interlinked themes: the growth of specialization in science, significant commercial interests pushing science and technology, and a checkered track record of promises made versus the reality delivered.
Abstract:
Today's SoCs are complex designs with multiple embedded processors, memory subsystems, and application-specific peripherals. The memory architecture of embedded SoCs strongly influences the power and performance of the entire system. Further, the memory subsystem constitutes a major part (typically up to 70%) of the silicon area of a current-day SoC. In this article, we address on-chip memory architecture exploration for DSP processors whose memory is organized as multiple banks, where banks can be single- or dual-ported with non-uniform bank sizes. We propose two different methods for physical memory architecture exploration and identify the strengths and applicability of these methods in a systematic way. Both methods address memory architecture exploration for a given target application by considering the application's data access characteristics, and generate a set of Pareto-optimal design points that are interesting from a power, performance and VLSI area perspective. To the best of our knowledge, this is the first comprehensive work on memory space exploration at the physical memory level that integrates data layout and memory exploration to address the system objectives from both the hardware design and the application software development perspectives. Further, we propose an automatic framework that explores the design space, identifying hundreds of Pareto-optimal design points within a few hours of running on a standard desktop configuration.
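To make the notion of Pareto-optimal design points concrete, here is a small generic filter (hypothetical metric tuples; the paper's framework evaluates real power, performance and area models) that keeps only the non-dominated candidates when lower is better in every dimension:

```python
def pareto_front(points):
    """Keep the non-dominated points; each point is a tuple of metrics
    (e.g. power, cycles, area) where lower is better in every dimension."""
    front = []
    for p in points:
        # p is dominated if some other point q is <= p everywhere and differs somewhere
        if any(all(q[i] <= p[i] for i in range(len(p))) and q != p for q in points):
            continue
        front.append(p)
    return front

designs = [(1.2, 900, 4.0), (1.0, 950, 4.5), (1.1, 870, 5.0), (1.3, 1000, 4.2)]
print(pareto_front(designs))  # the last design is dominated and dropped
```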
Abstract:
We present two online algorithms for maintaining a topological order of a directed n-vertex acyclic graph as arcs are added, and detecting a cycle when one is created. Our first algorithm handles m arc additions in O(m^(3/2)) time. For sparse graphs (m/n = O(1)), this bound improves the best previous bound by a logarithmic factor, and is tight to within a constant factor among algorithms satisfying a natural locality property. Our second algorithm handles an arbitrary sequence of arc additions in O(n^(5/2)) time. For sufficiently dense graphs, this bound improves the best previous bound by a polynomial factor. Our bound may be far from tight: we show that the algorithm can take Ω(n^2 2^(√(2 lg n))) time by relating its performance to a generalization of the k-levels problem of combinatorial geometry. A completely different algorithm running in Θ(n^2 log n) time was given recently by Bender, Fineman, and Gilbert. We extend both of our algorithms to the maintenance of strong components, without affecting the asymptotic time bounds.
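For intuition about the problem, here is a simple local-repair sketch (far from the bounds quoted above, and not either of the paper's algorithms): when an arc u -> v arrives "backwards" with respect to the current order, the nodes reachable from v within the affected window are shifted to just after u, and a cycle is reported if that search reaches u.

```python
def insert_arc(adj, order, pos, u, v):
    """Maintain a topological order of a DAG under arc insertions.
    adj: node -> set of successors; order: list of nodes in topological order;
    pos: node -> index into order.  Returns False (nothing added) on a cycle."""
    if u == v:
        return False                     # self-loop is a cycle
    if pos[u] < pos[v]:                  # order already consistent
        adj[u].add(v)
        return True
    ub = pos[u]
    affected, stack, seen = [], [v], {v}
    while stack:                         # nodes reachable from v before u's slot
        w = stack.pop()
        affected.append(w)
        for x in adj[w]:
            if x == u:                   # u reachable from v: u->v closes a cycle
                return False
            if x not in seen and pos[x] < ub:
                seen.add(x)
                stack.append(x)
    adj[u].add(v)
    affected.sort(key=pos.__getitem__)   # keep their old relative order
    rest = [w for w in order if w not in seen]
    i = rest.index(u)
    order[:] = rest[:i + 1] + affected + rest[i + 1:]
    pos.update({w: j for j, w in enumerate(order)})
    return True
```

Real incremental algorithms, including the two analyzed in the abstract, avoid rebuilding the position map wholesale and obtain amortized guarantees; the sketch only conveys the "search the affected region, then locally reorder" idea.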
Abstract:
Software transactional memory (STM) is a promising programming paradigm for shared-memory multithreaded programs. In order for STMs to be adopted widely for performance-critical software, understanding and improving the cache performance of applications running on STM becomes increasingly crucial, as the performance gap between processor and memory continues to grow. In this paper, we present the most detailed experimental evaluation to date of the cache behavior of STM applications and quantify the impact of the different STM factors on the cache misses experienced by the applications. We find that STMs are not cache friendly, with the data cache stall cycles contributing to more than 50% of the execution cycles in a majority of the benchmarks. We find that, on average, misses occurring inside the STM account for 62% of the total data cache miss latency cycles experienced by the applications, and that the cache performance is impacted adversely due to certain inherent characteristics of the STM itself. The above observations motivate us to propose a set of specific compiler transformations targeted at making STMs cache friendly. We find that the STM's fine-grained and application-unaware locking is a major contributor to its poor cache behavior. Hence we propose selective Lock Data co-location (LDC) and Redundant Lock Access Removal (RLAR) to address the lock access misses. We find that even transactions that are completely disjoint-access parallel suffer from costly coherence misses caused by the centralized global time stamp updates, and hence we propose the Selective Per-Partition Time Stamp (SPTS) transformation to address this. We show that our transformations are effective in improving the cache behavior of STM applications, reducing the data cache miss latency by 20.15% to 37.14% and improving execution time by 18.32% to 33.12% in five of the eight STAMP applications.
Abstract:
Exascale systems of the future are predicted to have a mean time between failures (MTBF) of less than one hour. Malleable applications, where the number of processors on which the applications execute can be changed during execution, can make use of their malleability to better tolerate high failure rates. We present AdFT, an adaptive fault tolerance framework for long-running malleable applications that maximizes application performance in the presence of failures. The AdFT framework includes cost models for evaluating the benefits of various fault tolerance actions, including checkpointing, live migration and rescheduling, and runtime decisions for dynamically selecting the fault tolerance actions at different points of application execution to maximize performance. Simulations with real and synthetic failure traces show that our approach outperforms existing fault tolerance mechanisms for malleable applications, yielding up to 23% improvement in application performance, and is effective even for petascale systems and beyond.
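As a flavor of the kind of cost reasoning such a framework performs (a toy model only, built on Young's classic approximation for the checkpoint interval; AdFT's actual cost models and runtime decision logic are those described in the paper):

```python
import math

def young_interval(checkpoint_cost, mtbf):
    """Young's approximation for a near-optimal checkpoint interval (seconds)."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

def expected_overhead_fraction(checkpoint_cost, restart_cost, mtbf):
    """Rough fraction of time lost to writing checkpoints, restarting,
    and redoing work, when checkpointing every young_interval seconds."""
    tau = young_interval(checkpoint_cost, mtbf)
    ckpt = checkpoint_cost / tau                  # time spent writing checkpoints
    rework = (tau / 2.0 + restart_cost) / mtbf    # expected loss per failure
    return ckpt + rework

# Example: 2-minute checkpoints, 5-minute restarts, 1-hour MTBF
print(expected_overhead_fraction(120.0, 300.0, 3600.0))
```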