850 results for "Use the resources"


Relevance: 90.00%

Abstract:

GPUs have been used for parallel execution of DOALL loops. However, loops with indirect array references can potentially cause cross-iteration dependences which are hard to detect using existing compilation techniques. Applications with such loops cannot easily use the GPU and hence do not benefit from its tremendous compute capabilities. In this paper, we present an algorithm to compute at runtime the cross-iteration dependences in such loops. The algorithm uses both the CPU and the GPU to compute the dependences. Specifically, it effectively uses the compute capabilities of the GPU to quickly collect the memory accesses performed by the iterations by executing the slice functions generated for the indirect array accesses. Using the dependence information, the loop iterations are levelized such that each level contains independent iterations which can be executed in parallel. Another interesting aspect of the proposed solution is that it pipelines the dependence computation of the future level with the actual computation of the current level to effectively utilize the resources available on the GPU. We use an NVIDIA Tesla C2070 to evaluate our implementation on benchmarks from the Polybench suite and some synthetic benchmarks. Our experiments show that the proposed technique can achieve an average speedup of 6.4x on loops with a reasonable number of cross-iteration dependences.
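
To make the levelization step concrete, here is a minimal Python sketch (with hypothetical names; the paper computes the dependences jointly on the CPU and GPU, which this host-side sketch does not model). Given the cross-iteration dependences, it groups iterations into levels whose members are mutually independent:

def levelize(n_iters, deps):
    # deps[i]: set of earlier iterations that iteration i depends on.
    # Returns a list of levels; iterations within one level are
    # independent of each other and can be launched together on the GPU.
    level = [0] * n_iters
    for i in range(n_iters):              # iterations in program order
        for j in deps[i]:
            level[i] = max(level[i], level[j] + 1)
    levels = {}
    for i, l in enumerate(level):
        levels.setdefault(l, []).append(i)
    return [levels[l] for l in sorted(levels)]

# Example: iteration 2 depends on 0, and 3 depends on 2.
print(levelize(4, {0: set(), 1: set(), 2: {0}, 3: {2}}))  # [[0, 1], [2], [3]]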

Relevance: 90.00%

Abstract:

Each new generation of GPUs vastly increases the resources available to GPGPU programs. GPU programming models (like CUDA) were designed to scale to use these resources. However, we find that CUDA programs actually do not scale to utilize all available resources, with over 30% of resources going unused on average for the Parboil2 programs used in our work. Current GPUs therefore allow concurrent execution of kernels to improve utilization. In this work, we study concurrent execution of GPU kernels using multiprogram workloads on current NVIDIA Fermi GPUs. On two-program workloads from the Parboil2 benchmark suite we find that concurrent execution is often no better than serialized execution. We identify the lack of control over resource allocation to kernels as a major serialization bottleneck. We propose transformations that convert CUDA kernels into elastic kernels which permit fine-grained control over their resource usage. We then propose several elastic-kernel-aware concurrency policies that offer significantly better performance and concurrency than the current CUDA policy. We evaluate our proposals on real hardware using multiprogrammed workloads constructed from Parboil2 benchmarks. On average, our proposals increase system throughput (STP) by 1.21x and improve the average normalized turnaround time (ANTT) by 3.73x for two-program workloads when compared to the current CUDA concurrency implementation.
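
For reference, STP and ANTT are the standard multiprogram metrics of Eyerman and Eeckhout; assuming their usual definitions (the abstract does not spell them out), with T_i^alone and T_i^shared the runtimes of program i running alone and in the multiprogrammed mix:

STP = \sum_{i=1}^{n} \frac{T_i^{\mathrm{alone}}}{T_i^{\mathrm{shared}}}, \qquad ANTT = \frac{1}{n} \sum_{i=1}^{n} \frac{T_i^{\mathrm{shared}}}{T_i^{\mathrm{alone}}}

Higher STP and lower ANTT are better, so improving ANTT by 3.73x means the mean normalized turnaround time drops by that factor.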

Relevance: 90.00%

Abstract:

Mobile Ad hoc Networks (MANETs), despite their strikingly superior features, also come with notable disadvantages, the most exigent of which are security-related issues. Such an open network readily paves the way for malicious nodes, so providing security is a demanding task. For such a dynamic environment, a security system that dynamically observes the attacker's plans and protects highly sophisticated resources is in high demand. In this paper we present a method of providing security against wormhole attacks in a MANET by learning about the environment dynamically and adapting to avoid malicious nodes. We accomplish this with the assistance of a honeypot. Our method predicts wormhole attacks that may take place and protects the resources well in advance. It also deals with the attacker by using previous history and different types of messages to locate the attacker. Several experiments suggest that the system is accurate in handling wormhole attacks.

Relevance: 90.00%

Abstract:

We use the Bouguer coherence (Morlet isostatic response function) technique to compute the spatial variation of the effective elastic thickness (T_e) of the Andaman subduction zone. The recovered T_e map resolves regional-scale features that correlate well with known surface structures of the subducting Indian plate and the overriding Burma plate. The major structure on the Indian plate, the Ninetyeast Ridge (NER), exhibits a weak mechanical strength, which is consistent with the expected signature of an oceanic ridge of hotspot origin. However, a markedly low strength (0 < T_e < 3 km) in the region where the NER is close to the Andaman trench (north of 10°N) receives our main attention in this study. The subduction geometry derived from Bouguer gravity forward modeling suggests that the NER has indented beneath the Andaman arc. We infer that the bending stresses of the viscous plate, which were reinforced within the subducting oceanic plate as a result of the partial subduction of the NER buoyant load, have reduced the lithospheric strength. The correlation T_e < T_s (seismogenic thickness) reveals that the upper crust is actively deforming beneath the frontal-arc Andaman region. The occurrence of normal-fault earthquakes in the frontal-arc, low-T_e zone is indicative of structural heterogeneities within the subducting plate. The fact that the NER along with its buoyant root is subducting under the Andaman region is inhibiting the subduction processes, as suggested by the changes in trench line, interrupted back-arc volcanism, variation in seismicity mechanism, slow subduction, etc. The low T_e and thinned crustal structure of the Andaman back-arc basin are attributed to a thermomechanically weakened lithosphere. The present study reveals that the ongoing back-arc spreading and strike-slip motion along the West Andaman Fault, coupled with the ridge subduction, exert an important control on the frequency and magnitude of seismicity in the Andaman region.
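
For context, T_e measures lithospheric strength through the standard thin elastic plate relation between T_e and the flexural rigidity D, with E Young's modulus and \nu Poisson's ratio:

D = \frac{E T_e^3}{12 (1 - \nu^2)}

Because D scales with the cube of T_e, the drop from T_e ≈ 3 km toward 0 km where the NER approaches the trench corresponds to a collapse of flexural rigidity by orders of magnitude.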

Relevance: 90.00%

Abstract:

We use the recently measured accurate BaBar data on the modulus of the pion electromagnetic form factor, F_\pi(t), up to an energy of 3 GeV, the I = 1 P-wave phase of the \pi\pi scattering amplitude up to the \omega-\pi threshold, the pion charge radius known from Chiral Perturbation Theory, and the recently measured JLab value of F_\pi in the spacelike region at t = -2.45 GeV^2 as inputs in a formalism that leads to bounds on F_\pi in the intermediate spacelike region. We compare our constraints with experimental data and with perturbative QCD, along with the results of several theoretical models for the non-perturbative contributions proposed in the literature.
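
For context, the pion charge radius constrains the form factor because it fixes its slope at the origin through the standard low-energy expansion

F_\pi(t) = 1 + \frac{1}{6} \langle r_\pi^2 \rangle \, t + O(t^2),

which, combined with the modulus and phase inputs listed above, is what allows bound-type formalisms to pin down F_\pi(t) at intermediate spacelike t.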

Relevance: 90.00%

Abstract:

We consider the problem of ``fair'' scheduling of resources to one of many mobile stations by a centrally controlled base station (BS). The BS is the only entity taking decisions in this framework, based on truthful information from the mobiles about their radio channel. We study the well-known family of parametric alpha-fair scheduling problems from a game-theoretic perspective in which some of the mobiles may be noncooperative. We first show that if the BS is unaware of the noncooperative behavior of the mobiles, the noncooperative mobiles succeed in snatching the resources from the cooperative mobiles, resulting in unfair allocations. If the BS is aware of the noncooperative mobiles, a new game arises with the BS as an additional player. It can then do better by neglecting the signals from the noncooperative mobiles. The BS, however, succeeds in eliciting truthful signals from the mobiles only when it uses additional information (signal statistics). This new policy, along with the truthful signals from the mobiles, forms a Nash equilibrium (NE) that we call a Truth Revealing Equilibrium. Finally, we propose new iterative algorithms to implement fair scheduling policies that robustify the otherwise nonrobust (in the presence of noncooperation) alpha-fair scheduling algorithms.
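
For reference, the parametric family referred to above is conventionally defined (Mo and Walrand) by allocating rates that maximize the alpha-fair utility

U_\alpha(x) = \frac{x^{1-\alpha}}{1-\alpha} \ \ (\alpha \ge 0,\ \alpha \neq 1), \qquad U_1(x) = \log x,

so that \alpha = 1 recovers proportional fairness and \alpha \to \infty approaches max-min fairness.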

Relevance: 90.00%

Abstract:

In a complete bipartite graph with vertex sets of cardinalities n and n', assign random weights, drawn independently for each edge from an exponential distribution with mean 1. We show that, as n \to \infty, with n' = \lfloor n/\alpha \rfloor for any fixed \alpha > 1, the minimum weight of many-to-one matchings converges to a constant (depending on \alpha). Many-to-one matching arises as an optimization step in an algorithm for genome sequencing and as a measure of distance between finite sets. We prove that a belief propagation (BP) algorithm converges asymptotically to the optimal solution. We use the objective method of Aldous to prove our results. We build on previous works on the minimum weight matching and minimum weight edge cover problems to extend the objective method and to further the applicability of belief propagation to random combinatorial optimization problems.
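
For context, in the balanced one-to-one case this is the classical random assignment problem, for which the objective method gives the celebrated limit (Aldous, 2001)

\lim_{n \to \infty} \mathbb{E}[M_n] = \zeta(2) = \frac{\pi^2}{6},

where M_n is the minimum weight of a perfect matching under i.i.d. Exp(1) edge weights; the result above extends this program to many-to-one matchings, with a limit constant that depends on \alpha.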

Relevance: 90.00%

Abstract:

Estimation of the dissociation constant, or pK_a, of weak acids continues to be a central goal in theoretical chemistry. Here we show that ab initio Car-Parrinello molecular dynamics simulations, in conjunction with metadynamics calculations of the free energy profile of the dissociation reaction, can provide reasonable estimates of the successive pK_a values of polyprotic acids. We use the distance-dependent coordination number of the protons bound to the hydroxyl oxygen of the carboxylic group as the collective variable to explore the free energy profile of the dissociation process. Water molecules, sufficient to complete three hydration shells surrounding the acid molecule, were included explicitly in the computation. Two distinct minima corresponding to the dissociated and undissociated states of the acid are observed, and the difference in their free energy values provides the estimate for pK_a, the acid dissociation constant. We show that the method predicts the pK_a value of benzoic acid in good agreement with experiment, and then show, using phthalic acid (benzene dicarboxylic acid) as a test system, that both the first and second pK_a values, as well as the subtle difference in their values for different isomers, can be predicted in reasonable agreement with experimental data.
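
The mapping from the free energy difference \Delta F between the two minima to the dissociation constant is the standard thermodynamic relation (up to standard-state corrections, which the simulation protocol may treat separately):

pK_a = \frac{\Delta F}{RT \ln 10} \approx \frac{\Delta F}{2.303\, RT}

At 300 K, RT \ln 10 \approx 5.7 kJ/mol, so each pK_a unit corresponds to roughly 5.7 kJ/mol of dissociation free energy.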

Relevance: 90.00%

Abstract:

The role of the molar volume in the estimated diffusion parameters has been a matter of speculation for decades. The Matano-Boltzmann method was the first to be developed for estimating the variation of the interdiffusion coefficient with composition. However, it can be used only when the molar volume varies ideally or remains constant. Although there are no such systems, this method is still being used under the assumption of ideal variation. More efficient methods were developed by Sauer-Freise, den Broeder, and Wagner to tackle this problem; however, there is a lack of research indicating which of them is the most efficient. We show that Wagner's method is the most suitable one when the molar volume deviates from the ideal value. Similarly, there are two methods, proposed by Heumann and by van Loo, for estimating the ratio of the intrinsic diffusion coefficients at the Kirkendall marker plane. The Heumann method, like the Matano-Boltzmann method, is suitable only when the molar volume varies more or less ideally or remains constant. In most real systems, where the molar volume deviates from ideality, it is safer to use the van Loo method. We show that the Heumann method introduces large errors even for a very small deviation of the molar volume from the ideal value, whereas the van Loo method is relatively insensitive to it. Overall, the estimation of the intrinsic diffusion coefficients is more sensitive to the molar volume than that of the interdiffusion coefficient.
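
For reference, the classical Matano-Boltzmann expression, exact only for a constant molar volume, extracts the interdiffusion coefficient at a composition C* from a profile measured after annealing time t, with x measured from the Matano plane:

\tilde{D}(C^*) = -\frac{1}{2t} \left( \frac{dx}{dC} \right)_{C = C^*} \int_{C^-}^{C^*} x \, dC

Here C^- is the far-field composition at one end of the diffusion couple; the Sauer-Freise, den Broeder, and Wagner treatments avoid locating the Matano plane and incorporate the molar volume variation explicitly.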

Relevance: 90.00%

Abstract:

The fluctuations exhibited by the cross sections generated in a compound-nucleus reaction or, more generally, in a quantum-chaotic scattering process, when varying the excitation energy or another external parameter, are characterized by the width \Gamma_{corr} of the cross-section correlation function. Brink and Stephen [Phys. Lett. 5, 77 (1963)] proposed a method for its determination by simply counting the number of maxima featured by the cross sections as a function of the parameter under consideration. They stated that the product of the average number of maxima per unit energy range and \Gamma_{corr} is constant in the Ericson region of strongly overlapping resonances. We use the analogy between the scattering formalism for compound-nucleus reactions and for microwave resonators to test this method experimentally with unprecedented accuracy using large data sets, and we propose an analytical description for the regions of isolated and overlapping resonances.
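
For context, in the Ericson regime the normalized cross-section autocorrelation is a Lorentzian in the energy increment \varepsilon,

C(\varepsilon) = \frac{\langle \sigma(E)\, \sigma(E + \varepsilon) \rangle}{\langle \sigma \rangle^2} - 1 \propto \frac{\Gamma_{corr}^2}{\Gamma_{corr}^2 + \varepsilon^2},

and for such a correlation function the mean number of maxima per unit energy is proportional to 1/\Gamma_{corr} (the proportionality constant is commonly quoted as roughly 0.55), which is what lets \Gamma_{corr} be read off by counting maxima.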