137 results for Lattice Points
Abstract:
We investigate the spatial search problem on the two-dimensional square lattice, using the Dirac evolution operator discretized according to the staggered lattice fermion formalism. d=2 is the critical dimension for the spatial search problem, where the infrared divergence of the evolution operator leads to logarithmic factors in the scaling behavior. As a result, the construction used in our accompanying article [A. Patel and M. A. Rahaman, Phys. Rev. A 82, 032330 (2010)] provides an O(√N ln N) algorithm, which is not optimal. The scaling behavior can be improved to O(√(N ln N)) by cleverly controlling the massless Dirac evolution operator with an ancilla qubit, as proposed by Tulsi [Phys. Rev. A 78, 012310 (2008)]. We reinterpret the ancilla control as the introduction of an effective mass at the marked vertex, and optimize the proportionality constants of the scaling behavior of the algorithm by numerically tuning the parameters.
Abstract:
Context-sensitive points-to analysis is critical for several program optimizations. However, as the number of contexts grows exponentially, storage requirements for the analysis increase tremendously for large programs, making the analysis non-scalable. We propose a scalable flow-insensitive context-sensitive inclusion-based points-to analysis that uses a specially designed multi-dimensional Bloom filter to store the points-to information. Two key observations motivate our proposal: (i) points-to information (between pointer-object and between pointer-pointer) is sparse, and (ii) moving from an exact to an approximate representation of points-to information only leads to reduced precision without affecting correctness of the (may-points-to) analysis. By using an approximate representation, a multi-dimensional Bloom filter can significantly reduce the memory requirements with a probabilistic bound on loss in precision. Experimental evaluation on SPEC 2000 benchmarks and two large open-source programs reveals that with an average storage requirement of 4MB, our approach achieves almost the same precision (98.6%) as the exact implementation. By increasing the average memory to 27MB, it achieves precision up to 99.7% for these benchmarks. Using Mod/Ref analysis as the client, we find that the client analysis is not affected that often even when there is some loss of precision in the points-to representation. We find that the NoModRef percentage is within 2% of the exact analysis while requiring 4MB (maximum 15MB) memory and less than 4 minutes on average for the points-to analysis. Another major advantage of our technique is that it allows one to trade off precision for memory usage of the analysis.
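To make the Bloom-filter idea concrete, here is a minimal sketch of an ordinary (single-table) Bloom filter storing (pointer, object) pairs. It does not reproduce the paper's multi-dimensional layout or parameter choices; the class name, sizes, and hashing scheme below are illustrative only.

```python
import hashlib

class BloomFilter:
    """Approximate set membership: false positives possible, no false negatives."""

    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits)      # one byte per bit, for simplicity

    def _positions(self, item):
        # Derive k independent bit positions by salting a cryptographic hash.
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.num_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def may_contain(self, item):
        # False => definitely absent; True => probably present.
        return all(self.bits[p] for p in self._positions(item))

# Points-to facts stored as (pointer, object) pairs:
pts = BloomFilter()
pts.add(("p", "obj_a"))
pts.add(("q", "obj_b"))
assert pts.may_contain(("p", "obj_a"))
```

Queries can report false positives but never false negatives, which is exactly why replacing exact storage with this approximate representation costs only precision, not correctness, for a may-points-to analysis.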
Abstract:
The problem of intrusion detection and location identification in the presence of clutter is considered for a hexagonal sensor-node geometry. It is noted that in any practical application, for a given fixed intruder or clutter location, only a small number of neighboring sensor nodes will register a significant reading. Thus sensing may be regarded as a local phenomenon, and performance is strongly dependent on the local geometry of the sensor nodes. We focus on the case when the sensor nodes form a hexagonal lattice. The optimality of the hexagonal lattice with respect to density of packing and covering, and largeness of the kissing number, suggests that this is the best possible arrangement from a sensor network viewpoint. The results presented here are clearly relevant when the particular sensing application permits a deterministic placement of sensors. The results also serve as a performance benchmark for the case of a random deployment of sensors. A novel feature of our analysis of the hexagonal sensor grid is a signal-space viewpoint which sheds light on achievable performance. Under this viewpoint, the problem of intruder detection is reduced to one of determining, in a distributed manner, the optimal decision boundary that separates the signal spaces SI and SC associated with intruder and clutter respectively. Given the difficulty of implementing the optimal detector, we present a low-complexity distributed algorithm under which the surfaces SI and SC are separated by a well-chosen hyperplane. The algorithm is designed to be efficient in terms of communication cost by minimizing the expected number of bits transmitted by a sensor.
Abstract:
A geometric and nonparametric procedure for testing whether two finite sets of points are linearly separable is proposed. The Linear Separability Test is equivalent to a test that determines if a strictly positive point h > 0 exists in the range of a matrix A (related to the points in the two finite sets). The algorithm proposed in the paper iteratively checks if a strictly positive point exists in a subspace by projecting a strictly positive vector with equal coordinates (p) on the subspace. At the end of each iteration, the subspace is reduced to a lower-dimensional subspace. The test is completed within r ≤ min(n, d + 1) steps, for both linearly separable and non-separable problems (r is the rank of A, n is the number of points, and d is the dimension of the space containing the points). The worst-case time complexity of the algorithm is O(nr³) and the space complexity is O(nd). A small review of some of the prominent algorithms and their time complexities is included. The worst-case computational complexity of our algorithm is lower than the worst-case computational complexity of Simplex, Perceptron, Support Vector Machine and Convex Hull Algorithms, if d
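Of the algorithms reviewed in the comparison above, the perceptron is the simplest to sketch. The code below uses that standard method, not the paper's subspace-projection test; the function name and the iteration budget are illustrative choices. If the data are separable the perceptron is guaranteed to converge; exhausting the budget without converging is (in this sketch) treated as evidence of non-separability rather than a proof.

```python
import numpy as np

def linearly_separable(X, y, max_iters=1000):
    """Perceptron-based separability check.

    X: (n, d) array of points; y: labels in {-1, +1}.
    Returns (separable_flag, weight_vector in homogeneous coordinates).
    """
    n, d = X.shape
    Xh = np.hstack([X, np.ones((n, 1))])   # append 1 for the bias term
    w = np.zeros(d + 1)
    for _ in range(max_iters):
        mistakes = 0
        for xi, yi in zip(Xh, y):
            if yi * (w @ xi) <= 0:         # misclassified (or on the boundary)
                w += yi * xi               # standard perceptron update
                mistakes += 1
        if mistakes == 0:
            return True, w                 # every point strictly on its side
    return False, w                        # gave up: likely not separable

# Two separable clusters in the plane:
X = np.array([[0., 0.], [0., 1.], [3., 3.], [4., 3.]])
y = np.array([-1, -1, 1, 1])
ok, w = linearly_separable(X, y)
assert ok
```

On a non-separable instance such as XOR-labeled corners of the unit square, the inner loop can never finish a pass without a mistake, so the function returns False after the budget is spent.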
Abstract:
Regenerating codes are a class of distributed storage codes that allow for efficient repair of failed nodes, as compared to traditional erasure codes. An [n, k, d] regenerating code permits the data to be recovered by connecting to any k of the n nodes in the network, while requiring that a failed node be repaired by connecting to any d nodes. The amount of data downloaded for repair is typically much smaller than the size of the source data. Previous constructions of exact-regenerating codes have been confined to the case n = d + 1. In this paper, we present optimal, explicit constructions of (a) Minimum Bandwidth Regenerating (MBR) codes for all values of [n, k, d] and (b) Minimum Storage Regenerating (MSR) codes for all [n, k, d ≥ 2k − 2], using a new product-matrix framework. The product-matrix framework is also shown to significantly simplify system operation. To the best of our knowledge, these are the first constructions of exact-regenerating codes that allow the number n of nodes in the network to be chosen independently of the other parameters. The paper also contains a simpler description, in the product-matrix framework, of a previously constructed MSR code with [n = d + 1, k, d ≥ 2k − 1].
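As a quick sanity check on the two operating points, the per-node storage α and the file size B follow the standard cut-set formulas when normalized to β = 1 (one symbol downloaded per helper node): at the MBR point α = d and B = kd − k(k−1)/2, while at the MSR point α = d − k + 1 and B = kα. The function names below are illustrative; this is parameter arithmetic only, not the product-matrix construction itself.

```python
def mbr_params(n, k, d):
    """Minimum Bandwidth Regenerating point, normalized to beta = 1."""
    assert k <= d < n
    alpha = d                          # symbols stored per node
    B = k * d - k * (k - 1) // 2       # file size supported
    return alpha, B

def msr_params(n, k, d):
    """Minimum Storage Regenerating point, normalized to beta = 1."""
    assert k <= d < n
    alpha = d - k + 1                  # symbols stored per node
    B = k * alpha                      # file size supported
    return alpha, B

# The paper's MSR construction needs d >= 2k - 2; e.g. [n, k, d] = [6, 3, 4]:
print(msr_params(6, 3, 4))   # (2, 6): 2 symbols per node, file size 6
print(mbr_params(6, 3, 4))   # (4, 9): 4 symbols per node, file size 9
```

Note the trade-off the two points embody: MBR minimizes repair bandwidth at the cost of extra storage per node, while MSR stores the minimum possible per node.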
Abstract:
In many cases, a mobile user has the option of connecting to one of several IEEE 802.11 access points (APs), each using an independent channel. User throughput in each AP is determined by the number of other users as well as the frame size and physical rate being used. We consider the scenario where users could multihome, i.e., split their traffic amongst all the available APs, based on the throughput they obtain and the price charged. Thus, they are involved in a non-cooperative game with each other. We convert the problem into a fluid model and show that under a pricing scheme, which we call the cost price mechanism, the total system throughput is maximized, i.e., the system suffers no loss of efficiency due to selfish dynamics. We also study the case where the Internet Service Provider (ISP) could charge prices greater than that of the cost price mechanism. We show that even in this case multihoming outperforms unihoming, both in terms of throughput as well as profit to the ISP.
Abstract:
Electronic states of CeO(2), Ce(1-x)Pt(x)O(2-δ), and Ce(1-x-y)Ti(y)Pt(x)O(2-δ) electrodes have been investigated by X-ray photoelectron spectroscopy as a function of applied potential for oxygen evolution and formic acid and methanol oxidation. Ionically dispersed platinum in Ce(1-x)Pt(x)O(2-δ) and Ce(1-x-y)Ti(y)Pt(x)O(2-δ) is active toward these reactions compared with CeO(2) alone. The higher electrocatalytic activity of Pt(2+) ions in CeO(2) and Ce(1-x)Ti(x)O(2) compared with the same amount of Pt(0) in Pt/C is attributed to Pt(2+) ion interaction with CeO(2) and Ce(1-x)Ti(x)O(2), which activates the lattice oxygen of the support oxide. Utilization of this activated lattice oxygen has been demonstrated in terms of high oxygen evolution in acid medium with these catalysts. Further, ionic platinum in CeO(2) and Ce(1-x)Ti(x)O(2) does not suffer from the CO poisoning effect, unlike Pt(0) in Pt/C, due to participation of the activated lattice oxygen, which oxidizes the intermediate CO to CO(2). Hence, higher activity is observed toward formic acid and methanol oxidation compared with the same amount of Pt metal in Pt/C.
Abstract:
The standard quantum search algorithm lacks a feature, enjoyed by many classical algorithms, of having a fixed-point, i.e. a monotonic convergence towards the solution. Here we present two variations of the quantum search algorithm, which get around this limitation. The first replaces selective inversions in the algorithm by selective phase shifts of $\frac{\pi}{3}$. The second controls the selective inversion operations using two ancilla qubits, and irreversible measurement operations on the ancilla qubits drive the starting state towards the target state. Using $q$ oracle queries, these variations reduce the probability of finding a non-target state from $\epsilon$ to $\epsilon^{2q+1}$, which is asymptotically optimal. Similar ideas can lead to robust quantum algorithms, and provide conceptually new schemes for error correction.
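One level of the π/3 recursion can be checked numerically in a two-dimensional toy model spanned by the target state and its orthogonal complement: if a unitary U reaches the target with failure probability ε, the sequence U R_s U† R_t U (with R_s and R_t the selective π/3 phase shifts of the start and target states) fails with probability ε³. The sketch below is an illustrative check of that property, not the paper's construction.

```python
import numpy as np

eps = 0.04                                   # initial failure probability
a, b = np.sqrt(1 - eps), np.sqrt(eps)
# Basis: e0 = |t> (target), e1 = its orthogonal complement. In this toy
# model the start state |s> is also e0, and U rotates it so that
# U|s> = a|t> + b|t_perp>, i.e. |<t|U|s>|^2 = 1 - eps.
U = np.array([[a, -b], [b, a]], dtype=complex)
w = np.exp(1j * np.pi / 3)                   # selective pi/3 phase shift
R_t = np.diag([w, 1.0 + 0j])                 # phases the target |t>
R_s = np.diag([w, 1.0 + 0j])                 # phases the start |s> (= e0 here)

s = np.array([1.0, 0.0], dtype=complex)
out = U @ R_s @ U.conj().T @ R_t @ U @ s
failure = abs(out[1]) ** 2                   # probability of a non-target outcome
assert np.isclose(failure, eps ** 3)         # 0.04 -> 6.4e-05
```

The identity behind the cubing is that ω = e^{iπ/3} satisfies ω² − ω + 1 = 0, which cancels the leading failure terms and leaves a residual amplitude of magnitude ε^{3/2}; recursing drives the failure probability monotonically toward zero, which is the fixed-point property.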
Abstract:
In the present study, singular fractal functions (SFF) were used to generate stress-strain plots for quasi-brittle materials like concrete and cement mortar, and subsequently the stress-strain plot of cement mortar obtained using SFF was used for modeling the fracture process in concrete. The fracture surface of concrete is rough and irregular. It is affected by the concrete's microstructure, which is influenced by the water-cement ratio, grade of cement and type of aggregate [1-4]. The macrostructural properties, such as the size and shape of the specimen, the initial notch length and the rate of loading, also contribute to the shape of the fracture surface of concrete. It is known that concrete is a heterogeneous and quasi-brittle material containing micro-defects, and its mechanical properties strongly relate to the presence of micro-pores and micro-cracks in concrete [1-4]. The damage in concrete is believed to be mainly due to the initiation and development of micro-defects with irregularity and fractal characteristics. However, repeated observations at various magnifications also reveal a variety of additional structures that fall between the `micro' and the `macro' and have not yet been described satisfactorily in a systematic manner [1-11,15-17]. The concept of singular fractal functions by Mosolov was used to generate stress-strain plots of cement concrete and cement mortar, and subsequently the stress-strain plot of cement mortar was used in a two-dimensional lattice model [28]. The two-dimensional lattice model was used to study concrete fracture by considering softening of the matrix (cement mortar). The results obtained from simulations with the lattice model show the softening behavior of concrete and agree fairly well with the experimental results. The number of fractured elements is compared with the acoustic emission (AE) hits.
The trend in the cumulative fractured beam elements in the lattice fracture simulation reasonably reflected the trend in the recorded AE measurements. In other words, the pattern in which AE hits were distributed around the notch has the same trend as that of the fractured elements around the notch, which supports the lattice model. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
We have studied the magnetic-field-dependent rf (20 MHz) losses in Bi2Sr2CaCu2O8 single crystals in the low-field and high-temperature regime. Above Hc1 the dissipation begins to decrease as the field is increased, and it exhibits a minimum at HM > Hc1. For H > HM the loss increases monotonically. We attribute the decrease in loss above Hc1 to the stiffening of the vortex lines due to the attractive electromagnetic interaction between the 2D vortices (that comprise the vortex line at low fields) in adjacent CuO2 bilayers. The minimum at HM implies that the vortex lines are stiffest, and hence represents a transition into the vortex solid state from the narrow vortex liquid in the vicinity of Hc1. The increase in loss for H > HM marks the melting of the vortex lattice and hence a second transition into the vortex liquid regime. We discuss our results in the light of the recent theory of reentrant melting of the vortex lattice by G. Blatter et al. [Phys. Rev. B 54, 72 (1996)].