949 results for Trigger-points


Relevance: 20.00%

Abstract:

We calculate analytically the average number of fixed points in the Hopfield model of associative memory when a random antisymmetric part is added to the otherwise symmetric synaptic matrix. Addition of the antisymmetric part causes an exponential decrease in the total number of fixed points. If the relative strength of the antisymmetric component is small, its presence does not cause any substantial degradation of the quality of retrieval when the memory loading level is low. We also present results of numerical simulations, which provide qualitative confirmation (and, for some aspects, quantitative confirmation) of the predictions of the analytic study. Our numerical results suggest that the analytic calculation of the average number of fixed points yields the correct value for the typical number of fixed points.
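
As a concrete companion to the analytic result, here is a minimal Python sketch (not the paper's calculation) that builds a Hebbian synaptic matrix from random patterns, adds a random antisymmetric part whose relative strength is set by an assumed parameter lam, and counts the fixed points of the zero-temperature dynamics by exhaustive enumeration over a small network:

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(0)
    N, P = 10, 2            # small network and low memory loading, so enumeration is feasible
    lam = 0.3               # relative strength of the antisymmetric part (assumed value)

    xi = rng.choice([-1, 1], size=(P, N))      # stored patterns
    J_sym = xi.T @ xi / N                      # Hebbian (symmetric) synaptic matrix
    np.fill_diagonal(J_sym, 0.0)

    A = rng.normal(size=(N, N))
    J_asym = (A - A.T) / np.sqrt(N)            # random antisymmetric component

    J = J_sym + lam * J_asym

    # a state s is a fixed point of the deterministic dynamics if sign(J s) = s
    n_fixed = 0
    for bits in product([-1, 1], repeat=N):
        s = np.array(bits)
        if np.all(np.sign(J @ s) == s):        # ties (zero local field) count as non-fixed
            n_fixed += 1
    print("fixed points:", n_fixed)

Increasing lam should shrink the count, in line with the exponential decrease in the number of fixed points described above.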

Relevance: 20.00%

Abstract:

We present here a critical assessment of two vortex approaches (both two-dimensional) to the modelling of turbulent mixing layers. In the first approach the flow is represented by point vortices, and in the second it is simulated as the evolution of a continuous vortex sheet composed of short linear elements or "panels". The comparison is based on fresh simulations using approximately the same number of elements in either model, paying due attention in both to the boundary conditions far downstream as well as to those on the splitter plate from which the mixing layer issues. The comparisons show that, while both models satisfy the well-known invariants of vortex dynamics to approximately the same accuracy, the vortex panel model, although ultimately not convergent, leads to smoother roll-up, gives stresses and moments in closer agreement with experiment, and has a higher computational efficiency for a given degree of convergence on moments. The point vortex model, while faster for a given number of elements, produces an unsatisfactory roll-up which (for the number of elements used) is rendered worse by the incorporation of the Van der Vooren correction for sheet curvature.
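
For orientation, a minimal Python sketch of the first (point-vortex) approach, with an assumed desingularisation parameter delta and a plain forward-Euler step; the splitter-plate and far-downstream boundary treatments discussed above are omitted:

    import numpy as np

    def induced_velocity(z, gamma, delta=1e-3):
        """Velocity u + i v induced at each vortex position by all the others.
        z: complex positions, gamma: circulations, delta: small smoothing (assumed)."""
        dz = z[:, None] - z[None, :]
        r2 = np.abs(dz) ** 2 + delta ** 2
        np.fill_diagonal(r2, np.inf)                      # exclude self-induction
        # u - i v = (1 / 2 pi i) * sum_j gamma_j / (z - z_j)
        w = np.sum(gamma[None, :] * np.conj(dz) / r2, axis=1) / (2j * np.pi)
        return np.conj(w)

    # roll-up of a weakly perturbed row of equal-strength point vortices
    M = 100
    x = np.linspace(0.0, 1.0, M)
    z = x + 0.01j * np.sin(2.0 * np.pi * x)
    gamma = np.full(M, 1.0 / M)
    dt = 0.01
    for _ in range(200):
        z = z + dt * induced_velocity(z, gamma)           # forward-Euler time step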

Relevance: 20.00%

Abstract:

We report a novel phase behavior in aqueous solutions of simple organic solutes near their liquid/liquid critical points, where a solid-like third phase appears at the liquid/liquid interface. The phenomenon has been found in three different laboratories. It appears in many aqueous systems of organic solutes and becomes enhanced upon the addition of salt to these solutions.

Relevance: 20.00%

Abstract:

In this paper, a method of tracking the peak power in a wind energy conversion system (WECS) is proposed that is independent of the turbine parameters and air density. The algorithm searches for the peak power by varying the speed in the desired direction. The generator is operated in the speed-control mode, with the speed reference dynamically modified in accordance with the magnitude and direction of the change in active power. The peak power points in the P-omega curve correspond to dP/domega = 0, and the optimum-point search algorithm exploits this fact. The generator considered is a wound-rotor induction machine whose stator is connected directly to the grid and whose rotor is fed through back-to-back pulse-width-modulation (PWM) converters. Stator-flux-oriented vector control is applied to control the active and reactive current loops independently. The turbine characteristics are generated by a dc motor fed from a commercial dc drive. All of the control loops are executed by a single-chip digital signal processor (DSP) controller, the TMS320F240. Experimental results show that the performance of the control algorithm compares well with that of the conventional torque-control method.
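
The search logic is easy to sketch. The Python fragment below uses a hypothetical unimodal P-omega curve in place of the dc-motor turbine emulator and a fixed perturbation step, both assumptions made only for illustration; in the paper the new speed reference is handed to the vector-controlled speed loop:

    def turbine_power(omega):
        # hypothetical P-omega curve peaking at omega = 30 (stand-in for the emulated turbine)
        return max(0.0, 1000.0 - 4.0 * (omega - 30.0) ** 2)

    def track_peak(omega0=10.0, step=0.5, iters=200):
        """Hill-climbing on the speed reference: keep moving in the direction that
        increases power, reverse when the measured power starts to fall."""
        omega, direction = omega0, +1.0
        p_prev = turbine_power(omega)
        for _ in range(iters):
            omega += direction * step        # updated speed reference
            p = turbine_power(omega)
            if p < p_prev:                   # sign of dP/domega has changed: past the peak
                direction = -direction
            p_prev = p
        return omega

    print(track_peak())   # should settle near omega = 30, where dP/domega = 0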

Relevance: 20.00%

Abstract:

Context-sensitive points-to analysis is critical for several program optimizations. However, as the number of contexts grows exponentially, the storage requirements of the analysis increase tremendously for large programs, making the analysis non-scalable. We propose a scalable flow-insensitive, context-sensitive, inclusion-based points-to analysis that uses a specially designed multi-dimensional bloom filter to store the points-to information. Two key observations motivate our proposal: (i) points-to information (both pointer-to-object and pointer-to-pointer) is sparse, and (ii) moving from an exact to an approximate representation of points-to information only reduces precision without affecting the correctness of the (may-points-to) analysis. By using an approximate representation, a multi-dimensional bloom filter can significantly reduce the memory requirements with a probabilistic bound on the loss in precision. Experimental evaluation on the SPEC 2000 benchmarks and two large open-source programs reveals that, with an average storage requirement of 4 MB, our approach achieves almost the same precision (98.6%) as the exact implementation. By increasing the average memory to 27 MB, it achieves precision of up to 99.7% on these benchmarks. Using Mod/Ref analysis as the client, we find that the client analysis is often unaffected even when there is some loss of precision in the points-to representation: the NoModRef percentage is within 2% of the exact analysis, while requiring 4 MB (maximum 15 MB) of memory and less than 4 minutes on average for the points-to analysis. Another major advantage of our technique is that it allows precision to be traded off against the memory usage of the analysis.
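
A rough Python sketch of the underlying idea, storing may-points-to tuples (pointer, context, object) in a bloom filter; the class name, hash construction and flat bit-vector layout are illustrative assumptions rather than the paper's multi-dimensional design:

    import hashlib

    class PointsToBloom:
        """Approximate store of (pointer, context, object) may-points-to facts."""
        def __init__(self, n_bits=1 << 20, n_hashes=3):
            self.bits = bytearray(n_bits // 8)
            self.n_bits, self.n_hashes = n_bits, n_hashes

        def _positions(self, pointer, context, obj):
            key = f"{pointer}|{context}|{obj}".encode()
            for i in range(self.n_hashes):
                digest = hashlib.sha256(key + bytes([i])).digest()
                yield int.from_bytes(digest[:8], "little") % self.n_bits

        def add(self, pointer, context, obj):
            for p in self._positions(pointer, context, obj):
                self.bits[p // 8] |= 1 << (p % 8)

        def may_point_to(self, pointer, context, obj):
            # no false negatives; false positives only cost precision, not correctness
            return all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(pointer, context, obj))

    pts = PointsToBloom()
    pts.add("p", "main->foo", "heap_obj_1")
    print(pts.may_point_to("p", "main->foo", "heap_obj_1"))   # True
    print(pts.may_point_to("q", "main->foo", "heap_obj_1"))   # almost certainly False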

Relevance: 20.00%

Abstract:

A geometric and non-parametric procedure for testing whether two finite sets of points are linearly separable is proposed. The linear separability test is equivalent to a test that determines whether a strictly positive point h > 0 exists in the range of a matrix A (related to the points in the two finite sets). The algorithm proposed in the paper iteratively checks whether a strictly positive point exists in a subspace by projecting a strictly positive vector with equal co-ordinates (p) onto the subspace. At the end of each iteration, the subspace is reduced to a lower-dimensional subspace. The test is completed within r ≤ min(n, d + 1) steps, for both linearly separable and non-separable problems (r is the rank of A, n is the number of points, and d is the dimension of the space containing the points). The worst-case time complexity of the algorithm is O(nr^3) and its space complexity is O(nd). A short review of some of the prominent algorithms and their time complexities is included. The worst-case computational complexity of our algorithm is lower than the worst-case computational complexity of the Simplex, Perceptron, Support Vector Machine and Convex Hull algorithms, if d
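
The criterion itself is straightforward to check directly. The Python sketch below builds the matrix A from the two point sets and tests whether some z satisfies A z > 0 by solving a small linear program with scipy; this is a reference check of the same condition, not the paper's iterative-projection algorithm:

    import numpy as np
    from scipy.optimize import linprog

    def linearly_separable(X, Y):
        """True if some hyperplane w.x + b strictly separates point sets X and Y."""
        X, Y = np.asarray(X, float), np.asarray(Y, float)
        # rows of A: [x_i, 1] for class X and [-y_j, -1] for class Y;
        # the sets are separable iff A z > 0 has a solution z = (w, b)
        A = np.vstack([np.hstack([X, np.ones((len(X), 1))]),
                       np.hstack([-Y, -np.ones((len(Y), 1))])])
        m = A.shape[1]
        # by scale invariance, A z > 0 is solvable iff A z >= 1 is feasible
        res = linprog(c=np.zeros(m), A_ub=-A, b_ub=-np.ones(len(A)),
                      bounds=[(None, None)] * m, method="highs")
        return res.success

    print(linearly_separable([[0, 0], [1, 0]], [[0, 2], [1, 2]]))   # True
    print(linearly_separable([[0, 0], [2, 2]], [[0, 2], [2, 0]]))   # False (XOR-like)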

Relevance: 20.00%

Abstract:

Regenerating codes are a class of distributed storage codes that allow for more efficient repair of failed nodes than traditional erasure codes. An [n, k, d] regenerating code permits the data to be recovered by connecting to any k of the n nodes in the network, while requiring that a failed node be repaired by connecting to any d nodes. The amount of data downloaded for repair is typically much smaller than the size of the source data. Previous constructions of exact-regenerating codes have been confined to the case n = d + 1. In this paper, we present optimal, explicit constructions of (a) Minimum Bandwidth Regenerating (MBR) codes for all values of [n, k, d] and (b) Minimum Storage Regenerating (MSR) codes for all [n, k, d >= 2k - 2], using a new product-matrix framework. The product-matrix framework is also shown to significantly simplify system operation. To the best of our knowledge, these are the first constructions of exact-regenerating codes that allow the number n of nodes in the network to be chosen independently of the other parameters. The paper also contains a simpler description, in the product-matrix framework, of a previously constructed MSR code with [n = d + 1, k, d >= 2k - 1].
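
A small numerical sketch of the product-matrix MBR idea in Python, using real arithmetic and a Vandermonde encoding matrix purely for illustration (the codes are normally defined over a finite field); the parameters (n, k, d) = (6, 3, 4) are an arbitrary example:

    import numpy as np

    n, k, d = 6, 3, 4                                    # example MBR parameters
    rng = np.random.default_rng(0)

    # message matrix M (d x d, symmetric): S is k x k symmetric, T is k x (d - k)
    S = rng.integers(0, 10, (k, k)).astype(float)
    S = S + S.T
    T = rng.integers(0, 10, (k, d - k)).astype(float)
    M = np.block([[S, T], [T.T, np.zeros((d - k, d - k))]])

    # encoding matrix Psi (n x d): Vandermonde, so any d rows form an invertible matrix
    x = np.arange(1.0, n + 1.0)
    Psi = np.vander(x, d, increasing=True)

    storage = Psi @ M                                    # node i stores the row psi_i^T M

    # repair of a failed node f: each of d helpers sends the single symbol psi_j^T M psi_f
    f = 2
    helpers = [j for j in range(n) if j != f][:d]
    received = np.array([storage[j] @ Psi[f] for j in helpers])
    recovered = np.linalg.solve(Psi[helpers], received)  # solves for M psi_f
    assert np.allclose(recovered, storage[f])            # equals psi_f^T M since M is symmetric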

Relevance: 20.00%

Abstract:

In many cases, a mobile user has the option of connecting to one of several IEEE 802.11 access points (APs), each using an independent channel. User throughput at each AP is determined by the number of other users as well as by the frame size and physical rate being used. We consider the scenario where users can multihome, i.e., split their traffic amongst all the available APs, based on the throughput they obtain and the price charged. Thus, they are involved in a non-cooperative game with each other. We convert the problem into a fluid model and show that under a pricing scheme, which we call the cost price mechanism, the total system throughput is maximized, i.e., the system suffers no loss of efficiency due to selfish dynamics. We also study the case where the Internet Service Provider (ISP) charges prices greater than those of the cost price mechanism. We show that even in this case multihoming outperforms unihoming, both in throughput and in profit to the ISP.

Relevance: 20.00%

Abstract:

The standard quantum search algorithm lacks a feature, enjoyed by many classical algorithms, of having a fixed point, i.e., a monotonic convergence towards the solution. Here we present two variations of the quantum search algorithm which get around this limitation. The first replaces the selective inversions in the algorithm by selective phase shifts of $\frac{\pi}{3}$. The second controls the selective inversion operations using two ancilla qubits, with irreversible measurement operations on the ancilla qubits driving the starting state towards the target state. Using $q$ oracle queries, these variations reduce the probability of finding a non-target state from $\epsilon$ to $\epsilon^{2q+1}$, which is asymptotically optimal. Similar ideas can lead to robust quantum algorithms, and provide conceptually new schemes for error correction.
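
The effect of the $\frac{\pi}{3}$ phase shifts can be checked numerically in a two-dimensional toy model. The Python sketch below (an illustration only, with a random single-qubit unitary standing in for the search operator U) applies one level of the composite transformation U R_s U† R_t U, which uses a single selective operation on the target (q = 1), and compares the resulting failure probability with $\epsilon^3 = \epsilon^{2q+1}$:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    U, _ = np.linalg.qr(A)                         # random 2 x 2 unitary standing in for U

    s = np.array([1.0, 0.0])                       # source state |s>
    t = np.array([0.0, 1.0])                       # target state |t>
    eps = 1.0 - abs(t.conj() @ U @ s) ** 2         # failure probability of U alone

    w = np.exp(1j * np.pi / 3)
    R_s = np.eye(2) - (1 - w) * np.outer(s, s.conj())    # selective pi/3 phase shift on |s>
    R_t = np.eye(2) - (1 - w) * np.outer(t, t.conj())    # selective pi/3 phase shift on |t>

    V = U @ R_s @ U.conj().T @ R_t @ U             # one level of the fixed-point construction
    eps_V = 1.0 - abs(t.conj() @ V @ s) ** 2

    print(eps, eps_V, eps ** 3)                    # eps_V should agree with eps**3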

Relevance: 20.00%

Abstract:

The time delay to the firing of a triggered vacuum gap (t.v.g.) containing barium titanate in the trigger gap is investigated as a function of the main gap voltage, main gap length, trigger pulse duration, trigger current and trigger voltage. The time delay decreases steadily with increasing trigger current and trigger voltage until it reaches saturation. The effect of varying the main gap length and voltage on the time delay is not strong. Before 'conditioning' the t.v.g., two groups of time delays, long (>100 µs) and short (<10 µs), are simultaneously observed when a large number of trials are conducted. After conditioning, only the group of short time delays is present. This is attributed to the marked reduction of the resistance of the trigger gap across the surface of the solid dielectric resulting directly from the conditioning effect.

Relevance: 20.00%

Abstract:

Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any subset of k nodes within the n-node network. However, regenerating codes possess, in addition, the ability to repair a failed node by connecting to an arbitrary subset of d nodes. It has been shown that for the case of functional repair, there is a tradeoff between the amount of data stored per node and the bandwidth required to repair a failed node. A special case of functional repair is exact repair, where the replacement node is required to store data identical to that in the failed node. Exact repair is of interest as it greatly simplifies system implementation. The first result of this paper is an explicit, exact-repair code for the point on the storage-bandwidth tradeoff corresponding to the minimum possible repair bandwidth, for the case when d = n-1. This code has a particularly simple graphical description and, most interestingly, has the ability to carry out exact repair without any need to perform arithmetic operations. We term this ability of the code to perform repair through mere transfer of data ``repair by transfer.'' The second result of this paper shows that the interior points on the storage-bandwidth tradeoff cannot be achieved under exact repair, thus pointing to the existence of a separate tradeoff under exact repair. Specifically, we identify a set of scenarios which we term ``helper node pooling,'' and show that it is the necessity to satisfy such scenarios that overconstrains the system.
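
A toy Python sketch of the d = n - 1 repair-by-transfer idea: one coded symbol sits on each edge of the complete graph on the n nodes, and each node stores the symbols on its incident edges. The random byte values below stand in for the output of the MDS precode applied to the source data in the actual construction, so data reconstruction from k nodes is not modelled here:

    from itertools import combinations
    import random

    n = 5                                   # number of nodes; repair degree d = n - 1
    random.seed(0)

    # one placeholder symbol per edge of the complete graph K_n
    edge_symbol = {frozenset(e): random.randrange(256)
                   for e in combinations(range(n), 2)}

    # each node stores the d = n - 1 symbols on its incident edges
    storage = {i: {e: v for e, v in edge_symbol.items() if i in e} for i in range(n)}

    # repair by transfer: every surviving node hands over the one symbol it shares
    # with the failed node; no arithmetic is performed anywhere
    failed = 2
    replacement = {}
    for helper in range(n):
        if helper == failed:
            continue
        shared = frozenset({helper, failed})
        replacement[shared] = storage[helper][shared]

    assert replacement == storage[failed]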

Relevance: 20.00%

Abstract:

In this paper, the diversity-multiplexing gain tradeoff (DMT) of single-source, single-sink (ss-ss), multihop relay networks having slow-fading links is studied. In particular, the two endpoints of the DMT of ss-ss full-duplex networks are determined by showing that the maximum achievable diversity gain is equal to the min-cut and that the maximum multiplexing gain is equal to the min-cut rank, the latter by using an operational connection to a deterministic network. Also included in the paper are several results that aid in the computation of the DMT of networks operating under amplify-and-forward (AF) protocols. In particular, it is shown that the colored noise encountered in amplify-and-forward protocols can be treated as white for the purpose of DMT computation, lower bounds on the DMT of lower-triangular channel matrices are derived, and the DMT of parallel MIMO channels is computed. All protocols appearing in the paper are explicit and rely only upon AF relaying. Half-duplex networks and explicit coding schemes are studied in a companion paper.

Relevance: 20.00%

Abstract:

Closed-shell contacts between two copper(I) ions are expected to be repulsive. However, such contacts are quite frequent and are well documented. Crystallographic characterization of such contacts in unsupported and bridged multinuclear copper(I) complexes has repeatedly invited debates on the existence of cuprophilicity. Recent developments in the application of Bader's theory of atoms-in-molecules (AIM) to systems in which weak hydrogen bonds are involved suggest that the copper(I)-copper(I) contacts would benefit from a similar analysis. Thus the nature of the electron-density distributions in copper(I) dimers that are unsupported, and those that are bridged, has been examined. A comparison of complexes that are dimers of symmetrical monomers and those that are dimers of two copper(I) monomers with different coordination spheres has also been made. AIM analysis shows that a bond critical point (BCP) between the two Cu atoms is present in most cases. The nature of the BCP in terms of the electron density, ρ, and its Laplacian is quite similar to the nature of the critical points observed in hydrogen bonds in the same systems. The value of ρ is inversely correlated with the Cu-Cu distance, and is higher in asymmetrical systems than in the corresponding symmetrical systems. The ratio of the local electron potential-energy density (Vc) to the kinetic-energy density (Gc), |Vc|/Gc, at the critical point suggests that these interactions are not perfectly ionic but have some shared nature. Thus an analysis of critical points using AIM theory points to the presence of an attractive metallophilic interaction, similar to other well-documented weak interactions such as hydrogen bonding.

Relevance: 20.00%

Abstract:

We study the quenching dynamics of a many-body system in one dimension described by a Hamiltonian that has spatial periodicity. Specifically, we consider a spin-1/2 chain with equal xx and yy couplings and subject to a periodically varying magnetic field in the $\hat{z}$ direction or, equivalently, a tight-binding model of spinless fermions with a periodic local chemical potential, having period 2q, where q is a positive integer. For a linear quench of the strength of the magnetic field (or chemical potential) at a rate $1/\tau$ across a quantum critical point, we find that the density of defects thereby produced scales as $1/\tau^{q/(q+1)}$, deviating from the $1/\sqrt{\tau}$ scaling that is ubiquitous in a range of systems. We analyze this behavior by mapping the low-energy physics of the system to a set of fermionic two-level systems labeled by the lattice momentum k undergoing a nonlinear quench as well as by performing numerical simulations. We also show that if the magnetic field is a superposition of different periods, the power law depends only on the smallest period for very large values of $\tau$, although it may exhibit a crossover at intermediate values of $\tau$. Finally, for the case where a zz coupling is also present in the spin chain, or equivalently, where interactions are present in the fermionic system, we argue that the power associated with the scaling law depends on a combination of q and the interaction strength.
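
Restating the quoted scaling as a display (a LaTeX transcription of the relation above, not an additional result):

    n_{\mathrm{def}} \;\sim\; \tau^{-q/(q+1)},

so that q = 1 (period 2) recovers the usual $1/\sqrt{\tau}$ behavior, while larger periods give a slower decay of the defect density with the quench time $\tau$.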