997 results for rational points


Relevance: 20.00%

Abstract:

We calculate analytically the average number of fixed points in the Hopfield model of associative memory when a random antisymmetric part is added to the otherwise symmetric synaptic matrix. Adding the antisymmetric part causes an exponential decrease in the total number of fixed points. If the relative strength of the antisymmetric component is small, its presence does not substantially degrade the quality of retrieval at low memory loading. We also present numerical simulations that confirm the predictions of the analytic study qualitatively, and in some aspects quantitatively. Our numerical results suggest that the analytic calculation of the average number of fixed points yields the correct value for the typical number of fixed points.
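
For intuition, here is a minimal numerical sketch in the spirit of the simulations mentioned above (not the paper's analytic calculation): it builds a small Hopfield network with Hebbian couplings, adds a random antisymmetric part of relative strength kappa, and counts fixed points by exhaustive enumeration. The network size, pattern count, and strength parameter are illustrative assumptions.

```python
# Count fixed points of a small Hopfield network whose symmetric Hebbian
# matrix is perturbed by a random antisymmetric part of strength `kappa`.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, P, kappa = 10, 3, 0.3                        # neurons, patterns, antisymmetry strength

xi = rng.choice([-1, 1], size=(P, N))           # random binary patterns
J_sym = (xi.T @ xi).astype(float) / N           # Hebbian (symmetric) couplings
np.fill_diagonal(J_sym, 0.0)

A = rng.standard_normal((N, N)) / np.sqrt(N)
J_anti = (A - A.T) / 2.0                        # purely antisymmetric part
J = J_sym + kappa * J_anti

def is_fixed_point(s):
    """A state is fixed if every neuron already agrees with its local field."""
    h = J @ s
    return np.all(np.sign(h) == s)              # assumes no exactly-zero fields

count = sum(is_fixed_point(np.array(s))
            for s in itertools.product([-1, 1], repeat=N))
print(f"fixed points at kappa={kappa}: {count}")
```

Rerunning with larger kappa should show the number of fixed points falling, consistent with the exponential decrease reported above.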

Relevance: 20.00%

Abstract:

We present here a critical assessment of two vortex approaches, both two-dimensional, to the modelling of turbulent mixing layers. In the first approach the flow is represented by point vortices; in the second it is simulated as the evolution of a continuous vortex sheet composed of short linear elements, or "panels". The comparison is based on fresh simulations using approximately the same number of elements in either model, paying due attention in both to the boundary conditions far downstream as well as those on the splitter plate from which the mixing layer issues. The comparisons show that both models satisfy the well-known invariants of vortex dynamics to approximately the same accuracy. The vortex panel model, although ultimately not convergent, leads to smoother roll-up, yields stresses and moments in closer agreement with experiment, and has a higher computational efficiency for a given degree of convergence on moments. The point vortex model, while faster for a given number of elements, produces an unsatisfactory roll-up which (for the number of elements used) is made worse by incorporating the Van der Vooren correction for sheet curvature.
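
As background for the point-vortex model, the following sketch evaluates the 2D Biot-Savart velocities that each point vortex induces on the others. This is only the core velocity computation, not the paper's mixing-layer simulation; the splitter-plate and downstream boundary conditions are omitted.

```python
# 2D point-vortex induced velocities via the complex Biot-Savart law.
import numpy as np

def point_vortex_velocities(z, gamma):
    """z: complex vortex positions; gamma: circulations. Returns u + i*v at each vortex."""
    dz = z[:, None] - z[None, :]               # pairwise separations z_i - z_j
    np.fill_diagonal(dz, 1.0)                  # dummy value to avoid 0/0 on the diagonal
    w = gamma[None, :] / (2j * np.pi * dz)     # conjugate velocity u - i*v from each vortex
    np.fill_diagonal(w, 0.0)                   # a vortex induces no velocity on itself
    return np.conj(w.sum(axis=1))

# a counter-rotating pair should translate together along +x at speed 1/(2*pi)
z = np.array([0.5j, -0.5j])
gamma = np.array([1.0, -1.0])
print(point_vortex_velocities(z, gamma))       # ~ [0.159+0j, 0.159+0j]
```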

Relevance: 20.00%

Abstract:

In the absence of a reliable method for a priori prediction of the structure and properties of inorganic solids, an experimental approach that combines a systematic study of composition, structure and properties with chemical intuition based on previous experience is likely to remain a viable route to the rational design of inorganic materials. The approach is illustrated using perovskite lithium-ion conductors as an example.

Relevance: 20.00%

Abstract:

We report a novel phase behavior in aqueous solutions of simple organic solutes near their liquid/liquid critical points, where a solid-like third phase appears at the liquid/liquid interface. The phenomenon has been observed in three different laboratories. It appears in many aqueous systems of organic solutes and is enhanced by the addition of salt to these solutions.

Relevance: 20.00%

Abstract:

In this paper, a method of tracking the peak power in a wind energy conversion system (WECS) is proposed that is independent of the turbine parameters and air density. The algorithm searches for the peak power by varying the speed in the desired direction. The generator is operated in speed control mode, with the speed reference dynamically modified according to the magnitude and direction of the change in active power. The peak power points on the P-ω curve correspond to dP/dω = 0, and the optimum-point search algorithm exploits this fact. The generator considered is a wound rotor induction machine whose stator is connected directly to the grid and whose rotor is fed through back-to-back pulse-width-modulation (PWM) converters. Stator-flux-oriented vector control is applied to control the active and reactive current loops independently. The turbine characteristics are generated by a dc motor fed from a commercial dc drive. All control loops are executed by a single-chip digital signal processor (DSP) controller, the TMS320F240. Experimental results show that the performance of the control algorithm compares well with the conventional torque control method.
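
A schematic sketch of the search logic described above, not the authors' DSP implementation: the speed reference is perturbed, and the search direction is reversed with a shrinking step whenever the measured power drops, so the operating point settles where dP/dω = 0. The toy turbine curve, initial speed, and step size are illustrative assumptions.

```python
# Hill-climbing peak-power search on a single-peaked P-omega curve.
def turbine_power(omega, omega_opt=10.0, p_max=1000.0):
    """Toy P-omega curve with a single peak at omega_opt."""
    return p_max - 8.0 * (omega - omega_opt) ** 2

omega, step = 6.0, 0.5          # initial speed reference and perturbation size
p_prev = turbine_power(omega)
for _ in range(40):
    omega += step               # perturb the speed reference
    p = turbine_power(omega)
    if p < p_prev:              # power fell: reverse direction,
        step = -step / 2        # shrinking the step to settle on the peak
    p_prev = p
print(f"converged near omega = {omega:.2f} (true optimum 10.0)")
```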

Relevance: 20.00%

Abstract:

The primary objective of the paper is to use a statistical digital human model to better understand the reach probability of points in the task space. The concept of a task-dependent boundary manikin is introduced to geometrically characterize the extreme individuals in a given population who can accomplish a task. For a given point of interest and task, the map of acceptable variation in anthropometric parameters is superimposed on the distribution of the same parameters in the population to identify the extreme individuals. To illustrate the concept, the task-space mapping is carried out for the reach probability of human arms. Unlike boundary manikins, which are defined entirely by the population, the dimensions of these manikins vary with the task, say, a point to be reached, as in the present case; hence they are referred to here as task-dependent boundary manikins. Simulations with these manikins help designers visualize how differently the extreme individuals would perform the task. Reach probability is computed at the points of a 3D grid in the operational space; for objects overlaid on this grid, approximate probabilities are derived from the grid and rendered as colors indicating the reach probability. The method may also provide a rational basis for the selection of personnel for a given task.
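
A toy Monte Carlo sketch of the reach-probability idea, not the paper's statistical digital human model: arm-segment lengths are sampled from an assumed population distribution, and the reach probability of a point is estimated as the fraction of sampled individuals whose two-link arm can reach it. All dimensions and distributions below are illustrative.

```python
# Monte Carlo estimate of the reach probability of one point for a two-link arm.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
upper = rng.normal(0.33, 0.02, n)        # upper-arm length (m), assumed stats
fore  = rng.normal(0.27, 0.02, n)        # forearm length (m), assumed stats

shoulder = np.array([0.0, 0.0, 1.4])     # fixed shoulder position (m)
target   = np.array([0.45, 0.2, 1.2])    # point whose reach probability we want

r = np.linalg.norm(target - shoulder)    # distance the arm must span
# reachable iff r lies between the folded and fully extended arm lengths
reachable = (r <= upper + fore) & (r >= np.abs(upper - fore))
print(f"reach probability ~ {reachable.mean():.3f}")
```

Repeating this over a 3D grid of target points gives the color-coded reach map described above.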

Relevance: 20.00%

Abstract:

Context-sensitive points-to analysis is critical for several program optimizations. However, as the number of contexts grows exponentially, the storage requirements of the analysis become enormous for large programs, making it non-scalable. We propose a scalable, flow-insensitive, context-sensitive, inclusion-based points-to analysis that uses a specially designed multi-dimensional Bloom filter to store the points-to information. Two key observations motivate our proposal: (i) points-to information (between pointer and object and between pointer and pointer) is sparse, and (ii) moving from an exact to an approximate representation of points-to information only reduces precision without affecting the correctness of the (may-points-to) analysis. By using an approximate representation, a multi-dimensional Bloom filter can significantly reduce the memory requirements with a probabilistic bound on the loss in precision. Experimental evaluation on the SPEC 2000 benchmarks and two large open-source programs reveals that, with an average storage requirement of 4 MB, our approach achieves almost the same precision (98.6%) as the exact implementation. By increasing the average memory to 27 MB, it achieves precision up to 99.7% on these benchmarks. Using Mod/Ref analysis as the client, we find that the client analysis is often unaffected even when there is some loss of precision in the points-to representation: the NoModRef percentage is within 2% of the exact analysis, while requiring 4 MB (maximum 15 MB) of memory and less than 4 minutes on average for the points-to analysis. Another major advantage of our technique is that it allows precision to be traded off against the memory usage of the analysis.
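
To illustrate why an approximate representation preserves correctness of a may-analysis, here is a minimal ordinary Bloom filter (a single filter, not the paper's multi-dimensional design) storing (context, pointer, object) facts: queries can return false positives, which only costs precision, but never false negatives. The sizes and the fact encoding are illustrative assumptions.

```python
# A plain Bloom filter used as an approximate may-points-to set.
import hashlib

class BloomSet:
    def __init__(self, m_bits=8192, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        for i in range(self.k):                          # k independent hash positions
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def may_contain(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

pts = BloomSet()
pts.add(("main", "p", "heap_obj_1"))                     # in context main, p may point to heap_obj_1
print(pts.may_contain(("main", "p", "heap_obj_1")))      # True: added facts are never lost
print(pts.may_contain(("main", "q", "heap_obj_1")))      # almost certainly False
```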

Relevance: 20.00%

Abstract:

In this paper, we address a key problem faced by advertisers in sponsored search auctions on the web: how much to bid, given the bids of the other advertisers, so as to maximize individual payoffs? Taking the generalized second price auction as the auction mechanism, we formulate this problem as an infinite-horizon alternating-move game of advertiser bidding behavior. For a sponsored search auction involving two advertisers, we characterize all the pure strategy and mixed strategy Nash equilibria. We also prove that the bid prices converge to a Nash equilibrium if the advertisers follow a myopic best response bidding strategy. We then investigate the bidding behavior of the advertisers when they use Q-learning, and observe empirically the interesting trend that the Q-values converge even when both advertisers learn simultaneously.
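
A toy illustration of the equilibrium analysis, under assumed valuations and click-through rates that are not the paper's model parameters: brute-force enumeration of the pure strategy Nash equilibria of a two-advertiser, two-slot generalized second price auction on a discrete bid grid.

```python
# Enumerate pure-strategy Nash equilibria of a 2-advertiser, 2-slot GSP auction.
import numpy as np

ctr = (1.0, 0.5)                          # click-through rates of slots 1 and 2
v = (4.0, 3.0)                            # advertisers' per-click valuations
grid = np.round(np.arange(0.0, 4.51, 0.5), 2)   # allowed bid levels

def payoff(i, b_i, b_j):
    """GSP: the higher bidder takes slot 1 and pays the rival's bid."""
    if b_i > b_j or (b_i == b_j and i == 0):    # simple deterministic tie-break
        return ctr[0] * (v[i] - b_j)
    return ctr[1] * v[i]                        # slot 2, priced at 0 here

equilibria = [(b0, b1) for b0 in grid for b1 in grid
              if all(payoff(0, b0, b1) >= payoff(0, x, b1) for x in grid)
              and all(payoff(1, b1, b0) >= payoff(1, x, b0) for x in grid)]
print(len(equilibria), "pure-strategy equilibria on the grid, e.g.", equilibria[:3])
```

Note that GSP admits many equilibria, including inefficient ones where the lower-valuation advertiser wins slot 1; the enumeration above surfaces both kinds.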

Relevance: 20.00%

Abstract:

In a computational grid, the presence of grid resource providers who are rational and intelligent can lead to an overall degradation in the efficiency of the grid. In this paper, we design incentive compatible grid resource procurement mechanisms which ensure that the efficiency of the grid is not affected by the rational behavior of resource providers. In particular, we offer three elegant incentive compatible mechanisms for this purpose: (1) the G-DSIC (Grid-Dominant Strategy Incentive Compatible) mechanism; (2) the G-BIC (Grid-Bayesian Nash Incentive Compatible) mechanism; and (3) the G-OPT (Grid-Optimal) mechanism, which minimizes the cost to the grid user while satisfying (a) Bayesian incentive compatibility and (b) individual rationality. We evaluate the relative merits and demerits of these three mechanisms using game-theoretic analysis and numerical experiments.
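
As a generic illustration of the dominant-strategy incentive compatibility that a G-DSIC-style mechanism provides (this Vickrey-style sketch is a textbook example, not the paper's mechanism): in a reverse procurement auction, the cheapest provider wins but is paid the second-lowest reported cost, so no provider can gain by misreporting.

```python
# Second-price (Vickrey-style) reverse auction: truth-telling is dominant.
def procure(reported_costs):
    """reported_costs: provider -> reported cost. Returns (winner, payment)."""
    ranked = sorted(reported_costs, key=reported_costs.get)
    winner, runner_up = ranked[0], ranked[1]
    return winner, reported_costs[runner_up]   # pay the second-lowest report

true_costs = {"A": 10.0, "B": 12.0, "C": 15.0}
winner, payment = procure(true_costs)
print(winner, payment)                         # A wins and is paid 12.0
# If A overstates its cost to 11.9 it still wins and is still paid 12.0;
# overstating past 12.0 loses the job (payoff 0 < 12.0 - 10.0).
# The payment never depends on the winner's own report, so truth-telling
# is a dominant strategy.
```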

Relevance: 20.00%

Abstract:

A geometric, nonparametric procedure for testing whether two finite sets of points are linearly separable is proposed. The linear separability test is equivalent to a test that determines whether a strictly positive point h > 0 exists in the range of a matrix A (related to the points in the two finite sets). The algorithm proposed in the paper iteratively checks whether a strictly positive point exists in a subspace by projecting a strictly positive vector p, with equal coordinates, onto the subspace. At the end of each iteration, the subspace is reduced to a lower-dimensional subspace. The test completes within r ≤ min(n, d + 1) steps, for both linearly separable and non-separable problems (r is the rank of A, n is the number of points, and d is the dimension of the space containing the points). The worst-case time complexity of the algorithm is O(nr^3) and its space complexity is O(nd). A short review of some of the prominent algorithms and their time complexities is included. The worst-case computational complexity of our algorithm is lower than the worst-case computational complexity of the Simplex, Perceptron, Support Vector Machine and Convex Hull algorithms if d
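
As a cross-check on such a test, here is a standard alternative formulation (not the paper's projection algorithm): linear separability of two finite point sets is a linear-programming feasibility question, namely whether some w, b satisfy y_i (w·x_i + b) >= 1 for all points, where y_i is the +1/-1 class label.

```python
# LP feasibility check for linear separability of two labeled point sets.
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, y):
    """X: (n, d) points; y: labels in {+1, -1}. True iff a separating hyperplane exists."""
    n, d = X.shape
    # variables: w (d entries) and b; constraints -y_i * (w.x_i + b) <= -1
    A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])
    b_ub = -np.ones(n)
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))
    return res.success                        # feasible iff separable

X = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
print(linearly_separable(X, np.array([1, 1, -1, -1])))   # XOR layout: False
print(linearly_separable(X, np.array([1, 1, 1, -1])))    # one corner apart: True
```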

Relevance: 20.00%

Abstract:

Regenerating codes are a class of distributed storage codes that allow more efficient repair of failed nodes than traditional erasure codes. An [n, k, d] regenerating code permits the data to be recovered by connecting to any k of the n nodes in the network, while requiring that a failed node be repaired by connecting to any d nodes. The amount of data downloaded for repair is typically much smaller than the size of the source data. Previous constructions of exact-regenerating codes have been confined to the case n = d + 1. In this paper, we present optimal, explicit constructions of (a) Minimum Bandwidth Regenerating (MBR) codes for all values of [n, k, d] and (b) Minimum Storage Regenerating (MSR) codes for all [n, k, d >= 2k - 2], using a new product-matrix framework. The product-matrix framework is also shown to significantly simplify system operation. To the best of our knowledge, these are the first constructions of exact-regenerating codes that allow the number n of nodes in the network to be chosen independently of the other parameters. The paper also contains a simpler description, in the product-matrix framework, of a previously constructed MSR code with [n = d + 1, k, d >= 2k - 1].
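
For concreteness, a small worked-arithmetic sketch of the two operating points, using the standard regenerating-code parameter formulas with repair bandwidth beta = 1 symbol per helper node: at the MBR point a node stores alpha = d symbols for a file of B = kd - k(k-1)/2 symbols, while at the MSR point alpha = d - k + 1 and B = k(d - k + 1).

```python
# Storage per node (alpha) and file size (B), in symbols, at the MBR and
# MSR points for beta = 1; n only fixes how many such nodes exist.
def mbr_params(n, k, d):
    alpha = d                          # symbols stored per node
    B = k * d - k * (k - 1) // 2       # total file size in symbols
    return alpha, B

def msr_params(n, k, d):
    assert d >= 2 * k - 2, "product-matrix MSR needs d >= 2k - 2"
    alpha = d - k + 1
    B = k * (d - k + 1)
    return alpha, B

for n, k, d in [(6, 3, 4), (10, 5, 9)]:
    print((n, k, d), "MBR:", mbr_params(n, k, d), "MSR:", msr_params(n, k, d))
```

For [6, 3, 4] this gives an MBR node storing 4 symbols of a 9-symbol file, versus an MSR node storing 2 symbols of a 6-symbol file, showing the storage/file-size trade-off between the two points.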

Relevance: 20.00%

Abstract:

In many cases, a mobile user has the option of connecting to one of several IEEE 802.11 access points (APs), each using an independent channel. User throughput at each AP is determined by the number of other users as well as the frame size and physical rate being used. We consider the scenario where users can multihome, i.e., split their traffic amongst all the available APs based on the throughput they obtain and the price charged; the users are thus involved in a non-cooperative game with each other. We convert the problem into a fluid model and show that under a pricing scheme which we call the cost price mechanism, the total system throughput is maximized, i.e., the system suffers no loss of efficiency due to selfish dynamics. We also study the case where the Internet Service Provider (ISP) charges prices greater than those of the cost price mechanism. We show that even in this case multihoming outperforms unihoming, both in terms of throughput and of profit to the ISP.
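
A toy fluid-model illustration of why multihoming can outperform unihoming; the capacities and demands are illustrative assumptions, and the sketch ignores pricing and 802.11 contention. With divisible traffic, users can fill all APs, whereas any single-AP assignment can strand capacity.

```python
# Compare total served traffic under multihoming (splittable demand)
# versus the best unihoming assignment (each user on exactly one AP).
from itertools import product

caps = [10.0, 6.0]       # AP capacities (Mb/s)
demands = [8.0, 8.0]     # two users with divisible traffic demands (Mb/s)

# multihoming (fluid splitting): only the aggregate capacity limits throughput
multi = min(sum(demands), sum(caps))

def served(assign):
    """Total traffic served when user u attaches to AP assign[u]."""
    load = [0.0] * len(caps)
    for user, ap in enumerate(assign):
        load[ap] += demands[user]
    return sum(min(load[a], caps[a]) for a in range(len(caps)))

best_uni = max(served(a) for a in product(range(len(caps)), repeat=len(demands)))
print(f"multihoming: {multi} Mb/s, best unihoming: {best_uni} Mb/s")   # 16 vs 14
```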

Relevance: 20.00%

Abstract:

The standard quantum search algorithm lacks a feature enjoyed by many classical algorithms: a fixed point, i.e., monotonic convergence towards the solution. Here we present two variations of the quantum search algorithm which get around this limitation. The first replaces the selective inversions in the algorithm by selective phase shifts of $\frac{\pi}{3}$. The second controls the selective inversion operations using two ancilla qubits, with irreversible measurement operations on the ancilla qubits driving the starting state towards the target state. Using $q$ oracle queries, these variations reduce the probability of finding a non-target state from $\epsilon$ to $\epsilon^{2q+1}$, which is asymptotically optimal. Similar ideas can lead to robust quantum algorithms and provide conceptually new schemes for error correction.
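
A small numerical check of the first variation, taking the state-preparation operator U to be the identity for simplicity: one round of $R_s(\pi/3)\, R_t(\pi/3)$ applied to a start state with failure probability $\epsilon$ leaves failure probability exactly $\epsilon^3$, consistent with the $\epsilon \to \epsilon^{2q+1}$ scaling quoted above for $q = 1$ query. The dimension and seed below are arbitrary.

```python
# One round of the pi/3 fixed-point search (with U = I) maps eps -> eps^3.
import numpy as np

dim, eps0 = 8, 0.3
rng = np.random.default_rng(2)

t = np.zeros(dim, dtype=complex); t[0] = 1.0          # target state |t>
perp = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
perp -= np.vdot(t, perp) * t                          # component orthogonal to |t>
perp /= np.linalg.norm(perp)
s = np.sqrt(1 - eps0) * t + np.sqrt(eps0) * perp      # start state, failure prob eps0

def sel_phase(v, phi=np.pi / 3):
    """Selective phase shift I - (1 - e^{i phi}) |v><v|."""
    return np.eye(dim) - (1 - np.exp(1j * phi)) * np.outer(v, v.conj())

psi = sel_phase(s) @ (sel_phase(t) @ s)               # R_s(pi/3) R_t(pi/3) |s>
eps1 = 1 - abs(np.vdot(t, psi)) ** 2
print(f"eps = {eps0}, after one round: {eps1:.6f}, eps^3 = {eps0**3:.6f}")
```

Because the failure probability can only shrink, repeating the round converges monotonically to the target, which is exactly the fixed-point property the abstract describes.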