951 results for BRANCH-POINTS
Abstract:
We propose a novel formulation of points-to analysis as a system of linear equations. With this, the efficiency of points-to analysis can be significantly improved by leveraging advances in procedures for solving systems of linear equations. However, such a formulation is non-trivial and becomes challenging for several reasons, namely multiple pointer indirections, address-of operators, and multiple assignments to the same variable. Further, the problem is exacerbated by the need to keep the transformed equations linear. Despite these challenges, we successfully model all the pointer operations, and propose a novel inclusion-based context-sensitive points-to analysis algorithm based on prime factorization. Experimental evaluation on SPEC 2000 benchmarks and two large open source programs reveals that our approach is competitive with state-of-the-art algorithms. With an average memory requirement of a mere 21 MB, our context-sensitive points-to analysis algorithm analyzes each benchmark in 55 seconds on average.
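The abstract does not spell out the encoding, but one way to realize a prime-factorization representation of points-to sets is to give each abstract object a distinct prime and encode a pointer's points-to set as the product of its pointees' primes, so that set union becomes an lcm and membership a divisibility test. The sketch below is only an illustrative guess at that idea, not the paper's algorithm, and all names in it are invented.

```python
# Illustrative sketch (not the paper's algorithm): each abstract object gets a
# distinct prime; a pointer's points-to set is the product of its pointees' primes.
from math import gcd

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]   # enough for this toy example
obj_prime = {}                                   # object name -> prime
pts = {}                                         # pointer name -> encoded set (int)

def prime_of(obj):
    if obj not in obj_prime:
        obj_prime[obj] = PRIMES[len(obj_prime)]
    return obj_prime[obj]

def address_of(p, obj):            # p = &obj : add obj to p's points-to set
    a, b = pts.get(p, 1), prime_of(obj)
    pts[p] = a * b // gcd(a, b)    # lcm keeps the encoding square-free

def copy(p, q):                    # p = q : points-to(p) includes points-to(q)
    a, b = pts.get(p, 1), pts.get(q, 1)
    pts[p] = a * b // gcd(a, b)    # lcm = set union

def may_point_to(p, obj):          # membership test = divisibility
    return pts.get(p, 1) % prime_of(obj) == 0

address_of("p", "x"); address_of("q", "y"); copy("p", "q")
print(may_point_to("p", "x"), may_point_to("p", "y"), may_point_to("q", "x"))
# True True False
```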
Abstract:
We illustrate the potential of higher-order critical points for a deeper understanding of several interesting problems in condensed matter science, e.g., critical adsorption, finite-size effects, the morphology of critical fluctuations, reversible aggregation of colloids, and the dynamics of the ordering process.
Abstract:
The distributed implementation of an algorithm for computing fixed points of an infinity-nonexpansive map is shown to converge to the set of fixed points under very general conditions.
Abstract:
Hyperbranched polyurethanes, with varying oligoethyleneoxy spacer segments between the branching points, have been synthesized by a one-pot approach starting from an appropriately designed carbonyl azide that incorporates the different spacer segments. The structures of the monomers and polymers were confirmed by IR and H-1-NMR spectroscopy. The solution viscosity of the polymers suggested that they were of reasonably high molecular weight. Reversal of the terminal functional groups was achieved by preparing the appropriate monohydroxy dicarbonyl azide monomer. The large number of terminal isocyanate groups at the chain ends of such hyperbranched macromolecules caused them to crosslink prior to their isolation. However, carrying out the polymerization in the presence of 1 equiv of a capping agent, such as an alcohol, resulted in soluble polymers with carbamate chain ends. Using a biphenyl-containing alcohol as the capping agent, we have also prepared novel hyperbranched polyurethanes with pendant mesogenic segments. These mesogen-containing polyurethanes, however, did not exhibit liquid crystallinity, probably because of the wholly aromatic rigid polymer backbone. (C) 1996 John Wiley & Sons, Inc.
Abstract:
The principle of the conservation of bond orders during radical-exchange reactions is examined using Mayer's definition of bond orders. This simple, intuitive approximation is not valid in a quantitative sense: ab initio results reveal that free valences (or spin densities) develop on the migrating atom during the reaction. As a consequence, for several examples of hydrogen-transfer reactions, the sum of the reaction-coordinate bond orders in the transition state was found to be 0.92 +/- 0.04 instead of the theoretical 1.00. It is shown that the free valence is almost equal to the square of the spin density on the migrating hydrogen atom and that the maxima in the free-valence (or spin-density) profiles coincide (or nearly coincide) with the saddle points in the corresponding energy profiles.
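Stated compactly (a restatement of the sentences above, using B for the Mayer bond orders of the breaking and forming bonds, F_H for the free valence, and rho_H for the spin density on the migrating hydrogen):

```latex
% Schematic restatement of the relations described in the abstract.
\begin{align*}
  B_{\text{breaking}} + B_{\text{forming}} &\approx 1
     && \text{(ideal conservation of bond order)}\\
  \left(B_{\text{breaking}} + B_{\text{forming}}\right)_{\ddagger}
     &= 0.92 \pm 0.04
     && \text{(ab initio, H-transfer transition states)}\\
  F_{\mathrm{H}} &\approx \rho_{\mathrm{H}}^{2}
     && \text{(free valence vs.\ spin density on the migrating H)}
\end{align*}
```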
Abstract:
We calculate analytically the average number of fixed points in the Hopfield model of associative memory when a random antisymmetric part is added to the otherwise symmetric synaptic matrix. Addition of the antisymmetric part causes an exponential decrease in the total number of fixed points. If the relative strength of the antisymmetric component is small, then its presence does not cause any substantial degradation of the quality of retrieval when the memory loading level is low. We also present results of numerical simulations which provide qualitative (as well as quantitative for some aspects) confirmation of the predictions of the analytic study. Our numerical results suggest that the analytic calculation of the average number of fixed points yields the correct value for the typical number of fixed points.
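A minimal numerical sketch of the setting, assuming the standard zero-diagonal Hebbian construction: a symmetric coupling matrix built from random patterns is perturbed by a random antisymmetric part of relative strength kappa, and fixed points of the sign dynamics are counted by exhaustive enumeration (feasible only for small N). Parameter values are arbitrary.

```python
# Minimal sketch: count fixed points of a Hopfield-type network whose symmetric
# Hebbian coupling matrix is perturbed by a random antisymmetric part.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, P, kappa = 12, 2, 0.3          # neurons, stored patterns, antisymmetric strength

xi = rng.choice([-1, 1], size=(P, N))                 # random +/-1 patterns
J_sym = (xi.T @ xi) / N                               # Hebbian (symmetric) couplings
R = rng.normal(size=(N, N))
J_asym = (R - R.T) / np.sqrt(2 * N)                   # random antisymmetric part
J = J_sym + kappa * J_asym
np.fill_diagonal(J, 0.0)

def is_fixed_point(s):
    """s is a fixed point of the sign dynamics if sign(J s) == s componentwise."""
    return np.all(np.sign(J @ s) == s)

count = sum(is_fixed_point(np.array(s)) for s in itertools.product([-1, 1], repeat=N))
print(f"fixed points for kappa={kappa}: {count} out of {2**N} states")
```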
Abstract:
We present here a critical assessment of two vortex approaches (both two-dimensional) to the modelling of turbulent mixing layers. In the first approach the flow is represented by point vortices, and in the second it is simulated as the evolution of a continuous vortex sheet composed of short linear elements or "panels". The comparison is based on fresh simulations using approximately the same number of elements in either model, paying due attention in both to the boundary conditions far downstream as well as those on the splitter plate from which the mixing layer issues. The comparisons show that, while both models satisfy the well-known invariants of vortex dynamics to approximately the same accuracy, the vortex panel model, although ultimately not convergent, leads to smoother roll-up, yields values of stresses and moments in closer agreement with experiment, and has a higher computational efficiency for a given degree of convergence on moments. The point vortex model, while faster for a given number of elements, produces an unsatisfactory roll-up which (for the number of elements used) is made worse by incorporating the Van der Vooren correction for sheet curvature.
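For reference, the kernel underlying the point-vortex model compared above is the 2-D induction law u_i = sum_j Gamma_j (-dy, dx) / (2*pi*|d|^2). The snippet below is a generic illustration of that sum (no desingularization, no panels, no boundary conditions), not the simulations reported here.

```python
# Generic 2-D point-vortex induction: velocity at each vortex due to all others.
import numpy as np

def induced_velocity(pos, gamma):
    """pos: (M, 2) vortex positions, gamma: (M,) circulations.
    Returns (M, 2) velocities from the 2-D Biot-Savart sum (self-term excluded)."""
    dx = pos[:, 0][:, None] - pos[:, 0][None, :]       # x_i - x_j
    dy = pos[:, 1][:, None] - pos[:, 1][None, :]
    r2 = dx**2 + dy**2
    np.fill_diagonal(r2, np.inf)                       # exclude self-induction
    u = (-gamma[None, :] * dy / (2 * np.pi * r2)).sum(axis=1)
    v = ( gamma[None, :] * dx / (2 * np.pi * r2)).sum(axis=1)
    return np.stack([u, v], axis=1)

# Two counter-rotating vortices: the pair translates with a uniform velocity.
pos = np.array([[0.0, 0.5], [0.0, -0.5]])
gamma = np.array([1.0, -1.0])
print(induced_velocity(pos, gamma))
```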
Abstract:
We report a novel phase behavior in aqueous solutions of simple organic solutes near their liquid/liquid critical points, where a solid-like third phase appears at the liquid/liquid interface. The phenomenon has been found in three different laboratories. It appears in many aqueous systems of organic solutes and becomes enhanced upon the addition of salt to these solutions.
Abstract:
In this paper, a method of tracking the peak power in a wind energy conversion system (WECS) is proposed that is independent of the turbine parameters and air density. The algorithm searches for the peak power by varying the speed in the desired direction. The generator is operated in the speed control mode, with the speed reference being dynamically modified in accordance with the magnitude and direction of change of the active power. The peak power points on the P-ω curve correspond to dP/dω = 0, a fact that is exploited in the optimum-point search algorithm. The generator considered is a wound rotor induction machine whose stator is connected directly to the grid and whose rotor is fed through back-to-back pulse-width-modulation (PWM) converters. Stator flux-oriented vector control is applied to control the active and reactive current loops independently. The turbine characteristics are generated by a dc motor fed from a commercial dc drive. All of the control loops are executed by a single-chip digital signal processor (DSP) controller, the TMS320F240. Experimental results show that the performance of the control algorithm compares well with the conventional torque control method.
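A schematic of the speed-reference search described above, in perturb-and-observe style: the speed reference is stepped in the direction in which the measured power last increased, so the search settles near dP/dω = 0. The turbine power curve used below is an invented stand-in, and the vector-controlled current loops and DSP implementation are outside the scope of this toy loop.

```python
# Schematic peak-power search: step the speed reference in the direction in which
# the measured active power last increased (hill climbing toward dP/domega = 0).
def turbine_power(omega, v_wind=8.0):
    """Invented stand-in for the measured active power P(omega) at fixed wind speed."""
    lam = omega / v_wind                                # stand-in for tip-speed ratio
    return max(0.0, 1000.0 * lam * (1.5 - lam))         # concave curve with one peak

def track_peak(omega0=2.0, step=0.2, iters=60):
    omega, p_prev, direction = omega0, turbine_power(omega0), +1
    for _ in range(iters):
        omega += direction * step                       # perturb the speed reference
        p = turbine_power(omega)
        if p < p_prev:                                  # power fell: reverse the search
            direction = -direction
        p_prev = p
    return omega, p_prev

omega_opt, p_opt = track_peak()
print(f"settled near omega = {omega_opt:.2f}, P = {p_opt:.1f}")
```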
Abstract:
A modified lattice model using the finite element method has been developed to study mode-I fracture in heterogeneous materials like concrete. In this model, the truss members always join at the points where aggregates are located, and the aggregates are modeled as plane-stress triangular elements. The truss members are randomly assigned the properties of the cement mortar matrix, so as to represent the randomness of strength in concrete. It is widely accepted that the fracture of concrete structures should not be assessed on a strength criterion alone, but should be coupled with an energy criterion. Here, the energy criterion is introduced by incorporating strain softening through a parameter α. The softening branch of the load-displacement curves was successfully obtained. A sensitivity study showed that the maximum load of a beam is most sensitive to the tensile strength of the mortar. It is also seen that by varying the mortar properties according to a normal random distribution, better results can be obtained for the load-displacement diagram.
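The abstract does not define how α enters the softening law; as one plausible reading, the sketch below uses a bilinear stress-strain curve for a mortar truss member in which α sets the strain at which the softening branch reaches zero stress. Both the parameterization and the numbers are hypothetical, for illustration only.

```python
# Hypothetical bilinear stress-strain law with a softening branch controlled by alpha.
def mortar_stress(eps, E=25e3, f_t=3.0, alpha=10.0):
    """Stress (MPa) in a mortar truss member: linear up to the tensile strength f_t,
    then linear softening to zero at alpha times the peak strain (illustrative only)."""
    eps_peak = f_t / E                    # strain at peak stress
    eps_ult = alpha * eps_peak            # strain at which stress has softened to zero
    if eps <= eps_peak:
        return E * eps                    # ascending (elastic) branch
    if eps <= eps_ult:
        return f_t * (eps_ult - eps) / (eps_ult - eps_peak)   # softening branch
    return 0.0                            # fully softened

for eps in (0.5e-4, 1.2e-4, 6e-4, 2e-3):
    print(eps, round(mortar_stress(eps), 3))
```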
Abstract:
Context-sensitive points-to analysis is critical for several program optimizations. However, as the number of contexts grows exponentially, storage requirements for the analysis increase tremendously for large programs, making the analysis non-scalable. We propose a scalable flow-insensitive, context-sensitive, inclusion-based points-to analysis that uses a specially designed multi-dimensional bloom filter to store the points-to information. Two key observations motivate our proposal: (i) points-to information (between pointer and object and between pointer and pointer) is sparse, and (ii) moving from an exact to an approximate representation of points-to information only reduces precision without affecting the correctness of the (may-points-to) analysis. By using an approximate representation, a multi-dimensional bloom filter can significantly reduce the memory requirements with a probabilistic bound on the loss in precision. Experimental evaluation on SPEC 2000 benchmarks and two large open source programs reveals that with an average storage requirement of 4 MB, our approach achieves almost the same precision (98.6%) as the exact implementation. By increasing the average memory to 27 MB, it achieves precision up to 99.7% on these benchmarks. Using Mod/Ref analysis as the client, we find that the client analysis is rarely affected even when there is some loss of precision in the points-to representation: the NoModRef percentage is within 2% of the exact analysis, while requiring 4 MB (maximum 15 MB) of memory and less than 4 minutes on average for the points-to analysis. Another major advantage of our technique is that it allows precision to be traded off against the memory usage of the analysis.
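The essence of the precision/memory trade-off can be seen with even a plain Bloom filter over (context, pointer, object) triples: a query may return a false positive (lost precision) but never a false negative, so a may-points-to answer stays sound. This is a simplified stand-in, not the multi-dimensional filter designed in the paper, and the hashing scheme is invented.

```python
# Simplified stand-in for the idea: store may-points-to facts (context, pointer, object)
# in a Bloom filter. Queries can return false positives (lost precision) but never
# false negatives, so a "may point to" answer remains sound.
import hashlib

class BloomPointsTo:
    def __init__(self, n_bits=1 << 16, n_hashes=4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, ctx, ptr, obj):
        key = f"{ctx}|{ptr}|{obj}".encode()
        for i in range(self.n_hashes):
            h = hashlib.sha256(key + bytes([i])).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, ctx, ptr, obj):
        for pos in self._positions(ctx, ptr, obj):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def may_point_to(self, ctx, ptr, obj):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(ctx, ptr, obj))

bf = BloomPointsTo()
bf.add("main>foo", "p", "heap_obj_1")
print(bf.may_point_to("main>foo", "p", "heap_obj_1"))   # True (always, once added)
print(bf.may_point_to("main>bar", "p", "heap_obj_1"))   # almost certainly False
```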
Abstract:
A geometric, nonparametric procedure for testing whether two finite sets of points are linearly separable is proposed. The Linear Separability Test is equivalent to a test that determines whether a strictly positive point h > 0 exists in the range of a matrix A (related to the points in the two finite sets). The algorithm proposed in the paper iteratively checks whether a strictly positive point exists in a subspace by projecting a strictly positive vector with equal co-ordinates (p) onto the subspace. At the end of each iteration, the subspace is reduced to a lower-dimensional subspace. The test completes within r ≤ min(n, d + 1) steps, for both linearly separable and non-separable problems (r is the rank of A, n is the number of points, and d is the dimension of the space containing the points). The worst-case time complexity of the algorithm is O(nr³) and its space complexity is O(nd). A brief review of some of the prominent algorithms and their time complexities is included. The worst-case computational complexity of our algorithm is lower than the worst-case computational complexity of the Simplex, Perceptron, Support Vector Machine and Convex Hull algorithms, if d
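The paper's iterative projection procedure is not reproduced here; the sketch below merely checks the same separability condition, namely whether a strictly positive point exists in the range of a matrix A built from the two point sets, using an off-the-shelf LP solver. The construction of A (one signed row [x, 1] per point) is the usual homogeneous-coordinates one, assumed here.

```python
# Checks the condition the abstract describes -- does a strictly positive point exist
# in the range of A? -- but with an off-the-shelf LP instead of the paper's
# iterative projection procedure.
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X1, X2):
    """X1, X2: (n1, d) and (n2, d) arrays of points from the two classes."""
    A = np.vstack([np.hstack([X1, np.ones((len(X1), 1))]),
                   -np.hstack([X2, np.ones((len(X2), 1))])])
    # Separable  <=>  exists z with A z > 0  <=>  (by rescaling z) A z >= 1 is feasible.
    d1 = A.shape[1]
    res = linprog(c=np.zeros(d1), A_ub=-A, b_ub=-np.ones(len(A)),
                  bounds=[(None, None)] * d1, method="highs")
    return res.status == 0          # status 0 = a feasible (optimal) point was found

X1 = np.array([[0.0, 0.0], [1.0, 0.2]])
X2 = np.array([[2.0, 2.0], [1.5, 3.0]])
print(linearly_separable(X1, X2))                            # True: separable
print(linearly_separable(X1, np.vstack([X2, [0.5, 0.1]])))   # False: (0.5, 0.1) lies
                                                             # on the hull of X1
```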
Abstract:
Regenerating codes are a class of distributed storage codes that allow for more efficient repair of failed nodes than traditional erasure codes. An [n, k, d] regenerating code permits the data to be recovered by connecting to any k of the n nodes in the network, while requiring that a failed node be repaired by connecting to any d nodes. The amount of data downloaded for repair is typically much smaller than the size of the source data. Previous constructions of exact-regenerating codes have been confined to the case n = d + 1. In this paper, we present optimal, explicit constructions of (a) Minimum Bandwidth Regenerating (MBR) codes for all values of [n, k, d] and (b) Minimum Storage Regenerating (MSR) codes for all [n, k, d >= 2k - 2], using a new product-matrix framework. The product-matrix framework is also shown to significantly simplify system operation. To the best of our knowledge, these are the first constructions of exact-regenerating codes that allow the number n of nodes in the network to be chosen independently of the other parameters. The paper also contains a simpler description, in the product-matrix framework, of a previously constructed MSR code with [n = d + 1, k, d >= 2k - 1].
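As a concrete illustration of the repair property, here is a toy sketch in the spirit of the product-matrix MBR construction, worked over the reals with floating point for readability (the actual codes operate over a finite field); the parameter choices are arbitrary and the layout of the message matrix follows the commonly described [n, k, d] MBR form.

```python
# Toy sketch of node repair in a product-matrix MBR code, over the reals for
# readability (the actual construction operates over a finite field).
import numpy as np

n, k, d = 6, 3, 4
rng = np.random.default_rng(1)

# Message matrix M (d x d, symmetric): S is k x k symmetric, T is k x (d-k).
S = rng.integers(0, 10, size=(k, k)).astype(float)
S = (S + S.T) / 2
T = rng.integers(0, 10, size=(k, d - k)).astype(float)
M = np.block([[S, T], [T.T, np.zeros((d - k, d - k))]])

# Encoding matrix Psi (n x d): Vandermonde rows, so any d rows are invertible.
x = np.arange(1, n + 1, dtype=float)
Psi = np.vander(x, d, increasing=True)

code = Psi @ M                      # node i stores the row code[i] (d symbols)

# Repair node f: each of d helpers sends the single symbol code[h] @ Psi[f].
f = 2
helpers = [0, 1, 4, 5]
received = np.array([code[h] @ Psi[f] for h in helpers])     # = Psi_rep M Psi_f^T
Psi_rep = Psi[helpers]
repaired = np.linalg.solve(Psi_rep, received)                # = M Psi_f^T = code[f]
print(np.allclose(repaired, code[f]))                        # True: exact repair
```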
Abstract:
In many cases, a mobile user has the option of connecting to one of several IEEE 802.11 access points (APs), each using an independent channel. User throughput at each AP is determined by the number of other users as well as the frame size and physical rate being used. We consider the scenario where users can multihome, i.e., split their traffic among all the available APs, based on the throughput they obtain and the price charged. Thus, they are involved in a non-cooperative game with each other. We convert the problem into a fluid model and show that under a pricing scheme, which we call the cost price mechanism, the total system throughput is maximized, i.e., the system suffers no loss of efficiency due to selfish dynamics. We also study the case where the Internet Service Provider (ISP) could charge prices greater than those of the cost price mechanism. We show that even in this case multihoming outperforms unihoming, both in terms of throughput and in terms of profit to the ISP.