916 results for HINGE POINTS
Abstract:
Hyperbranched polyurethanes, with varying oligoethyleneoxy spacer segments between the branching points, have been synthesized by a one-pot approach starting from an appropriately designed carbonyl azide that incorporates the different spacer segments. The structures of the monomers and polymers were confirmed by IR and 1H NMR spectroscopy. The solution viscosity of the polymers suggested that they were of reasonably high molecular weight. Reversal of the terminal functional groups was achieved by preparing the appropriate monohydroxy dicarbonyl azide monomer. The large number of terminal isocyanate groups at the chain ends of such hyperbranched macromolecules caused them to crosslink prior to their isolation. However, carrying out the polymerization in the presence of 1 equiv of a capping agent, such as an alcohol, resulted in soluble polymers with carbamate chain ends. Using a biphenyl-containing alcohol as a capping agent, we have also prepared novel hyperbranched polyurethanes with pendant mesogenic segments. These mesogen-containing polyurethanes, however, did not exhibit liquid crystallinity, probably due to the wholly aromatic rigid polymer backbone. (C) 1996 John Wiley & Sons, Inc.
Abstract:
The principle of the conservation of bond orders during radical-exchange reactions is examined using Mayer's definition of bond orders. This simple intuitive approximation is not valid in a quantitative sense: ab initio results reveal that free valences (or spin densities) develop on the migrating atom during the reaction. For several examples of hydrogen-transfer reactions, the sum of the reaction-coordinate bond orders in the transition state was found to be 0.92 +/- 0.04 instead of the theoretical 1.00, precisely because of this developing free valence. It is shown that the free valence is almost equal to the square of the spin density on the migrating hydrogen atom, and that the maxima in the free valence (or spin density) profiles coincide (or nearly coincide) with the saddle points in the corresponding energy profiles.
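As a compact restatement of the quantitative finding above (the symbols are introduced here only for illustration: the Mayer bond orders of the breaking and forming bonds in the transition state, the free valence F_H, and the spin density rho_H on the migrating hydrogen):

```latex
B^{\ddagger}_{\mathrm{X-H}} + B^{\ddagger}_{\mathrm{H-Y}} \;=\; 0.92 \pm 0.04 \;<\; 1.00,
\qquad
F_{\mathrm{H}} \;\approx\; \rho_{\mathrm{H}}^{\,2}.
```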
Abstract:
We calculate analytically the average number of fixed points in the Hopfield model of associative memory when a random antisymmetric part is added to the otherwise symmetric synaptic matrix. Addition of the antisymmetric part causes an exponential decrease in the total number of fixed points. If the relative strength of the antisymmetric component is small, then its presence does not cause any substantial degradation of the quality of retrieval when the memory loading level is low. We also present results of numerical simulations which provide qualitative (and, for some aspects, quantitative) confirmation of the predictions of the analytic study. Our numerical results suggest that the analytic calculation of the average number of fixed points yields the correct value for the typical number of fixed points.
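A minimal numerical sketch of the kind of simulation described above: a small Hopfield network with a Hebbian (symmetric) coupling plus a random antisymmetric part, with fixed points counted by exhaustive enumeration. The Hebbian storage rule, the network size, and the perturbation strength lambda are illustrative assumptions, not the paper's exact setup.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, P, lam = 12, 2, 0.3        # neurons, stored patterns, antisymmetric strength (illustrative)

# Symmetric (Hebbian) part built from random +/-1 memory patterns
xi = rng.choice([-1, 1], size=(P, N))
J_sym = (xi.T @ xi) / N
np.fill_diagonal(J_sym, 0.0)

# Random antisymmetric perturbation added to the synaptic matrix
A = rng.normal(size=(N, N))
J = J_sym + lam * (A - A.T) / np.sqrt(2 * N)

# A state s is a fixed point of s_i -> sign(sum_j J_ij s_j) iff every local field
# is aligned with the corresponding spin.
def is_fixed(s):
    return np.all((J @ s) * s > 0)

count = sum(is_fixed(np.array(s)) for s in itertools.product([-1, 1], repeat=N))
print(f"number of fixed points at lambda = {lam}: {count}")
```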
Abstract:
We present here a critical assessment of two vortex approaches (both two-dimensional) to the modelling of turbulent mixing layers. In the first approach the flow is represented by point vortices, and in the second it is simulated as the evolution of a continuous vortex sheet composed of short linear elements or "panels". The comparison is based on fresh simulations using approximately the same number of elements in either model, paying due attention in both to the boundary conditions far downstream as well as those on the splitter plate from which the mixing layer issues. The comparisons show that, while both models satisfy the well-known invariants of vortex dynamics to approximately the same accuracy, the vortex panel model, although ultimately not convergent, leads to smoother roll-up, yields stresses and moments in closer agreement with experiment, and has higher computational efficiency for a given degree of convergence on moments. The point vortex model, while faster for a given number of elements, produces an unsatisfactory roll-up which (for the number of elements used) is rendered worse by the incorporation of the Van der Vooren correction for sheet curvature.
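A minimal sketch of the point-vortex ingredient of the first approach: the free-space 2D induced-velocity sum and a forward-Euler step for a lightly perturbed vortex row. The desingularisation core delta, the initial condition, and the time step are assumptions for illustration; the splitter-plate and downstream boundary conditions discussed above are omitted.

```python
import numpy as np

def induced_velocity(z, gamma, delta=1e-3):
    """Velocity (as u + i*v) at each vortex due to all the others (2D point vortices).
    u - i*v = sum_j gamma_j / (2*pi*i*(z - z_j)); delta is a small smoothing core."""
    dz = z[:, None] - z[None, :]
    r2 = np.abs(dz) ** 2 + delta ** 2
    np.fill_diagonal(r2, np.inf)                     # exclude self-induction
    w_conj = np.sum(gamma[None, :] * np.conj(dz) / (2j * np.pi * r2), axis=1)
    return np.conj(w_conj)

# Lightly perturbed row of equal-strength vortices, stepped with forward Euler
n, dt = 100, 0.01
x = np.linspace(0.0, 1.0, n)
z = x + 0.01j * np.sin(2 * np.pi * x)
gamma = np.full(n, 1.0 / n)
for _ in range(200):
    z = z + dt * induced_velocity(z, gamma)
```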
Abstract:
We report a novel phase behavior in aqueous solutions of simple organic solutes near their liquid/liquid critical points, where a solid-like third phase appears at the liquid/liquid interface. The phenomenon has been found in three different laboratories. It appears in many aqueous systems of organic solutes and becomes enhanced upon the addition of salt to these solutions.
Abstract:
In this paper, a method of tracking the peak power in a wind energy conversion system (WECS) is proposed that is independent of the turbine parameters and air density. The algorithm searches for the peak power by varying the speed in the desired direction. The generator is operated in the speed control mode, with the speed reference being dynamically modified in accordance with the magnitude and direction of change of active power. The peak power points on the P-ω curve correspond to dP/dω = 0, and this fact is exploited in the optimum-point search algorithm. The generator considered is a wound rotor induction machine whose stator is connected directly to the grid and whose rotor is fed through back-to-back pulse-width-modulation (PWM) converters. Stator flux-oriented vector control is applied to control the active and reactive current loops independently. The turbine characteristics are generated by a dc motor fed from a commercial dc drive. All of the control loops are executed by a single-chip digital signal processor (DSP) controller, the TMS320F240. Experimental results show that the performance of the control algorithm compares well with the conventional torque control method.
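A minimal control-loop sketch of the peak-power search logic described above: step the speed reference, observe the change in active power, and reverse direction when the power falls (i.e., when the operating point has crossed dP/dω = 0). The interfaces measure_power and set_speed_ref, the step size, and the toy turbine curve are placeholders, not part of the paper.

```python
def peak_power_search(measure_power, set_speed_ref, omega0, step=0.5, iterations=200):
    """Hill-climb toward dP/domega = 0 by perturbing the speed reference and
    observing the resulting change in active power."""
    omega = omega0
    set_speed_ref(omega)
    p_prev = measure_power()
    direction = +1
    for _ in range(iterations):
        omega += direction * step
        set_speed_ref(omega)            # generator runs in speed-control mode
        p = measure_power()
        if p < p_prev:                  # power fell: the peak was crossed, so reverse
            direction = -direction
        p_prev = p
    return omega

# Toy check against a turbine-like curve with its peak at omega = 10 (illustrative only)
state = {"omega": 4.0}
set_ref = lambda w: state.update(omega=w)
power = lambda: 100.0 - (state["omega"] - 10.0) ** 2
print(peak_power_search(power, set_ref, omega0=4.0))   # settles near omega = 10
```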
Abstract:
Context-sensitive points-to analysis is critical for several program optimizations. However, as the number of contexts grows exponentially, the storage requirements of the analysis increase tremendously for large programs, making the analysis non-scalable. We propose a scalable flow-insensitive, context-sensitive, inclusion-based points-to analysis that uses a specially designed multi-dimensional Bloom filter to store the points-to information. Two key observations motivate our proposal: (i) points-to information (between pointer and object and between pointer and pointer) is sparse, and (ii) moving from an exact to an approximate representation of points-to information only leads to reduced precision without affecting the correctness of the (may-points-to) analysis. By using an approximate representation, a multi-dimensional Bloom filter can significantly reduce the memory requirements with a probabilistic bound on the loss in precision. Experimental evaluation on SPEC 2000 benchmarks and two large open-source programs reveals that, with an average storage requirement of 4 MB, our approach achieves almost the same precision (98.6%) as the exact implementation. By increasing the average memory to 27 MB, it achieves precision up to 99.7% for these benchmarks. Using Mod/Ref analysis as the client, we find that the client analysis is not affected that often even when there is some loss of precision in the points-to representation. We find that the NoModRef percentage is within 2% of the exact analysis while requiring 4 MB (maximum 15 MB) of memory and less than 4 minutes on average for the points-to analysis. Another major advantage of our technique is that it allows precision to be traded off for the memory usage of the analysis.
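A minimal sketch of the underlying idea: storing (context, pointer, object) facts in a Bloom filter so that membership queries may return false positives (lost precision) but never false negatives, preserving the soundness of a may-points-to analysis. The flat single-array filter and hashing scheme below are illustrative; they are not the paper's multi-dimensional design.

```python
import hashlib

class ApproxPointsTo:
    """Approximate may-points-to store backed by a Bloom filter."""

    def __init__(self, m_bits=1 << 20, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _indices(self, fact):
        data = repr(fact).encode()
        for i in range(self.k):
            h = hashlib.blake2b(data, digest_size=8, salt=bytes([i])).digest()
            yield int.from_bytes(h, "little") % self.m

    def add(self, context, pointer, obj):
        for idx in self._indices((context, pointer, obj)):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def may_point_to(self, context, pointer, obj):
        # True is possibly a false positive; False is always correct.
        return all((self.bits[idx // 8] >> (idx % 8)) & 1
                   for idx in self._indices((context, pointer, obj)))

pts = ApproxPointsTo()
pts.add(("main", "foo"), "p", "heap@alloc1")
assert pts.may_point_to(("main", "foo"), "p", "heap@alloc1")
```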
Abstract:
A geometric and nonparametric procedure for testing whether two finite sets of points are linearly separable is proposed. The Linear Separability Test is equivalent to a test that determines whether a strictly positive point h > 0 exists in the range of a matrix A (related to the points in the two finite sets). The algorithm proposed in the paper iteratively checks whether a strictly positive point exists in a subspace by projecting a strictly positive vector p with equal coordinates onto the subspace. At the end of each iteration, the subspace is reduced to a lower-dimensional subspace. The test is completed within r ≤ min(n, d + 1) steps, for both linearly separable and non-separable problems (r is the rank of A, n is the number of points, and d is the dimension of the space containing the points). The worst-case time complexity of the algorithm is O(nr³) and its space complexity is O(nd). A small review of some of the prominent algorithms and their time complexities is included. The worst-case computational complexity of our algorithm is lower than the worst-case computational complexity of the Simplex, Perceptron, Support Vector Machine and Convex Hull algorithms, if d
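The criterion above (existence of a strictly positive point in the range of a matrix A built from the two point sets) can also be checked directly as a linear-programming feasibility problem. The sketch below uses that LP route via scipy rather than the paper's iterative projection algorithm; the construction of A (rows [x, 1] for one set and [-y, -1] for the other) is the usual convention and is an assumption here.

```python
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X1, X2):
    """True iff some hyperplane w.x + b separates X1 from X2.
    Separability <=> there exists v with A v > 0, i.e. a strictly positive point
    in range(A); by rescaling v it suffices to find v with A v >= 1."""
    A = np.vstack([np.hstack([X1, np.ones((len(X1), 1))]),
                   -np.hstack([X2, np.ones((len(X2), 1))])])
    dim = A.shape[1]
    res = linprog(c=np.zeros(dim), A_ub=-A, b_ub=-np.ones(len(A)),
                  bounds=[(None, None)] * dim, method="highs")
    return res.status == 0          # feasible => separable

X1 = np.array([[0.0, 0.0], [1.0, 1.0]])
X2 = np.array([[0.0, 1.0], [1.0, 0.0]])
print(linearly_separable(X1, X2))        # False: the XOR configuration
print(linearly_separable(X1 + 5.0, X2))  # True: shifting one set well apart
```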
Abstract:
Regenerating codes are a class of distributed storage codes that allow for efficient repair of failed nodes, as compared to traditional erasure codes. An [n, k, d] regenerating code permits the data to be recovered by connecting to any k of the n nodes in the network, while requiring that a failed node be repaired by connecting to any d nodes. The amount of data downloaded for repair is typically much smaller than the size of the source data. Previous constructions of exact-regenerating codes have been confined to the case n = d + 1. In this paper, we present optimal, explicit constructions of (a) Minimum Bandwidth Regenerating (MBR) codes for all values of [n, k, d] and (b) Minimum Storage Regenerating (MSR) codes for all [n, k, d >= 2k - 2], using a new product-matrix framework. The product-matrix framework is also shown to significantly simplify system operation. To the best of our knowledge, these are the first constructions of exact-regenerating codes that allow the number n of nodes in the network to be chosen independently of the other parameters. The paper also contains a simpler description, in the product-matrix framework, of a previously constructed MSR code with [n = d + 1, k, d >= 2k - 1].
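To fix ideas, here is a sketch of the general shape of a product-matrix code at the MBR point, as such constructions are usually presented; the matrix sizes and conditions below should be read as illustrative rather than quoted from the paper. Each node i stores the i-th row of a codeword matrix

```latex
C \;=\; \Psi M, \qquad
\Psi \;=\; [\,\Phi \;\; \Delta\,] \in \mathbb{F}^{\,n \times d}, \qquad
M \;=\; \begin{bmatrix} S & T \\ T^{t} & 0 \end{bmatrix} \in \mathbb{F}^{\,d \times d},
```

with S a symmetric k x k matrix and T a k x (d - k) matrix carrying the message symbols, and with any d rows of Ψ and any k rows of Φ linearly independent. Since M is symmetric, a failed node f can be repaired from any d helpers j, each sending the single scalar ψ_j M ψ_f^t: stacking these gives Ψ_rep M ψ_f^t, and inverting the d x d matrix Ψ_rep recovers M ψ_f^t = (ψ_f M)^t, which is exactly the failed node's stored row.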
Abstract:
In many cases, a mobile user has the option of connecting to one of several IEEE 802.11 access points (APs), each using an independent channel. User throughput at each AP is determined by the number of other users as well as the frame size and physical rate being used. We consider the scenario where users can multihome, i.e., split their traffic amongst all the available APs, based on the throughput they obtain and the price charged. Thus, they are involved in a non-cooperative game with each other. We convert the problem into a fluid model and show that under a pricing scheme, which we call the cost price mechanism, the total system throughput is maximized, i.e., the system suffers no loss of efficiency due to selfish dynamics. We also study the case where the Internet Service Provider (ISP) can charge prices greater than those of the cost price mechanism. We show that even in this case multihoming outperforms unihoming, both in terms of throughput and of profit to the ISP.
Abstract:
The standard quantum search algorithm lacks a feature, enjoyed by many classical algorithms, of having a fixed point, i.e., a monotonic convergence towards the solution. Here we present two variations of the quantum search algorithm which get around this limitation. The first replaces the selective inversions in the algorithm by selective phase shifts of $\frac{\pi}{3}$. The second controls the selective inversion operations using two ancilla qubits, and irreversible measurement operations on the ancilla qubits drive the starting state towards the target state. Using $q$ oracle queries, these variations reduce the probability of finding a non-target state from $\epsilon$ to $\epsilon^{2q+1}$, which is asymptotically optimal. Similar ideas can lead to robust quantum algorithms and provide conceptually new schemes for error correction.
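One compact way to see the fixed-point behaviour of the first variation is the single-level identity for $\frac{\pi}{3}$ phase shifts, stated here in the standard form of this construction (the notation is introduced only for illustration): if $U$ maps the start state $|s\rangle$ to the target $|t\rangle$ with failure probability $\epsilon = 1 - |\langle t|U|s\rangle|^2$, then

```latex
\Bigl|\bigl\langle t \bigr|\, U\, R_s\!\left(\tfrac{\pi}{3}\right) U^{\dagger} R_t\!\left(\tfrac{\pi}{3}\right) U \,\bigl| s \bigr\rangle\Bigr|^{2}
\;=\; 1 - \epsilon^{3},
\qquad
R_x\!\left(\tfrac{\pi}{3}\right) \;=\; I - \bigl(1 - e^{i\pi/3}\bigr)\,|x\rangle\langle x| ,
```

so recursive application drives the failure probability monotonically as $\epsilon \to \epsilon^{3} \to \epsilon^{9} \to \dots$, which is precisely the fixed-point property the standard amplitude-amplification iteration lacks.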
Abstract:
Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any subset of k nodes within the n-node network. However, regenerating codes possess, in addition, the ability to repair a failed node by connecting to an arbitrary subset of d nodes. It has been shown that, for the case of functional repair, there is a tradeoff between the amount of data stored per node and the bandwidth required to repair a failed node. A special case of functional repair is exact repair, where the replacement node is required to store data identical to that in the failed node. Exact repair is of interest as it greatly simplifies system implementation. The first result of this paper is an explicit, exact-repair code for the point on the storage-bandwidth tradeoff corresponding to the minimum possible repair bandwidth, for the case when d = n-1. This code has a particularly simple graphical description and, most interestingly, has the ability to carry out exact repair without any need to perform arithmetic operations. We term this ability of the code to perform repair through mere transfer of data "repair by transfer". The second result of this paper shows that the interior points on the storage-bandwidth tradeoff cannot be achieved under exact repair, thus pointing to the existence of a separate tradeoff under exact repair. Specifically, we identify a set of scenarios which we term "helper node pooling," and show that it is the necessity to satisfy such scenarios that overconstrains the system.
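A minimal sketch of the graphical, repair-by-transfer idea for d = n - 1 described above: one coded symbol sits on each edge of the complete graph on n nodes, each node stores the symbols of its incident edges, and a failed node is rebuilt by having every surviving node hand over the single symbol it shares with it. How the per-edge symbols are themselves encoded from the source data is abstracted away here (placeholder strings stand in for coded symbols).

```python
from itertools import combinations

def build_storage(n, edge_symbols):
    """Complete graph on n nodes: one coded symbol per edge (i, j) with i < j;
    node i stores the n-1 symbols of its incident edges."""
    return {i: {e: edge_symbols[e] for e in edge_symbols if i in e} for i in range(n)}

def repair_by_transfer(storage, failed, n):
    """Each surviving node transfers exactly the one symbol it shares with the
    failed node -- no arithmetic, just data transfer."""
    replacement = {}
    for helper in range(n):
        if helper == failed:
            continue
        edge = tuple(sorted((helper, failed)))
        replacement[edge] = storage[helper][edge]
    return replacement

# Toy run with n = 5 nodes and placeholder coded symbols on each edge
n = 5
edge_symbols = {e: f"c{idx}" for idx, e in enumerate(combinations(range(n), 2))}
storage = build_storage(n, edge_symbols)
assert repair_by_transfer(storage, failed=2, n=n) == storage[2]
```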
Abstract:
In this paper, the diversity-multiplexing gain tradeoff (DMT) of single-source, single-sink (ss-ss), multihop relay networks having slow-fading links is studied. In particular, the two end-points of the DMT of ss-ss full-duplex networks are determined by showing that the maximum achievable diversity gain is equal to the min-cut and that the maximum multiplexing gain is equal to the min-cut rank, the latter by using an operational connection to a deterministic network. Also included in the paper are several results that aid in the computation of the DMT of networks operating under amplify-and-forward (AF) protocols. In particular, it is shown that the colored noise encountered in amplify-and-forward protocols can be treated as white for the purpose of DMT computation, lower bounds on the DMT of lower-triangular channel matrices are derived, and the DMT of parallel MIMO channels is computed. All protocols appearing in the paper are explicit and rely only upon AF relaying. Half-duplex networks and explicit coding schemes are studied in a companion paper.