85 results for "points in agreement"


Relevance:

90.00%

Publisher:

Abstract:

The decomposition of the beta phase in rapidly quenched Ti-2.8 at. pct Co, Ti-5.4 at. pct Ni, Ti-4.5 at. pct Cu, and Ti-5.5 at. pct Cu alloys has been investigated by electron microscopy. During rapid quenching, two competing phase transformations, martensitic and eutectoid, occurred, and the region of eutectoid transformation is extended because of the high cooling rates involved. The beta phase decomposed into a nonlamellar eutectoid product (bainite) with a globular morphology in the Ti-2.8 pct Co and Ti-4.5 pct Cu (hypoeutectoid) alloys. In the near-eutectoid Ti-5.5 pct Cu alloy, decomposition occurred in a lamellar (pearlite) mode, whereas in Ti-5.4 pct Ni (hypereutectoid) both morphologies were observed. The interfaces between the proeutectoid alpha and the intermetallic compound in the nonlamellar product, as well as between the proeutectoid alpha and the pearlite, were often found to be partially coherent. These findings agree with the model recently proposed by Lee and Aaronson for the evolution of bainite and pearlite structures during the solid-state transformations of some titanium-eutectoid alloys. The evolution of the Ti2Cu phase during rapid quenching involved the formation of a metastable phase closely related to an "omega-type" phase before the equilibrium phase formed. Further, the lamellar intermetallic compound Ti2Cu was found to evolve by a sympathetic nucleation process. Evidence is established for the sympathetic nucleation of the proeutectoid alpha crystals formed during rapid quenching.

Abstract:

The recently evaluated two-pion contribution to the muon g − 2 and the phase of the pion electromagnetic form factor in the elastic region, known from ππ scattering via the Fermi-Watson theorem, are exploited by analytic techniques to find correlations between the coefficients of the Taylor expansion at t = 0 and the values of the form factor at several points in the spacelike region. We do not use specific parametrizations, and the results are fully independent of the unknown phase in the inelastic region. Using, for instance, the recent determinations ⟨r_π²⟩ = (0.435 ± 0.005) fm² and F(−1.6 GeV²) = 0.243(+0.022/−0.014), we obtain the allowed ranges 3.75 GeV⁻⁴ ≲ c ≲ 3.98 GeV⁻⁴ and 9.91 GeV⁻⁶ ≲ d ≲ 10.46 GeV⁻⁶ for the curvature and the next Taylor coefficient, with a strong correlation between them. We also predict a large region in the complex plane where the form factor cannot have zeros.
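For reference, the curvature c and the coefficient d constrained above are defined, in the standard convention, by the expansion of the form factor around t = 0 (with F_π(0) = 1 fixed by charge normalization):

```latex
F_\pi(t) = 1 + \frac{1}{6}\langle r_\pi^2 \rangle\, t + c\, t^2 + d\, t^3 + \mathcal{O}(t^4)
```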

Abstract:

The combined mechanism involving phonon- and lochon- (local charged boson) induced pairing of fermions, developed earlier for cuprate superconductors, is used to study the variation of the oxygen isotope effect (α_O) in these systems. The recently observed results for some cuprates agree with the calculated trend, in which α_O tends to larger values as the critical temperature (T_c) is reduced by appropriate doping. These results support the combined phononic and electronic (lochonic) mechanism for cuprates, with the latter dominating in the higher-T_c region.

Abstract:

Non-exponential electron-transfer kinetics in complex systems are often analyzed in terms of a quenched, static-disorder model. In this work we present an alternative analysis in terms of a simple dynamic-disorder model in which the solvent is characterized by highly non-exponential dynamics. We consider both low- and high-barrier reactions. For the former, the main result is a simple analytical expression for the survival probability of the reactant. In this case, electron transfer at long times is controlled by solvent polarization relaxation, in agreement with the analyses of Rips and Jortner and of Nadler and Marcus. The short-time dynamics is also non-exponential, but for different reasons. The high-barrier reactions, on the other hand, show an interesting dynamic dependence on the electronic coupling element, V_el.

Abstract:

In this paper, a method of tracking the peak power in a wind energy conversion system (WECS) is proposed that is independent of the turbine parameters and air density. The algorithm searches for the peak power by varying the speed in the desired direction. The generator is operated in speed-control mode, with the speed reference dynamically modified according to the magnitude and direction of the change in active power. The peak-power points of the P–ω curve correspond to dP/dω = 0, and the optimum-point search algorithm exploits this fact. The generator considered is a wound-rotor induction machine whose stator is connected directly to the grid and whose rotor is fed through back-to-back pulse-width-modulation (PWM) converters. Stator-flux-oriented vector control is applied to control the active and reactive current loops independently. The turbine characteristics are generated by a dc motor fed from a commercial dc drive. All of the control loops are executed by a single-chip digital signal processor (DSP) controller, the TMS320F240. Experimental results show that the performance of the control algorithm compares well with the conventional torque-control method.
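The perturb-and-observe search described above (step the speed reference, watch the sign of the power change, reverse and shrink the step when power falls) can be sketched as follows; the quadratic power curve and all numeric values are invented stand-ins, not the experimental turbine characteristic:

```python
# Hedged sketch of the peak-power search: perturb the speed reference
# and keep moving in the direction that increases power.

def turbine_power(omega, wind=1.0):
    # Toy P-omega curve with a single maximum at omega = wind (illustrative).
    return omega * (2.0 * wind - omega)

def mppt_search(omega0=0.2, step=0.05, iters=200):
    omega = omega0
    p_prev = turbine_power(omega)
    direction = 1.0
    for _ in range(iters):
        omega += direction * step
        p = turbine_power(omega)
        if p < p_prev:           # power fell: we passed the peak
            direction = -direction
            step *= 0.5          # shrink step to settle where dP/domega = 0
        p_prev = p
    return omega

peak = mppt_search()             # settles near the true optimum speed
```

Because the search uses only measured power and speed, it needs no turbine model, which is the property the paper emphasizes.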

Abstract:

Nanoparticle synthesis in a microemulsion route is typically controlled by changing the water to surfactant ratio, concentration of precursors, and/or concentration of micelles. The experiments carried out in this work with chloroauric acid and hydrazine hydrate as precursors in water/AOT-Brij30/isooctane microemulsions show that the reagent addition rate can also be used to tune the size of stable spherical gold nanoparticles to some extent. The particle size goes through a minimum with variation in feed addition rate. The increase in particle size with an increase in reaction temperature is in agreement with an earlier report. A population balance model is used to interpret the experimental findings. The reduced extent of nucleation at low feed addition rates and suppression of nucleation due to the finite rate of mixing at higher addition rates produce a minimum in particle size. The increase in particle size at higher reaction temperatures is explained through an increase in fusion efficiency of micelles which dissipates supersaturation; increase in solubility is shown to play an insignificant role. The moderate polydispersity of the synthesized particles is due to the continued nucleation and growth of particles. The polydispersity of micelle sizes by itself plays a minor role.

Abstract:

This paper reports measurements of turbulent quantities in an axisymmetric wall jet subjected to an adverse pressure gradient in a conical diffuser, in such a way that a suitably defined pressure-gradient parameter is everywhere small. Self-similarity is observed in the mean velocity profile, as well as in the profiles of many turbulent quantities, at sufficiently large distances from the injection slot. Autocorrelation measurements indicate that, in the region of turbulent production, the time scale of v fluctuations is very much smaller than the time scale of u fluctuations. Based on the data on these time scales, a possible model is proposed for the Reynolds stress. One-dimensional energy spectra are obtained for the u, v and w components at several points in the wall jet. Self-similarity is exhibited by the one-dimensional wavenumber spectrum of $\overline{q^2}(=\overline{u^2}+\overline{v^2}+\overline{w^2})$ if the half-width of the wall jet and the local mean velocity are used to form the non-dimensional wavenumber. Both the autocorrelation curves and the spectra indicate the existence of periodicity in the flow. The rate of dissipation of turbulent energy is estimated from the $\overline{q^2}$ spectra, using a slightly modified version of a previously suggested method.

Abstract:

Rotational spectra of five isotopologues of the title complex, C₆H₅CCH···H₂O, C₆H₅CCH···HOD, C₆H₅CCH···D₂O, C₆H₅CCH···H₂¹⁸O and C₆H₅CCD···H₂O, were measured and analyzed. The parent isotopologue is an asymmetric top with κ = −0.73. The complex is effectively planar (ab inertial plane); both a- and b-type dipole transitions have been observed, but no c-type transition could be seen. All transitions of the parent complex are split into two by an internal motion that interchanges the two H atoms of H₂O. This is confirmed by the absence of such doubling for the C₆H₅CCH···HOD complex and by a significant reduction in the splitting for the D₂O analog. The rotational spectra unambiguously reveal a structure in which H₂O has both O–H···π (π cloud of the acetylene moiety) and C–H···O (ortho C–H group of phenylacetylene) interactions. This is in agreement with the structure deduced from IR-UV double-resonance studies (Singh et al., J. Phys. Chem. A, 2008, 112, 3360) and also with the global minimum predicted by advanced electronic-structure calculations (Sedlack et al., J. Phys. Chem. A, 2009, 113, 6620). Atoms in Molecules (AIM) analysis of the complex reveals the presence of both O–H···π and C–H···O hydrogen bonds. More interestingly, based on the electron densities at the bond critical points, this analysis suggests that the two interactions are equally strong. Moreover, the presence of both interactions leads to significant deviations from linearity of both hydrogen bonds.

Abstract:

We study the statistical properties of spatially averaged global injected-power fluctuations for Taylor-Couette flow of a wormlike micellar gel formed by the surfactant cetyltrimethylammonium tosylate. At sufficiently high Weissenberg numbers the shear rate, and hence the injected power p(t) at constant applied stress, shows large irregular fluctuations in time. The nature of the probability distribution function (PDF) of p(t) and the power-law decay of its power spectrum are very similar to those observed in recent studies of elastic turbulence in polymer solutions. Remarkably, these non-Gaussian PDFs are well described by a universal large-deviation functional form given by the generalized Gumbel distribution, observed in the context of spatially averaged global measures in diverse classes of highly correlated systems. We show by in situ rheology and polarized light scattering experiments that in the elastic turbulent regime the flow is spatially smooth but random in time, in agreement with a recent hypothesis for elastic turbulence.
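The generalized Gumbel form invoked above can be evaluated directly; a minimal sketch, assuming one commonly used parametrization with shape a, scale theta and location nu (the parameter names are our own choice, and a = 1 recovers the standard Gumbel distribution):

```python
import math

# Hedged sketch: one common parametrization of the generalized Gumbel
# density; a = 1 reduces it to the standard Gumbel distribution.

def generalized_gumbel_pdf(y, a=1.0, theta=1.0, nu=0.0):
    z = theta * (y - nu)
    norm = a ** a * theta / math.gamma(a)
    return norm * math.exp(-a * (z + math.exp(-z)))

# Sanity check: the density integrates to ~1 (simple Riemann sum on a
# wide grid; the tails underflow harmlessly to zero).
dx = 0.01
area = sum(generalized_gumbel_pdf(-20.0 + dx * i) for i in range(4001)) * dx
```

Fitting such a form to the measured p(t) PDF is what reveals the universal large-deviation shape the abstract refers to.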

Abstract:

A chlorine residual must be maintained at all points in a distribution system supplied with chlorine as a disinfectant. The propagation and level of chlorine in a distribution system are affected by both bulk and pipe-wall reactions, and field determination of the wall-reaction parameter is known to be difficult. The source strength of chlorine required to maintain a specified chlorine residual at a target node is also an important parameter. The inverse model presented in this paper determines these water-quality parameters, which are associated with different reaction kinetics, either in single pipes or in groups of pipes. The weighted-least-squares method, based on the Gauss-Newton minimization technique, is used to estimate the parameters. The validation and application of the inverse model are illustrated with an example pipe distribution system under steady state. A generalized procedure for handling noisy and bad (abnormal) data is suggested, which can be used to estimate the parameters more accurately. The inverse model is useful for water supply agencies to calibrate their water distribution systems and to improve their operational strategies for maintaining water quality.
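As a hedged illustration of the weighted-least-squares / Gauss-Newton idea (not the paper's pipe-network model), the sketch below fits a single first-order bulk decay coefficient k to chlorine concentrations assumed to follow c(t) = c0·exp(−kt); all data and names are invented:

```python
import math

# Gauss-Newton for a one-parameter weighted least-squares fit of a
# first-order chlorine decay coefficient (illustrative only).

def gauss_newton_decay(times, obs, weights, c0=1.0, k=0.5, iters=30):
    for _ in range(iters):
        jwj = jwr = 0.0
        for t, c, w in zip(times, obs, weights):
            m = c0 * math.exp(-k * t)   # model concentration
            r = c - m                   # residual
            j = t * m                   # Jacobian dr/dk
            jwj += w * j * j
            jwr += w * j * r
        k -= jwr / jwj                  # Gauss-Newton update
    return k

# Synthetic noise-free measurements generated with k = 0.3.
times = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
obs = [math.exp(-0.3 * t) for t in times]
k_est = gauss_newton_decay(times, obs, [1.0] * len(times))
```

The paper's model estimates several such parameters jointly over a pipe network, but the normal-equation structure of each iteration is the same.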

Abstract:

As computational Grids are increasingly used to execute long-running multi-phase parallel applications, it is important to develop efficient rescheduling frameworks that adapt application execution to resource and application dynamics. In this paper, three strategies or algorithms are developed for deciding when and where to reschedule parallel applications that execute on multi-cluster Grids. The algorithms derive rescheduling plans consisting of potential points in the application execution at which to reschedule, together with schedules of resources for execution between consecutive rescheduling points. A large number of simulations show that the rescheduling plans developed by the algorithms can lead to large decreases in application execution time compared with executions without rescheduling on dynamic Grid resources. The rescheduling plans generated by the algorithms are also competitive with the near-optimal plans generated by brute-force methods. Of the algorithms, the genetic algorithm yielded the most efficient rescheduling plans, with 9-12% smaller average execution times than the other algorithms.

Abstract:

In this paper, the reduced level of rock in Bangalore, India is obtained from data for 652 boreholes covering an area of 220 sq. km. To predict the reduced level of rock in the subsurface of Bangalore and to study the spatial variability of rock depth, ordinary kriging and Support Vector Machine (SVM) models have been developed. In ordinary kriging, knowledge of the semivariogram of the reduced level of rock at the 652 points in Bangalore is used to predict the reduced level of rock at any point in the subsurface where field measurements are not available. A cross-validation (Q1 and Q2) analysis is also carried out for the ordinary kriging model. The SVM, a novel type of learning machine based on statistical learning theory, performs regression with an ε-insensitive loss function and is used to predict the reduced level of rock from the large data set. A comparison between the two models demonstrates that the SVM is superior to ordinary kriging in predicting rock depth.
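A minimal 1-D ordinary-kriging sketch of the interpolation step used above, assuming an exponential semivariogram; the paper works in 2-D with a semivariogram fitted to the 652 boreholes, so the model and all numbers below are illustrative:

```python
import math

# 1-D ordinary kriging: weights solve [Gamma 1; 1^T 0][w; mu] = [g0; 1],
# where Gamma is the semivariogram matrix between data points and g0
# the semivariogram vector to the prediction point.

def gamma(h, sill=1.0, rng=2.0):
    # Exponential semivariogram model (illustrative parameters).
    return sill * (1.0 - math.exp(-h / rng))

def solve(a, b):
    # Gaussian elimination with partial pivoting (tiny dense solver).
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(m[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (m[r][n] - s) / m[r][r]
    return x

def ordinary_kriging(xs, zs, x0):
    n = len(xs)
    a = [[gamma(abs(xs[i] - xs[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    a.append([1.0] * n + [0.0])          # unbiasedness constraint
    b = [gamma(abs(xi - x0)) for xi in xs] + [1.0]
    w = solve(a, b)[:n]
    return sum(wi * zi for wi, zi in zip(w, zs))

# Predicting at a sampled location reproduces the data value exactly.
est = ordinary_kriging([0.0, 1.0, 3.0], [10.0, 12.0, 9.0], 1.0)
```

The exact-interpolation property checked at the end is what distinguishes kriging from the SVM regression it is compared against.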

Abstract:

The primary objective of this paper is to use a statistical digital human model to better understand the nature of the reach probability of points in the task space. The concept of a task-dependent boundary manikin is introduced to geometrically characterize the extreme individuals in a given population who can accomplish a task. For a given point of interest and task, the map of the acceptable variation in anthropometric parameters is superimposed on the distribution of the same parameters in the population to identify the extreme individuals. To illustrate the concept, the task-space mapping is done for the reach probability of human arms. Unlike boundary manikins, which are completely defined by the population, the dimensions of these manikins vary with the task, say, a point to be reached, as in the present case; hence they are referred to here as task-dependent boundary manikins. Simulations with these manikins help designers visualize how differently the extreme individuals would perform the task. Reach probability is computed at the points of a 3D grid in the operational space; for objects overlaid on this grid, approximate probabilities are derived from the grid so that the objects can be rendered with colors indicating reach probability. The method may also provide a rational basis for the selection of personnel for a given task.
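A hedged Monte Carlo sketch of the reach-probability idea (this is not the paper's boundary-manikin method): total arm reach is assumed normally distributed across the population, and the reach probability at a point is the fraction of sampled individuals whose reach covers the distance to it. The mean/sd values are invented:

```python
import math
import random

# Reach probability at a 3D point, with the shoulder at the origin and
# a normally distributed total arm reach (all numbers illustrative).

def reach_probability(point, mean_reach=0.75, sd_reach=0.05,
                      n=100_000, seed=1):
    rng = random.Random(seed)
    d = math.sqrt(sum(c * c for c in point))   # distance to the point
    hits = sum(1 for _ in range(n) if rng.gauss(mean_reach, sd_reach) >= d)
    return hits / n

p_near = reach_probability((0.3, 0.3, 0.3))   # ~0.52 m from the shoulder
p_far = reach_probability((0.6, 0.6, 0.3))    # ~0.90 m: only long reaches
```

Evaluating this over a 3D grid and colour-coding objects by the resulting probability mirrors the rendering step described in the abstract.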

Abstract:

A geometric and nonparametric procedure for testing whether two finite sets of points are linearly separable is proposed. The linear separability test is equivalent to a test that determines whether a strictly positive point h > 0 exists in the range of a matrix A (related to the points in the two finite sets). The algorithm proposed in the paper iteratively checks whether a strictly positive point exists in a subspace by projecting a strictly positive vector with equal coordinates (p) onto the subspace. At the end of each iteration, the subspace is reduced to a lower-dimensional subspace. The test completes within r ≤ min(n, d + 1) steps for both linearly separable and non-separable problems (r is the rank of A, n is the number of points, and d is the dimension of the space containing the points). The worst-case time complexity of the algorithm is O(nr³) and its space complexity is O(nd). A short review of some prominent algorithms and their time complexities is included. The worst-case computational complexity of our algorithm is lower than that of the Simplex, Perceptron, Support Vector Machine and Convex Hull algorithms, if d
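The projection algorithm itself is not reproduced here; as a simpler, hedged stand-in for the same question, the classical perceptron converges on any linearly separable set (perceptron convergence theorem), so running it with an epoch cap gives a practical separability check:

```python
# Perceptron-based linear separability check (a stand-in technique,
# not the paper's projection algorithm).  Returns True iff a
# separating hyperplane is found within the epoch cap.

def perceptron_separable(pos, neg, max_epochs=1000):
    pts = [(p, 1.0) for p in pos] + [(q, -1.0) for q in neg]
    dim = len(pts[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for x, y in pts:
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                errors += 1
        if errors == 0:
            return True      # separating hyperplane (w, b) found
    return False             # epoch cap hit: treat as non-separable
```

Unlike the paper's method, the perceptron has no fixed step bound for non-separable inputs, which is exactly the weakness the proposed r ≤ min(n, d + 1) guarantee addresses.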

Abstract:

In a dense multi-hop network of mobile nodes capable of adaptive power control, we consider the problem of finding the optimal hop distance that maximizes a certain throughput measure in bit-metres/sec, subject to average network power constraints. The mobility of each node is restricted to a circular region centred at its nominal location. We incorporate only the randomly varying path-loss characteristics of the channel gain due to the random motion of nodes, excluding multi-path fading and shadowing effects. Computation of the throughput metric in this scenario requires the probability density function of the random distance between points in two circles. Using numerical analysis, we find that choosing the nearest node as the next hop is not always optimal; optimal throughput performance is also attained at non-trivial hop distances, depending on the available average network power.
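The density that the analysis needs, of the distance between two points drawn uniformly from two circles, is easy to approximate by Monte Carlo; a sketch under our own illustrative parameters (circle radius r, centre separation D):

```python
import math
import random

# Monte Carlo samples of the distance between two points drawn
# uniformly from two discs of radius r whose centres are D apart.

def sample_in_disc(rng, cx, cy, r):
    # Uniform point in a disc: sqrt-transform the radial coordinate.
    t = 2.0 * math.pi * rng.random()
    rad = r * math.sqrt(rng.random())
    return cx + rad * math.cos(t), cy + rad * math.sin(t)

def distance_samples(D=5.0, r=1.0, n=100_000, seed=0):
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        x1, y1 = sample_in_disc(rng, 0.0, 0.0, r)
        x2, y2 = sample_in_disc(rng, D, 0.0, r)
        out.append(math.hypot(x2 - x1, y2 - y1))
    return out

ds = distance_samples()
mean_d = sum(ds) / len(ds)   # histogram ds to approximate the density
```

All samples necessarily fall in [D − 2r, D + 2r], and the empirical histogram of `ds` approximates the probability density function used in the throughput computation.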