142 results for cache consistency
Abstract:
Mandelstam's argument that PCAC follows from assigning Lorentz quantum number M = 1 to the massless pion is examined in the context of the multiparticle dual resonance model. We construct a factorisable dual model for pions which is formulated operatorially on the harmonic oscillator Fock space along the lines of the Neveu-Schwarz model. The model has both m_π and m_ρ as arbitrary parameters unconstrained by the duality requirement. The Adler self-consistency condition is satisfied if and only if the condition m_ρ^2 - m_π^2 = 1/2 is imposed, in which case the model reduces to the chiral dual pion model of Neveu and Thorn, and Schwarz. The Lorentz quantum number of the pion in the dual model is shown to be M = 0.
Abstract:
General relativity has very specific predictions for the gravitational waveforms from inspiralling compact binaries obtained using the post-Newtonian (PN) approximation. We investigate the extent to which the measurement of the PN coefficients, possible with the second generation gravitational-wave detectors such as the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) and the third generation gravitational-wave detectors such as the Einstein Telescope (ET), could be used to test post-Newtonian theory and to put bounds on a subclass of parametrized-post-Einstein theories which differ from general relativity in a parametrized sense. We demonstrate this possibility by employing the best inspiralling waveform model for nonspinning compact binaries which is 3.5PN accurate in phase and 3PN in amplitude. Within the class of theories considered, Advanced LIGO can test the theory at 1.5PN and thus the leading tail term. Future observations of stellar mass black hole binaries by ET can test the consistency between the various PN coefficients in the gravitational-wave phasing over the mass range of 11-44 M_⊙. The choice of the lower frequency cutoff is important for testing post-Newtonian theory using the ET. The bias in the test arising from the assumption of nonspinning binaries is indicated.
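For reference, the phasing whose coefficients such a test compares has the standard stationary-phase-approximation form (textbook PN convention, not reproduced from the paper; M is the total mass, η the symmetric mass ratio, and logarithmic running of some coefficients is suppressed here):

```latex
% Frequency-domain GW phase in the stationary phase approximation (G = c = 1):
\Psi(f) = 2\pi f t_c - \phi_c - \frac{\pi}{4}
        + \frac{3}{128\,\eta\,v^{5}} \sum_{k=0}^{7} \psi_k\, v^{k},
\qquad v = (\pi M f)^{1/3}
```

A parametrized test of the kind described treats each ψ_k as a free parameter and checks that the values measured from the data are mutually consistent with their general-relativistic predictions.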
Abstract:
The different formalisms for the representation of thermodynamic data on dilute multicomponent solutions are critically reviewed. The thermodynamic consistency of the formalisms is examined and the interrelations between them are highlighted. The options and constraints in the use of the interaction parameter and Darken's quadratic formalisms for multicomponent solutions are discussed in the light of the available experimental data. The truncated Maclaurin series expansion is thermodynamically inconsistent unless special relations between interaction parameters are invoked. However, the lack of strict mathematical consistency does not affect the practical use of the formalism. Expressions for excess partial properties can be integrated along defined composition paths without significant loss of accuracy. Although thermodynamically consistent, the applicability of Darken's quadratic formalism to strongly interacting systems remains to be established by experiment.
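As a sketch of the formalism under discussion, the first-order interaction parameter expansion (written here from common usage rather than quoted from the review) expands the activity coefficient of a dilute solute i in a multicomponent solvent as:

```latex
\ln \gamma_i = \ln \gamma_i^{\circ} + \sum_{j} \varepsilon_i^{\,j}\, x_j + \cdots
```

Exact satisfaction of the Gibbs-Duhem relation by such a truncated series requires reciprocity relations among the parameters (e.g. ε_i^j = ε_j^i); these are the kind of special relations the review refers to.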
Abstract:
We present a general formalism for deriving bounds on the shape parameters of the weak and electromagnetic form factors using as input correlators calculated from perturbative QCD, and exploiting analyticity and unitarity. The values resulting from the symmetries of QCD at low energies or from lattice calculations at special points inside the analyticity domain can be included in an exact way. We write down the general solution of the corresponding Meiman problem for an arbitrary number of interior constraints and the integral equations that allow one to include the phase of the form factor along a part of the unitarity cut. A formalism that includes the phase and some information on the modulus along a part of the cut is also given. For illustration we present constraints on the slope and curvature of the K_l3 scalar form factor and discuss our findings in some detail. The techniques are useful for checking the consistency of various inputs and for controlling the parameterizations of the form factors entering precision predictions in flavor physics.
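The "slope and curvature" are coefficients of the usual low-energy expansion of the form factor (notation assumed here, not taken from the paper):

```latex
f_0(t) = f_0(0)\left[\,1 + \lambda_0'\,\frac{t}{m_\pi^{2}}
        + \frac{\lambda_0''}{2}\left(\frac{t}{m_\pi^{2}}\right)^{2} + \cdots\right]
```

The bounds then constrain the allowed region in the (λ₀′, λ₀″) plane.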
Abstract:
Vapor-liquid equilibrium data have been measured for the binary systems methyl ethyl ketone-p-xylene and chlorobenzene-p-xylene, at 685 mmHg pressure. The activity coefficients have been evaluated taking into consideration the vapor-phase nonideality. The t-x-y data have been subjected to a thermodynamic consistency test and the activity coefficients have been correlated by the Wilson equation.
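A minimal sketch of the Wilson correlation used for such binary fits (generic textbook form; the parameter values below are placeholders, not the values fitted in this work):

```python
import math

def wilson_gamma(x1, L12, L21):
    """Binary Wilson activity coefficients (gamma1, gamma2).

    x1       -- liquid mole fraction of component 1
    L12, L21 -- Wilson parameters Lambda_12, Lambda_21, fitted to the data
    """
    x2 = 1.0 - x1
    d1 = x1 + L12 * x2
    d2 = x2 + L21 * x1
    term = L12 / d1 - L21 / d2
    gamma1 = math.exp(-math.log(d1) + x2 * term)
    gamma2 = math.exp(-math.log(d2) - x1 * term)
    return gamma1, gamma2

# Illustration only; L12 and L21 here are hypothetical values.
g1, g2 = wilson_gamma(x1=0.4, L12=0.8, L21=1.1)
```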
Abstract:
Ion transport in a polymer-ionic liquid (IL) soft matter composite electrolyte is discussed here in detail in the context of polymer-ionic liquid interaction and glass transition temperature. The dispersion of polymethylmethacrylate (PMMA) in 1-butyl-3-methylimidazolium hexafluorophosphate (BMIPF6) and 1-butyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide (BMITFSI) resulted in transparent composite electrolytes with a jelly-like consistency. The composite ionic conductivity measured over the range -30 °C to 60 °C was always lower than that of the neat BMITFSI/BMIPF6 and LiTFSI-BMITFSI/LiTFSI-BMIPF6 electrolytes but still very high (>1 mS/cm at 25 °C up to 50 wt% PMMA). While addition of LiTFSI to the IL does not influence the glass transition temperature T_g and melting temperature T_m significantly, dispersion of PMMA (especially at higher contents) resulted in an increase in T_g and the disappearance of T_m. In general the profile of temperature-dependent ionic conductivity could be fitted to the Vogel-Tamman-Fulcher (VTF) equation, suggesting solvent-assisted ion transport. However, for higher PMMA concentrations a sharp demarcation of temperature regimes between thermally activated and solvent-assisted ion transport was observed, with the glass transition temperature acting as the reference point for transformation from one form of transport mechanism to the other. Because of the beneficial physico-chemical properties and interesting ion transport mechanism we envisage the present soft matter electrolytes to be promising for application in electrochromic devices. (C) 2010 Elsevier Ltd. All rights reserved.
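The VTF form mentioned is, in one common convention (prefactor conventions vary between authors; σ₀, B and T₀ are fit parameters):

```latex
\sigma(T) = \sigma_0\, T^{-1/2} \exp\!\left(\frac{-B}{T - T_0}\right)
```

A good fit of this form is what signals solvent-assisted (free-volume-like) transport; an Arrhenius fit, by contrast, would indicate thermally activated hopping.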
Abstract:
The present study aims to assess whether the smectite-rich Cochin and Mangalore clays, which were deposited in a marine medium and subsequently uplifted, exhibit a consistency limits response typical of expanding-lattice or nonexpanding (fixed-lattice) clays on artificially changing the chemical environment. The chemical and engineering behaviors of Cochin and Mangalore marine clays are also compared with those of the smectite-rich Ariake Bay marine clay from Japan. Although Cochin, Mangalore, and Ariake clays contain comparable amounts of smectite (32-45%), Ariake clay exhibits lower consistency limits and much higher ranges of liquidity indices than the Indian marine clays. The lower consistency limits of the Ariake clay are attributed to the absence of well-developed, long-range, interparticle forces associated with the clay. Also, Ariake clay exhibits a significantly large (48-714 times) decrease in undrained strength on remolding in comparison to Cochin and Mangalore clays (sensitivity ranges between 1 and 4). A preponderance of long-range, interparticle forces reflected in the high consistency limits of Cochin and Mangalore clays (w_L ranging from 75 to 180%) combined with low natural water contents yields low liquidity indices (typically <1) and high remolded undrained strengths, and is considered to be responsible for the low sensitivity of the Indian marine clays.
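The comparisons rest on the standard index definitions (textbook definitions, stated here for reference, not restated from the paper):

```latex
\mathrm{LI} = \frac{w - w_P}{w_L - w_P}, \qquad
S_t = \frac{s_{u,\,\mathrm{undisturbed}}}{s_{u,\,\mathrm{remolded}}}
```

where w is the natural water content and w_L, w_P the liquid and plastic limits; low LI combined with high w_L is what yields the low sensitivities reported for the Indian clays.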
Abstract:
Belief revision systems aim at keeping a database consistent. They mostly concentrate on how to record and maintain dependencies. We propose an axiomatic system, called MFOT, as a solution to the problem of belief revision. MFOT has a set of proper axioms which selects a set of most plausible and consistent input beliefs. The proposed nonmonotonic inference rule further maintains consistency while generating the consequences of input beliefs. It also permits multiple property inheritance with exceptions. We also examine some important properties of the proposed axiomatic system and propose an object-centered belief revision model. The relevance of such a model in maintaining the beliefs of a physician is examined.
Abstract:
The reported presence of polysaccharide in marine clays and its recognized role as a bonding agent provided the motivation to examine the role of starch polysaccharide in the remoulded properties of nonswelling (kaolinite) and swelling (bentonite) groups of clays. The starch polysaccharide belongs to a group of naturally occurring, large-sized organic molecules (termed polymers) and is built up by extensive repetition of simple chemical units called repeat units. The results of the study indicate that the impact of the starch polysaccharide on the remoulded properties of clays is dependent on the mineralogy of the clays. On addition to bentonite clay, the immensely large number of segments (repeat units) of the starch polysaccharide creates several polymer segment-clay surface bonds that cause extensive aggregation of the bentonite unit layers. The aggregation of the bentonite unit layers greatly curtails the surface area of the clay mineral available for diffuse ion layer formation. The reduction in diffuse ion layer thickness markedly lowers the consistency limits and vane shear strength of the bentonite clay. On addition to kaolinite, the numerous polymer segment-clay surface bonds enhance the tendency of the kaolinite particles to flocculate. The enhanced particle flocculation is apparently responsible for a small to moderate increase in the liquid limit and remoulded undrained strength of the nonswelling clay.
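The diffuse ion layer thickness invoked here scales, in the standard Gouy-Chapman picture (a textbook result quoted from common soil-chemistry usage, not from the paper), as:

```latex
\frac{1}{\kappa} = \left(\frac{\varepsilon_0 D k T}{2 n_0 e^{2} v^{2}}\right)^{1/2}
```

where D is the dielectric constant of the pore fluid, n₀ the electrolyte concentration and v the ion valence; anything that curtails the surface area available for this layer suppresses the consistency limits, as observed for the bentonite.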
Abstract:
This study explores the utility of polarimetric measurements for discriminating between hydrometeor types with the emphasis on (a) hail detection and discrimination of its size, (b) measurement of heavy precipitation, (c) identification and quantification of mixed-phase hydrometeors, and (d) discrimination of ice forms. In particular, we examine the specific differential phase, the backscatter differential phase, the correlation coefficient between vertically and horizontally polarized waves, and the differential reflectivity, collected from a storm at close range. Three range–height cross sections are analyzed together with complementary data from a prototype WSR-88D radar. The case is interesting because it demonstrates the complementary nature of these polarimetric measurands. Self-consistency among them allows qualitative and some quantitative discrimination between hydrometeors.
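For reference, the polarimetric measurands examined are conventionally defined as (standard radar-polarimetry definitions, not specific to this study):

```latex
Z_{DR} = 10\log_{10}\frac{Z_{HH}}{Z_{VV}}\ \mathrm{[dB]}, \qquad
K_{DP} = \frac{\Phi_{DP}(r_2) - \Phi_{DP}(r_1)}{2\,(r_2 - r_1)}\ \mathrm{[deg\,km^{-1}]}
```

together with the copolar correlation coefficient ρ_HV between the horizontally and vertically polarized returns.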
Abstract:
The constructional details of an 18-bit binary inductive voltage divider (IVD) for a.c. bridge applications are described. The simplified construction with fewer windings, and the interconnection of windings through SPDT solid-state relays instead of DPDT relays, improve the reliability of the IVD. High accuracy for most precision measurements is achieved without D/A converters. The checks for self-consistency in voltage division show that the error is less than 2 counts in 2^18.
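To put that figure in perspective (simple arithmetic, not a claim from the paper), 2 counts in 2^18 corresponds to a relative error of

```latex
\frac{2}{2^{18}} = \frac{2}{262144} \approx 7.6 \times 10^{-6},
```

i.e. about 7.6 ppm of the input voltage.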
Abstract:
The importance of long-range prediction of rainfall pattern for devising and planning agricultural strategies cannot be overemphasized. However, the prediction of rainfall pattern remains a difficult problem and the desired level of accuracy has not been reached. The conventional methods for prediction of rainfall use either dynamical or statistical modelling. In this article we report the results of a new modelling technique using artificial neural networks. Artificial neural networks are especially useful where the dynamical processes and their interrelations for a given phenomenon are not known with sufficient accuracy. Since conventional neural networks were found to be unsuitable for simulating and predicting rainfall patterns, a generalized structure of a neural network was explored and found to provide consistent prediction (hindcast) of all-India annual mean rainfall with good accuracy. Performance and consistency of this network are evaluated and compared with those of other (conventional) neural networks. It is shown that the generalized network can make consistently good predictions of annual mean rainfall. Immediate application and potential of such a prediction system are discussed.
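Purely as an illustration of the class of model involved — a generic one-hidden-layer regression network, not the authors' "generalized" architecture; all names and hyperparameters here are hypothetical:

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.01, epochs=2000, seed=0):
    """Tiny one-hidden-layer regression MLP trained by batch gradient descent.

    X -- (n_samples, n_features) predictors (e.g. lagged rainfall values)
    y -- (n_samples,) normalised target (e.g. all-India annual mean rainfall)
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, hidden); b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)      # hidden-layer activations
        pred = h @ W2 + b2            # linear output for regression
        err = pred - y                # residuals of squared-error loss
        gW2 = h.T @ err / len(y); gb2 = err.mean()
        gh = np.outer(err, W2) * (1.0 - h**2)   # backprop through tanh
        gW1 = X.T @ gh / len(y); gb1 = gh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2
```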
Abstract:
CD-ROMs have proliferated as a distribution medium for desktop machines for a large variety of multimedia applications (targeted for a single-user environment) like encyclopedias, magazines and games. With CD-ROM capacities up to 3 GB becoming available in the near future, they will form an integral part of Video on Demand (VoD) servers to store full-length movies and multimedia. In the first section of this paper we look at issues related to the single-user desktop environment. Since these multimedia applications are highly interactive in nature, we take a pragmatic approach and have made a detailed study of multimedia application behavior in terms of the I/O request patterns generated to the CD-ROM subsystem by tracing these patterns. We discuss prefetch buffer design and seek time characteristics in the context of the analysis of these traces. We also propose an adaptive main-memory hosted cache that receives caching hints from the application to reduce the latency when the user moves from one node of the hypergraph to another. In the second section we look at the use of CD-ROM in a VoD server and discuss the problem of scheduling multiple request streams and buffer management in this scenario. We adapt the C-SCAN (Circular SCAN) algorithm to suit the CD-ROM drive characteristics and prove that it is optimal in terms of buffer size management. We provide computationally inexpensive relations by which this algorithm can be implemented. We then propose an admission control algorithm which admits new request streams without disrupting the continuity of playback of the previous request streams. The algorithm also supports operations such as fast forward and replay. Finally, we discuss the problem of optimal placement of MPEG streams on CD-ROMs in the third section.
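A minimal sketch of the Circular SCAN ordering the paper adapts (the generic algorithm only; the CD-ROM-specific cost model, buffer bounds and admission control from the paper are not reproduced here):

```python
def cscan_order(pending, head):
    """Order pending block requests by Circular SCAN (C-SCAN).

    The head services requests in increasing block order; after passing
    the highest outstanding block it wraps to the lowest and sweeps again.
    """
    ahead = sorted(b for b in pending if b >= head)
    behind = sorted(b for b in pending if b < head)
    return ahead + behind

# Example with the head at block 500:
print(cscan_order([120, 900, 510, 40, 700], head=500))
# -> [510, 700, 900, 40, 120]
```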
Abstract:
The variation of the viscosity as a function of the sequence distribution in an A-B random copolymer melt is determined. The parameters that characterize the random copolymer are the fraction of A monomers f, the parameter lambda which determines the correlation in the monomer identities along a chain, and the Flory chi parameter chi(F) which determines the strength of the enthalpic repulsion between monomers of type A and B. For lambda>0, there is a greater probability of finding like monomers at adjacent positions along the chain, and for lambda<0 unlike monomers are more likely to be adjacent to each other. The traditional Markov model for the random copolymer melt is altered to remove ultraviolet divergences in the equations for the renormalized viscosity, and the phase diagram for the modified model has a binary fluid type transition for lambda>0 and does not exhibit a phase transition for lambda<0. A mode coupling analysis is used to determine the renormalization of the viscosity due to the dependence of the bare viscosity on the local concentration field. Due to the dissipative nature of the coupling, there are nonlinearities both in the transport equation and in the noise correlation. The concentration dependence of the transport coefficient presents additional difficulties in the formulation due to the Ito-Stratonovich dilemma, and there is some ambiguity about the choice of the concentration to be used while calculating the noise correlation. In the Appendix, it is shown using a diagrammatic perturbation analysis that the Ito prescription for the calculation of the transport coefficient, when coupled with a causal discretization scheme, provides a consistent formulation that satisfies stationarity and the fluctuation dissipation theorem. This functional integral formalism is used in the present analysis, and consistency is verified for the present problem as well. The upper critical dimension for this type of renormalization is 2, and so there is no divergence in the viscosity in the vicinity of a critical point. The results indicate that there is a systematic dependence of the viscosity on lambda and chi(F). The fluctuations tend to increase the viscosity for lambda<0 and decrease the viscosity for lambda>0, and an increase in chi(F) tends to decrease the viscosity. (C) 1996 American Institute of Physics.
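A small sketch of the two-state Markov parametrization behind f and lambda, using the standard convention in which the stationary A-fraction is f and lambda is the transition-matrix eigenvalue controlling nearest-neighbour correlations (this convention is assumed, not quoted from the paper):

```python
import random

def markov_copolymer(n, f, lam, seed=0):
    """Sample an A/B random copolymer sequence of length n.

    f   -- stationary fraction of A monomers
    lam -- sequence correlation: lam > 0 favours like neighbours,
           lam < 0 favours alternation, lam = 0 is ideally random.
           Valid probabilities require lam > -f/(1-f) and lam > -(1-f)/f.
    """
    rng = random.Random(seed)
    p_A_after_A = f + (1.0 - f) * lam   # P(A | previous monomer was A)
    p_A_after_B = f * (1.0 - lam)       # P(A | previous monomer was B)
    seq = ['A' if rng.random() < f else 'B']
    for _ in range(n - 1):
        p = p_A_after_A if seq[-1] == 'A' else p_A_after_B
        seq.append('A' if rng.random() < p else 'B')
    return ''.join(seq)

print(markov_copolymer(40, f=0.5, lam=0.6))  # blocky sequence, lam > 0
```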
Abstract:
We explore the consequences of the model of spin-down-induced flux expulsion for the magnetic field evolution in solitary as well as in binary neutron stars. The spin evolution of pulsars, allowing for their field evolution according to this model, is shown to be consistent with the existing observational constraints in both low- and high-mass X-ray binary systems. The contribution from pulsars recycled in massive binaries to the observed excess in the number of low-field (10^11-10^12 G) solitary pulsars is argued to be negligible in comparison with that of normal pulsars undergoing a 'restricted' field decay predicted by the adopted field decay model. Magnetic fields of neutron stars born in close binaries with intermediate- or high-mass main-sequence companions are predicted to decay down to values as low as ~10^6 G, which would leave them unobservable as pulsars during most of their lifetimes. The post-recycling evolution of some of these systems can, however, account for the observed binary pulsars having neutron star or massive white dwarf companions. Pulsars recycled in the disc population low-mass binaries are expected to have residual fields ≳10^8 G, while for those processed in globular clusters larger residual fields are predicted because of the lower field strength of the neutron star at the epoch of binary formation. A value of τ ~ 1-2 × 10^7 yr for the mean value of the Ohmic decay time-scale in the crusts of neutron stars is suggested, based on the consistency of the model predictions with the observed distribution of periods and magnetic fields in the single and binary pulsars.
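In its simplest exponential form (a simplification for orientation only; the adopted model ties the decay to spin-down-induced flux expulsion rather than to pure Ohmic dissipation), crustal field decay with the suggested time-scale reads:

```latex
B(t) = B_0\, e^{-t/\tau}, \qquad \tau \sim 1\text{--}2 \times 10^{7}\ \mathrm{yr}
```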