979 results for consistency


Relevance: 10.00%

Abstract:

We explore the consequences of the model of spin-down-induced flux expulsion for the magnetic field evolution in solitary as well as in binary neutron stars. The spin evolution of pulsars, allowing for their field evolution according to this model, is shown to be consistent with the existing observational constraints in both low- and high-mass X-ray binary systems. The contribution from pulsars recycled in massive binaries to the observed excess in the number of low-field (10^11-10^12 G) solitary pulsars is argued to be negligible in comparison with that of normal pulsars undergoing a 'restricted' field decay predicted by the adopted field decay model. Magnetic fields of neutron stars born in close binaries with intermediate- or high-mass main-sequence companions are predicted to decay down to values as low as ~10^6 G, which would leave them unobservable as pulsars during most of their lifetimes. The post-recycling evolution of some of these systems can, however, account for the observed binary pulsars having neutron star or massive white dwarf companions. Pulsars recycled in disc-population low-mass binaries are expected to have residual fields ≳10^8 G, while for those processed in globular clusters larger residual fields are predicted because of the lower field strength of the neutron star at the epoch of binary formation. A value of τ ~ 1-2 × 10^7 yr for the mean Ohmic decay time-scale in the crusts of neutron stars is suggested, based on the consistency of the model predictions with the observed distribution of periods and magnetic fields in single and binary pulsars.
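
For reference, the field decay behind the quoted time-scale is, in its simplest form, exponential Ohmic dissipation of crustal currents. This is a minimal sketch of that standard relation only; the paper's actual model couples the decay to spin-down-induced flux expulsion:

    B(t) = B_0 \, e^{-t/\tau}, \qquad \tau \sim 1\text{--}2 \times 10^{7}\ \mathrm{yr}.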

Relevance: 10.00%

Abstract:

Memory models of shared-memory concurrent programs define the values a read of a shared memory location is allowed to see. Such memory models are typically weaker than the intuitive sequential consistency semantics, to allow efficient execution. In this paper, we present WOMM (Weak Operational Memory Model), which formally unifies two sources of weak behavior in hardware memory models: reordering of instructions and weakly consistent memory. We show that a large number of optimizations are allowed by WOMM. We also show that WOMM is weaker than a number of hardware memory models; consequently, if a program behaves correctly under WOMM, it will be correct with respect to those hardware memory models. Hence, WOMM can be used as a formally specified abstraction of the hardware memory models. Moreover, unlike most weak memory models, WOMM is described using operational semantics, making it easy to integrate into a model checker for concurrent programs. We further show that WOMM has an important property: it provides sequential consistency semantics for data-race-free programs.
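
To make the weak behavior concrete, the sketch below (our own illustrative C11 example, not code from the paper) implements the classic store-buffering litmus test: sequential consistency forbids the outcome r1 == 0 && r2 == 0, but relaxed atomics, and hence any memory model at least as weak as WOMM, permit it.

    /* Store-buffering litmus test (illustrative sketch, not code from the
     * paper).  Build with:  cc -std=c11 -pthread sb.c
     * Under sequential consistency, at least one thread must see the
     * other's store, so r1 == 0 && r2 == 0 is impossible.  Relaxed
     * atomics -- and any memory model at least this weak -- permit it. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int x, y;     /* shared locations, reset each trial */
    static int r1, r2;          /* per-trial observations */

    static void *t1(void *arg) {
        (void)arg;
        atomic_store_explicit(&x, 1, memory_order_relaxed);  /* W x = 1 */
        r1 = atomic_load_explicit(&y, memory_order_relaxed); /* R y     */
        return NULL;
    }

    static void *t2(void *arg) {
        (void)arg;
        atomic_store_explicit(&y, 1, memory_order_relaxed);  /* W y = 1 */
        r2 = atomic_load_explicit(&x, memory_order_relaxed); /* R x     */
        return NULL;
    }

    int main(void) {
        int weak = 0;
        for (int i = 0; i < 10000; i++) {
            atomic_store(&x, 0);
            atomic_store(&y, 0);
            pthread_t a, b;
            pthread_create(&a, NULL, t1, NULL);
            pthread_create(&b, NULL, t2, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            if (r1 == 0 && r2 == 0)
                weak++;   /* outcome forbidden by sequential consistency */
        }
        printf("non-SC outcome in %d of 10000 trials\n", weak);
        return 0;
    }

Whether the weak outcome actually shows up depends on the hardware and the scheduler; the point is that a weak model must account for it, while guaranteeing that data-race-free programs (e.g. the same test with seq_cst ordering) observe only sequentially consistent behavior.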

Relevance: 10.00%

Abstract:

Suspensions of testicular germ cells from six species of mammals were prepared, stained for DNA content with a fluorochrome (ethidium bromide) using a common technique, and subjected to DNA flow cytometry. While uniform staining of the germ cells of the mouse, hamster, rat and monkey could be obtained by treatment with 0.5% pepsin for 60 min followed by staining with ethidium bromide for 30 min, optimal staining for the guinea pig and rabbit required pepsinization for 90 min and treatment with ethidium bromide for 60 min. The procedure adopted here provided a uniform recovery of over 80% of germ cells with each of the species tested, and the cell population distributed itself according to DNA content (expressed as C values) into 5 major classes: spermatogonia (2C), cells in S-phase, primary spermatocytes (4C), round spermatids (1C), and elongating/elongated spermatids (HC). Comparison of the DNA distribution pattern of the germ cell populations between species revealed little variation in the relative quantities of cells with 2C (8-11%), S-phase (6-9%), and 4C (6-9%) amounts of DNA. Though the spermatid cell populations exhibited variations (1C: 31-46%, HC1: 7-20% and HC2: 11-25%), they represented the bulk of germ cells (70-80%). The overall conversion of 2C to 1C (1C:2C ratio) and the meiotic transformation of 4C cells to 1C (1C:4C ratio) were relatively constant between the species studied. The present study clearly demonstrates that DNA flow cytometry can be adopted with ease and assurance to quantify germ cell transformation, and hence spermatogenesis, by analysing a large number of samples with consistency both within and across the species barrier. Any variation from these norms in germ cell proportions observed following treatment, e.g. hormonal stimulation or deprivation, can then be ascribed to a specific effect of the hormone/drug on single or multiple steps in germ cell transformation.
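
As a worked illustration of the quoted kinetics (our own arithmetic on the midpoints of the ranges above, not figures from the paper):

    \mathrm{1C{:}2C} \approx \frac{38.5\%}{9.5\%} \approx 4.1, \qquad \mathrm{1C{:}4C} \approx \frac{38.5\%}{7.5\%} \approx 5.1,

broadly consistent with one diploid (2C) cell ultimately giving rise to four haploid (1C) spermatids.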

Relevance: 10.00%

Abstract:

Integral excess free energy of a quaternary system has been expressed in terms of the MacLaurin infinite series. The series is subjected to appropriate boundary conditions, and each of the derivatives is correlated to the corresponding interaction coefficient. The derivation of the partial functions involves extensive summation of various infinite series pertaining to the first-order and quaternary parameters, to remove any truncation error. The thermodynamic consistency of the derived partials has been established on the basis of the Gibbs-Duhem relations. The equations are used to interpret the thermodynamic properties of the Fe-Cr-Ni-N system.
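
In our notation, the starting point is the multivariable MacLaurin expansion of the integral excess free energy about infinite dilution; for solvent 1 and solutes 2, 3, 4 this is a sketch of the generic form, with the paper supplying the boundary conditions and the full summation:

    G^{xs}(x_2, x_3, x_4) \;=\; \sum_{j,k,l \geq 0} \frac{x_2^{\,j} x_3^{\,k} x_4^{\,l}}{j!\,k!\,l!} \left( \frac{\partial^{\,j+k+l} G^{xs}}{\partial x_2^{\,j} \, \partial x_3^{\,k} \, \partial x_4^{\,l}} \right)_{x_2 = x_3 = x_4 = 0},

with each partial derivative identified with an interaction coefficient of the corresponding order.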

Relevance: 10.00%

Abstract:

Porous, large-surface-area, metastable zirconias are of importance to catalytic, electrochemical, biological, and thermal insulation applications. Combustion synthesis is a commonly used method for producing such zirconias; however, its rapid nature makes control difficult. A simple modification has been made to traditional solution combustion synthesis to address this problem: the addition of starch to yield a starting mixture with a "dough-like" consistency. Just 5 wt% starch significantly alters the combustion characteristics of the "dough." In particular, it helps to achieve better control over the reaction zone temperature, which is significantly lower than the one calculated by the adiabatic approximation typically used in self-propagating high-temperature synthesis. The effect of such control is demonstrated by the ability to tune the dough composition to yield zirconias with different phase compositions, from the relatively elusive "amorphous" phase to monoclinic (> 30 nm grain size) and tetragonal (< 30 nm grain size) pure zirconia. The nature of the amorphous phase has been investigated using infrared spectroscopy. Starch content also helps tailor porosity in the final product. Zirconias with an average pore size of about 50 μm and specific surface areas as large as 110 m²/g have been obtained.
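
The adiabatic approximation referred to above is the standard SHS energy balance (our notation), which equates the heat of reaction to the sensible heat of the products:

    -\Delta H_r(T_0) \;=\; \int_{T_0}^{T_{ad}} \sum_i n_i \, C_{p,i}(T) \, dT;

the starch-modified "dough" burns at a reaction-zone temperature well below the T_ad obtained from this balance.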

Relevance: 10.00%

Abstract:

Thermodynamic constraints on component chemical potentials in three-phase fields, introduced by the various isograms suggested in the literature, are derived for a ternary system containing compounds. When the compositions of two compounds lie on an isogram, the isogram is associated with specific characteristics which can be used to gain further understanding of the interplay of thermodynamic factors that determine phase equilibria. When two compounds are shared by adjacent three-phase fields, the constraints are dictated by binary compositions generated by the intersection of a line passing through the shared compounds with the sides of the ternary triangle. Generalized expressions for an arbitrary line through the triangle are presented. These are consistent with the special relations obtained along the Kohler, Colinet and Jacob isograms. Five axioms are introduced and proved. They provide valuable tools for checking the consistency of thermodynamic measurements and for deriving thermodynamic properties from phase diagrams.
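
The consistency checks rest on the isothermal, isobaric Gibbs-Duhem relation for a ternary A-B-C (standard form):

    x_A \, d\mu_A + x_B \, d\mu_B + x_C \, d\mu_C = 0 \qquad (\text{constant } T, P);

since the chemical potentials are invariant across a three-phase field at fixed T and P, constraints of this kind propagate along any composition line through the triangle, including the Kohler, Colinet and Jacob isograms.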

Relevance: 10.00%

Abstract:

We present multifrequency Very Large Array (VLA) observations of two giant quasars, 0437-244 and 1025-229, from the Molonglo Complete Sample. These sources have well-defined FR II radio structure, possible one-sided jets, no significant depolarization between 1365 and 4935 MHz, and low rotation measure (|RM| < 20 rad m^-2). Giant sources are defined as those with an overall projected size of at least 1 Mpc. We have compiled a sample of about 50 known giant radio sources from the literature and have compared some of their properties with a complete sample of 3CR radio sources of smaller size, to investigate the evolution of giant sources and test their consistency with the unified scheme for radio galaxies and quasars. We find an inverse correlation between the degree of core prominence and total radio luminosity, and show that the giant radio sources have core strengths similar to smaller sources of similar total luminosity. Hence their large sizes are unlikely to be caused by stronger nuclear activity. The degree of collinearity of the giant sources is also similar to that of the sample of smaller sources. The luminosity-size diagram shows that the giant sources are less luminous than our sample of smaller 3CR sources, consistent with evolutionary scenarios in which the giants have evolved from the smaller sources, losing energy as they expand to these large dimensions. For the smaller sources, radiative losses resulting from synchrotron radiation are more significant, while for the giant sources the equipartition magnetic fields are smaller and inverse Compton loss against the microwave background radiation is the dominant process. The radio properties of the giant radio galaxies and quasars are consistent with the unified scheme.
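
The loss crossover invoked above can be stated explicitly (a standard result, not a calculation from the paper): inverse Compton scattering off the microwave background dominates over synchrotron losses wherever the source magnetic field falls below the equivalent field of the CMB,

    B_{\mathrm{CMB}} \simeq 3.25\,(1+z)^2 \ \mu\mathrm{G},

a condition easily met in the low-equipartition-field lobes of giant sources.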

Relevance: 10.00%

Abstract:

The consistency of very soft sediments prevents the conventional oedometer test from being applied to study their compressibility and permeability characteristics, and existing hydraulic consolidation tests require sophisticated instrumentation and testing procedures. The present paper proposes a seepage-force-induced consolidation testing procedure for studying the compressibility and permeability behavior of soft sediments at low effective stress levels. The good agreement observed between results from the proposed method and the conventional oedometer test at overlapping effective stress levels indicates that the proposed method can be used satisfactorily to study the compressibility and permeability characteristics of soft sediments at low effective stress levels.
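
For orientation, the body force exploited by the proposed test is the textbook seepage force per unit volume (our notation):

    j = i \, \gamma_w,

where i is the hydraulic gradient and \gamma_w the unit weight of water; a controlled downward flow therefore imposes small, well-defined effective stresses on the sediment.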

Relevance: 10.00%

Abstract:

Control of sound transmission through, and reflection from, a structure immersed in fluid media imposes highly conflicting requirements on the design of carpeted noise control linings. These requirements become even more stringent if the structure is expected to move at considerable speed, particularly under intense hydrostatic pressure. Numerous configurations are possible for such linings. In this paper, a few lining configurations are identified from the literature for parametric study, so that the designer is provided with an environment in which to analyse and design the lining. Finite element analysis is used to assess the acoustic performance of these linings. The commercial finite element software NISA® is used as a platform to develop a customized environment in which the design parameters of different configurations can be varied, with consistency checks, and the finite element meshes generated using the 8-noded hexahedral element. The four types of design proposed and analysed here address the parameters of interest, namely echo reduction and transmission loss. The effect of different surface distributions of the cavities is studied, and the effect of static pressure on the different designs is reported.
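
The two performance measures are conventionally defined as follows (standard definitions, our notation):

    TL = 10 \log_{10} \frac{W_i}{W_t}\ \mathrm{dB}, \qquad ER = 20 \log_{10} \left| \frac{p_i}{p_r} \right|\ \mathrm{dB},

where W_i and W_t are the incident and transmitted acoustic powers and p_i and p_r the incident and reflected pressures; the conflict noted above arises because a good lining must keep both figures large simultaneously.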

Relevance: 10.00%

Abstract:

A methodology termed the “filtered density function” (FDF) is developed and implemented for large eddy simulation (LES) of chemically reacting turbulent flows. In this methodology, the effects of the unresolved scalar fluctuations are taken into account by considering the probability density function (PDF) of subgrid scale (SGS) scalar quantities. A transport equation is derived for the FDF in which the effect of chemical reactions appears in a closed form. The influences of scalar mixing and convection within the subgrid are modeled. The FDF transport equation is solved numerically via a Lagrangian Monte Carlo scheme in which the solutions of the equivalent stochastic differential equations (SDEs) are obtained. These solutions preserve the Itô-Gikhman nature of the SDEs. The consistency of the FDF approach, the convergence of its Monte Carlo solution and the performance of the closures employed in the FDF transport equation are assessed by comparisons with results obtained by direct numerical simulation (DNS) and by conventional LES procedures in which the first two SGS scalar moments are obtained by a finite difference method (LES-FD). These comparative assessments are conducted by implementations of all three schemes (FDF, DNS and LES-FD) in a temporally developing mixing layer and a spatially developing planar jet under both non-reacting and reacting conditions. In non-reacting flows, the Monte Carlo solution of the FDF yields results similar to those via LES-FD. The advantage of the FDF is demonstrated by its use in reacting flows. In the absence of a closure for the SGS scalar fluctuations, the LES-FD results are significantly different from those based on DNS. The FDF results show a much closer agreement with filtered DNS results.
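
For reference, the FDF is built by convolving a fine-grained density with the LES filter kernel G (a sketch of the standard construction in our notation, for N_s scalars \phi_\alpha):

    F_L(\boldsymbol{\psi}; \mathbf{x}, t) = \int G(\mathbf{x}' - \mathbf{x}) \prod_{\alpha=1}^{N_s} \delta\!\left[ \psi_\alpha - \phi_\alpha(\mathbf{x}', t) \right] d\mathbf{x}',

so that filtered values of any function of the scalars follow by integration over the composition space \boldsymbol{\psi}, which is why the chemical source term appears in closed form.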

Relevance: 10.00%

Abstract:

Biomechanical signals due to human movements during exercise are represented in the time-frequency domain using the Wigner Distribution Function (WDF). Analysis based on the WDF reveals instantaneous spectral and power changes during a rhythmic exercise. Investigations were carried out on 11 healthy subjects who performed 5 cycles of sun salutation, with a body-mounted Inertial Measurement Unit (IMU) as a motion sensor. The variances of instantaneous frequency (IF) and instantaneous power (IP), used for performance analysis of the subjects, are estimated using a one-way ANOVA model. The results reveal that joint time-frequency analysis of biomechanical signals during motion facilitates a better understanding of grace and consistency during rhythmic exercise.
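
The underlying representation is the standard Wigner distribution of a signal x(t):

    W_x(t, f) = \int_{-\infty}^{\infty} x\!\left(t + \tfrac{\tau}{2}\right) x^{*}\!\left(t - \tfrac{\tau}{2}\right) e^{-j 2\pi f \tau} \, d\tau,

from which the instantaneous frequency follows as the first conditional moment in f and the instantaneous power as the marginal over f.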

Relevance: 10.00%

Abstract:

A new structured discretization of 2D space, named X-discretization, is proposed to solve bivariate population balance equations for breakup and aggregation of particles using the framework of minimal internal consistency of discretization of Chakraborty and Kumar [2007. A new framework for solution of multidimensional population balance equations. Chem. Eng. Sci. 62, 4112-4125]. The 2D space of particle constituents (internal attributes) is discretized into bins by using arbitrarily spaced constant-composition radial lines and constant-mass lines of slope -1. The quadrilaterals are triangulated by using straight lines pointing towards the mean composition line. The monotonicity of the new discretization makes it quite easy to implement, like a rectangular grid, but with significantly reduced numerical dispersion. We use the new discretization of space to automate the expansion and contraction of the computational domain for the aggregation process, corresponding to the formation of larger particles and the disappearance of smaller particles, by adding and removing the constant-mass lines at the boundaries. The results show that the predictions of particle size distribution on a fixed X-grid are in better agreement with the analytical solution than those obtained with earlier techniques. Simulations carried out with expansion and/or contraction of the computational domain as the population evolves show that the proposed strategy of evolving the computational domain with the aggregation process brings down the computational effort quite substantially; the larger the extent of evolution, the greater the reduction in computational effort.
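
For context, the continuous bivariate aggregation equation being discretized is of the standard Smoluchowski form (our notation, with internal attributes x and y and aggregation kernel \beta):

    \frac{\partial n(x,y,t)}{\partial t} = \frac{1}{2} \int_0^x \!\! \int_0^y \beta(x - x', y - y'; x', y') \, n(x - x', y - y', t) \, n(x', y', t) \, dy' \, dx' \; - \; n(x,y,t) \int_0^\infty \!\! \int_0^\infty \beta(x, y; x', y') \, n(x', y', t) \, dy' \, dx';

internal consistency of a discretization then requires that newborn particles be assigned to bins so as to preserve the chosen moments (here, number and the two constituent masses).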

Relevance: 10.00%

Abstract:

The questions one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are, and (ii) what the cost is, in terms of the amount of computation and the amount of storage used in obtaining the outputs. The absolutely error-free quantities, and the completely errorless computations of a natural process, can never be captured by any means at our disposal. While computations in nature/natural processes, including their real-valued inputs, are exact, the computations we perform on a digital computer, or in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error - as a matter of hypothesis rather than assumption - is not less than 0.005 per cent. By error we mean here relative error bounds: since the exact error is never known under any circumstances or in any context, the term error denotes nothing but error-bounds. Further, in engineering computations it is the relative error or, equivalently, the relative error-bounds (and not the absolute error) that is supremely important in providing information about the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e. in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency through human error, through the inherent non-removable error associated with any measuring device, or through assumptions introduced to make the problem solvable, or more easily solvable, in practice. Thus, if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of these three factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost - the amount of computation and of storage - through complexity. It points out the limitations of error-free computation (wherever possible, i.e. where the number of arithmetic operations is finite and known a priori) as well as of the use of interval arithmetic. Further, the interdependence among error, confidence, and cost is discussed.
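
In symbols, the quantity tracked throughout is the relative error, or rather its bound (a definition, with the measurement hypothesis stated above):

    e_r = \frac{|\tilde{x} - x|}{|x|} \leq b,

where for any physically measured input the hypothesis places the attainable bound at b \geq 5 \times 10^{-5}, i.e. 0.005 per cent.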

Relevance: 10.00%

Abstract:

Retaining walls are among the important structures in the nearshore environment and are generally designed using deterministic approaches. The present paper focuses on the reliability assessment of cantilever retaining walls with due consideration of the uncertainties in soil parameters. Reliability analysis quantifies the level of reliability associated with a design and the risk involved; it also formalises a design situation that is normally recognised only by experienced designers, and provides a greater level of consistency in design. The results are also examined in terms of a simple cost function. The study shows that sliding is the critical failure mode and that the consequent failure costs are also higher. It further shows that the provision of a shear key results in improved reliability and a reduction in expected costs.
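
A minimal sketch of the quantities involved (standard first-order reliability notation; the cost function shown is illustrative, not necessarily the paper's):

    \beta = \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}}, \qquad P_f = \Phi(-\beta), \qquad E[C] = C_0 + P_f \, C_f,

where R and S are resistance and load effect, \Phi the standard normal distribution function, C_0 the initial cost and C_f the cost of failure; a shear key raises \beta for the critical sliding mode and thereby lowers the expected cost.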

Relevance: 10.00%

Abstract:

The ladle is a crucial element of the pouring system for achieving process consistency and quality in the manufacture of castings. The flow of molten metal from T-spout ladles, in which the spout projects at an angle from the shell of the ladle, was investigated by simulation, modeling and analysis, followed by experimental validation with water flow in translucent models and verification in a cast iron foundry. The key parameters in the design of a ladle (for its geometry) are identified as the ladle size, the spout diameter and the spout angle. Velocity and flow rate from a T-spout ladle are governed by, and increase with, the angle of tilt, in addition to the aforementioned design parameters. The cross-section and profile of the issuing jet, which displays a stable twisting pattern during free fall of both the model fluid (water) and the molten iron, are notable aspects in the design of the ladle, particularly of the spout exit.
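
To first order, the exit velocity and flow rate from the spout follow the standard orifice relations (illustrative, not the paper's model):

    v = C_v \sqrt{2 g h}, \qquad Q = C_d A \sqrt{2 g h},

where h is the metal head above the spout exit (which grows with tilt), A the spout cross-section, and C_v, C_d the velocity and discharge coefficients; this is consistent with the observed increase of both quantities with tilt angle.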