9 results for INCONSISTENCY
at Indian Institute of Science - Bangalore - India
Abstract:
The necessary and sufficient condition for the existence of the one-parameter scale function, the β-function, is obtained exactly. The analysis reveals a certain inconsistency inherent in the scaling theory, and tends to support Mott's idea of minimum metallic conductivity.
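For context, the one-parameter scale function referred to above is, in the standard scaling theory of localization, the logarithmic derivative of the dimensionless conductance (a textbook definition, not quoted from the abstract):

```latex
\beta(g) \;=\; \frac{d \ln g}{d \ln L},
```

where g is the dimensionless conductance and L the system size; one-parameter scaling is the assertion that β depends on L only through g itself.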
Abstract:
The effective medium theory for a system with randomly distributed point conductivity and polarisability is reformulated, with attention to cross-terms involving the two disorder parameters. The treatment reveals a certain inconsistency of the conventional theory owing to the neglect of the Maxwell-Wagner effect. The results are significant for the critical resistivity and dielectric anomalies of a binary liquid mixture at the phase separation point.
Abstract:
A ternary thermodynamic function has been developed based on statistico-thermodynamic considerations, with particular emphasis on the higher-order terms indicating the effects of truncation at the various stages of the treatment. Although the truncation of a series involved in the equation introduces inconsistency, the latter may be removed by imposing various thermodynamic boundary conditions. These conditions are discussed in the paper. The present equation with higher-order terms shows that the α function of a component reduces to a quadratic function of composition along constant compositional paths involving the other two components in the system. The form of the function has been found to be representative of various experimental observations.
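The α function is not defined in the abstract; in ternary solution thermodynamics it is commonly taken (an assumption here, not stated in the paper) as α_i = ln γ_i / (1 − x_i)², with γ_i the activity coefficient of component i. The claimed quadratic reduction along a constant compositional path can then be written schematically as:

```latex
\alpha_1 \;=\; a + b\,x_1 + c\,x_1^{2},
\qquad \text{along a path with } \frac{x_2}{x_3} = \text{const},
```

where a, b, c are path-dependent constants (illustrative form only, not the paper's derived coefficients).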
Abstract:
The effects of microstructural parameters on the resistance to fatigue crack growth (FCG) in the near-threshold region have been systematically investigated for a high-strength low-alloy steel at three different temper levels. In general, widely different trends were observed in the dependence of both the total threshold stress intensity range, ΔK(th), and the intrinsic or effective threshold stress intensity range, ΔK(eff-th), on the prior austenitic grain size (PAGS). While a low strain-hardening microstructure obtained by tempering at high temperatures exhibited a strong dependence of ΔK(th) on the PAGS, by virtue of strong interactions of crack-tip slip with the grain boundary, a high-strength, high strain-hardening microstructure resulting from tempering at low temperature exhibited a weak dependence. The lack of systematic variation of the near-threshold parameters with grain size in temper-embrittled structures appears to be related to the wide variations in the amount of intergranular fracture near threshold. Crack closure provides, to some extent, a basis on which the increases in ΔK(th) at larger grain sizes can be rationalised. In addition, this study provides a broad perspective on the relative roles of slip behaviour, embrittlement and environment that produce the different trends observed in the grain-size dependence of near-threshold fatigue parameters, on the basis of which the inconsistency in the results reported in the literature can be clearly understood. Assessment of fracture modes through extensive fractography revealed that prior austenitic grain boundaries are more effective barriers to cyclic crack growth than martensitic packet boundaries, especially at low stress intensities. Fracture morphologies comprising low-energy flat transgranular fracture can occur close to threshold, depending on the combination of strain-hardening behaviour, yield strength and embrittlement effects.
Detailed consideration is given to cyclic stress-strain behaviour, embrittlement and environmental effects, and to the implications of these phenomena for crack growth behaviour near threshold.
Abstract:
The questions that one should answer in engineering computations (deterministic, probabilistic/randomized, as well as heuristic) are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used in obtaining the outputs. The absolutely error-free quantities, as well as the completely errorless computations done in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including the input real quantities, are exact, all the computations that we do using a digital computer, or that are carried out in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here, by error we mean relative error bounds. The fact that the exact error is never known, under any circumstances and in any context, implies that the term error is nothing but error-bounds. Further, in engineering computations it is the relative error, or equivalently the relative error-bounds (and not the absolute error), that is supremely important in providing information about the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable, or more easily solvable, in practice. Thus, if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to any or all of the three foregoing factors.
We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and do get results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the usage of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
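The interdependence of error-bound, confidence, and cost can be illustrated with a standard Hoeffding-bound calculation (a generic sketch, not the speaker's own algorithm): for a PAC-style (ε, δ) guarantee on an empirical mean of samples bounded in [0, 1], n ≥ ln(2/δ)/(2ε²) samples suffice.

```python
import math

def pac_sample_size(eps: float, delta: float) -> int:
    """Smallest n satisfying the Hoeffding guarantee
    P(|empirical mean - true mean| > eps) <= delta
    for n i.i.d. samples bounded in [0, 1]."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

# Tightening either the error bound eps or the confidence
# parameter delta drives up the computational cost n.
print(pac_sample_size(0.05, 0.05))  # 738 samples for eps = 5%, delta = 5%
```

Halving ε quadruples the required sample count, while halving δ adds only a logarithmic factor, which is the error/confidence/cost trade-off the abstract refers to.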
Abstract:
The study, which set out to investigate the compositional inconsistency in the lanthanum zirconate system, revealed the presence of nonstoichiometry in lanthanum zirconate powders synthesized by the coprecipitation route. X-ray diffraction (XRD) and high-resolution transmission electron microscopy (HRTEM) investigations confirmed the depletion of La3+ ions in the system. Analysis using Vegard's law showed the La/Zr mole ratio in the sample to be around 0.45. An extra ultrasonication step, introduced during the washing stage following the coprecipitation reaction, ensured the formation of stoichiometric La2Zr2O7. Also noteworthy is the difference between the crystal sizes in samples prepared with and without the ultrasonication step. This difference has been explained in light of the formation of individual nuclei and their scope for growth within the precipitate core. Differential scanning calorimetry (DSC) analyses revealed that the optimum pH for the synthesis of La2Zr2O7 is about 11. The ultrasonication step was pivotal in assuring consistency of mixing and composition for the lanthanum zirconate powders.
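Vegard's law, used above to estimate the La/Zr ratio, treats the lattice parameter of a solid solution as a linear interpolation between the end members; composition is recovered by inverting that line. A minimal sketch with purely illustrative numbers (not the paper's data):

```python
def composition_from_lattice(a_obs: float, a_end1: float, a_end2: float) -> float:
    """Invert Vegard's law, a(x) = x * a_end1 + (1 - x) * a_end2,
    to estimate the mole fraction x of end member 1 from an
    observed lattice parameter a_obs (linear interpolation)."""
    return (a_obs - a_end2) / (a_end1 - a_end2)

# Hypothetical lattice parameters in angstroms, for illustration only:
x = composition_from_lattice(a_obs=10.75, a_end1=10.80, a_end2=10.70)
print(round(x, 2))  # 0.5: observed parameter halfway between end members
```

In the paper, a deviation of the inferred ratio from the nominal stoichiometry (0.45 instead of 0.5) signalled the La3+ depletion.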
Abstract:
This letter presents an accurate steady-state phasor model for a doubly fed induction machine. The drawback of the existing steady-state phasor model is discussed. In particular, the inconsistency of the existing equivalent model with respect to reactive power flows when the machine is operated at supersynchronous speeds is highlighted. The relevant mathematical basis for the proposed model is presented, and its validity is illustrated on a 2-MW doubly fed induction machine.
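Supersynchronous operation, where the existing model's inconsistency shows up, means negative slip: the rotor turns faster than the synchronous speed and the rotor circuit also exports power. A textbook sketch of the ideal (lossless) active-power relation, not the letter's proposed model, with hypothetical machine numbers:

```python
def slip(n_sync_rpm: float, n_rotor_rpm: float) -> float:
    """Slip s = (n_sync - n_rotor) / n_sync; negative when the
    rotor runs above synchronous speed (supersynchronous)."""
    return (n_sync_rpm - n_rotor_rpm) / n_sync_rpm

def rotor_power(p_stator_w: float, s: float) -> float:
    """Ideal lossless steady-state relation P_rotor = -s * P_stator.
    For s < 0 the rotor circuit delivers power to the grid, so the
    total machine output exceeds the stator power alone."""
    return -s * p_stator_w

s = slip(1500.0, 1800.0)       # 4-pole, 50 Hz machine, hypothetical speeds
print(s)                       # negative slip: supersynchronous
print(rotor_power(2.0e6, s))   # rotor exports |s| * P_stator
```

The letter's contribution concerns the reactive power flows in this regime, which the simple relation above does not capture.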