918 results for Scalar failure


Relevance:

60.00%

Publisher:

Abstract:

The term reliability of an equipment or device is often meant to indicate the probability that it carries out the functions expected of it adequately, without failure and within specified performance limits, at a given age for a desired mission time when put to use under the designated application and operating environmental stress. A broad classification of the approaches employed in reliability studies can be made into probabilistic and deterministic: the main interest in the former is to devise tools and methods to identify the random mechanism governing the failure process through a proper statistical framework, while the latter addresses the question of finding the causes of failure and the steps to reduce individual failures, thereby enhancing reliability. In the probabilistic approach, to which the present study subscribes, the concept of a life distribution, a mathematical idealisation that describes the failure times, is fundamental, and a basic question a reliability analyst has to settle is the form of the life distribution. It is for this reason that a major share of the literature on the mathematical theory of reliability is focussed on methods of arriving at reasonable models of failure times and on describing the failure patterns that induce such models. The application of the methodology of lifetime distributions is not confined to the assessment of the endurance of equipment and systems only, but ranges over a wide variety of scientific investigations where the word lifetime may not refer to the length of life in the literal sense, but can be conceived in its most general form as a non-negative random variable. Thus the tools developed in connection with modelling lifetime data have found applications in other areas of research such as actuarial science, engineering, biomedical sciences, economics, extreme value theory, etc.
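For reference, the survival-function formalism this abstract appeals to can be written compactly (standard notation for a non-negative lifetime, not taken from the thesis itself):

```latex
% Reliability (survival) function of a non-negative lifetime T with CDF F:
\[
  R(t) = \Pr(T > t) = 1 - F(t), \qquad
  h(t) = \frac{f(t)}{R(t)}, \qquad t \ge 0,
\]
% where f = F' is the density and h is the hazard (failure) rate.
% Example: for the exponential model F(t) = 1 - e^{-\lambda t},
% R(t) = e^{-\lambda t} and h(t) = \lambda (constant failure rate).
```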

Relevance:

30.00%

Publisher:

Abstract:

Many large-scale GNSS CORS networks have been deployed around the world to support various commercial and scientific applications. To make use of these networks for real-time kinematic positioning services, one of the major challenges is ambiguity resolution (AR) over long inter-station baselines in the presence of considerable atmospheric biases. Usually, the widelane ambiguities are fixed first, followed by determination of the narrowlane ambiguity integers based on the ionosphere-free model, in which the widelane integers are introduced as known quantities. This paper seeks to improve AR performance over long baselines through efficient procedures for improved float solutions and ambiguity fixing. The contribution is threefold: (1) instead of using ionosphere-free measurements, absolute and/or relative ionospheric constraints are introduced in an ionosphere-constrained model to enhance the model strength, resulting in better float solutions; (2) realistic widelane ambiguity precision is estimated by capturing the multipath effects due to the observation complexity, improving the reliability of widelane AR; (3) for narrowlane AR, partial AR is applied to a subset of ambiguities selected according to successively increasing elevation. For fixing the scalar ambiguity, a rounding method with controllable error probability is proposed. The established ionosphere-constrained model can be solved efficiently with a sequential Kalman filter; it can be either reduced to special models simply by adjusting the variances of the ionospheric constraints, or extended with more parameters and constraints. The presented methodology is tested over seven baselines of around 100 km from the USA CORS network. The results show that the new widelane AR scheme achieves a 99.4% successful fixing rate with a 0.6% failure rate, while the new rounding method for narrowlane AR achieves a fixing rate of 89% with a failure rate of 0.8%. In summary, AR reliability can be efficiently improved with a rigorously controllable probability of incorrectly fixed ambiguities.
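The abstract does not spell out the error-probability-controllable rounding procedure; the sketch below shows one way such a check can work for a single (scalar) ambiguity, assuming a Gaussian, unbiased float estimate so that the integer-rounding success probability is P0 = 2Φ(1/(2σ)) − 1. The function name, the tolerance alpha, and the sample numbers are illustrative, not the authors' implementation.

```python
import math

def phi(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def try_fix_scalar_ambiguity(a_float: float, sigma: float, alpha: float = 0.008):
    """Round a single float ambiguity to the nearest integer only if the
    rounding failure probability stays below the tolerance alpha.

    Assumes the float ambiguity is Gaussian and unbiased, in which case the
    success probability of integer rounding is P0 = 2*Phi(1/(2*sigma)) - 1.
    Returns (fixed_integer, success_probability), with None in the first slot
    if the ambiguity is left float.
    """
    p_success = 2.0 * phi(1.0 / (2.0 * sigma)) - 1.0
    if 1.0 - p_success <= alpha:
        return round(a_float), p_success
    return None, p_success

# Example: a narrowlane float ambiguity of 3.12 cycles
print(try_fix_scalar_ambiguity(3.12, 0.08))   # precise enough: fixed to 3
print(try_fix_scalar_ambiguity(3.12, 0.40))   # risk too high: left float
```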

Relevance:

30.00%

Publisher:

Abstract:

Strain-based failure criteria have several advantages over stress-based failure criteria: they can account for elastic and inelastic strains, they utilise direct, observable effects instead of inferred effects (strain gauges vs. stress estimates), and they can model complete stress-strain curves, including pre-peak non-linear elasticity and post-peak strain weakening. In this study, a strain-based failure criterion derived from thermodynamic first principles utilising the concepts of continuum damage mechanics is presented. Furthermore, implementation of this failure criterion into a finite-element simulation is demonstrated and applied to the stability of underground coal mine pillars. In numerical studies, pillar strength is usually expressed in terms of critical stresses or stress-based failure criteria, where scaling with pillar width and height is common. Previous publications have employed the finite-element method for pillar stability analysis using stress-based failure criteria such as Mohr-Coulomb and Hoek-Brown or stress-based scalar damage models. A novel constitutive material model, which takes into consideration anisotropy as well as elastic strain and damage as state variables, has been developed and is presented in this paper. The damage threshold and its evolution are strain-controlled, and coupling of the state variables is achieved through the damage-induced degradation of the elasticity tensor. This material model is implemented in the finite-element software ABAQUS and can be applied to 3D problems. Initial results show that this new material model is capable of describing the non-linear behaviour of geomaterials commonly observed before peak strength is reached, as well as post-peak strain softening. Furthermore, it is demonstrated that the model can account for directional dependency of failure behaviour (i.e. anisotropy) and has the potential to be extended to environmental controls such as temperature or moisture.
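The constitutive model described above is anisotropic and 3D; as a much-reduced illustration of the strain-controlled damage idea (threshold, history variable, stiffness degradation E = (1 − D)E0), a uniaxial scalar-damage sketch with made-up parameters is given below. It is not the authors' model, only the generic mechanism.

```python
import numpy as np

def uniaxial_damage_response(strains, E0=20e9, eps0=1.0e-3, eps_f=8.0e-3):
    """Uniaxial stress response of a simple strain-driven scalar damage model.

    Illustrative only: a single damage variable D degrades the Young's modulus,
    E = (1 - D) * E0, with D driven by the largest strain reached so far (kappa)
    once it exceeds the threshold eps0, and linear softening up to eps_f.
    Parameter values are purely demonstrative.
    """
    kappa = 0.0                      # history variable: max strain seen so far
    stresses = []
    for eps in strains:
        kappa = max(kappa, abs(eps))
        if kappa <= eps0:
            D = 0.0                  # below threshold: undamaged, linear elastic
        else:
            # linear softening: stress decays to zero as kappa approaches eps_f
            D = min(1.0, (eps_f / kappa) * (kappa - eps0) / (eps_f - eps0))
        stresses.append((1.0 - D) * E0 * eps)
    return np.array(stresses)

# Monotonic loading: pre-peak linear branch followed by post-peak softening
eps_path = np.linspace(0.0, 8.0e-3, 50)
sigma = uniaxial_damage_response(eps_path)
print(f"peak stress ~ {sigma.max()/1e6:.1f} MPa at strain {eps_path[sigma.argmax()]:.4f}")
```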

Relevance:

30.00%

Publisher:

Abstract:

The objective of this paper is to test various available turbulent burning velocity models on an experimental version of a Siemens small-scale combustor using a commercial CFD code. The burning velocity model, with different expressions for the turbulent burning velocity, is observed to fail, producing an unphysical flame flashback into the swirler. The Eddy Dissipation Model/Finite Rate Chemistry approach is found to over-predict mean temperature and species concentrations. Solving a reaction progress variable equation together with its variance, using scalar dissipation rate modelling, produces reasonably good agreement with the available experimental data. Two turbulence models, Shear Stress Transport (SST) and Scale Adaptive Simulation (SAS-SST), are tested, and results from transient SST simulations are observed to predict the flow well. SAS-SST is found to under-predict the temperature and species distributions.
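For orientation, one common form of the burning-velocity-model closure being tested here is the progress-variable transport equation with a turbulent burning velocity source; the Zimont-type correlation shown is only an example, since the abstract does not state which expressions were used.

```latex
% Transport of the Favre-averaged reaction progress variable c (0 = unburnt,
% 1 = burnt), closed with a turbulent burning velocity S_T:
\[
  \frac{\partial \bar{\rho}\,\tilde{c}}{\partial t}
  + \nabla\cdot\!\left(\bar{\rho}\,\tilde{\mathbf{u}}\,\tilde{c}\right)
  = \nabla\cdot\!\left(\frac{\mu_t}{\mathrm{Sc}_t}\,\nabla\tilde{c}\right)
  + \rho_u\, S_T\, \lvert \nabla \tilde{c} \rvert ,
\]
% with a Zimont-type correlation for the turbulent burning velocity:
\[
  S_T = A\, u'^{3/4}\, S_L^{1/2}\, \alpha_u^{-1/4}\, \ell_t^{1/4},
\]
% where u' is the turbulence intensity, S_L the laminar burning velocity,
% alpha_u the unburnt thermal diffusivity, and ell_t the integral length scale.
```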

Relevance:

30.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)