973 results for Weibull distribution function


Relevance: 90.00%

Abstract:

The ability to forecast machinery failure is vital to reducing maintenance costs, operation downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognostic models for forecasting machinery health based on condition data. Although these models have aided the advancement of the discipline, they have made only a limited contribution to developing an effective machinery health prognostic system. The literature review indicates that there is not yet a prognostic model that directly models and fully utilises suspended condition histories (which are very common in practice, since organisations rarely allow their assets to run to failure); that effectively integrates population characteristics into prognostics for longer-range prediction in a probabilistic sense; that deduces the non-linear relationship between measured condition data and actual asset health; and that involves minimal assumptions and requirements. This work presents a novel approach to addressing the above-mentioned challenges. The proposed model consists of a feed-forward neural network, the training targets of which are asset survival probabilities estimated using a variation of the Kaplan-Meier estimator and a degradation-based failure probability density estimator. The adapted Kaplan-Meier estimator is able to model the actual survival status of individual failed units and to estimate the survival probability of individual suspended units. The degradation-based failure probability density estimator, on the other hand, extracts population characteristics and computes conditional reliability from available condition histories instead of from reliability data. The estimated survival probability and the relevant condition histories are presented as the “training target” and “training input”, respectively, to the neural network. The trained network is capable of estimating the future survival curve of a unit when a series of condition indices is input.
Although the concept proposed may be applied to the prognosis of various machine components, rolling element bearings were chosen as the research object because rolling element bearing failure is one of the foremost causes of machinery breakdown. Computer-simulated and industry case study data were used to compare the prognostic performance of the proposed model with that of four control models, namely: two feed-forward neural networks with the same training function and structure as the proposed model but neglecting suspended histories; a time series prediction recurrent neural network; and a traditional Weibull distribution model. The results support the assertion that the proposed model performs better than the other four models and that it produces adaptive prediction outputs with a useful representation of survival probabilities. This work presents a compelling concept for non-parametric data-driven prognosis, and for utilising available asset condition information more fully and accurately. It demonstrates that machinery health can indeed be forecasted. The proposed prognostic technique, together with ongoing advances in sensors and data-fusion techniques, and increasingly comprehensive databases of asset condition data, holds promise for increased asset availability, maintenance cost effectiveness, operational safety and, ultimately, organisational competitiveness.
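The adapted estimator described above builds on the standard Kaplan-Meier product-limit estimator, which already accommodates suspended (censored) histories. A minimal sketch of the standard estimator, using hypothetical failure/suspension data, is:

```python
def kaplan_meier(times, observed):
    """Product-limit survival estimate.

    times    : failure or suspension times of individual units
    observed : 1 if the unit failed at that time, 0 if it was suspended
    Returns a list of (time, S(t)) pairs at each distinct failure time.
    """
    data = sorted(zip(times, observed))
    s, out = 1.0, []
    i, n = 0, len(data)
    while i < n:
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e == 1)  # failures at t
        r = sum(1 for tt, e in data if tt >= t)             # units at risk at t
        if d > 0:
            s *= 1.0 - d / r
            out.append((t, s))
        while i < n and data[i][0] == t:                    # skip ties at t
            i += 1
    return out

# Four units: failures at t = 1, 2, 3 and one suspension at t = 2.
curve = kaplan_meier([1, 2, 2, 3], [1, 0, 1, 1])
# curve ≈ [(1, 0.75), (2, 0.5), (3, 0.0)]
```

The suspended unit at t = 2 never contributes a failure but still counts in the at-risk set, which is exactly the information a run-to-failure-only model would discard.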

Relevance: 90.00%

Abstract:

The available wind power is stochastic and requires appropriate tools in the OPF model for economic and reliable power system operation. This paper presents an OPF formulation that accounts for the factors involved in the intermittency of wind power. A Weibull distribution is adopted to model the stochastic wind speed and the resulting power distribution. The reserve requirement is evaluated based on the wind distribution and the risk of under-/over-estimation of the wind power. In addition, the Wind Energy Conversion System (WECS) is represented by Doubly Fed Induction Generator (DFIG) based wind farms. The reactive power capability of DFIG-based wind farms is also analyzed. The study is performed on the IEEE 30-bus system with the wind farm located at different buses and with different wind profiles. The reactive power capacity to be installed in the wind farm to maintain a satisfactory voltage profile under the various wind flow scenarios is also determined.
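The Weibull wind model and the under-/over-estimation risks can be sketched numerically. The power curve, Weibull parameters and scheduled output below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def weibull_speeds(k, c, size, rng):
    # Inverse-CDF sampling: v = c * (-ln(1 - U))**(1/k)
    u = rng.random(size)
    return c * (-np.log1p(-u)) ** (1.0 / k)

def turbine_power(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=1.0):
    # Hypothetical piecewise power curve (p.u.): cubic ramp to rated speed
    p = np.zeros_like(v)
    ramp = (v >= v_in) & (v < v_rated)
    p[ramp] = p_rated * (v[ramp] ** 3 - v_in ** 3) / (v_rated ** 3 - v_in ** 3)
    p[(v >= v_rated) & (v <= v_out)] = p_rated
    return p

rng = np.random.default_rng(0)
v = weibull_speeds(k=2.0, c=8.0, size=100_000, rng=rng)  # assumed shape/scale
p = turbine_power(v)
scheduled = 0.4                    # hypothetical scheduled wind output (p.u.)
p_over = np.mean(p < scheduled)    # risk of overestimating available power
p_under = np.mean(p > scheduled)   # risk of underestimation (spilled wind)
```

The two tail probabilities are the quantities that drive the reserve requirement in a formulation of this kind: overestimation must be covered by spinning reserve, underestimation by curtailment or storage.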

Relevance: 90.00%

Abstract:

The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure at future times is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to obtain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative to traditional reliability analysis is to model condition indicators, operating environment indicators, and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing models are based on the principles of the Proportional Hazards Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully utilise the three types of asset health information (failure event data, i.e. observed and/or suspended; condition data; and operating environment data) in a single model for more effective hazard and reliability predictions.
In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (dependent variables), whereas operating environment indicators act as explanatory variables (independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related, and more pressing, question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and the condition indicators. Condition indicators provide information about the health condition of an asset; they therefore update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few.
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of the operating environment indicators may be nil in EHM, the condition indicators are always present, because they are observed and measured as long as an asset is operational and surviving. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between the condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, is not required in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. a Weibull distribution) in the form of the baseline hazard. However, in many industrial applications failure event data are sparse, and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the semi-parametric EHM's restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, which is a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model.
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimates with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including: extending the parameter estimation method to the case of time-dependent covariate effects and missing data; applying EHM to both repairable and non-repairable systems using field data; and developing a decision support model linked to the estimated reliability results.
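The semi-parametric form described above can be sketched numerically: a Weibull baseline hazard reformed by a condition indicator, multiplied by an operating-environment covariate term. All functional forms and parameter values here are illustrative assumptions, not the thesis's fitted model:

```python
import numpy as np

# Sketch of a semi-parametric covariate-based hazard: Weibull baseline in
# time, reformed by a condition indicator z(t), with an operating
# environment covariate w(t) entering multiplicatively.
beta, eta = 2.0, 100.0        # assumed Weibull shape and scale (baseline)
alpha, gam = 0.05, 0.3        # assumed covariate coefficients

def hazard(t, z, w):
    h0 = (beta / eta) * (t / eta) ** (beta - 1.0) * np.exp(alpha * z)
    return h0 * np.exp(gam * w)   # environment accelerates/decelerates

t = np.linspace(0.0, 200.0, 2001)
z = 0.02 * t                         # hypothetical monotone degradation signal
w = np.where(t > 100.0, 1.0, 0.0)    # hypothetical load step at t = 100
h = hazard(t, z, w)
R = np.exp(-np.cumsum(h) * (t[1] - t[0]))   # reliability R(t) = exp(-∫h dt)
```

Because the condition indicator enters the baseline rather than the multiplicative covariate term, the hazard ratio between two assets is no longer constant in time, which is the sense in which the proportionality assumption is dropped.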

Relevance: 90.00%

Abstract:

The cotton strip assay (CSA) is an established technique for measuring soil microbial activity. The technique involves burying cotton strips and measuring their tensile strength after a certain time. This gives a measure of the rotting rate, R, of the cotton strips; R is then a measure of soil microbial activity. This paper examines properties of the technique and indicates how the assay can be optimised. Humidity conditioning of the cotton strips before measuring their tensile strength reduced the within-day and between-day variance and enabled the distribution of the tensile strength measurements to approximate normality. The test data came from a three-way factorial experiment (two soils, two temperatures, three moisture levels). The cotton strips were buried in the soil for intervals of time ranging up to six weeks. This enabled the rate of loss of cotton tensile strength with time to be studied under a range of conditions. An inverse cubic model accounted for greater than 90% of the total variation within each treatment combination. This offers support for summarising the decomposition process by the single parameter R. The approximate variance of the decomposition rate was estimated from a function incorporating the variance of tensile strength and the derivative of the function for the rate of decomposition, R, with respect to tensile strength. This variance function has a minimum when the measured strength is approximately 2/3 of the original strength. The estimates of R are almost unbiased and relatively robust against the cotton strips being left in the soil for more or less than the optimal time. We conclude that the rotting rate R should be measured using the inverse cubic equation, and that the cotton strips should be left in the soil until their strength has been reduced to about 2/3 of its original value.
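The location of the variance minimum can be checked numerically under an assumed inverse-cubic loss law. Here we take (s0/s)^3 = 1 + Rt, which is one reading of "inverse cubic" and an assumption rather than the paper's stated equation, and propagate the tensile-strength variance with the delta method:

```python
import numpy as np

s0 = 1.0                             # original tensile strength (normalised)
x = np.linspace(0.05, 0.95, 1801)    # s/s0: fraction of strength remaining

# Assumed loss law: (s0/s)**3 = 1 + R*t, so R = ((s0/s)**3 - 1) / t.
# Delta method: Var(R) ≈ (dR/ds)**2 * Var(s). Eliminating t at fixed R,
# the s-dependent factor of Var(R) is proportional to 1 / (x*(1 - x**3))**2,
# so we locate its minimum over the remaining-strength fraction x.
rel_var = 1.0 / (x * (1.0 - x ** 3)) ** 2
x_opt = x[np.argmin(rel_var)]        # analytic optimum: 4**(-1/3) ≈ 0.63
```

Maximising x(1 - x^3) gives x = 4^(-1/3) ≈ 0.63, i.e. roughly 2/3 of the original strength, consistent with the sampling recommendation above.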

Relevance: 90.00%

Abstract:

We study the probability distribution of the angle by which the tangent to the trajectory rotates in the course of a plane random walk. It is shown that the determination of this distribution function can be reduced to an integral equation, which can be rigorously transformed into a differential equation of Hill's type. We derive the asymptotic distribution for very long walks.

Relevance: 90.00%

Abstract:

We derive a very general expression for the survival probability and the first passage time distribution of a particle executing Brownian motion in full phase space with an absorbing boundary condition at a point in position space, which is valid irrespective of the statistical nature of the dynamics. The expression, together with Jensen's inequality, naturally leads to a lower bound on the actual survival probability and an approximate first passage time distribution. These are expressed in terms of the position-position, velocity-velocity, and position-velocity variances. Knowledge of these variances enables one to compute a lower bound on the survival probability and consequently the first passage distribution function. As examples, we compute these for a Gaussian Markovian process and, in the case of a non-Markovian process, for an exponentially decaying friction kernel and also for a power-law friction kernel. Our analysis shows that the survival probability decays exponentially at long times, irrespective of the nature of the dynamics, with an exponent equal to the transition state rate constant.
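The setting can be illustrated with a direct Monte Carlo estimate of the survival probability for a particle obeying Langevin dynamics in full phase space (x, v), absorbed on first reaching x = 0. The friction, temperature and initial condition below are illustrative assumptions; the analytic bound of the paper is not reproduced here, only the quantity it bounds:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, steps = 10_000, 1e-3, 2000
gamma_f, temp = 1.0, 1.0                 # friction, temperature (k_B = m = 1)
x = np.ones(n)                           # all particles start at x = 1
v = rng.normal(0.0, np.sqrt(temp), n)    # thermal initial velocities
alive = np.ones(n, dtype=bool)
noise_amp = np.sqrt(2.0 * gamma_f * temp * dt)
surv = []
for _ in range(steps):
    # Euler-Maruyama step of the Langevin equation for surviving particles
    v[alive] += -gamma_f * v[alive] * dt + noise_amp * rng.normal(size=alive.sum())
    x[alive] += v[alive] * dt
    alive &= x > 0.0                     # absorb on first crossing of x = 0
    surv.append(alive.mean())
surv = np.array(surv)                    # empirical survival probability S(t)
```

On a semi-log plot the tail of `surv` straightens out, consistent with the exponential long-time decay stated above.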

Relevance: 90.00%

Abstract:

We study the distribution of residence time, or equivalently that of the "mean magnetization", for a family of Gaussian Markov processes indexed by a positive parameter alpha. The persistence exponent for these processes is simply given by theta = alpha, but the residence time distribution is nontrivial. The shape of this distribution undergoes a qualitative change as theta increases, indicating a sharp change in the ergodic properties of the process. We develop two alternative methods to calculate exactly, but recursively, the moments of the distribution for arbitrary alpha. For some special values of alpha, we obtain closed-form expressions for the distribution function.

Relevance: 90.00%

Abstract:

In this paper, we report an analysis of the protein sequence length distribution for 13 bacteria, four archaea and one eukaryote whose genomes have been completely sequenced. The frequency distribution of protein sequence length for all 18 organisms is remarkably similar, independent of genome size, and can be described in terms of a lognormal probability distribution function. A simple stochastic model based on multiplicative processes is proposed to explain the sequence length distribution. The stochastic model supports the random-origin hypothesis of protein sequences in genomes. Distributions of large proteins deviate from the overall lognormal behavior: their cumulative distribution follows a power law analogous to Pareto's law used to describe the income distribution of the wealthy. The protein sequence length distribution in the genomes of organisms has important implications for microbial evolution and applications.
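The multiplicative-process mechanism can be sketched in a few lines: if a length is the product of many independent positive random factors, its logarithm is a sum of i.i.d. terms and tends to a normal distribution, so the length itself is approximately lognormal. The factor distribution and scale below are arbitrary illustrative choices, not fitted to genomic data:

```python
import numpy as np

rng = np.random.default_rng(42)
# Each "length" is a product of 40 independent positive random factors.
factors = rng.uniform(0.8, 1.25, size=(50_000, 40))   # illustrative factors
lengths = 300.0 * factors.prod(axis=1)                # arbitrary length scale

def skewness(a):
    a = (a - a.mean()) / a.std()
    return float((a ** 3).mean())

# The raw lengths are strongly right-skewed (lognormal-like), while their
# logarithms are nearly symmetric, as the CLT argument predicts.
sk_raw = skewness(lengths)
sk_log = skewness(np.log(lengths))
```

The same argument is why the lognormal fit is insensitive to genome size: only the number and spread of the multiplicative steps matter, not the organism-specific details.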

Relevance: 90.00%

Abstract:

The velocity distribution for a vibrated granular material is determined in the dilute limit where the frequency of particle collisions with the vibrating surface is large compared to the frequency of binary collisions. The particle motion is driven by the source of energy due to particle collisions with the vibrating surface, and two dissipation mechanisms, inelastic collisions and air drag, are considered. In the latter case, a general form for the drag force is assumed. First, the distribution function for the vertical velocity of a single particle colliding with a vibrating surface is determined in the limit where the dissipation during a collision (due to inelasticity) or between successive collisions (due to drag) is small compared to the energy of the particle. In addition, two types of amplitude functions for the velocity of the surface, symmetric and asymmetric about zero velocity, are considered. In all cases, differential equations for the distribution of velocities at the vibrating surface are obtained using a flux balance condition in velocity space, and these are solved to determine the distribution function. It is found that the distribution function is Gaussian when the dissipation is due to inelastic collisions and the amplitude function is symmetric, and the mean square velocity scales as ⟨U²⟩ₛ/(1 − e²), where ⟨U²⟩ₛ is the mean square velocity of the vibrating surface and e is the coefficient of restitution. The distribution function is very different from a Gaussian when the dissipation is due to air drag and the amplitude function is symmetric, and the mean square velocity scales as (⟨U²⟩ₛ g/μₘ)^(1/(m+2)) when the acceleration due to fluid drag is −μₘ u_y |u_y|^(m−1), where g is the acceleration due to gravity.
For an asymmetric amplitude function, the distribution function at the vibrating surface is found to be sharply peaked around ±2⟨U⟩ₛ/(1 − e) when the dissipation is due to inelastic collisions, and around ±[(m + 2)⟨U⟩ₛ g/μₘ]^(1/(m+1)) when the dissipation is due to fluid drag, where ⟨U⟩ₛ is the mean velocity of the surface. The distribution functions are compared with numerical simulations of a particle colliding with a vibrating surface, and excellent agreement is found with no adjustable parameters. The distribution function for a two-dimensional vibrated granular material that includes the first effect of binary collisions is determined for a system with dissipation due to inelastic collisions and a symmetric amplitude function for the velocity of the vibrating surface, in the limit δ_I = 2nr/(1 − e) ≪ 1, where n is the number of particles per unit width and r is the particle radius. In this limit, an asymptotic analysis is used about the limit where there are no binary collisions. It is found that the distribution function has a power-law divergence proportional to |u_x|^(cδ_I − 1) in the limit u_x → 0, where u_x is the horizontal velocity. The constant c and the moments of the distribution function are evaluated from the conservation equation in velocity space. It is found that the mean square velocity in the horizontal direction scales as O(δ_I T), and the nontrivial third moments of the velocity distribution scale as O(δ_I ε_I T^(3/2)), where ε_I = (1 − e)^(1/2) and T = 2⟨U²⟩ₛ/(1 − e) is the mean square velocity of the particles.

Relevance: 90.00%

Abstract:

The paper outlines a technique for sensitive measurement of conduction phenomena in liquid dielectrics. The special features of this technique are the simplicity of the electrical system, the inexpensive instrumentation and the high accuracy. Detection, separation and analysis of a random fluctuating component of current that is superimposed on the prebreakdown direct current form the basis of this investigation. Here, the prebreakdown direct current is the output of a test cell with large electrodes immersed in a liquid medium subjected to high direct voltages. Measurement of the probability distribution function of the random fluctuating component of the current provides a method that gives insight into the mechanism of conduction in a liquid medium subjected to high voltages, and into the processes that are responsible for the existence of the fluctuating component of the current.

Relevance: 90.00%

Abstract:

We report a universal large deviation behavior of the spatially averaged global injected power just before the rejuvenation of the jammed state formed by an aging suspension of laponite clay under an applied stress. The probability distribution function (PDF) of these entropy-consuming, strongly non-Gaussian fluctuations follows a universal large deviation functional form described by the generalized Gumbel (GG) distribution, like many other equilibrium and nonequilibrium systems with a high degree of correlation, but does not obey the Gallavotti-Cohen steady-state fluctuation relation (SSFR). However, far from the unjamming transition (for smaller applied stresses) the SSFR is satisfied for both Gaussian and non-Gaussian PDFs. The observed slow variation of the mean shear rate with system size supports a recent theoretical prediction for observing the GG distribution.
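For readers unfamiliar with the generalized Gumbel form, one common parameterisation (chosen here for illustration; the paper's fitted location/scale convention may differ) has a single shape parameter a, with a = 1 recovering the ordinary Gumbel distribution:

```python
import numpy as np
from math import gamma

# Generalized Gumbel density in a standard one-parameter form:
#     g_a(x) = a**a / Gamma(a) * exp(-a * (x + exp(-x)))
# The substitution u = exp(-x) shows it integrates to exactly 1.
def gg_pdf(x, a):
    return a ** a / gamma(a) * np.exp(-a * (x + np.exp(-x)))

x = np.linspace(-10.0, 40.0, 200_001)
y = gg_pdf(x, a=np.pi / 2)                               # illustrative a
norm = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))  # trapezoid rule
```

The asymmetric exponential-of-exponential left tail is what distinguishes these fits from a Gaussian in injected-power PDFs of strongly correlated systems.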

Relevance: 90.00%

Abstract:

In this paper, we consider a distributed function computation setting, where there are m distributed but correlated sources X1,...,Xm and a receiver interested in computing an s-dimensional subspace generated by [X1,...,Xm]Γ for some (m × s) matrix Γ of rank s. We construct a scheme based on nested linear codes and characterize the achievable rates obtained using the scheme. The proposed nested-linear-code approach performs at least as well as the Slepian-Wolf scheme in terms of sum-rate performance for all subspaces and source distributions. In addition, for a large class of distributions and subspaces, the scheme improves upon the Slepian-Wolf approach. The nested-linear-code scheme may be viewed as uniting under a common framework both the Korner-Marton approach of using a common linear encoder and the Slepian-Wolf approach of employing different encoders at each source. Along the way, we prove an interesting and fundamental structural result on the nature of subspaces of an m-dimensional vector space V with respect to a normalized measure of entropy. Here, each element in V corresponds to a distinct linear combination of a set {X_i}_{i=1}^{m} of m random variables whose joint probability distribution function is given.
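The Korner-Marton idea of a common linear encoder can be illustrated with a toy binary example (our construction, not the paper's): to compute Z = X1 ⊕ X2 when Z is known to be sparse (the sources are highly correlated), both encoders apply the same linear map, here the parity-check matrix H of the (7,4) Hamming code, and the receiver decodes Z from H·Z alone:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j is the number
# j+1 in binary, so a weight-1 pattern is named directly by its syndrome.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

def encode(x):
    return (H @ x) % 2            # each source sends 3 bits instead of 7

def decode_sum(s1, s2):
    # By linearity, s1 xor s2 = H @ (x1 xor x2) = H @ z; if z has weight
    # at most one, the syndrome identifies the differing position.
    s = s1 ^ s2
    z = np.zeros(7, dtype=np.uint8)
    idx = 4 * int(s[0]) + 2 * int(s[1]) + int(s[2])
    if idx:
        z[idx - 1] = 1
    return z

rng = np.random.default_rng(7)
x1 = rng.integers(0, 2, 7).astype(np.uint8)
z_true = np.zeros(7, dtype=np.uint8)
z_true[rng.integers(0, 7)] = 1    # high correlation: one differing bit
x2 = x1 ^ z_true
z_hat = decode_sum(encode(x1), encode(x2))   # receiver never sees x1, x2
```

The receiver recovers the XOR at a sum rate of 6 bits rather than the roughly 7 + 3 bits a Slepian-Wolf scheme would need to reconstruct both sources; the nested-linear-code scheme of the paper interpolates between these two extremes.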

Relevance: 90.00%

Abstract:

In this paper, we consider inference for the component and system lifetime distributions of a k-unit parallel system with independent components, based on system data. The components are assumed to have identical Weibull distributions. We obtain the maximum likelihood estimates of the unknown parameters based on system data. The Fisher information matrix is derived. We propose a β-expectation tolerance interval and a β-content, γ-level tolerance interval for the life distribution of the system. The performance of the estimators and tolerance intervals is investigated via a simulation study. A simulated dataset is analyzed for illustration.
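System-level maximum likelihood estimation in this setting can be sketched as follows. The system survives until its last component fails, so the system lifetime is the maximum of the k component lifetimes, giving F_sys(t) = F(t)^k and f_sys(t) = k F(t)^(k−1) f(t) for a component CDF F and pdf f. The parameter values, sample size and optimizer below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

k = 3                                   # number of parallel units (assumed)
rng = np.random.default_rng(3)
# Simulated system lifetimes: max of k i.i.d. Weibull(shape 2, scale 1).
data = rng.weibull(2.0, size=(3000, k)).max(axis=1)

def neg_loglik(theta):
    b, a = np.exp(theta)                # log-parameterised: keeps b, a > 0
    z = (data / a) ** b
    F = 1.0 - np.exp(-z)                                      # component CDF
    logf = np.log(b / a) + (b - 1.0) * np.log(data / a) - z   # component log-pdf
    # log f_sys = log k + (k-1) log F + log f
    return -np.sum(np.log(k) + (k - 1.0) * np.log(F) + logf)

res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
b_hat, a_hat = np.exp(res.x)            # should land near (2, 1)
```

The same log-likelihood is what one would differentiate twice to obtain the Fisher information matrix mentioned above.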

Relevance: 90.00%

Abstract:

To calculate static response properties of a many-body system, the local density approximation (LDA) can be safely applied. However, to obtain dynamical response functions, the applicability of the LDA is limited, because the dynamics of the system needs to be considered as well. To examine this in the context of cold atoms, we consider a system of non-interacting spin-1/2 fermions confined by a harmonic trapping potential. We have calculated a very important response function, the spectral intensity distribution function (SIDF), both exactly and using the LDA at zero temperature, and compared the two for different dimensions, trap frequencies and momenta. The behaviour of the SIDF at a particular momentum can be explained by noting the behaviour of the density of states (DoS) of the free system (without trap) in that particular dimension. The agreement between the exact and LDA SIDFs improves with increasing dimension and number of particles.
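The improvement of the LDA with particle number can be illustrated on a simpler static quantity than the SIDF: the density profile of N non-interacting fermions in a one-dimensional harmonic trap (our illustrative example, with ħ = m = ω = 1), comparing the exact sum over occupied orbitals with the Thomas-Fermi (LDA) profile:

```python
import numpy as np
from math import factorial, pi
from numpy.polynomial.hermite import hermval

N = 20                                   # number of fermions (illustrative)
x = np.linspace(-8.0, 8.0, 4001)

def ho_orbital_sq(n, x):
    # |psi_n(x)|**2 for the n-th harmonic-oscillator eigenstate
    c = np.zeros(n + 1); c[n] = 1.0
    norm = 1.0 / np.sqrt(2.0 ** n * factorial(n) * np.sqrt(pi))
    return (norm * hermval(x, c) * np.exp(-x ** 2 / 2.0)) ** 2

# Exact zero-temperature density: fill the N lowest orbitals.
density_exact = sum(ho_orbital_sq(n, x) for n in range(N))

# LDA / Thomas-Fermi: fill local momentum states up to the Fermi level mu = N.
mu = float(N)
density_lda = np.sqrt(np.clip(2.0 * mu - x ** 2, 0.0, None)) / pi

dx = x[1] - x[0]
n_exact = float(np.sum(density_exact) * dx)   # both integrate to ~N
n_lda = float(np.sum(density_lda) * dx)
```

The LDA profile misses the Friedel oscillations of the exact density but matches its envelope, and the relative deviation shrinks as N grows, mirroring the trend reported above for the SIDF.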