953 results for function estimation


Relevance: 20.00%

Publisher:

Abstract:

The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operational downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative to traditional reliability analysis is to model condition indicators and operating environment indicators, and their failure-generating mechanisms, using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed based on the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully incorporate three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) into a model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model.
This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; therefore they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few.
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators are caused by the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be nil in EHM, condition indicators are always present, because they are observed and measured as long as an asset is operational and has survived. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators and their association with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, due to sparse failure event data of assets, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of the semi-parametric EHM of a specified lifetime distribution for failure event histories, the non-parametric EHM, which is a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model.
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimated results with those of the other existing covariate-based hazard models. The comparison results demonstrate that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, regarding the new parameter estimation method in the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
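The structure described above can be illustrated with a small numerical sketch. The code below is an illustrative covariate-based hazard of the EHM flavour, not the thesis's exact formulation: a Weibull-type baseline hazard modulated by a condition indicator, multiplied by an exponential covariate function of operating environment indicators. All parameter values and the particular way the condition indicator enters the baseline are hypothetical.

```python
import math

def hazard(t, shape, scale, condition, env, gamma):
    """Illustrative hazard: Weibull-type baseline that depends on both time
    and a condition indicator, scaled by an exponential covariate function
    of operating-environment indicators (assumed multiplicative form)."""
    baseline = (shape / scale) * ((t / scale) ** (shape - 1)) * condition
    # environment indicators accelerate or decelerate failure
    return baseline * math.exp(sum(g * z for g, z in zip(gamma, env)))

def reliability(t, *args, steps=1000):
    """R(t) = exp(-integral of the hazard from 0 to t), midpoint rule."""
    dt = t / steps
    cumulative = sum(hazard((i + 0.5) * dt, *args) * dt for i in range(steps))
    return math.exp(-cumulative)
```

For example, with a constant condition indicator and a single environment covariate, raising the covariate raises the hazard, and the reliability stays in (0, 1) as expected.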


Advances in algorithms for approximate sampling from a multivariate target function have led to solutions of challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as other applications. For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation (ABC)) has emerged in the last few years, which avoids direct likelihood computation by repeatedly sampling data from the model and comparing observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required and are almost black-box, since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics for use within ABC via a goodness-of-fit statistic and indirect inference. Another important problem in statistics is the design of experiments, that is, how one should select the values of the controllable variables in order to achieve some design goal. The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments, and can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way.
If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information to make better decisions about future design points. This is of particular interest if the data can be collected sequentially. In a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design that accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches, allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore, the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty. Finally, Part III of this thesis tackles a substantive medical problem. A neurological disorder known as motor neuron disease (MND) progressively causes motor neurons to lose the ability to innervate muscle fibres, causing the muscles to eventually waste away. When this occurs, the motor unit effectively 'dies'. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis (ALS)) is particularly poor, with patients usually surviving only a small number of years after the initial onset of disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists. Motor unit number estimation (MUNE) is an attempt to directly assess underlying motor unit loss, rather than relying on indirect techniques such as muscle strength assessment, which is generally unable to detect progression due to the body's natural attempts at compensation.
Part III of this thesis builds upon a previous Bayesian technique that developed a sophisticated statistical model taking into account physiological information about motor unit activation and various sources of uncertainty. More specifically, we develop a more reliable MUNE method by applying marginalisation over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We make other subtle changes to the model and algorithm to improve the robustness of the approach.
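The likelihood-free idea from Part I can be sketched with the simplest member of the ABC family, rejection sampling; the thesis's SMC algorithms are far more efficient, but the compare-summaries-and-accept step is the same. The exponential model, uniform prior, sample-mean summary, and tolerance below are illustrative assumptions, not taken from the thesis.

```python
import random

def abc_rejection(observed_summary, prior_sample, simulate, summary,
                  tol, n_draws=5000, rng=random):
    """Minimal ABC rejection sampler: accept a prior draw whenever its
    simulated summary statistic lands within tol of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)           # draw parameter from the prior
        s = summary(simulate(theta, rng))   # simulate data, summarise it
        if abs(s - observed_summary) < tol: # no likelihood evaluation
            accepted.append(theta)
    return accepted

# usage: recover the rate of an exponential model (true rate 2.0)
random.seed(1)
data = [random.expovariate(2.0) for _ in range(200)]
obs = sum(data) / len(data)
post = abc_rejection(
    obs,
    prior_sample=lambda rng: rng.uniform(0.1, 10.0),
    simulate=lambda th, rng: [rng.expovariate(th) for _ in range(200)],
    summary=lambda xs: sum(xs) / len(xs),
    tol=0.05,
)
```

The accepted draws form an approximate posterior sample whose mass concentrates near the true rate; the quality of the approximation is governed by the tolerance and the informativeness of the summary statistic, which is exactly the issue the goodness-of-fit and indirect-inference work addresses.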


In this paper we investigate the distribution of the product of Rayleigh distributed random variables. Using the Mellin-Barnes inversion formula and a saddle-point approach, we obtain an upper bound for the product distribution. The accuracy of this tail approximation increases as the number of random variables in the product increases.
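A quick Monte Carlo check of such tail probabilities (not the paper's analytic bound, which is not reproduced here) can be sketched as follows, using inverse-CDF sampling for Rayleigh variables with unit scale:

```python
import math
import random

def rayleigh(rng, sigma=1.0):
    """Inverse-CDF sample: F(x) = 1 - exp(-x^2 / (2 sigma^2))."""
    return sigma * math.sqrt(-2.0 * math.log(1.0 - rng.random()))

def tail_probability(n_factors, threshold, n_samples=50000, seed=0):
    """Estimate P(X1 * ... * Xn > threshold) for independent Rayleigh Xi."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n_samples):
        prod = 1.0
        for _ in range(n_factors):
            prod *= rayleigh(rng)
        if prod > threshold:
            count += 1
    return count / n_samples
```

For a single factor the estimate can be checked against the closed form P(X > x) = exp(-x^2/2); for larger products, estimates like these are what an analytic tail bound would be validated against.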


In recent years, several models have been proposed for fault section estimation and state identification of unobserved protective relays (FSE-SIUPR) under conditions of incomplete state information from protective relays. In these models, the temporal alarm information from a faulted power system is not well explored, although it is very helpful in compensating for the incomplete state information of protective relays, quickly achieving definite fault diagnosis results, and evaluating the operating status of protective relays and circuit breakers in complicated fault scenarios. To solve this problem, an integrated optimization model for FSE-SIUPR, which takes full advantage of the temporal characteristics of alarm messages, is developed in the framework of the well-established temporal constraint network. With this model, the fault evolution procedure can be explained and some states of unobserved protective relays identified. The model is solved by means of Tabu search (TS) and verified against fault scenarios in a practical power system.
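The Tabu search component can be sketched generically as a local search over binary fault hypotheses (which sections are faulted) with a short-term memory list that forbids recently flipped bits. The cost function here is a placeholder standing in for the paper's temporal-constraint-network consistency measure.

```python
import random

def tabu_search(cost, n_bits, n_iter=200, tabu_len=3, seed=0):
    """Minimise cost over binary vectors by best single-bit flips,
    keeping a short tabu list of recently flipped positions."""
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_cost = current[:], cost(current)
    tabu = []
    for _ in range(n_iter):
        # best admissible (non-tabu) single-bit-flip neighbour
        moves = [i for i in range(n_bits) if i not in tabu]
        i = min(moves, key=lambda j: cost(current[:j] + [1 - current[j]] + current[j + 1:]))
        current[i] = 1 - current[i]
        tabu.append(i)
        if len(tabu) > tabu_len:
            tabu.pop(0)
        c = cost(current)
        if c < best_cost:
            best, best_cost = current[:], c
    return best, best_cost
```

With a toy cost (distance to a known fault hypothesis), the search recovers the optimum even though each step only examines single-bit neighbours.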


This paper introduces a high-speed (100 Hz) vision-based state estimator suitable for quadrotor control in close-quarters manoeuvring applications. We describe the hardware and algorithms for estimating the state of the quadrotor. Experimental results for position, velocity, and yaw angle estimators are presented and compared with motion capture data. A quantitative performance comparison with state-of-the-art systems is also presented.


Population-representative data for dioxin and PCB congener concentrations are available for the Australian population based on measurements in age- and gender-specific serum pools [1]. Such data provide a basis for characterizing the mean concentrations of these compounds in the population, but do not provide information on the inter-individual variation in serum concentrations that may exist in the population within an age- and gender-specific group. Such variation may occur due to inter-individual differences in long-term exposure levels or elimination rates. Reference values are estimates of upper percentiles (often the 95th percentile) of measured values in a defined population that can be used to evaluate data from individuals in the population in order to identify concentrations that are elevated, for example, from occupational exposures [2]. The objective of this analysis is to estimate reference values corresponding to the 95th percentile (RV95s) for Australia on an age-specific basis for individual dioxin-like congeners, based on measurements in serum pools from Toms and Mueller (2010).
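When individual measurements are available, estimating an RV95 reduces to computing an empirical 95th percentile. A minimal sketch is shown below; note that the analysis described above instead infers percentiles from pooled-serum data, a harder problem that this snippet does not attempt.

```python
def reference_value_95(values):
    """Empirical 95th percentile with linear interpolation between
    order statistics (one common percentile convention)."""
    xs = sorted(values)
    k = 0.95 * (len(xs) - 1)   # fractional rank of the 95th percentile
    lo = int(k)
    frac = k - lo
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + frac * (xs[hi] - xs[lo])
```

Values from individuals above the RV95 for their age group would then be flagged as elevated relative to the reference population.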


Cold water immersion (CWI) is a popular recovery modality, but the actual physiological responses to CWI after exercise in the heat have not been well documented. The purpose of this study was to examine the effects of 20 min of CWI (14 °C) on neuromuscular function, rectal (Tre) and skin (Tsk) temperatures, and femoral venous diameter after exercise in the heat. Ten well-trained male cyclists completed two bouts of exercise consisting of 90 min of cycling at a constant power output (216 ± 12 W) followed by a 16.1 km time trial (TT) in the heat (32 °C). Twenty-five minutes post-TT, participants were assigned to either CWI or control (CON) recovery conditions in a counterbalanced order. Tre and Tsk were recorded continuously, and maximal voluntary isometric contraction torque of the knee extensors (MVIC), MVIC with superimposed electrical stimulation (SMVIC), and femoral venous diameters were measured prior to exercise and 0, 45, and 90 min post-TT. Tre was significantly lower in CWI beginning 50 min post-TT compared with CON, and Tsk was significantly lower in CWI beginning 25 min post-TT compared with CON. Decreases in MVIC and SMVIC torque after the TT were significantly greater for CWI compared with CON; differences persisted 90 min post-TT. Femoral vein diameter was approximately 9% smaller for CWI compared with CON at 45 min post-TT. These results suggest that CWI decreases Tre but has a negative effect on neuromuscular function.


Smartphones have become a critical part of our lives, as they offer advanced capabilities with PC-like functionalities. They are widely deployed and are no longer used only for classical voice-centric communication. New smartphone malware keeps emerging, and most of it still targets Symbian OS. In the case of Symbian OS, application signing seemed to be an appropriate measure for slowing down the appearance of malware. Unfortunately, recent examples have shown that signing can be bypassed, resulting in new malware outbreaks. In this paper, we present a novel approach to static malware detection in resource-limited mobile environments. This approach can be used to extend currently used third-party application signing mechanisms to increase malware detection capabilities. In our work, we extract function calls from binaries in order to apply our clustering mechanism, called centroid. This method is capable of detecting unknown malware. Our results are promising: the employed mechanism might find application at distribution channels, such as online application stores. Additionally, it seems suitable for use directly on smartphones for (pre-)checking installed applications.
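A minimal sketch of the centroid idea on extracted function calls might look as follows. The call names are placeholders, not real Symbian APIs, and the real system operates on features extracted from binaries rather than hand-written lists; this only illustrates the vector/centroid/distance mechanics.

```python
import math
from collections import Counter

def to_vector(calls):
    """Represent a binary as a normalised frequency vector of its calls."""
    c = Counter(calls)
    total = sum(c.values())
    return {k: v / total for k, v in c.items()}

def centroid(vectors):
    """Average a set of call-frequency vectors into one centroid."""
    keys = set().union(*vectors)
    return {k: sum(v.get(k, 0.0) for v in vectors) / len(vectors) for k in keys}

def distance(a, b):
    """Euclidean distance between two sparse frequency vectors."""
    keys = set(a) | set(b)
    return math.sqrt(sum((a.get(k, 0.0) - b.get(k, 0.0)) ** 2 for k in keys))
```

An unknown sample can then be flagged when its vector lies closer to a malware-family centroid than benign samples typically do, which is what enables detection of previously unseen variants.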


Purpose: In this study we examine neuroretinal function in five amblyopes, who had been shown in previous functional MRI (fMRI) studies to have compromised function of the lateral geniculate nucleus (LGN), to determine whether the fMRI deficit in amblyopia may have its origin at the retinal level. Methods: We used slow flash multifocal ERG (mfERG) and compared averaged five-ring responses of the amblyopic and fellow eyes across a 35 deg field. Central responses were also assessed over a field of about 6.3 deg in diameter. We measured central retinal thickness using optical coherence tomography. Central fields were measured using the MP-1 microperimeter, which also assesses ocular fixation during perimetry. MfERG data were compared with fMRI results from a previous study. Results: Amblyopic eyes had reduced response density amplitudes (first major negative to first positive (N1-P1) responses) for the central and paracentral retina (up to 18 deg diameter) but not for the mid-periphery (from 18 to 35 deg). Retinal thickness was within normal limits for all eyes, and not different between amblyopic and fellow eyes. Fixation was maintained within the central 4 deg more than 80% of the time by four of the five participants; fixation assessed using bivariate contour ellipse areas (BCEA) gave rankings similar to those of the MP-1 system. There was no significant relationship between BCEA and mfERG response for either amblyopic or fellow eye. There was no significant relationship between the central mfERG eye response difference and the selective blood oxygen level dependent (BOLD) LGN eye response difference previously seen in these participants. Conclusions: Retinal responses in amblyopes can be reduced within the central field without an obvious anatomical basis. Additionally, this retinal deficit may not be the reason why the LGN BOLD responses are reduced for amblyopic eye stimulation.


Purpose: To determine whether neuroretinal function differs in healthy persons with and without common risk gene variants for age-related macular degeneration (AMD) and no ophthalmoscopic signs of AMD, and to compare those findings with persons with manifest early AMD. Methods and Participants: Neuroretinal function was assessed with the multifocal electroretinogram (mfERG) (VERIS, Redwood City, CA) in 32 participants (22 healthy persons with no clinical signs of AMD and 10 early AMD patients). The 22 healthy participants with no AMD carried risk genotypes for CFH (rs380390) and/or ARMS2 (rs10490920). We used a slow flash mfERG paradigm (3 inserted frames) and a 103-hexagon stimulus array. Recordings were made with DTL electrodes; fixation and eye movements were monitored online. Trough N1 to peak P1 (N1P1) response densities and P1 implicit times (ITs) were analysed in 5 concentric rings. Results: N1P1 response densities (mean ± SD) for concentric rings 1-3 were on average significantly higher in at-risk genotypes (ring 1: 17.97 nV/deg2 ± 1.9; ring 2: 11.7 nV/deg2 ± 1.3; ring 3: 8.7 nV/deg2 ± 0.7) compared with those without risk (ring 1: 13.7 nV/deg2 ± 1.9; ring 2: 9.2 nV/deg2 ± 0.8; ring 3: 7.3 nV/deg2 ± 1.1) and compared with persons with early AMD (ring 1: 15.3 nV/deg2 ± 4.8; ring 2: 9.1 nV/deg2 ± 2.3; ring 3: 7.3 nV/deg2 ± 1.3) (p<0.5). Group implicit times (P1-ITs) for ring 1 were on average delayed in the early AMD patients (36.4 ms ± 1.0) compared with healthy participants with (35.1 ms ± 1.1) or without risk genotypes (34.8 ms ± 1.3), although these differences were not significant. Conclusion: Neuroretinal function in persons with normal fundi can be differentiated into subgroups based on their genetics. Increased neuroretinal activity in persons who carry AMD risk genotypes may be due to genetically determined subclinical inflammatory and/or histological changes in the retina. Assessment of neuroretinal function in healthy persons genetically susceptible to AMD may be a useful early biomarker before there is clinical manifestation of AMD.


A routine activity for a sports dietitian is to estimate energy and nutrient intake from an athlete's self-reported food intake. Decisions made by the dietitian when coding a food record are a source of variability in the data. The aim of the present study was to determine the variability in estimates of the daily energy and key nutrient intakes of elite athletes when experienced coders analyzed the same food record using the same database and software package. Seven-day food records from a dietary survey of athletes in the 1996 Australian Olympic team were randomly selected to provide 13 sets of records, each set representing the self-reported food intake of an endurance, team, weight-restricted, and sprint/power athlete. Each set was coded by 3-5 members of Sports Dietitians Australia, making a total of 52 athletes, 53 dietitians, and 1456 athlete-days of data. We estimated within- and between-athlete and dietitian variances for each dietary nutrient using mixed modeling, and we combined the variances to express variability as a coefficient of variation (typical variation as a percent of the mean). Variability in the mean of 7-day estimates of a nutrient was 2- to 3-fold less than that of a single day. The variability contributed by the coder was less than the true athlete variability for a 1-day record but was of similar magnitude for a 7-day record. The most variable nutrients (e.g., vitamin C, vitamin A, cholesterol) had approximately 3-fold more variability than the least variable nutrients (e.g., energy, carbohydrate, magnesium). These athlete and coder variabilities need to be taken into account in dietary assessments of athletes for counseling and research.
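The way such variance components combine can be sketched with illustrative numbers (the CVs below are hypothetical, not the study's estimates): the day-to-day within-athlete component shrinks with the number of days averaged, while a coder component that applies to the whole record does not.

```python
import math

def total_cv(cv_between_athlete, cv_within_day, cv_coder, n_days):
    """Combine independent variance components expressed as CVs (%).
    Averaging over n_days reduces only the day-to-day component."""
    within = cv_within_day ** 2 / n_days  # variance of a mean over n_days
    return math.sqrt(cv_between_athlete ** 2 + within + cv_coder ** 2)
```

With hypothetical CVs of 20% between athletes, 30% day-to-day, and 10% for the coder, a 7-day mean is markedly less variable than a single day, and the fixed coder component becomes relatively more important, mirroring the pattern reported above.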


In practical active noise control (ANC) applications, the secondary path usually exhibits time-varying behavior. In such cases, an online secondary path modeling method that uses white noise as a training signal is required to ensure convergence of the system. Modeling accuracy and convergence rate increase when white noise with a larger variance is used. However, the larger variance also increases the residual noise, which degrades system performance and additionally causes instability problems in feedback structures. A sudden change in the secondary path can lead to divergence of the online secondary path modeling filter. To overcome these problems, this paper proposes a new approach for online secondary path modeling in feedback ANC systems. The proposed algorithm exploits the advantages of white noise with a larger variance to model the secondary path, but stops the injection at the optimum point to increase performance and to prevent the destabilising effect of the white noise. In this approach, instead of continuous injection of the white noise, a sudden change in the secondary path during operation causes the algorithm to reactivate injection of the white noise and correct the secondary path estimate. In addition, the proposed method models the secondary path without the need for off-line estimation. These features increase the convergence rate and modeling accuracy, resulting in high system performance. Computer simulation results presented in this paper indicate the effectiveness of the proposed method.
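The core ingredient, identifying the secondary path online from a white-noise training signal, can be sketched with a plain LMS identifier; the paper's stop/reactivate injection logic and the feedback ANC structure are not reproduced here, and the example path coefficients are arbitrary.

```python
import random

def lms_identify(path, n_taps, n_iter=20000, mu=0.01, seed=0):
    """Identify an unknown FIR 'secondary path' by LMS adaptation,
    driven by a unit-variance white Gaussian training signal."""
    rng = random.Random(seed)
    w = [0.0] * n_taps  # adaptive model of the secondary path
    x = [0.0] * n_taps  # delay line holding recent white-noise samples
    for _ in range(n_iter):
        x.insert(0, rng.gauss(0.0, 1.0))
        x.pop()
        d = sum(p * xi for p, xi in zip(path, x))  # true path output
        y = sum(wi * xi for wi, xi in zip(w, x))   # model output
        e = d - y                                  # modeling error
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return w
```

A larger training-signal variance (or step size) speeds this convergence but, in a real ANC loop, also raises the audible residual noise, which is exactly the trade-off the proposed stop/reactivate strategy manages.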


A novel multiple regression method (RM) is developed to predict identity-by-descent probabilities at a locus L (IBDL) among individuals without pedigree, given information on surrounding markers and population history. These IBDL probabilities are a function of the increase in linkage disequilibrium (LD) generated by drift in a homogeneous population over generations. Three parameters are sufficient to describe population history: effective population size (Ne), number of generations since foundation (T), and marker allele frequencies among founders (p). The IBDL probabilities are used in a simulation study to map a quantitative trait locus (QTL) via variance component estimation. RM is compared to a coalescent method (CM) in terms of power and robustness of QTL detection. Differences between RM and CM are small but significant. For example, RM is more powerful than CM in dioecious populations, but not in monoecious populations. Moreover, RM is more robust than CM when marker phases are unknown, when there is complete LD among founders, or when Ne is mis-specified, and less robust when p is mis-specified. CM utilises all marker haplotype information, whereas RM utilises the information contained in each individual marker and all possible marker pairs, but not in higher-order interactions. RM consists of a family of models encompassing four different population structures and two ways of using marker information, which contrasts with the single model that must cater for all possible evolutionary scenarios in CM.
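The link between drift-generated LD and population history that RM exploits is conveniently summarised by Sved's (1971) classic approximation, shown below as a one-line illustration of why Ne is a required input; this is background theory, not the RM prediction equations themselves.

```python
def expected_r2(ne, c):
    """Sved's (1971) approximation for equilibrium expected LD (r^2)
    between two loci at recombination fraction c in a population of
    effective size ne: E[r^2] ~ 1 / (1 + 4*Ne*c)."""
    return 1.0 / (1.0 + 4.0 * ne * c)
```

Tightly linked markers (small c) or small populations retain more LD, which is what makes nearby marker data informative about IBD at the locus in between.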


Bus travel time estimation and prediction are two important modelling approaches that could assist transit users in using, and transit providers in managing, the public transport network. Bus travel time estimation could assist transit operators in understanding and improving the reliability of their systems and attracting more public transport users. On the other hand, bus travel time prediction is an important component of a traveller information system that could reduce anxiety and stress for travellers. This paper provides an insight into the characteristics of buses in traffic and the factors that influence bus travel time. A critical overview of the state of the art in bus travel time estimation and prediction is provided, and the needs for research in this important area are highlighted. The possibility of using Vehicle Identification Data (VID) for studying the relationship between bus and car travel times is also explored.


We applied small-angle neutron scattering (SANS) and ultra-small-angle neutron scattering (USANS) to monitor the evolution of CO2 adsorption in porous silica as a function of CO2 pressure and temperature in pores of different sizes. The range of pressures (0 < P < 345 bar) and temperatures (T = 18 °C, 35 °C and 60 °C) corresponded to subcritical, near-critical and supercritical conditions of the bulk fluid. We observed that the adsorption behavior of CO2 is fundamentally different in large and small pores, with sizes D > 100 Å and D < 30 Å, respectively. Scattering data from large pores indicate formation of a dense adsorbed film of CO2 on the pore walls with a liquid-like density (ρCO2)ads ≈ 0.8 g/cm3. The adsorbed film coexists with unadsorbed fluid in the inner pore volume. The density of the unadsorbed fluid in large pores is temperature and pressure dependent: it is initially lower than (ρCO2)ads and gradually approaches it with pressure. In small pores, compressed CO2 gas completely fills the pore volume. At the lowest pressures, of the order of 10 bar, and T = 18 °C, the fluid density in the smallest pores available in the matrix, with D ~ 10 Å, exceeds the bulk fluid density by a factor of ~ 8. As pressure increases, progressively larger pores become filled with the condensed CO2. Fluid densification is only observed in pores with sizes less than ~ 25-30 Å. As the density of the invading fluid reaches (ρCO2)bulk ~ 0.8 g/cm3, pores of all sizes become uniformly filled with CO2 and the confinement effects disappear. At higher densities the fluid in small pores appears to follow the equation of state of bulk CO2, although there is an indication that the fluid density in the inner volume of large pores may exceed the density of the adsorbed layer. The equivalent internal pressure (Pint) in the smallest pores exceeds the external pressure (Pext) by a factor of ~ 5 for both sub- and supercritical CO2. Pint gradually approaches Pext as D → 25-30 Å and is independent of temperature in the studied range of 18 °C ≤ T ≤ 60 °C. The obtained results demonstrate certain similarities as well as differences between adsorption of subcritical and supercritical CO2 in disordered porous silica. High-pressure small-angle scattering experiments open new opportunities for in situ studies of fluid adsorption in porous media of interest for CO2 sequestration, energy storage, and heterogeneous catalysis.