226 results for "Load rejection test data"
Abstract:
The ratcheting behaviour of high-strength rail steel (Australian Standard AS1085.1) is studied in this work for the purpose of predicting wear and damage to the rail surface. Historically, researchers have used circular test coupons obtained from the rail head to conduct cyclic load tests, but hardness profile data show considerable variation across the rail head section. For example, induction-hardened rail (AS1085.1) shows high hardness (400-430 HV100) to a depth of about 4 mm below the rail head surface, dropping considerably beyond that. Given that cyclic test coupons with a gauge diameter of 5 mm are usually taken from the rail sample, there is a high probability that the original surface properties of the rail do not apply across the entire test coupon, so the data obtained represent only average material properties. In the literature, disks (47 mm in diameter) for a twin-disk rolling contact test machine have been cut directly from rail samples and used to validate rolling contact fatigue wear models. The question arises: how accurate are such predictions? In this paper, the effect of rail sampling position on the ratcheting behaviour of AS1085.1 rail steel was investigated using rectangular specimens. Uniaxial stress-controlled tests were conducted on samples obtained at four different depths to observe the ratcheting behaviour of each. Micro-hardness measurements of the test coupons were carried out to obtain a constitutive relationship predicting the effect of depth on the ratcheting behaviour of the rail material. This work ultimately assists the selection of valid material parameters for constitutive models in the study of rail surface ratcheting.
Abstract:
Rolling-element bearing failures are among the most frequent problems in rotating machinery; they can be catastrophic and cause major downtime. Hence, providing advance failure warning and precise fault detection in such components is pivotal and cost-effective. The vast majority of past research has focused on signal processing and spectral analysis for fault diagnostics in rotating components. In this study, a data mining approach using a machine learning technique called anomaly detection (AD) is presented. This method employs classification techniques to discriminate defect examples from normal ones. Two features, kurtosis and the Non-Gaussianity Score (NGS), are extracted to develop the anomaly detection algorithms. The performance of the developed algorithms was examined on real data from a bearing run-to-failure test. Finally, anomaly detection is compared with a popular method, the Support Vector Machine (SVM), to investigate the sensitivity and accuracy of the approach and its ability to detect anomalies at early stages.
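To make the feature-extraction step concrete, the sketch below computes per-window kurtosis and a non-Gaussianity score from a vibration signal. The abstract does not define NGS, so a Kolmogorov-Smirnov distance from a fitted normal is used here as an assumed proxy; all names and window sizes are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy import stats

def extract_features(signal: np.ndarray, window: int = 2048) -> np.ndarray:
    """Per-window (kurtosis, NGS-proxy) features from a 1-D vibration signal."""
    feats = []
    for start in range(0, len(signal) - window + 1, window):
        w = signal[start:start + window]
        kurt = stats.kurtosis(w, fisher=False)      # 4th-moment peakedness; 3 for a Gaussian
        z = (w - w.mean()) / w.std(ddof=1)
        ngs = stats.kstest(z, "norm").statistic     # assumed NGS proxy: KS distance from N(0,1)
        feats.append((kurt, ngs))
    return np.array(feats)

# Healthy bearings give kurtosis near 3; impulsive defects raise both features.
rng = np.random.default_rng(0)
print(extract_features(rng.normal(size=8192)).mean(axis=0))
```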
Abstract:
The MFG test is a family-based association test that detects genetic effects contributing to disease in offspring, including offspring allelic effects, maternal allelic effects, and MFG incompatibility effects. Like many other family-based association tests, it assumes that offspring survival and offspring-parent genotypes are conditionally independent given that the offspring is affected. However, when the putative disease-increasing locus can affect another competing phenotype, for example offspring viability, the conditional independence assumption fails and these tests can lead to incorrect conclusions regarding the role of the gene in disease. We propose the v-MFG test to adjust for the genetic effects on one phenotype, e.g., viability, when testing the effects of that locus on another phenotype, e.g., disease. Using genotype data from nuclear families containing parents and at least one affected offspring, the v-MFG test models the distribution of family genotypes conditional on offspring phenotypes. It simultaneously estimates genetic effects on two phenotypes, viability and disease. Simulations show that the v-MFG test produces accurate estimates of genetic effects on disease as well as on viability under several different scenarios. It generates accurate type I error rates and provides adequate power with moderate sample sizes to detect genetic effects on disease risk when viability is reduced. We demonstrate the v-MFG test with HLA-DRB1 data from study participants with rheumatoid arthritis (RA) and their parents, showing that it successfully detects an MFG incompatibility effect on RA while simultaneously adjusting for a possible viability loss.
Abstract:
This article describes a maximum likelihood method for estimating the parameters of the standard square-root stochastic volatility model and a variant of the model that includes jumps in equity prices. The model is fitted to data on the S&P 500 Index and the prices of vanilla options written on the index, for the period 1990 to 2011. The method is able to estimate both the parameters of the physical measure (associated with the index) and the parameters of the risk-neutral measure (associated with the options), including the volatility and jump risk premia. The estimation is implemented using a particle filter whose efficacy is demonstrated under simulation. The computational load of this estimation method, which has previously been prohibitive, is managed by the effective use of parallel computing on graphics processing units (GPUs). The empirical results indicate that the parameters of the models are reliably estimated and consistent with values reported in previous work. In particular, both the volatility risk premium and the jump risk premium are found to be significant.
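For readers unfamiliar with the machinery, the following is a minimal bootstrap particle filter evaluating the likelihood of an Euler-discretized square-root volatility model. It is an illustrative sketch under assumed dynamics and parameter names; it omits jumps, option prices, risk premia, and the authors' GPU parallelization.

```python
import numpy as np

def pf_loglik(y, mu, kappa, theta, xi, n_particles=5000, dt=1.0/252, seed=0):
    """Bootstrap particle filter log-likelihood for daily log-returns y under
    v_t = v_{t-1} + kappa*(theta - v_{t-1})*dt + xi*sqrt(v_{t-1}*dt)*eps."""
    rng = np.random.default_rng(seed)
    v = np.full(n_particles, theta)                 # start at the long-run variance
    loglik = 0.0
    for yt in y:
        # weight each particle by the return density N(mu*dt, v*dt)
        w = np.exp(-0.5 * (yt - mu * dt) ** 2 / (v * dt)) / np.sqrt(2 * np.pi * v * dt)
        loglik += np.log(w.mean() + 1e-300)
        v = v[rng.choice(n_particles, n_particles, p=w / w.sum())]   # resample
        # propagate variance, reflecting at zero to keep particles positive
        shock = xi * np.sqrt(v * dt) * rng.standard_normal(n_particles)
        v = np.maximum(np.abs(v + kappa * (theta - v) * dt + shock), 1e-12)
    return loglik

# Evaluate at trial parameters on simulated daily returns (illustrative values).
rng = np.random.default_rng(1)
print(pf_loglik(rng.normal(0, 0.01, size=250), mu=0.05, kappa=3.0,
                theta=0.01 ** 2 * 252, xi=0.2))
```

Maximum likelihood estimation then amounts to maximizing this (noisy) log-likelihood over the parameters, which is the computationally heavy step the paper offloads to GPUs.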
Abstract:
Selection criteria and misspecification tests for the intra-cluster correlation structure (ICS) in longitudinal data analysis are considered. In particular, the asymptotic distribution of the correlation information criterion (CIC) is derived, and a new method for selecting a working ICS is proposed by standardizing the selection criterion as a p-value. The CIC test is found to be powerful in detecting misspecification of the working ICS, and the standardized CIC test also shows satisfactory performance for working ICS selection. Simulation studies and applications to two real longitudinal datasets illustrate how these criteria and tests can be used.
Abstract:
We consider the development of statistical models for predicting constituent concentrations of riverine pollutants, a key step in load estimation from frequently measured flow rate data and less frequently collected concentration data. We capture the impact of past flow patterns via the average discounted flow (ADF), which discounts past flux based on the time elapsed: more recent fluxes are given more weight. However, the effectiveness of ADF depends critically on the choice of the discount factor, which reflects the unknown environmental accumulation process of the concentration compounds. We propose to choose the discount factor by maximizing the adjusted R^2 value or the Nash-Sutcliffe model efficiency coefficient, where the R^2 values are adjusted to take account of the number of parameters in the model fit. The resulting optimal discount factor can be interpreted as a measure of the constituent exhaustion rate during flood events. To evaluate the performance of the proposed regression estimators, we examine two different sampling scenarios by resampling fortnightly and opportunistically from two real daily datasets, collected at two United States Geological Survey (USGS) gaging stations located in the Des Plaines River and Illinois River basins. The generalized rating-curve approach produces biased estimates of the total sediment loads, ranging from -30% to 83%, whereas the new approaches produce much lower biases, ranging from -24% to 35%. This substantial improvement in the total load estimates arises because the additional predictors greatly improve the predictability of concentration.
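A minimal sketch of the discounting idea follows, assuming ADF takes the common exponentially weighted recursive form and that the discount factor is chosen by maximizing the adjusted R^2 of a linear concentration regression; the paper's exact definitions, variable names, and model form may differ.

```python
import numpy as np

def adf(flow: np.ndarray, delta: float) -> np.ndarray:
    """Exponentially discounted average of past daily flows (assumed ADF form)."""
    out = np.empty_like(flow, dtype=float)
    out[0] = flow[0]
    for t in range(1, len(flow)):
        out[t] = delta * out[t - 1] + (1 - delta) * flow[t]   # recent flux weighted more
    return out

def best_delta(flow, conc, deltas=np.linspace(0.01, 0.99, 99)):
    """Grid search of the discount factor by adjusted R^2 of conc ~ flow + ADF."""
    best, n = (None, -np.inf), len(conc)
    for d in deltas:
        X = np.column_stack([np.ones(n), flow, adf(flow, d)])  # intercept, flow, ADF
        beta, *_ = np.linalg.lstsq(X, conc, rcond=None)
        ss_res = ((conc - X @ beta) ** 2).sum()
        ss_tot = ((conc - conc.mean()) ** 2).sum()
        r2_adj = 1 - (ss_res / (n - X.shape[1])) / (ss_tot / (n - 1))
        if r2_adj > best[1]:
            best = (d, r2_adj)
    return best

# Illustrative data: daily flows with aligned (here, synthetic) concentrations.
rng = np.random.default_rng(3)
flow = np.exp(rng.normal(size=365))
conc = 2 + 0.5 * flow + rng.normal(size=365)
print(best_delta(flow, conc))
```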
Abstract:
We consider rank regression for clustered data analysis and investigate the induced smoothing method for obtaining the asymptotic covariance matrices of the parameter estimators. We prove that the induced estimating functions are asymptotically unbiased and that the resulting estimators are strongly consistent and asymptotically normal. The induced smoothing approach provides an effective way of obtaining asymptotic covariance matrices for the between- and within-cluster estimators and for a combined estimator that takes account of within-cluster correlations. We also carry out extensive simulation studies to assess the performance of the different estimators. The proposed methodology is substantially faster in computation and more stable in numerical results than existing methods. We apply the proposed methodology to a dataset from a randomized clinical trial.
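The core of induced smoothing can be illustrated with a Gehan-type rank estimating function in which the non-smooth indicator is replaced by a normal CDF, so that standard root-finders and plug-in covariance formulas apply. This sketch is an assumed reconstruction for independent data only; the clustering, between-/within-cluster estimators, and exact smoothing bandwidth of the paper are omitted.

```python
import numpy as np
from scipy import stats, optimize

def smoothed_gehan(beta, X, y):
    """Smoothed Gehan-type estimating function; its root is the rank estimator."""
    n = X.shape[0]
    e = y - X @ beta
    de = e[:, None] - e[None, :]                 # pairwise residual gaps e_i - e_j
    dX = X[:, None, :] - X[None, :, :]           # pairwise covariate gaps x_i - x_j
    r = np.sqrt(np.maximum((dX ** 2).sum(-1) / n, 1e-12))
    w = stats.norm.cdf(de / r)                   # smooth surrogate for I(e_i > e_j)
    return (dX * w[..., None]).sum(axis=(0, 1)) / n ** 2

# Illustrative fit with heavy-tailed errors, where rank methods shine.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))
y = X @ np.array([1.0, -0.5]) + rng.standard_t(df=3, size=100)
print(optimize.root(smoothed_gehan, x0=np.zeros(2), args=(X, y)).x)  # near (1.0, -0.5)
```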
Abstract:
Power calculation and sample size determination are critical in designing environmental monitoring programs. The traditional approach based on comparing mean values may become statistically inappropriate, or even invalid, when substantial proportions of the response values are below the detection limits or censored, because strong distributional assumptions must be made on the censored observations when implementing the traditional procedures. In this paper, we propose a quantile methodology that is robust to outliers and can handle data with a substantial proportion of below-detection-limit observations without the need to impute the censored values. As a demonstration, we applied the methods to a nutrient monitoring project that is part of the Perth Long-Term Ocean Outlet Monitoring Program. In this example, the sample size required by our quantile methodology is in fact smaller than that required by the traditional t-test, illustrating the merit of our method.
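To make the censoring-robustness concrete: testing whether a given quantile exceeds a guideline reduces to a binomial test on the count of observations above the guideline, so values censored below the detection limit never need to be imputed. The following is a Monte Carlo power/sample-size sketch under illustrative effect sizes; it is an assumed construction, not the paper's procedure.

```python
import numpy as np
from scipy import stats

def quantile_power(n, q=0.8, alpha=0.05, true_exceed=0.30, sims=4000, seed=1):
    """Power of the exceedance test of H0: the q-th quantile is below the guideline."""
    rng = np.random.default_rng(seed)
    k = rng.binomial(n, true_exceed, size=sims)    # simulated counts above the guideline
    # exact one-sided p-values under H0: exceedance probability <= 1 - q
    pvals = stats.binom.sf(k - 1, n, 1 - q)
    return float((pvals < alpha).mean())

n = 20
while quantile_power(n) < 0.80:                    # smallest n giving 80% power
    n += 5
print(n, quantile_power(n))
```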
Abstract:
This paper considers the one-sample sign test for data obtained from general ranked set sampling, in which the numbers of observations for the different ranks are not necessarily the same, and proposes a weighted sign test because observations with different ranks are not identically distributed. The optimal weight for each observation is distribution free and depends only on its associated rank. It is shown analytically that (1) the weighted version always improves the Pitman efficiency for all distributions, and (2) the optimal design is to select the median from each ranked set.
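As a sketch of why the optimal weights are distribution free: for the r-th order statistic of a set of size k, both the null variance of its sign indicator and its sensitivity to a location shift reduce to binomial probabilities, so the efficacy-optimal weight is a ratio of binomial quantities. The formula below is an assumed reconstruction along these lines, not necessarily the paper's exact expression.

```python
import numpy as np
from scipy import stats

def optimal_weights(k: int) -> np.ndarray:
    """Efficacy-optimal sign-test weights for ranks 1..k under ranked set sampling."""
    r = np.arange(1, k + 1)
    sens = stats.binom.pmf(r - 1, k - 1, 0.5)   # shift sensitivity of rank r (up to f(median))
    p = stats.binom.cdf(r - 1, k, 0.5)          # null P(r-th order statistic > median)
    w = sens / (p * (1 - p))                    # weight ~ sensitivity / indicator variance
    return w / w.sum()

print(optimal_weights(5))   # weights depend only on rank, not on the distribution
```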
Abstract:
The bentiromide test, using plasma p-aminobenzoic acid as an indirect measure of pancreatic insufficiency, was evaluated in young children between 2 months and 4 years of age. To determine the optimal test method, the following were examined: (a) the best dose of bentiromide (15 mg/kg or 30 mg/kg); (b) the optimal sampling time for plasma p-aminobenzoic acid; and (c) the effect of coadministration of a liquid meal. Sixty-nine children (1.6 ± 1.0 years) were studied, including 34 controls with normal fat absorption and 35 patients (34 with cystic fibrosis) with fat maldigestion due to pancreatic insufficiency. Control and pancreatic-insufficient subjects were studied in three age-matched groups: (a) low-dose bentiromide (15 mg/kg) with clear fluids; (b) high-dose bentiromide (30 mg/kg) with clear fluids; and (c) high-dose bentiromide with a liquid meal. Plasma p-aminobenzoic acid was determined at 0, 30, 60, and 90 minutes, and then hourly up to 6 hours. The dose effect of bentiromide with clear liquids was evaluated. High-dose bentiromide best discriminated control and pancreatic-insufficient subjects, owing to a higher peak plasma p-aminobenzoic acid level in controls, but sensitivity and specificity remained poor. High-dose bentiromide with a liquid meal produced a delayed increase in plasma p-aminobenzoic acid in the control subjects, probably caused by retarded gastric emptying. However, in the pancreatic-insufficient subjects, use of a liquid meal resulted in significantly lower plasma p-aminobenzoic acid levels at all time points; plasma p-aminobenzoic acid at 2 and 3 hours completely discriminated between control and pancreatic-insufficient patients. Evaluation of the data by area under the time-concentration curve failed to improve the test results. In conclusion, the bentiromide test is a simple, clinically useful means of detecting pancreatic insufficiency in young children, but the higher dose administered with a liquid meal is recommended.
Abstract:
Cyclic plastic deformation of subgrade and other engineered layers is generally not taken into account in the design of railway bridge transition zones, although plastic deformation is the governing factor in frequent track deterioration. The actual stress behavior of fine-grained subgrade/embankment layers under train traffic is, however, difficult to replicate using conventional laboratory test apparatus and techniques. A new type of torsional simple shear apparatus, known as the multi-ring shear apparatus, was therefore developed to evaluate the actual stress state and the corresponding cyclic plastic deformation characteristics of subgrade materials under moving wheel load conditions. The multi-ring shear test results have been validated against theoretical model results, establishing the capability of the apparatus to replicate the cyclic plastic deformation characteristics of subgrade under moving train wheel loads. This paper describes the effects of principal stress rotation (PSR) in the subgrade materials on cyclic plastic deformation in a railroad, and the impact of the testing method on evaluating the influence of PSR on track deterioration.
Abstract:
The aim of this study was to assess results obtained from a range of commonly performed lower extremity “open and closed” chain kinetic tests used for predicting foot function, and to correlate these test findings with data obtained from the Zebris WinFDM-T system®. When performed correctly, these tests are thought to be indicators of lower extremity function. Podiatrists frequently examine joint and muscle structures to understand biomechanical function; however, the relationship between these routine tests and the forces generated during the gait cycle is not always well understood. This can introduce a degree of variability in clinical interpretation, which creates conjecture regarding the value of these tests.
Abstract:
This paper investigates the quality of service (QoS) and resource productivity implications of transit route passenger loading and travel time. It highlights the value of the occupancy load factor as a direct passenger comfort QoS measure. Automatic Fare Collection (AFC) data for a premium radial bus route in Brisbane, Australia, is used to investigate the time series correlation between occupancy load factor and passenger average travel time. Correlation is strong across the entire span of service in both directions. Passengers tend to be making longer, peak-direction commuter trips under significantly less comfortable conditions than off-peak. The Transit Capacity and Quality of Service Manual uses segment-based load factor as a measure of onboard loading comfort QoS. This paper provides additional insight into QoS by relating the two route-based dimensions of occupancy load factor and passenger average travel time in a two-dimensional format, from both the passenger's and the operator's perspectives. Future research will apply Value of Time to QoS measurement, reflecting perceived passenger comfort through crowding and average time spent onboard; this would also assist econometric modeling of transit service quality. The methodology can be readily applied in practical settings where AFC data for fixed scheduled routes are available. The study outcomes also provide valuable research and development directions.
Abstract:
Improving the rehabilitation program of individuals with transfemoral amputation fitted with a bone-anchored prosthesis, based on direct measurements of the load applied on the residuum, first of all requires an understanding of the load applied on the fixation. The load applied on the residuum was therefore first measured directly during standardized activities of daily living, such as straight-line level walking, ascending and descending stairs and a ramp, and walking around a circle. The load was then also measured during different phases of the rehabilitation program, such as walking with walking aids and load bearing exercises.[1-15] The rehabilitation program for individuals with a transfemoral amputation fitted with an OPRA implant relies on a combination of dynamic and static load bearing exercises (LBE).[16-20] This presentation focuses on the study of a set of experimental static load bearing exercises.[1] A group of eleven individuals with unilateral transfemoral amputation fitted with an OPRA implant participated in this study. The load on the implant during the static load bearing exercises was measured using a portable system comprising a commercial transducer embedded in a short pylon, a laptop, and a customized software package. This apparatus was previously shown to be effective in a proof-of-concept study published by Prof. Frossard.[1-9] The analysis of the static load bearing exercises covered both loading reliability and loading compliance. The reliability analysis showed high reliability between loading sessions, indicating correct repetition of the LBE by the participants.[1, 5] The compliance analysis showed a significant lack of axial compliance, leading to systematic underloading of the long axis of the implant during the proposed experimental static LBE.
Abstract:
Large-scale integration of solar photovoltaic (PV) generation in distribution networks has resulted in over-voltage problems. Several control techniques have been developed to address the over-voltage problem using Deterministic Load Flow (DLF). However, the intermittent characteristics of PV generation require Probabilistic Load Flow (PLF) to capture variability that is ignored in DLF analysis. Traditional PLF techniques are not well suited to distribution systems and suffer from several drawbacks, such as computational burden (Monte Carlo, conventional convolution), accuracy that degrades with system complexity (point estimation method), the need for linearization (multi-linear simulation), and convergence problems (Gram–Charlier expansion, Cornish–Fisher expansion). In this research, Latin Hypercube Sampling with Cholesky Decomposition (LHS-CD) is used to quantify over-voltage issues, with and without a voltage control algorithm, in distribution networks with active generation. The LHS technique is verified on a test network and on a real system from an Australian distribution network service provider. The accuracy and computational burden of the simulated results are also compared with Monte Carlo simulations.
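A minimal sketch of LHS-CD in the Iman-Conover style follows: stratified uniforms are mapped through each marginal's inverse CDF, and a Cholesky factor of the target correlation matrix is used to reorder the samples so they carry the desired dependence. The marginals and correlation matrix below are illustrative assumptions, not the paper's network data.

```python
import numpy as np
from scipy import stats

def lhs_cd(n_samples, corr, marginals, seed=0):
    """Latin Hypercube samples with a target rank correlation (Iman-Conover)."""
    rng = np.random.default_rng(seed)
    d = len(marginals)
    # stratified uniforms: one draw per equal-probability bin, per variable
    bins = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
    u = (bins + rng.uniform(size=(n_samples, d))) / n_samples
    z = stats.norm.ppf(u)                        # map to standard normal scores
    L = np.linalg.cholesky(corr)
    z_corr = z @ L.T                             # impose the target correlation
    # restore each LHS marginal in the rank order of the correlated scores
    x = np.empty_like(z_corr)
    for j, m in enumerate(marginals):
        ranks = z_corr[:, j].argsort().argsort()
        x[:, j] = np.sort(m.ppf(u[:, j]))[ranks]
    return x

# e.g., two correlated PV-output variables with bounded (beta) marginals
corr = np.array([[1.0, 0.8], [0.8, 1.0]])
samples = lhs_cd(1000, corr, [stats.beta(2, 2), stats.beta(2, 2)])
print(np.corrcoef(samples.T))                    # close to the target matrix
```

Because the stratification covers each marginal evenly, far fewer samples are needed than with plain Monte Carlo for the same tail accuracy, which is the computational advantage the abstract reports.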