418 results for Roundness errors


Relevance:

10.00%

Abstract:

Purpose – Ideally, there is no wear in the hydrodynamic lubrication regime. A small amount of wear occurs during machine start-up and shutdown, and the amount is so small that it is difficult to measure accurately. Of the various wear measuring techniques that have been tried, out-of-roundness was found to be the most reliable for measuring small wear quantities in journal bearings. This technique was further developed to achieve higher accuracy in measuring small wear quantities. The method proved to be reliable as well as inexpensive. The paper aims to discuss these issues. Design/methodology/approach – In an experimental study, the effect of antiwear additives was studied on journal bearings lubricated with oil containing solid contaminants. The tests were long, yet the wear quantities achieved were very small. To minimise test duration, short tests of about 90 min were conducted, and wear was measured by recording changes in a variety of parameters related to weight, geometry and wear debris. Out-of-roundness was found to be the most effective method. This method was further refined by enlarging the out-of-roundness traces on a photocopier. The method proved to be reliable and inexpensive. Findings – The study revealed that the most commonly used wear measurement techniques, such as weight loss, roughness changes and change in particle count, were not adequate for measuring small wear quantities in journal bearings. With some refinements, the out-of-roundness method was found to be one of the most reliable for measuring small wear quantities in journal bearings working in the hydrodynamic lubrication regime. By enlarging the out-of-roundness traces and determining the worn area of the bearing cross-section, weight loss in bearings was calculated in a repeatable and reliable manner. Research limitations/implications – This research is basic in nature: a rudimentary solution has been developed for measuring small wear quantities in rotary devices such as journal bearings. The method requires enlarging traces on a photocopier and determining the shape of the worn area on an out-of-roundness trace on a transparency, which is simple but crude. An automated procedure may be required to determine weight loss from the out-of-roundness traces directly. The method can be very useful in reducing test duration and measuring wear quantities with higher precision in situations where wear quantities are very small. Practical implications – This research provides a reliable method of measuring wear of circular geometry. The Talyrond equipment used for measuring the change in out-of-roundness due to bearing wear shows high potential for use as a wear measuring device as well. Measurement of weight loss from the traces is an enhanced capability of this equipment, and this research may lead to the development of a modified version of Talyrond-type equipment for wear measurement in circular machine components. Originality/value – Wear measurement in hydrodynamic bearings requires long-duration tests to achieve adequate wear quantities. Out-of-roundness is one of the geometrical parameters that changes as wear progresses in circular components; it is therefore an effective wear measuring parameter that relates to change in geometry. The method of increasing sensitivity by enlarging out-of-roundness traces is original work through which the worn cross-sectional area can be determined and weight loss derived with higher precision for materials of known density.
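
The weight-loss derivation described above reduces to simple geometry: scale the worn area measured on the enlarged trace back to true size, multiply by the worn length, and convert to mass via the material density. Below is a minimal sketch of that arithmetic; all numerical values are hypothetical, and it assumes a uniform enlargement factor (a real Talyrond polar trace has different radial and circumferential magnifications).

```python
# Minimal sketch: deriving bearing weight loss from the worn cross-sectional
# area measured on an enlarged out-of-roundness trace. All values are
# hypothetical; a uniform enlargement factor is assumed for simplicity.

def wear_mass_loss(worn_area_mm2: float, enlargement: float,
                   worn_length_mm: float, density_g_cm3: float) -> float:
    """Convert a worn area measured on an enlarged trace to mass loss (g)."""
    # Area scales with the square of the linear enlargement factor.
    true_area_mm2 = worn_area_mm2 / enlargement**2
    # Worn volume = cross-sectional area x axial length of the worn band.
    volume_mm3 = true_area_mm2 * worn_length_mm
    # 1 mm^3 = 1e-3 cm^3; mass = volume x density.
    return volume_mm3 * 1e-3 * density_g_cm3

# Example: 52 mm^2 measured at 4x enlargement on a 20 mm bronze bearing.
print(f"{wear_mass_loss(52.0, 4.0, 20.0, 8.8):.3f} g")
```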

Relevance:

10.00%

Abstract:

Pattern recognition is a promising approach for the identification of structural damage using measured dynamic data. Much of the research on pattern recognition has employed artificial neural networks (ANNs) and genetic algorithms as systematic ways of matching pattern features. The selection of a damage-sensitive and noise-insensitive pattern feature is important for all structural damage identification methods. Accordingly, a neural network-based damage detection method using frequency response function (FRF) data is presented in this paper. This method can effectively consider uncertainties in the measured data from which training patterns are generated. The proposed method reduces the dimension of the initial FRF data, transforms it into new damage indices, and employs an ANN for the actual damage localization and quantification using damage patterns recognized by the algorithm. In civil engineering applications, the measurement of dynamic response under field conditions always contains noise components from environmental factors. In order to evaluate the performance of the proposed strategy with noise-polluted data, noise-contaminated measurements are also introduced to the algorithm. ANNs with an optimal architecture give minimum training and testing errors and provide precise damage detection results. To maximize damage detection performance, the optimal ANN architecture is identified by selecting the number of hidden layers and the number of neurons per hidden layer through a trial-and-error method. In real testing, the number of measurement points and the measurement locations used to obtain the structural response are critical for damage detection; therefore, optimal sensor placement to improve damage identification is also investigated herein. A finite element model of a two-storey framed structure is used to train the neural network. It shows accurate performance and gives low errors with simulated and noise-contaminated data for single and multiple damage cases. As a result, the proposed method can be used for structural health monitoring and damage detection, particularly in cases where the measurement data are very large. Furthermore, it is suggested that an optimal ANN architecture can detect damage occurrence with good accuracy and can provide damage quantification with reasonable accuracy under varying levels of damage.
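
As an illustration of the overall pipeline rather than the paper's specific damage index, the sketch below compresses synthetic FRF data into a low-dimensional feature via PCA (standing in for the dimension-reduction step) and trains a small ANN to localize and quantify damage; all data, dimensions and the toy FRF model are invented for the example.

```python
# Illustrative sketch only: PCA stands in for the paper's FRF-based damage
# indices, and the linear "FRF model" below is a toy stand-in for a finite
# element model of the structure.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic training data: 200 damage cases, FRFs sampled at 500 frequencies,
# damage vector = severity (0..1) in each of 4 structural elements.
n_cases, n_freq, n_elements = 200, 500, 4
damage = rng.uniform(0.0, 1.0, size=(n_cases, n_elements))
frf = damage @ rng.normal(size=(n_elements, n_freq))   # toy FRF model
frf += 0.05 * rng.normal(size=frf.shape)               # measurement noise

# Step 1: reduce the dimension of the FRF data to damage indices.
pca = PCA(n_components=10).fit(frf)
indices = pca.transform(frf)

# Step 2: train an ANN to map damage indices to element severities.
ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000,
                   random_state=0).fit(indices, damage)

# Localize/quantify damage for a new, noise-contaminated measurement.
test = pca.transform(frf[:1] + 0.05 * rng.normal(size=(1, n_freq)))
print(ann.predict(test).round(2), damage[0].round(2))
```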

Relevance:

10.00%

Abstract:

People get into healthcare because they want to help society. And when a new hospital is briefed, everyone tries to do their best, but the process is mired in the impossibility of the task. Stakeholders rarely understand the architectural process, nobody can predict the future, and the only thing that is certain is that everything will change as the project unfolds, revealing errors in initial assumptions and calculations, shifts in needs, new technologies and so on. Yet there is always pressure to keep to the programme and to press on regardless. This chaos eventually leads to suboptimal results: hospitals the world over are riddled with inefficiencies, idiosyncrasies, incredible wastage and features that lead to poor clinical outcomes. This talk will sketch out the basics of Scrum, the most popular open-source Lean/Agile methodology, and discuss what healthcare designers can learn from the geeks in Silicon Valley to reduce risk, meet deadlines and deliver the highest possible value for the budget despite the uncertainty.

Relevance:

10.00%

Abstract:

Background: Paediatric-onset inflammatory bowel disease (IBD) may alter energy requirements and invalidate the use of standard prediction equations. Our aim was to evaluate four commonly used prediction equations for resting energy expenditure (REE) in children with IBD. Methods: Sixty-three children had repeated measurements of REE as part of a longitudinal research study, yielding a total of 243 measurements. These were compared with predicted REE from the Schofield, Oxford, FAO/WHO/UNU and Harris-Benedict equations using the Bland-Altman method. Results: Mean (±SD) age of the patients was 14.2 (2.4) years. Mean measured REE was 1566 (336) kcal per day, compared with 1491 (236), 1441 (255), 1481 (232) and 1435 (212) kcal per day calculated from the Schofield, Oxford, FAO/WHO/UNU and Harris-Benedict equations, respectively. While the Schofield equation showed the smallest difference between measured and predicted REE, it, like the other equations tested, did not perform uniformly across all subjects, with greater errors at either end of the spectrum of energy expenditure. Smaller differences were found for all prediction equations in Crohn's disease compared with ulcerative colitis. Conclusions: Of the commonly used equations, the Schofield equation should be used in paediatric patients with IBD when measured values cannot be obtained. (Inflamm Bowel Dis 2010;) Copyright © 2010 Crohn's & Colitis Foundation of America, Inc.
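
For readers unfamiliar with the Bland-Altman method used in this comparison, the sketch below computes the bias and 95% limits of agreement between measured and predicted REE on synthetic data; the study's patient measurements are not reproduced, and the bias term is invented for illustration.

```python
# Bland-Altman agreement between measured and predicted REE (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
measured = rng.normal(1566, 336, size=243)             # kcal/day (synthetic)
predicted = measured - rng.normal(75, 150, size=243)   # hypothetical equation bias

diff = measured - predicted
mean_pair = (measured + predicted) / 2.0

bias = diff.mean()
sd = diff.std(ddof=1)
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd      # 95% limits of agreement

print(f"bias = {bias:.0f} kcal/day, limits of agreement = ({lower:.0f}, {upper:.0f})")
# Plotting diff against mean_pair shows whether the error grows at the
# extremes of energy expenditure, as reported for all four equations.
```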

Relevance:

10.00%

Abstract:

This chapter considers the role of the law in communicating patient safety. Downie, Lahey, Ford, et al's (2006) preventing, knowing and responding theoretical framework is adopted to classify the different elements of patient safety law. Rather than setting out all relevant patient safety laws in detail, this chapter highlights key legal strategies which are employed to: prevent the occurrence of patient safety incidents (preventing); support the discovery and open discussion of patient safety incidents when they do occur (knowing); and guide responses after they occur (responding) (Downie, Lahey, Ford, et al 2006). The law is increasingly being invoked to facilitate open discussion of, and communication surrounding, patient safety. After highlighting some legal strategies used to communicate patient safety, two practice examples are presented. The practice examples highlight different aspects of patient safety law and are indicative of communication issues commonly faced in practice. The first practice example focuses on the role of the coroner in communicating patient safety. This example highlights the investigative role of the law in relation to patient safety (knowing). It also showcases the responding and preventing elements in respect of the significant number of communication errors that can occur in a multi-disciplinary, networked health system. The main focus of the second practice example is responding: it illustrates how the law responds to health service providers' and professionals' miscommunication (and subsequent incidents) during treatment; however, it also touches upon knowing and preventing.

Relevance:

10.00%

Abstract:

Many software applications extend their functionality by dynamically loading libraries into their allocated address space. However, shared libraries are often of unknown provenance and quality and may contain accidental bugs or, in some cases, deliberately malicious code. Most sandboxing techniques that address these issues require recompilation of the libraries using custom tool chains, require significant modifications to the libraries, do not retain the benefits of single address-space programming, do not completely isolate guest code, or incur substantial performance overheads. In this paper we present LibVM, a sandboxing architecture for isolating libraries within a host application without requiring any modifications to the shared libraries themselves, while still retaining the benefits of a single address space and introducing a system call interposition layer that allows complete arbitration over a shared library's functionality. We show how to utilize contemporary hardware virtualization support towards this end with reasonable performance overheads; in the absence of such hardware support, our model can also be implemented using a software-based mechanism. We ensure that our implementation conforms as closely as possible to existing shared library manipulation functions, minimizing the effort needed to apply such isolation to existing programs. Our experimental results show that it is easy to gain immediate benefits in scenarios where the goal is to guard the host application against unintentional programming errors when using shared libraries, as well as in more complex scenarios where a shared library is suspected of being actively hostile. In both cases, no changes are required to the shared libraries themselves.
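
For context, the sketch below shows the conventional dlopen-style loading interface that sandboxes of this kind wrap, via Python's ctypes (this is the generic mechanism, not LibVM's own API, and it assumes a POSIX system with the C math library available). The guest library executes unmediated in the host's address space, which is precisely the risk LibVM arbitrates.

```python
# Conventional dynamic loading: the loaded library runs with the host's
# full privileges inside the same address space, so a bug or hostile code
# in the library affects the whole process.
import ctypes
import ctypes.util

libm_path = ctypes.util.find_library("m")  # dlopen-style library lookup
libm = ctypes.CDLL(libm_path)              # load into this process's address space

libm.cos.argtypes = [ctypes.c_double]      # declare the foreign signature
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0 -- a direct, unmediated call into guest code
```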

Relevance:

10.00%

Abstract:

Melanopsin-containing intrinsically photosensitive retinal ganglion cells (ipRGCs) mediate the pupil light reflex (PLR) during light onset and at light offset (the post-illumination pupil response, PIPR). Recent evidence shows that the PLR and PIPR can provide non-invasive, objective markers of age-related retinal and optic nerve disease; however, there is no consensus on the effects of healthy ageing or refractive error on ipRGC-mediated pupil function. Here we isolated melanopsin contributions to the pupil control pathway in 59 human participants with no ocular pathology across a range of ages and refractive errors. We show that there is no effect of age or refractive error on ipRGC inputs to the human pupil control pathway. The stability of the ipRGC-mediated pupil response across the human lifespan provides a functional correlate of the robustness of these cells observed during ageing in rodent models.

Relevance:

10.00%

Abstract:

The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in the analysis is the overlap of signals, which makes it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution for deriving a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels along a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched filtering has better accuracy (0.13 μs vs. 0.18 μs standard deviation), deconvolution has a 3.5-fold better side-lobe to main-lobe ratio. Higher side-lobe suppression is important for further improving image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity.
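
A minimal sketch of the matched-filtering baseline, with illustrative parameters rather than those of the study: the received signal is cross-correlated with the known chirp, and transit times are read off the correlation peaks. The overlap of the direct and reflected paths is what the deconvolution approach resolves with better side-lobe suppression.

```python
# Matched filtering of a coded-excitation chirp to recover transit times.
# All parameter values are illustrative.
import numpy as np
from scipy.signal import chirp, correlate

fs = 10e6                                    # 10 MHz sampling rate
t = np.arange(0, 20e-6, 1 / fs)
tx = chirp(t, f0=0.5e6, f1=2e6, t1=t[-1])    # transmitted coded excitation

# Received signal: direct path at 30 us plus a reflected path at 33 us.
n = int(60e-6 * fs)
rx = np.zeros(n)
for delay, gain in [(30e-6, 1.0), (33e-6, 0.6)]:
    i = int(delay * fs)
    rx[i:i + tx.size] += gain * tx
rx += 0.05 * np.random.default_rng(2).normal(size=n)

# Matched filter = correlation with the known chirp; peak lags = transit times.
mf = correlate(rx, tx, mode="valid")
peak = int(np.argmax(np.abs(mf)))
print(f"strongest transit time: {peak / fs * 1e6:.2f} us")   # ~30.00 us
```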

Relevance:

10.00%

Abstract:

The export of sediments from coastal catchments can have detrimental impacts on estuaries and near-shore reef ecosystems such as the Great Barrier Reef. Catchment management approaches aimed at reducing sediment loads require monitoring to evaluate their effectiveness in reducing loads over time. However, load estimation is not a trivial task due to the complex behaviour of constituents in natural streams, the variability of water flows and an often limited amount of data. Regression is commonly used for load estimation and provides a fundamental tool for trend estimation by standardising for other time-specific covariates such as flow. This study investigates whether load estimates and the resultant power to detect trends can be enhanced by (i) modelling the error structure so that temporal correlation can be better quantified, (ii) making use of predictive variables, and (iii) identifying an efficient and feasible sampling strategy that may be used to reduce sampling error. To achieve this, we propose a new regression model that includes an innovative compounding-errors structure and uses two additional predictive variables (average discounted flow and turbidity). By combining this modelling approach with a new, regularly optimised sampling strategy, which adds uniformity to the event sampling strategy, the power to detect trends was increased to 90%. Using the enhanced regression model proposed here, it was possible to detect a trend of 20% over 20 years. This result is in stark contrast to previous conclusions presented in the literature. © 2014 Elsevier B.V. All rights reserved.
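
The paper's compounding-errors structure is not reproduced here, but the simpler stand-in below shows the core idea of point (i), modelling temporal correlation when estimating a trend: a regression of log concentration on log flow plus a linear trend, with AR(1) errors fitted by statsmodels' GLSAR. All data are synthetic, and the true trend is set to a 20% decline over 20 years.

```python
# Trend estimation with autocorrelated errors (a simplified stand-in for
# the paper's compounding-errors model). Synthetic monthly data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 240                                    # 20 years of monthly samples
years = np.arange(n) / 12.0
log_flow = rng.normal(size=n)

# AR(1) errors mimic temporal correlation between nearby samples.
e = np.zeros(n)
for i in range(1, n):
    e[i] = 0.6 * e[i - 1] + rng.normal(scale=0.3)

# True model: 20% decline over 20 years => trend of log(0.8)/20 per year.
log_conc = 2.0 + 0.8 * log_flow + (np.log(0.8) / 20.0) * years + e

X = sm.add_constant(np.column_stack([log_flow, years]))
fit = sm.GLSAR(log_conc, X, rho=1).iterative_fit(maxiter=10)
print(fit.params)   # [intercept, flow coefficient, trend per year]
print(fit.bse)      # standard errors account for the AR(1) structure
```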

Relevance:

10.00%

Abstract:

Commercial environments may receive only a fraction of the genetic gains for growth rate predicted from the selection environment. This fraction is the result of undesirable genotype-by-environment interactions (G × E) and is measured by the genetic correlation (rg) of growth between environments. Genetic correlations obtained in a single generation are notoriously difficult to estimate with precision. A new design is proposed in which genetic correlations can be estimated in one generation by utilising artificial mating with cryopreserved semen and unfertilised eggs stripped from a single female. We compare a traditional phenotypic analysis of growth to a threshold model in which only the largest fish are genotyped for sire identification. The threshold model was robust to family mortality rates differing by up to 30%. The design is unique in that it negates potential re-ranking of families caused by an interaction between common maternal environmental effects and the growing environment. The design is suitable for rapid assessment of G × E over one generation, with a true genetic correlation of 0.70 yielding standard errors as low as 0.07. Different design scenarios were tested for bias and accuracy across a range of heritability values, numbers of half-sib families created, numbers of progeny within each full-sib family, numbers of fish genotyped, numbers of fish stocked, differing family survival rates and various simulated genetic correlation levels.
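
A toy version of the underlying estimation problem, not the proposed mating design itself: simulate half-sib families grown in two environments with a known genetic correlation and recover rg from the correlation of sire family means. All values are invented for illustration; a full analysis would fit a bivariate mixed model.

```python
# Recovering a genetic correlation from sire family means (toy simulation).
import numpy as np

rng = np.random.default_rng(4)
n_sires, n_progeny, r_g = 100, 200, 0.70

# Correlated sire effects on growth in the two environments.
bv = rng.multivariate_normal([0.0, 0.0],
                             [[1.0, r_g], [r_g, 1.0]], size=n_sires)

# Progeny phenotype = sire effect + within-family residual.
pheno = [bv[:, env][:, None]
         + rng.normal(scale=2.0, size=(n_sires, n_progeny))
         for env in (0, 1)]

fam_means = [p.mean(axis=1) for p in pheno]
# The family-mean correlation approaches r_g as family size grows.
print(np.corrcoef(fam_means[0], fam_means[1])[0, 1])
```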

Relevance:

10.00%

Abstract:

Rank-based inference is widely used because of its robustness. This article provides optimal rank-based estimating functions for the analysis of clustered data with random cluster effects. Extensive simulation studies carried out to evaluate the performance of the proposed method demonstrate that it is robust to outliers and highly efficient when strong cluster correlations exist. The performance of the proposed method is satisfactory even when the correlation structure is misspecified or when heteroscedasticity is present. Finally, a real dataset is analyzed for illustration.
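
The paper's optimal estimating functions are not shown here, but a basic rank-based fit conveys the flavour of the approach: the slope is estimated by minimising Jaeckel's rank dispersion of the residuals, which is what gives rank methods their robustness to outliers. A sketch on synthetic heavy-tailed data (the dispersion is invariant to the intercept, which is recovered afterwards from the median residual):

```python
# Rank-based (Wilcoxon-score) simple regression via Jaeckel's dispersion.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.standard_t(df=2, size=n)     # heavy-tailed errors

def jaeckel_dispersion(slope):
    resid = y - slope * x
    ranks = resid.argsort().argsort() + 1            # ranks of the residuals
    scores = np.sqrt(12) * (ranks / (n + 1) - 0.5)   # Wilcoxon scores
    return np.sum(scores * resid)

slope = minimize_scalar(jaeckel_dispersion, bounds=(-10, 10),
                        method="bounded").x
intercept = np.median(y - slope * x)   # dispersion cannot identify it
print(intercept, slope)                # robust estimates, close to (1, 2)
```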

Relevance:

10.00%

Abstract:

This paper proposes a linear quantile regression analysis method for longitudinal data that combines between- and within-subject estimating functions, thereby incorporating the correlations between repeated measurements. The proposed method therefore yields more efficient parameter estimates than estimating functions based on an independence working model. To reduce the computational burden, the induced smoothing method is introduced to obtain parameter estimates and their variances. Under some regularity conditions, the estimators derived by the induced smoothing method are consistent and asymptotically normal. A number of simulation studies are carried out to evaluate the performance of the proposed method. The results indicate that the efficiency gain is substantial, especially when strong within-subject correlations exist. Finally, a dataset from audiology growth research is used to illustrate the proposed methodology.
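
A working-independence sketch of the induced smoothing idea (the paper's combined between- and within-subject estimating functions are not reproduced): the indicator function in the quantile estimating equation is replaced by a normal CDF, so the equations become smooth and can be handed to a standard root finder. Data and bandwidth choice are illustrative.

```python
# Induced smoothing for quantile regression under working independence.
import numpy as np
from scipy.optimize import root
from scipy.stats import norm

rng = np.random.default_rng(6)
n, tau = 500, 0.5
x = np.column_stack([np.ones(n), rng.normal(size=n)])
y = x @ np.array([1.0, 2.0]) + rng.normal(size=n)

h = 1.0 / np.sqrt(n)   # a common choice of smoothing bandwidth

def smoothed_ee(beta):
    # Smooth version of sum_i x_i * (tau - I(y_i - x_i'beta < 0)).
    u = (x @ beta - y) / h
    return x.T @ (tau - norm.cdf(u))

beta_hat = root(smoothed_ee, x0=np.zeros(2)).x
print(beta_hat)   # close to [1, 2] at the median (tau = 0.5)
```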

Relevance:

10.00%

Abstract:

We consider estimating the total load from frequent flow data but less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One dataset is from the Burdekin River and consists of total suspended sediment (TSS), nitrogen oxides (NOx) and gauged flow for 1997. The other is from the Tully River, for the period of July 2000 to June 2008. For NOx in the Burdekin, the new estimates are very similar to the ratio estimates even when there is no relationship between the concentration and the flow. However, for the Tully dataset, incorporating the additional predictive variables, namely the discounted flow and flow phases (rising or receding), substantially improved the model fit, and thus the certainty with which the load is estimated.
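
A minimal rating-curve sketch of the approach on synthetic data: regress log concentration on log flow (additional predictors such as the discounted flow and hydrograph phase would enter the regression in the same way), predict concentration over the full flow record, and integrate the product of concentration and flow. Units and coefficients are illustrative, and the back-transformation bias correction is omitted.

```python
# Rating-curve load estimation from frequent flow, sparse concentration.
import numpy as np

rng = np.random.default_rng(7)

# Frequent flow record (hourly, m^3/s) and 100 sparse concentration samples.
n = 24 * 365
flow = np.exp(rng.normal(3.0, 1.0, size=n))
sampled = rng.choice(n, size=100, replace=False)
conc = np.exp(0.5 + 0.4 * np.log(flow[sampled])
              + rng.normal(scale=0.3, size=100))   # mg/L at sample times

# Fit the rating curve log(C) = a + b*log(Q) by least squares.
A = np.column_stack([np.ones(sampled.size), np.log(flow[sampled])])
a, b = np.linalg.lstsq(A, np.log(conc), rcond=None)[0]

# Predict concentration over the whole record and integrate the load.
conc_hat = np.exp(a + b * np.log(flow))     # ignores back-transform bias
dt = 3600.0                                 # seconds per hourly time step
load_g = np.sum(conc_hat * flow * dt)       # (g/m^3) * (m^3/s) * s = g
print(f"annual load ~ {load_g / 1e6:.1f} tonnes")
```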

Relevance:

10.00%

Abstract:

Sampling strategies are developed based on the idea of ranked set sampling (RSS) to increase efficiency and therefore reduce the cost of sampling in fishery research. RSS incorporates information on concomitant variables that are correlated with the variable of interest into the selection of samples. For example, estimating a monitoring survey abundance index would be more efficient if the sampling sites were selected based on information from previous surveys or fishery catch rates. We use two practical fishery examples to demonstrate the approach: site selection for a fishery-independent monitoring survey in the Australian northern prawn fishery (NPF), and fish age prediction by simple linear regression modelling for a short-lived tropical clupeoid. The relative efficiencies of the new designs were derived analytically and compared with traditional simple random sampling (SRS). Optimal sampling schemes were assessed by different optimality criteria. For the NPF monitoring survey, the efficiency in terms of variance or mean squared error of the estimated mean abundance index ranged from 114% to 199% relative to SRS. In the case of a fish ageing study of Tenualosa ilisha in Bangladesh, the efficiency of age prediction from fish body weight reached 140%.
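
A small simulation conveys the idea; site counts, correlations and the "previous survey" proxy are invented. Candidate sets are ranked by a cheap concomitant variable, one ranked unit per set is actually measured, and the variance of the RSS mean is compared with simple random sampling of the same size.

```python
# Ranked set sampling with a concomitant variable vs. simple random sampling.
import numpy as np

rng = np.random.default_rng(8)

def rss_mean(y, concomitant, set_size, cycles, rng):
    picks = []
    for _ in range(cycles):
        for rank in range(set_size):
            idx = rng.choice(y.size, size=set_size, replace=False)
            ordered = idx[np.argsort(concomitant[idx])]  # rank by covariate
            picks.append(y[ordered[rank]])               # measure one unit
    return np.mean(picks)

# Population: abundance y, correlated with a previous survey index.
y_pop = rng.lognormal(0.0, 0.5, size=10_000)
prev = y_pop + rng.normal(scale=0.3, size=y_pop.size)

m, k = 3, 20                     # set size 3, 20 cycles -> 60 measurements
rss = [rss_mean(y_pop, prev, m, k, rng) for _ in range(2000)]
srs = [rng.choice(y_pop, size=m * k).mean() for _ in range(2000)]
print(f"relative efficiency ~ {np.var(srs) / np.var(rss):.2f}")  # > 1
```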

Relevance:

10.00%

Abstract:

There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps:
(i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
(ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates were not collected;
(iii) establish a predictive model for the concentration data that incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
(iv) sum the products of the predicted flow and the predicted concentration over the regular time intervals to obtain an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in model errors, which results from intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from a factor of 2 to 10, indicating severe bias. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when sampling bias is taken into account.
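
Step (iv) in code, on a synthetic regular grid standing in for the outputs of steps (i)-(iii): the load estimate is simply the sum, over the grid, of predicted flow times predicted concentration. The flow and concentration "predictions" below are invented placeholders for the time-series and rating-curve models.

```python
# Step (iv): integrate predicted concentration x predicted flow over a
# regular time grid. Synthetic inputs stand in for steps (i)-(iii).
import numpy as np

rng = np.random.default_rng(9)

dt = 600.0                                  # 10-minute grid, in seconds
n = int(30 * 24 * 3600 / dt)                # one month of predictions
flow_hat = np.exp(rng.normal(3.0, 0.8, size=n))    # m^3/s, from step (i)
conc_hat = np.exp(0.5 + 0.4 * np.log(flow_hat))    # g/m^3, from step (iii)

load_tonnes = np.sum(flow_hat * conc_hat * dt) / 1e6   # g -> tonnes
print(f"estimated monthly load ~ {load_tonnes:.1f} tonnes")
```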