133 results for Coefficient of variation

in Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

It has not yet been established whether the spatial variation of particle number concentration (PNC) within a microscale environment can affect exposure estimation results. In general, the degree of spatial variation within microscale environments remains unclear, since previous studies have focused only on macroscale environments. The aims of this study were to determine the spatial variation of PNC within microscale school environments in order to assess the effect of the number of monitoring sites on exposure estimation, to identify which parameters have the largest influence on spatial variation, and to characterise the relationship between those parameters and spatial variation. Air quality measurements were conducted for two consecutive weeks at each of 25 schools across Brisbane, Australia. PNC was measured at three sites within the grounds of each school, along with meteorological and several other air quality parameters. Traffic density was recorded for the busiest road adjacent to each school. Spatial variation at each school was quantified using the coefficient of variation (CV). The portion of the CV associated with instrument uncertainty was found to be 0.3, and the CV was therefore corrected so that only non-instrument uncertainty was analysed. The median corrected CV (CVc) ranged from 0 to 0.35 across the schools, with 12 schools found to exhibit spatial variation. The study determined the number of monitoring sites required at schools with spatial variability and tested the deviation in exposure estimation arising from using only a single site. Nine schools required two measurement sites and three schools required three sites. Overall, the deviation in exposure estimation from using only one monitoring site was as much as one order of magnitude.
The study also tested the association of spatial variation with wind speed, wind direction and traffic density, using partial correlation coefficients to identify sources of variation and non-parametric function estimation to quantify the level of variability. Traffic density and road-to-school wind direction were found to have a positive effect on CVc, and therefore on spatial variation. Wind speed reduced spatial variation once it exceeded a threshold of 1.5 m/s, while it had no effect below this threshold. The effect of traffic density on spatial variation increased up to a density of 70 vehicles per five minutes, at which point it plateaued and did not increase further with increasing traffic density.
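
The CV correction described in this abstract can be sketched in a few lines. The quadrature subtraction of the 0.3 instrument-uncertainty portion and the PNC readings below are assumptions for illustration; the abstract does not give the study's exact correction formula.

```python
import math
import statistics

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean of simultaneous PNC readings."""
    return statistics.stdev(values) / statistics.mean(values)

def corrected_cv(cv, instrument_cv=0.3):
    # Quadrature subtraction of the instrument-uncertainty portion
    # (0.3, as reported). The exact correction used in the study is
    # not given in the abstract, so this formula is an assumption.
    return math.sqrt(max(cv ** 2 - instrument_cv ** 2, 0.0))

# Hypothetical PNC readings (particles/cm^3) from the three sites
# within one school's grounds
pnc = [9000.0, 16000.0, 7000.0]
cv = coefficient_of_variation(pnc)
cvc = corrected_cv(cv)
```

A school whose raw CV falls below the instrument portion would receive a corrected CV of zero, i.e. no detectable spatial variation.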

Relevance: 100.00%

Abstract:

This paper investigates stochastic analysis of transit segment hourly passenger load factor variation for transit capacity and quality of service (QoS) analysis, using Automatic Fare Collection data for a premium radial bus route in Brisbane, Australia. It compares stochastic analysis to traditional peak hour factor (PHF) analysis to gain further insight into the variability of transit route segments’ passenger loading during a study hour. It demonstrates that the hourly design load factor is a useful method of modeling a route segment’s capacity and QoS time history across the study weekday. This analysis method is readily adaptable to different passenger load standards by adjusting the design percentile, reflecting either a more relaxed or a more stringent condition. The paper also considers the hourly coefficient of variation of load factor as a capacity and QoS assessment measure, in particular through its relationships with the hourly average and design load factors. A smaller value reflects uniform passenger loading, which is generally indicative of well dispersed passenger boarding demands and good schedule maintenance. Conversely, a higher value may be indicative of pulsed or uneven passenger boarding demands, poor schedule maintenance, and/or bus bunching. An assessment table based on the hourly coefficient of variation of load factor is developed and applied to this case study, and inferences are drawn for a selection of study hours across the weekday studied.
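
The two measures discussed in this abstract, the hourly coefficient of variation of load factor and the percentile-based hourly design load factor, can be sketched as follows. The trip loads, seated capacity and nearest-rank percentile estimator are illustrative assumptions, not the paper's actual data or estimator.

```python
import statistics

def load_factors(passenger_loads, seats):
    """Load factor per trip: observed on-board passengers / seated capacity."""
    return [p / seats for p in passenger_loads]

def hourly_cv(factors):
    """Hourly coefficient of variation of load factor: a small value
    suggests uniform loading; a large value may indicate pulsed demand,
    poor schedule maintenance, or bus bunching."""
    return statistics.stdev(factors) / statistics.mean(factors)

def design_load_factor(factors, percentile=0.88):
    """Design load factor as an upper percentile of the hourly
    distribution (88th here; raise or lower it for a more stringent or
    more relaxed standard). Simple nearest-rank estimate."""
    ranked = sorted(factors)
    idx = min(int(percentile * len(ranked)), len(ranked) - 1)
    return ranked[idx]

# Hypothetical AFC-derived loads on one route segment during one hour
loads = [38, 45, 52, 61, 44, 70, 40, 48]   # passengers per bus trip
lf = load_factors(loads, seats=50)
```

Adjusting the `percentile` argument is the "design percentile" knob the abstract mentions.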

Relevance: 100.00%

Abstract:

This study uses weekday Automatic Fare Collection (AFC) data on a premium bus line in Brisbane, Australia.
• Stochastic analysis is compared to peak hour factor (PHF) analysis for insight into passenger loading variability.
• Hourly design load factor (e.g. 88th percentile) is found to be a useful method of modeling a segment’s passenger demand time-history across a study weekday, for capacity and QoS assessment.
• Hourly coefficient of variation of load factor is found to be a useful QoS and operational assessment measure, particularly through its relationships with hourly average load factor and design load factor.
• An assessment table based on hourly coefficient of variation of load factor is developed from the case study.

Relevance: 100.00%

Abstract:

To investigate whether venous occlusion plethysmography (VOP) can be used to measure the high rates of arterial inflow associated with exercise, venous occlusions were performed at rest and following dynamic handgrip exercise at 15, 30, 45, and 60% of maximum voluntary contraction (MVC) in seven healthy males. The effect of including more than one cardiac cycle in the calculation of blood flow was assessed by comparing the cumulative blood flow over one, two, three, or four cardiac cycles. The inclusion of more than one cardiac cycle at 30 and 60% MVC, and more than two cardiac cycles at 15 and 45% MVC, resulted in a lower blood flow compared to using only the first cardiac cycle (P < 0.05). The small time interval over which arterial inflow was measured (~1 s) did not affect the reproducibility of the technique. Reproducibility (coefficient of variation for arterial inflow over three trials) tended to be poorer at the higher workloads, although this trend was not significant (12.7 ± 6.6%, 16.2 ± 7.3%, and 22.9 ± 9.9% for the 15, 30, and 45% MVC workloads; P = 0.102). There was also a tendency for greater reproducibility with the inclusion of more cardiac cycles at the highest workload, but this did not reach significance (P = 0.070). In conclusion, when calculated over the first cardiac cycle only during venous occlusion, high rates of forearm blood flow (FBF) can be measured using VOP without a significant decrease in the reproducibility of the measurement.
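
The cycle-counting effect described above can be illustrated with a toy calculation. The per-cycle volume increments and cycle durations below are invented for illustration; they simply encode the abstract's finding that inflow estimates fall as later cardiac cycles (when the filling veins slow inflow) are included.

```python
# Hypothetical per-cycle limb volume increments (ml/100 ml) and cardiac
# cycle durations (s); real values come from the plethysmograph trace.
cycle_volumes = [0.55, 0.48, 0.40, 0.33]   # inflow falls as the veins fill
cycle_durations = [0.9, 0.9, 0.9, 0.9]

def flow_over_cycles(n):
    """Mean arterial inflow (ml/100 ml/min) calculated over the first
    n cardiac cycles of the venous occlusion."""
    volume = sum(cycle_volumes[:n])
    elapsed = sum(cycle_durations[:n])
    return 60.0 * volume / elapsed

first_cycle_flow = flow_over_cycles(1)  # the estimate the paper recommends
four_cycle_flow = flow_over_cycles(4)   # lower: venous filling slows inflow
```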

Relevance: 100.00%

Abstract:

This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used in efforts to improve the accuracy of the bone mineral measurements. A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken as there was no readily available code which included electron binding energy corrections for incoherent scattering and one of the objectives of the project was to study the effects of inclusion of these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions. In comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool. A study of the significance of inclusion of electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application. The most significant effect is a reduction of low angle scatter flux for high atomic number scatterers. To effectively apply the Monte Carlo code to the study of bone mineral density measurement by photon absorptiometry the results must be considered in the context of a theoretical framework for the extraction of energy dependent information from planar X-ray beams. Such a theoretical framework is developed and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained. 
This theoretical framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques. Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established. These models have been used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal. For the geometry of the models studied in this work the scatter has no significant effect upon the results of the measurements. The model has also been used to study a proposed technique which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path. This is designated as the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components: bone mineral, fat and lean soft tissue. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions, and hence indicate the potential to overcome a major problem of the two component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone, and that it has poorer precision (approximately twice the coefficient of variation) than the standard DEXA measurements. These factors may limit the usefulness of the technique. These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have:
1. demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements,
2. demonstrated that the statistical precision of the proposed DPA(+) three tissue component technique is poorer than that of the standard DEXA two tissue component technique,
3. demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three component model of fat, lean soft tissue and bone mineral, and
4. provided a knowledge base for input to decisions about development (or otherwise) of a physical prototype DPA(+) imaging system.
The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
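
A minimal sketch of the Monte Carlo photon-transport idea underlying the thesis is given below. It models only narrow-beam attenuation through a uniform slab, with no scatter and no electron binding corrections; the attenuation coefficient and geometry are hypothetical, so this is a toy analogue rather than the thesis code.

```python
import math
import random

def transmitted_fraction(mu, thickness_cm, n_photons=100_000, seed=1):
    """Toy Monte Carlo estimate of narrow-beam photon transmission
    through a uniform slab: sample each photon's free path from the
    exponential attenuation law and count photons that cross the slab
    without interacting. (The thesis code additionally models coherent
    and incoherent scatter with electron binding energy corrections.)"""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        # Free path s sampled from p(s) = mu * exp(-mu * s);
        # using 1 - random() avoids log(0)
        path = -math.log(1.0 - rng.random()) / mu
        if path > thickness_cm:
            transmitted += 1
    return transmitted / n_photons

# mu = 0.2 cm^-1 is a hypothetical linear attenuation coefficient
frac = transmitted_fraction(mu=0.2, thickness_cm=5.0)
# Analytic Beer-Lambert value for comparison: exp(-0.2 * 5) = exp(-1)
```

Agreement with the analytic Beer-Lambert result is the simplest of the theory checks the thesis describes for validating such a code.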

Relevance: 100.00%

Abstract:

The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome, which is caused by abnormalities in the properties of the tear film, is one of the most commonly reported eye health problems. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for a lack of reliability and/or repeatability, and the range of non-invasive methods of tear assessment that has been investigated also presents limitations. Hence, no “gold standard” test is currently available to assess tear film integrity, and improving techniques for the assessment of tear film quality is of clinical significance and the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea, and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern; in contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film.
However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify changes in the reflected pattern and to extract a time series estimate of the TFSQ from the video recording. The routines extract from each frame of the video recording a maximized area of analysis, within which a metric of the TFSQ is calculated. Initially, two metrics, based on Gabor filter and Gaussian gradient-based techniques, were used to quantify the consistency of the pattern’s local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assessing the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing tear break-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated, and receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome.
The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was a lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason, an additional metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric circle pattern into a quasi-straight-line image from which a block statistics value is extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully understand the HSV measurement and the instrument’s potential limitations. Of special interest was the assessment of the instrument’s sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of tear film dynamics; for instance, the model developed for the build-up phase provided insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods, so a set of guidelines is proposed to meet both criteria.
Special attention was given to a commonly used fit, the polynomial function, and to the selection of an appropriate model order to ensure that the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV method has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality.
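
The Cartesian-to-polar transformation with block processing described in this abstract can be sketched as follows. The synthetic ring image, sampling resolution and per-row variance statistic are illustrative assumptions, not the thesis's actual routines.

```python
import math

def to_polar(image, n_r, n_theta):
    """Resample a square image (list of rows) from Cartesian to polar
    coordinates about its centre, so concentric Placido rings map to
    quasi-straight rows. Nearest-neighbour sampling keeps the sketch
    dependency-free; a real implementation would interpolate."""
    size = len(image)
    c = (size - 1) / 2.0
    polar = []
    for i in range(n_r):
        r = c * i / (n_r - 1)
        row = []
        for j in range(n_theta):
            th = 2.0 * math.pi * j / n_theta
            x = int(round(c + r * math.cos(th)))
            y = int(round(c + r * math.sin(th)))
            row.append(image[y][x])
        polar.append(row)
    return polar

def block_statistic(polar):
    """TFSQ-style block statistic: mean per-row variance. A smooth,
    regular reflected pattern gives near-constant rows (low value);
    tear film irregularities raise it."""
    total = 0.0
    for row in polar:
        m = sum(row) / len(row)
        total += sum((v - m) ** 2 for v in row) / len(row)
    return total / len(polar)

# Synthetic 64x64 ring image: intensity depends only on radius, as for
# a perfectly regular reflected Placido pattern
size = 64
c = (size - 1) / 2.0
rings = [[math.cos(0.2 * math.hypot(x - c, y - c)) for x in range(size)]
         for y in range(size)]
metric = block_statistic(to_polar(rings, n_r=16, n_theta=90))
```

A degraded tear film would break the radial symmetry of `rings`, raising the per-row variance and hence the metric.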

Relevance: 100.00%

Abstract:

Computational models of cardiomyocyte action potentials (APs) often make use of a large parameter set. This set can contain elements that are fitted to experimental data independently of any other element, elements that are derived concurrently with other elements to match experimental data, and elements that are derived purely from phenomenological fitting to produce the desired AP output. Furthermore, models can make use of several different data sets, not always derived under the same conditions or even for the same species. It is consequently uncertain whether the parameter set for a given model is physiologically accurate. Moreover, it is only recently that the possibility of degeneracy in parameter values producing a given simulation output has started to be addressed. In this study, we examine the effects of varying two parameters, the L-type calcium current (I_CaL) and the delayed rectifier potassium current (I_Ks), in a computational model of a rabbit ventricular cardiomyocyte AP on both the membrane potential (V_m) and the calcium (Ca2+) transient. It will subsequently be determined whether this model is degenerate with respect to these parameter values, which has important implications for the stability of such models under cell-to-cell parameter variation, and for whether the current methodology for generating parameter values is flawed. The accuracy of AP duration (APD) as an indicator of AP shape will also be assessed.

Relevance: 100.00%

Abstract:

We examine the test-retest reliability of biceps brachii tissue oxygenation index (TOI) parameters measured by near-infrared spectroscopy (NIRS) during a 10-s sustained and a 30-repetition (1-s contraction, 1-s relaxation) isometric contraction task at submaximal (30% of maximal voluntary contraction; 30% MVC) and maximal (100% MVC) intensities. Eight healthy men (23 to 33 yr) were tested in three sessions separated by 3 h and 24 h, and the within-subject reliability of torque and of each TOI parameter was determined by Bland-Altman ±2 SD limits of agreement plots and the coefficient of variation (CV). No significant (P > 0.05) differences between the three sessions were found for mean values of torque and TOI parameters during the sustained and repeated tasks at either contraction intensity. All TOI parameters were within the ±2 SD limits of agreement. The CVs for torque integral were similar between the sustained and repeated tasks at both intensities (4 to 7%); however, the CVs for TOI parameters during both tasks were lower for 100% MVC (7 to 11%) than for 30% MVC (22 to 36%). It is concluded that the reliability of biceps brachii NIRS parameters during both sustained and repeated isometric contraction tasks is acceptable.
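
The two reliability measures used in this abstract, Bland-Altman ±2 SD limits of agreement and the within-subject coefficient of variation, can be sketched as follows. The TOI values for the two sessions are hypothetical.

```python
import statistics

def bland_altman_limits(session_a, session_b):
    """Bland-Altman mean bias and +/-2 SD limits of agreement for
    paired measurements from two test sessions."""
    diffs = [a - b for a, b in zip(session_a, session_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 2 * sd, bias + 2 * sd

def within_subject_cv(session_a, session_b):
    """Within-subject CV (%): SD of each subject's two measurements
    over their mean, averaged across subjects."""
    cvs = []
    for a, b in zip(session_a, session_b):
        m = (a + b) / 2
        sd = abs(a - b) / 2 ** 0.5   # sample SD of two values
        cvs.append(100 * sd / m)
    return sum(cvs) / len(cvs)

# Hypothetical TOI declines (%) for 8 subjects on two sessions
s1 = [10.2, 12.5, 9.8, 11.0, 13.1, 10.6, 12.0, 11.4]
s2 = [10.8, 12.0, 10.5, 10.6, 12.6, 11.2, 11.5, 11.9]
bias, lower, upper = bland_altman_limits(s1, s2)
```

Points falling inside `(lower, upper)` on a Bland-Altman plot correspond to the abstract's "within ±2 SD limits of agreement" criterion.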

Relevance: 100.00%

Abstract:

A routine activity for a sports dietitian is to estimate energy and nutrient intake from an athlete's self-reported food intake. Decisions made by the dietitian when coding a food record are a source of variability in the data. The aim of the present study was to determine the variability in estimates of the daily energy and key nutrient intakes of elite athletes when experienced coders analyzed the same food record using the same database and software package. Seven-day food records from a dietary survey of athletes in the 1996 Australian Olympic team were randomly selected to provide 13 sets of records, each set representing the self-reported food intake of an endurance, team, weight-restricted, and sprint/power athlete. Each set was coded by 3-5 members of Sports Dietitians Australia, giving a total of 52 athletes, 53 dietitians, and 1456 athlete-days of data. We estimated within- and between-athlete and within- and between-dietitian variances for each dietary nutrient using mixed modeling, and combined the variances to express variability as a coefficient of variation (typical variation as a percent of the mean). Variability in the mean of 7-day estimates of a nutrient was 2- to 3-fold less than that of a single day. The variability contributed by the coder was less than the true athlete variability for a 1-day record but was of similar magnitude for a 7-day record. The most variable nutrients (e.g., vitamin C, vitamin A, cholesterol) had approximately 3-fold more variability than the least variable nutrients (e.g., energy, carbohydrate, magnesium). These athlete and coder variabilities need to be taken into account in the dietary assessment of athletes for counseling and research.
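
How independent variance components combine into a single coefficient of variation, and why a 7-day mean is roughly 2- to 3-fold less variable than a single day, can be sketched as follows. The variance components and mean intake are hypothetical.

```python
import math

def typical_variation_pct(variance_components, mean_intake):
    """Combined variability as a coefficient of variation: typical
    variation as a percent of the mean (the summary used in the study).
    Components are assumed independent, so their variances add."""
    total_sd = math.sqrt(sum(variance_components))
    return 100.0 * total_sd / mean_intake

def sd_of_k_day_mean(day_to_day_variance, k):
    """Day-to-day variance shrinks by a factor of k when averaging a
    k-day record, so a 7-day mean is about sqrt(7) (roughly 2.6) times
    less variable than a single day."""
    return math.sqrt(day_to_day_variance / k)

# Hypothetical components for daily energy intake (MJ^2): athlete
# day-to-day, between-athlete, and coder variance
var_day, var_athlete, var_coder = 4.0, 2.3, 0.6
cv_single_day = typical_variation_pct([var_day, var_athlete, var_coder],
                                      mean_intake=14.0)
```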

Relevance: 100.00%

Abstract:

The effects of ethanol fumigation on the inter-cycle variability of key in-cylinder pressure parameters in a modern common rail diesel engine have been investigated. Specifically, maximum rate of pressure rise, peak pressure, peak pressure timing and ignition delay were investigated. A new methodology for determining the start of combustion was also proposed and demonstrated; this is particularly useful with noisy in-cylinder pressure data, as noise can have a significant effect on the calculation of an accurate net rate of heat release indicator diagram. Inter-cycle variability has traditionally been investigated using the coefficient of variation. However, deeper insight into engine operation is given by presenting the results as kernel density estimates, allowing investigation of otherwise unnoticed phenomena, including multi-modal and skewed behaviour. This study found that operation of a common rail diesel engine with high ethanol substitutions (>20% at full load, >30% at three-quarter load) results in a significant reduction in ignition delay. Further, it concluded that if the engine is operated with absolute air-to-fuel ratios (mole basis) of less than 80, inter-cycle variability is substantially increased compared with normal operation.
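
The contrast between the coefficient of variation and a kernel density estimate can be sketched as follows. The peak-pressure samples are invented to show the bimodal behaviour that a single CoV figure would hide, and the Silverman rule-of-thumb bandwidth is an assumption rather than the paper's method.

```python
import math
import statistics

def gaussian_kde(samples, bandwidth=None):
    """Return a Gaussian kernel density estimate of the samples. Unlike
    the CoV, the full density can reveal multi-modal or skewed
    cycle-to-cycle behaviour."""
    n = len(samples)
    if bandwidth is None:
        # Silverman's rule of thumb (an assumed bandwidth choice)
        bandwidth = 1.06 * statistics.stdev(samples) * n ** (-1 / 5)
    def density(x):
        norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

def cov(samples):
    """Coefficient of variation: the traditional single-number summary
    of inter-cycle variability."""
    return statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical peak-pressure samples (bar) with two cycle populations
peaks = [151, 152, 150, 153, 151, 163, 164, 162, 165, 163]
pdf = gaussian_kde(peaks)
```

The density has peaks near both clusters and a dip between them, structure that the single `cov(peaks)` value cannot convey.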

Relevance: 100.00%

Abstract:

With the advent of alternative fuels such as biodiesels and related blends, it is important to develop an understanding of their effects on inter-cycle variability, which, in turn, influences engine performance as well as emissions. Using four methanol trans-esterified biomass fuels of differing carbon chain length and degree of unsaturation, this paper provides insight into the effect that alternative fuels have on inter-cycle variability. The experiments were conducted with a heavy-duty, turbocharged, common-rail Cummins compression ignition engine. Combustion performance is reported in terms of the following key in-cylinder parameters: indicated mean effective pressure (IMEP), net heat release rate (NHRR), standard deviation of variability (StDev), coefficient of variation (CoV), peak pressure, peak pressure timing and maximum rate of pressure rise. A link is also established between the cyclic variability and the oxygen ratio, which is a good indicator of stoichiometry. The results show that the fatty acid structures did not have a significant effect on injection timing, injection duration, injection pressure, StDev of IMEP, or the timing of peak motoring and combustion pressures. However, a significant effect was noted on the premixed and diffusion combustion proportions, combustion peak pressure and maximum rate of pressure rise. Additionally, the boost pressure, IMEP and combustion peak pressure were found to be directly correlated with the oxygen ratio. The emission of particles correlates positively with the oxygen content in the fuel and in the air-fuel mixture, resulting in a higher total number of particles per unit of mass.

Relevance: 100.00%

Abstract:

Background & aim: To understand whether any change in gastric emptying (GE) is physiologically relevant, it is important to identify its variability. Information regarding the variability of GE in overweight and obese individuals is lacking. The aim of this study was to determine the reproducibility of GE in overweight and obese males.
Methods: Fifteen overweight and obese males [body mass index 30.3 (4.9) kg/m2] completed two identical GE tests 7 days apart. GE of a standard pancake breakfast was assessed by the 13C-octanoic acid breath test. Data are presented as mean (±SD).
Results: There were no significant differences in GE between test days (half time (t1/2): 179 (15) and 176 (19) min, p = 0.56; lag time (tlag): 108 (14) and 104 (8) min, p = 0.26). The mean intra-individual coefficient of variation was 7.9% for t1/2 and 7.5% for tlag. Based on these findings, to detect a treatment effect in a paired design with a power of 80% and α = 0.05, minimum mean effect sizes would need to be ≥14.4 min for t1/2 and ≥8.1 min for tlag.
Conclusions: These data show that GE is reproducible in overweight and obese males and provide the minimum mean effect sizes required to detect a hypothetical treatment effect in this population.
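
The abstract's minimum mean effect sizes can be reproduced from its reported figures with a standard paired-design power calculation. The sqrt(2) step from within-subject SD to the SD of paired differences is an assumption about the authors' method, but it recovers both 14.4 min and 8.1 min from the stated CVs, means and sample size.

```python
import math

def min_detectable_effect(cv, mean, n, z_alpha=1.96, z_beta=0.8416):
    """Minimum mean effect size detectable in a paired design with 80%
    power and alpha = 0.05. The within-subject SD comes from the
    intra-individual CV; taking the SD of paired differences as
    sqrt(2) times that is an assumed reconstruction of the paper's
    calculation."""
    sd_within = cv * mean
    sd_diff = math.sqrt(2.0) * sd_within
    return (z_alpha + z_beta) * sd_diff / math.sqrt(n)

# Figures from the abstract: n = 15; t1/2 means 179 and 176 min
# (CV 7.9%); tlag means 108 and 104 min (CV 7.5%)
mde_t_half = min_detectable_effect(cv=0.079, mean=177.5, n=15)
mde_t_lag = min_detectable_effect(cv=0.075, mean=106.0, n=15)
```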