Abstract:
The effects of particulate matter on the environment and public health have been widely studied in recent years. A number of studies in the medical field have tried to identify the specific effect on human health of particulate exposure, but agreement amongst these studies on the relative importance of particle size and origin with respect to health effects is still lacking. Nevertheless, air quality standards, like epidemiological attention, are moving towards a greater focus on smaller particles. Current air quality standards only regulate the mass of particulate matter less than 10 μm in aerodynamic diameter (PM10) and less than 2.5 μm (PM2.5). The most reliable method for measuring Total Suspended Particles (TSP), PM10, PM2.5 and PM1 is the gravimetric method, since it directly measures PM concentration, guaranteeing effective traceability to international standards. This technique, however, cannot capture short-term intra-day variations in the atmospheric parameters that influence ambient particle concentration and size distribution (emission strengths of particle sources, temperature, relative humidity, wind direction and speed, and mixing height), or in human activity patterns, which may also vary over periods considerably shorter than 24 hours. A continuous method to measure the number size distribution and total number concentration in the range 0.014–20 μm is the tandem system constituted by a Scanning Mobility Particle Sizer (SMPS) and an Aerodynamic Particle Sizer (APS). In this paper, an uncertainty budget model of the measurement of airborne particle number, surface area and mass size distributions is proposed and applied to several typical aerosol size distributions.
The estimation of such an uncertainty budget presents several difficulties due to i) the complexity of the measurement chain, and ii) the fact that the SMPS and APS can properly guarantee traceability to the International System of Measurements only in terms of number concentration. The surface area and mass concentration must instead be estimated on the basis of separately determined average density and particle morphology. Keywords: SMPS-APS tandem system, gravimetric reference method, uncertainty budget, ultrafine particles.
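The surface-area and mass distributions mentioned above are derived from the measured number distribution through an assumed average density and (spherical) particle morphology. A minimal sketch of that moment conversion, using hypothetical size bins and an assumed density (not values from the paper):

```python
import math

# Hypothetical number size distribution: midpoint diameters (um) and
# number concentrations (particles/cm^3) per size bin.
diameters_um = [0.05, 0.1, 0.5, 1.0, 2.5]
number_conc = [5000.0, 3000.0, 400.0, 50.0, 5.0]

rho = 1.6  # assumed average particle density, g/cm^3

surface_um2_cm3 = 0.0
mass_ug_m3 = 0.0
for d, n in zip(diameters_um, number_conc):
    surface_um2_cm3 += n * math.pi * d**2      # S = N * pi * d^2 (spheres)
    volume_um3 = n * math.pi * d**3 / 6.0      # V = N * pi * d^3 / 6
    # 1 um^3/cm^3 of particle volume at rho g/cm^3 equals rho ug/m^3 of mass
    mass_ug_m3 += volume_um3 * rho

print(f"surface area: {surface_um2_cm3:.1f} um^2/cm^3")
print(f"mass concentration: {mass_ug_m3:.1f} ug/m^3")
```

The d² and d³ weighting is why the uncertainty budget degrades for surface and mass: small relative errors in the measured diameters and in the assumed density propagate much more strongly into the higher moments than into the number concentration itself.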
Abstract:
Background: Exercise is widely promoted as a method of weight management, while the other health benefits are often ignored. The purpose of this study was to examine whether exercise-induced improvements in health are influenced by changes in body weight. Methods: Fifty-eight sedentary overweight/obese men and women (BMI 31.8 (SD 4.5) kg/m2) participated in a 12-week supervised aerobic exercise intervention (70% heart rate max, five times a week, 500 kcal per session). Body composition, anthropometric parameters, aerobic capacity, blood pressure and acute psychological response to exercise were measured at weeks 0 and 12. Results: The mean reduction in body weight was −3.3 (3.63) kg (p<0.01). However, 26 of the 58 participants failed to attain the predicted weight loss estimated from individuals’ exercise-induced energy expenditure. Their mean weight loss was only −0.9 (1.8) kg (p<0.01). Despite attaining a lower-than-predicted weight reduction, these individuals experienced significant increases in aerobic capacity (6.3 (6.0) ml/kg/min; p<0.01), and a decreased systolic (−6.00 (11.5) mm Hg; p<0.05) and diastolic blood pressure (−3.9 (5.8) mm Hg; p<0.01), waist circumference (−3.7 (2.7) cm; p<0.01) and resting heart rate (−4.8 (8.9) bpm, p<0.001). In addition, these individuals experienced an acute exercise-induced increase in positive mood. Conclusions: These data demonstrate that significant and meaningful health benefits can be achieved even in the presence of lower-than-expected exercise-induced weight loss. A less successful reduction in body weight does not undermine the beneficial effects of aerobic exercise. From a public health perspective, exercise should be encouraged and the emphasis on weight loss reduced.
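The "predicted weight loss estimated from individuals' exercise-induced energy expenditure" reduces to simple arithmetic over the prescription above (500 kcal per session, five sessions a week, 12 weeks). The 7700 kcal/kg energy equivalent of body tissue used below is a common approximation, not a figure taken from the study:

```python
# Predicted weight loss implied by the exercise prescription, assuming the
# conventional ~7700 kcal per kg of body tissue (an approximation, not a
# value reported in this study).
kcal_per_session = 500
sessions_per_week = 5
weeks = 12

total_kcal = kcal_per_session * sessions_per_week * weeks  # total expenditure
predicted_loss_kg = total_kcal / 7700.0

print(f"total expenditure: {total_kcal} kcal")
print(f"predicted weight loss: {predicted_loss_kg:.1f} kg")
```

Under this assumption the prescription predicts roughly 3.9 kg of loss, which makes clear why a mean observed loss of 3.3 kg, and especially the 0.9 kg subgroup mean, fell short of prediction.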
Abstract:
Intimate partner violence (IPV) is not only a problem for heterosexual couples. Although research in the area is beset by methodological and definitional problems, studies generally demonstrate that IPV also affects those who identify as non-heterosexual; that is, those sexualities that are typically categorized as lesbian, gay, bisexual, transgender, or intersex (LGBTI). IPV appears to be at least as prevalent in LGBTI relationships as it is in heterosexual couples, and follows similar patterns (e.g. Australian Research Centre on Sex, Health and Society 2006; Donovan et al. 2006; Chan 2005; Craft and Serovich 2005; Burke et al. 2002; Jeffries and Ball 2008; Kelly and Warshafsky 1987; Letellier 1994; Turrell 2000; Ristock 2003; Vickers 1996). There is, however, little in the way of specific community or social services support available to either victims or perpetrators of violence in same-sex relationships (see Vickers 1996). In addition, there are important differences in the experience of IPV between LGBTI and non-LGBTI victims, and even among LGBTI individuals; for example, among transgender populations (Chan 2005), and those who are HIV sero-positive (Craft and Serovich 2005). These different experiences of IPV include the use of HIV and the threat of “outing” a partner as tools of control, as just two examples (Jeffries and Ball 2008; Salyer 1999; WA Government 2008b). Such differences impact on how LGBTI victims respond to the violence, including whether or not and how they seek help, what services they are able to avail themselves of, and how likely they are to remain with, or return to, their violent partners (Burke et al. 2002). This chapter explores the prevalent heteronormative discourses that surround IPV, both within the academic literature, and in general social and government discourses. It seeks to understand how same-sex IPV remains largely invisible, and suggests that these dominant discourses play a major role in maintaining this invisibility. 
In many respects, it builds on work by a number of scholars who have begun to interrogate the criminal justice and social discourses surrounding violent crime, primarily sexual violence, and who problematize these discourses (see for example Carmody 2003; Carmody and Carrington 2000; Marcus 1992). It will begin by outlining these dominant discourses, and then problematize these by identifying some of the important differences between LGBTI IPV and IPV in heterosexual relationships. In doing so, this chapter will suggest some possible reasons for the silence regarding IPV in LGBTI relationships, and the effects that this can have on victims. Although an equally important area of research, and another point at which the limitations of dominant social discourses surrounding IPV can be brought to light, this chapter will not examine violence experienced by heterosexual men at the hands of their intimate female partners. Instead, it will restrict itself to IPV perpetrated within same-sex relationships.
Abstract:
Surveillance for invasive non-indigenous species (NIS) is an integral part of a quarantine system. Estimating the efficiency of a surveillance strategy relies on many uncertain parameters estimated by experts, such as the efficiency of its components in the face of the specific NIS, the ability of the NIS to inhabit different environments, and so on. Due to the importance of detecting an invasive NIS within a critical period of time, it is crucial that these uncertainties be accounted for in the design of the surveillance system. We formulate a detection model that takes into account, in addition to structured sampling for incursive NIS, incidental detection by untrained workers. We use info-gap theory for satisficing (not maximizing) the probability of detection, while at the same time maximizing the robustness to uncertainty. We demonstrate the trade-off between robustness to uncertainty and an increase in the required probability of detection. An empirical example based on the detection of Pheidole megacephala on Barrow Island demonstrates the use of info-gap analysis to select a surveillance strategy.
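The robustness/requirement trade-off can be illustrated with a deliberately simplified detection model (independent inspections with an uncertain per-inspection detection probability, under a fractional-error uncertainty set); this is a hypothetical sketch, not the paper's surveillance model:

```python
# Minimal info-gap robustness sketch (illustrative only).
# Detection probability for n independent inspections, each detecting with
# probability p: P = 1 - (1 - p)**n. The true p is uncertain; the horizon-
# alpha uncertainty set is U(alpha) = {p : p >= p0 * (1 - alpha)}.
def detection_prob(p, n):
    return 1.0 - (1.0 - p) ** n

def robustness(p0, n, p_required, steps=10000):
    """Largest uncertainty horizon alpha in [0, 1] such that even the
    worst-case p in U(alpha) still meets the required detection probability."""
    best = 0.0
    for i in range(steps + 1):
        alpha = i / steps
        worst_p = p0 * (1.0 - alpha)  # worst case: lowest p in the set
        if detection_prob(worst_p, n) >= p_required:
            best = alpha
        else:
            break
    return best

p0, n = 0.1, 30  # nominal per-inspection detection probability, inspections
for p_req in (0.5, 0.8, 0.95):
    print(p_req, robustness(p0, n, p_req))
```

Running this shows the trade-off the abstract describes: as the required probability of detection rises, the robustness to uncertainty in the expert-estimated parameters falls.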
Abstract:
Purpose. To explore the role of the neighborhood environment in supporting walking. Design. Cross-sectional study of 10,286 residents of 200 neighborhoods. Participants were selected using a stratified two-stage cluster design. Data were collected by mail survey (68.5% response rate). Setting. The Brisbane City Local Government Area, Australia, 2007. Subjects. Brisbane residents aged 40 to 65 years. Measures. Environmental: street connectivity, residential density, hilliness, tree coverage, bikeways, and street lights within a one-kilometer circular buffer from each resident’s home; and network distance to nearest river or coast, public transport, shop, and park. Walking: minutes in the previous week categorized as <30 minutes, ≥30 to <90 minutes, ≥90 to <150 minutes, ≥150 to <300 minutes, and ≥300 minutes. Analysis. The association between each neighborhood characteristic and walking was examined using multilevel multinomial logistic regression, and the model parameters were estimated using Markov chain Monte Carlo simulation. Results. After adjustment for individual factors, the likelihood of walking for more than 300 minutes (relative to <30 minutes) was highest in areas with the most connectivity (OR=1.93, 99% CI 1.32-2.80), the greatest residential density (OR=1.47, 99% CI 1.02-2.12), the least tree coverage (OR=1.69, 99% CI 1.13-2.51), the most bikeways (OR=1.60, 99% CI 1.16-2.21), and the most street lights (OR=1.50, 99% CI 1.07-2.11). The likelihood of walking for more than 300 minutes was also higher among those who lived closest to a river or the coast (OR=2.06, 99% CI 1.41-3.02). Conclusion. The likelihood of meeting (and exceeding) physical activity recommendations on the basis of walking was higher in neighborhoods with greater street connectivity and residential density, more street lights and bikeways, closer proximity to waterways, and less tree coverage.
Interventions targeting these neighborhood characteristics may lead to improved environmental quality as well as lower rates of overweight, obesity, and associated chronic disease.
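The odds ratios above are reported with 99% intervals. As a sketch of how a multinomial-logit coefficient (a log odds ratio) and its standard error convert to that form, the coefficient and SE below are hypothetical, not the study's estimates:

```python
import math

# Converting a logit coefficient (log odds ratio) and standard error into an
# odds ratio with a 99% confidence interval. beta and se are hypothetical.
beta, se = 0.657, 0.15   # assumed log-OR and standard error
z99 = 2.576              # standard normal quantile for a 99% interval

or_point = math.exp(beta)
ci_low = math.exp(beta - z99 * se)
ci_high = math.exp(beta + z99 * se)
print(f"OR = {or_point:.2f}, 99% CI {ci_low:.2f}-{ci_high:.2f}")
```

The exponentiation of interval endpoints is why the reported CIs are asymmetric around the odds ratio: the interval is symmetric on the log-odds scale, not the odds scale.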
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for the problems of memory detection and modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), and long memory is found present in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I.
These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of the data sets, and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.
The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second-order moments, seem to underestimate the long memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
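The detrended fluctuation analysis at the heart of the thesis can be sketched in its simplest form: standard DFA (the q = 2 member of the MF-DFA family) with linear detrending, applied here to synthetic white noise, for which the scaling exponent should be close to 0.5. MF-DFA generalises the final averaging step to q-th order moments over a range of q:

```python
import numpy as np

# Standard DFA: integrate the mean-removed series, detrend it locally in
# windows of size `scale`, and measure the RMS fluctuation at each scale.
def dfa_fluctuation(x, scale, order=1):
    profile = np.cumsum(x - np.mean(x))        # integrated "profile"
    n_windows = len(profile) // scale
    rms = []
    for w in range(n_windows):
        seg = profile[w * scale:(w + 1) * scale]
        t = np.arange(scale)
        coeffs = np.polyfit(t, seg, order)     # local polynomial trend
        detrended = seg - np.polyval(coeffs, t)
        rms.append(np.sqrt(np.mean(detrended ** 2)))
    return np.mean(rms)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)                  # white noise: expect H ~ 0.5
scales = np.array([16, 32, 64, 128, 256])
F = np.array([dfa_fluctuation(x, s) for s in scales])
# Scaling exponent = slope of log F(s) versus log s
hurst = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"estimated scaling exponent: {hurst:.2f}")
```

Exponents near 0.5 indicate no memory, below 0.5 short-range anti-persistence, and above 0.5 the long memory the thesis detects in the exchange-rate and electricity-price series.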
Abstract:
High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
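A toy version of the fitting problem above: model surface heights as a ratio of low-order polynomials and estimate the coefficients with a Levenberg-Marquardt loop. Real corneal modeling uses 2-D Zernike expansions in both numerator and denominator; plain radial polynomials, synthetic data, and a hand-rolled solver are used here only to keep the sketch short and self-contained:

```python
import numpy as np

# Rational model z(r) = P(r) / Q(r) with Q normalised to leading 1.
def rational(params, r):
    p0, p1, p2, q1 = params
    return (p0 + p1 * r + p2 * r**2) / (1.0 + q1 * r**2)

def levenberg_marquardt(z, r, params, n_iter=100, lam=1e-3):
    """Damped Gauss-Newton with a numeric Jacobian (illustrative)."""
    params = np.asarray(params, dtype=float)
    for _ in range(n_iter):
        res = rational(params, r) - z
        J = np.empty((r.size, params.size))
        for j in range(params.size):            # forward-difference Jacobian
            step = np.zeros_like(params)
            step[j] = 1e-7
            J[:, j] = (rational(params + step, r) - rational(params, r)) / 1e-7
        A = J.T @ J + lam * np.eye(params.size)  # damped normal equations
        delta = np.linalg.solve(A, -J.T @ res)
        new_params = params + delta
        if np.sum((rational(new_params, r) - z) ** 2) < np.sum(res ** 2):
            params, lam = new_params, lam * 0.5  # accept step, relax damping
        else:
            lam *= 10.0                          # reject step, damp harder
    return params

rng = np.random.default_rng(1)
r = np.linspace(0.0, 1.0, 200)
z_true = (1.0 + 0.5 * r - 0.3 * r**2) / (1.0 + 0.8 * r**2)
z = z_true + 0.001 * rng.standard_normal(r.size)  # noisy "measurements"

fit = levenberg_marquardt(z, r, [1.0, 0.0, 0.0, 0.0])
rms = np.sqrt(np.mean((rational(fit, r) - z) ** 2))
print(fit, rms)
```

The denominator is what gives rational functions their advantage over plain polynomial (Zernike) expansions of the same coefficient count: it can represent sharp local features of the surface that a polynomial numerator alone would need many more terms to capture.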
Abstract:
Background: The Malnutrition Screening Tool (MST) is a valid nutrition screening tool in the acute hospital setting but has not been assessed in residential aged care facilities. The aim of this secondary analysis was to determine whether the MST could be a useful nutrition screening tool when compared with a full nutrition assessment by Subjective Global Assessment (SGA) in the residential aged care setting. ----- Methods: Two hundred and eighty-five residents (29% male; mean age: 84 ± 9 years) from eight residential aged care facilities in Australia participated. A secondary analysis of data collected during a nutrition intervention study was conducted. The MST consists of two questions related to recent weight loss and appetite. While the MST was not specifically applied, weight loss and appetite information was available and an estimated MST score (0-5) calculated. Nutritional status was assessed by a research assistant trained in using SGA. ----- Results: Malnutrition prevalence was 42.8% (122 malnourished out of 285 residents). Compared to the SGA, the MST was an effective predictor of nutritional risk (sensitivity = 83.6%, specificity = 65.6%, positive predictive value = 0.65, negative predictive value = 0.84). ----- Conclusions: The components of the MST have acceptable sensitivity and specificity, suggesting it can play a valuable role in quickly identifying malnutrition risk in the residential aged care setting. Further prospective research using the MST against a broader array of objective and subjective nutritional parameters is required to confirm its validity as a screening tool in aged care settings.
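The reported diagnostic statistics can be reconstructed from the abstract's figures (285 residents, 122 malnourished by SGA, sensitivity 83.6%, specificity 65.6%); the 2x2 cell counts below are rounded from those proportions, not taken directly from the study data:

```python
# Approximate 2x2 confusion matrix implied by the reported proportions.
tp = 102   # malnourished, flagged by MST   (~0.836 * 122)
fn = 20    # malnourished, missed by MST
tn = 107   # well-nourished, not flagged    (~0.656 * 163)
fp = 56    # well-nourished, flagged

sensitivity = tp / (tp + fn)   # P(flagged | malnourished)
specificity = tn / (tn + fp)   # P(not flagged | well-nourished)
ppv = tp / (tp + fp)           # P(malnourished | flagged)
npv = tn / (tn + fn)           # P(well-nourished | not flagged)
print(sensitivity, specificity, ppv, npv)
```

This also makes the prevalence dependence visible: sensitivity and specificity are properties of the tool, but the PPV of 0.65 and NPV of 0.84 reflect the 42.8% malnutrition prevalence in this setting and would shift in a population with different prevalence.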
Abstract:
Osteoporosis is a disease characterized by low bone mass and micro-architectural deterioration of bone tissue, with a consequent increase in bone fragility and susceptibility to fracture. Osteoporosis affects over 200 million people worldwide, with an estimated 1.5 million fractures annually in the United States alone, and with attendant costs exceeding $10 billion per annum. Osteoporosis reduces bone density through a series of structural changes to the honeycomb-like trabecular bone structure (micro-structure). The reduced bone density, coupled with the microstructural changes, results in significant loss of bone strength and increased fracture risk. Vertebral compression fractures are the most common type of osteoporotic fracture and are associated with pain, increased thoracic curvature, reduced mobility, and difficulty with self-care. Surgical interventions, such as kyphoplasty or vertebroplasty, are used to treat osteoporotic vertebral fractures by restoring vertebral stability and alleviating pain. These minimally invasive procedures involve injecting bone cement into the fractured vertebrae. The techniques are still relatively new and, while initial results are promising, with the procedures relieving pain in 70-95% of cases, medium-term investigations are now indicating an increased risk of adjacent-level fracture following the procedure. With the aging population, understanding and treatment of osteoporosis is an increasingly important public health issue in developed Western countries. The aim of this study was to investigate the biomechanics of spinal osteoporosis and osteoporotic vertebral compression fractures by developing multi-scale computational Finite Element (FE) models of both healthy and osteoporotic vertebral bodies. The multi-scale approach included the overall vertebral body anatomy, as well as a detailed representation of the internal trabecular microstructure.
This novel, multi-scale approach overcame limitations of previous investigations by allowing simultaneous investigation of the mechanics of the trabecular micro-structure as well as overall vertebral body mechanics. The models were used to simulate the progression of osteoporosis, the effect of different loading conditions on vertebral strength and stiffness, and the effects of vertebroplasty on vertebral and trabecular mechanics. The model development process began with an individual trabecular strut model using 3D beam elements, which was used as the building block for lattice-type, structural trabecular bone models, which were in turn incorporated into the vertebral body models. At each stage of model development, model predictions were compared to analytical solutions and in-vitro data from the existing literature. This incremental process provided confidence in the predictions of each model before incorporation into the overall vertebral body model. The trabecular bone model, vertebral body model and vertebroplasty models were validated against in-vitro data from a series of compression tests performed using human cadaveric vertebral bodies. Firstly, trabecular bone samples were acquired and morphological parameters for each sample were measured using high-resolution micro-computed tomography (micro-CT). Apparent mechanical properties for each sample were then determined using uni-axial compression tests. Bone tissue properties were inversely determined using voxel-based FE models based on the micro-CT data. Specimen-specific trabecular bone models were developed and the predicted apparent stiffness and strength were compared to the experimentally measured apparent stiffness and strength of the corresponding specimen. Following the trabecular specimen tests, a series of 12 whole cadaveric vertebrae were divided into treated and non-treated groups and vertebroplasty performed on the specimens of the treated group.
The vertebrae in both groups underwent clinical-CT scanning and destructive uniaxial compression testing. Specimen specific FE vertebral body models were developed and the predicted mechanical response compared to the experimentally measured responses. The validation process demonstrated that the multi-scale FE models comprising a lattice network of beam elements were able to accurately capture the failure mechanics of trabecular bone; and a trabecular core represented with beam elements enclosed in a layer of shell elements to represent the cortical shell was able to adequately represent the failure mechanics of intact vertebral bodies with varying degrees of osteoporosis. Following model development and validation, the models were used to investigate the effects of progressive osteoporosis on vertebral body mechanics and trabecular bone mechanics. These simulations showed that overall failure of the osteoporotic vertebral body is initiated by failure of the trabecular core, and the failure mechanism of the trabeculae varies with the progression of osteoporosis; from tissue yield in healthy trabecular bone, to failure due to instability (buckling) in osteoporotic bone with its thinner trabecular struts. The mechanical response of the vertebral body under load is highly dependent on the ability of the endplates to deform to transmit the load to the underlying trabecular bone. The ability of the endplate to evenly transfer the load through the core diminishes with osteoporosis. Investigation into the effect of different loading conditions on the vertebral body found that, because the trabecular bone structural changes which occur in osteoporosis result in a structure that is highly aligned with the loading direction, the vertebral body is consequently less able to withstand non-uniform loading states such as occurs in forward flexion. 
Changes in vertebral body loading due to disc degeneration were simulated, but proved to have little effect on osteoporotic vertebra mechanics. Conversely, differences in vertebral body loading between simulated in-vivo conditions (uniform endplate pressure) and in-vitro conditions (where the vertebral endplates are rigidly cemented) had a dramatic effect on the predicted vertebral mechanics. This investigation suggested that in-vitro loading using bone cement potting of both endplates has major limitations in its ability to represent vertebral body mechanics in-vivo. Lastly, an FE investigation into the biomechanical effect of vertebroplasty was performed. The results of this investigation demonstrated that the effect of vertebroplasty on overall vertebra mechanics is strongly governed by the cement distribution achieved within the trabecular core. In agreement with a recent study, the models predicted that vertebroplasty cement distributions which do not form one continuous mass contacting both endplates have little effect on vertebral body stiffness or strength. In summary, this work presents the development of a novel, multi-scale Finite Element model of the osteoporotic vertebral body, which provides a powerful new tool for investigating the mechanics of osteoporotic vertebral compression fractures at the trabecular bone micro-structural level, and at the vertebral body level.
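The failure-mode transition the simulations identified (tissue yield in healthy bone, elastic buckling once struts thin) can be illustrated with the classical Euler comparison for a pin-ended cylindrical strut: whichever of the yield load and buckling load is smaller governs. The tissue properties below are generic textbook-scale values for illustration, not measurements from this study:

```python
import math

# Compare yield load (sigma_y * A) with the Euler buckling load
# (pi^2 * E * I / L^2) for a pin-ended cylindrical strut of varying radius.
E = 10e9        # assumed tissue elastic modulus, Pa
sigma_y = 60e6  # assumed tissue yield stress, Pa
L = 1e-3        # strut length, 1 mm

def failure_mode(radius):
    area = math.pi * radius**2
    inertia = math.pi * radius**4 / 4.0         # second moment of area
    p_yield = sigma_y * area                    # load at tissue yield
    p_buckle = math.pi**2 * E * inertia / L**2  # Euler buckling load
    return "yield" if p_yield < p_buckle else "buckling"

for radius_um in (100, 60, 30):                 # progressive thinning
    print(radius_um, "um ->", failure_mode(radius_um * 1e-6))
```

Because the buckling load scales with r⁴ while the yield load scales with r², thinning the strut drives the structure across the transition from yield-dominated to buckling-dominated failure, matching the osteoporosis progression described above.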
Abstract:
This thesis details a methodology to estimate urban stormwater quality based on a set of easy-to-measure physico-chemical parameters. These parameters can be used as surrogate parameters to estimate other key water quality parameters. The key pollutants considered in this study are nitrogen compounds, phosphorus compounds and solids. The use of surrogate parameter relationships to evaluate urban stormwater quality will reduce the cost of monitoring, giving scientists the added capability to generate a large amount of data for more rigorous analysis of key urban stormwater quality processes, namely, pollutant build-up and wash-off. This in turn will assist in the development of more stringent stormwater quality mitigation strategies. The research methodology was based on a series of field investigations, laboratory testing and data analysis. Field investigations were conducted to collect pollutant build-up and wash-off samples from residential roads and roof surfaces. Past research has identified that these impervious surfaces are the primary pollutant sources to urban stormwater runoff. A specially designed vacuum system and rainfall simulator were used in the collection of pollutant build-up and wash-off samples. The collected samples were tested for a range of physico-chemical parameters. Data analysis was conducted using both univariate and multivariate data analysis techniques. Analysis of build-up samples showed that pollutant loads accumulated on road surfaces are higher than the pollutant loads on roof surfaces. Furthermore, it was found that the fraction of solids smaller than 150 μm is the most polluted particle size fraction in solids build-up on both road and roof surfaces. The analysis of wash-off data confirmed that the simulated wash-off process adopted for this research agrees well with the general understanding of the wash-off process on urban impervious surfaces.
The observed pollutant concentrations in wash-off from road surfaces were different to the pollutant concentrations in wash-off from roof surfaces. Therefore, firstly, the identification of surrogate parameters was undertaken separately for road and roof surfaces. Secondly, a common set of surrogate parameter relationships was identified for both surfaces together to evaluate urban stormwater quality. Surrogate parameters were identified for nitrogen, phosphorus and solids separately. Electrical conductivity (EC), total organic carbon (TOC), dissolved organic carbon (DOC), total suspended solids (TSS), total dissolved solids (TDS), total solids (TS) and turbidity (TTU) were selected as the relatively easy-to-measure parameters. Consequently, surrogate parameters for nitrogen and phosphorus were identified from the set of easy-to-measure parameters for both road surfaces and roof surfaces. Additionally, surrogate parameters for TSS, TDS and TS, which are key indicators of solids, were obtained from EC and TTU, which can be direct field measurements. The regression relationships developed between the surrogate parameters and each key parameter of interest took a similar form for road and roof surfaces, namely simple linear regression equations. The identified relationships for road surfaces were DTN-TDS:DOC, TP-TS:TOC, TSS-TTU, TDS-EC and TS-TTU:EC. The identified relationships for roof surfaces were DTN-TDS and TS-TTU:EC. Some of the relationships developed had a higher confidence interval whilst others had a relatively low confidence interval. The relationships obtained for DTN-TDS, DTN-DOC, TP-TS and TS-EC for road surfaces demonstrated good near-site portability potential. Currently, best management practices are focussed on providing treatment measures for stormwater runoff at catchment outlets, where separation of road and roof runoff is not found.
In this context, it is important to find a common set of surrogate parameter relationships for road surfaces and roof surfaces to evaluate urban stormwater quality. Consequently, the DTN-TDS, TS-EC and TS-TTU relationships were identified as the common relationships capable of providing measurements of DTN and TS irrespective of the surface type.
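The surrogate-parameter idea above reduces to fitting simple linear regressions such as TS from EC, a parameter measurable directly in the field. A minimal sketch with hypothetical data points (the thesis's actual samples and coefficients are not reproduced here):

```python
import numpy as np

# Hypothetical paired field measurements, for illustration only.
ec = np.array([50.0, 80.0, 120.0, 200.0, 310.0, 450.0])   # EC, uS/cm
ts = np.array([40.0, 62.0, 95.0, 150.0, 228.0, 330.0])    # TS, mg/L

slope, intercept = np.polyfit(ec, ts, 1)       # least-squares TS = a*EC + b
predicted = slope * ec + intercept
ss_res = np.sum((ts - predicted) ** 2)
ss_tot = np.sum((ts - np.mean(ts)) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"TS = {slope:.3f} * EC + {intercept:.1f}, R^2 = {r_squared:.3f}")
```

Once such a relationship is calibrated, a single field EC reading stands in for a laboratory TS determination, which is exactly the monitoring-cost saving the thesis argues for; the near-site portability claim is about how well coefficients fitted at one site transfer to another.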
Abstract:
Background: Reducing rates of healthcare acquired infection has been identified by the Australian Commission on Safety and Quality in Health Care as a national priority. One of the goals is the prevention of central venous catheter-related bloodstream infection (CR-BSI). At least 3,500 cases of CR-BSI occur annually in Australian hospitals, resulting in unnecessary deaths and costs to the healthcare system between $25.7 and $95.3 million. Two approaches to preventing these infections have been proposed: use of antimicrobial catheters (A-CVCs); or a catheter care and management ‘bundle’. Given finite healthcare budgets, decisions about the optimal infection control policy require consideration of the effectiveness and value for money of each approach. Objectives: The aim of this research is to use a rational economic framework to inform efficient infection control policy relating to the prevention of CR-BSI in the intensive care unit. It addresses three questions relating to decision-making in this area: 1. Is additional investment in activities aimed at preventing CR-BSI an efficient use of healthcare resources? 2. What is the optimal infection control strategy from amongst the two major approaches that have been proposed to prevent CR-BSI? 3. What uncertainty is there in this decision and can a research agenda to improve decision-making in this area be identified? Methods: A decision analytic model-based economic evaluation was undertaken to identify an efficient approach to preventing CR-BSI in Queensland Health intensive care units. A Markov model was developed in conjunction with a panel of clinical experts which described the epidemiology and prognosis of CR-BSI. The model was parameterised using data systematically identified from the published literature and extracted from routine databases. The quality of data used in the model and its validity to clinical experts and sensitivity to modelling assumptions was assessed. 
Two separate economic evaluations were conducted. The first evaluation compared all commercially available A-CVCs alongside uncoated catheters to identify which was cost-effective for routine use. The uncertainty in this decision was estimated along with the value of collecting further information to inform the decision. The second evaluation compared the use of A-CVCs to a catheter care bundle. We were unable to estimate the cost of the bundle because it is unclear what the full resource requirements are for its implementation, and what the value of these would be in an Australian context. As such, we undertook a threshold analysis to identify the cost and effectiveness thresholds at which a hypothetical bundle would dominate the use of A-CVCs under various clinical scenarios. Results: In the first evaluation of A-CVCs, the findings from the baseline analysis, in which uncertainty is not considered, show that the use of any of the four A-CVCs will result in health gains accompanied by cost-savings. The MR catheters dominate the baseline analysis, generating 1.64 QALYs and cost-savings of $130,289 per 1,000 catheters. With uncertainty, and based on current information, the MR catheters remain the optimal decision and return the highest average net monetary benefits ($948 per catheter) relative to all other catheter types. This conclusion was robust to all scenarios tested; however, the probability of error in this conclusion is high, 62% in the baseline scenario. Using a value of $40,000 per QALY, the expected value of perfect information associated with this decision is $7.3 million. An analysis of the expected value of perfect information for individual parameters suggests that it may be worthwhile for future research to focus on providing better estimates of the mortality attributable to CR-BSI and the effectiveness of both SPC and CH/SSD (int/ext) catheters.
In the second evaluation, of the catheter care bundle relative to A-CVCs, the results that do not consider uncertainty indicate that a bundle must achieve a relative risk of CR-BSI of at least 0.45 to be cost-effective relative to MR catheters. If the bundle can reduce rates of infection from 2.5% to effectively zero, it is cost-effective relative to MR catheters if national implementation costs are less than $2.6 million ($56,610 per ICU). If the bundle can achieve a relative risk of 0.34 (comparable to that reported in the literature), it is cost-effective relative to MR catheters if costs over an 18-month period are below $613,795 nationally ($13,343 per ICU). Once uncertainty in the decision is considered, the cost threshold for the bundle increases to $2.2 million. Therefore, if each of the 46 Level III ICUs could implement an 18-month catheter care bundle for less than $47,826, this approach would be cost-effective relative to A-CVCs. However, the uncertainty is substantial, and the probability of error in concluding that the bundle is the cost-effective approach at a cost of $2.2 million is 89%. Conclusions: This work highlights that infection control to prevent CR-BSI is an efficient use of healthcare resources in the Australian context. If there is no further investment in infection control, an opportunity cost is incurred: the potential for a more efficient healthcare system. Minocycline/rifampicin catheters are the optimal choice of antimicrobial catheter for routine use in Australian Level III ICUs; however, a catheter care bundle as effective as those used in the large United States studies would be preferred over the catheters if it could be implemented for less than $47,826 per Level III ICU. Uncertainty is very high in this decision and arises from multiple sources.
There are likely greater costs to this uncertainty for A-CVCs, which may carry hidden costs, than for a catheter care bundle, which is more likely to provide indirect benefits to clinical practice and patient safety. Research into the mortality attributable to CR-BSI, the effectiveness of SPC and CH/SSD (int/ext) catheters, and the cost and effectiveness of a catheter care bundle in Australia should be prioritised to reduce uncertainty in this decision. This thesis provides the economic evidence to inform one area of infection control, but there are many other infection control decisions for which information about the cost-effectiveness of competing interventions does not exist. This work highlights some of the challenges and benefits of generating and using economic evidence for infection control decision-making, and provides support for commissioning more research into the cost-effectiveness of infection control.
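The decision rules summarised above rest on the net monetary benefit (NMB) framework: at a willingness-to-pay of λ per QALY, an option's NMB is λ × QALYs − cost, the preferred option maximises expected NMB, and the probability of error is the share of probabilistic draws in which that option does not have the highest NMB. A minimal sketch of the mechanics, using the reported per-catheter baseline values but a purely illustrative Gaussian uncertainty model (this is not the thesis's probabilistic analysis, and the spread is an assumption):

```python
import random

WTP = 40_000  # willingness-to-pay per QALY (A$), the threshold used in the thesis

def nmb(qalys, cost, wtp=WTP):
    """Net monetary benefit: monetised health gain minus cost."""
    return wtp * qalys - cost

# Per-catheter baseline values scaled from the reported 1.64 QALYs and
# $130,289 savings per 1,000 catheters (negative cost = saving).
baseline = {"MR": (0.00164, -130.29), "uncoated": (0.0, 0.0)}

# Illustrative probabilistic draws: perturb the MR QALY gain with a
# made-up standard deviation to mimic parameter uncertainty.
random.seed(1)
draws = []
for _ in range(10_000):
    d = {}
    for name, (q, c) in baseline.items():
        q_draw = q + random.gauss(0, 0.005) if name == "MR" else q  # hypothetical spread
        d[name] = nmb(q_draw, c)
    draws.append(d)

best = max(baseline, key=lambda n: sum(d[n] for d in draws) / len(draws))
p_error = sum(1 for d in draws if d[best] < max(d.values())) / len(draws)
print(best, round(p_error, 2))
```

The same machinery underlies the threshold analysis: a hypothetical bundle is cost-effective whenever its expected NMB exceeds that of the best catheter, which pins down the implementation-cost ceiling quoted above.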
Abstract:
An algorithm based on the concept of Kalman filtering is proposed in this paper for the estimation of power system signal attributes such as amplitude, frequency and phase angle. The technique can be used in protection relays, digital AVRs, DSTATCOMs, FACTS devices and other power electronics applications. It is particularly suitable for the integration of distributed generation sources into power grids, where fast and accurate detection of small variations in signal attributes is needed. Practical considerations, such as the effects of noise and higher-order harmonics and the computational cost of the algorithm, are examined in the paper. Several computer simulations are presented to highlight the usefulness of the proposed approach. Simulation results show that the proposed technique can simultaneously estimate the signal attributes even when the signal is highly distorted by non-linear loads and noise.
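As a rough illustration of the idea (not the paper's algorithm, whose formulation also tracks frequency), a two-state linear Kalman filter can estimate the amplitude and phase of a sinusoid at a known nominal frequency by tracking its in-phase and quadrature components; the function name and all tuning constants below are assumptions:

```python
import math

def kalman_sinusoid(samples, fs, f0=50.0, q=1e-4, r=0.1):
    """Track the in-phase/quadrature components x = [a, b] of
    y(t) ~ a*cos(w t) + b*sin(w t) with a 2-state linear Kalman filter.
    Amplitude and phase follow from the state; frequency estimation
    would require a nonlinear (extended) formulation.
    """
    w = 2 * math.pi * f0
    a, b = 0.0, 0.0                       # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    for n, y in enumerate(samples):
        t = n / fs
        h = (math.cos(w * t), math.sin(w * t))   # measurement row H
        # predict: random-walk state model, P <- P + q*I
        P[0][0] += q; P[1][1] += q
        # innovation variance s = H P H^T + r and gain K = P H^T / s
        s = h[0] * (P[0][0] * h[0] + P[0][1] * h[1]) + \
            h[1] * (P[1][0] * h[0] + P[1][1] * h[1]) + r
        k0 = (P[0][0] * h[0] + P[0][1] * h[1]) / s
        k1 = (P[1][0] * h[0] + P[1][1] * h[1]) / s
        e = y - (h[0] * a + h[1] * b)     # innovation
        a += k0 * e
        b += k1 * e
        # covariance update P <- (I - K H) P
        P = [[P[0][0] - k0 * (h[0] * P[0][0] + h[1] * P[1][0]),
              P[0][1] - k0 * (h[0] * P[0][1] + h[1] * P[1][1])],
             [P[1][0] - k1 * (h[0] * P[0][0] + h[1] * P[1][0]),
              P[1][1] - k1 * (h[0] * P[0][1] + h[1] * P[1][1])]]
    amplitude = math.hypot(a, b)
    phase = math.atan2(-b, a)             # y = A*cos(w t + phase)
    return amplitude, phase
```

Because each sample updates the state directly, the estimate responds within a fraction of a cycle to small attribute changes, which is the property the paper exploits for fast detection.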
Abstract:
Background: Altered mechanical properties of the heel pad have been implicated in the development of plantar heel pain. However, the in vivo properties of the heel pad during gait remain largely unexplored in this cohort. The aim of the current study was to characterise the bulk compressive properties of the heel pad in individuals with and without plantar heel pain while walking.
Methods: The sagittal thickness and axial compressive strain of the heel pad were estimated in vivo from dynamic lateral foot radiographs acquired from nine subjects with unilateral plantar heel pain and an equivalent number of matched controls, while walking at their preferred speed. Compressive stress was derived from simultaneously acquired plantar pressure data. Principal viscoelastic parameters of the heel pad, including peak strain, secant modulus and energy dissipation (hysteresis), were estimated from the resulting stress–strain curves.
Findings: There was no significant difference in loaded and unloaded heel pad thickness, peak stress, peak strain, or secant and tangent modulus between subjects with and without heel pain. However, the fat pad of symptomatic feet had a significantly lower energy dissipation ratio (0.55 ± 0.17 vs. 0.69 ± 0.08) than that of asymptomatic feet (P < .05).
Interpretation: Plantar heel pain is characterised by a reduced energy dissipation ratio of the heel pad when measured in vivo and under physiologically relevant strain rates.
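The energy dissipation (hysteresis) ratio reported above is the fraction of the loading energy that is not recovered on unloading, i.e. the area enclosed by the stress–strain loop divided by the area under the loading branch. A minimal sketch of that calculation (the function name and the trapezoidal scheme are illustrative, not the study's analysis code):

```python
def energy_dissipation_ratio(strain, stress):
    """Energy dissipation (hysteresis) ratio of one load-unload cycle:
    (loading energy - unloading energy) / loading energy, with each
    branch's energy taken as the trapezoidal area under stress vs strain.
    Assumes `strain` rises to a single peak and then returns to zero.
    """
    peak = max(range(len(strain)), key=strain.__getitem__)

    def area(xs, ys):
        # trapezoidal integral of stress d(strain)
        return sum((ys[i] + ys[i + 1]) / 2 * (xs[i + 1] - xs[i])
                   for i in range(len(xs) - 1))

    loading = area(strain[:peak + 1], stress[:peak + 1])
    unloading = -area(strain[peak:], stress[peak:])  # strain decreasing
    return (loading - unloading) / loading
```

For a linear loading branch and a quadratic unloading branch over the same strain range, for example, the ratio evaluates to about one third; a lower ratio, as seen in the symptomatic feet, means the pad returns more of the stored energy instead of dissipating it.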