671 results for Measuring methods
Abstract:
Background The wellness construct has application in a number of fields including education, healthcare and counseling, particularly with regard to female adolescents. The effective measurement of wellness in adolescents can assist researchers and practitioners in identifying the lifestyle behaviors in which adolescents are lacking. Behavior change interventions can then be designed which directly aid in the promotion of these areas. Methods The 5-Factor Wellness Inventory (designed to measure the Indivisible Self model of wellness) is a popular instrument for measuring the broad aspects of wellness amongst adolescents. The instrument comprises 97 items contributing to 17 subscales, five dimension scores, four context scores, a total wellness score, and a life satisfaction index. This investigation evaluated the test-retest (intra-rater) reliability of the 5F-Wel instrument in repeated assessments (seven days apart) among adolescent females aged 12-14 years. Percentages of exact agreement for individual items, and the number of respondents who scored within ±5, ±7.5 and ±10 points for total wellness and the five summary dimension scores, were calculated. Results Overall, 46 (95.8%) participants responded with complete data and were included in the analysis. Item agreement ranged from 47.8% to 100% across the 97 items (median 69.9%, interquartile range 60.9%-73.9%). The percentage of respondents who scored within ±5, ±7.5 and ±10 points for total wellness at the reassessment was 87.0%, 97.8% and 97.8% respectively. The percentage of respondents who scored within ±5, ±7.5 and ±10 points for the dimension scores at the reassessment ranged between 54.3-76.1%, 78.3-95.7% and 89.1-95.7% respectively across the five dimensions. Conclusions These findings suggest there was considerable variation in agreement between the two assessments on some individual items. However, the total wellness score and the five dimension summary scores remained comparatively stable between assessments.
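As a hedged illustration of how such test-retest agreement statistics can be computed (synthetic data and illustrative function names only; this is not the authors' analysis code):

```python
import numpy as np

def item_exact_agreement(test, retest):
    """Percentage of respondents giving an identical answer to each item.

    test, retest: arrays of shape (n_respondents, n_items).
    Returns one agreement percentage per item.
    """
    test, retest = np.asarray(test), np.asarray(retest)
    return 100.0 * (test == retest).mean(axis=0)

def within_threshold_agreement(score_test, score_retest, thresholds=(5.0, 7.5, 10.0)):
    """Percentage of respondents whose summary score changed by no more
    than each threshold between the two assessments."""
    diff = np.abs(np.asarray(score_test) - np.asarray(score_retest))
    return {t: 100.0 * (diff <= t).mean() for t in thresholds}

# Purely illustrative random data standing in for 46 respondents x 97 items.
rng = np.random.default_rng(0)
test = rng.integers(1, 5, size=(46, 97), endpoint=True)
retest = np.where(rng.random((46, 97)) < 0.7, test,
                  rng.integers(1, 5, size=(46, 97), endpoint=True))
item_pct = item_exact_agreement(test, retest)
print(np.median(item_pct), np.percentile(item_pct, [25, 75]))
print(within_threshold_agreement(test.mean(axis=1) * 25, retest.mean(axis=1) * 25))
```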
Abstract:
Design Science is the process of solving ‘wicked problems’ through designing, developing, instantiating, and evaluating novel solutions (Hevner, March, Park and Ram, 2004). Wicked problems are characterised by agent finitude in combination with problem complexity and normative constraint (Farrell and Hooker, 2013). In Information Systems Design Science, determining that problems are ‘wicked’ differentiates Design Science Research (DSR) from Solutions Engineering (Winter, 2008) and is a necessary part of proving the relevance of Information Systems Design Science research (Hevner, 2007; Iivari, 2007). Problem complexity is characterised as many problem components with nested, dependent and co-dependent relationships interacting through multiple feedback and feed-forward loops. Farrell and Hooker (2013) specifically state that for wicked problems “it will often be impossible to disentangle the consequences of specific actions from those of other co-occurring interactions”. This paper discusses the application of an Enterprise Information Architecture modelling technique to disentangle the wicked problem complexity for one case. It proposes that such a modelling technique can be applied to other wicked problems and can lay the foundations for proving relevance to DSR, provide solution pathways for artefact development, and help substantiate those elements required to produce Design Theory.
Abstract:
In the Australian sugar industry, sugar cane is smashed into a straw-like material by hammers before being squeezed between large rollers to extract the sugar juice. The straw-like material is initially called prepared cane and then bagasse as it passes through successive roller milling units. The sugar cane materials are highly compressible, have high moisture content, are fibrous, and resemble some peat soils in both appearance and mechanical behaviour. A promising avenue for improving the performance of milling units, to increase throughput and juice extraction and to reduce costs, is modelling of the crushing process. To achieve this, it is believed necessary that milling models should be able to reproduce measured bagasse behaviour. This investigation sought to measure the mechanical (compression, shear, and volume) behaviour of prepared cane and bagasse, to identify limitations in currently used material models, and to progress towards a material model that can predict bagasse behaviour adequately. Tests were carried out using modified direct shear test equipment and procedures across most of the large range of pressures occurring in the crushing process. The investigation included an assessment of the performance of the direct shear test for measuring bagasse behaviour. The assessment was carried out using finite element modelling. It was shown that prepared cane and bagasse exhibited critical state behaviour similar to that of soils, and the magnitudes of the material parameters were determined. The measurements were used to identify desirable features for a bagasse material model. It was shown that currently used material models had major limitations for reproducing bagasse behaviour. A model from the soil mechanics literature was modified and shown to achieve improved reproduction while using magnitudes of material parameters that better reflected the measured values. Finally, a typical three-roller mill pressure feeder configuration was modelled. The predictions and limitations were assessed by comparison to measured data from a sugar factory.
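For orientation, the critical state concept referred to is conventionally expressed in soil mechanics by two relations linking deviator stress q, mean effective stress p' and specific volume v at large shear strains (a standard textbook form only; the abstract does not give the authors' exact formulation):

\[
q_{cs} = M\,p'_{cs}, \qquad v_{cs} = \Gamma - \lambda \ln p'_{cs},
\]

where M, \Gamma and \lambda are the kinds of material parameters whose magnitudes the investigation reports for prepared cane and bagasse.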
Abstract:
Variations in the treatment of patients with similar symptoms across different hospitals substantially impact the quality and cost of healthcare. Consequently, it is important to understand the similarities and differences between the practices of different hospitals. This paper presents a case study on the application of process mining techniques to measure and quantify the differences in the treatment of patients presenting with chest pain symptoms across four South Australian hospitals. Our case study focuses on cross-organisational benchmarking of processes and their performance. Techniques such as clustering, process discovery, performance analysis, and scientific workflows were applied to facilitate such comparative analyses. Lessons learned in overcoming unique challenges in cross-organisational process mining, such as ensuring population comparability, data granularity comparability, and experimental repeatability, are also presented.
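A minimal sketch of the kind of cross-organisational performance comparison described, assuming a flat event log with hypothetical columns case_id, hospital, activity and timestamp (illustrative pandas code, not the authors' process-mining pipeline):

```python
import pandas as pd

# Hypothetical flat event log: one row per event.
log = pd.DataFrame({
    "case_id":   ["A1", "A1", "A1", "B7", "B7"],
    "hospital":  ["H1", "H1", "H1", "H2", "H2"],
    "activity":  ["Triage", "ECG", "Discharge", "Triage", "Discharge"],
    "timestamp": pd.to_datetime([
        "2024-01-01 08:00", "2024-01-01 08:40", "2024-01-01 12:00",
        "2024-01-02 09:00", "2024-01-02 10:30",
    ]),
})

# Case duration = time from first to last event of each case.
case_durations = (
    log.sort_values("timestamp")
       .groupby(["hospital", "case_id"])["timestamp"]
       .agg(lambda ts: ts.iloc[-1] - ts.iloc[0])
)

# Benchmark hospitals on median case duration and case volume.
print(case_durations.groupby(level="hospital").agg(["median", "count"]))

# Compare the control-flow variants (activity sequences) seen at each hospital.
variants = (
    log.sort_values("timestamp")
       .groupby(["hospital", "case_id"])["activity"]
       .agg(" -> ".join)
       .groupby(level="hospital")
       .value_counts()
)
print(variants)
```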
Abstract:
Introduction Acidosis induced by intense exercise occurs from the accumulation of hydrogen ions as by-products of anaerobic metabolism. Oral ingestion of β-alanine, a limiting precursor of the intracellular physiochemical buffer carnosine in skeletal muscle, may counteract any detrimental effect of acidosis and benefit performance. The aim of this study was to investigate the effect of β-alanine as an ergogenic aid during high intensity exercise performance in healthy males. Methods Five males ingested either β-alanine (BAl) (4.8 g.d-1 for 4 wk, then 6.4 g.d-1 for 2 wk) or placebo (Pl) (CaCO3) in a crossover design with a 6 wk washout between conditions. Following supplementation, participants performed two different intense exercise protocols over consecutive days. On the first day a repeated sprint ability (RSA) test of 5 x 6 s sprints, with 24 s rest periods, was performed. On the second day a cycling capacity test measuring the time to exhaustion (TTE) was performed at 110% of the maximum workload achieved in a pre-supplementation maximal test (CCT110%). Non-invasive quantification of carnosine in the soleus and gastrocnemius was performed with magnetic resonance spectroscopy prior to and following each supplementation period. Time to fatigue (CCT110%), peak and mean power (RSA), blood pH, and plasma lactate were measured. Results Muscle carnosine concentration did not differ between conditions prior to supplementation and increased by 18% in the soleus and 26% in the gastrocnemius after 6 wk of β-alanine supplementation. There was no difference in the measured performance variables during the RSA test (peak and average power output). TTE during the CCT110% was significantly enhanced following the ingestion of BAl (155 ± 19.03 s) compared to Pl (134 ± 26.16 s). No changes were observed in blood pH during either exercise protocol or during the recovery from exercise. Plasma lactate in the BAl condition was significantly higher than in the Pl condition only from the 15th minute following exercise during the CCT110%. [Fig. 1: Changes in carnosine concentration in the gastrocnemius prior to and following 6 weeks of chronic supplementation with placebo and β-alanine. Values expressed as mean; * p<0.05 vs Pl at 6 weeks, # p<0.05 vs pre-supplementation.] Conclusion/Discussion The greater muscle carnosine content following 6 wk of β-alanine supplementation enhanced the potential for intracellular buffering capacity. However, this only translated into enhanced performance during the CCT110% high intensity cycling exercise protocol, with no change observed during the RSA test. The absence of differences in post-exercise and recovery plasma lactate and blood pH indicates that 6 wk of β-alanine supplementation has no effect on anaerobic metabolism during multiple-bout high intensity exercise. However, the changes in plasma lactate during recovery from the CCT110% suggest that β-alanine supplementation may affect anaerobic metabolism during single-bout high intensity exercise.
Abstract:
Thin plate spline finite element methods are used to fit a surface to an irregularly scattered dataset [S. Roberts, M. Hegland, and I. Altas. Approximation of a Thin Plate Spline Smoother using Continuous Piecewise Polynomial Functions. SIAM, 1:208--234, 2003]. The computational bottleneck for this algorithm is the solution of large, ill-conditioned systems of linear equations at each step of a generalised cross validation algorithm. Preconditioning techniques are investigated to accelerate the convergence of the solution of these systems using Krylov subspace methods. The preconditioners under consideration are block diagonal, block triangular and constraint preconditioners [M. Benzi, G. H. Golub, and J. Liesen. Numerical solution of saddle point problems. Acta Numer., 14:1--137, 2005]. The effectiveness of each of these preconditioners is examined on a sample dataset taken from a known surface. From our numerical investigation, constraint preconditioners appear to provide improved convergence for this surface fitting problem compared to block preconditioners.
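Following the generic formulation of Benzi, Golub and Liesen (the block symbols below are generic placeholders, not the paper's exact matrices), the discretised smoothing problem leads to a saddle-point system, and the three preconditioner families compared can be written as

\[
\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ \lambda \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},
\qquad
\mathcal{P}_{\mathrm{diag}} = \begin{pmatrix} \hat{A} & 0 \\ 0 & \hat{S} \end{pmatrix},
\quad
\mathcal{P}_{\mathrm{tri}} = \begin{pmatrix} \hat{A} & B^{T} \\ 0 & -\hat{S} \end{pmatrix},
\quad
\mathcal{P}_{\mathrm{con}} = \begin{pmatrix} G & B^{T} \\ B & 0 \end{pmatrix},
\]

where \(\hat{A}\) and \(G\) approximate the (1,1) block and \(\hat{S}\) approximates the Schur complement \(B A^{-1} B^{T}\); the constraint preconditioner \(\mathcal{P}_{\mathrm{con}}\) retains the constraint blocks exactly.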
Abstract:
Introduction This investigation aimed to assess the consistency and accuracy of radiation therapists (RTs) performing cone beam computed tomography (CBCT) alignment to fiducial markers (FMs) (CBCTFM) and the soft tissue prostate (CBCTST). Methods Six patients receiving prostate radiation therapy underwent daily CBCTs. Manual alignment of CBCTFM and CBCTST was performed by three RTs. Inter-observer agreement was assessed using a modified Bland–Altman analysis for each alignment method. Clinically acceptable 95% limits of agreement with the mean (LoAmean) were defined as ±2.0 mm for CBCTFM and ±3.0 mm for CBCTST. Differences between CBCTST alignment and the observer-averaged CBCTFM (AvCBCTFM) alignment were analysed. Clinically acceptable 95% LoA were defined as ±3.0 mm for the comparison of CBCTST and AvCBCTFM. Results CBCTFM and CBCTST alignments were performed for 185 images. The CBCTFM 95% LoAmean were within ±2.0 mm in all planes. CBCTST 95% LoAmean were within ±3.0 mm in all planes. Comparison of CBCTST with AvCBCTFM resulted in 95% LoA of −4.9 to 2.6, −1.6 to 2.5 and −4.7 to 1.9 mm in the superior–inferior, left–right and anterior–posterior planes, respectively. Conclusions Significant differences were found between soft tissue alignment and the predicted FM position. FMs are useful in reducing inter-observer variability compared with soft tissue alignment. Consideration needs to be given to margin design when using soft tissue matching due to increased inter-observer variability. This study highlights some of the complexities of soft tissue guidance for prostate radiation therapy.
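A simplified sketch of the idea behind 95% limits of agreement with the mean for multiple observers (the published modified Bland–Altman method includes small-sample corrections not reproduced here; data and names are illustrative only):

```python
import numpy as np

def loa_with_mean(alignments, level=1.96):
    """Simplified 95% limits of agreement with the mean.

    alignments: array of shape (n_images, n_observers) holding one observer's
    couch-shift value per image (mm) for a single axis.
    Each observer's value is compared with the per-image observer mean; the
    limits are +/- level * SD of those deviations.
    """
    a = np.asarray(alignments, dtype=float)
    deviations = a - a.mean(axis=1, keepdims=True)   # deviation from observer mean
    sd = deviations.std(ddof=1)
    return -level * sd, level * sd

# Illustrative data: 5 images, 3 observers, shifts in mm for one axis.
shifts = [[0.4, 0.9, 0.1], [-1.2, -0.8, -1.5], [0.0, 0.3, -0.2],
          [2.1, 1.6, 1.9], [-0.5, -0.1, -0.9]]
print(loa_with_mean(shifts))   # compare against e.g. a +/-2.0 mm tolerance
```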
Abstract:
Optimisation is a fundamental step in the turbine design process, especially in the development of non-classical designs of radial-inflow turbines working with high-density fluids in low-temperature Organic Rankine Cycles (ORCs). The present work discusses the simultaneous optimisation of the thermodynamic cycle and the one-dimensional design of radial-inflow turbines. In particular, the work describes the integration of a 1D meanline preliminary design code adapted to real gases and a performance estimation approach for radial-inflow turbines within an established ORC cycle analysis procedure. The optimisation approach is split into two distinct loops: the inner loop operates on the 1D design based on the parameters received from the outer loop, which optimises the thermodynamic cycle. The method uses parameters including brine flow rate, temperature and working fluid, shifting assumptions such as head and flow coefficients into the optimisation routine. The discussed design and optimisation method is then validated against published benchmark cases. Finally, using the same conditions, the coupled optimisation procedure is extended to the preliminary design of a radial-inflow turbine with R143a as the working fluid in realistic geothermal conditions and compared against results from the commercially available software RITAL from Concepts-NREC.
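A minimal sketch of this two-loop (nested) optimisation structure, with deliberately mock objective functions and variable names standing in for the real cycle and meanline loss models (this is not the authors' code):

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def turbine_design_efficiency(cycle_params, design_vars):
    """Placeholder for a 1D meanline performance estimate: returns an
    isentropic efficiency for given cycle conditions and (head, flow)
    coefficients. Purely illustrative, not a real loss model."""
    psi, phi = design_vars
    penalty = 1e-5 * (cycle_params["T_in"] - 400.0) ** 2
    return 0.9 - 0.5 * (psi - 1.0) ** 2 - 0.5 * (phi - 0.4) ** 2 - penalty

def inner_loop(cycle_params):
    """Inner loop: optimise the turbine 1D design for fixed cycle conditions."""
    res = minimize(lambda x: -turbine_design_efficiency(cycle_params, x),
                   x0=[1.0, 0.4], bounds=[(0.6, 1.4), (0.2, 0.7)])
    return -res.fun                      # best efficiency found

def outer_loop():
    """Outer loop: optimise a cycle-level variable (here a mock turbine inlet
    temperature), calling the inner design optimisation at each candidate."""
    def negative_net_power(T_in):
        eta = inner_loop({"T_in": T_in})
        cycle_work = (T_in - 300.0) * 0.01   # mock cycle model
        return -(eta * cycle_work)
    return minimize_scalar(negative_net_power, bounds=(350.0, 450.0),
                           method="bounded")

print(outer_loop().x)   # cycle condition giving the best coupled design
```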
Abstract:
Purpose To establish whether the use of a passive or active technique of planning target volume (PTV) definition and treatment for non-small cell lung cancer (NSCLC) delivers the most effective results. This literature review assesses the advantages and disadvantages of each reported in recent studies, and evaluates the validity of the two approaches for planning and treatment. Methods A systematic review of literature focusing on the planning and treatment of radiation therapy for NSCLC tumours. Different approaches published in recent articles are subjected to critical appraisal in order to determine their relative efficacy. Results Free-breathing (FB) is the optimal method for performing planning scans for patients and departments, as it involves no significant increase in cost, workload or education. Maximum intensity projection (MIP) is the fastest form of delineation; however, it is noted to be less accurate than the ten-phase overlap approach for computed tomography (CT). Although gating has proven to reduce margins and facilitate sparing of organs at risk, treatment times can be longer and planning time can be as much as 15 times higher for intensity modulated radiation therapy (IMRT). This raises issues with patient comfort and stabilisation, increasing the chance of geometric miss. Stereotactic treatments can take up to 3 hours to deliver, with additional planning and treatment time as well as the extra hardware, software and training required. Conclusion Four-dimensional computed tomography (4DCT) is superior to 3DCT, with the passive FB approach optimal for PTV delineation and treatment. Departments should use MIP with visual confirmation to ensure coverage for stage 1 disease. Stages 2-3 should be delineated using the ten-phase overlap approach. Stereotactic and gated treatments for early stage disease should be used accordingly; FB-IMRT is optimal for later stage disease.
Abstract:
The Japanese government initiated a series of regulatory reforms in the mid-1990s. The Japanese urban gas industry consists of private and non-private firms of various sizes. Numerous previous studies find that deregulation leads to productivity improvements. We extend the literature by analyzing deregulation, privatization, and other aspects of a regulated industry using unique firm-level data. This study measures productivity to evaluate the effect of the deregulation reform. Using data from 205 firms from 1993 to 2004, we find that the deregulation effect differs depending on firm size. Competitive pressure contributes to improved productivity. The deregulation of gas sales to commercial customers is the most important factor for improving productivity. Copyright © 2013 by the IAEE. All rights reserved.
Abstract:
The Environmental Kuznets Curve (EKC) hypothesises an inverse U-shaped relationship between a measure of environmental pollution and per capita income levels. In this study, we apply non-parametric local polynomial regression (local quadratic fitting) to allow more flexibility in the local estimation. This study uses a larger and globally representative sample covering many local and global pollutants and natural resources, including Biological Oxygen Demand (BOD) emission, CO2 emission, CO2 damage, energy use, energy depletion, mineral depletion, improved water source, PM10, particulate emission damage, forest area and net forest depletion. Copyright © 2009 Inderscience Enterprises Ltd.
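A minimal sketch of local quadratic fitting, i.e. a degree-2 local polynomial estimated by kernel-weighted least squares at each evaluation point (synthetic data and illustrative names only; not the authors' estimator or dataset):

```python
import numpy as np

def local_quadratic_fit(x, y, x0, bandwidth):
    """Local quadratic regression estimate of E[y | x = x0].

    A quadratic is fitted by weighted least squares around x0, with Gaussian
    kernel weights that decay with distance from x0; the fitted intercept is
    the local estimate.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    u = x - x0
    w = np.exp(-0.5 * (u / bandwidth) ** 2)             # kernel weights
    X = np.column_stack([np.ones_like(u), u, u ** 2])   # local design matrix
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]

# Illustrative inverse-U relationship between log income and emissions.
rng = np.random.default_rng(1)
income = rng.uniform(6, 11, 300)                        # mock log per-capita income
emissions = -(income - 9.0) ** 2 + 9.0 + rng.normal(0, 0.5, 300)
grid = np.linspace(6.5, 10.5, 9)
curve = [local_quadratic_fit(income, emissions, g, bandwidth=0.6) for g in grid]
print(np.round(curve, 2))                               # rises then falls: EKC shape
```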
Abstract:
We implemented six different boarding strategies (Wilma, Steffen, Reverse Pyramid, Random, Blocks and By letter) in order to investigate boarding times for Boeing 777 and Airbus A380 aircraft. We also introduced three new boarding methods in the search for an optimum boarding strategy. Our models explicitly simulate the behaviour of groups of people travelling together, and we explicitly simulate the time taken to stow luggage as part of the boarding process. Results from the simulation demonstrate that the Reverse Pyramid method is the best boarding method for the Boeing 777, and the Steffen method is the best boarding method for the Airbus A380. Of the newly suggested boarding methods, the aisle-first method is the best boarding strategy for the Boeing 777 and the row arrangement method is the best for the Airbus A380. Overall, the best boarding strategy is the aisle-first method for the Boeing 777 and the Steffen method for the Airbus A380.
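A heavily simplified, hypothetical sketch of the kind of agent-based boarding simulation described (single aisle, one step per tick, aisle blocking while luggage is stowed; this is not the authors' model, and the helper name and parameters are illustrative only):

```python
import random

def simulate_boarding(target_rows, stow_times):
    """Hypothetical single-aisle boarding model (time-stepped sketch).

    target_rows: row index (1..n_rows) of each passenger, in boarding order.
    stow_times:  ticks each passenger blocks the aisle while stowing luggage.
    Passengers advance one aisle position per tick, cannot overtake, and
    leave the aisle once they have finished stowing at their own row.
    Returns the number of ticks until every passenger is seated.
    """
    n = len(target_rows)
    pos = [0] * n                 # 0 = still queuing at the door
    stow_left = list(stow_times)
    seated = [False] * n
    t = 0
    while not all(seated):
        t += 1
        occupied = {pos[i] for i in range(n) if pos[i] > 0 and not seated[i]}
        for i in sorted(range(n), key=lambda k: -pos[k]):   # front of aisle first
            if seated[i]:
                continue
            if pos[i] == target_rows[i]:                    # at own row: stow, then sit
                stow_left[i] -= 1
                if stow_left[i] <= 0:
                    seated[i] = True
                    occupied.discard(pos[i])
                continue
            nxt = pos[i] + 1
            if nxt not in occupied:                         # step forward if clear
                occupied.discard(pos[i])
                pos[i] = nxt
                occupied.add(nxt)
    return t

# Illustrative comparison of two boarding orders for a 30-row, 60-passenger cabin.
rng = random.Random(0)
rows = [rng.randint(1, 30) for _ in range(60)]
stows = [rng.randint(2, 6) for _ in range(60)]
print("random order      :", simulate_boarding(rows, stows))
print("back-to-front sort:", simulate_boarding(sorted(rows, reverse=True), stows))
```

Feeding such a function passenger orders generated by different strategies (random, back-to-front, aisle-first, and so on) yields comparable boarding-time estimates, which is the kind of comparison the study reports.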
Abstract:
Purpose: To investigate the application of retinal nerve fibre layer (RNFL) thickness as a marker for severity of diabetic peripheral neuropathy (DPN) in people with Type 2 diabetes. Methods: This was a cross-sectional study whereby 61 participants (mean age 61 years [range 41-75], mean duration of diabetes 14 years [range 1-40], 70% male) with Type 2 diabetes and DPN underwent optical coherence tomography (OCT) scans. Global and four-quadrant (TSNI) RNFL thicknesses were measured at 3.45 mm around the optic nerve head of one eye. The neuropathy disability score (NDS) was used to assess the severity of DPN on a 0 to 10 scale. Participants were divided into three age-matched groups representing mild (NDS=3-5), moderate (NDS=6-8) and severe (NDS=9-10) neuropathy. Two regression models were fitted for statistical analysis: 1) NDS scores as a covariate for global and quadrant RNFL thicknesses, 2) NDS groups as a factor for global RNFL thickness only. Results: Mean (SD) RNFL thickness (µm) was 103 (9) for mild neuropathy (n=34), 101 (10) for moderate neuropathy (n=16) and 95 (13) in the group with severe neuropathy (n=11). Global RNFL thickness and NDS scores were statistically significantly related (b=-1.20, p=0.048). When neuropathy was assessed across groups, a trend of thinner mean RNFL thickness was observed with increasing severity of neuropathy; however, this result was not statistically significant (F=2.86, p=0.065). TSNI quadrant analysis showed that the mean RNFL thickness reduction in the inferior quadrant was 2.55 µm per 1-unit increase in NDS score (p=0.005). However, the regression coefficients were not statistically significant for RNFL thickness in the superior (b=-1.0, p=0.271), temporal (b=-0.90, p=0.238) and nasal (b=-0.99, p=0.205) quadrants. Conclusions: RNFL thickness was reduced with increasing severity of DPN and the effect was most evident in the inferior quadrant. Measuring RNFL thickness using OCT may prove to be a useful, non-invasive technique for identifying the severity of DPN and may also provide additional insight into common mechanisms for peripheral neuropathy and RNFL damage.
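As an illustration of the two regression models described (NDS as a continuous covariate, and neuropathy group as a factor), a hedged sketch with synthetic stand-in data and hypothetical variable names (not the study data or code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: RNFL thickness (um) thinning slightly with NDS.
rng = np.random.default_rng(2)
nds = rng.integers(3, 11, 61)
rnfl = 107 - 1.2 * nds + rng.normal(0, 9, 61)
group = pd.cut(nds, bins=[2, 5, 8, 10], labels=["mild", "moderate", "severe"])
df = pd.DataFrame({"rnfl": rnfl, "nds": nds, "group": group})

# Model 1: NDS score as a continuous covariate for global RNFL thickness.
m1 = smf.ols("rnfl ~ nds", data=df).fit()
print(m1.params["nds"], m1.pvalues["nds"])   # slope (um per NDS unit) and p-value

# Model 2: neuropathy severity group as a factor.
m2 = smf.ols("rnfl ~ C(group)", data=df).fit()
print(m2.f_pvalue)                           # overall group effect
```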
Abstract:
Lower airway inflammation is generally classified as eosinophilic or neutrophilic. In conditions where eosinophilic inflammation predominates, such as asthma in children, corticosteroids are usually beneficial. Traditionally, lower airway eosinophilia is measured using cellular counts (through bronchoalveolar lavage or induced sputum). Both methods have limited applicability in children. When instruments to measure fractional exhaled nitric oxide (FeNO) became available, they presented an attractive option as they provided a non-invasive method of measuring eosinophilic inflammation suitable for children and adults. Not surprisingly, proposals have been made that FeNO measurement can be used clinically in many scenarios, including monitoring the response to anti-inflammatory medications, verifying adherence to treatment, and predicting upcoming asthma exacerbations. This thesis addresses the utility of FeNO levels in various scenarios, specifically in relation to asthma control and cough, a contentious aspect of the diagnosis of asthma. The thesis consists of a series of systematic reviews (related to the main question) and original studies in children. The over-arching aim of the thesis is to determine if FeNO is a clinically useful tool in the management of asthma and common asthma symptoms. The specific aims of the thesis were to: 1. Determine if children with asthma have more severe acute respiratory symptoms at presentation with an asthma exacerbation and at days 7, 10 and 14, using validated scales. We also examined if children with asthma were more likely to have a persistent cough on day 14 than children with protracted bronchitis and/or controls. 2. Evaluate the efficacy of tailoring asthma interventions based on sputum analysis, in comparison to clinical symptoms (with or without spirometry/peak flow), for asthma-related outcomes in children and adults. 3. Evaluate the efficacy of tailoring asthma interventions based on exhaled nitric oxide, in comparison to clinical symptoms (with or without spirometry/peak flow), for asthma-related outcomes in children and adults. 4. Determine if adjustment of asthma medications based on FeNO levels (compared to management based on clinical symptoms) reduces severe exacerbations in children with asthma. 5. Examine the relationship between FeNO, exercise-induced bronchoconstriction (EIB) and cough in children. The aims above are addressed in the respective chapters, and all but one have been published or submitted. A synopsis of the findings is: In study-1 (Aim 1), we found that children with protracted bronchitis had the most severe acute respiratory infection symptoms and a higher percentage of respiratory morbidity at day 14 in comparison to children with asthma and healthy controls. The systematic review of study-2 (Aim 2) included 246 randomised adult participants (no children), with 221 completing the trials. In the meta-analysis, a significant reduction in the number of participants who had one or more asthma exacerbations occurred when treatment was based on sputum eosinophils in comparison to clinical symptoms. In the systematic review of study-3 (Aim 3), we found no significant difference between the intervention group (treatment adjusted based on FeNO) and the control group (treatment adjusted based on clinical symptoms) for the primary outcome of asthma exacerbations or for the other outcomes (clinical symptoms, FeNO level and spirometry).
In a post-hoc analysis, a significant reduction in the mean final daily dose of inhaled corticosteroids (ICS) per adult was found in the group where treatment was based on FeNO in comparison to clinical symptoms. In contrast, in the paediatric studies, there was a significant increase in ICS dose in the FeNO strategy arm. Thus, controversy remains over the benefit or otherwise of utilising FeNO in routine clinical practice. FeNO levels are dependent on atopy, and none of the 7 published trials considered atopic status when medications were adjusted according to FeNO levels. In study-4 (Aim 4), 64 children with asthma were recruited. Their asthma medications were adjusted according to either FeNO levels or usual clinical care, utilising a management hierarchy that took atopy into account. It was concluded that tailoring of asthma medications in accordance with FeNO levels (compared to usual management), taking into account atopy status, reduced the number of children with severe exacerbations. However, the FeNO-based strategy resulted in higher daily ICS doses and had no benefit on asthma control. In study-5 (Aim 5), 33 children with cough and 17 controls were recruited. They were randomised to undertake an exercise challenge on day 1 or a dry powder mannitol challenge on day 1 (with the alternative challenge performed on day 2). In addition, a 24-hour cough meter, skin prick test, capsaicin cough sensitivity test and cough diary were undertaken. The change in cough frequency post-exercise was significantly increased in the children with cough. FeNO decreased post-exercise regardless of whether EIB was present or not. Limitations of the studies were addressed in the respective chapters. In summary, the studies from this thesis have provided new information on:
• The severity of respiratory symptoms, which was increased in the early phase of the asthma exacerbation but not in the later recovery phase when compared with controls.
• The utility of FeNO in the management of children with asthma.
• The relationship of FeNO, cough and EIB in children.
• Systematic reviews on the efficacy of tailoring asthma interventions based on eosinophilic inflammatory markers (sputum analysis and FeNO) in comparison to clinical symptoms.
Abstract:
Existing crowd counting algorithms rely on holistic, local or histogram-based features to capture crowd properties. Regression is then employed to estimate the crowd size. Insufficient testing across multiple datasets has made it difficult to compare and contrast different methodologies. This paper presents an evaluation across multiple datasets to compare holistic, local and histogram-based methods, and to compare various image features and regression models. A K-fold cross validation protocol is followed to evaluate the performance across five public datasets: UCSD, PETS 2009, Fudan, Mall and Grand Central. Image features are categorised into five types: size, shape, edges, keypoints and textures. The regression models evaluated are: Gaussian process regression (GPR), linear regression, K nearest neighbours (KNN) and neural networks (NN). The results demonstrate that local features outperform equivalent holistic and histogram-based features; that optimal performance is observed using all image features except textures; and that GPR outperforms linear, KNN and NN regression.
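A hedged sketch of this evaluation protocol, comparing the same four regression models with K-fold cross-validation on synthetic stand-in features (the paper's feature extraction and datasets are not reproduced here):

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor

# Synthetic stand-in: rows are frames, columns are extracted crowd features
# (e.g. foreground size, edge counts, keypoints); the target is the crowd count.
rng = np.random.default_rng(3)
X = rng.random((400, 8))
y = X @ rng.uniform(5, 20, 8) + rng.normal(0, 2, 400)   # mock count labels

models = {
    "GPR":    GaussianProcessRegressor(),
    "Linear": LinearRegression(),
    "KNN":    KNeighborsRegressor(n_neighbors=5),
    "NN":     MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    mae = -cross_val_score(model, X, y, cv=cv, scoring="neg_mean_absolute_error")
    print(f"{name}: MAE {mae.mean():.2f} +/- {mae.std():.2f}")
```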