93 results for Error of measurement
Abstract:
Protein-energy wasting (PEW) is commonly seen in patients with chronic kidney disease (CKD). The condition is characterised by chronic, systemic low-grade inflammation which affects nutritional status by a variety of mechanisms including reducing appetite and food intake and increasing muscle catabolism. PEW is linked with co-morbidities such as cardiovascular disease, and is associated with lower quality of life, increased hospitalisations and a 6-fold increase in risk of death [1]. Significant gender differences have been found in the severity and effects of several markers of PEW. There have been limited studies testing the ability of anti-inflammatory agents or nutritional interventions to reduce the effects of PEW in dialysis patients. This thesis makes a significant contribution to the understanding of PEW in dialysis patients. It advances understanding of measurement techniques for two of the key components, appetite and inflammation, and explores the effect of fish oil, an anti-inflammatory agent, on markers of PEW in dialysis patients. The first part of the thesis consists of two methodological studies conducted using baseline data. The first study aims to validate retrospective ratings of hunger, desire to eat and fullness on visual analogue scales (VAS), administered by paper and pen and electronically, as a new method of measuring appetite in dialysis patients. The second methodological study aims to assess the ability of a variety of methods available in routine practice to detect the presence of inflammation. The second part of the thesis aims to explore the effect of 12 weeks of supplementation with 2 g per day of eicosapentaenoic acid (EPA), a long-chain fatty acid found in fish oil, on markers of PEW. Biomarkers and psychomarkers of appetite and inflammation are the main outcomes explored, with nutritional status, dietary intake and quality of life included as secondary outcomes. A lead-in phase of 3 months prior to baseline was used so that each person acted as their own historical control. The study also examines whether there are gender differences in response to the treatment. As this is an exploratory study, an important part of the work is to test the feasibility of the intervention, so the level of adherence and factors associated with adherence are also presented. The studies were conducted at the hemodialysis unit of the Wesley Hospital. Participants met the following criteria: adult, stage 5 CKD on hemodialysis for at least 3 months, not expected to receive a transplant or switch to another dialysis modality during the study, and absence of intellectual impairment or mental illness impairing the ability to follow instructions or complete the intervention. A range of intermediate, clinical and patient-centred outcome measures were collected at baseline and 12 weeks. Inflammation was measured using five biomarkers: C-reactive protein (CRP), interleukin-6 (IL-6), soluble intercellular adhesion molecule-1 (sICAM-1), soluble vascular cell adhesion molecule-1 (sVCAM-1) and white cell count (WCC). Subjective appetite was measured using the first question from the Appetite and Dietary Assessment (ADAT) tool and VAS for measurements of hunger, desire to eat and fullness. A novel feature of the study was the assessment of the appetite peptides leptin, ghrelin and peptide YY as biomarkers of appetite. Nutritional status/inflammation was assessed using the Malnutrition-Inflammation Score (MIS) and the Patient-Generated Subjective Global Assessment (PG-SGA). Dietary intake was measured using 3-day records.
Quality of life was measured using the Kidney Disease Quality of Life Short Form version 1.3 (KDQOL-SF™ v1.3 © RAND University), which combines the Short-Form 36 (SF36) with a kidney-disease specific module [2]. A smaller range of these variables was available for analysis during the control phase (CRP, ADAT, dietary intake and nutritional status). Statistical analysis was carried out using SPSS version 14 (SPSS Inc, Chicago IL, USA). Analysis of the first part of the thesis involved descriptive and bivariate statistics, as well as Bland-Altman plots to assess agreement between methods, and sensitivity analysis/ROC curves to test the ability of methods to predict the presence of inflammation. The unadjusted (paired t-tests) and adjusted (linear mixed model) change over time is presented for the main outcome variables of inflammation and appetite. Results are shown for the whole group, followed by analyses according to gender and adherence to treatment. Due to the exploratory nature of the study, trends and clinical significance were considered as important as statistical significance. Twenty-eight patients (mean age 61±17 y, 50% male, dialysis vintage 19.5 (4-101) months) underwent baseline assessment. Seven out of 28 patients (25%) reported sub-optimal appetite (self-reported as fair, poor or very poor) despite all being well nourished (100% SGA A). Using the VAS, ratings of hunger, but not desire to eat or fullness, were significantly (p<0.05) associated with a range of relevant clinical variables including age (r=-0.376), comorbidities (r=-0.380), nutritional status (PG-SGA score, r=-0.451), inflammatory markers (CRP r=-0.383; sICAM-1 r=-0.387) and seven domains of quality of life. Patients expressed a preference for the paper-and-pen method of administering the VAS. None of the tools (appetite, MIS, PG-SGA, albumin or iron) showed an acceptable ability to detect patients who were inflamed. It is recommended that CRP be tested more frequently as a matter of course rather than seeking alternative methods of measuring inflammation. Twenty-seven patients completed the 12-week intervention. Twenty patients were considered adherent based on changes in % plasma EPA, which rose from 1.3 (0.94)% to 5.2 (1.1)%, p<0.001, in this group. The major barriers to adherence were forgetting to take the tablets as well as their size. At 12 weeks, inflammatory markers remained steady apart from the white cell count, which decreased (7.6(2.5) vs 7.0(2.2) ×10⁹/L, p=0.058), and sVCAM-1, which increased (1685(654) vs 2249(925) ng/mL, p=0.001). Subjective appetite measured using the VAS increased (51 mm to 57 mm, +12%) and there was a trend towards reduction in peptide YY (660(31) vs 600(30) pg/mL, p=0.078). Some gender differences were apparent, with the following adjusted changes between baseline and week 12: CRP (males -3% vs females +17%, p=0.19), IL-6 (males +17% vs females +48%, p=0.77), sICAM-1 (males -5% vs females +11%, p=0.07), sVCAM-1 (males +54% vs females +19%, p=0.08) and hunger ratings (males +20% vs females -5%, p=0.18). On balance, males experienced maintenance of or a reduction in three inflammatory markers and an improvement in hunger ratings, and therefore appeared to have responded better to the intervention. Compared with those who did not adhere, adherent patients maintained weight (mean(SE) change: +0.5(1.6) vs -0.8(1.2) kg, p=0.052) and fat-free mass (-0.1(1.6) vs -1.8(1.8) kg, p=0.045). There was no difference in change between the intervention and control phases for CRP, appetite, nutritional status or dietary intake.
The thesis makes a significant contribution to the evidence base for understanding PEW in dialysis patients. It has advanced knowledge of methods of assessing inflammation and appetite. Retrospective ratings of hunger on a VAS appear to be a valid method of assessing appetite, although samples which include patients with very poor appetite are required to confirm this. Supplementation with fish oil appeared to improve subjective appetite and dampen the inflammatory response. The effectiveness of the intervention is influenced by gender and adherence. Males appear to be more responsive than females on the primary outcome variables, and the quality of response is improved with better adherence. These results provide evidence to support future interventions aimed at reducing the effects of PEW in dialysis patients.
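The Bland-Altman and ROC analyses described above are standard techniques and simple to sketch. The following is a minimal Python illustration, not the thesis code: it compares two VAS administration methods via bias and limits of agreement, and scores a hypothetical tool's ability to detect inflammation with an AUC. All data and variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Bland-Altman: agreement between paper-and-pen and electronic VAS ratings.
vas_paper = rng.uniform(0, 100, 28)                  # hunger ratings, 0-100 mm
vas_electronic = vas_paper + rng.normal(0, 5, 28)    # second method, with noise
diff = vas_paper - vas_electronic
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                        # 95% limits of agreement
print(f"bias {bias:+.1f} mm, limits {bias - loa:+.1f} to {bias + loa:+.1f} mm")

# ROC: ability of a candidate tool score to detect inflammation (binary label).
inflamed = np.array([1] * 10 + [0] * 18)             # assumed CRP-based labels
tool_score = inflamed * 1.0 + rng.normal(0, 1.2, 28) # assumed tool output
print(f"AUC = {roc_auc_score(inflamed, tool_score):.2f}")
```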
Abstract:
Motor vehicles are a major source of gaseous and particulate matter pollution in urban areas, particularly of ultrafine particles (diameters < 0.1 µm). Exposure to particulate matter has been found to be associated with serious health effects, including respiratory and cardiovascular disease, and mortality. Particle emissions generated by motor vehicles span a very broad size range (from around 0.003-10 µm) and are measured as different subsets of particle mass concentration or particle number count. However, there are scientific challenges in analysing and interpreting the large data sets on motor vehicle emission factors, and little understanding is available of the application of different particle metrics as a basis for air quality regulation. To date, a comprehensive inventory covering the broad size range of particles emitted by motor vehicles, and which includes particle number, does not exist anywhere in the world. This thesis covers research related to four important and interrelated aspects of particulate matter generated by motor vehicle fleets: the derivation of suitable particle emission factors for use in transport modelling and health impact assessments; the quantification of motor vehicle particle emission inventories; the investigation of modality within particle size distributions as a potential basis for developing air quality regulation; and the review and synthesis of current knowledge on ultrafine particles as it relates to motor vehicles, together with the application of these aspects to the quantification, control and management of motor vehicle particle emissions. In order to quantify emissions in terms of a comprehensive inventory covering the full size range of particles emitted by motor vehicle fleets, it was necessary to derive a suitable set of particle emission factors for different vehicle and road type combinations for particle number, particle volume, PM1, PM2.5 and PM10 (mass concentrations of particles with aerodynamic diameters < 1 µm, < 2.5 µm and < 10 µm respectively). The very large data set of emission factors analysed in this study was sourced from measurement studies conducted in developed countries, and hence the derived set of emission factors is suitable for preparing inventories in other urban regions of the developed world. These emission factors are particularly useful for regions that lack the measurement data to derive emission factors, or where experimental data are available but are of insufficient scope. The comprehensive particle emissions inventory presented in this thesis is the first published inventory of tailpipe particle emissions prepared for a motor vehicle fleet, and it includes the quantification of particle emissions covering the full size range of particles emitted by vehicles, based on measurement data. The inventory quantified particle emissions measured in terms of particle number and different particle mass size fractions. It was developed for the urban South-East Queensland fleet in Australia, and included testing the particle emission implications of future scenarios for different passenger and freight travel demand. The thesis also presents evidence of the usefulness of examining modality within particle size distributions as a basis for developing air quality regulations, and finds evidence to support the relevance of introducing a new PM1 mass ambient air quality standard for the majority of environments worldwide.
The study found that a combination of PM1 and PM10 standards is likely to be a more discerning and suitable set of ambient air quality standards for controlling particles emitted from combustion and mechanically generated sources, such as motor vehicles, than the current mass standards of PM2.5 and PM10. The study also reviewed and synthesized existing knowledge on ultrafine particles, with a specific focus on those originating from motor vehicles. It found that motor vehicles are significant contributors to both air pollution and ultrafine particles in urban areas, and that a standardized measurement procedure is not currently available for ultrafine particles. The review found that discrepancies exist between the outcomes of instrumentation used to measure ultrafine particles; that little data are available on ultrafine particle chemistry and composition, long-term monitoring, and the characterization of their spatial and temporal distribution in urban areas; and that no inventories for particle number are available for motor vehicle fleets. This knowledge is critical for epidemiological studies and exposure-response assessment. Conclusions from this review included the recommendation that ultrafine particles in populated urban areas be considered a likely target for future air quality regulation based on particle number, due to their potential impacts on the environment. The research in this PhD thesis successfully integrated the elements needed to quantify and manage motor vehicle fleet emissions, and its novelty lies in combining expertise from two distinctly separate disciplines: aerosol science and transport modelling. The new knowledge and concepts developed in this PhD research provide previously unavailable data and methods which can be used to develop comprehensive, size-resolved inventories of motor vehicle particle emissions, and air quality regulations to control particle emissions to protect the health and well-being of current and future generations.
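The inventory arithmetic described above reduces to summing travel activity multiplied by an emission factor over vehicle and road type combinations, for each particle metric. A minimal Python sketch follows; every category, number and unit is an illustrative assumption, not a South-East Queensland value.

```python
# Illustrative-only inventory sketch: total emissions per particle metric are
# the sum over vehicle/road-type combinations of travel activity (VKT)
# multiplied by the corresponding emission factor. All values are assumed.
vkt = {  # assumed annual travel, in 10^9 vehicle-km
    ("passenger", "freeway"): 9.0,
    ("passenger", "arterial"): 6.0,
    ("freight", "freeway"): 1.2,
}
emission_factors = {  # assumed factors per vehicle-km (arbitrary units)
    "PM2.5": {("passenger", "freeway"): 20.0,
              ("passenger", "arterial"): 30.0,
              ("freight", "freeway"): 250.0},
    "particle number": {("passenger", "freeway"): 3.0,
                        ("passenger", "arterial"): 2.0,
                        ("freight", "freeway"): 60.0},
}

for metric, factors in emission_factors.items():
    total = sum(vkt[combo] * factors[combo] for combo in vkt)
    print(f"{metric}: {total:.1f} (activity x factor, summed over fleet)")
```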
Abstract:
The inquiry documented in this thesis is located at the nexus of technological innovation and traditional schooling. As we enter the second decade of a new century, few would argue against the increasingly urgent need to integrate digital literacies with traditional academic knowledge. Yet, despite substantial investments from governments and businesses, the adoption and diffusion of contemporary digital tools in formal schooling remain sluggish. To date, research on technology adoption in schools tends to take a deficit perspective of schools and teachers, with the lack of resources and teacher ‘technophobia’ most commonly cited as barriers to digital uptake. Corresponding interventions that focus on increasing funding and upskilling teachers, however, have made little difference to adoption trends in the last decade. Empirical evidence that explicates the cultural and pedagogical complexities of innovation diffusion within long-established conventions of mainstream schooling, particularly from the standpoint of students, is wanting. To address this knowledge gap, this thesis inquires into how students evaluate and account for the constraints and affordances of contemporary digital tools when they engage with them as part of their conventional schooling. It documents the attempted integration of a student-led Web 2.0 learning initiative, known as the Student Media Centre (SMC), into the schooling practices of a long-established, high-performing independent senior boys’ school in urban Australia. The study employed an ‘explanatory’ two-phase research design (Creswell, 2003) that combined complementary quantitative and qualitative methods to achieve both breadth of measurement and richness of characterisation. In the initial quantitative phase, a self-reported questionnaire was administered to the senior school student population (N=481) to determine adoption trends and predictors of SMC usage. Measurement constructs included individual learning dispositions (learning and performance goals, cognitive playfulness and personal innovativeness), as well as social and technological variables (peer support, perceived usefulness and ease of use). Incremental predictive models of SMC usage were built using Classification and Regression Tree (CART) modelling: (i) individual-level predictors, (ii) individual and social predictors, and (iii) individual, social and technological predictors. Peer support emerged as the best predictor of SMC usage. Other salient predictors included perceived ease of use and usefulness, cognitive playfulness and learning goals. On the whole, an overwhelming proportion of students reported low usage levels, low perceived usefulness and a lack of peer support for engaging with the digital learning initiative. The small minority of frequent users reported having high levels of peer support and robust learning goal orientations, rather than being predominantly driven by performance goals. These findings indicate that tensions around social validation, digital learning and academic performance pressures influence students’ engagement with the Web 2.0 learning initiative. The qualitative phase that followed provided insights into these tensions by shifting the analytics from individual attitudes and behaviours to the shared social and cultural reasoning practices that explain students’ engagement with the innovation. Six in-depth focus groups, comprising 60 students with different levels of SMC usage, were conducted, audio-recorded and transcribed.
Textual data were analysed using Membership Categorisation Analysis. Students’ accounts converged around a key proposition: the Web 2.0 learning initiative was useful-in-principle but useless-in-practice. While students endorsed the usefulness of the SMC for enhancing multimodal engagement, extending peer-to-peer networks and acquiring real-world skills, they also called attention to a number of constraints that obfuscated the realisation of these design affordances in practice. These constraints were cast in terms of three binary formulations of social and cultural imperatives at play within the school: (i) ‘cool/uncool’, (ii) ‘dominant staff/compliant student’, and (iii) ‘digital learning/academic performance’. The first formulation foregrounds the social stigma of the SMC among peers and its resultant lack of positive network benefits. The second relates to students’ perception of the school culture as authoritarian and punitive, with adverse effects on the very student agency required to drive the innovation. The third points to academic performance pressures in a crowded curriculum with tight timelines. Taken together, findings from both phases of the study provide the following key insights. First, students endorsed the learning affordances of contemporary digital tools such as the SMC for enhancing their current schooling practices. For the majority of students, however, these learning affordances were overshadowed by the performative demands of schooling, both social and academic. The student participants saw engagement with the SMC in school as distinct from, even oppositional to, the conventional social and academic performance indicators of schooling, namely (i) being ‘cool’ (or at least ‘not uncool’), (ii) being sufficiently ‘compliant’, and (iii) achieving good academic grades. Their reasoned response, therefore, was simply to resist engagement with the digital learning innovation. Second, a small minority of students seemed dispositionally inclined to negotiate the learning affordances and performance constraints of digital learning and traditional schooling more effectively than others. These students were able to engage more frequently and meaningfully with the SMC in school. Their ability to adapt and traverse seemingly incommensurate social and institutional identities and norms is theorised as cultural agility: a dispositional construct that comprises personal innovativeness, cognitive playfulness and learning goals orientation. The logic for these individuals is ‘both-and’ rather than ‘either-or’: a capacity to accommodate both learning and performance in school, whether in terms of digital engagement and academic excellence, or successful brokerage across multiple social identities and institutional affiliations within the school. In sum, this study takes us beyond the familiar terrain of deficit discourses that tend to blame institutional conservatism, lack of resourcing and teacher resistance for the low uptake of digital technologies in schools. It does so by providing an empirical base for the development of a ‘third way’ of theorising technological and pedagogical innovation in schools, one which is more informed by students as critical stakeholders and thus more relevant to the lived culture within the school and its complex relationship to students’ lives outside of school. It is in this relationship that we find an explanation for how these individuals can, at one and the same time, be digital kids and analogue students.
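The incremental CART modelling described in the quantitative phase can be sketched with standard tools. Below is a minimal Python illustration using scikit-learn's decision tree in place of the study's CART software; the survey data, constructs and thresholds are made-up assumptions.

```python
# Illustrative CART sketch (scikit-learn decision tree standing in for the
# study's CART modelling). Constructs mirror the abstract; data are made up.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)
n = 481  # matches the reported sample size; responses below are synthetic
features = ["learning_goals", "playfulness", "peer_support", "usefulness"]
X = rng.uniform(1, 5, (n, len(features)))        # assumed 5-point scale scores

# Assumed relationship: usage driven mainly by peer support and usefulness.
usage = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(0, 1, n) > 5.5).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, usage)
print(export_text(tree, feature_names=features))
```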
Abstract:
This paper presents preliminary results in establishing a strategy for predicting Zenith Tropospheric Delay (ZTD) and relative ZTD (rZTD) between Continuously Operating Reference Stations (CORS) in near real-time. It is anticipated that the predicted ZTD or rZTD can assist network-based Real-Time Kinematic (RTK) performance over long inter-station distances, ultimately enabling a cost-effective method of delivering precise positioning services to sparsely populated regional areas, such as Queensland. This research first investigates two ZTD solutions: 1) the post-processed IGS ZTD solution; and 2) the near real-time ZTD solution, obtained through the GNSS processing software package Bernese that has been deployed for this project. The predictability of the near real-time Bernese solution is analysed and compared to the post-processed IGS solution, which acts as the benchmark. The predictability analyses were conducted with prediction times of 15, 30, 45 and 60 minutes to determine the error with respect to timeliness. The predictability of ZTD and relative ZTD is characterised by using the previously estimated ZTD as the predicted ZTD of the current epoch. This research has shown that both the ZTD and relative ZTD prediction errors are random in nature; the standard deviation (STD) grows from a few millimetres to the sub-centimetre level as the prediction interval ranges from 15 to 60 minutes. Additionally, the rZTD predictability shows very little dependency on the length of the tested baselines of up to 1000 kilometres. Finally, the comparison of the near real-time Bernese solution with the IGS solution has shown a slight degradation in prediction accuracy: the less accurate NRT solution has an STD error of 1 cm within a prediction delay of 50 minutes, though some larger errors of up to 10 cm are observed.
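The persistence prediction scheme described above (the previously estimated ZTD serves as the prediction for the current epoch) is simple to sketch. The following Python snippet computes the prediction-error STD for several prediction intervals on a synthetic ZTD series; the random-walk series and the 5-minute epoch spacing are illustrative assumptions, not the study's data.

```python
# Persistence prediction of ZTD: the estimate from k epochs earlier serves as
# the prediction for the current epoch; the error STD is reported per interval.
# The random-walk ZTD series and 5-minute epoch spacing are assumptions.
import numpy as np

rng = np.random.default_rng(1)
epoch_minutes = 5
ztd = 2.4 + np.cumsum(rng.normal(0.0, 0.0005, 2000))   # synthetic ZTD, metres

for lead in (15, 30, 45, 60):                          # prediction intervals
    k = lead // epoch_minutes                          # epochs per interval
    err = ztd[:-k] - ztd[k:]                           # prediction minus truth
    print(f"{lead:2d} min: STD = {err.std(ddof=1) * 1000:.1f} mm")
```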
Abstract:
Transport regulators consider that, with respect to pavement damage, heavy vehicles (HVs) are the riskiest vehicles on the road network. That HV suspension design contributes to road and bridge damage has been recognised for some decades. This thesis deals with some aspects of HV suspension characteristics, particularly (but not exclusively) air suspensions, in the areas of developing low-cost in-service HV suspension testing, the effects of larger-than-industry-standard longitudinal air lines, and the characteristics of on-board mass (OBM) systems for HVs. All these areas, whilst seemingly disparate, seek to inform the management of HVs, reduce their impact on the network asset and/or provide a measurement mechanism for worn HV suspensions. A number of project management groups at the State and National level in Australia have been, and will be, presented with the results of the project that resulted in this thesis. This should serve to inform their activities applicable to this research. A number of HVs were tested for various characteristics. These tests were used to form a number of conclusions about HV suspension behaviours. Wheel forces from road test data were analysed. A “novel roughness” measure was developed and applied to the road test data to determine dynamic load sharing, amongst other research outcomes. Further, it was proposed that this approach could inform future development of pavement models incorporating roughness and peak wheel forces. Left/right variations in wheel forces and wheel force variations for different speeds were also presented. This led to some conclusions regarding suspension and wheel force frequencies, their transmission to the pavement and repetitive wheel loads in the spatial domain. An improved method of determining dynamic load sharing was developed and presented: it used the correlation coefficient between two elements of a HV to determine dynamic load sharing, and was validated against a mature dynamic load-sharing metric, the dynamic load sharing coefficient (de Pont, 1997). This was the first time that the technique of measuring correlation between elements on a HV had been used for a test case vs. a control case for two different sized air lines. Dynamic load sharing at the air springs was shown to improve for the test case of the large longitudinal air lines. The statistically significant improvement in dynamic load sharing at the air springs from larger longitudinal air lines varied from approximately 30 percent to 80 percent. Dynamic load sharing at the wheels was improved only for low air line flow events for the test case of larger longitudinal air lines. Statistically significant improvements to some suspension metrics across the range of test speeds and “novel roughness” values were evident from the use of larger longitudinal air lines, but these were not uniform. Of note were improvements to suspension metrics involving peak dynamic forces ranging from below the error margin to approximately 24 percent. Abstract models of HV suspensions were developed from the results of some of the tests. Those models were used to propose further development of, and future directions of research into, further gains in HV dynamic load sharing, from alterations to currently available damping characteristics combined with implementation of large longitudinal air lines.
In-service testing of HV suspensions was found to be possible within a documented range from below the error margin to an error of approximately 16 percent. These results were in comparison with either the manufacturer’s certified data or test results replicating the Australian standard for “road-friendly” HV suspensions, Vehicle Standards Bulletin 11. OBM accuracy testing and the development of tamper evidence from OBM data were detailed for over 2000 individual data points across twelve test and control OBM systems from eight suppliers installed on eleven HVs. The results indicated that 95 percent of contemporary OBM systems available in Australia are accurate to ±500 kg. The total variation in OBM linearity, after three outliers in the data were removed, was 0.5 percent. A tamper indicator and other OBM metrics that could be used by jurisdictions to determine tamper events were developed and documented. That OBM systems could be used as one vector for in-service testing of HV suspensions was one of a number of synergies between the seemingly disparate streams of this project.
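The correlation-based dynamic load sharing measure described above can be sketched directly: compute the Pearson correlation coefficient between the force time series at two elements of the HV, with higher correlation read as better load sharing. In the minimal Python sketch below, the synthetic spring-force signals are illustrative assumptions, not test data.

```python
# Correlation-based dynamic load sharing: Pearson correlation between the
# force time series at two elements (here, two air springs). The synthetic
# signals share a body-bounce component plus independent noise (assumed).
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0.0, 10.0, 0.001)                        # 10 s sampled at 1 kHz
shared = np.sin(2 * np.pi * 1.5 * t)                   # ~1.5 Hz body bounce
spring_a = 40 + 5 * shared + rng.normal(0, 1, t.size)  # spring forces, kN
spring_b = 40 + 5 * shared + rng.normal(0, 1, t.size)

r = np.corrcoef(spring_a, spring_b)[0, 1]
print(f"dynamic load sharing (correlation coefficient) = {r:.3f}")
```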
Abstract:
During the past decade, a significant amount of research has been conducted internationally with the aim of developing, implementing, and verifying "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures. Application of these methods permits comprehensive assessment of the actual failure modes and ultimate strengths of structural systems in practical design situations, without resort to simplified elastic methods of analysis and semi-empirical specification equations. Advanced analysis has the potential to extend the creativity of structural engineers and simplify the design process, while ensuring greater economy and more uniform safety with respect to the ultimate limit state. The application of advanced analysis methods has previously been restricted to steel frames comprising only members with compact cross-sections that are not subject to the effects of local buckling. This precluded the use of advanced analysis in the design of steel frames comprising a significant proportion of the most commonly used Australian sections, which are non-compact and subject to the effects of local buckling. This thesis contains a detailed description of research conducted over the past three years in an attempt to extend the scope of advanced analysis by developing methods that include the effects of local buckling in a non-linear analysis formulation, suitable for practical design of steel frames comprising non-compact sections. Two alternative concentrated plasticity formulations are presented in this thesis: the refined plastic hinge method and the pseudo plastic zone method. Both methods implicitly account for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. The accuracy and precision of the methods for the analysis of steel frames comprising non-compact sections have been established by comparison with a comprehensive range of analytical benchmark frame solutions. Both the refined plastic hinge and pseudo plastic zone methods are more accurate and precise than the conventional individual member design methods based on elastic analysis and specification equations. For example, the pseudo plastic zone method predicts the ultimate strength of the analytical benchmark frames with an average conservative error of less than one percent, and has an acceptable maximum unconservative error of less than five percent. The pseudo plastic zone method can allow the design capacity to be increased by up to 30 percent for simple frames, mainly due to the consideration of inelastic redistribution. The benefits may be even more significant for complex frames with significant redundancy, which provides greater scope for inelastic redistribution. The analytical benchmark frame solutions were obtained using a distributed plasticity shell finite element model. A detailed description of this model and the results of all 120 benchmark analyses are provided. The model explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. Its accuracy was verified by comparison with a variety of analytical solutions and the results of three large-scale experimental tests of steel frames comprising non-compact sections. A description of the experimental method and test results is also provided.
Abstract:
Mid-infrared (MIR) and near-infrared (NIR) spectroscopy have been compared and evaluated for differentiating kaolinite, coal-bearing kaolinite and halloysite, the three relatively abundant minerals of the kaolin group, especially in China. In the MIR spectra, differences between kaolinite and halloysite appear in the 3000-3600 cm⁻¹ region; however, kaolinite and halloysite cannot be clearly differentiated on this basis, let alone kaolinite and coal-bearing kaolinite. NIR, together with MIR, provides sufficient evidence to differentiate kaolinite from halloysite, and especially kaolinite from coal-bearing kaolinite. There are obvious differences between kaolinite and halloysite across the whole spectral range, and some differences are also shown between kaolinite and coal-bearing kaolinite. Therefore, the reproducibility of measurement, signal-to-noise ratio and richness of qualitative information should be considered simultaneously for the proper selection of a spectroscopic method for mineral analysis.
Abstract:
The potential to sequester atmospheric carbon in agricultural and forest soils to offset greenhouse gas emissions has generated interest in measuring changes in soil carbon resulting from changes in land management. However, the inherent spatial variability of soil carbon limits the precision of measurement of changes in soil carbon and hence the ability to detect changes. We analyzed the variability of soil carbon by intensively sampling sites under different land management as a step toward developing efficient soil sampling designs. Sites were tilled cropland and a mixed deciduous forest in Tennessee, and old-growth and second-growth coniferous forest in western Washington, USA. Six soil cores within each of three microplots were taken as an initial sample, and an additional six cores were taken to simulate resampling. Soil C variability was greater in Washington than in Tennessee, and greater in less disturbed than in more disturbed sites. Using this protocol, our data suggest that differences on the order of 2.0 Mg C ha⁻¹ could be detected by collection and analysis of cores from at least five (tilled) or two (forest) microplots in Tennessee. Greater spatial variability in the forested sites in Washington increased the minimum detectable difference, but these systems, consisting of low-C-content sandy soil with irregularly distributed pockets of organic C in buried logs, are likely to rank among the most spatially heterogeneous of systems. Our results clearly indicate that consistent intramicroplot differences at all sites will enable detection of much more modest changes if the same microplots are resampled.
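The link between spatial variability and the minimum detectable difference can be made concrete with a standard paired-design power calculation. The Python sketch below is illustrative only; the SD of paired differences, alpha and power values are assumptions, not the study's estimates.

```python
# Minimum detectable difference (MDD) for resampling n microplots, using a
# two-sided paired t-test: MDD = (t_alpha/2 + t_beta) * sd_diff / sqrt(n).
# The SD of paired differences, alpha and power are assumed values.
from scipy import stats

def minimum_detectable_difference(sd_diff, n, alpha=0.05, power=0.90):
    df = n - 1
    t_alpha = stats.t.ppf(1 - alpha / 2, df)
    t_beta = stats.t.ppf(power, df)
    return (t_alpha + t_beta) * sd_diff / n ** 0.5

for n in (2, 5, 10):
    mdd = minimum_detectable_difference(sd_diff=1.5, n=n)
    print(f"n = {n:2d} microplots: MDD = {mdd:.2f} Mg C/ha")
```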
Abstract:
OBJECTIVE To examine the psychometric properties of a Chinese version of the Problem Areas In Diabetes (PAID-C) scale. RESEARCH DESIGN AND METHODS The reliability and validity of the PAID-C were evaluated in a convenience sample of 205 outpatients with type 2 diabetes. Confirmatory factor analysis, Bland-Altman analysis, and Spearman's correlations facilitated the psychometric evaluation. RESULTS Confirmatory factor analysis confirmed a one-factor structure of the PAID-C (χ²/df ratio = 1.894, goodness-of-fit index = 0.901, comparative fit index = 0.905, root mean square error of approximation = 0.066). The PAID-C was associated with A1C (rs = 0.15; P < 0.05) and with diabetes self-care behaviors in general diet (rs = −0.17; P < 0.05) and exercise (rs = −0.17; P < 0.05). The 4-week test-retest reliability demonstrated satisfactory stability (rs = 0.83; P < 0.01). CONCLUSIONS The PAID-C is a reliable and valid measure of diabetes-related emotional distress in Chinese people with type 2 diabetes.
Abstract:
Purpose. To investigate the effect of various presbyopic vision corrections on nighttime driving performance on a closed-road driving circuit. Methods. Participants were 11 presbyopes (mean age, 57.3 ± 5.8 years), with a mean best sphere distance refractive error of R+0.23±1.53 DS and L+0.20±1.50 DS, whose only experience of wearing presbyopic vision correction was reading spectacles. The study involved a repeated-measures design by which a participant's nighttime driving performance was assessed on a closed-road circuit while wearing each of four power-matched vision corrections. These included single-vision distance lenses (SV), progressive-addition spectacle lenses (PAL), monovision contact lenses (MV), and multifocal contact lenses (MTF CL) worn in a randomized order. Measures included low-contrast road hazard detection and avoidance, road sign and near target recognition, lane-keeping, driving time, and legibility distance for street signs. Eye movement data (fixation duration and number of fixations) were also recorded. Results. Street sign legibility distances were shorter when wearing MV and MTF CL than SV and PAL (P < 0.001), and participants drove more slowly with MTF CL than with PALs (P = 0.048). Wearing SV resulted in more errors (P < 0.001) and in more (P = 0.002) and longer (P < 0.001) fixations when responding to near targets. Fixation duration was also longer when viewing distant signs with MTF CL than with PAL (P = 0.031). Conclusions. Presbyopic vision corrections worn by naive, unadapted wearers affected nighttime driving. Overall, spectacle corrections (PAL and SV) performed well for distance driving tasks, but SV negatively affected viewing near dashboard targets. MTF CL resulted in the shortest legibility distance for street signs and longer fixation times.
Abstract:
Mid-infrared (MIR) and near-infrared (NIR) spectroscopy have been used to study the molecular structure of halloysite and potassium acetate intercalated halloysite, and to determine the structural changes of halloysite through intercalation. The MIR spectra show all fundamental vibrations, including the hydroxyl units, the basic aluminosilicate framework and the water molecules in the structure of halloysite and its intercalation complex. Comparison shows that almost all bands observed for halloysite are also observed for the halloysite-potassium acetate intercalation complex, apart from bands in the 1700-1300 cm⁻¹ region, though with differences in band intensity. NIR, interpreted together with the MIR spectra, provides sufficient evidence to analyse the structural changes of halloysite through intercalation; there are obvious differences between halloysite and the halloysite-potassium acetate intercalation complex across all spectral ranges. Therefore, the reproducibility of measurement and richness of qualitative information should be considered simultaneously for the proper selection of a spectroscopic method for molecular structural analysis.
Abstract:
In this review piece, we survey the literature on the cost of equity capital implications of corporate disclosure and conservative accounting policy choice decisions, with the principal objective of providing insights into the design and methodological issues which underlie the empirical investigations. We begin with a review of the analytical studies most typically cited in the empirical research as providing a theoretical foundation. We then turn to consider literature that offers insights into the selection of proxies for each of our constructs of interest: cost of equity capital, disclosure quality and accounting conservatism. As a final step, we review selected empirical studies to illustrate the relevant evidence found within the literature. Based on our review, we interpret the literature as providing the researcher with only limited direct guidance on the appropriate choice of measure for each of the constructs of interest. Further, we view the literature as raising questions about both the interpretation of empirical findings in the face of measurement concerns and the suitability of certain theoretical arguments to the research setting. Overall, perhaps the clearest message is that one of the most controversial and fundamental issues underlying the literature is the diversifiability or nondiversifiability of information effects.
Abstract:
One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition.
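The margin defined above is straightforward to compute for any voting classifier. The following minimal Python sketch normalises vote counts to fractions and returns, for each example, the vote for the correct label minus the largest vote for any incorrect label; the toy ensemble votes are an illustrative assumption.

```python
# Margin of each example under a voting classifier: vote fraction for the
# correct label minus the largest vote fraction for any incorrect label.
# The toy votes and labels are assumed for illustration.
import numpy as np

def margins(votes, y):
    """votes: (n_examples, n_classes) vote counts; y: true label indices."""
    frac = votes / votes.sum(axis=1, keepdims=True)
    idx = np.arange(len(y))
    correct = frac[idx, y]
    frac_wrong = frac.copy()
    frac_wrong[idx, y] = -np.inf          # mask out the correct label
    return correct - frac_wrong.max(axis=1)

votes = np.array([[7, 2, 1], [3, 4, 3], [2, 2, 6]], dtype=float)
y = np.array([0, 1, 0])
print(margins(votes, y))                  # negative margin = misclassified
```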
Abstract:
We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
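The selection rule discussed above, minimising empirical risk plus a complexity penalty over a nested sequence of models, can be sketched in a few lines. In the Python illustration below, the polynomial model sequence, the data and the linear-in-degree penalty are assumptions for illustration, not the paper's construction.

```python
# Complexity-penalized model selection over a nested (inclusion-ordered)
# sequence of polynomial models: pick the degree minimising empirical risk
# plus a penalty. The data, models and penalty form are assumptions.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + rng.normal(0, 0.3, 200)

def empirical_risk(degree):
    coeffs = np.polyfit(x, y, degree)              # least squares within model
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

def penalty(degree, n=200):
    return degree / n                              # assumed complexity penalty

scores = {d: empirical_risk(d) + penalty(d) for d in range(1, 12)}
best = min(scores, key=scores.get)
print(f"selected polynomial degree: {best}")
```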
Abstract:
Orthopaedic fracture fixation implants are increasingly being designed using accurate 3D models of long bones based on computed tomography (CT). Unlike CT, magnetic resonance imaging (MRI) does not involve ionising radiation and is therefore a desirable alternative. This study aims to quantify the accuracy of MRI-based 3D models compared to CT-based 3D models of long bones. The femora of five intact cadaver ovine limbs were scanned using a 1.5 T MRI scanner and a CT scanner. Image segmentation of the CT and MRI data was performed using a multi-threshold segmentation method. Reference models were generated by digitising the bone surfaces, free of soft tissue, with a mechanical contact scanner. The MRI- and CT-derived models were validated against the reference models. The results demonstrated that the CT-based models contained an average error of 0.15 mm while the MRI-based models contained an average error of 0.23 mm. Statistical validation shows that there are no significant differences between 3D models based on CT and MRI data. These results indicate that the geometric accuracy of MRI-based 3D models is comparable to that of CT-based models, and therefore MRI is a potential alternative to CT for the generation of 3D models with high geometric accuracy.
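The validation step described above, measuring a segmented model's error against a reference surface, is commonly implemented as a mean nearest-neighbour distance between point clouds. The Python sketch below assumes synthetic point clouds in place of the study's meshes.

```python
# Mean nearest-neighbour distance from a segmented model's surface points to
# the reference (contact-scanner) points, as a simple average-error measure.
# The point clouds are illustrative assumptions, not the study's data.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
reference = rng.uniform(0, 100, (5000, 3))               # digitised surface (mm)
model = reference + rng.normal(0, 0.2, reference.shape)  # segmented model points

dist, _ = cKDTree(reference).query(model)                # per-point distances
print(f"average error = {dist.mean():.2f} mm")
```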