454 results for intermethod comparison
Abstract:
Background Accelerometers have become one of the most common methods of measuring physical activity (PA). Thus, the validity of accelerometer data reduction approaches remains an important research area. Yet, few studies directly compare data reduction approaches and other PA measures in free-living samples. Objective To compare PA estimates provided by 3 accelerometer data reduction approaches, steps, and 2 self-reported estimates: Crouter's 2-regression model, Crouter's refined 2-regression model, the weighted cut-point method adopted in the National Health and Nutrition Examination Survey (NHANES; 2003-2004 and 2005-2006 cycles), steps, IPAQ, and 7-day PA recall. Methods A worksite sample (N = 87) completed online surveys and wore ActiGraph GT1M accelerometers and pedometers (SW-200) during waking hours for 7 consecutive days. Daily time spent in sedentary, light, moderate, and vigorous intensity activity and the percentage of participants meeting PA recommendations were calculated and compared. Results Crouter's 2-regression (161.8 +/- 52.3 minutes/day) and refined 2-regression (137.6 +/- 40.3 minutes/day) models provided significantly higher estimates of moderate and vigorous PA and proportions of those meeting PA recommendations (91% and 92%, respectively) as compared with the NHANES weighted cut-point method (39.5 +/- 20.2 minutes/day, 18%). Differences between other measures were also significant. Conclusions When comparing 3 accelerometer cut-point methods, steps, and self-report measures, estimates of PA participation vary substantially.
Abstract:
In this study, we evaluated agreement among three generations of ActiGraph™ accelerometers in children and adolescents. Twenty-nine participants (mean age = 14.2 +/- 3.0 years) completed two laboratory-based activity sessions, each lasting 60 min. During each session, participants concurrently wore three different models of the ActiGraph™ accelerometer (GT1M, GT3X, GT3X+). Agreement among the three models for vertical axis counts, vector magnitude counts, and time spent in moderate-to-vigorous physical activity (MVPA) was evaluated using intraclass correlation coefficients and Bland-Altman plots. The intraclass correlation coefficients for total vertical axis counts, total vector magnitude counts, and estimated MVPA were 0.994 (95% CI = 0.989-0.996), 0.981 (95% CI = 0.969-0.989), and 0.996 (95% CI = 0.989-0.998), respectively. Inter-monitor differences for total vertical axis and vector magnitude counts ranged from 0.3% to 1.5%, while inter-monitor differences for estimated MVPA were equal to or close to zero. On the basis of these findings, we conclude that there is strong agreement between the GT1M, GT3X, and GT3X+ activity monitors, thus making it acceptable for researchers and practitioners to use different ActiGraph™ models within a given study.
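The Bland-Altman portion of this kind of agreement analysis has a compact form; a minimal Python sketch computing the bias and 95% limits of agreement, using made-up monitor counts rather than the study's data, might look like:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for two paired measurement series.

    Returns the mean difference (bias) and the 95% limits of agreement,
    i.e. bias +/- 1.96 * SD of the paired differences.
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical vertical-axis counts from two monitors worn concurrently
gt1m = [5200, 6100, 4800, 7000, 5900]
gt3x = [5150, 6180, 4760, 7050, 5880]
bias, lo, hi = bland_altman(gt1m, gt3x)
```

Inter-monitor agreement is then judged by whether the bias is near zero and the limits of agreement are narrow relative to the scale of the measurements.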
Abstract:
The absence of comparative validity studies has prevented researchers from reaching consensus regarding the application of intensity-related accelerometer cut points for children and adolescents. PURPOSE This study aimed to evaluate the classification accuracy of five sets of independently developed ActiGraph cut points using energy expenditure, measured by indirect calorimetry, as a criterion reference standard. METHODS A total of 206 participants between the ages of 5 and 15 yr completed 12 standardized activity trials. Trials consisted of sedentary activities (lying down, writing, computer game), lifestyle activities (sweeping, laundry, throw and catch, aerobics, basketball), and ambulatory activities (comfortable walk, brisk walk, brisk treadmill walk, running). During each trial, participants wore an ActiGraph GT1M, and VO2 was measured breath-by-breath using the Oxycon Mobile portable metabolic system. Physical activity intensity was estimated using five independently developed cut points: Freedson/Trost (FT), Puyau (PU), Treuth (TR), Mattocks (MT), and Evenson (EV). Classification accuracy was evaluated via weighted κ statistics and area under the receiver operating characteristic curve (ROC-AUC). RESULTS Across all four intensity levels, the EV (κ = 0.68) and FT (κ = 0.66) cut points exhibited significantly better agreement than TR (κ = 0.62), MT (κ = 0.54), and PU (κ = 0.36). The EV and FT cut points exhibited significantly better classification accuracy for moderate- to vigorous-intensity physical activity (ROC-AUC = 0.90) than the TR, PU, or MT cut points (ROC-AUC = 0.77-0.85). Only the EV cut points provided acceptable classification accuracy for all four levels of physical activity intensity and performed well among children of all ages. The widely applied sedentary cut point of 100 counts per minute exhibited excellent classification accuracy (ROC-AUC = 0.90).
CONCLUSIONS On the basis of these findings, we recommend that researchers use the EV ActiGraph cut points to estimate time spent in sedentary, light-, moderate-, and vigorous-intensity activity in children and adolescents. Copyright © 2011 by the American College of Sports Medicine.
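The weighted κ statistic used above penalises near-miss intensity classifications less than distant ones. A sketch of a linearly weighted Cohen's kappa on hypothetical four-level intensity labels (not the study's data):

```python
import numpy as np

def weighted_kappa(rater_a, rater_b, n_categories):
    """Linearly weighted Cohen's kappa for two ordinal category series.

    The weight for an (i, j) pairing is 1 - |i - j| / (n_categories - 1),
    so adjacent-category disagreements are penalised less than distant ones.
    """
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    # Observed joint proportions
    obs = np.zeros((n_categories, n_categories))
    for i, j in zip(a, b):
        obs[i, j] += 1
    obs /= len(a)
    # Expected proportions under independence of the two raters
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    idx = np.arange(n_categories)
    w = 1 - np.abs(idx[:, None] - idx[None, :]) / (n_categories - 1)
    po = (w * obs).sum()   # weighted observed agreement
    pe = (w * exp).sum()   # weighted chance agreement
    return (po - pe) / (1 - pe)

# 0=sedentary, 1=light, 2=moderate, 3=vigorous (hypothetical labels)
criterion = [0, 0, 1, 2, 3, 3, 2, 1]
predicted = [0, 1, 1, 2, 3, 2, 2, 1]
kappa = weighted_kappa(criterion, predicted, 4)
```

Perfect agreement yields κ = 1; values near zero indicate agreement no better than chance.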
Abstract:
We construct a two-scale mathematical model for modern, high-rate LiFePO4 cathodes. We attempt to validate against experimental data using two forms of the phase-field model developed recently to represent the concentration of Li+ in nano-sized LiFePO4 crystals. We also compare this with the shrinking-core based model we developed previously. Validating against high-rate experimental data, in which electronic and electrolytic resistances have been reduced, is an excellent test of the validity of the crystal-scale model used to represent the phase change that may occur in LiFePO4 material. We obtain poor fits with the shrinking-core based model, even with fitting based on "effective" parameter values. Surprisingly, using the more sophisticated phase-field models on the crystal scale results in poorer fits, though a significant parameter regime could not be investigated due to numerical difficulties. Separately from the fits obtained, using phase-field based models embedded in a two-scale cathodic model results in "many-particle" effects consistent with those reported recently.
Abstract:
Objective The present study aimed to develop accelerometer cut points to classify physical activities (PA) by intensity in preschoolers and to investigate discrepancies in PA levels when applying various accelerometer cut points. Methods To calibrate the accelerometer, 18 preschoolers (5.8 +/- 0.4 years) performed eleven structured activities and one free play session while wearing a GT1M ActiGraph accelerometer using 15 s epochs. The structured activities were chosen based on the direct observation system Children's Activity Rating Scale (CARS), while the criterion measure of PA intensity during free play was provided using a second-by-second observation protocol (modified CARS). Receiver Operating Characteristic (ROC) curve analyses were used to determine the accelerometer cut points. To examine the classification differences, accelerometer data from four consecutive days for 114 preschoolers (5.5 +/- 0.3 years) were classified by intensity according to previously published and the newly developed accelerometer cut points. Differences in predicted PA levels were evaluated using repeated measures ANOVA and a chi-square test. Results Cut points were identified at 373 counts/15 s for light (sensitivity: 86%; specificity: 91%; area under ROC curve: 0.95), 585 counts/15 s for moderate (87%; 82%; 0.91) and 881 counts/15 s for vigorous PA (88%; 91%; 0.94). Further, applying various accelerometer cut points to the same data resulted in statistically and biologically significant differences in PA. Conclusions Accelerometer cut points were developed with good discriminatory power for differentiating between PA levels in preschoolers, and the choice of accelerometer cut points can result in large discrepancies.
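Cut-point derivation via ROC analysis amounts to sweeping candidate count thresholds and trading sensitivity against specificity; a toy sketch that picks the threshold maximising Youden's J (all counts and observation labels below are invented, not the calibration data):

```python
import numpy as np

def best_cut_point(counts, is_active, thresholds):
    """Pick the count threshold maximising Youden's J (sens + spec - 1).

    counts: accelerometer counts per epoch.
    is_active: boolean criterion labels from direct observation.
    """
    counts = np.asarray(counts)
    is_active = np.asarray(is_active, dtype=bool)
    best_t, best_j = None, -1.0
    for t in thresholds:
        pred = counts >= t
        sens = (pred & is_active).sum() / is_active.sum()
        spec = (~pred & ~is_active).sum() / (~is_active).sum()
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Invented epochs: counts and whether observation rated them >= moderate
counts = [120, 250, 400, 600, 800, 950, 300, 700]
labels = [0, 0, 0, 1, 1, 1, 0, 1]
t, j = best_cut_point(counts, labels, thresholds=range(100, 1000, 50))
```

In practice the full ROC curve is inspected rather than a single summary, but the sensitivity/specificity pairs reported per cut point come from exactly this kind of threshold sweep.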
Abstract:
Objective To compare the level of agreement in results obtained from four physical activity (PA) measurement instruments that are in use in Australia and around the world. Methods 1,280 randomly selected participants answered two sets of PA questions by telephone. 428 answered the Active Australia (AA) and National Health Surveys (NHS), 427 answered the AA and CDC Behavioural Risk Factor Surveillance System (BRFSS) surveys, and 425 answered the AA survey and the short International Physical Activity Questionnaire (IPAQ). Results Among the three pairs of survey items, the difference in mean total PA time was lowest when the AA and NHS items were asked (difference = 24 (SE: 17) minutes, compared with 144 (SE: 21) minutes for AA/BRFSS and 406 (SE: 27) minutes for AA/IPAQ). Correspondingly, prevalence estimates for 'sufficiently active' were similar for AA and NHS (56% and 55%, respectively), but about 10% higher when BRFSS data were used, and about 26% higher when the IPAQ items were used, compared with estimates from the AA survey. Conclusions The findings clearly demonstrate that there are large differences in reported PA times and hence in prevalence estimates of 'sufficient activity' from these four measures. Implications It is important to consistently use the same survey for population monitoring purposes. As the AA survey has now been used three times in national surveys, its continued use for population surveys is recommended so that trend data over a longer period of time can be established.
Abstract:
Integer ambiguity resolution is an indispensable procedure for all high precision GNSS applications. The correctness of the estimated integer ambiguities is the key to achieving highly reliable positioning, but the solution cannot be validated with classical hypothesis testing methods. The integer aperture estimation theory unifies all existing ambiguity validation tests and provides a new perspective from which to review existing methods, enabling a better understanding of the ambiguity validation problem. This contribution analyses two simple but efficient ambiguity validation tests, the ratio test and the difference test, from three aspects: acceptance region, probability basis and numerical results. The major contributions of this paper can be summarized as follows: (1) The ratio test acceptance region is an overlap of ellipsoids, while the difference test acceptance region is an overlap of half-spaces. (2) The probability basis of these two popular tests is analyzed for the first time. The difference test is an approximation to the optimal integer aperture estimator, while the ratio test follows an exponential relationship in probability. (3) The limitations of the two tests are identified for the first time. Both tests may underestimate the failure risk if the model is not strong enough or the float ambiguities fall in a particular region. (4) Extensive numerical results are used to compare the performance of the two tests. The simulation results show that the ratio test outperforms the difference test in some models, while the difference test performs better in others. In the medium-baseline kinematic model in particular, the difference test outperforms the ratio test; this superiority is independent of frequency number, observation noise and satellite geometry, but depends on the success rate and the failure rate tolerance. A smaller failure rate tolerance leads to a larger performance discrepancy.
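In their simplest fixed-threshold forms, the two tests compare the quadratic-form residuals of the best and second-best integer candidates; a schematic Python sketch (the thresholds c and d are illustrative placeholders, not values endorsed by the paper):

```python
def ratio_test(q_best, q_second, c=2.0):
    """Accept the best integer candidate only if the second-best
    candidate's quadratic residual is at least c times larger."""
    return q_second / q_best >= c

def difference_test(q_best, q_second, d=5.0):
    """Accept only if the residual gap between the two best
    candidates exceeds the threshold d."""
    return q_second - q_best >= d

# Geometrically, fixing c constrains a ratio of quadratic forms
# (hence the overlap-of-ellipsoids acceptance region), while fixing d
# constrains their difference, which is linear in the float solution
# (hence the overlap of half-spaces).
accept_ratio = ratio_test(1.2, 4.8)      # 4.8 / 1.2 = 4.0 >= 2.0
accept_diff = difference_test(1.2, 4.8)  # gap 3.6 < 5.0
```

The paper's point is that neither fixed threshold controls the failure rate uniformly, which is what the integer aperture framework makes explicit.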
Abstract:
Objective: The study aimed to examine the difference in response rates between opt-out and opt-in participant recruitment in a population-based study of heavy-vehicle drivers involved in a police-attended crash. Methods: Two approaches to subject recruitment were implemented in two different states over a 14-week period and the response rates for the two approaches (opt-out versus opt-in recruitment) were compared. Results: Based on the eligible and contactable drivers, the response rates were 54% for the opt-out group and 16% for the opt-in group. Conclusions and Implications: The opt-in recruitment strategy (which was a consequence of one jurisdiction's interpretation of the national Privacy Act at the time) resulted in an insufficient and potentially biased sample for the purposes of conducting research into risk factors for heavy-vehicle crashes. Australia's national Privacy Act 1988 has had a long history of inconsistent interpretation by state and territory government departments and ethical review committees. These inconsistencies can have profound effects on the validity of research, as shown by the significantly different response rates we reported in this study. It is hoped that a more unified interpretation of the Privacy Act across the states and territories, as proposed under the soon-to-be released Australian Privacy Principles, will reduce the recruitment challenges outlined in this study.
Abstract:
Protocols for bioassessment often relate changes in summary metrics that describe aspects of biotic assemblage structure and function to environmental stress. Biotic assessment using multimetric indices now forms the basis for setting regulatory standards for stream quality and for a range of other goals related to water resource management in the USA and elsewhere. Biotic metrics are typically interpreted with reference to the expected natural state to evaluate whether a site is degraded. It is critical that natural variation in biotic metrics along environmental gradients is adequately accounted for, in order to quantify human disturbance-induced change. A common approach used in the index of biotic integrity (IBI) is to examine scatter plots of variation in a given metric along a single stream-size surrogate and fit a line (drawn by eye) to form the upper bound, and hence define the maximum likely value of a given metric at a site with a given environmental characteristic (termed the 'maximum species richness line', MSRL). In this paper we examine whether the use of a single environmental descriptor and the MSRL is appropriate for defining the reference condition for a biotic metric (fish species richness) and for detecting human disturbance gradients in rivers of south-eastern Queensland, Australia. We compare the accuracy and precision of the MSRL approach based on single environmental predictors with three regression-based prediction methods (simple linear regression, generalised linear modelling and regression tree modelling) that use (either singly or in combination) a set of landscape- and local-scale environmental variables as predictors of species richness. We compare the frequency of classification errors from each method against set biocriteria and contrast the ability of each method to accurately reflect human disturbance gradients at a large set of test sites.
The results of this study suggest that the MSRL based upon variation in a single environmental descriptor could not accurately predict species richness at minimally disturbed sites when compared with SLRs based on equivalent environmental variables. Regression-based modelling incorporating multiple environmental variables as predictors explained natural variation in species richness more accurately than did simple models using single environmental predictors. Prediction error arising from the MSRL was substantially higher than for the regression methods and led to an increased frequency of Type I errors (incorrectly classifying a site as disturbed). We suggest that the problems with the MSRL arise from its inherent scoring procedure and from the fact that it is limited to predicting variation in the dependent variable along a single environmental gradient.
Abstract:
1. Biodiversity, water quality and ecosystem processes in streams are known to be influenced by the terrestrial landscape over a range of spatial and temporal scales. Lumped attributes (i.e. per cent land use) are often used to characterise the condition of the catchment; however, they are not spatially explicit and do not account for the disproportionate influence of land located near the stream or connected by overland flow. 2. We compared seven landscape representation metrics to determine whether accounting for the spatial proximity and hydrological effects of land use can be used to account for additional variability in indicators of stream ecosystem health. The landscape metrics included the following: a lumped metric, four inverse-distance-weighted (IDW) metrics based on distance to the stream or survey site and two modified IDW metrics that also accounted for the level of hydrologic activity (HA-IDW). Ecosystem health data were obtained from the Ecological Health Monitoring Programme in Southeast Queensland, Australia and included measures of fish, invertebrates, physicochemistry and nutrients collected during two seasons over 4 years. Linear models were fitted to the stream indicators and landscape metrics, by season, and compared using an information-theoretic approach. 3. Although no single metric was most suitable for modelling all stream indicators, lumped metrics rarely performed as well as other metric types. Metrics based on proximity to the stream (IDW and HA-IDW) were more suitable for modelling fish indicators, while the HA-IDW metric based on proximity to the survey site generally outperformed others for invertebrates, irrespective of season. There was consistent support for metrics based on proximity to the survey site (IDW or HA-IDW) for all physicochemical indicators during the dry season, while a HA-IDW metric based on proximity to the stream was suitable for five of the six physicochemical indicators in the post-wet season. 
Only one nutrient indicator was tested and results showed that catchment area had a significant effect on the relationship between land use metrics and algal stable isotope ratios in both seasons. 4. Spatially explicit methods of landscape representation can clearly improve the predictive ability of many empirical models currently used to study the relationship between landscape, habitat and stream condition. A comparison of different metrics may provide clues about causal pathways and mechanistic processes behind correlative relationships and could be used to target restoration efforts strategically.
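The contrast between a lumped metric and an inverse-distance-weighted one can be illustrated in a few lines; the parcel distances and land-use flags below are invented for illustration:

```python
def idw_landuse_metric(parcels, power=1.0):
    """Inverse-distance-weighted proportion of a target land-use class.

    parcels: iterable of (distance_to_site_m, is_target_landuse) pairs.
    Each parcel is weighted by 1 / distance**power, so land near the
    survey site dominates the metric, whereas a lumped metric weights
    all parcels equally regardless of proximity.
    """
    num = sum(1.0 / d ** power for d, lu in parcels if lu)
    den = sum(1.0 / d ** power for d, _ in parcels)
    return num / den

# Hypothetical parcels: (distance in metres, 1 if cleared/urban land use)
parcels = [(50, 1), (200, 0), (400, 1), (1000, 0)]
weighted = idw_landuse_metric(parcels)
lumped = sum(lu for _, lu in parcels) / len(parcels)  # ignores proximity
```

Here the cleared parcels sit close to the site, so the IDW metric exceeds the lumped proportion; a hydrologically active variant (HA-IDW) would additionally scale each weight by the parcel's level of hydrologic activity.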
Abstract:
Water management is vital for mine sites, both for production and for sustainability-related issues. Effective water management is a complex task since the role of water on mine sites is multifaceted. Computer models are tools that represent mine site water interactions and can be used by mine sites to inform or evaluate their water management strategies. Several types of models can be used to represent mine site water interactions. This paper presents three such models: an operational model, an aggregated systems model and a generic systems model. For each model the paper provides a description and example, followed by an analysis of its advantages and disadvantages. The paper hypothesizes that, since no model is optimal for all situations, each model should be applied in the situations where it is most appropriate based upon the scale of water interactions being investigated: unit (operational), inter-site (aggregated systems) or intra-site (generic systems).
Abstract:
The nonlinear stability analysis introduced by Chen and Haughton [1] is employed to study the full nonlinear stability of the non-homogeneous spherically symmetric deformation of an elastic thick-walled sphere. The shell is composed of an arbitrary homogeneous, incompressible elastic material. The stability criterion ultimately requires the solution of a third-order nonlinear ordinary differential equation. Numerical calculations performed for a wide variety of well-known incompressible materials are then compared with existing bifurcation results and are found to be identical. Further analysis and comparison between stability and bifurcation are conducted for the case of thin shells and we prove by direct calculation that the two criteria are identical for all modes and all materials.
Abstract:
A cylindrical magnetron system and a hybrid inductively coupled plasma-assisted magnetron deposition system were examined experimentally in terms of their discharge characteristics, with a view to highlighting the enhanced controllability of the hybrid system. The comparative study has shown that the hybrid magnetron + inductively coupled plasma system is a flexible, powerful, and convenient tool with clear advantages over cylindrical dc magnetrons. In particular, the hybrid system features more linear current-voltage characteristics and the possibility of bias-independent control of the discharge current.
Abstract:
PURPOSE To compare diffusion-weighted functional magnetic resonance imaging (DfMRI), a novel alternative to the blood oxygenation level-dependent (BOLD) contrast, with BOLD in a functional MRI experiment. MATERIALS AND METHODS Nine participants viewed contrast-reversing (7.5 Hz) black-and-white checkerboard stimuli using block and event-related paradigms. DfMRI (b = 1800 s/mm2) and BOLD sequences were acquired. Four parameters describing the observed signal were assessed: percent signal change, spatial extent of the activation, the Euclidean distance between peak voxel locations, and the time-to-peak (TTP) of the best-fitting impulse response for the different paradigms and sequences. RESULTS The BOLD conditions showed a higher percent signal change relative to DfMRI; however, event-related DfMRI showed the strongest group activation (t = 21.23, P < 0.0005). Activation was more diffuse and spatially closer to the BOLD response for DfMRI when the block design was used. Event-related DfMRI showed the shortest TTP (4.4 +/- 0.88 sec). CONCLUSION The hemodynamic contribution to DfMRI may increase with the use of block designs.
Abstract:
Background The sequencing, de novo assembly and annotation of transcriptome datasets generated with next generation sequencing (NGS) has enabled biologists to answer genomic questions in non-model species with unprecedented ease. Reliable and accurate de novo assembly and annotation of transcriptomes, however, is a critically important step for transcriptome assemblies generated from short read sequences. Typical benchmarks for assembly and annotation reliability have been performed with model species. To address the reliability and accuracy of de novo transcriptome assembly in non-model species, we generated an RNAseq dataset for an intertidal gastropod mollusc species, Nerita melanotragus, and compared the assemblies produced by four different de novo transcriptome assemblers (Velvet, Oases, Geneious and Trinity) across a number of quality metrics and redundancy. Results Transcriptome sequencing on the Ion Torrent PGM™ produced 1,883,624 raw reads with a mean length of 133 base pairs (bp). The Trinity and Oases de novo assemblers produced the best assemblies on all quality metrics, including fewer contigs, higher N50, greater average contig length and longer maximum contig lengths. Overall the BLAST and annotation success of our assemblies was not high, with only 15-19% of contigs assigned a putative function. Conclusions We believe that any improvement in annotation success for gastropod species will require more gastropod genome sequences, and in particular an increase in mollusc protein sequences in public databases. Overall, this paper demonstrates that reliable and accurate de novo transcriptome assemblies can be generated from short read sequencers with the right assembly algorithms. Keywords: Nerita melanotragus; De novo assembly; Transcriptome; Heat shock protein; Ion torrent
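The N50 statistic used above to rank the assemblers has a compact definition; a minimal sketch with toy contig lengths (not the paper's assemblies):

```python
def n50(contig_lengths):
    """N50: the length L such that contigs of length >= L together
    contain at least half of the total assembly length."""
    lengths = sorted(contig_lengths, reverse=True)
    half = sum(lengths) / 2.0
    running = 0
    for length in lengths:
        running += length
        if running >= half:
            return length

# Toy assembly: total length 1500 bp, so half is 750 bp; walking down
# from the longest contig, the cumulative length first reaches 750 bp
# at the 400 bp contig.
print(n50([500, 400, 300, 200, 100]))  # -> 400
```

A higher N50 indicates that more of the assembly is held in long contigs, which is why it is reported alongside contig count and average contig length.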