996 results for CLUSTER VALIDATION
Abstract:
Background Wearable monitors are increasingly being used to objectively monitor physical activity in research studies within the field of exercise science. Calibration and validation of these devices are vital to obtaining accurate data. This article is aimed primarily at the physical activity measurement specialist, although end users who conduct studies with these devices may also benefit from knowing about this topic. Best Practices Initially, wearable physical activity monitors should undergo unit calibration to ensure interinstrument reliability. The next step is to simultaneously collect both raw signal data (e.g., acceleration) from the wearable monitors and rates of energy expenditure, so that algorithms can be developed to convert the direct signals into energy expenditure. This process should use multiple wearable monitors and a large and diverse subject group, and should include a wide range of physical activities commonly performed in daily life (from sedentary to vigorous). Future Directions New methods of calibration now use "pattern recognition" approaches to train the algorithms on various activities, and they provide estimates of energy expenditure that are much better than those previously available with the single-regression approach. Once a method of predicting energy expenditure has been established, the next step is to examine its predictive accuracy by cross-validating it in other populations. In this article, we attempt to summarize the best practices for calibration and validation of wearable physical activity monitors. Finally, we conclude with ideas for future research that will move the field of physical activity measurement forward.
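As a concrete illustration of the single-regression calibration step described above, the sketch below fits METs = a + b × counts by ordinary least squares and then applies the fitted equation to held-out bouts, as in cross-validation. All data and the resulting coefficients are hypothetical, not from any cited study.

```python
# Illustrative sketch (not any study's actual code): single-regression
# calibration mapping accelerometer counts/min to energy expenditure (METs).

def fit_linear_calibration(counts, mets):
    """Ordinary least squares for METs = a + b * counts."""
    n = len(counts)
    mean_x = sum(counts) / n
    mean_y = sum(mets) / n
    sxx = sum((x - mean_x) ** 2 for x in counts)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(counts, mets))
    b = sxy / sxx
    a = mean_y - b * mean_x
    return a, b

def predict(a, b, counts):
    """Apply the fitted calibration equation to new counts."""
    return [a + b * x for x in counts]

# Hypothetical calibration data: counts/min and criterion-measured METs.
train_counts = [0, 500, 1500, 3000, 5000, 8000]
train_mets = [1.0, 2.0, 3.5, 5.0, 7.0, 10.0]

a, b = fit_linear_calibration(train_counts, train_mets)

# Cross-validation step: apply the equation to held-out bouts.
holdout_preds = predict(a, b, [1000, 4000])
```

A pattern-recognition approach would replace the single equation with activity-specific models, but the fit-then-cross-validate workflow is the same.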
Abstract:
The purpose of this study was to examine the validity of the 3-Day Physical Activity Recall (3DPAR) self-report instrument in a sample of eighth and ninth grade girls (n = 70, 54.3% white, 37.1% African American). Criterion measures of physical activity were derived using the CSA 7164 accelerometer. Participants wore a CSA monitor for 7 consecutive days and completed the self-report physical activity recall for the last 3 of those days. Self-reported total METs, 30-min blocks of MVPA, and 30-min blocks of VPA were all significantly correlated with analogous CSA variables for 7 days (r = 0.35-0.51; P < 0.01) and 3 days (r = 0.27-0.46; P < 0.05) of monitoring. The results indicate that the 3DPAR is a valid instrument for assessing overall, vigorous, and moderate to vigorous physical activity in adolescent girls.
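The validity coefficients reported above are Pearson correlations between self-reported and accelerometer-derived activity. A minimal sketch with hypothetical values (not the study's data):

```python
# Sketch of the validation statistic: Pearson r between self-report and
# accelerometer criterion measures. Values below are hypothetical.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

self_report_mets = [30, 45, 28, 60, 52, 38, 41]  # hypothetical 3DPAR totals
monitor_mets = [33, 40, 30, 55, 58, 35, 44]      # hypothetical CSA totals
r = pearson_r(self_report_mets, monitor_mets)
```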
Abstract:
Dietitians have reported a lack of confidence in counselling clients with mental health issues. Standardised tools are needed to evaluate programs aiming to improve confidence. The Dietetic Confidence Scale (DCS) was developed to assess dietitians' perception of their capability when working with clients experiencing depression. Exploratory research revealed a 13-item, two-factor model. Dietetic confidence was associated with: 1) Confidence using the Nutrition Care Process; and 2) Confidence in Advocacy for Self-care and Client-care. This study aimed to validate the DCS using this two-factor model. The DCS was administered to 458 dietitians. Confirmatory factor analysis (CFA) assessed the scale's psychometric validity. Reliability was measured using Cronbach's alpha (α) coefficient. CFA results supported the hypothesised two-factor, 13-item model. The Goodness of Fit Index (GFI = 0.95) indicated a strong fit. Item-factor correlations ranged from r = 0.50 to 0.89. The overall scale and subscales showed good reliability (α = 0.76 to 0.93). This is the first study to validate an instrument that measures dietetic confidence in working with clients experiencing depression. The DCS can be used to measure changes in perceived confidence and to identify where further training, mentoring or experience is needed. The findings also suggest that initiatives aimed at building dietitians' confidence in working with clients experiencing depression should focus on improving client-focused nutrition care, fostering advocacy and reflective practice, and encouraging mentoring and professional support networks. Avenues for future research include further validity and reliability testing to expand the generalisability of the results, and modifying the scale for other diseases or client populations.
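The reliability statistic reported here is Cronbach's alpha. A minimal sketch computing it on a hypothetical item-response matrix (not the study's 458-dietitian dataset):

```python
# Sketch of Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) /
# variance of total scores). Responses below are hypothetical Likert data.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of responses per scale item (same respondent order)."""
    k = len(items)
    item_vars = sum(variance(it) for it in items)
    totals = [sum(resp) for resp in zip(*items)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical responses from 6 respondents on 4 Likert items (1-5).
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 5],
    [5, 5, 2, 4, 3, 4],
    [3, 4, 3, 4, 2, 5],
]
alpha = cronbach_alpha(items)
```

Values in the 0.76 to 0.93 range reported above are conventionally read as acceptable to excellent internal consistency.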
Abstract:
Background Promoting participation in physical activity (PA) is an important means of promoting healthy growth and development in children with cerebral palsy (CP). The ActiGraph is a uniaxial accelerometer that provides a real-time measure of PA intensity, duration and frequency. Its small, lightweight design makes it a promising measure of activity in children with CP. To date, no study has validated the use of accelerometry as a measure of PA in ambulant adolescents with CP. Objectives To evaluate the validity of the ActiGraph accelerometer for measuring PA intensity in adolescents with CP, using oxygen consumption (VO2), measured by portable indirect calorimetry (Cosmed K4b2), as the criterion measure. Design Validation study. Participants/Setting Ambulant adolescents with CP aged 10–16 years, with a GMFCS rating of I–III. The recruitment target is 30 (10 in each GMFCS level). Materials/Methods Participants wore the ActiGraph (counts/min) and a Cosmed K4b2 indirect calorimeter (mL/kg/min) during six activity trials: quiet sitting (QS), comfortable paced walking (CPW), brisk paced walking (BPW), fast paced walking (FPW), a ball-kicking protocol (KP) and a ball-throwing protocol (TP). MET levels (multiples of resting metabolism) for each activity were predicted from ActiGraph counts using the Freedson age-specific equation (Freedson et al. 2005) and compared with actual MET levels measured by the Cosmed. Predicted and measured METs for each activity trial were classified as light (>1.5 and <4.6 METs) or moderate to vigorous intensity (≥4.6 METs). Results To date, 36 bouts of activity have been completed (6 participants x 6 activities). Mean VO2 increased linearly as the intensity of the walking activity increased (CPW=9.47±2.16, BPW=14.06±4.38, FPW=19.21±5.68 mL/kg/min), and ActiGraph counts reflected this pattern (CPW=1099±574, BPW=2233±797, FPW=4707±1013 counts/min).
The throwing protocol recorded the lowest VO2 (TP=7.50±3.86 mL/kg/min) and the lowest overall counts (TP=31±27 counts/min). When each of the 36 bouts was classified as either light or moderate to vigorous intensity using measured VO2 as the criterion measure, the Freedson equation correctly classified 28 of 36 bouts (78%). Conclusion/Clinical Implications These preliminary findings suggest that there is a relationship between the intensity of PA and a direct measure of oxygen consumption, and that the ActiGraph may therefore be a promising tool for accurately measuring free-living PA in the community. Further data collection from the complete sample will enable secondary analysis of the relationship between PA and severity of CP (GMFCS level).
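The bout-classification step above can be sketched as follows, using the study's 4.6-MET cutpoint but hypothetical predicted/measured MET pairs (the study's per-bout values are not reported):

```python
# Sketch of classifying bouts as light vs moderate-to-vigorous (MVPA) and
# computing percentage agreement with the criterion. MET pairs are hypothetical.

MVPA_CUTOFF = 4.6  # MET threshold used in the study

def intensity(mets):
    return "MVPA" if mets >= MVPA_CUTOFF else "light"

# Hypothetical (predicted, measured) MET pairs for six bouts.
bouts = [(2.1, 1.9), (3.0, 3.4), (5.2, 4.8), (4.4, 5.0), (6.1, 6.5), (1.3, 1.6)]

agree = sum(intensity(p) == intensity(m) for p, m in bouts)
accuracy = agree / len(bouts)
```

With the full 36-bout sample, the same computation yields the 28/36 (78%) agreement reported above.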
Abstract:
Background Early feeding practices lay the foundation for children’s eating habits and weight gain. Questionnaires are available to assess parental feeding, but overlapping and inconsistent items, subscales and terminology limit conceptual clarity and between-study comparisons. Our aim was to consolidate a range of existing items into a parsimonious and conceptually robust questionnaire for assessing feeding practices with very young children (<3 years). Methods Data were from 462 mothers and children (age 21–27 months) from the NOURISH trial. Items from five questionnaires and two study-specific items were submitted to a priori item selection, allocation and verification, before theoretically derived factors were tested using Confirmatory Factor Analysis. Construct validity of the new factors was examined by correlating these with child eating behaviours and weight. Results Following expert review, 10 factors were specified. Of these, 9 factors (40 items) showed acceptable model fit and internal reliability (Cronbach’s α: 0.61–0.89). Four factors reflected non-responsive feeding practices: ‘Distrust in Appetite’, ‘Reward for Behaviour’, ‘Reward for Eating’, and ‘Persuasive Feeding’. Five factors reflected structure of the meal environment and limits: ‘Structured Meal Setting’, ‘Structured Meal Timing’, ‘Family Meal Setting’, ‘Overt Restriction’ and ‘Covert Restriction’. Feeding practices generally showed the expected pattern of associations with child eating behaviours, but none with weight. Conclusion The Feeding Practices and Structure Questionnaire (FPSQ) provides a new reliable and valid measure of parental feeding practices, specifically maternal responsiveness to children’s hunger/satiety signals as facilitated by routine and structure in feeding. Further validation in more diverse samples is required.
Abstract:
The Climate Change Adaptation for Natural Resource Management (NRM) in East Coast Australia Project aims to foster and support an effective “community of practice” for climate change adaptation within the East Coast Cluster NRM regions that will increase the capacity for adaptation to climate change through enhancements in knowledge and skills and through the establishment of long-term collaborations. It is being delivered by six consortium research partners:
* The University of Queensland (project lead)
* Griffith University
* University of the Sunshine Coast
* CSIRO
* New South Wales Office of Environment and Heritage
* Queensland Department of Science, IT, Innovation and the Arts (Queensland Herbarium).
The project relates to the East Coast Cluster, comprising the six coastal NRM regions and regional bodies between Rockhampton and Sydney:
* Fitzroy Basin Association (FBA)
* Burnett-Mary Regional Group (BMRG)
* SEQ Catchments (SEQC)
* Northern Rivers Catchment Management Authority (CMA) (NRCMA)
* Hunter-Central Rivers CMA (HCRCMA)
* Hawkesbury Nepean CMA (HNCMA).
The aims of this report are to summarise the needs of the regional bodies in relation to NRM planning for climate change adaptation, and to provide a basis for developing the detailed work plan for the research consortium. Two primary methods were used to identify the needs of the regional bodies: (1) document analysis of the existing NRM/Catchment Action Plans (CAPs) and of applications by the regional bodies for funding under Stream 1 of the Regional NRM Planning for Climate Change Fund; and (2) a needs analysis workshop, held in May 2013, involving representatives from the research consortium partners and the regional bodies. The East Coast Cluster includes five of the ten largest significant urban areas in Australia, world heritage listed natural environments, significant agriculture, mining and extensive grazing.
The three NSW CMAs have recently completed strategic level CAPs, with implementation plans to be finalised in 2014/2015. SEQC and FBA are beginning a review of their existing NRM Plans, to be completed in 2014 and 2015 respectively; while BMRG is aiming to produce a NRM and Climate Variability Action Strategy. The regional bodies will receive funding from the Australian Government through the Regional NRM Planning for Climate Change Fund (NRM Fund) to improve regional planning for climate change and help guide the location of carbon and biodiversity activities, including wildlife corridors. The bulk of the funding will be available for activities in 2013/2014, with smaller amounts available in subsequent years. Most regional bodies aim to have a large proportion of the planning work complete by the end of 2014. In addition, NSW CMAs are undergoing major structural change and will be incorporated into semi‐autonomous statutory Local Land Services bodies from 2014. Boundaries will align with local government boundaries and there will be significant change in staff and structures. The regional bodies in the cluster have a varying degree of climate knowledge. All plans recognise climate change as a key driver of change, but there are few specific actions or targets addressing climate change. Regional bodies also have varying capacity to analyse large volumes of spatial or modelling data. Due to the complex nature of natural resource management, all regional bodies work with key stakeholders (e.g. local government, industry groups, and community groups) to deliver NRM outcomes. Regional bodies therefore require project outputs that can be used directly in stakeholder engagement activities, and are likely to require some form of capacity building associated with each of the outputs to maximise uptake. 
Some of the immediate needs of the regional bodies are a summary of the information and tools that can be used immediately, and a summary of the key outputs and milestone dates for the project, to facilitate alignment of planning activities with research outputs. A project framework is useful to show the linkages between research elements and the relevance of the research to the adaptive management cycle for NRM planning in which the regional bodies are engaged. A draft framework is proposed to stimulate and promote discussion on research elements and linkages; this will be refined during and following the development of the detailed project work plan. The regional bodies strongly emphasised the need to incorporate a shift to a systems-based resilience approach to NRM planning, and that approach is included in the framework. The regional bodies identified that information on climate projections would be most useful at regional and subregional scales, to feed into scenario planning and impact analysis. Outputs should be ‘engagement ready’, and there is a need for capacity building to enable regional bodies to understand and use the projections in stakeholder engagement. There was interest in understanding the impacts of climate change projections on ecosystems (e.g. ecosystem shift), and the consequent impacts on the production of ecosystem services. It was emphasised that any modelling should be usable by the regional bodies with their stakeholders to allow for community input (i.e. no black-box models). The online regrowth benefits tool was of great interest to the regional bodies, as spatial mapping of carbon farming opportunities would be relevant to their funding requirements. The NSW CMAs identified an interest in development of the tool for NSW vegetation types. Needs relating to socio-economic information included understanding the socio-economic determinants of carbon farming uptake and managing community expectations.
A need was also identified to understand the vulnerability of industry groups as well as the community to climate change impacts, and in particular to understand how changes in the flow of ecosystem services would interact with the vulnerability of these groups to impact on the linked ecological and socio-economic system. Responses to disasters (particularly flooding and storm surge) and recovery responses were also identified as being of interest. An ecosystem services framework was highlighted as a useful approach to synthesising biophysical and socio-economic information in the context of a systems-based, resilience approach to NRM planning. A need was identified to develop processes to move towards such an approach to NRM planning from the current asset management approach. Examples of best practice in incorporating climate science into planning, using scenarios for stakeholder engagement in planning, and processes for institutionalising learning were also identified as cross-cutting needs. The over-arching theme identified was the need for capacity building for the NRM bodies to best use the information available at any point in time. To this end, a planners working group has been established to support the building of a network of informed and articulate NRM agents with knowledge of current climate science and the capacity to use current tools to engage stakeholders in NRM planning for climate change adaptation. The planners working group would form the core group of the community of practice, with the broader group of stakeholders participating when activities aligned with their interests. In this way, it is anticipated that the Project will contribute to building capacity within the wider community to effectively plan for climate change adaptation.
Abstract:
Hospital disaster resilience can be defined as “the ability of hospitals to resist, absorb, and respond to the shock of disasters while maintaining and surging essential health services, and then to recover to its original state or adapt to a new one.” This article aims to provide a framework which can be used to comprehensively measure hospital disaster resilience. An evaluation framework for assessing hospital resilience was initially proposed through a systematic literature review and Modified-Delphi consultation. Eight key domains were identified: hospital safety, command, communication and cooperation system, disaster plan, resource stockpile, staff capability, disaster training and drills, emergency services and surge capability, and recovery and adaptation. The data for this study were collected from 41 tertiary hospitals in Shandong Province in China, using a specially designed questionnaire. Factor analysis was conducted to determine the underpinning structure of the framework. It identified a four-factor structure of hospital resilience, namely, emergency medical response capability (F1), disaster management mechanisms (F2), hospital infrastructural safety (F3), and disaster resources (F4). These factors displayed good internal consistency. The overall level of hospital disaster resilience (F) was calculated using the scoring model: F = 0.615F1 + 0.202F2 + 0.103F3 + 0.080F4. This validated framework provides a new way to operationalise the concept of hospital resilience, and it is also a foundation for the further development of the measurement instrument in future studies.
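The article's scoring model, F = 0.615F1 + 0.202F2 + 0.103F3 + 0.080F4, can be applied directly; the factor scores in this sketch are hypothetical:

```python
# The article's weighted scoring model for overall hospital disaster
# resilience, applied to hypothetical factor scores for one hospital.

WEIGHTS = {"F1": 0.615, "F2": 0.202, "F3": 0.103, "F4": 0.080}

def resilience_score(factor_scores):
    """Weighted sum: F = 0.615*F1 + 0.202*F2 + 0.103*F3 + 0.080*F4."""
    return sum(WEIGHTS[k] * factor_scores[k] for k in WEIGHTS)

# Hypothetical factor scores (0-100 scale) for one hospital.
scores = {"F1": 70.0, "F2": 80.0, "F3": 90.0, "F4": 60.0}
F = resilience_score(scores)
```

The weights make clear that emergency medical response capability (F1) dominates the overall score in this model.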
Abstract:
This article presents field applications and validations of the controlled Monte Carlo data generation scheme. This scheme was previously derived to assist the Mahalanobis squared distance–based damage identification method in coping with data-shortage problems, which often cause inadequate data multinormality and unreliable identification outcomes. To do so, real vibration datasets from two actual civil engineering structures with such data (and identification) problems are selected as the test objects, which are then shown to be in need of enhancement to consolidate their conditions. By utilizing the robust probability measures of the data condition indices in controlled Monte Carlo data generation and a statistical sensitivity analysis of the Mahalanobis squared distance computational system, well-conditioned synthetic data generated by an optimal controlled Monte Carlo data generation configuration can be evaluated without bias against those generated by other set-ups and against the original data. The analysis results reconfirm that controlled Monte Carlo data generation is able to overcome the shortage of observations, improve data multinormality and enhance the reliability of the Mahalanobis squared distance–based damage identification method, particularly with respect to false-positive errors. The results also highlight the dynamic structure of controlled Monte Carlo data generation, which makes the scheme well adapted to any type of input data with any (original) distributional condition.
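The statistic at the core of the damage identification method is the Mahalanobis squared distance of a new observation from a baseline (undamaged) dataset. A self-contained two-feature sketch with hypothetical data and a hand-coded 2x2 covariance inverse:

```python
# Sketch of the Mahalanobis squared distance D^2 = (x - mu)^T S^-1 (x - mu)
# for two features. Baseline and test observations are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

def covariance_2d(data):
    """Sample covariance matrix of a list of (x, y) observations."""
    xs, ys = zip(*data)
    mx, my = mean(xs), mean(ys)
    n = len(data) - 1
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in data) / n
    return [[sxx, sxy], [sxy, syy]]

def mahalanobis_sq(obs, data):
    mx = mean([p[0] for p in data])
    my = mean([p[1] for p in data])
    (a, b), (_, d) = covariance_2d(data)
    det = a * d - b * b
    inv = [[d / det, -b / det], [-b / det, a / det]]  # 2x2 matrix inverse
    dx, dy = obs[0] - mx, obs[1] - my
    return dx * (inv[0][0] * dx + inv[0][1] * dy) + dy * (inv[1][0] * dx + inv[1][1] * dy)

# Hypothetical baseline (undamaged) feature vectors and two test observations.
baseline = [(1.0, 2.0), (1.2, 2.1), (0.9, 1.8), (1.1, 2.2), (1.0, 1.9)]
d2_near = mahalanobis_sq((1.05, 2.0), baseline)  # consistent with baseline
d2_far = mahalanobis_sq((3.0, 0.5), baseline)    # would flag possible damage
```

With too few baseline observations the covariance estimate degenerates, which is exactly the data-shortage problem the controlled Monte Carlo generation scheme is designed to mitigate by augmenting the baseline with well-conditioned synthetic data.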
Abstract:
The controlled growth of ultra-small Ge/Si quantum dot (QD) nuclei (≈1 nm) suitable for the synthesis of uniform nanopatterns with high surface coverage, is simulated using atom-only and size non-uniform cluster fluxes. It is found that seed nuclei of more uniform sizes are formed when clusters of non-uniform size are deposited. This counter-intuitive result is explained via adatom-nanocluster interactions on Si(100) surfaces. Our results are supported by experimental data on the geometric characteristics of QD patterns synthesized by nanocluster deposition. This is followed by a description of the role of plasmas as non-uniform cluster sources and the impact on surface dynamics. The technique challenges conventional growth modes and is promising for deterministic synthesis of nanodot arrays.
Abstract:
Cluster ions and charged and neutral nanoparticle concentrations were monitored using a neutral cluster and air ion spectrometer (NAIS) over a period of one year in Brisbane, Australia. The study yielded 242 complete days of usable data, of which particle formation events were observed on 101 days. Small, intermediate and large ion concentrations were evaluated in real time. In the diurnal cycle, the small ion concentration was highest during the second half of the night, while large ion concentrations were at a maximum during the day. The small ion concentration decreased when the large ion concentration increased. Particle formation was generally followed by a peak in the intermediate ion concentration. The rate of increase of intermediate ions was used as the criterion for identifying particle formation events. Such events were followed by a period of growth to larger sizes and usually occurred between 8 am and 2 pm. Particle formation events were found to be related to the wind direction. The gaseous precursors for the production of secondary particles in the urban environment of Brisbane have been shown to be ammonia and sulfuric acid. During these events, the nanoparticle number concentrations in the size range 1.6 to 42 nm, which were normally lower than 1×10^4 cm^-3, often exceeded 5×10^4 cm^-3, with occasional values over 1×10^5 cm^-3. Cluster ions generally occurred in number concentrations between 300 and 600 cm^-3 but decreased significantly, to about 200 cm^-3, during particle formation events. This was accompanied by an increase in the large ion concentration. We calculated the fraction of nanoparticles that were charged and investigated the occurrence of possible overcharging during particle formation events. Overcharging is defined as the condition where the charged fraction of particles is higher than in charge equilibrium.
This can occur when cluster ions attach to neutral particles in the atmosphere, giving rise to larger concentrations of charged particles in the short term. Ion-induced nucleation is one of the mechanisms of particle formation in the atmosphere, and overcharging has previously been considered as an indicator of this process. The possible role of ions in particle formation was investigated.
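The overcharging check described above reduces to comparing a measured charged fraction with its charge-equilibrium value. A sketch with hypothetical number concentrations and an assumed equilibrium fraction (the true equilibrium fraction depends on particle size and would be computed from charging theory):

```python
# Sketch of the overcharging check: a measured charged fraction above the
# equilibrium charged fraction indicates overcharging. Concentrations and
# the equilibrium value below are hypothetical.

def charged_fraction(n_charged, n_neutral):
    """Fraction of particles carrying charge."""
    return n_charged / (n_charged + n_neutral)

EQUILIBRIUM_FRACTION = 0.05  # assumed size-dependent equilibrium value

def is_overcharged(n_charged, n_neutral):
    return charged_fraction(n_charged, n_neutral) > EQUILIBRIUM_FRACTION

# Hypothetical number concentrations (cm^-3) during a formation event.
overcharged = is_overcharged(n_charged=4000, n_neutral=46000)
```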
Abstract:
The design and development of process-aware information systems is often supported by specifying requirements as business process models. Although this approach is generally accepted as an effective strategy, it remains a fundamental challenge to adequately validate these models given the diverging skill set of domain experts and system analysts. As domain experts often do not feel confident in judging the correctness and completeness of process models that system analysts create, the validation often has to regress to a discourse using natural language. In order to support such a discourse appropriately, so-called verbalization techniques have been defined for different types of conceptual models. However, there is currently no sophisticated technique available that is capable of generating natural-looking text from process models. In this paper, we address this research gap and propose a technique for generating natural language texts from business process models. A comparison with manually created process descriptions demonstrates that the generated texts are superior in terms of completeness, structure, and linguistic complexity. An evaluation with users further demonstrates that the texts are very understandable and effectively allow the reader to infer the process model semantics. Hence, the generated texts represent a useful input for process model validation.
Abstract:
The present study explores reproducing the closest geometry of the high pressure ratio single-stage radial-inflow turbine used in the Sundstrand Power Systems T-100 Multipurpose Small Power Unit. The commercial software ANSYS-Vista RTD, along with its built-in module BladeGen, is used to conduct a meanline design and create the 3D geometry of one flow passage. After the proposed design is carefully examined against the geometrical and experimental data, ANSYS-TurboGrid is applied to generate the computational mesh. CFD simulations are performed with ANSYS-CFX, in which the three-dimensional Reynolds-averaged Navier-Stokes equations are solved subject to appropriate boundary conditions. Results are compared with numerical and experimental data published in the literature in order to reproduce the exact geometry of the existing turbine and validate the numerical results against the experimental ones.
Abstract:
Singapore is located near the equator, with an abundant supply of solar radiation and relatively high ambient temperature and relative humidity throughout the year. The meteorological conditions of Singapore are favourable for the efficient operation of solar energy based systems. Solar assisted heat pump systems have been built on the rooftop of the National University of Singapore’s Faculty of Engineering. The objectives of this study include the design and performance evaluation of a solar assisted heat-pump system for water desalination, water heating and the drying of clothes. Using the MATLAB programming language, a 2-dimensional simulation model has been developed to conduct parametric studies on the system. The system shows good prospects for implementation in both industrial and residential applications and would open new opportunities for replacing conventional energy sources with green renewable energy.
Abstract:
Computational models in physiology often integrate functional and structural information from a large range of spatio-temporal scales from the ionic to the whole organ level. Their sophistication raises both expectations and scepticism concerning how computational methods can improve our understanding of living organisms and also how they can reduce, replace and refine animal experiments. A fundamental requirement to fulfil these expectations and achieve the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present study aims at informing strategies for validation by elucidating the complex interrelations between experiments, models and simulations in cardiac electrophysiology. We describe the processes, data and knowledge involved in the construction of whole ventricular multiscale models of cardiac electrophysiology. Our analysis reveals that models, simulations, and experiments are intertwined, in an assemblage that is a system itself, namely the model-simulation-experiment (MSE) system. Validation must therefore take into account the complex interplay between models, simulations and experiments. Key points for developing strategies for validation are: 1) understanding sources of bio-variability is crucial to the comparison between simulation and experimental results; 2) robustness of techniques and tools is a pre-requisite to conducting physiological investigations using the MSE system; 3) definition and adoption of standards facilitates interoperability of experiments, models and simulations; 4) physiological validation must be understood as an iterative process that defines the specific aspects of electrophysiology the MSE system targets, and is driven by advancements in experimental and computational methods and the combination of both.