90 results for Normative validation
Abstract:
Methods for assessing the sustainability of agricultural systems often do not fully (i) take into account the multifunctionality of agriculture, (ii) include multidimensionality, (iii) utilize and implement the assessment knowledge, and (iv) identify conflicting goals and trade-offs. This chapter reviews seven recently developed multidisciplinary indicator-based assessment methods with respect to how they address these shortcomings. All approaches include (1) normative aspects such as goal setting, (2) systemic aspects such as a specification of the scale of analysis, and (3) a reproducible structure. The approaches fall into three typologies: first, top-down farm assessments, which focus on field or farm assessment; second, top-down regional assessments, which assess both on-farm and regional effects; and third, bottom-up, integrated participatory or transdisciplinary approaches, which focus on the regional scale. Our analysis shows that the bottom-up, integrated participatory or transdisciplinary approaches appear to overcome the four shortcomings best.
Abstract:
Objective: Thought–shape fusion (TSF) is a cognitive distortion that has been linked to eating pathology. Two studies were conducted to further explore this phenomenon and to establish the psychometric properties of a French short version of the TSF scale. Method: In Study 1, students (n = 284) completed questionnaires assessing TSF and related psychopathology. In Study 2, the responses of women with eating disorders (n = 22) and women with no history of an eating disorder (n = 23) were compared. Results: The French short version of the TSF scale has a unifactorial structure, good internal consistency, and convergent validity with measures of eating pathology. Depression, eating pathology, body dissatisfaction, and thought–action fusion emerged as predictors of TSF. Women with eating disorders showed higher TSF and more clinically relevant food-related thoughts than women with no history of an eating disorder. Discussion: This research suggests that the shortened TSF scale can suitably measure this construct, and provides support for the notion that TSF is associated with eating pathology.
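The "good internal consistency" reported for the short TSF scale is conventionally quantified with Cronbach's alpha. A minimal sketch of that statistic, using toy item scores rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy data: 4 respondents x 3 items with consistent response patterns.
scores = [[3, 4, 3], [2, 2, 2], [5, 5, 4], [1, 2, 1]]
alpha = cronbach_alpha(scores)  # high alpha: items co-vary strongly
```

Values above roughly 0.7 are usually read as acceptable internal consistency, which is the kind of benchmark a scale-validation study like this one reports against.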
Abstract:
A simple four-dimensional assimilation technique, called Newtonian relaxation, has been applied to the Hamburg climate model (ECHAM) to enable comparison of model output with observations for short periods of time. The prognostic model variables vorticity, divergence, temperature, and surface pressure have been relaxed toward European Centre for Medium-Range Weather Forecasts (ECMWF) global meteorological analyses. Several experiments have been carried out in which the values of the relaxation coefficients were varied to determine which values are most suitable for our purpose. To use the method for validation of model physics or chemistry, good agreement of the model-simulated mass and wind fields with observations is required. In addition, the model physics should not be disturbed too strongly by the relaxation forcing itself. Both aspects have been investigated. Good agreement with basic observed quantities, such as wind, temperature, and pressure, is obtained for most simulations in the extratropics. Derived variables, such as precipitation and evaporation, have been compared with ECMWF forecasts and observations. Agreement for these variables is weaker than for the basic observed quantities; nevertheless, considerable improvement is obtained relative to a control run without assimilation. Differences between the tropics and extratropics are smaller than for the basic observed quantities. Results also show that precipitation and evaporation are affected by a kind of continuous spin-up introduced by the relaxation: the bias (ECMWF minus ECHAM) increases with increasing relaxation forcing. Consistent with this result, we found that with increasing relaxation forcing the vertical exchange of tracers by turbulent boundary layer mixing and, to a lesser extent, by convection is reduced.
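Newtonian relaxation (nudging) adds a forcing term proportional to the difference between the model state and the analysis, dx/dt = F(x) + G(x_analysis − x), with the relaxation coefficient G controlling how strongly the model is pulled toward the analyses. A minimal sketch under assumed toy values (the tendency and coefficients are illustrative, not the ECHAM configuration):

```python
def nudge_step(x, x_analysis, model_tendency, g, dt):
    """One forward-Euler step of Newtonian relaxation:
    dx/dt = F(x) + g * (x_analysis - x),
    where g (1/s) is the relaxation coefficient."""
    return x + dt * (model_tendency(x) + g * (x_analysis - x))

# Toy example: the free model drifts (constant tendency F = 0.01)
# while the analyses sit at 0. A larger g holds the state closer
# to the analysis value, at the cost of a stronger forcing term.
x = 1.0
for _ in range(1000):
    x = nudge_step(x, 0.0, lambda s: 0.01, g=1e-3, dt=100.0)
```

The trade-off the abstract investigates is visible even in this sketch: the equilibrium offset from the analysis scales as F/g, so stronger relaxation tracks the analyses better but perturbs the model physics more.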
Abstract:
Healthcare information systems have the potential to enhance productivity, lower costs, and reduce medication errors by automating business processes. However, various issues, such as system complexity, system capabilities in relation to user requirements, and rapid changes in business needs, affect the use of these systems. In many cases, the failure of a system to meet business process needs has pushed users to develop alternative work processes (workarounds) to fill this gap. Some research has been undertaken on why users are motivated to perform and create workarounds, but very little research has assessed the consequences for patient safety. Moreover, the impact of these workarounds on the organisation, and how to quantify their risks and benefits, is not well analysed. Generally, there is a lack of rigorous understanding and of qualitative and quantitative studies on healthcare IS workarounds and their outcomes. This project applies A Normative Approach for Modelling Workarounds to develop A Model of Motivation, Constraints, and Consequences. It aims to understand the phenomenon in depth and to provide guidelines to organisations on how to deal with workarounds. Finally, the method is demonstrated on a case study example and its relative merits are discussed.
Abstract:
Evaluating chemistry–climate models (CCMs) with the presented framework will increase our confidence in predictions of stratospheric ozone change.
Abstract:
Clinical pathways are widely adopted by many large hospitals around the world to provide high-quality patient treatment and reduce the length and cost of hospital stays. However, most of them are currently static and non-personalized. Our objective is to capture and represent clinical pathways using organizational semiotics methods, including Semantic Analysis, which determines the semantic units in a clinical pathway, their relationships, and their patterns of behavior, and Norm Analysis, which extracts and specifies the norms that establish how and when these medical behaviors occur. Finally, we propose a method to develop a clinical pathway ontology based on the results of Semantic Analysis and Norm Analysis. This approach contributes to the design of personalized clinical pathways by defining a set of possible patterns of behavior and the norms that govern the behavior based on the patient's condition.
Abstract:
This paper assesses the performance of a vocabulary test designed to measure second language productive vocabulary knowledge. The test, Lex30, uses a word association task to elicit vocabulary, and uses word frequency data to measure the vocabulary produced. We first report on the reliability of the test as measured by a test-retest study, a parallel test forms experiment, and an internal consistency measure. We then investigate the construct validity of the test by examining changes in test performance over time, correlations with scores on similar tests, and a comparison of spoken and written test performance. Finally, we examine the theoretical bases of the two main test components: eliciting vocabulary and measuring vocabulary. Interpretations of our findings are discussed in the context of the test validation research literature. We conclude that the findings reported here present a robust argument for the validity of the test as a research tool, and encourage further investigation of its validity in an instructional context.
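The scoring principle behind a frequency-based productive vocabulary measure of this kind is to credit elicited words that fall outside a high-frequency band. A minimal sketch of that idea, where the word list and the one-point scoring rule are illustrative assumptions, not the published Lex30 bands:

```python
# Hypothetical high-frequency band; the real test uses
# corpus-derived frequency lists, not this toy set.
HIGH_FREQ = {"the", "a", "dog", "cat", "run", "eat", "house", "water"}

def frequency_band_score(responses):
    """Award one point per association response that lies
    outside the high-frequency band (case-insensitive)."""
    return sum(1 for w in responses if w.lower() not in HIGH_FREQ)

# Associations elicited by the cue word "dog":
score = frequency_band_score(["dog", "kennel", "leash", "bark"])
# "kennel", "leash", and "bark" fall outside the toy band.
```

The design choice is that infrequent responses signal a larger productive vocabulary, so the score rises with the proportion of low-frequency words produced.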
Abstract:
Introduction. Feature usage is a prerequisite to realising the benefits of investments in feature-rich systems. We propose that conceptualising the dependent variable 'system use' as 'level of use' and specifying it as a formative construct has greater value for measuring the post-adoption use of feature-rich systems. We then validate the content of the construct as a first step in developing a research instrument to measure it. The context of our study is the post-adoption use of electronic medical records (EMR) by primary care physicians. Method. Initially, a literature review of the empirical context defines the scope based on prior studies. Having identified core features from the literature, they are further refined with the help of experts in a consensus-seeking process that follows the Delphi technique. Results. The methodology was successfully applied to EMRs, which were selected as an example of feature-rich systems. A review of EMR usage and regulatory standards provided the feature input for the first round of the Delphi process. A panel of experts then reached consensus after four rounds, identifying ten task-based features that would be indicators of level of use. Conclusions. To study why some users deploy more advanced features than others, theories of post-adoption require a rich formative dependent variable that measures level of use. We have demonstrated that a context-sensitive literature review followed by refinement through a consensus-seeking process is a suitable methodology for validating the content of this dependent variable. This is the first step of instrument development prior to statistical confirmation with a larger sample.
Abstract:
An initial validation of the Along Track Scanning Radiometer (ATSR) Reprocessing for Climate (ARC) retrievals of sea surface temperature (SST) is presented. ATSR-2 and Advanced ATSR (AATSR) SST estimates are compared to drifting buoy and moored buoy observations over the period 1995 to 2008. The primary ATSR estimates are of skin SST, whereas buoys measure SST below the surface. Adjustment is therefore made for the skin effect, for diurnal stratification, and for differences in buoy-satellite observation time. With such adjustments, satellite-minus-in situ differences are consistent between day and night to within ~0.01 K. Satellite-minus-in situ differences are correlated with differences in observation time because of the diurnal warming and cooling of the ocean. The data are used to verify the average behaviour of physical and empirical models of the warming/cooling rates. Systematic differences between adjusted AATSR and in situ SSTs against latitude, total column water vapour (TCWV), and wind speed are less than 0.1 K for all except the most extreme cases (TCWV < 5 kg m–2, TCWV > 60 kg m–2). For all types of retrieval except the nadir-only two-channel (N2), regional biases are less than 0.1 K for 80% of the ocean. Global comparison against drifting buoys shows that night-time dual-view two-channel (D2) SSTs are warm by 0.06 ± 0.23 K and dual-view three-channel (D3) SSTs are warm by 0.06 ± 0.21 K (daytime D2: 0.07 ± 0.23 K). Nadir-only results are N2: 0.03 ± 0.33 K and N3: 0.03 ± 0.19 K, showing improved inter-algorithm consistency of ~0.02 K. This represents a marked improvement over the existing operational retrieval algorithms, for which inter-algorithm inconsistency is > 0.5 K. Comparison against tropical moored buoys, which are more accurate than drifting buoys, gives lower error estimates (N3: 0.02 ± 0.13 K, D2: 0.03 ± 0.18 K). Comparable results are obtained for ATSR-2, except that the ATSR-2 SSTs are around 0.1 K warm compared to AATSR.
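The "bias ± standard deviation" figures quoted above are statistics of satellite-minus-in situ differences computed after adjustment. A minimal sketch of such a matchup comparison, where the constant skin adjustment and the toy matchup values are assumptions standing in for the physical skin-effect and diurnal corrections:

```python
import numpy as np

def matchup_stats(sst_satellite, sst_buoy, skin_adjustment):
    """Mean (bias) and standard deviation of satellite-minus-in situ
    SST differences, after adding a skin-to-depth adjustment (K)."""
    d = (np.asarray(sst_satellite) + skin_adjustment) - np.asarray(sst_buoy)
    return float(np.mean(d)), float(np.std(d))

# Toy matchups (K): satellite skin SSTs read cooler than the
# sub-surface buoy SSTs by a roughly constant skin effect.
sat = np.array([288.0, 290.1, 285.4])
buoy = np.array([288.2, 290.3, 285.6])
bias, sd = matchup_stats(sat, buoy, skin_adjustment=0.2)
# Once the 0.2 K skin effect is accounted for, the bias vanishes.
```

In a real validation the adjustment varies per matchup (wind, time of day, solar heating), which is why the abstract stresses that the day/night consistency only emerges after those corrections.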
Abstract:
Numerical Weather Prediction (NWP) fields are used to assist the detection of cloud in satellite imagery. Simulated observations based on NWP are used within a framework based on Bayes' theorem to calculate a physically based probability of each pixel within an imaged scene being clear or cloudy. Different thresholds can be set on the probabilities to create application-specific cloud masks. Here, this is done over both land and ocean using night-time (infrared) imagery. We use a validation dataset of difficult cloud detection targets for the Spinning Enhanced Visible and Infrared Imager (SEVIRI), achieving true skill scores of 87% and 48% for ocean and land, respectively, using the Bayesian technique, compared to 74% and 39%, respectively, for the threshold-based techniques associated with the validation dataset.
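The per-pixel framework above follows Bayes' theorem: P(clear | observation) ∝ P(observation | clear) × P(clear), with the clear-sky likelihood built from NWP-simulated observations. A minimal sketch under assumed values (the Gaussian clear-sky likelihood, the flat cloudy likelihood, the prior, and all numbers are illustrative, not the operational scheme):

```python
import math

def p_clear(obs_bt, sim_clear_bt, sigma_clear, cloudy_density, prior_clear=0.7):
    """Posterior probability that a pixel is clear, via Bayes' theorem.
    The observed brightness temperature (K) is modelled as Gaussian
    around the NWP-simulated clear-sky value; the cloudy likelihood
    is taken as a flat density (a simplifying assumption)."""
    like_clear = math.exp(-0.5 * ((obs_bt - sim_clear_bt) / sigma_clear) ** 2) \
        / (sigma_clear * math.sqrt(2 * math.pi))
    num = like_clear * prior_clear
    return num / (num + cloudy_density * (1 - prior_clear))

# Observation close to the simulated clear-sky value -> high P(clear).
p = p_clear(obs_bt=287.8, sim_clear_bt=288.0, sigma_clear=0.5,
            cloudy_density=0.02)
mask_cloudy = p < 0.5  # application-specific threshold on the posterior
```

Because the output is a probability rather than a binary flag, the threshold can be moved per application, which is the flexibility the abstract highlights.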
Abstract:
Numerical Weather Prediction (NWP) fields are used to assist the detection of cloud in satellite imagery. Simulated observations based on NWP are used within a framework based on Bayes' theorem to calculate a physically based probability of each pixel within an imaged scene being clear or cloudy. Different thresholds can be set on the probabilities to create application-specific cloud masks. Here, the technique is shown to be suitable for daytime applications over land and sea, using visible and near-infrared imagery in addition to thermal infrared. We use a validation dataset of difficult cloud detection targets for the Spinning Enhanced Visible and Infrared Imager (SEVIRI), achieving true skill scores of 89% and 73% for ocean and land, respectively, using the Bayesian technique, compared to 90% and 70%, respectively, for the threshold-based techniques associated with the validation dataset.
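The true skill score quoted in both cloud-masking abstracts (also known as the Hanssen-Kuipers discriminant or Peirce skill score) is the hit rate minus the false-alarm rate, computed from a 2x2 contingency table of predicted versus observed cloud. A minimal sketch with toy counts:

```python
def true_skill_score(hits, misses, false_alarms, correct_negatives):
    """TSS = hit rate - false-alarm rate.
    hits: cloud predicted and observed; misses: cloud observed but
    not predicted; false_alarms: cloud predicted but not observed;
    correct_negatives: clear predicted and observed."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate - false_alarm_rate

# Toy contingency table for a cloud mask evaluated against targets.
tss = true_skill_score(hits=90, misses=10, false_alarms=5, correct_negatives=95)
# -> 0.90 - 0.05 = 0.85 (random forecasts score 0, perfect ones 1)
```

A useful property for cloud masking is that the TSS is insensitive to the clear/cloudy class balance of the validation targets, so ocean and land scores remain comparable.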