997 results for Summed estimation scales


Relevance:

100.00%

Publisher:

Abstract:

This paper presents a methodological reflection on the use of summated rating (Likert) scales in the evaluation of teaching performance in the university context. It presents background on the technical prescriptions for this type of scaling, as well as a set of observations on the appropriateness of applying it for evaluative purposes and on its limitations as a tool for generating knowledge. It concludes that the Likert scale can be used in evaluative contexts, provided that the requirements tied to its application and to its analytic-interpretive treatment are met and that its insurmountable problems are acknowledged, so that the construction of the numerical datum is weighed and kept in perspective. In doing so, the paper makes explicit a critique of the "quantophrenic" and "artefactual" character that accompanies its application and that, contradictorily, is inscribed in a discourse placing teaching evaluation within quality policies in higher education.
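
As a minimal sketch of the instrument under discussion, the snippet below builds a summated (Likert) score from a handful of rating items and flags the ordinal-data caveat the abstract raises. The item names and the 1-5 response format are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a summated (Likert) rating scale, as discussed above.
# Item names and the 1-5 response format are illustrative assumptions,
# not taken from the paper.
from statistics import mean

# One student's ratings of an instructor on five hypothetical items
# (1 = strongly disagree ... 5 = strongly agree)
responses = {
    "explains_clearly": 4,
    "answers_questions": 5,
    "organised_sessions": 3,
    "fair_assessment": 4,
    "available_to_students": 2,
}

# Negatively worded items would be reverse-scored before summing (none here).
summed_score = sum(responses.values())   # summated score: 5 (min) to 25 (max)
item_mean = mean(responses.values())     # often reported instead of the raw sum

print(f"Summated score: {summed_score}/25, item mean: {item_mean:.2f}")
# Caveat raised in the abstract: the responses are ordinal, so arithmetic on the
# summed score (means, rankings of instructors) assumes an interval structure
# that the instrument itself does not guarantee.
```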

Relevance:

40.00%

Publisher:

Abstract:

This paper proposes and applies an alternative demographic procedure for extending a demand system to allow for the effect of household size and composition changes, along with price changes, on expenditure allocation. The demographic procedure is applied to two recent demand functional forms to obtain their estimable demographic extensions. The estimation on pooled time series of Australian Household Expenditure Surveys yields sensible and robust estimates of the equivalence scale, and of its variation with relative prices. Further evidence on the usefulness of this procedure is provided by using it to evaluate the nature and magnitude of the inequality bias of relative price changes in Australia over a period from the late 1980s to the early part of the new millennium.
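
As a hedged illustration of an equivalence scale that varies with relative prices, the sketch below uses a Barten-scaled Stone-Geary (linear expenditure system) cost function; this is not the demographic procedure or the demand functional forms estimated in the paper, and all parameter values are invented.

```python
# Hedged illustration of a price-dependent equivalence scale using a
# Barten-scaled Stone-Geary (LES) cost function. This is NOT the demographic
# procedure or the demand systems estimated in the paper; parameters are invented.
import numpy as np

beta = np.array([0.3, 0.5, 0.2])          # budget-share parameters, sum to 1 (assumed)
gamma = np.array([200.0, 400.0, 100.0])   # subsistence quantities (assumed)

def cost(u, prices, m):
    """Cost of utility u at given prices, with Barten demographic scales m (per good)."""
    p = prices * m
    return p @ gamma + u * np.prod(p ** beta)

def equivalence_scale(u, prices, m_household, m_reference):
    """Ratio of household cost to reference-household cost at common utility u."""
    return cost(u, prices, m_household) / cost(u, prices, m_reference)

m_ref = np.ones(3)                        # single adult (reference household)
m_kids = np.array([1.6, 1.2, 1.3])        # couple with children; food scaled up most (assumed)

for prices in (np.array([1.0, 1.0, 1.0]), np.array([1.5, 1.0, 1.0])):
    es = equivalence_scale(u=50.0, prices=prices, m_household=m_kids, m_reference=m_ref)
    print(f"prices={prices}, equivalence scale={es:.3f}")
# The scale changes when the relative price of the child-intensive good rises,
# which is the kind of price sensitivity the paper estimates from the pooled
# Household Expenditure Surveys.
```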

Relevance:

30.00%

Publisher:

Abstract:

Ecological studies are based on characteristics of groups of individuals and are common in various disciplines, including epidemiology. It is of great interest for epidemiologists to study the geographical variation of a disease while accounting for the positive spatial dependence between neighbouring areas. However, the choice of scale for the spatial correlation requires much attention. In view of a lack of studies in this area, this study aims to investigate the impact of differing definitions of geographical scale using a multilevel model. We propose a new approach, grid-based partitions, and compare it with the popular census-region approach. Unexplained geographical variation is accounted for via area-specific unstructured random effects and spatially structured random effects specified as an intrinsic conditional autoregressive process. Using grid-based modelling of random effects, in contrast to the census-region approach, we illustrate conditions under which improvements are observed in the estimation of the linear predictor, random effects and parameters, and in the identification of the distribution of residual risk and aggregate risk in a study region. The study found that grid-based modelling is a valuable approach for spatially sparse data, while the SLA-based (census-region) and grid-based approaches perform equally well for spatially dense data.
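
The sketch below illustrates, under invented coordinates and grid size, the two ingredients contrasted in the abstract: assigning observations to grid cells rather than census regions, and building the neighbourhood (adjacency) structure that an intrinsic CAR prior on the spatially structured random effects requires.

```python
# Hedged sketch: grid-based partition of point data and the ICAR precision
# structure used for spatially structured random effects. Cell size and
# coordinates are invented; the paper's full multilevel models are not
# reproduced here.
import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(500, 2))     # study-region coordinates (km), assumed

cell = 20.0                                  # grid spacing (km), assumed
nx = ny = int(100 / cell)
col = np.minimum((xy[:, 0] // cell).astype(int), nx - 1)
row = np.minimum((xy[:, 1] // cell).astype(int), ny - 1)
area_id = row * nx + col                     # grid-cell (area) label for each observation

# Rook adjacency between grid cells -> ICAR precision matrix Q = D - W,
# where W is the 0/1 adjacency matrix and D its row sums.
n_areas = nx * ny
W = np.zeros((n_areas, n_areas))
for r in range(ny):
    for c in range(nx):
        i = r * nx + c
        for dr, dc in ((1, 0), (0, 1)):
            rr, cc = r + dr, c + dc
            if rr < ny and cc < nx:
                j = rr * nx + cc
                W[i, j] = W[j, i] = 1.0
Q = np.diag(W.sum(axis=1)) - W               # intrinsic CAR precision (rank-deficient by 1)

print("observations per cell:", np.bincount(area_id, minlength=n_areas))
print("ICAR precision shape:", Q.shape, "rank:", np.linalg.matrix_rank(Q))
```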

Relevance:

30.00%

Publisher:

Abstract:

It is essential to accurately estimate the working set size (WSS) of an application for various optimizations, such as partitioning cache among virtual machines or reducing the leakage power dissipated by an over-allocated cache by switching part of it off. However, state-of-the-art heuristics such as average memory access latency (AMAL) or cache miss ratio (CMR) are poorly correlated with the WSS of an application because of 1) over-sized caches and 2) their dispersed nature. Past studies focus on estimating the WSS of an application executing on a uniprocessor platform. Estimating the same for a chip multiprocessor (CMP) with a large dispersed cache is challenging due to the presence of concurrently executing threads/processes. Hence, we propose a scalable, highly accurate method to estimate the WSS of an application, which we call the "tagged WSS (TWSS)" estimation method. We demonstrate the use of TWSS to switch off over-allocated cache ways in Static and Dynamic Non-Uniform Cache Architectures (SNUCA, DNUCA) on a tiled CMP. In our implementation of adaptable-way SNUCA and DNUCA caches, the decision to alter associativity is taken by each L2 controller; hence, the approach scales with the number of cores on a CMP. It gives overall (geometric-mean) energy-delay-product savings 26% and 19% higher than the AMAL and CMR heuristics on SNUCA, respectively.
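
As a software-level sketch of the underlying idea, the snippet below approximates the WSS over an interval by counting the distinct cache-line-sized blocks an address trace touches; the tagged, per-L2-bank hardware mechanism (TWSS) described above is not modelled, and the trace and parameters are invented.

```python
# Hedged sketch of working-set-size (WSS) estimation from an address trace:
# the WSS over an interval is approximated by the number of distinct
# cache-line-sized blocks touched. The tagged hardware mechanism (TWSS) in the
# paper is not modelled here; the trace and parameters are invented.
LINE_SIZE = 64          # bytes per cache line (assumed)
INTERVAL = 10_000       # accesses per measurement interval (assumed)

def wss_per_interval(address_trace):
    """Yield, for each interval, an estimate of the WSS in bytes."""
    lines_touched = set()
    for i, addr in enumerate(address_trace, start=1):
        lines_touched.add(addr // LINE_SIZE)
        if i % INTERVAL == 0:
            yield len(lines_touched) * LINE_SIZE
            lines_touched.clear()

# Tiny synthetic trace: a loop repeatedly scanning a 256 KiB array.
trace = [base for _ in range(40) for base in range(0, 256 * 1024, LINE_SIZE)]
for k, wss in enumerate(wss_per_interval(trace)):
    if k < 3:
        print(f"interval {k}: estimated WSS = {wss // 1024} KiB")
# A cache controller could compare such estimates with the allocated capacity
# and, as in the paper, power down (switch off) ways in excess of the WSS.
```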

Relevance:

30.00%

Publisher:

Abstract:


Abstract: Psychometric properties of two self-report clinical competence scales for nursing students.
Background: It is important to assess the clinical competence of nursing students to gauge their professional development and educational needs. This can be measured by self-assessment tools. Anema and McCoy (2010) contended that the currently available measures need further psychometric testing.
Aim: To test the psychometric properties of Nursing Competencies Questionnaire (NCQ) and Self-Efficacy in Clinical Performance (SECP) clinical competence scales.

Method: A non-randomly selected sample of n=248 second-year nursing students completed the NCQ, SECP and demographic questionnaires (June and September 2013). Mokken Scaling Analysis (MSA) was used to test structural validity and scale properties; convergent and discriminant validity and reliability were subsequently tested.

Results: The NCQ provided evidence of a unidimensional scale with a strong scale scalability coefficient (Hs = 0.581) but limited evidence of item rankability (HT = 0.367). MSA of the SECP identified two potential unidimensional scales, the SECP28 and SECP7, each with adequate evidence of good/reasonable scalability as a summed scale but little or no evidence of scale rankability (SECP28: Hs = 0.55, HT = 0.211; SECP7: Hs = 0.61, HT = 0.049). Analysis of between-cohort differences and NCQ/SECP scale scores produced evidence of convergent and discriminant validity and good internal reliability: NCQ α = 0.93, SECP28 α = 0.96, SECP7 α = 0.89 (see the sketch after the references below).

Discussion: The NCQ was verified to have evidence of reliability and validity; however, as the SECP findings are new, and the sample small, with reference to Straat and colleagues (2014), the SECP results should be interpreted with caution and verified on a second sample.

Conclusions: Measurement of perceived self-competence could inform the development of nursing competence and could start early in a nursing programme. Further testing of the NCQ and SECP scales with larger samples and from different years is indicated.


References:
Anema, M.G. and McCoy, J.K. (2010) Competency-Based Nursing Education: Guide to Achieving Outstanding Learner Outcomes. New York: Springer.
Straat, J.H., van der Ark, L.A. and Sijtsma, K. (2014) Minimum sample size requirements for Mokken scale analysis. Educational and Psychological Measurement 74(5), 809-822.
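
The sketch below computes Cronbach's alpha, the internal-consistency statistic reported for the NCQ and SECP above, from a matrix of item scores; the data are simulated, and the Mokken scalability coefficients (H) in the Results are normally obtained with dedicated software (e.g. the R package mokken) rather than computed by hand.

```python
# Minimal sketch: Cronbach's alpha for a summed scale, the internal-consistency
# statistic reported for the NCQ and SECP above. The item scores are invented;
# the Mokken scalability coefficients (H) are not computed here.
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=200)      # latent competence, invented
scores = np.clip(np.round(3 + ability[:, None] + rng.normal(scale=0.8, size=(200, 10))), 1, 5)

print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```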

Relevance:

30.00%

Publisher:

Abstract:

The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling scheme with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence lag. By accumulating the components starting from the shortest lag one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys: one in which the design had four stages and was balanced, and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper. (c) 2005 Elsevier Ltd. All rights reserved.
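
For the balanced case described above, the sketch below simulates a three-stage nested sample, estimates the components of variance by hierarchical ANOVA and accumulates them from the shortest lag to form the rough variogram; the design sizes, lags and data are invented, and the unbalanced/REML analysis is not reproduced.

```python
# Hedged sketch of the balanced-design case: a hierarchical ANOVA on a
# three-stage nested sample, with the variance components accumulated from the
# shortest lag to give a rough variogram. Data, lags and design sizes invented.
import numpy as np

rng = np.random.default_rng(2)
n1, n2, n3 = 9, 3, 3                 # stage sizes (assumed): 9 x 3 x 3 = 81 samples
lags = {"stage 1": 600.0, "stage 2": 60.0, "stage 3": 6.0}   # separating distances (m), assumed
true = {"stage 1": 1.0, "stage 2": 0.5, "stage 3": 0.25}     # components used to simulate

a = rng.normal(scale=np.sqrt(true["stage 1"]), size=n1)
b = rng.normal(scale=np.sqrt(true["stage 2"]), size=(n1, n2))
e = rng.normal(scale=np.sqrt(true["stage 3"]), size=(n1, n2, n3))
y = a[:, None, None] + b[:, :, None] + e      # balanced nested data array

# Hierarchical ANOVA mean squares and the usual moment estimators of components
ms1 = n2 * n3 * ((y.mean(axis=(1, 2)) - y.mean()) ** 2).sum() / (n1 - 1)
ms2 = n3 * ((y.mean(axis=2) - y.mean(axis=(1, 2))[:, None]) ** 2).sum() / (n1 * (n2 - 1))
ms3 = ((y - y.mean(axis=2)[:, :, None]) ** 2).sum() / (n1 * n2 * (n3 - 1))
comp = {"stage 3": ms3, "stage 2": (ms2 - ms3) / n3, "stage 1": (ms1 - ms2) / (n2 * n3)}

# Rough variogram: accumulate the components starting from the shortest lag
gamma, rough_variogram = 0.0, {}
for stage in ("stage 3", "stage 2", "stage 1"):
    gamma += comp[stage]
    rough_variogram[lags[stage]] = gamma
print({f"{h} m": round(g, 3) for h, g in rough_variogram.items()})
```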

Relevance:

30.00%

Publisher:

Abstract:

Estimating snow mass at continental scales is difficult but important for understanding land-atmosphere interactions, biogeochemical cycles and Northern latitudes' hydrology. Remote sensing provides the only consistent global observations, but the uncertainty in measurements is poorly understood. Existing techniques for the remote sensing of snow mass are based on the Chang algorithm, which relates the absorption of Earth-emitted microwave radiation by a snow layer to the snow mass within the layer. The absorption also depends on other factors, such as the snow grain size and density, which are assumed and fixed within the algorithm. We examine the assumptions, compare them to field measurements made at the NASA Cold Land Processes Experiment (CLPX) Colorado field site in 2002–3, and evaluate the consequences of deviation and variability for snow mass retrieval. The emission model used to devise the algorithm also affects its accuracy, so we test this with the CLPX measurements of snow properties against SSM/I and AMSR-E satellite measurements.
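
As a hedged sketch of a Chang-type retrieval, the snippet below converts the difference between the 19 GHz and 37 GHz horizontally polarised brightness temperatures into snow water equivalent; the proportionality coefficient is a commonly quoted value tied to fixed density and grain-size assumptions, and both it and the example temperatures should be treated as illustrative.

```python
# Hedged sketch of a Chang-type retrieval: snow water equivalent (SWE) is taken
# to be proportional to the difference between the 19 GHz and 37 GHz
# horizontally polarised brightness temperatures. The coefficient below is a
# commonly quoted value tied to the fixed density and grain-size assumptions
# discussed above; treat it (and the example temperatures) as illustrative.
def chang_swe_mm(tb19h_k, tb37h_k, coeff_mm_per_k=4.8):
    """SWE in mm from the 19H-37H brightness-temperature difference (K)."""
    return max(coeff_mm_per_k * (tb19h_k - tb37h_k), 0.0)

# Illustrative brightness temperatures (K), e.g. as might be taken from
# SSM/I or AMSR-E channels over the CLPX Colorado site.
print(f"retrieved SWE = {chang_swe_mm(245.0, 215.0):.0f} mm")
# Because scattering by the snowpack also depends on grain size and density,
# departures from the assumed values bias this retrieval; quantifying that
# error against the CLPX field measurements is what the study above addresses.
```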

Relevance:

30.00%

Publisher:

Abstract:

Estimating snow mass at continental scales is difficult, but important for understanding land-atmosphere interactions, biogeochemical cycles and the hydrology of the Northern latitudes. Remote sensing provides the only consistent global observations, but with unknown errors. We test the theoretical performance of the Chang algorithm for estimating snow mass from passive microwave measurements using the Helsinki University of Technology (HUT) snow microwave emission model. The algorithm's dependence upon assumptions of fixed and uniform snow density and grain size is determined, and measurements of these properties made at the Cold Land Processes Experiment (CLPX) Colorado field site in 2002–2003 are used to quantify the retrieval errors caused by differences between the algorithm assumptions and the measurements. Deviation from the Chang algorithm's snow density and grain-size assumptions gives rise to an error of a factor of between two and three in calculating snow mass. The possibility that the algorithm performs more accurately over large areas than at points is tested by simulating emission from a 25 km diameter area of snow with a distribution of properties derived from the snow pit measurements, using the Chang algorithm to calculate mean snow mass from the simulated emission. The snow mass estimate from a site exhibiting the heterogeneity of the CLPX Colorado site proves only marginally different from that of a similarly simulated homogeneous site. The estimation accuracy predictions are tested using the CLPX field measurements of snow mass, and simultaneous SSM/I and AMSR-E measurements.

Relevance:

30.00%

Publisher:

Abstract:

Weeds tend to aggregate in patches within fields and there is evidence that this is partly owing to variation in soil properties. Because the processes driving soil heterogeneity operate at different scales, the strength of the relationships between soil properties and weed density would also be expected to be scale-dependent. Quantifying these effects of scale on weed patch dynamics is essential to guide the design of discrete sampling protocols for mapping weed distribution. We have developed a general method that uses novel within-field nested sampling and residual maximum likelihood (REML) estimation to explore scale-dependent relationships between weeds and soil properties. We have validated the method using a case study of Alopecurus myosuroides in winter wheat. Using REML, we partitioned the variance and covariance into scale-specific components and estimated the correlations between the weed counts and soil properties at each scale. We used variograms to quantify the spatial structure in the data and to map variables by kriging. Our methodology successfully captured the effect of scale on a number of edaphic drivers of weed patchiness. The overall Pearson correlations between A. myosuroides and soil organic matter and clay content were weak and masked the stronger correlations at >50 m. Knowing how the variance was partitioned across the spatial scales we optimized the sampling design to focus sampling effort at those scales that contributed most to the total variance. The methods have the potential to guide patch spraying of weeds by identifying areas of the field that are vulnerable to weed establishment.
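
The arithmetic behind the scale-specific correlations is sketched below: once the variances and the cross-covariance have been partitioned into components for each sampling scale (by REML in the paper), the correlation at a scale is the covariance component divided by the square root of the product of the two variance components; the numbers used here are invented.

```python
# Hedged sketch of scale-specific correlations from partitioned variance and
# covariance components. The components below are invented for illustration;
# in the paper they are estimated by REML from the nested sampling design.
import math

# scale (m): (variance of weed counts, variance of soil property, covariance component)
components = {
    0.5: (0.80, 0.30, 0.05),
    5.0: (0.60, 0.25, 0.10),
    50.0: (1.40, 0.60, 0.55),
    200.0: (0.90, 0.45, 0.40),
}

for scale, (var_weed, var_soil, cov) in components.items():
    r = cov / math.sqrt(var_weed * var_soil)
    print(f"{scale:>6.1f} m: scale-specific correlation = {r:.2f}")
# As in the case study, a weak overall (pooled) correlation can coexist with
# strong correlations at the coarser scales, which is what motivates
# concentrating sampling effort at the scales holding most of the variance.
```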

Relevance:

30.00%

Publisher:

Abstract:

The current paper provides a detailed examination of the psychometric properties of the Gudjonsson Suggestibility Scales (Gudjonsson, 1997), which have been widely used to measure individual suggestibility. Several fundamental problems associated with the Shift and Total Suggestibility subscales are identified and discussed. Two arguably more conceptually coherent methods of scoring the Shift subscale ('Shift-positive' and 'Shift-negative') are introduced. A confirmatory factor analytic model based on two oblique factors and relative answering regression effects between corresponding items was tested and supported in a sample of 220 children. Using a latent-variable estimation approach, the internal consistency reliabilities associated with the Shift subscale scores were found to be unacceptably low. Consequently, we propose that until the problems associated with the standard Shift and Total Suggestibility scores are addressed, use of the GSS should be limited to the Yield subscale.
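
As a loose sketch of GSS-style scoring, the snippet below counts suggestions accepted at first questioning (Yield) and answers changed after negative feedback (Shift), and splits the changes by direction as one plausible reading of the 'Shift-positive'/'Shift-negative' proposal; this reading, and the response data, are assumptions rather than the paper's definitions.

```python
# Hedged sketch of GSS-style scoring. The standard subscales are Yield (leading
# questions accepted on first questioning), Shift (answers changed after
# negative feedback) and Total = Yield + Shift. The 'Shift-positive'/
# 'Shift-negative' split below is only one plausible reading of the paper's
# proposal (changes toward vs. away from the suggested answer) and is an
# assumption, as are the response data.
def score_gss(first_pass, second_pass):
    """Each pass: list of per-question flags, True if the suggestion was accepted."""
    yield_1 = sum(first_pass)                                   # suggestions accepted initially
    shift_pos = sum((not a) and b for a, b in zip(first_pass, second_pass))  # moved toward suggestion
    shift_neg = sum(a and (not b) for a, b in zip(first_pass, second_pass))  # moved away from it
    shift_standard = shift_pos + shift_neg                      # any change counts
    return {"Yield": yield_1, "Shift-positive": shift_pos,
            "Shift-negative": shift_neg, "Shift-standard": shift_standard,
            "Total": yield_1 + shift_standard}

first = [True, False, True, False, False, True, False, True]   # invented responses
second = [True, True, False, False, True, True, False, False]
print(score_gss(first, second))
```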