928 results for Test (assessment)


Relevance: 30.00%

Publisher:

Abstract:

In the student learning literature, the traditional view holds that when students are faced with a heavy workload, poor teaching, and content that they cannot relate to (all important aspects of the learning context), they are more likely to utilise the surface approach to learning due to stress, lack of understanding and a lack of perceived relevance of the content (Kreber, 2003; Lizzio, Wilson, & Simons, 2002; Ramsden, 1989; Ramsden, 1992; Trigwell & Prosser, 1991; Vermunt, 2005). For example, in studies involving health and medical sciences students, courses that utilised student-centred, problem-based approaches to teaching and learning were found to elicit a deeper approach to learning than the teacher-centred, transmissive approach (Patel, Groen, & Norman, 1991; Sadlo & Richardson, 2003). It is generally accepted that the line of causation runs from the learning context (or rather students’ self-reported data on the learning context) to students’ learning approaches. That is, it is the learning context as revealed by students’ self-reported data that elicits the associated learning behaviour. However, other research studies have found that the same teaching and learning environment can be perceived differently by different students. In a study of students’ perceptions of assessment requirements, Sambell and McDowell (1998) found that students “are active in the reconstruction of the messages and meanings of assessment” (p. 391), and that their interpretations are greatly influenced by their past experiences and motivations. In a qualitative study of Hong Kong tertiary students, Kember (2004) found that students using the surface learning approach reported a heavier workload than students using the deep learning approach. According to Kember, if students learn by extracting meaning from the content and making connections, they are more likely to see the higher-order intentions embodied in the content and the higher-order cognitive abilities being assessed. On the other hand, if they rote-learn for the graded task, they fail to see the hierarchical relationships in the content and to connect the information. These rote-learners will tend to see the assessment as requiring memorisation and regurgitation of a large amount of unconnected knowledge, which explains why they experience a high workload. Kember (2004) thus postulates that it is the learning approach that influences how students perceive workload. Campbell and her colleagues made a similar observation in their interview study of secondary students’ perceptions of teaching in the same classroom (Campbell et al., 2001). These findings suggest that students’ learning approaches can influence their perceptions of assessment demands and of other aspects of the learning context, such as the relevance of content and teaching effectiveness. In other words, perceptions of elements in the teaching and learning context are endogenously determined. This study investigated the causal relationships at the individual level between learning approaches and perceptions of the learning context in economics education. Students’ learning approaches and their perceptions of the learning context were measured. The elements of the learning context investigated were teaching effectiveness, workload and content. The authors are aware of the existence of other elements of the learning context, such as generic skills, goal clarity and career preparation. These aspects, however, were not within the scope of the present study and were therefore not investigated.

Relevance: 30.00%

Publisher:

Abstract:

Magnetic Resonance Imaging (MRI) offers a valuable research tool for the assessment of 3D spinal deformity in adolescent idiopathic scoliosis (AIS); however, the horizontal patient position imposed by conventional scanners removes the axial compressive loading on the spine, which is an important determinant of deformity shape and magnitude in standing scoliosis patients. The objective of this study was to design, construct and test an MRI-compatible compression device for research into the effect of axial loading on spinal deformity using supine MRI scans. The device consisted of a vest worn by the patient, attached via straps to a pneumatically actuated footplate. An applied load of 0.5 × bodyweight was remotely controlled from a unit at the scanner operator’s console. The entire device was constructed from non-metallic components for MRI compatibility. The device was evaluated by performing unloaded and loaded supine MRI scans on a series of 10 AIS patients. The study concluded that an MRI-compatible compression device had been successfully designed and constructed, providing a research tool for studies into the effect of axial loading on 3D spinal deformity in scoliosis. The 3D axially loaded MR imaging capability developed in this study will allow future research into the effect of axial loading on spinal rotation, and imaging of the response of scoliotic spinal tissues to axial loading.
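
As a rough illustration of the loading arithmetic described above, the sketch below (Python) computes the 0.5 × bodyweight target force and the supply pressure a single pneumatic cylinder would need to deliver it. The 50 mm piston bore and the neglect of strap and seal friction are assumptions for illustration only, not details taken from the study.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2


def target_axial_load(body_mass_kg: float, fraction: float = 0.5) -> float:
    """Axial compressive force (N) to apply through the footplate."""
    return fraction * body_mass_kg * G


def required_gauge_pressure(force_n: float, piston_diameter_m: float) -> float:
    """Gauge pressure (Pa) one cylinder needs to deliver the force,
    ignoring friction losses (an assumption for this sketch)."""
    area = math.pi * (piston_diameter_m / 2.0) ** 2
    return force_n / area


if __name__ == "__main__":
    load = target_axial_load(60.0)  # e.g. a 60 kg patient
    print(f"target axial load: {load:.0f} N")
    print(f"pressure for an assumed 50 mm bore: "
          f"{required_gauge_pressure(load, 0.05) / 1000:.0f} kPa")
```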

Relevance: 30.00%

Publisher:

Abstract:

Objective: The Brief Michigan Alcoholism Screening Test (bMAST) is a 10-item test derived from the 25-item Michigan Alcoholism Screening Test (MAST) and is widely used in the assessment of alcohol dependence. In the absence of previous validation studies, the principal aim of this study was to assess the validity and reliability of the bMAST as a measure of the severity of problem drinking. Method: A total of 6,594 patients (4,854 men, 1,740 women) who had been referred to a hospital alcohol and drug service for alcohol-use disorders voluntarily participated in this study. Results: An exploratory factor analysis defined a two-factor solution, consisting of Perception of Current Drinking and Drinking Consequences factors. Structural equation modeling confirmed that the fit of a nine-item, two-factor model was superior to that of the original one-factor model. Concurrent validity was assessed through simultaneous administration of the Alcohol Use Disorders Identification Test (AUDIT) and through associations with alcohol consumption and clinically assessed features of alcohol dependence. The two-factor bMAST model showed moderate correlations with the AUDIT. The two-factor bMAST and the AUDIT were similarly associated with quantity of alcohol consumption and with clinically assessed dependence severity features. No differences were observed between the existing weighted scoring system and the proposed simple scoring system. Conclusions: In this study, both the existing bMAST total score and the two-factor model identified were as effective as the AUDIT in assessing problem drinking severity. There are additional advantages to employing the two-factor bMAST in the assessment and treatment planning of patients seeking treatment for alcohol-use disorders. (J. Stud. Alcohol Drugs 68: 771-779, 2007)
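
For readers wanting a concrete sense of the analyses named here, the sketch below (Python) runs a two-factor exploratory analysis over simulated 10-item responses and a concurrent-validity correlation against a simulated AUDIT total. It uses scikit-learn's FactorAnalysis as a generic stand-in for the exploratory factor analysis and structural equation modeling actually reported; all data, loadings and correlations are placeholders rather than results from the paper.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_patients = 200  # the study itself had 6,594 patients

# Simulated binary bMAST item responses and a loosely related AUDIT total.
bmast_items = rng.integers(0, 2, size=(n_patients, 10))
audit_total = bmast_items.sum(axis=1) * 2 + rng.normal(0, 2, n_patients)

# Exploratory two-factor solution (cf. the Perception of Current Drinking
# and Drinking Consequences factors described in the abstract).
fa = FactorAnalysis(n_components=2, random_state=0).fit(bmast_items)
print("item loadings (10 items x 2 factors):")
print(np.round(fa.components_.T, 2))

# Concurrent validity: correlation of a simple bMAST total with the AUDIT.
r, p = pearsonr(bmast_items.sum(axis=1), audit_total)
print(f"bMAST total vs AUDIT: r = {r:.2f}, p = {p:.3g}")
```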

Relevance: 30.00%

Publisher:

Abstract:

Reliability and validity in the testing of spoken language are essential in order to assess learners' English language proficiency as evidence of their readiness to begin courses in tertiary institutions. Research has indicated that the task chosen to elicit language samples can have a marked effect both on the nature of the interaction, including the power differential, and on assessment, raising the issue of ethics. This exploratory study, with a group of 32 students from the People's Republic of China preparing for tertiary study in Singapore, compares test-takers' reactions to the use of an oral proficiency interview and a pair interaction.

Relevance: 30.00%

Publisher:

Abstract:

Magnetic Resonance Imaging (MRI) offers a valuable research tool for the assessment of 3D spinal deformity in AIS; however, the horizontal patient position imposed by conventional scanners removes the axial compressive loading on the spine. The objective of this study was to design, construct and test an MRI-compatible compression device for research into the effect of axial loading on spinal deformity using supine MRI scans. The device was evaluated by performing unloaded and loaded supine MRI scans on a series of 10 AIS patients. The patient group had a mean initial (unloaded) major Cobb angle of 43±7°, which increased to 50±9° on application of the compressive load. The 7° increase in mean Cobb angle is consistent with that reported in a previous study comparing standing and supine posture in scoliosis patients (Torell et al., 1985, Spine 10:425-7).
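
The paired loaded-versus-unloaded comparison lends itself to a simple statistical check. The sketch below (Python) runs a paired t-test on illustrative Cobb angle values chosen to resemble the reported means (43±7° unloaded, 50±9° loaded); the numbers are not the study's data, and the study itself may have analysed the change differently.

```python
import numpy as np
from scipy.stats import ttest_rel

# Illustrative major Cobb angles (degrees) for ten patients, unloaded and
# under the 0.5 x bodyweight axial load.
unloaded = np.array([38, 41, 35, 47, 52, 44, 39, 50, 45, 42], dtype=float)
loaded = unloaded + np.array([6, 8, 5, 9, 10, 7, 4, 8, 6, 7], dtype=float)

diff = loaded - unloaded
t, p = ttest_rel(loaded, unloaded)
print(f"mean increase under load: {diff.mean():.1f} deg (SD {diff.std(ddof=1):.1f})")
print(f"paired t-test: t = {t:.2f}, p = {p:.4f}")
```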

Relevance: 30.00%

Publisher:

Abstract:

The Lane Change Test (LCT) is one of the growing number of methods developed to quantify driving performance degradation brought about by the use of in-vehicle devices. Beyond its validity and reliability, for such a test to be of practical use, it must also be sensitive to the varied demands of individual tasks. The current study evaluated the ability of several recent LCT lateral control and event detection parameters to discriminate between visual-manual and cognitive surrogate In-Vehicle Information System tasks with different levels of demand. Twenty-seven participants (mean age 24.4 years) completed a PC version of the LCT while performing visual search and math problem solving tasks. A number of the lateral control metrics were found to be sensitive to task differences, but the event detection metrics were less able to discriminate between tasks. The mean deviation and lane excursion measures were able to distinguish between the visual and cognitive tasks, but were less sensitive to the different levels of task demand. The other LCT metrics examined were less sensitive to task differences. A major factor influencing the sensitivity of at least some of the LCT metrics could be the type of lane change instructions given to participants. The provision of clear and explicit lane change instructions and further refinement of its metrics will be essential for increasing the utility of the LCT as an evaluation tool.
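
To make one of the lateral-control measures concrete, the sketch below (Python) computes a mean-deviation metric as the mean absolute difference between a driven lateral position trace and a normative path. The sampling rate, the shape of the normative path and the simulated driving data are simplifying assumptions; the standardised LCT analysis defines the normative model and the event-detection measures in more detail.

```python
import numpy as np


def mean_deviation(driven: np.ndarray, normative: np.ndarray) -> float:
    """Mean absolute lateral deviation (m) between the driven path and the
    normative path over the course."""
    return float(np.mean(np.abs(driven - normative)))


# Illustrative 1 Hz samples of lateral position (metres from the track centre):
# 10 s in the original lane, a lane change over 5 s, then 10 s in the new lane.
normative = np.concatenate([np.zeros(10), np.linspace(0.0, 3.5, 5), np.full(10, 3.5)])
driven = normative + np.random.default_rng(1).normal(0.0, 0.3, normative.size)

print(f"mean deviation: {mean_deviation(driven, normative):.2f} m")
```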

Relevance: 30.00%

Publisher:

Abstract:

There are several noninvasive techniques for assessing the kinetics of the tear film, but no comparative studies have been conducted to evaluate their efficacies. Our aim is to test and compare techniques based on high-speed videokeratoscopy (HSV), dynamic wavefront sensing (DWS), and lateral shearing interferometry (LSI). Algorithms are developed to estimate the tear film build-up time (TBLD) and the average tear film surface quality in the stable phase of the interblink interval (TFSQAv). Moderate but significant correlations are found between TBLD measured with LSI and with DWS based on vertical coma (Pearson's r² = 0.34, p < 0.01) and on higher-order RMS (r² = 0.31, p < 0.01), as well as between TFSQAv measured with LSI and with HSV (r² = 0.35, p < 0.01), and between LSI and DWS based on the RMS fit error (r² = 0.40, p < 0.01). No significant correlation is found between HSV and DWS. All three techniques estimate the tear film build-up time to be below 2.5 sec, and they achieve a remarkably close median value of 0.7 sec. HSV appears to be the most precise method for measuring tear film surface quality. LSI appears to be the most sensitive method for analyzing tear film build-up.
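
The correlation analysis reported here is a straightforward Pearson computation; a minimal sketch in Python is shown below, using simulated build-up times for two of the techniques. The arrays are placeholders, and the abstract's r² values came from the actual per-subject measurements.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
tbld_lsi = rng.uniform(0.3, 2.5, size=20)             # build-up time (s) per subject
tbld_dws = 0.6 * tbld_lsi + rng.normal(0.0, 0.4, 20)  # a correlated DWS estimate

r, p = pearsonr(tbld_lsi, tbld_dws)
print(f"LSI vs DWS build-up time: r^2 = {r**2:.2f}, p = {p:.3g}")
```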

Relevance: 30.00%

Publisher:

Abstract:

The primary clinical role of the non-invasive physical measurement of bone, generally referred to as ‘bone densitometry’, is to identify subjects at risk of an osteoporotic fracture and to monitor their subsequent response to pharmaceutical intervention. The true ‘gold standard’ measurement of the mechanical integrity of a bone, and hence of its fracture load, is a destructive test, generally performed by compressing either a regular-shaped sample or a whole bone.

Relevance: 30.00%

Publisher:

Abstract:

Objective: To investigate how age-related declines in vision (particularly in contrast sensitivity), simulated using cataract goggles and low-contrast stimuli, influence the accuracy and speed of cognitive test performance in older adults. An additional aim was to investigate whether declines in vision affect secondary memory more than primary memory. Method: Using a fully within-subjects design, 50 older drivers aged 66-87 years completed two tests of cognitive performance, letter matching (perceptual speed) and symbol recall (short-term memory), under viewing conditions that degraded visual input (low-contrast stimuli, cataract goggles, and low-contrast stimuli combined with cataract goggles) compared with normal viewing. Presentation time was also manipulated for letter matching. Visual function, as measured using standard charts, was taken into account in the statistical analyses. Results: Accuracy and speed on the cognitive tasks were significantly impaired when visual input was degraded, and cognitive performance was positively associated with contrast sensitivity. Presentation time did not influence cognitive performance, and visual degradation did not differentially influence primary and secondary memory. Conclusion: Age-related declines in visual function can affect the accuracy and speed of cognitive performance, and the cognitive abilities of older adults may therefore be underestimated in neuropsychological testing. It is thus critical that visual function be assessed prior to testing, and that stimuli be adapted to older adults' sensory capabilities (e.g., by maximising stimulus contrast).
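
A fully within-subjects design of this kind is typically analysed with a repeated-measures model. The sketch below (Python, statsmodels) runs a one-way repeated-measures ANOVA on simulated accuracy scores across four viewing conditions; the condition labels, the assumed accuracy penalties and the noise levels are placeholders, not the study's data or its exact statistical model.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(3)
conditions = ["normal", "low_contrast", "goggles", "low_contrast_goggles"]
penalty = {"normal": 0.0, "low_contrast": -5.0, "goggles": -6.0,
           "low_contrast_goggles": -12.0}  # assumed accuracy penalties (%)

rows = []
for subject in range(1, 51):  # 50 older drivers
    baseline = rng.normal(85.0, 5.0)
    for cond in conditions:
        rows.append({"subject": subject, "viewing": cond,
                     "accuracy": baseline + penalty[cond] + rng.normal(0.0, 3.0)})

df = pd.DataFrame(rows)
result = AnovaRM(df, depvar="accuracy", subject="subject", within=["viewing"]).fit()
print(result)
```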

Relevance: 30.00%

Publisher:

Abstract:

The paper provides an assessment of the performance of commercial Real Time Kinematic (RTK) systems over longer than recommended inter-station distances. The experiments were set up to test and analyse solutions from the i-MAX, MAX and VRS systems operated with three triangle-shaped network cells with average inter-station distances of 69 km, 118 km and 166 km. The performance characteristics appraised included initialisation success rate, initialisation time, RTK position accuracy and availability, ambiguity resolution risk and RTK integrity risk, in order to provide a wider perspective on the performance of the tested systems.

The results showed that the performance of all network RTK solutions assessed was affected to a similar degree by the increase in inter-station distance. The MAX solution achieved the highest initialisation success rate, 96.6% on average, albeit with a longer initialisation time. The two VRS approaches achieved a lower initialisation success rate of 80% over the large triangle. In terms of RTK positioning accuracy after successful initialisation, the results indicated good agreement between the actual error growth, in both the horizontal and vertical components, and the accuracy specified by the manufacturers in RMS and parts-per-million (ppm) values.

Additionally, the VRS approaches performed better than MAX and i-MAX when tested over the standard triangle network with a mean inter-station distance of 69 km. However, as the inter-station distance increases, the network RTK software may fail to generate VRS corrections and may instead operate in the nearest single-base RTK (or RAW) mode. The position uncertainty occasionally exceeded 2 metres, showing that the RTK rover software was using an incorrectly fixed ambiguity solution to estimate the rover position rather than automatically dropping back to an ambiguity-float solution. The results identified that the risk of incorrectly resolving ambiguities reached 18%, 20%, 13% and 25% for i-MAX, MAX, Leica VRS and Trimble VRS respectively when operating over the large triangle network. In addition, the Coordinate Quality indicator values given by the Leica GX1230 GG rover receiver tended to be over-optimistic and did not reliably identify incorrectly fixed integer ambiguity solutions. In summary, this independent assessment identified problems and failures that can occur in all of the systems tested, especially when they are pushed beyond the recommended limits. While such failures are expected, they offer useful insights into where users should be wary and how manufacturers might improve their products. The results also demonstrate that integrity monitoring of RTK solutions is necessary for precision applications and thus deserves serious attention from researchers and system providers.
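
Two of the simpler performance measures discussed above can be illustrated directly. The sketch below (Python) computes an initialisation success rate and the expected horizontal error implied by a manufacturer-style "RMS + ppm" accuracy specification as the distance to the reference station grows; the 8 mm + 1 ppm figures and the distances are illustrative assumptions, not the specifications or results of any of the tested systems.

```python
def init_success_rate(successful: int, attempts: int) -> float:
    """Initialisation success rate as a percentage."""
    return 100.0 * successful / attempts


def expected_horizontal_error_mm(rms_mm: float, ppm: float, distance_km: float) -> float:
    """Expected horizontal error from an 'a + b*ppm' specification
    (1 ppm corresponds to 1 mm of error per km of baseline)."""
    return rms_mm + ppm * distance_km


print(f"success rate: {init_success_rate(29, 30):.1f} %")
for distance_km in (23, 39, 55):  # roughly a third of each cell's mean spacing
    err = expected_horizontal_error_mm(8.0, 1.0, distance_km)
    print(f"{distance_km:3d} km -> {err:.0f} mm expected horizontal error")
```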

Relevance: 30.00%

Publisher:

Abstract:

Developments in school education in Australia over the past decade have witnessed the rise of national efforts to reform curriculum, assessment and reporting. Constitutionally, the power to decide on curriculum matters still resides with the States. Higher stakes in assessment, brought about by national testing and international comparative analyses of student achievement data, have challenged State efforts to maintain an emphasis on assessment to promote learning while fulfilling accountability demands. In this article, lessons from the Queensland experience indicate that it is important to build teachers' assessment capacity and assessment literacy for the promotion of student learning. It is argued that teacher assessment can be a source of dependable results through moderation practice. The Queensland Studies Authority has recognised and supported the development of teacher assessment and moderation practice in the context of standards-driven, national reform. Recent research findings explain how the focus on learning can be maintained by avoiding over-interpretation of test results in terms of innate ability and limitations, and by encouraging teachers to adopt a more tailored diagnosis of assessment data to address equity through a focus on achievement for all. Such efforts are challenged as political pressures become increasingly apparent, related to the Australian government's implementation of national testing and to national partnership funding arrangements tied to the performance of students at or below minimum standards.

Relevance: 30.00%

Publisher:

Abstract:

The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome, which is caused by abnormalities in the properties of the tear film, is one of the most commonly reported eye health problems. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for a lack of reliability and/or repeatability, while the range of non-invasive methods of tear assessment that has been investigated also presents limitations. Hence no "gold standard" test is currently available to assess tear film integrity. Improving techniques for the assessment of tear film quality is therefore of clinical significance and is the main motivation for the work described in this thesis.

In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The light is reflected from the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern; when the tear film surface presents irregularities, the pattern also becomes irregular due to scattering and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for evaluating all the dynamic phases of the tear film; the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics.

A set of novel routines was purposely developed to quantify changes in the reflected pattern and to extract a time-series estimate of TFSQ from the video recording. The routines extract from each frame of the video recording a maximized area of analysis, within which a metric of TFSQ is calculated. Initially, two metrics, based on Gabor filtering and Gaussian gradient-based techniques, were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assessing the tear film and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality over the inter-blink interval in contact lens wear, and to show a clear difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ.

Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV.

The main limitation of the HSV technique identified during this clinical study was a lack of sensitivity in quantifying the build-up (formation) phase of the tear film cycle. For that reason, an additional metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-ring pattern into an image of quasi-straight lines from which a block-statistics value is extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves.

Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to better understand the HSV measurement and the instrument's potential limitations, with special interest in the instrument's sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of tear film dynamics; for instance, the model derived for the build-up phase provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model the time series and to extract the key clinical parameters (i.e., timing), but these techniques do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria, with special attention given to a commonly used fit, the polynomial function, and to selecting an appropriate model order so that the true derivative of the signal is accurately represented.

The work described in this thesis has shown the potential of high-speed videokeratoscopy for assessing tear film surface quality. A set of novel image and signal processing techniques has been proposed for different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV method has shown good performance in a broad range of conditions (i.e., contact lens wear, normal and dry eye subjects), and could therefore become a useful clinical tool for assessing tear film surface quality in the future.
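
As a small, self-contained illustration of the ROC analysis described above, the sketch below (Python, scikit-learn) scores how well a single tear film surface quality (TFSQ) metric separates simulated dry-eye from normal subjects. The group sizes, the direction of the metric (higher = more disturbed surface) and the distributions are assumptions for illustration; the thesis computed ROC curves from measured TFSQ parameters for each technique.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(4)
labels = np.array([0] * 20 + [1] * 15)  # 0 = normal, 1 = dry eye (assumed groups)
tfsq_metric = np.concatenate([
    rng.normal(0.20, 0.05, 20),  # normals: smoother tear film surface
    rng.normal(0.35, 0.08, 15),  # dry eye: more disturbed surface
])

auc = roc_auc_score(labels, tfsq_metric)
fpr, tpr, thresholds = roc_curve(labels, tfsq_metric)
best = int(np.argmax(tpr - fpr))  # Youden's J as a simple operating point
print(f"AUC = {auc:.2f}; suggested threshold = {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```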

Relevance: 30.00%

Publisher:

Abstract:

Paired speaking tests are now commonly used in both high-stakes testing and classroom assessment contexts. The co-construction of discourse by candidates is regarded as a strength of paired speaking tests, as candidates have the opportunity to display a wider range of interactional competencies, including turn taking, initiating topics and engaging in extended discourse with a partner, rather than an examiner. However, the impact of the interlocutor in such jointly negotiated discourse and the implications for assessing interactional competence are areas of concern. This article reports on the features of interactional competence that were salient to four trained raters of 12 paired speaking tests through the analysis of rater notes, stimulated verbal recalls and rater discussions. Findings enabled the identification of features of the performance noted by raters when awarding scores for interactional competence, and the particular features associated with higher and lower scores. A number of these features were seen by the raters as mutual achievements, which raises the issue of the extent to which it is possible to assess individual contributions to the co-constructed performance. The findings have implications for defining the construct of interactional competence in paired speaking tests and operationalising this in rating scales.

Relevance: 30.00%

Publisher:

Abstract:

Paired speaking tests are increasingly used in both low- and high-stakes second language assessment contexts. Until recently, very little was known about the way in which raters interpret and apply descriptors relating to interactional competence to a performance that is co-constructed. This book presents a study exploring the interactional features of a paired speaking test that were salient to raters and the extent to which raters viewed the performance as separable. The study shows that raters use their own frames of reference to interpret descriptors and that they view certain features of the performance as mutual accomplishments. The book takes us 'beyond scores' and, in doing so, contributes to the growing body of research on paired speaking tests.

Relevance: 30.00%

Publisher:

Abstract:

In response to concerns about the quality of English Language Learning (ELL) education at the tertiary level, the Chinese Ministry of Education (CMoE) launched the College English Reform Program (CERP) in 2004. By means of a press release (CMoE, 2005) and a guideline document titled College English Curriculum Requirements (CECR) (CMoE, 2007), the CERP proposed two major changes to the College English assessment policy: (1) a shift to optional status for the previously compulsory external test, the College English Test Band 4 (CET4); and (2) the incorporation of formative assessment into the existing summative assessment framework. This study investigated the interactions between the College English assessment policy change, its theoretical underpinnings, and the assessment practices within two Chinese universities (one Key University and one Non-Key University). It adopted a sociocultural theoretical perspective to examine the implementation process as experienced by local actors at the institutional and classroom levels. Systematic data analysis using a constant comparative method (Merriam, 1998) revealed that contextual factors and implementation issues did not lead to significant differences between the two cases. A lack of training in assessment, together with sociocultural factors such as the traditional emphasis on the product of learning and the hierarchical teacher-student relationship, was decisive in the limited effect of the reform.