370 results for computer-based instrumentation
Abstract:
Vertebroplasty involves injecting cement into a fractured vertebra to provide stabilisation. However, there is clinical evidence to suggest that vertebroplasty may be associated with a higher risk of adjacent vertebral fracture, which may be due to the change in material properties of the post-procedure vertebra modifying the transmission of mechanical stresses to adjacent vertebrae.
Abstract:
An approach aimed at enhancing learning by matching individual students' preferred cognitive styles to computer-based instructional (CBI) material is presented. This approach was used in teaching some components of a third-year unit in an electrical engineering course at the Queensland University of Technology. Cognitive style characteristics of perceiving and processing information were considered. The bimodal nature of cognitive styles (analytic/imager, analytic/verbalizer, wholist/imager and wholist/verbalizer) was examined in order to assess the full ramifications of cognitive styles on learning. In a quasi-experimental format, students' cognitive styles were analysed by cognitive style analysis (CSA) software. On the basis of the CSA results, the system defaulted students to either matched or mismatched CBI material. The consistently better performance by the matched group suggests potential for further investigations where the limitations cited in this paper are eliminated. Analysing the differences between cognitive styles on individual test tasks also suggests that certain test tasks may better suit certain cognitive styles.
Abstract:
This paper reports two studies designed to investigate the effect on learning outcomes of matching individuals' preferred cognitive styles to computer-based instructional (CBI) material. Study 1 considered the styles individually as Verbalizer, Imager, Wholist and Analytic. Study 2 considered the bi-dimensional nature of cognitive styles in order to assess the full ramifications of cognitive styles on learning: Analytic/Imager, Analytic/Verbalizer, Wholist/Imager and Wholist/Verbalizer. The mix of images and text, the nature of the text material, the use of advance organizers and the proximity of information to facilitate meaningful connections between various pieces of information were some of the considerations in the design of the CBI material. In a quasi-experimental format, students' cognitive styles were analysed by Cognitive Style Analysis (CSA) software. On the basis of the CSA result, the system defaulted students to either matched or mismatched CBI material by alternating between the two formats. The instructional material had a learning and a test phase. Learning outcome was tested on recall, labelling, explanation and problem-solving tasks. Comparison of the matched and mismatched instruction did not indicate a significant difference between the groups, but the consistently better performance by the matched group suggests potential for further investigations where the limitations cited in this paper are eliminated. The result did indicate a significant difference between the four cognitive styles, with the Wholist/Verbalizer group performing better than all other cognitive styles. Analysing the differences between cognitive styles on individual test tasks indicated significant differences on recall, labelling and explanation, suggesting that certain test tasks may suit certain cognitive styles.
Abstract:
This paper reports a study investigating the effect of individual cognitive styles on learning through computer-based instruction. The study adopted a quasi-experimental design involving four groups which were presented with instructional material that either matched or mismatched their preferred cognitive styles. Cognitive styles were measured by cognitive style assessment software (Riding, 1991). The instructional material was designed to cater for the four cognitive styles identified by Riding. Students' learning outcomes were measured by the time taken to perform test tasks and the number of marks scored. The results indicate no significant difference between the matched and mismatched groups on either time taken or scores on test tasks. However, there was a significant difference between the four cognitive styles on test score. The Wholist/Verbaliser group performed better than all other groups. There was no significant difference between the other groups. An analysis of the performance on test tasks by each cognitive style showed significant differences between the groups on recall, labelling and explanation. Differences between the cognitive style groups did not reach significance for problem-solving tasks. The findings of the study indicate a potential for cognitive style to influence learning outcomes measured by performance on test tasks.
Abstract:
This study investigated whether conceptual development is greater if students learning senior chemistry hear teacher explanations and other traditional teaching approaches first and then see computer-based visualizations, or vice versa. Five Canadian chemistry classes, taught by three different teachers, studied the topics of Le Chatelier's Principle and dynamic chemical equilibria using scientific visualizations, with the explanations and visualizations presented in different orders. Conceptual development was measured using a 12-item test based on the Chemistry Concepts Inventory. Data were obtained about the students' abilities, learning styles (auditory, visual or kinesthetic) and sex, and the relationships between these factors and the conceptual development due to the teaching sequences were investigated. It was found that teaching sequence is not important in terms of students' conceptual learning gains, across the whole cohort or for any of the three subgroups.
Abstract:
Visual modes of representation have always been very important in science and science education. Interactive computer-based animations and simulations offer new visual resources for chemistry education. Many studies have shown that students enjoy learning with visualisations, but few have explored how learning outcomes compare when teaching with or without visualisations. This study employs a quasi-experimental crossover research design and quantitative methods to measure the educational effectiveness (defined as the level of conceptual development on the part of students) of using computer-based scientific visualisations versus teaching without visualisations in chemistry. In addition to finding that teaching with visualisations offered outcomes that were not significantly different from teaching without visualisations, the study also explored differences in outcomes for male and female students, students with different learning styles (visual, aural, kinesthetic) and students of differing levels of academic ability.
Abstract:
Previously, expected satiety (ES) has been measured using software and two-dimensional pictures presented on a computer screen. In this context, ES is an excellent predictor of self-selected portions, when quantified using similar images and similar software. In the present study we sought to establish the veracity of ES as a predictor of behaviours associated with real foods. Participants (N = 30) used computer software to assess their ES and ideal portion of three familiar foods. A real bowl of one food (pasta and sauce) was then presented and participants self-selected an ideal portion size. They then consumed the portion ad libitum. Additional measures of appetite, expected and actual liking, novelty, and reward were also taken. Importantly, our screen-based measures of expected satiety and ideal portion size were both significantly related to intake (p < .05). By contrast, measures of liking were relatively poor predictors (p > .05). In addition, consistent with previous studies, the majority (90%) of participants engaged in plate cleaning. Of these, 29.6% consumed more when prompted by the experimenter. Together, these findings further validate the use of screen-based measures to explore determinants of portion-size selection and energy intake in humans.
Abstract:
Floods are among the most devastating events that affect primarily tropical, archipelagic countries such as the Philippines. With current climate change predictions including rising sea levels, intensification of typhoon strength and a general increase in the mean annual precipitation throughout the Philippines, it has become paramount to prepare for the future so that the country's increased flood risk does not translate into more economic and human loss. Field work and data gathering were conducted within the framework of an internship at the former German Technical Cooperation (GTZ) in cooperation with the Local Government Unit of Ormoc City, Leyte, The Philippines, in order to develop a dynamic computer-based flood model for the basin of the Pagsangaan River. To this end, different geo-spatial analysis tools such as PCRaster and ArcGIS, hydrological analysis packages and basic engineering techniques were assessed and implemented. The aim was to develop a dynamic flood model and use the development process to determine the required data, their availability and their impact on the results, as a case study for flood early warning systems in the Philippines. The hope is that such projects can help to reduce flood risk by including the results of worst-case scenario analyses and current climate change predictions in city planning for municipal development, monitoring strategies and early warning systems. The project was developed using a 1D-2D coupled model in SOBEK (Deltares hydrological modelling software package) and was also used as a case study to analyse and understand the influence of different factors such as land use, schematization, time step size and tidal variation on the flood characteristics. Several sources of relevant satellite data were compared, such as Digital Elevation Models (DEMs) from ASTER and SRTM data, as well as satellite rainfall data from the GIOVANNI server (NASA) and field gauge data.
Different methods were used in the attempt to partially calibrate and validate the model, in order to finally simulate and study two climate change scenarios based on scenario A1B predictions. It was observed that large areas currently considered not prone to floods will become low flood risk areas (0.1-1 m water depth). Furthermore, larger sections of the floodplains upstream of the Liloan's Bridge will become moderate flood risk areas (1-2 m water depth). The flood hazard maps created for the development of the present project will be presented to the LGU, and the model will be used by GTZ's Local Disaster Risk Management Department to create a larger set of possible flood-prone areas related to rainfall intensity and to study possible improvements to the current early warning system and monitoring of the basin section belonging to Ormoc City. Recommendations about further enhancement of the geo-hydro-meteorological data to improve the model's accuracy, mainly in areas of interest, will also be presented to the LGU.
Abstract:
Digital forensics investigations aim to find evidence that helps confirm or disprove a hypothesis about an alleged computer-based crime. However, the ease with which computer-literate criminals can falsify computer event logs makes the prosecutor's job highly challenging. Given a log which is suspected to have been falsified or tampered with, a prosecutor is obliged to provide a convincing explanation for how the log may have been created. Here we focus on showing how a suspect computer event log can be transformed into a hypothesised actual sequence of events, consistent with independent, trusted sources of event orderings. We present two algorithms which allow the effort involved in falsifying logs to be quantified, as a function of the number of 'moves' required to transform the suspect log into the hypothesised one, thus allowing a prosecutor to assess the likelihood of a particular falsification scenario. The first algorithm always produces an optimal solution but, for reasons of efficiency, is suitable for short event logs only. To deal with the massive amount of data typically found in computer event logs, we also present a second heuristic algorithm which is considerably more efficient but may not always generate an optimal outcome.
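The abstract's idea of quantifying falsification effort as a count of 'moves' between two event orderings can be illustrated with a minimal sketch. The function below is a hypothetical illustration only, not the paper's algorithms: it counts the minimum number of adjacent swaps (inversions) needed to reorder a suspect log into a hypothesised log, assuming both contain the same distinct events.

```python
def move_count(suspect_log, hypothesised_log):
    """Minimum number of adjacent swaps needed to reorder suspect_log
    into hypothesised_log. Hypothetical 'move' metric for illustration;
    assumes both logs contain the same distinct events."""
    # Map each event to its position in the hypothesised ordering.
    target = {event: i for i, event in enumerate(hypothesised_log)}
    sequence = [target[event] for event in suspect_log]
    # Count inversions with a simple O(n^2) scan (fine for short logs,
    # echoing the abstract's note that exact methods suit short logs).
    moves = 0
    for i in range(len(sequence)):
        for j in range(i + 1, len(sequence)):
            if sequence[i] > sequence[j]:
                moves += 1
    return moves

# An event moved from the end to the front needs 3 adjacent swaps:
print(move_count(["d", "a", "b", "c"], ["a", "b", "c", "d"]))  # 3
```

A low move count suggests the suspect log could plausibly be a lightly edited version of the hypothesised sequence, while a high count implies more extensive tampering.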