883 results for Task-Based Instruction (TBI)


Relevance:

30.00%

Publisher:

Abstract:

This study explores the effects of modeling instruction on student learning in physics. Multiple representations grounded in physical contexts were employed by students to analyze the results of inquiry lab investigations. Class whiteboard discussions geared toward a class consensus following Socratic dialogue were implemented throughout the modeling cycle. Lab investigations designed to address student preconceptions related to Newton’s Third Law were implemented. Student achievement was measured based on normalized gains on the Force Concept Inventory. Normalized FCI gains achieved by students in this study were comparable to those achieved by students of other novice modelers. Physics students who had taken a modeling Intro to Physics course scored significantly higher on the FCI posttest than those who had not. The FCI results also provided insight into deeply rooted student preconceptions related to Newton’s Third Law. Implications for instruction and the design of lab investigations related to Newton’s Third Law are discussed.
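The study's outcome measure, normalized gain on the Force Concept Inventory, has a standard definition (Hake's &lt;g&gt;): the fraction of the possible improvement a class actually realizes. A minimal sketch follows; the scores used are illustrative, not taken from this study.

```python
def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Hake's normalized gain <g>: (post - pre) / (max - pre)."""
    if max_score <= pre:
        raise ValueError("pre-test score must be below the maximum")
    return (post - pre) / (max_score - pre)

# e.g. a class averaging 30% on the FCI pre-test and 58% on the post-test
g = normalized_gain(30.0, 58.0)  # 0.40, a "medium" gain in Hake's scheme
```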

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study was to explore the relationship between faculty perceptions, selected demographics, implementation of elements of transactional distance theory, and online web-based course completion rates. This theory posits that the high transactional distance of online courses makes it difficult for students to complete them successfully; too often this is associated with low completion rates. Faculty members play an indispensable role in course design, whether online or face-to-face. They also influence the course delivery format from design through implementation and, ultimately, how students experience the course. This study used transactional distance theory as the conceptual framework to examine the relationship between teaching and learning strategies used by faculty members to help students complete online courses. Faculty members' sex, number of years teaching online at the college, and their online course completion rates were considered. A researcher-developed survey was used to collect data from 348 faculty members who teach online at two prominent colleges in the southeastern United States. An exploratory factor analysis yielded six factors related to transactional distance theory, which together accounted for slightly over 65% of the variance in transactional distance scores as measured by the survey instrument. The results provided support for Moore's (1993) theory of transactional distance. Female faculty members scored higher than men on all factors of transactional distance theory. Faculty members' number of years teaching online correlated significantly with all elements of transactional distance theory. Regression analysis showed that two of the factors, instructor interface and instructor-learner interaction, accounted for 12% of the variance in student online course completion rates.
In conclusion, of the six factors found, the two with the highest percentage scores were instructor interface and instructor-learner interaction. This finding, while in alignment with the literature concerning the dialogue element of transactional distance theory, draws special attention to the importance of instructor interface as a factor. Surprisingly, given the reviewed literature on transactional distance theory, faculty perceptions concerning learner-learner interaction were not an important factor, and no learner-content interaction factor emerged.

Relevance:

30.00%

Publisher:

Abstract:

Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications are identified using profiling tools. Hardware acceleration yields significant performance improvement for highly mathematical calculations or repeated functions; the performance of SoC systems can therefore be improved if hardware acceleration is applied to the element that incurs performance overheads. The concepts in this study can be applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified using attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration, a central bus design and a co-processor design, are implemented for comparison in the proposed architecture.
(3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed, and the trade-offs among these three factors are compared and balanced. Different hardware accelerators are implemented and evaluated against system requirements. (4) A system verification platform is designed based on an Integrated Circuit (IC) workflow, with hardware optimization techniques used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow is efficient: the system reaches a 2.8X performance improvement and saves 31.84% in energy consumption with the Bus-IP design, while the co-processor design reaches a 7.9X performance improvement and saves 75.85% in energy consumption.
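The whole-application speedups reported here are bounded by how much of the runtime the accelerated hotspot accounts for, which is Amdahl's law. A minimal sketch, with illustrative numbers (the 70% hotspot share and 10x accelerator speedup are assumptions, not figures from the thesis):

```python
def amdahl_speedup(hotspot_fraction: float, hotspot_speedup: float) -> float:
    """Overall speedup when only the hotspot (a fraction of runtime) is accelerated."""
    return 1.0 / ((1.0 - hotspot_fraction) + hotspot_fraction / hotspot_speedup)

# If profiling shows a hotspot consuming 70% of cycles and the FPGA
# accelerator runs it 10x faster, the whole application speeds up ~2.7x.
print(round(amdahl_speedup(0.70, 10.0), 2))  # prints 2.7
```

This is why profiling comes first in the workflow: accelerating a function that accounts for little runtime cannot pay back its resource cost.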

Relevance:

30.00%

Publisher:

Abstract:

The ontology engineering research community has focused for many years on supporting the creation, development and evolution of ontologies. Ontology forecasting, which aims at predicting semantic changes in an ontology, represents instead a new challenge. In this paper, we contribute to this novel endeavour by focusing on the task of forecasting semantic concepts in the research domain. Indeed, ontologies representing scientific disciplines contain only research topics that are already popular enough to be selected by human experts or automatic algorithms. They are thus unfit to support tasks which require the ability to describe and explore the forefront of research, such as trend detection and horizon scanning. We address this issue by introducing the Semantic Innovation Forecast (SIF) model, which predicts new concepts of an ontology at time t + 1, using only data available at time t. Our approach relies on lexical innovation and adoption information extracted from historical data. We evaluated the SIF model on a very large dataset consisting of over one million scientific papers belonging to the Computer Science domain: the outcomes show that the proposed approach offers a competitive boost in mean average precision-at-ten compared to the baselines when forecasting over 5 years.
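The evaluation metric, mean average precision-at-ten, has a standard definition that can be sketched as follows (this shows only the scoring, not the SIF model itself):

```python
def average_precision_at_k(predicted, relevant, k=10):
    """AP@k: mean of the precision values at each rank where a relevant item appears."""
    hits, score = 0, 0.0
    for rank, item in enumerate(predicted[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / rank
    return score / min(len(relevant), k) if relevant else 0.0

def map_at_10(runs):
    """Mean AP@10 over (predicted, relevant) pairs, e.g. one pair per forecast year."""
    return sum(average_precision_at_k(p, r) for p, r in runs) / len(runs)
```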

Relevance:

30.00%

Publisher:

Abstract:

Background: The enduring aging of the world population and the prospective increase of age-related chronic diseases urge the implementation of new models for healthcare delivery. One strategy relies on ICT (Information and Communications Technology) home-based solutions that allow clients to pursue their treatments without institutionalization. Stroke survivors are a particular population that could strongly benefit from such solutions, but it is not yet clear what the best approach is for bringing forth adequate and sustainable usage of home-based rehabilitation systems. Here we explore two possible approaches: coaching and gaming. Methods: We performed trials with 20 healthy participants and 5 chronic stroke survivors to study and compare execution of an elbow flexion and extension task performed either within a coaching mode that provides encouragement or within a gaming mode. For each mode we analyzed compliance, arm movement kinematics and task scores. In addition, we assessed the usability and acceptance of the proposed modes through a customized self-report questionnaire. Results: In the healthy participants sample, 13/20 preferred the gaming mode and rated it as being significantly more fun (p < .05), but the feedback delivered by the coaching mode was subjectively perceived as being more useful (p < .01). In addition, the activity level (number of repetitions and total movement of the end effector) was significantly higher (p < .001) during coaching. However, the quality of movements was superior in gaming, with a trend towards shorter movement duration (p = .074), significantly shorter travel distance (p < .001), higher movement efficiency (p < .001) and higher performance scores (p < .001). Stroke survivors also showed a trend towards higher activity levels in coaching, but with more movement quality during gaming. Finally, both training modes showed overall high acceptance.
Conclusions: Gaming led to higher enjoyment and increased quality of movement execution in healthy participants. However, we observed that game mechanics strongly determined user behavior and limited activity levels, whereas coaching generated higher activity levels. Hence, the purpose of treatment and the profile of end-users have to be considered when deciding on the most adequate approach for home-based stroke rehabilitation.
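The abstract reports travel distance and movement efficiency without defining them; a common kinematic definition of efficiency is the ratio of straight-line distance to the distance actually travelled by the end effector. A minimal sketch under that assumption, with illustrative 2D coordinates:

```python
import math

def path_efficiency(trajectory):
    """Ratio of straight-line distance to actual travel distance (1.0 = perfectly direct)."""
    travel = sum(math.dist(a, b) for a, b in zip(trajectory, trajectory[1:]))
    direct = math.dist(trajectory[0], trajectory[-1])
    return direct / travel if travel > 0 else 0.0

# A detour through (0.5, 0.3) makes the path longer than the straight line:
print(round(path_efficiency([(0, 0), (0.5, 0.3), (1, 0)]), 2))  # prints 0.86
```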

Relevance:

30.00%

Publisher:

Abstract:

This essay addresses the hitches and glitches in the hybrid instruction system of teaching and learning for large-enrollment courses. This new instructional methodology asks facilitators to redesign their entire traditional teaching and learning practices. The nature of the subject to be taught via the hybrid mode further affects the success rate of the modules, from inception to launch to actual delivery and completion of the course. The entire process involves undoing old habits and methodologies, with instructors picking up new skills along with the right motivation to take up the task. Course planning and delivery require a substantial commitment of hours from instructors catering to large-enrollment courses, on top of their routine roles on campus. From the pupil's perspective, the response varies, as hybrid learning demands self-discipline and time management skills from the learner. After the initial roadblocks, students enjoy hybrid learning if the course structure and instructions are simple and the course content flexible and varied. We study the problems, and possible solutions, at each stage of the hybrid teaching–learning system for courses in which a large number of students are enrolled.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES: The aims of this study were to establish Colombian smoothed centile charts and LMS tables for triceps, subscapular and triceps+subscapular skinfolds; to select appropriate cut-offs using receiver operating characteristic (ROC) analysis based on a population-based sample of schoolchildren in Bogota, Colombia; and to compare them with international studies. METHODS: A total of 9,618 children and adolescents attending public schools in Bogota, Colombia participated (55.7% girls; age range 9–17.9 years). Height, weight, body mass index (BMI), waist circumference, triceps and subscapular skinfold measurements were obtained using standardized methods, and the triceps+subscapular skinfold sum (T+SS) was calculated. Smoothed percentile curves for triceps and subscapular skinfold thickness were derived by the LMS method. ROC curve analyses were used to evaluate the optimal cut-off points of triceps, subscapular and triceps+subscapular skinfolds for overweight and obesity based on the International Obesity Task Force (IOTF) definitions. Data were compared with international studies. RESULTS: Subscapular and triceps skinfolds and T+SS were significantly higher in girls than in boys (P < 0.001). The median values for triceps, subscapular and T+SS skinfold thickness increased in a sex-specific pattern with age. The ROC analysis showed that subscapular and triceps skinfolds and T+SS have high discrimination power for identifying overweight and obesity in this sample. Based on the raw non-adjusted data, Colombian boys and girls had higher triceps and subscapular skinfold values than their counterparts from Spain, the UK, Germany and the US. CONCLUSIONS: Our results provide sex- and age-specific normative reference standards for triceps and subscapular skinfold thickness values in a large, population-based sample of schoolchildren and adolescents from a Latin-American population.
By providing LMS tables for Latin-American people based on Colombian reference data, we hope to provide quantitative tools for the study of obesity and its complications.
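LMS tables are used by converting a measurement into a z-score from the fitted L (Box-Cox skewness), M (median) and S (coefficient of variation) parameters, via Cole's standard formula. A minimal sketch; the parameter values shown are hypothetical, not taken from the published tables:

```python
import math

def lms_z_score(x: float, L: float, M: float, S: float) -> float:
    """Z-score for measurement x under Cole's LMS method."""
    if L != 0:
        return ((x / M) ** L - 1) / (L * S)
    return math.log(x / M) / S  # limiting case when the Box-Cox power is zero

# Hypothetical parameters: L=1 (no skew), median 10 mm, 20% variation.
print(round(lms_z_score(12.0, 1.0, 10.0, 0.2), 6))  # prints 1.0
```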

Relevance:

30.00%

Publisher:

Abstract:

Waiting time at an intensive care unit is a key feature in the assessment of healthcare quality. Nevertheless, its estimation is a difficult task, not only because of the many factors with intricate relations among them, but also because the available data may be incomplete, self-contradictory or even unknown. Yet predicting waiting time not only improves patients' satisfaction but also enhances the quality of the healthcare being provided. To fulfill this goal, this work aims at the development of a decision support system that predicts how long a patient should remain at an emergency unit, taking into consideration all the remarks stated above. It is built on top of a Logic Programming approach to knowledge representation and reasoning, complemented with a case-based approach to computing.
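The retrieve-and-reuse step of a case-based approach can be sketched as below. This is an illustrative assumption, not the system described in the abstract: the feature names, the Euclidean similarity, and the k-nearest averaging are all placeholders, and incomplete cases (unknown feature values, as the abstract anticipates) are handled simply by skipping the missing features.

```python
def predict_waiting_time(new_case, case_base, k=3):
    """Retrieve the k most similar past cases and reuse their mean waiting time.
    Cases are dicts of numeric features plus a recorded 'wait' outcome (minutes)."""
    def distance(a, b):
        # Compare only features present in both cases; unknown values are skipped.
        keys = [f for f in a if f != "wait" and b.get(f) is not None]
        return sum((a[f] - b[f]) ** 2 for f in keys) ** 0.5
    nearest = sorted(case_base, key=lambda c: distance(new_case, c))[:k]
    return sum(c["wait"] for c in nearest) / len(nearest)
```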

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a process for the classification of new residential electricity customers. The current state of the art is extended by using a combination of smart metering and survey data and by using model-based feature selection for the classification task. First, normalized representative consumption profiles of the population are derived through the clustering of data from households. Second, new customers are classified using survey data and a limited amount of smart metering data. Third, regression analysis and model-based feature selection results explain the importance of the variables and the drivers of different consumption profiles, enabling the extraction of appropriate models. The results of a case study show that the use of survey data significantly increases the accuracy of the classification task (by up to 20%). Considering four consumption groups, more than half of the customers are correctly classified with only one week of metering data; with more weeks, accuracy improves significantly. Model-based feature selection resulted in a significantly lower number of features, allowing an easy interpretation of the derived models.
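The first step, deriving representative consumption profiles by clustering normalized household data, can be sketched with a minimal k-means. The two-point profiles below are toy data for illustration; real profiles would be longer vectors, e.g. 48 half-hourly readings, and the paper's exact clustering setup is not specified in the abstract.

```python
import random

def normalize(profile):
    """Scale a load profile so its values sum to 1: shape, not magnitude."""
    total = sum(profile)
    return [v / total for v in profile] if total else profile

def kmeans(profiles, k, iters=50, seed=0):
    """Minimal k-means; the final centroids are the representative profiles."""
    rng = random.Random(seed)
    centroids = rng.sample(profiles, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in profiles:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
            clusters[i].append(p)
        centroids = [
            [sum(col) / len(c) for col in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids
```

New customers would then be assigned to the nearest centroid once enough metering data accumulates.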

Relevance:

30.00%

Publisher:

Abstract:

This paper describes our semi-automatic keyword-based approach for the four topics of the Information Extraction from Microblogs Posted during Disasters task at the Forum for Information Retrieval Evaluation (FIRE) 2016. The approach consists of three phases.

Relevance:

30.00%

Publisher:

Abstract:

Nowadays robotic applications are widespread and most manipulation tasks are efficiently solved. However, Deformable Objects (DOs) still represent a huge limitation for robots. The main difficulty in DO manipulation is dealing with shape and dynamics uncertainties, which prevent the use of model-based approaches (since they are excessively computationally complex) and make sensory data difficult to interpret. This thesis reports research activities aimed at addressing applications in robotic manipulation and sensing of Deformable Linear Objects (DLOs), with particular focus on electric wires. Throughout, a significant effort was made to study effective strategies for analyzing sensory signals with various machine learning algorithms. The first part of the document concerns wire terminals, i.e. detection, grasping, and insertion. First, a pipeline that integrates vision and tactile sensing is developed; then further improvements are proposed for each module. A novel procedure is proposed to gather and label massive amounts of training images for object detection with minimal human intervention, and a generic object detector based on Convolutional Neural Networks is extended for orientation prediction. The insertion task is also extended by developing a closed-loop controller capable of guiding the insertion of a longer, curved segment of wire through a hole, where the contact forces are estimated by means of a Recurrent Neural Network. In the latter part of the thesis, the interest shifts to the DLO shape. Robotic reshaping of a DLO is addressed by means of a sequence of pick-and-place primitives, while a decision-making process driven by visual data learns the optimal grasping locations, exploiting Deep Q-learning, and finds the best releasing point. The success of the solution leverages a reliable interpretation of the DLO shape; for this reason, further developments are made on the visual segmentation.
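The decision-making component relies on Deep Q-learning, where a network approximates the Q-function; the target it is trained toward is the same as in the tabular rule below. The states, actions and rewards here are generic placeholders, not the thesis's grasp/release encoding.

```python
def q_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

In the deep variant, the dictionary lookup is replaced by a network forward pass and the update by a gradient step on the squared difference between `Q(s,a)` and the bracketed target.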

Relevance:

30.00%

Publisher:

Abstract:

The job of a historian is to understand what happened in the past, resorting in many cases to written documents as a firsthand source of information. Text, however, is not the only source of knowledge: pictorial representations have also accompanied the main events of the historical timeline. In particular, the opportunity to represent circumstances visually has bloomed since the invention of photography, with the possibility of capturing the occurrence of specific events in real time. Thanks to the widespread use of digital technologies (e.g. smartphones and digital cameras), networking capabilities and the consequent availability of multimedia content, the academic and industrial research communities have developed artificial intelligence (AI) paradigms with the aim of inferring, transferring and creating new layers of information from images, videos, etc. While AI communities are devoting much of their attention to analyzing digital images, from a historical research standpoint more interesting results may be obtained by analyzing analog images representing the pre-digital era. Within this scenario, the aim of this work is to analyze a collection of analog documentary photographs, building upon state-of-the-art deep learning techniques. In particular, the analysis carried out in this thesis aims at two results: (a) estimating the date of an image, and (b) recognizing its background socio-cultural context, as defined by a group of historical-sociological researchers. Given these premises, the contributions of this work amount to: (i) the introduction of a historical dataset of “Family Album” images spanning the twentieth century, (ii) the introduction of a new classification task regarding the identification of the socio-cultural context of an image, and (iii) the exploitation of different deep learning architectures to perform image dating and image socio-cultural context classification.