692 results for IMS Learning Design
Abstract:
This study examines how awareness of a building's interior architecture, specifically daylighting, affects students' academic performance. Extensive research has shown that the use of daylighting in a classroom can significantly enhance students' academic success. The purpose of this study is to determine whether students who are aware of the daylighting in their learning environment perform better academically than students with no knowledge of daylighting. Research and surveys in existing and newly constructed high schools were conducted to verify the results of this study. These design ideas and concepts could encourage the architecture and design industry to advocate construction and building requirements that incorporate more sustainable design teaching techniques.
Abstract:
This mixed-methods concurrent triangulation design study was predicated upon two models that posit a connection between teaching presence and perceived learning: the Community of Inquiry Model of Online Learning developed by Garrison, Anderson, and Archer (2000), and the Online Interaction Learning Model by Benbunan-Fich, Hiltz, and Harasim (2005). The objective was to learn how teaching presence affected students' perceptions of learning and sense of community in intensive online distance education courses developed and taught by instructors at a regional comprehensive university. In the quantitative phase, online surveys collected relevant data from participating students (N = 397) and selected instructional faculty (N = 32) during the second week of a three-week Winter Term. Student information included demographics such as age, gender, employment status, and distance from campus; perceptions of teaching presence; sense of community; perceived learning; course length; and course type. The student data showed positive relationships between teaching presence, perceived learning, and sense of community. The instructor data showed similar positive relationships, with no significant differences when the student and instructor data were compared. The qualitative phase consisted of interviews with 12 instructors who had completed the online survey and replied to all of the open-response questions. The two phases were integrated using matrix generation, and the analysis allowed for conclusions regarding teaching presence, perceived learning, and sense of community. The findings were equivocal with regard to satisfaction with course length and the relative importance of the teaching presence components. A model was provided depicting relationships among teaching presence components, perceived learning, and sense of community in intensive online courses.
Abstract:
This paper presents an improved NSGA-II (Non-Dominated Sorting Genetic Algorithm, version II) that incorporates a parameter-free self-tuning approach based on reinforcement learning, called the Non-Dominated Sorting Genetic Algorithm Based on Reinforcement Learning (NSGA-RL). The proposed method is compared with the classical NSGA-II when applied to a satellite coverage problem. Furthermore, the optimization results are compared with those obtained by other multiobjective optimization methods, and the proposed approach avoids time-consuming and complex parameter tuning.
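The non-dominated sorting step shared by NSGA-II and the proposed NSGA-RL can be sketched as follows. This is a minimal Python illustration of the standard fast non-dominated sorting algorithm, assuming minimization; variable names are mine, and the reinforcement-learning self-tuning layer of NSGA-RL is not shown.

```python
import numpy as np

def non_dominated_sort(objectives):
    """Fast non-dominated sorting: partition solutions into Pareto fronts.
    `objectives` is an (n, m) array of objective values to minimize."""
    n = len(objectives)
    dominated_by = [[] for _ in range(n)]      # solutions that i dominates
    domination_count = np.zeros(n, dtype=int)  # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # i dominates j: no worse in every objective, strictly better in one
            if np.all(objectives[i] <= objectives[j]) and np.any(objectives[i] < objectives[j]):
                dominated_by[i].append(j)
            elif np.all(objectives[j] <= objectives[i]) and np.any(objectives[j] < objectives[i]):
                domination_count[i] += 1
        if domination_count[i] == 0:
            fronts[0].append(i)                # non-dominated: first front
    k = 0
    while fronts[k]:
        next_front = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                domination_count[j] -= 1
                if domination_count[j] == 0:   # only solutions in earlier fronts dominate j
                    next_front.append(j)
        k += 1
        fronts.append(next_front)
    return fronts[:-1]
```

In the full algorithm, these front indices drive selection, with crowding distance breaking ties within a front.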
Abstract:
Objectives: To evaluate the learning, retention and transfer of performance improvements after Nintendo Wii Fit (TM) training in patients with Parkinson's disease and healthy elderly people. Design: Longitudinal, controlled clinical study. Participants: Sixteen patients with early-stage Parkinson's disease and 11 healthy elderly people. Interventions: Warm-up exercises and Wii Fit training of motor skills (shifting the centre of gravity and step alternation) and cognitive skills. A follow-up evaluative Wii Fit session was held 60 days after the end of training. Participants performed a functional reach test before and after training as a measure of learning transfer. Main outcome measures: Learning and retention were determined based on the scores of 10 Wii Fit games over eight sessions. Transfer of learning was assessed after training using the functional reach test. Results: Patients with Parkinson's disease showed no deficit in learning or retention on seven of the 10 games, despite showing poorer performance on five games compared with the healthy elderly group. Patients with Parkinson's disease showed marked learning deficits on the three other games, independent of poorer initial performance. This deficit appears to be associated with the cognitive demands of these games, which require decision-making, response inhibition, divided attention and working memory. Finally, patients with Parkinson's disease were able to transfer motor ability trained in the games to a similar untrained task. Conclusions: The ability of patients with Parkinson's disease to learn, retain and transfer performance improvements after training on the Nintendo Wii Fit depends largely on the demands, particularly cognitive demands, of the games involved, reiterating the importance of game selection for rehabilitation purposes. (C) 2012 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
Abstract:
The continuous increase in genome sequencing projects has produced a huge amount of data over the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone determines only raw nucleotide sequences. This is just the first step of the genome annotation process, which deals with the issue of assigning biological information to each sequence. Annotation is carried out at every level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished by in vitro analysis procedures alone, which are extremely expensive and time consuming when applied at this large scale. Thus, in silico methods are needed to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow fast, reliable, and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine learning based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method, called BaCelLo, was developed in 2006. Its main peculiarity is its independence from biases present in the training dataset, which cause over-prediction of the most represented examples in all the other predictors developed so far. This result was achieved by a modification I made to the standard Support Vector Machine (SVM) algorithm, creating the so-called Balanced SVM. BaCelLo is able to predict the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was shown to outperform all the currently available state-of-the-art methods for this prediction task.
BaCelLo was subsequently used to completely annotate five eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genome, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine learning based method was implemented for the prediction of GPI-anchored proteins. The method efficiently predicts from the raw amino acid sequence both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model, HMM). The method, called GPIPE, was shown to greatly improve prediction performance for GPI-anchored proteins over all previously developed methods. GPIPE was able to predict up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false positive prediction rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15,000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis was performed on the composition of the regions surrounding the ω-site, which allowed the definition of specific amino acid abundances in the different regions considered. Furthermore, the hypothesis proposed in the literature that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo http://gpcr.biocomp.unibo.it/bacello eSLDB http://gpcr.biocomp.unibo.it/esldb GPIPE http://gpcr.biocomp.unibo.it/gpipe
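The exact Balanced SVM modification is not detailed in this abstract; a related, standard remedy for training-set imbalance is to weight each class's hinge-loss errors inversely to its frequency, so that the most represented class cannot dominate training. A minimal sketch of that idea with a linear SVM trained by subgradient descent (all names and settings are illustrative, not BaCelLo's actual implementation):

```python
import numpy as np

def balanced_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM with a class-weighted hinge loss: each example's error is
    weighted inversely to its class frequency, so a rare class is not
    overwhelmed by an abundant one. Illustrative stand-in only; the
    thesis's Balanced SVM modification may differ."""
    n, d = X.shape
    classes, counts = np.unique(y, return_counts=True)
    # weight of an example = n / (n_classes * count(its class))
    w_class = {c: n / (len(classes) * k) for c, k in zip(classes, counts)}
    sample_w = np.array([w_class[c] for c in y])
    yy = np.where(y == classes[1], 1.0, -1.0)       # map labels to +/-1
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = yy * (X @ w + b)
        mask = margins < 1                           # margin violators
        # subgradient of lam/2 ||w||^2 + (1/n) sum_i s_i * hinge_i
        grad_w = lam * w - (sample_w[mask] * yy[mask]) @ X[mask] / n
        grad_b = -np.sum(sample_w[mask] * yy[mask]) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(w, b, X):
    return (X @ w + b >= 0).astype(int)
```

Without the per-class weights, the decision boundary of such a classifier drifts toward the minority class, which is exactly the over-prediction bias the abstract describes.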
Abstract:
The use of IT for teaching and learning is widely accepted as a means to enhance the learning experience. Hence, education professionals at all levels feel the impulse to introduce some kind of IT design into classrooms of every kind, where the use of IT has, at points, become mandatory. Nevertheless, there is little conclusive data pinpointing the exact benefits that a given IT design, per se, brings to teaching or learning [1,2,3,4]. As with any other technology, we contend, IT should be closely associated with the teaching methodology to be implemented, taking into account all the factors that are going to influence the whole process. In this article, we analyse parameters that are considered critical if we are to predict the possible success of an IT design.
Abstract:
In the collective imagination a robot is a human-like machine, like the androids of science fiction. However, the robots you will encounter most frequently are machines that do work that is too dangerous, boring or onerous for humans. Most of the robots in the world are of this type. They can be found in the automotive, medical, manufacturing and space industries. A robot, then, is a system that contains sensors, control systems, manipulators, power supplies and software, all working together to perform a task. The development and use of such systems is an active area of research, and one of the main problems is the development of interaction skills with the surrounding environment, which include the ability to grasp objects. To perform this task the robot needs to sense the environment and acquire information about the object: physical attributes that may influence a grasp. Humans solve this grasping problem easily thanks to their past experience, which is why many researchers approach it from a machine learning perspective, inferring grasps for an object using information about already known objects. But humans can select the best grasp from a vast repertoire considering not only the physical attributes of the object but also the effect they want to obtain. This is why, in our case, the study in the area of robot manipulation focuses on grasping and on integrating symbolic tasks with data gained through sensors. The learning model is based on a Bayesian network that encodes the statistical dependencies between the data collected by the sensors and the symbolic task. This representation has several advantages: it takes into account the uncertainty of the real world, allowing sensor noise to be handled; it encodes a notion of causality; and it provides a unified network for learning.
Since the current network is hand-built from human expert knowledge, it is very interesting to implement an automated method to learn its structure: as more tasks and object features are introduced in the future, a complex network designed only from expert knowledge can become unreliable. Since structure learning algorithms present some weaknesses, the goal of this thesis is to analyze the real data used in the network modeled by the human expert, implement a feasible structure learning approach, and compare the results with the network designed by the expert in order to possibly enhance it.
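As one concrete instance of automated structure learning, a tree-shaped Bayesian-network skeleton can be recovered from data with the classic Chow-Liu procedure: a maximum spanning tree over pairwise mutual information. This is a simple stand-in for illustration, not necessarily the approach the thesis adopts:

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Empirical mutual information between two discrete variables."""
    n = len(x)
    pxy = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    mi = 0.0
    for (a, b), c in pxy.items():
        pab = c / n
        # p(a,b) / (p(a) p(b)) = c * n / (count(a) * count(b))
        mi += pab * np.log(pab * n * n / (px[a] * py[b]))
    return mi

def chow_liu_edges(data):
    """Learn a tree-shaped Bayesian-network skeleton (Chow & Liu, 1968):
    maximum spanning tree over pairwise mutual information.
    `data` is an (n_samples, n_vars) integer array."""
    n_vars = data.shape[1]
    edges = sorted(
        ((mutual_information(data[:, i], data[:, j]), i, j)
         for i in range(n_vars) for j in range(i + 1, n_vars)),
        reverse=True)
    parent = list(range(n_vars))            # union-find for Kruskal's algorithm
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u
    tree = []
    for _, i, j in edges:                   # strongest dependencies first
        ri, rj = find(i), find(j)
        if ri != rj:                        # keep the edge set acyclic
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

More general score-based or constraint-based learners relax the tree restriction, at the cost of the weaknesses (search complexity, sensitivity to sample size) the thesis alludes to.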
Abstract:
Broad consensus has been reached within the Education and Cognitive Psychology research communities on the need to center the learning process on experimentation and the concrete application of knowledge, rather than on a bare transfer of notions. Several advantages arise from this educational approach, ranging from the reinforcement of students' learning, to the increased opportunity for a student to gain greater insight into the studied topics, up to the possibility for learners to acquire practical skills and long-lasting proficiency. This is especially true in Engineering education, where integrating conceptual knowledge and practical skills assumes strategic importance. In this scenario, learners are called to play a primary role. They are actively involved in the construction of their own knowledge, instead of passively receiving it. As a result, traditional, teacher-centered learning environments should be replaced by novel learner-centered solutions. Information and Communication Technologies enable the development of innovative solutions that provide suitable answers to the need for experimentation supports in educational contexts. Virtual Laboratories, Adaptive Web-Based Educational Systems and Computer-Supported Collaborative Learning environments can significantly foster different learner-centered instructional strategies, offering the opportunity to enhance personalization, individualization and cooperation. More specifically, they allow students to explore different kinds of materials, to access and compare several information sources, to face real or realistic problems and to work on authentic and multi-faceted case studies. In addition, they encourage cooperation among peers and provide support through coached and scaffolded activities aimed at fostering reflection and meta-cognitive reasoning.
This dissertation guides readers through this research field, presenting both the theoretical and applied results of research aimed at designing an open, flexible, learner-centered virtual lab to support students in learning Information Security.
Abstract:
This thesis describes the design and synthesis of potential agents for the treatment of the multifactorial Alzheimer's disease (AD). Our multi-target approach was to consider the cannabinoid system involved in AD together with classic targets. In the first project, designed modifications were performed on a lead molecule in order to increase potency and obtain balanced activities on fatty acid amide hydrolase (FAAH) and cholinesterases. A small library of compounds was synthesized, and biological results showed increased inhibitory activity (nanomolar range) against the selected targets. The second project focused on the benzofuran framework, a privileged structure and a common moiety found in many biologically active natural products and therapeutics. Hybrid molecules were designed and synthesized, focusing on the inhibition of cholinesterases, Aβ aggregation and FAAH, and on the interaction with CB receptors. Preliminary results showed that several compounds are potent CB ligands; in particular, their high affinity for CB2 receptors could open new opportunities to modulate neuroinflammation. The third and fourth projects were carried out at the IMS, Aberdeen, under the supervision of Prof. Matteo Zanda. The role of the cannabinoid system in the brain is still largely unexplored, and the relationship between the functional modification, density and distribution of CB1 receptors and the onset of a pathological state is not well understood. For these reasons, Rimonabant analogues suitable as radioligands were synthesized; through PET, these could provide reliable measurements of the density and distribution of CB1 receptors in the brain. In the fifth project, in collaboration with the CHyM of York, the goal was to develop arginine analogues that are target-specific due to their exclusive location in NOS enzymes and could work as MRI contrast agents. The synthesized analogues could be suitable substrates for the transfer of polarization from p-H2 molecules through the SABRE technique, making MRI a more sensitive and faster technique.
Abstract:
Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and deals with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a consistent training set and notable computational effort. Methods for cross-domain text categorization have been proposed, allowing a set of labeled documents from one domain to be leveraged to classify those of another. Most methods use advanced statistical techniques, usually involving tuning of parameters. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification.
Results show that classification accuracy still requires improvement, but models generated from one domain are shown to be effectively reusable in a different one.
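The iterative nearest-centroid adaptation described in the first contribution might be sketched as follows. This is a minimal illustration under my own assumptions about profile construction and the stopping rule; the actual method's details may differ.

```python
import numpy as np

def normalize(M):
    """L2-normalize rows so dot products are cosine similarities."""
    return M / np.maximum(np.linalg.norm(M, axis=1, keepdims=True), 1e-12)

def cross_domain_nearest_centroid(X_src, y_src, X_tgt, n_iter=10):
    """Cross-domain categorization by iteratively adapted nearest-centroid
    classification. Category profiles are built from the labeled source
    domain, then repeatedly re-estimated from the target documents they
    currently attract. Rows of X_src / X_tgt are document vectors
    (e.g. tf-idf). Sketch only; a fixed iteration count stands in for a
    convergence test."""
    classes = np.unique(y_src)
    # initial profiles: mean of source documents per category
    centroids = normalize(np.stack([X_src[y_src == c].mean(axis=0) for c in classes]))
    for _ in range(n_iter):
        sims = normalize(X_tgt) @ centroids.T      # cosine similarity to each profile
        pred = sims.argmax(axis=1)
        # adapt profiles to the target domain using the current predictions
        new_centroids = np.stack([
            X_tgt[pred == k].mean(axis=0) if np.any(pred == k) else centroids[k]
            for k in range(len(classes))])
        centroids = normalize(new_centroids)
    return classes[pred]
```

The appeal the abstract claims is visible even in the sketch: no per-dataset parameter tuning beyond the iteration count, and each pass is a single matrix product.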
Abstract:
Background: Medication-related problems are common in the growing population of older adults and inappropriate prescribing is a preventable risk factor. Explicit criteria such as the Beers criteria provide a valid instrument for describing the rate of inappropriate medication (IM) prescriptions among older adults. Objective: To reduce IM prescriptions based on explicit Beers criteria using a nurse-led intervention in a nursing-home (NH) setting. Study Design: The pre/post-design included IM assessment at study start (pre-intervention), a 4-month intervention period, IM assessment after the intervention period (post-intervention) and a further IM assessment at 1-year follow-up. Setting: 204-bed inpatient NH in Bern, Switzerland. Participants: NH residents aged ≥60 years. Intervention: The intervention included four key elements: (i) adaptation of the Beers criteria to the Swiss setting; (ii) IM identification; (iii) IM discontinuation; and (iv) staff training. Main Outcome Measure: IM prescription at study start, after the 4-month intervention period and at 1-year follow-up. Results: The mean±SD resident age was 80.3±8.8 years. Residents were prescribed a mean±SD 7.8±4.0 medications. The prescription rate of IMs decreased from 14.5% pre-intervention to 2.8% post-intervention (relative risk [RR] = 0.2; 95% CI 0.06, 0.5). The risk of IM prescription increased, though not statistically significantly, in the 1-year follow-up period compared with post-intervention (RR = 1.6; 95% CI 0.5, 6.1). Conclusions: This intervention to reduce IM prescriptions based on explicit Beers criteria was feasible, easy to implement in an NH setting, and resulted in a substantial decrease in IMs. These results underscore the importance of involving nursing staff in the medication prescription process in a long-term care setting.
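For reference, a relative risk with its 95% confidence interval, as reported above, can be computed from raw counts with the standard log-RR normal approximation. The counts in the usage line below are hypothetical, since the abstract reports only rates:

```python
import math

def relative_risk(a, n1, b, n2):
    """Relative risk of an event in group 1 (a events out of n1) versus
    group 2 (b events out of n2), with a 95% CI from the log-RR normal
    approximation. Standard epidemiological formula; illustrative only."""
    p1, p2 = a / n1, b / n2
    rr = p1 / p2
    # standard error of log(RR)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# hypothetical counts: 10/100 residents with an IM vs 20/100
rr, ci = relative_risk(10, 100, 20, 100)
```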
Abstract:
Unique as snowflakes, learning communities are formed in countless ways. Some are designed specifically for first-year students, while others offer combined or clustered upper-level courses. Most involve at least two linked courses, and some add residential and social components. Many address core general education and basic skills requirements. Learning communities differ in design, yet they are similar in striving to enhance students' academic and social growth. First-year learning communities foster experiences that have been linked to academic success and retention. They also offer unique opportunities for librarians interested in collaborating with departmental faculty and enhancing teaching skills. This article will explore one librarian's experiences teaching within three first-year learning communities at Buffalo State College.
Digital signal processing and digital system design using discrete cosine transform [student course]
Abstract:
The discrete cosine transform (DCT) is an important functional block for image processing applications. The implementation of a DCT has been viewed as a specialized research task. We apply a micro-architecture based methodology to the hardware implementation of an efficient DCT algorithm in a digital design course. Several circuit optimization and design space exploration techniques at the register-transfer and logic levels are introduced in class for generating the final design. The students not only learn how the algorithm can be implemented, but also receive insights about how other signal processing algorithms can be translated into a hardware implementation. Since signal processing has very broad applications, the study and implementation of an extensively used signal processing algorithm in a digital design course significantly enhances the learning experience in both digital signal processing and digital design areas for the students.
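The row-column (separable) structure that makes the DCT attractive for hardware mapping can be shown in a few lines. This is an orthonormal DCT-II sketch for teaching purposes, not the optimized algorithm used in the course:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: entry (k, i) = c_k * cos(pi*(2i+1)*k / (2n)),
    with c_0 scaled by 1/sqrt(2) so that C @ C.T = I."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def dct2(block):
    """Separable 2-D DCT of a square block (e.g. an 8x8 image tile):
    C @ X @ C.T. The row-column decomposition is what makes hardware
    implementations efficient: two passes of 1-D transforms (shared
    multiplier/adder structures) instead of a full 2-D kernel."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T
```

A constant block transforms to a single DC coefficient with all other outputs zero, a handy smoke test when students debug a register-transfer-level implementation against a software model.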
Abstract:
Aims: To determine whether or not a Learning Disability (LD) label leads to stigmatization. Study Design: This research used a 2 (sex of participant) × 2 (LD label) × 2 (sex of stimulus person) factorial design. Place and Duration of Study: Bucknell University, between October 2010 and April 2011. Methodology: Sample: We included 200 participants (137 women and 63 men, ranging in age from 18 to 75 years, M = 26.41). Participants rated the stimulus individual on 27 personality traits, 8 life success measures, and the Big-5 personality dimensions. Participants also completed a social desirability measure. Results: A MANOVA revealed a main effect of the Learning Disability description, F(6, 185) = 6.41, p < .0001, eta2 = .17, for the Big-5 personality dimensions Emotional Stability, F(1, 185) = 13.39, p < .001, eta2 = .066, and Openness to Experiences, F(1, 185) = 7.12, p < .008, eta2 = .036. Stimulus individuals described as having a learning disability were perceived as being less emotionally stable and more open to experiences than those described as not having a learning disability. Another MANOVA revealed a main effect of having a disability or not, F(8, 183) = 4.29, p < .0001, eta2 = .158, for the life success items Attractiveness, F(1, 198) = 16.63, p < .0001, eta2 = .080, and Future Success, F(1, 198) = 4.57, p < .034, eta2 = .023. Stimulus individuals described as having a learning disability were perceived as being less attractive and as having less potential for success than those described as not having a learning disability. Conclusion: The results of this research provide evidence that a bias exists toward those who have learning disabilities. The mere presence of an LD label caused a differential perception of those with LDs compared with those without.
Abstract:
Disturbances in reward processing have been implicated in bulimia nervosa (BN). Abnormalities in processing reward-related stimuli might be linked to dysfunctions of the catecholaminergic neurotransmitter system, but findings have been inconclusive. A powerful way to investigate the relationship between catecholaminergic function and behavior is to examine behavioral changes in response to experimental catecholamine depletion (CD). The purpose of this study was to uncover putative catecholaminergic dysfunction in remitted subjects with BN who performed a reinforcement-learning task after CD. CD was achieved by oral alpha-methyl-para-tyrosine (AMPT) in 19 unmedicated female subjects with remitted BN (rBN) and 28 demographically matched healthy female controls (HC). In the sham depletion condition, identical capsules containing diphenhydramine were administered. The study design was a randomized, double-blind, placebo-controlled, crossover, single-site experimental trial. The main outcome measure was reward learning in a probabilistic reward task analyzed using signal-detection theory. Secondary outcome measures included self-report assessments, including the Eating Disorder Examination-Questionnaire. Relative to healthy controls, rBN subjects were characterized by blunted reward learning in the AMPT but not in the placebo condition. Highlighting the specificity of these findings, the groups did not differ in their ability to perceptually distinguish between stimuli. Increased CD-induced anhedonic (but not eating disorder) symptoms were associated with a reduced response bias toward the more frequently rewarded stimulus. In conclusion, under CD, rBN subjects showed reduced reward learning compared with healthy control subjects. These deficits uncover disturbance of the central reward processing systems in rBN related to altered brain catecholamine levels, which might reflect a trait-like deficit increasing vulnerability to BN.
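The two signal-detection quantities such probabilistic reward tasks rely on, response bias (log b, the tendency to favor the more frequently rewarded stimulus) and discriminability (log d, the ability to tell the stimuli apart), can be computed from the four response counts. A sketch using the standard formulation; the study's exact analysis pipeline is not given in the abstract:

```python
import math

def sdt_measures(rich_correct, rich_incorrect, lean_correct, lean_incorrect):
    """Response bias (log b) and discriminability (log d) from a
    probabilistic reward task, per signal-detection theory. 0.5 is added
    to each cell, a common correction for empty cells."""
    rc, ri = rich_correct + 0.5, rich_incorrect + 0.5
    lc, li = lean_correct + 0.5, lean_incorrect + 0.5
    log_b = 0.5 * math.log((rc * li) / (ri * lc))  # preference for the rich stimulus
    log_d = 0.5 * math.log((rc * lc) / (ri * li))  # ability to tell stimuli apart
    return log_b, log_d
```

A positive log b with unchanged log d is the signature the abstract describes: participants shift their responses toward the rewarded stimulus without any change in perceptual discrimination.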