699 results for Learning center design
Abstract:
This mixed methods concurrent triangulation design study was predicated upon two models that advocate a connection between teaching presence and perceived learning: the Community of Inquiry Model of Online Learning developed by Garrison, Anderson, and Archer (2000), and the Online Interaction Learning Model by Benbunan-Fich, Hiltz, and Harasim (2005). The objective was to learn how teaching presence affected students' perceptions of learning and sense of community in intensive online distance education courses developed and taught by instructors at a regional comprehensive university. In the quantitative phase, online surveys collected relevant data from participating students (N = 397) and selected instructional faculty (N = 32) during the second week of a three-week Winter Term. Student information included: demographics such as age, gender, employment status, and distance from campus; perceptions of teaching presence; sense of community; perceived learning; course length; and course type. The student data showed positive relationships among teaching presence, perceived learning, and sense of community. The instructor data showed similar positive relationships, with no significant differences when the student and instructor data were compared. The qualitative phase consisted of interviews with 12 instructors who had completed the online survey and replied to all of the open-response questions. The two phases were integrated through matrix generation, and the analysis allowed for conclusions regarding teaching presence, perceived learning, and sense of community. The findings were equivocal with regard to satisfaction with course length and the relative importance of the teaching presence components. A model was provided depicting relationships between and among teaching presence components, perceived learning, and sense of community in intensive online courses.
Abstract:
CONTEXT AND OBJECTIVE: Injuries are an important cause of morbidity during adolescence, but can be avoided by learning about some of their characteristics. This study aimed to identify the most frequent injuries among adolescents seen at an emergency service. DESIGN AND SETTING: Retrospective descriptive study on adolescents seen at the emergency service of the Teaching Health Center, Faculdade de Medicina de Ribeirão Preto (FMRP), between January 1, 2009, and September 30, 2009. METHODS: Age, sex, type of injury, site, day and time of occurrence, part of body involved, care received, whether the adolescent was accompanied at the time of injury and whether any type of counseling regarding injury prevention had been given were analyzed. RESULTS: Among the 180 adolescents seen, 106 (58.8%) were boys and 74 (41.1%) were girls. Their ages were 10 to 12 years (66; 36.6%), 12 to 14 years (60; 33.3%) and 14 to 16 years (54; 30%). The injuries had occurred in public places (47.7%) and at home (21.1%). The main types were bruises (45.1%) and falls (39.2%), involving upper limbs (46.1%), lower limbs (31%) and head/neck (13.1%). The injuries occurred in the afternoon (44.4%) and morning (30%), on Mondays (17.7%) and Thursdays (16.6%). Radiological examinations were performed on 53.8%. At the time of injury, 76.1% of the adolescents were accompanied. Some type of counseling about injury prevention had been received by 39.4%. CONCLUSIONS: Although the injuries were of low severity, preventive attitudes need to be incorporated in order to reduce the risks and provide greater safety for adolescents.
Abstract:
The ALRED construction is a lightweight strategy for constructing message authentication algorithms from an underlying iterated block cipher. Even though the original analyses of this construction show that it is secure against some attacks, the absence of formal security proofs in a strong security model still leaves uncertainty about its robustness. In this paper, aiming to give a better understanding of the security level provided by different authentication algorithms based on this design strategy, we formally analyze two ALRED variants, the MARVIN message authentication code and the LETTERSOUP authenticated-encryption scheme, bounding their security as a function of the attacker's resources and of the underlying cipher's characteristics.
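The abstract does not spell out ALRED's internals, but the general pattern it builds on, a MAC driven by an iterated block cipher, can be fixed with a toy example. The sketch below is a plain CBC-MAC in Python using the third-party cryptography package; it is illustrative only, it is not the ALRED construction, and plain CBC-MAC is insecure for variable-length messages as written.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16  # AES block size in bytes

def cbc_mac(key: bytes, message: bytes) -> bytes:
    """Toy CBC-MAC: XOR each message block into the state, then encrypt.
    Illustrates an iterated block-cipher MAC only -- NOT ALRED."""
    # ISO/IEC 9797-1 padding method 2: append 0x80, then zeros to a block boundary.
    padded = message + b"\x80" + b"\x00" * ((-len(message) - 1) % BLOCK)
    state = bytes(BLOCK)  # all-zero initial state
    for i in range(0, len(padded), BLOCK):
        xored = bytes(a ^ b for a, b in zip(state, padded[i:i + BLOCK]))
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        state = enc.update(xored) + enc.finalize()
    return state

# Example: tag = cbc_mac(b"\x00" * 16, b"attack at dawn")
```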
Abstract:
The preparation, crystal structure and magnetic properties of a new oxalate-containing copper(II) chain of formula {[(CH3)4N]2[Cu(C2O4)2]·H2O}n (1) [(CH3)4N+ = tetramethylammonium cation] are reported. The structure of 1 consists of anionic oxalate-bridged copper(II) chains, tetramethylammonium cations and water molecules of crystallization. Each copper(II) ion in 1 is surrounded by three oxalate ligands, one being bidentate and the other two exhibiting bis-bidentate coordination modes. Although all the tris-chelated copper(II) units within a given chain exhibit the same helicity, adjacent chains have opposite helicities, and an achiral structure therefore results. Variable-temperature magnetic susceptibility measurements of 1 show the occurrence of a weak ferromagnetic interaction through the oxalate bridge [J = +1.14(1) cm-1, the Hamiltonian being defined as H = -J Σ Si·Sj, the sum running over nearest-neighbour pairs]. This value is analyzed and discussed in the light of available magnetostructural data for oxalate-bridged copper(II) complexes with the same out-of-plane exchange pathway. (C) 2012 Académie des sciences. Published by Elsevier Masson SAS. All rights reserved.
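For readability, the exchange Hamiltonian and fitted coupling quoted above, set in standard notation (with this sign convention a positive J denotes ferromagnetic coupling, consistent with the weak ferromagnetic interaction reported):

```latex
\hat{H} = -J \sum_{\langle i,j \rangle} \hat{S}_i \cdot \hat{S}_j ,
\qquad J = +1.14(1)\ \mathrm{cm}^{-1}
```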
Abstract:
This paper provides an improved NSGA-II (Non-Dominated Sorting Genetic Algorithm, version II) that incorporates a parameter-free self-tuning approach based on reinforcement learning, called the Non-Dominated Sorting Genetic Algorithm Based on Reinforcement Learning (NSGA-RL). The proposed method is compared in particular with the classical NSGA-II when applied to a satellite coverage problem. Furthermore, the optimization results are not only compared with results obtained by other multiobjective optimization methods; the method also avoids time-consuming and complex parameter tuning.
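The abstract does not detail how NSGA-RL's reinforcement learning component works internally. As an illustration of the general idea of parameter-free self-tuning, the sketch below uses an epsilon-greedy multi-armed bandit to pick a mutation rate each generation, rewarding rates whose offspring improve on their parents; run_generation, the candidate rates and the reward signal are hypothetical stand-ins, not the paper's algorithm.

```python
import random

ARMS = [0.01, 0.05, 0.1, 0.2]  # candidate mutation rates (hypothetical)
EPSILON = 0.1                  # exploration probability

q_values = {rate: 0.0 for rate in ARMS}  # running mean reward per rate
counts = {rate: 0 for rate in ARMS}

def pick_rate():
    """Epsilon-greedy choice: usually exploit the best rate, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ARMS)
    return max(ARMS, key=lambda r: q_values[r])

def update(rate, reward):
    """Incremental running-average update for the chosen rate."""
    counts[rate] += 1
    q_values[rate] += (reward - q_values[rate]) / counts[rate]

def self_tuning_loop(run_generation, generations=100):
    """run_generation(rate) should run one GA generation with that mutation
    rate and return a reward, e.g. the fraction of offspring dominating
    their parents (hypothetical interface)."""
    for _ in range(generations):
        rate = pick_rate()
        update(rate, run_generation(rate))
    return max(ARMS, key=lambda r: q_values[r])

# Smoke test with a dummy reward: self_tuning_loop(lambda rate: random.random())
```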
Abstract:
Background: Atrial fibrillation is a serious public health problem, posing a considerable burden not only to patients but also to the healthcare environment, due to high rates of morbidity, mortality, and medical resource utilization. There are limited data on the variation in treatment practice patterns across different countries and healthcare settings and on the associated health outcomes. Methods/design: RHYTHM-AF was a prospective observational multinational study of the management of patients with recent-onset atrial fibrillation considered for cardioversion, designed to collect data on international treatment patterns and short-term outcomes related to cardioversion. We present data collected in 10 countries between May 2010 and June 2011. Enrollment was ongoing in Italy and Brazil at the time of data analysis. Data were collected at the time of the atrial fibrillation episode in all countries (Australia, Brazil, France, Germany, Italy, Netherlands, Poland, Spain, Sweden, United Kingdom), and cumulative follow-up data were collected at day 60 (+/- 10) in all but Spain. Information was collected on center characteristics, enrollment data, patient demographics, details of the atrial fibrillation episode, medical history, diagnostic procedures, acute treatment of atrial fibrillation, discharge information and follow-up data on major events and rehospitalizations up to day 60. Discussion: A total of 3940 patients were enrolled from 175 acute care centers. 70.5% of the centers were either academic (44%) or teaching (26%) hospitals, with an overall median capacity of 510 beds. The sites were mostly specialized, with anticoagulation (65.9%), heart failure (75.1%) and hypertension clinics (60.1%) available. The RHYTHM-AF registry will provide insight into regional variability in antiarrhythmic and antithrombotic treatment of atrial fibrillation, the appropriateness of such treatments with respect to outcomes, and their cost-efficacy. These observations will help inform strategies to improve cardiovascular outcomes in patients with atrial fibrillation.
Abstract:
Objectives: To evaluate the learning, retention and transfer of performance improvements after Nintendo Wii Fit(TM) training in patients with Parkinson's disease and healthy elderly people. Design: Longitudinal, controlled clinical study. Participants: Sixteen patients with early-stage Parkinson's disease and 11 healthy elderly people. Interventions: Warm-up exercises and Wii Fit training involving motor skills (centre-of-gravity shifts and step alternation) and cognitive skills. A follow-up evaluative Wii Fit session was held 60 days after the end of training. Participants performed a functional reach test before and after training as a measure of learning transfer. Main outcome measures: Learning and retention were determined based on the scores of 10 Wii Fit games over eight sessions. Transfer of learning was assessed after training using the functional reach test. Results: Patients with Parkinson's disease showed no deficit in learning or retention on seven of the 10 games, despite showing poorer performance on five games compared with the healthy elderly group. Patients with Parkinson's disease showed marked learning deficits on the three other games, independent of poorer initial performance. This deficit appears to be associated with the cognitive demands of the games, which require decision-making, response inhibition, divided attention and working memory. Finally, patients with Parkinson's disease were able to transfer motor ability trained in the games to a similar untrained task. Conclusions: The ability of patients with Parkinson's disease to learn, retain and transfer performance improvements after training on the Nintendo Wii Fit depends largely on the demands, particularly the cognitive demands, of the games involved, reiterating the importance of game selection for rehabilitation purposes. (C) 2012 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
Abstract:
The continuous increase in genome sequencing projects has produced a huge amount of data over the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone determines only raw nucleotide sequences. This is just the first step of the genome annotation process, which deals with assigning biological information to each sequence. Annotation is done at every level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished by in vitro analysis alone, which would be extremely expensive and time-consuming at such a large scale. Thus, in silico methods must be used to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow fast, reliable, and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine learning based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method, called BaCelLo, was developed in 2006. Its main peculiarity is that it is independent of the biases present in the training dataset, which cause the over-prediction of the most represented examples in all the other predictors developed so far. This important result was achieved by a modification, made by myself, to the standard Support Vector Machine (SVM) algorithm, creating the so-called Balanced SVM. BaCelLo is able to predict the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was shown to outperform all the available state-of-the-art methods for this prediction task. BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genomes, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine learning based method was implemented for the prediction of GPI-anchored proteins. The method efficiently predicts from the raw amino acid sequence both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model (HMM)). The method, called GPIPE, greatly improved prediction performance for GPI-anchored proteins over all previously developed methods. GPIPE was able to predict up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false positive rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis of the composition of the regions surrounding the ω-site allowed the definition of specific amino acid abundances in the different regions considered.
Furthermore, the hypothesis, proposed in the literature, that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo, http://gpcr.biocomp.unibo.it/bacello; eSLDB, http://gpcr.biocomp.unibo.it/esldb; GPIPE, http://gpcr.biocomp.unibo.it/gpipe
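The Balanced SVM above is the author's own modification, whose details are not given in the abstract. A common off-the-shelf approximation of the same idea, reweighting classes inversely to their frequency so that over-represented localizations do not dominate training, is scikit-learn's class_weight='balanced' option. A minimal sketch follows, with the feature matrix X (e.g., amino acid composition vectors) and localization labels y assumed to be given.

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def balanced_localization_svm():
    # class_weight='balanced' rescales each class's penalty C inversely to
    # its frequency, so rare localizations are not drowned out by common ones.
    return SVC(kernel="rbf", class_weight="balanced")

# Assuming X (n_proteins x n_features) and y (localization labels) exist:
# scores = cross_val_score(balanced_localization_svm(), X, y, cv=5)
# print("mean accuracy:", scores.mean())
```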
Abstract:
The use of IT for teaching and learning is widely accepted as a means to enhance the learning experience. Hence, education professionals at all levels feel the impulse to introduce some kind of IT design in classrooms of every kind, where the use of IT has, at points, become mandatory. Nevertheless, there is little conclusive data pinpointing the exact benefits that a given IT design, per se, brings to teaching or learning [1,2,3,4]. As with any other technology, we contend, IT should be closely associated with the teaching methodology to be implemented, taking into account all the factors that will influence the whole process. In this article, we analyse parameters considered critical for predicting the possible success of an IT design.
Abstract:
In the collective imagination a robot is a human-like machine, like the androids of science fiction. However, the robots encountered most frequently are machines that do work that is too dangerous, boring or onerous for humans. Most of the robots in the world are of this type. They can be found in the automotive, medical, manufacturing and space industries. A robot, then, is a system that contains sensors, control systems, manipulators, power supplies and software, all working together to perform a task. The development and use of such systems is an active area of research, and one of the main problems is the development of interaction skills with the surrounding environment, including the ability to grasp objects. To perform this task the robot needs to sense the environment and acquire information about the object: the physical attributes that may influence a grasp. Humans solve this grasping problem easily thanks to their past experience, which is why many researchers approach it from a machine learning perspective, deriving grasps for an object from information about already known objects. But humans can select the best grasp from a vast repertoire, considering not only the physical attributes of the object but also the effect they wish to obtain. This is why, in our case, the study of robot manipulation focuses on grasping and on integrating symbolic tasks with data gained through sensors. The learning model is based on a Bayesian network that encodes the statistical dependencies between the data collected by the sensors and the symbolic task. This data representation has several advantages: it takes the uncertainty of the real world into account, allowing sensor noise to be handled; it encodes a notion of causality; and it provides a unified network for learning. Since the network as implemented is based on human expert knowledge, it is very interesting to implement an automated method to learn its structure, as more tasks and object features may be introduced in the future, and a complex network design based only on human expert knowledge can become unreliable. Since structure learning algorithms present some weaknesses, the goal of this thesis is to analyze the real data used in the network modeled by the human expert, implement a feasible structure learning approach, and compare the results with the network designed by the expert in order to possibly enhance it.
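The thesis's actual network and data are not given in the abstract. As a minimal sketch of score-based Bayesian network structure learning of the kind described, the example below runs a hill-climbing search with a BIC score over toy discretized sensor/task data, using the pgmpy library; the variable names and data are hypothetical, and the pgmpy API is assumed as of recent versions.

```python
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

# Toy discretized grasp trials (hypothetical stand-ins for sensor data
# and the symbolic task label).
data = pd.DataFrame({
    "object_size":  ["small", "large", "small", "large", "small", "large"],
    "object_shape": ["box", "cylinder", "box", "box", "cylinder", "cylinder"],
    "task":         ["pour", "handover", "pour", "handover", "pour", "handover"],
    "grasp_type":   ["pinch", "power", "pinch", "power", "pinch", "power"],
})

# Greedy hill climbing: add/remove/reverse edges while the BIC score improves.
search = HillClimbSearch(data)
learned = search.estimate(scoring_method=BicScore(data))
print(sorted(learned.edges()))  # edges of the learned DAG, to compare with the expert's
```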
Abstract:
This work, carried out mainly in the laboratories of the Department of Industrial Chemistry and Materials of the University of Bologna, but also in the laboratories of Carnegie Mellon University in collaboration with Prof. K. Matyjaszewski and at the University of Zaragoza in collaboration with Prof. J. Barberá, was focused mainly on the synthesis and characterization of new functional polymeric materials. In past years our group gained deep knowledge of the photomodulation of azobenzene-containing polymers. The aim of this thesis is to push forward the performance of these materials through the synthesis of well-defined materials in which, by precise control over the macromolecular structure, better or even new functionality can be delivered. For this purpose, besides the rich photochemistry of azoaromatic polymers that leads to applications, the control offered by recent techniques of controlled radical polymerization, ATRP above all, gives an enormous range of opportunities for developing a new generation of functional materials whose properties are determined not only by the chemical nature of the functional center (e.g. the azoaromatic chromophore) but are tuned and even amplified by synergy with the whole macromolecular structure: old materials in new structures. In this context the work of this thesis focused mainly on the synthesis and characterization of well-defined azoaromatic polymers in order to establish, for the first time, precise structure-property correlations. A series of different well-defined azopolymers, chiral and achiral, with different molecular weights and very low dispersity, was synthesized, and their properties were studied in terms of photoexpansion and photomodulation of chirality. We were then able to study the influence of the macromolecular structure, in terms of molecular weight and branching, on the studied properties. The huge range of possibilities offered by tailoring the macromolecular structure was exploited for the synthesis of new cholesteric photochromic polymers that can be used as smart labels certifying the thermal history of any thermosensitive product. Finally, ATRP allowed us to synthesize a totally new class of materials, named molecular brushes: a flat surface covered with an ultra-thin layer of polymer chains covalently bound to the surface at one end. This new class of materials is of extreme interest as it offers the possibility to tune and manage the interaction of the surface with the environment. In this context we synthesized both azoaromatic surfaces, growing the polymer directly from the surface, and mixed brushes: surfaces covered with incompatible macromolecules. Both types of surface act as "smart" surfaces: the first is able to switch the orientation of an LC cell simply by photomodulation and, thanks to the robustness of the covalent bond, can be used as a command surface, overcoming the limitations due to dewetting of the active layer. The second type of surface, functionalized by a grafting-to method, can reassemble its topmost layer in response to changed environmental conditions, exposing different functionalities in different environments.
Abstract:
The aim of this thesis was to synthesize multipotent drugs for the treatment of Alzheimer's disease (AD) and of benign prostatic hyperplasia (BPH), two diseases that affect the elderly. AD is a neurodegenerative disorder characterized, among other factors, by loss of cholinergic neurons. Selective activation of M1 receptors through an allosteric site could restore cholinergic hypofunction, improving cognition in AD patients. We describe here the discovery and SAR of a novel series of quinone derivatives. Among them, 1 was the most interesting, being a highly M1-selective positive allosteric modulator. At 100 nM, 1 tripled the production of cAMP induced by oxotremorine. Moreover, it inhibited AChE and displayed antioxidant properties. Site-directed mutagenesis experiments indicated that 1 acts at an allosteric site involving residue F77. Thus, 1 is a promising drug, because M1 activation may offer disease-modifying properties that could address and reduce most AD hallmarks. BPH is an enlargement of the prostate caused by increased cellular growth. Blockade of α1-ARs is the predominant form of medical therapy for the treatment of the symptoms associated with BPH. α1-ARs are classified into three subtypes: the α1A- and α1D-AR subtypes are predominant in the prostate, while α1B-ARs regulate blood pressure. Herein, we report the synthesis of quinazoline derivatives obtained by replacing the piperazine ring of doxazosin and prazosin with (S)- or (R)-3-aminopiperidine. The presence of a chiral center at the 3-C position of the piperidine ring allowed us to probe the importance of stereochemistry in binding at α1-ARs. It turned out that the S configuration at the 3-C position of the piperidine increases the affinity of the compounds at all three α1-AR subtypes, whereas the configuration at the benzodioxole ring of the doxazosin derivatives is not critical for the interaction with α1-ARs.
Abstract:
Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and dealing with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a sizeable training set and considerable computational effort. Methods for cross-domain text categorization have been proposed that allow a set of labeled documents from one domain to be leveraged to classify those of another. Most methods use advanced statistical techniques, usually involving the tuning of parameters. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvement, but models generated from one domain are shown to be effectively reusable in a different one.
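The abstract describes the nearest-centroid method only at a high level. The sketch below illustrates the general idea under stated assumptions (tf-idf features, cosine similarity, and a fixed number of adaptation rounds are illustrative choices, not necessarily the thesis's): category profiles are built from the labeled source domain, then iteratively rebuilt from the target documents they attract.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def cross_domain_nearest_centroid(source_docs, source_labels, target_docs,
                                  rounds=5):
    """Nearest-centroid classification with iterative adaptation of the
    category profiles from the source domain to the target domain."""
    vec = TfidfVectorizer()
    X_src = vec.fit_transform(source_docs)
    X_tgt = vec.transform(target_docs)
    labels = sorted(set(source_labels))

    # Initial profiles: mean tf-idf vector of each source-domain category.
    centroids = np.vstack([
        np.asarray(X_src[[i for i, l in enumerate(source_labels) if l == c]]
                   .mean(axis=0)).ravel()
        for c in labels
    ])

    for _ in range(rounds):
        assign = cosine_similarity(X_tgt, centroids).argmax(axis=1)
        for k in range(len(labels)):
            members = np.where(assign == k)[0]
            if len(members):  # adapt profile toward its target documents
                centroids[k] = np.asarray(X_tgt[members].mean(axis=0)).ravel()

    final = cosine_similarity(X_tgt, centroids).argmax(axis=1)
    return [labels[k] for k in final]

# Example: preds = cross_domain_nearest_centroid(src_texts, src_labels, tgt_texts)
```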
Abstract:
The present work is aimed at the study and analysis of defects detected in civil structures that are the object of civil litigation, in order to create instruments capable of helping the different actors involved in the building process. It is divided into three main sections. The first part is focused on the collection of data related to the civil proceedings of 2012 and the development of in-depth analyses of the main aspects regarding defects in existing buildings. The research center "Osservatorio Claudio Ceccoli" developed a system for collecting the information coming from the civil proceedings of the Court of Bologna. Statistical analyses were performed, and the results are shown and discussed in the first chapters. The second part analyzes the main issues that emerged during the study of the real cases, related to the activities of the technical consultant. The idea is to create documents, called "focus", intended to clarify and codify specific problems, in order to develop guidelines that help the technical consultant in drafting the technical advice. The third part is centered on the evaluation of the methods used for data collection. The first results show that these are not efficient. Critical analysis of the database, of the results and of the experience gained throughout allowed improvements to be made to the data collection system.