741 results for Learning Course Model
Abstract:
Purpose: The retinal balance between pro- and anti-angiogenic factors is critical for the control of angiogenesis, but is also involved in cell survival. We previously reported upregulation of VEGF and photoreceptor (PR) cell death in the light-damage (LD) model. Preliminary results showed that anti-VEGF can rescue PRs from cell death. We therefore investigated the role of VEGF in the retina and herein describe the effect of an anti-VEGF antibody delivered by lentiviral gene transfer in this model. Methods: To characterize the action of VEGF during LD, we exposed Balb/c mice, subretinally injected with LV-anti-VEGF or not, to 5,000 lux for 1 h. We next evaluated retinal function, PR survival and protein expression (VEGF, VEGFR1/2, Src, PEDF, p38MAPK, Akt, Peripherin, SWL-opsin) after LD. We analyzed blood-retinal barrier (BRB) integrity on flat-mounted RPE and cryosections stained for β-catenin, ZO-1, N-cadherin and albumin. Results: The results indicate that the VEGF pathway is modulated after LD. LD leads to extravascular albumin leakage and BRB breakdown: β-catenin, ZO-1 and N-cadherin translocate to the cytoplasm of RPE cells, showing loss of cell cohesion. This phenomenon is consistent with the time course of VEGF expression. Assessment of retinal function reveals that PR rescue correlates with the level of LV-anti-VEGF expression. Rhodopsin content was higher in the LV-anti-VEGF group than in controls, and measurements of ONL thickness indicate that LV-anti-VEGF preserves 82% of the outer nuclear layer from degeneration. Outer segments (OS) appeared well organized and of appropriate length in the LV-anti-VEGF group compared to controls, and SWL-opsin expression is maintained in the OS without the mislocalization seen in the LV-GFP group. Finally, LV-anti-VEGF treatment prevents BRB breakdown and maintains RPE cell integrity. Conclusions: This study implicates VEGF in LD and highlights the prime importance of BRB integrity for PR survival. Taken together, these results show that anti-VEGF is neuroprotective in this model and maintains a functional PR layer in LD-treated mice.
Abstract:
The aim of this communication is to describe the results of a pilot project for the assessment of the transversal competency "the capacity for learning and responsibility". This competency is centred on the capacity for the analysis, synthesis, overview, and practical application of newly acquired knowledge. It is proposed by the University of Barcelona in its undergraduate degree courses, through multidisciplinary teaching teams. The goal of the pilot project is to evaluate this competency. We worked with a group of students in a first-year Business Degree maths course during the first semester of the 2012/2013 academic year. The project was developed in two stages: (i) the design of a specific task to share with the same students in the following semester, when the subject would be economic history; and (ii) the elaboration of an evaluation rubric in which we defined the content, the aspects to evaluate, the evaluation criteria, and the marking scale. The attainment of the expectations of quality on the specific task was scored following this rubric, which provided a single basis for precise and fair assessment by the instructor and for the students' own self-evaluation. We conclude by describing the main findings of the experience. What particularly stood out was the high score the students gave in their self-evaluation to one aspect of the competency – their capacity for learning – in stark contrast to their instructor's quite negative evaluation. This means that we have to work both to improve teaching practice and to identify the optimal methodology for evaluating competencies.
Abstract:
This article offers a review of the literature on interprofessional education (IPE), a form of education that brings together members of two or more professions in joint training. In such courses, participants learn from and about other professionals. The goal of IPE is to improve collaboration between health professionals and the quality of patient care. IPE is booming worldwide and seems far from a mere fad. This expansion can be explained by several factors: the growing importance attributed to quality of care and patient safety, changes in care needs (ageing populations and the increase in chronic diseases) and the shortage of health professionals. Expectations of IPE are high, while the evidence supporting its effectiveness is still being built.
Abstract:
Since 1995 the European Council has promoted the learning of a second language through another subject area, in what we know as CLIL (Content and Language Integrated Learning), or in other words: "an activity in which a foreign language is used as a tool for learning a non-linguistic subject, in which language and content play a joint role" (Marsh, 2002). Even so, "teaching a subject through a foreign language is not the same as integrating language and content". CLIL carries further methodological implications regarding planning, teaching strategies and, in particular, the role of the teacher. Indeed, it is these factors that determine the success or failure of a CLIL implementation, and for this reason I aim to analyse and describe the differences between a CLIL session and an English-language session. This research is a case study offering a look at the differences, in terms of planning and of the teacher's strategies and actions, between a CLIL unit and an English-language unit carried out with a Year 3 primary group at the Sant Miquel dels Sants school (Vic).
Abstract:
The Learning Affect Monitor (LAM) is a new computer-based assessment system integrating basic dimensional evaluation and discrete description of affective states in daily life, based on an autonomous adapting system. Subjects evaluate their affective states according to a tridimensional space (valence and activation circumplex as well as global intensity) and then qualify it using up to 30 adjective descriptors chosen from a list. The system gradually adapts to the user, enabling the affect descriptors it presents to be increasingly relevant. An initial study with 51 subjects, using a 1 week time-sampling with 8 to 10 randomized signals per day, produced n = 2,813 records with good reliability measures (e.g., response rate of 88.8%, mean split-half reliability of .86), user acceptance, and usability. Multilevel analyses show circadian and hebdomadal patterns, and significant individual and situational variance components of the basic dimension evaluations. Validity analyses indicate sound assignment of qualitative affect descriptors in the bidimensional semantic space according to the circumplex model of basic affect dimensions. The LAM assessment module can be implemented on different platforms (palm, desk, mobile phone) and provides very rapid and meaningful data collection, preserving complex and interindividually comparable information in the domain of emotion and well-being.
Abstract:
Both Bayesian networks and probabilistic evaluation are gaining increasingly widespread use within many professional branches, including forensic science. Nevertheless, they constitute subtle topics with definitional details that require careful study. While many sophisticated developments of probabilistic approaches to the evaluation of forensic findings may readily be found in the published literature, there remains a gap with respect to writings that focus on foundational aspects and on how these may be acquired by interested scientists new to these topics. This paper takes this as a starting point to report on learning about Bayesian networks for likelihood ratio based, probabilistic inference procedures in a class of master's students in forensic science. The presentation uses an example that relies on a casework scenario drawn from the published literature, involving a questioned signature. A complicating aspect of that case study - proposed to students in a teaching scenario - is the need to consider multiple competing propositions, a setting that may not readily be approached within a likelihood ratio based framework without drawing attention to some additional technical details. Using generic Bayesian network fragments from the existing literature on the topic, course participants were able to track the probabilistic underpinnings of the proposed scenario correctly, both in terms of likelihood ratios and of posterior probabilities. In addition, further study of the example allowed students to derive an alternative Bayesian network structure with a computational output equivalent to existing probabilistic solutions. This practical experience underlines the potential of Bayesian networks to support and clarify foundational principles of probabilistic procedures for forensic evaluation.
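The multiple-proposition setting mentioned in this abstract can be illustrated with a short, self-contained sketch. The propositions, prior probabilities and likelihood values below are purely hypothetical placeholders (they are not the figures from the signature case study); the sketch only shows how posteriors and a likelihood ratio against pooled alternatives are obtained by enumeration, which is the computation a small Bayesian network performs for a single evidence node.

```python
# Minimal sketch (hypothetical numbers): evaluating findings E against several
# competing, mutually exclusive propositions by direct enumeration.

# Hypothetical prior probabilities for three competing propositions
priors = {"H1: genuine signature": 0.50,
          "H2: simulated by another writer": 0.30,
          "H3: traced by another writer": 0.20}

# Hypothetical likelihoods P(E | H) for the observed signature features E
likelihoods = {"H1: genuine signature": 0.80,
               "H2: simulated by another writer": 0.10,
               "H3: traced by another writer": 0.05}

# Posterior probabilities P(H | E) by Bayes' theorem
joint = {h: priors[h] * likelihoods[h] for h in priors}
evidence = sum(joint.values())                      # P(E), the normalising constant
posteriors = {h: joint[h] / evidence for h in joint}

# Likelihood ratio of H1 against the pooled alternatives H2 and H3
alternatives = [h for h in priors if not h.startswith("H1")]
p_e_given_alt = (sum(priors[h] * likelihoods[h] for h in alternatives)
                 / sum(priors[h] for h in alternatives))
lr_h1_vs_rest = likelihoods["H1: genuine signature"] / p_e_given_alt

for h, p in posteriors.items():
    print(f"{h}: posterior = {p:.3f}")
print(f"LR (H1 vs. pooled alternatives) = {lr_h1_vs_rest:.1f}")
```

With these placeholder numbers the likelihood ratio for H1 against the pooled alternatives is 10, illustrating why the choice and weighting of the alternative propositions matters when more than two hypotheses are in play.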
Abstract:
Recent multisensory research has emphasized the occurrence of early, low-level interactions in humans. As such, it is proving increasingly necessary to also consider the kinds of information likely extracted from the unisensory signals that are available at the time and location of these interaction effects. This review addresses current evidence regarding how the spatio-temporal brain dynamics of auditory information processing likely curtails the information content of multisensory interactions observable in humans at a given latency and within a given brain region. First, we consider the time course of signal propagation as a limitation on when auditory information (of any kind) can impact the responsiveness of a given brain region. Next, we overview the dual pathway model for the treatment of auditory spatial and object information ranging from rudimentary to complex environmental stimuli. These dual pathways are considered an intrinsic feature of auditory information processing, which are not only partially distinct in their associated brain networks, but also (and perhaps more importantly) manifest only after several tens of milliseconds of cortical signal processing. This architecture of auditory functioning would thus pose a constraint on when and in which brain regions specific spatial and object information are available for multisensory interactions. We then separately consider evidence regarding mechanisms and dynamics of spatial and object processing with a particular emphasis on when discriminations along either dimension are likely performed by specific brain regions. We conclude by discussing open issues and directions for future research.
Abstract:
BACKGROUND: The clinical course of HIV-1 infection is highly variable among individuals, at least in part as a result of genetic polymorphisms in the host. Toll-like receptors (TLRs) have a key role in innate immunity, and mutations in the genes encoding these receptors have been associated with increased or decreased susceptibility to infections. OBJECTIVES: To determine whether single-nucleotide polymorphisms (SNPs) in TLR2-4 and TLR7-9 influenced the natural course of HIV-1 infection. METHODS: Twenty-eight SNPs in TLRs were analysed in HAART-naive HIV-positive patients from the Swiss HIV Cohort Study. The SNPs were detected using Sequenom technology. Haplotypes were inferred using an expectation-maximization algorithm. The CD4 T cell decline was calculated using a least-squares regression. Patients with a rapid CD4 cell decline, less than the 15th percentile, were defined as rapid progressors. The risk of rapid progression associated with SNPs was estimated using a logistic regression model. Other candidate risk factors included age, sex and risk groups (heterosexual, homosexual and intravenous drug use). RESULTS: Two SNPs in TLR9 (1635A/G and +1174G/A) in linkage disequilibrium were associated with the rapid progressor phenotype: for 1635A/G, odds ratio (OR) 3.9 [95% confidence interval (CI), 1.7-9.2] for GA versus AA and OR 4.7 (95% CI, 1.9-12.0) for GG versus AA (P = 0.0008). CONCLUSION: Rapid progression of HIV-1 infection was associated with TLR9 polymorphisms. Because of its potential implications for intervention strategies and vaccine development, additional epidemiological and experimental studies are needed to confirm this association.
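The analysis outline in the methods (per-patient CD4 slope by least-squares regression, a 15th-percentile cutoff for rapid progression, then logistic regression on genotype) can be sketched as follows. All data here are simulated and the additive 0/1/2 genotype coding is a simplification of the genotype-group comparison used in the study; this is a sketch of the general workflow, not the study's analysis code.

```python
# Minimal sketch with simulated data (not the Swiss HIV Cohort Study data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_patients = 300

# Simulated follow-up: CD4 counts at yearly visits; slope via least squares
slopes = []
for _ in range(n_patients):
    t = np.arange(0, 5)                                   # years of follow-up
    cd4 = 600 + rng.normal(-40, 30) * t + rng.normal(0, 25, size=t.size)
    slope, _intercept = np.polyfit(t, cd4, deg=1)         # least-squares CD4 slope
    slopes.append(slope)
slopes = np.array(slopes)

# Rapid progressors: CD4 decline steeper than the 15th percentile of slopes
rapid = (slopes < np.percentile(slopes, 15)).astype(int)

# Hypothetical SNP genotypes coded 0/1/2 (number of minor alleles) plus covariates
genotype = rng.choice([0, 1, 2], size=n_patients, p=[0.6, 0.3, 0.1])
age = rng.normal(38, 10, size=n_patients)
sex = rng.choice([0, 1], size=n_patients)

# Logistic regression of rapid progression on genotype, adjusting for covariates
X = sm.add_constant(np.column_stack([genotype, age, sex]))
model = sm.Logit(rapid, X).fit(disp=False)
odds_ratios = np.exp(model.params[1:])                    # OR per predictor
print("Odds ratios (genotype, age, sex):", odds_ratios)
```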
Abstract:
Quality of life has been extensively discussed in acute and chronic illnesses. However, a dynamic model grounded in the experience of patients over the course of transplantation has not, to our knowledge, been developed. In a qualitative longitudinal study, patients awaiting solid organ transplantation took part in semi-structured interviews exploring topics pre-selected on the basis of a review of the previous research literature. A creative interview style was favoured, open to the themes patients wished to discuss at the different steps of the transplantation process. A qualitative thematic and reflexive analysis was performed, and a model of the dimensions constitutive of quality of life from the perspective of the patients was elaborated. Quality of life is not a stable construct over a long-lasting illness course, but evolves with illness constraints, treatments and outcomes. The dimensions constitutive of quality of life are defined, each containing different sub-categories depending on the organ-related illness, co-morbidities and the stage of the illness course.
Abstract:
An Adobe® animation is presented for use in undergraduate Biochemistry courses, illustrating the mechanism of Na+ and K+ translocation coupled to ATP hydrolysis by the (Na,K)-ATPase, a P2c-type ATPase, or ATP-powered ion pump, that actively translocates cations across plasma membranes. The enzyme is also known as an E1/E2-ATPase, as it undergoes conformational changes between the E1 and E2 forms during the pumping cycle, altering the affinity and accessibility of the transmembrane ion-binding sites. The animation is based on Horisberger's scheme, which incorporates the most recent significant findings to have improved our understanding of the (Na,K)-ATPase structure-function relationship. The movements of the various domains within the (Na,K)-ATPase alpha-subunit illustrate the conformational changes that occur during Na+ and K+ translocation across the membrane and emphasize the involvement of the actuator, nucleotide, and phosphorylation domains, that is, the "core engine" of the pump, with respect to ATP binding, cation transport, and ADP and Pi release.
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies

This thesis is devoted to the analysis, modelling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modelling tools. They can solve classification, regression and probability density modelling problems in high-dimensional geo-feature spaces, composed of geographical coordinates and additional relevant spatially referenced features ("geo-features"). They are well suited to implementation as predictive engines in decision-support systems for environmental data mining, covering pattern recognition, modelling and prediction as well as automatic data mapping. Their efficiency is comparable to that of geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces.

The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from the theoretical description of the concepts to their software implementation. The main algorithms and models considered are the multilayer perceptron (MLP), a workhorse of machine learning, general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation.

Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis, exploratory spatial data analysis (ESDA) is considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations; it helps to detect the presence of spatial patterns that can be described, at least, by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualisation properties. An important part of the thesis deals with a current hot topic, the automatic mapping of geospatial data. The general regression neural network is proposed as an efficient model for this task. The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions.

The thesis consists of four chapters: theory, applications, software tools and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, the classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were also developed, with a user-friendly and easy-to-use interface.
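The automatic-mapping idea behind the GRNN can be illustrated with a short sketch: a GRNN is, in essence, Nadaraya-Watson kernel regression with a single smoothing parameter. The data below are synthetic and the leave-one-out tuning loop is a simplification; this is not the thesis code or the SIC 2004 data, only a sketch of the technique under those assumptions.

```python
# Minimal GRNN (kernel regression) sketch for spatial interpolation on synthetic data.
import numpy as np

def grnn_predict(train_xy, train_z, query_xy, sigma):
    """Predict values at query_xy as a Gaussian-kernel weighted average of train_z."""
    # Squared Euclidean distances between every query point and every training point
    d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # kernel weights
    return (w @ train_z) / w.sum(axis=1)          # weighted average per query point

rng = np.random.default_rng(42)
train_xy = rng.uniform(0, 100, size=(200, 2))                        # sampling locations
train_z = np.sin(train_xy[:, 0] / 15) + 0.1 * rng.normal(size=200)   # measured field

# Leave-one-out tuning of the single smoothing parameter sigma
best_sigma, best_err = None, np.inf
for sigma in [1, 2, 5, 10, 20]:
    errs = []
    for i in range(len(train_z)):
        mask = np.arange(len(train_z)) != i
        pred = grnn_predict(train_xy[mask], train_z[mask], train_xy[i:i + 1], sigma)
        errs.append((pred[0] - train_z[i]) ** 2)
    if np.mean(errs) < best_err:
        best_sigma, best_err = sigma, np.mean(errs)

# Automatic mapping: predict on a regular grid with the tuned sigma
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
grid_z = grnn_predict(train_xy, train_z, grid_xy, best_sigma)
print(f"sigma = {best_sigma}, LOO MSE = {best_err:.4f}, grid mean = {grid_z.mean():.3f}")
```

The single smoothing parameter is what makes this family of models attractive for automatic mapping: tuning reduces to a one-dimensional search, which is well suited to emergency situations where no analyst intervention is possible.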
Abstract:
In order to understand the development of non-genetically encoded actions during an animal's lifespan, it is necessary to analyze the dynamics and evolution of learning rules producing behavior. Owing to the intrinsic stochastic and frequency-dependent nature of learning dynamics, these rules are often studied in evolutionary biology via agent-based computer simulations. In this paper, we show that stochastic approximation theory can help to qualitatively understand learning dynamics and formulate analytical models for the evolution of learning rules. We consider a population of individuals repeatedly interacting during their lifespan, and where the stage game faced by the individuals fluctuates according to an environmental stochastic process. Individuals adjust their behavioral actions according to learning rules belonging to the class of experience-weighted attraction learning mechanisms, which includes standard reinforcement and Bayesian learning as special cases. We use stochastic approximation theory in order to derive differential equations governing action play probabilities, which turn out to have qualitative features of mutator-selection equations. We then perform agent-based simulations to find the conditions where the deterministic approximation is closest to the original stochastic learning process for standard 2-action 2-player fluctuating games, where interaction between learning rules and preference reversal may occur. Finally, we analyze a simplified model for the evolution of learning in a producer-scrounger game, which shows that the exploration rate can interact in a non-intuitive way with other features of co-evolving learning rules. Overall, our analyses illustrate the usefulness of applying stochastic approximation theory in the study of animal learning.
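For readers unfamiliar with the experience-weighted attraction (EWA) family mentioned in this abstract, the following sketch shows the standard EWA update for a single learner in a 2-action game whose payoff matrix fluctuates over time. The parameter values, payoff matrices and the uniformly random opponent are illustrative assumptions, not the setup analysed in the paper.

```python
# Minimal sketch of an experience-weighted attraction (EWA) learner in a
# fluctuating 2-action game; illustrative parameters only.
import numpy as np

rng = np.random.default_rng(1)

# Two payoff matrices between which the environment fluctuates (row-player payoffs)
payoffs = [np.array([[3.0, 0.0], [5.0, 1.0]]),
           np.array([[1.0, 4.0], [0.0, 2.0]])]

phi, delta, kappa, lam = 0.9, 0.5, 0.0, 2.0   # decay, imagination, growth, choice intensity
A = np.zeros(2)                                # attractions of the two actions
N = 1.0                                        # experience weight

def choice_probs(attractions, lam):
    """Logit (softmax) choice rule over attractions."""
    e = np.exp(lam * (attractions - attractions.max()))
    return e / e.sum()

for t in range(500):
    game = payoffs[0] if rng.random() < 0.7 else payoffs[1]   # environmental fluctuation
    own = rng.choice(2, p=choice_probs(A, lam))               # learner's action
    other = rng.choice(2)                                     # opponent plays uniformly here
    # EWA update: realized payoff for the chosen action, foregone payoff
    # (down-weighted by delta) for the unchosen action
    N_new = phi * (1 - kappa) * N + 1.0
    for a in range(2):
        weight = 1.0 if a == own else delta
        A[a] = (phi * N * A[a] + weight * game[a, other]) / N_new
    N = N_new

print("Final attractions:", A, "choice probabilities:", choice_probs(A, lam))
```

Standard reinforcement learning is recovered with delta = 0 (only realized payoffs matter), while delta = 1 weights foregone payoffs fully, which is the sense in which EWA nests several classical learning rules as special cases.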
Abstract:
The core course "Evaluación Psicológica" (Psychological Assessment) in the Psychology programme and in the undergraduate degree "Desarrollo humano en la sociedad de la información" (Human Development in the Information Society) at the Universidad de Girona comprises 12 credits under the Ley Orgánica de Universidades. Until the 2004-05 academic year, the student's non-contact work consisted of carrying out a psychological assessment that was handed in in writing at the end of the course, for which the student received a mark and, on request, a review. On the path towards the European Higher Education Area, the course now comprises 9 credits, equivalent to a total of 255 hours of contact and non-contact student work. In the 2005-06 and 2006-07 academic years, a work guide was created to manage the non-contact activity, with the aim of achieving learning at the level of application and problem solving/critical thinking (Bloom, 1975), following the recommendations of the Agencia para la Calidad del Sistema Universitario de Cataluña (2005). The guide includes the learning objectives, the assessment criteria, the description of the activities, the weekly schedule of work for the whole course, the specification of the scheduled tutorials for reviewing the various steps of the psychological assessment process, and the use of the forum for getting to know, analysing and constructively critiquing the assessments carried out by classmates.
Abstract:
The traditional model of learning based on knowledge transfer does not promote the acquisition of information-related competencies or the development of autonomous learning. More needs to be done to embrace learner-centred approaches based on constructivism, collaboration and co-operation. This new learning paradigm is aligned with the requirements of the European Higher Education Area (EHEA). In this sense, a learning experience based on faculty-librarian collaboration was seen as the best option for promoting student engagement, and also as a way to strengthen information-related competencies, in the academic context of the Open University of Catalonia (UOC). This case study outlines the benefits of teacher-librarian collaboration in terms of pedagogical innovation, resource management, the introduction of open educational resources (OER) into virtual classrooms, information literacy (IL) training and the use of 2.0 tools in teaching. Our faculty-librarian collaboration aims to provide an example of technology-enhanced learning and to demonstrate how working together improves the quality and relevance of educational resources in the UOC's virtual classrooms. Under this new approach, while teachers change their role from instructors to facilitators of the learning process and extend their reach to students, libraries acquire an important presence in academic learning communities.
Abstract:
This article reports on a project at the Universitat Oberta de Catalunya (UOC: The Open University of Catalonia, Barcelona) to develop an innovative package of hypermedia-based learning materials for a new course entitled 'Current Issues in Marketing'. The UOC is a distance university entirely based on a virtual campus. The learning materials project was undertaken in order to benefit from the advantages which new communication technologies offer to the teaching of marketing in distance education. The article reviews the main issues involved in incorporating new technologies in learning materials, the development of the learning materials, and their functioning within the hypermedia-based virtual campus of the UOC. An empirical study is then carried out in order to evaluate the attitudes of students to the project. Finally, suggestions for improving similar projects in the future are put forward.