76 results for Teaching with Projects
Abstract:
The present study was carried out to determine whether classic osteometric parameters can be derived from the 3D reconstructions of MSCT (multislice computed tomography) scans acquired in the context of the Virtopsy project. To this end, four isolated and macerated skulls were examined by six examiners. First the skulls were conventionally (manually) measured using 32 internationally accepted linear measurements. Then the skulls were scanned by MSCT with slice thicknesses of 1.25 mm and 0.63 mm, and the same measurements were determined virtually on the digital 3D reconstructions of the skulls. The results of the traditional and the digital measurements were compared for each examiner to identify variations. Furthermore, several parameters were measured on the cranium and postcranium during an autopsy and compared with the values that had been measured on a 3D reconstruction from a previously acquired postmortem MSCT scan. The results indicate that osteometric values obtained from digital 3D reconstructions of MSCT scans with a slice thickness of 1.25 mm are equivalent to those obtained by conventional manual examination. The measurements taken from a corpse during an autopsy could also be validated with the methods used for the digital 3D reconstructions in the context of the Virtopsy project. Future aims are the assessment and biostatistical evaluation with respect to sex, age and stature of all data sets stored in the Virtopsy project so far, as well as of future data sets. Furthermore, defining new parameters that are measurable only with the aid of MSCT data would be conceivable.
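Each linear craniometric measurement of the kind described above reduces, on a digital 3D reconstruction, to the Euclidean distance between two landmark points picked on the surface. A minimal sketch in Python; the landmark names and coordinates below are hypothetical values for illustration, not data from the study:

```python
import math

def interlandmark_distance(p, q):
    """Euclidean distance between two 3D landmark coordinates (in mm)."""
    return math.dist(p, q)

# Hypothetical landmark coordinates picked on a 3D skull reconstruction (mm).
glabella = (102.4, 35.1, 210.8)
opisthocranion = (98.0, 215.6, 205.2)

# Maximum cranial length (g-op), one of the standard linear measurements.
gol = interlandmark_distance(glabella, opisthocranion)
print(round(gol, 1))
```

The same helper applies to any landmark pair, so a full set of standard measurements is just a table of landmark-pair definitions evaluated over the picked coordinates.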
Abstract:
BACKGROUND: Patient behavior accounts for half or more of the variance in health, disease, mortality, treatment outcome and costs. Counseling using motivational interviewing (MI) effectively improves patients' substance use and medical compliance behavior. Medical training should therefore include substantial focus on this key issue of health promotion. The objective of this study was to test the efficacy of teaching MI to medical students. METHODS: Thirteen fourth-year medical students volunteered to participate. Seven days before and after an 8-hour interactive MI training workshop, each student performed a video-recorded interview with two standardized patients: a 60-year-old alcohol-dependent female consulting a primary care physician for the first time about fatigue and depression symptoms, and a 50-year-old male cigarette smoker hospitalized for myocardial infarction. All 52 videos (13 students × 2 interviews, before and after training) were independently coded by two blinded clinicians using the Motivational Interviewing Treatment Integrity scale (MITI 3.0). MITI scores consist of global spirit (Evocation, Collaboration, Autonomy/Support), global Empathy and Direction, and behavior count summary scores (% Open questions, Reflection-to-question ratio, % Complex reflections, % MI-adherent behaviors). A "beginning proficiency" threshold (BPT) is defined for each of these 9 scores. The proportion of students reaching BPT before and after training was compared using McNemar exact tests. Inter-rater reliability was evaluated by comparing double coding, and test-retest analyses were conducted on a sub-sample of 10 consecutive interviews by each coder. Weighted kappas were used for global rating scales and intra-class correlations (ICC) were computed for behavior count summary scores.
RESULTS: The percentage of counselors reaching BPT before and after MI training increased significantly for Evocation (15% to 65%, p<.001), Collaboration (27% to 77%, p=.001), Autonomy/Support (15% to 54%, p=.006), and % Open questions (4% to 38%, p=.004). Proportions increased, but not statistically significantly, for Empathy (38% to 58%, p=.18), Reflection-to-question ratio (0% to 15%, p=.12), % Complex reflections (35% to 54%, p=.23), and % MI-adherent behaviors (8% to 15%, p=.69). There was virtually no change for the Direction scale (92% to 88%, p=1.00). The reliability analyses produced mixed results. Weighted kappas for inter-rater reliability ranged from .14 for Direction to .51 for Collaboration, and from .27 for Direction to .80 for Empathy for test-retest. ICCs ranged from .20 for Complex reflections to .89 for Open questions (inter-rater), and from .67 for Complex reflections to .99 for Reflection-to-question ratio (test-retest). CONCLUSION: This pilot study indicates that a single 8-hour training in motivational interviewing for volunteer fourth-year medical students results in significant improvement of some MI skills. A larger sample of randomly selected medical students observed over longer periods should be studied to test whether these training effects generalize across medical students. The inter-rater reliability and test-retest findings indicate a need for caution when interpreting the present results, as well as for more intensive coder training to capture more dimensions of the process appropriately in future studies.
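The McNemar exact test used above compares paired before/after proportions: only the discordant pairs (students who crossed the proficiency threshold in one direction or the other) carry information, and under the null hypothesis they split 50/50. A minimal stdlib sketch; the discordant counts are hypothetical, not taken from the study:

```python
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar test on paired binary outcomes.

    b: pairs that changed 0 -> 1 (gained proficiency after training)
    c: pairs that changed 1 -> 0 (lost proficiency)
    Concordant pairs do not enter the statistic. Under H0 the b+c
    discordant pairs follow Binomial(b+c, 0.5).
    """
    n = b + c
    if n == 0:
        return 1.0
    tail = sum(comb(n, k) for k in range(min(b, c) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical discordant counts for one MITI score across 26 paired
# interviews: 13 crossed the threshold after training, none regressed.
print(mcnemar_exact(13, 0))
```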
Abstract:
The purpose of this study was to evaluate the intraocular pressure (IOP)-lowering effect of modified goniopuncture with the 532-nm Nd:YAG selective laser trabeculoplasty (SLT) laser on eyes after deep sclerectomy with collagen implant (DSCI). This was an interventional case series. The effects of modified goniopuncture on eyes with insufficient IOP-lowering after DSCI were observed. Goniopuncture was performed using a Q-switched, frequency-doubled 532-nm Nd:YAG laser (SLT-goniopuncture, SLT-G). Outcome measures were the amount of IOP-lowering and the rapidity of the decrease after laser intervention. In all, 10 eyes of 10 patients with a mean age of 71.0±7.7 (SD) years were treated with SLT-G. The mean time of SLT-G after the DSCI procedure was 7.1±10.9 months. SLT-G decreased IOP from an average of 16.1±3.4 mm Hg to 14.2±2.8 mm Hg (after 15 min), 13.6±3.9 mm Hg (at 1 day), 12.5±4.1 mm Hg (at 1 month), and 12.6±2.5 mm Hg (at 6 months) (P<0.0125). There were no complications related to the intervention. Patients in this series achieved an average IOP reduction of 22.5% after SLT-G. The SLT laser appears to be an effective and safe alternative to the traditional Nd:YAG laser for goniopuncture in eyes after DSCI, with potential advantages related to non-perforation of the trabeculo-Descemet's membrane (TDM).
Abstract:
Rationale: Although associated with adverse outcomes in other cardiopulmonary conditions, the prognostic value of hyponatremia, a marker of neurohormonal activation, in patients with acute pulmonary embolism (PE) is unknown. Objectives: To examine the associations between hyponatremia and mortality and hospital readmission rates for patients hospitalized with PE. Methods: We evaluated 13,728 patient discharges with a primary diagnosis of PE from 185 hospitals in Pennsylvania (January 2000 to November 2002). We used random-intercept logistic regression to assess the independent association between serum sodium levels at the time of presentation and mortality and hospital readmission within 30 days, adjusting for patient factors (race, insurance, severity of illness, use of thrombolytic therapy) and hospital factors (region, size, teaching status). Measurements and Main Results: Hyponatremia (sodium ≤135 mmol/L) was present in 2,907 patients (21.1%). Patients with a sodium level greater than 135, 130-135, and less than 130 mmol/L had a cumulative 30-day mortality of 8.0, 13.6, and 28.5% (P < 0.001), and a readmission rate of 11.8, 15.6, and 19.3% (P < 0.001), respectively. Compared with patients with a sodium greater than 135 mmol/L, the adjusted odds of dying were significantly greater for patients with a sodium 130-135 mmol/L (odds ratio [OR], 1.53; 95% confidence interval [CI], 1.33-1.76) and a sodium less than 130 mmol/L (OR, 3.26; 95% CI, 2.48-4.29). The adjusted odds of readmission were also increased for patients with a sodium of 130-135 mmol/L (OR, 1.28; 95% CI, 1.12-1.46) and a sodium less than 130 mmol/L (OR, 1.44; 95% CI, 1.02-2.02). Conclusions: Hyponatremia is common in patients presenting with PE, and is an independent predictor of short-term mortality and hospital readmission.
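An odds ratio with its 95% confidence interval can be illustrated on a plain 2x2 table using the usual Wald method on the log scale. The counts below are hypothetical, back-calculated only to roughly match the reported crude mortality rates (28.5% vs 8.0%); the odds ratios in the study are additionally adjusted for patient and hospital factors via random-intercept logistic regression, so a crude OR like this one will differ:

```python
from math import log, exp, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table.

    a: deaths among exposed (e.g. sodium < 130 mmol/L)
    b: survivors among exposed
    c: deaths among unexposed (sodium > 135 mmol/L)
    d: survivors among unexposed
    """
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts consistent with the reported crude mortality rates:
# 57/200 = 28.5% in the exposed group, 866/10821 = 8.0% in the reference group.
print(odds_ratio_ci(57, 143, 866, 9955))
```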
Abstract:
BACKGROUND: Home hospital is advocated in many western countries in spite of limited evidence of its economic advantage over usual hospital care. Heart failure and community-acquired pneumonia are two medical conditions frequently targeted by home hospital programs. While recent trials were devoted to comparisons of safety and costs, the acceptance of home hospital for patients with these conditions remains poorly described. OBJECTIVE: To document the medical eligibility and the final transfer decision to home hospital for patients hospitalized with a primary diagnosis of heart failure or community-acquired pneumonia. DESIGN: Longitudinal study of patients admitted to the medical ward of acute care hospitals, up to the final decision concerning their transfer. SETTING: Medical departments of one university hospital and two regional Swiss teaching hospitals. PATIENTS: All patients admitted over a 9-month period to the three settings with a primary diagnosis of heart failure (n=301) or pneumonia (n=441). MEASUREMENTS: Presence of permanent exclusion criteria on admission; final decision of (in)eligibility based on medical criteria; final decision regarding the transfer, taking into account the opinions of the family physician, the patient and informal caregivers. RESULTS: While 27.9% of heart failure and 37.6% of pneumonia patients were considered eligible from a medical point of view, acceptance of the program by family physicians, patients and informal caregivers was low, and a transfer to home hospital was ultimately chosen for just 3.8% of heart failure and 9.6% of pneumonia patients. There were no major differences between the three settings. CONCLUSIONS: For these two conditions, the potential economic advantage of home hospital over usual inpatient care is compromised by the low proportion of patients ultimately transferred.
Abstract:
This thesis is devoted to the analysis, modeling and visualization of spatially referenced environmental data using machine learning algorithms. Machine learning can be broadly considered a subfield of artificial intelligence concerned in particular with developing techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted for application to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to geographical coordinates. Moreover, they are well suited to implementation as decision-support tools for environmental questions ranging from pattern recognition to modeling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software tools for the environmental sciences.
The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organizing maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are treated both in the traditional geostatistical approach, through experimental variography, and according to machine learning principles. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and detects the presence of spatial patterns describable by a two-point statistic. The machine learning approach to ESDA is presented through the application of the k-nearest-neighbors method, which is very simple and has excellent interpretation and visualization properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, on which the GRNN significantly outperforms all other methods, particularly in emergency situations. The thesis consists of four chapters: theory, applications, software tools and guided examples. An important part of the work is a collection of software tools: Machine Learning Office. This software collection has been developed over the last 15 years and has been used for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a broad spectrum of real low- and high-dimensional geoenvironmental problems, such as air, soil and water pollution by radioactive products and heavy metals, classification of soil types and hydrogeological units, uncertainty mapping for decision support, and natural hazard (landslide, avalanche) assessment. Complementary tools for exploratory data analysis and visualization have also been developed, with care taken to create a user-friendly, easy-to-use interface. Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense machine learning can be considered as a subfield of artificial intelligence. It mainly concerns the development of techniques and algorithms that allow computers to learn from data. In this thesis machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning?
In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and prediction, as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation. The main algorithms and models considered are the following: the multilayer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organizing (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, such as experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least those described by two-point statistics.
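The two-point statistic behind experimental variography is the empirical semivariogram: for each lag distance, half the mean squared difference of values over all point pairs separated by approximately that lag. A minimal isotropic sketch on synthetic data; the toy field and all names here are illustrative, not from the thesis:

```python
import math
import random

def empirical_semivariogram(points, values, lags, tol):
    """Isotropic empirical semivariogram: for each lag h, average
    0.5 * (z_i - z_j)^2 over point pairs whose separation distance
    is within tol of h."""
    gamma = []
    for h in lags:
        acc, n = 0.0, 0
        for i in range(len(points)):
            for j in range(i + 1, len(points)):
                d = math.dist(points[i], points[j])
                if abs(d - h) <= tol:
                    acc += 0.5 * (values[i] - values[j]) ** 2
                    n += 1
        gamma.append(acc / n if n else float("nan"))
    return gamma

# Synthetic example: a smooth spatial field sampled at random locations,
# so the semivariogram should grow with lag distance.
random.seed(0)
pts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]
vals = [math.sin(x / 3) + math.cos(y / 3) for x, y in pts]
print(empirical_semivariogram(pts, vals, lags=[1, 3, 5], tol=0.5))
```

Anisotropic variography follows the same pattern with pairs additionally binned by direction; fitting a model (spherical, exponential, ...) to these points is the next geostatistical step.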
A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a highly topical subject, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN model significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for carrying out fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydrogeological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
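A GRNN, as used here for automatic mapping, is essentially Nadaraya-Watson kernel regression: the prediction at an unsampled location is a Gaussian-weighted average of all training targets, with the kernel bandwidth sigma as the single free parameter. A minimal sketch with hypothetical monitoring-station data, not data from the thesis:

```python
import math

def grnn_predict(train_x, train_y, query, sigma):
    """General Regression Neural Network (Nadaraya-Watson kernel
    regression): the prediction is a Gaussian-weighted average of the
    training targets, with bandwidth sigma as the only free parameter."""
    weights = [math.exp(-math.dist(x, query) ** 2 / (2 * sigma ** 2))
               for x in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# Toy automatic-mapping example: interpolate a measured value at an
# unsampled location from four nearby monitoring stations.
stations = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
readings = [1.0, 2.0, 2.0, 3.0]
print(grnn_predict(stations, readings, query=(0.5, 0.5), sigma=0.5))
```

In practice sigma is tuned by cross-validation: a very small sigma reproduces the nearest sample, while a very large sigma tends toward the global mean of the data.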
Abstract:
Introduction: Building online courses is a highly time-consuming task for the teachers of a single university. Universities working alone create high-quality courses but often cannot cover all pathological fields. Moreover, this often leads to duplication of contents among universities, representing a big waste of teacher time and energy. In 2011 we initiated a French university network for building mutualized online teaching pathology cases, and this network was extended in 2012 to Quebec and Switzerland. Method: Twenty French universities, University Laval in Quebec and the University of Lausanne in Switzerland are associated with this project. One e-learning Moodle platform (http://moodle.sorbonne-paris-cite.fr/) contains texts with URLs pointing toward virtual slides that are decentralized in several universities. Each university has the responsibility for its own slide scanning, slide storage and online display with virtual slide viewers. The Moodle website is hosted by PRES Sorbonne Paris Cité, and financial support for hardware has been obtained from UNF3S (http://www.unf3s.org/) and from PRES Sorbonne Paris Cité. Financial support for international fellowships has been obtained from CFQCU (http://www.cfqcu.org/). Results: The Moodle interface has been explained to pathology teachers using web-based conferences with screen sharing. The teachers then added contents such as clinical cases, self-evaluations and other media, organized in several sections by student level and pathological field. Contents can be used for online learning or for online preparation of subsequent courses in classrooms. In autumn 2013, one resident from Quebec spent 6 weeks in France and Switzerland and created original contents in inflammatory skin pathology. These contents are currently being validated by senior teachers and will be opened to pathology residents in spring 2014. All contents of the website can be accessed for free.
Most contents just require anonymous connection, but some specific fields, especially those containing pictures obtained from patients who agreed to a teaching use only, require personal identification of the students. Also, students have to register to access Moodle tests. All contents are written in French, but one case has been translated into English to illustrate this communication (http://moodle.sorbonne-pariscite.fr/mod/page/view.php?id=261) (use "login as a guest"). The Moodle test module allows many types of shared questions, making it easy to create personalized tests. Contents that are opened to students have been validated by an editorial committee composed of colleagues from the participating institutions. Conclusions: Future developments include other international fellowships, the next one being scheduled for one French resident from May to October 2014 in Quebec, with a study program centered on lung and breast pathology. It must be kept in mind that these e-learning programs depend highly on teachers' time, not only at these early steps but also later to update the contents. We believe that funding resident fellowships for developing online pathology teaching contents is a win-win situation: highly beneficial for the residents, who will improve their knowledge and way of thinking; highly beneficial for the teachers, who will worry less about access rights or image formats; and finally highly beneficial for the students, who will get courses fully adapted to their practice.
Abstract:
INTRODUCTION: Developments in technology, web-based teaching and whole slide imaging have broadened the teaching horizon in anatomic pathology. Creating online learning material including many types of media, such as radiologic images, whole slides, videos, and clinical and macroscopic photographs, is now accessible to most universities. Unfortunately, a major limiting factor in maintaining and updating the learning material is the amount of resources needed. In this perspective, a French national university network was initiated in 2011 to build joint online teaching modules consisting of clinical cases and tests. The network has since expanded internationally to Québec, Switzerland and Ivory Coast. METHOD: One of the first steps of the project was to build a learning module on inflammatory skin pathology for interns and residents in pathology and dermatology. A pathology resident from Québec spent 6 weeks in France and Switzerland to develop the contents and build the module on an e-learning Moodle platform under the supervision of two dermatopathologists. The learning module contains text, interactive clinical cases, tests with feedback, virtual slides, images and clinical photographs. For that module, the virtual slides are decentralized in 2 universities (Bordeaux and Paris 7). Each university is responsible for its own slide scanning, image storage and online display with virtual slide viewers. RESULTS: The module on inflammatory skin pathology includes more than 50 web pages with original French content, tests and clinical cases, links to over 45 virtual images, and more than 50 microscopic and clinical photographs. The whole learning module is being revised by four dermatopathologists and two senior pathologists. It will be accessible to interns and residents in the spring of 2014. The experience and knowledge gained from that work will be transferred to the next international resident, whose work will be aimed at creating lung and breast pathology learning modules.
CONCLUSION: The challenges of sustaining a project of this scope are numerous. The technical aspect of whole-slide imaging and storage needs to be developed by each university or group. The content needs to be regularly updated and its accuracy reviewed by experts in each individual domain. The learning modules also need to be promoted within the academic community to ensure maximal benefit for trainees. A collateral benefit of the project was the establishment of international partnerships between French-speaking universities and pathologists with the common goal of promoting pathology education through the use of multi-media technology including whole slide imaging.
Abstract:
PURPOSE: To explore the use of telementoring for distant teaching and training in endovascular aortic aneurysm repair (EVAR). METHODS: According to a prospectively designed study protocol, 48 patients underwent EVAR: the first 12 patients (group A) were treated at a secondary care center by an experienced interventionist, who was training the local team; a further 12 patients (group B) were operated on by the local team at their secondary center with telementoring by the experienced operator from an adjacent suite; and the last 24 patients (group C) were operated on by the local team with remote telementoring support from the experienced interventionist at a tertiary care center. Telementoring was performed using 3 video sources; images were transmitted over 4 ISDN lines. EVAR was performed using intravascular ultrasound and simultaneous fluoroscopy to obtain road mapping of the abdominal aorta and its branches, as well as for identifying the origins of the renal arteries, assessing the aortic neck, and monitoring the attachment of the stent-graft proximally and distally. RESULTS: The average duration of telementoring was 2.1 hours for the first 12 telementored patients (group B) and 1.2 hours for the remaining 24 patients (group C). There was no difference in procedural duration (127±59 minutes in group A, 120±4 minutes in group B, and 119±39 minutes in group C; p=0.94) or in the mean time spent in the ICU (26±15 hours in group A, 22±2 hours in group B, and 22±11 hours in group C; p=0.95). The length of hospital stay (11±4 days in group A, 9±4 days in group B, and 7±1 days in group C; p=0.002) differed significantly only for group C versus group A (p=0.002). Only 1 (8.3%) patient (in group A: EVAR performed by the experienced operator) required conversion to open surgery because of iliac artery rupture. This was the only conversion (and the only death) in the entire study group (1/12 in group A versus 0/36 in groups B + C, p=0.31).
CONCLUSIONS: Telementoring for EVAR is feasible and shows promising results. It may serve as a model for development of similar projects for teaching other invasive procedures in cardiovascular medicine.
Abstract:
Abstract: Western governments have spent considerable sums to facilitate the integration of information and communication technologies into education, hoping to find an economical solution to the thorny equation that could be summed up by the famous formula "do more and better with less". However, it must be acknowledged that, despite these efforts and the very clear improvement in the quality of service of the infrastructure, this goal is far from being reached. While we think it is illusory to expect and hope that technology can and will, by itself, solve the problems of teaching quality, we nevertheless believe that it can help improve learning conditions and contribute to the pedagogical reflection that every teacher should conduct before delivering their teaching. With this in mind, and convinced that distance education offers considerable advantages provided that teaching is thought of "differently", we became interested in the problem of developing this type of application, which lies at the border between didactics, cognitive science and computer science. Thus, in order to propose a realistic and simple solution to facilitate the development, updating, integration and sustainability of distance-learning applications, we got involved in concrete projects. Through our field experience we observed that (i) the quality of flexible and distance learning modules remains very disappointing, among other reasons because the added value that the use of technology can bring is, in our opinion, not sufficiently exploited, and that (ii) to succeed, any project must, besides providing a useful answer to a real need, be managed efficiently with the support of a "champion".
With the idea of proposing a project management approach adapted to the needs of flexible and distance learning (FFD), we first examined the characteristics of this type of project. We then analyzed existing project methodologies in the hope of being able to use one of them, or a suitable combination of those closest to our needs. Next, empirically and by successive iterations, we defined a pragmatic project management approach and contributed to the development of decision-support cards facilitating its implementation. We describe some of its actors, with particular emphasis on the pedagogical engineer, whom we consider one of the key success factors of our approach and whose vocation is to orchestrate it. Finally, we validated our approach a posteriori by reviewing the course of four FFD projects in which we participated and which are representative of the projects encountered in the university environment. In conclusion, we believe that implementing our approach, together with the availability of computerized decision-support cards, constitutes an important asset and should in particular make it easier to measure the real impact of technologies (i) on the evolution of teaching practices, (ii) on the organization and (iii) on teaching quality. Our approach can also serve as a springboard for establishing a quality process specific to FFD. Further research on the real flexibilization of learning and on the benefits of technologies for learners can then be conducted on the basis of metrics that remain to be defined.
Abstract: Western countries have spent substantial amounts of money to facilitate the integration of Information and Communication Technologies (ICT) into education, hoping to solve the thorny equation summarized by the famous statement "do more and better with less". Despite these efforts, and notwithstanding real improvements due to the undeniable betterment of infrastructure and quality of service, this goal is far from being reached. Although we think it illusory to expect technology, all by itself, to solve our economic and educational problems, we firmly take the view that it can contribute greatly, not only to improving learning conditions but also to rethinking the pedagogical approach. Every member of our community could hence take advantage of this opportunity to reflect upon his or her strategy. In this framework, and convinced that integrating ICT into education opens a number of very interesting avenues provided we think about teaching "out of the box", we became interested in courseware development positioned at the intersection of didactics and pedagogical sciences, cognitive sciences and computing. Hence, hoping to bring a realistic and simple solution that could help develop, update, integrate and sustain courseware, we got involved in concrete projects. As we gained field experience we noticed that (i) the quality of courseware is still disappointing, among other reasons because the added value that technology can bring is not exploited as much as it could or should be; and (ii) a project requires, besides bringing a useful answer to a real problem, to be efficiently managed and to be "championed". With the aim of proposing a pragmatic and practical project management approach, we first looked into the characteristics of open and distance learning projects. We then analyzed existing methodologies in the hope of being able to use one of them, or a suitable combination, to best fit our needs.
In an empirical manner, proceeding by successive iterations and refinements, we defined a simple methodology and contributed to building descriptive "cards" attached to each of its phases to support decision making. We describe the different actors involved in the process, focusing in particular on the pedagogical engineer, viewed as an orchestra conductor, whom we consider critical to the success of our approach. Last but not least, we validated our methodology a posteriori by reviewing four of the projects we participated in, which we consider emblematic of university reality. We believe that implementing our methodology, along with making computerized decision-aid cards available to project managers, could constitute a great asset and help measure the technologies' real impact on (i) the evolution of teaching practices, (ii) the organization and (iii) the quality of pedagogical approaches. Our methodology could hence be of use in putting in place a quality assessment specific to open and distance learning. Research on the contribution of technologies to learning adaptability and flexibility could then rely on adequate metrics, which remain to be defined.
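The decision-aid "cards" attached to each project phase are essentially small structured records. As a purely illustrative sketch (the field names and phase labels below are assumptions, not the abstract's actual card layout), such a card could be modeled as:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionCard:
    """One decision-aid 'card' attached to a project phase (hypothetical schema)."""
    phase: str                                    # e.g. "analysis", "design", "deployment"
    question: str                                 # the decision the team must take
    criteria: list = field(default_factory=list)  # points to weigh before deciding
    decided: bool = False
    outcome: str = ""

    def decide(self, outcome: str) -> None:
        # Record the decision so the project log stays auditable.
        self.decided = True
        self.outcome = outcome

# Usage: a card for the analysis phase of a courseware project.
card = DecisionCard(
    phase="analysis",
    question="Is an existing courseware platform reusable for this course?",
    criteria=["licensing", "hosting constraints", "authoring skills available"],
)
card.decide("reuse institutional LMS")
```

A computerized version of such records would make it straightforward to trace, across projects, which decisions were taken at which phase and why.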
Resumo:
INTRODUCTION: A large proportion of visits to our Emergency Department (ED) are for non-life-threatening conditions. We investigated whether patients' characteristics and reasons for consultation had changed over 13 years. METHODS: Consecutive adult patients presenting with non-life-threatening conditions at triage were included in the spring of 2000 and in the summer of 2013. In both years patients completed a similar questionnaire, which addressed their reasons for consultation and any previous consultation with a general practitioner (GP). RESULTS: We included 581 patients in 2013 vs 516 in 2000, with a mean age of 44.5 years vs 46.4 years (p=0.128). Of these patients, 54.0% vs 57.0% were male (p=0.329), 55.5% vs 58.7% were Swiss (p=0.282), and 76.4% were registered with a GP in both periods, but self-referral increased from 52.0% to 68.8% (p<0.001); 57.7% vs 58.3% consulted out of hours (p=0.821). Trauma-related visits decreased from 34.2% to 23.7% (p<0.001). Consultations within 12 hours of symptom onset dropped from 54.5% to 30.9%, and delays of ≥1 week increased from 14.3% to 26.9% (p<0.001). The primary motive for self-referral remained unawareness of an alternative, followed in 2013 by dissatisfaction with the GP's treatment or appointment. Patients who believed that their health problem would not require hospitalisation increased from 52.8% to 74.2%, and those who were actually hospitalised decreased from 24.9% to 13.9% (all p<0.001). CONCLUSION: The number of non-life-threatening consultations continues to increase. A large proportion of patients use our ED as a convenient alternative source of primary care.
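Before/after comparisons of this kind can be checked with a standard two-proportion z-test. The counts below are approximations reconstructed from the reported percentages (52.0% self-referral among 516 patients in 2000 vs 68.8% among 581 in 2013), not the study's raw data:

```python
from math import sqrt, erfc

def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    """Two-sided two-proportion z-test with pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail of the standard normal
    return z, p_value

# Approximate counts: 52.0% of 516 in 2000, 68.8% of 581 in 2013.
z, p = two_proportion_z(x1=268, n1=516, x2=400, n2=581)
print(f"z = {z:.2f}, p = {p:.1e}")  # p < 0.001, consistent with the reported result
```

The same function reproduces the non-significant comparisons as well (e.g. the out-of-hours proportions yield p far above 0.05).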
Resumo:
Abstract: The impact of Alzheimer's disease (AD) is devastating for the daily life of affected patients, with progressive loss of memory and other cognitive skills up to dementia. We still lack a disease-modifying treatment, and there is also great uncertainty regarding the accuracy of diagnostic classification in the early stages of AD. The anatomical signature of AD, in particular the medial temporal lobe (MTL) atrophy measured with neuroimaging, can be used as an early in vivo biomarker of the early stages of AD. However, despite the evident role of the MTL in memory, we know that predictive anatomical models based only on measures of brain atrophy in the MTL do not explain all clinical cases. Throughout my thesis, I conducted three projects to understand the anatomy and functioning of the MTL in (1) disease progression, (2) memory processes and (3) learning processes. I was interested in a population with mild cognitive impairment (MCI), at risk for AD. The objective of the first project was to test the hypothesis that factors other than cognitive ones, such as personality traits, can explain inter-individual differences in the MTL. Moreover, the phenotypic diversity in the manifestations of preclinical AD arises also from the limited knowledge of memory and learning processes in the healthy brain. The objective of the second project concerns the investigation of the sub-regions of the MTL, and more particularly their contributions to the different components of recognition memory in healthy subjects. To study this, I used a new multivariate method as well as high-resolution MRI to test the contribution of those sub-regions to the processes of familiarity and recollection. Finally, the objective of the third project was to test the contribution of the MTL as a memory system in learning, and the dynamic interaction between different memory systems during learning. The results of the first project show that, beyond the cognitive impairment observed in the MCI population, personality traits can explain the inter-individual differences in the MTL, notably with a higher contribution of neuroticism, linked to proneness to stress and depression. My study identified a pattern of anatomical abnormality in the MTL related to personality, using measures of volume and mean diffusivity of the tissue. That pattern is characterized by a right-left asymmetry of the MTL and an anterior-to-posterior gradient within the MTL. I interpreted this result in terms of tissue and neurochemical properties differently sensitive to stress. The results of my second project contributed to the current debate on the contribution of MTL sub-regions to the processes of familiarity and recollection. Using a new multivariate method, the results first support a dissociation of the sub-regions associated with different memory components: the hippocampus was mostly associated with recollection, and the surrounding parahippocampal cortex with familiarity. Secondly, the activation corresponding to the memory trace for each type of memory is characterized by a distinct spatial distribution. The specific "sparse-distributed" neuronal representation associated with recollection in the hippocampus would be the best way to rapidly encode detailed memories without overwriting previously stored ones. In the third project, I designed a functional MRI learning task to study the learning of probabilistic associations based on feedback/reward. That study allowed me to highlight the role of the MTL in learning and the interaction between different memory systems such as procedural memory, perceptual memory or priming, and working memory. We found activations in the MTL corresponding to episodic memory processes; in the basal ganglia (BG), to procedural memory and reward; in the occipito-temporal (OT) cortex, to perceptual memory or priming; and in the prefrontal cortex, to working memory. We also observed that those regions can interact; the relation between the MTL and the BG was interpreted as a competition, as already reported in recent studies. In addition, with a dynamic causal model, I demonstrated a "top-down" influence from cortical regions associated with higher-level processing, such as the prefrontal cortex, on lower-level cortical regions such as the OT cortex. That influence decreases during learning, which could correspond to a mechanism of diminishing prediction error; my interpretation is that this is at the origin of semantic knowledge. I also showed that the subjects' choices and the associated brain activation are influenced by personality traits and negative affective states. Overall, the results of this thesis led me to propose (1) a model explaining the possible mechanisms linking the influence of personality to the MTL in an MCI population, (2) a dissociation of MTL sub-regions across memory types, with a neuronal representation specific to each region, which could help resolve the current debates on recognition memory, and (3) the MTL as a memory system involved in learning that can interact with the BG through competition, with a dynamic "top-down" and "bottom-up" interaction between the prefrontal cortex and the OT cortex. In conclusion, these results could provide cues to better understand some memory dysfunctions in aging and Alzheimer's disease, and to improve the development of treatments.
Resumo:
Characterizing geological features and structures in three dimensions over inaccessible rock cliffs is needed to assess natural hazards such as rockfalls and rockslides, and also to perform investigations aimed at mapping geological contacts and building stratigraphic and fold models. Indeed, detailed 3D data such as LiDAR point clouds allow accurate study of hazard processes and of the structure of geological features, in particular in vertical and overhanging rock slopes. Thus, 3D geological models have great potential for application to a wide range of geological investigations, both in research and in applied projects such as mines, tunnels and reservoirs. Recent developments in ground-based remote sensing techniques (LiDAR, photogrammetry and multispectral/hyperspectral imaging) are revolutionizing the acquisition of morphological and geological information. As a consequence, there is great potential for improving the modeling of geological bodies, as well as of failure mechanisms and stability conditions, by integrating detailed remotely sensed data. During the past ten years several large rockfall events occurred along important transportation corridors where millions of people travel every year (Switzerland: Gotthard motorway and railway; Canada: Sea to Sky Highway between Vancouver and Whistler). These events show that there is still a lack of knowledge concerning the detection of potential rockfalls, leaving mountain settlements and roads exposed to high risk. It is necessary to understand the main factors that destabilize rocky outcrops, even where inventories are lacking and no clear morphological evidence of rockfall activity is observed. In order to increase the possibilities of forecasting potential future landslides, it is crucial to understand the evolution of rock slope stability.
Defining the areas theoretically most prone to rockfalls can be particularly useful for simulating trajectory profiles and generating hazard maps, which are the basis for land-use planning in mountainous regions. The most important questions to address in order to assess rockfall hazard are: Where are the most probable sources of future rockfalls located? What are their frequencies of occurrence? I characterized the fracturing patterns in the field and in LiDAR point clouds. I then developed a model to compute failure mechanisms on terrestrial point clouds in order to assess rockfall susceptibility at the cliff scale. Similar procedures were already available to evaluate rockfall susceptibility based on aerial digital elevation models; this new model makes it possible to detect the most susceptible rockfall sources with unprecedented detail in vertical and overhanging areas. The most probable rockfall source areas computed for the granitic cliffs of Yosemite Valley and the Mont-Blanc massif were then compared with inventoried rockfall events to validate the calculation methods. Yosemite Valley was chosen as a test area because it has particularly strong rockfall activity (about one rockfall every week), which leads to a high rockfall hazard. The west face of the Dru was also chosen for its relevant rockfall activity, and especially because it was affected by some of the largest rockfalls that occurred in the Alps during the last ten years. Moreover, both areas were suitable because of their huge vertical and overhanging cliffs, which are difficult to study with classical methods. Limit equilibrium models were applied to several case studies to evaluate the effects of different parameters on the stability of rock slope areas. The impact of the degradation of rock bridges on the stability of large compartments in the west face of the Dru was assessed using finite element modeling.
In particular, I conducted a back-analysis of the large rockfall event of 2005 (265,000 m3), which removed the entire south-west pillar, by integrating field observations of joint conditions, characteristics of the fracturing pattern and results of geomechanical tests on the intact rock. These analyses improved our understanding of the factors that influence the stability of rock compartments and were used to define the most probable future rockfall volumes at the Dru. Terrestrial laser scanning (TLS) point clouds were also successfully employed to perform geological mapping in 3D, using the intensity of the backscattered signal. Another technique for obtaining vertical geological maps is to combine a triangulated TLS mesh with 2D geological maps. At El Capitan (Yosemite Valley) we built a georeferenced vertical map of the main plutonic rocks that was used to investigate the reasons for the preferential rockwall retreat rate. Additional efforts to characterize the erosion rate were made at Monte Generoso (Ticino, southern Switzerland), where I attempted to improve the estimation of long-term erosion by also taking into account the volumes of unstable rock compartments. Eventually, the following points summarize the main outputs of my research: the new model for computing failure mechanisms and rockfall susceptibility on 3D point clouds accurately defines the most probable rockfall source areas at the cliff scale; the analysis of the rock bridges at the Dru shows the potential of integrating detailed fracture measurements into geomechanical models of rock mass stability; and the correction of the LiDAR intensity signal makes it possible to classify a point cloud according to rock type and to use this information to model complex geological structures. The integration of these results on rock mass fracturing and composition with existing methods can improve rockfall hazard assessments and enhance the interpretation of the evolution of steep rock slopes.
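A building block of any failure-mechanism computation on point clouds is estimating the orientation of local rock faces from the points themselves. The sketch below fits a plane to a patch of points by PCA and returns its dip angle; it is a minimal illustration only, and the thesis's actual model is necessarily more elaborate (joint-set classification, kinematic tests, etc.):

```python
import numpy as np

def plane_orientation(points: np.ndarray):
    """Fit a plane to an (n, 3) point patch by PCA.

    Returns (unit normal, dip in degrees). The normal is the eigenvector of
    the covariance matrix with the smallest eigenvalue; the dip is the angle
    between the fitted plane and the horizontal.
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    normal = eigvecs[:, 0]                        # direction of least variance
    cos_dip = np.clip(abs(normal[2]), 0.0, 1.0)   # guard against rounding > 1
    dip = np.degrees(np.arccos(cos_dip))          # 0 deg = flat, 90 deg = vertical
    return normal, dip

# Synthetic patch dipping 60 degrees: z = tan(60 deg) * x
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(200, 2))
pts = np.column_stack([xy[:, 0], xy[:, 1], np.tan(np.radians(60)) * xy[:, 0]])
_, dip = plane_orientation(pts)
print(round(dip, 1))  # ≈ 60.0
```

With normals consistently oriented outward from the rock mass, a downward-pointing normal flags an overhanging patch, which is exactly the geometry that aerial digital elevation models cannot represent.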
Resumo:
BACKGROUND: In 2007, a first survey of undergraduate palliative care teaching in Switzerland revealed major heterogeneity in palliative care content, allocated hours and distribution throughout the six-year curriculum in Swiss medical faculties. This second survey, in 2012/13, was initiated as part of the current Swiss national strategy in palliative care (2010-2015) to serve as a longitudinal monitoring instrument and as a basis for redefining palliative care learning objectives and curriculum planning in our country. METHODS: As in 2007, a questionnaire was sent to the deans of all five medical faculties in Switzerland in 2012. It consisted of eight sections: basic background information, current content and hours in dedicated palliative care blocks, current palliative care content in other courses, topics related to palliative care presented in other courses, recent attempts at improving palliative care content, palliative care content in examinations, challenges, and an overall summary. Content analysis was performed and the results were matched with recommendations from the EAPC for undergraduate training in palliative medicine, as well as with recommendations from overseas countries. RESULTS: There is a considerable increase in palliative care content, academic teaching staff and hours in all medical faculties compared with 2007. No Swiss medical faculty reaches the 40 hours dedicated specifically to palliative care recommended by the EAPC. Topics, teaching methods, distribution across the years and compulsory attendance still differ widely. Based on these results, the official Swiss Catalogue of Learning Objectives (SCLO) was complemented with 12 new learning objectives for palliative and end-of-life care (2013), and a national basic script for palliative care was published (2015).
CONCLUSION: Performing periodic surveys of palliative care teaching at national medical faculties has proven to be a useful tool to adapt the national teaching framework and to improve the recognition of palliative medicine as an integral part of medical training.
Resumo:
Functional advantages and drawbacks are commonly invoked to rationally justify or condemn municipality amalgamations. However, many consolidation projects are resisted by local governments or citizens on the grounds that amalgamation would dampen local identity. A municipality's name change is probably the most visible sign of the loss of community bond experienced by citizens at amalgamation time. This article aims to put a value on this loss by measuring citizens' willingness to pay for their city name. The methodological approach innovates on the literature on municipal amalgamation and place branding by exploiting the versatility of the contingent valuation method (CVM). CVM confronts respondents, in a survey setting, with a hypothetical market in which a characteristic of interest is exchanged; here, the characteristic is the possibility of retaining one's city name within an amalgamated jurisdiction. The article presents the estimates provided by a survey conducted in four Swiss cities.
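In the common single-bounded dichotomous-choice variant of CVM, each respondent accepts or rejects a single bid, and mean willingness to pay (WTP) is then recovered from a logit model of the accept/reject answers (for the linear logit, Hanemann's mean WTP is -intercept/slope). The sketch below simulates such a survey with invented numbers; it illustrates the estimation mechanics only, not the Swiss survey's actual design or data:

```python
import numpy as np

def fit_logit(X: np.ndarray, y: np.ndarray, iters: int = 25) -> np.ndarray:
    """Logistic regression fitted by Newton-Raphson; returns coefficients."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p)
        grad = X.T @ (y - p)
        hess = (X * w[:, None]).T @ X
        beta += np.linalg.solve(hess, grad)
    return beta

# Simulated single-bounded CVM survey: a respondent accepts the bid when his
# or her latent WTP (mean 40, logistic noise) exceeds it. All numbers invented.
rng = np.random.default_rng(0)
n = 2000
bids = rng.uniform(5, 80, size=n)
u = rng.uniform(size=n)
wtp = 40 + 10 * np.log(u / (1 - u))   # latent WTP ~ logistic(location 40, scale 10)
yes = (wtp > bids).astype(float)

X = np.column_stack([np.ones(n), bids])
a, b = fit_logit(X, yes)
mean_wtp = -a / b                     # Hanemann's mean WTP for the linear logit
print(f"estimated mean WTP: {mean_wtp:.1f}")  # close to the true value of 40
```

In the article's setting, the "good" being priced is the retention of the city name, so the estimated mean WTP is a monetary measure of the identity loss citizens associate with renaming.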