10 results for Teaching with geospatial technologies
at Université de Lausanne, Switzerland
Abstract:
The identity [r]evolution is happening. Who are you, who am I in the information society? In recent years, the convergence of several factors - technological, political, economic - has accelerated a fundamental change in our networked world. On a technological level, information has become easier to gather, store, exchange and process. The belief that more information brings more security has been a strong political driver for information gathering since September 11. Profiling aims to transform information into knowledge in order to anticipate a person's behaviour, needs or preferences. It can lead to categorization according to specific risk criteria, for example, or to direct and personalized marketing. As a consequence, new forms of identity appear. They are no longer necessarily related to our names. They are based on information, on the traces that we leave when we act or interact, when we go somewhere or just stay in one place, or even sometimes when we make a choice. They are tied to the SIM cards of our mobile phones, to our credit card numbers, to the pseudonyms we use on the Internet, to our email addresses, to the IP addresses of our computers, to our profiles... Like traditional identities, these new forms of identity can distinguish an individual within a group of people, or describe that person as belonging to a community or a category. How far have we moved through this process? The identity [r]evolution is already becoming part of our daily lives. People are eager to share information with their "friends" in social networks like Facebook, in chat rooms, or in Second Life. Customers take advantage of the numerous bonus cards made available to them. Video surveillance is becoming the rule. In several countries, traditional ID documents are being replaced by biometric passports with RFID technology. This raises several privacy issues and might even end up changing the perception of privacy itself, particularly among the younger generation. In the information society, our (partial) identities become the illusory masks that we choose - or that we are assigned - to interact and communicate with each other. Rights, obligations, responsibilities, even reputation are increasingly associated with these masks. On the one hand, these masks become the key to accessing restricted information and using services. On the other hand, in the case of fraud or a negative reputation, the owner of such a mask can be penalized: doors remain closed, access to services is denied. Hence the current worrying growth of impersonation, identity theft and other identity-related crimes. Where is the path of the identity [r]evolution leading us? This booklet gives a glimpse of possible scenarios in the field of identity.
Abstract:
BACKGROUND: The general proficiency in physical diagnostic skills seems to be declining in relation to the development of new technologies. The few studies that have examined this question have invariably used recordings of cardiac events obtained from patients. However, this type of evaluation may not correlate particularly well with bedside skills. Our objectives were 1) to compare the cardiac auscultatory skills of physicians in training with those of experienced cardiologists by using real patients to test bedside diagnostic skills, and 2) to evaluate the impact of a five-month bedside cardiac auscultation training program. METHODS: 1) In an academic primary care center, 20 physicians (trainees in internal medicine and family practice) and two skilled academic cardiologists listened to 33 cardiac events in 13 patients directly at bedside and identified the cardiac events by completing an open questionnaire. Heart sounds, murmurs and diagnosis were determined beforehand by an independent skilled cardiologist and were validated by echocardiography. Thirteen primary cardiologic diagnoses were possible. 2) Ten of the physicians agreed to participate in a course of 45-minute sessions once a week for 5 months. After the course they listened again to the same patients (pre/post-interventional study). RESULTS: 1) The experts were the most skillful, achieving 69% recognition of heart sounds and murmurs and correct diagnoses in 62% of cases. They also heard all of the diastolic murmurs. The residents heard only 40% of the extra heart sounds and made a correct diagnosis in 24% of cases. 2) After the weekly training sessions, their mean percentage for correct diagnosis was 35% [an increase of 66% (p < 0.05)]. CONCLUSIONS: The level of bedside diagnostic skills in this relatively small group of physicians in training is indeed low, but can be improved by a course focusing on realistic bedside teaching.
Abstract:
We present an approach to teaching evidence-based management (EBMgt) that trains future managers how to produce local evidence. Local evidence is causally interpretable data, collected on-site in companies to address a specific business problem. Our teaching method is a variant of problem-based learning, a method originally developed to teach evidence-based medicine. Following this method, students learn an evidence-based problem-solving cycle for addressing actual business cases. Executing this cycle, students use and produce scientific evidence through literature searches and the design of local, experimental tests of causal hypotheses. We argue for the value of teaching EBMgt with a focus on producing local evidence, and discuss how and what can be taught. We conclude by outlining our contribution to the literature on teaching EBMgt and by discussing limitations of our approach.
Abstract:
OBJECTIVE: To develop and compare two new technologies for diagnosing a contiguous gene syndrome, the Williams-Beuren syndrome (WBS). METHODS: The first proposed method, named paralogous sequence quantification (PSQ), is based on the use of paralogous sequences located on different chromosomes and quantification of specific mismatches present at these loci using pyrosequencing technology. The second exploits quantitative real time polymerase chain reaction (QPCR) to assess the relative quantity of an analysed locus. RESULTS: A correct and unambiguous diagnosis was obtained for 100% of the analysed samples with either technique (n = 165 and n = 155, respectively). These methods allowed the identification of two patients with atypical deletions in a cohort of 182 WBS patients. Both patients presented with mild facial anomalies, mild mental retardation with impaired visuospatial cognition, supravalvar aortic stenosis, and normal growth indices. These observations are consistent with the involvement of GTF2IRD1 or GTF2I in some of the WBS facial features. CONCLUSIONS: Both PSQ and QPCR are robust, easy to interpret, and simple to set up. They represent a competitive alternative for the diagnosis of segmental aneuploidies in clinical laboratories. They have advantages over fluorescence in situ hybridisation or microsatellites/SNP genotyping for detecting short segmental aneuploidies as the former is costly and labour intensive while the latter depends on the informativeness of the polymorphisms.
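As an illustration of the second approach, a relative locus quantity from real-time QPCR data is commonly derived with the comparative Ct (2^-ΔΔCt) method. The minimal Python sketch below uses hypothetical Ct values and function names, not the published assay's actual computation; it shows how a dosage ratio of about 0.5 versus a normal calibrator would flag a heterozygous deletion at the tested locus.

```python
# Minimal sketch: relative quantification of a test locus by real-time QPCR
# using the comparative Ct (2^-ΔΔCt) approach. Ct values and names are
# hypothetical; the published assay may compute gene dosage differently.

def relative_copy_number(ct_test, ct_ref, ct_test_cal, ct_ref_cal):
    """Return the test/reference dosage ratio of a sample versus a normal calibrator.

    ct_test      -- Ct of the test locus (e.g. inside the WBS critical region) in the sample
    ct_ref       -- Ct of a reference locus (outside the region) in the sample
    ct_test_cal  -- Ct of the test locus in a normal-dosage calibrator
    ct_ref_cal   -- Ct of the reference locus in the calibrator
    """
    delta_sample = ct_test - ct_ref              # normalises for input DNA amount in the sample
    delta_calibrator = ct_test_cal - ct_ref_cal  # same normalisation for the calibrator
    ddct = delta_sample - delta_calibrator       # ΔΔCt
    return 2 ** (-ddct)                          # assumes ~100% PCR efficiency


if __name__ == "__main__":
    # Hypothetical run: the deleted sample amplifies the test locus about one cycle later.
    ratio = relative_copy_number(ct_test=27.1, ct_ref=25.0,
                                 ct_test_cal=26.1, ct_ref_cal=25.0)
    print(f"dosage ratio vs. calibrator: {ratio:.2f}")  # ~0.5 suggests a heterozygous deletion
```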
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies

The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of the geographical coordinates and additional relevant spatially referenced variables ("geo-features"). They are well suited to being implemented as predictive engines in decision support systems for environmental data mining, including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with that of geostatistical models in low-dimensional geographical space, but they are indispensable in high-dimensional geo-feature spaces.

The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to their software implementation. The main algorithms and models considered are the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation.

Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis, exploratory spatial data analysis (ESDA) is considered using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography, which studies relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps to detect the presence of spatial patterns describable, at least, by two-point statistics. The machine learning approach to ESDA is presented through the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualization properties.

An important part of the thesis deals with a current hot topic, the automatic mapping of geospatial data. The general regression neural network is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions.

The thesis consists of four chapters: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydrogeological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were also developed, with a user-friendly, easy-to-use interface.
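For readers unfamiliar with the GRNN highlighted above: it is essentially Nadaraya-Watson kernel regression with a single smoothing parameter, which is what makes it attractive for automatic mapping. The following minimal Python sketch uses synthetic data and is not the thesis' Machine Learning Office implementation; it interpolates a spatial field with a Gaussian kernel and selects the kernel width by leave-one-out cross-validation.

```python
# Minimal sketch of a General Regression Neural Network (GRNN) for spatial
# interpolation: the prediction at a query point is a Gaussian-weighted
# average of the training values. Data below are synthetic.

import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """Gaussian-kernel weighted average of y_train at each query point."""
    # Squared Euclidean distances between query and training points
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

def loo_sigma(X, y, sigmas):
    """Pick the kernel width with the lowest leave-one-out squared error."""
    best, best_err = None, np.inf
    for s in sigmas:
        err = 0.0
        for i in range(len(y)):
            mask = np.arange(len(y)) != i
            pred = grnn_predict(X[mask], y[mask], X[i:i + 1], s)[0]
            err += (pred - y[i]) ** 2
        if err < best_err:
            best, best_err = s, err
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 100, size=(200, 2))                  # synthetic monitoring sites (x, y)
    y = np.sin(X[:, 0] / 15) + 0.1 * rng.normal(size=200)   # synthetic measured field
    sigma = loo_sigma(X, y, sigmas=[1, 2, 5, 10, 20])
    grid = np.array([[10.0, 50.0], [60.0, 20.0]])           # locations to map
    print(sigma, grnn_predict(X, y, grid, sigma))
```

In an emergency-mapping setting the appeal of this model is that the only quantity to tune is the kernel width, so the whole interpolation procedure can run unattended.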
Abstract:
Western governments have spent substantial amounts of money to facilitate the integration of information and communication technologies (ICT) into education, hoping to find an economical solution to the thorny equation summarized by the famous formula "do more and better with less". Despite these efforts, and notwithstanding the real improvements in infrastructure and quality of service, this goal is far from being reached. Although we think it illusory to expect technology, by itself, to solve problems of teaching quality, we firmly believe that it can contribute to improving learning conditions and feed the pedagogical reflection that every teacher should conduct before delivering a course.

In this framework, and convinced that open and distance learning offers significant advantages provided that we think about teaching "out of the box", we became interested in the development of courseware, which sits at the intersection of didactics, cognitive science and computing. Hoping to propose a realistic and simple approach that facilitates the development, updating, integration and sustainability of such courseware, we got involved in concrete projects. As we gained field experience we observed that (i) the quality of flexible and distance learning modules is still disappointing, among other reasons because the added value that technology can bring is not exploited as much as it could or should be, and (ii) to succeed, a project must not only bring a useful answer to a real need but also be managed efficiently and be supported by a "champion".

With the aim of proposing a project management approach adapted to the needs of flexible and distance learning, we first examined the characteristics of this type of project. We then analysed existing project methodologies in the hope of using one of them, or an appropriate combination of those closest to our needs. In an empirical manner, proceeding by successive iterations, we defined a pragmatic project management approach and contributed to building decision-support cards that facilitate its implementation. We describe the actors involved in the process, insisting particularly on the pedagogical engineer, whom we view as an orchestra conductor and consider to be one of the key success factors of our approach. Finally, we validated our approach a posteriori by reviewing four flexible and distance learning projects in which we participated and which are representative of the projects encountered in the university environment.

We believe that the implementation of our approach, together with the availability of computerized decision-support cards, constitutes an important asset and should make it easier to measure the real impact of technologies on (i) the evolution of teaching practices, (ii) the organization and (iii) the quality of teaching. Our approach can also serve as a springboard for putting in place a quality assessment process specific to flexible and distance learning. Further research on the real flexibilization of learning and on what technologies bring to learners could then be conducted on the basis of metrics that remain to be defined.
Abstract:
PURPOSE: To evaluate a diagnostic strategy for pulmonary embolism that combined clinical assessment, plasma D-dimer measurement, lower limb venous ultrasonography, and helical computed tomography (CT). METHODS: A cohort of 965 consecutive patients presenting to the emergency departments of three general and teaching hospitals with clinically suspected pulmonary embolism underwent sequential noninvasive testing. Clinical probability was assessed by a prediction rule combined with implicit judgment. All patients were followed for 3 months. RESULTS: A normal D-dimer level (<500 microg/L by a rapid enzyme-linked immunosorbent assay) ruled out venous thromboembolism in 280 patients (29%), and finding a deep vein thrombosis by ultrasonography established the diagnosis in 92 patients (9.5%). Helical CT was required in only 593 patients (61%) and showed pulmonary embolism in 124 patients (12.8%). Pulmonary embolism was considered ruled out in the 450 patients (46.6%) with a negative ultrasound and CT scan and a low-to-intermediate clinical probability. The 8 patients with a negative ultrasound and CT scan despite a high clinical probability proceeded to pulmonary angiography (positive: 2; negative: 6). Helical CT was inconclusive in 11 patients (pulmonary embolism: 4; no pulmonary embolism: 7). The overall prevalence of pulmonary embolism was 23%. Patients classified as not having pulmonary embolism were not anticoagulated during follow-up and had a 3-month thromboembolic risk of 1.0% (95% confidence interval: 0.5% to 2.1%). CONCLUSION: A noninvasive diagnostic strategy combining clinical assessment, D-dimer measurement, ultrasonography, and helical CT yielded a diagnosis in 99% of outpatients suspected of pulmonary embolism, and appeared to be safe, provided that CT was combined with ultrasonography to rule out the disease.
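The strategy is, in effect, a sequential decision algorithm. The schematic Python sketch below paraphrases the branching described in the abstract (thresholds and categories taken from the text, function name hypothetical); it is an illustration of the workflow, not a clinical tool.

```python
# Schematic sketch of the sequential diagnostic strategy described above.
# Branch order and the D-dimer cut-off paraphrase the abstract.

def pe_workup(d_dimer_ug_l, clinical_probability, ultrasound_dvt=None, ct_result=None):
    """Return the next step or conclusion for a suspected pulmonary embolism.

    d_dimer_ug_l         -- rapid ELISA D-dimer concentration in microg/L
    clinical_probability -- 'low', 'intermediate' or 'high' (prediction rule plus implicit judgment)
    ultrasound_dvt       -- True/False once lower-limb venous ultrasonography has been done
    ct_result            -- 'positive', 'negative' or 'inconclusive' once helical CT has been done
    """
    if d_dimer_ug_l < 500:
        return "PE ruled out (normal D-dimer)"
    if ultrasound_dvt is None:
        return "next step: lower-limb venous ultrasonography"
    if ultrasound_dvt:
        return "PE diagnosed (DVT on ultrasound)"
    if ct_result is None:
        return "next step: helical CT"
    if ct_result == "positive":
        return "PE diagnosed (helical CT)"
    if ct_result == "negative" and clinical_probability in ("low", "intermediate"):
        return "PE ruled out (negative ultrasound and CT, low-to-intermediate probability)"
    return "next step: pulmonary angiography"  # high clinical probability or inconclusive CT


if __name__ == "__main__":
    print(pe_workup(350, "low"))
    print(pe_workup(900, "high", ultrasound_dvt=False, ct_result="negative"))
```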
Abstract:
Pelvic floor anatomy is complex and its three-dimensional organization is often difficult to understand for both undergraduate and postgraduate students. Here, we focused on several critical points that need to be considered when teaching the perineum. We have to deal with a mixed population of students with a variety of interests, yet a thorough knowledge of the pelvic floor is the basis for any gynecologist and for any surgical intervention. Our objectives are several-fold: i) to establish the objectives and the best way of teaching; ii) to identify and localize areas in the female pelvic floor that are susceptible to generate problems in understanding the three-dimensional organization; iii) to create novel approaches that respect the anatomical surroundings; and iv) prospectively, to identify elements that may create problems during surgery, i.e. to have a closer look at nerve trajectories and at compression sites that may cause neuralgia or postoperative pain. Feedback from students indicates that they have difficulty assimilating this much information, especially the different imaging techniques. This will eventually lead to a stricter selection of what has to be taught and included in lectures or practicals. Another consequence is that more time needs to be given to studying prosected pelves.
Abstract:
OBJECTIVE: Study of the uptake of new medical technologies provides useful information on the transfer of published evidence into usual practice. We conducted an audit of selected hospitals in three countries (Canada, France, and Switzerland) to identify clinical predictors of low-molecular-weight (LMW) heparin use and outpatient treatment, and to compare the pace of uptake of these new therapeutic approaches across hospitals. DESIGN: Historical review of medical records. SETTING AND PARTICIPANTS: We reviewed the medical records of 3043 patients diagnosed with deep vein thrombosis (DVT) in five Canadian, two French, and two Swiss teaching hospitals from 1994 to 1998. MEASURES: We explored independent clinical variables associated with LMW heparin use and outpatient treatment, and determined crude and adjusted rates of LMW heparin use and outpatient treatment across hospitals. RESULTS: For the years studied, the overall rates of LMW heparin use and outpatient treatment in the study sample were 34.1 and 15.8%, respectively, with higher rates of use in later years. Many comorbidities were negatively associated with outpatient treatment, and risk-adjusted rates of use of these new approaches varied significantly across hospitals. CONCLUSION: There has been a relatively rapid uptake of LMW heparins and outpatient treatment for DVT in their early years of availability, but the pace of uptake has varied considerably across hospitals and countries.
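The abstract does not specify how the adjusted rates were obtained; one common way to model "independent clinical variables associated with LMW heparin use" across hospitals is multivariable logistic regression, sketched below in Python with entirely hypothetical variable names and synthetic data, not the audit's actual specification.

```python
# Illustrative sketch only: a multivariable logistic regression relating LMW
# heparin use to patient covariates, study year and hospital. All data and
# variable names are synthetic/hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "lmwh": rng.integers(0, 2, n),                 # 1 = treated with LMW heparin
    "age": rng.normal(62, 15, n).round(),
    "cancer": rng.integers(0, 2, n),               # example comorbidity
    "renal_failure": rng.integers(0, 2, n),        # example comorbidity
    "hospital": rng.choice(list("ABCDEFGHI"), n),  # nine study hospitals
    "year": rng.integers(1994, 1999, n),           # study period 1994-1998
})

# Comorbidities, year and hospital enter as covariates; exp(coef) gives adjusted odds ratios.
model = smf.logit("lmwh ~ age + cancer + renal_failure + C(year) + C(hospital)",
                  data=df).fit(disp=False)
print(np.exp(model.params).round(2))
```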