58 results for Regulation devices and piloting learning
Abstract:
The amygdala is part of a neural network that contributes to the regulation of emotional behaviors. Rodents, especially rats, are used extensively as model organisms to decipher the functions of specific amygdala nuclei, in particular in relation to fear and emotional learning. Analysis of the role of the nonhuman primate amygdala in these functions has lagged behind work in the rodent but provides evidence for conservation of basic functions across species. Here we provide quantitative information regarding the morphological characteristics of the main amygdala nuclei in rats and monkeys, including neuron and glial cell numbers, neuronal soma size, and individual nuclei volumes. The volumes of the lateral, basal, and accessory basal nuclei were, respectively, 32, 39, and 39 times larger in monkeys than in rats. In contrast, the central and medial nuclei were only 8 and 4 times larger in monkeys than in rats. The numbers of neurons in the lateral, basal, and accessory basal nuclei were 14, 11, and 16 times greater in monkeys than in rats, whereas the numbers of neurons in the central and medial nuclei were only 2.3 and 1.5 times greater in monkeys than in rats. Neuron density was between 2.4 and 3.7 times lower in monkeys than in rats, whereas glial density was only between 1.1 and 1.7 times lower in monkeys than in rats. We compare our data in rats and monkeys with those previously published in humans and discuss the theoretical and functional implications that derive from our quantitative structural findings.
Abstract:
Recent findings in neuroscience suggest that adult brain structure changes in response to environmental alterations and skill learning. Whereas much is known about structural changes after intensive practice for several months, little is known about the effects of single practice sessions on macroscopic brain structure and about progressive (dynamic) morphological alterations relative to improved task proficiency during learning for several weeks. Using T1-weighted and diffusion tensor imaging in humans, we demonstrate significant gray matter volume increases in frontal and parietal brain areas following only two sessions of practice in a complex whole-body balancing task. The gray matter volume increase in the prefrontal cortex correlated positively with subjects' performance improvements during a 6 week learning period. Furthermore, we found that microstructural changes of fractional anisotropy in corresponding white matter regions followed the same temporal dynamic in relation to task performance. The results make clear how even marginal alterations in our ever-changing environment affect adult brain structure and elucidate the interrelated reorganization in cortical areas and associated fiber connections in correlation with improvements in task performance.
Abstract:
BACKGROUND: Electrophysiological cardiac devices are increasingly used. The frequency of subclinical infection is unknown. We investigated all explanted devices using sonication, a method for detection of microbial biofilms on foreign bodies. METHODS AND RESULTS: Consecutive patients in whom cardiac pacemakers and implantable cardioverter/defibrillators were removed at our institution between October 2007 and December 2008 were prospectively included. Devices (generator and/or leads) were aseptically removed and sonicated, and the resulting sonication fluid was cultured. In parallel, conventional swabs of the generator pouch were performed. A total of 121 removed devices (68 pacemakers, 53 implantable cardioverter/defibrillators) were included. The reasons for removal were insufficient battery charge (n=102), device upgrading (n=9), device dysfunction (n=4), or infection (n=6). Of 115 episodes (95%) without clinical evidence of infection, 44 (38%) grew bacteria in sonication fluid, including Propionibacterium acnes (n=27), coagulase-negative staphylococci (n=11), Gram-positive anaerobic cocci (n=3), Gram-positive anaerobic rods (n=1), Gram-negative rods (n=1), and mixed bacteria (n=1). In 21 of 44 sonication-positive episodes, bacterial counts were significant (≥10 colony-forming units/mL of sonication fluid). In 26 sterilized controls, sonication cultures remained negative in 25 cases (96%). In 112 cases without clinical infection, conventional swab cultures were performed: 30 cultures (27%) were positive, and 18 (60%) were concordant with sonication fluid cultures. Six devices and leads were removed because of infection, growing Staphylococcus aureus, Streptococcus mitis, and coagulase-negative staphylococci in 6 sonication fluid cultures and 4 conventional swab cultures. CONCLUSIONS: Bacteria can colonize cardiac electrophysiological devices without clinical signs of infection.
Abstract:
The occupational health risk involved with handling nanoparticles is the probability that a worker will experience an adverse health effect: this is calculated as a function of the worker's exposure relative to the potential biological hazard of the material. Addressing the risks of nanoparticles therefore requires knowledge on occupational exposure and the release of nanoparticles into the environment, as well as toxicological data. However, information on exposure is currently not systematically collected; this risk assessment therefore lacks quantitative data. This thesis aimed, first, at creating the fundamental data necessary for a quantitative assessment and, second, at evaluating methods to measure occupational nanoparticle exposure. The first goal was to determine what is being used where in Swiss industries. This was followed by an evaluation of the adequacy of existing measurement methods to assess workplace nanoparticle exposure to complex size distributions and concentration gradients. The study was conceived as a series of methodological evaluations aimed at better understanding nanoparticle measurement devices and methods. It focused on inhalation exposure to airborne particles, as respiration is considered to be the most important entrance pathway for nanoparticles into the body in terms of risk. The targeted survey (pilot study) was conducted as a feasibility study for a later nationwide survey on the handling of nanoparticles and the application of specific protection means in industry. The study consisted of targeted phone interviews with health and safety officers of Swiss companies that were believed to use or produce nanoparticles. This was followed by a representative survey on the level of nanoparticle usage in Switzerland, designed based on the results of the pilot study. The survey was conducted among a representative selection of clients of the Swiss National Accident Insurance Fund (SUVA), covering about 85% of Swiss production companies. The third part of this thesis focused on the methods to measure nanoparticles. Several pre-studies were conducted studying the limits of commonly used measurement devices in the presence of nanoparticle agglomerates. This focus was chosen because several discussions with users and producers of the measurement devices raised questions about their accuracy in measuring nanoparticle agglomerates and because, at the same time, the two survey studies revealed that such powders are frequently used in industry. The first preparatory experiment focused on the accuracy of the scanning mobility particle sizer (SMPS), which showed an improbable size distribution when measuring powders of nanoparticle agglomerates. Furthermore, the thesis includes a series of smaller experiments that took a closer look at problems encountered with other measurement devices in the presence of nanoparticle agglomerates: condensation particle counters (CPC), the portable aerosol spectrometer (PAS), a device to estimate the aerodynamic diameter, as well as diffusion size classifiers. Some initial feasibility tests for the efficiency of filter-based sampling and subsequent counting of carbon nanotubes (CNT) were conducted last. The pilot study provided a detailed picture of the types and amounts of nanoparticles used and of the knowledge of the health and safety experts in the companies.
Considerable maximal quantities (>1'000 kg/year per company) of Ag, Al-Ox, Fe-Ox, SiO2, TiO2, and ZnO (mainly first-generation particles) were declared by the contacted Swiss companies. The median quantity of handled nanoparticles, however, was 100 kg/year. The representative survey was conducted by contacting by post a representative selection of 1'626 SUVA clients (Swiss National Accident Insurance Fund). It allowed estimation of the number of companies and workers dealing with nanoparticles in Switzerland. The extrapolation from the surveyed companies to all companies of the Swiss production sector suggested that 1'309 workers (95% confidence interval 1'073 to 1'545) of the Swiss production sector are potentially exposed to nanoparticles in 586 companies (145 to 1'027). These numbers correspond to 0.08% (0.06% to 0.09%) of all workers and to 0.6% (0.2% to 1.1%) of companies in the Swiss production sector. A few well-known methods exist to measure airborne concentrations of sub-micrometre-sized particles. However, it was unclear how well the different instruments perform in the presence of the often quite large agglomerates of nanostructured materials. The evaluation of devices and methods therefore focused on nanoparticle agglomerate powders. It allowed the identification of the following potential sources of inaccurate measurements at workplaces with considerably high concentrations of airborne agglomerates:
- A standard SMPS showed bi-modal particle size distributions when measuring large nanoparticle agglomerates.
- Differences in the range of a factor of a thousand were shown between diffusion size classifiers and CPC/SMPS.
- The comparison between CPC/SMPS and the portable aerosol spectrometer (PAS) was much better, but depending on the concentration, size, or type of the powders measured, the differences could still reach an order of magnitude.
- Specific difficulties and uncertainties in the assessment of workplaces were identified: background particles can interact with particles created by a process, which makes the handling of the background concentration difficult.
- Electric motors produce high numbers of nanoparticles and confound the measurement of the process-related exposure.
Conclusion: The surveys showed that nanoparticle applications exist in many industrial sectors in Switzerland and that some companies already use large quantities of them. The representative survey demonstrated a low prevalence of nanoparticle usage in most branches of Swiss industry and led to the conclusion that the introduction of applications using nanoparticles (especially outside industrial chemistry) is only beginning. Even though the number of potentially exposed workers was reportedly rather small, it nevertheless underscores the need for exposure assessments. Understanding exposure and how to measure it correctly is very important because the potential health effects of nanomaterials are not yet fully understood. The evaluation showed that many devices and methods for measuring nanoparticles need to be validated for nanoparticle agglomerates before large exposure assessment studies can begin.
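The extrapolation step described above scales a sample proportion up to the population of Swiss production companies. A minimal Python sketch of that kind of calculation follows; the counts and population size are hypothetical, and the thesis itself used a stratified, representative design rather than this naive scaling.

```python
import math

def extrapolate(k: int, n: int, N: int, z: float = 1.96):
    """Scale a sample proportion k/n up to a population of size N,
    with a normal-approximation 95% confidence interval."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    lo, hi = max(0.0, p - z * se), p + z * se
    return N * p, N * lo, N * hi

# Hypothetical illustration only -- the thesis used a stratified,
# weighted survey design, not this simple scaling.
est, lo, hi = extrapolate(k=12, n=1626, N=100000)
print(f"estimated companies: {est:.0f} (95% CI {lo:.0f}-{hi:.0f})")
```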
Abstract:
This research reviews the analysis and modeling of Swiss franc interest rate curves (IRC) using unsupervised (SOM, Gaussian mixtures) and supervised (MLP) machine learning algorithms. IRC are considered as objects embedded in different feature spaces: maturities, maturity-date, and the parameters of the Nelson-Siegel model (NSM). Analysis of the NSM parameters and their temporal and clustering structures helps to understand the relevance of the model and its potential use for forecasting. A mapping of IRC in a maturity-date feature space is presented and analyzed for visualization and forecasting purposes.
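The Nelson-Siegel model referenced here summarizes a yield curve with a small set of interpretable parameters. The sketch below shows the standard four-parameter Nelson-Siegel form fitted by nonlinear least squares; the maturities and yields are synthetic, and the abstract does not specify which variant of the model was used.

```python
import numpy as np
from scipy.optimize import curve_fit

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Standard Nelson-Siegel yield curve: level, slope and curvature
    terms governed by a single decay parameter lam."""
    x = tau / lam
    slope = (1 - np.exp(-x)) / x
    return beta0 + beta1 * slope + beta2 * (slope - np.exp(-x))

# Synthetic example: maturities in years and observed yields (made up).
maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30])
yields = np.array([0.8, 0.9, 1.1, 1.4, 1.6, 1.9, 2.1, 2.3, 2.6, 2.7])

params, _ = curve_fit(nelson_siegel, maturities, yields,
                      p0=[2.5, -1.5, 0.5, 1.5])  # rough starting values
print(dict(zip(["beta0", "beta1", "beta2", "lam"], params)))
```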
Abstract:
Scientific reporting and communication is a challenging topic for which traditional study programs do not offer structured learning activities on a regular basis. This paper reports on the development and implementation of a web application and associated learning activities that aim to raise awareness of reporting and communication issues among students in forensic science and law. The project covers interdisciplinary case studies based on a library of written reports about forensic examinations. Special features of the web framework, in particular a report annotation tool, support the design of various individual and group learning activities that focus on developing knowledge and competence in dealing with reporting and communication challenges in the students' future areas of professional activity.
Abstract:
Glucose-dependent insulinotropic polypeptide (GIP) is a key incretin hormone, released from the intestine after a meal, producing a glucose-dependent insulin secretion. The GIP receptor (GIPR) is expressed on pyramidal neurons in the cortex and hippocampus, and GIP is synthesized in a subset of neurons in the brain. However, the role of the GIPR in neuronal signaling is not clear. In this study, we used a mouse strain with GIPR gene deletion (GIPR KO) to elucidate the role of the GIPR in neuronal communication and brain function. Compared with C57BL/6 control mice, GIPR KO mice displayed higher locomotor activity in an open-field task. Impairments of recognition and of spatial learning and memory in GIPR KO mice were found in an object recognition task and a spatial water maze task, respectively. In an object location task, no impairment was found. GIPR KO mice also showed impaired synaptic plasticity in paired-pulse facilitation and a block of long-term potentiation in area CA1 of the hippocampus. Moreover, a large decrease in the number of neuronal progenitor cells was found in the dentate gyrus of the transgenic mice, although the number of young neurons was not changed. Together, the results suggest that GIP receptors play an important role in cognition, neurotransmission, and cell proliferation.
Abstract:
Individual learning (e.g., trial-and-error) and social learning (e.g., imitation) are alternative ways of acquiring and expressing the appropriate phenotype in an environment. The optimal choice between using individual learning and/or social learning may be dictated by the life-stage or age of an organism. Of special interest is a learning schedule in which social learning precedes individual learning, because such a schedule is apparently a necessary condition for cumulative culture. Assuming two obligatory learning stages per discrete generation, we obtain the evolutionarily stable learning schedules for the three situations where the environment is constant, fluctuates between generations, or fluctuates within generations. During each learning stage, we assume that an organism may target the optimal phenotype in the current environment by individual learning, and/or the mature phenotype of the previous generation by oblique social learning. In the absence of exogenous costs to learning, the evolutionarily stable learning schedules are predicted to be either pure social learning followed by pure individual learning ("bang-bang" control) or pure individual learning at both stages ("flat" control). Moreover, we find for each situation that the evolutionarily stable learning schedule is also the one that optimizes the learned phenotype at equilibrium.
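The comparison between the "bang-bang" and "flat" schedules can be illustrated with a toy Monte Carlo simulation, sketched below under strong simplifying assumptions (a one-dimensional phenotype, a fixed individual-learning step a, and environmental shifts between generations with probability p_shift). This is an illustrative caricature, not the paper's analytical model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(schedule, p_shift=0.1, a=0.6, gens=20000):
    """Toy two-stage learning: 'S' copies the previous generation's
    mature phenotype (oblique social learning), 'I' moves a fraction a
    toward the current optimum (individual learning). Returns the mean
    absolute error from the optimum at maturity."""
    theta, prev_mature, errs = 0.0, 0.0, []
    for _ in range(gens):
        if rng.random() < p_shift:   # environment shifts between generations
            theta = rng.normal()
        z = 0.0                      # naive phenotype
        for stage in schedule:
            if stage == "S":
                z = prev_mature              # copy previous generation
            else:
                z += a * (theta - z)         # track current optimum
        prev_mature = z
        errs.append(abs(theta - z))
    return np.mean(errs)

print("social-then-individual ('bang-bang'):", simulate("SI"))
print("individual-only ('flat')            :", simulate("II"))
```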
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence that is mainly concerned with the development of techniques and algorithms allowing computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions for classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to be implemented as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and prediction as well as automatic data mapping. They have competitive efficiency with geostatistical models in low-dimensional geographical spaces but are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis.
In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both a traditional geostatistical approach, experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to understand the presence of spatial patterns, at least those described by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, namely the automatic mapping of geospatial data. General regression neural networks (GRNN) are proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially in the case of emergency conditions. The thesis consists of four chapters and has the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed during the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
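The GRNN proposed here for automatic mapping is, in essence, Nadaraya-Watson kernel regression: a prediction is a kernel-weighted average of the observed values. A minimal numpy sketch on synthetic two-dimensional coordinates follows; the single bandwidth sigma is a hypothetical choice and would in practice be tuned, e.g. by cross-validation.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """GRNN prediction: a Gaussian-kernel-weighted average of the
    training targets (Nadaraya-Watson kernel regression)."""
    # Squared Euclidean distances between queries and training points.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Synthetic illustration: noisy samples of a smooth spatial field.
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(200, 2))             # e.g. x/y coordinates
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)  # measured values
grid = rng.uniform(0, 10, size=(5, 2))
print(grnn_predict(X, y, grid, sigma=0.8))
```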
Abstract:
This report compares policy learning processes in 11 European countries. Based on the country reports produced by the national teams of the INSPIRES project, this paper develops an argument that connects problem pressure and politicization to learning in different labor market innovations. In short, we argue that learning efforts are most likely to impact on policy change if there is a certain problem pressure that clearly necessitates political action. On the other hand, if problem pressure is very low, or so high that governments need to react immediately, chances are low that learning impacts on policy change. The second part of our argument contends that learning impacts on policy change especially if a problem is not very politicized, i.e. there are no major conflicts concerning a reform, because then solutions become bound up in the search for a compromise. Our results confirm our first hypothesis regarding the connection between problem pressure and policy learning. Governments do indeed learn up to a certain degree of problem pressure. However, once political action becomes really urgent, i.e. in anti-crisis policies, there is no time and room for learning. On the other hand, learning occurred independently of the politicization of the problem. In fact, in countries that have a consensual political system, learning occurred before the decision on a reform, whereas in majoritarian systems, learning happened after the adoption of a policy, during the process of implementation.
Abstract:
The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming at providing the best possible generalization and predictive ability instead of concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is here learned automatically from data, providing the optimum mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide efficient means to model local anomalies that may typically arise at an early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge on the possible existence of such short-scale patterns. This is a possible limitation of the method for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of Cs137 activity from the measurements taken in the region of Briansk following the Chernobyl accident.
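One simple way to realize the multi-scale idea is a kernel that mixes a short-scale and a large-scale RBF component. The sketch below passes such a kernel to scikit-learn's SVR. Note that the mixture weight w is fixed by hand here for illustration, whereas the paper's extension learns the optimum mixture of short- and large-scale models from the data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

def two_scale_kernel(gamma_short=10.0, gamma_long=0.1, w=0.5):
    """Weighted sum of a short-scale and a large-scale RBF kernel."""
    def k(X, Y):
        return w * rbf_kernel(X, Y, gamma=gamma_short) \
             + (1 - w) * rbf_kernel(X, Y, gamma=gamma_long)
    return k

# Synthetic field: a broad trend plus a narrow local anomaly.
rng = np.random.default_rng(2)
X = rng.uniform(-5, 5, size=(300, 2))
y = np.sin(0.5 * X[:, 0]) + 2.0 * np.exp(-8 * (X ** 2).sum(1))

model = SVR(kernel=two_scale_kernel(), C=10.0, epsilon=0.01)
model.fit(X, y)
print(model.predict(X[:5]))
```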
Abstract:
The aim of the present study was to assess the influence of local environmental olfactory cues on place learning in rats. We developed a new experimental design allowing the comparison of the use of local olfactory and visual cues in spatial and discrimination learning. We compared the effect of both types of cues on the discrimination of a single food source in an open-field arena. The goal was either in a fixed or in a variable location, and could be indicated by local olfactory and/or visual cues. The local cues enhanced the discrimination of the goal dish, whether it was in a fixed or in a variable location. However, we did not observe any overshadowing of the spatial information by the local olfactory or visual cue. Rats relied primarily on distant visuospatial information to locate the goal, neglecting local information when it was in conflict with the spatial information.
Abstract:
The diagnosis and treatment of infectious diseases are today increasingly challenged by the emergence of difficult-to-manage situations, such as infections associated with medical devices and invasive fungal infections, especially in immunocompromised patients. The aim of this thesis was to address these challenges by developing new strategies for eradication of biofilms of difficult-to-treat microorganisms (treatment, part 1) and investigating innovative methods for microbial detection and antimicrobial susceptibility testing (diagnosis, part 2). The first part of the thesis investigates antimicrobial treatment strategies for infections caused by two less investigated microorganisms, Enterococcus faecalis and Propionibacterium acnes, which are important pathogens causing implant-associated infections. The treatment of implant-associated infections is difficult in general due to the reduced susceptibility of bacteria when present in biofilms. We demonstrated an excellent in vitro activity of gentamicin against E. faecalis in stationary growth phase and were able to confirm the activity against "young" biofilms (3 hours) in an experimental foreign-body infection model (cure rate 50%). The addition of gentamicin improved the activity of daptomycin and vancomycin in vitro, as determined by time-kill curves and microcalorimetry. In vivo, the most efficient combination regimen was daptomycin plus gentamicin (cure rate 55%). Despite a short duration of infection, the cure rates were low, highlighting that enterococcal biofilms remain difficult to treat despite administration of newer antibiotics, such as daptomycin. By establishing a novel in vitro assay for evaluation of anti-biofilm activity (microcalorimetry), we demonstrated that rifampin was the most active antimicrobial against P. acnes biofilms, followed by penicillin G, daptomycin and ceftriaxone. In animal studies we confirmed the anti-biofilm activity of rifampin (cure rate 36% when administered alone), as well as in combination with daptomycin (cure rate 63%), whereas in combination with vancomycin or levofloxacin it showed lower cure rates (46% and 25%, respectively). We further investigated the emergence of rifampin resistance in P. acnes in vitro. Rifampin resistance progressively emerged during exposure to rifampin if the bacterial concentration was high (10^8 CFU/ml), with a mutation rate of 10^-9. In resistant isolates, five point mutations of the rpoB gene were found in clusters I and II, as previously described for staphylococci and other bacterial species.
The second part of the thesis describes a novel real-time method for evaluation of antifungals against molds, based on measurements of growth-related heat production by isothermal microcalorimetry. Current methods for evaluation of antifungal agents against molds have several limitations, especially when combinations of antifungals are investigated. We evaluated the activity of amphotericin B, triazoles (voriconazole, posaconazole) and echinocandins (caspofungin and anidulafungin) against Aspergillus spp. by microcalorimetry. The presence of amphotericin B or a triazole delayed heat production in a concentration-dependent manner, and the minimal heat inhibition concentration (MHIC) was determined as the lowest concentration inhibiting 50% of the heat produced at 48 h. Due to the different mechanism of action of echinocandins, the MHIC for this antifungal class was determined as the lowest concentration lowering the heat-flow peak by 50%. Agreement within two 2-fold dilutions between MHIC and MIC or MEC (determined by CLSI M38A) was 90% for amphotericin B, 100% for voriconazole, 90% for posaconazole and 70% for caspofungin. We further evaluated our assay for antifungal susceptibility testing of non-Aspergillus molds. As determined by microcalorimetry, amphotericin B was the most active agent against Mucorales and Fusarium spp., whereas voriconazole was the most active agent against Scedosporium spp. Finally, we evaluated the activity of antifungal combinations against Aspergillus spp. Against A. fumigatus, improved activity of amphotericin B and voriconazole was observed when combined with an echinocandin. Against A. terreus, an echinocandin showed synergistic activity with amphotericin B, whereas in combination with voriconazole no considerably improved activity was observed.
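The MHIC definitions given here translate directly into a small computation over measured heat-flow curves: integrate the heat flow up to 48 h (or take its peak, for echinocandin-like drugs) and find the lowest concentration reaching 50% inhibition relative to the drug-free control. A sketch with synthetic curves follows; the thesis's actual data acquisition and baseline handling are not reproduced here.

```python
import numpy as np

def mhic(t_hours, heatflow_by_conc, concentrations, control, peak=False):
    """Lowest concentration reducing total heat at 48 h (or the
    heat-flow peak, for echinocandin-like drugs) by >= 50% vs control."""
    def summary(hf):
        mask = t_hours <= 48
        return hf[mask].max() if peak else np.trapz(hf[mask], t_hours[mask])
    ref = summary(control)
    for c, hf in sorted(zip(concentrations, heatflow_by_conc)):
        if summary(hf) <= 0.5 * ref:
            return c
    return None  # no tested concentration reached 50% inhibition

# Synthetic illustration: Gaussian-shaped heat-flow curves whose growth
# is delayed and damped with increasing drug concentration.
t = np.linspace(0, 48, 200)
curve = lambda amp, t0: amp * np.exp(-((t - t0) / 6.0) ** 2)
control = curve(1.0, 20)
concs = [0.25, 0.5, 1.0, 2.0]
curves = [curve(a, t0) for a, t0 in [(0.9, 24), (0.7, 30), (0.4, 38), (0.1, 46)]]
print("MHIC:", mhic(t, curves, concs, control))
```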
Abstract:
Rats were treated postnatally (PND 5-16) with BSO (L-buthionine-(S,R)-sulfoximine) in an animal model of schizophrenia based on transient glutathione deficit. The BSO-treated rats were impaired in patrolling a maze or a homing table when adult, yet demonstrated preserved escape learning, place discrimination and reversal in a water maze task [37]. In the present work, BSO rats' performance in the water maze was assessed in conditions controlling for the available visual cues. First, in a completely curtained environment with two salient controlled cues, BSO rats showed little accuracy compared to control rats. Second, pre-trained BSO rats were impaired in reaching the familiar spatial position when curtains partially occluded different portions of the room environment in successive sessions. The apparently preserved place learning in a classical water maze task thus appears to require the stability and richness of visual landmarks in the surrounding environment. In other words, the accuracy of BSO rats in place and reversal learning is impaired in a minimal-cue condition or when the visual panorama changes between trials. However, if the panorama remains rich and stable between trials, BSO rats are equally efficient in reaching a familiar position or in learning a new one. This suggests that the BSO rats' accurate performance in the water maze does not satisfy all the criteria for cognitive map-based navigation relying on the integration of polymodal cues. It supports the general hypothesis of a binding deficit in BSO rats.
Abstract:
Meta-analysis of prospective studies shows that quantitative ultrasound of the heel using validated devices predicts risk of different types of fracture with similar performance across different devices and in elderly men and women. These predictions are independent of the risk estimates from hip DXA measures. Introduction: Clinical utilisation of heel quantitative ultrasound (QUS) depends on its power to predict clinical fractures. This is particularly important in settings that have no access to DXA-derived bone density measurements. We aimed to assess the predictive power of heel QUS for fractures using a meta-analysis approach. Methods: We conducted an inverse variance random effects meta-analysis of prospective studies with heel QUS measures at baseline and fracture outcomes in their follow-up. Relative risks (RR) per standard deviation (SD) of different QUS parameters (broadband ultrasound attenuation [BUA], speed of sound [SOS], stiffness index [SI], and quantitative ultrasound index [QUI]) for various fracture outcomes (hip, vertebral, any clinical, any osteoporotic and major osteoporotic fractures) were reported based on study questions. Results: Twenty-one studies including 55,164 women and 13,742 men were included in the meta-analysis, with a total follow-up of 279,124 person-years. All four QUS parameters were associated with risk of different fractures. For instance, the RR of hip fracture per 1 SD decrease was 1.69 (95% CI 1.43-2.00) for BUA, 1.96 (95% CI 1.64-2.34) for SOS, 2.26 (95% CI 1.71-2.99) for SI, and 1.99 (95% CI 1.49-2.67) for QUI. There was marked heterogeneity among studies on hip and any clinical fractures, but no evidence of publication bias amongst them. Validated devices from different manufacturers predicted fracture risks with similar performance (meta-regression p values > 0.05 for difference of devices). QUS measures predicted fracture with similar performance in men and women. Meta-analysis of studies with QUS measures adjusted for hip BMD showed a significant and independent association with fracture risk (RR/SD for BUA = 1.34 [95% CI 1.22-1.49]). Conclusions: This study confirms that heel QUS, using validated devices, predicts risk of different fracture outcomes in elderly men and women. Further research is needed for more widespread utilisation of heel QUS in clinical settings across the world.
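The inverse variance random effects pooling named in the Methods can be sketched with the DerSimonian-Laird estimator on the log-RR scale, as below; the per-study inputs are hypothetical and not the data behind this meta-analysis.

```python
import numpy as np

def dersimonian_laird(log_rr, se):
    """Inverse-variance random-effects pooling (DerSimonian-Laird)
    of per-study log relative risks and their standard errors."""
    w = 1.0 / se ** 2                          # fixed-effect weights
    mu_fe = (w * log_rr).sum() / w.sum()
    q = (w * (log_rr - mu_fe) ** 2).sum()      # Cochran's Q
    df = len(log_rr) - 1
    tau2 = max(0.0, (q - df) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_re = 1.0 / (se ** 2 + tau2)              # random-effects weights
    mu = (w_re * log_rr).sum() / w_re.sum()
    se_mu = np.sqrt(1.0 / w_re.sum())
    return np.exp(mu), np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu)

# Hypothetical per-study RRs per SD decrease of BUA (illustration only,
# not the actual study-level data behind this meta-analysis).
rr = np.array([1.5, 1.8, 1.6, 2.0, 1.4])
se = np.array([0.10, 0.15, 0.12, 0.20, 0.18])
print(dersimonian_laird(np.log(rr), se))
```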