858 results for workshops (seminars)


Relevance: 10.00%

Publisher:

Abstract:

Context: Ovarian tumor (OT) typing is a competency expected of pathologists, with significant clinical implications. OTs, however, come in numerous types, some rather rare, so some departments have few opportunities for practice. Aim: Our aim was to design a tool for pathologists to train in typing less common OTs. Method and Results: Representative slides of 20 less common OTs were scanned (NanoZoomer Digital, Hamamatsu®) and the diagnostic algorithm proposed by Young and Scully was applied to each case (Young RH and Scully RE, Seminars in Diagnostic Pathology 2001, 18: 161-235), covering: recognition of the morphological pattern(s); shortlisting of the differential diagnoses; and proposition of relevant immunohistochemical markers. The next steps of this project will be: evaluation of the tool in several post-graduate training centers in Europe and Québec; improvement of its design based on the evaluation results; and diffusion to a larger public. Discussion: In clinical medicine, solving many cases is recognized as essential for a novice to become an expert. This project relies on virtual slide technology to provide pathologists with a learning tool aimed at increasing their skills in OT typing. After due evaluation, this model might be extended to other uncommon tumors.

Relevance: 10.00%

Publisher:

Abstract:

Résumé: This thesis is devoted to the analysis, modeling and visualization of spatially referenced environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence concerned in particular with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted for application to environmental data and spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to the geographical coordinates. Moreover, they are well suited to implementation as decision-support tools for environmental questions ranging from pattern recognition to modeling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software for the environmental sciences. The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organizing maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both with the traditional geostatistical approach, through experimental variography, and according to the principles of machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool of geostatistical analysis of anisotropic spatial correlations that detects the presence of spatial patterns describable by a two-point statistic. The machine learning approach to ESDA is presented through the application of the k-nearest neighbors method, which is very simple and has excellent interpretation and visualization properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed to solve this task efficiently. The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN significantly outperformed all other methods, particularly in emergency situations. The thesis consists of four chapters: theory, applications, software tools and guided examples. An important part of the work is a collection of software tools, Machine Learning Office. This collection has been developed over the last 15 years and has been used for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support, and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualization were also developed, with care taken to provide a user-friendly, easy-to-use interface.

Machine Learning for geospatial data: algorithms, software tools and case studies

Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces composed of geographical space and additional relevant spatially referenced features. They are well suited to being implemented as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations, which helps to detect the presence of spatial patterns, at least those described by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a currently hot topic, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. These tools were developed during the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for carrying out fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, the classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
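For readers unfamiliar with the GRNN singled out above, its core is Nadaraya-Watson kernel regression: the prediction at a new location is a Gaussian-kernel-weighted average of the observed target values. The sketch below is a minimal illustration of that idea for 2-D spatial interpolation, not the Machine Learning Office implementation; the function name, the toy coordinates and the kernel width sigma (which would normally be tuned, e.g., by cross-validation) are assumptions made for the example.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """GRNN / Nadaraya-Watson estimate: kernel-weighted average of training targets."""
    preds = []
    for x in X_query:
        d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances to all training points
        w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights
        preds.append(np.dot(w, y_train) / (w.sum() + 1e-12))  # weighted mean, guarded against all-zero weights
    return np.array(preds)

# Toy example: four measured locations (coordinates in arbitrary units) and two query points.
X_train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y_train = np.array([2.0, 3.0, 4.0, 5.0])
X_query = np.array([[0.5, 0.5], [0.9, 0.1]])
print(grnn_predict(X_train, y_train, X_query, sigma=0.5))
```

The same estimator applies unchanged when extra geo-features are appended to the coordinates; only the dimensionality of the inputs grows.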

Relevance: 10.00%

Publisher:

Abstract:

Summary of the IOWATER Program and workshops offered.

Relevance: 10.00%

Publisher:

Abstract:

This study offers an approach to the role of cooperation in women's empowerment over natural resources in the rural Mexican community of Once de Mayo. To this end, the experiences of twelve participants in projects aimed at women are analyzed using life histories, participant observation and workshops as research tools. What emerges is the importance of programs and/or projects addressing women's practical needs, linked to the household, without omitting their strategic gender needs, as elements of empowerment. In addition, women's empowerment through cooperation fundamentally depends on the development of leadership attitudes by some of the participants and on the interest of all those involved in collaborating, together with their previous training experiences on different topics.

Relevance: 10.00%

Publisher:

Abstract:

The European Space Agency's Gaia mission will create the largest and most precise three-dimensional chart of our galaxy (the Milky Way) by providing unprecedented position, parallax, proper motion, and radial velocity measurements for about one billion stars. The resulting catalogue will be made available to the scientific community and will be analyzed in many different ways, including the production of a variety of statistics. The latter will often entail the generation of multidimensional histograms and hypercubes as part of the precomputed statistics for each data release, or for scientific analysis involving either the final data products or the raw data coming from the satellite instruments. In this paper we present and analyze a generic framework that allows the hypercube generation to be easily done within a MapReduce infrastructure, providing all the advantages of the new Big Data analysis paradigm but without dealing with any specific interface to the lower-level distributed system implementation (Hadoop). Furthermore, we show how executing the framework for different data storage model configurations (i.e., row- or column-oriented) and compression techniques can considerably improve the response time of this type of workload for the currently available simulated data of the mission. In addition, we put forward the advantages and shortcomings of deploying the framework on a public cloud provider, benchmark it against other popular solutions available (which are not always the best for such ad hoc applications), and describe some user experiences with the framework, which was employed for a number of dedicated workshops on astronomical data analysis techniques.
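To make the map/reduce decomposition of hypercube (multidimensional histogram) generation concrete, here is a minimal, framework-free Python sketch. It is not the paper's framework and uses no Hadoop API: the mapper emits one (cell key, 1) pair per record, the key being the tuple of bin indices along each dimension, and the reducer sums the counts per cell. The record attributes and bin edges are hypothetical.

```python
from collections import Counter

def bin_index(value, edges):
    """Index of the half-open bin [edges[i], edges[i+1]) containing value."""
    for i in range(len(edges) - 1):
        if edges[i] <= value < edges[i + 1]:
            return i
    return len(edges) - 2  # values outside the edges are clamped into the last bin for simplicity

def map_phase(records, bin_edges):
    """Map step: emit a (cell-key, 1) pair for each record."""
    for rec in records:
        yield tuple(bin_index(v, e) for v, e in zip(rec, bin_edges)), 1

def reduce_phase(pairs):
    """Reduce step: sum the counts falling into each hypercube cell."""
    cube = Counter()
    for key, count in pairs:
        cube[key] += count
    return cube

# Hypothetical 2-D "hypercube" over (magnitude, parallax) records.
records = [(12.3, 1.1), (14.8, 0.4), (12.9, 1.0), (19.5, 0.1)]
edges = [[10, 13, 16, 20], [0.0, 0.5, 1.0, 1.5]]
print(reduce_phase(map_phase(records, edges)))  # counts per (magnitude bin, parallax bin)
```

In an actual MapReduce deployment the map and reduce steps would run in parallel over partitions of the catalogue, with the infrastructure handling the shuffle of keys between them.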


Relevance: 10.00%

Publisher:

Abstract:

Quantitative approaches in ceramology are gaining ground in excavation reports, archaeological publications and thematic studies. A wide variety of methods is therefore in use, depending on the researchers' theoretical premises, the type of material examined, the context of discovery and the questions being addressed. The round table that took place in Athens in November 2008 was intended to offer the participants the opportunity to present a selection of case studies on the basis of which methodological approaches were discussed. The aim was to define a set of guidelines for quantification that would prove to be of use to all researchers.

Contents:
1) Introduction (Samuel Verdan)
2) Isthmia and beyond. How can quantification help the analysis of EIA sanctuary deposits? (Catherine Morgan)
3) Approaching aspects of cult practice and ethnicity in Early Iron Age Ephesos using quantitative analysis of a Protogeometric deposit from the Artemision (Michael Kerschner)
4) Development of a ceramic cultic assemblage: Analyzing pottery from Late Helladic IIIC through Late Geometric Kalapodi (Ivonne Kaiser, Laura-Concetta Rizzotto, Sara Strack)
5) 'Erfahrungsbericht' of application of different quantitative methods at Kalapodi (Sara Strack)
6) The Early Iron Age sanctuary at Olympia: counting sherds from the Pelopion excavations (1987-1996) (Birgitta Eder)
7) L'aire du pilier des Rhodiens à Delphes: Essai de quantification du mobilier (Jean-Marc Luce)
8) A new approach in ceramic statistical analyses: Pit 13 on Xeropolis at Lefkandi (David A. Mitchell, Irene S. Lemos)
9) Households and workshops at Early Iron Age Oropos: A quantitative approach of the fine, wheel-made pottery (Vicky Vlachou)
10) Counting sherds at Sindos: Pottery consumption and construction of identities in the Iron Age (Stefanos Gimatzidis)
11) Analyse quantitative du mobilier céramique des fouilles de Xombourgo à Ténos et le cas des supports de caisson (Jean-Sébastien Gros)
12) Defining a typology of pottery from Gortyn: The material from a pottery workshop pit (Emanuela Santaniello)
13) Quantification of ceramics from Early Iron Age tombs (Antonis Kotsonas)
14) Quantitative analysis of the pottery from the Early Iron Age necropolis of Tsikalario on Naxos (Xenia Charalambidou)
15) Finding the Early Iron Age in field survey: Two case studies from Boeotia and Magnesia (Vladimir Stissi)
16) Pottery quantification: Some guidelines (Samuel Verdan)
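The guidelines themselves are not reproduced in this abstract, but the elementary operation most of the contributions build on, counting sherds per type within a deposit and expressing the counts as proportions, can be sketched in a few lines of Python. This is a purely illustrative example with hypothetical records and type labels, not any contributor's actual protocol.

```python
from collections import Counter

def quantify(sherds):
    """Per-type sherd counts and percentage shares for a single context/deposit."""
    counts = Counter(s["type"] for s in sherds)
    total = sum(counts.values())
    return {t: (n, round(100.0 * n / total, 1)) for t, n in counts.items()}

# Hypothetical records: each sherd tagged with a ware/type label.
sherds = [{"type": "Protogeometric"}, {"type": "Protogeometric"}, {"type": "LH IIIC"}]
print(quantify(sherds))  # {'Protogeometric': (2, 66.7), 'LH IIIC': (1, 33.3)}
```

Raw counts are only one possible estimator, which is precisely why shared guidelines such as those proposed in the closing contribution are needed.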

Relevance: 10.00%

Publisher:

Abstract:

This booklet is a compilation of notes taken during motor grader operators workshops held at some 20 different locations throughout Iowa during the last two years. It also gathers the advice of 16 experienced motor grader operators and maintenance foremen (from 14 different counties around Iowa) who serve as instructors and assistant instructors at the "MoGo" workshops. The instructors have all said that they learn as much from the operators who attend the workshops as they teach. Motor grader operators from throughout Iowa have shown us new, innovative and better ways of maintaining gravel roads. This booklet is an attempt to pass on some of the "tips" we have gathered from Iowa operators. It will need to be revised, corrected and added to based on the advice we get from you, the operators who do the work here in Iowa.

Relevance: 10.00%

Publisher:

Abstract:

Despite the substantial advances obtained in the treatment of localized malignancies, metastatic disease still lacks effective treatment and remains the primary cause of cancer mortality, including in breast cancer. Thus, improving the survival of cancer patients requires more effective prevention or treatment of metastasis. To achieve this goal, complementary strategies can be envisaged. The first is the eradication of established metastases by adding novel modalities, such as immunotherapy or targeted therapies, to current treatments. The second is to prevent tumor cell dissemination to secondary organs by targeting specific steps governing the metastatic cascade and organ-specific tropism. The third is to block the colonization of secondary organs and subsequent cancer cell growth by impinging on the ability of disseminated cancer cells to adapt to the new microenvironment. To obtain optimal results it might be necessary to combine these strategies. The development of therapeutic approaches aimed at preventing dissemination and organ colonization requires a deeper understanding of the specific genetic events occurring in cancer cells and of the host responses that cooperate to promote metastasis formation. Recent developments in the field have disclosed novel mechanisms of metastasis. In particular, the crosstalk between disseminated cancer cells and the host microenvironment is emerging as a critical determinant of metastasis. The identification of tissue-specific signals involved in metastatic progression will open the way to new therapeutic strategies. Here, we review recent progress in the field, with particular emphasis on the mechanisms of organ-specific dissemination and colonization of breast cancer.

Relevance: 10.00%

Publisher:

Abstract:

Neuronal death occurs naturally in the development of the vertebrate central nervous system, deleting large numbers of neurons at the time when afferent and efferent connections are being formed. It is these connections that regulate the process, by means of anterograde and retrograde survival signals that depend on trophic molecules and electrical activity. Possible roles include the regulation of neuronal numbers (numerical matching) and the elimination of axonal targeting errors.

Relevance: 10.00%

Publisher:

Abstract:

The Universitat Oberta de Catalunya (UOC, Open University of Catalonia) is involved in several research projects and educational activities related to the use of Open Educational Resources (OER). Some of the issues raised by the concept of OER are research questions being tackled in two EC projects (OLCOS and SELF). Besides the research part, the UOC aims at developing a virtual centre for analysing and promoting the concept of OER in Europe in the Higher and Further Education sector. The objectives are to make information and learning services available that provide university management staff, eLearning support centres, faculty and learners with the practical information required to create, share and re-use such interoperable digital content, tools and licensing schemes. In pursuing these objectives, the main activities are the following: to provide organisational and individual e-learning end-users with orientation; to develop perspectives and useful recommendations in the form of a medium-term Roadmap 2010 for OER in Higher and Further Education in Europe; to offer practical information and support services on how to create, share and re-use open educational content by means of tutorials, guidelines, best practices, and specimens of exemplary open e-learning content; to establish a larger group of committed experts throughout Europe and other continents who not only share their expertise but also steer networking, workshops, and clustering efforts; and to foster and support a community of practice in open e-learning content know-how and experiences.

Relevance: 10.00%

Publisher:

Abstract:

The aim of this study is to present a state of the question on wine and amphora production in the Roman period in the area of the ager Tarraconensis and the Ebro lands (possibly corresponding to the territorium of Dertosa), attempting to relate the archaeological evidence for agricultural processing facilities (presses, vats) to the production and distribution of amphorae. An early production of Dressel 1 amphorae in the Late Republican period has been documented, limited as far as we now know to the Alt Camp area, and perhaps also present in the Dertosa area (the Mas d'Aragó kiln). Nevertheless, the most important production belongs to the Early Imperial period. In the Augustan period the workshops of the ager Dertosanus appear, as do those in the eastern part of the ager Tarraconensis (Darró, el Vilarenc) and the la Canaleta workshop (Vila-seca), where amphorae of the Oberaden 74 form were produced. However, it was in the full Julio-Claudian period that the figlinae of the Baix Camp area came into operation, producing amphorae of the Dressel 2-4 and 7-11 forms. This production is related to the wine of Tarraco mentioned in the Latin written sources, and archaeologically it seems to have continued at least until the end of the 2nd century and the beginning of the 3rd.

Relevance: 10.00%

Publisher:

Abstract:

Endosomal and cytosolic nucleic acid receptors are important immune sensors required for the detection of infecting or replicating viruses. The intracellular location of these receptors allows viral recognition and, at the same time, avoids unnecessary immune activation to self-nucleic acids that are continuously released by dying host cells. Recent evidence, however, indicates that endogenous factors such as anti-microbial peptides have the ability to break this protective mechanism. Here, we discuss these factors and illustrate how they drive inflammatory responses by promoting immune recognition of self-nucleic acids in skin wounds and inflammatory skin diseases such as psoriasis and lupus.

Relevance: 10.00%

Publisher:

Abstract:

The aim of this Master's thesis is to create a framework from its two main theories: "external success factors of business" and "regional competitiveness". Both theories contain factors that influence a company's location decision. On the basis of this framework, two study regions are examined: the Lahti region and the KUUMA region. The result of the work is a picture of both study regions and an analysis of the framework. The first part of the thesis reviews the background of research in this field and the problems that previous studies have raised. After that, all the external success factors of business are presented. The theory of regional competitiveness completes the framework. The latter, empirical part of the thesis is based on source material collected from interviews, newspaper articles and seminars concerning the study regions. The results show that the two study regions are different and have their own key clusters and successful industries. The framework was constructed fairly successfully. In the end it proved well suited to extending research in this field, but poorly suited to an individual company's location decision.