967 results for Distributed virtual machine
Abstract:
Sensing free spectrum for wireless communications at a given moment is a complex task whose execution is simplified when it is carried out in a distributed fashion by a cognitive radio network. However, there are difficulties and security vulnerabilities that must be taken into account and resolved when authenticating and validating the nodes of the network. This article presents a proposed improvement of the fully distributed decision making protocol for CRN in order to carry out this spectrum-sensing task in an efficient and secure manner.
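The abstract does not detail the article's concrete improvements, so the following is only a minimal sketch of the general idea of fully distributed spectrum sensing, assuming a simple majority-vote fusion of per-node decisions and a whitelist standing in for real node authentication; all names and thresholds are hypothetical.

```python
# Hedged sketch of cooperative spectrum sensing with majority-vote
# decision fusion; the concrete mechanics of the protocol improved in
# the article are not reproduced here, and all names are illustrative.
import random

def local_sensing(noise=0.1):
    """One node's noisy local decision: True if the channel seems busy."""
    energy = random.random()  # stand-in for an energy-detector reading
    return energy > 0.5 + random.uniform(-noise, noise)

def fuse(reports):
    """Majority vote over the local decisions of authenticated nodes."""
    return sum(reports.values()) > len(reports) / 2

# A trusted-node whitelist stands in for the node authentication and
# validation step the article identifies as the problem to solve.
trusted = {"n1", "n2", "n3", "n4", "n5"}
reports = {node: local_sensing() for node in trusted}
print("channel occupied:", fuse(reports))
```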
Abstract:
Since the earliest years after the discovery of X-rays, radiological images have been used successfully to answer medico-legal and forensic questions. The possibility of evaluating the inside of a body without actually opening it has been appreciated and used in forensic pathology ever since. However, the introduction of modern cross-sectional imaging techniques into post-mortem investigations has created controversial discussions in the medico-legal community. Terms like "Virtual Autopsy" and "Necro-Radiology" have led to confusion and controversy concerning the role of radiological techniques in forensic casework. Regardless, the use of these techniques in post-mortem investigations is increasing, especially that of Multi-Detector Computed Tomography (MDCT), but other imaging techniques, such as postmortem angiography and magnetic resonance imaging, are also increasingly applied. This presentation shall give an overview of the different techniques of postmortem imaging. It will critically explain their advantages and limitations in forensic death investigations compared with conventional autopsy. The role of postmortem imaging shall be discussed in order to distinguish between the real state of the art and virtual reality.
Abstract:
This text shows how electronic books are managed at the Virtual Library (hereinafter BV) of the Universitat Oberta de Catalunya (hereinafter UOC). The BV places special emphasis on acquiring digital books to improve users' access to the resources and collections of a university characterized by its virtuality. The document first presents the environment in which electronic books are acquired and used: it describes the different acquisition scenarios the BV may encounter and defines the internal workflows that support their management, as well as the technical processing of the documents. It then presents the different options for accessing and consulting electronic books currently offered by the BV and reports usage analyses of those documents. Finally, it presents the BV's conclusions about the new e-book landscape.
Abstract:
Our work focuses on alleviating the workload of designers of adaptive courses in the complex task of authoring adaptive learning designs adjusted to specific user characteristics and the user context. We propose an adaptation platform consisting of a set of intelligent agents, where each agent carries out an independent adaptation task. The agents apply machine learning techniques to support user modelling for the adaptation process.
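As an illustration of the kind of task a single agent could perform, here is a minimal sketch assuming a decision-tree classifier for user modelling; the features, labels and class names are invented for the example and are not taken from the paper.

```python
# Sketch of one adaptation agent that models users with a decision
# tree; features, labels and the task itself are illustrative only.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical interaction features: [avg. session minutes, quiz score,
# fraction of time on video] -> preferred content style.
X = [[10, 0.4, 0.8], [45, 0.9, 0.1], [30, 0.7, 0.5], [5, 0.3, 0.9]]
y = ["video", "text", "mixed", "video"]

class AdaptationAgent:
    """One independent agent in the platform, owning one adaptation task."""
    def __init__(self):
        self.model = DecisionTreeClassifier(max_depth=3)

    def learn(self, features, labels):
        self.model.fit(features, labels)

    def adapt(self, user_features):
        return self.model.predict([user_features])[0]

agent = AdaptationAgent()
agent.learn(X, y)
print(agent.adapt([20, 0.6, 0.7]))  # e.g. "mixed"
```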
Abstract:
OBJECTIVE: The purpose of this study was to adapt and improve a minimally invasive two-step postmortem angiographic technique for use on human cadavers. Detailed mapping of the entire vascular system is almost impossible with conventional autopsy tools. The technique described should be valuable in the diagnosis of vascular abnormalities. MATERIALS AND METHODS: Postmortem perfusion with an oily liquid is established with a circulation machine. An oily contrast agent is introduced as a bolus injection, and radiographic imaging is performed. In this pilot study, the upper or lower extremities of four human cadavers were perfused. In two cases, the vascular system of a lower extremity was visualized with anterograde perfusion of the arteries. In the other two cases, in which the suspected cause of death was drug intoxication, the veins of an upper extremity were visualized with retrograde perfusion of the venous system. RESULTS: In each case, the vascular system was visualized up to the level of the small supplying and draining vessels. In three of the four cases, vascular abnormalities were found. In one instance, a venous injection mark engendered by the self-administration of drugs was rendered visible by exudation of the contrast agent. In the other two cases, occlusion of the arteries and veins was apparent. CONCLUSION: The method described is readily applicable to human cadavers. After establishment of postmortem perfusion with paraffin oil and injection of the oily contrast agent, the vascular system can be investigated in detail and vascular abnormalities rendered visible.
Abstract:
The goal of the project is to develop a desktop application that allows students to consult offline the latest updates available in the classrooms of the UOC campus. The idea is to automate the process of querying and downloading data locally so that it can later be consulted offline.
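A minimal sketch of the query-and-cache flow such an application might implement; the feed URL and payload format below are placeholders, not the actual UOC campus API.

```python
# Minimal sketch of the download-then-read-offline flow; the endpoint
# URL and JSON payload are hypothetical, not the UOC campus API.
import json, pathlib, urllib.request

CACHE = pathlib.Path("campus_updates.json")
FEED_URL = "https://example.org/campus/updates.json"  # placeholder

def refresh_cache():
    """Fetch the latest updates while online and store them locally."""
    with urllib.request.urlopen(FEED_URL, timeout=10) as resp:
        CACHE.write_bytes(resp.read())

def read_offline():
    """Later, read the cached copy without any network access."""
    return json.loads(CACHE.read_text()) if CACHE.exists() else []

try:
    refresh_cache()
except OSError:
    pass  # offline: fall back to whatever was cached earlier
for item in read_offline():
    print(item)
```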
Abstract:
The project is based on the development of a web application for the fictitious company XsysPC, which has decided to sell over the Internet the products available in its stores, such as computers, computer components, peripherals, and more.
Abstract:
This article presents a proposed improvement of the fully distributed decision making protocol for CRN in order to carry out the task of sensing free spectrum for wireless communications in an efficient and secure manner.
Abstract:
Context: Typing of ovarian tumors (OT) is a competency expected of pathologists, with significant clinical implications. OT, however, come in numerous different types, some rather rare, with the consequence that some departments have few opportunities for practice. Aim: Our aim was to design a tool for pathologists to train in typing less common OT. Method and Results: Representative slides of 20 less common OT were scanned (NanoZoomer Digital, Hamamatsu®), and the diagnostic algorithm proposed by Young and Scully was applied to each case (Young RH and Scully RE, Seminars in Diagnostic Pathology 2001, 18: 161-235), including: recognition of morphological pattern(s); shortlisting of differential diagnoses; proposal of relevant immunohistochemical markers. The next steps of this project will be: evaluation of the tool in several post-graduate training centers in Europe and Québec; improvement of its design based on the evaluation results; diffusion to a larger public. Discussion: In clinical medicine, solving many cases is recognized as being of utmost importance for a novice to become an expert. This project relies on virtual slide technology to provide pathologists with a learning tool aimed at increasing their skills in OT typing. After due evaluation, this model might be extended to other uncommon tumors.
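To make the pattern-to-markers flow concrete, here is a table-driven sketch of the three steps; the differential and marker entries below are illustrative examples only and do not reproduce the published Young and Scully tables.

```python
# Table-driven sketch of the pattern -> differentials -> markers flow
# described for the training tool; all entries are illustrative only.
DIFFERENTIALS = {
    "glandular": ["endometrioid carcinoma",
                  "metastatic colorectal carcinoma"],
}
MARKERS = {
    "endometrioid carcinoma": ["PAX8", "ER"],
    "metastatic colorectal carcinoma": ["CK20", "CDX2"],
}

def work_up(pattern):
    """Shortlist differentials for a pattern, then the marker panel to order."""
    shortlist = DIFFERENTIALS.get(pattern, [])
    panel = sorted({m for dx in shortlist for m in MARKERS.get(dx, [])})
    return shortlist, panel

print(work_up("glandular"))
```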
Abstract:
This project is mainly aimed at gaining good command of some of the tools involved in this area, such as JSP, TOMCAT and Struts 2 for the web side, and MySQL for the registry of materials and transactions.
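As a sketch of the registry side, the following uses Python's sqlite3 as a stand-in for the MySQL database named in the project; the schema is only a guess at what a materials-and-transactions registry might hold.

```python
# Sketch of a materials/transactions registry; sqlite3 stands in for
# the MySQL database named in the project, and the schema is a guess.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE materials (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL,
    stock INTEGER NOT NULL DEFAULT 0
);
CREATE TABLE transactions (
    id          INTEGER PRIMARY KEY,
    material_id INTEGER REFERENCES materials(id),
    quantity    INTEGER NOT NULL,
    ts          TEXT DEFAULT CURRENT_TIMESTAMP
);
""")
con.execute("INSERT INTO materials (name, stock) VALUES ('RAM module', 40)")
con.execute("INSERT INTO transactions (material_id, quantity) VALUES (1, -2)")
for row in con.execute("SELECT name, stock FROM materials"):
    print(row)
```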
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies
The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces composed of geographical space and additional relevant spatially referenced features ("geo-features"). They are well suited to be implemented as predictive engines in decision support systems, for purposes of environmental data mining ranging from pattern recognition to modeling and prediction to automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation. The main algorithms and models considered are: the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF), and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both a traditional geostatistical approach, experimental variography, and machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for geostatistical analysis of anisotropic spatial correlations; it helps to detect the presence of spatial patterns describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a currently hot topic: automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; classification of soil types and hydrogeological units; decision-oriented mapping with uncertainties; and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well, with a user-friendly, easy-to-use interface.
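Since the GRNN is central to the automatic-mapping chapter, a minimal sketch of its core computation (Gaussian-kernel weighted averaging, equivalent to Nadaraya-Watson regression) may help; the implementation and synthetic data below are illustrative and are not the Machine Learning Office code or the SIC 2004 data.

```python
# Minimal GRNN (Nadaraya-Watson kernel regression) for spatial
# interpolation; synthetic 2-D data stand in for a real mapping task.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.1):
    """Gaussian-kernel weighted average of training targets."""
    # Squared Euclidean distances, query points x training points.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))   # kernel weights
    return (w @ y_train) / w.sum(axis=1)  # normalized weighted average

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))                 # 2-D coordinates
y = np.sin(3 * X[:, 0]) + np.cos(3 * X[:, 1])  # synthetic field
grid = rng.uniform(size=(5, 2))
print(grnn_predict(X, y, grid, sigma=0.05))
```

The model's single smoothing parameter, sigma, is typically tuned by cross-validation, which is what makes this family of models attractive for automatic mapping.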
Abstract:
The research aimed to evaluate the effect of machine traffic on soil compaction and the least limiting water range in relation to soybean cultivar yields, over two years, in a Haplustox soil. The six treatments corresponded to the number of passes of a tractor (11 Mg weight) over the same place: T0, no compaction; T1*, 1; T1, 1; T2, 2; T4, 4; and T6, 6 passes. In treatment T1*, compaction was applied when the soil was dry in 2003/2004, and with a 4 Mg tractor in 2004/2005. Soybean yield was evaluated in relation to soil compaction during two agricultural years in a completely randomized design (compaction levels); in the second year, a factorial scheme was used (compaction levels, with and without irrigation), with four replicates represented by 9 m² plots. In the first year, the soybean [Glycine max (L.) Merr.] cultivar IAC Foscarim 31 was grown without irrigation; in the second year, the IAC Foscarim 31 and MG/BR 46 (Conquista) cultivars were grown with and without irrigation. Machine traffic causes compaction and reduces soybean yield at soil penetration resistance between 1.64 and 2.35 MPa and bulk density between 1.50 and 1.53 Mg m⁻³. The soil bulk density at which soybean cultivar yields begin to decrease is lower than the critical one, reached when the least limiting water range becomes zero (LLWR = 0).