994 results for software measurement
Abstract:
This thesis is devoted to the analysis, modeling and visualization of spatially referenced environmental data using machine learning algorithms. Machine learning can be considered, in a broad sense, a subfield of artificial intelligence particularly concerned with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional spaces composed of spatially referenced explanatory variables ("geo-features") in addition to geographical coordinates. Moreover, they are ideally suited to implementation as decision-support tools for environmental questions ranging from pattern recognition to modeling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software tools for the environmental sciences. The main algorithms described are the multilayer perceptron (MultiLayer Perceptron, MLP), the best-known algorithm in artificial intelligence; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organizing maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both with the traditional geostatistical approach of experimental variography and according to machine learning principles. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and allows the detection of spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the application of the k-nearest neighbors method, which is very simple and has excellent interpretation and visualization properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. General regression neural networks are proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, for which the GRNN significantly outperforms all the other methods, particularly in emergency situations. The thesis is composed of four chapters: theory, applications, software tools and guided examples. An important part of the work consists of a software collection, Machine Learning Office. This software collection has been developed over the last 15 years and has been used for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualization were also developed, with care taken to provide a user-friendly and easy-to-use interface.
Machine Learning for geospatial data: algorithms, software tools and case studies
Abstract
The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it mainly concerns the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions for classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for the purposes of environmental data mining, including pattern recognition, modeling and prediction, as well as automatic data mapping. Their efficiency is competitive with that of geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from the theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis.
In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations; it helps to detect the presence of spatial patterns, at least those described by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, namely the automatic mapping of geospatial data. General regression neural networks (GRNN) are proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for carrying out fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, the classification of soil types and hydrogeological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well. The software is user friendly and easy to use.
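The GRNN referred to above is, in essence, Nadaraya-Watson kernel regression over the training locations: a prediction is a kernel-weighted average of the observed values. The following minimal sketch illustrates the idea and is not the Machine Learning Office implementation; the Gaussian kernel, the single isotropic bandwidth sigma and the toy arrays are assumptions made for demonstration only.

```python
import numpy as np

def grnn_predict(train_xy, train_z, query_xy, sigma=1.0):
    """GRNN (Nadaraya-Watson) spatial prediction.

    train_xy : (n, d) training coordinates (optionally with extra geo-features)
    train_z  : (n,) measured values
    query_xy : (m, d) prediction locations
    sigma    : kernel bandwidth (smoothing parameter), isotropic here
    """
    # Squared distances between each query point and each training point
    d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(axis=-1)
    # Gaussian kernel weights
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    # Kernel-weighted average of the training values
    return (w @ train_z) / w.sum(axis=1)

# Hypothetical usage: interpolate three measurements onto two new locations
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
z = np.array([10.0, 12.0, 8.0])
print(grnn_predict(xy, z, np.array([[0.5, 0.5], [1.0, 1.0]]), sigma=0.5))
```

In practice the bandwidth would be tuned, for example by cross-validation, rather than fixed as it is here.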
The Europeanisation of the measurement of diversity in education: a soft instrument of public policy
Abstract:
Faced with a growing number of datasets and rankings, the author examines the roles of the different groups of actors who were originally involved in questioning the use of statistical indicators as a means of addressing issues of access to higher education. The comparison and nature of these international (UNESCO, OECD, EUROSTAT) and national (Germany, England, France, Switzerland) indicators of inequality of access to higher education highlight the tension between the discourses, the indicators they generate, and their recording at the national level. Who says what, and with what consequences? What range of actors is involved in this process? What kinds of power relations shape them? The author discusses how the issue of inequality of access to higher education got onto the agendas of European organisations, identifies the policies that were defined, and sets them against an array of indicators, showing the discrepancy between the discourses and what the indicators reveal, and the gap between the recommendations and the available tools. Why is there such a contrast? What are the mechanisms at work? Is it a technical or a political problem? What does this discrepancy reveal about national specificities in the construction of social inequalities?
Abstract:
OBJECTIVE: To assess the accuracy of a semiautomated 3D volume reconstruction method for organ volume measurement by postmortem MRI. METHODS: This prospective study was approved by the institutional review board and the infants' parents gave their consent. Postmortem MRI was performed in 16 infants (1 month to 1 year of age) at 1.5 T within 48 h of their sudden death. Virtual organ volumes were estimated using the Myrian software. Real volumes were recorded at autopsy by water displacement. The agreement between virtual and real volumes was quantified using the Bland-Altman method. RESULTS: There was good agreement between virtual and real volumes for the brain (mean difference: -0.03% (-13.6 to +7.1)), liver (+8.3% (-9.6 to +26.2)) and lungs (+5.5% (-26.6 to +37.6)). For the kidneys, spleen and thymus, the MRI/autopsy volume ratio was close to 1 (kidney: 0.87±0.1; spleen: 0.99±0.17; thymus: 0.94±0.25), but with poorer agreement. For the heart, the MRI/real volume ratio was 1.29±0.76, possibly due to the presence of residual blood within the heart. The virtual volumes of the adrenal glands were significantly underestimated (p=0.04), possibly due to their very small size during the first year of life. Interobserver and intraobserver variation was 10% or less for all organs except the thymus (15.9% and 12.6%, respectively) and the adrenal glands (69% and 25.9%). CONCLUSIONS: Virtual volumetry may provide significant information concerning the macroscopic features of the main organs and help pathologists in sampling organs that are more likely to yield histological findings.
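The agreement figures quoted in this abstract follow the Bland-Altman approach of reporting a mean difference together with limits of agreement. Below is a minimal, generic sketch of that computation; expressing the differences as percentages of the two-method mean and the example volumes are assumptions, not the study's data or code.

```python
import numpy as np

def bland_altman(virtual, real):
    """Bland-Altman agreement between two volume measurements.

    Returns the mean percentage difference (bias) and the 95% limits of
    agreement (bias +/- 1.96 standard deviations), with differences
    expressed relative to the mean of the two methods.
    """
    virtual, real = np.asarray(virtual, float), np.asarray(real, float)
    diff_pct = 100.0 * (virtual - real) / ((virtual + real) / 2.0)
    bias = diff_pct.mean()
    sd = diff_pct.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical liver volumes (ml): MRI-based estimates vs. water displacement
mri = [820, 760, 905, 640]
autopsy = [790, 720, 880, 600]
print(bland_altman(mri, autopsy))
```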
Abstract:
Because self-reported health status (SRHS) is an ordered response variable, inequality measurement for SRHS data requires a numerical scale for converting individual responses into a summary statistic. The choice of scale is, however, problematic, since small variations in the numerical scale may reverse the ordering of a given pair of distributions of SRHS data under conventional inequality indices such as the variance. This paper introduces a parametric family of inequality indices, founded on an inequality ordering proposed by Allison and Foster (Allison, R.A., Foster, J., 2004. Measuring health inequalities using qualitative data. Journal of Health Economics 23, 505-524), which satisfies a suitable invariance property with respect to the choice of numerical scale. Several key members of the parametric family are also derived, and an empirical application using data from the Swiss Health Survey illustrates the proposed methodology.
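The Allison-Foster ordering referenced here compares spread around the median using only the cumulative distribution over the ordered categories, so it does not depend on the numerical scale attached to the responses. The sketch below is an illustrative check of that ordering under its usual textbook formulation (common median, cumulative shares moving toward the median); the category counts are made up, and this is not the paper's parametric index family.

```python
import numpy as np

def cdf(counts):
    """Cumulative distribution over ordered response categories."""
    p = np.asarray(counts, float) / sum(counts)
    return np.cumsum(p)

def median_category(counts):
    """Index of the first category whose cumulative share reaches 0.5."""
    return int(np.searchsorted(cdf(counts), 0.5))

def af_less_unequal(x_counts, y_counts):
    """Allison-Foster check: X is (weakly) less unequal than Y if both share
    the same median category, X has no more cumulative mass below the median
    (F_X <= F_Y there) and no less cumulative mass from the median upward
    (F_X >= F_Y there), i.e. X's mass sits closer to the median."""
    m = median_category(x_counts)
    if m != median_category(y_counts):
        return False  # the ordering is only defined for a common median
    Fx, Fy = cdf(x_counts), cdf(y_counts)
    return bool(np.all(Fx[:m] <= Fy[:m]) and np.all(Fx[m:] >= Fy[m:]))

# Hypothetical 5-category SRHS counts (very poor ... very good)
concentrated = [5, 10, 70, 10, 5]
spread_out = [20, 15, 30, 15, 20]
print(af_less_unequal(concentrated, spread_out))  # True: mass closer to the median
```

Because the check uses only cumulative proportions over the ordered categories, relabelling the categories with a different numerical scale leaves the comparison unchanged, which is the kind of invariance the paper builds on.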
Abstract:
We will investigate how collaboration networks and free software make it possible to adapt a school to its environment, and how they can help the school strengthen vocational training and ensure the durability of its actions, so that both the knowledge and the collaboration network itself endure for the sake of educational improvement.
Abstract:
Computed Tomography Angiography (CTA) images are the standard for assessing Peripheral artery disease (PAD). This paper presents a Computer Aided Detection (CAD) and Computer Aided Measurement (CAM) system for PAD. The CAD stage detects the arterial network using a 3D region growing method and a fast 3D morphology operation. The CAM stage aims to accurately measure artery diameters along the detected vessel centerline, compensating for the partial volume effect using Expectation Maximization (EM) and a Markov Random Field (MRF). The system has been evaluated on phantom data and applied to fifteen (15) CTA datasets, where the stenosis detection accuracy was 88% and the measurement error was 8%.
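The CAD stage described above relies on 3D region growing from seed voxels inside the contrast-enhanced lumen. A minimal sketch of intensity-threshold region growing on a voxel grid is given below; the 6-connectivity, the fixed Hounsfield-style threshold, and the toy volume are assumptions made for illustration and are not the paper's actual pipeline.

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, threshold):
    """Grow a connected region of voxels with intensity >= threshold,
    starting from a seed voxel and using 6-connectivity."""
    grown = np.zeros(volume.shape, dtype=bool)
    if volume[seed] < threshold:
        return grown                      # seed is not inside the bright structure
    grown[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not grown[nz, ny, nx]
                    and volume[nz, ny, nx] >= threshold):
                grown[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return grown

# Toy example: a bright "vessel" running through a dark background
vol = np.zeros((5, 5, 5))
vol[:, 2, 2] = 300.0                      # enhanced lumen, roughly 300 HU
mask = region_grow_3d(vol, seed=(0, 2, 2), threshold=200.0)
print(mask.sum())                         # 5 voxels recovered along the vessel
```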
Abstract:
Abstract
Abstract:
In the realm of forensic pathology, β-tryptase measurement for diagnostic purposes is performed in postmortem serum obtained from femoral blood. This may be partially or completely unavailable in some specific cases, such as infant autopsies and severely damaged bodies. The aim of this study was to investigate the usefulness of determining β-tryptase levels for diagnostic purposes in alternative biological samples. Urine, vitreous humor and pericardial fluid were selected and measured in 94 subjects, including: fatal anaphylaxis following contrast material administration (6 cases), hypothermia (10 cases), diabetic ketoacidosis (10 cases), gunshot suicide (10 cases), heroin injection-related deaths (18 cases), trauma (10 cases), sudden death with minimal coronary atherosclerosis (10 cases), severe coronary atherosclerosis without myocardial infarction (10 cases) and severe coronary atherosclerosis with myocardial infarction (10 cases). Postmortem serum and pericardial fluid β-tryptase levels higher than the clinical reference value (11.4 ng/ml) were systematically identified in the fatal anaphylaxis cases following contrast material administration, as well as in 6 cases unrelated to anaphylaxis. β-tryptase concentrations in urine and vitreous humor were lower than the clinical reference value in all cases included in this study. Determination of β-tryptase in pericardial fluid therefore appears to be a possible alternative to postmortem serum in the early postmortem period, when femoral blood cannot be collected during autopsy and biochemical investigations are required to document increased β-tryptase levels.
Abstract:
Major liver resection can be used in the treatment of liver cancer. The functional capacity of the liver parenchyma needs to be evaluated preoperatively because it conditions the outcome. We assessed whether the whole-body clearance of glycerol, a substrate essentially metabolized in liver cells, may be suitable as a simple test of liver function. Seven patients after major hepatectomy, six patients after colectomy and 12 healthy subjects were studied. Patients were investigated on the first day after surgery. All participants were studied during a 150-min basal period followed by a 120-min infusion of 16 µmol kg⁻¹ min⁻¹ of ¹³C-labelled glycerol. Whole-body glycerol clearance was calculated from the change in plasma glycerol concentration. Whole-body glucose production was measured with 6,6-²H₂ glucose infused as a tracer in the basal state and during glycerol infusion. In addition, ¹³C-glucose synthesis was monitored to quantify gluconeogenesis from glycerol. Patients after liver resection had higher plasma glycerol concentrations and lower whole-body glycerol clearance than healthy subjects and patients after colectomy. They also had higher plasma glucagon concentrations. Their glucose production was mildly elevated in the fasting state and did not change after glycerol infusion, indicating normal hepatic autoregulation of glucose production. These results indicate that whole-body glycerol clearance can be determined simply from plasma glycerol concentrations during exogenous glycerol infusion. It is significantly reduced in patients after major hepatectomy, suggesting that it constitutes a sensitive test of hepatic function. Its use as a preoperative testing procedure remains to be evaluated.
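Whole-body clearance during a constant substrate infusion is typically computed as the infusion rate divided by the steady-state rise in plasma concentration above baseline. The sketch below illustrates that arithmetic; the numbers and the steady-state assumption are hypothetical and are not taken from the study.

```python
def whole_body_clearance(infusion_rate_umol_kg_min, basal_conc_umol_l, plateau_conc_umol_l):
    """Clearance (ml kg^-1 min^-1) = infusion rate / rise in plasma concentration.

    The infusion rate is in umol kg^-1 min^-1 and concentrations in umol/l;
    the umol/l -> umol/ml conversion (factor 1000) yields ml kg^-1 min^-1.
    """
    delta_conc_umol_ml = (plateau_conc_umol_l - basal_conc_umol_l) / 1000.0
    return infusion_rate_umol_kg_min / delta_conc_umol_ml

# Hypothetical values: a 16 umol kg^-1 min^-1 glycerol infusion raising plasma
# glycerol from 80 to 680 umol/l at steady state
print(whole_body_clearance(16.0, 80.0, 680.0))  # about 26.7 ml kg^-1 min^-1
```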
Abstract:
This work shows, using free technologies and building on open operating systems, how a company devoted to implementing and developing free-software technologies can maintain a high level of productivity. It presents the setup of a development laboratory that allows us to understand the operation and deployment of both GNU/Linux and the software based on it within the company's infrastructure.
Abstract:
Abstract
Abstract:
The objective of this work was to build mock-ups of complete yerba mate plants at several stages of development using the InterpolMate software, and to compute photosynthesis on the interpolated structure. The yerba mate mock-ups were first built in the VPlants software for three growth stages. Male and female plants grown in two contrasting environments (monoculture and forest understory) were considered. To model the dynamic 3D architecture of yerba mate plants during the biennial growth interval between two subsequent prunings, data sets of branch development collected on 38 dates were used. The estimated values obtained from the mock-ups, including leaf photosynthesis and sexual dimorphism, are very close to those observed in the field. However, this similarity was limited to reconstructions that included growth units from the original data sets. The modeling of growth dynamics enables the estimation of photosynthesis for the entire yerba mate plant, which is not easily measurable in the field. The InterpolMate software is efficient for building yerba mate mock-ups.
Abstract:
The aim of this work is to study the influence of several analytical parameters on the variability of Raman spectra of paint samples. In the present study, microtome thin sectioning and direct analysis (no preparation) are considered as sample preparation methods. In order to evaluate their influence on the measurements, a fractional factorial experimental design with seven factors (including the sampling process) is applied, for a total of 32 experiments representing 160 measurements. Once the influence of sample preparation has been highlighted, a depth profile of a paint sample is carried out by changing the focusing plane in order to measure the colored layer under a clearcoat; this is undertaken in order to avoid sample preparation such as microtome sectioning. Finally, chemometric treatments such as principal component analysis are applied to the resulting spectra. The findings of this study indicate the importance of sample preparation, or more specifically of surface roughness, for the variability of measurements on the same sample. Moreover, the depth profile experiment highlights the influence of the refractive index of the upper layer (clearcoat) when measuring through a transparent layer.
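Principal component analysis of the Raman spectra, as mentioned above, amounts to centering the spectra matrix and projecting it onto the leading directions of variance. A minimal sketch is given below; the use of scikit-learn, the number of components, and the synthetic spectra are assumptions made for illustration, not the study's chemometric workflow.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: 32 Raman spectra of a paint sample, 500 wavenumber channels
rng = np.random.default_rng(0)
spectra = rng.normal(size=(32, 500)) + np.linspace(0, 1, 500)  # noise plus a baseline

# Project onto the first two principal components to inspect how, for example,
# sample preparation groups the measurements in the score space
pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)       # (32, 2) score coordinates
print(pca.explained_variance_ratio_)      # variance captured by PC1 and PC2
print(scores[:3])
```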
Abstract:
The goal of this study was to investigate the impact of computing parameters and of the location of volumes of interest (VOIs) on the calculation of the 3D noise power spectrum (NPS), in order to determine an optimal set of computing parameters and propose a robust method for evaluating the noise properties of imaging systems. Noise stationarity in noise volumes acquired with a water phantom on a 128-MDCT and a 320-MDCT scanner was analyzed in the spatial domain in order to define locally stationary VOIs. The influence of the computing parameters on the 3D NPS measurement, namely the sampling distances bx,y,z, the VOI lengths Lx,y,z, the number of VOIs NVOI and the structured noise, was investigated to minimize measurement errors. The effect of VOI location on the NPS was also investigated. Results showed that the noise (standard deviation) varies more in the r-direction (along the phantom radius) than in the z-direction. A 25 × 25 × 40 mm³ VOI associated with DFOV = 200 mm (Lx,y,z = 64, bx,y = 0.391 mm with a 512 × 512 matrix) and a first-order detrending method to reduce structured noise led to an accurate NPS estimation. NPS estimated from small off-centered VOIs had a directional dependency, contrary to NPS obtained from large VOIs located in the center of the volume or from small VOIs located on a concentric circle. This showed that VOI size and location play a major role in the determination of the NPS when images are not stationary. This study emphasizes the need for consistent measurement methods to assess and compare image quality in CT.
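A 3D NPS of the kind discussed above is commonly estimated by detrending each VOI, taking the squared magnitude of its 3D DFT, averaging over VOIs, and normalising by the voxel volume divided by the number of voxels. The sketch below is a generic illustration of that recipe, assuming a simple first-order (planar) least-squares detrend and synthetic white-noise volumes; it is not the study's implementation, and names such as nps_3d are hypothetical.

```python
import numpy as np

def nps_3d(vois, voxel_size):
    """3D noise power spectrum averaged over a set of noise-only VOIs.

    vois       : iterable of 3D arrays of identical shape (Lz, Ly, Lx), in HU
    voxel_size : (bz, by, bx) sampling distances in mm
    Returns the NPS in HU^2 mm^3 on the 3D DFT frequency grid.
    """
    vois = [np.asarray(v, float) for v in vois]
    shape = vois[0].shape
    n_vox = int(np.prod(shape))
    # Design matrix for first-order (planar) detrending: constant, z, y, x
    zz, yy, xx = np.meshgrid(*[np.arange(n) for n in shape], indexing="ij")
    design = np.column_stack([np.ones(n_vox), zz.ravel(), yy.ravel(), xx.ravel()])
    nps = np.zeros(shape)
    for voi in vois:
        coef, *_ = np.linalg.lstsq(design, voi.ravel(), rcond=None)
        detrended = voi - (design @ coef).reshape(shape)   # remove structured noise
        nps += np.abs(np.fft.fftn(detrended)) ** 2
    nps /= len(vois)
    return nps * np.prod(voxel_size) / n_vox

# Hypothetical white-noise volumes with a 20 HU standard deviation
rng = np.random.default_rng(1)
vois = [rng.normal(0, 20, size=(40, 64, 64)) for _ in range(10)]
nps = nps_3d(vois, voxel_size=(1.0, 0.391, 0.391))
# Sanity check: for white noise, integrating the NPS over frequency
# recovers the image variance (about 400 HU^2 here)
print(nps.sum() / (nps.size * np.prod((1.0, 0.391, 0.391))))
```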