948 results for Fault location algorithms
Abstract:
BACKGROUND: Active screening by mobile teams is considered the best method for detecting human African trypanosomiasis (HAT) caused by Trypanosoma brucei gambiense, but the current funding context in many post-conflict countries limits this approach. As an alternative, non-specialist health care workers (HCWs) in peripheral health facilities could be trained to identify potential cases who need testing based on their symptoms. We explored the predictive value of syndromic referral algorithms to identify symptomatic cases of HAT among a treatment-seeking population in Nimule, South Sudan. METHODOLOGY/PRINCIPAL FINDINGS: Symptom data from 462 patients (27 cases) presenting for a HAT test via passive screening over a 7-month period were collected to construct and evaluate over 14,000 four-item syndromic algorithms considered simple enough to be used by peripheral HCWs. For comparison, algorithms developed in other settings were also tested on our data, and a panel of expert HAT clinicians was asked to make referral decisions based on the symptom dataset. The best-performing algorithms consisted of three core symptoms (sleep problems, neurological problems and weight loss), with or without a history of oedema, cervical adenopathy or proximity to livestock. They had a sensitivity of 88.9-92.6%, a negative predictive value of up to 98.8% and a positive predictive value in this context of 8.4-8.7%. In terms of sensitivity, these outperformed more complex algorithms identified in other studies, as well as the expert panel. The best-performing algorithm is predicted to identify about 9/10 treatment-seeking HAT cases, though only 1/10 patients referred would test positive. CONCLUSIONS/SIGNIFICANCE: In the absence of regular active screening, improving referrals of HAT patients through other means is essential. Systematic use of syndromic algorithms by peripheral HCWs has the potential to increase case detection and would increase their participation in HAT programmes.
The algorithms proposed here, though promising, should be validated elsewhere.
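A referral rule of the kind screened above reduces to a set-membership test plus the standard diagnostic statistics. A minimal sketch, using hypothetical symptom names rather than the study's actual case-report items:

```python
# Illustrative four-item syndromic referral rule: refer when the three
# core symptoms are all present (symptom names are assumptions, not the
# study's form). evaluate() returns sensitivity, PPV and NPV.
CORE = {"sleep_problems", "neurological_problems", "weight_loss"}

def refer(symptoms):
    """Refer the patient for a HAT test if all core symptoms are present."""
    return CORE.issubset(symptoms)

def evaluate(patients):
    """Compute (sensitivity, PPV, NPV) over (symptom_set, is_case) pairs."""
    tp = sum(1 for s, c in patients if refer(s) and c)
    fp = sum(1 for s, c in patients if refer(s) and not c)
    fn = sum(1 for s, c in patients if not refer(s) and c)
    tn = sum(1 for s, c in patients if not refer(s) and not c)
    return tp / (tp + fn), tp / (tp + fp), tn / (tn + fn)
```

On the study's data such a rule would be scored against the 27 confirmed cases among 462 patients; here it is shown only on toy inputs.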
Abstract:
The noise power spectrum (NPS) is the reference metric for understanding the noise content in computed tomography (CT) images. To evaluate the noise properties of clinical multidetector CT (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64- and a 128-MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard and bone) and the reconstruction algorithm (filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process. Then 2D and 3D NPSs were computed. In axial acquisition mode, the 2D axial NPS showed an important magnitude variation as a function of the z-direction when measured at the phantom center. In helical mode, a directional dependency with a lobular shape was observed while the magnitude of the NPS was kept constant. Important effects of the reconstruction filter, pitch and reconstruction algorithm were observed on the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak to the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was impacted by the interpolation when compared to the 2D coronal NPS obtained from 3D measurements. The noise properties of volumes measured on last-generation MDCTs were studied using a local 3D NPS metric. However, the impact of the non-stationary noise effect may need further investigation.
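For readers unfamiliar with the metric, a local 2D NPS of the kind described can be estimated from an ensemble of noise-only ROIs extracted from the water phantom. A minimal numpy sketch, not the authors' implementation; the normalization shown is one common convention:

```python
import numpy as np

def local_nps_2d(rois, pixel_size):
    """Estimate a 2D NPS from an ensemble of noise-only ROIs: ensemble
    average of |DFT|^2 of each mean-subtracted ROI, scaled by the pixel
    area divided by the number of pixels."""
    rois = np.asarray(rois, dtype=float)
    n_rois, ny, nx = rois.shape
    spectra = [np.abs(np.fft.fft2(r - r.mean())) ** 2 for r in rois]
    return np.mean(spectra, axis=0) * (pixel_size ** 2) / (nx * ny)
```

With this scaling, the NPS of white noise is flat at (variance x pixel area), and its integral over frequency recovers the noise variance, which is a quick sanity check on any implementation.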
Abstract:
The state of the art for describing image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer, leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit under various acquisition conditions. The NPW model observer usually requires the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already been shown to accurately express the system resolution even with non-linear algorithms, we decided to tune the NPW model observer, replacing the standard MTF by the TTF. The TTF was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition, and mathematical transformations were then performed, leading to the TTF. As expected, the first results showed a dependency of the TTF on the image contrast and noise levels for both ASIR and MBIR. Moreover, FBP also proved to be dependent on the contrast and noise when using the lung kernel. Those results were then introduced into the NPW model observer. We observed an enhancement of the SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
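Once the task function, TTF and NPS are tabulated on a frequency grid, the NPW figure of merit is a discrete sum. An illustrative sketch of that formula (the task function and grid here are assumptions, not the study's phantom-derived values):

```python
import numpy as np

def npw_snr(task, ttf, nps, df):
    """NPW model-observer SNR with the TTF in place of the MTF:
    SNR^2 = [sum W^2 TTF^2 df]^2 / [sum W^2 TTF^2 NPS df],
    where W is the task function sampled at frequency step df."""
    num = (np.sum(task ** 2 * ttf ** 2) * df) ** 2
    den = np.sum(task ** 2 * ttf ** 2 * nps) * df
    return np.sqrt(num / den)
```

One consequence visible in the formula: uniformly halving the NPS magnitude (as ASIR tends to do) raises the SNR by a factor of sqrt(2), all else equal.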
Abstract:
Recently, several anonymization algorithms have appeared for privacy preservation on graphs. Some of them are based on randomization techniques and others on k-anonymity concepts; either can be used to obtain an anonymized graph with a given k-anonymity value. In this paper we compare algorithms based on both techniques for obtaining an anonymized graph with a desired k-anonymity value, analyzing both the complexity of these methods for generating anonymized graphs and the quality of the resulting graphs.
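A common way to make the k-anonymity value of a graph concrete is through its degree sequence (k-degree anonymity); the abstract does not specify which measure the authors use, so the following is only an illustrative sketch of that notion:

```python
from collections import Counter

def k_degree_anonymity(degrees):
    """A graph is k-degree-anonymous when every degree value occurring
    in it is shared by at least k vertices; return the largest such k
    for the given degree sequence."""
    return min(Counter(degrees).values())
```

Randomization-based and k-anonymity-based algorithms can then be compared by checking the value this returns on their outputs against the desired k.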
Abstract:
This thesis is devoted to the analysis, modeling and visualization of spatially referenced environmental data using machine learning algorithms. Machine learning can broadly be considered a subfield of artificial intelligence concerned with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted for application to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, non-linear, robust and efficient for modeling. They can solve classification, regression and probability density modeling problems in high-dimensional spaces composed of spatialized informative variables ("geo-features") in addition to geographical coordinates. Moreover, they are well suited to implementation as decision-support tools for environmental questions ranging from pattern recognition to modeling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data including geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software for the environmental sciences.
The main algorithms described are the MultiLayer Perceptron (MLP), the best-known algorithm in artificial intelligence; General Regression Neural Networks (GRNN); Probabilistic Neural Networks (PNN); Self-Organized Maps (SOM); Gaussian Mixture Models (GMM); Radial Basis Functions Networks (RBF); and Mixture Density Networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory Data Analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of Exploratory Spatial Data Analysis (ESDA) are treated both through the traditional geostatistical approach of experimental variography and according to the principles of machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and detects the presence of spatial patterns describable by a two-point statistic. The machine learning approach to ESDA is presented through the application of the k-nearest neighbors method, which is very simple and has excellent interpretation and visualization properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, on which the GRNN significantly outperformed all other methods, particularly in emergency situations. The thesis consists of four chapters: theory, applications, software tools and guided examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals; the classification of soil types and hydrogeological units; uncertainty mapping for decision support; and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualization were also developed, with care taken to create a user-friendly, easy-to-use interface. Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence that mainly concerns the development of techniques and algorithms allowing computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning?
In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for the purposes of environmental data mining, including pattern recognition, modeling and prediction as well as automatic data mapping. They are competitive in efficiency with geostatistical models in low-dimensional geographical spaces but are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, such as experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to understand the presence of spatial patterns, at least those described by two-point statistics.
A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a topical problem: the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters and has the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed during the last 15 years and have been used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; soil type and hydro-geological unit classification; decision-oriented mapping with uncertainties; and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
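The GRNN behind the automatic-mapping results is essentially a Nadaraya-Watson kernel regression over the training samples. A minimal sketch of its prediction step, assuming a single isotropic bandwidth sigma:

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma):
    """General regression neural network prediction: a Gaussian-kernel
    weighted average of the training targets, with the kernel width
    sigma as the only free parameter."""
    d2 = np.sum((x_train - x_query) ** 2, axis=1)   # squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))            # kernel weights
    return np.sum(w * y_train) / np.sum(w)
```

The single bandwidth is what makes the GRNN attractive for automatic mapping: tuning reduces to a one-dimensional search, e.g. by cross-validation.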
Abstract:
This Phase I report describes a preliminary evaluation of a new compaction monitoring system developed by Caterpillar, Inc. (CAT), for use as a quality control and quality assurance (QC/QA) tool during earthwork construction operations. The CAT compaction monitoring system consists of an instrumented roller with sensors to monitor machine power output in response to changes in soil-machine interaction, fitted with a global positioning system (GPS) to monitor roller location in real time. Three pilot tests were conducted using CAT's compaction monitoring technology. Two of the sites were located in Peoria, Illinois, at the Caterpillar facilities. The third project was an actual earthwork grading project in West Des Moines, Iowa. Typical construction operations for all tests included the following steps: (1) aerate/till the existing soil; (2) moisture-condition the soil with a water truck (if too dry); (3) remix; (4) blade to a level surface; and (5) compact the soil using the CAT CP-533E roller instrumented with the compaction monitoring sensors and display screen. Test strips varied in loose lift thickness, water content, and length. The results of the study show that it is possible to evaluate soil compaction with relatively good accuracy using machine energy as an indicator, with the advantage of 100% coverage and results in real time. Additional field trials are necessary, however, to expand the range of correlations to other soil types, different roller configurations, roller speeds, lift thicknesses, and water contents. Further, with increased use of this technology, new QC/QA guidelines will need to be developed within a statistical analysis framework. Results from Phase I revealed that the CAT compaction monitoring method has a high level of promise for use as a QC/QA tool but that additional testing is necessary in order to prove its validity under a wide range of field conditions.
The Phase II work plan involves establishing a Technical Advisory Committee, developing a better understanding of the algorithms used, performing further testing in a controlled environment, testing on project sites in the Midwest, and developing QC/QA procedures.
Abstract:
In the last five years, Deep Brain Stimulation (DBS) has become the most popular and effective surgical technique for the treatment of Parkinson's disease (PD). The Subthalamic Nucleus (STN) is the usual target when applying DBS. Unfortunately, the STN is in general not visible in common medical imaging modalities, so atlas-based segmentation is commonly used to locate it in the images. In this paper, we propose a scheme that allows both comparing different registration algorithms and evaluating their ability to locate the STN automatically. Using this scheme we can evaluate the expert variability against the error of the algorithms, and we demonstrate that automatic STN location is possible and as accurate as the methods currently used.
Abstract:
This work focuses on the prediction of the two main nitrogenous variables that describe water quality at the effluent of a wastewater treatment plant. We have developed two kinds of neural network architectures, based either on a single output or on the usual five effluent variables that define water quality: suspended solids, biochemical organic matter, chemical organic matter, total nitrogen and total Kjeldahl nitrogen. Two learning techniques, based on a classical adaptive gradient and on a Kalman filter, have been implemented. To improve generalization and performance, we selected variables by means of genetic algorithms and fuzzy systems. The training, testing and validation sets show that the final networks are able to learn the simulated available data well enough, especially for total nitrogen.
Abstract:
Between the cities of Domodossola and Locarno, the complex "Centovalli Line" tectonic zone of the Central Alps records deformation phases over a long period of time (probably starting ~30 Ma ago) and under variable P-T conditions. The last deformation phases developed gouge-bearing faults with a general E-W trend that crosscut the roots of the Alpine Canavese zone and the Finero ultramafic body. Kinematic indicators show that the general motion was mainly dextral, associated with back-thrusting towards the S. The <2 µm clay fractions of fault gouges from the Centovalli Line consist mainly of illite, smectite and chlorite with varied illite-smectite, chlorite-smectite and chlorite-serpentine mixed layers. Constrained by the illite crystallinity index, the thermal conditions induced by the tectonic activity show a gradual trend from anchizonal to diagenetic conditions. The <2 and <0.2 µm clay fractions, and hydrothermal K-feldspar separates, all provide K-Ar ages between 14.2 +/- 2.9 Ma and roughly 0 Ma, with major episodes at about 12, 8 and 6 Ma and close to 0 Ma. These ages set the recurrent tectonic activity and the associated fluid circulations between the Upper Miocene and Recent. On the basis of the K-Ar ages and with a thermal gradient of 25-30 degrees C/km, the studied fault zones were located at a depth of 4-7 km. If they were active until now, as observed in the field, the exhumation was approximately 2.5-3.0 km for the last 12 Ma, with a mean velocity of 0.4 mm/y. Comparison with available models of the recent Alpine evolution shows that the tectonic activity in the area relates to a continuum of the back-thrusting movements of the Canavese Line, and/or to several late-extensional phases of the Rhone-Simplon line. The Centovalli-Val Vigezzo zone therefore represents a major tectonic zone of the Central-Western Alps resulting from different interacting tectonic events.
Abstract:
This report summarizes progress made in Phase 1 of the GIS-based Accident Location and Analysis System (GIS-ALAS) project. The GIS-ALAS project builds on several longstanding efforts by the Iowa Department of Transportation (DOT), law enforcement agencies, Iowa State University, and several other entities to create a locationally-referenced highway accident database for Iowa. Most notable of these efforts is the Iowa DOT’s development of a PC-based accident location and analysis system (PC-ALAS), a system that has been well received by users since it was introduced in 1989. With its pull-down menu structure, PC-ALAS is more portable and user-friendly than its mainframe predecessor. Users can obtain accident statistics for locations during specified time periods. Searches may be refined to identify accidents of specific types or involving drivers with certain characteristics. Output can be viewed on a computer screen, sent to a file, or printed using pre-defined formats.
Abstract:
Numerous XRD measurements of the Scherrer width at half-peak height (001 reflection of illite), coupled with analyses of clay-size assemblages, provide evidence for strong variations in the conditions of low-temperature metamorphism in the Tethyan Himalaya metasediments between the Spiti river and the Tso Morari. Three sectors can be distinguished along the Spiti river-Tso Morari transect. In the SW, the Takling and Parang La area is characterised by metamorphism around anchizone-epizone boundary conditions. Further north, in the Dutung area, the metamorphic grade abruptly decreases to weak diagenesis, with the presence of mixed-layered clay phases. At the end of the profile towards the NE, a progressive metamorphic increase up to greenschist facies is recorded, marked by the appearance of biotite and chloritoid. The combination of these data with the structural observations permits us to propose that a nappe stack has been crosscut by the younger Dutung-Thaktote extensional fault zone (DTFZ). The change in metamorphism across this zone helps to assess the displacements which occurred during synorogenic extension. In the SW and NE parts of the studied transect, a burial of 12 km has been estimated, assuming a geothermal gradient of 25 degrees C/km. In the SW part, this burial is due to the juxtaposition of the Shikar Beh and Mata nappes; in the NE part, it is due solely to burial beneath the Mata nappe. In the central part of the profile, the effect of the DTFZ is to bring down diagenetic sediments in between the two aforesaid metamorphic zones. The offset along the Dutung-Thaktote normal faults is estimated at 16 km.
Abstract:
The Polochic-Motagua fault systems (PMFS) are part of the sinistral transform boundary between the North American and Caribbean plates. To the west, these systems interact with the subduction zone of the Cocos plate, forming a subduction-subduction-transform triple junction. The North American plate moves westward relative to the Caribbean plate. This movement does not affect the geometry of the subducted Cocos plate, which implies that deformation is accommodated entirely in the two overriding plates. Structural data, fault kinematic analysis, and geomorphic observations provide new elements that help to understand the late Cenozoic evolution of this triple junction. In the Miocene, extension and shortening occurred south and north of the Motagua fault, respectively. This strain regime migrated northward to the Polochic fault after the late Miocene. This shift is interpreted as a "pull-up" of North American blocks into the Caribbean realm. To the west, the PMFS interact with a trench-parallel fault zone that links the Tonala fault to the Jalpatagua fault. These faults bound a fore-arc sliver that is shared by the two overriding plates. We propose that the dextral Jalpatagua fault merges with the sinistral PMFS, leaving behind a suturing structure, the Tonala fault. This tectonic "zipper" allows the migration of the triple junction. As a result, the fore-arc sliver comes into contact with the North American plate and helps to maintain a linear subduction zone along the trailing edge of the Caribbean plate. All these processes currently make the triple junction increasingly diffuse as it propagates eastward and inland within both overriding plates.
Abstract:
Inference of Markov random field image segmentation models is usually performed using iterative methods which adapt the well-known expectation-maximization (EM) algorithm for independent mixture models. However, some of these adaptations are ad hoc and may turn out to be numerically unstable. In this paper, we review three EM-like variants for Markov random field segmentation and compare their convergence properties at both the theoretical and practical levels. We specifically advocate a numerical scheme involving asynchronous voxel updating, for which general convergence results can be established. Our experiments on brain tissue classification in magnetic resonance images provide evidence that this algorithm may achieve significantly faster convergence than its competitors while yielding at least as good segmentation results.
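The asynchronous voxel-updating idea can be illustrated on a toy 1-D, two-class problem: each site's posterior is refreshed in place, so later sites immediately see the new values. This is only a simplified sketch of one mean-field-style E-step with a Potts-like spatial prior, not the paper's exact algorithm:

```python
import numpy as np

def async_mean_field_step(y, q, mu, sigma, beta):
    """One asynchronous E-step for a 1-D, two-class MRF segmentation:
    posteriors q[i, k] are updated in place, site by site, so each site
    sees its neighbours' already-refreshed values."""
    n, K = q.shape
    for i in range(n):                                   # asynchronous sweep
        for k in range(K):
            lik = np.exp(-0.5 * ((y[i] - mu[k]) / sigma) ** 2)
            nb = sum(q[j, k] for j in (i - 1, i + 1) if 0 <= j < n)
            q[i, k] = lik * np.exp(beta * nb)            # Potts-style coupling
        q[i] /= q[i].sum()                               # renormalize posterior
    return q
```

A synchronous variant would compute all updates from the old q before overwriting any of them; the in-place sweep above is the scheme for which the paper establishes general convergence results.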
Abstract:
Networks are evolving toward a ubiquitous model in which heterogeneous devices are interconnected. Cryptographic algorithms are required for developing security solutions that protect network activity. However, the computational and energy limitations of network devices jeopardize the actual implementation of such mechanisms. In this paper, we perform a wide analysis of the costs of running symmetric and asymmetric cryptographic algorithms, hash chain functions, elliptic curve cryptography and pairing-based cryptography on personal digital assistants, and compare them with the costs of basic operating system functions. Results show that although cryptographic power costs are high and such operations should be restricted in time, they are not the main limiting factor of the autonomy of a device.
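A measurement of the kind described, for example on hash chain functions, can be as simple as timing repeated digests. An illustrative sketch, not the authors' measurement harness:

```python
import hashlib
import time

def time_hash_chain(length, seed=b"x" * 32):
    """Time a SHA-256 hash chain of the given length and return the
    elapsed wall-clock time together with the final digest. Hash chains
    are one of the primitive families profiled in the paper."""
    start = time.perf_counter()
    h = seed
    for _ in range(length):
        h = hashlib.sha256(h).digest()   # each link hashes the previous one
    return time.perf_counter() - start, h
```

Comparing such timings against the cost of basic operating system calls on the same device gives the kind of relative figures the paper reports.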
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concentrate on decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms used for modeling long-range spatial trends, combined with sequential simulations of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of ML algorithms by analyzing the quality and quantity of the spatially structured information extracted from data with ML algorithms. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study of the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be efficiently used in the decision-making process.
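The MLRSS idea, a trend model plus stochastic simulation of its residuals, can be caricatured in a few lines. In this sketch a linear fit stands in for the MLP/SVR trend model and a simple residual bootstrap stands in for geostatistical sequential simulation, so it illustrates the decomposition only, not the paper's method:

```python
import numpy as np

def mlrss_sketch(x, y, n_sims, rng):
    """Decompose y into a fitted trend plus residuals, then generate
    n_sims stochastic realizations by adding resampled residuals back
    onto the trend (bootstrap stand-in for sequential simulation)."""
    A = np.vstack([x, np.ones_like(x)]).T            # linear trend basis
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    trend = A @ coef
    resid = y - trend                                 # what the simulations model
    sims = trend + rng.choice(resid, size=(n_sims, len(y)), replace=True)
    return trend, sims
```

The spread of the realizations around the trend is what supports the probability and risk mapping described in the abstract; in the real model the residual simulation honors the residuals' spatial correlation (via variography) rather than resampling them independently.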