897 results for Sensor Data Visualization


Relevance:

30.00%

Publisher:

Abstract:

A good preventive bridge maintenance system enhances the ability of engineers to manage and monitor bridge conditions and to take proper action at the right time. Traditionally, infrastructure inspection has been performed through infrequent, periodic visual inspections in the field. Wireless sensor technology provides a cost-effective alternative for continuous infrastructure monitoring. Scientific data-acquisition systems using wireless sensors make reliable structural measurements, even in inaccessible and harsh environments. With advances in sensor technology and the availability of low-cost integrated circuits, wireless sensor networks have come to be regarded as the next-generation technology for structural health monitoring. The main goal of this project was to implement a wireless sensor network for monitoring the behavior and integrity of highway bridges. At the core of the system is a low-cost, low-power wireless strain sensor node whose hardware design is optimized for structural monitoring applications. The key components of the system are the control unit, the sensors, the software, and the communication capability; the extensive information developed for each of these areas was used to design the system. The performance and reliability of the proposed wireless monitoring system were validated on a 34-foot-span composite beam-in-slab bridge in Black Hawk County, Iowa. Microstrain data were successfully extracted from the output-only response collected by the wireless monitoring system. The energy efficiency of the system was investigated to estimate the battery lifetime of the wireless sensor nodes. This report also documents the system design, the data-acquisition method, and the system validation and field testing. Recommendations on the further implementation of wireless sensor networks for long-term monitoring are provided.
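The battery-lifetime estimate mentioned above reduces to a simple average-current budget for a duty-cycled node. A minimal sketch, using hypothetical current draws and duty cycle rather than figures from the report:

```python
# Back-of-the-envelope lifetime estimate for a duty-cycled sensor node.
# All electrical values below are illustrative assumptions, not report data.

BATTERY_MAH = 2600.0    # capacity, e.g. a pair of AA cells
I_ACTIVE_MA = 25.0      # current while sampling and transmitting
I_SLEEP_MA = 0.01       # deep-sleep current
DUTY_CYCLE = 0.01       # fraction of time spent active (1%)

i_avg = DUTY_CYCLE * I_ACTIVE_MA + (1 - DUTY_CYCLE) * I_SLEEP_MA
lifetime_days = BATTERY_MAH / i_avg / 24
print(f"average current: {i_avg:.3f} mA, estimated lifetime: {lifetime_days:.0f} days")
```

With numbers like these the duty cycle is the dominant lever: halving it nearly doubles the estimated lifetime, which is why duty-cycled operation is central to long-term monitoring deployments.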

Relevance:

30.00%

Publisher:

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modeling, and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust, and efficient modeling tools. They can solve classification, regression, and probability density modeling problems in high-dimensional spaces composed of geographical coordinates and additional relevant spatially referenced variables ("geo-features"). They are well suited to implementation as predictive engines in decision support systems, for environmental data mining purposes ranging from pattern recognition to modeling and prediction, as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces.

The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to software implementation: the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF), and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations; it helps detect the presence of spatial patterns describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic: the automatic mapping of geospatial data. The general regression neural network is proposed as an efficient model for this task; its performance is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters, structured as theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland, and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil, and water pollution by radionuclides and heavy metals; classification of soil types and hydrogeological units; decision-oriented mapping with uncertainties; and assessment and susceptibility mapping of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualisation were developed as well, with care taken to provide a user-friendly, easy-to-use interface.
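To make the k-NN approach to ESDA concrete, here is a minimal spatial k-NN regression sketch on synthetic coordinates and values (an illustration of the method, not code from Machine Learning Office):

```python
import numpy as np

def knn_predict(coords, values, query, k=5):
    """Estimate the value at `query` as the mean of its k nearest samples."""
    dists = np.linalg.norm(coords - query, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]                 # indices of the k closest sites
    return values[nearest].mean()

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))         # synthetic monitoring sites
values = np.sin(coords[:, 0] / 20) + 0.1 * rng.standard_normal(200)

print(knn_predict(coords, values, np.array([50.0, 50.0]), k=5))
```

Sweeping k (for example by cross-validation) and evaluating the predictor on a regular grid yields the interpretable maps that make k-NN attractive for exploratory analysis.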

Relevance:

30.00%

Publisher:

Abstract:

In the root-colonizing biocontrol strain CHA0 of Pseudomonas fluorescens, cell density-dependent synthesis of extracellular, plant-beneficial secondary metabolites and enzymes is positively regulated by the GacS/GacA two-component system. Mutational analysis of the GacS sensor kinase using improved single-copy vectors showed that inactivation of each of the three conserved phosphate acceptor sites caused an exoproduct null phenotype (GacS-), whereas deletion of the periplasmic loop domain had no significant effect on the expression of exoproduct genes. Strain CHA0 is known to synthesize a solvent-extractable extracellular signal that advances and enhances the expression of exoproduct genes during the transition from exponential to stationary growth phase when maximal exoproduct formation occurs. Mutational inactivation of either GacS or its cognate response regulator GacA abolished the strain's response to added signal. Deletion of the linker domain of the GacS sensor kinase caused signal-independent, strongly elevated expression of exoproduct genes at low cell densities. In contrast to the wild-type strain CHA0, the gacS linker mutant and a gacS null mutant were unable to protect tomato plants from crown and root rot caused by Fusarium oxysporum f. sp. radicis-lycopersici in a soil-less microcosm, indicating that, at least in this plant-pathogen system, there is no advantage in using a signal-independent biocontrol strain.

Relevance:

30.00%

Publisher:

Abstract:

Visualization is a relatively recent tool available to engineers for enhancing transportation project design through improved communication, decision making, and stakeholder feedback. Current visualization techniques include image composites, video composites, 2D drawings, drive-through or fly-through animations, 3D rendering models, virtual reality, and 4D CAD. These methods are used mainly to communicate within the design and construction team and between the team and external stakeholders. Use of visualization improves understanding of design intent and project concepts and facilitates effective decision making. However, visualization tools are typically used only for presentation in large-scale urban projects. Visualization is not widely accepted, owing to a lack of demonstrated engineering benefits for typical agency projects, such as small- and medium-sized projects, rural projects, and projects where external stakeholder communication is not a major issue. Furthermore, adopting visualization tools is perceived to require a high investment of both financial and human capital. The most advanced visualization technique, virtual reality, has been used only in academic research settings, and 4D CAD has been used on a very limited basis for highly complicated specialty projects. However, a number of less intensive visualization methods are available that may benefit many agency projects. In this paper, we present the results of a feasibility study examining the use of visualization and simulation applications for improving highway planning, design, construction, and safety and mobility.


Relevance:

30.00%

Publisher:

Abstract:

The paper deals with the development and application of a methodology for the automatic mapping of pollution/contamination data. The General Regression Neural Network (GRNN) is considered in detail and is proposed as an efficient tool to solve this problem. The automatic tuning of isotropic and anisotropic GRNN models using a cross-validation procedure is presented. Results are compared with the k-nearest-neighbours interpolation algorithm using an independent validation data set. The quality of the mapping is controlled by variographic analysis of the raw data and the residuals. Maps of the probability of exceeding a given decision level and 'thick' isoline visualization of the uncertainties are presented as examples of decision-oriented mapping. A real case study is based on the mapping of radioactively contaminated territories.
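A minimal sketch of the GRNN estimator with automatic kernel-width tuning by leave-one-out cross-validation, on synthetic data (an isotropic Gaussian kernel is assumed; this is an illustration of the technique, not the paper's implementation):

```python
import numpy as np

def grnn_predict(train_x, train_y, query_x, sigma):
    """GRNN / Nadaraya-Watson estimate: Gaussian-kernel weighted mean."""
    d2 = ((query_x[:, None, :] - train_x[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ train_y) / w.sum(axis=1)

def loo_error(x, y, sigma):
    """Leave-one-out MSE used to auto-tune the kernel width."""
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(w, 0.0)           # exclude each point from its own estimate
    pred = (w @ y) / w.sum(axis=1)
    return ((pred - y) ** 2).mean()

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, (150, 2))       # synthetic sampling locations
y = np.cos(x[:, 0]) + 0.1 * rng.standard_normal(150)

sigmas = np.logspace(-1, 1, 20)
best = min(sigmas, key=lambda s: loo_error(x, y, s))
print("tuned sigma:", best)
print(grnn_predict(x, y, np.array([[5.0, 5.0]]), best))
```

An anisotropic variant replaces the scalar sigma with per-direction kernel widths (or a rotation plus scaling of the coordinates), tuned by the same cross-validation loop.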

Relevance:

30.00%

Publisher:

Abstract:

Abstract: The objective of this work was to evaluate whether a canopy sensor is capable of estimating sugarcane response to N, as well as to propose strategies for handling the data generated by this device during the decision-making process for crop N fertilization. Four N rate-response experiments were carried out, with N rates varying from 0 to 240 kg ha⁻¹. Two evaluations with the canopy sensor were performed when the plants reached average stalk heights of 0.3 and 0.5 m. Only two experiments showed a stalk yield response to N rates. The canopy sensor was able to identify the crop response to different N rates and the relationship between the nutrient and sugarcane yield. The response index values obtained from the canopy sensor readings were useful in assessing sugarcane response to the applied N rate. Canopy reflectance sensors can help to identify areas responsive to N fertilization and, therefore, improve sugarcane fertilizer management.
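A response index of this kind is commonly computed as the ratio of mean sensor readings in fertilized plots to those in unfertilized check plots; the sketch below uses that common definition with made-up readings (the paper's exact formulation may differ):

```python
import numpy as np

# Hypothetical canopy-sensor (NDVI-like) readings per plot.
readings_fertilized = np.array([0.78, 0.81, 0.80, 0.79])  # N-rich plots
readings_zero_n = np.array([0.62, 0.65, 0.61, 0.64])      # zero-N check plots

ri = readings_fertilized.mean() / readings_zero_n.mean()
print(f"response index: {ri:.2f}")  # RI well above 1 suggests the site responds to N
```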

Relevance:

30.00%

Publisher:

Abstract:

In order to compare coronary magnetic resonance angiography (MRA) data obtained with different scanning methodologies, adequate visualization and presentation of the coronary MRA data need to be ensured. Furthermore, an objective quantitative comparison between images acquired with different scanning methods is desirable. To address this need, a software tool ("Soap-Bubble") that facilitates visualization and quantitative comparison of 3D volume targeted coronary MRA data was developed. In the present implementation, the user interactively specifies a curved subvolume (enclosed in the 3D coronary MRA data set) that closely encompasses the coronary arterial segments. With a 3D Delaunay triangulation and a parallel projection, this enables the simultaneous display of multiple coronary segments in one 2D representation. For objective quantitative analysis, frequently explored quantitative parameters such as signal-to-noise ratio (SNR); contrast-to-noise ratio (CNR); and vessel length, sharpness, and diameter can be assessed. The present tool supports visualization and objective, quantitative comparisons of coronary MRA data obtained with different scanning methods. The first results obtained in healthy adults and in patients with coronary artery disease are presented.
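SNR and CNR of the kind quantified by such a tool are typically estimated from region-of-interest (ROI) statistics; a minimal sketch with simulated ROI pixel arrays (not Soap-Bubble's code) follows:

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean signal over the noise standard deviation."""
    return signal_roi.mean() / noise_roi.std(ddof=1)

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissues (e.g., blood vs. myocardium)."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std(ddof=1)

rng = np.random.default_rng(2)
blood = 120 + 5 * rng.standard_normal(500)       # hypothetical coronary lumen ROI
myocardium = 60 + 5 * rng.standard_normal(500)   # hypothetical background tissue ROI
air = 2 * rng.standard_normal(500)               # hypothetical noise-only ROI

print(f"SNR: {snr(blood, air):.1f}  CNR: {cnr(blood, myocardium, air):.1f}")
```

In practice the noise in MR magnitude images is Rician rather than Gaussian, so published comparisons state exactly how and where the noise ROI was measured.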

Relevance:

30.00%

Publisher:

Abstract:

Wireless Sensor Networks (WSNs) are formed by nodes with limited computational and power resources. WSNs are finding an increasing number of applications, both civilian and military, most of which require security for the sensed data being collected by the base station from remote sensor nodes. In addition, when many sensor nodes transmit to the base station, the implosion problem arises. Providing security measures and implosion resistance in a resource-limited environment is a real challenge. This article reviews the aggregation strategies proposed in the literature to handle the bandwidth and security problems related to many-to-one transmission in WSNs. Recent contributions to secure lossless many-to-one communication, developed by the authors in the context of several Spanish-funded projects, are surveyed. Ongoing work on secure lossy many-to-one communication is also sketched.
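One classic way to reconcile in-network aggregation with confidentiality, in the spirit of additively homomorphic stream ciphers such as the Castelluccia-Mykletun-Tsudik construction (an illustration of the general idea, not necessarily one of the schemes surveyed here), is to let each node mask its reading with a keyed pad so that relays can add ciphertexts without decrypting them:

```python
# Toy additively homomorphic masking for in-network aggregation.
# Node i encrypts m_i as (m_i + k_i) mod M; relays sum ciphertexts;
# the base station, knowing all pads k_i, removes sum(k_i) to recover sum(m_i).
# Illustrative only: real schemes derive k_i from a PRF keyed per node and epoch.

import secrets

M = 2**32                      # modulus large enough for the aggregate sum
readings = [17, 23, 5, 42]     # plaintext sensor readings m_i
pads = [secrets.randbelow(M) for _ in readings]   # per-node keys k_i

ciphertexts = [(m + k) % M for m, k in zip(readings, pads)]
aggregate_ct = sum(ciphertexts) % M               # computed hop-by-hop in-network
aggregate = (aggregate_ct - sum(pads)) % M        # base station removes the pads
print(aggregate == sum(readings))                 # True
```

Because each relay forwards a single running sum instead of every child's packet, the same mechanism that protects confidentiality also mitigates the implosion problem.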

Relevance:

30.00%

Publisher:

Abstract:

In this study, the accuracy and performance of a light detection and ranging (LIDAR) sensor were evaluated for vegetation, using distance and reflection measurements to detect and discriminate maize plants and weeds from the soil surface. The study continues previous work carried out in a maize field in Spain with a LIDAR sensor that used a single index, the height profile; the current system combines the two indices. The experiment was carried out in a maize field at growth stage 12-14, at 16 locations selected to represent the widest possible density range of four weeds: Echinochloa crus-galli (L.) P.Beauv., Lamium purpureum L., Galium aparine L., and Veronica persica Poir. A terrestrial LIDAR sensor was mounted on a tripod pointing at the inter-row area, with its horizontal axis and field of view pointing vertically downwards to the ground, scanning a vertical plane with the potential presence of vegetation. Immediately after the LIDAR data acquisition (distance and reflection measurements), the actual heights of the plants were estimated using an appropriate methodology; for that purpose, digital images were taken of each sampled area. The data showed a high correlation between LIDAR-measured heights and actual plant heights (R² = 0.75). Binary logistic regression between weed presence/absence and the sensor readings (LIDAR height and reflection values) was used to validate the accuracy of the sensor, which permitted the discrimination of vegetation from the ground with an accuracy of up to 95%. In addition, a canonical discriminant analysis (CDA) was able to discriminate well between soil and vegetation and, to a far lesser extent, between crop and weeds. The studied methodology emerges as a good system for weed detection which, in combination with other principles such as vision-based technologies, could improve the efficiency and accuracy of herbicide spraying.
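A minimal sketch of the presence/absence classification step, using scikit-learn's logistic regression on the two features, LIDAR height and reflection (synthetic values, not the study's data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 200
# Synthetic features [height_m, reflection]: in this toy setup vegetation
# tends to be taller and more reflective than bare soil.
soil = np.column_stack([abs(rng.normal(0.01, 0.01, n)), rng.normal(0.2, 0.05, n)])
veg = np.column_stack([abs(rng.normal(0.15, 0.05, n)), rng.normal(0.5, 0.05, n)])
X = np.vstack([soil, veg])
y = np.array([0] * n + [1] * n)      # 0 = soil, 1 = vegetation

clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
print("P(vegetation) at h=0.10 m, refl=0.45:",
      clf.predict_proba([[0.10, 0.45]])[0, 1])
```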

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, Wireless Sensor Networks (WSNs) are already a very important source of data about the environment, and are thus key to the creation of Cyber-Physical Systems (CPSs). Given the popularity of P2P middleware as a means to efficiently process information and distribute services, being able to integrate it with WSNs is an interesting proposal. JXTA is a widely used P2P middleware that allows peers to easily exchange information, relying heavily on its main architectural highlight: the capability to organize peers with common interests into peer groups. However, current approaches to integrating WSNs into a JXTA network seldom take advantage of peer groups. For this reason, in this paper we present jxSensor, an integration layer for sensor motes which facilitates the deployment of CPSs under this architecture. This integration takes JXTA's idiosyncrasies into account and proposes novel ideas, such as the Virtual Peer: a group of sensors that acts as a single entity within the peer group context.
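The Virtual Peer idea, several motes presenting themselves to the network as one addressable entity, can be sketched independently of JXTA's actual API. The class below is a conceptual Python illustration only; jxSensor itself targets the JXTA middleware and its peer-group machinery:

```python
# Conceptual sketch of the "Virtual Peer" idea: a set of sensor motes that
# answers queries as a single peer. Purely illustrative, not JXTA/jxSensor code.

class Mote:
    def __init__(self, mote_id):
        self.mote_id = mote_id

    def read_temperature(self):
        # Stand-in for a real radio query to the physical mote.
        return 20.0 + hash(self.mote_id) % 5

class VirtualPeer:
    """Presents a group of motes to the P2P network as one entity."""

    def __init__(self, peer_name, motes):
        self.peer_name = peer_name
        self.motes = motes

    def handle_query(self, metric):
        # The network sees one response; internally all motes are aggregated.
        if metric == "temperature":
            values = [m.read_temperature() for m in self.motes]
            return sum(values) / len(values)
        raise ValueError(f"unsupported metric: {metric}")

vp = VirtualPeer("field-7", [Mote(f"mote-{i}") for i in range(4)])
print(vp.handle_query("temperature"))
```

The design benefit is that peers in the group address one logical sensor service rather than many resource-constrained motes, keeping per-mote traffic low.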

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes an experiment to be performed in both instrumental analysis and experimental physical chemistry courses, in order to open options for developing challenging basic research activities. The experimental procedures and the results obtained in the preparation of lead dioxide electrodeposited onto graphite, and its evaluation as a potentiometric sensor for H3O+ and Pb2+ ions, are presented. The data obtained in acid-base titrations were compared with those of a traditional combination glass electrode under the same conditions. Although a linear sub-Nernstian response to free hydrogen ions was observed for the electrodeposited PbO2 electrode, good agreement was obtained between the two. Working as a lead(II) sensing electrode, the PbO2 showed linear sub-Nernstian behavior at total Pb2+ concentrations ranging from 3.5 × 10⁻⁴ to 3.0 × 10⁻² mol/L in nitrate media. For the redox couple PbO2/Pb(II), the operational slope converges to the theoretical one as the acidity of the working solution increases.
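For reference, the theoretical Nernstian slope against which "sub-Nernstian" is judged is (ln 10)RT/nF per decade of activity; a quick check with standard constants at 25 °C:

```python
import math

R = 8.314       # J mol^-1 K^-1, gas constant
T = 298.15      # K, i.e. 25 degrees C
F = 96485.0     # C mol^-1, Faraday constant

def nernst_slope_mV(n):
    """Theoretical slope of E vs. log10(activity), in mV per decade."""
    return 1000 * math.log(10) * R * T / (n * F)

print(f"n=1 (H3O+): {nernst_slope_mV(1):.1f} mV/decade")   # ~59.2
print(f"n=2 (Pb2+): {nernst_slope_mV(2):.1f} mV/decade")   # ~29.6
```

A measured slope below these values is what the abstract calls a sub-Nernstian response.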

Relevance:

30.00%

Publisher:

Abstract:

Simultaneous localization and mapping (SLAM) is a very important problem in mobile robotics. Many solutions have been proposed during the last two decades; nevertheless, few studies have considered the use of multiple sensors simultaneously. The solution here combines several data sources with the aid of an Extended Kalman Filter (EKF). Two approaches are proposed. The first runs the ordinary EKF SLAM algorithm for each data source separately, in parallel, and then, at the end of each step, fuses the results into one solution. The second uses multiple data sources simultaneously in a single filter. A comparison of the computational complexity of the two methods is also presented: the first method is almost four times faster than the second.
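A minimal sketch of the single-filter approach, sequentially applying one (E)KF measurement update per sensor at each time step (generic linearized measurement model and synthetic numbers, not the paper's implementation):

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One (E)KF measurement update for state x with covariance P.
    z: measurement; h: measurement function; H: its Jacobian at x; R: noise cov."""
    y = z - h(x)                            # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy 2D position state, fused from two sensors with different noise levels.
x, P = np.zeros(2), np.eye(2)
H = np.eye(2)                               # both sensors observe position directly
measurements = [(np.array([1.0, 2.1]), 0.5),   # (reading, noise variance)
                (np.array([0.9, 1.9]), 0.1)]
for z, var in measurements:
    x, P = ekf_update(x, P, z, lambda s: H @ s, H, var * np.eye(2))
print("fused estimate:", x)
```

Because the update cost grows with the shared state (and landmark) dimension, running one filter per source and fusing afterwards can be cheaper, which is consistent with the speed difference reported above.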

Relevance:

30.00%

Publisher:

Abstract:

We present a participant study that compares biological data exploration tasks using volume renderings of laser confocal microscopy data across three environments that vary in level of immersion: a desktop, a fishtank, and a cave system. For the tasks, data, and visualization approach used in our study, we found that subjects qualitatively preferred, and quantitatively performed better in, the cave compared with the fishtank and desktop. Subjects performed real-world biological data analysis tasks that emphasized understanding spatial relationships, including characterizing the general features in a volume, identifying colocated features, and reporting geometric relationships such as whether clusters of cells were coplanar. After analyzing data in each environment, subjects were asked to choose which environment they wanted to use to analyze additional data sets; subjects uniformly selected the cave environment.

Relevance:

30.00%

Publisher:

Abstract:

Hand Gesture Recognition (HGR) is currently an important research field because of the variety of situations in which communicating through signs is necessary, such as communication between people who use sign language and those who do not. This project presents a method for real-time hand gesture recognition using the Microsoft Xbox Kinect sensor, implemented in a Linux (Ubuntu) environment with the Python programming language and the OpenCV computer vision library, processing the data on a conventional laptop. Thanks to the Kinect sensor's ability to capture depth data from a scene, the positions and trajectories of objects can be determined in three dimensions, which makes complete real-time analysis of an image or a sequence of images possible. The proposed recognition procedure is based on segmenting the image so as to work only with the hand, detecting its contours, and then obtaining the convex hull and the convexity defects, which finally serve to determine the number of fingers and conclude the interpretation of the gesture; the final result is the transcription of its meaning in a window that serves as an interface with the interlocutor. The application can recognize the numbers from 0 to 5 (since only one hand is analyzed), some popular gestures, and some of the letters of the fingerspelling alphabet of Catalan Sign Language. The project is thus a gateway to the field of gesture recognition and the basis of a future sign language recognition system capable of transcribing both dynamic signs and the fingerspelling alphabet.
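The contour, convex hull, and convexity defects pipeline described above maps directly onto OpenCV calls; a minimal finger-counting sketch, assuming a binary hand mask is already available (for example from Kinect depth thresholding), follows:

```python
import math
import cv2
import numpy as np

def count_fingers(mask):
    """Estimate the number of raised fingers from a binary hand mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)        # largest blob = the hand
    hull = cv2.convexHull(hand, returnPoints=False)  # hull as indices into `hand`
    if hull is None or len(hull) < 3:
        return 0
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    fingers = 0
    for s, e, f, depth in defects[:, 0]:
        start, end, far = hand[s][0], hand[e][0], hand[f][0]
        a = np.linalg.norm(end - start)
        b = np.linalg.norm(far - start)
        c = np.linalg.norm(far - end)
        cos_angle = np.clip((b**2 + c**2 - a**2) / (2 * b * c + 1e-9), -1.0, 1.0)
        # A deep valley with an acute angle lies between two extended fingers.
        if math.acos(cos_angle) < math.pi / 2 and depth > 10000:  # depth is *256 fixed point
            fingers += 1
    return fingers + 1 if fingers else 0

# Usage: call count_fingers(binary_mask) on each segmented frame.
```

The depth and angle thresholds are illustrative and would be tuned to the camera distance and hand size in a real deployment.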