969 results for Sensor measurements
Abstract:
The water activity (a(w)) of microbial substrates, biological samples, and foods and drinks is usually determined by direct measurement of the equilibrium relative humidity above a sample. However, these materials can contain ethanol, which disrupts the operation of humidity sensors. Previously, an indirect and problematic technique based on freezing-point depression measurements was needed to calculate the a(w) when ethanol was present. We now describe a rapid and accurate method to determine the a(w) of ethanol-containing samples at ambient temperatures. Disruption of sensor measurements was minimized by using a newly developed, alcohol-resistant humidity sensor fitted with an alcohol filter. Linear equations were derived from a(w) measurements of standard ethanol-water mixtures, and from Norrish's equation, to correct sensor measurements. To our knowledge, this is the first time that electronic sensors have been used to determine the a(w) of ethanol-containing samples.
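A minimal sketch of the kind of linear correction described above: reference a(w) values of standard ethanol-water mixtures (which could be computed from Norrish's equation) are paired with raw readings from the alcohol-resistant sensor, a linear fit is derived, and new readings are corrected with it. All numbers below are illustrative placeholders, not data from the paper.

```python
import numpy as np

# Reference a(w) of standard ethanol-water mixtures (hypothetical values)
aw_reference = np.array([0.995, 0.980, 0.960, 0.940, 0.920])
# Corresponding raw sensor readings above the same standards (hypothetical)
aw_sensor = np.array([0.990, 0.972, 0.949, 0.926, 0.903])

# Least-squares linear correction: aw_corrected = slope * reading + intercept
slope, intercept = np.polyfit(aw_sensor, aw_reference, deg=1)

def correct(aw_raw: float) -> float:
    """Map a raw sensor reading to a corrected water activity."""
    return slope * aw_raw + intercept

print(f"corrected a_w for a raw reading of 0.95: {correct(0.95):.3f}")
```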
Abstract:
Multispectral satellite images, particularly those with high spatial resolution (finer than 30 m on the ground), are an invaluable source of information for decision-making in many fields related to natural resource management, environmental protection, and the planning and management of urban centres. Study scales range from local (resolutions finer than 5 m) to regional (resolutions coarser than 5 m). These images characterize the variation of object reflectance across the spectrum, which is the key information for a large number of applications of these data. However, satellite sensor measurements are also affected by "parasitic" factors related to illumination and viewing conditions, the atmosphere, topography, and sensor properties. Two questions drove this research. What is the best approach for retrieving ground reflectances from the digital numbers recorded by the sensors while accounting for these parasitic factors? Is this retrieval a prerequisite for extracting reliable information from the images, whatever the problem addressed by the various application domains (land mapping, environmental monitoring, landscape change detection, resource inventories, etc.)? Research carried out over the last 30 years has produced a series of techniques for correcting the data for the effects of these parasitic factors, some of which allow ground reflectances to be retrieved. Several questions nevertheless remain open, and others require further work, both to improve the accuracy of the results and to make these techniques more versatile by adapting them to a wider range of data acquisition conditions. Among them: (1) How can atmospheric characteristics (in particular aerosol particles) suited to local and regional conditions be taken into account, rather than relying on default models that describe long-term spatiotemporal trends but fit poorly to instantaneous, spatially restricted observations? (2) How can the "contamination" of the signal coming from the target object by signals coming from surrounding objects (the adjacency effect) be handled? This phenomenon becomes very important for images with resolutions finer than 5 m. (3) What are the effects of off-nadir viewing angles, which are increasingly common since they offer better temporal resolution and the possibility of acquiring stereoscopic image pairs? (4) How can automatic processing and analysis techniques for multispectral images be made more effective over rugged and mountainous terrain, given the multiple effects of topographic relief on the remotely sensed signal? Furthermore, although researchers have repeatedly demonstrated that the information extracted from satellite images can be degraded by all of these parasitic factors, radiometric corrections are still rarely applied on a routine basis, unlike geometric corrections, for which commercial remote sensing software offers versatile, powerful algorithms within the reach of ordinary users.
Radiometric correction algorithms, when they are offered at all, remain inflexible black boxes that usually require expert users. The objectives of this research were: (1) to develop ground-reflectance retrieval software that addresses the questions raised above; this software had to be modular enough to be extended, improved and adapted to various satellite image applications; and (2) to apply this software in different contexts (urban, agricultural, forest) and analyse the results in order to assess the gain in accuracy of the information extracted from satellite images converted to ground reflectance, and hence whether this conversion is necessary regardless of the application. Through this research we therefore built a ground-reflectance retrieval tool (the new version of the REFLECT software). This software is based on the formulation (and routines) of the 6S code (Second Simulation of the Satellite Signal in the Solar Spectrum) and on the dark-target method for estimating the aerosol optical depth (AOD), which is the most difficult factor to correct. Substantial improvements were made to the existing models. They mainly concern the aerosol properties (integration of a more recent model, improved dark-target search for AOD estimation), the treatment of the adjacency effect using a specular reflection model, support for most of the high-resolution multispectral sensors currently in use (Landsat TM and ETM+, all SPOT 1 to 5 HR sensors, EO-1 ALI and ASTER) and very-high-resolution sensors (QuickBird and Ikonos), and the correction of topographic effects using a model that separates the direct and diffuse components of solar radiation and also adapts to forest canopy. Validation work showed that REFLECT retrieves ground reflectance with an accuracy of about ±0.01 reflectance units (for the visible, near-infrared and mid-infrared spectral bands), even over surfaces with variable topography. Simulations of apparent reflectance performed with this software showed how strongly the parasitic factors affecting the digital numbers of the images can alter the useful signal, i.e. the ground reflectance (errors of 10% to more than 50%). REFLECT was also used to assess the importance of using ground reflectances rather than raw digital numbers in common remote sensing applications: classification, change detection, agriculture and forestry. In most applications (change detection with multi-date images, vegetation indices, estimation of biophysical parameters, etc.), image correction is a crucial step for obtaining reliable results.
From a software point of view, REFLECT is organised as a series of easy-to-use menus corresponding to the successive processing steps: entry of the scene parameters, computation of the gaseous transmittances, estimation of the AOD by the dark-target method and, finally, application of the radiometric corrections to the image, notably through a fast option that processes a 5000 by 5000 pixel image in about 15 minutes. This research opens up several avenues for further improvement of the models and methods used in radiometric correction, in particular the integration of the BRDF (bidirectional reflectance distribution function) into the formulation, the treatment of translucent clouds through modelling of non-selective scattering, and the automation of the equivalent-slopes method proposed for topographic corrections.
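As a hedged illustration of the first step of a reflectance-retrieval chain such as REFLECT, the sketch below converts a raw digital number (DN) to top-of-atmosphere (TOA) reflectance using the standard formula; a full 6S-style atmospheric correction would then turn TOA reflectance into ground reflectance. The gain, offset and solar irradiance values are placeholders, not values from the thesis.

```python
import math

def dn_to_toa_reflectance(dn, gain, offset, esun, sun_elevation_deg, d_au=1.0):
    """Radiance = gain*DN + offset, then the standard TOA reflectance formula."""
    radiance = gain * dn + offset                       # W m-2 sr-1 um-1
    sun_zenith = math.radians(90.0 - sun_elevation_deg)
    return math.pi * radiance * d_au**2 / (esun * math.cos(sun_zenith))

# Illustrative values only (hypothetical calibration for a red band)
print(dn_to_toa_reflectance(dn=120, gain=0.9, offset=-2.0,
                            esun=1550.0, sun_elevation_deg=45.0))
```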
Abstract:
The SOS (Sensor Observation Service) protocol is an OGC specification within the Sensor Web Enablement (SWE) initiative that provides standardized access to the observations and data of heterogeneous sensors. Within the gvSIG project, a line of research has been opened around SWE, and two SOS client prototypes currently exist, for gvSIG and gvSIG Mobile. The specification used to describe the measurements provided by sensors is Observations & Measurements (O&M), and the sensor metadata (location, ID, measured phenomena, data processing, etc.) is obtained from the SensorML schema. The following set of operations has been implemented: GetCapabilities for the service description, DescribeSensor to access the sensor metadata, and GetObservation to retrieve the observations. In the desktop gvSIG prototype, data from the different sensor groups ("offerings") can be accessed by adding them to the map as new layers. The procedures or sensors included in an "offering" are presented as features of the layer that can be plotted on the map. The observations (GetObservation) of these sensors can be queried by filtering the data by time interval and by observed phenomenon. The information can be displayed on the map as charts for easier interpretation, with the option of comparing data from different sensors. The prototype for the gvSIG Mobile client follows the same philosophy as the desktop client, with each "offering" becoming a new layer. Sensor observations can be displayed on the screen of the mobile device, and thematic maps can be generated to facilitate the interpretation of the data.
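A hedged sketch of issuing one of the operations mentioned above, GetObservation, over HTTP in the spirit of the gvSIG SOS client. The endpoint URL, offering and observed-property names are hypothetical; the parameter names follow the OGC SOS 1.0.0 key-value-pair binding.

```python
import requests

SOS_ENDPOINT = "http://example.org/sos"  # hypothetical service

params = {
    "service": "SOS",
    "version": "1.0.0",
    "request": "GetObservation",
    "offering": "TEMPERATURE_STATIONS",          # an "offering" = group of sensors
    "observedProperty": "urn:ogc:def:phenomenon:OGC:temperature",
    "eventTime": "2009-01-01T00:00:00Z/2009-01-02T00:00:00Z",  # time filter
    "responseFormat": "text/xml;subtype=\"om/1.0.0\"",         # O&M encoding
}

response = requests.get(SOS_ENDPOINT, params=params, timeout=30)
response.raise_for_status()
print(response.text[:500])  # O&M observations, to be parsed and added as a map layer
```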
Abstract:
Structural Health Monitoring (SHM) denotes a system with the ability to detect and interpret adverse changes in a structure. One of the critical challenges for the practical implementation of an SHM system is the ability to detect damage under changing environmental conditions. This paper aims to characterize the temperature, load and damage effects in the sensor measurements obtained with piezoelectric transducer (PZT) patches. Data sets are collected on thin aluminum specimens under different environmental conditions and artificially induced damage states. A fuzzy clustering algorithm is used to organize the sensor measurements into a set of clusters, which attributes the variation in the sensor data to temperature, load or induced damage.
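A minimal fuzzy c-means sketch (not the authors' exact implementation) of the clustering step described above: feature vectors derived from the PZT measurements are grouped softly so that temperature, load and damage effects can fall into separate clusters. The feature vectors below are random placeholders.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)                    # memberships sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted cluster centres
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        ratio = dist[:, :, None] / dist[:, None, :]      # d_ij / d_ik
        U = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)  # standard FCM update
    return centers, U

# Placeholder "sensor features" (e.g. amplitude and phase changes of PZT signals)
features = np.random.default_rng(1).normal(size=(60, 2))
centers, memberships = fuzzy_c_means(features, n_clusters=3)
print(memberships.sum(axis=1)[:5])   # each row of memberships sums to ~1
```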
Abstract:
The development of gas sensors with innovative designs and advanced functional materials has attracted considerable scientific interest given their potential for addressing important technological challenges. This work presents new insight towards the development of high-performance p-type semiconductor gas sensors. Gas sensor test devices based on copper(II) oxide (CuO) with innovative and unique designs (urchin-like, fiber-like, and nanorods) are prepared by a microwave-assisted synthesis method. The crystalline composition, surface area, porosity, and morphological characteristics are studied by X-ray powder diffraction, nitrogen adsorption isotherms, field-emission scanning electron microscopy and high-resolution transmission electron microscopy. Gas sensor measurements, performed simultaneously on multiple samples, show that morphology can have a substantial influence on gas sensor performance. An assembly of urchin-like structures is found to be the most effective for hydrogen detection in the parts-per-million range at 200 °C, with a response 300-fold larger than the best previously reported values for semiconducting CuO hydrogen gas sensors. These results show that morphology plays an important role in the gas sensing performance of CuO and can be effectively exploited in the further development of gas sensors based on p-type semiconductors. High-performance gas sensors based on CuO hierarchical morphologies, with in situ gas sensor comparison, are reported. Urchin-like morphologies with high hydrogen sensitivity and selectivity, chemical and thermal stability and low-temperature operation are analyzed. The role of morphological influences in p-type gas sensor materials is discussed. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Abstract:
This thesis addresses the problem of localization, and analyzes its crucial aspects, within the context of cooperative WSNs. The three main issues discussed in the following are: network synchronization, position estimation and tracking. Time synchronization is a fundamental requirement for every network. In this context, a new approach based on estimation theory is proposed to evaluate the ultimate performance limit in network time synchronization. In particular, the lower bound on the variance of the average synchronization error in a fully connected network is derived by taking into account the statistical characterization of the Message Delivering Time (MDT). Sensor network localization algorithms estimate the locations of sensors with initially unknown position information by using knowledge of the absolute positions of a few sensors and inter-sensor measurements such as distance and bearing measurements. Concerning this issue, i.e. the position estimation problem, two main contributions are given. The first is a new Semidefinite Programming (SDP) framework to analyze and solve the problem of flip ambiguity that afflicts range-based network localization algorithms with incomplete ranging information. The occurrence of flip-ambiguous nodes and of errors due to flip ambiguity is studied, and this information is then used to build a new SDP formulation of the localization problem. Finally, a flip-ambiguity-robust network localization algorithm is derived and its performance is studied by Monte Carlo simulations. The second contribution in the field of position estimation concerns multihop networks. A multihop network is a network with a low degree of connectivity, in which any given pair of nodes may have to rely on one or more intermediate nodes (hops) in order to communicate. Two new distance-based source localization algorithms, highly robust to the distance overestimates typically present in multihop networks, are presented and studied. The last part of this thesis discusses a new low-complexity tracking algorithm, inspired by Fano's sequential decoding algorithm, for the position tracking of a user in a WLAN-based indoor localization system.
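For readers unfamiliar with range-based localization, here is a hedged sketch of the basic problem class the thesis studies: estimating the position of one unknown node by non-linear least squares on noisy distance measurements to a few anchors with known positions. This is not the SDP flip-ambiguity formulation itself; the anchor coordinates and ranges are made up.

```python
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
rng = np.random.default_rng(0)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.1, 4)  # noisy

def residuals(p):
    """Difference between predicted and measured anchor distances."""
    return np.linalg.norm(anchors - p, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([5.0, 5.0])).x
print(estimate)   # close to (3, 7)
```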
Abstract:
We present a theoretical framework and a case study for reusing the same conceptual and computational methodology for both temporal abstraction and linear (unidimensional) space abstraction, in a domain (evaluation of traffic-control actions) significantly different from the one (clinical medicine) in which the method was originally used. The method, known as knowledge-based temporal abstraction, abstracts high-level concepts and patterns from time-stamped raw data using a formal theory of domain-specific temporal-abstraction knowledge. We applied this method, originally used to interpret time-oriented clinical data, to the domain of traffic control, in which the monitoring task requires linear pattern matching along both space and time. First, we reused the method for creation of unidimensional spatial abstractions over highways, given sensor measurements along each highway measured at the same time point. Second, we reused the method to create temporal abstractions of the traffic behavior, for the same space segments, but during consecutive time points. We defined the corresponding temporal-abstraction and spatial-abstraction domain-specific knowledge. Our results suggest that (1) the knowledge-based temporal-abstraction method is reusable over time and unidimensional space as well as over significantly different domains; (2) the method can be generalized into a knowledge-based linear-abstraction method, which solves tasks requiring abstraction of data along any linear distance measure; and (3) a spatiotemporal-abstraction method can be assembled from two copies of the generalized method and a spatial-decomposition mechanism, and is applicable to tasks requiring abstraction of time-oriented data into meaningful spatiotemporal patterns over a linear, decomposable space, such as traffic over a set of highways.
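An illustrative sketch (not the authors' knowledge-based temporal-abstraction implementation) of the core idea of linear abstraction described above: raw sensor values ordered along one linear dimension are mapped to qualitative states and adjacent intervals with the same state are merged. The speed thresholds are placeholders; the same routine serves either axis, since the indices can be consecutive time points of one sensor or consecutive sensor positions along a highway at one time point.

```python
def to_state(speed_kmh):
    """Map a raw traffic-speed value to a qualitative state (placeholder thresholds)."""
    if speed_kmh < 40:
        return "CONGESTED"
    if speed_kmh < 80:
        return "SLOW"
    return "FREE_FLOW"

def abstract_linear(values):
    """Return (start_index, end_index, state) intervals over a linear axis."""
    intervals = []
    for i, v in enumerate(values):
        state = to_state(v)
        if intervals and intervals[-1][2] == state:
            intervals[-1] = (intervals[-1][0], i, state)   # extend current interval
        else:
            intervals.append((i, i, state))
    return intervals

print(abstract_linear([95, 90, 70, 35, 30, 60, 100]))
```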
Abstract:
In this article we present a study of the effects of external and internal mass transfer limitation of oxygen in a nitrifying system. The oxygen uptake rates (OUR) were measured on both a macro-scale with a respirometric reactor using off-gas analysis (Titrimetric and Off-Gas Analysis (TOGA) sensor) and on a micro-scale with microsensors. These two methods provide independent, accurate measurements of the reaction rates and concentration profiles around and in the granules. The TOGA sensor and microsensor measurements showed a significant external mass transfer effect at low dissolved oxygen (DO) concentrations in the bulk liquid while it was insignificant at higher DO concentrations. The oxygen distribution with anaerobic or anoxic conditions in the center clearly shows major mass transfer limitation in the aggregate interior. The large drop in DO concentration of 22-80% between the bulk liquid and aggregate surface demonstrates that the external mass transfer resistance is also highly important. The maximum OUR even for floccular biomass was only attained at much higher DO concentrations (≈8 mg/L) than typically used in such systems. For granules, the DO required for maximal activity was estimated to be >20 mg/L, clearly indicating the effects of the major external and internal mass transfer limitations on the overall biomass activity. (C) 2004 Wiley Periodicals, Inc.
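A simple illustrative calculation of the quantity reported above as 22-80%: the external mass-transfer limitation expressed as the relative drop in dissolved oxygen between the bulk liquid and the aggregate surface. The DO values below are placeholders, not the paper's data.

```python
def external_do_drop(do_bulk_mg_l, do_surface_mg_l):
    """Relative DO drop across the external boundary layer, in percent."""
    return 100.0 * (do_bulk_mg_l - do_surface_mg_l) / do_bulk_mg_l

# e.g. a bulk DO of 2.0 mg/L and a microsensor reading of 0.9 mg/L at the surface
print(f"external DO drop: {external_do_drop(2.0, 0.9):.0f}%")   # -> 55%
```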
Abstract:
The map representation of an environment should be selected based on its intended application. For example, a geometrically accurate map describing the Euclidean space of an environment is not necessarily the best choice if only a small subset of its features is required. One possible subset is the orientations of the flat surfaces in the environment, represented by a special parameterization of normal vectors called axes. Devoid of positional information, the entries of an axis map form a non-injective relationship with the flat surfaces in the environment, which results in physically distinct flat surfaces being represented by a single axis. This drastically reduces the complexity of the map, but retains important information about the environment that can be used in meaningful applications in both two and three dimensions. This thesis presents axis mapping, which is an algorithm that accurately and automatically estimates an axis map of an environment based on sensor measurements collected by a mobile platform. Furthermore, two major applications of axis maps are developed and implemented. First, the LiDAR compass is a heading estimation algorithm that compares measurements of axes with an axis map of the environment. Pairing the LiDAR compass with simple translation measurements forms the basis for an accurate two-dimensional localization algorithm. It is shown that this algorithm eliminates the growth of heading error in both indoor and outdoor environments, resulting in accurate localization over long distances. Second, in the context of geotechnical engineering, a three-dimensional axis map is called a stereonet, which is used as a tool to examine the strength and stability of a rock face. Axis mapping provides a novel approach to create accurate stereonets safely, rapidly, and inexpensively compared to established methods. The non-injective property of axis maps is leveraged to probabilistically describe the relationships between non-sequential measurements of the rock face. The automatic estimation of stereonets was tested in three separate outdoor environments. It is shown that axis mapping can accurately estimate stereonets while improving safety, requiring significantly less time and effort, and lowering costs compared to traditional and current state-of-the-art approaches.
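A hedged two-dimensional sketch of the core idea behind an axis map (not the thesis' implementation): surface normals that differ by 180 degrees fold onto the same "axis", and a heading offset can be estimated by comparing a measured axis with the corresponding axis stored in the map, in the spirit of the LiDAR compass.

```python
import math

def normal_to_axis(normal_angle_rad):
    """Fold a normal direction into an axis in [0, pi): n and -n are the same axis."""
    return normal_angle_rad % math.pi

def heading_correction(measured_axis, map_axis):
    """Smallest signed rotation aligning the measured axis with the map axis."""
    diff = (map_axis - measured_axis) % math.pi
    return diff - math.pi if diff > math.pi / 2 else diff

# Two physically distinct walls with opposite-facing normals share one axis entry
print(normal_to_axis(math.radians(30)), normal_to_axis(math.radians(210)))
# A wall measured at axis 28 deg against a map axis of 30 deg -> +2 deg heading fix
print(math.degrees(heading_correction(math.radians(28), math.radians(30))))
```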
Abstract:
Most approaches to stereo visual odometry reconstruct the motion based on the tracking of point features along a sequence of images. However, in low-textured scenes it is often difficult to encounter a large set of point features, or it may happen that they are not well distributed over the image, so the behavior of these algorithms deteriorates. This paper proposes a probabilistic approach to stereo visual odometry, based on the combination of both point and line segment features, that works robustly in a wide variety of scenarios. The camera motion is recovered through non-linear minimization of the projection errors of both point and line segment features. In order to effectively combine both types of features, their associated errors are weighted according to their covariance matrices, computed from the propagation of Gaussian errors in the sensor measurements. The method is, of course, computationally more expensive than using only one type of feature, but it can still run in real time on a standard computer and provides interesting advantages, including a straightforward integration into any probabilistic framework commonly employed in mobile robotics.
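A minimal sketch (not the paper's implementation) of the weighting idea described above: point and line-segment reprojection residuals are combined into one cost, with each term weighted by the inverse of its covariance propagated from the sensor noise, so noisier features contribute less. The residuals and covariances below are hypothetical.

```python
import numpy as np

def weighted_cost(residuals, covariances):
    """Sum of squared Mahalanobis norms: sum_i r_i^T * inv(Sigma_i) * r_i."""
    return sum(r @ np.linalg.inv(S) @ r for r, S in zip(residuals, covariances))

# Two hypothetical 2-D reprojection errors: a well-localised point feature and a
# noisier line-segment feature, whose larger covariance down-weights its error.
residuals = [np.array([0.5, -0.3]), np.array([1.2, 0.8])]
covariances = [np.diag([0.25, 0.25]), np.diag([4.0, 4.0])]
print(weighted_cost(residuals, covariances))
# A pose optimiser would minimise this cost over the 6-DoF camera motion.
```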
Abstract:
Characterized not just by high Mach numbers but also by high flow total enthalpies, often accompanied by dissociation and ionization of the flowing gas itself, the experimental simulation of hypersonic flows requires impulse facilities like shock tunnels. However, shock tunnel simulation imposes challenges and restrictions on the flow diagnostics, not just because of the possible extreme flow conditions, but also because of the short run times, typically around 1 ms. The development, calibration and application of fast-response MEMS sensors for surface pressure measurements in the IISc hypersonic shock tunnel HST-2, with a typical test time of 600 μs, for the complex flow field of strong (impinging) shock boundary layer interaction with separation close to the leading edge, is delineated in this paper. For Mach numbers 5.96 (total enthalpy 1.3 MJ kg⁻¹) and 8.67 (total enthalpy 1.6 MJ kg⁻¹), surface pressures ranging from around 200 Pa to 50 000 Pa, in various regions of the flow field, are measured using the MEMS sensors. The measurements are found to compare well with measurements using commercial sensors. It was possible to resolve important regions of the flow field involving significant spatial gradients of pressure, with a resolution of 5 data points within 12 mm in each MEMS array, which cannot be achieved with the other commercial sensors. In particular, the MEMS sensors enabled the measurement of the separation pressure (at Mach 8.67) near the leading edge and of the sharply varying pressure in the reattachment zone.
Abstract:
In this article, the design and development of a Fiber Bragg Grating (FBG) based displacement sensor package for submicron-level displacement measurements are presented. A linear shift of 12.12 nm in the Bragg wavelength of the FBG sensor is obtained for a displacement of 6 mm, corresponding to a calibration factor of 0.495 µm/pm. Field trials have also been conducted by comparing the FBG displacement sensor package against a conventional dial gauge on a five-block masonry prism specimen loaded using the three-point bending technique. The responses from both sensors are in good agreement up to the failure of the masonry prism. Furthermore, from the real-time displacement data recorded using the FBG, it is possible to detect the time at which early cracks are generated inside the body of the specimen, which then propagate to the surface and develop into visible surface cracks; the corresponding load from the load cell can be obtained from the inflection (stress release point) in the displacement curve. Thus the developed FBG displacement sensor package can be used to detect failures in structures much earlier and to provide adequate time to take the necessary action, thereby avoiding a possible disaster.
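A worked check of the calibration stated in the abstract: a 12.12 nm Bragg wavelength shift over 6 mm of displacement gives 0.495 µm of displacement per pm of wavelength shift, so a measured shift converts to displacement linearly. The conversion below simply applies that factor; the example shift values are illustrative.

```python
CAL_UM_PER_PM = 0.495   # calibration factor quoted in the abstract (um per pm)

def displacement_um(bragg_shift_pm):
    """Convert a Bragg wavelength shift (pm) to displacement (um)."""
    return CAL_UM_PER_PM * bragg_shift_pm

print(displacement_um(12_120))   # full-scale shift of 12.12 nm -> ~6000 um (6 mm)
print(displacement_um(1))        # a 1 pm shift -> ~0.5 um (sub-micron range)
```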
Abstract:
Wireless Sensor Networks (WSNs) which utilise IEEE 802.15.4 technology operate primarily in the 2.4 GHz globally compatible ISM band. However, the wireless propagation channel in this crowded band is notoriously variable and unpredictable, and it has a significant impact on the coverage range and quality of the radio links between the wireless nodes. Therefore, the use of Frequency Diversity (FD) has potential to ameliorate this situation. In this paper, the possible benefits of using FD in a tunnel environment have been quantified by performing accurate propagation measurements using modified and calibrated off-the-shelf 802.15.4 based sensor motes in the disused Aldwych underground railway tunnel. The objective of this investigation is to characterise the performance of FD in this confined environment. Cross correlation coefficients are calculated from samples of the received power on a number of frequency channels gathered during the field measurements. The low measured values of the cross correlation coefficients indicate that applying FD at 2.4 GHz will improve link performance in a WSN deployed in a tunnel. This finding closely matches results obtained by running a computational simulation of the tunnel radio propagation using a 2D Finite-Difference Time-Domain (FDTD) method. ©2009 IEEE.
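An illustrative sketch of the analysis described above: the cross-correlation coefficient between received-power samples gathered on two 802.15.4 channels. A coefficient near zero suggests the channels fade independently, so switching channels can recover a degraded link. The RSSI samples below are synthetic placeholders, not the Aldwych tunnel data.

```python
import numpy as np

rng = np.random.default_rng(0)
rssi_ch11 = -60 + 6 * rng.standard_normal(500)      # received power (dBm), channel 11
rssi_ch26 = -62 + 6 * rng.standard_normal(500)      # received power (dBm), channel 26

rho = np.corrcoef(rssi_ch11, rssi_ch26)[0, 1]
print(f"cross-correlation coefficient: {rho:.2f}")   # near 0 -> frequency diversity helps
```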
Abstract:
When it comes to measuring blade-tip clearance or blade-tip timing in turbines, reflective intensity-modulated optical fiber sensors overcome several traditional limitations of capacitive, inductive or discharging-probe sensors. This paper presents the signals and results corresponding to the third stage of a multistage turbine rig, obtained from a transonic wind-tunnel test. The probe is based on a trifurcated bundle of optical fibers that is mounted on the turbine casing. To eliminate the influence of light source intensity variations and blade surface reflectivity, the sensing principle is based on the quotient of the voltages obtained from the two receiving bundle legs. A discrepancy lower than 3% with respect to a commercial sensor was observed in tip clearance measurements. Regarding tip timing measurements, the travelling wave spectrum was obtained, which provides the average vibration amplitude for all blades at a particular nodal diameter. With this approach, both blade-tip timing and tip clearance measurements can be carried out simultaneously. The results obtained on the test turbine rig demonstrate the suitability and reliability of this type of sensor, and suggest the possibility of performing these measurements in real turbines under real working conditions.
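A hedged sketch of the two-leg ratio principle described above: dividing the voltages from the two receiving fibre bundles cancels common factors such as source intensity and blade reflectivity, and the ratio is then mapped to tip clearance through a calibration curve. The calibration points below are fictitious.

```python
import numpy as np

# Hypothetical calibration: measured voltage ratio vs known clearance (mm)
cal_ratio = np.array([0.40, 0.55, 0.70, 0.85, 1.00])
cal_clearance_mm = np.array([2.0, 1.6, 1.2, 0.8, 0.4])

def tip_clearance_mm(v_leg1, v_leg2):
    ratio = v_leg1 / v_leg2                      # insensitive to intensity changes
    return np.interp(ratio, cal_ratio, cal_clearance_mm)

print(tip_clearance_mm(v_leg1=1.3, v_leg2=2.0))  # ratio 0.65 -> ~1.33 mm
```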