992 results for Remote sensor observations
Abstract:
Mode of access: Internet.
Abstract:
Distributed source coding (DSC) has recently been considered as an efficient approach to data compression in wireless sensor networks (WSN). Using this coding method, multiple sensor nodes compress their correlated observations without inter-node communication. Therefore, energy and bandwidth can be efficiently saved. In this paper, we investigate a random-binning-based DSC scheme for remote source estimation in WSN and its performance in terms of estimated signal-to-distortion ratio (SDR). With the introduction of a detailed power consumption model for wireless sensor communications, we quantitatively analyze the overall network energy consumption of the DSC scheme. We further propose a novel energy-aware transmission protocol for the DSC scheme, which flexibly optimizes the DSC performance in terms of either SDR or energy consumption by adapting the source coding and transmission parameters to the network conditions. Simulations validate the energy efficiency of the proposed adaptive transmission protocol. © 2007 IEEE.
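The energy analysis described above rests on a per-bit radio power model. Below is a minimal sketch of such a model using the common first-order radio formulation; the constants and payload sizes are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a first-order radio energy model, of the kind commonly used
# to quantify per-bit transmission cost in WSN energy analyses. All constants
# below are illustrative assumptions, not the values used in the paper.

E_ELEC = 50e-9      # J/bit spent in transmitter/receiver electronics (assumed)
EPS_AMP = 100e-12   # J/bit/m^2 amplifier energy, free-space path-loss model (assumed)

def tx_energy(bits: int, distance_m: float) -> float:
    """Energy to transmit `bits` over `distance_m` (free-space, exponent 2)."""
    return bits * (E_ELEC + EPS_AMP * distance_m ** 2)

def rx_energy(bits: int) -> float:
    """Energy to receive `bits`."""
    return bits * E_ELEC

# Example: compare sending raw vs. DSC-compressed observations to a sink 80 m away.
raw_bits, compressed_bits = 1024, 640          # hypothetical payload sizes
saving = tx_energy(raw_bits, 80) - tx_energy(compressed_bits, 80)
print(f"energy saved per reading: {saving:.2e} J")
```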
Abstract:
This thesis describes two separate projects. The first is a theoretical and experimental investigation of surface acoustic wave streaming in microfluidics. The second is the development of a novel acoustic glucose sensor. A separate abstract is given for each here.

Optimization of acoustic streaming in microfluidic channels by SAWs: Surface acoustic waves (SAWs) actuated on flat piezoelectric substrates constitute a convenient and versatile tool for microfluidic manipulation due to the easy and versatile interfacing with microfluidic droplets and channels. The acoustic streaming effect can be exploited to drive fast streaming and pumping of fluids in microchannels and droplets (Shilton et al. 2014; Schmid et al. 2011), as well as size-dependent sorting of particles in centrifugal flows and vortices (Franke et al. 2009; Rogers et al. 2010). Although the theory describing acoustic streaming by SAWs is well understood, very little attention has been paid to the optimisation of SAW streaming by the correct selection of frequency. In this thesis a finite element simulation of the fluid streaming in a microfluidic chamber due to a SAW beam was constructed and verified against micro-PIV measurements of the fluid flow in a fabricated device. It was found that there is an optimum frequency that generates the fastest streaming, dependent on the height and width of the chamber. It is hoped this will serve as a design tool for those who want to optimally match SAW frequency with a particular microfluidic design.

An acoustic glucose sensor: Diabetes mellitus is a disease characterised by an inability to properly regulate blood glucose levels. In order to keep glucose levels under control, some diabetics require regular injections of insulin. Continuous monitoring of glucose has been demonstrated to improve the management of diabetes (Zick et al. 2007; Heinemann & DeVries 2014); however, there is a low patient uptake of continuous glucose monitoring systems due to the invasive nature of the current technology (Ramchandani et al. 2011). In this thesis a novel way of monitoring glucose levels is proposed which would use ultrasonic waves to 'read' a subcutaneous glucose-sensitive implant, which is only minimally invasive. The implant is an acoustic analogue of a Bragg stack with a 'defect' layer that acts as the sensing layer. A numerical study was performed on how the physical changes in the sensing layer can be deduced by monitoring the reflection amplitude spectrum of ultrasonic waves reflected from the implant. Coupled modes between the skin and the sensing layer were found to be a potential source of error and drift in the measurement. It was found that increasing the number of layers in the stack minimizes this effect. A laboratory proof-of-concept system was developed using a glucose-sensitive hydrogel as the sensing layer. It was possible to monitor the changing thickness and speed of sound of the hydrogel due to physiologically relevant changes in glucose concentration.
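The reflection-spectrum readout of a layered acoustic implant can be illustrated with a standard transmission-line-style impedance recursion for a lossless stack at normal incidence. This is only a sketch of the general technique; the impedances, speeds and thicknesses below are hypothetical and are not taken from the thesis.

```python
import numpy as np

def input_impedance(z_load, z_layer, thickness, freq, speed):
    """Transform a load impedance through one lossless layer (transmission-line analogy)."""
    k = 2 * np.pi * freq / speed                 # wavenumber in the layer
    t = np.tan(k * thickness)
    return z_layer * (z_load + 1j * z_layer * t) / (z_layer + 1j * z_load * t)

def reflection_amplitude(freq, layers, z_incident, z_backing):
    """|r| of a stack; `layers` = list of (impedance, speed, thickness), incident side first."""
    z = z_backing
    for z_layer, speed, thickness in reversed(layers):
        z = input_impedance(z, z_layer, thickness, freq, speed)
    return abs((z - z_incident) / (z + z_incident))

# Hypothetical alternating stack with a thicker "defect" (sensing) layer in the middle.
hard = (3.0e6, 2500.0, 0.25e-3)     # (acoustic impedance [Rayl], speed [m/s], thickness [m])
soft = (1.5e6, 1500.0, 0.25e-3)
sensing = (1.6e6, 1550.0, 0.60e-3)  # sensing layer: thickness/speed vary with glucose
stack = [hard, soft, hard, soft, sensing, soft, hard, soft, hard]

freqs = np.linspace(0.5e6, 5e6, 500)
spectrum = [reflection_amplitude(f, stack, z_incident=1.5e6, z_backing=1.5e6) for f in freqs]
```

Sweeping the sensing-layer thickness or speed and recomputing `spectrum` shows how features of the reflection spectrum shift, which is the readout principle the thesis investigates numerically.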
Abstract:
Executing a cloud or aerosol physical properties retrieval algorithm on controlled synthetic data is an important step in retrieval algorithm development. Synthetic data can help answer questions about the sensitivity and performance of the algorithm or aid in determining how an existing retrieval algorithm may perform with a planned sensor. Synthetic data can also help in solving issues that may have surfaced in the retrieval results. Synthetic data become very important when other validation methods, such as field campaigns, are of limited scope; these tend to be of relatively short duration and are often costly. Ground stations have limited spatial coverage, while synthetic data can cover large spatial and temporal scales and a wide variety of conditions at a low cost. In this work I develop an advanced cloud and aerosol retrieval simulator for the MODIS instrument, known as the Multi-sensor Cloud and Aerosol Retrieval Simulator (MCARS). In close collaboration with the modeling community I have seamlessly combined the GEOS-5 global climate model with the DISORT radiative transfer code, widely used by the remote sensing community, and with observations from the MODIS instrument to create the simulator. With the MCARS simulator it was then possible to solve the long-standing issue with the MODIS aerosol optical depth retrievals, which had a low bias for smoke aerosols: the MODIS aerosol retrieval did not account for the effects of humidity on smoke aerosols. The MCARS simulator also revealed an issue that had not been recognized previously, namely that the value of the fine mode fraction could create a linear dependence between retrieved aerosol optical depth and land surface reflectance. MCARS provided the ability to examine aerosol retrievals against "ground truth" for hundreds of thousands of simultaneous samples over an area covered by only three AERONET ground stations. Findings from MCARS are already being used to improve the performance of operational MODIS aerosol properties retrieval algorithms. The modeling community will use the MCARS data to create new parameterizations for aerosol properties as a function of properties of the atmospheric column and gain the ability to correct any assimilated retrieval data that may display similar dependencies in comparisons with ground measurements.
Abstract:
The aim of this work was to estimate surface albedo using the Thematic Mapper (TM) sensor aboard the LANDSAT 5 satellite and to compare the estimates with data from two agrometeorological stations, one located in a Cerrado region and the other in a sugarcane field. The study area is located in the municipality of Santa Rita do Passa Quatro, SP, Brazil. Six orbital images from the Landsat 5 TM sensor were obtained, at orbit 220 and point 75, for the dates 22/02, 11/04, 29/05, 01/08, 17/08 and 21/11 of 2005, corresponding to Julian days 53, 101, 149, 213, 229 and 325, respectively. Geometric corrections were applied to the images and albedo maps were generated. The SEBAL algorithm satisfactorily estimated surface albedo values over Cerrado and sugarcane areas in the Santa Rita do Passa Quatro, SP region, consistent with field observations of surface albedo.
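For context, a SEBAL-style broadband surface albedo is typically assembled from per-band planetary reflectances. The sketch below uses commonly cited TM band weights, path-radiance albedo and transmissivity formula; treat these coefficients as assumptions rather than the exact values used in this study.

```python
import numpy as np

# Sketch of a SEBAL-style broadband surface albedo from per-band planetary
# reflectance. Band weights, path-radiance albedo and the transmissivity
# formula are commonly cited defaults, assumed here for illustration.

BAND_WEIGHTS = {1: 0.293, 2: 0.274, 3: 0.233, 4: 0.157, 5: 0.033, 7: 0.011}  # TM bands (assumed)
ALBEDO_PATH = 0.03        # portion of alpha_toa due to atmospheric scattering (assumed)

def surface_albedo(reflectance_by_band: dict, elevation_m: float) -> np.ndarray:
    """reflectance_by_band maps TM band number -> planetary reflectance array."""
    alpha_toa = sum(w * reflectance_by_band[b] for b, w in BAND_WEIGHTS.items())
    tau_sw = 0.75 + 2e-5 * elevation_m           # broadband atmospheric transmissivity
    return (alpha_toa - ALBEDO_PATH) / tau_sw ** 2
```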
Abstract:
Understanding the ecological role of benthic microalgae, a highly productive component of coral reef ecosystems, requires information on their spatial distribution. The spatial extent of benthic microalgae on Heron Reef (southern Great Barrier Reef, Australia) was mapped using data from the Landsat 5 Thematic Mapper sensor, integrated with field measurements of sediment chlorophyll concentration and reflectance. Field-measured sediment chlorophyll concentrations, ranging from 23 to 1,153 mg chl a m(-2), were classified into low, medium, and high concentration classes (1-170, 171-290, and > 291 mg chl a m(-2)) using a K-means clustering algorithm. The mapping process assumed that areas in the Thematic Mapper image exhibiting similar reflectance levels in red and blue bands would correspond to areas of similar chlorophyll a levels. Regions of homogeneous reflectance values corresponding to low, medium, and high chlorophyll levels were identified over the reef sediment zone by applying a standard image classification algorithm to the Thematic Mapper image. The resulting distribution map revealed large-scale (> 1 km(2)) patterns in chlorophyll a levels throughout the sediment zone of Heron Reef. Reef-wide estimates of chlorophyll a distribution indicate that benthic microalgae may constitute up to 20% of the total benthic chlorophyll a at Heron Reef, and thus contribute significantly to total primary productivity on the reef.
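The class assignment step described above can be reproduced with a standard K-means routine. The sketch below groups field-measured chlorophyll a values into three concentration classes; the sample values are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Group chlorophyll a concentrations (mg chl a per m^2) into low / medium / high
# classes with K-means. The sample values below are invented for illustration.
chl = np.array([23, 48, 95, 150, 180, 220, 260, 310, 520, 1153], dtype=float).reshape(-1, 1)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(chl)
# Re-order cluster labels so that 0 = low, 1 = medium, 2 = high concentration.
order = np.argsort(km.cluster_centers_.ravel())
labels = np.argsort(order)[km.labels_]
for value, label in zip(chl.ravel(), labels):
    print(f"{value:7.0f} mg chl a m^-2 -> {['low', 'medium', 'high'][label]}")
```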
Abstract:
AIM: This work presents detailed experimental performance results from tests executed in the hospital environment for Health Monitoring for All (HM4All), a remote vital signs monitoring system based on a ZigBee® (ZigBee Alliance, San Ramon, CA) body sensor network (BSN). MATERIALS AND METHODS: Tests involved the use of six electrocardiogram (ECG) sensors operating in two different modes: the ECG mode involved the transmission of ECG waveform data and heart rate (HR) values to the ZigBee coordinator, whereas the HR mode included only the transmission of HR values. In the absence of hidden nodes, a non-beacon-enabled star network composed of sensing devices working in ECG mode kept the delivery ratio (DR) at 100%. RESULTS: When the network topology was changed to a 2-hop tree, the performance degraded slightly, resulting in an average DR of 98.56%. Although these performance outcomes may seem satisfactory, further investigation demonstrated that individual sensing devices went through transitory periods with low DR. Other tests have shown that ZigBee BSNs are highly susceptible to collisions owing to hidden nodes. Nevertheless, these tests have also shown that these networks can achieve high reliability if the amount of traffic is kept low. Contrary to what is typically shown in scientific articles and in manufacturers' documentation, the test outcomes presented in this article include temporal graphs of the DR achieved by each wireless sensor device. CONCLUSIONS: The test procedure and the approach used to represent its outcomes, which allow the identification of undesirable transitory periods of low reliability due to contention between devices, constitute the main contribution of this work.
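The per-device delivery ratio graphs mentioned above amount to counting sent and delivered packets per device over consecutive time windows. A minimal sketch, assuming a simple packet-log format and window length (both are illustrative assumptions):

```python
from collections import defaultdict

WINDOW_S = 10.0  # window length in seconds (assumed)

def delivery_ratio_series(packets, window_s=WINDOW_S):
    """packets: iterable of (device_id, timestamp_s, delivered: bool).

    Returns, per device, the delivery ratio in each time window, which can be
    plotted to reveal transitory periods of low reliability.
    """
    sent = defaultdict(lambda: defaultdict(int))
    recv = defaultdict(lambda: defaultdict(int))
    for device, ts, delivered in packets:
        w = int(ts // window_s)
        sent[device][w] += 1
        recv[device][w] += int(delivered)
    return {dev: {w: recv[dev][w] / sent[dev][w] for w in sorted(sent[dev])}
            for dev in sent}
```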
Abstract:
This study explores a large set of OC and EC measurements in PM(10) and PM(2.5) aerosol samples, undertaken with a long-term constant analytical methodology, to evaluate the capability of the OC/EC minimum ratio to represent the ratio between the OC and EC aerosol components resulting from fossil fuel combustion (OC(ff)/EC(ff)). The data set covers a wide geographical area in Europe, with a particular focus upon Portugal, Spain and the United Kingdom, and includes a great variety of sites: urban (background, kerbside and tunnel), industrial, rural and remote. The highest minimum ratios were found in samples from remote and rural sites. Urban background sites have shown spatially and temporally consistent minimum ratios, of around 1.0 for PM(10) and 0.7 for PM(2.5). The consistency of results has suggested that the method could be used as a tool to derive the ratio between OC and EC from fossil fuel combustion and consequently to differentiate OC from primary and secondary sources. To explore this capability, OC and EC measurements were performed in a busy roadway tunnel in central Lisbon. The OC/EC ratio, which reflected the composition of vehicle combustion emissions, was in the range of 0.3-0.4. Ratios of OC/EC in roadside increment air (roadside minus urban background) in Birmingham, UK, also lie within the range 0.3-0.4. Additional measurements were performed under heavy traffic conditions at two double kerbside sites located in the centres of Lisbon and Madrid. The OC/EC minimum ratios observed at both sites were found to be between those of the tunnel and those of urban background air, suggesting that minimum values commonly obtained for this parameter in open urban atmospheres over-predict the direct emissions of OC(ff) from road transport. Possible reasons for this discrepancy are explored. (C) 2011 Elsevier Ltd. All rights reserved.
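The use of the minimum OC/EC ratio to separate primary fossil-fuel OC from other (largely secondary) OC follows the EC tracer idea and is easy to express in code. The sketch below uses invented concentrations; only the arithmetic is intended to reflect the method.

```python
import numpy as np

# EC tracer sketch: the minimum observed OC/EC ratio is taken to represent
# primary fossil-fuel combustion; OC above EC * (OC/EC)_min is attributed to
# other (mainly secondary) sources. Sample concentrations are invented.
oc = np.array([4.2, 3.1, 6.8, 2.5, 5.0])   # ug C m^-3
ec = np.array([3.0, 2.4, 3.5, 2.3, 2.6])   # ug C m^-3

ratio_min = np.min(oc / ec)                # e.g. ~0.7 for PM2.5 urban background
oc_primary_ff = ec * ratio_min             # OC attributed to fossil-fuel combustion
oc_other = oc - oc_primary_ff              # remaining (largely secondary) OC
print(f"minimum OC/EC ratio: {ratio_min:.2f}")
```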
Abstract:
Most research work on WSNs has focused on protocols or on specific applications. There is a clear lack of easy/ready-to-use WSN technologies and tools for planning, implementing, testing and commissioning WSN systems in an integrated fashion. While there exists a plethora of papers about network planning and deployment methodologies, to the best of our knowledge none of them helps the designer to match coverage requirements with network performance evaluation. In this paper we aim at filling this gap by presenting a unified toolset, i.e., a framework able to provide a global picture of the system, from network deployment planning to system test and validation. This toolset has been designed to back up the EMMON WSN system architecture for large-scale, dense, real-time embedded monitoring. It includes network deployment planning, worst-case analysis and dimensioning, protocol simulation, and automatic remote programming and hardware testing tools. This toolset has been paramount in validating the system architecture through DEMMON1, the first EMMON demonstrator, i.e., a 300+ node test-bed, which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Environmental Management and Systems.
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
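Under the linear mixing model with known endmember signatures, unmixing reduces to a constrained least-squares problem, as noted above. A minimal sketch with invented spectra, enforcing only the non-negativity constraint (the sum-to-one constraint could be added as an extra pseudo-observation):

```python
import numpy as np
from scipy.optimize import nnls

# Linear mixing model: y = M a + noise, with M the endmember signature matrix
# (columns) and a the abundance fractions. Spectra and abundances are invented.
rng = np.random.default_rng(0)
n_bands, n_endmembers = 50, 3
M = rng.uniform(0.1, 0.9, size=(n_bands, n_endmembers))    # endmember signatures
a_true = np.array([0.6, 0.3, 0.1])                          # abundances, sum to 1
y = M @ a_true + 0.005 * rng.standard_normal(n_bands)       # observed pixel spectrum

a_hat, _ = nnls(M, y)                                        # non-negative least squares
print("estimated abundances:", np.round(a_hat, 3))
```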
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
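The Dirichlet-based generative view sketched at the end of the chapter can be illustrated as follows: abundances drawn from a mixture of Dirichlet densities automatically satisfy positivity and full additivity, and observations are their linear mixtures plus noise. The parameters below are illustrative assumptions; the EM inference of the mixing matrix is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_bands, n_endmembers = 1000, 50, 3
M = rng.uniform(0.1, 0.9, size=(n_bands, n_endmembers))     # endmember signatures (invented)

# Two-component mixture of Dirichlet sources over the abundance simplex.
weights = [0.4, 0.6]
alphas = [np.array([8.0, 2.0, 2.0]), np.array([2.0, 2.0, 8.0])]
component = rng.choice(2, size=n_pixels, p=weights)
A = np.vstack([rng.dirichlet(alphas[c]) for c in component])  # abundances, rows sum to 1

Y = A @ M.T + 0.005 * rng.standard_normal((n_pixels, n_bands))  # observed spectra
assert np.allclose(A.sum(axis=1), 1.0)                          # full additivity holds
```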
Abstract:
Smart cities are designed to be living systems that make urban dwellers' lives more comfortable and interactive by keeping them aware of what surrounds them, while leaving a greener footprint. The Future Cities Project [1] aims to create infrastructures for research in smart cities, including a vehicular network, the BusNet, and an environmental sensor platform, the Urban Sense. Vehicles within the BusNet are equipped with On Board Units (OBUs) that offer free Wi-Fi to passengers and devices near the street. The Urban Sense platform is composed of a set of Data Collection Units (DCUs) that include sensors measuring environmental parameters such as air pollution, meteorology and noise. The Urban Sense platform is expanding and is receptive to new sensors. A partnership with companies like TNL was established, and the need to monitor garbage street containers emerged as a means of air pollution prevention: if refuse collection companies know, prior to collection, which route collects the maximum amount of garbage over the shortest path, they can reduce costs and lower pollution levels, leaving behind a greener footprint. This dissertation work arises from the need to monitor garbage street containers and integrate these sensors into an Urban Sense DCU. Due to the remote locations of the garbage street containers, an extension to the vehicular network had to be created. This dissertation work also focuses on the Multi-hop network designed to extend the vehicular network coverage area to the remote garbage street containers. In locations where garbage street containers have access to the vehicular network, through Roadside Units (RSUs) or Access Points (APs), the Multi-hop network serves as a redundant path to send the data collected from DCUs to the Urban Sense cloud database. To plan this highly dynamic network, the Wi-Fi Planner Tool was developed. This tool allowed taking measurements in the field that led to an optimized placement of the Multi-hop network nodes with the use of radio propagation models. This tool also allowed rendering a temperature-map-style overlay for the Google Earth [2] application. For the garbage street container DCU, the partner company provided access to a HUB (a device that communicates with the sensors inside the garbage containers). The Future Cities project uses the Raspberry Pi as the platform for the DCUs. To collect the data from the HUB, an RS485-to-RS232 converter was used at the physical level and the Modbus protocol at the application level. To determine the location and status of the vehicles within the vehicular network, a TCP server was developed. This application was developed for the OBUs, providing the vehicle's Global Positioning System (GPS) location as well as information on whether the vehicle is stopped, moving or idling, and even its slope. To implement the Multi-hop network in the field, scripts such as pingLED and "shark" were developed; these scripts helped with node deployment in the field as well as with performing all the tests on the network. Two setups were implemented in the field: an urban setup for a Multi-hop network coverage survey, and a sub-urban setup to test the Multi-hop network routing protocols, Optimized Link State Routing Protocol (OLSR) and Babel.
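The OBU-side TCP server idea can be sketched as a simple line-oriented service that receives each vehicle's GPS position and motion state. The message format ("id,lat,lon,state") and port are hypothetical illustrations, not the format used in the dissertation.

```python
import socketserver

class ObuHandler(socketserver.StreamRequestHandler):
    """Receives one report per line from a vehicle: 'id,lat,lon,state' (assumed format)."""
    def handle(self):
        for raw in self.rfile:
            vehicle_id, lat, lon, state = raw.decode().strip().split(",")
            print(f"vehicle {vehicle_id}: ({lat}, {lon}) state={state}")

if __name__ == "__main__":
    # Port 5000 is an arbitrary choice for this sketch.
    with socketserver.ThreadingTCPServer(("0.0.0.0", 5000), ObuHandler) as server:
        server.serve_forever()
```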
Abstract:
Health promotion in hospital environments can be improved using the most recent information and communication technologies. Internet connectivity to small sensor nodes carried by patients allows remote access to their bio-signals. To provide these features, healthcare wireless sensor networks (HWSNs) are used. In these networks, mobility support is a key issue in order to keep patients under real-time monitoring even when they move around. To keep sensors connected to the network, they should change their access points of attachment when patients move to a new coverage area along an infirmary. This process, called handover, is responsible for continuous network connectivity to the sensors. This paper presents a detailed performance evaluation study considering three handover mechanisms for healthcare scenarios (Hand4MAC, RSSI-based, and Backbone-based). The study was performed by simulation using several scenarios with different numbers of sensors and different moving velocities of sensor nodes. The results show that Hand4MAC is the best solution for guaranteeing almost continuous connectivity to sensor nodes with less energy consumption.
Abstract:
The Internet of Things (IoT) has emerged as a paradigm over the last few years as a result of the tight integration of the computing and the physical worlds. The requirement of remote sensing makes low-power wireless sensor networks one of the key enabling technologies of IoT. These networks face several challenges, especially in communication and networking, due to their inherent constraints of low-power operation, deployment in harsh and lossy environments, and limited computing and storage resources. The IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) [1] was proposed by the IETF ROLL (Routing Over Low-power and Lossy networks) working group and has been adopted as an IETF standard in RFC 6550 since March 2012. Although RPL largely satisfies the requirements of low-power and lossy sensor networks, several issues remain open for improvement and specification, in particular with respect to Quality of Service (QoS) guarantees and support for mobility. In this paper, we focus mainly on the RPL routing protocol. We propose some enhancements to the standard specification in order to provide QoS guarantees for static as well as mobile LLNs. For this purpose, we propose OF-FL (Objective Function based on Fuzzy Logic), a new objective function that overcomes the limitations of the standardized objective functions designed for RPL by considering important link and node metrics, namely end-to-end delay, number of hops, ETX (expected transmission count) and LQL (link quality level). In addition, we present the design of Co-RPL, an extension to RPL based on the corona mechanism that supports mobility, in order to overcome the problem of slow reactivity to frequent topology changes and thus provide a better quality of service, mainly in dynamic network applications. Performance evaluation results show that both OF-FL and Co-RPL allow a great improvement when compared to the standard specification, mainly in terms of packet loss ratio and average network latency. © 2015 Elsevier B.V. All rights reserved.
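The spirit of an objective function that fuses several link and node metrics into a single parent rank can be sketched as below. This is a crude weighted-normalization stand-in rather than the actual fuzzy-logic rules of OF-FL; the metric bounds and weights are assumptions.

```python
# Combine delay, hop count, ETX and LQL into a single parent-selection score.
# Bounds and weights below are assumptions chosen only for illustration.
BOUNDS = {"delay_ms": 500.0, "hops": 15.0, "etx": 10.0, "lql": 7.0}   # assumed worst cases
WEIGHTS = {"delay_ms": 0.3, "hops": 0.2, "etx": 0.3, "lql": 0.2}

def parent_rank(metrics: dict) -> float:
    """Lower is better; each metric is clipped to [0, 1] before weighting."""
    score = 0.0
    for name, worst in BOUNDS.items():
        normalized = min(max(metrics[name] / worst, 0.0), 1.0)
        score += WEIGHTS[name] * normalized
    return score

candidates = {
    "parent_a": {"delay_ms": 120, "hops": 3, "etx": 1.6, "lql": 2},
    "parent_b": {"delay_ms": 60,  "hops": 5, "etx": 2.4, "lql": 3},
}
best = min(candidates, key=lambda p: parent_rank(candidates[p]))
print("preferred parent:", best)
```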
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.