965 results for sensor techniques


Relevance: 30.00%

Abstract:

A novel third-generation hydrogen peroxide (H2O2) biosensor was developed by immobilizing horseradish peroxidase (HRP) on a biocompatible gold electrode modified with a well-ordered, self-assembled DNA film. Cysteamine was first self-assembled on a gold electrode to provide an interface for the assembly of DNA molecules. DNA was then chemisorbed onto the self-assembled monolayers (SAMs) of cysteamine, forming a network whose structure was controlled through the DNA concentration. The resulting DNA-network film provided a biocompatible microenvironment for enzyme molecules, greatly amplified the coverage of HRP molecules on the electrode surface, and, most importantly, acted as a charge carrier that facilitated electron transfer between HRP and the electrode. Finally, HRP was adsorbed onto the DNA-network film. The construction of the biosensor was monitored by atomic force microscopy (AFM). Voltammetric and time-based amperometric techniques were employed to characterize the properties of the resulting biosensor. The enzyme electrode achieved 95% of the steady-state current within 2 s and had a detection limit of 0.5 μmol l⁻¹ for H2O2. Furthermore, the biosensor showed high sensitivity, good reproducibility, and excellent long-term stability.
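
As a side note on the reported figures (not from the paper itself): if the amperometric step response is approximated as first-order, the 2 s rise to 95% of steady-state current implies an effective electrode time constant of about 0.67 s. A minimal Python sketch under that first-order assumption:

```python
import numpy as np

# Hedged illustration, not from the paper: model the amperometric step
# response as first-order, i(t) = i_ss * (1 - exp(-t / tau)). Reaching
# 95% of steady state at t = 2 s then implies tau = 2 / ln(20) s.
t95 = 2.0                      # reported time to 95% of steady state, s
tau = t95 / np.log(20)         # solve 1 - exp(-t95 / tau) = 0.95
print(f"implied time constant: tau = {tau:.2f} s")

t = np.linspace(0.0, 5.0, 501)
i_norm = 1.0 - np.exp(-t / tau)          # normalised current i(t)/i_ss
print(f"fraction of steady state at t = 1 s: {i_norm[100]:.1%}")
```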

Relevance: 30.00%

Abstract:

As the commoditization of sensing, actuation and communication hardware increases, so does the potential for dynamically tasked sense-and-respond networked systems (i.e., Sensor Networks or SNs) to replace existing disjoint and inflexible special-purpose deployments (closed-circuit security video, anti-theft sensors, etc.). While various solutions have emerged for many individual SN-centric challenges (e.g., power management, communication protocols, role assignment), perhaps the largest remaining obstacle to widespread SN deployment is that those who wish to deploy, utilize, and maintain a programmable Sensor Network lack the programming and systems expertise to do so. The contributions of this thesis center on the design, development and deployment of the SN Workbench (snBench). snBench embodies an accessible, modular programming platform coupled with a flexible and extensible run-time system that, together, support the entire life-cycle of distributed sensory services. As it is impossible to find a one-size-fits-all programming interface, this work advocates the use of tiered layers of abstraction that enable a variety of high-level, domain-specific languages to be compiled to a common (thin-waist) tasking language; this common tasking language is statically verified and can be subsequently re-translated, if needed, for execution on a wide variety of hardware platforms. snBench provides: (1) a common sensory tasking language (Instruction Set Architecture) powerful enough to express complex SN services, yet simple enough to be executed by highly constrained resources with soft, real-time constraints; (2) a prototype high-level language (and corresponding compiler) to illustrate the utility of the common tasking language and the tiered programming approach in this domain; (3) an execution environment and a run-time support infrastructure that abstract a collection of heterogeneous resources into a single virtual Sensor Network, tasked via this common tasking language; and (4) novel formal methods (i.e., static analysis techniques) that verify safety properties and infer implicit resource constraints to facilitate resource allocation for new services. This thesis presents these components in detail, as well as two specific case studies: the use of snBench to integrate physical and wireless network security, and the use of snBench as the foundation for semester-long student projects in a graduate-level Software Engineering course.
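
The abstract does not reproduce snBench's actual tasking language; the sketch below is purely hypothetical (the opcodes, costs and budget check are all invented) and only illustrates the thin-waist idea of compiling services to a small instruction set that can be statically checked against node resources:

```python
from dataclasses import dataclass

# Hypothetical illustration (names and opcodes invented, not snBench's
# actual ISA): a thin-waist tasking language as a small instruction set,
# plus a static check that a task fits a node's resource budget.

@dataclass
class Instr:
    op: str          # e.g. "SENSE", "FILTER", "SEND"
    cost: int        # statically known per-invocation cost (abstract units)

def verify_budget(task: list[Instr], node_budget: int) -> bool:
    """Static-analysis stand-in: reject tasks whose worst-case per-cycle
    cost exceeds the node's budget, before any code is deployed."""
    return sum(i.cost for i in task) <= node_budget

task = [Instr("SENSE", 2), Instr("FILTER", 3), Instr("SEND", 5)]
print(verify_budget(task, node_budget=8))   # False: task is rejected
```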

Relevance: 30.00%

Abstract:

Error-correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are used ubiquitously in communication, data storage, and other fields. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword. However, in the late 1950s researchers proposed a relaxed error-correction model for potentially large error rates, known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from both an algorithmic and an architectural standpoint. The codes in consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding. The proposed architecture is proven to outperform Kötter's decoder for high-rate codes. The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, from which the dimension and a bound on the minimum distance are computed. The algebraic structure of the polynomials evaluating to the subfield is used to simplify the list-decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but a simple encoding scheme (comparable to RS codes) for multihop wireless sensor network (WSN) applications.
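
To make the evaluation view of RS codes concrete, here is a minimal sketch over a small prime field (the parameters are illustrative; the thesis's field sizes, shortening and hardware mapping are not reproduced): the k message symbols are taken as coefficients of a polynomial, and the codeword is that polynomial's evaluation at n distinct points.

```python
# Minimal sketch of evaluation-based Reed-Solomon encoding over the prime
# field GF(p). Any k symbols of the codeword determine the degree-(k-1)
# message polynomial, which is what interpolation-based decoders exploit.
p = 257          # illustrative prime field GF(257)
k, n = 3, 7      # message length and code length, with n <= p

def rs_encode(msg: list[int]) -> list[int]:
    """Evaluate the message polynomial at the points 0..n-1 (mod p)."""
    assert len(msg) == k
    return [sum(c * pow(x, j, p) for j, c in enumerate(msg)) % p
            for x in range(n)]

codeword = rs_encode([42, 7, 1])
print(codeword)   # a [7, 3] RS codeword over GF(257)
```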

Relevance: 30.00%

Abstract:

My original contribution to knowledge is the creation of a WSN system that further improves the functionality of existing technology whilst achieving improved power consumption and reliability. This thesis concerns the development of industrially applicable wireless sensor networks that are low-power, reliable and latency-aware. This work aims to improve upon the state of the art in networking protocols for low-rate, multi-hop wireless sensor networks. Presented is an application-driven co-design approach to the development of such a system. Starting with the physical layer, hardware was designed to meet industry-specified requirements. The end system required further investigation of communication protocols that could achieve the derived application-level system performance specifications. A CSMA/TDMA hybrid MAC protocol was developed, leveraging numerous techniques from the literature together with novel optimisations. It extends the current art with respect to power consumption in radio duty-cycled applications and to reliability in dense wireless sensor networks, whilst respecting latency bounds. Specifically, it provides 100% packet delivery for 11 concurrent senders transmitting towards a single radio duty-cycled sink node. This represents an order-of-magnitude improvement over the comparable art, considering MAC-only mechanisms. A novel latency-aware routing protocol was developed to exploit the developed hardware and MAC protocol. It is based on a new weighted objective function with multiple fail-safe mechanisms to ensure extremely high reliability and robustness. The system was empirically evaluated on two hardware platforms: the application-specific custom 868 MHz node and the de facto community-standard TelosB. Extensive empirical comparative performance analyses were conducted against the relevant art to demonstrate the advances made. The resultant system is capable of exceeding 10-year battery life and exhibits reliability performance in excess of 99.9%.
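
The abstract does not give the weighted objective function itself; the following hypothetical sketch (the weights, fields and fallback rule are assumptions) only illustrates the general shape of latency-bounded, cost-minimising parent selection:

```python
# Hypothetical sketch of a weighted routing objective (the thesis's actual
# function and weights are not given in the abstract): each candidate
# parent is scored by a weighted sum of link quality and expected latency,
# and the node routes through the lowest-cost parent meeting its bound.
W_ETX, W_LAT = 0.6, 0.4        # illustrative weights

def parent_cost(etx: float, latency_ms: float) -> float:
    return W_ETX * etx + W_LAT * latency_ms

def select_parent(candidates: list[dict], latency_bound_ms: float):
    feasible = [c for c in candidates if c["latency_ms"] <= latency_bound_ms]
    return min(feasible, key=lambda c: parent_cost(c["etx"], c["latency_ms"]),
               default=None)   # None would trigger a fail-safe fallback

parents = [{"id": 1, "etx": 1.2, "latency_ms": 40},
           {"id": 2, "etx": 2.5, "latency_ms": 15}]
print(select_parent(parents, latency_bound_ms=30))   # picks node 2
```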

Relevance: 30.00%

Abstract:

BACKGROUND: The bioluminescence technique was used to quantify the local glucose concentration in the tissue surrounding subcutaneously implanted polyurethane material and glucose sensors. In addition, some implants were coated with a single layer of adipose-derived stromal cells (ASCs), because these cells improve the wound-healing response around biomaterials. METHODS: Control and ASC-coated implants were implanted subcutaneously in rats for 1 or 8 weeks (polyurethane) or for 1 week only (glucose sensors). Tissue biopsies adjacent to the implant were immediately frozen at the time of explant. Cryosections were assayed for the glucose concentration profile using the bioluminescence technique. RESULTS: For the polyurethane samples, no significant differences in glucose concentration within 100 μm of the implant surface were found between bare and ASC-coated implants at 1 or 8 weeks. A glucose concentration gradient was demonstrated around the glucose sensors. For all sensors, the minimum glucose concentration, approximately 4 mM, was found at the implant surface; the concentration increased with distance from the sensor surface until it peaked at approximately 7 mM at 100 μm, and then decreased to 5.5-6.5 mM beyond 100 μm from the surface. CONCLUSIONS: ASC attachment to polyurethane and to glucose sensors did not change the glucose profiles in the tissue surrounding the implants. Although most glucose sensors incorporate a diffusion barrier to reduce the gradient of glucose and oxygen in the tissue, it is typically assumed that there is no steep glucose gradient around the sensors. However, a glucose gradient was observed around the sensors. A more complete understanding of glucose transport and concentration gradients around sensors is therefore critical.

Relevance: 30.00%

Abstract:

An optical window model for the rodent dorsum was used to perform chronic, quantitative intravital microscopy and laser Doppler flowmetry of microvascular networks adjacent to functional and non-functional glucose sensors. The one-sided configuration afforded direct, real-time observation of the tissue response to bare (unmodified, smooth-surface) sensors and to sensors coated with porous poly-L-lactic acid (PLLA). Microvessel length density and red blood cell flux (blood perfusion) within 1 mm of the sensors were measured bi-weekly over 2 weeks. When non-functional sensors were fully implanted beneath the windows, the porous-coated sensors had two-fold more vasculature and significantly higher blood perfusion than bare sensors on Day 14. When functional sensors were implanted percutaneously, as in clinical use, no differences in baseline current, neovascularization, or tissue perfusion were observed between bare and porous-coated sensors. However, percutaneously implanted bare sensors had two-fold more vascularity than fully implanted bare sensors by Day 14, indicating that other factors, such as micromotion, might be stimulating angiogenesis. Despite increased angiogenesis adjacent to percutaneous sensors, modest sensor current attenuation occurred over 14 days, suggesting that factors other than angiogenesis may play a dominant role in determining sensor function.

Relevance: 30.00%

Abstract:

Satellite remote sensing of ocean colour is the only method currently available for synoptically measuring wide-area properties of ocean ecosystems, such as phytoplankton chlorophyll biomass. Recently, a variety of bio-optical and ecological methods have been established that use satellite data to identify and differentiate between either phytoplankton functional types (PFTs) or phytoplankton size classes (PSCs). In this study, several of these techniques were evaluated against in situ observations to determine their ability to detect dominant phytoplankton size classes (micro-, nano- and picoplankton). The techniques were applied to a 10-year ocean-colour data series from the SeaWiFS satellite sensor and compared with in situ data (6504 samples) from a variety of locations in the global ocean. Results show that spectral-response, ecological and abundance-based approaches can all perform with similar accuracy. Detection of microplankton and picoplankton was generally better than detection of nanoplankton. Abundance-based approaches were shown to provide better spatial retrieval of PSCs. Individual model performance varied according to PSC, input satellite data sources and in situ validation data types. Uncertainty in the comparison procedure and data sources was considered. Improved availability of in situ observations would aid ongoing research in this field.
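
As an illustration of the abundance-based family of approaches, here is a sketch in the spirit of a three-component model that partitions total chlorophyll into size-class fractions; the parameter values are illustrative placeholders, not coefficients fitted in this study:

```python
import math

# Sketch of an abundance-based PSC model in the spirit of the
# three-component approach: the pico and pico+nano contributions saturate
# with total chlorophyll, and microplankton takes up the remainder.
# Asymptote/slope values below are illustrative, not fitted coefficients.
C_PN_MAX, S_PN = 1.06, 0.85   # pico + nano asymptote (mg m^-3) and slope
C_P_MAX,  S_P  = 0.11, 6.80   # pico-only asymptote and slope

def psc_fractions(chl: float) -> dict:
    """Partition total chlorophyll into micro/nano/pico fractions."""
    c_pn   = C_PN_MAX * (1 - math.exp(-S_PN * chl))  # pico + nano
    c_pico = C_P_MAX  * (1 - math.exp(-S_P  * chl))  # pico only
    c_nano = c_pn - c_pico
    c_micro = chl - c_pn
    return {k: v / chl for k, v in
            {"micro": c_micro, "nano": c_nano, "pico": c_pico}.items()}

print(psc_fractions(0.2))   # oligotrophic-like: pico/nano dominate
print(psc_fractions(2.0))   # eutrophic-like: micro dominates
```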

Relevance: 30.00%

Abstract:

Ocean Virtual Laboratory is an ESA-funded project to prototype the concept of a single point of access for all satellite remote-sensing data, together with ancillary model output and in situ measurements, for a given region. The idea is to provide the non-specialist with easy access to both data and state-of-the-art processing techniques, and to enable their easy analysis and display. The project, led by OceanDataLab, is being trialled in the region of the Agulhas Current, as it contains signals of strong contrast (due to very energetic upper-ocean dynamics) and special SAR data acquisitions have been recorded there. The project also encourages the uptake of Earth Observation data by developing training material to help those not in large scientific or governmental organizations make the best use of the available data. The website for access is: http://ovl-project.oceandatalab.com/

Relevance: 30.00%

Abstract:

Thermocouples are among the most popular devices for temperature measurement due to their robustness, ease of manufacture and installation, and low cost. However, when used in the harsh environments found in combustion systems and automotive engine exhausts, large wire diameters are required, and consequently the measurement bandwidth is reduced. This paper describes two new algorithmic compensation techniques, based on blind deconvolution, that address this loss of high-frequency signal components using the measurements from two thermocouples. In particular, a continuous-time approach is proposed, combined with cross-relation blind deconvolution for parameter estimation. A feature of this approach is that no a priori assumption is made about the time-constant ratio of the two thermocouples. The advantages of the method, including its small estimation variance, as well as its limitations, are highlighted using results from simulation and test-rig studies.
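
A minimal sketch of the cross-relation idea for two first-order thermocouple models (the paper's actual estimator and test data are not reproduced; the signal, noise levels and discretisation below are assumptions): eliminating the unknown gas temperature between the two sensor equations yields a relation that is linear in both time constants, so neither their ratio nor their individual values need be assumed in advance.

```python
import numpy as np

# Hedged sketch with synthetic data: two first-order thermocouples obey
#   tau_i * dm_i/dt + m_i = T(t).
# Eliminating the unknown gas temperature T gives the cross-relation
#   tau1 * dm1 - tau2 * dm2 = m2 - m1,
# which is linear in (tau1, tau2): no time-constant ratio is assumed.
rng = np.random.default_rng(0)
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
T = 300.0 + 20.0 * np.sin(2 * np.pi * 3 * t)    # "true" gas temperature

def first_order(T, tau):
    m = np.empty_like(T)
    m[0] = T[0]
    for i in range(1, len(T)):                  # forward-Euler sensor model
        m[i] = m[i-1] + dt * (T[i-1] - m[i-1]) / tau
    return m

m1 = first_order(T, 0.040) + rng.normal(0, 0.01, t.size)   # fast probe
m2 = first_order(T, 0.120) + rng.normal(0, 0.01, t.size)   # slow probe

dm1, dm2 = np.gradient(m1, dt), np.gradient(m2, dt)
A = np.column_stack([dm1, -dm2])                # least-squares cross-relation
tau1, tau2 = np.linalg.lstsq(A, m2 - m1, rcond=None)[0]
print(f"estimated tau1 = {tau1*1e3:.0f} ms, tau2 = {tau2*1e3:.0f} ms")

T_hat = tau1 * dm1 + m1                         # bandwidth-compensated output
print(f"reconstruction RMS error: {np.sqrt(np.mean((T_hat - T)**2)):.2f} K")
```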

Relevance: 30.00%

Abstract:

A distributed optical fiber sensor based on Brillouin scattering (BOTDR or BOTDA) can measure and monitor the strain and temperature generated along an optical fiber. Because it can measure in real time with high precision and stability, it is well suited to health monitoring of large-scale civil infrastructure. However, the main challenge in applying it to structural health monitoring is to ensure that the sensor is robust and can be repaired, by adopting a suitable embedding method. In this paper, a novel method based on air-blowing and vacuum-grouting techniques for embedding long-distance optical fiber sensors was developed. This method did not interfere with normal concrete construction during installation, and it allowed the long-distance embedded optical fiber sensor (LEOFS) to be easily replaced. Two stages of static loading tests were applied to investigate the performance of the LEOFS. The precision and repeatability of the LEOFS were studied through an overloading test. The durability and stability of the LEOFS were confirmed by a corrosion test. The strains from the LEOFS were used to evaluate the reinforcing effect of carbon-fiber-reinforced polymer and thereby the health state of the beams.
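
For orientation, a sketch of how a Brillouin measurement maps to strain: the Brillouin frequency shift varies linearly with both strain and temperature. This is a hedged illustration with typical textbook coefficients for standard single-mode fiber, not the calibration used for the LEOFS in this study.

```python
# Sketch of the Brillouin shift-to-strain conversion. Coefficients are
# typical values for standard single-mode fiber (roughly 0.05 MHz per
# microstrain and 1 MHz per degree C), not this study's calibration.
C_EPS = 0.05   # MHz per microstrain
C_T   = 1.0    # MHz per degree C

def strain_from_shift(delta_nu_mhz: float, delta_t_c: float) -> float:
    """Recover microstrain from the measured Brillouin shift, given an
    independent temperature reading (e.g. a strain-free loose fiber)."""
    return (delta_nu_mhz - C_T * delta_t_c) / C_EPS

# Example: a 30 MHz measured shift with a 5 C temperature rise
print(f"{strain_from_shift(30.0, 5.0):.0f} microstrain")   # -> 500
```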

Relevance: 30.00%

Abstract:

This paper addresses the problem of effective in situ measurement of real-time strain for bridge weigh-in-motion in reinforced concrete bridge structures through the use of optical fiber sensor systems. By undertaking a series of tests coupled with dynamic loading, the performance of fiber Bragg grating-based sensor systems with various amplification techniques was investigated. In recent years, structural health monitoring (SHM) systems have been developed to monitor bridge deterioration and to assess load levels, and hence to extend bridge life and safety. Conventional SHM systems based on measuring strain can be used to improve knowledge of a bridge's capacity to resist loads, but generally give no information on the causes of any increase in stresses. Therefore, it is necessary to find accurate sensors capable of capturing peak strains under dynamic load, and suitable methods for attaching these strain sensors to existing and new bridge structures. Additionally, it is important to ensure accurate strain transfer between the concrete and steel, the adhesive layer, and the strain sensor. The results show the benefits of using optical fiber networks under these circumstances and their ability to deliver data when conventional sensors cannot capture accurate strains and/or peak strains.
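
A hedged sketch of the fiber Bragg grating strain conversion underlying such measurements (typical values for the photo-elastic coefficient and Bragg wavelength are assumed, and the wavelength trace is synthetic), including recovery of the peak strain under a dynamic load:

```python
import numpy as np

# Sketch of FBG interrogation for peak dynamic strain: at constant
# temperature the relative Bragg wavelength shift is approximately
# (1 - p_e) * strain. p_e and lambda_B are typical values, not this
# paper's calibration; the wavelength trace below is synthetic.
P_E      = 0.22          # effective photo-elastic coefficient (typical)
LAMBDA_B = 1550.0        # nominal Bragg wavelength, nm

def strain_ue(lambda_nm: np.ndarray) -> np.ndarray:
    """Convert a Bragg-wavelength time series to microstrain."""
    return (lambda_nm - LAMBDA_B) / LAMBDA_B / (1 - P_E) * 1e6

t = np.linspace(0, 1, 1000)
trace = LAMBDA_B + 0.12 * np.exp(-3 * t) * np.sin(2 * np.pi * 8 * t)
eps = strain_ue(trace)
print(f"peak strain under the passing load: {eps.max():.0f} microstrain")
```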

Relevance: 30.00%

Abstract:

The performance of real-time networks is under continuous improvement as a result of several trends in the digital world. However, these trends not only bring improvements; they also exacerbate a series of non-ideal aspects of real-time networks, such as communication latency, jitter and packet drop rate. This thesis focuses on the communication errors that appear in such real-time networks, from the point of view of automatic control. Specifically, it investigates the effects of packet drops in automatic control over fieldbuses, as well as architectures and optimal techniques for their compensation.

Firstly, a new approach to address the problems that arise from such packet drops is proposed. This novel approach is based on the simultaneous transmission of several values in a single message. Such messages can travel from sensor to controller, in which case they comprise several past sensor readings, or from controller to actuator, in which case they comprise estimates of several future control values. A series of tests reveals the advantages of this approach. This approach is then expanded to accommodate the techniques of contemporary optimal control; however, unlike the first approach, which deliberately withholds certain messages in order to make more efficient use of network resources, in this second case the techniques are used to reduce the effects of packet losses. After these two approaches based on data aggregation, optimal control in packet-dropping fieldbuses is studied using generalized actuator output functions. This study ends with the development of a new optimal controller, together with the identification of the function, among the generalized functions that dictate the actuator's behaviour in the absence of a new control message, that leads to optimal performance.

The thesis also presents a different line of research, related to the output oscillations that take place as a consequence of the use of classic co-design techniques for networked control. The proposed algorithm allows the execution of such classical co-design algorithms without causing an output oscillation that increases the value of the cost function; such increases may, under certain circumstances, negate the advantages of applying the classical co-design techniques. Yet another line of research investigated algorithms, more efficient than contemporary ones, for generating task execution sequences that guarantee that at least a given number of activated jobs will be executed out of every set composed of a predetermined number of contiguous activations. This algorithm may, in the future, be applied to the generation of message transmission patterns in the above-mentioned techniques for the efficient use of network resources. The proposed task-generation algorithm improves on its predecessors in that it can schedule systems that its predecessor algorithms cannot. The thesis also presents a mechanism that performs multi-path routing in wireless sensor networks while ensuring that no value is counted in duplicate. This technique thereby improves the performance of wireless sensor networks, rendering them more suitable for control applications.

As mentioned before, this thesis is centered on techniques for improving the performance of distributed control systems in which several elements are connected through a fieldbus that may be subject to packet drops. The first three approaches are directly related to this topic, with the first two approaching the problem from an architectural standpoint, whereas the third does so on more theoretical grounds. The fourth approach ensures that approaches in the literature with goals similar to those of this thesis can achieve them without causing other problems that might invalidate the solutions in question. Finally, the thesis presents an approach centered on the efficient generation of the transmission patterns used in the aforementioned approaches.
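
A minimal sketch of the data-aggregation idea on the controller-to-actuator path (the message format, horizon and hold rule are assumptions for illustration, not the thesis's design): each message carries the current command plus estimates of upcoming ones, and the actuator consumes buffered estimates when packets drop.

```python
from collections import deque

# Hypothetical sketch (horizon, format and hold rule are assumptions):
# each controller-to-actuator message carries the current command plus
# estimates of the next few commands; on a packet drop the actuator uses
# the next buffered estimate, and holds its last output once they run out.
class Actuator:
    def __init__(self):
        self.buffer = deque()   # estimates for upcoming control periods
        self.last = 0.0         # last applied output (final fallback)

    def on_message(self, u_now: float, u_future: list[float]) -> float:
        self.buffer = deque(u_future)   # replace any stale estimates
        self.last = u_now
        return u_now                    # apply the fresh command

    def on_drop(self) -> float:
        if self.buffer:                 # generalized "use estimate" action
            self.last = self.buffer.popleft()
        return self.last                # otherwise hold the last output

act = Actuator()
print(act.on_message(1.0, [0.9, 0.8, 0.7, 0.6]))   # normal period -> 1.0
print(act.on_drop())                               # drop -> 0.9 (estimate)
print(act.on_drop())                               # drop -> 0.8 (estimate)
```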

Relevance: 30.00%

Abstract:

Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfil novel requirements of applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder plays a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion-search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion-compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some reference decoded frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher-quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated into a transform-domain, turbo-coding-based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements of up to 2 dB; moreover, it outperforms H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
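
A much-simplified sketch of motion-compensated frame interpolation for side information generation (plain full-search block matching with a linear-motion assumption; the paper's motion-field regularization and interpolation framework are not reproduced):

```python
import numpy as np

# Simplified sketch: full-search block matching from the previous to the
# next decoded frame, then bidirectional averaging halfway along each
# motion vector (linear motion). Frames are equal-sized grayscale arrays.
BLOCK, SEARCH = 8, 4

def interpolate(prev: np.ndarray, nxt: np.ndarray) -> np.ndarray:
    h, w = prev.shape
    side = np.zeros_like(prev)
    for by in range(0, h - BLOCK + 1, BLOCK):
        for bx in range(0, w - BLOCK + 1, BLOCK):
            ref = prev[by:by+BLOCK, bx:bx+BLOCK]
            best, mv = np.inf, (0, 0)
            for dy in range(-SEARCH, SEARCH + 1):       # full search in nxt
                for dx in range(-SEARCH, SEARCH + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - BLOCK and 0 <= x <= w - BLOCK:
                        sad = np.abs(ref - nxt[y:y+BLOCK, x:x+BLOCK]).sum()
                        if sad < best:
                            best, mv = sad, (dy, dx)
            tgt = nxt[by+mv[0]:by+mv[0]+BLOCK, bx+mv[1]:bx+mv[1]+BLOCK]
            hy = min(max(by + mv[0] // 2, 0), h - BLOCK)  # halfway position
            hx = min(max(bx + mv[1] // 2, 0), w - BLOCK)
            side[hy:hy+BLOCK, hx:hx+BLOCK] = (ref + tgt) / 2.0
    return side

rng = np.random.default_rng(1)
prev = rng.integers(0, 256, (32, 32)).astype(float)
nxt = np.roll(prev, 2, axis=1)       # simulate a 2-pixel horizontal pan
print(interpolate(prev, nxt).shape)  # interpolated side-information frame
```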

Relevance: 30.00%

Abstract:

Localization is a fundamental task in Cyber-Physical Systems (CPS), where data is tightly coupled with the environment and the location where it is generated. The research literature on localization has reached a critical mass, and several surveys have also emerged. This review paper contributes to the state of the art by proposing a new and holistic taxonomy of the fundamental concepts of localization in CPS, based on a comprehensive analysis of previous research works and surveys. The main objective is to pave the way towards a deep understanding of the main localization techniques and to unify their descriptions. Furthermore, this review paper provides a complete overview of the most relevant localization and geolocation techniques. We also present the most important metrics for measuring the accuracy of localization approaches, defined as the gap between the real location and its estimate. Finally, we present open issues and research challenges pertaining to localization. We believe that this review paper will serve as an important and complete reference on localization techniques in CPS for researchers and practitioners, and will provide added value compared to previous surveys.
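
A small sketch of the accuracy metric described here, computing the localization error as the Euclidean gap between each true position and its estimate and summarising it as an RMSE (coordinates are illustrative):

```python
import math

# Localization error as the Euclidean distance between true and estimated
# positions, summarised as an RMSE over a set of nodes (2-D, illustrative).
def rmse(true_pts, est_pts):
    """Root-mean-square localization error over matched point pairs."""
    se = [(tx - ex) ** 2 + (ty - ey) ** 2
          for (tx, ty), (ex, ey) in zip(true_pts, est_pts)]
    return math.sqrt(sum(se) / len(se))

true_pts = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
est_pts  = [(0.4, -0.3), (9.5, 0.2), (5.6, 8.8)]
print(f"localization RMSE: {rmse(true_pts, est_pts):.2f} m")
```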

Relevance: 30.00%

Abstract:

Several projects in the recent past have aimed at promoting Wireless Sensor Networks as an infrastructure technology, where several independent users can submit applications that execute concurrently across the network. Concurrent multiple applications cause significant energy-usage overhead on sensor nodes, which cannot be eliminated by traditional schemes optimized for single-application scenarios. In this paper, we outline two main optimization techniques for reducing power consumption across applications. First, we describe a compiler-based approach that identifies redundant sensing requests across applications and eliminates them. Second, we cluster radio transmissions together by concatenating packets from independent applications based on Rate-Harmonized Scheduling.
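
A toy sketch of the first technique (the request format and merge rule are assumptions; the paper's approach is compiler-based and static, whereas this illustration is a runtime-style merge): duplicate sensing requests are collapsed so each sensor is sampled only once, at the highest rate any application asked for.

```python
# Toy sketch (request format invented): collapse duplicate sensing
# requests across applications so each sensor is sampled only once,
# at the maximum requested rate; slower requesters share its samples.
def merge_requests(requests):
    """Map each sensor to the maximum requested sampling rate in Hz."""
    merged = {}
    for app, sensor, rate_hz in requests:
        merged[sensor] = max(merged.get(sensor, 0), rate_hz)
    return merged

requests = [("appA", "temp", 1), ("appB", "temp", 4), ("appB", "light", 2)]
print(merge_requests(requests))   # {'temp': 4, 'light': 2}
# appA can then be served every 4th sample of the shared 4 Hz stream,
# instead of triggering a second, redundant acquisition.
```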