76 results for Sensor data fusion
Abstract:
Recently, it has been shown that fusing the estimates of a set of sparse recovery algorithms yields an estimate better than the best estimate in the set, especially when the number of measurements is very limited. Although such schemes provide better sparse signal recovery performance, their higher computational requirement makes them less attractive for low-latency applications. To alleviate this drawback, in this paper we develop a progressive fusion based scheme for low-latency applications in compressed sensing (CS). In progressive fusion, the estimates of the participating algorithms are fused progressively as they become available; when an estimate becomes available depends on the computational complexity of the corresponding algorithm, and hence on its latency. Unlike other fusion algorithms, the proposed progressive fusion algorithm provides quick interim results and successive refinements during the fusion process, which is highly desirable in low-latency applications. We analyse the developed scheme by providing sufficient conditions for improvement of the CS reconstruction quality, and we demonstrate its practical efficacy through numerical experiments on synthetic and real-world data. (C) 2013 Elsevier B.V. All rights reserved.
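The abstract does not spell out the fusion step itself; a common choice in this line of work is least squares restricted to the union of the supports of the available estimates. Below is a minimal Python sketch of a progressive fusion loop under that assumption; the fusion rule and all names are my own, not necessarily the paper's:

```python
import numpy as np

def fuse_supports(y, A, supports, k):
    """Fuse sparse estimates via least squares on the union of their supports
    (one plausible fusion rule; not necessarily the paper's exact step)."""
    joint = sorted(set().union(*supports))
    x_ls, *_ = np.linalg.lstsq(A[:, joint], y, rcond=None)
    x = np.zeros(A.shape[1])
    x[joint] = x_ls
    keep = np.argsort(np.abs(x))[-k:]            # enforce k-sparsity
    pruned = np.zeros_like(x)
    pruned[keep] = x[keep]
    return pruned

def progressive_fusion(y, A, estimates_in_latency_order, k):
    """Fuse estimates one by one as they arrive, yielding interim results."""
    supports = []
    for x_hat in estimates_in_latency_order:
        supports.append(set(np.flatnonzero(x_hat)))
        yield fuse_supports(y, A, supports, k)   # quick interim refinement
```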
Abstract:
In this paper, we study the problem of designing a multi-hop wireless network for interconnecting sensors (hereafter called source nodes) to a Base Station (BS) by deploying a minimum number of relay nodes at a subset of given potential locations, while meeting a quality of service (QoS) objective specified as a hop count bound for paths from the sources to the BS. The hop count bound suffices to ensure a certain probability of the data being delivered to the BS within a given maximum delay under a light traffic model. We observe that the problem is NP-Hard. For this problem, we propose a polynomial time approximation algorithm based on iteratively constructing shortest path trees and heuristically pruning away relay nodes until the hop count bound is violated. Results show that the algorithm performs efficiently in various randomly generated network scenarios; in over 90% of the tested scenarios, it gave solutions that were either optimal or worse than optimal by just one relay. We then use random graph techniques to obtain, under a certain stochastic setting, an upper bound on the average case approximation ratio of a class of algorithms (including the proposed algorithm) for this problem, as a function of the number of source nodes and the hop count bound. To the best of our knowledge, this average case analysis is the first of its kind in the relay placement literature. Since the design is based on a light traffic model, we also provide simulation results (using models for the IEEE 802.15.4 physical layer and medium access control) to assess the traffic levels up to which the QoS objectives continue to be met. (C) 2014 Elsevier B.V. All rights reserved.
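A rough Python sketch of the prune-while-feasible idea (my own reconstruction, not the paper's algorithm; the networkx usage and all names are assumptions): starting from a network containing all candidate relays, a relay is dropped whenever every source can still reach the BS within the hop bound without it.

```python
import networkx as nx

def prune_relays(G, sources, bs, hop_bound):
    """Greedily remove relay nodes while all sources can still reach the BS
    within hop_bound hops; restore a relay if its removal breaks the bound."""
    H = G.copy()
    relays = [v for v in G.nodes if v != bs and v not in sources]
    for r in relays:
        H.remove_node(r)
        try:
            ok = all(nx.shortest_path_length(H, s, bs) <= hop_bound
                     for s in sources)
        except nx.NetworkXNoPath:
            ok = False
        if not ok:
            H.add_node(r)                            # restore the relay
            H.add_edges_from((r, u) for u in G.neighbors(r) if u in H)
    return H
```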
Abstract:
For space applications, the weight of liquid level sensors is of major concern as it affects the payload fraction and hence the cost. An attempt is made to design and test a lightweight High Temperature Superconductor (HTS) wire based liquid level sensor for the Liquid Oxygen (LOX) tank used in the cryostage of a spacecraft. The measured total resistance of the HTS wire is inversely proportional to the liquid level. An HTS wire (SF12100) of 12 mm width and 2.76 m length without copper stabilizer has been used in the level sensor. The developed HTS wire based LOX level sensor is calibrated against a discrete diode array type level sensor. Liquid Nitrogen (LN2) and LOX have been used as the cryogenic fluids for calibration. Automatic data logging for the system has been done using LabVIEW 11. The net weight of the developed sensor is less than 1 kg.
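The sensing principle lends itself to a one-line read-out. Assuming uniform resistance per unit length and a fully superconducting submerged section (my assumptions; the abstract only states the inverse proportionality), the level fraction follows directly from the measured resistance:

```python
def level_fraction(r_measured, r_total):
    """Hypothetical read-out: the submerged part of the HTS wire is
    superconducting (zero resistance), so only the exposed length
    contributes; level fraction = 1 - R_measured / R_total."""
    return 1.0 - r_measured / r_total
```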
Abstract:
The cryosorption pump is the only feasible device for pumping helium, hydrogen and its isotopes in a fusion environment, with its high magnetic fields and high plasma temperatures. Activated carbons are known to be the most suitable adsorbents for the development of cryosorption pumps. For this purpose, data on the adsorption characteristics of activated carbons in the temperature range 4.5 K to 77 K are needed, but are not available in the literature. To obtain these data, a commercial micropore analyzer operating at 77 K has been integrated with a two-stage GM cryocooler, which enables cooling of the sample down to 4.5 K. A heat switch mounted between the second stage cold head and the sample chamber helps to raise the sample chamber temperature to 77 K without affecting the performance of the cryocooler. A detailed description of this system is presented elsewhere. This paper presents the results of experimental studies of adsorption isotherms measured on different types of activated carbons, in the form of granules, globules, flakes, and knitted and non-woven types, in the temperature range 4.5 K to 10 K using helium gas as the adsorbate. These results are analyzed to obtain the pore size distributions and surface areas of the activated carbons. The effect of the adhesive used for bonding the activated carbons to the panels is also studied. These results will be useful in arriving at the right choice of activated carbon for the development of cryosorption pumps.
Abstract:
The Electromagnetic Articulography (EMA) technique is used to record the kinematics of different articulators while one speaks. EMA data often contain missing segments due to sensor failure. In this work, we propose a maximum a-posteriori (MAP) estimation with a continuity constraint to recover the missing samples in the articulatory trajectories recorded using EMA. In this approach, we combine the benefits of statistical MAP estimation with the temporal continuity of the articulatory trajectories. Experiments on an articulatory corpus using different missing segment durations show that the proposed continuity constraint yields a 30% reduction in average root mean squared estimation error over statistical estimation of missing segments without any continuity constraint.
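One simple way to realize such an estimator (my own formulation, not necessarily the paper's) is to trade a Gaussian prior on the missing samples against a squared second-difference continuity penalty, which gives a closed-form linear solve:

```python
import numpy as np

def map_fill(traj, missing_idx, prior_mean, lam=10.0):
    """Fill missing samples of a 1-D trajectory by minimizing
    ||z - prior_mean||^2 + lam * ||D2 @ full||^2 over the missing entries z,
    where D2 is the second-difference operator (continuity term).
    prior_mean is a vector of length len(missing_idx)."""
    n = len(traj)
    known_idx = np.setdiff1d(np.arange(n), missing_idx)
    D2 = np.diff(np.eye(n), n=2, axis=0)         # second-difference operator
    A = D2[:, missing_idx]                       # columns hitting missing samples
    b = D2[:, known_idx] @ traj[known_idx]       # contribution of known samples
    lhs = np.eye(len(missing_idx)) + lam * A.T @ A
    rhs = prior_mean - lam * A.T @ b
    filled = traj.copy()
    filled[missing_idx] = np.linalg.solve(lhs, rhs)
    return filled
```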
Abstract:
The sensing of relative humidity (RH) at room temperature has potential applications in several areas ranging from biomedical to the horticulture, paper, and textile industries. In this paper, a highly sensitive humidity sensor based on carbon nanotubes (CNTs) coated on the surface of an etched fiber Bragg grating (EFBG) has been demonstrated for detecting RH over a wide range of 20%-90% at room temperature. When water molecules interact with the CNT coated EFBG, the effective refractive index of the fiber core changes, resulting in a shift in the Bragg wavelength. It has been possible to achieve a high sensitivity of approximately 31 pm/%RH, which is the highest compared with many of the existing FBG-based humidity sensors. The limit of detection of the CNT coated EFBG has been found to be approximately 0.03% RH. The experimental data show a linear response of the Bragg wavelength shift with increasing humidity. This novel method of incorporating CNTs onto an FBG sensor for humidity sensing has not been reported before.
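Given the reported linear response, the read-out is a one-liner; the reference point and names below are my assumptions for illustration:

```python
SENSITIVITY_PM_PER_PERCENT_RH = 31.0   # reported sensitivity, ~31 pm per %RH

def rh_from_shift(delta_lambda_pm, rh_ref=20.0):
    """Convert a Bragg wavelength shift (pm, measured relative to the
    reading at rh_ref %RH) into relative humidity, assuming linearity."""
    return rh_ref + delta_lambda_pm / SENSITIVITY_PM_PER_PERCENT_RH
```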
Abstract:
In the immediate surroundings of our daily life, there are many places where energy in the form of vibration is being wasted, so there are enormous opportunities to utilize it. The piezoelectric character of matter enables us to convert this mechanical vibration energy into electrical energy, which can be stored and used to power other devices instead of being wasted. This work realizes both an actuator and a sensor on a cantilever beam based on piezoelectricity; the sensor part is called a vibration energy harvester. Numerical analyses of the cantilever beam were performed using the commercial packages ANSYS and MATLAB. The cantilever beam is realized by taking a plate and fixing one of its ends between two massive plates. Two PZT patches were glued to the beam on its two faces. Experiments were performed using a data acquisition (DAQ) system and LabVIEW software for actuating and sensing the vibration of the cantilever beam.
Abstract:
We consider the problem of finding optimal energy sharing policies that maximize the network performance of a system comprising multiple sensor nodes and a single energy harvesting (EH) source. Sensor nodes periodically sense the random field and generate data, which is stored in the corresponding data queues. The EH source harnesses energy from ambient energy sources, and the generated energy is stored in an energy buffer. Sensor nodes receive energy for data transmission from the EH source, which has to share the stored energy efficiently among the nodes to minimize the long-run average delay in data transmission. We formulate the problem of energy sharing between the nodes in the framework of average cost infinite-horizon Markov decision processes (MDPs). We develop efficient energy sharing algorithms, namely Q-learning algorithms with exploration mechanisms based on the epsilon-greedy method as well as the upper confidence bound (UCB). We extend these algorithms by incorporating state and action space aggregation to tackle the state-action space explosion in the MDP. We also develop a cross-entropy based method that incorporates policy parameterization to find near-optimal energy sharing policies. Through simulations, we show that our algorithms yield energy sharing policies that outperform the heuristic greedy method.
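For concreteness, here is a generic tabular Q-learning loop with epsilon-greedy exploration in a cost-minimization setting; the `env` interface and all details are hypothetical stand-ins for the paper's MDP (which is average-cost rather than discounted, and uses aggregation this sketch omits):

```python
import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning with epsilon-greedy exploration on a hypothetical
    env exposing reset(), step(action) -> (state, cost, done), actions(state).
    Costs (delays) are minimized, so the update uses -cost as the reward."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = env.actions(s)
            if random.random() < eps:
                a = random.choice(acts)                    # explore
            else:
                a = max(acts, key=lambda a: Q[(s, a)])     # exploit
            s2, cost, done = env.step(a)
            best_next = 0.0 if done else max(Q[(s2, a2)] for a2 in env.actions(s2))
            Q[(s, a)] += alpha * (-cost + gamma * best_next - Q[(s, a)])
            s = s2
    return Q
```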
Abstract:
Action recognition plays an important role in various applications, including smart homes and personal assistive robotics. In this paper, we propose an algorithm for recognizing human actions using motion capture action data. Motion capture data provide accurate three dimensional positions of the joints which constitute the human skeleton. We model the movement of the skeletal joints temporally in order to classify the action. The skeleton in each frame of an action sequence is represented as a 129-dimensional vector, each component of which is a 3D angle made by a joint with a fixed point on the skeleton. Finally, each action sequence is represented as a histogram over a codebook obtained from all action sequences. Along with this, the temporal variance of the skeletal joints is used as an additional feature. The actions are classified using a Meta-Cognitive Radial Basis Function Network (McRBFN) and its Projection Based Learning (PBL) algorithm. We achieve over 97% recognition accuracy on the widely used Berkeley Multimodal Human Action Database (MHAD).
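The codebook-histogram step is a standard bag-of-words construction; a sketch using SciPy's k-means (the codebook size and all names are my choices, not the paper's):

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def bag_of_poses(sequences, k=64):
    """Cluster all per-frame 129-D pose vectors into a k-word codebook, then
    represent each sequence as a normalized histogram of codeword counts."""
    all_frames = np.vstack(sequences)                # (total_frames, 129)
    codebook, _ = kmeans2(all_frames, k, minit='++', seed=0)
    histograms = []
    for seq in sequences:
        words, _ = vq(seq, codebook)                 # nearest codeword per frame
        hist = np.bincount(words, minlength=k).astype(float)
        histograms.append(hist / hist.sum())
    return np.array(histograms), codebook
```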
Abstract:
In the context of wireless sensor networks, we are motivated by the design of a tree network spanning a set of source nodes that generate packets, a set of additional relay nodes that only forward packets from the sources, and a data sink. We assume that the paths from the sources to the sink have bounded hop count, that the nodes use IEEE 802.15.4 CSMA/CA for medium access control, and that there are no hidden terminals. In this setting, starting with a set of simple fixed point equations, we derive explicit conditions on the packet generation rates at the sources under which the tree network approximately provides a certain quality of service (QoS), such as end-to-end delivery probability and mean delay. The structure of our conditions provides insight into the dependence of the network performance on the arrival rate vector and the topological properties of the tree network. Our numerical experiments suggest that our approximations are able to capture a significant part of the QoS-aware throughput region of a tree network, which is adequate for many sensor network applications. Furthermore, for the special case of equal arrival rates, default backoff parameters, and a range of values of target QoS, we show that among all path-length-bounded trees (spanning a given set of sources and the data sink) that meet the conditions derived in the paper, a shortest path tree achieves the maximum throughput. (C) 2015 Elsevier B.V. All rights reserved.
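The abstract does not reproduce the fixed point equations themselves; performance models of this kind are typically evaluated by successive substitution, as in this generic solver (entirely my sketch):

```python
import numpy as np

def solve_fixed_point(f, x0, tol=1e-9, max_iter=100000):
    """Successive substitution for a vector fixed point x = f(x), e.g.
    per-node attempt/collision probabilities in a CSMA/CA performance model."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.asarray(f(x), dtype=float)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed point iteration did not converge")
```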
Abstract:
Hypersonic flows are characterized not just by high Mach numbers but also by high flow total enthalpies, often accompanied by dissociation and ionization of the flowing gas itself, so their experimental simulation requires impulse facilities like shock tunnels. However, shock tunnel simulation imposes challenges and restrictions on the flow diagnostics, not just because of the possible extreme flow conditions, but also because of the short run times, typically around 1 ms. The development, calibration and application of fast response MEMS sensors for surface pressure measurements in the IISc hypersonic shock tunnel HST-2, with a typical test time of 600 µs, for the complex flow field of strong (impinging) shock boundary layer interaction with separation close to the leading edge, is delineated in this paper. For Mach numbers 5.96 (total enthalpy 1.3 MJ/kg) and 8.67 (total enthalpy 1.6 MJ/kg), surface pressures ranging from around 200 Pa to 50,000 Pa in various regions of the flow field are measured using the MEMS sensors. The measurements are found to compare well with those from commercial sensors. It was possible to resolve important regions of the flow field involving significant spatial gradients of pressure, with a resolution of 5 data points within 12 mm in each MEMS array, which cannot be achieved with the other commercial sensors. In particular, the MEMS sensors enabled the measurement of the separation pressure (at Mach 8.67) near the leading edge and the sharply varying pressure in the reattachment zone.
Abstract:
We are given a set of sensors at given locations, a set of potential locations for placing base stations (BSs, or sinks), and another set of potential locations for placing wireless relay nodes. There is a cost for placing a BS and a cost for placing a relay. The problem we consider is to select a set of BS locations, a set of relay locations, and an association of sensor nodes with the selected BS locations, so that the number of hops in the path from each sensor to its BS is bounded by h_max, and among all such feasible networks, the cost of the selected network is minimum. The hop count bound suffices to ensure a certain probability of the data being delivered to the BS within a given maximum delay under a light traffic model. We observe that the problem is NP-Hard, and is hard to even approximate within a constant factor. For this problem, we propose a polynomial time approximation algorithm (SmartSelect) based on a relay placement algorithm proposed in our earlier work, along with a modification of the greedy algorithm for weighted set cover. We have analyzed the worst case approximation guarantee for this algorithm. We have also proposed a polynomial time heuristic to improve upon the solution provided by SmartSelect. Our numerical results demonstrate that the algorithms provide good quality solutions using very little computation time in various randomly generated network scenarios.
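For reference, the classic greedy rule for weighted set cover that SmartSelect builds on picks, at each step, the candidate with the lowest cost per newly covered element; a compact sketch (the data layout is my own):

```python
def greedy_weighted_set_cover(universe, cover, cost):
    """Greedy weighted set cover: `cover` maps a candidate (e.g. a BS
    location) to the set of elements (sensors) it covers, and `cost` maps
    it to its placement cost. Repeatedly pick the cheapest-per-new-element
    candidate until everything is covered."""
    uncovered, chosen = set(universe), []
    while uncovered:
        useful = [c for c in cover if cover[c] & uncovered]
        if not useful:
            raise ValueError("infeasible: some elements cannot be covered")
        best = min(useful, key=lambda c: cost[c] / len(cover[c] & uncovered))
        chosen.append(best)
        uncovered -= cover[best]
    return chosen
```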
Abstract:
We propose a practical, feature-level and score-level fusion approach that combines acoustic and estimated articulatory information for both text-independent and text-dependent speaker verification. From a practical point of view, we study how to improve speaker verification performance by combining dynamic articulatory information with conventional acoustic features. On text-independent speaker verification, we find that concatenating articulatory features obtained from measured speech production data with conventional Mel-frequency cepstral coefficients (MFCCs) improves the performance dramatically. However, since directly measuring articulatory data is not feasible in many real world applications, we also experiment with estimated articulatory features obtained through acoustic-to-articulatory inversion. We explore both feature-level and score-level fusion methods and find that the overall system performance is significantly enhanced even with estimated articulatory features. Such a performance boost could be due to the inter-speaker variation information embedded in the estimated articulatory features. Since the dynamics of articulation contain important information, we also include inverted articulatory trajectories in text-dependent speaker verification. We demonstrate that the articulatory constraints introduced by inverted articulatory features help to reject wrong password trials and improve the performance after score-level fusion. We evaluate the proposed methods on the X-ray Microbeam database and the RSR2015 database, respectively, for the aforementioned two tasks. Experimental results show that we achieve more than 15% relative equal error rate reduction for both speaker verification tasks. (C) 2015 Elsevier Ltd. All rights reserved.
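The two fusion levels described reduce to very small operations: frame-wise concatenation for feature-level fusion and a weighted sum for score-level fusion. A sketch (the weight and all names are placeholders; in practice the weight would be tuned on development data):

```python
import numpy as np

def fuse_features(mfcc, artic):
    """Feature-level fusion: frame-wise concatenation of MFCC and
    articulatory feature matrices, each shaped (frames, dims)."""
    return np.concatenate([mfcc, artic], axis=1)

def fuse_scores(score_acoustic, score_articulatory, w=0.5):
    """Score-level fusion: weighted sum of the two subsystems' scores."""
    return w * score_acoustic + (1.0 - w) * score_articulatory
```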
Abstract:
In this article, the design and development of a Fiber Bragg Grating (FBG) based displacement sensor package for submicron level displacement measurements are presented. A linear shift of 12.12 nm in the Bragg wavelength of the FBG sensor is obtained for a displacement of 6 mm, giving a calibration factor of 0.495 µm/pm. Field trials have also been conducted by comparing the FBG displacement sensor package against a conventional dial gauge on a five block masonry prism specimen loaded using the three-point bending technique. The responses from both sensors are in good agreement, up to the failure of the masonry prism. Furthermore, from the real-time displacement data recorded using the FBG, it is possible to detect the time at which early cracks are generated inside the body of the specimen, which then propagate to the surface to develop visible surface cracks; the respective load from the load cell can be obtained from the inflection (stress release point) in the displacement curve. Thus the developed FBG displacement sensor package can be used to detect failures in structures much earlier and to provide adequate time to take necessary action, thereby avoiding possible disaster.
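The reported calibration is linear, so the read-out is a single multiplication; as a sanity check, the full 12.12 nm (12120 pm) shift maps back to roughly 6 mm:

```python
CAL_UM_PER_PM = 0.495   # reported calibration factor, micrometres per picometre

def displacement_um(bragg_shift_pm):
    """Convert a measured Bragg wavelength shift (pm) to displacement (um)
    using the reported linear calibration."""
    return CAL_UM_PER_PM * bragg_shift_pm

# e.g. displacement_um(12120) -> 5999.4 um, i.e. ~6 mm, matching the abstract
```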
Abstract:
In the context of the minimal supersymmetric standard model (MSSM), we discuss the possibility of the lightest Higgs boson with mass M_h = 98 GeV being consistent with the 2.3 sigma excess observed at LEP in the decay mode e+ e- -> Zh, with h -> b bbar. In the same region of the MSSM parameter space, the heavier Higgs boson (H) with mass M_H ~ 125 GeV is required to be consistent with the latest data on Higgs coupling measurements at the end of the 7 + 8 TeV LHC run with 25 fb^-1 of data. While scanning the MSSM parameter space, we impose constraints coming from flavor physics, the relic density of cold dark matter, as well as direct dark matter searches. We study the possibility of observing this light Higgs boson in the vector boson fusion process and in associated production with a W/Z boson at the high luminosity (3000 fb^-1) run of the 14 TeV LHC. Our analysis shows that this scenario can hardly be ruled out even at the high luminosity run of the LHC. However, precise measurement of the Higgs signal strength ratios can play a major role in distinguishing this scenario from the canonical MSSM one.