25 results for Underwater acoustics signal processing
Abstract:
In this paper, the well-known method-of-frames approach to the signal decomposition problem is reformulated as a bilevel goal-attainment linear least-squares problem. As a consequence, a numerically robust variant of the method, named the approximating method of frames, is proposed on the basis of a minimal-Euclidean-norm approximating splitting pseudo-iteration-wise method.
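The classical method of frames selects, among all coefficient vectors that reproduce a signal over an overcomplete frame, the one of minimal Euclidean norm. A minimal sketch of that baseline (not of the paper's bilevel goal-attainment reformulation), with an arbitrary random frame and signal chosen purely for illustration, might look as follows:

```python
import numpy as np

# Sketch of the classical method of frames: among all coefficient vectors c
# with F @ c = x, pick the one of minimal Euclidean norm via the pseudo-inverse.
# The frame F and signal x are arbitrary illustrations, not the paper's
# bilevel goal-attainment formulation.
rng = np.random.default_rng(0)
n, m = 8, 20                      # signal dimension, number of frame vectors (overcomplete)
F = rng.standard_normal((n, m))   # columns are frame vectors
x = rng.standard_normal(n)        # signal to decompose

c = np.linalg.pinv(F) @ x         # minimum-norm least-squares coefficients
x_rec = F @ c                     # reconstruction from the frame expansion

print("reconstruction error:", np.linalg.norm(x - x_rec))
print("coefficient norm:", np.linalg.norm(c))
```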
Abstract:
This paper addresses the problem of service development based on GSM handset signaling. The aim is to achieve this goal without the participation of the users, which requires the use of a passive GSM receiver on the uplink. Since no tool for GSM uplink capturing was available, we developed a new method that can synchronize to multiple mobile devices by simply overhearing the traffic between them and the network. Our work includes the implementation of modules for signal recovery, message reconstruction, and parsing. The method has been validated against a benchmark solution on the GSM downlink and independently evaluated on uplink channels. Initial evaluations show a success rate of up to 99% in message decoding, which is a very promising result. Moreover, we conducted measurements that reveal insights into the impact of signal power on capturing performance and investigate possible reactive measures.
Abstract:
Atrial fibrillation (AF) is the most common cardiac arrhythmia and is responsible for the highest number of rhythm-related disorders and cardioembolic strokes worldwide. Intracardiac signal analysis during the onset of paroxysmal AF led to the discovery of the pulmonary veins as a triggering source of AF, which in turn led to the development of pulmonary vein ablation, an established curative therapy for drug-resistant AF. Complex, multicomponent, and rapid electrical activity widely involving the atrial substrate characterizes persistent/permanent AF. The widespread nature of the problem and the complexity of the signals in persistent AF reduce the success rate of ablation therapy. Although signal processing applied to the extraction of relevant features from these complex electrograms has helped to improve the efficacy of ablation therapy in persistent/permanent AF, an improved understanding of these complex signals should help to identify the sources of AF and further increase the success rate of ablation therapy.
Abstract:
Clock synchronization is critical for the operation of a distributed wireless network system. In this paper, we investigate a method able to evaluate, in real time, the synchronization offset between devices down to nanoseconds (as needed for positioning). The method is inspired by signal processing algorithms and relies on fine-grain time information obtained during the reconstruction of the signal at the receiver. Applying the method to a GPS-synchronized system shows that GPS-based synchronization has high accuracy potential but still suffers from short-term clock drift, which limits the achievable localization accuracy.
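As a rough illustration of how fine-grain timing extracted from the received signal can expose a synchronization offset (the paper's reconstruction-based method itself is not reproduced here), the sketch below estimates the offset between two receivers' captures of the same burst from the cross-correlation peak; the sample rate, waveform, and offset value are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: estimate the clock offset between two receivers that capture
# the same transmitted burst by locating the cross-correlation peak of their
# sampled signals. All signal parameters are arbitrary illustrative choices.
fs = 100e6                                   # 100 MS/s -> 10 ns per sample
t = np.arange(1000) / fs
burst = np.sin(2 * np.pi * 5e6 * t) * np.hanning(t.size)

true_offset_samples = 7                      # receiver B's capture lags by 7 samples (70 ns)
rx_a = np.concatenate([burst, np.zeros(50)])
rx_b = np.concatenate([np.zeros(true_offset_samples), burst,
                       np.zeros(50 - true_offset_samples)])

corr = np.correlate(rx_b, rx_a, mode="full")
lag = np.argmax(corr) - (rx_a.size - 1)      # lag in samples
print("estimated offset: %.1f ns" % (lag / fs * 1e9))
```

Note that the resolution of this toy estimate is one sample period; the abstract's nanosecond claims rely on finer, sub-sample timing information.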
Abstract:
We investigate the problem of distributed sensor failure detection in networks with a small number of defective sensors, whose measurements differ significantly from those of their neighbors. We build on the sparse nature of the binary sensor-failure signals to propose a novel distributed detection algorithm based on gossip mechanisms and on Group Testing (GT), where the latter has so far been used in centralized detection problems. The new distributed GT algorithm estimates the set of scattered defective sensors with a low-complexity distance decoder from a small number of linearly independent binary messages exchanged by the sensors. We first consider networks with one defective sensor and determine the minimal number of linearly independent messages needed for its detection with high probability. We then extend our study to the detection of multiple defective sensors by modifying the message exchange protocol and the decoding procedure appropriately. We show that, for small and medium-sized networks, the number of messages required for successful detection is actually smaller than the minimal number computed theoretically. Finally, simulations demonstrate that the proposed method outperforms methods based on random walks in terms of both detection performance and convergence rate.
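As an illustration of the distance-decoding idea in a centralized form (the paper's version is distributed via gossip exchanges), the sketch below identifies a single defective sensor by comparing the observed binary test outcomes with each column of a random binary test matrix; the network size, number of tests, and test matrix are arbitrary assumptions.

```python
import numpy as np

# Centralized group-testing distance decoder, assuming the binary test
# outcomes have already been collected. One defective sensor; sizes arbitrary.
rng = np.random.default_rng(1)
n_sensors, n_tests = 50, 12
W = rng.integers(0, 2, size=(n_tests, n_sensors))    # which sensors join each test

defective = 17
failures = np.zeros(n_sensors, dtype=int)
failures[defective] = 1

# A test outcome is 1 iff it includes at least one defective sensor (Boolean OR).
outcomes = (W @ failures > 0).astype(int)

# Distance decoder: the defective sensor's test pattern should match the outcomes.
distances = np.sum(W != outcomes[:, None], axis=0)   # Hamming distance per sensor
print("estimated defective sensor:", int(np.argmin(distances)))
```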
Abstract:
This paper considers a framework where data from correlated sources are transmitted with the help of network coding in ad hoc network topologies. The correlated data are encoded independently at the sensors, and network coding is employed at the intermediate nodes in order to improve data delivery performance. In such settings, we focus on the problem of reconstructing the sources at the decoder when perfect decoding is not possible due to losses or bandwidth variations. We show that source data similarity can be exploited at the decoder to permit decoding based on a novel and simple approximate decoding scheme. We analyze the influence of the network coding parameters, and in particular the size of the finite coding fields, on the decoding performance. We further determine the optimal field size that maximizes the expected decoding performance as a trade-off between the information loss incurred by limiting the resolution of the source data and the error probability in the reconstructed data. Moreover, we show that the performance of approximate decoding improves when the accuracy of the source model increases, even with simple approximate decoding techniques. We provide illustrative examples showing how the proposed algorithm can be deployed in sensor networks and distributed imaging applications.
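To make the resolution side of this trade-off concrete, the toy sketch below shows how mapping a real-valued source into GF(q) symbols limits its resolution, so the quantization error shrinks as the field size q grows; the error-probability side of the trade-off is specific to the paper's analysis and is not modeled here.

```python
import numpy as np

# Toy illustration of the resolution/field-size relationship only: encoding a
# real-valued source into a finite field GF(q) forces quantization to q levels.
# The decoding error probability, the other half of the trade-off, is not modeled.
rng = np.random.default_rng(2)
source = rng.random(10_000)                 # source samples in [0, 1)

for q in (4, 16, 64, 256):                  # candidate field sizes
    symbols = np.floor(source * q)          # quantize to GF(q) symbols 0..q-1
    reconstructed = (symbols + 0.5) / q     # mid-point de-quantization at the decoder
    mse = np.mean((source - reconstructed) ** 2)
    print(f"q = {q:3d}: quantization MSE = {mse:.2e}")
```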
Abstract:
In free-viewpoint applications, the images are captured by an array of cameras that acquire a scene of interest from different perspectives. Any intermediate viewpoint not included in the camera array can be virtually synthesized by the decoder, at a quality that depends on the distance between the virtual view and the camera views available at the decoder. Hence, it is beneficial for any user to receive camera views that are close to each other for synthesis. This is, however, not always feasible in bandwidth-limited overlay networks, where every node may ask for different camera views. In this work, we propose an optimized delivery strategy for free-viewpoint streaming over overlay networks. We introduce the concept of layered quality of experience (QoE), which describes the level of interactivity offered to clients. Based on these levels of QoE, camera views are organized into layered subsets. These subsets are then delivered to clients through a prioritized network coding streaming scheme, which accommodates network and client heterogeneity and effectively exploits the resources of the overlay network. Simulation results show that, in scenarios with limited bandwidth or channel reliability, the proposed method outperforms baseline network coding approaches in which the different levels of QoE are not taken into account in the delivery strategy optimization.
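One way to picture the layered organization of camera views described above, purely as an illustrative assumption rather than the paper's optimized construction, is to group views by their distance from a client's requested viewpoint, with the closest views forming the highest-priority layer:

```python
# Toy sketch of layered subsets of camera views: views closest to the requested
# viewpoint go in the highest-priority layer, wider rings in lower layers.
# Layer widths and camera positions are illustrative assumptions.
def layered_subsets(camera_positions, requested_view, layer_widths=(1.0, 3.0, 6.0)):
    layers = [[] for _ in layer_widths]
    for cam in camera_positions:
        dist = abs(cam - requested_view)
        for i, width in enumerate(layer_widths):
            if dist <= width:
                layers[i].append(cam)   # place the view in the first layer it fits
                break
    return layers

cams = list(range(13))                  # camera views at positions 0..12
print(layered_subsets(cams, requested_view=5.5))
```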
Abstract:
Clock synchronization on the order of nanoseconds is one of the critical factors for time-based localization. Currently used time synchronization methods were developed for the more relaxed needs of network operation, so their usability for positioning should be carefully evaluated. In this paper, we are particularly interested in GPS-based time synchronization. To judge its usability for localization, we need a method that can evaluate the achieved time synchronization with nanosecond accuracy. Our method for evaluating the synchronization accuracy is inspired by signal processing algorithms and relies on fine-grain time information. The method is able to calculate the clock offset and skew between devices with nanosecond accuracy in real time. It was implemented using software-defined radio technology. We demonstrate that GPS-based synchronization suffers from a remaining clock offset in the range of a few hundred nanoseconds, but the clock skew is negligible. Finally, we determine a corresponding lower bound on the expected positioning error.
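A hedged sketch of the offset-and-skew estimation step: given fine-grain timestamps of common reference events at two receivers, a linear fit of the timestamp differences yields the skew (slope) and offset (intercept). The front end producing the timestamps, and all numbers used below, are assumptions made for illustration.

```python
import numpy as np

# Sketch (not the paper's exact pipeline): estimate clock offset and skew from
# fine-grain timestamps of shared reference events. Slope is in ns per second
# (numerically equal to parts per billion); intercept is in ns. Data synthetic.
rng = np.random.default_rng(3)
t_ref = np.sort(rng.uniform(0, 10, 200))            # event times at receiver A, seconds

true_offset_ns = 250.0                              # B's clock leads A by 250 ns
true_skew_ppb = 3.0                                 # and drifts by 3 ns per second
jitter_ns = rng.normal(0, 5, t_ref.size)            # 5 ns timestamping noise

diff_ns = true_offset_ns + true_skew_ppb * t_ref + jitter_ns

skew_ppb, offset_ns = np.polyfit(t_ref, diff_ns, 1)  # linear fit: slope, intercept
print(f"estimated offset: {offset_ns:.1f} ns, skew: {skew_ppb:.2f} ppb")
```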
Abstract:
Cloudification of the Centralized Radio Access Network (C-RAN), in which signal processing runs on general-purpose processors inside virtual machines, has lately received significant attention. Due to the short deadlines in the LTE Frequency Division Duplex access method, processing time fluctuations introduced by virtualization have a deep impact on C-RAN performance. This paper evaluates bottlenecks of the cloud performance of OpenAirInterface (OAI, an open-source software-based implementation of LTE), provides feasibility studies on C-RAN execution, and introduces a cloud architecture that significantly reduces the encountered execution problems. In typical cloud environments, the OAI processing time deadlines cannot be guaranteed. Our proposed cloud architecture shows good characteristics for OAI cloud execution. As an example, in our setup more than 99.5% of processed LTE subframes meet reasonable processing deadlines, close to the performance of a dedicated machine.
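A toy sketch of the kind of deadline analysis referred to above: from measured per-subframe processing times, compute the fraction completing within a budget. The 2 ms budget and the log-normal timing model are purely illustrative assumptions, not OAI measurements.

```python
import numpy as np

# Illustrative deadline-compliance check over synthetic per-subframe processing
# times; the deadline value and timing distribution are assumptions.
rng = np.random.default_rng(4)
proc_ms = rng.lognormal(mean=np.log(0.9), sigma=0.25, size=100_000)

deadline_ms = 2.0
met = np.mean(proc_ms <= deadline_ms)
print(f"subframes meeting the {deadline_ms} ms deadline: {met:.2%}")
print(f"99.9th percentile processing time: {np.percentile(proc_ms, 99.9):.2f} ms")
```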
Abstract:
BACKGROUND AND OBJECTIVES: Multiple-breath washout (MBW) is an attractive test to assess ventilation inhomogeneity, a marker of peripheral lung disease. Standardization of MBW is hampered because little data exist on possible measurement bias. We aimed to identify potential sources of measurement bias arising from MBW software settings.
METHODS: We used unprocessed data from nitrogen (N2) MBW (Exhalyzer D, Eco Medics AG) applied in 30 children aged 5-18 years: 10 with cystic fibrosis (CF), 10 formerly preterm, and 10 healthy controls. This setup calculates the tracer gas N2 mainly from the measured O2 and CO2 concentrations. The following software settings for MBW signal processing were changed by at least 5 units or >10% in both directions, or completely switched off: (i) environmental conditions, (ii) apparatus dead space, (iii) O2 and CO2 signal correction, and (iv) signal alignment (delay time). The primary outcome was the change in lung clearance index (LCI) compared to the LCI calculated with the recommended settings. A change in LCI exceeding 10% was considered relevant.
RESULTS: Changes in both environmental and dead space settings resulted in uniform but modest LCI changes and exceeded 10% in only two measurements. Changes in signal alignment and O2 signal correction had the most relevant impact on LCI. A decrease of the O2 delay time by 40 ms (7%) led to a mean LCI increase of 12%, with a >10% LCI change in 60% of the children. An increase of the O2 delay time by 40 ms resulted in a mean LCI decrease of 9%, with the LCI changing >10% in 43% of the children.
CONCLUSIONS: Accurate LCI results depend crucially on signal processing settings in MBW software. In particular, incorrect signal delay times are a possible source of incorrect LCI measurements. Algorithms for signal processing and signal alignment should thus be optimized to avoid the susceptibility of MBW measurements to this significant measurement bias.
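A simplified sketch of the signal-alignment step discussed above: the gas analyser signals lag the flow signal, so they are shifted by a delay time before N2 is derived from O2 and CO2. The 200 Hz sampling rate, the 40 ms delay change, and the simplified relation N2 = 1 - O2 - CO2 (argon and water vapour ignored) are illustrative assumptions, not Exhalyzer D internals.

```python
import numpy as np

# Illustrative only: show how a 40 ms error in the O2 delay-time setting
# changes the derived N2 trace. Sampling rate, traces, and the simplified
# N2 = 1 - O2 - CO2 relation are assumptions for the sketch.
fs_hz = 200                                   # assumed sampling rate of the gas signals
delay_shift_ms = 40                           # the delay-time change tested in the study
shift_samples = round(delay_shift_ms / 1000 * fs_hz)   # = 8 samples

def align(signal, shift):
    """Shift a gas signal forward in time by `shift` samples (edge-padded)."""
    return np.concatenate([np.full(shift, signal[0]), signal[:-shift]]) if shift else signal

t = np.arange(0, 10, 1 / fs_hz)
o2 = 0.21 + 0.05 * np.sin(2 * np.pi * 0.25 * t)          # synthetic breathing traces
co2 = 0.04 + 0.03 * np.sin(2 * np.pi * 0.25 * t + np.pi)

n2_nominal = 1.0 - o2 - co2
n2_shifted = 1.0 - align(o2, shift_samples) - co2         # O2 mis-aligned by 40 ms

print("max N2 difference caused by a 40 ms O2 delay error:",
      float(np.max(np.abs(n2_nominal - n2_shifted))))
```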