872 results for Signal-subspace compression
Abstract:
This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks (WSNs). After a general overview of sensor networks, the energy problem is introduced, classifying the different energy-reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middleware for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus then shifts to in-network aggregation techniques, used to reduce the data sent by the network nodes and thus prolong the network lifetime as much as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression implemented on single nodes to CS for signal ensembles, exploiting the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared on a common set of data gathered from real deployments. The best trade-off between reconstruction quality and power consumption is then investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group-sparsity CS (GS-CS) is compared with another well-known technique for reconstructing signals from a highly sub-sampled version. These two frameworks are again compared on a real dataset, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
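As a concrete point of reference for the CS approach discussed above, the sketch below implements the generic compressive sensing pipeline with a random Gaussian measurement matrix and orthogonal matching pursuit (OMP) reconstruction; the sizes, sparsity level, and signal model are illustrative and not taken from the thesis.

```python
import numpy as np

def cs_measure(x, Phi):
    """Compress a length-N signal to M < N random projections: y = Phi @ x."""
    return Phi @ x

def omp(y, Phi, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Least-squares fit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
N, M, k = 256, 64, 5                        # illustrative sizes
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(M, N)) / np.sqrt(M)  # Gaussian sensing matrix
x_hat = omp(cs_measure(x, Phi), Phi, k)
print("reconstruction SNR (dB):",
      20 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - x_hat)))
```

On a node, only the M-dimensional measurement would be transmitted, which is where the power saving comes from; reconstruction runs at the sink.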
Abstract:
Assessment of the integrity of structural components is of great importance for aerospace systems, land and marine transportation, civil infrastructures, and other biological and mechanical applications. Guided wave (GW) based inspections are an attractive means for structural health monitoring. This thesis presents the study and development of techniques for GW ultrasound signal analysis and compression in the context of non-destructive testing of structures. In guided wave inspections, it is necessary to address the problem of dispersion compensation. A signal processing approach based on frequency warping was adopted. This operator maps the frequency axis through a function derived from the group velocity of the test material and is used to remove the dependence of the acquired signals on the travelled distance. This processing strategy was fruitfully applied to impact location and damage localization tasks in composite and aluminum panels. It has been shown that, based on this processing tool, low-power embedded systems for GW structural monitoring can be implemented. Finally, a new procedure based on Compressive Sensing was developed and applied for data reduction. This procedure also has the beneficial effect of enhancing the accuracy of structural defect localization. The algorithm uses a convolutive model of the propagation of ultrasonic guided waves that takes advantage of a sparse signal representation in the warped frequency domain. The recovery from the compressed samples is based on an alternating minimization procedure that achieves both an accurate reconstruction of the ultrasonic signal and a precise estimation of the waves' time of flight. This information is used to feed hyperbolic or elliptic localization procedures for accurate impact or damage localization.
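To make the frequency-warping idea more tangible, here is a deliberately simplified sketch: the spectrum is resampled through a warping map built from the reciprocal group velocity, so that the distance-dependent group delay collapses into a plain shift. The toy dispersion curve, the normalization, and the omission of the unitary amplitude weighting are all simplifications relative to the operator developed in the thesis.

```python
import numpy as np

def warp_dispersion(x, fs, vg):
    """Crude frequency-warping sketch: resample the spectrum X(f) onto
    X(w(f)), with w built from the reciprocal group velocity so that the
    distance-dependent group delay becomes a frequency-independent shift.
    (The unitary amplitude weighting sqrt(|w'(f)|) is omitted here.)"""
    N = len(x)
    f = np.fft.rfftfreq(N, 1.0 / fs)
    X = np.fft.rfft(x)
    w = np.cumsum(1.0 / vg(f))          # warping map, up to normalization
    w *= f[-1] / w[-1]                  # keep the warped axis inside the band
    Xw = np.interp(w, f, X.real) + 1j * np.interp(w, f, X.imag)
    return np.fft.irfft(Xw, n=N)

# Toy group-velocity curve (m/s), strictly positive over the band;
# a real curve would come from the dispersion model of the test plate.
vg = lambda f: 1000.0 + 0.5 * f
rng = np.random.default_rng(0)
y = warp_dispersion(rng.normal(size=1024), fs=1e6, vg=vg)
```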
Abstract:
This thesis develops high-performance real-time signal processing modules for direction of arrival (DOA) estimation in localization systems. It proposes highly parallel algorithms for performing subspace decomposition and polynomial rooting, which are traditionally implemented with sequential algorithms. The proposed algorithms address the emerging need for real-time localization in a wide range of applications. As the antenna array size increases, so does the complexity of the signal processing algorithms, making it increasingly difficult to satisfy real-time constraints. This thesis addresses real-time implementation by proposing parallel algorithms that maintain a considerable improvement over traditional algorithms, especially for systems with a larger number of antenna array elements. Singular value decomposition (SVD) and polynomial rooting are two computationally complex steps and act as the bottleneck to achieving real-time performance. The proposed algorithms are suitable for implementation on field programmable gate arrays (FPGAs), single instruction multiple data (SIMD) hardware, or application-specific integrated circuits (ASICs), which offer large numbers of processing elements that can be exploited for parallel processing. The designs proposed in this thesis are modular, easily expandable, and easy to implement. First, the thesis proposes a fast-converging SVD algorithm. The proposed method reduces the number of iterations needed to converge to the correct singular values, thus coming closer to real-time performance. A general algorithm and a modular system design are provided, making it easy for designers to replicate and extend the design to larger matrix sizes. Moreover, the method is highly parallel, which can be exploited on the various hardware platforms mentioned earlier. A fixed-point implementation of the proposed SVD algorithm is presented. The FPGA design is pipelined to the maximum extent to increase the maximum achievable frequency of operation. The system was developed with the objective of achieving high throughput, and the various modern cores available in FPGAs that were used to maximize performance are described in detail. Finally, a parallel polynomial rooting technique based on Newton's method, applicable exclusively to root-MUSIC polynomials, is proposed. Unique characteristics of the root-MUSIC polynomial's complex dynamics were exploited to derive this rooting method. The technique exhibits parallelism and converges to the desired roots within a fixed number of iterations, making it suitable for rooting polynomials of large degree. We believe this is the first time the complex dynamics of the root-MUSIC polynomial have been analyzed to propose an algorithm. In all, the thesis addresses two major bottlenecks in a direction of arrival estimation system by providing simple, high-throughput, parallel algorithms.
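The parallelism claimed for the Newton-based rooting step can be illustrated with a vectorized sketch like the one below; the polynomial and the ring of starting points are illustrative and do not reproduce the thesis' root-MUSIC-specific initialization or convergence analysis.

```python
import numpy as np

def newton_roots_parallel(coeffs, z0, iters=50):
    """Vectorized Newton iterations: every starting point in z0 is updated
    in lockstep, the kind of data parallelism that maps directly onto
    FPGA/SIMD processing elements."""
    dcoeffs = np.polyder(coeffs)
    z = np.asarray(z0, dtype=complex)
    for _ in range(iters):
        p = np.polyval(coeffs, z)
        dp = np.polyval(dcoeffs, z)
        dp = np.where(dp == 0, 1.0, dp)   # guard against a zero derivative
        z = z - p / dp
    return z

# Illustrative polynomial z^4 - 1; starting points on a ring near the unit
# circle (the root-MUSIC roots of interest also cluster near the unit circle).
coeffs = np.array([1, 0, 0, 0, -1], dtype=complex)
z0 = 1.1 * np.exp(2j * np.pi * np.arange(16) / 16)
print(np.round(newton_roots_parallel(coeffs, z0), 6))
```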
Abstract:
OBJECTIVES: To establish whether complex signal processing is beneficial for users of bone-anchored hearing aids. METHODS: Review and analysis of two studies from our own group, each comparing a speech processor with basic digital signal processing (either Baha Divino or Baha Intenso) and a processor with complex digital signal processing (either Baha BP100 or Baha BP110 power). The main differences between basic and complex signal processing are the number of audiologist-accessible frequency channels and the availability and complexity of the directional multi-microphone noise reduction and loudness compression systems. RESULTS: Both studies show a small, statistically non-significant improvement of speech understanding in quiet with complex digital signal processing. The average improvement for speech in noise is +0.9 dB if speech and noise are both emitted from the front of the listener. If noise is emitted from the rear and speech from the front of the listener, the advantage of the devices with complex digital signal processing over those with basic signal processing increases, on average, to +3.2 dB (range +2.3 … +5.1 dB, p ≤ 0.0032). DISCUSSION: Complex digital signal processing does indeed improve speech understanding, especially in noise coming from the rear. This finding is supported by another study, published recently by a different research group. CONCLUSIONS: Compared to basic digital signal processing, complex digital signal processing can increase speech understanding of users of bone-anchored hearing aids. The benefit is most pronounced for speech understanding in noise.
Abstract:
LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber–Fechner law to encode the error between colour component predictions and the actual values of those components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels, and the errors between the predictions and the actual values are logarithmically quantised. The main advantage of LHE is that, although it is capable of low-bit-rate encoding with high-quality results in terms of peak signal-to-noise ratio (PSNR) and both full-reference (FSIM) and no-reference (blind/referenceless image spatial quality evaluator, BRISQUE) image quality metrics, its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, in which the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bit-per-pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG 2000, while being more computationally efficient.
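A toy sketch of the core mechanism as the abstract describes it, assuming a simple previous-pixel predictor and an invented geometric hop table (the published LHE predictor and hop values differ):

```python
import numpy as np

def lhe_encode_row(row, hops=(0, 4, 8, 16, 32, 64, 128)):
    """Toy LHE-style encoder for one row: predict each pixel from the
    previously reconstructed one and emit the index of the signed
    logarithmic 'hop' closest to the prediction error."""
    levels = np.array(sorted({h for h in hops} | {-h for h in hops}))
    codes, recon = [], [int(row[0])]           # first pixel sent verbatim
    for px in row[1:]:
        err = int(px) - recon[-1]              # prediction error
        code = int(np.argmin(np.abs(levels - err)))
        codes.append(code)
        recon.append(int(np.clip(recon[-1] + levels[code], 0, 255)))
    return codes, np.array(recon, dtype=np.uint8)

row = np.array([100, 103, 110, 140, 138, 60, 61], dtype=np.uint8)
codes, recon = lhe_encode_row(row)
print("codes:", codes)
print("reconstruction:", recon)
```

Each pixel costs a constant number of operations and only the previous reconstructed value is kept, which is where the O(n) time and O(1) memory claims come from.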
Abstract:
In this work we review some earlier distributed algorithms developed by the authors and collaborators, based on two different approaches, namely distributed moment estimation and distributed stochastic approximation. We show applications of these algorithms to image compression, linear classification, and stochastic optimal control. In all cases, the benefit of cooperation is clear: even when the nodes have access to only small portions of the data, by exchanging their estimates they achieve the same performance as a centralized architecture that gathers all the data from all the nodes.
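As a minimal illustration of that cooperation principle, the sketch below uses plain consensus averaging as a stand-in for the authors' estimation schemes; the graph, step size, and iteration count are arbitrary choices.

```python
import numpy as np

def consensus(estimates, A, iters=200, step=0.2):
    """Iterative neighbour averaging on a graph with adjacency matrix A.
    Each node mixes its value with its neighbours'; on a connected graph
    all nodes converge to the global mean of the initial estimates."""
    x = estimates.astype(float).copy()
    deg = A.sum(axis=1)
    for _ in range(iters):
        x = x + step * (A @ x - deg * x)   # Laplacian-based update
    return x

# Ring of 5 nodes, each holding one local measurement.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]])
local = np.array([1.0, 4.0, 2.0, 8.0, 5.0])
print(consensus(local, A), "centralized mean:", local.mean())
```

No node ever sees all of the data, yet every node ends up with the centralized answer, which is the qualitative point the abstract makes.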
Abstract:
A large number of image processing applications work with different performance requirements and available resources. Recent advances in image compression focus on reducing image size and processing time, but offer no real-time solutions that provide time/quality flexibility for the resulting image, for instance when transmitting the image contents of web pages. In this paper we propose a method for encoding still images, based on the JPEG standard, that allows the compression/decompression time cost and image quality to be adjusted to the needs of each application and to the bandwidth conditions of the network. The real-time control is based on a collection of adjustable parameters relating both to aspects of the implementation and to the hardware on which the algorithm runs. The proposed encoding system is evaluated in terms of compression ratio, processing delay, and quality of the compressed image, compared with the standard method.
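A rough illustration of such a time/quality adjustment, using Pillow's stock JPEG encoder and its quality parameter as a stand-in for the paper's parameter set; the simple fit-the-budget loop below is only a caricature of the proposed real-time controller.

```python
import io
import time
from PIL import Image

def encode_within_budget(img, max_bytes, qualities=(95, 85, 70, 50, 30)):
    """Try successively coarser JPEG quality settings until the encoded
    size fits the bandwidth budget; return (bytes, quality, seconds)."""
    for q in qualities:
        buf = io.BytesIO()
        t0 = time.perf_counter()
        img.save(buf, format="JPEG", quality=q)
        dt = time.perf_counter() - t0
        if buf.tell() <= max_bytes:
            return buf.getvalue(), q, dt
    return buf.getvalue(), q, dt           # coarsest setting as fallback

img = Image.new("RGB", (640, 480), (120, 80, 200))   # placeholder image
data, q, dt = encode_within_budget(img, max_bytes=20_000)
print(len(data), "bytes at quality", q, f"in {dt * 1000:.1f} ms")
```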
Abstract:
Pulse compression techniques originated in radar. The present work is concerned with the utilization of these techniques in general, and the linear FM (LFM) technique in particular, for communications. It introduces these techniques from an optimum communications viewpoint and outlines their capabilities. It also considers the candidacy of the class of LFM signals for digital data transmission and studies the LFM spectrum. Work related to the utilization of LFM signals for digital data transmission has been mostly experimental and mainly concerned with employing two rectangular LFM pulses (or chirps) with reversed slopes to convey the bits 1 and 0 in an incoherent mode. No systematic theory for LFM signal design and system performance has been available. Accordingly, the present work establishes such a theory, taking into account coherent and noncoherent single-link and multiplex signalling modes. Some new results concerning the slope-reversal chirp pair are obtained. The LFM technique combines the typical capabilities of pulse compression with relative ease of implementation. However, these merits are often hampered by the difficulty of handling the LFM spectrum, which cannot generally be expressed in closed form. The common practice is to obtain a plot of this spectrum with a digital computer for every single set of LFM pulse parameters. Moreover, reported work has justly been confined to the spectrum of an ideally rectangular chirp pulse with no rise or fall times. Accordingly, the present work comprises a systematic study of the LFM spectrum which takes the rise and fall times of the chirp pulse into account and can accommodate any LFM pulse with any parameters. It formulates rather simple and accurate prediction criteria concerning the behaviour of this spectrum in the different frequency regions. These criteria facilitate the handling of the LFM technique in theory and practice.
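A minimal sketch of the slope-reversal chirp signalling scheme described above, with an up-chirp for bit 1, a down-chirp (reversed slope) for bit 0, and incoherent detection by comparing matched-filter peak magnitudes; pulse width, bandwidth, and noise level are illustrative.

```python
import numpy as np

fs, T, B = 1e6, 1e-3, 100e3                # sample rate, pulse width, sweep bandwidth
t = np.arange(int(fs * T)) / fs
k = B / T                                  # chirp rate
up = np.exp(1j * np.pi * k * t**2)         # bit 1: LFM slope +k
down = np.exp(-1j * np.pi * k * t**2)      # bit 0: LFM slope -k (reversed slope)

def detect(rx):
    """Incoherent detection: compare matched-filter (pulse compression)
    peak magnitudes for the two chirp templates."""
    peak_up = np.max(np.abs(np.correlate(rx, up, mode="same")))
    peak_down = np.max(np.abs(np.correlate(rx, down, mode="same")))
    return 1 if peak_up > peak_down else 0

rng = np.random.default_rng(1)
rx = up + 0.5 * (rng.normal(size=t.size) + 1j * rng.normal(size=t.size))
print("decoded bit:", detect(rx))          # expect 1 despite the noise
```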
Abstract:
The problem of structured noise suppression is addressed by (i) modelling the subspaces hosting the components of the signal conveying the information and (ii) applying a nonlinear, non-extensive technique to effect the right separation. Although the approach is applicable to all situations satisfying the hypotheses of the proposed framework, this work is motivated by a particular scenario, namely the cancellation of low-frequency noise in broadband seismic signals.
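The nonlinear, non-extensive technique itself is not specified in the abstract; as a baseline for comparison, the sketch below shows the plain linear alternative it improves upon: model the low-frequency noise with a small set of basis vectors and remove it by orthogonal projection. Basis choice and sizes are illustrative.

```python
import numpy as np

def project_out(x, B):
    """Remove the component of x lying in the subspace spanned by the
    columns of B (orthogonal projection onto the complement of span(B))."""
    Q, _ = np.linalg.qr(B)          # orthonormal basis for the noise subspace
    return x - Q @ (Q.T @ x)

N = 512
t = np.arange(N) / N
# Noise subspace: a few low-frequency sinusoids (the structured-noise model).
B = np.column_stack([np.sin(2 * np.pi * f * t) for f in (1, 2, 3)]
                    + [np.cos(2 * np.pi * f * t) for f in (1, 2, 3)])
signal = np.sin(2 * np.pi * 40 * t)              # broadband/high-frequency content
noisy = signal + 3 * np.sin(2 * np.pi * 2 * t)   # strong low-frequency interference
clean = project_out(noisy, B)
print("residual noise energy:", np.round(np.linalg.norm(clean - signal), 4))
```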
Abstract:
The development of zero-carbon powertrains has become one of the main challenges for automotive industries around the world. Following this guideline, several approaches, such as powertrain electrification, advanced combustion concepts, and hydrogen internal combustion engines, have been pursued to achieve this goal. Low Temperature Combustion (LTC) strategies, characterized by a simultaneous reduction of fuel consumption and emissions, represent one of the most studied solutions on the way towards sustainable mobility. Previous research demonstrates that Gasoline partially premixed Compression Ignition (GCI) combustion is one of the most promising LTC concepts. Mainly characterized by high-pressure direct injection of gasoline and spontaneous ignition of the premixed air-fuel mixture, GCI combustion has shown good potential to achieve the high thermal efficiency and low pollutant emissions in compression-ignited engines required by future emission regulations. Despite its potential, GCI combustion may suffer from low combustion controllability and stability, because gasoline spontaneous ignition is significantly affected by slight variations of the local in-cylinder thermal conditions. Therefore, to properly control GCI combustion while assuring maximum performance, a deep knowledge of the combustion process, i.e., of gasoline auto-ignition and the effect of the control parameters on combustion and pollutants, is mandatory. This PhD dissertation focuses on the study of GCI combustion in a light-duty compression-ignited engine. Starting from a standard 1.3 L diesel engine, this work describes the activities carried out towards the full conversion of the engine. A preliminary study of GCI combustion was conducted in a single-cylinder engine configuration, highlighting the combustion characteristics and their dependence on the control parameters. Then the full engine conversion was performed, and a wide experimental campaign confirmed the benefits of this advanced combustion methodology in terms of pollutants and thermal efficiency. The analysis of the in-cylinder pressure signal enabled an in-depth study of GCI combustion and the development of control-oriented models aimed at improving combustion stability.
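As background for the pressure-signal analysis mentioned above, the sketch below computes the standard single-zone apparent heat-release rate, dQ/dθ = γ/(γ−1)·p·dV/dθ + 1/(γ−1)·V·dp/dθ, which analyses of this kind typically build on; the engine geometry, γ value, and synthetic pressure trace are illustrative, not the thesis' data.

```python
import numpy as np

def heat_release_rate(theta, p, V, gamma=1.33):
    """Single-zone apparent heat-release rate from an in-cylinder pressure
    trace: dQ/dtheta = g/(g-1)*p*dV/dtheta + 1/(g-1)*V*dp/dtheta."""
    dV = np.gradient(V, theta)
    dp = np.gradient(p, theta)
    return gamma / (gamma - 1) * p * dV + 1 / (gamma - 1) * V * dp

def cylinder_volume(theta, Vc=3e-5, bore=0.07, stroke=0.08, conrod=0.13):
    """Slider-crank cylinder volume vs. crank angle in radians (toy geometry)."""
    r = stroke / 2
    x = r * (1 - np.cos(theta)) + conrod - np.sqrt(conrod**2 - (r * np.sin(theta))**2)
    return Vc + np.pi * bore**2 / 4 * x

theta = np.deg2rad(np.linspace(-60.0, 60.0, 241))   # crank-angle window
V = cylinder_volume(theta)
p = 2e6 * (V.max() / V) ** 1.33                     # polytropic baseline trace
p *= 1 + 0.6 * np.exp(-((theta - np.deg2rad(10)) / np.deg2rad(8)) ** 2)  # toy burn
print("peak apparent HRR (J/rad):",
      round(float(heat_release_rate(theta, p, V).max()), 1))
```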
Abstract:
Friction and triboelectrification of materials show a strong correlation during sliding contacts. Friction force fluctuations are always accompanied by two tribocharging events at metal-insulator [e.g., polytetrafluoroethylene (PTFE)] interfaces: injection of charged species from the metal into PTFE, followed by a flow of charges from PTFE back to the metal surface. Adhesion maps obtained by atomic force microscopy (AFM) show that, within the region of contact, the pull-off force increases from 10 to 150 nN, reflecting a resilient electrostatic adhesion between PTFE and the metallic surface. The reported results suggest that friction and triboelectrification have a common origin that must be associated with the occurrence of strong electrostatic interactions at the interface.
Abstract:
OBJECTIVES: The complexity and heterogeneity of human bone, as well as ethical issues, almost always hinder the performance of clinical trials. Thus, in vitro studies become an important source of information for understanding biomechanical events in implant-supported prostheses, although study results cannot be considered reliable unless validation studies are conducted. The purpose of this work was to validate an artificial experimental model, based on its modulus of elasticity, to simulate the performance of human bone in vivo in biomechanical studies of implant-supported prostheses. MATERIAL AND METHODS: In this study, fast-curing polyurethane (F16 polyurethane, Axson) was used to build 40 specimens that were divided into five groups. The following reagent ratios (part A/part B) were used: Group A (0.5/1.0), Group B (0.8/1.0), Group C (1.0/1.0), Group D (1.2/1.0), and Group E (1.5/1.0). A universal testing machine (Kratos model K - 2000 MP) was used to measure modulus of elasticity values in compression. RESULTS: Mean modulus of elasticity values were: Group A - 389.72 MPa, Group B - 529.19 MPa, Group C - 571.11 MPa, Group D - 470.35 MPa, Group E - 437.36 MPa. CONCLUSION: The best mechanical characteristics and a modulus of elasticity comparable to that of human trabecular bone were obtained with an A/B ratio of 1:1.
Abstract:
We consider distributions $u \in \mathcal{S}'(\mathbb{R})$ of the form $u(t) = \sum_{n \in \mathbb{N}} a_n e^{i\lambda_n t}$, where $(a_n)_{n \in \mathbb{N}} \subset \mathbb{C}$ and $\Lambda = (\lambda_n)_{n \in \mathbb{N}} \subset \mathbb{R}$ have the following properties: $(a_n)_{n \in \mathbb{N}} \in s'$, that is, there is a $q \in \mathbb{N}$ such that $(n^{-q} a_n)_{n \in \mathbb{N}} \in \ell^1$; for the real sequence $\Lambda$, there are $n_0 \in \mathbb{N}$, $C > 0$, and $\alpha > 0$ such that $n \geq n_0 \Rightarrow |\lambda_n| \geq C n^{\alpha}$. Let $I(\epsilon) \subset \mathbb{R}$ be an interval of length $\epsilon$. We prove that, for given $\Lambda$: (1) if $\Lambda = O(n^{\alpha})$ with $\alpha < 1$, then there exists $\epsilon > 0$ such that $u|_{I(\epsilon)} = 0 \Rightarrow u \equiv 0$; (2) if $\Lambda = O(n)$ is uniformly discrete, then there exists $\epsilon > 0$ such that $u|_{I(\epsilon)} = 0 \Rightarrow u \equiv 0$; (3) if $\alpha > 1$ and $\Lambda$ is uniformly discrete, then for all $\epsilon > 0$, $u|_{I(\epsilon)} = 0 \Rightarrow u = 0$. Since distributions of the above form are very common in engineering, as in the modeling of ocean waves, signal processing, and vibrations of beams, plates, and shells, these uniqueness and non-uniqueness results have important consequences for identification problems in the applied sciences. We show an identification method and close this article with a simple example showing that the recovery of geometrical imperfections in a cylindrical shell is possible from a measurement of its dynamics.
Abstract:
The analysis of Macdonald for electrolytes is generalized to the case in which two groups of ions are present. We assume that the electrolyte can be considered a dispersion of ions in a dielectric liquid and that ionic recombination can be neglected. We present the differential equations governing the ionic redistribution when the liquid is subjected to an external electric field, describing the simultaneous diffusion of the two groups of ions in the presence of their own space-charge fields. We investigate the influence of the ions on the impedance spectroscopy of an electrolytic cell. In the analysis, we assume that the ions within each group have equal mobility, that the electrodes are perfectly blocking, and that adsorption phenomena can be neglected. In this framework, it is shown that the real part of the electrical impedance of the cell has a frequency dependence presenting two plateaux, related to ambipolar and free diffusion coefficients. The importance of the considered problem for the ionic characterization performed by means of the impedance spectroscopy technique is discussed. (c) 2008 American Institute of Physics.