17 results for compressed sensing compressive sensing CS norma l1
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The idea of matching the resources spent in the acquisition and encoding of natural signals to their intrinsic information content has driven nearly a decade of research under the name of compressed sensing. In this doctoral dissertation we develop some extensions and improvements upon this technique's foundations by modifying the random sensing matrices on which the signals of interest are projected, in order to achieve different objectives. Firstly, we propose two methods for the adaptation of sensing matrix ensembles to the second-order moments of natural signals. These techniques leverage the maximisation of different proxies for the quantity of information acquired by compressed sensing, and are efficiently applied in the encoding of electrocardiographic tracks with minimum-complexity digital hardware. Secondly, we focus on the possibility of using compressed sensing as a method to provide a partial, yet cryptanalysis-resistant form of encryption; in this context, we show how a random matrix generation strategy with a controlled amount of perturbations can be used to distinguish between multiple user classes with different quality of access to the encrypted information content. Finally, we explore the application of compressed sensing in the design of a multispectral imager, by implementing an optical scheme that entails a coded aperture array and Fabry-Pérot spectral filters. The signal recoveries obtained by processing real-world measurements show promising results that leave room for improvement in the sensing matrix calibration problem of the devised imager.
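For context on the acquisition model shared by several of the theses listed here, the following is a minimal, self-contained sketch of compressed sensing with a random antipodal sensing matrix and an l1-based recovery via iterative soft thresholding (ISTA). It is an illustration only, not code from the dissertation, and every parameter in it is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8               # signal length, number of measurements, sparsity

# k-sparse signal in the canonical basis (a stand-in for a generic sparsifying domain)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# random antipodal (+1/-1) sensing matrix and compressed measurements
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = Phi @ x

# l1-regularized recovery via ISTA (iterative soft thresholding), a simple proxy for basis pursuit
lam = 1e-3
step = 1.0 / np.linalg.norm(Phi, 2) ** 2        # 1 / Lipschitz constant of the quadratic term
x_hat = np.zeros(n)
for _ in range(3000):
    z = x_hat - step * Phi.T @ (Phi @ x_hat - y)
    x_hat = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```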
Abstract:
It is usual to hear a strange short sentence: «Random is better than...». Why is randomness a good solution to a certain engineering problem? There are many possible answers, all of them related to the topic under consideration. In this thesis I discuss two crucial topics that benefit from randomizing some of the waveforms involved in signal manipulation. In particular, the advantages are obtained by shaping the second-order statistics of antipodal sequences involved in intermediate signal processing stages. The first topic is in the area of analog-to-digital conversion, and it is named Compressive Sensing (CS). CS is a novel paradigm in signal processing that tries to merge signal acquisition and compression into a single step; consequently, it allows a signal to be acquired directly in compressed form. In this thesis, after an ample description of the CS methodology and its related architectures, I present a new approach that tries to achieve high compression by designing the second-order statistics of a set of additional waveforms involved in the signal acquisition/compression stage. The second topic addressed in this thesis is in the area of communication systems; in particular, I focus on ultra-wideband (UWB) systems. One option to produce and decode UWB signals is direct-sequence spreading with multiple access based on code division (DS-CDMA). Focusing on this methodology, I address the coexistence of a DS-CDMA system with a narrowband interferer. To do so, I minimize the joint effect of both multiple access interference (MAI) and narrowband interference (NBI) on a simple matched filter receiver. I show that, when the statistical properties of the spreading sequences are suitably designed, performance improvements are possible with respect to a system exploiting chaos-based sequences that minimize MAI only.
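The central ingredient of this abstract, antipodal sequences whose second-order statistics are shaped by design, can be illustrated with the classical Gaussian-thresholding construction (sign of correlated Gaussians and the arcsine law). The exponential correlation profile below is a made-up target, not the one designed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_seq = 128, 20000
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
C_bin = 0.3 ** lags                       # made-up target correlation profile for the antipodal symbols

# For zero-mean, unit-variance jointly Gaussian u, v: E[sign(u) sign(v)] = (2/pi) * arcsin(E[u v]),
# so the Gaussian correlation needed to obtain C_bin after sign-thresholding is sin(pi/2 * C_bin).
C_gauss = np.sin(0.5 * np.pi * C_bin)
w, V = np.linalg.eigh(C_gauss)
B = V @ np.diag(np.sqrt(np.clip(w, 0.0, None)))   # clip any tiny negative eigenvalues, then factorize

seqs = np.sign(B @ rng.standard_normal((n, n_seq)))   # antipodal sequences, one per column
emp = (seqs @ seqs.T) / n_seq                          # empirical correlation matrix
print("max |empirical - target| correlation:", np.abs(emp - C_bin).max())
```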
Abstract:
This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks. After a general overview of sensor networks, the energy problem is introduced, dividing the different energy reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middlewares for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus is then shifted to in-network aggregation techniques, used to reduce the data sent by the network nodes and to prolong the network lifetime as long as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression implemented on single nodes to CS for signal ensembles, exploiting the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared on a common set of data gathered by real deployments. The best trade-off between reconstruction quality and power consumption is then investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group-sparsity CS (GS-CS) is compared with another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are again compared on a real data set, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
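To make the Kronecker CS idea mentioned above more concrete, here is a rough sketch of how ensemble-level sensing and sparsifying operators can be assembled as Kronecker products of per-dimension operators. The dimensions, bases and toy data are invented for illustration, and recovery is only indicated in the final comment.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(2)
J, n, m = 4, 64, 16                      # sensors, samples per sensor, measurements per sensor

# toy correlated ensemble: a common smooth component plus small per-node innovations
t = np.linspace(0.0, 1.0, n)
common = np.sin(2 * np.pi * 3 * t)
X = np.stack([common + 0.05 * rng.standard_normal(n) for _ in range(J)])   # shape (J, n)

Phi_t = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # per-node temporal sensing matrix
Psi_t = dct(np.eye(n), norm='ortho', axis=0)                # temporal sparsifying basis (DCT)
Psi_s = np.eye(J)                                           # spatial basis (identity, for simplicity)

# Kronecker-structured global operators acting on the stacked ensemble vec(X)
Phi_kron = np.kron(np.eye(J), Phi_t)     # each node compresses only its own samples
Psi_kron = np.kron(Psi_s, Psi_t)         # joint sparsifying basis for the whole ensemble

y = Phi_kron @ X.reshape(-1)             # all measurements, as gathered at the sink
print(y.shape, Phi_kron.shape, Psi_kron.shape)
# Joint recovery would then solve a single sparse problem in the coefficients a,
# with vec(X) ~ Psi_kron @ a and y ~ (Phi_kron @ Psi_kron) @ a, e.g. by l1 minimization.
```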
Abstract:
The convergence between the recent developments in sensing technologies, data science, signal processing and advanced modelling has fostered a new paradigm for the Structural Health Monitoring (SHM) of engineered structures, namely one based on intelligent sensors, i.e., embedded devices capable of processing data streams and/or performing structural inference in a self-contained, near-sensor manner. To efficiently exploit these intelligent sensor units for full-scale structural assessment, a joint effort is required to deal with the instrumental aspects related to signal acquisition, conditioning and digitalization, and with those pertaining to data management, data analytics and information sharing. In this framework, the main goal of this Thesis is to tackle the multi-faceted nature of the monitoring process via a full-scale optimization of the hardware and software resources involved in the SHM system. The pursuit of this objective has required the investigation of both: i) transversal aspects common to multiple application domains at different abstraction levels (such as knowledge distillation, networking solutions, microsystem HW architectures), and ii) the specificities of the monitoring methodologies (vibrations, guided waves, acoustic emission monitoring). The key tools adopted in the proposed monitoring frameworks belong to the embedded signal processing field: namely, graph signal processing, compressed sensing, ARMA system identification, digital data communication and TinyML.
Abstract:
The common thread of this thesis is the investigation of the properties and behavior of assemblies. Groups of objects display peculiar properties, which can be very far from the simple sum of the properties of their components. This is all the more true the smaller the inter-object distance, i.e. the higher their density, and the smaller the container. “Confinement” is in fact a key concept in many of the topics explored and reported here. It can be conceived as a spatial limitation that nevertheless gives rise to unexpected processes and phenomena based on inter-object communication. Such phenomena eventually result in “non-linear properties”, responsible for the low predictability of large assemblies. Chapter 1 provides two insights on surface chemistry, namely (i) a supramolecular assembly based on orthogonal forces, and (ii) selective and sensitive fluorescent sensing in a thin polymeric film. In chapters 2 to 4 the confinement of molecules plays a major role. Most of the work focuses on FRET within core-shell nanoparticles, investigated both through a simulation model and through experiments. Exciting results of great applicative interest are obtained, such as a method of tuning the emission wavelength at constant excitation, and a way of overcoming self-quenching processes by setting up a competitive deactivation channel. We envisage applications of these materials as labels for multiplexing analysis, and in all fields of fluorescence imaging where brightness coupled with biocompatibility and water solubility is required. Adducts of nanoparticles and molecular photoswitches are investigated in the context of superresolution techniques for fluorescence microscopy. In chapter 5 a method is proposed to prepare a library of functionalized Pluronic F127, which gives access to a twofold “smart” nanomaterial, namely both (i) luminescent and (ii) surface-functionalized SCSSNPs. In chapter 6 the focus shifts to confinement effects at a larger size scale. Moving from nanometers to micrometers, we investigate the interplay of microparticles flowing in microchannels, where a constriction affects the structure and dynamics of the colloidal paste at very long range.
Abstract:
Remote sensing (RS) techniques have evolved into an important instrument for investigating forest function. New methods based on the remote detection of leaf biochemistry and photosynthesis are being developed and applied in pilot studies from airborne and satellite platforms (PRI, solar-induced fluorescence; N and chlorophyll content). Non-destructive monitoring methods, a direct application of RS studies, are also proving increasingly attractive for the determination of stress conditions or nutrient deficiencies, not only in research but also in agronomy, horticulture and urban forestry (proximal RS). In this work I focus on some novel techniques recently developed for the estimation of photochemistry and photosynthetic rates based on (i) the proximal measurement of steady-state chlorophyll fluorescence yield, or (ii) the remote sensing of changes in hyperspectral leaf reflectance associated with xanthophyll de-epoxidation and energy partitioning, which is closely coupled to leaf photochemistry and photosynthesis. I also present and describe a mathematical model of leaf steady-state fluorescence and photosynthesis recently developed in our group. Two different species were used in the experiments: Arbutus unedo, a sclerophyllous Mediterranean species, and Populus euroamericana, a broadleaf deciduous tree widely used in plantation forestry. Results show that ambient fluorescence could provide a useful tool for probing photosynthetic processes from a distance. These results also confirm the photosynthetic reflectance index (PRI) as an efficient remote sensing reflectance index for estimating short-term changes in photochemical efficiency as well as long-term changes in leaf biochemistry. The study also demonstrated that RS techniques could provide a fast and reliable method to estimate photosynthetic pigment content and total nitrogen, besides assessing the state of the photochemical process in our plants' leaves in the field. This could have important practical applications for the management of plant cultivation systems and for the estimation of the nutrient requirements of our plants for optimal growth.
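As a side note on the reflectance index mentioned in this abstract, PRI is commonly computed from two narrow reflectance bands around 531 nm and 570 nm. The snippet below is a purely illustrative computation with invented reflectance values, not data from the study.

```python
import numpy as np

# Invented leaf reflectance samples at the two bands commonly used for PRI (531 nm and 570 nm)
r531 = np.array([0.052, 0.049, 0.047, 0.044])
r570 = np.array([0.050, 0.050, 0.051, 0.051])

pri = (r531 - r570) / (r531 + r570)      # PRI = (R531 - R570) / (R531 + R570)
print(pri)   # in the usual convention, lower PRI accompanies stronger xanthophyll de-epoxidation
```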
Abstract:
This study focuses on the various applications of thermal remote sensing in the urban environment. Infrared radiation and its interactions with the Earth's atmosphere, the main laws governing radiative heat exchange, the characteristics of the sensors and the different applications of thermography are described first. The characteristic aspects of satellite-based thermography, aimed mainly at assessing the Urban Heat Island phenomenon, are then treated in detail; the available sensors and the methods for correcting atmospheric effects, estimating surface emissivity and computing the surface temperature of the pixels are described. The experimentation carried out over the Bologna area with ASTER multispectral images is then illustrated: the results show that the Urban Heat Island is indeed detectable over the urban area, although its quantification proves complex. The potential and limitations of airborne thermography, its different uses, the operational survey procedures and the algorithms used to compute the surface temperature of building roofs are then described. Through the analysis of some previous experiences, the influence of the atmosphere, the modelling of its effects on the measured radiance and the different methods for estimating emissivity are discussed. The European project Energycity is then introduced, aimed at creating a GeoWeb spatial decision support system for the reduction of energy consumption and greenhouse gas production in seven Central European cities. The survey procedures and the processing of the digital datasets for creating surface temperature maps to be implemented in the SDSS system are illustrated. Finally, the experimentation carried out on the thermal images acquired over the city of Treviso in February 2010 is described: the images were transformed into a georeferenced mosaic of radiometric temperature through geometric and radiometric corrections, which, after correction for emissivity, will be converted into a surface temperature mosaic.
Abstract:
Electrochemical biosensors provide an attractive means to analyze the content of a biological sample, thanks to the direct conversion of a biological event into an electronic signal; this enables the development of cheap, small, portable and simple devices that allow multiplexed and real-time detection. At the same time, nanobiotechnology is drastically revolutionizing the development of biosensors, and different transduction strategies exploit concepts developed in this field to simplify the analysis operations for operators and end users, offering higher specificity, higher sensitivity, higher operational stability, integrated sample treatment and shorter analysis times. The aim of this PhD work has been the application of nanobiotechnological strategies to electrochemical biosensors for the detection of biological macromolecules. Specifically, one project focused on the application of a DNA nanotechnology called hybridization chain reaction (HCR) to amplify the hybridization signal in an electrochemical DNA biosensor. Another project on which the research activity focused concerns the development of an electrochemical biosensor based on a biological model membrane anchored to a solid surface (tBLM), for the recognition of interactions between the lipid membrane and different types of target molecules.
Abstract:
Future wireless communication systems are expected to be extremely dynamic, smart and capable of interacting with the surrounding radio environment. To implement such advanced devices, cognitive radio (CR) is a promising paradigm, focusing on strategies for acquiring information and learning. The first task of a cognitive system is spectrum sensing, which has been studied mainly in the context of opportunistic spectrum access, in which cognitive nodes must implement signal detection techniques to identify unused bands for transmission. In the present work, we study different spectrum sensing algorithms, focusing on their statistical description and on the evaluation of their detection performance. Starting from traditional sensing approaches, we consider the presence of practical impairments and analyze the resulting algorithm design. Without any ambition to cover the broad field of spectrum sensing, we aim at providing contributions to its main classes of techniques. In particular, in the context of energy detection we studied the practical design of the test in the case in which the noise power is estimated at the receiver. This analysis allows a deeper understanding of the SNR wall phenomenon, providing the conditions for its existence and showing that the presence of the SNR wall is determined by the accuracy of the noise power estimation process. In the context of eigenvalue-based detectors, which can be adopted by multi-sensor systems, we studied the practical situation in which the noise power is unbalanced across the receivers. We then shift the focus from single-band detectors to wideband sensing, proposing a new approach based on information-theoretic criteria. This technique is blind and, requiring no threshold setting, can be adopted even if the statistical distribution of the observed data is not known exactly. In the last part of the thesis we analyze some simple cooperative localization techniques based on weighted centroid strategies.
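A minimal Monte Carlo sketch of the scenario this abstract studies, an energy detector whose threshold is set from a noise power estimated at the receiver, might look as follows. The sample sizes, target false alarm rate and SNR are arbitrary, and the Gaussian approximation of the test statistic is an assumption made for brevity.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
N, M, trials = 200, 1000, 2000       # sensing samples, noise-calibration samples, Monte Carlo runs
pfa_target, snr_db = 0.05, -10.0
sigma2 = 1.0                          # true noise power (unknown to the detector)
sig_p = sigma2 * 10 ** (snr_db / 10)

fa = det = 0
for _ in range(trials):
    # noise power estimated from a separate noise-only window, as in the practical test design
    sigma2_hat = np.mean(rng.normal(0.0, np.sqrt(sigma2), M) ** 2)
    # Gaussian approximation of the scaled chi-square statistic under H0: mean sigma2, var 2*sigma2^2/N
    thr = sigma2_hat * (1.0 + norm.ppf(1.0 - pfa_target) * np.sqrt(2.0 / N))

    noise_only = rng.normal(0.0, np.sqrt(sigma2), N)
    fa += np.mean(noise_only ** 2) > thr                      # H0: noise only
    rx = rng.normal(0.0, np.sqrt(sig_p), N) + rng.normal(0.0, np.sqrt(sigma2), N)
    det += np.mean(rx ** 2) > thr                             # H1: signal plus noise
print("empirical Pfa:", fa / trials, "  Pd:", det / trials)
```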
Abstract:
During my doctoral studies I researched the remote detection of canopy N concentration in forest stands, its potential and its problems, from many overlapping perspectives. The study consisted of three parts. In the analysis of the S. Rossore 2000 dataset, I tested regressions between N concentration and NIR reflectances derived from different sources (field samples, airborne and satellite sensors). The analysis was further expanded using a larger dataset acquired in 2009 as part of a new campaign funded by the ESA. In both cases, a good correlation was observed between Landsat NIR, using both TM (2009) and ETM+ (2000) imagery, and N concentration measured by a CHN elemental analyzer. With the airborne sensors I did not obtain equally good results, mainly because of the large FOV of the two instruments and the anisotropy of vegetation reflectance. We also tested the relation between ground-based ASD measurements and nitrogen concentration, obtaining very good results. I therefore decided to expand the study to the regional level, focusing only on field and satellite measurements. I analyzed a large dataset for the whole of Catalonia, Spain; MODIS imagery was used, in consideration of its spectral characteristics and despite its rather poor spatial resolution. In this case too, a regression between nitrogen concentration and reflectance was found, but not as good as in the previous experiments. Moreover, vegetation type was found to play an important role in the observed relationship. We concluded that MODIS is not the most suitable satellite sensor for regions like Italy and Catalonia, which present a patchy and inhomogeneous vegetation cover; it could therefore be used for the parameterization of eco-physiological and biogeochemical models, but not for truly local nitrogen estimates. Multispectral sensors similar to the Landsat Thematic Mapper, with better spatial resolution, could thus be the most appropriate sensors to estimate N concentration.
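In its simplest form, the kind of relationship tested in this abstract, canopy N concentration regressed against NIR reflectance, reduces to an ordinary least-squares fit. The values below are synthetic placeholders, not measurements from the thesis.

```python
import numpy as np

# Synthetic placeholders: NIR reflectance (e.g. a Landsat NIR band) vs. canopy N concentration (%)
nir = np.array([0.28, 0.31, 0.35, 0.38, 0.41, 0.44, 0.47])
n_conc = np.array([1.1, 1.3, 1.5, 1.8, 1.9, 2.2, 2.4])

slope, intercept = np.polyfit(nir, n_conc, 1)          # ordinary least-squares line
pred = slope * nir + intercept
r2 = 1.0 - np.sum((n_conc - pred) ** 2) / np.sum((n_conc - n_conc.mean()) ** 2)
print(f"N% = {slope:.2f} * NIR + {intercept:.2f},  R2 = {r2:.3f}")
```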
Abstract:
Pervasive Sensing is a recent research trend that aims at providing widespread computing and sensing capabilities to enable the creation of smart environments that can sense, process, and act on input coming from both people and devices. The capabilities necessary for Pervasive Sensing are nowadays available on a plethora of devices, from embedded devices to PCs and smartphones. The wide availability of new devices and the large amount of data they can access enable a wide range of novel services in different areas, spanning from simple data collection systems to socially-aware collaborative filtering. However, the strong heterogeneity and unreliability of devices and sensors pose significant challenges. So far, existing works on Pervasive Sensing have focused only on limited portions of the whole stack of available devices and of the data they can use, proposing and developing mainly vertical solutions. The push from academia and industry for this kind of service shows that the time is ripe for a more general support framework for Pervasive Sensing solutions, able to enhance frail architectures, promote a well-balanced usage of resources on different devices, and enable the widest possible access to sensed data, while ensuring minimal energy consumption on battery-operated devices. This thesis focuses on pervasive sensing systems in order to extract design guidelines as the foundation of a comprehensive reference model for multi-tier Pervasive Sensing applications. The validity of the proposed model is tested in five different scenarios that present peculiar and different requirements, and different hardware and sensors. The ease of mapping from the proposed logical model to the real implementations and the positive results of the performance campaigns prove the quality of the proposed approach and offer a reliable reference model, together with a direction for the design and deployment of future Pervasive Sensing applications.
Abstract:
Assessment of the integrity of structural components is of great importance for aerospace systems, land and marine transportation, civil infrastructures and other biological and mechanical applications. Guided wave (GW) based inspections are an attractive means for structural health monitoring. This thesis presents the study and development of techniques for GW ultrasound signal analysis and compression in the context of non-destructive testing of structures. In guided wave inspections, it is necessary to address the problem of dispersion compensation. A signal processing approach based on frequency warping was adopted. This operator maps the frequency axis through a function derived from the group velocity of the test material and is used to remove the dependence of the acquired signals on the travelled distance. This processing strategy was fruitfully applied to impact location and damage localization tasks in composite and aluminum panels. It has been shown that, based on this processing tool, low-power embedded systems for GW structural monitoring can be implemented. Finally, a new procedure based on Compressive Sensing has been developed and applied for data reduction. This procedure also has a beneficial effect in enhancing the accuracy of structural defect localization. The algorithm uses a convolutive model of the propagation of ultrasonic guided waves which takes advantage of a sparse signal representation in the warped frequency domain. The recovery from the compressed samples is based on an alternating minimization procedure which achieves both an accurate reconstruction of the ultrasonic signal and a precise estimation of the waves' times of flight. This information is used to feed hyperbolic or elliptic localization procedures for accurate impact or damage localization.
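The final step described in this abstract, feeding estimated times of flight to a hyperbolic localization procedure, can be sketched as a small nonlinear least-squares problem. The sensor layout, wave velocity and simulated arrival times below are hypothetical and stand in for the CS-based estimates produced by the thesis' algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical sensor layout on a panel (metres) and an unknown impact point
sensors = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.4], [0.0, 0.4]])
impact_true = np.array([0.31, 0.22])
v = 1500.0                                # assumed non-dispersive group velocity, m/s

# Times of arrival, here simply simulated with a little jitter in place of the estimator's output
rng = np.random.default_rng(4)
toa = np.linalg.norm(sensors - impact_true, axis=1) / v + rng.normal(0.0, 1e-6, len(sensors))

# Hyperbolic (TDOA) localization: find the point whose arrival-time differences with
# respect to sensor 0 best match the measured ones, in a least-squares sense.
tdoa = toa - toa[0]
def residual(p):
    d = np.linalg.norm(sensors - p, axis=1)
    return (d - d[0]) / v - tdoa

sol = least_squares(residual, x0=np.array([0.25, 0.20]))
print("estimated impact point:", sol.x, " true:", impact_true)
```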
Abstract:
The aim of this work is to present various aspects of the numerical simulation of particle and radiation transport for industrial and environmental protection applications, enabling the analysis of complex physical processes in a fast, reliable, and efficient way. In the first part we deal with the speed-up of the numerical simulation of neutron transport for nuclear reactor core analysis. The convergence properties of the source iteration scheme of the Method of Characteristics applied to heterogeneous structured geometries have been enhanced by means of Boundary Projection Acceleration, enabling the study of 2D and 3D geometries with transport theory without spatial homogenization. The computational performance has been verified with the C5G7 2D and 3D benchmarks, showing a considerable reduction in iterations and CPU time. The second part is devoted to the study of the temperature-dependent elastic scattering of neutrons by heavy isotopes near the thermal zone. A numerical computation of the Doppler convolution of the elastic scattering kernel based on the gas model is presented, for a general energy-dependent cross section and scattering law in the center-of-mass system. The range of integration has been optimized by employing a numerical cutoff, allowing a faster numerical evaluation of the convolution integral. Legendre moments of the transfer kernel are subsequently obtained by direct quadrature, and a numerical analysis of the convergence is presented. In the third part we focus our attention on remote sensing applications of radiative transfer employed to investigate the Earth's cryosphere. The photon transport equation is applied to simulate the reflectivity of glaciers as a function of the age of the snow or ice layer, its thickness, the presence or absence of other underlying layers, and the amount of dust included in the snow, creating a framework able to decipher the spectral signals collected by orbiting detectors.
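One step mentioned in this abstract, obtaining Legendre moments of the transfer kernel by direct quadrature, can be illustrated generically as follows. The angular kernel here is a toy function, not the Doppler-broadened kernel computed in the thesis, and the normalization convention is one of several in use.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def legendre_moments(f, lmax, nquad=64):
    """Moments f_l = integral_{-1}^{1} f(mu) P_l(mu) dmu by Gauss-Legendre quadrature."""
    mu, w = leggauss(nquad)
    moments = []
    for l in range(lmax + 1):
        coeffs = np.zeros(l + 1)
        coeffs[l] = 1.0                      # Legendre series containing only the P_l term
        moments.append(np.sum(w * f(mu) * legval(mu, coeffs)))
    return np.array(moments)

# Toy forward-peaked angular kernel, standing in for the Doppler-broadened elastic scattering kernel
f = lambda mu: np.exp(3.0 * mu)
print(legendre_moments(f, 4))
```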
Abstract:
Sensors are devices in widespread use, from the detection of gas molecules to the tracking of chemical signals in biological cells. Single-walled carbon nanotube (SWCNT) and graphene-based electrodes have proven to be excellent materials for the development of electrochemical biosensors, as they display remarkable electronic properties and the ability to act as individual nanoelectrodes, exhibit excellent low-dimensional charge-carrier transport, and promote surface electrocatalysis. The present work aims at the preparation and investigation of electrochemically modified SWCNT and graphene-based electrodes for applications in the field of biosensors. We initially studied SWCNT films, focusing on their topography, surface composition, and electrical and optical properties. In parallel with SWCNTs, graphene films were investigated; higher resistance values were obtained in comparison with nanotube films. The electrochemical surface modification of both electrodes was investigated following two routes: (i) the electrografting of aryl diazonium salts, and (ii) the electrophilic addition of 1,3-benzodithiolylium tetrafluoroborate (BDYT). Both the qualitative and quantitative characteristics of the modified electrode surfaces were studied, such as the degree of functionalization and the surface composition. The combination of Raman spectroscopy, X-ray photoelectron spectroscopy, atomic force microscopy, electrochemistry and other techniques demonstrated that the selected precursors could be covalently anchored to the nanotube- and graphene-based electrode surfaces through novel carbon-carbon bond formation.