13 results for sensor-Cloud system
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In fluid dynamics research, pressure measurements are of great importance for defining the flow field acting on aerodynamic surfaces. The experimental approach is fundamental because it avoids the complexity of the mathematical models used to predict fluid phenomena. However, when in-situ sensors are used to monitor pressure over large domains with highly unsteady flows, classical techniques run into several problems related to transducer cost, intrusiveness, time response and operating range. An interesting approach for satisfying these sensor requirements is to implement a sensor network that acquires pressure data on the aerodynamic surface through a wireless communication system, collecting the data with the lowest possible level of environmental intrusion. In this thesis a wireless sensor network for pressure measurement in fluid fields has been designed, built and tested. To develop the system, a capacitive pressure sensor based on a polymeric membrane and read-out circuitry based on a microcontroller have been designed, built and tested. The wireless communication relies on the Zensys Z-WAVE platform, and network and data management have been implemented. Finally, the fully embedded system with antenna has been assembled. As a proof of concept, the monitoring of pressure on the top of the mainsail of a sailboat has been chosen as a working example.
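As a rough sketch of the read-out principle (the circuit and calibration used in the thesis are not reproduced here; all constants below are hypothetical), the pressure can be recovered from the measured capacitance by inverting a linear membrane calibration:

```python
# Hypothetical sketch: invert a linear calibration C = C0 + k * P of a
# capacitive membrane sensor. C0 and k below are illustrative values only.
def capacitance_to_pressure(c_measured: float,
                            c_rest: float = 10e-12,      # capacitance at zero pressure [F]
                            sensitivity: float = 2e-15   # assumed linear gain [F/Pa]
                            ) -> float:
    """Return the pressure [Pa] corresponding to a measured capacitance [F]."""
    return (c_measured - c_rest) / sensitivity
```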
Abstract:
During the last few years, several methods have been proposed to study and evaluate characteristic properties of the human skin using non-invasive approaches. These methods mostly cover aspects related either to dermatology, analyzing skin physiology and evaluating the effectiveness of medical treatments for skin diseases, or to dermocosmetics and cosmetic science, evaluating, for example, the effectiveness of anti-aging treatments. For these purposes a routine approach must be followed. Although very accurate and high-resolution measurements can be achieved with conventional methods, such as optical or mechanical profilometry, their use is quite limited, primarily because of the high cost of the required instrumentation, which is in turn usually cumbersome; this highlights some of the limitations of such methods for routine analysis. This thesis investigates the feasibility of a non-invasive skin characterization system based on the analysis of capacitive images of the skin surface. The system relies on a portable CMOS capacitive device that provides a capacitance map of the skin micro-relief with a resolution of 50 microns/pixel. To extract characteristic features of the skin topography, image analysis techniques such as watershed segmentation and wavelet analysis have been used to detect the main structures of interest: the wrinkles and plateaus of the typical micro-relief pattern. To validate the method, the features extracted from a dataset of capacitive skin images, acquired during dermatological examinations of a group of healthy volunteers, have been compared with the age of the subjects, showing good correlation with the skin ageing effect. A detailed comparison of the output of the capacitive sensor with optical profilometry of silicone replicas of the same skin area has revealed the potential and some limitations of this technology. Applications to follow-up studies, as needed to objectively evaluate the effectiveness of treatments in a routine manner, are also discussed.
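As a hedged illustration of how a watershed step of this kind can be set up (the parameters and pre-processing are assumptions, not the thesis pipeline), the following Python sketch uses scikit-image to split a capacitive skin image into plateau regions bounded by the darker wrinkle lines:

```python
# Illustrative sketch: segment micro-relief "plateaus" in a capacitive skin
# image via watershed, assuming a grayscale array where darker pixels
# correspond to wrinkle lines. Thresholds and min_distance are assumptions.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_plateaus(img: np.ndarray) -> np.ndarray:
    """Return a label map of plateau regions bounded by wrinkles."""
    gradient = sobel(img.astype(float))                      # wrinkle edges have high gradient
    distance = ndi.distance_transform_edt(img > img.mean())  # distance inside bright plateaus
    peaks = peak_local_max(distance, min_distance=5)         # one seed per plateau
    markers = np.zeros_like(img, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(gradient, markers)                      # flood from seeds along low gradient
```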
Abstract:
An Adaptive Optics (AO) system is a fundamental requirement for 8m-class telescopes. To reach the maximum resolution these telescopes allow, the atmospheric turbulence must be corrected. Thanks to adaptive optics systems we can exploit the full potential of these instruments, extracting as much information as possible from astronomical sources. An AO system has two main components: the wavefront sensor (WFS), which measures the aberrations of the wavefront entering the telescope, and the deformable mirror (DM), which assumes a shape opposite to the one measured by the sensor. The two subsystems are connected by the reconstructor (REC). To do this, the REC requires a "common language" between the two main AO components, i.e. a mapping between sensor-space and mirror-space called the interaction matrix (IM). Therefore, in order to operate correctly, an AO system has a main requirement: the measurement of an IM, which provides the calibration of the whole AO system. The IM measurement is a milestone for an AO system and must be performed regardless of the telescope size or class. Usually, this calibration step is carried out by adding an auxiliary artificial light source (i.e. a fiber) to the telescope that illuminates both the deformable mirror and the sensor, allowing the AO system to be calibrated. For larger telescopes (more than 8m, like Extremely Large Telescopes, ELTs), fiber-based IM measurement requires challenging optical setups that in some cases are impractical to build. In these cases, new techniques to measure the IM are needed. In this PhD work we investigate the possibility of a different calibration method that can be applied directly on sky, at the telescope, without any auxiliary source. Such a technique can be used to calibrate an AO system on a telescope of any size. We test this new calibration technique, called the "sinusoidal modulation technique", on the Large Binocular Telescope (LBT) AO system, which is already a complete AO system with the two main components: a secondary deformable mirror with 672 actuators, and a pyramid wavefront sensor. The first phase of my PhD work was helping to implement the WFS board (containing the pyramid sensor and all the auxiliary optical components), working on both the optical alignment and the testing of some optical components. Thanks to the "solar tower" facility of the Astrophysical Observatory of Arcetri (Firenze), we were able to reproduce an environment very similar to that of the telescope, testing the main LBT AO components: the pyramid sensor and the secondary deformable mirror. This made possible the second phase of my PhD work: the measurement of the IM using the sinusoidal modulation technique. At first we measured the IM using an auxiliary fiber source to calibrate the system, without any injected disturbance. Then we used the technique to measure the IM directly "on sky", that is, with an atmospheric disturbance added to the AO system. The results obtained in this PhD work, measuring the IM directly in the Arcetri solar tower, are crucial for future developments: the possibility of acquiring the IM directly on sky means that we can calibrate an AO system also for the extremely large telescope class, where classical IM measurement techniques are problematic and sometimes impossible.
Finally, we must not forget the reason why we need all this: the main aim is to observe the universe. Thanks to this new class of large telescopes, and only by using their full capabilities, we will be able to increase our knowledge of the objects we observe, because we will be able to resolve finer details, discovering, analyzing and understanding the behavior of the components of the universe.
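To make the role of the interaction matrix concrete, the following sketch uses generic AO notation (not the thesis' own formulation): the IM links mirror commands to sensor slopes, and driving each mode with a sinusoid at its own frequency allows the corresponding IM column to be recovered by synchronous demodulation, even in the presence of turbulence.

```latex
% Minimal sketch in generic AO notation (symbols are illustrative, not the
% thesis' definitions). The interaction matrix D maps mirror-mode commands c
% to WFS slopes s, and the reconstructor is its pseudo-inverse:
\[
  s = D\,c , \qquad R = D^{+} .
\]
% In a sinusoidal-modulation calibration, mode i is driven at its own
% frequency, c_i(t) = a_i \sin(2\pi f_i t), and column i of D is recovered by
% synchronous demodulation of each slope signal s_j(t) over a time T:
\[
  D_{ji} \;\approx\; \frac{2}{a_i T}\int_0^T s_j(t)\,\sin(2\pi f_i t)\,\mathrm{d}t ,
\]
% so that turbulence and noise, being uncorrelated with the probe frequency,
% average out as T grows.
```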
Abstract:
In the last few years, the vision of our connected and intelligent information society has evolved to embrace novel technological and research trends. The diffusion of ubiquitous mobile connectivity and advanced handheld portable devices has amplified the importance of the Internet as the communication backbone for accessing services and data. The diffusion of mobile and pervasive computing devices, featuring advanced sensing technologies and processing capabilities, has triggered the adoption of innovative interaction paradigms: touch-responsive surfaces, tangible interfaces and gesture or voice recognition are finally entering our homes and workplaces. We are experiencing the proliferation of smart objects and sensor networks, embedded in our daily lives and interconnected through the Internet. This ubiquitous network of always-available interconnected devices is enabling new applications and services, ranging from enhancements to home and office environments, to remote healthcare assistance and the birth of smart environments. This work presents some evolutions in the hardware and software development of embedded systems and sensor networks. Different hardware solutions are introduced, ranging from smart objects for interaction to advanced inertial sensor nodes for motion tracking, with a focus on system-level design. They are accompanied by the study of innovative data processing algorithms developed and optimized to run on board the embedded devices. Gesture recognition, orientation estimation and data reconstruction techniques for sensor networks are introduced and implemented, with the goal of optimizing the trade-off between performance and energy efficiency. Experimental results provide an evaluation of the accuracy of the presented methods and validate the efficiency of the proposed embedded systems.
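As one hedged example of the kind of on-board orientation estimation mentioned above (a generic complementary filter, not the specific algorithm developed in this work), the gyroscope, accurate over short times but drifting, can be blended with the accelerometer's gravity reference:

```python
# Hypothetical sketch of a complementary filter for pitch estimation on an
# inertial sensor node. The blending constant alpha is illustrative only.
import math

def complementary_filter(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Blend integrated gyro rate with accelerometer tilt (all angles in radians)."""
    pitch_gyro = pitch + gyro_rate * dt            # short-term: integrate angular rate
    pitch_acc = math.atan2(accel_x, accel_z)       # long-term: gravity direction
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_acc
```

Running one such update per inertial sample keeps the computation well within the budget of a microcontroller-class node, which is the kind of performance/energy trade-off targeted here.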
Abstract:
We have developed a data acquisition chain for the use and characterization of APSEL4D, a 32 x 128 Monolithic Active Pixel Sensor developed as a prototype for frontier experiments in high-energy particle physics. In particular, a transition board was built to convert between the chip and FPGA voltage levels and to improve signal quality. A Xilinx Spartan-3 FPGA was used for real-time data processing, chip control and communication with a personal computer through a USB 2.0 port; for this purpose the firmware was written in VHDL. Finally, a graphical user interface for online system monitoring, hit display and chip control, based on windows and widgets, was developed in C++ using the dedicated Qt and Qwt libraries. APSEL4D and the full acquisition chain were characterized for the first time with the electron beam of a transmission electron microscope and with 55Fe and 90Sr radioactive sources. In addition, a beam test was performed at the T9 station of the CERN PS, where hadrons with a momentum of 12 GeV/c are available. The very high time resolution of APSEL4D (up to 2.5 Mfps, here used at 6 kfps) was fundamental in realizing a single-electron Young experiment using nanometric double slits obtained by a focused ion beam (FIB) technique. On samples of high statistics, it was possible to observe the interference and diffraction of single, isolated electrons travelling inside a transmission electron microscope. For the first time, information on the distribution of the arrival times of the single electrons has been extracted.
Abstract:
Parkinson’s disease is a neurodegenerative disorder caused by the death of the dopaminergic neurons in the substantia nigra of the basal ganglia. The process that leads to these neural alterations is still unknown. Parkinson’s disease affects above all the motor sphere, with a wide array of impairments such as bradykinesia, akinesia, tremor, postural instability and singular phenomena such as freezing of gait. Moreover, in the last few years increasing attention has been given to the fact that the degeneration in the basal ganglia circuitry induces not only motor but also cognitive alterations, not necessarily implying dementia, and that dopamine loss has further implications due to dopamine-driven synaptic plasticity. At present, no neuroprotective treatment is available, and even if dopamine-replacement therapies and electrical deep brain stimulation improve the living conditions of patients, they often present side effects in the long term and cannot recover the neural loss, which continues to advance. In this thesis both motor and cognitive aspects of Parkinson’s disease and of the basal ganglia circuitry were investigated. The work first focuses on sensory and balance issues in Parkinson’s disease by means of a new instrumented method, based on inertial sensors, that provides further information about postural control and the postural strategies used to attain balance; this newly developed approach is then applied to assess balance control in mild and severe patients, both ON and OFF levodopa replacement. Given the inability of levodopa to recover balance issues, and the new physiological findings that underline the importance of non-dopaminergic neurotransmitters in Parkinson’s disease, an original computational model was then developed, focusing on acetylcholine, the most promising neurotransmitter according to physiology, and on its role in synaptic plasticity. The rationale of this thesis is that a multidisciplinary approach can provide insight into features of Parkinson’s disease that are still unresolved.
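As a purely illustrative sketch of instrumented balance assessment (the metrics and sensor placement below are assumptions, not the protocol of the thesis), simple sway indices can be computed from a trunk-worn inertial sensor during quiet standing:

```python
# Illustrative sketch: basic postural-sway metrics from a trunk-worn
# accelerometer, of the kind used to quantify balance control instrumentally.
import numpy as np

def sway_metrics(acc_ml: np.ndarray, acc_ap: np.ndarray, fs: float) -> dict:
    """acc_ml / acc_ap: medio-lateral and antero-posterior acceleration [m/s^2]."""
    acc_ml = acc_ml - acc_ml.mean()                 # remove gravity/offset component
    acc_ap = acc_ap - acc_ap.mean()
    rms = np.sqrt(np.mean(acc_ml**2 + acc_ap**2))   # overall sway magnitude
    jerk = np.gradient(acc_ap, 1.0 / fs)            # smoothness proxy (time derivative)
    return {"rms_acc": rms, "jerk_rms": np.sqrt(np.mean(jerk**2))}
```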
3D Surveying and Data Management towards the Realization of a Knowledge System for Cultural Heritage
Abstract:
The research activities involved the application of Geomatic techniques in the Cultural Heritage field, along two main themes. The first is the application of high-precision surveying techniques to the restoration and interpretation of relevant monuments and archaeological finds. The main case study concerns the generation of a high-fidelity 3D model of the Fountain of Neptune in Bologna. In this work, aimed at the restoration of the artefact, both the geometric and the radiometric aspects were crucial. The final product was the basis of a 3D information system, a shared tool through which the different professionals involved in the restoration activities contributed in a multidisciplinary approach. The second theme is the arrangement of 3D databases for a Building Information Modeling (BIM) approach, in a process that involves the generation and management of digital representations of the physical and functional characteristics of historical buildings, towards a so-called Historical Building Information Model (HBIM). A first application was conducted on the church of San Michele in Acerboli in Santarcangelo di Romagna. The survey was performed by integrating classical and modern Geomatic techniques, and the point cloud representing the church was used to develop an HBIM model in which the relevant information connected to the building could be stored and georeferenced. A second application concerns the domus of Obellio Firmo in Pompeii, also surveyed by integrating classical and modern Geomatic techniques. A historical analysis permitted the definition of construction phases and the organization of a database of materials and constructive elements. The goal is to obtain a federated model able to manage the different aspects: documentary, analytical and reconstructive.
Abstract:
In this work, in-situ measurements of aerosol chemical composition, particle number size distribution, cloud-relevant properties and ground-based cloud observations were combined with high-resolution satellite sea-surface chlorophyll-a concentration and air-mass back-trajectory data to investigate the impact of the marine biota on aerosol physico-chemical and cloud properties. Studies were performed over the North-Eastern Atlantic Ocean, the central Mediterranean Sea and the Arctic Ocean, using both multi-year datasets and short-time-scale observations. All the data were chosen to be representative of the marine atmosphere, reducing any anthropogenic input to a minimum. A relationship between the patterns of marine biological activity and the time evolution of marine aerosol properties was observed under a variety of aspects, from chemical composition to number concentration and size distribution, up to the most cloud-relevant properties. At short time scales (1-2 months), the aerosol properties tend to respond to variations in biological activity with a delay of about one to three weeks. This delay should be considered in model applications that use chlorophyll-a to predict marine aerosol properties at high temporal resolution. The impact of oceanic biological activity on the microphysical properties of marine stratiform clouds over the Eastern North Atlantic Ocean is also evidenced by our analysis: such clouds tend to have a higher number of smaller cloud droplets in periods of high biological activity than in quiescent periods. This confirms the possibility of feedback interactions within the biota-aerosol-cloud-climate system. Achieving a better characterization of the time and space relationships linking oceanic biological activity to marine aerosol composition and properties may significantly improve our future capability of predicting the chemical composition of the marine atmosphere, potentially contributing to reducing the uncertainty of future climate predictions through a better understanding of the natural climate system.
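A minimal sketch of how such a delay can be estimated from co-located time series (assuming weekly-averaged, gap-free arrays; this is not the analysis pipeline used in the work):

```python
# Illustrative sketch: scan a lagged Pearson correlation between weekly
# chlorophyll-a and a weekly aerosol property to locate the response delay.
import numpy as np

def best_lag(chl: np.ndarray, aerosol: np.ndarray, max_lag_weeks: int = 6):
    """Return (lag, r) maximizing corr(chl[t], aerosol[t + lag]); lag in weeks."""
    best = (0, -np.inf)
    for lag in range(max_lag_weeks + 1):
        x = chl[: len(chl) - lag] if lag else chl   # biology leads...
        y = aerosol[lag:]                           # ...aerosol responds later
        r = np.corrcoef(x, y)[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best
```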
Abstract:
The convergence between recent developments in sensing technologies, data science, signal processing and advanced modelling has fostered a new paradigm for the Structural Health Monitoring (SHM) of engineered structures, based on intelligent sensors, i.e. embedded devices capable of processing data streams and/or performing structural inference in a self-contained, near-sensor manner. To efficiently exploit these intelligent sensor units for full-scale structural assessment, a joint effort is required to deal both with instrumental aspects related to signal acquisition, conditioning and digitalization, and with those pertaining to data management, data analytics and information sharing. In this framework, the main goal of this thesis is to tackle the multi-faceted nature of the monitoring process via a full-scale optimization of the hardware and software resources involved in the SHM system. The pursuit of this objective has required the investigation of both i) transversal aspects common to multiple application domains at different abstraction levels (such as knowledge distillation, networking solutions and microsystem HW architectures), and ii) the specificities of the monitoring methodologies (vibration, guided-wave and acoustic emission monitoring). The key tools adopted in the proposed monitoring frameworks belong to the embedded signal processing field: namely, graph signal processing, compressed sensing, ARMA system identification, digital data communication and TinyML.
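As a hedged illustration of near-sensor vibration processing of the kind listed above (a plain least-squares AR fit rather than the specific ARMA identification adopted in the thesis), candidate modal frequencies can be extracted from an acceleration record as follows:

```python
# Illustrative sketch: fit an autoregressive model to an accelerometer record
# with least squares and read candidate modal frequencies from the AR poles.
import numpy as np

def ar_modal_frequencies(x: np.ndarray, fs: float, order: int = 8) -> np.ndarray:
    x = x - x.mean()
    # Linear system x[n] = sum_k a_k * x[n-k]; column k holds lag (k+1) samples.
    X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    poles = np.roots(np.concatenate(([1.0], -a)))         # z-plane poles of the AR model
    freqs = np.abs(np.angle(poles)) * fs / (2 * np.pi)    # pole angle -> frequency [Hz]
    return np.unique(np.round(freqs[np.abs(poles) > 0.5], 2))  # drop heavily damped poles
```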
Abstract:
In the last few years, mobile wireless technology has gone through a revolutionary change. Web-enabled devices have evolved into essential tools for communication, information and entertainment. The fifth generation (5G) of mobile communication networks is envisioned to be a key enabler of the upcoming wireless revolution. The millimeter-wave (mmWave) spectrum and the evolution of Cloud Radio Access Networks (C-RANs) are two of the main technological innovations of 5G wireless systems and beyond. Because of the current spectrum shortage, mmWaves have been proposed for next-generation systems, providing larger bandwidths and higher data rates; consequently, new radio channel models are being developed. Recently, deterministic ray-based models such as Ray-Tracing (RT) have become more attractive thanks to their frequency agility and reliable predictions. A modern RT tool has been calibrated and used to analyze the mmWave channel. Knowledge of the electromagnetic properties of materials is essential for this purpose; hence, an item-level electromagnetic characterization of common construction materials has been successfully carried out to obtain information about their complex relative permittivity. A complete tuning of the RT tool has been performed against indoor and outdoor measurement campaigns at 27 and 38 GHz, setting the basis for the future development of advanced beamforming techniques that rely on deterministic propagation models (such as RT). C-RAN is a novel mobile network architecture that can address a number of challenges network operators face in meeting continuously growing customer demand. C-RANs have already been adopted in advanced 4G deployments; however, some issues remain, especially considering the bandwidth requirements set by the forthcoming 5G systems. Open RAN specifications have been proposed to overcome the new 5G challenges placed on C-RAN architectures, including synchronization aspects. This work also describes an FPGA implementation of the Synchronization Plane for an O-RAN-compliant radio system.
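To show how the extracted complex relative permittivity feeds a ray-based model, the sketch below computes the textbook Fresnel reflection coefficients of a flat wall for both polarizations (an illustrative relation, not the calibrated RT engine itself):

```python
# Illustrative sketch: Fresnel reflection coefficients for air-to-material
# incidence, parameterized by the material's complex relative permittivity.
import numpy as np

def fresnel_reflection(eps_r: complex, theta_i_rad: float):
    """Return (Gamma_TE, Gamma_TM) for incidence from air at angle theta_i."""
    cos_i = np.cos(theta_i_rad)
    sin_i = np.sin(theta_i_rad)
    root = np.sqrt(eps_r - sin_i**2)      # transmitted-side term from Snell's law
    gamma_te = (cos_i - root) / (cos_i + root)
    gamma_tm = (eps_r * cos_i - root) / (eps_r * cos_i + root)
    return gamma_te, gamma_tm

# Example with a purely illustrative permittivity value:
# fresnel_reflection(4.0 - 0.2j, np.deg2rad(45))
```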
Abstract:
The deployment of ultra-dense networks is one of the most promising solutions to manage the co-channel interference that affects the latest wireless communication systems, especially in hotspots. To meet the requirements of the use cases and the immense amount of traffic generated in these scenarios, 5G ultra-dense networks are being deployed using various technologies, such as distributed antenna systems (DAS) and the cloud radio access network (C-RAN). Through these centralized densification schemes, virtualized baseband processing units coordinate the distributed access points and manage the available network resources. In particular, link adaptation techniques prove fundamental to overall system operation and performance enhancement. The core of this dissertation is the result of an analysis and comparison of dynamic and adaptive methods for modulation and coding scheme (MCS) selection applied to the latest mobile telecommunication standards. A novel algorithm based on proportional-integral-derivative (PID) controller principles and a block error rate (BLER) target is proposed. Tests were conducted in a 4G and 5G system-level laboratory and, by means of a channel emulator, performance was evaluated for different channel models and target BLERs. Furthermore, owing to the intrinsic sectorization of the end-user distribution in the investigated scenario, a preliminary analysis of the joint application of user-grouping algorithms with multi-antenna and multi-user techniques was performed. In conclusion, the importance and impact of other fundamental physical layer operations, such as channel estimation and power control, on the overall end-to-end system behavior and performance are highlighted.
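The following sketch illustrates the general idea of a PID-driven outer loop on the BLER target; gains, ranges and names are illustrative and do not reproduce the proposed algorithm:

```python
# Hypothetical sketch: a PID loop on the BLER error produces an offset added to
# the CQI-suggested MCS index, steering the measured BLER toward the target.
class PidMcsSelector:
    def __init__(self, target_bler=0.1, kp=8.0, ki=2.0, kd=0.5, n_mcs=28):
        self.target = target_bler
        self.kp, self.ki, self.kd = kp, ki, kd
        self.n_mcs = n_mcs
        self.integral = 0.0
        self.prev_err = 0.0

    def select(self, measured_bler: float, cqi_mcs: int) -> int:
        err = self.target - measured_bler          # > 0: link better than target
        self.integral += err
        derivative = err - self.prev_err
        self.prev_err = err
        offset = self.kp * err + self.ki * self.integral + self.kd * derivative
        return max(0, min(self.n_mcs - 1, int(round(cqi_mcs + offset))))
```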
Abstract:
The pervasive availability of connected devices in every industrial and societal sector is pushing for an evolution of the well-established cloud computing model. The emerging paradigm of the cloud continuum embraces this decentralization trend and envisions virtualized computing resources physically located between traditional datacenters and data sources. By executing totally or partially closer to the network edge, applications can react more quickly to events, thus enabling advanced forms of automation and intelligence. However, these applications also induce new data-intensive workloads with low-latency constraints that require the adoption of specialized resources, such as high-performance communication options (e.g., RDMA, DPDK, XDP). Unfortunately, cloud providers still struggle to integrate these options into their infrastructures. This risks undermining the principle of generality that underlies the economy of scale of cloud computing, by forcing developers to tailor their code to low-level APIs, non-standard programming models and static execution environments. This thesis proposes a novel system architecture to empower cloud platforms across the whole cloud continuum with Network Acceleration as a Service (NAaaS). To provide commodity yet efficient access to acceleration, this architecture defines a layer of agnostic high-performance I/O APIs, exposed to applications and clearly separated from the heterogeneous protocols, interfaces and hardware devices that implement it. A novel system component embodies this decoupling by offering a set of agnostic OS features to applications: memory management for zero-copy transfers, asynchronous I/O processing and efficient packet scheduling. The thesis also explores the design space of possible implementations of this architecture by proposing two reference middleware systems and by adopting them to support interactive use cases in the cloud continuum: a serverless platform and an Industry 4.0 scenario. A detailed discussion and a thorough performance evaluation demonstrate that the proposed architecture enables the easy-to-use, flexible integration of modern network acceleration into next-generation cloud platforms.
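Purely as an illustration of the agnostic I/O API idea (the interface below is invented, not the one defined in the thesis), application code could depend on a small asynchronous channel abstraction while different backends, from plain kernel sockets up to RDMA/DPDK/XDP paths, implement it:

```python
# Hypothetical sketch: a tiny "agnostic" asynchronous I/O interface. Application
# code depends only on NetChannel; backends implement it (plain asyncio sockets
# here, accelerated transports in a high-performance deployment).
import abc
import asyncio

class NetChannel(abc.ABC):
    @abc.abstractmethod
    async def send(self, buf: memoryview) -> None: ...

    @abc.abstractmethod
    async def recv(self, nbytes: int) -> memoryview: ...

class SocketChannel(NetChannel):
    """Fallback backend on ordinary kernel TCP sockets."""
    def __init__(self, reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
        self.reader, self.writer = reader, writer

    async def send(self, buf: memoryview) -> None:
        self.writer.write(bytes(buf))
        await self.writer.drain()

    async def recv(self, nbytes: int) -> memoryview:
        return memoryview(await self.reader.readexactly(nbytes))

async def connect(host: str, port: int) -> NetChannel:
    reader, writer = await asyncio.open_connection(host, port)
    return SocketChannel(reader, writer)
```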
Abstract:
Recent technological advancements have played a key role in seamlessly integrating cloud, edge, and Internet of Things (IoT) technologies, giving rise to the Cloud-to-Thing Continuum paradigm. This cloud model connects many heterogeneous resources that generate large amounts of data and collaborate to deliver next-generation services. While it has the potential to reshape several application domains, the number of connected entities remarkably broadens the security attack surface. One of the main problems is the lack of security measures able to adapt to the dynamic and evolving conditions of the Cloud-to-Thing Continuum. To address this challenge, this dissertation proposes novel adaptable security mechanisms. Adaptable security is the capability of security controls, systems and protocols to dynamically adjust to changing conditions and scenarios. Since the design and development of novel security mechanisms can be explored from different perspectives and levels, we focus on threat modeling and access control. The contributions of the thesis can be summarized as follows. First, we introduce a model-based methodology that secures the design of edge and cyber-physical systems; this solution identifies threats, security controls and moving-target-defense techniques based on system features. Then, we focus on access control management. Since access control policies are subject to modifications, we evaluate how they can be efficiently shared among distributed areas, highlighting the effectiveness of distributed ledger technologies. Furthermore, we propose a risk-based authorization middleware that adjusts permissions based on real-time data, and a federated learning framework that enhances trustworthiness by weighting each client's contribution according to the quality of its partial model. Finally, since authorization revocation is another critical concern, we present an efficient revocation scheme for verifiable credentials in IoT networks, featuring decentralization and requiring minimal storage and computing capabilities. All the mechanisms have been evaluated under different conditions, proving their adaptability to the Cloud-to-Thing Continuum landscape.
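As one hedged sketch of the risk-based idea (policy entries, weights and thresholds are illustrative, not the proposed middleware), an authorization decision can combine a static permission check with a real-time risk score:

```python
# Hypothetical sketch: grant access only if a static policy allows the action
# AND a risk score computed from real-time contextual signals stays below a
# threshold. All names, weights and thresholds are illustrative.
STATIC_POLICY = {("nurse", "read", "vitals"), ("physician", "write", "prescription")}
RISK_WEIGHTS = {"anomaly_rate": 0.5, "device_posture": 0.3, "geo_velocity": 0.2}

def risk_score(signals: dict) -> float:
    """signals: indicators normalized to [0, 1]; unknown keys are ignored."""
    return sum(RISK_WEIGHTS.get(k, 0.0) * v for k, v in signals.items())

def authorize(role: str, action: str, resource: str,
              signals: dict, threshold: float = 0.4) -> bool:
    if (role, action, resource) not in STATIC_POLICY:
        return False                      # static check fails: deny outright
    return risk_score(signals) < threshold  # dynamic check: deny if risk too high
```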