8 results for Distributed fiber optic sensors
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
This doctoral dissertation aims to establish fiber-optic technologies that overcome the limiting issues of data communications in indoor environments. The specific applications are broadband mobile distribution in different in-building scenarios and high-speed digital transmission over short-range wired optical systems. Two key enabling technologies are considered: Radio over Fiber (RoF) techniques over standard silica fibers for distributed antenna systems (DAS), and plastic optical fibers (POFs) for short-range communications. The objectives and achievements of this thesis therefore concern the application of RoF and POF technologies in different in-building scenarios. On one hand, a theoretical and experimental analysis, combined with demonstration activities, was performed on cost-effective RoF systems. Extensive modeling of the impact of modal noise on both the linear and non-linear characteristics of RoF links over silica multimode fiber was carried out to derive link design rules for an optimal choice of transmitter, receiver, and launching technique. Successful transmission of Long Term Evolution (LTE) mobile signals over the resulting optimized RoF system over silica multimode fiber, employing a Fabry-Perot laser diode, the central launch technique, and a photodiode with a built-in ball lens, was demonstrated up to 525 m with performance fully compliant with standard requirements. On the other hand, digital signal processing techniques to overcome the bandwidth limitation of POF were investigated. An uncoded net bit-rate of 5.15 Gbit/s was obtained on a 50 m long POF link employing an eye-safe transmitter, a silicon photodiode, and DMT modulation with a bit- and power-loading algorithm. With the insertion of 3×2^N quadrature amplitude modulation constellation formats, an uncoded net bit-rate of 5.4 Gbit/s was obtained on a 50 m long POF link employing an eye-safe transmitter and a silicon avalanche photodiode. Moreover, simultaneous transmission of a 2 Gbit/s baseband DMT signal and a 200 Mbit/s ultra-wideband radio signal was validated over a 50 m long POF link.
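The bit- and power-loading step lends itself to a compact illustration. The abstract does not say which loading algorithm was used, so the Python sketch below implements a generic Levin-Campello-style greedy loader; the SNR profile, the 9.8 dB gap, and all names and parameters are illustrative assumptions, not the thesis's actual implementation.

    import numpy as np

    def greedy_bit_loading(snr_lin, total_bits, gap_db=9.8, max_bits=10):
        # Greedy (Levin-Campello-style) loader: repeatedly grant one more bit
        # to the subcarrier whose next bit costs the least extra power.
        # Power to carry b bits on subcarrier i: P = gap * (2**b - 1) / snr_lin[i]
        gap = 10.0 ** (gap_db / 10.0)
        bits = np.zeros(len(snr_lin), dtype=int)
        for _ in range(total_bits):
            cost = gap * 2.0 ** bits / snr_lin     # incremental power per subcarrier
            cost[bits >= max_bits] = np.inf        # cap the constellation size
            i = int(np.argmin(cost))
            if not np.isfinite(cost[i]):
                break
            bits[i] += 1
        power = gap * (2.0 ** bits - 1) / snr_lin  # resulting per-subcarrier power
        return bits, power

    # hypothetical per-subcarrier SNRs of a bandwidth-limited POF channel
    snr = 10.0 ** (np.linspace(30, 5, 256) / 10.0)  # 30 dB down to 5 dB
    bits, power = greedy_bit_loading(snr, total_bits=1024)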
Abstract:
In recent years, composite materials have revolutionized the design of many structures. Their superior mechanical properties and light weight make composites preferable to traditional metal structures for many applications. However, composite materials are susceptible to complex and hard-to-predict damage behaviors due to their anisotropic nature. Structural Health Monitoring (SHM) can therefore be a valuable tool to assess damage and understand the underlying physics. Distributed Optical Fiber Sensors (DOFS) can be used to monitor several types of damage in composites, yet their adoption outside academia is still limited. One of the hindrances is the lack of a rigorous methodology for uncertainty quantification, which is essential for assessing the performance of the monitoring system. The concept of Probability of Detection (POD) must serve as the guiding principle in this process; however, precautions must be taken, since this tool was established for Non-Destructive Evaluation (NDE) rather than SHM. In addition, although DOFS have been the object of numerous studies, a well-established POD methodology for their performance assessment is still missing. This thesis aims to develop a methodology to produce POD curves for DOFS in composite materials. The problem is analyzed considering several critical points, such as the strain transfer characterizing the DOFS and the development of an experimental and model-assisted methodology to understand the parameters that affect DOFS performance.
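For context, a common way to build a POD curve from signal-response data is the classical "â versus a" regression (as in MIL-HDBK-1823A); the thesis develops its own methodology, so the minimal Python sketch below, with its synthetic data and hypothetical decision threshold, shows only that classical baseline.

    import numpy as np
    from scipy.stats import norm

    def pod_from_signal_response(a, a_hat, decision_threshold):
        # Fit ln(a_hat) = b0 + b1*ln(a) + eps, then POD(a) = Phi((ln a - mu)/sigma)
        x, y = np.log(a), np.log(a_hat)
        b1, b0 = np.polyfit(x, y, 1)
        sigma_eps = np.std(y - (b0 + b1 * x), ddof=2)
        mu = (np.log(decision_threshold) - b0) / b1  # flaw size with 50% POD
        sigma = sigma_eps / b1
        return lambda a_grid: norm.cdf((np.log(a_grid) - mu) / sigma)

    # hypothetical data: flaw sizes a (mm) and measured sensor responses a_hat
    rng = np.random.default_rng(1)
    a = np.linspace(0.5, 5.0, 40)
    a_hat = np.exp(0.2 + 1.1 * np.log(a) + rng.normal(0, 0.3, a.size))
    pod = pod_from_signal_response(a, a_hat, decision_threshold=2.0)
    print(pod(np.array([1.0, 2.0, 4.0])))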
Abstract:
Recent progress in microelectronics and wireless communications has enabled the development of low-cost, low-power, multifunctional sensors, allowing the birth of a new type of network: the wireless sensor network (WSN). The main features of such networks are that nodes can be positioned randomly over a given field with high density; each node operates both as a sensor (collecting environmental data) and as a transceiver (transmitting information towards the data-retrieval point); and nodes have limited energy resources. The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications. For example, sensor nodes can be used to monitor a high-risk region, such as near a volcano; in a hospital, they could be used to monitor the physical condition of patients. For each of these possible application scenarios, it is necessary to guarantee a trade-off between energy consumption and communication reliability. This thesis investigates the use of WSNs in two possible scenarios and, for each of them, suggests a solution to the related problems in light of this trade-off.

The first scenario considers a network with a high number of nodes, deployed in a given geographical area without detailed planning, that have to transmit data towards a coordinator node, named the sink, assumed to be located on board an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently towards a far receiver. Each node transmits a common shared message directly to the receiver on board the UAV whenever it receives a broadcast message (triggered, for example, by the vehicle). The communication channels between the local nodes and the receiver are assumed to be subject to fading and noise, so the receiver on board the UAV must be able to fuse the weak and noisy signals coherently to recover the data reliably. A cooperative diversity concept is proposed as an effective solution to the reachback problem: a spread-spectrum (SS) transmission scheme in conjunction with a fusion center that can exploit cooperative diversity without requiring stringent synchronization between nodes. The idea consists of simultaneous transmission of the common message by the nodes and Rake reception at the fusion center. The proposed solution is mainly motivated by two goals: the need for simple nodes (to this aim, the computational complexity is moved to the receiver on board the UAV) and the importance of guaranteeing high energy efficiency in the network, thus increasing the network lifetime. The proposed scheme is analyzed to assess the effectiveness of the approach. The performance metrics considered are both the theoretical limit on the maximum amount of data that can be collected by the receiver and the error probability with a given modulation scheme. Since we deal with a WSN, both metrics are evaluated taking the energy efficiency of the network into account.

The second scenario considers the use of a chain network for the detection of fires, with nodes that have the double function of sensors and routers. The first function is the monitoring of a temperature parameter, which allows a local binary decision on the absence or presence of a target (fire). The second is that each node receives the decision made by the previous node of the chain, compares it with the decision derived from its own observation of the phenomenon, and transmits the result to the next node. The chain ends at the sink node, which transmits the received decision to the user. In this network the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical distributed detection scenario: to obtain good performance, fusion rules must be defined for each node to summarize the local observation and the decisions of the previous nodes into a final decision that is transmitted to the next node. WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using a commercial WSN platform, an agricultural application was realized and tested in a six-month on-field experiment.
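The per-node fusion step in the chain lends itself to a compact illustration. The sketch below is a generic serial-fusion rule under Gaussian assumptions; the temperature statistics, reliabilities, and thresholds are hypothetical, and the thesis's actual fusion rules may differ.

    import numpy as np
    from scipy.stats import norm

    def node_decision(temp, prev_bit, mu0=20.0, mu1=60.0, sigma=8.0,
                      prev_reliability=0.9):
        # Local evidence: log-likelihood ratio of "fire" vs "no fire".
        llr_local = norm.logpdf(temp, mu1, sigma) - norm.logpdf(temp, mu0, sigma)
        # Previous node's 1-bit decision treated as a binary observation
        # of known reliability.
        llr_prev = np.log(prev_reliability / (1.0 - prev_reliability))
        llr_prev = llr_prev if prev_bit else -llr_prev
        return int(llr_local + llr_prev > 0.0)  # forward a single bit

    # chain of 5 nodes observing a fire (true mean 60 degC), starting
    # from a "no fire" initial bit
    rng = np.random.default_rng(2)
    bit = 0
    for temp in rng.normal(60.0, 8.0, size=5):
        bit = node_decision(temp, bit)
    print("decision delivered to the sink:", bit)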
Abstract:
Beamforming entails the joint processing of multiple signals received or transmitted by an array of antennas. This thesis addresses the implementation of beamforming in two distinct systems: a distributed network of independent sensors, and a broadband multi-beam satellite network. With the rising popularity of wireless sensors, scientists are taking advantage of the flexibility of these devices, which come at very low implementation cost. Simplicity, however, is intertwined with scarce power resources, which must be carefully rationed to ensure successful measurement campaigns throughout the whole duration of the application. In this scenario, distributed beamforming is a cooperative communication technique that allows nodes in the network to emulate a virtual antenna array, seeking power gains on the order of the size of the network itself when required to deliver a common message signal to the receiver. To achieve a desired beamforming configuration, however, all nodes in the network must agree on the same phase reference, which is challenging in a distributed set-up where all devices are independent. The first part of this thesis presents new algorithms for phase alignment that prove to be more energy efficient than existing solutions. With the ever-growing demand for broadband connectivity, satellite systems have great potential to guarantee service where terrestrial systems cannot penetrate. To satisfy the constantly increasing demand for throughput, satellites are equipped with multi-fed reflector antennas to resolve spatially separated signals. However, increasing the number of feeds on the payload burdens the link between the satellite and the gateway with an extensive amount of signaling, possibly calling for much more expensive multiple-gateway infrastructures. This thesis focuses on an on-board non-adaptive signal processing scheme denoted as Coarse Beamforming, whose objective is to reduce the communication load on the link between the ground station and the space segment.
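The thesis's new phase-alignment algorithms are not detailed in the abstract; as a reference point, here is a minimal Python sketch of the classic one-bit-feedback alignment scheme often used as a baseline in this literature. All parameters are illustrative.

    import numpy as np

    def one_bit_feedback(n_nodes=20, n_iters=3000, step=0.15, seed=0):
        # Each node holds a transmit phase; per iteration the receiver
        # broadcasts one bit ("better"/"worse") and nodes keep or discard
        # their random trial perturbation accordingly.
        rng = np.random.default_rng(seed)
        channel = rng.uniform(0.0, 2.0 * np.pi, n_nodes)  # unknown channel phases
        theta = np.zeros(n_nodes)                         # transmit phase offsets
        best = abs(np.exp(1j * (theta + channel)).sum())
        for _ in range(n_iters):
            trial = theta + rng.normal(0.0, step, n_nodes)
            gain = abs(np.exp(1j * (trial + channel)).sum())
            if gain > best:                               # the one feedback bit
                theta, best = trial, gain
        return best / n_nodes                             # 1.0 = fully coherent

    print(f"coherent gain fraction: {one_bit_feedback():.3f}")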
Abstract:
This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks. After a general overview of sensor networks, the energy problem is introduced, dividing the different energy-reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middleware frameworks for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus then shifts to in-network aggregation techniques, used to reduce the data sent by network nodes and so prolong the network lifetime as much as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, and a mixed algorithm able to successfully reduce power consumption is derived. The analysis then moves from compression on single nodes to CS for signal ensembles, exploiting the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared on a common set of data gathered from real deployments, and the best trade-off between reconstruction quality and power consumption is investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group-sparsity CS (GS-CS) is compared to another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are again compared on a real data-set, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
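To make the CS idea concrete: a node transmits m < n random projections of an n-sample signal, and the sink reconstructs it by exploiting sparsity. The Python sketch below uses Orthogonal Matching Pursuit as the recovery step; the dimensions and the choice of OMP are illustrative assumptions, not the thesis's specific implementation.

    import numpy as np

    def omp(A, y, k):
        # Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x.
        residual, idx = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
            idx.append(j)
            x_ls, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
            residual = y - A[:, idx] @ x_ls             # re-fit on chosen atoms
        x = np.zeros(A.shape[1])
        x[idx] = x_ls
        return x

    # m random measurements of an n-sample, k-sparse signal
    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 8
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    A = rng.normal(size=(m, n)) / np.sqrt(m)            # random sensing matrix
    x_rec = omp(A, A @ x_true, k)
    print("reconstruction error:", np.linalg.norm(x_rec - x_true))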
Abstract:
The wide diffusion of cheap, small, portable sensors integrated in an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly and promptly analyzed, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of cross-cutting domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality of service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and present Quasit, its prototype implementation, offering a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, through a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing data distribution and processing middleware with application-level knowledge of the quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
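Neither Quasit's API nor LAAR's mechanics are spelled out in the abstract; purely as a toy illustration of the underlying idea of partial fault tolerance (spending replication cost only on streams that declare strict quality requirements), consider this Python sketch with invented names and failure rates.

    import random

    # Toy model: an operator with 1 replica loses an item whenever its host
    # fails; an operator with 2 replicas survives any single failure.
    def delivered_fraction(replicas, p_fail, n_items=10000, seed=3):
        rng = random.Random(seed)
        ok = sum(1 for _ in range(n_items)
                 if sum(rng.random() < p_fail for _ in range(replicas)) < replicas)
        return ok / n_items

    for name, critical in [("alerting", True), ("logging", False)]:
        r = 2 if critical else 1  # replicate only quality-critical streams
        print(name, "replicas:", r,
              "delivered:", delivered_fraction(r, p_fail=0.05))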
Abstract:
The study of the effects of optic flow on postural control may explain how self-motion perception contributes to postural stability in young males and females, and how this function changes in the older population at risk of falls. Study I aimed to examine the effect of optic flow on postural control in young people (n=24), using stabilometry and surface electromyography. Subjects viewed expansion and contraction optic flow stimuli presented full field, in the foveal, or in the peripheral visual field. Results showed that optic flow stimulation causes an asymmetry in postural balance and a different lateralization of postural control in men and women. Gender differences evoked by optic flow were found both in muscle activity and in the prevalent direction of oscillation. The COP spatial variability was reduced while viewing peripheral stimuli, which evoked a clustered prevalent direction of oscillation, while foveal and random stimuli induced non-clustered directions. Study II aimed to investigate the age-related mechanisms of postural stability while viewing optic flow stimuli in young (n=17) and old (n=19) people, using stabilometry and kinematics. Results showed that old people made a greater effort to maintain posture while viewing optic flow stimuli than the young, and the elderly seem to use a head-stabilization-on-trunk strategy. Visual stimuli evoke an excitatory input to the postural muscles, but the structure of the stimulus produces different postural effects: peripheral optic flow stabilizes postural sway, while random and foveal stimuli provoke larger sway variability, similar to that evoked at baseline. Postural control uses different mechanisms within each leg to produce the appropriate postural response for interacting with the extrapersonal environment. Ageing reduces the ease of stabilizing posture during optic flow, suggesting a decline in neuronal processing associated with difficulty integrating multi-sensory information for self-motion perception and an increasing risk of falls.
Abstract:
The application of modern ICT technologies is radically changing many fields, pushing toward more open and dynamic value chains that foster the cooperation and integration of many connected partners, sensors, and devices. As a valuable example, the emerging Smart Tourism field derives from the application of ICT to tourism so as to create richer and more integrated experiences, making them more accessible and sustainable. From a technological viewpoint, a recurring challenge in these decentralized environments is the integration of heterogeneous services and data spanning multiple administrative domains, each possibly applying different security/privacy policies, device and process control mechanisms, service access and provisioning schemes, etc. The distribution and heterogeneity of these sources exacerbate the complexity of developing integration solutions, with consequently high effort and costs for the partners seeking them. Taking a step towards addressing these issues, we propose APERTO, a decentralized and distributed architecture that aims at facilitating the blending of data and services. At its core, APERTO relies on APERTO FaaS, a serverless platform allowing fast prototyping of business logic, lowering the barrier of entry and development costs for newcomers, (zero) fine-grained scaling of the resources serving end-users, and reduced management overhead. The APERTO FaaS infrastructure is based on asynchronous and transparent communications between the components of the architecture, allowing the development of optimized solutions that exploit the peculiarities of distributed and heterogeneous environments. In particular, APERTO addresses the provisioning of scalable and cost-efficient mechanisms targeting: i) function composition, allowing the definition of complex workloads from simple, ready-to-use functions, enabling smarter management of complex tasks and improved multiplexing capabilities; ii) the creation of end-to-end differentiated QoS slices minimizing interference among applications/services running on a shared infrastructure; iii) an abstraction providing uniform and optimized access to heterogeneous data sources; iv) a decentralized approach for the verification of access rights to resources.
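APERTO FaaS's actual interface is not described in the abstract; purely to illustrate the function-composition idea (building a workload by chaining simple, ready-to-use functions over asynchronous calls), here is a toy Python sketch in which every name is invented.

    import asyncio

    # Toy stand-ins for simple, ready-to-use functions (all names invented).
    async def fetch(event):
        return {"source": event, "raw": f"data-from-{event}"}

    async def enrich(payload):
        return {**payload, "enriched": True}

    async def publish(payload):
        print("delivered:", payload)
        return payload

    def compose(*stages):
        # Chain independent functions into one workload; each await stands in
        # for the platform's asynchronous, transparent messaging.
        async def workload(event):
            x = event
            for stage in stages:
                x = await stage(x)
            return x
        return workload

    asyncio.run(compose(fetch, enrich, publish)("tourism-sensor-42"))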