955 results for Networks partner techniques
Abstract:
This doctoral dissertation aims to establish fiber-optic technologies that overcome the limiting issues of data communications in indoor environments. Specific applications are broadband mobile distribution in different in-building scenarios and high-speed digital transmission over short-range wired optical systems. Two key enabling technologies are considered: Radio over Fiber (RoF) techniques over standard silica fibers for distributed antenna systems (DAS), and plastic optical fibers (POFs) for short-range communications. Hence, the objectives and achievements of this thesis relate to the application of RoF and POF technologies in different in-building scenarios. On one hand, a theoretical and experimental analysis combined with demonstration activities has been performed on cost-effective RoF systems. Extensive modeling of the impact of modal noise on both the linear and non-linear characteristics of RoF links over silica multimode fiber has been carried out to derive link design rules for an optimum choice of the transmitter, receiver, and launching technique. Successful transmission of Long Term Evolution (LTE) mobile signals over the resulting optimized RoF link on silica multimode fiber, employing a Fabry-Perot laser diode, the central launch technique, and a photodiode with a built-in ball lens, was demonstrated up to 525 m with performance well within standard requirements. On the other hand, digital signal processing techniques to overcome the bandwidth limitation of POF have been investigated. An uncoded net bit rate of 5.15 Gbit/s was obtained on a 50 m long POF link employing an eye-safe transmitter, a silicon photodiode, and DMT modulation with a bit- and power-loading algorithm. With the insertion of 3×2^N quadrature amplitude modulation constellation formats, an uncoded net bit rate of 5.4 Gbit/s was obtained on a 50 m long POF link employing an eye-safe transmitter and a silicon avalanche photodiode. Moreover, the simultaneous transmission of a 2 Gbit/s baseband DMT signal and a 200 Mbit/s ultra-wideband radio signal has been validated over a 50 m long POF link.
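As an illustration of the bit- and power-loading step named above, the following is a minimal sketch of Chow-style bit loading for a DMT link, assuming a synthetic low-pass SNR profile in place of the measured POF channel response; the subcarrier count, SNR gap, and symbol rate are illustrative values, not those of the thesis.

```python
# A minimal sketch of Chow-style bit loading for DMT, assuming a synthetic
# low-pass SNR profile in place of the measured POF channel response.
import numpy as np

def bit_loading(snr_db, gap_db=6.0, max_bits=10):
    """Assign an integer number of bits to each subcarrier from its SNR.

    gap_db is the SNR gap to capacity for the target error rate;
    max_bits caps the constellation size (2**max_bits-QAM).
    """
    snr = 10 ** ((snr_db - gap_db) / 10)           # gap-reduced linear SNR
    bits = np.floor(np.log2(1 + snr)).astype(int)  # achievable bits/subcarrier
    return np.clip(bits, 0, max_bits)

# Hypothetical 256-subcarrier DMT frame over a bandwidth-limited POF link.
n_sc = 256
f = np.linspace(0, 1, n_sc)
snr_db = 35 - 30 * f          # synthetic low-pass channel: SNR falls with frequency
bits = bit_loading(snr_db)

symbol_rate = 25e6            # assumed DMT symbol rate, for illustration only
print(f"bits per DMT symbol: {bits.sum()}")
print(f"net bit rate: {bits.sum() * symbol_rate / 1e9:.2f} Gbit/s")
```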
Abstract:
This thesis investigates context-aware wireless networks, capable of adapting their behavior to the context and the application thanks to their ability to combine communication, sensing, and localization. Problems of signal demodulation, parameter estimation, and localization are addressed through analytical methods, simulations, and experimentation in order to derive fundamental limits, characterize the performance of the proposed schemes, and validate them experimentally. Ultrawide-bandwidth (UWB) signals are considered in certain cases, and non-coherent receivers, which exploit multipath channel diversity without requiring complex architectures, are investigated. Closed-form expressions for the achievable bit error probability of the novel proposed architectures are derived. The problem of time delay estimation (TDE), which enables network localization through ranging measurements, is addressed from a theoretical point of view. New fundamental bounds on TDE are derived for the case in which the received signal is partially known or unknown at the receiver side, as often occurs due to propagation or to the adoption of low-complexity estimators. Practical estimators, such as energy-based estimators, are reviewed and their performance is compared with the new bounds. The localization issue is addressed experimentally through the characterization of cooperative networks. Practical algorithms able to improve the accuracy in non-line-of-sight (NLOS) channel conditions are evaluated on measured data. With the purpose of enhancing the localization coverage in NLOS conditions, non-regenerative relaying techniques for localization are introduced and ad hoc position estimators are devised. An example of a context-aware network is given by the study of a UWB-RFID system for detecting and locating semi-passive tags. In particular, an in-depth investigation of low-complexity receivers capable of dealing with multi-tag interference, synchronization mismatches, and clock drift is presented. Finally, theoretical bounds on the localization accuracy of this and other passive localization networks (e.g., radar) are derived, also accounting for different configurations such as monostatic and multistatic networks.
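As a concrete illustration of the energy-based estimators the thesis compares against its new bounds, the following is a minimal sketch of threshold-based time-of-arrival estimation on energy bins; the sample rate, pulse shape, and threshold rule are assumptions made for illustration.

```python
# A minimal sketch of an energy-based time-delay estimator: the received
# waveform is split into short integration bins and the first bin whose
# energy crosses a threshold is taken as the time of arrival.
import numpy as np

rng = np.random.default_rng(0)
fs = 2e9                      # sample rate (illustrative)
true_delay = 120e-9           # true time of arrival
pulse = np.exp(-0.5 * ((np.arange(-40, 40) / 10) ** 2))  # generic received pulse

x = np.zeros(2000)
i0 = int(true_delay * fs)
x[i0:i0 + pulse.size] += pulse
x += 0.05 * rng.standard_normal(x.size)                  # additive noise

bin_len = 20                                             # samples per energy bin
energy = np.add.reduceat(x ** 2, np.arange(0, x.size, bin_len))
threshold = energy.mean() + 3 * energy.std()             # simple adaptive threshold
first = np.argmax(energy > threshold)                    # first-crossing bin

toa_hat = first * bin_len / fs
print(f"estimated ToA: {toa_hat * 1e9:.1f} ns (true: {true_delay * 1e9:.1f} ns)")
```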
Abstract:
In the last few years, the vision of our connected and intelligent information society has evolved to embrace novel technological and research trends. The diffusion of ubiquitous mobile connectivity and advanced handheld portable devices has amplified the importance of the Internet as the communication backbone for accessing services and data. The diffusion of mobile and pervasive computing devices, featuring advanced sensing technologies and processing capabilities, has triggered the adoption of innovative interaction paradigms: touch-responsive surfaces, tangible interfaces, and gesture or voice recognition are finally entering our homes and workplaces. We are experiencing the proliferation of smart objects and sensor networks, embedded in our daily living and interconnected through the Internet. This ubiquitous network of always-available interconnected devices is enabling new applications and services, ranging from enhancements to home and office environments, to remote healthcare assistance and the emergence of smart environments. This work will present advances in the hardware and software development of embedded systems and sensor networks. Different hardware solutions will be introduced, ranging from smart objects for interaction to advanced inertial sensor nodes for motion tracking, with a focus on system-level design. They will be accompanied by the study of innovative data processing algorithms developed and optimized to run on board the embedded devices. Gesture recognition, orientation estimation, and data reconstruction techniques for sensor networks will be introduced and implemented, with the goal of achieving the best tradeoff between performance and energy efficiency. Experimental results will provide an evaluation of the accuracy of the presented methods and validate the efficiency of the proposed embedded systems.
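As one example of the kind of on-board algorithm discussed, a minimal sketch of a complementary filter for orientation estimation from inertial data follows; the filter coefficient and the simulated sensor signals are assumptions, not the implementation used in the thesis.

```python
# A minimal sketch of on-board orientation estimation for an inertial node:
# a complementary filter fuses fast-but-drifting gyroscope integration with
# slow-but-stable accelerometer tilt. Sensor data here are simulated.
import numpy as np

def complementary_filter(gyro, accel, dt, alpha=0.98):
    """Estimate pitch angle (rad) from gyro rate (rad/s) and accel (m/s^2)."""
    theta = 0.0
    out = []
    for w, (ax, az) in zip(gyro, accel):
        theta_acc = np.arctan2(ax, az)                 # gravity-based tilt
        theta = alpha * (theta + w * dt) + (1 - alpha) * theta_acc
        out.append(theta)
    return np.array(out)

# Simulated slow tilt from 0 to 30 degrees over 5 s at 100 Hz.
dt, n = 0.01, 500
true = np.linspace(0, np.radians(30), n)
rng = np.random.default_rng(1)
gyro = np.gradient(true, dt) + 0.02 * rng.standard_normal(n)  # rate + noise
accel = np.column_stack([9.81 * np.sin(true), 9.81 * np.cos(true)])

est = complementary_filter(gyro, accel, dt)
print(f"final error: {np.degrees(est[-1] - true[-1]):.2f} deg")
```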
Abstract:
In the food industry, quality assurance requires low-cost methods for the rapid assessment of the parameters that affect product stability. Foodstuffs are complex in their structure, mainly composed of gaseous, liquid, and solid phases which often coexist in the same product. Special attention is given to water, whether as a natural component of the raw food product or as an ingredient added during production. In particular, water is structurally bound within the matrix and not completely available: it can be present in foodstuffs in many different states, such as water of crystallization, water bound to protein or starch molecules, water entrapped in biopolymer networks, or water adsorbed on the solid surfaces of porous food particles. The traditional techniques for the assessment of food quality give reliable information but are destructive, time consuming, and unsuitable for on-line application. The techniques proposed here address this time constraint and are able to characterize the main compositional parameters. The dielectric response is mainly related to water and can provide information not only on the total water content but also on the degree of mobility of this ubiquitous molecule within different complex food matrices. The aim of this thesis is to address this need: dielectric and electrical tools are used to describe the complex food matrix and predict food characteristics. The thesis is structured in three main parts. In the first, theoretical tools are recalled to define the food parameters involved in quality assessment and the techniques able to address the problems identified. The second part describes the research conducted and illustrates the experimental plans in detail. The final section is devoted to rapid methods easily implementable in an industrial process.
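As an illustration of how dielectric spectra can separate free from bound water, a minimal sketch of the Cole-Cole relaxation model follows, with illustrative parameter values rather than measurements from this work.

```python
# A minimal sketch of the kind of dielectric model used to relate measured
# permittivity spectra to water mobility: the Cole-Cole relaxation, whose
# relaxation time shifts with how tightly water is bound in the food matrix.
import numpy as np

def cole_cole(f, eps_inf, d_eps, tau, alpha):
    """Complex permittivity eps'(f) - j*eps''(f) of a Cole-Cole relaxation."""
    w = 2 * np.pi * f
    return eps_inf + d_eps / (1 + (1j * w * tau) ** (1 - alpha))

f = np.logspace(6, 11, 200)                       # 1 MHz .. 100 GHz
free_water = cole_cole(f, 5, 73, 8.3e-12, 0.02)   # bulk-water-like relaxation
bound_water = cole_cole(f, 5, 40, 1.0e-9, 0.30)   # slower, broadened relaxation

# The loss peak frequency separates mobile from bound water populations.
for name, eps in (("free", free_water), ("bound", bound_water)):
    f_peak = f[np.argmax(-eps.imag)]
    print(f"{name} water loss peak near {f_peak / 1e9:.2f} GHz")
```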
Abstract:
This thesis project mainly concerns the design of modern wireless systems, such as 5G or WiGig, operating at millimeter waves, through the study of an advanced technique called beamforming, which, thanks to the use of compact directive antennas, makes it possible to overcome the link-budget limits caused by the high frequencies and also to introduce spatial diversity into the communication. The main objective of the work was to evaluate, through numerical simulations, the performance of several different beamforming schemes, integrating as a supporting tool a Ray Tracing program capable of providing the main information about the radio channel. With it, in fact, it is possible both to carry out a general assessment of beamforming itself and to lay the groundwork for innovative solutions, called RayTracing-assisted-Beamforming, which are decidedly promising for future developments, as confirmed by the results.
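As an illustration of the RayTracing-assisted-Beamforming idea, a minimal sketch follows in which the channel consists of a few dominant rays, as a ray tracer would output, and the transmitter either sweeps a beam codebook or points directly at the strongest ray; the array geometry and ray data are hypothetical.

```python
# A minimal sketch of ray-tracing-assisted beam selection on a uniform
# linear array: compare an exhaustive codebook sweep with aiming the beam
# at the strongest ray reported by a (hypothetical) ray tracer.
import numpy as np

n_ant = 16                                 # half-wavelength uniform linear array

def steering(theta):
    return np.exp(1j * np.pi * np.arange(n_ant) * np.sin(theta)) / np.sqrt(n_ant)

# Hypothetical ray-tracer output: departure angles (rad) and complex gains.
ray_angles = np.radians([12.0, -35.0, 50.0])
ray_gains = np.array([1.0, 0.4, 0.15]) * np.exp(1j * np.array([0.3, 1.1, 2.0]))

h = sum(g * steering(a) for g, a in zip(ray_gains, ray_angles))  # channel vector

# Exhaustive codebook sweep vs. ray-tracing-assisted beam choice.
codebook = [steering(t) for t in np.radians(np.arange(-60, 61, 5))]
p_sweep = max(abs(np.vdot(w, h)) ** 2 for w in codebook)
p_rt = abs(np.vdot(steering(ray_angles[0]), h)) ** 2   # aim at strongest ray

print(f"best swept beam : {10 * np.log10(p_sweep):.2f} dB")
print(f"RT-assisted beam: {10 * np.log10(p_rt):.2f} dB (no sweep needed)")
```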
Abstract:
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely a kind of brute-force statistical approach and whether they can only work in the context of High Performance Computing with massive amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: Convolutional Neural Networks (CNNs) and Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view under the broad umbrella of deep learning and are well suited for understanding and pointing out the strengths and weaknesses of each. CNNs are considered among the most established and powerful supervised methods used today in machine learning and pattern recognition, especially for object recognition. They are well received and accepted by the scientific community and are already deployed at large corporations such as Google and Facebook to solve face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging, mainly unsupervised paradigm that is more biologically inspired. It tries to draw insights from the computational neuroscience community in order to incorporate into the learning process concepts such as time, context, and attention, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a smaller quantity of data, HTM can outperform CNNs.
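As an illustration of the convolutional feature extraction that underlies CNNs, a minimal numpy sketch of one convolution-ReLU-pooling stage follows; the weights are random stand-ins for values a trained network would learn by backpropagation.

```python
# A minimal numpy sketch of the convolution + nonlinearity + pooling stage
# that gives CNNs their automatic feature extraction.
import numpy as np

def conv2d(img, kernels):
    """Valid 2-D convolution of an (H, W) image with (K, kh, kw) kernels."""
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return out

def max_pool(x, s=2):
    K, H, W = x.shape
    return x[:, :H - H % s, :W - W % s].reshape(K, H // s, s, W // s, s).max(axis=(2, 4))

rng = np.random.default_rng(0)
image = rng.random((28, 28))               # stand-in for an input image
kernels = rng.standard_normal((8, 3, 3))   # 8 untrained 3x3 feature detectors

features = max_pool(np.maximum(conv2d(image, kernels), 0))  # conv -> ReLU -> pool
print(features.shape)                      # (8, 13, 13) feature maps
```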
Abstract:
A real living cell is a complex system governed by many processes that are not yet fully understood; cell differentiation is one of them. In this thesis work we make use of a cell differentiation model to develop gene regulatory networks (Boolean networks) with desired differentiation dynamics. To accomplish this task we have introduced automatic design techniques and performed experiments using various differentiation trees. The results obtained show that the developed algorithms, except for the Random algorithm, are able to generate Boolean networks with interesting differentiation dynamics. Moreover, we present some possible future applications and developments of the cell differentiation model in robotics and in medical research. Understanding the mechanisms at work in biological cells can give us the possibility to explain diseases that are not yet understood, such as cancer.
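As an illustration of the Boolean-network model of gene regulation used here, a minimal sketch of a random Boolean network simulated until it reaches an attractor follows; the network size, connectivity, and update rules are illustrative, and in differentiation models attractors are associated with cell types.

```python
# A minimal sketch of a random Boolean network: each gene is a binary node
# updated synchronously by a random Boolean function of K regulator genes;
# a repeated state marks an attractor.
import numpy as np

rng = np.random.default_rng(3)
N, K = 12, 2                                    # genes and inputs per gene
inputs = rng.integers(0, N, size=(N, K))        # wiring: regulators of each gene
tables = rng.integers(0, 2, size=(N, 2 ** K))   # random Boolean update rules

def step(state):
    idx = (state[inputs] * (2 ** np.arange(K))).sum(axis=1)  # encode K inputs
    return tables[np.arange(N), idx]

state = rng.integers(0, 2, size=N)
seen = {}
for t in range(5000):                           # iterate until a state repeats
    key = state.tobytes()
    if key in seen:
        print(f"attractor of period {t - seen[key]} reached after {seen[key]} steps")
        break
    seen[key] = t
    state = step(state)
```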
Abstract:
This paper examines the accuracy of software-based on-line energy estimation techniques. It evaluates today’s most widespread energy estimation model in order to investigate whether the current methodology of pure software-based energy estimation running on a sensor node itself can indeed reliably and accurately determine its energy consumption - independent of the particular node instance, the traffic load the node is exposed to, or the MAC protocol the node is running. The paper enhances today’s widely used energy estimation model by integrating radio transceiver switches into the model, and proposes a methodology to find the optimal estimation model parameters. It proves by statistical validation with experimental data that the proposed model enhancement and parameter calibration methodology significantly increases the estimation accuracy.
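As an illustration of the enhanced model structure, a minimal sketch follows in which energy is a linear combination of per-state durations plus a per-event cost for transceiver switches, calibrated by least squares; all coefficients and measurements below are synthetic stand-ins.

```python
# A minimal sketch of a software-based energy model with transceiver
# switches: E = P_cpu*t_cpu + P_rx*t_rx + P_tx*t_tx + E_switch*n_switches,
# with parameters calibrated against reference measurements.
import numpy as np

rng = np.random.default_rng(7)
# Per-experiment accounting a node can collect in software:
# columns = time in [cpu_active, radio_rx, radio_tx] (s) and switch count.
X = np.column_stack([
    rng.uniform(1, 10, 50),        # CPU active time (s)
    rng.uniform(0, 5, 50),         # radio RX time (s)
    rng.uniform(0, 2, 50),         # radio TX time (s)
    rng.integers(0, 500, 50),      # radio transceiver switch count
])
true_params = np.array([6e-3, 60e-3, 52e-3, 3e-6])   # W, W, W, J/switch (assumed)
E_measured = X @ true_params + 1e-4 * rng.standard_normal(50)

# Calibration: fit the model parameters that best explain the measurements.
params, *_ = np.linalg.lstsq(X, E_measured, rcond=None)
print("calibrated [P_cpu, P_rx, P_tx, E_switch]:", np.round(params, 6))
```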
Abstract:
Introduction: Advances in biotechnology have shed light on many biological processes. In biological networks, nodes are used to represent the function of individual entities within a system and have historically been studied in isolation. Network structure adds edges that enable communication between nodes. An emerging field combines node function and network structure to yield network function. One of the most complex networks known in biology is the neural network within the brain. Modeling neural function will require an understanding of networks, dynamics, and neurophysiology. It is with this work that modeling techniques will be developed to work at this complex intersection. Methods: Spatial game theory was developed by Nowak in the context of modeling evolutionary dynamics, or the way in which species evolve over time. Spatial game theory offers a two-dimensional view of analyzing the state of neighbors and updating based on the surroundings. Our work builds upon this foundation by studying evolutionary game theory networks with respect to neural networks. The novel concept is that neurons may adopt a particular strategy that will allow propagation of information. The strategy may therefore act as the mechanism for gating. Furthermore, the strategy of a neuron, as in a real brain, is impacted by the strategies of its neighbors. The techniques of spatial game theory already established by Nowak are reproduced to explain two basic cases and validate the implementation of the code. Two novel modifications are introduced in Chapters 3 and 4 that build on this network and may reflect neural networks. Results: The introduction of the two novel modifications, mutation and rewiring, in large parametric studies resulted in dynamics with an intermediate number of nodes firing at any given time. Further, even small mutation rates result in different dynamics that are more representative of the hypothesized ideal state. Conclusions: In both modifications to Nowak's model, the results demonstrate that the network does not become locked into a particular global state of passing all information or blocking all information. It is hypothesized that normal brain function occurs within this intermediate range and that a number of diseases are the result of moving outside of this range.
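As an illustration of the spatial game dynamics with the mutation modification, a minimal sketch follows of a Nowak-style spatial prisoner's dilemma on a periodic grid with imitate-the-best updating; the grid size, temptation payoff, and mutation rate are illustrative choices.

```python
# A minimal sketch of Nowak-style spatial game dynamics with mutation:
# cooperators and defectors on a grid play a prisoner's dilemma with their
# neighbors, imitate the best-scoring neighbor, and flip at a small rate.
import numpy as np

rng = np.random.default_rng(5)
n, b, mu = 50, 1.8, 0.01              # grid size, temptation payoff, mutation rate
S = rng.integers(0, 2, size=(n, n))   # 1 = cooperate ("pass"), 0 = defect ("block")

def neighbors_sum(A):
    """Sum over the 8-neighborhood with periodic boundaries."""
    return sum(np.roll(np.roll(A, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))

for _ in range(100):
    coop_nbrs = neighbors_sum(S)
    payoff = np.where(S == 1, coop_nbrs, b * coop_nbrs)
    # Imitate the best-scoring cell in the neighborhood (including self).
    best, best_pay = S.copy(), payoff.copy()
    for i in (-1, 0, 1):
        for j in (-1, 0, 1):
            p = np.roll(np.roll(payoff, i, 0), j, 1)
            s = np.roll(np.roll(S, i, 0), j, 1)
            better = p > best_pay
            best_pay = np.where(better, p, best_pay)
            best = np.where(better, s, best)
    S = np.where(rng.random((n, n)) < mu, 1 - best, best)   # mutation step

print(f"fraction of cooperating ('passing') nodes: {S.mean():.2f}")
```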
Abstract:
The Simulation Automation Framework for Experiments (SAFE) streamlines the design and execution of experiments with the ns-3 network simulator. SAFE ensures that best practices are followed throughout the workflow of a network simulation study, guaranteeing that results are both credible and reproducible by third parties. Data analysis is a crucial part of this workflow, where mistakes are often made. Even when appearing in highly regarded venues, scientific graphics in numerous network simulation publications fail to include graphic titles, units, legends, and confidence intervals. After studying the literature in network simulation methodology and information graphics visualization, I developed a visualization component for SAFE to help users avoid these errors in their scientific workflow. The functionality of this new component includes support for interactive visualization through a web-based interface and for the generation of high-quality, static plots that can be included in publications. The overarching goal of my contribution is to help users create graphics that follow best practices in visualization and thereby succeed in conveying the right information about simulation results.
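As an illustration of the plotting conventions the component enforces (titles, axis labels, legends, confidence intervals), a minimal matplotlib sketch follows with synthetic stand-ins for ns-3 output; it is not the SAFE component itself.

```python
# A minimal sketch of a titled, labeled, legended plot with confidence
# intervals around mean simulation results, saved as a static figure.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
load = np.arange(1, 11)                                   # offered load (Mbit/s)
reps = rng.normal(0.9 * load, 0.4, size=(30, load.size))  # 30 simulation replications

mean = reps.mean(axis=0)
ci95 = 1.96 * reps.std(axis=0, ddof=1) / np.sqrt(reps.shape[0])

fig, ax = plt.subplots()
ax.errorbar(load, mean, yerr=ci95, capsize=3, label="mean of 30 runs (95% CI)")
ax.set_title("Throughput vs. offered load (synthetic example)")
ax.set_xlabel("Offered load (Mbit/s)")
ax.set_ylabel("Throughput (Mbit/s)")
ax.legend()
fig.savefig("throughput.png", dpi=300)    # publication-quality static plot
```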
Abstract:
Since the appearance of downsized and simplified TCP/IP stacks, single nodes from Wireless Sensor Networks (WSNs) have become directly accessible from the Internet with commonly used networking tools and applications (e.g., Telnet or SMTP). However, TCP has been shown to perform poorly in wireless networks, especially across multiple wireless hops. This paper examines TCP performance optimizations based on distributed caching and local retransmission strategies at intermediate nodes of a TCP connection, and proposes extensions of these strategies. The paper studies the impact of different radio duty-cycling MAC protocols on end-to-end TCP performance when using the proposed TCP optimization strategies in an extensive experimental evaluation on a real-world sensor network testbed.
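As an illustration of the caching and local retransmission idea, a minimal conceptual sketch follows of an intermediate node that caches forwarded segments and answers a duplicate ACK with a local retransmission; it is a toy model, not a real TCP stack.

```python
# A minimal sketch of distributed caching with local retransmission: an
# intermediate node keeps recently forwarded TCP segments and, on a
# duplicate ACK, retransmits the missing segment locally instead of
# waiting for the end-to-end sender.
from collections import OrderedDict

class CachingNode:
    def __init__(self, cache_size=8):
        self.cache = OrderedDict()          # seqno -> segment payload
        self.cache_size = cache_size
        self.last_ack = None

    def forward_segment(self, seqno, payload):
        self.cache[seqno] = payload         # remember while forwarding downstream
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)  # evict the oldest cached segment
        return seqno, payload

    def on_ack(self, ackno):
        """Return a locally retransmitted segment on a duplicate ACK, if cached."""
        if ackno == self.last_ack and ackno in self.cache:
            return ackno, self.cache[ackno]  # saves wireless hops to the sender
        self.last_ack = ackno
        return None

node = CachingNode()
for s in range(5):
    node.forward_segment(s, f"data-{s}")
print(node.on_ack(2))   # first ACK 2 -> None
print(node.on_ack(2))   # duplicate ACK 2 -> segment 2 retransmitted from cache
```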
Abstract:
Bluetooth wireless technology is a robust short-range communications system designed for low power (10 meter range) and low cost. It operates in the 2.4 GHz Industrial Scientific Medical (ISM) band and employs two techniques for minimizing interference: a frequency hopping scheme, which nominally splits the 2.400-2.485 GHz band into 79 frequency channels, and a time division duplex (TDD) scheme, which is used to switch to a new frequency channel on 625 μs boundaries. During normal operation a Bluetooth device is active on a different frequency channel every 625 μs, thus minimizing the chances of continuous interference impacting the performance of the system. The smallest unit of a Bluetooth network is called a piconet, and can have a maximum of eight nodes. Bluetooth devices must assume one of two roles within a piconet, master or slave, where the master governs quality of service and the frequency hopping schedule within the piconet and the slave follows the master's schedule. A piconet must have a single master and up to 7 active slaves. By allowing devices to have roles in multiple piconets through time multiplexing, i.e., slave/slave or master/slave, the Bluetooth technology allows for interconnecting multiple piconets into larger networks called scatternets. The Bluetooth technology is explored here in the context of enabling ad-hoc networks. The Bluetooth specification provides flexibility in the scatternet formation protocol, outlining only the mechanisms necessary for future protocol implementations. A new protocol for scatternet formation and maintenance, mscat, is presented and its performance is evaluated using a Bluetooth simulator. The free variables manipulated in this study include device activity and the probabilities of devices performing discovery procedures. The relationship between the role a device has in the scatternet and its probability of performing discovery was examined and related to the scatternet topology formed. The results show that mscat creates dense network topologies for networks of 30, 50 and 70 nodes. The mscat protocol results in approximately a 33% increase in slaves per piconet and a reduction of approximately 12.5% in average roles per node. For 50-node scenarios the parameter set that produces the best outcome is an unconnected node inquiry probability (UP) of 10%, a master node inquiry probability (MP) of 80% and a slave inquiry probability (SP) of 40%. The mscat protocol extends the Bluetooth specification for the formation and maintenance of scatternets in an ad-hoc network.
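As an illustration of why frequency hopping limits continuous interference between piconets, a minimal sketch follows in which two piconets hop independently over the 79 channels every 625 μs; the uniform random hop sequence is a stand-in for the real Bluetooth hopping sequence.

```python
# A minimal sketch of inter-piconet interference under frequency hopping:
# two independent piconets hopping over 79 channels collide on only about
# 1/79 of the 625 us slots.
import numpy as np

rng = np.random.default_rng(4)
slots = 1_000_000                       # 625 us each -> 625 s of operation
pico_a = rng.integers(0, 79, slots)     # channel index per slot, piconet A
pico_b = rng.integers(0, 79, slots)     # channel index per slot, piconet B

collisions = np.mean(pico_a == pico_b)
print(f"fraction of colliding slots: {collisions:.4f} (expected ~{1 / 79:.4f})")
```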
Abstract:
Important food crops like rice are constantly exposed to various stresses that can have a devastating effect on their survival and productivity. Being sessile, these highly evolved organisms have developed elaborate molecular machinery to sense a mixture of stress signals and elicit a precise response to minimize the damage. However, recent discoveries revealed that the interplay of these stress regulatory and signaling molecules is highly complex and remains largely unknown. In this work, we conducted large-scale analysis of differential gene expression using advanced computational methods to dissect the regulation of the stress response, which is at the heart of all molecular changes leading to the observed phenotypic susceptibility. One of the most important stress conditions in terms of loss of productivity is drought. We performed genomic and proteomic analysis of epigenetic and miRNA mechanisms in the regulation of drought-responsive genes in rice and found subsets of genes with striking properties. Overexpressed gene sets included higher numbers of epigenetic marks, miRNA targets, and transcription factors which regulate drought tolerance. On the other hand, underexpressed gene sets were poor in the above features but were rich in metabolic genes with multiple co-expression partners contributing substantially to drought resistance. Identification and characterization of the patterns exhibited by differentially expressed genes hold the key to uncovering the synergistic and antagonistic components of the cross-talk between stress response mechanisms. We performed meta-analysis on drought and bacterial stresses in rice and Arabidopsis, and identified hundreds of shared genes. We found a high level of conservation of gene expression between these stresses. Weighted co-expression network analysis detected two tight clusters of genes, made up of master transcription factors and signaling genes, showing strikingly opposite expression status. To comprehensively identify the stress-responsive genes shared between multiple abiotic and biotic stresses in rice, we performed separate meta-analyses of microarray studies from seven different abiotic and six biotic stresses and found more than thirteen hundred shared stress-responsive genes. Various machine learning techniques utilizing these genes classified the stresses into two major classes, namely abiotic and biotic, and into multiple classes of individual stresses with high accuracy, and identified the top genes showing distinct patterns of expression. Functional enrichment and co-expression network analysis revealed the different roles of plant hormones and transcription factors in conserved and non-conserved gene sets in the regulation of the stress response.
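As an illustration of the classification step, a minimal scikit-learn sketch follows in which a random-forest classifier separates abiotic from biotic samples and reports the most discriminative genes; the expression matrix and labels are synthetic stand-ins for the microarray data used in the work.

```python
# A minimal sketch of stress classification from shared stress-responsive
# genes: a random forest separates abiotic from biotic samples and ranks
# genes by importance. Data are synthetic, not the thesis' microarrays.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n_samples, n_genes = 120, 1300            # ~thirteen hundred shared genes
X = rng.standard_normal((n_samples, n_genes))
y = rng.integers(0, 2, n_samples)         # 0 = abiotic, 1 = biotic (synthetic)
X[y == 1, :20] += 1.5                     # plant 20 informative marker genes

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(f"cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")

clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:10]
print("top discriminative genes (indices):", top)
```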
Abstract:
Sensor networks have been an active research area in the past decade due to the variety of their applications. Many research studies have been conducted to solve the problems underlying the middleware services of sensor networks, such as self-deployment, self-localization, and synchronization. With the provided middleware services, sensor networks have grown into a mature technology used as a detection and surveillance paradigm for many real-world applications. The individual sensors are small in size, so they can be deployed in areas with limited space to make unobstructed measurements in locations that traditional centralized systems would have trouble reaching. However, a few physical limitations can prevent sensors from performing at their maximum potential: individual sensors have a limited power supply, and the wireless band can become very cluttered when multiple sensors try to transmit at the same time. Furthermore, the individual sensors have a limited communication range, so the network may not have a 1-hop communication topology, and routing can be a problem in many cases. Carefully designed algorithms can alleviate the physical limitations of sensor networks and allow them to be utilized to their full potential. Graphical models are an intuitive choice for designing sensor network algorithms. This thesis focuses on a classic application of sensor networks: detecting and tracking targets. It develops feasible inference techniques for sensor networks using statistical graphical model inference, binary sensor detection, event isolation, and dynamic clustering. The main strategy is to use only binary data for rough global inferences and then dynamically form small-scale clusters around the target for detailed computations. This framework is then extended to network topology manipulation, so that it can be applied to tracking in different network topology settings. Finally, the system was tested in both simulation and real-world environments. The simulations were performed on various network topologies, from regularly distributed networks to randomly distributed networks. The results show that the algorithm performs well in randomly distributed networks, and hence requires minimal deployment effort. The experiments were carried out in both corridor and open-space settings. An in-home fall detection system was simulated with real-world settings: it was set up with 30 bumblebee radars and 30 ultrasonic sensors driven by TI EZ430-RF2500 boards scanning a typical 800 sqft apartment. The bumblebee radars are calibrated to detect a falling human body, and the two-tier tracking algorithm is used on the ultrasonic sensors to track the location of elderly residents.
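As an illustration of the two-tier strategy of rough global inference from binary data followed by detailed computation in a dynamic cluster, a minimal sketch follows; the geometry, detection radius, and weighting rule are illustrative assumptions, not the thesis' algorithm.

```python
# A minimal sketch of two-tier target localization: binary detections from
# all sensors give a rough position, then only the nearby cluster of
# sensors contributes range measurements for a refined weighted estimate.
import numpy as np

rng = np.random.default_rng(6)
sensors = rng.uniform(0, 20, size=(30, 2))       # 30 randomly deployed sensors
target = np.array([12.0, 7.5])
dist = np.linalg.norm(sensors - target, axis=1)

# Tier 1: binary detection only (1 bit per sensor) -> rough global inference.
detected = dist < 6.0                            # assumed detection radius
rough = sensors[detected].mean(axis=0)           # centroid of firing sensors

# Tier 2: a dynamic cluster around the rough estimate does detailed work.
cluster = np.linalg.norm(sensors - rough, axis=1) < 6.0
ranges = dist[cluster] + 0.2 * rng.standard_normal(cluster.sum())  # noisy ranging
w = 1.0 / (ranges + 1e-9)                        # weight nearer sensors more
refined = (sensors[cluster] * w[:, None]).sum(axis=0) / w.sum()

print(f"rough estimate error  : {np.linalg.norm(rough - target):.2f} m")
print(f"refined estimate error: {np.linalg.norm(refined - target):.2f} m")
```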
Abstract:
In recent years, advanced metering infrastructure (AMI) has been a main research focus because the traditional power grid has been unable to meet development requirements. There has been an ongoing effort to increase the number of AMI devices that provide real-time data readings to improve system observability. AMI deployed across distribution secondary networks provides load and consumption information for individual households, which can improve grid management. The significant upgrade costs associated with retrofitting existing meters with network-capable sensing can be made more economical by using image processing methods to extract usage information from images of the existing meters. This thesis presents a new solution that uses online exchange of power consumption information with a cloud server without modifying the existing electromechanical analog meters. In this framework, a systematic approach to extracting energy data from images replaces the manual reading process. A case study compares the digital imaging approach to the averages determined by visual readings over a one-month period.
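As an illustration of the reading-extraction step that would follow the image processing, a minimal sketch converts per-dial pointer angles into a meter reading, using the convention that adjacent dials on an electromechanical meter rotate in opposite directions; the angles and the coded dial convention are assumptions for illustration.

```python
# A minimal sketch of converting detected dial pointer angles into a meter
# reading; angles here are hypothetical outputs of an image-processing
# front end, not values measured in this work.

def dials_to_kwh(angles_deg):
    """Convert per-dial pointer angles (degrees clockwise from 12 o'clock)
    to a meter reading; dial 0 is the most significant digit."""
    digits = []
    for i, a in enumerate(angles_deg):
        a = a % 360
        if i % 2 == 1:                 # assume alternate dials run counter-clockwise
            a = (360 - a) % 360
        digits.append(int(a // 36))    # 10 digits -> 36 degrees per digit
    return int("".join(map(str, digits)))

# Hypothetical pointer angles from the image-processing front end.
print(dials_to_kwh([80.0, 290.0, 150.0, 30.0]))   # -> kWh reading 2149
```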