945 results for scale free network
Abstract:
By providing vehicle-to-vehicle and vehicle-to-infrastructure wireless communications, vehicular ad hoc networks (VANETs), also known as "networks on wheels", can greatly enhance traffic safety, traffic efficiency and the driving experience for intelligent transportation systems (ITS). However, unique features of VANETs, such as high mobility and the uneven distribution of vehicular nodes, pose critical efficiency and reliability challenges for their implementation. Motivated by the great application potential of VANETs, this dissertation targets the design of efficient in-network data processing and dissemination. Given the significance of message aggregation, data dissemination and data collection, the research aims to enhance traffic safety and traffic efficiency, and to develop novel commercial applications on top of VANETs, along four directions: 1) accurate and efficient message aggregation to detect safety-relevant on-road events, 2) reliable data dissemination to notify remote vehicles, 3) efficient and reliable spatial data collection from vehicular sensors, and 4) novel applications that exploit the commercial potential of VANETs. Specifically, to enable cooperative detection of safety-relevant events on the roads, the structure-less message aggregation (SLMA) scheme is proposed to improve communication efficiency and message accuracy. The relative-position-based message dissemination (RPB-MD) scheme is proposed to reliably and efficiently disseminate messages to all intended vehicles in the zone-of-relevance under varying traffic density. Given the large volume of vehicular sensor data available in VANETs, the compressive-sampling-based data collection (CS-DC) scheme is proposed to efficiently collect spatially correlated data on a large scale, especially in dense traffic.
In addition, building on the solutions proposed for the application-specific issues of data dissemination and data collection, several appealing value-added applications are developed to exploit the commercial potential of VANETs: general purpose automatic survey (GPAS), VANET-based ambient ad dissemination (VAAD), and VANET-based vehicle performance monitoring and analysis (VehicleView). By improving the efficiency and reliability of in-network data processing and dissemination, including message aggregation, data dissemination and data collection, and by developing these novel applications, this dissertation helps push VANETs further toward massive deployment.
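The abstract does not spell out CS-DC's algorithm. As background only, the compressive-sampling principle it builds on (recovering a spatially sparse signal from far fewer measurements than sensors) can be sketched with orthogonal matching pursuit; the matrix sizes, sparsity level and seed below are invented for illustration and are not the dissertation's:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x by
    greedily selecting the column most correlated with the residual and
    re-fitting by least squares on the selected support."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50)) / np.sqrt(20)   # random measurement matrix
x_true = np.zeros(50)
x_true[[3, 17]] = [1.0, -2.0]                     # 2-sparse toy "sensor field"
x_hat = omp(A, A @ x_true, k=2)                   # recover from 20 samples
```

The point of the sketch is the measurement count: 50 sensor values are reconstructed from only 20 linear combinations, which is the kind of saving CS-DC exploits in dense traffic.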
Abstract:
Modern data centers host hundreds of thousands of servers to achieve economies of scale. Such a huge number of servers creates challenges for the data center network (DCN) to provide proportionally large bandwidth. In addition, the deployment of virtual machines (VMs) in data centers raises the requirements for efficient resource allocation and fine-grained resource sharing. Further, the large number of servers and switches in the data center consumes significant amounts of energy. Even though servers are becoming more energy efficient thanks to various energy-saving techniques, the DCN still accounts for 20% to 50% of the energy consumed by the entire data center. The objective of this dissertation is to enhance DCN performance as well as its energy efficiency by optimizing both the host and network sides. First, since the DCN demands huge bisection bandwidth to interconnect all the servers, we propose a parallel packet switch (PPS) architecture that directly processes variable-length packets without segmentation-and-reassembly (SAR). The proposed PPS achieves large bandwidth by combining the switching capacities of multiple fabrics, and it further improves switch throughput by avoiding the padding bits of SAR. Second, since certain resource demands of a VM are bursty and stochastic in nature, we propose the Max-Min Multidimensional Stochastic Bin Packing (M3SBP) algorithm to satisfy both deterministic and stochastic demands in VM placement. M3SBP calculates an equivalent deterministic value for the stochastic demands and maximizes the minimum resource utilization ratio of each server. Third, to provide the necessary traffic isolation for VMs that share the same physical network adapter, we propose the Flow-level Bandwidth Provisioning (FBP) algorithm. By reducing the flow scheduling problem to multiple stages of packet queuing problems, FBP guarantees the provisioned bandwidth and delay performance for each flow.
Finally, although DCNs are typically provisioned with full bisection bandwidth, their traffic exhibits fluctuating patterns; we therefore propose a joint host-network optimization scheme to enhance the energy efficiency of DCNs during off-peak traffic hours. The proposed scheme uses a unified representation that converts the VM placement problem into a routing problem, and it employs depth-first and best-fit search to find efficient paths for flows.
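M3SBP's exact formulation is in the dissertation itself; the sketch below only illustrates the two ideas the abstract names, under an assumed Gaussian demand model: converting a stochastic demand to a deterministic equivalent via a quantile, and a greedy placement that balances (max-min) server utilization. The function names and the 5% overflow probability are illustrative assumptions, not the thesis's:

```python
from statistics import NormalDist

def equivalent_demand(mu, sigma, alpha=0.05):
    """Deterministic equivalent of a stochastic demand assumed ~ N(mu, sigma):
    the value exceeded only with probability alpha (Gaussian quantile)."""
    return mu + NormalDist().inv_cdf(1.0 - alpha) * sigma

def place_vms(vms, n_servers, capacity):
    """Greedy max-min placement sketch: assign each VM (a list of per-dimension
    equivalent demands) to the feasible server with the lowest peak utilization.
    Note: the returned assignment follows the sorted (largest-first) VM order."""
    loads = [[0.0] * len(capacity) for _ in range(n_servers)]
    assignment = []
    for vm in sorted(vms, key=sum, reverse=True):          # largest demands first
        feasible = [s for s in range(n_servers)
                    if all(l + d <= c for l, d, c in zip(loads[s], vm, capacity))]
        if not feasible:
            raise RuntimeError("no feasible server for VM")
        best = min(feasible, key=lambda s: max(loads[s]))  # balance utilization
        for k, d in enumerate(vm):
            loads[best][k] += d
        assignment.append(best)
    return assignment, loads

eq = equivalent_demand(10.0, 2.0)      # about 13.29 for alpha = 0.05
servers, loads = place_vms([[4.0], [4.0]], n_servers=2, capacity=[5.0])
```

With two VMs of equivalent demand 4.0 and server capacity 5.0, the balancing rule spreads them across both servers rather than saturating one.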
Abstract:
A distributed network of cortical and subcortical brain regions mediates the control of voluntary behavior, but it is unclear how this complex system may flexibly shift between different behavioral events. This thesis describes the neurophysiological changes in several key nuclei across the brain during flexible behavior, using saccadic eye movements in rhesus macaque monkeys. We examined five nuclei critical for saccade initiation and modulation: the frontal eye field (FEF) in the cerebral cortex, the subthalamic nucleus (STN), caudate nucleus (CD), and substantia nigra pars reticulata (SNr) in the basal ganglia (BG), and the superior colliculus (SC) in the midbrain. The first study tested whether a ‘threshold’ theory of how neuronal activity cues saccade initiation is consistent with the flexible control of behavior. The theory suggests there is a fixed level of FEF and SC neuronal activation at which saccades are initiated. Our results provide strong evidence against a fixed saccade threshold in either structure during flexible behavior, and indicate that threshold variability might depend on the level of inhibitory signals applied to the FEF or SC. The next two studies investigated the BG network as a likely candidate to modulate a saccade initiation mechanism, based on strong inhibitory output signals from the BG to the FEF and SC. We investigated the STN and CD (BG input), and the SNr (BG oculomotor output) to examine changes across the BG network. This revealed robust task-contingent shifts in BG signaling (Chapter 3), which uniquely impacted saccade initiation according to behavioral condition (Chapters 3 and 4). The thesis concludes with a published short review of the mechanistic effects of BG deep brain stimulation (Chapter 5), and a general discussion including proof of concept saccade behavioral changes in an MPTP-induced Parkinsonian model (Chapter 6). 
The studies presented here demonstrate that the conditions for saccade initiation by the FEF and SC vary with behavioral condition while, simultaneously, large-scale task-dependent shifts occur in BG signaling, consistent with the observed modulation of FEF and SC activity. Taken together, these results describe a mechanistic framework by which the cortico-BG loop may contribute to the flexible control of behavior.
Abstract:
In this study, we carried out a comparative analysis between two classical methodologies to prospect residue contacts in proteins: the traditional cutoff-dependent (CD) approach and the cutoff-free Delaunay tessellation (DT). In addition, two alternative coarse-grained representations of residues were tested: the alpha carbon (CA) and the side-chain geometric center (GC). A database was built comprising three top classes: all alpha, all beta, and alpha/beta. We found that a cutoff value of about 7.0 Å emerges as an important distance parameter. Up to 7.0 Å, CD and DT properties are unified, which implies that at this distance all contacts are complete and legitimate (not occluded). We also showed that DT has an intrinsic missing-edges problem when mapping the first layer of neighbors. In proteins, it may produce systematic errors affecting mainly the contact network in beta chains with CA. The almost-Delaunay (AD) approach has been proposed to solve this DT problem. We found that even AD may not be an advantageous solution. As a consequence, in the strict range up to 7.0 Å, the CD approach proved to be a simpler, more complete, and more reliable technique than DT or AD. Finally, we showed that coarse-grained residue representations may introduce bias into the analysis of neighbors at cutoffs up to 6.8 Å, with CA favoring alpha proteins and GC favoring beta proteins. This provides an additional argument for the value of 7.0 Å as an important lower-bound cutoff to be used in contact analysis of proteins.
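As a concrete illustration of the cutoff-dependent approach discussed above, a CD contact map reduces to thresholding pairwise distances between residue representatives (CA or GC) at the chosen cutoff. The coordinates in this sketch are invented toy values, not data from the study:

```python
import numpy as np

def contact_map(coords, cutoff=7.0):
    """Cutoff-dependent (CD) residue contacts: residues i and j are in
    contact when the distance between their representative points
    (CA or GC) does not exceed the cutoff, in Å."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    contacts = d <= cutoff
    np.fill_diagonal(contacts, False)   # a residue is not its own contact
    return contacts

# Toy coordinates (Å): three collinear points spaced 5 Å apart.
pts = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
cm = contact_map(pts)    # the 5 Å pairs are contacts; the 10 Å pair is not
```

The study's comparison amounts to asking when this thresholded network agrees with the edge set of a Delaunay tessellation of the same points.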
Abstract:
Research networks provide a framework for review, synthesis and systematic testing of theories by multiple scientists across international borders, which is critical for addressing global-scale issues. In 2012, a greenhouse gas (GHG) research network referred to as MAGGnet (Managing Agricultural Greenhouse Gases Network) was established within the Croplands Research Group of the Global Research Alliance on Agricultural Greenhouse Gases (GRA). With involvement from 46 Alliance member countries, MAGGnet seeks to provide a platform for the inventory and analysis of agricultural GHG mitigation research throughout the world. To date, metadata from 315 experimental studies in 20 countries have been compiled using a standardized spreadsheet. Most studies were completed (74%) and conducted over a 1-3-year duration (68%). Soil carbon and nitrous oxide emissions were measured in over 80% of the studies. Among plant variables, grain yield was assessed most frequently (56%), followed by stover (35%) and root (9%) biomass. MAGGnet has contributed to modeling efforts and has spurred other research groups in the GRA to collect experimental site metadata using an adapted spreadsheet. With continued growth and investment, MAGGnet will leverage the limited-resource investments of any one country to produce an inclusive, globally shared meta-database focused on the science of GHG mitigation.
Abstract:
This thesis studies the state of the art of phasor measurement units (PMUs) as well as the metrological requirements stated in the IEEE C37.118.1 and C37.118.2 standards to guarantee correct measurement performance. Communication systems among PMUs and their possible applicability in the field of power quality (PQ) assessment are also investigated. This preliminary study is followed by an analysis of the working principle of real-time (RT) simulators and the importance of hardware-in-the-loop (HIL) implementation, examining possible case studies specific to PMUs, including compliance tests, which are among the most important parts. The core of the thesis is the implementation of a PMU model in the IEEE 5-bus network in Simulink and the validation of the results using the OPAL-RT 4510 real-time simulator. An initial check gives an idea of the quality of the Simulink results by comparing the PMU data with the load-flow steady-state information. In this part, accuracy indices are also calculated for both voltage and current synchrophasors. The following part consists of the implementation of the same code in the OPAL-RT 4510 simulator, after which an initial qualitative analysis is carried out to assess the outcomes. Finally, the results are confirmed by examining, with a Matlab script, the voltage and current synchrophasors and accuracy indices obtained from the Simulink models and from the OPAL system. This work also proposes suggestions for the upcoming operation of PMUs in a more complex system, such as a Digital Twin (DT), to improve the performance of the existing protection devices of the distribution system operator (DSO) and enhance future power system reliability.
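One accuracy index such compliance tests rely on is defined directly in IEEE C37.118.1: the total vector error (TVE), the magnitude of the complex estimation error relative to the reference synchrophasor, with a 1% steady-state compliance limit. The thesis's own accuracy indices are not specified in the abstract; the sketch below shows only the standard TVE definition:

```python
def total_vector_error(measured, reference):
    """Total Vector Error (TVE) per IEEE C37.118.1: magnitude of the complex
    synchrophasor estimation error relative to the reference phasor."""
    return abs(measured - reference) / abs(reference)

# A 1% magnitude error with no phase error yields exactly 1% TVE,
# the steady-state compliance limit of IEEE C37.118.1.
tve = total_vector_error(1.01 + 0.0j, 1.0 + 0.0j)
```

Because TVE mixes magnitude and phase error into a single number, it is a convenient scalar to compare the Simulink and OPAL-RT synchrophasor streams against a load-flow reference.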
Abstract:
Recent research trends in computer-aided drug design have shown an increasing interest in advanced approaches able to deal with large amounts of data. This demand arose from the awareness of the complexity of biological systems and from the availability of data provided by high-throughput technologies. As a consequence, drug research has embraced this paradigm shift by exploiting approaches such as those based on networks. Indeed, the process of drug discovery can benefit from network-based methods at different steps, from target identification to drug repurposing. From this broad range of opportunities, this thesis focuses on three main topics: (i) chemical space networks (CSNs), which are designed to represent and characterize bioactive compound data sets; (ii) drug-target interaction (DTI) prediction through a network-based algorithm that predicts missing links; and (iii) COVID-19 drug research, explored by implementing COVIDrugNet, a network-based tool for COVID-19-related drugs. The main highlight emerging from this thesis is that network-based approaches are useful methodologies for tackling different issues in drug research. In detail, CSNs are valuable coordinate-free, graphically accessible representations of the structure-activity relationships of bioactive compound data sets, especially for medium-to-large libraries of molecules. DTI prediction through the random walk with restart algorithm on heterogeneous networks can be a helpful method for target identification. COVIDrugNet is an example of the usefulness of network-based approaches for studying drugs related to a specific condition, i.e., COVID-19, and the same 'systems-based' approaches can be applied to other diseases.
To conclude, network-based tools are proving suitable for many applications in drug research and provide the opportunity to model and analyze diverse drug-related data sets, even large ones, while also integrating multi-domain information.
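The random walk with restart mentioned above has a compact core. The sketch below runs it on a small homogeneous toy graph, not the thesis's heterogeneous drug-target network, and the restart probability of 0.15 is a commonly used assumption rather than the thesis's setting:

```python
import numpy as np

def random_walk_with_restart(A, seed_idx, restart=0.15, tol=1e-10):
    """Random walk with restart on a graph with adjacency matrix A:
    iterate p <- (1 - restart) * W @ p + restart * e_seed, with W
    column-normalized. The stationary p ranks nodes by proximity to the
    seed; high-scoring unlinked pairs are candidate missing links."""
    W = A / A.sum(axis=0, keepdims=True)   # column-stochastic walk matrix
    p = np.zeros(A.shape[0])
    p[seed_idx] = 1.0
    e = p.copy()                           # restart distribution
    while True:
        p_next = (1 - restart) * W @ p + restart * e
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy path graph 0-1-2: proximity to the seed (node 0) decays with distance.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
scores = random_walk_with_restart(A, seed_idx=0)
```

In the DTI setting the same iteration runs on a block matrix joining drug-drug, target-target and known drug-target edges, and the top-ranked unconnected targets for a drug seed become the predicted interactions.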
Abstract:
The coastal ocean is a complex environment with extremely dynamic processes that require a high-resolution, cross-scale modeling approach in which all hydrodynamic fields and scales are considered integral parts of the overall system. In the last decade, unstructured-grid models have been used to advance seamless modeling across scales. On the other hand, data assimilation methodologies to improve unstructured-grid models in the coastal seas have been developed only recently and need significant advancement. Here, we link unstructured-grid ocean modeling to variational data assimilation methods. In particular, we show results from the SANIFS modeling system, based on the SHYFEM fully-baroclinic unstructured-grid model interfaced with OceanVar, a state-of-the-art variational data assimilation scheme adopted for several structured-grid systems. OceanVar implements a 3DVar DA scheme. The background error covariance matrix is modeled as the combination of three linear operators. The vertical part is represented using multivariate EOFs for temperature, salinity, and sea level anomaly. The horizontal part is assumed to be Gaussian and isotropic and is modeled using a first-order recursive filter algorithm designed for structured, regular grids. Here we introduce a novel recursive filter algorithm for unstructured grids. A local hydrostatic adjustment scheme models the rapidly evolving part of the background error covariance. We designed two data assimilation experiments using the SANIFS implementation interfaced with OceanVar over the period 2017-2018, one assimilating only temperature and salinity from Argo profiles and the second also including sea level anomaly. The results showed a successful implementation of the approach and the added value of the assimilation for the active tracer fields. Over the broad basin, no significant improvements are seen for sea level, which requires future investigation.
Furthermore, a Machine Learning methodology based on an LSTM network has been used to predict the model SST increments.
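The first-order recursive filter that OceanVar applies on structured grids (and that the thesis generalizes to unstructured meshes) can be sketched in one dimension: a causal exponential smoother swept forward and then backward so the response has no one-sided lag. The coefficient and the naive boundary treatment below are illustrative assumptions:

```python
import numpy as np

def recursive_filter_1d(x, alpha, passes=2):
    """First-order recursive filter swept forward and then backward to
    remove the one-sided phase lag; repeated passes approach a Gaussian
    response. This is the structured-grid building block behind the
    horizontal background-error correlations."""
    y = np.asarray(x, dtype=float).copy()
    for _ in range(passes):
        for i in range(1, len(y)):            # forward sweep
            y[i] = alpha * y[i - 1] + (1 - alpha) * y[i]
        for i in range(len(y) - 2, -1, -1):   # backward sweep
            y[i] = alpha * y[i + 1] + (1 - alpha) * y[i]
    return y

impulse = np.zeros(11)
impulse[5] = 1.0
response = recursive_filter_1d(impulse, alpha=0.5)   # smooth bell shape
```

On a regular grid the sweeps follow the grid lines; the difficulty the thesis addresses is that an unstructured mesh has no such natural sweep ordering.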
Abstract:
The Internet of Things (IoT) has grown rapidly in recent years, leading to an increased need for efficient and secure communication between connected devices. Wireless Sensor Networks (WSNs) are composed of small, low-power devices capable of sensing and exchanging data, and are often used in IoT applications. In addition, Mesh WSNs involve intermediate nodes forwarding data to ensure more robust communication. The integration of Unmanned Aerial Vehicles (UAVs) into Mesh WSNs has emerged as a promising solution for increasing the effectiveness of data collection, as UAVs can act as mobile relays, providing extended communication range and reducing energy consumption. However, the integration of UAVs and Mesh WSNs still poses new challenges, such as the design of efficient control and communication strategies. This thesis explores the networking capabilities of WSNs and investigates how the integration of UAVs can enhance their performance. The research focuses on three main objectives: (1) Ground Wireless Mesh Sensor Networks, (2) Aerial Wireless Mesh Sensor Networks, and (3) Ground/Aerial WMSN integration. For the first objective, we investigate the use of the Bluetooth Mesh standard for IoT monitoring in different environments. The second objective focuses on deploying aerial nodes to maximize data collection effectiveness and the QoS of UAV-to-UAV links while maintaining aerial mesh connectivity. The third objective investigates hybrid WMSN scenarios with air-to-ground communication links. One of the main contributions of the thesis is the design and implementation of a software framework called "Uhura", which enables the creation of Hybrid Wireless Mesh Sensor Networks and abstracts and handles multiple M2M communication stacks on both ground and aerial links. The operations of Uhura have been validated through simulations and small-scale testbeds involving ground and aerial devices.
Abstract:
The pervasive availability of connected devices in every industrial and societal sector is pushing for an evolution of the well-established cloud computing model. The emerging paradigm of the cloud continuum embraces this decentralization trend and envisions virtualized computing resources physically located between traditional datacenters and data sources. By executing totally or partially closer to the network edge, applications can react more quickly to events, enabling advanced forms of automation and intelligence. However, these applications also induce new data-intensive workloads with low-latency constraints that require specialized resources, such as high-performance communication options (e.g., RDMA, DPDK, XDP). Unfortunately, cloud providers still struggle to integrate these options into their infrastructures. This risks undermining the principle of generality that underlies the cloud computing economy of scale, by forcing developers to tailor their code to low-level APIs, non-standard programming models, and static execution environments. This thesis proposes a novel system architecture to empower cloud platforms across the whole cloud continuum with Network Acceleration as a Service (NAaaS). To provide commodity yet efficient access to acceleration, this architecture defines a layer of agnostic high-performance I/O APIs, exposed to applications and clearly separated from the heterogeneous protocols, interfaces, and hardware devices that implement them. A novel system component embodies this decoupling by offering a set of agnostic OS features to applications: memory management for zero-copy transfers, asynchronous I/O processing, and efficient packet scheduling. This thesis also explores the design space of possible implementations of this architecture by proposing two reference middleware systems and by adopting them to support interactive use cases in the cloud continuum: a serverless platform and an Industry 4.0 scenario.
A detailed discussion and a thorough performance evaluation demonstrate that the proposed architecture enables easy-to-use, flexible integration of modern network acceleration into next-generation cloud platforms.
Abstract:
In the next-generation Internet of Things, the overhead introduced by grant-based multiple access protocols may engulf the access network as a consequence of the proliferation of connected devices. Grant-free access protocols are therefore gaining increasing interest to support massive multiple access. In addition to scalability requirements, new demands have emerged for massive multiple access, including latency and reliability. The challenges envisaged for future wireless communication networks, particularly in the context of massive access, include: i) a very large population of low-power devices transmitting short packets; ii) an ever-increasing scalability requirement; iii) a mild fixed maximum latency requirement; iv) a non-trivial reliability requirement. To this end, we suggest the joint use of grant-free access protocols, massive MIMO at the base station, framed schemes that let the contention start and end within a frame, and successive interference cancellation at the base station. In essence, this approach is encapsulated in the concept of coded random access with massive MIMO processing. These schemes can be explored from various angles, spanning the protocol stack from the physical (PHY) to the medium access control (MAC) layer. In this thesis, we delve into both layers, examining topics ranging from symbol-level signal processing to successive-interference-cancellation-based scheduling strategies. In parallel with proposing new schemes, our work includes a theoretical analysis aimed at providing valuable system design guidelines. As a main theoretical outcome, we propose a novel joint PHY and MAC layer design based on density evolution on sparse graphs.
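The MAC-layer core of coded random access with successive interference cancellation can be sketched in a few lines on an idealized collision channel (no MIMO processing, and the frame size, user count and replica degree below are invented for the example):

```python
import random

def coded_slotted_aloha(n_users, n_slots, degree=2, seed=0):
    """Frame-based coded random access sketch: each user transmits `degree`
    replicas of its packet in randomly chosen slots of the frame; the
    receiver decodes singleton slots and cancels the decoded users'
    replicas elsewhere (successive interference cancellation), which may
    turn collided slots into new singletons."""
    rng = random.Random(seed)
    slots = [set() for _ in range(n_slots)]
    for u in range(n_users):
        for s in rng.sample(range(n_slots), degree):
            slots[s].add(u)
    decoded, progress = set(), True
    while progress:
        progress = False
        for slot in slots:
            if len(slot) == 1:                 # singleton slot: decodable
                u = next(iter(slot))
                decoded.add(u)
                for other in slots:            # SIC: remove all replicas
                    other.discard(u)
                progress = True
    return decoded

decoded = coded_slotted_aloha(n_users=4, n_slots=10)
```

This peeling process is exactly what density evolution on sparse graphs analyzes: the slot/user structure is a bipartite graph, and decoding succeeds when the iterative removal of singleton edges percolates through it.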
Abstract:
In this thesis, a life cycle analysis (LCA) of a biofuel cell designed by a team from the University of Bologna was carried out. The purpose of this study is to investigate the possible environmental impacts of the production and use of the cell, and possible optimizations for an industrial scale-up. To this end, the first part of the work reviews the literature on biomass and fuel cell treatments and then on LCA studies of them. The experimental part presents the work done to create the life cycle inventory and the life cycle impact assessment. Several alternative scenarios were created to study process optimization, varying the reagents and the energy supply. To examine whether this technology can be competitive, biofuel cell use scenarios were compared with traditional biomass treatment technologies. The result of this study is that the technology is environmentally promising, provided that the output nutrients can be recovered without excessive energy consumption and that the energy used to prepare the solution is minimized.
Abstract:
Dopamine is a neurotransmitter that plays a role in several psychiatric and neurological disorders. In-vivo detection of its concentration at the microscopic scale would benefit the study of these conditions and help in the development of therapies. The ideal sensor would be biocompatible, able to probe concentrations in microscopic volumes, and sensitive to the small physiological concentrations of this molecule (10 nM - 1 μM). The ease of oxidation of dopamine makes it possible to detect it by electrochemical methods. An additional requirement in such experiments when run in water, though, is a large potential window inside which no redox reactions with water take place. A promising class of materials being explored is that of pyrolyzed photoresists. Photoresists can be lithographically patterned with micrometric resolution, and after pyrolysis they leave a glassy carbon material that is conductive, biocompatible and has a large electrochemical water window. In this work I developed a fabrication procedure for microelectrode arrays with three-dimensional electrodes, making the whole device using just a negative photoresist called SU8. Making 3D electrodes could be a way to enhance the sensitivity of the electrodes without occupying a bigger footprint on the device. I characterized the electrical, morphological, and electrochemical properties of these electrodes, in particular their sensitivity to dopamine. I also fabricated and tested a two-dimensional device for comparison. The three-dimensional devices fabricated showed inferior properties to their two-dimensional counterparts. I found a possible explanation and suggested some ways in which the fabrication could be improved.
Abstract:
The music industry has never experienced a period of change as intense and profound as today. Its digitalization through music streaming, the emergence of new devices for consuming music, and the interaction between the record industry and an audience that is increasingly central and decisive in the market's production and distribution choices are only some of the aspects, undoubtedly the most important ones, that characterize this new phase in the history of music. In this thesis I set out to observe and examine how, in recent years, the music industry has been approaching the world of social networks to promote and distribute musical products. A general picture of the social media environment is drawn, showing both its technological and social sides, and highlighting its positive and negative aspects. The second part of the work examines the communication strategies adopted by artists and industry professionals to reach the public more effectively, and the role that legacy media and users play in spreading the musical product on a large scale.
Abstract:
Urbanization has occasionally been linked to negative consequences, and the traffic light systems of urban arterial networks play an essential role in the operation of transport systems. The availability of new Intelligent Transportation System innovations has paved the way for connecting vehicles and road infrastructure. The Green Light Optimal Speed Advisory (GLOSA) is a recent application of vehicle-to-everything (V2X) technology. This thesis examines the GLOSA system's potential as a tool for traffic signal optimization. GLOSA advises drivers of the speed they must maintain to reduce waiting time. The study area considered in this thesis is the Via Aurelio Saffi - Via Emilia Ponente corridor in the Metropolitan City of Bologna, which has several signalized intersections. Several simulation runs were performed in the SUMOPy software for each peak-hour period (morning and afternoon) using recent actual traffic count data, with GLOSA devices operating over a 300 m advisory distance. In the morning peak hour, GLOSA outperformed the actuated traffic signal control baseline in terms of average waiting time, average speed, average fuel consumption per vehicle and average CO2 emissions: a remarkable 97% reduction in both fuel consumption and CO2 emissions was obtained, the average speed of the simulated vehicles increased by 7%, and 25% of travel time was saved. Similar results were obtained for the afternoon peak hour, with a 98% decrease in both fuel consumption and CO2 emissions, a 20% decrease in average waiting time, and a 2% increase in average speed. In addition, time loss decreased by 15% and 13% during the morning and afternoon peak hours, respectively. Toward the goal of sustainability, GLOSA shows promise in significantly lowering fuel consumption and CO2 emissions per vehicle.
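The thesis evaluates GLOSA in simulation rather than specifying its advisory rule, but the basic idea, advising a speed that lets the vehicle reach the stop line just as the light turns green, can be sketched as below. The function name, speed bounds and the 300 m / 30 s figures are illustrative assumptions; real GLOSA implementations also account for acceleration, queues and signal phase uncertainty:

```python
def glosa_advisory_speed(distance_m, time_to_green_s, v_max, v_min):
    """Advise the constant speed (m/s) at which the vehicle reaches the stop
    line just as the signal turns green, clipped to a practical speed range.
    If the light is already (or about to be) green, keep the maximum speed."""
    if time_to_green_s <= 0:
        return v_max
    return max(v_min, min(distance_m / time_to_green_s, v_max))

# 300 m from the light, green in 30 s: advise 10 m/s (36 km/h).
advice = glosa_advisory_speed(300.0, 30.0, v_max=13.9, v_min=4.0)
```

Arriving on the green wave instead of braking to a stop is precisely what removes the idling and stop-and-go phases responsible for the waiting-time, fuel and CO2 savings reported above.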