8 results for Cluster based protocols
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
Wireless sensor networks (WSNs) consist of a large number of sensor nodes, characterized by low power, limited transmission range and limited computational capabilities [1][2]. The cost of these devices is constantly decreasing, making it possible to use a large number of sensor devices in a wide array of commercial, environmental, military, and healthcare fields. Some of these applications involve placing the sensors evenly spaced along a straight line, for example in roads, bridges, tunnels, water catchments and water pipelines, city drainages, and oil and gas pipelines, forming a special class of networks which we define as Linear Wireless Networks (LWNs). In LWNs, data transmission happens hop by hop from the source to the destination, through a route composed of multiple relays. The peculiar topology of LWNs motivates the design of specialized protocols that take advantage of the linearity of such networks in order to increase reliability, communication efficiency, energy savings and network lifetime, and to minimize the end-to-end delay [3]. In this thesis a novel contention-based Medium Access Control (MAC) protocol called L-CSMA, specifically devised for LWNs, is presented. The basic idea of L-CSMA is to assign different priorities to nodes based on their position along the line. Priority is assigned in terms of sensing duration: nodes closer to the destination are assigned a shorter sensing time, and hence a higher priority, than the rest of the nodes. This mechanism speeds up the transmission of packets which are already in the path, making the transmission flow more efficient. Using the NS-3 simulator, the performance of L-CSMA in terms of packet success rate, that is, the percentage of packets that reach the destination, and throughput is compared with that of the IEEE 802.15.4 MAC protocol, the de-facto standard for wireless sensor networks. In general, L-CSMA outperforms the IEEE 802.15.4 MAC protocol.
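The position-based priority mechanism can be sketched as follows. This is a minimal illustration, not the thesis's actual protocol: the linear sensing-time formula and the constants are assumptions introduced here.

```python
# Sketch: position-dependent carrier-sensing durations in a linear network.
# Assumption (not from the thesis): sensing time grows linearly with the
# number of hops separating a node from the destination.

BASE_SENSING_US = 128   # hypothetical base sensing window, in microseconds
STEP_US = 64            # hypothetical extra sensing time per hop

def sensing_time_us(hops_to_destination: int) -> int:
    """Nodes closer to the destination sense for less time -> higher priority."""
    return BASE_SENSING_US + STEP_US * hops_to_destination

# A 5-node line: node 0 is adjacent to the destination, node 4 is the source.
line = {node: sensing_time_us(node) for node in range(5)}
print(line)  # node 0 gets the shortest window, so it wins contention first
```

Because the relay holding a packet already in transit sits closer to the destination than any new contender upstream, it finishes sensing first and seizes the channel, which is the flow-speedup effect the abstract describes.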
Abstract:
Internet traffic classification is a relevant and mature research field, yet one of growing importance and with still-open technical challenges, also due to the pervasive presence of Internet-connected devices in everyday life. We claim the need for innovative traffic classification solutions that are lightweight, adopt a domain-based approach, and not only concentrate on application-level protocol categorization but also classify Internet traffic by subject. To this purpose, this paper originally proposes a classification solution that leverages domain name information extracted from IPFIX summaries, DNS logs, and DHCP leases, and that can be applied to any kind of traffic. Our proposed solution is based on an extension of the Word2vec unsupervised learning technique running on a specialized Apache Spark cluster. In particular, learning techniques are leveraged to generate word embeddings from a mixed dataset composed of domain names and natural-language corpora, in a lightweight way and with general applicability. The paper also reports lessons learnt from our implementation and deployment experience, which demonstrates that our solution can process 5500 IPFIX summaries per second on an Apache Spark cluster with 1 slave instance in Amazon EC2, at a cost of $3860 per year. Reported experimental results about Precision, Recall, F-Measure, Accuracy, and Cohen's Kappa show the feasibility and effectiveness of the proposal. The experiments prove that the words contained in domain names do have a relation with the kind of traffic directed towards them; therefore, using specifically trained word embeddings, we are able to classify them into customizable categories. We also show that training word embeddings on larger natural-language corpora leads to improvements in precision of up to 180%.
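A pipeline of this kind first has to turn raw domain names from flow records into word-like tokens that an embedding model such as Word2vec can consume. The following is a minimal sketch of that step only; the splitting heuristic and the example domains are assumptions for illustration, not the paper's actual preprocessing.

```python
import re

def domain_to_tokens(domain: str) -> list[str]:
    """Split a domain name into word-like tokens for embedding training.
    Heuristic sketch: drop the TLD, then split the labels on non-letters."""
    labels = domain.lower().rstrip(".").split(".")[:-1]  # drop the TLD
    tokens = []
    for label in labels:
        tokens.extend(t for t in re.split(r"[^a-z]+", label) if t)
    return tokens

# Hypothetical domains as they might appear in DNS logs or IPFIX summaries.
for d in ["news.weather-report.example.com", "video.stream42.net"]:
    print(d, "->", domain_to_tokens(d))
```

Tokens produced this way can be mixed with natural-language text in one training corpus, which is how domain names end up sharing an embedding space with ordinary words.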
Abstract:
The aim of my master thesis is to develop novel, greener approaches for the cleaning of artworks: such treatment consists in the removal of old varnish layers, which tend to discolor or darken with time, thus allowing their replacement with a new protective coat. While the protocols presently applied can be effective in cleaning artworks, none of them takes into account conservators' health and safety or environmental issues. Thus, using biomass-derived components, which are non-toxic and reusable and/or compostable, might bring to heritage conservation an additional awareness of safety and environmental concerns. The laboratory work for the thesis was a collaboration between different groups. The biggest part of the work took place in the Polymer group, where gels were synthesized using polyhydroxybutyrate (PHB) from sustainable resources and green solvents. The use of gels can help reduce the volatilization of solvents and contributes to localizing the cleaning action. After the preparation of the gels, different characterization methods were used in order to estimate their properties and shelf-life. Finally, the work was completed with the application of the gels on a sculpture coated with undesired layers to be removed. Here, pre-mapping of the areas of interest was carried out with different optical techniques, followed by the application of the gels for the cleaning and by an analysis of the cleaning's effectiveness.
Abstract:
The main objective of my thesis work is to exploit Kubeflow, the Google-native, open-source platform, and specifically Kubeflow Pipelines, to execute a scalable Federated Learning (FL) ML process in a simplified, 5G-like test architecture hosting a Kubernetes cluster, and to apply the widely adopted FedAvg algorithm and its optimization FedProx, empowered by the ML platform's ability to ease the development and production cycle of this specific FL process. FL algorithms are more and more promising and adopted both in Cloud application development and in 5G communication enhancement: data coming from the monitoring of the underlying telco infrastructure are used for training and aggregation at edge nodes to optimize the algorithm's global model (which could be used, for example, for resource provisioning to reach an agreed QoS for the underlying network slice). After a study of the available papers and scientific articles related to FL, and with the help of the CTTC, which suggested studying and using Kubeflow to host the algorithm, we found out that this approach to the whole FL cycle deployment was not documented and could be interesting to investigate in more depth. This study may help prove the efficiency of the Kubeflow platform itself for the development of new FL algorithms that will support new applications, and especially to test the performance of the FedAvg algorithm in a simulated client-to-cloud communication, using the MNIST dataset as an FL benchmark.
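The core of FedAvg is the server-side aggregation step: client model weights are averaged, weighted by each client's local dataset size. The sketch below illustrates that step only, with plain Python lists standing in for real model tensors; the client counts and weight values are illustrative, not from the thesis.

```python
# Sketch of the FedAvg aggregation step: the server computes the
# dataset-size-weighted average of the clients' weight vectors.

def fed_avg(client_weights: list[list[float]], client_sizes: list[int]) -> list[float]:
    """Return the size-weighted average of the clients' weight vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two hypothetical clients: one trained on 100 samples, one on 300.
global_model = fed_avg([[1.0, 2.0], [3.0, 4.0]], [100, 300])
print(global_model)  # -> [2.5, 3.5]: the larger client pulls the average
```

In a Kubeflow Pipelines deployment, each client round and this aggregation would be separate pipeline steps; FedProx modifies only the clients' local objective, so the aggregation shown here stays the same.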
Abstract:
As part of their digital transformation, many organizations are adopting new technologies to support the development, deployment and management of their microservice-based architectures in cloud environments and across cloud providers. In this scenario, service and event meshes are emerging as dynamic, configurable infrastructure layers that facilitate complex interactions and the management of applications based on microservices and cloud services. The goal of this work is to analyze open-source mesh solutions (Istio, Linkerd, Apache EventMesh) from a performance standpoint, when they are used to manage the communication between microservice-based workflow applications within the cloud environment. To this purpose, a system was built to deploy each of the components both inside a single cluster and in a multi-cluster environment. The collection of the metrics and their summarization were carried out with a custom system, compatible with the Prometheus data format. The tests allowed us to evaluate the performance of each component together with its effectiveness. In general, while the maturity of the tested service mesh implementations could be ascertained, the event mesh solution we used appeared to be a technology that is not yet mature, due to numerous operational problems.
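As an illustration of what "compatible with the Prometheus data format" can mean in practice, here is a minimal sketch that renders latency samples in the Prometheus text exposition format; the metric name, labels and values are hypothetical, not taken from the thesis.

```python
# Sketch: emit a gauge metric in the Prometheus text exposition format,
# as a custom metrics collector might do. Metric name, labels and numbers
# are illustrative assumptions.

def render_metric(name: str, samples: dict[str, float]) -> str:
    """Render per-mesh samples as Prometheus text-format gauge lines."""
    lines = [f"# TYPE {name} gauge"]
    for mesh, value in sorted(samples.items()):
        lines.append(f'{name}{{mesh="{mesh}"}} {value}')
    return "\n".join(lines)

latencies_ms = {"istio": 4.2, "linkerd": 3.1}  # hypothetical measurements
print(render_metric("request_latency_ms", latencies_ms))
```

Emitting this text format is enough for a Prometheus server (or any compatible tooling) to scrape and aggregate the custom measurements.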
Abstract:
In this work, two different protocols have been developed for the synthesis of Nb2O5-SiO2 by a sol-gel route in which supercritical carbon dioxide was used as solvent. The tailored design of the reactor allowed the reactants to come into contact only when supercritical CO2 was present, and the high-throughput scCO2 experimentation unit allowed the screening of synthetic parameters, which led to a Nb2O5 incorporation into the silica matrix of 2.5 wt%. N2 physisorption revealed high surface areas and the presence of meso- and micropores. XRD demonstrated the amorphous character of these materials. SEM-EDX proved the excellent dispersion of Nb2O5 in the silica matrix. These materials were tested in the epoxidation of cyclooctene with hydrogen peroxide, which is considered an environmentally friendly oxidant. The catalysts were virtually inactive in an organic, polar, aprotic solvent (1,4-dioxane). However, the most active scCO2 Nb2O5-SiO2 catalyst achieved a cyclooctene conversion of 44% with a selectivity of 88% towards the epoxide when tested in ethanol. Catalytic tests on cyclohexene revealed the presence of the epoxide, which is remarkable considering that this substrate is easily oxidised to the diol. The behaviour in protic and aprotic solvents is compared to that of TS-1.
Abstract:
Recent years have witnessed an increasing evolution of wireless mobile networks, with intensive research work aimed at developing new efficient techniques for the future 6G standards. In the framework of massive machine-type communication (mMTC), emerging Internet of Things (IoT) applications, in which sensor nodes and smart devices transmit short data packets unpredictably, sporadically and without coordination, are gaining increasing interest. In this work, new medium access control (MAC) protocols for massive IoT, capable of supporting non-instantaneous feedback from the receiver, are studied. These schemes tolerate a long delay for the acknowledgment (ACK) messages to the base station (BS) without a significant performance loss. Then, an error-floor analysis of the considered protocols is performed in order to obtain useful guidelines for the system design. Furthermore, power-domain non-orthogonal multiple access (NOMA) coded random access (CRA) schemes are developed here. The introduction of power diversity makes it possible to resolve more packet collisions at the physical (PHY) layer, with an important reduction of the packet loss rate (PLR) for a given number of active users in the system. The proposed solutions aim to improve the current grant-free protocols while respecting the stringent scalability, reliability and latency constraints requested by 6G networks.
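The benefit of power diversity in a collision can be sketched with a toy successive interference cancellation (SIC) model: the receiver decodes the strongest packet if its signal-to-interference ratio clears a threshold, subtracts it, and repeats. The powers and the threshold below are illustrative assumptions, not values from the thesis.

```python
# Sketch of why power-domain NOMA helps with collisions: with SIC, packets
# received at sufficiently different powers can be decoded one by one,
# whereas equal-power collisions are lost. Powers/threshold are illustrative.

def decode_collision(powers: list[float], sir_threshold: float = 2.0) -> int:
    """Return how many of the colliding packets can be decoded via SIC."""
    remaining = sorted(powers, reverse=True)
    decoded = 0
    while remaining:
        strongest, interference = remaining[0], sum(remaining[1:])
        if interference > 0 and strongest / interference < sir_threshold:
            break  # SIR too low: decoding stops, the collision is unresolved
        decoded += 1
        remaining = remaining[1:]  # cancel the decoded packet's signal
    return decoded

print(decode_collision([4.0, 1.0]))  # power diversity: both packets decoded
print(decode_collision([1.0, 1.0]))  # equal powers: the collision is lost
```

This is the mechanism by which resolving collisions at the PHY layer lowers the packet loss rate for the same number of active users.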
Abstract:
The study of the user scheduling problem in a Low Earth Orbit (LEO) Multi-User MIMO system is the objective of this thesis. With the application of cutting-edge digital beamforming algorithms, a LEO satellite with an antenna array and a large number of antenna elements can provide service to many user terminals (UTs) in full frequency reuse (FFR) schemes. Since the UTs on the ground are many more than the transmit antennas on the satellite, user scheduling is necessary. Scheduling can be accomplished by grouping users into different clusters: users within the same cluster are multiplexed and served together via Space Division Multiple Access (SDMA), i.e., digital beamforming or Multi-User MIMO techniques; the different clusters of users are then served in different time slots via Time Division Multiple Access (TDMA). The design of an optimal user grouping strategy is known to be an NP-complete problem, which can be solved only through exhaustive search. In this thesis, we provide a graph-based user scheduling and feed-space beamforming architecture for the downlink, with the aim of reducing inter-beam interference among users. The main idea is to cluster users whose pairwise great-circle distance is as large as possible. First, we create a graph where the users represent the vertices, and an edge between two users exists if their great-circle distance is above a certain threshold. In the second step, we develop a low-complexity greedy user clustering technique and iteratively search for the maximum clique in the graph, i.e., the largest fully connected subgraph. Finally, using three power normalization techniques, a Minimum Mean Square Error (MMSE) beamforming matrix is deployed on a cluster basis. The suggested scheduling system is compared with a position-based scheduler, which generates a beam lattice on the ground and randomly selects one user per beam to form a cluster.
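The graph construction and a greedy clique search of the kind described above can be sketched as follows. This is a simplified illustration: the coordinates, the distance threshold, and the first-fit greedy rule are assumptions introduced here, not the thesis's exact algorithm.

```python
# Sketch: users are vertices, an edge connects two users whose great-circle
# distance exceeds a threshold, and a clique of mutually distant users forms
# one SDMA cluster served in the same time slot.

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def great_circle_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Haversine great-circle distance between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(h))

def greedy_clique(users: dict[str, tuple[float, float]], threshold_km: float) -> list[str]:
    """Greedily grow a set of users that are pairwise farther apart than the threshold."""
    clique: list[str] = []
    for name, pos in users.items():
        if all(great_circle_km(pos, users[member]) > threshold_km for member in clique):
            clique.append(name)
    return clique

# Four hypothetical user terminals (lat, lon); u1 and u2 are close together.
uts = {"u1": (45.0, 9.0), "u2": (45.1, 9.1), "u3": (41.9, 12.5), "u4": (52.5, 13.4)}
print(greedy_clique(uts, threshold_km=100.0))  # u2 is too close to u1 and is skipped
```

Users left out of the clique (here the one too close to an already-selected user) would be scheduled in a later TDMA slot, which is what keeps co-scheduled users spatially separated and limits inter-beam interference.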