5 results for Resilient infrastructure systems

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance: 30.00%

Abstract:

In the last ten years the number of mobile devices has grown rapidly. Each person typically carries at least two personal devices, and researchers predict that in the near future this number could rise to ten devices per person. Moreover, devices are becoming more integrated into everyday life than in the past, so the amount of data they exchange grows along with people's lifestyles. This is what researchers call the Internet of Things. In the future there will be more than 60 billion nodes, and the current infrastructure is not ready to track all the data exchanged between them. Infrastructure improvements such as Mobile IP and HIP have therefore been proposed in recent years to ease the exchange of packets in mobility, but none of them has been optimized for this purpose. Researchers at Mid Sweden University created the MediaSense Framework. Initially the framework was based on the Chord protocol to route packets in a large network, but its most important change has been the introduction of P-Grid to build the overlay and provide persistence. Thanks to this technology, a lookup in the trie takes up to 0.5*log(N) steps, where N is the total number of nodes in the network. This result could be further improved by optimizing node management, for example by dynamically creating groups of nodes. Moreover, since nodes move, underlying support for connectivity management is needed: SCTP has been selected as one of the most promising upcoming standards for managing multiple simultaneous connections.
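
As a rough illustration of the lookup cost quoted above, the sketch below tabulates 0.5*log(N) for a few network sizes, interpreting the logarithm as base 2 (an assumption, since the abstract does not specify the base). It is illustrative only and not part of the MediaSense code base.

```python
import math

def expected_pgrid_hops(n_nodes: int) -> float:
    """Lookup cost quoted for the P-Grid trie: 0.5 * log2(N) (base-2 assumed)."""
    return 0.5 * math.log2(n_nodes)

# Tabulate the lookup cost for networks of increasing size,
# up to the ~60 billion IoT nodes mentioned in the abstract.
for n in (1_000, 1_000_000, 60_000_000_000):
    print(f"N = {n:>14,d}  ->  ~{expected_pgrid_hops(n):4.1f} hops")
```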

Relevance: 30.00%

Abstract:

Cloud computing enables independent end users and applications to share data and pooled resources, possibly located in geographically distributed data centers, in a fully transparent way. This need is particularly felt by scientific applications that must exploit distributed resources in an efficient and scalable way to process large amounts of data. This paper proposes an open solution to deploy a Platform as a Service (PaaS) over a set of multi-site data centers by applying open-source virtualization tools to facilitate operation among virtual machines while optimizing the usage of distributed resources. An experimental testbed is set up in an OpenStack environment and evaluated with different types of TCP sample connections to demonstrate the functionality of the proposed solution and to obtain throughput measurements in relation to the relevant design parameters.
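
The evaluation above relies on throughput measurements over sample TCP connections between virtual machines. A minimal sketch of such a measurement is shown below; the host address, port, and payload size are hypothetical and do not come from the authors' testbed.

```python
import socket
import time

CHUNK = 64 * 1024            # 64 KiB payload per send (hypothetical)
DURATION = 10.0              # seconds to keep the sample connection open
SERVER = ("10.0.0.2", 5001)  # hypothetical VM address; a TCP sink must be listening there

def measure_tcp_throughput(server=SERVER, duration=DURATION) -> float:
    """Push data over a single TCP connection and return the achieved Mbit/s."""
    payload = b"\x00" * CHUNK
    sent = 0
    with socket.create_connection(server) as sock:
        start = time.monotonic()
        while time.monotonic() - start < duration:
            sock.sendall(payload)
            sent += len(payload)
        elapsed = time.monotonic() - start
    return (sent * 8) / (elapsed * 1_000_000)

if __name__ == "__main__":
    print(f"Throughput: {measure_tcp_throughput():.1f} Mbit/s")
```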

Relevance: 30.00%

Abstract:

LHC experiments produce an enormous amount of data, estimated of the order of a few PetaBytes per year. Data management takes place using the Worldwide LHC Computing Grid (WLCG) grid infrastructure, both for storage and processing operations. However, in recent years, many more resources are available on High Performance Computing (HPC) farms, which generally have many computing nodes with a high number of processors. Large collaborations are working to use these resources in the most efficient way, compatibly with the constraints imposed by computing models (data distributed on the Grid, authentication, software dependencies, etc.). The aim of this thesis project is to develop a software framework that allows users to process a typical data analysis workflow of the ATLAS experiment on HPC systems. The developed analysis framework shall be deployed on the computing resources of the Open Physics Hub project and on the CINECA Marconi100 cluster, in view of the switch-on of the Leonardo supercomputer, foreseen in 2023.
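
As an illustration of how an analysis step might be dispatched to a SLURM-managed cluster such as Marconi100, the sketch below wraps a batch submission in Python. The partition name, wrapper script, and dataset name are hypothetical and are not taken from the thesis framework.

```python
import subprocess
import textwrap

def submit_analysis_job(input_dataset: str) -> str:
    """Submit a hypothetical ATLAS analysis step to SLURM and return the job ID."""
    batch_script = textwrap.dedent(f"""\
        #!/bin/bash
        #SBATCH --job-name=atlas-analysis
        #SBATCH --partition=m100_usr_prod   # hypothetical partition name
        #SBATCH --nodes=1
        #SBATCH --time=02:00:00
        ./run_analysis.sh {input_dataset}   # hypothetical analysis wrapper
        """)
    result = subprocess.run(
        ["sbatch", "--parsable"],   # sbatch reads the script from stdin
        input=batch_script,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()    # --parsable makes sbatch print the job ID

if __name__ == "__main__":
    # Hypothetical dataset identifier, for illustration only.
    print("Submitted job", submit_analysis_job("data18_13TeV.periodB"))
```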

Relevance: 30.00%

Abstract:

Today more than ever, with the recent war in Ukraine and the growing number of attacks that hit national and corporate systems every day, the world realizes that cybersecurity can no longer be considered just a "cost". It must become a pillar of the infrastructures on which the security of nations and the safety of people depend. Critical infrastructures such as energy, financial services, and healthcare have become targets of many cyberattacks by criminal groups with ever more resources and competencies, putting at risk the security and safety of companies and entire nations. This thesis investigates the state of the art of best practices for securing Industrial Control Systems (ICS). We study the differences between two security frameworks. The first is the Industrial Demilitarized Zone (I-DMZ), a perimeter-based security solution. The second is the Zero Trust Architecture (ZTA), which removes the concept of a perimeter and offers an entirely new approach to cybersecurity based on the slogan "never trust, always verify". Starting from this premise, the Zero Trust model embeds strict authentication, authorization, and monitoring controls for any access to any resource. We have defined two architectures, following the state of the art and cybersecurity experts' guidelines, to compare the I-DMZ and Zero Trust approaches to ICS security. The goal is to demonstrate how a Zero Trust approach dramatically reduces the possibility that an attacker penetrates the network or moves laterally to compromise the entire infrastructure. A third architecture has been defined based on cloud and fog/edge computing technology; it shows how cloud solutions can improve the security and reliability of infrastructure and production processes, which can benefit from a range of new functionalities that the cloud offers as a service. We have implemented and tested our Zero Trust solution and its ability to block intrusions or attempted attacks.
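
To make the "never trust, always verify" principle concrete, the sketch below shows a per-request check in the style of a Zero Trust policy enforcement point: every request must be authenticated, come from a trusted device, and match an explicit allow policy. The users, resources, and policy table are hypothetical and do not reflect the thesis implementation.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_trusted: bool   # e.g. device posture has been attested
    mfa_passed: bool       # multi-factor authentication result
    resource: str

# Hypothetical policy table: which users may reach which ICS resources.
POLICY = {
    ("operator_anna", "plc_line_1"): True,
    ("engineer_luca", "historian_db"): True,
}

def authorize(req: AccessRequest) -> bool:
    """Zero Trust style check: no implicit trust; deny unless explicitly allowed."""
    if not (req.mfa_passed and req.device_trusted):
        return False                       # never trust by default
    return POLICY.get((req.user, req.resource), False)

# An authenticated operator on a trusted device reaching an allowed PLC, then a denied resource.
print(authorize(AccessRequest("operator_anna", True, True, "plc_line_1")))   # True
print(authorize(AccessRequest("operator_anna", True, True, "historian_db"))) # False
```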

Relevance: 30.00%

Abstract:

Urbanization has occasionally been linked to negative consequences. The traffic light systems of urban arterial networks play an essential role in the operation of transport systems, and the availability of new Intelligent Transportation System technologies has paved the way for connecting vehicles and road infrastructure. The Green Light Optimal Speed Advisory (GLOSA) is a recent application of vehicle-to-everything (V2X) technology. This thesis emphasizes the potential of GLOSA as a tool for traffic signal optimization: GLOSA advises drivers of the speed they should maintain to reduce waiting time at signals. The study area considered in this thesis is the Via Aurelio Saffi – Via Emilia Ponente corridor in the Metropolitan City of Bologna, which has several signalized intersections. Several simulation runs were performed in the SUMOPy software for each peak-hour period (morning and afternoon) using recent traffic count data, with GLOSA configured over a 300 m advisory distance. In the morning peak hour, GLOSA outperformed the actuated traffic signal control used as the baseline scenario in terms of average waiting time, average speed, average fuel consumption per vehicle, and average CO2 emissions: fuel consumption and CO2 emissions were both reduced by a remarkable 97%, the average speed of vehicles in the simulation increased by 7%, and a time saving of 25% was achieved. Similar results were obtained for the afternoon peak hour, with a 98% decrease in both fuel consumption and CO2 emissions, a 20% decrease in average waiting time, and a 2% increase in average speed. In addition, a 15% and 13% decrease in time loss was obtained during the morning and afternoon peak hours, respectively. Towards the goal of sustainability, GLOSA shows promising results, significantly lowering fuel consumption and CO2 emissions per vehicle.
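
The advisory at the core of GLOSA can be illustrated with a simple calculation: given the distance to the stop line and the time until the next green phase, the system suggests a speed that lets the vehicle arrive as the light turns green without stopping. The sketch below is a hypothetical illustration of that idea (the speed bounds are assumptions), not the SUMOPy simulation used in the thesis.

```python
def glosa_advisory_speed(distance_m: float,
                         time_to_green_s: float,
                         v_min_kmh: float = 20.0,
                         v_max_kmh: float = 50.0) -> float:
    """Suggest a speed (km/h) so the vehicle reaches the signal as it turns green.

    distance_m:      distance to the stop line (e.g. within the 300 m advisory range)
    time_to_green_s: seconds until the signal switches to green
    """
    if time_to_green_s <= 0:
        return v_max_kmh                              # already green: proceed at the limit
    v_kmh = (distance_m / time_to_green_s) * 3.6      # m/s converted to km/h
    return max(v_min_kmh, min(v_max_kmh, v_kmh))      # clamp to a comfortable range

# Example: 300 m from the signal, green in 30 s -> advise 36 km/h.
print(glosa_advisory_speed(300.0, 30.0))
```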