6 results for Elastic traffic
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
This master’s thesis describes the research done at the Medical Technology Laboratory (LTM) of the Rizzoli Orthopedic Institute (IOR, Bologna, Italy) from October 2012 to the present, which focused on the characterization of the elastic properties of trabecular bone tissue. The approach uses computed microtomography to characterize the architecture of trabecular bone specimens. From the information obtained with the scanner, specimen-specific models of trabecular bone are generated and solved with the Finite Element Method (FEM). Alongside the FEM modelling, mechanical tests are performed on the same reconstructed bone portions. From the linear-elastic stage of the mechanical tests, as shown by the experimental results, it is possible to estimate the mechanical properties of the trabecular bone tissue. After a brief introduction to the biomechanics of trabecular bone (chapter 1) and to the characterization of the mechanics of its tissue using FEM models (chapter 2), the reliability analysis of an experimental procedure, based on the highly scalable numerical solver ParFE, is explained (chapter 3). In chapter 4, sensitivity analyses on two different parameters of micro-FEM model reconstruction are presented. Once the reliability of the modelling strategy has been shown, a recent experimental test layout, developed at the LTM, is presented (chapter 5). The results of applying the new layout are then discussed, with emphasis on the difficulties encountered during the tests. Finally, a prototype experimental layout for measuring deformations in trabecular bone specimens is presented (chapter 6). This procedure is based on the Digital Image Correlation method and is currently under development at the LTM.
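The estimate of elastic properties from the linear-elastic stage of a mechanical test amounts to fitting the slope of the stress-strain curve. A minimal sketch of this step, with illustrative data and a hypothetical function name (not from the thesis):

```python
def estimate_modulus(strains, stresses):
    """Least-squares slope of stress vs. strain, i.e. the apparent
    elastic modulus. Assumes all samples already lie in the
    linear-elastic stage of the test."""
    n = len(strains)
    mean_e = sum(strains) / n
    mean_s = sum(stresses) / n
    num = sum((e - mean_e) * (s - mean_s) for e, s in zip(strains, stresses))
    den = sum((e - mean_e) ** 2 for e in strains)
    return num / den

# Toy linear-elastic data: stress = 200 MPa per unit strain
strains = [0.000, 0.001, 0.002, 0.003, 0.004]
stresses = [0.0, 0.2, 0.4, 0.6, 0.8]   # MPa
E = estimate_modulus(strains, stresses)  # ~200 MPa
```

In the thesis the same quantity is obtained at the tissue scale by combining such experimental slopes with specimen-specific micro-FEM solutions rather than from a single curve.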
Abstract:
This thesis fits within the context of ITS systems and aims to build a set of relatively simple yet effective VANET protocols able to detect vehicles approaching a traffic-light installation and to collect the status information that allows the road infrastructure to obtain as accurate an estimate as possible of the current incoming traffic conditions for each of the directions served at that point. Vehicles are gathered into groups as they approach the center of an intersection. Each group consists exclusively of the vehicles travelling along the same road segment, and it fosters communication among its members in order to collect and integrate data on the composition of local traffic. The resulting system attempts to deliver to each traffic-light unit a compact but steady data stream containing statistics on the surrounding environment, so that the units can apply dynamic, intelligent traffic-control policies. The architecture runs inside a simulated urban environment in which the mobility of the network nodes matches real measurements taken on some areas of the city of Bologna. The performance and characteristics of the overall system are analyzed and discussed on the basis of the various tests conducted.
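The core aggregation step, grouping vehicles by the road segment they travel on and condensing each group into the compact statistics sent to the traffic-light unit, can be sketched as follows. Field names and report format are illustrative assumptions, not the thesis protocol:

```python
from collections import defaultdict

# Hypothetical vehicle status reports collected near an intersection:
# (vehicle_id, road_segment, speed_m_s, distance_to_stop_line_m)
reports = [
    ("v1", "north_in", 8.0, 40.0),
    ("v2", "north_in", 6.0, 55.0),
    ("v3", "east_in", 12.0, 30.0),
]

def summarize_by_segment(reports):
    """Group vehicles by road segment and build compact per-direction
    statistics for the traffic-light unit."""
    groups = defaultdict(list)
    for vid, segment, speed, dist in reports:
        groups[segment].append((speed, dist))
    summary = {}
    for segment, members in groups.items():
        speeds = [s for s, _ in members]
        summary[segment] = {
            "count": len(members),                    # vehicles in the group
            "avg_speed": sum(speeds) / len(speeds),   # mean approach speed
            "nearest_m": min(d for _, d in members),  # closest vehicle
        }
    return summary

stats = summarize_by_segment(reports)
# stats["north_in"] -> {'count': 2, 'avg_speed': 7.0, 'nearest_m': 40.0}
```

Keeping the summary per segment matches the grouping rule above: only vehicles on the same road stretch contribute to the statistics for that direction.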
Abstract:
Traffic management is one of the main problems of modern cities, and it poses new challenges in the optimization of vehicular flow. Traffic-light control is one of the fundamental elements for optimizing traffic management. Today, traffic detection is carried out with sensors, the most widely used being magnetic induction loops, whose installation and maintenance entail high costs. In this context, the European project COLOMBO aims to devise new traffic-light control systems able to detect vehicular traffic with sensors that are cheaper to install and maintain, and able, on the basis of those measurements, to self-organize, drawing inspiration from the field of artificial intelligence known as swarm intelligence. COLOMBO's traffic-light self-organization rests on two different levels of policies: macroscopic and microscopic. Macroscopic policies, using pheromone as an abstraction of the current traffic level, choose the control policy according to the amount of pheromone present on the incoming and outgoing lanes. Microscopic policies, in turn, decide the duration of the red and green periods by modifying a sequence of phases, called a chain in COLOMBO. Chains can be chosen by the system according to the current value of the desirability threshold, and each chain has its own desirability threshold. The purpose of this work is to suggest alternative methods to the current computation of this desirability threshold in scenarios with a low density of vehicle-detection devices. Every complex algorithm needs to be optimized to improve its performance.
The proposed algorithms likewise underwent a parameter-tuning process to optimize their performance in scenarios with a low density of vehicle-detection devices. Finally, based on the parameter-tuning work, simulations were run to assess which of the suggested approaches performs best.
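The two policy levels described above can be illustrated with a minimal sketch. The threshold values, policy names, and selection rule are hypothetical stand-ins for COLOMBO's actual logic:

```python
# Hypothetical sketch of pheromone-driven policy selection; thresholds
# and policy names are illustrative, not taken from the COLOMBO project.
def choose_macro_policy(pher_in, pher_out, congestion_threshold=10.0):
    """Macroscopic level: pick a control policy from the pheromone
    levels on the incoming and outgoing lanes (pheromone abstracts
    the current traffic level)."""
    if pher_in > congestion_threshold and pher_out > congestion_threshold:
        return "congested"
    if pher_in > congestion_threshold:
        return "inflow_heavy"
    return "free_flow"

def choose_chain(chains, desirability):
    """Microscopic level: each phase chain carries its own desirability
    threshold; pick the chain with the highest threshold not exceeding
    the current desirability value."""
    eligible = [(t, name) for name, t in chains.items() if t <= desirability]
    return max(eligible)[1] if eligible else None

chains = {"short_green": 2.0, "long_green": 5.0, "balanced": 3.5}
policy = choose_macro_policy(pher_in=12.0, pher_out=4.0)  # "inflow_heavy"
chain = choose_chain(chains, desirability=4.0)            # "balanced"
```

The work summarized above concerns precisely the `desirability` input: how to compute it when few detection devices are available.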
Abstract:
Faxaflói bay is a short, wide and shallow bay situated in the southwest of Iceland. Despite hosting a rather high level of marine traffic, this area is inhabited by many different species of cetaceans, among which the white-beaked dolphin (Lagenorhynchus albirostris), found here all year round. This study aimed to evaluate the potential effect of increasing marine traffic on white-beaked dolphin distribution and behaviour, and to determine whether sighting frequencies varied over the years (2008–2014). Sighting, behavioural, and photographic data were collected daily with the help of the whale-watching company “Elding” operating in the bay. The results have confirmed the importance of this area for white-beaked dolphins, which showed a certain level of site fidelity. Despite the high level of marine traffic, this dolphin appears to tolerate the presence of boats: no differences in encounter durations and locations occurred over the study years, although avoidance strategies increased with the number of vessels. Furthermore, seasonal differences in sighting probabilities with respect to the time of day were found, suggesting the existence of a daily cycle in the dolphins' movements and activities within the bay. This study has also documented a major decline in sighting rates over the years, raising concern about the conservation status of the white-beaked dolphin in Icelandic waters. A new dedicated survey is therefore highly recommended in order to update the population estimate, to better investigate the energetic costs that chronic exposure to disturbance may cause, and to plan a more suitable conservation strategy for the white-beaked dolphin around Iceland.
Abstract:
Nowadays, data handling and data analysis in High Energy Physics require a vast amount of computational power and storage. In particular, the Worldwide LHC Computing Grid (LCG), an infrastructure and pool of services developed and deployed by a large community of physicists and computer scientists, proved to be a game changer in the efficiency of data analysis during Run-I at the LHC, playing a crucial role in the Higgs boson discovery. Recently, the Cloud computing paradigm has been emerging and gaining considerable adoption among scientific organizations and beyond. Clouds allow access to large, externally owned computing resources shared among many scientific communities. Considering the challenging requirements of LHC physics in Run-II and beyond, the LHC computing community is interested in exploring Clouds and seeing whether they can provide a complementary approach - or even a valid alternative - to the existing technological solutions based on the Grid. Within the LHC community, several experiments have been adopting Cloud approaches, and in particular the experience of the CMS experiment is of relevance to this thesis. The LHC Run-II has just started, and Cloud-based solutions are already in production for CMS. However, other approaches to Cloud usage are being explored and are at the prototype level, such as the work done in this thesis. This effort is of paramount importance to equip CMS with the capability to elastically and flexibly access and utilize the computing resources needed to face the challenges of Run-III and Run-IV. The main purpose of this thesis is to present forefront Cloud approaches that allow the CMS experiment to extend its computing to on-demand resources dynamically allocated as needed. Moreover, direct access to Cloud resources is presented as a use case suited to the needs of the CMS experiment.
Chapter 1 presents an overview of High Energy Physics at the LHC and of the CMS experience in Run-I, as well as the preparation for Run-II. Chapter 2 describes the current CMS Computing Model, and Chapter 3 presents the Cloud approaches pursued and used within the CMS Collaboration. Chapter 4 and Chapter 5 discuss the original and forefront work done in this thesis to develop and test working prototypes of elastic extensions of CMS computing resources on Clouds, and of HEP Computing “as a Service”. The impact of such work on benchmark CMS physics use cases is also demonstrated.
Abstract:
Internet traffic classification is a relevant and mature research field, yet one of growing importance and with still open technical challenges, also due to the pervasive presence of Internet-connected devices in everyday life. We argue the need for innovative traffic classification solutions that are lightweight, adopt a domain-based approach, and go beyond application-level protocol categorization to classify Internet traffic by subject. To this end, this paper proposes a novel classification solution that leverages domain name information extracted from IPFIX summaries, DNS logs, and DHCP leases, and that can be applied to any kind of traffic. Our proposed solution is based on an extension of Word2vec unsupervised learning techniques running on a specialized Apache Spark cluster. In particular, the learning techniques are leveraged to generate word embeddings from a mixed dataset composed of domain names and natural-language corpora in a lightweight way and with general applicability. The paper also reports lessons learnt from our implementation and deployment experience, which demonstrates that our solution can process 5500 IPFIX summaries per second on an Apache Spark cluster with 1 slave instance in Amazon EC2 at a cost of $3860 per year. Reported experimental results on Precision, Recall, F-Measure, Accuracy, and Cohen's Kappa show the feasibility and effectiveness of the proposal. The experiments prove that the words contained in domain names do have a relation with the kind of traffic directed towards them; therefore, using specifically trained word embeddings, we are able to classify domains into customizable categories. We also show that training word embeddings on larger natural-language corpora leads to precision improvements of up to 180%.
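The claimed relation between the words inside a domain name and the traffic category can be illustrated with a toy sketch: tokenize the domain, average the word embeddings, and assign the category whose centroid is closest by cosine similarity. The two-dimensional vectors, category names, and tokenizer below are illustrative stand-ins for the paper's Word2vec embeddings and pipeline:

```python
import math

# Toy word vectors standing in for Word2vec embeddings trained on
# domain names plus natural-language corpora (values are made up).
EMBEDDINGS = {
    "news":  [0.9, 0.1],
    "daily": [0.8, 0.2],
    "game":  [0.1, 0.9],
    "play":  [0.2, 0.8],
}
CATEGORIES = {"news_media": ["news", "daily"], "gaming": ["game", "play"]}

def tokenize_domain(domain):
    """Naive tokenizer: split on dots and hyphens, drop the TLD,
    keep only words with a known embedding."""
    parts = domain.lower().replace("-", ".").split(".")
    return [p for p in parts[:-1] if p in EMBEDDINGS]

def cosine(u, v):
    return sum(a * b for a, b in zip(u, v)) / (math.hypot(*u) * math.hypot(*v))

def centroid(words):
    vecs = [EMBEDDINGS[w] for w in words]
    return [sum(c) / len(vecs) for c in zip(*vecs)]

def classify(domain):
    """Assign the category whose embedding centroid is most similar
    (by cosine) to the centroid of the domain's words."""
    words = tokenize_domain(domain)
    if not words:
        return None
    dvec = centroid(words)
    return max(CATEGORIES, key=lambda c: cosine(dvec, centroid(CATEGORIES[c])))

classify("daily-news.example")  # -> "news_media"
```

In the actual system the embeddings come from training on IPFIX/DNS/DHCP-derived domain names plus natural-language corpora, and the categories are customizable rather than fixed.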