14 results for open system
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
Reinforced concrete columns may fail because of buckling of the longitudinal reinforcing bars when exposed to earthquake motions. Depending on the hoop stiffness and the length-over-diameter ratio, the instability can be local (between two subsequent hoops) or global (the buckling length comprises several hoop spacings). To gain insight into the topic, an extensive literature review of 19 existing models has been carried out, covering different approaches and assumptions which yield different results. A finite element fiber analysis was carried out to study the local buckling behavior with varying length-over-diameter and initial imperfection-over-diameter ratios. The comparison of the analytical results with some experimental results shows good agreement until the post-buckling behavior undergoes large deformations. Furthermore, different global buckling analysis cases were run considering the influence of different parameters; for certain hoop stiffnesses and length-over-diameter ratios local buckling was encountered. A parametric study yields a nondimensional critical stress as a function of a stiffness ratio characterized by the reinforcement configuration.
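As a point of reference for the local-buckling case, the classical elastic Euler estimate for a bar segment restrained between two hoops can be written in terms of the length-over-diameter ratio. The sketch below is a generic illustration with assumed values (effective length factor, elastic modulus, bar size); it is not one of the 19 models reviewed in the thesis and ignores inelastic effects:

    import math

    def euler_critical_stress(E, L, d, k=1.0):
        """Elastic Euler buckling stress for a circular bar of diameter d
        buckling over a length L between two hoops (k = effective length factor).

        sigma_cr = pi^2 * E / (k*L / r)^2, with radius of gyration r = d/4
        for a solid circular cross section. Inelastic (tangent-modulus) effects
        considered by several of the reviewed models are not included here.
        """
        r = d / 4.0                     # radius of gyration of a solid circle
        slenderness = k * L / r
        return math.pi**2 * E / slenderness**2

    # Example with assumed values: E = 200 GPa steel bar, d = 16 mm, hoop spacing L = 8*d
    E = 200e3          # MPa
    d = 16.0           # mm
    L = 8 * d          # mm, assumed hoop spacing
    print(euler_critical_stress(E, L, d))  # elastic critical stress in MPa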
Abstract:
The work is divided into three macro-areas. The first concerns a theoretical analysis of how intrusions work, which software tools are used to carry them out, and how to protect against them (using the devices generically known as firewalls). The second macro-area analyses an intrusion carried out from the outside against sensitive servers of a LAN. This analysis is conducted on the files captured by the two network interfaces configured in promiscuous mode on a probe placed inside the LAN. There are two interfaces so that the probe can attach to two LAN segments with two different subnet masks. The attack is analysed with several software tools. A third part of the work can indeed be identified: the part where the files captured by the two interfaces are analysed, first with tools that handle full-content data, such as Wireshark, then with tools that handle session data, processed with Argus, and finally statistical data, processed with Ntop. The second-to-last chapter, the one before the conclusions, covers the installation of Nagios and its configuration for monitoring, through plugins, the remaining disk space on a remote agent machine and the MySQL and DNS services. Naturally, Nagios can be configured to monitor any kind of service offered on the network.
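For context, Nagios plugins are ordinary executables that report their result through exit codes (0 = OK, 1 = WARNING, 2 = CRITICAL). The following is a minimal, hypothetical sketch of a disk-space check in Python, not the plugin actually configured in the thesis; the monitored path and thresholds are assumptions:

    #!/usr/bin/env python3
    """Minimal Nagios-style check: remaining disk space on a given path."""
    import shutil
    import sys

    def check_disk(path="/", warn_pct=20.0, crit_pct=10.0):
        usage = shutil.disk_usage(path)
        free_pct = 100.0 * usage.free / usage.total
        if free_pct < crit_pct:
            print(f"CRITICAL - {free_pct:.1f}% free on {path}")
            return 2
        if free_pct < warn_pct:
            print(f"WARNING - {free_pct:.1f}% free on {path}")
            return 1
        print(f"OK - {free_pct:.1f}% free on {path}")
        return 0

    if __name__ == "__main__":
        sys.exit(check_disk())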
Abstract:
The work for the present thesis started in California, during my semester as an exchange student overseas. California is known worldwide for its seismicity and its effort in the earthquake engineering research field. For this reason, I immediately found interesting the proposal of the Structural Dynamics professor, Maria Q. Feng, to work on a pushover analysis of the existing Jamboree Road Overcrossing bridge. Concrete is a popular building material in California, and for the most part it serves its functions well. However, concrete is inherently brittle and performs poorly during earthquakes if not reinforced properly. The San Fernando Earthquake of 1971 dramatically demonstrated this characteristic. Shortly thereafter, code writers revised the design provisions for new concrete buildings so as to provide adequate ductility to resist strong ground shaking. There remain, nonetheless, millions of square feet of non-ductile concrete buildings in California. The purpose of this work is to perform a pushover analysis and compare the results with those of a nonlinear time-history analysis of an existing bridge located in Southern California. The analyses have been executed with the software OpenSees, the Open System for Earthquake Engineering Simulation. The Jamboree Road Overcrossing (JRO) is classified as a Standard Ordinary Bridge: a typical three-span continuous cast-in-place prestressed post-tensioned box girder. The total length of the bridge is 366 ft, and the heights of the two bents are respectively 26.41 ft and 28.41 ft. Both the pushover analysis and the nonlinear time-history analysis require a model that accounts for the nonlinearities of the system: in order to execute nonlinear analyses of highway bridges it is essential to incorporate an accurate model of the material behavior. It has been observed that, after destructive earthquakes, the columns are among the most damaged elements of highway bridges. To evaluate the performance of bridge columns during seismic events, an adequate column model must be incorporated. Part of the work of the present thesis is, in fact, dedicated to the modeling of the bents. Different types of nonlinear elements have been studied and modeled, with emphasis on the determination and location of the plasticity zone length. Furthermore, different models for the concrete and steel materials have been considered, and the parameters that define the constitutive laws of the different materials have been selected with care. The work is structured into four chapters; a brief overview of the content follows. The first chapter introduces the concepts related to capacity design, as the current philosophy of seismic design. Furthermore, nonlinear analyses, both static (pushover) and dynamic (time-history), are presented. The final paragraph concludes with a short description of how to determine the seismic demand at a specific site, according to the latest design criteria in California. The second chapter deals with the formulation of force-based finite elements and the issues regarding the objectivity of the response in the nonlinear field. Both concentrated- and distributed-plasticity elements are discussed in detail. The third chapter presents the existing structure, the OpenSees software, and the modeling assumptions and issues. The creation of the nonlinear model represents a central part of this work.
Nonlinear material constitutive laws, for concrete and reinforcing steel, are discussed in detail, as well as the different scenarios employed in the column modeling. Finally, the results of the pushover analysis are presented in chapter four. Capacity curves are examined for the different model scenarios used, and the failure modes of concrete and steel are discussed. The capacity curve is converted into a capacity spectrum and intersected with the design spectrum. In the last paragraph, the results of the nonlinear time-history analyses are compared to those of the pushover analysis.
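To illustrate the kind of model the abstract describes, here is a minimal sketch of a fiber-section cantilever column with a force-based (distributed-plasticity) element and a displacement-controlled pushover, written with the OpenSeesPy interface. All dimensions, material parameters, and the Concrete01/Steel02 choices are illustrative assumptions, not the JRO model used in the thesis:

    # Minimal fiber-section cantilever pushover with OpenSeesPy (illustrative only).
    import openseespy.opensees as ops

    ops.wipe()
    ops.model('basic', '-ndm', 2, '-ndf', 3)

    H = 300.0                               # column height (in), assumed
    ops.node(1, 0.0, 0.0)
    ops.node(2, 0.0, H)
    ops.fix(1, 1, 1, 1)                     # fixed base

    # Uniaxial materials: concrete and reinforcing steel (assumed values, kip/in units)
    ops.uniaxialMaterial('Concrete01', 1, -5.0, -0.002, -1.0, -0.006)
    ops.uniaxialMaterial('Steel02', 2, 68.0, 29000.0, 0.01, 18.0, 0.925, 0.15)

    # Circular fiber section: concrete patch plus a ring of rebar fibers
    D = 48.0                                # section diameter (in), assumed
    ops.section('Fiber', 1)
    ops.patch('circ', 1, 16, 8, 0.0, 0.0, 0.0, D / 2.0, 0.0, 360.0)
    ops.layer('circ', 2, 20, 1.0, 0.0, 0.0, D / 2.0 - 3.0)

    # Force-based element with Gauss-Lobatto integration (distributed plasticity)
    ops.geomTransf('Linear', 1)
    ops.beamIntegration('Lobatto', 1, 1, 5)
    ops.element('forceBeamColumn', 1, 1, 2, 1, 1)

    # Lateral load pattern and displacement-controlled pushover
    ops.timeSeries('Linear', 1)
    ops.pattern('Plain', 1, 1)
    ops.load(2, 1.0, 0.0, 0.0)

    ops.system('BandGeneral')
    ops.numberer('RCM')
    ops.constraints('Plain')
    ops.test('NormDispIncr', 1.0e-6, 25)
    ops.algorithm('Newton')
    ops.integrator('DisplacementControl', 2, 1, 0.05)   # 0.05 in per step
    ops.analysis('Static')

    for _ in range(200):
        if ops.analyze(1) != 0:
            break
        print(ops.nodeDisp(2, 1), ops.getLoadFactor(1))  # capacity curve points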
Abstract:
Given the need to improve the seismic performance of buildings, particularly precast ones, this thesis studies the behavior of a particular type of connection between a precast column and its foundation plinth, proposed and used by the company APE of Montecchio Emilia. As is well known, the assembly of precast elements raises the problem of how the joints are connected, which conditions the static behavior and the seismic response of the structural assembly. To study the behavior of the connection in question, cyclic combined compression-bending tests were carried out on two specimens. In addition, numerical models were developed with the aim of simulating the real behavior. The software OpenSees (the Open System for Earthquake Engineering Simulation), created for the seismic simulation of structures, was used.
Abstract:
Our generation of computational scientists is living in an exciting time: not only do we get to pioneer important algorithms and computations, we also get to set standards on how computational research should be conducted and published. From Euclid's reasoning and Galileo's experiments, it took hundreds of years for the theoretical and experimental branches of science to develop standards for publication and peer review. Computational science, rightly regarded as the third branch, can walk the same road much faster. The success and credibility of science are anchored in the willingness of scientists to expose their ideas and results to independent testing and replication by other scientists. This requires the complete and open exchange of data, procedures and materials. The idea of "replication by other scientists" in reference to computations is more commonly known as "reproducible research". In this context the journal "EAI Endorsed Transactions on Performance & Modeling, Simulation, Experimentation and Complex Systems" had the exciting and original idea of allowing scientists to submit, together with the article, the computational materials (software, data, etc.) that were used to produce its contents. The goal of this procedure is to allow the scientific community to verify the content of the paper, reproducing it on the platform independently of the chosen OS, to confirm or invalidate it, and especially to allow its reuse to produce new results. This procedure is of little help, however, without a minimum of methodological support: the raw data sets and the software are difficult to exploit without the logic that guided their use or their production. This led us to think that, in addition to the data sets and the software, an additional element must be provided: the workflow that ties all of them together.
Abstract:
The focus of this work is on recommender systems and their characteristics. The use of these mechanisms is increasingly widespread on the web, with a parallel development of ever more accurate and efficient solutions. Among all the existing approaches, the one implemented in Apache Mahout was chosen for examination. This open source library implements collaborative filtering, basing the recommendation process on the preferences expressed by users about different items. Thanks to Apache Mahout and to the basic principles of the various types of recommendation, it was possible to build a web application that produces recommendations in the field of scientific publications, selecting the articles that are most similar to those published by the current user. The realization of this project led to the definition of a hybrid system. In fact, the Apache Mahout approach to recommendation is not fully adaptable to this situation, so its components were extended and tailored to the case study. The aim was therefore to combine collaborative filtering and content-based filtering in a single approach. Of Apache Mahout, the algorithm used to examine the data set was kept, while the aspect related to user preferences was left aside entirely, since users do not express ratings on the articles. Of the content-based approach, the idea of comparing the titles of the publications was used. The evaluation of this application revealed several limitations, but also possible future developments that could improve the quality of the recommendations and, above all, the performance. Thanks, for example, to Apache Hadoop it would be possible to perform distributed computation, allowing thousands of data items to be processed with more than acceptable results.
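The content-based side of such a hybrid (comparing publication titles) can be illustrated with a small, self-contained sketch. This is not Apache Mahout code (Mahout is a Java library); it is a hypothetical Python illustration of title-similarity ranking using TF-IDF and cosine similarity, with made-up titles:

    # Hypothetical content-based step: rank candidate papers by title similarity.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    user_titles = [
        "Collaborative filtering for scientific publications",
    ]
    candidate_titles = [
        "A survey of collaborative filtering techniques",
        "Deep learning for image classification",
        "Hybrid recommender systems for digital libraries",
    ]

    # Fit TF-IDF on all titles so user and candidate vectors share a vocabulary.
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(user_titles + candidate_titles)
    user_vecs, cand_vecs = matrix[: len(user_titles)], matrix[len(user_titles):]

    # Each candidate is scored by its best similarity to any of the user's titles.
    scores = cosine_similarity(cand_vecs, user_vecs).max(axis=1)
    ranked = sorted(zip(candidate_titles, scores), key=lambda t: -t[1])
    for title, score in ranked:
        print(f"{score:.3f}  {title}")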
Abstract:
In recent years, systems engineering has become one of the major research domains. The complexity of systems has increased constantly, and nowadays Cyber-Physical Systems (CPS) are a category of particular interest: these are systems composed of a cyber part (computer-based algorithms) that monitors and controls some physical processes. Their development and simulation are both complex because of the importance of the interaction between the cyber and the physical entities: there are many models, written in different languages, that need to exchange information with each other. Normally an orchestrator is used to take care of the simulation of the models and the exchange of information. This orchestrator is developed manually, which is a tedious and long task. Our proposal is to generate the orchestrator automatically through the use of co-modeling, i.e. by modeling the coordination. Before achieving this ultimate goal, it is important to understand the mechanisms and de facto standards that could be used in a co-modeling framework. So, I studied the use of a technology employed for co-simulation in industry: FMI. In order to better understand the FMI standard, I implemented an automatic export, in the FMI format, of the models created in an existing software for discrete modeling: TimeSquare. I also developed a simple physical model in the existing open source OpenModelica tool. Later, I started to understand how an orchestrator works by developing a simple one: this will be useful in the future to generate an orchestrator automatically.
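The core job of such an orchestrator (the co-simulation "master") is to step every model by a common communication step and exchange the connected inputs and outputs in between. The sketch below is a deliberately simplified, hypothetical fixed-step master loop in Python; the Model interface and the two toy models are assumptions for illustration, not the FMI API or the thesis code:

    # Toy fixed-step co-simulation master: step each model, then exchange signals.
    class Model:
        """Minimal co-simulation interface assumed for this sketch."""
        def do_step(self, t, dt): ...
        def get(self, port): ...
        def set(self, port, value): ...

    class Controller(Model):                 # "cyber" part: simple on/off law
        def __init__(self): self.command, self.measure = 0.0, 0.0
        def do_step(self, t, dt): self.command = 1.0 if self.measure < 20.0 else 0.0
        def get(self, port): return self.command
        def set(self, port, value): self.measure = value

    class Plant(Model):                      # "physical" part: first-order dynamics
        def __init__(self): self.x, self.u = 15.0, 0.0
        def do_step(self, t, dt): self.x += dt * (5.0 * self.u - 0.1 * self.x)
        def get(self, port): return self.x
        def set(self, port, value): self.u = value

    def orchestrate(models, connections, t_end, dt):
        """connections: list of (src_model, src_port, dst_model, dst_port)."""
        t = 0.0
        while t < t_end:
            for m in models:                          # advance every model by dt
                m.do_step(t, dt)
            for src, sp, dst, dp in connections:      # then exchange the signals
                dst.set(dp, src.get(sp))
            t += dt

    ctrl, plant = Controller(), Plant()
    orchestrate([ctrl, plant],
                [(ctrl, "command", plant, "u"), (plant, "x", ctrl, "measure")],
                t_end=10.0, dt=0.1)
    print(plant.x)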
Abstract:
The purpose of this thesis is to analyse the spatial and temporal variability of the aragonite saturation state (ΩAR), commonly used as an indicator of ocean acidification, in the North-East Atlantic. When the aragonite saturation state decreases below a certain threshold, ΩAR < 1, calcifying organisms (e.g. molluscs, pteropods, foraminifera, crabs) are subject to dissolution of their shells and aragonite structures. This objective agrees with the challenge 'Ocean, climate change and acidification' of the EU COST Ocean Governance for Sustainability project, which aims to combine the information collected on the state of health of the oceans. Two open-source data products, EMODnet and GLODAPv2, have been integrated and analysed for the first time in the North-East Atlantic region. The integrated dataset contains 1038 ΩAR vertical profiles whose time distribution spans from 1970 to 2014. ΩAR has been computed with the CO2SYS software considering different combinations of input parameters, pH, Total Alkalinity (TAlk) and Dissolved Inorganic Carbon (DIC), associated with Temperature, Salinity and Pressure at in situ conditions. A sensitivity analysis has been performed to better understand the consistency of ΩAR computed from the different combinations of pH, TAlk and DIC, and to verify the difference between the observed TAlk and DIC parameters and their output values from the CO2SYS tool. Maps of ΩAR have been computed with the best data coverage obtained from the two datasets, at different depth levels in the area of investigation, and they have been compared to the work of Jiang et al. (2015). The results are consistent and show similar horizontal and vertical patterns. The study highlights some aragonite-undersaturated values (ΩAR < 1) below 500 meters depth, suggesting a potential effect of acidification in the considered time period. This thesis aims to be a preliminary work for future studies that will be able to describe the ΩAR variability on a decadal scale based on the extended time series acquired in this work.
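For reference, the saturation state itself is defined as ΩAR = [Ca2+][CO3 2-] / K'sp(aragonite), where CO2SYS supplies the carbonate ion concentration and the stoichiometric solubility product at in situ temperature, salinity and pressure. The helper below only encodes this definition together with the usual salinity proportionality for calcium; the inputs in the example (carbonate concentration and K'sp) are made-up values, not numbers computed here:

    def omega_aragonite(co3_mol_kg, ksp_aragonite, salinity):
        """Aragonite saturation state: Omega = [Ca2+][CO3 2-] / K'sp.

        co3_mol_kg    : carbonate ion concentration, mol/kg (e.g. from CO2SYS)
        ksp_aragonite : stoichiometric solubility product at in situ T, S, P
        salinity      : practical salinity, used for the conservative Ca estimate
        """
        # Calcium is nearly conservative with salinity: [Ca2+] ~ 0.01028 * S / 35 mol/kg
        ca_mol_kg = 0.01028 * salinity / 35.0
        return ca_mol_kg * co3_mol_kg / ksp_aragonite

    # Example with made-up inputs: a value below 1 indicates undersaturation.
    print(omega_aragonite(co3_mol_kg=6.0e-5, ksp_aragonite=6.8e-7, salinity=35.0))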
Abstract:
This work covers the design and construction of a database, using a Database Management System (DBMS) coupled with a GIS interface, with the goal of obtaining an overview of the mobility system of the Municipality of Bologna. The software used is QGIS and PostgreSQL, both open source. The phase of collecting and selecting the files to be loaded into the database, concerning mobility, transport and road infrastructure, most of which come from the offices of the Municipality of Bologna, is described in detail. The construction of the database is then shown, and queries are run on the loaded data to obtain new information describing urban mobility and accessibility. The analysis of the thesis focuses on the assessment of accessibility by sustainable modes, namely public transport and non-motorized modes (cycling and walking). In this way a tool is created to identify strengths and critical issues of the mobility system.
Abstract:
This thesis aims to illustrate the construction of a mathematical model of a hydraulic system, oriented to the design of a model predictive control (MPC) algorithm. The modeling procedure starts with the basic formulation of a piston-servovalve system. The latter is a complex nonlinear system with some unknown and unmeasurable effects that pose a challenging problem for the modeling procedure. A first approximation of the system parameters is obtained from datasheet information, workbench tests provided by the company, and other company data. Then, to validate and refine the model, open-loop simulations have been run to match the data against the characteristics obtained from real acquisitions. The final set of ODEs captures all the main peculiarities of the system, except for some characteristics due to highly varying and unknown hydraulic effects, such as the unmodeled resistive elements of the pipes. After an accurate analysis, since the model presents many internal complexities, a simplified version is presented. The latter is used to correctly linearize and discretize the nonlinear model. Based on this, an MPC algorithm for reference tracking with linear constraints is implemented. The results obtained show the potential of MPC in this kind of industrial application, namely high-quality tracking performance while satisfying state and input constraints. The increased robustness and flexibility with respect to the standard control techniques adopted for these systems, such as PID controllers, are evident. The simulations for model validation and for the controlled system have been carried out in a Python environment.
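As a generic illustration of the control scheme described (linear MPC for reference tracking with linear constraints, solved in Python), here is a minimal sketch using cvxpy on a placeholder discretized model. The matrices, horizon, weights and bounds are all assumptions, not the identified hydraulic model of the thesis:

    # Minimal linear MPC for reference tracking with input/state bounds (illustrative).
    import numpy as np
    import cvxpy as cp

    # Placeholder discrete-time model x[k+1] = A x[k] + B u[k]; assumed values.
    A = np.array([[1.0, 0.1], [0.0, 0.9]])
    B = np.array([[0.0], [0.1]])
    n, m = A.shape[0], B.shape[1]

    N = 20                       # prediction horizon
    x0 = np.array([0.0, 0.0])    # current state
    x_ref = np.array([1.0, 0.0]) # position reference
    Q = np.diag([10.0, 1.0])     # state tracking weight
    R = 0.01                     # input weight
    u_max, x2_max = 5.0, 2.0     # input bound and a state (velocity) bound

    x = cp.Variable((n, N + 1))
    u = cp.Variable((m, N))

    cost, constraints = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k + 1] - x_ref, Q) + R * cp.sum_squares(u[:, k])
        constraints += [
            x[:, k + 1] == A @ x[:, k] + B @ u[:, k],   # model dynamics
            cp.abs(u[:, k]) <= u_max,                   # actuator limit
            cp.abs(x[1, k + 1]) <= x2_max,              # state constraint
        ]

    cp.Problem(cp.Minimize(cost), constraints).solve()
    print("first control move:", u.value[:, 0])   # applied in receding-horizon fashion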
Abstract:
This thesis investigates whether the emotional states of users interacting with a virtual robot can be recognized reliably, and whether a specific interaction strategy can change the users' emotional state and affect their risk decisions. For this investigation, the OpenFace [1] emotion recognition model was intended to be integrated into the Flobi [2] system, to allow the agent to be aware of the current emotional state of the user and to react appropriately. An open source ROS [3] bridge was available online to integrate OpenFace with the Flobi simulation, but it was not compatible with some other projects in the Flobi distribution; for these technical reasons, DeepFace was selected instead. In a human-agent interaction, the system is compared to a system that does not use emotion recognition. Evaluation can happen at different levels: evaluation of the emotion recognition model, evaluation of the interaction strategy, and evaluation of the effect of the interaction on the user's decision. The results showed that happy emotion induction was successful in 58% of cases and fear emotion induction in 77%. The risk decision results show that, after happy induction, 16.6% of participants switched to a lower-risk decision, 75% did not change their decision, and the remaining participants switched to a higher-risk decision. Among fear-induced participants, 33.3% decreased their risk and 66.6% did not change their decision. The emotion recognition accuracy was and had bias to. The sensitivity and specificity were calculated for each emotion class. The emotion recognition model classifies happy emotions as neutral most of the time.
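To show the kind of call such an integration wraps, here is a minimal sketch using the deepface Python package to extract the dominant emotion from a single frame. The image path is a placeholder and the ROS/Flobi plumbing is omitted; this is not the thesis's integration code:

    # Minimal emotion classification of a single frame with the deepface package.
    from deepface import DeepFace

    # Placeholder image path; in the interactive system this would be a camera frame.
    result = DeepFace.analyze(img_path="frame.jpg", actions=["emotion"])

    # Recent deepface versions return a list of per-face dicts; older ones a single dict.
    face = result[0] if isinstance(result, list) else result
    print(face["dominant_emotion"])   # e.g. "happy", "fear", "neutral"
    print(face["emotion"])            # per-class confidence scores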
Abstract:
In this thesis the design of a pressure regulation system for space propulsion engines (electric and cold gas) has been performed. The Bang-Bang Control (BBC) method has been implemented through the open/close command of a solenoid valve, and the mass flow rate of the propellant has been fixed with suitable flow restrictors. First, a comparison between mechanical and electronic (BBC-based) pressure regulators was carried out, which showed enough advantages to select the latter type. The major advantage is the possibility of obtaining a variable outlet pressure, with a variable inlet pressure, through a simple remote command, whereas in mechanical pressure regulators the ratio between inlet and outlet pressures must be set mechanically. Different pressure control schemes have been analyzed, varying the number of solenoid valves, flow restrictors and plenums. For each scheme the valve switching frequencies were evaluated with simplified mathematical models and with simulators implemented in Python; the results obtained from the two methods matched quite well. In all the schemes it was possible to observe how frequency and duty cycle vary as different parameters change. These results, after experimental checks, can be used to design the control system for a given total number of cycles that a specific solenoid valve can guarantee. Finally, tests were performed and it was possible to verify the soundness of the control system. Moreover, from the tests it was possible to derive some indications for optimizing the operation of the simulator.
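The bang-bang logic itself reduces to switching the valve on a pressure band around the setpoint. The following is a toy Python simulation of that idea on a lumped plenum model; the plenum dynamics, setpoint, deadband and fill/bleed rates are invented numbers, not the schemes analyzed in the thesis:

    # Toy bang-bang pressure regulation of a plenum (illustrative numbers only).
    def simulate_bang_bang(p0=1.0, p_set=2.0, band=0.05, dt=0.001, t_end=5.0):
        p, t, valve_open, switches = p0, 0.0, False, 0
        history = []
        while t < t_end:
            # Hysteresis band around the setpoint limits the switching frequency.
            if p < p_set - band and not valve_open:
                valve_open, switches = True, switches + 1
            elif p > p_set + band and valve_open:
                valve_open, switches = False, switches + 1
            # Lumped plenum: inflow through the restrictor when open, constant outflow.
            fill_rate = 3.0 if valve_open else 0.0      # bar/s, assumed
            bleed_rate = 0.8                            # bar/s, assumed constant draw
            p += dt * (fill_rate - bleed_rate)
            history.append((t, p, valve_open))
            t += dt
        return history, switches

    history, switches = simulate_bang_bang()
    duration = history[-1][0]
    print(f"valve switchings: {switches}, mean cycle frequency ~ {switches / (2 * duration):.2f} Hz")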
Abstract:
Intrusion detection, in the context of Network Security Monitoring practices, is the process through which, by collecting and analysing data produced by one or more sources of various kinds (e.g. copies of network traffic, copies of application/service logs, etc.), security events are identified, correlated and analysed with the objective of detecting potential compromise attempts, in order to protect the technological assets within a given network infrastructure. This process is the product of a combination of hardware, software and the human factor. The hardest task falls specifically to the latter, namely keeping pace with a constantly growing and extremely dynamic reality: cybercrime. It is up to the analyst to filter and analyse the collected information and then contextualize it within the environment to be protected, with the ultimate goal of enriching and refining the detection logic implemented on the systems in use. It is necessary to understand that the maintenance and updating of these systems is an activity that follows the evolution of technologies and attack strategies. Carrying it out effectively and efficiently is of primary importance to allow analysts to focus their resources on the investigation of security events and on the research and updating of detection logic, minimizing the repetitive, time-consuming and potentially automatable activities. The objective of this thesis is to present a possible approach to the automated and centralized management of intrusion detection systems, paying particular attention to the IDS technologies available in the open source landscape, and to compare the aspects of scalability and customization that arise when such management is extended to heterogeneous and distributed network infrastructures.
Abstract:
Since 2010, the proton radius has become one of the most interesting values to determine. The first evidence of an incomplete understanding of its internal structure was the measurement of the Lamb shift in muonic hydrogen, which led to a value 7σ lower than expected. A new road was thus opened and the Proton Radius Puzzle era began. The FAMU experiment is a project that tries to answer this puzzle by implementing a high-precision experimental apparatus. The work of this thesis is based on the study, construction and first characterization of a new detection system. Building on previous experiments and simulations, this apparatus is composed of 17 detectors positioned on a semicircular crown, with the related electronic circuit. The characterization of the detectors is based on a LabVIEW program controlling a digital potentiometer and on two further analog potentiometers, all three used to set the amplitude of each detector to a predefined value, around 1.2 V, read on the oscilloscope with which the signal is observed. This is the requirement for obtaining, in the final measurement, a single high peak given by the sum of all the signals coming from the detectors. Each signal has been acquired for about half an hour, but the whole circuit has been kept active for longer to verify its ability to operate over extended periods. The main results of this thesis are the spectra of 12 detectors and the corresponding values of voltage, FWHM and resolution. The acquisitions also show another expected behavior: the strong dependence of the detectors on temperature, demonstrating that a temperature change causes fluctuations in the signal. In turn, these fluctuations affect the spectrum, resulting in a shift of the curve and a lower resolution. On the other hand, a measurement performed in stable conditions leads to agreement between the nominal and experimental values, as for detectors 10, 11 and 12 of our system.