16 results for Simulation and modeling applications
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
The work presented in this thesis aims to contribute to innovation in the Urban Air Mobility and Delivery sector and represents a solid starting point for air logistics and its future scenarios. The dissertation focuses on the modeling, simulation, and control of a formation of multirotor aircraft for cooperative load transportation, with particular attention to environmental sustainability. First, a simulation and test environment is developed to assess technologies for suspended-load stabilization. Starting from the mathematical model of two identical multirotors, formation-keeping and collision-avoidance algorithms are analyzed. This approach guarantees the safety both of the vehicles within the formation and of the payload, which in the very near future may even consist of people. A mathematical model of the suspended load is then implemented, together with an active controller for its stabilization. The key focus of this part is the analysis and control of the payload's oscillatory motion, investigated in detail through the decay of the load's kinetic energy. Finally, several test cases are introduced to determine which strategy is the most effective and safe for future applications in air logistics.
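As a rough illustration of the payload-oscillation analysis described above, the following Python sketch integrates a planar damped-pendulum model of a slung load and tracks the decay of its kinetic energy. The model, the control term and all parameter values are illustrative assumptions and do not reproduce the multirotor-formation model developed in the thesis.

```python
import numpy as np

# Minimal sketch (not the thesis model): planar slung load treated as a damped
# pendulum hanging below the formation's geometric centre. The swing angle
# theta obeys theta'' = -(g/L) sin(theta) - (c/(m L^2)) theta', where the
# damping term c stands in for an active stabilisation action. All parameter
# values below are illustrative assumptions.
g, L, m, c = 9.81, 2.0, 5.0, 4.0     # gravity, cable length, load mass, control gain
dt, T = 0.001, 20.0                  # time step and simulated horizon [s]

theta, omega = np.radians(25.0), 0.0  # initial swing angle and angular rate
kinetic_energy = []
for _ in range(int(T / dt)):
    alpha = -(g / L) * np.sin(theta) - (c / (m * L**2)) * omega
    omega += alpha * dt
    theta += omega * dt
    kinetic_energy.append(0.5 * m * (L * omega) ** 2)

# Kinetic-energy decay: compare the residual swing energy to its peak value.
print(f"residual KE fraction after {T:.0f} s: "
      f"{kinetic_energy[-1] / max(kinetic_energy):.3e}")
```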
Abstract:
This thesis presents the study and simulation of two advanced sensorless speed-control techniques for a surface PMSM. The aim is to implement a sensorless control algorithm for a submarine auxiliary propulsion system. This experimental activity is the result of a project collaboration with L3Harris Calzoni, a leading company in A&D systems for naval handling in the military field. A Simulink model of the whole electric drive was developed. Given the satisfactory simulation results, the sensorless control system was implemented in C code for the STM32 environment. Finally, several tests were carried out on a real brushless machine coupled to a mechanical load that reproduced the real scenario of the final application. All experimental results were recorded through graphical interface software developed at Calzoni.
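The abstract does not name the two techniques studied; purely as an illustration of the family of methods involved, a minimal back-EMF-based speed and position estimator might look like the Python sketch below. The machine parameters and the estimator itself are assumptions, not the Calzoni implementation, which ran as C code on an STM32.

```python
import numpy as np

# Minimal sketch of one generic sensorless idea (back-EMF estimation in the
# alpha-beta frame). All machine parameters are illustrative assumptions.
R, L, psi_pm = 0.5, 1.2e-3, 0.05   # stator resistance [ohm], inductance [H], PM flux [Wb]
Ts = 1e-4                          # control period [s]

def backemf_speed_estimate(v_ab, i_ab, i_ab_prev):
    """Estimate rotor electrical speed and angle from measured voltages/currents."""
    di = (i_ab - i_ab_prev) / Ts
    e = v_ab - R * i_ab - L * di             # back-EMF vector [e_alpha, e_beta]
    omega_e = np.hypot(e[0], e[1]) / psi_pm  # |e| = omega_e * psi_pm
    theta_e = np.arctan2(-e[0], e[1])        # rotor angle, assuming positive rotation
    return omega_e, theta_e

# Example call with dummy samples (in a real drive these come from the ADCs):
w, th = backemf_speed_estimate(np.array([10.0, 2.0]),
                               np.array([3.0, 0.5]),
                               np.array([2.9, 0.45]))
```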
Abstract:
The thesis considers three aspects. First, it describes how knot theory has developed over the years in relation to the various scientific discoveries of the time, which immediately gives an idea of how closely this theory is connected to several others. The second chapter deals with the more formal aspects of the theory: the notions of equivalent knots and of knot invariants are introduced, and several invariants are defined, from the most elementary ones (Reidemeister moves, the crossing number and tricolorability) up to the polynomial invariants, including the Alexander, Jones and Kauffman polynomials. Finally, some applications of knot theory in chemistry, physics and biology are explained. In chemistry, molecular chirality is defined and it is shown how the chirality of knots can help determine it. In physics, the relation between the Yang-Baxter equation and knots is presented. In conclusion, it is shown how an important biological process, DNA recombination, can be modeled using knot theory.
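As a concrete example of the most elementary invariant mentioned above, the following Python sketch brute-forces a tricolorability check on a standard three-arc trefoil diagram; the encoding of the diagram is a common textbook one and is not taken from the thesis.

```python
from itertools import product

# Minimal sketch: brute-force tricolorability test. A diagram is encoded as a
# list of crossings (over_arc, under_arc_1, under_arc_2); at every crossing the
# three incident arcs must be all equal or all distinct, which is equivalent to
# 2*c(over) == c(under1) + c(under2) (mod 3).
def tricolorable(n_arcs, crossings):
    for colours in product(range(3), repeat=n_arcs):
        if len(set(colours)) < 2:
            continue  # at least two colours must actually be used
        if all((2 * colours[o] - colours[a] - colours[b]) % 3 == 0
               for o, a, b in crossings):
            return True
    return False

# Standard 3-arc, 3-crossing trefoil diagram (illustrative encoding).
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
print(tricolorable(3, trefoil))   # True: the trefoil is tricolorable, hence knotted
```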
Abstract:
The phenomenon of diffuse scattering has been the subject of numerous studies in recent years, thanks to its relevance to electromagnetic propagation as well as to many other fields of application (remote sensing, optics, physics, etc.), but a complete understanding of this effect is still far from being reached. The complexity of studying and characterizing diffuse scattering stems from the myriad of cases and effects that can be encountered in a real propagation environment, which suggests the need to treat its contribution probabilistically. Hence the need for applications that are efficient from an engineering point of view and that combine a rigorous definition of the phenomenon with the simplifications required for practical purposes. In this view, diffuse scattering can be described as the superposition of all those effects that deviate from the classical laws of geometrical optics (reflection, refraction and diffraction) and that generate field contributions even at points in space and in directions where, in theory, for smooth and homogeneous objects, there should be none. In a real propagation environment, the main effect is therefore a spatial distribution of the field different from the theoretical case of a smooth, homogeneous surface, together with depolarization effects and a redistribution of energy in the power balance. The complexity of the phenomenon is thus evident, and the objective of this work is to propose new results that allow diffuse scattering to be described more accurately, and to identify the topics on which to focus attention in future work. A bibliographic study was first carried out to identify the existing models and theories and the points deserving further investigation; at the same time, methodologies for characterizing the complex electric permittivity of materials were analyzed, in order to assess the possibility of deriving the parameters to be used in the simulations with the same measurement setup designed for the study of scattering. A simulation setup was then built with an electromagnetic solver (based on the finite-difference time-domain method), which made it possible to analyze the three-dimensional scattering caused by the irregularities of the material. Finally, a measurement campaign was conducted in an anechoic chamber with an ad-hoc experimental bench to characterize the scattering phenomenon over a wide band.
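Purely as an orientation on the quantities involved, the classical Rayleigh roughness criterion and the Gaussian attenuation factor of the coherent specular component can be evaluated as in the Python sketch below; the frequency and roughness values are illustrative assumptions, not measurement results from the thesis.

```python
import numpy as np

# Minimal sketch, not a model from the thesis: the classical Rayleigh criterion
# and the Gaussian roughness attenuation of the coherent (specular) reflection,
# rho_s = exp(-8 * (pi * sigma_h * cos(theta_i) / lambda)^2).
c = 3e8
f = 3.8e9                      # carrier frequency [Hz] (assumed)
lam = c / f
sigma_h = 2e-3                 # surface height standard deviation [m] (assumed)
theta_i = np.radians(45.0)     # incidence angle measured from the surface normal

rayleigh_limit = lam / (8 * np.cos(theta_i))          # surface "smooth" below this
rho_s = np.exp(-8 * (np.pi * sigma_h * np.cos(theta_i) / lam) ** 2)

print(f"Rayleigh smoothness limit: {rayleigh_limit*1e3:.2f} mm")
print(f"specular attenuation factor rho_s: {rho_s:.3f}")
```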
Abstract:
This is a Research B project for the University of Bologna, carried out within the Laurea Magistrale in Civil Engineering at UNIBO. The main purpose of this research is to promote another way of explaining, analyzing and presenting certain civil engineering topics to students worldwide, through theory, modeling and photographs. The basic idea is divided into three steps. The first is to present and analyze the theoretical parts: a detailed analysis of the theory, combined with theorems, explanations, examples and exercises, covers this step. In the second, a model clarifies everything discussed in the theory by showing how structures work or fail; the models reproduce, at scale, the behavior of many elements used in real structures. After these two steps, an exhibition of commented photographs from the real world gives engineers the chance to observe this theoretical and laboratory material in many different cases. For example, many civil engineers know about wind pressure on structures, but many of them have never seen the extraordinary behavior of the Tacoma Narrows Bridge 'dancing with the air'. What I have produced is not a book, but an investigation of how this three-step presentation of some mechanical concepts could be helpful. This research is something different and new, and in my opinion it is valuable because it helps students go deeper into the subject and also offers new ideas and inspiration. This way of teaching can be used in all courses, especially technical ones, and I hope that one day textbooks will adopt this kind of presentation.
Abstract:
Globalization has increased the pressure on organizations and companies to operate in the most efficient and economical way. This tendency pushes companies to concentrate more and more on their core business and to outsource less profitable departments and services in order to reduce costs. In contrast to earlier times, companies are highly specialized and have a low real net output ratio. To provide consumers with the right products, these companies have to collaborate with other suppliers and form large supply chains. A drawback of large supply chains is high stocks and stockholding costs, which has led to the rapid spread of Just-in-Time logistics concepts aimed at minimizing stock while maintaining high product availability. These competing goals, minimal stock and high product availability, call for high availability of the production systems, so that an incoming order can be processed immediately. Besides design aspects and the quality of the production system, maintenance has a strong impact on production system availability. In recent decades there have been many attempts to create maintenance models for availability optimization. Most of them concentrated on the availability aspect only, without incorporating further aspects such as logistics and the profitability of the overall system. However, the main intention of a production system operator is to optimize the profitability of the production system, not its availability. Classic models, limited to representing and optimizing maintenance strategies in the light of availability, therefore fall short; a novel approach incorporating all financially relevant processes of and around a production system is needed. The proposed model is subdivided into three parts: a maintenance module, a production module and a connection module. This subdivision provides easy maintainability and simple extensibility. Within these modules, all aspects of the production process are modeled. The main part of the work lies in the extended maintenance and failure module, which represents different maintenance strategies and also incorporates the effects of over-maintaining and of failed maintenance (maintenance-induced failures). Order release and seizing of the production system are modeled in the production part. Due to limitations in computational power, it was not possible to run the simulation and the optimization with the fully developed production model, so the production model was reduced to a black box with a lower level of detail.
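To make the contrast between availability-oriented and profitability-oriented evaluation concrete, here is a deliberately simplified Monte-Carlo sketch in Python comparing the profit rate of a single machine under a corrective-only policy and under a periodic preventive policy; all rates, costs and the perfect-maintenance assumption are illustrative and are not part of the thesis model.

```python
import random

# Minimal sketch (not the thesis model): profit rate of one machine with
# exponential failures, under corrective-only vs. periodic preventive maintenance.
MTBF, REPAIR, PM_INTERVAL, PM_DURATION = 120.0, 10.0, 80.0, 2.0   # hours
REVENUE_PER_H, REPAIR_COST, PM_COST = 100.0, 1500.0, 300.0        # money units

def profit_rate(horizon=100_000.0, preventive=True, seed=1):
    random.seed(seed)
    t, profit = 0.0, 0.0
    while t < horizon:
        time_to_failure = random.expovariate(1.0 / MTBF)
        if preventive and time_to_failure > PM_INTERVAL:
            # Machine survives until the scheduled preventive stop.
            profit += PM_INTERVAL * REVENUE_PER_H - PM_COST
            t += PM_INTERVAL + PM_DURATION
        else:
            # Machine runs until it fails, then undergoes corrective repair.
            profit += time_to_failure * REVENUE_PER_H - REPAIR_COST
            t += time_to_failure + REPAIR
    return profit / t

print("corrective only   :", round(profit_rate(preventive=False), 1), "per hour")
print("with preventive PM:", round(profit_rate(preventive=True), 1), "per hour")
```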
Abstract:
A microfluidic Organ-on-Chip device was developed for monitoring an epithelial cell monolayer. An equivalent circuit model was used to determine the electrical properties of the monolayer from its impedance spectra. Platinum black was electrochemically deposited onto the platinum electrodes to reduce the influence of the electrical double layer on the impedance measurements. Impedance measurements with an impedance analyzer were carried out to validate the equivalent circuit model and the reduction of the double-layer effect. A lock-in amplifier was designed to measure the impedance.
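The abstract does not specify the circuit topology; as an illustration of what such an equivalent-circuit description typically looks like, the Python sketch below evaluates the impedance spectrum of a commonly used electrode/monolayer circuit (a constant-phase element per electrode for the double layer, in series with the medium resistance and with the monolayer resistance in parallel with the cell-layer capacitance). All component values are assumptions.

```python
import numpy as np

# Minimal sketch of one commonly used equivalent circuit for a cell monolayer
# between two electrodes. Component values are illustrative, not fitted chip data.
R_med, R_teer, C_cell = 1.0e3, 5.0e3, 1.0e-6     # ohm, ohm, farad
Q_dl, n_dl = 1.0e-6, 0.9                         # CPE parameters of one electrode

f = np.logspace(1, 6, 200)                       # 10 Hz .. 1 MHz
w = 2 * np.pi * f
Z_cpe = 1.0 / (Q_dl * (1j * w) ** n_dl)          # electrode double layer (CPE)
Z_layer = R_teer / (1 + 1j * w * R_teer * C_cell)
Z = 2 * Z_cpe + R_med + Z_layer                  # two electrodes in series

# |Z| at low frequency is dominated by the double layer, which platinum black
# deposition is meant to suppress (it raises Q_dl, shrinking Z_cpe).
print(f"|Z| at 10 Hz: {abs(Z[0]):.0f} ohm, at 1 MHz: {abs(Z[-1]):.0f} ohm")
```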
Abstract:
Euclidean geometry is often inadequate for describing the shapes of nature. Fractals, broken and irregular objects as the name itself suggests, are better suited to representing the jagged shape of coastlines and other natural features. The tools needed to study fractals rigorously are the theorems concerning the Hausdorff measure, with which s-sets can be defined, where s is the Hausdorff dimension. If s is not an integer, the set in question can be regarded as a fractal and has neither tangents nor density at almost any point. The most classical fractals, such as the Cantor, Koch and Sierpinski sets, also enjoy the self-similarity property, and their similarity dimension coincides with the Hausdorff dimension. A technique based on the fractal dimension, called box-counting, is used in biomedical applications and proves useful for studying the senile plaques of various mammalian species, including humans, and also for distinguishing a malignant melanoma from other skin lesions.
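As a small worked example of the box-counting technique mentioned at the end, the following Python sketch estimates the dimension of a Sierpinski triangle generated with the chaos game; the test set and the numerical choices are illustrative and are not the biomedical data discussed in the thesis.

```python
import numpy as np

# Minimal sketch of the box-counting estimate: count the number N(eps) of grid
# boxes of side eps that contain points of the set, then fit the slope of
# log N(eps) versus log(1/eps). The point cloud is a Sierpinski triangle,
# chosen because its dimension is known exactly (log 3 / log 2 ≈ 1.585).
rng = np.random.default_rng(0)
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
p, pts = np.array([0.1, 0.1]), []
for _ in range(200_000):
    p = (p + vertices[rng.integers(3)]) / 2   # chaos-game iteration
    pts.append(p)
pts = np.array(pts)

sizes = [2 ** -k for k in range(2, 8)]
counts = [len(set(map(tuple, np.floor(pts / eps).astype(int)))) for eps in sizes]
slope = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)[0]
print(f"box-counting dimension ≈ {slope:.3f}  (exact: {np.log(3)/np.log(2):.3f})")
```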
Abstract:
Global climate change in recent decades has strongly influenced the Arctic, generating pronounced warming accompanied by a significant reduction of sea ice in seasonally ice-covered seas and a dramatic increase of open water regions exposed to wind [Stephenson et al., 2011]. By strongly scattering the wave energy, thick multiyear ice prevents swell from penetrating deeply into the Arctic pack ice. However, with the recent changes affecting Arctic sea ice, waves gain more energy from the extended fetch and can therefore penetrate further into the pack ice. Arctic sea ice also becomes weaker during the melt season, extending the transition zone between thick multi-year ice and the open ocean. This region is called the Marginal Ice Zone (MIZ). In the Arctic, the MIZ is mainly encountered in the marginal seas, such as the Nordic Seas, the Barents Sea, the Beaufort Sea and the Labrador Sea. Formed by numerous blocks of sea ice of various diameters (floes), the MIZ, under certain conditions, allows maritime transportation, stimulating dreams of industrial and touristic exploitation of these regions and possibly allowing, in the near future, a maritime connection between the Atlantic and the Pacific. With the increasing human presence in the Arctic, waves pose security and safety issues. As marginal seas are targeted for oil and gas exploitation, understanding and predicting ocean waves and their effects on sea ice become crucial for structure design and for the real-time safety of operations. The juxtaposition of waves and sea ice represents a risk for personnel and equipment deployed on ice, and may complicate critical operations such as platform evacuations. The risk is difficult to evaluate because there are no long-term observations of waves in ice, swell events are difficult to predict from local conditions, ice breakup can occur on very short time scales, and wave-ice interactions are beyond the scope of current forecasting models [Liu and Mollo-Christensen, 1988; Marko, 2003]. In this thesis, a newly developed Waves in Ice Model (WIM) [Williams et al., 2013a; Williams et al., 2013b] and its related Ocean and Sea Ice Model (OSIM) are used to study the MIZ and the improvements of wave modeling in ice-infested waters. The work has been conducted in collaboration with the Nansen Environmental and Remote Sensing Center and within the SWARP project, which aims to extend operational services supporting human activity in the Arctic by including forecasts of waves in ice-covered seas, forecasts of sea ice in the presence of waves, and remote sensing of both wave and sea ice conditions. The WIM will be included in the downstream forecasting services provided by the Copernicus marine environment monitoring service.
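As a first-order illustration of wave attenuation in ice-infested waters (not the WIM itself), wave energy entering the ice cover is often described by an exponential decay with penetration distance, as in the Python sketch below; the attenuation coefficient is an assumed value, not one from Williams et al. (2013a, 2013b).

```python
import numpy as np

# Minimal sketch: exponential attenuation of wave energy inside the ice cover,
# E(x) = E0 * exp(-alpha * x), so the significant wave height decays as
# Hs(x) = Hs0 * exp(-alpha * x / 2). Parameter values are illustrative.
alpha = 2e-5          # energy attenuation coefficient [1/m] (assumed)
Hs0 = 4.0             # incident significant wave height [m]
x = np.array([0, 10e3, 50e3, 100e3, 200e3])   # penetration distance [m]

Hs = Hs0 * np.exp(-alpha * x / 2)
for xi, h in zip(x, Hs):
    print(f"{xi/1e3:6.0f} km into the ice: Hs ≈ {h:.2f} m")
```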
Abstract:
In modern society, the security issues of IT systems are intertwined with interdisciplinary aspects, from social life to sustainability, and threats endanger many facets of everyone's daily life. To address the problem, it is important that the systems we use guarantee a certain degree of security; to achieve this, however, it must be possible to measure the amount of security. Measuring security is not an easy task, but many initiatives, including European regulations, aim to make it possible. One method of measuring security is based on security metrics: these assess, from various viewpoints, vulnerabilities, methods of defense, risks and impacts of successful attacks, as well as the efficacy of reactions, giving precise results through mathematical and statistical techniques. A literature review was carried out to provide an overview of the meaning, effects, problems, applications and overall current state of security metrics, with particular emphasis on practical examples. The thesis starts with a summary of the state of the art in the field of security metrics and of application examples, to outline the gaps in the current literature and the difficulties that arise when the application context changes; it then advances research questions aimed at fostering the discussion towards a more complete and applicable view of the subject. Finally, it stresses the lack of security metrics that consider interdisciplinary aspects, offering some potential starting points for developing security metrics that cover all the aspects involved and take the field to a new level of formal soundness and practical usability.
Abstract:
Rising concerns about the scarcity of non-renewable resources have sparked curiosity about new frontiers in polymer science. Biopolymer is a general term describing different kinds of polymers that are linked to the biological world through monomer derivation, end-of-life degradation, or both. The present work studies one example of each type. Polyhydroxybutyrate (P3HB) is a biodegradable, microbially produced polymer with great potential as a substitute for polyolefins such as polypropylene; however, its highly crystalline nature and stereoregular structure make it difficult to process. The P3HB-Mono project takes advantage of polarized Raman spectroscopy to see how the annealing of chains of different molecular weights influences the crystallinity and molecular structure of the polymer, which is eventually reflected in its mechanical properties. The technique is also well suited to observing how the mesophase, a chain conformation distinct from the crystalline and amorphous phases, develops in the polymer structure and changes with temperature and with the mechanical stress applied to the fiber. Polycaprolactone (PCL), on the other hand, is a biodegradable fossil-based polymer with biocompatibility and bioresorbability features. As a consequence, this material is very appealing for the medical industry and can be used for several applications in this field. One interesting option is to produce long, narrow liquid-filled fibers for drug delivery inside the human body, using a traditional technique in an innovative way. The BioLiCoF project investigates the feasibility of producing liquid-filled fibers by melt spinning and examines the role that the melt-spinning parameters and the liquids used as the core solution have on the final fiber. The physical analysis of the fibers is also interpreted, and ideas for future development of the trials are suggested.
Abstract:
Synthetic Biology is a relatively new discipline, born at the beginning of the new millennium, that brings the typical engineering approach (abstraction, modularity and standardization) to biotechnology. These principles aim to tame the extreme complexity of the various components and to aid the construction of artificial biological systems with specific functions, usually by means of synthetic genetic circuits implemented in bacteria or in simple eukaryotes like yeast. The cell becomes a programmable machine whose low-level programming language is made of strings of DNA. This work was performed in collaboration with researchers of the Department of Electrical Engineering of the University of Washington in Seattle and with a student of the Corso di Laurea Magistrale in Ingegneria Biomedica at the University of Bologna, Marilisa Cortesi. During the collaboration I contributed to a Synthetic Biology project already under way in the Klavins Laboratory: in particular, I modeled and then simulated a synthetic genetic circuit conceived to implement a multicelled behavior in a growing bacterial microcolony. The first chapter introduces the foundations of molecular biology (the structure of nucleic acids, transcription, translation and the regulation of gene expression) and closes with an introduction to Synthetic Biology. The second chapter describes the synthetic genetic circuit conceived to make two different groups of cells, termed leaders and followers, emerge spontaneously from an isogenic microcolony of bacteria. The circuit exploits the intrinsic stochasticity of gene expression and intercellular communication via small molecules to break the symmetry in the phenotype of the microcolony. The four modules of the circuit (coin flipper, sender, receiver and follower) and their interactions are then illustrated. The third chapter derives the mathematical representation of the various components of the circuit and makes the simplifying assumptions explicit: transcription and translation are modeled as a single step, and gene expression is a function of the intracellular concentration of the transcription factors that act on the different promoters of the circuit. A list of the parameters, with a justification of their values, closes the chapter. The fourth chapter describes the main characteristics of the gro simulation environment, developed by the Self Organizing Systems Laboratory of the University of Washington, and then details a sensitivity analysis performed to pinpoint the desirable characteristics of the various genetic components. The sensitivity analysis uses a cost function based on the fraction of cells in each of the possible states at the end of the simulation and on the desired outcome. Thanks to a particular kind of scatter plot, the parameters are ranked; starting from an initial condition in which all parameters assume their nominal value, the ranking suggests which parameter to tune in order to reach the goal. Obtaining a microcolony in which almost all the cells are followers and only a few are leaders appears to be the most difficult task: a small number of leader cells struggle to produce enough signal to turn the rest of the microcolony into the follower state, and a follower majority can only be obtained by increasing the production of signal as much as possible. Reaching the goal of a microcolony split in half between leaders and followers is comparatively easy; the best strategy seems to be a slight increase in the production of the enzyme. To end up with a majority of leaders, instead, it is advisable to increase the basal expression of the coin flipper module. At the end of the chapter, a possible future application of the leader election circuit, the spontaneous formation of spatial patterns in a microcolony, is modeled with the finite state machine formalism. The gro simulations provide insights into the genetic components needed to implement this behavior: since both examples of pattern formation rely on a local version of leader election, a short-range communication system is essential, and new synthetic components that can reliably downregulate the growth rate of specific cells without side effects need to be developed. The appendix lists the gro code used to simulate the model of the circuit, a Python script used to split the simulations across a Linux cluster, and the Matlab code developed to analyze the data.
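As an illustration of the kind of cost function described above, the Python sketch below compares the fractions of cells found in each state at the end of a simulation with a desired split; the state names and the target fractions are assumptions, not the exact definition used with gro in the thesis.

```python
# Minimal sketch of a state-fraction cost function: sum of squared deviations
# between the observed fraction of cells per state and the desired fraction.
def cost(final_states, target):
    """final_states: list of per-cell states; target: desired fraction per state."""
    n = len(final_states)
    observed = {s: final_states.count(s) / n for s in target}
    return sum((observed[s] - target[s]) ** 2 for s in target)

# Example: aiming for a colony split half-and-half between leaders and followers.
simulated = ["follower"] * 430 + ["leader"] * 520 + ["undecided"] * 50
print(cost(simulated, {"leader": 0.5, "follower": 0.5, "undecided": 0.0}))
```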
Abstract:
Laser shock peening (LSP) is a technique similar to shot peening that imparts compressive residual stresses in materials to improve fatigue resistance. The ability to use a high-energy laser pulse to generate shock waves, inducing a compressive residual stress field in metallic materials, has applications in multiple fields such as turbo-machinery, airframe structures and medical appliances. The transient nature of the LSP phenomenon and the high rate of the laser-driven dynamics make real-time in-situ measurement of the laser/material interaction very challenging. For this reason, and because of the high cost of experimental tests, reliable analytical methods for predicting the detailed effects of LSP are needed to understand the potential of the process. The aim of this work was the prediction of the residual stress field after laser peening by means of finite element modeling. The work was carried out in the Stress Methods department of Airbus Operations GmbH (Hamburg) and includes an investigation of the compressive residual stresses induced by laser shock peening, a mesh sensitivity study, optimization and tuning of the model through physical and numerical parameters, and validation of the model against experimental results. The model was built with the Abaqus/Explicit commercial software, starting from considerations drawn from previous works. FE analyses are mesh sensitive: by increasing the number of elements and decreasing their size, the software can capture even the fine details of the real phenomenon; however, such details can also be mere numerical amplifications of it, so it was necessary to optimize the size and number of the mesh elements. A new model was created with a finer mesh in the through-thickness direction, since this is the direction most involved in the process deformations. The resulting increase in the total number of elements was compensated by an in-plane size reduction of the elements far from the peened area, in order to avoid excessive computational costs. The efficiency and stability of the analyses were improved by using bulk viscosity coefficients, a purely numerical parameter available in Abaqus/Explicit. A plastic rate sensitivity study was also carried out and a new set of Johnson-Cook model coefficients was chosen. These investigations led to a more controllable and reliable model, valid even for more complex geometries. Moreover, the study of the material properties highlighted a shortcoming of the model in the simulation of the surface conditions, which was addressed by modeling the ablative layer employed during the real process. In the real process the ablative layer is a very thin sheet of pure aluminium stuck on the workpiece; in the simulation it was simply reproduced as a 100 µm layer made of a material with a yield point of 10 MPa. All these new settings were applied to a set of analyses with different geometry models to verify the robustness of the model. The calibration of the model against the experimental results was based on stress and displacement measurements carried out both on the surface and in depth. The good correlation between the simulation and the experimental test results proved the model to be reliable.
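For reference, a hedged sketch of the Johnson-Cook flow stress that such FE models typically use is given below in Python; the coefficients are generic aluminium-alloy-like values chosen only for illustration and are not the recalibrated set identified in the thesis.

```python
import math

# Minimal sketch of the Johnson-Cook flow stress:
# sigma = (A + B*eps_p**n) * (1 + C*ln(rate/rate0)) * (1 - T_star**m),
# with T_star = (T - T_room) / (T_melt - T_room). Coefficients are illustrative.
def johnson_cook_stress(eps_p, rate, T,
                        A=350e6, B=440e6, n=0.42, C=0.015, m=1.0,
                        rate0=1.0, T_room=293.0, T_melt=893.0):
    T_star = min(max((T - T_room) / (T_melt - T_room), 0.0), 1.0)
    return (A + B * eps_p ** n) * (1 + C * math.log(max(rate / rate0, 1e-12))) \
           * (1 - T_star ** m)

# Example: 2% plastic strain at the very high strain rates typical of LSP (~1e6 1/s).
print(f"{johnson_cook_stress(0.02, 1e6, 300.0) / 1e6:.0f} MPa")
```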