8 results for Many-to-many-assignment problem
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
This work is devoted to the physical analysis and modelling of the atmospheric boundary layer under stable conditions. The main objective is to improve the turbulence parameterization schemes currently used by large-scale meteorological models. These turbulence parameterization schemes express the Reynolds stresses as functions of the mean fields (horizontal velocity components and potential temperature) by means of closures. Most closures were developed for quasi-neutral cases, and the difficulty lies in treating the effect of stability rigorously. We study in detail two different turbulence closure models for the stable boundary layer, based on different assumptions: a TKE-l scheme (Mellor-Yamada, 1982), which is used in the BOLAM (Bologna Limited Area Model) forecast model, and a scheme recently developed by Mauritsen et al. (2007). The closure assumptions of the two schemes are analysed against experimental data from the Cabauw tower in the Netherlands and from the CIBA site in Spain. These turbulence parameterization schemes are then embedded in a single-column model of the atmospheric boundary layer, in order to test their predictions free of external influences. The comparison between the different schemes is carried out on a case well documented in the literature, "GABLS1". To confirm the validity of the predictions, a three-dimensional dataset is created by simulating the same GABLS1 case with a Large Eddy Simulation; ARPS (Advanced Regional Prediction System) was used for this purpose. The stable stratification constrains the grid spacing, since the LES must run at a resolution high enough for the typical vertical scales of motion to be properly resolved. The comparison of this three-dimensional dataset with the predictions of the turbulence schemes makes it possible to propose a set of new closures intended to improve the BOLAM turbulence model. The work was carried out at ISAC-CNR in Bologna and at LEGI in Grenoble.
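For orientation, a standard TKE-l closure of the Mellor-Yamada type expresses the Reynolds stresses in downgradient form; this is the textbook form of the scheme family, not a formula quoted from the thesis:

$$\overline{u'w'} = -K_m \,\frac{\partial U}{\partial z}, \qquad \overline{w'\theta'} = -K_h \,\frac{\partial \Theta}{\partial z}, \qquad K_{m,h} = l\,\sqrt{e}\;S_{m,h},$$

where $e$ is the turbulent kinetic energy, $l$ a mixing length, and $S_{m,h}$ stability functions. The effect of stable stratification enters through $l$ and $S_{m,h}$, which is precisely where closures of this family differ from one another.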
Abstract:
Canned tuna is one of the most widespread and recognizable fish commodities in the world. Across all oceans, 80% of total tuna catches are taken by the purse seine fishery, whose target species in tropical waters are yellowfin (Thunnus albacares), bigeye (Thunnus obesus) and skipjack (Katsuwonus pelamis). Although this fishing gear is claimed to be very selective, there are high levels of by-catch, especially when operating under Fish Aggregating Devices (FADs). The main problem is the underestimation of by-catch data. To address this problem, the scientific community has developed many dedicated programs (e.g. the Observe Program) that place observers on board to collect data on both target species and by-catch. The purposes of this study are to estimate the quantity and composition of target species and by-catch in the tuna purse seine fishery operating in tropical waters, and to highlight possible seasonal variability in the by-catch ratio (tunas versus by-catch). Data were collected within the French scientific program "Observe" on board the French tuna purse seiner "Via Avenir" during a fishing trip in the Gulf of Guinea (C-E Atlantic) from August to September 2012. Furthermore, some by-catch specimens were sampled to obtain more information about size-class composition. To achieve these purposes we shared our data with the French Research Institute for Development (IRD), which holds data collected by observers on board in the same study area. Yellowfin tuna turns out to be the main species caught in all trips considered (around 71% of the total catches), especially on free swimming school (FSC) sets, whereas skipjack tuna is the main species caught under FADs. Different by-catch percentages are observed for the two fishing modes: the by-catch incidence is higher on FAD sets (96.5% of total by-catch) than on FSC sets (3.5%), and the main by-catch category is little tuna (73%). When pooling data for both fishing modes used in the purse seine fishery, the overall by-catch/catch ratio is 5%, a lower level than in other fishing gears such as long-lining and trawling.
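As a minimal illustration of how a pooled by-catch/catch ratio of the kind reported above can be computed from set-level observer records, here is a short Python sketch; the records, figures, and the convention of dividing by-catch by the total catch are made-up placeholders, not the observed data:

```python
# Each record is one purse seine set logged by an observer:
# fishing mode, tuna catch and by-catch, both in tonnes (placeholder values).
sets = [
    {"mode": "FAD", "tuna": 40.0, "bycatch": 3.1},
    {"mode": "FAD", "tuna": 55.0, "bycatch": 2.4},
    {"mode": "FSC", "tuna": 70.0, "bycatch": 0.2},
    {"mode": "FSC", "tuna": 65.0, "bycatch": 0.1},
]

def bycatch_ratio(records):
    """Pooled by-catch/catch ratio: by-catch over total catch."""
    tuna = sum(r["tuna"] for r in records)
    byc = sum(r["bycatch"] for r in records)
    return byc / (tuna + byc)

for mode in ("FAD", "FSC"):
    subset = [r for r in sets if r["mode"] == mode]
    print(mode, f"{bycatch_ratio(subset):.1%}")
print("pooled", f"{bycatch_ratio(sets):.1%}")
```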
Abstract:
All structures designed by engineers are vulnerable to natural disasters, including floods and earthquakes. The energy released during strong ground motions must be dissipated by structural elements. Before the 1990s, this energy was expected to be dissipated through the beams and columns, which at the same time were part of the gravity-load-resisting system. The main disadvantage of this approach, however, was that the gravity-resisting frame was not repairable. Hence, during the 1990s, the idea of designing passive energy dissipation systems, including dampers, emerged. At the beginning, the main problem was the lack of guidelines for passive energy dissipation systems. Although many guidelines and procedures had been published by 2000, most of them were based on complicated analyses that were not convenient for engineers and practitioners. To address this problem, several alternative design methods have recently been proposed, including: (1) the simple procedure of Lopez Garcia (2001) for optimal damper configuration in MDOF structures; (2) the trial-and-error procedure of Christopoulos and Filiatrault (2006); (3) the Five-Step Method of Silvestri et al. (2010); (4) the Direct Five-Step Method of Palermo et al. (2015); and (5) the Simplified Equivalent Static Analysis (ESA) of Palermo et al. (2016). In this study, the effectiveness of, and the differences between, the last three alternative methods have been evaluated.
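The step such direct procedures share is sizing the added viscous dampers so that the structure reaches a chosen target damping ratio. The sketch below shows only that generic step for a shear-type frame, using the classical single-degree-of-freedom relation c_tot = 2·ξ·M·ω₁ and an even distribution over the stories; it is a simplified illustration under stated assumptions, not the published Five-Step Method, and all numbers are placeholders:

```python
import numpy as np

def size_viscous_dampers(story_masses, omega1, xi_target):
    """Size linear viscous dampers for a target added damping ratio.

    Uses the SDOF relation c_tot = 2 * xi * M * omega1 for the total
    damping coefficient, then spreads it evenly over the stories
    (one damper per story). A simplified sketch, not a design code.
    """
    M = float(np.sum(story_masses))            # total seismic mass [kg]
    c_total = 2.0 * xi_target * M * omega1     # total coefficient [N*s/m]
    n = len(story_masses)
    return np.full(n, c_total / n)

# Illustrative 5-story frame: 300 t per story, T1 = 0.5 s, 20% added damping
masses = np.full(5, 300e3)        # [kg]
omega1 = 2 * np.pi / 0.5          # fundamental circular frequency [rad/s]
print(size_viscous_dampers(masses, omega1, 0.20))  # per-story c [N*s/m]
```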
Abstract:
Synthetic biology has recently undergone great development: many papers have been published and many applications have been presented, spanning from the production of biopharmaceuticals to the synthesis of bioenergetic substrates or industrial catalysts. Despite these advances, however, most applications are quite simple and do not fully exploit the potential of this discipline. This limitation in complexity has many causes, such as the incomplete characterization of some components or the intrinsic variability of biological systems, but one of the most important reasons is the inability of the cell to sustain the additional metabolic burden introduced by a complex circuit. The objective of the project of which this work is part is to address this problem through the engineering of a multicellular behaviour in prokaryotic cells. Such a system introduces a cooperative behaviour that makes it possible to implement complex functionalities that cannot be obtained with a single cell. In particular, the goal is to implement Leader Election, a procedure first devised in the field of distributed computing to identify a single process as organizer and coordinator of a series of tasks assigned to the whole population. The election of the Leader greatly simplifies the computation by providing centralized control. Furthermore, this system may even be useful for evolutionary studies that aim to explain how complex organisms evolved from unicellular systems. The work presented here describes, in particular, the design and the experimental characterization of one component of the circuit that solves the Leader Election problem. This module, composed of a hybrid promoter and a gene, is activated in the non-leader cells after receiving the signal that a leader is present in the colony. The most important element, in this case, is the hybrid promoter: it has been realized in different versions, applying the heuristic rules stated in [22], and their activity has been tested experimentally. The objective of the experimental characterization was to test the response of the genetic circuit to the introduction, in the cellular environment, of particular molecules, the inducers, which can be considered inputs of the system. The desired behaviour is similar to that of a logic AND gate, in which the output, represented by the fluorescence signal produced by a fluorescent protein, is one only in the presence of both inducers. The robustness and the stability of this behaviour have been tested by changing the concentrations of the input signals and building dose-response curves. From these data it is possible to conclude that the analysed constructs have an AND-like behaviour over a wide range of inducer concentrations, even if many differences can be identified in the expression profiles of the different constructs. This variability reflects the fact that the input and output signals are continuous, so their binary representation cannot capture the complexity of the behaviour. The module of the circuit considered in this analysis has a fundamental role in the realization of the intercellular communication system that is necessary for the cooperative behaviour to take place. For this reason, the second phase of the characterization focused on the analysis of signal transmission; in particular, the interaction between this element and the one responsible for emitting the chemical signal has been tested. The desired behaviour is still similar to a logic AND, since, in this case too, the output signal is determined by the activity of the hybrid promoter. The experimental results have demonstrated that the systems behave correctly, even if there is still substantial variability between them. The dose-response curves highlighted that stricter constraints on the inducer concentrations need to be imposed in order to obtain a clear separation between the two expression levels. In the concluding chapter the DNA sequences of the hybrid promoters are analysed, in an attempt to identify the regulatory elements that matter most for determining gene expression; given the available data, it was not possible to draw definitive conclusions. Finally, a few considerations on promoter engineering and on the realization of complex circuits are presented. This section briefly recalls some of the problems outlined in the introduction and offers a few possible solutions.
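To make the AND-like dose-response behaviour described above concrete, a common minimal model treats the two-input hybrid promoter as the product of two Hill activation functions; this is a generic sketch with placeholder parameters, not the fitted model of the characterized constructs:

```python
import numpy as np

def hill(x, k, n):
    """Activating Hill function: fraction of maximal promoter activity."""
    return x**n / (k**n + x**n)

def and_gate_output(ind1, ind2, k1=10.0, k2=5.0, n1=2.0, n2=2.0):
    """Normalized fluorescence of a two-input hybrid promoter, modelled
    as the product of two independent Hill activations (placeholder
    parameters): high output only when BOTH inducers are present."""
    return hill(ind1, k1, n1) * hill(ind2, k2, n2)

# Dose-response surface over a log-spaced grid of inducer concentrations (a.u.)
c = np.logspace(-1, 3, 50)
I1, I2 = np.meshgrid(c, c)
response = and_gate_output(I1, I2)
print(response[0, 0], response[0, -1], response[-1, -1])  # ~0, ~0, ~1
```

Because the inputs and the output are continuous, the "two levels" of the gate are really regions of such a surface, which is why dose-response curves over a range of concentrations are needed to judge the separation between ON and OFF.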
Abstract:
In recent years, environmental concerns and the expected shortage of fossil reserves have driven further development of biomaterials. Among them, poly(lactide) (PLA) possesses attractive properties such as good processability, excellent tensile strength, and stiffness equivalent to that of some commercial petroleum-based polymers (PP, PS, PET, etc.). This biobased polymer is also biodegradable and biocompatible. However, one great disadvantage of commercial PLA is its slow crystallization rate, which restricts its use in many fields. The use of nanofillers is viewed as an efficient strategy to overcome this problem. In this thesis, the effect of bionanofillers on neat PLA and on blends of poly(L-lactide) (PLA)/poly(ε-caprolactone) (PCL) has been investigated. The nanofillers used are: poly(L-lactide-co-ε-caprolactone) and poly(L-lactide-b-ε-caprolactone) grafted onto cellulose nanowhiskers, and neat cellulose nanowhiskers (CNW). The grafting of poly(L-lactide-co-caprolactone) and poly(L-lactide-b-caprolactone) onto the nanocellulose was performed by the "grafting from" technique, in which the polymerization reaction is initiated directly on the substrate surface. The reaction conditions were chosen after a temperature and solvent screening. By non-isothermal and isothermal DSC analyses, the effect of the bionanofillers on PLA and on the 80/20 PLA/PCL blend was evaluated. Non-isothermal DSC scans show a nucleating effect of the bionanofillers on PLA; this effect is detectable during PLA crystallization from the glassy state. The cold crystallization temperature is reduced upon the addition of poly(L-lactide-b-caprolactone) grafted on cellulose nanowhiskers, which is the best-performing bionanofiller as a nucleating agent. On the other hand, isothermal DSC analysis of the overall crystallization rate indicates that cellulose nanowhiskers are the best nucleating agents during isothermal crystallization from the melt. In conclusion, the nanofillers behave differently depending on the processing conditions; however, their efficiency as nucleating agents was clearly demonstrated in both isothermal and non-isothermal conditions.
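For context on how such isothermal DSC data are commonly reduced to an overall crystallization rate, the standard route (assumed here; the abstract does not name the analysis method) is the Avrami fit of the relative crystallinity,

$$X(t) = 1 - \exp\!\left(-k\,t^{n}\right), \qquad t_{1/2} = \left(\frac{\ln 2}{k}\right)^{1/n},$$

where $k$ is the crystallization rate constant and $n$ the Avrami exponent; a more effective nucleating agent then shows up as a larger $k$ and a shorter half-time $t_{1/2}$ at a given crystallization temperature.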
Abstract:
In modern society, security issues of IT systems are intertwined with interdisciplinary aspects, from social life to sustainability, and threats endanger many aspects of everyone's daily life. To address the problem, it is important that the systems we use guarantee a certain degree of security; to achieve this, it is necessary to be able to measure the amount of security. Measuring security is not an easy task, but many initiatives, including European regulations, aim to make this possible. One method of measuring security is based on security metrics: these are a way of assessing, from various angles, vulnerabilities, methods of defense, the risks and impacts of successful attacks, and also the efficacy of reactions, giving precise results through mathematical and statistical techniques. I have carried out a literature review to provide an overview of the meaning, the effects, the problems, the applications, and the overall current situation of security metrics, with particular emphasis on practical examples. This thesis starts with a summary of the state of the art in the field of security metrics, with application examples, to outline the gaps in the current literature and the difficulties that arise when the application context changes; it then advances research questions aimed at fostering the discussion towards a more complete and applicable view of the subject. Finally, it stresses the lack of security metrics that consider interdisciplinary aspects, giving some potential starting points for developing security metrics that cover all the aspects involved, taking the field to a new level of formal soundness and practical usability.
Abstract:
Driving simulators emulate a real vehicle drive in a virtual environment. One of the most challenging problems in this field is to create a simulated drive as real as possible, deceiving the driver's senses into the belief of being in a real vehicle. This thesis first provides an overview of the Stuttgart driving simulator, with a description of the overall system, followed by a theoretical presentation of the commonly used motion cueing algorithms. The second and predominant part of the work presents the implementation of the classical and optimal washout algorithms in a Simulink environment. The project aims to create a new optimal washout algorithm and to compare the results obtained with those of the classical washout. The classical washout algorithm, already implemented in the Stuttgart driving simulator, is the most widely used in simulator motion control. It is based on a sequence of filters in which each parameter has a clear physical meaning and a unique assignment to a single degree of freedom. However, the effects on human perception are not exploited, and each parameter must be tuned online by an engineer in the control room, depending on the driver's feeling. To overcome this problem, and also to take the driver's sensations into account, the optimal washout motion cueing algorithm was implemented. This optimal-control-based algorithm treats motion cueing as a tracking problem, forcing the accelerations perceived in the simulator to track the accelerations that would have been perceived in a real vehicle by minimizing the perception error within the constraints of the motion platform. The last chapter presents a comparison between the two algorithms, based on the driver's feelings after the test drive. First, an off-line test with a step acceleration input was run to verify the behaviour of the simulator; then the algorithms were executed in the simulator during test drives on several tracks.
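To make the filter-based idea of the classical washout concrete, the sketch below implements only its translational channel: the vehicle acceleration is scaled, passed through a high-pass washout filter (so sustained cues are dropped and the platform drifts back to neutral), and integrated twice into a position command. Filter order, cutoff, scaling, and sample rate are illustrative placeholders, not the Stuttgart simulator's tuning:

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 100.0                 # control-loop rate [Hz] (placeholder)
dt = 1.0 / fs

# 3rd-order Butterworth high-pass at 0.5 Hz: with three zeros at DC the
# doubly integrated (position) response also returns to neutral.
b, a = butter(3, 0.5, btype="highpass", fs=fs)

def classical_washout_x(a_vehicle, scale=0.5):
    """Translational channel of a classical washout algorithm (sketch).

    Scales the vehicle acceleration, removes its sustained component
    with the high-pass filter, and integrates twice to obtain a
    platform displacement command that washes out over time.
    """
    a_washed = lfilter(b, a, scale * a_vehicle)  # drop the sustained cue
    v = np.cumsum(a_washed) * dt                 # acceleration -> velocity
    return np.cumsum(v) * dt                     # velocity -> displacement

# Step input: a sustained 2 m/s^2 braking deceleration held for 10 s
t = np.arange(0.0, 10.0, dt)
x = classical_washout_x(np.full_like(t, 2.0))
print(x.max(), x[-1])   # initial excursion, then drift back toward zero
```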
Abstract:
In recent years, global supply chains have increasingly suffered from reliability issues due to various external and difficult-to-manage events. This paper aims to build an integrated approach to the design of a supply chain under the risk of disruption and demand fluctuation. The study is divided into two parts: a mathematical optimization model, to identify the optimal design and customer-facility assignments, and a discrete-event simulation of the resulting network. The first part describes a model in which plant location decisions are influenced by variables such as distance to customers, the investments needed to open plants, and centralization phenomena that help contain the risk of demand variability (Risk Pooling). The entire model has been built with a proactive approach to managing disruption risk, assigning to each customer two open facilities: a main facility that serves it under normal conditions and a back-up facility that comes into operation when the main facility fails. The study is conducted on a relatively small number of instances because of the computational complexity; a matheuristic approach, presented in part A of the paper, evaluates the problem with a larger set of players. Once the network is built, a discrete-event Supply Chain Simulation (SCS) is implemented to analyze the stock flows within the facilities' warehouses, the actual impact of disruptions, and the role of the back-up facilities, whose inventories come under great stress due to the demand increase caused by disruptions. The simulation therefore follows a reactive approach, in which customers are redistributed among facilities according to the interruptions that may occur in the system and to the assignments derived from the design model. Lastly, the most important results of the study are reported, analyzing the role of lead time in a reactive approach to disruptions and comparing the two models in terms of costs.
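A minimal sketch of the kind of proactive design model described above, with one main and one distinct back-up facility assigned to each customer, is shown below using the PuLP modelling library; the sets, costs, and the simple back-up weighting are illustrative placeholders, not the formulation of the paper:

```python
import pulp

facilities = ["F1", "F2", "F3"]
customers = ["C1", "C2", "C3", "C4"]
open_cost = {"F1": 100.0, "F2": 120.0, "F3": 90.0}
# Transport cost per customer-facility pair (placeholder distances)
cost = {
    ("C1", "F1"): 4, ("C1", "F2"): 7, ("C1", "F3"): 9,
    ("C2", "F1"): 6, ("C2", "F2"): 3, ("C2", "F3"): 8,
    ("C3", "F1"): 9, ("C3", "F2"): 4, ("C3", "F3"): 3,
    ("C4", "F1"): 5, ("C4", "F2"): 8, ("C4", "F3"): 2,
}
BACKUP_WEIGHT = 0.2  # back-up routes count less: used only after a disruption

prob = pulp.LpProblem("design_with_backup", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", facilities, cat="Binary")
x = pulp.LpVariable.dicts("main", (customers, facilities), cat="Binary")
z = pulp.LpVariable.dicts("backup", (customers, facilities), cat="Binary")

# Fixed opening costs plus weighted transport costs for both assignments
prob += (
    pulp.lpSum(open_cost[f] * y[f] for f in facilities)
    + pulp.lpSum(cost[c, f] * (x[c][f] + BACKUP_WEIGHT * z[c][f])
                 for c in customers for f in facilities)
)

for c in customers:
    prob += pulp.lpSum(x[c][f] for f in facilities) == 1   # one main facility
    prob += pulp.lpSum(z[c][f] for f in facilities) == 1   # one back-up facility
    for f in facilities:
        # Only open facilities may serve, and main != back-up
        prob += x[c][f] + z[c][f] <= y[f]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for c in customers:
    main = next(f for f in facilities if x[c][f].value() > 0.5)
    backup = next(f for f in facilities if z[c][f].value() > 0.5)
    print(c, "main:", main, "backup:", backup)
```

Capacity limits, demand scenarios, and risk-pooling effects would enter as additional constraints; the discrete-event simulation then stresses the resulting network reactively.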