9 results for Optimal experiment design
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
Driving simulators emulate a real vehicle drive in a virtual environment. One of the most challenging problems in this field is to create a simulated drive that feels as real as possible, deceiving the driver's senses into the belief of being in a real vehicle. This thesis first provides an overview of the Stuttgart driving simulator with a description of the overall system, followed by a theoretical presentation of the commonly used motion cueing algorithms. The second and predominant part of the work presents the implementation of the classical and optimal washout algorithms in a Simulink environment. The project aims to create a new optimal washout algorithm and compare the results obtained with those of the classical washout. The classical washout algorithm, already implemented in the Stuttgart driving simulator, is the most widely used in simulator motion control. It is based on a sequence of filters in which each parameter has a clear physical meaning and a unique assignment to a single degree of freedom. However, the effects on human perception are not exploited, and each parameter must be tuned online by an engineer in the control room, depending on the driver's feeling. To overcome this problem and also take the driver's sensations into account, the optimal washout motion cueing algorithm was implemented. This optimal-control-based algorithm treats motion cueing as a tracking problem, forcing the accelerations perceived in the simulator to track the accelerations that would be perceived in a real vehicle, by minimizing the perception error within the constraints of the motion platform. The last chapter presents a comparison between the two algorithms, based on the driver's feelings after the test drive. First, an off-line test with a step signal as input acceleration was implemented to verify the behaviour of the simulator. Second, the algorithms were executed in the simulator during test drives on several tracks.
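To make the contrast between the two algorithms concrete, the following is a minimal Python sketch of the translational channel of a classical washout filter: the vehicle acceleration is scaled and high-pass filtered, so that sustained cues decay and the platform drifts back to neutral. The gain and cutoff frequency are illustrative assumptions, not the parameters tuned in the Stuttgart simulator; the optimal washout replaces this fixed filter chain with an LQ tracking controller minimizing the perception error.

# Minimal sketch of a classical washout translational channel.
# Gain and cutoff are illustrative, not the Stuttgart simulator's values.
import numpy as np
from scipy.signal import butter, lfilter

fs = 100.0                               # sample rate [Hz]
t = np.arange(0, 10, 1 / fs)             # 10 s test signal
a_vehicle = np.where(t > 1.0, 2.0, 0.0)  # step of 2 m/s^2 at t = 1 s

gain = 0.5                               # down-scaling of the vehicle cue
b, a = butter(2, 0.8, btype="highpass", fs=fs)  # 2nd-order HP, 0.8 Hz cutoff
a_platform = gain * lfilter(b, a, a_vehicle)

# The onset of the step is reproduced, then the cue washes out to zero:
print(a_platform[int(1.1 * fs)], a_platform[-1])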
Abstract:
This thesis analyses the main sources of aircraft noise and the state of the art from the regulatory, technological, and procedural points of view. The state of the art of aircraft classification is also analysed, and a new performance index is proposed as an alternative to the one indicated by the certification methodology (AC36-ICAO). With the aim of reducing the acoustic impact of aircraft during the landing phase, the INM program is used to analyse the benefits of 3° CDA procedures with respect to traditional procedures and, subsequently, of CDA procedures at steeper angles, in terms of the reduction in length and area of the SEL85, SEL80 and SEL75 noise contours.
Abstract:
Data Distribution Management (DDM) is a core part of the High Level Architecture standard, as its goal is to optimize the resources used by simulation environments to exchange data. It has to filter and match the set of information generated during a simulation, so that each federate (i.e., each simulation entity) only receives the information it needs. It is important that this is done quickly and well, in order to achieve good performance and avoid the transmission of irrelevant data; otherwise, network resources may saturate quickly. The main topic of this thesis is the implementation of an impartial (super partes) DDM testbed. It evaluates the quality of DDM approaches of all kinds: it supports both region-based and grid-based approaches, and it can also accommodate other methods yet to be devised. It ranks them using three factors: execution time, memory usage, and distance from the optimal solution. A prearranged set of instances is already available, but we also allow the creation of instances with user-provided parameters. The thesis is structured as follows. We start by introducing what DDM and HLA are and what they do in detail. Then, in the first chapter, we describe the state of the art, providing an overview of the best-known resolution approaches and the pseudocode of the most interesting ones. The third chapter describes how the testbed we implemented is structured. In the fourth chapter we present and compare the results obtained from the execution of the four approaches we implemented. The result of the work described in this thesis can be downloaded from SourceForge at the following link: https://sourceforge.net/projects/ddmtestbed/. It is licensed under the GNU General Public License version 3.0 (GPLv3).
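As an illustration of the two families of approaches the testbed supports, the following is a minimal Python sketch of region-based (brute-force) and grid-based matching, reduced to one dimension. The regions, their sizes, and the grid cell width are made-up assumptions, not the testbed's actual data structures.

# Minimal sketch of 1-D DDM matching: regions are [lower, upper) extents;
# an update region matches a subscription region when the intervals overlap.

def region_based(updates, subscriptions):
    """Brute force: test every update/subscription pair, O(n*m)."""
    return {(i, j)
            for i, (ul, uu) in enumerate(updates)
            for j, (sl, su) in enumerate(subscriptions)
            if ul < su and sl < uu}

def grid_based(updates, subscriptions, cell=10.0):
    """Bucket regions into grid cells; only pairs sharing a cell are tested."""
    buckets = {}
    for j, (sl, su) in enumerate(subscriptions):
        for c in range(int(sl // cell), int(su // cell) + 1):
            buckets.setdefault(c, []).append(j)
    matches = set()
    for i, (ul, uu) in enumerate(updates):
        for c in range(int(ul // cell), int(uu // cell) + 1):
            for j in buckets.get(c, []):
                sl, su = subscriptions[j]
                if ul < su and sl < uu:   # exact test removes false positives
                    matches.add((i, j))
    return matches

updates = [(0, 12), (30, 35), (50, 80)]
subscriptions = [(10, 20), (33, 60)]
assert region_based(updates, subscriptions) == grid_based(updates, subscriptions)

The final assert checks that the grid approach, thanks to the exact overlap test, returns the same matches as brute force; the testbed's "distance from the optimal solution" factor would penalize approximate approaches that skip such a test.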
Abstract:
Among electric propulsion systems for satellites, the Pulsed Plasma Thruster (PPT) has the simplest design. It was also the first electric propulsion system used on an artificial satellite, namely ZOND-2, launched in 1964 by the Soviet Union. However, after roughly 50 years of research, the theoretical and experimental understanding of this device remains limited. This master's thesis investigates the ignition subsystem of the PPT, seeking to shed light on some aspects related to the lifetime of the spark plug (SP). The SP is the actuator of the ignition subsystem: it produces a spark on its surface, which enables the main electric discharge between the two electrodes of the thruster. This discharge creates a thin sheet of plasma that, by means of the electromagnetic Lorentz force, produces the thrust of the PPT. Since the SP is located inside the cathode of the thruster and faces the discharge chamber, it suffers from erosion phenomena and from carbon deposition originating from the propellant. These phenomena can considerably limit the lifetime of the SP. The parameters connected to the operational life of the SP are numerous. This work analyses the possibility of using spark-plug ignition electronics alternative to the classical solution based on a transformer. The classical and the new ignition subsystems were built and tested, in order to highlight their differences and possible advantages and disadvantages.
Abstract:
In a world focused on the need to produce energy for a growing population while reducing atmospheric emissions of carbon dioxide, organic Rankine cycles represent a solution to fulfil this goal. This study focuses on the design and optimization of axial-flow turbines for organic Rankine cycles. From the turbine designer's point of view, most of these fluids exhibit some peculiar characteristics, such as a small enthalpy drop, a low speed of sound, and a large expansion ratio. A computational model for the prediction of axial-flow turbine performance is developed and validated against experimental data. The model allows turbine performance to be calculated within an accuracy of ±3%. The design procedure is coupled with an optimization process, performed using a genetic algorithm in which the turbine total-to-static efficiency represents the objective function. The computational model is integrated in a wider analysis of thermodynamic cycle units, by providing the optimal turbine design. First, the calculation routine is applied in the context of the Draugen offshore platform, where three heat recovery systems are compared. The turbine performance is investigated for three competing bottoming cycles: an organic Rankine cycle (operating with cyclopentane), a steam Rankine cycle, and an air bottoming cycle. Findings indicate the air turbine as the most efficient solution (total-to-static efficiency = 0.89), while the cyclopentane turbine proves to be the most flexible and compact technology (2.45 ton/MW and 0.63 m3/MW). Furthermore, the study shows that, for organic and steam Rankine cycles, the optimal design configurations of the expanders do not coincide with those of the thermodynamic cycles. This suggests the possibility of obtaining a more accurate analysis by including the computational model in the simulations of the thermodynamic cycles. Afterwards, a performance analysis is carried out by comparing three organic fluids: cyclopentane, MDM and R245fa. Results suggest MDM as the most effective fluid from the turbine performance viewpoint (total-to-total efficiency = 0.89). On the other hand, cyclopentane guarantees a greater net power output of the organic Rankine cycle (P = 5.35 MW), while R245fa represents the most compact solution (1.63 ton/MW and 0.20 m3/MW). Finally, the influence of the composition of an isopentane/isobutane mixture on both the thermodynamic cycle performance and the expander isentropic efficiency is investigated. Findings show how the mixture composition affects the turbine efficiency and thus the cycle performance. Moreover, the analysis demonstrates that the use of binary mixtures leads to an enhancement of the thermodynamic cycle performance.
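The following is a minimal Python sketch of the optimization loop described above: a genetic algorithm searching a few turbine design variables for maximum total-to-static efficiency. In the thesis the objective is evaluated by the validated mean-line performance model; here the function efficiency is a made-up placeholder surrogate, and the design variables and bounds are illustrative assumptions.

# Minimal sketch of a genetic algorithm maximizing total-to-static efficiency.
import numpy as np

rng = np.random.default_rng(0)
# Illustrative design variables: [degree of reaction, flow coefficient,
# loading coefficient], each bounded in [lo, hi].
bounds = np.array([[0.0, 0.6], [0.3, 1.0], [0.8, 2.5]])

def efficiency(x):
    """Placeholder for the mean-line model (hypothetical optimum at center)."""
    center = bounds.mean(axis=1)
    return 0.90 - np.sum(((x - center) / (bounds[:, 1] - bounds[:, 0])) ** 2)

def ga(pop_size=40, generations=60, mut_sigma=0.05):
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, 3))
    for _ in range(generations):
        fit = np.array([efficiency(x) for x in pop])
        # Tournament selection of parents.
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # Blend crossover between consecutive parents, then Gaussian mutation.
        alpha = rng.uniform(0, 1, size=(pop_size, 1))
        children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
        children += rng.normal(0, mut_sigma, size=children.shape)
        pop = np.clip(children, bounds[:, 0], bounds[:, 1])
    fit = np.array([efficiency(x) for x in pop])
    return pop[fit.argmax()], fit.max()

best_x, best_eta = ga()
print(best_x, best_eta)  # approaches the placeholder optimum, eta ~ 0.90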
Abstract:
All structures designed by engineers are vulnerable to natural disasters, including floods and earthquakes. The energy released during strong ground motions has to be dissipated by structural elements. Before the 1990s, this energy was expected to be dissipated by the beams and columns that were, at the same time, part of the gravity-load-resisting system. The main disadvantage of this approach, however, was that the gravity-resisting frame was not repairable. Hence, during the 1990s, the idea of designing passive energy dissipation systems, including dampers, emerged. At the beginning, the main problem was the lack of guidelines for passive energy dissipation systems. Although many guidelines and procedures were published by 2000, most of them were based on complicated analyses that were not convenient for engineers and practitioners. To solve this problem, several alternative design methods have recently been proposed, including:
1. Lopez Garcia (2001): a simple procedure for optimal damper configuration in MDOF structures;
2. Christopoulos and Filiatrault (2006): a trial-and-error procedure;
3. Silvestri et al. (2010): the Five-Step Method;
4. Palermo et al. (2015): the Direct Five-Step Method;
5. Palermo et al. (2016): the Simplified Equivalent Static Analysis (ESA).
In this study, the effectiveness of, and the differences between, the last three methods have been evaluated; a sketch of the shared underlying idea follows.
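The cited procedures differ in their steps and formulas, but they build on the modal-damping identity xi_1 = phi^T C phi / (2 * omega_1 * phi^T M phi). The following Python sketch applies that identity only, not any published method's exact procedure, to size identical inter-storey viscous dampers of a shear-type frame for a target first-mode damping ratio; the storey mass and stiffness values are illustrative assumptions.

# Minimal sketch: size identical inter-storey viscous dampers for a target
# first-mode damping ratio using the modal damping identity. Illustrative data.
import numpy as np
from scipy.linalg import eigh

n, m, k = 5, 200e3, 150e6       # storeys, storey mass [kg], stiffness [N/m]
M = m * np.eye(n)
# Shear-building stiffness matrix (tridiagonal; top storey has one spring).
K = k * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
K[-1, -1] = k

w2, V = eigh(K, M)              # generalized eigenproblem K phi = w^2 M phi
omega1, phi = np.sqrt(w2[0]), V[:, 0]

# Damping-matrix pattern for unit inter-storey dampers (same topology as K).
C1 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
C1[-1, -1] = 1.0

xi_target = 0.20                # 20% added damping ratio, illustrative
c = xi_target * 2 * omega1 * (phi @ M @ phi) / (phi @ C1 @ phi)
print(f"required damper coefficient per storey: {c:.3e} N*s/m")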
Abstract:
This thesis project studies the agent identity privacy problem in the scalar linear quadratic Gaussian (LQG) control system. For the agent identity privacy problem in LQG control, privacy models and privacy measures have to be established first. The problem depends on a trajectory of correlated data rather than a single observation, and I propose privacy models and corresponding privacy measures that take these two characteristics into account. The agent identity is a binary hypothesis: Agent A or Agent B. An eavesdropper is assumed to perform hypothesis testing on the agent identity based on the intercepted environment state sequence. The privacy risk is measured by the Kullback-Leibler divergence between the probability distributions of the state sequences under the two hypotheses. By taking into account both the accumulated control reward and the privacy risk, an optimization problem for the policy of Agent B is formulated. The optimal deterministic privacy-preserving LQG policy of Agent B is a linear mapping, and a sufficient condition is given to guarantee that this policy is time-invariant in the asymptotic regime. An independent Gaussian random variable cannot improve the performance of Agent B. Based on the privacy model and the LQG control model, I have formulated the mathematical problems for the agent identity privacy problem in LQG, addressing the two design objectives: maximizing the control reward and minimizing the privacy risk. I have conducted a theoretical analysis of the LQG control policy in the agent identity privacy problem and of the trade-off between the control reward and the privacy risk. Finally, the theoretical results are justified by numerical experiments, which illustrate the reward-privacy trade-off; the resulting observations and insights are explained in the last chapter.
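For Gaussian state sequences, the privacy measure described above has a closed form. The following Python sketch computes the Kullback-Leibler divergence between the distributions of a scalar closed-loop state trajectory under two linear policies; the system parameters and policy gains are illustrative assumptions, not the thesis's actual values.

# Minimal sketch of the KL-divergence privacy measure for Gaussian sequences.
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """KL( N(mu0, S0) || N(mu1, S1) ) for multivariate Gaussians."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(S0)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + logdet1 - logdet0)

def sequence_moments(a, b, f, T, sigma2=1.0):
    """Mean/covariance of (x_1..x_T) for x_{t+1} = a x_t + b u_t + w_t with
    linear policy u_t = f x_t, x_0 = 0, w_t ~ N(0, sigma2) i.i.d."""
    phi = a + b * f                      # closed-loop gain
    mu = np.zeros(T)
    S = np.zeros((T, T))
    for s in range(T):                   # Cov(x_{s+1}, x_{t+1})
        for t in range(T):
            S[s, t] = sigma2 * sum(phi ** (s - j) * phi ** (t - j)
                                   for j in range(min(s, t) + 1))
    return mu, S

T = 20
mu_A, S_A = sequence_moments(a=0.9, b=1.0, f=-0.5, T=T)  # hypothesis: Agent A
mu_B, S_B = sequence_moments(a=0.9, b=1.0, f=-0.2, T=T)  # hypothesis: Agent B
print("privacy risk D_KL(A||B):", kl_gaussian(mu_A, S_A, mu_B, S_B))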
Abstract:
All structures are subjected to various loading conditions and combinations. For offshore structures, these loads include permanent loads, hydrostatic pressure, and wave, current, and wind loads. Typically, the sea environments of different geographical regions are characterized by the 100-year wave height and by surface current and wind speeds. The main problems associated with the commonly used deterministic method are that not all waves have the same period, and that the actual stochastic nature of the marine environment is not taken into account. Fatigue design of offshore steel structures is carried out using the DNVGL-RP-0005:2016 standard, which supersedes the DNV-RP-C203 standard (2012). Fatigue analysis is necessary for oil- and gas-producing offshore steel structures, which were first constructed in the Gulf of Mexico (1930s) and later in the North Sea (1960s). Fatigue strength is commonly described by S-N curves, which are obtained from laboratory experiments. The rapid development of the offshore wind industry has driven exploration into deeper ocean areas and the adoption of new support structural concepts, such as full lattice tower systems, among others. The optimal design of offshore wind support structures, including foundation, turbine tower, and transition piece components, while taking into consideration economy, safety, and the environment, is a critical challenge. In this study, the fatigue design challenges of transition pieces from decommissioned platforms for offshore wind energy are discussed. The fatigue resistance of the material and of structural components under uniaxial and multiaxial loading is introduced along with the new fatigue design rules, while considering the combination of global and local modelling using finite element analysis software.
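To make the S-N machinery concrete, the following is a minimal Python sketch of a one-slope S-N curve, log10(N) = log10(a_bar) - m * log10(dsigma), combined with Palmgren-Miner linear damage accumulation over a long-term stress-range histogram. The curve parameters and the load spectrum are illustrative assumptions rather than values prescribed by DNVGL-RP-0005:2016.

# Minimal sketch of S-N fatigue check with Palmgren-Miner damage accumulation.
import numpy as np

log_a, m = 12.164, 3.0          # illustrative one-slope S-N parameters
design_life_years = 25

def cycles_to_failure(dsigma):
    """Allowable cycles N at constant stress range dsigma [MPa]."""
    return 10.0 ** (log_a - m * np.log10(dsigma))

# Long-term stress-range histogram: (stress range [MPa], cycles per year).
spectrum = [(80.0, 1.0e4), (40.0, 2.0e5), (15.0, 5.0e6)]

damage_per_year = sum(n / cycles_to_failure(s) for s, n in spectrum)
damage = design_life_years * damage_per_year
print(f"Miner damage over {design_life_years} years: D = {damage:.3f}")
print("acceptable (D <= 1)" if damage <= 1.0 else "not acceptable (D > 1)")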
Abstract:
In recent years, global supply chains have increasingly suffered from reliability issues due to various external and difficult-to-manage events. This paper aims to build an integrated approach for the design of a supply chain under the risk of disruption and demand fluctuation. The study is divided into two parts: a mathematical optimization model, to identify the optimal design and the customer-facility assignments, and a discrete-event simulation of the resulting network. The first part describes a model in which plant location decisions are influenced by variables such as distance to customers, the investments needed to open plants, and centralization phenomena that help contain the risk of demand variability (risk pooling). The entire model has been built with a proactive approach to managing the risk of disruptions, assigning to each customer two types of open facilities: one that serves it under normal conditions, and a back-up facility that comes into operation when the main facility has failed. The study is conducted on a relatively small number of instances due to the computational complexity; a matheuristic approach can be found in part A of the paper to evaluate the problem with a larger set of players. Once the network is built, a discrete-event supply chain simulation (SCS) is implemented to analyze the stock flow within the facilities' warehouses, the actual impact of disruptions, and the role of the back-up facilities, whose inventories suffer great stress due to the large increase in demand caused by the disruptions. The simulation therefore follows a reactive approach, in which customers are redistributed among facilities according to the interruptions that may occur in the system and to the assignments derived from the design model. Lastly, the most important results of the study are reported, analyzing the role of lead time in a reactive approach to the occurrence of disruptions and comparing the two models in terms of costs.
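A minimal sketch of the design model's core, assuming PuLP with its bundled CBC solver: facilities are opened and each customer receives a primary and a distinct back-up facility, minimizing fixed opening costs plus distance-proportional service costs. The data is synthetic and the formulation deliberately simplified (no capacities, no explicit risk-pooling term), so it illustrates the assignment structure rather than the paper's full model.

# Minimal sketch of facility location with primary and back-up assignments.
from itertools import product
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

customers = ["c1", "c2", "c3", "c4"]
facilities = ["f1", "f2", "f3"]
open_cost = {"f1": 100, "f2": 120, "f3": 90}
dist = {(c, f): ((3 * i + 5 * j) % 11) + 1        # synthetic distances
        for i, c in enumerate(customers)
        for j, f in enumerate(facilities)}

prob = LpProblem("scnd_with_backup", LpMinimize)
y = LpVariable.dicts("open", facilities, cat=LpBinary)
x = LpVariable.dicts("primary", (customers, facilities), cat=LpBinary)
b = LpVariable.dicts("backup", (customers, facilities), cat=LpBinary)

# Objective: fixed opening costs + primary transport + discounted back-up exposure.
prob += (lpSum(open_cost[f] * y[f] for f in facilities)
         + lpSum(dist[c, f] * (x[c][f] + 0.3 * b[c][f])
                 for c, f in product(customers, facilities)))

for c in customers:
    prob += lpSum(x[c][f] for f in facilities) == 1   # exactly one primary
    prob += lpSum(b[c][f] for f in facilities) == 1   # exactly one back-up
    for f in facilities:
        # Both assignments require an open facility, and since y[f] <= 1 the
        # same constraint forces the back-up to differ from the primary.
        prob += x[c][f] + b[c][f] <= y[f]

prob.solve()
for c in customers:
    p = next(f for f in facilities if x[c][f].value() > 0.5)
    s = next(f for f in facilities if b[c][f].value() > 0.5)
    print(c, "-> primary", p, "| backup", s)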