7 results for "Numerical experiments"
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
This thesis studies the agent identity privacy problem in the scalar linear quadratic Gaussian (LQG) control system. For this problem, privacy models and privacy measures have to be established first; unlike classical settings, the problem depends on a trajectory of correlated data rather than on a single observation. I propose privacy models and the corresponding privacy measures that take these characteristics into account. The agent identity is a binary hypothesis: Agent A or Agent B. An eavesdropper is assumed to perform a hypothesis test on the agent identity based on the intercepted environment state sequence. The privacy risk is measured by the Kullback-Leibler divergence between the probability distributions of the state sequences under the two hypotheses. Taking into account both the accumulated control reward and the privacy risk, an optimization problem over the policy of Agent B is formulated. The optimal deterministic privacy-preserving LQG policy of Agent B is a linear mapping, and a sufficient condition is given that guarantees the optimal deterministic privacy-preserving policy is time-invariant in the asymptotic regime. Adding an independent Gaussian random variable cannot improve the performance of Agent B. Based on the privacy model and the LQG control model, the mathematical problems for the agent identity privacy problem are formulated to address two design objectives: maximizing the control reward and minimizing the privacy risk. A theoretical analysis is conducted of the LQG control policy and of the trade-off between the control reward and the privacy risk. Finally, the theoretical results are justified by numerical experiments, which also illustrate the reward-privacy trade-off; the resulting observations and insights are explained in the last chapter.
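For Gaussian state-sequence distributions, the Kullback-Leibler privacy measure described above has a well-known closed form. The sketch below is a minimal illustration of that formula under illustrative placeholder means and covariances; it is not code or data from the thesis.

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence D(N0 || N1) between two multivariate
    Gaussians -- the form the privacy-risk measure takes when the state
    sequences are jointly Gaussian under both hypotheses."""
    k = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (
        np.trace(cov1_inv @ cov0)          # covariance mismatch
        + diff @ cov1_inv @ diff           # mean shift term
        - k                                # dimension offset
        + np.log(np.linalg.det(cov1) / np.linalg.det(cov0))
    )

# Identical distributions under the two hypotheses -> zero divergence,
# i.e. the eavesdropper learns nothing about the agent identity.
mu = np.zeros(3)
cov = np.eye(3)
print(gaussian_kl(mu, cov, mu, cov))  # -> 0.0
```

A policy that makes the two hypotheses' state distributions closer drives this quantity toward zero, at the cost of control reward.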
Abstract:
The Mediterranean Sea is a semi-enclosed basin connected to the Atlantic Ocean through the narrow and shallow Strait of Gibraltar and further subdivided into two sub-basins, the Eastern Mediterranean and the Western Mediterranean, connected through the Strait of Sicily. On an annual basis, a net heat budget of −7 W/m², combined with evaporation exceeding precipitation and runoff, together with wind stress, is responsible for the antiestuarine character of the zonal thermohaline circulation. The outflow at the Gibraltar Strait is mainly composed of Levantine Intermediate Water (LIW) and deep water masses formed in the Western Mediterranean Sea. The aim of this thesis is to validate and quantitatively assess the main routes of the water masses composing the outflow at the Gibraltar Strait, using for the first time in the Mediterranean Sea a Lagrangian interpretation of the Eulerian velocity field produced from an eddy-resolving reanalysis dataset spanning 2000 to 2012. A Lagrangian model named Ariane is used to map out three-dimensional trajectories describing the pathways of water-mass transport from the Strait of Sicily, the Gulf of Lion and the Northern Tyrrhenian Sea to the Gibraltar Strait. Numerical experiments were carried out by seeding millions of particles in the Strait of Gibraltar and following them backwards in time to track the origins of the water masses and the transport exchanged between the different sections of the Mediterranean. Finally, the main routes of the intermediate and deep water masses are reconstructed from the virtual-particle trajectories, which highlight the role of the Western Mediterranean Deep Water (WMDW) as the main contributor to the Gibraltar Strait outflow. For the first time, a quantitative description of the flow of water masses from the Eastern Mediterranean towards the Gibraltar Strait is provided, and a new route directly linking the Northern Tyrrhenian Sea to the Gibraltar Strait has been detected.
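The backward-in-time seeding idea can be sketched in a few lines: particles are advected against a given velocity field to find where they came from. This is a simplified illustration only; Ariane itself integrates trajectories analytically on the ocean-model grid, and the midpoint-rule integrator and toy velocity field below are my assumptions.

```python
import numpy as np

def track_backwards(positions, velocity, dt, n_steps):
    """Trace particle origins by integrating a velocity field backwards
    in time with the explicit midpoint rule.

    positions : (N, 2) array of seed positions (e.g. in a strait section)
    velocity  : callable (t, xy) -> (N, 2) velocities
    Returns the full (n_steps+1, N, 2) trajectory array.
    """
    xy = positions.copy()
    trajectory = [xy.copy()]
    t = 0.0
    for _ in range(n_steps):
        # Reverse the arrow of time: step against the flow.
        k1 = -velocity(t, xy)
        k2 = -velocity(t + 0.5 * dt, xy + 0.5 * dt * k1)
        xy = xy + dt * k2
        t += dt
        trajectory.append(xy.copy())
    return np.array(trajectory)

# Uniform eastward flow: backward tracking carries the seed westward,
# i.e. toward its upstream origin.
uniform = lambda t, xy: np.tile([1.0, 0.0], (xy.shape[0], 1))
path = track_backwards(np.array([[0.0, 0.0]]), uniform, dt=0.1, n_steps=10)
print(path[-1])  # seed has been traced back to x = -1.0
```

In the thesis setting, millions of such particles seeded at Gibraltar and tracked backwards partition the outflow transport among the upstream sections.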
Abstract:
A way to investigate turbulence is through experiments based on hot-wire measurements. The aim of this thesis work is to analyse the influence of a temperature gradient on hot-wire measurements in turbulence. To the author's knowledge, this investigation is the first attempt to document, understand and ultimately correct the effect of temperature gradients on turbulence statistics. A numerical approach is used, since instantaneous temperature and streamwise velocity fields are required to evaluate this effect. A channel flow simulation at Re_tau = 180 is analysed to make a first evaluation of the error introduced by the temperature gradient inside the domain. A synthetic hot-wire data field is obtained by processing the numerical flow field through a suitable version of King's law, which relates voltage, velocity and temperature. A drift in the mean streamwise velocity profile and in the rms is observed when the temperature correction is performed by means of the centerline temperature. A correct mean velocity profile is achieved by correcting the temperature through its mean value at each wall-normal position, but a non-negligible error is still present in the rms. The key point for properly correcting the velocity sensed by the hot wire is knowledge of the instantaneous temperature field; for this purpose, three correction methods are proposed. Finally, a numerical simulation at Re_tau = 590 is also evaluated in order to confirm the results discussed earlier.
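The role of King's law in converting bridge voltage to velocity, and the sensitivity of that conversion to the local fluid temperature, can be sketched as follows. The calibration coefficients `A`, `B`, `n`, the wire and reference temperatures, and the ratio-form temperature rescaling are illustrative assumptions, not the calibration or the correction methods used in the thesis.

```python
import numpy as np

def kings_law_velocity(E, T_fluid, A=1.0, B=0.8, n=0.45,
                       T_wire=250.0, T_ref=20.0):
    """Invert King's law E^2 = A + B * U^n to recover velocity from a
    hot-wire voltage, after rescaling the voltage for the local fluid
    temperature (overheat-ratio correction). All coefficients and
    temperatures here are illustrative placeholders."""
    # Rescale the bridge voltage to the reference overheat condition:
    # a warmer fluid reduces the wire-to-fluid temperature difference.
    E2_corr = E**2 * (T_wire - T_ref) / (T_wire - T_fluid)
    return ((E2_corr - A) / B) ** (1.0 / n)

# Round trip: synthesize a voltage for U = 5 m/s in fluid at 25 C,
# then invert it back to velocity.
U_true, T = 5.0, 25.0
E = np.sqrt((1.0 + 0.8 * U_true**0.45) * (250.0 - T) / (250.0 - 20.0))
print(kings_law_velocity(E, T))  # ~5.0
```

Using the wrong temperature in the rescaling step (e.g. the centerline value instead of the local one) is exactly what produces the drift in the mean profile and rms described above.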
Abstract:
Turbulent energy dissipation is presented in the theoretical context of the famous Kolmogorov theory, formulated in 1941. Some remarks and comments about this theory help the reader understand the approach to turbulence study and provide some basic insights into the problem. A clear distinction is made among dissipation, pseudo-dissipation and dissipation surrogates. Dissipation regulates how the turbulent kinetic energy in a flow is transformed into internal energy, which makes it a fundamental quantity to investigate in order to enhance our understanding of turbulence. The dissertation focuses on the experimental investigation of the pseudo-dissipation. This quantity is difficult to measure, as it requires knowledge of all the derivatives of the three-dimensional velocity field. When a hot-wire technique is used to measure dissipation, surrogates of dissipation must be employed, since not all the terms can be measured. The analysis of surrogates is the main topic of this work. In particular, two flows are considered: the turbulent channel and the turbulent jet. These canonical flows, introduced briefly, are often used as benchmarks for CFD solvers and experimental equipment due to their simple structure, and observations made in them are often transferable to more complicated and interesting cases with many industrial applications. The main tools of investigation are DNS simulations and experimental measurements. DNS data are used as a benchmark for the experimental results, since all the components of the dissipation are known within a numerical simulation. The results of some DNS were already available at the start of this thesis, so the main work consisted in reading and processing the data. Experiments were carried out by means of hot-wire anemometry, described in detail on both a theoretical and a practical level.
The study of DNS data of a turbulent channel at Re = 298 reveals that the traditional surrogate can be improved. Consequently, two new surrogates are proposed and analysed, based on terms of the velocity gradient that are easy to measure experimentally. We manage to find a formulation that improves the accuracy of the surrogate by an order of magnitude. For the jet flow, results from a DNS of a temporal jet at Re = 1600 and results from our experimental facility CAT at Re = 70000 are compared to validate the experiment. It is found that the ratio between the components of the dissipation differs between the DNS and the experimental data. Possible errors in both data sets are discussed, and some ways to improve the data are proposed.
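The classical one-component surrogate that such analyses start from, eps = 15 nu <(du/dx)^2>, can be sketched from a hot-wire-style time series via Taylor's frozen-flow hypothesis. This is a minimal illustration of the traditional surrogate only; the improved surrogates proposed in the thesis add further measurable velocity-gradient terms and are not reproduced here.

```python
import numpy as np

def isotropic_surrogate(u, dt, U_mean, nu):
    """Classical isotropic dissipation surrogate
    eps = 15 * nu * <(du/dx)^2>, with the streamwise derivative
    obtained from a single-wire time series via Taylor's
    frozen-flow hypothesis: d/dx ~ -(1/U) d/dt."""
    dudt = np.gradient(u, dt)      # central differences in time
    dudx = -dudt / U_mean          # Taylor's hypothesis
    return 15.0 * nu * np.mean(dudx**2)

# Sanity check on a synthetic 10 Hz sine "velocity" signal sampled at
# 10 kHz, with U = 10 m/s and air-like viscosity (illustrative values).
t = np.arange(0.0, 1.0, 1e-4)
u = np.sin(2 * np.pi * 10 * t)
eps = isotropic_surrogate(u, 1e-4, U_mean=10.0, nu=1.5e-5)
print(eps)
```

For this sine signal the result can be checked analytically, since <(du/dx)^2> = 0.5 * (2*pi*f/U)^2; with real hot-wire data the surrogate's error relative to the full pseudo-dissipation is exactly what the DNS benchmark quantifies.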
Abstract:
Linear cascade testing serves a fundamental role in the research, development, and design of turbomachines, as it is a simple yet very effective way to assess the performance of a generic blade geometry. These experiments are usually carried out in specialized wind tunnel facilities. This thesis deals with the numerical characterization and subsequent partial redesign of the S-1/C Continuous High Speed Wind Tunnel of the von Karman Institute for Fluid Dynamics. The current facility is powered by a 13-stage axial compressor that is not powerful enough to balance the energy loss experienced when testing low-turning airfoils. To address this issue, a performance assessment of the wind tunnel was carried out under several flow regimes via numerical simulations. A redesign proposal aimed at reducing the pressure loss was then investigated: a linear cascade of turning blades to be placed downstream of the test section and designed specifically for the type of linear cascade being tested. An automatic design procedure was created, taking as input the parameters measured at the outlet of the cascade. The parametrization method employs Bézier curves to produce an airfoil geometry that can be imported into CAD software so that a cascade can be designed. The proposal was simulated via CFD analysis and proved effective in reducing pressure losses by up to 41%. The tool developed in this thesis could be adopted to design similar apparatuses and could also be optimized and specialized for the design of turbomachinery components.
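The Bézier-curve parametrization mentioned above can be sketched with de Casteljau's algorithm, which evaluates the curve by repeated linear interpolation of the control polygon. The four control points below are an arbitrary illustrative layout, not the thesis' actual airfoil parametrization.

```python
import numpy as np

def bezier(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] using
    de Casteljau's algorithm: repeatedly interpolate between
    consecutive control points until one point remains."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# A crude 4-point "upper surface" from leading edge to trailing edge
# (illustrative control points, chord normalized to 1).
ctrl = [(0.0, 0.0), (0.0, 0.08), (0.6, 0.10), (1.0, 0.0)]
print(bezier(ctrl, 0.0))  # -> [0. 0.]  (curve interpolates the endpoints)
print(bezier(ctrl, 1.0))  # -> [1. 0.]
```

Because the curve always passes through the first and last control points and stays inside the control polygon's convex hull, moving a few control points gives smooth, predictable shape changes, which is what makes the parametrization convenient for automatic design loops.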
Abstract:
Emission estimation, both during homologation and in standard driving, is one of the new challenges that the automotive industry has to face. New European and American regulations will allow lower and lower quantities of carbon monoxide emissions and will require all vehicles to be able to monitor their own pollutant production. Since numerical models are too computationally expensive and too approximate, new solutions based on machine learning are replacing standard techniques. In this project we considered a real V12 internal combustion engine and propose a novel approach that pushes random forests to generate meaningful predictions even in extreme cases (extrapolation, very high frequency peaks, noisy instrumentation, etc.). The present work also proposes a data preprocessing pipeline for strongly unbalanced datasets and a reinterpretation of the regression problem as a classification problem in a logarithmically quantized domain. Results have been evaluated for two different models representing a pure interpolation scenario (more standard) and an extrapolation scenario, to test the out-of-bounds robustness of the model. The employed metrics take into account different aspects that can affect the homologation procedure, so the final analysis focuses on combining all the specific performances to obtain overall conclusions.
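The reinterpretation of regression as classification in a logarithmically quantized domain can be sketched as follows: the skewed positive target is binned on log-spaced edges, a classifier predicts the bin index, and predictions are mapped back through the bin centers. The bin count, floor value, and sample targets are illustrative assumptions, not the thesis' settings.

```python
import numpy as np

def log_quantize(y, n_bins=16, y_min=1e-3):
    """Recast a positive, strongly skewed regression target (e.g. an
    emission rate) as a classification problem: log-spaced bin edges,
    integer class labels, and geometric bin centers for dequantization.
    Bin count and floor value are illustrative choices."""
    y = np.maximum(np.asarray(y, dtype=float), y_min)  # clamp the floor
    edges = np.logspace(np.log10(y_min), np.log10(y.max()), n_bins + 1)
    labels = np.clip(np.digitize(y, edges) - 1, 0, n_bins - 1)
    centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
    return labels, centers

# Targets spanning four decades collapse onto evenly spaced classes.
y = [0.002, 0.02, 0.2, 2.0, 20.0]
labels, centers = log_quantize(y, n_bins=4)
print(labels)           # -> [0 1 2 3 3]
print(centers[labels])  # coarse dequantized approximation of y
```

A classifier (e.g. a random forest) would then be trained on the integer labels; the log-spaced bins give the rare high-magnitude peaks their own classes instead of letting the bulk of low values dominate a least-squares fit.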
Abstract:
Fiber-reinforced concrete (FRC) is a composite material consisting of discrete, discontinuous, and uniformly distributed fibers in plain concrete, used primarily to enhance the tensile properties of the concrete. FRC performance depends upon the fiber, interface, and matrix properties. The use of fiber-reinforced concrete has increased substantially in the past few years in different fields of the construction industry, such as ground-level applications in sidewalks and building floors, tunnel linings, aircraft parking areas, runways, slope stabilization, etc. Many experiments have been performed in the last decade to observe the short-term and long-term mechanical behavior of fiber-reinforced concrete, and numerous numerical models have been formulated to accurately capture its response. The main purpose of this dissertation is to numerically calibrate, at the mesoscale, the concrete and fiber parameters for the short-term response in the three-point bending test and the cube compression test within the MARS framework, which is based on the lattice discrete particle model (LDPM), and later to validate the same parameters on round panels. LDPM is among the most extensively validated mesoscale theories for concrete. Different seeds, representing different orientations of the concrete and fiber particles, are simulated to produce the mean numerical response. The results of the numerical simulations show that the lattice discrete particle model for fiber-reinforced concrete can capture the results of experimental tests on the behavior of fiber-reinforced concrete to a great extent.