10 results for sap flow dynamics

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 90.00%

Abstract:

The objective of this thesis was to improve the commercial CFD software Ansys Fluent to obtain a tool able to perform accurate simulations of flow boiling in the slug flow regime. A reliable numerical framework allows a better understanding of the bubble and flow dynamics induced by evaporation and makes it possible to predict wall heat transfer trends. In order to save computational time, the flow is modeled with an axisymmetric formulation. Vapor and liquid phases are treated as incompressible and in laminar flow. By means of a single-fluid approach, the flow equations are written as for a single-phase flow, but discontinuities at the interface and interfacial effects need to be accounted for and discretized properly. Ansys Fluent provides a Volume Of Fluid technique to advect the interface and to map the discontinuous fluid properties throughout the flow domain. The interfacial effects are dominant in boiling slug flow, and the accuracy of their estimation is fundamental for the reliability of the solver. Self-implemented functions, developed ad hoc, are introduced within the numerical code to compute the surface tension force and the rates of mass and energy exchange at the interface related to evaporation. Several validation benchmarks demonstrate the improved performance of the modified software. Various adiabatic configurations are simulated in order to test the capability of the numerical framework in modeling actual flows, and the comparison with experimental results is very positive. The simulation of a single evaporating bubble underlines that the local transient heat convection in the liquid after the bubble transit dominates the global heat transfer rate. The simulation of multiple evaporating bubbles flowing in sequence shows that their mutual influence can strongly enhance the heat transfer coefficient, up to twice the single-phase flow value.
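As an illustration of the interfacial treatment described above, a standard way to discretize surface tension in Volume Of Fluid solvers is the continuum surface force (CSF) model, F = σκ∇α, with the curvature κ computed from the volume fraction field α. The sketch below is not the thesis's actual user-defined function: the grid, the smoothed tanh interface, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative CSF surface-tension sketch (NOT the thesis's actual UDF):
# F = sigma * kappa * grad(alpha), kappa = -div(grad(alpha)/|grad(alpha)|).
# Grid size, bubble radius and sigma are made-up values; the interface is a
# smoothed tanh profile so that derivatives are well defined.
N, L, R, sigma = 256, 1.0, 0.25, 0.07
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]                                # grid spacing
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.sqrt(X**2 + Y**2)
eps = 4 * h                                    # interface smearing width
alpha = 0.5 * (1.0 - np.tanh((r - R) / eps))   # smoothed volume fraction

gx, gy = np.gradient(alpha, h)
mag = np.sqrt(gx**2 + gy**2) + 1e-12
nx, ny = gx / mag, gy / mag                    # interface unit normal
kappa = -(np.gradient(nx, h, axis=0) + np.gradient(ny, h, axis=1))

Fx, Fy = sigma * kappa * gx, sigma * kappa * gy  # CSF body force components

# curvature sampled on the interface (alpha ~ 0.5) should approach 1/R
band = np.abs(alpha - 0.5) < 0.1
kappa_interface = kappa[band].mean()
```

For the circular interface used here, the curvature recovered on the α ≈ 0.5 isoline approaches the analytical value 1/R, which is the basic accuracy check such surface tension implementations must pass.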

Relevance: 80.00%

Abstract:

Seyfert galaxies are the closest active galactic nuclei. As such, we can use them to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis. In particular I used three samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was optimized to test the average hard X-ray (E above 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was optimized to compare the properties of low-luminosity sources to those of higher luminosity and was thus also used to test emission mechanism models; finally, the XMM–Newton sample was extracted from the X-CfA sample so as to ensure a truly unbiased and well defined sample of objects with which to define the average properties of Seyfert galaxies. Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (~2-100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies, in particular the photon index (~1.8), the high-energy cut-off (~290 keV), and the relative amount of cold reflection (~1.0). Moreover, the unified scheme for active galactic nuclei was positively tested. The distributions of the isotropic indicators used here (photon index, relative amount of reflection, high-energy cut-off and narrow FeK energy centroid) are similar in type I and type II objects, while the absorbing column and the iron line equivalent width differ significantly between the two classes of sources, with type II objects displaying larger absorbing columns. Taking advantage of the XMM–Newton and X-CfA samples, I also deduced from measurements that 30 to 50% of type II Seyfert galaxies are Compton thick. Confirming previous results, the narrow FeK line in Seyfert 2 galaxies is consistent with being produced in the same matter responsible for the observed obscuration.
These results support the basic picture of the unified model. Moreover, the presence of an X-ray Baldwin effect in type I sources has been measured using for the first time the 20-100 keV luminosity (EW proportional to L(20-100)^(−0.22±0.05)). This finding suggests that the torus covering factor may be a function of source luminosity, thereby suggesting a refinement of the baseline version of the unified model itself. Using the BeppoSAX sample, a possible correlation between the photon index and the amount of cold reflection has also been recorded in both type I and II sources. At first glance this confirms thermal Comptonization as the most likely origin of the high energy emission of active galactic nuclei. This relation, in fact, naturally emerges by supposing that the accretion disk penetrates the central corona at different depths depending on the accretion rate (Merloni et al. 2006): the higher accreting systems host disks down to the last stable orbit, while the lower accreting systems host truncated disks. On the contrary, the study of the well defined X-CfA sample of Seyfert galaxies has proved that the intrinsic X-ray luminosity of nearby Seyfert galaxies can span values between 10^(38-43) erg s^−1, i.e. covering a huge range of accretion rates. The less efficient systems have been supposed to host ADAF systems without an accretion disk. However, the study of the X-CfA sample has also proved the existence of correlations between optical emission lines and X-ray luminosity over the entire range of L_(X) covered by the sample. These relations are similar to the ones obtained if only high-L objects are considered. Thus the emission mechanism must be similar in luminous and weak systems. A possible scenario to reconcile these somewhat contradictory indications is to assume that the ADAF and the two-phase mechanism co-exist, with different relative importance moving from low- to high-accretion systems (as suggested by the Gamma vs. R relation).
The present data require that no abrupt transition between the two regimes is present. As mentioned above, the possible presence of an accretion disk has been tested using samples of nearby Seyfert galaxies. Here, to investigate in depth the flow patterns close to super-massive black holes, three case-study objects for which sufficient count statistics are available have been analysed using deep X-ray observations taken with XMM–Newton. The obtained results have shown that the accretion flow can differ significantly between objects when it is analyzed in the appropriate detail. For instance, the accretion disk is well established down to the last stable orbit in a Kerr system for IRAS 13197-1627, where strong light bending effects have been measured. The accretion disk seems to form spiraling in the inner ~10-30 gravitational radii in NGC 3783, where time dependent and recursive modulations have been measured both in the continuum emission and in the broad emission line component. The accretion disk seems to be only weakly detectable in Mrk 509, with its weak broad emission line component. Finally, blueshifted resonant absorption lines have been detected in all three objects. This seems to demonstrate that, around super-massive black holes, there is matter which is not confined to the accretion disk and moves along the line of sight with velocities as large as v~0.01-0.4c (where c is the speed of light). Whether this matter forms winds or blobs is still a matter of debate, together with the assessment of the real statistical significance of the measured absorption lines. Nonetheless, if confirmed, these phenomena are of outstanding interest because they offer new potential probes of the dynamics of the innermost regions of accretion flows, to tackle the formation of ejecta/jets and to place constraints on the rate of kinetic energy injected by AGNs into the ISM and IGM.
Future high energy missions (such as the planned Simbol-X and IXO) will likely allow an exciting step forward in our understanding of the flow dynamics around black holes and the formation of the highest velocity outflows.
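The X-ray Baldwin effect quoted above, EW ∝ L(20-100)^(−0.22±0.05), is a power law, which in practice is measured as a straight-line fit in log-log space. The sketch below only illustrates that fitting procedure on synthetic numbers; the luminosity range, normalization and scatter are invented for the demo and are not the thesis's measurements.

```python
import numpy as np

# Hedged illustration of recovering a Baldwin-effect slope EW ~ L^slope
# by a least-squares straight line in the log-log plane.
rng = np.random.default_rng(0)
logL = rng.uniform(41.0, 45.0, 40)             # fake 20-100 keV luminosities
true_slope = -0.22                             # slope reported in the text
logEW = 2.0 + true_slope * (logL - 43.0) + rng.normal(0.0, 0.05, logL.size)

# fit log EW = slope * (log L - 43) + intercept
slope, intercept = np.polyfit(logL - 43.0, logEW, 1)
```

With 40 points over four decades of luminosity and a small scatter, the fitted slope lands close to the input value, which is essentially how an anti-correlation of equivalent width with luminosity is quantified.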

Relevance: 80.00%

Abstract:

The application of Computational Fluid Dynamics based on the Reynolds-Averaged Navier-Stokes equations to the simulation of bluff body aerodynamics has been thoroughly investigated in the past. Although a satisfactory accuracy can be obtained for some urban physics problems, the predictive capability of such models is limited to the mean flow properties, while the ability to accurately predict turbulent fluctuations is recognized to be of fundamental importance when dealing with wind loading and pollution dispersion problems. The need to correctly take into account the flow dynamics when such problems are faced has led researchers to move towards scale-resolving turbulence models such as Large Eddy Simulation (LES). The development and assessment of LES as a tool for the analysis of these problems is nowadays an active research field and represents a demanding engineering challenge. This research work has two objectives. The first is focused on wind load assessment and aims to study the capabilities of LES in reproducing wind load effects in terms of internal forces on structural members. This differs from the majority of the existing research, where the performance of LES is evaluated only in terms of surface pressures, and is done with a view to adopting LES as a complementary design tool alongside wind tunnel tests. The second objective is the study of LES capabilities in calculating pollutant dispersion in the built environment. The validation of LES in this field is considered of the utmost importance in order to conceive healthier and more sustainable cities. In order to validate the numerical setup adopted, a systematic comparison between numerical and experimental data is performed. The obtained results are intended to be used in the drafting of best practice guidelines for the application of LES in the urban physics field, with particular attention to wind load assessment and pollution dispersion problems.

Relevance: 80.00%

Abstract:

Atrial fibrillation is associated with a five-fold increase in the risk of cerebrovascular events, being responsible for 15-18% of all strokes. The morphological and functional remodelling of the left atrium caused by atrial fibrillation favours blood stasis and, consequently, stroke risk. In this context, several clinical studies suggest that stroke risk stratification could be improved by using haemodynamic information on the left atrium (LA) and the left atrial appendage (LAA). The goal of this study was to develop a personalized computational fluid-dynamics (CFD) model of the left atrium which could clarify the haemodynamic implications of atrial fibrillation on a patient-specific basis. The developed CFD model was first applied to better understand the role of the LAA in stroke risk. In fact, the interplay of LAA geometric parameters such as LAA length, tortuosity, surface area and volume with the fluid-dynamics parameters, and the effects of LAA closure, had not previously been investigated. Results demonstrated the capability of the CFD model to reproduce the real physiological behaviour of the blood flow dynamics inside the LA and the LAA. Finally, we determined that the fluid-dynamics parameters developed in this research project could be used as new quantitative indexes to describe the different types of AF and open new scenarios for patient-specific stroke risk stratification.

Relevance: 30.00%

Abstract:

Domestic gas burners are investigated experimentally and numerically in order to further understand the fluid dynamics processes that drive the performance of the cooking appliance. In particular, a numerical simulation tool has been developed in order to predict the onset of two flame instabilities which may deteriorate the performance of the burner: flame back and flame lift. The numerical model was first validated by comparing the simulated flow field with a data set of experimental measurements. A prediction criterion for the flame back instability has been formulated based on isothermal simulations, without involving combustion modelling. This analysis has been verified by a Design Of Experiments investigation performed on different burner prototype geometries. On the contrary, the formulation of a prediction criterion for the flame lift instability has required the use of a combustion model in the numerical code. In this analysis, the structure and aerodynamics of the flame generated by a cooking appliance have thus been characterized by experimental and numerical investigations in which, by varying the flow inlet conditions, the flame behaviour was studied from a stable reference case toward complete blow-out.

Relevance: 30.00%

Abstract:

The present work concerns the study of debris flows and, in particular, the related hazard in the Alpine environment. During the last years several methodologies have been developed to evaluate the hazard associated with such a complex phenomenon, whose velocity, impact force and poor temporal predictability are responsible for the related high hazard level. This research focuses its attention on the depositional phase of debris flows through the application of a numerical model (DFlowz), and on hazard evaluation based on the morphometric, morphological and geological characterization of watersheds. The main aims are to test the validity of DFlowz simulations and assess sources of error in order to understand how the empirical uncertainties influence the predictions; on the other hand, the research concerns the possibility of performing hazard analysis starting from the identification of debris-flow-susceptible catchments and the definition of their activity level. 25 well documented debris flow events have been back-analyzed with the model DFlowz (Berti and Simoni, 2007): derived from the implementation of empirical relations between event volume and planimetric and cross-section inundated areas, the code allows the delineation of areas affected by an event by taking into account information about volume, preferential flow path and the digital elevation model (DEM) of the fan area. The analysis uses an objective methodology for evaluating the accuracy of the prediction and involves the calibration of the model based on factors describing the uncertainty associated with the semi-empirical relationships. The general assumptions on which the model is based have been verified, although the predictive capabilities are influenced by the uncertainties of the empirical scaling relationships, which necessarily have to be taken into account and depend mostly on errors in the estimation of the deposited volume.
In addition, in order to test the prediction capabilities of physics-based models, some events have been simulated with RAMMS (RApid Mass MovementS). The model, which has been developed by the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL) in Birmensdorf and the Swiss Federal Institute for Snow and Avalanche Research (SLF), takes a one-phase approach based on Voellmy rheology (Voellmy, 1955; Salm et al., 1990). The input file combines the total volume of the debris flow, located in a release area, with a mean depth. The model predicts the affected area, the maximum depth and the flow velocity in each cell of the input DTM. Regarding the hazard analysis based on watershed characterization, the database collected by the Alto Adige Province represents an opportunity to examine debris-flow sediment dynamics at the regional scale and to analyze lithologic controls. With the aim of advancing current understanding of debris flows, this study focuses on 82 events in order to characterize the topographic conditions associated with their initiation, transportation and deposition, their seasonal patterns of occurrence, and the role played by bedrock geology in sediment transfer.
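The semi-empirical relations at the heart of codes like DFlowz link the event volume V to the inundated cross-section and planimetric areas through power-law scalings of the form A = k·V^(2/3). The sketch below illustrates that scaling with placeholder coefficients, not the values actually calibrated by Berti and Simoni (2007).

```python
# Illustrative sketch of the semi-empirical mobility relations underlying
# empirical inundation codes: cross-section area A and planimetric area B
# scale with the event volume V as V^(2/3). The coefficients k_cross and
# k_plan are placeholders, not calibrated values.
def inundated_areas(volume_m3, k_cross=0.08, k_plan=17.0):
    """Return (cross-section area, planimetric area) in m^2 for V in m^3."""
    a_cross = k_cross * volume_m3 ** (2.0 / 3.0)
    b_plan = k_plan * volume_m3 ** (2.0 / 3.0)
    return a_cross, b_plan
```

The 2/3 exponent encodes the geometric self-similarity of the deposit: an eight-fold larger volume inundates areas four times larger, which is why errors in the estimated deposited volume propagate directly into the predicted inundation limits.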

Relevance: 30.00%

Abstract:

An extensive sample (2%) of private vehicles in Italy is equipped with a GPS device that periodically measures their position and dynamical state for insurance purposes. Access to this type of data allows the development of theoretical and practical applications of great interest: the real-time reconstruction of the traffic state in a certain region, the development of accurate models of vehicle dynamics, and the study of the cognitive dynamics of drivers. In order for these applications to be possible, we first need the ability to reconstruct the paths taken by vehicles on the road network from the raw GPS data. In fact, these data are affected by positioning errors and are often widely spaced (~2 km apart), so the task of path identification is not straightforward. This thesis describes the approach we followed to reliably identify vehicle paths from this kind of low-sampling data. The problem of matching data with roads is solved with a Bayesian maximum-likelihood approach, while the identification of the path taken between two consecutive GPS measurements is performed with a specifically developed optimal routing algorithm, based on the A* algorithm. The procedure was applied to an off-line urban data sample and proved to be robust and accurate. Future developments will extend the procedure to real-time execution and nation-wide coverage.
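The routing step described above can be sketched with a toy A* search. The graph, coordinates and edge weights below are invented for illustration; the thesis's implementation works on the actual road network and couples this search with the Bayesian matching of GPS fixes to candidate roads.

```python
import heapq
import math

# Toy road graph: node coordinates and directed weighted edges (made up).
nodes = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1)}
edges = {"A": [("B", 1.0)], "B": [("C", 1.0), ("D", 1.5)],
         "C": [("D", 1.0)], "D": []}

def heuristic(u, v):
    # straight-line distance: admissible, so A* returns the optimal path
    (x1, y1), (x2, y2) = nodes[u], nodes[v]
    return math.hypot(x2 - x1, y2 - y1)

def a_star(start, goal):
    """Return (optimal path, cost) linking two matched GPS fixes."""
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best = {}                      # cheapest known cost to each node
    while frontier:
        _, g, u, path = heapq.heappop(frontier)
        if u == goal:
            return path, g
        if u in best and best[u] <= g:
            continue
        best[u] = g
        for v, w in edges[u]:
            heapq.heappush(frontier,
                           (g + w + heuristic(v, goal), g + w, v, path + [v]))
    return None, math.inf
```

On this toy graph, `a_star("A", "D")` returns the path A-B-D with cost 2.5, since the detour through C would cost 3.0.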

Relevance: 30.00%

Abstract:

Thanks to the increasing slenderness and lightness allowed by new construction techniques and materials, the effects of wind on structures have become in the last decades a research field of great importance in Civil Engineering. With the advances in computer power, the numerical simulation of wind tunnel tests has become a valid complementary activity and an attractive alternative for the future. Due to its flexibility, during the last years the computational approach has gained importance relative to the traditional experimental investigation. However, still today, the computational approach to fluid-structure interaction problems is not as widely adopted as might be expected. The main reason for this lies in the difficulties encountered in the numerical simulation of the turbulent, unsteady flow conditions generally encountered around bluff bodies. This thesis aims at providing a guide to the numerical simulation of the aerodynamic and aeroelastic behaviour of bridge decks, describing the simulation strategies in detail and setting out guidelines useful for the interpretation of the results.

Relevance: 30.00%

Abstract:

Using Computational Wind Engineering (CWE) to solve wind-related problems is still a challenging task today, mainly due to the high computational cost required to obtain trustworthy simulations. In particular, Large Eddy Simulation (LES) has been widely used for evaluating wind loads on buildings. The present thesis assesses the capability of LES as a design tool for wind loading predictions through three cases. The first case uses LES to simulate the wind field around a ground-mounted rectangular prism in Atmospheric Boundary Layer (ABL) flow. The numerical results are validated against experimental results for seven wind attack angles, giving a global understanding of the model performance. The case with the worst model behaviour is investigated, including the spatial distribution of the pressure coefficients and their discrepancies with respect to experimental results. The effects of some numerical parameters are investigated for this case to understand their effectiveness in modifying the obtained numerical results. The second case uses LES to investigate the wind effects on a real high-rise building, aiming at validating the performance of LES as a design tool in practical applications. The numerical results are validated against the experimental results in terms of the distribution of the pressure statistics and the global forces. The mesh sensitivity and the computational cost are discussed. The third case uses LES to study the wind effects on the new large-span roof over the Bologna stadium. The dynamic responses are analyzed and design envelopes for the structure are obtained. Although this simulation precedes the traditional wind tunnel tests, i.e. the validation of the numerical results is not performed, the preliminary evaluations can effectively inform later investigations and provide the final design process with deeper confidence regarding the absence of potentially unexpected behaviours.
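The pressure statistics mentioned above are conventionally reported as pressure coefficients, Cp = (p − p_ref)/(½ρU²_ref). The sketch below reduces a synthetic gauge-pressure signal to its mean, rms and a simple peak-suction proxy; the air density, reference speed and the signal itself are illustrative assumptions, not data from the thesis.

```python
import numpy as np

# Illustrative reduction of a surface-pressure time series to pressure
# coefficients; all flow parameters and the signal are made-up values.
rho, U_ref = 1.225, 20.0             # air density (kg/m^3), reference speed (m/s)
q_ref = 0.5 * rho * U_ref**2         # reference dynamic pressure (Pa)

rng = np.random.default_rng(1)
p = -150.0 + 40.0 * rng.standard_normal(5000)   # synthetic gauge pressure (Pa)

cp = p / q_ref                       # instantaneous pressure coefficient
cp_mean = cp.mean()                  # mean loading
cp_rms = cp.std()                    # fluctuating loading
cp_peak = np.percentile(cp, 0.5)     # simple low-quantile 'peak suction' proxy
```

Mean coefficients are what RANS can already deliver; the rms and peak values are precisely the fluctuating quantities for which scale-resolving simulations such as LES are needed.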

Relevance: 30.00%

Abstract:

In this thesis, the viability of the Dynamic Mode Decomposition (DMD) as a technique to analyze and model complex dynamic real-world systems is presented. This method derives, directly from data, computationally efficient reduced-order models (ROMs) which can replace high-fidelity physics-based models that are too onerous or unavailable. Optimizations and extensions to the standard implementation of the methodology are proposed, investigating diverse case studies related to the decoding of complex flow phenomena. The flexibility of this data-driven technique allows its application to high-fidelity fluid dynamics simulations as well as to time series of observations of real systems. The resulting ROMs are tested against two tasks: (i) reduction of the storage requirements of high-fidelity simulations or observations; (ii) interpolation and extrapolation of missing data. The capabilities of DMD can also be exploited to alleviate the cost of onerous studies that require many simulations, such as uncertainty quantification analysis, especially when dealing with complex high-dimensional systems. In this context, a novel approach to address parameter variability when modeling systems with a space- and time-variant response is proposed. Specifically, DMD is merged with another model-reduction technique, namely the Polynomial Chaos Expansion, for uncertainty quantification purposes. Useful guidelines for DMD deployment result from the study, together with the demonstration of its potential to ease diagnosis and scenario analysis when complex flow processes are involved.
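The standard (exact) DMD algorithm that such work builds on fits a best-fit linear operator between successive snapshots via a truncated SVD. The sketch below applies it to synthetic snapshots of a planar rotation lifted into ten dimensions; the sizes, the rotation angle and the random lifting are illustrative assumptions, not the thesis's data or extended implementation.

```python
import numpy as np

def dmd(X, r):
    """Exact DMD of snapshot matrix X (states x times), truncated to rank r."""
    X1, X2 = X[:, :-1], X[:, 1:]                  # time-shifted snapshot pairs
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh.conj().T[:, :r]
    Atilde = U.conj().T @ X2 @ V / s              # projected low-rank operator
    eigvals, W = np.linalg.eig(Atilde)            # DMD eigenvalues
    modes = (X2 @ V / s) @ W                      # exact DMD modes
    return eigvals, modes

# synthetic data: a planar rotation by theta, lifted into 10 dimensions
theta = 0.3
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
rng = np.random.default_rng(2)
lift = rng.standard_normal((10, 2))
z = np.array([1.0, 0.0])
snapshots = []
for _ in range(50):
    snapshots.append(lift @ z)
    z = rot @ z
X = np.column_stack(snapshots)

eigvals, modes = dmd(X, r=2)
```

Because the synthetic dynamics are a pure rotation, the two recovered eigenvalues lie on the unit circle at angles ±θ: the data are identified as neutrally stable and oscillatory, and the rank-2 ROM reproduces the full ten-dimensional snapshot sequence.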