982 results for Simulations, Quantum Models, Resonant Tunneling Diode
Abstract:
The application of Computational Fluid Dynamics based on the Reynolds-Averaged Navier-Stokes equations to the simulation of bluff body aerodynamics has been thoroughly investigated in the past. Although satisfactory accuracy can be obtained for some urban physics problems, the predictive capability of such models is limited to the mean flow properties, while the ability to accurately predict turbulent fluctuations is recognized to be of fundamental importance when dealing with wind loading and pollution dispersion problems. The need to correctly take into account the flow dynamics in such problems has led researchers to move towards scale-resolving turbulence models such as Large Eddy Simulation (LES). The development and assessment of LES as a tool for the analysis of these problems is nowadays an active research field and represents a demanding engineering challenge. This research work has two objectives. The first is focused on wind load assessment and aims to study the capabilities of LES in reproducing wind load effects in terms of internal forces on structural members. This differs from the majority of the existing research, where the performance of LES is evaluated only in terms of surface pressures, and is done with a view to adopting LES as a complementary design tool alongside wind tunnel tests. The second objective is the study of LES capabilities in calculating pollutant dispersion in the built environment. The validation of LES in this field is considered of the utmost importance in order to conceive healthier and more sustainable cities. In order to validate the adopted numerical setup, a systematic comparison between numerical and experimental data is performed. The obtained results are intended to be used in the drafting of best practice guidelines for the application of LES in the urban physics field, with particular attention to wind load assessment and pollution dispersion problems.
Abstract:
Since their emergence, locally resonant metamaterials have found several applications for the control of surface waves, from micrometer-sized electronic devices to meter-sized seismic barriers. The interaction between Rayleigh-type surface waves and resonant metamaterials has been investigated through the realization of locally resonant metasurfaces: thin elastic interfaces constituted by a cluster of resonant inclusions or oscillators embedded near the surface of an elastic waveguide. When such resonant metasurfaces are embedded in an elastic homogeneous half-space, they can filter out the propagation of Rayleigh waves, creating low-frequency bandgaps at selected frequencies. In the civil engineering context, heavy resonating masses are needed to extend the bandgap frequency width of locally resonant devices, a requirement that limits their practical implementation. In this dissertation, the wave attenuation capabilities of locally resonant metasurfaces are enriched by (i) proposing tunable metasurfaces that open large frequency bandgaps with small effective inertia, and (ii) developing an analytical framework for studying the propagation of Rayleigh waves in deep resonant waveguides. In more detail, inertially amplified resonators are exploited to design advanced metasurfaces with a prescribed static and a tunable dynamic response. The modular design of the tunable metasurfaces makes it possible to shift and enlarge low-frequency spectral bandgaps without modifying the total inertia of the metasurface. In addition, an original dispersion law is derived to study the dispersive properties of Rayleigh waves propagating in thick resonant layers made of sub-wavelength resonators. Accordingly, a deep resonant wave barrier of mechanical resonators embedded inside the soil is designed to impede the propagation of seismic surface waves. Numerical models are developed to confirm the analytical dispersion predictions for the tunable metasurface and the resonant layer. Finally, a medium-size scale resonant wave barrier is designed according to the soil stratigraphy of a real geophysical scenario to attenuate ground-borne vibration.
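For context, the bandgap mechanism of such metasurfaces is commonly rationalized through the frequency-dependent effective mass of a surface-attached spring-mass resonator; the textbook relation below is a generic illustration, not the specific dispersion law derived in the dissertation.

```latex
% Effective dynamic mass of a spring-mass resonator of mass m and
% natural frequency \omega_r = \sqrt{k/m} attached to the surface:
m_{\mathrm{eff}}(\omega) = \frac{m}{1 - \omega^2/\omega_r^2}
% m_eff diverges at \omega_r and turns negative just above it; in that
% band the metasurface cannot sustain propagating Rayleigh waves, which
% opens the low-frequency bandgap widened here by inertial amplification.
```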
Abstract:
This Ph.D. project focuses on the modelling of soil-water dynamics inside an instrumented embankment section along the Secchia River (Cavezzo, MO) in the period from 2017 to 2018, and on the quantification of the performance of the direct and inverse simulations. The commercial code Hydrus2D by PC-Progress has been chosen to run the direct simulations. Different soil-hydraulic models have been adopted and compared. The parameters of the different hydraulic models are calibrated using a local optimization method based on the Levenberg-Marquardt algorithm implemented in the Hydrus package. The calibration program is carried out using different types of datasets of observation points, different weighting distributions, different combinations of optimized parameters and different initial sets of parameters. The final goal is an in-depth study of the potential and limits of inverse analysis when applied to a complex geotechnical problem such as the present case study. The second part of the research focuses on the effects of plant roots and soil-vegetation-atmosphere interaction on the spatial and temporal distribution of pore water pressure in soil. The investigated soil belongs to the West Charlestown Bypass embankment, Newcastle, Australia, which has shown shallow instabilities in past years; the use of long-stem planting is intended to stabilize the slope. The chosen plant species is Melaleuca styphelioides, native to eastern Australia. The research activity included the design and realization of a specific large-scale apparatus for laboratory experiments. Local suction measurements at set intervals of depth and radial distance from the root bulb are recorded within the vegetated soil mass under controlled boundary conditions. The experiments are then reproduced numerically using the commercial code Hydrus2D. Laboratory data are used to calibrate the root water uptake (RWU) parameters and the parameters of the hydraulic model.
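To illustrate the kind of calibration involved (Hydrus performs this internally; the snippet below is only a sketch with hypothetical observations and parameter values), the van Genuchten retention parameters can be fitted with a Levenberg-Marquardt least-squares routine:

```python
import numpy as np
from scipy.optimize import least_squares

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content as a function of suction head h
    (van Genuchten, 1980)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m

# Hypothetical observations: suction heads [cm] and measured water contents
h_obs = np.array([10.0, 50.0, 100.0, 500.0, 1000.0, 5000.0])
theta_obs = np.array([0.42, 0.38, 0.33, 0.22, 0.17, 0.11])

def residuals(p):
    return van_genuchten(h_obs, *p) - theta_obs

# Levenberg-Marquardt fit from an initial parameter set, as in the
# Hydrus inverse-analysis workflow
fit = least_squares(residuals, x0=[0.05, 0.45, 0.01, 1.5], method="lm")
print(dict(zip(["theta_r", "theta_s", "alpha", "n"], fit.x)))
```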
Abstract:
For 40 years, a group of researchers at the University of Bologna coordinated by Professor Claudio Zannoni has been studying liquid crystals by employing computational techniques. They have developed effective models of these interesting, and still far from completely understood, systems. They were able to reproduce with simulations important features of some liquid crystal molecules, such as transition temperatures. They then focused their attention on the interactions that these molecules have with different kinds of surfaces, and on how these interactions affect the alignment of liquid crystals. The group studied the behaviour of liquid crystals in contact with different kinds of surfaces, from silica, both amorphous and crystalline, to organic self-assembled monolayers (SAMs) and even some common polymers, such as polymethylmethacrylate (PMMA) and polystyrene (PS). However, a library of typical surfaces is still far from complete, and much work remains to investigate the cases that have not been analyzed yet. One gap that must be filled is polydimethylsiloxane (PDMS), a polymer in which industrial interest has grown enormously in recent years thanks to its peculiar features, which allow it to be employed in many fields of application. It has been observed experimentally that PDMS causes 4-cyano-4'-pentylbiphenyl (well known as 5CB), one of the most common liquid crystal molecules, to align homeotropically (i.e., perpendicularly) with respect to a surface made of this polymer. Even though some hypotheses have been put forward to rationalize the effect, a clear explanation of this phenomenon has not been given yet. This dissertation presents the work I did during my internship in the group of Professor Zannoni. The challenge I had to tackle was to investigate, via Molecular Dynamics (MD) simulations, the reasons for the homeotropic alignment of 5CB on a PDMS surface, as the group had previously done for other surfaces.
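As a minimal sketch of how such alignment is typically quantified in an MD trajectory (illustrative code with synthetic data, not the group's analysis tools), one computes the second-rank order parameter of the molecular long axes with respect to the surface normal:

```python
import numpy as np

def homeotropic_order(axes, normal=np.array([0.0, 0.0, 1.0])):
    """<P2> order parameter of molecular long axes vs. the surface normal.

    axes: (N, 3) unit vectors along the 5CB long axes. Returns
    <(3 cos^2 theta - 1)/2>: +1 for perfect homeotropic (perpendicular)
    alignment, -0.5 for perfectly planar alignment.
    """
    cos_theta = axes @ normal
    return np.mean(1.5 * cos_theta**2 - 0.5)

# Synthetic snapshot: nearly homeotropic axes with a small tilt spread
rng = np.random.default_rng(0)
axes = rng.normal([0.0, 0.0, 1.0], 0.1, size=(1000, 3))
axes /= np.linalg.norm(axes, axis=1, keepdims=True)
print(homeotropic_order(axes))  # close to +1
```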
Abstract:
Additive Manufacturing (AM) is nowadays considered an important alternative to traditional manufacturing processes. The literature reports several advantages of AM technology, such as design flexibility, and its use is increasing in automotive, aerospace and biomedical applications. As a systematic literature review suggests, AM is sometimes coupled with voxelization, mainly for representation and simulation purposes. Voxelization can be defined as a volumetric representation technique based on the discretization of a model with hexahedral elements, as occurs with pixels in a 2D image. Voxels are used to simplify geometric representation, store intricate details of the interior and speed up geometric and algebraic manipulation. Compared to the boundary representation used in common CAD software, the inherent advantages of voxels are magnified in specific applications such as lattice or topologically optimized structures for visualization or simulation purposes. Due to their complex topology, those structures can only be manufactured with AM. After a thorough review of the existing literature, this project aims to exploit the potential of the voxelization algorithm to develop optimized Design for Additive Manufacturing (DfAM) tools. The final aim is to manipulate and support mechanical simulations of lightweight, optimized structures ready to be manufactured with AM, with particular attention to automotive applications. A voxel-based methodology is developed for efficient structural simulation of lattice structures. Moreover, thanks to an optimized smoothing algorithm specific to voxel-based geometries, a topologically optimized, voxelized structure can be transformed into a surface-triangulated mesh file ready for the AM process. In addition, a modified panel code is developed for simple CFD simulations using voxels as the discretization unit, in order to understand the fluid-dynamic behaviour of industrial components for preliminary aerodynamic performance evaluation. The developed design tools and methodologies fit the automotive industry's need to accelerate and increase the efficiency of the design workflow from the conceptual idea to the final product.
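As a hedged, minimal illustration of the voxelization idea itself (an occupancy grid over a simple implicit solid; the thesis works on general CAD/mesh models):

```python
import numpy as np

def voxelize_sphere(radius=1.0, resolution=32):
    """Occupancy-grid voxelization of an implicit solid: each hexahedral
    cell is marked filled if its centre lies inside the solid, the 3D
    analogue of pixels discretizing a 2D image."""
    c = np.linspace(-radius, radius, resolution)  # cell-centre coordinates
    x, y, z = np.meshgrid(c, c, c, indexing="ij")
    return x**2 + y**2 + z**2 <= radius**2  # boolean (res, res, res) grid

vox = voxelize_sphere()
print(vox.sum(), "filled voxels out of", vox.size)
```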
Abstract:
The simulation of ultrafast photoinduced processes is a fundamental step towards understanding the underlying molecular mechanisms and interpreting/predicting experimental data. Performing a computer simulation of a complex photoinduced process is only possible by introducing some approximations but, in order to obtain reliable results, the need to reduce complexity must be balanced against the accuracy of the model, which should include all the relevant degrees of freedom and a quantitatively correct description of the electronic states involved in the process. This work presents new computational protocols and strategies for the parameterisation of accurate models for photochemical/photophysical processes based on state-of-the-art multiconfigurational wavefunction-based methods. The required ingredients for a dynamics simulation include potential energy surfaces (PESs) as well as electronic state couplings, which must be mapped across the wide range of geometries visited during the wavepacket/trajectory propagation. The developed procedures make it possible to obtain solid and extensive databases while reducing the computational cost as much as possible, thanks to, e.g., specific tuning of the level of theory for different PES regions and/or direct calculation of only the needed components of vectorial quantities (like gradients or nonadiabatic couplings). The presented approaches were applied to three case studies (azobenzene, pyrene, visual rhodopsin), all requiring an accurate parameterisation but for different reasons. The resulting models and simulations made it possible to elucidate the mechanism and time scale of the internal conversion, reproducing or even predicting new transient experiments. The general applicability of the developed protocols to systems with different peculiarities, and the possibility to parameterise different types of dynamics on an equal footing (classical vs purely quantum), prove that the developed procedures are flexible enough to be tailored to each specific system, and pave the way for exact quantum dynamics with multiple degrees of freedom.
Abstract:
The Invariance Thesis of Slot and van Emde Boas states that a time (respectively, space) cost model is reasonable for a computational model C if there are mutual simulations between Turing machines and C such that the overhead is polynomial in time (respectively, linear in space). The rationale is that, under the Invariance Thesis, complexity classes such as LOGSPACE, P and PSPACE become robust, i.e. machine independent. In this dissertation, we want to find out whether it is possible to define a reasonable space cost model for the lambda-calculus, the paradigmatic model for functional programming languages. We start by considering an unusual evaluation mechanism for the lambda-calculus, based on Girard's Geometry of Interaction, that was conjectured to be the key ingredient for obtaining a space-reasonable cost model. By a fine complexity analysis of this scheme, based on new variants of non-idempotent intersection types, we disprove this conjecture. Then, we change the target of our analysis. We consider a variant of Krivine's abstract machine, a standard evaluation mechanism for the call-by-name lambda-calculus, optimized for space complexity and implemented without any pointers. A fine analysis of the execution of (a refined version of) the encoding of Turing machines into the lambda-calculus allows us to conclude that the space consumed by this machine is indeed a reasonable space cost model. In particular, for the first time we are able to measure sub-linear space complexities as well. Moreover, we transfer this result to the call-by-value case. Finally, we also provide an intersection type system that compositionally characterizes this new reasonable space measure. This is done through a minimal, yet non-trivial, modification of the original de Carvalho type system.
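For orientation, the core of Krivine's machine is small enough to sketch (a plain call-by-name evaluator over de Bruijn terms; the space-optimized, pointer-free variant studied in the dissertation differs in its implementation details):

```python
# Minimal Krivine abstract machine for the call-by-name lambda-calculus.
# Terms use de Bruijn indices: ('var', i) | ('lam', body) | ('app', f, a).
# The state is a closure (term, env) plus a stack of argument closures.

def krivine(term, env=(), stack=()):
    while True:
        tag = term[0]
        if tag == 'var':            # fetch the closure bound to the variable
            term, env = env[term[1]]
        elif tag == 'app':          # push the argument closure, focus on the head
            stack = ((term[2], env),) + stack
            term = term[1]
        else:                       # 'lam': consume one argument from the stack
            if not stack:
                return term, env    # weak head normal form reached
            env = (stack[0],) + env
            term, stack = term[1], stack[1:]

# (\x. x) (\y. y)  evaluates to  \y. y
identity = ('lam', ('var', 0))
print(krivine(('app', identity, identity)))
```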
Abstract:
The investigation of the mechanisms underlying (photo-)chemical processes is fundamental for guiding and improving the design of new organic functional materials. In many cases, dynamics simulations represent the only tool to capture the system properties emerging from complex interactions between many molecules. Despite the outstanding progress in computing power, the only way to carry out such computational studies is to introduce several approximations with respect to a fully quantum mechanical (QM) description. This thesis presents an approach that combines QM calculations with classical Molecular Dynamics (MD) by means of accurate QM-derived force fields. It is based on a careful selection of the most relevant molecular degrees of freedom, whose potential energy surface is calculated at the QM level and reproduced by the analytic functions of the force field, as well as on an accurate tuning of the approximations introduced in the model of the process to be simulated. This is made possible by purpose-built tools that allow the force field (FF) parameters to be obtained and tested through comparison with the QM frequencies and normal modes. These tools were applied to the modelling of three processes: the nπ* photoisomerisation of azobenzene, where the FF description was extended to the excited state and the non-adiabatic events were treated stochastically with Tully's fewest-switches algorithm; the charge separation in donor-acceptor bulk heterojunction organic solar cells, where a tight-binding Hamiltonian was carefully parametrised and solved by means of a purpose-written code; and the effect of the protonation state on the photoisomerisation quantum yield of the aryl-azoimidazolium unit of the axle molecule of a rotaxane molecular shuttle. In each case, the QM-based MD models that were specifically developed gave noteworthy information about the investigated phenomena, proving to be key to a deeper comprehension of several experimental observations.
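For reference, the stochastic treatment of non-adiabatic events follows Tully's fewest-switches prescription; in its standard textbook form (sign conventions for the coupling vector vary between references), the hopping probability from the current state k to state j over a time step Δt is

```latex
% Fewest-switches hopping probability (Tully, 1990), with \rho the
% electronic density matrix propagated along the trajectory R(t) and
% d_{kj} the nonadiabatic coupling vector:
g_{k \to j} = \max\!\left( 0,\;
  \frac{2\,\Delta t}{\rho_{kk}}\,
  \mathrm{Re}\!\left[ \rho_{kj}\, \dot{\mathbf{R}} \cdot \mathbf{d}_{kj} \right]
\right)
% A hop to state j occurs when a uniform random number falls below g.
```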
Abstract:
The growing interest in constellations of small, less expensive satellites is bringing space junk and traffic management to the attention of the space community. At the same time, the continuous quest for more efficient propulsion systems puts the spotlight on electric (low-thrust) propulsion as an appealing solution for collision avoidance. Starting with an overview of the current techniques for conjunction assessment and avoidance, we highlight the problems that may arise when low-thrust propulsion is used. The need for an accurate propagation model emerges from the conducted simulations. Thus, aiming at propagation models with a low computational burden, we study the available models from the literature and propose an analytical alternative to improve propagation accuracy. The model is then tested in the particular case of a tangential maneuver. Results show that the proposed solution significantly improves on state-of-the-art methods and is a good candidate for use in collision avoidance operations, for instance to propagate satellite uncertainty or to optimize the avoidance maneuver when a conjunction occurs within a few (3-4) orbits of the measurement time.
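As a minimal numerical sketch of the scenario being modelled (two-body dynamics plus a small constant tangential acceleration, with illustrative values; not the analytical propagation model proposed in the thesis):

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 398600.4418  # Earth's gravitational parameter [km^3 s^-2]

def dynamics(t, s, a_t=1e-7):
    """Planar two-body motion with a constant tangential (along-velocity)
    low-thrust acceleration a_t [km/s^2]."""
    r, v = s[:2], s[2:]
    acc = -MU * r / np.linalg.norm(r) ** 3 + a_t * v / np.linalg.norm(v)
    return np.hstack([v, acc])

# Circular 700 km LEO initial state [x, y, vx, vy]; period ~5926 s
r0 = 6378.0 + 700.0
s0 = [r0, 0.0, 0.0, np.sqrt(MU / r0)]
sol = solve_ivp(dynamics, (0.0, 4 * 5926.0), s0, rtol=1e-10, atol=1e-10)
print("radius after ~4 orbits:", np.linalg.norm(sol.y[:2, -1]), "km")
```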
Abstract:
Understanding the natural and forced variability of the atmospheric general circulation and its drivers is one of the grand challenges in climate science. It is of paramount importance to understand to what extent the systematic error of climate models affects the processes driving such variability. This is done by performing a set of simulations (ROCK experiments) with an intermediate-complexity atmospheric model (SPEEDY), in which the Rocky Mountains orography is increased or decreased to influence the structure of the North Pacific jet stream. For each of these modified-orography experiments, the climatic response to idealised sea surface temperature anomalies of varying intensity in the El Niño Southern Oscillation (ENSO) region is studied. ROCK experiments are characterized by variations in Pacific jet stream intensity whose range encompasses the spread of the systematic error found in Coupled Model Intercomparison Project phase 6 (CMIP6) models. When forced with ENSO-like idealised anomalies, they exhibit a non-negligible sensitivity in the response pattern over the Pacific North American region, indicating that the model mean state can affect the model response to ENSO. It is found that the classical Rossby wave train response to ENSO is more meridionally oriented when the Pacific jet stream is weaker, and more zonally oriented with a stronger jet. Linear Rossby wave theory suggests that a stronger jet implies a stronger waveguide, which traps Rossby waves at lower latitudes, favouring their zonal propagation. The shape of the dynamical response to ENSO affects the ENSO impacts on surface temperature and precipitation over Central and North America. A comparison of the SPEEDY results with CMIP6 models suggests a wider applicability of the results to more resource-demanding general circulation models (GCMs), opening the way to future work on the relationship between Pacific jet misrepresentation and the response to external forcing in fully-fledged GCMs.
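The waveguide argument invoked here rests on a classical result of linear Rossby wave theory (quoted for context, in its textbook barotropic form):

```latex
% Stationary barotropic Rossby wavenumber on a zonal flow U, with
% \beta_M the meridional gradient of absolute vorticity:
K_s = \left( \frac{\beta_M}{U} \right)^{1/2}
% Rossby rays refract towards latitudes of larger K_s; a strong jet
% sharpens the meridional curvature of U, producing a local maximum of
% K_s that acts as a waveguide, trapping the waves at lower latitudes
% and favouring zonal propagation.
```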
Abstract:
The present manuscript focuses on out-of-equilibrium physics in two-dimensional models. Its purpose is to present some results on out-of-equilibrium dynamics in its non-perturbative aspects. This can be understood in two different ways: the former is related to integrability, which is non-perturbative by nature; the latter is related to the emergence of phenomena in the out-of-equilibrium dynamics of non-integrable models that are not accessible by standard perturbative techniques. In the study of out-of-equilibrium dynamics, two different protocols are used throughout this work: the bipartitioning protocol, within the Generalised Hydrodynamics (GHD) framework, and the quantum quench protocol. With the GHD machinery we study the Staircase Model, highlighting how the hydrodynamic picture sheds new light on the physics of Integrable Quantum Field Theories; with quench protocols we analyse different setups where a non-perturbative description is needed and various dynamical phenomena emerge, such as the manifestation of a dynamical Gibbs effect, confinement, and the emergence of Bloch oscillations preventing thermalisation.
Abstract:
In this thesis I present a new triple connection we found between quantum integrability, N=2 supersymmetric gauge theories and black hole perturbation theory. I use the approach of the ODE/IM correspondence between Ordinary Differential Equations (ODE) and Integrable Models (IM), first to connect basic integrability functions - Baxter's Q, T and Y functions - to the gauge theory periods. This fundamental identification allows several new results for both theories, for example: an exact non-linear integral equation (Thermodynamic Bethe Ansatz, TBA) for the gauge periods; an interpretation of the integrability functional relations as new exact R-symmetry relations for the periods; and new formulas for the local integrals of motion in terms of gauge periods. I develop this in full detail at least for the SU(2) gauge theory with Nf=0,1,2 matter flavours. Still through the ODE/IM correspondence, I connect the mathematically precise definition of quasinormal modes of black holes (which play an important role in gravitational-wave observations) with quantization conditions on the Q, Y functions. In this way I also give a mathematical explanation of the recently found connection between quasinormal modes and N=2 supersymmetric gauge theories. Moreover, a new, simple and effective method to numerically compute the quasinormal modes follows - the TBA - which I compare with other standard methods. The spacetimes for which I show this in full detail are the D3 brane in the simplest Nf=0 case and, in the Nf=1,2 cases, a generalization of extremal Reissner-Nordström (charged) black holes. I then begin treating the Nf=3,4 theories as well, and argue how our integrability-gauge-gravity correspondence can generalize to other types of black holes in either asymptotically flat (Nf=3) or Anti-de Sitter (Nf=4) spacetime. Finally, I begin to show the extension to a four-fold correspondence that also includes Conformal Field Theory (CFT), through the renowned AdS/CFT correspondence.
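For context, TBA equations of the kind referred to above take, in their simplest form, the generic shape below (the thesis derives specific kernels and driving terms for the gauge periods):

```latex
% Generic TBA integral equation for a pseudoenergy \varepsilon(\theta),
% with driving term r\cosh\theta and kernel \varphi fixed by the model:
\varepsilon(\theta) = r \cosh\theta
  - \int_{-\infty}^{+\infty} \frac{d\theta'}{2\pi}\,
    \varphi(\theta - \theta')\,
    \log\!\left( 1 + e^{-\varepsilon(\theta')} \right)
% Solved by iteration, it yields the Q and Y functions whose quantization
% conditions encode, in this correspondence, the quasinormal-mode frequencies.
```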
Diffusive models and chaos indicators for non-linear betatron motion in circular hadron accelerators
Abstract:
Understanding the complex dynamics of beam-halo formation and evolution in circular particle accelerators is crucial for the design of current and future rings, particularly those utilizing superconducting magnets such as the CERN Large Hadron Collider (LHC), its luminosity upgrade HL-LHC, and the proposed Future Circular Hadron Collider (FCC-hh). A recent diffusive framework, which describes the evolution of the beam distribution by means of a Fokker-Planck equation whose diffusion coefficient is derived from the Nekhoroshev theorem, has been proposed to describe the long-term behaviour of beam dynamics and particle losses. In this thesis, we discuss the theoretical foundations of this framework and propose an original measurement protocol based on collimator scans, with a view to measuring the Nekhoroshev-like diffusion coefficient from beam-loss data. The available LHC collimator scan data, unfortunately collected without the proposed measurement protocol, have been successfully analysed using the proposed framework. This approach is also applied to datasets from detailed measurements of the impact of so-called long-range beam-beam compensators on beam losses, also at the LHC. Furthermore, dynamic indicators have been studied as a tool for exploring the phase-space properties of realistic accelerator lattices in single-particle tracking simulations. By first examining the performance of known and new indicators in detecting the chaotic character of initial conditions for a modulated Hénon map, and then applying this knowledge to the study of realistic accelerator lattices, we tried to identify a connection between the presence of chaotic regions in the phase space and Nekhoroshev-like diffusive behaviour, providing new tools to the accelerator physics community.
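Schematically, the framework evolves the beam distribution ρ(I, t) in action I according to a Fokker-Planck equation with a Nekhoroshev-inspired diffusion coefficient (generic form quoted for context; normalizations here are only indicative):

```latex
% Fokker-Planck evolution of the action distribution \rho(I,t):
\frac{\partial \rho}{\partial t} =
  \frac{\partial}{\partial I}\!\left( D(I)\,
  \frac{\partial \rho}{\partial I} \right),
\qquad
D(I) \propto \exp\!\left[ -2 \left( \frac{I_\ast}{I} \right)^{1/(2\kappa)} \right]
% I_* and \kappa are the free parameters that the collimator-scan
% protocol aims to estimate from beam-loss time series.
```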
Abstract:
The accurate representation of the Earth Radiation Budget by General Circulation Models (GCMs) is a fundamental requirement for providing reliable historical and future climate simulations. In this study, we found reasonable agreement on a global scale between the integrated energy fluxes at the top of the atmosphere simulated by 34 state-of-the-art climate models and the observations provided by the Clouds and the Earth's Radiant Energy System (CERES) mission, but large regional biases were detected throughout the globe. Furthermore, we highlighted that good agreement between simulated and observed integrated Outgoing Longwave Radiation (OLR) fluxes may result from the cancellation of systematic errors of opposite sign, localized in different spectral ranges. To avoid this, and to understand the causes of these biases, we compared the observed Earth emission spectra, measured by the Infrared Atmospheric Sounding Interferometer (IASI) in the period 2008-2016, with synthetic radiances computed on the basis of the atmospheric fields provided by the EC-Earth GCM. For this purpose, the fast σ-IASI radiative transfer model was used, after its validation and implementation in EC-Earth. From the comparison between observed and simulated spectral radiances, a positive temperature bias in the stratosphere and a negative temperature bias in the middle troposphere, as well as a dry bias in the water vapor concentration in the upper troposphere, were identified in the EC-Earth climate model. The analysis was performed in clear-sky conditions, but the feasibility of extending it to the presence of clouds, whose impact on radiation represents the greatest source of uncertainty in climate models, has also been demonstrated. Finally, the analysis of simulated and observed OLR trends indicated good agreement and provided detailed information on the spectral fingerprints of the evolution of the main climate variables.
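Radiance-space comparisons of this kind are commonly reported as brightness temperatures; as a hedged illustration (a generic Planck inversion, not σ-IASI itself):

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m s^-1]
KB = 1.380649e-23    # Boltzmann constant [J K^-1]

def brightness_temperature(radiance, wavenumber_cm):
    """Invert the Planck law: spectral radiance
    [W m^-2 sr^-1 (m^-1)^-1] at a wavenumber [cm^-1] -> brightness
    temperature [K]."""
    nu = wavenumber_cm * 100.0  # cm^-1 -> m^-1
    return (H * C * nu / KB) / np.log(1.0 + 2.0 * H * C**2 * nu**3 / radiance)

# Illustrative value in the 900 cm^-1 atmospheric window
print(brightness_temperature(9.7e-4, 900.0))  # ~288 K
```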
Abstract:
In this thesis, a TCAD approach for the investigation of charge transport in amorphous silicon dioxide is presented for the first time. The proposed approach is used to investigate high-voltage, thick TEOS silicon oxide capacitors embedded in the back-end inter-level dielectric layers for galvanic insulation applications. In the first part of this thesis, a detailed review of the main physical and chemical properties of silicon dioxide and of the main physical models for the description of charge transport in insulators is presented. In the second part, the characterization of high-voltage MIM structures under different high-field stress conditions up to breakdown is presented. The main physical mechanisms responsible for the observed results are then discussed in detail. The third part is dedicated to the implementation of a TCAD approach capable of describing charge transport in silicon dioxide layers, in order to gain insight into the microscopic physical mechanisms responsible for the leakage current in MIM structures. In particular, I investigated and modeled the role of charge injection at the contacts and of charge build-up due to trapping and de-trapping mechanisms in the oxide layer, with the purpose of understanding its behavior under DC and AC stress conditions. In addition, oxide breakdown due to impact ionization of carriers has been taken into account in order to have a complete representation of the oxide behavior at very high fields. Numerical simulations have been compared against experiments to quantitatively validate the proposed approach. In the last part of the thesis, the proposed approach has been applied to simulate breakdown in realistic structures under different stress conditions. The TCAD tool has been used to carry out a detailed analysis of the most relevant physical quantities, in order to gain a detailed understanding of the main mechanisms responsible for breakdown and to guide design optimization.
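For orientation, two textbook conduction mechanisms typically considered when modelling contact injection and trap-assisted transport in oxides are sketched below (generic forms quoted for context; the thesis calibrates its own TCAD models against experiments):

```latex
% Fowler-Nordheim tunnelling at the injecting contact (A, B material
% constants, E_ox the oxide field):
J_{\mathrm{FN}} = A\, E_{\mathrm{ox}}^{2}
  \exp\!\left( -\frac{B}{E_{\mathrm{ox}}} \right)
% Poole-Frenkel emission from traps of depth \phi_t in the oxide bulk,
% with field-induced barrier lowering:
J_{\mathrm{PF}} \propto E_{\mathrm{ox}}
  \exp\!\left( -\frac{q \left( \phi_t -
    \sqrt{ q E_{\mathrm{ox}} / (\pi \varepsilon_{\mathrm{ox}}) } \right)}{k_B T} \right)
```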