933 results for NUMERICAL SIMULATION


Relevance: 30.00%

Abstract:

This thesis presents new methods to simulate systems with hydrodynamic and electrostatic interactions. Part 1 is devoted to computer simulations of Brownian particles with hydrodynamic interactions. The main influence of the solvent on the dynamics of Brownian particles is that it mediates hydrodynamic interactions. In the method, this is simulated by numerical solution of the Navier-Stokes equation on a lattice. To this end, the Lattice-Boltzmann method is used, namely its D3Q19 version. This model is capable of simulating compressible flow, which gives us the advantage of treating dense systems, in particular away from thermal equilibrium. The Lattice-Boltzmann equation is coupled to the particles via a friction force. In addition to this force, acting on point particles, we construct another coupling force, which comes from the pressure tensor. The coupling is purely local, i.e. the algorithm scales linearly with the total number of particles. In order to be able to map the physical properties of the Lattice-Boltzmann fluid onto a Molecular Dynamics (MD) fluid, the case of an almost incompressible flow is considered. The fluctuation-dissipation theorem for the hybrid coupling is analyzed, and a geometric interpretation of the friction coefficient in terms of a Stokes radius is given. Part 2 is devoted to the simulation of charged particles. We present a novel method for obtaining Coulomb interactions as the potential of mean force between charges which are dynamically coupled to a local electromagnetic field. This algorithm scales linearly, too. We focus on the Molecular Dynamics version of the method and show that it is intimately related to the Car-Parrinello approach, while being equivalent to solving Maxwell's equations with a freely adjustable speed of light. The Lagrangian formulation of the coupled particle-field system is derived. The quasi-Hamiltonian dynamics of the system is studied in great detail. For implementation on the computer, the equations of motion are discretized with respect to both space and time. The discretization of the electromagnetic fields on a lattice, as well as the interpolation of the particle charges onto the lattice, is given. The algorithm is as local as possible: only nearest-neighbor lattice sites interact with a charged particle. Unphysical self-energies arise as a result of the lattice interpolation of charges and are corrected by a subtraction scheme based on the exact lattice Green's function. The method allows easy parallelization using standard domain decomposition. Some benchmarking results of the algorithm are presented and discussed.
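
A friction coupling of this kind is commonly written as a Stokes-type drag towards the locally interpolated fluid velocity plus a compensating random force. The equations below are only a generic sketch of that scheme (the symbols gamma for the bare friction coefficient, u(r_i) for the interpolated fluid velocity and f_i for the noise are chosen here for illustration), with the fluctuation-dissipation theorem fixing the noise amplitude:

```latex
% Friction coupling of particle i (velocity v_i) to the interpolated fluid velocity u(r_i):
F_i = -\gamma \left[ v_i - u(r_i) \right] + f_i(t),
% with the random force obeying the fluctuation-dissipation theorem:
\langle f_i^{\alpha}(t)\, f_j^{\beta}(t') \rangle
  = 2\,\gamma\, k_B T \,\delta_{ij}\,\delta^{\alpha\beta}\,\delta(t - t').
```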

Relevance: 30.00%

Abstract:

In this thesis we consider three different models for strongly correlated electrons, namely a multi-band Hubbard model as well as the spinless Falicov-Kimball model, both with a semi-elliptical density of states in the limit of infinite dimensions d, and the attractive Hubbard model on a square lattice in d=2. In the first part, we study a two-band Hubbard model with unequal bandwidths and anisotropic Hund's rule coupling (J_z-model) in the limit of infinite dimensions within the dynamical mean-field theory (DMFT). Here, the DMFT impurity problem is solved with the use of quantum Monte Carlo (QMC) simulations. Our main result is that the J_z-model describes the occurrence of an orbital-selective Mott transition (OSMT), in contrast to earlier findings. We investigate the model with a high-precision DMFT algorithm, which was developed as part of this thesis and which supplements QMC with a high-frequency expansion of the self-energy. The main advantage of this scheme is the extraordinary accuracy of the numerical solutions, which can be obtained already with moderate computational effort, so that studies of multi-orbital systems within DMFT+QMC are strongly improved. We also found that a suitably defined Falicov-Kimball (FK) model exhibits an OSMT, revealing the close connection of the Falicov-Kimball physics to the J_z-model in the OSM phase. In the second part of this thesis we study the attractive Hubbard model in two spatial dimensions within second-order self-consistent perturbation theory. This model is considered on a square lattice at finite doping and at low temperatures. Our main result is that the predictions of first-order perturbation theory (Hartree-Fock approximation) are renormalized by a factor of the order of unity even at arbitrarily weak interaction (U->0). The renormalization factor q can be evaluated as a function of the filling n for 0 < n < 1. In the limit n -> 0, the q-factor vanishes, signaling the divergence of self-consistent perturbation theory in this limit. Thus we present the first asymptotically exact results at weak coupling for the negative-U Hubbard model in d=2 at finite doping.
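
The high-frequency expansion that supplements the QMC self-energy is, generically, a moment expansion in 1/(i omega_n). As an illustration only (this is the well-known exact result for the single-band Hubbard model, not the two-band J_z-model treated in the thesis), the leading coefficients are fixed by the occupation:

```latex
\Sigma_\sigma(i\omega_n) \;=\; U \langle n_{-\sigma}\rangle
\;+\; \frac{U^2 \langle n_{-\sigma}\rangle\bigl(1-\langle n_{-\sigma}\rangle\bigr)}{i\omega_n}
\;+\; \mathcal{O}\!\left((i\omega_n)^{-2}\right).
```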

Relevance: 30.00%

Abstract:

Graphene's excellent properties make it a promising candidate for building future nanoelectronic devices. Nevertheless, the absence of an energy gap is an open problem for transistor applications. In this thesis, graphene nanoribbons and pattern-hydrogenated graphene, two alternatives for inducing an energy gap in graphene, are investigated by means of numerical simulations. A tight-binding NEGF code is developed for the simulation of GNR-FETs. To speed up the simulations, a non-parabolic effective mass model and a mode-space tight-binding method are developed. The code is used for simulation studies of both conventional and tunneling FETs. The simulations show the great potential of conventional narrow GNR-FETs, but at the same time highlight the leakage problems in the off-state due to various tunneling mechanisms. The leakage problems become more severe as the width of the devices is made larger, and thus the band gap smaller, resulting in a poor on/off current ratio. The tunneling FET architecture can partially solve these problems thanks to its improved subthreshold slope; however, it is also shown that edge roughness, unless well controlled, can have a detrimental effect on the off-state performance. In the second part of this thesis, pattern-hydrogenated graphene is simulated by means of a tight-binding model. A realistic model for patterned hydrogenation, including disorder, is developed. The model is validated by direct comparison of the momentum-energy resolved density of states with experimental angle-resolved photoemission spectroscopy. The scaling of the energy gap and the localization length with the parameters defining the pattern geometry is also presented. The results suggest that a substantial transport gap is attainable with experimentally achievable hydrogen concentrations.
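
As a toy illustration of the NEGF machinery mentioned above (not the thesis's actual GNR code), the sketch below computes the ballistic transmission of a uniform 1D tight-binding chain coupled to two semi-infinite leads; the on-site energy eps0 and hopping t are arbitrary illustration values.

```python
import numpy as np

def surface_g(E, eps0=0.0, t=-1.0):
    """Retarded surface Green's function of a semi-infinite 1D tight-binding chain (closed form)."""
    x = E - eps0
    if abs(x) <= 2 * abs(t):                                   # inside the band: propagating states
        return (x - 1j * np.sqrt(4 * t**2 - x**2)) / (2 * t**2)
    return (x - np.sign(x) * np.sqrt(x**2 - 4 * t**2)) / (2 * t**2)   # outside: decaying root

def transmission(E, n_sites=6, eps0=0.0, t=-1.0):
    """Ballistic transmission T(E) = Tr[Gamma_L G Gamma_R G^dagger] for an n-site chain."""
    H = eps0 * np.eye(n_sites) + t * (np.eye(n_sites, k=1) + np.eye(n_sites, k=-1))
    gs = surface_g(E, eps0, t)
    Sigma_L = np.zeros((n_sites, n_sites), dtype=complex)
    Sigma_R = np.zeros((n_sites, n_sites), dtype=complex)
    Sigma_L[0, 0] = t * gs * t           # lead self-energies enter only at the contact sites
    Sigma_R[-1, -1] = t * gs * t
    G = np.linalg.inv(E * np.eye(n_sites) - H - Sigma_L - Sigma_R)
    Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)
    Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
    return np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T).real

# For a perfect chain, T(E) = 1 anywhere inside the band (|E - eps0| < 2|t|).
print(transmission(0.5))
```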

Relevance: 30.00%

Abstract:

Laser shock peening (LSP) is a technique similar to shot peening that imparts compressive residual stresses in materials to improve fatigue resistance. The ability to use a high-energy laser pulse to generate shock waves, inducing a compressive residual stress field in metallic materials, has applications in multiple fields such as turbo-machinery, airframe structures, and medical appliances. The transient nature of the LSP phenomenon and the high rate of the laser-driven dynamics make real-time in-situ measurement of the laser/material interaction very challenging. For this reason, and because of the high cost of experimental tests, reliable analytical methods for predicting the detailed effects of LSP are needed to understand the potential of the process. The aim of this work has been the prediction of the residual stress field after the laser peening process by means of finite element modeling. The work has been carried out in the Stress Methods department of Airbus Operations GmbH (Hamburg) and includes an investigation of the compressive residual stresses induced by laser shock peening, a mesh sensitivity study, optimization and tuning of the model using physical and numerical parameters, and validation of the model against experimental results. The model has been realized with the Abaqus/Explicit commercial software, starting from considerations made in previous works. FE analyses are mesh sensitive: by increasing the number of elements and decreasing their size, the software is able to capture even the details of the real phenomenon. However, these details may be only an amplification of the real phenomenon; for this reason it was necessary to optimize the size and number of the mesh elements. A new model has been created with a finer mesh in the through-thickness direction, since this is the direction most involved in the process deformations. The resulting increase in the global number of elements has been offset by an in-plane size reduction of the elements far from the peened area, in order to avoid excessive computational costs. The efficiency and stability of the analyses have been improved by using bulk viscosity coefficients, a purely numerical parameter available in Abaqus/Explicit. A plastic rate sensitivity study has also been carried out, and a new set of Johnson-Cook model coefficients has been chosen. These investigations led to a more controllable and reliable model, valid even for more complex geometries. Moreover, the study of the material properties highlighted a gap in the model concerning the simulation of the surface conditions. Modeling of the ablative layer employed during the real process has been used to fill this gap. In the real process the ablative layer is a very thin sheet of pure aluminum stuck on the workpiece; in the simulation it has been reproduced simply as a 100 µm layer made of a material with a yield point of 10 MPa. All these new settings have been applied to a set of analyses made with different geometry models to verify the robustness of the model. The calibration of the model against the experimental results was based on stress and displacement measurements carried out both on the surface and in depth. The good correlation between simulation and experimental test results proved this model to be reliable.
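
For reference, the Johnson-Cook flow stress model whose coefficients are re-tuned in this work has the standard form below (symbols follow common usage: A, B, n, C and m are material constants, epsilon_p the equivalent plastic strain, the starred strain rate is normalized by a reference rate, and T* is the homologous temperature):

```latex
\sigma_y \;=\; \bigl(A + B\,\varepsilon_p^{\,n}\bigr)\,
\bigl(1 + C \ln \dot{\varepsilon}^{*}\bigr)\,
\bigl(1 - T^{*\,m}\bigr).
```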

Relevance: 30.00%

Abstract:

Herbicides are becoming emergent contaminants in Italian surface, coastal and ground waters, due to their intensive use in agriculture. In marine environments herbicides have adverse effects on non-target organisms, such as primary producers, resulting in oxygen depletion and decreased primary productivity. Alterations of species composition in algal communities can also occur, due to the different sensitivity among species. In the present thesis the effects on different algal species of herbicides widely used in the Northern Adriatic Sea were studied. The main goal of this work was to study the influence of temperature on algal growth in the presence of the triazine herbicide terbuthylazine (TBA), and the cellular responses adopted to counteract the toxic effects of the pollutant (Chapters 1 and 2). Simulation models to be applied in environmental management are needed to organize and track information in a way that would not otherwise be possible and to simulate an ecological perspective. The data collected from laboratory experiments were used to simulate algal responses to TBA exposure under increasing temperature conditions (Chapter 3). Part of the thesis was conducted abroad. The work presented in Chapter 4 focused on the effect of high light on growth, toxicity and mixotrophy of the ichthyotoxic species Prymnesium parvum. In addition, a mesocosm experiment was conducted in order to study the synergistic effect of the pollutant emamectin benzoate with other anthropogenic stressors, such as oil pollution and induced phytoplankton blooms (Chapter 5).
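
The abstract does not specify the form of the growth model used in Chapter 3, so the sketch below is only a generic, hypothetical example of how temperature-dependent algal growth with herbicide inhibition is often parameterized (logistic growth, a Q10 temperature factor and a Hill-type dose-response term); every parameter value here is made up for illustration.

```python
import numpy as np

def algal_growth(N0, days, temp_C, tba_ugL,
                 mu_ref=0.8, T_ref=20.0, Q10=2.0, K=1e6, EC50=5.0, hill=1.5):
    """Toy daily time-stepping of algal cell density (cells/mL).

    mu_ref : reference specific growth rate (1/day) at T_ref
    Q10    : temperature coefficient; EC50, hill : Hill-type TBA inhibition
    All values are illustrative, not taken from the thesis.
    """
    mu_T = mu_ref * Q10 ** ((temp_C - T_ref) / 10.0)        # temperature scaling of growth rate
    inhibition = 1.0 / (1.0 + (tba_ugL / EC50) ** hill)     # dose-response reduction factor
    mu = mu_T * inhibition
    N = np.empty(days + 1)
    N[0] = N0
    for d in range(days):
        N[d + 1] = N[d] + mu * N[d] * (1.0 - N[d] / K)      # explicit logistic step (dt = 1 day)
    return N

# Example: 10 days of growth at 25 degC with 2 ug/L TBA
print(algal_growth(1e4, 10, temp_C=25.0, tba_ugL=2.0)[-1])
```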

Relevance: 30.00%

Abstract:

Deep convection over wildfires (pyro-convection) is one of the most intense forms of atmospheric convection. The extreme cloud dynamics, with high vertical wind speeds (up to 20 m/s) already at cloud base, high water vapor supersaturations (up to 1%) and the high number concentrations of aerosol particles produced by the fire (up to 100,000 cm^-3), provide a special setting for aerosol-cloud interactions. A decisive step in the microphysical development of a convective cloud is the activation of aerosol particles into cloud droplets. This activation process determines the initial number and size of the cloud droplets and can therefore influence the development of a convective cloud and its precipitation formation. The most important factors determining the initial number and size of the cloud droplets are the size and hygroscopicity of the aerosol particles available at cloud base as well as the vertical wind speed. To investigate the influence of these factors under pyro-convective conditions, numerical simulations were carried out with a cloud parcel model featuring a detailed spectral description of cloud microphysics. The results can be divided into three regimes, depending on the ratio of vertical wind speed to aerosol number concentration (w/N_CN): (1) an aerosol-limited regime (high w/N_CN), (2) an updraft-limited regime (low w/N_CN), and (3) a transitional regime (intermediate w/N_CN). The results show that the variability of the initial cloud droplet number concentration in (pyro-)convective clouds is mainly determined by the variability of the vertical wind speed and of the aerosol concentration.

To investigate the microphysical processes within the smoky updraft region of a pyro-convective cloud with detailed spectral microphysics, the parcel model was initialized along a trajectory within the updraft region. This trajectory was calculated from three-dimensional simulations of a pyro-convective event with the model ATHAM. It is found that the cloud droplet number concentration increases with increasing aerosol concentration, while the size of the cloud droplets decreases with increasing aerosol concentration. The reduced broadening of the droplet spectrum agrees with results from measurements and supports the concept of precipitation suppression in heavily polluted clouds.

Using the model ATHAM, the dynamical and microphysical processes of pyro-convective clouds were then investigated with two- and three-dimensional simulations, building on a realistic parameterization of aerosol activation derived from the results of the activation study. A state-of-the-art two-moment microphysical scheme was implemented in ATHAM to investigate the influence of the aerosol particle number concentration on the development of idealized pyro-convective clouds in US standard atmospheres for the mid-latitudes and the tropics. The results show that the aerosol number concentration influences the formation of rain. For low aerosol concentrations, rapid rain formation takes place mainly through warm-rain microphysical processes. For higher aerosol concentrations, the ice phase is more important for rain formation, which leads to a delayed onset of precipitation in more polluted atmospheres. It is also shown that the composition of the ice-nucleating particles (IN) has a strong influence on the dynamical and microphysical structure of such clouds: with very efficient IN, rain forms earlier. The investigation of the influence of the atmospheric background profile shows only a small effect of the meteorology on the sensitivity of pyro-convective clouds to the aerosol concentration. Finally, it is shown that the heat emitted by the fire has a pronounced influence on the development and cloud top height of pyro-convective clouds. In summary, this dissertation investigates in detail the microphysics of pyro-convective clouds by means of idealized simulations with a cloud parcel model with detailed spectral microphysics and a 3D model with a two-moment scheme. It is shown that the extreme conditions with respect to vertical wind speeds and aerosol concentrations have a pronounced influence on the development of pyro-convective clouds.
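
The activation step discussed above is conventionally described by Köhler theory. For reference (this is the textbook form with generic symbols, not a result specific to the thesis), the equilibrium supersaturation over a solution droplet of radius r, with A the curvature (Kelvin) coefficient and B the solute (Raoult) coefficient, and the resulting critical supersaturation and radius are:

```latex
S(r) \;\approx\; \frac{A}{r} \;-\; \frac{B}{r^{3}},
\qquad
S_{c} \;=\; \sqrt{\frac{4A^{3}}{27B}}
\quad\text{at}\quad
r_{c} \;=\; \sqrt{\frac{3B}{A}} .
```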

Relevance: 30.00%

Abstract:

The development of a multibody model of a motorbike engine cranktrain is presented in this work, with an emphasis on flexible component model reduction. A modelling methodology based upon the adoption of non-ideal joints at interface locations, and upon the inclusion of component flexibility, is developed: both are necessary if one wants to capture the dynamic effects which arise in lightweight, high-speed applications. With regard to the first topic, both a ball bearing model and a journal bearing model are implemented in order to properly capture the dynamic effects of the main connections in the system: angular contact ball bearings are modelled according to a five-DOF nonlinear scheme in order to capture the behaviour of the crankshaft main bearings, while an impedance-based hydrodynamic bearing model is implemented, providing an enhanced prediction of operation at the conrod big-end locations. Concerning the second matter, flexible models of the crankshaft and the connecting rod are produced. The well-established Craig-Bampton reduction technique is adopted as a general framework to obtain reduced model representations which are suitable for the subsequent multibody analyses. A particular component mode selection procedure is implemented, based on the concept of Effective Interface Mass, allowing an assessment of the accuracy of the reduced models prior to the nonlinear simulation phase. In addition, a procedure to alleviate the effects of modal truncation, based on the Modal Truncation Augmentation approach, is developed. In order to assess the performance of the proposed modal reduction schemes, numerical tests are performed on the crankshaft and conrod models in both the frequency and modal domains. A multibody model of the cranktrain is eventually assembled and simulated using commercial software. Numerical results are presented, demonstrating the effectiveness of the implemented flexible model reduction techniques. The advantages over the conventional frequency-based truncation approach are discussed.
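
As a schematic illustration of the Craig-Bampton reduction used for the flexible components (the generic textbook construction, not the thesis's actual implementation), the sketch below assembles the reduction basis from static constraint modes and a truncated set of fixed-interface normal modes for a mass/stiffness pair partitioned into boundary (b) and interior (i) degrees of freedom.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(M, K, boundary_dofs, n_modes):
    """Return the Craig-Bampton transformation T and the reduced (M_r, K_r).

    M, K          : full symmetric mass and stiffness matrices
    boundary_dofs : indices of the interface DOFs retained physically
    n_modes       : number of fixed-interface normal modes to keep
    """
    n = M.shape[0]
    b = np.asarray(boundary_dofs)
    i = np.setdiff1d(np.arange(n), b)

    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    Mii = M[np.ix_(i, i)]

    # Constraint (static) modes: interior response to unit boundary displacements
    Psi = -np.linalg.solve(Kii, Kib)

    # Fixed-interface normal modes: generalized eigenproblem with the boundary clamped
    _, Phi = eigh(Kii, Mii)
    Phi = Phi[:, :n_modes]

    # Assemble T mapping reduced coordinates [u_b; q] onto the full vector [u_b; u_i]
    T = np.zeros((n, len(b) + n_modes))
    T[b, :len(b)] = np.eye(len(b))
    T[np.ix_(i, np.arange(len(b)))] = Psi
    T[np.ix_(i, len(b) + np.arange(n_modes))] = Phi

    return T, T.T @ M @ T, T.T @ K @ T
```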

Relevance: 30.00%

Abstract:

Constant developments in the field of offshore wind energy have increased the range of water depths at which wind farms are planned to be installed. Therefore, in addition to monopile support structures, suitable in shallow waters (up to 30 m), different types of support structures able to withstand severe sea conditions at greater water depths have been developed. For water depths above 30 m, the jacket is one of the preferred support types. The jacket is a lightweight support structure which, in combination with the complex nature of environmental loads, is prone to highly dynamic behavior. As a consequence, high stresses with great variability in time can be observed in all structural members. The highest concentration of stresses occurs in the joints, due to their nature (structural discontinuities) and due to the notches along the welds present in the joints. This makes them the weakest elements of the jacket in terms of fatigue. In the numerical modeling of jackets for offshore wind turbines, a reduction of local stresses at the chord-brace joints, and consequently an optimization of the model, can be achieved by implementing joint flexibility in the chord-brace joints. Therefore, in this work, the influence of joint flexibility on the fatigue damage in chord-brace joints of a numerical jacket model, subjected to advanced load simulations, is studied.
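
Fatigue damage in such joints is conventionally accumulated with an S-N curve and the Palmgren-Miner rule. The sketch below is only a generic illustration of that bookkeeping, applied to pre-counted stress-range cycles; the single-slope S-N parameters log_a and m are placeholders, not values from the thesis or from a specific offshore standard.

```python
import numpy as np

def miner_damage(stress_ranges, cycle_counts, log_a=12.48, m=3.0):
    """Palmgren-Miner damage sum D = sum(n_i / N_i) for an S-N curve N = 10**log_a * S**(-m).

    stress_ranges : stress ranges S_i in MPa (e.g. from rainflow counting)
    cycle_counts  : number of applied cycles n_i at each stress range
    log_a, m      : illustrative single-slope S-N curve parameters
    """
    S = np.asarray(stress_ranges, dtype=float)
    n = np.asarray(cycle_counts, dtype=float)
    N_allowed = 10.0 ** log_a * S ** (-m)      # cycles to failure at each stress range
    return np.sum(n / N_allowed)               # failure is predicted when D >= 1

# Example: three stress-range bins obtained from a load simulation
print(miner_damage([40.0, 60.0, 90.0], [2e6, 5e5, 1e4]))
```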

Relevance: 30.00%

Abstract:

In this work, computer simulations of nucleation and crystallization processes in colloidal systems were carried out. A combination of Monte Carlo simulation methods and the forward flux sampling technique was implemented in order to study the homogeneous and heterogeneous nucleation of crystals of monodisperse hard spheres. In the moderately supercooled bulk hard-sphere system we predict the homogeneous nucleation rates and compare the results with other theoretical results and with experimental data. Furthermore, we analyze the crystalline clusters in the nucleation and growth zones, finding that crystalline clusters of different shapes form in the system: small clusters tend to be elongated in an arbitrary direction, while larger clusters are more compact and ellipsoidal in shape.

In the next part we study heterogeneous nucleation at structured bcc (100) walls. The 2d analysis of the crystalline layers at the wall shows that the structure of the wall plays a decisive role in the crystallization of hard-sphere colloids. We also predict the heterogeneous crystal nucleation rates at various degrees of supersaturation. By analyzing the largest clusters at the wall, we additionally estimate the contact angle between crystal cluster and wall. It turns out that such systems are far from the wetting regime and that the crystallization process takes place via heterogeneous nucleation.

In the last part of the work we consider the crystallization of Lennard-Jones colloidal systems confined between two planar walls. To investigate the freezing processes in such a system, we performed an analysis of the bond-orientational order parameter within the layers. The results show that there is no hexatic order within a layer, which would indicate a Kosterlitz-Thouless melting scenario. Moreover, the hysteresis in the heating-freezing curves shows that the crystallization process is an activated process.
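
In forward flux sampling, the nucleation rate is obtained as the flux through a first interface of the order parameter lambda multiplied by the conditional probabilities of reaching each subsequent interface; for reference, the standard expression (written here with generic symbols) is

```latex
k_{A \rightarrow B} \;=\; \Phi_{A,0}\,
\prod_{i=0}^{n-1} P\!\left(\lambda_{i+1} \,\middle|\, \lambda_{i}\right),
```

where Phi_{A,0} is the flux of trajectories leaving basin A through the first interface lambda_0 and P(lambda_{i+1} | lambda_i) is the probability that a trajectory reaching lambda_i subsequently reaches lambda_{i+1} before returning to A.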

Relevance: 30.00%

Abstract:

Theories and numerical modeling are fundamental tools for understanding, optimizing and designing present and future laser-plasma accelerators (LPAs). Laser evolution and plasma wave excitation in an LPA driven by a weakly relativistic, short-pulse laser propagating in a preformed parabolic plasma channel are studied analytically in 3D, including the effects of pulse steepening and energy depletion. At higher laser intensities, the process of electron self-injection in the nonlinear bubble wake regime is studied by means of fully self-consistent Particle-in-Cell (PIC) simulations. Considering a non-evolving laser driver propagating with a prescribed velocity, the geometrical properties of the non-evolving bubble wake are studied, and the dependence of the self-injection threshold on laser intensity and wake velocity is characterized for a range of parameters of interest for laser-plasma acceleration. Due to the nonlinear and complex nature of the physics involved, computationally challenging numerical simulations are required to model laser-plasma accelerators operating at relativistic laser intensities. The numerical and computational optimizations that, combined in the codes INF&RNO and INF&RNO/quasi-static, make it possible to accurately model multi-GeV laser wakefield acceleration stages on present supercomputing architectures are discussed. The PIC code jasmine, capable of efficiently running laser-plasma simulations on Graphics Processing Unit (GPU) clusters, is presented. GPUs deliver exceptional performance to PIC codes, but the core algorithms had to be redesigned to satisfy the constraints imposed by the intrinsic parallelism of the architecture. The simulation campaigns run with the code jasmine to model the recent LPA experiments with the INFN-FLAME and CNR-ILIL laser systems are also presented.
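
A few characteristic scales recur throughout such studies. The snippet below is a small reference calculation (standard textbook formulas, not code from the thesis) of the electron plasma frequency and the plasma wavelength for a given electron density, which set the length scale of the wakefield structures discussed above.

```python
import numpy as np

# SI constants
e = 1.602176634e-19        # elementary charge [C]
m_e = 9.1093837015e-31     # electron mass [kg]
eps0 = 8.8541878128e-12    # vacuum permittivity [F/m]
c = 299792458.0            # speed of light [m/s]

def plasma_scales(n_e):
    """Plasma frequency [rad/s] and plasma wavelength [m] for electron density n_e [m^-3]."""
    omega_p = np.sqrt(n_e * e**2 / (eps0 * m_e))   # electron plasma frequency
    lambda_p = 2.0 * np.pi * c / omega_p           # corresponding plasma wavelength
    return omega_p, lambda_p

# Example: n_e = 1e18 cm^-3 = 1e24 m^-3, a typical LPA operating density
omega_p, lambda_p = plasma_scales(1e24)
print(f"omega_p = {omega_p:.3e} rad/s, lambda_p = {lambda_p * 1e6:.1f} um")
```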

Relevance: 30.00%

Abstract:

The aim of this thesis is the elucidation of structure-property relationships of molecular semiconductors for electronic devices. This involves the use of a comprehensive set of simulation techniques, ranging from quantum-mechanical to numerical stochastic methods, as well as the development of ad-hoc computational tools. In more detail, the research activity regarded two main topics: the study of the electronic properties and structural behaviour of liquid crystalline (LC) materials based on functionalised oligo(p-phenyleneethynylene) (OPE), and the investigation of the effect of the electric field associated with OFET operation on pentacene thin-film stability. In this dissertation, a novel family of substituted OPE liquid crystals with applications in stimuli-responsive materials is presented. Simulations not only provide evidence for the characterization of the liquid crystalline phases of different OPEs, but also elucidate the role of charge transfer states in donor-acceptor LCs containing an endohedral metallofullerene moiety. Such systems can be regarded as promising candidates for organic photovoltaics. Furthermore, exciton dynamics simulations are performed as a way to obtain additional information about the degree of order in OPE columnar phases. Finally, ab initio and molecular mechanics simulations are used to investigate the influence of an applied electric field on pentacene reactivity and stability. The reaction path of pentacene thermal dimerization in the presence of an external electric field is investigated; the results can be related to the fatigue effect observed in OFETs, which show significant performance degradation even in the absence of external agents. In addition to this, the effect of the gate voltage on a pentacene monolayer is simulated, and the results are then compared to X-ray diffraction measurements performed for the first time on operating OFETs.
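
Exciton (and charge) dynamics in such columnar phases are typically propagated with hopping rates between neighbouring molecules. As an illustrative example only (a common choice in the field, not necessarily the rate expression used in this thesis), the Marcus rate between sites i and j with electronic coupling J_ij, reorganization energy lambda and driving force Delta G_ij reads:

```latex
k_{ij} \;=\; \frac{2\pi}{\hbar}\,\lvert J_{ij}\rvert^{2}\,
\frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,
\exp\!\left[-\frac{\left(\Delta G_{ij}+\lambda\right)^{2}}{4\lambda k_{B}T}\right].
```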

Relevance: 30.00%

Abstract:

This work illustrates a soil-tunnel-structure interaction study performed with an integrated geotechnical and structural approach, based on 3D finite element analyses and validated against experimental observations. The study aims at analysing the response of reinforced concrete framed buildings on discrete foundations in interaction with metro lines. It refers to the case of the twin tunnels of the Milan (Italy) metro line 5, recently built in coarse-grained materials using EPB machines, for which subsidence measurements collected along ground and building sections during tunnelling were available. Settlements measured under free-field conditions are first back-interpreted using Gaussian empirical predictions. Then, the analysis of the in situ measurements is extended to include the evolving response of a 9-storey reinforced concrete building while being undercrossed by the metro line. In the finite element study, the soil mechanical behaviour is described using an advanced constitutive model. The latter, when combined with a proper simulation of the excavation process, proves to realistically reproduce the subsidence profiles under free-field conditions and to capture the interaction phenomena occurring between the twin tunnels during the excavation. Furthermore, when the numerical model is extended to include the building, schematised in a detailed manner, the results are in good agreement with the monitoring data for different stages of the twin tunnelling. Thus, they indirectly confirm the satisfactory performance of the adopted numerical approach, which also allows a direct evaluation of the structural response as an outcome of the analysis. Further analyses are also carried out modelling the building with different levels of detail. The results highlight that, in this case, the simplified approach based on the equivalent plate schematisation is inadequate to capture the real tunnelling-induced displacement field. The overall behaviour of the system proves to be mainly influenced by the buried portion of the building, which plays an essential role in the interaction mechanism due to its high stiffness.
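
The Gaussian empirical prediction mentioned above is the classical transverse settlement trough; for reference (the standard form, with symbols chosen here for illustration), the vertical settlement at transverse distance x from the tunnel axis and the associated trough volume per unit tunnel length are

```latex
S(x) \;=\; S_{\max}\,\exp\!\left(-\frac{x^{2}}{2\, i^{2}}\right),
\qquad
V_{s} \;=\; \sqrt{2\pi}\;i\;S_{\max},
```

where S_max is the maximum settlement above the tunnel axis and i is the horizontal distance from the axis to the inflection point of the trough.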

Relevance: 30.00%

Abstract:

This doctoral thesis deals with classical vector spin glasses, a type of disordered magnet, on various lattice types. Since they are relevant for experimental realizations, a theoretical understanding of spin-glass models with few spin components and low lattice dimension is of great importance. As this proves to be very difficult, new, promising approaches are needed. This work therefore considers the limit of infinitely many spin dimensions. In this limit several simplifications arise compared with models of low spin dimension, so that for this important problem properties both at zero temperature and at finite temperatures are determined, mostly with numerical methods. Both hypercubic lattices and a versatile one-dimensional model are considered. The latter allows different universality classes to be studied by merely tuning a single parameter. Finite-size scaling forms, critical exponents, ratios of critical exponents and other critical quantities are proposed and compared with numerical results. A detailed description of the derivations of all numerically evaluated equations is also given. At zero temperature, a thorough investigation of the ground states and defect energies is carried out; a number of interesting quantities are analyzed and, in particular, the lower critical dimension is determined. At finite temperature, the order parameter and the spin-glass susceptibility are accessible via the numerically computed correlation matrix. The spin-glass model in the limit of infinitely many spin components can be regarded as a starting point for the investigation of the more natural models with low spin dimension. A model combining the advantages of the former with the properties of the latter would of course be desirable. Therefore, a model with anisotropy is proposed and tested, with which an attempt is made to achieve this goal. Promising ways of using the model are pointed out in order to stimulate further study. Finally, so-called real-space renormalization group calculations are performed, both analytically and numerically, for finite-dimensional vector spin glasses with a finite number of spin components. This is done with a newly determined Migdal-Kadanoff recursion relation. Among other quantities, the lower critical dimension is determined.
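
The finite-size scaling forms mentioned above follow the standard pattern; as a generic example (written with conventional symbols, not the specific forms derived in the thesis), the spin-glass susceptibility near the critical temperature T_c is expected to obey

```latex
\chi_{\mathrm{SG}}(T, L) \;=\; L^{\,2-\eta}\,
\tilde{\chi}\!\left( L^{1/\nu}\,(T - T_{c}) \right),
```

with nu and eta the correlation-length and anomalous-dimension exponents and a universal scaling function on the right-hand side.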

Relevance: 30.00%

Abstract:

Coarse graining is a popular technique used in physics to speed up the computer simulation of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential, or its parameters, from given structural data. Due to discrepancies between model and reality, the potential is not unique, so that the stability of such a method and its convergence to a meaningful solution are issues.

In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters, which have a negligible influence on the structure of the fluid and which cause non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the mentioned discrepancies. We then compare it to the existing physical methods, the Iterative Boltzmann Inversion and the Inverse Monte Carlo method, which are fast and well adapted to the problem but sometimes have convergence problems.

From an analysis of the Iterative Boltzmann Inversion, we elaborate a meaningful approximation of the structure and use it to derive a modification of the Levenberg-Marquardt method. We employ the latter for the reconstruction of the interaction parameters from experimental data for liquid argon and nitrogen. We show that the modified method is stable, convergent and fast. Further, the singular value analysis of the structure and its approximation allows us to determine the crucial interaction parameters, that is, to simplify the modeling of interactions. Our results therefore build a rigorous bridge between the inverse problem from physics and the powerful solution tools from mathematics.
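
The two mathematical ingredients named above, singular value analysis of the parameter sensitivities and a regularized Levenberg-Marquardt iteration, can be sketched generically as follows. This is a toy least-squares setting with a made-up residual function, not the thesis's structural observables or its specific regularization scheme.

```python
import numpy as np

def numerical_jacobian(residual, p, h=1e-6):
    """Forward-difference Jacobian of residual(p) with respect to the parameters p."""
    r0 = residual(p)
    J = np.zeros((r0.size, p.size))
    for k in range(p.size):
        dp = np.zeros_like(p)
        dp[k] = h
        J[:, k] = (residual(p + dp) - r0) / h
    return J

def levenberg_marquardt(residual, p0, mu=1e-2, n_iter=50):
    """Basic regularized Levenberg-Marquardt iteration; mu damps weakly determined directions."""
    p = np.array(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = numerical_jacobian(residual, p)
        # Singular value analysis: small singular values flag weak, ill-determined parameters
        s = np.linalg.svd(J, compute_uv=False)
        step = np.linalg.solve(J.T @ J + mu * np.eye(p.size), -J.T @ r)
        p += step
    return p, s

# Toy example: fit a two-parameter exponential to synthetic "structural" data
x = np.linspace(0.1, 5.0, 50)
data = 2.0 * np.exp(-0.7 * x)
residual = lambda p: p[0] * np.exp(-p[1] * x) - data
p_fit, sing_vals = levenberg_marquardt(residual, [1.0, 1.0])
print(p_fit, sing_vals)
```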

Relevance: 30.00%

Abstract:

Nowadays computer simulation is used in various fields, particularly in laboratories, where it is used for the exploration of data that are sometimes experimentally inaccessible. In less developed countries, where there is a need for up-to-date laboratories for carrying out practical lessons in chemistry, especially in secondary schools and some higher institutions of learning, it may permit learners to carry out experiments such as titrations without the use of laboratory materials and equipment. Computer simulations may also permit teachers to better explain the realities of practical lessons, given that computers have now become very accessible and less expensive compared to the acquisition of laboratory materials and equipment. This work is aimed at producing a virtual laboratory that permits the simulation of an acid-base titration and an oxidation-reduction titration with the use of synthetic images. To this effect, an appropriate numerical method was used to obtain suitable flowcharts, which were then transcribed into source code with the help of a programming language so as to produce the software.
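
As a minimal sketch of the kind of calculation such a virtual laboratory performs (a generic strong acid-strong base titration, not the actual software described here), the pH along the titration can be computed from the charge balance at each added volume:

```python
import numpy as np

def strong_acid_base_titration(c_acid=0.1, v_acid=25.0, c_base=0.1, v_base_max=50.0, kw=1e-14):
    """pH curve for titrating a strong acid (c_acid mol/L, v_acid mL) with a strong base.

    At each point the excess analytical concentration is converted to [H+] via Kw.
    """
    v_base = np.linspace(0.0, v_base_max, 501)             # added base volume in mL
    v_tot = v_acid + v_base
    delta = (c_acid * v_acid - c_base * v_base) / v_tot    # excess acid (mol/L); negative = excess base
    # Charge balance gives [H+] - Kw/[H+] = delta  =>  [H+] = (delta + sqrt(delta^2 + 4 Kw)) / 2
    h = (delta + np.sqrt(delta**2 + 4.0 * kw)) / 2.0
    return v_base, -np.log10(h)

v, pH = strong_acid_base_titration()
print(pH[0], pH[250], pH[-1])   # start (pH ~1), equivalence point at 25 mL (pH ~7), end (pH ~12.5)
```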