954 results for Monte-Carlo method
Abstract:
Pulmonary crackling and the formation of liquid bridges are problems that have attracted the attention of scientists for centuries. To study these phenomena, a canonical cubic lattice-gas-like model was developed to explain the rupture of liquid bridges in lung airways [A. Alencar et al., 2006, PRE]. Here, we further develop this model and add entropy analysis to study thermodynamic properties such as free energy and force. The simulations were performed using the Monte Carlo method with the Metropolis algorithm. Exchanges between gas and liquid particles were performed randomly according to Kawasaki dynamics and weighted by the Boltzmann factor. Each particle, which can be solid (s), liquid (l) or gas (g), has 26 neighbors: 6 + 12 + 8, at distances 1, √2 and √3, respectively. The energy of a lattice site m is calculated by the expression Em = Σk=1..26 Ji(m)j(k), in which i, j ∈ {g, l, s}. Specifically, we studied the surface free energy of a liquid bridge, trapped between two planes, as its height is changed. Two methods were considered: first, only the internal energy was calculated; then the entropy was also taken into account. No difference in the surface free energy was found between these two methods. We calculate the force exerted by the liquid bridge between the two planes from the numerical surface free energy. This force is strong for small heights and decreases as the distance between the two planes is increased. The liquid-gas system was also characterized by studying the variation of the internal energy and heat capacity with temperature. For that, simulations were performed with the same proportion of liquid and gas particles but different lattice sizes. The scaling of the liquid-gas system was also studied, at low temperature, using different values of the interaction Jij.
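The sampling scheme described above can be sketched as follows. This is a minimal illustration, not the authors' code: only liquid and gas particles are used (no solid walls), the lattice is small and periodic, and the coupling values in `J` are invented placeholders.

```python
import math
import random

L = 6  # small periodic cubic lattice, purely illustrative
J = {('l', 'l'): -1.0, ('l', 'g'): 0.0, ('g', 'g'): 0.0}  # placeholder couplings

def coupling(a, b):
    return J.get((a, b), J.get((b, a), 0.0))

# all 26 neighbor offsets: 6 + 12 + 8 at distances 1, sqrt(2), sqrt(3)
OFFSETS = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
           for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]

def site_energy(lat, x, y, z):
    """E_m = sum over the 26 neighbors k of J_{i(m) j(k)}."""
    s = lat[(x, y, z)]
    return sum(coupling(s, lat[((x + dx) % L, (y + dy) % L, (z + dz) % L)])
               for dx, dy, dz in OFFSETS)

def total_energy(lat):
    return 0.5 * sum(site_energy(lat, *p) for p in lat)  # halve double counting

def kawasaki_sweep(lat, beta, rng):
    """One sweep of particle-exchange (Kawasaki) moves, Metropolis-accepted.
    Swapping two particles conserves the number of each species."""
    sites = list(lat)
    for _ in range(len(sites)):
        a, b = rng.sample(sites, 2)
        if lat[a] == lat[b]:
            continue
        e_old = site_energy(lat, *a) + site_energy(lat, *b)
        lat[a], lat[b] = lat[b], lat[a]
        e_new = site_energy(lat, *a) + site_energy(lat, *b)
        dE = e_new - e_old  # any shared a-b bond cancels between old and new
        if dE > 0 and rng.random() >= math.exp(-beta * dE):
            lat[a], lat[b] = lat[b], lat[a]  # reject: swap back

rng = random.Random(1)
lat = {(x, y, z): ('l' if rng.random() < 0.5 else 'g')
       for x in range(L) for y in range(L) for z in range(L)}
for _ in range(20):
    kawasaki_sweep(lat, beta=1.0, rng=rng)
```

Because moves are exchanges rather than single-spin flips, the liquid fraction is conserved exactly, which is what makes Kawasaki dynamics appropriate for a fixed amount of liquid in the bridge.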
Abstract:
Quantitative structure–activity relationships (QSARs) developed to evaluate the percentage of inhibition of STa-stimulated (Escherichia coli) cGMP accumulation in T84 cells are calculated by the Monte Carlo method. This endpoint represents a measure of the biological activity of a substance against diarrhea. The statistical quality of the developed models is quite good. The approach is tested using three random splits of the data into training and test sets. The statistical characteristics for the three splits are the following: (1) n = 20, r2 = 0.7208, q2 = 0.6583, s = 16.9, F = 46 (training set); n = 11, r2 = 0.8986, s = 14.6 (test set); (2) n = 19, r2 = 0.6689, q2 = 0.5683, s = 17.6, F = 34 (training set); n = 12, r2 = 0.8998, s = 12.1 (test set); and (3) n = 20, r2 = 0.7141, q2 = 0.6525, s = 14.7, F = 45 (training set); n = 11, r2 = 0.8858, s = 19.5 (test set). Based on the models proposed here, hypothetical compounds that could be useful agents against diarrhea are suggested.
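The quality criteria quoted above (r² on a data set, leave-one-out cross-validated q²) can be illustrated with a simple sketch. This is not the abstract's actual Monte Carlo descriptor-optimization model; a one-descriptor linear fit with invented data stands in for it, only to show how r² and q² are computed.

```python
def fit_ls(x, y):
    """Least-squares slope and intercept of y against a single descriptor x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return b, my - b * mx

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    my = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def q2_loo(x, y):
    """Leave-one-out cross-validated q^2: refit without each point,
    predict it, then score the held-out predictions."""
    preds = []
    for i in range(len(x)):
        xt, yt = x[:i] + x[i + 1:], y[:i] + y[i + 1:]
        b, a = fit_ls(xt, yt)
        preds.append(b * x[i] + a)
    return r2(y, preds)
```

q² is always at or below r² on the same set, which is why the abstract reports both: a large gap between them signals overfitting.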
Abstract:
Máster Universitario en Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)
Abstract:
The conversion coefficients from air kerma to ICRU operational dose equivalent quantities for ENEA’s realization of the X-radiation qualities L10-L35 of the ISO “Low Air Kerma rate” series (L), N10-N40 of the ISO “Narrow spectrum” series (N) and H10-H60 of the ISO “High Air-kerma rate” series (H), plus two beams at 5 kV and 7.5 kV, were determined by utilising X-ray spectrum measurements. The pulse-height spectra were measured using a planar high-purity germanium spectrometer (HPGe) and unfolded to fluence spectra using a stripping procedure, which was then validated with Monte Carlo-generated data of the spectrometer response. The portable HPGe detector has a diameter of 8.5 mm and a thickness of 5 mm. The entrance window of the crystal is collimated by a 0.5 mm thick aluminum ring to an open diameter of 6.5 mm. The crystal is mounted at a distance of 5 mm from the beryllium window (thickness 25.4 micron). The Monte Carlo method (MCNP-4C) was used to calculate the efficiency, escape and Compton curves of the planar HPGe detector in the 5-60 keV energy range. These curves were used for the determination of photon spectra produced by the X-ray machine SEIFERT ISOVOLT 160 kV in order to allow a precise characterization of photon beams in the low energy range, according to ISO 4037. The detector was modelled with the MCNP computer code and validated with experimental data. To verify the measuring and stripping procedures, the first and second half-value layers and the air kerma rate were calculated from the counts spectra and compared with the values measured using a free-air ionization chamber. For each radiation quality, the spectrum was characterized by the parameters given in ISO 4037-1. The conversion coefficients from the air kerma to the ICRU operational quantities Hp(10), Hp(0.07), H’(0.07) and H*(10) were calculated using monoenergetic conversion coefficients.
The results are discussed with respect to ISO 4037-4, and compared with published results for low-energy X-ray spectra. The main motivation for this work was the lack of a treatment of the low photon energy region (from a few keV up to about 60 keV).
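The stripping idea used above, unfolding a pulse-height spectrum to a fluence spectrum by removing each energy's partial-energy response starting from the highest bin, can be sketched generically. The response matrix and efficiencies below are invented toys; in the work above they come from the MCNP model of the HPGe detector.

```python
def strip_spectrum(counts, response, efficiency):
    """Unfold a measured pulse-height spectrum by stripping.

    counts[i]      : measured counts in bin i
    response[j][i] : fraction of events of true energy j recorded in bin i < j
                     (escape/Compton tail), zero for i >= j
    efficiency[j]  : full-energy-peak efficiency for true energy j
    Returns the reconstructed true (fluence-like) spectrum."""
    n = len(counts)
    true = [0.0] * n
    work = list(counts)
    for j in range(n - 1, -1, -1):   # start from the highest-energy bin
        true[j] = work[j] / efficiency[j]
        for i in range(j):           # subtract its partial-energy tail below
            work[i] -= true[j] * response[j][i]
    return true
```

Since only higher energies contribute tails to lower bins, a single top-down pass inverts the (triangular) response exactly, which is what makes stripping attractive compared with a full matrix inversion.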
Abstract:
This thesis deals with inflation theory, focussing on the model of Jarrow & Yildirim, which is nowadays used when pricing inflation derivatives. After recalling the main results about short and forward interest rate models, the dynamics of the main components of the market are derived. Then the most important inflation-indexed derivatives are explained (zero coupon swap, year-on-year, cap and floor), and their pricing procedure is shown step by step. Calibration is explained and performed with a common method and with a heuristic, non-standard one. The model is also enriched with credit risk, which allows one to take into account the possibility of bankruptcy of the counterparty of a contract. In this context, the general method of pricing is derived, with the introduction of defaultable zero-coupon bonds, and the Monte Carlo method is treated in detail and used to price a concrete example of a contract. Appendices: A: martingale measures, Girsanov's theorem and the change of numeraire. B: some aspects of the theory of stochastic differential equations; in particular, the solution of linear SDEs, and the Feynman-Kac theorem, which shows the connection between SDEs and partial differential equations. C: some useful results about the normal distribution.
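A defaultable zero-coupon bond of the kind introduced above can be priced by Monte Carlo in a few lines. This is a hedged sketch, not the thesis model: the short rate r, the default intensity lam and the recovery fraction are taken constant (all values invented), so that a closed form exists to check the simulation against.

```python
import math
import random

def defaultable_zcb_mc(r, lam, recovery, T, n_paths, seed=0):
    """Monte Carlo price of a defaultable zero-coupon bond with constant
    short rate r, constant default intensity lam and recovery paid at default."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        tau = rng.expovariate(lam)              # default time ~ Exp(lam)
        if tau > T:
            total += math.exp(-r * T)           # survival: full notional at T
        else:
            total += recovery * math.exp(-r * tau)  # recovery at default time
    return total / n_paths

def defaultable_zcb_exact(r, lam, recovery, T):
    """Closed form: e^{-(r+lam)T} + R * lam/(r+lam) * (1 - e^{-(r+lam)T})."""
    surv = math.exp(-(r + lam) * T)
    default_leg = recovery * lam / (r + lam) * (1.0 - surv)
    return surv + default_leg
```

The same path-by-path structure carries over when r and lam are stochastic; then the exponentials are replaced by simulated discount factors and the closed form is no longer available, which is exactly when the Monte Carlo method earns its keep.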
Abstract:
Using the path-integral Monte Carlo method, phase diagrams of physisorbed molecular layers on graphite are investigated. The use of realistic potentials and the treatment of all translational and rotational degrees of freedom allow a quantitative comparison with experiments. In the monolayer, krypton atoms form a commensurate lattice, with the atoms located above the center of every third graphite honeycomb. The processes at the melting transition are dominated by the desorption of a few atoms. The argon layer on graphite, in contrast, is incommensurate. Diatomic nitrogen molecules form an orientationally ordered low-temperature phase (herringbone structure). Quantum fluctuations lower the transition temperature calculated with classical methods by 12%, which matches the experimental value of 28 K. The anisotropy and the dipole moment of carbon monoxide lead to a dipolar-ordered low-temperature phase. The structure, which has not been resolved experimentally, can be identified in the quantum simulation as an antiferroelectric herringbone structure. The phase transition at 6 K is very close to the experimental value (5.2 K). For the argon-nitrogen mixed systems, the phase diagram is constructed in the concentration(x)-temperature(T) plane. The transition temperatures agree with those of the experiment. In configurations with random site occupation, the linear molecules exhibit orientational-glass behavior at argon concentrations above 10%. An additional particle-exchange move enables the formation of a pinwheel phase in the mixed systems, in which the argon atoms adopt a superstructure. This phase is found experimentally in the argon-carbon monoxide mixed system, whose x-T phase diagram is in good agreement with the simulation results. The explicit inclusion of quantum mechanics in the computer simulations contributes substantially to clarifying the phase behavior and to determining the transition temperatures of the low-temperature structures.
Abstract:
In this work, the phase transitions of a single polymer chain were investigated using the Monte Carlo method. The bond-fluctuation model was used for the simulation, with an attractive square-well potential acting between all monomers of the polymer chain. Three types of moves were introduced to relax the polymer chain properly: the hopping move, the reptation move and the pivot move. To check the excluded-volume interaction and to determine the number of neighbors of each monomer, a hierarchical search algorithm was introduced. The density of states of the model was determined by means of the Wang-Landau algorithm. From it, thermodynamic quantities were calculated in order to study the phase transitions of the single polymer chain. We first investigated a free polymer chain. The coil-globule transition appears as a continuous transition, in which the coil collapses to a globule. The globule-globule transition at lower temperatures is a first-order phase transition, with coexistence of the liquid and the solid globule, the latter having a crystalline structure. In the thermodynamic limit the transition temperatures are identical, which corresponds to a vanishing of the liquid phase. In two dimensions, the model shows a continuous coil-globule transition with a locally ordered structure. We further investigated a polymer mushroom, i.e. a grafted polymer chain, between two repulsive walls at distance D. The phase behavior of the polymer chain shows a dimensional crossover. Both the grafting and the confinement promote the coil-globule transition, with a symmetry breaking, since the extension of the polymer chain parallel to the walls shrinks faster than that perpendicular to the walls. The confinement hinders the globule-globule transition, whereas the grafting appears to have no influence. The transition temperatures in the thermodynamic limit are again identical within the error bars. The specific heat of the same model, but with a repulsive square-well potential, shows a Schottky anomaly, typical of a two-level system.
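The Wang-Landau algorithm mentioned above can be sketched on a deliberately tiny system. This is not the thesis' bond-fluctuation code: the toy here is N independent two-level units (energy = number of excited units), chosen because the exact density of states g(E) = binomial(N, E) is known, and because such a two-level system produces the Schottky anomaly in the specific heat noted above.

```python
import math
import random

def wang_landau(N, f_final=1e-6, flat=0.8, seed=0):
    """Estimate log g(E) for N independent two-level units (E = number excited)."""
    rng = random.Random(seed)
    state = [0] * N
    E = 0
    lng = [0.0] * (N + 1)   # running estimate of log g(E)
    H = [0] * (N + 1)       # visit histogram for the flatness check
    lnf = 1.0               # modification factor, reduced down to f_final
    while lnf > f_final:
        for _ in range(1000):
            i = rng.randrange(N)
            dE = 1 if state[i] == 0 else -1
            # accept the flip with probability min(1, g(E) / g(E + dE))
            delta = lng[E] - lng[E + dE]
            if delta >= 0 or rng.random() < math.exp(delta):
                state[i] ^= 1
                E += dE
            lng[E] += lnf   # always update at the current energy
            H[E] += 1
        if min(H) > flat * sum(H) / len(H):   # histogram flat enough?
            H = [0] * (N + 1)
            lnf /= 2.0
    return lng
```

Once log g(E) is known, all canonical averages follow by reweighting with exp(-E/kT) at any temperature, which is how the thermodynamic quantities above are obtained from a single simulation.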
Abstract:
Proxy data are essential for the investigation of climate variability on time scales longer than the historical meteorological observation period. The potential value of a proxy depends on our ability to understand and quantify the physical processes that relate the corresponding climate parameter and the signal in the proxy archive. These processes can be explored under present-day conditions. In this thesis, both statistical and physical models are applied for their analysis, focusing on two specific types of proxies, lake sediment data and stable water isotopes. In the first part of this work, the basis is established for statistically calibrating new proxies from lake sediments in western Germany. A comprehensive meteorological and hydrological data set is compiled and statistically analyzed. In this way, meteorological time series are identified that can be applied for the calibration of various climate proxies. A particular focus is laid on the investigation of extreme weather events, which have rarely been the objective of paleoclimate reconstructions so far. Subsequently, a concrete example of a proxy calibration is presented. Maxima in the quartz grain concentration from a lake sediment core are compared to recent windstorms. The latter are identified from the meteorological data with the help of a newly developed windstorm index, combining local measurements and reanalysis data. The statistical significance of the correlation between extreme windstorms and signals in the sediment is verified with the help of a Monte Carlo method. This correlation is fundamental for employing lake sediment data as a new proxy to reconstruct windstorm records of the geological past. The second part of this thesis deals with the analysis and simulation of stable water isotopes in atmospheric vapor on daily time scales.
In this way, a better understanding of the physical processes determining these isotope ratios can be obtained, which is an important prerequisite for the interpretation of isotope data from ice cores and the reconstruction of past temperature. In particular, the focus here is on the deuterium excess and its relation to the environmental conditions during evaporation of water from the ocean. As a basis for the diagnostic analysis and for evaluating the simulations, isotope measurements from Rehovot (Israel) are used, provided by the Weizmann Institute of Science. First, a Lagrangian moisture source diagnostic is employed in order to establish quantitative linkages between the measurements and the evaporation conditions of the vapor (and thus to calibrate the isotope signal). A strong negative correlation between relative humidity in the source regions and measured deuterium excess is found. On the contrary, sea surface temperature in the evaporation regions does not correlate well with deuterium excess. Although requiring confirmation by isotope data from different regions and longer time scales, this weak correlation might be of major importance for the reconstruction of moisture source temperatures from ice core data. Second, the Lagrangian source diagnostic is combined with a Craig-Gordon fractionation parameterization for the identified evaporation events in order to simulate the isotope ratios at Rehovot. In this way, the Craig-Gordon model can be directly evaluated with atmospheric isotope data, and better constraints for uncertain model parameters can be obtained. A comparison of the simulated deuterium excess with the measurements reveals that a much better agreement can be achieved using a wind speed independent formulation of the non-equilibrium fractionation factor instead of the classical parameterization introduced by Merlivat and Jouzel, which is widely applied in isotope GCMs. 
Finally, the first steps of the implementation of water isotope physics in the limited-area COSMO model are described, and an approach is outlined that allows simulated isotope ratios to be compared to measurements in an event-based manner by using a water tagging technique. The good agreement between model results from several case studies and measurements at Rehovot demonstrates the applicability of the approach. Because the model can be run with high, potentially cloud-resolving spatial resolution, and because it contains sophisticated parameterizations of many atmospheric processes, a complete implementation of isotope physics will allow detailed, process-oriented studies of the complex variability of stable isotopes in atmospheric waters in future research.
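The Monte Carlo significance test used in the first part of the thesis to verify the windstorm-sediment correlation can be sketched as a permutation test. The procedure is generic and the data below are invented: one series is shuffled many times, and the p-value is the fraction of shuffles whose correlation is at least as strong as the observed one.

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def perm_pvalue(x, y, n_perm=2000, seed=0):
    """Monte Carlo p-value for the null hypothesis of no correlation:
    shuffle y, count how often |r| of the shuffled pair >= observed |r|."""
    rng = random.Random(seed)
    obs = abs(pearson(x, y))
    hits = 0
    ys = list(y)
    for _ in range(n_perm):
        rng.shuffle(ys)
        if abs(pearson(x, ys)) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one rule avoids reporting p = 0
```

The attraction of this test for proxy calibration is that it makes no distributional assumption about the sediment signal; significance comes entirely from the shuffled ensemble.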
Abstract:
To assist rational compound design of organic semiconductors, two problems need to be addressed. First, the material morphology has to be known at an atomistic level. Second, with the morphology at hand, an appropriate charge transport model needs to be developed in order to link charge carrier mobility to structure.

The former can be addressed by generating atomistic morphologies using molecular dynamics simulations. However, the accessible range of time- and length-scales is limited. To overcome these limitations, systematic coarse-graining methods can be used. In the first part of the thesis, the Versatile Object-oriented Toolkit for Coarse-graining Applications is introduced, which provides a platform for the implementation of coarse-graining methods. Tools to perform Boltzmann inversion, iterative Boltzmann inversion, inverse Monte Carlo, and force-matching are available and have been tested on a set of model systems (water, methanol, propane and a single hexane chain). Advantages and problems of each specific method are discussed.

In partially disordered systems, the second issue is closely connected to constructing appropriate diabatic states between which charge transfer occurs. In the second part of the thesis, the description initially used for small conjugated molecules is extended to conjugated polymers. Here, charge transport is modeled by introducing conjugated segments on which charge carriers are localized. Inter-chain transport is then treated within a high-temperature non-adiabatic Marcus theory, while an adiabatic rate expression is used for intra-chain transport. The charge dynamics is simulated using the kinetic Monte Carlo method.

The entire framework is finally employed to establish a relation between the morphology and the charge mobility of the neutral and doped states of polypyrrole, a conjugated polymer.
It is shown that for short oligomers, charge carrier mobility is insensitive to the orientational molecular ordering and is determined by the threshold transfer integral which connects percolating clusters of molecules that form interconnected networks. The value of this transfer integral can be related to the radial distribution function. Hence, charge mobility is mainly determined by the local molecular packing and is independent of the global morphology, at least in such a non-crystalline state of a polymer.
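The charge-dynamics step described above, Marcus rates fed into a kinetic Monte Carlo walk, can be sketched as follows. This is a hedged illustration, not the thesis code: the transfer integral J, reorganization energy lam and site-energy difference dE are invented placeholders, and the hopping network is reduced to a two-site toy.

```python
import math
import random

KB_T = 0.025  # eV, roughly room temperature

def marcus_rate(J, lam, dE):
    """High-temperature non-adiabatic Marcus rate:
    omega = (2*pi/hbar) * J^2 / sqrt(4*pi*lam*kT) * exp(-(dE + lam)^2 / (4*lam*kT))."""
    hbar = 6.582e-16  # eV * s
    pref = 2.0 * math.pi / hbar * J ** 2 / math.sqrt(4.0 * math.pi * lam * KB_T)
    return pref * math.exp(-(dE + lam) ** 2 / (4.0 * lam * KB_T))

def kmc_walk(rates, start, n_steps, seed=0):
    """Gillespie-type kinetic Monte Carlo: rates[i] = list of (target, rate).
    Returns (final site, elapsed time)."""
    rng = random.Random(seed)
    site, t = start, 0.0
    for _ in range(n_steps):
        targets = rates[site]
        ktot = sum(r for _, r in targets)
        t += -math.log(1.0 - rng.random()) / ktot   # exponential waiting time
        u = rng.random() * ktot                     # choose a hop by rate
        acc = 0.0
        for tgt, r in targets:
            acc += r
            if u <= acc:
                site = tgt
                break
    return site, t
```

In a real morphology the `rates` table would hold one entry per neighboring conjugated segment, with J and dE evaluated from the atomistic structure; the mobility then follows from the mean-square displacement of many such walks.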
Abstract:
In the large-maturity limit, we compute explicitly the local volatility surface for the Heston model through Dupire's formula, with Fourier pricing of the respective derivatives of the call price. Then we verify that the prices of European call options produced by the Heston model coincide with those given by the local volatility model where the local volatility is computed as described above.
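Dupire's formula itself is compact enough to sketch. With zero rates and dividends it reads σ_loc²(K, T) = 2 (∂C/∂T) / (K² ∂²C/∂K²). The sketch below takes the derivatives by finite differences; as a self-contained sanity check it feeds in Black-Scholes prices with a flat volatility, for which the local volatility must reproduce that flat value (in the abstract the call prices would instead come from Fourier pricing under Heston).

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, sigma):
    """Black-Scholes call price with zero rates and dividends."""
    d1 = (math.log(S / K) + 0.5 * sigma ** 2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * norm_cdf(d2)

def dupire_local_vol(price, K, T, dK=0.01, dT=1e-4):
    """Dupire local volatility at (K, T) from a call-price surface price(K, T),
    using central finite differences for dC/dT and d2C/dK2."""
    dCdT = (price(K, T + dT) - price(K, T - dT)) / (2.0 * dT)
    d2CdK2 = (price(K + dK, T) - 2.0 * price(K, T) + price(K - dK, T)) / dK ** 2
    return math.sqrt(2.0 * dCdT / (K ** 2 * d2CdK2))
```

In practice the denominator (the butterfly price) is small away from the money, so the step sizes and the smoothness of the input surface matter; that is why semi-analytic derivatives from Fourier pricing, as in the thesis, are preferable to differencing market quotes.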
Abstract:
Among all possible realizations of quark and antiquark assembly, the nucleon (the proton and the neutron) is the most stable of all hadrons and consequently has been the subject of intensive studies. Its mass, shape, radius and more complex representations of its internal structure have been measured for several decades using different probes. The proton (spin 1/2) is described by the electric GE and magnetic GM form factors, which characterise its internal structure. The simplest way to measure the proton form factors consists in measuring the angular distribution of electron-proton elastic scattering, accessing the so-called Space-Like region where q2 < 0. Using the crossed channel antiproton proton <--> e+e-, one accesses another kinematical region, the so-called Time-Like region where q2 > 0. However, due to the antiproton proton <--> e+e- threshold q2th, only the kinematical domain q2 > q2th > 0 is available. To access the unphysical region, one may use the antiproton proton --> pi0 e+e- reaction, where the pi0 takes away a part of the system energy, allowing q2 to be varied between q2th and almost 0. This thesis aims to show the feasibility of such measurements with the PANDA detector, which will be installed on the new high intensity antiproton ring at the FAIR facility at Darmstadt. To describe the antiproton proton --> pi0 e+e- reaction, a Lagrangian-based approach is developed. The 5-fold differential cross section is determined and related to linear combinations of hadronic tensors. Under the assumption of one-nucleon exchange, the hadronic tensors are expressed in terms of the 2 complex proton electromagnetic form factors. An extraction method is developed which provides access to the proton electromagnetic form factor ratio R = |GE|/|GM| and, for the first time in an unpolarized experiment, to the cosine of the phase difference. Such measurements have never been performed in the unphysical region up to now.
Extended simulations were performed to show how the ratio R and the cosine can be extracted from the positron angular distribution. Furthermore, a model is developed for the antiproton proton --> pi0 pi+ pi- background reaction, considered the most dangerous one. The background-to-signal cross section ratio was estimated under different cut combinations of the particle identification information from the different detectors and of the kinematic fits. The background contribution can be reduced to the percent level or even less. The corresponding signal efficiency ranges from a few % to 30%. The precision of the determination of the ratio R and of the cosine is estimated from the expected counting rates via the Monte Carlo method. A part of this thesis is also dedicated to more technical work with the study of the prototype of the electromagnetic calorimeter and the determination of its resolution.
Abstract:
In recent years it has become increasingly important to handle credit risk, the risk associated with the possibility of bankruptcy. More precisely, if a derivative provides for a payment at a certain time T but before that time the counterparty defaults, at maturity the payment cannot be effectively performed, so the owner of the contract loses it entirely or in part. This means that the payoff of the derivative, and consequently its price, depends on the underlying of the basic derivative and on the risk of bankruptcy of the counterparty. To value and to hedge credit risk in a consistent way, one needs to develop a quantitative model. We have studied analytical approximation formulas and numerical methods such as the Monte Carlo method in order to calculate the price of a bond. We have illustrated how to obtain fast and accurate pricing approximations by expanding the drift and diffusion as a Taylor series, and we have compared the second- and third-order approximations of the bond and call price with an accurate Monte Carlo simulation. We have analysed the JDCEV model with constant or stochastic interest rate. We have provided numerical examples that illustrate the effectiveness and versatility of our methods. We have used Wolfram Mathematica and Matlab.
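The kind of comparison described above, a Monte Carlo bond price checked against an accurate benchmark, can be sketched in a tractable setting. This is a hedged stand-in, not the JDCEV model of the abstract: a Vasicek short rate is used because its zero-coupon bond price P(0,T) = E[exp(-∫₀ᵀ r_t dt)] has a closed form, and all parameter values are invented.

```python
import math
import random

def vasicek_bond_mc(r0, kappa, theta, sigma, T, n_paths=20000, n_steps=100, seed=0):
    """Monte Carlo bond price under dr = kappa*(theta - r) dt + sigma dW,
    using an Euler scheme and a left-endpoint rule for the rate integral."""
    rng = random.Random(seed)
    dt = T / n_steps
    acc = 0.0
    for _ in range(n_paths):
        r, integral = r0, 0.0
        for _ in range(n_steps):
            integral += r * dt
            r += kappa * (theta - r) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        acc += math.exp(-integral)
    return acc / n_paths

def vasicek_bond_exact(r0, kappa, theta, sigma, T):
    """Closed-form Vasicek zero-coupon bond price P(0, T) = A * exp(-B * r0)."""
    B = (1.0 - math.exp(-kappa * T)) / kappa
    A = math.exp((theta - sigma ** 2 / (2.0 * kappa ** 2)) * (B - T)
                 - sigma ** 2 * B ** 2 / (4.0 * kappa))
    return A * math.exp(-B * r0)
```

In models without a closed form (such as JDCEV with stochastic rates), the Monte Carlo side stays essentially unchanged while the benchmark role is taken over by the Taylor-expansion approximations discussed in the abstract.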
Abstract:
In this thesis we present a broadly based computer simulation study of two-dimensional colloidal crystals under different external conditions. In order to fully understand the phenomena which occur when the system is being compressed or when the walls are being sheared, it proved necessary to also study the basic motion of the particles and the diffusion processes which occur in the case without these external forces. In the first part of this thesis we investigate the structural transition in the number of rows which occurs when the crystal is compressed by placing the structured walls closer together. Previous attempts to locate this transition were impeded by huge hysteresis effects. We were able to determine the transition point with higher precision by applying both the Schmid-Schilling thermodynamic integration method and the phase switch Monte Carlo method in order to determine the free energies. These simulations showed not only that the phase switch method can successfully be applied to systems with a few thousand particles and a soft crystalline structure with a superimposed pattern of defects, but also that this method is far more efficient than thermodynamic integration when free energy differences are to be calculated. Additionally, the phase switch method enabled us to distinguish between several energetically very similar structures and to determine which one of them was actually stable. Another aspect considered in the first results chapter of this thesis is the ensemble inequivalence which can be observed when the structural transition is studied in the NpT and in the NVT ensemble. The second part of this work deals with the basic motion occurring in colloidal crystals confined by structured walls. Several cases are compared where the walls are placed in different positions, thereby introducing an incommensurability into the crystalline structure.
Also the movement of the solitons, which are created in the course of the structural transition, is investigated. Furthermore, we present results showing that not only the well-known mechanism of vacancies and interstitial particles leads to diffusion in our model system, but that cooperative ring rotation phenomena also occur. In this part and the following we applied Langevin dynamics simulations. In the last chapter of this work we present results on the effect of shear on the colloidal crystal. The shear was implemented by moving the walls with constant velocity. We have observed shear banding and, depending on the shear velocity, the breaking of the inner part of the crystal into several domains with different orientations. At very high shear velocities, holes are created in the structure; they originate close to the walls, but also diffuse into the inner part of the crystal.
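The Langevin dynamics scheme mentioned above can be sketched in its simplest, overdamped form, x_{n+1} = x_n + (F/γ)Δt + √(2kTΔt/γ)·N(0,1). This is a minimal single-particle sketch, not the interacting wall-confined system of the thesis; a harmonic trap (F = -kx) with invented parameters serves as a test case, since its stationary variance must approach kT/k.

```python
import math
import random

def langevin_trajectory(k_spring, gamma, kT, dt, n_steps, seed=0):
    """Overdamped (Brownian) Langevin dynamics of one particle in a
    harmonic trap F = -k_spring * x, Euler-Maruyama discretization."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    amp = math.sqrt(2.0 * kT * dt / gamma)   # strength of the thermal kick
    for _ in range(n_steps):
        force = -k_spring * x
        x += force / gamma * dt + amp * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs
```

For the sheared crystal, the force term would additionally contain the pair interactions and the wall potentials, with the moving walls entering through time-dependent boundary positions; the thermostat structure of the update is unchanged.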
Abstract:
In condensed matter systems, the interfacial tension plays a central role for a multitude of phenomena. It is the driving force for nucleation processes, determines the shape and structure of crystalline structures and is important for industrial applications. Despite its importance, the interfacial tension is hard to determine in experiments and also in computer simulations. While for liquid-vapor interfaces there exist sophisticated simulation methods to compute the interfacial tension, current methods for solid-liquid interfaces produce unsatisfactory results.

As a first approach to this topic, the influence of the interfacial tension on nuclei is studied within the three-dimensional Ising model. This model is well suited because, despite its simplicity, one can learn much about the nucleation of crystalline nuclei. Below the so-called roughening temperature, nuclei in the Ising model are no longer spherical but become cubic because of the anisotropy of the interfacial tension. This is similar to crystalline nuclei, which are in general not spherical but more like a convex polyhedron with flat facets on the surface. In this context, the problem of distinguishing between the two bulk phases in the vicinity of the diffuse droplet surface is addressed. A new definition is found which correctly determines the volume of a droplet in a given configuration when compared to the volume predicted by simple macroscopic assumptions.

To compute the interfacial tension of solid-liquid interfaces, a new Monte Carlo method called the “ensemble switch method” is presented, which allows one to compute the interfacial tension of liquid-vapor interfaces as well as solid-liquid interfaces with great accuracy. In the past, the dependence of the interfacial tension on the finite size and shape of the simulation box has often been neglected, although there is a nontrivial dependence on the box dimensions.
As a consequence, one needs to systematically increase the box size and extrapolate to infinite volume in order to accurately predict the interfacial tension. Therefore, a thorough finite-size scaling analysis is established in this thesis. Logarithmic corrections to the finite-size scaling are motivated and identified, which are of leading order and therefore must not be neglected. The astounding feature of these logarithmic corrections is that they do not depend at all on the model under consideration. Using the ensemble switch method, the validity of a finite-size scaling ansatz containing the aforementioned logarithmic corrections is carefully tested and confirmed. Combining the finite-size scaling theory with the ensemble switch method, the interfacial tension of several model systems, ranging from the Ising model to colloidal systems, is computed with great accuracy.
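The extrapolation step described above can be sketched as a linear least-squares fit of a finite-size ansatz of the form γ(L) = γ_∞ + a·ln(L)/L + b/L, reading off the L → ∞ limit γ_∞. The logarithmic correction term follows the text; the data and coefficients below are synthetic inventions used only to exercise the fit.

```python
import math

def fit_gamma_inf(Ls, gammas):
    """Fit gamma(L) = g_inf + a*ln(L)/L + b/L by linear least squares
    in the basis {1, ln(L)/L, 1/L}; returns the extrapolated g_inf."""
    rows = [[1.0, math.log(L) / L, 1.0 / L] for L in Ls]
    # normal equations A^T A x = A^T y for the three coefficients
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * g for r, g in zip(rows, gammas)) for i in range(3)]
    # solve the 3x3 system by Gauss-Jordan elimination with pivoting
    M = [row[:] + [v] for row, v in zip(ata, aty)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [M[i][3] / M[i][i] for i in range(3)]
    return x[0]
```

Because ln(L)/L decays more slowly than 1/L, omitting the logarithmic term biases γ_∞ even for quite large boxes, which is why the text insists these corrections must not be neglected.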
Abstract:
Coarse graining is a popular technique used in physics to speed up the computer simulation of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential or its parameters from given structural data. Due to discrepancies between model and reality, the potential is not unique, so that the stability of such a method and its convergence to a meaningful solution are issues.

In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters that have a negligible influence on the structure of the fluid and which cause non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the mentioned discrepancies. Then, we compare it to the existing physical methods, the Iterative Boltzmann Inversion and the Inverse Monte Carlo method, which are fast and well adapted to the problem, but sometimes have convergence problems.

From an analysis of the Iterative Boltzmann Inversion, we elaborate a meaningful approximation of the structure and use it to derive a modification of the Levenberg-Marquardt method. We employ the latter for the reconstruction of the interaction parameters from experimental data for liquid argon and nitrogen. We show that the modified method is stable, convergent and fast. Further, the singular value analysis of the structure and its approximation allows us to determine the crucial interaction parameters, that is, to simplify the modeling of interactions. Therefore, our results build a rigorous bridge between the inverse problem from physics and the powerful solution tools from mathematics.
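A basic Levenberg-Marquardt iteration of the kind referred to above solves, at each step, (JᵀJ + μI)Δp = -Jᵀr for the residual r(p) = y - f(p), decreasing the damping μ after a successful step and increasing it after a rejected one. The sketch below is generic, not the thesis' modified method: the Jacobian is taken by forward finite differences, and the test model is an invented two-parameter exponential rather than a fluid-structure map.

```python
def levenberg_marquardt(f, p0, y, mu=1e-2, n_iter=50, h=1e-6):
    """Fit parameters p so that f(p) matches data y, minimizing ||y - f(p)||^2."""
    p = list(p0)
    n = len(p)
    residual = lambda q: [yi - fi for yi, fi in zip(y, f(q))]
    sq = lambda r: sum(v * v for v in r)
    r = residual(p)
    for _ in range(n_iter):
        # forward-difference Jacobian of the residual, stored column-wise
        J = []
        for j in range(n):
            q = list(p)
            q[j] += h
            J.append([(b - a) / h for a, b in zip(r, residual(q))])
        # damped normal equations (J^T J + mu I) dp = -J^T r
        A = [[sum(J[i][k] * J[j][k] for k in range(len(r)))
              + (mu if i == j else 0.0) for j in range(n)] for i in range(n)]
        g = [-sum(J[i][k] * r[k] for k in range(len(r))) for i in range(n)]
        M = [row[:] + [v] for row, v in zip(A, g)]
        for c in range(n):  # Gauss-Jordan solve with pivoting
            piv = max(range(c, n), key=lambda rr: abs(M[rr][c]))
            M[c], M[piv] = M[piv], M[c]
            for rr in range(n):
                if rr != c:
                    fct = M[rr][c] / M[c][c]
                    M[rr] = [a - fct * b for a, b in zip(M[rr], M[c])]
        dp = [M[i][n] / M[i][i] for i in range(n)]
        p_new = [a + b for a, b in zip(p, dp)]
        r_new = residual(p_new)
        if sq(r_new) < sq(r):
            p, r, mu = p_new, r_new, mu * 0.5   # accept: trust the model more
        else:
            mu *= 10.0                          # reject: increase damping
    return p
```

The damping term μI is what makes the method "regularizing": directions with small singular values of J, exactly the weak parameters identified by the singular value analysis above, receive correspondingly small updates instead of unstable large ones.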