951 results for In-loop-simulations
Abstract:
Top quark studies play an important role in the physics program of the Large Hadron Collider (LHC). The energy and luminosity reached allow the acquisition of a large amount of data, especially in kinematic regions never studied before. This thesis presents the measurement of the ttbar production differential cross section on data collected by ATLAS in 2012 in proton-proton collisions at \sqrt{s} = 8 TeV, corresponding to an integrated luminosity of 20.3 fb^{−1}. The measurement is performed for ttbar events in the semileptonic channel in which the hadronically decaying top quark has a transverse momentum above 300 GeV. The hadronic top quark decay is reconstructed as a single large-radius jet and identified using jet substructure properties. The final differential cross section is compared with several theoretical distributions, yielding a discrepancy of about 25% between data and predictions, depending on the MC generator. Furthermore, the kinematic distributions of the ttbar production process are very sensitive to the choice of the parton distribution function (PDF) set used in the simulations and could provide constraints on the gluon PDF. In particular, this thesis presents a systematic study of the proton PDFs, varying several PDF sets and checking which one best describes the experimental distributions. The boosted techniques applied in this measurement will be fundamental in the next data taking at \sqrt{s} = 13 TeV, when a large number of heavy particles with high momentum will be produced.
Abstract:
In this work, the effect of different excipients on the permeability of BCS class III substances was investigated. Three pharmaceutical excipients were examined with regard to their potential use as permeation enhancers in drug formulations. In addition, the involvement of bile salts in the food interaction of trospium was studied. Complexes of trospium and λ-carrageenan were prepared. An improved permeation, most likely caused by mucoadhesion, was highly reproducible in the Ussing chamber model. In vivo, the effect was seen only in some animals and large standard deviations occurred. Trospium forms ion pairs with bile salts, which led to a better permeability of the drug. In the presence of dietary fats this effect was absent. Based on these results, an involvement of the trospium-bile salt interaction in the food effect can be considered likely. In the Caco-2 model, an improvement of trospium permeability by the addition of Eudragit E had already been shown. It could now be shown that the excipient also achieves improved permeation in vivo in rats. The permeation enhancement of aciclovir by the addition of chitosan HCl was also investigated. In the Caco-2 model there was a significant permeation enhancement. In the Ussing chamber model the permeation was not improved. In loop studies, a tendency towards permeation enhancement was seen only at high excipient concentrations.
Abstract:
This thesis deals with the development and improvement of linear-scaling algorithms for electronic-structure-based molecular dynamics. Molecular dynamics is a method for the computer simulation of the complex interplay between atoms and molecules at finite temperature. A decisive advantage of this method is its high accuracy and predictive power. However, the computational cost, which in principle scales cubically with the number of atoms, prevents its application to large systems and long time scales. Starting from a new formalism based on the grand-canonical potential and a factorization of the density matrix, the diagonalization of the corresponding Hamiltonian matrix is avoided. The formalism exploits the fact that the Hamiltonian and the density matrix are sparse due to localization. This reduces the computational cost so that it scales linearly with the system size. To demonstrate its efficiency, the resulting algorithm is applied to a system of liquid methane subjected to extreme pressure (about 100 GPa) and extreme temperature (2000-8000 K). In the simulation, methane dissociates at temperatures above 4000 K. The formation of sp²-bonded polymeric carbon is observed. The simulations provide no evidence for the formation of diamond and therefore have implications for existing planetary models of Neptune and Uranus. Since avoiding the diagonalization of the Hamiltonian matrix involves the inversion of matrices, the problem of computing an (inverse) p-th root of a given matrix is additionally addressed. This results in a new formula for symmetric positive definite matrices. It generalizes the Newton-Schulz iteration, Altman's formula for bounded non-singular operators, and Newton's method for computing roots of functions. It is proven that the order of convergence is always at least quadratic and that adaptive adjustment of a parameter q leads to better results in all cases.
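As background to the matrix root problem mentioned above, the sketch below shows the classical Newton-Schulz-type iteration for the inverse p-th root of a symmetric positive definite matrix; it is not the generalized formula with the adaptive parameter q developed in the thesis, and the scaling of the initial guess is one common choice rather than the thesis's prescription.

    import numpy as np

    def inverse_pth_root(A, p, iters=50, tol=1e-12):
        # Classical Newton-Schulz-type iteration X_{k+1} = X_k ((p+1) I - X_k^p A) / p,
        # which converges to A^(-1/p) when || I - X_0^p A || < 1. Only matrix
        # multiplications are used, so sparsity of A can in principle be exploited.
        n = A.shape[0]
        I = np.eye(n)
        # One common initial guess: scale so the convergence condition holds.
        X = (1.0 / np.linalg.norm(A, 2)) ** (1.0 / p) * I
        for _ in range(iters):
            E = I - np.linalg.matrix_power(X, p) @ A   # residual of X^p A = I
            X = X @ (I + E / p)                        # same as X ((p+1) I - X^p A) / p
            if np.linalg.norm(E, "fro") < tol:
                break
        return X

    # Self-check on a random symmetric positive definite matrix (p = 2).
    rng = np.random.default_rng(0)
    B = rng.standard_normal((5, 5))
    A = B @ B.T + 5.0 * np.eye(5)
    X = inverse_pth_root(A, 2)
    print(np.allclose(np.linalg.matrix_power(X, 2) @ A, np.eye(5), atol=1e-8))

For p = 1 the update reduces to the familiar Newton-Schulz iteration X(2I - AX) for the matrix inverse, which is the special case the thesis's formula generalizes.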
Abstract:
In condensed matter systems, the interfacial tension plays a central role in a multitude of phenomena. It is the driving force for nucleation processes, determines the shape and structure of crystalline structures and is important for industrial applications. Despite its importance, the interfacial tension is hard to determine in experiments and also in computer simulations. While sophisticated simulation methods exist to compute liquid-vapor interfacial tensions, current methods for solid-liquid interfaces produce unsatisfactory results.

As a first approach to this topic, the influence of the interfacial tension on nuclei is studied within the three-dimensional Ising model. This model is well suited because, despite its simplicity, one can learn much about the nucleation of crystalline nuclei. Below the so-called roughening temperature, nuclei in the Ising model are no longer spherical but become cubic because of the anisotropy of the interfacial tension. This is similar to crystalline nuclei, which are in general not spherical but more like a convex polyhedron with flat facets on the surface. In this context, the problem of distinguishing between the two bulk phases in the vicinity of the diffuse droplet surface is addressed. A new definition is found which correctly determines the volume of a droplet in a given configuration when compared to the volume predicted by simple macroscopic assumptions.

To compute the interfacial tension of solid-liquid interfaces, a new Monte Carlo method called the "ensemble switch method" is presented, which allows the interfacial tension of liquid-vapor as well as solid-liquid interfaces to be computed with great accuracy. In the past, the dependence of the interfacial tension on the finite size and shape of the simulation box has often been neglected, although there is a nontrivial dependence on the box dimensions. As a consequence, one needs to systematically increase the box size and extrapolate to infinite volume in order to accurately predict the interfacial tension. Therefore, a thorough finite-size scaling analysis is established in this thesis. Logarithmic corrections to the finite-size scaling are motivated and identified; they are of leading order and therefore must not be neglected. The astounding feature of these logarithmic corrections is that they do not depend at all on the model under consideration. Using the ensemble switch method, the validity of a finite-size scaling ansatz containing the aforementioned logarithmic corrections is carefully tested and confirmed. Combining the finite-size scaling theory with the ensemble switch method, the interfacial tension of several model systems, ranging from the Ising model to colloidal systems, is computed with great accuracy.
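For orientation, a finite-size extrapolation with a leading logarithmic correction of the kind described above can be written schematically as follows; the coefficients a and b are generic fit parameters, and the actual prefactors and their claimed model independence are what the thesis derives and tests.

    \gamma(L) \;\simeq\; \gamma_\infty + \frac{a\,\ln L + b}{L^{\,d-1}},
    \qquad \gamma_\infty = \lim_{L \to \infty} \gamma(L),

where \gamma(L) is the finite-size estimate from a box of linear dimension L, d is the spatial dimension, and \gamma_\infty is the quantity of interest, obtained by fitting the estimates for increasing L.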
Abstract:
In this thesis, different approaches for the modeling and simulation of the blood protein fibrinogen are presented. The approaches are meant to systematically connect the multiple time and length scales involved in the dynamics of fibrinogen in solution and at inorganic surfaces. The first part of the thesis covers simulations of fibrinogen at the all-atom level. Simulations of the fibrinogen protomer and dimer are performed in explicit solvent to characterize the dynamics of fibrinogen in solution. These simulations reveal an unexpectedly large and fast bending motion that is facilitated by molecular hinges located in the coiled-coil region of fibrinogen. This behavior is characterized by a bending and a dihedral angle, and the distribution of these angles is measured. As a consequence of the atomistic detail of the simulations, it is possible to illuminate small-scale behavior in the binding pockets of fibrinogen that hints at a previously unknown allosteric effect. In a second step, atomistic simulations of the fibrinogen protomer are performed at graphite and mica surfaces to investigate initial adsorption stages. These simulations highlight the different adsorption mechanisms at the hydrophobic graphite surface and the charged, hydrophilic mica surface. It is found that the initial adsorption happens in a preferred orientation on mica. Many effects of practical interest involve aggregates of many fibrinogen molecules. To investigate such systems, time and length scales need to be simulated that are not attainable in atomistic simulations. It is therefore necessary to develop lower-resolution models of fibrinogen. This is done in the second part of the thesis. First, a systematically coarse-grained model is derived and parametrized based on the atomistic simulations of the first part. In this model the fibrinogen molecule is represented by 45 beads instead of nearly 31,000 atoms. The intra-molecular interactions of the beads are modeled as a heterogeneous elastic network, while inter-molecular interactions are assumed to be a combination of electrostatic and van der Waals interactions. A method is presented that determines the charges assigned to the beads by matching the electrostatic potential of the atomistic simulation. Lastly, a phenomenological model is developed that represents fibrinogen by five beads connected by rigid rods with two hinges. This model only captures the large-scale dynamics seen in the atomistic simulations but can shed light on experimental observations of fibrinogen conformations at inorganic surfaces.
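To make the idea of a heterogeneous elastic network concrete, here is a minimal sketch in which bonds are placed between all bead pairs within a cutoff and each spring constant is estimated from the fluctuations of that pair distance in a reference trajectory. The cutoff, the fluctuation-matching formula and the synthetic input are illustrative assumptions, not the parametrization used in the thesis.

    import numpy as np

    KBT = 2.479          # kJ/mol at roughly 298 K
    CUTOFF = 2.0         # nm, hypothetical bonding cutoff

    def pair_distances(traj):
        # traj: (n_frames, n_beads, 3) coarse-grained coordinates
        diff = traj[:, :, None, :] - traj[:, None, :, :]
        return np.linalg.norm(diff, axis=-1)          # (n_frames, n_beads, n_beads)

    def build_elastic_network(traj):
        d = pair_distances(traj)
        d_mean, d_var = d.mean(axis=0), d.var(axis=0)
        bonds = []
        n = d_mean.shape[0]
        for i in range(n):
            for j in range(i + 1, n):
                if d_mean[i, j] < CUTOFF:
                    # Fluctuation-matching estimate: stiff where the pair distance
                    # barely fluctuates, soft where it fluctuates a lot.
                    k_ij = 3.0 * KBT / max(d_var[i, j], 1e-6)
                    bonds.append((i, j, d_mean[i, j], k_ij))
        return bonds

    def elastic_energy(coords, bonds):
        # Harmonic network energy of one configuration (n_beads, 3).
        e = 0.0
        for i, j, r0, k in bonds:
            r = np.linalg.norm(coords[i] - coords[j])
            e += 0.5 * k * (r - r0) ** 2
        return e

    # Synthetic stand-in for a mapped atomistic trajectory of 45 beads.
    traj = np.random.default_rng(1).normal(size=(100, 45, 3)) * 0.3 \
           + np.linspace(0, 10, 45)[None, :, None]
    bonds = build_elastic_network(traj)
    print(len(bonds), elastic_energy(traj[0], bonds))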
Abstract:
Numerous bacterial pathogens subvert cellular functions of eukaryotic host cells by the injection of effector proteins via dedicated secretion systems. The type IV secretion system (T4SS) effector protein BepA from Bartonella henselae is composed of an N-terminal Fic domain and a C-terminal Bartonella intracellular delivery domain, the latter being responsible for T4SS-mediated translocation into host cells. A proteolysis-resistant fragment (residues 10-302) that includes the Fic domain shows autoadenylylation activity and adenylyl transfer onto HeLa cell extract proteins, as demonstrated by autoradiography on incubation with α-[(32)P]-ATP. Its crystal structure, determined to 2.9-Å resolution by the SeMet-SAD method, exhibits the canonical Fic fold including the HPFxxGNGRxxR signature motif, with several elaborations in loop regions and an additional β-rich domain at the C-terminus. On crystal soaking with ATP/Mg(2+), additional electron density indicated the presence of a PP(i)/Mg(2+) moiety, the side product of the adenylylation reaction, in the anion binding nest of the signature motif. On the basis of this information and that of the recent structure of IbpA(Fic2) in complex with the eukaryotic target protein Cdc42, we present a detailed model for the ternary complex of Fic with the two substrates, ATP/Mg(2+) and target tyrosine. The model is consistent with an in-line nucleophilic attack of the deprotonated side-chain hydroxyl group onto the α-phosphorus of the nucleotide to accomplish AMP transfer. Furthermore, a general, sequence-independent mechanism of target positioning through antiparallel β-strand interactions between enzyme and target is suggested.
Abstract:
Smoothing splines are a popular approach to non-parametric regression problems. We use periodic smoothing splines to fit a periodic signal plus noise model to data for which we assume there are underlying circadian patterns. In the smoothing spline methodology, choosing an appropriate smoothness parameter is an important step in practice. In this paper, we draw a connection between smoothing splines and REACT estimators that motivates new criteria for choosing the smoothness parameter. The new criteria are compared to three existing methods, namely cross-validation, generalized cross-validation, and the generalized maximum likelihood criterion, by a Monte Carlo simulation and by an application to the study of circadian patterns. For most of the situations presented in the simulations, including the practical example, the new criteria outperform the three existing criteria.
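The REACT-based criteria introduced in the paper are not reproduced here; as background, the sketch below shows how one of the comparison methods, generalized cross-validation, selects the smoothness parameter of a linear smoother. A circulant second-difference penalty is used as a simple stand-in for a periodic smoothing spline, and the toy data are hypothetical.

    import numpy as np

    def periodic_penalty(n):
        # Circulant second-difference operator (wraps around, hence "periodic").
        D = np.zeros((n, n))
        for i in range(n):
            D[i, i] = -2.0
            D[i, (i - 1) % n] = 1.0
            D[i, (i + 1) % n] = 1.0
        return D

    def gcv_score(lam, y, DtD):
        # GCV(lambda) = n * ||(I - S) y||^2 / tr(I - S)^2 for the linear smoother
        # y_hat = S y with S = (I + lambda * D'D)^{-1}.
        n = len(y)
        S = np.linalg.inv(np.eye(n) + lam * DtD)
        resid = y - S @ y
        df_resid = n - np.trace(S)
        return n * np.sum(resid ** 2) / df_resid ** 2

    # Noisy periodic ("circadian-like") toy signal.
    rng = np.random.default_rng(2)
    t = np.linspace(0, 2 * np.pi, 96, endpoint=False)
    y = np.sin(t) + 0.3 * np.cos(2 * t) + 0.25 * rng.standard_normal(t.size)

    D = periodic_penalty(t.size)
    DtD = D.T @ D
    grid = np.logspace(-2, 4, 60)
    best_lam = min(grid, key=lambda lam: gcv_score(lam, y, DtD))
    print("GCV-selected lambda:", best_lam)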
Abstract:
Slow conduction and unidirectional conduction block (UCB) are key mechanisms of reentry. Following abrupt changes in heart rate, dynamic changes of conduction velocity (CV) and structurally determined UCB may critically influence arrhythmogenesis. Using patterned cultures of neonatal rat ventricular myocytes grown on microelectrode arrays, we investigated the dynamics of CV in linear strands and the behavior of UCB in tissue expansions following an abrupt decrease in pacing cycle length (CL). Ionic mechanisms underlying rate-dependent conduction changes were investigated using the Pandit-Clark-Giles-Demir model. In linear strands, CV gradually decreased upon a reduction of CL from 500 ms to 230-300 ms. In contrast, at very short CLs (110-220 ms), CV first decreased before increasing again. The simulations suggested that the initial conduction slowing resulted from gradually increasing action potential duration (APD), decreasing diastolic intervals, and increasing postrepolarization refractoriness, which impaired Na(+) current (I(Na)) recovery. Only at very short CLs did APD subsequently shorten again due to increasing Na(+)/K(+) pump current secondary to intracellular Na(+) accumulation, which caused recovery of CV. Across tissue expansions, the degree of UCB gradually increased at CLs of 250-390 ms, whereas at CLs of 180-240 ms, it first increased and subsequently decreased. In the simulations, reduction of inward currents caused by increasing intracellular Na(+) and Ca(2+) concentrations contributed to UCB progression, which was reversed by increasing Na(+)/K(+) pump activity. In conclusion, CV and UCB follow intricate dynamics upon an abrupt decrease in CL that are determined by the interplay among I(Na) recovery, postrepolarization refractoriness, APD changes, ion accumulation, and Na(+)/K(+) pump function.
Abstract:
This report presents the development of a Stochastic Knock Detection (SKD) method for combustion knock detection in a spark-ignition engine using a model-based design approach. A Knock Signal Simulator (KSS) was developed as the plant model for the engine. The KSS generates cycle-to-cycle accelerometer knock intensities using a Monte Carlo method, drawing from a lognormal distribution whose parameters were predetermined from engine tests and depend on spark timing, engine speed and load. The lognormal distribution has been shown in previous studies to be a good approximation to the distribution of measured knock intensities over a range of engine conditions and spark timings for multiple engines. The SKD method is implemented in a Knock Detection Module (KDM), which processes the knock intensities generated by the KSS with a stochastic distribution estimation algorithm and outputs estimates of the high and low knock intensity levels, characterizing knock and the reference level respectively. These estimates are then used to determine a knock factor, which provides a quantitative measure of the knock level and can be used as a feedback signal to control engine knock. The knock factor is analyzed and compared with a traditional knock detection method under various engine operating conditions. To verify the effectiveness of the SKD method, a knock controller was also developed and tested in a model-in-the-loop (MIL) system. The objective of the knock controller is to allow the engine to operate as close as possible to its borderline spark timing without significant engine knock. The controller parameters were tuned to minimize the cycle-to-cycle variation in spark timing and the settling time of the controller in responding to a step increase in spark advance resulting in the onset of engine knock. The simulation results showed that the combined system can adequately model engine knock and evaluate knock control strategies over a wide range of engine operating conditions.
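To illustrate the signal flow described above (KSS generating lognormal knock intensities, KDM reducing them to a knock factor), here is a minimal sketch. The lognormal parameters, the quantile-based level estimates and the ratio-style knock factor are hypothetical stand-ins, not the report's calibrated algorithm.

    import numpy as np

    def kss_cycle_intensities(n_cycles, mu, sigma, rng):
        # Knock Signal Simulator stand-in: cycle-to-cycle knock intensities drawn
        # from a lognormal distribution; in the report the parameters depend on
        # spark timing, engine speed and load.
        return rng.lognormal(mean=mu, sigma=sigma, size=n_cycles)

    def kdm_knock_factor(intensities, low_q=0.50, high_q=0.98):
        # Knock Detection Module stand-in: estimate a low (reference) and a high
        # intensity level from the recent distribution and reduce them to one
        # knock factor usable as a feedback signal.
        low = np.quantile(intensities, low_q)
        high = np.quantile(intensities, high_q)
        return high / low

    rng = np.random.default_rng(3)
    borderline = kss_cycle_intensities(500, mu=0.0, sigma=0.4, rng=rng)  # light knock
    heavy = kss_cycle_intensities(500, mu=0.6, sigma=0.9, rng=rng)       # advanced spark
    print(kdm_knock_factor(borderline), kdm_knock_factor(heavy))

A controller would compare the knock factor against a target and retard or advance spark timing accordingly; that closed loop is what the MIL system in the report evaluates.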
Abstract:
For human beings, the origin of life has always been an interesting and mysterious matter, particularly how life arose from inorganic matter through natural processes. Polymerization is always involved in such processes. In this paper we built what we refer to as ideal and physical models to simulate spontaneous polymerization based on certain physical principles. As the modeling confirms, without any input of external energy, small and simple inorganic molecules formed bigger and more complicated molecules, which are necessary ingredients of all living organisms. In our simulations, we utilized actual ranges of the parameters according to their experimentally observed values. The results of the simulations are in good agreement with the nature of polymerization. After sorting through all the models that were built, we arrived at a final model that, it is hoped, can simply and efficiently describe spontaneous polymerization using only three parameters: the dipole moment, the distance between molecules, and the temperature.
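The paper's model itself is not reproduced here, but the following sketch shows how the three quoted parameters can be combined into a single dimensionless quantity by comparing the head-to-tail dipole-dipole interaction energy with the thermal energy kT; the example values are arbitrary.

    import math

    EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
    KB = 1.380649e-23         # Boltzmann constant, J/K
    DEBYE = 3.33564e-30       # C*m per debye

    def dipole_vs_thermal(p_debye, r_nm, T_kelvin):
        # |U| / kT for two head-to-tail aligned point dipoles a distance r apart:
        # U = -2 p^2 / (4 pi eps0 r^3). A ratio well above 1 suggests the pair can
        # stay associated against thermal agitation; well below 1 suggests it cannot.
        p = p_debye * DEBYE
        r = r_nm * 1e-9
        u = -2.0 * p * p / (4.0 * math.pi * EPS0 * r ** 3)
        return abs(u) / (KB * T_kelvin)

    # e.g. a water-like dipole (1.85 D) at 0.3 nm separation and 300 K:
    print(dipole_vs_thermal(1.85, 0.3, 300.0))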
Abstract:
We consider the 2d XY Model with topological lattice actions, which are invariant against small deformations of the field configuration. These actions constrain the angle between neighbouring spins by an upper bound, or they explicitly suppress vortices (and anti-vortices). Although topological actions do not have a classical limit, they still lead to the universal behaviour of the Berezinskii-Kosterlitz-Thouless (BKT) phase transition — at least up to moderate vortex suppression. In the massive phase, the analytically known Step Scaling Function (SSF) is reproduced in numerical simulations. However, deviations from the expected universal behaviour of the lattice artifacts are observed. In the massless phase, the BKT value of the critical exponent η_c is confirmed. Hence, even though for some topological actions vortices cost zero energy, they still drive the standard BKT transition. In addition, we identify a vortex-free transition point, which deviates from the BKT behaviour.
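A minimal sketch of the constraint-angle variant of such a topological action is given below: every configuration whose nearest-neighbour relative angles all stay below a bound δ has zero action and all others are forbidden, so the Metropolis update reduces to an accept/reject test of the constraint. The lattice size, bound and sweep count are arbitrary, and the vortex-suppressing actions and the observables discussed above (SSF, η_c) are not included.

    import numpy as np

    L, DELTA, SWEEPS = 16, 2.0, 200
    rng = np.random.default_rng(4)
    theta = np.zeros((L, L))                  # cold start satisfies the constraint

    def rel_angle(a, b):
        # Minimal angular difference, mapped into [0, pi].
        return abs((a - b + np.pi) % (2 * np.pi) - np.pi)

    def neighbours(x, y):
        return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

    for _ in range(SWEEPS):
        for _ in range(L * L):
            x, y = rng.integers(L), rng.integers(L)
            proposal = theta[x, y] + rng.uniform(-0.5, 0.5)
            # Accept only if every neighbouring relative angle stays below DELTA;
            # the action is otherwise independent of the configuration.
            if all(rel_angle(proposal, theta[nx, ny]) < DELTA for nx, ny in neighbours(x, y)):
                theta[x, y] = proposal % (2 * np.pi)

    # Average nearest-neighbour alignment as a simple sanity check.
    e = np.mean([np.cos(theta - np.roll(theta, 1, axis=a)) for a in (0, 1)])
    print("mean cos(d_theta):", e)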
Abstract:
In this paper, we present statistical analyses of several types of traffic sources in a 3G network, namely voice, video and data sources. For each traffic source type, measurements were collected in order to, on the one hand, gain a better understanding of the statistical characteristics of the sources and, on the other hand, enable forecasting of traffic behaviour in the network. The latter can be used to estimate service times and quality-of-service parameters. The probability density function, mean, variance, mean square deviation, skewness and kurtosis of the interarrival times are estimated with the Wolfram Mathematica and Crystal Ball statistical tools. Based on the evaluation of packet interarrival times, we show how the gamma distribution can be used in network simulations and in the evaluation of available capacity in opportunistic systems. As a result of our analyses, shape and scale parameters of the gamma distribution are obtained. The data can also be applied in dynamic network configuration in order to avoid potential network congestion or overflows.
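As an illustration of the kind of gamma-distribution fit described above, the sketch below estimates shape and scale parameters from packet interarrival times by maximum likelihood with the location fixed at zero; the data here are synthetic stand-ins, not the paper's 3G measurements.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    # Synthetic interarrival times (ms) standing in for measured packet traces.
    interarrival = rng.gamma(shape=0.8, scale=12.0, size=5000)

    # Maximum-likelihood gamma fit with the location parameter fixed at zero.
    shape, loc, scale = stats.gamma.fit(interarrival, floc=0)
    print(f"shape={shape:.3f}, scale={scale:.3f} ms")
    print("sample mean/var:", interarrival.mean(), interarrival.var())
    print("gamma  mean/var:", shape * scale, shape * scale ** 2)

The fitted shape and scale can then be fed to a traffic generator in a network simulation, which is the use the paper describes.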
Abstract:
Atmospheric circulation modes are important concepts for understanding the variability of atmospheric dynamics. Assuming their spatial patterns to be fixed, such modes are often described by simple indices derived from rather short observational data sets. The increasing length of reanalysis products allows these concepts and assumptions to be scrutinised. Here we investigate the stability of the spatial patterns of Northern Hemisphere teleconnections by using the Twentieth Century Reanalysis as well as several control and transient millennium-scale simulations with coupled models. The observed and simulated centres of action of the two major teleconnection patterns, the North Atlantic Oscillation (NAO) and to some extent the Pacific North American (PNA) pattern, are not stable in time. The currently observed dipole pattern of the NAO, with its centres of action over Iceland and the Azores, split into a north–south dipole pattern in the western Atlantic and a wave-train pattern in the eastern part, connecting the British Isles with West Greenland and the eastern Mediterranean, during the period 1940–1969 AD. The PNA centres of action over Canada shifted southwards, and the one over Florida shifted into the Gulf of Mexico, during the period 1915–1944 AD. The analysis further shows that shifts in the centres of action of either teleconnection pattern are not related to changes in the external forcing applied in transient simulations of the last millennium. Such shifts in the centres of action are accompanied by changes in the relation of local precipitation and temperature to the overlying atmospheric mode. These findings further undermine the assumption of stationarity between local climate/proxy variability and large-scale dynamics that is inherent in proxy-based reconstructions of atmospheric modes, and call for a more robust understanding of atmospheric variability on decadal timescales.
Abstract:
Global wetlands are believed to be climate sensitive, and are the largest natural emitters of methane (CH4). Increased wetland CH4 emissions could act as a positive feedback to future warming. The Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) investigated our present ability to simulate large-scale wetland characteristics and corresponding CH4 emissions. To ensure inter-comparability, we used a common experimental protocol driving all models with the same climate and carbon dioxide (CO2) forcing datasets. The WETCHIMP experiments were conducted for model equilibrium states as well as transient simulations covering the last century. Sensitivity experiments investigated model response to changes in selected forcing inputs (precipitation, temperature, and atmospheric CO2 concentration). Ten models participated, covering the spectrum from simple to relatively complex, including models tailored either for regional or global simulations. The models also varied in methods to calculate wetland size and location, with some models simulating wetland area prognostically, while other models relied on remotely sensed inundation datasets, or an approach intermediate between the two. Four major conclusions emerged from the project. First, the suite of models demonstrates extensive disagreement in simulations of wetland areal extent and CH4 emissions, in both space and time. Simple metrics of wetland area, such as the latitudinal gradient, show large variability, principally between models that use inundation dataset information and those that independently determine wetland area. Agreement between the models improves for zonally summed CH4 emissions, but large variation between the models remains. For annual global CH4 emissions, the models vary by ±40% of the all-model mean (190 Tg CH4 yr−1). Second, all models show a strong positive response to increased atmospheric CO2 concentrations (857 ppm) in both CH4 emissions and wetland area. In response to increasing global temperatures (+3.4 °C globally spatially uniform), on average, the models decreased wetland area and CH4 fluxes, primarily in the tropics, but the magnitude and sign of the response varied greatly. Models were least sensitive to increased global precipitation (+3.9 % globally spatially uniform), with a consistent small positive response in CH4 fluxes and wetland area. Results from the 20th century transient simulation show that interactions between climate forcings could have strong non-linear effects. Third, we presently do not have wetland methane observation datasets adequate to evaluate model fluxes at a spatial scale comparable to model grid cells (commonly 0.5°). This limitation severely restricts our ability to model global wetland CH4 emissions with confidence. Our simulated wetland extents are also difficult to evaluate due to extensive disagreements between wetland mapping and remotely sensed inundation datasets. Fourth, the large range in predicted CH4 emission rates leads to the conclusion that there is both substantial parameter and structural uncertainty in large-scale CH4 emission models, even after uncertainties in wetland areas are accounted for.
Abstract:
The discovery of grid cells in the medial entorhinal cortex (MEC) permits the characterization of hippocampal computation in much greater detail than previously possible. The present study addresses how an integrate-and-fire unit driven by grid-cell spike trains may transform the multipeaked, spatial firing pattern of grid cells into the single-peaked activity that is typical of hippocampal place cells. Previous studies have shown that in the absence of network interactions, this transformation can succeed only if the place cell receives inputs from grids with overlapping vertices at the location of the place cell's firing field. In our simulations, the selection of these inputs was accomplished by fast Hebbian plasticity alone. The resulting nonlinear process was acutely sensitive to small input variations. Simulations differing only in the exact spike timing of grid cells produced different field locations for the same place cells. Place fields became concentrated in areas that correlated with the initial trajectory of the animal; the introduction of feedback inhibitory cells reduced this bias. These results suggest distinct roles for plasticity of the perforant path synapses and for competition via feedback inhibition in the formation of place fields in a novel environment. Furthermore, they imply that variability in MEC spiking patterns or in the rat's trajectory is sufficient for generating a distinct population code in a novel environment and suggest that recalling this code in a familiar environment involves additional inputs and/or a different mode of operation of the network.
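As a schematic illustration of the setup described above, the sketch below drives a leaky integrate-and-fire unit with Poisson spike trains standing in for grid-cell inputs and applies a simple Hebbian rule with multiplicative normalization; all parameters are hypothetical, and the study's grid-cell rate maps and feedback inhibition are not modelled.

    import numpy as np

    rng = np.random.default_rng(6)
    DT, T, N_IN = 1e-3, 20.0, 50                 # time step (s), duration (s), inputs
    TAU_M, V_TH = 20e-3, 0.5                     # membrane time constant, threshold
    rates = rng.uniform(2.0, 15.0, N_IN)         # stand-in for grid-cell firing rates (Hz)
    w = rng.uniform(0.05, 0.10, N_IN)            # initial synaptic weights
    w_total = w.sum()
    trace = np.zeros(N_IN)                       # low-pass presynaptic activity trace
    v, eta, tau_tr = 0.0, 0.05, 50e-3

    for _ in range(int(T / DT)):
        pre = rng.random(N_IN) < rates * DT      # Poisson input spikes in this step
        trace += -trace * DT / tau_tr + pre
        v += -v * DT / TAU_M + w @ pre           # leaky integration of weighted input
        if v >= V_TH:                            # postsynaptic spike
            v = 0.0
            w += eta * trace                     # Hebbian: strengthen recently active inputs
            w *= w_total / w.sum()               # multiplicative normalization -> competition

    print("indices of the strongest inputs:", np.argsort(w)[-5:])

The normalization step is what makes potentiation of some synapses come at the expense of others, a crude stand-in for the competition that the study attributes partly to feedback inhibition.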