888 results for Computational experiment
Abstract:
This work describes the experimental setup and first measurements for the determination of the g-factor of the electron bound in hydrogen-like and lithium-like medium-heavy ions. The high-precision value of the g-factor allows theoretical calculations of bound-state quantum electrodynamics to be tested. The measurements are performed in a triple Penning-trap system, in which, as part of this work, highly charged ions up to $^{28}$Si$^{13+}$ were also produced for the first time in an electron beam ion source/trap developed for this purpose. The determination of the g-factor requires the free cyclotron frequency and the Larmor frequency. The former is calculated from the three eigenfrequencies of the ion stored in the precision trap. In order not to lose the ion during the measurements, its eigenfrequencies are detected non-destructively by coupling to a radio-frequency detection circuit. The free cyclotron frequency could thereby be determined with a relative precision of a few parts in $10^9$. The determination of the Larmor frequency requires precise knowledge of the direction of the electron spin in the magnetic field. This is determined via the continuous Stern-Gerlach effect in the so-called analysis trap, which demands a high stability of the axial frequency of the ion. To achieve this, as well as the high-precision measurements in the precision trap, both traps were characterized in this work with respect to their electric and magnetic properties.
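For background, the quantities mentioned here are connected by standard Penning-trap relations (textbook formulas, not taken from the thesis itself): the free cyclotron frequency follows from the three measured eigenfrequencies via the Brown-Gabrielse invariance theorem, and the g-factor follows from the ratio of the Larmor and cyclotron frequencies,

$$\nu_c^2 = \nu_+^2 + \nu_z^2 + \nu_-^2, \qquad g = 2\,\frac{\nu_L}{\nu_c}\,\frac{q_\mathrm{ion}}{e}\,\frac{m_e}{m_\mathrm{ion}},$$

where $\nu_+$, $\nu_z$ and $\nu_-$ are the modified cyclotron, axial and magnetron frequencies of the stored ion.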
Abstract:
The charmonium production cross section was measured using data from pp collisions at $\sqrt{s}=7$ TeV recorded in 2010 by the ATLAS experiment at the LHC. To improve the necessary understanding of the detector, an energy calibration was performed.

Using electrons from charmonium decays, the energy scale of the electromagnetic calorimeters was studied at low energies. After applying the calibration, the measured energies deviated by less than 0.5% from the energies obtained in Monte Carlo simulations.

With an integrated luminosity of 2.2 pb$^{-1}$, a first measurement of the inclusive cross section for the process pp$\rightarrow$J/$\psi$(e$^{+}$e$^{-}$)+X at $\sqrt{s}=7$ TeV was performed in the accessible range of transverse momenta $p_{T,ee}>7$ GeV and rapidities $|y_{ee}|<2.4$. Differential cross sections were determined as functions of the transverse momentum $p_{T,ee}$ and the rapidity $|y_{ee}|$. Integrating the two distributions yielded the values (85.1±1.9$_{stat}$±11.2$_{syst}$±2.9$_{Lum}$) nb and (75.4±1.6$_{stat}$±11.9$_{syst}$±2.6$_{Lum}$) nb for the inclusive cross section $\sigma$(pp$\rightarrow$J/$\psi$ X)·BR(J/$\psi\rightarrow$e$^{+}$e$^{-}$), which are compatible within the systematic uncertainties.

Comparisons with ATLAS and CMS measurements of the process pp$\rightarrow$J/$\psi$($\mu^{+}\mu^{-}$)+X showed good agreement. For the comparison with theory, predictions of various models at next-to-leading order were combined with contributions at next-to-next-to-leading order. The comparison shows good agreement once the next-to-next-to-leading-order contributions are taken into account.
Abstract:
Thanks to the increasing slenderness and lightness allowed by new construction techniques and materials, the effects of wind on structures have become, in recent decades, a research field of great importance in civil engineering. Thanks to advances in computing power, the numerical simulation of wind tunnel tests has become a valid complementary activity and an attractive alternative for the future. Due to its flexibility, the computational approach has gained importance in recent years with respect to traditional experimental investigation. Even today, however, the computational approach to fluid-structure interaction problems is not as widely adopted as might be expected. The main reason lies in the difficulties encountered in the numerical simulation of the turbulent, unsteady flow conditions generally found around bluff bodies. This thesis aims to provide a guide to the numerical simulation of bridge-deck aerodynamic and aeroelastic behaviour, describing the simulation strategies in detail and setting out guidelines useful for the interpretation of the results.
Abstract:
In this thesis we provide a characterization of probabilistic computation in itself, from a recursion-theoretical perspective, without reducing it to deterministic computation. More specifically, we show that probabilistic computable functions, i.e. those functions which are computed by Probabilistic Turing Machines (PTMs), can be characterized by a natural generalization of Kleene's partial recursive functions which includes, among the initial functions, one that returns either the identity or the successor, each with probability 1/2. We then prove the equi-expressivity of the obtained algebra and the class of functions computed by PTMs. In the second part of the thesis we investigate the relations between our recursion-theoretical framework and sub-recursive classes, in the spirit of Implicit Computational Complexity. More precisely, endowing predicative recurrence with a random base function is proved to lead to a characterization of the polynomial-time computable probabilistic functions.
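As a loose illustration of the random base function (an informal Python sketch, not the thesis's formal algebra; the names and sampling style are ours):

```python
import random

def rand_base(x: int) -> int:
    """The probabilistic initial function: returns x (identity) or
    x + 1 (successor), each with probability 1/2."""
    return x + 1 if random.random() < 0.5 else x

def iterate_rand(x: int, n: int) -> int:
    """Composing rand_base with ordinary recursion: after n applications
    the output is x + Binomial(n, 1/2), i.e. the function computes a
    distribution over integers rather than a single value."""
    for _ in range(n):
        x = rand_base(x)
    return x

# Sampling iterate_rand(0, 4) returns k with probability C(4, k) / 16,
# illustrating how such functions denote distributions over outputs.
```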
Abstract:
In the context of increasing beam energy and luminosity of the LHC accelerator at CERN, it will be important to accurately measure the Machine Induced Background. A new monitoring system will be installed in the CMS cavern to measure the beam background at high radius. This detector, called the Beam Halo Monitor, will provide an online, bunch-by-bunch measurement of the background induced by beam halo interactions, separately for each beam. The detector is composed of synthetic quartz Cherenkov radiators coupled to fast UV-sensitive photomultiplier tubes. The directional and fast response of the system allows the background particles to be discriminated from the dominant flux in the cavern, induced by pp collision debris produced within the 25 ns bunch spacing. The readout electronics of this detector will make use of many components developed for the upgrade of the CMS Hadron Calorimeter electronics, with dedicated firmware and readout adapted to the beam monitoring requirements. The PMT signal will be digitized by a charge-integrating ASIC, providing both the signal rise time and the charge integrated over one bunch crossing. The backend electronics will record bunch-by-bunch histograms, which will be published to CMS and the LHC using the newly designed CMS beam-instrumentation-specific DAQ. A calibration and monitoring system has been designed to generate triggered pulses of UV light to monitor the efficiency of the system. The experimental results validating the design of the detector, the calibration system and the electronics are presented.
Abstract:
The production rates of $b$ and $\bar{b}$ hadrons in $pp$ collisions are not expected to be strictly identical, due to the imbalance between quarks and anti-quarks in the initial state. This phenomenon can be naively related to the fact that the $\bar{b}$ quark produced in the hard scattering might combine with a $u$ or $d$ valence quark from the colliding protons, whereas the same cannot happen for a $b$ quark. This thesis presents the analysis performed to determine the production asymmetries of $B^0$ and $B^0_s$ mesons. The analysis relies on data samples collected by the LHCb detector at the Large Hadron Collider (LHC) during the 2011 and 2012 data-taking periods at two different centre-of-mass energies, $\sqrt{s}=7$ TeV and $\sqrt{s}=8$ TeV, corresponding respectively to integrated luminosities of 1 fb$^{-1}$ and 2 fb$^{-1}$. The production asymmetry is one of the key ingredients for measurements of $CP$ violation in $b$-hadron decays at the LHC, since $CP$ asymmetries must be disentangled from other sources. The measurements of the production asymmetries are performed in bins of $p_\mathrm{T}$ and $\eta$ of the $B$ meson. The values of the production asymmetries, integrated in the ranges $4 < p_\mathrm{T} < 30$ GeV/c and $2.5<\eta<4.5$, are determined to be: \begin{equation} A_\mathrm{P}(B^0)= (-1.00\pm0.48\pm0.29)\%,\nonumber \end{equation} \begin{equation} A_\mathrm{P}(B^0_s)= (\phantom{-}1.09\pm2.61\pm0.61)\%,\nonumber \end{equation} where the first uncertainty is statistical and the second is systematic. The measurement of $A_\mathrm{P}(B^0)$ uses the full statistics collected by LHCb so far, corresponding to an integrated luminosity of 3 fb$^{-1}$, while the measurement of $A_\mathrm{P}(B^0_s)$ uses the first 1 fb$^{-1}$, leaving room for improvement. No clear evidence of a dependence on $p_\mathrm{T}$ or $\eta$ is observed. The results presented in this thesis are the most precise measurements available to date.
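For reference, the production asymmetry measured here is conventionally defined in terms of the production cross sections of the meson and its antiparticle (the standard convention in the LHCb literature, quoted here as an assumption rather than from the abstract):

$$A_\mathrm{P}(B) = \frac{\sigma(\bar{B}) - \sigma(B)}{\sigma(\bar{B}) + \sigma(B)},$$

so a negative $A_\mathrm{P}(B^0)$ corresponds to slightly more $B^0$ than $\bar{B}^0$ mesons being produced.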
Abstract:
Despite the scientific achievements of the last decades in astrophysics and cosmology, the majority of the Universe's energy content is still unknown. A potential solution to the “missing mass problem” is the existence of dark matter in the form of WIMPs. Due to the very small cross section for WIMP-nucleon interactions, the number of expected events is very limited (about 1 event per tonne per year), thus requiring detectors with a large target mass and a low background level. The aim of the XENON1T experiment, the first tonne-scale LXe-based detector, is to be sensitive to WIMP-nucleon cross sections as low as 10^-47 cm^2. To investigate whether such a detector can reach this goal, Monte Carlo simulations are mandatory to estimate the background. To this aim, the GEANT4 toolkit has been used to implement the detector geometry and to simulate the decays from the various background sources, both electromagnetic and nuclear. From the analysis of the simulations, the background level has been found fully acceptable for the purposes of the experiment: about 1 background event in a 2 tonne-year exposure. Using the Maximum Gap method, the XENON1T sensitivity has been evaluated, and the minimum of the WIMP-nucleon cross section has been found at 1.87 x 10^-47 cm^2, at 90% CL, for a WIMP mass of 45 GeV/c^2. The results have been independently cross-checked using the Likelihood Ratio method, which confirmed them with an agreement within less than a factor of two, entirely acceptable considering the intrinsic differences between the two statistical methods. This thesis thus demonstrates that the XENON1T detector will be able to reach its design sensitivity, lowering the limits on the WIMP-nucleon cross section by about 2 orders of magnitude with respect to current experiments.
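As a rough illustration of how exposure and background translate into sensitivity (a back-of-the-envelope Poisson scaling, not the Maximum Gap or Likelihood Ratio machinery used in the thesis; the reference rate below is a hypothetical placeholder):

```python
import math

# With (near-)zero observed events, the 90% CL Poisson upper limit on the
# expected signal is mu_90 = -ln(0.10) ~ 2.303 events, since exp(-mu) = 0.10.
mu_90 = -math.log(0.10)

# Hypothetical normalization: assume a cross section sigma_ref would yield
# rate_ref signal events per tonne-year in the fiducial volume.
sigma_ref = 1e-45   # cm^2 (placeholder)
rate_ref = 100.0    # events per tonne-year at sigma_ref (placeholder)

exposure = 2.0      # tonne-years, as quoted in the abstract
sigma_90 = sigma_ref * mu_90 / (rate_ref * exposure)
print(f"90% CL cross-section limit ~ {sigma_90:.2e} cm^2")  # ~1.2e-47 here
```

The limit scales inversely with exposure as long as the background stays negligible, which is why keeping the expected background near one event is what makes a 10^-47 cm^2 sensitivity reachable.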
Abstract:
The availability of a high-intensity antiproton beam with momentum up to 15 GeV/c at the future FAIR facility will open a unique opportunity to investigate wide areas of nuclear physics with the $\overline{\mathrm{P}}$ANDA (anti$\overline{\mathrm{P}}$roton ANnihilations at DArmstadt) detector. Part of these investigations concern the Electromagnetic Form Factors of the proton in the time-like region and the study of the Transition Distribution Amplitudes, for which feasibility studies have been performed in this thesis.

Moreover, simulations to study the efficiency and the energy resolution of the backward endcap of the $\overline{\mathrm{P}}$ANDA electromagnetic calorimeter are presented. This detector is crucial especially for the reconstruction of processes like $\bar{p}p \rightarrow e^+ e^- \pi^0$, investigated in this work. Different arrangements of dead material were studied. The results show that both the efficiency and the energy resolution of the backward endcap fulfil the requirements for the detection of backward particles, and that this detector is necessary for the reconstruction of the channels of interest.

The study of the annihilation channel $\bar{p}p \rightarrow e^+ e^-$ will improve the knowledge of the Electromagnetic Form Factors in the time-like region, and will help to understand their connection with the Electromagnetic Form Factors in the space-like region. In this thesis the feasibility of a measurement of the $\bar{p}p \rightarrow e^+ e^-$ cross section with $\overline{\mathrm{P}}$ANDA is studied using Monte Carlo simulations. The major background channel $\bar{p}p \rightarrow \pi^+ \pi^-$ is taken into account. The results show a $10^9$ background suppression factor, which assures a sufficiently clean signal with less than 0.1% background contamination. The signal can be measured with an efficiency greater than 30% up to $s = 14\,(\mathrm{GeV}/c)^2$. The Electromagnetic Form Factors are extracted from the reconstructed signal and the corrected angular distribution. Above this $s$ limit, the low cross section will not allow the direct extraction of the Electromagnetic Form Factors; however, the total cross section can still be measured, and an extraction of the Electromagnetic Form Factors remains possible under certain assumptions on the ratio between the electric and magnetic contributions.

The Transition Distribution Amplitudes are new non-perturbative objects describing the transition between a baryon and a meson. They are accessible in hard exclusive processes like $\bar{p}p \rightarrow e^+ e^- \pi^0$. The study of this process with $\overline{\mathrm{P}}$ANDA will test the Transition Distribution Amplitude approach. This work includes a feasibility study for measuring this channel with $\overline{\mathrm{P}}$ANDA; the main background reaction here is $\bar{p}p \rightarrow \pi^+ \pi^- \pi^0$. A background suppression factor of $10^8$ has been achieved while keeping the signal efficiency above 20%.

Part of this work has been published in the European Physical Journal A 44, 373-384 (2010).
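For context, in the one-photon-exchange approximation the form factors enter the $\bar{p}p \rightarrow e^+ e^-$ angular distribution as follows (a standard textbook expression, quoted here as background rather than from the thesis):

$$\frac{d\sigma}{d\cos\theta} = \frac{\pi\alpha^2}{2\beta s}\left[\,|G_M|^2\,(1+\cos^2\theta) + \frac{1}{\tau}\,|G_E|^2\,\sin^2\theta\,\right], \qquad \tau = \frac{s}{4m_p^2},\quad \beta = \sqrt{1-1/\tau},$$

so $|G_E|$ and $|G_M|$ can be separated from the measured $\cos\theta$ dependence, while the integrated cross section constrains only a combination of the two — which is why assumptions on their ratio are needed at high $s$.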
Abstract:
The first part of this work deals with the solution of the inverse problem in the field of X-ray spectroscopy. An original strategy to solve the inverse problem using the maximum entropy principle is illustrated, and the code UMESTRAT was built to apply this strategy in a semi-automatic way; its application is shown with a computational example. The second part of this work deals with the improvement of the X-ray Boltzmann model, by studying two radiative interactions neglected in current photon models. First, the characteristic line emission due to Compton ionization is studied. A strategy is developed that allows this contribution to be evaluated for the K, L and M shells of all elements with Z from 11 to 92. The single-shell Compton/photoelectric ratio is evaluated as a function of the primary photon energy, and the energies at which the Compton interaction becomes the prevailing ionization process for the considered shells are derived. Finally, a new kernel for the XRF from Compton ionization is introduced. Second, the bremsstrahlung radiative contribution due to secondary electrons is characterized in terms of space, angle and energy, for all elements with Z = 1-92 in the energy range 1-150 keV, using the Monte Carlo code PENELOPE. It is demonstrated that the bremsstrahlung contribution can be well approximated by an isotropic point photon source. A data library comprising the energy distributions of the bremsstrahlung is created, and a new bremsstrahlung kernel is developed which allows this contribution to be introduced into the modified Boltzmann equation. An example of application to the simulation of a synchrotron experiment is shown.
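As a reminder of the principle invoked here (the generic maximum entropy formulation, not the specific functional implemented in UMESTRAT): among all candidate spectra $f(E)$ compatible with the measured constraints $\int f(E)\,A_k(E)\,dE = d_k$, one selects the spectrum maximizing the relative entropy

$$S[f] = -\int f(E)\,\ln\!\frac{f(E)}{m(E)}\,dE,$$

where $m(E)$ is a prior model. This yields the least-committal solution consistent with the data, which is what makes the approach attractive for ill-posed unfolding problems.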
Abstract:
The measurement of a possible electric dipole moment of the free neutron requires the most precise possible knowledge and monitoring of the magnetic field inside the n2EDM spectrometer chamber. The free spin precession of hyperpolarized ³He, combined with signal readout via optically pumped Cs magnetometers, can be used to achieve sensitivity to magnetic field fluctuations at the level of a few femtotesla. A ³He/Cs test facility was set up at the Institute of Physics of the University of Mainz to investigate the possibilities of reading out the ³He spin precession with a lamp-pumped Cs magnetometer. In addition, an ultra-compact and transportable polarization unit was developed and installed, which makes it possible to reach a ³He hyperpolarization of up to 55 percent. The polarized ³He gas is then automatically compressed and filled into two magnetometer cells in a sandwich arrangement inside the n2EDM spectrometer chamber. This work presents the results of the first measurements, successfully carried out in January 2012, in which ³He gas was hyperpolarized in the ultra-compact polarization unit and transferred via the guiding fields of a transfer system into a four-layer mu-metal shield. Inside the magnetic shield, the free ³He spin precession was then unambiguously detected with a lamp-pumped Cs magnetometer.
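For scale (a textbook relation, not a number from the abstract): the free precession frequency is set by the ³He gyromagnetic ratio,

$$f = \frac{\gamma_{^3\mathrm{He}}}{2\pi}\,B \approx 32.4\ \mathrm{MHz/T} \times B,$$

so in a holding field of order 1 µT the spins precess at roughly 32 Hz, and a field change of one femtotesla shifts this frequency by only about $3\times10^{-8}$ Hz; hence the need for long spin coherence times and extremely stable readout.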
Abstract:
The aim of this work was to explore the practical applicability of molecular dynamics at different length and time scales. From nanoparticle systems through colloids and polymers to biological systems such as membranes and, finally, living cells, a broad range of materials was considered from a theoretical standpoint. In this dissertation five chemistry-related problems are addressed by means of theoretical and computational methods. The main results can be outlined as follows. (1) A systematic study of the effect of the concentration, chain length, and charge of surfactants on fullerene aggregation is presented. The long-discussed problem of the location of C60 in micelles was addressed, and fullerenes were found in the hydrophobic region of the micelles. (2) The interactions between graphene sheets of increasing size and a phospholipid membrane are quantitatively investigated. (3) A model was proposed to study the structure, stability, and dynamics of MoS2, a material well known for its tribological properties. The telescopic movement of nested nanotubes and the sliding of MoS2 layers are simulated. (4) A mathematical model was proposed to gain understanding of the coupled diffusion-swelling process in poly(lactic-co-glycolic acid), PLGA. (5) A soft-matter cell model is developed to explore the interaction of living cells with artificial surfaces. The effect of the surface properties on the adhesion dynamics of cells is discussed.
Abstract:
Self-organising pervasive ecosystems of devices are set to become a major vehicle for delivering infrastructure and end-user services. The inherent complexity of such systems poses new challenges to those who want to master it by applying the principles of engineering. The recent growth in the number and distribution of devices with decent computational and communication abilities, which suddenly accelerated with the massive diffusion of smartphones and tablets, is delivering a world with a much higher density of devices in space. Also, communication technologies seem to be focussing on short-range device-to-device (P2P) interactions, with technologies such as Bluetooth and Near-Field Communication gaining greater adoption. Locality and situatedness become key to providing the best possible experience to users, and the classic model of a centralised, enormously powerful server gathering and processing data becomes less and less efficient as device density grows. Accomplishing complex global tasks without a centralised controller responsible for aggregating data, however, is challenging. In particular, there is a local-to-global issue that makes the application of engineering principles difficult, to say the least: designing device-local programs that, through interaction, guarantee a certain global service level. In this thesis, we first analyse the state of the art in coordination systems, then motivate the work by describing the main issues of pre-existing tools and practices and identifying the improvements that would benefit the design of such complex software ecosystems. The contribution can be divided into three main branches. First, we introduce a novel simulation toolchain for pervasive ecosystems, designed to allow good expressiveness while retaining high performance. Second, we leverage existing coordination models and patterns in order to create new spatial structures. Third, we introduce a novel language, based on the existing "Field Calculus" and integrated with the aforementioned toolchain, designed to be usable for practical aggregate programming.
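As a toy illustration of the local-to-global issue (a generic self-stabilising gradient written in plain Python, not the Field Calculus language developed in the thesis): every device repeatedly applies the same local rule against its neighbours' values, and a globally consistent hop-distance field to the nearest source emerges.

```python
INF = float("inf")

def gradient_round(dist, neighbours, sources):
    """One synchronous round of the classic gradient rule:
    0 on source devices, otherwise 1 + the minimum neighbour estimate."""
    new = {}
    for dev in dist:
        if dev in sources:
            new[dev] = 0.0
        else:
            new[dev] = min((dist[n] + 1.0 for n in neighbours[dev]), default=INF)
    return new

# A line of 5 devices, with device 0 acting as the source.
neighbours = {i: [j for j in (i - 1, i + 1) if 0 <= j < 5] for i in range(5)}
dist = {i: INF for i in range(5)}
for _ in range(5):  # the field stabilises in O(network diameter) rounds
    dist = gradient_round(dist, neighbours, {0})
print(dist)  # {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0, 4: 4.0}
```

No device ever sees the whole network, yet the global distance field stabilises; designing and composing such building blocks predictably is exactly what aggregate-programming languages aim to make routine.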
Abstract:
In this work, simulations of liquids were carried out at the molecular level using different multiscale techniques. These allow an effective description of the liquid that requires less computer time and can therefore describe phenomena on longer time and length scales.

A key ingredient is a simplified ("coarse-grained") model, obtained in a systematic procedure from simulations of the detailed model, such that selected properties of the detailed model (e.g. pair correlation function, pressure) are reproduced.

Algorithms were investigated that allow a concurrent coupling of the detailed and the simplified model ("Adaptive Resolution Scheme", AdResS). Here the detailed model is used in a predefined subvolume of the liquid (e.g. near a surface), while the rest is described by the simplified model.

To this end, a method ("thermodynamic force") was developed which makes the coupling possible even when the two models are in different thermodynamic states. In addition, a novel coupling algorithm (H-AdResS) was described, which formulates the coupling through a Hamiltonian; in this algorithm a correction analogous to the thermodynamic force is possible at lower computational cost.

As an application of these basic techniques, path-integral molecular dynamics (MD) simulations of water were investigated. This method makes it possible to include nuclear quantum effects (delocalization, zero-point energy) in the simulation. First, a multiscale technique ("force matching") was used to extract an effective interaction from a detailed simulation based on density functional theory. The path-integral MD simulation improves the description of the intramolecular structure in comparison with experimental data. The model is also suitable for concurrent coupling within one simulation, in which a water molecule (described by 48 point particles in the path-integral MD model) is coupled to a simplified model (a single point particle). In this way a water-vacuum interface could be simulated, with only the surface described by the path-integral model and the rest by the simplified model.
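For reference, the standard AdResS coupling interpolates atomistic (AT) and coarse-grained (CG) pair forces through a smooth weighting function (the general form from the AdResS literature, quoted here as background):

$$\mathbf{F}_{\alpha\beta} = w(X_\alpha)\,w(X_\beta)\,\mathbf{F}^{\mathrm{AT}}_{\alpha\beta} + \big[1 - w(X_\alpha)\,w(X_\beta)\big]\,\mathbf{F}^{\mathrm{CG}}_{\alpha\beta},$$

where $w\in[0,1]$ equals 1 in the atomistic region and 0 in the coarse-grained region. The thermodynamic force mentioned above is an additional correction acting in the transition region, while H-AdResS replaces this force-level mixing with an interpolation at the level of the Hamiltonian.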
Abstract:
The assessment of historical structures is a significant need for the next generations, as historical monuments represent the community's identity and have an important cultural value to society. Most historical structures were built of masonry, one of the oldest and most common construction materials used in the building sector since ancient times. Masonry is also considered a complex material: as a composite of brick units and mortar, its mechanical behaviour varies with the geometry and quality of its components, which in turn affects the structural performance of the building.
Abstract:
Heart diseases are the leading cause of death worldwide, both for men and women. However, the ionic mechanisms underlying many cardiac arrhythmias and genetic disorders are not completely understood, leading to a limited efficacy of the currently available therapies and leaving many open questions for cardiac electrophysiologists. On the other hand, experimental data availability is still a great issue in this field: most experiments are performed in vitro and/or using animal models (e.g. rabbit, dog and mouse), even when the final aim is to better understand the electrical behaviour of the in vivo human heart in either physiological or pathological conditions. Computational modelling constitutes a primary tool in cardiac electrophysiology: in silico simulations, based on the available experimental data, may help to understand the electrical properties of the heart and the ionic mechanisms underlying a specific phenomenon. Once validated, mathematical models can be used for making predictions and testing hypotheses, thus suggesting potential therapeutic targets. This PhD thesis aims to apply computational modelling of the human single-cell cardiac action potential (AP) to three clinical scenarios, in order to gain new insights into the ionic mechanisms involved in the electrophysiological changes observed in vitro and/or in vivo. The first context is blood electrolyte variations, which may occur in patients due to different pathologies and/or therapies; in particular, we focused on extracellular Ca2+ and its effect on the AP duration (APD). The second context is haemodialysis (HD) therapy: in addition to blood electrolyte variations, patients undergo many other changes during HD, e.g. in heart rate, cell volume, pH, and sympatho-vagal balance. The third context is human hypertrophic cardiomyopathy (HCM), a genetic disorder characterised by an increased arrhythmic risk and still lacking a specific pharmacological treatment.
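As a minimal sketch of what an in silico AP simulation involves (a forward-Euler integration of the FitzHugh-Nagumo caricature of an excitable cell, not one of the detailed human AP models used in the thesis):

```python
def simulate_fhn(t_end=200.0, dt=0.01, stim=0.5, stim_dur=5.0):
    """Integrate the FitzHugh-Nagumo model, a two-variable caricature of
    cardiac excitability:
        dv/dt = v - v**3/3 - w + I_stim   (fast 'voltage' variable)
        dw/dt = eps * (v + a - b * w)     (slow recovery variable)
    A brief stimulus triggers a full excursion (the 'action potential'),
    after which the recovery variable repolarises the cell."""
    a, b, eps = 0.7, 0.8, 0.08
    v, w, trace = -1.2, -0.6, []
    for k in range(int(t_end / dt)):
        t = k * dt
        i_stim = stim if t < stim_dur else 0.0
        dv = v - v**3 / 3.0 - w + i_stim
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        trace.append((t, v))
    return trace

# trace = simulate_fhn(); plotting v against t shows the upstroke,
# plateau-like excursion and repolarisation of a stylised AP.
```

Detailed human models work the same way but integrate dozens of coupled ionic-current ODEs, which is what allows electrolyte, HD and HCM effects to be represented through specific model parameters.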