51 results for Computational experiment
Abstract:
The production rates of $b$ and $\bar{b}$ hadrons in $pp$ collisions are not expected to be strictly identical, owing to the imbalance between quarks and antiquarks in the initial state. Naively, a $\bar{b}$ quark produced in the hard scattering can combine with a $u$ or $d$ valence quark from the colliding protons, whereas the same cannot happen for a $b$ quark. This thesis presents the analysis performed to determine the production asymmetries of the $B^0$ and $B^0_s$ mesons. The analysis relies on data samples collected by the LHCb detector at the Large Hadron Collider (LHC) during the 2011 and 2012 data-taking periods, at centre-of-mass energies of $\sqrt{s}=7$ TeV and $\sqrt{s}=8$ TeV, corresponding to integrated luminosities of 1 fb$^{-1}$ and 2 fb$^{-1}$, respectively. The production asymmetry is one of the key ingredients in measurements of $CP$ violation in $b$-hadron decays at the LHC, since $CP$ asymmetries must be disentangled from other sources of asymmetry. The measurements of the production asymmetries are performed in bins of $p_\mathrm{T}$ and $\eta$ of the $B$ meson. The values of the production asymmetries, integrated over the ranges $4 < p_\mathrm{T} < 30$ GeV/c and $2.5<\eta<4.5$, are determined to be: \begin{equation} A_\mathrm{P}(B^0)= (-1.00\pm0.48\pm0.29)\%,\nonumber \end{equation} \begin{equation} A_\mathrm{P}(B^0_s)= (\phantom{-}1.09\pm2.61\pm0.61)\%,\nonumber \end{equation} where the first uncertainty is statistical and the second systematic. The measurement of $A_\mathrm{P}(B^0)$ uses the full dataset collected by LHCb so far, corresponding to an integrated luminosity of 3 fb$^{-1}$, while the measurement of $A_\mathrm{P}(B^0_s)$ uses only the first 1 fb$^{-1}$, leaving room for improvement. No clear evidence of a dependence on $p_\mathrm{T}$ or $\eta$ is observed. The results presented in this thesis are the most precise such measurements to date.
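For context, the production asymmetry quoted above is conventionally defined in terms of the $B$ and $\bar{B}$ production cross-sections (the sign convention shown here is the common one in LHCb analyses; the thesis fixes the exact convention used): \begin{equation} A_\mathrm{P}(B) = \frac{\sigma(\bar{B}) - \sigma(B)}{\sigma(\bar{B}) + \sigma(B)},\nonumber \end{equation} so that, for example, a negative $A_\mathrm{P}(B^0)$ corresponds to an excess of produced $B^0$ over $\bar{B}^0$, as the valence-quark argument suggests.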
Abstract:
Despite the scientific achievements of the last decades in astrophysics and cosmology, the majority of the Universe's energy content is still unknown. A potential solution to the “missing mass problem” is the existence of dark matter in the form of WIMPs. Due to the very small cross section for WIMP-nucleon interactions, the number of expected events is very limited (about 1 event/tonne/year), thus requiring detectors with large target mass and low background level. The aim of the XENON1T experiment, the first tonne-scale LXe-based detector, is to be sensitive to WIMP-nucleon cross sections as low as $10^{-47}$ cm$^2$. To assess whether such a detector can reach this goal, Monte Carlo simulations are mandatory for estimating the background. To this end, the GEANT4 toolkit has been used to implement the detector geometry and to simulate the decays from the various background sources, both electromagnetic and nuclear. From the analysis of the simulations, the background level has been found fully acceptable for the purposes of the experiment: about 1 background event in a 2 tonne-year exposure. Using the Maximum Gap method, the XENON1T sensitivity has been evaluated, and the minimum of the WIMP-nucleon cross-section limit has been found at $1.87 \times 10^{-47}$ cm$^2$, at 90% CL, for a WIMP mass of 45 GeV/c$^2$. The results have been independently cross-checked using the Likelihood Ratio method, which confirmed them to within less than a factor of two, an acceptable agreement given the intrinsic differences between the two statistical methods. The thesis thus shows that the XENON1T detector will be able to reach its design sensitivity, lowering the limits on the WIMP-nucleon cross section by about two orders of magnitude with respect to current experiments.
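As a concrete illustration of the statistical machinery mentioned above: Yellin's Maximum Gap method (S. Yellin, Phys. Rev. D 66, 032005 (2002)) sets an upper limit from the largest event-free interval in the observed spectrum. The sketch below is a generic Python illustration, not the thesis code; variable names and the bisection bracket are mine. Here `x` is the expected number of signal events falling inside the observed maximum gap and `mu` is the total expected signal.

```python
import math

def c0(x, mu):
    """Yellin's C0(x, mu): the probability that, for a total expected
    signal mu, the maximum event-free gap contains fewer than x
    expected events.  Written so no term divides by zero."""
    m = int(mu / x)  # largest k with k*x <= mu
    total = 0.0
    for k in range(m + 1):
        poly = (k * x - mu) ** k - k * (k * x - mu) ** (k - 1)
        total += poly * math.exp(-k * x) / math.factorial(k)
    return total

def upper_limit(gap_fraction, cl=0.90, lo=1e-3, hi=100.0):
    """Bisect for the expected signal mu excluded at confidence cl.
    gap_fraction is the fraction of the total signal expectation that
    falls inside the observed maximum gap, so x = gap_fraction * mu."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if c0(gap_fraction * mid, mid) < cl else (lo, mid)
    return 0.5 * (lo + hi)
```

As a sanity check, a completely empty spectrum (`gap_fraction = 1`) reproduces the familiar zero-event Poisson limit of about 2.3 expected events at 90% CL; the excluded `mu` is then converted into a cross-section limit by scaling against the signal expectation computed for a reference cross section.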
Abstract:
The first part of this work deals with the solution of the inverse problem in X-ray spectroscopy. An original strategy for solving the inverse problem using the maximum entropy principle is illustrated, and the code UMESTRAT is built to apply this strategy in a semi-automatic way; its application is shown with a computational example. The second part of this work deals with the improvement of the X-ray Boltzmann model, through the study of two radiative interactions neglected in current photon models. First, the characteristic line emission due to Compton ionization is studied. A strategy is developed that allows this contribution to be evaluated for the K, L, and M shells of all elements with $Z$ from 11 to 92. The single-shell Compton/photoelectric ratio is evaluated as a function of the primary photon energy, and the energies at which the Compton interaction becomes the prevailing ionization process for the considered shells are derived. Finally, a new kernel for XRF from Compton ionization is introduced. Second, the bremsstrahlung radiative contribution due to secondary electrons is characterized in terms of space, angle, and energy, for all elements with $Z=1$–$92$ in the energy range 1–150 keV, using the Monte Carlo code PENELOPE. It is demonstrated that the bremsstrahlung contribution can be well approximated by an isotropic point photon source. A data library comprising the bremsstrahlung energy distributions is created, and a new bremsstrahlung kernel is developed which allows this contribution to be introduced in the modified Boltzmann equation. An example of application to the simulation of a synchrotron experiment is shown.
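To make the maximum-entropy strategy concrete, here is a generic regularised-inversion sketch in Python. It is a toy illustration of the principle only, not the UMESTRAT implementation; the response matrix `A`, measured spectrum `g`, and weight `lam` are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_invert(A, g, lam=1e-2):
    """Toy maximum-entropy solution of the inverse problem g = A f,
    with f >= 0: minimise the data misfit minus lam times the Shannon
    entropy of f.  A generic sketch, not the UMESTRAT algorithm."""
    n = A.shape[1]

    def objective(f):
        misfit = A @ f - g
        entropy = -np.sum(f * np.log(f + 1e-12))  # Shannon entropy of f
        return misfit @ misfit - lam * entropy

    f0 = np.full(n, max(g.mean(), 1e-6))          # flat positive start
    res = minimize(objective, f0, bounds=[(1e-12, None)] * n)
    return res.x
```

Among all non-negative spectra compatible with the data, the entropy term selects the least-informative one, which is the essence of the maximum entropy principle for ill-posed inversions.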
Abstract:
The aim of this work was to explore the practical applicability of molecular dynamics at different length and time scales. From nanoparticle systems through colloids and polymers to biological systems such as membranes and, finally, living cells, a broad range of materials was considered from a theoretical standpoint. In this dissertation five chemistry-related problems are addressed by means of theoretical and computational methods. The main results can be outlined as follows. (1) A systematic study of the effect of the concentration, chain length, and charge of surfactants on fullerene aggregation is presented. The long-discussed problem of the location of C60 in micelles is addressed, and fullerenes are found in the hydrophobic region of the micelles. (2) The interactions between graphene sheets of increasing size and a phospholipid membrane are quantitatively investigated. (3) A model is proposed to study the structure, stability, and dynamics of MoS2, a material well known for its tribological properties; the telescopic movement of nested nanotubes and the sliding of MoS2 layers are simulated. (4) A mathematical model is proposed to gain understanding of the coupled diffusion-swelling process in poly(lactic-co-glycolic acid), PLGA. (5) A soft-matter cell model is developed to explore the interaction of living cells with artificial surfaces, and the effects of the surface properties on the adhesion dynamics of cells are discussed.
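Common to all five studies is the basic molecular-dynamics loop: evaluate interparticle forces, then advance positions and velocities with a symplectic integrator. Below is a minimal Lennard-Jones/velocity-Verlet sketch in Python (reduced units, no cutoff or periodic boundaries; purely illustrative, not any of the production codes used in the thesis).

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces in reduced units
    (toy system: no cutoff, no periodic boundaries)."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d2 = r @ r
            s6 = (sigma ** 2 / d2) ** 3        # (sigma/r)^6
            f = 24 * eps * (2 * s6 ** 2 - s6) / d2 * r
            forces[i] += f
            forces[j] -= f
    return forces

def velocity_verlet(pos, vel, dt=1e-3, steps=1000):
    """Standard velocity-Verlet (kick-drift-kick) integration,
    assuming unit masses."""
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f
        pos += dt * vel
        f = lj_forces(pos)
        vel += 0.5 * dt * f
    return pos, vel
```

Production MD engines add neighbour lists, thermostats/barostats, and force fields appropriate to each system (surfactants, lipids, MoS2, polymers), but the structure of the time-stepping loop is the same.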
Abstract:
Self-organising pervasive ecosystems of devices are set to become a major vehicle for delivering infrastructure and end-user services. The inherent complexity of such systems poses new challenges to those who want to tame it by applying the principles of engineering. The recent growth in the number and distribution of devices with substantial computational and communication capabilities, which accelerated sharply with the massive diffusion of smartphones and tablets, is delivering a world with a much higher density of devices in space. At the same time, communication technologies are focussing on short-range device-to-device (P2P) interactions, with technologies such as Bluetooth and Near-Field Communication gaining greater adoption. Locality and situatedness become key to providing the best possible experience to users, and the classic model of a centralised, enormously powerful server gathering and processing data becomes less and less efficient as device density grows. Accomplishing complex global tasks without a centralised controller responsible for aggregating data, however, is challenging. In particular, there is a local-to-global issue that makes the application of engineering principles difficult: designing device-local programs that, through interaction, guarantee a given global service level. In this thesis, we first analyse the state of the art in coordination systems, then motivate the work by describing the main issues of pre-existing tools and practices and identifying the improvements that would benefit the design of such complex software ecosystems. The contribution is divided into three main branches. First, we introduce a novel simulation toolchain for pervasive ecosystems, designed to offer good expressiveness while retaining high performance. Second, we leverage existing coordination models and patterns to create new spatial structures. Third, we introduce a novel language, based on the existing ``Field Calculus'' and integrated with the aforementioned toolchain, designed to be usable for practical aggregate programming.
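The local-to-global issue can be made concrete with the classic "gradient" building block of aggregate programming: every device repeatedly runs the same tiny local rule, and the network as a whole converges to a field of distances from a source. The toy round-based simulation below is illustrative only; the thesis toolchain and language are far more general.

```python
import math

def gradient_round(field, neighbors, dist, sources):
    """One synchronous round of the classic self-stabilising gradient:
    sources hold 0; every other device takes the minimum over its
    neighbours of (neighbour's value + estimated link distance)."""
    new = {}
    for dev, nbrs in neighbors.items():
        if dev in sources:
            new[dev] = 0.0
        else:
            new[dev] = min((field[n] + dist[dev, n] for n in nbrs),
                           default=math.inf)
    return new

# Toy line network 0-1-2-3 with unit-length links and source at device 0.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
dist = {(a, b): 1.0 for a in neighbors for b in neighbors[a]}
field = {d: math.inf for d in neighbors}
for _ in range(len(neighbors)):        # enough rounds to converge here
    field = gradient_round(field, neighbors, dist, {0})
print(field)                           # {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}
```

The engineering difficulty lies in the converse direction: starting from a desired global field or service level, deriving a local rule that provably produces it under asynchrony, mobility, and failure.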
Abstract:
Heart diseases are the leading cause of death worldwide, for both men and women. However, the ionic mechanisms underlying many cardiac arrhythmias and genetic disorders are not completely understood, which limits the efficacy of currently available therapies and leaves many open questions for cardiac electrophysiologists. On the other hand, the availability of experimental data is still a major issue in this field: most experiments are performed in vitro and/or using animal models (e.g. rabbit, dog, and mouse), even when the final aim is to better understand the electrical behaviour of the in vivo human heart, in either physiological or pathological conditions. Computational modelling constitutes a primary tool in cardiac electrophysiology: in silico simulations, based on the available experimental data, can help to understand the electrical properties of the heart and the ionic mechanisms underlying a specific phenomenon. Once validated, mathematical models can be used to make predictions and test hypotheses, thus suggesting potential therapeutic targets. This PhD thesis applies computational modelling of the human single-cell action potential (AP) to three clinical scenarios, in order to gain new insights into the ionic mechanisms involved in the electrophysiological changes observed in vitro and/or in vivo. The first context is blood electrolyte variations, which may occur in patients due to different pathologies and/or therapies; in particular, we focused on extracellular Ca2+ and its effect on the AP duration (APD). The second context is haemodialysis (HD) therapy: in addition to blood electrolyte variations, patients undergo many other changes during HD, e.g. in heart rate, cell volume, pH, and sympatho-vagal balance. The third context is human hypertrophic cardiomyopathy (HCM), a genetic disorder characterised by an increased arrhythmic risk and still lacking a specific pharmacological treatment.
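Human single-cell AP models of the kind used in such studies share the same backbone, the membrane equation, in which the transmembrane potential $V$ evolves under the sum of the ionic currents plus any stimulus (the list of currents shown is illustrative; each model defines its own set and formulations): \begin{equation} C_\mathrm{m}\frac{dV}{dt} = -\left(I_\mathrm{Na} + I_\mathrm{CaL} + I_\mathrm{Kr} + I_\mathrm{Ks} + I_\mathrm{K1} + \dots\right) + I_\mathrm{stim},\nonumber \end{equation} where $C_\mathrm{m}$ is the membrane capacitance. Electrolyte variations, HD-induced changes, and HCM-related remodelling all enter through the individual current formulations and their dependence on ion concentrations, which is what makes the three clinical scenarios tractable within one modelling framework.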