998 results for ATLAS, particle physics, SM, ZZ, aTGC
Abstract:
The expansion of a magnetized high-pressure plasma into a low-pressure ambient medium is examined with particle-in-cell simulations. The magnetic field points perpendicular to the plasma's expansion direction, and binary collisions between particles are absent. The expanding plasma steepens into a quasi-electrostatic shock that is sustained by the lower-hybrid (LH) wave. The ambipolar electric field points in the expansion direction and, together with the background magnetic field, induces a fast E cross B drift of the electrons. The drifting electrons modify the background magnetic field, resulting in its pile-up by the LH shock. The magnetic pressure gradient force accelerates the ambient ions ahead of the LH shock, reducing the relative velocity between the ambient plasma and the LH shock to about the phase speed of the shocked LH wave and transforming the LH shock into a nonlinear LH wave. The oscillations of the electrostatic potential have a larger amplitude and wavelength in the magnetized plasma than in an unmagnetized one under otherwise identical conditions. The energy loss to the drifting electrons leads to a noticeable slowdown of the LH shock compared to that in an unmagnetized plasma.
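For reference, the E cross B drift invoked above is the standard single-particle drift in crossed electric and magnetic fields,

\[ \mathbf{v}_{E\times B} = \frac{\mathbf{E}\times\mathbf{B}}{B^{2}} , \]

so the ambipolar field along the expansion direction, crossed with the perpendicular background field, drives the fast electron drift that piles up the magnetic field at the shock.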
Abstract:
In this paper, Sr0.2... wait
In this paper, Sr2Fe1.5Mo0.4Nb0.1O6-δ (SFMNb)-xSm0.2Ce0.8O2-δ (SDC) (x = 0, 20, 30, 40, 50 wt%) composite cathode materials were synthesized by a one-pot combustion method to improve the electrochemical performance of the SFMNb cathode for intermediate-temperature solid oxide fuel cells (IT-SOFCs). Fabricating composite cathodes by adding SDC to SFMNb provides extended electrochemical reaction zones for the oxygen reduction reaction (ORR). X-ray diffraction (XRD) demonstrates that SFMNb is chemically compatible with the SDC electrolyte at temperatures up to 1100 °C. Scanning electron microscopy (SEM) indicates that the SFMNb-SDC composite cathodes, as well as single-phase SFMNb, have a porous network nanostructure. The conductivity and thermal expansion coefficient of the composite cathodes decrease with increasing SDC content, while electrochemical impedance spectroscopy (EIS) shows that the SFMNb-40SDC composite cathode has the best electrochemical performance, with a low polarization resistance (Rp) on the La0.9Sr0.1Ga0.8Mg0.2O3 electrolyte. The Rp of the SFMNb-40SDC composite cathode is about 0.047 Ω cm² at 800 °C in air. A single cell with the SFMNb-40SDC cathode also displays favorable discharge performance, with a maximum power density of 1.22 W cm⁻² at 800 °C. All results indicate that the SFMNb-40SDC composite is a promising cathode candidate for IT-SOFCs.
Abstract:
Searches for the supersymmetric partner of the top quark (stop) are motivated by natural supersymmetry, where the stop has to be light to cancel the large radiative corrections to the Higgs boson mass. This thesis presents three different searches for the stop at √s = 8 TeV and √s = 13 TeV using data from the ATLAS experiment at CERN's Large Hadron Collider. The thesis also includes a study of the primary vertex reconstruction performance in data and simulation at √s = 7 TeV using tt̄ and Z events. All stop searches presented are carried out in final states with a single lepton, four or more jets and large missing transverse energy. A search for direct stop pair production is conducted with 20.3 fb⁻¹ of data at a center-of-mass energy of √s = 8 TeV. Several stop decay scenarios are considered, including those to a top quark and the lightest neutralino and to a bottom quark and the lightest chargino. The sensitivity of the analysis is also studied in the context of various phenomenological MSSM models in which more complex decay scenarios can be present. Two different analyses are carried out at √s = 13 TeV. The first is a search for both gluino-mediated and direct stop pair production with 3.2 fb⁻¹ of data, while the second is a search for direct stop pair production with 13.2 fb⁻¹ of data in the decay scenario to a bottom quark and the lightest chargino. The results of the analyses show no significant excess over the Standard Model predictions in the observed data. Consequently, exclusion limits are set at 95% CL on the masses of the stop and the lightest neutralino.
Abstract:
Loess is the most important collapsible soil; possibly the only engineering soil in which real collapse occurs. A real collapse involves a diminution in volume: an open metastable packing is reduced to a more closely packed, more stable structure. Metastability is at the heart of the collapsible-soils problem. To envisage and to model the collapse process in a metastable medium, knowledge is required about the nature and shape of the particles, the types of packings they assume (real and ideal), and the nature of the collapse process: a packing transition upon a change to the effective stress in a medium of double porosity. Particle packing science has made little progress in the geoscience discipline since the initial packing paradigms set by Graton and Fraser (1935), but it is relatively well established in the soft-matter physics discipline. The collapse process can be represented by mathematical modelling of packing, including Monte Carlo simulations, but relating representation to process remains difficult. This paper revisits the problem of sudden packing transition from a micro-physico-mechanical viewpoint (i.e. collapse in terms of structure-based effective stress). This cross-disciplinary approach allows generalizations about collapsible soils to be made, suggesting that loess is the only truly collapsible soil, because only loess is so totally influenced by the packing essence of its formation process.
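As background for the structure-based effective stress invoked above, the classical saturated-soil relation is Terzaghi's

\[ \sigma' = \sigma - u , \]

with total stress \(\sigma\) and pore-water pressure \(u\); a widely used extension to partially saturated soils such as loess is Bishop's form

\[ \sigma' = (\sigma - u_a) + \chi\,(u_a - u_w) , \]

with pore-air and pore-water pressures \(u_a, u_w\) and effective-stress parameter \(\chi\). These classical forms are only the starting point here; a structure-based formulation must additionally encode the double-porosity packing, and the specific construction is developed in the paper itself.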
Abstract:
We investigate the implications of the nonlinear and non-local multi-particle Schrödinger-Newton equation for the motion of the centre of mass of an extended multi-particle object, giving self-contained and comprehensible derivations. In particular, we discuss two opposite limiting cases. In the first case, the width of the centre-of-mass wave packet is assumed to be much larger than the actual extent of the object; in the second case, it is assumed to be much smaller. Both cases result in nonlinear deviations from ordinary free Schrödinger evolution for the centre of mass. On a general conceptual level, we include some discussion in order to clarify the physical basis of, and the intention for, studying the Schrödinger-Newton equation.
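For orientation, the single-particle form of the equation under study reads

\[ i\hbar\,\frac{\partial\psi}{\partial t} = \left[-\frac{\hbar^{2}}{2m}\nabla^{2} - Gm^{2}\int \frac{|\psi(\mathbf{r}',t)|^{2}}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}^{3}r'\right]\psi(\mathbf{r},t) , \]

where the nonlinearity and non-locality come from the Newtonian potential sourced by \(|\psi|^{2}\); the multi-particle equation analysed here couples all constituents through the analogous pairwise gravitational term.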
Abstract:
Effective field theories (EFTs) are ubiquitous in theoretical physics, and in particular in field-theory descriptions of quantum systems probed at energies much lower than one or a few characteristic scales. More recently, EFTs have gained a prominent role in the study of fundamental interactions, and in particular in the parametrisation of new physics beyond the Standard Model, which would occur at scales Λ much larger than the electroweak scale. In this thesis, EFTs are employed to study three different physics cases. First, we consider light-by-light scattering as a possible probe of new physics. At low energies it can be described by dimension-8 operators, leading to the well-known Euler-Heisenberg Lagrangian. We consider the explicit dependence of the matching coefficients on the type of particle running in the loop, confirming the sensitivity to the spin, mass, and interactions of possible new particles. Second, we consider EFTs to describe Dark Matter (DM) interactions with SM particles. We consider a phenomenologically motivated case, i.e., a new fermion state that couples to the hypercharge through a form factor and has no interactions with the photon and the Z boson. Results from direct, indirect and collider searches for DM are used to constrain the parameter space of the model. Third, we consider EFTs that describe axion-like particles (ALPs), whose phenomenology is inspired by the Peccei-Quinn solution to the strong CP problem. ALPs generically couple to ordinary matter through dimension-5 operators. In our case study, we investigate the rather unique phenomenological implications of ALPs with enhanced couplings to the top quark.
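For concreteness, the leading weak-field Euler-Heisenberg Lagrangian for an electron loop, the reference point for the dimension-8 matching discussed above, is

\[ \mathcal{L}_{\mathrm{EH}} = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu} + \frac{\alpha^{2}}{90\,m_{e}^{4}}\left[\left(F_{\mu\nu}F^{\mu\nu}\right)^{2} + \frac{7}{4}\left(F_{\mu\nu}\tilde{F}^{\mu\nu}\right)^{2}\right] ; \]

for a generic particle running in the loop, the two Wilson coefficients take different values depending on its spin, mass, and couplings, which is exactly the sensitivity the thesis exploits.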
Abstract:
The Short-Baseline Neutrino Program at Fermilab aims to confirm or definitively rule out the existence of sterile neutrinos at the eV mass scale. The program will perform the most sensitive search in both the νe appearance and νμ disappearance channels along the Booster Neutrino Beamline. The far detector, ICARUS-T600, is a high-granularity liquid-argon time projection chamber located 600 m from the Booster neutrino source at shallow depth, and is thus exposed to a large flux of cosmic particles. Additionally, ICARUS is located 6 degrees off axis with respect to the neutrino beam from the Main Injector. This thesis presents the construction, installation and commissioning of the ICARUS Cosmic Ray Tagger (CRT) system, which provides 4π coverage of the active liquid-argon volume. By exploiting only the precise nanosecond-scale synchronization of the cosmic tagger and the PMT optical flashes, it is possible to determine whether an event was likely triggered by a cosmic particle. The results show that using the Top Cosmic Ray Tagger alone, a conservative rejection of more than 65% of the cosmic-induced background can be achieved. Additionally, by requiring the absence of hits in the whole cosmic tagger system, it is possible to perform a pre-selection of contained neutrino events ahead of the full event reconstruction.
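A minimal sketch of the timing-coincidence idea described above, with a hypothetical data layout and an assumed window width (not the value tuned in the thesis): a PMT optical flash is labelled cosmic-like when a CRT hit occurs within a nanosecond-scale window around the flash time.

```python
import numpy as np

WINDOW_NS = 100.0  # assumed coincidence window width; illustrative only

def tag_flashes(flash_times_ns, crt_hit_times_ns, window_ns=WINDOW_NS):
    """Return a boolean array: True where a flash matches a CRT hit in time."""
    crt = np.sort(np.asarray(crt_hit_times_ns, dtype=float))
    flashes = np.asarray(flash_times_ns, dtype=float)
    # Nearest CRT hit for each flash via binary search on the sorted hit times.
    idx = np.searchsorted(crt, flashes)
    left = crt[np.clip(idx - 1, 0, len(crt) - 1)]
    right = crt[np.clip(idx, 0, len(crt) - 1)]
    nearest = np.minimum(np.abs(flashes - left), np.abs(flashes - right))
    return nearest <= window_ns

# Example: the first flash coincides with a CRT hit and is tagged cosmic-like.
print(tag_flashes([1000.0, 5000.0], [980.0, 20000.0]))  # [ True False]
```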
Abstract:
The scientific success of the LHC experiments at CERN depends strongly on the availability of computing resources that efficiently store, process, and analyse the data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world through high-performance networks. The LHC has an ambitious experimental programme for the coming years, which includes large investments and improvements both in the detector hardware and in the software and computing systems, in order to deal with the huge increase in event rate expected from the High-Luminosity LHC (HL-LHC) phase and, consequently, with the huge amount of data that will be produced. In recent years, the role of Artificial Intelligence has become relevant in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been used successfully in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction and identification, and Monte Carlo generation, and they will surely be crucial in the HL-LHC phase. This thesis aims to contribute to a CMS R&D project on a Machine Learning "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. The framework has been extended with new features in the data preprocessing phase, allowing more flexibility for the user. Since the MLaaS4HEP framework is experiment-agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.
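A minimal sketch of the read/process/train/serve steps of such a pipeline, not the MLaaS4HEP implementation itself: the file, tree, and branch names below are hypothetical, and the real framework streams ROOT files of arbitrary size in a model-agnostic way.

```python
import numpy as np
import uproot  # reads ROOT files without a ROOT installation
from sklearn.ensemble import GradientBoostingClassifier

def read_features(path, tree="Events", branches=("pt", "eta", "label")):
    """Read flat branches from a ROOT tree into a feature matrix and labels."""
    with uproot.open(path) as f:
        arrays = f[tree].arrays(list(branches), library="np")
    X = np.column_stack([arrays[b] for b in branches[:-1]])  # feature matrix
    y = arrays[branches[-1]]                                 # training labels
    return X, y

X, y = read_features("sample.root")             # reading + preprocessing
model = GradientBoostingClassifier().fit(X, y)  # training
print(model.predict_proba(X[:5]))               # serving predictions
```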
Abstract:
LHC experiments produce an enormous amount of data, estimated to be of the order of a few petabytes per year. Data management takes place on the Worldwide LHC Computing Grid (WLCG) infrastructure, both for storage and for processing. In recent years, however, many more resources have become available on High Performance Computing (HPC) farms, which generally have many computing nodes with a high number of processors. Large collaborations are working to use these resources as efficiently as possible, compatibly with the constraints imposed by their computing models (data distributed on the Grid, authentication, software dependencies, etc.). The aim of this thesis project is to develop a software framework that allows users to process a typical data-analysis workflow of the ATLAS experiment on HPC systems. The developed analysis framework will be deployed on the computing resources of the Open Physics Hub project and on the CINECA Marconi100 cluster, in view of the switch-on of the Leonardo supercomputer, foreseen in 2023.
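A minimal sketch, with all names hypothetical, of how such a framework might wrap an analysis payload into a Slurm job on a cluster like Marconi100; the real framework must also handle Grid data access, authentication, and software dependencies (e.g. through containers or CVMFS).

```python
import subprocess
import textwrap

def submit(payload_cmd, job_name="atlas-analysis", nodes=1, walltime="02:00:00"):
    """Generate a Slurm batch script for the payload and submit it."""
    script = textwrap.dedent(f"""\
        #!/bin/bash
        #SBATCH --job-name={job_name}
        #SBATCH --nodes={nodes}
        #SBATCH --time={walltime}
        {payload_cmd}
        """)
    # sbatch accepts the job script on standard input.
    subprocess.run(["sbatch"], input=script.encode(), check=True)

# Hypothetical usage:
# submit("singularity exec analysis.sif python run_analysis.py --in data.root")
```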
Abstract:
Monte Carlo track structure (MCTS) simulations have been recognized as useful tools for radiobiological modeling. However, the authors noticed several issues regarding the consistency of reported data. Therefore, in this work, they analyze the impact of various user-defined parameters on simulated direct DNA damage yields. In addition, they draw attention to discrepancies in the published literature in DNA strand break (SB) yields and the selected methodologies. The MCTS code Geant4-DNA was used to compare radial dose profiles in a nanometer-scale region of interest (ROI) for photon sources of varying sizes and energies. Then, electron tracks of 0.28 keV to 220 keV were superimposed on a geometric DNA model composed of 2.7 × 10⁶ nucleosomes, and SBs were simulated according to four definitions based on energy deposits or energy transfers in DNA strand targets compared to a threshold energy ETH. The SB frequencies and complexities in nucleosomes as a function of incident electron energy were obtained. SBs were classified into higher-order clusters such as single and double strand breaks (SSBs and DSBs) based on inter-SB distances and on the number of affected strands. Comparisons of different nonuniform dose distributions lacking charged-particle equilibrium may lead to erroneous conclusions regarding the effect of energy on relative biological effectiveness. The energy transfer-based SB definitions give SB yields similar to the energy deposit-based one when ETH ≈ 10.79 eV, but deviate significantly for higher ETH values. Between 30 and 40 nucleosomes/Gy show at least one SB in the ROI. The number of nucleosomes presenting a complex damage pattern of more than two SBs, and the degree of complexity of the damage in these nucleosomes, diminish as the incident electron energy increases. DNA damage classification into SSBs and DSBs is highly dependent on the definitions of these higher-order structures and their implementations. The authors show that, for the four studied models, the expected yields differ by up to 54% for SSBs and by up to 32% for DSBs, depending on the incident electron energy and on the models being compared. MCTS simulations make it possible to compare the types and complexities of direct DNA damage induced by ionizing radiation. However, simulation results depend to a large degree on user-defined parameters, definitions, and algorithms, such as the DNA model, the dose distribution, the SB definition, and the DNA damage clustering algorithm. These interdependencies should be well controlled during the simulations and explicitly reported when comparing results to experiments or calculations.
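A minimal sketch of one possible SSB/DSB clustering rule of the kind discussed above: breaks on opposite strands within a maximum genomic separation form a DSB, and the remaining breaks count as SSBs. The 10 bp threshold is a common convention, not necessarily the paper's exact choice, which is precisely the implementation dependence the authors highlight.

```python
MAX_BP = 10  # assumed maximum inter-SB separation (base pairs) for a DSB

def classify_breaks(breaks):
    """breaks: list of (base_pair_index, strand) with strand in {0, 1}."""
    breaks = sorted(breaks)
    used = [False] * len(breaks)
    ssb, dsb = 0, 0
    for i, (pos_i, strand_i) in enumerate(breaks):
        if used[i]:
            continue
        # Look ahead for an unused break on the opposite strand within MAX_BP.
        for j in range(i + 1, len(breaks)):
            pos_j, strand_j = breaks[j]
            if pos_j - pos_i > MAX_BP:
                break
            if not used[j] and strand_j != strand_i:
                used[i] = used[j] = True
                dsb += 1
                break
        if not used[i]:  # no opposite-strand partner found: a lone SSB
            used[i] = True
            ssb += 1
    return ssb, dsb

print(classify_breaks([(100, 0), (105, 1), (300, 0)]))  # (1, 1)
```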
Abstract:
Current data indicate that the size of high-density lipoprotein (HDL) may be considered an important marker of cardiovascular disease risk. We established reference values of mean HDL size and volume in an asymptomatic, representative Brazilian population sample (n = 590) and their associations with metabolic parameters by gender. Size and volume were determined in HDL isolated from plasma by polyethylene glycol precipitation of apoB-containing lipoproteins and measured using the dynamic light scattering (DLS) technique. Although the gender and age distributions agreed with other studies, the mean HDL size reference value was slightly lower than in some other populations. Both HDL size and volume were influenced by gender and varied with age. HDL size was associated with age and HDL-C (total population); inversely with non-white ethnicity and CETP (females); and with HDL-C and PLTP mass (males). On the other hand, HDL volume was determined only by HDL-C (total population and both genders) and by PLTP mass (males). The reference values for mean HDL size and volume using the DLS technique were established in an asymptomatic, representative Brazilian population sample, together with their related metabolic factors. HDL-C was a major determinant of HDL size and volume, which were modulated differently in females and in males.
Abstract:
Work on evolving interfaces initially focused on solutions to scientific problems in Fluid Dynamics. With the advent of the more robust modeling provided by the Level Set method, its original boundaries of applicability were extended. In the Geometric Modeling area specifically, works published so far relating the Level Set method to three-dimensional surface reconstruction have centred on reconstruction from a data cloud dispersed in space; the approach based on parallel planar slices transversal to the object to be reconstructed is still incipient. Motivated by this, the present work analyses the feasibility of the Level Set method for three-dimensional reconstruction, offering a methodology that integrates the ideas already proven efficient for this approach with proposals for handling the inherent limitations of the method that have not yet been treated satisfactorily, in particular the excessive smoothing of fine features of contours evolving under the Level Set method. To address this, the application of the Particle Level Set variant is suggested as a solution, owing to its proven intrinsic capability to preserve the mass of dynamic fronts. Finally, synthetic and real data sets are used to qualitatively evaluate the presented three-dimensional surface reconstruction methodology.
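A minimal NumPy sketch, with illustrative grid parameters, of level-set evolution under mean curvature flow, phi_t = kappa |grad phi|: the kind of smoothing that erodes fine contour features (here, the corners of a square front) and motivates the Particle Level Set variant.

```python
import numpy as np

n, dx, dt, steps = 128, 1.0, 0.1, 200
x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
# Square initial front: its sharp corners are the fine features the flow erodes.
phi = np.maximum(np.abs(x - n / 2), np.abs(y - n / 2)) - n / 4.0

for _ in range(steps):
    # Central differences for first and second derivatives of phi.
    px = (np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / (2 * dx)
    py = (np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / (2 * dx)
    pxx = (np.roll(phi, -1, 0) - 2 * phi + np.roll(phi, 1, 0)) / dx**2
    pyy = (np.roll(phi, -1, 1) - 2 * phi + np.roll(phi, 1, 1)) / dx**2
    pxy = (np.roll(np.roll(phi, -1, 0), -1, 1) - np.roll(np.roll(phi, -1, 0), 1, 1)
           - np.roll(np.roll(phi, 1, 0), -1, 1) + np.roll(np.roll(phi, 1, 0), 1, 1)) / (4 * dx**2)
    grad2 = px**2 + py**2 + 1e-12
    kappa = (pxx * py**2 - 2 * px * py * pxy + pyy * px**2) / grad2**1.5
    phi += dt * kappa * np.sqrt(grad2)  # corners shrink fastest (high curvature)
```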