927 results for Astrophysics - High Energy Astrophysical Phenomena
Abstract:
In the current age of fast-depleting conventional energy sources, top priority is given to exploring non-conventional energy sources, designing highly efficient energy storage systems and converting existing machines, instruments and devices into energy-efficient ones. ‘Energy efficiency’ is one of the important challenges for today’s scientific and research community worldwide. In line with this demand, the current research focused on developing two highly energy-efficient devices, field emitters and Li-ion batteries, using the beneficial properties of carbon nanotubes (CNTs). Interface-engineered, directly grown CNTs were used as the cathode in field emitters, while a similar structure was applied as the anode in Li-ion batteries. Interface engineering was found to offer minimum resistance to electron flow and strong bonding with the substrate. Both the field emitters and the Li-ion battery anodes benefited from these advantages, demonstrating high energy efficiency. The field emitter developed during this research is characterized by a low turn-on field, high emission current, a very high field enhancement factor and very good long-run stability. Further, applying a 3-dimensional design to these field emitters yielded one of the highest emission current densities reported so far: the 3-D field emitters registered a 27-fold increase in current density compared to their 2-D counterparts. These achievements were followed by adding new functionalities, transparency and flexibility, to the field emitters, in view of the current demand for flexible displays. A CNT-graphene hybrid structure showed appreciable emission along with very good transparency and flexibility. Li-ion battery anodes prepared using the interface-engineered CNTs offered a 140% increase in capacity compared to conventional graphite anodes. They also showed very good rate capability and exceptional ‘zero capacity degradation’ during long cycle operation. The enhanced safety and the charge-transfer mechanism of this novel anode structure could be explained from structural characterization. To progress further, the CNTs were coated with ultrathin alumina by the atomic layer deposition technique. These alumina-coated CNT anodes offered much higher capacity and exceptional rate capability, with very low capacity degradation at higher current densities. These highly energy-efficient CNT-based anodes are expected to enhance the capacities of future Li-ion batteries.
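As a rough back-of-the-envelope illustration of what a 140% capacity increase over graphite implies, assuming graphite's commonly quoted theoretical capacity of about 372 mAh/g (a reference value not stated in the abstract itself):

```python
# Back-of-the-envelope check of the reported capacity gain.
# Assumption: graphite reference capacity ~372 mAh/g (not given in the abstract).
graphite_capacity_mAh_g = 372.0
increment = 1.40                      # the reported "140% increase"

cnt_anode_capacity = graphite_capacity_mAh_g * (1.0 + increment)
print(f"Implied CNT-anode capacity: {cnt_anode_capacity:.0f} mAh/g")  # ~893 mAh/g
```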
Abstract:
The main goal of this dissertation was to study two- and three-nucleon Short Range Correlations (SRCs) in high energy three-body breakup of the 3He nucleus in the 3He(e, e'NN)N reaction. SRCs are characterized by quantum fluctuations in nuclei during which constituent nucleons partially overlap with each other. A theoretical framework is developed within the Generalized Eikonal Approximation (GEA), which upgrades existing medium-energy methods that are inapplicable for high momentum and energy transfer reactions. High momentum and energy transfer is required to provide sufficient resolution for probing SRCs. GEA is a covariant theory which is formulated through effective Feynman diagrammatic rules. It allows self-consistent calculation of the single and double re-scattering amplitudes which are present in three-body breakup processes. The calculations were carried out in detail and the analytical result for the differential cross section of the 3He(e, e'NN)N reaction was derived in a form applicable for programming and numerical calculations. The corresponding computer code has been developed and the results of computation were compared to the published experimental data, showing satisfactory agreement for a wide range of values of missing momenta. In addition to the high energy approximation, this study exploited the exclusive nature of the process under investigation to gain more information about the SRCs. The description of the exclusive 3He(e, e'NN)N reaction was carried out using the formalism of the nuclear decay function, which is a practically unexplored quantity and is related to the conventional spectral function through integration over the phase space of the recoil nucleons. Detailed investigation showed that the decay function clearly exhibits the main features of two- and three-nucleon correlations. Four highly practical types of SRCs in the 3He nucleus were discussed in great detail for different orders of the final state re-interactions, using the decay function as a unique identifying tool. The overall conclusion of this dissertation is that the investigation of the decay function opens up a completely new avenue in studies of short range nuclear properties.
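Schematically, the relation described above between the decay function and the conventional spectral function can be written as follows; the notation here is assumed for illustration and is not taken from the dissertation:

```latex
% D: decay function; S: spectral function; p_m, E_m: missing momentum and
% energy; p_r: recoil-nucleon momentum (notation assumed for illustration).
S(p_m, E_m) \sim \int d^3 p_r \, D(p_m, \mathbf{p}_r, E_m)
```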
Abstract:
An accurate knowledge of the fluorescence yield and of its dependence on atmospheric properties such as pressure, temperature or humidity is essential to obtain a reliable measurement of the primary energy of cosmic rays in experiments using the fluorescence technique. In this work, several sets of fluorescence yield data (i.e. absolute value and quenching parameters) are described and compared. A simple procedure to study the effect of the assumed fluorescence yield on the reconstructed shower parameters (energy and shower maximum depth) as a function of the primary features has been developed. As an application, the effect of the water vapor and temperature dependence of the collisional cross section on the fluorescence yield, and its impact on the reconstruction of the primary energy and shower maximum depth, have been studied. Published by Elsevier B.V.
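A minimal sketch of the kind of pressure and temperature quenching discussed here, assuming the commonly used form Y(P,T) = Y_0 / (1 + P/P'(T)) with P'(T) scaling as sqrt(T); the parameter values below are placeholders, not the data sets compared in the paper:

```python
import math

def fluorescence_yield(pressure_hPa, temperature_K,
                       y0=5.0, p_prime_ref_hPa=15.0, t_ref_K=293.0):
    """Illustrative quenched fluorescence yield.

    Assumes Y = Y0 / (1 + P / P'(T)) with P'(T) = P'_ref * sqrt(T / T_ref).
    All parameter values are placeholders, not the measured sets discussed
    in the paper.
    """
    p_prime = p_prime_ref_hPa * math.sqrt(temperature_K / t_ref_K)
    return y0 / (1.0 + pressure_hPa / p_prime)

# Collisional quenching grows with pressure, so the yield is lower near ground level.
print(fluorescence_yield(1013.0, 288.0))   # roughly sea-level conditions
print(fluorescence_yield(230.0, 220.0))    # roughly 11 km altitude
```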
Abstract:
We numerically investigate a fiber laser that contains an active fiber along with a dispersion-decreasing fiber, both operating at normal dispersion. Large-bandwidth pulses are obtained that can be linearly compressed, resulting in ultra-short, high-energy pulse generation. ©2010 Crown.
Abstract:
This work was supported by the Joint Services Electronics Program (U.S. Army, U.S. Navy, and U.S. Air Force) under Contract No. DA 28 043 AMC 00073(E).
Abstract:
With the CERN LHC program underway, data growth in the High Energy Physics (HEP) field has accelerated, and the use of Machine Learning (ML) in HEP will be critical during the HL-LHC program, when the data produced will reach the exascale. ML techniques have been used successfully in many areas of HEP; nevertheless, developing an ML project and implementing it for production use is a highly time-consuming task that requires specific skills. Complicating this scenario is the fact that HEP data are stored in the ROOT data format, which is mostly unknown outside the HEP community. The work presented in this thesis focuses on the development of a ML as a Service (MLaaS) solution for HEP, aiming to provide a cloud service that allows HEP users to run ML pipelines via HTTP calls. These pipelines are executed using the MLaaS4HEP framework, which allows reading data, processing data, and training ML models directly from ROOT files of arbitrary size, from local or distributed data sources. Such a solution provides HEP users who are not ML experts with a tool to apply ML techniques in their analyses in a streamlined manner. Over the years the MLaaS4HEP framework has been developed, validated, and tested, and new features have been added. A first MLaaS solution was developed by automating the deployment of a platform equipped with the MLaaS4HEP framework. Then, a service with APIs was developed, so that a user, after being authenticated and authorized, can submit MLaaS4HEP workflows producing trained ML models ready for the inference phase. A working prototype of this service is currently running on a virtual machine of INFN-Cloud and meets the requirements for inclusion in the INFN Cloud portfolio of services.
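A hedged sketch of what submitting a training workflow to such a service over HTTP might look like; the endpoint, payload fields and token handling below are illustrative assumptions, not the actual MLaaS4HEP or INFN-Cloud API:

```python
import json
import requests

# Hypothetical workflow description: which ROOT files to read, which branches
# to use as features, and which model to train. The field names are
# illustrative only, not the real MLaaS4HEP schema.
workflow = {
    "input_files": ["root://eospublic.cern.ch//eos/opendata/sample.root"],
    "branches": ["pt", "eta", "phi", "mass"],
    "label_branch": "is_signal",
    "model": {"type": "keras_sequential", "epochs": 5, "batch_size": 256},
}

# Hypothetical service endpoint; the bearer token would be obtained after the
# authentication/authorization step mentioned in the abstract.
resp = requests.post(
    "https://mlaas.example.org/api/v1/submit",
    headers={"Authorization": "Bearer <token>"},
    data=json.dumps(workflow),
    timeout=30,
)
resp.raise_for_status()
print("Submitted workflow, job id:", resp.json().get("job_id"))
```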
Abstract:
The scientific success of the LHC experiments at CERN depends strongly on the availability of computing resources that efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world through a high-performance network. The LHC has an ambitious experimental program for the coming years, which includes large investments and improvements both in the detector hardware and in the software and computing systems, in order to deal with the huge increase in the event rate expected from the High Luminosity LHC (HL-LHC) phase and, consequently, with the huge amount of data that will be produced. In recent years the role of Artificial Intelligence has become relevant in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been used successfully in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction, identification, and Monte Carlo generation, and they will certainly be crucial in the HL-LHC phase. This thesis contributes to a CMS R&D project on a ML "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. This framework has been updated by adding new features in the data preprocessing phase, giving the user more flexibility. Since the MLaaS4HEP framework is experiment-agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contribution made in this work.
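A minimal sketch of the kind of pipeline described here (read ROOT data, preprocess, train), using uproot and scikit-learn as stand-ins; this is not the MLaaS4HEP implementation, and the file path and branch names are placeholders:

```python
import numpy as np
import uproot                                   # reads ROOT files without the ROOT framework
from sklearn.ensemble import GradientBoostingClassifier

# Placeholder file and branch names. MLaaS4HEP itself streams ROOT files of
# arbitrary size in chunks and stays model-agnostic; this sketch does neither.
tree = uproot.open("events.root")["Events"]
arrays = tree.arrays(["feat1", "feat2", "feat3", "label"], library="np")

X = np.column_stack([arrays["feat1"], arrays["feat2"], arrays["feat3"]])
y = arrays["label"]

model = GradientBoostingClassifier()
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```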
Abstract:
Ultra-High-Energy Cosmic Rays are cosmic rays of extremely high energy that reach the Earth at a very low rate and about which we have little data; the uncertainties concern their composition, their sources, their acceleration mechanisms, and the characteristics of the magnetic fields that deflect them along their path. The goal of this study is to determine which magnetic-field models can correctly describe the propagation of UHECRs, by comparing them with the available experimental data; indeed, what we observe is an isotropic distribution in the sky, so theoretical propagation models, in order to be accepted, must reproduce this behaviour. Nine magnetic-field models drawn from cosmological simulations were tested, considering two different compositions for the CRs (iron-like and proton-like), and the result was positive for only three of them. The models for which agreement is found are characterized by a larger inhomogeneity scale than the rejected ones: analysing their power spectra, the main contribution comes from magnetic-field fluctuations on scales of 10 Mpc. Given also the scarce information available on intergalactic magnetic fields, this naturally suggests that fields of this type are favoured. Moreover, for these models, the outcomes agree with the experimental data particularly well when CRs with an iron-like composition are considered, which suggests that this may be the actual composition.
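As a hedged order-of-magnitude aside on why the assumed composition matters for this comparison: the Larmor radius of an iron-like primary is 26 times smaller than that of a proton of the same energy, so iron is much more strongly deflected, and more easily isotropized, by fields coherent on ~10 Mpc scales. The field strength and energy below are illustrative, not values from the thesis:

```python
# Larmor radius of an ultrarelativistic nucleus in a uniform magnetic field,
# r_L ≈ 1.1 Mpc * (E / EeV) / (Z * B / nG). Illustrative values only.
def larmor_radius_mpc(energy_eev, charge_z, b_ngauss):
    return 1.1 * energy_eev / (charge_z * b_ngauss)

E_eev, B_ng = 10.0, 1.0   # assumed: 10 EeV primary, 1 nG extragalactic field
print("proton (Z=1): ", larmor_radius_mpc(E_eev, 1, B_ng), "Mpc")    # ~11 Mpc
print("iron   (Z=26):", larmor_radius_mpc(E_eev, 26, B_ng), "Mpc")   # ~0.4 Mpc
```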
Abstract:
In the coming years a substantial upgrade of the LHC is expected, which foresees increasing the integrated luminosity by a factor of 10 with respect to the current value. This parameter is proportional to the number of collisions per unit time. For this reason, the computational resources needed at all levels of reconstruction will grow considerably. The CMS collaboration has therefore been exploring for several years the possibilities offered by heterogeneous computing, i.e. the practice of distributing computation between CPUs and other dedicated accelerators, such as graphics cards (GPUs). One of the difficulties of this approach is the need to write, validate and maintain different code for every device on which it will have to run. This thesis presents the possibility of using SYCL to translate event-reconstruction code so that it can run efficiently on different devices without substantial modifications. SYCL is an abstraction layer for heterogeneous computing that complies with the ISO C++ standard. This study focuses on porting an algorithm for clustering calorimetric energy deposits, CLUE, using oneAPI, the SYCL implementation supported by Intel. Initially, the standalone version of the algorithm was translated, mainly to become familiar with SYCL and for the convenience of comparing its performance with the existing versions. In this case, performance is very similar to that of native CUDA code on the same hardware. To validate the physics, the algorithm was integrated into a reduced version of the framework used by CMS for reconstruction. The physics results are identical to the other implementations while, in terms of computational performance, in some cases SYCL produces code that is faster than other abstraction layers adopted by CMS, making it an interesting option for the future of heterogeneous computing in high energy physics.
Abstract:
Galactic microquasars are certainly one of the most recent additions to the field of high energy astrophysics. These new objects are X-ray binaries with the ability to generate relativistic jets, and interest in them has been growing during the last decade. Today, they represent primary targets for all space-based observatories working in the X-ray and gamma-ray domains. Behind such interest is the hope that their study will help us understand some of the analogous phenomena observed in distant quasars and active galactic nuclei, which share with microquasars practically the same scaled-up physics. Microquasars are also believed to be among the different kinds of sources responsible for the violent and ever-changing appearance of the gamma-ray sky. In this paper we review the general situation of the microquasar topic, their identification and study, including comments on the recent observational and theoretical discoveries that are, in our opinion, most relevant.
Abstract:
Nuclear (p,alpha) reactions destroying the so-called "light elements" lithium, beryllium and boron have been widely studied in the past, mainly because of their role in understanding some astrophysical phenomena, i.e. mixing phenomena occurring in young F-G stars [1]. Such mechanisms transport the surface material down to the region close to the nuclear destruction zone, where typical temperatures of the order of ~10^6 K are reached. The corresponding Gamow energy E_0 = 1.22 (Z_x^2 Z_X^2 T_6^2)^(1/3) keV [2] is about ~10 keV if one considers the "boron case" and sets Z_x = 1, Z_X = 5 and T_6 = 5 in this formula. Direct measurements of the two reactions 11B(p,alpha_0)8Be and 10B(p,alpha)7Be in this energy region are difficult to perform, mainly because of the combined effects of Coulomb barrier penetrability and electron screening [3]. The indirect Trojan Horse Method (THM) [4-6] allows one to extract the two-body reaction cross section of interest for astrophysics without extrapolation procedures. Owing to the THM formalism, the extracted indirect data have to be normalized to the available direct data at higher energies, which implies that the method is a complementary tool for solving some still-open questions on both nuclear and astrophysical issues [7-12].
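Plugging the quoted boron-case values into the formula above reproduces the ~10 keV figure (here T_6 is the temperature in units of 10^6 K and the result is in keV):

```python
# Gamow energy E_0 = 1.22 * (Z_x^2 * Z_X^2 * T_6^2)^(1/3) keV, as quoted above,
# evaluated for the boron case Z_x = 1, Z_X = 5 at T_6 = 5 (T ~ 5e6 K).
Z_x, Z_X, T6 = 1, 5, 5
E0_keV = 1.22 * (Z_x**2 * Z_X**2 * T6**2) ** (1.0 / 3.0)
print(f"E_0 ≈ {E0_keV:.1f} keV")   # ≈ 10.4 keV, i.e. the ~10 keV quoted above
```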
Abstract:
The Standard Model of particle physics is a very successful theory which describes nearly all known processes of particle physics very precisely. Nevertheless, there are several observations which cannot be explained within the existing theory. In this thesis, two analyses with high energy electrons and positrons using data of the ATLAS detector are presented: one probing the Standard Model of particle physics and another searching for phenomena beyond the Standard Model. The production of an electron-positron pair via the Drell-Yan process leads to a very clean signature in the detector with low background contributions. This allows for a very precise measurement of the cross-section and can be used as a precision test of perturbative quantum chromodynamics (pQCD), for which this process has been calculated at next-to-next-to-leading order (NNLO). The invariant mass spectrum mee is sensitive to parton distribution functions (PDFs), in particular to the poorly known distribution of antiquarks at large momentum fraction (Bjorken x). The measurement of the high-mass Drell-Yan cross-section in proton-proton collisions at a center-of-mass energy of sqrt(s) = 7 TeV is performed on a dataset collected with the ATLAS detector, corresponding to an integrated luminosity of 4.7 fb^-1. The differential cross-section of pp -> Z/gamma + X -> e+e- + X is measured as a function of the invariant mass in the range 116 GeV < mee < 1500 GeV. The background is estimated using a data-driven method and Monte Carlo simulations. The final cross-section is corrected for detector effects and different levels of final-state radiation corrections. A comparison is made to various event generators and to predictions of pQCD calculations at NNLO. Good agreement within the uncertainties between the measured cross-sections and the Standard Model predictions is observed. Examples of observed phenomena which cannot be explained by the Standard Model are the amount of dark matter in the universe and neutrino oscillations. To explain these phenomena several extensions of the Standard Model have been proposed, some of them leading to new processes with a high multiplicity of electrons and/or positrons in the final state. A model-independent search in multi-object final states, with objects defined as electrons and positrons, is performed to search for these phenomena. The dataset collected at a center-of-mass energy of sqrt(s) = 8 TeV, corresponding to an integrated luminosity of 20.3 fb^-1, is used. The events are separated into different categories using the object multiplicity. The data-driven background method, already used for the cross-section measurement, was developed further for up to five objects to obtain an estimate of the number of events including fake contributions. Within the uncertainties, the comparison between data and Standard Model predictions shows no significant deviations.
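Since the measurement is binned in the e+e- invariant mass, a minimal reminder of how mee is built from the two lepton four-momenta may help; the kinematic values below are purely illustrative:

```python
import math

def invariant_mass(p1, p2):
    """m = sqrt((E1 + E2)^2 - |p1 + p2|^2) for four-vectors (E, px, py, pz) in GeV."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(max(E**2 - (px**2 + py**2 + pz**2), 0.0))

# Illustrative back-to-back electron and positron of 60 GeV each:
electron = (60.0, 60.0, 0.0, 0.0)
positron = (60.0, -60.0, 0.0, 0.0)
print(invariant_mass(electron, positron), "GeV")   # 120 GeV, inside the 116-1500 GeV window
```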