988 results for ultrafast physics
Abstract:
Expertise in physics has traditionally been studied in cognitive science, where physics expertise is understood through the differences between novice and expert problem-solving skills. This cognitive perspective yields only a partial model of physics expertise and does not take into account how physics experts develop in the natural context of research. This dissertation takes a social and cultural perspective of learning through apprenticeship to model the development of physics expertise in physics graduate students within a research group. I use the qualitative methodology of an ethnographic case study to observe and video-record the common practices of graduate students in their weekly biophysics research group meetings. I recorded observation notes and conducted interviews with all participants of the biophysics research group over a period of eight months. I apply the theoretical framework of Communities of Practice to distinguish the cultural norms of the group that cultivate expert physics practices. Results indicate that physics expertise is specific to a topic or subfield and is established by effectively publishing research in the larger biophysics research community. The participating biophysics research group follows a learning trajectory in which its students contribute to research and learn to communicate it in the larger biophysics community. Along this trajectory, students develop expert member competencies, learning to communicate their research and to recognise the standards and trends of research in the larger research community. Findings from this dissertation expand the model of physics expertise beyond the cognitive realm and add the social and cultural dimensions of physics expertise development. This research also addresses ways to increase physics graduate students' success toward the PhD and to decrease the 48% attrition rate of physics graduate students. Cultivating effective research experiences that give graduate students agency and autonomy beyond their research groups gives students the motivation to finish graduate school and establish their physics expertise.
Abstract:
Ultrafast lasers combine an extremely small beam size with high pulse intensity, which enables spatially localised modification either on the surface or in the bulk of materials. Ultrafast lasers have therefore been widely used to micromachine optical fibres and alter optical structures. To control the micromachining process precisely and achieve the desired structure and modification, the effects of the laser parameters on the process need to be investigated and better understood; these responses are important to laser machining, yet most of them are usually unknown during the process. In this work, we report real-time monitoring of the reflection of PMMA-based optical fibre Bragg gratings (POFBGs) during excimer ultraviolet laser micromachining. Photochemical and thermal effects were observed during the process. The UV radiation was absorbed by the PMMA material, which induced modifications in both the spatial structure and the material properties of the POFBG. The POFBG showed a significant wavelength blue shift during laser micromachining. Part of it is attributed to UV absorption converted into thermal energy, whilst the remainder did not disappear after the POFBG cooled off and is attributed to UV-induced photodegradation of the POF.
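For orientation, the blue shift described above can be read through the standard fibre Bragg grating relation; the expressions below are a generic sketch of that relation, not taken from the work itself:

\lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda,
\qquad
\frac{\Delta\lambda_B}{\lambda_B} \simeq \frac{\Delta n_{\mathrm{eff}}}{n_{\mathrm{eff}}} + \frac{\Delta\Lambda}{\Lambda}.

A reversible thermal contribution (PMMA typically has a negative thermo-optic coefficient, so heating lowers n_eff) and a permanent photodegradation-induced drop in n_eff would both push \lambda_B to shorter wavelengths, consistent with the partly reversible blue shift reported.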
Abstract:
Walking is the most basic form of transportation. A good understanding of pedestrian dynamics is essential to meeting people's mobility and accessibility needs by providing a safe and quick walking flow. Advances in the dynamics of pedestrians in crowds are of great theoretical and practical interest, as they lead to new insights regarding the planning of pedestrian facilities, crowd management, and evacuation analysis. As a physicist, I would like to put forward some additional theoretical and practical contributions that could be interesting to explore regarding the perspective of physics on human crowd dynamics (panic as a specific form of behaviour excluded).
Abstract:
The present manuscript focuses on out-of-equilibrium physics in two-dimensional models. Its purpose is to present some results obtained on out-of-equilibrium dynamics in its non-perturbative aspects. This can be understood in two different ways: the former is related to integrability, which is non-perturbative by nature; the latter is related to the emergence of phenomena in the out-of-equilibrium dynamics of non-integrable models that are not accessible by standard perturbative techniques. In the study of out-of-equilibrium dynamics, two different protocols are used throughout this work: the bipartitioning protocol, within the Generalised Hydrodynamics (GHD) framework, and the quantum quench protocol. With the GHD machinery we study the Staircase Model, highlighting how the hydrodynamic picture sheds new light on the physics of Integrable Quantum Field Theories; with quench protocols we analyse different setups where a non-perturbative description is needed and various dynamical phenomena emerge, such as the manifestation of a dynamical Gibbs effect, confinement, and the emergence of Bloch oscillations preventing thermalisation.
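For reference, the bipartitioning protocol mentioned above is usually analysed with the Euler-scale GHD continuity equation for the quasi-particle density; a sketch of its standard form (with v^eff the effective, dressed velocity) is

\partial_t \rho_p(\theta; x, t) + \partial_x \left[ v^{\mathrm{eff}}(\theta; x, t)\, \rho_p(\theta; x, t) \right] = 0,

where the self-consistent dependence of v^eff on the full quasi-particle density makes the equation non-linear despite its simple transport form.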
Abstract:
Image-to-image (i2i) translation networks can generate fake images useful for many applications in augmented reality, computer graphics, and robotics. However, they require large-scale datasets and high contextual understanding to be trained correctly. In this thesis, we propose strategies for solving these problems, improving the performance of i2i translation networks by using domain- or physics-related priors. The thesis is divided into two parts. In Part I, we exploit human abstraction capabilities to identify existing relationships in images, thus defining domains that can be leveraged to improve data usage efficiency. We use additional domain-related information to train networks on web-crawled data, hallucinate scenarios unseen during training, and perform few-shot learning. In Part II, we instead rely on physics priors. First, we combine realistic physics-based rendering with generative networks to boost the realism and controllability of the outputs. Then, we exploit naive physical guidance to drive a manifold reorganisation, which allows the generation of continuous conditions such as timelapses.
Abstract:
With the CERN LHC program underway, data growth in the High Energy Physics (HEP) field has accelerated, and the use of Machine Learning (ML) in HEP will be critical during the HL-LHC program, when the data produced will reach the exascale. ML techniques have been successfully used in many areas of HEP; nevertheless, developing an ML project and implementing it for production use is a highly time-consuming task that requires specific skills. Complicating this scenario is the fact that HEP data are stored in the ROOT data format, which is mostly unknown outside the HEP community. The work presented in this thesis focuses on the development of a Machine Learning as a Service (MLaaS) solution for HEP, aiming to provide a cloud service that allows HEP users to run ML pipelines via HTTP calls. These pipelines are executed using the MLaaS4HEP framework, which allows reading data, processing data, and training ML models directly from ROOT files of arbitrary size hosted on local or distributed data sources. Such a solution provides HEP users who are not ML experts with a tool that allows them to apply ML techniques in their analyses in a streamlined manner. Over the years the MLaaS4HEP framework has been developed, validated, and tested, and new features have been added. A first MLaaS solution was developed by automating the deployment of a platform equipped with the MLaaS4HEP framework. Then, a service with APIs was developed, so that a user, after being authenticated and authorized, can submit MLaaS4HEP workflows and produce trained ML models ready for the inference phase. A working prototype of this service is currently running on a virtual machine of INFN-Cloud and complies with the requirements to be added to the INFN Cloud portfolio of services.
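As a purely illustrative sketch of what "running ML pipelines via HTTP calls" could look like from the user side, the snippet below posts a workflow description to a hypothetical endpoint; the URL, payload fields, and authentication scheme are assumptions for illustration, not the actual MLaaS4HEP API.

import requests

service_url = "https://mlaas.example.infn.it/api/v1/workflows"  # hypothetical endpoint
token = "..."                                                    # auth token obtained after login (placeholder)

workflow = {
    "files": ["root://eospublic.cern.ch//path/to/dataset.root"],  # hypothetical input files
    "branches": ["pt", "eta", "phi"],                             # hypothetical branches to read
    "model": {"type": "keras", "epochs": 5, "batch_size": 256},   # hypothetical training settings
}

# Submit the workflow and print the service response.
resp = requests.post(service_url,
                     json=workflow,
                     headers={"Authorization": f"Bearer {token}"},
                     timeout=30)
resp.raise_for_status()
print("submitted workflow:", resp.json())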
Abstract:
The Short Baseline Neutrino Program at Fermilab aims to confirm or definitively rule out the existence of sterile neutrinos at the eV mass scale. The program will perform the most sensitive search in both the nue appearance and numu disappearance channels along the Booster Neutrino Beamline. The far detector, ICARUS-T600, is a high-granularity Liquid Argon Time Projection Chamber located 600 m from the Booster neutrino source and at shallow depth, and it is therefore exposed to a large flux of cosmic particles. Additionally, ICARUS is located 6 degrees off axis with respect to the neutrino beam from the Main Injector. This thesis presents the construction, installation, and commissioning of the ICARUS Cosmic Ray Tagger system, which provides 4π coverage of the active liquid argon volume. By exploiting only the precise nanosecond-scale synchronization between the cosmic tagger and the PMT optical flashes, it is possible to determine whether an event was likely triggered by a cosmic particle. The results show that, using the Top Cosmic Ray Tagger alone, a conservative rejection of more than 65% of the cosmic-induced background can be achieved. Additionally, by requiring the absence of hits in the whole cosmic tagger system, it is possible to perform a pre-selection of contained neutrino events ahead of the full event reconstruction.
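A minimal sketch of the kind of time-coincidence logic described above: an event is flagged as cosmic-like if a PMT optical flash has a CRT hit within a narrow time window. The window width and the data layout (plain arrays of times in nanoseconds) are illustrative assumptions, not the experiment's actual reconstruction code.

import numpy as np

def is_cosmic_like(flash_times_ns, crt_hit_times_ns, window_ns=100.0):
    """Return True if any PMT flash is coincident with a CRT hit."""
    crt = np.sort(np.asarray(crt_hit_times_ns))
    for t in np.asarray(flash_times_ns):
        idx = np.searchsorted(crt, t)                   # nearest CRT hits bracket this index
        neighbours = crt[max(idx - 1, 0): idx + 1]
        if neighbours.size and np.min(np.abs(neighbours - t)) < window_ns:
            return True
    return False

# Example: a flash at 5020 ns with a CRT hit at 4980 ns is tagged as cosmic-like.
print(is_cosmic_like([5020.0], [4980.0, 12000.0]))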
Abstract:
COVID-19, the disease associated with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has posed a serious threat to public health and the global economy since its discovery in China in December 2019. Researchers have carried out numerous studies; in particular, the application of epidemiological models built from the collected data has made it possible to forecast different scenarios for the evolution of the disease over the short-to-medium term. The objectives of this thesis revolve around three aspects: the available data on COVID-19, compartmental mathematical models, with particular attention to the SEIJDHR model that includes vaccinations, and the use of physics-informed neural networks (PINNs), a new deep-learning-based approach that brings the first two aspects together. The three aspects are first examined individually in the first three chapters of this work, and PINNs are then applied to the SEIJDHR model. Finally, the fourth chapter reports relevant excerpts of the Python code used and the numerical results obtained. In particular, plots of the short-to-medium-term forecasts are shown, obtained by feeding in daily counts of positive cases, hospitalisations, and deaths, first for the city of New York and then for Italy. Moreover, in the investigation of the predictive part concerning the Italian data, a critical point was identified in the function that models the hospitalisation rate; numerous experiments were therefore carried out to control these forecasts.
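As a rough illustration of the PINN approach mentioned above, the sketch below trains a small network to fit synthetic case data while penalising the residuals of a compartmental model; for brevity it uses a plain SIR system with assumed parameters rather than the full SEIJDHR model with vaccinations.

import torch

torch.manual_seed(0)

net = torch.nn.Sequential(          # maps time t -> (S, I, R) fractions
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 3), torch.nn.Softplus(),
)

beta, gamma = 0.3, 0.1              # assumed transmission / recovery rates (illustrative)
t_colloc = torch.linspace(0, 60, 200).reshape(-1, 1).requires_grad_(True)

# Synthetic "observations" of the infected fraction (stand-in for daily case data).
t_data = torch.linspace(0, 60, 20).reshape(-1, 1)
i_data = 0.05 * torch.exp(-((t_data - 25) / 12) ** 2)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    out = net(t_colloc)
    S, I, R = out[:, 0:1], out[:, 1:2], out[:, 2:3]
    # Time derivatives of each compartment via automatic differentiation.
    dS = torch.autograd.grad(S, t_colloc, torch.ones_like(S), create_graph=True)[0]
    dI = torch.autograd.grad(I, t_colloc, torch.ones_like(I), create_graph=True)[0]
    dR = torch.autograd.grad(R, t_colloc, torch.ones_like(R), create_graph=True)[0]
    # ODE residuals of the SIR system provide the "physics" part of the loss.
    physics_loss = ((dS + beta * S * I) ** 2
                    + (dI - beta * S * I + gamma * I) ** 2
                    + (dR - gamma * I) ** 2).mean()
    data_loss = ((net(t_data)[:, 1:2] - i_data) ** 2).mean()
    loss = physics_loss + data_loss
    loss.backward()
    opt.step()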
Abstract:
The scientific success of the LHC experiments at CERN depends heavily on the availability of computing resources that can efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world with a high-performance network. The LHC has an ambitious experimental program for the coming years, which includes large investments and improvements both in the detector hardware and in the software and computing systems, in order to deal with the huge increase in the event rate expected during the High Luminosity LHC (HL-LHC) phase and consequently with the huge amount of data that will be produced. In recent years the role of Artificial Intelligence has become increasingly relevant in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been successfully used in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction and identification, and Monte Carlo generation, and they will surely be crucial in the HL-LHC phase. This thesis aims at contributing to a CMS R&D project concerning a Machine Learning "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. This framework has been updated with new features in the data preprocessing phase, allowing more flexibility for the user. Since the MLaaS4HEP framework is experiment-agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.
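As a small illustration of reading ROOT files of arbitrary size from Python, the sketch below iterates over a TTree in chunks with uproot; the file path, tree name, and branch names are placeholders, and this is not the MLaaS4HEP reading code itself.

import uproot

file_path = "events.root"     # hypothetical ROOT file
tree_name = "Events"          # hypothetical TTree name
branches = ["lep_pt", "lep_eta", "missing_et"]

for chunk in uproot.iterate(f"{file_path}:{tree_name}", branches,
                            step_size="100 MB", library="np"):
    # `chunk` is a dict of NumPy arrays, one entry per branch; each iteration
    # holds only one chunk in memory, so the total file size is not a constraint.
    features = chunk["lep_pt"]
    print(len(features), "events in this chunk")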
Abstract:
Deep Learning architectures give brilliant results in a large variety of fields, but a comprehensive theoretical description of their inner workings is still lacking. In this work, we try to understand the behaviour of neural networks by modelling them within the frameworks of Thermodynamics and Condensed Matter Physics. We approach neural networks as in a real laboratory, measuring the frequency spectrum and the entropy of the weights of the trained model. The stochasticity of the training plays a central role in the dynamics of the weights and makes it difficult to assimilate neural networks to simple physical systems. However, the analogy with Thermodynamics and the introduction of a well-defined temperature lead us to an interesting result: if we eliminate the "hottest" filters from a CNN, the performance of the model remains the same, whereas if we eliminate the "coldest" ones, the performance gets drastically worse. This result could be exploited to build a training loop that eliminates the filters that do not contribute to loss reduction. In this way, the computational cost of training would be reduced and, more importantly, this would be done by following a physical model. In any case, besides important practical applications, our analysis shows that a new and improved modelling of Deep Learning systems can pave the way to new and more efficient algorithms.
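A minimal sketch of the pruning idea described above: rank convolutional filters by a temperature-like statistic and zero out the hottest ones. The thesis derives the temperature from the training dynamics of the weights; here, purely as a stand-in, the per-filter weight variance is used.

import torch
import torchvision

model = torchvision.models.resnet18(weights=None)  # any CNN works for the illustration

def prune_hottest_filters(conv: torch.nn.Conv2d, fraction: float = 0.1) -> None:
    """Zero out the `fraction` of filters with the largest weight variance."""
    with torch.no_grad():
        temperature = conv.weight.var(dim=(1, 2, 3))      # one proxy value per filter
        n_prune = max(1, int(fraction * conv.out_channels))
        hottest = torch.topk(temperature, n_prune).indices
        conv.weight[hottest] = 0.0                         # remove their contribution
        if conv.bias is not None:
            conv.bias[hottest] = 0.0

for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune_hottest_filters(module, fraction=0.1)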
Abstract:
This thesis project is framed in the research field of Physics Education and aims to contribute to the reflection on the importance of disciplinary identities in addressing interdisciplinarity through the lens of the Nature of Science (NOS). In particular, the study focuses on the module on the parabola and parabolic motion, which was designed within the EU project IDENTITIES. The project aims to design modules to innovate pre-service teacher education according to contemporary challenges, focusing on interdisciplinarity in curricular and STEM topics (especially between physics, mathematics and computer science). The modules are designed according to a model of disciplines and interdisciplinarity that the IDENTITIES project has been elaborating from two main theoretical frameworks: the Family Resemblance Approach (FRA), reconceptualized for the Nature of Science (Erduran & Dagher, 2014), and the boundary crossing and boundary objects framework by Akkerman and Bakker (2011). The main aim of the thesis is to explore the impact of this interdisciplinary model in the specific case of the implementation of the parabola and parabolic motion module in a context of pre-service teacher education. To reach this purpose, we have analyzed data collected during the implementation in order to investigate, in particular, the role of the FRA as a learning tool to: a) elaborate on the concept of "discipline", within the broader problem of defining interdisciplinarity; b) compare the epistemic cores of physics and mathematics; c) develop epistemic skills and interdisciplinary competences in student-teachers. The analysis of the data led us to recognize three different roles played by the FRA: FRA as an epistemological activator, FRA as scaffolding for reasoning about and navigating (inhabiting) the complexity, and FRA as a lens to investigate the relationship between physics and mathematics in the historical case.
Abstract:
Ultrafast pump-probe spectroscopy is a conceptually simple and versatile tool for resolving photoinduced dynamics in molecular systems. Due to the fast development of new experimental setups, such as synchrotron light sources and X-ray free electron lasers (XFELs), new spectral windows are becoming accessible. On the one hand, these sources have enabled scientists to access ever faster time scales and to reach unprecedented insights into the dynamical properties of matter. On the other hand, the complementarity of well-established and novel techniques allows the same physical process to be studied from different points of view, integrating the advantages and overcoming the limitations of each approach. In this context, it is highly desirable to reach a clear understanding of which type of spectroscopy is best suited to capture a certain facet of a given photo-induced process, that is, to establish a correlation between the process to be unraveled and the technique to be used. In this thesis, I will show how computational spectroscopy can be a tool to establish such a correlation. I will study a specific process, the ultrafast energy transfer in the nicotinamide adenine dinucleotide dimer (NADH). This process will be observed in different spectral windows (from the UV-VIS to the X-ray region), assessing the ability of different spectroscopic techniques to unravel the system evolution by means of state-of-the-art theoretical models and methodologies. The comparison of different spectroscopic simulations will demonstrate their complementarity, eventually allowing the identification of the type of spectroscopy best suited to resolve the ultrafast energy transfer.
Abstract:
El Niño-Southern Oscillation (ENSO) is the major climate phenomenon taking place in the tropical Pacific Ocean, with large-scale environmental, climatic, and socio-economic impacts. This thesis retraces the main steps taken to understand such a complex phenomenon. First, the mechanisms governing its dynamics are studied, up to the formulation of the mathematical model known as the Delayed Oscillator (DO) model, proposed by Suarez and Schopf in 1988. Then, to account for the chaotic nature of the system under study, the Stochastically Perturbed Parameterisation Tendencies (SPPT) scheme is introduced into the model. Finally, two examples of numerical solution of the DO are presented, both with and without the correction introduced by the SPPT scheme, and the extent to which SPPT brings real improvements to the model is assessed.
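As a sketch of the numerical experiments described above, the snippet below integrates the non-dimensional delayed-oscillator equation dT/dt = T - T^3 - alpha*T(t - delta), in the spirit of Suarez and Schopf (1988), with forward Euler, optionally multiplying the tendency by an AR(1) perturbation in the spirit of SPPT; parameter values and noise settings are illustrative assumptions.

import numpy as np

def integrate_do(alpha=0.75, delta=6.0, dt=0.01, t_max=200.0,
                 use_sppt=False, sigma=0.3, tau=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    lag = int(delta / dt)                      # delay expressed in steps
    T = np.zeros(n_steps)
    T[:lag + 1] = 0.1                          # constant history for t <= 0
    r = 0.0                                    # AR(1) perturbation state
    phi = np.exp(-dt / tau)                    # noise autocorrelation over one step
    for n in range(lag, n_steps - 1):
        tendency = T[n] - T[n] ** 3 - alpha * T[n - lag]
        if use_sppt:
            r = phi * r + np.sqrt(1 - phi ** 2) * sigma * rng.standard_normal()
            tendency *= (1.0 + r)              # stochastically perturbed tendency
        T[n + 1] = T[n] + dt * tendency
    return T

T_det = integrate_do()                # deterministic DO solution
T_sppt = integrate_do(use_sppt=True)  # same model with the SPPT-style perturbation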
Abstract:
A substantial upgrade of the LHC is expected in the coming years, which foresees increasing the integrated luminosity by a factor of 10 with respect to the current value. This parameter is proportional to the number of collisions per unit time. As a result, the computational resources needed at every level of reconstruction will grow considerably. For this reason, the CMS collaboration has for several years been exploring the possibilities offered by heterogeneous computing, i.e. the practice of distributing the computation between CPUs and other dedicated accelerators, such as graphics cards (GPUs). One of the difficulties of this approach is the need to write, validate, and maintain different code for every device on which it has to run. This thesis presents the possibility of using SYCL to translate event-reconstruction code so that it can run efficiently on different devices without substantial modifications. SYCL is an abstraction layer for heterogeneous computing that complies with the ISO C++ standard. This study focuses on porting CLUE, a clustering algorithm for calorimetric energy deposits, using oneAPI, the SYCL implementation supported by Intel. Initially, the standalone version of the algorithm was translated, mainly to gain familiarity with SYCL and for ease of comparing performance with the existing versions. In this case, performance is very close to that of native CUDA code on the same hardware. To validate the physics, the algorithm was integrated into a reduced version of the framework used by CMS for reconstruction. The physics results are identical to those of the other implementations while, in terms of computational performance, SYCL in some cases produces faster code than other abstraction layers adopted by CMS, thus presenting itself as an interesting option for the future of heterogeneous computing in high-energy physics.
Abstract:
My thesis falls within the framework of physics education and the teaching of mathematics. The objective of this report was pursued by using geometrical (in mathematics) and qualitative (in physics) problems. We prepared four open-answer exercises for mathematics and three for physics. The test sample was selected across two different school phases: the end of middle school (third year, 8th grade) and the beginning of high school (second and third year, 10th and 11th grades respectively). High school students achieved the best results in almost every problem, but 10th grade students got the best overall results. Moreover, a clear tendency to not even attempt the qualitative problems emerged from the first collection of graphs, regardless of subject and grade. In order to improve students' problem-solving skills, it is worth investing in vertical learning and spiral curricula. It would make sense to establish a stronger and clearer connection between physics and mathematical knowledge through an interdisciplinary approach.