988 results for computational physics
Abstract:
The main objective of physics-based modeling of power converter components is to design the whole converter with respect to physical and operational constraints. To this end, all elements and components of the energy conversion system are modeled numerically and combined to obtain a behavioral model of the whole system. Previously proposed high-frequency (HF) models of power converters are circuit models related only to the parasitic inner parameters of the power devices and the connections between the components. This dissertation aims to obtain physics-based models for power conversion systems that not only represent the steady-state behavior of the components but also predict their high-frequency characteristics. The developed physics-based model represents the physical device with a high level of accuracy in predicting its operating condition. The proposed model makes it possible to accurately design components such as effective EMI filters, switching algorithms, and circuit topologies [7]. One application of the developed modeling technique is the design of new topologies for high-frequency, high-efficiency converters for variable-speed drives. The main advantage of the modeling method presented in this dissertation is the practical design of an inverter for high-power applications, with the ability to overcome the blocking-voltage limitations of available power semiconductor devices. Another advantage is the selection of the best-matching topology, with an inherent reduction of switching losses that can be exploited to improve overall efficiency. The physics-based modeling approach in this dissertation makes it possible to design a power electronic conversion system to meet electromagnetic standards and design constraints. This includes physical characteristics such as reduced package size and weight, optimized interactions with neighboring components, and higher power density. In addition, electromagnetic behaviors and signatures can be evaluated, including the study of conducted and radiated EMI interactions, as well as the design of attenuation measures and enclosures.
Abstract:
Mechanical conditioning has been shown to promote tissue formation in a wide variety of tissue engineering efforts. However, the underlying mechanisms by which external mechanical stimuli regulate cells and tissues are not known. This is particularly relevant in the area of heart valve tissue engineering (HVTE), owing to the intense hemodynamic environments that surround native valves. Some studies suggest that oscillatory shear stress (OSS) caused by steady flow and scaffold flexure plays a critical role in engineered tissue formation derived from bone marrow derived stem cells (BMSCs). In addition, scaffold flexure may enhance nutrient (e.g. oxygen, glucose) transport. In this study, we computationally quantified i) the magnitude of fluid-induced shear stresses, ii) the extent of temporal fluid oscillations in the flow field, using the oscillatory shear index (OSI) parameter, and iii) glucose and oxygen mass transport profiles. Noting that sample cyclic flexure induces a high degree of oscillatory shear stress (OSS), we performed moving-boundary computational fluid dynamic simulations of samples housed within a bioreactor to consider the effects of: 1) no flow, no flexure (control group), 2) steady flow alone, 3) cyclic flexure alone, and 4) combined steady flow and cyclic flexure environments. We also coupled a diffusion and convection mass transport equation to the simulated system. We found that the coexistence of both OSS and appreciable shear stress magnitudes, described by the newly introduced parameter OSI-τ, explained the high levels of engineered collagen previously observed when combining cyclic flexure and steady flow states; on its own, each of these metrics showed no association. This finding suggests that cyclic flexure and steady flow synergistically promote engineered heart valve tissue production via OSS, so long as the oscillations are accompanied by a critical magnitude of shear stress. In addition, our simulations showed that mass transport of glucose and oxygen is enhanced by sample movement at low sample porosities, but does not play a role in highly porous scaffolds. Preliminary in-house in vitro experiments showed that cell proliferation and phenotype are enhanced in OSI-τ environments.
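For reference, the oscillatory shear index used above is conventionally defined as OSI = 0.5 (1 − |∫τ dt| / ∫|τ| dt), so that 0 corresponds to purely unidirectional shear and 0.5 to purely oscillatory shear. A minimal sketch of how it can be computed from a simulated wall-shear-stress time history follows (illustrative Python only; the variable names and sample signal are not taken from the thesis):

    import numpy as np

    def oscillatory_shear_index(tau, t):
        """Standard OSI: 0.5 * (1 - |time integral of tau| / integral of |tau|)."""
        num = np.abs(np.trapz(tau, t))   # magnitude of the time-averaged shear
        den = np.trapz(np.abs(tau), t)   # time average of the shear magnitude
        return 0.5 * (1.0 - num / den)

    # A purely oscillatory stress over one flexure cycle gives OSI close to 0.5
    t = np.linspace(0.0, 1.0, 1000)
    tau = 0.8 * np.sin(2 * np.pi * t)
    print(oscillatory_shear_index(tau, t))

A combined metric such as OSI-τ can then be formed by pairing the OSI with the time-averaged shear stress magnitude; the exact construction used in the thesis is not reproduced here, but the pairing of oscillation and magnitude is the point the abstract argues is associated with collagen formation.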
Abstract:
Low-frequency electromagnetic compatibility (EMC) is an increasingly important aspect of the design of practical systems, needed to ensure the functional safety and reliability of complex products. The opportunities for using numerical techniques to predict and analyze a system's EMC are therefore of considerable interest in many industries. As the first phase of this study, a proper model, including all the details of the component, was required. The advances in EMC modeling were therefore reviewed, classifying analytical and numerical models. The selected approach was finite element (FE) modeling, coupled with the distributed network method, to generate models of the converter's components and obtain the frequency behavioral model of the converter. The method is able to reveal the behavior of parasitic elements and higher resonances, which have a critical impact on EMI studies. For the EMC and signature studies of the machine drives, equivalent source modeling was investigated. Taking into account the details of the multi-machine environment, including actual models, some innovations in equivalent source modeling were introduced to decrease the simulation time dramatically. Several models were designed in this study; the voltage-current cube model and the wire model gave the best results. A GA-based PSO method was used for the optimization process. Superposition and suppression of the fields when coupling the components were also studied and verified. The simulation time of the equivalent model is 80-100 times lower than that of the detailed model. All tests were verified experimentally. As an application of the EMC and signature study, fault diagnosis and condition monitoring of an induction motor drive were developed using radiated fields. In addition to experimental tests, 3D FE analysis was coupled with circuit-based software to implement the incipient fault cases. The identification was implemented using an ANN for seventy different fault cases, and the simulation results were verified experimentally. Finally, identification of the types of power components was implemented. The results show that it is possible to identify the type of components, as well as the faulty components, by comparing the amplitudes of their stray-field harmonics. Identification using the stray fields is nondestructive and can be used for setups that cannot go offline and be dismantled.
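As a sketch of the optimization step mentioned above, the following minimal particle swarm loop with a GA-style re-randomization of a small fraction of particles per iteration illustrates the general idea; the objective function, bounds and hybridization details used in the dissertation are not reproduced here, so this is a hedged illustration only:

    import numpy as np

    def pso_minimize(objective, bounds, n_particles=30, n_iter=200,
                     w=0.7, c1=1.5, c2=1.5, mutation_rate=0.05, seed=0):
        """Minimal PSO with a GA-style random mutation of a fraction of particles."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds[:, 0], bounds[:, 1]
        dim = len(lo)
        x = rng.uniform(lo, hi, size=(n_particles, dim))      # particle positions
        v = np.zeros_like(x)                                  # particle velocities
        pbest = x.copy()
        pbest_f = np.array([objective(p) for p in x])
        gbest = pbest[np.argmin(pbest_f)].copy()

        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            # GA-style mutation: re-randomize a small fraction of the swarm
            mask = rng.random(n_particles) < mutation_rate
            x[mask] = rng.uniform(lo, hi, size=(mask.sum(), dim))
            f = np.array([objective(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[np.argmin(pbest_f)].copy()
        return gbest, pbest_f.min()

    # Toy usage: recover a two-parameter minimum (purely illustrative objective)
    bounds = np.array([[0.0, 1.0], [0.0, 1.0]])
    best, best_val = pso_minimize(lambda p: np.sum((p - 0.3) ** 2), bounds)

In the equivalent-source setting, the objective would compare the field produced by the candidate source parameters with the detailed-model or measured field, but that cost function is specific to the dissertation and is not reproduced here.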
Abstract:
Expertise in physics has traditionally been studied in cognitive science, where physics expertise is understood through the difference between novice and expert problem-solving skills. The cognitive perspective on physics experts creates only a partial model of physics expertise and does not take into account the development of physics experts in the natural context of research. This dissertation takes a social and cultural perspective of learning through apprenticeship to model the development of physics expertise of physics graduate students in a research group. I use a qualitative methodological approach, an ethnographic case study, to observe and video record the common practices of graduate students in their weekly biophysics research group meetings. I recorded observation notes and conducted interviews with all participants of the biophysics research group over a period of eight months. I apply the theoretical framework of Communities of Practice to distinguish the cultural norms of the group that cultivate expert physics practices. Results indicate that physics expertise is specific to a topic or subfield and is established through effectively publishing research in the larger biophysics research community. The participating biophysics research group follows a learning trajectory for its students to contribute to research and learn to communicate their research in the larger biophysics community. In this learning trajectory, students develop expert member competencies as they learn to communicate their research and to learn the standards and trends of research in the larger research community. Findings from this dissertation expand the model of physics expertise beyond the cognitive realm and add the social and cultural nature of physics expertise development. This research also addresses ways to increase physics graduate students' success toward the PhD and decrease the 48% attrition rate of physics graduate students. Cultivating effective research experiences that give graduate students agency and autonomy beyond their research groups gives students the motivation to finish graduate school and establish their physics expertise.
Abstract:
Walking is the most basic form of transportation. A good understanding of pedestrian dynamics is essential to meet people's mobility and accessibility needs by providing safe and quick walking flows. Advances in the dynamics of pedestrians in crowds are of great theoretical and practical interest, as they lead to new insights regarding the planning of pedestrian facilities, crowd management, and evacuation analysis. As a physicist, I would like to put forward some additional theoretical and practical contributions that could be interesting to explore regarding the perspective of physics on human crowd dynamics (panic, as a specific form of behavior, excluded).
Abstract:
The present manuscript focuses on out-of-equilibrium physics in two-dimensional models. Its purpose is to present some results obtained on out-of-equilibrium dynamics in its non-perturbative aspects. This can be understood in two different ways: the former is related to integrability, which is non-perturbative by nature; the latter is related to the emergence of phenomena in the out-of-equilibrium dynamics of non-integrable models that are not accessible by standard perturbative techniques. In the study of out-of-equilibrium dynamics, two different protocols are used throughout this work: the bipartitioning protocol, within the Generalised Hydrodynamics (GHD) framework, and the quantum quench protocol. With the GHD machinery we study the Staircase Model, highlighting how the hydrodynamic picture sheds new light on the physics of Integrable Quantum Field Theories; with quench protocols we analyse different setups where a non-perturbative description is needed and various dynamical phenomena emerge, such as the manifestation of a dynamical Gibbs effect, confinement, and the emergence of Bloch oscillations preventing thermalisation.
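For context, the central equation of the Generalised Hydrodynamics framework mentioned above is the Euler-scale continuity equation for the quasi-particle density, written here in its standard form (the notation of the thesis itself may differ):

    \partial_t \rho_p(x,t,\theta)
      + \partial_x\!\left[ v^{\mathrm{eff}}(\theta)\, \rho_p(x,t,\theta) \right] = 0,
    \qquad
    v^{\mathrm{eff}}(\theta) = \frac{(E')^{\mathrm{dr}}(\theta)}{(p')^{\mathrm{dr}}(\theta)},

where E(θ) and p(θ) are the bare energy and momentum of a quasi-particle of rapidity θ and the superscript "dr" denotes dressing by the interaction. In the bipartitioning protocol this equation is solved with a two-reservoir initial condition, which is the setting in which the Staircase Model is studied above.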
Abstract:
Image-to-image (i2i) translation networks can generate fake images that are useful for many applications in augmented reality, computer graphics, and robotics. However, they require large-scale datasets and high contextual understanding to be trained correctly. In this thesis, we propose strategies for solving these problems, improving the performance of i2i translation networks by using domain- or physics-related priors. The thesis is divided into two parts. In Part I, we exploit human abstraction capabilities to identify existing relationships in images, thus defining domains that can be leveraged to improve data usage efficiency. We use additional domain-related information to train networks on web-crawled data, hallucinate scenarios unseen during training, and perform few-shot learning. In Part II, we instead rely on physics priors. First, we combine realistic physics-based rendering with generative networks to boost the realism and controllability of the outputs. Then, we exploit naive physical guidance to drive a manifold reorganization, which allows generating continuous conditions such as timelapses.
Abstract:
With the CERN LHC program underway, there has been an acceleration of data growth in the High Energy Physics (HEP) field, and the usage of Machine Learning (ML) in HEP will be critical during the HL-LHC program, when the data produced will reach the exascale. ML techniques have been successfully used in many areas of HEP; nevertheless, the development of an ML project and its implementation for production use is a highly time-consuming task and requires specific skills. Complicating this scenario is the fact that HEP data are stored in the ROOT data format, which is mostly unknown outside of the HEP community. The work presented in this thesis focuses on the development of a Machine Learning as a Service (MLaaS) solution for HEP, aiming to provide a cloud service that allows HEP users to run ML pipelines via HTTP calls. These pipelines are executed using the MLaaS4HEP framework, which allows reading data, processing data, and training ML models directly from ROOT files of arbitrary size, from local or distributed data sources. Such a solution provides HEP users who are not ML experts with a tool that allows them to apply ML techniques in their analyses in a streamlined manner. Over the years the MLaaS4HEP framework has been developed, validated, and tested, and new features have been added. A first MLaaS solution was developed by automating the deployment of a platform equipped with the MLaaS4HEP framework. Then, a service with APIs was developed, so that a user, after being authenticated and authorized, can submit MLaaS4HEP workflows producing trained ML models ready for the inference phase. A working prototype of this service is currently running on a virtual machine of INFN-Cloud and meets the requirements to be added to the INFN Cloud portfolio of services.
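As a rough sketch of the kind of pipeline such a service automates, reading a ROOT file in chunks and training a model incrementally might look as follows (the file, tree and branch names are placeholders, and this is not the MLaaS4HEP API itself):

    import uproot
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    model = SGDClassifier(loss="log_loss")
    branches = ["feature_a", "feature_b", "label"]     # placeholder branch names

    # Stream the ROOT file chunk by chunk so files of arbitrary size fit in memory
    for chunk in uproot.iterate("dataset.root:Events", branches,
                                step_size="100 MB", library="np"):
        X = np.stack([chunk["feature_a"], chunk["feature_b"]], axis=1)
        y = chunk["label"].astype(int)
        model.partial_fit(X, y, classes=[0, 1])        # incremental training

The service described above exposes this kind of pipeline through HTTP calls instead of a locally run script.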
Abstract:
The Short Baseline Neutrino Program at Fermilab aims to confirm or definitively rule out the existence of sterile neutrinos at the eV mass scale. The program will perform the most sensitive search in both the νe appearance and νμ disappearance channels along the Booster Neutrino Beamline. The far detector, ICARUS-T600, is a high-granularity Liquid Argon Time Projection Chamber located 600 m from the Booster neutrino source and at shallow depth, and is thus exposed to a large flux of cosmic particles. Additionally, ICARUS is located 6 degrees off axis with respect to the neutrino beam from the Main Injector. This thesis presents the construction, installation and commissioning of the ICARUS Cosmic Ray Tagger (CRT) system, which provides 4π coverage of the active liquid argon volume. By exploiting only the precise nanosecond-scale synchronization between the cosmic tagger and the PMT optical flashes, it is possible to determine whether an event was likely triggered by a cosmic particle. The results show that, using the Top Cosmic Ray Tagger alone, a conservative rejection of more than 65% of the cosmic-induced background can be achieved. Additionally, by requiring the absence of hits in the whole cosmic tagger system, it is possible to perform a pre-selection of contained neutrino events ahead of the full event reconstruction.
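A simplified illustration of the timing-coincidence logic described above, tagging a PMT optical flash as cosmic-induced when a CRT hit falls within a small time window, is sketched below (the window width, data layout and numbers are illustrative and are not the values used in the analysis):

    import numpy as np

    def tag_cosmic_flashes(flash_times_ns, crt_hit_times_ns, window_ns=100.0):
        """True where a CRT hit lies within +/- window_ns of the optical flash."""
        crt = np.sort(np.asarray(crt_hit_times_ns))
        flashes = np.asarray(flash_times_ns)
        idx = np.searchsorted(crt, flashes)
        left = np.abs(flashes - crt[np.clip(idx - 1, 0, len(crt) - 1)])
        right = np.abs(flashes - crt[np.clip(idx, 0, len(crt) - 1)])
        return np.minimum(left, right) <= window_ns

    # Toy usage: three PMT flashes, two of which coincide with CRT hits
    flashes = [1_000.0, 50_000.0, 90_000.0]
    crt_hits = [1_020.0, 89_950.0]
    print(tag_cosmic_flashes(flashes, crt_hits))   # [ True False  True]

Flashes flagged in this way can be vetoed at trigger or pre-selection level, which is the basis of the rejection figure quoted above.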
Abstract:
The COVID-19 disease associated with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has posed a serious threat to public health and the global economy since its discovery in China in December 2019. Researchers have carried out numerous studies; in particular, the application of epidemiological models built from the collected data has allowed the prediction of different scenarios for the development of the disease over the short-to-medium term. The objectives of this thesis revolve around three aspects: the available data on the COVID-19 disease; compartmental mathematical models, with particular attention to the SEIJDHR model, which includes vaccinations; and the use of physics-informed neural networks (PINNs), a new deep-learning-based approach that brings the first two aspects together. These three aspects are first examined individually in the first three chapters of this work, and the PINNs are then applied to the SEIJDHR model. Finally, the fourth chapter reports relevant fragments of the Python code used and the numerical results obtained. In particular, it shows the plots of short-to-medium-term predictions obtained by feeding in daily data on positive cases, hospitalizations and deaths, first for New York City and then for Italy. Moreover, in the investigation of the predictive part concerning the Italian data, a critical point was identified in the function that models the hospitalization rate; numerous experiments were therefore carried out to control those predictions.
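To make the physics-informed idea concrete: the network output û(t) (the vector of compartmental populations) is trained against a loss that combines a data-fitting term with the residual of the governing ODEs. Written for the simpler SIR system, purely as an illustration (the SEIJDHR equations and the hospitalization-rate function of the thesis are not reproduced here):

    \frac{dS}{dt} = -\beta \frac{S I}{N}, \qquad
    \frac{dI}{dt} = \beta \frac{S I}{N} - \gamma I, \qquad
    \frac{dR}{dt} = \gamma I,

    \mathcal{L} =
    \underbrace{\sum_{k} \big\| \hat{u}(t_k) - u^{\mathrm{obs}}_k \big\|^2}_{\text{data term}}
    \;+\;
    \underbrace{\sum_{j} \Big\| \frac{d\hat{u}}{dt}(t_j) - f\big(\hat{u}(t_j);\, \beta, \gamma\big) \Big\|^2}_{\text{ODE residual}},

where f denotes the right-hand side of the compartmental system; minimizing the loss simultaneously fits the daily case data and constrains the epidemiological parameters.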
Abstract:
The scientific success of the LHC experiments at CERN strongly depends on the availability of computing resources that efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world through a high-performance network. The LHC has an ambitious experimental program for the coming years, which includes large investments and improvements both in the detector hardware and in the software and computing systems, in order to deal with the huge increase in the event rate expected from the High Luminosity LHC (HL-LHC) phase and, consequently, with the huge amount of data that will be produced. In recent years the role of Artificial Intelligence has become relevant in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been successfully used in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction and identification, and Monte Carlo generation, and they will surely be crucial in the HL-LHC phase. This thesis aims to contribute to a CMS R&D project regarding a Machine Learning "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. This framework has been updated by adding new features in the data preprocessing phase, allowing more flexibility for the user. Since the MLaaS4HEP framework is experiment agnostic, the ATLAS Higgs Boson ML challenge has been chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.
Abstract:
Deep Learning architectures give brilliant results in a large variety of fields, but a comprehensive theoretical description of their inner functioning is still lacking. In this work, we try to understand the behavior of neural networks by modelling them within the frameworks of Thermodynamics and Condensed Matter Physics. We approach neural networks as in a real laboratory, measuring the frequency spectrum and the entropy of the weights of the trained model. The stochasticity of the training plays a central role in the dynamics of the weights and makes it difficult to assimilate neural networks to simple physical systems. However, the analogy with Thermodynamics and the introduction of a well-defined temperature lead us to an interesting result: if we eliminate the "hottest" filters from a CNN, the performance of the model remains the same, whereas if we eliminate the "coldest" ones, the performance gets drastically worse. This result could be exploited in a training loop that eliminates the filters that do not contribute to loss reduction. In this way, the computational cost of training would be reduced and, more importantly, this would be done by following a physical model. In any case, besides important practical applications, our analysis shows that a new and improved modeling of Deep Learning systems can pave the way to new and more efficient algorithms.
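A toy version of the pruning experiment described above is sketched below. Note that the "temperature" proxy used here (per-filter weight variance) is an assumption made only for illustration; the thesis derives temperature from the training dynamics, which is not reproduced here:

    import torch
    import torch.nn as nn

    def prune_filters(conv: nn.Conv2d, fraction: float, hottest: bool = True):
        """Zero out a fraction of the filters of a conv layer, ranked by a toy
        'temperature' proxy (per-filter weight variance, illustrative only)."""
        with torch.no_grad():
            temps = conv.weight.flatten(1).var(dim=1)     # one value per output filter
            k = int(fraction * conv.out_channels)
            idx = torch.argsort(temps, descending=hottest)[:k]
            conv.weight[idx] = 0.0
            if conv.bias is not None:
                conv.bias[idx] = 0.0
        return idx

    # Toy usage: remove the 25% "hottest" filters of one layer, then re-evaluate
    layer = nn.Conv2d(3, 16, kernel_size=3)
    pruned_idx = prune_filters(layer, fraction=0.25, hottest=True)

The experiment in the abstract amounts to comparing validation performance after pruning with hottest=True versus hottest=False.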
Abstract:
This thesis project is framed in the research field of Physics Education and aims to contribute to the reflection on the importance of disciplinary identities in addressing interdisciplinarity through the lens of the Nature of Science (NOS). In particular, the study focuses on a module on the parabola and parabolic motion, designed within the EU project IDENTITIES. The project aims to design modules that innovate pre-service teacher education according to contemporary challenges, focusing on interdisciplinarity in curricular and STEM topics (especially between physics, mathematics and computer science). The modules are designed according to a model of disciplines and interdisciplinarity that the IDENTITIES project has been elaborating on the basis of two main theoretical frameworks: the Family Resemblance Approach (FRA), reconceptualized for the Nature of Science (Erduran & Dagher, 2014), and the boundary crossing and boundary objects framework of Akkerman and Bakker (2011). The main aim of the thesis is to explore the impact of this interdisciplinary model in the specific case of the implementation of the parabola and parabolic motion module in a context of pre-service teacher education. To this end, we analyzed data collected during the implementation in order to investigate, in particular, the role of the FRA as a learning tool to: a) elaborate on the concept of "discipline", within the broader problem of defining interdisciplinarity; b) compare the epistemic cores of physics and mathematics; and c) develop epistemic skills and interdisciplinary competences in student-teachers. The analysis of the data led us to recognize three different roles played by the FRA: FRA as an epistemological activator, FRA as scaffolding for reasoning about and navigating (inhabiting) complexity, and FRA as a lens to investigate the relationship between physics and mathematics in the historical case.
Abstract:
El Niño-Southern Oscillation (ENSO) is the largest climate phenomenon occurring in the tropical Pacific Ocean, with large-scale environmental, climatic and socio-economic impacts. This thesis retraces the main steps that have been taken to try to understand such a complex phenomenon. First, the mechanisms governing its dynamics are studied, up to the formulation of the mathematical model known as the Delayed Oscillator (DO) model, proposed by Suarez and Schopf in 1988. Then, to account for the chaotic nature of the system under study, the scheme called Stochastically Perturbed Parameterisation Tendencies (SPPT) is introduced into the model. Finally, two examples of numerical solutions of the DO are presented, both with and without the correction provided by the SPPT scheme, and it is assessed to what extent SPPT brings real improvements to the model under study.
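The delayed-oscillator model referred to above takes the nondimensional form dT/dt = T − T³ − α T(t − δ), where T is the SST anomaly, α the strength of the delayed (wave-reflection) feedback and δ the delay. A minimal forward-Euler integration, with an optional multiplicative noise on the tendency as a crude stand-in for an SPPT-like perturbation (the parameter values below are illustrative, not those used in the thesis), is sketched here:

    import numpy as np

    def delayed_oscillator(alpha=0.75, delta=6.0, dt=0.01, t_max=200.0,
                           sppt_sigma=0.0, seed=0):
        """Forward-Euler integration of dT/dt = T - T**3 - alpha * T(t - delta)."""
        rng = np.random.default_rng(seed)
        n_delay = int(delta / dt)
        n_steps = int(t_max / dt)
        T = np.empty(n_steps)
        T[:n_delay + 1] = 0.5                      # constant history for t <= 0
        for i in range(n_delay, n_steps - 1):
            tendency = T[i] - T[i] ** 3 - alpha * T[i - n_delay]
            if sppt_sigma > 0:
                # SPPT-like stochastic perturbation of the tendency (sketch only)
                tendency *= 1.0 + sppt_sigma * rng.standard_normal()
            T[i + 1] = T[i] + dt * tendency
        return T

    T_det = delayed_oscillator()                   # deterministic run
    T_sto = delayed_oscillator(sppt_sigma=0.3)     # stochastically perturbed run

Comparing T_det and T_sto is, in miniature, the kind of experiment the thesis performs when assessing whether SPPT improves the model.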
Abstract:
In the coming years a substantial upgrade of the LHC is expected, which foresees increasing the integrated luminosity by a factor of 10 with respect to the current value. This parameter is proportional to the number of collisions per unit time. For this reason, the computational resources needed at all levels of the reconstruction will grow considerably. The CMS collaboration has therefore been exploring, for some years now, the possibilities offered by heterogeneous computing, i.e. the practice of distributing the computation between CPUs and other dedicated accelerators, such as graphics processing units (GPUs). One of the difficulties of this approach is the need to write, validate and maintain different code for every device on which it will have to run. This thesis presents the possibility of using SYCL to translate event-reconstruction code so that it can run efficiently on different devices without substantial modifications. SYCL is an abstraction layer for heterogeneous computing that complies with the ISO C++ standard. This study focuses on the porting of CLUE, a clustering algorithm for calorimetric energy deposits, using oneAPI, the SYCL implementation supported by Intel. Initially, the standalone version of the algorithm was translated, mainly to gain familiarity with SYCL and for the convenience of comparing performance with the existing versions. In this case, performance is very close to that of native CUDA code on the same hardware. To validate the physics, the algorithm was integrated into a reduced version of the framework used by CMS for reconstruction. The physics results are identical to those of the other implementations while, in terms of computational performance, in some cases SYCL produces code that is faster than other abstraction layers adopted by CMS, making it an interesting option for the future of heterogeneous computing in high-energy physics.