931 results for energy simulation
Abstract:
In the framework of a global transition to a low-carbon energy mix, interest in advanced nuclear Small Modular Reactors (SMRs) has been growing at the international level. Given the high level of maturity reached by Severe Accident Codes for currently operating reactors, their applicability to advanced SMRs is starting to be studied. Within the present thesis work, and in the framework of a collaboration between ENEA, UNIBO and IRSN, an ASTEC code model of a generic IRIS reactor has been developed. The simulation of a DBA sequence involving the operation of all the passive safety systems of the generic IRIS has been carried out to investigate the capability of the code model to predict the thermal-hydraulics characterizing an integral SMR adopting a passive mitigation strategy. The subsequent simulation of four BDBA sequences explores the applicability of Severe Accident Codes to advanced SMRs in beyond-design and core-degradation conditions. The uncertainty affecting a code simulation can be estimated with the method of Input Uncertainty Propagation, applied here through the RAVEN-ASTEC coupling and its implementation on an HPC platform. This probabilistic methodology has been employed to study the uncertainty affecting the passive safety system operation in the ASTEC simulation of the DBA, providing a further characterization of the thermal-hydraulics of this sequence. The application of the Uncertainty Quantification method to early core-melt phenomena has been investigated in the framework of a BEPU analysis of the ASTEC simulation of the QUENCH test-6 experiment. A possible solution to the challenges encountered is proposed through the application of a Limit Surface search algorithm.
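As a purely illustrative sketch of the Input Uncertainty Propagation idea (not the actual RAVEN-ASTEC workflow), the snippet below samples hypothetical uncertain inputs, calls a dummy stand-in for an ASTEC run, and extracts percentiles of a figure of merit; all parameter names and the sample size are assumptions.

```python
# Sketch of Monte Carlo input-uncertainty propagation (Wilks-style sampling);
# run_astec_case is a dummy stand-in so the example is runnable.
import random
import statistics

def run_astec_case(heat_transfer_mult, valve_opening_delay_s):
    """Placeholder for one ASTEC calculation; returns a fake figure of merit
    (e.g. a peak cladding temperature in K)."""
    return 1100.0 + 300.0 * heat_transfer_mult + 2.0 * valve_opening_delay_s

random.seed(0)
n_samples = 93                      # e.g. first-order two-sided 95/95 Wilks size
results = []
for _ in range(n_samples):
    # Sample the uncertain inputs from assumed distributions.
    htc_mult = random.uniform(0.8, 1.2)   # heat-transfer multiplier
    delay = random.gauss(10.0, 2.0)       # valve opening delay [s]
    results.append(run_astec_case(htc_mult, delay))

results.sort()
print("mean FOM:", statistics.mean(results))
print("95th percentile FOM:", results[int(0.95 * n_samples)])
```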
Abstract:
The simulation of ultrafast photoinduced processes is a fundamental step towards understanding the underlying molecular mechanism and interpreting or predicting experimental data. Performing a computer simulation of a complex photoinduced process is only possible by introducing approximations; to obtain reliable results, however, the need to reduce complexity must be balanced against the accuracy of the model, which should include all the relevant degrees of freedom and a quantitatively correct description of the electronic states involved in the process. This work presents new computational protocols and strategies for the parameterisation of accurate models of photochemical/photophysical processes based on state-of-the-art multiconfigurational wavefunction-based methods. The ingredients required for a dynamics simulation include potential energy surfaces (PESs) as well as electronic state couplings, which must be mapped across the wide range of geometries visited during the wavepacket/trajectory propagation. The developed procedures make it possible to obtain solid and extensive databases while reducing the computational cost as much as possible, thanks to, e.g., specific tuning of the level of theory for different PES regions and/or direct calculation of only the needed components of vectorial quantities (such as gradients or nonadiabatic couplings). The presented approaches were applied to three case studies (azobenzene, pyrene, visual rhodopsin), all requiring an accurate parameterisation but for different reasons. The resulting models and simulations made it possible to elucidate the mechanism and time scale of the internal conversion, reproducing or even predicting new transient experiments. The general applicability of the developed protocols to systems with different peculiarities, and the possibility of parameterising different types of dynamics on an equal footing (classical vs purely quantum), prove that the developed procedures are flexible enough to be tailored to each specific system and pave the way for exact quantum dynamics with multiple degrees of freedom.
Abstract:
The scientific success of the LHC experiments at CERN highly depends on the availability of computing resources that efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world through high-performance networks. The LHC has an ambitious experimental program for the coming years, which includes large investments and improvements both in the hardware of the detectors and in the software and computing systems, in order to cope with the large increase in the event rate expected in the High Luminosity LHC (HL-LHC) phase and, consequently, with the huge amount of data that will be produced. In recent years, the role of Artificial Intelligence has become increasingly relevant in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been used successfully in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction and identification, and Monte Carlo generation, and they will surely be crucial in the HL-LHC phase. This thesis aims to contribute to a CMS R&D project regarding a ML "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. This framework has been extended with new features in the data preprocessing phase, allowing more flexibility for the user. Since the MLaaS4HEP framework is experiment agnostic, the ATLAS Higgs Boson ML challenge has been chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.
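A minimal, hedged sketch of the kind of model-agnostic pipeline described above (read a ROOT file, build a feature matrix, train a model, score it); the file, tree, and branch names are hypothetical, and the scikit-learn classifier is only a placeholder, not the MLaaS4HEP implementation.

```python
# Minimal "read ROOT -> preprocess -> train -> predict" sketch.
import numpy as np
import uproot
from sklearn.ensemble import GradientBoostingClassifier

def load_features(path, tree_name, branches, label_branch):
    # Read the requested branches into flat NumPy arrays.
    arrays = uproot.open(path)[tree_name].arrays(branches + [label_branch],
                                                 library="np")
    X = np.column_stack([arrays[b] for b in branches])
    y = arrays[label_branch]
    return X, y

# Hypothetical file, tree, and branch names.
X, y = load_features("events.root", "ntuple",
                     ["pt_lead", "eta_lead", "missing_et"], "is_signal")
model = GradientBoostingClassifier().fit(X, y)
print("training accuracy:", model.score(X, y))
```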
Abstract:
In recent years, developed countries have turned their attention to clean and renewable energy, such as wind energy and wave energy, which can be converted into electrical power. Companies and academic groups worldwide are currently investigating several wave energy concepts. Accordingly, this thesis studies the numerical simulation of the dynamic response of wave energy converters (WECs) subjected to ocean waves. The study considers a two-body point absorber (2BPA) and an oscillating surge wave energy converter (OSWEC). The first aim is to mesh the bodies of the aforementioned WECs to calculate their hydrostatic properties using the axiMesh.m and Mesh.m functions provided by NEMOH. The second aim is to calculate the first-order hydrodynamic coefficients of the WECs using the NEMOH BEM solver and to study the ability of this method to eliminate irregular frequencies. The third aim is to generate a *.h5 file for the 2BPA and OSWEC devices, in which all the hydrodynamic data are included. BEMIO, a pre- and post-processing tool developed by WEC-Sim, is used in this study to create the *.h5 files. The final and primary goal is to run the Wave Energy Converter SIMulator (WEC-Sim) to simulate the dynamic responses of the WECs studied in this thesis and to estimate their power performance at different sites located in the Mediterranean Sea and the North Sea. The hydrodynamic data obtained with the NEMOH BEM solver for the 2BPA and OSWEC devices are imported into WEC-Sim using BEMIO. Lastly, the power matrices and annual energy production (AEP) of the WECs are estimated for different sites located in the Sea of Sicily, Sea of Sardinia, Adriatic Sea, Tyrrhenian Sea, and the North Sea. For this purpose, NEMOH and WEC-Sim remain the most practical tools to numerically estimate the power generation of WECs.
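The hydrodynamic data are exchanged through *.h5 (HDF5) files; the short sketch below simply walks such a file and lists its datasets, without assuming anything about the internal BEMIO layout. The file name is hypothetical.

```python
# Inspect the contents of a BEMIO-style *.h5 file of hydrodynamic data.
import h5py

def list_datasets(h5_path):
    with h5py.File(h5_path, "r") as f:
        def visit(name, obj):
            # Print every dataset with its shape and type.
            if isinstance(obj, h5py.Dataset):
                print(f"{name}  shape={obj.shape}  dtype={obj.dtype}")
        f.visititems(visit)

list_datasets("2bpa_hydro.h5")   # hypothetical file name
```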
Abstract:
The work presented in this thesis aims to contribute to innovation in the Urban Air Mobility and Delivery sector and represents a solid starting point for air logistics and its future scenarios. The dissertation focuses on the modeling, simulation, and control of a formation of multirotor aircraft for cooperative load transportation, with particular attention to environmental sustainability. First, a simulation and test environment is developed to assess technologies for suspended load stabilization. Starting from the mathematical model of two identical multirotors, formation-flight-keeping and collision-avoidance algorithms are analyzed. This approach guarantees both the safety of the vehicles within the formation and that of the payload, which in the near future may even consist of people. Afterwards, a mathematical model of the suspended load is implemented, together with an active controller for its stabilization. The key focus of this part is the analysis and control of the payload's oscillatory motion, investigated through a thorough study of the decay of the load's kinetic energy. Finally, several test cases are introduced in order to determine which strategy is the most effective and safest for future applications in the field of air logistics.
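As an illustration of how the decay of the load's kinetic energy can be tracked, the sketch below integrates a simple planar damped-pendulum model of a cable-suspended load; the parameters and the damping term standing in for the active controller are assumptions, not the model used in the thesis.

```python
# Planar damped-pendulum toy model of a cable-suspended load; the damping
# coefficient loosely represents the stabilizing action of a controller.
import math

m, L, g = 5.0, 2.0, 9.81            # load mass [kg], cable length [m], gravity [m/s^2]
c = 0.8                              # assumed effective damping coefficient
theta, omega = math.radians(20.0), 0.0
dt, t_end = 0.001, 10.0

steps = int(t_end / dt)
for i in range(1, steps + 1):
    # Angular acceleration of the suspended load.
    alpha = -(g / L) * math.sin(theta) - (c / (m * L * L)) * omega
    omega += alpha * dt
    theta += omega * dt
    if i % 1000 == 0:                # report once per simulated second
        kinetic = 0.5 * m * (L * omega) ** 2
        print(f"t={i * dt:4.1f} s  kinetic energy={kinetic:.4f} J")
```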
Abstract:
Modern High-Performance Computing (HPC) systems are gradually increasing in size and complexity due to the corresponding demand for larger simulations requiring more complex tasks and higher accuracy. However, with Dennard scaling approaching its ultimate power limit, the efficiency of software also plays an important role in increasing the overall performance of a computation. Tools to measure application performance in these increasingly complex environments provide insights into the intricate ways in which software and hardware interact. Monitoring power consumption in order to save energy is possible through processor interfaces such as the Intel Running Average Power Limit (RAPL). Given the low level of these interfaces, they are often paired with an application-level tool such as the Performance Application Programming Interface (PAPI). Since several problems in many heterogeneous fields can be represented as large linear systems, an optimized and scalable linear system solver can significantly decrease the time needed to compute their solution. One of the most widely used algorithms for the resolution of large simulations is Gaussian Elimination, whose most popular implementation for HPC systems is found in the Scalable Linear Algebra PACKage (ScaLAPACK) library. Another relevant algorithm, which is gaining popularity in the academic field, is the Inhibition Method. This thesis compares the energy consumption of the Inhibition Method and of the ScaLAPACK Gaussian Elimination by profiling their execution during the resolution of linear systems on the HPC architecture offered by CINECA. Moreover, it also collates the energy and power values for different configurations of ranks, nodes, and sockets. The monitoring tools employed to track the energy consumption of these algorithms are PAPI and RAPL, which are integrated with the parallel execution of the algorithms managed through the Message Passing Interface (MPI).
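A rough sketch of the measurement idea, assuming a Linux machine that exposes the RAPL package counter through the powercap sysfs interface; numpy.linalg.solve stands in for the ScaLAPACK and Inhibition Method solvers, and a production setup would use PAPI and handle counter wrap-around.

```python
# Measure energy around a linear solve via the Linux powercap RAPL interface.
import numpy as np

RAPL_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"   # availability depends on the machine

def read_energy_uj():
    with open(RAPL_FILE) as f:
        return int(f.read())

n = 2000
a = np.random.rand(n, n) + n * np.eye(n)    # well-conditioned test system
b = np.random.rand(n)

e_start = read_energy_uj()
x = np.linalg.solve(a, b)                    # stand-in for the HPC solver
e_end = read_energy_uj()

# The counter wraps around periodically; tools such as PAPI handle that case.
print("energy used (J):", (e_end - e_start) / 1e6)
print("residual norm:", np.linalg.norm(a @ x - b))
```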
Abstract:
Rapidity-odd directed flow (v1) measurements for charged pions, protons, and antiprotons near midrapidity (y = 0) are reported in √sNN = 7.7, 11.5, 19.6, 27, 39, 62.4, and 200 GeV Au+Au collisions as recorded by the STAR detector at the Relativistic Heavy Ion Collider. At intermediate impact parameters, the proton and net-proton slope parameter dv1/dy|y=0 shows a minimum between 11.5 and 19.6 GeV. In addition, the net-proton dv1/dy|y=0 changes sign twice between 7.7 and 39 GeV. The proton and net-proton results qualitatively resemble predictions of a hydrodynamic model with a first-order phase transition from hadronic matter to deconfined matter, and differ from hadronic transport calculations.
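For reference, directed flow is conventionally defined as the first Fourier harmonic of the particle azimuthal distribution with respect to the reaction plane, and the slope quoted above is its derivative at midrapidity:

```latex
% Standard definition of directed flow relative to the reaction plane \Psi_{RP};
% the slope parameter is evaluated at midrapidity y = 0.
\[
  v_1 = \left\langle \cos\left(\phi - \Psi_{RP}\right) \right\rangle ,
  \qquad
  \left.\frac{dv_1}{dy}\right|_{y=0}
\]
```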
Abstract:
The control of energy homeostasis relies on robust neuronal circuits that regulate food intake and energy expenditure. Although the physiology of these circuits is well understood, the molecular and cellular response of this program to chronic diseases is still largely unclear. Hypothalamic inflammation has emerged as a major driver of energy homeostasis dysfunction in both obesity and anorexia. Importantly, this inflammation disrupts the action of metabolic signals promoting anabolism or supporting catabolism. In this review, we address the evidence that favors hypothalamic inflammation as a factor that resets energy homeostasis in pathological states.
Abstract:
Local parity-odd domains are theorized to form inside the quark-gluon plasma produced in high-energy heavy-ion collisions. These local parity-odd domains manifest themselves as charge separation along the magnetic field axis via the chiral magnetic effect. The experimental observation of charge separation has previously been reported for heavy-ion collisions at the top RHIC energies. In this Letter, we present results on the beam-energy dependence of the charge correlations in Au+Au collisions at midrapidity for center-of-mass energies of 7.7, 11.5, 19.6, 27, 39, and 62.4 GeV from the STAR experiment. After background subtraction, the signal gradually decreases with decreasing beam energy and tends to vanish by 7.7 GeV. This implies the dominance of hadronic interactions over partonic ones at lower collision energies.
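For context, a charge-separation observable commonly used in such beam-energy-scan analyses is the three-particle correlator below; whether this exact estimator is the one used here is not stated in the abstract.

```latex
% Three-particle charge-separation correlator, with \alpha,\beta labelling the
% particle charges and \Psi_{RP} the reaction plane:
\[
  \gamma_{\alpha\beta} =
  \left\langle \cos\left(\phi_\alpha + \phi_\beta - 2\Psi_{RP}\right) \right\rangle
\]
```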
Abstract:
Cardiac arrest after open surgery has an incidence of approximately 3%, and more than 50% of these cases are due to ventricular fibrillation. Electrical defibrillation is the most effective therapy for terminating cardiac arrhythmias associated with unstable hemodynamics. The excitation threshold of myocardial microstructures is lower when external electrical fields are applied in the longitudinal direction with respect to the major axis of the cells. In the heart, however, cell bundles are arranged in several directions. Improved myocardial excitation and defibrillation have been achieved by applying shocks in multiple directions via intracardiac leads, but the results are controversial when the electrodes are not located within the cardiac chambers. This study was designed to test whether rapidly switching shock delivery across 3 directions could increase the efficiency of direct defibrillation. A multidirectional defibrillator and paddles bearing 3 electrodes each were developed and used in vivo for the reversal of electrically induced ventricular fibrillation in an anesthetized open-chest swine model. Direct defibrillation was performed with unidirectional and multidirectional shocks applied in an alternating fashion. Survival analysis was used to estimate the relationship between the probability of defibrillation and the shock energy. Compared with shock delivery in a single direction in the same animal population, the shock energy required for multidirectional defibrillation was 20% to 30% lower (P < .05) within a wide range of success probabilities. Rapidly switching multidirectional shock delivery required lower shock energy for ventricular fibrillation termination and may be a safer alternative for restoring cardiac sinus rhythm.
Abstract:
We report the first measurements of the moments, i.e. the mean (M), variance (σ²), skewness (S), and kurtosis (κ), of the net-charge multiplicity distributions at midrapidity in Au+Au collisions at seven energies, ranging from √sNN = 7.7 to 200 GeV, as a part of the Beam Energy Scan program at RHIC. The moments are related to the thermodynamic susceptibilities of net charge, and are sensitive to the location of the QCD critical point. We compare the products of the moments, σ²/M, Sσ, and κσ², with the expectations from Poisson and negative binomial distributions (NBDs). The Sσ values deviate from the Poisson baseline and are close to the NBD baseline, while the κσ² values tend to lie between the two. Within the present uncertainties, our data do not show nonmonotonic behavior as a function of collision energy. These measurements provide a valuable tool to extract the freeze-out parameters in heavy-ion collisions by comparing with theoretical models.
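A small, self-contained sketch of how the quoted moment products can be computed from an event-by-event net-charge sample; the Skellam-like toy data are purely illustrative.

```python
# Compute sigma^2/M, S*sigma, and kappa*sigma^2 from a toy net-charge sample.
import numpy as np

rng = np.random.default_rng(0)
net_charge = rng.poisson(12.0, 100_000) - rng.poisson(10.0, 100_000)  # toy events

m = net_charge.mean()
var = net_charge.var()
sigma = np.sqrt(var)
skew = np.mean((net_charge - m) ** 3) / sigma ** 3          # S
kurt = np.mean((net_charge - m) ** 4) / sigma ** 4 - 3.0    # excess kurtosis, kappa

print("sigma^2 / M     =", var / m)
print("S * sigma       =", skew * sigma)
print("kappa * sigma^2 =", kurt * var)
```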
Abstract:
Sphingosine 1-phosphate receptor 1 (S1PR1) is a G-protein-coupled receptor for sphingosine-1-phosphate (S1P) that has a role in many physiological and pathophysiological processes. Here we show that the S1P/S1PR1 signalling pathway in hypothalamic neurons regulates energy homeostasis in rodents. We demonstrate that S1PR1 protein is highly enriched in hypothalamic POMC neurons of rats. Intracerebroventricular injections of the bioactive lipid S1P reduce food consumption and increase rat energy expenditure through persistent activation of STAT3 and the melanocortin system. Similarly, the selective disruption of hypothalamic S1PR1 increases food intake and reduces the respiratory exchange ratio. We further show that STAT3 controls S1PR1 expression in neurons via a positive feedback mechanism. Interestingly, several models of obesity and cancer anorexia display an imbalance of the hypothalamic S1P/S1PR1/STAT3 axis, while pharmacological intervention ameliorates these phenotypes. Taken together, our data demonstrate that the neuronal S1P/S1PR1/STAT3 signalling axis plays a critical role in the control of energy homeostasis in rats.
Abstract:
Monte Carlo track structure (MCTS) simulations have been recognized as useful tools for radiobiological modeling. However, the authors noticed several issues regarding the consistency of reported data. Therefore, in this work, they analyze the impact of various user-defined parameters on simulated direct DNA damage yields. In addition, they draw attention to discrepancies in the published literature in DNA strand break (SB) yields and in the selected methodologies. The MCTS code Geant4-DNA was used to compare radial dose profiles in a nanometer-scale region of interest (ROI) for photon sources of varying sizes and energies. Then, electron tracks of 0.28 keV-220 keV were superimposed on a geometric DNA model composed of 2.7 × 10⁶ nucleosomes, and SBs were simulated according to four definitions based on energy deposits or energy transfers in DNA strand targets compared to a threshold energy ETH. The SB frequencies and complexities in nucleosomes as a function of incident electron energy were obtained. SBs were classified into higher-order clusters such as single and double strand breaks (SSBs and DSBs) based on inter-SB distances and on the number of affected strands. Comparisons of different nonuniform dose distributions lacking charged particle equilibrium may lead to erroneous conclusions regarding the effect of energy on relative biological effectiveness. The energy transfer-based SB definitions give SB yields similar to the one based on energy deposit when ETH ≈ 10.79 eV, but deviate significantly for higher ETH values. Between 30 and 40 nucleosomes/Gy show at least one SB in the ROI. The number of nucleosomes that present a complex damage pattern of more than 2 SBs, and the degree of complexity of the damage in these nucleosomes, diminish as the incident electron energy increases. DNA damage classification into SSBs and DSBs is highly dependent on the definitions of these higher-order structures and on their implementation. The authors show that, for the four studied models, the expected yields differ by up to 54% for SSBs and by up to 32% for DSBs, as a function of the incident electron energy and of the models being compared. MCTS simulations make it possible to compare direct DNA damage types and complexities induced by ionizing radiation. However, simulation results depend to a large degree on user-defined parameters, definitions, and algorithms such as the DNA model, the dose distribution, the SB definition, and the DNA damage clustering algorithm. These interdependencies should be well controlled during the simulations and explicitly reported when comparing results to experiments or calculations.
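As a toy illustration of clustering SBs into SSBs and DSBs based on inter-SB distance and the number of affected strands, the snippet below pairs opposite-strand breaks within an assumed 10 bp window; this is a common convention, not necessarily the definition adopted by the authors.

```python
# Toy SSB/DSB classification from a list of strand breaks.
def classify_breaks(breaks, max_bp=10):
    """breaks: list of (base_pair_index, strand) with strand in {0, 1}.
    Two breaks on opposite strands within max_bp form one DSB."""
    breaks = sorted(breaks)
    dsb, used = 0, set()
    for i, (bp_i, s_i) in enumerate(breaks):
        if i in used:
            continue
        for j in range(i + 1, len(breaks)):
            bp_j, s_j = breaks[j]
            if bp_j - bp_i > max_bp:
                break
            if j not in used and s_j != s_i:
                dsb += 1
                used.update((i, j))
                break
    ssb = len(breaks) - 2 * dsb
    return ssb, dsb

print(classify_breaks([(100, 0), (105, 1), (300, 0), (420, 1), (425, 1)]))
# -> (3, 1): one DSB from the 100/105 pair, three remaining SSBs
```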
Abstract:
The present study evaluated the effect of repeated simulated microwave disinfection on the physical and mechanical properties of Clássico, Onda-Cryl and QC-20 denture base acrylic resins. Aluminum patterns were included in metallic or plastic flasks with dental stone following the traditional packing method. The powder/liquid mixing ratio was established according to the manufacturer's instructions. After polymerization in a water bath at 74°C for 9 h, in boiling water for 20 min, or with microwave energy at 900 W for 10 min, the specimens were deflasked after flask cooling and finished. Each specimen was immersed in 150 mL of distilled water and underwent 5 disinfection cycles in a microwave oven set at 650 W for 3 min. Non-disinfected and disinfected specimens were subjected to the following tests: the Knoop hardness test was performed with a 25 g load for 10 s, the impact strength test was done using the Charpy system with 40 kpcm, and the 3-point bending test (flexural strength) was performed at a crosshead speed of 0.5 mm/min until fracture. Data were analyzed statistically by ANOVA and Tukey's test (α = 0.05). Repeated simulated microwave disinfection decreased the Knoop hardness of the Clássico and Onda-Cryl resins and had no effect on the impact strength of QC-20. Flexural strength was similar for all tested resins.
Abstract:
The purpose of this study was to evaluate the influence of intrapulpal pressure simulation on the bonding effectiveness of etch & rinse and self-etch adhesives to dentin. Eighty sound human molars were distributed into eight groups according to the permeability level of each sample, measured with an apparatus assessing hydraulic conductance (Lp), so that a similar mean permeability was achieved in each group. Three etch & rinse adhesives (Prime & Bond NT - PB, Single Bond - SB, and Excite - EX) and one self-etch system (Clearfil SE Bond - SE) were employed, with or without the simulation of an intrapulpal pressure (IPP) of 15 cmH2O. After the adhesive and restorative procedures were carried out, the samples were stored in distilled water for 24 hours at 37°C and then subjected to tensile bond strength (TBS) testing. Fracture analysis was performed using a light microscope at 40× magnification. The data, obtained in MPa, were submitted to the Kruskal-Wallis test (α = 0.05). The results revealed that the TBS of SB and EX was significantly reduced under IPP simulation, differing from the TBS of PB and SE. Moreover, SE showed the highest bond strength values in the presence of IPP. It can be concluded that IPP simulation can influence the bond strength of certain adhesive systems to dentin and should be considered when in vitro studies are conducted.