26 results for Multi-phase experiments
Abstract:
Theoretical models are developed for the continuous-wave and pulsed laser incision and cutting of thin single- and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is divided accordingly as individual layers are progressively removed. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. At sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling, or "phase explosion", takes place. Improvements over previous work are obtained through more accurate calculation of the optical absorption and of the shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal results both from progressive short-pulse ablation and from classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper is found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
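For reference, steady-state laser-cutting power balances of this kind are classically of the following form; this is a schematic sketch under assumed notation (conduction losses neglected), not the thesis' exact multi-layer formulation:

```latex
% Schematic steady-state power balance for laser cutting: the absorbed beam
% power matches the enthalpy flux of material removed at traverse speed v.
% Assumed symbols: A absorptivity, P beam power, w kerf width, d cut depth,
% \rho density, C_p specific heat, T_v vaporisation temperature, T_0 ambient
% temperature, L_v latent heat of vaporisation.
A P \simeq v\, w\, d\, \rho \left[ C_p (T_v - T_0) + L_v \right]
```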
Abstract:
This thesis is divided into three chapters. In the first chapter we analyse the results of the worldwide forecasting experiment run by the Collaboratory for the Study of Earthquake Predictability (CSEP). We take the opportunity of this experiment to contribute to the definition of a more robust and reliable statistical procedure for evaluating earthquake forecasting models. We first present the models and the target earthquakes to be forecast. Then we explain the consistency and comparison tests that are used in CSEP experiments to evaluate the performance of the models. Introducing a methodology to create ensemble forecasting models, we show that models, when properly combined, almost always perform better than any single model. In the second chapter we discuss in depth one of the basic features of PSHA: the declustering of the seismicity rates. We first introduce the Cornell-McGuire method for PSHA and present the different motivations behind the need to decluster seismic catalogs. Using a theorem of modern probability theory (Le Cam's theorem), we show that declustering is not necessary to obtain the Poissonian behaviour of the exceedances that is usually considered fundamental for transforming exceedance rates into exceedance probabilities in the PSHA framework. We present a method to correct PSHA for declustering, building a more realistic PSHA. In the last chapter we explore the methods that are commonly used to account for epistemic uncertainty in PSHA. The most widely used is the logic tree, which underlies the most advanced seismic hazard maps. We illustrate the probabilistic structure of the logic tree, and then show that this structure is not adequate to describe epistemic uncertainty. We then propose a new probabilistic framework, based on ensemble modelling, that properly accounts for epistemic uncertainties in PSHA.
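For context, the rate-to-probability conversion mentioned above is the standard Poissonian one: if exceedances of a given ground-motion level follow a Poisson process with annual rate $\lambda$, the probability of observing at least one exceedance in an exposure time of $t$ years is

```latex
% Standard PSHA conversion from exceedance rate to exceedance probability,
% valid under the Poisson assumption discussed in the abstract.
P(\text{at least one exceedance in } t \text{ years}) = 1 - e^{-\lambda t}
```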
Abstract:
This thesis describes the development of new models and toolkits for orbit determination codes, to support and improve the precise radio tracking experiments of the Cassini-Huygens mission, an interplanetary mission to study the Saturn system. The core of the orbit determination process is the comparison between observed and computed observables. Disturbances in either the observed or the computed observables degrade the orbit determination process. Chapter 2 describes a detailed study of the numerical errors in the Doppler observables computed by NASA's ODP and MONTE, and by ESA's AMFIN. A mathematical model of the numerical noise was developed and successfully validated against the Doppler observables computed by the ODP and MONTE, with typical relative errors smaller than 10%. The numerical noise proved to be, in general, an important source of noise in the orbit determination process and, in some conditions, it may become the dominant noise source. Three different approaches to reduce the numerical noise were proposed. Chapter 3 describes the development of the multiarc library, which allows a multi-arc orbit determination to be performed with MONTE. The library was developed during the analysis of the Cassini radio science gravity experiments at Saturn's satellite Rhea. Chapter 4 presents the estimation of Rhea's gravity field obtained from a joint multi-arc analysis of the Cassini R1 and R4 fly-bys, describing in detail the spacecraft dynamical model used, the data selection and calibration procedure, and the analysis method followed. In particular, the full unconstrained quadrupole gravity field was estimated, yielding a solution statistically not compatible with the condition of hydrostatic equilibrium. The solution proved to be stable and reliable. The normalized moment of inertia is in the range 0.37-0.4, indicating that Rhea may be almost homogeneous, or at least characterized by a small degree of differentiation.
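The mechanism behind such numerical noise can be illustrated with a toy example (hypothetical magnitudes and a simplified observable, not the thesis' noise model): a Doppler observable formed by differencing two nearly equal ranges suffers cancellation in double precision, because each large range value carries a round-off error that survives the subtraction.

```python
# Toy illustration of round-off noise in a differenced-range Doppler
# observable (hypothetical magnitudes; not the thesis' noise model).
from decimal import Decimal, getcontext

getcontext().prec = 40                 # high-precision reference arithmetic

# Spacecraft ranges at the two ends of a 60 s count interval [m]:
# ~1.4e12 m (roughly Saturn's distance), differing by a few metres.
rho_start = 1.4e12 + 0.123456789
rho_end = 1.4e12 + 7.123456789
Tc = 60.0

range_rate_f64 = (rho_end - rho_start) / Tc      # float64 result [m/s]

# Exact reference computed from the decimal literals.
ref = ((Decimal("1.4e12") + Decimal("7.123456789"))
       - (Decimal("1.4e12") + Decimal("0.123456789"))) / Decimal(60)

err = abs(Decimal(range_rate_f64) - ref)
print(f"float64:   {range_rate_f64:.12f} m/s")
print(f"reference: {ref} m/s")
print(f"error:     {err:.2E} m/s")     # micrometre/s-level pure round-off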
Abstract:
The study of polymorphism plays an important role in several fields of materials science, because structural differences lead to different physico-chemical properties of the system. This PhD work was dedicated to the investigation of polymorphism in Indigo, Thioindigo and Quinacridone, as case studies among the organic pigments employed as semiconductors, and in Paracetamol, Phenytoin and Nabumetone, chosen among commonly used APIs. The aim of the research was to improve the understanding of the structures of bulk crystals and thin films, adopting Raman spectroscopy as the method of choice, while resorting to other experimental techniques to complement the gathered information. Different crystalline polymorphs, in fact, may be conveniently distinguished by their Raman spectra in the region of the lattice phonons (10-150 cm^-1), the frequencies of which, probing the inter-molecular interactions, are very sensitive to even slight modifications of the molecular packing. In particular, we have used Confocal Raman Microscopy, a powerful yet simple technique for the investigation of crystal polymorphism in organic and inorganic materials, capable of monitoring physical modifications, chemical transformations and phase inhomogeneities in crystal domains at the micrometre scale. In this way, we have investigated bulk crystals and thin-film samples obtained with a variety of crystal growth and deposition techniques. Pure polymorphs and samples with phase mixing were found and fully characterized. Raman spectroscopy was complemented mainly by XRD measurements for bulk crystals and by AFM, GIXD and TEM for thin films. Structures and phonons of the investigated polymorphs were computed by DFT methods, and the comparison between theoretical and experimental results was used to assess the relative stability of the polymorphs and to assist the spectroscopic investigation. The Raman measurements thus proved able to resolve ambiguities in the phase assignments that the other methods were unable to settle.
Abstract:
In this thesis, we deal with the design of experiments in the drug development process, focusing on the design of clinical trials for treatment comparisons (Part I) and the design of preclinical laboratory experiments for protein development and manufacturing (Part II). In Part I we propose a multi-purpose design methodology for sequential clinical trials. We derive optimal allocations of patients to treatments for testing the efficacy of several experimental groups, while also taking ethical considerations into account. We first consider exponential responses for survival trials, and we then present a unified framework for heteroscedastic experimental groups that encompasses the general ANOVA set-up. The very good performance of the suggested optimal allocations, in terms of both inferential and ethical characteristics, is illustrated analytically and through several numerical examples, including comparisons with other designs proposed in the literature. Part II concerns the planning of experiments for processes composed of multiple steps in the context of preclinical drug development and manufacturing. Following the Quality by Design paradigm, the objective of the multi-step design strategy is the definition of the manufacturing design space of the whole process; since we consider the interactions among the subsequent steps, our proposal ensures the quality and the safety of the final product, enabling more flexibility and process robustness in manufacturing.
Abstract:
Several decision and control tasks in cyber-physical networks can be formulated as large-scale optimization problems with coupling constraints. In these "constraint-coupled" problems, each agent is associated with a local decision variable subject to individual constraints. This thesis explores the use of primal decomposition techniques to develop tailored distributed algorithms for this challenging set-up over graphs. We first develop a distributed scheme for convex problems over random time-varying graphs with non-uniform edge probabilities. The approach is then extended to unknown cost functions estimated online. Subsequently, we consider Mixed-Integer Linear Programs (MILPs), which are of great interest in smart grid control and cooperative robotics. We propose a distributed methodological framework to compute a feasible solution to the original MILP, with guaranteed suboptimality bounds, and extend it to general nonconvex problems. Monte Carlo simulations highlight that the approach substantially improves on the state of the art, making it a valuable solution for new toolboxes addressing large-scale MILPs. We then propose a distributed Benders decomposition algorithm for asynchronous unreliable networks. This framework was then used as a starting point to develop distributed methodologies for a microgrid optimal control scenario. We develop an ad-hoc distributed strategy for a stochastic set-up with renewable energy sources, and show a case study with samples generated using Generative Adversarial Networks (GANs). We then introduce a software toolbox named ChoiRbot, based on the novel Robot Operating System 2, and show how it facilitates simulations and experiments in distributed multi-robot scenarios. Finally, we consider a Pickup-and-Delivery Vehicle Routing Problem, for which we design a distributed method inspired by the approach for general MILPs, and show its efficacy through simulations and experiments in ChoiRbot with ground and aerial robots.
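To fix ideas, here is a minimal centralized sketch of the primal (right-hand-side) decomposition idea for a constraint-coupled problem, with hypothetical quadratic agents and a single scalar coupling constraint; the thesis' algorithms run distributedly over graphs, which this toy does not capture:

```python
# Minimal primal decomposition sketch for a constraint-coupled problem
# (hypothetical quadratic agents; not the thesis' distributed algorithm):
#   minimize  sum_i (x_i - c_i)^2   subject to  sum_i x_i <= b
import numpy as np

rng = np.random.default_rng(0)
N, b = 5, 2.0
c = rng.uniform(0.5, 1.5, N)        # each agent would like x_i = c_i

y = np.full(N, b / N)               # resource allocations, sum(y) == b

for k in range(2000):
    # Local step: agent i solves min (x_i - c_i)^2 s.t. x_i <= y_i,
    # whose closed-form solution and Lagrange multiplier are:
    x = np.minimum(c, y)
    lam = 2.0 * np.maximum(c - y, 0.0)
    # Master step: projected subgradient update of the allocations;
    # sum(y) == b is preserved because the update has zero mean.
    y += (0.5 / (k + 1)) * (lam - lam.mean())

print("allocations:", np.round(x, 3), " total:", round(float(x.sum()), 3))
```

At convergence the agents' multipliers equalize, so the shared resource b is split where it reduces the total cost the most; distributed variants replace the exact mean with local averaging over the communication graph.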
Abstract:
In the field of bone substitutes, there is intense research into innovative materials able to fill bone gaps, offering high mechanical performance while stimulating the cellular response, thus permitting complete restoration of the bone portion. In this respect, the synthesis of new bioactive materials able to mimic the compositional, morphological and mechanical features of bone is considered the elective approach for effective tissue regeneration. Hydroxyapatite (HA) is the main component of the inorganic part of bone. Furthermore, ionic substitutions can be performed in the apatite lattice, producing different effects depending on the selected ions. Magnesium, in substitution of calcium, and carbonate, in substitution of phosphate, both extensively present in biological bone, are able to enhance properties naturally present in the apatitic phase (i.e. biomimicry, solubility and osteoinductive properties). Other ions can be used to confer new useful properties on the apatitic phase, such as antiresorptive or antimicrobial activity. This thesis focused on the development of hydroxyapatite nanophases with multiple ionic substitutions, including gallium or zinc ions in association with magnesium and carbonate, with the purpose of providing a double synergistic functionality as an osteogenic and antibacterial biomaterial. Bioactive materials based on Sr-substituted hydroxyapatite were developed in the form of sintered targets. The obtained targets were processed by Pulsed Plasma Deposition (PED), resulting in thin-film coatings able to improve the roughness and wettability of PEEK, enhancing its osteointegrability. Heterogeneous gas-solid reactions were investigated, addressed to the biomorphic transformation of natural 3D porous structures into bone scaffolds with biomimetic composition and hierarchical organization, for application in load-bearing sites. The kinetics of the different reactions of the process were optimized to achieve complete and controlled phase transformation while maintaining the original 3D morphology. Massive porous scaffolds made of ion-substituted hydroxyapatite with a bone-mimicking structure were developed and tested in 3D cell culture models.
Abstract:
The electrocatalytic reduction of CO2 (CO2RR) is a captivating strategy for the conversion of CO2 into fuels, with a view to realizing a carbon-neutral circular economy. In recent years, research has focused on the development of new materials and technologies capable of capturing and converting CO2 into useful products. The main problem of CO2RR is its poor selectivity, which can lead to the formation of numerous reaction products, to the detriment of efficiency. For this reason, the design of new electrocatalysts that reduce CO2 selectively and efficiently is a fundamental step towards the future exploitation of this technology. Here we present a new class of electrocatalysts designed with a modular approach, namely, deriving from the combination of different building blocks in a single nanostructure. With this approach it is possible to obtain materials with an innovative design and new functionalities, where the interconnections between the various components are essential to obtain a highly selective and efficient reduction of CO2, thus opening up new possibilities in the design of optimized electrocatalytic materials. By combining the unique physico-chemical properties of carbon nanostructures (CNS) with nanocrystalline metal oxides (MO), we were able to modulate the selectivity of CO2RR, producing formic acid and syngas at low overpotentials. The CNS not only stabilize the MO nanoparticles: the creation of an optimal interface between the two nanostructures also improves the catalytic activity of the active phase of the material. In addition, the presence of oxygen atoms in the MO creates defects that accelerate the reaction kinetics and stabilize certain reaction intermediates, thereby selecting the reaction pathway. Finally, a part of the work was dedicated to the study of the experimental parameters influencing the CO2RR, with the aim of improving the experimental setup in order to approach catalytic performance of commercial relevance.
Abstract:
In the near future, the LHC experiments will continue to be upgraded as the LHC luminosity increases from the design value of 10^34 cm^-2 s^-1 to 7.5 × 10^34 cm^-2 s^-1 with the HL-LHC project, to reach 3000 fb^-1 of accumulated statistics. After the end of a period of data collection, CERN will undertake a long shutdown to improve overall performance by upgrading the experiments and implementing more advanced technologies and infrastructures. In particular, ATLAS will upgrade parts of the detector, the trigger, and the data acquisition system. It will also implement new strategies and algorithms for processing and transferring the data to the final storage. This PhD thesis presents a study of a new pattern recognition algorithm to be used in the trigger system, the software designed to provide the information necessary to select physical events from background data. The idea is to use the well-known Hough Transform as an algorithm for detecting particle trajectories. The effectiveness of the algorithm has already been validated in the past, independently of particle physics applications, to detect generic shapes in images. Here, a software emulation tool is proposed for the hardware implementation of the Hough Transform, to reconstruct the tracks in the ATLAS Trigger and Data Acquisition system. Until now, it has never been implemented in electronics in particle physics experiments, and a hardware implementation would provide overall latency benefits. A comparison between the simulated data and the physical system was performed on a Xilinx UltraScale+ FPGA device.
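As a concrete illustration of the underlying technique, here is a minimal sketch of a Hough Transform for straight-line detection on 2D hit coordinates, in the classic (theta, rho) parameterisation; this is a toy, not the ATLAS track parameterisation or its FPGA implementation:

```python
# Toy Hough Transform for straight-line (track) detection on 2D hits.
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=10.0):
    """Vote in (theta, rho) space: each hit (x, y) traces the curve
    rho = x*cos(theta) + y*sin(theta); collinear hits pile up in one bin."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (bins >= 0) & (bins < n_rho)
        acc[np.arange(n_theta)[ok], bins[ok]] += 1
    i, j = np.unravel_index(acc.argmax(), acc.shape)   # accumulator peak
    rho_best = j / (n_rho - 1) * 2 * rho_max - rho_max
    return thetas[i], rho_best, acc

# Toy event: 12 hits along the line y = 0.5*x + 1 plus 20 random noise hits.
rng = np.random.default_rng(1)
xs = np.linspace(-4, 4, 12)
track = np.column_stack([xs, 0.5 * xs + 1.0])
noise = rng.uniform(-5, 5, size=(20, 2))
theta, rho, _ = hough_lines(np.vstack([track, noise]))
print(f"peak at theta={theta:.2f} rad, rho={rho:.2f}")
```

The appeal for a hardware trigger is visible even in the toy: the per-hit voting loop is embarrassingly parallel, and the track candidate is read off as a local maximum of the accumulator.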
Abstract:
The aim of this thesis is to present exact and heuristic algorithms for the integrated planning of multi-energy systems. The idea is to disaggregate the energy system, starting with its core, the Central Energy System, and then proceeding towards the decentral part. Therefore, a mathematical model for the generation expansion operations to optimize the performance of a Central Energy System is first proposed. To ensure that the proposed generation operations are compatible with the network, some extensions of the existing network are considered as well. All these decisions are evaluated both from an economic viewpoint and from an environmental perspective, as specific constraints related to greenhouse gas emissions are imposed in the formulation. Then, the thesis presents an optimization model for a solar organic Rankine cycle in the context of transactive energy trading. In this study, the impact that this technology can have on peer-to-peer trading in renewable-based community microgrids is inspected. Here the consumer becomes a prosumer and engages actively in virtual trading with other prosumers at the distribution system level. Moreover, there is an investigation of how different technological parameters of the solar organic Rankine cycle may affect the final solution. Finally, the thesis introduces a tactical optimization model for the maintenance operations scheduling phase of a Combined Heat and Power plant. Specifically, two types of cleaning operations are considered, i.e., online cleaning and offline cleaning. Furthermore, a piecewise linear representation of the electric efficiency variation curve is included. Given the challenge of solving the tactical management model, a heuristic algorithm is proposed. The heuristic works by solving the daily operational production scheduling problem, based on the final consumer's demand and on the electricity prices. The aggregate information from the operational problem is used to derive maintenance decisions at the tactical level.
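For illustration, piecewise linear curves such as the electric efficiency variation are commonly embedded in MILPs via the "lambda method" (convex-combination weights over breakpoints, with an SOS2 condition); a minimal sketch with hypothetical breakpoint values follows, evaluating the curve directly and noting the MILP analogue in comments:

```python
# Sketch of the "lambda method" for a piecewise linear efficiency curve
# inside a MILP; breakpoint values here are hypothetical.
# MILP form: f(x) = sum_k lam_k * f_k,  x = sum_k lam_k * x_k,
#            sum_k lam_k = 1,  lam SOS2 (at most two adjacent nonzero).
import numpy as np

x_bp = np.array([0.0, 0.4, 0.7, 1.0])       # load levels (fraction of capacity)
f_bp = np.array([0.20, 0.34, 0.38, 0.36])   # electric efficiency at each level

def pw_linear(x):
    """Evaluate the curve: only the two adjacent lambda weights are active,
    mirroring what the SOS2 constraint enforces in the MILP."""
    k = np.clip(np.searchsorted(x_bp, x) - 1, 0, len(x_bp) - 2)
    t = (x - x_bp[k]) / (x_bp[k + 1] - x_bp[k])   # adjacent lambda weights
    return (1 - t) * f_bp[k] + t * f_bp[k + 1]

print(pw_linear(0.55))   # efficiency at 55% load -> 0.36
```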
Abstract:
The enhanced production of strange hadrons in heavy-ion collisions relative to that in minimum-bias pp collisions is historically considered one of the first signatures of the formation of a deconfined quark-gluon plasma. At the LHC, the ALICE experiment observed that the ratio of strange to non-strange hadron yields increases with the charged-particle multiplicity at midrapidity, starting from pp collisions and evolving smoothly across interaction systems and energies, ultimately reaching Pb-Pb collisions. The origin of this effect in small systems remains an open question. This thesis presents a comprehensive study of the production of the $K^{0}_{S}$, $\Lambda$ ($\bar{\Lambda}$) and $\Xi^{-}$ ($\bar{\Xi}^{+}$) strange hadrons in pp collisions at $\sqrt{s}$ = 13 TeV collected in LHC Run 2 with ALICE. A novel approach is exploited, introducing, for the first time, the concept of effective energy in the study of strangeness production in hadronic collisions at the LHC. In this work, the ALICE Zero Degree Calorimeters are used to measure the energy carried by forward-emitted baryons in pp collisions, which reduces the effective energy available for particle production with respect to the nominal centre-of-mass energy. The results presented in this thesis provide new insights into the interplay, for strangeness production, between the initial stages of the collision and the final hadronic state. Furthermore, the first Run 3 results on the production of $\Omega^{-}$ ($\bar{\Omega}^{+}$) multi-strange baryons are presented, measured in pp collisions at $\sqrt{s}$ = 13.6 TeV and 900 GeV, the highest and lowest collision energies reached so far at the LHC. This thesis also presents the development and validation of the ALICE Time-Of-Flight (TOF) data quality monitoring system for LHC Run 3. This work was fundamental for assessing the performance of the TOF detector during the commissioning phase, in the Long Shutdown 2, and during the data-taking period.
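Schematically, the effective energy referred to above is the nominal centre-of-mass energy reduced by the energy of the forward-emitted (leading) baryons measured in the ZDCs; a sketch of the relation, with the notation assumed here:

```latex
% Schematic definition of the effective energy (notation assumed here):
% \sqrt{s} is the nominal centre-of-mass energy and E_{leading} the energy
% carried by forward-emitted baryons, measured with the ZDCs.
E_{\mathrm{eff}} \simeq \sqrt{s} - E_{\mathrm{leading}}
```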