Abstract:
Carbon anodes are consumable components that serve as electrodes in the electrochemical reaction of a Hall-Héroult cell. They are mass-produced on a production line in which forming is one of the critical steps, since it determines part of their quality. The current forming process is not fully optimized: significant density gradients inside the anodes reduce their performance in the electrolysis cells. Even today, carbon anodes are produced with their overall density and final mechanical properties as the only quality criteria, and anode manufacturing is optimized empirically, directly on the production line. Yet anode quality ultimately comes down to a uniform electrical conductivity, so as to minimize current concentrations, which have several detrimental effects on anode performance and on aluminum production costs. This thesis is based on the hypothesis that the electrical conductivity of the anode is influenced only by its density, assuming a uniform chemical composition. The objective is to characterize the parameters of a model in order to feed a constitutive law that will make it possible to model the forming of anode blocks. Numerical modeling makes it possible to analyze the behavior of the paste during forming; it thus becomes possible to predict the density gradients inside the anodes and to optimize the forming parameters to improve anode quality. The selected model is based on the actual mechanical and tribological properties of the paste. The thesis begins with a behavioral study aimed at improving the understanding of the constitutive behaviors of the paste observed during preliminary pressing tests. This study is based on pressing tests of hot carbon paste in a rigid mold and on pressing tests of dry aggregates in the same mold, instrumented with a piezoelectric sensor to record acoustic emissions. This analysis preceded the characterization of the paste properties in order to better interpret its mechanical behavior, given the complex nature of this carbon material, whose mechanical properties evolve with density. A first experimental setup was specifically developed to characterize the Young's modulus and Poisson's ratio of the paste; the same setup was also used to characterize its viscosity (time-dependent behavior). No existing test is suited to characterizing these properties for this type of material heated to 150°C. A deformable-wall mold instrumented with strain gauges was used to carry out the tests. A second setup was developed to characterize the static and kinetic friction coefficients of the paste, also heated to 150°C. The model was then used to characterize the mechanical properties of the paste by inverse identification and to simulate the forming of laboratory anodes. The mechanical properties obtained from the experimental characterization were compared with those obtained by inverse identification, and the density maps extracted from the simulations were compared with maps of the anodes pressed in the laboratory, the latter obtained by computed tomography.
The simulation results confirm that numerical modeling has major potential as a tool for optimizing the carbon paste forming process. It makes it possible to evaluate the influence of each forming parameter without interrupting production or implementing costly changes on the production line. This tool therefore opens avenues such as modulating the frequency parameters, modifying the initial distribution of the paste in the mold, or molding the anode upside down, in order to optimize the forming process and increase anode quality.
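The abstract does not detail the identification procedure; the following is a minimal sketch of how elastic parameters might be recovered from a confined pressing test, assuming a simple linear elastic response so that the constrained modulus M = E(1-nu)/((1+nu)(1-2nu)) and the lateral-to-axial stress ratio K = nu/(1-nu) apply. The data and fitting routine are illustrative only, not the author's evolutive constitutive model.

```python
# Minimal sketch: recover E and nu from synthetic confined-compression data,
# assuming linear elasticity (not the evolutive constitutive law of the thesis).
import numpy as np
from scipy.optimize import least_squares

def confined_response(params, eps_axial):
    """Axial stress and lateral/axial stress ratio for a linear elastic material
    compressed with (nearly) zero lateral strain."""
    E, nu = params
    M = E * (1 - nu) / ((1 + nu) * (1 - 2 * nu))   # constrained modulus
    K = nu / (1 - nu)                              # lateral-to-axial stress ratio
    return M * eps_axial, K

# Hypothetical "measurements": axial strain, axial stress (MPa), stress ratio.
eps = np.linspace(0.001, 0.02, 10)
sig_meas = 55.0 * eps + np.random.normal(0, 0.01, eps.size)   # fake data
K_meas = 0.35 + np.random.normal(0, 0.005, eps.size)

def residuals(params):
    sig, K = confined_response(params, eps)
    return np.concatenate([sig - sig_meas, K - K_meas])

fit = least_squares(residuals, x0=[50.0, 0.3], bounds=([1.0, 0.01], [500.0, 0.49]))
E_fit, nu_fit = fit.x
print(f"E = {E_fit:.1f} MPa, nu = {nu_fit:.3f}")
```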
Abstract:
Senior universities have been playing an increasingly prominent role in the transformations of today's Western societies, which have the responsibility of integrating, absorbing, and even benefiting from the characteristic strengths of the senior student. They are therefore a reflection of the paradigm shifts concerning the older adult, enabling a wide range of possibilities and activities oriented toward this public. Proposing new practices and methodologies is consequently desirable, starting from new didactic teaching approaches that try to keep pace with the availability of information facilitating access to knowledge. The diversity of media and the crossing of several artistic disciplines present themselves as components of a process intended to be dynamic and current. This internship report raises the question of valuing Artistic Creation, through a contemporary approach, in shaping the educational offerings within Senior Universities in general, and the Universidade Sénior de Ovar in particular. To this end, the development of the Incubadora Artística project is proposed as an experimental configuration of what is taken to be a broadened possibility for the senior student to approach art in general, and contemporary art in particular, getting to know and exploring the different artistic expressions, as well as using them in their own artistic productions.
Abstract:
Simultaneous Localization and Mapping (SLAM) is a procedure used to determine the location of a mobile vehicle in an unknown environment, while constructing a map of that environment at the same time. Mobile platforms which make use of SLAM algorithms have industrial applications in autonomous maintenance, such as the inspection of flaws and defects in oil pipelines and storage tanks. A typical SLAM consists of four main components, namely, experimental setup (data gathering), vehicle pose estimation, feature extraction, and filtering. Feature extraction is the process of recognizing significant features of the unknown environment such as corners, edges, walls, and interior features. In this work, an original feature extraction algorithm specific to distance measurements obtained from SONAR sensor data is presented. This algorithm has been constructed by combining the SONAR Salient Feature Extraction Algorithm and the Triangulation Hough Based Fusion with point-in-polygon detection. The reconstructed maps obtained through simulations and experimental data with the fusion algorithm are compared to the maps obtained with existing feature extraction algorithms. Based on the results obtained, it is suggested that the proposed algorithm can be employed as an option for data obtained from SONAR sensors in environments where other forms of sensing are not viable. The feature extraction algorithm fusion requires the vehicle pose estimate as an input, which is obtained from a vehicle pose estimation model. For the vehicle pose estimation, the author uses sensor integration to estimate the pose of the mobile vehicle; different combinations of sensors are studied (e.g., encoder, gyroscope, or encoder and gyroscope). The different sensor fusion techniques for pose estimation are experimentally studied and compared, and the vehicle pose estimation model that produces the least error is used to generate inputs for the feature extraction algorithm fusion. In the experimental studies, two different environmental configurations are used, one without interior features and another with two interior features. Numerical and experimental findings are discussed. Finally, the SLAM algorithm is implemented along with the algorithms for feature extraction and vehicle pose estimation. Three different cases are experimentally studied, with the floor of the environment intentionally altered to induce slipping. Results obtained for implementations with and without SLAM are compared and discussed. The present work represents a step towards the realization of autonomous inspection platforms for performing concurrent localization and mapping in harsh environments.
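The fusion step mentioned above relies on a point-in-polygon test; a standard way to implement such a test is the ray-casting (even-odd) rule sketched below. This is a generic illustration, not the thesis code, and the polygon and query points are made up.

```python
# Ray-casting point-in-polygon test (even-odd rule): a generic sketch of the
# kind of check used to decide whether a candidate feature lies inside a region.
from typing import List, Tuple

def point_in_polygon(pt: Tuple[float, float],
                     poly: List[Tuple[float, float]]) -> bool:
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does the horizontal ray from (x, y) cross edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Example: a 2 m x 2 m square room with one query point inside, one outside.
room = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
print(point_in_polygon((1.0, 1.0), room))   # True
print(point_in_polygon((3.0, 1.0), room))   # False
```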
Abstract:
An ideal biomaterial for dental implants must have very high biocompatibility, which means that such materials should not provoke any serious adverse tissue response. In addition, the metal alloys used must have high fatigue resistance, due to the masticatory forces, and good corrosion resistance. These properties are obtained by using alpha and beta stabilizers, such as Al, V, Ni, Fe, Cr, Cu, and Zn. Commercially pure titanium (TiCP) is often used for manufacturing dental and orthopedic implants. However, other alloys are sometimes employed, and it is consequently essential to investigate the chemical elements present in those alloys that could be harmful to health. The present work investigated TiCP metal alloys used for dental implant manufacturing and evaluated the presence of stabilizing elements against the existing limits and standards for such materials. The EDXRF technique was used for alloy characterization and identification of the stabilizing elements. This method allows qualitative and quantitative analysis of the materials using the spectra of the characteristic X-rays emitted by the elements present in the metal samples. The experimental setup was based on two X-ray tubes (AMPTEK Mini-X model with Ag and Au targets), an X-123SDD detector (AMPTEK), and a 0.5 mm Cu collimator, developed to suit the sample characteristics. A second experimental setup, used as a complementary technique, consisted of an X-ray tube with a Mo target, a 0.65 mm collimator, and an XFlash (SDD) detector - ARTAX 200 (BRUKER). Another method for elemental characterization applied in the present work, energy dispersive spectroscopy (EDS), was based on Scanning Electron Microscopy (SEM) with an EVO® (Zeiss) instrument; this method was also used to evaluate the surface microstructure of the samples. The Ti content obtained in the elemental characterization was between 93.35 ± 0.17% and 95.34 ± 0.19%. These values are below the reference range of 98.635% to 99.5% for TiCP established by ASM International (ASM). The presence of Al and V in all samples also supports the conclusion that these are not TiCP implants. The values for Al vary between 6.3 ± 1.3% and 3.7 ± 2.0%, and for V between 0.26 ± 0.09% and 0.112 ± 0.048%. According to the American Society for Testing and Materials (ASTM), these elements should not be present in TiCP, and according to the National Institute of Standards and Technology (NIST), the Al content should be <0.01% and the V content 0.009 ± 0.001%. The results showed that the implant materials are not actually TiCP but were manufactured using a Ti-Al-V alloy, which also contained Fe, Ni, Cu, and Zn. The quantitative analysis and elemental characterization show that the best accuracy and precision were reached with the X-ray tube with the Au target and the 0.5 mm collimator. The EDS technique confirmed the EDXRF results for the Ti-Al-V alloy. From the SEM evaluation of the implant surface microstructure, it was possible to infer that ten of the thirteen studied samples have a contemporary rough surface and three have a machined surface.
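As a simple illustration of the kind of comparison reported above, the sketch below checks a measured composition against the TiCP reference window quoted in the abstract (98.635-99.5% Ti); the helper function and the overlap criterion are only for illustration and are not part of the thesis analysis.

```python
# Sketch: flag whether a measured Ti weight fraction (with uncertainty) is
# compatible with the commercially pure titanium (TiCP) window quoted above.
TICP_MIN, TICP_MAX = 98.635, 99.5   # reference limits (% Ti) cited in the abstract

def is_ticp(ti_percent: float, uncertainty: float) -> bool:
    """True if the measurement interval overlaps the TiCP reference window."""
    return (ti_percent + uncertainty) >= TICP_MIN and (ti_percent - uncertainty) <= TICP_MAX

# Measured extremes reported in the abstract:
for ti, u in [(93.35, 0.17), (95.34, 0.19)]:
    print(f"Ti = {ti} ± {u}%  ->  TiCP-compatible: {is_ticp(ti, u)}")
```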
Abstract:
Thermal characterization of high power light emitting diodes (LEDs) and laser diodes (LDs) is one of the most critical issues in achieving optimal performance in terms of center wavelength, spectrum, power efficiency, and reliability. Unique electrical/optical/thermal characterizations are proposed to analyze the complex thermal issues of high power LEDs and LDs. First, an advanced inverse approach, based on the transient junction temperature behavior, is proposed and implemented to quantify the resistance of the die-attach thermal interface (DTI) in high power LEDs. A hybrid analytical/numerical model is utilized to determine an approximate transient junction temperature behavior, which is governed predominantly by the resistance of the DTI. An accurate value of the resistance of the DTI is then determined inversely from the experimental data over the predetermined transient time domain using numerical modeling. Secondly, the effect of junction temperature on heat dissipation of high power LEDs is investigated. The theoretical junction-temperature dependence of two major parameters, the forward voltage and the radiant flux, and their effect on heat dissipation are reviewed. Measurements of the heat dissipation over a wide range of junction temperatures follow, quantifying the effect of these parameters for commercially available LEDs, and an empirical model of heat dissipation is proposed for use in practice. Finally, a hybrid experimental/numerical method is proposed to predict the junction temperature distribution of a high power LD bar. A commercial water-cooled LD bar is used to demonstrate the proposed method. A unique experimental setup is developed and implemented to measure the average junction temperatures of the LD bar. After measuring the heat dissipation of the LD bar, the effective heat transfer coefficient of the cooling system is determined inversely. The characterized properties are used to predict the junction temperature distribution over the LD bar under high operating currents. The results are presented in conjunction with the wall-plug efficiency and the center wavelength shift.
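The hybrid analytical/numerical model used in the thesis is not reproduced here; the sketch below only illustrates the general idea of an inverse extraction, fitting a single thermal resistance in a simplified one-lump RC junction-temperature response to synthetic transient data with scipy. All parameter values are made up.

```python
# Sketch of an inverse extraction: fit a thermal resistance from a transient
# junction-temperature curve using a one-lump RC model (illustrative only;
# the thesis uses a hybrid analytical/numerical model, not this one).
import numpy as np
from scipy.optimize import curve_fit

P_HEAT = 1.0      # dissipated power, W (assumed)
C_TH   = 0.05     # lumped thermal capacitance, J/K (assumed)

def tj_rise(t, r_th):
    """Junction temperature rise of a single RC stage driven by constant power."""
    return P_HEAT * r_th * (1.0 - np.exp(-t / (r_th * C_TH)))

# Synthetic "measured" transient with noise, generated with R_th = 8 K/W.
t = np.linspace(0, 2.0, 200)
tj_meas = tj_rise(t, 8.0) + np.random.normal(0, 0.05, t.size)

(r_fit,), _ = curve_fit(tj_rise, t, tj_meas, p0=[5.0])
print(f"Extracted thermal resistance = {r_fit:.2f} K/W")
```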
Measurements of radon concentration from Portland cement, gypsum and phosphogypsum mortars
Abstract:
Portland cement, a very common construction material, contains natural gypsum in its composition. To decrease manufacturing costs, the cement industry is replacing the gypsum in its composition with small quantities of phosphogypsum, the residue generated by fertilizer production, which consists essentially of calcium sulfate dihydrate and some impurities, such as fluoride, metals in general, and radionuclides. Currently, tons of phosphogypsum are stored in the open air near fertilizer plants, causing contamination of the environment. The 226Ra present in these materials produces 222Rn gas when it undergoes radioactive decay. This radioactive gas, when inhaled together with its decay products, which deposit in the lungs, produces exposure to radiation and can be a potential cause of lung cancer. Thus, the objective of this study was to measure the 222Rn concentration levels from cylindrical samples of Portland cement, gypsum, and phosphogypsum mortar from the state of Paraná, as well as to characterize the material and estimate the radon concentration in a hypothetical dwelling with walls covered by such materials. The experimental setup for the 222Rn activity measurements was based on an AlphaGUARD detector (Saphymo GmbH). The qualitative and quantitative analysis was performed by gamma spectrometry and by EDXRF with Au and Ag target tubes (AMPTEK) and a Mo target (ARTAX), and mechanical testing was carried out with X-ray equipment (Gilardoni) and a mechanical press (EMIC). The average radon activities obtained from the studied materials in the air of the containers were 854 ± 23 Bq/m3, 60.0 ± 7.2 Bq/m3, and 52.9 ± 5.4 Bq/m3 for Portland cement, gypsum, and phosphogypsum mortar, respectively. These results, extrapolated to a hypothetical dwelling volume of 36 m3 with the walls covered by such materials, were 3366 ± 91 Bq/m3, 237 ± 28 Bq/m3, and 208 ± 21 Bq/m3 for Portland cement, gypsum, and phosphogypsum mortar, respectively. Considering the limit of 300 Bq/m3 established by the ICRP, it can be concluded that the use of Portland cement plaster in dwellings is not safe and requires some specific mitigation procedure. From the gamma spectrometry results, the radium equivalent activity concentrations (Raeq) were calculated for Portland cement, gypsum, and phosphogypsum mortar, giving 78.2 ± 0.9 Bq/kg, 58.2 ± 0.9 Bq/kg, and 68.2 ± 0.9 Bq/kg, respectively. All radium equivalent activity concentrations for the studied samples are below the maximum level of 370 Bq/kg. The qualitative and quantitative analysis of the EDXRF spectra obtained from the studied mortar samples allowed the elements constituting the material, such as Ca, S, and Fe, to be identified and quantified.
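The abstract reports Raeq values but not the underlying activity concentrations; as a worked illustration, the sketch below applies the standard radium-equivalent weighting, Raeq = C_Ra + 1.43 C_Th + 0.077 C_K, and checks the result against the 370 Bq/kg limit mentioned above. The input concentrations are hypothetical.

```python
# Sketch: radium equivalent activity from (hypothetical) gamma-spectrometry
# results, using the standard weighting Raeq = C_Ra + 1.43*C_Th + 0.077*C_K,
# then compared with the 370 Bq/kg limit cited in the abstract.
RAEQ_LIMIT = 370.0  # Bq/kg

def radium_equivalent(c_ra: float, c_th: float, c_k: float) -> float:
    """Raeq in Bq/kg from 226Ra, 232Th and 40K activity concentrations (Bq/kg)."""
    return c_ra + 1.43 * c_th + 0.077 * c_k

# Hypothetical activity concentrations for a mortar sample (Bq/kg):
c_ra, c_th, c_k = 40.0, 15.0, 220.0
raeq = radium_equivalent(c_ra, c_th, c_k)
print(f"Raeq = {raeq:.1f} Bq/kg -> below limit: {raeq < RAEQ_LIMIT}")
```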
Abstract:
Quark transverse spin and the transverse momentum distributions are two current research frontiers in understanding the spin structure of the nucleon. The goal of the research reported in this dissertation is to extract new information on the quark transversity distribution and the novel transverse-momentum-dependent Sivers function in the neutron. A semi-inclusive deep inelastic scattering experiment was performed in Hall A of the Jefferson Laboratory using a 5.9 GeV electron beam and a transversely polarized ^{3}He target. The scattered electrons and the produced hadrons (pions, kaons, and protons) were detected in coincidence with two large magnetic spectrometers. By regularly flipping the spin direction of the transversely polarized target, the single-spin asymmetry (SSA) of the semi-inclusive deep inelastic reaction ^{3}He^{\uparrow}(e,e'h^{\pm})X was measured over the kinematic range 0.13 < x < 0.41 and 1.3 < Q^{2} < 3.1 GeV^{2}. The SSA contains several different azimuthal angular modulations, which are convolutions of the quark distribution functions in the nucleon and the quark fragmentation functions into hadrons. It is from the extraction of the various "moments" of these azimuthal angular distributions (the Collins moment and the Sivers moment) that we obtain information on the quark transversity distribution and the novel T-odd Sivers function. In this dissertation, I first introduce the theoretical background and experimental status of nucleon spin physics and the physics of SSAs. I then present the experimental setup and data collection of the JLab E06-010 experiment. Details of the data analysis are discussed next, with emphasis on the kaon particle identification and the Ring-Imaging Cherenkov detector, which were my major responsibilities in this experiment. Finally, results on the kaon Collins and Sivers moments extracted with the Maximum Likelihood method are presented and interpreted. I conclude with a discussion of the future prospects for this research.
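Outside the actual maximum-likelihood analysis of E06-010, the decomposition of a transverse single-spin asymmetry into Collins-like sin(phi_h + phi_S) and Sivers-like sin(phi_h - phi_S) modulations can be illustrated with a simple fit, as in the sketch below; the asymmetry data are synthetic and the fit method is not the one used in the dissertation.

```python
# Sketch: extract Collins- and Sivers-like moments from a synthetic asymmetry
# A_UT(phi_h, phi_S) = A_C*sin(phi_h + phi_S) + A_S*sin(phi_h - phi_S).
# Illustration only; the thesis uses an unbinned Maximum Likelihood extraction.
import numpy as np

rng = np.random.default_rng(0)
phi_h = rng.uniform(-np.pi, np.pi, 500)
phi_S = rng.uniform(-np.pi, np.pi, 500)

A_C_true, A_S_true = 0.03, -0.05
a_ut = (A_C_true * np.sin(phi_h + phi_S)
        + A_S_true * np.sin(phi_h - phi_S)
        + rng.normal(0, 0.01, phi_h.size))          # synthetic measured asymmetry

# Linear least-squares fit of the two angular modulations.
design = np.column_stack([np.sin(phi_h + phi_S), np.sin(phi_h - phi_S)])
moments, *_ = np.linalg.lstsq(design, a_ut, rcond=None)
print(f"Collins moment = {moments[0]:.3f}, Sivers moment = {moments[1]:.3f}")
```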
Abstract:
Quality of service has become a vitally important topic for service providers, but its analysis has been oriented mainly toward the configuration of interconnection devices. This article presents the analysis of data obtained from a series of simulations carried out in the core of an NGN network using MPLS technology, in which quality-of-service mechanisms were configured jointly with specific configurations within the core network. Additionally, the analysis of data obtained from an experimental laboratory setup with characteristics similar to the simulated scheme is presented.
Abstract:
The Homogeneous Charge Compression Ignition (HCCI) engine is a promising combustion concept for reducing NOx and particulate matter (PM) emissions while providing high thermal efficiency in internal combustion engines. However, this concept has limitations in the areas of combustion control and achieving stable combustion at high loads. For HCCI to be a viable option for on-road vehicles, further understanding of its combustion phenomena and their control is essential. Thus, this thesis focuses both on the experimental setup of an HCCI engine at Michigan Technological University (MTU) and on developing a physical numerical simulation model, called the Sequential Model for Residual Affected HCCI (SMRH), to investigate the performance of HCCI engines. The primary focus is on understanding the effects of intake and exhaust valve timings on HCCI combustion. For the experimental studies, this thesis contributed to the development of the HCCI setup at MTU, in particular in the areas of measuring valve profiles and measuring piston-to-valve contact clearance for procuring new pistons for further studies of high geometric compression ratio HCCI engines. It also includes the development and testing of a supercharging station and the setup of an electrical air heater to extend the HCCI operating region. The HCCI engine setup is based on a GM 2.0 L LHU Gen 1 engine, a direct injected engine with variable valve timing (VVT) capabilities. For the simulation studies, a computationally efficient modeling platform has been developed and validated against experimental data from a single cylinder HCCI engine. The in-cylinder pressure trace, combustion phasing (CA10, CA50, BD) and the performance metrics IMEP, thermal efficiency, and CO emissions are found to be in good agreement with experimental data for different operating conditions. The effects of phasing the intake and exhaust valves are analyzed using SMRH. In addition, a novel index called the Fuel Efficiency and Emissions (FEE) index is defined and used to determine the optimal valve timings for engine operation through the use of FEE contour maps.
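The combustion-phasing metrics cited above (CA10, CA50) are defined from the cumulative heat-release curve; the sketch below shows one generic way to compute them by interpolation, using synthetic data rather than outputs of the SMRH model.

```python
# Sketch: compute combustion phasing (CA10, CA50) as the crank angles at which
# the normalized cumulative heat release crosses 10% and 50%. Synthetic data;
# not output from the SMRH model described in the thesis.
import numpy as np

def phasing(crank_angle_deg, cum_heat_release, fraction):
    """Crank angle at which the normalized cumulative heat release reaches `fraction`."""
    mfb = cum_heat_release / cum_heat_release[-1]     # mass-fraction-burned proxy
    return float(np.interp(fraction, mfb, crank_angle_deg))

# Synthetic cumulative heat release shaped like a Wiebe-type burn profile.
ca = np.linspace(-20, 40, 200)                         # deg ATDC
cum_hr = 1.0 - np.exp(-5.0 * ((ca + 20) / 60.0) ** 3)  # arbitrary illustrative curve

print(f"CA10 = {phasing(ca, cum_hr, 0.10):.1f} deg ATDC")
print(f"CA50 = {phasing(ca, cum_hr, 0.50):.1f} deg ATDC")
```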
Abstract:
Peripheral nerves have demonstrated the ability to bridge gaps of up to 6 mm. Peripheral nervous system injury sites beyond this range need autograft or allograft surgery. Central nervous system cells do not allow spontaneous regeneration due to intrinsic environmental inhibition. Although stem cell therapy seems to be a promising approach towards nerve repair, it is essential to use the distinct three-dimensional architecture of a cell scaffold, with proper biomolecule embedding, in order to ensure that the local environment can be controlled well enough for growth and survival. Many approaches have been developed for the fabrication of 3D scaffolds, and more recently, fiber-based scaffolds produced via electrospinning have been garnering increasing interest, as electrospinning offers control over fiber composition as well as fiber mesh porosity using a relatively simple experimental setup. All these attributes make electrospun fibers a new class of promising scaffolds for neural tissue engineering. Therefore, the purpose of this doctoral study is to investigate the use of the novel material PGD and its derivative PGDF for obtaining fiber scaffolds by electrospinning. The performance of these scaffolds, combined with neural lineage cells derived from ESCs, was evaluated by a dissolvability test, Raman spectroscopy, cell viability assays, real-time PCR, immunocytochemistry, extracellular electrophysiology, and other methods. The newly designed collector makes it possible to easily obtain fibers with adequate length and integrity. The use of solvents such as ethanol and water for electrospinning of the fibrous scaffolds provides a potentially less toxic and more biocompatible fabrication method. Cell viability testing demonstrated that the addition of gelatin leads to significant improvement of cell proliferation on the scaffolds. Both real-time PCR and immunocytochemistry analysis indicated that motor neuron differentiation was achieved, with high motor neuron gene expression using the metabolite approach. The addition of fumaric acid into the fiber scaffolds further promoted the differentiation. Based on these results, this newly fabricated electrospun fiber scaffold, combined with neural lineage cells, provides a potential alternative strategy for nerve injury repair.
Abstract:
Historically, domestic tasks such as preparing food and washing and drying clothes and dishes were done by hand. In a modern home many of these chores are taken care of by machines such as washing machines, dishwashers and tumble dryers. When the first such machines came on the market, customers were happy that they worked at all! Today, the cost of electricity and customers' environmental awareness are high, so features such as low electricity, water and detergent use strongly influence which household machine the customer will buy. One way to achieve lower electricity usage for the tumble dryer and the dishwasher is to add a heat pump system. The function of a heat pump system is to extract heat from a lower temperature source (heat source) and reject it to a higher temperature sink (heat sink). Heat pump systems have been used for a long time in refrigerators and freezers, and that industry has driven the development of small, high quality, low price heat pump components. The low price of good quality heat pump components, along with an increased willingness to pay extra for lower electricity usage and environmental impact, makes it possible to introduce heat pump systems in other household products. However, there is a high risk of failure with new features. A number of household manufacturers no longer exist because they introduced poorly implemented new features, which resulted in low quality and product performance. A manufacturer must predict whether the future value of a feature is high enough for the customer chain to pay for it. The challenge for the manufacturer is to develop and produce a high-performance heat pump feature in a household product with high quality, predict the future willingness to pay for it, and launch it at the right moment in order to succeed. Tumble dryers with heat pump systems have been on the market since 2000. Paper I reports on the development of a transient simulation model of a commercial heat pump tumble dryer. The measured and simulated results were compared and showed good agreement. The influence of the size of the compressor and the condenser was investigated using the validated simulation model. The results from the simulation model show that increasing the cylinder volume of the compressor by 50% decreases the drying time by 14% without using more electricity. Paper II is a concept study of adding a heat pump system to a dishwasher in order to decrease the total electricity usage. The dishwasher, dishware and water are heated by the condenser, and the evaporator absorbs heat from a water tank; the majority of the heat transfer to the evaporator occurs when ice is generated in the water tank. An experimental setup and a transient simulation model of a heat pump dishwasher were developed. The simulation results show a 24% reduction in electricity use compared to a conventional dishwasher heated with an electric element. The simulation model was based on an experimental setup that was not optimised. During the study it became apparent that it is possible to decrease electricity usage even more with the next experimental setup.
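The transient simulation models of Papers I and II are not reproduced here; as a minimal sketch of the kind of lumped energy balance such a model builds on, the code below integrates the heating of a water mass by a heat pump condenser with an assumed COP, all parameter values being illustrative.

```python
# Minimal sketch: lumped transient heating of water by a heat pump condenser,
# integrated with explicit Euler. Illustrative parameters only; not the
# validated transient models of Papers I and II.
CP_WATER = 4186.0        # J/(kg*K)
mass = 4.0               # kg of water heated (assumed)
p_el = 300.0             # electrical power drawn by the compressor, W (assumed)
cop = 3.0                # assumed coefficient of performance
ua_loss = 2.0            # heat loss coefficient to ambient, W/K (assumed)
t_amb = 20.0             # ambient temperature, deg C

temp = 15.0              # initial water temperature, deg C
dt, t_end = 1.0, 1800.0  # time step and duration, s
energy_el = 0.0

t = 0.0
while t < t_end:
    q_cond = cop * p_el                       # heat delivered by the condenser
    q_loss = ua_loss * (temp - t_amb)         # losses to the surroundings
    temp += (q_cond - q_loss) * dt / (mass * CP_WATER)
    energy_el += p_el * dt
    t += dt

print(f"Water temperature after {t_end/60:.0f} min: {temp:.1f} C")
print(f"Electricity used: {energy_el/3.6e6:.3f} kWh")
```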
Abstract:
Acoustic Emission (AE) monitoring can be used to detect the presence of damage as well as determine its location in Structural Health Monitoring (SHM) applications. Information on the time difference with which the signal generated by a damage event arrives at different sensors is essential for performing localization. This makes the time of arrival (ToA) an important piece of information to retrieve from the AE signal. Generally, it is determined using statistical methods such as the Akaike Information Criterion (AIC), which is particularly prone to errors in the presence of noise. Given that the structures of interest are surrounded by harsh environments, a way to accurately estimate the arrival time in such noisy scenarios is of particular interest. In this work, two new methods based on machine learning are presented to estimate the arrival times of AE signals. Inspired by strong results in the field, two Deep Learning models (a subset of machine learning) are presented, based on the Convolutional Neural Network (CNN) and the Capsule Neural Network (CapsNet). The primary advantage of such models is that they do not require the user to pre-define selected features; they only require raw data and establish non-linear relationships between the inputs and outputs. The performance of the models is evaluated using AE signals generated by a custom ray-tracing algorithm, propagated on an aluminium plate, and compared to AIC. The relative error in estimation on the test set was < 5% for the models, compared to around 45% for AIC. The testing process was then continued by preparing an experimental setup and acquiring real AE signals. Similar performance was observed: the two models not only outperform AIC by more than an order of magnitude in average error, but were also shown to be far more robust than AIC, which fails in the presence of noise.
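For reference, the AIC arrival-time picker that the deep-learning models are compared against can be written in a few lines; the sketch below follows the usual formulation, AIC(k) = k*log(var(x[0:k])) + (N-k-1)*log(var(x[k:N])), with the onset taken at the minimum. The test signal is synthetic, not one of the signals used in the thesis.

```python
# Sketch of the Akaike Information Criterion (AIC) onset picker used as the
# baseline above: the arrival sample minimizes
#   AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:])).
import numpy as np

def aic_picker(x: np.ndarray) -> int:
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(1, n - 1):
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.argmin(aic))

# Synthetic AE-like trace: noise followed by a decaying burst arriving at sample 400.
rng = np.random.default_rng(1)
n, onset = 1000, 400
sig = rng.normal(0, 0.05, n)
t_burst = np.arange(n - onset)
sig[onset:] += np.sin(0.3 * t_burst) * np.exp(-t_burst / 150.0)

print(f"True onset: {onset}, AIC estimate: {aic_picker(sig)}")
```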
Abstract:
This thesis work was developed in collaboration between the Department of Physics and Astronomy of the University of Bologna and the IRCCS Rizzoli Orthopedic Institute during an internship period. The study aims to investigate the sensitivity of single-sided NMR in detecting structural differences of articular cartilage tissue and their correlation with mechanical behavior. Suitable cartilage indicators of osteoarthritis (OA) severity (e.g., water and proteoglycan content, collagen structure) were explored through four NMR parameters: T2, T1, D, and Slp. Structural variations of the cartilage among its three layers (superficial, middle, and deep) were investigated by performing several NMR pulse sequences on bovine knee joint samples using the NMR-MOUSE device. Beforehand, cartilage degradation studies were carried out, with tests in three different experimental setups; the parameters to monitor and the best experimental setup were thereby determined. An automated NMR procedure based on the acquisition of these quantitative parameters was implemented, tested, and used to investigate the layers of twenty bovine cartilage samples. Statistical and pattern recognition analyses of these parameters were performed. The results obtained from the analyses are very promising: the discrimination of the three cartilage layers shows very good significance, paving the way for extensive use of single-sided NMR devices for biomedical applications. These results will also be integrated with analyses of the tissue's mechanical properties for a complete evaluation of cartilage changes throughout OA disease. The use of low-priced and mobile devices in clinical applications could enable the screening of diseases related to cartilage tissue. This could have a positive impact both economically (including for underdeveloped countries) and socially, providing screening possibilities to a large part of the population.
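One of the NMR parameters listed above, T2, is typically obtained by fitting the echo-train decay to an exponential; the sketch below shows such a fit on synthetic data and is not the acquisition or analysis pipeline used with the NMR-MOUSE.

```python
# Sketch: estimate T2 by fitting a mono-exponential to a synthetic CPMG echo
# decay (illustrative; the thesis pipeline and pulse sequences are not reproduced).
import numpy as np
from scipy.optimize import curve_fit

def echo_decay(t_ms, amplitude, t2_ms):
    return amplitude * np.exp(-t_ms / t2_ms)

# Synthetic echo amplitudes: echo spacing 0.5 ms, true T2 = 12 ms, with noise.
echo_times = 0.5 * np.arange(1, 64)
signal = echo_decay(echo_times, 1.0, 12.0) + np.random.normal(0, 0.01, echo_times.size)

(amp, t2), _ = curve_fit(echo_decay, echo_times, signal, p0=[1.0, 10.0])
print(f"Fitted T2 = {t2:.1f} ms (amplitude = {amp:.2f})")
```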
Abstract:
The electrocatalytic reduction of CO2 (CO2RR) is a compelling strategy for converting CO2 into fuels and realizing a carbon-neutral circular economy. In recent years, research has focused on the development of new materials and technologies capable of capturing and converting CO2 into useful products. The main problem of CO2RR is its poor selectivity, which can lead to the formation of numerous reaction products to the detriment of efficiency. For this reason, the design of new electrocatalysts that reduce CO2 selectively and efficiently is a fundamental step for the future exploitation of this technology. Here we present a new class of electrocatalysts designed with a modular approach, that is, derived from the combination of different building blocks in a single nanostructure. With this approach it is possible to obtain materials with an innovative design and new functionalities, where the interconnections between the various components are essential for a highly selective and efficient reduction of CO2, thus opening up new possibilities in the design of optimized electrocatalytic materials. By combining the unique physico-chemical properties of carbon nanostructures (CNS) with nanocrystalline metal oxides (MO), we were able to modulate the selectivity of CO2RR, producing formic acid and syngas at low overpotentials. The CNS not only stabilize the MO nanoparticles; the creation of an optimal interface between the two nanostructures also improves the catalytic activity of the active phase of the material, while the presence of oxygen atoms in the MO creates defects that accelerate the reaction kinetics and stabilize certain reaction intermediates, selecting the reaction pathway. Finally, part of the work was dedicated to the study of the experimental parameters influencing the CO2RR, with the aim of improving the experimental setup in order to obtain catalytic performance of commercial relevance.
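Selectivity in CO2RR is commonly quantified through the Faradaic efficiency of each product, FE = z*n*F/Q; the sketch below shows that calculation for a hypothetical chronoamperometry run, with all quantities invented for illustration and not taken from the work above.

```python
# Sketch: Faradaic efficiency of CO2RR products, FE = z*n*F/Q, for a
# hypothetical electrolysis run (numbers are illustrative only).
F = 96485.0  # Faraday constant, C/mol

def faradaic_efficiency(moles_product: float, electrons_per_molecule: int,
                        charge_passed_c: float) -> float:
    return electrons_per_molecule * moles_product * F / charge_passed_c

charge = 120.0  # total charge passed, C (hypothetical)
products = {            # product: (moles formed, electrons per molecule)
    "formate": (4.0e-4, 2),
    "CO":      (1.0e-4, 2),
    "H2":      (1.2e-4, 2),
}

for name, (moles, z) in products.items():
    fe = faradaic_efficiency(moles, z, charge)
    print(f"{name}: FE = {100 * fe:.1f}%")
```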
Abstract:
The High Energy Rapid Modular Ensemble of Satellites (HERMES) is a new mission concept involving the development of a constellation of six CubeSats in low Earth orbit with new miniaturized instruments that host a hybrid Silicon Drift Detector/GAGG:Ce based system for X-ray and γ-ray detection, aiming to monitor high-energy cosmic transients, such as Gamma Ray Bursts and the electromagnetic counterparts of gravitational wave events. The HERMES constellation will also operate together with the Australian-Italian SpIRIT mission, which will house a HERMES-like detector. The HERMES pathfinder mini-constellation, consisting of six satellites plus SpIRIT, is likely to be launched in 2023. The HERMES detectors are based on the heritage of the Italian ReDSoX collaboration, with joint design and production by INFN-Trieste and Fondazione Bruno Kessler, and the involvement of several Italian research institutes and universities. An application-specific, low-noise, low-power integrated circuit (ASIC) called LYRA was conceived and designed for the HERMES readout electronics. My thesis project focuses on the ground calibrations of the first HERMES and SpIRIT flight detectors, with a performance assessment and characterization of the detectors. The first part of this work addresses measurements and experimental tests on laboratory prototypes of the HERMES detectors and their front-end electronics, while the second part is based on the design of the experimental setup for flight detector calibrations and related functional tests for data acquisition, as well as the development of the calibration software. In more detail, the calibration parameters (such as the gain of each detector channel) are determined using measurements with radioactive sources, performed at different operating temperatures between -20°C and +20°C by placing the detector in a suitable climate chamber. The final part of the thesis involves the analysis of the calibration data and a discussion of the results.
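The per-channel gain determination described above amounts to relating measured peak positions to the known line energies of the radioactive sources; the sketch below does this with a simple linear fit on invented peak data and is not the HERMES/SpIRIT calibration software.

```python
# Sketch: per-channel energy calibration (gain and offset) from a linear fit of
# measured peak positions (ADC channels) against known source line energies.
# Peak positions are invented; this is not the HERMES/SpIRIT calibration code.
import numpy as np

# Known line energies (keV) of typical calibration sources, e.g. 241Am and 55Fe.
line_energies = np.array([5.9, 13.9, 17.8, 26.3, 59.5])
# Hypothetical fitted peak centroids (ADC channels) for one detector channel.
peak_channels = np.array([118.0, 278.1, 356.0, 526.2, 1190.5])

gain, offset = np.polyfit(peak_channels, line_energies, 1)   # keV per ADC channel
print(f"gain = {gain:.4f} keV/ch, offset = {offset:.2f} keV")

# Apply the calibration to convert a measured spectrum channel to energy.
print(f"Channel 700 -> {gain * 700 + offset:.1f} keV")
```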