957 results for: Planar jet, hot-wire anemometry, calibration procedure, experiments for the characterization


Relevance: 100.00%

Abstract:

We present the stellar calibrator sample and the conversion from instrumental to physical units for the 24 μm channel of the Multiband Imaging Photometer for Spitzer (MIPS). The primary calibrators are A stars, and the calibration factor based on those stars is 4.54 × 10^-2 MJy sr^-1 (DN/s)^-1, with a nominal uncertainty of 2%. We discuss the data reduction procedures required to attain this accuracy; without these procedures, the calibration factor obtained using the automated pipeline at the Spitzer Science Center is 1.6% ± 0.6% lower. We extend this work to predict 24 μm flux densities for a sample of 238 stars that covers a larger range of flux densities and spectral types. We present a total of 348 measurements of 141 stars at 24 μm. This sample covers a factor of ~460 in 24 μm flux density, from 8.6 mJy up to 4.0 Jy. We show that the calibration is linear over that range with respect to target flux and background level. The calibration is based on observations made using 3 s exposures; a preliminary analysis shows that the calibration factor may be 1% and 2% lower for 10 and 30 s exposures, respectively. We also demonstrate that the calibration is very stable: over the course of the mission, repeated measurements of our routine calibrator, HD 159330, show an rms scatter of only 0.4%. Finally, we show that the point-spread function (PSF) is well measured and allows us to calibrate extended sources accurately; Infrared Astronomical Satellite (IRAS) and MIPS measurements of a sample of nearby galaxies are identical within the uncertainties.
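
The calibration factor above fixes the conversion from instrumental to physical units. As a minimal illustration of that conversion, the sketch below turns DN/s into surface brightness and, with an assumed pixel scale and an assumed aperture sum (neither taken from the paper), into a point-source flux density in mJy; aperture and colour corrections are ignored.

```python
# Minimal sketch: converting MIPS 24 um instrumental units (DN/s) to physical
# units with the calibration factor quoted in the abstract. The pixel scale
# and the example signals below are illustrative assumptions, not values
# from the paper.

CAL_FACTOR = 4.54e-2       # MJy sr^-1 per (DN/s), nominal uncertainty ~2%
PIXEL_SCALE_ARCSEC = 2.45  # assumed MIPS 24 um pixel scale (illustrative)

ARCSEC_TO_RAD = 4.8481368e-6
PIXEL_SR = (PIXEL_SCALE_ARCSEC * ARCSEC_TO_RAD) ** 2  # pixel solid angle, steradians

def surface_brightness_mjy_sr(dn_per_s):
    """Surface brightness in MJy/sr for a single pixel reading in DN/s."""
    return CAL_FACTOR * dn_per_s

def point_source_flux_mjy(summed_dn_per_s):
    """Flux density (mJy) from the summed DN/s inside an aperture,
    ignoring aperture and colour corrections for simplicity."""
    flux_mjy = CAL_FACTOR * summed_dn_per_s * PIXEL_SR * 1e6 * 1e3  # MJy -> Jy -> mJy
    return flux_mjy

if __name__ == "__main__":
    print(surface_brightness_mjy_sr(10.0))   # 0.454 MJy/sr
    print(point_source_flux_mjy(1500.0))     # ~9.6 mJy for this illustrative sum
```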

Relevance: 100.00%

Abstract:

An accurate knowledge of the fluorescence yield and its dependence on atmospheric properties such as pressure, temperature or humidity is essential to obtain a reliable measurement of the primary energy of cosmic rays in experiments using the fluorescence technique. In this work, several sets of fluorescence yield data (i.e. absolute value and quenching parameters) are described and compared. A simple procedure has been developed to study the effect of the assumed fluorescence yield on the reconstructed shower parameters (energy and shower maximum depth) as a function of the characteristics of the primary particle. As an application, the effect of the water-vapor and temperature dependence of the collisional cross section on the fluorescence yield, and its impact on the reconstruction of the primary energy and shower maximum depth, has been studied.
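
To make the pressure, temperature and humidity dependence concrete, the sketch below implements a commonly used quenching parametrization of the fluorescence yield; the reference yield and characteristic pressures are placeholders and do not correspond to the data sets compared in the paper.

```python
import math

# Minimal sketch of a common fluorescence-yield quenching parametrization:
#   Y(p, T, e) = Y0 / (1 + (p - e)/p'_air(T) + e/p'_water(T))
# where the characteristic pressures scale as sqrt(T) when the collisional
# cross sections are taken to be temperature independent. All numbers below
# are placeholders, not the data sets compared in the paper.

Y0 = 7.0              # photons per metre at reference conditions (placeholder)
T0 = 293.0            # K, reference temperature
P_PRIME_AIR = 15.0    # hPa, characteristic pressure for dry air (placeholder)
P_PRIME_WATER = 1.5   # hPa, characteristic pressure for water vapour (placeholder)

def p_prime(p_prime_ref, T, alpha=0.0):
    """Characteristic pressure at temperature T; alpha parametrizes an extra
    temperature dependence of the collisional cross section (alpha = 0 means
    a pure sqrt(T) scaling)."""
    return p_prime_ref * math.sqrt(T / T0) * (T / T0) ** alpha

def fluorescence_yield(p_hpa, T_kelvin, e_hpa=0.0, alpha_air=0.0, alpha_water=0.0):
    """Fluorescence yield (photons/m) at pressure p, temperature T and
    water-vapour partial pressure e."""
    quench = ((p_hpa - e_hpa) / p_prime(P_PRIME_AIR, T_kelvin, alpha_air)
              + e_hpa / p_prime(P_PRIME_WATER, T_kelvin, alpha_water))
    return Y0 / (1.0 + quench)

print(fluorescence_yield(800.0, 280.0, e_hpa=5.0))
```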

Relevance: 100.00%

Abstract:

Geochemical well logs were used to measure the dry weight percent oxide abundances of Si, Al, Ca, Mg, Fe, Ti, and K and the elemental abundances of Gd, S, Th, and U at 0.15-m intervals throughout the basement section of Hole 504B. These geochemical data are used to estimate the integrated chemical exchange resulting from hydrothermal alteration of the oceanic crust that has occurred over the last 5.9 Ma. A large increase in Si in the transition zone between pillows and dikes (Layers 2B and 2C) indicates that mixing of hot, upwelling hydrothermal fluids with cold, downwelling seawater occurred in the past at a permeability discontinuity at this level in the crust, even though the low-to-high permeability boundary in Hole 504B is now 500 m shallower (at the Layer 2A/2B boundary). The observations of extensive Ca loss and Mg gain agree with chemical exchanges recorded in the laboratory in experiments on the reactions that occur between basalt and seawater at high temperatures. The K budget requires significant addition to Layer 2A from both high-temperature depletion in Layers 2B and 2C and low-temperature alteration by seawater. Integrated water/rock ratios are derived for the mass of seawater required to add enriched elements and for the mass of hydrothermal fluid required to remove depleted elements in the crust at Hole 504B.
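
The integrated water/rock ratios mentioned at the end of the abstract follow from a simple element mass balance; the sketch below shows the idea with placeholder concentrations (not Hole 504B values) for an element, such as Mg, that the rock gains from seawater.

```python
# Minimal sketch of the mass-balance water/rock estimate implied by the abstract:
# the mass of seawater needed to supply an element gained by the crust equals the
# integrated gain in the rock divided by the amount each kilogram of seawater can
# deliver. The concentrations below are illustrative placeholders.

def water_rock_ratio(c_rock_altered, c_rock_fresh, c_fluid_in, c_fluid_out):
    """Water/rock mass ratio from a simple element mass balance.

    c_rock_*  : element concentration in altered / fresh rock (mass fraction)
    c_fluid_* : element concentration in the fluid entering / leaving the rock
    """
    gained_by_rock = c_rock_altered - c_rock_fresh
    delivered_per_kg_fluid = c_fluid_in - c_fluid_out
    return gained_by_rock / delivered_per_kg_fluid

# Example: Mg gain, with the seawater assumed to be completely stripped of Mg.
print(water_rock_ratio(0.060, 0.048, 0.00129, 0.0))  # ~9.3 kg water per kg rock
```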

Relevance: 100.00%

Abstract:

An array of Bio-Argo floats equipped with radiometric sensors has recently been deployed in various open-ocean areas representative of the diversity of trophic and bio-optical conditions prevailing in so-called Case 1 waters. Around solar noon and on almost every day, each float acquires 0-250 m vertical profiles of Photosynthetically Available Radiation and downward irradiance at three wavelengths (380, 412 and 490 nm). To date, more than 6500 profiles have been acquired for each radiometric channel. Because these radiometric data are collected without operator control and regardless of meteorological conditions, specific automatic data processing protocols have to be developed. Here, we present a data quality-control procedure aimed at verifying profile shapes and enabling near real-time data distribution. The procedure is specifically designed to: 1) identify the main measurement issues (e.g. dark signal, atmospheric clouds, spikes and wave-focusing occurrences); and 2) validate the final data with a hierarchy of tests to ensure they are scientifically usable. The procedure, adapted to each of the four radiometric channels, flags each profile in a way compliant with the data management procedure used by the Argo program. The main perturbations in the light field are identified by the new protocols with good performance over the whole dataset, which highlights their potential applicability at the global scale. Finally, comparison with modeled surface irradiances allows the accuracy of the quality-controlled irradiance measurements to be assessed and any possible evolution over the float lifetime due to biofouling or instrumental drift to be identified.
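
A minimal sketch of the kind of automatic per-profile checks listed above (dark signal, clouds, spikes, wave focusing) is given below; the thresholds, flag values and check logic are illustrative assumptions, not the actual Argo quality-control implementation.

```python
import numpy as np

def qc_irradiance_profile(depth_m, ed, dark_threshold=1e-4, spike_threshold=0.6):
    """Return a per-sample flag array: 1 = good, 3 = suspect, 4 = bad.

    depth_m, ed: one profile ordered from the surface downwards; ed is the
    downward irradiance in arbitrary units.
    """
    ed = np.asarray(ed, dtype=float)
    flags = np.ones(ed.shape, dtype=int)

    # 1) Dark signal: values indistinguishable from the sensor dark level.
    flags[ed <= dark_threshold] = 4

    # 2) Spikes / wave focusing: in log space an undisturbed profile is close
    #    to linear, so compare each interior sample to the mean of its two
    #    neighbours and flag large excursions as suspect.
    log_ed = np.log(np.clip(ed, dark_threshold, None))
    resid = np.abs(log_ed[1:-1] - 0.5 * (log_ed[:-2] + log_ed[2:]))
    spike_idx = np.where(resid > spike_threshold)[0] + 1
    flags[spike_idx[flags[spike_idx] == 1]] = 3

    # 3) Shape check: downward irradiance should broadly decrease with depth;
    #    a strong increase relative to the sample above suggests a cloud edge
    #    or an otherwise disturbed profile.
    idx = np.where(ed[1:] > 1.2 * ed[:-1])[0] + 1
    flags[idx[flags[idx] == 1]] = 3

    return flags

# Synthetic example: a clean exponential profile with one injected spike.
z = np.arange(0.0, 100.0, 5.0)
profile = np.exp(-0.08 * z)
profile[6] *= 3.0
print(qc_irradiance_profile(z, profile))
```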

Relevance: 100.00%

Abstract:

BACKGROUND:
Evidence regarding the association of the built environment with physical activity is influencing policy recommendations that advocate changing the built environment to increase population-level physical activity. However, to date there has been no rigorous appraisal of the quality of the evidence on the effects of changing the built environment. The aim of this review was to conduct a thorough quantitative appraisal of the risk of bias present in those natural experiments with the strongest experimental designs for assessing the causal effects of the built environment on physical activity.

METHODS:
Eligible studies had to evaluate the effects of changing the built environment on physical activity, include at least one measurement of physical activity before and one after the changes to the environment, and have at least one intervention site and one non-intervention comparison site. Given the large number of systematic reviews in this area, studies were identified from three exemplar systematic reviews; these were published in the past five years and were selected to provide a range of different built environment interventions. The risk of bias in these studies was analysed using the Cochrane Risk of Bias Assessment Tool for Non-Randomized Studies of Interventions (ACROBAT-NRSI).

RESULTS:
Twelve eligible natural experiments were identified. Risk of bias assessments were conducted for each physical activity outcome from all studies, resulting in a total of fifteen outcomes being analysed. Intervention sites included parks, urban greenways/trails, bicycle lanes, paths, vacant lots, and a senior citizen's centre. All outcomes had an overall critical (n = 12) or serious (n = 3) risk of bias. Domains with the highest risk of bias were confounding (due to inadequate control sites and poor control of confounding variables), measurement of outcomes, and selection of the reported result.

CONCLUSIONS:
The present review focused on the strongest natural experiments conducted to date. Given this, the failure of existing studies to adequately control for potential sources of bias highlights the need for more rigorous research to underpin policy recommendations for changing the built environment to increase physical activity. Suggestions are proposed for how future natural experiments in this area can be improved.

Relevance: 100.00%

Abstract:

A wide variety of panoramic lenses is now available on the market, some of which have remarkable characteristics. Among the latter, Panomorph lenses are anamorphic panoramic lenses with a strongly non-uniform distortion profile, which creates zones of increased magnification within the field of view. In a mobile robotics context, these features can be exploited in stereoscopic systems for 3D reconstruction of objects of interest, providing both good awareness of the environment and access to finer detail thanks to the zones of increased magnification. However, because of their complexity, these lenses are difficult to calibrate and, to our knowledge, no study has really addressed this question. The main objective of this thesis is the design, development and performance evaluation of Panomorph stereoscopic systems. Calibration was carried out with an established technique based on planar targets and a widely used calibration toolbox. In addition, new mathematical techniques aimed at restoring rotational symmetry in the image (circle) and at making the focal length uniform (uniform circle) were developed to see whether they could make calibration easier. First, the field of view was divided into zones within which the instantaneous focal length varies little, and calibration was performed for each zone. Then, a global calibration of the systems over the entire field of view was also carried out. The results showed that the zone-by-zone calibration technique yields no significant gain in the quality of 3D reconstructions of objects of interest compared with the global calibration. However, studying this new approach made it possible to evaluate the performance of Panomorph stereoscopic systems over the whole field of view and to show that good-quality 3D reconstructions can be obtained in every zone. Moreover, the circle technique produced 3D reconstruction results generally equivalent to those obtained with the original coordinates. Since some calibration tools, unlike the one used in this work, offer only a single degree of freedom for the focal length, this technique could make it possible to calibrate Panomorph lenses with such tools. Finally, some conclusions could be drawn about the key factors influencing the quality of 3D reconstruction with Panomorph stereoscopic systems and about the lens characteristics to favour when choosing lenses. The difficulty of calibrating Panomorph optics in the laboratory led to the development of a virtual calibration technique using optical design software and a calibration toolbox. This approach made it possible to run simulations on the impact of operating conditions on the calibration parameters and on the effect of calibration conditions on reconstruction quality. Experiments of this kind are practically impossible to carry out in the laboratory but are of clear interest to users.
Virtual calibration of a conventional lens also showed that the mean reprojection error, commonly used to assess the quality of a calibration, is not necessarily a reliable indicator of 3D reconstruction quality. Additional data are therefore needed to properly judge the quality of a calibration.
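
As a concrete illustration of that last point, the sketch below computes the mean reprojection error that is commonly reported as a calibration quality metric, here with OpenCV's pinhole-plus-distortion model standing in for the calibration toolbox and the Panomorph lens model used in the thesis; the thesis argues that this number alone does not guarantee good 3D reconstruction.

```python
import numpy as np
import cv2

# Minimal sketch of the mean reprojection error used to judge a calibration.
# OpenCV's projectPoints (pinhole + distortion) is a stand-in model; it is not
# the Panomorph lens model or the toolbox used in the thesis.

def mean_reprojection_error(object_points, image_points, rvecs, tvecs,
                            camera_matrix, dist_coeffs):
    """Average pixel distance between detected and reprojected target corners.

    object_points / image_points: lists with one array per calibration view.
    """
    total_err, total_pts = 0.0, 0
    for obj, img, rvec, tvec in zip(object_points, image_points, rvecs, tvecs):
        proj, _ = cv2.projectPoints(obj, rvec, tvec, camera_matrix, dist_coeffs)
        err = np.linalg.norm(img.reshape(-1, 2) - proj.reshape(-1, 2), axis=1)
        total_err += err.sum()
        total_pts += len(err)
    return total_err / total_pts

# Tiny synthetic check: reprojecting the very points used as "detections"
# should give (numerically) zero error.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)
obj = np.array([[x, y, 0.0] for x in range(3) for y in range(3)], dtype=np.float32)
rvec = np.array([0.0, 0.0, 0.0])
tvec = np.array([0.0, 0.0, 5.0])
img, _ = cv2.projectPoints(obj, rvec, tvec, K, dist)
print(mean_reprojection_error([obj], [img], [rvec], [tvec], K, dist))  # ~0.0
```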

Relevance: 100.00%

Abstract:

Carbon anodes are consumable components that serve as the electrode in the electrochemical reaction of a Hall-Héroult cell. They are mass-produced on a production line in which forming is one of the critical steps, since it determines part of their quality. The current forming process is not fully optimized. Large density gradients inside the anodes reduce their performance in the electrolysis cells. Even today, carbon anodes are produced with their overall density and final mechanical properties as the only quality criteria. Anode manufacturing is optimized empirically, directly on the production line. Yet anode quality ultimately comes down to a uniform electrical conductivity, so as to minimize the current concentrations that have several detrimental effects on anode performance and on aluminium production costs. This thesis is based on the hypothesis that, for a uniform chemical composition, the electrical conductivity of the anode is influenced only by its density. The objective is to characterize the parameters of a model in order to feed a constitutive law that will allow the forming of anode blocks to be modelled. Numerical modelling makes it possible to analyse the behaviour of the paste during forming. It then becomes possible to predict the density gradients inside the anodes and to optimize the forming parameters to improve their quality. The selected model is based on the real mechanical and tribological properties of the paste. The thesis begins with a behavioural study aimed at improving the understanding of the constitutive behaviours of the paste observed during preliminary pressing tests. This study is based on pressing tests of hot carbon paste produced in a rigid mould and on pressing tests of dry aggregates in the same mould, instrumented with a piezoelectric sensor to record acoustic emissions. This analysis preceded the characterization of the paste properties, in order to better interpret its mechanical behaviour given the complex nature of this carbon material, whose mechanical properties evolve with density. A first experimental setup was specifically developed to characterize the Young's modulus and Poisson's ratio of the paste. The same setup was also used to characterize the viscosity (time-dependent behaviour) of the paste. No existing test is suited to characterizing these properties for this type of material heated to 150 °C. A mould with a deformable wall, instrumented with strain gauges, was used to carry out the tests. A second setup was developed to characterize the static and kinetic friction coefficients of the paste, also heated to 150 °C. The model was used to characterize the mechanical properties of the paste by inverse identification and to simulate the forming of laboratory anodes. The mechanical properties of the paste obtained by experimental characterization were compared with those obtained by the inverse identification method. The density maps extracted from the simulations were also compared with the maps of the anodes pressed in the laboratory. Computed tomography was used to produce the latter density maps.
The simulation results confirm that there is major potential in using numerical modelling as a tool to optimize the carbon paste forming process. Numerical modelling makes it possible to evaluate the influence of each forming parameter without interrupting production or implementing costly changes on the production line. This tool therefore makes it possible to explore avenues such as modulating the frequency parameters, modifying the initial distribution of the paste in the mould, or moulding the anode upside down, in order to optimize the forming process and increase anode quality.
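
As a small illustration of how such density-dependent properties feed a constitutive law, the sketch below fits a power-law stiffness-density relation to a handful of measurements; both the data points and the power-law form are placeholders rather than the measurements or the model used in the thesis.

```python
import numpy as np

# Minimal sketch of condensing density-dependent elastic properties, such as
# those characterized in the thesis, into a relation usable by a forming
# simulation. Data points and the power-law form are illustrative assumptions.

density = np.array([1300.0, 1400.0, 1500.0, 1550.0, 1600.0])   # kg/m^3 (placeholder)
young_modulus = np.array([20.0, 45.0, 95.0, 140.0, 200.0])     # MPa (placeholder)

# Fit E(rho) = a * rho**b by linear least squares in log-log space.
b, log_a = np.polyfit(np.log(density), np.log(young_modulus), 1)
a = np.exp(log_a)

def E_of_rho(rho):
    """Young's modulus (MPa) as a function of paste density (kg/m^3)."""
    return a * rho ** b

print(E_of_rho(1450.0))  # interpolated stiffness used at each load increment
```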

Relevance: 100.00%

Abstract:

The blast furnace is the main ironmaking production unit in the world which converts iron ore with coke and hot blast into liquid iron, hot metal, which is used for steelmaking. The furnace acts as a counter-current reactor charged with layers of raw material of very different gas permeability. The arrangement of these layers, or burden distribution, is the most important factor influencing the gas flow conditions inside the furnace, which dictate the efficiency of the heat transfer and reduction processes. For proper control the furnace operators should know the overall conditions in the furnace and be able to predict how control actions affect the state of the furnace. However, due to high temperatures and pressure, hostile atmosphere and mechanical wear it is very difficult to measure internal variables. Instead, the operators have to rely extensively on measurements obtained at the boundaries of the furnace and make their decisions on the basis of heuristic rules and results from mathematical models. It is particularly difficult to understand the distribution of the burden materials because of the complex behavior of the particulate materials during charging. The aim of this doctoral thesis is to clarify some aspects of burden distribution and to develop tools that can aid the decision-making process in the control of the burden and gas distribution in the blast furnace. A relatively simple mathematical model was created for simulation of the distribution of the burden material with a bell-less top charging system. The model developed is fast and it can therefore be used by the operators to gain understanding of the formation of layers for different charging programs. The results were verified by findings from charging experiments using a small-scale charging rig at the laboratory. A basic gas flow model was developed which utilized the results of the burden distribution model to estimate the gas permeability of the upper part of the blast furnace. This combined formulation for gas and burden distribution made it possible to implement a search for the best combination of charging parameters to achieve a target gas temperature distribution. As this mathematical task is discontinuous and non-differentiable, a genetic algorithm was applied to solve the optimization problem. It was demonstrated that the method was able to evolve optimal charging programs that fulfilled the target conditions. Even though the burden distribution model provides information about the layer structure, it neglects some effects which influence the results, such as mixed layer formation and coke collapse. A more accurate numerical method for studying particle mechanics, the Discrete Element Method (DEM), was used to study some aspects of the charging process more closely. Model charging programs were simulated using DEM and compared with the results from small-scale experiments. The mixed layer was defined and the voidage of mixed layers was estimated. The mixed layer was found to have about 12% less voidage than layers of the individual burden components. Finally, a model for predicting the extent of coke collapse when heavier pellets are charged over a layer of lighter coke particles was formulated based on slope stability theory, and was used to update the coke layer distribution after charging in the mathematical model. In designing this revision, results from DEM simulations and charging experiments for some charging programs were used. 
The findings from the coke collapse analysis can be used to design charging programs with more stable coke layers.
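
The search for charging programs described above combines a fast burden-distribution and gas-flow model with a genetic algorithm; the sketch below shows the shape of such a loop with a toy surrogate model in place of the real one, so the parameter meaning (five ore-to-coke ratios across radial rings) and all numbers are purely illustrative.

```python
import random

# Minimal sketch of a genetic algorithm searching for charging parameters whose
# predicted radial gas temperature distribution matches a target. The "model"
# below is a toy surrogate standing in for the combined burden-distribution and
# gas-flow models of the thesis.

TARGET = [1000.0, 850.0, 700.0, 600.0, 550.0]   # target gas temperatures, centre -> wall

def predict_temperatures(genes):
    """Toy surrogate: genes are five ore/coke ratios for five radial rings;
    more ore (a higher ratio) means a colder ring."""
    return [1100.0 - 600.0 * g for g in genes]

def fitness(genes):
    t = predict_temperatures(genes)
    return -sum((a - b) ** 2 for a, b in zip(t, TARGET))   # higher is better

def genetic_algorithm(pop_size=40, generations=200, mutation=0.1):
    pop = [[random.random() for _ in range(5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                    # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 5)                  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:                # bounded mutation
                i = random.randrange(5)
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0.0, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_algorithm()
print([round(g, 3) for g in best], predict_temperatures(best))
```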

Relevance: 100.00%

Abstract:

The semiconductor nanowire has been widely studied over the past decade and identified as a promising nanotechnology building block with application in photonics and electronics. The flexible bottom-up approach to nanowire growth allows for straightforward fabrication of complex 1D nanostructures with interesting optical, electrical, and mechanical properties. III-V nanowires in particular are useful because of their direct bandgap, high carrier mobility, and ability to form heterojunctions and have been used to make devices such as light-emitting diodes, lasers, and field-effect transistors. However, crystal defects are widely reported for III-V nanowires when grown in the common out-of-plane <111>B direction. Furthermore, commercialization of nanowires has been limited by the difficulty of assembling nanowires with predetermined position and alignment on a wafer-scale. In this thesis, planar III-V nanowires are introduced as a low-defect and integratable nanotechnology building block grown with metalorganic chemical vapor deposition. Planar GaAs nanowires grown with gold seed particles self-align along the <110> direction on the (001) GaAs substrate. Transmission electron microscopy reveals that planar GaAs nanowires are nearly free of crystal defects and grow laterally and epitaxially on the substrate surface. The nanowire morphology is shown to be primarily controlled through growth temperature and an ideal growth window of 470 ± 10 °C is identified for planar GaAs nanowires. Extension of the planar growth mode to other materials is demonstrated through growth of planar InAs nanowires. Using a sacrificial layer, the transfer of planar GaAs nanowires onto silicon substrates with control over the alignment and position is presented. A metal-semiconductor field-effect transistor fabricated with a planar GaAs nanowire shows bulk-like low-field electron transport characteristics with high mobility. The aligned planar geometry and excellent material quality of planar III-V nanowires may lead to highly integrated III-V nanophotonics and nanoelectronics.

Relevance: 100.00%

Abstract:

Experimental and analytical studies were conducted to explore thermo-acoustic coupling during the onset of combustion instability in various air-breathing combustor configurations. These include a laboratory-scale 200-kW dump combustor and a 100-kW augmentor featuring a v-gutter flame holder. They were used to simulate main combustion chambers and afterburners in aero engines, respectively. The three primary themes of this work include: 1) modeling heat release fluctuations for stability analysis, 2) conducting active combustion control with alternative fuels, and 3) demonstrating practical active control for augmentor instability suppression. The phenomenon of combustion instability remains an unsolved problem in propulsion engines, mainly because of the difficulty in predicting the fluctuating component of heat release without extensive testing. A hybrid model was developed to describe both the temporal and spatial variations in dynamic heat release, using a separation-of-variables approach that requires only a limited amount of experimental data. The use of sinusoidal basis functions further reduced the amount of data required. When the mean heat release behavior is known, the only experimental data needed for detailed stability analysis is one instantaneous picture of heat release at the peak pressure phase. This model was successfully tested in the dump combustor experiments, reproducing the correct sign of the overall Rayleigh index as well as a remarkably accurate spatial distribution pattern of the fluctuating heat release. Active combustion control was explored for fuel-flexible combustor operation using twelve different jet fuels, including bio-synthetic and Fischer-Tropsch types. Analysis done using an actuated spray combustion model revealed that the combustion response times of these fuels were similar. Combined with experimental spray characterizations, this suggested that controller performance should remain effective with various alternative fuels. Active control experiments validated this analysis while demonstrating a 50-70% reduction in the peak spectral amplitude. A new model augmentor was built and tested for combustion dynamics using schlieren and chemiluminescence techniques. Novel active control techniques, including pulsed air injection, were implemented and the results were compared with the pulsed fuel injection approach. The pulsed injection of secondary air worked just as effectively for suppressing the augmentor instability, opening up the possibility of a more efficient actuation strategy.
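
Two ingredients of the heat-release model above can be made concrete in a few lines: the separation-of-variables form q'(x,t) ≈ Q(x) g(t) and the local Rayleigh index R(x) = (1/T) ∫ p'(t) q'(x,t) dt, whose sign shows where the heat release drives or damps the acoustic field. The signals below are synthetic placeholders, not data from the experiments.

```python
import numpy as np

# Minimal sketch: separation-of-variables heat release q'(x,t) = Q(x) g(t) and
# the local Rayleigh index R(x) = time average of p'(t) q'(x,t). All signals
# below are synthetic placeholders.

fs = 10_000.0                       # sampling rate, Hz
t = np.arange(0, 0.1, 1.0 / fs)     # 0.1 s record
f0 = 500.0                          # instability frequency, Hz

p_fluct = np.cos(2 * np.pi * f0 * t)                 # acoustic pressure p'(t)
g = np.cos(2 * np.pi * f0 * t - np.deg2rad(30.0))    # temporal heat-release factor g(t)

x = np.linspace(0.0, 0.3, 50)                        # axial positions, m
Q = np.exp(-((x - 0.1) / 0.05) ** 2)                 # spatial heat-release shape Q(x)

q_fluct = Q[:, None] * g[None, :]                    # q'(x,t) = Q(x) g(t)
rayleigh_index = (p_fluct[None, :] * q_fluct).mean(axis=1)   # R(x)

print(rayleigh_index.max())   # positive where heat release feeds the acoustic field
```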

Relevance: 100.00%

Abstract:

We present an integer-programming (IP) based nonparametric (revealed preference) testing procedure for rational consumption behavior in terms of general collective models, which include consumption externalities and public consumption. An empirical application to data drawn from the Russia Longitudinal Monitoring Survey (RLMS) demonstrates the practical usefulness of the procedure. Finally, we present extensions of the testing procedure to evaluate the goodness-of-fit of the collective model subject to testing, and to quantify and improve the power of the corresponding collective rationality tests.
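
As a simple point of reference for the nonparametric approach, the sketch below implements the standard unitary GARP consistency check on a small data set; the collective-model tests developed in the paper additionally assign consumption to individual household members, which is what turns them into integer programming problems. The toy prices and quantities are illustrative.

```python
import numpy as np

# Minimal sketch of the unitary GARP (revealed preference) consistency check.
# This is a simpler relative of the collective rationality tests in the paper.

def violates_garp(prices, quantities):
    """prices, quantities: (T observations) x (n goods) arrays."""
    p = np.asarray(prices, float)
    x = np.asarray(quantities, float)
    expenditure = p @ x.T                  # expenditure[t, s] = p_t . x_s
    own = np.diag(expenditure)             # own[t] = p_t . x_t

    # Direct revealed preference: t R0 s if bundle x_s was affordable at t.
    R = own[:, None] >= expenditure - 1e-12

    # Transitive closure (Warshall).
    T = len(own)
    for k in range(T):
        R = R | (R[:, [k]] & R[[k], :])

    # GARP violation: t R s while x_s is strictly directly revealed preferred to x_t.
    strict = own[:, None] > expenditure + 1e-12
    return bool(np.any(R & strict.T))

prices = [[1.0, 2.0], [2.0, 1.0]]
quantities = [[3.0, 1.0], [1.0, 3.0]]
print(violates_garp(prices, quantities))   # False: this toy data set is consistent
```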

Relevance: 100.00%

Abstract:

Fiji exports approximately 800 t year-1 of 'Solo Sunrise' papaya, marketed as 'Fiji Red', to international markets including New Zealand, Australia and Japan. The wet weather conditions from November to April each year result in a significant increase in fungal diseases present in Fiji papaya orchards. The two major diseases causing significant post-harvest losses are stem end rot (Phytophthora palmivora) and anthracnose (Colletotrichum spp.). The high incidence of post-harvest rots has led to increased rejection rates all along the supply chain, causing a reduction in income to farmers, exporters, importers and retailers of Fiji papaya. It has also undermined the fruit's reputation for superior quality on the market. In response to this issue, the Fiji papaya industry, led by Nature's Way Cooperative, embarked on a series of trials supported by the Australian Centre for International Agricultural Research (ACIAR) to determine the most effective and economical post-harvest control for Fiji papaya. Of all the treatments that were examined, a hot water dip treatment was selected by the industry as the most appropriate technology, given the level of control it provides, the cost effectiveness of the treatment and the fact that it is non-chemical. A commercial hot water unit that fits with the existing quarantine treatment and packing facilities has been designed, and a cost-benefit analysis for the investment has been carried out. This paper explores the research findings as well as the industry process that has led to the commercial uptake of this important technology.

Relevance: 100.00%

Abstract:

Spent hydroprocessing catalysts (HPCs) are solid wastes generated in refinery industries and typically contain various hazardous metals, such as Co, Ni, and Mo. These wastes cannot be discharged into the environment due to strict regulations and require proper treatment to remove the hazardous substances. Various options have been proposed and developed for spent catalyst treatment; however, hydrometallurgical processes are considered efficient, cost-effective and environmentally friendly methods of metal extraction, and have been widely employed for the uptake of different metals from aqueous leachates of secondary materials. Although there are a large number of studies on hazardous metal extraction from aqueous solutions of various spent catalysts, little information is available on Co, Ni, and Mo removal from spent NiMo hydroprocessing catalysts. In the current study, a solvent extraction process was applied to the spent HPC to specifically remove Co, Ni, and Mo. The spent HPC is dissolved in an acid solution and the metals are then extracted using three different extractants, two of which were amine-based and one of which was a quaternary ammonium salt. The main aim of this study was to develop a hydrometallurgical method to remove, and ultimately be able to recover, Co, Ni, and Mo from the spent HPCs produced at the petrochemical plant in Come By Chance, Newfoundland and Labrador. The specific objectives of the study were: (1) characterization of the spent catalyst and the acidic leachate; (2) identification of the most efficient leaching agent to dissolve the metals from the spent catalyst; (3) development of a solvent extraction procedure using the amine-based extractants Alamine308 and Alamine336 and the quaternary ammonium salt Aliquat336 in toluene to remove Co, Ni, and Mo from the spent catalyst; (4) selection of the best reagent for Co, Ni, and Mo extraction based on the required contact time, the required extractant concentration, and the organic:aqueous ratio; and (5) evaluation of the extraction conditions and optimization of the metal extraction process using the Design Expert® software. A Central Composite Design (CCD) method was applied as the main method to design the experiments, evaluate the effect of each parameter, provide a statistical model, and optimize the extraction process. Three parameters were considered as the most significant factors affecting the process efficiency: (i) extractant concentration, (ii) organic:aqueous ratio, and (iii) contact time. Metal extraction efficiencies were calculated based on ICP analysis of the leachates before and after extraction, and the process optimization was conducted with the aid of the Design Expert® software. The results showed that Alamine308 is the most effective and suitable extractant for the spent HPC examined in this study, being capable of removing all three metals to the greatest extent. Aliquat336 was found to be less effective, especially for Ni extraction; however, it is able to separate all of these metals within the first 10 min, unlike Alamine336, which required more than 35 min to do so. Based on the results of this study, a cost-effective and environmentally friendly solvent extraction process was achieved to remove Co, Ni, and Mo from the spent HPCs in a short amount of time and with a low required extractant concentration. This method can also be tested and implemented for other hazardous metals from other secondary materials.
Further investigation may be required; however, the results of this study can be a guide for future research on similar metal extraction processes.
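
A minimal sketch of the design-of-experiments step is given below: it builds a rotatable three-factor central composite design in coded units (for extractant concentration, organic:aqueous ratio and contact time) and fits a full quadratic response surface by least squares. The simulated efficiencies are placeholders rather than results from the study, and Design Expert® is replaced by plain numpy.

```python
import itertools
import numpy as np

# Minimal sketch of the Central Composite Design / response-surface workflow
# described in the abstract, in coded units. The "measured" efficiencies are
# simulated placeholders, not data from the study.

alpha = 1.682                                    # rotatable CCD for 3 factors
factorial = np.array(list(itertools.product([-1, 1], repeat=3)), float)
axial = np.vstack([v * np.eye(3)[i] for i in range(3) for v in (-alpha, alpha)])
center = np.zeros((6, 3))
X = np.vstack([factorial, axial, center])        # 8 + 6 + 6 = 20 runs

def quadratic_terms(X):
    """Design matrix for a full quadratic model: 1, x_i, x_i^2, x_i*x_j."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(3)]
    cols += [X[:, i] ** 2 for i in range(3)]
    cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i + 1, 3)]
    return np.column_stack(cols)

# Simulated extraction efficiencies (%), standing in for ICP-derived values.
rng = np.random.default_rng(0)
y = 80 + 5 * X[:, 0] + 3 * X[:, 1] - 4 * X[:, 0] ** 2 + rng.normal(0, 0.5, len(X))

beta, *_ = np.linalg.lstsq(quadratic_terms(X), y, rcond=None)
print(np.round(beta, 2))    # fitted intercept, linear, quadratic and interaction terms
```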

Relevance: 100.00%

Abstract:

The LISA Pathfinder mission will demonstrate the technology of drag-free test masses for use as inertial references in future space-based gravitational wave detectors. To accomplish this, the Pathfinder spacecraft will perform drag-free flight about a test mass while measuring the acceleration of this primary test mass relative to a second reference test mass. Because the reference test mass is contained within the same spacecraft, it is necessary to apply forces on it to maintain its position and attitude relative to the spacecraft. These forces are a potential source of acceleration noise in the LISA Pathfinder system that is not present in the full LISA configuration. While LISA Pathfinder has been designed to meet its primary mission requirements in the presence of this noise, recent estimates suggest that the on-orbit performance may be limited by this 'suspension noise'. The drift-mode or free-flight experiments provide an opportunity to mitigate this noise source and to further characterize the underlying disturbances that are of interest to the designers of LISA-like instruments. This article provides a high-level overview of these experiments and the methods under development to analyze the resulting data.
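
One basic step in characterizing the suspension noise from such data is an amplitude spectral density estimate of the measured differential acceleration; the sketch below does this with a Welch periodogram on synthetic noise. Real drift-mode data add the harder problems of segmented records and kick transients, which the article's analysis methods address; all numbers here are illustrative.

```python
import numpy as np
from scipy import signal

# Minimal sketch: amplitude spectral density of a differential-acceleration
# time series via a Welch periodogram. The series is synthetic white noise
# plus a slow drift, standing in for real telemetry.

fs = 10.0                                    # sampling rate, Hz
n = 200_000
rng = np.random.default_rng(1)
white = rng.normal(0.0, 3e-14, n)            # illustrative noise scale
accel = white + np.cumsum(rng.normal(0.0, 1e-17, n))   # add a slow drift component

f, psd = signal.welch(accel, fs=fs, nperseg=2 ** 14)
asd = np.sqrt(psd)                           # amplitude spectral density, m s^-2 Hz^-1/2

band = (f > 1e-3) & (f < 3e-2)
print(asd[band].mean())                      # rough in-band noise level
```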