950 results for Process parameters
Abstract:
This work presents a 1-D process-scale model used to investigate the chemical dynamics and temporal variability of nitrogen oxides (NOx) and ozone (O3) within and above the snowpack at Summit, Greenland for March-May 2009, and estimates the surface exchange of NOx between the snowpack and the surface layer in April-May 2009. The model assumes that the surfaces of snowflakes have a Liquid Like Layer (LLL) where aqueous chemistry occurs and interacts with the interstitial air of the snowpack. Model parameters and initialization are physically and chemically representative of the snowpack at Summit, Greenland, and model results are compared to measurements of NOx and O3 collected by our group at Summit, Greenland from 2008-2010. The model, paired with measurements, confirmed the main hypothesis in the literature: that photolysis of nitrate on the surface of snowflakes is responsible for nitrogen dioxide (NO2) production in the top ~50 cm of the snowpack at solar noon for March-May 2009. Nighttime peaks of NO2 in the snowpack for April and May were reproduced with aqueous formation of peroxynitric acid (HNO4) in the top ~50 cm of the snowpack, with subsequent mass transfer to the gas phase, decomposition to form NO2 at nighttime, and transport of the NO2 to depths of 2 m. Modeled production of HNO4 was hindered in March 2009 by the low production of its precursor, the hydroperoxy radical, resulting in an underestimation of nighttime NO2 in the snowpack for March 2009. The aqueous reaction of O3 with formic acid was the major sink of O3 in the snowpack for March-May 2009. Nitrogen monoxide (NO) production in the top ~50 cm of the snowpack is driven by the photolysis of NO2; this mechanism underrepresents NO in May 2009. Modeled surface exchange of NOx in April and May is on the order of 10^11 molecules m^-2 s^-1. Removing downward fluxes of NO and NO2 from the measured fluxes resulted in agreement between measured NOx fluxes and modeled surface exchange in April, and an order-of-magnitude deviation in May. Modeled transport of NOx above the snowpack in May shows an order-of-magnitude increase of NOx fluxes in the first 50 cm of the snowpack, attributed to the daytime production of NO2 from the thermal decomposition and photolysis of peroxynitric acid, with minor contributions of NO from HONO photolysis in the early morning.
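As a rough illustration of the daytime source term described above, the toy Python sketch below integrates first-order nitrate photolysis producing NO2 with a diurnally varying photolysis rate. All rate constants and concentrations are invented for illustration and are not the model's parameters.

```python
import numpy as np

# Toy sketch: first-order nitrate photolysis producing NO2 in near-surface
# snow, with a diurnally varying photolysis rate J(t). Values are invented.
J_MAX = 1e-7          # s^-1, assumed noon photolysis rate of NO3-
NO3 = 5e12            # molecules cm^-3, assumed nitrate available on the LLL

def J(t_hours):
    """Zero at night, peaking at solar noon (hour 12)."""
    return J_MAX * max(0.0, np.sin(np.pi * (t_hours - 6.0) / 12.0))

no2, dt = 0.0, 60.0   # molecules cm^-3, 60 s time step
loss = 1e-4           # s^-1, lumped NO2 loss (photolysis + ventilation), assumed
for step in range(int(24 * 3600 / dt)):
    t_hours = step * dt / 3600.0
    no2 += (J(t_hours) * NO3 - loss * no2) * dt
print(f"NO2 at end of day ~ {no2:.2e} molecules cm^-3")
```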
Abstract:
In the context of expensive numerical experiments, a promising solution for alleviating the computational costs consists of using partially converged simulations instead of exact solutions. The gain in computational time is at the price of precision in the response. This work addresses the issue of fitting a Gaussian process model to partially converged simulation data for further use in prediction. The main challenge consists of the adequate approximation of the error due to partial convergence, which is correlated in both design variables and time directions. Here, we propose fitting a Gaussian process in the joint space of design parameters and computational time. The model is constructed by building a nonstationary covariance kernel that reflects accurately the actual structure of the error. Practical solutions are proposed for solving parameter estimation issues associated with the proposed model. The method is applied to a computational fluid dynamics test case and shows significant improvement in prediction compared to a classical kriging model.
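To make the idea concrete, here is a minimal Python sketch of a Gaussian process over the joint (design variable, computational time) space, using an assumed nonstationary kernel whose error variance decays as the simulation converges; the paper's actual kernel and estimation procedure are more elaborate.

```python
import numpy as np

# Illustrative sketch (not the paper's exact kernel): a GP over the joint
# (design variable x, computational time t) space. The partial-convergence
# error is modeled by a nonstationary term whose variance decays with t.

def joint_kernel(X1, X2, theta_x=0.3, sigma2_f=1.0, sigma2_e=0.5, rate=1.0):
    """k((x,t),(x',t')) = stationary part in x + decaying convergence-error part."""
    x1, t1 = X1[:, :1], X1[:, 1:]
    x2, t2 = X2[:, :1], X2[:, 1:]
    # Stationary squared-exponential kernel on the design variable
    k_design = sigma2_f * np.exp(-0.5 * (x1 - x2.T) ** 2 / theta_x**2)
    # Nonstationary error kernel: amplitude decays with computational time,
    # so nearly converged runs (large t) carry almost no convergence error.
    amp = np.exp(-rate * t1) @ np.exp(-rate * t2).T
    k_error = sigma2_e * amp * np.exp(-0.5 * (x1 - x2.T) ** 2 / theta_x**2)
    return k_design + k_error

# Training data: (x, t) pairs and partially converged responses y (synthetic)
X = np.array([[0.1, 1.0], [0.1, 4.0], [0.5, 2.0], [0.9, 1.0], [0.9, 5.0]])
y = np.sin(3 * X[:, 0]) + 0.3 * np.exp(-X[:, 1])

# Predict the fully converged response (large t) on a grid of designs
Xs = np.column_stack([np.linspace(0, 1, 50), np.full(50, 10.0)])
K = joint_kernel(X, X) + 1e-8 * np.eye(len(X))
Ks = joint_kernel(Xs, X)
mu = Ks @ np.linalg.solve(K, y)
print(mu[:5])
```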
The impact of common versus separate estimation of orbit parameters on GRACE gravity field solutions
Abstract:
Gravity field parameters are usually determined from observations of the GRACE satellite mission together with arc-specific parameters in a generalized orbit determination process. When the estimation of gravity field parameters is separated from the determination of the satellites' orbits, correlations between orbit parameters and gravity field coefficients are ignored and the latter parameters are biased towards the a priori force model. We are thus confronted with a kind of hidden regularization. To decipher the underlying mechanisms, the Celestial Mechanics Approach is complemented by tools to modify the impact of the pseudo-stochastic arc-specific parameters at the normal equation level and to efficiently generate ensembles of solutions. By introducing a time-variable a priori model and solving for hourly pseudo-stochastic accelerations, a significant reduction of noisy striping in the monthly solutions can be achieved. Setting up more frequent pseudo-stochastic parameters results in a further reduction of the noise, but also in a notable damping of the observed geophysical signals. To quantify the effect of the a priori model on the monthly solutions, the process of fixing the orbit parameters is replaced by an equivalent introduction of special pseudo-observations, i.e., by explicit regularization. The contribution of the a priori information introduced in this way is determined by a contribution analysis. The presented mechanism is universally valid: it may be used to separate any subset of parameters by pseudo-observations of a special design and to quantify the damage imposed on the solution.
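The following minimal Python sketch illustrates the general idea of replacing fixed parameters by weighted pseudo-observations and quantifying their share via a contribution analysis; the design matrix, weights, and contribution measure are simplified stand-ins, not the Celestial Mechanics Approach itself.

```python
import numpy as np

# Sketch of "fixing parameters = explicit regularization": append
# pseudo-observations x_i = 0 with weight w to the normal equations, then
# measure their contribution to each estimate. Data are synthetic.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 5))          # toy design matrix
y = A @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(30)

w = 100.0                                  # weight of the pseudo-observations
N = A.T @ A + w * np.eye(5)                # regularized normal matrix
x_hat = np.linalg.solve(N, A.T @ y)

# Share of each parameter estimate that stems from the a priori information:
# w / (eigenvalue + w) per eigendirection, summarized on the diagonal.
contrib = w * np.diag(np.linalg.inv(N))    # values in (0, 1]
print(np.round(x_hat, 3), np.round(contrib, 3))
```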
Abstract:
Charcoal analysis was conducted on sediment cores from three lakes to assess the relationship between the area and number of charcoal particles. Three charcoal-size parameters (maximum breadth, maximum length and area) were measured on sediment samples representing various vegetation types, including shrub tundra, boreal forest and temperate forest. These parameters and charcoal size-class distributions do not differ statistically between two sites where the same preparation technique (glycerine pollen slides) was used, but they differ for the same core when different techniques were applied. Results suggest that differences in charcoal size and size-class distribution are mainly caused by different preparation techniques and are not related to vegetation-type variation. At all three sites, the area and number concentrations of charcoal particles are highly correlated in standard pollen slides; 82–83% of the variability of the charcoal-area concentration can be explained by the particle-number concentration. Comparisons between predicted and measured area concentrations show that regression equations linking charcoal number and area concentrations can be used across sites as long as the same pollen-preparation technique is used. Thus it is concluded that it is unnecessary to measure charcoal areas in standard pollen slides – a time-consuming and tedious process.
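For illustration, here is a minimal Python sketch of the kind of regression involved, with invented concentrations; the reported 82-83% corresponds to the R^2 of such a fit.

```python
import numpy as np

# Hypothetical illustration of the reported relationship: charcoal-area
# concentration regressed on particle-number concentration. All numbers
# below are invented, not the paper's data.
number_conc = np.array([120., 340., 560., 800., 1020., 1500.])  # particles/cm^3
area_conc = np.array([0.9, 2.4, 4.1, 5.8, 7.0, 11.2])           # mm^2/cm^3

slope, intercept = np.polyfit(number_conc, area_conc, 1)
predicted = slope * number_conc + intercept
ss_res = np.sum((area_conc - predicted) ** 2)
ss_tot = np.sum((area_conc - area_conc.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"area ~ {slope:.4f} * number + {intercept:.3f}, R^2 = {r2:.2f}")
```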
Abstract:
Analysis of recurrent events has been widely discussed in the medical, health services, insurance, and engineering areas in recent years. This research proposes a nonhomogeneous Yule process with the proportional intensity assumption to model the hazard function of recurrent events data and the associated risk factors. The method assumes that repeated events occur for each individual, with given covariates, according to a nonhomogeneous Yule process with intensity function λ_x(t) = λ_0(t) · exp(x′β). One advantage of using a nonhomogeneous Yule process for recurrent events is that it assumes the recurrence rate is proportional to the number of events that have occurred up to time t. Maximum likelihood estimation is used to provide estimates of the parameters in the model, and a generalized scoring iterative procedure is applied in the numerical computation. Model comparisons between the proposed method and other existing recurrent-event models are addressed by simulation. One example, comparing recurrent myocardial infarction events between two distinct populations, Mexican-Americans and Non-Hispanic Whites, in the Corpus Christi Heart Project, is examined.
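Below is a minimal Python sketch of this intensity structure, simulating event times by Ogata thinning; the baseline intensity, covariate effects, and the convention that the rate scales with 1 + N(t-) are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed form of the Yule (linear birth) intensity: rate proportional to
# the number of events so far, scaled by covariates via exp(x'beta).
def lam0(t):
    return 0.5 + 0.1 * t          # hypothetical baseline intensity

def intensity(t, n_events, x, beta):
    return (1 + n_events) * lam0(t) * np.exp(x @ beta)

def simulate(x, beta, t_max=10.0):
    """Ogata thinning: simulate event times on [0, t_max]."""
    t, events = 0.0, []
    while t < t_max:
        # Upper bound: intensity is increasing in t for a fixed event count
        lam_bar = intensity(t_max, len(events), x, beta)
        t += rng.exponential(1.0 / lam_bar)
        if t < t_max and rng.random() < intensity(t, len(events), x, beta) / lam_bar:
            events.append(t)
    return events

beta = np.array([0.4, -0.2])       # hypothetical covariate effects
print(simulate(np.array([1.0, 0.0]), beta))
```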
Abstract:
This study investigates a theoretical model in which a longitudinal process, a stationary Markov chain, and a Weibull survival process share a bivariate random effect. Furthermore, a quality-of-life-adjusted survival is calculated as the weighted sum of survival time. Theoretical values of the population mean adjusted survival of the described model are computed numerically. The parameters of the bivariate random effect significantly affect the theoretical values of the population mean. Maximum likelihood and Bayesian methods are applied to simulated data to estimate the model parameters. Based on the parameter estimates, the predicted population mean adjusted survival can then be calculated numerically and compared with the theoretical values. The Bayesian and maximum likelihood methods provide parameter estimates and population mean predictions with comparable accuracy; however, the Bayesian method suffers from poor convergence due to autocorrelation and inter-variable correlation.
Abstract:
A general model for the illness-death stochastic process with covariates has been developed for the analysis of survival data. This model incorporates important baseline and time-dependent covariates to properly adjust the transition probabilities and survival probabilities. The follow-up period is subdivided into small intervals and a constant hazard is assumed for each interval. An approximation formula is derived to estimate the transition parameters when the exact transition time is unknown. The method developed is illustrated using data from a study on the prevention of the recurrence of myocardial infarction and subsequent mortality, the Beta-Blocker Heart Attack Trial (BHAT). This method provides an analytical approach that simultaneously includes provision for both fatal and nonfatal events in the model. According to this analysis, the effectiveness of the treatment can be compared between the placebo and propranolol treatment groups with respect to fatal and nonfatal events.
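The constant-hazard-per-interval assumption leads to simple closed-form interval transition probabilities. The Python sketch below shows the standard competing-risks approximation with invented hazard values; it neglects two-step transitions (e.g., healthy to ill to dead) within a single interval.

```python
import numpy as np

# Interval-wise transition probabilities under piecewise-constant hazards.
# States: 0 = healthy, 1 = ill, 2 = dead; hazard values are hypothetical.
lam01, lam02, lam12 = 0.10, 0.02, 0.25   # per unit time, constant in the interval
dt = 0.5                                  # interval length

lam0_total = lam01 + lam02
# Probability of leaving state 0 during the interval, split between
# destinations in proportion to their cause-specific hazards
p_leave0 = 1.0 - np.exp(-lam0_total * dt)
p01 = (lam01 / lam0_total) * p_leave0
p02 = (lam02 / lam0_total) * p_leave0
p00 = np.exp(-lam0_total * dt)
p12 = 1.0 - np.exp(-lam12 * dt)

print(f"P(0->0)={p00:.3f}  P(0->1)={p01:.3f}  P(0->2)={p02:.3f}  P(1->2)={p12:.3f}")
```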
Abstract:
On-orbit exposures can come from numerous factors related to the space environment, as evidenced by almost 50 years of environmental samples collected for water analysis, air analysis, radiation analysis, and physiological parameters. For astronauts and spaceflight participants, occupational exposures can be very different from those experienced by workers performing similar tasks in workplaces on Earth, because the exposure could be continuous for very long orbital, and eventually interplanetary, missions. The establishment of long-term exposure standards is vital to controlling the quality of the spacecraft environment over long periods. NASA often needs to update and revise its prior exposure standards (Spacecraft Maximum Allowable Concentrations, or SMACs). Traditional standards-setting processes are often lengthy, so a more rapid method to review and establish standards would be a substantial advancement in this area. This project investigates the use of the Delphi method for this purpose. To achieve the objectives of this study, a modified Delphi methodology was tested in three trials executed by doctoral students and a panel of experts in disciplines related to occupational safety and health. During each trial, modifications were made to the methodology. Prior to submission of the Delphi questionnaire to the panel of experts, a pilot trial was conducted using five doctoral students, with the goals of testing and adjusting the Delphi questionnaire to improve comprehension, working out any procedural issues, and evaluating the effectiveness of the questionnaire in drawing the desired responses. The remainder of the study consisted of two trials of the modified Delphi process using six chemicals that currently have the potential of causing occupational exposures to NASA astronauts or spaceflight participants. To assist in setting Occupational Exposure Limits (OELs), an expert panel was established consisting of experts from academia, government, and industry. Evidence was collected and used to create closed-ended questionnaires, which were submitted to the Delphi panel of experts for the establishment of OEL values for three chemicals from the list of six originally selected (trial 1). Once the first Delphi trial was completed, adjustments were made to the Delphi questionnaires and the process above was repeated with the remaining three chemicals (trial 2). Results indicate that experience in occupational safety and health and with OEL methodologies can have a positive effect in minimizing the time experts take to complete this process. Based on the results of the questionnaires and a comparison of the results with the SMACs already established by NASA, we conclude that the Delphi methodology is appropriate for use in the decision-making process for the selection of OELs.
Abstract:
Ocean Drilling Program (ODP) Sites 832 and 833 were drilled in the intra-arc North Aoba Basin of the New Hebrides Island Arc (Vanuatu). High volcanic influxes in the intra-arc basin sediment, resulting from erosion of volcanic rocks from nearby islands and from volcanic activity, are associated with characteristic magnetic signals. The high magnetic susceptibility of the sediment (varying on average from 0.005 to more than 0.03 SI) is one of the most characteristic physical properties of this sedimentary depositional environment because of the high concentration of magnetite in redeposited ash flows and in coarse-grained turbidites. Susceptibility data correlate well with the high-resolution electrical resistivity logs recorded by the formation microscanner (FMS) tool. Unlike the standard geophysical logs, which have low vertical resolution and therefore smooth the record of the sedimentary process, the FMS and whole-core susceptibility data provide a clearer picture of turbiditic sediment deposition. Measurements of Curie temperatures and low-temperature susceptibility behavior indicate that the principal magnetic mineral in ash beds, silt, and volcanic sandstone is Ti-poor titanomagnetite, whereas Ti-rich titanomagnetites are found in the intrusive sills at the bottom of Site 833. Apart from an increase in the concentration of magnetite in the sandstone layer, acquisition of isothermal and anhysteretic remanences does not show significant differences between sandstone and clayey silts. The determination of the anisotropy of magnetic susceptibility (AMS) in more than 400 samples shows that clayey siltstones have a magnetic anisotropy of up to 15%, whereas the AMS is much reduced in sandstone layers. The magnetic susceptibility fabric is dominated by the foliation plane, which is coplanar with the bedding plane. Reorientation of the samples using characteristic remanent magnetizations indicates that the bedding planes dip about 10° toward the east, in agreement with results from FMS images. Basaltic sills drilled at Site 833 have high magnetic susceptibilities (0.05 to 0.1 SI) and strong remanent magnetizations. Magnetic field anomalies up to 50 µT were measured in the sills by the general purpose inclinometer tool (GPIT). The direction of the in-situ magnetic anomaly vectors, calculated from the GPIT, is oriented toward the southeast with shallow inclinations, suggesting that the sills intruded during a reversed polarity period.
Abstract:
A large fraction of the carbon dioxide added to the atmosphere by human activity enters the sea, causing ocean acidification. We show that otoliths (aragonite ear bones) of young fish grown under high CO2 (low pH) conditions are larger than normal, contrary to expectation. We hypothesize that CO2 moves freely through the epithelium around the otoliths in young fish, accelerating otolith growth while the local pH is controlled. This is the converse of the effect commonly reported for structural biominerals.
Abstract:
The solubility parameters of two commercial SBS rubbers with different structures (linear and radial) and slightly different styrene contents have been determined by the inverse gas chromatography technique. The Flory-Huggins interaction parameters of several polymer-solvent mixtures have also been calculated. The influence of the polymer composition, the solvent molecular weight, and the temperature on these parameters is discussed; in addition, these parameters have been compared with previous ones obtained by intrinsic viscosity measurements. From the Flory-Huggins interaction parameters, the infinite-dilution activity coefficients of the solvents have been calculated and fitted to the well-known NRTL model. These NRTL binary interaction parameters are of great importance in modelling the separation steps in the process of obtaining the rubber.
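For reference, here is a minimal Python sketch of the Flory-Huggins solvent-activity expression from which such activity coefficients follow; the interaction parameter and chain length below are invented, not the measured values.

```python
import numpy as np

# Flory-Huggins solvent activity for a polymer of r segments:
# ln a1 = ln(phi1) + (1 - 1/r)*phi2 + chi*phi2^2
def solvent_activity(phi1, chi, r=1000.0):
    phi2 = 1.0 - phi1
    return np.exp(np.log(phi1) + (1.0 - 1.0 / r) * phi2 + chi * phi2**2)

chi = 0.45                        # hypothetical Flory-Huggins parameter
phi1 = np.array([1e-4, 0.01, 0.1, 0.5])   # solvent volume fractions
print(solvent_activity(phi1, chi))
```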
Abstract:
To optimize the last high-temperature step of a standard solar cell fabrication process (the contact co-firing step), aluminium gettering is incorporated into the Impurity-to-Efficiency simulation tool, so that it models the phosphorus and aluminium co-gettering effect on iron impurities. The impact of iron on cell efficiency depends on the balance between precipitate dissolution and gettering. Gettering efficiency is similar over a wide range of peak temperatures (600-850 °C), so this peak temperature can be optimized in favor of other parameters (e.g., ohmic contact). An industrial co-firing step can enhance the co-gettering effect by adding a temperature plateau after the temperature peak. For highly contaminated materials, a short plateau (less than 2 min) at low temperature (600 °C) is shown to reduce the dissolved iron.
Abstract:
Polysilicon cost has a significant impact on the cost of photovoltaics (PV) and on the energy payback time. Nowadays, the prevailing production process is the so-called Siemens process: polysilicon deposition by chemical vapor deposition (CVD) from trichlorosilane. The polysilicon purification level required for PV is, to a certain extent, less demanding than that for microelectronics. At the Instituto de Energía Solar (IES), research on this subject is performed using a Siemens-type laboratory CVD reactor, which provides valuable information about the phenomena involved in the polysilicon deposition process and its operating conditions. Polysilicon deposition by CVD is a complex process due to the large number of parameters involved. A study of the influence of temperature and inlet gas mixture composition on the polysilicon deposition growth rate, based on experimental results, is presented. Moreover, the CVD process accounts for the largest contribution to the energy consumption of polysilicon production, and radiative heat loss is chiefly responsible for the low energy efficiency of the whole process. This work presents a model of radiation heat loss, and the theoretical calculations are confirmed experimentally with the prototype reactor at our disposal, yielding valuable know-how for reducing energy consumption in industrial Siemens reactors.
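A back-of-the-envelope Python sketch of the dominant radiative loss term, using the grey-body Stefan-Boltzmann law; the geometry, emissivity, and temperatures below are assumptions for illustration, not the paper's model.

```python
import numpy as np

# Net radiated power of a hot silicon rod to cooled reactor walls.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_loss(T_rod, T_wall, diameter, length, emissivity=0.7):
    """Grey-body radiative loss of a cylindrical rod, all values assumed."""
    area = np.pi * diameter * length
    return emissivity * SIGMA * area * (T_rod**4 - T_wall**4)

# Typical deposition temperature ~1100 C (1373 K), cooled wall ~100 C (373 K)
P = radiative_loss(T_rod=1373.0, T_wall=373.0, diameter=0.05, length=2.0)
print(f"Radiative loss ~ {P/1000:.1f} kW per rod")
```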
Abstract:
Computing the modal parameters of structural systems often requires processing data from multiple non-simultaneously recorded setups of sensors. These setups share some sensors in common, the so-called reference sensors, which are fixed for all measurements, while the other sensors change position from one setup to the next. One possibility is to process the setups separately, resulting in different modal parameter estimates for each setup. The reference sensors are then used to merge, or glue, the different parts of the mode shapes to obtain global mode shapes, while the natural frequencies and damping ratios are usually averaged. In this paper we present a new state space model that processes all setups at once. As a result, the global mode shapes are obtained automatically, and a single value for the natural frequency and damping ratio of each mode is estimated. We also investigate the estimation of this model using maximum likelihood and the Expectation Maximization algorithm, and apply the technique to simulated and measured data corresponding to different structures.
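For contrast, here is a minimal Python sketch of the classical gluing step that the proposed state space model replaces: each setup's partial mode shape is rescaled through the shared reference sensors by least squares and merged into a global shape. All data are synthetic and the setup layout is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

true_shape = np.sin(np.linspace(0, np.pi, 8))      # global mode shape, 8 DOFs
ref = [0, 1]                                        # reference sensor indices
setups = [[0, 1, 2, 3, 4], [0, 1, 5, 6, 7]]         # sensors measured per setup

global_shape = np.full(8, np.nan)
base = None
for dofs in setups:
    # Each setup yields the shape at its sensors, with an arbitrary scale
    partial = (true_shape[dofs] * rng.uniform(0.5, 2.0)
               + 0.01 * rng.standard_normal(len(dofs)))
    if base is None:
        base = partial[:len(ref)]                   # first setup fixes the scale
        scale = 1.0
    else:
        # Least-squares factor matching this setup's references to the base
        p_ref = partial[:len(ref)]
        scale = (base @ p_ref) / (p_ref @ p_ref)
    global_shape[dofs] = partial * scale

print(np.round(global_shape, 3))
```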
Abstract:
Optical hyperthermia systems based on laser irradiation of gold nanorods seem to be a promising tool in the development of therapies against cancer. After a proof of concept in which the authors demonstrated the efficiency of this kind of system, a modeling process based on an equivalent thermal-electric circuit has been carried out to determine the thermal parameters of the system, and an energy balance has been obtained from the time-dependent heating and cooling temperature curves of the irradiated samples in order to derive the photothermal transduction efficiency. Knowing this parameter makes it possible to increase the effectiveness of the treatments, since the response of the device can be predicted for a given working configuration. As an example, the thermal behavior of two different kinds of nanoparticles is compared. The results show that, under identical conditions, PEGylated gold nanorods allow more efficient heating than bare nanorods, and therefore result in a more effective therapy.
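A hedged Python sketch of this kind of lumped-parameter analysis: fit the cooling curve to an exponential to obtain the time constant, derive the heat-loss coefficient, and form an energy balance. All values and the simplified efficiency expression are illustrative, not the authors' exact calibration.

```python
import numpy as np

# Lumped (equivalent-circuit) analysis of a heating/cooling experiment.
m_c = 1.0 * 4.186            # sample heat capacity, J/K (water-like, assumed)

# Stand-in for measured cooling data after the laser is switched off
t = np.linspace(0, 600, 60)               # s
T_amb, dT0, tau_true = 25.0, 12.0, 180.0
T = T_amb + dT0 * np.exp(-t / tau_true)   # synthetic temperatures

# Linearized exponential fit: ln(T - T_amb) = ln(dT0) - t/tau
slope, _ = np.polyfit(t, np.log(T - T_amb), 1)
tau = -1.0 / slope
hA = m_c / tau                            # W/K, lumped heat-loss coefficient

# Energy balance at steady state under irradiation (assumed numbers)
P_laser = 0.5                             # W incident
dT_max = 12.0                             # K steady-state temperature rise
eta = hA * dT_max / P_laser               # fraction of laser power turned to heat
print(f"tau={tau:.0f} s, hA={hA:.4f} W/K, efficiency ~ {eta:.2f}")
```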