904 results for "simultaneous monitoring of process mean and variance"
Abstract:
The reduction of indigo (dispersed in water) to leuco-indigo (dissolved in water) is an important industrial process and is investigated here for the case of glucose as an environmentally benign reducing agent. In order to quantitatively follow the formation of leuco-indigo, two approaches based on (i) rotating disk voltammetry and (ii) sonovoltammetry are developed. Leuco-indigo, once formed in alkaline solution, is readily monitored at a glassy carbon electrode in the mass transport limit employing hydrodynamic voltammetry. The presence of power ultrasound further improves the leuco-indigo determination due to additional agitation and homogenization effects. While inactive at room temperature, glucose readily reduces indigo in alkaline media at 65 °C. In the presence of excess glucose, a surface dissolution kinetics limited process is proposed following the rate law dn(leuco-indigo)/dt = k · c(OH⁻) · S(indigo), where n(leuco-indigo) is the amount of leuco-indigo formed, k = 4.1 × 10⁻⁹ m s⁻¹ (at 65 °C, assuming spherical particles of 1 μm diameter) is the heterogeneous dissolution rate constant, c(OH⁻) is the concentration of hydroxide, and S(indigo) is the reactive surface area. The activation energy for this process in aqueous 0.2 M NaOH is E_A = 64 kJ mol⁻¹, consistent with a considerable temperature effect. The redox mediator 1,8-dihydroxyanthraquinone is shown to significantly enhance the reaction rate by catalysing the electron transfer between glucose and solid indigo particles. (c) 2006 Elsevier Ltd. All rights reserved.
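The rate law admits a quick dimensional check: with k in m s⁻¹, c(OH⁻) in mol m⁻³ and S in m², their product has units of mol s⁻¹. A minimal sketch, assuming a single 1 μm spherical particle (the single-particle geometry and variable names are illustrative, not from the paper):

```python
import math

# Illustrative check of the reported rate law
k = 4.1e-9            # heterogeneous dissolution rate constant at 65 °C, m s^-1
c_oh = 0.2 * 1000     # 0.2 mol L^-1 NaOH expressed as mol m^-3
d = 1e-6              # assumed spherical indigo particle diameter, m
S = math.pi * d**2    # surface area of one spherical particle, m^2

# Rate law: dn(leuco-indigo)/dt = k * c(OH-) * S(indigo)
rate = k * c_oh * S   # mol s^-1 released by a single particle
print(f"initial dissolution rate per 1 um particle: {rate:.2e} mol/s")
```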
Abstract:
Numerical experiments are described that pertain to the climate of a coupled atmosphere–ocean–ice system in the absence of land, driven by modern-day orbital and CO2 forcing. Millennial time-scale simulations yield a mean state in which ice caps reach down to 55° of latitude and both the atmosphere and ocean comprise eastward- and westward-flowing zonal jets, whose structure is set by their respective baroclinic instabilities. Despite the zonality of the ocean, it is remarkably efficient at transporting heat meridionally through the agency of Ekman transport and eddy-driven subduction. Indeed the partition of heat transport between the atmosphere and ocean is much the same as in the present climate, with the ocean dominating in the Tropics and the atmosphere in the mid–high latitudes. Variability of the system is dominated by the coupling of annular modes in the atmosphere and ocean. Stochastic variability inherent to the atmospheric jets drives variability in the ocean. Zonal flows in the ocean exhibit decadal variability, which, remarkably, feeds back to the atmosphere, coloring the spectrum of annular variability. A simple stochastic model can capture the essence of the process. Finally, it is briefly reviewed how the aquaplanet can provide information about the processes that set the partition of heat transport and the climate of Earth.
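The "simple stochastic model" invoked above is, in spirit, a Hasselmann-type red-noise integrator: a slow component integrating fast white-noise forcing. A minimal sketch of that generic mechanism (the AR(1) form and all parameter values are assumptions for illustration, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, tau = 2000, 10.0            # record length and assumed ocean memory (years)
phi = np.exp(-1.0 / tau)             # AR(1) coefficient for annual steps

atm = rng.standard_normal(n_years)   # white-noise annular-mode forcing
ocean = np.zeros(n_years)
for t in range(1, n_years):
    ocean[t] = phi * ocean[t - 1] + atm[t]   # ocean integrates the noise

# The integrated (ocean) series is reddened: variance piles up at decadal periods
print(f"lag-10 autocorrelation: atm {np.corrcoef(atm[:-10], atm[10:])[0, 1]:+.2f}, "
      f"ocean {np.corrcoef(ocean[:-10], ocean[10:])[0, 1]:+.2f}")
```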
Abstract:
Exocyclic DNA adducts produced by exogenous and endogenous compounds are emerging as potential tools to study a variety of human diseases and air pollution exposure. A highly sensitive method involving online reverse-phase high performance liquid chromatography with electrospray tandem mass spectrometry detection in the multiple reaction monitoring mode and employing stable isotope-labeled internal standards was developed for the simultaneous quantification of 1,N²-etheno-2'-deoxyguanosine (1,N²-εdGuo) and 1,N²-propano-2'-deoxyguanosine (1,N²-propanodGuo) in DNA. This methodology permits direct online quantification of 2'-deoxyguanosine and ca. 500 amol of adducts in 100 μg of hydrolyzed DNA in the same analysis. Using the newly developed technique, accurate determinations of 1,N²-εdGuo and 1,N²-propanodGuo levels in DNA extracts of human cultured cells (4.01 ± 0.32 1,N²-εdGuo/10⁸ dGuo and 3.43 ± 0.33 1,N²-propanodGuo/10⁸ dGuo) and rat tissue (liver, 2.47 ± 0.61 1,N²-εdGuo/10⁸ dGuo and 4.61 ± 0.69 1,N²-propanodGuo/10⁸ dGuo; brain, 2.96 ± 1.43 1,N²-εdGuo/10⁸ dGuo and 5.66 ± 3.70 1,N²-propanodGuo/10⁸ dGuo; and lung, 0.87 ± 0.34 1,N²-εdGuo/10⁸ dGuo and 2.25 ± 1.72 1,N²-propanodGuo/10⁸ dGuo) were performed. The method described herein can be used to study the biological significance of exocyclic DNA adducts through the quantification of different adducts in humans and experimental animals with pathological conditions and after air pollution exposure.
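Quantification with a stable isotope-labeled internal standard reduces to a peak-area ratio against the known spike, then normalization to the co-measured dGuo. A minimal sketch of that arithmetic (all numbers are illustrative, not values from the study):

```python
# Isotope-dilution arithmetic behind MRM quantification
area_adduct = 1.2e4        # MRM peak area of the unlabeled adduct
area_labeled = 2.4e4       # peak area of the isotope-labeled internal standard
spike = 1000.0             # amol of labeled standard added to the DNA digest

amount_adduct = (area_adduct / area_labeled) * spike   # -> 500 amol
amount_dguo = 1.2e11       # amol of 2'-deoxyguanosine quantified online

level = amount_adduct / (amount_dguo / 1e8)   # adducts per 10^8 dGuo
print(f"{level:.2f} adducts per 10^8 dGuo")
```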
Abstract:
Vegetation growing on railway trackbeds and embankments presents potential problems. The presence of vegetation threatens the safety of personnel inspecting the railway infrastructure. In addition, vegetation growth clogs the ballast and results in inadequate track drainage, which in turn could lead to the collapse of the railway embankment. Assessing vegetation within the realm of railway maintenance is mainly carried out manually by making visual inspections along the track. This is done either on-site or by watching videos recorded by maintenance vehicles mainly operated by the national railway administrative body. A need for the automated detection and characterisation of vegetation on railways (a subset of vegetation control/management) has been identified in collaboration with local railway maintenance subcontractors and Trafikverket, the Swedish Transport Administration (STA). The latter is responsible for long-term planning of the transport system for all types of traffic, as well as for the building, operation and maintenance of public roads and railways. The purpose of this research project was to investigate how vegetation can be measured and quantified by human raters and how machine vision can automate the same process. Data were acquired at railway trackbeds and embankments during field measurement experiments. All field data (such as images) in this thesis work were acquired on operational, lightly trafficked railway tracks, mostly used by goods trains. Data were also generated by letting human raters conduct visual estimates of plant cover and/or count the number of plants, either on-site or in-house by making visual estimates of the images acquired from the field experiments. Later, the reliability of the human raters' visual estimates was investigated and compared against machine vision algorithms. The overall results of the investigations involving human raters showed inconsistency in their estimates, which are therefore unreliable. As a result of the exploration of machine vision, computational methods and algorithms enabling automatic detection and characterisation of vegetation along railways were developed. The results achieved in the current work have shown that the use of image data for detecting vegetation is indeed possible and that such results could form the base for decisions regarding vegetation control. The machine vision algorithm which quantifies the vegetation cover was able to process 98% of the image data. Investigations of classifying plants from images were conducted in order to recognise the species; the classification accuracy was 95%. Objective measurements such as the ones proposed in this thesis offer easy access to the measurements for all the involved parties and make the subcontracting process easier, i.e., both the subcontractors and the national railway administration are given the same reference framework concerning vegetation before signing a contract, which can then be crosschecked post-maintenance. A very important issue which comes with an increasing ability to recognise species is the maintenance of biological diversity. Biological diversity along the trackbeds and embankments can be mapped, and maintained, through better and more robust monitoring procedures. Continuously monitoring the state of vegetation along railways is highly recommended in order to identify the need for maintenance actions and, in addition, to keep track of biodiversity.
The computational methods and algorithms developed form the foundation of an automatic inspection system capable of objectively supporting manual inspections, or of replacing them entirely.
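As a flavour of how plant cover can be quantified from field images, a minimal green-segmentation sketch using the excess-green index, a common vegetation index (this is an illustrative stand-in, not the thesis' actual algorithm):

```python
import numpy as np

def vegetation_cover(rgb: np.ndarray, threshold: float = 0.05) -> float:
    """Fraction of pixels classified as vegetation via the excess-green index.

    rgb: H x W x 3 array with channel values in [0, 1].
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2.0 * g - r - b            # excess-green index, high for green plants
    return float((exg > threshold).mean())

# Usage with a synthetic image: top half "vegetation", bottom half "ballast"
img = np.zeros((100, 100, 3))
img[:50] = [0.2, 0.6, 0.2]           # greenish pixels
img[50:] = [0.5, 0.5, 0.5]           # grey ballast
print(vegetation_cover(img))         # -> 0.5
```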
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
A cure kinetic model is an integral part of composite process simulation; it is used to predict the degree of cure and the amount of heat generated. The parameters involved in kinetic models are usually determined empirically from isothermal or dynamic differential scanning calorimetry (DSC) data. In this work, DSC and rheological techniques were used to investigate some of the kinetic parameters of the cure reactions of carbon/F161 epoxy prepreg and to evaluate the cure cycle used to manufacture polymeric composites for aeronautical applications. As a result, it was observed that the F161 prepreg presents cure kinetics with a total order of 1.2-1.9. (c) 2006 Springer Science + Business Media, Inc.
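A common functional form for such cure kinetics is an autocatalytic (Kamal-type) model with Arrhenius temperature dependence, whose partial orders sum to the total order reported above. A minimal sketch integrating the degree of cure over an isothermal hold (A, Ea, m, n and the hold temperature are illustrative assumptions, not the fitted F161 values):

```python
import math

R = 8.314           # gas constant, J mol^-1 K^-1
A = 1.0e5           # pre-exponential factor, s^-1 (illustrative)
Ea = 60.0e3         # activation energy, J mol^-1 (illustrative)
m, n = 0.5, 1.2     # partial orders; total order m + n = 1.7, within 1.2-1.9

def dalpha_dt(alpha: float, T: float) -> float:
    """Autocatalytic (Kamal-type) cure rate at absolute temperature T."""
    k = A * math.exp(-Ea / (R * T))
    return k * (alpha + 1e-6) ** m * (1.0 - alpha) ** n

# Explicit Euler integration of an isothermal one-hour hold at 450 K
T, dt, alpha = 450.0, 0.1, 0.0
for _ in range(36000):
    alpha += dalpha_dt(alpha, T) * dt
print(f"degree of cure after 1 h at {T:.0f} K: {alpha:.3f}")
```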
Abstract:
Time-resolved X-ray absorption fine structure (Quick-XAFS) and UV-Vis absorption spectroscopies were combined to monitor simultaneously the time evolution of Zn-based species and ZnO quantum dot (Qdot) formation and growth during sol-gel synthesis from a zinc oxy-acetate precursor solution. The time evolution of the nanostructural features of the colloidal suspension was independently monitored in situ by small-angle X-ray scattering (SAXS). In both cases, the monitoring was initiated just after the addition of NaOH solution (B = [OH]/[Zn] = 0.5) to the precursor solution at 40 °C. Combined time-resolved Quick-XAFS and UV-Vis data showed that the formation of ZnO colloids from zinc oxy-acetate consumption reaches a quasi-steady-state chemical equilibrium in less than 200 s. Afterwards, comparison of the ZnO Qdot size and the Guinier gyration radius evidences a limited aggregation process coupled to Qdot growth. The analysis of the experimental results demonstrates that nanocrystal coalescence and Ostwald ripening control the kinetics of Qdot growth.
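The Guinier gyration radius referenced above comes from the small-angle limit I(q) ≈ I(0)·exp(−q²Rg²/3), so Rg follows from the slope of ln I versus q². A minimal sketch on synthetic data (illustrative only, not the paper's SAXS reduction pipeline):

```python
import numpy as np

# Synthetic Guinier-regime SAXS data for a particle with Rg = 2 nm
rg_true = 2.0                          # nm
q = np.linspace(0.05, 0.5, 50)         # nm^-1, inside the Guinier regime (q*Rg < ~1.3)
I = 100.0 * np.exp(-(q * rg_true) ** 2 / 3.0)

# Guinier fit: ln I = ln I0 - (Rg^2 / 3) * q^2
slope, intercept = np.polyfit(q ** 2, np.log(I), 1)
rg_fit = np.sqrt(-3.0 * slope)
print(f"fitted Rg = {rg_fit:.2f} nm")   # -> 2.00
```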
Abstract:
The production of chlorine was investigated in the photoelectrocatalytic oxidation of a chloride-containing solution using a TiO₂ thin-film electrode biased at current densities from 5 to 50 mA cm⁻² and illuminated by UV light. Parameters such as chloride concentration (0.001 to 0.10 mol L⁻¹), pH (2-12) and interfering salts were varied in this study in order to determine their effect on the oxidation process. Under optimum conditions this photoelectrocatalytic method can produce active chlorine at levels compatible with water disinfection processes, using a chloride concentration higher than 0.010 mol L⁻¹ at pH 4 and a current density of 30 mA cm⁻². The method was successfully applied to treat surface water collected from a Brazilian river. After 150 min of photoelectrocatalytic oxidation, we obtained a 90% removal of total organic carbon, a 100% removal of turbidity, a 93% decrease in colour and a chemical oxygen demand (COD) removal of around 96% (N = 3). The proposed technology based on photoelectrocatalytic oxidation was also tested in treating 250 mL of a solution containing 0.05 mol L⁻¹ NaCl and 50 μg L⁻¹ of Microcystis aeruginosa. The microorganism is completely removed after 5 min of photoelectrocatalysis, following an initial removal rate constant of −0.260 min⁻¹, suggesting that the present method could be considered a promising alternative to chlorine-based disinfection. (C) 2008 Elsevier Ltd. All rights reserved.
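The quoted rate constant describes only the initial, first-order phase of the removal. A minimal sketch of what constant first-order decay would imply (extrapolating the initial constant over the full 5 min is an assumption made here for illustration):

```python
import math

k = 0.260            # magnitude of the reported initial rate constant, min^-1
c0 = 100.0           # initial cell density, arbitrary units

for t in (1, 3, 5):  # minutes of photoelectrocatalysis
    removed = 100.0 * (1.0 - math.exp(-k * t))
    print(f"t = {t} min: {removed:.0f}% removed")
# Constant-rate decay gives only ~73% at 5 min, so the observed complete
# removal implies the effective removal rate accelerates after the start.
```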
Abstract:
This article considers the np_x chart proposed by Wu et al. (2009) for monitoring the process mean as an alternative to the x̄ chart. What distinguishes the np_x control chart is that the sample units are classified as first-class or second-class units according to discriminating limits. The traditional np chart is a particular case of the np_x chart in which the discriminating limits coincide with the specification limits and a first-class (second-class) unit is a conforming (nonconforming) item. Extending the work of Reynolds Jr., Arnold, and Baik (1996), we consider that the process mean oscillates even in the absence of any special cause. Markov chain properties were adopted to evaluate the performance of the np_x chart in monitoring the mean of an oscillating process. In general, the np_x chart requires samples twice as large to outperform the x̄ chart (whereas the traditional np chart requires samples five or six times larger).
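For a static (non-oscillating) mean, the performance of an np_x-type chart reduces to binomial arithmetic: a unit is second class with probability p (the normal tail area outside the discriminating limits), and ARL = 1/P(second-class count exceeds the control limit). A minimal stdlib sketch (chart parameters are illustrative; the paper's Markov-chain treatment of the oscillating mean is not reproduced):

```python
from math import comb, erf, sqrt

def phi(z: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def arl_npx(n: int, mu: float, w: float, limit: int) -> float:
    """ARL of an np_x-type chart for a static mean (unit process sigma).

    A unit is second class if it falls outside the discriminating limits
    (-w, +w); the chart signals when a sample of n units contains more
    than `limit` second-class units.
    """
    p = phi(-w - mu) + 1.0 - phi(w - mu)      # P(unit is second class)
    p_signal = sum(comb(n, x) * p**x * (1 - p)**(n - x)
                   for x in range(limit + 1, n + 1))
    return 1.0 / p_signal

print(f"in-control ARL: {arl_npx(30, 0.0, 2.0, 4):.0f}")
print(f"1-sigma shift:  {arl_npx(30, 1.0, 2.0, 4):.1f}")
```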
Abstract:
The use of internal standardization for simultaneous atomic absorption spectrometry (SIMAAS) was investigated for Cd and Pb determination in whole blood. The comparison of thermochemical and physicochemical parameters allowed the selection of Ag, Bi, and Tl as internal standard candidates. Correlation graphs plotted from the normalized absorbance signals (n = 20) of internal standard (y axis) versus analyte (x axis), together with precision and accuracy, were used to select Ag as the most appropriate internal standard. Blood samples were diluted (1 + 9) with 0.11% (m/v) Triton X-100 + 1.1% (v/v) HNO₃ + 0.28% (m/v) NH₄H₂PO₄ + 10 μg L⁻¹ Ag⁺. Pyrolysis and atomization temperatures for the optimized heating program were 550 and 1700 °C, respectively. Characteristic masses based on integrated absorbance were 1.68 ± 0.01 pg for Cd and 30.3 ± 0.1 pg for Pb. The detection limits (DL) were 0.095 ± 0.001 μg L⁻¹ and 0.86 ± 0.01 μg L⁻¹ for Cd and Pb, respectively. The mean RSD for all determinations was the same for Cd (13 ± 9%) with or without Ag as internal standard (IS). On the other hand, the use of Ag as IS improved the RSD for Pb from 3.6 ± 4.0% to 2.2 ± 2.0%. An effective contribution of the internal standard Ag was verified in the recoveries of spiked samples (0.5 μg L⁻¹ Cd²⁺ and 5.0 μg L⁻¹ Pb²⁺). The mean recoveries were 81 ± 8% and 91 ± 4% for Cd, and 80 ± 11% and 93 ± 6% for Pb, without and with IS correction, respectively. This is the first application of IS for a simultaneous determination by SIMAAS.
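The principle behind internal standardization is that analyte and standard experience the same matrix/transport fluctuations, so their ratio is more precise than either raw signal. A minimal simulation of that effect (synthetic numbers, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated replicate absorbances sharing a common multiplicative drift
drift = 1.0 + 0.05 * rng.standard_normal(20)                  # matrix/transport variation
a_pb = 0.200 * drift * (1 + 0.01 * rng.standard_normal(20))   # Pb analyte signal
a_ag = 0.150 * drift * (1 + 0.01 * rng.standard_normal(20))   # Ag internal standard

rsd = lambda x: 100 * x.std(ddof=1) / x.mean()
print(f"RSD without IS: {rsd(a_pb):.1f}%")
print(f"RSD with IS:    {rsd(a_pb / a_ag):.1f}%")   # drift cancels in the ratio
```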
Abstract:
A significant part of film production by the coating industry is based on wet bench processes, where a better understanding of their temporal dynamics could facilitate control and optimization. In this work, in situ laser interferometry is applied to study the properties of flowing liquids and to quantitatively monitor the dip coating batch process. Two oil standards - Newtonian, non-volatile, with constant refractive indices and distinct flow properties - were measured at several withdrawal speeds. The film physical thickness then depends on time as t^(-1/2), and flow characterization becomes possible with high precision (linear slope uncertainty of ±0.04%). The resulting kinematic viscosities for OP60 and OP400 are 1.17 ± 0.03 St and 9.9 ± 0.2 St, respectively, in agreement with the nominal values provided by the manufacturer. For a more complex film (a multi-component sol-gel zirconyl chloride aqueous solution) with a varying refractive index, a direct polarimetric measurement also allows determination of the temporal evolution of the physical thickness (uncertainty of ±0.007 μm) during dip coating.
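A thickness decaying as h(t) = c·t^(-1/2) is linear in t^(-1/2), so the flow constant (from which the kinematic viscosity follows) comes out of a straight-line fit. A minimal sketch with synthetic data (the prefactor and noise level are illustrative assumptions, not the paper's drainage model):

```python
import numpy as np

# Synthetic interferometric thickness record: h(t) = c / sqrt(t) + noise
c_true = 12.0                       # micron * s^0.5 (illustrative)
t = np.linspace(5.0, 60.0, 40)      # s
h = c_true / np.sqrt(t) + 0.01 * np.random.default_rng(2).standard_normal(40)

# Linear fit of h against t^(-1/2); the slope recovers the flow constant c
slope, intercept = np.polyfit(t ** -0.5, h, 1)
print(f"fitted c = {slope:.2f} (true {c_true})")
```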
Abstract:
This paper deals with the joint economic design of x̄ and R charts when the occurrence times of assignable causes follow Weibull distributions with increasing failure rates. The variable quality characteristic is assumed to be normally distributed and the process is subject to two independent assignable causes (such as tool wear-out, overheating, or vibration). One cause changes the process mean and the other changes the process variance. However, the occurrence of one kind of assignable cause does not preclude the occurrence of the other. A cost model is developed and a non-uniform sampling interval scheme is adopted. A two-step search procedure is employed to determine the optimum design parameters. Finally, a sensitivity analysis of the model is conducted, and the cost savings associated with the use of non-uniform sampling intervals instead of constant sampling intervals are evaluated.
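One standard way to construct non-uniform sampling intervals under a Weibull shock model is to place sampling times at equal increments of the cumulative hazard, so every interval carries the same conditional probability of an assignable cause. A minimal sketch (the shape, scale and hazard step are illustrative, and this equal-hazard rule is one common construction, not necessarily the paper's exact scheme):

```python
import numpy as np

shape, scale = 2.0, 100.0   # Weibull with increasing failure rate (shape > 1)

def sampling_times(n_samples: int, h_step: float = 0.05) -> np.ndarray:
    """Sampling times t_1 < ... < t_n with equal cumulative-hazard increments.

    The Weibull cumulative hazard is H(t) = (t / scale)**shape; choosing
    H(t_i) = i * h_step makes the conditional probability of an assignable
    cause per interval, 1 - exp(-h_step), the same for every interval.
    """
    H = h_step * np.arange(1, n_samples + 1)
    return scale * H ** (1.0 / shape)

t = sampling_times(8)
print(np.round(t, 1))            # sampling epochs
print(np.round(np.diff(t), 1))   # intervals shrink as the hazard rises
```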
Abstract:
Statement of problem. Acrylic resin denture teeth soften upon immersion in water, and the heating generated during microwave sterilization may enhance this process. Purpose. Six brands of acrylic resin denture teeth were investigated with respect to the effect of microwave sterilization and water immersion on Vickers hardness (VHN). Material and methods. The acrylic resin denture teeth (Dentron [D], Vipi Dent Plus [V], Postaris [P], Biolux [B], Trilux [T], and Artiplus [A]) were embedded in heat-polymerized acrylic resin within polyvinylchloride tubes. For each brand, the occlusal surfaces of 32 identical acrylic resin denture posterior teeth were ground flat with 1500-grit silicon carbide paper and polished on a wet polishing wheel with a slurry of tin oxide. Hardness tests were performed after polishing (control group, C), after polishing followed by 2 cycles of microwave sterilization at 650 W for 6 minutes (MwS group), after polishing followed by 90-day immersion in water (90-day Wim group), and after polishing followed by 90-day storage in water and 2 cycles of microwave sterilization (90-day Wim + MwS group). For each specimen, 8 hardness measurements were made and the mean was calculated. Data were analyzed with a 2-way analysis of variance followed by the Bonferroni procedure to determine any significance between pairs of mean values (alpha = .01). Results. Microwave sterilization significantly decreased (P<.001) the hardness of the acrylic resin denture tooth specimens P (17.8 to 16.6 VHN), V (18.3 to 15.8 VHN), T (17.4 to 15.3 VHN), B (16.8 to 15.7 VHN), and A (17.3 to 15.7 VHN). For all acrylic resin denture teeth, no significant differences in hardness were found between the MwS, 90-day Wim, and 90-day Wim + MwS groups, with the exception of the 90-day Wim + MwS tooth A specimens (14.4 VHN), which demonstrated significantly lower mean values (P<.001) than the 90-day Wim (15.8 VHN) and MwS (15.7 VHN) specimens. Conclusions. For specimens immersed in water for 90 days, 2 cycles of microwave sterilization had no effect on the hardness of most of the acrylic resin denture teeth.
Abstract:
Two catalyst wastes (RNi and RAl) from polyol production were considered hazardous due to their high nickel and aluminum contents, respectively. This article presents a study, carried out to avoid environmental impacts, of the simultaneous solidification/stabilization of both catalyst wastes with type II Portland cement (CP) by non-conventional differential thermal analysis (NCDTA). This technique allows one to monitor the initial stages of cement hydration in order to evaluate the accelerating and/or retarding effects on the process due to the presence of the wastes and to identify the steps where the changes occur. Pastes with a water/cement ratio of 0.5 were prepared, into which different amounts of each waste were added. NCDTA has the same basic principle as differential thermal analysis (DTA), but differs in that there is no external heating or cooling system as in the case of DTA. The thermal effects of cement paste hydration with and without the wastes were evaluated from the energy released during the process in real time, by acquiring the temperature data of the sample and reference using thermistors with 0.03 °C resolution coupled to an analog-digital interface. In the early stages of cement hydration, retarding and accelerating effects occur due to the presence of RNi and RAl, respectively, with significant thermal effects. When the two waste catalysts are stabilized simultaneously by solidification in cement, there is a resulting synergic effect, which allows better hydration operating conditions than when each waste is solidified separately. Thermogravimetric (TG) and derivative thermogravimetric (DTG) analyses of 4 and 24 h pastes provide quantitative information about the main hydrated cement phases and confirm the same accelerating or retarding effects, due to the presence of the wastes, indicated by the respective NCDTA curves.
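As described, the NCDTA signal is simply the sample-minus-reference temperature difference logged over time, with exothermic hydration steps appearing as peaks. A minimal sketch of that signal processing (the synthetic exotherm and its timing are illustrative, not measured data):

```python
import numpy as np

t = np.linspace(0.0, 24.0, 500)               # hydration time, h
T_ref = 25.0 + 0.2 * np.sin(t / 4.0)          # shared ambient drift (reference)
T_sample = T_ref + 1.5 * np.exp(-((t - 8.0) / 2.5) ** 2)   # hydration exotherm

dT = T_sample - T_ref                          # the NCDTA signal
print(f"main hydration exotherm at ~{t[np.argmax(dT)]:.1f} h")
# An accelerating admixture (as reported for RAl) shifts this peak earlier;
# a retarder (as reported for RNi) shifts it later.
```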