Abstract:
MCNP remains one of the main Monte Carlo radiation transport codes. Its use, like that of any other Monte Carlo based code, has grown as computers have become faster and more affordable. However, using the Monte Carlo method to tally events in volumes that represent a small fraction of the whole system may become unfeasible if a straight analogue transport procedure (no variance reduction techniques) is employed and precise results are demanded. Calculating reaction rates in activation foils placed in critical systems is one such case. The present work takes advantage of MCNP's fixed-source representation to perform this task with more effective sampling, characterizing the neutron population in the vicinity of the tallying region and using it in a geometrically reduced coupled simulation. An extended analysis of source-dependent parameters is carried out in order to understand their influence on simulation performance and on the validity of the results. Although discrepant results have been observed for small enveloping regions, the procedure proves very efficient, giving adequate and precise results in shorter times than the standard analogue procedure. (C) 2007 Elsevier Ltd. All rights reserved.
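The gain from biasing the source toward a rarely hit tally region can be illustrated with a generic importance-sampling toy model. This is only a schematic analogy, not MCNP's coupled-simulation procedure: the exponential "transport" law and the softened biased source below are invented for illustration.

```python
import math
import random

random.seed(1)

def analogue(n):
    """Analogue estimate of p = P(X > 4) for X ~ Exp(1): score 1 per rare hit."""
    hits = sum(1 for _ in range(n) if random.expovariate(1.0) > 4.0)
    return hits / n

def biased_source(n):
    """Sample from a heavier-tailed source Exp(0.5), so hits are frequent,
    and carry the likelihood-ratio weight f(x)/g(x) to keep the estimate fair."""
    total = 0.0
    for _ in range(n):
        x = random.expovariate(0.5)
        if x > 4.0:
            total += math.exp(-x) / (0.5 * math.exp(-0.5 * x))
    return total / n

p_true = math.exp(-4.0)  # ~0.0183: the rare "tally" probability
print(analogue(10_000), biased_source(10_000), p_true)
```

With the same number of histories, the biased source scores roughly seven times more often and yields a visibly lower-variance estimate, which is the kind of trade-off the abstract exploits.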
Abstract:
The analytical determination of atmospheric pollutants still presents challenges due to the low-level concentrations (frequently in the μg m(-3) range) and their variations with sampling site and time. In this work, a capillary membrane diffusion scrubber (CMDS) was scaled down to match capillary electrophoresis (CE), a quick separation technique that requires nothing more than some nanoliters of sample and, when combined with capacitively coupled contactless conductometric detection (C(4)D), is particularly favorable for ionic species that do not absorb in the UV-vis region, like the target analytes formaldehyde, formic acid, acetic acid and ammonium. The CMDS was coaxially assembled inside a PTFE tube and fed with acceptor phase (deionized water for species with a high Henry's constant, such as formaldehyde and carboxylic acids, or acidic solution for ammonia sampling, with equilibrium displacement to the non-volatile ammonium ion) at a low flow rate (8.3 nL s(-1)), while the sample was aspirated through the annular gap of the concentric tubes at 25 mL s(-1). A second unit, similar in every respect to the CMDS, was operated as a capillary membrane diffusion emitter (CMDE), generating a gas flow with known concentrations of ammonia for the evaluation of the CMDS. The fluids of the system were driven with inexpensive aquarium air pumps, and the collected samples were stored in vials cooled by a Peltier element. Complete protocols were developed for the analysis in air of NH(3), CH(3)COOH, HCOOH and, with a derivatization setup, CH(2)O, by associating CMDS collection with determination by CE-C(4)D. The ammonia concentrations obtained by electrophoresis were checked against the reference spectrophotometric method based on Berthelot's reaction. Sensitivity enhancements of this reference method were achieved by using a modified Berthelot reaction, solenoid micro-pumps for liquid propulsion and a long optical path cell based on a liquid core waveguide (LCW). All techniques and methods of this work are in line with green analytical chemistry trends. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
Compared to other volatile carbonyl compounds present in outdoor air, formaldehyde (CH2O) is the most toxic, deserving more attention in terms of indoor and outdoor air quality legislation and control. The analytical determination of CH2O in air still presents challenges due to the low-level concentration (in the sub-ppb range) and its variation with sampling site and time. Of the many available analytical methods for carbonyl compounds, the most widespread is the time-consuming collection in cartridges impregnated with 2,4-dinitrophenylhydrazine, followed by analysis of the formed hydrazones by HPLC. The present work proposes the use of polypropylene hollow porous capillary fibers to achieve efficient CH2O collection. The Oxyphan(R) fiber (designed for blood oxygenation) was chosen for this purpose because it presents good mechanical resistance, a high density of very fine pores and a high ratio of collection area to acceptor fluid volume in the tube, all favorable for the development of an air sampling apparatus. The collector device consists of a Teflon pipe inside which a bundle of polypropylene microporous capillary membranes was introduced. While the acceptor passes at a low flow rate through the capillaries, the sampled air circulates around the fibers, impelled by a low-flow membrane pump (of the type used for aquarium ventilation). Coupling this sampling technique with the selective and quantitative determination of CH2O, in the form of hydroxymethanesulfonate (HMS) after derivatization with HSO3-, by capillary electrophoresis with capacitively coupled contactless conductivity detection (CE-C(4)D) enabled the development of a complete analytical protocol for CH2O evaluation in air. (C) 2008 Published by Elsevier B.V.
Abstract:
The objective of this work was to analyze the spatial distribution of soil compaction and the influence of soil moisture on penetration resistance. The latter variable was described by the cone index. The studied soil was a Nitosol, and the cone index data were obtained using a penetrometer. Soil resistance was evaluated at five depths, 0-10 cm, 10-20 cm, 20-30 cm, 30-40 cm and below 40 cm, while soil moisture content was measured at 0-20 cm and 20-40 cm. Soil water conditions varied among the samplings. The coefficients of variation for the cone index ranged from 16.5% to 45.8%, and those of soil moisture content varied between 8.96% and 21.38%. The results suggested a high correlation between soil resistance, estimated by the cone index, and soil depth. However, the expected relationship with soil moisture was not observed. Spatial dependence was observed in 31 of the 35 data series of cone index and soil moisture. This dependence was fitted by exponential models with a nugget effect ranging from 0 to 90% of the sill value. In the remaining data series the behavior was random. Therefore, the inverse distance technique was used to map the distribution of the variables lacking spatial structure. Kriging produced smoother maps than those obtained by inverse distances. Indicator kriging was used to map the spatial variability of the cone index and to recommend better soil management.
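The inverse-distance technique applied to the variables without spatial structure can be sketched generically. The sample coordinates, cone-index values and the power parameter 2 below are illustrative assumptions, not the study's data:

```python
def idw(x, y, points, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from (xi, yi, value) samples."""
    num = den = 0.0
    for xi, yi, v in points:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return v                      # exact hit: return the sample value
        w = 1.0 / d2 ** (power / 2.0)     # weight falls off as 1/d^power
        num += w * v
        den += w
    return num / den

# Hypothetical cone-index samples (x, y, MPa) on a small grid
samples = [(0, 0, 1.2), (0, 10, 1.8), (10, 0, 2.1), (10, 10, 1.5)]
print(idw(5, 5, samples))   # equidistant from all four points: their plain average
```

Unlike kriging, this interpolator needs no fitted variogram, which is why it suits the series where no spatial structure could be modeled.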
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The objective of this work is to present an index that synthesizes a set of mobility indicators for medium-sized urban centers. Three major areas were selected to compose the mobility index: pedestrians, motor vehicles and cycling. The Sampling Mobility Index is given by the sum of the scores of the selected indicators and can range from 700 points, the best result for mobility, to 0 points, the worst. The result obtained was a Sampling Mobility Index equal to 390. This result indicates a critical situation in Assis as far as mobility is concerned. (c) 2005 Elsevier Ltd. All rights reserved.
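The index construction is plain addition of indicator scores. A minimal sketch follows, with hypothetical indicator names and scores (the abstract fixes only the 0-700 scale and the total of 390 reported for Assis):

```python
# Hypothetical indicator scores; names and individual values are invented,
# chosen only so the sum matches the 390 reported for Assis.
indicators = {
    "sidewalk_condition": 60,
    "pedestrian_crossings": 50,
    "vehicle_flow": 80,
    "parking_supply": 70,
    "cycle_network": 40,
    "cycle_parking": 30,
    "public_transport_access": 60,
}

def mobility_index(scores):
    # The Sampling Mobility Index is simply the sum of the indicator scores.
    return sum(scores.values())

print(mobility_index(indicators))  # → 390
```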
Abstract:
This work describes a methodology for power factor control and correction of unbalanced currents in four-wire electric circuits. The methodology is based on the insertion of two compensation networks in parallel with the load: one in wye with grounded neutral and the other in delta. The mathematical development was proposed in previous work [3]. In this paper, however, the determination of the compensation susceptances is based on the instantaneous values of the load currents. The results are obtained using the MatLab-Simulink environment.
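The role of a compensation susceptance can be illustrated in a deliberately simplified single-phase, balanced form (the paper's method works on the instantaneous currents of an unbalanced four-wire load with two networks; the load values below are arbitrary):

```python
import math

def compensation_susceptance(p_load_w, pf_load, v_rms, pf_target=1.0):
    """Shunt susceptance (siemens) that raises the load power factor
    from pf_load to pf_target, in a single-phase balanced simplification."""
    q_load = p_load_w * math.tan(math.acos(pf_load))      # load reactive power (var)
    q_target = p_load_w * math.tan(math.acos(pf_target))  # residual Q after compensation
    # The shunt element must supply the difference: Q_c = B * V_rms^2
    return (q_load - q_target) / v_rms ** 2

b = compensation_susceptance(10_000.0, 0.8, 220.0)
print(b)  # capacitive susceptance needed for unity power factor
```

For a 10 kW load at power factor 0.8 and 220 V, the 7.5 kvar deficit gives B = 7500 / 220² ≈ 0.155 S.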
Abstract:
This work shows the potential of As as an internal standard to compensate for sampling errors of sparkling drinking water samples in the determination of selenium by graphite furnace atomic absorption spectrometry. A Pd(NO3)2/Mg(NO3)2 mixture was used as chemical modifier. All samples and reference solutions were automatically spiked with 500 μg l-1 As and 0.2% (v/v) HNO3 by the autosampler, eliminating the need for manual dilutions. For 10 μl of sample dispensed into the graphite tube, a good correlation (r=0.9996) was obtained between the ratio of the analyte absorbance to the internal standard absorbance and the analyte concentrations. The relative standard deviations (R.S.D.) of measurements varied from 0.05 to 2% and from 1.9 to 5% (n=12) with and without internal standardization, respectively. The limit of detection (LD) based on integrated absorbance was 3.0 μg l-1 Se. Recoveries in the 94-109% range were obtained for Se-spiked samples. Internal standardization (IS) improved the repeatability of measurements and increased the lifetime of the graphite tube by ca. 15%. © 2004 Elsevier B.V. All rights reserved.
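The internal-standard correction described, regressing the ratio of analyte to internal-standard absorbance against concentration, can be sketched as follows. The absorbance values are synthetic, not the paper's data:

```python
def ratio_calibration(conc, a_analyte, a_istd):
    """Least-squares line through (concentration, A_analyte / A_istd) pairs."""
    ratios = [a / i for a, i in zip(a_analyte, a_istd)]
    n = len(conc)
    mx = sum(conc) / n
    my = sum(ratios) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, ratios))
    sxx = sum((x - mx) ** 2 for x in conc)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# Synthetic Se standards (ug/L); the As internal-standard signal is nearly
# constant, so instrumental drift cancels out in the ratio.
conc = [0.0, 10.0, 20.0, 40.0]
a_se = [0.00, 0.05, 0.11, 0.21]
a_as = [0.50, 0.51, 0.52, 0.50]
slope, intercept = ratio_calibration(conc, a_se, a_as)
unknown = (0.15 / 0.51 - intercept) / slope
print(unknown)  # estimated Se concentration (ug/L)
```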
Abstract:
The present work develops and optimizes a method to determine copper in samples of feces and fish feed by graphite furnace atomic absorption spectrometry (GFAAS) through direct introduction of sample slurries into the spectrometer's graphite tube, internally coated with metallic rhodium and tungsten carbide acting as chemical modifiers. The limits of detection (LOD) and quantification (LOQ), calculated from 20 readings of the blank of the standard slurries (0.50% m/v of feces or feed devoid of copper), were 0.24 and 0.79 μg L-1 for the standard feces slurries and 0.26 and 0.87 μg L-1 for the standard feed slurries. The proposed method was applied in studies of copper absorption in different fish feeds, and the results proved compatible with those obtained from samples mineralized by acid digestion in a microwave oven. © Springer Science+Business Media, LLC 2008.
Abstract:
In soil surveys, several sampling systems can be used to define the most representative sites for sample collection and description of soil profiles. In recent years, the conditioned Latin hypercube sampling system has gained prominence for soil surveys. In Brazil, most soil maps are at small scales and in paper format, which hinders their refinement. The objectives of this work are: (i) to compare two sampling systems based on the conditioned Latin hypercube to map soil classes and soil properties; (ii) to retrieve information from a detailed-scale soil map of a pilot watershed for its refinement, comparing two data mining tools, and to validate the new soil map; and (iii) to create and validate a soil map of a much larger, similar area by extrapolating the information extracted from the existing soil map. Two sampling systems were created: one by the conditioned Latin hypercube and one by the cost-constrained conditioned Latin hypercube. At each prospection site, soil classification and measurement of the A horizon thickness were performed. Maps were generated and validated for each sampling system, and the efficiency of the two methods was compared. The conditioned Latin hypercube captured greater variability of soils and properties than the cost-constrained variant, although it involved greater difficulty in field work. The conditioned Latin hypercube can thus capture greater soil variability, while the cost-constrained conditioned Latin hypercube presents great potential for use in soil surveys, especially in areas of difficult access. From an existing detailed-scale soil map of a pilot watershed, topographic information for each soil class was extracted from a Digital Elevation Model and its derivatives by two data mining tools, and maps were generated using each tool. The more accurate of the tools was used to extrapolate soil information to a much larger, similar area, and the generated map was validated.
It was possible to retrieve the existing soil map information and apply it to a larger area with similar soil-forming factors at much lower financial cost. The KnowledgeMiner data mining tool and ArcSIE, used to create the soil map, presented better results and enabled the use of the existing soil map to extract soil information and apply it to similar, larger areas at reduced cost, which is especially important in developing countries with limited financial resources for such activities, such as Brazil.
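The core of Latin hypercube sampling is stratifying each variable into n equal-probability strata and drawing exactly one sample per stratum. A plain (unconditioned) Latin hypercube over variable ranges can be sketched as follows; this omits the conditioning on real covariate rasters and the cost constraint that the study's two systems add:

```python
import random

def latin_hypercube(n, bounds, seed=7):
    """Draw n points over len(bounds) variables, with exactly one point in
    each of the n equal-width strata of every variable."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        width = (hi - lo) / n
        col = [lo + (i + rng.random()) * width for i in range(n)]  # one per stratum
        rng.shuffle(col)  # decouple the strata pairing across variables
        columns.append(col)
    return list(zip(*columns))

# e.g. 5 candidate sites over hypothetical elevation (m) and slope (%) ranges
for site in latin_hypercube(5, [(600.0, 900.0), (0.0, 45.0)]):
    print(site)
```

Conditioned variants (as used in the work) search for points whose covariate values jointly reproduce such a stratified design on the actual covariate distributions rather than on plain ranges.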
Abstract:
In the seed production system, genetic purity is one of the fundamental requirements for commercialization. The present work had the goal of determining the sample size for genetic purity evaluation, in order to protect both the seed consumer and the producer, and of evaluating the sensitivity of the microsatellite technique for discriminating hybrids from their respective lines and for detecting mixtures present in small amounts in the samples. For the sequential sampling, hybrid seeds were marked and mixed into the seed lots, simulating the following levels of contamination: 0.25, 0.5, 1.0, 2.0, 4.0, and 6.0%. Groups of 40 seeds were then taken in sequence, up to a maximum of 400 seeds, with the objective of determining the quantity of seeds necessary to detect the percentages of mixture mentioned above. The sensitivity of the microsatellite technique was evaluated by mixing different proportions of DNA from the hybrids with that of their respective lines. When the mixture level was higher than 1:8 (1P1:8P2; 8P1:1P2), the sensitivity of the marker in detecting different proportions of the mixture varied according to the primer used. Regarding the sequential sampling, it was verified that, to detect mixture levels higher than 1% within the seed lot, with a risk level of 0.05 for both the producer and the consumer, the necessary sample was smaller than that required by the fixed sample size approach. This also reduced costs, making it feasible to use microsatellites to certify the genetic purity of corn seed lots.
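The sample sizes involved follow from the probability of drawing at least one off-type seed, P(detect) = 1 − (1 − p)^n. The sketch below uses this simple binomial model; it is not the paper's sequential test, which additionally balances producer and consumer risks as seeds are drawn in groups:

```python
import math

def seeds_needed(p_contamination, beta=0.05):
    """Smallest n with P(at least one contaminant drawn) >= 1 - beta,
    i.e. the risk of missing the contamination is at most beta."""
    # (1 - p)^n <= beta  =>  n >= log(beta) / log(1 - p)
    return math.ceil(math.log(beta) / math.log(1.0 - p_contamination))

for pct in (0.25, 0.5, 1.0, 2.0, 4.0, 6.0):
    print(f"{pct}% mixture: n >= {seeds_needed(pct / 100)}")
```

At the 1% contamination level this gives n ≥ 299 seeds, consistent with the abstract's point that mixtures above 1% are detectable within the 400-seed maximum.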
Abstract:
Background: Air pollution in São Paulo is constantly measured by the State of São Paulo Environmental Agency; however, there is no information on the variation between places with different traffic densities. This study was intended to identify a gradient of exposure to traffic-related air pollution within different areas of São Paulo, to provide information for future epidemiological studies. Methods: We measured NO2 using Palmes diffusion tubes at 36 sites on streets chosen to be representative of different road types and traffic densities in São Paulo, in two one-week periods (July and August 2000). In each study period, two tubes were installed at each site, and two additional tubes were installed at 10 control sites. Results: Average NO2 concentrations were related to traffic density observed on the spot, to the number of vehicles counted, and to traffic density strata defined by the city Traffic Engineering Company (CET). Average NO2 concentrations were 63 μg/m3 and 49 μg/m3 in the first and second periods, respectively. Dividing the sites by observed traffic density, we found: heavy traffic (n = 17): 64 μg/m3 (95% CI: 59-68 μg/m3); local traffic (n = 16): 48 μg/m3 (95% CI: 44-52 μg/m3) (p < 0.001). Conclusion: The differences in NO2 levels between heavy- and local-traffic sites are large enough to suggest the use of a more refined classification of exposure in epidemiological studies in the city. The number of vehicles counted, the traffic density observed on the spot and the traffic density strata defined by the CET might be used as proxies for traffic exposure in São Paulo when more accurate measurements are not available.
Abstract:
For the detection of hidden objects by low-frequency electromagnetic imaging, the Linear Sampling Method works remarkably well despite the fact that its rigorous mathematical justification is still incomplete. In this work, we give an explanation for this good performance by showing that in the low-frequency limit the measurement operator fulfills the assumptions for the fully justified variant of the Linear Sampling Method, the so-called Factorization Method. We also show how the method has to be modified in the physically relevant case of electromagnetic imaging with divergence-free currents. We present numerical results to illustrate our findings, and to show that similar performance can be expected for the case of conducting objects and layered backgrounds.
Abstract:
In this work, a coarse-grained (CG) simulation model for peptides in aqueous solution is developed. In a CG approach, the number of degrees of freedom of the system is reduced so that larger systems can be studied on longer time scales. The interaction potentials of the CG model are constructed so that the peptide conformations of a higher-resolution (atomistic) model are reproduced. This work investigates the influence of different bonded interaction potentials in the CG simulation, in particular the extent to which the conformational equilibrium of the atomistic simulation can be reproduced. By construction, the CG approach loses microscopic structural details of the peptide, for example correlations between degrees of freedom along the peptide chain. The dissertation shows that these "lost" properties can be restored in a backmapping procedure, in which the atomistic degrees of freedom are reinserted into the CG structures. This succeeds as long as the conformations of the CG model agree well with the atomistic level in the first place. The mentioned correlations play a major role in the formation of secondary structures and are thus of crucial importance for a realistic ensemble of peptide conformations. It is shown that special bonded interactions, such as 1-5 bond and 1,3,5 angle potentials, are required for good agreement between CG and atomistic chain conformations. The intramolecular parameters (i.e. bonds, angles, torsions) parameterized for short oligopeptides are transferable to longer peptide sequences.
However, these bonded interactions can only be combined with the nonbonded interaction potentials used during parameterization; they cannot, for example, simply be combined with a different water model. Since the energy landscape in CG simulations is smoother than in the atomistic model, the dynamics are accelerated. This acceleration differs between dynamical processes, for example between different types of motion (rotation and translation). This is an important aspect when studying the kinetics of structure formation processes such as peptide aggregation.