903 results for analytical decomposition
Abstract:
Plant litter and fine roots are important in maintaining soil organic carbon (C) levels as well as for nutrient cycling. The decomposition of surface-placed litter, and of fine roots placed at 10-cm and 30-cm depths, of wheat (Triticum aestivum), lucerne (Medicago sativa), buffel grass (Cenchrus ciliaris), and mulga (Acacia aneura) was studied in the field in a Rhodic Paleustalf. After 2 years, approximately 60% of mulga roots and twigs remained undecomposed. The rate of decomposition varied from 4.2 year⁻¹ for wheat roots to 0.22 year⁻¹ for mulga twigs and was significantly correlated with the lignin concentration of both tops and roots. Aryl + O-aryl C concentration, as measured by ¹³C nuclear magnetic resonance spectroscopy, was also significantly correlated with the decomposition parameters, although with a lower R² value than the lignin concentration. Thus, lignin concentration provides a good predictor of litter and fine root decomposition in the field.
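The abstract does not name the fitted model, but rate constants quoted in year⁻¹ are conventionally obtained from the single-exponential mass-loss model used in litter-bag studies; the sketch below, with X(t) the mass remaining at time t and X_0 the initial mass, is an assumption added here for context:

\[ \frac{X(t)}{X_0} = e^{-kt} \qquad\Longrightarrow\qquad k = -\frac{1}{t}\,\ln\frac{X(t)}{X_0}. \]

Under this reading, k = 4.2 year⁻¹ for wheat roots leaves under 2% of the initial mass after one year, while k = 0.22 year⁻¹ for mulga twigs leaves e^{-0.44} ≈ 64% after two years, consistent with the roughly 60% reported.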
Abstract:
The thermal decomposition behavior of 1,2-bis(2,4,6-tribromophenoxy)ethane (BTBPE), widely used as a flame-retardant additive in plastics, was studied by high-resolution thermogravimetry (HRTG) and differential scanning calorimetry (DSC). The compound was pyrolysed in an inert atmosphere at 240 and 340 °C under isothermal conditions; the decomposition products were collected and investigated by IR and GC-MS, and most of them were identified. It was found that BTBPE mostly evaporates at 240 °C. The decomposition products at 340 °C depend on the rate of their removal from the hot reaction zone: the main primary products found in the case of rapid removal are tribromophenol and vinyl tribromophenyl ether, whereas prolonged contact with the heating zone also produces hydrogen bromide, ethylene bromide, polybrominated vinyl phenyl ethers and diphenyl ethers, and dibenzodioxins. The nature of the identified compounds is in accordance with a molecular and radical pyrolysis reaction pathway.
Abstract:
The volatile chemicals which comprise the odor of the illicit drug cocaine have been analyzed by adsorption onto activated charcoal followed by solvent elution and GC/MS analysis. A series of field tests was performed to determine the dominant odor compound to which dogs alert. All of our data to date indicate that the dominant odor is due to the presence of methyl benzoate, which is associated with the cocaine, rather than to the cocaine itself. When methyl benzoate and cocaine are spiked onto U.S. currency, the threshold level of methyl benzoate required for a canine to signal an alert is typically 1-10 μg. Humans have been shown to have a sensitivity similar to that of dogs for methyl benzoate, but with poorer selectivity/reliability. The dominant decomposition pathway for cocaine has been evaluated at elevated temperatures (up to 280 °C): benzoic acid, but no detectable methyl benzoate, is formed. Solvent extraction and supercritical fluid extraction (SFE) were used to study the recovery of cocaine from U.S. currency. The amount of cocaine which could be recovered was found to decrease with time.
Abstract:
The manner in which remains decompose has been and is currently being researched around the world, yet little is known about the scent of death that they generate. In fact, it was not until the Casey Anthony trial that research on the odor released from decomposing remains, and the compounds it comprises, was brought to light. The Anthony trial marked the first admission of human decomposition odor as forensic evidence in a court of law; however, it was not "ready for prime time", as scientific research on the scent of death is still in its infancy. This research employed solid-phase microextraction (SPME) with gas chromatography-mass spectrometry (GC-MS) to identify the volatile organic compounds (VOCs) released from decomposing remains and to assess the impact that different environmental conditions had on the scent of death. Using human cadaver analogues, it was found that the environment to which the remains were exposed dramatically affected the odor released, either by modifying the compounds of which it was composed or by enhancing or hindering the amount liberated. In addition, the VOCs released during the different stages of the decomposition process were evaluated for both human remains and analogues. Statistical analysis showed correlations between the stage of decay and the VOCs generated, such that each phase of decomposition was distinguishable based upon the type and abundance of compounds comprising the odor. This study has provided new insight into the scent of death and the factors that can dramatically affect it, specifically frozen, aquatic, and soil environments. Moreover, the results revealed that different stages of decomposition were distinguishable based upon the type and total mass of each compound present. Based upon these findings, it is suggested that the training aids employed for human remains detection (HRD) canines should (1) be characteristic of remains that have undergone decomposition in different environmental settings, and (2) represent each stage of decay, to ensure that HRD canines have been trained on the various odors that they are likely to encounter in an operational situation.
Abstract:
Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated in parallel lines. An example of this manufacturing system configuration is observed at a manufacturing facility equipped to assemble and test web servers. Characteristics of a typical web server assembly line are multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models used in assembly manufacturing systems in that they can handle serial and different configurations of parallel processing with multiple product classes, as well as job circulation due to random part failures. In addition, correction terms obtained via regression analysis were added to the approximations in order to minimize the error between the analytical approximation and the simulation models. Markovian and general-type manufacturing systems were studied, with multiple product classes, job circulation due to failures, and fork-join systems to model parallel processing. In both the Markovian and the general case, the approximations without correction terms performed quite well for one- and two-product problem instances. However, the flow-time error was observed to increase as the number of products and the net traffic intensity increased. Therefore, correction terms for single and fork-join stations were developed via regression analysis to deal with more than two products. The numerical comparisons showed that the approximations perform remarkably well when the correction factors are used. On average, the flow-time error was reduced from 38.19% to 5.59% in the Markovian case and from 26.39% to 7.23% in the general case. All the equations stated in the analytical formulations were implemented as a set of Matlab scripts. Using this set, operations managers of web server assembly lines, or of manufacturing and service systems with similar characteristics, can estimate different system performance measures and make judicious decisions, especially in setting delivery due dates, capacity planning, and bottleneck mitigation.
Abstract:
Noise is a constant presence in measurements; its origin is related to the microscopic properties of matter. Since the seminal work of Brown in 1828, the study of stochastic processes has gained increasing interest with the development of new mathematical and analytical tools. In the last decades, the central role that noise plays in chemical and physiological processes has become recognized. The dual role of noise as nuisance/resource pushes towards the development of new decomposition techniques that divide a signal into its deterministic and stochastic components. In this thesis I show how methods based on Singular Spectrum Analysis (SSA) have the right properties to fulfil this requirement. During my work I applied SSA to different signals of interest in chemistry: I developed a novel iterative procedure for the denoising of powder X-ray diffractograms, and I denoised two-dimensional images from electrochemiluminescence (ECL) imaging experiments on micro-beads, obtaining new insight into the ECL mechanism. I also used Principal Component Analysis to investigate the relationship between brain electrophysiological signals and voice emission.
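The abstract does not reproduce the thesis algorithms; the minimal SSA sketch below (embedding, SVD, grouping, diagonal averaging) only illustrates how a signal is split into a low-rank "deterministic" part and a residual "stochastic" part. The function name, window length, rank, and the toy damped-sinusoid signal are assumptions for illustration, not the procedure developed in the thesis.

```python
import numpy as np

def ssa_denoise(series, window, rank):
    """Minimal Singular Spectrum Analysis reconstruction.

    series : 1-D array with the noisy signal
    window : embedding window length L (1 < L < len(series))
    rank   : number of leading singular triples kept as the deterministic part
    Returns the reconstructed (denoised) series of the same length.
    """
    s = np.asarray(series, dtype=float)
    N, L = len(s), window
    K = N - L + 1
    # 1) embedding: build the L x K trajectory (Hankel) matrix
    X = np.column_stack([s[i:i + L] for i in range(K)])
    # 2) decomposition: SVD of the trajectory matrix
    U, sigma, Vt = np.linalg.svd(X, full_matrices=False)
    # 3) grouping: keep only the leading `rank` elementary matrices
    X_hat = (U[:, :rank] * sigma[:rank]) @ Vt[:rank]
    # 4) diagonal averaging (Hankelization) back to a 1-D series
    rec = np.zeros(N)
    counts = np.zeros(N)
    for j in range(K):
        rec[j:j + L] += X_hat[:, j]
        counts[j:j + L] += 1
    return rec / counts

# toy usage: a damped oscillation buried in white noise
t = np.linspace(0, 10, 500)
clean = np.exp(-0.2 * t) * np.sin(2 * np.pi * t)
noisy = clean + 0.3 * np.random.default_rng(1).standard_normal(t.size)
denoised = ssa_denoise(noisy, window=100, rank=2)
```

The choice of the window length and of how many components to retain is exactly where an application-specific (for example, iterative) criterion such as the one developed in the thesis would enter.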
Abstract:
A temperature pause introduced into a simple single-step thermal decomposition of an iron precursor, in the presence of silver seeds formed in the same reaction mixture, gives rise to novel compact heterostructures: brick-like Ag@Fe3O4 core-shell nanoparticles. This novel method is relatively easy to implement and could help overcome the challenge of obtaining a multifunctional heteroparticle in which a noble metal is surrounded by magnetite. Structural analyses of the samples show 4 nm silver nanoparticles wrapped within compact cubic external structures of iron oxide with a curious rectangular shape. The magnetic properties indicate near-superparamagnetic behavior with weak hysteresis at room temperature. The value of the anisotropy involved makes these particles candidates for potential applications in nanomedicine.
Abstract:
We have considered a Bose gas in an anisotropic potential. Applying the Gross-Pitaevskii equation (GPE) for a confined dilute atomic gas, we have used the methods of optimized perturbation theory and self-similar root approximants to obtain an analytical formula for the critical number of particles as a function of the anisotropy parameter of the potential. The spectrum of the GPE is also discussed.
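For reference, the stationary GPE for a dilute condensate in an anisotropic harmonic trap takes the standard form below; the precise trap parametrization used by the authors is not given in the abstract and is assumed here:

\[ \left[-\frac{\hbar^2}{2m}\nabla^2 + \frac{m}{2}\left(\omega_\perp^2\,(x^2+y^2) + \omega_z^2\, z^2\right) + g\,|\psi(\mathbf r)|^2\right]\psi(\mathbf r) = \mu\,\psi(\mathbf r), \qquad g = \frac{4\pi\hbar^2 a_s}{m}, \]

with the anisotropy parameter conventionally taken as \( \lambda = \omega_z/\omega_\perp \). For attractive interactions (\( a_s < 0 \)) stationary solutions exist only up to a critical number of particles, which is the quantity the abstract expresses as a function of the anisotropy.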
Abstract:
Few articles deal with lead and strontium isotopic analysis of water samples. The aim of this study was to define chemical procedures for Pb and Sr isotopic analyses of groundwater samples from an urban sedimentary aquifer. Thirty lead and fourteen strontium isotopic analyses were performed to test different analytical procedures. The Pb and Sr isotopic ratios, as well as the Sr concentration, did not vary between the different chemical procedures. However, the Pb concentrations were strongly dependent on the procedure used. Therefore, the choice of the best analytical procedure was based on the Pb results, which indicated higher reproducibility for samples that had been filtered and acidified before evaporation, had their residues totally dissolved, and were purified by ion chromatography using the Biorad® column. Our results showed no changes in Pb ratios with storage time.
Abstract:
Cellulose acetates with different degrees of substitution (DS, from 0.6 to 1.9) were prepared from previously mercerized linter cellulose in a homogeneous medium, using N,N-dimethylacetamide/lithium chloride as the solvent system. The influence of the degree of substitution on the properties of the cellulose acetates was investigated using thermogravimetric analysis (TGA). Quantitative methods were applied to the thermogravimetric curves in order to determine the apparent activation energy (Ea) of the thermal decomposition of untreated and mercerized celluloses and of the cellulose acetates. Ea values were calculated using Broido's method under dynamic (non-isothermal) conditions. Ea values of 158 and 187 kJ mol⁻¹ were obtained for untreated and mercerized cellulose, respectively. A previous study showed that C6OH is the most reactive site for acetylation, probably due to the steric hindrance at C2 and C3. C6OH takes part in the first step of cellulose decomposition, leading to the formation of levoglucosan; when it is converted to C6OCOCH3, the results indicate that the mechanism of thermal decomposition changes to one with a lower Ea. A linear correlation between Ea and the DS of the acetates prepared in the present work was identified.
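Broido's method, named above, extracts the apparent activation energy from a single dynamic TGA curve; its usual working equation is recalled here for context, with y the fraction of material not yet decomposed, m_T the mass at temperature T, and m_0, m_∞ the initial and final masses:

\[ \ln\!\left[\ln\frac{1}{y}\right] = -\frac{E_a}{R}\,\frac{1}{T} + \text{const}, \qquad y = \frac{m_T - m_\infty}{m_0 - m_\infty}, \]

so E_a follows from the slope of ln[ln(1/y)] plotted against 1/T for each sample.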
Abstract:
The use of thermoanalytical data in sample preparation is described as a tool to draw students' attention to details that can simplify both the analysis and the analytical procedure. In this case, the thermal decomposition of eggshells was first investigated by thermogravimetry (TGA). Although classical procedures call for long exposure to high temperatures, the TGA data showed that the decomposition of the organic matter takes place immediately when the sample is heated to 800 °C under an air atmosphere. After decomposition, the calcium content was determined by flame atomic emission photometry and compared with the results obtained by classical volumetric titration with EDTA.
Abstract:
The thermoanalytical behavior of the sodium and potassium salts of pyrrolidinedithiocarbamate (pyr), piperidinedithiocarbamate (pip), morpholinedithiocarbamate (mor), and hexamethyleneiminedithiocarbamate (hex) was investigated. In a first step, the salts were synthesized and characterized by infrared spectroscopy (FTIR), ¹H and ¹³C nuclear magnetic resonance (NMR), and elemental analysis. Thermoanalytical (TG/DTG and DSC) studies were then performed in order to evaluate the thermal stability, as well as the pathways of thermal decomposition, based on the intermediate and final decomposition products.
Abstract:
This work describes the construction and testing of a simple pressurized solvent extraction (PSE) system. A mixture of acetone:water (80:20), at 80 °C and 103.5 bar, was used to extract two herbicides (diuron and bromacil) from a sample of polluted soil, followed by identification and quantification by high-performance liquid chromatography with diode array detection (HPLC-DAD). The system was also used to extract soybean oil (70 °C and 69 bar) using pentane. The extracted oil was weighed and characterized through fatty acid methyl ester analysis (myristic (< 0.3%), palmitic (16.3%), stearic (2.8%), oleic (24.5%), linoleic (46.3%), linolenic (9.6%), arachidic (0.3%), gadoleic (< 0.3%), and behenic (0.3%) acids) using high-resolution gas chromatography with flame ionization detection (HRGC-FID). The PSE results were compared with those obtained using classical procedures: Soxhlet extraction for the soybean oil, and solid-liquid extraction followed by solid-phase extraction (SLE-SPE) for the herbicides. The results showed 21.25 ± 0.36% (m/m) of oil in the soybeans using the PSE system and 21.55 ± 0.65% (m/m) using Soxhlet extraction; extraction efficiencies (recoveries) for diuron and bromacil of 88.7 ± 4.5% and 106.6 ± 8.1%, respectively, using the PSE system, and 96.8 ± 1.0% and 94.2 ± 3.9%, respectively, with the SLE-SPE system; limits of detection (LOD) and quantification (LOQ) for diuron of 0.012 mg kg⁻¹ and 0.040 mg kg⁻¹, respectively; and LOD and LOQ for bromacil of 0.025 mg kg⁻¹ and 0.083 mg kg⁻¹, respectively. The linear range was 0.04 to 1.50 mg L⁻¹ for diuron and 0.08 to 1.50 mg L⁻¹ for bromacil. In conclusion, the high pressure and temperature of the PSE system allow efficient, fast extractions with reduced solvent consumption in an inert atmosphere, which prevents sample and analyte decomposition.
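The abstract does not state how the LOD and LOQ were computed; a common convention, noted here only for orientation and not asserted as the authors' procedure, derives them from the calibration curve, with s the standard deviation of the blank (or of the regression residuals) and S the calibration slope:

\[ \mathrm{LOD} = \frac{3.3\,s}{S}, \qquad \mathrm{LOQ} = \frac{10\,s}{S}. \]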
Abstract:
The thermal behavior of two polymorphic forms of rifampicin was studied by DSC and TG/DTG. The thermoanalytical results clearly showed the differences between the two crystalline forms. Polymorph I was the more thermally stable form: its DSC curve showed no fusion event, and the thermal decomposition process occurred at around 245 °C. The DSC curve of polymorph II showed two consecutive events, an endothermic event (Tpeak = 193.9 °C) and an exothermic event (Tpeak = 209.4 °C), due to melting followed by recrystallization, which was attributed to the conversion of form II into form I. Isothermal and non-isothermal thermogravimetric methods were used to determine the kinetic parameters of the thermal decomposition process. For the non-isothermal experiments, the activation energy (Ea) was derived from the plot of log β vs 1/T, yielding values for polymorphs I and II of 154 and 123 kJ mol⁻¹, respectively. In the isothermal experiments, Ea was obtained from the plot of ln t vs 1/T at a constant conversion level; the mean values found for form I and form II were 137 and 144 kJ mol⁻¹, respectively.
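The two plots mentioned correspond to standard model-free kinetic treatments, written out here for context; the non-isothermal form uses the Doyle/Ozawa approximation, which is an assumption since the abstract does not name the specific variant.

Non-isothermal, at constant conversion, with β the heating rate:
\[ \log\beta \simeq \text{const} - 0.4567\,\frac{E_a}{RT} \quad\Longrightarrow\quad E_a \simeq -\frac{R}{0.4567}\,\frac{\mathrm d(\log\beta)}{\mathrm d(1/T)}. \]

Isothermal, with t_α the time needed to reach a fixed conversion α at temperature T:
\[ \ln t_\alpha = \frac{E_a}{RT} + \text{const} \quad\Longrightarrow\quad E_a = R\,\frac{\mathrm d(\ln t_\alpha)}{\mathrm d(1/T)}. \]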
Abstract:
Consider a random medium consisting of N points randomly distributed so that there is no correlation among the distances separating them. This is the random link model, which is the high-dimensionality limit (mean-field approximation) of the Euclidean random point structure. In the random link model, at discrete time steps, a walker moves to the nearest point that has not been visited in the last μ steps (the memory), producing a deterministic, partially self-avoiding walk (the tourist walk). We have analytically obtained the distribution of the number n of points explored by the walker with memory μ = 2, as well as the joint distribution of transient and period. This result enables us to explain the abrupt change in exploratory behavior between the cases μ = 1 (memoryless walker, driven by extreme value statistics) and μ = 2 (walker with memory, driven by combinatorial statistics). In the μ = 1 case, the mean number of newly visited points in the thermodynamic limit (N >> 1) is just <n> = e = 2.72..., while in the μ = 2 case, the mean number <n> of visited points grows proportionally to N^{1/2}. This result also allows us to establish an equivalence between the random link model with μ = 2 and the random map (uncorrelated back and forth distances) with μ = 0, and to explain the abrupt change between the probabilities for a null transient time and for subsequent ones.
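The results above were obtained analytically; the short Monte Carlo sketch below (an illustration written for this summary, not the authors' code) makes the model concrete. Distances are i.i.d. random numbers, the walk is deterministic once they are drawn, and the number of distinct points explored saturates almost immediately for μ = 1 but keeps growing with N for μ = 2.

```python
import numpy as np

def tourist_walk_explored(N, mu, rng):
    """One deterministic tourist walk on the random link model (mu >= 1).

    Distances between the N points are i.i.d. (no correlation between pairs),
    the mean-field limit described in the abstract. At each step the walker
    moves to the nearest point not visited in the last mu steps; the walk is
    followed until its memory state repeats, i.e. it is trapped in a cycle.
    Returns the number of distinct points explored.
    """
    d = rng.random((N, N))
    d = np.triu(d, 1) + np.triu(d, 1).T   # symmetric i.i.d. distances
    np.fill_diagonal(d, np.inf)           # a point is never its own neighbour

    recent = (0,)          # the (at most mu) most recently visited points
    visited = {0}
    seen = {recent}
    while True:
        current = recent[-1]
        forbidden = set(recent)
        order = np.argsort(d[current])            # neighbours by distance
        nxt = int(next(j for j in order if j not in forbidden))
        recent = (recent + (nxt,))[-mu:]
        visited.add(nxt)
        if recent in seen:     # state repeats: periodic attractor reached
            return len(visited)
        seen.add(recent)

rng = np.random.default_rng(0)
for mu in (1, 2):
    n = [tourist_walk_explored(500, mu, rng) for _ in range(100)]
    print(f"mu={mu}: mean points explored ~ {np.mean(n):.1f}")
```

Because the memory is encoded as the tuple of the last μ visited points, the walk is a deterministic map on a finite state space and must end in a cycle; the function simply counts the distinct points seen before the first repeated state.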