984 results for Multi-element compounds
Abstract:
The fast sequential multi-element determination of Ca, Mg, K, Cu, Fe, Mn and Zn in plant tissues by high-resolution continuum source flame atomic absorption spectrometry is proposed. The main lines for Cu (324.754 nm), Fe (248.327 nm), Mn (279.482 nm) and Zn (213.857 nm) were selected, and the secondary lines for Ca (239.856 nm), Mg (202.582 nm) and K (404.414 nm) were evaluated. The side pixel registration approach was studied to reduce sensitivity and extend the linear working range for Mg by measuring at the wings (202.576 nm; 202.577 nm; 202.578 nm; 202.580 nm; 202.585 nm; 202.586 nm; 202.587 nm; 202.588 nm) of the secondary line. The interference caused by NO bands on Zn at 213.857 nm was removed using least-squares background correction. Using the main lines for Cu, Fe, Mn and Zn, the secondary lines for Ca and K, the line wing at 202.588 nm for Mg, and a 5 mL min-1 sample flow rate, calibration curves were consistently obtained in the 0.1-0.5 mg L-1 Cu, 0.5-4.0 mg L-1 Fe, 0.5-4.0 mg L-1 Mn, 0.2-1.0 mg L-1 Zn, 10.0-100.0 mg L-1 Ca, 5.0-40.0 mg L-1 Mg and 50.0-250.0 mg L-1 K ranges. Accuracy and precision were evaluated by analysis of five plant standard reference materials. Results agreed with certified values at the 95% confidence level (paired t-test). The proposed method was applied to digests of sugar-cane leaves, and results were close to those obtained by line-source flame atomic absorption spectrometry. Recoveries of Ca, Mg, K, Cu, Fe, Mn and Zn were in the 89-103%, 84-107%, 87-103%, 85-105%, 92-106%, 91-114% and 96-114% intervals, respectively. The limits of detection were 0.6 mg L-1 Ca, 0.4 mg L-1 Mg, 0.4 mg L-1 K, 7.7 μg L-1 Cu, 7.7 μg L-1 Fe, 1.5 μg L-1 Mn and 5.9 μg L-1 Zn.
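The least-squares background correction mentioned above works by fitting a reference background spectrum (here, the NO molecular band) plus a baseline to the measured spectrum and subtracting the scaled fit. A minimal Python/NumPy sketch of the idea, using synthetic spectra and invented peak positions rather than real instrument data:

```python
import numpy as np

# Synthetic example: measured spectrum = analyte line + NO band background.
# 'no_reference' would be a background spectrum recorded from a blank that
# shows only the NO molecular band structure (mock data here).
pixels = np.arange(200)                                         # detector pixels
no_reference = np.exp(-0.5 * ((pixels - 80) / 30) ** 2)         # mock NO band
analyte_peak = 0.8 * np.exp(-0.5 * ((pixels - 100) / 2) ** 2)   # mock Zn line
measured = analyte_peak + 0.6 * no_reference + 0.02             # band + offset

# Design matrix: scaled band reference + constant baseline term.
A = np.column_stack([no_reference, np.ones_like(pixels, dtype=float)])

# Exclude pixels near the analyte line so the peak does not bias the fit.
mask = np.abs(pixels - 100) > 5
coef, *_ = np.linalg.lstsq(A[mask], measured[mask], rcond=None)

corrected = measured - A @ coef          # background-corrected spectrum
print(f"fitted band scale: {coef[0]:.3f}, baseline: {coef[1]:.3f}")
```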
Abstract:
A simple method to determine Cu, Fe, Mn and Zn in single aliquots of medicinal plants by HR-CS FAAS is proposed. The main lines for Cu, Mn and Zn, and an alternate line measured at the wing of the main Fe line at 248.327 nm, allowed calibration within the 0.025-2.0 mg L-1 Cu, 1.0-20.0 mg L-1 Fe, 0.05-2.0 mg L-1 Mn and 0.025-0.75 mg L-1 Zn ranges. Nineteen medicinal plants and two certified plant reference materials were analyzed. Results agreed with reference values at the 95% confidence level (paired t-test). Limits of detection were 0.12 μg L-1 Cu, 330 μg L-1 Fe, 1.42 μg L-1 Mn and 8.12 μg L-1 Zn. Relative standard deviations (n = 12) were ≤ 3% for all analytes. Recoveries in the 89-105% (Cu), 95-108% (Fe), 94-107% (Mn) and 93-110% (Zn) ranges were obtained.
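Both methods above rely on linear external calibration, and limits of detection such as these are commonly computed as three times the standard deviation of replicate blank readings divided by the calibration slope. A short illustrative sketch of that convention; all numbers below are made up:

```python
import numpy as np

# Hypothetical calibration data: standard concentrations (mg/L) vs absorbance.
conc = np.array([0.0, 0.05, 0.1, 0.5, 1.0, 2.0])
absorbance = np.array([0.001, 0.012, 0.022, 0.110, 0.221, 0.438])

# Ordinary least-squares line: absorbance = slope * conc + intercept.
slope, intercept = np.polyfit(conc, absorbance, 1)

# Ten replicate blank readings (hypothetical values).
blanks = np.array([0.0010, 0.0012, 0.0008, 0.0011, 0.0009,
                   0.0013, 0.0010, 0.0007, 0.0012, 0.0011])

# Common convention: LOD = 3 * s_blank / slope.
lod = 3 * blanks.std(ddof=1) / slope
print(f"slope = {slope:.4f} L/mg, LOD = {lod * 1000:.2f} ug/L")

# Quantify an unknown sample from its measured absorbance.
unknown_abs = 0.095
print(f"unknown = {(unknown_abs - intercept) / slope:.3f} mg/L")
```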
Abstract:
Multi-element analysis of honey samples was carried out with the aim of developing a reliable method for tracing the origin of honey. Forty-two chemical elements were determined (Al, Cu, Pb, Zn, Mn, Cd, Tl, Co, Ni, Rb, Ba, Be, Bi, U, V, Fe, Pt, Pd, Te, Hf, Mo, Sn, Sb, P, La, Mg, I, Sm, Tb, Dy, Sd, Th, Pr, Nd, Tm, Yb, Lu, Gd, Ho, Er, Ce, Cr) by inductively coupled plasma mass spectrometry (ICP-MS). Three machine learning tools for classification and two for attribute selection were then applied to show that data mining tools can identify the region where a honey originated. Our results clearly demonstrate the potential of Support Vector Machine (SVM), Multilayer Perceptron (MLP) and Random Forest (RF) chemometric tools for honey origin identification. Moreover, the attribute selection tools allowed a reduction from 42 trace element concentrations to only 5.
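A classification-plus-attribute-selection workflow of this kind can be sketched with scikit-learn. The snippet below is an illustrative reconstruction, not the authors' code: the data, sample count and number of regions are invented, and SelectKBest stands in for whichever selection tools were actually used:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: rows = honey samples, columns = 42 element
# concentrations; y = region-of-origin labels.
rng = np.random.default_rng(0)
X = rng.lognormal(size=(120, 42))        # stand-in for ICP-MS results
y = rng.integers(0, 4, size=120)         # stand-in for 4 regions

# Reduce 42 concentrations to the 5 most discriminating attributes,
# then classify; RandomForestClassifier could replace SVC here.
model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=5),
    SVC(kernel="rbf"),
)
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```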
Abstract:
Clay mineral and bulk chemical (Si, Al, K, Mg, Sr, La, Ce, Nd) analyses of terrigenous surface sediments on the Siberian-Arctic shelf indicate that there are five regions with distinct, or endmember, sedimentary compositions. The formation of these geochemical endmembers is controlled by sediment provenance and grain-size sorting. (1) The shale endmember (Al-, K- and REE-rich sediment) is eroded from fine-grained marine sedimentary rocks of the Verkhoyansk Mountains and Kolyma-Omolon superterrain, and discharged to the shelf by the Lena, Yana, Indigirka and Kolyma Rivers. (2) The basalt endmember (Mg-rich) originates from NE Siberia's Okhotsk-Chukotsk volcanic belt and Bering Strait inflow, and is prevalent in Chukchi Sea sediments. Concentrations of the volcanically derived clay mineral smectite are elevated in Chukchi fine-fraction sediments, corroborating the conclusion that Chukchi sediments are volcanic in origin. (3) The mature sandstone endmember (Si-rich) is found proximal to Wrangel Island and sections of the Chukchi Sea's Siberian coast, and is derived from the sedimentary Chukotka terrain that comprises these landmasses. (4) The immature sandstone endmember (Sr-rich) is abundant in the New Siberian Island region and reflects inputs from the sedimentary rocks that comprise the islands. (5) The immature sandstone endmember is also prevalent in the western Laptev Sea, where it is eroded from sedimentary deposits blanketing the Siberian platform that are compositionally similar to those on the New Siberian Islands. Western Laptev sediments can be distinguished from New Siberian Island region sediments by their comparatively elevated smectite concentrations and the presence of the basalt endmember, which indicate that Siberian platform flood basalts are also a source of western Laptev sediments. In certain locations grain-size sorting noticeably affects shelf sediment chemistry. (1) Erosion of fines by currents and sediment ice rafting contributes to the formation of the coarse-grained sandstone endmembers. (2) Bathymetrically controlled grain-size sorting, in which fines preferentially accumulate offshore in deeper, less energetic water, helps distribute the fine-grained shale and basalt endmembers. An important implication of these results is that the observed sedimentary geochemical endmembers provide new markers of sediment provenance, which can be used to track sediment transport, ice-rafted debris dispersal, or the movement of particle-reactive contaminants.
Abstract:
This work explores the multi-element capabilities of inductively coupled plasma mass spectrometry with collision/reaction cell technology (CCT-ICP-MS) for the simultaneous determination of both spectrally interfered and non-interfered nuclides in wine samples using a single set of experimental conditions. The influence of the cell gas type (i.e. He, He+H2 and He+NH3), cell gas flow rate and sample pre-treatment (i.e. water dilution or acid digestion) on the background-equivalent concentration (BEC) of several nuclides covering the mass range from 7 to 238 u has been studied. Results show that operating the collision/reaction cell at a compromise cell gas flow rate (i.e. 4 mL min-1) improves BEC values for interfered nuclides without a significant effect on the BECs of non-interfered nuclides, with the exception of the light elements Li and Be. Among the cell gas mixtures tested, He or He+H2 is preferred over He+NH3 because NH3 generates new spectral interferences. No significant influence of the sample pre-treatment methodology (i.e. dilution or digestion) on the multi-element capabilities of CCT-ICP-MS was observed for the simultaneous analysis of interfered and non-interfered nuclides. Nonetheless, sample dilution should be kept to a minimum so that light nuclides (e.g. Li and Be) can still be quantified in wine. Finally, a direct 5-fold aqueous dilution is recommended for the simultaneous trace and ultra-trace determination of spectrally interfered and non-interfered elements in wine by CCT-ICP-MS. Use of the CCT is mandatory for interference-free ultra-trace determination of Ti and Cr. Only Be could not be determined with the CCT, owing to a limit of detection degraded relative to conventional ICP-MS.
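The background-equivalent concentration used as the figure of merit above is the concentration whose net signal equals the background level; from a blank and a single standard it can be computed as BEC = c_std · I_blank / (I_std − I_blank). A tiny illustration with hypothetical count rates:

```python
def bec(blank_counts: float, standard_counts: float,
        standard_conc_ug_l: float) -> float:
    """Background-equivalent concentration (same units as the standard).

    BEC = c_std * I_blank / (I_std - I_blank): the concentration whose
    net analyte signal would equal the background level.
    """
    return standard_conc_ug_l * blank_counts / (standard_counts - blank_counts)

# Hypothetical ICP-MS readings for one nuclide, without and with cell gas.
print(bec(blank_counts=1200, standard_counts=85000, standard_conc_ug_l=1.0))
print(bec(blank_counts=90, standard_counts=40000, standard_conc_ug_l=1.0))  # CCT on
```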
Abstract:
The elemental analysis of Spanish palm dates by inductively coupled plasma atomic emission spectrometry and inductively coupled plasma mass spectrometry is reported for the first time. To complete the information about the mineral composition of the samples, C, H, and N were determined by elemental analysis. Dates from Israel, Tunisia, Saudi Arabia, Algeria and Iran were also analyzed. The elemental composition was used in multivariate statistical analysis to discriminate the dates according to their geographical origin. A total of 23 elements (As, Ba, C, Ca, Cd, Co, Cr, Cu, Fe, H, In, K, Li, Mg, Mn, N, Na, Ni, Pb, Se, Sr, V, and Zn) at concentrations from major to ultra-trace levels were determined in 13 date samples (flesh and seeds). A careful inspection of the results indicates that the Spanish samples show higher concentrations of Cd, Co, Cr, and Ni than the remaining ones. Multivariate statistical analysis of the results, both for flesh and seed, indicates that the proposed approach can be successfully applied to discriminate the Spanish date samples from the rest of the samples tested.
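Discrimination by geographical origin of this kind is often done by autoscaling the concentrations, projecting onto a few principal components, and applying a supervised classifier. A hedged scikit-learn sketch of such a pipeline; the sample counts, origin labels, and data below are invented stand-ins, not the paper's dataset:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix: 13 date samples x 23 element concentrations.
rng = np.random.default_rng(1)
X = rng.lognormal(size=(13, 23))
origin = np.array(["ES"] * 4 + ["IL"] * 3 + ["TN"] * 3 + ["SA"] * 3)

# Autoscale, project onto a few principal components, then discriminate.
pipeline = make_pipeline(StandardScaler(), PCA(n_components=3),
                         LinearDiscriminantAnalysis())
pipeline.fit(X, origin)
print(pipeline.predict(X[:2]))   # classify the first two samples
```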
Abstract:
Laser-induced damage and ablation thresholds of bulk superconducting samples of Bi2(SrCa)xCu3Oy (x = 2, 2.2, 2.6, 2.8, 3) and Bi1.6(Pb)xSr2Ca2Cu3Oy (x = 0, 0.1, 0.2, 0.3, 0.4) under irradiation with a 1.06 μm beam from a Nd:YAG laser have been determined as a function of x by the pulsed photothermal deflection technique. The threshold power densities for both ablation and damage increase with increasing x in both systems, while in the Pb-doped system the thresholds decrease above a specific value of x, coinciding with the point at which Tc also begins to fall.
Abstract:
In this short review, we provide new insights into the synthesis and characterization of modern multi-component superconducting oxides. Two different approaches, the high-pressure, high-temperature method and ceramic combinatorial chemistry, are reported and illustrated with several typical examples. First, we highlight the key role of extreme conditions in the growth of Fe-based superconductors, where careful control of the composition-structure relation is vital for understanding the microscopic physics. The availability of high-quality LnFeAsO (Ln = lanthanide) single crystals with substitution of O by F, Sm by Th, Fe by Co, and As by P allowed us to measure intrinsic and anisotropic superconducting properties such as Hc2 and Jc. Furthermore, we demonstrate that combinatorial ceramic chemistry is an efficient way to search for new superconducting compounds. A single-sample synthesis concept based on multi-element ceramic mixtures can produce a variety of local products; such a system requires local probe analyses and separation techniques to identify compounds of interest. We present results obtained from random mixtures of Ca, Sr, Ba, La, Zr, Pb, Tl, Y, Bi, and Cu oxides reacted under different conditions. By adding Zr but removing Tl, Y, and Bi, bulk superconductivity was enhanced up to about 122 K.
Abstract:
The multi-element determination of Al, Cr, Mn, Ni, Cu, Zn, Cd, Ba, Pb, SO4(2-) and Cl- in riverine water samples was accomplished by inductively coupled plasma mass spectrometry (ICP-MS). The sample was passed through a column containing the anionic resin AG1-X8 and the metals were determined directly. The retained anionic species were then eluted, and SO4(2-) and Cl- were determined at m/z 48 and 35, corresponding to the SO+ and Cl+ ions formed in the plasma. Accuracy for the metals was assessed by analysing the certified reference material TM-26 (National Water Research Institute of Canada). Results for SO4(2-) and Cl- were in agreement with those obtained by turbidimetry and spectrophotometry. LODs of 0.1 µg l-1 for Cd, Ba and Pb; 0.2 µg l-1 for Al, Mn and Cu; 0.5 µg l-1 for Cr; 0.9 µg l-1 for Zn; 2.0 µg l-1 for Ni; 60 µg l-1 for S and 200 µg l-1 for Cl were attained.
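Because sulfate is quantified through the SO+ signal, i.e. as sulfur, the elemental result must be converted to the anion concentration by the stoichiometric mass ratio M(SO4)/M(S) ≈ 96.06/32.06 ≈ 3.0. A one-line illustration with a hypothetical reading:

```python
# Convert an element result to the anion concentration (hypothetical reading).
M_S, M_SO4 = 32.06, 96.06          # molar masses, g/mol
s_measured_ug_l = 4200.0           # S determined via SO+ at m/z 48
sulfate_ug_l = s_measured_ug_l * (M_SO4 / M_S)
print(f"SO4(2-) = {sulfate_ug_l / 1000:.2f} mg/L")   # ~12.58 mg/L
```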
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art Statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system's statistical moments from the partial results.
We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the good accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values. A known drawback of techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms of each group independently, and combines the partial results at the end. In this way the number of noise sources active at any given time is controlled and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution time from different angles: First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method builds on the fact that, although we must strictly guarantee a given confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, and hence considerably fewer samples per simulation, in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search techniques can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that implements the previous techniques and is publicly available. The aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its execution. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
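The incremental idea, cheap low-confidence Monte-Carlo estimates during early search and a tight, high-confidence estimate only near convergence, can be illustrated on a toy fixed-point datapath. The sketch below is our own illustration of the concept, not HOPLITE code; the datapath and sample counts are invented:

```python
import numpy as np

def quantize(x: np.ndarray, frac_bits: int) -> np.ndarray:
    """Round to a fixed-point grid with the given number of fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def noise_power(frac_bits: int, n_samples: int, rng) -> float:
    """Monte-Carlo estimate of output quantization noise for a toy datapath."""
    x = rng.uniform(-1.0, 1.0, n_samples)
    exact = x * 0.7071 + x**2 * 0.25                # double-precision reference
    xq = quantize(x, frac_bits)
    fixed = quantize(xq * 0.7071 + xq**2 * 0.25, frac_bits)
    return float(np.mean((fixed - exact) ** 2))

rng = np.random.default_rng(42)
# Early search: few samples (loose confidence); final check: many samples.
for bits in (8, 12, 16):
    rough = noise_power(bits, 1_000, rng)      # cheap, low-confidence estimate
    tight = noise_power(bits, 100_000, rng)    # expensive, tight estimate
    print(f"{bits} frac bits: rough {rough:.2e}, tight {tight:.2e}")
```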