968 results for decomposition of polymeric precursor method (DPP)
Abstract:
A previous communication [1] described the preparation of the double selenates of lanthanum and the alkali metals; the La-Li compound has the formula La2(SeO4)3 · Li2SeO4 · 8H2O. Subsequent reports [2-4] showed that it was not possible to prepare the Ce-Li, Pr-Li, Nd-Li and Sm-Li double selenates by the same method [1]. Using a modified preparation technique, it was possible to isolate the previously undescribed double selenates of lithium and all the ceric-group lanthanides, as well as a La-Li double selenate having a different stoichiometry. © 1990.
Abstract:
Crystallographic and microstructural properties of the Ho(Ni,Co,Mn)O3±δ perovskite-type multiferroic material are reported. Samples were synthesized by a modified polymeric precursor method. The synchrotron X-ray powder diffraction (SXRPD) technique, combined with the Rietveld refinement method, was used for structural characterization. The crystallographic structures, as well as the microstructural properties, were studied to determine unit cell parameters and volume, angles, atomic positions, crystallite size and strain. X-ray energies below the absorption edges of the transition metals helped to determine the mean preferred atomic occupancy of the substituent atoms. Furthermore, analyzing the degree of distortion of the polyhedra centered on the transition metal atoms led to an understanding of the structural model of the synthesized phase. X-ray photoelectron spectroscopy (XPS) was performed to evaluate the valence states of the elements, as well as the tolerance factor and oxygen content. The results indicated a small decrease in structural distortion, remaining close to that of the HoMnO3 base compound. In addition, the substituent atoms showed the same distribution and, on average, preferentially occupied the center of the unit cell.
Abstract:
In this paper, Co/CeO2 catalysts with different cobalt contents were prepared by the polymeric precursor method and evaluated for the steam reforming of ethanol. The catalysts were characterized by N2 physisorption (BET method), X-ray diffraction (XRD), UV-visible diffuse reflectance, temperature-programmed reduction (TPR) and field emission scanning electron microscopy (FEG-SEM). It was observed that the catalytic behavior could be influenced by the experimental conditions and the nature of the catalyst employed. Physical-chemical characterizations revealed that the cobalt content of the catalyst influences the metal-support interaction, which results in distinct catalyst performances. The catalyst with the highest cobalt content showed the best performance among those tested, exhibiting complete ethanol conversion, hydrogen selectivity close to 66% and good stability at a reaction temperature of 600 °C. (c) 2012 Elsevier B.V. All rights reserved.
Abstract:
Ba0.77Ca0.23TiO3 ceramics were produced in this work from nanopowders synthesized via a polymeric precursor method. By adjusting the pH of the precursor solutions above 7, it was possible to prepare weakly aggregated powders with a smaller particle size, both of which translated into enhanced sintering of the nanopowders at comparatively lower temperatures. Irrespective of the initial pH value, highly dense and second-phase-free ceramics were obtained following optimal sintering parameters (temperature and time) extracted from dilatometric and density measurements. Moreover, by varying these and other sintering conditions, polycrystalline materials with an average grain size ranging from 0.35 to 8 μm were produced, the grain growth process involving liquid-phase-assisted sintering for heat treatments performed at 1320 °C. The effects of grain size on the ferroelectric properties of these materials were studied, and the results are discussed in the light of previous debates, including the grain-size-dependent degree of tetragonal distortion in such materials, as verified in this work.
Abstract:
CaSnO3 and SrSnO3 alkaline earth stannate thin films were prepared by chemical solution deposition using the polymeric precursor method on various single-crystal substrates (R- and C-sapphire and 100-SrTiO3) at different temperatures. The films were characterized by X-ray diffraction (θ-2θ, ω- and φ-scans), field emission scanning electron microscopy, atomic force microscopy, micro-Raman spectroscopy and photoluminescence. Epitaxial SrSnO3 and CaSnO3 thin films of high crystalline quality were obtained on SrTiO3. The long-range symmetry promoted a short-range disorder which led to photoluminescence in the epitaxial films. In contrast, the films deposited on sapphire exhibited random polycrystalline growth with no meaningful emission, regardless of the substrate orientation. The network modifier (Ca or Sr) and the substrate (sapphire or SrTiO3) influenced the crystallization process and/or the microstructure. The larger the tilt of the SnO6 octahedra, as in CaSnO3, the higher the crystallization temperature, which also changed the nucleation/grain growth process.
Abstract:
A YSZ@Al2O3 nanocomposite was obtained by coating the surface of yttrium-stabilized zirconia with Al2O3 via a polymeric precursor method. The resulting core–shell structures were characterized by X-ray diffraction, scanning electron microscopy, transmission electron microscopy and photoluminescence (PL) spectra. The TEM micrographs clearly show a homogeneous Al2O3 shell around the ZrO2 core. The observed PL is related to surface–interface defects. Such novel technologies can, in principle, exploit materials which are not available in bulk single-crystal form but whose figure-of-merit depends dramatically on the surface–interface defect states.
Abstract:
In this thesis, numerical methods are investigated for determining the eigenfunctions, their adjoints and the corresponding eigenvalues of the two-group neutron diffusion equations representing any heterogeneous system. First, the classical power iteration method is modified so that modes higher than the fundamental mode can be calculated. Thereafter, the Explicitly Restarted Arnoldi method, belonging to the class of Krylov subspace methods, is considered. Although the modified power iteration method is computationally expensive, its main advantage is its robustness, i.e. the method always converges to the desired eigenfunctions without requiring the user to set any parameters. On the other hand, the Arnoldi method, which requires some parameters to be defined by the user, is a very efficient method for calculating the eigenfunctions of large sparse systems of equations with minimum computational effort. These methods are thereafter used for off-line analysis of the stability of Boiling Water Reactors. Since several oscillation modes are usually excited (global and regional oscillations) when unstable conditions are encountered, characterizing the stability of the reactor using, for instance, the Decay Ratio as a stability indicator can be difficult if the contributions of the individual modes are not separated from each other. Such a modal decomposition is applied to a stability test performed at the Swedish Ringhals-1 unit in September 2002, after the Arnoldi method is used to pre-calculate the different eigenmodes of the neutron flux throughout the reactor. The modal decomposition clearly demonstrates the excitation of both the global and regional oscillations. Furthermore, these oscillations are found to be intermittent, with a time-varying phase shift between the first and second azimuthal modes.
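The idea of extending power iteration beyond the fundamental mode can be sketched with Hotelling deflation on a toy symmetric matrix; this is a minimal illustration, not the thesis's actual two-group diffusion discretization, and the matrix and tolerances are invented:

```python
import numpy as np

def power_iteration(A, tol=1e-12, max_iter=10000):
    """Classical power iteration: dominant eigenvalue/eigenvector of A."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        w = A @ v
        v = w / np.linalg.norm(w)
        lam_new = v @ A @ v          # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, v

def higher_modes(A, k):
    """Compute the k dominant eigenpairs of a symmetric A by deflating
    each converged mode out of the operator (Hotelling deflation)."""
    A = A.astype(float).copy()
    modes = []
    for _ in range(k):
        lam, v = power_iteration(A)
        modes.append((lam, v))
        A -= lam * np.outer(v, v)    # remove the converged mode
    return modes

A = np.diag([5.0, 3.0, 1.0])         # toy symmetric "operator"
modes = higher_modes(A, 2)           # fundamental + first higher mode
```

For large sparse problems of the kind treated in the thesis, restarted Arnoldi iterations are available in standard libraries, e.g. the ARPACK-based `scipy.sparse.linalg.eigs` (an implicitly, rather than explicitly, restarted variant).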
Abstract:
Holding the major share of stellar mass in galaxies, and being old and passively evolving, early-type galaxies (ETGs) are the primary probes for investigating these various evolution scenarios, as well as useful means of providing insights into cosmological parameters. In this thesis work I focused specifically on ETGs and on their capability to constrain galaxy formation and evolution; in particular, the principal aims were to derive some of the ETG evolutionary parameters, such as age, metallicity and star formation history (SFH), and to study their age-redshift and mass-age relations. To infer the galaxy physical parameters, I used the public code STARLIGHT: this program provides a best fit to the observed spectrum from a combination of many theoretical models defined in user-made libraries. The comparison between the output and input light-weighted ages shows good agreement starting from SNRs of ∼ 10, with a bias of ∼ 2.2% and a dispersion of 3%. Furthermore, metallicities and SFHs are also well reproduced. In the second part of the thesis I performed an analysis on real data, starting from Sloan Digital Sky Survey (SDSS) spectra. I found that galaxies get older with cosmic time and with increasing mass (at fixed redshift bin); absolute light-weighted ages, moreover, are independent of the fitting parameters and of the synthetic models used. Metallicities are very similar to each other and clearly consistent with the ones derived from the Lick indices. The predicted SFH indicates the presence of a double burst of star formation. Velocity dispersions and extinctions are also well constrained, following the expected behaviours. As a further step, I also fitted single SDSS spectra (with SNR ∼ 20) to verify that stacked spectra give the same results without introducing any bias: this is an important check if one wants to apply the method at higher z, where stacked spectra are necessary to increase the SNR.
Our upcoming aim is to adopt this approach on galaxy spectra obtained from higher-redshift surveys, such as BOSS (z ∼ 0.5), zCOSMOS (z ∼ 1), K20 (z ∼ 1), GMASS (z ∼ 1.5) and, eventually, Euclid (z ∼ 2). Indeed, I am currently carrying out a preliminary study to establish the applicability of the method to lower-resolution, as well as higher-redshift (z ∼ 2), spectra such as the Euclid ones.
Abstract:
OBJECTIVES: This paper examines four different levels of possible variation in symptom reporting: occasion, day, person and family. DESIGN: In order to rule out effects of retrospection, concurrent symptom reporting was assessed prospectively using a computer-assisted self-report method. METHODS: A decomposition of variance in symptom reporting was conducted using diary data from families with adolescent children. We used palmtop computers to assess concurrent somatic complaints from parents and children six times a day for seven consecutive days. In two separate studies, 314 and 254 participants from 96 and 77 families, respectively, participated. A generalized multilevel linear models approach was used to analyze the data. Symptom reports were modelled using a logistic response function, and random effects were allowed at the family, person and day level, with extra-binomial variation allowed for on the occasion level. RESULTS: Substantial variability was observed at the person, day and occasion level but not at the family level. CONCLUSIONS: To explain symptom reporting in normally healthy individuals, situational as well as person characteristics should be taken into account. Family characteristics, however, would not help to clarify symptom reporting in all family members.
Abstract:
In environmental epidemiology, exposure X and health outcome Y vary in space and time. We present a method to diagnose the possible influence of unmeasured confounders U on the estimated effect of X on Y, and we propose several approaches to robust estimation. The idea is to use space and time as proxy measures for the unmeasured factors U. We start with the time series case, where X and Y are continuous variables at equally spaced times, and assume a linear model. We define matching estimators b(u) that correspond to pairs of observations with a specific lag u. Controlling for a smooth function of time, St, using a kernel estimator is roughly equivalent to estimating the association with a linear combination of the b(u), with weights that involve two components: the assumptions about the smoothness of St and the normalized variogram of the X process. When an unmeasured confounder U exists, but the model otherwise correctly controls for measured confounders, the excess variation in the b(u) is evidence of confounding by U. We use the lagged-estimator plot (LEP), a plot of b(u) versus lag u, to diagnose the influence of U on the effect of X on Y. We use appropriate linear combinations of the b(u), or extrapolate to b(0), to obtain novel estimators that are more robust to the influence of smooth U. The methods are extended to time series log-linear models and to spatial analyses. The LEP gives a direct view of the magnitude of the estimators at each lag u and provides evidence when models do not adequately describe the data.
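The diagnostic can be illustrated with a small simulation; the no-intercept difference regression below is one plausible concrete form of the lag-u matching estimator b(u), and the series, coefficients and sinusoidal confounder are all invented for illustration:

```python
import numpy as np

def lagged_estimators(x, y, max_lag):
    """Matching estimators b(u): for each lag u, the no-intercept slope of
    outcome differences y[t+u]-y[t] on exposure differences x[t+u]-x[t]."""
    b = {}
    for u in range(1, max_lag + 1):
        dx = x[u:] - x[:-u]
        dy = y[u:] - y[:-u]
        b[u] = float(dx @ dy / (dx @ dx))
    return b

# Simulated time series with a smooth unmeasured confounder U(t):
rng = np.random.default_rng(0)
n = 5000
t = np.arange(n)
u_conf = np.sin(2 * np.pi * t / 200)              # smooth confounder
x = rng.normal(size=n) + u_conf                   # exposure, correlated with U
y = 0.5 * x + 2.0 * u_conf + rng.normal(size=n)   # true effect of X is 0.5
b = lagged_estimators(x, y, 20)
# Short-lag differencing largely cancels the smooth U, so b(u) at small u
# stays near 0.5, while b(u) drifts upward as u grows: the excess variation
# across lags in the LEP signals confounding by U.
```

Plotting the values of `b` against lag u reproduces the LEP described above; extrapolating the trend back to u = 0 is the robust-estimation idea.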
Abstract:
Frequency-transformed resting EEG data have been widely used to describe normal and abnormal brain functional states as a function of the spectral power in different frequency bands. This has yielded a series of clinically relevant findings. However, by transforming the EEG into the frequency domain, the initially excellent time resolution of time-domain EEG is lost. Topographic time-frequency decomposition is a novel computerized EEG analysis method that combines previously available techniques from time-domain spatial EEG analysis and time-frequency decomposition of single-channel time series. It yields a new, physiologically and statistically plausible topographic time-frequency representation of human multichannel EEG. The original EEG is accounted for by the coefficients of a large set of user-defined, EEG-like time series, which are optimized for maximal spatial smoothness and minimal norm. These coefficients are then reduced to a small number of model scalp field configurations, which vary in intensity as a function of time and frequency. The result is thus a small number of EEG field configurations, each with a corresponding time-frequency (Wigner) plot. The method has several advantages: it does not assume that the data are composed of orthogonal elements, it does not assume stationarity, it produces topographic maps, and it allows the inclusion of user-defined, specific EEG elements, such as spike-and-wave patterns. After a formal introduction of the method, several examples are given, including artificial data and multichannel EEG recorded during different physiological and pathological conditions.
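The single-channel time-frequency building block mentioned above can be sketched as a Hann-windowed short-time Fourier power map; this illustrates only why time information survives the transform, not the paper's topographic modelling across channels, and the sampling rate and test signal are invented:

```python
import numpy as np

def stft_power(signal, fs, win_len, hop):
    """Time-frequency power map of one channel via a Hann-windowed
    short-time Fourier transform."""
    win = np.hanning(win_len)
    frames = [signal[i:i + win_len] * win
              for i in range(0, len(signal) - win_len + 1, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1)) ** 2
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    return freqs, spec                # spec[frame, frequency_bin]

fs = 250                              # EEG-like sampling rate (Hz)
t = np.arange(0, 4, 1.0 / fs)
# 10 Hz "alpha" burst between 1 s and 3 s, silence elsewhere:
sig = np.where((t > 1) & (t < 3), np.sin(2 * np.pi * 10 * t), 0.0)
freqs, spec = stft_power(sig, fs, win_len=250, hop=125)
# Frames inside the burst peak at the 10 Hz bin; frames outside are flat,
# preserving the timing that a single whole-record FFT would discard.
```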
Abstract:
The purpose of this research is to develop a new statistical method to determine the minimum set of rows (R) in an R x C contingency table of discrete data that explains the dependence of observations. The statistical power of the method will be determined empirically by computer simulation to judge its efficiency over presently existing methods. The method will be applied to data on DNA fragment length variation at six VNTR loci in over 72 populations from five major racial groups of humans (total sample size over 15,000 individuals, each sample having at least 50 individuals). DNA fragment lengths grouped in bins will form the basis for studying inter-population DNA variation within the racial groups; where these variations are significant, the method will provide a rigorous re-binning procedure for forensic computation of DNA profile frequencies that takes into account intra-racial DNA variation among populations.
Abstract:
We propose a method for the decomposition of inequality changes based on panel data regression. The method is an efficient way to quantify the contributions of variables to changes of the Theil T index while satisfying the property of uniform addition. We illustrate the method using prefectural data from Japan for the period 1955 to 1998. Japan experienced a diminishing of regional income disparity during the years of high economic growth from 1955 to 1973. After estimating production functions using panel data for prefectures in Japan, we apply the new decomposition approach to identify each production factor’s contributions to the changes of per capita income inequality among prefectures. The decomposition results show that total factor productivity (residual) growth, population change (migration), and public capital stock growth contributed to the diminishing of per capita income disparity.
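The inequality index underlying the decomposition above can be sketched as follows; the income vector is invented, and the snippet illustrates only the Theil T index and its response to a uniform addition, not the paper's regression-based decomposition of changes:

```python
import numpy as np

def theil_t(x):
    """Theil T index: (1/n) * sum over i of (x_i/mean) * ln(x_i/mean).
    Zero under perfect equality; larger when income is more concentrated."""
    x = np.asarray(x, dtype=float)
    r = x / x.mean()
    return float(np.mean(r * np.log(r)))

incomes = np.array([10.0, 20.0, 30.0, 100.0])  # illustrative per capita incomes
t0 = theil_t(incomes)
t1 = theil_t(incomes + 50.0)   # uniform addition: measured inequality falls
```

The uniform-addition property referenced in the abstract is visible directly: adding the same amount to every income strictly lowers the index while leaving it positive for still-unequal incomes.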
Abstract:
The decomposition of azodicarbonamide, used as a foaming agent in PVC–plasticizer (1/1) plastisols, was studied by DSC. Nineteen different plasticizers, all belonging to the ester family, two of them polymeric (polyadipates), were compared. The temperature of maximum decomposition rate (in anisothermal regime at a 5 K min−1 scanning rate) ranges between 434 and 452 K. The heat of decomposition ranges between 8.7 and 12.5 J g−1. Some trends in the variation of these parameters appear significant and are discussed in terms of solvent (matrix) and viscosity effects on the decomposition reactions. The shear modulus at 1 Hz frequency was determined at the temperature of the maximum rate of foaming agent decomposition, and differs significantly from one sample to another. The foam density was determined at ambient temperature, and the volume fraction of bubbles was used as the criterion for judging the efficiency of the foaming process. The results reveal the existence of an optimal shear modulus of the order of 2 kPa, corresponding roughly to plasticizer molar masses of the order of 450 ± 50 g mol−1. Heavier plasticizers, especially the polymeric ones, are too difficult to deform. Lighter plasticizers, such as diethyl phthalate (DEP), deform too easily and presumably facilitate bubble collapse.
Abstract:
Paper submitted to the 7th International Symposium on Feedstock Recycling of Polymeric Materials (7th ISFR 2013), New Delhi, India, 23-26 October 2013.