966 results for Decomposition analysis


Relevance: 30.00%

Abstract:

The synthesis and characterization of lanthanide(III) citrates with 1:1 and 2:3 stoichiometries, [LnL·xH2O] and [Ln2(LH)3·2H2O] (Ln = La, Ce, Pr, Nd, Sm and Eu), are reported. L stands for (C6O7H5)3- and LH for (C6O7H6)2-. Infrared absorption spectra of both series evidence coordination of the carboxylate groups through symmetric bridges or chelation. X-ray powder patterns show the amorphous character of [LnL·xH2O]. The compounds [Ln2(LH)3·2H2O] are crystalline and isomorphous. Emission spectra of the Eu compounds suggest C2v symmetry for the coordination polyhedron of [LnL·xH2O] and C4v for [Ln2(LH)3·2H2O]. Thermal analyses (TG-DTG-DTA) were carried out for both series. The thermal analysis patterns of the two series are quite different, yet both fit a four-step model of thermal decomposition, with lanthanide oxides as the final products.

Relevance: 30.00%

Abstract:

A method for the multi-elemental determination of metals (Al, Ba, Ca, Cu, Fe, K, Mg, Mn, Sr and Zn), metalloids (B and Si) and non-metals (Cl, P and S) in babassu nut and mesocarp, sapucaia nut, coconut pulp, cupuassu pulp and seed, and cashew nut by axially viewed inductively coupled plasma optical emission spectrometry is presented. A diluted oxidant mixture (2 ml HNO3 + 1 ml H2O2 + 3 ml H2O) was used to achieve complete decomposition of the organic matrix in a closed-vessel microwave oven. The accuracy of the proposed method was confirmed by analysis of a standard reference material (peach leaves, NIST SRM 1547); agreement with the certified values was good at the 95% confidence limit (Student's t-test). The average RSD for repeatability of the calibration-solution measurements was in the range 1.1-6.7%. Limits of quantification (LOQ = 10 × LOD) were in the range 0.00072-0.0532 mg/l. The macro- and micronutrient levels in the different nuts and seeds did not exceed the dietary reference intakes (DRI), except for Mn in the babassu nut.
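The abstract fixes the quantification limit through LOQ = 10 × LOD. A minimal sketch of that arithmetic in Python, assuming the usual 3σ blank criterion for the LOD (the paper's exact LOD convention is not stated); the blank intensities and calibration slope below are invented for illustration.

```python
import numpy as np

def detection_limits(blank_signals, slope):
    """LOD from the 3*sigma criterion on replicate blank measurements and the
    calibration slope; LOQ taken as 10 x LOD, as stated in the abstract."""
    sigma = np.std(blank_signals, ddof=1)   # standard deviation of the blanks
    lod = 3 * sigma / slope                 # limit of detection, mg/l
    loq = 10 * lod                          # limit of quantification, mg/l
    return lod, loq

# Hypothetical blank intensities (counts) and calibration slope (counts per mg/l)
lod, loq = detection_limits([10.2, 9.8, 10.1, 10.0, 9.9], slope=1.5e4)
print(f"LOD = {lod:.2e} mg/l, LOQ = {loq:.2e} mg/l")
```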

Relevance: 30.00%

Abstract:

In this work, TG/DTG and DSC techniques were used to determine the thermal behavior of prednicarbate alone and in a 1:1 physical mixture with the excipient glyceryl stearate. TG/DTG curves obtained for the binary mixture showed a reduction of approximately 37 °C in the thermal stability of the drug (DTG onset temperature, where dm/dt first deviates from zero). The disappearance of the stretching band at 1280 cm⁻¹ (νas C–O, carbonate group) and the reduced intensity of the stretching band at 1750 cm⁻¹ (νs C–O, ester group) in the IR spectrum of the binary mixture heated to 220 °C, compared with the IR spectrum of the drug heated to the same temperature, confirmed a chemical interaction between these substances upon heating. Kinetic parameters for the decomposition reaction of prednicarbate were obtained using isothermal (Arrhenius equation) and non-isothermal (Ozawa) methods. Both kinetic methods showed a reduction of approximately 45% in the activation energy (Ea) for the first step of the thermal decomposition of the drug in the 1:1 (mass/mass) physical mixture.
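The two kinetic treatments named above are standard; as a reminder of what is being fitted (the textbook forms, not the paper's exact regressions), the isothermal route extracts Ea from the Arrhenius plot, while the Ozawa (Flynn-Wall-Ozawa) route uses runs at several heating rates β:

```latex
\[
  \ln k = \ln A - \frac{E_a}{RT}
  \qquad \text{(isothermal: } E_a \text{ from the slope of } \ln k \text{ vs } 1/T\text{)},
\]
\[
  \log \beta \approx \mathrm{const} - 0.4567\,\frac{E_a}{RT}
  \qquad \text{(Ozawa: } E_a \text{ from } \log \beta \text{ vs } 1/T \text{ at fixed conversion)}.
\]
```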

Relevance: 30.00%

Abstract:

B3LYP/6-31+G(d) calculations were employed to investigate the mechanism of the transesterification reaction between a model monoglyceride and the methoxide and ethoxide anions. The gas-phase results reveal that both reactions have essentially the same activation energy (5.9 kcal mol⁻¹) for decomposition of the key tetrahedral intermediate. Solvent effects were included by means of both microsolvation and the polarizable continuum solvation model CPCM. Both solvent approaches reduce the activation energy; however, only the microsolvation model is able to differentiate between methanol and ethanol, yielding a lower activation energy for decomposition of the tetrahedral intermediate in the reaction with methanol (1.1 kcal mol⁻¹) than in the corresponding reaction with ethanol (2.8 kcal mol⁻¹), in line with experimental evidence. Analysis of the individual energy components within the CPCM approach reveals that electrostatic interactions are the main contribution to stabilization of the transition state.
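To put the 1.1 versus 2.8 kcal mol⁻¹ microsolvation barriers in perspective, a back-of-envelope estimate (an illustration under an assumed temperature of 298 K, not a result of the paper) translates the barrier difference into a rate ratio:

```latex
% Assuming simple Arrhenius / transition-state kinetics at 298 K,
% where RT ~ 0.593 kcal/mol:
\[
  \frac{k_{\mathrm{MeOH}}}{k_{\mathrm{EtOH}}}
  \approx \exp\!\left(\frac{2.8 - 1.1}{0.593}\right) \approx 18,
\]
% i.e. a roughly one-order-of-magnitude kinetic preference for methanolysis.
```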

Relevance: 30.00%

Abstract:

The paper utilises the Juhn, Murphy and Pierce (1991) decomposition to shed light on the pattern of slow male-female wage convergence in Australia over the 1980s. The analysis allows one to distinguish between the role of the wage structure and gender-specific effects. The central question addressed is whether rising wage inequality counteracted the forces of increased female investment in labour market skills, i.e. education and experience. The conclusion is that, in contrast to the US and the UK, Australian women do not appear to have been swimming against a tide of adverse wage-structure changes.
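For reference, a sketch of the decomposition machinery in its standard notation (the paper's exact covariate set is not reproduced here): with wages y_it = X_it β_t + σ_t θ_it, where β_t are observed skill prices, σ_t is residual wage dispersion, θ_it a standardized residual and Δ denotes the male-female difference, the gap D_t and its change between years s and t split as

```latex
\[
  D_t = \Delta X_t\,\beta_t + \sigma_t\,\Delta\theta_t,
\]
\[
  D_t - D_s = (\Delta X_t - \Delta X_s)\,\beta_t
            + \Delta X_s\,(\beta_t - \beta_s)
            + (\Delta\theta_t - \Delta\theta_s)\,\sigma_t
            + \Delta\theta_s\,(\sigma_t - \sigma_s),
\]
% i.e. gender-specific quantity effects, wage-structure (price) effects,
% residual-gap effects, and changes in residual inequality.
```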

Relevance: 30.00%

Abstract:

Data mining refers to extracting, or "mining", knowledge from large amounts of data; visualization and knowledge-representation techniques are then used to present the mined knowledge to the user. Efficient algorithms for mining frequent patterns are crucial to many data mining tasks. Since the Apriori algorithm was proposed in 1994, several methods have been proposed to improve its performance; however, most still adopt its candidate generation-and-test approach. In addition, many methods do not generate all frequent patterns, making them inadequate for deriving association rules. The Pattern Decomposition (PD) algorithm significantly reduces the size of the dataset on each pass, making it more efficient to mine all frequent patterns in a large dataset. It avoids the costly process of candidate set generation and, by working on the reduced datasets, saves a large amount of the counting time spent evaluating support. In this paper, some existing frequent-pattern generation algorithms are explored and compared. The results show that the PD algorithm outperforms an improved version of Apriori named Direct Count of candidates & Prune transactions (DCP) by an order of magnitude and is faster than an improved FP-tree named Predictive Item Pruning (PIP). Further, PD is more scalable than both DCP and PIP.
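The abstract does not give enough detail to reproduce PD itself, but the candidate generation-and-test loop it avoids is the classic Apriori scheme. A minimal, illustrative Python sketch (toy data, not the paper's code) showing the per-level candidate join, prune and full-scan support test whose cost PD sidesteps by shrinking the dataset on each pass:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Classic Apriori: level-wise candidate generation-and-test."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n  # full dataset scan

    items = {i for t in transactions for i in t}
    frequent = [{frozenset([i]) for i in items if support(frozenset([i])) >= min_support}]

    k = 2
    while frequent[-1]:
        prev = frequent[-1]
        # join step: candidates of size k from frequent (k-1)-itemsets ...
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        # ... then prune any candidate with an infrequent (k-1)-subset
        candidates = {c for c in candidates
                      if all(frozenset(s) in prev for s in combinations(c, k - 1))}
        # test step: one more scan over the whole dataset per level
        frequent.append({c for c in candidates if support(c) >= min_support})
        k += 1
    return [s for level in frequent for s in level]

print(apriori([{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'c'}, {'b', 'c'}], min_support=0.5))
```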

Relevance: 30.00%

Abstract:

A new objective fabric-pilling grading method based on wavelet texture analysis was developed. The method builds a composite texture feature vector from the wavelet detail coefficients at all decomposition levels and in the horizontal, vertical and diagonal orientations, permitting a much richer and more complete representation of the pilling texture in the image to be used as the basis for classification. Standard multi-factor classification techniques, principal components analysis followed by discriminant analysis, were then used to classify the samples into five pilling degrees. A preliminary investigation of the method was performed using standard pilling image sets of knitted, woven and non-woven fabrics. The results showed that the method can successfully evaluate pilling intensity in knitted, woven and non-woven fabrics, provided a suitable wavelet and analysis scale are selected.
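A minimal sketch of the feature-extraction step described above, using PyWavelets; the specific wavelet ('db4'), the level count and the log-energy summary per band are assumptions, since the abstract only states that detail coefficients from all levels and orientations form the feature vector.

```python
import numpy as np
import pywt

def pilling_features(image, wavelet='db4', levels=4):
    """Log-energy of the detail coefficients at every decomposition level and
    orientation (horizontal, vertical, diagonal), concatenated into a single
    texture feature vector."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    feats = []
    for cH, cV, cD in coeffs[1:]:                 # skip the approximation band
        for band in (cH, cV, cD):
            feats.append(np.log(np.mean(band ** 2) + 1e-12))
    return np.array(feats)                        # length = 3 * levels

# Toy demo on a synthetic 'fabric' image; real use would feed PCA scores of
# these vectors into discriminant analysis to assign the five pilling degrees.
img = np.random.default_rng(1).normal(size=(256, 256))
print(pilling_features(img).shape)                # (12,)
```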

Relevance: 30.00%

Abstract:

A multiresolution technique for stereo correspondence estimation, based on a multiwavelet scale-space representation, is presented. The technique uses the well-known coarse-to-fine strategy: stereo correspondences are calculated at the coarsest resolution level and then refined up to the finest level. Vector coefficients of the multiwavelet transform modulus are used as correspondence features, where the modulus maxima define shift-invariant high-level features (multiscale edges) whose phase points along the normal of the feature surface. The technique addresses the estimation of optimal corresponding points and the corresponding 2D disparity maps. Illuminative variation between the perspective views of the same scene is controlled by scale normalization at each decomposition level, dividing the detail-space coefficients by the approximation space. The problem of ambiguity is addressed explicitly, and that of occlusion implicitly, by a geometric-topological refinement procedure. Geometric refinement is based on a symbolic tagging procedure, introduced to keep only the most consistent matches in consideration; symbolic tagging is performed on the basis of probability of occurrence and multiple thresholds. The whole procedure is constrained by the uniqueness and continuity of the corresponding stereo features. A comparison of the proposed algorithm with eight well-known algorithms from the literature validates its promising performance.
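The multiwavelet modulus/phase features and the symbolic-tagging refinement are beyond a short sketch, but the coarse-to-fine skeleton the abstract builds on can be illustrated. A hedged Python sketch using ordinary wavelet approximation bands and SAD block matching, both stand-ins for the paper's actual features, with every parameter an assumption:

```python
import numpy as np
import pywt

def coarse_to_fine_disparity(left, right, wavelet='db2', levels=3, radius=2, win=5):
    """Coarse-to-fine skeleton only: match at the coarsest scale, then refine
    the doubled estimate at each finer scale."""
    lp, rp = [left.astype(float)], [right.astype(float)]
    for _ in range(levels):                       # approximation pyramids
        lp.append(pywt.dwt2(lp[-1], wavelet)[0])
        rp.append(pywt.dwt2(rp[-1], wavelet)[0])

    disp = np.zeros(lp[-1].shape, dtype=int)      # zero prior at the top
    for lvl in range(levels, -1, -1):
        L, R = lp[lvl], rp[lvl]
        if lvl < levels:                          # upsample the coarser estimate
            disp = np.kron(disp, np.ones((2, 2), dtype=int))[:L.shape[0], :L.shape[1]] * 2
        for y in range(L.shape[0]):
            for x in range(L.shape[1] - win):
                d0, best, best_d = disp[y, x], np.inf, disp[y, x]
                for d in range(d0 - radius, d0 + radius + 1):
                    if 0 <= x - d and x - d + win <= R.shape[1]:
                        c = np.abs(L[y, x:x + win] - R[y, x - d:x - d + win]).sum()
                        if c < best:
                            best, best_d = c, d
                disp[y, x] = best_d
    return disp

# Toy check: a texture shifted by 3 pixels should give disparities near 3,
# away from the wrap-around border.
rng = np.random.default_rng(0)
L_img = rng.normal(size=(64, 64))
d = coarse_to_fine_disparity(L_img, np.roll(L_img, -3, axis=1))
```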

Relevance: 30.00%

Abstract:

The assessment of the direct and indirect requirements for energy is known as embodied energy analysis. For buildings, the direct energy includes that used primarily on site, while the indirect energy includes primarily the energy required for the manufacture of building materials. This thesis is concerned with the completeness and reliability of embodied energy analysis methods. Previous methods tend to address one of these issues, but not both at the same time: industry-based methods are incomplete, while national statistical methods, although comprehensive, are a 'black box' and subject to errors. A new hybrid embodied energy analysis method is derived to optimise the benefits of previous methods while minimising their flaws.

In industry-based studies, known as 'process analyses', the energy embodied in a product is traced laboriously upstream by examining the inputs to each preceding process towards raw materials. Process analyses can be significantly incomplete, due to increasing complexity. The other major embodied energy analysis method, 'input-output analysis', comprises the use of national statistics. While the input-output framework is comprehensive, many inherent assumptions make the results unreliable.

Hybrid analysis methods combine the two major methods discussed above, and are based on either process analysis or input-output analysis. The intention in both cases is to reduce the errors associated with the two major methods. However, the problems inherent in each original method tend to remain, to some degree, in the associated hybrid version. Process-based hybrid analyses tend to be incomplete, due to the exclusions associated with the process analysis framework, while input-output-based hybrid analyses tend to be unreliable because the substitution of process analysis data into the input-output framework causes unwanted indirect effects.

A key deficiency in previous input-output-based hybrid analysis methods is that the input-output model is a 'black box': important flows of goods and services with respect to the embodied energy of a sector cannot be readily identified. A new input-output-based hybrid analysis method was therefore developed, requiring the decomposition of the input-output model into mutually exclusive components (i.e., 'direct energy paths'). A direct energy path represents a discrete energy requirement, possibly occurring one or more transactions upstream from the process under consideration. For example, the energy required directly to manufacture the steel used in the construction of a building would represent a direct energy path one non-energy transaction in length. A direct energy path comprises a 'product quantity' (for example, the total tonnes of cement used) and a 'direct energy intensity' (for example, the energy required directly for cement manufacture, per tonne).

The input-output model was decomposed into direct energy paths for the 'residential building construction' sector. It was shown that 592 direct energy paths were required to describe 90% of the overall total energy intensity for 'residential building construction'. By extracting direct energy paths using yet smaller threshold values, they were shown to be mutually exclusive. Consequently, the modification of direct energy paths using process analysis data does not cause unwanted indirect effects.
A non-standard individual residential building was then selected to demonstrate the benefits of the new input-output-based hybrid analysis method in cases where the products of a sector may not be similar. Particular direct energy paths were modified with case-specific process analysis data: product quantities and direct energy intensities were derived and used to modify some of the direct energy paths. The intention of this demonstration was to determine whether 90% of the total embodied energy calculated for the building could comprise the process analysis data normally collected for a building. However, only 51% of the total comprised normally collected process analysis data. The integration of process analysis data with 90% of the direct energy paths by value was unsuccessful because:

• typically only one of the two direct energy path components (i.e., either the product quantity or the direct energy intensity) was modified using process analysis data;
• the paths derived for 'residential building construction' were complex; and
• reliable and consistent process analysis data from industry were lacking, for both product quantities and direct energy intensities.

While the input-output model used was the best available for Australia, many errors were likely to be carried through to the direct energy paths for 'residential building construction'. Consequently, both the value and the relative importance of the direct energy paths for 'residential building construction' were generally found to be a poor model for the demonstration building, as expected. Nevertheless, in the absence of better data from industry, the input-output data is likely to remain the most appropriate for completing the framework of embodied energy analyses of many types of products, even in non-standard cases.

'Residential building construction' was one of the 22 most complex Australian economic sectors (i.e., those requiring between 592 and 3215 direct energy paths to describe 90% of their total energy intensities). Consequently, for the other 87 non-energy sectors of the Australian economy, the input-output-based hybrid analysis method is likely to produce more reliable results than those calculated for the demonstration building using the direct energy paths for 'residential building construction'. For sectors more complex than 'residential building construction', the new input-output-based hybrid analysis method derived here allows available process analysis data to be integrated with the input-output data in a comprehensive framework. The proportion of the result comprising the more reliable process analysis data can be calculated and used as a measure of the reliability of the result for the product, or part of the product, being analysed (for example, a building material or component).

To ensure that future applications of the new input-output-based hybrid analysis method produce reliable results, new sources of process analysis data are required, including for services (for example, 'banking') and for processes involving the transformation of basic materials into complex products (for example, steel and copper into an electric motor).
However, even considering the limitations of the demonstration described above, the new input-output-based hybrid analysis method achieved the aim of the thesis: to develop a new embodied energy analysis method that allows reliable process analysis data to be integrated into the comprehensive, yet unreliable, input-output framework.

Plain language summary

Embodied energy analysis comprises the assessment of the direct and indirect energy requirements associated with a process. For example, the construction of a building requires the manufacture of steel structural members, and thus indirectly requires the energy used directly and indirectly in their manufacture. Embodied energy is an important measure of ecological sustainability because energy is used in virtually every human activity and many of these activities are interrelated. This thesis is concerned with the relationship between the completeness of embodied energy analysis methods and their reliability. Previous industry-based methods, while reliable, are incomplete; previous national statistical methods, while comprehensive, are a 'black box' subject to errors. A new method is derived, involving the decomposition of the comprehensive national statistical model into components that can be modified discretely using the more reliable industry data, and is demonstrated for an individual building. The demonstration failed to integrate enough industry data into the national statistical model, due to the unexpected complexity of the national statistical data and the lack of available industry data on energy and non-energy product requirements. These findings highlight the flaws in previous methods. Reliable process analysis and input-output data are required, particularly for those processes that could not be examined in the demonstration of the new method, including the energy requirements of services sectors, such as banking, and of processes involving the transformation of basic materials into complex products, such as refrigerators. The application of the new method to less complex products, such as individual building materials or components, is likely to be more successful than the residential building demonstration.
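A hedged sketch of what 'decomposition of the input-output model into direct energy paths' amounts to computationally, under standard Leontief conventions (total energy intensity ε = e(I − A)⁻¹ = e(I + A + A² + …), so each term in the expansion is a discrete chain of transactions). The matrix values below are invented and the traversal details are assumptions; the thesis' own extraction procedure may differ.

```python
import numpy as np

def direct_energy_paths(A, e, sector, threshold, max_len=6):
    """Enumerate direct energy paths for one sector by unrolling the Leontief
    series e(I - A)^-1 = e(I + A + A^2 + ...): each upstream chain contributes
    e[j_k] * A[j_k, j_{k-1}] * ... * A[j_1, sector]. Conventions assumed:
    A[i, j] is the input of sector i per unit output of sector j, and e[i] is
    the direct energy intensity of sector i. Returned trails run from the
    target sector upstream."""
    n = A.shape[0]
    paths = []

    def walk(j, value, trail):
        if value * e[j] >= threshold:            # path terminating at sector j
            paths.append((tuple(trail), value * e[j]))
        if len(trail) > max_len:
            return
        for i in range(n):                       # extend one transaction upstream
            v = value * A[i, j]
            if v * e.max() >= threshold:         # prune hopeless branches
                walk(i, v, trail + [i])

    walk(sector, 1.0, [sector])
    return sorted(paths, key=lambda p: -p[1])

# Hypothetical 3-sector example: direct energy intensities e and coefficients A.
e = np.array([10.0, 0.5, 0.2])
A = np.array([[0.05, 0.10, 0.20],
              [0.10, 0.05, 0.30],
              [0.02, 0.08, 0.05]])
for trail, value in direct_energy_paths(A, e, sector=2, threshold=0.05):
    print(trail, round(value, 4))
```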

Relevance: 30.00%

Abstract:

To evaluate calcium chloride coagulation technology, two raw natural rubber samples were produced, one coagulated with calcium chloride and one with acetic acid. The plasticity retention index (PRI), thermal degradation process, thermal degradation kinetics and differential thermal analysis of the two samples were studied, and the thermal degradation activation energy, pre-exponential factor and rate constant were calculated. The results show that the natural rubber produced with calcium chloride possesses good mechanical properties but poorer thermo-stability in comparison with the natural rubber produced with acetic acid.

Relevance: 30.00%

Abstract:

This paper presents a new time-frequency approach to underdetermined blind source separation using the parallel factor decomposition of third-order tensors. Without any constraint on the number of active sources at an auto-term time-frequency point, the approach can directly separate the sources as long as the uniqueness condition of the parallel factor decomposition is satisfied. Compared with existing two-stage methods, in which the mixing matrix must be estimated first and then used to recover the sources, the approach yields better source separation performance in the presence of noise; moreover, the mixing matrix can be estimated at the same time as the sources are separated. Numerical simulations show the superior performance of the proposed approach over several existing two-stage blind source separation methods that also use the time-frequency representation.
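A minimal sketch of the core idea, not the paper's algorithm: how the auto-term time-frequency matrices are selected and built from the mixtures is skipped, and all names are illustrative. It uses tensorly's CP/PARAFAC routine:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

def separate(tf_matrices, n_sources):
    """tf_matrices: K spatial 'auto-term' matrices (M x M), each approximately
    sum_r d_kr * a_r a_r^T. Stacked, they form a K x M x M tensor whose CP
    (PARAFAC) factors, when unique, give the mixing vectors a_r directly."""
    T = tl.tensor(np.stack(tf_matrices))
    weights, (D, A1, A2) = parafac(T, rank=n_sources, normalize_factors=True)
    # A1 (and A2, up to scale/sign) estimates the mixing matrix columns;
    # D holds each source's energy profile over the selected TF points.
    return A1, D

# Toy check with a known 3 x 2 mixing matrix and 20 synthetic TF points.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 2))
energies = rng.uniform(0.5, 2.0, size=(20, 2))
T_stack = [sum(energies[k, r] * np.outer(A[:, r], A[:, r]) for r in range(2))
           for k in range(20)]
A_hat, D_hat = separate(T_stack, n_sources=2)   # A_hat ~ A up to order/scale
```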

Relevance: 30.00%

Abstract:

Decomposition of poly(vinyl alcohol)/montmorillonite clay (PVA/MMT) composites during melt crystallization was experimentally confirmed by changes in morphology and molecular structure. In particular, FTIR spectra show a shift of the O–H stretching band as well as enhanced intensities of the C–O stretching and CH2 rocking vibrational modes, and Raman deconvolution indicates that the C–H wagging, CH2–CH wagging, CH–CO bending and CH2 wagging modes in the amorphous domains all decreased greatly. This decomposition leads to decreased melting enthalpy, melting point, crystallization enthalpy and crystallization temperature. Crystallization analysis shows that the incorporated MMT slows down crystallization in the PVA matrix regardless of the nucleation capability of MMT. Despite the severe decomposition, the crystallization kinetics were still well described by the common classical models. The molecular structure changes and crystallization retardation observed in this study thus clearly indicate the strong effect of thermal degradation on the non-isothermal crystallization of PVA/MMT composites.
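The abstract does not name its "common classical models"; on the usual assumption that this means the Avrami analysis (with Jeziorny's cooling-rate correction for non-isothermal data), the fitted forms are:

```latex
\[
  1 - X(t) = \exp\left(-Z_t\, t^{n}\right),
  \qquad
  \log\left[-\ln\left(1 - X(t)\right)\right] = \log Z_t + n \log t,
\]
% so the exponent n and rate constant Z_t follow from a double-log plot of the
% relative crystallinity X(t); Jeziorny corrects for the cooling rate Phi via
% log Z_c = (log Z_t) / Phi.
```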

Relevance: 30.00%

Abstract:

This paper tackles the problem of aggregate TFP measurement using stochastic frontier analysis (SFA). Data from Penn World Table 6.1 are used to estimate a world production frontier for a sample of 75 countries over a long period (1950-2000), taking advantage of the model offered by Battese and Coelli (1992). We also apply the decomposition of TFP suggested by Bauer (1990) and Kumbhakar (2000) to a smaller sample of 36 countries over the period 1970-2000 in order to evaluate the effects of changes in efficiency (technical and allocative), scale effects and technical change. This allows us to analyze the role of productivity and its components in the economic growth of developed and developing nations, in addition to the importance of factor accumulation. Although not much explored in the study of economic growth, frontier techniques seem to be of particular interest for that purpose, since the separation of efficiency effects and technical change has a direct interpretation in terms of the catch-up debate. The estimated technical efficiency scores reveal the efficiency of nations in the production of non-tradable goods, since the GDP series used is PPP-adjusted. We also provide a second set of efficiency scores, corrected in order to reveal efficiency in the production of tradable goods, and rank them. When compared to the rankings of productivity indexes offered by the non-frontier studies of Hall and Jones (1996) and Islam (1995), our ranking shows a somewhat more intuitive ordering of countries. Rankings of the technical change and scale effects components of TFP change are also very intuitive. We also show that productivity is responsible for virtually all the differences in performance between developed and developing countries in terms of rates of growth of income per worker. More important, we find that changes in allocative efficiency play a crucial role in explaining differences in the productivity of developed and developing nations, even larger than the role played by the technology gap.
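For reference, a sketch of the Battese and Coelli (1992) panel frontier in its standard form (the covariates x_it used in the paper are not shown here):

```latex
\[
  y_{it} = x_{it}\beta + v_{it} - u_{it},
  \qquad
  u_{it} = \exp\{-\eta\,(t - T)\}\,u_i,
  \quad
  u_i \sim N^{+}(\mu, \sigma_u^2),
\]
% with v_it ~ N(0, sigma_v^2) statistical noise and u_it >= 0 time-varying
% technical inefficiency; technical efficiency is TE_it = exp(-u_it). The
% Bauer/Kumbhakar decomposition then splits TFP change into technical change,
% efficiency change, scale effects and allocative effects.
```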

Relevance: 30.00%

Abstract:

Despite the commonly held belief that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of this feature of the data. We use exhaustive Monte-Carlo simulations to investigate the importance of restrictions implied by common-cyclical features for estimates and forecasts based on vector autoregressive models. First, we show that the "best" empirical model developed without common cycle restrictions need not nest the "best" model developed with those restrictions. This is due to possible differences in the lag lengths chosen by model selection criteria for the two alternative models. Second, we show that the costs of ignoring common cyclical features in vector autoregressive modelling can be high, both in terms of forecast accuracy and efficient estimation of variance decomposition coefficients. Third, we find that the Hannan-Quinn criterion performs best among model selection criteria in simultaneously selecting the lag length and rank of vector autoregressions.
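A minimal illustration of the kind of lag-selection comparison the simulations perform, using statsmodels on invented VAR(2) data; the paper's actual Monte-Carlo design, with common-cycle restrictions, is far more extensive.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# Simulated stable VAR(2) data, purely for illustration.
rng = np.random.default_rng(0)
y = np.zeros((500, 2))
for t in range(2, 500):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal(size=2)

order = VAR(y).select_order(maxlags=8)
print(order.selected_orders)   # lag picked by 'aic', 'bic', 'hqic' and 'fpe'
```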

Relevance: 30.00%

Abstract:

Despite the belief, supported by recent applied research, that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of these data "features." We use exhaustive Monte-Carlo simulations to investigate the importance of restrictions implied by common-cyclical features for estimates and forecasts based on vector autoregressive and error correction models. First, we show that the "best" empirical model developed without common cycles restrictions need not nest the "best" model developed with those restrictions, due to the use of information criteria for choosing the lag order of the two alternative models. Second, we show that the costs of ignoring common-cyclical features in VAR analysis may be high in terms of forecasting accuracy and efficiency of estimates of variance decomposition coefficients. Although these costs are more pronounced when the lag order of VAR models is known, they are also non-trivial when it is selected using the conventional tools available to applied researchers. Third, we find that if the data have common-cyclical features and the researcher wants to use an information criterion to select the lag length, the Hannan-Quinn criterion is the most appropriate, since the Akaike and the Schwarz criteria have a tendency to over- and under-predict the lag length, respectively, in our simulations.