968 results for METALLORGANIC DECOMPOSITION


Relevance:

10.00%

Publisher:

Abstract:

Scarcities of environmental services are no longer merely a remote hypothesis. Consequently, the analysis of their inequalities between nations becomes of paramount importance for the achievement of sustainability, whether in terms of international policy or of universalist ethical principles of equity. This paper aims, on the one hand, to review methodological aspects of inequality measurement as applied to environmental data and, on the other, to extend the scarce empirical evidence on the international distribution of the Ecological Footprint (EF) by using a longer EF time series. Most of the techniques currently prominent in the literature are reviewed and then tested on EF data, with interesting results. We look in depth at Lorenz dominance analyses and consider the underlying properties of different inequality indices. The indices that best fit environmental inequality measurement are CV2 and GE(2), because of their neutrality property; however, a trade-off may arise when subgroup decompositions are performed. A weighting-factor decomposition method is proposed in order to isolate weighting-factor changes in inequality growth rates. Finally, the only non-ambiguous way of decomposing inequality by source is the natural decomposition of CV2, which additionally allows the interpretation of marginal term contributions. Empirically, the paper contributes to the measurement of environmental inequality in the EF: this inequality has been quite stable, and its change over time is due to changes in the per capita vector rather than to population changes. Almost the entirety of EF inequality is explained by differences in means between the World Bank country groups. This finding suggests that international environmental agreements should be pursued on a regional basis in order to achieve greater consensus among the parties involved. Additionally, the source decomposition warns of the dangers of confining CO2 emissions reduction to crop-based energies, because of the implications for the satisfaction of basic needs. Keywords: ecological footprint; ecological inequality measurement; inequality decomposition.
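
The natural decomposition of CV2 by source mentioned above follows from var(x) = sum_k cov(x_k, x) when x = sum_k x_k. Below is a minimal, unweighted Python sketch of that identity (the paper additionally deals with population weighting, which this sketch omits; the data are hypothetical):

```python
import numpy as np

def cv2_source_decomposition(sources):
    """Natural decomposition of the squared coefficient of variation (CV2)
    by source: when x = sum_k x_k, var(x) = sum_k cov(x_k, x), so each
    source contributes cov(x_k, x) / mean(x)**2 and the contributions add
    up to CV2 exactly. Unweighted sketch; population weighting omitted.
    sources: dict of source name -> per-country array (equal lengths)."""
    names = list(sources)
    x = np.sum([sources[n] for n in names], axis=0)  # total per country
    mu = x.mean()
    contrib = {n: np.cov(sources[n], x, bias=True)[0, 1] / mu**2
               for n in names}
    return x.var() / mu**2, contrib  # total CV2, additive source shares

# Hypothetical EF split into two sources across five countries
rng = np.random.default_rng(0)
sources = {"crop": rng.uniform(0.5, 2.0, 5), "carbon": rng.uniform(0.2, 3.0, 5)}
cv2, shares = cv2_source_decomposition(sources)
print(cv2, shares, sum(shares.values()))  # the shares sum to CV2
```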

Relevance:

10.00%

Publisher:

Abstract:

Recently, White (2007) analysed international inequalities in the Ecological Footprint per capita (hereafter EF) on the basis of a two-factor decomposition of an index from the Atkinson family (Atkinson, 1970). Specifically, that paper evaluated the separate roles of environmental intensity (EF/GDP) and average income as explanatory factors for these global inequalities. However, beyond other concerns about its appeal, this decomposition suffers from a serious limitation: it omits the role played by the likely correlation between the factors (York et al., 2005). As an alternative, this paper proposes a decomposition of a conceptually similar index, Theil's (Theil, 1967), which permits a clean decomposition in terms of the roles of both factors plus an inter-factor correlation, in line with Duro and Padilla (2006). This decomposition can, in turn, be extended to group inequality components (Shorrocks, 1980), an analysis that cannot be conducted with the Atkinson indices. The proposed methodology is applied empirically to the analysis of international inequalities in EF per capita over the 1980-2007 period and, amongst other results, we find that the interactive component does explain, to a significant extent, the apparent pattern of stability observed in overall international inequality. Keywords: ecological footprint; international environmental distribution; inequality decomposition.
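
For reference, the group decomposability of the Theil index (Shorrocks, 1980) that the abstract contrasts with the Atkinson family splits total inequality into a between-group and a within-group term. A minimal sketch of the population-weighted Theil index and this decomposition, with hypothetical toy data (this illustrates the group extension, not the paper's two-factor decomposition itself):

```python
import numpy as np

def theil_group_decomposition(x, pop, groups):
    """Population-weighted Theil index, T = sum_i p_i (x_i/mu) ln(x_i/mu),
    split into between- and within-group components (Shorrocks, 1980):
    T = sum_g p_g (mu_g/mu) ln(mu_g/mu) + sum_g p_g (mu_g/mu) T_g."""
    x = np.asarray(x, float)
    p = np.asarray(pop, float) / np.sum(pop)      # population shares
    groups = np.asarray(groups)
    mu = np.sum(p * x)                            # overall per capita mean
    total = np.sum(p * (x / mu) * np.log(x / mu))
    between = within = 0.0
    for g in np.unique(groups):
        m = groups == g
        p_g = p[m].sum()
        mu_g = np.sum(p[m] * x[m]) / p_g
        T_g = np.sum((p[m] / p_g) * (x[m] / mu_g) * np.log(x[m] / mu_g))
        between += p_g * (mu_g / mu) * np.log(mu_g / mu)
        within += p_g * (mu_g / mu) * T_g
    return total, between, within                 # total == between + within

# Hypothetical EF per capita for four countries in two groups
T, B, W = theil_group_decomposition(
    x=[1.2, 0.8, 4.5, 5.1], pop=[50, 30, 10, 8],
    groups=["low", "low", "high", "high"])
print(T, B + W)  # the two agree
```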

Relevance:

10.00%

Publisher:

Abstract:

This work analyzes whether the relationship between risk and return predicted by the Capital Asset Pricing Model (CAPM) holds in the Brazilian stock market. The analysis is based on a discrete wavelet decomposition across different time scales. This technique makes it possible to examine the relationship at different time horizons, from the short term (2 to 4 days) up to the long term (64 to 128 days). The results indicate a negative or null relationship between systematic risk and returns in Brazil from 2004 to 2007. Since the average excess return of the market portfolio over the risk-free asset was positive during that period, this relationship would be expected to be positive; that is, higher systematic risk should result in higher excess returns, which did not occur. Therefore, adequate compensation for systematic risk was not observed in the Brazilian market during that period. The scales that proved most significant for the risk-return relation were the first three, corresponding to short-term horizons. When the data are treated year by year, thereby separating positive and negative premiums, the risk-return relation predicted by the CAPM shows some relevance in certain years; however, this pattern did not persist throughout the years. Therefore, there is no evidence strong enough to confirm that asset pricing follows the model.
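
The abstract does not specify the estimation details; a common way to implement a scale-by-scale CAPM test is to take a discrete wavelet transform of asset and market returns and estimate a beta from the detail coefficients at each level. A hedged sketch using PyWavelets (wavelet choice, level count, and data are assumptions, not the authors' exact procedure):

```python
import numpy as np
import pywt

def scale_betas(asset_ret, market_ret, wavelet="db4", levels=6):
    """Estimate a CAPM-style beta at each wavelet scale (level j roughly
    captures fluctuations of 2**j to 2**(j+1) days for daily data) by
    regressing asset detail coefficients on market detail coefficients."""
    a = pywt.wavedec(asset_ret, wavelet, level=levels)
    m = pywt.wavedec(market_ret, wavelet, level=levels)
    betas = {}
    # wavedec returns [approx, detail_level, ..., detail_1]
    for j in range(1, levels + 1):
        da, dm = a[-j], m[-j]  # detail coefficients at level j
        betas[j] = np.cov(da, dm, bias=True)[0, 1] / np.var(dm)
    return betas

# Hypothetical daily excess returns with a true beta of 0.9
rng = np.random.default_rng(1)
mkt = rng.normal(0.0, 0.01, 1024)
asset = 0.9 * mkt + rng.normal(0.0, 0.005, 1024)
print(scale_betas(asset, mkt))  # scale betas should hover around 0.9
```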

Relevance:

10.00%

Publisher:

Abstract:

A conceptually new approach is introduced for decomposing the molecular energy calculated at the density functional level of theory into a sum of one- and two-atom energy components, and it is realized in the "fuzzy atoms" framework. ("Fuzzy atoms" means that three-dimensional physical space is divided into atomic regions that have no sharp boundaries but instead exhibit a continuous transition from one to another.) The new scheme uses the new concept of "bond order density" to calculate the diatomic exchange energy components, and it yields values unexpectedly close to those calculated with exact (Hartree-Fock) exchange for the same Kohn-Sham orbitals.
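
The abstract does not describe the space partitioning itself; as one common realization of "fuzzy atoms", the Becke (1988) scheme assigns every point in space smooth atomic weights that sum to one. The sketch below illustrates that idea only and is not the authors' scheme:

```python
import numpy as np

def becke_weights(points, centers, k=3):
    """Becke-style fuzzy-atom weights: each point r gets a weight
    w_A(r) in [0, 1] for every atom A, with sum_A w_A(r) = 1, giving the
    'no sharp boundaries' partition mentioned in the abstract."""
    def s(mu):
        # smoothed step: iterate f(mu) = 1.5*mu - 0.5*mu**3, k times
        for _ in range(k):
            mu = 1.5 * mu - 0.5 * mu**3
        return 0.5 * (1.0 - mu)

    P = np.ones((len(points), len(centers)))
    for A, cA in enumerate(centers):
        for B, cB in enumerate(centers):
            if A == B:
                continue
            rA = np.linalg.norm(points - cA, axis=1)
            rB = np.linalg.norm(points - cB, axis=1)
            mu = (rA - rB) / np.linalg.norm(cA - cB)  # elliptic coordinate
            P[:, A] *= s(mu)
    return P / P.sum(axis=1, keepdims=True)           # normalize over atoms

# Two 'atoms' on the x-axis; weights cross over smoothly near the midpoint
centers = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0]])
pts = np.array([[x, 0.0, 0.0] for x in np.linspace(0.1, 1.3, 5)])
print(becke_weights(pts, centers))
```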

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we characterize the non-emptiness of the equity core (Selten, 1978) and provide an easy-to-implement method for computing the Lorenz-maximal allocations in the equal division core (Dutta and Ray, 1991). Both results are based on a geometrical decomposition of the equity core as a finite union of polyhedra. Keywords: cooperative game; equity core; equal division core; Lorenz domination. JEL classification: C71.
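
Lorenz domination, the ordering behind the Lorenz-maximal allocations above, is straightforward to test numerically: sort both allocations and compare their cumulative sums. A small sketch (not code from the paper):

```python
import numpy as np

def lorenz_dominates(x, y):
    """Check whether allocation x Lorenz-dominates allocation y (same
    total): after sorting in ascending order, every partial sum of x must
    be >= the corresponding partial sum of y, with at least one strict
    inequality."""
    cx = np.cumsum(np.sort(np.asarray(x, float)))
    cy = np.cumsum(np.sort(np.asarray(y, float)))
    assert np.isclose(cx[-1], cy[-1]), "allocations must share the same total"
    return bool(np.all(cx >= cy - 1e-12) and np.any(cx > cy + 1e-12))

print(lorenz_dominates([3, 3, 3], [1, 3, 5]))  # True: more egalitarian
print(lorenz_dominates([1, 3, 5], [3, 3, 3]))  # False
```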

Relevance:

10.00%

Publisher:

Abstract:

The Computational Biophysics Group at the Universitat Pompeu Fabra (GRIB-UPF) hosts two unique computational resources dedicated to the execution of large-scale molecular dynamics (MD) simulations: (a) the ACEMD molecular dynamics software, used on standard personal computers with graphics processing units (GPUs); and (b) the GPUGRID.net computing network, supported by users distributed worldwide who volunteer GPUs for biomedical research. We leveraged these resources and developed studies, protocols, and open-source software to elucidate the energetics and pathways of a number of biomolecular systems, with a special focus on flexible proteins with many degrees of freedom. First, we characterized ion permeation through the bactericidal model protein Gramicidin A, conducting one of the largest studies to date with the steered-MD biasing methodology. Next, we addressed an open problem in structural biology, the determination of drug-protein association kinetics: we reconstructed the binding free energy and the association and dissociation rates of a drug-like model system through a spatial decomposition and a Markov-chain analysis. The work was published in the Proceedings of the National Academy of Sciences and became one of the few landmark papers elucidating a ligand-binding pathway. Furthermore, we investigated the unstructured Kinase Inducible Domain (KID), a 28-residue peptide central to signalling and transcriptional response; the kinetics of this challenging system was modelled with a Markovian approach in collaboration with Frank Noé's group at the Freie Universität Berlin. The impact of the funding includes three peer-reviewed publications in high-impact journals; three more papers under review; four MD analysis components released as open-source software; MD protocols; didactic material; and code for the hosting group.
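
The Markov-chain analysis mentioned for the binding kinetics is, in its simplest form, a Markov state model built from discretized trajectories: count transitions at a fixed lag time, row-normalize, and read off stationary populations. A bare-bones sketch with hypothetical state labels (production work uses dedicated MSM tooling and reversible estimators):

```python
import numpy as np

def msm_from_dtrajs(dtrajs, n_states, lag=1):
    """Build a minimal Markov state model: count transitions at the given
    lag, row-normalize into a transition matrix, and extract the
    stationary distribution as the leading left eigenvector."""
    C = np.zeros((n_states, n_states))
    for traj in dtrajs:                       # traj: sequence of state labels
        for t in range(len(traj) - lag):
            C[traj[t], traj[t + lag]] += 1
    T = C / C.sum(axis=1, keepdims=True)      # row-stochastic matrix
    w, V = np.linalg.eig(T.T)                 # stationary dist = left eigvec
    pi = np.real(V[:, np.argmax(np.real(w))])
    return T, pi / pi.sum()

# Hypothetical 3-state trajectory (e.g. unbound / intermediate / bound)
traj = [0, 0, 1, 1, 2, 2, 2, 1, 0, 0, 1, 2, 2, 2]
T, pi = msm_from_dtrajs([traj], n_states=3)
print(T, pi)
```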

Relevance:

10.00%

Publisher:

Abstract:

To date, state-of-the-art estimates of seismic material parameters from multi-component sea-bed seismic data have been based on the assumption that the sea-bed consists of a fully elastic half-space. In reality, however, the shallow sea-bed generally consists of soft, unconsolidated sediments characterized by strong to very strong seismic attenuation. To explore the potential implications, we apply a state-of-the-art elastic decomposition algorithm to synthetic data for a range of canonical sea-bed models consisting of a viscoelastic half-space of varying attenuation. We find that in the presence of strong seismic attenuation, as quantified by Q-values of 10 or less, significant errors arise in the conventional elastic estimation of seismic properties. Tests on synthetic data indicate that these errors can be largely avoided by accounting for the inherent attenuation of the seafloor when estimating the seismic parameters. This can be achieved by replacing the real-valued expressions for the elastic moduli in the governing equations of the parameter estimation with their complex-valued viscoelastic equivalents. The practical application of our parameter-estimation procedure yields realistic estimates of the elastic seismic material properties of the shallow sea-bed, while the corresponding Q-estimates seem to be biased towards values that are too low, particularly for S-waves. Given that the estimation of inelastic material parameters is notoriously difficult, particularly in the immediate vicinity of the sea-bed, this is expected to be of interest and importance for civil and ocean engineering purposes.
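
The substitution described, real elastic moduli replaced by complex viscoelastic equivalents, is commonly implemented with the low-loss constant-Q convention M* = M(1 + i/Q). A minimal sketch under that assumption (the paper's exact formulation may differ; the sediment values are hypothetical):

```python
import numpy as np

def viscoelastic_moduli(vp, vs, rho, Qp, Qs):
    """Replace real-valued elastic moduli by complex-valued viscoelastic
    equivalents via the low-loss constant-Q convention M* = M (1 + i/Q),
    mirroring the substitution described in the abstract."""
    M_p = rho * vp**2 * (1 + 1j / Qp)   # complex P-wave modulus
    mu = rho * vs**2 * (1 + 1j / Qs)    # complex shear modulus
    lam = M_p - 2.0 * mu                # complex Lame parameter
    return lam, mu

# Hypothetical soft-sediment values with strong S-wave attenuation (Qs = 10)
lam, mu = viscoelastic_moduli(vp=1600.0, vs=200.0, rho=1800.0, Qp=30.0, Qs=10.0)
print(lam, mu)
```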

Relevance:

10.00%

Publisher:

Abstract:

Little attention has so far been paid to the influence of the chemical nature of the substance when measuring δ15N by elemental analysis (EA)-isotope ratio mass spectrometry (IRMS). Although the bulk nitrogen isotope analysis of organic material is not in question, the literature from different disciplines using IRMS provides hints that the quantitative conversion of nitrate into nitrogen presents difficulties. We observed abnormal series of δ15N values for laboratory standards and nitrates. These unexpected results were shown to be related to the tailing of the nitrogen peak of nitrate-containing compounds. A series of experiments was set up to investigate the cause of this phenomenon, using ammonium nitrate (NH4NO3) and potassium nitrate (KNO3) samples, two organic laboratory standards, and the international secondary reference materials IAEA-N1 and IAEA-N2, two ammonium sulphates [(NH4)2SO4], and IAEA-NO-3, a potassium nitrate. In experiment 1, we used graphite and vanadium pentoxide (V2O5) as additives to observe whether they could enhance the decomposition (combustion) of nitrates. In experiment 2, we tested another elemental-analyser configuration, including an additional section of reduced copper, in order to see whether the tailing could originate from an incomplete reduction process. Finally, we modified several parameters of the method and observed their influence on the peak shape, the δ15N value, and the nitrogen content (in weight percent) of the target substances. We found the best results using mere thermal decomposition in helium, under exclusion of any oxygen. We show that the analytical procedure used for organic samples should not be used for nitrates because of their different chemical nature. We present the best performance for one set of sample-introduction parameters for the analysis of nitrates, as well as for the ammonium sulphate reference materials IAEA-N1 and IAEA-N2. We discuss these results in view of the thermochemistry of the substances and the analytical technique itself. The results emphasise the difference in chemical nature between inorganic and organic samples, which necessarily involves distinct thermochemistry when they are analysed by EA-IRMS. Therefore, they should not be processed using the same analytical procedure. This clearly impacts on the way international secondary reference materials should be used for the calibration of organic laboratory standards.
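
For reference, the δ15N values discussed throughout follow standard delta notation relative to atmospheric N2 (AIR): δ15N = (R_sample/R_standard − 1) × 1000 ‰, with R = 15N/14N. A minimal calculation (the sample ratio below is hypothetical):

```python
# Standard delta notation for the nitrogen isotope values in the abstract:
# delta-15N in per mil (‰) relative to atmospheric N2 (AIR).
R_AIR = 0.0036765  # commonly cited 15N/14N ratio of atmospheric N2

def delta15N(R_sample, R_standard=R_AIR):
    """delta15N = (R_sample / R_standard - 1) * 1000, in per mil."""
    return (R_sample / R_standard - 1.0) * 1000.0

print(delta15N(0.0036839))  # hypothetical sample, slightly enriched: ~ +2.0 ‰
```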

Relevance:

10.00%

Publisher:

Abstract:

In this work we describe the use of bilinear statistical models as a means of factoring shape variability into two components, attributed to inter-subject variation and to the intrinsic dynamics of the human heart. We show that it is feasible to reconstruct the shape of the heart at discrete points in the cardiac cycle: provided we are given a small number of shape instances representing the same heart at different points in the same cycle, we can use the bilinear model to establish this. Using a temporal and a spatial alignment step in the preprocessing of the shapes, around half of the reconstruction errors were on the order of the axial image resolution of 2 mm, and over 90% were within 3.5 mm. From this, we conclude that the dynamics were indeed separated from the inter-subject variability in our dataset.
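
The abstract does not spell out the fitting procedure; one standard way to fit a two-factor (subject × cardiac phase) bilinear model is the SVD-based approach of Tenenbaum and Freeman (2000). A minimal sketch of the asymmetric variant under that assumption, with hypothetical dimensions:

```python
import numpy as np

def fit_asymmetric_bilinear(Y, rank):
    """Fit an asymmetric bilinear model y[s, p] ~ A[p] @ b[s], separating
    phase-specific basis maps A[p] from subject coefficient vectors b[s],
    via one SVD. Y has shape (n_subjects, n_phases, n_dims) of aligned
    shape vectors."""
    n_subjects, n_phases, n_dims = Y.shape
    M = Y.transpose(1, 2, 0).reshape(n_phases * n_dims, n_subjects)
    U, sv, Vt = np.linalg.svd(M, full_matrices=False)
    A = (U[:, :rank] * sv[:rank]).reshape(n_phases, n_dims, rank)
    B = Vt[:rank, :]                      # one coefficient vector per subject
    return A, B

# Hypothetical data: 10 subjects, 8 cardiac phases, 30-dim shape vectors
rng = np.random.default_rng(2)
Y = rng.normal(size=(10, 8, 30))
A, B = fit_asymmetric_bilinear(Y, rank=5)
Yhat = np.einsum("pdr,rs->spd", A, B)     # reconstruct all shapes
print(np.linalg.norm(Y - Yhat) / np.linalg.norm(Y))  # rank-5 residual
```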

Relevance:

10.00%

Publisher:

Abstract:

The paper first presents a 10-year outlook for major Asian dairy markets (China, India, Indonesia, Japan, South Korea, Malaysia, the Philippines, Thailand, and Vietnam) based on a world dairy model. Then, using Heien and Wessells's technique, growth in dairy product consumption is decomposed into, and quantified as, the contributions of income growth, population growth, price changes, and urbanization. Using the world dairy model, the paper also analyzes the impact of alternative assumptions of higher income levels and technology development in Asia on Asian dairy consumption and world dairy prices. The outlook projects that Asian dairy consumption will continue to grow strongly over the next decade. The consumption decomposition suggests that this growth will be driven mostly by income and population growth and, as a result, will raise world dairy prices. The simulation results show that technology improvement in Asian countries would dampen world dairy prices while boosting domestic dairy consumption.
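
In the spirit of the Heien and Wessells technique named above, consumption growth can be decomposed into elasticity-weighted contributions of its drivers. The sketch below shows the arithmetic only; all elasticities and growth rates are hypothetical placeholders, not values from the paper:

```python
# Decompose total consumption growth (log-change per year) into the
# contributions of its drivers: contribution = elasticity * driver growth.
drivers = {
    "income":       {"elasticity": 0.60,  "growth": 0.050},  # per capita
    "population":   {"elasticity": 1.00,  "growth": 0.010},
    "price":        {"elasticity": -0.30, "growth": 0.020},
    "urbanization": {"elasticity": 0.20,  "growth": 0.015},
}

contributions = {k: v["elasticity"] * v["growth"] for k, v in drivers.items()}
total_growth = sum(contributions.values())
for k, c in contributions.items():
    print(f"{k:>13}: {c:+.4f} ({c / total_growth:+.1%} of total)")
print(f"total consumption growth ~ {total_growth:+.4f} per year")
```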

Relevance:

10.00%

Publisher:

Abstract:

AIMS: High-mobility group box 1 (HMGB1) is a nuclear protein, actively secreted by immune cells and passively released by necrotic cells, that initiates pro-inflammatory signalling through binding to the receptor for advanced glycation end-products. HMGB1 has been established as a key inflammatory mediator during myocardial infarction, but the proximal mechanisms responsible for myocardial HMGB1 expression and release in this setting remain unclear. Here, we investigated the possible involvement of peroxynitrite, a potent cytotoxic oxidant formed during myocardial infarction, in these processes. METHODS AND RESULTS: The ability of peroxynitrite to induce necrosis and HMGB1 release in vitro was evaluated in H9c2 cardiomyoblasts and in primary murine cardiac cells (myocytes and non-myocytes). In vivo, myocardial HMGB1 expression and nitrotyrosine content (a marker of peroxynitrite generation) were determined following myocardial ischaemia and reperfusion in rats, while peroxynitrite formation was inhibited with either of two peroxynitrite decomposition catalysts: 5,10,15,20-tetrakis(4-sulphonatophenyl)porphyrinato iron (III) (FeTPPS) or Mn(III)-tetrakis(4-benzoic acid)porphyrin chloride (MnTBAP). In all types of cells studied, peroxynitrite (100 μM) elicited significant necrosis, the loss of intracellular HMGB1, and its passive release into the medium. In vivo, myocardial ischaemia-reperfusion induced significant myocardial necrosis, cardiac nitrotyrosine formation, and marked overexpression of myocardial HMGB1. FeTPPS reduced nitrotyrosine, decreased infarct size, and suppressed HMGB1 overexpression, an effect that was similarly obtained with MnTBAP. CONCLUSION: These findings indicate that peroxynitrite is a key mediator of HMGB1 overexpression and release by cardiac cells, and they provide a novel mechanism linking myocardial oxidative/nitrosative stress with post-infarction myocardial inflammation.

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: While the prices of pharmaceuticals are relatively low in Greece, expenditure on them is growing more rapidly than almost anywhere else in the European Union. OBJECTIVE: To describe and explain the rise in drug expenditure by decomposing the increase into the contributions of changes in prices, changes in volumes, and a product-mix effect. METHODS: The decomposition of the growth in pharmaceutical expenditure in Greece over the period 1991-2006 was conducted using data from the largest social insurance fund (IKA), which covers more than 50% of the population. RESULTS: Real drug spending increased by 285%, despite a 58% decrease in the relative price of pharmaceuticals. The increase in expenditure is mainly attributable to a switch to more innovative, but more expensive, pharmaceuticals, indicated by a product-mix residual of 493% in the decomposition. A rising volume of drugs also plays a role, due to an increase in the number of prescriptions issued per doctor visit rather than to an increase in the number of visits or in the population size. CONCLUSIONS: Rising pharmaceutical expenditure is strongly determined by physicians' prescribing behaviour, which is not subject to any monitoring and for which there are no incentives to be cost-conscious.
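
The abstract does not give the index formulas; one standard multiplicative way to perform such a decomposition factors the expenditure index as E1/E0 = price index × volume index × product-mix residual. A hedged sketch with illustrative numbers (the index choices and data are assumptions, not the paper's):

```python
import numpy as np

# Price/volume/product-mix decomposition of expenditure growth:
# E1/E0 = price index * volume index * product-mix residual.
p0, q0 = np.array([10.0, 20.0]), np.array([100.0, 50.0])  # base prices/volumes
p1, q1 = np.array([9.0, 18.0]), np.array([80.0, 200.0])   # later period

E0, E1 = (p0 * q0).sum(), (p1 * q1).sum()
price_index = (p1 * q0).sum() / (p0 * q0).sum()    # Laspeyres price index
volume_index = q1.sum() / q0.sum()                 # crude total-volume index
mix_residual = (E1 / E0) / (price_index * volume_index)

print(f"expenditure index: {E1 / E0:.3f}")
print(f"price {price_index:.3f} x volume {volume_index:.3f} "
      f"x product-mix {mix_residual:.3f}")
# Here prices fall (0.900) yet spending more than doubles (2.160), with a
# large mix residual (1.286): a shift toward the more expensive product,
# qualitatively mirroring the pattern reported in the abstract.
```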

Relevance:

10.00%

Publisher:

Abstract:

Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses, such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot, one set of points, usually the rows of the data matrix, optimally represents the positions of the cases or sample units; these are weighted and usually standardized in some way, unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with their contributions to the low-dimensional solution. As in any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data. The method is illustrated with several examples, demonstrating how the standard biplot copes in different situations to give a joint map that needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories located on the periphery of the map, which gives the false impression that they are important.
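
Any biplot of this family starts from the singular-value decomposition X = U·S·Vᵀ, with rows given coordinates U·S^α and columns V·S^(1−α). The sketch below shows this generic SVD biplot; the paper's standard biplot adds the weighting and contribution-based scaling described above, which this sketch does not reproduce:

```python
import numpy as np

def svd_biplot_coords(X, alpha=1.0):
    """Generic SVD biplot: for column-centred X = U S V^T, rows get
    F = U S**alpha and columns get G = V S**(1 - alpha), so row-onto-column
    projections F @ G.T reproduce the (centred) data."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    F = U * s**alpha                  # row (case) coordinates
    G = Vt.T * s**(1.0 - alpha)       # column (variable) coordinates
    return F[:, :2], G[:, :2]         # keep the first two principal axes

# Toy example: the rank-2 biplot approximates the centred data
rng = np.random.default_rng(3)
X = rng.normal(size=(8, 4))
F, G = svd_biplot_coords(X)
Xc = X - X.mean(axis=0)
print(np.linalg.norm(F @ G.T - Xc) / np.linalg.norm(Xc))  # rank-2 residual
```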