881 results for Regression-based decomposition.


Relevance: 100.00%

Abstract:

This paper performs an empirical decomposition of international inequality in the ecological footprint, quantifying the extent to which explanatory variables such as a country’s affluence, economic structure, demographic characteristics, climate and technology contributed to international differences in natural resource consumption over the period 1993-2007. We use a regression-based inequality decomposition approach. The methodology qualitatively extends the results obtained in standard environmental impact regressions, as it encompasses further social dimensions of the sustainable development concept, i.e. equity within generations. The results point to prioritizing policies that take both present and future generations into account.

Relevance: 100.00%

Abstract:

This paper uses the possibilities provided by the regression-based inequality decomposition (Fields, 2003) to explore the contribution of different explanatory factors to international inequality in CO2 emissions per capita. In contrast to previous emissions inequality decompositions, which were based on identity relationships (Duro and Padilla, 2006), this methodology does not impose any a priori specific relationship. Thus, it allows an assessment of the contribution to inequality of different relevant variables. In short, the paper appraises the relative contributions of affluence, sectoral composition, demographic factors and climate. The analysis is applied to selected years of the period 1993–2007. The results show the important (though decreasing) share of the contribution of demographic factors, as well as a significant contribution of affluence and sectoral composition.
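The Fields (2003) approach cited above can be sketched in a few lines. This is a hedged illustration on simulated stand-in data (the factor count, sample size and coefficients below are ours, not the paper's): each factor j receives the relative share s_j = cov(b_j x_j, y) / var(y), and the factor shares plus the residual share sum to one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # hypothetical number of countries

# Hypothetical explanatory factors (stand-ins for affluence,
# sectoral composition, demographics, climate).
X = rng.normal(size=(n, 4))
beta_true = np.array([0.8, 0.4, 0.3, 0.1])
y = 1.0 + X @ beta_true + rng.normal(scale=0.5, size=n)  # e.g. log emissions per capita

# OLS fit with an intercept.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
b = coef[1:]

# Fields (2003) relative factor shares: s_j = cov(b_j * x_j, y) / var(y).
# The factor shares and the residual share sum to one by construction.
shares = np.array([np.cov(b[j] * X[:, j], y)[0, 1] for j in range(4)])
shares /= np.var(y, ddof=1)
residual_share = 1.0 - shares.sum()

print(shares.round(3), round(residual_share, 3))
```

The decomposition inherits whatever specification the regression uses, which is exactly why, as the abstract notes, no a priori identity relationship has to be imposed.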

Relevance: 100.00%

Abstract:

This paper performs an empirical decomposition of international inequality in the ecological footprint, quantifying the extent to which explanatory variables such as a country’s affluence, economic structure, demographic characteristics, climate and technology contributed to international differences in natural resource consumption over the period 1993-2007. We use a regression-based inequality decomposition approach. The methodology qualitatively extends the results obtained in standard environmental impact regressions, as it encompasses further social dimensions of the sustainable development concept, i.e. equity within generations. The results point to prioritizing policies that take both present and future generations into account.
Keywords: Ecological Footprint Inequality, Regression-Based Inequality Decomposition, Intragenerational equity, Sustainable development.

Relevance: 100.00%

Abstract:

Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When regression problems involve covariates of more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model offers a promising way to reduce the model’s dimensionality to a manageable level, leading to efficient estimation. Most existing tensor-based methods estimate each individual regression problem independently, using a tensor decomposition that allows the simultaneous projection of an input tensor in more than one direction along each mode. In practice, however, multi-dimensional data are often collected under the same or very similar conditions, so the data share some common latent components while each regression task can also have its own independent parameters. It is therefore beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker decomposition, which simultaneously identifies both the components of the parameters common to all the regression tasks and the independent factors contributing to each particular task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modeling further reduce the total number of parameters, with lower memory cost than their tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
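To make the dimensionality reduction concrete, here is a minimal numpy sketch, not the paper's estimator, of a Tucker-factorised coefficient tensor; the dimensions and ranks below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
p = (8, 8, 8)      # tensor covariate dimensions (hypothetical)
r = (2, 2, 2)      # Tucker ranks (hypothetical)

# Tucker-factorised coefficient tensor: B = G x1 U1 x2 U2 x3 U3,
# i.e. a small core G expanded by one factor matrix per mode.
G = rng.normal(size=r)
U = [rng.normal(size=(p[k], r[k])) for k in range(3)]
B = np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])

# A full coefficient tensor stores prod(p) parameters; the Tucker
# factorisation stores only the core plus the factor matrices.
n_full = int(np.prod(p))
n_tucker = int(np.prod(r)) + sum(p[k] * r[k] for k in range(3))

# Prediction for one tensor covariate X is the inner product <X, B>.
X = rng.normal(size=p)
y_hat = np.einsum('ijk,ijk->', X, B)

print(n_full, n_tucker)  # 512 vs 56
```

Sharing the factor matrices across regression tasks while giving each task its own core (or extra sparse factors) is one way to realise the "common plus independent components" idea the abstract describes.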

Relevance: 100.00%

Abstract:

The present study evaluates the performance of four methods for estimating the regression coefficients used to make statistical decisions about intervention effectiveness in single-case designs. Ordinary least squares estimation is compared to two correction techniques dealing with general trend and one eliminating autocorrelation whenever it is present. Type I error rates and statistical power are studied for experimental conditions defined by the presence or absence of treatment effect (change in level or in slope), general trend, and serial dependence. The results show that empirical Type I error rates do not approximate the nominal ones in the presence of autocorrelation or general trend when ordinary or generalized least squares is applied. The techniques controlling for trend show lower false-alarm rates, but prove insufficiently sensitive to existing treatment effects. Consequently, using the statistical significance of the regression coefficients to detect treatment effects is not recommended for short data series.
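The inflation of Type I error rates under serial dependence can be illustrated with a small simulation. This is a sketch under assumed settings (the series length, AR(1) coefficient and level-change design below are ours, not the study's conditions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, n_a = 20, 10          # short series: 10 baseline + 10 "treatment" points
phase = (np.arange(n) >= n_a).astype(float)
t_idx = np.arange(n, dtype=float)
A = np.column_stack([np.ones(n), t_idx, phase])  # intercept, trend, level change

def reject_level_change(y, alpha=0.05):
    """OLS t-test on the level-change (phase) coefficient."""
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    df = n - A.shape[1]
    s2 = resid @ resid / df
    se = np.sqrt(s2 * np.linalg.inv(A.T @ A)[2, 2])
    return abs(coef[2] / se) > stats.t.ppf(1 - alpha / 2, df)

# Simulate under the null (no treatment effect) with AR(1) errors and
# count how often the nominal 5% test rejects.
rho = 0.6
reps, rejections = 2000, 0
for _ in range(reps):
    e = np.empty(n)
    e[0] = rng.normal()
    for i in range(1, n):
        e[i] = rho * e[i - 1] + rng.normal()
    rejections += reject_level_change(e)

print(rejections / reps)  # well above the nominal 0.05
```

With positively autocorrelated errors, the phase dummy picks up slow drifts in the noise, which is exactly the mechanism behind the inflated empirical Type I error rates the study reports.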

Relevance: 100.00%

Abstract:

Abstract taken from the publication.

Relevance: 100.00%

Abstract:

A novel sparse kernel density estimator is derived based on a regression approach, which selects a very small subset of significant kernels by means of the D-optimality experimental design criterion using an orthogonal forward selection procedure. The weights of the resulting sparse kernel model are calculated using the multiplicative nonnegative quadratic programming algorithm. The proposed method is computationally attractive, in comparison with many existing kernel density estimation algorithms. Our numerical results also show that the proposed method compares favourably with other existing methods, in terms of both test accuracy and model sparsity, for constructing kernel density estimates.

Relevance: 100.00%

Abstract:

This paper derives an efficient algorithm for constructing sparse kernel density (SKD) estimates. The algorithm first selects a very small subset of significant kernels using an orthogonal forward regression (OFR) procedure based on the D-optimality experimental design criterion. The weights of the resulting sparse kernel model are then calculated using a modified multiplicative nonnegative quadratic programming algorithm. Unlike most SKD estimators, the proposed D-optimality regression approach is an unsupervised construction algorithm and does not require an empirical desired response for the kernel selection task. The strength of the D-optimality OFR lies in the fact that the algorithm automatically selects a small subset of the most significant kernels, those related to the largest eigenvalues of the kernel design matrix, which account for most of the energy of the kernel training data; this also guarantees the most accurate kernel weight estimate. The proposed method is also computationally attractive in comparison with many existing SKD construction algorithms. Extensive numerical investigation demonstrates the ability of this regression-based approach to efficiently construct a very sparse kernel density estimate with excellent test accuracy, and our results show that the proposed method compares favourably with other existing sparse methods, in terms of test accuracy, model sparsity and complexity, for constructing kernel density estimates.
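As a loose illustration, and not the authors' exact algorithm, the weight-estimation step can be sketched with a simplified multiplicative nonnegative update on the simplex. The kernel centres here are taken arbitrarily rather than by D-optimality OFR, and the update rule is a simplified variant of multiplicative nonnegative quadratic programming:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=200)            # training sample (simulated)
centres = data[:10]                    # pretend these survived kernel selection
h = 0.5                                # assumed kernel width

def gauss(x, c):
    return np.exp(-0.5 * ((x - c) / h) ** 2) / (h * np.sqrt(2 * np.pi))

# Weight QP: minimise 0.5 w'Bw - c'w subject to w >= 0, sum(w) = 1.
K = gauss(data[:, None], centres[None, :])   # N x M kernel responses
B = K.T @ K / len(data)
c = K.mean(axis=0)

# Simplified multiplicative nonnegative update (all entries of B and c
# are positive for Gaussian kernels), renormalised to the simplex.
w = np.full(len(centres), 1.0 / len(centres))
for _ in range(200):
    w = w * c / (B @ w)
    w /= w.sum()

density = lambda x: gauss(x, centres) @ w
print(w.round(3))
```

The multiplicative form keeps the weights nonnegative automatically, which is the appeal of this family of updates for mixture-weight estimation.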

Relevance: 100.00%

Abstract:

We consider a class of sampling-based decomposition methods to solve risk-averse multistage stochastic convex programs. We prove a formula for the computation of the cuts necessary to build the outer linearizations of the recourse functions. This formula can be used to obtain an efficient implementation of Stochastic Dual Dynamic Programming applied to convex nonlinear problems. We prove the almost sure convergence of these decomposition methods when the relatively complete recourse assumption holds. We also prove the almost sure convergence of these algorithms when applied to risk-averse multistage stochastic linear programs that do not satisfy the relatively complete recourse assumption. The analysis is first done assuming the underlying stochastic process is interstage independent and discrete, with a finite set of possible realizations at each stage. We then indicate two ways of extending the methods and convergence analysis to the case when the process is interstage dependent.
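The cut computation at the heart of such decomposition methods can be illustrated on a toy piecewise-linear recourse function. The scenario data below are invented for illustration, and this is a plain Benders-style outer linearisation for the risk-neutral case, not the paper's risk-averse formula:

```python
import numpy as np

# Toy recourse function Q(x) = E[ c * max(d - x, 0) ] over discrete
# demand scenarios (hypothetical numbers): piecewise linear and convex.
c = 2.0
demands = np.array([3.0, 5.0, 8.0])
probs = np.array([0.3, 0.5, 0.2])

def Q(x):
    return float(probs @ (c * np.maximum(demands - x, 0.0)))

def cut_at(x_bar):
    """Outer-linearisation cut Q(x) >= alpha + g * x, tight at x_bar.

    The subgradient aggregates each scenario's slope -c * 1{d > x_bar},
    weighted by its probability."""
    g = float(probs @ (-c * (demands > x_bar)))
    alpha = Q(x_bar) - g * x_bar
    return alpha, g

alpha, g = cut_at(4.0)
print(alpha, g)  # the cut 8.2 - 1.4 x under-estimates Q and touches it at x = 4
```

Accumulating such cuts at trial points is what builds the outer approximation of the recourse function in Stochastic Dual Dynamic Programming; the paper's contribution is the corresponding formula and convergence analysis in the risk-averse, possibly nonlinear setting.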

Relevance: 100.00%

Abstract:

The aggregation theory of mathematical programming is used to study decentralization in convex programming models. A two-level organization is considered and an aggregation-disaggregation scheme is applied to such a divisionally organized enterprise. In contrast to the known aggregation techniques, in which the decision variables (production plans) are aggregated, it is proposed to aggregate the resources allocated by the central planning department among the divisions. This approach results in a decomposition procedure in which the central unit has no optimization problem to solve and need only average local information provided by the divisions.
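A toy sketch of this resource-aggregation idea, under assumed concave divisional payoffs (the payoff form and numbers are ours, not the paper's model): the centre holds the total resource and merely responds to the marginal values the divisions report, without solving any optimisation problem of its own.

```python
import numpy as np

# Two divisions with concave local payoffs f_i(r) = a_i * sqrt(r)
# (hypothetical stand-ins for divisional production problems).
a = np.array([1.0, 2.0])
R = 3.0                       # total resource held by the centre

def marginal(r):
    """Each division reports only its local marginal value f_i'(r_i)."""
    return a / (2.0 * np.sqrt(r))

# The centre shifts resource from the division with the lower reported
# marginal value to the one with the higher, until the reports agree.
r = np.array([R / 2, R / 2])
step = 0.01
for _ in range(5000):
    m = marginal(r)
    r = np.clip(r + step * (m - m.mean()), 1e-6, None)
    r *= R / r.sum()          # keep the total allocation fixed

print(r.round(2))  # converges to the split r_i proportional to a_i**2
```

At the fixed point the marginal values are equalised across divisions, the usual optimality condition for allocating a scarce resource among concave payoffs, and all the centre ever needed was the averaged local reports.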