63 results for mean intensity
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
"See the abstract at the beginning of the document in the attached file."
Abstract:
"See the abstract at the beginning of the document in the attached file."
Abstract:
The link between energy consumption and economic growth has been widely studied in the economic literature. Understanding this relationship is important from both an environmental and a socio-economic point of view, as energy consumption is crucial to economic activity and human environmental impact. This relevance is even higher for developing countries, since energy consumption per unit of output varies through the phases of development, increasing from an agricultural stage to an industrial one and then decreasing for certain service-based economies. In the Argentinean case, the relevance of energy consumption to economic development seems to be particularly important. While energy intensity seems to exhibit a U-shaped curve from 1990 to 2003, decreasing slightly after that year, total energy consumption increases over the period of analysis. Why does this happen? How can we relate this result to the sustainability debate? These questions are pressing given Argentina's dependence on hydrocarbons and the recent reduction in oil and natural gas reserves, which can lead to a lack of security of supply. In this paper we study the Argentinean energy consumption pattern for the period 1990-2007 in order to discuss current and future energy and economic sustainability. To this end, we develop a conventional analysis, studying energy intensity, and a non-conventional analysis, using the Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM) accounting methodology. Both methodologies show that the development process followed by Argentina has not been sufficient to assure sustainability in the long term. Instead of improving energy use, energy intensity has increased. The current composition of its energy mix, the recent economic crisis in Argentina, and its development path are some of the possible explanations.
Abstract:
This paper conducts an empirical analysis of the relationship between wage inequality, employment structure, and returns to education in urban areas of Mexico during the past two decades (1987-2008). Applying Melly's (2005) quantile-regression-based decomposition, we find that changes in wage inequality have been driven mainly by variations in educational wage premia. Additionally, we find that changes in employment structure, including occupation and firm size, have played a vital role. This evidence suggests that the changes in wage inequality in urban Mexico cannot be interpreted in terms of a skill-biased change; rather, they are the result of an increasing demand for skills during that period.
Abstract:
Kriging is an interpolation technique whose optimality criteria are based on normality assumptions either for observed or for transformed data. This is the case of normal, lognormal and multigaussian kriging. When kriging is applied to transformed scores, optimality of the obtained estimators becomes a cumbersome concept: back-transformed optimal interpolations in transformed scores are not optimal in the original sample space, and vice versa. This lack of compatible criteria of optimality induces a variety of problems in both point and block estimates. For instance, lognormal kriging, widely used to interpolate positive variables, has no straightforward way to build consistent and optimal confidence intervals for estimates. These problems are ultimately linked to the assumed space structure of the data support: for instance, positive values, when modelled with lognormal distributions, are assumed to be embedded in the whole real space, with the usual real space structure and Lebesgue measure.
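As a minimal numerical illustration of the back-transformation problem described above (a synthetic sketch, not the paper's method), the following shows that naively exponentiating an optimal log-scale mean targets the median rather than the mean of a lognormal variable, so a variance-based correction is needed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated lognormal variable: log(Z) ~ N(mu, sigma^2)
mu, sigma = 1.0, 0.8
z = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

log_mean = np.log(z).mean()   # optimal estimate in log space
naive = np.exp(log_mean)      # back-transform targets the median, not the mean
corrected = np.exp(log_mean + 0.5 * np.log(z).var())  # lognormal mean correction

print(naive, corrected, z.mean())  # naive underestimates the true mean
```

The gap between the naive and corrected values is exactly the kind of incompatibility between log-space and original-space optimality that the abstract refers to.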
Abstract:
Recently, White (2007) analysed the international inequalities in Ecological Footprints per capita (EF hereafter) based on a two-factor decomposition of an index from the Atkinson family (Atkinson, 1970). Specifically, that paper evaluated the separate roles of environmental intensity (EF/GDP) and average income as explanatory factors for these global inequalities. However, beyond other issues concerning its appeal, this decomposition suffers from a serious limitation: it omits the role exerted by probable inter-factor correlation (York et al., 2005). This paper proposes, as an alternative, a decomposition of a conceptually similar index, Theil's (Theil, 1967), which permits a clear decomposition in terms of the roles of both factors plus an inter-factor correlation, in line with Duro and Padilla (2006). This decomposition can, in turn, be extended to group inequality components (Shorrocks, 1980), an analysis that cannot be conducted with the Atkinson indices. The proposed methodology is implemented empirically to analyse the international inequalities in EF per capita for the 1980-2007 period and, amongst other results, we find that the interactive component explains, to a significant extent, the apparent pattern of stability observed in overall international inequalities.
Key words: ecological footprint; international environmental distribution; inequality decomposition
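The factor-plus-interaction idea can be sketched numerically. The following toy example uses hypothetical figures for four countries and a simplified population-weighted Theil index (not the exact Duro and Padilla (2006) weighting), treating the interaction term as the residual between the Theil index of EF per capita and the sum of the two factor indices:

```python
import numpy as np

def theil(x, pop_share):
    """Population-weighted Theil index of a per-capita variable x."""
    xbar = np.sum(pop_share * x)
    s = x / xbar
    return float(np.sum(pop_share * s * np.log(s)))

# Hypothetical figures for four countries (not data from the paper)
pop_share = np.array([0.4, 0.3, 0.2, 0.1])
intensity = np.array([0.8, 1.1, 1.5, 2.0])    # EF per unit of GDP
income    = np.array([30.0, 12.0, 6.0, 3.0])  # GDP per capita
ef = intensity * income                       # EF per capita

t_total = theil(ef, pop_share)
t_intensity = theil(intensity, pop_share)
t_income = theil(income, pop_share)

# Residual inter-factor term: negative here because intensity and income
# are negatively correlated across the toy countries
interaction = t_total - t_intensity - t_income
print(t_total, t_intensity, t_income, interaction)
```

A negative interaction term, as in this toy case, means the two factors partly offset each other, which is one way overall inequality can appear stable even while the factors move.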
Abstract:
There is almost no case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for interpretation. We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot co-exist with nepheline. Another common essential zero is a North azimuth; however, we can always exchange that zero for the value of 360°. These are known as "essential zeros", but what can we do with "rounded zeros" that result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment used will generate spurious distributions, especially in ternary diagrams. The same occurs if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between the copper values and the molybdenum ones, but while copper will always be above the limit of detection, many of the molybdenum values will be "rounded zeros". So, we take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable.
Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
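A rough sketch of the proposed imputation on synthetic assay data (the grades, detection limit, and the choice of a log-scale regression are all illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic porphyry assays: Cu always detected, Mo partly below detection
n = 200
log_cu = rng.normal(0.0, 0.5, n)                   # log10 Cu grade
log_mo = 0.9 * log_cu + rng.normal(-1.5, 0.2, n)   # correlated log10 Mo grade
cu, mo = 10.0 ** log_cu, 10.0 ** log_mo

detection_limit = 0.02
detected = mo >= detection_limit   # below this, Mo reports as a "rounded zero"

# Regression fitted on the lower quartile of the detected Mo values,
# i.e. the detected samples closest to the censored region
q1 = np.quantile(mo[detected], 0.25)
low = detected & (mo <= q1)
slope, intercept = np.polyfit(np.log10(cu[low]), np.log10(mo[low]), 1)

# Each rounded zero is replaced by its Cu-predicted value, so the imputed
# values vary with Cu instead of being a single fixed constant
mo_imputed = mo.copy()
mo_imputed[~detected] = 10.0 ** (intercept + slope * np.log10(cu[~detected]))
print((~detected).sum(), "rounded zeros imputed")
```

Because each replacement is driven by the co-measured copper value, the imputed molybdenum values inherit its spread, which is the advantage the abstract highlights over fixed-value replacement.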
Abstract:
In this article we compare regression models obtained to predict PhD students' academic performance in the universities of Girona (Spain) and Slovenia. Explanatory variables are characteristics of the PhD student's research group understood as an egocentered social network, background and attitudinal characteristics of the PhD students, and some characteristics of the supervisors. Academic performance was measured by the weighted number of publications. Two web questionnaires were designed, one for PhD students and one for their supervisors and other research group members. Most of the variables were easily comparable across universities due to the careful translation procedure and pre-tests. When direct comparison was not possible we created comparable indicators. We used a regression model in which the country was introduced as a dummy-coded variable including all possible interaction effects. The optimal transformations of the main and interaction variables are discussed. Some differences between the Slovenian and Girona universities emerge. Some variables, like the supervisor's performance and motivation for autonomy prior to starting the PhD, have the same positive effect on the PhD student's performance in both countries. On the other hand, variables like overly close supervision by the supervisor and having children have a negative influence in both countries. However, we find differences between countries for motivation for research prior to starting the PhD, which increases performance in Slovenia but not in Girona. As regards network variables, frequency of supervisor advice increases performance in Slovenia and decreases it in Girona. The negative effect in Girona could be explained by the fact that additional contacts of the PhD student with his/her supervisor might indicate a higher workload in addition to, or instead of, better advice about the dissertation. The number of the student's external advice relationships and the mean contact intensity of social support are not significant in Girona, but they have a negative effect in Slovenia. We might explain the negative effect of external advice relationships in Slovenia by saying that a lot of external advice may actually result from a lack of the more relevant internal advice.
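The dummy-plus-interactions specification can be sketched on synthetic data (the variable names and effect sizes below are invented for illustration; the paper's actual model includes many more variables and optimal transformations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: country dummy (0 = Girona, 1 = Slovenia) and two covariates
n = 300
country = rng.integers(0, 2, n).astype(float)
supervisor_perf = rng.normal(0, 1, n)   # supervisor's performance (invented scale)
advice_freq = rng.normal(0, 1, n)       # frequency of supervisor advice

# Built-in effects: same sign in both countries for supervisor_perf,
# opposite signs for advice_freq (negative in Girona, positive in Slovenia)
pubs = (0.5 * supervisor_perf
        - 0.3 * advice_freq
        + 0.6 * country * advice_freq
        + rng.normal(0, 0.5, n))

# Design matrix with main effects and all country interactions
X = np.column_stack([
    np.ones(n), country, supervisor_perf, advice_freq,
    country * supervisor_perf, country * advice_freq,
])
beta, *_ = np.linalg.lstsq(X, pubs, rcond=None)

# Effect of advice frequency: beta[3] in Girona, beta[3] + beta[5] in Slovenia
print(beta.round(2))
```

A significant interaction coefficient (here beta[5]) is exactly what signals a country-specific effect like the opposite-signed role of supervisor advice reported in the abstract.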
Abstract:
Stone groundwood (SGW) is a fibrous matter commonly prepared by a high-yield process and mainly used for papermaking applications. In this work, the use of SGW fibers is explored as a reinforcing element of polypropylene (PP) composites. Due to their chemical and surface features, coupling agents are needed for good adhesion and stress transfer across the fiber-matrix interface. The intrinsic strength of the reinforcement is a key parameter to predict the mechanical properties of the composite and to perform an interface analysis. The main objective of the present work was the determination of the intrinsic tensile strength of stone groundwood fibers. Coupled and non-coupled PP composites from stone groundwood fibers were prepared. The influence of the surface morphology and the quality of the interface on the final properties of the composite was analyzed and compared to that of fiberglass PP composites. The intrinsic tensile properties of stone groundwood fibers, as well as the fiber orientation factor and the interfacial shear strength of the current composites, were determined.
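The link between intrinsic fiber strength and composite strength is commonly expressed through a modified rule of mixtures; the sketch below uses hypothetical numbers, not the paper's measurements, and lumps orientation and length effects into a single assumed efficiency factor:

```python
# Modified rule of mixtures for a short-fiber composite; all numbers are
# hypothetical, not the paper's measurements:
#   sigma_c = chi * sigma_f * V_f + sigma_m_star * (1 - V_f)

sigma_f = 550.0       # intrinsic fiber tensile strength (MPa), assumed
V_f = 0.30            # fiber volume fraction
chi = 0.30            # combined orientation and length efficiency factor
sigma_m_star = 20.0   # matrix stress at the composite failure strain (MPa)

sigma_c = chi * sigma_f * V_f + sigma_m_star * (1 - V_f)
print(sigma_c)  # estimated composite tensile strength in MPa
```

Inverted, the same relation lets an intrinsic fiber strength be back-calculated from a measured composite strength once the efficiency factor and the matrix contribution are estimated, which is the kind of determination the abstract describes.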
Abstract:
This paper examines empirically the determinants of the decentralization of decision-making in small and medium-sized enterprises (SMEs), which tend to be highly centralized. By decentralization of decisions we mean the delegation of decision rights from the owner or manager to the plant supervisor or even to floor workers. Our findings show that the allocation of authority to basic workers or a team of workers depends on firm characteristics, such as firm size, the use of internal networks or the number of workplaces, and on workers' characteristics, in particular the composition of the labor force in terms of education and seniority and whether or not workers receive pay incentives. External factors such as the intensity of competition and the firm's export intensity are also important determinants of the allocation of authority.
Abstract:
We establish the validity of subsampling confidence intervals for the mean of a dependent series with heavy-tailed marginal distributions. Using point process theory, we study both linear and nonlinear GARCH-like time series models. We propose a data-dependent method for the optimal block size selection and investigate its performance by means of a simulation study.
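A bare-bones version of a subsampling confidence interval for the mean of a dependent series (synthetic AR(1) data with Student-t innovations as a stand-in for the GARCH-like models; the block size is fixed ad hoc rather than chosen by the paper's data-dependent method, and finite variance is assumed so the sqrt(n) rate applies):

```python
import numpy as np

rng = np.random.default_rng(7)

# Dependent series with heavier-than-normal tails: AR(1) with Student-t noise
n, b = 5000, 100   # sample size and block size (b fixed ad hoc here)
eps = rng.standard_t(df=4, size=n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + eps[t]

xbar = x.mean()

# Statistic recomputed on every overlapping block, centred and scaled
blocks = np.lib.stride_tricks.sliding_window_view(x, b)
stats = np.sqrt(b) * (blocks.mean(axis=1) - xbar)

# Approximate the law of sqrt(n)*(xbar - mu) by the subsample distribution
lo_q, hi_q = np.quantile(stats, [0.025, 0.975])
ci = (xbar - hi_q / np.sqrt(n), xbar - lo_q / np.sqrt(n))
print(xbar, ci)
```

In the infinite-variance regimes the paper covers, the sqrt(b) and sqrt(n) scalings would be replaced by the appropriate tail-index-dependent rates; the block structure of the procedure stays the same.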
Abstract:
Structural equation models are widely used in economic, social and behavioral studies to analyze linear interrelationships among variables, some of which may be unobservable or subject to measurement error. Alternative estimation methods that exploit different distributional assumptions are now available. The present paper deals with issues of asymptotic statistical inference, such as the evaluation of standard errors of estimates and chi-square goodness-of-fit statistics, in the general context of mean and covariance structures. The emphasis is on drawing correct statistical inferences regardless of the distribution of the data and the method of estimation employed. A (distribution-free) consistent estimate of $\Gamma$, the matrix of asymptotic variances of the vector of sample second-order moments, will be used to compute robust standard errors and a robust chi-square goodness-of-fit statistic. Simple modifications of the usual estimate of $\Gamma$ will also permit correct inferences in the case of multi-stage complex samples. We also discuss the conditions under which, regardless of the distribution of the data, one can rely on the usual (non-robust) inferential statistics. Finally, a multivariate regression model with errors-in-variables is used to illustrate, by means of simulated data, various theoretical aspects of the paper.
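The sandwich principle behind such distribution-free standard errors can be illustrated in the simpler setting of linear regression (an analogue of the $\Gamma$-based machinery, not the SEM-specific estimator itself; data are simulated):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated linear model with non-normal, heteroskedastic errors
n = 2000
x = rng.normal(0, 1, n)
e = rng.standard_t(df=5, size=n) * (1 + 0.5 * np.abs(x))
y = 1.0 + 2.0 * x + e

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Naive standard errors assume a common error variance
sigma2 = resid @ resid / (n - 2)
se_naive = np.sqrt(sigma2 * np.diag(XtX_inv))

# Sandwich standard errors: (X'X)^-1 [X' diag(e^2) X] (X'X)^-1
meat = X.T @ (X * resid[:, None] ** 2)
se_robust = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
print(se_naive, se_robust)
```

The robust slope standard error exceeds the naive one here because the error variance grows with |x|; relying on the naive figure would overstate precision, which is the failure mode robust inference guards against.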
Spanning tests in return and stochastic discount factor mean-variance frontiers: A unifying approach
Abstract:
We propose new spanning tests that assess whether the initial and additional assets share the economically meaningful cost and mean representing portfolios. We prove their asymptotic equivalence to existing tests under local alternatives. We also show that, unlike two-step or iterated procedures, single-step methods such as continuously updated GMM yield numerically identical overidentifying restrictions tests, so there is arguably a single spanning test. To prove these results, we extend optimal GMM inference to deal with singularities in the long-run second moment matrix of the influence functions. Finally, we test for spanning using size and book-to-market sorted US stock portfolios.
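For intuition, the classical regression formulation of spanning (a simpler relative of the GMM tests above) checks that each additional asset has a zero intercept and betas summing to one on the initial assets. A sketch on synthetic returns where spanning holds by construction:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical returns: the additional asset is an exact portfolio of the
# two initial assets, so spanning holds by construction
T = 1000
r1 = 0.01 + 0.05 * rng.normal(size=T)
r2 = 0.008 + 0.04 * rng.normal(size=T)
r_new = 0.6 * r1 + 0.4 * r2

# Regress the additional asset on a constant and the initial assets
X = np.column_stack([np.ones(T), r1, r2])
coef, *_ = np.linalg.lstsq(X, r_new, rcond=None)
alpha, betas = coef[0], coef[1:]

# Spanning restrictions: zero intercept and betas summing to one
print(alpha, betas.sum())
```

A formal test would assess these two restrictions jointly with an appropriate covariance matrix; the paper's contribution concerns doing so via GMM when that long-run matrix is singular.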
Abstract:
The central message of this paper is that nobody should be using the sample covariance matrix for the purpose of portfolio optimization. It contains estimation error of the kind most likely to perturb a mean-variance optimizer. In its place, we suggest using the matrix obtained from the sample covariance matrix through a transformation called shrinkage. This tends to pull the most extreme coefficients towards more central values, thereby systematically reducing estimation error where it matters most. Statistically, the challenge is to know the optimal shrinkage intensity, and we give the formula for that. Without changing any other step in the portfolio optimization process, we show on actual stock market data that shrinkage reduces tracking error relative to a benchmark index, and substantially increases the realized information ratio of the active portfolio manager.
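The shrinkage transformation can be sketched as a convex combination of the sample covariance matrix and a structured target (here a scaled identity with an arbitrary fixed intensity, simpler than the paper's optimal intensity formula and target):

```python
import numpy as np

rng = np.random.default_rng(9)

# Few observations relative to the number of assets makes the sample
# covariance matrix noisy and ill-conditioned
n_obs, n_assets = 60, 40
true_cov = np.diag(np.linspace(0.5, 2.0, n_assets))
returns = rng.multivariate_normal(np.zeros(n_assets), true_cov, size=n_obs)

sample_cov = np.cov(returns, rowvar=False)

# Structured target: scaled identity with the average sample variance
target = np.trace(sample_cov) / n_assets * np.eye(n_assets)

# Shrinkage pulls extreme coefficients toward the central value;
# delta is fixed arbitrarily here, whereas the paper derives the optimal value
delta = 0.5
shrunk = delta * target + (1 - delta) * sample_cov
print(np.linalg.cond(sample_cov), np.linalg.cond(shrunk))
```

The shrunk matrix is better conditioned and its off-diagonal entries are damped, which is precisely the reduction of extreme, error-prone coefficients that the abstract argues matters most to a mean-variance optimizer.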