995 results for Factor decomposition


Relevance: 100.00%

Abstract:

The increasing importance of vertical specialisation (VS) trade has been a notable feature of rapid economic globalisation and regional integration. In an attempt to understand countries' depth of participation in global production chains, many Input-Output-based VS indicators have been developed. However, most of them focus on showing the overall magnitude of a country's VS trade rather than explaining the roles that specific sectors or products play in VS trade and what factors drive changes in VS over time. Changes in vertical specialisation indicators are, in fact, determined by mixed and complex factors such as import substitution ratios, the types of goods exported and domestic production networks. In this paper, decomposition techniques are applied to VS measurement based on the OECD Input-Output database. The decomposition results not only help us understand the structure of VS at detailed sector and product levels, but also show the contributions of trade dependency, the industrial structure of foreign trade and the domestic production system to a country's vertical specialisation trade.
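As an illustration of the Input-Output mechanics behind such indicators, the classic VS measure in the spirit of Hummels, Ishii and Yi (imported inputs embodied directly and indirectly in exports) can be sketched as follows. The two-sector coefficient matrices and export vector are made-up numbers for illustration, not OECD data:

```python
import numpy as np

# Hypothetical 2-sector economy: A_d = domestic input coefficients,
# A_m = imported input coefficients, x = sector exports.
A_d = np.array([[0.2, 0.1],
                [0.1, 0.3]])
A_m = np.array([[0.05, 0.10],
                [0.02, 0.08]])
x = np.array([100.0, 50.0])

# Imported content embodied (directly and indirectly) in exports:
# VS = A_m (I - A_d)^{-1} x, expressed as a share of total exports.
leontief_inv = np.linalg.inv(np.eye(2) - A_d)
vs = A_m @ leontief_inv @ x          # imported inputs embodied in each sector's exports
vs_share = vs.sum() / x.sum()
print(round(vs_share, 4))
```

Decomposition techniques like those in the paper then attribute changes in `vs_share` over time to changes in `A_m`, `A_d` and the export mix `x`.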

Relevance: 100.00%

Abstract:

We propose a method for the decomposition of inequality changes based on panel data regression. The method is an efficient way to quantify the contributions of variables to changes in the Theil T index while satisfying the property of uniform addition. We illustrate the method using prefectural data from Japan for the period 1955 to 1998. Japan experienced a decline in regional income disparity during the years of high economic growth from 1955 to 1973. After estimating production functions using panel data for prefectures in Japan, we apply the new decomposition approach to identify each production factor's contribution to the changes in per capita income inequality among prefectures. The decomposition results show that total factor productivity (residual) growth, population change (migration) and public capital stock growth all contributed to the decline in per capita income disparity.
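The Theil T index itself is straightforward to compute. A minimal sketch with hypothetical regional incomes, also checking the uniform-addition property the abstract refers to (adding the same absolute amount to every income must reduce measured inequality):

```python
import math

def theil_t(y):
    """Theil T index: (1/n) * sum((y_i/mean) * ln(y_i/mean))."""
    n = len(y)
    mu = sum(y) / n
    return sum((yi / mu) * math.log(yi / mu) for yi in y) / n

# Hypothetical per capita incomes for five regions.
incomes = [1.0, 2.0, 3.0, 4.0, 10.0]
t0 = theil_t(incomes)

# Uniform addition: a flat transfer to every region lowers inequality.
t1 = theil_t([yi + 2.0 for yi in incomes])
print(round(t0, 4), round(t1, 4))
```

The paper's contribution is to split changes in this index across regression-estimated production factors; the sketch only shows the index being decomposed.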

Relevance: 70.00%

Abstract:

This paper presents a new time-frequency approach to underdetermined blind source separation using the parallel factor decomposition of third-order tensors. Without any constraint on the number of active sources at an auto-term time-frequency point, this approach can directly separate the sources as long as the uniqueness condition of the parallel factor decomposition is satisfied. Compared with existing two-stage methods, where the mixing matrix is estimated first and then used to recover the sources, our approach yields better source separation performance in the presence of noise. Moreover, the mixing matrix can be estimated during the source separation process. Numerical simulations are presented to show the superior performance of the proposed approach over some existing two-stage blind source separation methods that also use the time-frequency representation.
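The parallel factor (CP/PARAFAC) decomposition underlying the method can be sketched with a plain alternating-least-squares routine. This generic NumPy implementation, run on a synthetic exact-rank tensor, only illustrates the decomposition itself, not the authors' time-frequency separation algorithm:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x r) and B (J x r) -> (I*J x r)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=200, seed=0):
    """Plain ALS for the parallel factor (CP) decomposition of a 3-way tensor."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T0 = T.reshape(I, -1)                       # mode-1 unfolding
    T1 = np.moveaxis(T, 1, 0).reshape(J, -1)    # mode-2 unfolding
    T2 = np.moveaxis(T, 2, 0).reshape(K, -1)    # mode-3 unfolding
    for _ in range(iters):
        A = T0 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T1 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T2 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Synthetic tensor with exact CP rank 2, built from random factors.
rng = np.random.default_rng(1)
At = rng.standard_normal((3, 2))
Bt = rng.standard_normal((4, 2))
Ct = rng.standard_normal((5, 2))
T = np.einsum('ir,jr,kr->ijk', At, Bt, Ct)

A, B, C = cp_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
print(err)
```

For a tensor that truly has CP rank 2, ALS typically drives the relative reconstruction error close to machine precision; the uniqueness condition mentioned in the abstract is what makes the recovered factors meaningful.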

Relevance: 30.00%

Abstract:

Matrix decompositions, where a given matrix is represented as a product of two other matrices, are regularly used in data mining. Most matrix decompositions have their roots in linear algebra, but the needs of data mining are not always those of linear algebra. In data mining one needs results that are interpretable, and what is considered interpretable in data mining can be very different from what is considered interpretable in linear algebra.

The purpose of this thesis is to study matrix decompositions that directly address the issue of interpretability. An example is a decomposition of binary matrices where the factor matrices are assumed to be binary and the matrix multiplication is Boolean. The restriction to binary factor matrices increases interpretability, since the factor matrices are of the same type as the original matrix, and allows the use of Boolean matrix multiplication, which is often more intuitive than normal matrix multiplication with binary matrices. Several other decomposition methods are also described, and the computational complexity of computing them is studied together with the hardness of approximating the related optimization problems. Based on these studies, algorithms for constructing the decompositions are proposed. Constructing the decompositions turns out to be computationally hard, and the proposed algorithms are mostly based on various heuristics. Nevertheless, the algorithms are shown to be capable of finding good results in empirical experiments conducted with both synthetic and real-world data.
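A tiny sketch of the Boolean setting described above: the factor matrices are binary and the product uses OR/AND instead of sum/product. The 3x3 matrix and its rank-2 factors are toy values chosen so that the Boolean factorization is exact:

```python
import numpy as np

def bool_mm(B, C):
    """Boolean matrix product: (B o C)[i, j] = OR_k (B[i, k] AND C[k, j])."""
    return ((B.astype(int) @ C.astype(int)) > 0).astype(int)

X = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
B = np.array([[1, 0],
              [1, 1],
              [0, 1]])
C = np.array([[1, 1, 0],
              [0, 1, 1]])

# The Boolean product reconstructs X exactly ...
hamming_error = np.abs(bool_mm(B, C) - X).sum()
# ... while the ordinary matrix product does not (the middle entry becomes 2).
ordinary = B @ C
print(hamming_error, ordinary[1, 1])
```

This also illustrates why Boolean rank can be smaller than ordinary rank: overlapping factors do not "add up" past 1 under OR.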

Relevance: 30.00%

Abstract:

Pristine peatlands are carbon (C) accumulating wetland ecosystems sustained by a high water level (WL) and consequent anoxia that slows down decomposition. Persistent WL drawdown in response to climate and/or land-use change directly affects decomposition: increased oxygenation stimulates decomposition of the old C (peat) sequestered under previously anoxic conditions. The responses of the new C (plant litter) in terms of quality, production and decomposability, and the consequences for the whole C cycle of peatlands, are not fully understood. WL drawdown induces changes in the plant community, resulting in a shift in dominance from Sphagnum and graminoids to shrubs and trees. There is increasing evidence that the indirect effects of WL drawdown, via changes in plant communities, will have more impact on ecosystem C cycling than any direct effects. The aim of this study is to disentangle the direct and indirect effects of WL drawdown on the new C by measuring the relative importance of 1) environmental parameters (WL depth, temperature, soil chemistry) and 2) plant community composition for litter production, microbial activity, litter decomposition rates and, consequently, C accumulation. This information is crucial for modelling the C cycle under changing climate and/or land use. The effects of WL drawdown were tested in a large-scale experiment with manipulated WL at two time scales and three nutrient regimes. Furthermore, the effect of climate on litter decomposability was tested along a north-south gradient. Additionally, a novel method for estimating litter chemical quality and decomposability was explored by combining near-infrared spectroscopy with multivariate modelling. WL drawdown had direct effects on litter quality, microbial community composition and activity, and litter decomposition rates. However, the direct effects of WL drawdown were overridden by the indirect effects via changes in litter type composition and production.
Short-term (years) responses to WL drawdown were small. In the long term (decades), dramatically increased litter inputs resulted in a large accumulation of organic matter in spite of increased decomposition rates. Further, the quality of the accumulated matter differed greatly from that accumulated under pristine conditions. The response of a peatland ecosystem to persistent WL drawdown was more pronounced at sites with more nutrients. The study demonstrates that the shift in vegetation composition in response to climate and/or land-use change is the main factor affecting the peatland ecosystem C cycle, and thus dynamic vegetation is a necessity in any model applied to estimate the responses of C fluxes to changes in the environment. The time scale for vegetation changes caused by hydrological changes needs to extend to decades. This study provides a grouping of litter types (plant species and part) into functional types, based on their chemical quality and/or decomposability, that such models could utilize. Further, the results clearly show a drop in soil temperature in response to WL drawdown when an initially open peatland converts into a forest ecosystem, which has not yet been considered in existing models.

Relevance: 30.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2013

Relevance: 30.00%

Abstract:

Kinetics and product studies of the decomposition of allyl t-butyl peroxide and 3-hydroperoxy-1-propene (allyl hydroperoxide) in toluene were investigated. Decomposition of allyl t-butyl peroxide in toluene at 130-160° followed first-order kinetics with an activation energy of 32.8 kcal/mol and a log A factor of 13.65. The rates of decomposition were lowered in the presence of the radical trap α-methylstyrene. By the radical-trap method, the induced decomposition at 130° is shown to be 12.5%. From the yield of 4-phenyl-1,2-epoxybutane, the major path of induced decomposition is shown to be via an addition mechanism. On the other hand, di-t-butyl peroxyoxalate-induced decomposition of this peroxide at 60° proceeded by an abstraction mechanism. Induced decomposition of peroxides and hydroperoxides containing the allyl system is proposed to occur mainly through an addition mechanism at these higher temperatures. Allyl hydroperoxide in toluene at 165-185° decomposes following 3/2-order kinetics with an Ea of 30.2 kcal/mol and a log A of 10.6. Enormous production of radicals through chain branching may explain these relatively low values of Ea and log A. The complexity of the reaction is indicated by the formation of various decomposition products. A study of radical attack on the hydroperoxide at lower temperatures is suggested as further work to throw more light on the nature of the decomposition of this hydroperoxide.
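Taking the reported Arrhenius parameters for allyl t-butyl peroxide at face value (Ea = 32.8 kcal/mol, log10 A = 13.65, with A assumed to be in s⁻¹), the first-order rate constant and half-life at 130 °C can be sketched as:

```python
import math

# Parameters reported in the abstract for allyl t-butyl peroxide in toluene.
R = 1.987e-3          # gas constant, kcal mol^-1 K^-1
Ea = 32.8             # activation energy, kcal mol^-1
log_A = 13.65         # log10 of the pre-exponential factor (assumed s^-1)

def rate_constant(T_kelvin):
    """First-order Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
    return 10**log_A * math.exp(-Ea / (R * T_kelvin))

k_130 = rate_constant(130 + 273.15)
half_life_h = math.log(2) / k_130 / 3600
print(f"k(130 C) = {k_130:.2e} s^-1, t1/2 = {half_life_h:.1f} h")
```

A half-life of a few hours at the low end of the studied range is consistent with a kinetics study conducted over 130-160°.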

Relevance: 30.00%

Abstract:

With advances in information technology, economic and financial time-series data have become increasingly available. However, if standard time-series techniques are used, this wealth of information comes with the problem of dimensionality. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis, a technique that has grown increasingly popular in economics since the 1990s. Given the availability of data and computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help identify monetary policy shocks better, given the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor methodology be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of the major macroeconomic aggregates and help with impulse response analysis? Finally, can factor analysis be applied to random parameters? For example, are there only a small number of sources of the time instability of coefficients in empirical macroeconomic models? Using structural factor analysis and VARMA modelling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA.
This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the last chapter is to impose a factor structure on the time-varying parameters and to show that there is a small number of sources of this instability. The first article analyses the transmission of monetary policy in Canada using a factor-augmented vector autoregressive (FAVAR) model. Earlier studies based on VAR models found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for correctly identifying the transmission of monetary policy and helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse response functions for every indicator in the data set, producing the most comprehensive analysis to date of the effects of monetary policy in Canada. Motivated by the recent economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and propagation of credit shocks on the real economy using a large set of economic and financial indicators within a structural factor model. We find that a credit shock immediately raises credit spreads, lowers the value of Treasury bills and causes a recession. These shocks have a significant effect on measures of real activity, price indices, leading indicators and financial indicators. Unlike other studies, our identification procedure for the structural shock requires no timing restrictions between financial and macroeconomic factors.
Moreover, it gives an interpretation of the factors without restricting their estimation. In the third article we study the relationship between the VARMA and factor representations of vector stochastic processes and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot simultaneously follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two methods of reducing the number of parameters. The model is applied in two forecasting exercises using the US and Canadian data of Boivin, Giannoni and Stevanovic (2010, 2009) respectively. The results show that the VARMA part helps forecast the major macroeconomic aggregates better than standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors gives coherent and precise results for the effects and transmission of monetary policy in the United States. Whereas the FAVAR employed in that earlier study required estimating 510 VAR coefficients, we produce similar results with only 84 parameters in the dynamic factor process. The goal of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment using a structural FAVARMA model.
Within the theoretical framework of the financial accelerator developed by Bernanke, Gertler and Gilchrist (1999), we proxy the external finance premium with credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of Canadian cyclical fluctuations. Variance decomposition analysis reveals that this credit shock has a significant effect on various sectors of real activity, price indices, leading indicators and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, proxied here by the US market. Finally, given the identification procedure for the structural shocks, we obtain economically interpretable factors. The behaviour of economic agents and of the economic environment can vary over time (e.g. changes in monetary policy strategy, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of time variation in the coefficients is probably very small, and we produce the first known empirical evidence on this in empirical macroeconomic models. The Factor-TVP approach proposed in Stevanovic (2010) is applied within a standard VAR model with random coefficients (TVP-VAR).
We find that a single factor explains most of the variability of the VAR coefficients, while the shock-volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out with data that include the recent financial crisis. The procedure now suggests two factors, and the behaviour of the coefficients shows a marked change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only five dynamic factors govern the time instability of nearly 700 coefficients.
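The factor-model machinery running through these chapters starts from extracting a few common factors from a large panel by principal components. A minimal sketch on synthetic data (50 hypothetical series driven by two latent factors), not the thesis's actual data or FAVAR estimation:

```python
import numpy as np

# Synthetic panel: N = 50 series over T = 200 periods, driven by 2 factors.
rng = np.random.default_rng(0)
T_, N, r = 200, 50, 2
F = rng.standard_normal((T_, r))                  # latent factors
L = rng.standard_normal((N, r))                   # factor loadings
X = F @ L.T + 0.1 * rng.standard_normal((T_, N))  # observed panel, small noise

# Principal-component estimate of the factors (identified up to rotation).
X_c = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_c, full_matrices=False)
F_hat = U[:, :r] * S[:r]

# Check that the estimated factors span (almost) the true factor space:
F_c = F - F.mean(axis=0)
coef, *_ = np.linalg.lstsq(F_hat, F_c, rcond=None)
R2 = 1 - ((F_c - F_hat @ coef) ** 2).sum() / (F_c ** 2).sum()
print(round(R2, 3))
```

In a FAVAR, these estimated factors are then stacked with observed policy variables in a VAR; the FAVARMA extension replaces that VAR with a more parsimonious VARMA for the factor dynamics.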

Relevance: 30.00%

Abstract:

A One-Dimensional Time to Explosion (ODTX) apparatus has been used to study the times to explosion of a number of compositions based on RDX and HMX over a range of contact temperatures. The times to explosion at any given temperature tend to increase from RDX to HMX and with the proportion of HMX in the composition. Thermal ignition theory has been applied to the time-to-explosion data to calculate kinetic parameters. The apparent activation energy for all of the compositions lay between 127 kJ mol⁻¹ and 146 kJ mol⁻¹. There were large differences in the pre-exponential factor, and it was this factor, rather than the activation energy, that controlled the time to explosion.
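The kind of fit described, recovering an apparent activation energy from times to explosion, amounts to regressing ln(t) on 1/T. A sketch on synthetic data generated with Ea = 135 kJ/mol (a value inside the reported range), not actual ODTX measurements:

```python
import math

# Synthetic time-to-explosion data from t = t0 * exp(Ea / (R*T)),
# with assumed Ea = 135 kJ/mol and an arbitrary prefactor t0.
R = 8.314       # J mol^-1 K^-1
Ea_true = 135e3
t0 = 1e-12
temps = [480.0, 500.0, 520.0, 540.0]   # contact temperatures, K
times = [t0 * math.exp(Ea_true / (R * T)) for T in temps]

# Arrhenius-type fit: ln(t) = ln(t0) + (Ea/R) * (1/T), least squares by hand.
xs = [1.0 / T for T in temps]
ys = [math.log(t) for t in times]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
Ea_fit = slope * R
print(f"fitted Ea = {Ea_fit / 1e3:.1f} kJ/mol")
```

The abstract's point shows up in this parameterisation: two compositions can share nearly the same slope (Ea) yet have very different intercepts (pre-exponential factor), and it is the intercept that separates their times to explosion.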

Relevance: 30.00%

Abstract:

This paper extends the singular value decomposition to a path of matrices E(t). An analytic singular value decomposition of a path of matrices E(t) is an analytic path of factorizations E(t) = X(t)S(t)Y(t)^T, where X(t) and Y(t) are orthogonal and S(t) is diagonal. To maintain differentiability, the diagonal entries of S(t) are allowed to be either positive or negative and to appear in any order. This paper investigates the existence and uniqueness of analytic SVDs and develops an algorithm for computing them. We show that a real analytic path E(t) always admits a real analytic SVD, and that a full-rank, smooth path E(t) with distinct singular values admits a smooth SVD. We derive a differential equation for the left factor, develop Euler-like and extrapolated Euler-like numerical methods for approximating an analytic SVD, and prove that the Euler-like method converges.
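A discrete illustration of why such care is needed: SVDs computed independently along a path come back with arbitrary column signs, which can be re-aligned for continuity without changing the product. The path E(t) below is a made-up analytic example, and this grid-and-re-sign scheme is only a naive stand-in for the paper's Euler-like methods:

```python
import numpy as np

def E(t):
    """A hypothetical analytic path of 2x2 matrices (not from the paper)."""
    return np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), 2.0 + 0.5 * t]])

ts = np.linspace(0.0, 2.0, 201)
U_prev, factors = None, []
for t in ts:
    U, s, Vt = np.linalg.svd(E(t))
    if U_prev is not None:
        # Flip column signs to match the previous left factor; flipping the
        # same columns of V leaves U @ diag(s) @ Vt unchanged.
        d = np.where(np.diag(U_prev.T @ U) < 0, -1.0, 1.0)
        U, Vt = U * d, (Vt.T * d).T
    factors.append((U, s, Vt))
    U_prev = U

# The re-signed factors still reproduce E(t) at every grid point.
err = max(np.linalg.norm(U @ np.diag(s) @ Vt - E(t))
          for (U, s, Vt), t in zip(factors, ts))
print(err)
```

The paper's setting is harder than this sketch suggests: when singular values cross or pass through zero, continuity requires letting diagonal entries go negative and reorder, which is exactly what the analytic SVD formalizes.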

Relevance: 30.00%

Abstract:

To evaluate calcium chloride coagulation technology, two raw natural rubber samples were produced, one coagulated with calcium chloride and one with acetic acid. The plasticity retention index (PRI), thermal degradation process, thermal degradation kinetics and differential thermal analysis of the two samples were studied. Furthermore, the thermal degradation activation energy, pre-exponential factor and rate constant were calculated. The results show that natural rubber produced with calcium chloride possesses good mechanical properties but poorer thermal stability in comparison with natural rubber produced with acetic acid.

Relevance: 30.00%

Abstract:

Construction is an important industry and forms a vital part of national economies around the world. Factors affecting the productivity of the construction industry should be measured appropriately to reflect its development and economic performance. The Malmquist index method with a novel decomposition technique is employed to estimate the total factor productivity of the Australian construction industry during the period 1990-2007 and to analyse the factors affecting technological change in the industry. Research results, based on two input variables and one output variable, elaborate how construction technology, pure technical efficiency and scale economies contribute to changes in construction productivity. In addition, based on temporal and spatial comparisons, the analysis of construction productivity reveals its changes over time and across the country. The proposals and recommendations are expected to be beneficial for policy making and strategic decisions to improve the performance of the Australian construction industry.
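The Malmquist index arithmetic can be sketched directly. The four distance-function values below are made-up numbers (in practice each comes from solving a DEA linear program); the point is the index and its standard efficiency-change times technical-change decomposition:

```python
import math

# Hypothetical output-distance-function values for one decision-making unit
# evaluated against the period-0 and period-1 frontiers.
D0_x0y0 = 0.80   # D^0(x0, y0)
D0_x1y1 = 0.95   # D^0(x1, y1)
D1_x0y0 = 0.70   # D^1(x0, y0)
D1_x1y1 = 0.85   # D^1(x1, y1)

# Malmquist productivity index (geometric mean form) and its decomposition.
M = math.sqrt((D0_x1y1 / D0_x0y0) * (D1_x1y1 / D1_x0y0))
EC = D1_x1y1 / D0_x0y0                                      # efficiency change
TC = math.sqrt((D0_x1y1 / D1_x1y1) * (D0_x0y0 / D1_x0y0))   # technical change
print(round(M, 4), round(EC, 4), round(TC, 4))
```

A value of M above 1 indicates productivity growth; the decomposition M = EC x TC separates catching up to the frontier from the frontier itself shifting, and refinements like the one used in the paper further split EC into pure technical efficiency and scale components.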

Relevance: 30.00%

Abstract:

This paper tackles the problem of aggregate TFP measurement using stochastic frontier analysis (SFA). Data from Penn World Table 6.1 are used to estimate a world production frontier for a sample of 75 countries over a long period (1950-2000), taking advantage of the model offered by Battese and Coelli (1992). We also apply the decomposition of TFP suggested by Bauer (1990) and Kumbhakar (2000) to a smaller sample of 36 countries over the period 1970-2000 in order to evaluate the effects of changes in efficiency (technical and allocative), scale effects and technical change. This allows us to analyze the role of productivity and its components in the economic growth of developed and developing nations, in addition to the importance of factor accumulation. Although not much explored in the study of economic growth, frontier techniques seem to be of particular interest for that purpose, since the separation of efficiency effects and technical change has a direct interpretation in terms of the catch-up debate. The estimated technical efficiency scores reveal the efficiency of nations in the production of non-tradable goods, since the GDP series used is PPP-adjusted. We also provide a second set of efficiency scores, corrected in order to reveal efficiency in the production of tradable goods, and rank them. When compared to the rankings of productivity indexes offered by the non-frontier studies of Hall and Jones (1996) and Islam (1995), our ranking shows a somewhat more intuitive order of countries. Rankings of the technical change and scale effects components of TFP change are also very intuitive. We also show that productivity is responsible for virtually all the differences in performance between developed and developing countries in terms of rates of growth of income per worker.
More important, we find that changes in allocative efficiency play a crucial role in explaining differences in the productivity of developed and developing nations, even larger than the role played by the technology gap.

Relevance: 30.00%

Abstract:

We develop and calibrate a model in which differences in factor endowments lead countries to trade intermediate goods, and gains from trade are reflected in total factor productivity. We perform several output and growth decompositions to assess the impact that barriers to trade, as well as changes in the terms of trade, have on measured TFP. We find that for very poor economies gains from trade are large, in some cases representing a doubling of GDP. We also find that an improvement in the terms of trade, by allowing the use of a better mix of intermediate inputs in the production process, translates into productivity growth.