888 results for Hierarchical decomposition
Abstract:
Magdeburg, Univ., Faculty of Mathematics, Dissertation, 2015
Abstract:
Duro and Esteban (1998) proposed an additive decomposition of the Theil population-weighted index into four multiplicative income factors (in spatial contexts). This note makes some additional methodological points: first, it argues that interaction effects are taken into account in the factor indexes, although only in a fairly restrictive way; as a consequence, we suggest rewriting the decomposition formula as a sum of strict Theil indexes plus interaction terms. Second, it may be instructive to aggregate some of the initial factors. Third, the decomposition can be extended immediately to the between- and within-group components.
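As a rough illustration of the first point (a sketch in generic notation, not the authors' exact formula): for the population-weighted Theil index \(L(x) = \sum_i p_i \ln(\mu_x / x_i)\) and a multiplicative factorization \(x_i = a_i b_i c_i d_i\),

\[ L(x) = L(a) + L(b) + L(c) + L(d) + \ln\frac{\mu_x}{\mu_a\,\mu_b\,\mu_c\,\mu_d}, \]

where the last term is the interaction component that arises because the mean of a product is not the product of the means; this is the sense in which the formula can be rewritten as a sum of strict Theil indexes plus an interactive term.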
Abstract:
The purpose of this paper is to study the possible differences among countries as CO2 emitters and to examine the underlying causes of these differences. The starting point of the analysis is the Kaya identity, which allows us to break down per capita emissions into four components: an index of carbon intensity, transformation efficiency, energy intensity and social wealth. Through a cluster analysis we identify five groups of countries with different behavior according to these four factors. One significant finding is that these groups are stable over the period analyzed. This suggests that a study based on these components can characterize the polluting behavior of individual countries quite accurately; that is to say, the classification found in the analysis could be used in other studies that seek to analyze the behavior of countries in terms of CO2 emissions within homogeneous groups. In this sense, it represents an advance over the traditional regional or rich-versus-poor country classifications.
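For reference, a Kaya-type factorization consistent with the four components named above reads (the exact energy aggregates used in the paper are assumptions here; PE and FE denote primary and final energy):

\[ \frac{\mathrm{CO_2}}{\mathrm{POP}} = \frac{\mathrm{CO_2}}{\mathrm{PE}} \times \frac{\mathrm{PE}}{\mathrm{FE}} \times \frac{\mathrm{FE}}{\mathrm{GDP}} \times \frac{\mathrm{GDP}}{\mathrm{POP}}, \]

where the four ratios correspond, respectively, to carbon intensity, (the inverse of) transformation efficiency, energy intensity and social wealth; the cluster analysis then groups countries by their values of these four ratios.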
Abstract:
In this paper well-known summary inequality indexes are used to explore interregional income inequalities in Europe. In particular, we mainly employ Theil's population-weighted index because of its appealing properties. Two decomposition analyses are applied. First, regional inequalities are decomposed by regional subgroups (countries). Second, intertemporal inequality changes are separated into income and population changes. The main results can be summarized as follows. First, the data confirm a reduction in cross-regional inequality during 1982-97. Second, this reduction is basically due to real convergence among countries. Third, the greater part of European interregional disparities is currently within-country in nature, which poses an important challenge for European policy. Fourth, inequality changes are due mainly to income variations, with population changes playing a minor role.
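For the population-weighted Theil index, the subgroup decomposition used here takes the standard form (sketched in generic notation):

\[ L = \sum_g p_g \ln\frac{\mu}{\mu_g} + \sum_g p_g L_g, \]

where p_g is the population share of country g, \(\mu_g\) its mean per capita income, \(\mu\) the overall mean and L_g the index computed over the regions of country g alone; the first sum is the between-country component and the second the within-country component.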
Abstract:
The Spanish savings banks have attracted considerable interest in the scientific arena, especially following the removal of regulatory constraints in the second half of the 1980s. Nonetheless, little research has addressed the mainstream paths defined by strategic groups or the analysis of total factor productivity. Therefore, on the basis of the resource-based view of the firm and cluster analysis, we use changes in structure and performance ratios to identify the strategic groups existing in the sector. We obtain a three-way division, which we link with different input-output specifications defining strategic paths. Consequently, on the basis of these three distinct approaches we compute and decompose a Hicks-Moorsteen total factor productivity index. The results support an interesting interpretation under a multi-strategic approach, together with the drawbacks of employing cluster analysis in a complex strategic environment. Moreover, we also propose an ex-post method of analysing the outcomes of the decomposed total factor productivity index that could be combined with non-traditional techniques for forming strategic groups, such as cognitive approaches.
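As background, the Hicks-Moorsteen TFP index is the ratio of a Malmquist output quantity index to a Malmquist input quantity index; in one common, period-t based formulation (a generic sketch, not necessarily the exact variant computed in the paper),

\[ \mathrm{TFP}^{t}_{HM} = \frac{D_o^{t}(x^{t}, y^{t+1}) \,/\, D_o^{t}(x^{t}, y^{t})}{D_i^{t}(x^{t+1}, y^{t}) \,/\, D_i^{t}(x^{t}, y^{t})}, \]

where D_o and D_i are output and input distance functions evaluated against the period-t technology; the decomposition then separates efficiency change from technical change.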
Abstract:
In this paper, we forecast EU-area inflation with many predictors using time-varying parameter models. The facts that time-varying parameter models are parameter-rich and the time span of our data is relatively short motivate a desire for shrinkage. In constant coefficient regression models, the Bayesian Lasso is gaining increasing popularity as an effective tool for achieving such shrinkage. In this paper, we develop econometric methods for using the Bayesian Lasso with time-varying parameter models. Our approach allows for the coefficient on each predictor to be: i) time varying, ii) constant over time or iii) shrunk to zero. The econometric methodology decides automatically which category each coefficient belongs in. Our empirical results indicate the benefits of such an approach.
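The modelling idea can be sketched as follows (a stylized rendering, not the authors' exact prior specification): the inflation equation is a regression with drifting coefficients,

\[ y_t = x_t'\beta_t + \varepsilon_t, \qquad \beta_{j,t} = \beta_{j,t-1} + \eta_{j,t}, \quad \eta_{j,t} \sim N(0, \tau_j^2), \]

and Lasso-type (Laplace) priors are placed both on the initial coefficients \(\beta_{j,0}\) and on the state innovation scales \(\tau_j\). Shrinking \(\tau_j\) towards zero makes predictor j's coefficient effectively constant over time, and additionally shrinking \(\beta_{j,0}\) towards zero removes the predictor altogether, which yields the three categories listed above.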
Abstract:
Analysis of gas emissions by the input-output subsystem approach provides detailed insight into pollution generation in an economy. Structural decomposition analysis, on the other hand, identifies the factors behind the changes in key variables over time. Extending the input-output subsystem model to account for the changes in these variables reveals the channels by which environmental burdens are caused and transmitted throughout the production system. In this paper we propose a decomposition of the changes in the components of CO2 emissions captured by an input-output subsystems representation. The empirical application is to the Spanish service sector, with economic and environmental data for the years 1990 and 2000. Our results show that services increased their CO2 emissions mainly because of a rise in the emissions generated by non-services to cover the final demand for services. In all service activities, the decomposed effects show a decrease in CO2 emissions due to lower emission coefficients (i.e., emissions per unit of output), compensated by an increase in emissions caused both by the input-output coefficients and by the rise in demand for services. Finally, large asymmetries exist not only in the quantitative changes in the CO2 emissions of the various services but also in the decomposed effects of these changes. Keywords: structural decomposition analysis, input-output subsystems, CO2 emissions, service sector.
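To fix ideas, in a standard input-output setting total emissions can be written as \(E = e'(I - A)^{-1} f\), with e the vector of emission coefficients, A the matrix of technical coefficients and f final demand; one common (polar) form of the structural decomposition of the change between years 0 and 1 is then (a generic sketch, not the subsystem-specific formula developed in the paper)

\[ \Delta E = \Delta e'\, L^{1} f^{1} + e^{0\prime}\, \Delta L\, f^{1} + e^{0\prime} L^{0}\, \Delta f, \qquad L = (I - A)^{-1}, \]

so that the three terms isolate the emission-coefficient, input-output (technology) and final-demand effects.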
Abstract:
Recognition by the T-cell receptor (TCR) of immunogenic peptides (p) presented by Class I major histocompatibility complexes (MHC) is the key event in the immune response against virus-infected cells or tumor cells. A study of the 2C TCR/SIYR/H-2K(b) system using computational alanine scanning and a much faster binding free energy decomposition based on the Molecular Mechanics-Generalized Born Surface Area (MM-GBSA) method is presented. The results show that the TCR-p-MHC binding free energy decomposition using this approach and including entropic terms provides a detailed and reliable description of the interactions between the molecules at an atomistic level. Comparison of the decomposition results with experimentally determined activity differences for alanine mutants yields a correlation of 0.67 when the entropy is neglected and 0.72 when the entropy is taken into account. Similarly, comparison of experimental activities with variations in binding free energies determined by computational alanine scanning yields correlations of 0.72 and 0.74 when the entropy is neglected or taken into account, respectively. Some key interactions for TCR-p-MHC binding are analyzed, and some possible side-chain replacements are proposed in the context of TCR protein engineering. In addition, a comparison of the two theoretical approaches for estimating the role of each side chain in the complexation is given, and a new ad hoc approach to decompose the vibrational entropy term into atomic contributions, the linear decomposition of the vibrational entropy (LDVE), is introduced. The latter allows the rapid calculation of the entropic contribution of side chains of interest to the binding. This new method is based on the idea that the most important contributions to the vibrational entropy of a molecule originate from residues that contribute most to the vibrational amplitude of the normal modes. The LDVE approach is shown to provide results very similar to those of the exact but highly computationally demanding method.
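For context, the MM-GBSA estimate underlying such a decomposition is typically

\[ \Delta G_{\mathrm{bind}} \approx \Delta E_{\mathrm{elec}} + \Delta E_{\mathrm{vdW}} + \Delta G_{\mathrm{GB}} + \Delta G_{\mathrm{SA}} - T\Delta S, \]

where the first two terms are the molecular-mechanics electrostatic and van der Waals energies, \(\Delta G_{\mathrm{GB}}\) and \(\Delta G_{\mathrm{SA}}\) are the polar (Generalized Born) and non-polar (surface-area) solvation contributions, and \(-T\Delta S\) is the entropic term (here obtained from normal-mode analysis); the per-residue decomposition distributes these contributions over individual side chains.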
Abstract:
The study was designed to investigate the psychometric properties of the French version and the cross-language replicability of the Hierarchical Personality Inventory for Children (HiPIC). The HiPIC is an instrument aimed at assessing the five dimensions of the Five-Factor Model for Children. Subjects were 552 children aged between 8 and 12 years, rated by one or both parents. At the domain level, reliability ranged from .83 to .93, and at the facet level, reliability ranged from .69 to .89. Differences between genders were congruent with those found in the Dutch sample. Girls scored higher on Benevolence and Conscientiousness. Age was negatively correlated with Extraversion and Imagination. For girls, we also observed a decrease in Emotional Stability. A series of exploratory factor analyses confirmed the overall five-factor structure for girls and boys. Targeted factor analyses and congruence coefficients revealed high cross-language replicability at the domain and facet levels. The results showed that the French version of the HiPIC is a reliable and valid instrument for assessing personality in children and has particularly high cross-language replicability.
Abstract:
A parts-based model is a parametrization of an object class using a collection of landmarks that follow the object structure. The matching of parts-based models is one of the problems to which pairwise Conditional Random Fields have been successfully applied. The main reason for their effectiveness is tractable inference and learning due to the simplicity of the graphs involved, usually trees. However, these models do not consider possible patterns of statistics among sets of landmarks, and thus they suffer from using overly myopic information. To overcome this limitation, we propose a novel structure based on hierarchical Conditional Random Fields, which we explain in the first part of this thesis. We build a hierarchy of combinations of landmarks, where matching is performed taking into account the whole hierarchy. To preserve tractable inference we effectively sample the label set. We test our method on facial feature selection and human pose estimation on two challenging datasets: Buffy and MultiPIE. In the second part of this thesis, we present a novel approach to multiple kernel combination that relies on stacked classification. This method can be used to evaluate the landmarks of the parts-based model approach. Our method is based on combining the responses of a set of independent classifiers for each individual kernel. Unlike earlier approaches that linearly combine kernel responses, our approach uses them as inputs to another set of classifiers. We show that we outperform state-of-the-art methods on most of the standard benchmark datasets.
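In the pairwise formulation referred to above, matching a parts-based model amounts to minimizing a CRF energy of the standard form (sketched here for orientation)

\[ E(\mathbf{y} \mid \mathbf{x}) = \sum_{i \in V} \psi_i(y_i; \mathbf{x}) + \sum_{(i,j) \in \mathcal{E}} \psi_{ij}(y_i, y_j; \mathbf{x}), \]

where \(y_i\) is the image location assigned to landmark i, the unary terms score local appearance and the pairwise terms encode geometric consistency between connected landmarks; when the graph \((V, \mathcal{E})\) is a tree, exact inference is tractable, which is the simplicity that the hierarchical model relaxes.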
Abstract:
In occupational exposure assessment of airborne contaminants, exposure levels can be estimated through repeated measurements of the pollutant concentration in air, through expert judgment, or through exposure models that use information on the conditions of exposure as input. In this report, we propose an empirical hierarchical Bayesian model to unify these approaches. Prior to any measurement, the hygienist conducts an assessment to generate prior distributions of exposure determinants. Monte Carlo samples from these distributions feed two level-2 models: a physical, two-compartment model and a non-parametric, neural network model trained with existing exposure data. The outputs of these two models are weighted according to the expert's assessment of their relevance to yield predictive distributions of the long-term geometric mean and geometric standard deviation of the worker's exposure profile (level-1 model). Bayesian inferences are then drawn iteratively from subsequent measurements of worker exposure. Any traditional decision strategy based on a comparison with occupational exposure limits (e.g. mean exposure, exceedance strategies) can then be applied. Data on 82 workers exposed to 18 contaminants in 14 companies were used to validate the model with cross-validation techniques. A user-friendly program running the model is available upon request.
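The level-1 model can be sketched as the usual lognormal exposure profile (a simplified rendering of the structure described above):

\[ x_k \sim \mathrm{LN}(\mu, \sigma^2), \qquad \mathrm{GM} = e^{\mu}, \quad \mathrm{GSD} = e^{\sigma}, \]

with the expert-weighted mixture of the two level-2 models (physical and neural network) supplying the prior predictive distribution of \((\mu, \sigma)\), which is then updated by Bayes' rule as measurements \(x_k\) of the worker's exposure become available.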
Abstract:
The calyptrate dipterans are the most important decomposers of human cadavers. Knowledge of their species and distribution is of great importance to forensic entomology, especially given the enormous diversity in Brazil. Carcasses of domestic pigs (Sus scrofa L.) were used as experimental models to attract calyptrates of forensic interest during the winters of 2006 and 2007 and the summers of 2006 and 2008. A total of 24,423 specimens from 44 species were collected (19 Muscidae, 2 Fanniidae and 23 Sarcophagidae), three of which were new records of occurrence and 20 of which were new forensic records for the state of Rio de Janeiro. Fourteen of these species were newly identified as forensically important in Brazil.
Abstract:
In several computer graphics areas, a refinement criterion is often needed to decide whether to go on or to stop sampling a signal. When the sampled values are homogeneous enough, we assume that they represent the signal fairly well and we do not need further refinement; otherwise more samples are required, possibly with adaptive subdivision of the domain. For this purpose, a criterion which is very sensitive to variability is necessary. In this paper, we present a family of discrimination measures, the f-divergences, meeting this requirement. These convex functions have been well studied and successfully applied to image processing and several areas of engineering. Two applications to global illumination are shown: oracles for hierarchical radiosity and criteria for adaptive refinement in ray-tracing. We obtain significantly better results than with classic criteria, showing that f-divergences are worth further investigation in computer graphics. Also, a discrimination measure based on the entropy of the samples for refinement in ray-tracing is introduced. The recursive decomposition of entropy provides us with a natural method to deal with the adaptive subdivision of the sampling region.
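For reference, the f-divergence family used as a discrimination measure is defined, for a convex f with f(1) = 0, by

\[ D_f(p \,\|\, q) = \sum_i q_i\, f\!\left(\frac{p_i}{q_i}\right), \]

which recovers the Kullback-Leibler divergence for \(f(t) = t\ln t\) and the chi-square divergence for \(f(t) = (t-1)^2\); applied to the distribution of sampled values against a reference (e.g. uniform) distribution, a large value signals heterogeneity and triggers refinement. The entropy-based criterion exploits the grouping (recursivity) property \(H(p_1,\dots,p_n) = H(g_1,g_2) + g_1 H(p_1/g_1,\dots) + g_2 H(\dots)\), which mirrors the recursive subdivision of the sampling region.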
Abstract:
This paper surveys control architectures proposed in the literature and describes a control architecture being developed for a semi-autonomous underwater vehicle for intervention missions (SAUVIM) at the University of Hawaii. Conceived as a hybrid, this architecture is organized in three layers: planning, control and execution. The mission is planned as a sequence of subgoals. Each subgoal has an associated task supervisor responsible for arranging a set of pre-programmed task modules in order to achieve the subgoal. Task modules are the key concept of the architecture. They are the main building blocks and can be dynamically re-arranged by the task supervisor. In our architecture, deliberation takes place at the planning layer, while reaction is handled through the parallel execution of the task modules. Hence, the system presents both a hierarchical and a heterarchical decomposition, showing a predictable response while keeping rapid reactivity to the dynamic environment.
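As a rough illustration of this layering (a hypothetical sketch: the class names and interfaces below are invented for illustration and are not taken from the SAUVIM implementation), a task supervisor can be modelled as an object that arranges pre-programmed task modules at run time, while the planner only sequences subgoals:

from abc import ABC, abstractmethod

class TaskModule(ABC):
    """Pre-programmed building block, executed reactively at the execution layer."""
    @abstractmethod
    def step(self, sensors: dict) -> dict:
        """One control cycle: read sensor data, return actuator commands."""

class KeepDepth(TaskModule):
    """Example module: hold a target depth with a simple proportional correction."""
    def __init__(self, target: float):
        self.target = target
    def step(self, sensors: dict) -> dict:
        return {"heave": 0.5 * (self.target - sensors["depth"])}

class TaskSupervisor:
    """Control layer: arranges a set of task modules so that one subgoal is achieved."""
    def __init__(self, subgoal: str, modules: list):
        self.subgoal, self.modules = subgoal, modules
    def run_cycle(self, sensors: dict) -> dict:
        commands = {}
        for module in self.modules:  # modules run side by side within one control cycle
            commands.update(module.step(sensors))
        return commands

class MissionPlanner:
    """Planning layer (deliberation): a mission is a sequence of subgoals."""
    def __init__(self, supervisors: list):
        self.supervisors = supervisors
    def execute(self, sensor_stream):
        # One supervisor is active per subgoal; reactivity stays inside the modules.
        for supervisor, sensors in zip(self.supervisors, sensor_stream):
            yield supervisor.subgoal, supervisor.run_cycle(sensors)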