979 results for Morse decompositions
Abstract:
Given a non-positively curved 2-complex with a circle-valued Morse function satisfying some extra combinatorial conditions, we describe how to locally isometrically embed it in a larger non-positively curved 2-complex with free-by-cyclic fundamental group. This embedding procedure is used to produce examples of CAT(0) free-by-cyclic groups that contain closed hyperbolic surface subgroups with polynomial distortion of arbitrary degree. We also produce examples of CAT(0) hyperbolic free-by-cyclic groups that contain closed hyperbolic surface subgroups that are exponentially distorted.
Abstract:
We construct the Chow motive modelling intersection cohomology of a proper surface. We then study its functoriality properties. Using Murre's decompositions of the motive of a desingularization into Künneth components [Mr1], we show that such decompositions also exist for the intersection motive.
Abstract:
Using new linked employee-employer data for Britain in 2004, this paper shows that, on average, full-time male public sector employees earn 11.7 log wage points more than their private sector counterparts. Decomposition analysis reveals that the majority of this pay premium is associated with public sector employees having individual characteristics associated with higher pay and with their working in higher-paid occupations. Focusing the analysis further on the highly skilled and unskilled occupations in both sectors reveals evidence of workplace segregation positively affecting earnings in the private sector for the highly skilled, and in the public sector for the unskilled. Substantial earnings gaps between the highly skilled and unskilled are found, and the unexplained components in these gaps are very similar regardless of sector.
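As an illustration of the kind of decomposition analysis referred to above, the following is a minimal numpy sketch of a generic two-fold Oaxaca-Blinder-style decomposition of a mean log-wage gap into an explained part (differences in characteristics) and an unexplained part (differences in coefficients). The variable names and the choice of the private-sector coefficients as reference are illustrative assumptions, not the paper's exact specification.

    import numpy as np

    def oaxaca_decomposition(X_pub, y_pub, X_priv, y_priv):
        """Generic two-fold Oaxaca-Blinder-style decomposition of the mean
        log-wage gap, using private-sector coefficients as the reference.
        A sketch only; not the paper's exact specification."""
        def ols(X, y):
            X1 = np.column_stack([np.ones(len(X)), X])
            beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
            return beta
        b_pub, b_priv = ols(X_pub, y_pub), ols(X_priv, y_priv)
        x_pub = np.r_[1.0, X_pub.mean(axis=0)]    # mean characteristics, with intercept
        x_priv = np.r_[1.0, X_priv.mean(axis=0)]
        gap = y_pub.mean() - y_priv.mean()
        explained = (x_pub - x_priv) @ b_priv      # attributable to characteristics
        unexplained = x_pub @ (b_pub - b_priv)     # attributable to coefficients
        return gap, explained, unexplained

With OLS regressions that include an intercept, explained + unexplained reproduces the raw gap exactly.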
Abstract:
This paper contributes to the on-going empirical debate regarding the role of the RBC model and in particular of technology shocks in explaining aggregate fluctuations. To this end we estimate the model’s posterior density using Markov-Chain Monte-Carlo (MCMC) methods. Within this framework we extend Ireland’s (2001, 2004) hybrid estimation approach to allow for a vector autoregressive moving average (VARMA) process to describe the movements and co-movements of the model’s errors not explained by the basic RBC model. The results of marginal likelihood ratio tests reveal that the more general model of the errors significantly improves the model’s fit relative to the VAR and AR alternatives. Moreover, despite setting the RBC model a more difficult task under the VARMA specification, our analysis, based on forecast error and spectral decompositions, suggests that the RBC model is still capable of explaining a significant fraction of the observed variation in macroeconomic aggregates in the post-war U.S. economy.
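For concreteness, a VARMA error process of the kind referred to above takes the form below (the orders shown are illustrative, not taken from the paper); the VAR alternative corresponds to \Theta_1 = 0, and the AR alternative additionally restricts \Phi_1 to be diagonal:

    u_t = \Phi_1 u_{t-1} + \varepsilon_t + \Theta_1 \varepsilon_{t-1}, \qquad \varepsilon_t \sim \mathcal{N}(0, \Sigma_\varepsilon)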
Abstract:
Hypergraph width measures are a class of hypergraph invariants important in studying the complexity of constraint satisfaction problems (CSPs). We present a general exact exponential algorithm for a large variety of these measures. A connection between these and tree decompositions is established. This enables us to almost seamlessly adapt the combinatorial and algorithmic results known for tree decompositions of graphs to the case of hypergraphs and obtain fast exact algorithms. As a consequence, we provide algorithms which, given a hypergraph H on n vertices and m hyperedges, compute the generalized hypertree-width of H in time O*(2^n) and compute the fractional hypertree-width of H in time O(1.734601^n · m).
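For reference, the two width measures named in the running times have the following standard definitions (nothing here is specific to the paper): a generalized hypertree decomposition augments a tree decomposition (T, \chi) of H with an edge cover \lambda_t \subseteq E(H) of each bag \chi(t), and the fractional variant replaces \lambda_t by a fractional edge cover \gamma_t : E(H) \to [0, \infty).

    ghw(H) = \min_{(T,\chi,\lambda)} \max_{t \in T} |\lambda_t|
    fhw(H) = \min_{(T,\chi,\gamma)} \max_{t \in T} \sum_{e \in E(H)} \gamma_t(e), \quad \text{where } \sum_{e \ni v} \gamma_t(e) \ge 1 \text{ for every } v \in \chi(t).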
Abstract:
Developments in the statistical analysis of compositional data over the last two decades have made possible a much deeper exploration of the nature of variability, and the possible processes associated with compositional data sets from many disciplines. In this paper we concentrate on geochemical data sets. First we explain how hypotheses of compositional variability may be formulated within the natural sample space, the unit simplex, including useful hypotheses of subcompositional discrimination and specific perturbational change. Then we develop through standard methodology, such as generalised likelihood ratio tests, statistical tools to allow the systematic investigation of a complete lattice of such hypotheses. Some of these tests are simple adaptations of existing multivariate tests but others require special construction. We comment on the use of graphical methods in compositional data analysis and on the ordination of specimens. The recent development of the concept of compositional processes is then explained together with the necessary tools for a staying-in-the-simplex approach, namely compositional singular value decompositions. All these statistical techniques are illustrated for a substantial compositional data set, consisting of 209 major-oxide and rare-element compositions of metamorphosed limestones from the Northeast and Central Highlands of Scotland. Finally we point out a number of unresolved problems in the statistical analysis of compositional processes.
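The staying-in-the-simplex machinery mentioned above rests on log-ratio transforms. Below is a minimal numpy sketch of the centred log-ratio (clr) transform and of a singular value decomposition of clr-transformed, centred compositions; the function names and the toy data are illustrative only, not code from the paper.

    import numpy as np

    def clr(X):
        """Centred log-ratio transform of compositions (rows of X are positive parts)."""
        logX = np.log(X)
        return logX - logX.mean(axis=1, keepdims=True)

    def compositional_svd(X):
        """SVD of clr-transformed data centred at the compositional mean:
        the basic ingredient of a compositional singular value decomposition."""
        Z = clr(X)
        Z = Z - Z.mean(axis=0, keepdims=True)
        return np.linalg.svd(Z, full_matrices=False)

    # toy example: 5 specimens, 3 parts
    X = np.array([[0.20, 0.30, 0.50],
                  [0.10, 0.40, 0.50],
                  [0.30, 0.30, 0.40],
                  [0.25, 0.25, 0.50],
                  [0.15, 0.35, 0.50]])
    U, s, Vt = compositional_svd(X)
    print(s)   # singular values summarising compositional variability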
Abstract:
A joint distribution of two discrete random variables with finite support can be displayed as a two-way table of probabilities adding to one. Assume that this table has n rows and m columns and all probabilities are non-null. This kind of table can be seen as an element in the simplex of n · m parts. In this context, the marginals are identified as compositional amalgams, conditionals (rows or columns) as subcompositions. Also, simplicial perturbation appears as Bayes theorem. However, the Euclidean elements of the Aitchison geometry of the simplex can also be translated into the table of probabilities: subspaces, orthogonal projections, distances. Two important questions are addressed: a) given a table of probabilities, which is the nearest independent table to the initial one? b) which is the largest orthogonal projection of a row onto a column? or, equivalently, which is the information in a row explained by a column, thus explaining the interaction? To answer these questions three orthogonal decompositions are presented: (1) by columns and a row-wise geometric marginal, (2) by rows and a column-wise geometric marginal, (3) by independent two-way tables and fully dependent tables representing row-column interaction. An important result is that the nearest independent table is the product of the two (row and column)-wise geometric marginal tables. A corollary is that, in an independent table, the geometric marginals conform with the traditional (arithmetic) marginals. These decompositions can be compared with standard log-linear models.
Key words: balance, compositional data, simplex, Aitchison geometry, composition, orthonormal basis, arithmetic and geometric marginals, amalgam, dependence measure, contingency table
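The stated result on the nearest independent table is easy to compute: close the outer product of the row-wise and column-wise geometric means of the (strictly positive) table. Below is a minimal numpy sketch of that construction; the function names and the toy table are illustrative, not from the paper.

    import numpy as np

    def closure(x):
        """Rescale a positive array so its entries sum to one."""
        return x / x.sum()

    def nearest_independent_table(P):
        """Nearest independent table, in the Aitchison geometry, to a strictly
        positive n x m probability table P: the closed outer product of the
        row-wise and column-wise geometric means."""
        g_rows = np.exp(np.log(P).mean(axis=1))   # geometric mean of each row
        g_cols = np.exp(np.log(P).mean(axis=0))   # geometric mean of each column
        return closure(np.outer(g_rows, g_cols))

    P = closure(np.array([[0.10, 0.20, 0.05],
                          [0.15, 0.30, 0.20]]))
    print(nearest_independent_table(P))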
Abstract:
Scarcities of environmental services are no longer merely a remote hypothesis. Consequently, analysis of their inequalities between nations becomes of paramount importance for the achievement of sustainability, in terms either of international policy or of universalist ethical principles of equity. This paper aims, on the one hand, at revising methodological aspects of the inequality measurement of certain environmental data and, on the other, at extending the scarce empirical evidence relating to the international distribution of the Ecological Footprint (EF), by using a longer EF time series. Most of the techniques currently important in the literature are revised and then tested on EF data with interesting results. We look in depth at Lorenz dominance analyses and consider the underlying properties of different inequality indices. The indices which fit best with environmental inequality measurement are CV² and GE(2) because of their neutrality property; however, a trade-off may occur when subgroup decompositions are performed. A weighting factor decomposition method is proposed in order to isolate weighting factor changes in inequality growth rates. Finally, the only non-ambiguous way of decomposing inequality by source is the natural decomposition of CV², which additionally allows the interpretation of marginal term contributions. Empirically, this paper contributes to the environmental inequality measurement of EF: this inequality has been quite stable and its change over time is due to per capita vector changes rather than population changes. Almost the entirety of the EF inequality is explainable by differences in the means between the countries of the World Bank group. This finding suggests that international environmental agreements should be attempted on a regional basis in an attempt to achieve greater consensus between the parties involved. Additionally, source decomposition warns of the dangers of confining CO2 emissions reduction to crop-based energies because of the implications for basic needs satisfaction. Keywords: ecological footprint; ecological inequality measurement; inequality decomposition.
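As a pointer to the mechanics of the natural decomposition of CV² by source: when the total footprint is the sum of its source components, the contribution of source k is cov(x_k, x)/mean(x)², and these contributions add up to the CV² of the total. The numpy sketch below illustrates this with made-up, unweighted data; population weighting and the paper's actual sources are not reproduced.

    import numpy as np

    def cv2(x):
        """Squared coefficient of variation: Var(x) / mean(x)^2."""
        return x.var() / x.mean() ** 2

    def cv2_source_decomposition(sources):
        """Natural (Shorrocks-type) decomposition of CV^2 by source, assuming the
        total is the sum of the sources: contribution of source k is
        cov(x_k, total) / mean(total)^2; contributions sum to cv2(total)."""
        total = sources.sum(axis=0)
        m2 = total.mean() ** 2
        return np.array([np.cov(xk, total, bias=True)[0, 1] / m2 for xk in sources])

    # toy example: 2 footprint components observed for 4 countries (hypothetical data)
    sources = np.array([[1.0, 2.0, 3.0, 4.0],
                        [0.5, 1.5, 1.0, 2.0]])
    contrib = cv2_source_decomposition(sources)
    print(contrib, contrib.sum(), cv2(sources.sum(axis=0)))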
Abstract:
In silico screening has become a valuable tool in drug design, but some drug targets represent real challenges for docking algorithms. This is especially true for metalloproteins, whose interactions with ligands are difficult to parametrize. Our docking algorithm, EADock, is based on the CHARMM force field, which assures a physically sound scoring function and good transferability to a wide range of systems, but also exhibits difficulties in the case of some metalloproteins. Here, we consider the therapeutically important case of heme proteins featuring an iron core at the active site. Using a standard docking protocol, where the iron-ligand interaction is underestimated, we obtained a success rate of 28% for a test set of 50 heme-containing complexes with iron-ligand contact. By introducing Morse-like metal binding potentials (MMBP), which are fitted to reproduce density functional theory calculations, we are able to increase the success rate to 62%. The remaining failures are mainly due to specific ligand-water interactions in the X-ray structures. Testing of the MMBP on a second data set of non-iron binders (14 cases) demonstrates that they do not introduce a spurious bias towards metal binding, which suggests that they may also be used reliably for cross-docking studies.
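For reference, a Morse-like potential has the generic form sketched below; the well depth, width and equilibrium distance shown are purely illustrative placeholders, not the DFT-fitted MMBP parameters of the paper.

    import numpy as np

    def morse(r, D_e, a, r_e):
        """Generic Morse potential with well depth D_e, width parameter a and
        equilibrium distance r_e; V(r_e) = -D_e and V -> 0 as r -> infinity."""
        return D_e * (1.0 - np.exp(-a * (r - r_e))) ** 2 - D_e

    # illustrative (not fitted) parameters for an iron-ligand contact
    r = np.linspace(1.5, 5.0, 200)              # distance in Angstrom
    V = morse(r, D_e=30.0, a=2.0, r_e=2.1)      # energy in kcal/mol, purely illustrative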
Abstract:
We compare two methods for visualising contingency tables and develop a method called the ratio map which combines the good properties of both. The first is a biplot based on the logratio approach to compositional data analysis. This approach is founded on the principle of subcompositional coherence, which assures that results are invariant to considering subsets of the composition. The second approach, correspondence analysis, is based on the chi-square approach to contingency table analysis. A cornerstone of correspondence analysis is the principle of distributional equivalence, which assures invariance in the results when rows or columns with identical conditional proportions are merged. Both methods may be described as singular value decompositions of appropriately transformed matrices. Correspondence analysis includes a weighting of the rows and columns proportional to the margins of the table. If this idea of row and column weights is introduced into the logratio biplot, we obtain a method which obeys both principles of subcompositional coherence and distributional equivalence.
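The following is a minimal numpy sketch of the construction described above, i.e. a log-ratio analysis with row and column masses used as weights before the singular value decomposition. The function name, the centring details and the coordinate scaling are assumptions made for illustration, not the authors' exact algorithm.

    import numpy as np

    def ratio_map(N):
        """Weighted log-ratio analysis of a strictly positive contingency table N:
        log-transform, double-centre with row/column masses as weights, then take
        a weighted SVD."""
        P = N / N.sum()
        r = P.sum(axis=1)                      # row masses
        c = P.sum(axis=0)                      # column masses
        L = np.log(P)
        Z = L - (L @ c)[:, None] - (r @ L)[None, :] + r @ L @ c   # weighted double-centring
        S = np.sqrt(r)[:, None] * Z * np.sqrt(c)[None, :]
        U, s, Vt = np.linalg.svd(S, full_matrices=False)
        rows = U * s / np.sqrt(r)[:, None]     # row coordinates
        cols = Vt.T * s / np.sqrt(c)[:, None]  # column coordinates
        return rows, cols, s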
Abstract:
Estimates for the U.S. suggest that, at least in some sectors, productivity-enhancing reallocation is the dominant factor in accounting for productivity growth. An open question, particularly relevant for developing countries, is whether reallocation is always productivity enhancing. It may be that imperfect competition or other barriers to competitive environments imply that the reallocation process is not fully efficient in these countries. Using a unique plant-level longitudinal dataset for Colombia for the period 1982-1998, we explore these issues by examining the interaction between market allocation, and productivity and profitability. Moreover, given the important trade, labor and financial market reforms in Colombia during the early 1990s, we explore whether and how the contribution of reallocation changed over the period of study. Our data permit measurement of plant-level quantities and prices. Taking advantage of the rich structure of our price data, we propose a sequential methodology to estimate productivity and demand shocks at the plant level. First, we estimate total factor productivity (TFP) with plant-level physical output data, where we use downstream demand to instrument inputs. We then turn to estimating demand shocks and mark-ups with plant-level price data, using TFP to instrument for output in the inverse demand equation. We examine the evolution of the distributions of TFP and demand shocks in response to the market reforms in the 1990s. We find that market reforms are associated with rising overall productivity that is largely driven by reallocation away from low- and towards high-productivity businesses. In addition, we find that the allocation of activity across businesses is less driven by demand factors after reforms. We find that the increase in aggregate productivity post-reform is entirely accounted for by the improved allocation of activity.
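Both estimation steps described above are instrumental-variable regressions. The following is a bare-bones two-stage least squares sketch with assumed, generic variable names; it shows only the mechanics of an IV step, not the paper's actual estimator.

    import numpy as np

    def two_sls(y, X, Z):
        """Generic two-stage least squares: project the regressors X on the
        instruments Z, then regress y on the fitted values. A sketch of the kind
        of IV step used twice in the sequential methodology (instrumenting inputs
        with downstream demand, then output with TFP)."""
        X1 = np.column_stack([np.ones(len(X)), X])
        Z1 = np.column_stack([np.ones(len(Z)), Z])
        X_hat = Z1 @ np.linalg.lstsq(Z1, X1, rcond=None)[0]   # first stage: fitted regressors
        beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]       # second stage
        return beta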
Abstract:
We examine the effects of extracting monetary policy disturbances with semi-structural and structural VARs, using data generated by a limited participation model under partially accommodative and feedback rules. We find that, in general, misspecification is substantial: short run coefficients often have wrong signs; impulse responses and variance decompositions give misleading representations of the dynamics. Explanations for the results and suggestions for macroeconomic practice are provided.
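For readers who want the mechanics behind "impulse responses and variance decompositions", here is a textbook-style numpy sketch for a VAR(1) with a Cholesky identification; it is not the semi-structural or structural identification scheme examined in the paper, and the variable names A and Sigma are assumptions.

    import numpy as np

    def irf_and_fevd(A, Sigma, horizon=12):
        """Impulse responses and forecast-error variance decompositions for a
        VAR(1) y_t = A y_{t-1} + u_t with Cov(u_t) = Sigma, identified by a
        Cholesky factorisation."""
        P = np.linalg.cholesky(Sigma)          # lower-triangular impact matrix
        k = A.shape[0]
        Phi = [np.linalg.matrix_power(A, h) @ P for h in range(horizon)]   # responses to unit structural shocks
        mse = np.zeros((k, k))
        for Th in Phi:
            mse += Th ** 2                     # accumulate squared responses
        fevd = mse / mse.sum(axis=1, keepdims=True)   # share of each shock in each variable's forecast-error variance
        return Phi, fevd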
Abstract:
The T-cell receptor (TCR) interaction with antigenic peptides (p) presented by the major histocompatibility complex (MHC) molecule is a key determinant of immune response. In addition, TCR-pMHC interactions offer examples of features more generally pertaining to protein-protein recognition: subtle specificity and cross-reactivity. Despite their importance, molecular details determining the TCR-pMHC binding remain unsolved. However, molecular simulation provides the opportunity to investigate some of these aspects. In this study, we perform extensive equilibrium and steered molecular dynamics simulations to study the unbinding of three TCR-pMHC complexes. As a function of the dissociation reaction coordinate, we are able to obtain converged H-bond counts and energy decompositions at different levels of detail, ranging from the full proteins, to separate residues and water molecules, down to single atoms at the interface. Many observed features do not support a previously proposed two-step model for TCR recognition. Our results also provide keys to interpret experimental point-mutation results. We highlight the role of water both in terms of interface resolvation and of water molecules trapped in the bound complex. Importantly, we illustrate how two TCRs with similar reactivity and structures can have essentially different binding strategies.
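A simple geometric hydrogen-bond count of the kind tallied along the dissociation coordinate can be sketched as follows; the distance and angle cutoffs are common illustrative choices, and the variable names are assumptions, not the criterion used in the study.

    import numpy as np

    def count_hbonds(donors, hydrogens, acceptors, d_cut=3.5, angle_cut=150.0):
        """Count hydrogen bonds in one frame with a simple geometric criterion:
        donor-acceptor distance below d_cut (Angstrom) and donor-H-acceptor angle
        above angle_cut (degrees)."""
        n = 0
        for d, h in zip(donors, hydrogens):          # donor heavy atom paired with its hydrogen
            for a in acceptors:
                if np.linalg.norm(d - a) < d_cut:
                    v1, v2 = d - h, a - h
                    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
                    if np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) > angle_cut:
                        n += 1
        return n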
Abstract:
Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed pattern (FP) components and assess the DAK range where quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were explored for the effects of data weighting and squared fit coefficients during the curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA) and noise de-trending. Finally, spatial stationarity of noise was assessed.
Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise but FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of noise components. While there are limitations to the polynomial model, when used with care and with appropriate data weighting, the method offers a simple and robust means of examining the detector noise components as a function of detector exposure.
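The polynomial decomposition can be written as a weighted least-squares fit of pixel variance against DAK, with the quadratic, linear and constant terms read off as the fixed pattern, quantum and electronic contributions respectively. The numpy sketch below, with synthetic data and an assumed weighting scheme, illustrates this; it is not the Guidelines' or the authors' exact implementation.

    import numpy as np

    def decompose_noise(dak, variance, weights=None):
        """Weighted second-order polynomial fit of pixel variance versus detector
        air kerma: variance ~ fp*DAK^2 + quantum*DAK + electronic."""
        X = np.column_stack([dak ** 2, dak, np.ones_like(dak)])
        if weights is None:
            weights = np.ones_like(dak)
        W = np.sqrt(weights)
        coeffs, *_ = np.linalg.lstsq(W[:, None] * X, W * variance, rcond=None)
        fp, quantum, electronic = coeffs
        return fp, quantum, electronic

    # illustrative use with synthetic data (DAK in uGy)
    dak = np.array([6.25, 12.5, 25, 50, 100, 200, 400, 800, 1600], dtype=float)
    var = 0.0001 * dak ** 2 + 0.5 * dak + 2.0
    print(decompose_noise(dak, var, weights=1.0 / var ** 2))   # recovers the three components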