89 results for LEED STRUCTURE-ANALYSIS
Abstract:
The Spanish savings banks have attracted considerable interest in the scientific arena, especially after the removal of regulatory constraints in the second half of the 1980s. Nonetheless, we identified a lack of research on the mainstream paths given by strategic groups and on the analysis of total factor productivity. Therefore, on the basis of the resource-based view of the firm and cluster analysis, we make use of changes in structure and performance ratios in order to identify the strategic groups extant in the sector. We obtain a three-way division, which we link with different input-output specifications defining strategic paths. Consequently, on the basis of these three dissimilar approaches we compute and decompose a Hicks-Moorsteen total factor productivity index. The results obtained suggest an interesting interpretation under a multi-strategic approach, together with the drawbacks of employing cluster analysis within a complex strategic environment. Moreover, we also propose an ex-post method of analysing the outcomes of the decomposed total factor productivity index that could be merged with non-traditional techniques of forming strategic groups, such as cognitive approaches.
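As a concrete illustration of the grouping step, here is a minimal sketch of cluster analysis on structure and performance ratios; the bank data, the ratio choices, and the plain k-means are all hypothetical, not the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical structure/performance ratios for 12 savings banks
# (columns: loans/assets, equity/assets, return on assets).
ratios = np.vstack([
    rng.normal([0.60, 0.08, 0.010], [0.02, 0.01, 0.002], (4, 3)),
    rng.normal([0.40, 0.12, 0.008], [0.02, 0.01, 0.002], (4, 3)),
    rng.normal([0.50, 0.05, 0.015], [0.02, 0.01, 0.002], (4, 3)),
])

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means on standardized ratios."""
    Z = (X - X.mean(0)) / X.std(0)
    centers = Z[:: len(Z) // k][:k]   # deterministic spread-out init for the sketch
    for _ in range(iters):
        labels = ((Z[:, None] - centers) ** 2).sum(-1).argmin(1)
        centers = np.array([Z[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

labels = kmeans(ratios, k=3)
print(labels)
```

Each group of four simulated banks ends up in its own cluster; in practice the number of groups and the ratio set would come from the strategic-group analysis itself.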
Abstract:
This paper presents an outline of the rationale and theory of the MuSIASEM scheme (Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism). First, three points of the rationale behind the MuSIASEM scheme are discussed: (i) endosomatic and exosomatic metabolism in relation to Georgescu-Roegen's flow-fund scheme; (ii) the bioeconomic analogy of hypercycle and dissipative parts in ecosystems; (iii) the dramatic reallocation of human time and land use patterns across the various sectors of a modern economy. Next, a flow-fund representation of the MuSIASEM scheme on three levels (the whole national level, the paid-work sectors level, and the agricultural sector level) is illustrated to look at the structure of the human economy in relation to two primary factors: (i) human time, a fund; and (ii) exosomatic energy, a flow. The three-level representation uses extensive and intensive variables simultaneously. Key conceptual tools of the MuSIASEM scheme, mosaic effects and impredicative loop analysis, are explained using the three-level flow-fund representation. Finally, we claim that the MuSIASEM scheme can be seen as a multi-purpose grammar useful for dealing with sustainability issues.
Abstract:
Performance analysis is the task of monitoring the behavior of a program execution. The main goal is to find out which adjustments might be made in order to improve performance. To achieve that improvement it is necessary to find the different causes of overhead. We are already in the multicore era, but there is a gap between the levels of development of the two main branches of multicore technology (hardware and software). When we talk about multicore we are also speaking of shared-memory systems; in this master thesis we discuss the issues involved in the performance analysis and tuning of applications running specifically on a shared-memory system. We move one step further, taking performance analysis to another level by analyzing the applications' structure and patterns. We also present some tools specifically addressed to the performance analysis of OpenMP multithreaded applications. Finally, we present the results of experiments performed with a set of OpenMP scientific applications.
Abstract:
This project analyses and optimizes the satellite-to-aircraft link of a global aeronautical system. This new system, called ANTARES, is designed to connect aircraft with ground stations through a satellite. It is an initiative involving official aviation institutions such as ECAC, developed as a European collaboration of universities and companies. The work carried out in the project basically covers three aspects: the design and analysis of resource management; the suitability of using error correction in the link layer and, where necessary, the design of a preliminary coding option; and finally, the study and analysis of the effect of co-channel interference in multibeam systems. All these topics are considered only for the forward link. The project is structured to first present the overall characteristics of the system, and then to focus on and analyse the aforementioned topics in order to provide results and draw conclusions.
Abstract:
At CoDaWork'03 we presented work on the analysis of archaeological glass compositional data. Such data typically consist of geochemical compositions involving 10-12 variables and approximate completely compositional data if the main component, silica, is included. We suggested that what has been termed 'crude' principal component analysis (PCA) of standardized data often identified interpretable pattern in the data more readily than analyses based on log-ratio transformed data (LRA). The fundamental problem is that, in LRA, minor oxides with high relative variation, that may not be structure carrying, can dominate an analysis and obscure pattern associated with variables present at higher absolute levels. We investigate this further using subcompositional data relating to archaeological glasses found on Israeli sites. A simple model for glass-making is that it is based on a 'recipe' consisting of two 'ingredients', sand and a source of soda. Our analysis focuses on the sub-composition of components associated with the sand source. A 'crude' PCA of standardized data shows two clear compositional groups that can be interpreted in terms of different recipes being used at different periods, reflected in absolute differences in the composition. LRA can be undertaken either by normalizing the data or defining a 'residual'. In either case, after some 'tuning', these groups are recovered. The results from the normalized LRA are differently interpreted as showing that the source of sand used to make the glass differed. These results are complementary. One relates to the recipe used. The other relates to the composition (and presumed sources) of one of the ingredients. It seems to be axiomatic in some expositions of LRA that statistical analysis of compositional data should focus on relative variation via the use of ratios. Our analysis suggests that absolute differences can also be informative.
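The contrast between the two analyses can be sketched in a few lines; the data here are simulated compositions, not the Israeli glass measurements, and the transforms are the standard ones ('crude' = standardize each component, LRA = centred log-ratio):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated glass compositions (rows sum to 1): silica, soda, lime, minor oxide.
X = rng.dirichlet([70, 15, 10, 1], size=50)

def pca_scores(Z, k=2):
    """First k principal component scores via SVD of the centred matrix."""
    Zc = Z - Z.mean(0)
    _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
    return Zc @ Vt[:k].T

# 'Crude' PCA: standardize each component, then PCA.
crude = pca_scores((X - X.mean(0)) / X.std(0))

# Log-ratio analysis: centred log-ratio (clr) transform, then PCA.
clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)
lra = pca_scores(clr)

print(crude.shape, lra.shape)
```

Because the clr transform works on log-ratios, a minor oxide with high relative variation can dominate `lra` while barely registering in `crude`, which is exactly the tension the abstract discusses.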
Abstract:
Precision of released figures is not only an important quality feature of official statistics, it is also essential for a good understanding of the data. In this paper we show a case study of how precision could be conveyed if the multivariate nature of the data has to be taken into account. In the official release of the Swiss earnings structure survey, the total salary is broken down into several wage components. We follow Aitchison's approach for the analysis of compositional data, which is based on logratios of components. We first present different multivariate analyses of the compositional data whereby the wage components are broken down by economic activity classes. Then we propose a number of ways to assess precision.
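One standard summary in Aitchison's framework is the variation matrix of pairwise logratio variances, which conveys how stable each pair of components is relative to the others; a small sketch with simulated wage components (the component names and data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated wage compositions: base salary, bonus, allowances (rows sum to 1).
W = rng.dirichlet([20, 3, 2], size=200)

# Aitchison variation matrix: T[i, j] = var(log(W_i / W_j)).
D = W.shape[1]
T = np.array([[np.var(np.log(W[:, i] / W[:, j])) for j in range(D)]
              for i in range(D)])

print(np.round(T, 3))
```

Small entries of `T` indicate component pairs that move together; a precision statement attached to each logratio variance respects the compositional constraint, unlike componentwise standard errors.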
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies, and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second how the unit available is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
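The two-stage idea can be sketched by simulation: a first stage decides which parts are zero (independent binomial incidence), and a second distributes the unit over the non-zero parts via a logistic-normal draw. The probabilities and parameters below are made up for illustration; this is the generative idea, not the paper's estimation procedure:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 4            # number of parts
n = 10           # number of compositions
p_present = np.array([0.9, 0.8, 0.6, 0.5])   # stage 1: incidence probabilities

samples = []
for _ in range(n):
    # Stage 1: independent Bernoulli incidence for each part.
    present = rng.random(D) < p_present
    if not present.any():
        present[0] = True                     # keep at least one non-zero part
    # Stage 2: logistic-normal composition over the non-zero parts only.
    z = rng.normal(0.0, 1.0, present.sum())
    comp = np.zeros(D)
    comp[present] = np.exp(z) / np.exp(z).sum()
    samples.append(comp)

samples = np.array(samples)
print(samples.round(3))
```

The simulated output is exactly the pair of objects the abstract names: the incidence matrix (`samples > 0`) and the conditional compositional matrix of non-zero parts.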
Abstract:
The first discussion of compositional data analysis is attributable to Karl Pearson, in 1897. However, notwithstanding the recent developments on the algebraic structure of the simplex, more than twenty years after Aitchison's idea of log-transformations of closed data, the scientific literature is again full of statistical treatments of this type of data using traditional methodologies. This is particularly true in environmental geochemistry where, besides the problem of closure, the spatial structure (dependence) of the data has to be considered. In this work we propose the use of log-contrast values, obtained by a simplicial principal component analysis, as indicators of given environmental conditions. The investigation of the log-contrast frequency distributions allows pointing out the statistical laws able to generate the values and to govern their variability. The changes, if compared, for example, with the mean values of the random variables assumed as models, or other reference parameters, allow defining monitors to be used to assess the extent of possible environmental contamination. A case study on running and ground waters from the Chiavenna Valley (Northern Italy), using Na+, K+, Ca2+, Mg2+, HCO3-, SO42- and Cl- concentrations, is illustrated.
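A monitor of the kind described can be sketched as a log-contrast score checked against a reference distribution. The compositions, the ion subset, and the 3-sigma threshold below are illustrative assumptions, not the case-study values:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical baseline water compositions (rows sum to 1): Ca, Mg, Na, Cl shares.
baseline = rng.dirichlet([40, 10, 8, 6], size=100)

# First principal log-contrast from a simplicial PCA on clr-transformed data.
clr = np.log(baseline) - np.log(baseline).mean(axis=1, keepdims=True)
_, _, Vt = np.linalg.svd(clr - clr.mean(0), full_matrices=False)
contrast = Vt[0]                 # log-contrast coefficients (they sum to ~0)

scores = clr @ contrast
mu, sigma = scores.mean(), scores.std()

def monitor(sample):
    """Flag a sample whose log-contrast departs > 3 sigma from the baseline."""
    s = np.log(sample) - np.log(sample).mean()
    return abs(s @ contrast - mu) > 3 * sigma

print(monitor(baseline[0]))
```

New water samples far from the baseline log-contrast law are flagged as candidate contamination, mirroring the "monitor" idea in the abstract.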
Abstract:
A cultivation-independent approach based on polymerase chain reaction (PCR)-amplified partial small subunit rRNA genes was used to characterize bacterial populations in the surface soil of a commercial pear orchard consisting of different pear cultivars during two consecutive growing seasons. Pyrus communis L. cvs Blanquilla, Conference, and Williams are among the most widely cultivated cultivars in Europe and account for the majority of pear production in Northeastern Spain. To assess the heterogeneity of the community structure in response to environmental variables and tree phenology, bacterial populations were examined using PCR-denaturing gradient gel electrophoresis (DGGE) followed by cluster analysis of the 16S ribosomal DNA profiles by means of the unweighted pair group method with arithmetic means. Similarity analysis of the band patterns failed to identify characteristic fingerprints associated with the pear cultivars. Both environmentally and biologically based principal-component analyses showed that the microbial communities changed significantly throughout the year depending on temperature and, to a lesser extent, on tree phenology and rainfall. Prominent DGGE bands were excised and sequenced to gain insight into the identities of the predominant bacterial populations. Most DGGE band sequences were related to bacterial phyla, such as Bacteroidetes, Cyanobacteria, Acidobacteria, Proteobacteria, Nitrospirae, and Gemmatimonadetes, previously associated with typical agronomic crop environments.
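The clustering step can be sketched with a binary band-presence matrix and UPGMA (average linkage on a distance matrix); the band patterns below are invented, not the study's DGGE profiles:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Hypothetical DGGE profiles: rows = soil samples, columns = band presence/absence.
bands = np.array([
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1, 0],
])

# Jaccard distance between band patterns, then UPGMA = average linkage.
dist = pdist(bands, metric='jaccard')
tree = linkage(dist, method='average')
groups = fcluster(tree, t=2, criterion='maxclust')
print(groups)
```

The first two and last two profiles share most bands, so UPGMA recovers two groups; with real profiles the same pipeline tests whether samples group by cultivar, season, or neither.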
Abstract:
The computational approach to the Hirshfeld [Theor. Chim. Acta 44, 129 (1977)] atom in a molecule is critically investigated, and several difficulties are highlighted. It is shown that these difficulties are mitigated by an alternative, iterative version of the Hirshfeld partitioning procedure. The iterative scheme ensures that the Hirshfeld definition represents a mathematically proper information entropy, allows the Hirshfeld approach to be used for charged molecules, eliminates arbitrariness in the choice of the promolecule, and increases the magnitudes of the charges. The resulting "Hirshfeld-I charges" correlate well with electrostatic-potential-derived atomic charges.
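The iterative idea can be illustrated on a one-dimensional toy model: partition a "molecular" density with weights built from pro-atom densities, then update the pro-atoms to carry the populations just obtained, and repeat to self-consistency. This is a schematic caricature (Gaussian densities, simple rescaling of the pro-atoms), not the actual Hirshfeld-I interpolation over atomic charge states:

```python
import numpy as np

x = np.linspace(-8, 8, 2001)
dx = x[1] - x[0]

def gauss(x, center, n):
    """Gaussian 'atomic density' normalized to carry population n."""
    g = np.exp(-0.5 * (x - center) ** 2)
    return n * g / (g.sum() * dx)

# 'Molecular' density: asymmetric two-centre distribution, 10 electrons total.
rho_mol = gauss(x, -1.5, 6.0) + gauss(x, +1.5, 4.0)

# Start from 'neutral' pro-atoms carrying 5 electrons each and iterate.
nA, nB = 5.0, 5.0
for _ in range(100):
    rhoA, rhoB = gauss(x, -1.5, nA), gauss(x, +1.5, nB)
    w = rhoA / (rhoA + rhoB)              # Hirshfeld weight of atom A
    nA_new = (w * rho_mol).sum() * dx     # population assigned to atom A
    nB_new = rho_mol.sum() * dx - nA_new
    if abs(nA_new - nA) < 1e-10:
        break
    nA, nB = nA_new, nB_new

print(round(nA, 4), round(nB, 4))
```

Starting from the arbitrary 5/5 promolecule, the iteration converges to populations consistent with the molecular density itself (6 and 4 here), which is the sense in which the iterative scheme removes the arbitrariness of the promolecule choice.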
Abstract:
We present a method for analyzing the curvature (second derivatives) of the conical intersection hyperline at an optimized critical point. Our method uses the projected Hessians of the degenerate states after elimination of the two branching space coordinates, and is equivalent to a frequency calculation on a single Born-Oppenheimer potential-energy surface. Based on the projected Hessians, we develop an equation for the energy as a function of a set of curvilinear coordinates where the degeneracy is preserved to second order (i.e., the conical intersection hyperline). The curvature of the potential-energy surface in these coordinates is the curvature of the conical intersection hyperline itself, and thus determines whether one has a minimum or saddle point on the hyperline. The equation used to classify optimized conical intersection points depends in a simple way on the first- and second-order degeneracy splittings calculated at these points. As an example, for fulvene, we show that the two optimized conical intersection points of C2v symmetry are saddle points on the intersection hyperline. Accordingly, there are further intersection points of lower energy, and one of C2 symmetry - presented here for the first time - is found to be the global minimum in the intersection space.
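The projection step amounts to removing the two branching-space directions from the Hessian before diagonalizing; the eigenvalues of the projected Hessian are then the curvatures along the hyperline. A small linear-algebra sketch with a made-up Hessian and branching vectors (the quantum-chemical content is of course not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6  # toy number of internal coordinates

# Made-up symmetric Hessian, plus two branching-space vectors
# (standing in for the gradient difference and derivative coupling),
# orthonormalized via QR.
A = rng.normal(size=(n, n))
H = (A + A.T) / 2
B, _ = np.linalg.qr(rng.normal(size=(n, 2)))

# Project the branching space out: Hp = P H P with P = I - B B^T.
P = np.eye(n) - B @ B.T
Hp = P @ H @ P

# Eigenvalues of Hp (two ~zero modes come from the projection itself);
# all remaining curvatures positive -> minimum on the hyperline,
# any negative -> saddle point.
vals = np.linalg.eigvalsh(Hp)
print(np.round(vals, 4))
```

The sign pattern of the non-zero eigenvalues is what classifies the optimized intersection point, exactly as a frequency calculation classifies a stationary point on a single surface.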
Abstract:
This paper provides a theoretical and empirical analysis of the relationship between airport congestion and airline network structure. We find that the development of hub-and-spoke (HS) networks may have detrimental effects on social welfare in the presence of airport congestion. The theoretical analysis shows that, although airline profits are typically higher under HS networks, congestion could create incentives for airlines to adopt fully-connected (FC) networks. However, the welfare analysis leads to the conclusion that airlines may have an inefficient bias towards HS networks. In line with the theoretical analysis, our empirical results show that network airlines are weakly influenced by congestion in their choice of frequencies from/to their hub airports. Consistent with this result, we confirm that delays are higher in hub airports, controlling for concentration and airport size. Keywords: airlines; airport congestion; fully-connected networks; hub-and-spoke networks; network efficiency. JEL Classification Numbers: L13; L2; L93.
Abstract:
Arising from either retrotransposition or genomic duplication of functional genes, pseudogenes are "genomic fossils" valuable for exploring the dynamics and evolution of genes and genomes. Pseudogene identification is an important problem in computational genomics, and is also critical for obtaining an accurate picture of a genome's structure and function. However, no consensus computational scheme for defining and detecting pseudogenes has been developed thus far. As part of the ENCyclopedia Of DNA Elements (ENCODE) project, we have compared several distinct pseudogene annotation strategies and found that different approaches and parameters often resulted in rather distinct sets of pseudogenes. We subsequently developed a consensus approach for annotating pseudogenes (derived from protein coding genes) in the ENCODE regions, resulting in 201 pseudogenes, two-thirds of which originated from retrotransposition. A survey of orthologs for these pseudogenes in 28 vertebrate genomes showed that a significant fraction (∼80%) of the processed pseudogenes are primate-specific sequences, highlighting the increasing retrotransposition activity in primates. Analysis of sequence conservation and variation also demonstrated that most pseudogenes evolve neutrally, and processed pseudogenes appear to have lost their coding potential immediately or soon after their emergence. In order to explore the functional implication of pseudogene prevalence, we have extensively examined the transcriptional activity of the ENCODE pseudogenes. We performed a systematic series of pseudogene-specific RACE analyses. These, together with complementary evidence derived from tiling microarrays and high-throughput sequencing, demonstrated that at least a fifth of the 201 pseudogenes are transcribed in one or more cell lines or tissues.
Abstract:
Placental malaria is a special form of malaria that causes up to 200,000 maternal and infant deaths every year. Previous studies show that two receptor molecules, hyaluronic acid and chondroitin sulphate A, are mediating the adhesion of parasite-infected erythrocytes in the placenta of patients, which is believed to be a key step in the pathogenesis of the disease. In this study, we aimed at identifying sites of malaria-induced adaptation by scanning for signatures of natural selection in 24 genes in the complete biosynthesis pathway of these two receptor molecules. We analyzed a total of 24 Mb of publicly available polymorphism data from the International HapMap project for three human populations with European, Asian and African ancestry, with the African population from a region of presently and historically high malaria prevalence. Using methods based on allele frequency distributions, genetic differentiation between populations, and long-range haplotype structure, we found only limited evidence for malaria-induced genetic adaptation in this set of genes in the African population; however, we identified one candidate gene with clear evidence of selection in the Asian population. Although historical exposure to malaria in this population cannot be ruled out, we speculate that it might be caused by other pathogens, as there is growing evidence that these molecules are important receptors in a variety of host-pathogen interactions. We propose to use the present methods in a systematic way to help identify candidate regions under positive selection as a consequence of malaria.
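One of the signals mentioned, genetic differentiation between populations, can be sketched with a basic per-SNP Fst estimate (a Hudson-style estimator; the frequencies and sample sizes below are invented, and the study's actual statistics are more elaborate):

```python
import numpy as np

def fst(p1, p2, n1, n2):
    """Hudson-style per-SNP Fst from two population allele frequencies.

    p1, p2: alternate-allele frequencies; n1, n2: sample sizes (chromosomes).
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    num = ((p1 - p2) ** 2
           - p1 * (1 - p1) / (n1 - 1)
           - p2 * (1 - p2) / (n2 - 1))
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num / den

# Hypothetical SNP frequencies in two populations.
p_afr = np.array([0.10, 0.50, 0.85])
p_asn = np.array([0.12, 0.48, 0.20])
print(np.round(fst(p_afr, p_asn, 120, 120), 3))
```

SNPs with unusually high Fst relative to the genome-wide distribution, like the third one here, are the candidates a selection scan would flag for follow-up.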
Abstract:
One of the disadvantages of old age is that there is more past than future: this, however, may be turned into an advantage if the wealth of experience and, hopefully, wisdom gained in the past can be reflected upon and throw some light on possible future trends. To an extent, then, this talk is necessarily personal, certainly nostalgic, but also self-critical and inquisitive about our understanding of the discipline of statistics. A number of almost philosophical themes will run through the talk: search for appropriate modelling in relation to the real problem envisaged, emphasis on sensible balances between simplicity and complexity, the relative roles of theory and practice, the nature of communication of inferential ideas to the statistical layman, the inter-related roles of teaching, consultation and research. A list of keywords might be: identification of sample space and its mathematical structure, choices between transform and stay, the role of parametric modelling, the role of a sample space metric, the underused hypothesis lattice, the nature of compositional change, particularly in relation to the modelling of processes. While the main theme will be relevance to compositional data analysis we shall point to substantial implications for general multivariate analysis arising from experience of the development of compositional data analysis…