1000 results for Geology -- Hungary -- Statistical methods
Abstract:
This study presents a first attempt to extend the “Multi-scale integrated analysis of societal and ecosystem metabolism” (MuSIASEM) approach to a spatial dimension using GIS techniques in the metropolitan area of Barcelona. We use a combination of census and commercial databases, along with a detailed land cover map, to create a layer of Common Geographic Units that we populate with the local values of human time spent in different activities according to the MuSIASEM hierarchical typology. In this way, we mapped the hours of available human time against the working hours spent in different locations, highlighting the gradients in spatial density between the residential locations of workers (generating the work supply) and the places where the working hours actually take place. We found a strong trimodal pattern of clusters of areas with different combinations of time spent on household activities and on paid work. We also measured and mapped the spatial segregation between these two activities and put forward the conjecture that this segregation increases with higher energy throughput, as the size of the functional units must be able to cope with the flow of exosomatic energy. Finally, we discuss the effectiveness of the approach by comparing our geographic representation of exosomatic throughput to the one obtained with conventional methods.
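The abstract does not say which segregation measure was used; as an illustration only, the sketch below computes the Duncan index of dissimilarity between the spatial distributions of household time and paid-work time over a set of geographic units, using synthetic hours (all counts and numbers are hypothetical).

```python
import numpy as np

# Hypothetical hours of human activity per Common Geographic Unit (CGU):
# rows are CGUs, columns are (household activity hours, paid work hours).
rng = np.random.default_rng(0)
hours = rng.gamma(shape=2.0, scale=500.0, size=(1000, 2))

household, paid_work = hours[:, 0], hours[:, 1]

# Duncan & Duncan index of dissimilarity between the two spatial
# distributions: 0 = identical spatial pattern, 1 = complete segregation.
p_house = household / household.sum()
p_work = paid_work / paid_work.sum()
dissimilarity = 0.5 * np.abs(p_house - p_work).sum()

print(f"Index of dissimilarity: {dissimilarity:.3f}")
```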
Abstract:
Method validation is one of the cornerstones of quality assurance in analytical laboratories, as reflected in the ISO/IEC 17025 standard. It is therefore a topic that must be addressed in the curricula of current and future degrees in Chemistry. There is an extensive literature on method validation, but it is often underused because of the evident difficulty of processing all the available information and applying it to the laboratory and to specific problems. Another limitation in this field is the lack of software adapted to the needs of the laboratory. Many of the statistical routines used in method validation are ad hoc adaptations built in Microsoft Excel, or come bundled in large statistical packages of considerable complexity. For this reason, the aim of the project was to develop a software application for method validation and for the quality assurance of analytical results that incorporates only the routines that are actually needed. Specifically, the software includes the statistical functions required to verify the accuracy and evaluate the precision of an analytical method. The programming language chosen was Java, version 6. The software development comprised the following stages: requirements gathering, requirements analysis, modular software design, programming of the program's functions and graphical interface, creation of integration tests and testing with real users, and, finally, deployment of the software (creation of the installer and distribution).
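The software described is written in Java 6 and its exact routines are not listed in the abstract; the following Python sketch only illustrates the kind of statistical checks mentioned, verifying accuracy against a certified reference value with a one-sample t-test and expressing precision as a relative standard deviation, using made-up replicate data.

```python
import numpy as np
from scipy import stats

# Hypothetical replicate measurements of a certified reference material.
replicates = np.array([10.12, 10.08, 10.15, 10.05, 10.11, 10.09])
certified_value = 10.00  # assumed certified concentration

# Accuracy (trueness): one-sample t-test of the mean against the certified value.
res = stats.ttest_1samp(replicates, certified_value)

# Precision: repeatability expressed as relative standard deviation (RSD, %).
rsd = 100.0 * replicates.std(ddof=1) / replicates.mean()

print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}  (mean vs. certified value)")
print(f"RSD = {rsd:.2f}%  (repeatability)")
```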
Abstract:
In this paper we address the complexity of the analysis of water use in relation to the issue of sustainability. The flows of water on our planet represent a complex reality which can be studied using many different perceptions and narratives referring to different scales and dimensions of analysis. For this reason, a quantitative analysis of water use has to be based on analytical methods that are semantically open: they must be able to define what we mean by the term “water” when crossing different scales of analysis. We propose here a definition of water as a resource that deals with the many services it provides to humans and ecosystems. We argue that water can fulfil so many of them because the element has many characteristics that allow the resource to be labelled with different attributes, such as drinkable, depending on the end use. Since the services for humans and the functions for ecosystems associated with water flows are defined on different scales but are still interconnected, it is necessary to organize our assessment of water use across different hierarchical levels. In order to do so, we define how to approach the study of water use within societal metabolism, by proposing a Water Metabolism organized in three levels: the societal level, the ecosystem level and the global level. The end uses we distinguish for society are personal/physiological use, household use and economic use. Organizing the study of water use across all these levels increases the usefulness of the quantitative analysis and the possibility of finding relevant and comparable results. To achieve this result, we adapted a method developed for multi-level, multi-scale analysis, the Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM) approach, to the analysis of water metabolism. In this paper, we discuss the peculiar analytical identity that “water” shows within multi-scale metabolic studies: water represents a flow element when considering the metabolism of social systems (at a small scale, when describing the water metabolism inside society) and a fund element when considering the metabolism of ecosystems (at a larger scale, when describing the water metabolism outside society). The theoretical analysis is illustrated using two case studies which characterize the metabolic patterns regarding water use of a productive system in Catalonia and a water management policy in the Andarax River Basin in Andalusia.
Abstract:
Research project carried out during a stay at the Universidad Politécnica de Madrid, Spain, between September and December 2007. The aerospace and aeronautical industry currently gives priority to improving the reliability of its structures through the development of new systems for monitoring and impact detection. There are several potentially useful techniques, and their applicability in a particular situation depends critically on the defect size that the structure can tolerate. Any defect changes the vibrational response of the structural element, as well as the transient of the wave propagating through the elastic structure. Correlating these changes, which can be detected experimentally, with the occurrence of the defect, its location and its quantification is a very complex problem. This work explores the use of Principal Component Analysis (PCA), based on the formulation of the T2 and Q statistics, to detect and distinguish defects in the structure by correlating their changes in the vibrational response. The structure used for the study is a turbine blade of a commercial aircraft. The blade is excited at one end using a shaker, and seven PZT sensors have been bonded to its surface. A known signal is applied and the responses are analysed. A PCA model is built using data from the undamaged structure. To test the model, a piece of aluminium is attached at four different positions. The data from the tests of the damaged structure are projected onto the model. The principal components and the Q-residual and Hotelling's T2 distances are used to analyse the incidents. The Q-residual indicates how well each sample conforms to the PCA model, since it is a measure of the difference, or residual, between the sample and its projection onto the principal components retained in the model. The Hotelling's T2 distance is a measure of the variation of each sample within the PCA model, that is, its distance to the centre of the PCA model.
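A minimal sketch of the Q-residual and Hotelling's T2 statistics described above, assuming a PCA model built from baseline (undamaged) data. The seven sensors come from the abstract, while the sample sizes, the number of retained components and the simulated "damage" are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Baseline (undamaged) responses: hypothetical 200 samples x 7 PZT sensor features.
W = rng.normal(size=(7, 7))                 # mixing matrix to induce correlation
X = rng.normal(size=(200, 7)) @ W
mean, std = X.mean(axis=0), X.std(axis=0, ddof=1)
Xs = (X - mean) / std

# PCA model via SVD of the scaled baseline data; keep k principal components.
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 3
P = Vt[:k].T                                # loadings (7 x k)
lam = (S[:k] ** 2) / (Xs.shape[0] - 1)      # variance of each retained score

def t2_and_q(x_new):
    """Hotelling's T2 and Q-residual (SPE) of new samples w.r.t. the PCA model."""
    xs = (x_new - mean) / std
    t = xs @ P                              # scores on the retained components
    t2 = np.sum(t ** 2 / lam, axis=1)
    residual = xs - t @ P.T
    q = np.sum(residual ** 2, axis=1)
    return t2, q

# Hypothetical "damaged" responses: same structure plus a shift on one sensor,
# standing in for the added aluminium mass described in the abstract.
X_dam = rng.normal(size=(50, 7)) @ W
X_dam[:, 2] += 3.0
t2, q = t2_and_q(X_dam)
print(f"mean T2: {t2.mean():.2f}   mean Q: {q.mean():.2f}")
```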
Abstract:
The main purpose of the Log2XML application is the transformation of log files in field-separated text format into a standardized XML format. To allow the application to work with logs from different systems or applications, it provides a template system (specifying the field order and the separator character) that defines the minimal structure needed to extract the information from any type of log based on field separators. Finally, the application allows the extracted information to be processed in order to generate reports and statistics. The project also explores the Grails technology in depth.
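Log2XML itself is not reproduced here; the sketch below only illustrates the template idea described in the abstract (field order plus separator character), with hypothetical field names, using Python's standard xml.etree module.

```python
import xml.etree.ElementTree as ET

def log_to_xml(lines, field_names, separator):
    """Convert separator-delimited log lines to a simple XML tree,
    following the template idea (field order + separator) described above."""
    root = ET.Element("log")
    for line in lines:
        values = line.rstrip("\n").split(separator)
        entry = ET.SubElement(root, "entry")
        for name, value in zip(field_names, values):
            ET.SubElement(entry, name).text = value
    return root

# Hypothetical template and log lines.
template_fields = ["timestamp", "level", "message"]
sample_lines = [
    "2024-01-01T10:00:00;INFO;service started",
    "2024-01-01T10:00:05;ERROR;connection refused",
]
tree = log_to_xml(sample_lines, template_fields, ";")
print(ET.tostring(tree, encoding="unicode"))
```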
Abstract:
This presentation aims to make understandable the use and application context of two webometrics techniques, log analysis and Google Analytics, which currently coexist in the Virtual Library of the UOC. First, a comprehensive introduction to webometrics is provided; then the case of the UOC's Virtual Library is analysed, focusing on the assimilation of these techniques and the considerations underlying their use, and covering in a holistic way the process of gathering, processing and exploiting the data. Finally, guidelines for the interpretation of the metric variables obtained are also provided.
Abstract:
This study examines how structural determinants influence intermediary factors of child health inequities and how they operate through the communities where children live. In particular, we explore the individual, family and community level characteristics associated with a composite indicator that quantitatively measures intermediary determinants of early childhood health in Colombia. We use data from the 2010 Colombian Demographic and Health Survey (DHS). Adopting the conceptual framework of the Commission on Social Determinants of Health (CSDH), three dimensions related to child health are represented in the index: behavioural factors, psychosocial factors and the health system. In order to generate the weights of the variables and take into account the discrete nature of the data, principal component analysis (PCA) using polychoric correlations is employed in the index construction. Weighted multilevel models are used to examine community effects. The results show that the effect of the household's SES is attenuated when community characteristics are included, indicating the importance that the level of community development may have in mediating individual and family characteristics. The findings indicate that there is significant between-community variance in intermediary determinants of child health, especially for those determinants linked to the health system, even after controlling for individual, family and community characteristics. These results likely reflect that, whilst the community context can exert a greater influence on intermediary factors linked directly to health, in the case of psychosocial factors and parents' behaviours the family context can be more important. This underlines the importance of distinguishing between community and family intervention programmes.
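A computational note: PCA on polychoric correlations requires a specialized routine; the sketch below substitutes a Pearson correlation matrix (a simplification of what the study describes) to show how first-component loadings can be turned into variable weights for a composite index, using synthetic ordinal indicators.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic ordinal indicators (0-3 scales) for n households, driven by one latent trait.
n = 500
latent = rng.normal(size=n)
indicators = np.column_stack([
    np.digitize(latent + rng.normal(scale=0.8, size=n), [-1.0, 0.0, 1.0])
    for _ in range(6)
])

# Correlation matrix (Pearson here; the paper uses polychoric correlations,
# which account for the ordinal nature of the data).
R = np.corrcoef(indicators, rowvar=False)

# First principal component of R: its loadings serve as variable weights.
eigvals, eigvecs = np.linalg.eigh(R)
w = eigvecs[:, -1]            # eigenvector of the largest eigenvalue
w = w * np.sign(w.sum())      # fix sign so a higher index means a better endowment
weights = w / w.sum()

# Composite index: weighted sum of the standardized indicators.
Z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0, ddof=1)
index = Z @ weights
print("weights:", weights.round(3))
print("first index values:", index[:5].round(2))
```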
Abstract:
In an earlier investigation (Burger et al., 2000), five sediment cores near the Rodrigues Triple Junction in the Indian Ocean were studied applying classical statistical methods (fuzzy c-means clustering, linear mixing model, principal component analysis) for the extraction of endmembers and the evaluation of the spatial and temporal variation of geochemical signals. Three main factors of sedimentation were expected by the marine geologists: a volcano-genetic, a hydro-hydrothermal and an ultra-basic factor. The display of fuzzy membership values and/or factor scores versus depth provided consistent results for two factors only; the ultra-basic component could not be identified. The reason for this may be that only traditional statistical methods were applied, i.e. the untransformed components were used and the cosine-theta coefficient was taken as the similarity measure. During the last decade considerable progress in compositional data analysis was made and many case studies were published using new tools for exploratory analysis of these data. Therefore it makes sense to check whether the application of suitable data transformations, reduction of the D-part simplex to two or three factors and visual interpretation of the factor scores would lead to a revision of the earlier results and to answers to open questions. In this paper we follow the lines of a paper by R. Tolosana-Delgado et al. (2005), starting with a problem-oriented interpretation of the biplot scattergram, extracting compositional factors, ilr-transformation of the components and visualization of the factor scores in a spatial context: the compositional factors will be plotted versus depth (time) of the core samples in order to facilitate the identification of the expected sources of the sedimentary process.
Keywords: compositional data analysis, biplot, deep sea sediments
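A minimal sketch of the compositional workflow named in the abstract (closure, ilr transformation, PCA of the transformed data) on synthetic four-part compositions; the actual core geochemistry is not reproduced, and the orthonormal basis chosen for the ilr is one standard option among several.

```python
import numpy as np

def closure(x):
    """Rescale each row to sum to 1 (the compositional closure operation)."""
    return x / x.sum(axis=1, keepdims=True)

def ilr(x):
    """Isometric logratio transform using a standard orthonormal (Helmert-like) basis."""
    D = x.shape[1]
    H = np.zeros((D - 1, D))
    for i in range(D - 1):
        H[i, : i + 1] = 1.0 / (i + 1)
        H[i, i + 1] = -1.0
        H[i] *= np.sqrt((i + 1) / (i + 2))   # normalize each contrast
    return np.log(x) @ H.T

rng = np.random.default_rng(3)
# Synthetic 4-part compositions standing in for geochemical proportions.
parts = closure(rng.dirichlet([4, 3, 2, 1], size=300))

Z = ilr(parts)
Z -= Z.mean(axis=0)
# PCA of the ilr coordinates via SVD; the scores could then be plotted versus depth.
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U * S
print("explained variance ratio:", (S**2 / (S**2).sum()).round(3))
```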
Abstract:
In order to obtain a high-resolution Pleistocene stratigraphy, eleven continuously cored boreholes, 100 to 220 m deep, were drilled in the northern part of the Po Plain by Regione Lombardia in the last five years. Quantitative provenance analysis (QPA; Weltje and von Eynatten, 2004) of Pleistocene sands was carried out using multivariate statistical analysis (principal component analysis, PCA, and similarity analysis) on an integrated data set, including high-resolution bulk petrography and heavy-mineral analyses of the Pleistocene sands and of 250 major and minor modern rivers draining the southern flank of the Alps from West to East (Garzanti et al., 2004; 2006). Prior to the onset of major Alpine glaciations, metamorphic and quartzofeldspathic detritus from the Western and Central Alps was carried from the axial belt to the Po basin longitudinally, parallel to the Southalpine belt, by a trunk river (Vezzoli and Garzanti, 2008). This scenario rapidly changed during marine isotope stage 22 (0.87 Ma), with the onset of the first major Pleistocene glaciation in the Alps (Muttoni et al., 2003). PCA and similarity analysis of the core samples show that the longitudinal trunk river was at this time shifted southward by the rapid southward and westward progradation of transverse alluvial river systems fed from the Central and Southern Alps. Sediments were transported southward by braided river systems, and glacial sediments transported by Alpine valley glaciers invaded the alluvial plain.
Keywords: detrital modes; modern sands; provenance; principal component analysis; similarity; Canberra distance; palaeodrainage
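The keywords mention similarity analysis with the Canberra distance; the sketch below matches hypothetical core-sample detrital modes against equally hypothetical modern-river signatures using that distance (all numbers and signature labels are invented).

```python
import numpy as np
from scipy.spatial.distance import cdist

# Hypothetical detrital modes (proportions of grain types) for three modern
# river "fingerprints" and two core sand samples; values are illustrative only.
rivers = np.array([
    [0.55, 0.25, 0.15, 0.05],   # assumed Western Alps signature
    [0.30, 0.40, 0.20, 0.10],   # assumed Central Alps signature
    [0.20, 0.20, 0.30, 0.30],   # assumed Southern Alps signature
])
core_samples = np.array([
    [0.50, 0.28, 0.16, 0.06],
    [0.22, 0.22, 0.28, 0.28],
])

# Canberra distance between each core sample and each river signature;
# the nearest signature suggests the most similar provenance.
D = cdist(core_samples, rivers, metric="canberra")
print(D.round(3))
print("closest signature per sample:", D.argmin(axis=1))
```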
Abstract:
The paper analyses the regional flows of domestic tourism that took place in Spain in the year 2000, contributing to the state of knowledge on tourism required by authorities and private firms when faced with decision making, for example for regional infrastructure planning. Although tourism is one of the main income-generating economic activities in Spain, domestic tourism has received little attention in the literature compared with inbound tourism. The paper uses, among other tools, gravity models and concentration indices to analyse the regional concentration of both domestic demand and supply, the tourism flows among regions, and the causes that may explain the observed flows and the attractiveness between regions. Among the most remarkable results are the high regional concentration of demand and supply, and the role of population and regional income as explanatory variables. Also remarkable are the attractiveness of a tourist's own region and neighbouring ones, and the fact that domestic tourism may be acting as a regional income-redistributing activity.
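Neither the gravity specification nor the concentration index is given in the abstract; as a generic illustration, the sketch below fits a log-linear gravity regression to synthetic inter-regional flows and computes a Herfindahl-type concentration index of demand (all variable names and magnitudes are assumptions).

```python
import numpy as np

rng = np.random.default_rng(4)
n_regions = 10

population = rng.uniform(0.5, 8.0, n_regions)        # millions (synthetic)
income = rng.uniform(10.0, 30.0, n_regions)          # income proxy (synthetic)
distance = rng.uniform(100.0, 900.0, (n_regions, n_regions))
np.fill_diagonal(distance, 50.0)                      # intra-regional trips

# Synthetic flows following a gravity-like process, for illustration only.
flows = (np.outer(population, population) * np.outer(income, income) ** 0.5
         / distance ** 1.2) * rng.lognormal(0.0, 0.2, (n_regions, n_regions))

# Log-linear gravity regression: ln F_ij = a + b1 ln P_i + b2 ln P_j + b3 ln d_ij
# (income is left out of this minimal specification).
i, j = np.meshgrid(np.arange(n_regions), np.arange(n_regions), indexing="ij")
X = np.column_stack([
    np.ones(n_regions ** 2),
    np.log(population[i.ravel()]),
    np.log(population[j.ravel()]),
    np.log(distance.ravel()),
])
y = np.log(flows.ravel())
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("gravity coefficients (const, ln P_i, ln P_j, ln d_ij):", coef.round(2))

# Herfindahl index of the regional concentration of demand (origin totals).
shares = flows.sum(axis=1) / flows.sum()
print(f"Herfindahl concentration of demand: {(shares ** 2).sum():.3f}")
```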
Abstract:
Starting with logratio biplots for compositional data, which are based on the principle of subcompositional coherence, and then adding weights, as in correspondence analysis, we rediscover Lewi's spectral map and many connections to analyses of two-way tables of non-negative data. Thanks to the weighting, the method also achieves the property of distributional equivalence.
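A sketch of the weighted logratio analysis (spectral map) referred to above, under the usual formulation: log-transform a positive two-way table, double-centre it with the row and column masses as weights, and take a weighted SVD. The table here is synthetic, and the coordinate scaling shown is one common choice.

```python
import numpy as np

def spectral_map(N):
    """Weighted logratio analysis (Lewi's spectral map) of a strictly positive table N."""
    P = N / N.sum()
    r = P.sum(axis=1)                     # row masses
    c = P.sum(axis=0)                     # column masses
    L = np.log(P)
    # Weighted double-centering of the log table.
    Y = L - r @ L - (L @ c)[:, None] + r @ L @ c
    # Weighted SVD.
    S = np.sqrt(r)[:, None] * Y * np.sqrt(c)[None, :]
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    row_coords = U * sv / np.sqrt(r)[:, None]   # row principal coordinates
    col_coords = Vt.T / np.sqrt(c)[:, None]     # column standard coordinates
    return row_coords, col_coords, sv

rng = np.random.default_rng(5)
table = rng.gamma(3.0, 2.0, size=(12, 5))       # synthetic positive two-way table
rows, cols, sv = spectral_map(table)
print("inertia per axis:", (sv**2 / (sv**2).sum()).round(3))
```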
Abstract:
Developments in the statistical analysis of compositional data over the last two decades have made possible a much deeper exploration of the nature of variability, and of the possible processes associated with compositional data sets from many disciplines. In this paper we concentrate on geochemical data sets. First we explain how hypotheses of compositional variability may be formulated within the natural sample space, the unit simplex, including useful hypotheses of subcompositional discrimination and specific perturbational change. Then we develop, through standard methodology such as generalised likelihood ratio tests, statistical tools to allow the systematic investigation of a complete lattice of such hypotheses. Some of these tests are simple adaptations of existing multivariate tests but others require special construction. We comment on the use of graphical methods in compositional data analysis and on the ordination of specimens. The recent development of the concept of compositional processes is then explained, together with the necessary tools for a staying-in-the-simplex approach, namely compositional singular value decompositions. All these statistical techniques are illustrated for a substantial compositional data set, consisting of 209 major-oxide and rare-element compositions of metamorphosed limestones from the Northeast and Central Highlands of Scotland. Finally we point out a number of unresolved problems in the statistical analysis of compositional processes.
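A minimal sketch of the staying-in-the-simplex idea via a compositional (clr-based) singular value decomposition: transform, centre, truncate the SVD, and map the low-rank reconstruction back to the simplex. Synthetic compositions stand in for the 209 limestone analyses mentioned above.

```python
import numpy as np

def clr(x):
    """Centred logratio transform of compositions (rows sum to 1)."""
    g = np.exp(np.mean(np.log(x), axis=1, keepdims=True))
    return np.log(x / g)

def clr_inverse(z):
    """Map clr coordinates back to the simplex (closure of the exponential)."""
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(6)
comps = rng.dirichlet([5, 4, 3, 2, 1], size=209)    # synthetic 5-part compositions

Z = clr(comps)
centre = Z.mean(axis=0)
U, S, Vt = np.linalg.svd(Z - centre, full_matrices=False)

# Rank-2 reconstruction mapped back into the simplex: a low-dimensional
# "compositional process" approximation of the data set.
k = 2
Z2 = centre + (U[:, :k] * S[:k]) @ Vt[:k]
approx = clr_inverse(Z2)
print("rows still sum to 1:", np.allclose(approx.sum(axis=1), 1.0))
print(f"variance explained by 2 axes: {(S[:k]**2).sum() / (S**2).sum():.3f}")
```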
Abstract:
The use of the perturbation and power transformation operations permits the investigation of linear processes in the simplex as in a vector space. When the investigated geochemical processes can be constrained by the use of a well-known starting point, the eigenvectors of the covariance matrix of a non-centred principal component analysis allow compositional changes to be modelled with respect to a reference point. The results obtained for the chemistry of water collected in the River Arno (central-northern Italy) open new perspectives for considering relative changes of the analysed variables and for hypothesising the relative effect of the different physical-chemical processes at work, thus laying the basis for quantitative modelling.
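The two simplex operations named above can be written in a few lines; the sketch below shows perturbation and powering, and how perturbing by the inverse of an assumed reference composition expresses changes relative to that starting point (the numbers are hypothetical, not the River Arno data).

```python
import numpy as np

def closure(x):
    x = np.asarray(x, dtype=float)
    return x / x.sum(axis=-1, keepdims=True)

def perturb(x, y):
    """Perturbation: component-wise product followed by closure."""
    return closure(np.asarray(x, dtype=float) * np.asarray(y, dtype=float))

def power(x, a):
    """Power transformation: component-wise power followed by closure."""
    return closure(np.asarray(x, dtype=float) ** a)

# Hypothetical water composition (closed proportions of dissolved species)
# and a reference composition (an assumed upstream starting point).
sample = closure([0.45, 0.30, 0.15, 0.10])
reference = closure([0.40, 0.35, 0.15, 0.10])

# Express the sample relative to the reference: perturb by the inverse of the
# reference, so that the reference itself maps to the barycentre of the simplex.
relative = perturb(sample, 1.0 / reference)
print("sample relative to reference:", relative.round(3))

# Powering scales a compositional change; a = 0.5 gives "half" the change.
print("half of the change:", power(relative, 0.5).round(3))
```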
Abstract:
The low levels of unemployment recorded in the UK in recent years are widely cited as evidence of the country's improved economic performance, and the apparent convergence of unemployment rates across the country's regions is used to suggest that the longstanding divide in living standards between the relatively prosperous 'south' and the more depressed 'north' has been substantially narrowed. Dissenters from these conclusions have drawn attention to the greatly increased extent of non-employment (around a quarter of the UK's working-age population are not in employment) and the marked regional dimension in its distribution across the country. Amongst these dissenters it is generally agreed that non-employment is concentrated amongst older males previously employed in the now very much smaller 'heavy' industries (e.g. coal, steel, shipbuilding). This paper uses the tools of compositional data analysis to provide a much richer picture of non-employment, one which challenges the conventional wisdom about UK labour market performance as well as the dissenters' view of the nature of the problem. It is shown that, associated with the striking 'north/south' divide in non-employment rates, there is a statistically significant relationship between the size of the non-employment rate and the composition of non-employment. Specifically, it is shown that the share of unemployment in non-employment is negatively correlated with the overall non-employment rate: in regions where the non-employment rate is high, the share of unemployment is relatively low. So the unemployment rate is not a very reliable indicator of regional disparities in labour market performance. Even more importantly from a policy viewpoint, a significant positive relationship is found between the size of the non-employment rate and the share of those not employed by reason of sickness or disability, and it seems (contrary to the dissenters) that this connection is just as strong for women as it is for men.
Abstract:
In human population genetics, routine applications of principal component techniques are often required. Population biologists make widespread use of certain discrete classifications of human samples into haplotypes, the monophyletic units of phylogenetic trees constructed from several single nucleotide bimorphisms hierarchically ordered. Compositional frequencies of the haplotypes are recorded within the different samples. Principal component techniques are then required as a dimension-reducing strategy to bring the dimension of the problem to a manageable level, say two, to allow for graphical analysis. Population biologists at large are not aware of the special features of compositional data and normally make use of the crude covariance of compositional relative frequencies to construct principal components. In this short note we present our experience with using traditional linear principal components or compositional principal components based on logratios, with reference to a specific dataset.