1000 results for Mètodes iteratius (Matemàtica)


Relevance:

20.00%

Publisher:

Abstract:

Developments in the statistical analysis of compositional data over the last two decades have made possible a much deeper exploration of the nature of variability, and the possible processes associated with compositional data sets from many disciplines. In this paper we concentrate on geochemical data sets. First we explain how hypotheses of compositional variability may be formulated within the natural sample space, the unit simplex, including useful hypotheses of subcompositional discrimination and specific perturbational change. Then we develop, through standard methodology such as generalised likelihood ratio tests, statistical tools to allow the systematic investigation of a complete lattice of such hypotheses. Some of these tests are simple adaptations of existing multivariate tests but others require special construction. We comment on the use of graphical methods in compositional data analysis and on the ordination of specimens. The recent development of the concept of compositional processes is then explained, together with the necessary tools for a staying-in-the-simplex approach, namely compositional singular value decompositions. All these statistical techniques are illustrated for a substantial compositional data set, consisting of 209 major-oxide and rare-element compositions of metamorphosed limestones from the Northeast and Central Highlands of Scotland. Finally we point out a number of unresolved problems in the statistical analysis of compositional processes.
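As a rough illustration of the staying-in-the-simplex machinery mentioned above, the sketch below applies a centred log-ratio (clr) transform followed by a singular value decomposition to a simulated parts table. The array shape and the data are invented stand-ins, not the Scottish limestone compositions.

```python
# Minimal sketch (not the paper's code): clr transform + SVD of a toy composition table.
import numpy as np

def clr(x):
    """Centred log-ratio transform of compositions (rows sum to 1)."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

rng = np.random.default_rng(0)
raw = rng.gamma(shape=2.0, size=(209, 5))        # hypothetical positive readings
comp = raw / raw.sum(axis=1, keepdims=True)      # closed to the unit simplex

z = clr(comp)
z = z - z.mean(axis=0)                           # double-centre before the SVD
u, s, vt = np.linalg.svd(z, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("share of total logratio variance per axis:", explained.round(3))
```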

Relevance:

20.00%

Publisher:

Abstract:

The use of perturbation and power transformation operations permits the investigation of linear processes in the simplex as in a vector space. When the investigated geochemical processes can be constrained by the use of a well-known starting point, the eigenvectors of the covariance matrix of a non-centred principal component analysis allow compositional changes to be modelled with respect to a reference point. The results obtained for the chemistry of water collected in the River Arno (central-northern Italy) have opened new perspectives for considering relative changes of the analysed variables and for hypothesising the relative effect of different acting physical-chemical processes, thus laying the basis for quantitative modelling.
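The perturbation and power operations referred to above can be written down in a few lines. The sketch below uses one plausible reading of "compared with a reference point": the data are perturbed by the inverse of a hypothetical reference composition `x0` before a non-centred analysis of the clr scores. The data, `x0`, and the four-part layout are assumptions for illustration only.

```python
# Sketch under assumptions, not the authors' code: simplex operations and a
# non-centred principal component analysis relative to a reference composition.
import numpy as np

def close(x):
    return x / x.sum(axis=-1, keepdims=True)

def perturb(x, p):
    return close(x * p)

def power(x, a):
    return close(x ** a)

rng = np.random.default_rng(1)
comp = close(rng.gamma(2.0, size=(50, 4)))       # hypothetical water compositions
x0 = close(np.array([1.0, 2.0, 3.0, 4.0]))       # hypothetical reference (starting) point

# centre the data at the reference point: x "minus" x0 is x perturbed by 1/x0
centred = perturb(comp, 1.0 / x0)
halfway = power(perturb(comp[0], 1.0 / x0), 0.5) # powering gives a fraction of that change
print("half of the change of the first sample from x0:", halfway.round(3))

# non-centred analysis: eigenvectors of the clr cross-product matrix
clr_scores = np.log(centred) - np.log(centred).mean(axis=1, keepdims=True)
eigval, eigvec = np.linalg.eigh(clr_scores.T @ clr_scores / len(centred))
print("leading eigenvector (clr coefficients):", eigvec[:, -1].round(3))
```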

Relevance:

20.00%

Publisher:

Abstract:

Kriging is an interpolation technique whose optimality criteria are based on normality assumptions either for observed or for transformed data. This is the case of normal, lognormal and multigaussian kriging. When kriging is applied to transformed scores, optimality of obtained estimators becomes a cumbersome concept: back-transformed optimal interpolations in transformed scores are not optimal in the original sample space, and vice-versa. This lack of compatible criteria of optimality induces a variety of problems in both point and block estimates. For instance, lognormal kriging, widely used to interpolate positive variables, has no straightforward way to build consistent and optimal confidence intervals for estimates. These problems are ultimately linked to the assumed space structure of the data support: for instance, positive values, when modelled with lognormal distributions, are assumed to be embedded in the whole real space, with the usual real space structure and Lebesgue measure.
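A quick numeric illustration of the back-transformation problem described above (not taken from the paper): an unbiased estimate on the log scale does not back-transform to an unbiased estimate on the original scale, since for a normal Y the mean of exp(Y) is exp(mu + sigma^2/2), not exp(mu).

```python
# Toy illustration of the bias introduced by naive back-transformation.
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 0.0, 1.0
y = rng.normal(mu, sigma, size=100_000)   # surrogate "kriged" values on the log scale
z = np.exp(y)                             # lognormal values on the original scale

print("exp(mean of logs)      :", round(float(np.exp(y.mean())), 3))  # close to exp(mu) = 1.0
print("mean on original scale :", round(float(z.mean()), 3))          # close to exp(mu + sigma**2/2) ~ 1.65
```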

Relevance:

20.00%

Publisher:

Abstract:

The low levels of unemployment recorded in the UK in recent years are widely cited as evidence of the country’s improved economic performance, and the apparent convergence of unemployment rates across the country’s regions is used to suggest that the long-standing divide in living standards between the relatively prosperous ‘south’ and the more depressed ‘north’ has been substantially narrowed. Dissenters from these conclusions have drawn attention to the greatly increased extent of non-employment (around a quarter of the UK’s working-age population are not in employment) and the marked regional dimension in its distribution across the country. Amongst these dissenters it is generally agreed that non-employment is concentrated amongst older males previously employed in the now very much smaller ‘heavy’ industries (e.g. coal, steel, shipbuilding). This paper uses the tools of compositional data analysis to provide a much richer picture of non-employment, one which challenges the conventional wisdom about UK labour market performance as well as the dissenters’ view of the nature of the problem. It is shown that, associated with the striking ‘north/south’ divide in non-employment rates, there is a statistically significant relationship between the size of the non-employment rate and the composition of non-employment. Specifically, it is shown that the share of unemployment in non-employment is negatively correlated with the overall non-employment rate: in regions where the non-employment rate is high, the share of unemployment is relatively low. So the unemployment rate is not a very reliable indicator of regional disparities in labour market performance. Even more importantly from a policy viewpoint, a significant positive relationship is found between the size of the non-employment rate and the share of those not employed through reason of sickness or disability, and it seems (contrary to the dissenters) that this connection is just as strong for women as it is for men.
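The sign of the relationship reported above can be illustrated with invented regional figures: the sketch below correlates a hypothetical non-employment rate with the log-ratio of the unemployment share within non-employment. All numbers are made up for illustration and are not the paper's data.

```python
# Hypothetical illustration only -- the regional figures below are invented.
import numpy as np

# invented regional data: non-employment rate and its split into
# unemployed / sick-or-disabled / other inactive (shares sum to 1)
non_emp_rate = np.array([0.18, 0.22, 0.25, 0.28, 0.31, 0.34])
shares = np.array([
    [0.30, 0.25, 0.45],
    [0.27, 0.28, 0.45],
    [0.24, 0.32, 0.44],
    [0.22, 0.35, 0.43],
    [0.20, 0.38, 0.42],
    [0.18, 0.41, 0.41],
])

logratio_unemp = np.log(shares[:, 0] / shares[:, 2])  # unemployment vs. other inactive
r = np.corrcoef(non_emp_rate, logratio_unemp)[0, 1]
print("correlation with the non-employment rate:", round(r, 2))  # negative with these toy numbers
```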

Relevance:

20.00%

Publisher:

Abstract:

In human Population Genetics, routine applications of principal component techniques are often required. Population biologists make widespread use of certain discrete classifications of human samples into haplotypes, the monophyletic units of phylogenetic trees constructed from several single nucleotide bimorphisms hierarchically ordered. Compositional frequencies of the haplotypes are recorded within the different samples. Principal component techniques are then required as a dimension-reducing strategy to bring the dimension of the problem to a manageable level, say two, to allow for graphical analysis. Population biologists at large are not aware of the special features of compositional data and normally make use of the crude covariance of compositional relative frequencies to construct principal components. In this short note we present our experience with using traditional linear principal components or compositional principal components based on logratios, with reference to a specific dataset.
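The contrast discussed in the note can be sketched as follows: principal components computed from the crude covariance of relative frequencies versus components computed from clr-transformed frequencies. The Dirichlet-simulated table is a stand-in for a haplotype frequency data set, which is not reproduced here.

```python
# Sketch under assumptions: simulated frequencies, not the note's data.
import numpy as np

rng = np.random.default_rng(3)
freq = rng.dirichlet(alpha=[8, 5, 3, 2], size=60)   # hypothetical haplotype frequencies

def pca_scores(y, k=2):
    """Scores on the first k principal components of the rows of y."""
    yc = y - y.mean(axis=0)
    u, s, vt = np.linalg.svd(yc, full_matrices=False)
    return u[:, :k] * s[:k]

raw_scores = pca_scores(freq)                                  # "crude" covariance PCA
clr = np.log(freq) - np.log(freq).mean(axis=1, keepdims=True)
clr_scores = pca_scores(clr)                                   # compositional PCA

print("first three samples, raw PCA scores:\n", raw_scores[:3].round(3))
print("first three samples, clr PCA scores:\n", clr_scores[:3].round(3))
```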

Relevance:

20.00%

Publisher:

Abstract:

The main instrument used in psychological measurement is the self-report questionnaire. One of its major drawbacks, however, is its susceptibility to response biases. A known strategy to control these biases has been the use of so-called ipsative items. Ipsative items are items that require the respondent to make between-scale comparisons within each item. The selected option determines to which scale the weight of the answer is attributed. Consequently, in questionnaires consisting only of ipsative items every respondent is allotted an equal amount, i.e. the total score, that each can distribute differently over the scales. Therefore this type of response format yields data that can be considered compositional from its inception. Methodologically oriented psychologists have heavily criticized this type of item format, since the resulting data are also marked by the associated unfavourable statistical properties. Nevertheless, clinicians have kept using these questionnaires to their satisfaction. This investigation therefore aims to evaluate both positions and addresses the similarities and differences between the two data collection methods. The ultimate objective is to formulate a guideline on when to use which type of item format. The comparison is based on data obtained with both an ipsative and a normative version of three psychological questionnaires, which were administered to 502 first-year psychology students according to a balanced within-subjects design. Previous research only compared the direct ipsative scale scores with the derived ipsative scale scores. The use of compositional data analysis techniques also enables one to compare derived normative score ratios with direct normative score ratios. The addition of the second comparison not only offers the advantage of a better-balanced research strategy; in principle it also allows for parametric testing in the evaluation.
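A small sketch of why ipsative scores are compositional: closing a set of scores to a fixed total leaves all score ratios unchanged, so ratio (log-ratio) comparisons are the natural common ground between the two formats. The scores below are simulated, not the student data of the study.

```python
# Toy illustration only; scale names and scores are invented.
import numpy as np

rng = np.random.default_rng(4)
normative = rng.integers(5, 40, size=(10, 3)).astype(float)       # 10 respondents, 3 scales
ipsative = 30 * normative / normative.sum(axis=1, keepdims=True)   # forced total of 30 points

# log-ratios of scale 1 to scale 2: identical here because the ipsative scores are
# a pure closure of the normative ones; real data differ because the two formats
# are administered separately.
print(np.log(normative[:, 0] / normative[:, 1]).round(2))
print(np.log(ipsative[:, 0] / ipsative[:, 1]).round(2))
```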

Relevance:

20.00%

Publisher:

Abstract:

Most of the economic literature has presented its analysis under the assumption of a homogeneous capital stock. However, capital composition differs across countries. What has been the pattern of capital composition associated with world economies? We make an exploratory statistical analysis based on compositional data transformed by Aitchison logratio transformations, and we use tools for visualizing and measuring statistical estimators of association among the components. The goal is to detect distinctive patterns in the composition. Initial findings include the following:
1. Sectorial components behaved in a correlated way, with building industries on one side and, less clearly, equipment industries on the other.
2. Full-sample estimation shows a negative correlation between the durable goods component and the other buildings component, and between the transportation and building industries components.
3. Countries with zeros in some components are mainly low-income countries at the bottom of the income category; they behaved in an extreme way, distorting the main results observed in the full sample.
4. After removing these extreme cases, conclusions seem not very sensitive to the presence of other isolated cases.
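The exploratory step described above can be sketched with a clr transformation of a toy composition table followed by a correlation matrix of the transformed components. The asset categories and data are invented, and clr correlations should be read with the usual caution, since the clr covariance matrix is singular by construction.

```python
# Exploratory sketch with toy data, not the capital-composition series used above.
import numpy as np

rng = np.random.default_rng(5)
# hypothetical shares of four asset types (e.g. dwellings, other buildings,
# equipment, transportation); a Dirichlet draw stands in for real series
comp = rng.dirichlet([6, 4, 3, 2], size=80)

clr = np.log(comp) - np.log(comp).mean(axis=1, keepdims=True)
print(np.round(np.corrcoef(clr, rowvar=False), 2))   # associations between clr components
```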

Relevance:

20.00%

Publisher:

Abstract:

The statistical analysis of literary style is the part of stylometry that compares measurable characteristics in a text that are rarely controlled by the author with those in other texts. When the goal is to settle authorship questions, these characteristics should relate to the author’s style and not to the genre, epoch or editor, and they should be such that their variation between authors is larger than the variation within comparable texts from the same author. For an overview of the literature on stylometry and some of the techniques involved, see for example Mosteller and Wallace (1964, 82), Herdan (1964), Morton (1978), Holmes (1985), Oakes (1998) or Lebart, Salem and Berry (1998).

Tirant lo Blanc, a chivalry book, is the main work in Catalan literature and was hailed as “the best book of its kind in the world” by Cervantes in Don Quixote. Considered by writers like Vargas Llosa or Dámaso Alonso to be the first modern novel in Europe, it has been translated several times into Spanish, Italian and French, with modern English translations by Rosenthal (1996) and La Fontaine (1993). The main body of this book was written between 1460 and 1465, but it was not printed until 1490. There is an intense and long-lasting debate around its authorship, sprouting from its first edition, where the introduction states that the whole book is the work of Martorell (1413?-1468), while at the end it is stated that the last one-fourth of the book is by Galba (?-1490), after the death of Martorell. Some of the authors that support the theory of single authorship are Riquer (1990), Chiner (1993) and Badia (1993), while some of those supporting the double authorship are Riquer (1947), Coromines (1956) and Ferrando (1995). For an overview of this debate, see Riquer (1990). Neither of the two candidate authors left any text comparable to the one under study, and therefore discriminant analysis cannot be used to help classify chapters by author. By using sample texts encompassing about ten percent of the book, and looking at word length and at the use of 44 conjunctions, prepositions and articles, Ginebra and Cabos (1998) detect heterogeneities that might indicate the existence of two authors. By analyzing the diversity of the vocabulary, Riba and Ginebra (2000) estimate that stylistic boundary to be near chapter 383.

Following the lead of the extensive literature, this paper looks into word length, the use of the most frequent words and the use of vowels in each chapter of the book. Given that the features selected are categorical, this leads to three contingency tables of ordered rows and therefore to three sequences of multinomial observations. Section 2 explores these sequences graphically, observing a clear shift in their distribution. Section 3 describes the problem of the estimation of a sudden change-point in those sequences; in the following sections we propose various ways to estimate change-points in multinomial sequences. The method in Section 4 involves fitting models for polytomous data, the one in Section 5 fits gamma models onto the sequence of chi-square distances between each row profile and the average profile, and the one in Section 6 fits models onto the sequence of values taken by the first component of the correspondence analysis, as well as onto sequences of other summary measures like the average word length. In Section 7 we fit models onto the marginal binomial sequences to identify the features that distinguish the chapters before and after that boundary. Most methods rely heavily on the use of generalized linear models.
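One simple way to estimate a single change-point in a multinomial sequence, in the spirit of the methods outlined above though not reproducing any of the specific models of Sections 4 to 7, is to maximise the profile log-likelihood over candidate split points. The sketch below does this on simulated chapter counts with a shift planted at position 60.

```python
# Minimal change-point sketch with simulated data; not the paper's GLM formulations.
import numpy as np

rng = np.random.default_rng(6)
n_chapters = 100
p_before = np.array([0.30, 0.25, 0.20, 0.15, 0.10])   # category probabilities before the shift
p_after = np.array([0.20, 0.20, 0.20, 0.20, 0.20])    # ... and after (shift planted at 60)
counts = np.vstack([rng.multinomial(200, p_before, size=60),
                    rng.multinomial(200, p_after, size=40)])

def loglik(block):
    """Multinomial log-likelihood of a block under its own pooled probabilities."""
    tot = block.sum(axis=0)
    p = tot / tot.sum()
    return float(np.sum(block * np.log(p)))

best = max(range(5, n_chapters - 5),
           key=lambda k: loglik(counts[:k]) + loglik(counts[k:]))
print("estimated change-point:", best)   # should land near the planted value of 60
```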

Relevance:

20.00%

Publisher:

Abstract:

Low concentrations of elements in geochemical analyses have the peculiarity of being compositional data and, for a given level of significance, are likely to be beyond the capabilities of laboratories to distinguish between minute concentrations and complete absence, thus preventing laboratories from reporting extremely low concentrations of the analyte. Instead, what is reported is the detection limit, which is the minimum concentration that conclusively differentiates between presence and absence of the element. A spatially distributed exhaustive sample is employed in this study to generate unbiased sub-samples, which are further censored to observe the effect that different detection limits and sample sizes have on the inference of population distributions starting from geochemical analyses having specimens below detection limit (non-detects). The isometric logratio transformation is used to convert the compositional data in the simplex to samples in real space, thus allowing the practitioner to properly borrow from the large source of statistical techniques valid only in real space. The bootstrap method is used to numerically investigate the reliability of inferring several distributional parameters employing different forms of imputation for the censored data. The case study illustrates that, in general, best results are obtained when imputations are made using the distribution best fitting the readings above detection limit, and exposes the problems of other more widely used practices. When the sample is spatially correlated, it is necessary to combine the bootstrap with stochastic simulation.
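A sketch of the imputation-plus-bootstrap strategy described above, under stated assumptions: the data are simulated from a lognormal, a lognormal is refitted to the readings above a hypothetical detection limit, non-detects are imputed by rejection sampling below that limit, and the bootstrap gives a sampling distribution for the mean. This illustrates the idea; it is not the study's procedure.

```python
# Illustration with simulated data; the detection limit and model are assumptions.
import numpy as np

rng = np.random.default_rng(7)
observed = rng.lognormal(mean=0.0, sigma=1.0, size=500)   # simulated concentrations
dl = 0.4                                                  # hypothetical detection limit
above = observed[observed >= dl]

# simple moment fit of a lognormal to the readings above the detection limit
mu, sd = np.log(above).mean(), np.log(above).std()

def draw_below_dl(n):
    """Rejection-sample n imputed values below the detection limit."""
    out = []
    while len(out) < n:
        d = rng.lognormal(mu, sd, size=4 * n + 10)
        out.extend(d[d < dl].tolist())
    return np.array(out[:n])

def imputed_mean(idx):
    x = observed[idx].copy()
    nd = x < dl                                           # non-detects in this resample
    x[nd] = draw_below_dl(int(nd.sum()))
    return x.mean()

boot = [imputed_mean(rng.integers(0, len(observed), len(observed)))
        for _ in range(200)]
print("bootstrap mean and standard error:", round(np.mean(boot), 3), round(np.std(boot), 3))
```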

Relevance:

20.00%

Publisher:

Abstract:

Sediment composition is mainly controlled by the nature of the source rock(s), and by chemical (weathering) and physical processes (mechanical crushing, abrasion, hydrodynamic sorting) during alteration and transport. Although the factors controlling these processes are conceptually well understood, detailed quantifications of compositional changes induced by a single process are rare, as are examples where the effects of several processes can be distinguished. The present study was designed to characterize the role of mechanical crushing and sorting in the absence of chemical weathering. Twenty sediment samples were taken from Alpine glaciers that erode almost pure granitoid lithologies. For each sample, 11 grain-size fractions from granules to clay (ø grades −1 to 9) were separated, and each fraction was analysed for its chemical composition. The presence of clear steps in the box-plots of all parts (in adequate ilr and clr scales) against ø is assumed to be explained by typical crystal size ranges for the relevant mineral phases. These scatter plots and the biplot suggest a splitting of the full grain-size range into three groups: coarser than ø = 4 (comparatively rich in SiO2, Na2O, K2O, Al2O3, and dominated by “felsic” minerals like quartz and feldspar), finer than ø = 8 (comparatively rich in TiO2, MnO, MgO, Fe2O3, mostly related to “mafic” sheet silicates like biotite and chlorite), and intermediate grain sizes (4 ≤ ø < 8; comparatively rich in P2O5 and CaO, related to apatite and some feldspar). To further test the absence of chemical weathering, the observed compositions were regressed against three explanatory variables: a trend on grain size in ø scale, a step function for ø ≥ 4, and another for ø ≥ 8. The original hypothesis was that the trend could be identified with weathering effects, whereas each step function would highlight those minerals with the biggest characteristic size at its lower end. Results suggest that this assumption is reasonable for the step functions, but that, besides weathering, some other factors (different mechanical behavior of minerals) also make an important contribution to the trend.
Key words: sediment, geochemistry, grain size, regression, step function
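The regression design described above (a linear trend in ø plus step functions at ø = 4 and ø = 8) can be written compactly. The sketch below fits it by least squares to one simulated clr coordinate standing in for a real oxide; the coefficients and noise level are invented.

```python
# Sketch of the trend + step-function regression with simulated data.
import numpy as np

rng = np.random.default_rng(8)
phi = np.arange(-1, 10, dtype=float)                # grain-size classes in phi units (-1 to 9)
design = np.column_stack([np.ones_like(phi),        # intercept
                          phi,                      # linear trend in phi
                          (phi >= 4).astype(float), # step at phi = 4
                          (phi >= 8).astype(float)])# step at phi = 8

# one simulated clr coordinate per grain-size fraction, standing in for a real oxide
y = 0.1 * phi + 0.8 * (phi >= 4) - 1.2 * (phi >= 8) + rng.normal(0, 0.1, size=len(phi))

beta, *_ = np.linalg.lstsq(design, y, rcond=None)
print("intercept, trend, step(phi>=4), step(phi>=8):", beta.round(2))
```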

Relevance:

20.00%

Publisher:

Abstract:

The GREP (product, process and production research group) at the UdG currently has a software tool, developed in a 2003 final-degree project (PFC), that allows it to sequence the production of a machine shop with a maximum of five products, a defined number of possible manufacturing routes for each product, and three machines. The tool is very effective for these cases, since it examines all the possibilities and checks them one by one. However, this approach has a number of limitations, such as execution time, because checking every sequence carries a high computational cost, and the rigidity of the system, because it does not allow more products or more machines to be sequenced. To solve this problem, a new software tool is proposed, built on the current one, that can sequence more parts without using as much memory, so that future improvements such as setup times can later be added. Heuristic methods have been used to develop the tool, specifically two: genetic algorithms and tabu search. These methods stand out because they do not explore every possible combination; instead they examine a set of combinations and, using crossover and neighbourhood-generation methods, search for a solution.
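As an illustration of the tabu-search idea mentioned above, and only that (the project's tool handles several machines and alternative routes), the sketch below applies a swap-neighbourhood tabu search to a single-machine sequencing problem with invented processing times, minimising the total completion time.

```python
# Heavily simplified tabu-search sketch; not the project's scheduler.
import random

random.seed(0)
proc = [random.randint(1, 10) for _ in range(8)]   # invented processing time per job

def swap(seq, i, j):
    s = seq[:]
    s[i], s[j] = s[j], s[i]
    return s

def cost(seq):
    """Total completion time of the sequence on a single machine."""
    t, total = 0, 0
    for job in seq:
        t += proc[job]
        total += t
    return total

def tabu_search(iters=200, tabu_len=5):
    seq = list(range(len(proc)))
    best, best_cost = seq[:], cost(seq)
    tabu = []                                      # recently used swap moves
    for _ in range(iters):
        moves = [(cost(swap(seq, i, j)), i, j)
                 for i in range(len(seq)) for j in range(i + 1, len(seq))
                 if (i, j) not in tabu]
        c, i, j = min(moves)                       # best admissible neighbour
        seq = swap(seq, i, j)
        tabu = (tabu + [(i, j)])[-tabu_len:]
        if c < best_cost:
            best, best_cost = seq[:], c
    return best, best_cost

print(tabu_search())
```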

Relevance:

20.00%

Publisher:

Abstract:

It is now a century since the discovery of superconductivity. This macroscopic quantum phenomenon has applications in medical diagnosis through MRI (Magnetic Resonance Imaging), which provides detailed internal images of the human body, and in scientific instruments such as the SQUID (Superconducting Quantum Interference Device), which can measure very weak magnetic fields, with detection limits down to 10^-15 T. But the applications of superconductors are by no means limited to these fields: great efforts are being made to process superconducting tapes on a large scale so that the phenomenon can be applied to power transmission cables, motors, generators, transformers, etc. This would bring a notable increase in energy efficiency and a consequent reduction in greenhouse-gas emissions. In this project, carried out in the Superconductivity and Large-Scale Nanostructuring group at ICMAB, several experimental set-ups were built in order to observe and better understand the pyrolysis of a YBCO precursor solution deposited as thin films on different substrates. This stage is decisive for obtaining, during the growth process, a textured layer with a high critical current density (Jc). To this end, a system was used that allows the pyrolysis to be performed relatively quickly while the evolution of the layer is recorded on video. The quality, texture and surface morphology of all the pyrolysed samples were studied. The dynamic behaviour of the gases inside a cylindrical cavity, the geometry used so far to grow superconducting tapes, was also examined qualitatively. Finally, different types of blowers were designed to introduce the gases transversally rather than longitudinally into the tubular furnace during the heat treatment, which leads to an increase in the homogeneous superconducting surface. The project is organised in several parts. It begins with an introduction to high-temperature superconductors and the current fabrication methods for superconducting tapes. The objectives we aim to achieve are then explained. A later section describes the experimental techniques used. All the results obtained are then detailed together with their characterisations. Finally, the environmental impact of carrying out this project is studied and its cost is detailed in a budget. Three annexes expand on some subsections that could not be developed in the main text for lack of space.

Relevance:

20.00%

Publisher:

Abstract:

This project presents a mechanism for guaranteeing the privacy of the searches that users submit to search engines, exploiting the potential of social networks.

Relevance:

20.00%

Publisher:

Abstract:

Urinary lithiasis is a disorder involving the formation of precipitates anywhere in the urinary tract. It is a common disorder that affects roughly one tenth of the population of the European Union at some point in their lives. Moreover, in the five years following a lithiasic episode, the recurrence rate among patients is between 45 and 75%. This urinary disorder is influenced by a large number of variables of physiological, psychological and environmental origin. Lithiasic episodes may resolve spontaneously, with the expulsion of the kidney stone, or through various medical interventions. The medical treatments arising from urinary lithiasis, namely stone fragmentation, surgical interventions and subsequent treatments, generate large costs for health systems. For these reasons, identifying the disorder that gave rise to the lithiasic episode is of critical importance in order to minimise the risk of recurrence. The most usual methods for determining the causes that trigger the formation of a kidney stone are urine analyses and the study of the stone itself. A correct description of the composition and, especially, the structure of the kidney stone can provide key information about the possible causes of its formation, covering both the onset of nucleation and the successive stages of crystal growth. With this last aspect in mind, the present study has focused on the characterisation of urinary calculi through chemical imaging (Hyperspectral Imaging) methodologies, which helped to determine the main spectral characteristics of each major compound found in kidney stones. In addition, the use of samples of known composition made it possible to build a model with Artificial Neural Networks that classifies new samples of unknown composition more quickly than the current procedure. This work is a new contribution to the understanding of the structure of kidney stones and of the conditions of their formation. The results highlight the potential of the techniques employed in the field of renal lithiasis, complementing existing knowledge aimed at improving patients' quality of life.
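A minimal sketch of the classification step described above, under assumptions: an artificial neural network is trained on simulated "spectral" feature vectors for three illustrative stone components and evaluated on held-out samples. Neither the spectra nor the model correspond to the study's data or architecture.

```python
# Toy ANN classifier on simulated spectra; class names are illustrative only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(9)
n_bands, n_per_class = 30, 80
classes = ["whewellite", "uric_acid", "struvite"]          # common stone components (illustrative)
centres = rng.normal(0, 1, size=(len(classes), n_bands))   # invented class mean "spectra"

X = np.vstack([c + 0.3 * rng.normal(size=(n_per_class, n_bands)) for c in centres])
y = np.repeat(classes, n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```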