127 results for Matemática Aplicada
Abstract:
The main instrument used in psychological measurement is the self-report questionnaire. One of its major drawbacks, however, is its susceptibility to response biases. A known strategy to control these biases has been the use of so-called ipsative items. Ipsative items are items that require the respondent to make between-scale comparisons within each item. The selected option determines to which scale the weight of the answer is attributed. Consequently, in questionnaires consisting only of ipsative items, every respondent is allotted an equal amount, i.e. the total score, which each respondent can distribute differently over the scales. This type of response format therefore yields data that are compositional from their inception. Methodologically oriented psychologists have heavily criticized this type of item format, since the resulting data are also marked by the associated unfavourable statistical properties. Nevertheless, clinicians have kept using these questionnaires to their satisfaction. This investigation therefore aims to evaluate both positions and addresses the similarities and differences between the two data collection methods. The ultimate objective is to formulate a guideline on when to use which type of item format. The comparison is based on data obtained with both an ipsative and a normative version of three psychological questionnaires, which were administered to 502 first-year psychology students according to a balanced within-subjects design. Previous research only compared the direct ipsative scale scores with the derived ipsative scale scores. The use of compositional data analysis techniques also enables one to compare derived normative score ratios with direct normative score ratios. The addition of this second comparison not only offers the advantage of a better-balanced research strategy; in principle it also allows for parametric testing in the evaluation
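As a minimal illustration of why this response format yields compositional data, the sketch below (with entirely hypothetical scale scores, not the study's data) closes a respondent's scale scores to unit sum and compares the ipsative and normative versions on the logratio scale, the quantity on which the two formats can be meaningfully contrasted.

```python
import numpy as np

def closure(x):
    """Rescale a vector of positive scores so that its parts sum to 1."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def pairwise_logratios(x):
    """All pairwise logratios log(x_i / x_j), i < j, of a composition."""
    x = closure(x)
    n = len(x)
    return {(i, j): float(np.log(x[i] / x[j])) for i in range(n) for j in range(i + 1, n)}

# Hypothetical scores of one respondent on three scales, once from the ipsative
# version (parts of a forced total) and once from the normative version.
ipsative_scores = [10, 6, 4]
normative_scores = [38, 25, 17]

ips = pairwise_logratios(ipsative_scores)
nor = pairwise_logratios(normative_scores)
for pair in ips:
    print(pair, round(ips[pair], 3), round(nor[pair], 3))
```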
Abstract:
A problem in the archaeometric classification of Catalan Renaissance pottery is the fact that the clay supply of the pottery workshops was centrally organized by guilds, so that all potters of a single production centre usually produced chemically similar ceramics. However, when the glazes of the ware are analysed, a large number of inclusions is usually found in the glaze, which reveal technological differences between individual workshops. These inclusions were used by the potters to opacify the transparent glaze and to achieve a white background for further decoration. In order to distinguish the different technological preparation procedures of the individual workshops, the chemical composition of these inclusions, as well as their size in the two-dimensional cut, is recorded with a scanning electron microscope. Based on the latter, a frequency distribution of the apparent diameters is estimated for each sample and type of inclusion. Following an approach by S.D. Wicksell (1925), it is in principle possible to transform the distributions of the apparent 2D diameters back into those of the true three-dimensional bodies. The applicability of this approach and its practical problems are examined using different ways of kernel density estimation and Monte Carlo tests of the methodology. Finally, it is tested to what extent the obtained frequency distributions can be used to classify the pottery.
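The sketch below illustrates the forward direction of Wicksell's corpuscle problem, which underlies the Monte Carlo tests mentioned above: apparent section diameters are simulated from a hypothetical population of sphere diameters and smoothed with a Gaussian kernel density estimate. It is only a simplified illustration under these assumptions, not the paper's actual back-transformation from 2D to 3D distributions.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def apparent_diameters(true_diameters, n_sections, rng):
    """Simulate apparent (2D) section diameters for spheres of given true (3D)
    diameters under Wicksell's forward model:
      - a random plane hits a sphere with probability proportional to its diameter,
      - given a hit, the distance h from plane to centre is uniform on [0, D/2],
      - the apparent diameter is then d = sqrt(D**2 - 4*h**2).
    """
    true_diameters = np.asarray(true_diameters, dtype=float)
    p = true_diameters / true_diameters.sum()          # size-biased sampling
    D = rng.choice(true_diameters, size=n_sections, p=p)
    h = rng.uniform(0.0, D / 2.0)
    return np.sqrt(D**2 - 4.0 * h**2)

# hypothetical "true" sphere diameters, e.g. a log-normal population of inclusions
true_D = rng.lognormal(mean=1.0, sigma=0.3, size=5000)
apparent_d = apparent_diameters(true_D, n_sections=2000, rng=rng)

# Gaussian kernel density estimate of the apparent-diameter distribution
kde = gaussian_kde(apparent_d)
grid = np.linspace(apparent_d.min(), apparent_d.max(), 200)
density = kde(grid)
print(density[:5].round(3))
```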
Abstract:
Most of the economic literature has presented its analysis under the assumption of a homogeneous capital stock. However, capital composition differs across countries. What has been the pattern of capital composition associated with world economies? We carry out an exploratory statistical analysis based on compositional data transformed by Aitchison logratio transformations, and we use tools for visualizing and measuring statistical estimators of association among the components. The goal is to detect distinctive patterns in the composition. Initial findings include: 1. Sectorial components behave in a correlated way, with building industries on one side and, less clearly, equipment industries on the other. 2. Full-sample estimation shows a negative correlation between the durable goods component and the other buildings component, and between the transportation and building industries components. 3. Countries with zeros in some components are mainly low-income countries at the bottom of the income category, and they behave in an extreme way, distorting the main results observed in the full sample. 4. After removing these extreme cases, the conclusions do not seem very sensitive to the presence of other isolated cases.
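A minimal sketch of the kind of transformation involved: the centred logratio (clr) transform applied to hypothetical capital-composition shares, with association among components then measured on the clr coordinates. The column meanings and figures are invented for illustration and are not the study's data.

```python
import numpy as np

def clr(composition_matrix):
    """Centred logratio transform of a matrix whose rows are compositions
    (strictly positive parts): log of each row minus its row mean."""
    X = np.asarray(composition_matrix, dtype=float)
    logX = np.log(X)
    return logX - logX.mean(axis=1, keepdims=True)

# hypothetical capital-composition shares for a few countries; columns could
# stand for e.g. dwellings, other buildings, transport equipment, machinery
shares = np.array([
    [0.35, 0.30, 0.10, 0.25],
    [0.30, 0.25, 0.15, 0.30],
    [0.40, 0.20, 0.12, 0.28],
    [0.28, 0.33, 0.09, 0.30],
])

Z = clr(shares)
# association among components measured on clr coordinates instead of raw shares
print(np.corrcoef(Z, rowvar=False).round(2))
```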
Abstract:
The statistical analysis of literary style is the part of stylometry that compares measurable characteristics in a text that are rarely controlled by the author with those in other texts. When the goal is to settle authorship questions, these characteristics should relate to the author’s style and not to the genre, epoch or editor, and they should be such that their variation between authors is larger than the variation within comparable texts from the same author. For an overview of the literature on stylometry and some of the techniques involved, see for example Mosteller and Wallace (1964, 82), Herdan (1964), Morton (1978), Holmes (1985), Oakes (1998) or Lebart, Salem and Berry (1998). Tirant lo Blanc, a book of chivalry, is the main work in Catalan literature; it was hailed as “the best book of its kind in the world” by Cervantes in Don Quixote. Considered by writers like Vargas Llosa or Damaso Alonso to be the first modern novel in Europe, it has been translated several times into Spanish, Italian and French, with modern English translations by Rosenthal (1996) and La Fontaine (1993). The main body of the book was written between 1460 and 1465, but it was not printed until 1490. There is an intense and long-lasting debate about its authorship, sprouting from its first edition, whose introduction states that the whole book is the work of Martorell (1413?-1468), while at the end it is stated that the last quarter of the book is by Galba (?-1490), after the death of Martorell. Some of the authors that support the theory of single authorship are Riquer (1990), Chiner (1993) and Badia (1993), while some of those supporting the double authorship are Riquer (1947), Coromines (1956) and Ferrando (1995). For an overview of this debate, see Riquer (1990). Neither of the two candidate authors left any text comparable to the one under study, and therefore discriminant analysis cannot be used to help classify chapters by author. By using sample texts encompassing about ten percent of the book, and looking at word length and at the use of 44 conjunctions, prepositions and articles, Ginebra and Cabos (1998) detect heterogeneities that might indicate the existence of two authors. By analyzing the diversity of the vocabulary, Riba and Ginebra (2000) estimate that stylistic boundary to be near chapter 383. Following the lead of the extensive literature, this paper looks into word length, the use of the most frequent words, and the use of vowels in each chapter of the book. Given that the features selected are categorical, this leads to three contingency tables of ordered rows and therefore to three sequences of multinomial observations. Section 2 explores these sequences graphically, observing a clear shift in their distribution. Section 3 describes the problem of estimating a sudden change-point in those sequences, and in the following sections we propose various ways to estimate change-points in multinomial sequences: the method in Section 4 involves fitting models for polytomous data; the one in Section 5 fits gamma models to the sequence of chi-square distances between each row profile and the average profile; the one in Section 6 fits models to the sequence of values taken by the first component of the correspondence analysis, as well as to sequences of other summary measures such as the average word length. In Section 7 we fit models to the marginal binomial sequences to identify the features that distinguish the chapters before and after that boundary. Most of these methods rely heavily on the use of generalized linear models.
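A simplified sketch of the change-point idea described above: chi-square distances between each chapter's row profile and the average profile are computed from a simulated, hypothetical chapter-by-category contingency table, and a boundary is estimated from that sequence. The paper fits gamma and other generalized linear models to such sequences; the least-squares split used here is only an illustrative stand-in.

```python
import numpy as np

def chi_square_distances(table):
    """Chi-square distance of each row profile from the average (marginal) profile
    of a contingency table (rows = chapters, columns = word/feature categories)."""
    N = np.asarray(table, dtype=float)
    row_profiles = N / N.sum(axis=1, keepdims=True)
    avg_profile = N.sum(axis=0) / N.sum()
    diff = row_profiles - avg_profile
    return np.sqrt((diff**2 / avg_profile).sum(axis=1))

def least_squares_change_point(values):
    """Crude change-point estimate: the split minimising the within-segment
    sum of squared deviations of the sequence of distances."""
    values = np.asarray(values, dtype=float)
    best_k, best_cost = None, np.inf
    for k in range(1, len(values)):
        left, right = values[:k], values[k:]
        cost = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# hypothetical chapter-by-category counts (e.g. word-length classes per chapter),
# with a simulated shift in the multinomial probabilities after 40 chapters
rng = np.random.default_rng(1)
before = rng.multinomial(200, [0.5, 0.3, 0.2], size=40)
after = rng.multinomial(200, [0.4, 0.3, 0.3], size=30)
table = np.vstack([before, after])

d = chi_square_distances(table)
print("estimated change point before row", least_squares_change_point(d))
```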
Abstract:
In several computer graphics areas, a refinement criterion is often needed to decide whether to go on sampling a signal or to stop. When the sampled values are homogeneous enough, we assume that they represent the signal fairly well and no further refinement is needed; otherwise more samples are required, possibly with adaptive subdivision of the domain. For this purpose, a criterion which is very sensitive to variability is necessary. In this paper, we present a family of discrimination measures, the f-divergences, that meet this requirement. These convex functions have been well studied and successfully applied to image processing and several areas of engineering. Two applications to global illumination are shown: oracles for hierarchical radiosity and criteria for adaptive refinement in ray tracing. We obtain significantly better results than with classic criteria, showing that f-divergences are worth further investigation in computer graphics. We also introduce a discrimination measure based on the entropy of the samples for refinement in ray tracing. The recursive decomposition of entropy provides us with a natural method to deal with the adaptive subdivision of the sampling region.
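A toy sketch of an f-divergence-based refinement oracle under assumptions of our own (discrete sample distributions compared against a uniform reference, a hypothetical threshold); it is not the paper's actual radiosity or ray-tracing criterion, but shows how members of the f-divergence family quantify the variability of a set of samples.

```python
import numpy as np

def f_divergence(p, q, f):
    """Generic f-divergence sum_i q_i * f(p_i / q_i) between two discrete
    distributions p and q (both strictly positive and summing to 1)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(q * f(p / q)))

# two classic members of the family
kl = lambda t: t * np.log(t)                  # Kullback-Leibler
hellinger = lambda t: (np.sqrt(t) - 1.0)**2   # (squared) Hellinger

def needs_refinement(samples, threshold, f=hellinger):
    """Toy refinement oracle: normalise the sampled intensity values into a
    distribution and compare it with the uniform distribution; refine when the
    divergence (i.e. the inhomogeneity of the samples) exceeds the threshold."""
    s = np.asarray(samples, dtype=float)
    p = s / s.sum()
    q = np.full_like(p, 1.0 / len(p))
    return f_divergence(p, q, f) > threshold

print(needs_refinement([0.9, 1.0, 1.1, 1.0], threshold=0.01))   # homogeneous -> False
print(needs_refinement([0.1, 2.0, 0.1, 3.5], threshold=0.01))   # variable -> True
```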
Abstract:
Usually, psychometricians apply classical factor analysis to evaluate the construct validity of rank-order scales. Nevertheless, these scales have particular characteristics that must be taken into account: total scores and ranks are highly relevant
Abstract:
This paper sets out to identify the initial positions of the different decision makers who intervene in a group decision making process with a reduced number of actors, and to establish possible consensus paths between these actors. As a methodological support, it employs one of the most widely-known multicriteria decision techniques, namely, the Analytic Hierarchy Process (AHP). Assuming that the judgements elicited by the decision makers follow the so-called multiplicative model (Crawford and Williams, 1985; Altuzarra et al., 1997; Laininen and Hämäläinen, 2003) with log-normal errors and unknown variance, a Bayesian approach is used in the estimation of the relative priorities of the alternatives being compared. These priorities, estimated by way of the median of the posterior distribution and normalised in a distributive manner (priorities add up to one), are a clear example of compositional data that will be used in the search for consensus between the actors involved in the resolution of the problem through the use of Multidimensional Scaling tools
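For illustration only: under the multiplicative model, a judgement a_ij is approximately w_i / w_j with multiplicative (log-normal) error, and the priorities can be estimated from a reciprocal judgement matrix by row geometric means (the log-least-squares solution), normalised so they add up to one. This is a simpler stand-in for the Bayesian posterior-median estimation used in the paper; the judgement matrix below is hypothetical.

```python
import numpy as np

def lls_priorities(pairwise):
    """Log-least-squares (row geometric mean) priorities for a reciprocal
    pairwise-comparison matrix A, assuming the multiplicative model
    a_ij ~ (w_i / w_j) * error. Priorities are normalised distributively
    (they sum to one), i.e. they form a composition."""
    A = np.asarray(pairwise, dtype=float)
    g = np.exp(np.log(A).mean(axis=1))   # row geometric means
    return g / g.sum()

# hypothetical judgements of one decision maker over three alternatives
A = np.array([
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 1/2.0, 1.0],
])
print(lls_priorities(A).round(3))   # strictly positive, sums to 1
```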
Abstract:
First application of compositional data analysis techniques to Australian election data
Abstract:
Precision of released figures is not only an important quality feature of official statistics; it is also essential for a good understanding of the data. In this paper we show a case study of how precision could be conveyed if the multivariate nature of the data has to be taken into account. In the official release of the Swiss earnings structure survey, the total salary is broken down into several wage components. We follow Aitchison's approach for the analysis of compositional data, which is based on logratios of components. We first present different multivariate analyses of the compositional data whereby the wage components are broken down by economic activity classes. Then we propose a number of ways to assess precision.
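One possible illustration of conveying precision for compositional figures: a bootstrap interval for a mean logratio of two wage components. The component labels, shares and interval below are hypothetical and are not taken from the Swiss earnings structure survey, nor necessarily among the measures proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical wage compositions: each row is one person's shares of total salary
# in three components (columns could stand for base salary, bonuses, overtime pay)
wages = rng.dirichlet([20, 3, 2], size=400)

def mean_logratio(sample, i, j):
    """Average logratio of component i to component j across observations."""
    return float(np.mean(np.log(sample[:, i] / sample[:, j])))

# a simple bootstrap to convey the precision of a released logratio figure
estimates = []
for _ in range(2000):
    idx = rng.integers(0, len(wages), size=len(wages))
    estimates.append(mean_logratio(wages[idx], 1, 0))
low, high = np.percentile(estimates, [2.5, 97.5])
print(f"mean log(bonus/base): {mean_logratio(wages, 1, 0):.3f}, "
      f"95% bootstrap interval: [{low:.3f}, {high:.3f}]")
```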
Abstract:
This paper presents a procedure that allows us to determine the preference structures (PS) associated with each of the different groups of actors that can be identified in a group decision making problem with a large number of individuals. To that end, it makes use of the Analytic Hierarchy Process (AHP) (Saaty, 1980) as the technique to solve discrete multicriteria decision making problems. This technique permits the resolution of multicriteria, multienvironment and multiactor problems in which subjective aspects and uncertainty have been incorporated into the model, constructing ratio scales corresponding to the priorities of the elements being compared, normalised in a distributive manner (Σ wi = 1). On the basis of the individuals’ priorities we identify different clusters of decision makers and, for each of these, the associated preference structure, using tools analogous to those of Multidimensional Scaling. The resulting PS will be employed to extract knowledge for the subsequent negotiation processes and, should it be necessary, to determine the relative importance of the alternatives being compared using any one of the existing procedures.
Abstract:
It is well known that regression analyses involving compositional data need special attention because the data are not of full rank. For a regression analysis where both the dependent and the independent variables are components, we propose a transformation of the components emphasizing their role as dependent and independent variables. A simple linear regression can then be performed on the transformed components. The regression line can be depicted in a ternary diagram, facilitating the interpretation of the analysis in terms of components. An example with time budgets illustrates the method and the graphical features.
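A rough sketch in the same spirit, using additive logratio (alr) coordinates as a generic stand-in for the transformation proposed in the paper: both the explanatory and the response compositions are mapped to logratio coordinates and an ordinary least-squares regression is fitted there. The time-budget data are simulated and hypothetical.

```python
import numpy as np

def alr(compositions):
    """Additive logratio coordinates: log of each part relative to the last part.
    Rows are compositions with strictly positive parts."""
    X = np.asarray(compositions, dtype=float)
    return np.log(X[:, :-1] / X[:, [-1]])

def alr_inverse(coords):
    """Map alr coordinates back to compositions that sum to one."""
    Y = np.exp(np.asarray(coords, dtype=float))
    Y = np.hstack([Y, np.ones((Y.shape[0], 1))])
    return Y / Y.sum(axis=1, keepdims=True)

# hypothetical time-budget data: explanatory and response compositions, 3 parts each
rng = np.random.default_rng(2)
X = rng.dirichlet([4, 3, 2], size=50)      # e.g. weekday time budget
noise = rng.normal(scale=0.1, size=(50, 2))
Y_coords = 0.8 * alr(X) + 0.2 + noise      # linear relation in logratio coordinates
Y = alr_inverse(Y_coords)                  # e.g. weekend time budget

# ordinary least squares on the transformed (logratio) coordinates
Xc = np.hstack([np.ones((50, 1)), alr(X)])
beta, *_ = np.linalg.lstsq(Xc, alr(Y), rcond=None)
print(beta.round(2))   # intercepts and slopes in logratio space
```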
Abstract:
Two contrasting case studies of sediment and detrital mineral composition are investigated in order to outline interactions between chemical composition and grain size. Modern glacial sediments exhibit a strong dependence between the two parameters, due to the preferential enrichment of mafic minerals, especially biotite, in the fine-grained fractions. On the other hand, the composition of detrital heavy minerals (here: rutile) appears not to be systematically related to grain size, but is strongly controlled by location, i.e. the petrology of the source rocks of the detrital grains. This supports the use of rutile as a well-suited tracer mineral for provenance studies. The results further suggest that (i) interpretations derived from whole-rock sediment geochemistry should be flanked by grain-size observations, and (ii) a sounder statistical evaluation of these interactions requires the development of new tailor-made statistical tools to deal with such so-called two-way compositions.
Abstract:
A study of tin deposits from Priamurye (Russia) is performed to analyze the differences between them based on their origin and also on commercial criteria. A particular analysis based on their vertical zonality is also given for samples from the Solnechnoe deposit. All the statistical analyses are carried out on the subcomposition formed by seven trace elements in cassiterite (In, Sc, Be, W, Nb, Ti and V), using Aitchison’s methodology for the analysis of compositional data.
Abstract:
The chemical composition of sediments and rocks, as well as their distribution at the Martian surface, represents a long-term archive of the processes that have formed the planetary surface. A survey of chemical compositions by means of compositional data analysis is a valuable tool for extracting direct evidence of weathering processes and allows weathering and sedimentation rates to be quantified. clr-biplot techniques are applied to visualize chemical relationships across the surface (“chemical maps”). The variability among individual suites of data is further analyzed by means of clr-PCA, in order to extract chemical alteration vectors between fresh rocks and their crusts and to assess the different source reservoirs accessible to soil formation. Both techniques are applied to elucidate the influence of remote weathering by combined analysis of several soil-forming branches. Vector analysis in the simplex provides the opportunity to study atmosphere-surface interactions, including the role and composition of volcanic gases.
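A minimal sketch of clr-PCA as used above: compositions are clr-transformed, centred, and decomposed by singular value decomposition, so that the leading loading vectors play the role of alteration vectors in clr space. The compositions below are simulated placeholders, not Martian data.

```python
import numpy as np

def clr(X):
    """Centred logratio transform of rows of strictly positive compositions."""
    L = np.log(np.asarray(X, dtype=float))
    return L - L.mean(axis=1, keepdims=True)

def clr_pca(compositions):
    """PCA of clr-transformed compositions via the SVD; the leading principal
    directions can be read as chemical alteration vectors in clr space."""
    Z = clr(compositions)
    Zc = Z - Z.mean(axis=0)                    # centre over samples
    U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
    scores = U * s                             # sample coordinates (biplot rows)
    loadings = Vt                              # component directions (biplot columns)
    explained = s**2 / (s**2).sum()
    return scores, loadings, explained

# hypothetical four-part compositions of fresh rocks and their weathered crusts
rng = np.random.default_rng(3)
rocks = rng.dirichlet([8, 4, 2, 1], size=30)
scores, loadings, explained = clr_pca(rocks)
print(explained.round(2))          # variance explained per principal component
print(loadings[0].round(2))        # first alteration direction in clr space
```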
Abstract:
In any discipline, where uncertainty and variability are present, it is important to have principles which are accepted as inviolate and which should therefore drive statistical modelling, statistical analysis of data and any inferences from such an analysis. Despite the fact that two such principles have existed over the last two decades and from these a sensible, meaningful methodology has been developed for the statistical analysis of compositional data, the application of inappropriate and/or meaningless methods persists in many areas of application. This paper identifies at least ten common fallacies and confusions in compositional data analysis with illustrative examples and provides readers with necessary, and hopefully sufficient, arguments to persuade the culprits why and how they should amend their ways