15 results for International New Thought Alliance

at Universitat de Girona, Spain


Relevance: 30.00%

Publisher:

Abstract:

The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A2(P) with their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way, rather elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combinations of statistical information, such as Bayesian updating, the combination of likelihoods, and robust M-estimation functions, are simple additions/perturbations in A2(Pprior). Weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turns out to have a particularly easy interpretation in terms of A2(P). Regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A2(Pprior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimation functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
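The compositional side of the correspondence described above can be sketched in a few lines of Python: a minimal, illustrative implementation of the centered log-ratio (clr) transform and the Aitchison distance it induces (the function names are ours, not from the paper).

```python
import math

def clr(x):
    """Centered log-ratio transform of a composition of positive parts."""
    g = math.exp(sum(math.log(p) for p in x) / len(x))  # geometric mean
    return [math.log(p / g) for p in x]

def aitchison_distance(x, y):
    """Aitchison distance: the Euclidean distance between clr images."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(clr(x), clr(y))))
```

The clr image of any composition sums to zero, and the distance is invariant under perturbation (component-wise multiplication followed by re-closure), the "addition" of the vector space structure.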

Relevance: 30.00%

Publisher:

Abstract:

The use of orthonormal coordinates in the simplex and, particularly, balance coordinates, has suggested the use of a dendrogram for the exploratory analysis of compositional data. The dendrogram is based on a sequential binary partition of a compositional vector into groups of parts. At each step of a partition, one group of parts is divided into two new groups, and a balancing axis in the simplex between both groups is defined. The set of balancing axes constitutes an orthonormal basis, and the projections of the sample on them are orthogonal coordinates. They can be represented in a dendrogram-like graph showing: (a) the way of grouping parts of the compositional vector; (b) the explanatory role of each subcomposition generated in the partition process; (c) the decomposition of the total variance into balance components associated with each binary partition; (d) a box-plot of each balance. This representation helps the interpretation of balance coordinates, identifies the most explanatory coordinates, and describes the whole sample in a single diagram independently of the number of parts of the sample.
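One step of such a sequential binary partition can be sketched as follows: the balance between two groups of parts is a scaled log-ratio of their geometric means (a minimal illustration, assuming the standard normalising constant sqrt(rs/(r+s)); the function names are ours).

```python
import math

def geometric_mean(parts):
    return math.exp(sum(math.log(p) for p in parts) / len(parts))

def balance(x, left, right):
    """Balance coordinate for one step of a sequential binary partition.

    `left` and `right` are the index lists of the two groups of parts
    produced by splitting one group of the composition `x`.
    """
    r, s = len(left), len(right)
    gl = geometric_mean([x[i] for i in left])
    gr = geometric_mean([x[i] for i in right])
    return math.sqrt(r * s / (r + s)) * math.log(gl / gr)
```

Applying this at every step of the partition yields the orthonormal (ilr) coordinates that the dendrogram displays; a balance is zero exactly when the two groups have equal geometric means.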

Relevance: 30.00%

Publisher:

Abstract:

The use of the perturbation and power transformation operations permits the investigation of linear processes in the simplex as in a vector space. When the investigated geochemical processes can be constrained by the use of a well-known starting point, the eigenvectors of the covariance matrix of a non-centred principal component analysis allow compositional changes to be modelled with respect to a reference point. The results obtained for the chemistry of water collected in the River Arno (central-northern Italy) open new perspectives for considering relative changes of the analysed variables and for hypothesising the relative effect of the different physical-chemical processes at work, thus laying the basis for quantitative modelling.
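The two operations mentioned above are simple to state: perturbation is component-wise multiplication and powering is component-wise exponentiation, each followed by re-closure. A minimal sketch (function names are ours):

```python
def closure(x):
    """Rescale a vector of positive parts so they sum to one."""
    total = sum(x)
    return [p / total for p in x]

def perturb(x, y):
    """Perturbation: component-wise product, re-closed (simplex 'addition')."""
    return closure([a * b for a, b in zip(x, y)])

def power(x, alpha):
    """Powering: component-wise power, re-closed (scalar 'multiplication')."""
    return closure([p ** alpha for p in x])
```

Under these operations the simplex behaves as a vector space: the uniform composition is the neutral element, and power(x, -1) is the inverse of x.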

Relevance: 30.00%

Publisher:

Abstract:

The statistical analysis of compositional data is commonly used in geological studies. As is well known, compositions should be treated using logratios of parts, which are difficult to use correctly in standard statistical packages. In this paper we describe the new features of our freeware package, named CoDaPack, which implements most of the basic statistical methods suitable for compositional data. An example using real data is presented to illustrate the use of the package.

Relevance: 30.00%

Publisher:

Abstract:

Compositional data should be analysed using logratios of parts, which are difficult to use correctly in standard statistical packages. For this reason a freeware package, named CoDaPack, was created. This software implements most of the basic statistical methods suitable for compositional data. In this paper we describe the new version of the package, now called CoDaPack3D. It is developed in Visual Basic for Applications (associated with Excel©), Visual Basic and OpenGL, and it is oriented towards users with minimal computing knowledge, with the aim of being simple and easy to use. This new version includes new graphical output in 2D and 3D; these outputs can be zoomed and, in 3D, rotated. A customization menu is included, and outputs can be saved in JPEG format. This version also includes interactive help, and all dialog windows have been improved to facilitate their use. To use CoDaPack one opens Excel© and introduces the data in a standard spreadsheet, organized as a matrix where Excel© rows correspond to observations and columns to parts. The user executes macros that return numerical or graphical results. There are two kinds of numerical results, new variables and descriptive statistics, and both appear on the same sheet. Graphical output appears in independent windows. In the present version there are 8 menus, with a total of 38 submenus which, after some dialogue, directly call the corresponding macro. The dialogues ask the user to input variables and any further parameters needed, as well as where to put the results. The web site http://ima.udg.es/CoDaPack contains this freeware package; only Microsoft Excel© under Microsoft Windows© is required to run the software. Key words: compositional data analysis, software.

Relevance: 30.00%

Publisher:

Abstract:

The Dirichlet family owes its privileged status within simplex distributions to its ease of interpretation and good mathematical properties. In particular, we recall fundamental properties for the analysis of compositional data such as closure under amalgamation and subcomposition. From a probabilistic point of view, it is characterised (uniquely) by a variety of independence relationships, which makes it indisputably the reference model for expressing the non-trivial idea of substantial independence for compositions. Indeed, its well-known inadequacy as a general model for compositional data stems from this independence structure together with the poverty of its parametrisation. In this paper a new class of distributions (called the Flexible Dirichlet), capable of handling various dependence structures and containing the Dirichlet as a special case, is presented. The new model exhibits a considerably richer parametrisation which, for example, allows the means and (part of) the variance-covariance matrix to be modelled separately. Moreover, the model preserves some good mathematical properties of the Dirichlet, i.e. closure under amalgamation and subcomposition, with new parameters simply related to the parent composition parameters. Furthermore, the joint and conditional distributions of subcompositions and relative totals can be expressed as simple mixtures of two Flexible Dirichlet distributions. The basis generating the Flexible Dirichlet, though keeping compositional invariance, shows a dependence structure which allows various forms of partitional dependence to be contemplated by the model (e.g. non-neutrality, subcompositional dependence and subcompositional non-invariance), independence cases being identified by suitable parameter configurations. In particular, within this model substantial independence among subsets of components of the composition naturally occurs when the subsets have a Dirichlet distribution.
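The closure under amalgamation recalled above (summing parts of a Dirichlet(a1, ..., aD) draw yields a Dirichlet whose parameters are the corresponding sums of the ai) can be checked numerically with a short Monte Carlo sketch. This is an illustration of the classical Dirichlet property only, not of the Flexible Dirichlet; the sampler builds a Dirichlet draw from normalised Gamma variates using only the standard library.

```python
import random

def dirichlet_sample(alpha, rng=random):
    """One draw from Dirichlet(alpha) via normalised Gamma variates."""
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(g)
    return [v / total for v in g]

def amalgamate(x, groups):
    """Amalgamation: sum the parts inside each index group."""
    return [sum(x[i] for i in idx) for idx in groups]
```

For example, amalgamating a Dirichlet(2, 3, 5) draw over its first two parts should behave like the first part of a Dirichlet(5, 5), whose mean is (2 + 3) / (2 + 3 + 5) = 0.5.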

Relevance: 30.00%

Publisher:

Abstract:

Pounamu (NZ jade), or nephrite, is a protected mineral in its natural form, following the transfer of ownership back to Ngai Tahu under the Ngai Tahu (Pounamu Vesting) Act 1997. Any theft of nephrite is prosecutable under the Crimes Act 1961. Scientific evidence is essential in cases where origin is disputed, so a robust method for discriminating this material through elemental analysis and compositional data analysis is required. Initial studies have characterised the variability within a given nephrite source, including investigation of both in situ outcrops and alluvial material. Methods for the discrimination of two geographically close nephrite sources are being developed. Key words: forensic, jade, nephrite, laser ablation, inductively coupled plasma mass spectrometry, multivariate analysis, elemental analysis, compositional data analysis

Relevance: 30.00%

Publisher:

Abstract:

In this paper we examine the problem of compositional data from a different starting point. Chemical compositional data, as used in provenance studies on archaeological materials, will be approached from measurement theory. The results will show, in a very intuitive way, that chemical data can only be treated using the approach developed for compositional data. It will be shown that compositional data analysis is a particular case of projective geometry, in which the projective coordinates lie in the positive orthant and have the properties of logarithmic interval metrics. Moreover, it will be shown that this approach can be extended to a very large number of applications, including shape analysis. This will be exemplified with a case study in the architecture of Early Christian churches dating back to the 5th-7th centuries AD.

Relevance: 30.00%

Publisher:

Abstract:

Much of the self-image of the Western university hangs on the idea that research and teaching are intimately connected. The central axiom here is that research and teaching are mutually supportive of each other. An institution lacking such a set of relationships between research and teaching falls short of what it means to be a university. This set of beliefs raises certain questions: Is it the case that the presence of such a mutually supportive set of relationships between research and teaching is a necessary condition of the fulfilment of the idea of the university? (A conceptual question). And is it true that, in practice today, such a mutually supportive set of relationships between research and teaching characterises universities? (An empirical question). In my talk, I want to explore these matters in a critical vein. I shall suggest that: a) In practice today, such a mutually supportive set of relationships between research and teaching is in jeopardy. Far from supporting each other, very often research and teaching contend against each other. Research and teaching are becoming two separate ideologies, with their own interest structures. b) Historically, the supposed tight link between research and teaching is both of recent origin and far from universally achieved in universities. Institutional separateness between research and teaching is and has been evident, both across institutions and even across departments in the same institution. c) Conceptually, research and teaching are different activities: each is complex and neither is reducible to the other. In theory, therefore, research and teaching may be said to constitute a holy alliance but in practice, we see more of an unholy alliance. If, then, in an ideal world, a positive relationship between research and teaching is still a worthwhile goal, how might it be construed and worked for? Seeing research and teaching as two discrete and unified sets of activity is now inadequate. 
Much better is a construal of research and teaching as themselves complexes, as intermingling pools of activity helping to form the liquid university that is emerging today. On this view, research and teaching are fluid spaces, ever on the move, taking up new shapes, and themselves dividing and reforming, as the university reworks its own destiny in modern society. On such a perspective, working out a productive relationship between research and teaching is a complex project. This is an alliance that is neither holy nor unholy. It is an uneasy alliance, with temporary accommodations and continuous new possibilities

Relevance: 30.00%

Publisher:

Abstract:

One of the major problems in machine vision is the segmentation of images of natural scenes. This paper presents a new proposal for the image segmentation problem based on the integration of edge and region information. The main contours of the scene are detected and used to guide the subsequent region-growing process. The algorithm places a number of seeds at both sides of a contour, allowing a set of concurrent growing processes to be started. A previous analysis of the seeds permits the homogeneity criterion to be adjusted to each region's characteristics. A new homogeneity criterion based on clustering analysis and convex hull construction is proposed.
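A bare-bones sketch of the seeded region-growing idea can make the mechanism concrete. This is not the authors' algorithm (which also exploits edge information and a clustering/convex-hull homogeneity criterion); it is a minimal illustration that grows a region over 4-connected pixels whose value stays within a tolerance of the running region mean.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed` over 4-connected pixels whose value
    stays within `tol` of the running region mean."""
    h, w = len(image), len(image[0])
    region = {seed}
    total = image[seed[0]][seed[1]]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                mean = total / len(region)  # current homogeneity reference
                if abs(image[nr][nc] - mean) <= tol:
                    region.add((nr, nc))
                    total += image[nr][nc]
                    queue.append((nr, nc))
    return region
```

In the paper's scheme, several such processes run concurrently from seeds placed on both sides of each detected contour, so the contours bound where the regions can grow.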

Relevance: 30.00%

Publisher:

Abstract:

A new approach to mammographic mass detection is presented in this paper. Although different algorithms have been proposed for this task, most of them are application dependent. In contrast, our approach adapts a kindred topic in computer vision to our particular problem: we translate the eigenfaces approach for face detection/classification to mass detection. Two different databases were used to show the robustness of the approach. The first consisted of a set of 160 regions of interest (RoIs) extracted from the MIAS database, 40 of them containing confirmed masses and the rest normal tissue. The second set of RoIs was extracted from the DDSM database and contained 196 RoIs with masses and 392 with normal but suspicious regions. Initial results demonstrate the feasibility of the approach, with performance comparable to other algorithms and the advantage of being more general, simple and cost-effective.
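At the core of an eigenfaces-style method is principal component analysis of flattened image patches: the "eigen-masses" are the leading eigenvectors of the patch covariance matrix. A minimal stdlib-only sketch of the leading component via power iteration (illustrative only; a real system would use an optimised linear-algebra library and retain several components):

```python
import math

def principal_component(data, iters=200):
    """Leading eigenvector of the sample covariance via power iteration.

    `data` is a list of equal-length feature vectors (flattened RoIs).
    Returns the sample mean and the unit leading eigenvector.
    """
    d = len(data[0])
    mean = [sum(v[i] for v in data) / len(data) for i in range(d)]
    centred = [[v[i] - mean[i] for i in range(d)] for v in data]
    vec = [1.0] * d
    for _ in range(iters):
        # Multiply by the (unnormalised) covariance: X^T (X v).
        proj = [sum(c[i] * vec[i] for i in range(d)) for c in centred]
        vec = [sum(p * c[i] for p, c in zip(proj, centred)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in vec))
        vec = [x / norm for x in vec]
    return mean, vec
```

Projecting a candidate RoI onto the retained eigenvectors gives the low-dimensional coordinates used for mass/normal classification.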

Relevance: 30.00%

Publisher:

Abstract:

We present a computer vision system that combines omnidirectional vision with structured light with the aim of obtaining depth information for a 360-degree field of view. The approach proposed in this article combines an omnidirectional camera with a panoramic laser projector. The article shows how the sensor is modelled, and its accuracy is demonstrated by means of experimental results. The proposed sensor provides useful information for robot navigation, pipe inspection, 3D scene modelling, etc.

Relevance: 30.00%

Publisher:

Abstract:

Coded structured light is an optical technique, based on active stereovision, that obtains the shape of objects. One-shot techniques project a unique light pattern with an LCD projector so that, by grabbing a single image with a camera, a large number of correspondences can be obtained. A 3D reconstruction of the illuminated object can then be recovered by means of triangulation. The most widely used strategy to encode one-shot patterns is based on De Bruijn sequences. In this work a new way to design patterns using this type of sequence is presented. The new coding strategy minimises the number of required colours and maximises both the resolution and the accuracy.
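A De Bruijn sequence B(k, n) over k symbols (e.g. k projected colours) contains every length-n word exactly once as a cyclic substring, which is what makes each local window of the pattern uniquely identifiable. For illustration only (this is the classical FKM/Lyndon-word generator, not the paper's new coding strategy):

```python
def de_bruijn(k, n):
    """De Bruijn sequence B(k, n): every length-n word over the alphabet
    {0, ..., k-1} appears exactly once as a cyclic substring."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq
```

Decoding a one-shot pattern then amounts to reading any n consecutive stripe colours and looking up their unique position in the sequence.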

Relevance: 30.00%

Publisher:

Abstract:

This research studies, from an internal view based on the Competency-Based Perspective (CBP), the key organizational competencies developed by small new businesses. CBP is chosen in an attempt to explain the differences that distinguish companies that closed from those that consolidated. The main contribution of this paper is the definition of a set of key organizational competencies for new ventures in service and low-technology sectors. Using the classification proposed by [1] and a review of the entrepreneurship literature, the main competencies were defined and classified as managerial, input-based, transformation-based, and output-based competencies. The proposed model for evaluating the organizational competence of new ventures is tested by means of structural equation modelling.

Relevance: 30.00%

Publisher:

Abstract:

The sociocultural changes that led to the genesis of the Romance languages widened the gap between oral and written patterns, which display different discursive and linguistic devices. In early documents, the discursive implicatures connecting propositions were generally not codified, so that the reader had to furnish the correct interpretation according to his own perception of real facts; this can still be attested in current oral utterances. Once the Romance languages had undergone several levelling processes, which concluded in the first standardizations, implicatures became explicatures and were syntactically codified by means of univocal new complex conjunctions. As a consequence of the emergence of these new subordination strategies, a freer distribution of the information conveyed by utterances became possible. The success of complex structural patterns ran alongside the genesis of new narrative genres and the generalization of a learned rhetoric. Both facts are a spontaneous effect of new approaches to the act of reading: ancient texts were written to be read aloud to a wide audience, whereas those printed by the end of the 15th century were conceived to be read quietly, in a low voice, by a private reader. The goal of this paper is twofold, since we will show that: a) the development of new complex conjunctions through the history of the Romance languages accommodates to four structural patterns that range from parataxis to hypotaxis; b) this development is a reflex of the well-known grammaticalization path from discourse to syntax that implies the codification of discursive strategies (Givón 1979, Sperber and Wilson 1986, Carston 1988, Grice 1989, Bach 1994, Blakemore 2002, among others).