963 results for Compositional modulation
Abstract:
Résumé: Emotion and cognition are two terms generally used to designate mental processes of opposite natures. Cognitive science has thus long strived to set aside the "hot" component of the "cool" processes it targeted, except to show the devastating effect of the former on the latter. Yet cognitive processes (of collecting, maintaining and using information) and emotional processes (of subjective, physiological and behavioral activation toward what is attractive or aversive) are inseparable. Through a neuro-ethological approach, at the interface between the biological substrate and its behavioral expression, we studied an essential cognitive function, memory, classically expressed in rodents through spatial orientation. At the substrate level, McDonald and White (1993) demonstrated the dissociation of three memory systems, with the hippocampus, neostriatum and amygdala encoding episodic, procedural and emotional information, respectively. We examined the interaction between these systems as a function of the emotional dimension, as revealed by behavior. The emotional state of the animal depends on several factors, which we attempted to control indirectly by comparing their effects on acquisition, under various conditions, of the Morris task (which requires locating the position of a submerged platform in a pool), as well as on the exploration style of various arenas, open or closed, more or less structured by transparent Plexiglas tunnels. We first explored the role of a component of the adrenergic system in the response to difficulty and stress, using mice knocked out for the alpha-1B noradrenaline receptor in a protocol with 1 or 4 start points in a partitioned pool. We then examined, in rats, the effects of intermittent reinforcement under different experimental conditions. Under these conditions, we also attempted to analyze how the location of the goal within a given landscape could interfere with the effects of certain forms of stress. Finally, we investigated the consequences of past disturbances, including partial reinforcement, on the organization of movements on dry ground. Our results show that control mice, whose orientation relies on the hippocampus, need to be able to vary their trajectories, which would favor the construction of a cognitive map. Alpha-1B KO mice prove more sensitive to stress and able to benefit from the route condition, which allows simple, automated responses underpinned by striatal activity. In rats trained in a 100% reinforced pool, orientation appears hippocampus-based, then taken over by the striatum for the development of systematic and fast approaches, with efficient reorientation to a new platform position through hippocampus-dependent reactivation. At 50% reinforcement, an effect of the session schedule is observed, transiently attenuated by motivation. When trials follow one another without intrasession pauses, latencies decrease steadily, suggesting a possible takeover by striatum-dependent S-R routines.
The organization of exploratory movements appears to depend on the level of insecurity, with various intermediate profiles between maximal differentiation and thigmotaxis, which can be related to different levels of hippocampal efficiency. Our work thus encourages taking the emotional dimension into account as a modulator of information processing, both during exploration of the environment and during exploitation of spatial knowledge. Abstract: Emotion and cognition are terms widely used to refer to opposite mental processes. Hence, cognitive science research has for a long time pushed "hot" components away from the "cool" processes it targeted, except to assess the devastating effects of the former upon the latter. However, cognitive processes (of information collection, preservation, and utilization) and emotional processes (of subjective, physiological, and behavioral activation due to attraction or aversion) are inseparable. At the crossing between biological substrate and behavioral expression, we studied a chief cognitive function, memory, classically assessed in animals through spatial orientation. At the substrate level, McDonald and White (1993) showed a dissociation between three memory systems, with the hippocampus, neostriatum, and amygdala encoding episodic, habit, and emotional information, respectively. Through the behavior of laboratory rodents, we targeted the interaction between those systems along the emotional axis. The emotional state of an animal depends on different factors, which we tried to control indirectly by comparing their effects on acquisition, in a variety of conditions, of the Morris task (in which the location of a hidden platform in a pool must be learned), as well as on the exploration profile in different apparatuses, open fields and closed mazes, more or less structured by clear Plexiglas tunnels. We first tracked the role, under more or less difficult and stressful conditions, of an adrenergic component, using knock-out mice for the alpha-1B receptor in a partitioned water maze with 1 or 4 start positions. With rats, we examined the consequences of partial reinforcement in the water maze under different experimental conditions. In those conditions, we further analyzed how the position of the goal in the landscape could interfere with the effect of a given stress. Finally, we conducted experiments on solid ground, in an open field and in radial mazes, in order to analyze the organization of spatial behavior following an aversive life event, such as partial reinforcement training in the water maze. Our results emphasize the need for normal mice to be able to vary approach trajectories. One of our leading hypotheses is that such strategies are hippocampus-dependent and best support the development of a cognitive-map-like representation. Alpha-1B KO mice appear more sensitive to stress and able to take advantage of the route condition allowing simple, automated responses, most likely striatum-based. With rats in the 100% reinforced water maze, the orientation strategy is predominantly hippocampus-dependent (as illustrated by the impairment induced by lesions of this structure) and becomes progressively striatum-dependent through the development of systematic, fast, successful approaches. Training toward a new platform position again requires a hippocampus-based strategy.
With a 50% reinforcement rate, we found a clear impairment related to intersession disruption, an effect transiently minimized by motivational enhancement (cold water). When trials are given without intrasession interruption, latencies decrease consistently, suggesting that striatum-dependent stimulus-response routines may take over. The organization of exploratory movements is shown to depend on the level of subjective security, with different intermediate profiles between maximal differentiation and thigmotaxis, which can be considered in parallel with different levels of efficiency of hippocampus-dependent strategies. Thus, our work encourages the consideration of emotion as a modulator of cognitive processing, during spatial exploration as well as spatial learning. It leads to a model in which the predominance of hippocampus-based exploration is challenged by training conditions of various natures.
Abstract:
In an earlier investigation (Burger et al., 2000) five sediment cores near the Rodrigues Triple Junction in the Indian Ocean were studied applying classical statistical methods (fuzzy c-means clustering, linear mixing model, principal component analysis) for the extraction of endmembers and evaluating the spatial and temporal variation of geochemical signals. Three main factors of sedimentation were expected by the marine geologists: a volcano-genetic, a hydro-hydrothermal and an ultra-basic factor. The display of fuzzy membership values and/or factor scores versus depth provided consistent results for two factors only; the ultra-basic component could not be identified. The reason for this may be that only traditional statistical methods were applied, i.e. the untransformed components were used and the cosine-theta coefficient as similarity measure. During the last decade considerable progress in compositional data analysis was made and many case studies were published using new tools for exploratory analysis of these data. Therefore it makes sense to check if the application of suitable data transformations, reduction of the D-part simplex to two or three factors and visual interpretation of the factor scores would lead to a revision of earlier results and to answers to open questions. In this paper we follow the lines of a paper of R. Tolosana-Delgado et al. (2005), starting with a problem-oriented interpretation of the biplot scattergram, extracting compositional factors, ilr-transformation of the components and visualization of the factor scores in a spatial context: the compositional factors will be plotted versus depth (time) of the core samples in order to facilitate the identification of the expected sources of the sedimentary process. Key words: compositional data analysis, biplot, deep sea sediments
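As a compact illustration of that pipeline (transform the parts, reduce to a few factors, plot factor scores against core depth), the following sketch works on invented data; the composition, the depth vector and the choice of ilr basis are assumptions for illustration only and are not taken from the Burger et al. (2000) cores.

```python
import numpy as np

def closure(x):
    """Rescale each row so its parts sum to 1 (the composition itself is unchanged)."""
    x = np.asarray(x, dtype=float)
    return x / x.sum(axis=1, keepdims=True)

def clr(x):
    """Centered logratio transform: log of each part minus the row-wise mean log."""
    lx = np.log(closure(x))
    return lx - lx.mean(axis=1, keepdims=True)

def ilr(x):
    """Isometric logratio transform via a Helmert-type orthonormal basis (D parts -> D-1 coords)."""
    D = x.shape[1]
    V = np.zeros((D - 1, D))
    for i in range(D - 1):
        V[i, : i + 1] = 1.0 / (i + 1)
        V[i, i + 1] = -1.0
        V[i] *= np.sqrt((i + 1) / (i + 2))
    return clr(x) @ V.T

# Hypothetical 4-part geochemical composition measured down a sediment core.
rng = np.random.default_rng(0)
depth = np.linspace(0, 10, 50)            # metres below sea floor (invented)
raw = np.exp(rng.normal(size=(50, 4)))    # strictly positive parts
Z = ilr(raw)                              # ilr coordinates, full rank

# "Compositional factors": principal components of the ilr coordinates.
Zc = Z - Z.mean(axis=0)
U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
scores = U * s                            # factor scores, one row per sample

# The first factor can then be plotted versus depth to look for down-core trends,
# e.g. plt.plot(scores[:, 0], depth) with the depth axis inverted.
```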
Abstract:
The amalgamation operation is frequently used to reduce the number of parts of compositional data, but it is a non-linear operation in the simplex with the usual geometry, the Aitchison geometry. The concept of balances between groups, a particular coordinate system designed over binary partitions of the parts, could be an alternative to the amalgamation in some cases. In this work we discuss the proper application of both concepts using a real data set corresponding to behavioral measures of pregnant sows.
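The contrast between the two operations can be made concrete with a toy sketch; the four-part composition and the binary grouping below are invented, not the sow behaviour data used in the paper.

```python
import numpy as np

# A hypothetical 4-part composition, with parts grouped as {x1, x2} vs {x3, x4}.
x = np.array([0.40, 0.25, 0.20, 0.15])
g1, g2 = x[:2], x[2:]

# Amalgamation: replace each group by the sum of its parts.
# This is linear on the raw parts, but NOT linear in the Aitchison geometry.
amalgamated = np.array([g1.sum(), g2.sum()])          # -> [0.65, 0.35]

# Balance between the two groups: sqrt(r*s/(r+s)) * log(gm(group1)/gm(group2)),
# where r, s are the group sizes and gm is the geometric mean.
r, s = len(g1), len(g2)
gm1, gm2 = np.exp(np.log(g1).mean()), np.exp(np.log(g2).mean())
balance = np.sqrt(r * s / (r + s)) * np.log(gm1 / gm2)

print(amalgamated, balance)
```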
Abstract:
The self-organizing map (Kohonen 1997) is a type of artificial neural network developed to explore patterns in high-dimensional multivariate data. The conventional version of the algorithm involves the use of the Euclidean metric in the process of adaptation of the model vectors, thus rendering, in theory, the whole methodology incompatible with non-Euclidean geometries. In this contribution we explore the two main aspects of the problem:
1. Whether the conventional approach using the Euclidean metric can yield valid results with compositional data.
2. Whether a modification of the conventional approach replacing vectorial sum and scalar multiplication by the canonical operators in the simplex (i.e. perturbation and powering) can converge to an adequate solution.
Preliminary tests showed that both methodologies can be used on compositional data. However, the modified version of the algorithm performs worse than the conventional version, in particular when the data is pathological. Moreover, the conventional approach converges faster to a solution when the data is "well-behaved". Key words: Self Organizing Map; Artificial Neural networks; Compositional data
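A minimal sketch of the two canonical simplex operators mentioned in point 2, together with a schematic SOM-style update written in terms of them; the update rule and learning rate are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def closure(x):
    """Rescale a positive vector so its parts sum to 1."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def perturb(x, y):
    """Perturbation (the simplex analogue of vector addition): component-wise product, reclosed."""
    return closure(np.asarray(x, dtype=float) * np.asarray(y, dtype=float))

def power(x, alpha):
    """Powering (the simplex analogue of scalar multiplication): component-wise power, reclosed."""
    return closure(np.asarray(x, dtype=float) ** alpha)

def simplex_update(m, z, lr=0.1):
    """Move a model vector m a fraction lr of the way toward a data composition z,
    using perturbation/powering instead of vector sum and scalar multiplication."""
    diff = perturb(z, power(m, -1.0))   # z "minus" m in the Aitchison sense
    return perturb(m, power(diff, lr))

m = closure([0.2, 0.3, 0.5])
z = closure([0.5, 0.3, 0.2])
print(simplex_update(m, z))
```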
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made where the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
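One such parametrization can be written down explicitly; as a hedged example (the exact formula used in the presentation may differ), a Box-Cox-type power transformation with exponent α interpolates between an analysis of the proportions themselves and a logratio analysis, since

```latex
\lim_{\alpha \to 0} \frac{x^{\alpha} - 1}{\alpha} = \log x .
```

Letting α decrease in small steps therefore yields exactly the kind of frame-by-frame "movie" described, with the display moving smoothly from a map of the raw proportions toward the logratio (spectral-map) solution.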
Abstract:
In this paper we examine the problem of compositional data from a different starting point. Chemical compositional data, as used in provenance studies on archaeological materials, will be approached from the standpoint of measurement theory. The results will show, in a very intuitive way, that chemical data can only be treated using the approach developed for compositional data. It will be shown that compositional data analysis is a particular case in projective geometry, when the projective coordinates are in the positive orthant and have the properties of logarithmic interval metrics. Moreover, it will be shown that this approach can be extended to a very large number of applications, including shape analysis. This will be exemplified with a case study on the architecture of Early Christian churches dating back to the 5th-7th centuries AD.
Abstract:
Factor analysis, as a frequent technique for multivariate data inspection, is widely used also for compositional data analysis. The usual way is to use a centered logratio (clr) transformation to obtain the random vector y of dimension D. The factor model is then

y = Λf + e (1)

with the factors f of dimension k < D, the error term e, and the loadings matrix Λ. Using the usual model assumptions (see, e.g., Basilevsky, 1994), the factor analysis model (1) can be written as

Cov(y) = ΛΛ^T + ψ (2)

where ψ = Cov(e) has a diagonal form. The diagonal elements of ψ as well as the loadings matrix Λ are estimated from an estimate of Cov(y). Consider observed clr-transformed data Y as realizations of the random vector y. Outliers or deviations from the idealized model assumptions of factor analysis can severely affect the parameter estimation. As a way out, robust estimation of the covariance matrix of Y will lead to robust estimates of Λ and ψ in (2), see Pison et al. (2003). Well-known robust covariance estimators with good statistical properties, like the MCD or the S-estimators (see, e.g., Maronna et al., 2006), rely on a full-rank data matrix Y, which is not the case for clr-transformed data (see, e.g., Aitchison, 1986). The isometric logratio (ilr) transformation (Egozcue et al., 2003) solves this singularity problem. The data matrix Y is transformed to a matrix Z by using an orthonormal basis of lower dimension. Using the ilr-transformed data, a robust covariance matrix C(Z) can be estimated. The result can be back-transformed to the clr space by

C(Y) = V C(Z) V^T

where the matrix V with orthonormal columns comes from the relation between the clr and the ilr transformation. Now the parameters in model (2) can be estimated (Basilevsky, 1994) and the results have a direct interpretation since the links to the original variables are still preserved. The above procedure will be applied to data from geochemistry. Our special interest is in comparing the results with those of Reimann et al. (2002) for the Kola project data.
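A short sketch of the estimation route just described, assuming the MCD estimator as provided by scikit-learn's MinCovDet; the data are simulated and the ilr basis V is one convenient orthonormal choice rather than the basis used in the paper.

```python
import numpy as np
from sklearn.covariance import MinCovDet

def ilr_basis(D):
    """One orthonormal basis V (D x (D-1)) of the clr hyperplane, so that Z = clr(X) @ V."""
    V = np.zeros((D, D - 1))
    for j in range(D - 1):
        V[: j + 1, j] = 1.0 / (j + 1)
        V[j + 1, j] = -1.0
        V[:, j] *= np.sqrt((j + 1) / (j + 2))
    return V

def clr(x):
    """Centered logratio transform of a matrix of positive parts."""
    lx = np.log(x / x.sum(axis=1, keepdims=True))
    return lx - lx.mean(axis=1, keepdims=True)

# Simulated positive data standing in for the geochemical compositions.
rng = np.random.default_rng(1)
X = np.exp(rng.normal(size=(200, 5)))

D = X.shape[1]
V = ilr_basis(D)
Z = clr(X) @ V                      # ilr coordinates: full rank, so MCD is applicable

# Robust covariance in ilr coordinates, then back-transformed to the clr space:
#   C(Y) = V C(Z) V^T
C_Z = MinCovDet(random_state=0).fit(Z).covariance_
C_Y = V @ C_Z @ V.T                 # singular, but interpretable part-wise

# C_Y can now be fed into the factor model Cov(y) = ΛΛ^T + ψ of equation (2).
print(C_Y.shape)
```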
Abstract:
We take stock of the present position of compositional data analysis, of what has been achieved in the last 20 years, and then make suggestions as to what may be sensible avenues of future research. We take an uncompromisingly applied mathematical view, that the challenge of solving practical problems should motivate our theoretical research, and that any new theory should be thoroughly investigated to see if it may provide answers to previously abandoned practical considerations. Indeed a main theme of this lecture will be to demonstrate this applied mathematical approach by a number of challenging examples.
Abstract:
Traditionally, compositional data have been identified with closed data, and the simplex has been considered as the natural sample space for this kind of data. In our opinion, the emphasis on the constrained nature of compositional data has contributed to masking their real nature. More crucial than the constraining property of compositional data is their scale-invariance property. Indeed, when we are considering only a few parts of a full composition we are not working with constrained data, but our data are still compositional. We believe that it is necessary to give a more precise definition of composition. This is the aim of this oral contribution.
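For reference, the scale-invariance property appealed to here has a standard compact statement (this is the usual textbook formulation, not a quotation from the contribution):

```latex
\mathbf{x} \sim \mathbf{y} \iff \mathbf{y} = \lambda\,\mathbf{x} \ \text{for some } \lambda > 0,
\qquad
f \ \text{is compositionally meaningful} \iff f(\lambda\,\mathbf{x}) = f(\mathbf{x}) \ \ \forall\,\lambda > 0 .
```

On this view a composition is the equivalence class of all positive rescalings of a vector of parts; the closed, constant-sum vector in the simplex is only one convenient representative, which is why a handful of unclosed parts is still compositional data.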
Abstract:
The biplot has proved to be a powerful descriptive and analytical tool in many areas of application of statistics. For compositional data the necessary theoretical adaptation has been provided, with illustrative applications, by Aitchison (1990) and Aitchison and Greenacre (2002). These papers were restricted to the interpretation of simple compositional data sets. In many situations the problem has to be described in some form of conditional modelling. For example, in a clinical trial where interest is in how patients' steroid metabolite compositions may change as a result of different treatment regimes, interest is in relating the compositions after treatment to the compositions before treatment and the nature of the treatments applied. To study this through a biplot technique requires the development of some form of conditional compositional biplot. This is the purpose of this paper. We choose as a motivating application an analysis of the 1992 US Presidential Election, where interest may be in how the three-part composition, the percentage division among the three candidates - Bush, Clinton and Perot - of the presidential vote in each state, depends on the ethnic composition and on the urban-rural composition of the state. The methodology of conditional compositional biplots is first developed and a detailed interpretation of the 1992 US Presidential Election provided. We use a second application involving the conditional variability of tektite mineral compositions with respect to major oxide compositions to demonstrate some hazards of simplistic interpretation of biplots. Finally we conjecture on further possible applications of conditional compositional biplots.
Abstract:
The application of compositional data analysis through log ratio transformations corresponds to a multinomial logit model for the shares themselves. This model is characterized by the property of Independence of Irrelevant Alternatives (IIA). IIA states that the odds ratio, in this case the ratio of shares, is invariant to the addition or deletion of outcomes to the problem. It is exactly this invariance of the ratio that underlies the commonly used zero replacement procedure in compositional data analysis. In this paper we investigate using the nested logit model, which does not embody IIA, and an associated zero replacement procedure, and compare its performance with that of the more usual approach of using the multinomial logit model. Our comparisons exploit a data set that combines voting data by electoral division with corresponding census data for each division for the 2001 Federal election in Australia.
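The IIA property can be read directly off the form of the multinomial logit shares; the notation below is generic rather than taken from the paper:

```latex
s_i = \frac{\exp(\eta_i)}{\sum_{k=1}^{K} \exp(\eta_k)}
\qquad\Longrightarrow\qquad
\frac{s_i}{s_j} = \exp(\eta_i - \eta_j).
```

The ratio of any two shares depends only on their own linear predictors, so adding or deleting a further alternative leaves it unchanged, which is the invariance the usual zero-replacement procedure relies on; the nested logit relaxes this by allowing correlated error components within nests, so that IIA holds within but not across nests.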
Abstract:
This analysis was stimulated by the real data analysis problem of household expenditure data. The full dataset contains expenditure data for a sample of 1224 households. The expenditure is broken down at 2 hierarchical levels: 9 major levels (e.g. housing, food, utilities etc.) and 92 minor levels. There are also 5 factors and 5 covariates at the household level. Not surprisingly, there are a small number of zeros at the major level, but many zeros at the minor level. The question is how best to model the zeros. Clearly, models that try to add a small amount to the zero terms are not appropriate in general, as at least some of the zeros are clearly structural, e.g. alcohol/tobacco for households that are teetotal. The key question then is how to build suitable conditional models. For example, is the sub-composition of spending excluding alcohol/tobacco similar for teetotal and non-teetotal households? In other words, we are looking for sub-compositional independence. Also, what determines whether a household is teetotal? Can we assume that it is independent of the composition? In general, whether a household is teetotal will clearly depend on the household level variables, so we need to be able to model this dependence. The other tricky question is that with zeros on more than one component, we need to be able to model dependence and independence of zeros on the different components. Lastly, while some zeros are structural, others may not be; for example, for expenditure on durables, it may be chance as to whether a particular household spends money on durables within the sample period. This would clearly be distinguishable if we had longitudinal data, but may still be distinguishable by looking at the distribution, on the assumption that random zeros will usually occur in situations where any non-zero expenditure is not small. While this analysis is based around economic data, the ideas carry over to many other situations, including geological data, where minerals may be missing for structural reasons (similar to alcohol), or missing because they occur only in random regions which may be missed in a sample (similar to the durables).
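As a toy sketch of the subcompositional question posed above (the categories and shares are invented, not the survey data), forming the subcomposition that excludes alcohol/tobacco amounts to dropping that part and reclosing what remains:

```python
import numpy as np

# Hypothetical 4-part expenditure shares: housing, food, alcohol/tobacco, other.
parts = ["housing", "food", "alcohol_tobacco", "other"]
household = np.array([0.45, 0.25, 0.05, 0.25])

def subcomposition(x, keep):
    """Select a subset of parts and reclose them to sum to 1."""
    sub = x[keep]
    return sub / sub.sum()

keep = [i for i, p in enumerate(parts) if p != "alcohol_tobacco"]
print(subcomposition(household, keep))   # shares among the remaining parts only

# Subcompositional independence would mean this reclosed vector has the same
# distribution for teetotal households (structural zero on alcohol_tobacco)
# as for non-teetotal households, which can then be compared group against group.
```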
Abstract:
Examples of compositional data. The simplex, a suitable sample space for compositional data and Aitchison's geometry. R, a free language and environment for statistical computing and graphics
Abstract:
Compositional data naturally arises from the scientific analysis of the chemical composition of archaeological material such as ceramic and glass artefacts. Data of this type can be explored using a variety of techniques, from standard multivariate methods such as principal components analysis and cluster analysis, to methods based upon the use of log-ratios. The general aim is to identify groups of chemically similar artefacts that could potentially be used to answer questions of provenance. This paper will demonstrate work in progress on the development of a documented library of methods, implemented using the statistical package R, for the analysis of compositional data. R is an open source package that makes available very powerful statistical facilities at no cost. We aim to show how, with the aid of statistical software such as R, traditional exploratory multivariate analysis can easily be used alongside, or in combination with, specialist techniques of compositional data analysis. The library has been developed from a core of basic R functionality, together with purpose-written routines arising from our own research (for example that reported at CoDaWork'03). In addition, we have included other appropriate publicly available techniques and libraries that have been implemented in R by other authors. Available functions range from standard multivariate techniques through to various approaches to log-ratio analysis and zero replacement. We also discuss and demonstrate a small selection of relatively new techniques that have hitherto been little used in archaeometric applications involving compositional data. The application of the library to the analysis of data arising in archaeometry will be demonstrated; results from different analyses will be compared; and the utility of the various methods discussed.
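One widely used zero-replacement routine of the kind such a library covers, multiplicative simple replacement, is easy to state in a few lines; this is a generic sketch with an invented detection-limit value, not a transcription of any function in the library.

```python
import numpy as np

def multiplicative_replacement(x, delta):
    """Replace zeros in a closed composition by a small value delta and shrink
    the non-zero parts multiplicatively so each row still sums to 1."""
    x = np.asarray(x, dtype=float)
    out = x.copy()
    for i, row in enumerate(x):
        z = row == 0
        if z.any():
            out[i, z] = delta
            out[i, ~z] = row[~z] * (1.0 - z.sum() * delta)
    return out

# Hypothetical 3-part ceramic compositions with one value below detection limit.
X = np.array([[0.60, 0.40, 0.00],
              [0.50, 0.30, 0.20]])
Xr = multiplicative_replacement(X, delta=0.005)
print(Xr, Xr.sum(axis=1))   # rows still sum to 1; ratios among non-zero parts are preserved
```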
Abstract:
"compositions" is a new R package for the analysis of compositional and positive data. It contains four classes corresponding to the four different types of compositional and positive geometry (including the Aitchison geometry). It provides means for computation, plotting and high-level multivariate statistical analysis in all four geometries. These geometries are treated in a fully analogous way, based on the principle of working in coordinates and the object-oriented programming paradigm of R. In this way, called functions automatically select the most appropriate type of analysis as a function of the geometry. The graphical capabilities include ternary diagrams and tetrahedrons, various compositional plots (boxplots, barplots, piecharts) and extensive graphical tools for principal components. Afterwards, portion and proportion lines, straight lines and ellipses in all geometries can be added to plots. The package is accompanied by a hands-on introduction, documentation for every function, demos of the graphical capabilities and plenty of usage examples. It allows direct and parallel computation in all four vector spaces and provides the beginner with a copy-and-paste style of data analysis, while letting advanced users keep the functionality and customizability they demand of R, as well as all necessary tools to add their own analysis routines. A complete example is included in the appendix.