934 results for Multivariate data
Abstract:
Data associated with germplasm collections are typically large and multivariate, with a considerable number of descriptors measured on each of many accessions. Pattern analysis methods of clustering and ordination have been identified as techniques for statistically evaluating the available diversity in germplasm data. While used in many studies, these approaches have not dealt explicitly with the computational consequences of large data sets (i.e. greater than 5000 accessions). To consider the application of these techniques to germplasm evaluation data, 11,328 accessions of groundnut (Arachis hypogaea L.) from the International Crops Research Institute for the Semi-Arid Tropics, Andhra Pradesh, India were examined. Data for nine quantitative descriptors measured in the rainy and post-rainy growing seasons were used. The ordination technique of principal component analysis was used to reduce the dimensionality of the germplasm data. Identifying phenotypically similar groups of accessions within such large-scale data via computationally intensive hierarchical clustering techniques was not feasible, so non-hierarchical techniques had to be used. Finite mixture models that maximise the likelihood of an accession belonging to a cluster were used to cluster the accessions in this collection. The patterns of response for the different growing seasons were found to be highly correlated. However, when the results were related to passport and other characterisation and evaluation descriptors, the observed patterns did not appear to be related to taxonomy or any other well-known characteristics of groundnut.
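A minimal sketch of the general workflow this abstract describes (principal component analysis for dimension reduction followed by finite-mixture clustering), assuming scikit-learn and synthetic data in place of the actual germplasm descriptors; the number of components and clusters below are illustrative choices, not the study's values.

```python
# Sketch: PCA for dimensionality reduction followed by finite (Gaussian) mixture-
# model clustering, as in the workflow described above. Synthetic data stand in
# for the real accessions x descriptors matrix.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(11328, 9))             # 11,328 accessions x 9 quantitative descriptors (synthetic)

X_std = StandardScaler().fit_transform(X)   # descriptors on different scales -> standardise
scores = PCA(n_components=4).fit_transform(X_std)   # ordination: reduce dimensionality

# Non-hierarchical clustering: a mixture model maximising the likelihood of each
# accession's cluster membership (hierarchical clustering scales poorly for n > 5000).
gmm = GaussianMixture(n_components=6, covariance_type="full", random_state=0).fit(scores)
labels = gmm.predict(scores)                # hard cluster assignment per accession
posteriors = gmm.predict_proba(scores)      # membership probabilities per accession
```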
Abstract:
Many active pharmaceutical ingredients (APIs) have both anhydrate and hydrate forms. Because the physicochemical properties of solid forms differ, changes in the solid state may result in therapeutic, pharmaceutical, legal and commercial problems. In order to obtain good solid dosage form quality and performance, there is a constant need to understand and control these phase transitions during manufacturing and storage. Thus, it is important to detect and also quantify the possible transitions between the different forms. In recent years, vibrational spectroscopy has become an increasingly popular tool for characterising solid-state forms and their phase transitions. It offers several advantages over other characterisation techniques, including the ability to obtain molecular-level information, minimal sample preparation, and the possibility of monitoring changes non-destructively in-line. Dehydration is a phase transition of hydrates that is frequently encountered during dosage form production and storage. The aim of the present thesis was to investigate the dehydration behaviour of diverse pharmaceutical hydrates by near infrared (NIR), Raman and terahertz pulsed spectroscopy (TPS) monitoring together with multivariate data analysis. The goal was to reveal new perspectives for investigating dehydration at the molecular level. Solid-state transformations were monitored during hot-stage dehydration of diverse hydrates. The results obtained from the qualitative experiments were used to develop a method for quantifying the solid-state forms during process-induced dehydration in a fluidised bed dryer. Both in situ and in-line process monitoring and quantification were performed. This thesis demonstrated the utility of vibrational spectroscopy techniques and multivariate modelling for monitoring and investigating dehydration behaviour in situ and during fluidised bed drying. All three spectroscopic methods proved complementary in the study of dehydration. NIR spectroscopy models could quantify the solid-state forms in the binary system, but were unable to quantify all the forms in the quaternary system. Raman spectroscopy models, on the other hand, could quantify all four solid-state forms that appeared upon isothermal dehydration. The speed of the spectroscopic methods makes them applicable for monitoring dehydration, and the quantification of multiple forms was performed during the phase transition; solid-state structural information at the molecular level was thus obtained directly. TPS detected the intermolecular phonon modes, whereas Raman spectroscopy detected mostly changes in intramolecular vibrations; both techniques revealed information about crystal structure changes. NIR spectroscopy, on the other hand, was more sensitive to water content and the hydrogen-bonding environment of the water molecules. This study provides a basis for real-time process monitoring using vibrational spectroscopy during pharmaceutical manufacturing.
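The thesis does not specify its calibration models in this abstract; as a generic illustration of quantifying a solid-state form from spectra with multivariate calibration, here is a minimal PLS regression sketch using scikit-learn and synthetic spectra (all data and parameters below are hypothetical).

```python
# Illustrative multivariate calibration sketch: quantifying the hydrate fraction
# in a binary anhydrate/hydrate system from spectra via PLS regression.
# Synthetic spectra; not the calibration procedure used in the thesis itself.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 40, 200
y = rng.uniform(0, 1, n_samples)                   # hydrate fraction in binary mixtures
pure_hydrate = rng.normal(size=n_wavelengths)      # stand-in pure-component spectra
pure_anhydrate = rng.normal(size=n_wavelengths)
X = np.outer(y, pure_hydrate) + np.outer(1 - y, pure_anhydrate)
X += rng.normal(scale=0.01, size=X.shape)          # measurement noise

pls = PLSRegression(n_components=3)
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()  # cross-validated predictions
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV of hydrate fraction: {rmsecv:.3f}")
```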
Abstract:
Research on assessment and monitoring methods has primarily focused on fisheries with long multivariate data sets. Less research exists on methods applicable to data-poor fisheries with univariate data sets of small sample size. In this study, we examine the capabilities of seasonal autoregressive integrated moving average (SARIMA) models to fit, forecast, and monitor the landings of such data-poor fisheries. We use a European fishery for meagre (Sciaenidae: Argyrosomus regius), for which only a short time series of landings was available to model (n = 60 months), as our case study. We show that despite the limited sample size, a SARIMA model could be found that adequately fitted and forecasted the time series of meagre landings (12-month forecasts; mean error: 3.5 tons (t); annual absolute percentage error: 15.4%). We derive model-based prediction intervals and show how they can be used to detect problematic situations in the fishery. Our results indicate that over the course of one year the meagre landings remained within the prediction limits of the model and therefore indicated no need for urgent management intervention. We discuss the information that the SARIMA model structure conveys about the meagre lifecycle and fishery, the methodological requirements of SARIMA forecasting of data-poor fisheries landings, and the potential of SARIMA models within current efforts to monitor the world's data-poorest resources.
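A minimal sketch of the workflow described above, assuming statsmodels and a synthetic 60-month landings series; the model order, dates and monitoring threshold below are illustrative assumptions, not the values identified in the paper.

```python
# Sketch: fit a seasonal ARIMA to a short monthly landings series, forecast 12
# months ahead, and use the prediction intervals as monitoring limits.
# Synthetic data; the paper's actual SARIMA order is not reproduced here.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
idx = pd.date_range("2010-01", periods=60, freq="MS")          # n = 60 months (hypothetical dates)
landings = pd.Series(20 + 5 * np.sin(2 * np.pi * np.arange(60) / 12)
                     + rng.normal(scale=2, size=60), index=idx)

# (p,d,q)(P,D,Q,s) chosen for illustration only
model = SARIMAX(landings, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
fit = model.fit(disp=False)

forecast = fit.get_forecast(steps=12)
mean = forecast.predicted_mean
ci = forecast.conf_int(alpha=0.05)        # 95% prediction intervals

# Monitoring rule: a new monthly observation outside the interval flags a
# potentially problematic situation in the fishery.
new_obs = 18.0
lower, upper = ci.iloc[0]
print("within limits" if lower <= new_obs <= upper else "outside limits")
```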
Abstract:
Gaussian factor models have proven widely useful for parsimoniously characterizing dependence in multivariate data. There is a rich literature on their extension to mixed categorical and continuous variables, using latent Gaussian variables or generalized latent trait models accommodating measurements in the exponential family. However, when generalizing to non-Gaussian measured variables, the latent variables typically influence both the dependence structure and the form of the marginal distributions, complicating interpretation and introducing artifacts. To address this problem, we propose a novel class of Bayesian Gaussian copula factor models that decouple the latent factors from the marginal distributions. A semiparametric specification for the marginals based on the extended rank likelihood yields straightforward implementation and substantial computational gains. We provide new theoretical and empirical justifications for using this likelihood in Bayesian inference. We propose new default priors for the factor loadings and develop an efficient parameter-expanded Gibbs sampler for posterior computation. The methods are evaluated through simulations and applied to a dataset in political science. The models in this paper are implemented in the R package bfa.
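The paper's full Bayesian treatment (extended rank likelihood, parameter-expanded Gibbs sampling) is implemented in the R package bfa; as a rough, non-Bayesian sketch of the underlying copula idea, one can map each margin to normal scores via its ranks, so the marginals no longer drive the fit, and then estimate an ordinary Gaussian factor model on the transformed data. This is an illustrative approximation, not the authors' method.

```python
# Rough sketch of the Gaussian copula factor idea: transform each margin to
# normal scores via ranks (decoupling marginals from dependence), then fit a
# Gaussian factor model. Not the paper's Bayesian estimator; synthetic data.
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n, p = 500, 6
latent = rng.normal(size=(n, 2)) @ rng.normal(size=(2, p)) + rng.normal(size=(n, p))
X = np.column_stack([np.exp(latent[:, :3]),               # skewed continuous margins
                     (latent[:, 3:] > 0).astype(float)])  # binary margins

# Normal-scores transform: z_ij = Phi^{-1}( rank_ij / (n + 1) )
Z = norm.ppf(rankdata(X, axis=0) / (n + 1))

fa = FactorAnalysis(n_components=2).fit(Z)
loadings = fa.components_.T    # p x k loadings on the latent Gaussian (copula) scale
```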
Abstract:
In the context of transdermal drug delivery, it is very important to have mechanistic insight into the barrier function of the skin's stratum corneum and the diffusion mechanisms of topically applied drugs. Spectroscopic imaging techniques are currently evolving that enable a spatial examination of various types of samples in a dynamic way. ATR-FTIR imaging opens up the possibility of monitoring spatial diffusion profiles across the stratum corneum of a skin sample. Multivariate data analysis methods based on factor analysis can provide insight into the large number of spectroscopically complex and highly overlapping signals generated. Multivariate target factor analysis was used for spectral resolution and to extract local diffusion profiles over time through the stratum corneum. A model drug, 4-cyanophenol, in polyethylene glycol 600 and water was studied. Results indicate that the average diffusion profiles at spatially different locations are similar, despite the heterogeneous nature of the biological sample and the challenging experimental set-up.
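A minimal sketch of the target-testing step that target factor analysis builds on: fit an abstract factor space to the image spectra by SVD, check whether a candidate (target) spectrum lies in that space, and use the projected target to derive a relative concentration profile. The data, factor count and formulas below are illustrative assumptions, not the implementation used in the study above.

```python
# Minimal target-testing sketch for target factor analysis (TFA), with synthetic
# data standing in for the ATR-FTIR image spectra.
import numpy as np

rng = np.random.default_rng(4)
n_spectra, n_wavenumbers, k = 50, 300, 3
D = rng.normal(size=(n_spectra, k)) @ rng.normal(size=(k, n_wavenumbers))  # data matrix (spectra x wavenumbers)

U, s, Vt = np.linalg.svd(D, full_matrices=False)
V_k = Vt[:k].T                               # abstract spectral factors

target = D[0] + rng.normal(scale=1e-3, size=n_wavenumbers)   # candidate permeant spectrum (stand-in)
target_hat = V_k @ (V_k.T @ target)          # projection of the target into the factor space

fit_error = np.linalg.norm(target - target_hat) / np.linalg.norm(target)
print(f"relative target fit error: {fit_error:.3f}")         # small error -> target is a real factor

# Relative concentration profile of the target across the spectra (e.g. position or time)
profile = D @ target_hat / (target_hat @ target_hat)
```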
Abstract:
The ecological sciences have experienced immense growth over the course of this century, and chances are that they will continue to grow well into the next millennium. There are some good reasons for this: ecology encompasses some of the most pressing concerns facing humanity. With recent advances in data collection technology and ambitious field research, ecologists are increasingly calling upon multivariate statistics to explore and test for patterns in their data. The goal of FISH 560 (Applied Multivariate Statistics for Ecologists) at the University of Washington is to introduce graduate students to the multivariate statistical techniques necessary to carry out sophisticated analyses and to critically evaluate scientific papers using these approaches. It is a practical, hands-on course emphasizing the analysis and interpretation of multivariate data, and it covers the majority of approaches in common use by ecologists. To celebrate the hard work of past students, I am pleased to announce the creation of the Electronic Journal of Applied Multivariate Statistics (EJAMS). Each year, students in FISH 560 are required to write a final paper consisting of a statistical analysis of their own multivariate data set. These papers are submitted to EJAMS at the end of the quarter and are peer reviewed by two other class members. A decision on publication is based on the reviewers' recommendations and my own reading of the paper. In closing, there is a need for the rapid dissemination of ecological research using multivariate statistics at the University of Washington, and EJAMS is committed to this challenge.
Abstract:
Self-organizing maps (Kohonen 1997) are a type of artificial neural network developed to explore patterns in high-dimensional multivariate data. The conventional version of the algorithm uses the Euclidean metric in the adaptation of the model vectors, which in theory renders the whole methodology incompatible with non-Euclidean geometries. In this contribution we explore the two main aspects of the problem: 1. whether the conventional approach using the Euclidean metric can yield valid results with compositional data; 2. whether a modification of the conventional approach, replacing vectorial sum and scalar multiplication by the canonical operators in the simplex (i.e. perturbation and powering), can converge to an adequate solution. Preliminary tests showed that both methodologies can be used on compositional data. However, the modified version of the algorithm performs worse than the conventional version, in particular when the data are pathological; moreover, the conventional approach converges faster to a solution when the data are "well-behaved". Key words: self-organizing map; artificial neural networks; compositional data
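A minimal sketch of the two adaptation rules being compared, for a single model vector: the conventional Euclidean update, and a variant where vector sum and scalar multiplication are replaced by perturbation and powering in the simplex. The learning rate and example compositions are arbitrary; this is an illustration of the operators, not the authors' full SOM implementation.

```python
# Two SOM adaptation rules for one model (codebook) vector: Euclidean vs. simplex
# (perturbation/powering) geometry. Neighbourhood functions are omitted for brevity.
import numpy as np

def closure(x):
    """Rescale a positive vector so its parts sum to 1 (a composition)."""
    return x / x.sum()

def perturb(x, y):
    return closure(x * y)            # Aitchison perturbation

def power(x, a):
    return closure(x ** a)           # Aitchison powering

def euclidean_update(w, x, lr):
    # conventional rule: w <- w + lr * (x - w)
    return w + lr * (x - w)

def simplex_update(w, x, lr):
    # compositional rule: w <- w (+) lr (.) (x (-) w),
    # where x (-) w is perturbation of x by the inverse of w
    diff = perturb(x, 1.0 / w)
    return perturb(w, power(diff, lr))

x = closure(np.array([0.2, 0.5, 0.3]))   # data composition
w = closure(np.array([0.4, 0.4, 0.2]))   # model vector
print(euclidean_update(w, x, 0.1))
print(simplex_update(w, x, 0.1))
```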
Abstract:
Factor analysis, a common technique for multivariate data inspection, is also widely used for compositional data analysis. The usual approach is to apply a centred logratio (clr) transformation to obtain the random vector y of dimension D. The factor model is then y = Λf + e (1), with the factors f of dimension k < D, the error term e, and the loadings matrix Λ. Under the usual model assumptions (see, e.g., Basilevsky, 1994), the factor analysis model (1) can be written as Cov(y) = ΛΛ^T + ψ (2), where ψ = Cov(e) has diagonal form. The diagonal elements of ψ as well as the loadings matrix Λ are estimated from an estimate of Cov(y). Let Y denote observed clr-transformed data, i.e. realizations of the random vector y. Outliers or deviations from the idealized model assumptions of factor analysis can severely affect the parameter estimation. As a way out, robust estimation of the covariance matrix of Y leads to robust estimates of Λ and ψ in (2); see Pison et al. (2003). Well-known robust covariance estimators with good statistical properties, such as the MCD or S-estimators (see, e.g., Maronna et al., 2006), rely on a full-rank data matrix Y, which is not the case for clr-transformed data (see, e.g., Aitchison, 1986). The isometric logratio (ilr) transformation (Egozcue et al., 2003) solves this singularity problem: the data matrix Y is transformed to a matrix Z using an orthonormal basis of lower dimension. Using the ilr-transformed data, a robust covariance matrix C(Z) can be estimated. The result can be back-transformed to the clr space by C(Y) = V C(Z) V^T, where the matrix V with orthonormal columns comes from the relation between the clr and ilr transformations. Now the parameters in model (2) can be estimated (Basilevsky, 1994) and the results have a direct interpretation, since the links to the original variables are still preserved. The above procedure will be applied to data from geochemistry; our special interest is in comparing the results with those of Reimann et al. (2002) for the Kola project data.
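A sketch of the procedure described above, assuming scikit-learn's MCD estimator and synthetic compositions in place of the geochemical data: ilr-transform the compositions, estimate a robust covariance on the ilr coordinates, back-transform it to the clr space via C(Y) = V C(Z) V^T, and extract loadings here by a simple principal-factor step (the abstract's exact factor estimation procedure is not reproduced).

```python
# ilr transform -> robust (MCD) covariance -> back-transform to clr -> loadings.
import numpy as np
from sklearn.covariance import MinCovDet

def ilr_basis(D):
    """Columns of V form an orthonormal basis of the clr hyperplane (D x (D-1))."""
    V = np.zeros((D, D - 1))
    for i in range(1, D):
        V[:i, i - 1] = 1.0 / i
        V[i, i - 1] = -1.0
        V[:, i - 1] *= np.sqrt(i / (i + 1.0))
    return V

rng = np.random.default_rng(5)
D = 5
X = rng.lognormal(size=(200, D))
X = X / X.sum(axis=1, keepdims=True)                 # compositions (rows sum to 1)

clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)
V = ilr_basis(D)
Z = clr @ V                                          # ilr coordinates, full rank

C_Z = MinCovDet(random_state=0).fit(Z).covariance_   # robust covariance in ilr space
C_Y = V @ C_Z @ V.T                                  # back-transformed: C(Y) = V C(Z) V^T

# Simple principal-factor step: loadings from the k leading eigenvectors of C(Y)
k = 2
vals, vecs = np.linalg.eigh(C_Y)
Lambda = vecs[:, -k:] * np.sqrt(vals[-k:])           # clr-scale loadings, interpretable per variable
```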
Abstract:
The bewildering complexity of cortical microcircuits at the single-cell level gives rise to surprisingly robust emergent activity patterns at the level of laminar and columnar local field potentials (LFPs) in response to targeted local stimuli. Here we report the results of our multivariate data-analytic approach, based on simultaneous multi-site recordings using micro-electrode-array chips, for investigation of the microcircuitry of rat somatosensory (barrel) cortex. We find high repeatability of stimulus-induced responses and typical spatial distributions of LFP responses to stimuli in supragranular, granular, and infragranular layers, with the last forming a particularly distinct class. Population spikes appear to travel at about 33 cm/s from granular to infragranular layers. Responses within barrel-related columns have different profiles than those in neighbouring columns to the left or to the right. Variations between slices occur, but can be minimized by strictly obeying controlled experimental protocols. Cluster analysis on normalized recordings indicates specific spatial distributions of time series reflecting the location of sources and sinks, independent of the stimulus layer. Although the precise correspondences between single-cell activity and LFPs are still far from clear, a sophisticated neuroinformatics approach in combination with multi-site LFP recordings in the standardized slice preparation is suitable for comparing normal conditions with genetically or pharmacologically altered situations based on real cortical microcircuitry.
Abstract:
Most multidimensional projection techniques rely on distance (dissimilarity) information between data instances to embed high-dimensional data into a visual space. When data are endowed with Cartesian coordinates, an extra computational effort is necessary to compute the needed distances, making multidimensional projection prohibitive in applications dealing with interactivity and massive data. The novel multidimensional projection technique proposed in this work, called Part-Linear Multidimensional Projection (PLMP), has been tailored to handle multivariate data represented in Cartesian high-dimensional spaces, requiring only distance information between pairs of representative samples. This characteristic renders PLMP faster than previous methods when processing large data sets while still being competitive in terms of precision. Moreover, knowing the range of variation for data instances in the high-dimensional space, we can make PLMP a truly streaming data projection technique, a trait absent in previous methods.
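A rough sketch of the two-step idea described above, not the authors' implementation: project a small set of representative samples with a (possibly expensive) distance-based method, then fit a linear map by least squares and apply it to the full data set. Here classical MDS and synthetic data stand in for the choices made in the paper.

```python
# Two-step projection sketch: (1) distance-based embedding of representative
# samples only, (2) least-squares linear map applied to all instances.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(6)
X = rng.normal(size=(100_000, 20))                   # full data in Cartesian coordinates (synthetic)

m = 500
sample_idx = rng.choice(len(X), size=m, replace=False)
X_s = X[sample_idx]

# Step 1: distance-based projection of the representative samples only
Y_s = MDS(n_components=2, random_state=0).fit_transform(X_s)

# Step 2: linear map Phi minimising ||X_s Phi - Y_s||_F, then applied to all points,
# so no further pairwise distances are needed for the remaining instances.
Phi, *_ = np.linalg.lstsq(X_s, Y_s, rcond=None)
Y = X @ Phi                                          # 2-D layout of the full data set
```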
Abstract:
In this thesis some multivariate spectroscopic methods for the analysis of solutions are proposed. Spectroscopy and multivariate data analysis form a powerful combination for obtaining both quantitative and qualitative information, and it is shown how spectroscopic techniques in combination with chemometric data evaluation can be used to obtain rapid, simple and efficient analytical methods. These methods, consisting of spectroscopic analysis, a high level of automation and chemometric data evaluation, can lead to analytical methods with a high analytical capacity, for which the term high-capacity analysis (HCA) is suggested. It is further shown how chemometric evaluation of the multivariate data in chromatographic analyses decreases the need for baseline separation. The thesis is based on six papers, and the chemometric tools used are experimental design, principal component analysis (PCA), soft independent modelling of class analogy (SIMCA), partial least squares regression (PLS) and parallel factor analysis (PARAFAC). The analytical techniques utilised are scanning ultraviolet-visible (UV-Vis) spectroscopy, diode array detection (DAD) used in non-column chromatographic diode array UV spectroscopy, high-performance liquid chromatography with diode array detection (HPLC-DAD) and fluorescence spectroscopy. The proposed methods are exemplified in the analysis of pharmaceutical solutions and serum proteins. In Paper I a method is proposed for determining the content and identity of the active compound in pharmaceutical solutions by means of UV-Vis spectroscopy, orthogonal signal correction and multivariate calibration with PLS and SIMCA classification. Paper II proposes a new method for the rapid determination of pharmaceutical solutions using non-column chromatographic diode array UV spectroscopy, i.e. a conventional HPLC-DAD system without any chromatographic column connected. Paper III investigates the ability of a control sample of known content and identity to diagnose and correct errors in multivariate predictions, which, together with the use of multivariate residuals, can make it possible to use the same calibration model over time. In Paper IV a method is proposed for the simultaneous determination of serum proteins with fluorescence spectroscopy and multivariate calibration. Paper V proposes a method for determining chromatographic peak purity by means of PCA of HPLC-DAD data. In Paper VI PARAFAC is applied to decompose DAD data of some partially separated peaks into pure chromatographic, spectral and concentration profiles.
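A minimal sketch of the peak-purity idea attributed to Paper V: apply PCA to the HPLC-DAD matrix recorded across a chromatographic peak (time x wavelength) and count how many components explain meaningful variance; more than one suggests co-elution. The synthetic peak, spectra and variance threshold below are illustrative assumptions, not the thesis' data or decision rules.

```python
# PCA-based peak-purity check on a synthetic two-component HPLC-DAD peak.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 80)
c1 = np.exp(-((t - 0.45) / 0.08) ** 2)                  # main compound elution profile
c2 = 0.15 * np.exp(-((t - 0.55) / 0.08) ** 2)           # co-eluting impurity
s1, s2 = rng.random(120), rng.random(120)                # stand-in UV spectra of the two species
D = np.outer(c1, s1) + np.outer(c2, s2) + rng.normal(scale=1e-3, size=(80, 120))

ratios = PCA(n_components=5).fit(D).explained_variance_ratio_
n_significant = int(np.sum(ratios > 0.01))               # crude rank estimate (illustrative threshold)
print("peak purity check:", "pure" if n_significant <= 1 else f"{n_significant} components detected")
```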
Abstract:
Biological wastewater treatment is a complex, multivariate process in which a number of physical and biological processes occur simultaneously. In this study, principal component analysis (PCA) and parallel factor analysis (PARAFAC) were used to profile and characterise Lagoon 115E, a multistage biological lagoon treatment system at Melbourne Water's Western Treatment Plant (WTP) in Melbourne, Australia. The objective was to increase our understanding of the multivariate processes taking place in the lagoon. The data used in the study span a 7-year period during which samples were collected, as often as weekly, from the ponds of Lagoon 115E and subjected to analysis. The resulting database, involving 19 chemical and physical variables, was studied using the multivariate data analysis methods PCA and PARAFAC. With these methods, alterations in the state of the wastewater due to intrinsic and extrinsic factors could be discerned, and the methods were effective in illustrating and visually representing the complex purification stages and cyclic changes occurring along the lagoon system. The two methods proved complementary, with each having its own beneficial features.
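A sketch of a PARAFAC decomposition of monitoring data arranged as a three-way array (pond x variable x time), assuming the tensorly library and synthetic data; the array sizes, rank and preprocessing are illustrative and do not reproduce the original study.

```python
# PARAFAC decomposition of a synthetic pond x variable x week tensor using tensorly.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(8)
n_ponds, n_vars, n_weeks, rank = 10, 19, 364, 3

# Synthetic low-rank tensor plus noise, standing in for the lagoon monitoring data
A = rng.random((n_ponds, rank))
B = rng.random((n_vars, rank))
C = rng.random((n_weeks, rank))
tensor = tl.cp_to_tensor((np.ones(rank), [A, B, C]))
tensor += rng.normal(scale=0.01, size=(n_ponds, n_vars, n_weeks))

cp = parafac(tl.tensor(tensor), rank=rank, n_iter_max=200)
pond_modes, variable_modes, time_modes = cp.factors
# time_modes can be inspected for cyclic (seasonal) patterns along the lagoon system,
# variable_modes for which chemical/physical variables load on each component.
```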
Abstract:
With the latest developments in computer science, multivariate data analysis methods have become increasingly popular among economists. Pattern recognition in complex economic data and empirical model construction can be more straightforward with the proper application of modern software. However, despite the appealing simplicity of some popular software packages, the interpretation of data analysis results requires strong theoretical knowledge. This book aims at combining the development of both theoretical and application-related data analysis knowledge. The text is designed for advanced-level studies and assumes acquaintance with elementary statistical terms. After a brief introduction to selected mathematical concepts, the highlighting of selected model features is followed by a practice-oriented introduction to the interpretation of SPSS outputs for the described data analysis methods. Learning data analysis is usually time-consuming and requires effort, but with tenacity the learning process can bring about a significant improvement in individual data analysis skills.
Abstract:
This thesis builds a framework for evaluating downside risk in multivariate data via a special class of risk measures (RM). The distinctive feature of the analysis is that it avoids strong assumptions about the data distribution and is oriented towards the data most critical in risk management: those with asymmetries and heavy tails. At the same time, under typical assumptions such as ellipticity of the data distribution, conformity with classical methods is shown. The constructed class of RM is a multivariate generalization of the coherent distortion RM, which possess valuable properties for a risk manager. The framework has two parts. The first part contains new computational geometry methods for high-dimensional data; the developed algorithms demonstrate the computability of the geometrical concepts used for constructing the RM, and these concepts aid visualisation and simplify interpretation of the RM. The second part develops models for applying the framework to practical problems. The spectrum of applications ranges from robust portfolio selection to broader areas, such as stochastic conic optimization with risk constraints and supervised machine learning.
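For context, a minimal sketch of the univariate building block that the thesis generalizes, an empirical coherent distortion risk measure: sort the losses, weight them by increments of a concave distortion function g, and sum. With g(u) = min(u / (1 - alpha), 1) this reproduces Expected Shortfall. The synthetic losses and the choice of g are illustrative; the thesis' multivariate, geometry-based construction is not reproduced here.

```python
# Empirical distortion risk measure: rho(L) = sum_i [g(i/n) - g((i-1)/n)] * L_(i),
# with losses sorted worst first. Heavy-tailed synthetic losses.
import numpy as np

def distortion_rm(losses, g):
    L = np.sort(losses)[::-1]          # losses sorted from worst to best
    n = len(L)
    u = np.arange(n + 1) / n
    weights = g(u[1:]) - g(u[:-1])     # increments of the distortion function
    return np.sum(weights * L)

alpha = 0.95
es_distortion = lambda u: np.minimum(u / (1 - alpha), 1.0)   # Expected Shortfall distortion

rng = np.random.default_rng(9)
losses = rng.standard_t(df=3, size=10_000)   # heavy tails: the regime the thesis targets
print(f"ES at {alpha:.0%} via distortion weights: {distortion_rm(losses, es_distortion):.3f}")
```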
Abstract:
The occurrence and levels of airborne polycyclic aromatic hydrocarbons and volatile organic compounds in selected non-industrial environments in Brisbane have been investigated as part of an integrated indoor air quality assessment program. The most abundant and most frequently encountered compounds include nonanal, decanal, texanol, phenol, 2-ethyl-1-hexanol, ethanal, naphthalene, 2,6-di-tert-butyl-4-methylphenol (BHT), salicylaldehyde, toluene, hexanal, benzaldehyde, styrene, ethylbenzene, o-, m- and p-xylenes, benzene, n-butanol, 1,2-propanediol, and n-butyl acetate. Many of the 64 compounds usually included in the European Collaborative Action method of TVOC analysis were below detection limits in the samples analysed. In order to extract the maximum amount of information from the data collected, multivariate data projection methods have been employed. The implications of the extracted information for source identification and exposure control are discussed.