12 results for excitation emission matrix – parallel factor analysis
in Aston University Research Archive
Abstract:
Growth in availability and ability of modern statistical software has resulted in greater numbers of research techniques being applied across the marketing discipline. However, with such advances come concerns that techniques may be misinterpreted by researchers. This issue is critical since misinterpretation could cause erroneous findings. This paper investigates some assumptions regarding: 1) the assessment of discriminant validity; and 2) what confirmatory factor analysis accomplishes. Examples that address these points are presented, and some procedural remedies are suggested based upon the literature. This paper is, therefore, primarily concerned with the development of measurement theory and practice. If advances in theory development are not based upon sound methodological practice, we as researchers could be basing our work upon shaky foundations.
Abstract:
The present work describes the development of a proton-induced X-ray emission (PIXE) analysis system, especially designed and built for routine quantitative multi-elemental analysis of a large number of samples. The historical and general developments of the analytical technique and the physical processes involved are discussed. The philosophy, design, constructional details and evaluation of a versatile vacuum chamber, an automatic multi-sample changer, an on-demand beam pulsing system and an ion beam current monitoring facility are described. The system calibration using thin standard foils of Si, P, S, Cl, K, Ca, Ti, V, Fe, Cu, Ga, Ge, Rb, Y and Mo was undertaken at proton beam energies of 1 to 3 MeV in steps of 0.5 MeV and compared with theoretical calculations. An independent calibration check using bovine liver Standard Reference Material was performed. The minimum detectable limits were determined experimentally at detector positions of 90° and 135° with respect to the incident beam for the above range of proton energies, as a function of atomic number Z. The system has detection limits of typically 10⁻⁷ to 10⁻⁹ g for elements 14
Abstract:
Experiments combining different groups or factors are a powerful method of investigation in applied microbiology. ANOVA enables not only the effects of individual factors to be estimated but also their interactions: information that cannot be obtained readily when factors are investigated separately. In addition, combining different treatments or factors in a single experiment is more efficient and often reduces the number of replications required to estimate treatment effects adequately. Because of the treatment combinations used in a factorial experiment, the degrees of freedom (DF) of the error term in the ANOVA are a more important indicator of the ‘power’ of the experiment than simply the number of replicates. A good method is to ensure, where possible, that sufficient replication is present to achieve 15 DF for each error term of the ANOVA. Finally, in a factorial experiment, it is important to define the design of the experiment in detail because this determines the appropriate type of ANOVA. We will discuss some of the common variations of factorial ANOVA in future statnotes. If there is doubt about which ANOVA to use, the researcher should seek advice from a statistician with experience of research in applied microbiology.
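The 15 DF guideline can be sketched numerically. For a two-factor a × b design with n replicates per cell, the error DF is ab(n − 1); the helper below (an illustrative sketch with function names of my own choosing, not from the statnote) finds the smallest replication meeting a target:

```python
# Error degrees of freedom for a two-factor factorial ANOVA.
# Illustrative sketch; factor level counts and names are assumptions.

def error_df(levels_a: int, levels_b: int, replicates: int) -> int:
    """DF of the error term: ab(n - 1) for an a x b factorial with n replicates per cell."""
    return levels_a * levels_b * (replicates - 1)

def min_replicates_for_df(levels_a: int, levels_b: int, target_df: int = 15) -> int:
    """Smallest number of replicates per cell giving at least `target_df` error DF."""
    n = 2  # at least two replicates are needed for any error DF
    while error_df(levels_a, levels_b, n) < target_df:
        n += 1
    return n
```

For example, a 2 × 3 factorial with 4 replicates per cell gives 2 × 3 × 3 = 18 error DF, comfortably above the 15 DF guideline.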
Abstract:
PCA/FA is a method of analyzing complex data sets in which there are no clearly defined X or Y variables. It has multiple uses, including the study of the pattern of variation between individual entities such as patients with particular disorders and the detailed study of descriptive variables. In most applications, variables are related to a smaller number of ‘factors’ or PCs that account for the maximum variance in the data and hence may explain important trends among the variables. An increasingly important application of the method is in the ‘validation’ of questionnaires that attempt to relate subjective aspects of a patient's experience with more objective measures of vision.
Abstract:
The Center for Epidemiologic Studies-Depression Scale (CES-D) is the most frequently used scale for measuring depressive symptomatology in caregiving research. The aim of this study is to test its construct structure and measurement equivalence between caregivers from two Spanish-speaking countries. Face-to-face interviews were carried out with 595 female dementia caregivers from Madrid, Spain, and from Coahuila, Mexico. The structure of the CES-D was analyzed using exploratory and confirmatory factor analysis (EFA and CFA, respectively). Measurement invariance across samples was analyzed by comparing a baseline model with a more restrictive model. Significant differences between means were found for 7 items. The results of the EFA clearly supported a four-factor solution. The CFA for the whole sample with the four factors revealed high and statistically significant loading coefficients for all items (except item number 4). When equality constraints were imposed to test for invariance between countries, the change in chi-square was significant, indicating that complete invariance could not be assumed. Significant between-countries differences were found for three of the four latent factor mean scores. Although the results provide general support for the original four-factor structure, caution should be exercised in reporting comparisons of depression scores between Spanish-speaking countries.
Abstract:
Principal component analysis (PCA) is a ubiquitous technique for data analysis and processing, but one which is not based upon a probability model. In this paper we demonstrate how the principal axes of a set of observed data vectors may be determined through maximum-likelihood estimation of parameters in a latent variable model closely related to factor analysis. We consider the properties of the associated likelihood function, giving an EM algorithm for estimating the principal subspace iteratively, and discuss the advantages conveyed by the definition of a probability density function for PCA.
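The EM scheme for probabilistic PCA described here can be sketched as follows. This is a minimal NumPy reading of the standard Tipping–Bishop updates, not the authors' code; the function name and defaults are my own:

```python
import numpy as np

def ppca_em(X, q, n_iter=200, seed=0):
    """EM for probabilistic PCA: fit a latent variable model x = Wz + noise.

    X: (n, d) data matrix; q: dimension of the principal subspace.
    Returns the weight matrix W (d, q) and isotropic noise variance sigma2.
    Illustrative sketch only; initialisation and iteration count are assumptions.
    """
    n, d = X.shape
    Xc = X - X.mean(axis=0)            # centre the data
    S = Xc.T @ Xc / n                  # sample covariance
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((d, q))    # random initial weights
    sigma2 = 1.0
    for _ in range(n_iter):
        M = W.T @ W + sigma2 * np.eye(q)      # (q, q) posterior precision factor
        Minv = np.linalg.inv(M)
        SW = S @ W
        # M-step update for W
        W_new = SW @ np.linalg.inv(sigma2 * np.eye(q) + Minv @ W.T @ SW)
        # M-step update for the isotropic noise variance
        sigma2 = np.trace(S - SW @ Minv @ W_new.T) / d
        W = W_new
    return W, sigma2
```

At convergence the columns of W span the principal subspace, and sigma2 equals the average variance of the discarded directions, which is the probabilistic interpretation of PCA the abstract refers to.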
Abstract:
In Statnotes 24 and 25, multiple linear regression, a statistical method that examines the relationship between a single dependent variable (Y) and two or more independent variables (X), was described. The principal objective of such an analysis was to determine which of the X variables had a significant influence on Y and to construct an equation that predicts Y from the X variables. ‘Principal components analysis’ (PCA) and ‘factor analysis’ (FA) are also methods of examining the relationships between different variables, but they differ from multiple regression in that no distinction is made between the dependent and independent variables, all variables being essentially treated the same. Originally, PCA and FA were regarded as distinct methods, but in recent times they have been combined into a single analysis, PCA often being the first stage of an FA. The basic objective of a PCA/FA is to examine the relationships between the variables or the ‘structure’ of the variables and to determine whether these relationships can be explained by a smaller number of ‘factors’. This statnote describes the use of PCA/FA in the analysis of the differences between the DNA profiles of different MRSA strains introduced in Statnote 26.
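The PCA stage of a PCA/FA can be illustrated with a short numerical sketch: components are extracted from the correlation ‘structure’ of the variables, together with the fraction of total variance each explains. This is my own minimal example, not the statnote's worked MRSA analysis:

```python
import numpy as np

def pca_correlation(X):
    """PCA on the correlation matrix of X (n entities x p variables).

    Returns the component loadings, the fraction of variance explained per
    component (largest first), and the component scores for each entity.
    Illustrative sketch; names are assumptions.
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise each variable
    R = np.corrcoef(X, rowvar=False)           # the correlation 'structure'
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]          # largest variance first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    explained = eigvals / eigvals.sum()        # fraction of total variance
    scores = Z @ eigvecs                       # component scores per entity
    return eigvecs, explained, scores
```

If a few components account for most of the variance, the relationships among the original variables can be summarised by that smaller number of ‘factors’, which is the basic objective described above.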
Abstract:
The pattern of correlation between two sets of variables can be tested using canonical variate analysis (CVA). CVA, like principal components analysis (PCA) and factor analysis (FA) (Statnote 27, Hilton & Armstrong, 2011b), is a multivariate analysis. Essentially, as in PCA/FA, the objective is to determine whether the correlations between two sets of variables can be explained by a smaller number of ‘axes of correlation’ or ‘canonical roots’.
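The ‘canonical roots’ can be computed with a small sketch using the standard QR/SVD route to canonical correlations (a common numerical approach, assumed here; names are mine, not from the statnote):

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two sets of variables.

    X: (n, p) first variable set; Y: (n, q) second set.
    Returns the correlations of the canonical roots, largest first.
    Illustrative sketch; assumes both sets have full column rank.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)       # orthonormal basis for each variable set
    Qy, _ = np.linalg.qr(Yc)
    # Singular values of the cross-product of the bases are the canonical correlations
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)
```

A first root near 1 with the remaining roots near 0 indicates that the correlation between the two sets is captured by a single axis of correlation.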
Abstract:
Purpose: In today's competitive scenario, effective supply chain management is increasingly dependent on third-party logistics (3PL) companies' capabilities and performance. The dissemination of information technology (IT) has contributed to changing the supply chain role of 3PL companies, and IT is considered an important element influencing the performance of modern logistics companies. Therefore, the purpose of this paper is to explore the relationship between IT and 3PLs' performance, assuming that logistics capabilities play a mediating role in this relationship. Design/methodology/approach: Empirical evidence based on a questionnaire survey conducted on a sample of logistics service companies operating in the Italian market was used to test a conceptual resource-based view (RBV) framework linking IT adoption, logistics capabilities and firm performance. Factor analysis and ordinary least squares (OLS) regression analysis were used to test the hypotheses. The focus of the paper is multidisciplinary in nature; management of information systems, strategy, logistics and supply chain management approaches have been combined in the analysis. Findings: The results indicate strong relationships among data gathering technologies, transactional capabilities and firm performance, in terms of both efficiency and effectiveness. Moreover, a positive correlation between enterprise information technologies and 3PL financial performance has been found. Originality/value: The paper successfully uses the concept of logistics capabilities as a mediating factor between IT adoption and firm performance. Objective measures have been proposed for IT adoption and logistics capabilities. Direct and indirect relationships among variables have been successfully tested. © Emerald Group Publishing Limited.
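The mediation logic (IT adoption → logistics capabilities → firm performance) tested with OLS can be sketched as a product-of-paths estimate. The variable names and the two-regression approach below are illustrative assumptions, not the paper's actual measures or procedure:

```python
import numpy as np

def ols(X, y):
    """OLS coefficients with an intercept column prepended."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def indirect_effect(it_adoption, capability, performance):
    """Product-of-paths mediation sketch:
    path a: IT adoption -> capability;
    path b: capability -> performance, controlling for IT adoption.
    The indirect (mediated) effect is a * b.
    """
    a = ols(it_adoption, capability)[1]
    b = ols(np.column_stack([it_adoption, capability]), performance)[2]
    return a * b
```

A nonzero indirect effect alongside a reduced direct effect of IT on performance is what "logistics capabilities play a mediating role" amounts to in regression terms.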
Abstract:
The multifunctional properties of carbon nanotubes (CNTs) make them a powerful platform for unprecedented innovations in a variety of practical applications. As a result of the surging growth of nanotechnology, nanotubes present a potential problem as an environmental pollutant, and as such, an efficient method for their rapid detection must be established. Here, we propose a novel type of ionic sensor complex for detecting CNTs – an organic dye that responds sensitively and selectively to CNTs with a photoluminescent signal. The complexes are formed through Coulomb attractions between dye molecules with uncompensated charges and CNTs covered with an ionic surfactant in water. We demonstrate that the photoluminescent excitation of the dye can be transferred to the nanotubes, resulting in selective and strong amplification (up to a factor of 6) of the light emission from the excitonic levels of CNTs in the near-infrared spectral range, as experimentally observed via excitation-emission photoluminescence (PL) mapping. The chirality of the nanotubes and the type of ionic surfactant used to disperse the nanotubes both strongly affect the amplification; thus, the complexation provides sensing selectivity towards specific CNTs. Additionally, neither similar uncharged dyes nor CNTs covered with neutral surfactant form such complexes. As model organic molecules, we use a family of polymethine dyes with an easily tailorable molecular structure and, consequently, tunable absorbance and PL characteristics. This provides us with a versatile tool for the controllable photonic and electronic engineering of an efficient probe for CNT detection.