28 results for Analysis of principal component
in Aston University Research Archive
Abstract:
This dissertation studies the process of operations systems design within the context of the manufacturing organization. Using the DRAMA (Design Routine for Adopting Modular Assembly) model as developed by a team from the IDOM Research Unit at Aston University as a starting point, the research employed empirically based fieldwork and a survey to investigate the process of production systems design and implementation within four UK manufacturing industries: electronics assembly, electrical engineering, mechanical engineering and carpet manufacturing. The intention was to validate the basic DRAMA model as a framework for research enquiry within the electronics industry, where the initial IDOM work was conducted, and then to test its generic applicability, further developing the model where appropriate, within the other industries selected. The thesis contains a review of production systems design theory and practice prior to presenting thirteen industrial case studies of production systems design from the four industry sectors. The results and analysis of the postal survey into production systems design are then presented. The strategic decisions of manufacturing and their relationship to production systems design, and the detailed process of production systems design and operation are then discussed. These analyses are used to develop the generic model of production systems design entitled DRAMA II (Decision Rules for Analysing Manufacturing Activities). The model contains three main constituent parts: the basic DRAMA model, the extended DRAMA II model showing the imperatives and relationships within the design process, and a benchmark generic approach for the design and analysis of each component in the design process. DRAMA II is primarily intended for use by researchers as an analytical framework of enquiry, but is also seen as having application for manufacturing practitioners.
Abstract:
Rhizome of cassava plants (Manihot esculenta Crantz) was catalytically pyrolysed at 500 °C using the analytical pyrolysis–gas chromatography/mass spectrometry (Py–GC/MS) method in order to investigate the relative effect of various catalysts on pyrolysis products. Selected catalysts expected to affect bio-oil properties were used in this study. These included zeolites and related materials (ZSM-5, Al-MCM-41 and Al-MSU-F type), metal oxide catalysts (zinc oxide, zirconium(IV) oxide, cerium(IV) oxide and copper chromite), proprietary commercial catalysts (Criterion-534 and alumina-stabilised ceria-MI-575) and natural catalysts (slate, char, and ashes derived from char and biomass). The pyrolysis product distributions were monitored using principal components analysis (PCA) models. The results showed that the zeolites, the proprietary commercial catalysts, copper chromite and biomass-derived ash were selective for the reduction of most oxygenated lignin derivatives. The use of ZSM-5, Criterion-534 and Al-MSU-F catalysts enhanced the formation of aromatic hydrocarbons and phenols. No single catalyst was found to reduce all carbonyl products selectively. Instead, most of the carbonyl compounds containing a hydroxyl group were reduced by the zeolites and related materials, the proprietary catalysts and copper chromite. The PCA model for carboxylic acids showed that zeolite ZSM-5 and Al-MSU-F tend to produce significant amounts of acetic and formic acids.
Abstract:
Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Previous attempts to formulate mixture models for PCA have therefore to some extent been ad hoc. In this paper, PCA is formulated within a maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context of clustering, density modelling and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition.
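The mixture described above combines local probabilistic PCA models, each of which admits a closed-form maximum-likelihood fit. A minimal sketch of that single-component fit (illustrative code with invented toy data, not the paper's implementation):

```python
import numpy as np

def ppca_ml(X, q):
    """Closed-form maximum-likelihood fit of one probabilistic PCA model,
    x = W z + mu + noise, with isotropic noise variance sigma^2.
    Each component of the mixture carries one such local fit."""
    n, d = X.shape
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)              # sample covariance
    lam, U = np.linalg.eigh(S)               # eigenvalues in ascending order
    lam, U = lam[::-1], U[:, ::-1]           # reorder to descending
    sigma2 = lam[q:].mean()                  # ML noise variance = mean discarded eigenvalue
    W = U[:, :q] * np.sqrt(np.maximum(lam[:q] - sigma2, 0.0))
    return W, mu, sigma2

# Toy data: 5-D observations generated from a 2-D latent subspace
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 5)) \
    + 0.1 * rng.normal(size=(500, 5))
W, mu, sigma2 = ppca_ml(X, q=2)
print(W.shape, sigma2)
```

Because each fit defines a proper Gaussian density, component responsibilities and mixing weights follow from a standard EM outer loop, which is what makes the mixture well defined.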
Abstract:
Principal component analysis (PCA) is a ubiquitous technique for data analysis and processing, but one which is not based upon a probability model. In this paper we demonstrate how the principal axes of a set of observed data vectors may be determined through maximum-likelihood estimation of parameters in a latent variable model closely related to factor analysis. We consider the properties of the associated likelihood function, giving an EM algorithm for estimating the principal subspace iteratively, and discuss the advantages conveyed by the definition of a probability density function for PCA.
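The EM iteration for the principal subspace can be sketched in a compact covariance form (an illustrative reading of the approach, with invented toy data, not the authors' code):

```python
import numpy as np

def ppca_em(X, q, n_iter=500, seed=0):
    """Iterative EM estimation of the PPCA principal subspace W and
    isotropic noise variance sigma^2, using only the sample covariance."""
    n, d = X.shape
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)                  # sample covariance
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(d, q))                  # random initialisation
    sigma2 = 1.0
    for _ in range(n_iter):
        # E-step statistics enter through M^{-1} = (W'W + sigma^2 I)^{-1}
        Minv = np.linalg.inv(W.T @ W + sigma2 * np.eye(q))
        SW = S @ W
        # M-step updates for W and the noise variance
        W_new = SW @ np.linalg.inv(sigma2 * np.eye(q) + Minv @ W.T @ SW)
        sigma2 = np.trace(S - SW @ Minv @ W_new.T) / d
        W = W_new
    return W, mu, sigma2

# Toy data: 6-D observations from a 3-D latent subspace plus small noise
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3)) @ rng.normal(size=(3, 6)) \
    + 0.05 * rng.normal(size=(400, 6))
W, mu, sigma2 = ppca_em(X, q=3)
```

After convergence the columns of W span the same subspace as the leading eigenvectors of the sample covariance, recovered here iteratively rather than by an explicit eigendecomposition.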
Abstract:
Principal components analysis (PCA) has been described for over 50 years; however, it is rarely applied to the analysis of epidemiological data. In this study PCA was critically appraised in its ability to reveal relationships between pulsed-field gel electrophoresis (PFGE) profiles of methicillin-resistant Staphylococcus aureus (MRSA) in comparison to the more commonly employed cluster analysis and representation by dendrograms. The PFGE type following SmaI chromosomal digest was determined for 44 multidrug-resistant hospital-acquired methicillin-resistant S. aureus (MR-HA-MRSA) isolates, two multidrug-resistant community-acquired MRSA (MR-CA-MRSA) isolates, 50 hospital-acquired MRSA (HA-MRSA) isolates (from the University Hospital Birmingham, NHS Trust, UK) and 34 community-acquired MRSA (CA-MRSA) isolates (from general practitioners in Birmingham, UK). Strain relatedness was determined using Dice band-matching with UPGMA clustering and PCA. The results indicated that PCA revealed relationships between MRSA strains which were more strongly correlated with known epidemiology, most likely because, unlike cluster analysis, PCA does not have the constraint of generating a hierarchic classification. In addition, PCA provides the opportunity for further analysis to identify key polymorphic bands within complex genotypic profiles, which is not always possible with dendrograms. Here we provide a detailed description of a PCA method for the analysis of PFGE profiles to complement further the epidemiological study of infectious disease. © 2005 Elsevier B.V. All rights reserved.
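The PCA step on band-matching data can be sketched as follows; the binary band profiles below are simulated stand-ins, not the study's PFGE data:

```python
import numpy as np

# Simulated PFGE-style band profiles: two lineages, each isolate a noisy
# copy of its lineage's prototype presence/absence band pattern.
rng = np.random.default_rng(42)
n_bands = 20
proto_a = rng.integers(0, 2, size=n_bands)
proto_b = rng.integers(0, 2, size=n_bands)

def noisy(proto, p_flip=0.1):
    """Flip ~10% of band calls to mimic gel-to-gel variation."""
    return np.where(rng.random(n_bands) < p_flip, 1 - proto, proto)

X = np.array([noisy(proto_a) for _ in range(12)] +
             [noisy(proto_b) for _ in range(12)], dtype=float)

# PCA via SVD of the mean-centred band matrix: no hierarchy is imposed,
# so relatedness appears as a continuous ordination of isolates.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * s[:2]    # isolate coordinates on PC1/PC2
loadings = Vt[:2]            # band weights: large |loading| flags a key polymorphic band
```

The score plot ordinates isolates without forcing a dendrogram's hierarchy, and the loadings point back to the individual polymorphic bands driving the separation.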
Abstract:
A Principal Components Analysis of neuropathological data from 79 Alzheimer’s disease (AD) cases was performed to determine whether there was evidence for subtypes of the disease. Two principal components were extracted from the data which accounted for 72% and 12% of the total variance respectively. The results suggested that 1) AD was heterogeneous but subtypes could not be clearly defined; 2) the heterogeneity, in part, reflected disease onset; 3) familial cases did not constitute a distinct subtype of AD and 4) there were two forms of late onset AD, one of which was associated with less senile plaque and neurofibrillary tangle development but with a greater degree of brain atherosclerosis.
Abstract:
Two contrasting multivariate statistical methods, viz., principal components analysis (PCA) and cluster analysis were applied to the study of neuropathological variations between cases of Alzheimer's disease (AD). To compare the two methods, 78 cases of AD were analyzed, each characterised by measurements of 47 neuropathological variables. Both methods of analysis revealed significant variations between AD cases. These variations were related primarily to differences in the distribution and abundance of senile plaques (SP) and neurofibrillary tangles (NFT) in the brain. Cluster analysis classified the majority of AD cases into five groups which could represent subtypes of AD. However, PCA suggested that variation between cases was more continuous with no distinct subtypes. Hence, PCA may be a more appropriate method than cluster analysis in the study of neuropathological variations between AD cases.
Abstract:
A principal components analysis was carried out on neuropathological data collected from 79 cases of Alzheimer's disease (AD) diagnosed in a single centre. The purpose of the study was to determine whether, on neuropathological criteria, there was evidence for clearly defined subtypes of the disease. Two principal components (PC1 and PC2) were extracted from the data. PC1 was considerably more important than PC2, accounting for 72% of the total variance. When plotted in relation to the first two principal components, the majority of cases (65/79) were distributed in a single cluster within which subgroupings were not clearly evident. In addition, there were a number of individual, mainly early-onset cases, which were neither related to each other nor to the main cluster. The distribution of each neuropathological feature was examined in relation to PC1 and PC2. Disease onset, the degree of gross brain atrophy, neuronal loss and the development of senile plaques (SP) and neurofibrillary tangles (NFT) were negatively correlated with PC1. The development of SP and NFT and the degree of brain atherosclerosis were positively correlated with PC2. These results suggested: 1) that there were different forms of AD but no clear division of the cases into subclasses could be made on the neuropathological criteria used, the cases showing a more continuous distribution from one form to another; 2) that disease onset was an important variable and was associated with a greater development of pathological changes; 3) that familial cases were not a distinct subclass of AD, these cases being widely distributed in relation to PC1 and PC2; and 4) that there may be two forms of late-onset AD which grade into each other, one of which was associated with less SP and NFT development but with a greater degree of brain atherosclerosis.
Abstract:
Visual perception depends on both light transmission through the eye and neuronal conduction through the visual pathway. Advances in clinical diagnostics and treatment modalities over recent years have increased the opportunities to improve the optical path and retinal image quality. Higher-order aberrations and retinal straylight are two major factors that influence light transmission through the eye and, ultimately, visual outcome. Recent technological advancements have brought these important factors into the clinical domain; however, the potential applications of these tools, and the considerations involved in interpreting their data, are widely underestimated. The purpose of this thesis was to validate and optimise wavefront analysers and a new clinical tool for the objective evaluation of intraocular scatter. The application of these methods in a clinical setting involving a range of conditions was also explored. The work was divided into two principal sections:
1. Wavefront aberrometry: optimisation, validation and clinical application. The main findings of this work were:
• Observer manipulation of the aberrometer increases variability by a factor of 3.
• Ocular misalignment can profoundly affect reliability, notably for off-axis aberrations.
• Aberrations measured with wavefront analysers using different principles are not interchangeable, with poor relationships and significant differences between values.
• Instrument myopia of around 0.30 D is induced when performing wavefront analysis in non-cyclopleged eyes; values can be as high as 3 D, being higher as the baseline level of myopia decreases. Associated accommodation changes may result in relevant changes to the aberration profile, particularly with respect to spherical aberration.
• Young adult healthy Caucasian eyes have significantly more spherical aberration than Asian eyes when matched for age, gender, axial length and refractive error.
• Axial length is significantly correlated with most components of the aberration profile.
2. Intraocular light scatter: evaluation of subjective measures, and validation and application of a new objective method utilising clinically derived wavefront patterns. The main findings of this work were:
• Subjective measures of clinical straylight are highly repeatable; three measurements are suggested as the optimum number for increased reliability.
• Significant differences in straylight values were found for contact lenses designed for contrast enhancement compared to clear lenses of the same design and material specifications; specifically, grey/green tints induced significantly higher values of retinal straylight.
• Wavefront patterns from a commercial Hartmann-Shack device can be used to obtain objective measures of scatter and are well correlated with subjective straylight values.
• Perceived retinal straylight was similar in groups of patients implanted with monofocal and multifocal intraocular lenses. Correlation between objective and subjective measurements of scatter was poor, possibly due to different illumination conditions between the testing procedures, or to a neural component which may alter with age.
Careful acquisition yields highly reproducible in vivo measures of higher-order aberrations; however, data from different devices are not interchangeable, which calls the accuracy of measurement into question. Objective measures of intraocular straylight can be derived from clinical aberrometry and may be of great diagnostic and management importance in the future.
Abstract:
Three studies tested the impact of properties of behavioral intention on intention-behavior consistency, information processing, and resistance. Principal components analysis showed that properties of intention formed distinct factors. Study 1 demonstrated that temporal stability, but not the other intention attributes, moderated intention-behavior consistency. Study 2 found that greater stability of intention was associated with improved memory performance. In Study 3, participants were confronted with a rating scale manipulation designed to alter their intention scores. Findings showed that stable intentions were able to withstand attack. Overall, the present research findings suggest that different properties of intention are not simply manifestations of a single underlying construct ("intention strength"), and that temporal stability exhibits superior resistance and impact compared to other intention attributes. © 2013 Wiley Periodicals, Inc.
Abstract:
The ability to define and manipulate the interaction of peptides with MHC molecules has immense immunological utility, with applications in epitope identification, vaccine design, and immunomodulation. However, the methods currently available for prediction of peptide-MHC binding are far from ideal. We recently described the application of a bioinformatic prediction method based on quantitative structure-affinity relationship methods to peptide-MHC binding. In this study we demonstrate the predictivity and utility of this approach. We determined the binding affinities of a set of 90 nonamer peptides for the MHC class I allele HLA-A*0201 using an in-house, FACS-based, MHC stabilization assay, and from these data we derived an additive quantitative structure-affinity relationship model for peptide interaction with the HLA-A*0201 molecule. Using this model we then designed a series of high affinity HLA-A2-binding peptides. Experimental analysis revealed that all these peptides showed high binding affinities to the HLA-A*0201 molecule, significantly higher than the highest previously recorded. In addition, by the use of systematic substitution at principal anchor positions 2 and 9, we showed that high binding peptides are tolerant to a wide range of nonpreferred amino acids. Our results support a model in which the affinity of peptide binding to MHC is determined by the interactions of amino acids at multiple positions with the MHC molecule and may be enhanced by enthalpic cooperativity between these component interactions.
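The additive structure-affinity idea (each peptide position contributes independently to binding) can be sketched with ridge regression on one-hot position encodings; the peptides, affinities and contribution values below are simulated, not the paper's HLA-A*0201 measurements:

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"

def onehot(pep):
    """Encode a 9-mer as indicators, one per (position, amino acid) pair,
    so a linear model is exactly an additive per-position contribution model."""
    v = np.zeros(9 * 20)
    for i, a in enumerate(pep):
        v[i * 20 + AA.index(a)] = 1.0
    return v

# Simulated affinities: each position contributes an additive amount
rng = np.random.default_rng(0)
true_contrib = rng.normal(size=9 * 20)
peps = ["".join(rng.choice(list(AA), size=9)) for _ in range(300)]
X = np.array([onehot(p) for p in peps])
y = X @ true_contrib + 0.1 * rng.normal(size=300)

# Ridge regression recovers the additive contributions; picking the
# highest-scoring residue at each position then "designs" a high-affinity peptide
ridge = 1.0
coef = np.linalg.solve(X.T @ X + ridge * np.eye(180), X.T @ y)
designed = "".join(AA[int(np.argmax(coef[p * 20:(p + 1) * 20]))] for p in range(9))
```

The design step mirrors the paper's logic: once per-position contributions are estimated, candidate high-affinity binders follow by maximising the sum of contributions, including substitutions at anchor positions 2 and 9.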
Abstract:
Using a Markov switching unobserved component model we decompose the term premium of the North American CDX index into a permanent and a stationary component. We establish that the inversion of the CDX term premium is induced by sudden changes in the unobserved stationary component, which represents the evolution of the fundamentals underpinning the probability of default in the economy. We find evidence that the monetary policy response from the Fed during the crisis period was effective in reducing the volatility of the term premium. We also show that equity returns make a substantial contribution to the term premium over the entire sample period.
Abstract:
Recent discussion of the knowledge-based economy draws increasing attention to the role that the creation and management of knowledge play in economic development. Development of human capital, the principal mechanism for knowledge creation and management, becomes a central issue for policy-makers and practitioners at the regional, as well as national, level. Facing competition both within and across nations, regional policy-makers view human capital development as a key to strengthening the positions of their economies in the global market. Against this background, the aim of this study is to go some way towards answering the question of whether, and how, investment in education and vocational training at regional level provides these territorial units with comparative advantages. The study reviews literature in economics and economic geography on economic growth (Chapter 2). In the growth model literature, human capital has gained increased recognition as a key production factor alongside physical capital and labour. Although they leave technical progress as an exogenous factor, neoclassical Solow-Swan models have improved their estimates through the inclusion of human capital. In contrast, endogenous growth models place investment in research at centre stage in accounting for technical progress. As a result, they often focus upon research workers, who embody high-order human capital, as a key variable in their framework. An issue of discussion is how human capital facilitates economic growth: is it the level of its stock or its accumulation that influences the rate of growth? In addition, these economic models are criticised in the economic geography literature for their failure to consider spatial aspects of economic development, and particularly for their lack of attention to tacit knowledge and the urban environments that facilitate the exchange of such knowledge.
Our empirical analysis of European regions (Chapter 3) shows that investment by individuals in human capital formation has distinct patterns. Those regions with a higher level of investment in tertiary education tend to have a larger concentration of information and communication technology (ICT) sectors (including provision of ICT services and manufacture of ICT devices and equipment) and research functions. Not surprisingly, regions with major metropolitan areas where higher education institutions are located show a high enrolment rate for tertiary education, suggesting a possible link to the demand from high-order corporate functions located there. Furthermore, the rate of human capital development (at the level of vocational upper secondary education) appears to have a significant association with the level of entrepreneurship in emerging industries such as ICT-related services and ICT manufacturing, whereas no such association is found with traditional manufacturing industries. In general, a high level of investment by individuals in tertiary education is found in those regions that accommodate high-tech industries and high-order corporate functions such as research and development (R&D). These functions are supported through the urban infrastructure and public science base, facilitating the exchange of tacit knowledge. Such regions also enjoy a low unemployment rate. However, the existing stock of human and physical capital in those regions with a high level of urban infrastructure does not lead to a high rate of economic growth. Our empirical analysis demonstrates that the rate of economic growth is determined by the accumulation of human and physical capital, not by the level of their existing stocks. We found no significant effects of scale that would favour those regions with a larger stock of human capital.
The primary policy implication of our study is that, in order to facilitate economic growth, education and training need to supply human capital at a faster pace than simply replenishing it as it disappears from the labour market. Given the significant impact of high-order human capital (such as business R&D staff in our case study) as well as the increasingly fast pace of technological change that makes human capital obsolete, a concerted effort needs to be made to facilitate its continuous development.
Abstract:
Receptor activity modifying protein 1 (RAMP1) is an integral component of several receptors including the calcitonin gene-related peptide (CGRP) receptor. It forms a complex with the calcitonin receptor-like receptor (CLR) and is required for receptor trafficking and ligand binding. The N-terminus of RAMP1 comprises three helices. The current study investigated regions of RAMP1 important for CGRP or CLR interactions by alanine mutagenesis. Modeling suggested the second and third helices were important in protein-protein interactions. Most of the conserved residues in the N-terminus (M48, W56, Y66, P85, N86, H97, F101, D113, P114, P115), together with a further 13 residues spread throughout the three helices of RAMP1, were mutated to alanine and coexpressed with CLR in COS-7 cells. None of the mutations significantly reduced RAMP expression. Of the nine mutants from helix 1, only M48A had any effect, producing a modest reduction in trafficking of CLR to the cell surface. In helix 2, Y66A almost completely abolished CLR trafficking; L69A and T73A reduced the potency of CGRP to produce cAMP. In helix 3, H97A abolished CLR trafficking; P85A, N86A and F101A caused modest reductions in CLR trafficking and also reduced the potency of CGRP on cAMP production. F93A caused a modest reduction in CLR trafficking alone, and L94A increased cAMP production. The data are consistent with a CLR recognition site particularly involving Y66 and H97, with lesser roles for adjacent residues in helix 3. L69 and T73 may contribute to a CGRP recognition site in helix 2, also involving nearby residues.