866 results for INDEPENDENT COMPONENT ANALYSIS (ICA)
Abstract:
BACKGROUND: Molecular tools may provide insight into cardiovascular risk. We assessed whether metabolites discriminate coronary artery disease (CAD) and predict risk of cardiovascular events. METHODS AND RESULTS: We performed mass-spectrometry-based profiling of 69 metabolites in subjects from the CATHGEN biorepository. To evaluate discriminative capabilities of metabolites for CAD, 2 groups were profiled: 174 CAD cases and 174 sex/race-matched controls ("initial"), and 140 CAD cases and 140 controls ("replication"). To evaluate the capability of metabolites to predict cardiovascular events, cases were combined ("event" group); of these, 74 experienced death/myocardial infarction during follow-up. A third independent group was profiled ("event-replication" group; n=63 cases with cardiovascular events, 66 controls). Analysis included principal-components analysis, linear regression, and Cox proportional hazards. Two principal components analysis-derived factors were associated with CAD: 1 comprising branched-chain amino acid metabolites (factor 4, initial P=0.002, replication P=0.01), and 1 comprising urea cycle metabolites (factor 9, initial P=0.0004, replication P=0.01). In multivariable regression, these factors were independently associated with CAD in initial (factor 4, odds ratio [OR], 1.36; 95% CI, 1.06 to 1.74; P=0.02; factor 9, OR, 0.67; 95% CI, 0.52 to 0.87; P=0.003) and replication (factor 4, OR, 1.43; 95% CI, 1.07 to 1.91; P=0.02; factor 9, OR, 0.66; 95% CI, 0.48 to 0.91; P=0.01) groups. A factor composed of dicarboxylacylcarnitines predicted death/myocardial infarction (event group hazard ratio 2.17; 95% CI, 1.23 to 3.84; P=0.007) and was associated with cardiovascular events in the event-replication group (OR, 1.52; 95% CI, 1.08 to 2.14; P=0.01). CONCLUSIONS: Metabolite profiles are associated with CAD and subsequent cardiovascular events.
Abstract:
This paper provides a theoretical analysis of the recently proposed "Extended Partial Least Squares" (EPLS) algorithm. After pointing out some conceptual deficiencies, a revised algorithm is introduced that covers the middle ground between Partial Least Squares and Principal Component Analysis. It maximises a covariance criterion between a cause and an effect variable set (partial least squares) and allows a complete reconstruction of the recorded data (principal component analysis). The new and conceptually simpler EPLS algorithm has been applied successfully in detecting and diagnosing various fault conditions, where the original EPLS algorithm offered only fault detection.
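The contrast between the two criteria that the revised algorithm bridges can be illustrated in a few lines of numpy. This is a hedged sketch with invented toy data, not the EPLS algorithm itself: the PCA direction maximises the variance of the projected cause data, while the PLS direction maximises its covariance with the effect set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy data: a cause set X (200 x 3) and an effect set Y (200 x 2)
# driven by the first two cause variables plus noise.
X = rng.standard_normal((200, 3))
Y = X[:, :2] @ np.array([[1.0, 0.5],
                         [0.2, 1.0]]) + 0.1 * rng.standard_normal((200, 2))
Xc, Yc = X - X.mean(0), Y - Y.mean(0)

# PCA criterion: leading right singular vector of X maximises projected variance.
pca_dir = np.linalg.svd(Xc, full_matrices=False)[2][0]

# PLS criterion: leading left singular vector of X'Y maximises covariance with Y.
pls_dir = np.linalg.svd(Xc.T @ Yc, full_matrices=False)[0][:, 0]
```

An EPLS-style decomposition would then retain enough additional components to reconstruct X completely, which is where it moves from the pure PLS criterion towards PCA.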
Abstract:
This is the first paper to introduce a nonlinearity test for principal component models. The methodology divides the data space into disjoint regions that are then analysed by principal component analysis under the cross-validation principle. Several toy examples have been analysed successfully, and the nonlinearity test has subsequently been applied to data from an internal combustion engine.
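The idea behind such a test can be sketched as follows (a hedged illustration with synthetic data, not the paper's actual procedure): fit a PCA model in one disjoint region and measure its reconstruction residual in another. For a linear relationship the model transfers across regions, while for a nonlinear one the cross-region residual grows sharply.

```python
import numpy as np

rng = np.random.default_rng(1)

def pca_residual(train, test, n_pc=1):
    """Fit an n_pc-component PCA model on `train` (centred on the training
    mean) and return the mean squared reconstruction residual on `test`."""
    mu = train.mean(0)
    V = np.linalg.svd(train - mu, full_matrices=False)[2][:n_pc]
    E = (test - mu) - (test - mu) @ V.T @ V
    return float((E ** 2).mean())

t = rng.uniform(-2, 2, 500)
linear = np.c_[t, 2 * t] + 0.05 * rng.standard_normal((500, 2))
curved = np.c_[t, t ** 2] + 0.05 * rng.standard_normal((500, 2))

# Two disjoint regions of the data space: t < 0 and t >= 0.
r_lin = pca_residual(linear[t < 0], linear[t >= 0])
r_cur = pca_residual(curved[t < 0], curved[t >= 0])
# r_cur greatly exceeds r_lin, flagging the curved data set as nonlinear.
```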
Abstract:
This paper presents two new approaches for use in complete process monitoring. The first concerns the identification of nonlinear principal component models. This involves the application of linear
principal component analysis (PCA), prior to the identification of a modified autoassociative neural network (AAN) as the required nonlinear PCA (NLPCA) model. The benefits are that (i) the number of the reduced set of linear principal components (PCs) is smaller than the number of recorded process variables, and (ii) the set of PCs is better conditioned as redundant information is removed. The result is a new set of input data for a modified neural representation, referred to as a T2T network. The T2T NLPCA model is then used for complete process monitoring, involving fault detection, identification and isolation. The second approach introduces a new variable reconstruction algorithm, developed from the T2T NLPCA model. Variable reconstruction can enhance the findings of the contribution charts still widely used in industry by reconstructing the outputs from faulty sensors to produce more accurate fault isolation. These ideas are illustrated using recorded industrial data relating to developing cracks in an industrial glass melter process. A comparison of linear and nonlinear models, together with the combined use of contribution charts and variable reconstruction, is presented.
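The variable-reconstruction idea can be sketched with a linear PCA model (a simplified stand-in for the T2T NLPCA model described above; the data and fault magnitude are invented): the reading of a suspect sensor is replaced iteratively by its model estimate until it becomes consistent with the remaining sensors.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three correlated "sensors"; sensor 2 is a noisy combination of 0 and 1.
Z = rng.standard_normal((400, 2))
X = np.c_[Z[:, 0], Z[:, 1], 0.7 * Z[:, 0] - 0.4 * Z[:, 1]]
X += 0.02 * rng.standard_normal((400, 3))
mu = X.mean(0)
V = np.linalg.svd(X - mu, full_matrices=False)[2][:2]   # 2-PC model

def reconstruct(x, faulty, n_iter=50):
    """Iteratively replace the faulty sensor with its PCA-model estimate,
    leaving the healthy sensors untouched."""
    x = x.copy()
    for _ in range(n_iter):
        x_hat = mu + (x - mu) @ V.T @ V
        x[faulty] = x_hat[faulty]
    return x

x_true = np.array([1.0, -1.0, 1.1])      # consistent with the correlation model
x_fault = x_true.copy()
x_fault[2] += 5.0                        # additive fault on sensor 2
x_rec = reconstruct(x_fault, faulty=2)   # x_rec[2] returns close to 1.1
```

Comparing x_fault with x_rec isolates the faulty sensor: only the reconstructed variable changes substantially, which sharpens the picture given by contribution charts.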
Abstract:
The present study examined the consistency over time of individual differences in behavioral and physiological responsiveness of calves to intuitively alarming test situations as well as the relationships between behavioral and physiological measures. Twenty Holstein Friesian heifer calves were individually subjected to the same series of two behavioral and two hypothalamo-pituitary-adrenocortical (HPA) axis reactivity tests at 3, 13 and 26 weeks of age. Novel environment (open field, OF) and novel object (NO) tests involved measurement of behavioral, plasma cortisol and heart rate responses. Plasma ACTH and/or cortisol response profiles were determined after administration of exogenous CRH and ACTH, respectively, in the HPA axis reactivity tests. Principal component analysis (PCA) was used to condense correlated measures within ages into principal components reflecting independent dimensions underlying the calves' reactivity. Cortisol responses to the OF and NO tests were positively associated with the latency to contact and negatively related to the time spent in contact with the NO. Individual differences in scores of a principal component summarizing this pattern of inter-correlations, as well as differences in separate measures of adrenocortical and behavioral reactivity in the OF and NO tests proved highly consistent over time. The cardiac response to confinement in a start box prior to the OF test was positively associated with the cortisol responses to the OF and NO tests at 26 weeks of age. HPA axis reactivity to ACTH or CRH was unrelated to adrenocortical and behavioral responses to novelty. These findings strongly suggest that the responsiveness of calves was mediated by stable individual characteristics. Correlated adrenocortical and behavioral responses to novelty may reflect underlying fearfulness, defining the individual's susceptibility to the elicitation of fear. 
Other independent characteristics mediating reactivity may include activity or coping style (related to locomotion) and underlying sociality (associated with vocalization).
Abstract:
Nonlinear principal component analysis (PCA) based on neural networks has drawn significant attention as a monitoring tool for complex nonlinear processes, but there remains a difficulty with determining the optimal network topology. This paper exploits the advantages of the Fast Recursive Algorithm, where the number of nodes, the location of centres, and the weights between the hidden layer and the output layer can be identified simultaneously for the radial basis function (RBF) networks. The topology problem for the nonlinear PCA based on neural networks can thus be solved. Another problem with nonlinear PCA is that the derived nonlinear scores may not be statistically independent or follow a simple parametric distribution. This hinders its applications in process monitoring since the simplicity of applying predetermined probability distribution functions is lost. This paper proposes the use of a support vector data description and shows that transforming the nonlinear principal components into a feature space allows a simple statistical inference. Results from both simulated and industrial data confirm the efficacy of the proposed method for solving nonlinear principal component problems, compared with linear PCA and kernel PCA.
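The feature-space inference step can be sketched as follows. This is a simplified stand-in for a support vector data description: the full SVDD finds a weighted centre by solving a quadratic program, whereas here every training score is weighted equally and the monitoring statistic is the squared feature-space distance to that centre under an RBF kernel (all data invented).

```python
import numpy as np

rng = np.random.default_rng(3)
scores = rng.standard_normal((300, 2))      # stand-in for nonlinear PCA scores

def rbf(a, b, gamma=0.5):
    """RBF kernel matrix between the rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(scores, scores)

def dist2_to_centre(x):
    """Squared feature-space distance of the rows of x to the (equal-weight)
    centre of the training scores; k(x, x) = 1 for the RBF kernel."""
    return 1.0 - 2.0 * rbf(x, scores).mean(1) + K.mean()

# Empirical 99% control limit taken from the training scores themselves.
threshold = np.quantile(dist2_to_centre(scores), 0.99)
in_stat = dist2_to_centre(np.array([[0.0, 0.0]]))[0]    # normal operation
out_stat = dist2_to_centre(np.array([[8.0, 8.0]]))[0]   # abnormal point
```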
Abstract:
PURPOSE: To investigate the quality of life and priorities of patients with glaucoma.
METHODS: Patients diagnosed with glaucoma and no other ocular comorbidity were consecutively recruited. Clinical information was collected. Participants were asked to complete three questionnaires: EuroQol (EQ-5D), time tradeoff (TTO), and choice-based conjoint analysis. The latter used outcomes with five attributes: (1) reading and seeing detail, (2) peripheral vision, (3) darkness and glare, (4) household chores, and (5) outdoor mobility. Visual field loss was estimated by using binocular integrated visual fields (IVFs).
RESULTS: Of 84 patients invited to participate, 72 were enrolled in the study. The conjoint utilities showed that the two main priorities were "reading and seeing detail" and "outdoor mobility." This rank order was stable across all segmentations of the data by demographic or visual state. However, the relative emphasis of these priorities changed with increasing visual field loss, with concerns for central vision increasing, whereas those for outdoor mobility decreased. Two subgroups of patients with differing priorities on the two main attributes were identified. Only 17% of patients (those with poorer visual acuity) were prepared to consider TTO. A principal component analysis revealed relatively independent components (i.e., low correlations) between the three different methodologies for assessing quality of life.
CONCLUSIONS: Assessments of quality of life using different methodologies have been shown to produce different outcomes with low intercorrelations between them. Only a minority of patients were prepared to trade time for a return to normal vision. Conjoint analysis showed two subgroups with different priorities. Severity of glaucoma influenced the relative importance of priorities.
Abstract:
The present study investigated the long-term consistency of individual differences in dairy cattle’s responses in tests of behavioural and hypothalamo–pituitary–adrenocortical (HPA) axis reactivity, as well as the relationship between responsiveness in behavioural tests and the reaction to first milking. Two cohorts of heifer calves, Cohorts 1 (N = 25) and 2 (N = 16), respectively, were examined longitudinally from the rearing period until adulthood. Cohort 1 heifers were subjected to open field (OF), novel object (NO), restraint, and response-to-a-human tests at 7 months of age, and were again observed in an OF test during first pregnancy between 22 and 24 months of age. Subsequently, inhibition of milk ejection and stepping and kicking behaviours were recorded in Cohort 1 heifers during their first machine milking. Cohort 2 heifers were individually subjected to OF and NO tests as well as two HPA axis reactivity tests (determining ACTH and/or cortisol response profiles after administration of exogenous CRH and ACTH, respectively) at 6 months of age and during first lactation at approximately 29 months of age. Principal component analysis (PCA) was used to condense correlated response measures (to behavioural tests and to milking) within ages into independent dimensions underlying heifers’ reactivity. Heifers demonstrated consistent individual differences in locomotion and vocalisation during an OF test from rearing to first pregnancy (Cohort 1) or first lactation (Cohort 2). Individual differences in struggling in a restraint test at 7 months of age reliably predicted those in OF locomotion during first pregnancy in Cohort 1 heifers. Cohort 2 animals with high cortisol responses to OF and NO tests and high avoidance of the novel object at 6 months of age also exhibited enhanced cortisol responses to OF and NO tests at 29 months of age.
Measures of HPA axis reactivity, locomotion, vocalisation and adrenocortical and behavioural responses to novelty were largely uncorrelated, supporting the idea that stress responsiveness in dairy cows is mediated by multiple independent underlying traits. Inhibition of milk ejection and stepping and kicking behaviours during first machine milking were not related to earlier struggling during restraint, locomotor responses to OF and NO tests, or the behavioural interaction with a novel object. Heifers with high rates of OF and NO vocalisation and short latencies to first contact with the human at 7 months of age exhibited better milk ejection during first machine milking. This suggests that low underlying sociality might be implicated in the inhibition of milk ejection at the beginning of lactation in heifers.
Abstract:
Geologic and environmental factors acting over varying spatial scales can control trace element distribution and mobility in soils. In turn, the mobility of an element in soil will affect its oral bioaccessibility. Geostatistics, kriging and principal component analysis (PCA) were used to explore factors and spatial ranges of influence over a suite of 8 element oxides, soil organic carbon (SOC), pH, and the trace elements nickel (Ni), vanadium (V) and zinc (Zn). Bioaccessibility testing was carried out previously using the Unified BARGE Method on a sub-set of 91 soil samples from the Northern Ireland Tellus soil archive. Initial spatial mapping of total Ni, V and Zn concentrations shows their distributions are correlated spatially with local geologic formations, and prior correlation analyses showed that statistically significant controls were exerted over trace element bioaccessibility by the 8 oxides, SOC and pH. PCA applied to the geochemistry parameters of the bioaccessibility sample set yielded three principal components accounting for 77% of cumulative variance in the data set. Geostatistical analysis of oxide, trace element, SOC and pH distributions using 6862 sample locations also identified distinct spatial ranges of influence for these variables, concluded to arise from geologic forming processes, weathering processes, and localised soil chemistry factors. Kriging was used to conduct a spatial PCA of Ni, V and Zn distributions, which identified two factors comprising the majority of distribution variance. The first was spatially accounted for by basalt rock types, with the second component associated with sandstone and limestone in the region. The results suggest that trace element bioaccessibility and distribution are controlled by chemical and geologic processes occurring over variable spatial ranges of influence.
Abstract:
The use of handheld near infrared (NIR) instrumentation, as a tool for rapid analysis, has the potential to be widely used in the animal feed sector. A comparison was made between handheld NIR and benchtop instruments in terms of proximate analysis of poultry feed, using off-the-shelf calibration models and including statistical analysis. Additionally, melamine-adulterated soya bean products were used to develop qualitative and quantitative calibration models from the NIRS spectral data, with excellent calibration models and prediction statistics obtained. With regard to the quantitative approach, the coefficients of determination (R2) were found to be 0.94-0.99, while the corresponding root mean square errors of calibration and prediction were 0.081-0.215% and 0.095-0.288%, respectively. In addition, cross validation was used to further validate the models, with the root mean square error of cross validation found to be 0.101-0.212%. Furthermore, by adopting a qualitative approach with the spectral data and applying Principal Component Analysis, it was possible to discriminate between adulterated and pure samples.
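The calibration and prediction statistics reported above can be computed as follows. This is a hedged sketch with synthetic "spectra", not the feed or melamine data: a linear calibration is fitted on one subset, and the root mean square errors of calibration (RMSEC) and prediction (RMSEP), plus the prediction R2, are computed on the calibration and held-out subsets, respectively.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "spectra": 60 samples x 10 wavelengths responding linearly to an
# analyte concentration plus noise (illustrative only).
conc = rng.uniform(0.0, 5.0, 60)
loadings = rng.uniform(0.1, 1.0, 10)
spectra = conc[:, None] * loadings + 0.05 * rng.standard_normal((60, 10))

cal, pred = slice(0, 40), slice(40, 60)      # calibration / prediction split
coef = np.linalg.lstsq(spectra[cal], conc[cal], rcond=None)[0]

rmsec = np.sqrt(np.mean((spectra[cal] @ coef - conc[cal]) ** 2))
rmsep = np.sqrt(np.mean((spectra[pred] @ coef - conc[pred]) ** 2))
resid = spectra[pred] @ coef - conc[pred]
r2_pred = 1.0 - (resid ** 2).sum() / ((conc[pred] - conc[pred].mean()) ** 2).sum()
```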
Abstract:
Goats’ milk is responsible for unique traditional products such as Halloumi cheese. The characteristics of Halloumi depend on the original features of the milk and on the conditions under which the milk has been produced, such as the feeding regime of the animals or the region of production. Using a range of milk (33) and Halloumi (33) samples collected over a year from three different locations in Cyprus (A, Anogyra; K, Kofinou; P, Paphos), the potential of fingerprint VOC analysis as a marker to authenticate Halloumi was investigated. This unique setup consists of an in-injector thermal desorption (VOCtrap needle) and a chromatofocusing system based on mass spectrometry (VOCscanner). The mass spectra of all the analysed samples were treated by multivariate analysis (principal component analysis and discriminant function analysis). Results showed that the highland production area (P) is clearly identified in the milks produced (discriminant score 67%). It is interesting to note that the higher similarity found between milks from regions “A” and “K” (with “P” being distinctive; discriminant score 80%) is not ‘carried over’ to the cheeses (higher similarity between regions “A” and “P”, with “K” distinctive). Data were also broken down into three seasons. Similarly, the seasonality differences observed in the milks are not necessarily reflected in the produced cheeses. This is expected, owing to the different VOC signatures developed in cheeses as part of the numerous biochemical changes during their elaboration compared with milk. VOC analysis is nevertheless an additional analytical tool that can aid in identifying the region of origin of dairy products.
Abstract:
Single component geochemical maps are the most basic representation of spatial elemental distributions and are commonly used in environmental and exploration geochemistry. However, the compositional nature of geochemical data imposes several limitations on how the data should be presented. The problems relate to the constant sum problem (closure) and the inherently multivariate relative information conveyed by compositional data. Well known, for instance, is the tendency of all heavy metals to show lower values in soils with significant contributions of diluting elements (e.g., the quartz dilution effect), or the contrary effect, apparent enrichment in many elements due to removal of potassium during weathering. The validity of classical single component maps is thus investigated, and reasonable alternatives that honour the compositional character of geochemical concentrations are presented. The first recommended method relies on knowledge-driven log-ratios, chosen to highlight certain geochemical relations or to filter known artefacts (e.g. dilution with SiO2 or volatiles). This is similar to the classical normalisation approach to a single element. The second approach uses so-called log-contrasts, which employ suitable statistical methods (such as classification techniques, regression analysis, principal component analysis, clustering of variables, etc.) to extract potentially interesting geochemical summaries. The caution from this work is that if a compositional approach is not used, it becomes difficult to guarantee that any identified pattern, trend or anomaly is not an artefact of the constant sum constraint. In summary, the authors recommend a chain of enquiry that involves searching for the appropriate statistical method that can answer the required geological or geochemical question whilst maintaining the integrity of the compositional nature of the data.
The required log-ratio transformations should be applied followed by the chosen statistical method. Interpreting the results may require a closer working relationship between statisticians, data analysts and geochemists.
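The centred log-ratio (clr) transformation underlying such log-ratio methods can be sketched directly (a minimal illustration with invented compositions): each part is divided by the geometric mean of its row, removing the constant sum constraint before PCA or other multivariate methods are applied.

```python
import numpy as np

def clr(comp):
    """Centred log-ratio transform: log of each part minus the row-wise mean
    log, i.e. the log of each part relative to the row's geometric mean."""
    logx = np.log(comp)
    return logx - logx.mean(axis=1, keepdims=True)

# Two 3-part compositions closed to 100% (invented values).
comp = np.array([[60.0, 30.0, 10.0],
                 [45.0, 45.0, 10.0]])
z = clr(comp)
# Rows of z sum to zero, and rescaling a composition (changing the closure
# constant) leaves its clr coordinates unchanged.
```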
Abstract:
Statistics are regularly used to make some form of comparison between trace evidence samples or to deploy the exclusionary principle (Morgan and Bull, 2007) in forensic investigations. Trace evidence routinely comprises the results of particle size, chemical or modal analyses and as such constitutes compositional data. The issue is that compositional data, including percentages, parts per million, etc., carry only relative information. This may be problematic where a comparison of percentages and other constrained/closed data is deemed a statistically valid and appropriate way to present trace evidence in a court of law. Notwithstanding awareness of the constant sum problem since the seminal works of Pearson (1896) and Chayes (1960), and the introduction of log-ratio techniques (Aitchison, 1986; Pawlowsky-Glahn and Egozcue, 2001; Pawlowsky-Glahn and Buccianti, 2011; Tolosana-Delgado and van den Boogaart, 2013), the fact that a constant sum destroys the potential independence of variances and covariances required for correlation, regression analysis and empirical multivariate methods (principal component analysis, cluster analysis, discriminant analysis, canonical correlation) is all too often not acknowledged in the statistical treatment of trace evidence. Yet the need for a robust treatment of forensic trace evidence analyses is obvious. This research examines the issues and potential pitfalls for forensic investigators if the constant sum constraint is ignored in the analysis and presentation of forensic trace evidence. Forensic case studies involving particle size and mineral analyses as trace evidence are used to demonstrate a compositional data approach using a centred log-ratio (clr) transformation and multivariate statistical analyses.
Abstract:
Master's dissertation, Qualidade em Análises, Faculdade de Ciências e Tecnologia, Univ. do Algarve, 2013
Abstract:
Beyond the classical statistical approaches (determination of basic statistics, regression analysis, ANOVA, etc.), a new set of applications of different statistical techniques has increasingly gained relevance in the analysis, processing and interpretation of data concerning the characteristics of forest soils. This can be seen in some recent publications in the context of Multivariate Statistics. These new methods require additional care that is not always included or referred to in some approaches. In the particular case of geostatistical applications it is necessary, besides geo-referencing all data acquisition, to collect the samples on regular grids and in sufficient quantity so that the variograms can reflect the spatial distribution of soil properties in a representative manner. As for the great majority of Multivariate Statistics techniques (Principal Component Analysis, Correspondence Analysis, Cluster Analysis, etc.), although in most cases they do not require the assumption of a normal distribution, they nevertheless need a proper and rigorous strategy for their use. In this work, some reflections are presented on these methodologies and, in particular, on the main constraints that often occur during the data collection process and on the various possibilities for linking these different techniques. Finally, illustrations of some particular cases of the application of these statistical methods are also presented.