61 results for principal component analysis (PCA)


Relevance: 100.00%

Abstract:

This paper discusses the monitoring of complex nonlinear and time-varying processes. Kernel principal component analysis (KPCA) has gained significant attention as a monitoring tool for nonlinear systems in recent years but relies on a fixed model that cannot be employed for time-varying systems. The contribution of this article is the development of a numerically efficient and memory-saving moving window KPCA (MWKPCA) monitoring approach. The proposed technique incorporates an up- and downdating procedure to (i) adapt the data mean and covariance matrix in the feature space and (ii) approximate the eigenvalues and eigenvectors of the Gram matrix. The article shows that the proposed MWKPCA algorithm has a computational complexity of O(N²), whilst batch techniques, e.g. the Lanczos method, are of O(N³). Including the adaptation of the number of retained components and an l-step-ahead application of the MWKPCA monitoring model, the paper finally demonstrates the utility of the proposed technique using a simulated nonlinear time-varying system and recorded data from an industrial distillation column.
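A minimal NumPy sketch of the moving-window KPCA model structure is given below. It is not the paper's algorithm: the O(N²) up- and downdating scheme is not reproduced, and this naive version simply recomputes the centred Gram matrix and its eigendecomposition for every window; the window size, kernel width and component count are illustrative assumptions.

```python
# Naive moving-window KPCA sketch: recomputes the centred Gram matrix and
# its eigendecomposition per window (the paper's efficient up-/downdating
# is NOT reproduced here). All parameter defaults are illustrative.
import numpy as np

def rbf_gram(X, gamma=0.1):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def centre_gram(K):
    """Centre the Gram matrix in feature space."""
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    return K - one @ K - K @ one + one @ K @ one

def moving_window_kpca(X, window=100, n_comp=3, gamma=0.1):
    """Yield the leading eigenpairs of the centred Gram matrix per window."""
    for t in range(window, len(X) + 1):
        Kc = centre_gram(rbf_gram(X[t - window:t], gamma))
        evals, evecs = np.linalg.eigh(Kc)  # ascending order
        yield evals[-n_comp:][::-1], evecs[:, -n_comp:][:, ::-1]
```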

Relevance: 100.00%

Abstract:

This paper shows that current multivariate statistical monitoring technology may detect neither incipient changes in the variable covariance structure nor changes in the geometry of the underlying variable decomposition. To overcome these deficiencies, the local approach is incorporated into the multivariate statistical monitoring framework to define two new univariate statistics for fault detection. Fault isolation is achieved by constructing a fault diagnosis chart which reveals changes in the covariance structure resulting from the presence of a fault. A theoretical analysis is presented and the proposed monitoring approach is exemplified using application studies involving recorded data from two complex industrial processes.
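For context, the sketch below implements the conventional PCA monitoring statistics (Hotelling's T² and SPE/Q) whose limitations the paper addresses; the local-approach statistics proposed in the paper are not reproduced, and the function names are illustrative.

```python
# Conventional PCA monitoring statistics (T^2 and SPE/Q) built from
# reference (in-control) data. These are the statistics the paper argues
# may miss incipient covariance-structure changes.
import numpy as np

def fit_pca_monitor(X_ref, n_comp):
    """Build a PCA monitoring model from reference data."""
    mu = X_ref.mean(axis=0)
    S = np.cov(X_ref - mu, rowvar=False)
    evals, P = np.linalg.eigh(S)
    order = np.argsort(evals)[::-1]
    return mu, evals[order[:n_comp]], P[:, order[:n_comp]]

def t2_spe(x, mu, evals, P):
    """Hotelling's T^2 and SPE (Q) statistics for a new sample x."""
    t = P.T @ (x - mu)            # scores in the model subspace
    t2 = np.sum(t**2 / evals)     # T^2
    resid = (x - mu) - P @ t      # residual part
    return t2, float(resid @ resid)  # SPE = squared residual norm
```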

Relevance: 100.00%

Abstract:

This paper proposes a method for the real-time detection and classification of multiple events in an electrical power system, namely islanding, high-frequency events (loss of load) and low-frequency events (loss of generation). The method is based on principal component analysis of frequency measurements and employs a moving window approach to combat the time-varying nature of power systems, thereby increasing overall situational awareness of the power system. Numerical case studies using both real data collected from the UK power system and simulated scenarios constructed using DigSilent PowerFactory, covering islanding events as well as loss-of-load and generation-dip events, are used to demonstrate the reliability of the proposed method.
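A hedged sketch of the moving-window PCA idea applied to multi-channel frequency measurements follows: a PCA model is refit on a sliding reference window and the squared prediction error (SPE) of each new sample flags abnormal behaviour. The empirical-quantile threshold and all parameter values are illustrative assumptions, not the paper's method, and classification of the event type is omitted.

```python
# Moving-window PCA event flagging on frequency measurements.
# F: (time, n_sites) matrix of frequency readings. Illustrative defaults.
import numpy as np

def detect_events(F, window=600, n_comp=2, q=0.999):
    """Return indices of samples whose SPE exceeds an empirical limit."""
    flagged = []
    for t in range(window, F.shape[0]):
        ref = F[t - window:t]
        mu = ref.mean(axis=0)
        _, _, Vt = np.linalg.svd(ref - mu, full_matrices=False)
        P = Vt[:n_comp].T                         # loading matrix
        R = (ref - mu) - (ref - mu) @ P @ P.T     # reference residuals
        limit = np.quantile(np.sum(R**2, axis=1), q)
        r = (F[t] - mu) - P @ (P.T @ (F[t] - mu))
        if r @ r > limit:
            flagged.append(t)
    return flagged
```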

Relevance: 100.00%

Abstract:

During lateral leg raising, a synergistic inclination of the supporting leg and trunk in the direction opposite to the leg movement is performed in order to preserve equilibrium. As first hypothesized by Pagano and Turvey (J Exp Psychol Hum Percept Perform, 1995, 21:1070-1087), the perception of limb orientation could be based on the orientation of the limb's inertia tensor. The purpose of this study was thus to explore whether the final upper body orientation (trunk inclination relative to vertical) depends on changes in the trunk inertia tensor. We imposed a loading condition, with a total mass of 4 kg added to the subject's trunk in either a symmetrical or an asymmetrical configuration. This changed the orientation of the trunk inertia tensor while keeping the total trunk mass constant. In order to separate any effects of the inertia tensor from the effects of gravitational torque, the experiment was carried out in both normogravity and microgravity. The results indicated that in normogravity the same final upper body orientation was maintained irrespective of the loading condition. In microgravity, regardless of loading condition, the same orientation of the upper body (though different from that in normogravity) was achieved through different joint organizations: two joints (the hip and ankle joints of the supporting leg) in the asymmetrical loading condition, and one (the hip) in the symmetrical loading condition. In order to determine whether the different orientations of the inertia tensor were perceived during the movement, interjoint coordination was quantified by performing a principal components analysis (PCA) on the supporting and moving hip joints and on the supporting ankle joint. It was expected that different loading conditions would modify the principal components of the PCA. In normogravity, asymmetrical loading decreased the coupling between joints, while in microgravity a strong coupling was preserved whatever the loading condition. It was concluded that the trunk inertia tensor did not play a role during the lateral leg raising task because, even in the absence of gravitational torque, the final upper body orientation and the interjoint coupling were not influenced by the tensor manipulation.
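In the spirit of the coupling analysis described above, a minimal sketch: joint-angle time series are stacked as variables and the fraction of variance captured by the first principal component serves as a simple coupling index. The variable names and the use of PC1 variance as the index are illustrative assumptions, not the study's exact procedure.

```python
# Interjoint coupling via PCA: high PC1 variance share => strong coupling.
import numpy as np

def coupling_index(hip_support, hip_moving, ankle_support):
    """Fraction of variance explained by PC1 across three joint angles."""
    A = np.column_stack([hip_support, hip_moving, ankle_support])
    A = A - A.mean(axis=0)
    evals = np.linalg.eigvalsh(np.cov(A, rowvar=False))  # ascending
    return evals[-1] / evals.sum()   # close to 1 => strong coupling
```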

Relevance: 100.00%

Abstract:

This is the first paper to show, and theoretically analyse, that the presence of auto-correlation can produce considerable alterations in the Type I and Type II errors of univariate and multivariate statistical control charts. To remove this undesired effect, linear inverse ARMA filters are employed, and the application studies in this paper show that false alarms (increased Type I errors) and insensitive monitoring statistics (increased Type II errors) were thereby eliminated.
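A hedged sketch of the prewhitening idea follows: fit an ARMA model to the series and chart the approximately uncorrelated residuals instead of the raw data. The ARMA order and the 3-sigma Shewhart limits are illustrative choices; the paper's inverse-filter construction is not reproduced.

```python
# Prewhitening sketch: chart ARMA residuals rather than autocorrelated data.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def prewhitened_chart(x, order=(1, 0, 1)):
    """Return ARMA residuals and 3-sigma Shewhart limits computed on them."""
    resid = ARIMA(x, order=order).fit().resid   # approx. uncorrelated
    centre, sigma = resid.mean(), resid.std(ddof=1)
    return resid, (centre - 3 * sigma, centre + 3 * sigma)
```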

Relevance: 100.00%

Abstract:

This paper theoretically analyses the recently proposed "Extended Partial Least Squares" (EPLS) algorithm. After pointing out some conceptual deficiencies, a revised algorithm is introduced that covers the middle ground between Partial Least Squares and Principal Component Analysis. It maximises a covariance criterion between a cause and an effect variable set (partial least squares) and allows a complete reconstruction of the recorded data (principal component analysis). The new and conceptually simpler EPLS algorithm has been successfully applied to detecting and diagnosing various fault conditions, where the original EPLS algorithm offered only fault detection.
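To make the covariance criterion concrete, the sketch below computes the first pair of PLS weight vectors as the leading singular vectors of XᵀY, which maximise cov(Xw, Yc). This illustrates only the partial-least-squares side of the description above; the deflation and data-reconstruction steps of the revised EPLS algorithm are not reproduced.

```python
# First PLS weight pair via SVD of X^T Y (maximises the covariance criterion).
import numpy as np

def first_pls_weights(X, Y):
    """Return w, c and the attained covariance scale for centred X, Y."""
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
    return U[:, 0], Vt[0], s[0]   # cause weights, effect weights, covariance
```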

Relevance: 100.00%

Abstract:

Guanine-rich DNA repeat sequences located at the terminal ends of chromosomal DNA can fold in a sequence-dependent manner into G-quadruplex structures, notably the terminal 150–200 nucleotides at the 3' end, which occur as a single-stranded DNA overhang. The crystal structures of quadruplexes with two and four human telomeric repeats show an all-parallel-stranded topology that is readily capable of forming extended stacks of such quadruplex structures, with external TTA loops positioned to potentially interact with other macromolecules. This study reports on possible arrangements for these quadruplex dimers and tetramers, which can be formed from 8 or 16 telomeric DNA repeats, and on a methodology for modeling their interactions with small molecules. A series of computational methods including molecular dynamics, free energy calculations, and principal components analysis have been used to characterize the properties of these higher-order G-quadruplex dimers and tetramers with parallel-stranded topology. The results confirm the stability of the central G-tetrads, the individual quadruplexes, and the resulting multimers. Principal components analysis has been carried out to highlight the dominant motions in these G-quadruplex dimer and multimer structures. The TTA loops are the most flexible part of the model, and the overall multimeric quadruplex becomes more stable with the addition of further G-tetrads. The addition of a ligand to the model confirms the hypothesis that flat planar chromophores stabilize G-quadruplex structures by making them less flexible.
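A hedged sketch of trajectory PCA ("essential dynamics") of the kind used to extract dominant motions is shown below: per-frame Cartesian coordinates are flattened, the covariance of the centred coordinates is diagonalised, and the leading eigenvectors describe the largest-amplitude motions. Prior superposition of frames onto a reference structure, which a real analysis requires, is assumed to have been done; names and defaults are illustrative.

```python
# Trajectory PCA sketch: dominant motions from an MD coordinate array.
import numpy as np

def trajectory_pca(coords, n_modes=5):
    """coords: (n_frames, n_atoms, 3), pre-aligned. Returns leading modes."""
    X = coords.reshape(coords.shape[0], -1)       # flatten to (frames, 3N)
    X = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))  # ascending
    return evals[::-1][:n_modes], evecs[:, ::-1][:, :n_modes]
```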

Relevance: 100.00%

Abstract:

The use of handheld near infrared (NIR) instrumentation as a tool for rapid analysis has the potential to be adopted widely in the animal feed sector. A comparison was made between handheld NIR and benchtop instruments in terms of the proximate analysis of poultry feed, using off-the-shelf calibration models and including statistical analysis. Additionally, melamine-adulterated soya bean products were used to develop qualitative and quantitative calibration models from the NIRS spectral data, with excellent calibration models and prediction statistics obtained. With regard to the quantitative approach, the coefficients of determination (R²) were found to be 0.94–0.99, with the corresponding root mean square errors of calibration and prediction found to be 0.081–0.215% and 0.095–0.288%, respectively. In addition, cross-validation was used to further validate the models, with the root mean square error of cross-validation found to be 0.101–0.212%. Furthermore, by adopting a qualitative approach with the spectral data and applying principal component analysis, it was possible to discriminate between adulterated and pure samples.
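A minimal sketch of the qualitative approach mentioned above: project the NIR spectra onto the first two principal components and inspect the score plot for separation between pure and adulterated samples. Mean-centring as the only preprocessing step is an illustrative assumption; real NIR workflows typically add scatter correction or derivatives.

```python
# PCA score computation for NIR spectra; plot PC1 vs PC2 to check separation.
import numpy as np

def pca_scores(spectra, n_comp=2):
    """spectra: (n_samples, n_wavelengths). Returns (n_samples, n_comp) scores."""
    Xc = spectra - spectra.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_comp].T   # coordinates in the PC1/PC2 plane
```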

Relevance: 100.00%

Abstract:

Single-component geochemical maps are the most basic representation of spatial elemental distributions and are commonly used in environmental and exploration geochemistry. However, the compositional nature of geochemical data imposes several limitations on how the data should be presented. The problems relate to the constant sum problem (closure) and the inherently multivariate relative information conveyed by compositional data. Well known, for instance, is the tendency of all heavy metals to show lower values in soils with significant contributions of diluting elements (e.g., the quartz dilution effect), or the contrary effect, apparent enrichment in many elements due to the removal of potassium during weathering. The validity of classical single-component maps is thus investigated, and reasonable alternatives that honour the compositional character of geochemical concentrations are presented. The first recommended method relies on knowledge-driven log-ratios, chosen to highlight certain geochemical relations or to filter known artefacts (e.g. dilution with SiO2 or volatiles); this is similar to the classical approach of normalising to a single element. The second approach uses so-called log-contrasts, which employ suitable statistical methods (such as classification techniques, regression analysis, principal component analysis, clustering of variables, etc.) to extract potentially interesting geochemical summaries. The caution from this work is that if a compositional approach is not used, it becomes difficult to guarantee that any identified pattern, trend or anomaly is not an artefact of the constant sum constraint. In summary, the authors recommend a chain of enquiry that involves searching for the appropriate statistical method that can answer the required geological or geochemical question whilst maintaining the integrity of the compositional nature of the data: the required log-ratio transformations should be applied, followed by the chosen statistical method. Interpreting the results may require a closer working relationship between statisticians, data analysts and geochemists.
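A hedged sketch of the recommended compositional workflow follows: apply a centred log-ratio (clr) transformation before a multivariate method such as PCA. Strictly positive concentrations are assumed; the zero-replacement step that real geochemical data usually require is omitted.

```python
# clr transform then PCA: the log-ratio step precedes the statistical method.
import numpy as np

def clr(C):
    """Centred log-ratio: log of each part over the row geometric mean."""
    L = np.log(C)                                # C must be strictly positive
    return L - L.mean(axis=1, keepdims=True)

def clr_pca(C, n_comp=2):
    """PCA scores of clr-transformed compositions."""
    Z = clr(C)
    Zc = Z - Z.mean(axis=0)
    U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
    return Zc @ Vt[:n_comp].T
```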

Relevance: 100.00%

Abstract:

Statistics are regularly used to make some form of comparison between items of trace evidence or to deploy the exclusionary principle (Morgan and Bull, 2007) in forensic investigations. Trace evidence is routinely the result of particle size, chemical or modal analyses and as such constitutes compositional data. The issue is that compositional data, including percentages, parts per million, etc., only carry relative information. This may be problematic where a comparison of percentages and other constrained (closed) data is deemed a statistically valid and appropriate way to present trace evidence in a court of law. Notwithstanding an awareness of the constant sum problem since the seminal works of Pearson (1896) and Chayes (1960), and the introduction of log-ratio techniques (Aitchison, 1986; Pawlowsky-Glahn and Egozcue, 2001; Pawlowsky-Glahn and Buccianti, 2011; Tolosana-Delgado and van den Boogaart, 2013), the fact that a constant sum destroys the potential independence of variances and covariances required for correlation and regression analysis and for empirical multivariate methods (principal component analysis, cluster analysis, discriminant analysis, canonical correlation) is all too often not acknowledged in the statistical treatment of trace evidence. Yet the need for a robust treatment of forensic trace evidence analyses is obvious. This research examines the issues and potential pitfalls for forensic investigators if the constant sum constraint is ignored in the analysis and presentation of forensic trace evidence. Forensic case studies involving particle size and mineral analyses as trace evidence are used to demonstrate a compositional data approach using a centred log-ratio (clr) transformation and multivariate statistical analyses.
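A small synthetic illustration of the constant-sum problem discussed above: closing independent positive variables to 100% induces spurious negative correlation between the parts. All numbers are synthetic and purely illustrative.

```python
# Closure artefact demo: independent parts become negatively correlated
# once each row is forced to sum to 100%.
import numpy as np

rng = np.random.default_rng(0)
raw = rng.lognormal(mean=0.0, sigma=0.5, size=(500, 3))   # independent parts
closed = 100.0 * raw / raw.sum(axis=1, keepdims=True)     # rows forced to 100%

# Correlation between parts 1 and 2: near zero before closure,
# noticeably negative after it.
print(np.corrcoef(np.log(raw), rowvar=False)[0, 1])
print(np.corrcoef(closed, rowvar=False)[0, 1])
```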