914 results for Minor Component Analysis
Abstract:
Deformable template models are first applied to track the inner wall of coronary arteries in intravascular ultrasound sequences, mainly to assist angioplasty surgery. A circular template is used to initialize an elliptical deformable model that tracks wall deformation as a balloon placed at the tip of the catheter is inflated. We define a new energy function for driving the behavior of the template and test its robustness on both real and synthetic images. Finally, we introduce a framework for learning and recognizing spatio-temporal geometric constraints based on Principal Component Analysis (eigenconstraints).
Abstract:
Slag composition determines the physical and chemical properties as well as the application performance of molten oxide mixtures. Therefore, it is necessary to establish a routine instrumental technique that produces accurate and precise analytical results for better process and production control. In the present paper, a multi-component analysis technique for powdered metallurgical slag samples by X-ray Fluorescence Spectrometer (XRFS) has been demonstrated. This technique provides rapid and accurate results with minimum sample preparation. It eliminates the requirement for a fused disc by using briquetted samples protected by a layer of Borax®. While the use of theoretical alpha coefficients has allowed accurate calibrations to be made using fewer standard samples, the application of a pseudo-Voigt function for curve fitting makes it possible to resolve overlapped peaks in X-ray spectra that cannot be physically separated. The analytical results for both certified reference materials and industrial slag samples measured using the present technique are comparable to those obtained for the same samples by conventional fused-disc measurements.
Abstract:
The main purpose of this article is to gain an insight into the relationships between variables describing the environmental conditions of the Far Northern section of the Great Barrier Reef, Australia. Several of the variables describing these conditions had different measurement levels, and often they had non-linear relationships. Using non-linear principal component analysis, it was possible to acquire an insight into these relationships. Furthermore, three geographical areas with unique environmental characteristics could be identified. Copyright (c) 2005 John Wiley & Sons, Ltd.
Abstract:
Onsite wastewater treatment systems aim to assimilate domestic effluent into the environment. Unfortunately, failure of such systems is common, and inadequate effluent treatment can have serious environmental implications. The capacity of a particular soil to treat wastewater will change over time: its physical properties influence the rate of effluent movement through the soil, and its chemical properties dictate its ability to renovate effluent. A research project was undertaken to determine the role that physical and chemical soil properties play in predicting the long-term behaviour of soil under effluent irrigation, and to determine whether they have a potential function as early indicators of adverse effects of effluent irrigation on treatment sustainability. Principal Component Analysis (PCA) and Cluster Analysis grouped the soils independently of their soil classifications and allowed us to distinguish the soils most suitable for sustainable long-term effluent irrigation and to determine the most influential soil parameters for characterising them. Multivariate analysis allowed a clear distinction between soils based on their cation exchange capacities, which in turn correlated well with the soil mineralogy. Mixed-mineralogy soils, in particular sodium- or magnesium-dominant soils, are the most susceptible to dispersion under effluent irrigation. The soil Exchangeable Sodium Percentage (ESP) was identified as a crucial parameter and was highly correlated with percentage clay, electrical conductivity, exchangeable sodium, exchangeable magnesium and low Ca:Mg ratios (less than 0.5).
Abstract:
The late Miocene Farallon Negro volcanics, comprising basaltic to rhyodacitic volcano-sedimentary rocks, host the Bajo de la Alumbrera porphyry copper-gold deposit in northwest Argentina. Early studies of the geology of the district have underpinned the general model for porphyry ore deposits, in which hydrothermal alteration and mineralization develop in and around porphyritic intrusions emplaced at shallow depths (2.5-3.5 km) into stratovolcanic assemblages. The Farallon Negro succession is dominated by thick sequences of volcano-sedimentary breccias, with lavas forming a volumetrically minor component. These volcaniclastic rocks conformably overlie crystalline basement-derived sedimentary rocks deposited in a developing foreland basin southeast of the Puna-Altiplano plateau. Within the Farallon Negro volcanics, volcanogenic accumulations evolved from early mafic to intermediate and silicic compositions. The younger and more silicic rocks are demonstrably coeval and comagmatic with the earliest group of mineralized porphyritic intrusions at Bajo de la Alumbrera. Our analysis of the volcanic stratigraphy and facies architecture of the Farallon Negro volcanics indicates that volcanic eruptions evolved from effusive to mixed effusive and explosive styles as magma compositions changed to more intermediate and silicic compositions. An early phase of mafic to intermediate volcanism was characterized by small synsedimentary intrusions with peperitic contacts, and lesser lava units scattered widely throughout the district and interbedded with thick and extensive successions of coarse-grained sedimentary breccias. These sedimentary breccias formed from numerous debris-flow and hyperconcentrated-flow events. A later phase of silicic volcanism included both effusive eruptions, forming several areally restricted lavas, and explosive eruptions, producing more widely dispersed (up to 5 km) tuff units, some up to 30 m thick in proximal sections.
Four key features of the volcanic stratigraphy suggest that the Farallon Negro volcanics need not simply record the construction of a large steep-sided polygenetic stratovolcano: (1) sheetlike, laterally continuous debris-flow and other coarse-grained sedimentary deposits are dominant, particularly in the lower sections; (2) mafic-intermediate composition lavas are volumetrically minor; (3) peperites are present throughout the sequence; and (4) fine-grained lacustrine sandstone-siltstone sequences occur in areas previously thought to be proximal to the summit region of the stratovolcano. Instead, the nature, distribution, and geometry of volcanic and volcaniclastic facies suggest that volcanism occurred as a relatively low relief, multiple-vent volcanic complex at the eastern edge of a broad, >200-km-wide late Miocene volcanic belt, on an active foreland sedimentary basin adjacent to the Puna-Altiplano. Volcanism that occurred synchronously with the earliest stages of porphyry-related mineralization at Bajo de la Alumbrera apparently developed in an alluvial to ring-plain setting that was distal to larger volcanic edifices.
Abstract:
This paper investigates the performance analysis of the separation of mutually independent sources in nonlinear models. Nonlinear mappings in which an unobserved linear mixture is followed by an unknown, invertible nonlinear distortion are found in many signal processing applications. In general, blind separation of sources from their nonlinear mixtures is rather difficult. We propose using a kernel density estimator, incorporated with equivariant gradient analysis, to separate sources subject to nonlinear distortion. The parameters of the kernel density estimator are iteratively updated to minimize the output dependence, expressed as a mutual information criterion. The equivariant gradient algorithm has the form of a nonlinear decorrelation, which permits a convergence analysis. Experiments are presented to illustrate these results.
Abstract:
This paper investigates the performance of the EASI algorithm and the proposed EKENS algorithm for linear and nonlinear mixtures. The proposed EKENS algorithm is based on a modified equivariant algorithm and kernel density estimation. The theory and characteristics of both algorithms are discussed for the blind source separation model. The separation structure for nonlinear mixtures consists of a nonlinear stage followed by a linear stage. Simulations with artificial and natural data demonstrate the feasibility and good performance of the proposed EKENS algorithm.
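The EKENS algorithm itself is not reproduced in the abstract. As a hedged sketch of its kernel-density ingredient only: a Parzen-window (Gaussian-kernel) estimate of a separator output's density, and the corresponding score function (the gradient of the log-density), which density-based separation updates of this kind typically rely on. Function names and the bandwidth choice are illustrative, not from the paper.

```python
import math

def gaussian_kernel(u):
    # Standard normal kernel
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde(x, samples, h):
    # Parzen-window density estimate at x with bandwidth h
    return sum(gaussian_kernel((x - s) / h) for s in samples) / (len(samples) * h)

def kde_grad(x, samples, h):
    # Derivative of the density estimate, using k'(u) = -u * k(u)
    return sum(-((x - s) / h) * gaussian_kernel((x - s) / h)
               for s in samples) / (len(samples) * h * h)

def score(x, samples, h):
    # Estimated d/dx log p(x), the nonlinearity a density-based
    # separation rule would plug into its equivariant update
    return kde_grad(x, samples, h) / kde(x, samples, h)
```

In a full algorithm, the separator outputs would serve as `samples`, and the score estimates would drive the gradient step on the de-mixing parameters.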
Abstract:
Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Previous attempts to formulate mixture models for PCA have therefore to some extent been ad hoc. In this paper, PCA is formulated within a maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context of clustering, density modelling and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition.
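For a single analyser, Tipping and Bishop's maximum-likelihood solution has a closed form: keeping the top q eigenpairs of the sample covariance S, the noise variance sigma^2 is the average of the discarded eigenvalues and the loading matrix is W = U_q (L_q - sigma^2 I)^(1/2) R for an arbitrary rotation R. A minimal sketch for a 2x2 covariance with q = 1, with the eigendecomposition done analytically so the example is self-contained:

```python
import math

def eig_sym2(a, b, c):
    # Eigenvalues (descending) and leading eigenvector of [[a, b], [b, c]]
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(tr * tr / 4.0 - det)
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc
    if b != 0.0:
        v = (b, l1 - a)
    else:
        v = (1.0, 0.0) if a >= c else (0.0, 1.0)
    n = math.hypot(v[0], v[1])
    return (l1, l2), (v[0] / n, v[1] / n)

def ppca_ml(a, b, c):
    # ML probabilistic PCA with q = 1 latent dimension:
    # sigma^2 = average of the discarded eigenvalues (here just l2),
    # W = u1 * sqrt(l1 - sigma^2), up to an arbitrary rotation R.
    (l1, l2), u1 = eig_sym2(a, b, c)
    sigma2 = l2
    scale = math.sqrt(l1 - sigma2)
    return (u1[0] * scale, u1[1] * scale), sigma2
```

For S = [[2, 1], [1, 2]] this gives W = (1, 1) and sigma^2 = 1, so the model covariance W W^T + sigma^2 I reproduces S exactly; in the mixture model of the paper, each component carries its own (W, sigma^2, mean) and the parameters are found by EM rather than this closed form.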
Abstract:
This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG encompasses both the methods for measuring minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc., that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators.
One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines. In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks to the analysis of MEG data.
It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
Abstract:
We analyze a Big Data set of geo-tagged tweets for a year (Oct. 2013–Oct. 2014) to understand regional linguistic variation in the U.S. Prior work on regional linguistic variation usually took a long time to collect data and focused on either rural or urban areas. Geo-tagged Twitter data offers an unprecedented database with rich linguistic representation of fine spatiotemporal resolution and continuity. From the one-year Twitter corpus, we extract lexical characteristics for Twitter users by summarizing the frequencies of a set of lexical alternations that each user has used. We spatially aggregate and smooth each lexical characteristic to derive county-based linguistic variables, from which orthogonal dimensions are extracted using principal component analysis (PCA). Finally, a regionalization method is used to discover hierarchical dialect regions from the PCA components. The regionalization results reveal interesting regional linguistic variations in the U.S. The discovered regions not only confirm past research findings in the literature but also provide new insights and a more detailed understanding of very recent linguistic patterns in the U.S.
Abstract:
This dissertation introduces a new approach for assessing the effects of pediatric epilepsy on the language connectome. Two novel data-driven network construction approaches are presented. These methods rely on connecting different brain regions using either the extent or the intensity of language-related activations as identified by independent component analysis of fMRI data. An auditory description decision task (ADDT) paradigm was used to activate the language network for 29 patients and 30 controls recruited from three major pediatric hospitals. Empirical evaluations illustrated that pediatric epilepsy can cause, or is associated with, a reduction in network efficiency. Patients showed a propensity to inefficiently employ the whole brain network to perform the ADDT language task; on the contrary, controls seemed to efficiently use smaller segregated network components to achieve the same task. To explain the causes of the decreased efficiency, graph theoretical analysis was carried out. The analysis revealed no substantial global network feature differences between the patient and control groups. It also showed that for both subject groups the language network exhibited small-world characteristics; however, the patients' extent of activation network showed a tendency towards more random networks. It was also shown that the intensity of activation network displayed ipsilateral hub reorganization at the local level. The left hemispheric hubs displayed greater centrality values for patients, whereas the right hemispheric hubs displayed greater centrality values for controls. This hub hemispheric disparity was not correlated with the right atypical language laterality found in six patients. Finally, it was shown that a multi-level unsupervised clustering scheme based on self-organizing maps, a type of artificial neural network, and k-means was able to fairly and blindly separate the subjects into their respective patient or control groups.
The clustering was initiated using the local nodal centrality measurements only. Compared to the extent of activation network, the intensity of activation network clustering demonstrated better precision. This outcome supports the assertion that the local centrality differences presented by the intensity of activation network can be associated with focal epilepsy.
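The dissertation's full pipeline (ICA-derived networks, SOM plus k-means clustering) cannot be reconstructed from the abstract, and the specific centrality measures used are not named. As a hedged illustration of one simple local nodal centrality that such a clustering could be initiated with, normalized degree centrality on an adjacency matrix might look like:

```python
def degree_centrality(adj):
    # Normalized degree centrality of each node in an undirected,
    # unweighted graph given as an adjacency matrix (1 = edge, 0 = none):
    # a node's degree divided by the maximum possible degree, n - 1.
    n = len(adj)
    return [sum(row) / (n - 1) for row in adj]

# Star graph: node 0 is a hub connected to every other node
star = [
    [0, 1, 1, 1],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
]
print(degree_centrality(star))  # hub scores 1.0, leaves score 1/3
```

In a connectome study, the per-node centrality vector of each subject's network would form the feature vector handed to the clustering stage.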
Abstract:
Recent research into resting-state functional magnetic resonance imaging (fMRI) has shown that the brain is very active during rest. This thesis work utilizes blood oxygenation level dependent (BOLD) signals to investigate the spatial and temporal functional network information found within resting-state data, and aims to investigate the feasibility of extracting functional connectivity networks using different methods as well as the dynamic variability within some of the methods. Furthermore, this work looks into producing valid networks using a sparsely-sampled sub-set of the original data.
In this work we utilize four main methods: independent component analysis (ICA), principal component analysis (PCA), correlation, and a point-processing technique. Each method comes with unique assumptions, as well as strengths and limitations, in exploring how the resting-state components interact in space and time.
Correlation is perhaps the simplest technique. Using this technique, resting-state patterns can be identified based on how similar a voxel's time profile is to a seed region's time profile. However, this method requires a seed region and can only identify one resting-state network at a time. This simple correlation technique is able to reproduce the resting-state network using data from a single subject's scan session as well as from 16 subjects.
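The seed-based correlation step above can be sketched in a few lines, assuming time courses are plain lists of samples (thresholding of the resulting map, which produces the final network, is left out):

```python
import math

def pearson(x, y):
    # Pearson correlation between a seed time course x and a voxel time course y
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def seed_map(seed, voxels):
    # Correlate the seed with every voxel time course; thresholding this
    # map yields the resting-state network associated with that seed.
    return [pearson(seed, v) for v in voxels]
```

This makes the method's limitations concrete: the result depends entirely on the chosen `seed`, and one call produces exactly one network.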
Independent component analysis, the second technique, is supported by established software packages that can be used to implement it. ICA can extract multiple components from a data set in a single analysis. The disadvantage is that the resting-state networks it produces are all independent of each other, under the assumption that the spatial pattern of functional connectivity is the same across all time points. ICA successfully reproduces resting-state connectivity patterns for both one subject and a 16-subject concatenated data set.
Using principal component analysis, the dimensionality of the data is reduced by finding the directions in which the variance of the data is greatest. This method relies on the same basic matrix mathematics as ICA, with a few important differences that are outlined later in this text. Using this method, different functional connectivity patterns are sometimes identifiable, but with a large amount of noise and variability.
To begin investigating the dynamics of functional connectivity, the correlation technique is used to compare the first and second halves of a scan session. Minor differences are discernible between the correlation results of the two halves. Further, a sliding-window technique is implemented to study the correlation coefficients through different sizes of correlation windows over time. This technique makes it apparent that the correlation level with the seed region is not static throughout the scan.
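The sliding-window analysis can be sketched as follows; the window width and step are free parameters, and variation of the per-window coefficient across windows is what indicates non-static connectivity:

```python
import math

def pearson(x, y):
    # Pearson correlation of two equal-length time courses
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def sliding_window_corr(seed, voxel, width, step=1):
    # Correlation between the seed and a voxel time course computed
    # inside each window; a flat result means static connectivity,
    # a varying one means the coupling changes over the scan.
    return [pearson(seed[i:i + width], voxel[i:i + width])
            for i in range(0, len(seed) - width + 1, step)]
```

For example, two signals that track each other early in the "scan" and diverge later produce a correlation trace that drifts from +1 toward -1 across windows.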
The last method introduced, a point-process method, is one of the more novel techniques because it does not require analysis of the continuous time course. Here, network information is extracted from brief occurrences of high- or low-amplitude signals within a seed region. Because point processing uses fewer time points from the data, the statistical power of the results is lower, and there are larger variations in DMN patterns between subjects. In addition to boosted computational efficiency, a benefit of the point-process method is that the patterns produced for different seed regions do not have to be independent of one another.
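A hedged sketch of the point-process idea, assuming the seed time course is already standardized so that a simple threshold marks the high-amplitude events (the actual event criteria used in this work may differ):

```python
def point_process_map(seed, voxels, threshold=1.0):
    # Time points where the seed signal exceeds the threshold are the
    # "events"; averaging each voxel's values at only those instants
    # yields a network map without using the continuous time course.
    events = [t for t, s in enumerate(seed) if s > threshold]
    return [sum(v[t] for t in events) / len(events) for v in voxels]

seed = [0.1, 2.3, -0.2, 3.1, 0.0]        # standardized seed time course
voxels = [[1.0, 10.0, 1.0, 20.0, 1.0],   # voxel co-activating with the seed
          [5.0, 0.0, 5.0, 0.0, 5.0]]     # voxel silent at the seed's events
print(point_process_map(seed, voxels))   # high mean for the first voxel only
```

Only the event time points enter the average, which is why the statistical power drops as the threshold rises and the event count shrinks.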
This work compares four unique methods of identifying functional connectivity patterns. ICA is a technique currently used by many scientists studying functional connectivity patterns. The PCA technique is not optimal for the level of noise and the distribution of these data sets. The correlation technique is simple and obtains good results; however, a seed region is needed, and the method assumes that the DMN regions are correlated with the seed throughout the entire scan. When the more dynamic aspects of correlation were examined, changing patterns of correlation were evident. The last, point-process method produces promising results, identifying functional connectivity networks using only low- and high-amplitude BOLD signals.