945 results for mean field independent component analysis


Relevance:

100.00%

Publisher:

Abstract:

A recently proposed mean-field theory of mammalian cortical rhythmogenesis describes the salient features of electrical activity in the cerebral macrocolumn using inhibitory and excitatory neuronal populations (Liley et al 2002). This model can produce a range of important human EEG (electroencephalogram) features, such as the alpha rhythm, the 40 Hz activity thought to be associated with conscious awareness (Bojak & Liley 2007), and the changes in EEG spectral power associated with general anesthetic effect (Bojak & Liley 2005). From the point of view of nonlinear dynamics, the model entails a vast parameter space within which multistability, pseudoperiodic regimes, various routes to chaos, fat fractals and rich bifurcation scenarios occur for physiologically relevant parameter values (van Veen & Liley 2006). The origin and character of this complex behaviour, and its relevance for EEG activity, will be illustrated. The existence of short-lived unstable brain states will also be discussed in terms of the available theoretical and experimental results. A perspective on future analysis will conclude the presentation.
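The qualitative behaviour of such coupled excitatory-inhibitory populations can be illustrated with a deliberately minimal rate model. The sketch below is a Wilson-Cowan-style caricature, not the Liley et al equations; all parameter values are arbitrary choices for illustration only:

```python
import numpy as np

def wilson_cowan(w_ee=12.0, w_ei=10.0, w_ie=9.0, w_ii=3.0,
                 p=1.2, dt=1e-3, steps=5000):
    """Two-population mean-field caricature: each population's activity
    relaxes toward a sigmoid of its net input (E excites both, I inhibits
    both, p is external drive to E). Returns the excitatory trace."""
    def s(x):
        return 1.0 / (1.0 + np.exp(-x))
    e, i = 0.1, 0.1
    trace = []
    for _ in range(steps):
        de = -e + s(w_ee * e - w_ei * i + p)
        di = -i + s(w_ie * e - w_ii * i)
        # Forward-Euler step; each update is a convex mix of the current
        # state and a sigmoid value, so activities stay inside (0, 1).
        e += dt * de
        i += dt * di
        trace.append(e)
    return np.array(trace)
```

The full model adds synaptic kinetics, conduction delays and physiologically constrained parameter ranges, which is where the rhythms and the rich bifurcation structure described above arise.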

Relevance:

100.00%

Publisher:

Abstract:

An analysis method for diffusion tensor (DT) magnetic resonance imaging data is described, which, contrary to the standard method (multivariate fitting), does not require a specific functional model for diffusion-weighted (DW) signals. The method uses principal component analysis (PCA) under the assumption of a single fibre per pixel. PCA and the standard method were compared using simulations and human brain data. The two methods were equivalent in determining fibre orientation. PCA-derived fractional anisotropy and DT relative anisotropy had similar signal-to-noise ratio (SNR) and dependence on fibre shape. PCA-derived mean diffusivity had similar SNR to the respective DT scalar, and it depended on fibre anisotropy. Appropriate scaling of the PCA measures resulted in very good agreement between PCA and DT maps. In conclusion, the assumption of a specific functional model for DW signals is not necessary for characterization of anisotropic diffusion in a single fibre.
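The claim that fibre orientation can be recovered without a tensor fit can be illustrated with a toy construction (not the paper's pixel-wise PCA algorithm): weight each gradient direction by its apparent diffusion coefficient and take the leading eigenvector of the resulting second-moment matrix. The synthetic voxel below is an assumption for demonstration.

```python
import numpy as np

def principal_direction(signals, s0, bval, bvecs):
    # Per-direction apparent diffusion coefficient from the DW signal decay.
    adc = -np.log(signals / s0) / bval
    # Second-moment matrix sum_k adc_k * g_k g_k^T; its leading eigenvector
    # points along the direction of fastest diffusion, i.e. the fibre axis.
    m = (bvecs.T * adc) @ bvecs
    eigvals, eigvecs = np.linalg.eigh(m)
    return eigvecs[:, -1]

# Synthetic single-fibre voxel: diffusion tensor with principal axis along x.
rng = np.random.default_rng(0)
bvecs = rng.normal(size=(64, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
d = np.diag([1.7e-3, 0.3e-3, 0.3e-3])   # mm^2/s, typical white-matter values
bval, s0 = 1000.0, 100.0
signals = s0 * np.exp(-bval * np.einsum('ij,jk,ik->i', bvecs, d, bvecs))
v = principal_direction(signals, s0, bval, bvecs)
```

With noise-free signals the recovered direction aligns closely with the simulated fibre axis; the published method additionally derives anisotropy and mean-diffusivity analogues from the PCA decomposition.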

Relevance:

100.00%

Publisher:

Abstract:

A model describing the dissociation of a monoprotic acid and a method for determining its pK value are presented. The model is based on a mean-field approximation. The Poisson-Boltzmann equation is solved numerically under spherical symmetry, and an analytical solution of its linearized form is given. Using the pH values from a dilution experiment with galacturonic acid as input data, the proposed method yielded an estimate of pK = 3.25 at a temperature of 25 degrees C. Values for the complex dimensions and the degree of dissociation are calculated from experimental pH values for solution concentrations ranging from 0.1 to 60 mM. The present analysis leads to the conclusion that the Poisson-Boltzmann equation and its linear form are equally suited to describing such systems.
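For orientation, here is a far simpler ideal-solution sketch of the same inverse problem, with none of the Poisson-Boltzmann electrostatics or activity corrections of the paper: predict the dilution pH from a trial pK via the monoprotic equilibrium, then grid-search the pK that best reproduces the measurements. The pK = 3.25 value from the text is used only to generate synthetic "observations".

```python
import numpy as np

def ph_monoprotic(pk, conc):
    """Ideal-dilution pH of a weak monoprotic acid HA at total
    concentration conc (mol/L): with [A-] ~ [H+], Ka = h^2 / (c - h),
    so h solves h^2 + Ka*h - Ka*c = 0 (water autoionization ignored)."""
    ka = 10.0 ** (-pk)
    h = (-ka + np.sqrt(ka * ka + 4.0 * ka * conc)) / 2.0
    return -np.log10(h)

def fit_pk(concs, ph_obs, grid=np.arange(2.0, 5.0, 0.001)):
    """Grid-search the pK minimizing squared pH misfit over a dilution series."""
    errs = [np.sum((ph_monoprotic(pk, concs) - ph_obs) ** 2) for pk in grid]
    return grid[int(np.argmin(errs))]
```

At the millimolar concentrations of the experiment this ideal treatment is self-consistent; the point of the paper is precisely the electrostatic correction that this sketch leaves out.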

Relevance:

100.00%

Publisher:

Abstract:

The possibility of kaon condensation in high-density symmetric nuclear matter is investigated, including both s- and p-wave kaon-baryon interactions within relativistic mean-field (RMF) theory. Above a certain density, we have a collective K̄_S state carrying the same quantum numbers as the antikaon. The appearance of the K̄_S state is caused by the time component of the axial-vector interaction between kaons and baryons. It is shown that the system becomes unstable with respect to condensation of K–K̄_S pairs. We consider how the effective baryon masses affect the kaon self-energy coming from the time component of the axial-vector interaction. The role of the spatial component of the axial-vector interaction in the possible existence of the collective kaonic states is also discussed in connection with A-mixing effects in the ground state of high-density matter. Implications of K–K̄_S condensation for high-energy heavy-ion collisions are briefly mentioned. (c) 2005 Elsevier B.V. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

In functional magnetic resonance imaging (fMRI) coherent oscillations of the blood oxygen level-dependent (BOLD) signal can be detected. These arise when brain regions respond to external stimuli or are activated by tasks. The same networks have been characterized during wakeful rest when functional connectivity of the human brain is organized in generic resting-state networks (RSN). Alterations of RSN emerge as neurobiological markers of pathological conditions such as altered mental state. In single-subject fMRI data the coherent components can be identified by blind source separation of the pre-processed BOLD data using spatial independent component analysis (ICA) and related approaches. The resulting maps may represent physiological RSNs or may be due to various artifacts. In this methodological study, we propose a conceptually simple and fully automatic time course based filtering procedure to detect obvious artifacts in the ICA output for resting-state fMRI. The filter is trained on six and tested on 29 healthy subjects, yielding mean filter accuracy, sensitivity and specificity of 0.80, 0.82, and 0.75 in out-of-sample tests. To estimate the impact of clearly artifactual single-subject components on group resting-state studies we analyze unfiltered and filtered output with a second level ICA procedure. Although the automated filter does not reach performance values of visual analysis by human raters, we propose that resting-state compatible analysis of ICA time courses could be very useful to complement the existing map or task/event oriented artifact classification algorithms.
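The reported filter accuracy, sensitivity and specificity follow the usual binary-classification definitions, which can be stated compactly. The labels below are synthetic; the paper's time-course features and thresholds are not reproduced here.

```python
import numpy as np

def filter_metrics(y_true, y_pred):
    """Accuracy, sensitivity and specificity for a binary artifact
    classifier (1 = artifactual component, 0 = physiological)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))   # artifacts caught
    tn = np.sum((y_true == 0) & (y_pred == 0))   # physiology kept
    fp = np.sum((y_true == 0) & (y_pred == 1))   # physiology discarded
    fn = np.sum((y_true == 1) & (y_pred == 0))   # artifacts missed
    acc = (tp + tn) / len(y_true)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return acc, sens, spec
```

For an artifact filter, specificity is the critical number: a false positive discards a genuine resting-state component from the group analysis.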

Relevance:

100.00%

Publisher:

Abstract:

The study describes brain areas involved in medial temporal lobe (mTL) seizures of 12 patients. All patients showed so-called oro-alimentary behavior within the first 20 s of clinical seizure manifestation characteristic of mTL seizures. Single photon emission computed tomography (SPECT) images of regional cerebral blood flow (rCBF) were acquired from the patients in ictal and interictal phases and from normal volunteers. Image analysis employed categorical comparisons with statistical parametric mapping and principal component analysis (PCA) to assess functional connectivity. PCA supplemented the findings of the categorical analysis by decomposing the covariance matrix containing images of patients and healthy subjects into distinct component images of independent variance, including areas not identified by the categorical analysis. Two principal components (PCs) discriminated the subject groups: patients with right or left mTL seizures and normal volunteers, indicating distinct neuronal networks implicated by the seizure. Both PCs were correlated with seizure duration, one positively and the other negatively, confirming their physiological significance. The independence of the two PCs yielded a clear clustering of subject groups. The local pattern within the temporal lobe describes critical relay nodes which are the counterpart of oro-alimentary behavior: (1) right mesial temporal zone and ipsilateral anterior insula in right mTL seizures, and (2) temporal poles on both sides that are densely interconnected by the anterior commissure. Regions remote from the temporal lobe may be related to seizure propagation and include positively and negatively loaded areas. These patterns, the covarying areas of the temporal pole and occipito-basal visual association cortices, for example, are related to known anatomic paths.

Relevance:

100.00%

Publisher:

Abstract:

In the Persian Gulf and the Gulf of Oman, marl forms the primary sediment cover, particularly on the Iranian side. A detailed quantitative description of the sediment components > 63 µ has been attempted in order to establish the regional distribution of the most important constituents as well as the criteria governing marl sedimentation in general. During the course of the analysis, the sand fraction from about 160 bottom-surface samples was split into 5 phi° fractions, and 500 to 800 grains were counted in each individual fraction. The grains were catalogued in up to 40 grain-type categories. The gravel fraction was counted separately and the values calculated as weight percent. Fundamental to understanding the mode of formation of the marl sediment is the "rule" of independent availability of component groups. It states that the sedimentation of different component groups takes place independently, and that variation in the quantity of one component is independent of the presence or absence of other components. This means, for example, that different grain-size spectra are not necessarily developed through transport sorting. In the Persian Gulf they are more likely the result of differences in the amount of clay-rich fine sediment brought into the restricted mouth areas of the Iranian rivers. These local increases in clayey sediment dilute the autochthonous, for the most part carbonate, coarse fraction. This also explains the frequent facies changes from carbonate to clayey marl. The main constituent groups of the coarse fraction are faecal pellets and lumps, the non-carbonate mineral components, the Pleistocene relict sediment, the benthonic biogenic components and the plankton. Faecal pellets and lumps are formed through grain-size transformation of fine sediment. Higher percentages of these components can be correlated with large amounts of fine sediment and organic C.
No discernible change takes place in carbonate minerals as a result of digestion and faecal-pellet formation. The non-carbonate sand components originate from several unrelated sources and can be distinguished by their different grain-size spectra, as well as by other characteristics. The Iranian rivers supply the greatest amounts (well-sorted fine sand). Their quantitative variations can be used to trace fine-sediment transport directions. Similar mineral maxima in the sediment of the Gulf of Oman mark the path of the Persian Gulf outflow water. Far out from the coast, the basin bottoms in places contain abundant relict minerals (poorly sorted medium sand) and localized areas of reworked salt-dome material (medium sand to gravel). Wind transport produces only a minimal "background value" of mineral components (very fine sand). Biogenic and non-biogenic relict sediments can be placed in separate component groups with the help of several petrographic criteria. Part of the relict sediment (well-sorted fine sand) is allochthonous and was derived from the terrigenous sediment of river mouths. The main part (coarse, poorly sorted sediment), however, was derived from the late Pleistocene and forms a quasi-autochthonous cover over wide areas which receive little recent sedimentation. Bioturbation results in a mixing of the relict sediment with the overlying younger sediment. Resulting vertical sediment displacement of more than 2.5 m has been observed. This vertical mixing of relict sediment is also partially responsible for the present-day grain-size anomalies (coarse sediment in deep water) found in the Persian Gulf. The mainly aragonitic components forming the relict sediment show a finely subdivided facies pattern reflecting the paleogeography of carbonate tidal flats dating from the post-Pleistocene transgression. Standstill periods are reflected at 110-125 m (shelf break), 64-61 m and 53-41 m (e.g. coarse-grained quartz and oolite concentrations), and at 25-30 m.
Comparing these depths to similar occurrences on other shelf regions (e.g. the Timor Sea) leads to the conclusion that minimal tectonic activity was taking place in the Persian Gulf at this time. The Pleistocene climate, as evidenced by the absence of Iranian river sediment, was probably drier than the present-day Persian Gulf climate. Foremost among the benthonic biogenic components are the foraminifera and mollusks. When a ratio is set up between the two, it can be seen that each group is very sensitive to bottom type, i.e., the production of benthonic mollusks increases when a stable (hard) bottom is present, whereas the foraminifera favour a soft bottom. In this way, regardless of grain size, areas with high and low rates of recent sedimentation can be sharply defined. The almost complete absence of mollusks in water deeper than 200 to 300 m gives a rough sedimentologic water-depth indicator. The sum of the benthonic foraminifera and mollusks was used as a relatively constant reference value for the investigation of many other sediment components. The ratio between arenaceous foraminifera and those with carbonate shells shows a direct relationship to the amount of coarse-grained material in the sediment, as the frequency of arenaceous foraminifera depends heavily on the availability of sand grains. The nearness of "open" coasts (Iranian river mouths) is directly reflected in the high percentage of plant remains, and indirectly by the increased numbers of ostracods and vertebrates. Plant fragments do not reach their ultimate point of deposition in a free-swimming state, but are transported along with the remainder of the terrigenous fine sediment. The echinoderms (mainly echinoids in the West Basin and ophiuroids in the Central Basin) attain their maximum development at the greatest depth reached by the action of the largest waves. This depth varies, depending on the exposure of the slope to the waves, between 12 to 14 and 30 to 35 m.
Corals and bryozoans have proved to be good indicators of stable unchanging bottom conditions. Although bryozoans and alcyonarian spiculae are independent of water depth, scleractinians thrive only above 25 to 30 m. The beginning of recent reef growth (restricted by low winter temperatures) was seen only in one single area - on a shoal under 16 m of water. The coarse plankton fraction was studied primarily through the use of a plankton-benthos ratio. The increase in planktonic foraminifera with increasing water depth is here heavily masked by the "Adjacent sea effect" of the Persian Gulf: for the most part the foraminifera have drifted in from the Gulf of Oman. In contrast, the planktonic mollusks are able to colonize the entire Persian Gulf water body. Their amount in the plankton-benthos ratio always increases with water depth and thereby gives a reliable picture of local water depth variations. This holds true to a depth of around 400 m (corresponding to 80-90 % plankton). This water depth effect can be removed by graphical analysis, allowing the percentage of planktonic mollusks per total sample to be used as a reference base for relative sedimentation rate (sedimentation index). These values vary between 1 and > 1000 and thereby agree well with all the other lines of evidence. The "pteropod ooze" facies is then markedly dependent on the sedimentation rate and can theoretically develop at any depth greater than 65 m (proven at 80 m). It should certainly no longer be thought of as "deep sea" sediment. Based on the component distribution diagrams, grain size and carbonate content, the sediments of the Persian Gulf and the Gulf of Oman can be grouped into 5 provisional facies divisions (Chapt.19). 
Particularly noteworthy among these are, first, the fine-grained clayey marl facies occupying the 9 narrow outflow areas of rivers and, second, the coarse-grained, high-carbonate marl facies rich in relict sediment which covers wide sediment-poor areas of the basin bottoms. Sediment transport is for the most part restricted to grain sizes < 150 µ and in shallow water is largely coast-parallel due to wave action, at times supplemented by tidal currents. Below the wave base, gravity transport prevails. The only current capable of moving sediment is the Persian Gulf outflow water in the Gulf of Oman.

Relevance:

100.00%

Publisher:

Abstract:

Background: Magnetoencephalography (MEG) provides a direct measure of brain activity with high combined spatiotemporal resolution. Preprocessing is necessary to reduce contributions from environmental interference and biological noise. New method: The effect of different preprocessing techniques on the signal-to-noise ratio is evaluated. The signal-to-noise ratio (SNR) was defined as the ratio between the mean signal amplitude (evoked field) and the standard error of the mean over trials. Results: Recordings from 26 subjects obtained during an event-related visual paradigm with an Elekta MEG scanner were employed. Two methods were considered as first-step noise reduction: Signal Space Separation (SSS) and temporal Signal Space Separation (tSSS), which decompose the signal into components with origins inside and outside the head. Both algorithms increased the SNR by approximately 100%. Epoch-based methods, aimed at identifying and rejecting epochs containing eye blinks, muscular artifacts and sensor jumps, provided an SNR improvement of 5-10%. The decomposition methods evaluated were independent component analysis (ICA) and second-order blind identification (SOBI). The increase in SNR was about 36% with ICA and 33% with SOBI. Comparison with existing methods: No previous systematic evaluation of the effect of the typical preprocessing steps on the SNR of the MEG signal has been performed. Conclusions: The application of either SSS or tSSS is mandatory in Elekta systems; no significant differences were found between the two. While epoch-based methods have been routinely applied, the less often considered decomposition methods were clearly superior, and therefore their use seems advisable.
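The SNR definition used here (mean evoked amplitude over the standard error of the mean across trials) reduces to a few lines. Collapsing the per-sample values by their means is one plausible scalar summary, not necessarily the paper's exact choice; the synthetic data below are illustrative only. Note that halving the noise amplitude, which is roughly what the ~100% SSS improvement corresponds to, doubles this ratio.

```python
import numpy as np

def evoked_snr(trials):
    """trials: (n_trials, n_samples). Evoked field = average over trials;
    noise term = standard error of that mean across trials."""
    evoked = trials.mean(axis=0)
    sem = trials.std(axis=0, ddof=1) / np.sqrt(trials.shape[0])
    return np.abs(evoked).mean() / sem.mean()

# Synthetic 10 Hz evoked response, 50 trials, two noise levels.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
signal = np.sin(2.0 * np.pi * 10.0 * t)
clean = signal + 0.1 * rng.normal(size=(50, 200))
noisy = signal + 1.0 * rng.normal(size=(50, 200))
```

Because the SEM shrinks as 1/sqrt(n_trials), this metric also quantifies the familiar trade-off between recording time and averaging gain.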

Relevance:

100.00%

Publisher:

Abstract:

In recent years, Independent Component Analysis (ICA) has proven to be a powerful signal-processing technique for solving Blind Source Separation (BSS) problems in different scientific domains. In the present work, an application of ICA to the processing of NIR hyperspectral images to detect traces of peanut in wheat flour is presented. Processing was performed without a priori knowledge of the chemical composition of the two food materials. The aim was to extract the source signals of the different chemical components from the initial data set and to use them to determine the distribution of peanut traces in the hyperspectral images. To determine the optimal number of independent components to be extracted, the Random ICA by blocks method was used. This method is based on the repeated calculation of several models using an increasing number of independent components after randomly segmenting the matrix data into two blocks, and then calculating the correlations between the signals extracted from the two blocks. The extracted ICA signals were interpreted, and their ability to classify peanut and wheat flour was studied. Finally, all the extracted ICs were used to construct a single synthetic signal that could be used directly with the hyperspectral images to enhance the contrast between the peanut and the wheat flours in a real multi-use industrial environment. Furthermore, feature-extraction methods (a connected-components labelling algorithm followed by a flood-fill method to extract object contours) were applied in order to target the spatial location of the peanut traces. A good visualization of the distribution of peanut traces was thus obtained.

Relevance:

100.00%

Publisher:

Abstract:

We propose a general mean-field model of ligand-protein interactions to determine the thermodynamic equilibrium of a system at finite temperature. The method is employed in structural assessments of two human immunodeficiency virus type 1 protease complexes, where the gross effects of protein flexibility are incorporated by utilizing a database of crystal structures. Analysis of the energy spectra for these complexes has revealed that structural and thermodynamic aspects of molecular recognition can be rationalized on the basis of the extent of frustration in the binding energy landscape. In particular, the relationship between receptor-specific binding of these ligands to human immunodeficiency virus type 1 protease and a minimal frustration principle is analyzed.

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study of both the methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators.
One of the main objectives of this thesis will be to prove that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in the MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines. In this project, we seek to address these issues by working almost exclusively with single-channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data.
It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
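As a concrete example of the dynamical-systems toolkit alluded to here, analysis of a single-channel recording typically begins with a time-delay (Takens-style) embedding of the scalar time series, which reconstructs a state-space trajectory from one observable. The routine below is generic and not tied to any particular MEG dataset; dimension and lag are free parameters chosen by the analyst.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of a 1-D series x: row j is
    (x[j], x[j+tau], ..., x[j+(dim-1)*tau]).
    Returns an array of shape (len(x) - (dim-1)*tau, dim)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
```

On the embedded trajectory one can then estimate invariants (correlation dimension, Lyapunov exponents) or fit nonlinear predictive models, which is the kind of analysis this thesis pursues.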