4 results for GENERATORS
from the National Center for Biotechnology Information - NCBI
Abstract:
Averaged event-related potential (ERP) data recorded from the human scalp reveal electroencephalographic (EEG) activity that is reliably time-locked and phase-locked to experimental events. We report here the application of a method based on information theory that decomposes one or more ERPs recorded at multiple scalp sensors into a sum of components with fixed scalp distributions and sparsely activated, maximally independent time courses. Independent component analysis (ICA) decomposes ERP data into a number of components equal to the number of sensors. The derived components have distinct but not necessarily orthogonal scalp projections. Unlike dipole-fitting methods, the algorithm does not model the locations of their generators in the head. Unlike methods that remove second-order correlations, such as principal component analysis (PCA), ICA also minimizes higher-order dependencies. Applied to detected—and undetected—target ERPs from an auditory vigilance experiment, the algorithm derived ten components that decomposed each of the major response peaks into one or more ICA components with relatively simple scalp distributions. Three of these components were active only when the subject detected the targets, three other components only when the target went undetected, and one in both cases. Three additional components accounted for the steady-state brain response to a 39-Hz background click train. Major features of the decomposition proved robust across sessions and changes in sensor number and placement. This method of ERP analysis can be used to compare responses from multiple stimuli, task conditions, and subject states.
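As a rough illustration of the decomposition described in this abstract, the Python sketch below applies ICA to a multi-sensor data matrix. It uses scikit-learn's FastICA as a stand-in for the infomax ICA algorithm the authors employed; the sensor count, placeholder data, and variable names are illustrative assumptions, not the original analysis pipeline.

import numpy as np
from sklearn.decomposition import FastICA

# Illustrative assumptions: 14 sensors, 512 time points of averaged ERP data.
n_sensors, n_times = 14, 512
rng = np.random.default_rng(0)
erp = rng.standard_normal((n_sensors, n_times))   # placeholder for real averaged ERPs

# One component per sensor, as in the abstract.
ica = FastICA(n_components=n_sensors, random_state=0)
activations = ica.fit_transform(erp.T).T   # independent time courses, (components x times)
scalp_maps = ica.mixing_                   # fixed scalp projections, (sensors x components)

# Each channel is modeled as a sum of components with fixed scalp
# distributions and maximally independent time courses:
reconstruction = scalp_maps @ activations + ica.mean_[:, None]

In practice the random placeholder would be replaced by the recorded ERP matrix; the reconstruction line simply makes explicit the sum-of-fixed-projections model stated in the abstract.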
Abstract:
Our current understanding of the sound-generating mechanism in the songbird vocal organ, the syrinx, is based on indirect evidence and theoretical treatments. The classical avian model of sound production postulates that the medial tympaniform membranes (MTM) are the principal sound generators. We tested the role of the MTM in sound generation and studied the songbird syrinx more directly by filming it endoscopically. After we surgically incapacitated the MTM as a vibratory source, zebra finches and cardinals were not only able to vocalize, but sang nearly normal song. This result shows clearly that the MTM are not the principal sound source. The endoscopic images of the intact songbird syrinx during spontaneous and brain stimulation-induced vocalizations illustrate the dynamics of syringeal reconfiguration before phonation and suggest a different model for sound production. Phonation is initiated by rostrad movement and stretching of the syrinx. At the same time, the syrinx is closed through movement of two soft tissue masses, the medial and lateral labia, into the bronchial lumen. Sound production is always accompanied by vibratory motions of both labia, indicating that these vibrations may be the sound source. However, because of the low temporal resolution of the imaging system, the frequency and phase of labial vibrations could not be assessed in relation to those of the generated sound. Nevertheless, in contrast to the previous model, these observations show that both labia contribute to aperture control and strongly suggest that they play an important role as principal sound generators.
Abstract:
Mammalian hearing depends on the enhanced mechanical properties of the basilar membrane within the cochlear duct. The enhancement arises through the action of outer hair cells that act like force generators within the organ of Corti. Simple considerations show that the underlying mechanism of somatic motility depends on local area changes within the lateral membrane of the cell. The molecular basis for this phenomenon is a dense array of particles that are inserted into the basolateral membrane and that are capable of sensing the membrane potential field. We show here that outer hair cells selectively take up fructose at rates high enough to suggest that a sugar transporter may be part of the motor complex. The relation of these findings to a recent candidate for the molecular motor is also discussed.
Abstract:
As a measure of dynamical structure, short-term fluctuations of coherence between 0.3 and 100 Hz in the electroencephalogram (EEG) of humans were studied from recordings made by chronic subdural macroelectrodes 5-10 mm apart, on temporal, frontal, and parietal lobes, and from intracranial probes deep in the temporal lobe, including the hippocampus, during sleep, alert, and seizure states. The time series of coherence between adjacent sites calculated every second or less often varies widely in stability over time; sometimes it is stable for half a minute or more. Within 2-min samples, coherence commonly fluctuates by a factor up to 2-3, in all bands, within the time scale of seconds to tens of seconds. The power spectrum of the time series of these fluctuations is broad, extending to 0.02 Hz or slower, and is weighted toward the slower frequencies; little power is faster than 0.5 Hz. Some records show conspicuous swings with a preferred duration of 5-15s, either irregularly or quasirhythmically with a broad peak around 0.1 Hz. Periodicity is not statistically significant in most records. In our sampling, we have not found a consistent difference between lobes of the brain, subdural and depth electrodes, or sleeping and waking states. Seizures generally raise the mean coherence in all frequencies and may reduce the fluctuations by a ceiling effect. The coherence time series of different bands is positively correlated (0.45 overall); significant nonindependence extends for at least two octaves. Coherence fluctuations are quite local; the time series of adjacent electrodes is correlated with that of the nearest neighbor pairs (10 mm) to a coefficient averaging approximately 0.4, falling to approximately 0.2 for neighbors-but-one (20 mm) and to < 0.1 for neighbors-but-two (30 mm). The evidence indicates fine structure in time and space, a dynamic and local determination of this measure of cooperativity. Widely separated frequencies tending to fluctuate together exclude independent oscillators as the general or usual basis of the EEG, although a few rhythms are well known under special conditions. Broad-band events may be the more usual generators. Loci only a few millimeters apart can fluctuate widely in seconds, either in parallel or independently. Scalp EEG coherence cannot be predicted from subdural or deep recordings, or vice versa, and intracortical microelectrodes show still greater coherence fluctuation in space and time. Widely used computations of chaos and dimensionality made upon data from scalp or even subdural or depth electrodes, even when reproducible in successive samples, cannot be considered representative of the brain or the given structure or brain state but only of the scale or view (receptive field) of the electrodes used. Relevant to the evolution of more complex brains, which is an outstanding fact of animal evolution, we believe that measures of cooperativity are likely to be among the dynamic features by which major evolutionary grades of brains differ.
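For a concrete picture of the measure tracked in this abstract, the Python sketch below computes a second-by-second coherence time series between two channels and averages it over a broad band. The sampling rate, window length, and band edges are illustrative assumptions, not the recording parameters of the study.

import numpy as np
from scipy.signal import coherence

fs = 256                                    # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
x = rng.standard_normal(fs * 120)           # placeholders for a 2-min record
y = rng.standard_normal(fs * 120)           # from two adjacent subdural sites

win = fs                                    # 1-s analysis windows
lo, hi = 0.3, 100.0                         # broad band of interest (Hz)

coh_series = []
for start in range(0, len(x) - win + 1, win):
    f, cxy = coherence(x[start:start + win], y[start:start + win],
                       fs=fs, nperseg=win // 4)
    band = (f >= lo) & (f <= hi)
    coh_series.append(cxy[band].mean())     # mean coherence in the band

coh_series = np.asarray(coh_series)         # one value per second; its own power
                                            # spectrum shows the slow fluctuations

The resulting coh_series is the kind of short-term coherence time series whose stability, slow fluctuations, and spatial locality the abstract characterizes; with real recordings one would repeat this per frequency band and per electrode pair.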