864 results for cognitive control, aging, heart rate variability (HRV), respiratory sinus arrhythmia (RSA), event-related potentials (ERPs)
Abstract:
This study analyzed high-density event-related potentials (ERPs) within an electrical neuroimaging framework to provide insights regarding the interaction between multisensory processes and stimulus probabilities. Specifically, we identified the spatiotemporal brain mechanisms by which the proportion of temporally congruent and task-irrelevant auditory information influences stimulus processing during a visual duration discrimination task. The spatial position (top/bottom) of the visual stimulus was indicative of how frequently the visual and auditory stimuli would be congruent in their duration (i.e., context of congruence). Stronger influences of irrelevant sound were observed when contexts associated with a high proportion of auditory-visual congruence repeated and also when contexts associated with a low proportion of congruence switched. Context of congruence and context transition resulted in weaker brain responses at 228 to 257 ms poststimulus to conditions giving rise to larger behavioral cross-modal interactions. Importantly, a control oddball task revealed that both congruent and incongruent audiovisual stimuli triggered equivalent non-linear multisensory interactions when congruence was not a relevant dimension. Collectively, these results are well explained by statistical learning, which links a particular context (here: a spatial location) with a certain level of top-down attentional control that further modulates cross-modal interactions based on whether a particular context repeated or changed. The current findings shed new light on the importance of context-based control over multisensory processing, whose influences multiplex across finer and broader time scales.
Abstract:
A large variety of social signals, such as facial expression and body language, are conveyed in everyday interactions, and accurate perception and interpretation of these social cues is necessary for reciprocal social interactions to take place successfully and efficiently. The present study was conducted to determine whether impairments in social functioning that are commonly observed following a closed head injury could be at least partially attributable to a disruption in the ability to appreciate social cues. More specifically, an attempt was made to determine whether face processing deficits following a closed head injury (CHI) coincide with changes in electrophysiological responsivity to the presentation of facial stimuli. A number of event-related potentials (ERPs) that have been linked specifically to various aspects of visual processing were examined. These included the N170, an index of structural encoding ability; the N400, an index of the ability to detect differences in serially presented stimuli; and the Late Positivity (LP), an index of sensitivity to affective content in visually presented stimuli. Electrophysiological responses were recorded while participants with and without a closed head injury were presented with pairs of faces delivered in a rapid sequence and asked to compare them on the basis of whether they matched with respect to identity or emotion. Other behavioural measures of identity and emotion recognition were also employed, along with a small battery of standard neuropsychological tests used to determine general levels of cognitive impairment. Participants in the CHI group were impaired in a number of cognitive domains that are commonly affected following a brain injury.
These impairments included reduced efficiency in various aspects of encoding verbal information into memory, a generally slower rate of information processing, decreased sensitivity to smell, and greater difficulty in the regulation of emotion together with limited awareness of this impairment. Impairments in face and emotion processing were clearly evident in the CHI group. However, despite these impairments in face processing, there were no significant differences between groups in the electrophysiological components examined. The only exception was a trend indicating delayed N170 peak latencies in the CHI group (p = .09), which may reflect inefficient structural encoding processes. In addition, group differences were noted in the region of the N100, thought to reflect very early selective attention. It is possible, then, that facial expression and identity processing deficits following CHI are secondary to (or exacerbated by) an underlying disruption of very early attentional processes. Alternatively, the difficulty may arise in the later cognitive stages involved in the interpretation of the relevant visual information. However, the present data do not allow these alternatives to be distinguished. Nonetheless, it was clearly evident that individuals with CHI are more likely than controls to make face processing errors, particularly for negative emotions that are more difficult to discriminate. Those working with individuals who have sustained a head injury should be alerted to this potential source of the social monitoring difficulties that are often observed as part of the sequelae following a CHI.
Abstract:
Contingent attentional capture is a phenomenon in which the endogenous and exogenous mechanisms of attentional orienting interact, such that a property that is relevant to the current task, and is therefore the object of top-down, endogenous attentional control settings, captures attention involuntarily and exogenously toward its spatial location. In this thesis, three aspects of this phenomenon were studied. First, by exploring the time course of contingent attentional capture and the electrophysiological response to distractors that capture attention in this way, it was established that the behavioural deficit symptomatic of this form of capture was linked to a deployment of visuospatial attention toward the distractor's location, and that this spatially selective processing could be modulated by other properties shared between the distractor and the target. Second, the use of event-related potentials made it possible to dissociate the contingent attentional capture hypothesis from the pure attentional capture hypothesis. According to the latter interpretation, a stimulus can capture attention at preattentive stages of processing only if it presents the strongest bottom-up signal among all the stimuli present; top-down attentional control settings would therefore serve only to disengage attention from such a stimulus. The results presented here argue against such an interpretation, since a deployment of visuospatial attention, indexed by the presence of an N2pc, was observed only when a peripheral distractor possessed a feature relevant to the current task, even when its low-level properties were no more salient than those of the other items present.
Finally, using a paradigm in which the target was defined by its membership in an alphanumeric category, it was demonstrated that attentional control settings favouring a conceptual attribute could guide visuospatial attention involuntarily, once again refuting the pure attentional capture hypothesis.
Abstract:
Background: Studies have shown that a conventional visual brain-computer interface (BCI) based on overt attention cannot be used effectively when eye movement control is not possible. To solve this problem, a novel visual BCI based on covert attention and feature attention, called the gaze-independent BCI, has been proposed. Color and shape differences between stimuli and backgrounds have generally been used in gaze-independent BCIs. Recently, a new paradigm based on facial expression changes was presented and obtained high performance. However, some facial expressions were so similar that users could not tell them apart, especially when they were presented at the same position in a rapid serial visual presentation (RSVP) paradigm, reducing the performance of the BCI. New Method: In this paper, we combined facial expressions and colors to optimize stimulus presentation in the gaze-independent BCI. This optimized paradigm was called the colored dummy face pattern. It is suggested that different colors and facial expressions could help users locate the target and evoke larger event-related potentials (ERPs). To evaluate the performance of this new paradigm, two other paradigms were presented, called the gray dummy face pattern and the colored ball pattern. Comparison with Existing Method(s): The key question determining the value of colored dummy face stimuli in BCI systems was whether they could obtain higher performance than gray face or colored ball stimuli. Ten healthy participants (seven male, aged 21–26 years, mean 24.5 ± 1.25) participated in our experiment. Online and offline results of the different paradigms were obtained and comparatively analyzed. Results: The results showed that the colored dummy face pattern evoked higher P300 and N400 ERP amplitudes compared with the gray dummy face pattern and the colored ball pattern.
Online results showed that the colored dummy face pattern had a significant advantage in terms of classification accuracy (p < 0.05) and information transfer rate (p < 0.05) compared to the other two patterns. Conclusions: The stimuli used in the colored dummy face paradigm combined color and facial expressions. Compared with the colored ball and gray dummy face stimuli, this combination had a significant advantage in terms of the evoked P300 and N400 amplitudes and resulted in higher classification accuracies and information transfer rates.
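The information transfer rate compared above is conventionally computed with the Wolpaw formula, which combines the number of selectable targets, the classification accuracy, and the time per selection. The abstract does not give the exact parameters used, so the sketch below is purely illustrative:

```python
import math

def bits_per_selection(n_classes: int, accuracy: float) -> float:
    """Wolpaw estimate of bits conveyed by one selection among n_classes."""
    if n_classes < 2 or accuracy <= 0.0:
        return 0.0
    bits = math.log2(n_classes)
    if accuracy < 1.0:
        # entropy penalty for errors, assumed spread evenly over wrong classes
        bits += accuracy * math.log2(accuracy)
        bits += (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_classes - 1))
    return bits

def itr_bits_per_min(n_classes: int, accuracy: float, secs_per_selection: float) -> float:
    """Information transfer rate in bits per minute."""
    return bits_per_selection(n_classes, accuracy) * (60.0 / secs_per_selection)
```

At chance level (e.g. two classes at 50% accuracy) the formula correctly yields 0 bits, which is why accuracy and ITR are usually reported together.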
Abstract:
RATIONALE: Olanzapine is an atypical antipsychotic drug with a more favourable safety profile than typical antipsychotics, whose topographic quantitative electroencephalogram (QEEG) profile was hitherto unknown. OBJECTIVES: We investigated electrical brain activity (QEEG and cognitive event-related potentials, ERPs) in healthy subjects who received olanzapine. METHODS: Vigilance-controlled, 19-channel EEG and ERPs in an auditory oddball paradigm were recorded before and 3 h, 6 h and 9 h after administration of either a single dose of placebo or olanzapine (2.5 mg and 5 mg) in ten healthy subjects. QEEG was analysed by spectral analysis and evaluated in nine frequency bands. For the P300 component in the oddball ERP, the amplitude and latency were analysed. Statistical effects were tested using a repeated-measures analysis of variance. RESULTS: For the interaction between time and treatment, significant effects were observed for the theta, alpha-2, beta-2 and beta-4 frequency bands. The amplitude of activity in the theta band increased most significantly 6 h after administration of 5 mg olanzapine. A pronounced decrease in alpha-2 activity, especially 9 h after administration of 5 mg olanzapine, was observed. In most beta frequency bands, and most significantly in the beta-4 band, a dose-dependent decrease in activity beginning 6 h after drug administration was demonstrated. Topographic effects were observed for the beta-2 band (occipital decrease) and, as a tendency, for the alpha-2 band (frontal increase and occipital decrease), both indicating a frontal shift of brain electrical activity. There were no significant changes in P300 amplitude or latency after drug administration. CONCLUSIONS: QEEG alterations after olanzapine administration were similar to the EEG effects produced by other atypical antipsychotic drugs, such as clozapine.
The increase in theta activity is comparable to the frequency distribution observed for thymoleptics or antipsychotics for which treatment-emergent somnolence is commonly observed, whereas the decrease in beta activity observed after olanzapine administration is not characteristic of these drugs. There were no clear signs of increased cerebral excitability after single-dose administration of 2.5 mg or 5 mg olanzapine in healthy controls.
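Spectral analysis of the kind described reduces each EEG channel to power per frequency band. A minimal numpy sketch of that computation follows; the band edges below are illustrative, since the study's nine bands are not specified in the abstract:

```python
import numpy as np

# Illustrative band edges in Hz (the study's nine bands are not given here)
BANDS = {"theta": (4.0, 8.0), "alpha2": (10.5, 13.0), "beta2": (18.0, 21.0)}

def band_power(signal, fs, lo, hi):
    """Power in [lo, hi) Hz from a simple periodogram of one EEG channel."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

# Synthetic channel: an 11 Hz "alpha" oscillation plus noise, 2 s at 250 Hz
fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 11.0 * t) + 0.1 * rng.standard_normal(t.size)

powers = {name: band_power(eeg, fs, lo, hi) for name, (lo, hi) in BANDS.items()}
```

A real analysis would add windowed averaging (e.g. Welch's method) and artifact rejection; the plain periodogram here only shows the band-power bookkeeping.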
Abstract:
Cognitive event-related potentials (ERPs) are widely employed in the study of dementing disorders. The morphology of the averaged response is known to be influenced by neurodegenerative processes and is exploited for diagnostic purposes. This work is built on the idea that there is additional information in the dynamics of single-trial responses. We introduce a novel way to detect mild cognitive impairment (MCI) from recordings of auditory ERP responses. Using single-trial responses from a cohort of 25 amnestic MCI patients and a group of age-matched controls, we suggest a descriptor capable of encapsulating single-trial (ST) response dynamics for the benefit of early diagnosis. A customized vector quantization (VQ) scheme is first employed to summarize the overall set of ST responses by means of a small-sized, semantically organized codebook of brain waves. Each ST response is then treated as a trajectory that can be encoded as a sequence of code vectors. A subject's set of responses is consequently represented as a histogram of activated code vectors. Discriminating MCI patients from healthy controls is based on the deduced response profiles and carried out by means of a standard machine learning procedure. The novel response representation was found to improve MCI detection significantly with respect to the standard alternative representation obtained via ensemble averaging (13% in terms of sensitivity and 6% in terms of specificity). Hence, the role of cognitive ERPs as a biomarker for MCI can be enhanced by adopting the fine-grained description of our VQ scheme.
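The described pipeline (learn a codebook, encode each single trial as a code-vector sequence, summarize a subject as a histogram of activated code vectors) can be sketched as follows. The paper's VQ scheme is customized and semantically organized, so this generic k-means version on synthetic data is only an illustration of the representation:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(data, k, iters=50):
    """Plain k-means: learn a small codebook of representative waveforms."""
    codebook = data[rng.choice(len(data), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign every input vector to its nearest code vector
        dists = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = data[labels == j].mean(axis=0)
    return codebook

def encode(trial, codebook):
    """Encode one single-trial response (time windows x features) as a code sequence."""
    dists = np.linalg.norm(trial[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

def response_profile(trials, codebook):
    """A subject's profile: histogram of activated code vectors across trials."""
    counts = np.zeros(len(codebook))
    for trial in trials:
        counts += np.bincount(encode(trial, codebook), minlength=len(codebook))
    return counts / counts.sum()

# Synthetic stand-in for single-trial features: 30 trials x 25 windows x 8 features
trials = rng.standard_normal((30, 25, 8))
codebook = kmeans(trials.reshape(-1, 8), k=6)
profile = response_profile(trials, codebook)
```

The resulting profile vectors would then be fed to a standard classifier to separate patients from controls.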
Abstract:
Earlier research found evidence for electro-cortical race bias towards black target faces in white American participants irrespective of the task relevance of race. The present study investigated whether an implicit race bias generalizes across cultural contexts and racial in- and out-groups. An Australian sample of 56 Chinese and Caucasian males and females completed four oddball tasks that required sex judgements for pictures of male and female Chinese and Caucasian posers. The nature of the background (across task) and of the deviant stimuli (within task) was fully counterbalanced. Event-related potentials (ERPs) to deviant stimuli recorded from three midline sites were quantified in terms of mean amplitude for four components: N1, P2, N2 and a late positive complex (LPC; 350–700 ms). Deviants that differed from the backgrounds in sex or race elicited enhanced LPC activity. These differences were not modulated by participant race or sex. The current results replicate earlier reports of effects of poser race relative to background race on the LPC component of the ERP waveform. In addition, they indicate that an implicit race bias occurs regardless of participant's or poser's race and is not confined to a particular cultural context.
Abstract:
Discovering the means to prevent and cure schizophrenia is a vision that motivates many scientists. But in order to achieve this goal, we need to understand its neurobiological basis. The emergent metadiscipline of cognitive neuroscience fields an impressive array of tools that can be marshaled towards achieving this goal, including powerful new methods of imaging the brain (both structural and functional) as well as assessments of perceptual and cognitive capacities based on psychophysical procedures, experimental tasks and models developed by cognitive science. We believe that the integration of data from this array of tools offers the greatest possibilities and potential for advancing understanding of the neural basis of not only normal cognition but also the cognitive impairments that are fundamental to schizophrenia. Since sufficient expertise in the application of these tools and methods rarely resides in a single individual, or even a single laboratory, collaboration is a key element in this endeavor. Here, we review some of the products of our integrative efforts in collaboration with our colleagues on the East Coast of Australia and the Pacific Rim. This research focuses on the neural basis of executive function deficits and impairments in early auditory processing in patients, using various combinations of performance indices (from perceptual and cognitive paradigms), ERPs, fMRI and sMRI. In each case, integration of two or more sources of information provides more information than any one source alone by revealing new insights into structure-function relationships. Furthermore, the addition of other imaging methodologies (such as DTI) and approaches (such as computational models of cognition) offers new horizons in human brain imaging research and in understanding human behavior.
Abstract:
Empirical evidence suggests impaired facial emotion recognition in schizophrenia. However, the nature of this deficit is the subject of ongoing research. The current study tested the hypothesis that a generalized deficit at an early stage of face-specific processing (i.e. putatively subserved by the fusiform gyrus) accounts for impaired facial emotion recognition in schizophrenia as opposed to the Negative Emotion-specific Deficit Model, which suggests impaired facial information processing at subsequent stages. Event-related potentials (ERPs) were recorded from 11 schizophrenia patients and 15 matched controls while performing a gender discrimination and a facial emotion recognition task. Significant reduction of the face-specific vertex positive potential (VPP) at a peak latency of 165 ms was confirmed in schizophrenia subjects whereas their early visual processing, as indexed by P1, was found to be intact. Attenuated VPP was found to correlate with subsequent P3 amplitude reduction and to predict accuracy when performing a facial emotion discrimination task. A subset of ten schizophrenia patients and ten matched healthy control subjects also performed similar tasks in the magnetic resonance imaging scanner. Patients showed reduced blood oxygenation level-dependent (BOLD) activation in the fusiform, inferior frontal, middle temporal and middle occipital gyrus as well as in the amygdala. Correlation analyses revealed that VPP and the subsequent P3a ERP components predict fusiform gyrus BOLD activation. These results suggest that problems in facial affect recognition in schizophrenia may represent flow-on effects of a generalized deficit in early visual processing.
Abstract:
In the present work, the effects of stimulus repetition and change in a continuous stimulus stream on the processing of somatosensory information in the human brain were studied. Human scalp-recorded somatosensory event-related potentials (ERPs) and magnetoencephalographic (MEG) responses rapidly diminished with stimulus repetition when mechanical or electric stimuli were applied to the fingers. On the contrary, when ERPs and multi-unit activity (MUA) were recorded directly from the primary (SI) and secondary (SII) somatosensory cortices in a monkey, there was no marked decrement in the somatosensory responses as a function of stimulus repetition. These results suggest that this rate effect is not due to response diminution in the SI and SII cortices. Evidently, the responses to the first stimulus after a long "silent" period are enhanced due to unspecific initial orientation, originating in more broadly distributed and/or deeper neural structures, perhaps in the prefrontal cortices. With fast repetition rates, not only the late unspecific but also some early specific somatosensory ERPs were diminished in amplitude. The fast decrease of the ERPs as a function of stimulus repetition is mainly due to the disappearance of the orientation effect and, at faster repetition rates, additionally due to stimulus-specific refractoriness. A sudden infrequent change in the continuous stimulus stream also enhanced somatosensory MEG responses to electric stimuli applied to different fingers. These responses were quite similar to those elicited by the deviant stimuli alone when the frequent standard stimuli were omitted. This enhancement was evidently due to release from refractoriness, because the neural structures generating the responses to the infrequent deviants had more time to recover from refractoriness than the respective structures for the standards.
Infrequent deviant mechanical stimuli among frequent standard stimuli also enhanced somatosensory ERPs and, in addition, elicited a new negative wave which did not occur in the deviants-alone condition. This extra negativity could be recorded in response to deviations in the stimulation site and in the frequency of the vibratory stimuli. This response is probably a somatosensory analogue of the auditory mismatch negativity (MMN), which has been suggested to reflect a neural mismatch process between the sensory input and a sensory memory trace.
Abstract:
Cognitive impairments of attention, memory and executive functions are a fundamental feature of the pathophysiology of schizophrenia. Neurophysiological and neurochemical changes in the auditory cortex have been shown to underlie cognitive impairments in schizophrenia patients. The functional state of the neural substrate of auditory information processing can be objectively and non-invasively probed with auditory event-related potentials (ERPs) and event-related fields (ERFs). In the current work, we explored neurochemical effects on the neural origins of auditory information processing in relation to schizophrenia. By means of ERPs/ERFs, we aimed to determine how the neural substrates of auditory information processing are modulated by antipsychotic medication in schizophrenia spectrum patients (Studies I, II) and by neuropharmacological challenges in healthy human subjects (Studies III, IV). First, with auditory ERPs we investigated the effects of olanzapine (Study I) and risperidone (Study II) in a group of patients with schizophrenia spectrum disorders. After 2 and 4 weeks of treatment, olanzapine had no significant effects on mismatch negativity (MMN) and P300, which have been suggested to reflect preattentive and attention-dependent information processing, respectively. After 2 weeks of treatment, risperidone had no significant effect on P300; however, risperidone reduced P200 amplitude. This latter effect of risperidone on the neural resources responsible for P200 generation could be partly explained through the action of dopamine. Subsequently, we used simultaneous EEG/MEG to investigate the effects of memantine (Study III) and methylphenidate (Study IV) in healthy subjects. We found that memantine modulates the MMN response without changing other ERP components.
This could be interpreted as reflecting the influence of memantine, through NMDA receptors, on the auditory change-detection mechanism, with processing of auditory stimuli remaining otherwise unchanged. Further, we found that methylphenidate does not modulate the MMN response. This finding could indicate no association between catecholaminergic activity and the electrophysiological measures of preattentive auditory discrimination processes reflected in the MMN. However, methylphenidate decreased P200 amplitudes. This could be interpreted as a modulation by the dopaminergic and noradrenergic systems of the auditory information processing reflected in P200. Taken together, our set of studies indicates a complex pattern of neurochemical influences on the neural substrate of auditory information processing, produced by antipsychotic drugs in patients with schizophrenia spectrum disorders and by pharmacological challenges in healthy subjects, as studied with ERPs and ERFs.
Abstract:
Although immensely complex, speech is also a very efficient means of communication between humans. Understanding how we acquire the skills necessary for perceiving and producing speech remains an intriguing goal for research. However, while learning is likely to begin as soon as we start hearing speech, the tools for studying language acquisition strategies in the earliest stages of development remain scarce. One prospective strategy is statistical learning. In order to investigate its role in language development, we designed a new research method. The method was tested in adults using magnetoencephalography (MEG) as a measure of cortical activity. Neonatal brain activity was measured with electroencephalography (EEG). Additionally, we developed a method for assessing the integration of seen and heard syllables in the developing brain, as well as a method for assessing the role of visual speech when learning phoneme categories. The MEG study showed that adults learn statistical properties of speech during passive listening to syllables. The amplitude of the N400m component of the event-related magnetic fields (ERFs) reflected the location of syllables within pseudowords. The amplitude was also enhanced for syllables in a statistically unexpected position. The results suggest a role for the N400m component in statistical learning studies in adults. Using the same research design with sleeping newborn infants, the auditory event-related potentials (ERPs) measured with EEG reflected the location of syllables within pseudowords. The results were successfully replicated in another group of infants. The results show that even newborn infants have a powerful mechanism for automatic extraction of statistical characteristics from speech. We also found that 5-month-old infants integrate some auditory and visual syllables into a fused percept, whereas other syllable combinations are not fully integrated.
Auditory syllables were paired with visual syllables possessing a different phonetic identity, and the ERPs for these artificial syllable combinations were compared with the ERPs for normal syllables. For congruent auditory-visual syllable combinations, the ERPs did not differ from those for normal syllables. However, for incongruent auditory-visual syllable combinations, we observed a mismatch response in the ERPs. The results show an early ability to perceive speech cross-modally. Finally, we exposed two groups of 6-month-old infants to artificially created auditory syllables located between two stereotypical English syllables in the formant space. The auditory syllables followed, equally for both groups, a unimodal statistical distribution, suggestive of a single phoneme category. The visual syllables combined with the auditory syllables, however, were different for the two groups, one group receiving visual stimuli suggestive of two separate phoneme categories, the other receiving visual stimuli suggestive of only one phoneme category. After a short exposure, we observed different learning outcomes for the two groups of infants. The results thus show that visual speech can influence learning of phoneme categories. Altogether, the results demonstrate that complex language learning skills exist from birth. They also suggest a role for the visual component of speech in the learning of phoneme categories.
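The statistical regularity exploited by such pseudoword designs is essentially the syllable-to-syllable transitional probability, which is high within a pseudoword and low across word boundaries. A minimal sketch with hypothetical trisyllabic pseudowords (the actual stimuli are not listed in the abstract):

```python
import random
from collections import Counter

def transition_probabilities(stream):
    """Estimate P(next | current) for adjacent syllable pairs in a stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Hypothetical trisyllabic pseudowords, concatenated in random order
words = [("tu", "pi", "ro"), ("go", "la", "bu"), ("bi", "da", "ku")]
rng = random.Random(0)
stream = [syl for _ in range(200) for syl in rng.choice(words)]

tp = transition_probabilities(stream)
```

Here `tp[("tu", "pi")]` is exactly 1.0 (a within-word transition), while boundary transitions such as `("ro", "go")` hover around 1/3 — the kind of contrast that the N400m and neonatal ERP effects described above are thought to track.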
Abstract:
This article has two main objectives. First, we offer an introduction to the subfield of generative third language (L3) acquisition. Concerned primarily with modeling initial-stages transfer of morphosyntax, one goal of this program is to show how initial-stages L3 data make significant contributions toward a better understanding of how the mind represents language and how (cognitive) economy constrains acquisition processes more generally. Our second objective is to argue for, and demonstrate how, this subfield will benefit from a neuro/psycholinguistic methodological approach, such as event-related potential experiments, to complement the claims currently made on the basis of exclusively behavioral experiments.
Abstract:
Although the influence of emotional states on immune function is generally recognized, research on the effects of negative emotion on individual SIgA levels has reported mixed findings. Our study aimed to elucidate the relationship between changes in EEG activity, cognitive and psychological mechanisms, and the immune changes induced by negative emotion. In Experiment 1, we investigated how negative emotional arousal induced by watching a series of unpleasant pictures altered the concentration of secretory immunoglobulin A (SIgA). Although we found discrepancies in the direction of SIgA change among participants (some participants' SIgA decreased after watching unpleasant pictures, whereas others' increased), further analysis revealed a coherent relationship among the change in SIgA concentration, participants' general coping styles, their actual emotion regulation strategies when perceiving unpleasant pictures, and the event-related potentials (ERPs) associated with watching the pictures. Participants whose SIgA increased after watching unpleasant pictures (the increasers) had higher positive coping scores on the Trait Coping Styles Questionnaire (TCSQ) than those whose SIgA decreased (the decreasers). Also, relative to the decreasers, the increasers tended to use more emotion regulation strategies, especially when the presented pictures were extremely negative, and exhibited a reversed dissociation pattern between the extremely negative pictures and the moderately negative ones in the amplitude of the late positive potential (LPP), which is related to the cognitive evaluation of a stimulus's meaning. On this basis, in Experiment 2, event-related potentials were recorded first while participants passively viewed unpleasant pictures, and then during an emotion regulation block in which participants were instructed to reappraise the unpleasant pictures.
We also collected the immune index before and after the passive viewing block and the emotion regulation block. Our study showed that participants felt a less intense emotional response to unpleasant pictures that followed a reappraisal instruction. This reduced emotional response to unpleasant pictures decreased the amplitude of the LPP. However, a larger N2 was induced in the emotion regulation block, because participants needed to recruit more attentional resources to detect and integrate more stimulus features in order to use the cognitive reappraisal strategy effectively. The present study has important theoretical and practical significance. Theoretically, our study elucidated the relationship between changes in EEG activity, cognitive and psychological mechanisms, and the immune changes induced by negative emotion by combining ERP technology, experimental interviews and psychological measurement. Meanwhile, our study also provided an explanation for the different directions of SIgA change induced by negative emotions, and it plays an important role in further study of the cognitive neural mechanisms of immune responses to emotion. Practically, our study suggests that individuals who use active emotion regulation in the face of negative emotional stimuli may experience significant increases in immune system function, subsequently lowering the possibility of infection.
Abstract:
The time courses of orthographic, phonological and semantic processing of Chinese characters were investigated systematically with multi-channel event-related potentials (ERPs). New evidence was obtained concerning whether phonology or semantics is processed first and whether phonology mediates semantic access, supporting and developing the new concept of repeated, overlapping, and alternating processing in Chinese character recognition. Statistical parameter mapping based on physiological double dissociation was developed. Seven experiments were conducted: 1) deciding which type of structure, left-right or non-left-right, the character displayed on the screen had; 2) deciding whether or not there was a vowel /a/ in the pronunciation of the character; 3) deciding to which classification, natural object or non-natural object, the character belonged; 4) deciding which color, red or green, the character was; 5) deciding which color, red or green, the non-character was; 6) fixating on the non-character; 7) fixating on the fixation cross. The main results are: 1. N240 and P240: An N240 and a P240, localized at occipital and prefrontal sites respectively, were found in Experiments 1, 2, 3 and 4, but not in Experiments 5, 6 or 7. The only difference between the former four and the latter three experiments was their stimuli: the former used true Chinese characters while the latter used non-characters or the fixation cross. Thus these two components, peaking at about 240 msec, reflect processing unique to Chinese characters. 2. Basic visual feature analysis: In comparison with Experiment 7, there was a common cognitive process in Experiments 1, 2, 4 and 6: basic visual feature analysis. The corresponding ERP amplitude increase at most sites started from about 60 msec. 3. Orthography: The ERP differences located at the main processing area of orthography (occipital) between Experiments 1, 2, 3, 4 and Experiment 5 started from about 130 msec.
This was the category difference between Chinese characters and non-characters, which revealed that orthographic processing started from about 130 msec. The ERP differences between experiments 1, 2, 3 and experiment 4 occurred at 210-250, 230-240, and 190-250 msec respectively, suggesting that orthography was processed again. These were differences between language and non-language tasks, revealing a higher level of processing than that at the above-mentioned 130 msec. All these phenomena imply that orthographic processing is not completed in a single pass; the second pass is not a simple repetition but a higher-level processing. 4. Phonology: the ERPs of experiment 2 (phonological task) were significantly stronger than those of experiment 3 (semantic task) over the main processing areas of phonology (temporal and left prefrontal) starting from about 270 msec, which revealed phonological processing. The ERP differences at left frontal sites between experiment 2 and experiment 1 (orthographic task) started from about 250 msec. When the phonological task was compared with experiment 4 (character color decision), the ERP differences at left temporal and prefrontal sites started from about 220 msec. Thus phonological processing may start before 220 msec. 5. Semantics: the ERPs of experiment 3 (semantic task) were significantly stronger than those of experiment 2 (phonological task) over the main processing areas of semantics (parietal and occipital) starting from about 290 msec, which revealed semantic processing. The ERP differences at these areas between experiment 3 and experiment 4 (character color decision) started from about 270 msec. The ERP differences between experiment 3 and experiment 1 (orthographic task) started from about 260 msec. Thus semantic processing may start before 260 msec. 6.
Overlapping of phonological and semantic processing: from about 270 to 350 msec, the ERPs of experiment 2 (phonological task) were significantly larger than those of experiment 3 (semantic task) over the main processing areas of phonology (temporal and left prefrontal); while from about 290 to 360 msec, the ERPs of experiment 3 were significantly larger than those of experiment 2 over the main processing areas of semantics (frontal, parietal, and occipital). Thus phonological processing may start earlier than semantic processing, and their time courses may alternate, which reveals parallel processing. 7. Semantic processing requires partial phonology: when experiment 1 (orthographic task) served as baseline, the ERPs of experiments 2 and 3 (phonological and semantic tasks) increased significantly over the main processing areas of phonology (left temporal and frontal) starting from about 250 msec. The ERPs of experiment 3, in addition, increased significantly over the main processing areas of semantics (parietal and frontal) starting from about 260 msec. When experiment 4 (character color decision) served as baseline, the ERPs of experiments 2 and 3 increased significantly over phonological areas (left temporal and frontal) starting from about 220 msec. The ERPs of experiment 3, similarly, increased significantly over semantic areas (parietal and frontal) starting from about 270 msec. Hence, before semantic processing, a part of the phonological information may be required. The following conclusions can be drawn from the above results under the present experimental conditions: 1. Basic visual feature processing starts from about 60 msec; 2. Orthographic processing starts from about 130 msec and repeats at about 240 msec; the second pass is not a simple repetition of the first but a higher-level processing; 3. Phonological processing begins earlier than semantic processing, and their time courses overlap; 4. Before semantic processing, a part of the phonological information may be required; 5.
Repetition, overlapping, and alternation of the orthographic, phonological and semantic processing of Chinese characters may all exist in cognition. Thus whether phonology mediates semantic access is not a simple question but a complicated one.
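The onset estimates throughout this abstract come from contrasting the ERPs of two tasks point by point and asking where a sustained difference begins. The sketch below illustrates that general logic only; it is not the authors' statistical parameter mapping procedure, and all data, array shapes, names, and thresholds are hypothetical (simulated subjects-by-time ERP matrices, a paired t-test at each sample, and a minimum-run criterion).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical ERPs: 20 subjects x 500 time points (1 ms sampling),
# for two simulated tasks (e.g. a phonological vs. a semantic decision).
n_subjects, n_times = 20, 500
erp_task_a = rng.normal(0.0, 1.0, (n_subjects, n_times))
erp_task_b = rng.normal(0.0, 1.0, (n_subjects, n_times))
# Inject a task difference from 270 to 360 ms, mimicking a late divergence.
erp_task_a[:, 270:360] += 2.0

def onset_latency(cond_a, cond_b, alpha=0.05, min_run=15):
    """Earliest time point at which paired t-tests across subjects stay
    significant for at least `min_run` consecutive samples."""
    _, p = stats.ttest_rel(cond_a, cond_b, axis=0)
    sig = p < alpha
    run = 0
    for t, s in enumerate(sig):
        run = run + 1 if s else 0
        if run >= min_run:
            return t - min_run + 1  # start of the significant run
    return None

print(onset_latency(erp_task_a, erp_task_b))
```

The minimum-run criterion is one common (if conservative) guard against isolated spuriously significant samples; with the simulated effect above, the detected onset should fall at or just before the injected 270 ms divergence.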