6 results for Visual-mental-imagery
in Chinese Academy of Sciences Institutional Repositories Grid Portal
Abstract:
As a species of internal representation, how is mental imagery organized in the brain? Two issues bear on this question: the time course and the nature of mental imagery. On the nature of mental imagery, today's imagery debate is shaped by two opposing theories: (1) Pylyshyn's propositional theory and (2) Kosslyn's depictive representation theory. Behavioural studies indicate that imagery encodes properties of the physical world, such as the spatial and size information of the visual scene. Neuroimaging and neuropsychological data indicate that sensory cortex, especially primary sensory cortex, is involved in imagery. In the visual modality, neuroimaging data further indicate that during visual imagery spatial information is mapped onto primary visual cortex, providing strong evidence for the depictive theory. In the auditory modality, behavioural studies also indicate that auditory imagery represents the loudness and pitch of sound; comparable neuroimaging evidence, however, is absent. The aim of the present study was to investigate the time course of auditory imagery processing and to provide neuroimaging evidence that imaginal auditory representations encode loudness and pitch information, using the ERP method and a cue-imagery (S1)-S2 paradigm. The results revealed that imagery effects started with an enhancement of the P2, probably indexing the top-down allocation of attention to the imagery task, and continued into a more positive-going late positive complex (LPC), probably reflecting the formation of auditory imagery. The amplitude of this LPC was inversely related to the pitch of the imagined sound but directly related to its loudness, a pattern consistent with the perception-related auditory N1 component and evidence that auditory imagery encodes pitch and loudness information. When S2 differed in pitch or loudness from the previously imagined S1, behavioural performance was significantly worse and a conflict-related N2 was elicited; the high-conflict condition elicited a larger N2 amplitude than the low-conflict condition, providing further evidence that imagery is an analogue of perception and can encode pitch and loudness information. The present study suggests that imagery starts with a mechanism of top-down allocation of attention to the imagery task and continues into a stage of imagery formation during which the physical features of the imagined stimulus are encoded, supporting Kosslyn's depictive representation theory.
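As a rough illustration of the amplitude analysis described above, the following Python sketch averages epoched EEG data per imagery condition and reads out the mean amplitude in a late time window. The condition labels, the 500-800 ms window, the array shapes, and the synthetic data are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: mean LPC amplitude per imagery condition from epoched EEG.
import numpy as np

def mean_window_amplitude(epochs, times, t_start, t_end):
    """Mean amplitude (over trials and channels) in the window [t_start, t_end] s."""
    mask = (times >= t_start) & (times <= t_end)
    return epochs[:, :, mask].mean()

rng = np.random.default_rng(0)
times = np.linspace(-0.2, 1.0, 601)          # seconds relative to the imagery cue (S1)
conditions = ["low_pitch", "high_pitch", "soft", "loud"]   # assumed labels
epochs_by_cond = {c: rng.normal(size=(40, 32, times.size)) for c in conditions}

# LPC window assumed here as 500-800 ms post-cue; the study's own window may differ.
lpc = {c: mean_window_amplitude(e, times, 0.5, 0.8) for c, e in epochs_by_cond.items()}
print(lpc)  # reported pattern: smaller LPC for higher pitch, larger LPC for louder sounds
```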
Abstract:
In the history of psychological research, much attention has been paid to the relation between local and global processing. Which is processed earlier, the global or the local information? And which is processed faster? Precedence of the global over the local level in visual perception was established by Navon with compound stimuli, and Navon's original study gave rise to many publications, including replications, generalizations to other kinds of stimuli (nonverbal material, digits), populations (infants, children, brain-damaged subjects), and tasks (lateral visual hemifield presentation, copy drawing, memory recognition, and recall), and triggered debate about the conditions in which global precedence is and is not observed (number, size, sparsity, and goodness of the stimuli, exposure duration, etc.). However, whether a global advantage or precedence exists in other cognitive processes has been less tested. Most research has suggested a functional equivalence between visual perception and visual image processing, but it is still unknown whether there is a global advantage in mental rotation. In the present study, we combined the mental rotation task with compound stimuli to explore whether a global or local advantage also exists at the mental imagery transformation stage. In two pilot studies, a perceptual global precedence was present in a normal/mirror-image judgment task when stimulus exposure time was short, whereas it was eliminated when exposure was prolonged (stimuli remained available until the subject's response). In all subsequent experiments, stimuli were presented until the subject's response. Mental rotation was then added to the normal/mirror-image judgment (some stimuli were rotated to certain angles from upright); Experiments 1 and 2 observed a global advantage in mental rotation with both a focused-attention design (Experiment 1) and a divided-attention design (Experiment 2). Subjects' reaction times increased with rotation angle and their accuracy decreased with rotation angle, suggesting that subjects needed mental rotation to make the normal/mirror judgment. Most importantly, responses to global rotation were faster than responses to local rotation. The analysis of rotation slopes further indicated that, to some extent, global rotation was faster than local rotation. These results suggest a global advantage in mental rotation. Experiment 3 took advantage of the high temporal resolution of event-related potentials to explore the temporal pattern of the global advantage in mental rotation. The event-related potential results indicated that the parietal P300 amplitude was inversely related to character orientation, and that the local rotation task delayed the onset of the mental-rotation-related negativity at parietal electrodes. No clear effect was found for the occipital N150. These results suggest that global rotation was not only processed faster than local rotation but also began earlier. Experiments 4 and 5 took the effect size of the global advantage as the main dependent variable, and the visual angle and exposure duration of the stimuli as independent variables, to examine the relationship between perceptual global precedence and the global advantage in mental rotation.
Results indicated that visual angle and exposure duration did not influence the effect size of the global advantage in mental rotation. The global advantage in mental rotation and the perceptual global advantage appeared to be independent, but their effects could accumulate under some conditions. These findings not only reveal a new processing property of mental rotation, but also deepen our understanding of global/local processing and shed light on the debate about the locus of global precedence.
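The slope-of-rotation analysis mentioned above can be made concrete with a small sketch: reaction time is regressed on rotation angle separately for global- and local-level trials, and a shallower slope indicates faster mental rotation. The angles and reaction times below are made-up illustrative values, not data from the experiments.

```python
# Illustrative slope-of-rotation analysis with hypothetical mean RTs.
import numpy as np

angles = np.array([0, 60, 120, 180])                 # degrees from upright
rt_global = np.array([620, 700, 790, 880])           # ms, hypothetical global-level means
rt_local = np.array([650, 780, 910, 1040])           # ms, hypothetical local-level means

slope_global = np.polyfit(angles, rt_global, 1)[0]   # ms per degree of rotation
slope_local = np.polyfit(angles, rt_local, 1)[0]

print(f"global: {slope_global:.2f} ms/deg, local: {slope_local:.2f} ms/deg")
# A smaller slope for the global level corresponds to the reported global
# advantage in rotation speed.
```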
Abstract:
One of the great puzzles in the psychology of visual perception is that the visual world appears to be a coherent whole despite our viewing it through a temporally discontinuous series of eye fixations. The investigators attempted to explain this puzzle from the perspective of sequential visual information integration. In recent years, investigators have hypothesized that information maintained in visual short-term memory (VSTM) gradually becomes a visual mental image during the delay in the visual buffer and is integrated with currently perceived information. Some preliminary studies have investigated the integration of VSTM with visual percepts, but further research is required to address several questions about the spatio-temporal characteristics, information representation, and mechanism of integrating sequential visual information. Based on the theory of similarity between visual mental images and visual perception, this research (comprising three studies) employed the temporal integration paradigm and the empty-cell localization task to further explore the spatio-temporal characteristics, information representation, and mechanism of integrating sequential visual information (sequential arrays). Study 1 further explored the temporal characteristics of sequential visual information integration by examining the effects of the encoding time of sequential stimuli on integration. Study 2 further explored the spatial characteristics of integration by investigating the effects of changes in spatial characteristics on integration. Study 3 explored the representation of information maintained in VSTM and the integration mechanism by combining behavioural experiments with eye-tracking. The results indicated that: (1) Sequential arrays could be integrated without strategic instruction. Increasing the duration of the first array improved performance, whereas increasing the duration of the second array did not. The temporal correlation model could not explain sequential array integration under long-ISI conditions. (2) Stimulus complexity influenced not only overall performance with sequential arrays but also the ISI at which performance reached asymptote. Sequential arrays could still be integrated when their spatial characteristics changed. During the ISI, constructing and manipulating the visual mental image of array 1 were two separate processing phases. (3) While integrating sequential arrays, people represented the pattern constituted by the object images maintained in VSTM, and the topological characteristics of those images influenced fixation location. The image-perception integration hypothesis was supported when the number of dots in array 1 was smaller than the number of empty cells, and the convert-and-compare hypothesis was supported when the number of dots in array 1 was equal to or greater than the number of empty cells. These findings not only contribute to a better understanding of the process of sequential visual information integration, but also have practical application in the design of visual interfaces.
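To make the empty-cell localization task concrete, the sketch below generates one trial of the temporal integration paradigm: two dot arrays are shown in sequence on the same grid, their union leaves exactly one cell empty, and the observer must localize that cell by integrating array 1 (held in VSTM across the ISI) with array 2. The grid size and the split between the two arrays are illustrative assumptions, not the parameters used in the studies.

```python
# Minimal sketch of an empty-cell localization trial (illustrative parameters only).
import random

GRID = 5  # 5 x 5 grid of cells

def make_trial(n_first, seed=None):
    rng = random.Random(seed)
    cells = [(r, c) for r in range(GRID) for c in range(GRID)]
    rng.shuffle(cells)
    empty = cells.pop()                # the single cell left unfilled
    array1 = set(cells[:n_first])      # shown first, then maintained over the ISI
    array2 = set(cells[n_first:])      # shown after the ISI
    return array1, array2, empty

array1, array2, empty = make_trial(n_first=8, seed=1)
assert len(array1) + len(array2) == GRID * GRID - 1   # union fills all but one cell
print("empty cell to report:", empty)
```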
Abstract:
With the development of reservoir numerical simulation technology and the continuous improvement of reservoir simulation software, such software is used more and more widely in oilfield development. Organizing the data computed by reservoir simulation software is not only tedious but also very time-consuming. This paper uses the Visual Basic language to develop RSMAN, a program for processing the oil and gas field development index data generated by the ECLIPSE software. The program facilitates the organization and summarization of oil and gas field development indices, has a friendly user interface, and is simple to operate.
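RSMAN itself was written in Visual Basic; the Python sketch below only illustrates the same kind of post-processing: reading a whitespace-delimited table of development indices exported from the simulator and summarizing it. The column names (TIME, FOPR, FWCT) are common ECLIPSE summary mnemonics, but the file layout here is a simplifying assumption, not the exact format RSMAN consumes.

```python
# Hypothetical post-processing of exported simulator development indices.
import io

raw = """TIME FOPR FWCT
0 0.0 0.00
365 1250.0 0.05
730 1180.0 0.12
"""

rows = [line.split() for line in io.StringIO(raw) if line.strip()]
header, data = rows[0], rows[1:]
table = [dict(zip(header, map(float, r))) for r in data]

# Simple development-indicator summary: peak oil rate and final water cut.
peak_fopr = max(row["FOPR"] for row in table)
final_fwct = table[-1]["FWCT"]
print(f"peak FOPR = {peak_fopr}, final FWCT = {final_fwct}")
```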
Abstract:
Microsoft Visual C++ 6.0, an important component of Microsoft Visual Studio, contains the most powerful Windows-based application framework to date and leads its class of products. Visual C++ 6.0 is Microsoft's most comprehensive and complete development tool so far; to accommodate a variety of programming styles, it provides a wide range of auxiliary tools and reaches an unprecedented level of programming power and flexibility. Compared with earlier versions of Visual C++, Visual C++ 6.0 introduces many improvements to the programming environment and program language technology, making Visual C++ better suited to rapid application development by professional programmers.
Rich in content and well illustrated, this book is an excellent reference for readers of all kinds learning Visual C++ 6.0.
Contents
Chapter 1: Introduction to and Installation of Visual C++ 6.0
1.1 New features of Visual C++ 6.0
1.2 Overview of the Visual C++ 6.0 development environment
1.3 How to learn to use Visual C++ 6.0
1.4 Installing Visual C++ 6.0
Chapter 2: Entering the World of C++
2.1 Introduction to classes and objects
2.2 Inheritance and polymorphism: a concrete example
2.3 Embedded objects
2.4 Allocating objects on the stack
2.5 Allocating global objects
2.6 Relationships between objects: pointer data members
2.7 Using the this pointer
2.8 References to pointers
2.9 Friend classes and friend functions
2.10 Static class members
2.11 Overloading operators
2.12 Separating class definitions from the code
2.13 Hungarian notation
Chapter 3: The Visual C++ 6.0 Programming Environment
3.1 The Visual C++ 6.0 main window
3.2 The Visual C++ 6.0 toolbars
3.3 The Visual C++ 6.0 menu bar
3.4 Projects and project workspaces
3.5 Resources and the resource editor
Chapter 4: Writing the Simplest VC++ Program
4.1 What is AppWizard?
4.2 Meeting your first AppWizard program
4.3 Where is "I am a programmer."?
Chapter 5: Introduction to the Program Framework
5.1 A simplified program framework
5.2 WinMain(): the first action
5.3 Registering the window class
5.4 Creating a window
5.5 Showing the window
5.6 Displaying the message
5.7 Window classes and window objects
Chapter 6: The Message Loop
6.1 Going around the message loop
6.2 Responding to events: WindowFun()
6.3 Responding to different messages
6.4 Are you still following along?
6.5 Interacting with the device interface
Chapter 7: Mastering the Program Framework
7.1 Where is the WinMain() function?
7.2 The application framework and source files
7.3 Toolbar, status bar, printing, and other options
7.4 Program control flow
Chapter 8: Programming with ClassWizard
8.1 Adding message-handler functions with ClassWizard
8.2 Overview of ClassWizard features
8.3 Dispatching mouse messages
8.4 Saving mouse-drawing information
Chapter 9: Views and Documents
9.1 The Document-View model
9.2 Separating the document from the view
9.3 Saving the document
9.4 Revisiting MyProg2.cpp
Chapter 10: Object Linking and Embedding (OLE) and Automation
10.1 The Component Object Model (COM)
10.2 Class factories
10.3 OLE Automation
10.4 The IDispatch interface
Chapter 11: Dynamic-Link Libraries (DLLs)
11.1 Why use DLLs
11.2 Traditional DLLs
11.3 MFC library DLLs
11.4 MyProg4A: writing your own class-library extension DLL
11.5 MyProg4B: using an MFC extension DLL
11.6 Resource access
Chapter 12: The Graphics Device Interface
12.1 Device context classes
12.2 GDI objects
12.3 Windows color mapping
12.4 Mapping modes
12.5 Fonts
12.6 The MyProg3 example program
12.7 The MyProg3B program
12.8 The MyProg3C example program: using CScrollView
Chapter 13: Dialog Boxes
13.1 Displaying help for dialog controls on the status bar
13.2 Opening multiple files with the FileOpen common dialog
13.3 Customizing the common file dialog
13.4 Expanding and shrinking a dialog box
13.5 Displaying a modal or modeless dialog box
13.6 Writing custom DDX/DDV routines
Chapter 14: The Spy++ Analysis Tool
14.1 Windows
14.2 Messages
14.3 Processes and threads
Chapter 15: Debugging Code
15.1 TRACE
15.2 The debugging framework
15.3 Self-diagnostics
15.4 What debugging code does
15.5 Displaying object information with Dump()
15.6 Checking memory
Abstract:
A visual pattern recognition network and its training algorithm are proposed. The network is constructed of a one-layer morphology network and a two-layer modified Hamming net. This visual network can implement invariant pattern recognition with respect to image translation and size projection. After supervised learning takes place, the visual network extracts image features and classifies patterns much as living beings do. Moreover, we set up an optoelectronic architecture of the network for real-time pattern recognition. (C) 1996 Optical Society of America
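For readers unfamiliar with the Hamming-net idea behind the classifier stage, the sketch below shows only the plain matching step: stored binary exemplars are compared against an input feature vector and the class with the smallest Hamming distance wins. The paper's network adds a morphology layer and modifications for translation and scale invariance that are not reproduced here; the exemplars and probe are arbitrary illustrative bit patterns.

```python
# Minimal sketch of minimum-Hamming-distance classification over stored exemplars.
import numpy as np

def hamming_classify(x, exemplars):
    """Return the index of the stored exemplar closest to x in Hamming distance."""
    distances = [(np.asarray(x) != np.asarray(e)).sum() for e in exemplars]
    return int(np.argmin(distances))

exemplars = [
    [1, 0, 1, 1, 0, 0, 1, 0],   # class 0 prototype (illustrative bits)
    [0, 1, 0, 0, 1, 1, 0, 1],   # class 1 prototype
]
probe = [1, 0, 1, 0, 0, 0, 1, 0]
print("predicted class:", hamming_classify(probe, exemplars))
```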