942 results for scene
Abstract:
Looking for a target in a visual scene becomes more difficult as the number of stimuli increases. In a signal detection theory view, this is due to the cumulative effect of noise in the encoding of the distractors and, potentially on top of that, to an increase of the noise (i.e., a decrease of precision) per stimulus with set size, reflecting divided attention. It has long been argued that human visual search behavior can be accounted for by the first factor alone. While such an account seems adequate for search tasks in which all distractors have the same, known feature value (i.e., are maximally predictable), we recently found a clear effect of set size on encoding precision when distractors are drawn from a uniform distribution (i.e., when they are maximally unpredictable). Here we interpolate between these two extreme cases to examine which of the two conclusions holds more generally as distractor statistics are varied. In one experiment, we vary the level of distractor heterogeneity; in another, we dissociate distractor homogeneity from predictability. In all conditions in both experiments, we found a strong decrease of precision with increasing set size, suggesting that precision being independent of set size is the exception rather than the rule.
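As a rough illustration of the signal detection account sketched in this abstract, the following snippet (hypothetical parameter values, not the authors' model) simulates a max-rule observer whose per-stimulus encoding noise is either fixed or grows with set size, and reports hit and false-alarm rates for a few set sizes.

```python
# Minimal simulation (hypothetical parameters) of a max-rule signal detection
# observer in visual search: each of N stimuli is encoded with Gaussian noise,
# and the observer reports "target present" if the maximum evidence exceeds a
# criterion. Precision is either held fixed or made to decline with set size.
import numpy as np

rng = np.random.default_rng(0)

def hit_and_fa_rates(set_size, trials=20000, target_strength=1.0,
                     base_sigma=1.0, divided_attention=False):
    # Encoding noise per stimulus; optionally grows with set size
    # (here sigma ~ sqrt(N)), mimicking divided attention.
    sigma = base_sigma * np.sqrt(set_size) if divided_attention else base_sigma

    # Target-absent trials: all items are distractors (mean 0).
    absent = rng.normal(0.0, sigma, size=(trials, set_size))
    # Target-present trials: one item carries the signal.
    present = absent.copy()
    present[:, 0] += target_strength

    criterion = target_strength / 2.0
    hits = np.mean(present.max(axis=1) > criterion)
    false_alarms = np.mean(absent.max(axis=1) > criterion)
    return hits, false_alarms

for n in (2, 4, 8):
    print(n, hit_and_fa_rates(n, divided_attention=False),
             hit_and_fa_rates(n, divided_attention=True))
```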
Abstract:
Restoring a scene distorted by atmospheric turbulence is a challenging problem in video surveillance. The effect, caused by random, spatially varying perturbations, makes a model-based solution difficult and, in most cases, impractical. In this paper, we propose a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence which can severely degrade a region of interest (ROI). In order to extract accurate detail about objects behind the distorting layer, a simple and efficient frame selection method is proposed to select informative ROIs only from good-quality frames. The ROIs in each frame are then registered to further reduce offsets and distortions. We solve the space-varying distortion problem using region-level fusion based on the dual-tree complex wavelet transform. Finally, contrast enhancement is applied. We further propose a learning-based metric specifically for image quality assessment in the presence of atmospheric distortion, capable of estimating quality in both full- and no-reference scenarios. The proposed method is shown to significantly outperform existing methods, providing enhanced situational awareness in a range of surveillance scenarios. © 1992-2012 IEEE.
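The frame-selection step described above could, for instance, be sketched as follows; the Laplacian-variance sharpness score and the keep fraction are illustrative assumptions, not the paper's actual quality metric.

```python
# Illustrative sketch of quality-based frame selection for a region of
# interest (ROI): score each frame's ROI by a simple sharpness measure
# (variance of the Laplacian) and keep only the best-scoring frames before
# registration and fusion.
import cv2
import numpy as np

def sharpness(gray_roi: np.ndarray) -> float:
    # Variance of the Laplacian: higher = sharper, a common focus proxy.
    return float(cv2.Laplacian(gray_roi, cv2.CV_64F).var())

def select_informative_rois(frames, roi, keep_fraction=0.25):
    """frames: list of BGR images; roi: (x, y, w, h). Returns kept frame indices."""
    x, y, w, h = roi
    scored = []
    for idx, frame in enumerate(frames):
        gray = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        scored.append((sharpness(gray), idx))
    scored.sort(reverse=True)
    n_keep = max(1, int(len(frames) * keep_fraction))
    return [idx for _, idx in scored[:n_keep]]
```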
Abstract:
This work addresses the challenging problem of unconstrained 3D human pose estimation (HPE) from a novel perspective. Existing approaches struggle in realistic applications, mainly due to their scene-dependent priors, such as background segmentation and multi-camera networks, which restrict their use in unconstrained environments. We therefore present a framework which applies action detection and 2D pose estimation techniques to infer 3D poses in an unconstrained video. Action detection offers spatiotemporal priors to 3D human pose estimation by both recognising and localising actions in space-time. Instead of holistic features, e.g. silhouettes, we leverage the flexibility of the deformable part model to detect 2D body parts as a feature for estimating 3D poses. A new unconstrained pose dataset has been collected to demonstrate the feasibility of our method, which shows promising results, significantly outperforming the relevant state of the art. © 2013 IEEE.
Abstract:
The human motor system is remarkably proficient in the online control of visually guided movements, adjusting to changes in the visual scene within 100 ms [1-3]. This is achieved through a set of highly automatic processes [4] translating visual information into representations suitable for motor control [5, 6]. For this to be accomplished, visual information pertaining to the target and the hand needs to be identified and linked to the appropriate internal representations during the movement. Meanwhile, other visual information must be filtered out, which is especially demanding in visually cluttered natural environments. If selection of relevant sensory information for online control were achieved by visual attention, its limited capacity [7] would substantially constrain the efficiency of visuomotor feedback control. Here we demonstrate that both exogenously and endogenously cued attention facilitate the processing of visual target information [8], but not of visual hand information. Moreover, distracting visual information is more efficiently filtered out during the extraction of hand information than of target information. Our results therefore suggest the existence of a dedicated visuomotor binding mechanism that links the hand representation in the visual and motor systems.
Abstract:
This paper describes an interactive system for quickly modelling 3D body shapes from a single image. It provides users with a convenient way to obtain their 3D body shapes so as to try on virtual garments online. For ease of use, we first introduce a novel interface for users to conveniently extract anthropometric measurements from a single photo, while using readily available scene cues for automatic image rectification. Then, we propose a unified probabilistic framework using Gaussian processes, which predicts the body parameters from the input measurements while correcting the aspect-ratio ambiguity resulting from photo rectification. Extensive experiments and user studies have supported the efficacy of our system, which is now being exploited commercially online. © 2011. The copyright of this document resides with its authors.
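A minimal sketch of the regression idea, assuming placeholder measurements and synthetic training data rather than the authors' dataset, could use scikit-learn's Gaussian process regressor to map a few anthropometric measurements to body-shape parameters with predictive uncertainty.

```python
# Minimal sketch (placeholder data) of the regression step described above:
# a Gaussian process maps a few anthropometric measurements (e.g. height,
# waist, chest, in cm) to low-dimensional body-shape parameters. The kernel
# choice and the synthetic training set are assumptions for illustration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Synthetic training data: measurements -> 3 body-shape coefficients.
X_train = rng.uniform([150, 60, 80], [200, 110, 120], size=(200, 3))
true_W = rng.normal(size=(3, 3))
Y_train = X_train @ true_W * 0.01 + rng.normal(scale=0.05, size=(200, 3))

kernel = RBF(length_scale=[10.0, 10.0, 10.0]) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, Y_train)

# Predict shape parameters (with uncertainty) for one new set of measurements.
x_new = np.array([[172.0, 78.0, 96.0]])
mean, std = gp.predict(x_new, return_std=True)
print(mean, std)
```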
Co-CreativePen Toolkit: a pen-based 3D toolkit for children cooperatively designing virtual environments
Abstract:
Co-CreativePen Toolkit is a pen-based 3D toolkit for children cooperatively designing virtual environments. The toolkit is used to construct different applications involving distributed pen-based 3D interaction. In this toolkit, sketching is encapsulated as a set of interaction techniques. Children can use the pen to construct 3D and image-based rendering (IBR) objects, navigate the virtual world, select and manipulate virtual objects, and communicate with other children. They can also use the pen to select other children in the virtual world and write messages to them. The distributed architecture of Co-CreativePen Toolkit is based on CORBA: a common scene graph is managed on the server, with a copy of this graph maintained in every client. Every change to the scene graph in a client triggers the corresponding change on the server and in the other clients.
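The replication pattern described above (one master scene graph on the server, copies in every client, changes echoed to all other clients) might be sketched as follows; plain Python callbacks stand in for the CORBA middleware, and the class names are illustrative.

```python
# Illustrative sketch of scene-graph replication: the server holds the
# authoritative scene graph, each client holds a copy, and every change
# applied on one client is forwarded to the server and re-broadcast to the
# other clients.
class SceneGraph:
    def __init__(self):
        self.nodes = {}                                # node_id -> properties

    def apply(self, change):
        node_id, props = change
        self.nodes.setdefault(node_id, {}).update(props)

class Server:
    def __init__(self):
        self.graph = SceneGraph()
        self.clients = []

    def register(self, client):
        self.clients.append(client)

    def submit(self, change, source):
        self.graph.apply(change)                       # update master copy
        for client in self.clients:
            if client is not source:                   # echo to other clients
                client.receive(change)

class Client:
    def __init__(self, server):
        self.graph = SceneGraph()
        self.server = server
        server.register(self)

    def edit(self, change):
        self.graph.apply(change)                       # local update
        self.server.submit(change, source=self)        # propagate via server

    def receive(self, change):
        self.graph.apply(change)                       # remote update

server = Server()
a, b = Client(server), Client(server)
a.edit(("cube1", {"pos": (1, 0, 0)}))
print(b.graph.nodes)                                   # {'cube1': {'pos': (1, 0, 0)}}
```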
Abstract:
Under a single-camera setup and based on a desktop tangible user interface (TUI), the physical interface objects are divided into four classes carrying different scene semantics. By analysing how common interaction tasks are performed in tangible user interfaces, a three-layer system design framework supporting the four semantic classes is established. Taking the characteristics of the scene-planning system into account, four classes of markers carrying the above scene semantics are designed, and a marker task allocation strategy is proposed. To resolve the conflict between large scenes and a small field of view, interaction techniques such as dynamically scaled space, dynamic grouping, two-handed interaction, and occlusion handling are proposed in combination with the paddle technique. Drawing on principles from cognitive psychology, a time-space multiplexing interaction technique is also proposed. Finally, a TUI-based interaction toolkit is developed and validated in applications.
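The marker task allocation idea might look roughly like the following sketch, in which detected marker IDs are mapped to one of four hypothetical scene-semantic classes and dispatched to class-specific handling; the ID ranges and class names are assumptions, not the system's actual scheme.

```python
# Illustrative sketch (hypothetical IDs and class names) of marker task
# allocation: detected fiducial marker IDs are mapped to one of four
# scene-semantic classes, and each class is handled by a different
# interaction routine.
MARKER_CLASSES = {
    range(0, 10):  "object",    # markers standing for scene objects
    range(10, 20): "tool",      # paddle-like tools (scale, group, ...)
    range(20, 30): "region",    # spatial regions of the planning scene
    range(30, 40): "command",   # discrete commands (undo, save, ...)
}

def classify_marker(marker_id: int) -> str:
    for id_range, semantic_class in MARKER_CLASSES.items():
        if marker_id in id_range:
            return semantic_class
    return "unknown"

def handle_marker(marker_id: int, pose):
    # Dispatch to the interaction technique associated with the class.
    semantic_class = classify_marker(marker_id)
    print(f"marker {marker_id} -> {semantic_class} at {pose}")

for mid, pose in [(3, (0.1, 0.2)), (12, (0.4, 0.1)), (25, (0.7, 0.6))]:
    handle_marker(mid, pose)
```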
Abstract:
Drawing on cognitive psychology principles from the real world, the visual representation of the virtual scene and its semantic information are combined to jointly support the user's interaction process, and multiple 3D interaction techniques are integrated into a unified interaction framework, making 3D user interfaces in complex virtual environments easier for users to understand and use. By enhancing the semantic processing capability of the scene graph, a 3D user interface architecture supporting high-level semantics is established, so that the 3D interaction system supports the execution of interaction tasks not only at the geometric level but also at the semantic level. Finally, an application example is presented.
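A minimal sketch of a semantically enriched scene graph, with hypothetical node and semantic names, might attach capability tags to nodes so that an interaction task can be resolved at the semantic rather than purely geometric level.

```python
# Minimal sketch (hypothetical names) of a scene graph whose nodes carry
# semantic annotations alongside geometry, so an interaction task such as
# "open" can be dispatched only to nodes that declare they support it.
class SceneNode:
    def __init__(self, name, semantics=None):
        self.name = name
        self.semantics = set(semantics or [])   # e.g. {"openable", "container"}
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def find_by_semantic(self, tag):
        # Depth-first search for all nodes supporting a semantic capability.
        found = [self] if tag in self.semantics else []
        for child in self.children:
            found.extend(child.find_by_semantic(tag))
        return found

root = SceneNode("room")
root.add(SceneNode("door", {"openable"}))
root.add(SceneNode("box", {"openable", "container"}))
root.add(SceneNode("wall"))

print([n.name for n in root.find_by_semantic("openable")])   # ['door', 'box']
```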
Abstract:
Three-dimensional virtual representation of Loess Plateau terrain is the foundation of the "Digital Loess Plateau" and can provide scientific and technological support for regional soil and water conservation and ecological construction. To address the limited interactive control available when viewing 3D scenes directly in geographic information system (GIS) software, an approach is proposed that combines the terrain interpolation algorithms of GIS software with OpenGL programming under the MFC framework to achieve realistic 3D terrain virtualization. Taking the Kangjiagou small watershed in the loess hilly and gully region as an example, contour data were processed in AutoCAD and ArcView to generate regular-grid DEM data in ASCII format; triangle strips were drawn from these data, per-vertex normals were obtained using a weighted-average method, lighting and material modes were set, and a dynamic sky background was added, achieving a realistic 3D virtual representation of the watershed terrain. Interactive capabilities were also added, enabling free roaming and multi-angle observation.
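The per-vertex normal computation mentioned above might be approximated as in the sketch below, which uses central differences on a synthetic regular-grid DEM as a simple stand-in for the weighted averaging of adjacent facet normals, followed by a crude Lambertian shading in place of the OpenGL lighting and material setup.

```python
# Minimal sketch (synthetic DEM, assumed grid spacing) of per-vertex normal
# estimation on a regular-grid DEM, plus simple diffuse shading against a
# fixed light direction.
import numpy as np

def vertex_normals(dem: np.ndarray, cell_size: float) -> np.ndarray:
    """dem: (H, W) heights; returns (H, W, 3) unit normals."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)          # slopes along y and x
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(dem)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals

# Synthetic 5 m grid DEM standing in for the interpolated contour data.
y, x = np.mgrid[0:100, 0:100]
dem = 20.0 * np.sin(x / 15.0) * np.cos(y / 20.0)
n = vertex_normals(dem, cell_size=5.0)

# Lambertian shading against a fixed light direction, as a stand-in for the
# OpenGL lighting/material configuration used by the triangle strips.
light = np.array([0.3, 0.3, 0.9])
light = light / np.linalg.norm(light)
shade = np.clip(n @ light, 0.0, 1.0)
print(shade.shape, float(shade.mean()))
```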
Abstract:
A simple and environmentally friendly chemical route for detecting latent fingermarks by a one-step single-metal nanoparticle deposition (SND) method was successfully demonstrated on several non-porous items. Gold nanoparticles (AuNPs), synthesized using sodium borohydride as the reducing agent in the presence of glucose, were used as the working solution for latent fingermark detection. The SND technique needs just one step to obtain clear ridge details over a wide pH range (2.5-5.0), whereas the standard multi-metal deposition (MMD) technique requires six baths in a narrow pH range (2.5-2.8). This makes SND very convenient for forensic operators detecting latent fingermarks at crime scenes or in the laboratory. The SND technique provided sharp and clear development of latent fingermarks without background staining, while dramatically reducing the number of bath steps.
Abstract:
Lunar exploration is of long-term strategic significance to China, and mobile robots will play a pivotal role in the second phase of China's lunar exploration programme. Because the lunar surface environment is harsh and robot autonomy is limited, teleoperation technology based on virtual reality will play an important role in lunar exploration missions. It provides the operator with a three-dimensional, realistic, and interactive robot simulation platform, on which the operator can draw on expert intelligence to combine autonomy and teleoperation for lunar exploration robots, and can verify path planning, manipulator motion planning, control commands, and so on.

This thesis analyses the interaction between a lunar exploration robot and the geometric and topological information of real 3D terrain and, using virtual-reality-based teleoperation technology, develops a mobile robot motion simulation platform based on real terrain scenes. On this platform, the motion simulation reflects the robot's real motion state.

First, a realistic 3D terrain scene is obtained by triangulating the real-terrain 3D point cloud and applying texture mapping. Then, with the OpenGL library and Solidworks, an accurate geometric model of the lunar exploration robot is built.

Building on an analysis of planetary exploration robot simulation systems at home and abroad, the thesis proposes a method for the interaction between the wheels of a wheeled mobile robot and the geometric and topological information of the terrain; the method explains how terrain variations affect changes in the robot's attitude. Experiments on the virtual terrain and analysis of the robot state data demonstrate the soundness of the method.

The thesis also derives the kinematic model of the six-wheeled mobile robot, establishing the relationship between the pose of the robot body and its changes and the poses of the wheel contact points and their changes, which provides a theoretical basis for how the robot adjusts its attitude to adapt to changing rugged 3D terrain. Using a velocity projection method, a new form of the kinematic model of the wheeled mobile robot is obtained.

Finally, combining the kinematic and geometric models, a planetary exploration robot simulation system based on real terrain scenes is developed on the Windows platform with VC++ and OpenGL, achieving real-time simulation of the lunar exploration robot. The system offers strong interactivity and real-time performance, providing a verification platform for virtual navigation, path verification, teleoperation, and other tasks of planetary exploration robots.
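The wheel-terrain interaction idea might be illustrated as follows: sample terrain heights under the six wheels and recover the chassis attitude by fitting a plane through the contact points. The robot dimensions, terrain function, and least-squares formulation are simplifications for illustration, not the thesis's actual kinematic model.

```python
# Illustrative sketch (placeholder dimensions and terrain) of how terrain
# variation drives robot attitude: sample the terrain height under each of
# the six wheels, then fit a supporting plane through the contact points to
# recover approximate roll and pitch of the chassis.
import numpy as np

def terrain_height(x, y):
    # Synthetic rough terrain standing in for the triangulated point cloud.
    return 0.3 * np.sin(0.5 * x) + 0.2 * np.cos(0.4 * y)

# Six wheel positions in the chassis frame (x forward, y left), in metres.
WHEELS = np.array([[ 1.0,  0.6], [ 0.0,  0.6], [-1.0,  0.6],
                   [ 1.0, -0.6], [ 0.0, -0.6], [-1.0, -0.6]])

def chassis_attitude(cx, cy, yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    world_xy = WHEELS @ R.T + np.array([cx, cy])
    z = terrain_height(world_xy[:, 0], world_xy[:, 1])

    # Fit z = a*x + b*y + d through the six contact points (least squares);
    # a and b give the slope of the supporting plane, hence pitch and roll.
    A = np.column_stack([world_xy, np.ones(6)])
    (a, b, d), *_ = np.linalg.lstsq(A, z, rcond=None)
    pitch = np.arctan(-a)     # rotation about the lateral axis (sign convention assumed)
    roll = np.arctan(b)       # rotation about the longitudinal axis (sign convention assumed)
    return roll, pitch, d

print(chassis_attitude(2.0, 1.0, yaw=0.3))
```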
Abstract:
The Rainbow 3D camera is a fast method for acquiring 3D information based on spectral analysis. The method illuminates the scene with a continuously varying colour spectrum, so the scene image captured by a colour CCD camera exhibits a regular colour variation, with each colour (wavelength) defining a distinct colour plane in space. By calibrating these colour planes and the camera imaging model, the 3D coordinates of every point in the image can be computed.
Abstract:
The Rainbow 3D camera is a fast method for acquiring 3D information based on spectral analysis. The method illuminates the scene with a continuously varying colour spectrum, so the scene image captured by a colour CCD camera exhibits a regular colour variation, with each colour (wavelength) defining a distinct colour plane in space. By calibrating these colour planes and the camera imaging model, the 3D coordinates of every point in the image can be computed. This paper focuses on the calibration and colour classification techniques used to implement the method, and finally presents experimental results.
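The triangulation step could be sketched as below: each colour indexes a calibrated light plane, and a pixel's 3D point is the intersection of its viewing ray with that plane. The camera intrinsics and plane parameters are placeholder values, not calibration results from the paper.

```python
# Minimal sketch (placeholder calibration values) of colour-plane/ray
# triangulation: the 3D point for a pixel is the intersection of the camera's
# viewing ray through that pixel with the plane selected by the pixel's colour.
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy) for the colour CCD camera.
FX, FY, CX, CY = 800.0, 800.0, 320.0, 240.0

def pixel_ray(u, v):
    # Viewing ray direction through pixel (u, v) in the camera frame.
    d = np.array([(u - CX) / FX, (v - CY) / FY, 1.0])
    return d / np.linalg.norm(d)

def intersect_color_plane(u, v, plane_n, plane_d):
    """Intersect the pixel ray with the plane n.X = d of the pixel's colour."""
    d = pixel_ray(u, v)
    t = plane_d / np.dot(plane_n, d)       # ray origin is the camera centre
    return t * d                           # 3D point in the camera frame

# A hypothetical calibrated plane for one hue: unit normal and offset.
n = np.array([0.0, -0.2588, 0.9659])       # plane tilted ~15 degrees about x
n = n / np.linalg.norm(n)
print(intersect_color_plane(400, 260, n, plane_d=1.2))
```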