16 results for Omnidirectional vision
in the Chinese Academy of Sciences Institutional Repositories Grid Portal
Abstract:
A pump and probe system is developed in which the probe pulse duration τ is less than 60 fs while the pump pulse is stretched to 150-670 fs. The time-resolved excitation processes and damage mechanisms in the omnidirectional reflectors SiO2/TiO2 and ZnS/MgF2 are studied. It is found that when the pump pulse energy exceeds the threshold value, the reflectivity of the probe pulse decreases rapidly during the first half of the pump pulse rather than around its peak. A coupled dynamic model based on avalanche ionization (AI) theory is used to study the excitation processes in the sample and their influence back on the pump pulse. The results indicate that when the pulse duration is longer than 150 fs, photoionization (PI) and AI both play important roles in the generation of conduction band electrons (CBEs); the CBE density generated via AI is higher than that via PI by a factor of 10²-10⁴. The theory explains well the experimental results on the ultrafast excitation processes and the threshold fluences. (c) 2006 American Institute of Physics.
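As a point of reference, coupled AI/PI models of this kind typically track the conduction band electron density with a rate equation of the generic form below; the symbols (photoionization source P_PI, avalanche coefficient α, relaxation time τ_r) are placeholders for illustration, not the paper's exact formulation.

```latex
% Generic CBE rate equation of the kind used in coupled AI/PI models
% (symbols are placeholders, not the paper's exact model):
%   n(t)        conduction band electron density
%   P_{PI}(I)   photoionization rate at pump intensity I(t)
%   \alpha      avalanche (impact ionization) coefficient
%   \tau_r      relaxation (trapping/recombination) time
\begin{equation}
  \frac{dn}{dt} \;=\; P_{\mathrm{PI}}\bigl(I(t)\bigr)
  \;+\; \alpha\, I(t)\, n(t)
  \;-\; \frac{n(t)}{\tau_r}
\end{equation}
```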
Abstract:
Rhodopsin, encoded by the gene Rhodopsin (RH1), is extremely sensitive to light and is responsible for dim-light vision. Bats are nocturnal mammals that inhabit poorly lit environments. Megabats (Old-World fruit bats) generally have well-developed eyes, while microbats (insectivorous bats) have developed echolocation and in general have degraded eyes; however, dramatic differences in the eyes, and in the reliance on vision, exist within this group. In this study, we examined the rod opsin gene (RH1) and compared its evolution to that of two cone opsin genes (SWS1 and M/LWS). While phylogenetic reconstruction with the cone opsin genes SWS1 and M/LWS generated a species tree in accord with expectations, the RH1 gene tree united Pteropodidae (Old-World fruit bats) and Yangochiroptera with very high bootstrap values, suggesting the possibility of convergent evolution. The hypothesis of convergent evolution was further supported when nonsynonymous sites or amino acid sequences were used to construct phylogenies. Reconstructed RH1 sequences at internal nodes of the bat species phylogeny showed that: (1) Old-World fruit bats share an amino acid change (S270G) with the tomb bat; (2) Miniopterus shares two amino acid changes (V104I, M183L) with Rhinolophoidea; (3) the amino acid replacement I123V occurred independently on four branches, and the replacements L99M, L266V and I286V each occurred on two branches. The multiple parallel amino acid replacements that occurred in the evolution of bat RH1 suggest the possibility of multiple convergences in their ecological specializations (i.e., various photic environments) during adaptation to a nocturnal lifestyle, and indicate that further attention should be given to the ecology and behavior of bats.
Abstract:
We investigated the molecular evolution of duplicated color vision genes (LWS-1 and SWS2) within cyprinid fish, focusing on the most cavefish-rich genus, Sinocyclocheilus. Maximum likelihood-based codon substitution approaches were used to analyze the evolution of the vision genes. We found that the duplicated color vision genes had unequal evolutionary rates, which may lead to functional divergence. Divergence of LWS-1 was strongly influenced by positive selection, causing an accelerated rate of substitution among the pocket-forming residues. The SWS2 pigment experienced divergent selection between lineages, and no positively selected site was found. A duplicate copy of LWS-1 in some cyprinine species had become a pseudogene, but all SWS2 sequences remained intact in the regions examined in the cyprinid fishes in this study. The pseudogenization events did not occur randomly between the two copies of LWS-1 within Sinocyclocheilus species. Some cave species of Sinocyclocheilus with numerous morphological specializations, which appear highly adapted to caves, retain both intact copies of the color vision genes in their genomes. We found some novel amino acid substitutions at key sites, which might represent interesting target sites for future mutagenesis experiments. Our data add to the increasing evidence that duplicated genes experience lower selective constraints, and in some cases positive selection, following gene duplication. Some of these observations are unexpected and may provide insights into the effect of caves on the evolution of color vision genes in fishes.
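For illustration only, the sketch below shows one simple way pseudogenization can be flagged in a coding sequence (an in-frame premature stop codon or a frameshift-length change); it is a hypothetical check, not the maximum likelihood codon-model analysis used in the study.

```python
# Hypothetical sketch: flag a coding sequence as a candidate pseudogene if it
# contains an in-frame premature stop codon or a frameshift-suggesting length.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def premature_stop(cds: str) -> bool:
    """Return True if an in-frame stop codon appears before the final codon."""
    cds = cds.upper().replace("U", "T")
    codons = [cds[i:i + 3] for i in range(0, len(cds) - 2, 3)]
    return any(c in STOP_CODONS for c in codons[:-1])

def looks_like_pseudogene(cds: str) -> bool:
    # A length that is not a multiple of 3 suggests an indel-induced frameshift.
    return len(cds) % 3 != 0 or premature_stop(cds)

if __name__ == "__main__":
    intact = "ATGGCTTGGCTTTAA"          # stop codon only at the end
    broken = "ATGGCTTAGGCTTGGCTTTAA"    # in-frame TAG before the end
    print(looks_like_pseudogene(intact), looks_like_pseudogene(broken))
```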
Abstract:
A programmable vision chip for real-time vision applications is presented. The chip architecture combines an SIMD processing element (PE) array with row-parallel processors, which can perform pixel-parallel and row-parallel operations at high speed. It implements the mathematical morphology method to carry out low-level and mid-level image processing and sends out image features for high-level image processing without an I/O bottleneck. The chip can perform many algorithms through software control. The simulated maximum frequency of the vision chip is 300 MHz at a 16 × 16 pixel resolution, achieving a rate of 1000 frames per second for real-time vision. A prototype chip with a 16 × 16 PE array is fabricated in a 0.18 μm standard CMOS process. It has a pixel size of 30 μm × 40 μm and 8.72 mW power consumption with a 1.8 V power supply. Experiments, including the mathematical morphology method and a target tracking application, demonstrate that the chip is fully functional and can be applied in real-time vision applications.
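A minimal sketch of the binary mathematical morphology primitives (3 × 3 erosion and dilation) that a pixel-parallel PE array of this kind typically implements is shown below; plain NumPy stands in for the on-chip parallel hardware, and the toy frame is an assumption.

```python
# Binary morphology sketch: 3x3 dilation and erosion on a binary image.
import numpy as np

def dilate(img: np.ndarray) -> np.ndarray:
    """Binary dilation with a 3x3 square structuring element."""
    padded = np.pad(img, 1, mode="constant")
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy: 1 + dy + img.shape[0],
                          1 + dx: 1 + dx + img.shape[1]]
    return out

def erode(img: np.ndarray) -> np.ndarray:
    """Binary erosion: complement of the dilation of the complement."""
    return 1 - dilate(1 - img)

if __name__ == "__main__":
    frame = (np.random.rand(16, 16) > 0.7).astype(np.uint8)  # toy 16x16 frame
    opened = dilate(erode(frame))   # morphological opening removes speckle
    print(opened.sum())
```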
Abstract:
A programmable vision chip with variable resolution and row-pixel-mixed parallel image processors is presented. The chip consists of a CMOS sensor array with row-parallel 6-bit algorithmic ADCs, row-parallel gray-scale image processors, a pixel-parallel SIMD processing element (PE) array, and an instruction controller. The image resolution in the chip is variable: high resolution for a focused area and low resolution for the general view. It implements gray-scale and binary mathematical morphology algorithms in series to carry out low-level and mid-level image processing and sends out image features for various applications. It can perform image processing at over 1,000 frames/s (fps). A prototype chip with 64 × 64 pixel resolution and 6-bit gray scale is fabricated in a 0.18 μm standard CMOS process. The chip area is 1.5 mm × 3.5 mm; each pixel is 9.5 μm × 9.5 μm and each processing element is 23 μm × 29 μm. The experimental results demonstrate that the chip can perform low-level and mid-level image processing and can be applied in real-time vision applications such as high-speed target tracking.
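The sketch below illustrates the variable-resolution idea in software only: full resolution inside a focused region of interest and block-averaged low resolution for the general view. The windowing scheme and block size are assumptions for illustration, not the chip's actual readout circuit.

```python
# Hypothetical variable-resolution sketch: low resolution outside a focused ROI.
import numpy as np

def variable_resolution(img: np.ndarray, roi: tuple, block: int = 4) -> np.ndarray:
    """Return an image that is block-averaged outside `roi` = (y0, y1, x0, x1)."""
    h, w = img.shape
    assert h % block == 0 and w % block == 0, "toy version assumes divisible sizes"
    # Low-resolution background: average block x block tiles, then upsample.
    low = img.reshape(h // block, block, w // block, block).astype(float).mean(axis=(1, 3))
    out = np.kron(low, np.ones((block, block)))
    # Paste the full-resolution focused area back in.
    y0, y1, x0, x1 = roi
    out[y0:y1, x0:x1] = img[y0:y1, x0:x1]
    return out.astype(img.dtype)

if __name__ == "__main__":
    frame = np.random.randint(0, 64, (64, 64), dtype=np.uint8)  # 6-bit frame
    mixed = variable_resolution(frame, roi=(16, 48, 16, 48))
    print(mixed.shape)
```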
Abstract:
This paper presents a novel vision chip for high-speed target tracking. Two concise algorithms for high-speed target tracking are developed. The algorithms comprise basic operations that can be used to process real-time image information during target tracking. The vision chip is implemented based on these algorithms and a row-parallel architecture. A prototype chip with 64 × 64 pixels is fabricated in a 0.35 μm complementary metal-oxide-semiconductor (CMOS) process with a 4.5 × 2.5 mm² area. It operates at a rate of 1000 frames per second with a 10 MHz main clock. The experimental results demonstrate that a high-speed target can be tracked against a complex static background and that a high-speed target can be tracked among other high-speed objects against a clean background.
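As an illustration of the kind of concise tracking primitive such a chip can run, the sketch below performs frame differencing, thresholding, and centroid extraction; the threshold and test data are assumptions, not the paper's algorithms.

```python
# Frame-differencing tracker sketch: centroid of pixels that changed.
import numpy as np

def target_centroid(prev: np.ndarray, curr: np.ndarray, thresh: int = 20):
    """Return the (row, col) centroid of pixels that changed between frames."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    mask = diff > thresh
    if not mask.any():
        return None                       # no moving target detected
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

if __name__ == "__main__":
    prev = np.zeros((64, 64), dtype=np.uint8)
    curr = prev.copy()
    curr[30:34, 40:44] = 200              # a small bright target has appeared
    print(target_centroid(prev, curr))    # approximately (31.5, 41.5)
```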
Abstract:
The characteristics of several in-vehicle night imaging and display technologies are introduced. Compared with current automotive night vision technologies, range-gated technology can eliminate backscattered light and increase the system SNR. The theory of range-gated imaging technology is described, and a range-gated system for cars is designed. The divergence angle of the laser can be made to change automatically, which allows overfilling of the camera field of view to effectively attenuate the laser when necessary. The driver safety range is calculated from the theoretical analysis. The observation distance of the designed system is about 500 m, which satisfies the requirement for a safe driving range.
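For reference, the basic timing relations behind range gating are sketched below; the symbols are generic rather than the paper's specific system parameters, and the numerical example simply evaluates the gate delay for a 500 m target.

```latex
% Generic range-gating timing (not the paper's specific parameters):
% the camera gate opens only for light returning from the range slice
% of interest, which rejects near-field backscatter.
%   R        target range,   c        speed of light
%   t_d      gate delay after the laser pulse
%   \tau_g   gate width,     \Delta R depth of the imaged slice
\begin{align}
  t_d &= \frac{2R}{c}, &
  \Delta R &= \frac{c\,\tau_g}{2}
\end{align}
% Example: R = 500\,\mathrm{m} \Rightarrow t_d \approx 3.3\,\mu\mathrm{s}.
```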
Abstract:
In this paper we present a robust face location system based on simulations of human vision to automatically locate faces in static color images. Our method is divided into four stages. In the first stage we use a Gaussian low-pass filter to remove the fine detail of the images, which is not used in the initial stage of human vision. During the second and third stages, our technique approximately detects the image regions that may contain faces. During the fourth stage, the existence of faces in the selected regions is verified. By combining the advantages of bottom-up feature-based methods and appearance-based methods, our algorithm performs well on a variety of images, including those with highly complex backgrounds.
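A minimal sketch of the first stage, a separable Gaussian low-pass filter that discards fine detail before coarse region detection, is given below; the kernel size and sigma are illustrative assumptions.

```python
# Separable Gaussian low-pass filter sketch (rows, then columns).
import numpy as np

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    ax = np.arange(size) - size // 2
    k = np.exp(-ax**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img: np.ndarray, size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Blur an image by convolving each row and then each column with a 1-D kernel."""
    k = gaussian_kernel(size, sigma)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1,
                                  img.astype(float))
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred

if __name__ == "__main__":
    img = np.random.randint(0, 256, (120, 160)).astype(float)  # toy gray image
    coarse = gaussian_blur(img, size=7, sigma=2.0)
    print(coarse.shape)
```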
Abstract:
This paper presents a novel vision chip architecture for fast traffic lane detection (FTLD). The architecture consists of a 32 × 32 SIMD processing element (PE) array processor and a dual-core RISC processor. The PE array processor performs low-level pixel-parallel image processing at high speed and outputs image features for high-level image processing without an I/O bottleneck. The dual-core processor carries out the high-level image processing. A parallel fast lane detection algorithm for this architecture is developed. An FPGA system with a CMOS image sensor is used to implement the architecture. Experimental results show that the system can perform fast traffic lane detection at a rate of 50 fps. It is much faster than previous work and is robust enough to operate under various light intensities. The novel vision chip architecture can meet the demands of real-time lane departure warning systems.
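The sketch below is a hypothetical row-parallel lane-marking cue: each image row is processed independently, flagging columns with a strong horizontal gradient. It only illustrates the row-parallel style of processing and is not the paper's FTLD algorithm.

```python
# Hypothetical row-parallel lane-marking cue: per-row horizontal gradient map.
import numpy as np

def lane_edge_map(gray: np.ndarray, thresh: float = 40.0) -> np.ndarray:
    """Binary map of strong horizontal-gradient pixels, computed row by row."""
    g = gray.astype(float)
    grad = np.zeros_like(g)
    grad[:, 1:-1] = np.abs(g[:, 2:] - g[:, :-2])   # central difference per row
    return (grad > thresh).astype(np.uint8)

if __name__ == "__main__":
    road = np.full((32, 32), 60.0)
    road[:, 10:12] = 220.0                          # a bright lane-marking stripe
    edges = lane_edge_map(road)
    print(np.nonzero(edges.any(axis=0))[0])         # columns flagged as edges
```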
Abstract:
This paper presents a novel CMOS color pixel with a 2D metal-grating structure for real-time vision chips. It consists of an N-well/P-substrate diode without salicide and 2D metal-grating layers on top of the diode. The periods of the 2D metal structure are controlled to realize color filtering. We implemented sixteen kinds of pixels with different metal-grating structures in a standard 0.18 μm CMOS process. The measured results demonstrate that the N-well/P-substrate diode without salicide and with the 2D metal-grating structures can serve well as a high-speed RGB color active pixel sensor for real-time vision chips.
Abstract:
The D-vision system (where "D" carries the dual meaning of "Divide Screen" and "Duplex-Vision") is a class of multi-projector virtual reality systems (multi-projector systems for short) based on a PC cluster. This paper describes the implementation of two-handed 6-DOF haptic interaction in the D-vision system. On the client, two Spidar-G (Space Interface for Artificial Reality with Grip) haptic devices are controlled cooperatively to realize two-handed collaborative interaction; a UDP-based socket class is then built to handle communication between the client and the rendering server nodes, transmitting information such as the position and orientation of the tracked ball; finally, distributed rendering produces a seamless display on the large screen. Experimental results show that two-handed 6-DOF haptic interaction in the D-vision system is a natural and intuitive form of human-computer interaction.
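A hypothetical sketch of the UDP link described above follows: the client packs the tracked ball's position and orientation into a fixed-format datagram and sends it to a rendering server node. The address, port, and packet format are assumptions, not the system's actual protocol.

```python
# Hypothetical UDP pose link between client and render-server node.
import socket
import struct

SERVER = ("127.0.0.1", 9000)          # assumed address of a render-server node
FMT = "!7d"                           # x, y, z, qw, qx, qy, qz

def send_pose(sock, pos, quat):
    """Pack a 6-DOF pose (position + orientation quaternion) and send it."""
    sock.sendto(struct.pack(FMT, *pos, *quat), SERVER)

def recv_pose(sock):
    """Receive one datagram and unpack it back into (position, quaternion)."""
    data, _ = sock.recvfrom(struct.calcsize(FMT))
    vals = struct.unpack(FMT, data)
    return vals[:3], vals[3:]

if __name__ == "__main__":
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(SERVER)
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_pose(client, (0.1, 0.2, 0.3), (1.0, 0.0, 0.0, 0.0))
    print(recv_pose(server))
```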
Abstract:
A portable 3D laser scanning system has been designed and built for robot vision. By tilting the charge-coupled device (CCD) plane of the portable 3D scanning system according to the Scheimpflug condition, the depth of view is extended from less than 40 mm to 100 mm. Based on the tilted camera model, the traditional two-step camera calibration method is modified by introducing an angle factor. Meanwhile, a novel segmental calibration approach, i.e., dividing the whole working range into two parts and calibrating each with its corresponding system parameters, is proposed to effectively improve the measurement accuracy of the large depth-of-view 3D laser scanner. In the 3D reconstruction process, different calibration parameters are used to transform 2D coordinates into 3D coordinates according to the position of the image point on the CCD plane, and a measurement accuracy of 60 μm is obtained experimentally. Finally, an experiment in which a lamina is scanned by the large depth-of-view portable 3D laser scanner mounted on an IRB 4400 industrial robot demonstrates the effectiveness and high measurement accuracy of our scanning system. (C) 2007 Elsevier Ltd. All rights reserved.
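The sketch below illustrates the segmental calibration idea only: the working range is split into two segments, each with its own calibrated parameter set, chosen by where the image point falls on the CCD. The parameter values and the depth mapping are placeholders, not the paper's calibrated model.

```python
# Hypothetical segmental-calibration lookup: pick a parameter set by image row.

# Assumed per-segment parameters (e.g. scale/offset of a simplified
# triangulation model); a real system would store a full camera model.
SEGMENTS = {
    "near": {"row_range": (0, 240),   "scale": 0.05, "offset": 40.0},
    "far":  {"row_range": (240, 480), "scale": 0.12, "offset": 70.0},
}

def depth_from_pixel(row: int, col: int) -> float:
    """Convert a laser-stripe pixel to depth using its segment's parameters."""
    for seg in SEGMENTS.values():
        lo, hi = seg["row_range"]
        if lo <= row < hi:
            return seg["offset"] + seg["scale"] * col   # placeholder mapping
    raise ValueError("pixel outside calibrated working range")

if __name__ == "__main__":
    print(depth_from_pixel(100, 320), depth_from_pixel(300, 320))
```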
Abstract:
Behavioral and ventilatory parameters can potentially predict the stress state of fish in vivo and in situ. This paper presents a new image-processing algorithm for quantifying the average swimming speed of a fish school in an aquarium. The method is based on the change in projected area caused by the movement of individual fish between frame sequences captured at given time intervals. The image enhancement method increases the contrast between fish and background and is thus suitable for use in turbid aquaculture water. Behavioral parameters (swimming activity and distribution parameters) and changes in the ventilation frequency (VF) of tilapia (Oreochromis niloticus) responded to acute fluctuations in dissolved oxygen (DO), which were monitored continuously through normoxia, a falling DO level, maintained hypoxia (three levels: 1.5, 0.8 and 0.3 mg l⁻¹), and subsequent recovery to normoxia. These parameters responded sensitively to acute variations in DO level; they displayed significant changes (P < 0.05) during severe hypoxia (the 0.8 and 0.3 mg l⁻¹ levels) compared with the normoxic condition, but there was no significant difference under mild hypoxia (the 1.5 mg l⁻¹ level). There was no significant difference in VF between the two levels of severe hypoxia (0.8 and 0.3 mg l⁻¹) during the low-DO condition, whereas the activity and distribution parameters displayed distinguishable differences between the 0.8 and 0.3 mg l⁻¹ levels. The behavioral parameters are thus capable of distinguishing between different degrees of severe hypoxia, although the fluctuations were relatively large. (c) 2006 Elsevier B.V. All rights reserved.
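A minimal sketch of the projected-area idea is shown below: the fraction of pixels whose fish/background classification changes between two frames serves as an activity index for the school. The segmentation threshold and test frames are illustrative assumptions, not the paper's algorithm.

```python
# Activity index sketch: fraction of pixels whose fish/background label changes.
import numpy as np

def activity_index(prev: np.ndarray, curr: np.ndarray, thresh: int = 30) -> float:
    """Fraction of pixels that switch between fish and background."""
    fish_prev = prev.astype(np.int16) < thresh     # dark fish on bright background
    fish_curr = curr.astype(np.int16) < thresh
    changed = np.logical_xor(fish_prev, fish_curr)
    return changed.mean()

if __name__ == "__main__":
    prev = np.full((100, 100), 200, dtype=np.uint8)
    curr = prev.copy()
    prev[40:50, 20:30] = 10                         # fish at the old position
    curr[40:50, 25:35] = 10                         # fish has moved slightly
    print(activity_index(prev, curr))               # ~0.01 of pixels changed
```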
Abstract:
A catadioptric omnidirectional imaging system is an omnidirectional imaging device composed of an ordinary perspective camera and reflective mirrors. It can acquire 360° panoramic images in real time without stitching, has become a research hotspot in recent years, and is widely applied in fields such as video conferencing, 3D reconstruction, and mobile robot navigation.

This thesis studies the design, calibration, matching, and 3D reconstruction of a single-camera omnidirectional stereo vision system. It introduces OSVOD (Omnidirectional Stereo Vision Optical Device), a catadioptric omnidirectional stereo vision optical device that can acquire omnidirectional 3D information in real time. OSVOD consists of two hyperboloidal mirrors and an ordinary perspective camera. The two hyperboloidal mirrors are coaxial and fixed one above the other at a fixed separation inside a glass cylinder; the lower mirror has a hole at its center through which the upper mirror is imaged onto the camera image plane, so that a point in space produces two image points on the image plane after reflection by the upper and lower mirrors, realizing stereo vision with a single camera. The common axis of the two mirrors is collinear with the optical axis of the camera lens, and their common focus coincides with the optical center of the lens; this configuration guarantees that the system satisfies the single-viewpoint (SVP) constraint. It also makes the epipolar lines of the system a set of radial lines, which simplifies correspondence matching. In addition, the separation between the two mirrors gives the system a relatively long equivalent baseline and therefore higher accuracy.

The first part of the thesis briefly introduces current omnidirectional imaging methods and summarizes their characteristics. The second part reviews the state of the art in catadioptric omnidirectional vision and compares the imaging characteristics of various mirror shapes.

The third part describes the design of OSVOD, including the mechanical design and the mirror design, together with an error analysis of the design results.

The fourth part studies the calibration of OSVOD. A calibration method is given for the system parameters, including the relative position of the camera and the mirrors in OSVOD. The method uses the images formed on the image plane by calibration points with known spatial coordinates; combined with the system imaging model, it back-computes the spatial coordinates of the calibration points, builds equations from the known and back-computed spatial coordinates, and then uses a Levenberg-Marquardt-based backpropagation algorithm to calibrate the mounting deviation between the camera and the mirrors. This calibration method can be extended to all catadioptric imaging systems.

The fifth part studies matching based on omnidirectional images. Because the imaging scales of the stereo image pair acquired by the system differ considerably, the images are first unwrapped into cylindrical projection images; Canny edge detection is then applied to the cylindrical image unwrapped from the lower-mirror view to obtain edge points, and direct correlation matching of these edge points is performed between the two unwrapped cylindrical images. Finally, the matches are checked for consistency, and 3D computation on the matches that pass the consistency check produces a sparse 3D image.

The thesis ends with conclusions and an outlook on future work.
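As an illustration of the cylindrical unwrapping step, the sketch below resamples the annular omnidirectional image around a known center onto an (angle, radius) grid by nearest-neighbor lookup; the center, radii, and strip width are assumed values, not the system's calibrated geometry.

```python
# Cylindrical unwrapping sketch: annulus of an omnidirectional image -> panorama.
import numpy as np

def unwrap_to_cylinder(omni: np.ndarray, center, r_min, r_max, width=720):
    """Unwrap the annulus r_min..r_max around `center` into a panoramic strip."""
    cy, cx = center
    thetas = np.linspace(0, 2 * np.pi, width, endpoint=False)
    radii = np.arange(r_min, r_max)
    # Source pixel coordinates for every (radius, angle) pair.
    ys = (cy + radii[:, None] * np.sin(thetas[None, :])).round().astype(int)
    xs = (cx + radii[:, None] * np.cos(thetas[None, :])).round().astype(int)
    ys = np.clip(ys, 0, omni.shape[0] - 1)
    xs = np.clip(xs, 0, omni.shape[1] - 1)
    return omni[ys, xs]                    # shape: (r_max - r_min, width)

if __name__ == "__main__":
    omni = np.random.randint(0, 256, (480, 480), dtype=np.uint8)
    panorama = unwrap_to_cylinder(omni, center=(240, 240), r_min=60, r_max=220)
    print(panorama.shape)                  # (160, 720)
```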
Abstract:
Acquiring omnidirectional 3D information is important for mobile robot navigation and motion planning. Although many other methods, such as ultrasonic sensors and laser rangefinders, can accomplish this task, a catadioptric stereo vision system can in most cases achieve higher accuracy and a larger field of view without consuming extra energy. This thesis uses a novel catadioptric omnidirectional stereo vision optical device (OSVOD) for stereo vision research; OSVOD is a system composed of two hyperboloidal mirrors and a perspective camera that realizes stereo vision from a single image.

The thesis focuses on the three most critical techniques of the single-camera omnidirectional stereo vision system: calibration, single-image matching, and motion estimation with multi-view matching.

For system calibration, a calibration method is given for the system parameters, including the relative position of the camera and the mirrors in OSVOD. The method uses the images formed on the image plane by calibration points with known spatial coordinates; combined with the system imaging model, it back-computes the spatial coordinates of the calibration points, builds equations from the known and back-computed spatial coordinates, and then uses a Levenberg-Marquardt-based backpropagation algorithm to calibrate the mounting deviation between the camera and the mirrors. This calibration method can be extended to all catadioptric imaging systems.

For single-image matching, to handle the large differences in imaging scale and the distortion between the stereo image pair acquired by the system, the images are unwrapped into cylindrical projection images and top-view projection images, and a three-step algorithm is proposed: unambiguous points are matched first, which divides the matching into small independent subproblems, and each epipolar line is matched only up to the farthest feature point, avoiding unreliable matches at long range. In the subsequent dynamic programming algorithm, a specific energy function is designed that weights texture strength and confidence level separately, yielding a reliable omnidirectional dense depth map.

For motion estimation and multi-view matching, Harris corners are used as the features to be matched and initial matches are obtained by correlation. Since mismatches are unavoidable in these results, a random sample consensus (RANSAC) algorithm is used to obtain the motion estimate. After motion estimation, edge-detection-based multi-view matching is performed on images taken at different positions, and obstacle information is acquired quickly and accurately.
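For illustration, the sketch below applies the RANSAC idea to putative corner matches using a simple 2D translation model; the real system estimates full camera motion, so the translation-only model and the synthetic data are simplifications, not the thesis method.

```python
# RANSAC sketch: robustly fit dst ~ src + t to point matches with outliers.
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, rng=None):
    """Estimate a 2D translation from noisy matches while rejecting mismatches."""
    rng = np.random.default_rng(rng)
    best_t, best_inliers = None, 0
    for _ in range(iters):
        i = rng.integers(len(src))          # minimal sample: one correspondence
        t = dst[i] - src[i]
        residuals = np.linalg.norm(dst - (src + t), axis=1)
        inliers = (residuals < tol).sum()
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    mask = np.linalg.norm(dst - (src + best_t), axis=1) < tol
    # Refit the translation on the inlier set only.
    return dst[mask].mean(axis=0) - src[mask].mean(axis=0), mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.uniform(0, 100, (50, 2))
    dst = src + np.array([5.0, -3.0]) + rng.normal(0, 0.3, (50, 2))
    dst[:10] = rng.uniform(0, 100, (10, 2))        # 10 gross mismatches
    t, mask = ransac_translation(src, dst, rng=1)
    print(t, mask.sum())                           # ~[5, -3] with ~40 inliers
```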