18 results for vision rehabilitation

in Chinese Academy of Sciences Institutional Repositories Grid Portal


Relevance:

20.00%

Publisher:

Abstract:

Rhodopsin, encoded by the gene Rhodopsin (RH1), is extremely sensitive to light and is responsible for dim-light vision. Bats are nocturnal mammals that inhabit poorly lit environments. Megabats (Old-World fruit bats) generally have well-developed eyes, while microbats (insectivorous bats) have evolved echolocation and generally have degraded eyes; nonetheless, dramatic differences in eye development, and in reliance on vision, exist within this group. In this study, we examined the rod opsin gene (RH1) and compared its evolution to that of two cone opsin genes (SWS1 and M/LWS). While phylogenetic reconstruction with the cone opsin genes SWS1 and M/LWS generated a species tree in accord with expectations, the RH1 gene tree united Pteropodidae (Old-World fruit bats) and Yangochiroptera with very high bootstrap values, suggesting the possibility of convergent evolution. The hypothesis of convergent evolution was further supported when nonsynonymous sites or amino acid sequences were used to construct phylogenies. Reconstructed RH1 sequences at internal nodes of the bat species phylogeny showed that: (1) Old-World fruit bats share an amino acid change (S270G) with the tomb bat; (2) Miniopterus shares two amino acid changes (V104I, M183L) with Rhinolophoidea; and (3) the amino acid replacement I123V occurred independently on four branches, while the replacements L99M, L266V and I286V each occurred on two branches. The multiple parallel amino acid replacements that occurred during the evolution of bat RH1 suggest that ecological specialization (i.e., to various photic environments) may have converged multiple times during adaptation to a nocturnal lifestyle, and indicate that further attention to the ecology and behavior of bats is needed.
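
The convergence signal described above rests on independent lineages acquiring the same derived residue (e.g., S270G) relative to a reconstructed ancestral sequence. As a rough illustration of that site-by-site screen, here is a minimal Python sketch; the alignment, site numbering, and lineage names are invented placeholders, not the study's data.

```python
# Minimal sketch: flag candidate parallel amino acid replacements in aligned
# rod-opsin (RH1) sequences. Sequences and site numbering are hypothetical
# placeholders, not the data from the study.

def parallel_replacements(ancestral, lineages):
    """Return sites where two or more lineages share the same derived residue
    that differs from the inferred ancestral state."""
    hits = {}
    for site, anc_res in enumerate(ancestral):
        derived = {}
        for name, seq in lineages.items():
            if seq[site] != anc_res:
                derived.setdefault(seq[site], []).append(name)
        for res, names in derived.items():
            if len(names) >= 2:  # same change in independent lineages
                hits[site + 1] = (anc_res, res, names)
    return hits

# Toy alignment: a shared S->G change at the last site mimics a parallel replacement.
ancestral = "MSATS"
lineages = {
    "OW_fruit_bat": "MSATG",
    "tomb_bat":     "MSATG",
    "Miniopterus":  "MSVTS",
}

for site, (anc, der, who) in parallel_replacements(ancestral, lineages).items():
    print(f"site {site}: {anc}{site}{der} shared by {', '.join(who)}")
```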

Relevance:

20.00%

Publisher:

Abstract:

We investigated the molecular evolution of duplicated color vision genes (LWS-1 and SWS2) within cyprinid fish, focusing on the most cavefish-rich genus, Sinocyclocheilus. Maximum likelihood-based codon substitution approaches were used to analyze the evolution of the vision genes. We found that the duplicated color vision genes had unequal evolutionary rates, which may lead to functional divergence. Divergence of LWS-1 was strongly influenced by positive selection, causing an accelerated rate of substitution among pocket-forming residues. The SWS2 pigment experienced divergent selection between lineages, and no positively selected site was found. A duplicate copy of LWS-1 in some cyprinine species had become a pseudogene, but all SWS2 sequences remained intact in the regions examined in the cyprinid fishes studied here. The pseudogenization events did not occur randomly between the two copies of LWS-1 within Sinocyclocheilus species. Some cave species of Sinocyclocheilus with numerous morphological specializations, which seem to be highly adapted to caves, retain both intact copies of the color vision genes in their genomes. We found some novel amino acid substitutions at key sites, which might represent interesting target sites for future mutagenesis experiments. Our data add to the increasing evidence that duplicate genes experience relaxed selective constraints, and in some cases positive selection, following gene duplication. Some of these observations are unexpected and may provide insights into the effect of caves on the evolution of color vision genes in fishes.
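
The pseudogenization evidence mentioned above ultimately comes down to whether a coding sequence still reads through as an intact open reading frame. A minimal sketch of that check, using invented sequences rather than the actual LWS-1 or SWS2 data:

```python
# Minimal sketch: flag a putative opsin pseudogene by scanning a coding
# sequence for premature stop codons. The sequences are invented examples,
# not the LWS-1 or SWS2 data analysed in the study.

STOP = {"TAA", "TAG", "TGA"}

def premature_stops(cds):
    """Return 1-based codon positions of stop codons before the final codon."""
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    return [i + 1 for i, c in enumerate(codons[:-1]) if c in STOP]

intact     = "ATGGCTTGGATTGTCGGATAA"   # no internal stop codon
pseudogene = "ATGGCTTGAATTGTCGGATAA"   # TGA at codon 3

for name, seq in [("intact copy", intact), ("degraded copy", pseudogene)]:
    stops = premature_stops(seq)
    status = f"putative pseudogene (stops at codons {stops})" if stops else "ORF intact"
    print(f"{name}: {status}")
```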

Relevance:

20.00%

Publisher:

Abstract:

A programmable vision chip for real-time vision applications is presented. The chip architecture combines a SIMD processing element (PE) array with row-parallel processors, which can perform pixel-parallel and row-parallel operations at high speed. It implements the mathematical morphology method to carry out low-level and mid-level image processing and sends out image features for high-level image processing without an I/O bottleneck. The chip can perform many algorithms under software control. The simulated maximum frequency of the vision chip is 300 MHz at a resolution of 16 x 16 pixels, achieving a rate of 1,000 frames per second in real-time vision. A prototype chip with a 16 x 16 PE array was fabricated in a 0.18 µm standard CMOS process. It has a pixel size of 30 µm x 40 µm and consumes 8.72 mW from a 1.8 V supply. Experiments, including the mathematical morphology method and a target-tracking application, demonstrated that the chip is fully functional and can be applied in real-time vision applications.
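
As a software stand-in for the low- and mid-level operations the abstract refers to, the sketch below implements 3x3 binary erosion and dilation in NumPy and composes them into an opening; it illustrates the morphology primitives only, not the chip's SIMD implementation.

```python
# Minimal sketch of binary mathematical-morphology primitives (3x3 erosion and
# dilation) of the kind a pixel-parallel PE array evaluates in lock-step.
# Pure NumPy stand-in for the hardware; the test image is arbitrary.

import numpy as np

def erode(img):
    """3x3 binary erosion: a pixel survives only if its whole neighbourhood is 1."""
    padded = np.pad(img, 1, constant_values=0)
    out = np.ones_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= padded[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
    return out

def dilate(img):
    """3x3 binary dilation: a pixel fires if any neighbour is 1."""
    padded = np.pad(img, 1, constant_values=0)
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
    return out

img = np.zeros((16, 16), dtype=np.uint8)
img[4:12, 4:12] = 1          # an 8x8 square "target"
opened = dilate(erode(img))  # morphological opening removes small noise
print(opened.sum(), "pixels survive the opening")
```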

Relevance:

20.00%

Publisher:

Abstract:

A programmable vision chip with variable resolution and row-pixel-mixed parallel image processors is presented. The chip consists of a CMOS sensor array with row-parallel 6-bit algorithmic ADCs, row-parallel gray-scale image processors, a pixel-parallel SIMD processing element (PE) array, and an instruction controller. The resolution of the image in the chip is variable: high resolution for a focused area and low resolution for the general view. It implements gray-scale and binary mathematical morphology algorithms in series to carry out low-level and mid-level image processing and sends out image features for various applications. It can perform image processing at over 1,000 frames/s (fps). A prototype chip with 64 x 64 pixel resolution and 6-bit gray scale was fabricated in a 0.18 µm standard CMOS process. The chip area is 1.5 mm x 3.5 mm; each pixel measures 9.5 µm x 9.5 µm and each processing element 23 µm x 29 µm. The experimental results demonstrate that the chip can perform low-level and mid-level image processing and can be applied in real-time vision applications such as high-speed target tracking.
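
The variable-resolution readout can be pictured as keeping full resolution inside a focus window and coarse, block-averaged resolution elsewhere. The NumPy sketch below illustrates that idea only; the window position, block size, and frame content are arbitrary assumptions, not the chip's actual scheme.

```python
# Minimal sketch of the variable-resolution idea: full resolution inside a
# focus window, block-averaged (e.g. 4x4) everywhere else. Illustration of the
# readout concept only; parameters are invented.

import numpy as np

def foveated(img, focus, block=4):
    """Return a copy of img that is full-resolution inside `focus`
    (y0, y1, x0, x1) and block-averaged outside it."""
    h, w = img.shape
    coarse = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    out = np.kron(coarse, np.ones((block, block)))    # expand back to full size
    y0, y1, x0, x1 = focus
    out[y0:y1, x0:x1] = img[y0:y1, x0:x1]             # restore the focused area
    return out

rng = np.random.default_rng(0)
frame = rng.integers(0, 64, size=(64, 64)).astype(float)   # 6-bit gray-scale frame
view = foveated(frame, focus=(16, 32, 16, 32))
print(view.shape, "focused pixels unchanged:",
      np.allclose(view[16:32, 16:32], frame[16:32, 16:32]))
```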

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a novel vision chip for high-speed target tracking. Two concise algorithms for high-speed target tracking are developed; they consist of basic operations that can process real-time image information during target tracking. The vision chip is implemented based on these algorithms and a row-parallel architecture. A prototype chip with 64 x 64 pixels was fabricated in a 0.35 µm complementary metal-oxide-semiconductor (CMOS) process with a 4.5 x 2.5 mm² area. It operates at a rate of 1,000 frames per second with a 10 MHz main clock. The experimental results demonstrate that a high-speed target can be tracked against a complex static background, and that a high-speed target can be tracked among other high-speed objects against a clean background.
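
One of the simplest tracking pipelines built from the kind of basic operations the abstract mentions is frame differencing followed by a centroid computation. The sketch below is such a generic baseline, not the paper's algorithms; the synthetic frames and threshold are arbitrary.

```python
# Minimal sketch of centroid tracking from frame differencing. Pure NumPy;
# the synthetic frames and threshold are arbitrary choices, not the
# algorithms from the paper.

import numpy as np

def track_centroid(prev, curr, thresh=30):
    """Threshold the inter-frame difference and return the centroid of the
    moving region, or None if nothing moved."""
    moving = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    ys, xs = np.nonzero(moving)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic 64x64 sequence: a bright 4x4 target moving one pixel per frame.
frames = []
for t in range(3):
    f = np.zeros((64, 64), dtype=np.uint8)
    f[20:24, 10 + t:14 + t] = 255
    frames.append(f)

for prev, curr in zip(frames, frames[1:]):
    print("target centroid:", track_centroid(prev, curr))
```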

Relevance:

20.00%

Publisher:

Abstract:

The characteristics of several in-vehicle night imaging and display technologies are introduced. Compared with current automotive night-vision technologies, range-gated imaging can eliminate backscattered light and increase the system SNR. The theory of range-gated imaging is described, and a range-gated system for cars is designed. The divergence angle of the laser can be made to change automatically, which allows overfilling of the camera field of view to effectively attenuate the laser when necessary. The driver's safety range is calculated from the theoretical analysis. The observation distance of the designed system is about 500 m, which satisfies the required safe driving range.
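
Range gating works because the camera gate is opened only after the round-trip time to the range slice of interest, so backscatter from nearer ranges never reaches the sensor. A back-of-the-envelope timing calculation for the 500 m observation distance quoted above (the 30 m gate depth is an illustrative assumption):

```python
# Back-of-the-envelope range-gating timing: the gate opens after the
# round-trip time to the slice of interest. The 500 m figure is from the
# abstract; the 30 m gate depth is an arbitrary illustrative choice.

C = 3.0e8  # speed of light, m/s

def gate_timing(range_m, depth_m):
    """Return (gate delay, gate width) in nanoseconds for a viewing slice
    starting at range_m and extending depth_m."""
    delay = 2 * range_m / C
    width = 2 * depth_m / C
    return delay * 1e9, width * 1e9

delay_ns, width_ns = gate_timing(500.0, 30.0)
print(f"gate delay ~{delay_ns:.0f} ns, gate width ~{width_ns:.0f} ns")
```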

Relevance:

20.00%

Publisher:

Abstract:

In this paper we present a robust face location system, based on simulations of human vision, that automatically locates faces in static color images. Our method is divided into four stages. In the first stage we use a Gaussian low-pass filter to remove fine image detail, which is not used in the initial stage of human vision. During the second and third stages, our technique roughly detects the image regions that may contain faces. During the fourth stage, the existence of faces in the selected regions is verified. By combining the advantages of bottom-up feature-based methods and appearance-based methods, our algorithm performs well on a wide variety of images, including those with highly complex backgrounds.
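
The first stage described above is an ordinary Gaussian low-pass filter; the region-detection stages are not specified in the abstract, so the sketch below substitutes a simple RGB skin-tone rule purely for illustration. Everything besides the Gaussian smoothing step is an assumption.

```python
# Minimal sketch: Gaussian low-pass filtering followed by a crude candidate-
# region step. The RGB skin-tone rule is an assumption for illustration, not
# the rule from the paper.

import numpy as np
from scipy.ndimage import gaussian_filter, label

def candidate_face_regions(rgb, sigma=2.0):
    """Blur each channel, apply a crude skin-tone test, and return a label map
    of connected candidate regions."""
    blurred = np.stack([gaussian_filter(rgb[..., c].astype(float), sigma)
                        for c in range(3)], axis=-1)
    r, g, b = blurred[..., 0], blurred[..., 1], blurred[..., 2]
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (np.abs(r - g) > 15)
    labels, n = label(skin)
    return labels, n

# Synthetic 100x100 image with a skin-coloured patch in the middle.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[30:70, 30:70] = (200, 140, 110)
labels, n = candidate_face_regions(img)
print(n, "candidate region(s) found")
```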

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a novel vision-chip architecture for fast traffic lane detection (FTLD). The architecture consists of a 32 x 32 SIMD processing element (PE) array processor and a dual-core RISC processor. The PE array processor performs low-level, pixel-parallel image processing at high speed and outputs image features for high-level image processing without an I/O bottleneck; the dual-core processor carries out the high-level image processing. A parallel fast lane detection algorithm for this architecture is developed. An FPGA system with a CMOS image sensor is used to implement the architecture. Experimental results show that the system can perform fast traffic lane detection at a 50 fps rate. It is much faster than previous work and is robust enough to operate under various light intensities. The novel vision-chip architecture is able to meet the demands of a real-time lane departure warning system.
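
The abstract does not spell out the lane-detection algorithm, so the sketch below shows a conventional software baseline (Canny edges plus a probabilistic Hough transform) only to make the processing stages concrete; the chip's own pixel-parallel algorithm is different, and the thresholds and synthetic frame here are arbitrary.

```python
# A conventional software baseline for lane detection, not the paper's
# pixel-parallel algorithm. Thresholds and the synthetic test image are
# arbitrary illustrative choices.

import numpy as np
import cv2

def detect_lane_segments(gray):
    """Return line segments that look like lane markings in a gray-scale frame."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 30,
                            minLineLength=40, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]

# Synthetic 128x128 road image with one bright diagonal lane marking.
frame = np.zeros((128, 128), dtype=np.uint8)
for y in range(40, 120):
    frame[y, y - 20:y - 16] = 255
print(detect_lane_segments(frame))
```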

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a novel CMOS color pixel with a 2D metal-grating structure for real-time vision chips. It consists of an N-well/P-substrate diode without salicide and 2D metal-grating layers above the diode. The periods of the 2D metal structure are controlled to realize color filtering. We implemented sixteen kinds of pixels with different metal-grating structures in a standard 0.18 µm CMOS process. The measured results demonstrate that the N-well/P-substrate diode without salicide, combined with the 2D metal-grating structures, can serve well as a high-speed RGB color active pixel sensor for real-time vision chips.

Relevance:

20.00%

Publisher:

Abstract:

The D-vision system (where "D" carries the dual meaning of "Divide Screen" and "Duplex-Vision") is a class of multi-projection virtual reality systems (multi-projection systems for short) built on PC clusters. This paper presents the implementation of two-handed, 6-degree-of-freedom haptic interaction in the D-vision system: on the client side, two haptic devices, Spidar-G (Space Interface for Artificial Reality with Grip), are controlled cooperatively to realize two-handed collaborative interaction; next, a UDP-based socket class is constructed to handle communication between the client and the rendering server nodes, transmitting information such as the position and orientation of the tracked grip; then, distributed rendering achieves seamless display on the large screen. Finally, experimental results show that two-handed 6-DOF haptic interaction in the D-vision system is a natural and intuitive form of human-computer interaction.
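
The UDP link described above simply ships the tracked grip's pose from the client to the rendering nodes each update. A minimal sketch of such a link follows; the port, host, and packet layout are illustrative assumptions, not the D-vision system's actual protocol.

```python
# Minimal sketch of a UDP pose link: the client packs the tracked grip's
# position and orientation into a datagram and a rendering node unpacks it.
# Port, host, and packet layout are illustrative assumptions.

import socket
import struct

PACKET = struct.Struct("<6f")          # x, y, z, yaw, pitch, roll
ADDR = ("127.0.0.1", 9050)             # rendering-server address (assumed)

def send_pose(sock, pose):
    sock.sendto(PACKET.pack(*pose), ADDR)

def recv_pose(sock):
    data, _ = sock.recvfrom(PACKET.size)
    return PACKET.unpack(data)

# Loopback demo: one socket plays the rendering node, another the client.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(ADDR)
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

send_pose(client, (0.10, 0.25, -0.05, 30.0, 0.0, 90.0))
print("rendering node received pose:", recv_pose(server))
```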

Relevance:

20.00%

Publisher:

Abstract:

A portable 3D laser scanning system has been designed and built for robot vision. By tilting the charge-coupled device (CCD) plane of the portable 3D scanning system according to the Scheimpflug condition, the depth of view is successfully extended from less than 40 mm to 100 mm. Based on the tilted camera model, the traditional two-step camera calibration method is modified by introducing the tilt-angle factor. Meanwhile, a novel segmental calibration approach, i.e., dividing the whole working range into two parts and calibrating each with its corresponding system parameters, is proposed to effectively improve the measurement accuracy of the large depth-of-view 3D laser scanner. In the process of 3D reconstruction, different calibration parameters are used to transform 2D coordinates into 3D coordinates according to the position of the image point on the CCD plane, and a measurement accuracy of 60 µm is obtained experimentally. Finally, an experiment in which a lamina is scanned by the large depth-of-view portable 3D laser scanner mounted on an IRB 4400 industrial robot also demonstrates the effectiveness and high measurement accuracy of our scanning system. (C) 2007 Elsevier Ltd. All rights reserved.
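
The segmental calibration amounts to keeping one set of calibration parameters per sub-range and selecting the set from where the laser-stripe point falls on the CCD. The sketch below is a strongly simplified stand-in (a planar homography per sub-range); the matrices and the split row are invented placeholders, not the scanner's real parameters.

```python
# Simplified stand-in for segmental calibration: one calibration per half of
# the CCD, chosen from the pixel's position before mapping the 2D laser-stripe
# point into the laser plane. All numbers below are invented placeholders.

import numpy as np

# One planar homography (pixel -> laser-plane mm) per sub-range of the CCD.
H_NEAR = np.array([[0.08, 0.0, -20.0],
                   [0.0, 0.08, -15.0],
                   [0.0, 0.0,    1.0]])
H_FAR  = np.array([[0.10, 0.0, -26.0],
                   [0.0, 0.10, -19.0],
                   [0.0, 0.0,    1.0]])
SPLIT_ROW = 240   # pixels above this row use the near-range calibration

def pixel_to_plane(u, v):
    """Map an image point of the laser stripe to (x, y) on the laser plane,
    selecting the calibration set from the point's position on the CCD."""
    H = H_NEAR if v < SPLIT_ROW else H_FAR
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

print(pixel_to_plane(320, 100))   # handled with the near-range parameters
print(pixel_to_plane(320, 400))   # handled with the far-range parameters
```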