907 results for Visual control


Relevance: 30.00%

Abstract:

In pursuit of optimal modelling and quantification of the instrumental and visual correlation of goniochromatic materials (mainly paints and plastics) in the quality-control processes typical of the automotive sector, a comparison is presented here of the performance of several current types of colour-measurement instruments and lighting booths. The multi-angle spectrophotometers analysed were the Datacolor FX10®, X-Rite MA98® and BYK-mac®, and the directional lighting booths were the gonio-vision-box® and byko-spectra effect®. The optical properties evaluated, basically photometric and colorimetric, were: measurement geometries, light source, observation distance and viewing direction towards the sample. The results showed that this instrumental and visual correlation is currently not optimal, owing to misadjustments in the booths.
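Instrumental-visual correlation in this kind of quality control is usually quantified with colour-difference formulas. A minimal, hedged sketch using the classic CIE76 ΔE*ab distance between two CIELAB readings (the sample values below are invented for illustration, not data from the study):

```python
import math

def delta_e_ab(lab1, lab2):
    """Classic CIE76 colour difference: Euclidean distance in CIELAB."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical readings of the same panel from two instruments
reading_a = (52.1, 10.4, -3.2)   # (L*, a*, b*)
reading_b = (51.6, 11.0, -2.8)
print(round(delta_e_ab(reading_a, reading_b), 3))  # small difference, same panel
```

Goniochromatic samples would need this evaluated per measurement geometry, since their colour shifts with the illumination and viewing angles.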

Relevance: 30.00%

Abstract:

Paper submitted to the 43rd International Symposium on Robotics (ISR), Taipei, Taiwan, August 29-31, 2012.

Relevance: 30.00%

Abstract:

Traditional visual servoing systems have been widely studied in recent years. These systems control the position of the camera attached to the robot end-effector, guiding it from any position to the desired one. These controllers can be improved by using the event-based control paradigm. The system proposed in this paper is based on the idea of activating the visual controller only when something significant has occurred in the system (e.g. when a visual feature may be lost because it is leaving the frame). Different event triggers have been defined in the image space in order to activate or deactivate the visual controller. The tests implemented to validate the proposal show that this new scheme prevents visual features from leaving the image while considerably reducing system complexity. In future work, events could be used to change other parameters of visual servoing systems.
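The paper's actual trigger definitions are not reproduced in this abstract; as a hedged sketch, one plausible image-space event is "a tracked feature has entered a safety margin near the image border", which would switch the visual controller back on:

```python
def near_border(features, width, height, margin=20):
    """Return True if any (u, v) image feature lies within `margin`
    pixels of the image border -- a possible activation event for
    the visual controller (margin value is illustrative)."""
    return any(
        u < margin or v < margin or u > width - margin or v > height - margin
        for u, v in features
    )

features = [(320, 240), (625, 100)]      # pixel coordinates in a 640x480 image
print(near_border(features, 640, 480))   # second feature is 15 px from the right edge
```

Between such events the robot could run a cheaper open-loop or joint-space controller, which is where the claimed reduction in complexity comes from.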

Relevance: 30.00%

Abstract:

Parkinson disease is mainly characterized by the degeneration of dopaminergic neurons in the central nervous system, including the retina. Different interrelated molecular mechanisms underlying Parkinson disease-associated neuronal death have been put forward in the brain, including oxidative stress and mitochondrial dysfunction. Systemic injection of the proneurotoxin 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) to monkeys elicits the appearance of a parkinsonian syndrome, including morphological and functional impairments in the retina. However, the intracellular events leading to derangement of dopaminergic and other retinal neurons in MPTP-treated animal models have not been so far investigated. Here we have used a comparative proteomics approach to identify proteins differentially expressed in the retina of MPTP-treated monkeys. Proteins were solubilized from the neural retinas of control and MPTP-treated animals, labelled separately with two different cyanine fluorophores and run pairwise on 2D DIGE gels. Out of >700 protein spots resolved and quantified, 36 were found to exhibit statistically significant differences in their expression levels, of at least ±1.4-fold, in the parkinsonian monkey retina compared with controls. Most of these spots were excised from preparative 2D gels, trypsinized and subjected to MALDI-TOF MS and LC-MS/MS analyses. Data obtained were used for protein sequence database interrogation, and 15 different proteins were successfully identified, of which 13 were underexpressed and 2 overexpressed. These proteins were involved in key cellular functional pathways such as glycolysis and mitochondrial electron transport, neuronal protection against stress and survival, and phototransduction processes. 
These functional categories indicate that alterations in energy metabolism, neuroprotective mechanisms and signal transduction are involved in MPTP-induced neuronal degeneration in the retina, similar to the mechanisms thought to underlie neuronal death both in the Parkinson's disease brain and in neurodegenerative diseases of the retina proper.
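The spot-selection step described above (a fold change of at least ±1.4 together with statistical significance) can be sketched as a simple filter; the spot values below are invented for illustration, not the study's data:

```python
# Hypothetical (spot_id, fold_change, p_value) tuples from a 2D DIGE comparison
spots = [
    ("spot01", -1.8, 0.01),   # underexpressed, significant -> selected
    ("spot02",  1.2, 0.03),   # below the 1.4-fold threshold
    ("spot03",  2.1, 0.20),   # not statistically significant
    ("spot04",  1.6, 0.04),   # overexpressed, significant -> selected
]

# Keep spots changed by at least 1.4-fold in either direction with p < 0.05
selected = [sid for sid, fc, p in spots if abs(fc) >= 1.4 and p < 0.05]
print(selected)  # ['spot01', 'spot04']
```

Selected spots would then go on to MALDI-TOF MS and LC-MS/MS identification, as described above.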

Relevance: 30.00%

Abstract:

Event-based visual servoing is a recently proposed approach that uses visual information to position a robot only when it is required. Starting from the classical image-based visual servoing control law, the scheme proposed in this paper can reduce the processing time at each loop iteration under certain conditions. The proposed control method comes into action when an event deactivates the classical image-based controller (i.e. when no image is available to track the visual features). A virtual camera is then moved along a straight-line path towards the desired position. The virtual path used to guide the robot improves on the behaviour of the previous event-based visual servoing proposal.
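The classical image-based control law the scheme builds on is v = -λ L⁺ (s - s*). A minimal numpy sketch of that law, together with the straight-line path a virtual camera could follow towards the desired position when no image is available; the gain, interaction matrix and positions are illustrative values, not the paper's:

```python
import numpy as np

def ibvs_velocity(L, s, s_star, lam=0.5):
    """Classical IBVS law: v = -lambda * pinv(L) @ (s - s_star)."""
    return -lam * np.linalg.pinv(L) @ (s - s_star)

def virtual_path(p_start, p_goal, steps):
    """Straight-line positions a virtual camera visits towards the goal."""
    return [p_start + t * (p_goal - p_start) for t in np.linspace(0.0, 1.0, steps)]

# Toy example: 2 feature coordinates, identity interaction matrix
L = np.eye(2)
s, s_star = np.array([10.0, 4.0]), np.array([0.0, 0.0])
print(ibvs_velocity(L, s, s_star))   # velocity that reduces the feature error

path = virtual_path(np.array([0.0, 0.0, 0.5]), np.array([0.2, 0.0, 0.3]), steps=5)
print(path[-1])                      # the path ends at the goal position
```

A real implementation would build L from the current feature depths and camera intrinsics rather than use an identity matrix.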

Relevance: 30.00%

Abstract:

New low-cost sensors and free open-source libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, robot navigation and localization, human detection and gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. The method is based on point clouds from range images captured by an RGBD sensor. It works in real time and requires no visual markers, camera calibration or prior knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or the ambient lighting changes. The method was designed as a human interface for remotely controlling domestic or industrial devices, and in this paper it was tested by operating a robotic hand: first, the human hand was recognized and the fingers were detected; second, the movement of the fingers was analysed and mapped so it could be imitated by the robotic hand.
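The paper's own pipeline is not reproduced in this abstract; as a hedged sketch of a plausible first step, the hand can be isolated in a range image by thresholding around the nearest valid depth. The tiny depth map below is synthetic:

```python
import numpy as np

def segment_nearest(depth, band=0.10):
    """Boolean mask of pixels within `band` metres of the closest valid
    depth -- a crude way to isolate a hand held in front of an RGBD
    sensor. Zero readings are treated as invalid (no depth returned)."""
    valid = depth > 0
    nearest = depth[valid].min()
    return valid & (depth <= nearest + band)

depth = np.array([
    [0.00, 1.50, 1.50],
    [0.62, 0.60, 1.48],
    [0.61, 0.63, 1.52],
])  # metres; the ~0.6 m cluster plays the role of the hand
mask = segment_nearest(depth)
print(mask.sum())  # number of "hand" pixels
```

Finger detection would then operate on the 3D points inside this mask, e.g. by looking for fingertip extrema of the hand cluster.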

Relevance: 30.00%

Abstract:

Grady distinguishes two main types of metaphor in order to resolve controversies stemming from the conceptual theory of metaphor: correlation-based metaphors and resemblance metaphors. In correlation-based metaphors, the source domain is sensory-motor while the target domain is not. Resemblance metaphors, by contrast, originate in a physical or conceptual perception common to both domains, through the association of concepts with shared features. Primary metaphors are the minimal units of correlation-based metaphors; they are inherent in human nature and a result of the nature of our brain, our body and the world we inhabit. We acquire them automatically and cannot avoid them; and since bodily experiences are universal, so are primary metaphors. In this paper, I argue that primary metaphors manifest themselves visually through scene-setting techniques such as composition, framing, camera movement and lighting. Film-makers can use the different aspects of mise-en-scène metaphorically to express abstract notions such as evil, importance, control, relationship or confusion. Such visual manifestations, like their verbal equivalents, frequently go unnoticed or have been used so often that they have become clichés. The important point to bear in mind is that their origin lies in a primary metaphor, and it is because of this origin that such film-making strategies have been so expressively successful.

Relevance: 30.00%

Abstract:

This project aims to design an embedded system capable of performing image processing and guidance of a hexacopter. The hexacopter will carry on board a camera, the batteries, and all the hardware needed to process the visual information obtained and to implement the controller required for its guidance. OpenCV is a library of image-processing primitives that makes it possible to build state-of-the-art Computer Vision algorithms. OpenCV was originally developed by Intel in 1999 to showcase the processing power of Intel microprocessors, so most of the library is optimized to run on them, including the MMX and SSE extensions (http://en.wikipedia.org/wiki/OpenCV). It is now widely used both by the scientific community and by industry to develop new algorithms for desktop computers and, above all, for embedded systems (mobile robots, smart cameras, inspection systems, surveillance systems, etc.). Because of its popularity, the library has been compiled for the traditional operating systems (Windows, Linux, Mac), for mobile devices (Android, iOS) and for embedded systems based on several processor families (mainly ARM).
- iPhone port: http://www.eosgarden.com/en/opensource/opencv-ios/overview/
- Android port: http://opencv.willowgarage.com/wiki/AndroidExperimental
An example of an embedded platform is the Zedboard (http://www.zedboard.org/), which represents the state of the art in embedded devices based on the ARM Cortex architecture. The board includes a dual-core Cortex-A9 processor together with a large set of peripherals and connection options for third-party expansion boards, which makes it possible to develop applications in many different areas of Computer Vision.
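As an illustration of the kind of per-frame primitive such an on-board pipeline chains together (OpenCV's cvtColor and threshold functions do the same job in optimized C), a numpy-only sketch of grayscale conversion and binary thresholding on a synthetic frame:

```python
import numpy as np

def to_gray(rgb):
    """Luma-weighted grayscale conversion (standard ITU-R BT.601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def binarize(gray, thresh=128):
    """Binary threshold: 255 where gray > thresh, else 0."""
    return np.where(gray > thresh, 255, 0).astype(np.uint8)

frame = np.zeros((4, 4, 3), dtype=np.float64)
frame[1:3, 1:3] = 255.0            # a bright 2x2 blob on a dark background
binary = binarize(to_gray(frame))
print(int(binary.sum() // 255))    # blob area in pixels
```

On the hexacopter, the binary blob's centroid in consecutive frames could feed the guidance controller as a visual target position.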

Relevance: 30.00%

Abstract:

Traditional visual servoing systems do not deal with moving-object tracking. When these systems are employed to track a moving object, visual features can go out of the image, depending on the object's velocity, causing the tracking task to fail. This occurs especially when the object and the robot are both stopped and the object then starts moving. In this work, we have employed a retina camera based on Address Event Representation (AER) in order to use events as the input to the visual servoing system. The events launched by the camera indicate pixel-level movement. Event visual information is processed only at the moment it occurs, reducing the response time of visual servoing systems when they are used to track moving objects.
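An AER retina outputs asynchronous (x, y, timestamp, polarity) events instead of frames. As a hedged sketch of the kind of event-driven feature extraction such a servo loop might use (the event stream below is fabricated), the centroid of recent events can stand in for the moving target's image position:

```python
def event_centroid(events, now, window=0.05):
    """Centroid (u, v) of events newer than `now - window` seconds,
    or None when nothing has moved -- so the controller only runs
    when there is motion. Each event is (x, y, timestamp, polarity)."""
    recent = [(x, y) for x, y, t, pol in events if t >= now - window]
    if not recent:
        return None
    n = len(recent)
    return (sum(x for x, _ in recent) / n, sum(y for _, y in recent) / n)

events = [(100, 80, 0.010, 1), (102, 82, 0.012, -1), (10, 10, -0.900, 1)]
print(event_centroid(events, now=0.02))   # the old event at t = -0.9 s is ignored
```

Returning None for a static scene is exactly what lets the servo loop stay idle until the object starts moving.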

Relevance: 30.00%

Abstract:

This paper describes the development of a low-cost mini-robot controlled by visual gestures. The prototype allows a person with disabilities to perform visual inspections indoors and in domestic spaces: the device can serve as the operator's eyes, obviating the need for them to move about. The robot is equipped with a motorised webcam that is also controlled by visual gestures; this camera is used to monitor tasks in the home with the mini-robot while the operator remains still. The prototype was evaluated through several experiments testing the mini-robot's kinematics and communication systems by making it follow certain paths. The mini-robot can be programmed with specific orders and can be tele-operated by means of 3D hand gestures, enabling the operator to perform movements and monitor tasks from a distance.

Relevance: 30.00%

Abstract:

Among the groups of oceanic microfossils, only Radiolaria occur in abundances and preservation states sufficient to provide biostratigraphic control for restricted intervals within sediments recovered in Hole 1223A. The distribution of these microfossils has been divided into four major intervals, A-D. Radiolaria distribution Interval A occupies the depth range 0-3.0 meters below seafloor (mbsf), where the abundance of specimens is very low and preservation is poor. Radiolaria distribution Interval B occupies the depth range 3.02-7.1 mbsf. Radiolaria in Interval B are locally rare to abundant and well preserved, and assemblages range in age from pure early Eocene to early Eocene admixed with late Neogene taxa. Radiolaria distribution Interval C occupies the depth range 7.1-36.99 mbsf and is characterized by sediments either barren of microfossils or containing extremely rare early Eocene specimens. Radiolaria distribution Interval D occupies the depth range 36.99-38.7 mbsf (base of the recovered sedimentary section), where early Eocene Radiolaria are present in rare to common frequencies, but opal-A to opal-CT recrystallization has degraded the preservation state. The late Neogene assemblage of Radiolaria distribution Interval B is dated at 1.55-2.0 Ma, based on occurrences of Eucyrtidium matuyamai, Lamprocyclas heteroporos, and Theocorythium trachelium trachelium. The early Eocene assemblage of Radiolaria distribution Intervals B and D is somewhat problematically assigned to the Buryella clinata Zone.
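The four intervals amount to a depth lookup table. A sketch, with the boundaries taken from the text and the shared 7.1 and 36.99 mbsf boundaries resolved as half-open ranges purely for illustration:

```python
# (label, top_mbsf, bottom_mbsf) from the text; ranges are treated as
# half-open [top, bottom) here, an illustrative convention only
INTERVALS = [
    ("A", 0.00, 3.00),
    ("B", 3.02, 7.10),
    ("C", 7.10, 36.99),
    ("D", 36.99, 38.70),
]

def interval_at(depth_mbsf):
    """Radiolaria distribution interval containing a depth, or None
    (e.g. in the small 3.00-3.02 mbsf gap between A and B)."""
    for label, top, bottom in INTERVALS:
        if top <= depth_mbsf < bottom:
            return label
    return None

print(interval_at(5.0), interval_at(20.0))  # B C
```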

Relevance: 30.00%

Abstract:

Mode of access: Internet.

Relevance: 30.00%

Abstract:

Federal Highway Administration, Office of Research, Washington, D.C.

Relevance: 30.00%

Abstract:

The placement of monocular laser lesions in the adult cat retina produces a lesion projection zone (LPZ) in primary visual cortex (V1) in which the majority of neurons have a normally located receptive field (RF) for stimulation of the intact eye and an ectopically located RF (displaced to intact retina at the edge of the lesion) for stimulation of the lesioned eye. Animals that had carried such lesions for 14-85 d were studied under halothane and nitrous oxide anesthesia with conventional neurophysiological recording techniques and stimulation with moving light bars. Previous work suggested that a candidate source of input that could account for the development of the ectopic RFs was the long-range horizontal connections within V1. The critical contribution of this input was examined by placing a pipette containing the neurotoxin kainic acid at a site in the normal V1 visual representation that overlapped with the ectopic RF recorded at a site within the LPZ. Continuation of well-defined responses to stimulation of the intact eye served as a control against direct effects of the kainic acid at the LPZ recording site. In six of seven cases examined, kainic acid deactivation of neurons at the injection site blocked responsiveness to lesioned-eye stimulation at the ectopic RF for the LPZ recording site. We therefore conclude that long-range horizontal projections contribute to the dominant input underlying the capacity for retinal lesion-induced plasticity in V1.