964 results for Visual-system


Relevance:

30.00%

Publisher:

Abstract:

Measuring the retention, or residence time, of dosage forms on biological tissue is commonly a qualitative exercise, in which no numerical values describing the retention are recorded. The result is an assessment that depends on the user's interpretation of visual observation. This research paper outlines the development of a methodology to measure quantitatively, both by image analysis and by spectrophotometric techniques, the retention of material on biological tissues, using the retention of polymer solutions on ocular tissue as an example. Both methods have been shown to be repeatable, with the spectrophotometric measurement generating reliable data quickly for further analysis.
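At its core, the quantitative measurement described above amounts to expressing each reading relative to the initial application. The sketch below illustrates this with invented spectrophotometric absorbance values; the function name and numbers are assumptions for illustration, not taken from the paper.

```python
# Hypothetical sketch: turning raw absorbance readings into a
# quantitative retention profile, relative to the time-zero reading.

def retention_percent(absorbances):
    """Express each reading as a percentage of the first (time-zero) one."""
    baseline = absorbances[0]
    return [100.0 * a / baseline for a in absorbances]

# Invented readings taken after successive rinse steps
readings = [1.0, 0.75, 0.5, 0.25]
print(retention_percent(readings))  # [100.0, 75.0, 50.0, 25.0]
```

Unlike a visual score, such a profile can be compared statistically across formulations.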

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a video surveillance framework that robustly and efficiently detects abandoned objects in surveillance scenes. The framework is based on a novel threat assessment algorithm that combines the concept of ownership with automatic understanding of social relations to infer the abandonment of objects. Implementation is achieved through the development of a logic-based inference engine written in Prolog. Threat detection performance is evaluated by testing against a range of datasets describing realistic situations, demonstrating a reduction in the number of false alarms generated. The proposed system represents the approach employed in the EU SUBITO project (Surveillance of Unattended Baggage and the Identification and Tracking of the Owner).
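As a rough illustration of the kind of ownership-based rule such an inference engine encodes, the sketch below combines owner departure, social relations, and time unattended into an abandonment decision. The predicates, thresholds, and values are assumptions for illustration, not the actual SUBITO rules.

```python
# Illustrative ownership-based abandonment rule: an object is only a
# threat if its owner left, nobody socially related to the owner stays
# with it, and it has been unattended long enough. All thresholds invented.

def is_abandoned(obj, owner_distance, related_people_nearby, seconds_unattended,
                 distance_threshold=5.0, time_threshold=30):
    owner_left = owner_distance > distance_threshold
    unattended = not related_people_nearby
    return owner_left and unattended and seconds_unattended > time_threshold

# A bag whose owner walked away and whose group also left: alarm
print(is_abandoned("bag", owner_distance=12.0,
                   related_people_nearby=False, seconds_unattended=45))  # True
# The same bag watched by a friend of the owner: no alarm
print(is_abandoned("bag", owner_distance=12.0,
                   related_people_nearby=True, seconds_unattended=45))   # False
```

Encoding the social-relation condition is what suppresses the false alarms a purely geometric "object left alone" rule would raise.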

Relevance:

30.00%

Publisher:

Abstract:

This work presents a method of information fusion involving data captured by both a standard charge-coupled device (CCD) camera and a time-of-flight (ToF) camera, to be used in detecting proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to determine the 3D localization of objects with respect to a world coordinate system while also providing their colour information. Considering that the ToF information given by the range camera contains inaccuracies, including distance error, border error, and pixel saturation, corrections of the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject the 3D ToF points, expressed in a coordinate system common to both cameras and a robot arm, into the 2D colour images. In addition, using the 3D information, motion detection in an industrial robot environment is achieved, and the fusion of information is applied to the previously detected foreground objects. This combination of information results in a matrix that links colour and 3D information, making it possible to characterise an object by its colour in addition to its 3D localization. Further development of these methods will make it possible to identify objects and their positions in the real world, and to use this information to prevent possible collisions between the robot and such objects.
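The reprojection step, mapping a 3D ToF point into the 2D colour image once it is expressed in the colour camera's coordinate system, can be sketched with a simple pinhole model. The intrinsic parameters below are illustrative values, not the paper's calibration.

```python
# Pinhole-camera sketch of reprojecting a 3D point (already in the
# colour camera's frame) to a pixel. fx, fy are focal lengths in pixels;
# cx, cy is the principal point. All values here are invented.

def project_point(point_3d, fx, fy, cx, cy):
    """Project a 3D point (X, Y, Z) in camera coordinates to pixel (u, v)."""
    X, Y, Z = point_3d
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

# A point 2 m in front of the camera and 0.5 m to the right
u, v = project_point((0.5, 0.0, 2.0), fx=800.0, fy=800.0, cx=320.0, cy=240.0)
print(u, v)  # 520.0 240.0
```

Linking each reprojected pixel's colour to its 3D point gives exactly the kind of colour-plus-depth matrix the abstract describes.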

Relevance:

30.00%

Publisher:

Abstract:

Near-ground maneuvers, such as hover, approach, and landing, are key elements of autonomy in unmanned aerial vehicles. Such maneuvers have conventionally been tackled by measuring or estimating the velocity and the height above the ground, often using ultrasonic or laser range finders. Near-ground maneuvers are naturally mastered by flying birds and insects, as objects below may be of interest for food or shelter. These animals perform such maneuvers efficiently using only the available visual and vestibular sensory information. In this paper, the time-to-contact (tau) theory, which conceptualizes the visual strategy with which many species are believed to approach objects, is presented as a solution for relative ground-distance control in unmanned aerial vehicles (UAVs). The paper shows how such an approach can be visually guided without knowledge of height and velocity relative to the ground. A control scheme implementing the tau strategy is developed, employing only visual information from a monocular camera and an inertial measurement unit. To achieve reliable visual information at a high rate, a novel filtering system is proposed to complement the control system. The proposed system is implemented on board an experimental quadrotor UAV and shown not only to land and approach the ground successfully, but also to enable the user to choose the dynamic characteristics of the approach. The methods presented in this paper are applicable to both aerial and space autonomous vehicles.
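Tau itself is simply the remaining gap to a surface divided by its closure rate, a ratio that optic flow provides directly without measuring either term separately. The toy values below are for illustration only.

```python
# Time-to-contact (tau): remaining gap divided by closure rate.
# Illustrative numbers; in the paper tau is estimated visually,
# without knowing gap or velocity individually.

def tau(gap, gap_rate):
    """gap_rate is negative while closing on the surface."""
    return gap / -gap_rate

# Descending toward the ground from 10 m at 2 m/s gives 5 s to contact
print(tau(10.0, -2.0))  # 5.0
```

Regulating how tau evolves during the approach (rather than height or speed) is what lets the controller shape the dynamics of the landing.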

Relevance:

30.00%

Publisher:

Abstract:

Interference from spatially adjacent non-target stimuli evokes ERPs during non-target sub-trials and leads to false positives. This phenomenon is commonly seen in visual attention-based BCIs and affects the performance of the BCI system. Although users or subjects tried to focus on the target stimulus, they still could not help being affected by conspicuous changes of the stimuli (flashes or presented images) adjacent to the target stimulus. In view of this, the aim of this study is to reduce the adjacent interference using a new stimulus presentation pattern based on facial expression changes. A positive facial expression can be changed to a negative one by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, two different conditions (Pattern_1, Pattern_2) were compared across objective measures, such as classification accuracy and information transfer rate, as well as subjective measures. Pattern_1 was a "flash-only" pattern and Pattern_2 was a facial expression change of a dummy face. In the facial expression change patterns, the background is a positive facial expression and the stimulus is a negative facial expression. The results showed that interference from adjacent stimuli could be reduced significantly (p < 0.05) by using the facial expression change patterns. The online performance of the BCI system using the facial expression change patterns was significantly better than that using the "flash-only" patterns in terms of classification accuracy (p < 0.01), bit rate (p < 0.01), and practical bit rate (p < 0.01). Subjects reported that annoyance and fatigue were significantly decreased (p < 0.05) with the new stimulus presentation pattern presented in this paper.
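The abstract does not spell out how its information transfer rate is computed; a common choice in the BCI literature is Wolpaw's formula, sketched below with invented accuracy and class-count values rather than this study's figures.

```python
import math

# Wolpaw's information transfer per selection (bits), a standard BCI
# performance measure. The 36-class speller and 90 % accuracy below
# are illustrative, not results from this study.

def bits_per_selection(n_classes, accuracy):
    if accuracy >= 1.0:
        return math.log2(n_classes)
    return (math.log2(n_classes)
            + accuracy * math.log2(accuracy)
            + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1)))

print(round(bits_per_selection(36, 0.9), 2))  # 4.19
```

Multiplying by selections per minute turns this into the bit rate used to compare the two stimulus patterns.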

Relevance:

30.00%

Publisher:

Abstract:

Radial glia in the developing optic tectum express the key guidance molecules responsible for topographic targeting of retinal axons. However, the extent to which the radial glia are themselves influenced by retinal inputs and visual experience remains unknown. Multiphoton live imaging of radial glia in the optic tectum of intact Xenopus laevis tadpoles, in conjunction with manipulations of neural activity and sensory stimuli, revealed that radial glia exhibit spontaneous calcium transients that are modulated by visual stimulation. Structurally, radial glia extended and retracted many filopodial processes within the tectal neuropil over minutes. These processes interacted with retinotectal synapses, and their motility was modulated by nitric oxide (NO) signaling downstream of neuronal NMDA receptor (NMDAR) activation and visual stimulation. These findings provide the first in vivo demonstration that radial glia actively respond both structurally and functionally to neural activity, via NMDAR-dependent NO release, during the period of retinal axon ingrowth.
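Calcium transients like those described are conventionally quantified as dF/F, the fluorescence change relative to baseline, with frames above a threshold counted as transient events. The trace and threshold below are invented purely to illustrate the computation.

```python
# dF/F quantification of a calcium imaging trace, with transient
# detection by simple thresholding. All values are invented.

def df_over_f(trace, baseline):
    """Relative fluorescence change for each frame."""
    return [(f - baseline) / baseline for f in trace]

def transient_frames(trace, baseline, threshold=0.5):
    """Indices of frames whose dF/F exceeds the threshold."""
    return [i for i, r in enumerate(df_over_f(trace, baseline)) if r > threshold]

fluorescence = [100, 105, 180, 210, 120, 98]   # arbitrary units
print(transient_frames(fluorescence, baseline=100.0))  # [2, 3]
```

Comparing transient frequency with and without visual stimulation is the kind of analysis that supports the modulation claim above.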

Relevance:

30.00%

Publisher:

Abstract:

The analysis of histological sections has long been a valuable tool in pathological studies. The interpretation of tissue conditions, however, relies directly on visual evaluation of tissue slides, which may be difficult to interpret because of poor contrast or poor color differentiation. The Chromatic Contrast Visualization System (CCV) combines an optical microscope with electronically controlled light-emitting diodes (LEDs) to generate adjustable intensities of the RGB channels for sample illumination. While most image enhancement techniques rely on software post-processing of an image acquired under standard illumination conditions, CCV produces real-time variations in the color composition of the light source itself. The possibility of covering the entire RGB chromatic range, combined with the optical properties of the different tissues, allows for a substantial enhancement of image detail. Traditional image acquisition methods do not exploit these visual enhancements, which results in poorer visual distinction among tissue structures. Photodynamic therapy (PDT) procedures are of increasing interest in the treatment of several forms of cancer. This study uses histological slides of rat liver samples induced to necrosis after exposure to PDT. Results show that the visualization of tissue structures can be improved by changing the colors and intensities of the microscope light source. PDT-necrosed tissue samples are better differentiated when illuminated with different color wavelengths, leading to improved differentiation of cells in the necrotic area. Given the potential benefits for interpretation and diagnosis, further research in this field could make CCV an attractive technique for medical applications.
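The principle can be caricatured numerically: if the sensor response is modelled as a channel-wise product of tissue reflectance and LED intensity, changing the LED mix reweights the channels before acquisition rather than after. The reflectance values and LED weights below are invented, and the model ignores spectra and sensor response curves.

```python
# Toy model of illumination-side contrast control: the recorded colour
# is tissue reflectance scaled channel-wise by LED intensity (both
# normalised to 0..1). All values are illustrative assumptions.

def illuminate(reflectance_rgb, led_rgb):
    return tuple(r * l for r, l in zip(reflectance_rgb, led_rgb))

tissue = (0.6, 0.5, 0.4)                      # hypothetical reflectance
white_light = illuminate(tissue, (1.0, 1.0, 1.0))
green_boost = illuminate(tissue, (0.2, 1.0, 0.2))
print(white_light)  # (0.6, 0.5, 0.4)
print(green_boost)  # red and blue suppressed, green preserved
```

Because the reweighting happens in the light itself, structures whose reflectance differs mainly in one channel become distinguishable without any software post-processing.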

Relevance:

30.00%

Publisher:

Abstract:

The vibrational spectroscopic characterization of a sulfur dioxide visual sensor was carried out using a Raman microscope system. The formation of two distinct complexes was observed; these were characterized by the position and relative intensities of the bands assigned to the symmetric stretching, νs(SO2), of the bound SO2 molecules. In the yellowish-orange complex, which corresponds to the 1:1 stoichiometry, only one band is observed, assigned to νs(SO2) at ca. 1080 cm⁻¹, while in the deep red complex, which corresponds to the 1:2 complex, bands at ca. 1070 and 1090 cm⁻¹ are observed. The variation of the relative intensities of the bands assigned to νs(SO2) in the Ni(II)·SO2 complex at different points of the sample clearly shows the need for the Raman microscope in the vibrational characterization of this kind of molecular sensor.

Relevance:

30.00%

Publisher:

Abstract:

Location-aware, content-based experiences have a substantial tradition in HCI: several projects over the last two decades have explored the association of digital media with specific locations or objects. However, a large portion of the literature pays little attention to the creative side of designing the experience and to the iterative process of user evaluations. In this thesis we present two iterations in the design and evaluation of a location-based story delivery system (LBSDS) inspired by local folklore and oral storytelling in Madeira. We started by testing an existing location-based story platform, PlaceWear, with short multimedia clips recounting local traditions and folktales, an experience we called iLand. In an initial evaluation of iLand, we shadowed users during the experience and then had them respond to a questionnaire. By analyzing the evaluation results we uncovered several issues that informed the redesign of the system itself as well as part of the story content. The outcome of this redesign was the 7Stories experience, in which we integrated visual markers into the interface and framed the fragmented story content through the literary technique of the narrator, aiming to improve the connection of the audience to the physical context in which the experience is delivered. The 7Stories experience was evaluated following a methodology similar to the iLand evaluation, but the user experience proved considerably different; because both versions used the same setting and most of the content remained constant across them, we were able to assess the specific effect of the new design and discuss its strengths and shortcomings. Although we did not run a formal comparative test between the two evaluations, the collected data make evident how the specific design changes to our LBSDS influenced the user experience.
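The core mechanism of any location-based story delivery system is a proximity trigger: play a fragment when the user comes within range of its anchor location. The sketch below is generic; the coordinates, radius, story title, and flat-earth distance approximation are all illustrative assumptions, not details of iLand or 7Stories.

```python
import math

# Generic geofence trigger for a location-based story system.
# Coordinates and the 50 m radius are invented for illustration.

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in metres for small separations."""
    dlat = (lat2 - lat1) * 111_320.0
    dlon = (lon2 - lon1) * 111_320.0 * math.cos(math.radians(lat1))
    return math.hypot(dlat, dlon)

def triggered_story(user_lat, user_lon, stories, radius_m=50.0):
    """Return the first story whose anchor is within range, else None."""
    for name, lat, lon in stories:
        if distance_m(user_lat, user_lon, lat, lon) <= radius_m:
            return name
    return None

stories = [("The Fisherman's Tale", 32.6500, -16.9100)]
print(triggered_story(32.6501, -16.9100, stories))  # The Fisherman's Tale
```

Design work like that described in the thesis sits on top of this trigger: what plays, how fragments are framed, and how the place is acknowledged in the content.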

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to estimate the variance components and genetic parameters for the visual scores that constitute the Morphological Evaluation System (MES), namely body structure (S), precocity (P), and musculature (M), in Nellore beef cattle at the weaning and yearling stages, using threshold Bayesian models. The data comprised visual scores of 5,407 animals evaluated at weaning and 2,649 at the yearling stage. The genetic parameters for the visual score traits were estimated through two-trait analyses using a threshold animal model, with Bayesian methodology and the MTGSAM (Multiple Trait Gibbs Sampler for Animal Models) threshold software. Heritability estimates for S, P, and M were 0.68, 0.65, and 0.62 at weaning and 0.44, 0.38, and 0.32 at the yearling stage, respectively. These heritability estimates are high, so the traits are expected to respond favorably to direct selection. The visual scores evaluated at the weaning and yearling stages may be used in the composition of new selection indexes, as they presented sufficient genetic variability to promote genetic progress in these morphological traits.
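The heritabilities quoted above are ratios of variance components: narrow-sense heritability is the additive genetic variance divided by the total phenotypic variance. The sketch below shows the generic computation with invented variance values, not the study's estimated components.

```python
# Narrow-sense heritability from variance components.
# The variance values below are invented for illustration.

def heritability(var_additive, var_residual):
    """h2 = VA / (VA + VE), the fraction of phenotypic variance
    attributable to additive genetic effects."""
    return var_additive / (var_additive + var_residual)

# Components chosen to reproduce a high estimate like those at weaning
print(heritability(68.0, 32.0))  # 0.68
# and a moderate one like those at the yearling stage
print(heritability(44.0, 56.0))  # 0.44
```

In the threshold-model setting these components are posterior estimates on an underlying liability scale rather than on the observed categorical scores.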

Relevance:

30.00%

Publisher:

Abstract:

This work presents a cooperative navigation system for a humanoid robot and a wheeled robot using visual information, aiming to navigate the non-instrumented humanoid robot using information obtained from the instrumented wheeled robot. Although the humanoid has no sensors for its own navigation, it can be remotely controlled by infrared signals. Thus, the wheeled robot can control the humanoid by positioning itself behind it and, through visual information, finding and navigating it. The location of the wheeled robot is obtained by merging information from odometers and from landmark detection, using the Extended Kalman Filter. The landmarks are detected visually, and their features are extracted by image processing; the parameters obtained are used directly in the Extended Kalman Filter. Thus, while the wheeled robot locates and navigates the humanoid, it simultaneously calculates its own location and maps the environment (SLAM). Navigation is performed by heuristic algorithms based on the errors between the actual and desired pose of each robot. The main contribution of this work is the implementation of a cooperative navigation system for two robots based on visual information, which can be extended to other robotic applications, such as controlling robots without modifying their hardware or attaching communication devices.
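The Extended Kalman Filter fusion described above rests on a correction step that blends a prediction with a landmark observation according to their uncertainties. The one-dimensional version below shows that step in isolation; the full EKF applies the same idea to the vector pose with Jacobians, and all numbers here are invented.

```python
# One-dimensional Kalman correction: fuse an odometry prediction with
# a landmark measurement, weighted by their variances. Values invented.

def kalman_update(x_pred, p_pred, z, r):
    """Fuse prediction (x_pred, variance p_pred) with measurement (z, variance r)."""
    k = p_pred / (p_pred + r)          # Kalman gain
    x = x_pred + k * (z - x_pred)      # corrected state
    p = (1 - k) * p_pred               # reduced uncertainty
    return x, p

# Odometry says 10.0 m (variance 4.0); a landmark sighting says 12.0 m (variance 1.0)
x, p = kalman_update(10.0, 4.0, 12.0, 1.0)
print(round(x, 2), round(p, 2))  # 11.6 0.8
```

The corrected estimate leans toward the more certain landmark measurement, and the posterior variance is smaller than either input, which is exactly why fusing odometry with visual landmarks bounds drift.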

Relevance:

30.00%

Publisher:

Abstract:

This work deals with the development of a quadrotor helicopter prototype for monitoring applications in oil facilities. Anomaly detection problems can be addressed through monitoring missions performed by a suitably instrumented quadrotor, i.e. one with embedded infrared thermosensors. The proposed monitoring system aims to reduce accidents and to make possible the use of non-destructive techniques for detecting and locating leaks caused by corrosion. To this end, the implementation of a prototype, its stabilization, and a navigation strategy are proposed. The control strategy divides the problem into two hierarchical control levels: the lower level stabilizes the angles and the altitude of the vehicle at the desired values, while the higher level provides appropriate reference signals to the lower level so that the quadrotor performs the desired movements. The navigation strategy for the quadrotor uses information provided by an image acquisition system (monocular camera) embedded in the helicopter. Assuming the low-level control problem has been solved, the proposed vision-based navigation technique treats the problem through high-level control strategies such as relative position control, trajectory generation, and trajectory tracking. For position control, a visual servoing technique based on image features is used. Trajectory generation is performed in an offline step, producing a visual trajectory composed of a sequence of images. For the trajectory-tracking problem, a continuous visual servoing control strategy is proposed, enabling a navigation strategy without metric maps. Simulation and experimental results are presented to validate the proposal.

Relevance:

30.00%

Publisher:

Abstract:

Visual odometry is the process of estimating camera position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the increasing advance of computer vision algorithms and computer processing power, the subarea known as Structure from Motion (SfM) has begun to supply mathematical tools for localization systems in robotics and Augmented Reality applications, in contrast with its initial purpose of serving inherently offline solutions aimed at 3D reconstruction and image-based modelling. Accordingly, this work proposes a pipeline to obtain relative position using a previously calibrated camera as a positional sensor, based entirely on models and algorithms from SfM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, making additional information such as probabilistic models for camera state transition unnecessary. Experiments assessing both the 3D reconstruction quality and the camera position estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared to localization data gathered from a mobile robotic platform.
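A visual-odometry pipeline like the one described chains frame-to-frame motion estimates by composing each onto the running pose. The planar (x, y, heading) composition below illustrates the chaining; the per-frame motions are invented, and a real SfM pipeline would estimate them in full 3D from feature correspondences.

```python
import math

# Chaining relative motions into a global pose, reduced to the plane
# for brevity. The per-frame motions below are invented values standing
# in for frame-to-frame estimates from an SfM front end.

def compose(pose, motion):
    """Compose a relative motion (dx, dy, dtheta), expressed in the
    current camera frame, onto the global pose (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = motion
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

pose = (0.0, 0.0, 0.0)
for step in [(1.0, 0.0, 0.0), (1.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0)]:
    pose = compose(pose, step)
print(pose)  # forward twice, turn left, forward once: near (2.0, 1.0)
```

Because each estimate is relative, errors accumulate along the chain; the experiments comparing against the robot platform's localization data quantify exactly this drift.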