920 results for "decoupled image-based visual servoing"


Relevance: 30.00%

Abstract:

BACKGROUND: We examined body image perception and its association with reported weight-control behavior among adolescents in the Seychelles. METHODS: We conducted a school-based survey of 1432 students aged 11-17 years in the Seychelles. Perception of body image was assessed using both a closed-ended question (CEQ) and Stunkard's pictorial silhouettes (SPS). Voluntary attempts to change weight were also assessed. RESULTS: A substantial proportion of the overweight students did not consider themselves overweight (SPS: 24%, CEQ: 34%), and a substantial proportion of the normal-weight students considered themselves too thin (SPS: 29%, CEQ: 15%). Logistic regression analysis showed that students with an accurate weight perception were more likely to report appropriate weight-control behavior. CONCLUSIONS: We found that substantial proportions of students had an inaccurate perception of their weight and that weight perception was associated with weight-control behavior. These findings point to forces that can drive the upward trend in overweight.
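The association reported above can be illustrated with a toy calculation. The sketch below computes an unadjusted odds ratio from a hypothetical 2x2 table; the counts are invented, not the study's data, and the paper's logistic regression additionally adjusts for covariates.

```python
import math

def odds_ratio(table):
    """Unadjusted odds ratio for a 2x2 table [[a, b], [c, d]]:
    rows = accurate / inaccurate weight perception,
    columns = appropriate / inappropriate weight-control behavior."""
    (a, b), (c, d) = table
    return (a * d) / (b * c)

def ci_95(table):
    """Approximate 95% confidence interval via the log odds ratio."""
    (a, b), (c, d) = table
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    log_or = math.log(odds_ratio(table))
    return math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)

# Hypothetical counts, NOT the study's data:
table = [[120, 40], [60, 80]]
print(odds_ratio(table))  # → 4.0
```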

Relevance: 30.00%

Abstract:

Human electrophysiological studies support a model whereby sensitivity to so-called illusory contour stimuli is first seen within the lateral occipital complex. A challenge to this model posits that the lateral occipital complex is a general site for crude region-based segmentation, based on findings of equivalent hemodynamic activations in the lateral occipital complex to illusory contour and so-called salient region stimuli, a stimulus class that lacks the classic bounding contours of illusory contours. Using high-density electrical mapping of visual evoked potentials, we show that early lateral occipital cortex activity is substantially stronger to illusory contour than to salient region stimuli, whereas later lateral occipital complex activity is stronger to salient region than to illusory contour stimuli. Our results suggest that equivalent hemodynamic activity to illusory contour and salient region stimuli probably reflects temporally integrated responses, a result of the poor temporal resolution of hemodynamic imaging. The temporal precision of visual evoked potentials is critical for establishing viable models of completion processes and visual scene analysis. We propose that crude spatial segmentation analyses, which are insensitive to illusory contours, occur first within dorsal visual regions, not the lateral occipital complex, and that initial illusory contour sensitivity is a function of the lateral occipital complex.

Relevance: 30.00%

Abstract:

1. The importance of dietary lipids for carotenoid-based ornaments has rarely been investigated, although theory predicts that dietary lipids may control the development of these widespread animal signals. Dietary lipids have been suggested to enhance the expression of male carotenoid-based ornaments because they provide carotenoids with a hydrophobic domain that facilitates their absorption and transport. Dietary lipids may also enhance the uptake of tocopherols (vitamin E), which share common absorption and transport routes with carotenoids. Here, we test whether dietary lipids enhance carotenoid availability and male carotenoid-based colorations. We also explore the effects of dietary lipids on plasma tocopherol concentration, which allows disentangling the different pathways that may explain how dietary lipids affect ornamental expression. 2. Following a two-factorial design, we manipulated dietary access to naturally occurring fatty acids (oleic acid) and carotenoids (lutein and zeaxanthin) and measured the effects on the circulating concentrations of carotenoids (lutein and zeaxanthin) and vitamin E (α- and γ-(β-) tocopherols) and on the ventral, carotenoid-based coloration of male common lizards (Lacerta vivipara). 3. Lutein but not zeaxanthin plasma concentrations increased with carotenoid supplementation, which, however, did not affect coloration. Lipid intake negatively affected the circulating concentrations of lutein and γ-(β-) tocopherol and led to significantly less orange colorations. A path analysis suggests that a relationship may exist between the observed colour change and the change in plasma concentrations of γ-(β-) tocopherol. 4. Our study shows for the first time that dietary lipids do not enhance but rather reduce the intensity of male carotenoid-based ornaments. Although dietary lipids affected plasma carotenoid concentration, their negative effect on coloration appeared to be linked to lower vitamin E plasma concentrations. These findings suggest that a conflict between dietary lipids and carotenoid and tocopherol uptake may arise if these nutrients are obtained independently from natural diets, and that such a conflict may reinforce signal honesty in carotenoid-based ornaments. They also suggest that, at least in the common lizard, sexual selection with respect to carotenoid-based coloration may select for males with low antioxidant capacity and thus for males of superior health.

Relevance: 30.00%

Abstract:

In the context of the round table, the following topics related to image colour processing will be discussed: a historical point of view, covering the studies of Aguilonius, Gerritsen, Newton and Maxwell; the CIE standard (Commission Internationale de l'Éclairage); colour models (RGB, HSI, etc.); colour segmentation based on the HSI model; industrial applications; and a summary and discussion. At the end, video images showing the advantages of colour over B/W images will be presented.
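As an illustration of the colour models mentioned above, this sketch converts an RGB triple to HSI using the standard arccos formulation for hue; it is the textbook conversion (normalized channels assumed), not material from the round table itself.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB (each in [0, 1]) to HSI.
    Returns hue in degrees, saturation and intensity in [0, 1]."""
    i = (r + g + b) / 3.0
    mn = min(r, g, b)
    s = 0.0 if i == 0 else 1.0 - mn / i
    # Hue from the standard arccos formulation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:
        h = 360.0 - h
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))  # pure red → hue 0 degrees, full saturation
```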

Relevance: 30.00%

Abstract:

This paper describes a method to obtain the most relevant contours of an image. The method integrates the information of the local contours from the chromatic components H, S and I, taking into account a criterion of coherence of the local contour orientation values obtained from each of these components. The process is based on parametrizing, pixel by pixel, the local contours (magnitude and orientation values) from the H, S and I images. This process is carried out individually for each chromatic component. If the dispersion of the obtained orientation values is high, the chromatic component loses relevance. A final processing step integrates the extracted contours of the three chromatic components, generating the so-called integrated contours image.
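The orientation-dispersion criterion can be illustrated with circular statistics. The sketch below is an assumption about how such a criterion might be computed (the abstract gives no formula): the mean resultant length of the doubled orientation angles, where a low value indicates high dispersion and thus a less relevant chromatic component.

```python
import math

def orientation_coherence(angles):
    """Coherence of local contour orientations (radians) in a neighbourhood.
    Angles are doubled so that theta and theta + pi are equivalent, since
    contour orientation is axial. Returns the mean resultant length R in
    [0, 1]: R near 1 -> coherent orientations, R near 0 -> high dispersion."""
    c = sum(math.cos(2 * a) for a in angles) / len(angles)
    s = sum(math.sin(2 * a) for a in angles) / len(angles)
    return math.hypot(c, s)

coherent = [0.50, 0.52, 0.49, 0.51]                      # nearly parallel contours
scattered = [0.0, math.pi / 4, math.pi / 2, 3 * math.pi / 4]  # uniformly spread
print(orientation_coherence(coherent) > orientation_coherence(scattered))  # → True
```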

Relevance: 30.00%

Abstract:

Detecting changes between images of the same scene taken at different times is of great interest for monitoring and understanding the environment. Change detection is widely used in on-land applications but is subject to several constraints; in particular, change detection algorithms require highly accurate geometric and photometric registration. This requirement has precluded their use in underwater imagery in the past. In this paper, the change detection techniques currently available for on-land applications are analyzed, and a method to automatically detect changes in sequences of underwater images is proposed. Target application scenarios are habitat restoration sites, or area monitoring after sudden impacts from hurricanes or ship groundings. The method is based on the creation of a 3D terrain model from one image sequence over an area of interest. This model allows synthesizing textured views that correspond to the same viewpoints of a second image sequence. The generated views are photometrically matched and corrected against the corresponding frames of the second sequence. Standard change detection techniques are then applied to find areas of difference. Additionally, the paper shows that it is possible to detect false positives, resulting from non-rigid objects, by applying the same change detection method to the first sequence exclusively. The developed method was able to correctly find the changes between two challenging sequences of images of a coral reef taken one year apart and acquired with two different cameras.
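A minimal sketch of the differencing stage, assuming a simple global gain/bias photometric correction by moment matching; the paper's pipeline additionally synthesizes the reference view from a 3D terrain model, and the frames and threshold here are synthetic.

```python
import numpy as np

def detect_changes(ref, cur, thresh=0.2):
    """Photometrically match `cur` to `ref` by equalizing mean and
    standard deviation (a global gain/bias correction), then flag
    pixels whose absolute difference exceeds `thresh`."""
    corrected = (cur - cur.mean()) * (ref.std() / cur.std()) + ref.mean()
    return np.abs(ref - corrected) > thresh

# Synthetic frames: the same intensity ramp seen under different
# exposure (gain 0.5, offset 0.1), plus one genuine 2x2 scene change.
ref = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
cur = 0.5 * ref + 0.1
cur[4:6, 4:6] += 0.8
print(detect_changes(ref, cur).sum())  # → 4 changed pixels
```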

Relevance: 30.00%

Abstract:

When underwater vehicles navigate close to the ocean floor, computer vision techniques can be applied to obtain motion estimates. A complete system to create visual mosaics of the seabed is described in this paper. Unfortunately, the accuracy of the constructed mosaic is difficult to evaluate. The use of a laboratory setup to obtain an accurate error measurement is proposed. The system consists of a robot arm carrying a downward-looking camera. A pattern formed by a white background and a matrix of black dots uniformly distributed over the surveyed scene is used to find the exact image registration parameters. When the robot executes a trajectory (simulating the motion of a submersible), an image sequence is acquired by the camera. The estimated motion computed from the encoders of the robot is refined by detecting, to subpixel accuracy, the black dots of the image sequence, and computing the 2D projective transform which relates two consecutive images. The pattern is then substituted by a poster of the sea floor and the trajectory is executed again, acquiring the image sequence used to test the accuracy of the mosaicking system.
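The 2D projective transform relating consecutive images can be estimated from the detected dot correspondences with the direct linear transform (DLT). A minimal sketch with invented correspondences (a pure translation, for clarity):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 projective transform H with dst ~ H @ src
    from >= 4 point correspondences, via the direct linear transform."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

# Hypothetical dot positions under a pure translation by (2, 3).
src = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 2)]
dst = [(x + 2, y + 3) for x, y in src]
H = homography_dlt(src, dst)
print(np.round(H, 6))  # identity rotation, translation column (2, 3)
```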

Relevance: 30.00%

Abstract:

When underwater vehicles navigate close to the ocean floor, computer vision techniques can be applied to obtain quite accurate motion estimates. The most crucial step in the vision-based estimation of the vehicle motion consists of detecting matches between image pairs. Here we propose the extensive use of texture analysis as a tool to ameliorate the correspondence problem in underwater images. Once a robust set of correspondences has been found, the three-dimensional motion of the vehicle can be computed with respect to the sea bed. Finally, the motion estimates allow the construction of a map that could aid the navigation of the robot.
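As a simplified stand-in for texture-based correspondence search (the paper's actual texture descriptors are not reproduced here), this sketch matches a patch by zero-mean normalized cross-correlation, which tolerates the gain and offset changes common in underwater imagery.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two equally
    sized patches; 1.0 means identical up to gain and offset."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def best_match(patch, image):
    """Exhaustively slide `patch` over `image`, returning the
    top-left offset with the highest NCC score."""
    ph, pw = patch.shape
    ih, iw = image.shape
    scores = {(r, c): ncc(patch, image[r:r + ph, c:c + pw])
              for r in range(ih - ph + 1) for c in range(iw - pw + 1)}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
image = rng.random((20, 20))                   # stand-in seabed texture
patch = image[5:10, 7:12].copy() * 1.5 + 0.2   # same texture, new gain/offset
print(best_match(patch, image))  # → (5, 7)
```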

Relevance: 30.00%

Abstract:

This paper describes the improvements achieved in our mosaicking system to assist unmanned underwater vehicle navigation. A major advance has been attained in the processing of images of the ocean floor when light absorption effects are evident. Due to the absorption of natural light, underwater vehicles often require artificial light sources attached to them to provide adequate illumination for processing underwater images. Unfortunately, these lights tend to illuminate the scene in a non-uniform fashion. In this paper a technique to correct non-uniform lighting is proposed. The acquired frames are compensated through a point-by-point division of the image by an estimation of the illumination field. Then, the gray levels of the obtained image are remapped to enhance image contrast. Experiments with real images are presented.
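A minimal sketch of the compensation step: estimate the illumination field (here with a local mean filter, an illustrative assumption; the paper's estimator may differ), divide point by point, and remap grey levels with a simple contrast stretch.

```python
import numpy as np

def illumination_field(img, kernel=9):
    """Estimate the illumination field as a local mean (box filter)."""
    pad = kernel // 2
    padded = np.pad(img, pad, mode='edge')
    field = np.empty_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            field[r, c] = padded[r:r + kernel, c:c + kernel].mean()
    return field

# Synthetic frame: flat-albedo seabed under a centred spotlight falloff.
y, x = np.mgrid[0:32, 0:32]
frame = 0.5 * np.exp(-((x - 16.0) ** 2 + (y - 16.0) ** 2) / 200.0)

# Point-by-point division, then grey-level remapping (contrast stretch).
compensated = frame / np.maximum(illumination_field(frame), 1e-6)
remapped = (compensated - compensated.min()) / np.ptp(compensated)
print(compensated.std() / compensated.mean() < frame.std() / frame.mean())  # → True
```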

Relevance: 30.00%

Abstract:

Obtaining the 3D profile of objects automatically is one of the most important issues in computer vision. With this information, a large number of applications become feasible: from visual inspection of industrial parts to 3D reconstruction of the environment for mobile robots. In order to obtain 3D data, range finders can be used. The coded structured light approach is one of the most widely used techniques to retrieve the 3D information of an unknown surface. An overview of the existing techniques, as well as a new classification of patterns for structured light sensors, is presented. These systems belong to the group of active triangulation methods, which are based on projecting a light pattern and imaging the illuminated scene from one or more points of view. Since the patterns are coded, correspondences between points of the image(s) and points of the projected pattern can be easily found. Once correspondences are found, a classical triangulation strategy between the camera(s) and the projector device leads to the reconstruction of the surface. Advantages and constraints of the different patterns are discussed.
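Once a correspondence between a pixel and a coded stripe is established, the triangulation step reduces to intersecting the camera ray with the light plane cast by that stripe. A minimal sketch with an invented calibration (the light plane x = 0.1 is hypothetical):

```python
import numpy as np

def triangulate(ray_dir, plane_point, plane_normal):
    """Intersect a camera ray (through the origin, direction `ray_dir`)
    with the light plane cast by one coded projector stripe."""
    ray_dir = np.asarray(ray_dir, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    t = np.dot(np.asarray(plane_point, dtype=float), n) / np.dot(ray_dir, n)
    return t * ray_dir  # 3D surface point

# Hypothetical setup: a vertical light plane x = 0.1 and the ray of
# one decoded pixel; both numbers are invented for illustration.
point = triangulate(ray_dir=[0.05, 0.0, 1.0],
                    plane_point=[0.1, 0.0, 0.0],
                    plane_normal=[1.0, 0.0, 0.0])
print(point)  # → [0.1 0.  2. ]
```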

Relevance: 30.00%

Abstract:

Shape complexity has recently received attention from different fields, such as computer vision and psychology. In this paper, integral geometry and information theory tools are applied to quantify the shape complexity from two different perspectives: from the inside of the object, we evaluate its degree of structure or correlation between its surfaces (inner complexity), and from the outside, we compute its degree of interaction with the circumscribing sphere (outer complexity). Our shape complexity measures are based on the following two facts: uniformly distributed global lines crossing an object define a continuous information channel, and the continuous mutual information of this channel is independent of the object discretisation and invariant to translations, rotations, and changes of scale. The measures introduced in this paper can potentially be used as shape descriptors for object recognition, image retrieval, object localisation, tumour analysis, and protein docking, among others.
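The continuous mutual-information measure itself is beyond a short sketch, but its discrete analogue illustrates the underlying idea: a channel that preserves the structure of its input carries maximal mutual information, while an unstructured one carries none.

```python
import math

def mutual_information(joint):
    """Mutual information I(X;Y) in bits of a discrete channel,
    given its joint probability table joint[x][y]."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (px[i] * py[j]))
    return mi

# A deterministic channel carries all the input information (1 bit) ...
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))  # → 1.0
# ... while an independent channel carries none.
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # → 0.0
```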

Relevance: 30.00%

Abstract:

In this paper, an information theoretic framework for image segmentation is presented. This approach is based on the information channel that goes from the image intensity histogram to the regions of the partitioned image. It allows us to define a new family of segmentation methods which maximize the mutual information of the channel. Firstly, a greedy top-down algorithm which partitions an image into homogeneous regions is introduced. Secondly, a histogram quantization algorithm which clusters color bins in a greedy bottom-up way is defined. Finally, the resulting regions of the partitioning algorithm can optionally be merged using the quantized histogram.
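A minimal 1D sketch of the greedy top-down idea: among candidate split positions, choose the one that maximizes the mutual information of the intensity-to-region channel. The paper operates on 2D images; this one-step, one-dimensional version is only illustrative.

```python
import math
from collections import Counter

def mutual_info(pairs):
    """I(intensity; region) in bits from (intensity, region) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * math.log2((c / n) / (px[x] / n * py[y] / n))
               for (x, y), c in pxy.items())

def best_split(pixels):
    """One greedy top-down step on a 1D 'image': split at the position
    whose left/right regions maximize the channel's mutual information."""
    def score(k):
        return mutual_info([(v, i < k) for i, v in enumerate(pixels)])
    return max(range(1, len(pixels)), key=score)

pixels = [0, 0, 0, 0, 5, 5, 5, 5]  # two homogeneous regions
print(best_split(pixels))  # → 4
```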

Relevance: 30.00%

Abstract:

Over the past decade, significant interest has been expressed in relating the spatial statistics of surface-based reflection ground-penetrating radar (GPR) data to those of the imaged subsurface volume. A primary motivation for this work is that changes in the radar wave velocity, which largely control the character of the observed data, are expected to be related to corresponding changes in subsurface water content. Although previous work has indeed indicated that the spatial statistics of GPR images are linked to those of the water content distribution of the probed region, a viable method for quantitatively analyzing the GPR data and solving the corresponding inverse problem has not yet been presented. Here we address this issue by first deriving a relationship between the 2-D autocorrelation of a water content distribution and that of the corresponding GPR reflection image. We then show how a Bayesian inversion strategy based on Markov chain Monte Carlo sampling can be used to estimate the posterior distribution of subsurface correlation model parameters that are consistent with the GPR data. Our results indicate that if the underlying assumptions are valid and we possess adequate prior knowledge regarding the water content distribution, in particular its vertical variability, this methodology allows not only for the reliable recovery of lateral correlation model parameters but also for estimates of parameter uncertainties. In the case where prior knowledge regarding the vertical variability of water content is not available, the results show that the methodology still reliably recovers the aspect ratio of the heterogeneity.
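A toy 1D analogue of the inversion strategy (the paper works with 2D autocorrelations of GPR images): Metropolis sampling of the posterior over an exponential correlation length, with an assumed Gaussian likelihood and a flat prior; all numbers here are synthetic.

```python
import math
import random

def metropolis_corr_length(lags, obs_acf, n_iter=4000, sigma=0.05, seed=1):
    """Metropolis sampling of the posterior over the correlation length L
    of an exponential autocorrelation model exp(-lag/L), given observed
    autocorrelation values. Gaussian likelihood with fixed noise `sigma`
    and a flat prior on L > 0 are assumed."""
    def log_like(L):
        return -sum((a - math.exp(-lag / L)) ** 2
                    for lag, a in zip(lags, obs_acf)) / (2 * sigma ** 2)
    rng = random.Random(seed)
    L, samples = 1.0, []
    ll = log_like(L)
    for _ in range(n_iter):
        prop = L + rng.gauss(0.0, 0.3)
        if prop > 0:
            ll_prop = log_like(prop)
            if ll_prop >= ll or rng.random() < math.exp(ll_prop - ll):
                L, ll = prop, ll_prop
        samples.append(L)
    return samples[n_iter // 2:]  # discard burn-in

# Noise-free synthetic data with a true correlation length of 2.0.
lags = [0.5 * k for k in range(10)]
obs = [math.exp(-lag / 2.0) for lag in lags]
posterior = metropolis_corr_length(lags, obs)
print(sum(posterior) / len(posterior))  # posterior mean, close to 2.0
```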

Relevance: 30.00%

Abstract:

Synchrotron radiation X-ray tomographic microscopy is a nondestructive method providing ultra-high-resolution 3D digital images of rock microstructures. We describe this method and, to demonstrate its wide applicability, we present 3D images of very different rock types: Berea sandstone, Fontainebleau sandstone, dolomite, calcitic dolomite, and three-phase magmatic glasses. For some samples, full and partial saturation scenarios are considered using oil, water, and air. The rock images precisely reveal the 3D rock microstructure, the pore space morphology, and the interfaces between fluids saturating the same pore. We provide the raw image data sets as online supplementary material, along with laboratory data describing the rock properties. By making these data sets available to other research groups, we aim to stimulate work based on digital rock images of high quality and high resolution. We also discuss and suggest possible applications and research directions that can be pursued on the basis of our data.
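As an example of the kind of analysis such data sets enable (not a method from the paper), porosity can be read directly off a segmented voxel image:

```python
import numpy as np

def porosity(volume, pore_value=0):
    """Porosity of a segmented 3D rock image: the fraction of voxels
    labelled as pore space."""
    volume = np.asarray(volume)
    return float((volume == pore_value).sum() / volume.size)

# Toy 4x4x4 segmented volume: 0 = pore, 1 = grain.
vol = np.ones((4, 4, 4), dtype=int)
vol[1:3, 1:3, 1:3] = 0  # a 2x2x2 pore
print(porosity(vol))  # → 0.125 (8 pore voxels out of 64)
```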

Relevance: 30.00%

Abstract:

BACKGROUND: Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide clinicians in Alzheimer's Disease (AD) diagnosis. However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) systems. METHODS: A novel combination of feature extraction techniques to improve the diagnosis of AD is proposed. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to be located within a predefined brain activation mask. In order to address the small sample-size problem, the dimension of the feature space was further reduced by: Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA) or Partial Least Squares (PLS) (the two latter also analysed with an LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and Energy-based metrics were compared. RESULTS: Several experiments were conducted in order to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: i) a linear transformation of the PLS- or PCA-reduced data, ii) a feature reduction technique, and iii) a classifier (with Euclidean, Mahalanobis or Energy-based methodology). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when the NMSE-PLS-LMNN feature extraction method was used in combination with an SVM classifier, thus outperforming recently reported baseline methods. CONCLUSIONS: All the proposed methods turned out to be valid solutions for the presented problem. One advance is the robustness of the LMNN algorithm, which not only provides a higher separation rate between the classes but also (in combination with NMSE and PLS) makes this rate more stable. Their generalisation ability is a further advance, since the experiments were performed on two image modalities (SPECT and PET).
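The NMSE features at the start of the pipeline can be sketched as follows; the definition used here (squared error normalised by the template's energy) is a standard formulation assumed for illustration, computed per ROI against a template image.

```python
import numpy as np

def nmse(region, template):
    """Normalised mean square error between a subject's ROI and the
    corresponding template ROI: sum of squared differences divided
    by the template's energy."""
    region = np.asarray(region, dtype=float)
    template = np.asarray(template, dtype=float)
    return float(((region - template) ** 2).sum() / (template ** 2).sum())

# Toy 3x3x3 ROIs standing in for registered brain volumes.
template = np.full((3, 3, 3), 2.0)
print(nmse(template, template))        # identical ROI → 0.0
print(nmse(0.5 * template, template))  # uniformly hypoperfused ROI → 0.25
```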