875 results for Computer Vision for Robotics and Automation
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Animal behavioral parameters can be used to assess welfare status in commercial broiler breeders. Behavioral parameters can be monitored with a variety of sensing devices; for instance, video cameras allow a comprehensive assessment of animal behavioral expressions. Nevertheless, efficient methods and algorithms to continuously identify and differentiate animal behavior patterns are still needed. The objective of this study was to provide a methodology to identify white broiler breeder hen behavior using combined image processing and computer vision techniques. These techniques were applied to differentiate body shapes from a sequence of frames as the birds expressed their behaviors. The method comprised four stages: (1) identification of body positions and their relationship with typical behaviors, including determining the number of frames required to identify each behavior; (2) collection of image samples, isolating the birds that expressed a behavior of interest; (3) image processing and analysis using a filter developed to separate the white birds from the dark background; and finally (4) construction and validation of a behavioral classification tree using the Weka software tool (J48 model). The constructed tree was structured in 8 levels and 27 leaves and was validated in two modes: training-set mode, with an overall success rate of 96.7%, and cross-validation mode, with an overall success rate of 70.3%. The results presented here confirm the feasibility of the method developed to identify white broiler breeder behavior for a particular study group. Nevertheless, the method can be further improved to increase the overall cross-validation success rate. (C) 2013 Elsevier B.V. All rights reserved.
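To make the pipeline concrete, the following is a minimal Python sketch of stages 3 and 4, assuming OpenCV for separating the white birds from the dark background and a scikit-learn decision tree standing in for the Weka J48 classifier; the threshold value and the shape features are illustrative, not the authors' implementation.

```python
# Minimal sketch of stage 3 (segmenting white birds from a dark background)
# and stage 4 (decision-tree classification). OpenCV and scikit-learn stand in
# for the authors' filter and the Weka J48 tree; the threshold and the feature
# names are hypothetical.
import cv2
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def segment_bird(frame_bgr, threshold=180):
    """Isolate bright (white) pixels against the dark background."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blurred, threshold, 255, cv2.THRESH_BINARY)
    return mask

def shape_features(mask):
    """Simple body-shape descriptors (area, aspect ratio, extent) per frame."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return [0.0, 0.0, 0.0]
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
    area = cv2.contourArea(c)
    return [area, w / float(h), area / float(w * h)]

# X: one feature row per frame sequence, y: behavior labels (hypothetical).
# A decision tree analogous to Weka's J48 would then be trained and applied:
# clf = DecisionTreeClassifier().fit(X_train, y_train)
# predictions = clf.predict(X_new)
```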
Abstract:
With the widespread proliferation of computers, many human activities entail the use of automatic image analysis. The basic features used for image analysis include color, texture, and shape. In this paper, we propose a new shape description method, called Hough Transform Statistics (HTS), which uses statistics from the Hough space to characterize the shape of objects or regions in digital images. A modified version of this method, called Hough Transform Statistics neighborhood (HTSn), is also presented. Experiments carried out on three popular public image databases showed that the HTS and HTSn descriptors are robust, presenting precision-recall results much better than several other well-known shape description methods. When compared to the Beam Angle Statistics (BAS) method, the shape description method that inspired their development, both HTS and HTSn presented inferior results on the precision-recall criterion but superior results on the processing-time and multiscale-separability criteria. The linear complexity of the HTS and HTSn algorithms, in contrast to BAS, makes them more appropriate for shape analysis in high-resolution image retrieval tasks over the very large databases that are common nowadays. (C) 2014 Elsevier Inc. All rights reserved.
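As a rough illustration of the idea of taking statistics from the Hough space, the sketch below maps the edge points of a shape into a (rho, theta) accumulator with scikit-image and summarizes it as a normalized per-angle histogram; the particular statistics defined by HTS and HTSn are not reproduced here.

```python
# Rough sketch of a Hough-space shape signature in the spirit of HTS;
# the statistic chosen here (normalized per-angle sums of the accumulator)
# is illustrative, not the published definition.
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_line

def hts_like_signature(image_gray):
    edges = canny(image_gray)                       # binary edge map of the shape
    accumulator, angles, dists = hough_line(edges)  # standard (rho, theta) Hough space
    # Summarize the accumulator per angle and normalize, giving a fixed-length
    # histogram that characterizes the shape.
    per_angle = accumulator.sum(axis=0).astype(float)
    return per_angle / (per_angle.sum() + 1e-9)

# Signatures can then be compared with e.g. Euclidean or chi-square distance
# for content-based image retrieval.
```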
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The identification of color vision types in primates is fundamental to understanding the evolution and biological function of color perception. The Hard, Randy, and Rittler (HRR) pseudoisochromatic test categorizes human color vision types successfully. Here we provide an experimental setup to employ HRR in a nonhuman primate, the capuchin (Cebus libidinosus), a platyrrhine with polymorphic color vision. The HRR test consists of plates with a matrix composed of gray circles that vary in size and brightness. Differently colored circles form a geometric shape (X, O, or Delta) that is discriminated visually from the gray background pattern. The ability to identify these shapes determines the type of dyschromatopsy (deficiency in color vision). We tested six capuchins in their own cages under natural sunlight. The subjects chose between two HRR plates in each trial: one with the gray pattern only and the other with a colored shape, presented on the left or right side at random. We presented the test 40 times and calculated the 95 % confidence limits for chance performance based on the binomial test. We also genotyped all subjects for exons 3 and 5 of the X-linked opsin genes. The HRR test diagnosed two subjects as protan dichromats (missing or defective L-cone), three as deutan dichromats (missing or defective M-cone), and one female as trichromat. Genetic analysis supported the behavioral data for all subjects. These findings show that the HRR test can be applied to diagnose color vision in nonhuman primates.
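For the chance-performance criterion, the limits follow directly from the binomial distribution; the sketch below, assuming SciPy, shows the calculation for 40 two-alternative trials at p = 0.5.

```python
# Sketch of the statistical criterion: with 40 two-alternative trials, chance
# performance is p = 0.5, and a subject is scored as discriminating a plate
# only if the number of correct choices exceeds the upper 95% binomial limit.
from scipy.stats import binom

n_trials, p_chance = 40, 0.5
lower, upper = binom.interval(0.95, n_trials, p_chance)
print(f"Chance range for {n_trials} trials: {int(lower)}-{int(upper)} correct")
# Correct choices above `upper` indicate the colored shape was actually seen.
```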
Abstract:
The aims of this study were to investigate working conditions, estimate the prevalence of Computer Vision Syndrome, and describe its associated risk factors among operators at two call centers in São Paulo (n = 476). The methods included a quantitative cross-sectional observational study and an ergonomic work analysis using work observation, interviews, and questionnaires. The case definition was the presence of one or more specific ocular symptoms reported as occurring always, often, or sometimes. The multiple logistic regression model was built using the forward stepwise likelihood method, retaining the variables with significance levels below 5% (p < 0.05). The operators were mainly female and young (15 to 24 years old). The call centers operated 24 hours a day, and the operators worked 36 hours per week with daily break times of 21 to 35 minutes. The symptoms reported were eye fatigue (73.9%), "weight" in the eyes (68.2%), "burning" eyes (54.6%), tearing (43.9%), and weakening of vision (43.5%). The prevalence of Computer Vision Syndrome was 54.6%. The associations found were: being female (OR 2.6, 95% CI 1.6 to 4.1), lack of recognition at work (OR 1.4, 95% CI 1.1 to 1.8), work organization in the call center (OR 1.4, 95% CI 1.1 to 1.7), and high demand at work (OR 1.1, 95% CI 1.0 to 1.3). Organizational and psychosocial factors at work should be included in prevention programs for visual syndrome among call center operators.
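As an illustration of how such odds ratios and confidence intervals are obtained, the sketch below fits a logistic regression with statsmodels on a small, made-up data frame; the variable names and values are hypothetical and do not come from the study.

```python
# Hedged sketch of the analysis: logistic regression of a binary CVS case
# definition on work-related factors, reporting odds ratios with 95% CIs.
# The data and variable names below are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# df: one row per operator, binary outcome `cvs` plus candidate risk factors.
df = pd.DataFrame({
    "cvs":         [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
    "female":      [1, 1, 0, 1, 0, 1, 1, 0, 0, 0],
    "high_demand": [3, 2, 2, 3, 1, 3, 2, 1, 3, 2],
})
X = sm.add_constant(df[["female", "high_demand"]])
model = sm.Logit(df["cvs"], X).fit(disp=False)

odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())   # 95% confidence limits on the OR scale
print(pd.concat([odds_ratios, conf_int], axis=1))
```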
Abstract:
Inspection for corrosion along the welding seam lines of gas storage spheres must be done periodically. Until now, this inspection has been done manually, with high costs and a high risk of injury to inspection personnel. The Brazilian petroleum company Petrobras is seeking cost reduction and personnel safety through the use of autonomous robot technology. This paper presents the development of a robot capable of autonomously following a welding line while transporting corrosion measurement sensors. The robot uses a pair of sensors, each composed of a laser source and a video camera, which allow estimation of the center of the welding line. The robot adheres to the sphere's surface with four magnetic wheels and was constructed so that three wheels are always in contact with the metallic surface, guaranteeing enough magnetic attraction to hold the robot on the sphere at all times. Additionally, an independently actuated table for attaching the corrosion inspection sensors was included to allow small position corrections. Tests conducted in the laboratory and on a real sphere show the validity of the proposed approach and implementation.
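A minimal sketch of the laser/camera seam-sensing idea, assuming OpenCV: the bright laser stripe is thresholded and its centroid offset from the image center gives a steering correction. The threshold and geometry are illustrative, not the robot's actual algorithm.

```python
# Hedged sketch of the weld-seam sensing idea: the laser stripe projected
# across the seam appears as a bright region in the camera image, and its
# lateral centroid gives an estimate of the seam center used to steer the
# robot. Threshold and geometry are illustrative only.
import cv2

def seam_center_offset(frame_bgr, laser_threshold=200):
    """Return the horizontal offset (pixels) of the laser stripe centroid
    from the image center; the sign tells the controller which way to correct."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, stripe = cv2.threshold(gray, laser_threshold, 255, cv2.THRESH_BINARY)
    moments = cv2.moments(stripe, binaryImage=True)
    if moments["m00"] == 0:
        return None                      # laser stripe not visible in this frame
    centroid_x = moments["m10"] / moments["m00"]
    return centroid_x - frame_bgr.shape[1] / 2.0
```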
Abstract:
Bilayer segmentation of live video in uncontrolled environments is an essential task for home applications in which the original background of the scene must be replaced, as in video chats or traditional videoconferencing. The main challenge in such conditions is to overcome the difficulties (e.g., illumination changes, distracting events such as elements moving in the background, and camera shake) that may occur while the video is being captured. This paper presents a survey of segmentation methods for background substitution applications, describes the main concepts, and identifies events that may cause errors. Our analysis shows that the most robust methods rely on specific devices (multiple cameras or sensors that generate depth maps) to aid the process. To achieve comparable results with conventional devices (monocular video cameras), most current research relies on energy minimization frameworks, in which temporal and spatial information is probabilistically combined with color and contrast information.
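As one accessible illustration of an energy-minimization segmentation that combines color models with contrast-sensitive smoothness, the sketch below uses OpenCV's GrabCut on a single frame; the surveyed video methods add temporal terms on top of this kind of formulation, and the initialization rectangle here is arbitrary.

```python
# Illustration of an energy-minimization segmentation (GrabCut in OpenCV):
# Gaussian-mixture color models for foreground/background are combined with
# contrast-sensitive smoothness and solved by graph cuts. This single-image
# example is only meant to make the idea concrete.
import cv2
import numpy as np

img = cv2.imread("frame.png")                  # hypothetical input frame
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
rect = (50, 50, img.shape[1] - 100, img.shape[0] - 100)  # rough foreground region

cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
foreground = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)

# `foreground` can then be composited over a substitute background.
```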
Abstract:
Dynamic texture is a recent field of investigation that has received growing attention from the computer vision community in recent years. Dynamic textures are moving textures in which the concept of self-similarity for static textures is extended to the spatiotemporal domain. In this paper, we propose a novel approach for dynamic texture representation that can be used for both texture analysis and segmentation. In this method, deterministic partially self-avoiding walks are performed in three orthogonal planes of the video in order to combine appearance and motion features. We validate our method on three applications of dynamic texture that present interesting challenges: recognition, clustering, and segmentation. Experimental results on these applications indicate that the proposed method improves dynamic texture representation compared to the state of the art.
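A highly simplified single-plane sketch of a deterministic partially self-avoiding ("tourist") walk is given below; the published method runs such walks on the three orthogonal planes of the video volume and builds joint appearance/motion histograms, which are not reproduced here. The memory size and the returned feature are illustrative.

```python
# Highly simplified sketch of a deterministic partially self-avoiding walk on
# a single image plane: from each position, move to the 8-neighbor with the
# most similar intensity that was not visited in the last `memory` steps.
# Histograms of walk lengths over many starting points form a texture feature.
import numpy as np

def tourist_walk(img, start, memory=2, max_steps=500):
    h, w = img.shape
    path, pos = [start], start
    for _ in range(max_steps):
        y, x = pos
        recent = set(path[-memory:])
        candidates = [(y + dy, x + dx)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0)
                      and 0 <= y + dy < h and 0 <= x + dx < w
                      and (y + dy, x + dx) not in recent]
        if not candidates:
            break
        pos = min(candidates, key=lambda p: abs(int(img[p]) - int(img[y, x])))
        path.append(pos)
    return len(path)   # walk length; its distribution characterizes the texture
```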
Abstract:
The human face provides useful information during interaction; therefore, any system integrating Vision-Based Human-Computer Interaction requires fast and reliable face and facial feature detection. Different approaches have focused on this ability, but only open-source implementations have been extensively used by researchers. A good example is the Viola-Jones object detection framework, which has been used frequently, particularly in the context of facial processing.
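The Viola-Jones framework is available, for example, as OpenCV's trained Haar cascades; the snippet below shows the standard usage with the bundled frontal-face model (the input filename is hypothetical).

```python
# Standard Viola-Jones face detection using OpenCV's bundled Haar cascade.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("photo.jpg")                        # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```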
Abstract:
In this paper, we experimentally study the combination of face and facial feature detectors to improve face detection performance. As suggested by recent face detection challenges, the face detection problem is still not solved. Face detectors traditionally fail in large-scale problems and/or when the face is occluded or different head rotations are present. The combination of face and facial feature detectors is evaluated on a public database. The results show an improvement in the positive detection rate while reducing the false detection rate. Additionally, we show that the integration of facial feature detectors provides useful information for pose estimation and face alignment.
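The sketch below illustrates the general principle of combining detectors, assuming OpenCV Haar cascades: a face hypothesis is kept only when an eye detector fires inside it, and the eye centers give a rough in-plane rotation for alignment. This is not the combination scheme evaluated in the paper.

```python
# Sketch of combining a face detector with a facial-feature (eye) detector:
# feature support confirms the face hypothesis and yields a rough roll angle.
import cv2
import numpy as np

face_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def verified_faces(gray):
    results = []
    for (x, y, w, h) in face_cc.detectMultiScale(gray, 1.1, 5):
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cc.detectMultiScale(roi, 1.1, 5)
        if len(eyes) >= 2:                           # feature support confirms the face
            (ex1, ey1, ew1, eh1), (ex2, ey2, ew2, eh2) = sorted(eyes, key=lambda e: e[0])[:2]
            c1 = (ex1 + ew1 / 2.0, ey1 + eh1 / 2.0)
            c2 = (ex2 + ew2 / 2.0, ey2 + eh2 / 2.0)
            roll = np.degrees(np.arctan2(c2[1] - c1[1], c2[0] - c1[0]))
            results.append(((x, y, w, h), roll))     # face box plus rough roll angle
    return results
```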
Abstract:
Nowadays, companies demand graduates able to work on multidisciplinary and collaborative projects. Hence, new educational methods are needed to support a more advanced society and progress towards a higher quality of life and sustainability. The University of the Basque Country belongs to the European Higher Education Area, which was created as a result of the Bologna process to ensure the connection and quality of European national educational systems. In this framework, this paper proposes an innovative teaching methodology developed for the "Robotics" course in the syllabus of the B.Sc. degree in Industrial Electronics and Automation Engineering. The methodology for Robotics learning is based on collaborative projects and aims to respond to the demands of a multidisciplinary and multilingual society.
Abstract:
Parliamentary websites have become one of the most important windows for citizens and the media to follow the activities of their legislatures and to hold parliaments to account. Therefore, most parliamentary institutions aim to provide new multimedia solutions capable of displaying video fragments of plenary activities on demand. This paper presents a multimedia system for parliamentary institutions that produces video fragments on demand through a website, with linked information and public feedback that help to explain the content shown in these fragments. A prototype implementation has been developed for the Canary Islands Parliament (Spain) and shows how traditional parliamentary streaming systems can be enhanced by the use of semantics and computer vision for video analytics...
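As a hedged sketch of one basic video-analytics building block such a system might use, the snippet below detects shot changes from frame-histogram differences with OpenCV so a plenary stream can be cut into candidate fragments; the analytics actually used in the prototype are not reproduced here.

```python
# Sketch of simple shot-boundary detection: a drop in the correlation between
# consecutive frame histograms suggests a cut where a fragment may start.
import cv2

def shot_boundaries(video_path, threshold=0.5):
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)],
                            [0], None, [64], [0, 180])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < threshold:
                boundaries.append(index)
        prev_hist, index = hist, index + 1
    cap.release()
    return boundaries
```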
Abstract:
In this work, local binary pattern based focus measures are presented. Local binary patterns (LBP) have been applied in computer vision tasks such as texture classification and face recognition. In applications where recognition is based on LBP, a computational saving can be achieved by also using LBP in the focus measures. The behavior of the proposed measures is studied to test whether they fulfill the properties required of focus measures, and a comparison with some well-known focus measures is then carried out in different scenarios.
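One plausible way to turn an LBP map into a focus score is sketched below, assuming scikit-image: the entropy of the uniform-LBP histogram tends to rise as the image gets sharper. This illustrates the idea only; the specific measures proposed in the paper are not reproduced.

```python
# Sketch of an LBP-based focus score: sharper images yield a richer spread of
# local binary patterns, so the entropy of the LBP histogram can serve as a
# focus measure. Parameters and the exact score are illustrative.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_focus_measure(img_gray, points=8, radius=1):
    lbp = local_binary_pattern(img_gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    hist = hist[hist > 0]
    return -np.sum(hist * np.log2(hist))   # higher entropy ~ better focus

# In an autofocus loop, the lens position maximizing this score would be chosen;
# when recognition already computes LBP, the same map can be reused, saving work.
```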