851 results for computer vision face recognition detection voice recognition sistemi biometrici iOS
Abstract:
Object detection is a fundamental task of computer vision that is utilized as a core part of a number of industrial and scientific applications, for example in robotics, where objects need to be correctly detected and localized prior to being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative) and (iii) the amount of spatial information used in the object model (model-free, using no spatial information in the object model, or model-based, with an explicit spatial model of the object). Although some existing methods report good performance in the detection of certain objects, the results tend to be application specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide information beyond object location, for example pose. The object class model, i.e. the appearance of the object parts and their spatial variance (constellation), is explicitly modelled in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed into part probabilities by an unsupervised Gaussian Mixture Model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the object parts is learned in an object canonical space that removes geometric variations from the part appearance model.
Robustness to pose variations is achieved by object pose quantization, which is more efficient than the previously used scale and orientation shifts in the Gabor feature space. The performance of the resulting generative object detector is characterized by high recall with low precision, i.e. the generative detector produces a large number of false positive detections. Thus a discriminative classifier is used to prune false positive candidate detections produced by the generative detector, improving its precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
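The appearance step described above, turning local features into part probabilities with an unsupervised GMM, can be sketched as follows. This is a minimal illustration with scikit-learn's standard EM-fitted `GaussianMixture`; the feature dimensions, component count, and random magnitude features standing in for complex-valued Gabor responses are all assumptions, not the thesis's randomized GMM.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical stand-in for Gabor feature magnitudes extracted at
# keypoints of a few positive training images (generative learning
# uses positive examples only).
features = rng.normal(size=(60, 8))

# Fit an unsupervised GMM; each mixture component plays the role of
# one "part appearance" cluster.
gmm = GaussianMixture(n_components=3, random_state=0).fit(features)

# predict_proba converts raw features into per-part probabilities,
# the soft part assignment used by a part detector.
part_probs = gmm.predict_proba(features)
print(part_probs.shape)  # (60, 3); each row sums to 1
```

In the detector proper, these per-part probabilities would then be combined with the mixture-of-2D-Gaussians spatial model to score candidate object locations.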
Abstract:
The current set of studies was conducted to examine the cross-race effect (CRE), a phenomenon commonly found in the face perception literature. The CRE is evident when participants display better own-race face recognition accuracy than other-race recognition accuracy (e.g. Ackerman et al., 2006). Typically the cross-race effect is attributed either to perceptual expertise (i.e., other-race faces are processed less holistically; Michel, Rossion, Han, Chung & Caldara, 2006) or to the social cognitive model (i.e., other-race faces are processed at the categorical level by virtue of being an out-group member; Hugenberg, Young, Bernstein, & Sacco, 2010). These effects may be mediated by differential attention. I investigated whether other-race faces are disregarded and, consequently, not remembered as accurately as own-race (in-group) faces. In Experiment 1, I examined how the magnitude of the CRE differed when participants learned individual faces sequentially versus when they learned multiple faces simultaneously in arrays comprising faces and objects. I also examined how the CRE differed when participants recognized individual faces presented sequentially versus in arrays of eight faces. Participants' recognition accuracy was better for own-race faces than other-race faces regardless of familiarization method. However, the difference between own- and other-race accuracy was larger when faces were familiarized sequentially in comparison to familiarization with arrays. Participants' response patterns during testing differed depending on the combination of familiarization and testing method. Participants had more false alarms for other-race faces than own-race faces if they learned faces sequentially (regardless of testing strategy); if participants learned faces in arrays, they had more false alarms for other-race faces than own-race faces if they were tested with sequentially presented faces.
These results are consistent with the perceptual expertise model in that participants were better able to use the full two seconds in the sequential task for own-race faces, but not for other-race faces. The purpose of Experiment 2 was to examine participants' attentional allocation in complex scenes. Participants were shown scenes comprising people in real places, but the head stimuli used in Experiment 1 were superimposed onto the bodies in each scene. Using a Tobii eyetracker, participants' looking time for both own- and other-race faces was evaluated to determine whether participants looked longer at own-race faces and whether individual differences in looking time correlated with individual differences in recognition accuracy. The results of this experiment demonstrated that although own-race faces were preferentially attended to in comparison to other-race faces, individual differences in looking-time biases towards own-race faces did not correlate with individual differences in own-race recognition advantages. These results are also consistent with perceptual expertise, as it seems that the role of attentional biases towards own-race faces is independent of the cognitive processing that occurs for own-race faces. Altogether, these results have implications for face perception tasks that are performed in the lab, how accurate people may be when remembering faces in the real world, and the accuracy and patterns of errors in eyewitness testimony.
Abstract:
Lexical processing among bilinguals is often affected by complex patterns of individual experience. In this paper we discuss the psychocentric perspective on language representation and processing, which highlights the centrality of individual experience in psycholinguistic experimentation. We discuss applications to the investigation of lexical processing among multilinguals and explore the advantages of using high-density experiments with multilinguals. High-density experiments are designed to co-index measures of lexical perception and production, as well as participant profiles. We discuss the challenges associated with the characterization of participant profiles and present a new data visualization technique that we term Facial Profiles. This technique is based on Chernoff faces, developed over 40 years ago. The Facial Profile technique seeks to overcome some of the challenges associated with the use of Chernoff faces, while maintaining the core insight that recoding multivariate data as facial features can engage the human face recognition system and thus enhance our ability to detect and interpret patterns within multivariate datasets. We demonstrate that Facial Profiles can code participant characteristics in lexical processing studies by recoding variables such as reading ability, speaking ability, and listening ability into iconically related relative sizes of the eye, mouth, and ear, respectively. The balance of ability in bilinguals can be captured by creating composite facial profiles or Janus Facial Profiles. We demonstrate the use of Facial Profiles and Janus Facial Profiles in the characterization of participant effects in the study of lexical perception and production.
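The recoding idea above, mapping ability scores onto relative facial-feature sizes, can be sketched as a simple function. The score range (0-100) and the linear scaling into [0.2, 1.0] are illustrative assumptions, not the authors' actual mapping.

```python
def facial_profile(reading, speaking, listening, lo=0.2, hi=1.0):
    """Map 0-100 ability scores to relative feature sizes in [lo, hi].

    Following the iconic mapping described in the abstract:
    reading -> eye size, speaking -> mouth size, listening -> ear size.
    """
    def scale(score):
        clamped = max(0.0, min(float(score), 100.0))
        return lo + (hi - lo) * clamped / 100.0

    return {
        "eye_size": scale(reading),
        "mouth_size": scale(speaking),
        "ear_size": scale(listening),
    }

# A hypothetical participant: strong reader, average speaker.
profile = facial_profile(reading=90, speaking=50, listening=70)
print(profile)
```

A plotting layer (e.g. drawing ellipses sized by these values) would then render one face per participant, and a Janus profile could place two such half-faces side by side, one per language.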
Abstract:
This thesis takes liberation to be supreme knowledge of the unity underlying the world of multiplicity. This knowledge is always already attained, so all are eternally liberated, but it is unrecognized in ordinary experience. We will look at the Bhagavad-Gītā to consider why this is so. When Arjuna saw Kṛṣṇa’s imperishable Self, he saw all beings standing as one in Kṛṣṇa; thus, he was confronted by supreme knowledge. But he was overwhelmed with fear and confusion and took refuge in blindness. I argue that Arjuna was not prepared to face recognition because he was unpractised in non-attachment. Attached to his subjectivity, he trembled in the face of unity. The supreme goal is standing firm in recognition while living in the world.
Abstract:
The contemporary zeitgeist on face recognition suggests that the recognition process relies essentially on the processing of the distances between the internal features of the face. It is nonetheless surprising to note that this hypothesis has never been evaluated directly in the literature. To do so, 515 photographs of faces were annotated in order to assess the information conveyed by such distances. The results suggest that previous studies that manipulated these distances presented four times more information than the inter-feature distances of the real world. Moreover, it appears that human observers can make little use of the inter-feature distances of real faces to recognize their peers at several viewing distances (maximum percent correct of 65%). What is more, observers' performance is almost perfectly restored when inter-feature distance information is unusable but observers can use the other sources of information in real faces. We conclude that facial cues other than inter-feature distances, such as feature shape and skin properties, convey the information used by the visual system to perform face recognition.
Abstract:
This thesis was carried out under a joint-supervision (cotutelle) agreement with the Institut National Polytechnique de Grenoble (France). The research was conducted in the 3D vision laboratory (DIRO, UdM) and the PERCEPTION-INRIA laboratory (Grenoble).
Abstract:
The objective of this research is the creation of an online platform for examining individual differences in visual information processing strategies across different face categorization tasks. The purpose of such a platform is to collect data from geographically dispersed participants whose face recognition abilities vary. Indeed, numerous studies have shown that there is great variability across the spectrum of face recognition abilities, ranging from developmental prosopagnosia (Susilo & Duchaine, 2013), a face recognition deficit in the absence of brain injury, to super-recognizers, individuals whose face recognition abilities are above average (Russell, Duchaine & Nakayama, 2009). Between these two extremes, face recognition abilities in the normal population vary. To demonstrate the feasibility of creating such a platform for individuals of widely varying abilities, we adapted a celebrity face identification task using the Bubbles method (Gosselin & Schyns, 2001) and recruited 14 control subjects and one subject with developmental prosopagnosia. We were able to demonstrate the importance of the eyes and the mouth in face identification among "normal" subjects. The best participants, by contrast, seem to rely mostly on the left side of the face (the left eye and the left side of the mouth).
Abstract:
Segmentation of medical imagery is a challenging problem due to the complexity of the images, as well as to the absence of anatomical models that fully capture the possible deformations in each structure. Brain tissue is a particularly complex structure, and its segmentation is an important step for studies in temporal change detection of morphology, as well as for 3D visualization in surgical planning. In this paper, we present a method for segmentation of brain tissue from magnetic resonance images that is a combination of three existing techniques from the computer vision literature: EM segmentation, binary morphology, and active contour models. Each of these techniques has been customized for the problem of brain tissue segmentation in such a way that the resulting method is more robust than its components. Finally, we present the results of a parallel implementation of this method on IBM's Power Visualization System supercomputer for a database of 20 brain scans, each with 256x256x124 voxels, and validate the results against segmentations generated by neuroanatomy experts.
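The first two stages of the combination described above can be sketched on a synthetic 2D slice: an EM-fitted intensity GMM for the initial segmentation, followed by binary morphology to clean it up. The image, class count, and opening parameters are illustrative assumptions; the paper's third stage (an active contour refining the boundary) is only noted in a comment.

```python
import numpy as np
from scipy import ndimage
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Synthetic "slice": a bright tissue disc on a darker noisy background.
img = rng.normal(0.2, 0.05, size=(64, 64))
yy, xx = np.mgrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
img[disc] += 0.6

# Stage 1: EM segmentation -- a two-class intensity GMM fitted by EM.
gmm = GaussianMixture(n_components=2, random_state=0).fit(img.reshape(-1, 1))
labels = gmm.predict(img.reshape(-1, 1)).reshape(img.shape)
bright = int(np.argmax(gmm.means_.ravel()))  # component with higher mean
mask = labels == bright

# Stage 2: binary morphology -- opening removes small misclassified specks.
mask = ndimage.binary_opening(mask, iterations=2)

# Stage 3 in the paper would refine this boundary with an active contour.
print(bool(mask[32, 32]), bool(mask[2, 2]))
```

On real MR data each stage needs the kind of customization the paper describes (more tissue classes, anisotropic voxels, bias-field effects), but the pipeline shape is the same.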
Abstract:
Mosaics have been commonly used as visual maps for undersea exploration and navigation. The position and orientation of an underwater vehicle can be calculated by integrating the apparent motion of the images which form the mosaic. A feature-based mosaicking method is proposed in this paper. The creation of the mosaic is accomplished in four stages: feature selection and matching, detection of the points describing the dominant motion, homography computation, and mosaic construction. In this work we demonstrate that the use of colour and texture as discriminative properties of the image can improve, to a large extent, the accuracy of the constructed mosaic. The system is able to provide 3D metric information concerning the vehicle motion using knowledge of the intrinsic parameters of the camera while integrating the measurements of an ultrasonic sensor. The method has been tested with real images acquired by the GARBI underwater vehicle.
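The homography-computation stage above can be sketched with the direct linear transform (DLT) on matched feature points; the points and transform here are synthetic assumptions, and a real mosaicking pipeline would add robust outlier rejection (e.g. RANSAC) in the dominant-motion stage.

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (points (N, 2))."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear equations in H's entries.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)     # null-space vector = H up to scale
    return H / H[2, 2]

# Synthetic check: points related by a known scale + translation.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.3]])
true_H = np.array([[1.2, 0.0, 3.0],
                   [0.0, 1.2, -2.0],
                   [0.0, 0.0, 1.0]])
pts = np.hstack([src, np.ones((5, 1))]) @ true_H.T
dst = pts[:, :2] / pts[:, 2:]

H = dlt_homography(src, dst)
print(np.round(H, 3))
```

Composing the frame-to-frame homographies then maps every image into the mosaic's common reference frame, which is how the vehicle's apparent motion is integrated.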
Abstract:
We present a computer vision system that associates omnidirectional vision with structured light with the aim of obtaining depth information for a 360-degree field of view. The approach proposed in this article combines an omnidirectional camera with a panoramic laser projector. The article shows how the sensor is modelled, and its accuracy is demonstrated by means of experimental results. The proposed sensor provides useful information for robot navigation applications, pipe inspection, 3D scene modelling, etc.
Abstract:
In this paper we face the problem of positioning a camera attached to the end-effector of a robotic manipulator so that it becomes parallel to a planar object. This problem has been treated for a long time in visual servoing. Our approach is based on attaching several laser pointers to the camera, with their configuration designed to produce a suitable set of visual features. The aim of using structured light is not only to ease the image processing and to allow low-textured objects to be handled, but also to produce a control scheme with desirable properties such as decoupling, stability, good conditioning and a good camera trajectory.
Abstract:
In the search for new sensor systems and new methods for underwater vehicle positioning based on visual observation, this paper presents a computer vision system based on coded light projection. 3D information is extracted from an underwater scene, and this information is used to test obstacle avoidance behaviour. In addition, the main ideas for achieving stabilisation of the vehicle in front of an object are presented.
Abstract:
Scene modelling is key to a wide range of applications, from map generation to augmented reality. This thesis presents a complete solution for the creation of textured 3D models. First, a sequential Structure from Motion method is presented, in which the 3D model of the environment is updated as new visual information is acquired. The proposal is more accurate and robust than the state of the art. An online method based on visual bag-of-words has also been developed for efficient loop detection. Being a fully sequential and automatic technique, it allows drift reduction, improving navigation and map building. In order to build maps of large areas, a 3D model simplification algorithm oriented to online applications is proposed. The efficiency of the proposals has been compared with that of other methods using several underwater and terrestrial datasets.
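The loop-detection idea mentioned above, comparing images through visual bag-of-words histograms, can be sketched as follows. The vocabulary size, word indices, and similarity threshold are illustrative assumptions, not the thesis's actual parameters.

```python
import numpy as np

def bow_histogram(word_ids, vocab_size):
    """Quantized local descriptors -> normalized visual-word histogram."""
    h = np.bincount(word_ids, minlength=vocab_size).astype(float)
    return h / h.sum()

def cosine(a, b):
    """Cosine similarity between two histograms."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

vocab_size = 8
# Hypothetical visual-word indices for three images; the third revisits
# the place seen in the first, so their word distributions are similar.
img1 = bow_histogram(np.array([0, 0, 1, 2, 2, 2, 5]), vocab_size)
img2 = bow_histogram(np.array([3, 4, 4, 6, 7, 7, 7]), vocab_size)
img3 = bow_histogram(np.array([0, 1, 2, 2, 2, 2, 5]), vocab_size)

# A loop closure is hypothesized when similarity exceeds a threshold.
print(cosine(img1, img3) > 0.9, cosine(img2, img3) > 0.9)
```

Once a loop is detected, the accumulated drift can be redistributed over the trajectory, which is what improves navigation and map building in the sequential pipeline.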
Abstract:
Our research falls within the dynamic conception of intelligence, and specifically within the processes that make up cerebral processing in the information-integration model described by Das, Kirby and Jarman (1979). The two cerebral processes that form the basis of intelligent behaviour are simultaneous processing and sequential processing; they are the two main information-processing strategies. Any kind of stimulus can be processed either sequentially (seriation, verbal, analysis) or simultaneously (global, visual, synthesis). Based on the literature review, and convinced that by approaching the peculiarities of information processing we move closer to understanding the process that leads to intelligent behaviour, and therefore to learning, we formulate the following working hypothesis: in preschool children (between 3 and 6 years of age), both types of processing will be present and will vary as a function of age, sex, attention, learning difficulties, language problems, bilingualism, sociocultural level, manual dominance, mental level and the presence of pathology. The differences that emerge will allow us to formulate criteria and guidelines for educational intervention. Our objectives are to measure processing in preschool children from the Girona region, to verify the relationship of each type of processing with the aforementioned variables, to check on the basis of our results whether a parallel can be established between processing and the contributions of the localizationist conception of cerebral functions, and to derive guidelines for pedagogical intervention. As for the method, we selected a representative sample of the boys and girls enrolled in the public schools of the Girona region during the 1992/93 school year, using stratified random cluster sampling.
The actual sample size is two hundred and sixty-one subjects. The instruments used were the following: the Kaufman & Kaufman (1983) K-ABC test for the assessment of processing; a questionnaire addressed to parents to collect the relevant information; interviews with the teachers; and the Goodenough Draw-a-Person test. Regarding the results of our research, and in relation to the proposed objectives, we note the following facts. In preschool children aged between three and six years, the existence of the two types of cerebral processing is confirmed, with neither predominating over the other; both types of processing act in an interrelated way. Both types of processing improve with age, but differences derived from mental level are observed: a normal mental level is associated with an improvement in both types of processing, whereas with a deficient mental level essentially only sequential processing improves. Moreover, simultaneous processing is more related to complex cognitive functions and is more dependent on mental level than sequential processing. Both learning difficulties and language problems predominate in children with a significant imbalance between the two types of processing; learning difficulties are more related to a deficiency in simultaneous processing, whereas language problems are more related to a deficiency in sequential processing. Low sociocultural levels are associated with lower results in both types of processing. On the other hand, significant sequential processing is more frequent among bilingual children.
The Draw-a-Person test behaves as a marker of simultaneous processing, and attentional level as a marker of the severity of the problem affecting processing, in the following order: deficient mental level, learning difficulties and language problems. Attentional deficiencies are linked to deficiencies in simultaneous processing and to the presence of pathology. As for manual dominance, no differences in processing are observed. Finally, with respect to sex, we can only report that when one of the two types of processing is deficient, and there is therefore a processing imbalance, the number of affected boys significantly exceeds the number of affected girls.
Abstract:
The human visual ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the knowledge of the most common objects that we acquire through living. Nowadays, modelling the behaviour of our brain is still a fiction, which is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A lot of research in robot vision is devoted to obtaining 3D information of the surrounding scene. Most of this research is based on modelling human stereopsis by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is being studied at present, and a lot of work will surely be done in the future. This fact allows us to affirm that this topic is one of the most interesting ones in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projections in both camera image planes. However, before inferring 3D information, the mathematical models of both cameras have to be known. This step is known as camera calibration and is broadly described in the thesis. Perhaps the most important problem in stereo vision is the determination of the pair of homologous points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. The epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, it does not solve the problem entirely, as many considerations have to be taken into account. As an example, we have to consider points without correspondence, due to a surface occlusion or simply to a projection outside the camera's field of view.
The interest of the thesis is focused on structured light, which has been considered one of the most frequently used techniques for reducing the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern and its image as captured by a sensor. The deformations between the pattern projected onto the scene and the one captured by the camera make it possible to obtain three-dimensional information about the illuminated scene. This technique has been widely used in applications such as 3D object reconstruction, robot navigation, quality control, and so on. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matching, which forces the use of computationally expensive algorithms to search for the correct matches. In recent years, another structured light technique has increased in importance. This technique is based on the codification of the light projected onto the scene so that it can be used as a tool to obtain a unique match: each token of light is imaged by the camera, and we only have to read its label (decode the pattern) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light, together with a survey on coded structured light, are reviewed and discussed. The work carried out in the frame of this thesis has made it possible to present a new coded structured light pattern that solves the correspondence problem uniquely and robustly. It is unique, as each token of light is coded by a different word, which removes the problem of multiple matching. It is robust, since the pattern has been coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader can see examples of 3D measurement of static objects, as well as the more complicated measurement of moving objects.
The technique can be used in both cases, as the pattern is coded by a single projection shot, so it can be applied in several robot vision applications. Our interest is focused on the mathematical study of the camera and pattern projector models. We are also interested in how these models can be obtained by calibration, and how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we started from the assumption that the correspondence points could be well segmented from the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from a) image acquisition; b) image enhancement, filtering and processing; and c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis starts at the next step, usually known as depth perception or 3D measurement.
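The stereo-vision principle described in this abstract, recovering the 3D position of a point from its two projections once both camera models are known, can be sketched with linear (DLT) triangulation. The intrinsic matrix and baseline below are illustrative assumptions; a calibrated projector can take the place of the second camera in the structured-light setting.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two views."""
    # Each view contributes two equations: u*(p3.X) - p1.X = 0, etc.
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # homogeneous solution
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point to pixel coordinates with camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two simple calibrated cameras: identical intrinsics, the second one
# shifted along the x axis (a stereo baseline of 0.5 units).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), [[-0.5], [0.0], [0.0]]])

X_true = np.array([0.2, -0.1, 4.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.round(X_hat, 6))
```

In coded structured light, the decoded pattern label supplies the correspondence directly, so this triangulation step can be applied without searching along epipolar lines.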