983 results for Face detection


Relevance:

60.00%

Publisher:

Abstract:

The fundamental senses of the human body are vision, hearing, touch, taste and smell. These senses provide our relationship with the environment. Vision serves as the sensory receptor responsible for obtaining information from the outside world and sending it to the brain, and the gaze reflects a person's attention, intention and interest. Therefore, estimating gaze direction with computational tools provides a promising alternative for improving human-computer interaction, especially for people with motor disabilities. The objective of this work is to present a non-intrusive system that uses only a personal computer and a low-cost webcam, combined with digital image processing techniques, wavelet transforms and pattern recognition methods such as artificial neural networks, resulting in a complete system that covers everything from image acquisition (including face detection and eye tracking) to the estimation of gaze direction. The obtained results show the feasibility of the proposed system, as well as several advantageous features.
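A minimal sketch of such a webcam pipeline, assuming OpenCV Haar cascades for face and eye localisation, PyWavelets for the wavelet features and a scikit-learn neural network as classifier; these library choices are assumptions for illustration, not the authors' implementation.

```python
# Sketch: face detection -> eye crop -> 2D wavelet features -> neural-network
# gaze classifier. Training data and labels below are hypothetical.
import cv2
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def eye_wavelet_features(gray_frame):
    """Detect the face, crop the first detected eye and return wavelet features."""
    faces = face_cascade.detectMultiScale(gray_frame, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face_roi = gray_frame[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(face_roi)
    if len(eyes) == 0:
        return None
    ex, ey, ew, eh = eyes[0]
    eye = cv2.resize(face_roi[ey:ey + eh, ex:ex + ew], (32, 32))
    # Single-level 2D Haar decomposition; the detail coefficients encode the
    # pupil/iris position inside the eye region.
    cA, (cH, cV, cD) = pywt.dwt2(eye.astype(float), "haar")
    return np.concatenate([cA.ravel(), cH.ravel(), cV.ravel(), cD.ravel()])

# Hypothetical training data: feature vectors and gaze labels ("left", "centre", "right").
# X_train, y_train = ...
gaze_classifier = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
# gaze_classifier.fit(X_train, y_train)
# direction = gaze_classifier.predict([eye_wavelet_features(frame_gray)])
```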

Relevance:

60.00%

Publisher:

Abstract:

Learning to read and write is, at an early stage, the process of transferring the sound form of spoken language to the graphical form of writing; in our alphabetical writing system, letters are graphical representations at the level of the phoneme. For this representation to occur, the individual must already be able, in some way, to perceive and manipulate the different sound segments of the word. This capacity of perception directed at the segments of the word is called Phonological Awareness. Thus, the objective of this study was to verify the performance of schoolchildren from 1st to 4th grade, with and without learning difficulties, in tests of Phonological Awareness. Eighty schoolchildren of both sexes, with a mean age of 9 years and 3 months, participated in this study and were assessed using only the phonological tasks of the Phonological Awareness Protocol (CIELO, 2002). The data received a quanti-qualitative treatment from which inferences were drawn. Statistically significant results occurred in the tasks of Realism, Syllable Detection, Phoneme Detection, Phonemic Synthesis and Phonemic Reversal. Based on the results, we observed that children without learning difficulties performed better on all the tasks mentioned above.

Relevance:

60.00%

Publisher:

Abstract:

[ES]This paper describes some simple but useful computer vision techniques for human-robot interaction. First, an omnidirectional camera setting is described that can detect people in the surroundings of the robot, giving their angular positions and a rough estimate of the distance. The device can be easily built with inexpensive components. Second, we comment on a color-based face detection technique that can alleviate skin-color false positives. Third, a simple head nod and shake detector is described, suitable for detecting affirmative/negative, approval/disapproval, understanding/disbelief head gestures.
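A small sketch of a colour-based skin mask, one common way to cut down skin-colour false positives around a face detector; the YCrCb thresholds below are widely used heuristic values, not the ones from this paper.

```python
# Skin-colour segmentation in YCrCb; candidates whose bounding box is mostly
# non-skin can then be discarded.
import cv2
import numpy as np

def skin_mask(bgr_frame):
    """Return a binary mask of likely skin pixels in a BGR frame."""
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Morphological opening removes small isolated skin-coloured blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# A face candidate can then be kept only if enough of its box is skin:
# skin_ratio = skin_mask(frame)[y:y+h, x:x+w].mean() / 255.0
```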

Relevance:

60.00%

Publisher:

Abstract:

Video quality assessment is still needed to define the criteria that characterize a signal meeting the viewing requirements imposed by the user. New technologies, such as stereoscopic 3D video or formats beyond high definition, impose new criteria that must be analysed to obtain the highest possible user satisfaction. Among the problems detected during this doctoral thesis, phenomena were identified that affect different phases of the audiovisual production chain and a variety of content types. First, the content generation process must be controlled through parameters that prevent visual discomfort and, consequently, visual fatigue, especially for stereoscopic 3D content, whether animated or live action. On the other hand, quality assessment in the video compression stage relies on metrics that are sometimes not adapted to the user's perception. Psychovisual models and visual attention maps make it possible to weight the areas of the image so that more importance is given to the pixels the user is most likely to focus on. These two blocks are related through the definition of the term saliency: the capacity of the visual system to characterize a viewed image by weighting the areas that are most attractive to the human eye. In the generation of stereoscopic content, saliency refers mainly to the depth simulated by the optical illusion, measured as the distance from the virtual object to the human eye. In two-dimensional video, however, saliency is not based on depth but on additional elements such as motion, level of detail, pixel position and the presence of faces, which are the basic factors of the visual attention model developed in this work. To identify the characteristics of a stereoscopic video sequence most likely to generate visual discomfort, the extensive literature on the subject was reviewed and preliminary subjective tests were carried out with users. The conclusion was that discomfort occurs when there is an abrupt change in the distribution of simulated depths in the image, in addition to other degradations such as the so-called "window violation". Further subjective tests, focused on analysing these effects with different depth distributions, were used to specify the parameters that define such images. The results show that abrupt changes occur in scenes with high motion and large negative disparities, which interfere with the accommodation and vergence processes of the human eye and increase the time the crystalline lens needs to focus. To improve quality metrics through models adapted to the human visual system, additional subjective tests helped determine the importance of each factor in masking a given degradation. The results show a slight improvement when weighting masks and visual attention are applied, bringing objective quality parameters closer to the response of the human eye.
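An illustrative sketch of the attention-weighting idea described above: motion, detail, centre bias and detected faces are combined into a saliency map that weights a pixel-wise distortion measure. The cue weights and the use of squared error are assumptions for illustration, not the thesis' actual model.

```python
# Attention-weighted distortion: build a saliency map from simple cues, then
# weight a per-pixel error map with it.
import cv2
import numpy as np

def attention_map(prev_gray, gray, face_boxes):
    h, w = gray.shape
    # Motion cue: magnitude of the frame difference.
    motion = cv2.absdiff(gray, prev_gray).astype(float)
    # Detail cue: local gradient/Laplacian magnitude.
    detail = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
    # Centre bias: Gaussian weighting of pixel positions.
    ys, xs = np.mgrid[0:h, 0:w]
    centre = np.exp(-(((xs - w / 2) / (0.3 * w)) ** 2 + ((ys - h / 2) / (0.3 * h)) ** 2))
    # Face cue: boost detected face regions.
    faces = np.zeros((h, w))
    for (x, y, fw, fh) in face_boxes:
        faces[y:y + fh, x:x + fw] = 1.0
    def norm(m):
        return m / (m.max() + 1e-9)
    sal = 0.3 * norm(motion) + 0.3 * norm(detail) + 0.2 * centre + 0.2 * faces
    return sal / (sal.sum() + 1e-9)

def weighted_mse(reference, distorted, saliency):
    """Mean squared error where each pixel is weighted by its saliency."""
    err = (reference.astype(float) - distorted.astype(float)) ** 2
    return float((err * saliency).sum())
```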

Relevance:

60.00%

Publisher:

Abstract:

For more than 20 years, many research groups have studied techniques for automatic facial expression recognition. In recent years, advances in these methodologies have made it possible to quickly detect the faces present in an image and to provide expression classification algorithms. In this project, a study of the state of the art in automatic emotion recognition is carried out to review the different existing methods for facial analysis and emotion recognition. To allow these and future methods to be compared, a modular and extensible tool is implemented. It integrates a feature extraction method that obtains 19 facial feature points, and two expression classification methods: one based on comparing the displacements of the facial points, and one based on detecting specific movements called Action Units. The Cohn-Kanade+ and JAFFE databases, both freely available to the scientific community, are used for training and evaluating the system. An evaluation of these methods is then performed using different parameters, databases and numbers of emotions. Finally, conclusions are drawn from the work and its evaluation, proposing necessary improvements and future research.
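A minimal sketch of the displacement-based classification idea: expressions are classified by how facial feature points move from a neutral frame to an apex frame. The nearest-neighbour rule and the landmark layout assumed below are illustrative, not the project's exact method.

```python
# Classify an expression by comparing normalised landmark displacements against
# per-emotion template displacement vectors.
import numpy as np

def displacement_vector(neutral_pts, apex_pts):
    """Per-point displacements, normalised by inter-ocular distance (points 0 and 1
    are assumed to be the eye centres in this hypothetical landmark layout)."""
    neutral_pts = np.asarray(neutral_pts, dtype=float)
    apex_pts = np.asarray(apex_pts, dtype=float)
    iod = np.linalg.norm(neutral_pts[0] - neutral_pts[1]) + 1e-9
    return ((apex_pts - neutral_pts) / iod).ravel()

def classify_expression(query_disp, templates):
    """templates: dict mapping emotion label -> mean displacement vector."""
    return min(templates, key=lambda label: np.linalg.norm(query_disp - templates[label]))

# templates = {"happiness": mean_happy_disp, "surprise": mean_surprise_disp, ...}
# label = classify_expression(displacement_vector(neutral, apex), templates)
```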

Relevance:

60.00%

Publisher:

Abstract:

Nowadays, many applications use digital images: face recognition to detect and tag people in photographs, security control, and many smart-city applications such as speed control on roads and highways or traffic-light cameras that detect drivers running a red light. Digital images are also used in medicine, for example X-rays and scanners. These applications depend on the quality of the image obtained. A good camera is expensive, and the image obtained also depends on external factors such as light. For these applications to work properly, image enhancement is as important as, for example, a good face detection algorithm. Image enhancement can also be used on ordinary photographs, for pictures taken in bad light conditions, or simply to improve the contrast of an image; some smartphone applications let users apply filters or change the brightness, colour or contrast of their pictures. This project compares four different techniques for image enhancement; after applying one of them, an image makes better use of the whole available dynamic range. Some of the algorithms are designed for grey-scale images and others for colour images. Matlab is used to develop and present the final results. The algorithms are the Successive Means Quantization Transform (SMQT), histogram equalization (both the built-in Matlab function and our own implementation) and the V transform. In conclusion, the histogram equalization algorithm is the simplest of all, yields a wide variability of grey levels and is not suitable for colour images. The V transform algorithm is a good option for colour images; it is linear and requires low computational power. The SMQT algorithm is non-linear, insensitive to gain and bias, and can extract the structure of the data.
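The project above works in Matlab; as a language-neutral illustration of the simplest of the compared techniques, here is a plain NumPy sketch of histogram equalization for an 8-bit grey-scale image, showing how the dynamic range gets stretched.

```python
# Classical histogram equalization via the normalised cumulative distribution.
import numpy as np

def equalize_histogram(gray):
    """Histogram-equalize a uint8 grey-scale image."""
    gray = np.asarray(gray, dtype=np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each grey level through the normalised cumulative distribution.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-9) * 255).astype(np.uint8)
    return lut[gray]
```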

Relevance:

60.00%

Publisher:

Abstract:

[EN]This paper describes an approach for face detection and selection of frontal views for further processing. The approach, based on color detection and the application of a symmetry operator, is integrated in an Active Vision System, offering promising results by making use of some opportunistic skills.
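A hedged sketch of the symmetry intuition: a roughly frontal face should look similar to its horizontal mirror image. The similarity measure and threshold below are illustrative assumptions, not the operator used in the paper.

```python
# Score left/right symmetry of a face crop and use it to keep frontal views.
import cv2
import numpy as np

def symmetry_score(face_gray):
    """Return a 0..1 score of left/right symmetry for a grey-scale face crop."""
    face = cv2.resize(face_gray, (64, 64)).astype(float)
    mirrored = face[:, ::-1]
    diff = np.abs(face - mirrored).mean()
    return 1.0 - diff / 255.0

def looks_frontal(face_gray, threshold=0.85):
    return symmetry_score(face_gray) >= threshold
```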

Relevance:

60.00%

Publisher:

Abstract:

We present Dithen, a novel computation-as-a-service (CaaS) cloud platform specifically tailored to the parallel execution of large-scale multimedia tasks. Dithen handles the upload/download of both multimedia data and executable items, the assignment of compute units to multimedia workloads, and the reactive control of the available compute units to minimize the cloud infrastructure cost under deadline-abiding execution. Dithen combines three key properties: (i) the reactive assignment of individual multimedia tasks to available computing units according to availability and predetermined time-to-completion constraints; (ii) optimal resource estimation based on Kalman-filter estimates; (iii) the use of additive increase multiplicative decrease (AIMD) algorithms (well known as the resource management mechanism of the transmission control protocol) for the control of the number of units servicing workloads. The deployment of Dithen over Amazon EC2 spot instances is shown to be capable of processing more than 80,000 video transcoding, face detection and image processing tasks (equivalent to the processing of more than 116 GB of compressed data) for less than $1 in billing cost from EC2. Moreover, the proposed AIMD-based control mechanism, in conjunction with the Kalman estimates, is shown to provide for more than 27% reduction in EC2 spot instance cost against methods based on reactive resource estimation. Finally, Dithen is shown to offer a 38% to 500% reduction of the billing cost against the current state-of-the-art in CaaS platforms on Amazon EC2 (Amazon Lambda and Amazon Autoscale). A baseline version of Dithen is currently available at dithen.com.
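A hedged sketch of the control idea: an AIMD rule scales the number of active compute units, fed by a minimal scalar Kalman filter that tracks per-unit throughput. Gains, noise variances and the decision rule below are illustrative assumptions, not Dithen's actual parameters.

```python
# AIMD scaling of compute units driven by a Kalman estimate of throughput.

class ScalarKalman:
    """Tracks a slowly varying scalar (e.g. tasks completed per unit per minute)."""
    def __init__(self, initial=1.0, process_var=0.01, measure_var=0.25):
        self.x, self.p = initial, 1.0
        self.q, self.r = process_var, measure_var

    def update(self, measurement):
        self.p += self.q                       # predict
        k = self.p / (self.p + self.r)         # Kalman gain
        self.x += k * (measurement - self.x)   # correct
        self.p *= (1.0 - k)
        return self.x

def aimd_step(units, backlog, deadline_minutes, throughput, add=1, mult=0.5, min_units=1):
    """Additively add units while the deadline is at risk, multiplicatively
    release them once the estimated capacity comfortably covers the backlog."""
    needed_rate = backlog / max(deadline_minutes, 1e-9)
    capacity = units * throughput
    if capacity < needed_rate:
        return units + add                      # additive increase
    return max(min_units, int(units * mult))    # multiplicative decrease

# Example control loop (measurements would come from the running cluster):
kf, units = ScalarKalman(), 4
for observed_rate_per_unit in [0.8, 1.1, 0.9, 1.0]:
    throughput = kf.update(observed_rate_per_unit)
    units = aimd_step(units, backlog=120, deadline_minutes=60, throughput=throughput)
```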

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a new method of eye localisation and face segmentation for use in a face recognition system. By using two near infrared light sources, we have shown that the face can be coarsely segmented, and the eyes can be accurately located, increasing the accuracy of the face localisation and improving the overall speed of the system. The system is able to locate both eyes within 25% of the eye-to-eye distance in over 96% of test cases.
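A small sketch of the evaluation criterion quoted above: an estimate counts as correct when its error is within 25% of the true eye-to-eye distance. The function below is an illustrative reading of that criterion, not the authors' code.

```python
# Check whether both predicted eye positions fall within a fraction of the
# ground-truth inter-ocular distance.
import numpy as np

def eyes_within_tolerance(pred_left, pred_right, true_left, true_right, tol=0.25):
    pred_left, pred_right = np.asarray(pred_left, float), np.asarray(pred_right, float)
    true_left, true_right = np.asarray(true_left, float), np.asarray(true_right, float)
    eye_distance = np.linalg.norm(true_right - true_left)
    err_left = np.linalg.norm(pred_left - true_left)
    err_right = np.linalg.norm(pred_right - true_right)
    return max(err_left, err_right) <= tol * eye_distance

# eyes_within_tolerance((100, 120), (162, 121), (101, 119), (160, 120))  -> True
```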

Relevance:

40.00%

Publisher:

Abstract:

In recent years face recognition systems have been applied in various useful applications, such as surveillance, access control, criminal investigations, law enforcement, and others. However, face biometric systems can be highly vulnerable to spoofing attacks, where an impostor tries to bypass the face recognition system using a photo or video sequence. In this paper a novel liveness detection method, based on the 3D structure of the face, is proposed. By processing the 3D curvature of the acquired data, the proposed approach allows a biometric system to distinguish a real face from a photo, increasing the overall performance of the system and reducing its vulnerability. In order to test the real capability of the methodology, a 3D face database has been collected simulating spoofing attacks, i.e. using photographs instead of real faces. The experimental results show the effectiveness of the proposed approach.
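A hedged sketch of the underlying intuition: the depth samples of a printed photo lie close to a plane, while a real face deviates from any fitted plane. The least-squares plane fit and the threshold below are illustrative assumptions, not the curvature analysis of the paper.

```python
# Planarity test over 3D face points as a crude photo-vs-face discriminator.
import numpy as np

def planarity_residual(points_xyz):
    """RMS residual of 3D points (N x 3) against a best-fit plane z = ax + by + c."""
    pts = np.asarray(points_xyz, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A @ coeffs
    return float(np.sqrt(np.mean(residuals ** 2)))

def looks_like_real_face(points_xyz, threshold_mm=3.0):
    """A flat (photo-like) surface has a small residual; a real face a larger one."""
    return planarity_residual(points_xyz) > threshold_mm
```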

Relevance:

40.00%

Publisher:

Abstract:

[EN]The human face provides useful information during interaction; therefore, any system integrating Vision-Based Human Computer Interaction requires fast and reliable face and facial feature detection. Different approaches have focused on this ability, but only open source implementations have been used extensively by researchers. A good example is the Viola–Jones object detection framework, which has been used frequently, particularly in the context of facial processing.
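OpenCV's Haar cascade classifier is one of the open-source detectors in the Viola–Jones tradition that the text refers to; the minimal example below detects faces in a single image. The file name "group.jpg" is a placeholder.

```python
# Viola-Jones-style face detection with an OpenCV Haar cascade.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("group.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# scaleFactor and minNeighbors trade detection rate against false positives.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("group_faces.jpg", image)
```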

Relevance:

40.00%

Publisher:

Abstract:

A new technology is proposed as a solution to the problem of unintentional face detection and recognition in pictures, allowing the individuals who appear in them to express their privacy preferences through the use of different tags. Existing methods for face de-identification have mostly been ad hoc solutions that only provide an absolute, binary treatment of privacy, such as pixelation or a bar mask. As the number of social networks and of their users increases, our privacy preferences may become more complex, making these absolute binary solutions obsolete. The proposed technology overcomes this problem by embedding information in a tag that is placed close to the face without being disruptive. Through a decoding method, the tag provides the preferences that will be applied to the images in later stages.
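An illustrative sketch of the final stage described above: once a privacy tag has been decoded, the stated preference is applied to the face region. The preference names, the decoder and the operations chosen are hypothetical, shown only to illustrate going beyond a binary hide/show choice.

```python
# Apply a decoded privacy preference to a face bounding box (x, y, w, h).
import cv2

def apply_privacy_preference(image, face_box, preference):
    x, y, w, h = face_box
    roi = image[y:y + h, x:x + w]
    if preference == "blur":
        image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (31, 31), 0)
    elif preference == "pixelate":
        small = cv2.resize(roi, (8, 8), interpolation=cv2.INTER_LINEAR)
        image[y:y + h, x:x + w] = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
    # "show" (or any unknown preference) leaves the face untouched.
    return image

# preference = decode_tag(tag_region)   # hypothetical decoder returning e.g. "pixelate"
# image = apply_privacy_preference(image, (x, y, w, h), preference)
```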

Relevance:

30.00%

Publisher:

Abstract:

Acoustically, vehicles are extremely noisy environments and, as a consequence, audio-only in-car voice recognition systems perform very poorly. Since the visual modality is immune to acoustic noise, using the visual lip information from the driver is seen as a viable strategy for circumventing this problem. However, implementing such an approach requires a system able to accurately locate and track the driver's face and facial features in real time. In this paper we present such an approach using the Viola-Jones algorithm. Our results show that the Viola-Jones approach is a suitable method for locating and tracking the driver's lips despite the visual variability of illumination and head pose.
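A hedged sketch of one common way to obtain a lip region of interest from a Viola–Jones face detection: crop the lower third of the face box. The paper's own face and lip tracker may differ; the crop fractions below are heuristic assumptions.

```python
# Face detection followed by a heuristic mouth ROI crop for lip tracking.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def mouth_roi(gray_frame):
    """Return the mouth region (as an image crop) of the largest detected face, or None."""
    faces = face_cascade.detectMultiScale(gray_frame, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest face = driver
    # Lower third of the face box, horizontally centred, as a rough lip region.
    return gray_frame[y + 2 * h // 3 : y + h, x + w // 4 : x + 3 * w // 4]
```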