996 results for static images


Relevance: 100.00%

Abstract:

This paper describes a general, trainable architecture for object detection that has previously been applied to face and people detection, with a new application to car detection in static images. Our technique is a learning-based approach that uses a set of labeled training data from which an implicit model of an object class -- here, cars -- is learned. Instead of pixel representations, which may be noisy and therefore do not provide a compact representation for learning, our training images are transformed from pixel space to that of Haar wavelets, which respond to local, oriented, multiscale intensity differences. These feature vectors are then used to train a support vector machine classifier. The detection of cars in images is an important step in applications such as traffic monitoring, driver assistance systems, and surveillance, among others. We show several examples of car detection on out-of-sample images and show an ROC curve that highlights the performance of our system.
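As an illustration of the kind of pipeline described (not the authors' exact implementation), the sketch below computes multiscale Haar wavelet coefficients for grayscale training crops with PyWavelets and feeds the resulting feature vectors to a support vector machine; the crop size, wavelet depth, and kernel choice are assumptions.

```python
import numpy as np
import pywt                      # PyWavelets, assumed available
from sklearn.svm import SVC

def haar_features(crop, level=3):
    """Flatten multiscale Haar wavelet coefficients of a grayscale crop."""
    coeffs = pywt.wavedec2(crop.astype(np.float32), 'haar', level=level)
    parts = [coeffs[0].ravel()]                     # coarse approximation
    for (cH, cV, cD) in coeffs[1:]:                 # oriented detail bands
        parts.extend([cH.ravel(), cV.ravel(), cD.ravel()])
    return np.concatenate(parts)

# Hypothetical labelled data: fixed-size grayscale crops and car/non-car labels.
crops = np.random.rand(20, 64, 64)
labels = np.random.randint(0, 2, 20)

X = np.stack([haar_features(c) for c in crops])
clf = SVC(kernel='poly', degree=2).fit(X, labels)   # a quadratic kernel is one common choice
print(clf.predict(X[:5]))
```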

Relevance: 100.00%

Abstract:

Studies of subjective time have adopted different methods to understand different processes of time perception. Four sculptures, with implied movement ranked as 1.5-, 3.0-, 4.5-, and 6.0-point stimuli on the Body Movement Ranking Scale, were randomly presented to 42 university students untrained in visual arts and ballet. Participants were allowed to observe the images for any length of time (exploration time) and, immediately after each image was observed, recorded the duration as they perceived it. The temporal ratio results (exploration time / time estimation) showed that the exploration time of the images also affected the perception of time, i.e., the subjective time for sculptures representing implied movement was overestimated.

Relevance: 100.00%

Abstract:

This paper describes an analysis performed for facial description in static images and video streams. The still-image context is first analyzed in order to decide the optimal classifier configuration for each problem: gender recognition, race classification, and the presence of glasses and a moustache. These results are later applied to significant samples that are automatically extracted in real time from video streams, achieving promising results in the facial description of 70 individuals in terms of gender, race, and the presence of glasses and a moustache.
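The abstract does not detail the classifier configurations that were compared, so the following is only a generic sketch of how one configuration (a linear SVM over standardised pixel intensities) could be evaluated by cross-validation; the arrays X and y are hypothetical placeholders, not the paper's data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Hypothetical inputs: flattened, pre-cropped face images and binary gender labels.
X = np.random.rand(200, 32 * 32)
y = np.random.randint(0, 2, 200)

clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
scores = cross_val_score(clf, X, y, cv=5)   # compare configurations by CV accuracy
print(f"mean accuracy: {scores.mean():.3f}")
```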

Relevance: 100.00%

Abstract:

Master's dissertation, Natural Language Processing and Language Industries, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2014.

Relevance: 70.00%

Abstract:

There is substantial evidence for facial emotion recognition (FER) deficits in autism spectrum disorder (ASD). The extent of this impairment, however, remains unclear, and there is some suggestion that clinical groups might benefit from the use of dynamic rather than static images. High-functioning individuals with ASD (n = 36) and typically developing controls (n = 36) completed a computerised FER task involving static and dynamic expressions of the six basic emotions. The ASD group showed poorer overall performance in identifying anger and disgust and were disadvantaged by dynamic (relative to static) stimuli when presented with sad expressions. Among both groups, however, dynamic stimuli appeared to improve recognition of anger. This research provides further evidence of specific impairment in the recognition of negative emotions in ASD, but argues against any broad advantages associated with the use of dynamic displays.

Relevance: 70.00%

Abstract:

While the neural regions associated with facial identity recognition are considered to be well defined, the neural correlates of processing non-moving and moving images of facial emotion are less clear. This study examined changes in brain electrical activity in 26 participants (14 males, M = 21.64, SD = 3.99; 12 females, M = 24.42, SD = 4.36) during a passive face-viewing task, a scrambled-face task, and separate emotion and gender face-discrimination tasks. The steady-state visual evoked potential (SSVEP) was recorded from 64 electrode sites. Consistent with previous research, face-related activity was evident at scalp regions over the parieto-temporal region approximately 170 ms after stimulus presentation. Results also identified different SSVEP spatio-temporal changes associated with the processing of static and dynamic facial emotions with respect to gender, with static stimuli predominantly associated with an increase in inhibitory processing within the frontal region. Dynamic facial emotions were associated with changes in the SSVEP response within the temporal region, which are proposed to index inhibitory processing. It is suggested that static images represent non-canonical stimuli which are processed via different mechanisms from their more ecologically valid dynamic counterparts.
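As background on how a steady-state response can be quantified (an illustration only, not the authors' analysis pipeline), the sketch below estimates the SSVEP amplitude at a known stimulation frequency from a single-channel EEG segment via an FFT; the sampling rate and stimulation frequency are assumed values.

```python
import numpy as np

def ssvep_amplitude(eeg, fs, stim_freq):
    """Amplitude of the spectral component closest to the stimulation frequency."""
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - stim_freq))]

# Hypothetical 10 s single-channel recording sampled at 1000 Hz with a 13 Hz flicker.
fs, stim_freq = 1000.0, 13.0
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * stim_freq * t) + np.random.randn(t.size)
print(ssvep_amplitude(eeg, fs, stim_freq))
```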

Relevance: 70.00%

Abstract:

This paper focuses on quantifying the benefits of pictogram-based instructions relative to static images for work-instruction delivery. The assembly of a stiffened aircraft panel has been used as an exemplar for the work, which seeks to address the challenge of identifying an instructional mode that can be location- or language-neutral while at the same time optimising assembly build times and maintaining build quality. Key performance parameters, measured using a series of panel-build experiments conducted by two separate groups, were: overall build time, the number of subject references to instructional media, the number of build errors, and the time taken to correct any mistakes. Overall build time across five builds was about 20% lower for the group using pictogram instructions than for the group using image-based instructions, and the pictogram group also made fewer errors. Although previous work identified that animated instructions result in optimal build times, the language neutrality of pictograms, as well as the fact that they can be used without visualisation hardware, means that, on balance, they have broader applicability in terms of transferring assembly knowledge to the manufacturing environment.

Relevance: 60.00%

Abstract:

Effective management of groundwater requires stakeholders to have a realistic conceptual understanding of the groundwater systems and hydrological processes. However, groundwater data can be complex, confusing, and often difficult for people to comprehend. A powerful way to communicate understanding of groundwater processes, complex subsurface geology, and their relationships is through the use of visualisation techniques to create 3D conceptual groundwater models. In addition, the ability to animate, interrogate, and interact with 3D models can encourage a higher level of understanding than static images alone. While there is an increasing number of software tools available for developing and visualising groundwater conceptual models, these packages are often very expensive and, because of their complexity, are not readily accessible to the majority of people. The Groundwater Visualisation System (GVS) is a software framework that can be used to develop groundwater visualisation tools aimed specifically at non-technical computer users and those who are not groundwater domain experts. A primary aim of GVS is to provide management support for agencies and to enhance community understanding.

Relevance: 60.00%

Abstract:

Facial expression is an important channel for human communication and can be applied in many real applications. One critical step for facial expression recognition (FER) is to accurately extract emotional features. Current approaches to FER in static images have not fully considered and utilized the features of facial element and muscle movements, which represent the static and dynamic, as well as the geometric and appearance, characteristics of facial expressions. This paper proposes an approach that addresses this limitation using 'salient' distance features, which are obtained by extracting patch-based 3D Gabor features, selecting the 'salient' patches, and performing patch matching operations. The experimental results demonstrate a high correct recognition rate (CRR), significant performance improvements due to the consideration of facial element and muscle movements, promising results under face registration errors, and fast processing time. The comparison with the state-of-the-art performance confirms that the proposed approach achieves the highest CRR on the JAFFE database and is among the top performers on the Cohn-Kanade (CK) database.
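For orientation, the sketch below extracts simple patch-level Gabor responses with OpenCV. It is a 2D simplification of the patch-based 3D Gabor features described in the abstract, and the filter-bank parameters are assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def gabor_patch_features(patch, orientations=4, scales=(7, 11, 15)):
    """Mean magnitude response of a small 2D Gabor filter bank over one patch."""
    patch = patch.astype(np.float32)
    feats = []
    for ksize in scales:
        for i in range(orientations):
            theta = i * np.pi / orientations
            # args: ksize, sigma, theta, lambd, gamma, psi
            kernel = cv2.getGaborKernel((ksize, ksize), ksize / 4.0,
                                        theta, ksize / 2.0, 0.5, 0)
            response = cv2.filter2D(patch, cv2.CV_32F, kernel)
            feats.append(float(np.abs(response).mean()))
    return np.array(feats)

# Hypothetical 32x32 grayscale patch around a facial landmark.
patch = np.random.rand(32, 32)
print(gabor_patch_features(patch).shape)   # (len(scales) * orientations,)
```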

Relevance: 60.00%

Abstract:

Purpose: Corneal confocal microscopy (CCM) is a rapid, non-invasive ophthalmic technique which has been shown to diagnose and stratify the severity of diabetic neuropathy. Current morphometric techniques assess individual static images of the sub-basal nerve plexus; this work explores the potential for non-invasive assessment of the wide-field morphology and dynamic changes of this plexus in vivo. Methods: In this pilot study, laser-scanning CCM was used to acquire maps (using a dynamic fixation target and semi-automated tiling software) of the central corneal sub-basal nerve plexus in 4 diabetic patients with neuropathy, 6 without neuropathy, and 2 control subjects. Nerve migration was measured in an additional 7 diabetic patients with neuropathy, 4 without neuropathy, and 2 control subjects by repeating a modified version of the mapping procedure within 2-8 weeks, thus facilitating re-identification of distinctive nerve landmarks in the two montages. The rate of nerve movement was determined from these data and normalised to a weekly rate (µm/week) using customised software. Results: Wide-field corneal nerve fibre length correlated significantly with the Neuropathy Disability Score (r = -0.58, p < 0.05), vibration perception (r = -0.66, p < 0.05), and peroneal conduction velocity (r = 0.67, p < 0.05). Central corneal nerve fibre length did not correlate with any of these measures of neuropathy (p > 0.05 for all). The rate of corneal nerve migration was 14.3 ± 1.1 µm/week in diabetic patients with neuropathy, 19.7 ± 13.3 µm/week in diabetic patients without neuropathy, and 24.4 ± 9.8 µm/week in control subjects; however, these differences were not statistically significant (p = 0.543). Conclusions: Our data demonstrate that it is possible to capture wide-field images of the corneal nerve plexus and to quantify the rate of corneal nerve migration by repeating this procedure over a number of weeks. Further studies on larger sample sizes are required to determine the utility of this approach for the diagnosis and monitoring of diabetic neuropathy.
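The normalisation to a weekly rate described above is straightforward arithmetic; a minimal sketch (a hypothetical helper, not the authors' customised software) is:

```python
def weekly_migration_rate(displacement_um, interval_days):
    """Normalise an observed nerve-landmark displacement (µm) to µm/week."""
    return displacement_um / (interval_days / 7.0)

# Example: a landmark that moved 80 µm over a 28-day re-imaging interval.
print(weekly_migration_rate(80.0, 28.0))   # 20.0 µm/week
```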

Relevance: 60.00%

Abstract:

The increasing number of available protein structures requires efficient tools for multiple structure comparison. Indeed, multiple structural alignments are essential for the analysis of the function, evolution, and architecture of protein structures. For this purpose, we propose a new web server called multiple Protein Block Alignment (mulPBA). This server implements a method based on a structural alphabet to describe the backbone conformation of a protein chain in terms of dihedral angles. This 'sequence-like' representation enables the use of powerful sequence alignment methods for primary structure comparison, followed by an iterative refinement of the structural superposition. This approach yields alignments superior to most rigid-body alignment methods and highly comparable with flexible structure comparison approaches. We implement this method in a web server designed to perform multiple structure superimpositions on a set of structures supplied by the user. Outputs are given both as a sequence alignment and as superposed 3D structures, visualized directly via static images generated by PyMol or through a Jmol applet allowing dynamic interaction. Multiple global quality measures are given. Relatedness between structures is indicated by a distance dendrogram. Superimposed structures in PDB format can also be downloaded, and the results are obtained quickly. The mulPBA server can be accessed at www.dsimb.inserm.fr/dsimb_tools/mulpba/.
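As a point of reference for the static-image output path (this is not mulPBA itself), two structures can be superposed and rendered to a PNG with the PyMol Python API roughly as follows; the file names are placeholders, and a local PyMol installation exposing the pymol module is assumed.

```python
import pymol
pymol.finish_launching(['pymol', '-qc'])   # headless, quiet session
from pymol import cmd

cmd.load("structure_a.pdb", "molA")        # placeholder input files
cmd.load("structure_b.pdb", "molB")
cmd.align("molB", "molA")                  # superpose molB onto molA
cmd.png("superposition.png", width=800, height=600, dpi=150, ray=1)
```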

Relevance: 60.00%

Abstract:

Tattooing is an ancient form of body inscription that, despite its age, has not changed in terms of materials and techniques. The development of technologies for new modalities of organic intervention will have ramifications in several areas, allowing the use of new interactive epithelial interfaces (responsive dynamic tattoos) and creating new avenues of embodied interaction and communication. In contrast to the traditional practice of static images, dynamic tattoos (DTs) allow the generation of dynamic, interactive images on the skin. Our aim here is to present this new field of research and to reflect on the role of the designer in the design of dynamic tattoos, and on the implications of tattoos that transform the skin into a new source of interactive and reversible inscriptions.

Relevance: 60.00%

Abstract:

In this paper we present a robust face location system, based on simulations of human vision, to automatically locate faces in color static images. Our method is divided into four stages. In the first stage we use a Gaussian low-pass filter to remove the fine detail of the images, which is not used in the initial stage of human vision. During the second and third stages, our technique approximately detects the image regions that may contain faces. During the fourth stage, the existence of faces in the selected regions is verified. By combining the advantages of bottom-up feature-based methods and appearance-based methods, our algorithm performs well on various images, including those with highly complex backgrounds.
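A minimal sketch of the first stage only (removing fine detail with a Gaussian low-pass filter, here via OpenCV); the kernel size and sigma are assumed values, not those of the paper.

```python
import cv2

def coarse_view(image_bgr, ksize=9, sigma=3.0):
    """Stage 1: suppress fine image detail with a Gaussian low-pass filter."""
    return cv2.GaussianBlur(image_bgr, (ksize, ksize), sigma)

# Usage (assuming the file exists): blurred = coarse_view(cv2.imread("photo.jpg"))
```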

Relevance: 60.00%

Abstract:

Emotion research has long been dominated by the "standard method" of displaying posed or acted static images of facial expressions of emotion. While this method has been useful, it is unable to investigate the dynamic nature of emotion expression. Although continuous self-report traces have enabled the measurement of dynamic expressions of emotion, a consensus has not been reached on the statistical techniques that permit inferences to be made with such measures. We propose Generalized Additive Models (GAMs) and Generalized Additive Mixed Models (GAMMs) as techniques that can account for the dynamic nature of such continuous measures. These models allow us to hold constant shared components of responses that are due to perceived emotion across time, while enabling inference concerning linear differences between groups. The mixed-model GAMM approach is preferred because it can account for autocorrelation in time-series data and allows emotion-decoding participants to be modelled as random effects. To increase confidence in linear differences, we assess methods that address interactions between categorical variables and dynamic changes over time. In addition, we comment on the use of Generalized Additive Models to assess the effect size of shared perceived emotion and discuss sample sizes. Finally, we address additional uses: the inference of feature detection, continuous-variable interactions, and the measurement of ambiguity.
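As a rough illustration of the modelling idea (a shared smooth over time plus a factor term for group), the sketch below fits a plain GAM with the pygam library on simulated data; it omits the random-effects and autocorrelation structure that the mixed-model GAMM approach adds, and all variable names and values are hypothetical.

```python
import numpy as np
from pygam import LinearGAM, s, f   # pygam assumed installed

# Simulated continuous emotion-rating traces: column 0 = time, column 1 = group (0/1).
rng = np.random.default_rng(0)
time = np.tile(np.linspace(0, 30, 200), 2)
group = np.repeat([0, 1], 200)
rating = np.sin(time / 5) + 0.3 * group + rng.normal(0, 0.2, time.size)

X = np.column_stack([time, group])
gam = LinearGAM(s(0) + f(1)).fit(X, rating)   # shared smooth of time + group offset
gam.summary()
```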

Relevance: 60.00%

Abstract:

Saliency maps determine the likelihood that we focus on interesting areas of scenes or images. These maps can be built using several low-level image features, one of which, colour, has particular relevance. In this paper we present a new computational model, based only on colour features, which provides a sound basis for saliency maps for static images and video, plus region segregation and cues for local gist vision.
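For context on what a purely colour-driven saliency map can look like (this is the well-known frequency-tuned colour-saliency baseline of Achanta et al., not the authors' model), a compact OpenCV/NumPy sketch is:

```python
import cv2
import numpy as np

def colour_saliency(image_bgr):
    """Saliency as the distance of each (slightly blurred) pixel's Lab colour
    from the mean Lab colour of the whole image, scaled to [0, 1]."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    blurred = cv2.GaussianBlur(lab, (5, 5), 0)
    mean_colour = lab.reshape(-1, 3).mean(axis=0)
    sal = np.linalg.norm(blurred - mean_colour, axis=2)
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)

# Usage (assuming the file exists): sal_map = colour_saliency(cv2.imread("scene.jpg"))
```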