124 results for Epigraphs Recognition
Abstract:
Training a system to recognize handwritten words is a task that requires a large amount of data together with correct transcriptions. However, the creation of such a training set, including the generation of the ground truth, is tedious and costly. One way of reducing the high cost of labeled training data acquisition is to exploit unlabeled data, which can be gathered easily. Making use of both labeled and unlabeled data is known as semi-supervised learning. One of the most general versions of semi-supervised learning is self-training, where a recognizer iteratively retrains itself on its own output on new, unlabeled data. In this paper we propose to apply semi-supervised learning, and in particular self-training, to the problem of cursive handwritten word recognition. The special focus of the paper is on retraining rules, which define what data are actually used in the retraining phase. In a series of experiments it is shown that the performance of a neural-network-based recognizer can be significantly improved through the use of unlabeled data and self-training if appropriate retraining rules are applied.
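A minimal sketch of such a self-training loop, assuming a confidence-threshold retraining rule; the classifier choice, the 0.9 threshold, and the data names are illustrative assumptions, not the authors' system.

```python
# Illustrative self-training loop with a confidence-threshold retraining rule.
# The MLP classifier, the threshold, and the array names are assumptions of
# this sketch, not the paper's recognizer.
import numpy as np
from sklearn.neural_network import MLPClassifier

def self_train(X_labeled, y_labeled, X_unlabeled, rounds=5, threshold=0.9):
    model = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)
    model.fit(X_labeled, y_labeled)
    X_pool = X_unlabeled.copy()
    for _ in range(rounds):
        if len(X_pool) == 0:
            break
        proba = model.predict_proba(X_pool)
        conf = proba.max(axis=1)
        keep = conf >= threshold          # retraining rule: keep confident outputs only
        if not keep.any():
            break
        pseudo = model.classes_[proba[keep].argmax(axis=1)]
        # Retrain on the labeled data plus the confidently self-labeled data.
        model.fit(np.vstack([X_labeled, X_pool[keep]]),
                  np.concatenate([y_labeled, pseudo]))
        X_pool = X_pool[~keep]            # remove used samples from the pool
    return model
```

The threshold here plays the role of the paper's retraining rule: it decides which of the recognizer's own outputs are trusted enough to enter the next training round.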
Abstract:
Coordinated eye and head movements occur simultaneously when scanning the visual world for relevant targets. However, measuring both eye and head movements in experiments that allow natural head movements can be challenging. This paper provides an approach to studying eye-head coordination: First, we demonstrate the capabilities and limits of the eye-head tracking system used, and compare it to other technologies. Second, a behavioral task is introduced to invoke eye-head coordination. Third, a method is introduced to reconstruct signal loss in video-based oculography caused by cornea reflection artifacts in order to extend the tracking range. Finally, parameters of eye-head coordination are identified using EHCA (eye-head coordination analyzer), a MATLAB software package developed to analyze eye-head shifts. To demonstrate the capabilities of the approach, a study with 11 healthy subjects was performed to investigate motion behavior. The approach presented here is discussed as an instrument to explore eye-head coordination, which may lead to further insights into attentional and motor symptoms of certain neurological or psychiatric diseases, e.g., schizophrenia.
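One plausible way to reconstruct short signal-loss gaps such as corneal-reflection dropouts is to interpolate across the missing samples. The sketch below is an illustrative assumption (linear interpolation over NaN runs in a gaze trace, skipping long dropouts), not the reconstruction method actually implemented in EHCA.

```python
# Illustrative gap reconstruction for a gaze signal: fill NaN runs no longer
# than `max_gap` samples by linear interpolation. The use of plain linear
# interpolation and the gap limit are assumptions of this sketch.
import numpy as np

def fill_gaps(gaze, max_gap=20):
    gaze = np.asarray(gaze, dtype=float).copy()
    bad = np.isnan(gaze)
    if not bad.any() or bad.all():
        return gaze
    idx = np.arange(gaze.size)
    filled = gaze.copy()
    filled[bad] = np.interp(idx[bad], idx[~bad], gaze[~bad])
    # Identify contiguous NaN runs and fill only the short ones.
    starts = np.flatnonzero(bad & ~np.r_[False, bad[:-1]])
    ends = np.flatnonzero(bad & ~np.r_[bad[1:], False])
    for s, e in zip(starts, ends):
        if e - s + 1 <= max_gap:
            gaze[s:e + 1] = filled[s:e + 1]
    return gaze
```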
Abstract:
Several studies investigated the role of featural and configural information when processing facial identity. A lot less is known about their contribution to emotion recognition. In this study, we addressed this issue by inducing either a featural or a configural processing strategy (Experiment 1) and by investigating the attentional strategies in response to emotional expressions (Experiment 2). In Experiment 1, participants identified emotional expressions in faces that were presented in three different versions (intact, blurred, and scrambled) and in two orientations (upright and inverted). Blurred faces contain mainly configural information, and scrambled faces contain mainly featural information. Inversion is known to selectively hinder configural processing. Analyses of the discriminability measure (A′) and response times (RTs) revealed that configural processing plays a more prominent role in expression recognition than featural processing, but their relative contribution varies depending on the emotion. In Experiment 2, we qualified these differences between emotions by investigating the relative importance of specific features by means of eye movements. Participants had to match intact expressions with the emotional cues that preceded the stimulus. The analysis of eye movements confirmed that the recognition of different emotions relies on different types of information. While the mouth is important for the detection of happiness and fear, the eyes are more relevant for anger, fear, and sadness.
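For reference, a common nonparametric estimate of the discriminability measure A′ is the Grier (1971) formula below, stated for the case where the hit rate H is at least the false-alarm rate F; which exact variant the study used is not stated in the abstract.

```latex
% Standard nonparametric A' (Grier, 1971), for H >= F.
% The specific variant used in the study is not given in the abstract.
A' = \frac{1}{2} + \frac{(H - F)\,(1 + H - F)}{4H\,(1 - F)}
```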
Abstract:
T cell receptors (TCRs) containing Vβ20-1 have been implicated in a wide range of T cell mediated diseases and allergic reactions, making them a target for understanding these conditions. The mechanics of T cell receptors are largely unexplained by the static structures available from X-ray crystallographic studies. A small number of molecular dynamics simulations have been conducted on TCRs; however, these currently lack either portions of the receptor or explanations for the differences between binding and non-binding TCR recognition of the respective peptide-HLA complexes. We performed molecular dynamics simulations of a TCR containing the variable domain Vβ20-1, sequenced from drug-responsive T cells. These cells originated from a patient showing maculopapular eruptions in response to the sulfanilamide antibiotic sulfamethoxazole (SMX). The CDR2β domain of this TCR was found to dock SMX with high affinity. Using this compound as a perturbation, the overall mechanisms involved in responses mediated by this receptor were explored, showing a chemical action on the TCR free from HLA or peptide interaction. Our simulations show two completely separate modes of binding cognate peptide-HLA complexes, with an increased affinity induced by SMX bound to the Vβ20-1. Overall binding of the TCR is mediated through primary recognition by either the variable β or α domain, and a switch in recognition across the TCR loops contacting the peptide and HLA occurs when SMX is present in the CDR2β loop. Large differences in binding affinity are induced by the sum of small amino acid changes, primarily by SMX modifying only three critical CDR2β loop amino acid positions. These residues, TYRβ57, ASPβ64, and LYSβ65, initially hold hydrogen bonds from the CDR2β to adjacent CDR loops. Effects from SMX binding are amplified and traverse longer distances through internal TCR hydrogen-bonding networks, controlling the overall TCR conformation. Thus, the CDR2β of Vβ20-1 acts as a ligand-controlled switch affecting overall TCR binding affinity.
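As a hedged illustration of how inter-loop hydrogen bonds might be quantified from an MD trajectory, the sketch below uses MDAnalysis; the file names, residue ranges, and cutoffs are placeholders, and this is not the analysis pipeline used in the paper.

```python
# Illustrative hydrogen-bond analysis of an MD trajectory with MDAnalysis.
# File names, the residue range standing in for a CDR2-beta region, and the
# geometric cutoffs are assumptions of this sketch, not the paper's setup.
import MDAnalysis as mda
from MDAnalysis.analysis.hydrogenbonds.hbond_analysis import HydrogenBondAnalysis

u = mda.Universe("tcr.psf", "tcr_traj.dcd")    # hypothetical topology/trajectory

hbonds = HydrogenBondAnalysis(
    universe=u,
    between=["segid B and resid 55-70",        # assumed CDR2-beta region
             "segid B and not resid 55-70"],   # rest of the beta chain
    d_a_cutoff=3.5,                            # donor-acceptor distance (angstrom)
    d_h_a_angle_cutoff=150.0,                  # donor-H-acceptor angle (degrees)
)
hbonds.run()
# One row per hydrogen bond per frame:
# frame, donor, hydrogen, acceptor indices, distance, angle.
print(hbonds.results.hbonds.shape)
```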
Abstract:
Distrust should automatically activate "thinking the opposite". Accordingly, Schul, Mayo, and Burnstein (2004) found that subjects detect antonyms of adjectives faster when confronted with untrustworthy rather than trustworthy faces. We conducted four experiments within their paradigm to test whether the response latency of detecting antonyms remains stable. We introduced the following changes: the paradigm was applied with and without an induction phase, the faces were culturally adapted, the stimuli were presented more in accordance with priming rules, and the canonicity of the antonyms was controlled. The results show that the response latency of detecting antonyms is difficult to predict. Even when faces are culturally adapted and priming rules are applied more strictly, response latency depends on whether the induction phase is applied and on the canonicity of the antonyms rather than on the trustworthiness of the faces. In general, this paradigm seems not to be appropriate for testing thinking the opposite under distrust.
Abstract:
Background: Emotional processing in essential hypertension has hardly been investigated beyond self-report questionnaires. The aim of this study is to examine associations between hypertension status and recognition of facial affect. Methods: Twenty-five healthy, non-smoking, medication-free men, including 13 hypertensive subjects, aged between 20 and 65 years completed a computer-based task examining sensitivity of recognition of facial affect. Neutral faces gradually changed to a specific emotion in a pseudo-continuous manner. Slides of the six basic emotions (fear, sadness, disgust, happiness, anger, surprise) were chosen from the "NimStim Set". Pictures of three female and three male faces were electronically morphed in 1% steps of intensity from 0% to 100% (36 sets of faces with 100 pictures each). Each picture of a set was presented for one second, ranging from 0% to 100% intensity. Participants were instructed to press a stop button as soon as they recognized the expression of the face. After stopping, a forced choice between the six basic emotions was required. As dependent variables, we recorded the emotion intensity at which the presentation was stopped and the number of errors (error rate). Recognition sensitivity was calculated as the emotion intensity of correctly identified emotions. Results: Mean arterial pressure was associated with significantly increased recognition sensitivity of facial affect for the emotion anger (β = -.43, p = .03, ΔR² = .110). There was no association with the emotions fear, sadness, disgust, happiness, and surprise (ps > .41). Mean arterial pressure did not relate to the mean number of errors for any of the facial emotions. Conclusions: Our findings suggest that increased blood pressure is associated with increased recognition sensitivity for facial anger: hypertensives perceive angry facial expressions faster than normotensives.
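As a concrete illustration of the dependent variables described above, the following sketch computes recognition sensitivity (mean morph intensity at which correctly identified expressions were stopped) and error rate from hypothetical trial records; the data layout is an assumption, not taken from the study.

```python
# Illustrative computation of the study's dependent variables from
# hypothetical trial records (the data layout is an assumption).
# Each trial: (target_emotion, chosen_emotion, stop_intensity_percent).
trials = [
    ("anger", "anger", 42),
    ("anger", "disgust", 55),   # an error
    ("fear",  "fear", 61),
]

def sensitivity_and_errors(trials, emotion):
    rows = [t for t in trials if t[0] == emotion]
    correct = [t[2] for t in rows if t[1] == t[0]]
    error_rate = 1 - len(correct) / len(rows)
    # Recognition sensitivity: mean intensity of correctly identified trials
    # (lower intensity = earlier recognition = higher sensitivity).
    sensitivity = sum(correct) / len(correct) if correct else float("nan")
    return sensitivity, error_rate

print(sensitivity_and_errors(trials, "anger"))  # (42.0, 0.5)
```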
Abstract:
The goal of this study was to investigate recognition memory performance across the lifespan and to determine how estimates of recollection and familiarity contribute to performance. In each of three experiments, participants from five age groups ranging from 14 to 85 years (children, young adults, middle-aged adults, young-old adults, and old-old adults) were presented with high- and low-frequency words in a study phase and were tested immediately afterwards and/or after a one-day retention interval. The results showed that word frequency and retention interval affected recognition memory performance as well as estimates of recollection and familiarity. Across the lifespan, the trajectory of recognition memory followed an inverted U-shaped function that was affected neither by word frequency nor by retention interval. The trajectory of estimates of recollection also followed an inverted U-shaped function, which was especially pronounced for low-frequency words. In contrast, estimates of familiarity did not differ across the lifespan. The results indicate that age differences in recognition memory are mainly due to differences in processes related to recollection, while the contribution of familiarity-based processes seems to be age-invariant.
Abstract:
In this paper we present a solution to the problem of action and gesture recognition using sparse representations. The dictionary is modelled as a simple concatenation of features computed for each action or gesture class from the training data, and test data are classified by finding a sparse representation of the test video features over this dictionary. Our method does not impose any explicit training procedure on the dictionary. We evaluate our model with two kinds of features, obtained by projecting (i) Gait Energy Images (GEIs) and (ii) motion descriptors to a lower dimension using random projection. Experiments show a 100% recognition rate on standard datasets, and the results are compared with those obtained using the widely used SVM classifier.
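A minimal sketch of sparse-representation classification of the kind described: a dictionary built by concatenating training features per class, random projection for dimensionality reduction, and classification by the smallest class-wise reconstruction residual. The feature dimensions, sparsity level, and the use of scikit-learn's OMP solver are assumptions of this sketch, not the paper's exact pipeline.

```python
# Illustrative sparse-representation classification (SRC) with random projection.
# Dimensions, sparsity level, and the OMP solver are assumptions of this sketch.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(0)
n_classes, per_class, dim = 3, 20, 4096        # e.g., flattened GEI features (assumed)
X_train = rng.normal(size=(n_classes * per_class, dim))
labels = np.repeat(np.arange(n_classes), per_class)

# Random projection to a lower dimension, then build the dictionary by
# concatenating the projected training features; columns are dictionary atoms.
proj = GaussianRandomProjection(n_components=128, random_state=0)
D = proj.fit_transform(X_train).T
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms

def classify(x):
    y = proj.transform(x.reshape(1, -1)).ravel()
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False)
    omp.fit(D, y)                              # sparse code of y over the dictionary
    code = omp.coef_
    # Assign the class whose atoms reconstruct the test feature best.
    residuals = [np.linalg.norm(y - D[:, labels == c] @ code[labels == c])
                 for c in range(n_classes)]
    return int(np.argmin(residuals))

print(classify(X_train[0]))                    # should recover class 0
```

The appeal of the scheme, as the abstract notes, is that the dictionary needs no training: classification reduces to one sparse-coding step plus per-class residual comparison.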