986 results for Recognition Memory
Abstract:
The study aimed to determine if the memory bias for negative faces previously demonstrated in depression and dysphoria generalises from long- to short-term memory. A total of 29 dysphoric (DP) and 22 non-dysphoric (ND) participants were presented with a series of faces and asked to identify the emotion portrayed (happiness, sadness, anger, or neutral affect). Following a delay, four faces were presented (the original plus three distractors) and participants were asked to identify the target face. Half of the trials assessed memory for facial emotion, and the remaining trials examined memory for facial identity. At encoding, no group differences were apparent. At memory testing, relative to ND participants, DP participants exhibited impaired memory for all types of facial emotion and for facial identity when the faces featured happiness, anger, or neutral affect, but not sadness. DP participants exhibited impaired identity memory for happy faces relative to angry, sad, and neutral faces, whereas ND participants exhibited enhanced facial identity memory when faces were angry. In general, memory for faces was not related to performance at encoding. However, in DP participants only, memory for sad faces was related to sadness recognition at encoding. The results suggest that the negative memory bias for faces in dysphoria does not generalise from long- to short-term memory.
Abstract:
The functional catechol-O-methyltransferase (COMT Val108/158Met) polymorphism has been shown to have an impact on tasks of executive function, memory, and attention and, more recently, on tasks with an affective component. As oestrogen reduces COMT activity, we focused on the interaction between gender and COMT genotype on brain activations during an affective processing task. We used functional MRI (fMRI) to record brain activations from 74 healthy subjects who engaged in a facial affect recognition task; subjects viewed and identified fearful compared to neutral faces. There was no main effect of the COMT polymorphism, gender, or genotype × gender interaction on task performance. We found a significant effect of gender on brain activations in the left amygdala and right temporal pole, where females demonstrated increased activations over males. Within these regions, Val/Val carriers showed greater signal magnitude compared to Met/Met carriers, particularly in females. The COMT Val108/158Met polymorphism impacts on gender-related patterns of activation in limbic and paralimbic regions, but the functional significance of any oestrogen-related COMT inhibition appears modest. Copyright © 2008 CINP.
Abstract:
Objectives: The aims were to determine if emotion recognition deficits observed in eating disorders generalise to non-clinical disordered eating and to establish if other psychopathological and personality factors contributed to, or accounted for, these deficits. Design: Females with high (n=23) and low (n=22) scores on the Eating Disorder Inventory (EDI) were assessed on their ability to recognise emotion from videotaped social interactions. Participants also completed a face memory task, a Stroop task, and self-report measures of alexithymia, depression and anxiety. Results: Relative to the low EDI group, high EDI participants exhibited a general deficit in recognition of emotion, which was related to their scores on the alexithymia measure and the bulimia subscale of the EDI. They also exhibited a specific deficit in the recognition of anger, which was related to their scores on the body dissatisfaction subscale of the EDI. Conclusions: In line with clinical eating disorders, non-clinical disordered eating is associated with emotion recognition deficits. However, the nature of these deficits appears to be dependent upon the type of eating psychopathology and the degree of co-morbid alexithymia.
Abstract:
In this paper, a modification of the high-order neural network (HONN) is presented. Third-order networks are considered for achieving translation-, rotation-, and scale-invariant pattern recognition. However, they require substantial storage and computational power for the task. The proposed modified HONN takes into account a priori knowledge of the binary patterns to be learned, achieving significant gains in computation time and memory requirements. This modification enables the efficient computation of HONNs for image fields greater than 100 × 100 pixels without any loss of pattern information.
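The abstract gives no algorithmic detail, but the standard route to geometric invariance in third-order networks is to share weights over pixel triples that form similar triangles. As an illustration only (the function name, point set, and angle-binning scheme below are my own, not the paper's), a minimal Python sketch of such a triangle-based invariant signature:

```python
import itertools
import math
from collections import Counter

def triangle_signature(points, bins=12):
    """Histogram of quantised interior-angle pairs over all pixel triples.

    Interior angles are unchanged by translation, rotation, and uniform
    scaling, so the histogram is an invariant signature of a binary
    pattern -- the weight-sharing idea behind third-order HONNs.
    """
    def angle(p, q, r):  # interior angle at vertex p of triangle pqr
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 == 0.0 or n2 == 0.0:
            return 0.0
        cosv = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
        return math.acos(max(-1.0, min(1.0, cosv)))

    sig = Counter()
    for a, b, c in itertools.combinations(points, 3):
        angles = sorted((angle(a, b, c), angle(b, a, c), angle(c, a, b)))
        # quantise the two smallest angles; the third is determined
        sig[(int(angles[0] / math.pi * bins), int(angles[1] / math.pi * bins))] += 1
    return sig

pattern = [(0, 0), (2, 1), (1, 3), (4, 2)]      # "on" pixels of a binary image
shifted = [(x + 5, y + 7) for x, y in pattern]  # translated copy
rotated = [(-y, x) for x, y in pattern]         # rotated 90 degrees
assert triangle_signature(pattern) == triangle_signature(shifted)
assert triangle_signature(pattern) == triangle_signature(rotated)
```

The cost of enumerating all triples is what the abstract refers to as the heavy storage and computation burden of third-order networks, and why exploiting a priori knowledge of the patterns pays off.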
Abstract:
This study examined the effect of schemas on consistency and accuracy of memory across interviews, providing theoretical hypotheses explaining why inconsistencies may occur. The design manipulated schema-typicality of items (schema-typical and atypical), question format (free recall, cued recall, and recognition), and retention interval (immediate/2 weeks and 2 weeks/4 weeks). Consistency, accuracy, and experiential quality of memory were measured. All independent variables affected accuracy and experiential quality of memory, while question format was the only variable affecting consistency. These results challenge the commonly held notion in the legal arena that consistency is a proxy for accuracy. The study also demonstrates that other variables, such as item-typicality and retention interval, have different effects on consistency and accuracy in memory.
Abstract:
Perception and recognition of faces are fundamental cognitive abilities that form a basis for our social interactions. Research has investigated face perception using a variety of methodologies across the lifespan. Habituation, novelty preference, and visual paired comparison paradigms are typically used to investigate face perception in young infants. Storybook recognition tasks and eyewitness lineup paradigms are generally used to investigate face perception in young children. These methodologies have introduced systematic differences including the use of linguistic information for children but not infants, greater memory load for children than infants, and longer exposure times to faces for infants than for older children, making comparisons across age difficult. Thus, research investigating infant and child perception of faces using common methods, measures, and stimuli is needed to better understand how face perception develops. According to predictions of the Intersensory Redundancy Hypothesis (IRH; Bahrick & Lickliter, 2000, 2002), in early development, perception of faces is enhanced in unimodal visual (i.e., silent dynamic face) rather than bimodal audiovisual (i.e., dynamic face with synchronous speech) stimulation. The current study investigated the development of face recognition across children of three ages: 5 – 6 months, 18 – 24 months, and 3.5 – 4 years, using the novelty preference paradigm and the same stimuli for all age groups. It also assessed the role of modality (unimodal visual versus bimodal audiovisual) and memory load (low versus high) on face recognition. It was hypothesized that face recognition would improve across age and would be enhanced in unimodal visual stimulation with a low memory load. Results demonstrated a developmental trend (F(2, 90) = 5.00, p = 0.009) with older children showing significantly better recognition of faces than younger children. 
In contrast to predictions, no differences were found as a function of modality of presentation (bimodal audiovisual versus unimodal visual) or memory load (low versus high). This study was the first to demonstrate a developmental improvement in face recognition from infancy through childhood using common methods, measures and stimuli consistent across age.
Abstract:
The move from Standard Definition (SD) to High Definition (HD) represents a sixfold increase in the data that must be processed. With expanding resolutions and evolving compression standards, there is a need for high-performance, flexible architectures that allow for quick upgradability. Technology continues to advance in display resolution, compression techniques, and video intelligence. Software implementations of these systems can attain accuracy, with trade-offs among processing performance (achieving specified frame rates on large image data sets), power, and cost constraints. New architectures are needed to keep pace with the rapid innovation in video and imaging. This dissertation contains dedicated hardware implementations of the pixel- and frame-rate processes on a Field Programmable Gate Array (FPGA) to achieve real-time performance. The following outlines the contributions of the dissertation. (1) We develop a target detection system by applying a novel running average mean threshold (RAMT) approach to globalize the threshold required for background subtraction. This approach adapts the threshold automatically to different environments (indoor and outdoor) and different targets (humans and vehicles). For low power consumption and better performance, we design the complete system on an FPGA. (2) We introduce a safe-distance factor and develop an algorithm for detecting the occurrence of occlusion during target tracking. A novel mean threshold is calculated by motion-position analysis. (3) A new strategy for gesture recognition is developed using Combinational Neural Networks (CNN) based on a tree structure. The method is analysed on American Sign Language (ASL) gestures. We introduce a novel points-of-interest approach to reduce the feature vector size and a gradient-threshold approach for accurate classification.
(4) We design a gesture recognition system using a hardware/software co-simulated neural network for the high speed and low memory storage requirements provided by the FPGA. We develop an innovative maximum distance algorithm which uses only 0.39% of the image as the feature vector to train and test the system design. The gesture sets involved in different applications may vary; it is therefore essential to keep the feature vector as small as possible while maintaining accuracy and performance.
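The abstract does not give the RAMT formulation itself. As a hedged sketch of the general idea in contribution (1) — a running-average background model whose detection threshold adapts to scene statistics — the following Python toy uses parameter names, values, and a mean-difference threshold that are all illustrative assumptions, not the dissertation's design:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model."""
    return (1.0 - alpha) * bg + alpha * frame

def detect_targets(bg, frame, k=2.5):
    """Foreground mask with a threshold adapted from the mean absolute
    frame difference -- a stand-in for the paper's RAMT, whose exact
    formulation the abstract does not give."""
    diff = np.abs(frame.astype(float) - bg)
    thresh = k * diff.mean()  # threshold tracks the scene statistics
    return diff > thresh

# toy scene: flat background plus sensor noise and one bright target
rng = np.random.default_rng(0)
bg = rng.normal(100.0, 2.0, (32, 32))
frame = bg + rng.normal(0.0, 2.0, (32, 32))
frame[10:14, 10:14] += 80.0  # target appears
mask = detect_targets(bg, frame)
```

Because the threshold is derived from the frame statistics rather than fixed, the same code adapts to brighter or noisier scenes, which is the "globalized threshold" property the abstract emphasises; the FPGA version would pipeline these per-pixel operations in hardware.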
Abstract:
After a crime has occurred, one of the most pressing objectives for investigators is to identify and interview any eyewitness that can provide information about the crime. Depending on his or her training, the investigative interviewer will use (to varying degrees) mostly yes/no questions, some cued and multiple-choice questions, with few open-ended questions. When the witness cannot generate any more details about the crime, one assumes the eyewitness' memory for the critical event has been exhausted. However, given what we know about memory, is this a safe assumption? In line with the extant literature on human cognition, if one assumes (a) an eyewitness has more available memories of the crime than he or she has accessible and (b) only explicit probes have been used to elicit information, then one can argue this eyewitness may still be able to provide additional information via implicit memory tests. In accordance with these notions, the present study had two goals: demonstrate that (1) eyewitnesses can reveal memory implicitly for a detail-rich event and (2) particularly for brief crimes, eyewitnesses can reveal memory for event details implicitly that were inaccessible when probed for explicitly. Undergraduates (N = 227) participated in a psychological experiment in exchange for research credit. Participants were presented with one of three stimulus videos (brief crime vs. long crime vs. irrelevant video). Then, participants either completed a series of implicit memory tasks or worked on a puzzle for 5 minutes. Lastly, participants were interviewed explicitly about the previous video via free recall and recognition tasks. Findings indicated that participants who viewed the brief crime provided significantly more crime-related details implicitly than those who viewed the long crime. The data also showed participants who viewed the long crime provided marginally more accurate details during free recall than participants who viewed the brief crime. 
Furthermore, participants who completed the implicit memory tasks provided significantly less accurate information during the explicit interview than participants who were not given implicit memory tasks. This study was the first to investigate implicit memory for eyewitnesses of a crime. To determine its applied value, additional empirical work is required.
Abstract:
Hardware/software (HW/SW) co-simulation integrates software simulation and hardware simulation simultaneously. Usually, an HW/SW co-simulation platform is used to ease debugging and verification of very large-scale integration (VLSI) designs. To accelerate the computation of the gesture recognition technique, an HW/SW implementation using field programmable gate array (FPGA) technology is presented in this paper. The major contributions of this work are: (1) a novel design of a memory controller in the Verilog Hardware Description Language (Verilog HDL) to reduce memory consumption and the load on the processor; (2) hardwiring the testing part of the neural network algorithm to improve speed and performance. American Sign Language gesture recognition is chosen to verify the performance of the approach. Several experiments were carried out on four databases of gestures (alphabet signs A to Z). (3) The major benefit of this design is that it takes only a few milliseconds to recognize a hand gesture, making it computationally more efficient.
Abstract:
The aim of this work is to evaluate the roles of age and emotional valence in word recognition in terms of the components of the ex-Gaussian distribution. To that end, a word recognition task in which emotional valence was manipulated was carried out with two age groups. Older participants did not present a clear trend in reaction times. Younger participants showed statistically significant differences for negative words in both target and distractor conditions. Regarding the ex-Gaussian tau parameter, often related to attentional demands in the literature, age-related differences in emotional valence seem to have no effect for negative words. Focusing on emotional valence within each group, younger participants showed an effect only for negative distractor words, whereas older participants showed effects for negative and positive target words and for negative distractor words. This suggests that the attentional demand is higher for emotional words, particularly for older participants.
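The ex-Gaussian components referred to here (a Gaussian part with mean mu and spread sigma, plus an exponential tail with mean tau, the parameter tied to attentional demands) can be illustrated with a small simulation; the parameter values below are arbitrary toy numbers, not the study's data:

```python
import numpy as np

def ex_gaussian_sample(mu, sigma, tau, n, rng):
    """Draw n reaction times from an ex-Gaussian: Normal(mu, sigma)
    plus an independent Exponential(tau) component."""
    return rng.normal(mu, sigma, n) + rng.exponential(tau, n)

def ex_gaussian_moments(x):
    """Method-of-moments estimates of (mu, sigma, tau).

    For an ex-Gaussian, mean = mu + tau, variance = sigma^2 + tau^2,
    and the third central moment equals 2 * tau^3, so tau can be read
    off the skew of the slow tail."""
    m = x.mean()
    v = x.var()
    m3 = ((x - m) ** 3).mean()
    tau = max(m3 / 2.0, 0.0) ** (1.0 / 3.0)
    sigma = np.sqrt(max(v - tau ** 2, 0.0))
    return m - tau, sigma, tau

rng = np.random.default_rng(1)
rts = ex_gaussian_sample(400.0, 40.0, 120.0, 200_000, rng)  # ms-scale toy values
mu_hat, sigma_hat, tau_hat = ex_gaussian_moments(rts)
```

Moment estimates are shown for brevity; RT studies such as this one typically fit the three parameters by maximum likelihood, but the decomposition of mean RT into a central tendency (mu) and a slow tail (tau) is the same.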
Abstract:
Visual hallucinations seem to be more prevalent in low light, and hallucinators tend to be more prone to false-positive errors in memory tasks. Here we investigated whether the richness of stimuli does indeed affect recognition differently in hallucinating and non-hallucinating participants and, if so, whether this difference extends to identifying spatial context. We compared 36 Parkinson's disease (PD) patients with visual hallucinations, 32 Parkinson's patients without hallucinations, and 36 age-matched controls on a visual memory task in which color and black-and-white pictures were presented at different locations. Participants had to recognize the pictures among distracters along with the location of the stimulus. Findings revealed clear differences in performance between the groups. Both PD groups had impaired recognition compared to the controls, but those with hallucinations were significantly more impaired on black-and-white than on color stimuli. In addition, the group with hallucinations was significantly impaired compared to the other two groups on spatial memory. We suggest that not only do PD patients have poorer recognition of pictorial stimuli than controls, but that those who present with visual hallucinations appear to rely more heavily on bottom-up sensory input and are impaired in spatial ability.
Abstract:
Visual recognition is a fundamental research topic in computer vision. This dissertation explores datasets, features, learning, and models used for visual recognition. In order to train visual models and evaluate different recognition algorithms, this dissertation develops an approach to collect object image datasets on web pages using an analysis of text around the image and of image appearance. This method exploits established online knowledge resources (Wikipedia pages for text; Flickr and Caltech data sets for images). The resources provide rich text and object appearance information. This dissertation describes results on two datasets. The first is Berg’s collection of 10 animal categories; on this dataset, we significantly outperform previous approaches. On an additional set of 5 categories, experimental results show the effectiveness of the method. Images are represented as features for visual recognition. This dissertation introduces a text-based image feature and demonstrates that it consistently improves performance on hard object classification problems. The feature is built using an auxiliary dataset of images annotated with tags, downloaded from the Internet. Image tags are noisy. The method obtains the text features of an unannotated image from the tags of its k-nearest neighbors in this auxiliary collection. A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) must rely on its visual examples. This text feature may not change, because the auxiliary dataset likely contains a similar picture. While the tags associated with images are noisy, they are more stable when appearance changes. The performance of this feature is tested using PASCAL VOC 2006 and 2007 datasets. This feature performs well; it consistently improves the performance of visual object classifiers, and is particularly effective when the training dataset is small. 
With more and more collected training data, computational cost becomes a bottleneck, especially when training sophisticated classifiers such as kernelized SVM. This dissertation proposes a fast training algorithm called Stochastic Intersection Kernel Machine (SIKMA). This proposed training method will be useful for many vision problems, as it can produce a kernel classifier that is more accurate than a linear classifier, and can be trained on tens of thousands of examples in two minutes. It processes training examples one by one in a sequence, so memory cost is no longer the bottleneck to process large scale datasets. This dissertation applies this approach to train classifiers of Flickr groups with many group training examples. The resulting Flickr group prediction scores can be used to measure image similarity between two images. Experimental results on the Corel dataset and a PASCAL VOC dataset show the learned Flickr features perform better on image matching, retrieval, and classification than conventional visual features. Visual models are usually trained to best separate positive and negative training examples. However, when recognizing a large number of object categories, there may not be enough training examples for most objects, due to the intrinsic long-tailed distribution of objects in the real world. This dissertation proposes an approach to use comparative object similarity. The key insight is that, given a set of object categories which are similar and a set of categories which are dissimilar, a good object model should respond more strongly to examples from similar categories than to examples from dissimilar categories. This dissertation develops a regularized kernel machine algorithm to use this category dependent similarity regularization. Experiments on hundreds of categories show that our method can make significant improvement for categories with few or even no positive examples.
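The dissertation's text feature is described only at a high level: an unannotated image borrows the tags of its k nearest neighbours in an auxiliary tagged collection. The following sketch makes that transfer concrete; the Euclidean distance, mean pooling, and toy data are my illustrative assumptions, not the dissertation's exact design:

```python
import numpy as np

def knn_tag_feature(query_feat, aux_feats, aux_tags, k=5):
    """Text feature for an unannotated image: average the tag vectors
    of its k visually nearest neighbours in an auxiliary tagged
    collection (distance metric and pooling are illustrative choices)."""
    dists = np.linalg.norm(aux_feats - query_feat, axis=1)
    nearest = np.argsort(dists)[:k]
    return aux_tags[nearest].mean(axis=0)

# toy data: 2-D "visual" features and a 3-word tag vocabulary
aux_feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
aux_tags = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1]], dtype=float)

feat = knn_tag_feature(np.array([0.05, 0.0]), aux_feats, aux_tags, k=2)
```

Because the feature is pooled over several neighbours, individual noisy tags are averaged out, which is why the abstract can claim the text feature stays stable when the object's appearance changes.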
Abstract:
The experiment discussed in this paper is a direct replication of Finkbeiner (2005) and an indirect replication of Jiang and Forster (2001) and Witzel and Forster (2012). The paper explores the use of episodic memory in L2 vocabulary processing. By administering an L1 episodic recognition task with L2 masked translation primes, reduced reaction times would suggest L2 vocabulary storage in episodic memory. The methodology follows Finkbeiner (2005), who argued that a blank screen introduced after the prime in Jiang and Forster (2001) led to a ghosting effect, compromising the imperceptibility of the prime. The results here mostly corroborate Finkbeiner (2005), with no significant priming effects. While Finkbeiner discusses his findings in terms of the dissociability of episodic and semantic memory, and attributes Jiang and Forster's (2001) results to participants' strategic responding, I add a layer of analysis based on declarative and procedural constituents. From this perspective, Jiang and Forster's (2001) and Witzel and Forster's (2012) results can be seen as possible episodic memory activation, and Finkbeiner's (2005) and my lack of priming effects might be due to the sole activation of procedural neural networks. Priming effects are found for concrete and abstract words but require verification through further experimentation.
Abstract:
Several researchers have investigated the effects of alcohol on memory, but few have studied its effects on an eyewitness's recall and recognition of crime events. This study examined the effects of alcohol and viewing conditions on subjects' ability to recall information regarding a videotaped bank robbery. Thirty male and 22 female subjects participated in a 2 (consumption: alcohol vs. no alcohol) × 2 (lighting: good vs. poor) factorial experiment, with Average Accuracy and Total Amount of Information recalled as the primary dependent measures. There was no significant difference between the intoxicated and sober subjects in the amount of information recalled or their average accuracy. The main effects of lighting condition and gender were also not significant.