989 results for Associative Memory


Relevance:

60.00%

Publisher:

Abstract:

Burnout syndrome, a psycho-physio-pathological condition, has been the object of intensive investigation since Freudenberger's (1974) article entitled "Staff Burnout", with two aims: to understand it better through diagnostic tools, and to develop techniques for therapeutic intervention. Indeed, since then a substantial number of research studies have been carried out and published on the diagnosis and characterization of Burnout and on its therapeutic resolution. The dominant thinking, then as now, is analytic and/or psychosocial in orientation. This condition, triggered by a succession of emotionally negative episodes in an occupational context in individuals with a probable genetic predisposition who are subject to occupational pressure of the most diverse kinds (ranging from "simple" stress through task overload to mobbing), frequently has dramatic effects on biopsychosocial dynamics in their most varied aspects. These effects almost always extend well beyond work-related problems, impairing social interactions more or less severely, with a particular impact on family dynamics. Burnout also favours the onset of various pathologies, since the entire psycho-neuro-endocrine-immunological system is compromised, potentiating states of systemic fragility. There are, however, aspects correlated with this dysfunctional condition that have received very little attention: cognitive-operative, or neuropsychological, alterations. Indeed, the studies addressing them are very few in number. Thus, having recorded marked complaints about concentration and memory in people with burnout observed in hospital and private practice, we decided to investigate these cases using a qualitative clinical methodology, and we found that the complaints were indeed pertinent. We therefore judged that the matter warranted deeper study and undertook a more systematic investigation aimed at better characterizing the type of attentional and mnemonic dysfunction. To this end, after preliminary screening of a group of 192 nurses who completed the Maslach Scale, we assessed an at-risk sample of 40 male and female nurses from psychiatric institutions in Greater Lisbon, working in emergency and inpatient wards, whom we compared with an equal-sized sample of nurses working in outpatient clinics or in settings more sheltered from continuous occupational stress. After a careful anamnesis, we administered tests of attention and memory sensitive to any type of cerebral compromise, whether functional or pathological. For the attention/concentration and visuo-grapho-spatial components we used the Toulouse-Piéron test, with digit span series for the audio-verbal component. Memory dynamics were assessed with the associative memory test (Wechsler Memory Scale) for the audio-verbal variant, and with figure reproduction (Wechsler Memory Scale). The results, after a twofold clinical and statistical analysis, broadly confirmed the hypotheses, indicating a significant correlation between the degree of Burnout and the neuropsychological deficits detected: impaired attention/concentration and dysmnesia that limit individuals in the face of everyday demands.
Finally, based on the literature review and the results of this study, a Neuropsychological Model of Burnout syndrome was outlined, which in our view reflects the relationships between this clinical condition, the cognitive-operative alterations found, and the main brain structures that we believe are implicated in the overall dynamics of the dysfunctional process.

Relevance:

60.00%

Publisher:

Abstract:

Self-organizing neural networks have been implemented in a wide range of application areas such as speech processing, image processing, optimization and robotics. Recent variations to the basic model proposed by the authors enable it to order state space using a subset of the input vector and to apply a local adaptation procedure that does not rely on a predefined test duration limit. Both these variations have been incorporated into a new feature map architecture that forms an integral part of a Hybrid Learning System (HLS) based on a genetic-based classifier system. Problems are represented within HLS as objects characterized by environmental features. Objects controlled by the system have preset targets set against a subset of their features. The system's objective is to achieve these targets by evolving a behavioural repertoire that efficiently explores and exploits the problem environment. Feature maps encode two types of knowledge within HLS: long-term memory traces of useful regularities within the environment, and classifier performance data calibrated against an object's feature states and targets. Self-organization of these networks constitutes non-genetic (experience-driven) learning within HLS. This paper presents a description of the HLS architecture and an analysis of the modified feature map implementing associative memory. Initial results are presented that demonstrate the behaviour of the system on a simple control task.
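
As a rough illustration of the experience-driven learning such feature maps perform, the sketch below implements a generic self-organizing map update rule. The grid size, learning-rate schedule, and Gaussian neighborhood are illustrative assumptions; the paper's modified architecture (ordering on a subset of the input vector, local adaptation) is not reproduced here.

```python
import numpy as np

def som_update(weights, x, lr=0.1, sigma=1.0):
    """One self-organizing map step: find the best-matching unit (BMU)
    and pull its grid neighborhood toward the input vector x."""
    rows, cols, _ = weights.shape
    dists = np.linalg.norm(weights - x, axis=2)        # distance of every unit to x
    bmu = np.unravel_index(np.argmin(dists), (rows, cols))
    for i in range(rows):
        for j in range(cols):
            grid_dist2 = (i - bmu[0]) ** 2 + (j - bmu[1]) ** 2
            h = np.exp(-grid_dist2 / (2 * sigma ** 2))  # neighborhood kernel
            weights[i, j] += lr * h * (x - weights[i, j])
    return weights

# Usage: order a 10x10 map over random 2-D inputs with decaying schedules.
rng = np.random.default_rng(0)
w = rng.random((10, 10, 2))
for t in range(1000):
    frac = 1 - t / 1000
    w = som_update(w, rng.random(2), lr=0.1 * frac, sigma=3.0 * frac + 0.5)
```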

Relevance:

60.00%

Publisher:

Abstract:

A connection is established between a fuzzy neural network model and the mixture of experts network (MEN) modelling approach. Based on this linkage, two new neuro-fuzzy MEN construction algorithms are proposed to overcome the curse of dimensionality that is inherent in the majority of associative memory networks and other rule-based systems. The first construction algorithm employs a function selection manager module in an MEN system. The second is based on a new parallel learning algorithm in which each model rule is trained independently, and for which the parameter convergence property of the new learning method is established. As with the first approach, an expert selection criterion is utilised in this algorithm. The two construction methods are equally effective in overcoming the curse of dimensionality by reducing the dimensionality of the regression vector, but the latter has the additional computational advantage of parallel processing. The proposed algorithms are analysed for effectiveness, followed by numerical examples that illustrate their efficacy on some difficult data-based modelling problems.
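
The gating idea behind a mixture of experts network can be conveyed in a few lines. The Python sketch below is a generic illustration, not the paper's construction algorithms: each expert is a linear model over its own low-dimensional subset of the regression vector, which is the mechanism by which such constructions reduce the dimensionality of the regression vector. All names and numbers are assumptions for the example.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def men_predict(x, experts, gate_w):
    """Mixture-of-experts output: a softmax gate blends local experts,
    each of which sees only its own subset of the input coordinates."""
    g = softmax(gate_w @ x)                               # gating probabilities
    preds = np.array([w @ x[idx] for idx, w in experts])  # one prediction per expert
    return g @ preds

# Illustrative setup: two experts, each using one coordinate of a 2-D input.
experts = [(np.array([0]), np.array([2.0])),    # expert 0: y ~ 2*x0
           (np.array([1]), np.array([-1.0]))]   # expert 1: y ~ -x1
gate_w = np.array([[5.0, 0.0],                  # gate favours expert 0 when x0 dominates
                   [0.0, 5.0]])
print(men_predict(np.array([1.0, 0.2]), experts, gate_w))
```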

Relevance:

60.00%

Publisher:

Abstract:

A common problem in many data-based modelling algorithms, such as associative memory networks, is the curse of dimensionality. In this paper, a new two-stage neurofuzzy system design and construction algorithm (NeuDeC) for nonlinear dynamical processes is introduced to tackle this problem effectively. A simple new preprocessing method is first derived and applied to reduce the rule base, followed by a fine model detection process based on the reduced rule set using forward orthogonal least squares model structure detection. In both stages, new A-optimality experimental-design-based criteria are used. In the preprocessing stage, a lower bound of the A-optimality design criterion is derived and applied as a subset selection metric; in the later stage, the A-optimality design criterion is incorporated into a new composite cost function that minimises model prediction error as well as penalising model parameter variance. The use of NeuDeC leads to unbiased model parameters with low parameter variance, with the additional benefit of a parsimonious model structure. Numerical examples are included to demonstrate the effectiveness of this new modelling approach for high-dimensional inputs.
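
For readers unfamiliar with the A-optimality criterion, the sketch below shows the quantity involved, the trace of the inverse information matrix (X^T X)^{-1}, which measures average parameter variance, together with a composite cost of the general form described (prediction error plus a variance penalty). The regularization constant and the weight `lam` are illustrative assumptions, not NeuDeC's settings.

```python
import numpy as np

def a_optimality(X, reg=1e-8):
    """A-optimality design criterion: trace of the inverse information
    matrix. Smaller values mean lower average parameter variance."""
    M = X.T @ X + reg * np.eye(X.shape[1])  # regularize in case X'X is near-singular
    return np.trace(np.linalg.inv(M))

def composite_cost(X, y, theta, lam=0.1):
    """Composite cost in the spirit of the paper: mean squared prediction
    error plus a penalty on parameter variance via the A-optimality term."""
    err = np.mean((y - X @ theta) ** 2)
    return err + lam * a_optimality(X)

# Usage: compare two candidate regressor subsets by their A-optimality.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
print(a_optimality(X[:, :2]), a_optimality(X[:, :4]))
```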

Relevance:

60.00%

Publisher:

Abstract:

Associative memory networks such as radial basis function, neurofuzzy and fuzzy logic networks, used for modelling nonlinear processes, suffer from the curse of dimensionality (COD): as the input dimension increases, the parameterization, computational cost, training data requirements, etc. increase exponentially. Here a new algorithm is introduced for the construction of optimal piecewise locally linear models over a Delaunay partition of the input space, both to overcome the COD and to generate locally linear models directly amenable to linear control and estimation algorithms. The training of the model is configured as a new mixture of experts network with a new fast decision rule derived using convex set theory. A very fast simulated reannealing (VFSR) algorithm is utilized to search for a globally optimal solution of the Delaunay input space partition. A benchmark nonlinear time series is used to demonstrate the new approach.
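
A minimal sketch of the piecewise locally linear idea, assuming SciPy's Delaunay triangulation for the input-space partition: fit one least-squares linear model per simplex and dispatch predictions by point location. The vertex set here is fixed by hand; in the paper it is the object of the VFSR global search, which is not reproduced.

```python
import numpy as np
from scipy.spatial import Delaunay

def fit_local_linear(X, y, vertices):
    """Fit one least-squares affine model per Delaunay simplex, using
    the training points that fall inside that simplex."""
    tri = Delaunay(vertices)
    cells = tri.find_simplex(X)              # which simplex each sample lands in
    models = {}
    for s in range(len(tri.simplices)):
        mask = cells == s
        if mask.sum() >= X.shape[1] + 1:     # need enough points for a stable fit
            A = np.c_[X[mask], np.ones(mask.sum())]
            models[s], *_ = np.linalg.lstsq(A, y[mask], rcond=None)
    return tri, models

def predict(tri, models, x):
    s = int(tri.find_simplex(x))
    if s in models:
        return models[s] @ np.append(x, 1.0)
    return np.nan                             # outside the partition / unfitted cell

# Usage on a toy 2-D nonlinearity; vertex placement is the step VFSR would optimize.
rng = np.random.default_rng(2)
X = rng.uniform(size=(400, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
verts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]])
tri, models = fit_local_linear(X, y, verts)
print(predict(tri, models, np.array([0.3, 0.4])))
```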

Relevance:

60.00%

Publisher:

Abstract:

The modelling of nonlinear stochastic dynamical processes from data involves solving the problems of data gathering, preprocessing, model architecture selection, learning or adaptation, parametric evaluation and model validation. For a given model architecture such as associative memory networks, a common problem in nonlinear modelling is the curse of dimensionality. A series of complementary data-based constructive identification schemes, mainly based on, but not limited to, operating-point-dependent fuzzy models, is introduced in this paper with the aim of overcoming the curse of dimensionality. These include (i) a mixture of experts algorithm based on a forward constrained regression algorithm; (ii) an inherently parsimonious piecewise locally linear modelling concept based on a Delaunay input space partition; (iii) a neurofuzzy model constructive approach based on forward orthogonal least squares and optimal experimental design; and finally (iv) a neurofuzzy model construction algorithm based on Bézier-Bernstein polynomial basis functions and the additive decomposition. Illustrative examples demonstrate their applicability, suggesting that the final major hurdle in data-based modelling has almost been removed.

Relevance:

60.00%

Publisher:

Abstract:

Objective. The main purpose of the study was to examine whether emotion impairs associative memory for previously seen items in older adults, as previously observed in younger adults. Method. Thirty-two younger adults and 32 older adults participated. The experiment consisted of 2 parts. In Part 1, participants learned picture–object associations for negative and neutral pictures. In Part 2, they learned picture–location associations for negative and neutral pictures; half of these pictures were seen in Part 1 whereas the other half were new. The dependent measure was how many locations of negative versus neutral items in the new versus old categories participants remembered in Part 2. Results. Both groups had more difficulty learning the locations of old negative pictures than of new negative pictures. However, this pattern was not observed for neutral items. Discussion. Despite the fact that older adults showed overall decline in associative memory, the impairing effect of emotion on updating associative memory was similar between younger and older adults.

Relevance:

60.00%

Publisher:

Abstract:

The hippocampus receives input from upper levels of the association cortex and is implicated in many mnemonic processes, but the exact mechanisms by which it codes and stores information remain unresolved. This work examines the flow of information through the hippocampal formation while attempting to determine the computations that each of the hippocampal subfields performs in learning and memory. The formation, storage, and recall of hippocampal-dependent memories theoretically utilize an autoassociative attractor network that functions by implementing two competitive, yet complementary, processes. Pattern separation, hypothesized to occur in the dentate gyrus (DG), refers to the ability to decrease the similarity among incoming information by producing output patterns that overlap less than the inputs. In contrast, pattern completion, hypothesized to occur in the CA3 region, refers to the ability to reproduce a previously stored output pattern from a partial or degraded input pattern.

Prior to addressing the functional role of the DG and CA3 subfields, the spatial firing properties of neurons in the dentate gyrus were examined. The principal cell of the dentate gyrus, the granule cell, has spatially selective place fields; however, the behavioral correlates of another excitatory cell, the mossy cell of the dentate polymorphic layer, are unknown. This report shows that putative mossy cells have spatially selective firing that consists of multiple fields, similar to previously reported properties of granule cells. Other cells recorded from the DG had single place fields. Compared to cells with multiple fields, cells with single fields fired at a lower rate during sleep, were less likely to burst, and were more likely to be recorded simultaneously with a large population of neurons that were active during sleep and silent during behavior. These data suggest that single-field and multiple-field cells constitute at least two distinct cell classes in the DG. Based on these characteristics, we propose that putative mossy cells tend to fire in multiple, distinct locations in an environment, whereas putative granule cells tend to fire in single locations, similar to place fields of the CA1 and CA3 regions.

Experimental evidence supporting the theories of pattern separation and pattern completion comes from both behavioral and electrophysiological tests. These studies focused on the function of each subregion and made implicit assumptions about how environmental manipulations changed the representations encoded by the hippocampal inputs; however, the cell populations that provided these inputs were in most cases not directly examined. We conducted a series of studies to investigate the neural activity in the entorhinal cortex, dentate gyrus, and CA3 under the same experimental conditions, which allowed a direct comparison between the input and output representations. The results show that the dentate gyrus representation changes between the familiar and cue-altered environments more than its input representations, whereas the CA3 representation changes less than its input representations. These findings are consistent with longstanding computational models proposing that (1) CA3 is an associative memory system performing pattern completion in order to recall previous memories from partial inputs, and (2) the dentate gyrus performs pattern separation to help store different memories in ways that reduce interference when the memories are subsequently recalled.
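
To make the separation/completion distinction concrete, the small sketch below (with purely illustrative data) compares population-vector overlap between two conditions at the input and output of a region: separation shows up as lower overlap at the output than at the input, completion as higher overlap.

```python
import numpy as np

def population_overlap(a, b):
    """Pearson correlation between two population firing-rate vectors,
    a common way to compare representations across conditions."""
    return float(np.corrcoef(a, b)[0, 1])

# Toy firing-rate vectors (one entry per cell) for a familiar vs a
# cue-altered environment. If the DG output vectors overlap *less* than
# the entorhinal (EC) inputs do, that is the signature of pattern
# separation; CA3 overlapping *more* than its inputs would indicate
# pattern completion.
rng = np.random.default_rng(3)
ec_familiar = rng.random(50)
ec_altered = 0.8 * ec_familiar + 0.2 * rng.random(50)
dg_familiar = rng.random(50)
dg_altered = 0.3 * dg_familiar + 0.7 * rng.random(50)
print("EC input overlap :", population_overlap(ec_familiar, ec_altered))
print("DG output overlap:", population_overlap(dg_familiar, dg_altered))
```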

Relevance:

60.00%

Publisher:

Abstract:

This work explores the feasibility of automatic gamma-radiation spectral decomposition by means of linear algebraic equation-solving algorithms based on pseudo-inverse techniques. The algorithms were designed with a view to their possible implementation on low-complexity, special-purpose processors. The first chapter reviews the techniques for detecting and measuring gamma radiation that were used to construct the spectra treated in this work. The basic concepts concerning the nature of hard electromagnetic radiation are re-examined, together with the physical processes and the electronic treatment involved in its detection, highlighting the intrinsically statistical nature of the spectrum build-up process, understood as a classification of the number of individual photon detections as a function of the notionally continuous energy associated with each. To this end, a brief description is given of the main phenomena of radiation-matter interaction that condition the detection and spectrum-formation processes. The radiation detector is considered the critical element of the measurement system, since it strongly conditions the detection process; the main detector types are therefore examined, with particular emphasis on semiconductor detectors, as these are the most widely used today. Finally, the fundamental electronic subsystems for conditioning and pre-processing the detector signal, traditionally referred to as Nuclear Electronics, are described. As far as spectroscopy is concerned, the subsystem of greatest interest for this work is the multichannel analyzer, which performs the qualitative treatment of the signal and builds a histogram of radiation intensity over the range of energies to which the detector is sensitive. This N-dimensional vector is what is generally known as the radiation spectrum. The various radionuclides contributing to a non-pure radiation source each leave their fingerprint in this spectrum.
The second chapter gives an exhaustive review of the mathematical methods devised to date for identifying the radionuclides present in a composite spectrum and for determining their relative activities. One of these, multiple linear regression, is proposed as the approach best suited to the constraints of the problem: the ability to handle low-resolution spectra, the absence of a human operator (unsupervised operation), and the possibility of being supported by low-complexity algorithms implementable on dedicated VLSI processors. The analysis problem is formally stated in the third chapter along these lines, and it is shown that it admits a solution within the theory of linear associative memories: an operator based on this type of structure can solve the desired spectral decomposition problem. In the same context, a pair of complementary adaptive algorithms is proposed for constructing this operator, whose arithmetic characteristics make them especially suitable for implementation on VLSI processors. Their adaptive nature gives the associative memory great flexibility in incorporating new information progressively.
The fourth chapter addresses a further, highly complex problem: the treatment of the spectral deformations introduced by instrumental drifts in the detector and in the pre-conditioning electronics. These deformations invalidate the linear regression model used to describe the spectrum under analysis. A model is therefore derived that includes the drifts as additional contributions to the composite spectrum; this entails a simple extension of the associative memory that tolerates drifts in the problem mixture and carries out a robust analysis of contributions. The extension method is based on a small-perturbation assumption.
Laboratory practice shows, however, that instrumental drifts can at times produce severe distortions in the spectrum that cannot be handled by this model. The fifth chapter therefore reformulates the problem of measurements affected by strong drifts from the standpoint of nonlinear optimization theory. This reformulation leads to a recursive algorithm, inspired by the Gauss-Newton method, that introduces the concept of a feedback linear memory: an operator with a markedly improved capability for decomposing mixtures with strong drift, without the excessive computational load of classical nonlinear optimization algorithms. The work concludes with a discussion of the results obtained at the three main levels of study addressed in the third, fourth, and fifth chapters, with the final statement of the main conclusions derived from the study, and with an outline of possible lines of future work.
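
A minimal numerical sketch of the core linear-associative-memory recall described above, assuming synthetic reference spectra and using NumPy's pseudo-inverse; the thesis's adaptive construction algorithms and drift extensions are not reproduced.

```python
import numpy as np

# Columns of A hold the reference spectra of the candidate radionuclides;
# a measured composite spectrum s is modelled as s = A @ c, and the
# relative activities c are recovered with the pseudo-inverse. All data
# here are synthetic and purely illustrative.
rng = np.random.default_rng(4)
n_channels, n_nuclides = 256, 3
A = np.abs(rng.normal(size=(n_channels, n_nuclides)))  # reference spectra library
c_true = np.array([0.5, 1.5, 0.2])                     # true relative activities
s = A @ c_true + 0.01 * rng.normal(size=n_channels)    # noisy composite measurement

c_hat = np.linalg.pinv(A) @ s                          # linear associative memory recall
print(np.round(c_hat, 3))                              # close to c_true
```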

Relevance:

60.00%

Publisher:

Abstract:

Geospatio-temporal conceptual models provide a mechanism to explicitly represent geospatial and temporal aspects of applications. Such models, which focus on both what and when/where, need to be more expressive than conventional conceptual models (e.g., the ER model), which primarily focus on what is important for a given application. In this study, we view conceptual schema comprehension of geospatio-temporal data semantics in terms of matching the external problem representation (that is, the conceptual schema) to the problem-solving task (that is, syntactic and semantic comprehension tasks), an argument based on the theory of cognitive fit. Our theory suggests that an external problem representation that matches the problem solver's internal task representation will enhance performance, for example, in comprehending such schemas. To assess performance on geospatio-temporal schema comprehension tasks, we conducted a laboratory experiment using two semantically identical conceptual schemas, one of which mapped closely to the internal task representation while the other did not. As expected, we found that the geospatio-temporal conceptual schema that corresponded to the internal representation of the task enhanced the accuracy of schema comprehension; comprehension time was equivalent for both. Cognitive fit between the internal representation of the task and conceptual schemas with geospatio-temporal annotations was, therefore, manifested in accuracy of schema comprehension and not in time for problem solution. Our findings suggest that the annotated schemas facilitate understanding of data semantics represented on the schema.

Relevance:

60.00%

Publisher:

Abstract:

The Thouless-Anderson-Palmer (TAP) approach was originally developed for analysing the Sherrington-Kirkpatrick model in the study of spin glasses and has since been employed mainly in the context of extensively connected systems, in which each dynamical variable interacts weakly with the others. Recently, we extended this method to handle general intensively connected systems, where each variable has only O(1) connections characterised by strong couplings. However, the new formulation looks quite different from existing analyses, and it is natural to ask whether it actually reproduces known results for systems of extensive connectivity. In this chapter, we apply our formulation of the TAP approach to an extensively connected system, the Hopfield associative memory model, showing that it produces results identical to those obtained by the conventional formulation.
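
For reference, a minimal Hopfield associative memory sketch: Hebbian storage and zero-temperature sign dynamics. This is the standard textbook model the chapter analyses, not the TAP computation itself; pattern count and sizes are illustrative.

```python
import numpy as np

def hopfield_train(patterns):
    """Hebbian weight matrix for the Hopfield model:
    W = (1/N) * sum over patterns of xi xi^T, with zero diagonal."""
    N = patterns.shape[1]
    W = (patterns.T @ patterns) / N
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_recall(W, x, steps=10):
    """Zero-temperature recall: iterate the sign dynamics."""
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1          # break ties consistently
    return x

# Usage: store two random +/-1 patterns, recall from a corrupted cue.
rng = np.random.default_rng(5)
patterns = rng.choice([-1, 1], size=(2, 100))
W = hopfield_train(patterns)
cue = patterns[0].copy()
cue[:20] *= -1                                            # flip 20% of the bits
print(np.mean(hopfield_recall(W, cue) == patterns[0]))    # fraction recovered
```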

Relevance:

60.00%

Publisher:

Abstract:

This thesis initially presents an 'assay' of the literature pertaining to individual differences in human-computer interaction. A series of experiments is then reported, designed to investigate the association between a variety of individual characteristics and various computer task and interface factors. Predictor variables included age, computer expertise, and psychometric tests of spatial visualisation, spatial memory, logical reasoning, associative memory, and verbal ability. These were studied in relation to a variety of computer-based tasks, including: (i) word processing and its component elements; (ii) the location of target words within passages of text; (iii) the navigation of networks and menus; (iv) command generation using menus and command line interfaces; (v) the search and selection of icons and text labels; (vi) information retrieval. A measure of self-reported workload was also included in several of these experiments. The main experimental findings included: (i) an interaction between spatial ability and the manipulation of semantic but not spatial interface content; (ii) verbal ability being predictive of only certain task components of word processing; (iii) age differences in word processing and information retrieval speed but not accuracy; (iv) evidence of compensatory strategies being employed by older subjects; (v) evidence of performance strategy differences which disadvantaged high-spatial subjects in conditions of low spatial information content; (vi) interactive effects of associative memory, expertise and command strategy; (vii) an association between logical reasoning and word processing but not information retrieval; (viii) an interaction between expertise and cognitive demand; and (ix) a stronger association between cognitive ability and novice performance than expert performance.

Relevance:

40.00%

Publisher:

Abstract:

Neuroimaging studies of episodic memory, or memory for events from our personal past, have predominantly focused on the medial temporal lobe (MTL). There is growing acknowledgement, however, from the cognitive neuroscience of memory literature that regions outside the MTL can support episodic memory processes. The medial prefrontal cortex (PFC) is one such region garnering increasing interest from researchers. Using behavioral and functional magnetic resonance imaging measures over two studies, this thesis provides evidence of a mnemonic role for the medial PFC. In the first study, participants were scanned while judging the extent to which they agreed or disagreed with the sociopolitical views of unfamiliar individuals. Behavioral tests of associative recognition revealed that participants remembered with high confidence viewpoints previously linked with judgments of strong agreement/disagreement. Neurally, the medial PFC mediated the interaction between high-confidence associative recognition memory and beliefs associated with strong agree/disagree judgments. In an effort to generalize this finding to well-established associative information, the second study investigated associative recognition memory for real-world concepts. Object-scene pairs congruent or incongruent with a preexisting schema were presented to participants in a cued-recall paradigm. Behavioral tests of conceptual and perceptual recognition revealed memory enhancements arising from strong resonance between presented pairs and preexisting schemas. Neurally, the medial PFC tracked increases in visual recall of schema-congruent pairs, whereas the MTL tracked increases in visual recall of schema-incongruent pairs. Additionally, ventral areas of the medial PFC tracked conceptual components of visual recall specifically for schema-congruent pairs. These findings are consistent with a recent theoretical proposal of medial PFC contributions to memory for schema-related content. Collectively, these studies provide evidence of a role for the medial PFC in associative recognition memory, both for associative information deployed in our daily social interactions and for associations formed over multiple learning episodes. This set of findings also advances our understanding of the cognitive contributions of the medial PFC beyond its canonical role in processes underlying social cognition.

Relevance:

30.00%

Publisher:

Abstract:

Free association norms indicate that words are organized into semantic/associative neighborhoods within a larger network of words and links that bind the net together. We present evidence indicating that memory for a recent word event can depend on implicitly and simultaneously activating related words in its neighborhood. Processing a word during encoding primes its network representation as a function of the density of the links in its neighborhood. Such priming increases recall and recognition and can have long-lasting effects when the word is processed in working memory. Evidence for this phenomenon is reviewed in extralist cuing, primed free association, intralist cuing, and single-item recognition tasks. The findings also show that when a related word is presented to cue the recall of a studied word, the cue activates it within an array of related words that distract and reduce the probability of its selection. The activation of the semantic network thus produces priming benefits during encoding and search costs during retrieval. In extralist cuing, recall is a negative function of cue-to-distracter strength and a positive function of neighborhood density, cue-to-target strength, and target-to-cue strength. We show how four measures derived from the network can be combined and used to predict memory performance. These measures play different roles in different tasks, indicating that the contribution of the semantic network varies with the context provided by the task. We evaluate spreading activation and quantum-like entanglement explanations for the priming effect produced by neighborhood density.
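
The four network measures named above can be read off a free-association matrix. The sketch below uses toy numbers and one plausible operationalization of each measure (the exact definitions in the cited norms may differ), purely to make the quantities concrete.

```python
import numpy as np

# Illustrative free-association network: S[i, j] is the normed probability
# that word i produces word j in free association.
words = ["cue", "target", "n1", "n2", "d1"]
S = np.array([
    [0.00, 0.30, 0.10, 0.00, 0.25],   # cue    -> ...
    [0.20, 0.00, 0.15, 0.10, 0.00],   # target -> ...
    [0.00, 0.20, 0.00, 0.30, 0.00],   # n1     -> ...
    [0.00, 0.10, 0.25, 0.00, 0.00],   # n2     -> ...
    [0.05, 0.00, 0.00, 0.00, 0.00],   # d1     -> ...
])
cue, target = 0, 1

cue_to_target = S[cue, target]
target_to_cue = S[target, cue]
# Neighborhood density (one common reading): mean strength of the links
# among the target's associates.
neighbors = np.where(S[target] > 0)[0]
density = S[np.ix_(neighbors, neighbors)].mean()
# Cue-to-distracter strength: how strongly the cue activates non-target associates.
distracters = [w for w in np.where(S[cue] > 0)[0] if w != target]
cue_to_distracter = S[cue, distracters].mean()
print(cue_to_target, target_to_cue, density, cue_to_distracter)
```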

Relevance:

30.00%

Publisher:

Abstract:

Pavlovian fear conditioning is a robust technique for examining behavioral and cellular components of fear learning and memory. In fear conditioning, the subject learns to associate a previously neutral stimulus with an inherently noxious co-stimulus. The learned association is reflected in the subjects' behavior upon subsequent re-exposure to the previously neutral stimulus or the training environment. Using fear conditioning, investigators can obtain a large amount of data that describe multiple aspects of learning and memory. In a single test, researchers can evaluate functional integrity in fear circuitry, which is both well characterized and highly conserved across species. Additionally, the availability of sensitive and reliable automated scoring software makes fear conditioning amenable to high-throughput experimentation in the rodent model; thus, this model of learning and memory is particularly useful for pharmacological and toxicological screening. Due to the conserved nature of fear circuitry across species, data from Pavlovian fear conditioning are highly translatable to human models. We describe equipment and techniques needed to perform and analyze conditioned fear data. We provide two examples of fear conditioning experiments, one in rats and one in mice, and the types of data that can be collected in a single experiment.