975 results for memory processing
Abstract:
The goal of the RAP-WAM AND-parallel Prolog abstract architecture is to provide inference speeds significantly beyond those of sequential systems, while supporting Prolog semantics and preserving sequential performance and storage efficiency. This paper presents simulation results supporting these claims, with special emphasis on memory performance on a two-level shared-memory multiprocessor organization. Several solutions to the cache coherency problem are analyzed. It is shown that RAP-WAM offers good locality and storage efficiency and that it can effectively take advantage of broadcast caches. It is argued that speeds in excess of 2 MLIPS on real applications exhibiting medium parallelism can be attained with current technology.
Abstract:
Collaborative filtering recommender systems contribute to alleviating the information overload that exists on the Internet as a result of the mass use of Web 2.0 applications. The choice of an adequate similarity measure is a determining factor in the quality of a recommender system's predictions and recommendations, as well as in its performance. In this paper, we present a memory-based collaborative filtering similarity measure that provides high-quality, balanced results with a low processing time (high performance), similar to that required by traditional similarity metrics. The experiments were carried out on the MovieLens and Netflix databases, using a representative set of information retrieval quality measures.
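The paper's own similarity measure is not specified in the abstract, so the sketch below only illustrates the memory-based (user-based) collaborative filtering setup it builds on, using Pearson correlation as one of the traditional metrics the abstract compares against; the rating matrix and all names are invented for illustration.

```python
import numpy as np

def pearson_sim(u, v):
    """Pearson correlation over items co-rated by users u and v (0 = unrated)."""
    mask = (u > 0) & (v > 0)
    if mask.sum() < 2:
        return 0.0
    cu, cv = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
    denom = np.sqrt((cu ** 2).sum() * (cv ** 2).sum())
    return float(cu @ cv / denom) if denom else 0.0

def predict(ratings, user, item, k=2):
    """Predict a rating as a similarity-weighted average of the k most similar
    users who rated the item (negative similarities kept for brevity)."""
    sims = np.array([pearson_sim(ratings[user], ratings[v])
                     if v != user and ratings[v, item] > 0 else 0.0
                     for v in range(ratings.shape[0])])
    top = sims.argsort()[::-1][:k]
    if sims[top].sum() == 0:
        return float(ratings[user][ratings[user] > 0].mean())  # fall back to the user's mean
    return float(sims[top] @ ratings[top, item] / sims[top].sum())

# Toy user x item rating matrix (rows: users, columns: items, 0 = unrated).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 4, 4]], dtype=float)
print(predict(R, user=1, item=1))  # estimate user 1's rating of item 1
```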
Abstract:
The evolution of smartphones, now equipped with digital cameras, is driving a growing demand for ever more complex applications that rely on real-time computer vision algorithms. However, video signals keep increasing in size, whereas the performance of single-core processors has stagnated in recent years. Consequently, new computer vision algorithms need to be parallel, so that they can run on multiple processors and scale computationally. One of the most promising classes of processors today is found in graphics processing units (GPUs): devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them attractive for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. We show that by parallelizing their subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usually employed in 3D TV. Using a backward-mapping approach with a depth-inpainting scheme based on median filters, we show that these techniques are adequate for free-viewpoint video applications. We also show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a background subtraction system based on kernel density estimation techniques. These techniques are well suited to modelling complex scenes with multimodal backgrounds, but have seen little use because of their large computational and memory requirements. The proposed system, implemented in real time on a GPU, features dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the positions of the reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared to other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for approximating arbitrarily complex functions with continuous piecewise linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, normally unused for numerical computation. The proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the function's domain that minimizes the approximation error.
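The thesis' GPU method relies on texture filtering hardware, which is not reproducible here; the sketch below only illustrates the underlying numerical idea, a continuous piecewise linear approximation over a uniform partition of the domain and its error as the number of samples grows (the thesis derives a quasi-optimal, non-uniform partition). The example function is arbitrary.

```python
import numpy as np

def pwl_approx(f, a, b, n_samples, x):
    """Continuous piecewise linear approximation of f on [a, b] using
    n_samples uniformly spaced knots, evaluated at points x (this mimics what
    a linearly filtered texture fetch would return on a GPU)."""
    knots = np.linspace(a, b, n_samples)
    values = f(knots)
    return np.interp(x, knots, values)

f = lambda x: np.exp(-x) * np.sin(4 * x)   # arbitrary "expensive" function
x = np.linspace(0.0, np.pi, 10_000)

# For a smooth f and uniform knots the max error shrinks roughly as O(1/n^2).
for n in (8, 16, 32, 64):
    err = np.max(np.abs(f(x) - pwl_approx(f, 0.0, np.pi, n, x)))
    print(f"{n:3d} knots -> max abs error {err:.2e}")
```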
Abstract:
To investigate the types of memory traces recovered by the medial temporal lobe (MTL), neural activity during veridical and illusory recognition was measured with the use of functional MRI (fMRI). Twelve healthy young adults watched a videotape segment in which two speakers alternately presented lists of associated words, and then the subjects performed a recognition test including words presented in the study lists (True items), new words closely related to studied words (False items), and new unrelated words (New items). The main finding was a dissociation between two MTL regions: whereas the hippocampus was similarly activated for True and False items, suggesting the recovery of semantic information, the parahippocampal gyrus was more activated for True than for False items, suggesting the recovery of perceptual information. The study also yielded a dissociation between two prefrontal cortex (PFC) regions: whereas bilateral dorsolateral PFC was more activated for True and False items than for New items, possibly reflecting monitoring of retrieved information, left ventrolateral PFC was more activated for New than for True and False items, possibly reflecting semantic processing. Precuneus and lateral parietal regions were more activated for True and False than for New items. Orbitofrontal cortex and cerebellar regions were more activated for False than for True items. In conclusion, the results suggest that activity in anterior MTL regions does not distinguish True from False, whereas activity in posterior MTL regions does.
Abstract:
Cholinergic transmission at muscarinic acetylcholine receptors (mAChR) has been implicated in higher brain functions such as learning and memory, and loss of synapses may contribute to the symptoms of Alzheimer disease. A heterogeneous family of five genetically distinct mAChR subtypes differentially modulate a variety of intracellular signaling systems as well as the processing of key molecules involved in the pathology of the disease. Although many muscarinic effects have been identified in memory circuits, including a diversity of pre- and post-synaptic actions in hippocampus, the identities of the molecular subtypes responsible for any given function remain elusive. All five mAChR genes are expressed in hippocampus, and subtype-specific antibodies have enabled identification, quantification, and localization of the encoded proteins. The m1, m2, and m4 mAChR proteins are most abundant in forebrain regions and they have distinct cellular and subcellular localizations suggestive of various pre- and postsynaptic functions in cholinergic circuits. The subtypes are also differentially altered in postmortem brain samples from Alzheimer disease cases. Further understanding of the molecular pharmacology of failing synapses in Alzheimer disease, together with the development of new subtype-selective drugs, may provide more specific and effective treatments for the disease.
Abstract:
We review research on the neural bases of verbal working memory, focusing on human neuroimaging studies. We first consider experiments that indicate that verbal working memory is composed of multiple components. One component involves the subvocal rehearsal of phonological information and is neurally implemented by left-hemisphere speech areas, including Broca’s area, the premotor area, and the supplementary motor area. Other components of verbal working memory may be devoted to pure storage and to executive processing of the contents of memory. These studies rest on a subtraction logic, in which two tasks are imaged, differing only in that one task presumably has an extra process, and the difference image is taken to reflect that process. We then review studies that show that the previous results can be obtained with experimental methods other than subtraction. We focus on the method of parametric variation, in which a parameter that presumably reflects a single process is varied. In the last section, we consider the distinction between working memory tasks that require only storage of information vs. those that require that the stored items be processed in some way. These experiments provide some support for the hypothesis that, when a task requires processing the contents of working memory, the dorsolateral prefrontal cortex is disproportionately activated.
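As a purely illustrative aside, the two analysis strategies described above, subtraction logic and parametric variation, reduce to a voxel-wise difference of condition means and a voxel-wise regression on the varied parameter; the synthetic maps below are invented solely to show the arithmetic and do not reproduce any of the reviewed studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, shape = 20, (8, 8, 8)          # toy 3-D activation maps per trial

# Subtraction logic: mean(task A) - mean(task B) is taken to isolate the
# extra process present only in task A.
task_a = rng.normal(1.0, 0.5, (n_trials, *shape))   # task with the extra process
task_b = rng.normal(0.8, 0.5, (n_trials, *shape))   # control task
difference_image = task_a.mean(axis=0) - task_b.mean(axis=0)

# Parametric variation: regress each voxel's signal on a single varied
# parameter (e.g., memory load), so the slope map reflects one process.
load = np.arange(1, n_trials + 1, dtype=float)      # e.g., items held in memory
signal = rng.normal(0.0, 0.5, (n_trials, *shape)) + 0.1 * load[:, None, None, None]
load_c = load - load.mean()
slope_map = (load_c[:, None, None, None] * (signal - signal.mean(axis=0))).sum(axis=0) \
            / (load_c ** 2).sum()

print(difference_image.mean(), slope_map.mean())    # roughly 0.2 and 0.1 here
```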
Abstract:
This article reviews attempts to characterize the mental operations mediated by left inferior prefrontal cortex, especially the anterior and inferior portion of the gyrus, with the functional neuroimaging techniques of positron emission tomography and functional magnetic resonance imaging. Activations in this region occur during semantic, relative to nonsemantic, tasks for the generation of words to semantic cues or the classification of words or pictures into semantic categories. This activation appears in the right prefrontal cortex of people known to be atypically right-hemisphere dominant for language. In this region, activations are associated with meaningful encoding that leads to superior explicit memory for stimuli and deactivations with implicit semantic memory (repetition priming) for words and pictures. New findings are reported showing that patients with global amnesia show deactivations in the same region associated with repetition priming, that activation in this region reflects selection of a response from among numerous relative to few alternatives, and that activations in a portion of this region are associated specifically with semantic relative to phonological processing. It is hypothesized that activations in left inferior prefrontal cortex reflect a domain-specific semantic working memory capacity that is invoked more for semantic than nonsemantic analyses regardless of stimulus modality, more for initial than for repeated semantic analysis of a word or picture, more when a response must be selected from among many than few legitimate alternatives, and that yields superior later explicit memory for experiences.
Abstract:
Working memory refers to the ability of the brain to store and manipulate information over brief time periods, ranging from seconds to minutes. As opposed to long-term memory, which is critically dependent upon hippocampal processing, critical substrates for working memory are distributed in a modality-specific fashion throughout cortex. N-methyl-D-aspartate (NMDA) receptors play a crucial role in the initiation of long-term memory. Neurochemical mechanisms underlying the transient memory storage required for working memory, however, remain obscure. Auditory sensory memory, which refers to the ability of the brain to retain transient representations of the physical features (e.g., pitch) of simple auditory stimuli for periods of up to approximately 30 sec, represents one of the simplest components of the brain working memory system. Functioning of the auditory sensory memory system is indexed by the generation of a well-defined event-related potential, termed mismatch negativity (MMN). MMN can thus be used as an objective index of auditory sensory memory functioning and a probe for investigating underlying neurochemical mechanisms. Monkeys generate cortical activity in response to deviant stimuli that closely resembles human MMN. This study uses a combination of intracortical recording and pharmacological micromanipulations in awake monkeys to demonstrate that both competitive and noncompetitive NMDA antagonists block the generation of MMN without affecting prior obligatory activity in primary auditory cortex. These findings suggest that, on a neurophysiological level, MMN represents selective current flow through open, unblocked NMDA channels. Furthermore, they suggest a crucial role of cortical NMDA receptors in the assessment of stimulus familiarity/unfamiliarity, which is a key process underlying working memory performance.
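MMN is conventionally quantified as the averaged response to deviant stimuli minus the averaged response to standard stimuli; the sketch below computes such a difference wave on synthetic epochs (invented numbers, not the intracortical data of the study).

```python
import numpy as np

rng = np.random.default_rng(1)
fs, epoch_ms = 1000, 400                      # 1 kHz sampling, 400 ms epochs
t = np.arange(epoch_ms) / fs * 1000           # time axis in ms

def synth_epochs(n, mmn_amp):
    """Synthetic single-trial epochs: noise plus a negativity peaking near 150 ms."""
    mmn = -mmn_amp * np.exp(-((t - 150) ** 2) / (2 * 30 ** 2))
    return mmn + rng.normal(0, 2.0, (n, t.size))

standards = synth_epochs(800, mmn_amp=0.0)    # frequent stimuli: no MMN
deviants = synth_epochs(100, mmn_amp=1.5)     # rare stimuli: elicit MMN

# Mismatch negativity = deviant ERP minus standard ERP (averaged over trials).
mmn_wave = deviants.mean(axis=0) - standards.mean(axis=0)
peak_ms = t[mmn_wave.argmin()]
print(f"MMN peak {mmn_wave.min():.2f} (arbitrary units) at ~{peak_ms:.0f} ms")
```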
Abstract:
The current trend in the evolution of sensor systems seeks ways to provide more accuracy and resolution while decreasing size and power consumption. Field Programmable Gate Arrays (FPGAs) provide reprogrammable hardware that can be exploited to obtain a reconfigurable sensor system. This adaptation capability enables the implementation of complex applications through partial reconfiguration at very low power consumption. For highly demanding tasks, FPGAs have been favored because of the efficiency provided by their architectural flexibility (parallelism, on-chip memory, etc.), their reconfigurability and their strong performance in implementing algorithms. FPGAs have improved the performance of sensor systems and triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable, lower-power sensors based on FPGAs is being developed in Spain. In this paper, we review these developments, describing the FPGA technologies employed by the different research groups and providing an overview of future research in this field.
Abstract:
"August 1973."
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
A specialised reconfigurable architecture is targeted at wireless base-band processing. It is built to cater for multiple wireless standards, has lower power consumption than processor-based solutions, and can be scaled to run in parallel for processing multiple channels. Test resources are embedded in the architecture and testing strategies are included. The architecture is functionally partitioned according to the common operations found in wireless standards, such as CRC error correction, convolution and interleaving. These modules are linked via Virtual Wire Hardware modules and route-through switch matrices, and data can be processed in any order through this interconnect structure. Virtual Wire provides the same flexibility as conventional interconnects while reducing the area occupied and the number of switches needed. The testing algorithm exhaustively scans all possible paths within the interconnection network and searches for faults in the processing modules: it starts by scanning the externally addressable memory space and testing the master controller; the controller then tests every switch in the route-through switch matrix by making loops from the shared memory to each of the switches; the local switch matrix is tested in the same way; next the local memory is scanned; finally, pre-defined test vectors are loaded into local memory to check the processing modules. The paper compares various base-band processing solutions, describes the proposed platform and its implementation, outlines the test resources and algorithm, and concludes with the mapping of Bluetooth and GSM base-band processing onto the platform.
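As a rough sketch of the test sequence described above, the following code mirrors its ordering only; the Platform class and every step name are hypothetical stand-ins, not the paper's actual interfaces or signals.

```python
# Purely illustrative: a software stand-in for the hardware self-test ordering
# described in the abstract. Nothing here exists in the paper's platform.
class Platform:
    def __init__(self, n_route_switches, n_local_switches, modules):
        self.n_route_switches = n_route_switches
        self.n_local_switches = n_local_switches
        self.modules = modules
        self.log = []

    def step(self, name):
        self.log.append(name)   # on real hardware this would drive and read signals
        return True             # assume the stage passes in this sketch

def self_test(p: Platform) -> bool:
    ok = p.step("scan externally addressable memory space")
    ok &= p.step("test master controller")
    for i in range(p.n_route_switches):            # loop from shared memory to each switch
        ok &= p.step(f"loop test route-through switch {i}")
    for i in range(p.n_local_switches):            # same check for the local switch matrix
        ok &= p.step(f"loop test local switch {i}")
    ok &= p.step("scan local memory")
    for m in p.modules:                            # pre-defined test vectors per module
        ok &= p.step(f"apply test vectors to {m}")
    return bool(ok)

p = Platform(n_route_switches=4, n_local_switches=2,
             modules=["crc", "convolution", "interleaver"])
print(self_test(p), len(p.log), "stages")
```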
Abstract:
To determine whether the visuospatial n-back working memory task is a reliable and valid measure of the cognitive processes believed to underlie intelligence, this study compared the reaction times and accuracy of performance of 70 participants with their performance on the Multidimensional Aptitude Battery (MAB). Testing was conducted over two sessions separated by one week, and participants completed the MAB during the second session. Moderate test-retest reliability for percentage accuracy scores was found across the four levels of the n-back task, whilst reaction times were highly reliable. Furthermore, participants' performance on the MAB was negatively correlated with accuracy at the easier levels of the n-back task and positively correlated with accuracy at the harder task levels. These findings confirm previous research examining the cognitive basis of intelligence and suggest that intelligence is the product of faster information processing as well as superior working memory capacity.
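For reference, the target rule of an n-back task (a trial is a target when the current stimulus matches the one presented n positions earlier) and a toy accuracy-aptitude correlation can be sketched as follows; all data below are synthetic and unrelated to the study's results.

```python
import numpy as np

def nback_targets(stimuli, n):
    """Mark each trial as a target if it matches the stimulus n positions back."""
    return [i >= n and stimuli[i] == stimuli[i - n] for i in range(len(stimuli))]

stimuli = list("ABACBBCABB")
print(nback_targets(stimuli, 2))   # 2-back targets for this sequence

# Toy group analysis: correlate per-participant n-back accuracy with an
# aptitude score (numbers are fabricated purely for illustration).
rng = np.random.default_rng(2)
accuracy = rng.uniform(0.5, 1.0, 70)                  # 70 participants, as in the study
aptitude = 60 + 40 * accuracy + rng.normal(0, 5, 70)  # invented positive relationship
r = np.corrcoef(accuracy, aptitude)[0, 1]
print(f"accuracy-aptitude correlation r = {r:.2f}")
```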
Abstract:
The aim of the present study was to investigate verb and context processing in 10 individuals with Parkinson's disease (PD) and matched controls. A self-paced stop-making-sense judgment task was employed in which participants read a sentence preceded by a context that made the thematic role of the verb plausible or implausible. Participants were required to indicate whether the sentence ceased to make sense at any point by responding yes/no at each word. PD participants were less accurate than control participants at detecting sentence anomalies based on verb selection restrictions and previously encountered contextual elements. However, further research is required to determine the precise nature of the grammatical processing disturbance associated with PD.