20 results for Novel of memory
in Aston University Research Archive
Abstract:
A nature-inspired decentralised multi-agent algorithm is proposed to solve a problem of distributed task selection in which cities produce and store batches of different mail types. Agents must collect and process the mail batches, without a priori knowledge of the available mail at the cities or inter-agent communication. In order to process a different mail type than the previous one, an agent must undergo a change-over during which it remains inactive. We propose a threshold-based algorithm in order to maximise the overall efficiency (the average amount of mail collected). We show that memory, i.e. the possibility for agents to develop preferences for certain cities, not only leads to emergent cooperation between agents, but also to a significant increase in efficiency (above the theoretical upper limit for any memoryless algorithm), and we systematically investigate the influence of the various model parameters. Finally, we demonstrate the flexibility of the algorithm to changes in circumstances, and its excellent scalability.
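As a concrete (and entirely hypothetical) illustration of the mechanism sketched in this abstract, the following Python fragment implements one plausible reading of a threshold-based agent with city memory. The names (Agent, MAIL_TYPES, CHANGEOVER_STEPS), the response function s^2/(s^2 + theta^2) and the reinforcement constants are assumptions for illustration, not the authors' algorithm.

import random

MAIL_TYPES = ["A", "B"]
N_CITIES = 5
CHANGEOVER_STEPS = 3   # steps an agent stays inactive after switching mail type

class Agent:
    def __init__(self):
        self.mail_type = random.choice(MAIL_TYPES)
        self.inactive = 0
        # memory: a preference weight per city, reinforced on success
        self.preference = [1.0] * N_CITIES

    def choose_city(self):
        # roulette-wheel selection over remembered city preferences
        r = random.uniform(0, sum(self.preference))
        for city, w in enumerate(self.preference):
            r -= w
            if r <= 0:
                return city
        return N_CITIES - 1

    def step(self, stock):
        # stock[city][mail_type] -> number of stored batches
        if self.inactive > 0:              # still in change-over
            self.inactive -= 1
            return 0
        city = self.choose_city()
        best = max(MAIL_TYPES, key=lambda m: stock[city][m])
        s, theta = stock[city][best], 5.0
        # threshold response: accept with probability s^2 / (s^2 + theta^2)
        if random.random() < s * s / (s * s + theta * theta):
            if best != self.mail_type:     # switching type costs a change-over
                self.mail_type = best
                self.inactive = CHANGEOVER_STEPS
                return 0
            stock[city][best] -= 1
            self.preference[city] += 0.5   # reinforce memory of this city
            return 1                       # one batch collected and processed
        self.preference[city] = max(0.1, self.preference[city] - 0.1)
        return 0

# toy run: cities keep producing mail while 10 agents collect it
stock = [{m: random.randint(0, 10) for m in MAIL_TYPES} for _ in range(N_CITIES)]
agents = [Agent() for _ in range(10)]
processed = 0
for _ in range(200):
    for city_stock in stock:
        city_stock[random.choice(MAIL_TYPES)] += 1
    processed += sum(agent.step(stock) for agent in agents)
print("batches processed:", processed)

Because each agent reinforces only the cities where it succeeds, the preference weights differentiate over time, which is how memory can yield the emergent specialisation the abstract describes.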
Abstract:
Book review: Eigler, Friederike, and Jens Kugele, eds. Heimat. At the Intersection of Memory and Space. Berlin: de Gruyter, 2012. 268 pp. $126.00 hardcover.
Abstract:
This paper explores how the concept of Alzheimer's disease (AD) is constructed and represented in Spanish media and documentary films. The article analyses three documentary films and the cultural and social contexts in and from which they emerged: Solé's Bucarest: la memòria perduda [Bucharest: Memory Lost] (2007), Bosch's Bicicleta, cullera, poma [Bicycle, Spoon, Apple] (2010), and Frabra's Las voces de la memoria [Memory's Voices] (2011). The three documentary films approach AD from different perspectives, creating well-structured discourses of what AD represents for contemporary Spanish society, from the medicalisation of AD to issues of personhood and citizenship. These three films are studied from an interdisciplinary perspective, in an effort to strengthen the links between ageing and dementia studies and cultural studies. Examining documentary film representations of AD from these perspectives enables semiotic analyses beyond the aesthetic perspectives of film studies, and the exploration of the articulation of knowledge and power in discourses about AD in contemporary Spain.
Abstract:
The observation of parallels between the memory distortion and persuasion literatures leads, quite logically, to the appealing notion that people can be 'persuaded' to change their memories. Indeed, numerous studies show that memory can be influenced and distorted by a variety of persuasive tactics, and the theoretical accounts commonly used by researchers to explain episodic and autobiographical memory distortion phenomena can generally predict and explain these persuasion effects. Yet, despite these empirical and theoretical overlaps, explicit reference to persuasion and attitude-change research in the memory distortion literature is surprisingly rare. In this paper, we argue that stronger theoretical foundations are needed to draw the memory distortion and persuasion literatures together in a productive direction. We reason that theoretical approaches to remembering that distinguish (false) beliefs in the occurrence of events from (false) memories of those events - compatible with a source monitoring approach - would be beneficial to this end. Such approaches, we argue, would provide a stronger platform to use persuasion findings to enhance the psychological understanding of memory distortion.
Abstract:
The focus of this article is the process of doing memory-work research. We tell the story of our experience of what it was like to use this approach. We were enthused to work collectively on a "discovery" project to explore a method with which we were unfamiliar. We hoped to build working relationships based on mutual respect and the desire to focus on methodology and its place in our psychological understanding. The empirical activities highlighted methodological and experiential challenges, which tested our adherence to the social constructionist premise of Haug's original description of memory-work. Combined with the practical difficulties of living across Europe, writing and analyzing the memories became contentious. We found ourselves having to address a number of tensions emanating from the work and our approach to it. We discuss some of these tensions alongside examples that illustrate the research process and the ways we negotiated the collective nature of the memory-work approach.
Abstract:
Background - Not only is compulsive checking the most common symptom in Obsessive Compulsive Disorder (OCD), with an estimated prevalence of 50–80% in patients, but approximately 15% of the general population reveal subclinical checking tendencies that impact negatively on their performance in daily activities. It is therefore critical to understand how checking affects attention and memory in clinical as well as subclinical checkers. Eye fixations are commonly used as indicators of the distribution of attention, but research in OCD has revealed mixed results at best. Methodology/Principal Findings - Here we report atypical eye movement patterns in subclinical checkers during an ecologically valid working memory (WM) manipulation. Our key manipulation was to present an intermediate probe during the delay period of the memory task, explicitly asking for the location of a letter which, however, had not been part of the encoding set (i.e., misleading participants). Using eye movement measures we now provide evidence that high checkers' inhibitory impairments for misleading information result in them checking the contents of WM in an atypical manner. Checkers fixate more often and for longer when misleading information is presented than non-checkers. Specifically, checkers spend more time checking stimulus locations as well as locations that had actually been empty during encoding. Conclusions/Significance - We conclude that these atypical eye movement patterns directly reflect internal checking of memory contents, and we discuss the implications of our findings for the interpretation of behavioural and neuropsychological data. In addition, our results highlight the importance of ecologically valid methodology for revealing the impact of detrimental attention and memory checking on eye movement patterns.
Abstract:
Despite the large body of research regarding the role of memory in OCD, the results are described as mixed at best (Hermans et al., 2008). For example, inconsistent findings have been reported with respect to basic capacity, intact verbal memory, and generally affected visuospatial memory. We suggest that this is due to the traditional pursuit of OCD memory impairment as a matter of general capacity and/or domain specificity (visuospatial vs. verbal). In contrast, we conclude from our experiments (i.e., Harkin & Kessler, 2009, 2011; Harkin, Rutherford, & Kessler, 2011) and recent literature (e.g., Greisberg & McKay, 2003) that OCD memory impairment is secondary to executive dysfunction; more specifically, we identify three common factors (EBL: Executive-functioning efficiency, Binding complexity, and memory Load) that we generalize to 58 experimental findings from 46 OCD memory studies. As a result we explain otherwise inconsistent findings – e.g., intact vs. deficient verbal memory – that are difficult to reconcile within a capacity or domain-specific perspective. We conclude by discussing the relationship between our account and others', which in most cases is complementary rather than contradictory.
Abstract:
Data Envelopment Analysis (DEA) is one of the most widely used methods for measuring the efficiency and productivity of Decision Making Units (DMUs). DEA for a large dataset with many inputs/outputs would require huge computer resources in terms of memory and CPU time. This paper proposes a neural network back-propagation Data Envelopment Analysis to address this problem for the very large scale datasets now emerging in practice. The neural network's requirements for computer memory and CPU time are far lower than those of conventional DEA methods, and it can therefore be a useful tool for measuring the efficiency of large datasets. Finally, the back-propagation DEA algorithm is applied to five large datasets and compared with the results obtained by conventional DEA.
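A minimal sketch of the idea (illustrative Python under assumed names and shapes, not the paper's algorithm): conventional DEA scores, obtained by solving one linear programme per DMU, are computed for a subsample only, and a back-propagation network learns the mapping from inputs/outputs to efficiency so the remaining DMUs can be scored cheaply.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# toy data: 10,000 DMUs described by 3 inputs and 2 outputs (5 columns)
X = rng.uniform(1.0, 10.0, size=(10_000, 5))

# assume a conventional DEA run (one LP per DMU) is affordable for 500 DMUs;
# the scores below are placeholders standing in for those LP results
train_idx = rng.choice(len(X), size=500, replace=False)
dea_scores = rng.uniform(0.3, 1.0, size=500)

# back-propagation network learns the inputs/outputs -> efficiency mapping
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
net.fit(X[train_idx], dea_scores)

# cheap approximate efficiency scores for all 10,000 DMUs
approx = np.clip(net.predict(X), 0.0, 1.0)
print(approx[:5])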
Abstract:
Very large spatially-referenced datasets, for example those derived from satellite-based sensors which sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over small time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real-time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful when analysing data in less time-critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly in the case where maximum likelihood estimation is used. Although the storage requirements only scale linearly with the number of observations in the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically, respectively. Most modern commodity hardware has at least 2 processor cores, if not more. Other mechanisms for allowing parallel computation, such as Grid-based systems, are also becoming increasingly commonly available. However, currently there seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms we show that computational time can be significantly reduced. We demonstrate this with both sparsely sampled and densely sampled data on a variety of architectures, ranging from the common dual-core processor found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets, and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
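To make the block-likelihood parallelism concrete, here is a minimal sketch in the spirit of the approximations cited above: the observations are partitioned into blocks, an independent-blocks Gaussian log-likelihood is evaluated per block on separate worker processes, and the contributions are summed. The exponential covariance, parameter names and even partitioning are assumptions for illustration, not the paper's implementation.

import numpy as np
from multiprocessing import Pool

def block_loglik(args):
    # Gaussian log-likelihood of one block under an exponential covariance
    coords, y, variance, length_scale, nugget = args
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = variance * np.exp(-d / length_scale) + nugget * np.eye(len(y))
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (logdet + y @ np.linalg.solve(K, y)
                   + len(y) * np.log(2 * np.pi))

def approx_loglik(coords, y, theta, n_blocks=8):
    # ignoring cross-block covariance turns one O(n^3) solve into
    # n_blocks solves of size n/n_blocks, each dispatched to its own core
    blocks = np.array_split(np.arange(len(y)), n_blocks)
    tasks = [(coords[b], y[b], *theta) for b in blocks]
    with Pool() as pool:
        return sum(pool.map(block_loglik, tasks))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    coords = rng.uniform(0, 100, size=(4000, 2))
    y = rng.normal(size=4000)
    print(approx_loglik(coords, y, theta=(1.0, 10.0, 0.1)))

Wrapping approx_loglik in a numerical optimiser over theta would then give the parallel maximum likelihood parameter (variogram) estimation step described in the abstract.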
Abstract:
The 1980s have seen spectacular advances in our understanding of the molecular bases of neurobiology. Biological membranes, channel proteins, cytoskeletal elements, and neuroactive peptides have all been illuminated by the molecular approach. The operation of synapses can be seen to be far more subtle and complex than had previously been imagined, and the development of the brain and the physical basis of memory have both been illuminated by this new understanding. In addition, some of the ways in which the brain may go wrong can be traced to malfunction at the molecular level. This study attempts a synthesis of this new knowledge, to provide an indication of how an understanding at the molecular level can help towards a theory of the brain in health and disease. The text will be of benefit to undergraduate students of biochemistry, medical science, pharmacy, pharmacology and general biology.
Abstract:
This thesis describes the design and implementation of a new dynamic simulator called DASP. It is a computer program package written in standard Fortran 77 for the dynamic analysis and simulation of chemical plants. Its main uses include the investigation of a plant's response to disturbances, the determination of the optimal ranges and sensitivities of controller settings, and the simulation of the startup and shutdown of chemical plants. The design and structure of the program, and a number of features incorporated into it, combine to make DASP an effective tool for dynamic simulation. It is an equation-oriented dynamic simulator, but the model equations describing the user's problem are generated from an in-built model equation library. A combination of the structuring of the model subroutines, the concept of a unit module, and the use of the connection matrix of the problem given by the user has been exploited to achieve this objective. The Executive program has a structure similar to that of a CSSL-type simulator. DASP solves a system of differential equations coupled to nonlinear algebraic equations using an advanced mixed equation solver. The strategy used in formulating the model equations makes it possible to obtain the steady state solution of the problem using the same model equations. DASP can handle state and time events in an efficient way, including modification of the flowsheet. DASP is highly portable, as has been demonstrated by running it on a number of computers with only trivial modifications. The program runs on a microcomputer with 640 kBytes of memory. It is a semi-interactive program, with the bulk of the input data given in pre-prepared data files and communication with the user via an interactive terminal. Using the features built into the package, the user can view or modify the values of any input data, variables and parameters in the model, and modify the structure of the flowsheet of the problem during a simulation session. The program has been demonstrated and verified using a number of example problems.
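As a toy illustration of what "differential equations coupled to nonlinear algebraic equations" looks like in an equation-oriented simulator, here is a minimal sketch (in Python rather than Fortran 77; the tank model, parameter values and names are assumptions for illustration and are unrelated to DASP's actual model library or solver).

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

A_TANK = 2.0   # tank cross-section [m^2]
CV = 0.4       # valve coefficient
F_IN = 0.6     # inlet flow [m^3/s]

def outflow(h):
    # algebraic equation g(q, h) = q - Cv*sqrt(h) = 0, solved numerically
    # at every evaluation to stand in for the general coupled case
    return brentq(lambda q: q - CV * np.sqrt(max(h, 0.0)), 0.0, 10.0)

def holdup_ode(t, y):
    h = y[0]
    q = outflow(h)                  # resolve the algebraic part first
    return [(F_IN - q) / A_TANK]    # differential part: mass balance dh/dt

# simulate a startup from a nearly empty tank towards steady state
sol = solve_ivp(holdup_ode, (0.0, 200.0), [0.01], max_step=1.0)
print("final level:", sol.y[0, -1], "analytic steady state:", (F_IN / CV) ** 2)

Setting dh/dt = 0 in the same equations gives the steady state directly, which mirrors the abstract's point that one model formulation serves both dynamic and steady-state solution.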
Abstract:
The English writing system is notoriously irregular in its orthography at the phonemic level. It was therefore proposed that focusing beginner spellers' attention on sound-letter relations at the sub-syllabic level might improve spelling performance. This hypothesis was tested in Experiments 1 and 2 using a 'clue word' paradigm to investigate the effect of an analogy teaching intervention versus non-intervention on the spelling performance of an experimental group and controls. The results overall showed the intervention to be effective in improving spelling, and this effect to be enduring. Experiment 3 demonstrated a greater application of analogy in spelling when clue words, which participants used in analogy to spell test words, remained in view during testing. A series of regression analyses, with spelling entered as the criterion variable and age, analogy and phonological plausibility (PP) as predictors, showed both analogy and PP to be highly predictive of spelling. Experiment 4 showed that children could use analogy to improve their spelling, even without intervention, by comparing their performance in spelling words presented in analogous categories or in random lists. Consideration of children's patterns of analogy use at different points of development showed three age groups to use similar patterns of analogy, but contrasting analogy patterns for spelling different words. This challenges stage theories of analogy use in literacy. Overall, the most salient units used in analogy were the rime and, to a slightly lesser degree, the onset-vowel and vowel. Finally, Experiment 5 showed analogy and phonology to be fairly equally influential in spelling, but analogy to be more influential than phonology in reading. Five separate experiments therefore found analogy to be highly influential in spelling. Experiment 5 also considered the role of memory and attention in literacy attainment. The important implication of this research is that analogy, rather than a purely phonics-based strategy, is instrumental in correct spelling in English.
Abstract:
A paradox of memory research is that repeated checking results in a decrease in memory certainty, memory vividness and confidence [van den Hout, M. A., & Kindt, M. (2003a). Phenomenological validity of an OCD-memory model and the remember/know distinction. Behaviour Research and Therapy, 41, 369–378; van den Hout, M. A., & Kindt, M. (2003b). Repeated checking causes memory distrust. Behaviour Research and Therapy, 41, 301–316]. Although these findings have been mainly attributed to changes in episodic long-term memory, it has been suggested [Shimamura, A. P. (2000). Toward a cognitive neuroscience of metacognition. Consciousness and Cognition, 9, 313–323] that representations in working memory could already suffer from detrimental checking. In two experiments we set out to test this hypothesis by employing a delayed-match-to-sample working memory task. Letters had to be remembered in their correct locations, a task that was designed to engage the episodic short-term buffer of working memory [Baddeley, A. D. (2000). The episodic buffer: a new component in working memory? Trends in Cognitive Sciences, 4, 417–423]. Of most importance, we introduced an intermediate distractor question that was prone to induce frustrating and unnecessary checking on trials where no correct answer was possible. Reaction times and confidence ratings on the actual memory test of these trials confirmed the success of this manipulation. Most importantly, high checkers [cf. VOCI; Thordarson, D. S., Radomsky, A. S., Rachman, S., Shafran, R., Sawchuk, C. N., & Hakstian, A. R. (2004). The Vancouver obsessional compulsive inventory (VOCI). Behaviour Research and Therapy, 42(11), 1289–1314] were less accurate than low checkers when frustrating checking was induced, especially if the experimental context actually emphasized the irrelevance of the misleading question. The clinical relevance of this result was substantiated by means of an extreme groups comparison across the two studies. The findings are discussed in the context of detrimental checking and lack of distractor inhibition as a way of weakening fragile bindings within the episodic short-term buffer of Baddeley's (2000) model. Clinical implications, limitations and future research are considered.
Abstract:
Compulsive checking is known to influence memory, yet there is little consideration of checking as a cognitive style within the typical population. We employed a working memory task in which letters had to be remembered in their locations. The key experimental manipulation was to induce repeated checking after encoding by asking about a stimulus that had not been presented. We recorded the effect that such misleading probes had on a subsequent memory test. Participants drawn from the typical population but who scored highly on a checking scale had poorer memory and less confidence than low-scoring individuals. While thoroughness is regarded as a quality, our results indicate that a cognitive style that favours repeated checking does not always lead to the best performance, as it can undermine the authenticity of memory traces. This may affect various aspects of everyday life, including the work environment, and we discuss its implications and possible counter-measures.