114 results for Associative Memory
Abstract:
Modelling how a word is activated in human memory is an important requirement for determining the probability of recall of a word in an extra-list cueing experiment. The spreading activation, spooky-action-at-a-distance and entanglement models have all been used to model the activation of a word. Recently, a hypothesis was put forward that the mean activation levels of the respective models are ordered as follows: Spreading ≤ Entanglement ≤ Spooky-action-at-a-distance. This article investigates this hypothesis by means of a substantial empirical analysis of each model using the University of South Florida word association, rhyme and word norms.
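As a rough illustration of the spreading activation idea this abstract refers to, the Python sketch below spreads associative link strength from a cue word over one- and two-step paths in a free-association network. The network, the words, the link strengths and the particular spreading rule are all hypothetical assumptions for illustration; the models in the article are fitted to the University of South Florida norms and their exact equations are not reproduced here.

```python
# Hypothetical associative network: word -> {associate: link strength},
# in the style of free-association norm data (values are invented).
network = {
    "planet": {"earth": 0.61, "mars": 0.10, "star": 0.08},
    "earth":  {"planet": 0.23, "ground": 0.15},
    "mars":   {"planet": 0.33, "candy": 0.12},
    "star":   {"sky": 0.40, "planet": 0.05},
}

def spreading_activation(word, network):
    """Toy 'spreading' rule: each associate receives the direct link
    strength, and activation also spreads along two-step paths as the
    product of the two link strengths on the path."""
    activation = {}
    for assoc, s1 in network.get(word, {}).items():
        activation[assoc] = activation.get(assoc, 0.0) + s1
        for assoc2, s2 in network.get(assoc, {}).items():
            if assoc2 != word:  # ignore strength flowing straight back
                activation[assoc2] = activation.get(assoc2, 0.0) + s1 * s2
    return activation

act = spreading_activation("planet", network)
```

Under a rule like this, the mean activation over a word's neighbourhood is what the hypothesised ordering of the three models would be computed from.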
Abstract:
It is recognised that individuals do not always respond honestly when completing psychological tests. One of the foremost issues for research in this area is the inability to detect individuals attempting to fake. While a number of strategies have been identified in faking, a commonality of these strategies is the latent role of long-term memory. Seven studies were conducted to examine whether it is possible to detect the activation of faking-related cognitions using a lexical decision task. Study 1 found that engagement with experiential processing styles predicted the ability to fake successfully, confirming the role of associative processing styles in faking. After identifying appropriate stimuli for the lexical decision task (Studies 2A and 2B), Studies 3 to 5 examined whether a cognitive state of faking could be primed and subsequently identified using a lexical decision task. Throughout these studies, the experimental methodology was increasingly refined in an attempt to identify the relevant priming mechanisms. The results were consistent and robust across the three priming studies: faking good on a personality test primed positive faking-related words in the lexical decision tasks. Faking bad, however, did not result in reliable priming of negative faking-related cognitions. To address potential issues with the stimuli and the possible role of affective priming more completely, two additional studies were conducted. Studies 6A and 6B revealed that negative faking-related words were more arousing than positive faking-related words, and that positive faking-related words were more abstract than negative faking-related words and neutral words. Study 7 examined whether the priming effects evident in the lexical decision tasks occurred as a result of an unintentional mood induction while faking the psychological tests. Results were equivocal in this regard.
This program of research aligned the fields of psychological assessment and cognition to inform the preliminary development and validation of a new tool to detect faking. Consequently, an implicit technique to identify attempts to fake good on a psychological test has been identified, using long established and robust cognitive theories in a novel and innovative way. This approach represents a new paradigm for the detection of individuals responding strategically to psychological testing. With continuing development and validation, this technique may have immense utility in the field of psychological assessment.
Abstract:
Various time-memory tradeoff attacks on stream ciphers have been proposed over the years. However, the claimed success of these attacks assumes that the initialisation process of the stream cipher is one-to-one. Some stream cipher proposals do not have a one-to-one initialisation process. In this paper, we examine the impact of this on the success of time-memory-data tradeoff attacks. Under these circumstances, some attacks are more successful than previously claimed while others are less so. The conditions for both cases are established.
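To give a flavour of why a one-to-one initialisation matters, the toy Python sketch below compares an injective toy initialisation with a deliberately lossy one over a small key space. The functions and parameters here are assumptions for illustration only, not drawn from any cipher the paper analyses: a non-injective initialisation shrinks the reachable state space (so a precomputed tradeoff table can cover it with less effort), but a recovered state then maps back to more than one key.

```python
import hashlib

def init_injective(key: bytes) -> bytes:
    """Toy one-to-one initialisation: distinct keys give distinct
    (truncated-hash) states with overwhelming probability."""
    return hashlib.sha256(b"iv" + key).digest()[:8]

def init_lossy(key: bytes) -> bytes:
    """Toy non-injective initialisation: discarding part of the key
    before mixing collapses many keys onto the same state."""
    return hashlib.sha256(b"iv" + key[:2]).digest()[:8]

# 512 three-byte keys; only the first two bytes vary over 8 values each.
keys = [bytes([i, j, k]) for i in range(8) for j in range(8) for k in range(8)]

states_inj = {init_injective(k) for k in keys}
states_lossy = {init_lossy(k) for k in keys}

# The lossy variant reaches only 64 states from 512 keys: a tradeoff
# table needs to cover fewer states, but inverting a state no longer
# identifies a unique key.
```

The two effects pull in opposite directions, which is the tension the paper's conditions for "more successful" versus "less successful" attacks capture.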
Abstract:
As computers approach the physical limits of information storable in memory, new methods will be needed to further improve information storage and retrieval. We propose a quantum-inspired, vector-based approach, which offers a contextually dependent mapping from the subsymbolic to the symbolic representations of information. If implemented computationally, this approach would provide exceptionally high density of information storage, without the traditionally required physical increase in storage capacity. The approach is inspired by the structure of human memory and incorporates elements of Gärdenfors’ Conceptual Space approach and Humphreys et al.’s matrix model of memory.
Abstract:
A century ago, as the Western world embarked on a period of traumatic change, the visual realism of photography and documentary film brought print and radio news to life. The vision that these new media threw into stark relief was one of intense social and political upheaval: the birth of modernity fired and tempered in the crucible of the Great War. As millions died in this fiery chamber and the influenza pandemic that followed, lines of empires staggered to their fall, and new geo-political boundaries were scored in the raw, red flesh of Europe. The decade of 1910 to 1919 also heralded a prolific period of artistic experimentation. It marked the beginning of the social and artistic age of modernity and, with it, the nascent beginnings of a new art form: film. We still live in the shadow of this violent, traumatic and fertile age; haunted by the ghosts of Flanders and Gallipoli and its ripples of innovation and creativity. Something happened here, but to understand how and why is not easy; for the documentary images we carry with us in our collective cultural memory have become what Baudrillard refers to as simulacra. Detached from their referents, they have become referents themselves, to underscore other, grand narratives in television and Hollywood films. The personal histories of the individuals they represent so graphically–and their hope, love and loss–are folded into a national story that serves, like war memorials and national holidays, to buttress social myths and values. And, as filmic images cross-pollinate, with each iteration offering a new catharsis, events that must have been terrifying or wondrous are abstracted. In this paper we first discuss this transformation through reference to theories of documentary and memory–this will form a conceptual framework for a subsequent discussion of the short film Anmer. Produced by the first author in 2010, Anmer is a visual essay on documentary, simulacra and the symbolic narratives of history.
Its form, structure and aesthetic speak of the confluence of documentary, history, memory and dream. Located in the first decade of the twentieth century, its non-linear narratives of personal tragedy and poetic dreamscapes are an evocative reminder of the distance between intimate experience, grand narratives, and the mythologies of popular films. This transformation of documentary sources not only played out in the processes of the film’s production, but also came to form its theme.
Abstract:
Studies of orthographic skills transfer between languages focus mostly on working memory (WM) ability in alphabetic first language (L1) speakers when learning another, often alphabetically congruent, language. We report two studies that, instead, explored the transferability of L1 orthographic processing skills in WM in logographic-L1 and alphabetic-L1 speakers. English-French bilingual and English monolingual (alphabetic-L1) speakers, and Chinese-English (logographic-L1) speakers, learned a set of artificial logographs and associated meanings (Study 1). The logographs were used in WM tasks with and without concurrent articulatory or visuo-spatial suppression. The logographic-L1 bilinguals were markedly less affected by articulatory suppression than alphabetic-L1 monolinguals (who did not differ from their bilingual peers). Bilinguals overall were less affected by spatial interference, reflecting superior phonological processing skills or, conceivably, greater executive control. A comparison of span sizes for meaningful and meaningless logographs (Study 2) replicated these findings. However, the logographic-L1 bilinguals’ spans in L1 were measurably greater than those of their alphabetic-L1 (bilingual and monolingual) peers; a finding unaccounted for by faster articulation rates or differences in general intelligence. The overall pattern of results suggests an advantage (possibly perceptual) for logographic-L1 speakers, over and above the bilingual advantage also seen elsewhere in third language (L3) acquisition.
Abstract:
Different archives of television material construct different versions of Australian national identity. There exists a Pro-Am archive of Australian television history materials consisting of many individual collections. This archive is neither centrally located nor clearly bounded. The collections are not all linked to each other, nor are they aware of each other, and they do not claim to have a single common project. Pro-Am collections tend not to address Australian television as a whole, instead addressing particular genres, programs or production companies. Their vision of Australia is 'ordinary' and everyday. The boundaries of 'Australia' in the Pro-Am archive are porous, allowing non-Australians to contribute material and including non-Australian material, with little accompanying sense of anxiety.
Abstract:
The symbolic and improvisational nature of livecoding requires a shared networking framework that is flexible and extensible, while at the same time providing support for synchronisation, persistence and redundancy. Above all, the framework should be robust and available across a range of platforms. This paper proposes tuple space as a suitable framework for network communication in ensemble livecoding contexts. The role of tuple space as a concurrency framework, and the associated timing aspects of the tuple space model, are explored through Spaces, an implementation of tuple space for the Impromptu environment.
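The tuple space model this abstract builds on can be sketched briefly. The following is a minimal Python rendering of the classic Linda-style operations (out/rd/in), with blocking reads via a condition variable; it is an illustrative sketch only, and the musical tuple shapes are invented. Spaces itself targets the Impromptu environment, and its actual API is not reproduced here.

```python
import threading

class TupleSpace:
    """Linda-style tuple space: out() writes a tuple, rd() reads a
    matching tuple non-destructively, in_() removes a matching tuple.
    Readers block until a match appears, which gives the implicit
    synchronisation useful in ensemble settings."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, pattern):
        # None in a pattern position acts as a wildcard.
        for tup in self._tuples:
            if len(tup) == len(pattern) and all(
                    p is None or p == t for p, t in zip(pattern, tup)):
                return tup
        return None

    def rd(self, pattern):
        with self._cond:
            while (tup := self._match(pattern)) is None:
                self._cond.wait()
            return tup

    def in_(self, pattern):
        with self._cond:
            while (tup := self._match(pattern)) is None:
                self._cond.wait()
            self._tuples.remove(tup)
            return tup

ts = TupleSpace()
ts.out(("tempo", 1, 120))          # e.g. one performer publishes a tempo
tempo = ts.rd(("tempo", 1, None))  # another reads it without consuming it
```

Decoupling writers from readers in this way is what makes the model attractive for ensembles: performers need not know who else is connected, only the tuple shapes they agree on.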
Abstract:
A recent Australian literature digitisation project uncovered some surprising discoveries in the children’s books that it digitised. The Children’s Literature Digital Resources (CLDR) Project digitised children’s books that were first published between 1851 and 1945 and made them available online through AustLit: The Australian Literature Resource. The digitisation process also preserved, within the pages of those books, a range of bookplates, book labels, inscriptions, and loose ephemera. This material allows us to trace the provenance of some of the digitised works, some of which came from the personal libraries of now-famous authors, and others from less celebrated sources. These extra-textual traces can contribute to cultural memory of the past by providing evidence of how books were collected and exchanged, and what kinds of books were presented as prizes in schools and Sunday schools. They also provide insight into Australian literary and artistic networks, particularly of the first few decades of the 20th century. This article describes the kinds of material uncovered in the digitisation process and suggests that the material provides insights into literary and cultural histories that might otherwise be forgotten. It also argues that the indexing of this material is vital if it is not to be lost to future researchers.
Abstract:
The process of researching children’s literature from the past is a growing challenge as resources age and are increasingly treated as rare items, stored away within libraries and other research centres. In Australia, researchers and librarians have collaborated with the bibliographic database AustLit: The Australian Literature Resource to produce the Australian Children’s Literature Digital Resources Project (CLDR). This Project aims to address the growing demand for online access to rare children’s literature resources, and demonstrates the research potential of early Australian children’s literature by supplementing the collection with relevant critical articles. The CLDR project is designed with a specific focus and provides access to full-text Australian children’s literature from European settlement to 1945. The collection demonstrates a need and desire to preserve literary treasures and prevent the loss of such collections in a digital age. The collection covers many themes relevant to the conference, including trauma, survival, memory, hauntings, and histories. The resource provides new and exciting ways to research children’s literature from the past and offers a fascinating repository to scholars and professionals across a range of disciplines who are interested in Australian children’s literature.
Abstract:
Elaborated Intrusion theory (EI theory; Kavanagh, Andrade, & May, 2005) posits two main cognitive components in craving: associative processes that lead to intrusive thoughts about the craved substance or activity, and elaborative processes supporting mental imagery of the substance or activity. We used a novel visuospatial task to test the hypothesis that visual imagery plays a key role in craving. Experiment 1 showed that spending 10 min constructing shapes from modeling clay (plasticine) reduced participants' craving for chocolate compared with spending 10 min 'letting your mind wander'. Increasing the load on verbal working memory using a mental arithmetic task (counting backwards by threes) did not reduce craving further. Experiment 2 compared effects on craving of a simpler verbal task (counting by ones) and clay modeling. Clay modeling reduced overall craving strength and strength of craving imagery, and reduced the frequency of thoughts about chocolate. The results are consistent with EI theory, showing that craving is reduced by loading the visuospatial sketchpad of working memory but not by loading the phonological loop. Clay modeling might be a useful self-help tool to help manage craving for chocolate, snacks and other foods.