992 results for semantic memory
Abstract:
Researching children's literature from the past is a growing challenge as resources age and are increasingly treated as rare items, stored away within libraries and other research centres. In Australia, researchers and librarians have collaborated with the bibliographic database AustLit: The Australian Literature Resource to produce the Australian Children's Literature Digital Resources Project (CLDR). The Project aims to address the growing demand for online access to rare children's literature resources, and demonstrates the research potential of early Australian children's literature by supplementing the collection with relevant critical articles. The CLDR project has a specific focus: it provides access to full-text Australian children's literature from European settlement to 1945. The collection demonstrates a need and desire to preserve literary treasures so that such collections are not lost in a digital age. It covers many themes relevant to the conference, including trauma, survival, memory, hauntings, and histories. The resource provides new ways to research children's literature from the past and offers a fascinating repository for scholars and professionals across a range of disciplines who are interested in Australian children's literature.
Abstract:
Entity-oriented search has become an essential component of modern search engines. It focuses on retrieving a list of entities, or information about specific entities, instead of documents. In this paper, we study the problem of finding entity-related information, referred to as attribute-value pairs, which plays a significant role in searching for target entities. We propose a novel decomposition framework that combines reduced relations with a discriminative model, the Conditional Random Field (CRF), to automatically find entity-related attribute-value pairs in free-text documents. This decomposition framework allows us to locate potential text fragments and identify their hidden semantics, in the form of attribute-value pairs, for user queries. Empirical analysis shows that the decomposition framework outperforms pattern-based approaches owing to its ability to integrate syntactic and semantic features effectively.
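The two-stage decomposition can be illustrated with a toy sketch: first locate candidate text fragments, then map each fragment to an (attribute, value) pair. The paper trains a CRF for the labelling stage; here a simple rule stands in, and the pattern and example sentence are purely illustrative, not the paper's implementation.

```python
import re

def locate_fragments(text):
    # Stage 1: locate candidate fragments likely to carry
    # attribute-value information (illustrative pattern only).
    return re.findall(r"(\w[\w ]*?)\s+(?:is|was|measures)\s+([\w.]+(?: \w+)?)", text)

def extract_pairs(text):
    # Stage 2: map each fragment to an (attribute, value) pair.
    # The paper uses a trained CRF here; this rule is a stand-in.
    return [(attr.strip().lower(), val.strip()) for attr, val in locate_fragments(text)]

pairs = extract_pairs("The Eiffel Tower is 330 metres tall. Its designer was Gustave Eiffel.")
print(pairs)  # [('the eiffel tower', '330 metres'), ('its designer', 'Gustave Eiffel')]
```

A real system would replace the regex stage with learned sequence labels, but the decomposition into "locate, then interpret" is the same.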
Abstract:
Finding and labelling semantic feature patterns in a large corpus of documents is a challenging problem. Text documents have characteristics that make semantic labelling difficult, and the rapidly increasing volume of online documents creates a bottleneck in finding meaningful textual patterns. To deal with these issues, we propose an unsupervised document labelling approach based on semantic content and feature patterns. A world ontology with extensive topic coverage is exploited to supply controlled, structured subjects for labelling. An algorithm is also introduced to reduce dimensionality based on a study of the ontological structure. The proposed approach was evaluated promisingly in comparison with typical machine learning methods, including SVM, Rocchio, and kNN.
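The core idea of labelling documents with controlled subjects drawn from an ontology can be sketched minimally as follows; the term-to-subject table and the example text are toy placeholders, not the paper's world ontology or its algorithm.

```python
from collections import Counter

# Toy stand-in for a world ontology: term -> controlled subject.
ONTOLOGY = {
    "neuron": "Neuroscience", "synapse": "Neuroscience",
    "gene": "Genetics", "genome": "Genetics",
    "ontology": "Knowledge representation",
}

def label_document(text, top_n=1):
    # Map each known term to its subject and label the document with
    # the most frequent subject(s) -- unsupervised, no training data.
    words = [w.strip(".,;:").lower() for w in text.split()]
    subjects = Counter(ONTOLOGY[w] for w in words if w in ONTOLOGY)
    return [s for s, _ in subjects.most_common(top_n)]

print(label_document("Each neuron forms a synapse; a gene regulates it."))  # ['Neuroscience']
```

The controlled vocabulary guarantees every label is a well-defined subject, which is the property the paper exploits.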
Abstract:
Two experiments examine outcomes for sponsor and ambusher brands within sponsorship settings. It is demonstrated that although making consumers aware of the presence of ambusher brands can reduce subsequent event recall to competitor cues, recall to sponsor cues can also suffer. Attitudinal effects are also considered.
Abstract:
The generation of a correlation matrix from a large set of long gene sequences is a common requirement in many bioinformatics problems, such as phylogenetic analysis. The computation is not only intensive but also demands significant memory, as typically only a few gene sequences can be held in primary memory at once. The standard practice is therefore to rely on frequent input/output (I/O) operations, and minimizing the number of these operations yields much faster run-times. This paper develops an approach for faster, scalable computation of large correlation matrices through full use of available memory and a reduced number of I/O operations. The approach is scalable in the sense that the same algorithms can be executed on computing platforms with different amounts of memory and applied to problems with different correlation matrix sizes. The significant performance improvement over existing approaches is demonstrated through benchmark examples.
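The memory-aware idea can be sketched generically: standardise each row once, then fill the correlation matrix block by block, so only a few blocks of rows need be resident at a time (mimicking reduced I/O). This is a minimal blocked scheme assuming numpy, not the paper's exact algorithm.

```python
import numpy as np

def blocked_correlation(data, block_size):
    """Correlation matrix of the rows of `data`, computed block by block.

    After standardising (zero mean, unit norm per row), the Pearson
    correlation of rows i and j is just their dot product, so each
    output block is a small matrix product over two row blocks.
    """
    n, _ = data.shape
    z = data - data.mean(axis=1, keepdims=True)
    z /= np.linalg.norm(z, axis=1, keepdims=True)
    corr = np.empty((n, n))
    for i in range(0, n, block_size):
        for j in range(0, n, block_size):
            corr[i:i + block_size, j:j + block_size] = (
                z[i:i + block_size] @ z[j:j + block_size].T)
    return corr

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 50))
assert np.allclose(blocked_correlation(x, 4), np.corrcoef(x))
```

In an out-of-core setting each `z[i:i+block_size]` slice would be a disk read, so choosing `block_size` to fill available memory directly reduces the I/O count.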
Abstract:
The world is rapidly ageing. Against this backdrop, increasing incidences of dementia are reported worldwide, with Alzheimer's disease (AD) being the most common form of dementia in the elderly. It is estimated that AD affects almost 4 million people in the US and costs the US economy more than 65 million dollars annually. There is currently no cure for AD, but various therapeutic agents have been employed in attempts to slow the progression of the illness, one of which is oestrogen. Over the last few decades, scientists have focused mainly on the roles of oestrogen in the prevention and treatment of AD. Newer evidence suggests that testosterone might also be involved in the pathogenesis of AD. Although the exact mechanisms by which androgens might affect AD are still largely unknown, it is known that testosterone can act directly via androgen receptor-dependent mechanisms or indirectly, by conversion to oestrogen, to exert this effect. Clinical trials need to be conducted to ascertain the putative role of androgen replacement in Alzheimer's disease.
Abstract:
In this paper, the deposition of C-20 fullerenes on a diamond (001)-(2x1) surface and the fabrication of a C-20 thin film at 100 K were investigated by molecular dynamics (MD) simulation using the many-body Brenner bond-order potential. First, we found that the collision dynamics of a single C-20 fullerene on a diamond surface depended strongly on its impact energy. Within the energy range 10-45 eV, the C-20 fullerene chemisorbed on the surface retained its free cage structure. This is consistent with experimental observation, where it was called the memory effect in "C-20-type" films [P. Melion, Int. J. Mod. Phys. B 9, 339 (1995); P. Milani, Cluster Beam Synthesis of Nanostructured Materials (Springer, Berlin, 1999)]. Next, more than one hundred C-20 fullerenes (10-25 eV) were deposited one after another onto the surface. The initial growth stage of the C-20 thin film followed the three-dimensional island mode. The randomly deposited C-20 fullerenes stacked on the diamond surface and acted as building blocks, forming a polymer-like structure. The assembled film was also highly porous owing to cluster-cluster interaction. The bond-angle distribution and the neighbor-atom-number distribution of the film presented a well-defined local order of sp(3) hybridization character, the same as that of a free C-20 cage. These simulation results are again in good agreement with experimental observation. Finally, the deposited C-20 film showed high stability even when the temperature was raised to 1500 K.
Abstract:
There has been a renewal of interest in memory studies in recent years, particularly in the Western world. This chapter considers aspects of personal memory followed by the concept of cultural memory. It then examines how the Australian cultural memory of the Anzac Legend is represented in a number of recent picture books.
Abstract:
In this paper we propose a method to generate a large-scale, accurate, dense 3D semantic map of street scenes. A dense 3D semantic model of the environment can significantly improve a number of robotic applications such as autonomous driving, navigation, and localisation. Instead of using offline-trained classifiers for semantic segmentation, our approach employs a data-driven, nonparametric method to parse scenes, which scales easily to large environments and generalises to different scenes. We use stereo image pairs collected from cameras mounted on a moving car to produce dense depth maps, which are combined into a global 3D reconstruction using camera poses from stereo visual odometry. Simultaneously, 2D automatic semantic segmentation using a nonparametric scene-parsing method is fused into the 3D model. Furthermore, the resulting 3D semantic model is improved by accounting for moving objects in the scene. We demonstrate our method on the publicly available KITTI dataset and evaluate its performance against manually generated ground truth.
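The depth-from-stereo step such a pipeline relies on follows the standard pinhole relation depth = f·B / disparity. A minimal sketch, with illustrative (not actual KITTI) calibration values:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    # Standard stereo relation: depth = focal_length * baseline / disparity.
    # Invalid (zero or negative) disparities are mapped to +inf.
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Illustrative calibration: 721 px focal length, 0.54 m stereo baseline.
d = disparity_to_depth(np.array([0.0, 1.0, 38.99]), focal_px=721.0, baseline_m=0.54)
print(d)  # first entry inf, second ~389.3 m, third ~9.99 m
```

Back-projecting each pixel's depth through the camera pose from visual odometry is what turns these per-frame depth maps into a single global 3D model.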
Abstract:
Text categorisation is challenging because of the complex structure and heterogeneous, changing topics of documents. The performance of text categorisation relies on the quality of samples, the effectiveness of document features, and the topic coverage of categories, depending on the strategies employed: supervised or unsupervised, single-labelled or multi-labelled. To deal with these reliability issues, we propose an unsupervised multi-labelled text categorisation approach that maps the local knowledge in documents to global knowledge in a world ontology to optimise the categorisation result. The conceptual framework of the approach consists of three modules: pattern mining for feature extraction, feature-subject mapping for categorisation, and concept generalisation for optimised categorisation. The approach was evaluated promisingly in comparison with typical text categorisation methods, using ground truth encoded by human experts.
Abstract:
A key question in neuroscience is how memory is selectively allocated to neural networks in the brain. This question remains a significant research challenge, in rodent models and humans alike, because of the inherent difficulty of tracking and deciphering the large, high-dimensional neuronal ensembles that support memory (i.e., the engram). In a previous study we showed that consolidation of a new fear memory is allocated to a common topography of amygdala neurons. When a consolidated memory is retrieved, it may enter a labile state, requiring reconsolidation for it to persist. What is not known is whether the original spatial allocation of a consolidated memory changes during reconsolidation. Knowledge about the spatial allocation of a memory, during consolidation and reconsolidation, provides fundamental insight into its core physical structure (i.e., the engram). Using design-based stereology, we operationally define reconsolidation by showing a nearly identical quantity of neurons in the dorsolateral amygdala (LAd) that expressed a plasticity-related protein, phosphorylated mitogen-activated protein kinase, following both memory acquisition and retrieval. Next, we confirm that Pavlovian fear conditioning recruits a stable, topographically organized population of activated neurons in the LAd. When the stored fear memory was briefly reactivated in the presence of the relevant conditioned stimulus, a similar topography of activated neurons was uncovered. In addition, we found evidence for activated neurons allocated to new regions of the LAd. These findings provide the first insight into the spatial allocation of a fear engram in the LAd during its consolidation and reconsolidation phases.
Abstract:
Pavlovian fear conditioning is a robust technique for examining behavioral and cellular components of fear learning and memory. In fear conditioning, the subject learns to associate a previously neutral stimulus with an inherently noxious co-stimulus. The learned association is reflected in the subjects' behavior upon subsequent re-exposure to the previously neutral stimulus or the training environment. Using fear conditioning, investigators can obtain a large amount of data that describe multiple aspects of learning and memory. In a single test, researchers can evaluate functional integrity in fear circuitry, which is both well characterized and highly conserved across species. Additionally, the availability of sensitive and reliable automated scoring software makes fear conditioning amenable to high-throughput experimentation in the rodent model; thus, this model of learning and memory is particularly useful for pharmacological and toxicological screening. Due to the conserved nature of fear circuitry across species, data from Pavlovian fear conditioning are highly translatable to human models. We describe equipment and techniques needed to perform and analyze conditioned fear data. We provide two examples of fear conditioning experiments, one in rats and one in mice, and the types of data that can be collected in a single experiment. © 2012 Springer Science+Business Media, LLC.
Abstract:
Pavlovian fear conditioning, also known as classical fear conditioning, is an important model in the study of the neurobiology of normal and pathological fear. Progress in the neurobiology of Pavlovian fear also enhances our understanding of disorders such as posttraumatic stress disorder (PTSD) and aids the development of effective treatment strategies. Here we describe how Pavlovian fear conditioning is a key tool for understanding both the neurobiology of fear and the mechanisms underlying variations in fear memory strength observed across different phenotypes. First, we discuss how Pavlovian fear models aspects of PTSD. Second, we describe the neural circuits of Pavlovian fear and the molecular mechanisms within these circuits that regulate fear memory. Finally, we show how fear memory strength is heritable and describe genes that are specifically linked both to changes in Pavlovian fear behavior and to its underlying neural circuitry. These emerging data begin to define the essential genes, cells, and circuits that contribute to normal and pathological fear.
Abstract:
This thesis is a study of how the contents of volatile memory on the Windows operating system can be better understood and utilised for the purposes of digital forensic investigations. It proposes several techniques to improve the analysis of memory, with a focus on improving the detection of unknown code such as malware. These contributions allow the creation of a more complete reconstruction of the state of a computer at acquisition time, including whether or not the computer has been infected by malicious code.
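A basic building block of the memory analysis the thesis describes is scanning a raw memory image for known byte signatures (for example, the `MZ` magic that begins a Windows executable header). A minimal sketch with a synthetic image; real tools combine many such scans with structure parsing:

```python
def scan_image(image: bytes, signature: bytes):
    # Return every offset at which `signature` occurs in the raw image.
    offsets, pos = [], image.find(signature)
    while pos != -1:
        offsets.append(pos)
        pos = image.find(signature, pos + 1)
    return offsets

# Synthetic "memory image" with the DOS header magic b"MZ" at two offsets.
image = b"\x00" * 16 + b"MZ" + b"\x90" * 30 + b"MZ" + b"\x00" * 8
print(scan_image(image, b"MZ"))  # [16, 48]
```

Hits like these are only candidates; detecting genuinely unknown or malicious code, as the thesis sets out to do, requires reconstructing surrounding operating-system structures to decide which hits are legitimate.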