182 results for Hardware Transactional Memory
Abstract:
Three experiments examined transfer across form (words/pictures) and modality (visual/auditory) in written word, auditory word, and pictorial implicit memory tests, as well as on a free recall task. Experiment 1 showed no significant transfer across form on any of the three implicit memory tests, and an asymmetric pattern of transfer across modality. In contrast, the free recall results revealed a very different picture. Experiment 2 further investigated the asymmetric modality effects obtained for the implicit memory measures by employing articulatory suppression and picture naming to control the generation of phonological codes. Finally, Experiment 3 examined the effects of overt word naming and covert picture labelling on transfer between study and test form. The results of the experiments are discussed in relation to Tulving and Schacter's (1990) Perceptual Representation Systems framework and Roediger's (1990) Transfer Appropriate Processing theory.
Abstract:
Two distinctions in the human learning literature are becoming increasingly influential: implicit versus explicit memory, and implicit versus explicit learning. To date, these distinctions have been used to refer to apparently different phenomena. Recent research suggests, however, that the same processes may underlie performance in the two types of task. This paper reviews recent results in the two areas and suggests ways in which the two distinctions may be related.
Abstract:
A study of the formation and propagation of volume anomalies in North Atlantic Mode Waters is presented, based on 100 yr of monthly mean fields taken from the control run of the Third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3). Analysis of the temporal and spatial variability in the thickness between pairs of isothermal surfaces bounding the central temperature of the three main North Atlantic subtropical mode waters shows that large-scale variability in formation occurs over time scales ranging from 5 to 20 yr. The largest formation anomalies are associated with a southward shift in the mixed layer isothermal distribution, possibly due to changes in the gyre dynamics and/or changes in the overlying wind field and air-sea heat fluxes. The persistence of these anomalies is shown to result from their subduction beneath the winter mixed layer base, where they recirculate around the subtropical gyre in the background geostrophic flow. Anomalies in the warmest mode (18 degrees C) formed on the western side of the basin persist for up to 5 yr. They are removed by mixing transformation to warmer classes and are returned to the seasonal mixed layer near the Gulf Stream, where the stored heat may be released to the atmosphere. Anomalies in the cooler modes (16 degrees and 14 degrees C) formed on the eastern side of the basin persist for up to 10 yr. There is no clear evidence of significant transformation of these cooler mode anomalies to adjacent classes. It has been proposed that the eastern anomalies are removed through a tropical-subtropical water mass exchange mechanism beneath the trade wind belt (south of 20 degrees N). The analysis shows that anomalous mode water formation plays a key role in the long-term storage of heat in the model, and that the release of heat associated with these anomalies suggests a predictable climate feedback mechanism.
Abstract:
A task combining both digit and Corsi memory tests was administered to a group of 75 children. The task is shown to share variance with standardized reading and maths attainments, even after partialling out performance on component tasks separately assessed. The emergent task property may reflect coordination skills, although several different refinements can be made to this general conclusion.
Abstract:
The concept of “working” memory is traceable back to nineteenth-century theorists (Baldwin, 1894; James, 1890), but the term itself was not used until the mid-twentieth century (Miller, Galanter & Pribram, 1960). A variety of different explanatory constructs have since evolved which all make use of the working memory label (Miyake & Shah, 1999). This history is briefly reviewed and alternative formulations of working memory (as language-processor, executive attention, and global workspace) are considered as potential mechanisms for cognitive change within and between individuals and between species. A means, derived from the literature on human problem-solving (Newell & Simon, 1972), of tracing memory and computational demands across a single task is described and applied to two specific examples of tool-use by chimpanzees and early hominids. The examples show how specific proposals for necessary and/or sufficient computational and memory requirements can be more rigorously assessed on a task-by-task basis. General difficulties in connecting cognitive theories (arising from the observed capabilities of individuals deprived of material support) with archaeological data (primarily remnants of material culture) are discussed.
Abstract:
Many models of immediate memory predict the presence or absence of various effects, but none have been tested to see whether they predict an appropriate distribution of effect sizes. The authors show that the feature model (J. S. Nairne, 1990) produces appropriate distributions of effect sizes for both the phonological confusion effect and the word-length effect. The model produces the appropriate number of reversals (cases in which participants are more accurate with similar items or long items), and also correctly predicts that participants performing less well overall demonstrate smaller and less reliable phonological similarity and word-length effects and are more likely to show reversals. These patterns appear within the model without the need to assume a change in encoding or rehearsal strategy or the deployment of a different storage buffer. The implications of these results and the wider applicability of the distribution-modeling approach are discussed.
Abstract:
A full assessment of para-virtualization is important, because without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea or not. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as under para-virtualization. The idea is to see what the overheads of para-virtualization are, as well as the overheads of turning on monitoring and logging. The knowledge gained from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). In order to assess these virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) used by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by its privileged components. A virtualization system can host multiple guest operating systems, each of which runs in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources. The guest operating system then schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application; that is, the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines; these guest operating systems are aware that they are running on a virtual machine, and they provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization, in which some modifications are made to the guest operating system to enable better performance. In this kind of virtualization, the guest operating system is aware that it is running on virtualized hardware rather than on the bare hardware. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, reducing the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The “apparent” improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. In order to support this hypothesis, it is first necessary to define exactly what is meant by a “class” of application, and secondly it will be necessary to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is associated with the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
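As a rough illustration of the kind of comparison described in this abstract, the Python sketch below times a benchmark command over ssh on a bare-metal baseline and on virtualized guests, then reports the relative overhead. It is a minimal sketch only: the host names, the `./linpack_bench` command, and the number of runs are assumptions for illustration, not the paper's actual harness or benchmark configuration.

```python
#!/usr/bin/env python3
"""Minimal overhead-comparison sketch (hypothetical hosts and benchmark command)."""

import statistics
import subprocess
import time

# Hypothetical targets: a bare-metal baseline plus para-virtualized guests.
HOSTS = {
    "bare-metal": "baremetal.example.org",
    "xen-pv-guest": "xen-guest.example.org",
    "kvm-guest": "kvm-guest.example.org",
}

# Placeholder for a Netlib-style benchmark invocation on the remote host.
BENCH_CMD = "./linpack_bench"
RUNS = 5


def time_remote_run(host: str) -> float:
    """Run the benchmark once over ssh and return wall-clock seconds."""
    start = time.perf_counter()
    subprocess.run(
        ["ssh", host, BENCH_CMD],
        check=True,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return time.perf_counter() - start


def median_runtime(host: str) -> float:
    """Median of several runs, to damp transient noise on the host."""
    return statistics.median(time_remote_run(host) for _ in range(RUNS))


if __name__ == "__main__":
    baseline = median_runtime(HOSTS["bare-metal"])
    print(f"bare-metal median: {baseline:.2f} s")
    for label, host in HOSTS.items():
        if label == "bare-metal":
            continue
        t = median_runtime(host)
        overhead = (t / baseline - 1.0) * 100.0
        print(f"{label}: {t:.2f} s ({overhead:+.1f}% vs bare metal)")
```

The same loop could be repeated with monitoring and logging enabled on the guests to estimate that additional cost separately, in the spirit of the three-stage comparison (bare metal, para-virtualization, para-virtualization with logging) outlined above.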
Abstract:
Emerging parasitoids of aphids encounter secondary plant chemistry from cues left by the mother parasitoid at oviposition and from the plant-feeding of the host aphid. In practice, however, it is secondary plant chemistry on the surface of the aphid mummy which influences parasitoid olfactory behaviour. Offspring of Aphidius colemani reared on Myzus persicae on artificial diet did not distinguish between the odours of bean and cabbage, but showed a clear preference for cabbage odour if sinigrin had been painted on the back of the mummy. Similarly, Aphidius rhopalosiphi reared on Metopolophium dirhodum on wheat preferred the odour of wheat plants grown near tomato plants to the odour of wheat alone if the wheat plants on which they had been reared had been exposed to the volatiles of nearby tomato plants. Aphidius rhopalosiphi reared on M. dirhodum, and removed from the mummy before emergence, showed a preference for the odour of a different wheat cultivar if they had contacted a mummy from that cultivar, and similar results were obtained with A. colemani naturally emerged from M. persicae mummies. Aphidius colemani emerged from mummies on one crucifer were allowed to contact in sequence (for 45 min each) mummies from two different crucifers. The number of attacks made in 10 min on M. persicae was always significantly higher when aphids were feeding on the same plant as the origin of the last mummy offered, or on the second plant if aphids feeding on the third plant were not included. Chilling emerged A. colemani for 24 h at 5 degrees C appeared to erase the imprint of secondary plant chemistry, and they no longer showed host plant odour preferences in the olfactometer. When the parasitoids were chilled after three successive mummy experiences, memory of the last experience appeared at least temporarily erased and preference was then shown for the chemistry of the second experience.
Abstract:
This article reviews current technological developments, particularly Peer-to-Peer technologies and Distributed Data Systems, and their value to community memory projects, particularly those concerned with the preservation of the cultural, literary and administrative data of cultures which have suffered genocide or are at risk of genocide. It draws attention to the comparatively good representation online of genocide denial groups and to changes in the technological strategies of holocaust denial and other far-right groups. It draws on the author's work providing IT support for a UK-based Non-Governmental Organization that supports survivors of genocide in Rwanda.