946 results for Memory-based
Abstract:
Two novel studies examining the capacity and characteristics of working memory for object weights, experienced through lifting, were completed. Both studies employed visually identical objects of varying weight and focused on memories linking object locations and weights. Whereas numerous studies have examined the capacity of visual working memory, the capacity of the sensorimotor memory involved in motor control and object manipulation has not yet been explored. In addition to assessing working memory for object weights using an explicit perceptual test, we also assessed memory for weight using an implicit measure based on motor performance. The vertical lifting or load forces (LF) and the horizontal grip forces (GF) applied during lifts, measured from force sensors embedded in the object handles, were used to assess participants’ ability to predict object weights. In Experiment 1, participants were presented with sets of 3, 4, 5, 7 or 9 objects. They lifted each object in the set and then repeated this procedure 10 times, with the objects lifted either in a fixed or a random order. Sensorimotor memory was examined by assessing, as a function of object set size, how lifting forces changed across successive lifts of a given object. The results indicated that force scaling for weight improved across repeated lifts, and was better for smaller set sizes than for larger ones, with the latter effect being clearest when objects were lifted in a random order. In general, however, the observed force scaling was poor. In Experiment 2, working memory was examined in two ways: by determining participants’ ability to detect a change in the weight of one of 3 to 6 objects lifted twice, and by simultaneously measuring the fingertip forces applied when lifting the objects. The results showed that, even when presented with 6 objects, participants were extremely accurate in explicitly detecting which object changed weight. In addition, force scaling for object weight, which was generally quite weak, was similar across set sizes. Thus, a capacity limit below 6 was not found for either the explicit or the implicit measure.
Abstract:
Two experiments investigated the consequences of action at encoding and recall on the ability to follow sequences of instructions. Children aged 7–9 years recalled sequences of spoken action commands under presentation and recall conditions that either did or did not involve their physical performance. In both experiments, recall was enhanced by carrying out the instructions as they were being initially presented and also by performing them at recall. In contrast, the accuracy of instruction-following did not improve above spoken presentation alone, either when the instructions were silently read as well as heard by the child (Experiment 1), or when the child repeated the spoken instructions as they were presented (Experiment 2). These findings suggest that the enactment advantage at presentation does not simply reflect a general benefit of dual exposure to instructions, and that it is not a result of their self-production at presentation. The benefits of action-based recall were reduced following enactment during presentation, suggesting that the positive effects of action at encoding and recall may have a common origin. It is proposed that the benefits of physical movement arise from the existence of a short-term motor store that maintains the temporal, spatial, and motoric features of either planned or already executed actions.
Abstract:
The astonishing development of diverse hardware platforms is twofold: on one side, the push toward exascale performance for big data processing and management; on the other, mobile and embedded devices for data collection and human-machine interaction. This has driven a highly hierarchical evolution of programming models. GVirtuS is a general virtualization system developed in 2009 and first introduced in 2010, enabling a completely transparent layer between GPUs and VMs. This paper shows the latest achievements and developments of GVirtuS, which now supports CUDA 6.5, memory management and scheduling. Thanks to its new and improved remoting capabilities, GVirtuS now enables GPU sharing among physical and virtual machines based on x86 and ARM CPUs, on local workstations, computing clusters and distributed cloud appliances.
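As a rough illustration of the split-driver remoting idea behind GVirtuS (not its actual C++/CUDA implementation), the sketch below serializes each "GPU" call in a guest-side frontend stub and executes it in a host-side backend over a socket; the wire format, operation names and the vector_add handler are all invented for this example.

```python
# Minimal sketch of split-driver GPU API remoting in the spirit of GVirtuS:
# a frontend stub in the guest serializes each GPU call and ships it to a
# backend that owns the physical device. Names and wire format are
# hypothetical; the real GVirtuS frontend/backend speak the CUDA runtime API.
import json
import socket
import threading

def backend_loop(conn: socket.socket, handlers: dict) -> None:
    """Host-side backend: receive one call per line, execute it, return the result."""
    with conn, conn.makefile("rwb") as stream:
        for line in stream:
            request = json.loads(line)
            func = handlers[request["op"]]
            result = func(*request["args"])
            stream.write((json.dumps({"result": result}) + "\n").encode())
            stream.flush()

class GpuFrontend:
    """Guest-side stub: every call becomes a remote request to the backend."""
    def __init__(self, conn: socket.socket) -> None:
        self._stream = conn.makefile("rwb")

    def call(self, op: str, *args):
        self._stream.write((json.dumps({"op": op, "args": args}) + "\n").encode())
        self._stream.flush()
        return json.loads(self._stream.readline())["result"]

if __name__ == "__main__":
    # Stand-in "device" operation; a real backend would invoke the CUDA runtime.
    handlers = {"vector_add": lambda a, b: [x + y for x, y in zip(a, b)]}
    guest_end, host_end = socket.socketpair()
    threading.Thread(target=backend_loop, args=(host_end, handlers), daemon=True).start()
    gpu = GpuFrontend(guest_end)
    print(gpu.call("vector_add", [1, 2, 3], [4, 5, 6]))  # -> [5, 7, 9]
```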
Abstract:
Background: Many school-based interventions are being delivered in the absence of evidence of effectiveness (Snowling & Hulme, 2011, Br. J. Educ. Psychol., 81, 1).
Aim: This study sought to address this oversight by evaluating the effectiveness of the commonly used Lexia Reading Core5 intervention with 4- to 6-year-old pupils in Northern Ireland.
Sample: A total of 126 primary school pupils in year 1 and year 2 were screened on the Phonological Assessment Battery 2nd Edition (PhAB-2). Children were recruited from the year groups equivalent to Reception and Year 1 in England and Wales, and to Pre-kindergarten and Kindergarten in North America.
Methods: A total of 98 below-average pupils were randomized (T0) to either an 8-week block (M = 647.51 min, SD = 158.21) of daily access to Lexia Reading Core5 (n = 49) or a waiting-list control group (n = 49). Assessment of phonological skills was completed at post-intervention (T1) and at 2-month follow-up (T2) for the intervention group only.
Results: Analysis of covariance controlling for baseline scores found that the Lexia Reading Core5 intervention group made significantly greater gains in blending, F(1, 95) = 6.50, p = .012, partial η2 = .064 (small effect size), and non-word reading, F(1, 95) = 7.20, p = .009, partial η2 = .070 (small effect size). Analysis of the 2-month follow-up of the intervention group found that all treatment gains were maintained. However, improvements were not uniform within the intervention group, with 35% failing to make progress despite access to support. Post-hoc analysis revealed that higher T0 phonological working memory scores predicted the improvements made in phonological skills.
Conclusions: An early-intervention, computer-based literacy program can be effective in boosting the phonological skills of 4- to 6-year-olds, particularly if these literacy difficulties are not linked to phonological working memory deficits.
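For readers unfamiliar with the analysis reported above, the sketch below reproduces the shape of an ANCOVA on post-test scores with baseline as covariate, on synthetic data; it is not the study's code, and the column names, group labels and numbers are assumptions.

```python
# Illustrative ANCOVA sketch (not the study's analysis): post-test scores
# compared between an intervention group and a waiting-list control group
# while controlling for baseline (T0) scores. Data below are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 98  # number of pupils randomized in the study
df = pd.DataFrame({
    "group": np.repeat(["lexia", "control"], n // 2),
    "baseline": rng.normal(20, 5, n),
})
# Give the synthetic intervention group a small advantage at post-test.
df["post"] = (df["baseline"] * 0.8
              + np.where(df["group"] == "lexia", 2.0, 0.0)
              + rng.normal(0, 3, n))

model = smf.ols("post ~ C(group) + baseline", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)   # F test for group, baseline-adjusted
print(anova_table)

# Partial eta squared for the group effect, the effect-size measure in the abstract.
ss = anova_table["sum_sq"]
print("partial eta^2 =", ss["C(group)"] / (ss["C(group)"] + ss["Residual"]))
```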
Abstract:
Reliability has emerged as a critical design constraint, especially in memories. Designers go to great lengths to guarantee fault-free operation of the underlying silicon by adopting redundancy-based techniques, which essentially try to detect and correct every single error. However, such techniques come at the cost of large area, power and performance overheads, leading many researchers to doubt their efficiency, especially for error-resilient systems where 100% accuracy is not always required. In this paper, we present an alternative method focusing on the confinement of the output error induced by any reliability issues. Focusing on memory faults, rather than correcting every single error, the proposed method exploits the statistical characteristics of the target application and replaces any erroneous data with the best available estimate of that data. To realize the proposed method, a RISC processor is augmented with custom instructions and special-purpose functional units. We apply the method to the enhanced processor by studying the statistical characteristics of the various algorithms involved in a popular multimedia application. Our experimental results show that, in contrast to state-of-the-art fault tolerance approaches, we are able to reduce runtime and area overhead by 71.3% and 83.3%, respectively.
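A minimal software sketch of the error-confinement idea follows, assuming a profiling phase that learns a statistical estimate of the data and a runtime phase that substitutes that estimate for words flagged as faulty; the paper realizes this with custom instructions and functional units on a RISC processor, whereas every name and number below is illustrative only.

```python
# Error confinement in software form: instead of correcting every bit,
# profile the data offline and, whenever a memory word is flagged as faulty,
# substitute the best available statistical estimate of that value.
import numpy as np

def profile_estimate(training_samples: np.ndarray) -> float:
    """Offline phase: learn the statistics of the target data stream."""
    return float(np.median(training_samples))  # a robust 'best estimate'

def read_with_confinement(memory: np.ndarray, fault_map: np.ndarray,
                          estimate: float) -> np.ndarray:
    """Runtime phase: faulty words are replaced by the learned estimate."""
    return np.where(fault_map, estimate, memory)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame = rng.integers(0, 256, size=64).astype(float)   # e.g. pixel data
    faults = rng.random(64) < 0.05                         # 5% of words faulty
    corrupted = np.where(faults, 999.0, frame)             # garbage on fault
    est = profile_estimate(frame)
    repaired = read_with_confinement(corrupted, faults, est)
    print("max error without confinement:", np.abs(corrupted - frame).max())
    print("max error with confinement:   ", np.abs(repaired - frame).max())
```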
Abstract:
This work explores the development of MemTri, a memory forensics triage tool that can assess the likelihood of criminal activity in a memory image, based on evidence data artefacts generated by several applications. Fictitious illegal suspect-activity scenarios were performed on virtual machines to generate 60 test memory images for input into MemTri. Four categories of applications (i.e. internet browsers, instant messengers, FTP clients and document processors) are examined for data artefacts located through the use of regular expressions. The identified data artefacts are then analysed using a Bayesian network to assess the likelihood that a seized memory image contains evidence of illegal activity. Currently, MemTri is under development, and this paper introduces only the basic concept as well as the components on which the application is built. A complete description of MemTri, together with extensive experimental results, is expected to be published in the first semester of 2017.
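The sketch below illustrates, in strongly simplified form, the kind of pipeline MemTri describes: regular expressions locate application artefacts in a memory image and the hits are combined into a probability of illegal activity. MemTri uses a full Bayesian network; the naive-Bayes combination, the regex patterns and the probabilities below are stand-in assumptions.

```python
# Simplified MemTri-style triage: regex artefact extraction plus a
# naive-Bayes combination of per-artefact likelihood ratios. All patterns
# and probabilities are invented for illustration.
import re

ARTEFACT_PATTERNS = {
    "browser_url": re.compile(rb"https?://[^\s'\"]{8,}"),
    "ftp_command": re.compile(rb"\b(RETR|STOR|USER|PASS)\s+\S+"),
    "im_message":  re.compile(rb'"message"\s*:\s*"[^"]+"'),
}
# (P(artefact | illegal), P(artefact | benign)) -- assumed values.
LIKELIHOODS = {"browser_url": (0.9, 0.6), "ftp_command": (0.7, 0.2), "im_message": (0.6, 0.3)}

def triage(memory_image: bytes, prior_illegal: float = 0.5) -> float:
    """Return the posterior probability that the image holds evidence of illegal activity."""
    odds = prior_illegal / (1.0 - prior_illegal)
    for name, pattern in ARTEFACT_PATTERNS.items():
        present = pattern.search(memory_image) is not None
        p_illegal, p_benign = LIKELIHOODS[name]
        if present:
            odds *= p_illegal / p_benign
        else:
            odds *= (1 - p_illegal) / (1 - p_benign)
    return odds / (1.0 + odds)

if __name__ == "__main__":
    sample = b'... "message":"meet at dock" ... RETR payload.zip ... http://example.org/item '
    print(f"posterior evidence likelihood: {triage(sample):.2f}")
```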
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
A large class of computational problems is characterised by frequent synchronisation and computational requirements that change as a function of time. When such a problem is solved on a message-passing multiprocessor machine [5], the combination of these characteristics leads to system performance that deteriorates over time. As the communication performance of parallel hardware steadily improves, load balance becomes a dominant factor in obtaining high parallel efficiency. Performance can be improved with periodic redistribution of computational load; however, redistribution can sometimes be very costly. We study the issue of deciding when to invoke a global load re-balancing mechanism. Such a decision policy must actively weigh the costs of remapping against the performance benefits, and should be general enough to apply automatically to a wide range of computations. This paper discusses a generic strategy for Dynamic Load Balancing (DLB) in unstructured mesh computational mechanics applications. The strategy is intended to handle varying levels of load change throughout the run. The major issues involved in a generic dynamic load balancing scheme are investigated, together with techniques to automate the implementation of a dynamic load balancing mechanism within the Computer Aided Parallelisation Tools (CAPTools) environment, a semi-automatic tool for the parallelisation of mesh-based FORTRAN codes.
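The following toy sketch captures the remapping decision such a policy has to make, under the simplifying assumptions that each synchronised step runs at the speed of the most loaded processor and that a remap restores a perfectly even split; all names and numbers are illustrative.

```python
# Toy decision rule for dynamic load balancing: invoke a global re-balance
# only when the time it is expected to recover over the remaining steps
# outweighs the cost of the remap itself.
def time_per_step(loads_per_proc: list[float]) -> float:
    """With frequent synchronisation, each step runs at the speed of the slowest processor."""
    return max(loads_per_proc)

def should_rebalance(loads_per_proc: list[float], remap_cost: float,
                     remaining_steps: int) -> bool:
    """Re-balance if the projected saving over the remaining steps exceeds the remap cost."""
    current = time_per_step(loads_per_proc)
    balanced = sum(loads_per_proc) / len(loads_per_proc)  # ideal even split
    projected_saving = (current - balanced) * remaining_steps
    return projected_saving > remap_cost

if __name__ == "__main__":
    loads = [1.0, 1.1, 1.9, 0.8]        # seconds of work per processor per step
    print(should_rebalance(loads, remap_cost=20.0, remaining_steps=10))   # False
    print(should_rebalance(loads, remap_cost=20.0, remaining_steps=100))  # True
```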
Abstract:
Young children often experience relational memory failures, which are thought to be due to underdeveloped recollection processes. Manipulations with adults, however, have suggested that relational memory tasks can be accomplished with familiarity, a process that is fully developed during early childhood. The goal of the present study was to determine whether relational memory performance could be improved in early childhood by teaching children a memory strategy (i.e., unitization) shown to increase familiarity in adults. Six- and 8-year-old children were taught to use visualization strategies that either unitized or did not unitize pictures and colored borders. Analysis revealed inconclusive results regarding differences in familiarity between the two conditions, suggesting that the unitization memory strategy did not improve the contribution of familiarity as it has been shown to do in adults. Based on these findings, it cannot be concluded that unitization strategies increase the contribution of familiarity in childhood.
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has been traditionally used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
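As a toy illustration of the progressive-sampling idea behind NOW! (which is itself a distributed data-parallel framework with formal progress semantics), the sketch below answers the same aggregate query over nested, progressively larger samples and reports an estimate with an error bound at each step; the function and its parameters are invented for this example.

```python
# Progressive analytics in miniature: one fixed shuffle gives nested samples,
# so each larger sample reuses the work of the smaller ones, and early,
# approximate answers (with error bars) arrive before the full scan finishes.
import math
import random

def progressive_mean(population: list[float], sample_sizes: list[int], seed: int = 0):
    """Yield (sample_size, estimate, 95% half-width) for growing, nested samples."""
    rng = random.Random(seed)
    shuffled = population[:]
    rng.shuffle(shuffled)                     # one fixed order => samples are nested
    for n in sample_sizes:
        sample = shuffled[:n]
        mean = sum(sample) / n
        var = sum((x - mean) ** 2 for x in sample) / (n - 1)
        yield n, mean, 1.96 * math.sqrt(var / n)

if __name__ == "__main__":
    base = random.Random(42)
    data = [base.gauss(100, 15) for _ in range(100_000)]
    for n, est, half_width in progressive_mean(data, [100, 1_000, 10_000, 100_000]):
        print(f"n={n:>6}  mean≈{est:7.2f} ± {half_width:.2f}")
```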
Abstract:
The experiment discussed in this paper is a direct replication of Finkbeiner (2005) and an indirect replication of Jiang and Forster (2001) and Witzel and Forster (2012). The paper explores the use of episodic memory in L2 vocabulary processing. In an L1 episodic recognition task administered with L2 masked translation primes, reduced reaction times would suggest L2 vocabulary storage in episodic memory. The methodology follows Finkbeiner (2005), who argued that a blank screen introduced after the prime in Jiang and Forster (2001) led to a ghosting effect, compromising the imperceptibility of the prime. The results here mostly corroborate Finkbeiner (2005), with no significant priming effects. While Finkbeiner discusses his findings in terms of the dissociability of episodic and semantic memory, and attributes Jiang and Forster’s (2001) results to participants’ strategic responding, I add a layer of analysis based on declarative and procedural constituents. From this perspective, Jiang and Forster’s (2001) and Witzel and Forster’s (2012) results can be seen as possible episodic memory activation, and Finkbeiner’s (2005) and my lack of priming effects might be due to the sole activation of procedural neural networks. Priming effects are found in concrete and abstract words but require verification through further experimentation.
Abstract:
The memristor is one of the fundamental circuit elements of electronics, alongside the resistor, the capacitor and the inductor. It is a passive component whose theory was developed by Leon Chua in 1971. It took, however, more than thirty years before the theory could be connected to experimental results. In 2008 Hewlett Packard published an article in which they claimed to have fabricated the first working memristor. The memristor, or memory resistor, is a resistive component whose resistance value can be changed. As its name suggests, a memristor can also retain its resistance value without a continuous supply of current or voltage. Typically a memristor has at least two resistance values, either of which can be selected by applying voltage or current to the component. For this reason memristors are often called resistive switches. Resistive switches are currently studied intensively, especially because of the memory technology they enable. Memory built from resistive switches is called ReRAM (resistive random access memory). Like Flash memory, ReRAM is a non-volatile memory that can be electrically programmed and erased. Flash memory is currently used, for example, in memory sticks. ReRAM, however, enables faster and lower-power operation than Flash, making it a serious competitor on the market in the future. ReRAM also makes it possible to store more than one bit per memory cell instead of binary ("0" or "1") operation. Typically a ReRAM memory cell has two limiting resistance values, but additional states can potentially be programmed between these two. Memory cells can be called analog if the number of states is not limited. Analog memory cells would make it possible to build, for example, neural networks efficiently. Neural networks aim to model the operation of the brain and to perform tasks that are typically difficult for conventional computer programs. Neural networks are used, for example, in speech recognition and artificial intelligence applications. This thesis examines the analog operation of a Ta2O5-based ReRAM memory cell with suitability for neural networks in mind. The fabrication of the ReRAM memory cell and the measurement results are presented. The operation of a memory cell is rarely fully analog, because there is often only a limited number of states between the two limiting resistance values; for this reason the operation is called pseudo-analog. The measurement results show that a single ReRAM memory cell performs binary operation well. To some extent a single cell can store several states, but the resistance values vary considerably between successive programming cycles, which complicates interpretation. The fabricated ReRAM memory cell cannot operate as a pseudo-analog memory as such; it requires a current-limiting component alongside it. Improving the fabrication process would also reduce the variance in the operation of a single cell, making its behaviour more like that of a pseudo-analog memory.
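As background for the resistive-switching behaviour discussed above, the sketch below simulates the linear ion-drift memristor model of Strukov et al. (2008); it is not a model of the fabricated Ta2O5 cell, and the parameter values are generic textbook-style assumptions.

```python
# Linear ion-drift memristor model (Strukov et al., 2008), simulated under a
# sinusoidal drive. The window function is omitted and the state is simply
# clipped at the film edges; parameters are illustrative example values.
import math

R_ON, R_OFF = 100.0, 16_000.0   # ohm, limiting resistance values
D = 10e-9                        # m, thickness of the oxide film
MU_V = 1e-14                     # m^2 s^-1 V^-1, dopant mobility

def simulate(steps: int = 10_000, dt: float = 1e-4, v_amp: float = 1.0, f: float = 1.0):
    """Drive the cell with a sine voltage and integrate the state variable w."""
    w = 0.1 * D                                   # initial doped-region width
    trace = []
    for k in range(steps):
        t = k * dt
        v = v_amp * math.sin(2 * math.pi * f * t)
        m = R_ON * (w / D) + R_OFF * (1 - w / D)  # memristance for current state
        i = v / m
        w += MU_V * (R_ON / D) * i * dt           # linear drift of the boundary
        w = min(max(w, 0.0), D)                   # hard clip at the film edges
        trace.append((t, v, i, m))
    return trace

if __name__ == "__main__":
    for t, v, i, m in simulate()[::2000]:
        print(f"t={t:.3f}s  V={v:+.2f}V  I={i*1e3:+.3f}mA  R={m:,.0f}Ω")
```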
Abstract:
Technologies such as automobiles or mobile phones allow us to perform beyond our physical capabilities and travel faster or communicate over long distances. Technologies such as computers and calculators can also help us perform beyond our mental capabilities by storing and manipulating information that we would be unable to process or remember. In recent years there has been a growing interest in assistive technology for cognition (ATC), which can help people compensate for cognitive impairments. The aim of this thesis was to investigate ATC for memory to help people with memory difficulties that impact independent functioning in everyday life. Chapter one argues that using both neuropsychological and human-computer interaction theory and approaches is crucial when developing and researching ATC. Chapter two describes a systematic review and meta-analysis of studies which tested technology to aid memory for groups with ABI, stroke or degenerative disease. Good evidence was found supporting the efficacy of prompting devices which remind the user about a future intention at a set time. Chapter three looks at the prevalence of technologies and memory aids in current use by people with ABI and dementia and the factors that predicted this use. Pre-morbid use of technology, current use of non-tech aids and strategies, and age (ABI group only) were the best predictors of this use. Based on the results, chapter four focuses on mobile phone based reminders for people with ABI. Focus groups were held with people with memory impairments after ABI and ABI caregivers (N=12) which discussed the barriers to uptake of mobile phone based reminding. Thematic analysis revealed six key themes that impact uptake of reminder apps: Perceived Need, Social Acceptability, Experience/Expectation, Desired Content and Functions, Cognitive Accessibility, and Sensory/Motor Accessibility. The Perceived Need theme described the difficulties with insight, motivation and memory which can prevent people from initially setting reminders on a smartphone. Chapter five investigates the efficacy and acceptability of unsolicited prompts (UPs) from a smartphone app (ForgetMeNot) to encourage people with ABI to set reminders. A single-case experimental design study evaluated use of the app over four weeks by three people with severe ABI living in a post-acute rehabilitation hospital. When six UPs were presented through the day from ForgetMeNot, daily reminder-setting and daily memory task completion increased compared to when using the app without the UPs. Chapter six investigates another barrier from chapter four: cognitive and sensory accessibility. A study is reported which shows that an app with a 'decision tree' interface design (ApplTree) leads to more accurate reminder-setting performance, with no compromise of speed or independence (amount of guidance required), for people with ABI (n=14) compared to a calendar-based interface. Chapter seven investigates the efficacy of a wearable reminding device (a smartwatch) as a tool for delivering reminders set on a smartphone. Four community-dwelling participants with memory difficulties following ABI were included in an ABA single-case experimental design study. Three of the participants successfully used the smartwatch throughout the intervention weeks and gave positive usability ratings. Two participants showed improved memory performance when using the smartwatch, and all participants showed a marked decline in memory performance when the technology was removed.
Chapter eight is a discussion which highlights the implications of these results for clinicians, researchers and designers.
Abstract:
In the multi-core CPU world, transactional memory (TM) has emerged as an alternative to lock-based programming for thread synchronization. Recent research proposes the use of TM in GPU architectures, where a high number of computing threads, organized in SIMT fashion, requires an effective synchronization method. In contrast to CPUs, GPUs offer two memory spaces: global memory and local memory. The local memory space serves as a shared scratch-pad for a subset of the computing threads, and it is used by programmers to speed up their applications thanks to its low latency. Prior work from the authors proposed lightweight hardware TM (HTM) support based on the local memory, modifying the SIMT execution model and adding a conflict detection mechanism. An efficient implementation of these features is key to providing an effective synchronization mechanism at the local memory level. After a quick description of the main features of our HTM design for GPU local memory, in this work we gather together a number of proposals designed with the aim of improving those mechanisms with high impact on performance. Firstly, the SIMT execution model is modified to increase the parallelism of the application when transactions must be serialized in order to make forward progress. Secondly, the conflict detection mechanism is optimized depending on application characteristics, such as the read/write sets, the probability of conflict between transactions, and the existence of read-only transactions. As these features can be present in hardware simultaneously, it is the task of the compiler and runtime to determine which ones are more important for a given application. This work includes a discussion of the analysis to be done in order to choose the best configuration.
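A small software sketch of the read/write-set conflict rule described above follows; in the actual proposal this is a hardware mechanism over GPU local memory, so the Python structures, the greedy grouping into rounds and the example transactions are purely illustrative.

```python
# Conflict detection and serialization in miniature: two transactions
# conflict when one writes an address the other reads or writes, and
# read-only transactions can always commit together. Conflicting
# transactions are greedily split into rounds that must run serially.
from dataclasses import dataclass, field

@dataclass
class Tx:
    name: str
    read_set: set[int] = field(default_factory=set)
    write_set: set[int] = field(default_factory=set)

def conflicts(a: Tx, b: Tx) -> bool:
    """Write/read or write/write overlap on any address means a conflict."""
    return bool(a.write_set & (b.read_set | b.write_set)) or \
           bool(b.write_set & (a.read_set | a.write_set))

def schedule(txs: list[Tx]) -> list[list[Tx]]:
    """Greedily group transactions into rounds of mutually non-conflicting ones."""
    rounds: list[list[Tx]] = []
    for tx in txs:
        for group in rounds:
            if all(not conflicts(tx, other) for other in group):
                group.append(tx)
                break
        else:
            rounds.append([tx])
    return rounds

if __name__ == "__main__":
    warp = [Tx("t0", {1, 2}, {2}), Tx("t1", {2}, set()),
            Tx("t2", {5}, {6}), Tx("t3", {6}, set())]
    for i, group in enumerate(schedule(warp)):
        print(f"round {i}: {[t.name for t in group]}")
```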