986 results for Software Transactional Memory (STM)
Abstract:
The present study investigated how object locations learned separately are integrated and represented as a single spatial layout in memory. Two experiments were conducted in which participants learned a room-sized spatial layout that was divided into two sets of five objects. Results suggested that integration across sets was performed efficiently when it was done during initial encoding of the environment, but entailed a cost in accuracy when it was attempted at the time of memory retrieval. These findings suggest that, once formed, spatial representations in memory generally remain independent, and integrating them into a single representation requires additional cognitive processes.
Abstract:
In this paper we present a cryptanalysis of a new 256-bit hash function, FORK-256, proposed by Hong et al. at FSE 2006. This cryptanalysis is based on some unexpected differentials existing for the step transformation. We show their possible uses in different attack scenarios by giving a 1-bit (resp. 2-bit) near-collision attack against the full compression function of FORK-256 running with complexity of 2^125 (resp. 2^120) and with negligible memory, and by exhibiting a 22-bit near pseudo-collision. We also show that we can find collisions for the full compression function with a small amount of memory, with complexity not exceeding 2^126.6 hash evaluations. We further show how to reduce this complexity to 2^109.6 hash computations by using 2^73 memory words. Finally, we show that this attack can be extended with no additional cost to find collisions for the full hash function, i.e. with the predefined IV.
Abstract:
This paper presents a software watermarking model for programs written in an object-oriented language (such as C++ and Java). The model allows watermarks to be inserted at three “orthogonal” levels. At the first level, watermarks are injected into objects. At the second level, the watermark is used to select proper variants of the source code. The third level uses a transition function that can be used to generate copies with different functionalities. Generic watermarking schemes are presented and their security is discussed.
Abstract:
Studies of semantic impairment arising from brain disease suggest that the anterior temporal lobes are critical for semantic abilities in humans; yet activation of these regions is rarely reported in functional imaging studies of healthy controls performing semantic tasks. Here, we combined neuropsychological and PET functional imaging data to show that when healthy subjects identify concepts at a specific level, the regions activated correspond to the site of maximal atrophy in patients with relatively pure semantic impairment. The stimuli were color photographs of common animals or vehicles, and the task was category verification at specific (e.g., robin), intermediate (e.g., bird), or general (e.g., animal) levels. Specific, relative to general, categorization activated the antero-lateral temporal cortices bilaterally, despite matching of these experimental conditions for difficulty. Critically, in patients with atrophy in precisely these areas, the most pronounced deficit was in the retrieval of specific semantic information.
Abstract:
In 2006, Gaurav Gupta and Josef Pieprzyk presented an attack on the branch-based software watermarking scheme proposed by Ginger Myles and Hongxia Jin in 2005. The software watermarking model is based on replacing jump instructions or unconditional branch statements (UBS) with calls to a fingerprint branch function (FBF) that computes the correct target address of the UBS as a function of the generated fingerprint and an integrity check. If the program is tampered with, the fingerprint and/or integrity check changes and the target address is not computed correctly. Gupta and Pieprzyk's attack uses debugger capabilities such as register and address lookup and breakpoints to minimize the need to manually inspect the software. Using these resources, the FBF and the calls to it are identified, correct displacement values are generated, and the calls to the FBF are replaced with the original UBSs, transferring control to the correct target instructions. In this paper, we propose a watermarking model that provides security against such debugging attacks. The two primary measures taken are shifting the stack-pointer modification operation from the FBF to the individual UBSs, and coding the stack-pointer modification in the same language as the rest of the code rather than in assembly language, so that it does not stand out. The complexity of the manual component increases from O(1) in the previous scheme to O(n) in our proposed scheme.
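To make the branch-based mechanism discussed above more concrete, here is a minimal sketch under a simplified setting: an unconditional branch is replaced by a call that recomputes its target from a per-copy fingerprint and an integrity check over the protected code, so any tampering perturbs the lookup. The names (SECRET_KEY, integrity_check, fingerprint_branch, displacement_table) are hypothetical illustrations, not identifiers from either scheme discussed in the abstract.

```python
# Minimal sketch of the fingerprint-branch-function (FBF) idea.
# All names here are hypothetical; this is not the scheme from the paper.
import hashlib

SECRET_KEY = b"watermark-key"  # embedder's secret (illustrative only)

def integrity_check(code_bytes: bytes) -> int:
    """Hash the protected code region; the value changes if the program is tampered with."""
    return int.from_bytes(hashlib.sha256(code_bytes).digest()[:4], "big")

def fingerprint(user_id: int) -> int:
    """Per-copy fingerprint derived from a buyer/user identifier."""
    digest = hashlib.sha256(SECRET_KEY + user_id.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big")

def fingerprint_branch(user_id: int, code_bytes: bytes, displacement_table: dict) -> int:
    """Stand-in for the FBF: the correct branch target is recovered only when
    both the fingerprint and the integrity check have their expected values."""
    key = (fingerprint(user_id) ^ integrity_check(code_bytes)) & 0xFFFF
    return displacement_table[key]  # a tampered binary yields a wrong key, hence a wrong or missing target
```

In an intact copy, each rewritten branch calls fingerprint_branch and jumps to the returned address; after tampering, the integrity check no longer matches and the table lookup fails or misdirects, which is the property the debugging attack above tries to strip and the proposed countermeasures try to protect.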
Abstract:
The generation of a correlation matrix for a set of genomic sequences is a common requirement in many bioinformatics problems such as phylogenetic analysis. Each sequence may be millions of bases long and there may be thousands of such sequences that we wish to compare, so not all sequences may fit into main memory at the same time. Each sequence needs to be compared with every other sequence, so we will generally need to page some sequences in and out more than once. In order to minimize execution time, we need to minimize this I/O. This paper develops an approach for faster and scalable computation of large correlation matrices through maximal exploitation of available memory and a reduction in the number of I/O operations. The approach is scalable in the sense that the same algorithms can be executed on computing platforms with different amounts of memory, and can be applied to different bioinformatics problems with different correlation matrix sizes. The significant performance improvement of the approach over previous work is demonstrated through benchmark examples.
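As an illustration of the kind of memory-aware scheduling such an approach involves, the following is a minimal sketch of block-wise (tiled) pairwise comparison, assuming that only two blocks of sequences fit in memory at once; it shows only the generic tiling idea, not the paper's algorithm, and load_sequence and correlate are hypothetical placeholders.

```python
# Minimal sketch of tiled pairwise comparison that keeps at most two blocks
# of sequences in memory at a time. Placeholders, not the paper's algorithm.
from itertools import combinations

def load_sequence(path: str) -> str:
    with open(path) as f:  # assumes one sequence per file, already cleaned
        return f.read()

def correlate(a: str, b: str) -> float:
    """Placeholder pairwise score: simple identity ratio over the aligned prefix."""
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a, b)) / n if n else 0.0

def blocked_correlation_matrix(paths: list, block_size: int) -> dict:
    """Score every pair of sequences while holding at most two blocks in memory."""
    scores = {}
    blocks = [paths[i:i + block_size] for i in range(0, len(paths), block_size)]
    for bi, block_i in enumerate(blocks):
        seqs_i = {p: load_sequence(p) for p in block_i}            # page in block i
        for (pa, a), (pb, b) in combinations(seqs_i.items(), 2):   # pairs within block i
            scores[(pa, pb)] = correlate(a, b)
        for block_j in blocks[bi + 1:]:
            seqs_j = {p: load_sequence(p) for p in block_j}        # page in block j
            for pa, a in seqs_i.items():
                for pb, b in seqs_j.items():                       # pairs across blocks
                    scores[(pa, pb)] = correlate(a, b)
    return scores
```

The I/O cost of this naive tiling grows with the number of block pairs; choosing the block size to fill available memory, and ordering block pairs to reuse whatever is already resident, is exactly the kind of saving the paper targets.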
Abstract:
MicroRNAs are small non-coding RNAs that mediate post-transcriptional gene silencing. Fear-extinction learning in C57BL/6J mice led to increased expression of the brain-specific microRNA miR-128b, which disrupted the stability of several plasticity-related target genes and regulated the formation of fear-extinction memory. Increased miR-128b activity may therefore facilitate the transition from retrieval of the original fear memory toward the formation of a new fear-extinction memory.
Abstract:
It is well established that the coordinated regulation of activity-dependent gene expression by the histone acetyltransferase (HAT) family of transcriptional coactivators is crucial for the formation of contextual fear and spatial memory, and for hippocampal synaptic plasticity. However, no studies have examined the role of this epigenetic mechanism within the infralimbic prefrontal cortex (ILPFC), an area of the brain that is essential for the formation and consolidation of fear extinction memory. Here we report that a postextinction training infusion of a combined p300/CBP inhibitor (Lys-CoA-Tat), directly into the ILPFC, enhances fear extinction memory in mice. Our results also demonstrate that the HAT p300 is highly expressed within pyramidal neurons of the ILPFC and that the small-molecule p300-specific inhibitor (C646) infused into the ILPFC immediately after weak extinction training enhances the consolidation of fear extinction memory. C646 infused 6 h after extinction had no effect on fear extinction memory, nor did an immediate postextinction training infusion into the prelimbic prefrontal cortex. Consistent with the behavioral findings, inhibition of p300 activity within the ILPFC facilitated long-term potentiation (LTP) under stimulation conditions that do not evoke long-lasting LTP. These data suggest that one function of p300 activity within the ILPFC is to constrain synaptic plasticity, and that a reduction in the function of this HAT is required for the formation of fear extinction memory.
Abstract:
Throughout a lifetime of operation, a mobile service robot needs to acquire, store and update its knowledge of a working environment. This includes the ability to identify and track objects in different places, as well as using this information for interaction with humans. This paper introduces a long-term updating mechanism, inspired by the modal model of human memory, to enable a mobile robot to maintain its knowledge of a changing environment. The memory model is integrated with a hybrid map that represents the global topology and local geometry of the environment, as well as the respective 3D location of objects. We aim to enable the robot to use this knowledge to help humans by suggesting the most likely locations of specific objects in its map. An experiment using omni-directional vision demonstrates the ability to track the movements of several objects in a dynamic environment over an extended period of time.
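As a purely illustrative sketch of a long-term object store with forgetting, loosely in the spirit of the short-term/long-term split of the modal model mentioned above, the snippet below records sightings, consolidates repeated ones, and answers "where is this object most likely to be?" with a recency-weighted guess. The class and parameters (ObjectLocationMemory, consolidation_threshold, decay) are hypothetical and are not the paper's memory model or hybrid map.

```python
# Minimal, hypothetical sketch of an object-location memory with a
# short-term buffer, consolidation into long-term storage, and recency decay.
import time
from collections import defaultdict

class ObjectLocationMemory:
    def __init__(self, consolidation_threshold: int = 3, decay: float = 0.01):
        self.short_term = defaultdict(list)   # object -> recent (location, timestamp) sightings
        self.long_term = defaultdict(dict)    # object -> {location: (weight, last_seen)}
        self.threshold = consolidation_threshold
        self.decay = decay                    # per-second forgetting rate

    def observe(self, obj: str, location: str) -> None:
        """Record a sighting; enough repeated sightings are consolidated into long-term memory."""
        self.short_term[obj].append((location, time.time()))
        if len(self.short_term[obj]) >= self.threshold:
            for loc, t in self.short_term[obj]:
                weight, _ = self.long_term[obj].get(loc, (0.0, t))
                self.long_term[obj][loc] = (weight + 1.0, t)
            self.short_term[obj].clear()

    def most_likely_location(self, obj: str):
        """Suggest the location whose recency-decayed weight is highest, or None if unknown."""
        now = time.time()
        candidates = {
            loc: weight * (1.0 - self.decay) ** (now - last_seen)
            for loc, (weight, last_seen) in self.long_term[obj].items()
        }
        return max(candidates, key=candidates.get) if candidates else None
```

A robot loop would call observe() whenever its vision pipeline recognises an object in the map frame, and most_likely_location() when a user asks where something is; the decay means stale sightings gradually lose out to recent ones in a changing environment.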
Abstract:
The capability of storing multi-bit information is one of the most important challenges in memory technologies. An ambipolar polymer, which intrinsically has the ability to transport both electrons and holes, provides an opportunity when used as the semiconducting layer for the charge-trapping layer to trap both electrons and holes efficiently. Here, we achieved a large memory window and distinct multilevel data storage by utilizing this ambipolar charge-trapping mechanism. The as-fabricated flexible memory devices display five well-defined data levels with good endurance and retention properties, showing potential application in printed electronics.
Abstract:
With increased consolidation and a few large vendors dominating the market, how can software vendors distinguish themselves in order to maintain profitability and gain market share? Customers are becoming increasingly proactive in selecting a vendor and a product, drawing upon various publications, market surveys, mailing lists, and, of course, other users. In particular, though, a company's Web site is the obvious place to begin information gathering. In sum, it may seem that the days of the uninformed customer prepared to be "sold to" are all but gone.
Abstract:
The adoption of packaged software is becoming increasingly common in a variety of organizations, and much of the packaged software literature presents this as a straightforward, linear process based on rationalistic evaluation. This paper applies the framework of power relations developed by Markus and Bjørn-Andersen (1987) to a longitudinal study concerning the adoption of a customer relationship management package in a small organization. This is used to highlight both overt and covert power issues within the selection and procurement of the product and to illustrate the interplay of power between senior management, IT managers, IT vendors and consultants, and end-users. The paper contributes to the growing body of literature on packaged software and also to our understanding of how power is deeply embedded within the surrounding processes.
Abstract:
Purpose – This paper seeks to analyse the process of packaged software selection in a small organization, focussing particularly on the role of IT consultants as intermediaries in the process. Design/methodology/approach – This is based upon a longitudinal, qualitative field study concerning the adoption of a customer relationship management package in an SME management consultancy. Findings – The authors illustrate how the process of “salesmanship”, an activity directed by the vendor/consultant and focussed on the interests of senior management, marginalises user needs and ultimately secures the procurement of the software package. Research limitations/implications – Despite the best intentions the authors lose something of the rich detail of the lived experience of technology in presenting the case study as a linear narrative. Specifically, the authors have been unable to do justice to the complexity of the multifarious ways in which individual perceptions of the project were influenced and shaped by the opinions of others. Practical implications – Practitioners, particularly those from within SMEs, should be made aware of the ways in which external parties may have a vested interest in steering projects in a particular direction, which may not necessarily align with their own interests. Originality/value – This study highlights in detail the role of consultants and vendors in software selection processes, an area which has received minimal attention to date. Prior work in this area emphasises the necessary conditions for, and positive outcomes of, appointing external parties in an SME context, with only limited attention being paid to the potential problems such engagements may bring.
Abstract:
As organisations increasingly engage in the selection, purchase, and adoption of packaged software products, how these activities are carried out in practice becomes increasingly relevant for researchers and practitioners. Our focus in this paper is to propose a framework for understanding the packaged software selection process. The functionalist literature on this area of study suggests a number of generic recommendations, which are based on rational assumptions about the process and view the decision making that takes place as producing the “best technology solution”. To explore this, we conducted a longitudinal, in-depth study of packaged software selection in a small organisation. For interpretation of the case, we draw upon the Social Construction of Technology, a theoretical framework arguing that technology is socially constituted and regarding the process of development as contradictory and uncertain. We offer a number of contributions. First, we further our understanding of packaged software selection with the critique that we offer of the functionalist literature, drawing insights from the emerging critical/constructivist literature and expanding our domain of interest to encompass the wider environment. Second, we weave this together with our experiences in the field, drawing on social constructivism for theoretical support, to develop a framework of packaged software selection that shows how various actors shape the process.