134 results for Hardware Transactional Memory


Relevance:

20.00%

Publisher:

Abstract:

Modern applications comprise multiple components, such as browser plug-ins, often of unknown provenance and quality. Statistics show that failure of such components accounts for a high percentage of software faults. Enabling isolation of such fine-grained components is therefore necessary to increase the robustness and resilience of security-critical and safety-critical computer systems. In this paper, we evaluate whether such fine-grained components can be sandboxed using the hardware virtualization support available in modern Intel and AMD processors. We compare the performance and functionality of this approach to two previous software-based approaches. The results demonstrate that hardware isolation minimizes the difficulties encountered with software-based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution's correctness. We also show that our relatively simple implementation has equivalent run-time performance, with overheads of less than 34%, does not require custom tool chains, and provides enhanced functionality over software-only approaches, confirming that hardware virtualization technology is a viable mechanism for fine-grained component isolation.

Relevance:

20.00%

Publisher:

Abstract:

The symbolic and improvisational nature of livecoding requires a shared networking framework to be flexible and extensible, while at the same time providing support for synchronisation, persistence and redundancy. Above all, the framework should be robust and available across a range of platforms. This paper proposes tuple space as a suitable framework for network communication in ensemble livecoding contexts. The role of tuple space as a concurrency framework and the associated timing aspects of the tuple space model are explored through Spaces, an implementation of tuple space for the Impromptu environment.
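To make the coordination model referred to above concrete, the following minimal Python sketch implements Linda-style tuple space operations (write, read, take, with wildcard matching and blocking). It is a generic illustration of the model only; it is not the Spaces API for Impromptu, whose interface and timing semantics are described in the paper.

```python
# Minimal tuple-space sketch (Linda-style): out() writes a tuple, rd() reads without
# removing, take() removes; None fields in a template act as wildcards.
import threading

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        """Write a tuple into the space and wake any waiting readers."""
        with self._cond:
            self._tuples.append(tuple(tup))
            self._cond.notify_all()

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, template):
        """Read (without removing) a tuple matching the template; blocks until one exists."""
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        return tup
                self._cond.wait()

    def take(self, template):
        """Remove and return a tuple matching the template; blocks until one exists."""
        with self._cond:
            while True:
                for i, tup in enumerate(self._tuples):
                    if self._match(template, tup):
                        return self._tuples.pop(i)
                self._cond.wait()

# Example: one performer publishes a tempo change, another reads it.
space = TupleSpace()
space.out(("tempo", 120))
print(space.rd(("tempo", None)))  # -> ('tempo', 120)
```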

Relevance:

20.00%

Publisher:

Abstract:

A recent Australian literature digitisation project uncovered some surprising discoveries in the children’s books that it digitised. The Children’s Literature Digital Resources (CLDR) Project digitised children’s books that were first published between 1851 and 1945 and made them available online through AustLit: The Australian Literature Resource. The digitisation process also preserved, within the pages of those books, a range of bookplates, book labels, inscriptions, and loose ephemera. This material allows us to trace the provenance of some of the digitised works, some of which came from the personal libraries of now-famous authors, and others from less celebrated sources. These extra-textual traces can contribute to cultural memory of the past by providing evidence of how books were collected and exchanged, and what kinds of books were presented as prizes in schools and Sunday schools. They also provide insight into Australian literary and artistic networks, particularly of the first few decades of the 20th century. This article describes the kinds of material uncovered in the digitisation process and suggests that the material provides insights into literary and cultural histories that might otherwise be forgotten. It also argues that the indexing of this material is vital if it is not to be lost to future researchers.

Relevance:

20.00%

Publisher:

Abstract:

The process of researching children’s literature from the past is a growing challenge as resources age and are increasingly treated as rare items, stored away within libraries and other research centres. In Australia, researchers and librarians have collaborated with the bibliographic database AustLit: The Australian Literature Resource to produce the Australian Children’s Literature Digital Resources Project (CLDR). This project aims to address the growing demand for online access to rare children’s literature resources, and demonstrates the research potential of early Australian children’s literature by supplementing the collection with relevant critical articles. The CLDR project is designed with a specific focus and provides access to full-text Australian children’s literature from European settlement to 1945. The collection demonstrates a need and desire to preserve literary treasures and to prevent losing such collections in a digital age. The collection covers many themes relevant to the conference, including trauma, survival, memory, hauntings, and histories. The resource provides new and exciting ways to research children’s literature from the past and offers a fascinating repository to scholars and professionals from a range of disciplines who are interested in Australian children’s literature.

Relevance:

20.00%

Publisher:

Abstract:

The feasibility of using an in-hardware implementation of a genetic algorithm (GA) to solve the computationally expensive travelling salesman problem (TSP) is explored, especially in regard to hardware resource requirements for problem and population sizes. We investigate via numerical experiments whether a small population size might prove sufficient to obtain reasonable-quality solutions for the TSP, thereby permitting a relatively resource-efficient hardware implementation on field programmable gate arrays (FPGAs). Software experiments on two TSP benchmarks involving 48 and 532 cities were used to explore the extent to which population size can be reduced without compromising solution quality, and results show that a GA allowed to run for a large number of generations with a smaller population size can yield solutions of comparable quality to those obtained using a larger population. This finding is then used to investigate feasible problem sizes on a targeted Virtex-7 vx485T-2 FPGA platform via exploration of hardware resource requirements for memory and data flow operations.
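The small-population strategy described above can be sketched in software. The following Python sketch runs a permutation-encoded GA for the TSP with a deliberately small population, elitism, order-style crossover and swap mutation; the population size, operators and parameters are illustrative assumptions, not the paper's FPGA design.

```python
# Minimal GA-for-TSP sketch: small population, many generations, elitism.
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2):
    """Copy a slice from parent 1, fill the rest in parent 2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def ga_tsp(dist, pop_size=8, generations=5000, mutation_rate=0.2):
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    best = min(pop, key=lambda t: tour_length(t, dist))
    for _ in range(generations):
        new_pop = [best]                            # elitism: keep the best tour
        while len(new_pop) < pop_size:
            p1, p2 = random.sample(pop, 2)          # parents chosen uniformly
            child = order_crossover(p1, p2)
            if random.random() < mutation_rate:
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]   # swap mutation
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=lambda t: tour_length(t, dist))
    return best, tour_length(best, dist)

# Toy usage: random symmetric distance matrix for 10 cities.
n = 10
dist = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        dist[i][j] = dist[j][i] = random.randint(1, 100)
print(ga_tsp(dist))
```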

Relevance:

20.00%

Publisher:

Abstract:

The feasibility of real-time calculation of parameters for an internal combustion engine via reconfigurable hardware implementation is investigated as an alternative to software computation. A detailed in-hardware field programmable gate array (FPGA)-based design is developed and evaluated using input crank angle and in-cylinder pressure data from fully instrumented diesel engines in the QUT Biofuel Engine Research Facility (BERF). Results indicate the feasibility of employing a hardware-based implementation for real-time processing at speeds comparable to the data sampling rate currently used in the facility, with an acceptably low level of discrepancy between hardware- and software-based calculation of key engine parameters.
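As a software reference for the kind of calculation involved, the sketch below computes one representative engine parameter, indicated mean effective pressure (IMEP), from a crank-angle-resolved in-cylinder pressure trace. The engine geometry and the synthetic pressure trace are illustrative assumptions; this is not the paper's FPGA design nor necessarily the parameter set it computes.

```python
# IMEP sketch: cylinder volume from the slider-crank relation, then IMEP = (cyclic
# integral of p dV) / displaced volume. Geometry values below are illustrative.
import numpy as np

BORE, STROKE, ROD = 0.084, 0.090, 0.145            # hypothetical geometry (metres)
R = STROKE / 2.0                                    # crank radius
A_PISTON = np.pi * BORE ** 2 / 4.0                  # piston area
V_DISP = A_PISTON * STROKE                          # displaced volume
V_CLEAR = V_DISP / 16.0                             # clearance volume (CR ~ 17:1)

def cylinder_volume(theta_deg):
    """Cylinder volume (m^3) versus crank angle in degrees after TDC."""
    th = np.radians(theta_deg)
    s = R * (1 - np.cos(th)) + ROD - np.sqrt(ROD ** 2 - (R * np.sin(th)) ** 2)
    return V_CLEAR + A_PISTON * s

def imep(theta_deg, pressure_pa):
    """IMEP (Pa): integrate p dV around the cycle, divide by displaced volume."""
    volume = cylinder_volume(theta_deg)
    work = np.trapz(pressure_pa, volume)
    return work / V_DISP

# Toy usage: synthetic motored-like (polytropic) pressure trace over a 720-degree cycle;
# with no combustion the p-V loop encloses ~zero area, so IMEP is close to 0 bar.
theta = np.arange(-360.0, 360.0, 0.5)
p = 1e5 * (cylinder_volume(180.0) / cylinder_volume(theta)) ** 1.3
print(f"IMEP = {imep(theta, p) / 1e5:.2f} bar")
```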

Relevance:

20.00%

Publisher:

Abstract:

Free association norms indicate that words are organized into semantic/associative neighborhoods within a larger network of words and links that bind the net together. We present evidence indicating that memory for a recent word event can depend on implicitly and simultaneously activating related words in its neighborhood. Processing a word during encoding primes its network representation as a function of the density of the links in its neighborhood. Such priming increases recall and recognition and can have long-lasting effects when the word is processed in working memory. Evidence for this phenomenon is reviewed in extralist cuing, primed free association, intralist cuing, and single-item recognition tasks. The findings also show that when a related word is presented to cue the recall of a studied word, the cue activates it in an array of related words that distract and reduce the probability of its selection. The activation of the semantic network produces priming benefits during encoding and search costs during retrieval. In extralist cuing, recall is a negative function of cue-to-distracter strength and a positive function of neighborhood density, cue-to-target strength, and target-to-cue strength. We show how four measures derived from the network can be combined and used to predict memory performance. These measures play different roles in different tasks, indicating that the contribution of the semantic network varies with the context provided by the task. We evaluate spreading activation and quantum-like entanglement explanations for the priming effect produced by neighborhood density.
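To make the network measures named above concrete, the toy sketch below computes neighborhood density and cue-to-target/target-to-cue strength from a small free-association matrix and combines them in a simple weighted sum. The word set, strengths, threshold and weights are all illustrative assumptions; this is not the authors' fitted prediction model.

```python
# Toy free-association network: strength[i][j] is the probability that word j is
# produced as an associate of word i. Values are invented for illustration.
import numpy as np

words = ["doctor", "nurse", "hospital", "patient"]
strength = np.array([
    [0.00, 0.35, 0.20, 0.10],
    [0.40, 0.00, 0.15, 0.10],
    [0.25, 0.10, 0.00, 0.20],
    [0.15, 0.05, 0.30, 0.00],
])

def neighborhood_density(target, threshold=0.05):
    """Proportion of possible links among the target's associates that actually exist."""
    neighbors = [j for j in range(len(words)) if strength[target, j] > threshold]
    if len(neighbors) < 2:
        return 0.0
    links = sum(strength[i, j] > threshold
                for i in neighbors for j in neighbors if i != j)
    return links / (len(neighbors) * (len(neighbors) - 1))

cue, target = words.index("nurse"), words.index("doctor")
cue_to_target = strength[cue, target]        # forward strength from cue to target
target_to_cue = strength[target, cue]        # backward strength from target to cue
density = neighborhood_density(target)
# Illustrative linear combination standing in for a fitted prediction model.
predicted_recall = 0.4 * cue_to_target + 0.3 * target_to_cue + 0.3 * density
print(cue_to_target, target_to_cue, density, predicted_recall)
```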

Relevance:

20.00%

Publisher:

Abstract:

In power hardware-in-the-loop (PHIL) simulations, a real-time simulated power system is interfaced to a piece of hardware, usually called the hardware under test (HuT). A PHIL test can be realized using several simulation tools. Among them, the Real Time Digital Simulator (RTDS) is an ideal tool to perform complex power system simulations in near real-time. Stable operation of the entire system and the accuracy of simulation results are the main concerns regarding a PHIL simulation. In this paper, a simulated power network on RTDS is interfaced to the HuT through a voltage source converter (VSC). Issues around stability and other interface problems are studied, and a new method to stabilize some unstable PHIL cases is proposed. PHIL simulation results in PSCAD and RSCAD are presented.

Relevance:

20.00%

Publisher:

Abstract:

Two experiments examine outcomes for sponsor and ambusher brands within sponsorship settings. It is demonstrated that although making consumers aware of the presence of ambusher brands can reduce subsequent event recall to competitor cues, recall to sponsor cues can also suffer. Attitudinal effects are also considered.

Relevance:

20.00%

Publisher:

Abstract:

The generation of a correlation matrix from a large set of long gene sequences is a common requirement in many bioinformatics problems such as phylogenetic analysis. The generation is not only computationally intensive but also requires significant memory resources as, typically, few gene sequences can be simultaneously stored in primary memory. The standard practice in such computation is to use frequent input/output (I/O) operations. Therefore, minimizing the number of these operations will yield much faster run-times. This paper develops an approach for the faster and scalable computing of large-size correlation matrices through the full use of available memory and a reduced number of I/O operations. The approach is scalable in the sense that the same algorithms can be executed on different computing platforms with different amounts of memory and can be applied to different problems with different correlation matrix sizes. The significant performance improvement of the approach over the existing approaches is demonstrated through benchmark examples.
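The approach described above hinges on organizing the computation so that sequence data is read in large blocks rather than pair by pair. The sketch below shows one common way to do this, computing a Pearson correlation matrix tile by tile over blocks of numerically encoded sequences read from a memory-mapped file; the file layout, loader and block size are illustrative assumptions, not the paper's algorithm.

```python
# Blocked (tiled) correlation-matrix computation: rows are read from a memory-mapped
# file one block at a time, so each pair of blocks is loaded once rather than
# re-reading sequences pair by pair.
import numpy as np

def load_block(path, start, size):
    """Hypothetical loader: rows [start, start+size) of an (n_seq x seq_len) .npy matrix."""
    data = np.load(path, mmap_mode="r")        # memory-mapped, avoids loading the whole file
    return np.asarray(data[start:start + size], dtype=np.float64)

def normalise_rows(block):
    centred = block - block.mean(axis=1, keepdims=True)
    return centred / np.linalg.norm(centred, axis=1, keepdims=True)  # assumes non-constant rows

def blocked_correlation(path, n_seq, block=256):
    corr = np.empty((n_seq, n_seq))
    for i0 in range(0, n_seq, block):
        bi = normalise_rows(load_block(path, i0, min(block, n_seq - i0)))
        for j0 in range(i0, n_seq, block):
            bj = normalise_rows(load_block(path, j0, min(block, n_seq - j0)))
            tile = bi @ bj.T                   # Pearson correlations for this tile
            corr[i0:i0 + tile.shape[0], j0:j0 + tile.shape[1]] = tile
            corr[j0:j0 + tile.shape[1], i0:i0 + tile.shape[0]] = tile.T  # fill by symmetry
    return corr

# Usage (hypothetical file of numerically encoded sequences):
# corr = blocked_correlation("encoded_sequences.npy", n_seq=10000, block=256)
```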

Relevance:

20.00%

Publisher:

Abstract:

The world is rapidly ageing. It is against this backdrop that there are increasing incidences of dementia reported worldwide, with Alzheimer's disease (AD) being the most common form of dementia in the elderly. It is estimated that AD affects almost 4 million people in the US, and costs the US economy more than 65 million dollars annually. There is currently no cure for AD, but various therapeutic agents have been employed in attempting to slow down the progression of the illness, one of which is oestrogen. Over the last few decades, scientists have focused mainly on the roles of oestrogen in the prevention and treatment of AD. Newer evidence suggests that testosterone might also be involved in the pathogenesis of AD. Although the exact mechanisms by which androgen might affect AD are still largely unknown, it is known that testosterone can act directly via androgen receptor-dependent mechanisms or indirectly by converting to oestrogen to exert this effect. Clinical trials need to be conducted to ascertain the putative role of androgen replacement in Alzheimer's disease.

Relevance:

20.00%

Publisher:

Abstract:

Authenticated Encryption (AE) is the cryptographic process of providing simultaneous confidentiality and integrity protection to messages. This approach is more efficient than applying a two-step process of providing confidentiality for a message by encrypting it, and in a separate pass providing integrity protection by generating a Message Authentication Code (MAC). AE using symmetric ciphers can be provided by either stream ciphers with built-in authentication mechanisms or block ciphers using appropriate modes of operation. However, stream ciphers have the potential for higher performance and a smaller footprint in hardware and/or software than block ciphers. This property makes stream ciphers suitable for resource-constrained environments, where storage and computational power are limited. There have been several recent stream cipher proposals that claim to provide AE. These ciphers can be analysed using existing techniques that consider confidentiality or integrity separately; however, there is currently no framework for the analysis of AE stream ciphers that considers these two properties simultaneously. This thesis introduces a novel framework for the analysis of AE using stream cipher algorithms. It analyses the mechanisms for providing confidentiality and for providing integrity in AE algorithms using stream ciphers, with a greater emphasis on the analysis of the integrity mechanisms, as there is little in the public literature on this in the context of authenticated encryption. The thesis has four main contributions, as follows. The first contribution is the design of a framework that can be used to classify AE stream ciphers based on three characteristics. The first classification applies Bellare and Namprempre's work on the order in which the encryption and authentication processes take place. The second classification is based on the method used for accumulating the input message (either directly or indirectly) into the internal states of the cipher to generate a MAC. The third classification is based on whether the sequence that is used to provide encryption and authentication is generated using a single key and initial vector, or two keys and two initial vectors. The second contribution is the application of an existing algebraic method to analyse the confidentiality algorithms of two AE stream ciphers, namely SSS and ZUC. The algebraic method is based on considering the nonlinear filter (NLF) of these ciphers as a combiner with memory. This method enables us to construct equations for the NLF that relate the inputs, outputs and memory of the combiner to the output keystream. We show that both of these ciphers are secure against this type of algebraic attack. We conclude that using a key-dependent SBox in the NLF twice, and using two different SBoxes in the NLF of ZUC, prevents this type of algebraic attack. The third contribution is a new general matrix-based model for MAC generation where the input message is injected directly into the internal state. This model describes the accumulation process when the input message is injected directly into the internal state of a nonlinear filter generator. We show that three recently proposed AE stream ciphers can be considered as instances of this model, namely SSS, NLSv2 and SOBER-128. Our model is more general than previous investigations into direct injection. Possible forgery attacks against this model are investigated. It is shown that using a nonlinear filter in the accumulation process of the input message, when either the input message or the initial state of the register is unknown, prevents forgery attacks based on collisions. The last contribution is a new general matrix-based model for MAC generation where the input message is injected indirectly into the internal state. This model uses the input message as a controller to accumulate a keystream sequence into an accumulation register. We show that three current AE stream ciphers can be considered as instances of this model, namely ZUC, Grain-128a and Sfinks. We establish the conditions under which the model is susceptible to forgery and side-channel attacks.
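The "direct injection" idea in the third contribution can be illustrated in miniature: message words are folded into the feedback of a keyed shift register as it is clocked, and the tag is read out through a nonlinear filter after some blank rounds. The register size, feedback taps, filter and constants below are invented for illustration; this is not the thesis's matrix-based model, and not SSS, NLSv2 or SOBER-128.

```python
# Toy direct-injection MAC: message words are mixed into the state of a keyed filter
# generator as it is clocked, then a tag is squeezed out through a nonlinear filter.
MASK32 = 0xFFFFFFFF

def nonlinear_filter(state):
    """Toy nonlinear output filter over a few register stages."""
    return ((state[0] + (state[3] ^ state[7])) * 2654435761 + (state[11] & state[15])) & MASK32

def clock(state, injected=0):
    """Shift the 16-word register one step, folding an optional input word into the feedback."""
    feedback = (state[0] ^ ((state[4] << 1) & MASK32) ^ state[10] ^ injected) & MASK32
    return state[1:] + [feedback]

def mac_direct_injection(key_state, message_words, tag_words=4):
    state = list(key_state)                      # keyed initial state: 16 x 32-bit words
    for word in message_words:
        state = clock(state, injected=word & MASK32)   # inject each message word directly
    for _ in range(32):                          # blank rounds to diffuse the last inputs
        state = clock(state)
    tag = []
    for _ in range(tag_words):                   # read the tag through the nonlinear filter
        state = clock(state)
        tag.append(nonlinear_filter(state))
    return tag

# Example: a fixed "key" state and a two-word message.
key_state = [(0x9E3779B9 * (i + 1)) & MASK32 for i in range(16)]
print([hex(w) for w in mac_direct_injection(key_state, [0x48656C6C, 0x6F212121])])
```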

Relevance:

20.00%

Publisher:

Abstract:

In this paper, the deposition of C-20 fullerenes on a diamond (001)-(2x1) surface and the fabrication of a C-20 thin film at 100 K were investigated by molecular dynamics (MD) simulation using the many-body Brenner bond order potential. First, we found that the collision dynamics of a single C-20 fullerene on a diamond surface was strongly dependent on its impact energy. Within the energy range 10-45 eV, the C-20 fullerene chemisorbed on the surface retained its free cage structure. This is consistent with the experimental observation, where it was called the memory effect in "C-20-type" films [P. Melion, Int. J. Mod. Phys. B 9, 339 (1995); P. Milani, Cluster Beam Synthesis of Nanostructured Materials (Springer, Berlin, 1999)]. Next, more than one hundred C-20 fullerenes (10-25 eV) were deposited one after the other onto the surface. The initial growth stage of the C-20 thin film was observed to be in the three-dimensional island mode. The randomly deposited C-20 fullerenes stacked on the diamond surface and acted as building blocks forming a polymer-like structure. The assembled film was also highly porous due to cluster-cluster interaction. The bond angle distribution and the neighbor-atom-number distribution of the film presented a well-defined local order, which is of sp(3) hybridization character, the same as that of a free C-20 cage. These simulation results are again in good agreement with experimental observation. Finally, the deposited C-20 film showed high stability even when the temperature was raised to 1500 K.

Relevance:

20.00%

Publisher:

Abstract:

There has been a renewal of interest in memory studies in recent years, particularly in the Western world. This chapter considers aspects of personal memory followed by the concept of cultural memory. It then examines how the Australian cultural memory of the Anzac Legend is represented in a number of recent picture books.

Relevance:

20.00%

Publisher:

Abstract:

A one-time program is a hypothetical device by which a user may evaluate a circuit on exactly one input of his choice, before the device self-destructs. One-time programs cannot be achieved by software alone, as any software can be copied and re-run. However, it is known that every circuit can be compiled into a one-time program using a very basic hypothetical hardware device called a one-time memory. At first glance it may seem that quantum information, which cannot be copied, might also allow for one-time programs. But it is not hard to see that this intuition is false: one-time programs for classical or quantum circuits based solely on quantum information do not exist, even with computational assumptions. This observation raises the question, "what assumptions are required to achieve one-time programs for quantum circuits?" Our main result is that any quantum circuit can be compiled into a one-time program assuming only the same basic one-time memory devices used for classical circuits. Moreover, these quantum one-time programs achieve statistical universal composability (UC-security) against any malicious user. Our construction employs methods for computation on authenticated quantum data, and we present a new quantum authentication scheme called the trap scheme for this purpose. As a corollary, we establish UC-security of a recent protocol for delegated quantum computation.