138 results for Cheever, Ezekiel, 1615-1708.


Relevance:

10.00%

Publisher:

Abstract:

Scalability and efficiency of on-chip communication are critical design considerations for emerging Multiprocessor Systems-on-Chip (MPSoCs). Conventional bus-based interconnection schemes no longer fit MPSoCs with a large number of cores, and Networks-on-Chip (NoC) are widely accepted as the next-generation interconnection scheme for large-scale MPSoCs. The growing complexity of MPSoCs requires fast and accurate system-level modeling techniques for rapid modeling and verification of emerging designs. However, existing modeling methods fall short of delivering both timing accuracy and simulation speed. This paper proposes a novel system-level NoC modeling method, based on SystemC and TLM2.0, which delivers timing accuracy close to cycle-accurate modeling techniques at a significantly lower simulation cost. Experimental results are presented to demonstrate the proposed method. ©2010 IEEE.
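The gain of such system-level modeling is that packet latency is annotated analytically instead of simulated cycle by cycle. A minimal sketch of the idea (all names, constants, and the XY-routed 2D mesh are illustrative assumptions, not the paper's model):

```python
# Hypothetical abstract NoC timing annotation: latency is estimated from the
# XY-routed hop count on a 2D mesh plus assumed per-hop router/link delays,
# rather than simulating every cycle. All constants are illustrative.

ROUTER_DELAY = 3   # cycles per router traversal (assumed)
LINK_DELAY = 1     # cycles per link traversal (assumed)

def xy_hops(src, dst):
    """Hop count under dimension-ordered (XY) routing on a 2D mesh."""
    (sx, sy), (dx, dy) = src, dst
    return abs(dx - sx) + abs(dy - sy)

def packet_latency(src, dst, flits):
    """Approximate latency: header traversal plus pipelined body flits."""
    hops = xy_hops(src, dst)
    header = hops * (ROUTER_DELAY + LINK_DELAY)
    return header + (flits - 1)  # remaining flits stream behind the header

latency = packet_latency((0, 0), (3, 2), flits=4)
```

Replacing per-cycle router simulation with such closed-form annotations is what buys the large simulation speed-up at a modest accuracy cost.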


The combinatorial frequency generation by periodic stacks of magnetically biased semiconductor layers has been modelled in a self-consistent problem formulation, taking into account the nonlinear dynamics of carriers. It is shown that magnetic bias not only renders the three-wave mixing process nonreciprocal but also significantly enhances the nonlinear interactions in the stacks, especially at frequencies close to the intrinsic magneto-plasma resonances of the constituent layers. The main mechanisms and properties of combinatorial frequency generation and emission from the stacks are illustrated by simulation results, and the effects of the individual layer parameters and the structure arrangement on the nonlinear and nonreciprocal response of the stacks are discussed. © 2014 Elsevier B.V. All rights reserved.
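The combinatorial frequencies themselves are just the integer mixing products of the two pump frequencies. A small illustrative enumeration (not the paper's electromagnetic model):

```python
# Illustrative enumeration of combinatorial (mixing) frequencies: nonlinear
# interaction of two pumps f1 and f2 emits products |n1*f1 + n2*f2|.
# Three-wave mixing corresponds to the second-order terms f1 + f2 and f1 - f2.

def combinatorial_frequencies(f1, f2, order=2):
    """Positive mixing products |n1*f1 + n2*f2| with 0 < |n1|+|n2| <= order."""
    out = set()
    for n1 in range(-order, order + 1):
        for n2 in range(-order, order + 1):
            if 0 < abs(n1) + abs(n2) <= order:
                f = abs(n1 * f1 + n2 * f2)
                if f > 0:
                    out.add(f)
    return sorted(out)

freqs = combinatorial_frequencies(10.0, 3.0)  # pumps at 10 and 3 (arb. units)
```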


The Field Programmable Gate Array (FPGA) implementation of the commonly used Histogram of Oriented Gradients (HOG) algorithm, employed to extract features for object detection, is explored. A key focus has been the use of a new FPGA-based processor, IPPro, targeted at image processing. The paper details the mapping and scheduling factors that influence performance, and the stages undertaken to deploy the algorithm on FPGA hardware whilst taking account of the specific IPPro architecture features. We show that multi-core IPPro performance can exceed that of state-of-the-art FPGA designs by up to 3.2 times, with reduced design and implementation effort and increased flexibility, all on a low-cost Zynq programmable system.
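The core HOG computation being mapped to hardware is simple to state in software: per-pixel gradients, then magnitude-weighted accumulation into an orientation histogram. A minimal single-cell sketch (reference-style, not the paper's IPPro mapping):

```python
import math

# Minimal sketch of the HOG cell computation: central-difference gradients,
# then gradient magnitude accumulated into an unsigned-orientation histogram
# (0..180 degrees). This is the textbook form, not the hardware schedule.

def hog_cell_histogram(img, bins=9):
    """Orientation histogram for one cell of a 2D intensity grid."""
    h = [0.0] * bins
    rows, cols = len(img), len(img[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            h[int(ang / 180.0 * bins) % bins] += mag
    return h

# A vertical edge: all gradients are horizontal (angle 0), so bin 0 dominates.
img = [[0, 0, 5, 5]] * 4
hist = hog_cell_histogram(img)
```

The mapping challenge discussed above is scheduling exactly these per-pixel operations across the IPPro cores.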


The opportunistic human pathogen Propionibacterium acnes comprises a number of distinct phylogroups, designated types IA1, IA2, IB, IC, II and III, that vary in their production of putative virulence factors, their inflammatory potential, and their biochemical, aggregative and morphological characteristics. Although Multilocus Sequence Typing (MLST) currently represents the gold standard for unambiguous phylogroup classification and individual strain identification, it is a labour-intensive and time-consuming technique. We have therefore developed a multiplex touchdown PCR assay that will, in a single reaction, confirm the species identity and phylogeny of an isolate based on its pattern of reaction with six primer sets that target the 16S rRNA (all isolates), ATPase (types IA1, IA2, IC), sodA (types IA2, IB), atpD (type II) and recA (type III) housekeeping genes, as well as a Fic family toxin gene (type IC). When applied to 312 P. acnes isolates previously characterised by MLST, representing types IA1 (n=145), IA2 (n=20), IB (n=65), IC (n=7), II (n=45) and III (n=30), the multiplex displayed 100% sensitivity and 100% specificity for the detection of isolates within each targeted phylogroup. No cross-reactivity with isolates from other bacterial species was observed. The multiplex assay will provide researchers with a rapid, high-throughput and technically undemanding typing method for epidemiological and phylogenetic investigations. It will facilitate studies of the association of lineages with various infections and clinical conditions, and will serve as a pre-screening tool to maximise the number of genetically diverse isolates selected for downstream, higher-resolution sequence-based analyses.
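Because the abstract fully specifies which targets amplify in each phylogroup, interpreting a gel is a band-pattern lookup. A sketch encoding that mapping (target names taken from the text; the lookup structure itself is illustrative):

```python
# Band-pattern interpretation for the six-target multiplex described above:
# 16S rRNA amplifies in all isolates; the remaining targets discriminate
# the phylogroups. The mapping below transcribes the abstract's scheme.

PATTERNS = {
    frozenset({"16S", "ATPase"}):         "IA1",
    frozenset({"16S", "ATPase", "sodA"}): "IA2",
    frozenset({"16S", "sodA"}):           "IB",
    frozenset({"16S", "ATPase", "Fic"}):  "IC",
    frozenset({"16S", "atpD"}):           "II",
    frozenset({"16S", "recA"}):           "III",
}

def classify(bands):
    """Map an observed set of amplified targets to a phylogroup, else None."""
    if "16S" not in bands:
        return None  # no species-confirming 16S band: not P. acnes
    return PATTERNS.get(frozenset(bands))

group = classify({"16S", "ATPase", "Fic"})
```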


Fully Homomorphic Encryption (FHE) is a recently developed cryptographic technique which allows computations on encrypted data. There are many interesting applications for this encryption method, especially within cloud computing; however, the computational complexity is such that it is not yet practical for real-time applications. This work proposes optimised hardware architectures for the encryption step of an integer-based FHE scheme with the aim of improving its practicality. A low-area design and a high-speed parallel design are proposed and implemented on a Xilinx Virtex-7 FPGA, targeting the available DSP slices, which offer high-speed multiplication and accumulation. Both use the Comba multiplication scheduling method to manage the large multiplications required with unevenly sized multiplicands and to minimise the number of read and write operations to RAM. Results show that speed-up factors of 3.6 and 10.4 can be achieved for the encryption step with medium-sized security parameters for the low-area and parallel designs respectively, compared to a benchmark software implementation on an Intel Core2 Duo E8400 platform running at 3 GHz.
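The Comba method referred to above computes a multiprecision product column by column, accumulating every partial product of an output digit before a single store and carry, which is what minimises RAM traffic. A software sketch of the schedule (the base-2^16 limb layout is an illustrative choice):

```python
# Comba (column-wise) multiprecision multiplication: each output column k
# accumulates all products a[i]*b[k-i] before one store and one carry
# propagation, minimising intermediate reads/writes. Limb radix is assumed.

BASE = 1 << 16

def to_limbs(n):
    """Little-endian base-2**16 limb array of a non-negative integer."""
    limbs = []
    while n:
        limbs.append(n % BASE)
        n //= BASE
    return limbs or [0]

def comba_mul(a, b):
    """Multiply little-endian limb arrays column by column."""
    res = [0] * (len(a) + len(b))
    acc = 0
    for k in range(len(res)):
        for i in range(max(0, k - len(b) + 1), min(k + 1, len(a))):
            acc += a[i] * b[k - i]   # every partial product of column k
        res[k] = acc % BASE          # single store per output column
        acc //= BASE                 # carry flows into the next column
    return res

x, y = 123456789, 987654321
limbs = comba_mul(to_limbs(x), to_limbs(y))
value = sum(d * BASE**i for i, d in enumerate(limbs))
```

In hardware the inner multiply-accumulate maps naturally onto the DSP slices mentioned in the abstract.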


‘O daughter … forget your people and your father’s house’: Early Modern Women Writers and the Spanish Imaginary

Anne Holloway and Ramona Wray

Holloway and Wray consider the perspectives offered by two very different seventeenth-century women, Mary Bonaventure Browne, or Mother Browne (b. 1615), and Lady Ann Fanshawe (b. 1625), both of whom exchanged Ireland for Spain, and both of whom record journeys both ‘real’ and imagined in their writings. Browne’s deployment of hagiographical tropes in her History of the Poor Clares may reveal the potential impact of Iberian conventual culture; her allusions to the markers of sanctity insist on the immutability of the body, whilst accepting and anticipating spectral presence in the form of bilocation. Fanshawe’s Memoirs are considered alongside the material legacy of her ‘Booke of Receipts of Physickes, Salues, Waters, Cordialls, Preserues and Cookery.’ Her impressions, both in transit and within the domus, are similarly marked by receptivity and sensitivity to the host culture. Against a backdrop of religious persecution and political uncertainty, in both cases Spain emerges as a potentially enabling context for creativity and self-expression.

Keywords: memoir; Franciscan; Poor Clares; Fanshawe; Mary Bonaventure Browne; hagiography; life-writing; autobiography; women writers


Electing a leader is a fundamental task in distributed computing. In its implicit version, only the elected node itself needs to know that it is the leader. This article studies the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. In particular, the seemingly obvious lower bounds of Ω(m) messages, where m is the number of edges in the network, and Ω(D) time, where D is the network diameter, are nontrivial to show for randomized (Monte Carlo) algorithms. (Recent results, showing that even Ω(n), where n is the number of nodes in the network, is not a lower bound on the message complexity in complete networks, make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms, except for the restricted case of comparison algorithms, where it was also required that nodes may not wake up spontaneously and that D and n were not known. We establish these fundamental lower bounds in this article for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (namely, algorithms that work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m) messages algorithm. An O(D) time leader election algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.

An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. The answer is known to be negative in the deterministic setting. We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages against time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
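To make the Ω(m) intuition concrete, here is a toy illustration (not one of the article's algorithms) of implicit election by flooding random ranks: every edge carries messages each synchronous round, so the total message count scales with m:

```python
import random

# Toy illustration of randomized implicit leader election by flooding:
# each node draws a random rank and, in synchronous rounds, forwards the
# best rank it has seen over every incident edge. The node holding the
# global maximum is the (implicitly known) leader; messages scale with m.

def elect_by_flooding(adj, rng):
    n = len(adj)
    rank = [rng.random() for _ in range(n)]
    best = rank[:]                  # best rank each node has seen so far
    messages = 0
    changed = True
    while changed:                  # synchronous rounds until quiescence
        changed = False
        new_best = best[:]
        for u in range(n):
            for v in adj[u]:
                messages += 1       # u sends its current best to v
                if best[u] > new_best[v]:
                    new_best[v] = best[u]
                    changed = True
        best = new_best
    leader = max(range(n), key=lambda u: rank[u])
    return leader, messages

ring = [[(i - 1) % 6, (i + 1) % 6] for i in range(6)]  # 6-node ring, m = 6
leader, msgs = elect_by_flooding(ring, random.Random(1))
```

Each round costs one message per directed edge (12 on this ring), so the total is a multiple of 2m, in line with the Ω(m) bound the article proves.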


The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design space exploration for optimization, followed by design verification. Designers of special-purpose VLSI implementations often need to explore parameters, such as optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). The exploitation of diverse target architectures is typically associated with developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modifications or code tuning on multicore CPUs, GPUs, and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, in order to introduce FPGAs as a potential platform to efficiently execute simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes, ranging from short/medium-length (e.g., 8,000-bit) codes to long (e.g., 64,800-bit) DVB-S2 codes. We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, thus providing different acceleration factors over conventional multicore CPUs.
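At the heart of every Monte Carlo run in such an LDPC simulation is the parity check itself: a word c is a codeword iff Hc = 0 over GF(2). A minimal sketch with a tiny illustrative parity-check matrix (not a DVB-S2 code):

```python
# The elementary operation an LDPC decoding simulation evaluates millions of
# times: the syndrome H*c over GF(2). An all-zero syndrome means c satisfies
# every check node. The 3x6 matrix below is purely illustrative.

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(H, c):
    """Parity of each check node; all zeros means a valid codeword."""
    return [sum(h * x for h, x in zip(row, c)) % 2 for row in H]

s_ok = syndrome(H, [1, 1, 0, 0, 1, 1])   # a word satisfying all checks
s_bad = syndrome(H, [1, 1, 0, 0, 1, 0])  # same word with one bit flipped
```

It is this regular, massively parallel inner loop that makes the kernel a good fit for CPUs, GPUs, and FPGAs alike.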


Renewed archaeological investigation of the West Mouth of Niah Cave, Borneo, has demonstrated that, even within lowland equatorial environments, depositional conditions exist where organic remains of late glacial and early post-glacial age can be preserved. Excavations by the Niah Cave Research Project (NCP) (2000-2003) towards the rear of the archaeological reserve produced several bone points and worked stingray spines which exhibit evidence of hafting mastic and fibrous binding still adhering to their shafts. The position of both gives a strong indication of how these cartilaginous points were hafted and insight into their potential function. These artefacts were recovered from secure, 14C-dated stratigraphic horizons. The results of this study have implications for our understanding of the function of the Terminal Pleistocene and Early Holocene bone tools recovered from other regions of Island Southeast Asia. They demonstrate that by the end of the Pleistocene rainforest foragers in Borneo were producing composite technologies that probably included fishing leisters and potentially the bow and arrow. © 2009 Elsevier Ltd. All rights reserved.


Side-channel analysis of cryptographic systems can allow an adversary to recover secret information even where the underlying algorithms have been shown to be provably secure. This is achieved by exploiting the unintentional leakages inherent in the implementation of the algorithm in software or hardware. Within this field of research, a class of attacks known as profiling attacks, or more specifically, as used here, template attacks, have been shown to be extremely efficient at extracting secret keys. Template attacks assume a strong adversarial model, in that an attacker has an identical device with which to profile the power consumption of various operations; this profile can then be used to efficiently attack the target device. Inherent in this assumption is that the power consumption across the devices under test is broadly similar. This central tenet of the attack is largely unexplored in the literature, with the research community generally performing the profiling stage on the same device as the one being attacked. This is beneficial for evaluation or penetration testing, as it is essentially the best-case scenario for an attacker: the model built during the profiling stage matches that of the target device exactly. However, it is not necessarily a reflection of how the attack will work in reality. In this work, a large-scale evaluation of this assumption is performed, comparing the key-recovery performance across 20 identical smart cards when performing a profiling attack.
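The two stages of a template attack reduce, in their simplest single-sample form, to fitting a Gaussian per key-dependent class and then matching an attack trace by maximum likelihood. A bare-bones sketch with made-up leakage values:

```python
import math
import statistics

# Single-point template attack sketch (illustrative data): profiling traces
# from the identical device yield a Gaussian template (mean, std) per
# key-dependent class; an attack sample is assigned to the most likely one.

def build_templates(profiling):
    """profiling: {class_label: [leakage samples]} -> {label: (mu, sigma)}."""
    return {k: (statistics.mean(v), statistics.stdev(v))
            for k, v in profiling.items()}

def match(templates, sample):
    """Return the label maximising the Gaussian log-likelihood of sample."""
    def loglik(mu, sigma):
        return -math.log(sigma) - (sample - mu) ** 2 / (2 * sigma ** 2)
    return max(templates, key=lambda k: loglik(*templates[k]))

profiling = {0: [1.0, 1.1, 0.9, 1.05], 1: [2.0, 2.1, 1.9, 2.05]}
templates = build_templates(profiling)
guess = match(templates, 1.95)
```

The question the study above asks is precisely how well `templates`, built on one card, still matches samples measured on a different, nominally identical card.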


We present the Fortran program SIMLA, which is designed for the study of charged particle dynamics in laser and other background fields. The dynamics can be determined classically via the Lorentz force and Landau–Lifshitz equations or, alternatively, via the simulation of photon emission events determined by strong-field quantum-electrodynamics amplitudes and implemented using Monte-Carlo routines. Multiple background fields can be included in the simulation and, where applicable, the propagation direction, field type (plane wave, focussed paraxial, constant crossed, or constant magnetic), and time envelope of each can be independently specified.
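The classical mode described above integrates the Lorentz force; for a constant magnetic field the standard numerical choice is a Boris-style rotation, used here purely for illustration (SIMLA's actual integrators may differ; units are normalised so q = m = 1):

```python
import math

# Illustrative charged-particle pusher for a constant magnetic field Bz:
# the standard Boris rotation, which advances the velocity by an exact
# rotation and therefore preserves the speed. Not SIMLA's own code.

def boris_step(v, bz, dt):
    """Rotate velocity (vx, vy) about Bz over one timestep (q = m = 1)."""
    t = bz * dt / 2.0
    s = 2.0 * t / (1.0 + t * t)
    vx, vy = v
    px, py = vx + vy * t, vy - vx * t    # v' = v + v x t
    return (vx + py * s, vy - px * s)    # v+ = v + v' x s

v = (1.0, 0.0)
for _ in range(1000):
    v = boris_step(v, bz=1.0, dt=0.01)
speed = math.hypot(*v)                   # the rotation preserves |v|
```

Monte Carlo photon-emission events, in SIMLA's QED mode, are then interleaved with deterministic steps of this kind.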


An orchestration is a multi-threaded computation that invokes a number of remote services. In practice, the responsiveness of a web-service fluctuates with demand; during surges in activity service responsiveness may be degraded, perhaps even to the point of failure. An uncertainty profile formalizes a user's perception of the effects of stress on an orchestration of web-services; it describes a strategic situation, modelled by a zero-sum angel–daemon game. Stressed web-service scenarios are analysed, using game theory, in a realistic way, lying between over-optimism (services are entirely reliable) and over-pessimism (all services are broken). The ‘resilience’ of an uncertainty profile can be assessed using the valuation of its associated zero-sum game. In order to demonstrate the validity of the approach, we consider two measures of resilience and a number of different stress models. It is shown how (i) uncertainty profiles can be ordered by risk (as measured by game valuations) and (ii) the structural properties of risk partial orders can be analysed.
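The valuation of the zero-sum angel-daemon game can be illustrated schematically: the angel maximises the orchestration's outcome, the daemon minimises it, and when the payoff matrix has a saddle point the pure maximin and minimax coincide in the game's value. The payoffs below are invented for illustration:

```python
# Schematic zero-sum angel-daemon valuation (illustrative payoffs): rows are
# the angel's choices, columns the daemon's stress choices. With a saddle
# point, maximin == minimax gives the game value used as a resilience score.

def maximin(M):
    """Angel's guaranteed outcome over pure strategies."""
    return max(min(row) for row in M)

def minimax(M):
    """Daemon's best cap on the outcome over pure strategies."""
    cols = list(zip(*M))
    return min(max(col) for col in cols)

payoff = [
    [4, 3, 5],
    [2, 1, 0],
    [6, 3, 4],
]
value = maximin(payoff) if maximin(payoff) == minimax(payoff) else None
```

Ordering uncertainty profiles by such valuations is what yields the risk partial orders discussed above (general matrices need mixed strategies, omitted here for brevity).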


The nature and kinetics of plasmid DNA damage after exposure to a kHz-driven atmospheric-pressure nonthermal plasma jet have been investigated. Both single-strand break (SSB) and double-strand break (DSB) processes are reported here. While SSBs occur with a higher rate constant, DSBs are recognized as more significant in living systems, often resulting in loss of viability. In a helium-operated plasma jet, adding oxygen to the feed gas resulted in higher rates of DNA DSBs, which increased linearly with increasing oxygen content up to an optimum level of 0.75% oxygen, after which the DSB rate decreased slightly, indicating an essential role for reactive oxygen species in the rapid degradation of DNA.
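A common way to model such plasmid assays is sequential first-order kinetics: supercoiled plasmid S converts to open-circular C by single-strand breaks, and C to linear L by double-strand breaks. A sketch with illustrative rate constants (not the paper's fitted values), with the SSB constant larger as reported above:

```python
import math

# Sequential first-order kinetics sketch for plasmid damage (illustrative
# rates): S --k_ssb--> C --k_dsb--> L. Closed-form solution of the ODEs:
#   S(t) = S0 exp(-k_ssb t)
#   C(t) = S0 k_ssb/(k_dsb - k_ssb) (exp(-k_ssb t) - exp(-k_dsb t))
#   L(t) = S0 - S(t) - C(t)

def plasmid_fractions(t, k_ssb, k_dsb, s0=1.0):
    s = s0 * math.exp(-k_ssb * t)
    c = s0 * k_ssb / (k_dsb - k_ssb) * (math.exp(-k_ssb * t)
                                        - math.exp(-k_dsb * t))
    return s, c, s0 - s - c

# k_ssb > k_dsb, consistent with SSBs having the higher rate constant
s, c, l = plasmid_fractions(t=2.0, k_ssb=1.0, k_dsb=0.2)
```

Fitting such expressions to gel-band intensities over exposure time is how per-process rate constants like those discussed above are typically extracted.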