27 results for Computational power
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
We address the effects of natural three-qubit interactions on the computational power of one-way quantum computation. A benefit of using more sophisticated entanglement structures is the ability to construct compact and economic simulations of quantum algorithms with limited resources. We show that the features of our study are embodied by suitably prepared optical lattices, where effective three-spin interactions have been theoretically demonstrated. We use this to provide a compact construction for the Toffoli gate. Information flow and two-qubit interactions are also outlined, together with a brief analysis of relevant sources of imperfection.
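For reference, below is a minimal numpy sketch of the Toffoli (CCNOT) unitary that the construction targets: the gate flips the target qubit only when both control qubits are set. This illustrates the gate itself under a standard qubit ordering, not the three-spin optical-lattice construction described above.

```python
import numpy as np

# Standard 8x8 Toffoli (CCNOT) unitary: identity except that it swaps
# the basis states |110> and |111> (controls = qubits 0 and 1, target = qubit 2).
toffoli = np.eye(8)
toffoli[[6, 7], :] = toffoli[[7, 6], :]

# Basis state |110>: both controls set, target 0 -> index 6.
state = np.zeros(8)
state[6] = 1.0

out = toffoli @ state
print(np.argmax(out))  # 7, i.e. |111>: the target has been flipped
```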
Abstract:
With the emergence of multicore and manycore processors, engineers must design and develop software in drastically new ways to benefit from the computational power of all cores. However, developing parallel software is much harder than sequential software because parallelism can't be abstracted away easily. Authors Hans Vandierendonck and Tom Mens provide an overview of technologies and tools to support developers in this complex and error-prone task. © 2012 IEEE.
Abstract:
The urinary catheter is a thin plastic tube that has been designed to empty the bladder artificially, effortlessly, and with minimum discomfort. The current CH14 male catheter design was examined with a view to optimizing the mass flow rate. The literature imposed constraints on the analysis of the urinary catheter to ensure that a compromise between optimal flow, patient comfort, and everyday practicality from manufacture to use was achieved in the new design. As a result, a total of six design characteristics were examined. The input variables in question were the length and width of eyelets 1 and 2 (four variables), the distance between the eyelets, and the angle of rotation between the eyelets. Due to the high number of possible input combinations, a structured approach to the analysis of data was necessary. A combination of computational fluid dynamics (CFD) and design of experiments (DOE) has been used to evaluate the "optimal configuration." The use of CFD coupled with DOE is a novel concept, which harnesses the computational power of CFD in the most efficient manner for prediction of the mass flow rate in the catheter. Copyright © 2009 by ASME.
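As an illustration of the DOE side of this approach, the sketch below enumerates a full two-level factorial design over the six eyelet variables before handing geometries to the CFD solver. The factor names and levels are hypothetical placeholders, not the values used in the study.

```python
from itertools import product

# Hypothetical two-level settings for the six catheter design variables
# (values are illustrative placeholders, not those from the study).
factors = {
    "eyelet1_length_mm": (3.0, 5.0),
    "eyelet1_width_mm":  (1.0, 2.0),
    "eyelet2_length_mm": (3.0, 5.0),
    "eyelet2_width_mm":  (1.0, 2.0),
    "eyelet_spacing_mm": (5.0, 10.0),
    "rotation_deg":      (0.0, 90.0),
}

# Full two-level factorial: 2^6 = 64 candidate geometries to pass to CFD.
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(design), design[0])
```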
Abstract:
Solid particle erosion is a major concern in the engineering industry, particularly where transport of slurry flow is involved. Such flow regimes are characteristic of those in alumina refinement plants. The entrainment of particulate matter, for example sand, in the Bayer liquor can cause severe erosion in pipe fittings, especially in those which redirect the flow. The considerable costs involved in the maintenance and replacement of these eroded components led to an interest in research into erosion prediction by numerical methods at the Rusal Aughinish alumina refinery, Limerick, Ireland, and the University of Limerick. The first stage of this study focused on the use of computational fluid dynamics (CFD) to simulate solid particle erosion in elbows. Subsequently, an analysis of the factors that affect the erosion of elbows was performed using design of experiments (DOE) techniques, with the intention of producing an erosion prediction model. Combining CFD with DOE harnesses the computational power of CFD in the most efficient manner for prediction of elbow erosion. © 2009 Taylor & Francis.
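The sketch below shows, in outline, the kind of regression step that turns a set of DOE/CFD runs into a simple erosion prediction model. The factor names and the synthetic numbers are assumptions for illustration only and do not come from the study.

```python
import numpy as np

# Hypothetical DOE matrix: columns are flow velocity (m/s), particle size (um),
# and particle concentration (wt%); rows are CFD runs. Values are synthetic.
X = np.array([
    [2.0, 100, 1.0],
    [4.0, 100, 1.0],
    [2.0, 300, 1.0],
    [4.0, 300, 3.0],
    [2.0, 100, 3.0],
    [4.0, 300, 1.0],
])
# Hypothetical erosion rates (mm/year) predicted by CFD for each run.
y = np.array([0.12, 0.55, 0.20, 1.10, 0.15, 0.90])

# Fit a first-order response surface: erosion ~ b0 + b1*v + b2*d + b3*c.
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coeffs)
```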
Abstract:
In intelligent video surveillance systems, scalability (of the number of simultaneous video streams) is important. Two key factors which hinder scalability are the time spent in decompressing the input video streams, and the limited computational power of the processor. This paper demonstrates how a combination of algorithmic and hardware techniques can overcome these limitations, and significantly increase the number of simultaneous streams. The techniques used are processing in the compressed domain, and exploitation of the multicore and vector processing capability of modern processors. The paper presents a system which performs background modeling, using a Mixture of Gaussians approach. This is an important first step in the segmentation of moving targets. The paper explores the effects of reducing the number of coefficients in the compressed domain, in terms of throughput speed and quality of the background modeling. The speedups achieved by exploiting compressed domain processing, multicore and vector processing are explored individually. Experiments show that a combination of all these techniques can give a speedup of 170 times on a single CPU compared to a purely serial, spatial domain implementation, with a slight gain in quality.
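As a point of reference, here is a minimal spatial-domain Mixture-of-Gaussians background-modelling sketch using OpenCV, i.e. the baseline that the compressed-domain and vectorised optimisations described above accelerate; the video path and parameter values are placeholders.

```python
import cv2

# Standard spatial-domain Mixture-of-Gaussians background subtraction
# (the baseline that the compressed-domain approach speeds up).
cap = cv2.VideoCapture("surveillance.mp4")  # placeholder path
mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                         detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog.apply(frame)  # per-pixel foreground/background decision
    # ... segment moving targets from fg_mask ...

cap.release()
```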
Abstract:
Background: Modern cancer research often involves large datasets and the use of sophisticated statistical techniques. Together these add a heavy computational load to the analysis, which is often coupled with issues surrounding data accessibility. Connectivity mapping is an advanced bioinformatic and computational technique dedicated to therapeutics discovery and drug re-purposing around differential gene expression analysis. On a normal desktop PC, it is common for the connectivity mapping task with a single gene signature to take >2h to complete using sscMap, a popular Java application that runs on standard CPUs (Central Processing Units). Here, we describe new software, cudaMap, which has been implemented using CUDA C/C++ to harness the computational power of NVIDIA GPUs (Graphics Processing Units) to greatly reduce processing times for connectivity mapping.
Results: cudaMap can identify candidate therapeutics from the same signature in just over thirty seconds when using an NVIDIA Tesla C2050 GPU. Results from the analysis of multiple gene signatures, which would previously have taken several days, can now be obtained in as little as 10 minutes, greatly facilitating high-throughput discovery of candidate therapeutics. We demonstrate dramatic speed differentials between GPU-assisted and CPU-only execution as the computational load increases for high-accuracy evaluation of statistical significance.
Conclusion: Emerging 'omics' technologies are constantly increasing the volume of data and information to be processed in all areas of biomedical research. Embracing the multicore functionality of GPUs represents a major avenue of local accelerated computing. cudaMap will make a strong contribution in the discovery of candidate therapeutics by enabling speedy execution of heavy duty connectivity mapping tasks, which are increasingly required in modern cancer research. cudaMap is open source and can be freely downloaded from http://purl.oclc.org/NET/cudaMap.
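For intuition about the computational task being accelerated, the sketch below computes a simple connectivity-style score between a query gene signature and a reference expression profile. This is an illustrative rank-based score on made-up data, not the exact sscMap/cudaMap statistic.

```python
import numpy as np

def connection_score(signature, reference_ranks):
    """Illustrative connectivity-style score: signed signature genes are
    weighted by their (centred) rank in the reference profile."""
    n = len(reference_ranks)
    centred = reference_ranks - (n + 1) / 2.0          # high rank -> positive
    score = sum(sign * centred[gene] for gene, sign in signature)
    # Normalise by the maximum attainable magnitude so the score lies in [-1, 1].
    max_score = sum(abs(centred).max() for _ in signature)
    return score / max_score

# Hypothetical example: a signature of (gene index, +1 up / -1 down) pairs
# scored against a reference profile of 10 genes ranked 1..10.
rng = np.random.default_rng(0)
reference_ranks = rng.permutation(np.arange(1, 11)).astype(float)
signature = [(2, +1), (5, -1), (7, +1)]
print(connection_score(signature, reference_ranks))
```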
Abstract:
Most Wave Energy Converters (WECs) being developed are fundamentally different from known marine structures. Limited experience is a fundamental challenge for the design, especially for issues concerning load assumptions and power estimates. Reynolds-Averaged Navier-Stokes (RANS) CFD methods are being used successfully in many areas of marine engineering. They have been shown to accurately simulate many hydrodynamic effects and are a helpful tool for investigating complex flows. The major drawback is the significant computational power required and the associated overhead of pre- and post-processing. This paper presents the challenges and advantages of applying RANS CFD methods in the design process of a wave energy converter and compares the time, labour and, ultimately, financial requirements for obtaining practical results.
Abstract:
In wireless networks, the broadcast nature of the propagation medium makes the communication process vulnerable to malicious nodes (e.g. eavesdroppers) that are in the coverage area of the transmission. Thus, security issues play a vital role in wireless systems. Traditionally, information security has been addressed in the upper layers (e.g. the network layer) through the design of cryptographic protocols. Cryptography-based security aims to design a protocol such that it is computationally prohibitive for the eavesdropper to decode the information. The idea behind this approach relies on the limited computational power of the eavesdroppers. However, with advances in emerging hardware technologies, relying on protocol-based mechanisms alone is becoming insufficient to achieve secure communications. Owing to this fact, a new paradigm has emerged that implements security at the physical layer. The key principle behind this strategy is to exploit the spatial-temporal characteristics of the wireless channel to guarantee secure data transmission without the need for cryptographic protocols.
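A minimal numerical sketch of the key quantity behind this idea is shown below: the secrecy capacity of a Gaussian wiretap channel, which is positive only when the legitimate link is stronger than the eavesdropper's. The SNR values are arbitrary examples.

```python
import math

def secrecy_capacity(snr_legitimate, snr_eavesdropper):
    """Secrecy capacity (bits/s/Hz) of the Gaussian wiretap channel:
    the positive part of the capacity gap between the two links."""
    gap = math.log2(1 + snr_legitimate) - math.log2(1 + snr_eavesdropper)
    return max(0.0, gap)

# Arbitrary example SNRs (linear scale): legitimate link at 15 dB, eavesdropper at 5 dB.
print(secrecy_capacity(10 ** 1.5, 10 ** 0.5))  # > 0: a secure rate is available
print(secrecy_capacity(10 ** 0.5, 10 ** 1.5))  # 0: no secure rate
```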
Abstract:
Manipulator motion planning is a classic problem in robotics, with a number of complete solutions available for manipulators operating in controlled (industrial) environments. Owing to recent technological advances in the field of robotics, there has been significant development of more complex robots with high-fidelity sensors and more computational power. One such example has been the rise in production of humanoid robots equipped with dual-arm manipulators, which require complex motion planning algorithms. These technological advances have also resulted in a shift from using manipulators in strictly controlled environments to investigating their deployment in dynamic or unknown environments. As a result, a greater emphasis has been put on the development of local motion planners, which can provide real-time solutions to these problems. Artificial Potential Fields (APFs) is one such popular local motion planning technique that can be applied to manipulator motion planning; however, the basic algorithm is severely prone to local minima. Here, two modified APF-based strategies for solving the dual-arm motion planning task in unknown environments are proposed. Both techniques make use of configuration sampling and subgoal selection to assist the APFs in avoiding these local minima scenarios. Extensive simulation results are presented to validate the efficacy of the proposed methodology.
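For context, the sketch below shows the basic attractive/repulsive potential-field update that such modified strategies build on. The gains, obstacle, and goal are placeholders, and the sketch is for a point in the plane rather than a full dual-arm configuration space.

```python
import numpy as np

def apf_step(q, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=1.0, step=0.01):
    """One gradient-descent step on the classic attractive + repulsive potential."""
    # Attractive force pulls towards the goal.
    force = -k_att * (q - goal)
    # Repulsive force pushes away from obstacles within influence distance rho0.
    for obs in obstacles:
        diff = q - obs
        rho = np.linalg.norm(diff)
        if 0 < rho < rho0:
            force += k_rep * (1.0 / rho - 1.0 / rho0) * diff / rho ** 3
    return q + step * force

# Hypothetical planar example: start, goal and a single obstacle between them.
q = np.array([0.0, 0.0])
goal = np.array([5.0, 0.1])
obstacles = [np.array([2.5, 0.0])]
for _ in range(2000):
    q = apf_step(q, goal, obstacles)
print(q)  # ends near the goal, or stalls in a local minimum (the known weakness)
```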
Abstract:
This paper examines the applicability of an immersive virtual reality (VR) system to the process of organizational learning in a manufacturing context. The work focuses on the extent to which realism has to be represented in a simulated product build scenario in order to give the user an effective learning experience for an assembly task. Current technologies allow the visualization and manipulation of objects in VR systems, but physical behaviors such as contact between objects and the effects of gravity are not commonly represented in off-the-shelf simulation solutions, and the computational power required to facilitate these functions remains a challenge. This work demonstrates how physical behaviors can be coded and represented through the development of more effective mechanisms for the computer-aided design (CAD) and VR interface.
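As an illustration of the kind of physical behaviour discussed, here is a minimal semi-implicit Euler sketch adding gravity and a crude ground-contact response to a falling part. It is a generic example, not the CAD/VR interface developed in the work.

```python
# Minimal rigid-body drop with gravity and a simple ground-contact response
# (semi-implicit Euler). Illustrative only; real VR physics engines also
# handle rotation, friction and part-to-part contact.
g = -9.81          # gravity, m/s^2
dt = 1.0 / 90.0    # one VR frame at 90 Hz
restitution = 0.3  # fraction of velocity kept on impact

y, vy = 1.0, 0.0   # part starts 1 m above the bench
for _ in range(300):
    vy += g * dt
    y += vy * dt
    if y < 0.0:            # contact with the bench surface
        y = 0.0
        vy = -restitution * vy
print(round(y, 4), round(vy, 4))
```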
Abstract:
We present DEX, a fully distributed self-healing algorithm that maintains a constant-degree expander network in a dynamic setting. To the best of our knowledge, our algorithm provides the first efficient distributed construction of expanders whose expansion properties hold deterministically and that works even under an all-powerful adaptive adversary that controls the dynamic changes to the network (the adversary has unlimited computational power and knowledge of the entire network state, can decide which nodes join and leave and at what time, and knows the past random choices made by the algorithm). Previous distributed expander constructions typically provide only probabilistic guarantees on the network expansion, which rapidly degrade in a dynamic setting; in particular, the expansion properties can degrade even more rapidly under adversarial insertions and deletions. Our algorithm provides efficient maintenance and incurs a low overhead per insertion/deletion by an adaptive adversary: only O(log n) rounds and O(log n) messages are needed with high probability (n is the number of nodes currently in the network). The algorithm requires only a constant number of topology changes. Moreover, our algorithm allows for an efficient implementation and maintenance of a distributed hash table (DHT) on top of DEX, with only a constant additional overhead. Our results are a step towards implementing efficient self-healing networks that have guaranteed properties (constant bounded degree and expansion) despite dynamic changes.
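For intuition about the property being maintained, the sketch below builds a random constant-degree graph and measures its expansion via the spectral gap of the adjacency matrix. This illustrates what "expander" means here; it is not the DEX maintenance protocol itself, and the graph size and degree are arbitrary.

```python
import networkx as nx
import numpy as np

# Random d-regular graph as a stand-in for the maintained topology.
n, d = 1000, 6
G = nx.random_regular_graph(d, n, seed=0)

# Spectral gap d - lambda_2: a large gap certifies good expansion.
A = nx.to_numpy_array(G)
eigenvalues = np.sort(np.linalg.eigvalsh(A))[::-1]
print("spectral gap:", d - eigenvalues[1])
```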
Abstract:
Even though the computational power used for structural analysis is ever increasing, there is still a fundamental need for testing in structural engineering, either to validate complex numerical models or to assess material behaviour. In addition to the analysis of structures using scale models, many structural engineers are aware to some extent of cyclic and shake-table test methods, but less so of ‘hybrid testing’. The latter is a combination of physical testing (e.g. hydraulic
actuators) and computational modelling (e.g. finite element modelling). Over the past 40 years, hybrid testing of engineering structures has developed from concept through to maturity to become a reliable and accurate dynamic testing technique. The hybrid test method provides users with some additional benefits that standard dynamic testing methods do not, and the method is more cost-effective in comparison to shake-table testing. This article aims to provide the reader with a basic understanding of the hybrid test method, including its contextual development and potential as a dynamic testing technique.
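The sketch below gives a minimal conceptual picture of the hybrid-test loop: a numerical substructure is stepped in time while the restoring force of the physically tested substructure is fed back at every step. Here the "measured" force is emulated by a simple spring so the loop is self-contained; all numerical values are illustrative assumptions.

```python
import numpy as np

def measured_restoring_force(displacement):
    """Stand-in for the physical substructure: in a real hybrid test this value
    comes back from the actuator/load cell; here it is emulated as a spring."""
    k_physical = 2.0e5  # N/m, hypothetical specimen stiffness
    return k_physical * displacement

def ground_accel(t):
    return 0.5 * np.sin(2 * np.pi * 2.0 * t)  # hypothetical ground excitation, m/s^2

# Numerical substructure: SDOF mass + damper, explicit central-difference stepping.
m, c = 1.0e3, 5.0e2   # kg, N*s/m (illustrative values)
dt = 0.001
u_prev, u = 0.0, 0.0

for i in range(5000):
    t = i * dt
    f_r = measured_restoring_force(u)   # feedback from the "physical" substructure
    f_ext = -m * ground_accel(t)
    # Central-difference update for m*u'' + c*u' + f_r = f_ext.
    u_next = (dt**2 * (f_ext - f_r) + 2 * m * u
              - (m - c * dt / 2) * u_prev) / (m + c * dt / 2)
    u_prev, u = u, u_next

print(u)  # displacement of the hybrid model at the end of the run
```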
Abstract:
Even though the computational power used for structural analysis is ever increasing, there is still a fundamental need for testing in structural engineering, either to validate complex numerical models or to assess material behaviour. Many structural engineers and researchers are aware of cyclic and shake-table test methods, but less so of hybrid testing. Over the past 40 years, hybrid testing of engineering structures has developed from concept through to maturity to become a reliable and accurate dynamic testing technique. In particular, the application of hybrid testing as a seismic testing technique has increased notably in recent years. The hybrid test method provides users with some additional benefits that standard dynamic testing methods do not, and the method is much more cost-effective in comparison to shake-table testing. This paper aims to provide the reader with a basic understanding of the hybrid test method and its potential as a dynamic testing technique.
Abstract:
Motivated by the need for designing efficient and robust fully-distributed computation in highly dynamic networks such as Peer-to-Peer (P2P) networks, we study distributed protocols for constructing and maintaining dynamic network topologies with good expansion properties. Our goal is to maintain a sparse (bounded-degree) expander topology despite heavy churn (i.e., nodes joining and leaving the network continuously over time). We assume that the churn is controlled by an adversary that has complete knowledge and control of which nodes join and leave and at what time, and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is a randomized distributed protocol that guarantees with high probability the maintenance of a constant-degree graph with high expansion even under continuous high adversarial churn. Our protocol can tolerate a churn rate of up to $O(n/\mathrm{polylog}(n))$ per round (where $n$ is the stable network size). Our protocol is efficient, lightweight, and scalable, and it incurs only $O(\mathrm{polylog}(n))$ overhead for topology maintenance: only polylogarithmic (in $n$) bits need to be processed and sent by each node per round, and any node's computation cost per round is also polylogarithmic. The given protocol is a fundamental ingredient needed for the design of efficient fully-distributed algorithms for solving fundamental distributed computing problems such as agreement, leader election, search, and storage in highly dynamic P2P networks, and it enables fast and scalable algorithms for these problems that can tolerate a large amount of churn.
Abstract:
We study the fundamental Byzantine leader election problem in dynamic networks where the topology can change from round to round and nodes can also experience heavy churn (i.e., nodes can join and leave the network continuously over time). We assume the full information model, where the Byzantine nodes have complete knowledge about the entire state of the network at every round (including the random choices made by all the nodes), have unbounded computational power and can deviate arbitrarily from the protocol. The churn is controlled by an adversary that has complete knowledge and control over which nodes join and leave and at what times, may rewire the topology in every round, and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is an $O(\log^3 n)$ round algorithm that achieves Byzantine leader election in the presence of up to $O(n^{1/2 - \epsilon})$ Byzantine nodes (for a small constant $\epsilon > 0$) and a churn of up to $O(\sqrt{n}/\mathrm{polylog}(n))$ nodes per round (where $n$ is the stable network size). The algorithm elects a leader with probability at least $1-n^{-\Omega(1)}$ and guarantees that it is an honest node with probability at least $1-n^{-\Omega(1)}$; assuming the algorithm succeeds, the leader's identity will be known to a $1-o(1)$ fraction of the honest nodes. Our algorithm is fully-distributed, lightweight, and simple to implement. It is also scalable, as it runs in polylogarithmic (in $n$) time and requires nodes to send and receive messages of only polylogarithmic size per round. To the best of our knowledge, our algorithm is the first scalable solution for Byzantine leader election in a dynamic network with a high rate of churn; our protocol can also be used to solve Byzantine agreement in a straightforward way. We also show how to implement an (almost-everywhere) public coin with constant bias in a dynamic network with Byzantine nodes and provide a mechanism for enabling honest nodes to store information reliably in the network, which might be of independent interest.