26 results for implementation results

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance: 70.00%

Abstract:

The inclusion of the Discrete Wavelet Transform in the JPEG-2000 standard has added impetus to research into hardware architectures for the two-dimensional wavelet transform. In this paper, a VLSI architecture for performing the symmetrically extended two-dimensional transform is presented. The architecture conforms to the JPEG-2000 standard, is capable of near-optimal performance when dealing with image boundaries, and achieves efficient processor utilization. Implementation results based on a Xilinx Virtex-2 FPGA device are included.
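The symmetric boundary extension described above can be illustrated in software. The sketch below (a software model, not the paper's VLSI architecture) shows one level of the reversible JPEG-2000 5/3 lifting transform with whole-sample symmetric extension at the image boundaries:

```python
def mirror(i, n):
    # Whole-sample symmetric extension: index -1 maps to 1, index n maps
    # to n-2, and so on, so the signal reflects about its end samples.
    period = 2 * (n - 1)
    i %= period
    return i if i < n else period - i

def dwt53_1d(x):
    """One level of the reversible JPEG-2000 5/3 lifting transform on an
    even-length signal, using symmetric extension at both boundaries."""
    n = len(x)
    half = n // 2
    xm = lambda i: x[mirror(i, n)]
    # Predict step: high-pass (detail) coefficients, computed one extra
    # sample on each side via the mirrored signal so the update step
    # needs no special boundary cases.
    d = [xm(2*i + 1) - (xm(2*i) + xm(2*i + 2)) // 2 for i in range(-1, half)]
    # Update step: low-pass (approximation) coefficients.
    s = [xm(2*i) + (d[i] + d[i + 1] + 2) // 4 for i in range(half)]
    return s, d[1:]
```

A constant signal is a quick sanity check: the detail coefficients vanish and the low-pass band reproduces the constant, confirming that the extension introduces no boundary artefacts.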

Relevance: 60.00%

Abstract:

The second round of the NIST-run public competition to find a new hash algorithm for inclusion in the NIST Secure Hash Standard (SHA-3) is underway. This paper presents full hardware implementations of all of the second-round candidates, including all of their variants. To assess their computational efficiency, an important aspect of NIST's round-two evaluation criteria, the paper gives an area/speed comparison of each design both with and without a hardware interface, giving an overall impression of performance in resource-constrained and resource-abundant environments. Implementation results are provided for a Virtex-5 FPGA device, and the efficiency of the hash-function architectures is compared in terms of throughput per unit area. To the best of the authors' knowledge, this is the first work to present hardware designs tested for all message digest sizes (224, 256, 384, and 512 bits), and the only work to include the padding as part of the hardware for the SHA-3 hash functions.
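The throughput-per-unit-area figure of merit used in the comparison is simple to compute. The sketch below uses made-up synthesis figures purely to show the metric; the design names and numbers are illustrative and are not the paper's measurements:

```python
# Hypothetical post-synthesis figures (NOT the paper's results) used only
# to illustrate the throughput-per-unit-area efficiency metric.
designs = {
    "candidate_a": {"throughput_mbps": 8000.0, "slices": 2000},
    "candidate_b": {"throughput_mbps": 5000.0, "slices": 1000},
}

def efficiency(d):
    # Mbps per occupied slice: higher means better use of FPGA area.
    return d["throughput_mbps"] / d["slices"]

# Rank designs by efficiency, best first.
ranked = sorted(designs, key=lambda name: efficiency(designs[name]),
                reverse=True)
```

Note that the raw-throughput winner is not necessarily the efficiency winner, which is why the metric matters in resource-constrained environments.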

Relevance: 60.00%

Abstract:

In this paper, a hardware solution for multi-field packet classification is presented. The proposed scheme is a new architecture based on the decomposition method, with a hash circuit used to reduce the memory required by the Recursive Flow Classification (RFC) algorithm. Implementation results show that the proposed architecture achieves performance comparable to that of several well-known algorithms. The solution is based on Altera Stratix III FPGA technology.
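The decomposition idea behind RFC can be sketched in a few lines: each header field is first reduced to an equivalence-class ID, and a precomputed cross-product table then maps the tuple of class IDs to a matching rule, so classification costs a couple of table lookups instead of a scan over all rules. The two-field rule set below is hypothetical and exists only to illustrate the method:

```python
# Hypothetical two-field rule set (source-port range, protocol) -> action.
rules = [
    ((0, 1023), "tcp", "deny"),
    ((1024, 65535), "tcp", "allow"),
    ((0, 65535), "udp", "allow"),
]

def linear_match(port, proto):
    # Reference behaviour: first matching rule wins.
    for (lo, hi), r_proto, action in rules:
        if lo <= port <= hi and proto == r_proto:
            return action
    return "deny"

# Phase 1: each field is reduced to an equivalence-class ID.
def port_class(port):
    return 0 if port <= 1023 else 1

proto_class = {"tcp": 0, "udp": 1}

# Phase 2: precomputed cross-product table over class IDs; at lookup time
# a packet is classified with two small table reads, no rule scan.
phase2 = {(pc, qc): linear_match(1024 if pc else 0, proto)
          for pc in (0, 1) for proto, qc in proto_class.items()}

def classify(port, proto):
    return phase2[(port_class(port), proto_class[proto])]
```

In the hardware version the phase tables are what consume memory, which is why the paper adds a hash circuit to compress them.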

Relevance: 30.00%

Abstract:

A novel application-specific instruction set processor (ASIP) for use in the construction of modern signal processing systems is presented. This is a flexible device that can be used in the construction of array processor systems for the real-time implementation of functions such as singular-value decomposition (SVD) and QR decomposition (QRD), as well as other important matrix computations. It uses a coordinate rotation digital computer (CORDIC) module to perform arithmetic operations and several approaches are adopted to achieve high performance including pipelining of the micro-rotations, the use of parallel instructions and a dual-bus architecture. In addition, a novel method for scale factor correction is presented which only needs to be applied once at the end of the computation. This also reduces computation time and enhances performance. Methods are described which allow this processor to be used in reduced dimension (i.e., folded) array processor structures that allow tradeoffs between hardware and performance. The net result is a flexible matrix computational processing element (PE) whose functionality can be changed under program control for use in a wider range of scenarios than previous work. Details are presented of the results of a design study, which considers the application of this decomposition PE architecture in a combined SVD/QRD system and demonstrates that a combination of high performance and efficient silicon implementation are achievable. © 2005 IEEE.
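The deferred scale-factor correction can be illustrated with a software CORDIC model: each micro-rotation scales the vector length by sqrt(1 + 2^-2i), and rather than compensating per iteration, the accumulated gain is divided out once after the final micro-rotation. This is a sketch of the principle only, not the ASIP itself:

```python
import math

def cordic_vectoring(x, y, iters=16):
    """CORDIC in vectoring mode: rotate (x, y) onto the x-axis using only
    shift-and-add micro-rotations; returns (magnitude, angle)."""
    z = 0.0
    for i in range(iters):
        d = -1.0 if y > 0 else 1.0           # rotate toward the x-axis
        x, y, z = (x - d * y * 2**-i,
                   y + d * x * 2**-i,
                   z - d * math.atan(2**-i))
    # Single scale-factor correction, applied once at the end rather than
    # after every micro-rotation.
    gain = math.prod(math.sqrt(1 + 4**-i) for i in range(iters))
    return x / gain, z
```

Because the gain is a constant determined only by the iteration count, deferring it saves one multiply per micro-rotation, which is the performance benefit the abstract describes.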

Relevance: 30.00%

Abstract:

BACKGROUND AND PURPOSE: To describe the clinical implementation of dynamic multileaf collimation (DMLC). Custom compensated four-field treatments of carcinoma of the bladder have been used as a simple test site for the introduction of intensity modulated radiotherapy. MATERIALS AND METHODS: Compensating intensity modulations are calculated from computed tomography (CT) data, accounting for scattered, as well as primary, radiation. Modulations are converted to multileaf collimator (MLC) leaf and jaw settings for dynamic delivery on a linear accelerator. A full dose calculation is carried out, accounting for dynamic leaf and jaw motion and transmission through these components. Before treatment, a test run of the delivery is performed and an absolute dose measurement made in a water or solid water phantom. Treatments are verified by in vivo diode measurements and real-time electronic portal imaging. RESULTS: Seven patients have been treated using DMLC. The technique improves dose homogeneity within the target volume, reducing high dose areas and compensating for loss of scatter at the beam edge. A typical total treatment time is 20 min. CONCLUSIONS: Compensated bladder treatments have proven an effective test site for DMLC in an extremely busy clinic.

Relevance: 30.00%

Abstract:

Course scheduling consists of assigning lecture events to a limited set of specific timeslots and rooms. The objective is to satisfy as many soft constraints as possible while maintaining a feasible solution timetable. The most successful techniques to date require a compute-intensive examination of the solution neighbourhood to direct searches to an optimum solution. Although they may require fewer neighbourhood moves than more exhaustive techniques to gain comparable results, they can take considerably longer to achieve success. This paper introduces an extended version of the Great Deluge Algorithm for the course timetabling problem which, while avoiding the problem of becoming trapped in local optima, uses simple neighbourhood search heuristics to obtain solutions in a relatively short time. The paper presents results on a standard set of benchmark datasets, beating over half of the currently published best results, in some cases by up to 60%.
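The core Great Deluge acceptance rule is compact: any candidate move is accepted if its cost is at or below a "water level" that falls steadily over the run, so early exploration is generous while late moves are forced toward good solutions. The sketch below uses a toy integer objective standing in for a timetable's soft-constraint count; the decay schedule and neighbourhood are illustrative choices, not the paper's extended variant:

```python
import random

def great_deluge(cost, neighbour, x0, steps=5000, seed=1):
    """Great Deluge minimisation: accept a candidate if its cost is below
    the falling water level (or if it improves on the current solution)."""
    rng = random.Random(seed)
    x = best = x0
    level = float(cost(x0))     # water level starts at the initial cost
    decay = level / steps       # linear decay reaches zero at the end
    for _ in range(steps):
        cand = neighbour(x, rng)
        if cost(cand) <= level or cost(cand) <= cost(x):
            x = cand
            if cost(x) < cost(best):
                best = x
        level -= decay
    return best

# Toy objective: minimise v^2 over integers, moving +/-1 per step.
best = great_deluge(lambda v: v * v, lambda v, r: v + r.choice([-1, 1]), 40)
```

Unlike simulated annealing there is no probabilistic acceptance; the single decay-rate parameter is what makes the method attractive for timetabling.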

Relevance: 30.00%

Abstract:

A novel tag computation circuit for a credit-based Self-Clocked Fair Queuing (SCFQ) scheduler is presented. The scheduler combines Weighted Fair Queuing (WFQ) with a credit-based bandwidth reallocation scheme, and the proposed architecture can reallocate bandwidth on the fly if particular links suffer channel quality degradation. The hardware architecture is parallel and pipelined, enabling an aggregate throughput of 180 million tag computations per second. This throughput is well suited to Broadband Wireless Access applications, leaving room for relatively complex computations in QoS-aware adaptive scheduling. The high-level system breakdown is described and synthesis results for Altera Stratix II FPGA technology are presented.
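The tag computation at the heart of an SCFQ scheduler is a short recurrence: each packet of length L on flow i receives the finish tag F_i = max(F_i_prev, v) + L / w_i, where v, the virtual time, is simply the tag of the packet currently in service. A minimal software model of that computation (the weights and flow names are illustrative):

```python
class SCFQ:
    """Self-Clocked Fair Queuing finish-tag computation (sketch).
    The hardware circuit in the paper computes the same recurrence in a
    parallel, pipelined form."""

    def __init__(self, weights):
        self.w = dict(weights)               # flow id -> weight
        self.last = {f: 0.0 for f in self.w} # previous finish tag per flow
        self.v = 0.0                         # virtual time

    def tag(self, flow, length):
        # F = max(previous tag of this flow, virtual time) + L / weight
        f = max(self.last[flow], self.v) + length / self.w[flow]
        self.last[flow] = f
        return f

    def serve(self, tag):
        # Self-clocking: virtual time is the tag of the packet in service.
        self.v = tag
```

Updating `self.w[flow]` between packets models the credit-based bandwidth reallocation the abstract describes: a flow on a degraded link can have its weight (credits) changed on the fly without restarting the scheduler.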

Relevance: 30.00%

Abstract:

In this paper, we verify a new phase conjugating architecture suitable for deployment as the core building block in retrodirective antenna arrays, which can be scaled to any number of elements in a modular way without increasing complexity. Our solution is based on a modified in-phase and quadrature modulator architecture, which completely resolves four major shortcomings of the conventional mixer-based approach currently used to synthesise phase-conjugated energy from a sampled incoming wavefront. 1) The architecture removes the need for a local oscillator running at twice the RF signal frequency to be conjugated. 2) It maintains a constant transmit power even if receive power falls as low as -120 dBm. 3) All unwanted re-transmit signal products are suppressed by at least 40 dB. 4) The poor RF-IF leakage prevalent in mixer-based phase-conjugation solutions is completely mitigated. The circuit has also been shown to have high conjugation accuracy (better than +/-1 degree at -60 dBm input). Near-theoretically-perfect experimental monostatic and bistatic results are presented for a ten-element retrodirective array constructed using the new phase conjugation architecture.
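The retrodirective principle the circuit implements can be checked numerically: if each element re-radiates the complex conjugate of the phase it received, the contributions add coherently back in the direction of arrival. The element count matches the paper's ten-element array, but the spacing and angle below are illustrative:

```python
import cmath
import math

n_elem, d = 10, 0.5              # elements, spacing in wavelengths
theta = math.radians(25)         # hypothetical angle of arrival

# Phase received at each element from a plane wave arriving at theta.
rx = [cmath.exp(2j * math.pi * d * n * math.sin(theta))
      for n in range(n_elem)]

# Phase conjugation: each element re-radiates the conjugate of what it
# received (constant amplitude, as the architecture guarantees).
tx = [z.conjugate() for z in rx]

# Field summed back in the source direction: conjugate phases cancel the
# propagation phases, so all n_elem terms align coherently.
back = sum(t * cmath.exp(2j * math.pi * d * n * math.sin(theta))
           for n, t in enumerate(tx))
```

The magnitude of `back` equals the element count, the coherent maximum, which is why a retrodirective array returns energy toward the interrogating source without any knowledge of its position.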

Relevance: 30.00%

Abstract:

This paper outlines how the immediate life support (ILS) course was incorporated into an undergraduate nursing curriculum at a university in Northern Ireland, and reports on how students perceived the impact of the course on their clinical practice. The aim was to develop students' ability to recognise the acutely ill patient and to determine the relevance of this to clinical practice. Previously the ILS course was available only to qualified nurses, and this paper reports on the first occasion on which it was delivered in an undergraduate setting. The course was delivered to 89 third-year nursing students (Adult Branch) and comprised one full teaching day per week over two weeks. Students were taught by recognised Advanced Life Support (ALS) instructors, in keeping with United Kingdom Resuscitation Council guidelines. Participants completed a 17-item questionnaire which included an open-ended section for student comment. Questionnaire data were analysed descriptively using SPSS version 15.0; open-ended responses were analysed by content and thematic analysis. Results: Students reported that the ILS course helped them understand what constitutes the acutely ill patient and the role of the nurse in managing a deteriorating situation. Students also reported that they valued the experience for highlighting gaps in their knowledge. Conclusion: The inclusion of the ILS course provides students with the skills needed to assess and manage the deteriorating patient. In addition, the data from this study suggest the ILS course should be delivered in an inter-professional setting, i.e. taught jointly with medical students.

Relevance: 30.00%

Abstract:

Continuous large-scale changes in technology and the globalization of markets have resulted in the need for many SMEs to use innovation as a means of seeking competitive advantage, where innovation includes both technological and organizational perspectives (Tapscott, 2009). However, there is a paucity of systematic and empirical research relating to the implementation of innovation management in the context of SMEs. The aim of this article is to redress this imbalance via an empirical study created to develop and test a model of innovation implementation in SMEs. This study uses Structural Equation Modelling (SEM) to test the plausibility of an innovation model, developed from earlier studies, as the basis of a questionnaire survey of 395 SMEs in the UK. The resultant model and construct relationship results are further probed using an explanatory multiple case analysis to explore 'how' and 'why' type questions within the model and construct relationships. The findings show that the effects of leadership, people and culture on innovation implementation are mediated by business improvement activities relating to Total Quality Management/Continuous Improvement (TQM/CI) and product and process developments. It is concluded that SMEs have an opportunity to leverage existing quality and process improvement activities to move beyond continuous improvement outcomes towards effective innovation implementation. The article concludes by suggesting areas suitable for further research.

Relevance: 30.00%

Abstract:

Dynamic power consumption is heavily dependent on interconnect, so clever mapping of digital signal processing algorithms to parallelised realisations with data locality is vital. This is a particular problem for fast algorithm implementations, where designers have typically sacrificed circuit structure for efficiency in software implementation. This study outlines an approach for reducing the dynamic power consumption of a class of fast algorithms by minimising the index space separation, allowing the generation of field programmable gate array (FPGA) implementations with reduced power consumption. It is shown that a 50% reduction in relative index space separation yields power gains of 36% and 37% over a Cooley-Tukey Fast Fourier Transform (FFT)-based solution, based on actual power measurements for a Xilinx Virtex-II FPGA implementation and circuit measurements for a Xilinx Virtex-5 implementation, respectively. The authors show the generality of the approach by applying it to a number of other fast algorithms, namely the discrete cosine, discrete Hartley and Walsh-Hadamard transforms.

Relevance: 30.00%

Abstract:

A scalable, large-vocabulary, speaker-independent speech recognition system is being developed using Hidden Markov Models (HMMs) for acoustic modeling and a Weighted Finite State Transducer (WFST) to compile sentence, word, and phoneme models. The system comprises a software backend search and an FPGA-based Gaussian calculation, which are covered here. In this paper, we present an efficient pipelined design implemented both as an embedded peripheral and as a scalable, parallel hardware accelerator. Both architectures have been implemented on an Alpha Data XRC-5T1 reconfigurable computer housing a Virtex 5 SX95T FPGA. The core has been tested and is capable of calculating a full set of Gaussian results from 3825 acoustic models in 9.03 ms, which, coupled with a backend search of 5000 words, has provided an accuracy of over 80%. Parallel implementations have been designed with up to 32 cores and have been successfully implemented with a clock frequency of 133 MHz.
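The quantity a Gaussian-calculation core produces for each acoustic model is a per-frame log-likelihood. A plausible software reference, assuming diagonal-covariance Gaussians as is standard in HMM acoustic modeling (the paper does not spell out its exact formulation), is only a few lines:

```python
import math

def log_gaussian(x, mean, var):
    """Log-likelihood of feature vector x under a diagonal-covariance
    Gaussian with per-dimension means and variances. In hardware the
    log(2*pi*var) terms are constants that can be precomputed per model,
    leaving one subtract, one multiply, and one accumulate per dimension."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))
```

Because the per-dimension terms are independent, the sum pipelines naturally, which is what makes the computation a good fit for a parallel FPGA accelerator.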

Relevance: 30.00%

Abstract:

The use of accelerators, with compute architectures different and distinct from the CPU, has become a new research frontier in high-performance computing over the past five years. This paper is a case study of how the instruction-level parallelism offered by three accelerator technologies, FPGA, GPU and ClearSpeed, can be exploited in atomic physics. The algorithm studied is the evaluation of two-electron integrals by direct numerical quadrature, a task that arises in the study of intermediate-energy electron scattering by hydrogen atoms. The results of our 'productivity' study show that while each accelerator is viable, there are considerable differences in the implementation strategies that must be followed on each.
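Direct numerical quadrature of a two-dimensional integral is the kernel being accelerated. A minimal software reference is a nested one-dimensional rule; the separable exponential integrand below is a stand-in for the actual two-electron integrand, chosen because its exact value is known:

```python
import math

def trapz(f, a, b, n):
    # Composite trapezoidal rule with n panels on [a, b].
    h = (b - a) / n
    return h * (sum(f(a + i * h) for i in range(1, n))
                + 0.5 * (f(a) + f(b)))

def quad2d(g, a, b, n):
    # Direct-product quadrature: integrate over r2 for each r1 node, then
    # over r1. The inner integrals are independent, which is the
    # parallelism the accelerators exploit.
    return trapz(lambda r1: trapz(lambda r2: g(r1, r2), a, b, n), a, b, n)

# Stand-in integrand exp(-r1 - r2) on [0, 20]^2; exact value is
# (1 - e^-20)^2, which is 1.0 to ten decimal places.
val = quad2d(lambda r1, r2: math.exp(-r1 - r2), 0.0, 20.0, 400)
```

The n^2 inner-node evaluations with no data dependence between them explain why the same kernel maps, with different strategies, onto FPGA, GPU and ClearSpeed hardware.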

Relevance: 30.00%

Abstract:

A new type of advanced encryption standard (AES) implementation using a normal basis is presented. The method is based on a lookup technique employing inversion and shift registers, which leads to a smaller S-box lookup than corresponding implementations. The reduction in lookup size comes from grouping sets of inverses into conjugate sets, which in turn reduces the number of lookup values. The technique is implemented in a regular AES architecture using register files, which requires less interconnect and area and is suitable for security applications. The implementation results are competitive in throughput and area with corresponding solutions in a polynomial basis.
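The conjugate-set grouping can be reproduced directly: conjugates in GF(2^8) are the orbits of the Frobenius map x -> x^2, and since squaring commutes with inversion, one stored inverse per orbit suffices. The orbit count is basis-independent; the sketch below uses polynomial-basis arithmetic with the standard AES field polynomial purely for convenience:

```python
def gf_mul(a, b):
    # Carry-less multiplication in GF(2^8) modulo the AES polynomial
    # x^8 + x^4 + x^3 + x + 1 (0x11b).
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11b
        b >>= 1
    return r

def conjugate_class(x):
    """Orbit of x under the Frobenius map x -> x^2."""
    orbit, y = set(), x
    while y not in orbit:
        orbit.add(y)
        y = gf_mul(y, y)
    return frozenset(orbit)

# The orbits partition the 256 field elements into conjugate classes;
# an inversion lookup needs only one entry per class.
classes = {conjugate_class(x) for x in range(256)}
```

The 256 field elements fall into 36 conjugate classes, so the inversion lookup shrinks from 256 entries to 36 representatives plus cheap squarings, which is the reduction the abstract refers to.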

Relevance: 30.00%

Abstract:

This study evaluates the implementation of Menter's gamma-Re-theta Transition Model within the CFX12 solver for turbulent transition prediction on a natural laminar flow nacelle. Some challenges associated with this type of modeling have been identified. Computational fluid dynamics transitional flow simulation results are presented for a series of cruise cases with freestream Mach numbers ranging from 0.8 to 0.88, angles of attack from 2 to 0 degrees, and mass flow ratios from 0.60 to 0.75. These were validated against a series of wind-tunnel tests on the nacelle by comparing the predicted and experimental surface pressure distributions and transition locations. A selection of the validation cases is presented in this paper. In all cases, the computational fluid dynamics simulations agreed reasonably well with the experiments. The results indicate that Menter's gamma-Re-theta Transition Model is capable of predicting laminar boundary-layer transition to turbulence on a nacelle. Nonetheless, some limitations exist both in the model and in the implementation of the computational fluid dynamics model. The implementation of a more comprehensive experimental correlation in Menter's gamma-Re-theta Transition Model, preferably one derived from nacelle experiments and including the effects of compressibility and streamline curvature, is necessary for an accurate transitional flow simulation on a nacelle. In addition, improvements to the computational fluid dynamics model are also suggested, including the consideration of varying distributed surface roughness and an appropriate empirical correction derived from nacelle experimental transition location data.