993 results for Standard architecture
Abstract:
A new reconfigurable subpixel interpolation architecture for multistandard (e.g., MPEG-2, MPEG-4, H.264, and AVS) video motion estimation (ME) is presented. The architecture exploits the mixed use of parallel and serial-input FIR filters to achieve a high throughput rate and efficient silicon utilization. Silicon design studies show that it can be implemented using 34.8 × 10³ gates, with area and performance that compare very favorably with fixed, standard-specific solutions, e.g., for the H.264 standard alone. It can support SDTV and HDTV applications when implemented in 0.18 µm CMOS technology, with further performance enhancements achievable at 0.13 µm and below. © 2009 IEEE.
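As an illustration of the kind of sub-pixel filtering such an architecture accelerates, the sketch below evaluates the standard H.264 6-tap half-pel luma filter in software; it is a minimal reference only and does not reflect the paper's reconfigurable parallel/serial datapath.

```python
# Minimal software sketch (not the paper's reconfigurable datapath): the standard
# H.264 6-tap half-pel luma filter, the kind of sub-pixel FIR this architecture computes.
H264_TAPS = (1, -5, 20, 20, -5, 1)

def half_pel(samples):
    """Interpolate one half-pel value from six neighbouring integer-pel samples."""
    acc = sum(c * s for c, s in zip(H264_TAPS, samples))
    return min(255, max(0, (acc + 16) >> 5))   # round, divide by 32, clip to 8 bits

print(half_pel([10, 20, 30, 40, 50, 60]))      # -> 35
```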
Abstract:
A novel most-significant-digit-first CORDIC architecture is presented that is suitable for the VLSI design of systolic array processor cells for performing QR decomposition. It is based on an on-line CORDIC algorithm with a constant scale factor and a latency independent of the wordlength, derived by extending previously published CORDIC algorithms. It is shown that simplifying the calculation of the convergence bounds also greatly simplifies the derivation of suitable VLSI architectures. Design studies, based on a 0.35-µm CMOS standard cell process, indicate that 20 such QR processor cells operating at rates suitable for radar beamforming can readily be accommodated on a single chip.
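For readers unfamiliar with CORDIC-based QR cells, the following sketch shows a conventional iterative CORDIC in vectoring mode, the operation a boundary cell uses to compute a Givens rotation; the paper's most-significant-digit-first, constant-scale-factor, on-line algorithm is not reproduced here.

```python
import math

def cordic_vectoring(x, y, iters=16):
    """Conventional iterative CORDIC in vectoring mode: rotates (x, y) onto the
    x-axis, as a QR boundary cell does when computing a Givens rotation.
    Returns the vector magnitude and its angle (radians)."""
    angle = 0.0
    for i in range(iters):
        d = -1.0 if y > 0 else 1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        angle -= d * math.atan(2.0 ** -i)
    scale = math.prod(math.sqrt(1 + 2.0 ** (-2 * i)) for i in range(iters))
    return x / scale, angle

r, theta = cordic_vectoring(3.0, 4.0)
print(round(r, 3), round(math.degrees(theta), 2))   # ~5.0, ~53.13
```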
Abstract:
The design and implementation of a programmable cyclic redundancy check (CRC) computation circuit architecture, suitable for deployment in network-related systems-on-chip (SoCs), is presented. The architecture has been designed to be field reprogrammable so that it is fully flexible in terms of the polynomial deployed and the input port width. The circuit includes an embedded configuration controller with a low reconfiguration time and hardware cost. The circuit has been synthesised and mapped to 130-nm UMC standard cell [application-specific integrated circuit (ASIC)] technology and is capable of supporting line speeds of 5 Gb/s. © 2006 IEEE.
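A software reference for the programmable behaviour described above is sketched below: a bit-serial CRC whose generator polynomial and width are run-time parameters. The CRC-32/MPEG-2 parameters in the example are standard; the hardware architecture itself (port-width reconfiguration, embedded controller) is not modelled.

```python
def crc(data, poly, width, init=0):
    """Bit-serial CRC over `data` for a programmable generator polynomial.
    `poly` omits the implicit x^width term (e.g. 0x04C11DB7 for CRC-32);
    a hardware version unrolls this loop to match the input port width."""
    reg, mask = init, (1 << width) - 1
    for byte in data:
        for i in range(7, -1, -1):
            fb = ((reg >> (width - 1)) ^ (byte >> i)) & 1
            reg = ((reg << 1) & mask) ^ (poly if fb else 0)
    return reg

# CRC-32/MPEG-2 parameters: init all-ones, no reflection, no final XOR.
print(hex(crc(b"123456789", 0x04C11DB7, 32, init=0xFFFFFFFF)))  # 0x376e6e7
```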
Abstract:
A new type of advanced encryption standard (AES) implementation using a normal basis is presented. The method is based on a lookup technique that makes use of inversion and shift registers, leading to a smaller S-box lookup than corresponding implementations. The reduction in lookup size comes from grouping inverses into conjugate sets, which in turn reduces the number of lookup values. The technique is implemented in a regular AES architecture using register files, which requires less interconnect and area and is suitable for security applications. The implementation results are competitive in throughput and area with corresponding polynomial-basis solutions.
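The conjugate-set idea can be illustrated directly: under the Frobenius map a ↦ a², the nonzero elements of GF(2⁸) fall into a small number of conjugate sets, and because inversion commutes with squaring, only one inverse per set needs to be stored. The sketch below (assumed plain-Python field arithmetic with the AES polynomial, not the paper's normal-basis hardware) simply counts those sets.

```python
def gmul(a, b):
    """Multiply in GF(2^8) modulo the AES polynomial x^8 + x^4 + x^3 + x + 1."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

# Group the nonzero field elements into conjugate sets {a, a^2, a^4, ...}.
# Since (a^2)^-1 = (a^-1)^2, one stored inverse per set is enough.
seen, conjugate_sets = set(), []
for a in range(1, 256):
    if a in seen:
        continue
    orbit, x = [], a
    while x not in orbit:
        orbit.append(x)
        x = gmul(x, x)            # Frobenius map: squaring
    seen.update(orbit)
    conjugate_sets.append(orbit)

print(len(conjugate_sets))        # -> 35 sets instead of 255 individual inverses
```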
Abstract:
The area and power consumption of low-density parity-check (LDPC) decoders are typically dominated by embedded memories. To alleviate these high memory costs, this paper exploits the fact that all internal memories of an LDPC decoder are frequently updated with new data. These memory access statistics are exploited by replacing all static standard-cell based memories (SCMs) of a prior-art LDPC decoder implementation with dynamic SCMs (D-SCMs), which are designed to retain data just long enough to guarantee reliable operation. The use of D-SCMs leads to a 44% reduction in the silicon area of the LDPC decoder compared to the use of static SCMs. The low-power LDPC decoder architecture with refresh-free D-SCMs was implemented in a 90nm CMOS process, and silicon measurements show full functionality and an information bit throughput of up to 600 Mbps (as required by the IEEE 802.11n standard).
Abstract:
The upcoming IEEE 802.11ac standard boosts the throughput of the previous IEEE 802.11n standard by adding wider 80 MHz and 160 MHz channels with up to 8 antennas (versus a 40 MHz channel and 4 antennas in 802.11n). This necessitates new 1-8 stream 256/512-point Fast Fourier Transform (FFT) / inverse FFT (IFFT) processing with 80/160 MSample/s throughput. Although there is abundant related work, none of it meets the IEEE 802.11ac FFT/IFFT requirements on point size, throughput, and number of data streams at the same time. This paper proposes the first software-defined FFT/IFFT architecture as a solution. By making use of a customised soft stream processor on FPGA, we show how a software-defined FFT architecture can meet all the requirements of IEEE 802.11ac with low cost and high resource efficiency. Compared with the dedicated Xilinx FFT core, our implementation uses only one third of the resources and achieves up to three times the resource efficiency.
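As a point of reference for the transform sizes involved, the sketch below is a plain recursive radix-2 FFT that handles the 256/512-point lengths required by 802.11ac; it is a software reference only and says nothing about the paper's soft stream processor or its FPGA mapping.

```python
import cmath

def fft(x):
    """Plain recursive radix-2 decimation-in-time FFT (power-of-two length)."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

# 256-point transform of an impulse: a flat spectrum, as expected.
print(fft([1] + [0] * 255)[:4])
```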
Abstract:
Lattice-based cryptography has gained credence recently as a replacement for current public-key cryptosystems, due to its quantum resilience, versatility, and relatively small key sizes. To date, encryption based on the learning with errors (LWE) problem has only been investigated from an ideal-lattice standpoint, due to its computation and size efficiencies. However, a thorough investigation of standard lattices in practice has yet to be considered. Standard lattices may be preferred to ideal lattices due to their stronger security assumptions and less restrictive parameter selection process. In this paper, an area-optimised hardware architecture of a standard lattice-based cryptographic scheme is proposed. The design is implemented on an FPGA, and both encryption and decryption are found to fit comfortably on a Spartan-6 FPGA. This is the first hardware architecture for standard lattice-based cryptography reported in the literature to date, and thus is a benchmark for future implementations.
Additionally, a revised discrete Gaussian sampler is proposed which is the fastest of its type to date, and is also the first to investigate the cost savings of implementing with λ/2 bits of precision. Performance results are promising in comparison to the hardware designs of the equivalent ring-LWE scheme: the proposed design, in addition to providing a stronger security proof, generates 1272 encryptions per second and 4395 decryptions per second.
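To make the "standard lattice" setting concrete, the following toy sketch performs Regev-style encryption with plain LWE matrices rather than ring polynomials. All parameters (n, m, q, the error range) are illustrative and insecure, and the code is not the paper's scheme or sampler.

```python
import numpy as np

# Toy Regev-style encryption over standard (non-ideal) lattices, i.e. plain
# matrix LWE. Parameters (n, m, q, error range) are illustrative and NOT secure.
rng = np.random.default_rng(1)
n, m, q = 64, 128, 7681

A = rng.integers(0, q, size=(m, n))            # uniform public matrix
s = rng.integers(0, q, size=n)                 # secret key
e = rng.integers(-1, 2, size=m)                # small error vector
b = (A @ s + e) % q                            # public key: b = A.s + e (mod q)

def encrypt(bit):
    r = rng.integers(0, 2, size=m)             # binary randomness
    return (r @ A) % q, (r @ b + bit * (q // 2)) % q

def decrypt(u, v):
    d = (v - u @ s) % q                        # = r.e + bit*(q//2) (mod q)
    return int(q // 4 < d < 3 * q // 4)        # closer to q/2 -> bit 1

u, v = encrypt(1)
print(decrypt(u, v))                           # -> 1
```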
Abstract:
Access control is a software engineering challenge in database applications. Currently, there is no satisfactory solution to dynamically implement evolving fine-grained access control mechanisms (FGACM) on the business tiers of relational database applications. To tackle this access control gap, we propose an architecture, herein referred to as Dynamic Access Control Architecture (DACA). DACA allows FGACM to be dynamically built and updated at runtime in accordance with the established fine-grained access control policies (FGACP). DACA explores and makes use of Call Level Interface (CLI) features to implement FGACM on business tiers. Among these features, we emphasize their performance and their multiple modes of access to data residing in relational databases. The different CLI access modes are wrapped by typed objects driven by FGACM, which are built and updated at runtime. Programmers dispense with the traditional CLI access modes and instead use the ones that are dynamically implemented and updated. DACA comprises three main components: the Policy Server (a repository of metadata for FGACM), the Dynamic Access Control Component (DACC) (the business tier component responsible for implementing FGACM), and the Policy Manager (a broker between the DACC and the Policy Server). Unlike current approaches, DACA does not depend on any particular access control model or access control policy, thereby promoting its applicability to a wide range of situations. In order to validate DACA, a solution based on Java, Java Database Connectivity (JDBC), and SQL Server was devised and implemented. Two evaluations were carried out: the first evaluates DACA's capability to implement and update FGACM dynamically at runtime, and the second assesses DACA's performance against a standard use of JDBC without any FGACM. The collected results show that DACA is an effective approach for implementing evolving FGACM on business tiers based on Call Level Interfaces, in this case JDBC.
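A minimal sketch of the wrapper idea, assuming Python's sqlite3 as a stand-in for JDBC/CLI and a hypothetical policy table and schema: a typed object exposes only the access modes granted by the current policy, in the spirit of DACA's dynamically built FGACM.

```python
import sqlite3

# Sketch only: Python's sqlite3 stands in for JDBC/CLI, and the policy dict for
# the Policy Server metadata; class, table, and mode names are hypothetical.
policy = {"orders": {"select", "insert"}}      # granted access modes per table

class AccessControlledTable:
    """Typed wrapper that exposes only the access modes granted by the policy."""
    def __init__(self, conn, table):
        self.conn, self.table = conn, table
        self.modes = policy.get(table, set())

    def select_all(self):
        if "select" not in self.modes:
            raise PermissionError(f"select denied on {self.table}")
        return self.conn.execute(f"SELECT * FROM {self.table}").fetchall()

    def insert(self, values):
        if "insert" not in self.modes:
            raise PermissionError(f"insert denied on {self.table}")
        marks = ",".join("?" * len(values))
        self.conn.execute(f"INSERT INTO {self.table} VALUES ({marks})", values)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, item TEXT)")
orders = AccessControlledTable(conn, "orders")
orders.insert((1, "widget"))
print(orders.select_all())                     # [(1, 'widget')]
```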
Abstract:
The continuous demand for highly efficient wireless transmitter systems has triggered an increased interest in switching-mode techniques for the required power amplification. The RF carrier amplitude-burst transmitter, i.e. a wireless transmitter chain in which a phase-modulated carrier is modulated in amplitude in an on-off mode according to some prescribed envelope-to-time conversion, such as pulse-width or sigma-delta modulation, constitutes a promising architecture capable of efficiently transmitting signals with highly demanding complex modulation schemes. However, the practical implementations tested so far report results that fall well short of the theoretical promises (perfect linearity and efficiency). My original contribution to knowledge presented in this thesis is the first thorough study and model of the power efficiency and linearity characteristics that can actually be achieved with this architecture. The analysis starts with a brief review of the idealized theoretical behavior of these switched-mode amplifier systems, followed by a study of the many sources of impairment that appear when the real system is implemented. In particular, special attention is paid to the dynamic load modulation caused by the often ignored interaction between the narrowband signal reconstruction filter and the usual single-ended switched-mode power amplifier, which, among many other performance impairments, forces a two-transistor implementation. The performance of this architecture is clearly explained based on the presented theory, which is supported by simulations and corresponding measured results from a fully working implementation. The conclusions drawn allow the development of a set of design rules for future improvements, one of which is proposed and verified in this thesis. It suggests a significant modification to the traditional architecture, in which the phase-modulated carrier is now always on, thus allowing a single-transistor implementation, and the amplitude is impressed into the carrier phase according to a bi-phase code.
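The envelope-to-time conversion at the heart of this architecture can be sketched in a few lines: a first-order sigma-delta modulator (with assumed toy signal parameters, not the thesis' implementation) turns a slowly varying envelope into the on/off burst sequence that gates the phase-modulated carrier.

```python
import numpy as np

# Toy parameters for illustration only; a real design works at RF rates.
fs, fc = 1.0e6, 50e3                           # sample rate and carrier frequency
t = np.arange(2000) / fs
envelope = 0.5 + 0.4 * np.sin(2 * np.pi * 1e3 * t)   # slowly varying amplitude
carrier = np.cos(2 * np.pi * fc * t)                  # phase modulation omitted here

def sigma_delta(x):
    """First-order sigma-delta modulator: 1-bit on/off stream tracking x."""
    out, acc = np.zeros_like(x), 0.0
    for i, v in enumerate(x):
        acc += v - (out[i - 1] if i else 0.0)
        out[i] = 1.0 if acc >= 0.5 else 0.0
    return out

bursts = sigma_delta(envelope)                 # drive for the switched-mode PA
rf = bursts * carrier                          # amplitude-burst RF signal
print(f"duty cycle ~ {bursts.mean():.2f}")     # tracks the mean envelope (~0.50)
```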
Abstract:
In this paper, a parallel implementation of an Adaptive Generalized Predictive Control (AGPC) algorithm is presented. Since the AGPC algorithm needs to be fed with knowledge of the plant transfer function, the parallelization of a standard Recursive Least Squares (RLS) estimator and a GPC predictor is discussed here.
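Since the abstract leans on a standard RLS estimator, a serial software reference is sketched below (the model order, forgetting factor, and toy plant are assumed); the paper's contribution is the parallelisation of this recursion, which is not shown.

```python
import numpy as np

def rls_identify(u, y, order=2, lam=0.99, delta=100.0):
    """Serial RLS estimate of FIR plant parameters from input u and output y."""
    theta, P = np.zeros(order), delta * np.eye(order)
    for k in range(order, len(u)):
        phi = u[k - order:k][::-1]                    # regressor [u[k-1], u[k-2], ...]
        gain = P @ phi / (lam + phi @ P @ phi)
        theta = theta + gain * (y[k] - phi @ theta)
        P = (P - np.outer(gain, phi @ P)) / lam
    return theta

rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = 0.7 * np.roll(u, 1) - 0.3 * np.roll(u, 2)         # toy plant: y[k] = 0.7u[k-1] - 0.3u[k-2]
print(np.round(rls_identify(u, y), 3))                # ~ [ 0.7 -0.3]
```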
Abstract:
Wireless sensor networks (WSNs) have attracted growing interest in the last decade as an infrastructure to support a diversity of ubiquitous computing and cyber-physical systems. However, most research work has focused on protocols or on specific applications. As a result, there remains a clear lack of effective, feasible and usable system architectures that address both functional and non-functional requirements in an integrated fashion. In this paper, we outline the EMMON system architecture for large-scale, dense, real-time embedded monitoring. EMMON provides a hierarchical communication architecture together with integrated middleware and command-and-control software. It has been designed to use standard commercially available technologies, while maintaining as much flexibility as possible to meet specific application requirements. The EMMON architecture has been validated through extensive simulation and experimental evaluation, including a 300+ node test-bed, which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date.
Abstract:
All-optical label swapping (AOLS) forms a key technology towards the implementation of all-optical packet switching (AOPS) nodes for the future optical Internet. The capital expenditure of deploying AOLS increases with the size of the label space (i.e. the number of labels used), since a dedicated optical device is needed for each label recognized on every node. Label space sizes are affected by the way in which demands are routed: for instance, shortest-path routing leads to the use of fewer labels but high link utilization, while minimum-interference routing leads to the opposite. This paper studies all-optical label stacking (AOLStack), an extension of the AOLS architecture. AOLStack aims at reducing label spaces while easing the compromise with link utilization. An integer linear program is proposed with the objective of analyzing how AOLStack softens the aforementioned trade-off. Furthermore, a heuristic aiming at finding good solutions in polynomial time is proposed as well. Simulation results show that AOLStack either a) reduces the label spaces with only a small increase in link utilization or, similarly, b) makes better use of the residual bandwidth to decrease the number of labels even further.
Abstract:
The work developed in this thesis delves into, and contributes innovative solutions to, the correspondence problem in underwater images. In these environments, what really complicates the processing tasks is the lack of well-defined contours caused by blurred images, which is mainly due to deficient illumination or to the lack of uniformity of artificial lighting systems. The objectives achieved in this thesis can be highlighted along two main directions. To improve the motion estimation algorithm, a new method was proposed that introduces texture parameters to reject false correspondences between pairs of images. A series of tests on real underwater images was carried out to select the most suitable strategies. In order to achieve real-time results, a novel VLSI architecture is proposed for the implementation of some computationally expensive parts of the motion estimation algorithm.
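As a hedged illustration of the general idea rather than the thesis' actual method, the sketch below performs SAD block matching and rejects low-texture blocks up front, since flat regions in blurred underwater imagery are the typical source of false correspondences; the variance threshold, block size, and search range are assumed.

```python
import numpy as np

TEXTURE_THRESHOLD = 25.0                    # assumed variance threshold

def match_block(ref, cur, top, left, size=16, search=8):
    """SAD block matching that first rejects low-texture (flat) blocks."""
    block = ref[top:top + size, left:left + size].astype(np.int32)
    if block.var() < TEXTURE_THRESHOLD:     # texture test: flat blocks are unreliable
        return None
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cur[top + dy:top + dy + size, left + dx:left + dx + size].astype(np.int32)
            if cand.shape != block.shape:
                continue
            sad = np.abs(block - cand).sum()
            if sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, 3), axis=(0, 1))     # synthetic displacement of (2, 3)
print(match_block(ref, cur, 24, 24))        # -> (2, 3)
```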
Abstract:
The development of architecture and the settlement is central to discussions concerning the Neolithic transformation, as the very visible evidence for the changes in society that run parallel to the domestication of plants and animals. Architecture has been used as an important aspect of models of how the transformation occurred, and as evidence for the sharp difference between hunter-gatherer and farming societies. We suggest that the emerging evidence for considerable architectural complexity from the early Neolithic indicates that some of our interpretations depend too much on a very basic understanding of structures, which are normally seen as being primarily for residential purposes and containing households, which become the organising principle for the new communities which are often seen as fully sedentary and described as villages. Recent work in southern Jordan suggests that in this region at least there is little evidence for a standard house, and that structures are constructed for a range of diverse primary purposes other than simple domestic shelters.