31 results for Complex Design Space

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

100.00%

Publisher:

Abstract:

The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design space exploration for optimization, followed by design verification. Designers of special-purpose VLSI implementations often need to explore parameters, such as optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error-correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). Exploiting diverse target architectures typically means developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modifications or code tuning on multicore CPUs, GPUs, and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, to introduce FPGAs as a potential platform for efficiently executing simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes that range from short/medium length (e.g., 8,000 bits) to long length (e.g., 64,800 bits) DVB-S2 codes. We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, thus providing different acceleration factors over conventional multicore CPUs.
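
As a rough illustration of the per-design-point workload described above, the sketch below estimates the bit error rate of one candidate configuration by Monte Carlo simulation over an AWGN channel, with a fixed-point quantisation stage standing in for a bitwidth choice. It is not the authors' OpenCL code; the `decode` callable, the `quantise` helper and all parameter names are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' OpenCL code): Monte Carlo evaluation of one
# design point, here the effect of quantising received LLRs to a candidate bitwidth.
import numpy as np

def quantise(llr, total_bits, frac_bits):
    """Saturating fixed-point quantisation of log-likelihood ratios."""
    step = 2.0 ** -frac_bits
    max_val = (2 ** (total_bits - 1) - 1) * step
    return np.clip(np.round(llr / step) * step, -max_val, max_val)

def monte_carlo_ber(decode, n_bits, snr_db, total_bits, frac_bits, n_frames=200, seed=0):
    """Estimate the bit error rate of `decode` at one (SNR, bitwidth) design point."""
    rng = np.random.default_rng(seed)
    noise_var = 10.0 ** (-snr_db / 10.0)
    errors = 0
    for _ in range(n_frames):
        tx = np.ones(n_bits)                      # all-zeros codeword, BPSK-mapped to +1
        rx = tx + rng.normal(0.0, np.sqrt(noise_var), n_bits)
        llr = 2.0 * rx / noise_var                # channel LLRs
        decoded = decode(quantise(llr, total_bits, frac_bits))
        errors += np.count_nonzero(decoded)       # any decoded 1 is a bit error
    return errors / (n_frames * n_bits)

if __name__ == "__main__":
    hard_decision = lambda llr: (llr < 0).astype(np.uint8)   # trivial stand-in "decoder"
    print(monte_carlo_ber(hard_decision, n_bits=8000, snr_db=2.0,
                          total_bits=6, frac_bits=2))
```

Each (SNR, bitwidth) pair is one design point, and many frames per point are needed for stable error estimates, which is what makes CPU/GPU/FPGA acceleration attractive.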

Relevance:

100.00%

Publisher:

Abstract:

This paper summarizes numerous research activities in high-performance networks and network security processing, and explores technology-related performance constraints, such as the critical performance limits of circuit architectures that are set by the underlying semiconductor technologies.

Relevance:

100.00%

Publisher:

Abstract:

Efficiently exploring exponential-size architectural design spaces with many interacting parameters remains an open problem: the sheer number of experiments required renders detailed simulation intractable. We attack this via an automated approach that builds accurate predictive models. We simulate sampled points, using the results to teach our models the function describing the relationships among design parameters. The models can be queried and are very fast, enabling efficient design tradeoff discovery. We validate our approach via two uniprocessor sensitivity studies, predicting IPC with only 1-2% error. In an experimental study using the approach, training on 1% of a 250K-point CMP design space allows our models to predict performance with only 4-5% error. Our predictive modeling combines well with techniques that reduce the time taken by each simulation experiment, achieving net time savings of three to four orders of magnitude.
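
A minimal sketch of the sample-train-predict idea follows, assuming a cheap synthetic stand-in for the simulator and a random-forest regressor in place of whatever predictive model the paper actually trains; the parameter names and value ranges are invented for illustration only.

```python
# Illustrative sketch, not the authors' tooling: "simulate" a small random sample of a
# combinatorial design space, fit a regression model, then predict the rest of the space.
import itertools
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical design parameters and candidate values (invented for illustration).
params = {
    "cores":    [1, 2, 4, 8, 16],
    "l2_kb":    [128, 256, 512, 1024, 2048, 4096],
    "issue":    [1, 2, 4, 8],
    "freq_ghz": [1.0, 1.5, 2.0, 2.5, 3.0],
    "rob":      [32, 64, 128, 256],
}
space = np.array(list(itertools.product(*params.values())), dtype=float)  # 2,400 points

def simulate(point):
    """Stand-in for a detailed simulator returning, say, IPC for one configuration."""
    cores, l2, issue, freq, rob = point
    return (0.4 * np.log2(cores + 1) + 0.1 * np.log2(l2)
            + 0.2 * issue ** 0.5 + 0.3 * freq + 0.05 * np.log2(rob))

rng = np.random.default_rng(0)
train_idx = rng.choice(len(space), size=48, replace=False)    # only a small sample
X_train = space[train_idx]
y_train = np.array([simulate(p) for p in X_train])            # the only "expensive" runs

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)
y_pred = model.predict(space)                                  # cheap queries, full space
y_true = np.array([simulate(p) for p in space])                # only knowable here because
                                                               # the stand-in is cheap
print(f"mean abs. error: {100 * np.mean(np.abs(y_pred - y_true) / y_true):.1f}%")
```

Only the sampled points pay the cost of "simulation"; every other configuration is answered by the model.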

Relevance:

100.00%

Publisher:

Abstract:

Hardware designers and engineers typically need to explore a multi-parametric design space in order to find the best configuration for their designs, using simulations that can take weeks to months to complete. For example, designers of special-purpose chips need to explore parameters such as the optimal bitwidth and data representation. This is the case for the development of complex algorithms such as the Low-Density Parity-Check (LDPC) decoders used in modern communication systems. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to graphics processing units (GPUs) and FPGAs. Depending on the simulation requirements, the ideal architecture to use can vary. In this paper we propose a new design flow based on OpenCL, a unified multiplatform programming model, which accelerates LDPC decoding simulations, thereby significantly reducing architectural exploration and design time. OpenCL-based parallel kernels are used without modifications or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, for mapping the simulations onto FPGAs. To the best of our knowledge, this is the first time that a single, unmodified OpenCL code is used to target these three different platforms. We show that, depending on the design parameters to be explored in the simulation and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, providing different acceleration factors. For example, although simulations can typically execute more than 3x faster on FPGAs than on GPUs, the overhead of circuit synthesis often outweighs the benefits of FPGA-accelerated execution.
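
To make the "single kernel, many platforms" idea concrete, the sketch below enumerates whatever OpenCL devices the host exposes and launches the same unmodified kernel source on each. It uses pyopencl and a trivial LLR-saturation kernel as a stand-in for a real LDPC decoding kernel; none of it is the authors' code, and the SOpenCL FPGA path in particular is not shown.

```python
# Illustrative sketch of retargeting one OpenCL kernel to every available device.
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void saturate_llr(__global const float *in,
                           __global float *out,
                           const float max_mag) {
    int i = get_global_id(0);
    out[i] = clamp(in[i], -max_mag, max_mag);   // same source on CPU, GPU, ...
}
"""

llr = np.random.default_rng(0).normal(0.0, 4.0, 1 << 16).astype(np.float32)
out = np.empty_like(llr)

for platform in cl.get_platforms():               # enumerate whatever devices exist
    for device in platform.get_devices():
        ctx = cl.Context([device])
        queue = cl.CommandQueue(ctx)
        prg = cl.Program(ctx, KERNEL_SRC).build() # same, unmodified kernel source
        mf = cl.mem_flags
        d_in = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=llr)
        d_out = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)
        prg.saturate_llr(queue, llr.shape, None, d_in, d_out, np.float32(7.75))
        cl.enqueue_copy(queue, out, d_out)
        print(device.name, float(np.max(np.abs(out))))
```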

Relevance:

90.00%

Publisher:

Abstract:

Let H be a two-dimensional complex Hilbert space and $\mathcal{P}(^3H)$ the space of 3-homogeneous polynomials on H. We give a characterization of the extreme points of its unit ball, from which we deduce that the unit sphere of $\mathcal{P}(^3H)$ is the disjoint union of the sets of its extreme and smooth points. We also show that an extreme point of the unit ball of $\mathcal{P}(^3H)$ remains extreme when considered as an element of the unit ball of $\mathcal{L}(^3H)$. Finally, we make a few remarks about the geometry of the unit ball of the predual of $\mathcal{P}(^3H)$ and give a characterization of its smooth points.
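
For orientation, and under the assumption (ours, not the paper's notation) that H is identified with $\mathbb{C}^2$, a 3-homogeneous polynomial and the norm in question take the form:

```latex
% Notation sketch, assuming coordinates H \cong \mathbb{C}^{2}; not taken from the paper.
P(x,y) = a\,x^{3} + b\,x^{2}y + c\,xy^{2} + d\,y^{3}, \qquad (x,y)\in\mathbb{C}^{2},
\qquad
\|P\| = \sup\bigl\{\, |P(x,y)| : |x|^{2} + |y|^{2} \le 1 \,\bigr\}.
```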

Relevance:

90.00%

Publisher:

Abstract:

A methodology which allows a non-specialist to rapidly design silicon wavelet transform cores has been developed. This methodology is based on a generic architecture utilizing time-interleaved coefficients for the wavelet transform filters. The architecture is scalable and has been parameterized in terms of wavelet family, wavelet type, data word length and coefficient word length. The control circuit is designed in such a way that the cores can also be cascaded, without any interface glue logic, for any desired level of decomposition. This parameterization allows the use of any orthonormal wavelet family, thereby extending the design space for improved transformation from algorithm to silicon. Case studies for stand-alone and cascaded silicon cores, for single- and multi-stage analysis respectively, are reported. The typical design time to produce the silicon layout of a wavelet-based system has been reduced by an order of magnitude. The cores are comparable in area and performance to hand-crafted designs. The designs have been captured in VHDL, so they are portable across a range of foundries and are also applicable to FPGA and PLD implementations.
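
For readers unfamiliar with what each generated core computes, the sketch below shows one analysis stage of an orthonormal wavelet decomposition in NumPy, with two stages cascaded the way the cores can be. The Haar filter pair and all names are illustrative only; the actual cores are parameterized VHDL, not software.

```python
# Minimal sketch of the computation one analysis stage of such a core performs.
import numpy as np

def analysis_stage(signal, low, high):
    """Convolve with the analysis filter pair and downsample by two."""
    approx = np.convolve(signal, low)[1::2]    # low-pass branch
    detail = np.convolve(signal, high)[1::2]   # high-pass branch
    return approx, detail

# Haar orthonormal filter pair; swapping these coefficients changes the wavelet family,
# mirroring the core generator's wavelet-family parameter.
s = 1.0 / np.sqrt(2.0)
low, high = np.array([s, s]), np.array([s, -s])

x = np.sin(np.linspace(0.0, 8.0 * np.pi, 64))
a1, d1 = analysis_stage(x, low, high)          # stage 1
a2, d2 = analysis_stage(a1, low, high)         # cascaded stage 2 (multi-stage analysis)
print(len(a1), len(d1), len(a2), len(d2))
```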

Relevance:

90.00%

Publisher:

Abstract:

Architects use cycle-by-cycle simulation to evaluate design choices and understand tradeoffs and interactions among design parameters. Efficiently exploring exponential-size design spaces with many interacting parameters remains an open problem: the sheer number of experiments renders detailed simulation intractable. We attack this problem via an automated approach that builds accurate, confident predictive design-space models. We simulate sampled points, using the results to teach our models the function describing the relationships among design parameters. The models produce highly accurate performance estimates for other points in the space, can be queried to predict the performance impact of architectural changes, and are very fast compared to simulation, enabling efficient discovery of tradeoffs among parameters in different regions. We validate our approach via sensitivity studies on memory hierarchy and CPU design spaces: our models generally predict IPC with only 1-2% error and reduce the required simulation by two orders of magnitude. We also show the efficacy of our technique for exploring chip multiprocessor (CMP) design spaces: when trained on a 1% sample drawn from a CMP design space with 250K points and up to 55x performance swings among different system configurations, our models predict performance with only 4-5% error on average. Our approach combines with techniques that reduce the time per simulation, achieving net time savings of three to four orders of magnitude.
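
The emphasis on "accurate, confident" models raises the practical question of how much sampling is enough. One illustrative way to judge this, not necessarily the authors' procedure, is to track cross-validated error as the simulated sample grows, sketched below on a synthetic stand-in for IPC.

```python
# Illustrative only: hold-out error as a confidence signal for a design-space model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(200, 4))                       # "already-simulated" points
y = 0.5 + X @ np.array([0.5, 0.3, 0.15, 0.05]) + 0.02 * rng.normal(size=200)  # toy "IPC"

for n in (25, 50, 100, 200):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    scores = cross_val_score(model, X[:n], y[:n], cv=5,
                             scoring="neg_mean_absolute_percentage_error")
    print(f"{n:4d} training points -> CV error {-scores.mean() * 100:.1f}%")
```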

Relevance:

90.00%

Publisher:

Abstract:

In this paper, we present a unique cross-layer design framework that allows systematic exploration of the energy-delay-quality trade-offs at the algorithm, architecture and circuit levels of design abstraction for each block of a system. In addition, by taking into consideration the interactions between the different sub-blocks of a system, it identifies the design solutions that can ensure the least energy at the "right amount of quality" for each sub-block and for the system under user quality/delay constraints. This is achieved by deriving sensitivity-based design criteria, the balancing of which forms the quantitative relations that can be used early in the system design process to evaluate the energy efficiency of various design options. When applied to the exploration of the energy-quality design space of the main blocks of a digital camera and a wireless receiver, the proposed framework achieves 58% and 33% energy savings under 41% and 20% error increases, respectively.
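
One generic way to read "balancing sensitivity-based criteria", stated here under our own assumptions rather than as the paper's actual derivation: if each block's energy depends smoothly on its quality setting and a system-level quality constraint must be met, the least-energy operating point equalizes the energy-per-quality sensitivities across blocks.

```latex
% Generic reading (assumptions: smooth per-block energies E_i(q_i) and a smooth system
% quality constraint Q(q_1,\dots,q_n) \ge Q_{\min}); not the paper's exact criteria.
\min_{q_1,\dots,q_n} \sum_i E_i(q_i)
\quad\text{s.t.}\quad Q(q_1,\dots,q_n) \ge Q_{\min}
\;\;\Longrightarrow\;\;
\frac{\partial E_i/\partial q_i}{\partial Q/\partial q_i} = \lambda
\quad\text{for all } i.
```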

Relevance:

80.00%

Publisher:

Abstract:

Let H be a (real or complex) Hilbert space. Using spectral theory and properties of the Schatten-von Neumann operators, we prove that every symmetric tensor of unit norm in H ⊗ H is an infinite absolute convex combination of points of the form x ⊗ x with x in the unit sphere of the Hilbert space. We use this to obtain explicit characterizations of the smooth points of the unit ball of H ⊗ H.
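
Written out, the representation asserted above (an "infinite absolute convex combination" of elementary symmetric tensors) reads:

```latex
t = \sum_{n=1}^{\infty} \lambda_n \, x_n \otimes x_n ,
\qquad \|x_n\|_H = 1 ,
\qquad \sum_{n=1}^{\infty} |\lambda_n| = 1 .
```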

Relevance:

80.00%

Publisher:

Abstract:

The multiplicative spectrum of a complex Banach space X is the class K(X) of all (automatically compact and Hausdorff) topological spaces appearing as spectra of Banach algebras (X,*) for all possible continuous multiplications on X turning X into a commutative associative complex algebra with unity. The properties of the multiplicative spectrum are studied. In particular, we show that K(X^n) consists of countable compact spaces with at most n non-isolated points for any separable hereditarily indecomposable Banach space X. We prove that K(C[0,1]) coincides with the class of all metrizable compact spaces.
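
A standard orienting example from Gelfand theory (not taken from the paper): pointwise multiplication on C(K), for K compact Hausdorff, is a continuous multiplication whose spectrum is K itself, so

```latex
K \in \mathcal{K}\bigl(C(K)\bigr),
\qquad \text{in particular} \qquad
[0,1] \in \mathcal{K}\bigl(C[0,1]\bigr).
```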

Relevance:

80.00%

Publisher:

Abstract:

Use of the Dempster-Shafer (D-S) theory of evidence to deal with uncertainty in knowledge-based systems has been widely addressed, and several AI implementations have been undertaken based on the D-S theory of evidence or its extensions. However, the representation of uncertain relationships between evidence and hypothesis groups (heuristic knowledge) remains a major problem. This paper presents an approach to representing such knowledge, in which Yen's probabilistic multi-set mappings are extended to evidential mappings, and Shafer's partition technique is used to obtain the mass function in a complex evidence space. A new graphical method for describing the knowledge is then introduced, which extends the graphical model of Lowrance et al. Finally, an extended framework for evidential reasoning systems is specified.
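
As background for readers, the sketch below implements the standard combination rule that the D-S framework builds on; it is not the paper's evidential mappings, and the frame of discernment and mass values are made-up examples.

```python
# Standard Dempster's rule of combination over mass functions on frozenset focal elements.
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: normalised intersection of focal elements of m1 and m2."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb              # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two pieces of evidence about a toy frame {flu, cold, covid}.
m1 = {frozenset({"flu", "covid"}): 0.7, frozenset({"flu", "cold", "covid"}): 0.3}
m2 = {frozenset({"covid"}): 0.6, frozenset({"flu", "cold", "covid"}): 0.4}
print(dempster_combine(m1, m2))
```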

Relevance:

80.00%

Publisher:

Abstract:

Dapivirine mucoadhesive gels and freeze-dried tablets were prepared using a 3 x 3 x 2 factorial design. An artificial neural network (ANN) with multi-layer perceptron architecture was used to investigate the effect of the hydroxypropyl methylcellulose (HPMC):polyvinylpyrrolidone (PVP) ratio (X1), mucoadhesive concentration (X2) and delivery system (gel or freeze-dried mucoadhesive tablet, X3) on the response variables: cumulative release of dapivirine at 24 h (Q24), mucoadhesive force (Fmax) and zero-rate viscosity. Optimisation was performed by minimising the error between the experimental values and those predicted by the ANN. The method was validated using check-point analysis by preparing six formulations of gels and their corresponding freeze-dried tablets randomly selected from within the design space of the contour plots. Experimental and predicted values of the response variables were not significantly different (p > 0.05, two-sided paired t-test). For gels, Q24 values were higher than for their corresponding freeze-dried tablets. Fmax values for freeze-dried tablets were significantly greater (2-4 times, p < 0.05, two-sided paired t-test) than for the equivalent gels. Freeze-dried tablets having lower values of the X1 component and higher values of the X2 component offered the best compromise between effective dapivirine release, mucoadhesion and viscosity, such that increased vaginal residence time was likely to be achieved.
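
For readers unfamiliar with the modelling step, the sketch below fits a small multi-layer perceptron to a full 3 x 3 x 2 factorial design and makes check-point-style predictions. It uses scikit-learn as a stand-in for the study's ANN software, and the numbers are synthetic placeholders, not the study's measurements.

```python
# Illustrative sketch: multi-layer perceptron mapping (X1, X2, X3) levels to one response.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Full 3 x 3 x 2 factorial design (18 formulations), coded as factor levels.
X = np.array([(x1, x2, x3) for x1 in (0, 1, 2) for x2 in (0, 1, 2) for x3 in (0, 1)],
             dtype=float)
# Synthetic placeholder response standing in for, e.g., Q24 measurements.
y = 40 + 8 * X[:, 0] - 5 * X[:, 1] + 12 * X[:, 2] + rng.normal(0, 1.5, len(X))

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict([[1, 2, 0], [1, 2, 1]]))   # check-point style predictions: gel vs tablet
```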

Relevance:

80.00%

Publisher:

Abstract:

We treat the question of existence of common hypercyclic vectors for families of continuous linear operators. It is shown that for any continuous linear operator $T$ on a complex Fréchet space $X$ and any set $\Lambda \subseteq \mathbb{R}_+ \times \mathbb{C}$ which is not of zero three-dimensional Lebesgue measure, the family $\{aT + bI : (a, b) \in \Lambda\}$ has no common hypercyclic vectors. This allows us to answer negatively questions raised by Godefroy and Shapiro and by Aron. We also prove a sufficient condition for a family of scalar multiples of a given operator on a complex Fréchet space to have a common hypercyclic vector. It allows us to show that if $D = \{z \in \mathbb{C} : |z| < 1\}$ and $f \in H^\infty(D)$ is non-constant, then the family $\{z M_f^* : b^{-1} < |z| < a^{-1}\}$ has a common hypercyclic vector, where $M_f : H^2(D) \to H^2(D)$, $M_f g = fg$, $a = \inf\{|f(z)| : z \in D\}$ and $b = \sup\{|f(z)| : z \in D\}$, providing an affirmative answer to a question by Bayart and Grivaux. Finally, extending a result of Costakis and Sambarino, we prove that the family $\{aT_b : a, b \in \mathbb{C} \setminus \{0\}\}$ has a common hypercyclic vector, where $T_b f(z) = f(z - b)$ acts on the Fréchet space $H(\mathbb{C})$ of entire functions of one complex variable.
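
For readers outside the area, the standard definition underlying the abstract (not specific to this paper): a vector is hypercyclic for a continuous linear operator when its orbit is dense, and a common hypercyclic vector for a family is a single vector hypercyclic for every member of the family.

```latex
x \text{ is hypercyclic for } T
\quad\Longleftrightarrow\quad
\overline{\{\, T^{n}x \;:\; n \ge 0 \,\}} = X .
```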

Relevance:

80.00%

Publisher:

Abstract:

We show that for every supercyclic strongly continuous operator semigroup $\{T_t\}_{t\geq 0}$ acting on a complex F-space, every $T_t$ with $t>0$ is supercyclic. Moreover, the set of supercyclic vectors of $T_t$ does not depend on the choice of $t>0$.
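
Again for orientation (standard definitions, not from the paper): a vector $x$ is supercyclic for an operator $T$ when the scalar multiples of its orbit are dense, and the semigroup is supercyclic when a single $x$ has this property for the whole semigroup:

```latex
\overline{\{\, \lambda T^{n} x \;:\; \lambda \in \mathbb{C},\; n \ge 0 \,\}} = X
\qquad\text{and, for the semigroup,}\qquad
\overline{\{\, \lambda T_{t} x \;:\; \lambda \in \mathbb{C},\; t \ge 0 \,\}} = X .
```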