974 results for Computational power


Relevance:

60.00%

Publisher:

Abstract:

The growing demand for computational power has driven the research and development of digital processors that are ever denser in transistors and ever faster in clock speed, although limiting factors such as power consumption, heat dissipation, manufacturing complexity, and commercial cost cannot be ignored. Along another line of information processing lies quantum computing, whose elementary storage unit is the quantum version of the bit, the qubit (quantum bit), which holds a superposition of two states, unlike the classical bit, which records only one state. Quantum simulators, which run on conventional computers, make it possible to execute quantum algorithms but, being software products, suffer reduced performance owing to the computational model and to memory limitations. This dissertation presents a hardware-implementable coprocessor for simulating quantum operations, built on an application-specific architecture that can exploit parallelism through component replication and pipelining. The architecture includes a quantum state memory, which stores the individual and group states of the qubits; a scratch memory, which holds the quantum operators for two or more qubits built at run time; a calculation unit, responsible for the complex-number products that underlie the tensor and matrix products required to execute quantum operations; a measurement unit, needed to determine the quantum state of the machine; and a control unit, which governs the correct operation of the datapath components using a microprogram and a few auxiliary components.
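
The core arithmetic described for the calculation unit, complex-number products composed into tensor (Kronecker) and matrix products that apply a gate to a stored qubit state, can be sketched in software. The NumPy snippet below is a minimal illustration of that composition only; all names and the choice of gate are illustrative and are not taken from the dissertation's hardware design.

```python
# Minimal sketch: build an n-qubit operator from single-qubit gates via Kronecker
# products and apply it to a state vector, mirroring the tensor/matrix products
# the calculation unit is said to perform. Names and gate choice are illustrative.
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
I = np.eye(2, dtype=complex)                                  # identity

def apply_gate(state, gate, target, n_qubits):
    """Apply a single-qubit gate to `target` of an n-qubit state vector."""
    op = np.array([[1]], dtype=complex)
    for q in range(n_qubits):
        op = np.kron(op, gate if q == target else I)   # operator built at run time
    return op @ state

# Two-qubit register initialised to |00>; put qubit 0 into superposition.
state = np.zeros(4, dtype=complex)
state[0] = 1.0
state = apply_gate(state, H, target=0, n_qubits=2)
print(np.round(state, 3))  # amplitudes 1/sqrt(2) on |00> and |10>
```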

Relevance:

60.00%

Publisher:

Abstract:

This dissertation presents the implementation of a parallel genetic algorithm using the coarse-grained model, also known as the island model, for multiprocessor embedded systems. Multiprocessor embedded systems are becoming increasingly complex, driven by the demand for the greater computational power required by the applications they run, chiefly multimedia, Internet, and wireless communication applications. Some of these applications are beginning to use genetic algorithms, which can benefit from the parallel processing available in multiprocessor embedded systems. In the island-model parallel genetic algorithm, each processor of the embedded system evolves its own population independently of the others. To speed up the evolutionary process, a migration operator is executed at defined intervals to move the best individuals between islands. Different logical topologies, such as ring, neighbourhood, and broadcast, are analysed in the individual-migration phase. Experimental results are produced for the optimisation of three functions taken from the literature.
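
The island scheme described above can be sketched compactly: each island evolves its own population and, every few generations, its best individual migrates to a neighbour over a ring topology. The benchmark function, operators, and parameters below are illustrative stand-ins, not those used in the dissertation.

```python
# Island-model GA sketch: independent populations evolve (here sequentially, for
# clarity) and periodically exchange their best individuals over a ring topology.
import random

DIM, POP, ISLANDS, GENS, MIGRATE_EVERY = 5, 20, 4, 100, 10
fitness = lambda x: sum(v * v for v in x)            # minimise the sphere function

def evolve(pop):
    """One generation: tournament selection plus Gaussian mutation."""
    new = []
    for _ in range(len(pop)):
        a, b = random.sample(pop, 2)
        parent = min(a, b, key=fitness)
        new.append([v + random.gauss(0, 0.1) for v in parent])
    return new

islands = [[[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
           for _ in range(ISLANDS)]

for gen in range(1, GENS + 1):
    islands = [evolve(pop) for pop in islands]
    if gen % MIGRATE_EVERY == 0:                      # ring migration of the best
        best = [min(pop, key=fitness) for pop in islands]
        for i, pop in enumerate(islands):
            pop[random.randrange(POP)] = best[(i - 1) % ISLANDS]

print(min(fitness(ind) for pop in islands for ind in pop))
```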

Relevance:

60.00%

Publisher:

Abstract:

Direct volume rendering has become a popular technique for the volumetric visualisation of data drawn from sources such as scientific simulations, analytical functions, and medical scanners, among others. Volume rendering algorithms such as raycasting produce high-quality images; their use, however, is limited by their heavy computational demands and high memory usage. In this work, we propose a new implementation of the raycasting algorithm that exploits the highly parallel architecture of the Cell Broadband Engine processor, with its 9 heterogeneous cores, enabling efficient rendering of irregular data meshes. The computational power of the Cell BE processor demands a different programming model: applications must be rewritten to exploit the processor's full potential, which requires multithreading and vectorised code. In our approach, we tackle this problem by distributing the computation of the rays cast through the visible faces of the volume among the processor cores, and by vectorising the operations of the illumination integral on each core. Experimental results show that we obtain good speedups, reducing total rendering time significantly.
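
The two ideas in the abstract, partitioning rays across cores and vectorising the illumination integral over many rays at once, can be illustrated with a small NumPy sketch. The emission-absorption compositing and the static partition of rays below are generic placeholders; the actual SPE dispatch and data structures of the Cell implementation are not reproduced.

```python
# Rays are split into chunks (one chunk per core) and front-to-back compositing
# of the illumination integral is evaluated in vectorised form per chunk.
import numpy as np

def composite_chunk(colors, alphas):
    """Front-to-back compositing of per-sample colors/opacities.

    colors: (n_rays, n_samples) sample intensities along each ray
    alphas: (n_rays, n_samples) sample opacities along each ray
    """
    out = np.zeros(colors.shape[0])
    transparency = np.ones(colors.shape[0])
    for s in range(colors.shape[1]):               # march along the rays
        out += transparency * alphas[:, s] * colors[:, s]
        transparency *= (1.0 - alphas[:, s])
    return out

rng = np.random.default_rng(0)
n_rays, n_samples, n_cores = 1024, 64, 8
colors = rng.random((n_rays, n_samples))
alphas = rng.random((n_rays, n_samples)) * 0.1

# Static partition of rays across "cores"; on the Cell BE each chunk would be
# handed to an SPE, here we simply loop over the chunks.
chunks = np.array_split(np.arange(n_rays), n_cores)
image = np.concatenate([composite_chunk(colors[c], alphas[c]) for c in chunks])
print(image.shape)  # (1024,)
```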

Relevance:

60.00%

Publisher:

Abstract:

The integration of multiple functionalities into individual nanoelectronic components is increasingly explored as a means to step up computational power or to enable advanced signal processing. Here, we report the fabrication of a coupled nanowire transistor, a device in which two superimposed high-performance nanowire field-effect transistors capable of mutual interaction form a thyristor-like circuit. The structure embeds an internal level of signal processing, showing promise for applications in analogue computation. The device is naturally derived from a single nanowire via a self-aligned fabrication process.

Relevance:

60.00%

Publisher:

Abstract:

Computer Aided Control Engineering involves three parallel streams: Simulation and modelling, Control system design (off-line), and Controller implementation. In industry the bottleneck problem has always been modelling, and this remains the case - that is where control (and other) engineers put most of their technical effort. Although great advances in software tools have been made, the cost of modelling remains very high - too high for some sectors. Object-oriented modelling, enabling truly re-usable models, seems to be the key enabling technology here. Software tools to support control systems design have two aspects to them: aiding and managing the work-flow in particular projects (whether of a single engineer or of a team), and provision of numerical algorithms to support control-theoretic and systems-theoretic analysis and design. The numerical problems associated with linear systems have been largely overcome, so that most problems can be tackled routinely without difficulty - though problems remain with (some) systems of extremely large dimensions. Recent emphasis on control of hybrid and/or constrained systems is leading to the emerging importance of geometric algorithms (ellipsoidal approximation, polytope projection, etc). Constantly increasing computational power is leading to renewed interest in design by optimisation, an example of which is MPC. The explosion of embedded control systems has highlighted the importance of autocode generation, directly from modelling/simulation products to target processors. This is the 'new kid on the block', and again much of the focus of commercial tools is on this part of the control engineer's job. Here the control engineer can no longer ignore computer science (at least, for the time being). © 2006 IEEE.
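
As a small, hedged illustration of the "design by optimisation" theme mentioned above, the sketch below implements a minimal unconstrained receding-horizon (MPC-style) controller for an arbitrary linear model: at each step an input sequence over a short horizon is chosen by least squares and only the first input is applied. The model, horizon, and weights are invented for the example and are unrelated to any specific tool discussed in the paper.

```python
# Minimal receding-horizon sketch: predict states over a horizon as a linear
# function of the inputs, pick the inputs by least squares, apply the first one.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double-integrator-like model
B = np.array([[0.0], [0.1]])
N = 10                                    # prediction horizon

def mpc_step(x):
    # Stack x_1..x_N as X = F x0 + G U, with x_k = A^k x0 + sum_j A^(k-1-j) B u_j.
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((2 * N, N))
    for k in range(N):
        for j in range(k + 1):
            G[2 * k:2 * k + 2, j] = (np.linalg.matrix_power(A, k - j) @ B).ravel()
    # Minimise ||F x + G U||^2 + rho ||U||^2 as an unconstrained least-squares problem.
    rho = 0.01
    U = np.linalg.lstsq(np.vstack([G, np.sqrt(rho) * np.eye(N)]),
                        np.concatenate([-F @ x, np.zeros(N)]), rcond=None)[0]
    return U[0]                           # receding horizon: apply the first input only

x = np.array([1.0, 0.0])
for _ in range(50):
    x = A @ x + B.ravel() * mpc_step(x)
print(np.round(x, 4))                     # state regulated toward the origin
```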

Relevance:

60.00%

Publisher:

Abstract:

The application of Bayes' Theorem to signal processing provides a consistent framework for proceeding from prior knowledge to a posterior inference conditioned on both the prior knowledge and the observed signal data. The first part of the lecture will illustrate how the Bayesian methodology can be applied to a variety of signal processing problems. The second part of the lecture will introduce Markov chain Monte Carlo (MCMC) methods, an effective approach to overcoming many of the analytical and computational problems inherent in statistical inference. Such techniques are at the centre of the rapidly developing area of Bayesian signal processing which, with the continual increase in available computational power, is likely to provide the underlying framework for most signal processing applications.
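
As a toy illustration of the MCMC approach referred to above, the sketch below uses random-walk Metropolis sampling to draw from the posterior over the mean of noisy observations under a Gaussian prior; the data, prior, and proposal scale are invented for the example.

```python
# Random-walk Metropolis: propose a move, accept with probability given by the
# posterior ratio, and collect the visited values as posterior samples.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=50)              # observed samples, sigma known

def log_posterior(mu):
    log_prior = -0.5 * mu**2 / 10.0               # N(0, 10) prior on the mean
    log_like = -0.5 * np.sum((data - mu) ** 2)    # Gaussian likelihood, sigma = 1
    return log_prior + log_like

samples, mu = [], 0.0
for _ in range(5000):
    prop = mu + rng.normal(0, 0.3)                # symmetric random-walk proposal
    if np.log(rng.random()) < log_posterior(prop) - log_posterior(mu):
        mu = prop                                 # accept
    samples.append(mu)

print(np.mean(samples[1000:]))  # posterior mean estimate, close to the true 2.0
```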

Relevance:

60.00%

Publisher:

Abstract:

We present the results of a computational study of the post-processed Galerkin methods put forward by Garcia-Archilla et al. applied to the non-linear von Karman equations governing the dynamic response of a thin cylindrical panel periodically forced by a transverse point load. We spatially discretize the shell using finite differences to produce a large system of ordinary differential equations (ODEs). By analogy with spectral non-linear Galerkin methods we split this large system into a 'slowly' contracting subsystem and a 'quickly' contracting subsystem. We then compare the accuracy and efficiency of (i) ignoring the dynamics of the 'quick' system (analogous to a traditional spectral Galerkin truncation and sometimes referred to as 'subspace dynamics' in the finite element community when applied to numerical eigenvectors), (ii) slaving the dynamics of the quick system to the slow system during numerical integration (analogous to a non-linear Galerkin method), and (iii) ignoring the influence of the dynamics of the quick system on the evolution of the slow system until we require some output, when we 'lift' the variables from the slow system to the quick using the same slaving rule as in (ii). This corresponds to the post-processing of Garcia-Archilla et al. We find that method (iii) produces essentially the same accuracy as method (ii) but requires only the computational power of method (i) and is thus more efficient than either. In contrast with spectral methods, this type of finite-difference technique can be applied to irregularly shaped domains. We feel that post-processing of this form is a valuable method that can be implemented in computational schemes for a wide variety of partial differential equations (PDEs) of practical importance.
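
A toy fast-slow system of ODEs can illustrate the three strategies being compared, though it is far simpler than the von Karman panel equations: (i) dropping the quick variable, (ii) slaving it to the slow variable during integration, and (iii) integrating as in (i) and lifting the quick variable with the same slaving rule only when output is required. The system, slaving rule, and parameters below are invented solely for illustration.

```python
# Compare (i) truncation, (ii) slaving during integration, (iii) lifting at output
# on a toy fast-slow system where the quick variable relaxes to v ~= u^2.
import numpy as np

eps, T, dt = 1e-3, 2.0, 1e-4

def full_rhs(u, v):                     # reference fast-slow system
    return -u + v, -(v - u * u) / eps

def integrate(rhs, y0):
    y = np.array(y0, float)
    for _ in range(round(T / dt)):
        y = y + dt * np.array(rhs(*y))  # forward Euler; dt resolves the fast scale
    return y

u_ref, v_ref = integrate(full_rhs, (0.5, 0.25))

u_i,  = integrate(lambda u: (-u,), (0.5,))            # (i)  quick variable dropped
u_ii, = integrate(lambda u: (-u + u * u,), (0.5,))    # (ii) slaved: v = u^2 inside the RHS
u_iii, v_iii = u_i, u_i * u_i                          # (iii) lift v = u^2 only at output

print("(i)   u error:", abs(u_i - u_ref))
print("(ii)  u error:", abs(u_ii - u_ref))
print("(iii) v recovered:", v_iii, " reference v:", v_ref)
```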

Relevance:

60.00%

Publisher:

Abstract:

We define and construct efficient depth universal and almost size universal quantum circuits. Such circuits can be viewed as general purpose simulators for central classes of quantum circuits and can be used to capture the computational power of the circuit class being simulated. For depth we construct universal circuits whose depth is the same order as the circuits being simulated. For size, there is a log factor blow-up in the universal circuits constructed here. We prove that this construction is nearly optimal. Our results apply to a number of well-studied quantum circuit classes.

Relevance:

60.00%

Publisher:

Abstract:

The technological role of handheld devices is fundamentally changing. Portable computers were traditionally application specific. They were designed and optimised to deliver a specific task. However, it is now commonly acknowledged that future handheld devices need to be multi-functional and need to be capable of executing a range of high-performance applications. This thesis has coined the term pervasive handheld computing systems to refer to this type of mobile device. Portable computers are faced with a number of constraints in trying to meet these objectives. They are physically constrained by their size, their computational power, their memory resources, their power usage, and their networking ability. These constraints challenge pervasive handheld computing systems in achieving their multi-functional and high-performance requirements. This thesis proposes a two-pronged methodology to enable pervasive handheld computing systems meet their future objectives. The methodology is a fusion of two independent and yet complementary concepts. The first step utilises reconfigurable technology to enhance the physical hardware resources within the environment of a handheld device. This approach recognises that reconfigurable computing has the potential to dynamically increase the system functionality and versatility of a handheld device without major loss in performance. The second step of the methodology incorporates agent-based middleware protocols to support handheld devices to effectively manage and utilise these reconfigurable hardware resources within their environment. The thesis asserts the combined characteristics of reconfigurable computing and agent technology can meet the objectives of pervasive handheld computing systems.

Relevance:

60.00%

Publisher:

Abstract:

We address the effects of natural three-qubit interactions on the computational power of one-way quantum computation. A benefit of using more sophisticated entanglement structures is the ability to construct compact and economic simulations of quantum algorithms with limited resources. We show that the features of our study are embodied by suitably prepared optical lattices, where effective three-spin interactions have been theoretically demonstrated. We use this to provide a compact construction for the Toffoli gate. Information flow and two-qubit interactions are also outlined, together with a brief analysis of relevant sources of imperfection.
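
For reference, the Toffoli gate mentioned above is the three-qubit gate whose 8x8 unitary flips the target qubit only when both control qubits are 1; the snippet below just writes out that standard definition and checks it on the computational basis states. It is not the paper's measurement-based, three-spin construction.

```python
# Standard Toffoli (CCNOT) unitary: identity except that |110> and |111> swap.
import numpy as np

toffoli = np.eye(8)
toffoli[[6, 7]] = toffoli[[7, 6]]         # swap the |110> and |111> rows

for i in range(8):
    basis = np.zeros(8)
    basis[i] = 1.0
    out = int(np.argmax(toffoli @ basis))
    print(f"|{i:03b}> -> |{out:03b}>")    # only inputs 110 and 111 change
```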

Relevance:

60.00%

Publisher:

Abstract:

With the emergence of multicore and manycore processors, engineers must design and develop software in drastically new ways to benefit from the computational power of all cores. However, developing parallel software is much harder than sequential software because parallelism can't be abstracted away easily. Authors Hans Vandierendonck and Tom Mens provide an overview of technologies and tools to support developers in this complex and error-prone task. © 2012 IEEE.

Relevance:

60.00%

Publisher:

Abstract:

The urinary catheter is a thin plastic tube that has been designed to empty the bladder artificially, effortlessly, and with minimum discomfort. The current CH14 male catheter design was examined with a view to optimizing the mass flow rate. The literature imposed constraints on the analysis of the urinary catheter to ensure that the new design achieved a compromise between optimal flow, patient comfort, and everyday practicality from manufacture to use. As a result, a total of six design characteristics were examined. The input variables in question were the length and width of eyelets 1 and 2 (four variables), the distance between the eyelets, and the angle of rotation between the eyelets. Due to the high number of possible input combinations, a structured approach to the analysis of data was necessary. A combination of computational fluid dynamics (CFD) and design of experiments (DOE) has been used to evaluate the "optimal configuration." The use of CFD coupled with DOE is a novel concept, which harnesses the computational power of CFD in the most efficient manner for prediction of the mass flow rate in the catheter. Copyright © 2009 by ASME.
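
A hedged sketch of the DOE side of this approach is shown below: a two-level full factorial over the six input variables named in the abstract, with main effects estimated from the responses. The response function is a stand-in for the CFD-predicted mass flow rate, and the factor trends it encodes are invented, since the actual simulations cannot be reproduced here.

```python
# Two-level full factorial design over six coded factors, with main effects
# estimated from a placeholder response standing in for the CFD mass flow rate.
from itertools import product

factors = ["len1", "wid1", "len2", "wid2", "spacing", "rotation"]

def fake_cfd_response(levels):
    # Placeholder trend: larger eyelets help, rotation hurts (invented numbers).
    return sum(levels[:4]) - 0.5 * levels[5] + 0.1 * levels[4]

runs = list(product([-1, +1], repeat=len(factors)))     # 2^6 = 64 runs
responses = [fake_cfd_response(r) for r in runs]

for i, name in enumerate(factors):
    high = sum(y for r, y in zip(runs, responses) if r[i] == +1)
    low = sum(y for r, y in zip(runs, responses) if r[i] == -1)
    effect = (high - low) / (len(runs) / 2)             # main effect of the factor
    print(f"{name:9s} main effect: {effect:+.2f}")
```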

Relevance:

60.00%

Publisher:

Abstract:

Solid particle erosion is a major concern in the engineering industry, particularly where transport of slurry flow is involved. Such flow regimes are characteristic of those in alumina refinement plants. The entrainment of particulate matter, for example sand, in the Bayer liquor can cause severe erosion in pipe fittings, especially in those which redirect the flow. The considerable costs involved in the maintenance and replacement of these eroded components led to an interest in research into erosion prediction by numerical methods at Rusal Aughinish alumina refinery, Limerick, Ireland, and the University of Limerick. The first stage of this study focused on the use of computational fluid dynamics (CFD) to simulate solid particle erosion in elbows. Subsequently an analysis of the factors that affect erosion of elbows was performed using design of experiments (DOE) techniques. Combining CFD with DOE harnesses the computational power of CFD in the most efficient manner for prediction of elbow erosion. An analysis of the factors that affect the erosion of elbows was undertaken with the intention of producing an erosion prediction model. © 2009 Taylor & Francis.

Relevance:

60.00%

Publisher:

Abstract:

In intelligent video surveillance systems, scalability (of the number of simultaneous video streams) is important. Two key factors which hinder scalability are the time spent in decompressing the input video streams, and the limited computational power of the processor. This paper demonstrates how a combination of algorithmic and hardware techniques can overcome these limitations, and significantly increase the number of simultaneous streams. The techniques used are processing in the compressed domain, and exploitation of the multicore and vector processing capability of modern processors. The paper presents a system which performs background modeling, using a Mixture of Gaussians approach. This is an important first step in the segmentation of moving targets. The paper explores the effects of reducing the number of coefficients in the compressed domain, in terms of throughput speed and quality of the background modeling. The speedups achieved by exploiting compressed domain processing, multicore and vector processing are explored individually. Experiments show that a combination of all these techniques can give a speedup of 170 times on a single CPU compared to a purely serial, spatial domain implementation, with a slight gain in quality.
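
The background-modelling step can be sketched as a simplified per-pixel Mixture-of-Gaussians update in the style of Stauffer and Grimson, vectorised over all pixels with NumPy. The parameters are typical textbook values rather than the paper's, and the compressed-domain and multicore/vector-processing aspects are not reproduced.

```python
# Simplified per-pixel Mixture-of-Gaussians background model: each pixel keeps K
# Gaussians; a matching component is updated towards the new value, and pixels
# with no matching component are flagged as foreground.
import numpy as np

K, LR, MATCH = 3, 0.05, 2.5          # mixtures per pixel, learning rate, sigmas

def init_model(frame):
    h, w = frame.shape
    means = np.repeat(frame[None], K, axis=0).astype(float)   # (K, h, w)
    variances = np.full((K, h, w), 15.0 ** 2)
    weights = np.full((K, h, w), 1.0 / K)
    return means, variances, weights

def update(frame, model):
    means, variances, weights = model
    dist2 = (frame - means) ** 2
    matched = dist2 < MATCH ** 2 * variances                  # (K, h, w) booleans
    best = np.argmax(matched, axis=0)                         # first matching mixture
    any_match = matched.any(axis=0)

    # Matched component moves toward the pixel value; its weight is reinforced.
    hit = np.zeros_like(weights, dtype=bool)
    np.put_along_axis(hit, best[None], any_match[None], axis=0)
    weights = (1 - LR) * weights + LR * hit
    means = np.where(hit, (1 - LR) * means + LR * frame, means)
    variances = np.where(hit, (1 - LR) * variances + LR * dist2, variances)

    foreground = ~any_match                                   # unexplained pixels
    return (means, variances, weights / weights.sum(axis=0)), foreground

rng = np.random.default_rng(0)
frames = rng.normal(100, 5, size=(50, 48, 64))                # synthetic static scene
model = init_model(frames[0])
for f in frames:
    model, fg = update(f, model)
print(fg.mean())    # fraction of pixels flagged as foreground (should be small)
```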