Abstract:
In the 1990s the Message Passing Interface Forum defined MPI bindings for Fortran, C, and C++. With the success of MPI, these relatively conservative languages have continued to dominate in the parallel computing community. There are compelling arguments in favour of more modern languages like Java, including portability, better runtime error checking, modularity, and multi-threading. But these arguments have not converted many HPC programmers, perhaps owing to the scarcity of full-scale scientific Java codes and the lack of evidence for performance competitive with C or Fortran. This paper tries to redress this situation by porting two scientific applications to Java. Both applications are parallelized using our thread-safe Java messaging system, MPJ Express. The first application is the Gadget-2 code, a massively parallel structure-formation code for cosmological simulations. The second application uses the finite-difference time-domain (FDTD) method for simulations in computational electromagnetics. We evaluate and compare the performance of the Java and C versions of these two scientific applications, and demonstrate that the Java codes can achieve performance comparable with legacy applications written in conventional HPC languages. Copyright © 2009 John Wiley & Sons, Ltd.
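The second application's numerical method can be sketched compactly. Below is a minimal, serial 1-D FDTD (Yee) update loop in normalized units; it illustrates the method named in the abstract, not the paper's parallel MPJ Express implementation, and the grid size, source position, and Courant number are illustrative.

```python
import math

# Minimal 1-D FDTD (Yee) update loop in normalized units: a sketch of the
# method named in the abstract, not the paper's parallel MPJ Express code.
def fdtd_1d(nx=200, nt=300, src=100, courant=0.5):
    ez = [0.0] * nx           # electric field samples on integer grid points
    hy = [0.0] * (nx - 1)     # magnetic field, staggered half a cell
    for t in range(nt):
        for i in range(nx - 1):                 # H update from the curl of E
            hy[i] += courant * (ez[i + 1] - ez[i])
        for i in range(1, nx - 1):              # E update from the curl of H
            ez[i] += courant * (hy[i] - hy[i - 1])
        ez[src] += math.exp(-((t - 30.0) ** 2) / 100.0)  # soft Gaussian source
    return ez
```

With a Courant number of 0.5 the 1-D scheme is stable, so the field values stay bounded as the injected pulse propagates and reflects off the (perfectly reflecting) boundaries.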
Abstract:
As consumers demand more functionality from their electronic devices and manufacturers meet that demand, electrical power and clock requirements tend to increase; fortunately, reassessing the system architecture can counter these increases. To maintain low clock rates and thereby reduce electrical power, this paper presents a parallel convolutional coder for the transmit side of many wireless consumer devices. The coder accepts parallel data input and directly computes punctured convolutional codes without the need for a separate puncturing operation, while the coded bits are available at the output of the coder in parallel fashion. Because the computation is performed in parallel, the coder can be clocked seven times slower than a conventional shift-register based convolutional coder (using the DVB 7/8 rate). The presented coder is directly relevant to the design of modern low-power consumer devices.
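The encoding such a coder computes can be modelled serially. The sketch below assumes the DVB rate-1/2 mother code (constraint length 7, generators 171/133 octal) and an assumed DVB 7/8 puncturing pattern; it is a behavioural reference only, not the paper's parallel hardware design.

```python
# Serial reference model of a punctured convolutional coder. The mother code
# is the DVB rate-1/2, K = 7 code (generators 171/133 octal); PX/PY is an
# assumed DVB 7/8 puncturing pattern, giving 8 coded bits per 7 input bits.
G1, G2 = 0o171, 0o133          # generator polynomials
PX = [1, 0, 0, 0, 1, 0, 1]     # assumed puncturing pattern, X branch
PY = [1, 1, 1, 1, 0, 1, 0]     # assumed puncturing pattern, Y branch

def parity(x):
    p = 0
    while x:
        p ^= x & 1
        x >>= 1
    return p

def encode(bits):
    state, out = 0, []
    for k, b in enumerate(bits):
        state = ((state << 1) | b) & 0x7F        # 7-bit shift register
        x, y = parity(state & G1), parity(state & G2)
        if PX[k % 7]: out.append(x)              # keep or drop each coded bit
        if PY[k % 7]: out.append(y)
    return out
```

The paper's contribution is computing these punctured outputs directly and in parallel, so the kept bits never pass through a separate rate-1/2 stage followed by a puncturing stage.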
A low clock frequency FFT core implementation for multiband full-rate ultra-wideband (UWB) receivers
Abstract:
This paper discusses the design, implementation and synthesis of an FFT module that has been specifically optimized for use in the OFDM-based Multiband UWB system, although the work is generally applicable to many other OFDM-based receiver systems. Previous work has detailed the requirements for the receiver FFT module within the Multiband UWB OFDM-based system, and this paper draws on those requirements, coupled with modern digital architecture principles and low-power design criteria, to converge on our optimized solution, particularly aimed at a low clock rate implementation. The FFT design obtained in this paper is also applicable to the implementation of the transmitter IFFT module, so only one FFT module is needed in the device for half-duplex operation. The results from this paper enable the baseband designers of the 200 Mbit/s variant of Multiband UWB systems (and indeed other OFDM-based receivers) using System-on-Chip (SoC), FPGA and ASIC technology to create cost-effective and low-power consumer electronics products for a very competitive market.
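For reference, the transform such a module computes is the standard radix-2 FFT (the Multiband UWB physical layer uses a 128-point transform). The recursive sketch below is a behavioural model only; the paper's contribution is the low-clock-rate hardware architecture, not the algorithm itself.

```python
import cmath

# Textbook recursive radix-2 decimation-in-time FFT: a behavioural reference
# for the transform, not the optimized hardware architecture in the paper.
def fft(x):
    n = len(x)                      # n must be a power of two
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out
```

A hardware implementation unrolls and pipelines these butterfly stages; processing several butterflies per cycle is what allows the clock rate to drop while sustaining the required throughput.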
Abstract:
The work reported in this paper is motivated by the need to apply autonomic computing concepts to parallel computing systems. Advancing on prior work based on intelligent cores [36], a swarm-array computing approach, this paper focuses on ‘intelligent agents’, another swarm-array computing approach, in which the task to be executed on a parallel computing core is considered as a swarm of autonomous agents. A task is carried to a computing core by carrier agents and is seamlessly transferred between cores in the event of a predicted failure, thereby achieving the self-ware objectives of autonomic computing. The feasibility of the proposed swarm-array computing approach is validated on a multi-agent simulator.
Abstract:
Every winter, the high-latitude oceans are struck by severe storms that are considerably smaller than the weather-dominating synoptic depressions [1]. Accompanied by strong winds and heavy precipitation, these often explosively developing mesoscale cyclones, termed polar lows [1], constitute a threat to offshore activities such as shipping or oil and gas exploitation. Yet owing to their small scale, polar lows are poorly represented in the observational and global reanalysis data [2] often used for climatological investigations of atmospheric features, and cannot be assessed in coarse-resolution global simulations of possible future climates. Here we show that in a future anthropogenically warmed climate, the frequency of polar lows is projected to decline. We used a series of regional climate model simulations to downscale a set of global climate change scenarios [3] from the Intergovernmental Panel on Climate Change. In this process, we first simulated the formation of polar low systems in the North Atlantic and then counted the individual cases. A previous study [4] using NCEP/NCAR reanalysis data [5] revealed that polar low frequency from 1948 to 2005 did not change systematically. Now, in projections for the end of the twenty-first century, we find a significantly lower number of polar lows and a northward shift of their mean genesis region in response to elevated atmospheric greenhouse gas concentrations. This change can be related to changes in the North Atlantic sea surface temperature and mid-troposphere temperature; the latter is found to rise faster than the former, so that the resulting stability is increased, hindering the formation or intensification of polar lows. Our results provide a rare example of a climate change effect in which a type of extreme weather is likely to decrease, rather than increase.
Abstract:
In this paper we are mainly concerned with the development of efficient computer models capable of accurately predicting the propagation of low-to-middle frequency sound in the sea, in axially symmetric (2D) and in fully 3D environments. The major physical features of the problem, i.e. a variable bottom topography, elastic properties of the subbottom structure, volume attenuation and other range inhomogeneities are efficiently treated. The computer models presented are based on normal mode solutions of the Helmholtz equation on the one hand, and on various types of numerical schemes for parabolic approximations of the Helmholtz equation on the other. A new coupled mode code is introduced to model sound propagation in range-dependent ocean environments with variable bottom topography, where the effects of an elastic bottom, of volume attenuation, surface and bottom roughness are taken into account. New computer models based on finite difference and finite element techniques for the numerical solution of parabolic approximations are also presented. They include an efficient modeling of the bottom influence via impedance boundary conditions, they cover wide angle propagation, elastic bottom effects, variable bottom topography and reverberation effects. All the models are validated on several benchmark problems and versus experimental data. Results thus obtained were compared with analogous results from standard codes in the literature.
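As a minimal illustration of the normal-mode machinery such codes build on, the sketch below computes the propagating-mode horizontal wavenumbers for the ideal isovelocity waveguide (pressure-release surface, rigid bottom), the simplest case the coupled-mode approach generalizes. The frequency, depth and sound speed are illustrative numbers, not values from the paper.

```python
import math

# Modal wavenumbers for the ideal isovelocity waveguide (pressure-release
# surface, rigid bottom): the textbook special case of the normal-mode
# machinery that range-dependent coupled-mode codes generalize.
def mode_wavenumbers(freq_hz, depth_m, c=1500.0):
    k = 2 * math.pi * freq_hz / c               # medium wavenumber
    kr = []
    m = 1
    while True:
        kz = (m - 0.5) * math.pi / depth_m      # vertical wavenumber of mode m
        if kz >= k:                             # mode is cut off: stop
            return kr
        kr.append(math.sqrt(k * k - kz * kz))   # horizontal wavenumber
        m += 1
```

The field is then a sum over these modes; range-dependent environments couple energy between them, which is precisely what the paper's coupled-mode code models.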
Abstract:
We propose a bridge between two important parallel programming paradigms: data parallelism and communicating sequential processes (CSP). Data parallel pipelined architectures obtained with the Alpha language can be embedded in a control intensive application expressed in CSP-based Handel formalism. The interface is formally defined from the semantics of the languages Alpha and Handel. This work will ease the design of compute intensive applications on FPGAs.
Abstract:
This paper is concerned with the uniformization of a system of affine recurrence equations. This transformation is used in the design (or compilation) of highly parallel embedded systems (VLSI systolic arrays, signal processing filters, etc.). In this paper, we present and implement an automatic system to achieve uniformization of systems of affine recurrence equations. We unify the results from many earlier papers, develop some theoretical extensions, and then propose effective uniformization algorithms. Our results can be used in any high-level synthesis tool based on a polyhedral representation of nested loop computations.
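A toy example of what uniformization does: the affine "broadcast" dependence Y[i, j] = X[j] reads a point arbitrarily far away in the i direction, whereas pipelining replaces it with the uniform dependence Y[i, j] = Y[i-1, j], which only ever reads a fixed-distance neighbour. The sketch below (an illustration, not the paper's algorithm) checks that the two variants compute the same values.

```python
# Toy uniformization example: replace the affine broadcast dependence
# Y[i, j] = X[j] with the uniform recurrence Y[i, j] = Y[i-1, j].
def broadcast_affine(X, n):
    # Every row reads X directly: a long-range, affine dependence.
    return [[X[j] for j in range(len(X))] for _ in range(n)]

def broadcast_uniform(X, n):
    # Row 0 is initialised from X; every later row copies its neighbour,
    # a uniform dependence with the constant vector (-1, 0).
    Y = [list(X)]
    for i in range(1, n):
        Y.append([Y[i - 1][j] for j in range(len(X))])
    return Y
```

Uniform dependences like this one map directly onto nearest-neighbour wires in a systolic array, which is why the transformation matters for VLSI synthesis.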
Abstract:
We compare the variability of the Atlantic meridional overturning circulation (AMOC) as simulated by the coupled climate models of the RAPID project, which cover a wide range of resolution and complexity, and observed by the RAPID/MOCHA array at about 26° N. We analyse variability on a range of timescales. In models of all resolutions there is substantial variability on timescales of a few days; in most AOGCMs the amplitude of the variability is somewhat larger than that observed by the RAPID array, while the amplitude of the simulated annual cycle is similar to observations. A dynamical decomposition shows that in the models, as in observations, the AMOC is predominantly geostrophic (driven by pressure and sea-level gradients), with both geostrophic and Ekman contributions to variability, the latter being exaggerated and the former underrepresented in models. Other ageostrophic terms, neglected in the observational estimate, are small but not negligible. In many RAPID models and in models of the Coupled Model Intercomparison Project Phase 3 (CMIP3), interannual variability of the maximum of the AMOC wherever it lies, which is a commonly used model index, is similar to interannual variability in the AMOC at 26° N. Annual volume and heat transport time series at the same latitude are well correlated within 15–45° N, indicating the climatic importance of the AMOC. In the RAPID and CMIP3 models, we show that the AMOC is correlated over considerable distances in latitude, but not over the whole extent of the North Atlantic; consequently interannual variability of the AMOC at 50° N is not well correlated with that at 26° N.
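The Ekman contribution in such a decomposition is a simple wind-driven term. The sketch below evaluates the zonally integrated Ekman transport V = -τ_x / (ρ f) at roughly the RAPID latitude; the wind stress and basin width are illustrative round numbers, not RAPID data.

```python
import math

# Back-of-envelope Ekman contribution to meridional transport, as in a
# RAPID-style dynamical decomposition. Inputs here are illustrative, not data.
RHO = 1025.0                      # seawater density, kg m^-3
OMEGA = 7.2921e-5                 # Earth rotation rate, s^-1

def ekman_transport_sv(tau_x, width_m, lat_deg):
    f = 2 * OMEGA * math.sin(math.radians(lat_deg))   # Coriolis parameter
    v_per_m = -tau_x / (RHO * f)   # meridional transport per metre of longitude
    return v_per_m * width_m / 1e6  # basin-integrated transport in Sverdrups
```

With an easterly trade-wind stress of about -0.05 N m^-2 over a basin roughly 6500 km wide, this gives a northward transport of a few Sverdrups, the right order of magnitude for the Ekman term in the decomposition.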
Abstract:
Purpose – The purpose of this paper is to consider Turing's two tests for machine intelligence: the parallel-paired, three-participant game presented in his 1950 paper, and the “jury-service” one-to-one measure described two years later in a radio broadcast. Both versions were instantiated in practical Turing tests during the 18th Loebner Prize for artificial intelligence hosted at the University of Reading, UK, in October 2008. This involved jury-service tests in the preliminary phase and parallel-paired tests in the final phase. Design/methodology/approach – Almost 100 test results from the final have been evaluated, and this paper reports some intriguing nuances which arose as a result of the unique contest. Findings – In the 2008 competition, Turing's 30 per cent pass rate was not achieved by any machine in the parallel-paired tests, but Turing's modified prediction of “at least in a hundred years time” is remembered. Originality/value – The paper presents actual responses from “modern Elizas” to human interrogators during contest dialogues that show considerable improvement in artificial conversational entities (ACE). Unlike their ancestor – Weizenbaum's natural language understanding system – ACE are now able to recall, share information and disclose personal interests.
Abstract:
Recent research in multi-agent systems incorporates fault tolerance concepts. However, that research does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm-array computing approach, namely ‘Intelligent Agents’. In the approach considered, a task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to complete the task successfully. The agents hence contribute towards fault tolerance and towards building reliable systems. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator, and by implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
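The parallel reduction mentioned at the end follows a standard binary-tree pattern, sketched below as a sequential simulation; a cluster implementation would exchange the partial results between ranks with MPI point-to-point messages, which this sketch does not attempt.

```python
# Sequential simulation of a binary-tree parallel reduction: at each level,
# half the remaining "ranks" fold a partner's partial result into their own,
# so the root holds the answer after ceil(log2(n)) levels.
def tree_reduce(values, op=lambda a, b: a + b):
    vals = dict(enumerate(values))          # rank -> local value
    n, step = len(values), 1
    while step < n:
        for rank in range(0, n, 2 * step):  # receivers at this level
            partner = rank + step
            if partner < n:                 # partner "sends" its partial result
                vals[rank] = op(vals[rank], vals[partner])
        step *= 2
    return vals[0]                          # rank 0 holds the reduced value
```

In the agent setting, each hop of the tree is a message between the agents holding the two partial results, so a predicted core failure only requires migrating one agent's local value.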
Abstract:
We consider scattering of a time harmonic incident plane wave by a convex polygon with piecewise constant impedance boundary conditions. Standard finite or boundary element methods require the number of degrees of freedom to grow at least linearly with respect to the frequency of the incident wave in order to maintain accuracy. Extending earlier work by Chandler-Wilde and Langdon for the sound soft problem, we propose a novel Galerkin boundary element method, with the approximation space consisting of the products of plane waves with piecewise polynomials supported on a graded mesh with smaller elements closer to the corners of the polygon. Theoretical analysis and numerical results suggest that the number of degrees of freedom required to achieve a prescribed level of accuracy grows only logarithmically with respect to the frequency of the incident wave.
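One concrete ingredient of the approximation space is the graded mesh itself: mesh points accumulate algebraically toward a corner so that small elements resolve the corner singularity while large elements suffice elsewhere. A minimal sketch, with an illustrative grading exponent (the analysis ties the exponent to the polynomial degree; the value here is only an example):

```python
# Graded mesh on [0, length] with points clustered toward the corner at 0:
# x_j = length * (j / n) ** q. Larger q concentrates more points at the corner.
def graded_mesh(n, length=1.0, q=3.0):
    return [length * (j / n) ** q for j in range(n + 1)]
```

Each basis function of the hybrid space is then a plane wave (oscillating at the incident frequency) multiplied by a piecewise polynomial on such a mesh, which is why the element count need only grow logarithmically with frequency.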
Abstract:
We consider the scattering of a time-harmonic acoustic incident plane wave by a sound soft convex curvilinear polygon with Lipschitz boundary. For standard boundary or finite element methods, with a piecewise polynomial approximation space, the number of degrees of freedom required to achieve a prescribed level of accuracy grows at least linearly with respect to the frequency of the incident wave. Here we propose a novel Galerkin boundary element method with a hybrid approximation space, consisting of the products of plane wave basis functions with piecewise polynomials supported on several overlapping meshes; a uniform mesh on illuminated sides, and graded meshes refined towards the corners of the polygon on illuminated and shadow sides. Numerical experiments suggest that the number of degrees of freedom required to achieve a prescribed level of accuracy need only grow logarithmically as the frequency of the incident wave increases.
Abstract:
A connection between a fuzzy neural network model and the mixture of experts network (MEN) modelling approach is established. Based on this linkage, two new neuro-fuzzy MEN construction algorithms are proposed to overcome the curse of dimensionality that is inherent in the majority of associative memory networks and other rule-based systems. The first construction algorithm employs a function selection manager module in an MEN system. The second construction algorithm is based on a new parallel learning algorithm in which each model rule is trained independently, and the parameter convergence property of the new learning method is established. As with the first approach, an expert selection criterion is utilised in this algorithm. The two construction methods are equally effective in overcoming the curse of dimensionality by reducing the dimensionality of the regression vector, but the latter has the additional computational advantage of parallel processing. The proposed algorithms are analysed for effectiveness, followed by numerical examples that illustrate their efficacy for some difficult data-based modelling problems.
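The mixture-of-experts prediction step can be sketched generically: a softmax gate weights the outputs of local experts. The experts and gate parameters below are trivial stand-ins; the paper's contribution is how the neuro-fuzzy construction selects and trains them rule by rule.

```python
import math

# Generic mixture-of-experts prediction: a softmax gate assigns each input a
# weight per expert, and the output is the gate-weighted sum of expert outputs.
# The linear gate and the expert callables here are illustrative stand-ins.
def moe_predict(x, experts, gate_params):
    scores = [sum(w * xi for w, xi in zip(ws, x)) for ws in gate_params]
    m = max(scores)
    e = [math.exp(s - m) for s in scores]       # numerically stable softmax
    g = [v / sum(e) for v in e]                 # gating weights, sum to 1
    return sum(gk * fk(x) for gk, fk in zip(g, experts))
```

Because each expert sees only the region its gate favours, each one can work with a low-dimensional local regression vector, which is the mechanism behind the dimensionality reduction the abstract describes.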