947 results for Algorithmic skeleton


Relevance:

10.00%

Publisher:

Abstract:

We propose a data-flow-based run-time system as an efficient tool for supporting the execution of parallel code on heterogeneous architectures hosting both multicore CPUs and GPUs. We discuss how the proposed run-time system may be the target both of structured parallel applications developed using algorithmic skeletons/parallel design patterns and of more "domain specific" programming models. Experimental results demonstrating the feasibility of the approach are presented. © 2012 World Scientific Publishing Company.
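
As a rough illustration of the kind of code such a run-time system would sit underneath (not the authors' system; all names and the CPU/GPU split policy below are invented), the Python sketch shows a map-style skeleton whose tasks are divided between a CPU kernel and a placeholder standing in for an offloaded GPU kernel:

# Minimal sketch of a map skeleton dispatching work to CPU or "GPU" workers.
# gpu_kernel is only a stand-in for a real offloaded kernel.
from concurrent.futures import ThreadPoolExecutor

def cpu_kernel(x):
    return x * x          # plain CPU implementation of the task

def gpu_kernel(x):
    return x * x          # placeholder: would invoke an offloaded GPU kernel

def heterogeneous_map(data, gpu_fraction=0.5):
    """Apply the task to every item, sending a fraction of the items to the 'GPU'."""
    split = int(len(data) * gpu_fraction)
    gpu_part, cpu_part = data[:split], data[split:]
    with ThreadPoolExecutor() as pool:
        gpu_results = list(pool.map(gpu_kernel, gpu_part))
        cpu_results = list(pool.map(cpu_kernel, cpu_part))
    return gpu_results + cpu_results

if __name__ == "__main__":
    print(heterogeneous_map(list(range(8))))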

Relevance:

10.00%

Publisher:

Abstract:

The emergence of programmable logic devices as processing platforms for digital signal processing applications poses challenges concerning rapid implementation and high-level optimization of algorithms on these platforms. This paper describes Abhainn, a rapid implementation methodology and toolsuite for translating an algorithmic expression of the system into a working implementation on a heterogeneous multiprocessor/field programmable gate array platform, or a standalone system-on-programmable-chip solution. Two particular focuses for Abhainn are the automated but configurable realisation of inter-processor communication fabrics, and the establishment of novel dedicated hardware component design methodologies allowing algorithm-level transformation for system optimization. This paper outlines the approaches employed in both these areas.

Relevance:

10.00%

Publisher:

Abstract:

A simple logic of conditional preferences is defined, comprising a language that allows the compact representation of certain kinds of conditional preference statements, a semantics and a proof theory. CP-nets and TCP-nets can be mapped into this logic, and the semantics and proof theory generalise those of CP-nets and TCP-nets. The system can also express preferences of a lexicographic kind. The paper derives various sufficient conditions for a set of conditional preferences to be consistent, along with algorithmic techniques for checking such conditions and hence confirming consistency. These techniques can also be used for totally ordering outcomes in a way that is consistent with the set of preferences, and they are further developed to give an approach to the problem of constrained optimisation for conditional preferences.
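
As a toy illustration of the last point only (not the paper's logic or proof theory), unconditional preference statements over concrete outcomes induce a directed "better-than" graph; the statements are consistent only if that graph is acyclic, and a topological sort then yields a total order of outcomes consistent with them:

# Toy consistency check over concrete outcomes.
# Each statement (a, b) reads "outcome a is preferred to outcome b".
from graphlib import TopologicalSorter, CycleError

def consistent_total_order(preferences):
    """Return a total order consistent with the preferences, or None if they cycle."""
    graph = {}
    for better, worse in preferences:
        graph.setdefault(better, set()).add(worse)
        graph.setdefault(worse, set())
    try:
        # graphlib orders predecessors first (worse before better),
        # so reverse to list the most preferred outcome first.
        return list(reversed(list(TopologicalSorter(graph).static_order())))
    except CycleError:
        return None

print(consistent_total_order([("fish", "meat"), ("meat", "tofu")]))  # ['fish', 'meat', 'tofu']
print(consistent_total_order([("a", "b"), ("b", "a")]))              # None: inconsistent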

Relevance:

10.00%

Publisher:

Abstract:

Biodiversity may be seen as a scientific measure of the complexity of a biological system, implying an information basis. Complexity cannot be directly valued, so economists have tried to define the services it provides, though often just valuing the services of 'key' species. Here we provide a new definition of biodiversity as a measure of functional information, arguing that complexity embodies meaningful information as Gregory Bateson defined it. We argue that functional information content (FIC) is the potentially valuable component of total (algorithmic) information content (AIC), as it alone determines biological fitness and supports ecosystem services. Inspired by recent extensions to the Noah's Ark problem, we show how FIC/AIC can be calculated to measure the degree of substitutability within an ecological community. Establishing substitutability is an essential foundation for valuation. From it, we derive a way to rank whole communities by Indirect Use Value, through quantifying the relation between system complexity and the production rate of ecosystem services. Understanding biodiversity as information evidently serves as a practical interface between economics and ecological science. © 2012 Elsevier B.V.

Relevance:

10.00%

Publisher:

Abstract:

The initial part of this paper reviews the early challenges (c. 1980) in achieving real-time silicon implementations of DSP computations. In particular, it discusses research on application-specific architectures, including bit-level systolic circuits that led to important advances in achieving the DSP performance levels then required. These were many orders of magnitude greater than those achievable using programmable (including early DSP) processors, and were demonstrated through the design of commercial digital correlator and digital filter chips. As is discussed, an important challenge was the application of these concepts to recursive computations as occur, for example, in Infinite Impulse Response (IIR) filters. An important breakthrough was to show how fine-grained pipelining can be used if arithmetic is performed most significant bit (msb) first. This can be achieved using redundant number systems, including carry-save arithmetic. This research and its practical benefits were again demonstrated through a number of novel IIR filter chip designs which, at the time, exhibited performance much greater than previous solutions. The architectural insights gained, coupled with the regular nature of many DSP and video processing computations, also provided the foundation for new methods for the rapid design and synthesis of complex DSP System-on-Chip (SoC) Intellectual Property (IP) cores. This included the creation of a wide portfolio of commercial SoC video compression cores (MPEG2, MPEG4, H.264) for very high performance applications ranging from cell phones to High Definition TV (HDTV). The work provided the foundation for systematic methodologies, tools and design flows including high-level design optimizations based on "algorithmic engineering", and also led to the creation of the Abhainn tool environment for the design of complex heterogeneous DSP platforms comprising processors and multiple FPGAs. The paper concludes with a discussion of the problems faced by designers in developing complex DSP systems using current SoC technology. © 2007 Springer Science+Business Media, LLC.
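
The carry-save idea mentioned above can be illustrated numerically: three operands are reduced to a redundant (sum, carry) pair with no carry propagation, which is what permits fine-grained, msb-first pipelining in hardware. The snippet below only checks the arithmetic identity; it is not a circuit description.

# Carry-save addition: reduce three operands to a redundant (sum, carry) pair
# without propagating carries; a + b + c == s + carry always holds.
def carry_save_add(a, b, c):
    s = a ^ b ^ c                               # bitwise sum, ignoring carries
    carry = ((a & b) | (b & c) | (a & c)) << 1  # carries, deferred one bit position
    return s, carry

a, b, c = 0b1011, 0b0110, 0b1110
s, carry = carry_save_add(a, b, c)
assert s + carry == a + b + c
print(bin(s), bin(carry), s + carry)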

Relevance:

10.00%

Publisher:

Abstract:

Details are presented of the IRIS synthesis system for high-performance digital signal processing. This tool allows non-specialists to automatically derive VLSI circuit architectures from high-level, algorithmic representations, and provides a quick route to silicon implementation. The applicability of the system is demonstrated using the design example of a one-dimensional Discrete Cosine Transform circuit.
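
For context, the kind of high-level algorithmic representation such a synthesis tool starts from is essentially the direct-form (unnormalised) DCT-II below; this is a reference model with an illustrative input, not anything produced by or fed to IRIS itself:

# Direct-form 1-D DCT-II: the algorithmic reference model a synthesis tool
# would map onto a circuit architecture.
import math

def dct_1d(x):
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

print([round(v, 3) for v in dct_1d([1.0, 2.0, 3.0, 4.0])])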

Relevance:

10.00%

Publisher:

Abstract:

Autonomic management can be used to improve the QoS provided by parallel/distributed applications. We discuss behavioural skeletons introduced in earlier work: rather than relying on programmer ability to design “from scratch” efficient autonomic policies, we encapsulate general autonomic controller features into algorithmic skeletons. We then leave to the programmer only the task of specifying the parameters needed to specialise the skeletons to the needs of the particular application at hand. As a result, the programmer can rapidly prototype and tune distributed/parallel applications with non-trivial autonomic management capabilities. We discuss how behavioural skeletons have been implemented in the framework of GCM (the Grid Component Model developed within the CoreGRID NoE and currently being implemented within the GridCOMP STREP project). We present results evaluating the overhead introduced by autonomic management activities as well as the overall behaviour of the skeletons. We also present results achieved with a long-running application subject to autonomic management and dynamically adapting to changing features of the target architecture.
Overall the results demonstrate both the feasibility of implementing autonomic control via behavioural skeletons and the effectiveness of our sample behavioural skeletons in managing the “functional replication” pattern(s).
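
Very roughly, a behavioural skeleton can be pictured as a parallelism pattern bundled with a feedback loop. The generic sketch below (plain Python, not GCM code; the task and contract values are made up) shows a farm whose manager grows or shrinks the worker pool against a user-supplied service-time contract, that contract being the only thing the programmer has to specify:

# Generic sketch of a "behavioural skeleton": a task farm plus an autonomic
# manager tuning the parallelism degree against a service-time contract.
import time
from concurrent.futures import ThreadPoolExecutor

def run_farm(tasks, worker_fn, target_batch_time, max_workers=8):
    nworkers, batch_size, results = 2, 16, []
    for start in range(0, len(tasks), batch_size):
        batch = tasks[start:start + batch_size]
        t0 = time.time()
        with ThreadPoolExecutor(max_workers=nworkers) as pool:
            results.extend(pool.map(worker_fn, batch))
        elapsed = time.time() - t0
        # Autonomic policy: add workers if the contract is violated, shed them if slack.
        if elapsed > target_batch_time and nworkers < max_workers:
            nworkers += 1
        elif elapsed < target_batch_time / 2 and nworkers > 1:
            nworkers -= 1
    return results

def worker(x):
    time.sleep(0.01)   # stand-in for real per-task work
    return x + 1

print(len(run_farm(list(range(64)), worker, target_batch_time=0.05)))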

Relevance:

10.00%

Publisher:

Abstract:

The range of potential applications for indoor and campus-based personnel localisation has led researchers to create a wide spectrum of different algorithmic approaches and systems. However, the majority of the proposed systems overlook the unique radio environment presented by the human body, leading to systematic errors and inaccuracies when deployed in this context. In this paper RSSI-based Monte Carlo Localisation was implemented using commercial off-the-shelf 868 MHz hardware, and empirical data was gathered across a relatively large number of scenarios within a single indoor office environment. These data showed that shadowing by the human body introduced a systematic skew into location estimates. It was also shown that the effect of body shadowing can be mitigated by using two body-worn nodes in concert and averaging the estimated positions of the two nodes worn on either side of the body. © Springer Science+Business Media, LLC 2012.
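
A minimal sketch of the two ideas, assuming a standard log-distance path-loss model and arbitrary example anchor positions and RSSI values (none of the parameters below come from the paper):

# Sketch of RSSI-weighted Monte Carlo localisation for one body-worn node,
# plus the two-node mitigation: average the estimates from both sides of the body.
import math, random

def path_loss_rssi(dist, tx_power=-40.0, n=2.5):
    """Log-distance path-loss model: expected RSSI (dBm) at a given distance (m)."""
    return tx_power - 10.0 * n * math.log10(max(dist, 0.1))

def estimate_position(anchors, rssi_readings, n_particles=2000, sigma=4.0):
    """Weight random particles by how well they explain the RSSI readings."""
    particles = [(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(n_particles)]
    wx = wy = wsum = 0.0
    for (px, py) in particles:
        w = 1.0
        for (ax, ay), rssi in zip(anchors, rssi_readings):
            expected = path_loss_rssi(math.hypot(px - ax, py - ay))
            w *= math.exp(-((rssi - expected) ** 2) / (2 * sigma ** 2))
        wx, wy, wsum = wx + w * px, wy + w * py, wsum + w
    return (wx / wsum, wy / wsum) if wsum > 0 else (10.0, 10.0)

anchors = [(0, 0), (20, 0), (0, 20), (20, 20)]
left = estimate_position(anchors, [-62, -75, -70, -80])   # node on left side of body
right = estimate_position(anchors, [-70, -68, -74, -76])  # node on right side of body
# Mitigation described above: average the two per-node estimates.
fused = ((left[0] + right[0]) / 2, (left[1] + right[1]) / 2)
print(fused)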

Relevance:

10.00%

Publisher:

Abstract:

We describe an approach, based on annotations and refactoring, aimed at addressing the issue of joint exploitation of control (stream) and data parallelism in a skeleton-based parallel programming environment. Annotations drive the efficient implementation of a parallel computation. Refactoring is used to transform the associated skeleton tree into a more efficient, functionally equivalent skeleton tree. In most cases, cost models are used to drive the refactoring process. We show how sample use-case applications/kernels may be optimized and discuss preliminary experiments with FastFlow assessing the theoretical results. © 2013 Springer-Verlag.
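
A toy version of cost-driven skeleton-tree refactoring, assuming an invented three-skeleton grammar and a deliberately crude cost model (this is not FastFlow code):

# Toy skeleton-tree rewrite driven by a cost model: a sequential stage is
# turned into a farm only when the model says the rewrite pays off.
FARM_OVERHEAD = 1.0        # assumed scheduling/communication cost per farm

def cost(node):
    kind = node[0]
    if kind == "seq":                    # ("seq", name, service_time)
        return node[2]
    if kind == "farm":                   # ("farm", inner, nworkers)
        return cost(node[1]) / node[2] + FARM_OVERHEAD
    if kind == "pipe":                   # ("pipe", [stages]): bottleneck stage dominates
        return max(cost(s) for s in node[1])
    raise ValueError(kind)

def farm_introduction(node, nworkers=4):
    """Rewrite rule seq -> farm(seq), kept only if the estimated cost improves."""
    candidate = ("farm", node, nworkers)
    return candidate if cost(candidate) < cost(node) else node

tree = ("pipe", [("seq", "read", 1.0), ("seq", "compute", 8.0), ("seq", "write", 1.0)])
optimised = ("pipe", [farm_introduction(stage) for stage in tree[1]])
print(cost(tree), "->", cost(optimised))   # 8.0 -> 3.0: only "compute" gets farmed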

Relevance:

10.00%

Publisher:

Abstract:

We describe a lightweight prototype framework (LIBERO) designed for experimentation with behavioural skeletons: components implementing a well-known parallelism exploitation pattern together with a rule-based autonomic manager taking care of some non-functional feature related to the pattern computation. LIBERO supports multiple autonomic managers within the same behavioural skeleton, each taking care of a different non-functional concern. We introduce LIBERO, built on plain Java and JBoss, and discuss how multiple managers may be coordinated to achieve a common goal using a two-phase coordination protocol developed in earlier work. We present experimental results that demonstrate how the prototype may be used to investigate autonomic management of multiple, independent concerns. © 2011 Springer-Verlag Berlin Heidelberg.
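
A generic sketch of the two-phase idea, with invented manager names and a trivial compatibility rule (at most one committed action per actuator); this is not LIBERO's protocol:

# Two-phase coordination between independent autonomic managers: each manager
# proposes an action (phase 1); a coordinator commits a compatible subset (phase 2).
def two_phase_coordinate(managers, status):
    proposals = [(m["name"], m["propose"](status)) for m in managers]   # phase 1
    committed, used_actuators = [], set()
    for name, action in proposals:                                       # phase 2
        if action is None:
            continue
        actuator = action[0]
        if actuator not in used_actuators:        # drop proposals that clash
            used_actuators.add(actuator)
            committed.append((name, action))
    return committed

perf_manager = {"name": "performance",
                "propose": lambda s: ("nworkers", +1) if s["throughput"] < 100 else None}
power_manager = {"name": "power",
                 "propose": lambda s: ("nworkers", -1) if s["watts"] > 80 else None}

status = {"throughput": 60, "watts": 95}
print(two_phase_coordinate([perf_manager, power_manager], status))
# The dropped proposal would, in a real protocol, be renegotiated rather than discarded.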

Relevance:

10.00%

Publisher:

Abstract:

In intelligent video surveillance systems, scalability (of the number of simultaneous video streams) is important. Two key factors which hinder scalability are the time spent in decompressing the input video streams, and the limited computational power of the processor. This paper demonstrates how a combination of algorithmic and hardware techniques can overcome these limitations, and significantly increase the number of simultaneous streams. The techniques used are processing in the compressed domain, and exploitation of the multicore and vector processing capability of modern processors. The paper presents a system which performs background modeling, using a Mixture of Gaussians approach. This is an important first step in the segmentation of moving targets. The paper explores the effects of reducing the number of coefficients in the compressed domain, in terms of throughput speed and quality of the background modeling. The speedups achieved by exploiting compressed domain processing, multicore and vector processing are explored individually. Experiments show that a combination of all these techniques can give a speedup of 170 times on a single CPU compared to a purely serial, spatial domain implementation, with a slight gain in quality.
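
A simplified per-pixel Mixture-of-Gaussians update in the spirit of the Stauffer-Grimson scheme, with arbitrary parameter values; a real system would vectorise this over whole frames and, as in the paper, operate on compressed-domain coefficients rather than raw pixels:

# Simplified per-pixel Mixture-of-Gaussians background model update.
import numpy as np

K, ALPHA, MATCH_SIGMAS = 3, 0.05, 2.5

def update_pixel(x, weights, means, variances):
    """Update one pixel's K-Gaussian model with new value x; return foreground flag."""
    matched = None
    for k in np.argsort(-weights):                     # most likely components first
        if abs(x - means[k]) < MATCH_SIGMAS * np.sqrt(variances[k]):
            matched = k
            break
    if matched is None:                                # no match: replace weakest component
        k = int(np.argmin(weights))
        means[k], variances[k], weights[k] = x, 400.0, 0.05
        foreground = True
    else:
        k = matched
        means[k] += ALPHA * (x - means[k])
        variances[k] += ALPHA * ((x - means[k]) ** 2 - variances[k])
        foreground = weights[k] < 0.2                  # weak component treated as foreground
    weights[:] = (1 - ALPHA) * weights
    weights[k] += ALPHA
    weights[:] /= weights.sum()
    return foreground

weights = np.array([0.6, 0.3, 0.1])
means = np.array([120.0, 200.0, 40.0])
variances = np.array([100.0, 100.0, 100.0])
print(update_pixel(122.0, weights, means, variances))  # near a background mean -> False
print(update_pixel(10.0, weights, means, variances))   # far from all components -> True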

Relevance:

10.00%

Publisher:

Abstract:

Sponge classification has long been based mainly on morphocladistic analyses but is now being greatly challenged by more than 12 years of accumulated molecular data analyses. The current study used phylogenetic hypotheses based on sequence data from 18S rRNA, 28S rRNA, and the CO1 barcoding fragment, combined with morphology, to justify the resurrection of the order Axinellida Lévi, 1953. Axinellida occupies a key position in different morphologically derived topologies. The abandonment of Axinellida and the establishment of Halichondrida Vosmaer, 1887 sensu lato to contain Halichondriidae Gray, 1867, Axinellidae Carter, 1875, Bubaridae Topsent, 1894, Heteroxyidae Dendy, 1905, and a new family Dictyonellidae van Soest et al., 1990 was based on the conclusion that an axially condensed skeleton evolved independently in separate lineages, in preference to the less parsimonious assumption that asters (star-shaped spicules), acanthostyles (club-shaped spicules with spines), and sigmata (C-shaped spicules) each evolved more than once. Our new molecular trees are congruent with each other and contrast with the earlier, morphologically based, trees. The results show that axially condensed skeletons, asters, acanthostyles, and sigmata are all homoplasious characters. The unrecognized homoplasious nature of these characters explains much of the incongruence between molecular-based and morphology-based phylogenies. We use the molecular trees presented here as a basis for re-interpreting the morphological characters within Heteroscleromorpha. The implications for the classification of Heteroscleromorpha are discussed and a new order Biemnida ord. nov. is erected.

Relevance:

10.00%

Publisher:

Abstract:

Multi-core and many-core platforms are becoming increasingly heterogeneous and asymmetric. This significantly increases the porting and tuning effort required for parallel codes, which in turn often leads to a growing gap between peak machine power and actual application performance. In this work a first step toward the automated optimization of high-level skeleton-based parallel code is discussed. The paper presents an abstract annotation model for skeleton programs aimed at formally describing suitable mappings of parallel activities onto a high-level platform representation. The derived mapping and scheduling strategies are used to generate optimized run-time code. © 2013 Springer-Verlag Berlin Heidelberg.
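
A toy rendering of the idea, with an invented annotation format and platform description (not the paper's model): each parallel activity carries a mapping annotation, and a small scheduler checks the annotations against the platform and emits a placement table.

# Toy annotated skeleton program mapped onto an abstract platform description.
platform = {"big_cores": 4, "little_cores": 4, "gpus": 1}     # abstract platform model

skeleton = ("pipe", [
    {"name": "decode", "kind": "seq",  "annot": {"place": "little_cores"}},
    {"name": "detect", "kind": "farm", "annot": {"place": "gpus", "workers": 1}},
    {"name": "encode", "kind": "farm", "annot": {"place": "big_cores", "workers": 4}},
])

def derive_mapping(skel, platform):
    """Check annotations against the platform and emit a placement table."""
    mapping = []
    for stage in skel[1]:
        place = stage["annot"]["place"]
        workers = stage["annot"].get("workers", 1)
        if workers > platform[place]:
            raise ValueError(f"{stage['name']}: not enough {place}")
        mapping.append((stage["name"], place, workers))
    return mapping

for entry in derive_mapping(skeleton, platform):
    print(entry)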

Relevance:

10.00%

Publisher:

Abstract:

The management of non-functional features (performance, security, power management, etc.) is traditionally a difficult, error-prone task for programmers of parallel applications. These non-functional features may instead be taken care of by autonomic managers that run policies represented as rules, using sensors and actuators to monitor and transform a running parallel application. We discuss an approach aimed at providing formal tool support for the integration of independently developed autonomic managers taking care of different non-functional concerns within the same parallel application. Our approach builds on the Behavioural Skeleton experience (autonomic management of non-functional features in structured parallel applications) and on previous results on conflict detection and resolution in rule-based systems. © 2013 Springer-Verlag Berlin Heidelberg.
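
A minimal sketch of rule-level conflict detection, with invented sensors, actuators and thresholds: here two rules conflict if they can fire on the same state but drive the same actuator in opposite directions.

# Rules contributed by independently developed managers, plus a simple
# detector for pairs of rules that fire together and pull the same actuator
# in opposite directions.
rules = [
    {"manager": "performance", "when": lambda s: s["throughput"] < 100,
     "actuator": "nworkers", "delta": +1},
    {"manager": "power",       "when": lambda s: s["watts"] > 80,
     "actuator": "nworkers", "delta": -1},
]

def detect_conflicts(rules, state):
    fired = [r for r in rules if r["when"](state)]
    conflicts = []
    for i, a in enumerate(fired):
        for b in fired[i + 1:]:
            if a["actuator"] == b["actuator"] and a["delta"] * b["delta"] < 0:
                conflicts.append((a["manager"], b["manager"], a["actuator"]))
    return conflicts

print(detect_conflicts(rules, {"throughput": 60, "watts": 95}))
# -> [('performance', 'power', 'nworkers')]: such pairs need a resolution policy.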

Relevance:

10.00%

Publisher:

Abstract:

Many modern networks are reconfigurable, in the sense that the topology of the network can be changed by the nodes in the network. For example, peer-to-peer, wireless and ad-hoc networks are reconfigurable. More generally, many social networks, such as a company's organizational chart; infrastructure networks, such as an airline's transportation network; and biological networks, such as the human brain, are also reconfigurable. Modern reconfigurable networks have a complexity unprecedented in the history of engineering, resembling more a dynamic and evolving living animal than a structure of steel designed from a blueprint. Unfortunately, our mathematical and algorithmic tools have not yet developed enough to handle this complexity and fully exploit the flexibility of these networks. We believe that it is no longer possible to build networks that are scalable and never have node failures. Instead, these networks should be able to admit small, and perhaps periodic, failures and still recover, much as skin heals from a cut. This process, in which the network recovers by maintaining key invariants in response to attack by a powerful adversary, is what we call self-healing. Here, we present several fast and provably good distributed algorithms for self-healing in reconfigurable dynamic networks. Each of these algorithms has different properties and a different set of guarantees and limitations. We also discuss future directions and theoretical questions we would like to answer.
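
As a naive illustration only (not one of the algorithms presented here): when a node is deleted, its surviving neighbours can be reconnected into a ring among themselves, preserving connectivity while adding at most two edges per affected node.

# Naive self-healing sketch: on deletion of a node, link its orphaned
# neighbours into a cycle so the invariant "the network stays connected" holds.
def heal_after_deletion(adjacency, victim):
    neighbours = sorted(adjacency.pop(victim, set()))
    for n in neighbours:
        adjacency[n].discard(victim)
    for a, b in zip(neighbours, neighbours[1:] + neighbours[:1]):
        if a != b:
            adjacency[a].add(b)
            adjacency[b].add(a)
    return adjacency

# Star network: deleting the centre would normally disconnect everything.
net = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(heal_after_deletion(net, 0))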