631 results for algorithmic skeletons


Relevance:

70.00%

Publisher:

Abstract:

Autonomic management can be used to improve the QoS provided by parallel/distributed applications. We discuss behavioural skeletons introduced in earlier work: rather than relying on the programmer's ability to design efficient autonomic policies "from scratch", we encapsulate general autonomic controller features into algorithmic skeletons. We then leave to the programmer the duty of specifying the parameters needed to specialise the skeletons to the needs of the particular application at hand. As a result, the programmer can rapidly prototype and tune distributed/parallel applications with non-trivial autonomic management capabilities. We discuss how behavioural skeletons have been implemented in the framework of GCM (the Grid Component Model developed within the CoreGRID NoE and currently being implemented within the GridCOMP STREP project). We present results evaluating the overhead introduced by autonomic management activities as well as the overall behaviour of the skeletons. We also present results achieved with a long-running application subject to autonomic management and dynamically adapting to changing features of the target architecture.
Overall, the results demonstrate both the feasibility of implementing autonomic control via behavioural skeletons and the effectiveness of our sample behavioural skeletons in managing the "functional replication" pattern(s).
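To make the idea concrete, here is a minimal Python sketch of a behavioural skeleton for a task-farm pattern managed via functional replication; the class, method, and parameter names are our own illustration, not the GCM/GridCOMP API:

```python
# Hypothetical sketch of a behavioural skeleton: a task farm whose autonomic
# manager adjusts the number of worker replicas to meet a service-time
# contract supplied by the programmer.

class FarmBehaviouralSkeleton:
    def __init__(self, worker_fn, target_service_time, min_workers=1, max_workers=16):
        self.worker_fn = worker_fn          # the function replicated across workers
        self.target = target_service_time   # QoS contract: desired seconds per task
        self.min_workers = min_workers
        self.max_workers = max_workers
        self.nworkers = min_workers

    def manage(self, observed_task_time):
        """One autonomic cycle: compare the measured service time against the
        contract and add/remove a worker replica accordingly."""
        service_time = observed_task_time / self.nworkers
        if service_time > self.target and self.nworkers < self.max_workers:
            self.nworkers += 1              # contract violated: replicate a worker
        elif service_time < 0.5 * self.target and self.nworkers > self.min_workers:
            self.nworkers -= 1              # over-provisioned: release a worker
        return self.nworkers
```

A runtime would invoke `manage` once per monitoring interval with the measured per-task completion time; the threshold rule here is a deliberately simple stand-in for a real autonomic controller policy.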

Relevance:

60.00%

Publisher:

Abstract:

We discuss how common problems arising with multi/many-core distributed architectures can be effectively handled through co-design of parallel/distributed programming abstractions and of autonomic management of non-functional concerns. In particular, we demonstrate how restricted patterns (or skeletons) may be efficiently managed by rule-based autonomic managers. We discuss the basic principles underlying pattern+manager co-design, current implementations inspired by this approach, and some results achieved with proof-of-concept prototypes.
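A rule-based manager of the kind mentioned can be sketched as a list of (precondition, action) pairs evaluated against monitored metrics of a running pattern; everything below, including the metric and configuration names, is a hypothetical illustration rather than an implementation from the paper:

```python
# Illustrative rule-based autonomic manager: the first rule whose precondition
# holds for the current metrics fires, producing a new pattern configuration.

def make_manager(rules):
    def manager(metrics, config):
        for precondition, action in rules:
            if precondition(metrics, config):
                return action(metrics, config)   # fire the first applicable rule
        return config                            # no rule fires: keep configuration
    return manager

# Example rules for a farm pattern: grow under queue pressure,
# shrink when workers are mostly idle.
rules = [
    (lambda m, c: m["queue_len"] > 2 * c["nworkers"],
     lambda m, c: {**c, "nworkers": c["nworkers"] + 1}),
    (lambda m, c: m["utilisation"] < 0.3 and c["nworkers"] > 1,
     lambda m, c: {**c, "nworkers": c["nworkers"] - 1}),
]

manager = make_manager(rules)
```

Separating declarative rules from the pattern implementation is what makes the co-design work: the manager only ever touches the restricted set of reconfigurations the pattern safely exposes.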

Relevance:

60.00%

Publisher:

Abstract:

We propose a data-flow based run-time system as an efficient tool for supporting execution of parallel code on heterogeneous architectures hosting both multicore CPUs and GPUs. We discuss how the proposed run-time system may be targeted both by structured parallel applications developed using algorithmic skeletons/parallel design patterns and by more "domain-specific" programming models. Experimental results demonstrating the feasibility of the approach are presented.
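The firing discipline at the heart of such a run-time can be illustrated with a toy macro-data-flow interpreter; this is a sequential sketch under our own naming, not the proposed system, and a real implementation would dispatch fireable instructions to CPU or GPU executors in parallel:

```python
# Minimal macro-data-flow interpreter: an instruction fires as soon as all of
# its input tokens are available, independently of any global program order.

def run_dataflow(instructions, inputs):
    """instructions: {name: (fn, [input names])}; inputs: initial tokens."""
    tokens = dict(inputs)
    pending = dict(instructions)
    while pending:
        fireable = [n for n, (fn, deps) in pending.items()
                    if all(d in tokens for d in deps)]
        if not fireable:
            raise RuntimeError("deadlock: no fireable instruction")
        for name in fireable:        # in a real run-time these execute concurrently
            fn, deps = pending.pop(name)
            tokens[name] = fn(*[tokens[d] for d in deps])
    return tokens
```

A skeleton compiler would emit such a graph from a map/reduce composition; a domain-specific front end could emit the same graph, which is why one run-time can serve both.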

Relevance:

60.00%

Publisher:

Abstract:

The reverse engineering of a skeleton-based programming environment, and its redesign to distribute the management activities of the system and thereby remove a potential single point of failure, is considered. The Orc notation is used to facilitate abstraction of the design and analysis of its properties. It is argued that Orc is particularly suited to this role, as this type of management is essentially an orchestration activity. The Orc specification of the original version of the system is modified via a series of semi-formally justified derivation steps to obtain a specification of the decentralized management version, which is then used as a basis for its implementation. Analysis of the two specifications allows qualitative prediction of the expected performance of the derived version with respect to the original, and this prediction is borne out in practice.

Relevance:

60.00%

Publisher:

Abstract:

We propose a methodology for optimizing the execution of data parallel (sub-)tasks on CPU and GPU cores of the same heterogeneous architecture. The methodology is based on two main components: i) an analytical performance model for scheduling tasks among CPU and GPU cores, such that the global execution time of the overall data parallel pattern is optimized; and ii) an autonomic module which uses the analytical performance model to implement the data parallel computations in a completely autonomic way, requiring no programmer intervention to optimize the computation across CPU and GPU cores. The analytical performance model uses a small set of simple parameters to devise a partitioning, between CPU and GPU cores, of the tasks derived from structured data parallel patterns/algorithmic skeletons. The model takes into account both hardware-related and application-dependent parameters. It computes the percentage of tasks to be executed on CPU and GPU cores such that both kinds of cores are exploited and performance figures are optimized. The autonomic module, implemented in FastFlow, executes a generic map (reduce) data parallel pattern, scheduling part of the tasks to the GPU and part to the CPU cores so as to achieve optimal execution time. Experimental results on state-of-the-art CPU/GPU architectures are presented, assessing both the performance model's properties and the autonomic module's effectiveness.
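The balancing argument behind such a model reduces to one formula: choose the GPU's share of the tasks so that the CPU cores and the GPU finish at the same time. The sketch below uses invented parameter names and a deliberately simplified cost model (a single per-task time for each device, GPU transfer cost folded into it), not the paper's model:

```python
# Toy analytical partitioning model: m independent tasks are split between
# n_cpu CPU cores (t_cpu seconds/task each) and one GPU (t_gpu seconds/task,
# including data transfer) so both sides complete together.

def gpu_fraction(t_cpu, t_gpu, n_cpu):
    cpu_rate = n_cpu / t_cpu        # tasks/second sustained by the CPU cores
    gpu_rate = 1.0 / t_gpu          # tasks/second sustained by the GPU
    # Balanced split: assign work proportionally to sustained rates.
    return gpu_rate / (cpu_rate + gpu_rate)

def completion_time(m, t_cpu, t_gpu, n_cpu):
    f = gpu_fraction(t_cpu, t_gpu, n_cpu)
    return max(f * m * t_gpu, (1 - f) * m * t_cpu / n_cpu)
```

For example, with 8 CPU cores at 1 s/task and a GPU at 0.1 s/task, the balanced split sends 10/18 of the tasks to the GPU; any other split leaves one side idle while the other finishes.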

Relevance:

60.00%

Publisher:

Abstract:

We introduce a new parallel pattern derived from a specific application domain and show how it turns out to have application beyond its domain of origin. The pool evolution pattern models the parallel evolution of a population subject to mutations and evolving in such a way that a given fitness function is optimized. The pattern has been demonstrated to be suitable for capturing and modeling the parallel patterns underpinning various evolutionary algorithms, as well as other parallel patterns typical of symbolic computation. In this paper we introduce the pattern, discuss its implementation on modern multi/many-core architectures, and finally present experimental results obtained with FastFlow and Erlang implementations to assess its feasibility and scalability.
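The pattern's phases (selection, evolution, filtering, termination) can be sketched generically; the toy instantiation below is our own and runs sequentially, whereas the real pattern parallelises the evolution phase as an embarrassingly parallel map:

```python
# Generic pool-evolution sketch: each iteration selects part of the population,
# evolves the selected individuals, and filters the offspring back into the
# pool, until a termination condition holds.

import random

def pool_evolution(population, select, evolve, filt, terminate, max_iters=100):
    for _ in range(max_iters):
        if terminate(population):
            break
        chosen = select(population)
        offspring = [evolve(ind) for ind in chosen]   # parallel map in the real pattern
        population = filt(population, offspring)
    return population

# Toy instantiation: maximise the fitness x by random +/-1 mutation,
# keeping the four fittest individuals at each step.
random.seed(0)
best = pool_evolution(
    population=[1, 2, 3, 4],
    select=lambda pop: pop,
    evolve=lambda x: x + random.choice([-1, 1]),
    filt=lambda pop, off: sorted(pop + off)[-4:],
    terminate=lambda pop: max(pop) >= 10,
)
```

Swapping the four higher-order parameters is how the same pattern captures different evolutionary and symbolic-computation algorithms.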

Relevance:

60.00%

Publisher:

Abstract:

Structured parallel programming, and in particular programming models using the algorithmic skeleton or parallel design pattern concepts, is increasingly considered to be the only viable means of supporting effective development of scalable and efficient parallel programs. Structured parallel programming models have been assessed in a number of works in the context of performance. In this paper we consider how the use of structured parallel programming models allows knowledge of the parallel patterns present to be harnessed to address both performance and energy consumption. We consider different features of structured parallel programming that may be leveraged to impact the performance/energy trade-off, and we discuss a preliminary set of experiments validating our claims.
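One simple way pattern knowledge can drive the trade-off is exhaustive search over pattern configurations under an analytical cost model; the model below (linear speedup for a data-parallel pattern, static power per core plus cubic dynamic power in frequency) is a textbook approximation we introduce for illustration, not the paper's methodology:

```python
# Hypothetical sketch: for a data-parallel pattern whose work divides evenly,
# enumerate (cores, frequency) configurations and keep the one with the lowest
# energy among those meeting a completion deadline.

def best_config(work, deadline, max_cores, freqs, p_static=1.0, k=0.5):
    best = None
    for cores in range(1, max_cores + 1):
        for f in freqs:
            time = work / (cores * f)              # perfect scaling assumed
            if time > deadline:
                continue                           # misses the performance target
            power = cores * (p_static + k * f ** 3)
            energy = power * time
            if best is None or energy < best[0]:
                best = (energy, cores, f)
    return best                                    # (energy, cores, frequency) or None
```

The point structured parallelism buys us is the model itself: because the pattern is known, its completion time under a given configuration is predictable, which is what makes this search meaningful.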

Relevance:

20.00%

Publisher:

Abstract:

Brucite [Mg(OH)2] microbialites occur in vacated interseptal spaces of living scleractinian coral colonies (Acropora, Pocillopora, Porites) from subtidal and intertidal settings in the Great Barrier Reef, Australia, and subtidal Montastraea from the Florida Keys, United States. Brucite encrusts microbial filaments of endobionts (i.e., fungi, green algae, cyanobacteria) growing under organic biofilms; the brucite distribution is patchy both within interseptal spaces and within coralla. Although brucite is undersaturated in seawater, its precipitation was apparently induced in the corals by lowered pCO2 and increased pH within microenvironments protected by microbial biofilms. The occurrence of brucite in shallow-marine settings highlights the importance of microenvironments in the formation and early diagenesis of marine carbonates. Significantly, the brucite precipitates discovered in microenvironments in these corals show that early diagenetic products do not necessarily reflect ambient seawater chemistry. Errors in environmental interpretation may arise where unidentified precipitates occur in microenvironments in skeletal carbonates that are subsequently utilized as geochemical seawater proxies.

Relevance:

20.00%

Publisher:

Abstract:

Live-collected samples of four common reef building coral genera (Acropora, Pocillopora, Goniastrea, Porites) from subtidal and intertidal settings of Heron Reef, Great Barrier Reef, show extensive early marine diagenesis where parts of the coralla less than 3 years old contain abundant macro- and microborings and aragonite, high-Mg calcite, low-Mg calcite, and brucite cements. Many types of cement are associated directly with microendoliths and endobionts that inhabit parts of the corallum recently abandoned by coral polyps. The occurrence of cements that generally do not precipitate in normal shallow seawater (e.g., brucite, low-Mg calcite) highlights the importance of microenvironments in coral diagenesis. Cements precipitated in microenvironments may not reflect ambient seawater chemistry. Hence, geochemical sampling of these cements will contaminate trace-element and stable-isotope inventories used for palaeoclimate and dating analysis. Thus, great care must be taken in vetting samples for both bulk and microanalysis of geochemistry. Visual inspection using scanning electron microscopy may be required for vetting in many cases.

Relevance:

20.00%

Publisher:

Abstract:

Generative music algorithms frequently operate by making musical decisions in a sequence, with each step of the sequence incorporating the local musical context in the decision process. The context is generally a short window of past musical actions. What is not generally included in the context is future actions. For real-time systems this is because the future is unknown. Offline systems also frequently utilise causal algorithms either for reasons of efficiency [1] or to simulate perceptual constraints [2]. However, even real-time agents can incorporate knowledge of their own future actions by utilising some form of planning. We argue that for rhythmic generation the incorporation of a limited form of planning - anticipatory timing - offers a worthwhile trade-off between musical salience and efficiency. We give an example of a real-time generative agent - the Jambot - that utilises anticipatory timing for rhythmic generation. We describe its operation, and compare its output with and without anticipatory timing.
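The contrast between causal and anticipatory timing can be shown in a few lines; the function names, the beat predictor, and the latency figure below are our own illustration of the general idea, not the Jambot's internals:

```python
# A purely causal generator can only react to the last observed beat, so its
# onset lands one beat late; an anticipatory generator plans towards the
# predicted next beat, compensating for actuation latency.

def predict_next_beat(onset_times):
    """Estimate the next beat time from the mean inter-onset interval."""
    intervals = [b - a for a, b in zip(onset_times, onset_times[1:])]
    period = sum(intervals) / len(intervals)
    return onset_times[-1] + period

def causal_onset(onset_times):
    return onset_times[-1]                            # reactive: always behind

def anticipatory_onset(onset_times, latency=0.02):
    # Schedule slightly early so the sound is emitted exactly on the beat.
    return predict_next_beat(onset_times) - latency
```

This is the limited form of planning argued for above: a one-beat horizon is enough to land on the beat, yet cheap enough for real time.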

Relevance:

20.00%

Publisher:

Abstract:

This thesis develops a detailed conceptual design method and a system software architecture defined with a parametric and generative evolutionary design system to support an integrated interdisciplinary building design approach. The research recognises the need to shift design efforts toward the earliest phases of the design process to support crucial design decisions that have a substantial cost implication on the overall project budget. The overall motivation of the research is to improve the quality of designs produced at the author's employer, the General Directorate of Major Works (GDMW) of the Saudi Arabian Armed Forces. GDMW produces many buildings that have standard requirements, across a wide range of environmental and social circumstances. A rapid means of customising designs for local circumstances would have significant benefits. The research considers the use of evolutionary genetic algorithms in the design process and the ability to generate and assess a wider range of potential design solutions than a human could manage. This wider-ranging assessment, during the early stages of the design process, means that the generated solutions will be more appropriate for the defined design problem. The research work proposes a design method and system that promotes a collaborative relationship between human creativity and computer capability. The tectonic design approach is adopted as a process-oriented design that values the process of design as much as the product. The aim is to connect the evolutionary systems to performance assessment applications, which are used as prioritised fitness functions. This will produce design solutions that respond to their environmental and functional requirements. This integrated, interdisciplinary approach to design will produce solutions through a design process that considers and balances the requirements of all aspects of the design.
Since this thesis covers a wide area of research material, a 'methodological pluralism' approach was used, incorporating both prescriptive and descriptive research methods. Multiple models of research were combined and the overall research was undertaken in three main stages: conceptualisation, development and evaluation. The first two stages lay the foundations for the specification of the proposed system, where key aspects of the system that have not previously been proven in the literature were implemented to test the feasibility of the system. As a result of combining the existing knowledge in the area with the newly verified key aspects of the proposed system, this research can form the base for a future software development project. The evaluation stage, which includes building the prototype system to test and evaluate the system performance based on the criteria defined in the earlier stage, is not within the scope of this thesis. The research results in a conceptual design method and a proposed system software architecture. The proposed system is called the 'Hierarchical Evolutionary Algorithmic Design (HEAD) System'. The HEAD system has been shown to be feasible through the initial illustrative paper-based simulation. The HEAD system consists of two main components: the 'Design Schema' and the 'Synthesis Algorithms'. The HEAD system reflects the major research contribution in the way it is conceptualised, while secondary contributions are achieved within the system components. The design schema provides constraints on the generation of designs, thus enabling the designer to create a wide range of potential designs that can then be analysed for desirable characteristics. The design schema supports the digital representation of the human creativity of designers into a dynamic design framework that can be encoded and then executed through the use of evolutionary genetic algorithms.
The design schema incorporates 2D and 3D geometry and graph theory for space layout planning and building formation using the Lowest Common Design Denominator (LCDD) of a parameterised 2D module and a 3D structural module. This provides a bridge between the standard adjacency requirements and the evolutionary system. The use of graphs as an input to the evolutionary algorithm supports the introduction of constraints in a way that is not supported by standard evolutionary techniques. The process of design synthesis is guided by a higher-level description of the building that supports geometrical constraints. The Synthesis Algorithms component analyses designs at four levels: 'Room', 'Layout', 'Building' and 'Optimisation'. At each level multiple fitness functions are embedded into the genetic algorithm to target the specific requirements of the relevant decomposed part of the design problem. Decomposing the design problem to allow the design requirements of each level to be dealt with separately, and then reassembling them in a bottom-up approach, reduces the generation of non-viable solutions by constraining the options available at the next higher level. The iterative approach, in exploring the range of design solutions through modification of the design schema as the understanding of the design problem improves, assists in identifying conflicts in the design requirements. Additionally, the hierarchical set-up allows the embedding of multiple fitness functions into the genetic algorithm, each relevant to a specific level. This supports an integrated multi-level, multi-disciplinary approach. The HEAD system promotes a collaborative relationship between human creativity and computer capability. The design schema component, as the input to the procedural algorithms, enables the encoding of certain aspects of the designer's subjective creativity.
By focusing on finding solutions for the relevant sub-problems at the appropriate levels of detail, the hierarchical nature of the system assists in the design decision-making process.
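The bottom-up, level-by-level idea can be caricatured in a few lines of Python; the levels, fitness functions, and thresholds below are invented placeholders, not the HEAD system's actual schema or algorithms:

```python
# Toy hierarchical evolutionary search: solve the lower level ('Room') against
# its own fitness first, then assemble only the surviving rooms at the next
# level ('Layout'), pruning non-viable combinations early.

from itertools import product

def evolve_level(candidates, fitness, keep):
    """Stand-in for one GA level: rank candidates and keep the fittest."""
    return sorted(candidates, key=fitness, reverse=True)[:keep]

# Level 1: room areas scored by closeness to a 20 m^2 target.
rooms = evolve_level(range(10, 31), fitness=lambda a: -abs(a - 20), keep=3)

# Level 2: layouts (pairs of surviving rooms) scored by total area,
# subject to a viability cap on the combined area.
layouts = evolve_level(
    [p for p in product(rooms, repeat=2) if sum(p) <= 45],
    fitness=lambda p: sum(p),
    keep=1,
)
```

Because level 2 only ever sees the three surviving rooms instead of all 21 candidates, the combinatorial space it must search collapses, which is the pruning benefit the decomposition argument above describes.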

Relevance:

20.00%

Publisher:

Abstract:

One aim of the Australasian Nutrition Care Day Survey (ANCDS) was to explore dietary intake and nutritional status of acute care hospital patients. Dietitians from 56 hospitals in Australia and New Zealand completed a 24-hour nutritional status and dietary intake audit of 3000 adult patients. Participants were evaluated for nutritional risk using the Malnutrition Screening Tool (MST). Those 'at risk' underwent nutritional assessment using Subjective Global Assessment (SGA). Dietitians observed participants' dietary intake at each main meal and recorded mid-meal intake via participant interviews. Intakes were recorded as 0%, 25%, 50%, 75%, or 100% of that offered for each meal during the 24-hour audit. Preliminary results are reported for 1550 participants (males = 853; females = 697), with age = 64 ± 17 years and BMI = 27 ± 7 kg/m². Fifty-five percent (n = 853) of the participants had BMI > 25 kg/m². The MST identified 41% (n = 636) 'at risk' for malnutrition. Of those 'at risk', 70% were assessed as malnourished, resulting in an overall malnutrition prevalence of 30% (25% moderately malnourished, 5% severely malnourished). One-quarter of malnourished participants (n = 118) were on standard hospital diets without additional nutritional support. Fifty percent of malnourished patients (n = 235) and 40% of all patients (n = 620) had an overall food consumption of ≤50% during the 24-hour audit. The ANCDS found that skeletons in the hospital closet continue to exist and that acute care patients continue to have suboptimal dietary intake. The ANCDS provides valuable insight into gaps in existing nutrition care practices.

Relevance:

20.00%

Publisher:

Abstract:

It is 2015 and there are no indications that the relentless digital transformation of the music economy is about to slow down. Rather, the music economy continues to rapidly reinvent itself, and industry powers, positions and practices that were redefined only a few years ago are being questioned once again. This paper examines the most recent changes of the music economy as it moves from a product-based towards an access-based logic. The paper starts out by recognising the essential role of technology in the evolution of the music economy. It then moves on to a discussion about the rise of so-called access-based music business models and points out some of the controversies and debates that are associated with these models and online services. With this as a background, the paper explores how access-based music services and the algorithmically curated playlists developed by these services transform the relationship between artists, music and fans, and challenge the music industry's power relationships and established industry practices once again.

Relevance:

20.00%

Publisher:

Abstract:

We consider the problem of estimating the optimal parameter trajectory over a finite time interval in a parameterized stochastic differential equation (SDE), and propose a simulation-based algorithm for this purpose. Towards this end, we consider a discretization of the SDE over finite time instants and reformulate the problem as one of finding an optimal parameter at each of these instants. A stochastic approximation algorithm based on the smoothed functional technique is adapted to this setting for finding the optimal parameter trajectory. A proof of convergence of the algorithm is presented, and results of numerical experiments over two different settings are shown. The algorithm is seen to exhibit good performance. We also present extensions of our framework to the case of finding optimal parameterized feedback policies for controlled SDEs and present numerical results in this scenario as well.
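The core smoothed-functional estimator, shown here in its generic scalar form rather than the paper's trajectory setting, perturbs the parameter with a Gaussian and differences noisy objective evaluations; step sizes, constants, and the test objective below are our own choices for illustration:

```python
# Sketch of smoothed-functional stochastic approximation: the gradient of a
# noisy objective J is estimated via Gaussian perturbations, and the parameter
# follows the negative estimated gradient.

import random

def sf_minimise(J, theta0, beta=0.1, step=0.05, iters=2000, seed=1):
    random.seed(seed)
    theta = theta0
    for _ in range(iters):
        delta = random.gauss(0.0, 1.0)
        # Two-measurement smoothed-functional gradient estimate.
        grad = (delta / beta) * (J(theta + beta * delta) - J(theta - beta * delta)) / 2.0
        theta -= step * grad
    return theta

# Noisy quadratic objective with its minimum at 3.
theta_star = sf_minimise(lambda x: (x - 3.0) ** 2 + random.gauss(0.0, 0.01),
                         theta0=0.0)
```

In the trajectory setting described above, one such update runs at every discretized time instant, so the iterate is a vector of parameters, one per instant, rather than a single scalar.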