998 results for compression parallel
Abstract:
In this paper we give a first account of a simple analysis tool for modeling temporal compression for the automatic mitigation of multipath-induced intersymbol interference through the use of the active phase conjugation (APC) technique. The temporal compression characteristics of an APC system are analyzed using a simple discrete channel model, and numerical results are provided to support the theoretical findings.
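A minimal numerical sketch of the idea, assuming a simple FIR multipath channel with hypothetical taps (an illustration of temporal compression via phase conjugation in general, not the authors' discrete channel model): retransmitting the time-reversed complex conjugate of the received probe makes the effective end-to-end response the autocorrelation of the channel, which concentrates energy in a single dominant tap and thereby compresses the multipath spread.

    import numpy as np

    # Hypothetical discrete multipath channel (complex tap gains).
    h = np.array([1.0 + 0.0j, 0.0, 0.5 - 0.2j, 0.0, 0.3 + 0.1j])

    # Active phase conjugation: retransmit the time-reversed, conjugated probe.
    apc_filter = np.conj(h[::-1])

    # Effective end-to-end channel = h convolved with its conjugate time-reverse,
    # i.e. the autocorrelation of h: energy piles up in the central tap.
    effective = np.convolve(h, apc_filter)

    peak = np.max(np.abs(effective))
    sidelobe_sum = np.sum(np.abs(effective)) - peak
    print("peak-to-total-sidelobe ratio:", peak / sidelobe_sum)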
Abstract:
The management of non-functional features (performance, security, power management, etc.) is traditionally a difficult, error-prone task for programmers of parallel applications. These features can instead be handled by autonomic managers that run policies expressed as rules, using sensors and actuators to monitor and transform a running parallel application. We discuss an approach aimed at providing formal tool support for the integration of independently developed autonomic managers, each taking care of a different non-functional concern within the same parallel application. Our approach builds on the Behavioural Skeleton experience (autonomic management of non-functional features in structured parallel applications) and on previous results on conflict detection and resolution in rule-based systems. © 2013 Springer-Verlag Berlin Heidelberg.
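A minimal sketch of the sense-decide-act loop such rule-based autonomic managers run (the sensor names, thresholds and actions below are hypothetical and do not reflect the Behavioural Skeleton API); it also shows the kind of inter-manager conflict, one rule adding workers while another removes them, that the proposed formal support is meant to detect:

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Rule:
        name: str
        condition: Callable[[Dict[str, float]], bool]  # predicate over sensor readings
        action: Callable[[], None]                     # actuator invocation

    class AutonomicManager:
        def __init__(self, sensors: Dict[str, Callable[[], float]], rules: List[Rule]):
            self.sensors = sensors
            self.rules = rules

        def step(self) -> None:
            readings = {name: read() for name, read in self.sensors.items()}
            for rule in self.rules:
                if rule.condition(readings):
                    rule.action()

    # A performance manager and a power manager whose rules can conflict.
    perf_rule = Rule("scale_up",
                     lambda s: s["throughput"] < 100.0,
                     lambda: print("actuator: add worker"))
    power_rule = Rule("scale_down",
                      lambda s: s["watts"] > 80.0,
                      lambda: print("actuator: remove worker"))

    manager = AutonomicManager({"throughput": lambda: 90.0, "watts": lambda: 95.0},
                               [perf_rule, power_rule])
    manager.step()  # both rules fire: a conflict to be detected and resolved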
Abstract:
Refactoring is the process of changing the structure of a program without changing its behaviour. So far, refactoring has been deployed effectively only for sequential programs. However, with the increased availability of multicore (and, soon, manycore) systems, refactoring can play an important role in helping both expert and non-expert parallel programmers structure and implement their parallel programs. This paper describes the design of a new refactoring tool aimed at increasing the programmability of parallel systems. To motivate our design, we refactor a number of examples in C, C++ and Erlang into good parallel implementations, using a set of formal pattern rewrite rules. © 2013 Springer-Verlag Berlin Heidelberg.
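As an illustration of the kind of rewrite such a refactoring performs (shown here as a Python sketch rather than in the paper's C, C++ or Erlang), a sequential map over independent work items is restructured into a parallel map, preserving behaviour while changing structure:

    from concurrent.futures import ProcessPoolExecutor

    def expensive(x):
        return x * x  # stand-in for a costly, side-effect-free computation

    data = list(range(1000))

    # Before refactoring: sequential map.
    results_seq = [expensive(x) for x in data]

    # After refactoring: the same computation as a parallel map (worker farm);
    # the observable behaviour is unchanged, only the structure differs.
    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            results_par = list(pool.map(expensive, data))
        assert results_par == results_seq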
Abstract:
This paper describes the ParaPhrase project, a new 3-year targeted research project funded under EU Framework 7 Objective 3.4 (Computer Systems), starting in October 2011. ParaPhrase aims to follow a new approach to introducing parallelism, using advanced refactoring techniques coupled with high-level parallel design patterns. The refactoring approach will use these design patterns to restructure programs defined as networks of software components into other forms that are better suited to parallel execution. The programmer will be aided by high-level cost information that will be integrated into the refactoring tools. The implementation of these patterns will then use a well-understood algorithmic skeleton approach to achieve good parallelism. A key ParaPhrase design goal is that parallel components should match heterogeneous architectures, defined, for example, in terms of CPU/GPU combinations. In order to achieve this, the ParaPhrase approach will map components at link time to the available hardware, and will then re-map them during program execution, taking account of multiple applications, changes in hardware resource availability, the desire to reduce communication costs, and so on. In this way, we aim to develop a new approach to programming that will be able to produce software that can adapt to dynamic changes in the system environment. Moreover, by using a strong component basis for parallelism, we can achieve potentially significant gains in terms of reducing sharing at a high level of abstraction, and so in reducing or even eliminating the costs that are usually associated with cache management, locking, and synchronisation. © 2013 Springer-Verlag Berlin Heidelberg.
Abstract:
This work presents a novel algorithm for decomposing nondeterministic finite automata (NFAs) into one-state-active modules for parallel execution on Multiprocessor Systems-on-Chip (MP-SoC). Furthermore, performance-related studies based on a 16-PE system for Snort, Bro and Linux-L7 regular expressions are presented. ©2009 IEEE.
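The abstract does not spell out the decomposition itself; the sketch below only illustrates the baseline being parallelised, using a hypothetical NFA for the regular expression (a|b)*ab: each state owns its own transition table, so the work for the currently active state set is naturally partitionable into per-state modules across processing elements.

    # Hypothetical NFA for (a|b)*ab over the alphabet {a, b}.
    transitions = {
        0: {"a": {0, 1}, "b": {0}},  # state 0 loops; on 'a' it also enters state 1
        1: {"b": {2}},               # state 1 reaches the accepting state on 'b'
        2: {},                       # accepting state
    }
    accepting = {2}

    def nfa_match(text: str) -> bool:
        active = {0}
        for ch in text:
            # Each active state's lookup is independent work -- the part a
            # one-state-active decomposition would map to separate modules.
            active = set().union(*(transitions[s].get(ch, set()) for s in active))
            if not active:
                return False
        return bool(active & accepting)

    print(nfa_match("bbaab"))  # True
    print(nfa_match("aba"))    # False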
Abstract:
Performance evaluation of parallel software and architectural exploration of innovative hardware support face a common challenge with emerging manycore platforms: they are limited by the slow running time and low accuracy of software simulators. Manycore FPGA prototypes are difficult to build, but they offer great rewards. Software running on such prototypes runs orders of magnitude faster than on current simulators. Moreover, researchers gain significant architectural insight during the modeling process. We use the Formic FPGA prototyping board [1], which specifically targets scalable and cost-efficient multi-board prototyping, to build and test a 64-board model of a 512-core, MicroBlaze-based, non-coherent hardware prototype with a full network-on-chip in a 3D-mesh topology. We expand the hardware architecture to include the ARM Versatile Express platforms and build a 520-core heterogeneous prototype of 8 Cortex-A9 cores and 512 MicroBlaze cores. We then develop an MPI library for the prototype and evaluate it extensively using several bare-metal and MPI benchmarks. We find that our processor prototype is highly scalable, faithfully models single-chip multicore architectures, and is a very efficient platform for parallel programming research, being 50,000 times faster than software simulation.
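A minimal sketch of a ping-pong kernel of the kind typically used in such MPI evaluations; it is written against mpi4py purely as a stand-in and is not the bare-metal MPI library developed for the Formic/Versatile Express prototype:

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    msg = np.zeros(1 << 10, dtype=np.uint8)  # 1 KiB message buffer
    reps = 1000

    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(reps):
        if rank == 0:
            comm.Send([msg, MPI.BYTE], dest=1, tag=0)
            comm.Recv([msg, MPI.BYTE], source=1, tag=0)
        elif rank == 1:
            comm.Recv([msg, MPI.BYTE], source=0, tag=0)
            comm.Send([msg, MPI.BYTE], dest=0, tag=0)
    t1 = MPI.Wtime()

    if rank == 0:
        print("average round-trip latency: %.2f us" % ((t1 - t0) / reps * 1e6))

Launched with two ranks under mpirun, rank 0 reports the averaged round-trip latency; sweeping the message size turns the same kernel into a bandwidth benchmark.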
Abstract:
Northern Irish (and all UK-based) health care is facing major challenges. This article uses a specific theory to recommend and construct a framework for addressing challenges faced by the author, such as deficits in compression bandaging techniques in healing venous leg ulcers and the resistance encountered when using evidence-based research within this practice. The article investigates the challenges faced by a newly formed community nursing team. It explores how specialist knowledge and skills are employed in tissue viability and how they enhance the management of venous leg ulceration by the community nursing team. To address these challenges, and following a process of reflection, Lewin's force-field analysis model of change management can be used as a framework for the recommendations made.
Abstract:
Porous poly(L-lactic acid) (PLA) scaffolds of 85 per cent and 90 per cent porosity are prepared using a polymer sintering and porogen leaching method. Different weight fractions of 10 per cent, 30 per cent, and 50 per cent of hydroxyapatite (HA) are added to the PLA to control the acidity and degradation rate. The three-dimensional (3D) morphology and surface porosity are examined using micro-computed tomography (micro-CT), optical microscopy, and scanning electron microscopy (SEM). Results indicate that the surface porosity does not change on the addition of HA. The micro-CT examinations show a slight decrease in the pore size and an increase in the wall thickness, accompanied by reduced anisotropy, for the scaffolds containing HA. Scanning electron micrographs show detectable interconnected pores for the scaffold with pure PLA. Addition of HA results in agglomeration of the HA particles and reduced leaching of the porogen. Compression tests of the scaffolds identify three stages in the stress-strain curve. The addition of HA results in a reduction in the modulus of the scaffold in the first stage, elastic bending of the wall, but this is reversed for the second and third stages, collapse of the wall and densification. In the scaffolds with 85 per cent porosity, the addition of a high percentage of HA could result in a 70 per cent decrease in stiffness in the first stage, a 200 per cent increase in stiffness in the second stage, and a 20 per cent increase in stiffness in the third stage. The results of these tests are compared with the Gibson cellular material model, which is proposed for prediction of the behaviour of cellular materials under compression. The pH and molecular weight changes are tracked for the scaffolds over a period of 35 days. The addition of HA keeps the pH in the alkaline region, which results in a higher rate of degradation early in the observation period, followed by a reduced rate of degradation later in the process. The final molecular weight is higher for the scaffolds with HA than for scaffolds of pure PLA. The manufactured scaffolds offer acceptable properties in terms of pore size range, interconnectivity of the pores, and porosity for a non-load-bearing bone graft substitute; however, improvement to the mixing of the PLA and HA phases is required to achieve better integrity of the composite scaffolds. © 2008 IMechE.
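For context, the Gibson model referred to (the Gibson-Ashby cellular-solid model) predicts for open-cell foams that the relative elastic modulus scales roughly with the square of relative density, E*/E_s ≈ C(ρ*/ρ_s)², with C on the order of 1. A minimal sketch with an assumed nominal solid-PLA modulus (not a value taken from the paper) shows how strongly the 85 and 90 per cent porosities alone depress the predicted stiffness:

    # Gibson-Ashby open-cell scaling: E*/E_s ~ C * (rho*/rho_s)**2, with C ~ 1.
    def gibson_ashby_modulus(porosity: float, e_solid: float, c: float = 1.0) -> float:
        relative_density = 1.0 - porosity
        return c * relative_density**2 * e_solid

    E_SOLID_PLA = 3.5e9  # Pa, assumed nominal modulus of solid PLA (illustrative)
    for porosity in (0.85, 0.90):
        e_scaffold = gibson_ashby_modulus(porosity, E_SOLID_PLA)
        print(f"porosity {porosity:.0%}: predicted modulus ~ {e_scaffold / 1e6:.0f} MPa")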
Abstract:
This paper describes an investigation of the effect of fill factor on the compaction behaviour of granules during tabletting and hence on the mechanical properties of the tablets formed. The fill factor, which is the ratio of the volume of wet powder material to the vessel volume of the granulator, was used as an indicator of batch size. It has been established previously that in high-shear granulation the batch size influences the size distribution and the mechanical properties of the granules [1]. The work reported in this paper is an extension of the work presented in [1]; hence granules from the same batches were used in the production of tablets. The same tabletting conditions were employed throughout to allow a comparison of the tablet properties. The compaction properties of the granules are inferred from the data generated during the tabletting process. The tablet strength and dissolution properties of the tablets were also measured. The results obtained show that the granule batch size affects the strength and dissolution of the tablets formed: tablets produced from large batches were found to be weaker and had a faster dissolution rate. The fill factor was also found to affect the tablet-to-tablet variation of a non-functional active pharmaceutical ingredient included in the feed powder, with tablets produced from larger batches showing greater variation than those from smaller batches.
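A minimal sketch of the fill-factor definition used as the batch-size indicator (the volumes below are made up for illustration):

    # Fill factor = volume of wet powder material / vessel volume of the granulator.
    def fill_factor(wet_powder_volume_l: float, vessel_volume_l: float) -> float:
        return wet_powder_volume_l / vessel_volume_l

    print(fill_factor(wet_powder_volume_l=12.0, vessel_volume_l=25.0))  # 0.48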
Abstract:
The cycle of the academic year impacts on efforts to refine and improve major group design-build-test (DBT) projects, since the time needed to run and evaluate a project is generally a full calendar year. By definition, these major projects have a high degree of complexity, since they act as the vehicle for the application of a range of technical knowledge and skills. There is also often an extensive list of desired learning outcomes, extending to professional skills and attributes such as communication and team working. It is contended that student project definition and operation, like any other designed product, requires a number of iterations to achieve optimisation. The problem, however, is that if this cycle takes four or more years, then by the time a project’s operational structure is fine-tuned it is quite possible that the project theme is no longer relevant. The majority of the students will also inevitably have a sub-optimal project experience over the 5-year development period. It would be much better if the ratio were flipped, so that an optimised project definition could be achieved in one year and would have sufficient longevity to run in the same efficient manner for four further years. An increased number of parallel investigators would also enable more varied and adventurous project concepts to be examined than a single institution could undertake alone in the same time frame.
This work-in-progress paper describes a parallel processing methodology for the accelerated definition of new student DBT project concepts. The methodology has been devised and implemented by a number of CDIO partner institutions in the UK & Ireland region. An agreed project theme was run in parallel across the institutions in a single academic year, with the objective of replacing a multi-year iterative cycle. In addition, the close collaboration and peer learning derived from the interaction between the coordinating academics facilitated the development of faculty teaching skills in line with CDIO Standard 10.