994 results for "structured parallel computations"
Abstract:
Background: The use of the Objective Structured Clinical Examination (OSCE) in Pharmacy has been explored; however, this is the first attempt at Queen’s University School of Pharmacy, Belfast, to assess students via this method in a module where chemistry is the main discipline.
Aims: To devise an OSCE to assess undergraduates’ ability to check extemporaneously dispensed products for clinical and formulation errors, and to consider, from both staff and student perspectives, whether this is a viable method of assessment in such a science-based class.
Method: Students rotated around a number of stations, checking the product, the corresponding prescription, and the formulation record sheet detailing the theory behind the formulation. They were assessed on their ability to spot intentional mistakes at each station.
Results: Of the 79 students questioned, 95% indicated that the OSCE made them aware of the importance of the clinical check carried out by the pharmacist. Nearly all of the undergraduates (72 out of 79) felt that the OSCE made them aware of the type of mistakes that students make in class. Most of the academic team members (5 out of 7) strongly agreed that it made students aware of ‘point of dispensing’ checks carried out by pharmacists, in addition to helping them prepare for their exam.
Conclusion: The OSCE assesses both scientific and formulation skills and has increased the diversity of assessment in this module, bringing many additional benefits for the undergraduates, since it measures their ability to exercise professional judgement in a time-constrained environment and, in this way, mirrors the conditions under which many pharmacists work.
Abstract:
Refactoring is the process of changing the structure of a program without changing its behaviour. So far, refactoring has been deployed effectively only for sequential programs. However, with the increased availability of multicore (and, soon, manycore) systems, refactoring can play an important role in helping both expert and non-expert parallel programmers structure and implement their parallel programs. This paper describes the design of a new refactoring tool that is aimed at increasing the programmability of parallel systems. To motivate our design, we refactor a number of examples in C, C++ and Erlang into good parallel implementations, using a set of formal pattern rewrite rules.
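As a rough illustration of the kind of pattern rewrite such a tool automates (this sketch is ours, not taken from the paper), consider a sequential map over independent data refactored into a parallel map, one of the simplest parallel patterns:

    // Hypothetical before/after pair for a map-pattern rewrite (illustrative only).
    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // Before: a sequential map; every iteration is independent.
    void seq_map(std::vector<double>& v) {
        for (auto& x : v) x = std::sqrt(x);
    }

    // After: the same computation restructured as a parallel map over chunks.
    void par_map(std::vector<double>& v) {
        unsigned n = std::thread::hardware_concurrency();
        if (n == 0) n = 2;  // fall back if the core count is unknown
        std::size_t chunk = (v.size() + n - 1) / n;
        std::vector<std::thread> workers;
        for (unsigned w = 0; w < n; ++w) {
            std::size_t lo = w * chunk;
            std::size_t hi = std::min(v.size(), lo + chunk);
            workers.emplace_back([&v, lo, hi] {
                for (std::size_t i = lo; i < hi; ++i) v[i] = std::sqrt(v[i]);
            });
        }
        for (auto& t : workers) t.join();
    }

The point of driving such rewrites by formal rules is that the transformation is applied mechanically, with the tool checking the side conditions (here, independence of iterations) that make it safe.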
Abstract:
This paper describes the ParaPhrase project, a new 3-year targeted research project funded under EU Framework 7 Objective 3.4 (Computer Systems), starting in October 2011. ParaPhrase aims to follow a new approach to introducing parallelism, using advanced refactoring techniques coupled with high-level parallel design patterns. The refactoring approach will use these design patterns to restructure programs defined as networks of software components into other forms that are better suited to parallel execution. The programmer will be aided by high-level cost information integrated into the refactoring tools. The implementation of these patterns will then use a well-understood algorithmic skeleton approach to achieve good parallelism. A key ParaPhrase design goal is that parallel components should match heterogeneous architectures, defined in terms of CPU/GPU combinations, for example. To achieve this, the ParaPhrase approach will map components to the available hardware at link time, and will then re-map them during program execution, taking account of multiple applications, changes in hardware resource availability, the desire to reduce communication costs, and so on. In this way, we aim to develop a new approach to programming that can produce software able to adapt to dynamic changes in the system environment. Moreover, by using a strong component basis for parallelism, we can achieve potentially significant gains in reducing sharing at a high level of abstraction, and so in reducing or even eliminating the costs usually associated with cache management, locking, and synchronisation.
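For concreteness, an algorithmic skeleton packages a parallel pattern behind an ordinary function interface. The sketch below is our own minimal task farm, not the project's implementation (ParaPhrase's actual skeleton libraries are considerably more sophisticated): independent inputs are handed to worker instances and the results collected in order.

    // Minimal task-farm skeleton: emitter -> workers -> collector (illustrative only).
    #include <functional>
    #include <future>
    #include <vector>

    template <typename In, typename Out>
    std::vector<Out> farm(const std::vector<In>& inputs,
                          std::function<Out(const In&)> worker) {
        std::vector<std::future<Out>> futures;   // emitter: one task per input
        futures.reserve(inputs.size());
        for (const auto& in : inputs)
            futures.push_back(std::async(std::launch::async, worker, std::cref(in)));
        std::vector<Out> results;                // collector: gather results in order
        results.reserve(futures.size());
        for (auto& f : futures) results.push_back(f.get());
        return results;
    }

A refactoring in this style replaces hand-written thread management with a single call to such a skeleton, leaving the mapping of workers to heterogeneous hardware to the runtime, as the abstract describes.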
Abstract:
This work presents a novel algorithm for decomposing nondeterministic finite automata (NFAs) into one-state-active modules for parallel execution on Multiprocessor Systems on Chip (MP-SoC). Furthermore, performance studies of Snort, Bro and Linux-L7 regular expressions on a 16-PE system are presented.
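The abstract is terse, so the following sketch is only our reading of the one-state-active idea, not the paper's algorithm: each module owns a single NFA state and, on every input symbol, the modules update their activity in parallel from the activity of their predecessors. In software form:

    // Frontier-style NFA simulation; in the hardware scheme each state s would be
    // a separate module, so the per-state loop below stands in for the parallel
    // update (illustrative reconstruction only).
    #include <array>
    #include <cstdint>
    #include <string>
    #include <vector>

    struct Nfa {
        // trans[s][c] lists the successor states of s on input byte c.
        std::vector<std::array<std::vector<int>, 256>> trans;
        std::vector<std::uint8_t> accepting;  // accepting[s] != 0 iff s accepts
        int start = 0;
    };

    bool matches(const Nfa& nfa, const std::string& input) {
        const int n = static_cast<int>(nfa.trans.size());
        std::vector<std::uint8_t> active(n, 0), next(n, 0);
        active[nfa.start] = 1;
        for (unsigned char c : input) {
            next.assign(n, 0);
            for (int s = 0; s < n; ++s)   // one "module" per state
                if (active[s])
                    for (int t : nfa.trans[s][c]) next[t] = 1;
            active.swap(next);
        }
        for (int s = 0; s < n; ++s)
            if (active[s] && nfa.accepting[s]) return true;
        return false;
    }

Partitioning the states across 16 processing elements then amounts to splitting the per-state loop, presumably at module granularity in the MP-SoC mapping.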
Abstract:
This paper argues that the structured dependency thesis must be extended to incorporate political power. It outlines a political framework of analysis with which to identify who gains and who loses from social policy. I argue that public policy for older people is a product not only of social structures but also of political decision-making. The Schneider and Ingram (1993) ‘target populations’ model is used to investigate how the social construction of groups as dependent equates with lower levels of influence on policy making. In United Kingdom and European research, older people are identified as politically quiescent, but conversely in the United States seniors are viewed as one of the most influential and cohesive interest groups in the political culture. Why are American seniors perceived as politically powerful, while older people in Europe are viewed as dependent and politically weak? This paper applies the ‘target populations’ model to senior policy in the Republic of Ireland to investigate how theoretical work in the United States may be used to identify the significance of senior power in policy development. I conclude that research must recognise the connections between power, politics and social constructions to investigate how state policies can influence the likelihood that seniors will resist structured dependency by political means.
Abstract:
Performance evaluation of parallel software and architectural exploration of innovative hardware support face a common challenge with emerging manycore platforms: they are limited by the slow running time and the low accuracy of software simulators. Manycore FPGA prototypes are difficult to build, but they offer great rewards. Software running on such prototypes runs orders of magnitude faster than on current simulators. Moreover, researchers gain significant architectural insight during the modeling process. We use the Formic FPGA prototyping board [1], which specifically targets scalable and cost-efficient multi-board prototyping, to build and test a 64-board model of a 512-core, MicroBlaze-based, non-coherent hardware prototype with a full network-on-chip in a 3D-mesh topology. We expand the hardware architecture to include the ARM Versatile Express platforms and build a 520-core heterogeneous prototype of 8 Cortex-A9 cores and 512 MicroBlaze cores. We then develop an MPI library for the prototype and evaluate it extensively using several bare-metal and MPI benchmarks. We find that our processor prototype is highly scalable, faithfully models single-chip multicore architectures, and is a very efficient platform for parallel programming research, being 50,000 times faster than software simulation.
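As an example of the simplest class of MPI benchmark one would run on such a prototype (a generic ping-pong latency test, not a benchmark taken from the paper):

    // Two-rank ping-pong: measures average round-trip latency for small messages.
    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        const int iters = 1000, bytes = 1024;
        std::vector<char> buf(bytes, 0);
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; ++i) {
            if (rank == 0) {
                MPI_Send(buf.data(), bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf.data(), bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf.data(), bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf.data(), bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        if (rank == 0)
            std::printf("avg round trip: %.2f us\n",
                        (MPI_Wtime() - t0) / iters * 1e6);
        MPI_Finalize();
        return 0;
    }

On a non-coherent message-passing design like the one described, exactly this kind of point-to-point microbenchmark exposes the network-on-chip latency that the MPI library must live with.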
Abstract:
Across a range of domains in psychology, different theories assume different mental representations of knowledge. For example, in the literature on category-based inductive reasoning, certain theories (e.g., Rogers & McClelland, 2004; Sloutsky & Fisher, 2008) assume that the knowledge upon which inductive inferences are based is associative, whereas others (e.g., Heit & Rubinstein, 1994; Kemp & Tenenbaum, 2009; Osherson, Smith, Wilkie, López, & Shafir, 1990) assume that knowledge is structured. In this article we investigate whether associative and structured knowledge underlie inductive reasoning to different degrees under different processing conditions. We develop a measure of knowledge about the degree of association between categories and show that it dissociates from measures of structured knowledge. In Experiment 1, participants rated the strength of inductive arguments whose categories were either taxonomically or causally related. A measure of associative strength predicted reasoning when people had to respond fast, whereas causal and taxonomic knowledge explained inference strength when people responded slowly. In Experiment 2, we also manipulated whether the causal link between the categories was predictive or diagnostic. Participants preferred predictive to diagnostic arguments except when they responded under cognitive load. In Experiment 3, using an open-ended induction paradigm, people generated and evaluated their own conclusion categories. Inductive strength was predicted by associative strength under heavy cognitive load, whereas an index of structured knowledge was more predictive of inductive strength under minimal cognitive load. Together, these results suggest that associative and structured models of reasoning apply best under different processing conditions, and that the application of structured knowledge in reasoning is often effortful.
Abstract:
The cycle of the academic year impacts on efforts to refine and improve major group design-build-test (DBT) projects, since the time to run and evaluate a project is generally a full calendar year. By definition, these major projects have a high degree of complexity, since they act as the vehicle for the application of a range of technical knowledge and skills. There is also often an extensive list of desired learning outcomes, extending to professional skills and attributes such as communication and team working. It is contended that student project definition and operation, like any other designed product, requires a number of iterations to achieve optimisation. The problem, however, is that if this cycle takes four or more years, then by the time a project’s operational structure is fine-tuned it is quite possible that the project theme is no longer relevant. The majority of the students will also inevitably experience a sub-optimal project over the five-year development period. It would be much better if the ratio were flipped, so that in one year an optimised project definition could be achieved with sufficient longevity to run in the same efficient manner for four further years. An increased number of parallel investigators would also enable more varied and adventurous project concepts to be examined than a single institution could undertake alone in the same time frame.
This work-in-progress paper describes a parallel processing methodology for the accelerated definition of new student DBT project concepts. The methodology has been devised and implemented by a number of CDIO partner institutions in the UK & Ireland region. An agreed project theme was run in parallel across the institutions in a single academic year, with the objective of replacing a multi-year iterative cycle. Additionally, the close collaboration and peer learning arising from the interaction between the coordinating academics facilitated the development of faculty teaching skills in line with CDIO Standard 10.
Abstract:
A maths support system for first-year engineering students with non-traditional entry qualifications has involved students working through practice questions structured to correspond with the maths module that runs in parallel. The setting was informal, and there was significant one-to-one assistance. The non-traditional students (who are known to be less well prepared mathematically) were explicitly contacted about the maths support in the first week of their university studies, and they generally seemed keen to participate. However, attendance at support classes was relatively low on average, though it varied greatly between students. Students appreciated the personal help and having time to ask questions. It seemed that having a small group of friends within the class promoted attendance; perhaps the mutual support, or the comfort of knowing that they all had similar mathematical difficulties, was a factor. The classes helped develop confidence. Attendance was hindered when the class was timetabled too soon after the relevant lecture, as students were reluctant to come with no work done beforehand. Although students at risk due to mathematical unpreparedness can easily be identified at an early stage of their university career, encouraging them to take up the maths support remains a major, ongoing problem.
Abstract:
We present TProf, an energy profiling tool for OpenMP-like task-parallel programs. To compute the energy consumed by each task in a parallel application, TProf dynamically traces the parallel execution and uses a novel technique to estimate per-task energy consumption. To achieve this, TProf apportions the total processor energy among cores, overcoming a limitation of existing approaches that would otherwise make per-task accounting in parallel executions impossible. We demonstrate the value of TProf by characterizing a set of task-parallel programs, where we find that data locality, memory access patterns and task working sets are responsible for significant variance in energy consumption between seemingly homogeneous tasks. In addition, we identify opportunities for fine-grained energy optimization by applying per-task Dynamic Voltage and Frequency Scaling (DVFS).
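A simplified version of the apportioning idea (our reading of the abstract; TProf's actual estimation technique is more involved): split a processor-level energy reading for an interval among the tasks that ran, in proportion to the core time each kept busy.

    // Proportional per-task energy apportioning over one measurement interval
    // (illustrative only; the names and the attribution rule are our assumptions).
    #include <cstddef>
    #include <vector>

    struct TaskSample {
        int task_id;      // task that ran during the interval
        double busy_sec;  // time it kept its core busy in the interval
    };

    // package_joules: processor-level energy for the interval, e.g. from
    // hardware energy counters.
    std::vector<double> apportion(double package_joules,
                                  const std::vector<TaskSample>& samples) {
        double total_busy = 0.0;
        for (const auto& s : samples) total_busy += s.busy_sec;
        std::vector<double> joules(samples.size(), 0.0);
        if (total_busy <= 0.0) return joules;  // nothing ran; nothing to charge
        for (std::size_t i = 0; i < samples.size(); ++i)
            joules[i] = package_joules * samples[i].busy_sec / total_busy;
        return joules;
    }

Accumulating these shares per task across all intervals yields the per-task totals that make comparisons between seemingly homogeneous tasks, and decisions such as per-task DVFS, possible.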