42 results for Buyback Programs


Relevance:

20.00%

Publisher:

Abstract:

The prevalence of multicore processors is bound to drive most kinds of software development towards parallel programming. To limit the difficulty and overhead of parallel software design and maintenance, it is crucial that parallel programming models allow an easy-to-understand, concise and dense representation of parallelism. Parallel programming models such as Cilk++ and Intel TBBs attempt to offer a better, higher-level abstraction for parallel programming than threads and locking synchronization. It is not straightforward, however, to express all patterns of parallelism in these models. Pipelines are an important parallel construct, yet they are difficult to express in Cilk and TBBs in a straightforward way without a verbose restructuring of the code. In this paper we demonstrate that pipeline parallelism can be easily and concisely expressed in a Cilk-like language, which we extend with input, output and input/output dependency types on procedure arguments, enforced at runtime by the scheduler. We evaluate our implementation on real applications and show that our Cilk-like scheduler, extended to track and enforce these dependencies, has performance comparable to Cilk++.
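The in/out/inout dependency types described above belong to the authors' extended Cilk-like language, so the snippet below is only an analogy: OpenMP task depend clauses provide the same three dependency kinds and give a feel for how concisely a pipeline can be written once the scheduler enforces argument dependencies. The stage functions and the per-stage state variable are hypothetical.

```c
#include <omp.h>
#include <stdio.h>

#define N 8

/* Hypothetical pipeline stages operating on one buffer element per item. */
static void produce(int *buf, int i)   { *buf = i; }
static void transform(int *buf)        { *buf *= 2; }
static void consume(const int *buf)    { printf("%d\n", *buf); }

int main(void) {
    int buf[N];
    int stage2_state = 0;  /* stands in for the middle stage's internal state */

    #pragma omp parallel
    #pragma omp single
    for (int i = 0; i < N; i++) {
        /* Stage 1 writes buf[i]: an "output" dependency. */
        #pragma omp task depend(out: buf[i]) firstprivate(i)
        produce(&buf[i], i);

        /* Stage 2 updates buf[i] and its own state: "input/output" dependencies.
         * The dependency on stage2_state serializes this stage across items,
         * which is what makes the construct a pipeline rather than a plain
         * parallel loop. */
        #pragma omp task depend(inout: buf[i]) depend(inout: stage2_state) firstprivate(i)
        transform(&buf[i]);

        /* Stage 3 only reads buf[i]: an "input" dependency. */
        #pragma omp task depend(in: buf[i]) firstprivate(i)
        consume(&buf[i]);
    }
    return 0;
}
```

The runtime derives the stage ordering for each item from the declared dependencies, and different items overlap in different stages without explicit queues or restructuring of the sequential code.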

Relevance:

20.00%

Publisher:

Abstract:

Service user forums have the potential to improve awareness of services, empower service users and strengthen community partnerships within an inclusive treatment and rehabilitation framework. The research aimed to investigate perspectives on service user involvement in order to inform the development of effective service user forum(s) in the west of Ireland. A total of 30 interviews with key service providers and 12 interviews with service users were conducted, with interview questions focusing on: (1) awareness of the Service User Support Team and (2) barriers to service user involvement and the development of service user forums in the region. An integrated data collection and thematic analysis was undertaken. Current levels of service user involvement were low, restricted by one-way communication, and appeared grounded in user-provider power differentials and stigma relating to drug dependency. Service providers queried the terms of reference, capacity and training that would be needed for service user forums to advocate and lobby for service users. The use of existing support groups, creation of internet user forums and rotation of rural meetings were recommended to promote engagement among service users. The research underscores the need for transparency, resources and a framework for good practice that reflects a participatory approach.


Read More: http://informahealthcare.com/doi/abs/10.3109/09687637.2012.671860

Relevance:

20.00%

Publisher:

Abstract:

Most parallel computing applications in high-performance computing use the Message Passing Interface (MPI) API. Given the fundamental importance of parallel computing to science and engineering research, application correctness is paramount. MPI was originally developed around 1993 by the MPI Forum, a group of vendors, parallel programming researchers, and computational scientists. However, the document defining the standard is not issued by an official standards organization but has become a de facto standard. © 2011 ACM.

Relevance:

20.00%

Publisher:

Abstract:

We propose a dynamic verification approach for large-scale message passing programs to locate correctness bugs caused by unforeseen nondeterministic interactions. This approach hinges on an efficient protocol to track the causality between nondeterministic message receive operations and potentially matching send operations. We show that causality tracking protocols that rely solely on logical clocks fail to capture all nuances of MPI program behavior, including the variety of ways in which nonblocking calls can complete. We therefore formally define the matches-before relation underlying the MPI standard, and devise lazy-update logical-clock-based algorithms that can correctly discover all potential outcomes of nondeterministic receives in practice. The resulting protocol, LLCP, can achieve the same coverage as a vector-clock-based algorithm while maintaining good scalability. LLCP allows us to analyze realistic MPI programs involving a thousand MPI processes, incurring only modest overheads in terms of communication bandwidth, latency, and memory consumption. © 2011 IEEE.
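The nondeterminism at issue typically comes from wildcard receives: a receive posted with MPI_ANY_SOURCE may match sends from different ranks depending on timing, so a bug can hide behind the match that did not occur in a particular run. A minimal illustration, not taken from the paper (run with at least three ranks):

```c
#include <mpi.h>
#include <stdio.h>

/* Rank 0 posts two wildcard receives; ranks 1 and 2 each send one message.
 * Either send may satisfy either receive, so the order in which sources
 * are matched is nondeterministic and can change from run to run. */
int main(int argc, char **argv) {
    int rank, value;
    MPI_Status st0, st1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &st0);
        MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &st1);
        printf("matched sources: %d then %d\n", st0.MPI_SOURCE, st1.MPI_SOURCE);
    } else if (rank == 1 || rank == 2) {
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

A verifier in the spirit of the paper has to discover both possible match orders, not only the one observed in a given execution, which is exactly what matches-before tracking is for.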

Relevance:

20.00%

Publisher:

Abstract:

Consideration of the ethical, social, and policy implications of research has become increasingly important to scientists and scholars whose work focuses on brain and mind, but limited empirical data exist on the education in ethics available to them. We examined the current landscape of ethics training in neuroscience programs, beginning with the Canadian context specifically, to elucidate the perceived needs of mentors and trainees and offer recommendations for resource development to meet those needs. We surveyed neuroscientists at all training levels and interviewed directors of neuroscience programs and training grants. A total of 88% of survey respondents reported general interest in ethics, and 96% indicated a desire for more ethics content as it applies to brain research and clinical translation. Expert interviews revealed formal ethics education in over half of programs and in 90% of grants-based programs. Lack of time, resources, and expertise, however, are major barriers to expanding ethics content in neuroscience education. We conclude with an initial set of recommendations to address these barriers; these include the development of flexible, tailored ethics education tools, increased financial support for ethics training, and strategies for fostering collaboration between ethics experts, neuroscience program directors, and funding agencies. © 2010 the Authors. Journal Compilation © 2010 International Mind, Brain, and Education Society and Blackwell Publishing, Inc.

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: To compare the ability of Glaucoma Progression Analysis (GPA) and Threshold Noiseless Trend (TNT) programs to detect visual-field deterioration.

METHODS: Patients with open-angle glaucoma followed for a minimum of 2 years and with a minimum of seven reliable visual fields were included. Progression was assessed subjectively by four masked glaucoma experts and compared with GPA and TNT results. Each case was judged to be stable, deteriorated, or suspicious of deterioration.

RESULTS: A total of 56 eyes of 42 patients were followed, with a mean of 7.8 (SD 1.0) tests over an average of 5.5 (1.04) years. Interobserver agreement on detecting progression was good (mean kappa = 0.57). Progression was detected in 10-19 eyes by the experts, in six by GPA and in 24 by TNT. Using the consensus expert opinion as the gold standard (four clinicians detected progression), the GPA sensitivity and specificity were 75% and 83%, respectively, while the TNT sensitivity and specificity were 100% and 77%, respectively.

CONCLUSION: TNT showed greater concordance with the experts than GPA in the detection of visual-field deterioration. GPA showed a high specificity but lower sensitivity, mainly detecting cases of high focality and pronounced mean defect slopes.

Relevance:

20.00%

Publisher:

Abstract:

The inherent difficulty of thread-based shared-memory programming has recently motivated research in high-level, task-parallel programming models. Recent advances in task-parallel models add implicit synchronization, where the system automatically detects and satisfies data dependencies among spawned tasks. However, dynamic dependence analysis incurs significant runtime overheads, because the runtime must track task resources and use this information to schedule tasks while avoiding conflicts and races.
We present SCOOP, a compiler that effectively integrates static and dynamic analysis in code generation. SCOOP combines context-sensitive points-to, control-flow, escape, and effect analyses to remove redundant dependence checks at runtime. Our static analysis can work in combination with existing dynamic analyses and task-parallel runtimes that use annotations to specify tasks and their memory footprints. We use our static dependence analysis to detect non-conflicting tasks and an existing dynamic analysis to handle the remaining dependencies. We evaluate the resulting hybrid dependence analysis on a set of task-parallel programs.
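SCOOP's annotation syntax is not given in the abstract, so the sketch below uses OpenMP task depend clauses on array sections as a stand-in for declaring task memory footprints; the array and its partitioning are made up. The point is that the first two footprints are disjoint, which a static analysis could prove once and for all, while the reader task genuinely depends on both writers and is the kind of case left to the dynamic analysis.

```c
#include <omp.h>
#include <stdio.h>

#define N 1024

int main(void) {
    static double a[N];

    #pragma omp parallel
    #pragma omp single
    {
        /* Footprint: first half of a. */
        #pragma omp task depend(inout: a[0:N/2])
        for (int i = 0; i < N / 2; i++) a[i] += 1.0;

        /* Footprint: second half of a; disjoint from the task above,
         * so no runtime ordering between the two is ever required. */
        #pragma omp task depend(inout: a[N/2:N/2])
        for (int i = N / 2; i < N; i++) a[i] += 2.0;

        /* Reads both halves, so it must wait for both writer tasks. */
        #pragma omp task depend(in: a[0:N/2]) depend(in: a[N/2:N/2])
        {
            double sum = 0.0;
            for (int i = 0; i < N; i++) sum += a[i];
            printf("sum = %f\n", sum);
        }
    }
    return 0;
}
```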

Relevance:

20.00%

Publisher:

Abstract:

Refactoring is the process of changing the structure of a program without changing its behaviour. Refactoring has so far been deployed effectively only for sequential programs. However, with the increased availability of multicore (and, soon, manycore) systems, refactoring can play an important role in helping both expert and non-expert parallel programmers structure and implement their parallel programs. This paper describes the design of a new refactoring tool that is aimed at increasing the programmability of parallel systems. To motivate our design, we refactor a number of examples in C, C++ and Erlang into good parallel implementations, using a set of formal pattern rewrite rules. © 2013 Springer-Verlag Berlin Heidelberg.
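The paper's rewrite rules and tool output are not reproduced here; as a flavour of the kind of transformation involved, a sequential C loop recognized as an instance of the map/farm pattern might be rewritten into a parallel form along the following lines (the worker function is a placeholder):

```c
#include <omp.h>

/* Placeholder worker: any pure, per-element computation. */
static double process(double x) { return x * x; }

/* Before: the sequential loop a refactoring tool would identify
 * as an instance of the map/farm pattern. */
void map_seq(const double *in, double *out, int n) {
    for (int i = 0; i < n; i++)
        out[i] = process(in[i]);
}

/* After: the same loop rewritten to the parallel form of the pattern.
 * Behaviour is preserved because the iterations are independent. */
void map_par(const double *in, double *out, int n) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        out[i] = process(in[i]);
}
```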

Relevance:

20.00%

Publisher:

Abstract:

This article presents a systematic review of research on the achievement outcomes of all types of approaches to teaching science in elementary schools. Study inclusion criteria included use of randomized or matched control groups, a study duration of at least 4 weeks, and use of achievement measures independent of the experimental treatment. A total of 23 studies met these criteria. Among studies evaluating inquiry-based teaching approaches, programs that used science kits did not show positive outcomes on science achievement measures (weighted ES=+0.02 in 7 studies), but inquiry-based programs that emphasized professional development but not kits did show positive outcomes (weighted ES=+0.36 in 10 studies). Technological approaches integrating video and computer resources with teaching and cooperative learning showed positive outcomes in a few small, matched studies (ES=+0.42 in 6 studies). The review concludes that science teaching methods focused on enhancing teachers’ classroom instruction throughout the year, such as cooperative learning and science-reading integration, as well as approaches that give teachers technology tools to enhance instruction, have significant potential to improve science learning.

Relevance:

20.00%

Publisher:

Abstract:

We present TProf, an energy profiling tool for OpenMP-like task-parallel programs. To compute the energy consumed by each task in a parallel application, TProf dynamically traces the parallel execution and uses a novel technique to estimate per-task energy consumption. To achieve this estimation, TProf apportions the total processor energy among cores, overcoming a limitation of existing approaches that would otherwise make per-task accounting in parallel executions impossible. We demonstrate the value of TProf by characterizing a set of task-parallel programs, where we find that data locality, memory access patterns and task working sets are responsible for significant variance in energy consumption between seemingly homogeneous tasks. In addition, we identify opportunities for fine-grained energy optimization by applying per-task Dynamic Voltage and Frequency Scaling (DVFS).
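The abstract does not give TProf's estimation formula, so the sketch below only illustrates the general idea of apportioning a package-level energy reading among task executions in proportion to the core time they consumed; the energy-reading stub and the proportional split are assumptions, not TProf's actual method.

```c
#include <stdio.h>

#define NTASKS 4

/* One record per task execution: how long it kept a core busy. */
struct task_sample {
    const char *name;
    double busy_seconds;
};

/* Stub for a package energy reading over the same window, e.g. from a
 * hardware counter such as RAPL on x86; hard-coded here for illustration. */
static double read_package_energy_joules(void) { return 42.0; }

/* Naive apportioning: credit each task with a share of the package energy
 * proportional to its share of the total busy core time. */
static void apportion(const struct task_sample *t, int n, double *joules) {
    double total_busy = 0.0;
    for (int i = 0; i < n; i++) total_busy += t[i].busy_seconds;

    double package_energy = read_package_energy_joules();
    for (int i = 0; i < n; i++)
        joules[i] = package_energy * (t[i].busy_seconds / total_busy);
}

int main(void) {
    struct task_sample tasks[NTASKS] = {
        {"taskA", 1.0}, {"taskB", 0.5}, {"taskC", 2.0}, {"taskD", 0.5}
    };
    double joules[NTASKS];

    apportion(tasks, NTASKS, joules);
    for (int i = 0; i < NTASKS; i++)
        printf("%s: %.2f J\n", tasks[i].name, joules[i]);
    return 0;
}
```

A purely time-proportional split like this ignores the very effects the paper measures (data locality and memory behaviour), which is precisely why a more careful per-task estimation technique is needed.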

Relevance:

20.00%

Publisher:

Abstract:

Following earlier work demonstrating the utility of Orc as a means of specifying and reasoning about grid applications, we propose the enhancement of such specifications with metadata that provide a means to extend an Orc specification with implementation-oriented information. We argue that such specifications provide a useful refinement step, allowing reasoning about implementation-related issues ahead of actual implementation or even prototyping. As examples, we demonstrate how such extended specifications can be used for investigating security-related issues and for evaluating the cost of handling grid resource faults. The approach emphasises a semi-formal style of reasoning that makes maximum use of programmer domain knowledge and experience.