58 results for Graduate programs
Abstract:
BACKGROUND: To compare the ability of Glaucoma Progression Analysis (GPA) and Threshold Noiseless Trend (TNT) programs to detect visual-field deterioration.
METHODS: Patients with open-angle glaucoma followed for a minimum of 2 years and with a minimum of seven reliable visual fields were included. Progression was assessed subjectively by four masked glaucoma experts and compared with GPA and TNT results. Each case was judged to be stable, deteriorated, or suspicious of deterioration.
RESULTS: A total of 56 eyes of 42 patients were followed with a mean of 7.8 (SD 1.0) tests over an average of 5.5 (SD 1.04) years. Interobserver agreement on detecting progression was good (mean kappa = 0.57). Progression was detected in 10 to 19 eyes by the experts, in 6 by GPA, and in 24 by TNT. Using the consensus expert opinion as the gold standard (all four clinicians detected progression), GPA sensitivity and specificity were 75% and 83%, respectively, while TNT sensitivity and specificity were 100% and 77%, respectively.
CONCLUSION: TNT showed greater concordance with the experts than GPA in the detection of visual-field deterioration. GPA showed a high specificity but lower sensitivity, mainly detecting cases of high focality and pronounced mean defect slopes.
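As a note on the figures above (the abstract does not spell out the definitions): with the expert consensus as the gold standard, sensitivity and specificity follow the standard form:

```latex
% Standard definitions, with expert consensus as the gold standard;
% TP/FN/TN/FP count eyes judged progressing or stable by each program
% against the experts' consensus judgement.
\[
\text{sensitivity} = \frac{TP}{TP + FN},
\qquad
\text{specificity} = \frac{TN}{TN + FP}
\]
```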
Abstract:
The inherent difficulty of thread-based shared-memory programming has recently motivated research in high-level, task-parallel programming models. Recent advances in task-parallel models add implicit synchronization, where the system automatically detects and satisfies data dependencies among spawned tasks. However, dynamic dependence analysis incurs significant runtime overheads, because the runtime must track task resources and use this information to schedule tasks while avoiding conflicts and races.
We present SCOOP, a compiler that effectively integrates static and dynamic analysis in code generation. SCOOP combines context-sensitive points-to, control-flow, escape, and effect analyses to remove redundant dependence checks at runtime. Our static analysis can work in combination with existing dynamic analyses and task-parallel runtimes that use annotations to specify tasks and their memory footprints. We use our static dependence analysis to detect non-conflicting tasks and an existing dynamic analysis to handle the remaining dependencies. We evaluate the resulting hybrid dependence analysis on a set of task-parallel programs.
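To make the footprint-annotation idea concrete, here is a minimal sketch using OpenMP's task dependence clauses, a widely known analogue of the task-and-footprint annotations described above; this is illustrative only, not SCOOP's actual annotation syntax:

```c
/* Illustrative only: OpenMP task dependence clauses as an analogue of
 * annotations that declare each task's memory footprint. Compile with
 * -fopenmp; without it the pragmas are ignored and the code runs
 * sequentially. */
#include <stdio.h>

int main(void) {
    int x = 0, y = 0;
    #pragma omp parallel
    #pragma omp single
    {
        /* Task A writes x: footprint "out: x". */
        #pragma omp task depend(out: x)
        x = 42;

        /* Task B reads x and writes y. The dynamic dependence analysis
         * must order B after A at runtime; a static analysis in the
         * spirit of SCOOP aims to prove such orderings (or task
         * independence) at compile time, so redundant runtime checks
         * can be removed. */
        #pragma omp task depend(in: x) depend(out: y)
        y = x + 1;

        #pragma omp taskwait
    }
    printf("y = %d\n", y);
    return 0;
}
```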
Abstract:
Refactoring is the process of changing the structure of a program without changing its behaviour. Refactoring has so far been deployed effectively only for sequential programs. However, with the increased availability of multicore (and, soon, manycore) systems, refactoring can play an important role in helping both expert and non-expert parallel programmers structure and implement their parallel programs. This paper describes the design of a new refactoring tool that is aimed at increasing the programmability of parallel systems. To motivate our design, we refactor a number of examples in C, C++ and Erlang into good parallel implementations, using a set of formal pattern rewrite rules.
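As a rough illustration of the kind of transformation such a tool automates, here is a before/after sketch in C of a sequential map refactored into a parallel map using OpenMP; the paper's actual pattern rewrite rules are not reproduced here:

```c
/* Before/after sketch of a map-pattern refactoring; the iterations are
 * independent, which is the property a rewrite rule must establish
 * before introducing parallelism. */
#include <stddef.h>

/* Before: a sequential map over the array. */
void scale_seq(double *a, size_t n, double k) {
    for (size_t i = 0; i < n; i++)
        a[i] *= k;
}

/* After: the same map, rewritten as a parallel loop. Behaviour is
 * unchanged because no iteration touches another iteration's element. */
void scale_par(double *a, size_t n, double k) {
    #pragma omp parallel for
    for (size_t i = 0; i < n; i++)
        a[i] *= k;
}
```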
Abstract:
This article presents a systematic review of research on the achievement outcomes of all types of approaches to teaching science in elementary schools. Study inclusion criteria included use of randomized or matched control groups, a study duration of at least 4 weeks, and use of achievement measures independent of the experimental treatment. A total of 23 studies met these criteria. Among studies evaluating inquiry-based teaching approaches, programs that used science kits did not show positive outcomes on science achievement measures (weighted ES=+0.02 in 7 studies), but inquiry-based programs that emphasized professional development but not kits did show positive outcomes (weighted ES=+0.36 in 10 studies). Technological approaches integrating video and computer resources with teaching and cooperative learning showed positive outcomes in a few small, matched studies (ES=+0.42 in 6 studies). The review concludes that science teaching methods focused on enhancing teachers’ classroom instruction throughout the year, such as cooperative learning and science-reading integration, as well as approaches that give teachers technology tools to enhance instruction, have significant potential to improve science learning.
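For readers unfamiliar with the notation: the weighted effect sizes (ES) reported above are weighted means across studies. Reviews of this kind typically weight by inverse variance or sample size, though the abstract does not state the exact scheme:

```latex
% Weighted mean effect size across k studies; w_i is commonly the
% inverse of study i's sampling variance v_i (or a sample-size weight).
% The exact weighting scheme is not stated in the abstract.
\[
\overline{ES} = \frac{\sum_{i=1}^{k} w_i \, ES_i}{\sum_{i=1}^{k} w_i},
\qquad w_i = \frac{1}{v_i}
\]
```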
Abstract:
We present TProf, an energy profiling tool for OpenMP-like task-parallel programs. To compute the energy consumed by each task in a parallel application, TProf dynamically traces the parallel execution and uses a novel technique to estimate per-task energy consumption. To achieve this estimation, TProf apportions the total processor energy among cores, overcoming a limitation of existing approaches that would otherwise make per-task parallel accounting impossible. We demonstrate the value of TProf by characterizing a set of task-parallel programs, where we find that data locality, memory access patterns and task working sets are responsible for significant variance in energy consumption between seemingly homogeneous tasks. In addition, we identify opportunities for fine-grain energy optimization by applying per-task Dynamic Voltage and Frequency Scaling (DVFS).
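As a sketch of what per-task energy apportioning can look like, here is one plausible scheme, assuming Linux's powercap/RAPL package energy counter; TProf's actual estimation technique is not reproduced here:

```c
/* One plausible per-task apportioning scheme (not TProf's actual
 * technique): package energy consumed in a sample interval is split
 * evenly among the cores active in that interval, and each task
 * accumulates its core's share. Uses Linux's powercap sysfs counter,
 * which typically requires root to read. */
#include <stdio.h>

/* Read the RAPL package energy counter in microjoules. */
static unsigned long long read_pkg_energy_uj(void) {
    unsigned long long uj = 0;
    FILE *f = fopen("/sys/class/powercap/intel-rapl:0/energy_uj", "r");
    if (f) {
        if (fscanf(f, "%llu", &uj) != 1) uj = 0;
        fclose(f);
    }
    return uj;
}

/* A real profiler derives the active-core count from its task trace;
 * this is a fixed placeholder. */
static int num_active_cores(void) { return 4; /* placeholder */ }

typedef struct {
    unsigned long long energy_uj;   /* energy attributed to this task  */
    unsigned long long last_sample; /* counter value at last sample    */
} task_energy;

/* Called at task start and at each later sample point: the package
 * energy consumed since the previous sample is divided evenly among
 * the active cores, and this task accumulates one core's share. */
static void account_sample(task_energy *t) {
    unsigned long long now = read_pkg_energy_uj();
    t->energy_uj += (now - t->last_sample) / num_active_cores();
    t->last_sample = now;
}

int main(void) {
    task_energy t = { 0, read_pkg_energy_uj() };
    /* ... task body would execute here ... */
    account_sample(&t);
    printf("task energy estimate: %llu uJ\n", t.energy_uj);
    return 0;
}
```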
Abstract:
Following earlier work demonstrating the utility of Orc as a means of specifying and reasoning about grid applications, we propose the enhancement of such specifications with metadata that provide a means to extend an Orc specification with implementation-oriented information. We argue that such specifications provide a useful refinement step in allowing reasoning about implementation-related issues ahead of actual implementation or even prototyping. As examples, we demonstrate how such extended specifications can be used for investigating security-related issues and for evaluating the cost of handling grid resource faults. The approach emphasises a semi-formal style of reasoning that makes maximum use of programmer domain knowledge and experience.
Abstract:
This paper presents a new programming methodology for introducing and tuning parallelism in Erlang programs, using source-level code refactoring from sequential source programs to parallel programs written using our skeleton library, Skel. High-level cost models allow us to predict with reasonable accuracy the parallel performance of the refactored program, enabling programmers to make informed decisions about which refactorings to apply. Using our approach, we demonstrate easily obtainable, significant and scalable speedups of up to 21 on a 24-core machine over the sequential code.
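The abstract refers to high-level cost models. A typical skeleton cost model for a task farm, in the general spirit of such models though not necessarily Skel's exact formulation, bounds the runtime by its slowest stage:

```latex
% Generic cost model for a task farm with w workers processing n tasks;
% t_emit and t_collect are per-task dispatch/collection overheads and
% t_task is the cost of one task on one core. Not necessarily the exact
% model used by the Skel library.
\[
T_{\text{farm}}(n, w) \approx
\max\!\left( n\,t_{\text{emit}},\; \frac{n\,t_{\text{task}}}{w},\; n\,t_{\text{collect}} \right)
\]
```

In such a model, a speedup of 21 on 24 cores corresponds to the compute term dominating, with per-task dispatch and collection overheads remaining small.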
Abstract:
In contingent valuation, the willingness to pay for hypothetical programs may be affected by the order in which programs are presented to respondents. With inclusive lists, economic theory suggests that sequence effects should be expected. However, when policy makers allocate public budgets to several environmental programs, they may be interested in assessing the value of the programs without the valuations being affected by the order in which the programs are presented. Using single-bounded dichotomous choice contingent valuation questions, we show that if respondents have the possibility to revise their willingness-to-pay answers, sequence effects are mitigated. (JEL Q51, Q54)
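For context on the elicitation format, the textbook single-bounded dichotomous-choice model (not necessarily the exact specification estimated in the paper) assumes a respondent offered bid B accepts when their latent willingness to pay is at least B:

```latex
% Textbook single-bounded dichotomous-choice response model; F_WTP is
% the population cdf of willingness to pay. Not necessarily the exact
% specification estimated in the paper.
\[
\Pr(\text{yes} \mid B) = \Pr(\mathit{WTP} \ge B) = 1 - F_{\mathit{WTP}}(B)
\]
```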