15 results for Execution sermons.
Abstract:
Movement-related potentials (MRPs) reflect increasing cortical activity related to the preparation and execution of voluntary movement. Execution and preparatory components may be separated by comparing MRPs recorded from actual and imagined movement: imagined movement initiates preparatory processes, but not motor execution activity. MRPs are maximal over the supplementary motor area (SMA), an area of the cortex involved in the planning and preparation of movement. The SMA receives input from the basal ganglia, which are affected in Huntington's disease (HD), a hyperkinetic movement disorder. To further elucidate the effects of the disorder upon the cortical activity relating to movement, MRPs were recorded from ten HD patients and ten age-matched controls whilst they performed, and imagined performing, a sequential button-pressing task. HD patients produced MRPs of significantly reduced amplitude for both performed and imagined movement. The component relating to movement execution was obtained by subtracting the MRP for imagined movement from the MRP for performed movement, and was found to be normal in HD. The movement preparation component was found by subtracting the MRP for a control condition of watching the visual cues from the MRP for imagined movement. This preparation component in HD was reduced in early slope, peak amplitude, and post-peak slope. This study therefore reported abnormal MRPs in HD, particularly in the components relating to movement preparation, a finding that may further explain the movement deficits reported in the disease.
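The subtraction logic of the study can be restated compactly (the condition labels follow the abstract; the notation itself is ours):

```latex
% Component isolation by condition subtraction (notation ours):
\begin{aligned}
  \mathrm{MRP}_{\mathrm{execution}}   &= \mathrm{MRP}_{\mathrm{performed}} - \mathrm{MRP}_{\mathrm{imagined}} \\
  \mathrm{MRP}_{\mathrm{preparation}} &= \mathrm{MRP}_{\mathrm{imagined}}  - \mathrm{MRP}_{\mathrm{watch}}
\end{aligned}
```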
Abstract:
Processor architectures have taken a turn towards many-core processors, which integrate multiple processing cores on a single chip to increase overall performance, and there are no signs that this trend will stop in the near future. Many-core processors are harder to program than multi-core and single-core processors due to the need to write parallel or concurrent programs with high degrees of parallelism. Moreover, many-cores have to operate in a strong-scaling regime because of memory bandwidth constraints: under strong scaling, increasingly fine-grained parallelism must be extracted in order to keep all processing cores busy.
Task dataflow programming models have a high potential to simplify parallel programming because they relieve the programmer of precisely identifying all inter-task dependences when writing programs. Instead, the task dataflow runtime system detects and enforces inter-task dependences during execution, based on a description of the memory each task accesses. The runtime constructs a task dataflow graph that captures all tasks and their dependences. Tasks are scheduled to execute in parallel taking into account the dependences specified in the task graph.
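As a minimal sketch of how a runtime might derive dependences from per-argument access annotations (the names and data structures below are illustrative assumptions, not the API of any particular system):

```python
# Illustrative sketch: derive inter-task dependences from per-argument
# access annotations, as a task dataflow runtime might. All names are
# hypothetical; real runtime systems differ in detail.
from collections import defaultdict

IN, OUT, INOUT = "in", "out", "inout"

class TaskGraph:
    def __init__(self):
        self.successors = defaultdict(set)  # task -> tasks that must wait for it
        self.last_writer = {}               # object -> task that last wrote it
        self.readers = defaultdict(list)    # object -> readers since the last write

    def add_task(self, task, accesses):
        """Register `task`; `accesses` is a list of (object, mode) pairs."""
        for obj, mode in accesses:
            if mode in (IN, INOUT):
                # Read-after-write: wait for the last writer of obj.
                w = self.last_writer.get(obj)
                if w is not None:
                    self.successors[w].add(task)
                self.readers[obj].append(task)
            if mode in (OUT, INOUT):
                # Write-after-read and write-after-write ordering.
                for r in self.readers[obj]:
                    if r is not task:
                        self.successors[r].add(task)
                w = self.last_writer.get(obj)
                if w is not None:
                    self.successors[w].add(task)
                self.last_writer[obj] = task
                self.readers[obj] = []
```

Under this sketch, a task annotated in/out on a buffer serializes after both the previous writer of that buffer and any readers that have arrived since the last write.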
Several papers report significant overheads for task dataflow systems, which severely limit the scalability and usability of such systems. In this paper we study efficient schemes to manage task graphs and analyze their scalability. We assume a programming model that supports input, output, and in/out annotations on task arguments, as well as commutative in/out annotations and reductions. We analyze the structure of task graphs and identify versions and generations as key concepts for the efficient management of task graphs. We then present three schemes to manage task graphs, building on graph representations, hypergraphs, and lists. We also consider a fourth, edge-less scheme that synchronizes tasks using integers. Analysis using micro-benchmarks shows that the graph representation is not always scalable and that the edge-less scheme introduces the least overhead in nearly all situations.
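The edge-less scheme is described only at a high level above; one common way to synchronize tasks with plain integers is per-object version counters, sketched here under that assumption (only writers are modelled, and whether this matches the paper's exact design is our assumption):

```python
# Hedged sketch of an "edge-less" scheme: per-object integer version
# counters stand in for explicit dependence edges. Readers are omitted
# for brevity; this is not necessarily the paper's exact design.

class MemObject:
    def __init__(self):
        self.issued = 0     # write versions assigned so far
        self.published = 0  # write versions whose task has completed

class Task:
    def __init__(self):
        self.required = {}  # MemObject -> version it must wait for

def annotate_out(task, obj):
    # A new writer waits until all previously issued versions publish.
    task.required[obj] = obj.issued
    obj.issued += 1

def is_ready(task):
    # No edges are stored: readiness is an integer comparison per object.
    return all(obj.published >= v for obj, v in task.required.items())

def on_complete(task):
    for obj in task.required:
        obj.published += 1
```

With two writers of the same object, the second records a required version of 1 and becomes ready only after the first completes and publishes, reproducing the ordering an explicit edge would enforce.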
Abstract:
Dynamic Voltage and Frequency Scaling (DVFS) exhibits fundamental limitations as a method to reduce energy consumption in computing systems. In the HPC domain, where performance is of highest priority and codes are heavily optimized to minimize idle time, DVFS has limited opportunity to achieve substantial energy savings. This paper explores whether operating processors Near the transistor Threshold Voltage (NTV) is a better alternative to DVFS for breaking the power wall in HPC. NTV presents challenges, since it compromises both performance and reliability to reduce power consumption. We present a first-of-its-kind study of a significance-driven execution paradigm that selectively uses NTV and algorithmic error tolerance to reduce energy consumption in performance-constrained HPC environments. Using an iterative algorithm as a use case, we present an adaptive execution scheme that switches between near-threshold execution on many cores and above-threshold execution on one core, as the computational significance of iterations in the algorithm evolves over time. Using this scheme on state-of-the-art hardware, we demonstrate energy savings ranging from 35% to 67%, while compromising neither correctness nor performance.
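A minimal sketch of the control loop such a significance-driven scheme implies; the significance metric and the mode-switching calls are placeholders we assume, not a real hardware or vendor API:

```python
# Illustrative control loop for significance-driven execution: run
# low-significance iterations near-threshold on many cores, and
# high-significance ones above-threshold on one core. set_mode(),
# run_iteration(), and significance() are hypothetical placeholders.

NTV_MANY_CORES = "ntv_many"       # near-threshold: wide, slow, error-prone
NOMINAL_ONE_CORE = "nominal_one"  # above-threshold: narrow, fast, reliable

def adaptive_solve(state, significance, run_iteration, set_mode,
                   threshold=0.1, max_iters=1000):
    for it in range(max_iters):
        s = significance(state, it)       # e.g. an iteration's residual contribution
        if s < threshold:
            set_mode(NTV_MANY_CORES)      # tolerate occasional errors cheaply
        else:
            set_mode(NOMINAL_ONE_CORE)    # protect significant computation
        state, converged = run_iteration(state)
        if converged:
            break
    return state
```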
Abstract:
In recent years, concerns over litigation and the trend towards close monitoring of academic activity have seen the effective hijacking of research ethics by university managers and bureaucrats. This can effectively curtail cutting-edge research, as perceived ‘safe’ research strategies are encouraged. However, ethics is about more than research governance. Ultimately, it seeks to avoid harm and to increase benefits to society. Rural development debate is fairly quiet on the question of ethics, leaving guidance to professional bodies. This study draws on empirical research that examined the lives of migrant communities in Northern Ireland. This context of increasingly diverse rural development actors provides a backdrop for the way in which the researcher navigates ethical issues as they unfold in the field. The analysis seeks to relocate ethics from being an annoying bureaucratic requirement to being inherent to rigorous and professional research and practice. It reveals how attention to professional ethics can contribute to effective, situated and reflexive practice, thus transforming ethics into an asset for professional researchers.
Abstract:
Scheduling jobs with deadlines, each of which defines the latest time by which a job must be completed, can be challenging on the cloud due to incurred costs and unpredictable performance. This problem is further complicated when there is not enough information to effectively schedule a job such that its deadline is satisfied and the cost is minimised. In this paper, we present an approach to schedule jobs, whose performance is unknown before execution, with deadlines on the cloud. By performing a sampling phase to collect the necessary information about those jobs, our approach delivers scheduling decisions within 10% of the cost and 16% of the violation rate achieved by the ideal setting, which has complete knowledge about each of the jobs from the beginning. Our proposed algorithm also outperforms existing approaches that use a fixed amount of resources, reducing the violation cost by at least a factor of two.
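A hedged sketch of the sampling idea described above: run a small pilot subset of tasks to estimate per-task runtime, then size the allocation so the remaining work fits within the deadline. The estimator and the provisioning rule are simplifying assumptions of ours, not the paper's exact algorithm:

```python
# Illustrative sampling-based scheduler for a deadline-constrained job
# of independent tasks with unknown runtimes. The estimation and
# provisioning rules are simplifying assumptions, not the paper's own.
import math
import random
import time
from concurrent.futures import ThreadPoolExecutor

def schedule_with_sampling(tasks, deadline_s, run_task,
                           sample_frac=0.05, safety=1.2):
    start = time.monotonic()

    # Phase 1: run a small random sample to estimate mean task runtime.
    k = max(1, int(len(tasks) * sample_frac))
    sample = random.sample(tasks, k)
    t0 = time.monotonic()
    for t in sample:
        run_task(t)
    mean_rt = (time.monotonic() - t0) / k

    # Phase 2: provision just enough workers so the remaining (padded)
    # work fits in the time left before the deadline.
    remaining = [t for t in tasks if t not in sample]
    time_left = deadline_s - (time.monotonic() - start)
    total_work = mean_rt * len(remaining) * safety
    n_workers = max(1, math.ceil(total_work / max(time_left, 1e-9)))

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        list(pool.map(run_task, remaining))
```

The safety factor pads the runtime estimate against sampling error; a fixed-resource baseline would instead pick n_workers up front, which is the comparison the abstract draws.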