204 results for Dual task
Abstract:
We present BDDT, a task-parallel runtime system that dynamically discovers and resolves dependencies among parallel tasks. BDDT allows the programmer to specify detailed task footprints on any memory address range, multidimensional array tile, or dynamic region. BDDT uses a block-based dependence analysis with arbitrary granularity. The analysis is applicable to existing C programs without restructuring object or array allocation, and it provides flexibility in array layouts and tile dimensions.
We evaluate BDDT using a representative set of benchmarks and compare it to SMPSs (the equivalent runtime system in StarSs) and OpenMP. BDDT performs comparably to or better than SMPSs and is able to cope with task granularity as much as one order of magnitude finer than SMPSs. Compared to OpenMP, BDDT performs up to 3.9× better for benchmarks that benefit from dynamic dependence analysis. BDDT also provides additional data annotations to bypass dependence analysis. Using these annotations, BDDT outperforms OpenMP even in benchmarks where dependence analysis does not discover additional parallelism, thanks to a more efficient runtime implementation.
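The annotation syntax is not given in the abstract. As a rough, hypothetical sketch of how such task-footprint annotations look in SMPSs/StarSs-style models (the pragma form, the css keyword, and the tile size BS below are illustrative assumptions, not BDDT's actual API), a blocked kernel might be declared as:

    #include <stddef.h>

    #define BS 64   /* illustrative tile (block) size */

    /* Hypothetical SMPSs/StarSs-style footprint annotation: the pragma tells the
     * runtime which BS*BS tiles the task reads (input) and updates (inout), so
     * inter-task dependences can be discovered dynamically as tasks are spawned.
     * BDDT's real syntax may differ; this only sketches the programming-model idea. */
    #pragma css task input(a[BS*BS], b[BS*BS]) inout(c[BS*BS])
    void block_multiply(const float *a, const float *b, float *c)
    {
        for (size_t i = 0; i < BS; ++i)
            for (size_t k = 0; k < BS; ++k)
                for (size_t j = 0; j < BS; ++j)
                    c[i * BS + j] += a[i * BS + k] * b[k * BS + j];
    }

Spawning such a kernel over the tiles of a matrix product lets the runtime infer, for example, that two updates to the same output tile must be serialized while updates to different tiles may run in parallel.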
Abstract:
In this study, we introduce a dual enlargement of gold nanoparticles (AuNPs) for the scanometric detection of pathogenic bacteria. After capturing the target bacteria (Campylobacter jejuni cells), gold immunoprobes were added to generate a signal on a solid substrate. The signal was then dually amplified by a gold-growth process and a silver enhancement, resulting in a stronger intensity that can easily be recognized by the unaided eye or measured with an inexpensive flatbed scanner. This dual-enhanced nanocatalysis is reported here for the first time; it provides valuable insight into the development of a rapid, simple, and cost-effective detection format.
Abstract:
This paper elaborates on the ergodic capacity of fixed-gain amplify-and-forward (AF) dual-hop systems, which have recently attracted considerable research and industry interest. In particular, two novel capacity bounds that allow for fast and efficient computation and apply to nonidentically distributed hops are derived. More importantly, they are generic, since they apply to a wide range of popular fading channel models. Specifically, the proposed upper bound applies to Nakagami-m, Weibull, and generalized-K fading channels, whereas the proposed lower bound is more general and also applies to Rician fading channels. Moreover, it is explicitly demonstrated that the proposed lower and upper bounds become asymptotically exact in the high signal-to-noise ratio (SNR) regime. Based on our analytical expressions and numerical results, we gain valuable insights into the impact of model parameters on the capacity of fixed-gain AF dual-hop relaying systems. © 2011 IEEE.
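For orientation, the quantities involved can be written in their standard textbook form (a generic sketch of fixed-gain AF relaying, not the paper's specific bounds): with per-hop SNRs gamma_1, gamma_2 and a constant C fixed by the relay gain,

    % Generic fixed-gain AF dual-hop quantities (textbook form, not the paper's bounds)
    \gamma_{\mathrm{e2e}} = \frac{\gamma_1 \gamma_2}{\gamma_2 + C},
    \qquad
    \bar{C}_{\mathrm{erg}} = \mathbb{E}\!\left[ \log_2\!\left( 1 + \gamma_{\mathrm{e2e}} \right) \right]
      \le \log_2\!\left( 1 + \mathbb{E}\!\left[ \gamma_{\mathrm{e2e}} \right] \right)
    \quad \text{(Jensen-type upper bound).}

Bounds of this kind typically exploit such moment-based inequalities, with the fading model entering only through the moments of the per-hop SNRs.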
Abstract:
Recent progress in plasma science and technology has enabled the development of a new generation of stable cold non-equilibrium plasmas operating at ambient atmospheric pressure. This opens new horizons for plasma technologies, in particular in the emerging field of plasma medicine. These non-equilibrium plasmas are very efficient sources for energy transport through reactive neutral particles (radicals and metastables), charged particles (ions and electrons), UV radiation, and electromagnetic fields. The effect of a cold radio-frequency-driven atmospheric-pressure plasma jet on plasmid DNA has been investigated. The formation of double-strand breaks correlates well with the atomic oxygen density. Taken together with other measurements, this indicates that neutral components in the jet are effective in inducing double-strand breaks. Plasma manipulation techniques for controlled energy delivery are highly desirable. Numerical simulations are employed for detailed investigations of the electron dynamics, which determines the generation of reactive species. New concepts based on nonlinear power dissipation promise superior strategies to control energy transport for tailored technological exploitation. © 2012 American Institute of Physics.
Abstract:
Processor architectures have taken a turn towards many-core processors, which integrate multiple processing cores on a single chip to increase overall performance, and there are no signs that this trend will stop in the near future. Many-core processors are harder to program than multi-core and single-core processors because they require writing parallel or concurrent programs with high degrees of parallelism. Moreover, many-cores have to operate in a strong-scaling regime because of memory bandwidth constraints. Under strong scaling, increasingly fine-grained parallelism must be extracted in order to keep all processing cores busy.
Task dataflow programming models have a high potential to simplify parallel programming because they relieve the programmer from precisely identifying all inter-task dependences when writing programs. Instead, the task dataflow runtime system detects and enforces inter-task dependences during execution, based on a description of the memory each task accesses. The runtime constructs a task dataflow graph that captures all tasks and their dependences. Tasks are scheduled to execute in parallel, taking into account the dependences specified in the task graph.
Several papers report significant overheads for task dataflow systems, which severely limit the scalability and usability of such systems. In this paper we study efficient schemes to manage task graphs and analyze their scalability. We assume a programming model that supports input, output, and in/out annotations on task arguments, as well as commutative in/out annotations and reductions. We analyze the structure of task graphs and identify versions and generations as key concepts for their efficient management. We then present three schemes to manage task graphs, building on graph representations, hypergraphs, and lists. We also consider a fourth, edge-less scheme that synchronizes tasks using integers. Analysis using micro-benchmarks shows that the graph representation is not always scalable and that the edge-less scheme introduces the least overhead in nearly all situations.
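As an illustration of the general idea behind an edge-less, integer-based scheme (this sketch is our own, assuming a simple per-task join counter; it is not the specific scheme evaluated in the paper):

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Edge-less synchronization sketch: instead of materializing dependence
     * edges, every task carries one integer counting its unfinished
     * predecessors. A task becomes ready when the counter drops to zero. */
    struct task {
        atomic_int join_count;      /* outstanding predecessors */
        void (*run)(void *arg);
        void *arg;
    };

    /* Register one more dependence while the task graph is being discovered. */
    static void add_dependence(struct task *successor)
    {
        atomic_fetch_add_explicit(&successor->join_count, 1, memory_order_relaxed);
    }

    /* Called when a predecessor of `succ` completes; returns true when `succ`
     * has just become ready and can be handed to the scheduler. */
    static bool notify_completion(struct task *succ)
    {
        return atomic_fetch_sub_explicit(&succ->join_count, 1,
                                         memory_order_acq_rel) == 1;
    }

The appeal of such schemes is that per-task state is a single integer rather than adjacency lists, avoiding the allocation and traversal costs that make explicit graph representations hard to scale.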
Abstract:
Physical transceivers have hardware impairments that create distortions which degrade the performance of communication systems. The vast majority of technical contributions in the area of relaying neglect hardware impairments and, thus, assume ideal hardware. Such approximations make sense in low-rate systems, but can lead to very misleading results when analyzing future high-rate systems. This paper quantifies the impact of hardware impairments on dual-hop relaying, for both amplify-and-forward and decode-and-forward protocols. The outage probability (OP) in these practical scenarios is a function of the effective end-to-end signal-to-noise-and-distortion ratio (SNDR). This paper derives new closed-form expressions for the exact and asymptotic OPs, accounting for hardware impairments at the source, relay, and destination. A similar analysis for the ergodic capacity is also pursued, resulting in new upper bounds. We assume that both hops are subject to independent but non-identically distributed Nakagami-m fading. This paper validates that the performance loss is small at low rates, but otherwise can be very substantial. In particular, it is proved that for high signal-to-noise ratio (SNR), the end-to-end SNDR converges to a deterministic constant, coined the SNDR ceiling, which is inversely proportional to the level of impairments. This stands in contrast to the ideal hardware case in which the end-to-end SNDR grows without bound in the high-SNR regime. Finally, we provide fundamental design guidelines for selecting hardware that satisfies the requirements of a practical relaying system.
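The ceiling behaviour can be illustrated with the commonly used distortion-noise model (a generic sketch, not necessarily the paper's exact expressions): on hop i with nominal SNR gamma_i and aggregate impairment level kappa_i,

    % Generic per-hop distortion-noise model (illustrative)
    \mathrm{SNDR}_i = \frac{\gamma_i}{\kappa_i^2 \gamma_i + 1}
      \longrightarrow \frac{1}{\kappa_i^2}
      \quad (\gamma_i \to \infty),
    \qquad
    \mathrm{SNDR}_{\mathrm{e2e}} \le \min_i \mathrm{SNDR}_i
      \le \min\!\left( \frac{1}{\kappa_1^2},\, \frac{1}{\kappa_2^2} \right).

Because each hop's SNDR saturates at 1/kappa_i^2 no matter how much transmit power is added, the end-to-end SNDR is capped by the weaker hop, which is the qualitative origin of the SNDR ceiling described above.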
Abstract:
We report a first study of brain activity linked to task switching in individuals with Prader-Willi syndrome (PWS). PWS individuals show a specific cognitive deficit in task switching, which may be associated with the display of temper outbursts and repetitive questioning. The performance of participants with PWS and typically developing controls was matched in a cued task-switching procedure, and brain activity was contrasted on switching and non-switching blocks using fMRI. Individuals with PWS did not show the typical fronto-parietal pattern of neural activity associated with switching blocks, with significantly reduced activation in regions of the posterior parietal and ventromedial prefrontal cortices. We suggest that this is linked to a difficulty in PWS in setting appropriate attentional weights to enable task-set reconfiguration. In addition, PWS individuals did not show the typical pattern of deactivation, with significantly less deactivation in an anterior region of the ventromedial prefrontal cortex. One plausible explanation for this is that individuals with PWS show dysfunction within the default mode network, which has been linked to attentional control. The data point to functional changes in the neural circuitry supporting task switching in PWS even when behavioural performance is matched to controls, and thus highlight neural mechanisms that may be involved in a specific pathway between genes, cognition, and behaviour. © 2010 Elsevier B.V. All rights reserved.
Abstract:
Prader-Willi syndrome (PWS) and Fragile X syndrome (FraX) are associated with distinctive cognitive and behavioural profiles. We examined whether repetitive behaviours in the two syndromes were associated with deficits in specific executive functions. PWS, FraX, and typically developing (TD) children were assessed for executive functioning using the Test of Everyday Attention for Children and an adapted Simon spatial interference task. Relative to the TD children, children with PWS and FraX showed greater costs of attention switching on the Simon task, but after controlling for intellectual ability, these switching deficits were significant only in the PWS group. Children with PWS and FraX also showed significantly increased preference for routine and differing profiles of other specific types of repetitive behaviours. A measure of switch cost from the Simon task was positively correlated with scores on preference-for-routine questionnaire items and was strongly associated with scores on other items relating to a preference for predictability. It is proposed that a deficit in attention switching is a component of the endophenotypes of both PWS and FraX and is associated with specific behaviours. This proposal is discussed in the context of neurocognitive pathways between genes and behaviour.
Abstract:
According to a higher order reasoning account, inferential reasoning processes underpin the widely observed cue competition effect of blocking in causal learning. The inference required for blocking has been described as modus tollens (if p then q, not q, therefore not p). Young children are known to have difficulties with this type of inference, but research with adults suggests that it becomes easier when participants think counterfactually. In this study, 100 children (51 five-year-olds and 49 six- to seven-year-olds) were assigned to one of two pretraining groups. The counterfactual group observed demonstrations of cues paired with outcomes and answered questions about what the outcome would have been if the causal status of the cues had been different, whereas the factual group answered factual questions about the same demonstrations. Children then completed a causal learning task. Counterfactual pretraining enhanced levels of blocking as well as modus tollens reasoning, but only for the younger children. These findings provide new evidence for an important role of inferential reasoning in causal learning.
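Spelled out formally, the inference the abstract refers to is modus tollens; one possible instantiation in the blocking design (this mapping is our own paraphrase) takes p = "the blocked cue is a cause" and q = "adding that cue would change the outcome":

    % Modus tollens
    \big( (p \rightarrow q) \wedge \neg q \big) \rightarrow \neg p

The counterfactual pretraining in the study targets exactly the "what would the outcome have been if ..." judgement that evaluating the conditional requires.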