235 results for Parallel Programming
Abstract:
The present study employed a combination of spectroscopic, calorimetric and computational methods to explore the binding of a triazatruxene derivative bearing three side chains, termed azatrux, to a human telomeric G-quadruplex sequence under conditions of molecular crowding. The binding of azatrux to the tetramolecular parallel [d(TGGGGT)]₄ quadruplex, in the presence and absence of crowding conditions, was also characterized. The data indicate that azatrux binds in an end-stacking mode to the parallel G-quadruplex scaffold and highlight the key structural elements involved in the binding. The selectivity of azatrux for the human telomeric G-quadruplex relative to another biologically relevant G-quadruplex (c-Kit87up) and to duplex DNA was also investigated under molecular crowding conditions, showing that azatrux has good selectivity for the human telomeric G-quadruplex over the other investigated DNA structures.
Abstract:
Multicore computational accelerators such as GPUs are now commodity components for high-performance computing at scale. While such accelerators have been studied in some detail as stand-alone computational engines, their integration into large-scale distributed systems raises new challenges and trade-offs. In this paper, we present an exploration of resource management alternatives for building asymmetric accelerator-based distributed systems. We present these alternatives in the context of a capabilities-aware framework for data-intensive computing, which uses an implementation of the MapReduce programming model for accelerator-based clusters that is enhanced relative to the state of the art. The framework can transparently utilize heterogeneous accelerators to deliver high performance with low programming effort. Our work is the first to compare heterogeneous types of accelerators, GPUs and Cell processors, in the same environment, and the first to explore the trade-offs between compute-efficient and control-efficient accelerators in data-intensive systems. Our investigation shows that the framework scales well with the number of compute nodes. Furthermore, it runs simultaneously on two different types of accelerators, successfully adapts to the available resource capabilities, and performs 26.9% better on average than a static execution approach.
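The abstract above contrasts capabilities-aware scheduling of MapReduce work across heterogeneous accelerators with a static execution approach. The sketch below illustrates that general idea only; it is not the paper's framework, and the names (Accelerator, schedule_map_tasks) and the throughput-proportional splitting heuristic are assumptions made for illustration.

```python
# Minimal sketch (assumed names, not the paper's implementation): assign map
# tasks to heterogeneous workers in proportion to their measured throughput,
# instead of a static even split across all accelerators.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Accelerator:
    name: str          # e.g. "gpu0", "cell0" (illustrative labels)
    throughput: float  # records/second measured in a short profiling run


def schedule_map_tasks(tasks: List[int],
                       workers: List[Accelerator]) -> Dict[str, List[int]]:
    """Split the task list so each worker's share matches its throughput."""
    total = sum(w.throughput for w in workers)
    assignment: Dict[str, List[int]] = {}
    start = 0
    for i, w in enumerate(workers):
        if i == len(workers) - 1:
            end = len(tasks)  # last worker absorbs any rounding remainder
        else:
            end = start + round(len(tasks) * w.throughput / total)
        assignment[w.name] = tasks[start:end]
        start = end
    return assignment


if __name__ == "__main__":
    workers = [Accelerator("gpu0", 8.0), Accelerator("cell0", 3.0)]
    tasks = list(range(22))
    for name, chunk in schedule_map_tasks(tasks, workers).items():
        print(name, "->", len(chunk), "map tasks")
```

In this toy split, a worker profiled at roughly twice the throughput receives roughly twice as many map tasks, which captures the intuition behind a capabilities-adaptive scheduler outperforming a static, even partitioning.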