2 results for Parallelizing Compilers

in Repositorio Institucional de la Universidad de Málaga


Relevance:

10.00%

Abstract:

The quality and speed of genome sequencing have advanced at the same pace as technology boundaries have been stretched. This advancement has so far been divided into three generations: the first-generation methods enabled sequencing of clonal DNA populations, the second generation massively increased throughput by parallelizing many reactions, and the third-generation methods allow direct sequencing of single DNA molecules.

The first techniques to sequence DNA were not developed until the mid-1970s, when two distinct sequencing methods were devised almost simultaneously, one by Alan Maxam and Walter Gilbert and the other by Frederick Sanger. The first is a chemical method that cleaves DNA at specific points, while the second uses ddNTPs while synthesizing a copy from the DNA template strand. Both methods generate fragments of varying lengths that are subsequently separated by electrophoresis. Until the 1990s, DNA sequencing remained relatively expensive and slow, and the use of radiolabeled nucleotides compounded the problem by raising safety concerns and preventing automation. Advances within the first generation include the replacement of radioactive labels with fluorescently labeled ddNTPs and cycle sequencing with a thermostable DNA polymerase, which enabled automation and signal amplification, making the process cheaper, safer and faster. Another method is pyrosequencing, which is based on the "sequencing by synthesis" principle; it differs from Sanger sequencing in that it relies on detecting pyrophosphate release upon nucleotide incorporation.

By the end of the last millennium, parallelization of this method launched Next Generation Sequencing (NGS), with 454 as the first of many platforms able to process multiple samples, marking the start of second-generation sequencing; here, electrophoresis was eliminated entirely. Another method in use is SOLiD, based on sequencing by ligation of fluorescently dye-labeled di-base probes that compete to ligate to the sequencing primer; specificity of the di-base probe is achieved by interrogating the 1st and 2nd bases in each ligation reaction. The widely used Solexa/Illumina method employs modified dNTPs carrying so-called "reversible terminators", which block further polymerization; each terminator also carries a fluorescent label that can be detected by a camera. The step preceding the third generation came from Ion Torrent, which developed another sequencing-by-synthesis technique whose main feature is the detection of hydrogen ions released during base incorporation.

The third generation builds on nanotechnology advances for processing single DNA molecules, from real-time synthesis sequencing systems such as PacBio to nanopore sequencing, envisioned since 1995, which uses nanosensors forming channels obtained from bacteria that conduct the sample to a sensor, allowing the detection of each nucleotide residue in the DNA strand. Technological progress has been so rapid that one wonders: how do we imagine the next generation?

Relevance:

10.00%

Abstract:

In the multi-core CPU world, transactional memory (TM) has emerged as an alternative to lock-based programming for thread synchronization. Recent research proposes the use of TM in GPU architectures, where a high number of computing threads, organized in SIMT fashion, requires an effective synchronization method. In contrast to CPUs, GPUs offer two memory spaces: global memory and local memory. The local memory space serves as a shared scratch-pad for a subset of the computing threads, and programmers use it to speed up their applications thanks to its low latency. Prior work by the authors proposed lightweight hardware TM (HTM) support based on the local memory, modifying the SIMT execution model and adding a conflict detection mechanism. An efficient implementation of these features is key to providing an effective synchronization mechanism at the local memory level. After a brief description of the main features of our HTM design for GPU local memory, in this work we gather a number of proposals aimed at improving those mechanisms that have a high impact on performance. First, the SIMT execution model is modified to increase the parallelism of the application when transactions must be serialized in order to make forward progress. Second, the conflict detection mechanism is optimized depending on application characteristics, such as the read/write sets, the probability of conflict between transactions, and the existence of read-only transactions. As these features can be present in hardware simultaneously, it is the task of the compiler and runtime to determine which ones are more important for a given application. This work includes a discussion of the analysis needed to choose the best configuration.
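To make the baseline concrete, the sketch below is a minimal example in standard CUDA (where "shared memory" corresponds to what the abstract calls local memory) of the kind of lock-based scratch-pad synchronization that such a local-memory HTM is meant to replace. The kernel name, bin count, and spin-lock layout are illustrative assumptions, not the authors' interface: every update to a per-block histogram is funneled through a single spin lock built on atomicCAS, so all threads in the block serialize on the critical section.

#include <cstdio>
#include <cuda_runtime.h>

#define BINS 64

__global__ void histogramLocked(const int *data, int n, int *globalHist) {
    __shared__ int hist[BINS];  // per-block scratch-pad (CUDA shared memory, i.e. "local memory" in the abstract's terms)
    __shared__ int lock;        // 0 = free, 1 = held

    int tid = threadIdx.x;
    if (tid == 0) lock = 0;
    for (int i = tid; i < BINS; i += blockDim.x) hist[i] = 0;
    __syncthreads();

    for (int i = blockIdx.x * blockDim.x + tid; i < n; i += gridDim.x * blockDim.x) {
        int bin = data[i] % BINS;
        // Critical section guarded by a spin lock on the scratch-pad.
        // Acquire and release stay inside the same divergent branch so a
        // SIMT warp cannot deadlock on its own lock holder.
        bool done = false;
        while (!done) {
            if (atomicCAS(&lock, 0, 1) == 0) {
                hist[bin] += 1;           // serialized update
                __threadfence_block();    // make the update visible before release
                atomicExch(&lock, 0);     // release the lock
                done = true;
            }
        }
    }
    __syncthreads();

    // Merge the per-block histogram into the global result.
    for (int i = tid; i < BINS; i += blockDim.x)
        atomicAdd(&globalHist[i], hist[i]);
}

int main() {
    const int n = 1 << 20;
    int *data, *hist;
    cudaMallocManaged(&data, n * sizeof(int));
    cudaMallocManaged(&hist, BINS * sizeof(int));
    for (int i = 0; i < n; ++i) data[i] = i;
    for (int i = 0; i < BINS; ++i) hist[i] = 0;

    histogramLocked<<<64, 256>>>(data, n, hist);
    cudaDeviceSynchronize();

    printf("bin 0 count: %d\n", hist[0]);
    cudaFree(data);
    cudaFree(hist);
    return 0;
}

Under the HTM proposal summarized above, the explicitly locked critical section would instead be expressed as a transaction, and hardware conflict detection would let non-conflicting updates (those touching different bins) proceed in parallel, which is where the SIMT-model and conflict-detection optimizations described in the abstract come into play.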