157 results for parallelization
Abstract:
A poster of this paper will be presented at the 25th International Conference on Parallel Architecture and Compilation Technology (PACT ’16), September 11-15, 2016, Haifa, Israel.
Abstract:
Next Generation Sequencing (NGS) makes it possible to sequence the whole genome of an organism, whereas Maxam-Gilbert and Sanger sequencing can barely sequence a single gene. Eliminating the separation of DNA fragments by electrophoresis, together with the development of techniques that enable parallelization (analysing many DNA fragments simultaneously), has been crucial to improving this process. The leading companies in this field, Roche and Illumina, bet on different protocols to achieve these goals. Illumina relies on sequencing by synthesis (SBS), which requires library preparation and the use of adapters. Illumina has largely displaced Roche because of its lower misincorporation rate, which makes it well suited to studies of genetic variability, transcriptomics, epigenomics, and metagenomics, the last of which is the focus of this study. The most recent progress in sequencing, however, comes from third-generation technologies, which use nanotechnology to build small sequencers that can sequence the whole genome of an organism quickly and inexpensively. They also provide more reliable data than current systems because they sequence a single molecule, eliminating the synchronisation problem. In this way, PacBio and Nanopore enable substantial progress in diagnostics and personalized medicine. Metagenomics allows qualitative and quantitative analysis of the various species present in a sample. Its main advantage is that isolation and culture of the species are not required, which makes the analysis of nonculturable species possible. The Illumina protocol targets the variable regions of the 16S rRNA gene, which contains both variable and conserved regions and thus supports phylogenetic classification. Metagenomics is therefore of interest for characterizing the biodiversity of complex ecosystems and for studying patients' microbiomes, given the strong association of certain microbial profiles with certain metabolic diseases.
Abstract:
The quality and speed of genome sequencing have advanced as technological boundaries have been stretched. This advancement is so far divided into three generations. First-generation methods enabled the sequencing of clonal DNA populations; second-generation methods massively increased throughput by parallelizing many reactions; and third-generation methods allow the direct sequencing of single DNA molecules. The first techniques to sequence DNA were not developed until the mid-1970s, when two distinct methods appeared almost simultaneously, one by Allan Maxam and Walter Gilbert, and the other by Frederick Sanger. The former is a chemical method that cleaves DNA at specific positions; the latter uses ddNTPs to synthesize a copy from the DNA template strand. Both methods generate fragments of varying lengths that are then separated by electrophoresis. Until the 1990s, DNA sequencing remained relatively expensive and slow; the use of radiolabeled nucleotides compounded the problem through safety concerns and prevented automation. Advances within the first generation included replacing radioactive labels with fluorescently labeled ddNTPs and cycle sequencing with a thermostable DNA polymerase, which allowed automation and signal amplification, making the process cheaper, safer, and faster. Another method, pyrosequencing, is based on the sequencing-by-synthesis principle; it differs from Sanger sequencing in that it relies on detecting the pyrophosphate released on nucleotide incorporation. By the end of the last millennium, parallelization of this method started Next Generation Sequencing (NGS), with 454 as the first of many platforms able to process multiple samples, giving rise to second-generation sequencing, in which electrophoresis was completely eliminated. One method that is sometimes used is SOLiD, based on sequencing by ligation of fluorescently dye-labeled di-base probes that compete to ligate to the sequencing primer; specificity is achieved by interrogating every 1st and 2nd base in each ligation reaction. The widely used Solexa/Illumina method uses modified dNTPs containing so-called reversible terminators, which block further polymerization; each terminator also carries a fluorescent label that can be detected by a camera. The step toward the third generation came from Ion Torrent, which developed a sequencing-by-synthesis technique whose main feature is the detection of the hydrogen ions released during base incorporation. The third generation exploits nanotechnology to process single DNA molecules, from real-time synthesis sequencing systems such as PacBio to NANOPORE, projected since 1995, which uses nanosensors forming channels obtained from bacteria that conduct the sample to a sensor able to detect each nucleotide residue in the DNA strand. Technology has advanced so quickly that one cannot help but wonder: how do we imagine the next generation?
Abstract:
Master's dissertation, Universidade de Brasília, Faculdade Gama, Programa de Pós-Graduação em Engenharia Biomédica, 2016.
Abstract:
This project examines the currently available work on the explicit and implicit parallelization of the R scripting language and reports experimental findings toward a model that predicts, from input data size and function complexity, when automatic parallelization becomes effective. After finding or creating a series of custom benchmarks, we identified an interval of data size and time complexity in which replacement becomes viable, specifically between O(N) and O(N^3) exclusive. As data size increases, the benefits of parallel processing become more apparent, and a point is reached where those benefits outweigh the cost of memory transfer. Based on our observations, this point can be predicted with fair accuracy by regression on a sample of approximately ten data sizes spread evenly between a system-determined minimum and maximum size.
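The crossover idea above (time a kernel serially and in parallel at roughly ten sizes, then regress to find where parallel wins) can be sketched generically. The following is a minimal illustration in Python, standing in for R's parallel backends; the kernel, worker count, and linear fit are assumptions for illustration, not the project's actual model.

```python
# Minimal sketch: estimate the data size at which parallel execution begins
# to pay off, by timing serial vs. parallel runs at ~10 sizes and locating
# the crossover of two fitted cost curves.
import time
import numpy as np
from multiprocessing import Pool

def work(chunk):
    # Stand-in O(N) kernel; any pure function of a chunk would do here.
    return sum(x * x for x in chunk)

def time_serial(data):
    t0 = time.perf_counter()
    work(data)
    return time.perf_counter() - t0

def time_parallel(data, workers=4):
    chunks = np.array_split(data, workers)
    t0 = time.perf_counter()
    with Pool(workers) as pool:   # timing includes process/transfer overhead
        pool.map(work, chunks)
    return time.perf_counter() - t0

if __name__ == "__main__":
    # ~10 sizes spread evenly between a chosen minimum and maximum
    sizes = np.linspace(10_000, 2_000_000, 10, dtype=int)
    serial = [time_serial(list(range(n))) for n in sizes]
    par = [time_parallel(list(range(n))) for n in sizes]
    # Fit each timing curve with a model linear in N (degree should match
    # the kernel's complexity class) and solve for the crossover size.
    cs = np.polyfit(sizes, serial, 1)
    cp = np.polyfit(sizes, par, 1)
    # A negative result means the curves do not cross in the sampled range.
    crossover = (cp[1] - cs[1]) / (cs[0] - cp[0])
    print(f"parallel predicted to win beyond N ≈ {crossover:,.0f}")
```

For higher-order kernels, the same procedure applies with a higher polyfit degree, which is consistent with the abstract's observation that viability depends on both data size and complexity class.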
Abstract:
The development of mode-locked fiber lasers over recent decades now provides access to reliable sources of femtosecond pulses, used both in research laboratories and in commercial applications. Thanks to their broad bandwidth and excellent heat dissipation, rare-earth-doped fibers have enabled the amplification and generation of short, high-energy pulses at high repetition rates. However, the nonlinear effects caused by the small beam size in the fiber, together with the saturation of the gain medium's population inversion, complicate the use of fiber amplifiers to obtain short pulses with energies beyond the millijoule level. Various strategies, such as stretching the pulses to nanosecond durations, using photonic crystal fibers with a larger core, and parallel amplification, have circumvented these limitations to deliver few-millijoule pulses shorter than a picosecond. This master's thesis presents a new approach for short-pulse amplification that uses Raman scattering in silica glass as the gain medium. This nonlinear effect is known to provide broadband amplification and is commonly used today in fiber-optic telecommunication networks. Since existing Raman amplification schemes do not adapt directly to high-energy short pulses, we instead propose a scheme in which the energy of a quasi-monochromatic pump pulse is transferred to a short signal pulse stretched with a frequency chirp. To assess the potential of Raman gain for short-pulse amplification, this thesis presents an analytical model that predicts the characteristics of the amplified pulse from those of the pump and the medium in which they propagate. We find that the high bandwidth of the Raman gain of silica glass, together with its inhomogeneous saturation, allows signal pulses to be amplified to an energy comparable to that of the pump while retaining a spectral width broad enough to support compression to very short durations. Several variants of the amplification scheme are proposed, and their potential is evaluated with the analytical model or with numerical simulations. We predict, analytically and numerically, the Raman amplification of pulses to energies of a few millijoules, with durations below 150 fs and peak powers approaching 20 GW.
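The thesis's analytical model is not reproduced in the abstract. As a hedged illustration of the underlying mechanism only, the sketch below integrates the standard coupled intensity equations for stimulated Raman scattering (signal growth, pump depletion weighted by the photon-energy ratio); the gain coefficient, wavelengths, and intensities are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch: co-propagating pump/signal Raman amplification in fiber,
# integrating the standard coupled intensity equations
#   dIs/dz =  gR * Ip * Is
#   dIp/dz = -(wp/ws) * gR * Ip * Is
# All parameter values below are illustrative, not taken from the thesis.
import numpy as np

gR = 1e-13                         # Raman gain of silica [m/W], order of magnitude
lam_p, lam_s = 1.03e-6, 1.12e-6    # pump and Stokes-shifted signal wavelengths [m]
photon_ratio = lam_s / lam_p       # equals wp/ws, the quantum-defect factor

def propagate(Ip0, Is0, length=10.0, steps=100_000):
    """Euler integration of the coupled Raman equations over `length` metres."""
    dz = length / steps
    Ip, Is = Ip0, Is0
    for _ in range(steps):
        transfer = gR * Ip * Is * dz   # energy flow pump -> signal per step
        Ip -= photon_ratio * transfer
        Is += transfer
    return Ip, Is

# Intense quasi-monochromatic pump, weak signal seed (intensities in W/m^2)
Ip, Is = propagate(Ip0=1e13, Is0=1e9)
print(f"signal gain: {Is / 1e9:.0f}x, residual pump fraction: {Ip / 1e13:.2f}")
```

With a chirped, stretched signal as in the proposed scheme, each instantaneous frequency slice of the signal would see this gain in turn, which is what allows the broad amplified spectrum the abstract describes.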
Abstract:
Over one million people have lost their lives in the last twenty years to natural disasters such as wildfires and earthquakes, as well as to man-made disasters. In such scenarios, using a fleet of robots aims at parallelizing the workload, increasing the speed and capability to complete time-sensitive missions. This work focuses on the development of a dynamic fleet management system, which consists of managing multiple agents that cooperate to accomplish tasks. We present a Mixed Integer Programming formulation for the management and planning of mission tasks. The problem is solved with both an exact and a heuristic approach, the latter based on the idea of iteratively solving smaller instances of the complete problem. Alongside this, a fast and efficient algorithm for estimating travel times between tasks is proposed. Experimental results demonstrate that, within specified time limits, the proposed heuristic generates solutions of quality comparable to the exact approach.
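The abstract names a Mixed Integer Programming formulation without giving it. Below is a minimal assignment-style MIP sketch using the PuLP library; the robots, tasks, travel times, and capacity bound are illustrative assumptions, not the thesis's actual model. The heuristic of iteratively solving smaller instances would correspond to solving such a model repeatedly on subsets of the task list.

```python
# Minimal sketch of an assignment-style MIP for dispatching a robot fleet:
# minimize total travel time, with each task done by exactly one robot.
# The thesis formulation is richer; all data and names here are illustrative.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

robots = ["r1", "r2"]
tasks = ["t1", "t2", "t3"]
travel = {  # estimated travel time from robot start to task [s]
    ("r1", "t1"): 30, ("r1", "t2"): 55, ("r1", "t3"): 20,
    ("r2", "t1"): 40, ("r2", "t2"): 25, ("r2", "t3"): 60,
}

prob = LpProblem("fleet_dispatch", LpMinimize)
x = {(r, t): LpVariable(f"x_{r}_{t}", cat=LpBinary)
     for r in robots for t in tasks}

# Objective: total estimated travel time of all assignments
prob += lpSum(travel[r, t] * x[r, t] for r in robots for t in tasks)
for t in tasks:   # each task is assigned to exactly one robot
    prob += lpSum(x[r, t] for r in robots) == 1
for r in robots:  # crude capacity bound: at most two tasks per robot
    prob += lpSum(x[r, t] for t in tasks) <= 2

prob.solve()
for (r, t), var in x.items():
    if var.value() == 1:
        print(f"{r} -> {t}")
print("total travel time:", value(prob.objective))
```

The fast travel-time estimator the abstract mentions would populate the `travel` dictionary here, which is why its accuracy directly shapes the quality of the dispatch plan.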