969 results for Massive Parallelization
Abstract:
By utilizing structure sharing among its parse trees, a GB parser can increase its efficiency dramatically. Using a GB parser whose phrase structure recovery component is an implementation of Tomita's algorithm (as described in [Tom86]), we investigate how a GB parser can preserve the structure sharing output by Tomita's algorithm. In this report, we discuss the implications of using Tomita's algorithm in GB parsing, and we give some details of the structure-sharing parser currently under construction. We also discuss a method of parallelizing a GB parser, and relate it to the existing literature on parallel GB parsing. Our approach to preserving sharing within a shared-packed forest is applicable not only to GB parsing, but to any setting in which we want to preserve structure sharing in a parse forest in the presence of features.
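As a rough illustration of the kind of sharing in a shared-packed parse forest that the abstract refers to, the C++ sketch below models forest nodes whose subtrees and feature bundles are shared by reference rather than copied. The names ForestNode and FeatureSet are hypothetical; this is not the structure-sharing parser described above.

    // Minimal sketch of a shared-packed parse forest node with feature sharing.
    // Illustrative only; not the GB parser described in the abstract.
    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    struct FeatureSet {
        std::map<std::string, std::string> features;   // e.g. {"num" -> "sg", "case" -> "nom"}
    };

    struct ForestNode {
        std::string symbol;                                    // grammar symbol, e.g. "NP"
        std::shared_ptr<FeatureSet> features;                  // shared rather than copied
        // Each entry is one packed derivation of this node; children are shared
        // pointers, so identical subtrees are stored exactly once in the forest.
        std::vector<std::vector<std::shared_ptr<ForestNode>>> alternatives;
    };

    int main() {
        auto det = std::make_shared<ForestNode>(ForestNode{"Det", nullptr, {}});
        auto n   = std::make_shared<ForestNode>(ForestNode{"N", nullptr, {}});
        // Two competing NP derivations packed into one node, reusing the same subtrees.
        ForestNode np{"NP", std::make_shared<FeatureSet>(), {{det, n}, {det, n}}};
        (void)np;
        return 0;
    }

The only point of the sketch is that feature structures attached to shared subtrees must themselves be shared or split with care, which is the issue the report addresses.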
Abstract:
This article describes advances in statistical computation for large-scale data analysis in structured Bayesian mixture models via graphics processing unit (GPU) programming. The developments are partly motivated by computational challenges arising in fitting models of increasing heterogeneity to increasingly large datasets. An example context concerns common biological studies using high-throughput technologies that generate many very large datasets and require increasingly high-dimensional mixture models with large numbers of mixture components. We outline important strategies and processes for GPU computation in Bayesian simulation and optimization approaches, give examples of the benefits of GPU implementations in terms of processing speed and scale-up in the ability to analyze large datasets, and provide a detailed, tutorial-style exposition that will benefit readers interested in developing GPU-based approaches in other statistical models. Novel, GPU-oriented approaches to modifying existing algorithms and software design can lead to vast speed-ups and, critically, enable statistical analyses that would otherwise not be performed due to compute-time limitations in traditional computational environments. Supplemental materials are provided with all source code, example data, and details that will enable readers to implement and explore the GPU approach in this mixture modeling context. © 2010 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
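The property that makes such mixture computations amenable to GPUs is that the per-observation density and responsibility calculations are independent across data points, so each observation can be mapped to its own GPU thread. Below is a minimal CPU-side sketch of that per-observation structure in plain C++; it is not the authors' GPU code, and all names are hypothetical.

    // Per-observation mixture responsibilities: iterations over i are independent,
    // which is exactly the data parallelism a GPU implementation exploits
    // (one thread per observation). Illustrative sketch only.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Component { double weight, mean, var; };   // univariate Gaussian component

    void responsibilities(const std::vector<double>& x,
                          const std::vector<Component>& comps,
                          std::vector<double>& resp) {   // resp has x.size()*comps.size() entries
        const double pi = 3.141592653589793;
        const std::size_t K = comps.size();
        for (std::size_t i = 0; i < x.size(); ++i) {     // independent per observation
            double norm = 0.0;
            for (std::size_t k = 0; k < K; ++k) {
                const double d = x[i] - comps[k].mean;
                const double p = comps[k].weight *
                                 std::exp(-0.5 * d * d / comps[k].var) /
                                 std::sqrt(2.0 * pi * comps[k].var);
                resp[i * K + k] = p;
                norm += p;
            }
            for (std::size_t k = 0; k < K; ++k) resp[i * K + k] /= norm;  // normalize per point
        }
    }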
Abstract:
OBJECTIVE: To assess potential diagnostic and practice barriers to successful management of massive postpartum hemorrhage (PPH), emphasizing recognition and management of contributing coagulation disorders. STUDY DESIGN: A quantitative survey was conducted to assess practice patterns of US obstetrician-gynecologists in managing massive PPH, including assessment of coagulation. RESULTS: Nearly all (98%) of the 50 obstetrician-gynecologists participating in the survey reported having encountered at least one patient with "massive" PPH in the past 5 years. Approximately half (52%) reported having previously discovered an underlying bleeding disorder in a patient with PPH, with disseminated intravascular coagulation (88%, n=23/26) being identified more often than von Willebrand disease (73%, n=19/26). All reported having used methylergonovine and packed red blood cells in managing massive PPH, while 90% reported performing a hysterectomy. A drop in blood pressure and ongoing visible bleeding were the most commonly accepted indications for rechecking a "stat" complete blood count and coagulation studies, respectively, in patients with PPH; however, 4% of respondents reported that they would not routinely order coagulation studies. Forty-two percent reported having never consulted a hematologist for massive PPH. CONCLUSION: The survey findings highlight potential areas for improved practice in managing massive PPH, including earlier and more consistent assessment, monitoring of coagulation studies, and consultation with a hematologist.
Abstract:
Realizing scalable performance on high performance computing systems is not straightforward for single-phenomenon codes (such as computational fluid dynamics [CFD]). This task is magnified considerably when the target software involves the interactions of a range of phenomena that have distinctive solution procedures involving different discretization methods. The problems of addressing the key issues of retaining data integrity and the ordering of the calculation procedures are significant. A strategy for parallelizing this multiphysics family of codes is described for software exploiting finite-volume discretization methods on unstructured meshes using iterative solution procedures. A mesh partitioning-based SPMD approach is used. However, since different variables use distinct discretization schemes, distinct partitions are required; techniques for addressing this issue are described using the mesh-partitioning tool, JOSTLE. In this contribution, the strategy is tested for a variety of test cases under a wide range of conditions (e.g., problem size, number of processors, asynchronous/synchronous communications, etc.) using a variety of strategies for mapping the mesh partition onto the processor topology.
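In such a mesh-partitioning-based SPMD scheme, every process runs the same solver kernel on its own partition and refreshes halo (overlap) values from neighbouring partitions between iterations. The sketch below only illustrates that iteration pattern; exchange_halo and the other names are hypothetical stand-ins, not taken from the JOSTLE-based codes described.

    // Schematic SPMD iteration over one mesh partition with halo exchange.
    // Illustrative only; the communication layer (e.g. MPI) is stubbed out.
    #include <cmath>
    #include <vector>

    struct Partition {
        std::vector<double> owned;   // unknowns on cells owned by this process
        std::vector<double> halo;    // copies of neighbours' boundary values
    };

    // Stub: a real implementation would post sends/receives for boundary data here.
    void exchange_halo(Partition& /*p*/) {}

    double local_sweep(Partition& p) {
        double residual = 0.0;
        for (double& v : p.owned) {              // same solver kernel on every process
            const double updated = 0.5 * v;      // placeholder for the discretized operator
            residual += std::fabs(updated - v);
            v = updated;
        }
        return residual;                         // would feed a global convergence reduction
    }

    void solve(Partition& p, int max_iters) {
        for (int it = 0; it < max_iters; ++it) {
            exchange_halo(p);                    // synchronize partition boundaries
            local_sweep(p);                      // iterate on local data only
        }
    }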
Abstract:
The parallelization of existing/industrial electromagnetic software using the bulk synchronous parallel (BSP) computation model is presented. The software employs the finite element method with a preconditioned conjugate gradient-type solution for the resulting linear systems of equations. A geometric mesh-partitioning approach is applied within the BSP framework for the assembly and solution phases of the finite element computation. This is combined with a nongeometric, data-driven parallel quadrature procedure for the evaluation of right-hand-side terms in applications involving coil fields. A similar parallel decomposition is applied to the parallel calculation of electron beam trajectories required for the design of tube devices. The BSP parallelization approach adopted is fully portable, conceptually simple, and cost-effective, and it can be applied to a wide range of finite element applications not necessarily related to electromagnetics.
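The BSP model referred to here organizes the computation into supersteps: a local computation phase, a communication phase, and a global barrier. The fragment below is only a generic sketch of that superstep structure with all names hypothetical; it is not the BSP library or finite element code used in the work.

    // Generic BSP superstep loop: local compute, then communication, then barrier.
    // Illustrative only; processes are simulated as objects in a single program.
    #include <vector>

    struct Process { std::vector<double> data; };

    void local_compute(Process& p) {
        for (double& v : p.data) v *= 0.5;   // stands in for assembly / CG iteration work
    }

    void exchange_messages(std::vector<Process>& /*procs*/) {
        // Stub: in a real BSP run, messages posted during the superstep are delivered here.
    }

    void barrier() {
        // Stub: all processes wait until every one has finished the superstep.
    }

    void bsp_run(std::vector<Process>& procs, int supersteps) {
        for (int s = 0; s < supersteps; ++s) {
            for (Process& p : procs) local_compute(p);   // computation phase
            exchange_messages(procs);                    // communication phase
            barrier();                                   // global synchronization
        }
    }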
Abstract:
This chapter discusses the code parallelization environment, in which a number of tools that address the main tasks, such as code parallelization, debugging, and optimization, are available. The parallelization tools include ParaWise and CAPO, which enable the near-automatic parallelization of real-world scientific application codes for shared and distributed memory-based parallel systems. The chapter discusses the use of ParaWise and CAPO to transform the original serial code into an equivalent parallel code that contains appropriate OpenMP directives. Additionally, as user involvement can introduce errors, a relative debugging tool (P2d2) is also available and can be used to perform near-automatic relative debugging of an OpenMP program that has been parallelized either using the tools or manually. For these tools to be effective in parallelizing a range of applications, a high-quality, fully interprocedural dependence analysis, as well as user interaction, is vital to the generation of efficient parallel code and to the optimization of the backtracking and speculation process used in relative debugging. Results of parallelized NASA codes are discussed and show the benefits of using the environment.
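The end product of such tools is the original loop nest annotated with OpenMP work-sharing directives. As a minimal illustration of that kind of output (the codes discussed are Fortran; this small C++ fragment is only meant to show the idea and is not ParaWise/CAPO output), a dependence-free loop can simply receive a parallel-for directive:

    // Minimal example of the kind of OpenMP annotation a parallelization tool inserts.
    // Illustrative only.
    #include <cstddef>
    #include <vector>

    void scale(std::vector<double>& a, double s) {
        const std::size_t n = a.size();
        // No loop-carried dependences: each iteration writes a distinct element,
        // so the serial loop is safe to annotate with a work-sharing directive.
        #pragma omp parallel for
        for (std::size_t i = 0; i < n; ++i) {
            a[i] *= s;
        }
    }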
Abstract:
Code parallelization using OpenMP for shared memory systems is relatively easier than using message passing for distributed memory systems. Despite this, it is still a challenge to use OpenMP to parallelize application codes in a way that yields effective scalable performance when executed on a shared memory parallel system. We describe an environment that assists the programmer in the various tasks of code parallelization, greatly reducing both the time required and the level of skill needed. The parallelization environment includes a number of tools that address the main tasks of parallelism detection, OpenMP source code generation, debugging and optimization. These tools include a high-quality, fully interprocedural dependence analysis with user interaction capabilities to facilitate the generation of efficient parallel code, an automatic relative debugging tool to identify erroneous user decisions in that interaction, and performance profiling to identify bottlenecks. Finally, experiences of parallelizing some NASA application codes are presented to illustrate some of the benefits of using the evolving environment.
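The reason a deep, interprocedural dependence analysis (supplemented by user interaction where the analysis is inconclusive) matters is that loop-carried dependences make naive directive insertion produce wrong answers. A small hedged example of such a dependence follows; it is illustrative only and not drawn from the NASA codes.

    // A loop-carried dependence the analysis must detect before parallelization.
    // Illustrative only.
    #include <cstddef>
    #include <vector>

    void prefix_sum(std::vector<double>& a) {
        // a[i] depends on a[i-1] from the previous iteration, so adding
        // "#pragma omp parallel for" here would give incorrect results;
        // a correct parallelization needs a parallel scan or a user-approved rewrite.
        for (std::size_t i = 1; i < a.size(); ++i) {
            a[i] += a[i - 1];
        }
    }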
Abstract:
The parallelization of real-world, compute-intensive Fortran application codes is generally not a trivial task. If the time to complete the parallelization is to be significantly reduced, an environment is needed that will assist the programmer in the various tasks of code parallelization. In this paper the authors present a code parallelization environment in which a number of tools that address the main tasks, such as code parallelization, debugging and optimization, are available. The ParaWise and CAPO parallelization tools are discussed, which enable the near-automatic parallelization of real-world scientific application codes for shared and distributed memory-based parallel systems. As user involvement in the parallelization process can introduce errors, a relative debugging tool (P2d2) is also available and can be used to perform nearly automatic relative debugging of a program that has been parallelized using the tools. A high-quality interprocedural dependence analysis, as well as user-tool interaction, is also highlighted; both are vital to the generation of efficient parallel code and to the optimization of the backtracking and speculation process used in relative debugging. Results of parallelized benchmark and real-world application codes are presented and show the benefits of using the environment.
Abstract:
The seeding of an expanse of surface waters in the equatorial Pacific Ocean with low concentrations of dissolved iron triggered a massive phytoplankton bloom which consumed large quantities of carbon dioxide and nitrate that these microscopic plants cannot fully utilize under natural conditions. These and other observations provide unequivocal support for the hypothesis that phytoplankton growth in this oceanic region is limited by iron bioavailability.
Abstract:
MOOCs (Massive Open Online Courses) are described and analyzed as a method for disseminating information and documentation in university education. To that end, the article explains what MOOCs are, showing their vigour and their recent emergence and development, and describing the many potentials and possibilities they offer, as well as their main problems. The development and usefulness of MOOCs has arisen chiefly in the university sphere, where they are used as a mechanism for delivering online courses aimed at disseminating scientific knowledge and as a marketing and funding method for higher education institutions.
Abstract:
We introduce a new survey of massive stars in the Galaxy and the Magellanic Clouds using the Fibre Large Array Multi-Element Spectrograph (FLAMES) instrument at the Very Large Telescope (VLT). Here we present observations of 269 Galactic stars with the FLAMES-Giraffe Spectrograph (R ≃ 25 000), in fields centered on the open clusters NGC3293, NGC4755 and NGC6611. These data are supplemented by a further 50 targets observed with the Fibre-Fed Extended Range Optical Spectrograph (FEROS, R = 48 000). Following a description of our scientific motivations and target selection criteria, the data reduction methods are described; of critical importance, the FLAMES reduction pipeline is found to yield spectra that are in excellent agreement with less automated methods. Spectral classifications and radial velocity measurements are presented for each star, with particular attention paid to morphological peculiarities and evidence of binarity. These observations represent a significant increase in the known spectral content of NGC3293 and NGC4755, and will serve as standards against which our subsequent FLAMES observations in the Magellanic Clouds will be compared.
Abstract:
The massive star that underwent a collapse of its core to produce supernova (SN) 1993J was subsequently identified as a non-variable red supergiant star in images of the galaxy M81 taken before explosion (refs 1, 2). It showed an excess in ultraviolet and B-band colours, suggesting either the presence of a hot, massive companion star or that it was embedded in an unresolved young stellar association (ref. 1). The spectra of SN1993J underwent a remarkable transformation from the signature of a hydrogen-rich type II supernova to one of a helium-rich (hydrogen-deficient) type Ib (refs 3, 4). The spectral and photometric peculiarities were best explained by models in which the 13-20 solar mass supergiant had lost almost its entire hydrogen envelope to a close binary companion (refs 5, 6, 7), producing a 'type IIb' supernova, but the hypothetical massive companion stars for this class of supernovae have so far eluded discovery. Here we report photometric and spectroscopic observations of SN1993J ten years after the explosion. At the position of the fading supernova we detect the unambiguous signature of a massive star: the binary companion to the progenitor.