933 results for PARALLEL COMPUTING
Abstract:
Conventional mesh-based numerical methods have been widely applied to solving problems in Computational Fluid Dynamics. However, in fluid flow problems involving free surfaces, large explosions, large deformations, discontinuities, shock waves, and the like, these methods can run into practical difficulties. Meshfree particle methods are a viable alternative. This work introduces the Lagrangian, meshfree particle method Smoothed Particle Hydrodynamics (SPH), aimed at the numerical simulation of compressible and quasi-incompressible Newtonian fluid flows. Two numerical codes were developed, a serial version and a parallel one, using the C/C++ programming language and the Compute Unified Device Architecture (CUDA), which enables parallel processing on the cores of the Graphics Processing Units (GPUs) of NVIDIA Corporation graphics cards. The numerical results were validated and the computational efficiency evaluated by solving the one-dimensional Shock Tube and Blast Wave problems and the two-dimensional Shear Driven Cavity Problem.
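The abstract above does not reproduce any code; as a purely illustrative sketch (not the thesis code) of the kind of CUDA kernel an SPH solver relies on, the following computes the SPH density summation with a 1-D cubic spline kernel. The identifiers, the smoothing length h, and the brute-force O(N^2) neighbor loop are assumptions made for illustration; a production code would use a cell-linked or neighbor list.

    /* Illustrative CUDA-C kernel: SPH density summation (1-D, cubic spline kernel).
       All identifiers and the brute-force neighbor loop are assumptions for
       illustration; they are not taken from the thesis. */
    __global__ void sph_density_1d(const float *x, const float *mass,
                                   float *rho, int n, float h)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        const float sigma = 2.0f / (3.0f * h);   /* 1-D cubic spline normalization */
        float rho_i = 0.0f;

        for (int j = 0; j < n; ++j) {            /* brute-force neighbor search */
            float q = fabsf(x[i] - x[j]) / h;
            float w = 0.0f;
            if (q < 1.0f)
                w = sigma * (1.0f - 1.5f * q * q + 0.75f * q * q * q);
            else if (q < 2.0f)
                w = sigma * 0.25f * (2.0f - q) * (2.0f - q) * (2.0f - q);
            rho_i += mass[j] * w;                /* rho_i = sum_j m_j W(|x_i - x_j|, h) */
        }
        rho[i] = rho_i;
    }

    /* Host-side launch, e.g.:
       sph_density_1d<<<(n + 255) / 256, 256>>>(d_x, d_mass, d_rho, n, h); */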
Abstract:
Imaging by computed tomography has revolutionized the diagnosis of disease in medicine and is widely used in different areas of scientific research. As part of the process of obtaining three-dimensional tomographic images, a set of radiographs is processed by a computational algorithm; the most widely used today is the Feldkamp, Davis and Kress (FDK) algorithm. The use of parallel processing to accelerate computational algorithms with the different technologies available on the market has proven useful in reducing processing times. This work presents the parallelization of the FDK three-dimensional image reconstruction algorithm using graphics processing units (GPUs) and the CUDA-C language. GPUs are presented as a viable option for parallel computing, and the introductory concepts associated with computed tomography, GPUs, CUDA-C, and parallel processing are covered. The parallel version of the FDK algorithm executed on the GPU is compared with a serial version, showing higher processing speed. Performance tests were carried out on two GPUs of different capabilities: the NVIDIA GeForce 9400GT card (16 cores) and the NVIDIA Quadro 2000 card (192 cores).
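As a hedged illustration of the kind of CUDA-C kernel used to parallelize FDK-style cone-beam backprojection (this is not the dissertation's code), the sketch below backprojects one filtered projection into the volume, one thread per (x, y) column of voxels. The geometry conventions, variable names, and nearest-neighbor detector sampling are assumptions made for illustration.

    /* Illustrative CUDA-C kernel: voxel-driven FDK backprojection of one filtered
       projection taken at source angle beta. D is the source-to-axis distance,
       du/dv the detector sampling, nu x nv the detector size; all of these and
       the identifiers are illustrative assumptions. */
    __global__ void fdk_backproject(float *volume, const float *proj,
                                    int nx, int ny, int nz, float dx,
                                    int nu, int nv, float du, float dv,
                                    float D, float beta)
    {
        int ix = blockIdx.x * blockDim.x + threadIdx.x;
        int iy = blockIdx.y * blockDim.y + threadIdx.y;
        if (ix >= nx || iy >= ny) return;

        /* voxel coordinates in the rotated source frame */
        float x = (ix - 0.5f * (nx - 1)) * dx;
        float y = (iy - 0.5f * (ny - 1)) * dx;
        float s = -x * sinf(beta) + y * cosf(beta);      /* along the detector  */
        float t =  x * cosf(beta) + y * sinf(beta);      /* toward the source   */

        float U = D / (D - t);                            /* FDK magnification   */
        float u = U * s;                                  /* detector column     */
        int   iu = (int)(u / du + 0.5f * nu);
        if (iu < 0 || iu >= nu) return;

        for (int iz = 0; iz < nz; ++iz) {
            float z  = (iz - 0.5f * (nz - 1)) * dx;
            float v  = U * z;                             /* detector row        */
            int   iv = (int)(v / dv + 0.5f * nv);
            if (iv < 0 || iv >= nv) continue;
            /* accumulate weighted, nearest-neighbor sample of the filtered projection */
            volume[(iz * ny + iy) * nx + ix] += U * U * proj[iv * nu + iu];
        }
    }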
Abstract:
One of the most relevant problems in large organizations is choosing locations for industrial plants, distribution centers, or even retail outlets. This logistics problem is a strategic decision that can have a significant impact on the total cost of the marketed product. Several works in the literature address this problem. The objective of this work is thus to analyze the facility location problem as proposed by different authors and to define a model that is as well suited as possible to the fuel distribution market in Brazil. To this end, the refining and distribution flow practiced in this segment and the formation of the corresponding transportation cost were analyzed. Constraints such as storage capacity, range of products offered, and levels of the distribution hierarchy were considered. From this analysis, a mathematical model was defined and applied to reducing freight costs while taking the tax burden into account. The mathematical model was implemented in the C language and allows the problem to be simulated. Parallel computing techniques were applied to reduce the execution time of the algorithm. The results obtained with the Single Uncapacitated Facility Location Problem (SUFLP) model, simulated in the two versions of the program, sequential and parallel, show gains of up to 5% in cost savings and a reduction of more than 50% in execution time.
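The abstract does not show the C implementation; purely as an illustration of the kind of loop-level parallelism that can cut the execution time of such a model, the sketch below evaluates every candidate site of a single uncapacitated facility location instance in parallel with OpenMP. The array layout, cost structure, and the use of OpenMP are assumptions, not the authors' code.

    /* Illustrative C sketch (not the dissertation's code): pick the single best
       site by total cost = fixed opening cost + demand-weighted freight cost,
       with the candidate loop parallelized via OpenMP. */
    #include <float.h>
    #include <omp.h>

    int best_site(int n_sites, int n_clients,
                  const double fixed_cost[],      /* opening cost per site    */
                  const double freight[],         /* freight[s*n_clients + c] */
                  const double demand[])          /* demand per client        */
    {
        double best_cost = DBL_MAX;
        int best = -1;

        #pragma omp parallel
        {
            double local_best = DBL_MAX;
            int local_site = -1;

            #pragma omp for nowait
            for (int s = 0; s < n_sites; ++s) {
                double cost = fixed_cost[s];
                for (int c = 0; c < n_clients; ++c)
                    cost += demand[c] * freight[s * n_clients + c];
                if (cost < local_best) { local_best = cost; local_site = s; }
            }

            #pragma omp critical              /* reduce the per-thread minima */
            if (local_best < best_cost) { best_cost = local_best; best = local_site; }
        }
        return best;
    }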
Abstract:
Boltzmann machines offer a new and exciting approach to automatic speech recognition, and provide a rigorous mathematical formalism for parallel computing arrays. In this paper we briefly summarize Boltzmann machine theory, and present results showing their ability to recognize both static and time-varying speech patterns. A machine with 2000 units was able to distinguish between the 11 steady-state vowels in English with an accuracy of 85%. The stability of the learning algorithm and methods of preprocessing and coding speech data before feeding it to the machine are also discussed. A new type of unit called a carry input unit, which involves a type of state-feedback, was developed for the processing of time-varying patterns and this was tested on a few short sentences. Use is made of the implications of recent work into associative memory, and the modelling of neural arrays to suggest a good configuration of Boltzmann machines for this sort of pattern recognition.
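For readers unfamiliar with the update rule summarized above, the following minimal C sketch shows the standard stochastic update of a single Boltzmann machine unit, which switches on with probability 1/(1 + exp(-dE/T)). Identifiers and the dense weight layout are illustrative assumptions, not the paper's implementation.

    /* Illustrative C sketch of the basic Boltzmann machine update rule. */
    #include <math.h>
    #include <stdlib.h>

    void update_unit(int i, int n, int state[],
                     const double w[],     /* symmetric weights, w[i*n + j]    */
                     const double bias[],  /* per-unit bias                    */
                     double T)             /* temperature                      */
    {
        double dE = bias[i];               /* energy gap for turning unit i on */
        for (int j = 0; j < n; ++j)
            if (j != i) dE += w[i * n + j] * state[j];

        double p_on = 1.0 / (1.0 + exp(-dE / T));
        state[i] = ((double)rand() / RAND_MAX) < p_on ? 1 : 0;
    }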
Abstract:
An approach to rapid hologram generation for realistic three-dimensional (3-D) image reconstruction based on the angular tiling concept is proposed, using a new graphics rendering approach integrated with a previously developed layer-based method for hologram calculation. A 3-D object is simplified as layered cross-sectional images perpendicular to a chosen viewing direction, and our graphics rendering approach allows the incorporation of clear depth cues, occlusion, and shading in the generated holograms for angular tiling. The combination of these techniques together with parallel computing reduces the computation time of a single-view hologram for a 3-D image of extended graphics array resolution to 176 ms using a single consumer graphics processing unit card.
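As a hedged sketch of one building block of a layer-based hologram calculation on the GPU (not the authors' code), the CUDA-C kernel below applies the angular-spectrum transfer function to one layer's spectrum before an inverse FFT (e.g., with cuFFT) and accumulation into the hologram plane. The sampling pitch, wavelength handling, and all identifiers are assumptions made for illustration.

    /* Illustrative CUDA-C kernel: multiply a layer's spectrum by the
       angular-spectrum transfer function for propagation over distance z. */
    #include <cufft.h>

    __global__ void apply_transfer_function(cufftComplex *spec, int nx, int ny,
                                            float pitch, float lambda, float z)
    {
        int ix = blockIdx.x * blockDim.x + threadIdx.x;
        int iy = blockIdx.y * blockDim.y + threadIdx.y;
        if (ix >= nx || iy >= ny) return;

        /* spatial frequencies (FFT layout: negative frequencies in upper half) */
        float fx = (ix < nx / 2 ? ix : ix - nx) / (nx * pitch);
        float fy = (iy < ny / 2 ? iy : iy - ny) / (ny * pitch);

        float arg = 1.0f / (lambda * lambda) - fx * fx - fy * fy;
        if (arg < 0.0f) {                    /* evanescent component: discard */
            spec[iy * nx + ix].x = 0.0f;
            spec[iy * nx + ix].y = 0.0f;
            return;
        }
        float phase = 2.0f * 3.14159265f * z * sqrtf(arg);
        float c = cosf(phase), s = sinf(phase);

        cufftComplex v = spec[iy * nx + ix]; /* complex multiply by exp(i*phase) */
        spec[iy * nx + ix].x = v.x * c - v.y * s;
        spec[iy * nx + ix].y = v.x * s + v.y * c;
    }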
Abstract:
Numerical modeling of groundwater is very important for understanding groundwater flow and solving hydrogeological problems. Today, groundwater studies require massive numbers of model cells and high calculation accuracy, which are beyond a single-CPU computer's capabilities. With the development of high-performance parallel computing technologies, applying parallel computing methods to the numerical modeling of groundwater flow has become necessary and important. Parallel computing improves the ability to resolve various hydrogeological and environmental problems. In this study, parallel computing methods on the two main types of modern parallel computer architecture, shared-memory parallel systems and distributed shared-memory parallel systems, are discussed. OpenMP and MPI (PETSc) are both used to parallelize the most widely used groundwater simulator, MODFLOW. Two parallel solvers, P-PCG and P-MODFLOW, were developed for MODFLOW. The parallelized MODFLOW was used to simulate regional groundwater flow in Beishan, Gansu Province, a potential high-level radioactive waste geological disposal area in China.
1. The OpenMP programming paradigm was used to parallelize the PCG (preconditioned conjugate-gradient) solver, one of the main solvers for MODFLOW. The parallel PCG solver, P-PCG, was verified on an 8-processor computer. Both the impact of compilers and different model domain sizes were considered in the numerical experiments; the largest test model has 1000 columns, 1000 rows and 1000 layers. Based on the timing results, the P-PCG solver typically runs about 1.40 to 5.31 times faster than the serial one. In addition, the simulation results are exactly the same as with the original PCG solver, because the majority of the serial code was not changed. It is worth noting that this parallelization approach reduces software maintenance costs, because only a single-source PCG solver code needs to be maintained in the MODFLOW source tree.
2. P-MODFLOW, a domain-decomposition-based model implemented in a parallel computing environment, was developed to allow efficient simulation of regional-scale groundwater flow. The basic approach partitions a large model domain into any number of sub-domains, and parallel processors are used to solve the model equations within each sub-domain. Using domain decomposition to bring MODFLOW to distributed shared-memory parallel computing systems extends its application to the most widely used cluster systems, so that a large-scale simulation can take full advantage of hundreds or even thousands of parallel processors. P-MODFLOW shows good parallel performance, with a maximum speedup of 18.32 (14 processors). Superlinear speedups were achieved in the parallel tests, indicating the efficiency and scalability of the code. Parallel program design, load balancing and full use of PETSc were considered to achieve a highly efficient parallel program.
3. The characterization of a regional groundwater flow system is very important for high-level radioactive waste geological disposal. The Beishan area, located in northwestern Gansu Province, China, was selected as a potential site for a disposal repository. The area covers about 80,000 km² and has complicated hydrogeological conditions, which greatly increase the computational effort of regional groundwater flow models. In order to reduce computing time, the parallel computing scheme was applied to regional groundwater flow modeling. Models with over 10 million cells were used to simulate how faults and different recharge conditions affect the regional groundwater flow pattern. The results of this study provide regional groundwater flow information for the site characterization of the potential high-level radioactive waste disposal site.
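As an illustration of the loop-level OpenMP parallelism described for the P-PCG solver above (a generic sketch, not MODFLOW code), the dominant costs of a preconditioned conjugate-gradient iteration, the sparse matrix-vector product and the dot products, can be parallelized over rows as follows; the CSR arrays and identifiers are assumptions.

    /* Illustrative C sketch: loop-level OpenMP parallelism inside a PCG solver. */
    #include <omp.h>

    void csr_matvec(int n, const int row_ptr[], const int col_idx[],
                    const double val[], const double x[], double y[])
    {
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < n; ++i) {
            double sum = 0.0;
            for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
                sum += val[k] * x[col_idx[k]];
            y[i] = sum;
        }
    }

    /* Dot products inside the CG iteration parallelize the same way: */
    double dot(int n, const double a[], const double b[])
    {
        double s = 0.0;
        #pragma omp parallel for reduction(+ : s)
        for (int i = 0; i < n; ++i)
            s += a[i] * b[i];
        return s;
    }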
Abstract:
Large earthquakes, such as the 1960 Chile earthquake and the Sumatra-Andaman earthquake of Dec 26, 2004, have excited the Earth's free oscillations. The eigenfrequencies of the Earth's free oscillations are closely related to the Earth's internal structure. Conventional methods, which mainly calculate the eigenfrequencies analytically, and the analysis of observations cannot easily address the whole process from earthquake occurrence to the excitation of the Earth's free oscillations. Therefore, we use numerical methods combined with large-scale parallel computing to study the Earth's free oscillations excited by giant earthquakes. We first review research and developments on the Earth's free oscillations and the basic theory in a spherical coordinate system. We then review the numerical simulation of seismic wave propagation and the basic theory of the spectral element method for simulating global seismic wave propagation. As a first step toward studying the Earth's free oscillations, we use a finite element method to simulate the propagation of elastic waves and the generation of oscillations in the chime bell of Marquis Yi of Zeng, which has an oval cross-section, by striking different parts of the bell. The bronze chime bells of Marquis Yi of Zeng are precious cultural relics of China. The bells have a two-tone acoustic characteristic, i.e., striking different parts of the bell generates different tones. By analyzing the vibration in the bell and its spectrum, we further the understanding of the mechanism of the two-tone acoustic characteristic of the chime bell of Marquis Yi of Zeng. The preliminary calculations clearly show that two different modes of oscillation can be generated by striking different parts of the bell, and indicate that finite element simulation of wave propagation and two-tone generation in the chime bell of Marquis Yi of Zeng is feasible. These analyses provide a new quantitative and visual way to explain the mystery of the two-tone acoustic characteristic. The method suggested by this study can be applied to simulate free oscillations excited by great earthquakes in a complex Earth structure. Given the large-scale structure of the Earth, small-scale, low-precision numerical simulation cannot meet the requirements. With the increasing capacity of high-performance parallel computing and progress on fully numerical solutions for seismic wave fields in realistic three-dimensional spherical models, the spectral element method and high-performance parallel computing were combined to simulate seismic wave propagation in the Earth's interior, without the effects of the Earth's gravitational potential. The numerical simulation shows that the calculated toroidal modes agree well with the theoretical values; although the accuracy of our results is limited, the calculated peaks are only slightly distorted by three-dimensional effects. There are much larger differences between our calculated spheroidal modes and the theoretical values, because the effect of the Earth's gravitation is not included in the numerical model, which makes our values smaller than the theoretical ones. When the angular order is small, the effect of the Earth's gravitation makes the periods of the spheroidal modes shorter.
Although we cannot yet include the effects of the Earth's gravitational potential in the numerical model when simulating the spheroidal oscillations, these results still demonstrate that numerical simulation of the Earth's free oscillations is feasible. We simulate the Earth's free oscillations under a spherically symmetric Earth model using different source mechanisms. The results show quantitatively that the free oscillations excited by different earthquakes differ, and that for a free oscillation excited by the same earthquake, the oscillations at different locations differ. We also explore how attenuation in the Earth's medium affects the free oscillations, and compare the results with observations. Medium attenuation does influence the Earth's free oscillations, though its effect on the lower-frequency fundamental oscillations is weak. Finally, taking the 2008 Wenchuan earthquake as an example, we employ the spectral element method together with large-scale parallel computing to investigate the characteristics of the seismic wave propagation excited by that earthquake. We calculate synthetic seismograms with a one-point source model and a three-point source model, respectively. Full 3-D visualization of the numerical results displays the evolution of the seismic wave propagation in time. The three-point source, proposed by the latest investigations based on field observation and inversion, demonstrates the spatial and temporal characteristics of the source rupture process better than the one-point source. Preliminary results show that the synthetic signals calculated from the three-point source agree well with observations. This further indicates that the source rupture process of the Wenchuan earthquake was a multi-stage rupture composed of at least three stages. In conclusion, numerical simulation can not only handle problems involving the Earth's ellipticity and anisotropy, which can also be treated by conventional methods, but can ultimately also handle problems involving topography and lateral heterogeneity. We will try to fully implement self-gravitation in the spectral element method in the future, and continue studying the Earth's free oscillations with numerical simulations to see how the Earth's lateral heterogeneity affects them. This will make it possible to bring modal spectral data increasingly to bear on furthering our understanding of the Earth's three-dimensional structure.
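The comparison between calculated and theoretical eigenfrequencies mentioned above is typically made by reading peaks off the amplitude spectrum of a long synthetic seismogram. The following C sketch is purely illustrative (not the authors' code) and uses a naive DFT; a real application would use an FFT library.

    /* Illustrative C sketch: amplitude spectrum of a seismogram u[0..n-1]
       sampled every dt seconds, from which free-oscillation peaks can be read. */
    #include <math.h>

    void amplitude_spectrum(const double u[], int n, double dt,
                            double freq[], double amp[], int nf /* nf <= n/2 */)
    {
        const double pi = 3.14159265358979323846;
        for (int k = 0; k < nf; ++k) {
            double re = 0.0, im = 0.0;
            for (int j = 0; j < n; ++j) {
                double phase = 2.0 * pi * k * j / n;
                re += u[j] * cos(phase);
                im -= u[j] * sin(phase);
            }
            freq[k] = k / (n * dt);              /* Hz */
            amp[k]  = sqrt(re * re + im * im);
        }
    }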
Abstract:
The simulation of subsonic aeroacoustic problems such as the flow-generated sound of wind instruments is well suited for parallel computing on a cluster of non-dedicated workstations. Simulations are demonstrated which employ 20 non-dedicated Hewlett-Packard workstations (HP9000/715), and achieve performance on this problem comparable to that of a 64-node CM-5 dedicated supercomputer with vector units. The success of the present approach depends on the low communication requirements of the problem (low communication-to-computation ratio), which arise from the coarse-grain decomposition of the problem and the use of local-interaction methods. Many important problems may be suitable for this type of parallel computing, including computer vision, circuit simulation, and other subsonic flow problems.
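As a hedged sketch of the coarse-grain decomposition and local-interaction communication pattern described above, the C routine below exchanges only one boundary ("ghost") plane with each neighboring workstation per time step, so communication grows with a subdomain's surface while computation grows with its volume. MPI is used purely for illustration; the abstract does not state which message-passing library the authors employed, and all identifiers are assumptions.

    /* Illustrative C sketch: 1-D domain decomposition ghost-plane exchange.
       u holds local_planes owned planes plus one ghost plane at each end. */
    #include <mpi.h>

    void exchange_ghost_planes(double *u, int local_planes, int plane_size,
                               int rank, int nprocs, MPI_Comm comm)
    {
        int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
        int right = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

        /* send first owned plane left, receive right ghost plane */
        MPI_Sendrecv(u + plane_size, plane_size, MPI_DOUBLE, left, 0,
                     u + (local_planes + 1) * plane_size, plane_size, MPI_DOUBLE, right, 0,
                     comm, MPI_STATUS_IGNORE);
        /* send last owned plane right, receive left ghost plane */
        MPI_Sendrecv(u + local_planes * plane_size, plane_size, MPI_DOUBLE, right, 1,
                     u, plane_size, MPI_DOUBLE, left, 1,
                     comm, MPI_STATUS_IGNORE);
    }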
Abstract:
Direct simulations of wind musical instruments using the compressible Navier-Stokes equations have recently become possible through the use of parallel computing and through developments in numerical methods. As a first demonstration, the flow of air and the generation of musical tones inside a soprano recorder are simulated numerically. In addition, physical measurements are made of the acoustic signal generated by the recorder at different blowing speeds. The comparison between simulated and physically measured behavior is encouraging and points towards ways of improving the simulations.
Abstract:
Coherent shared memory is a convenient, but inefficient, method of inter-process communication for parallel programs. By contrast, message passing can be less convenient, but more efficient. To get the benefits of both models, several non-coherent memory behaviors have recently been proposed in the literature. We present an implementation of Mermera, a shared memory system that supports both coherent and non-coherent behaviors in a manner that enables programmers to mix multiple behaviors in the same program [HS93]. A programmer can debug a Mermera program using coherent memory, and then improve its performance by selectively reducing the level of coherence in the parts that are critical to performance. Mermera permits a trade-off of coherence for performance. We analyze this trade-off through measurements of our implementation, and by an example that illustrates the style of programming needed to exploit non-coherence. We find that, even on a small network of workstations, the performance advantage of non-coherence is compelling. Raw non-coherent memory operations perform 20-40 times better than coherent memory operations. An example application program is shown to run 5-11 times faster when permitted to exploit non-coherence. We conclude by commenting on our use of the Isis Toolkit of multicast protocols in implementing Mermera.
Abstract:
Parallel computing on a network of workstations can saturate the communication network, leading to excessive message delays and consequently poor application performance. We examine empirically the consequences of integrating a flow control protocol, called Warp control [Par93], into Mermera, a software shared memory system that supports parallel computing on distributed systems [HS93]. For an asynchronous iterative program that solves a system of linear equations, our measurements show that Warp succeeds in stabilizing the network's behavior even under high levels of contention. As a result, the application achieves a higher effective communication throughput, and a reduced completion time. In some cases, however, Warp control does not achieve the performance attainable by fixed size buffering when using a statically optimal buffer size. Our use of Warp to regulate the allocation of network bandwidth emphasizes the possibility for integrating it with the allocation of other resources, such as CPU cycles and disk bandwidth, so as to optimize overall system throughput, and enable fully-shared execution of parallel programs.
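The asynchronous iterative linear solver referred to above is of the chaotic-relaxation kind sketched below: each thread keeps sweeping over its own block of unknowns using whatever values of x are currently visible in shared memory, with no barrier between sweeps, which is precisely the access pattern that tolerates reduced coherence. The code is a generic C/OpenMP illustration, not the paper's program, and the deliberate data race on x is the point of the example.

    /* Illustrative C sketch: asynchronous (chaotic) Jacobi-style relaxation for
       a dense system A x = b; the unsynchronized reads of x[j] are intentional. */
    #include <omp.h>

    void async_jacobi(int n, const double A[], const double b[],
                      double x[], int sweeps_per_thread)
    {
        #pragma omp parallel
        {
            int nt  = omp_get_num_threads();
            int tid = omp_get_thread_num();
            int lo  = tid * n / nt, hi = (tid + 1) * n / nt;

            for (int sweep = 0; sweep < sweeps_per_thread; ++sweep) {
                for (int i = lo; i < hi; ++i) {
                    double s = b[i];
                    for (int j = 0; j < n; ++j)
                        if (j != i) s -= A[i * n + j] * x[j];  /* possibly stale x[j] */
                    x[i] = s / A[i * n + i];                   /* no barrier between sweeps */
                }
            }
        }
    }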