163 results for structured parallel computations


Relevance: 20.00%

Abstract:

Modeling the performance behavior of parallel applications in order to predict their execution times for larger problem sizes and numbers of processors has been an active area of research for several years. Existing curve-fitting strategies for performance modeling use data from experiments conducted under uniform load conditions; hence the accuracy of these models degrades when the load conditions on the machines and the network change. In this paper, we analyze a curve-fitting model that attempts to predict execution times for any load conditions that may exist on the systems during application execution. Based on experiments conducted with the model for a parallel eigenvalue problem, we propose a multi-dimensional curve-fitting model based on rational polynomials for performance prediction of parallel applications in non-dedicated environments. We used the rational-polynomial-based model to predict execution times for two other parallel applications on systems with large load dynamics. In all cases, the model gave good predictions of execution times, with average percentage prediction errors of less than 20%.
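The fitting step this abstract describes can be sketched in a few lines. The following is a minimal illustration, not the paper's actual model: it fits a low-degree rational polynomial in the processor count to hypothetical timing data using scipy.optimize.curve_fit and extrapolates to a larger machine; the polynomial degrees, the data, and the function names are all assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

def rational_model(p, a0, a1, a2, b1):
    """Degree-(2,1) rational polynomial in the processor count p
    (an illustrative choice, not the paper's)."""
    return (a0 + a1 * p + a2 * p**2) / (1.0 + b1 * p)

# Hypothetical measurements: processor counts and observed run times (s).
procs = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
times = np.array([120.0, 65.0, 38.0, 24.0, 18.0])

# Least-squares fit of the rational coefficients.
params, _ = curve_fit(rational_model, procs, times, p0=[100, 1, 0.01, 0.1])

# Extrapolate to a machine size outside the measured range.
print(f"predicted time on 64 processors: {rational_model(64.0, *params):.1f} s")
```

A multi-dimensional version would take (problem size, processor count) pairs and a rational polynomial in both variables; the one-variable case keeps the sketch short.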

Relevance: 20.00%

Abstract:

The Morse-Smale complex is a useful topological data structure for the analysis and visualization of scalar data. This paper describes an algorithm that processes all mesh elements of the domain in parallel to compute the Morse-Smale complex of large two-dimensional data sets at interactive speeds. We employ a reformulation of the Morse-Smale complex using Forman's Discrete Morse Theory and achieve scalability by computing the discrete gradient using local accesses only. We also introduce a novel approach to merge gradient paths that ensures accurate geometry of the computed complex. We demonstrate that our algorithm performs well both in multicore environments and on massively parallel architectures such as the GPU.
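The scalability claim rests on the discrete gradient being computable with local accesses only. Below is a heavily simplified sketch of that locality idea: each interior vertex inspects only its immediate neighbors, so all vertices can be processed independently (the parallelism is expressed here through numpy vectorization as a stand-in for the paper's GPU/multicore implementation). The steepest-lower-neighbor rule is an illustrative simplification, not Forman's actual pairing construction.

```python
import numpy as np

def steepest_descent_direction(f):
    """For each interior grid vertex, index (0..3) of the steepest lower
    axis-aligned neighbor; -1 marks a local minimum among those
    neighbors. Purely local, hence trivially parallel."""
    c = f[1:-1, 1:-1]
    nbrs = np.stack([f[:-2, 1:-1], f[2:, 1:-1], f[1:-1, :-2], f[1:-1, 2:]])
    drops = c - nbrs                  # positive where a neighbor is lower
    best = drops.argmax(axis=0)       # direction of steepest descent
    return np.where(drops.max(axis=0) > 0, best, -1)

f = np.random.rand(512, 512)          # synthetic scalar field
directions = steepest_descent_direction(f)
print("interior vertices that are local minima:",
      int(np.count_nonzero(directions == -1)))
```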

Relevance: 20.00%

Abstract:

A unique code (called Hensel's code) is derived for a rational number by truncating its infinite p-adic expansion. The four basic arithmetic algorithms for these codes are described and their application to rational matrix computations is demonstrated by solving a system of linear equations exactly, using the Gaussian elimination procedure.
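The end goal, solving a linear system exactly over the rationals, can be illustrated directly in Python using Fraction in place of Hensel-code arithmetic. Note the trade-off this glosses over: truncated p-adic codes keep operands at a fixed size, whereas the numerators and denominators below may grow without bound. A minimal sketch:

```python
from fractions import Fraction

def solve_exact(A, b):
    """Gaussian elimination with exact rational arithmetic."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(y)]
         for row, y in zip(A, b)]                 # augmented matrix
    for k in range(n):
        # Pivoting: find a nonzero pivot in column k.
        piv = next(i for i in range(k, n) if M[i][k] != 0)
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

print(solve_exact([[2, 1], [1, 3]], [3, 5]))      # exact: [4/5, 7/5]
```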

Relevance: 20.00%

Abstract:

A finite element method for solving multidimensional population balance systems is proposed, in which the balance of fluid velocity, temperature and solute partial density is treated as a two-dimensional system and the balance of the particle size distribution as a three-dimensional one. The method is based on a dimensional splitting into physical space and internal property variables. In addition, operator splitting makes it possible to decouple the equations for temperature, solute partial density and particle size distribution. Further, a nodal-point-based parallel finite element algorithm for multidimensional population balance systems is presented. The method is applied to study a crystallization process, assuming for simplicity a size-independent growth rate and neglecting agglomeration and breakage of particles. Simulations for different wall temperatures are performed to show the effect of cooling on the crystal growth. Although the method is described in detail only for the case of d = 2 spatial and s = 1 internal property variables, it can potentially be extended to d + s variables, d = 2, 3 and s >= 1.
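The decoupling idea can be illustrated with the simplest form of operator splitting. The toy below applies Lie splitting to a scalar ODE whose right-hand side is a sum of two terms, standing in for the physical-space and internal-property operators; everything here is illustrative, and for these commuting toy operators the splitting happens to be exact, while in the paper's setting a splitting error appears.

```python
import math

a, b = 0.7, 0.3              # stand-ins for the two split operators
u = 1.0                      # initial value
steps, dt = 1000, 1e-3

for _ in range(steps):
    u *= math.exp(-a * dt)   # substep 1: du/dt = -a*u, solved exactly
    u *= math.exp(-b * dt)   # substep 2: du/dt = -b*u, solved exactly

print(u, math.exp(-(a + b) * steps * dt))   # split vs. unsplit solution
```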

Relevance: 20.00%

Abstract:

The questions that one should answer in engineering computations (deterministic, probabilistic/randomized, or heuristic) are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used to obtain them. Absolutely error-free quantities, and the completely errorless computations carried out in a natural process, can never be captured by any means at our disposal. While computations in nature, including their real-valued inputs, are exact, the computations we perform on a digital computer or in embedded form are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it; as a matter of hypothesis rather than assumption, this error is not less than 0.005 per cent. Here, by error we mean relative error bounds: since the exact error is never known under any circumstances or in any context, the term error effectively denotes error bounds. Further, in engineering computations it is the relative error, or equivalently the relative error bounds (and not the absolute error), that is supremely important in conveying the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, whereas in our modelling of natural problems we may introduce inconsistency or near-inconsistency through human error, through the inherent non-removable error associated with any measuring device, or through assumptions introduced to make the problem solvable, or more easily solvable, in practice. Thus, if we discover any inconsistency, or possibly near-inconsistency, in a mathematical model, it is certainly due to one or more of these three factors. We do, however, proceed to solve such inconsistent/near-inconsistent problems and obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It characterizes the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, namely the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
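The point about relative error bounds and interval arithmetic can be made concrete with a minimal sketch: track guaranteed lower/upper bounds through each operation and read off the relative error bound at the end. The 0.005% instrument-error figure follows the talk's hypothesis; the class below is an illustration, not a production interval library.

```python
class Interval:
    """Minimal interval type supporting + and * (illustrative only)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def rel_error_bound(self):
        mid = (self.lo + self.hi) / 2.0
        return (self.hi - self.lo) / (2.0 * abs(mid))

def measured(x, rel=5e-5):
    """Wrap a reading with the hypothesized 0.005% relative error."""
    return Interval(x * (1.0 - rel), x * (1.0 + rel))

# Relative error bound of a product of two measured quantities (~1e-4):
print(f"{(measured(2.0) * measured(3.0)).rel_error_bound():.2e}")
```

Note that interval bounds are guaranteed but can be pessimistic; this overestimation is one of the limitations the talk refers to.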

Relevance: 20.00%

Abstract:

A grid adaptation strategy has been developed for codes based on unstructured data that employ a combination of hexahedral and prismatic elements; the strategy is generalizable to tetrahedral and pyramidal elements.

Relevance: 20.00%

Abstract:

Parallel sub-word recognition (PSWR) is a new model proposed for language identification (LID) that does not need elaborate phonetic labeling of the speech data in a foreign language. The new approach performs front-end tokenization in terms of sub-word units that are designed by automatic segmentation, segment clustering and segment HMM modeling. We develop PSWR-based LID in a framework similar to the parallel phone recognition (PPR) approach in the literature, comprising a front-end tokenizer and a back-end language model for each language to be identified. Considering various combinations of the statistical evaluation scores, we find that PSWR can perform as well as PPR, even with broad acoustic sub-word tokenization, making it an efficient alternative to the PPR system.
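Schematically, the PPR/PSWR decision rule runs each candidate language's tokenizer and back-end model over the utterance and picks the highest-scoring language. The sketch below uses add-alpha-smoothed bigram models and character-level tokenizers as hypothetical placeholders for the segment-HMM front end and trained language models of the real system.

```python
import math

def bigram_log_score(tokens, counts, vocab_size, alpha=1.0):
    """Add-alpha-smoothed bigram log-likelihood of a token sequence."""
    score = 0.0
    for a, b in zip(tokens, tokens[1:]):
        num = counts.get((a, b), 0) + alpha
        den = sum(c for (x, _), c in counts.items() if x == a) \
              + alpha * vocab_size
        score += math.log(num / den)
    return score

def identify(utterance, systems):
    """systems maps language -> (tokenizer, bigram_counts, vocab_size)."""
    scores = {lang: bigram_log_score(tokenize(utterance), counts, v)
              for lang, (tokenize, counts, v) in systems.items()}
    return max(scores, key=scores.get)

# Toy "systems": character tokenizers and hand-made bigram counts.
systems = {
    "lang_A": (lambda u: list(u), {("a", "b"): 3, ("b", "a"): 2}, 4),
    "lang_B": (lambda u: list(u), {("a", "a"): 3, ("b", "b"): 2}, 4),
}
print(identify("abab", systems))   # -> lang_A for this toy input
```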

Relevance: 20.00%

Abstract:

Laminar separation bubbles are thought to be highly non-parallel, and hence global stability studies start from this premise. However, experimentalists have always realized that the flow is more parallel than is commonly believed, for pressure-gradient-induced bubbles, and this is why linear parallel stability theory has been successful in describing their early stages of transition. The present experimental/numerical study re-examines this important issue and finds that the base flow in such a separation bubble becomes nearly parallel due to a strong-interaction process between the separated boundary layer and the outer potential flow. The so-called dead-air region or the region of constant pressure is a simple consequence of this strong interaction. We use triple-deck theory to qualitatively explain these features. Next, the implications of global analysis for the linear stability of separation bubbles are considered. In particular we show that in the initial portion of the bubble, where the flow is nearly parallel, local stability analysis is sufficient to capture the essential physics. It appears that the real utility of the global analysis is perhaps in the rear portion of the bubble, where the flow is highly non-parallel, and where the secondary/nonlinear instability stages are likely to dominate the dynamics.

Relevance: 20.00%

Abstract:

Many common activities, like reading, scanning scenes, or searching for an inconspicuous item in a cluttered environment, entail serial movements of the eyes that shift the gaze from one object to another. Previous studies have shown that the primate brain is capable of programming sequential saccadic eye movements in parallel. Given that the onsets of saccades directed to a target are unpredictable in individual trials, it remains unclear what, during parallel programming, prevents a saccade from being executed toward the second target before the saccade toward the first target. Using a computational model, here we demonstrate that sequential saccades inhibit each other and share the brain's limited processing resources (capacity), so that the planning of a saccade toward the first target always finishes first. In this framework, the latency of a saccade increases linearly with the fraction of capacity allocated to the other saccade in the sequence, and exponentially with the duration of capacity sharing. Our study establishes a link between the dual-task paradigm and the ramp-to-threshold model of response time to identify a physiologically viable mechanism that preserves the serial order of saccades without compromising the speed of performance.
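A toy version of the capacity-sharing account can be written down directly: two linear ramps to a common threshold split the available rate while their planning intervals overlap. With a linear ramp, the latency works out to threshold/rate + overlap * (1 - share), i.e., linear in the capacity fraction given to the other saccade, consistent with the paper's linear relation; the exponential dependence on the sharing duration requires the fuller model. All numbers below are illustrative.

```python
def latency(rate, threshold, share, overlap):
    """Time for a linear ramp to reach `threshold` when only `share`
    of `rate` is available during the first `overlap` ms."""
    ramped = rate * share * overlap          # progress made while sharing
    if ramped >= threshold:
        return threshold / (rate * share)    # finished during the overlap
    return overlap + (threshold - ramped) / rate

# First plan keeps 70% of capacity, second gets 30%, sharing for 60 ms.
t1 = latency(rate=1.0, threshold=100.0, share=0.7, overlap=60.0)  # 118 ms
t2 = latency(rate=1.0, threshold=100.0, share=0.3, overlap=60.0)  # 142 ms
print(t1, t2)    # the first saccade's plan reaches threshold first
```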

Relevance: 20.00%

Abstract:

In this paper, we address a scheduling problem, observed in automobile gear manufacturing, for minimizing total weighted flowtime. Specifically, we deal with scheduling the bottleneck operation of the pre-heat-treatment stage of the gear manufacturing process. Many real-life features, such as unequal release times, sequence-dependent setup times, and machine eligibility restrictions, are considered. A mathematical model taking dynamic starting conditions into account is proposed, and the problem is shown to be NP-hard. To approach it, several heuristic algorithms are proposed. Based on planned computational experiments, the performance of the proposed heuristics is evaluated (a) against the optimal solution for small problem instances and (b) against a statistically estimated optimal solution for large problem instances. Extensive computational analyses reveal that the proposed heuristic algorithms consistently yield solutions close to the statistically estimated optimum in reasonable computational time.
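For concreteness, a generic greedy dispatching rule for this problem class (unequal release times, sequence-dependent setups, machine eligibility) can be sketched as below. It is a WSPT-flavoured heuristic, not one of the specific heuristics proposed in the paper; the data structures and the toy instance are assumptions made for the sketch.

```python
def greedy_schedule(jobs, machines, setup, eligible):
    """jobs: {j: (release, proc_time, weight)};
    setup[(prev, j)] is the sequence-dependent setup time (prev may be
    None for a machine's first job); eligible[j] is the set of machines
    allowed for job j. Returns the total weighted flowtime."""
    free_at = {m: 0.0 for m in machines}   # when each machine frees up
    last = {m: None for m in machines}     # last job run on each machine
    done, total = set(), 0.0
    while len(done) < len(jobs):
        best = None
        for j, (release, proc, weight) in jobs.items():
            if j in done:
                continue
            for m in eligible[j]:
                start = max(release, free_at[m]) + setup[(last[m], j)]
                finish = start + proc
                # WSPT-flavoured priority: earliest weighted completion.
                key = finish / weight
                if best is None or key < best[0]:
                    best = (key, j, m, finish)
        _, j, m, finish = best
        free_at[m], last[m] = finish, j
        total += jobs[j][2] * (finish - jobs[j][0])   # weighted flowtime
        done.add(j)
    return total

# Toy instance: three jobs, two machines, constant 0.5 setup times.
jobs = {1: (0.0, 4.0, 2.0), 2: (1.0, 3.0, 1.0), 3: (0.0, 2.0, 3.0)}
setup = {(a, b): 0.5 for a in [None, 1, 2, 3] for b in [1, 2, 3]}
eligible = {1: {"M1"}, 2: {"M1", "M2"}, 3: {"M2"}}
print(greedy_schedule(jobs, ["M1", "M2"], setup, eligible))
```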