82 results for parallel applications

at University of Queensland eSpace - Australia


Relevance:

40.00%

Publisher:

Abstract:

In this and a preceding paper, we provide an introduction to the Fujitsu VPP range of vector-parallel supercomputers and to some of the computational chemistry software available for the VPP. Here, we consider the implementation and performance of seven popular chemistry application packages. The codes discussed range from classical molecular dynamics to semiempirical and ab initio quantum chemistry. All have evolved from sequential codes, and have typically been parallelised using a replicated data approach. As such they are well suited to the large-memory/fast-processor architecture of the VPP. For one code, CASTEP, a distributed-memory data-driven parallelisation scheme is presented. (C) 2000 Published by Elsevier Science B.V. All rights reserved.
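
The replicated-data scheme mentioned here is easy to sketch: every process keeps a full copy of the data, computes a disjoint share of the work, and a global reduction assembles the result. A minimal illustration using mpi4py follows (the VPP codes themselves are Fortran; the kernel and names below are hypothetical stand-ins):

```python
# Minimal sketch of the replicated-data parallelisation pattern, using
# mpi4py for illustration only; the computational kernel is a placeholder.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 1000                       # problem dimension (e.g. basis functions)
data = np.ones((n, n))         # every process holds a full copy of the data

# Each process computes its share of the work over the replicated data...
local = np.zeros((n, n))
for i in range(rank, n, size):         # round-robin work division
    local[i, :] = data[i, :] * 2.0     # stand-in for the real kernel

# ...and a global reduction assembles the complete result on every node.
result = np.zeros_like(local)
comm.Allreduce(local, result, op=MPI.SUM)
```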

Relevance:

30.00%

Publisher:

Abstract:

A parallel computing environment to support optimization of large-scale engineering systems is designed and implemented on Windows-based personal computer networks, using the master-worker model and the Parallel Virtual Machine (PVM). It involves decomposing a large engineering system into a number of smaller subsystems that are optimized in parallel on worker nodes, and coordinating the subsystem optimization results on the master node. The environment consists of six functional modules: the master control, the optimization model generator, the optimizer, the data manager, the monitor, and the post-processor. An object-oriented design of these modules is presented. The environment supports all steps from the generation of optimization models to their solution and visualization on networks of computers. User-friendly graphical interfaces make it easy to define the problem and to monitor and steer the optimization process. The environment has been verified with an example of large space truss optimization. (C) 2004 Elsevier Ltd. All rights reserved.
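
As a rough illustration of the master-worker coordination pattern described above (using Python's multiprocessing in place of PVM; `subsystem_optimize` and the coordination step are hypothetical placeholders, not the environment's actual modules):

```python
# Master-worker sketch: subsystems are optimized in parallel on workers,
# and the master coordinates the results. Placeholder logic throughout.
from multiprocessing import Pool

def subsystem_optimize(subsystem):
    """Optimize one decomposed subsystem on a worker node (placeholder)."""
    design, cost = subsystem, sum(subsystem)
    return design, cost

def master(subsystems, n_workers=4):
    with Pool(n_workers) as pool:
        # Workers optimize the decomposed subsystems in parallel.
        results = pool.map(subsystem_optimize, subsystems)
    # Master coordinates the subsystem results (placeholder: total cost).
    return sum(cost for _, cost in results)

if __name__ == "__main__":
    total = master([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
```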

Relevance:

30.00%

Publisher:

Abstract:

Granulation is one of the fundamental operations in particulate processing and has a very ancient history and widespread use. Much fundamental particle science has occurred in the last two decades to help understand the underlying phenomena. Yet, until recently the development of granulation systems was mostly based on popular practice. The use of process systems approaches to the integrated understanding of these operations is providing improved insight into the complex nature of the processes. Improved mathematical representations, new solution techniques and the application of the models to industrial processes are yielding better designs, improved optimisation and tighter control of these systems. The parallel development of advanced instrumentation and the use of inferential approaches provide real-time access to system parameters necessary for improvements in operation. The use of advanced models to help develop real-time plant diagnostic systems provides further evidence of the utility of process system approaches to granulation processes. This paper highlights some of those aspects of granulation. (c) 2005 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

A new transceive system for MRI chest imaging is presented. A focused, eight-element transceive torso phased-array coil is designed to investigate the transmission of a focused radiofrequency field deep within the torso and to enhance signal homogeneity in the heart region. The system is used in conjunction with the SENSE reconstruction technique to enable focused parallel imaging. A hybrid finite-difference time-domain/method-of-moments technique is used to accurately predict the radiofrequency behavior inside the human torso. The simulation results reported herein demonstrate the feasibility of the design concept, showing that radiofrequency field focusing with SENSE reconstruction is theoretically achievable. (c) 2005 Wiley-Liss, Inc.
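
For orientation, Cartesian SENSE unfolding at reduction factor R = 2 reduces to a small least-squares solve per aliased pixel using the coil sensitivity maps. A minimal NumPy sketch, with synthetic inputs rather than the eight-element torso array modelled in the paper:

```python
# SENSE unfolding at R = 2: each folded pixel is a sum of two true pixels
# ny//2 apart, weighted by the coil sensitivities; solve per pixel.
import numpy as np

def sense_unfold(aliased, sens):
    """aliased: (ncoils, ny//2, nx) folded coil images.
       sens:    (ncoils, ny, nx) complex coil sensitivity maps."""
    ncoils, nyr, nx = aliased.shape
    out = np.zeros((2 * nyr, nx), dtype=complex)
    for y in range(nyr):
        for x in range(nx):
            # a_c = S[c, y]*rho[y] + S[c, y+nyr]*rho[y+nyr] for each coil c
            S = np.stack([sens[:, y, x], sens[:, y + nyr, x]], axis=1)
            rho, *_ = np.linalg.lstsq(S, aliased[:, y, x], rcond=None)
            out[y, x], out[y + nyr, x] = rho
    return out
```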

Relevance:

30.00%

Publisher:

Abstract:

A new method for ameliorating high-field image distortion caused by radiofrequency/tissue interaction is presented and modeled. The proposed method uses, but is not restricted to, a shielded four-element transceive phased-array coil and involves performing two separate scans of the same slice, with each scan using different excitations during transmission. By optimizing the amplitudes and phases for each scan, antipodal signal profiles can be obtained, and by combining both images, the image distortion can be reduced several-fold. A hybrid finite-difference time-domain/method-of-moments technique is used to theoretically demonstrate the method and to predict the radiofrequency behavior inside the human head. In addition, the proposed method is used in conjunction with the GRAPPA reconstruction technique to enable rapid imaging. Simulation results reported herein for 11T (470 MHz) brain imaging applications demonstrate the feasibility of the concept, where multiple acquisitions using parallel imaging elements with GRAPPA reconstruction result in improved image quality. (c) 2006 Wiley Periodicals, Inc.
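
The abstract does not give the exact combination rule, but one plausible reading is a sum-of-squares merge of the two antipodal acquisitions, which flattens the transmit shading because the dips of one profile coincide with the peaks of the other:

```latex
% One plausible combination of the two antipodal acquisitions (the exact
% weighting used in the paper is not given in the abstract): if the two
% scans of the same slice have transmit-shaded intensities
%   I_1(r) = f_1(r) S(r)  and  I_2(r) = f_2(r) S(r),
% with the dips of f_1 coinciding with the peaks of f_2, a
% sum-of-squares merge flattens the shading:
\[
  I_{\mathrm{comb}}(\mathbf{r}) = \sqrt{\,|I_1(\mathbf{r})|^2 + |I_2(\mathbf{r})|^2\,}
\]
```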

Relevance:

20.00%

Publisher:

Abstract:

Current debates about educational theory are concerned with the relationship between knowledge and power, and thereby with issues such as who possesses truth and how they have arrived at it, what questions are important to ask, and how they should best be answered. As such, these debates revolve around questions of preferred, appropriate, and useful theoretical perspectives. This paper overviews the key theoretical perspectives currently used in physical education pedagogy research and considers how these inform the questions we ask and shape the conduct of research. It also addresses what is contested with respect to these perspectives. The paper concludes with some cautions about allegiances to, and use of, theories, in line with concerns for the applicability of educational research to pressing social issues.

Relevance:

20.00%

Publisher:

Abstract:

Optically transparent, mesostructured titanium dioxide thin films were fabricated using an amphiphilic poly(alkylene oxide) block copolymer template in combination with retarded hydrolysis of a titanium isopropoxide precursor. Prior to calcination, the films displayed a stable hexagonal mesophase and high refractive indices (1.5 to 1.6) relative to mesostructured silica (1.43). After calcination, the hexagonal mesophase was retained, with surface areas >300 m² g⁻¹. The dye Rhodamine 6G (commonly used as a laser dye) was incorporated into the copolymer micelle during the templating process. In this way, novel dye-doped mesostructured titanium dioxide films were synthesised. The copolymer not only directs the film structure, but also provides a solubilizing environment suitable for sustaining a high monomer-to-aggregate ratio at elevated dye concentrations. The dye-doped films displayed optical threshold-like behaviour characteristic of amplified spontaneous emission. Soft lithography was successfully applied to micropattern the dye-doped films. These results pave the way for the fabrication and demonstration of novel microlaser structures and other active optical structures. This new high-refractive-index, mesostructured, dye-doped material could also find applications in areas such as optical coatings, displays and integrated photonic devices.

Relevance:

20.00%

Publisher:

Abstract:

In this review we demonstrate how the algebraic Bethe ansatz is used for the calculation of the energy spectra and form factors (operator matrix elements in the basis of Hamiltonian eigenstates) in exactly solvable quantum systems. As examples we apply the theory to several models of current interest in the study of Bose-Einstein condensates, which have been successfully created using ultracold dilute atomic gases. The first model we introduce describes Josephson tunnelling between two coupled Bose-Einstein condensates. It can be used not only for the study of tunnelling between condensates of atomic gases, but also for solid-state Josephson junctions and coupled Cooper-pair boxes. The theory is also applicable to models of atomic-molecular Bose-Einstein condensates, with two examples given and analysed. Additionally, these same two models are relevant to studies in quantum optics. Finally, we discuss the model of Bardeen, Cooper and Schrieffer in this framework, which is appropriate for systems of ultracold fermionic atomic gases, as well as for the description of superconducting correlations in metallic grains with nanoscale dimensions. In applying all the above models to physical situations, the need for an exact analysis of small-scale systems is established, due to large quantum fluctuations which render mean-field approaches inaccurate.
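
For reference, the canonical two-mode Hamiltonian for Josephson tunnelling between two coupled condensates takes the following form in the standard literature (the abstract does not spell out the precise variant treated in the review):

```latex
% Two-mode Hamiltonian for Josephson tunnelling between two coupled
% Bose-Einstein condensates, quoted as a representative example:
\[
  H = \frac{K}{8}\,(N_1 - N_2)^2
      - \frac{\Delta\mu}{2}\,(N_1 - N_2)
      - \frac{\mathcal{E}_J}{2}\,\bigl(a_1^{\dagger} a_2 + a_2^{\dagger} a_1\bigr),
\]
% where a_i^\dagger, a_i are boson operators for the two condensates,
% N_i = a_i^\dagger a_i, K measures the atom-atom interaction, \Delta\mu
% the external potential asymmetry, and \mathcal{E}_J the tunnelling strength.
```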

Relevance:

20.00%

Publisher:

Abstract:

Extracting human postural information from video sequences has proved a difficult research question. The most successful approaches to date have been based on particle filtering, whereby the underlying probability distribution is approximated by a set of particles. The shape of the underlying observational probability distribution plays a significant role in determining the success, in both accuracy and efficiency, of any visual tracker. In this paper we compare approaches used by other authors and present a cost-path approach, which is commonly used in image segmentation problems but is currently not widely used in tracking applications.
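
A generic particle-filter step, for readers unfamiliar with the machinery (one-dimensional state for brevity; a posture tracker would use a high-dimensional pose vector and an image-based likelihood):

```python
# Predict / weight / resample cycle of a generic particle filter. The
# likelihood argument is the observational probability distribution whose
# shape, as the abstract notes, drives tracker accuracy and efficiency.
import numpy as np

def particle_filter_step(particles, weights, observation,
                         likelihood, motion_noise=0.1, rng=None):
    rng = rng or np.random.default_rng()
    # Predict: propagate each particle through a simple motion model.
    particles = particles + rng.normal(0.0, motion_noise, particles.shape)
    # Update: reweight particles by the observational probability.
    weights = weights * likelihood(particles, observation)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```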

Relevance:

20.00%

Publisher:

Abstract:

Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information from large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured, and can consist of text, categorical or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rules mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. A number of classification algorithms are now in practical use. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probabilistic methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbours (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include associative classification (Liu et al., 1998) and ensemble classification (Tumer, 1996).
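
As a tiny illustration of one of the example-based classifiers named above, k-nearest neighbours (Duda & Hart, 1973) can be written in a few lines of NumPy (synthetic data; not tied to any of the cited implementations):

```python
# Minimal k-nearest-neighbours classifier: distance to every training
# example, then a majority vote among the k closest.
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x, k=3):
    d = np.linalg.norm(X_train - x, axis=1)     # distances to the query
    nearest = y_train[np.argsort(d)[:k]]        # labels of k closest
    return Counter(nearest).most_common(1)[0][0]

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_classify(X, y, np.array([0.2, 0.1])))   # -> 0
```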

Relevance:

20.00%

Publisher:

Abstract:

This paper presents the recent finding by Muhlhaus et al. [1] that bifurcation of crack growth patterns exists for arrays of two-dimensional cracks. This bifurcation is a result of the nonlinear effect due to crack interaction, which is, in the present analysis, approximated by the dipole asymptotic or pseudo-traction method. The nonlinear parameter for the problem is the crack length/spacing ratio λ = a/h. For parallel and edge crack arrays under far-field tension, uniform crack growth patterns (all cracks having the same size) yield to nonuniform crack growth patterns (i.e. bifurcation) if λ is larger than a critical value λ_cr (no such bifurcation is found for collinear crack arrays). For parallel and edge crack arrays respectively, the value of λ_cr decreases monotonically from (2/9)^{1/2} and (2/15.096)^{1/2} for arrays of 2 cracks, to (2/3)^{1/2}/π and (2/5.032)^{1/2}/π for infinite arrays of cracks. The critical parameter λ_cr is calculated numerically for arrays of up to 100 cracks, whilst a discrete Fourier transform is used to obtain the exact solution of λ_cr for infinite crack arrays. For geomaterials, bifurcation can also occur when arrays of sliding cracks are under compression.
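
Evaluating the quoted critical ratios numerically makes the monotonic decrease concrete:

```latex
% Numerical values of the critical ratios quoted above. Parallel arrays:
\[
  \lambda_{cr} = \sqrt{2/9} \approx 0.471 \;(2\text{ cracks})
  \;\longrightarrow\;
  \lambda_{cr} = \frac{\sqrt{2/3}}{\pi} \approx 0.260 \;(\text{infinite array}),
\]
% and edge crack arrays:
\[
  \lambda_{cr} = \sqrt{2/15.096} \approx 0.364
  \;\longrightarrow\;
  \lambda_{cr} = \frac{\sqrt{2/5.032}}{\pi} \approx 0.201 .
\]
```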

Relevance:

20.00%

Publisher:

Abstract:

The cost of spatial join processing can be very high because of the large sizes of spatial objects and the computation-intensive spatial operations. While parallel processing seems a natural solution to this problem, it is not clear how spatial data can be partitioned for this purpose. Various spatial data partitioning methods are examined in this paper. A framework combining the data-partitioning techniques used by most parallel join algorithms in relational databases and the filter-and-refine strategy for spatial operation processing is proposed for parallel spatial join processing. Object duplication caused by multi-assignment in spatial data partitioning can result in extra CPU cost as well as extra communication cost. We find that the key to overcome this problem is to preserve spatial locality in task decomposition. We show in this paper that a near-optimal speedup can be achieved for parallel spatial join processing using our new algorithms.
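
One standard realisation of these ideas, sketched below, is grid partitioning with multi-assignment, a cheap MBR-overlap filter, an exact refine step, and a reference-point rule for suppressing duplicates; the reference-point rule is a common remedy for the multi-assignment problem, not necessarily the paper's own algorithm:

```python
# Grid-partitioned spatial join with filter-and-refine.
# MBRs are (xmin, ymin, xmax, ymax) tuples.
from collections import defaultdict

def cells_of(mbr, cell):
    xmin, ymin, xmax, ymax = mbr
    return [(i, j)
            for i in range(int(xmin // cell), int(xmax // cell) + 1)
            for j in range(int(ymin // cell), int(ymax // cell) + 1)]

def partitioned_join(R, S, cell=10.0, refine=lambda r, s: True):
    # Multi-assignment: every object goes to every cell its MBR overlaps.
    buckets = defaultdict(lambda: ([], []))
    for oid, mbr in R:
        for c in cells_of(mbr, cell):
            buckets[c][0].append((oid, mbr))
    for oid, mbr in S:
        for c in cells_of(mbr, cell):
            buckets[c][1].append((oid, mbr))
    out = []
    for c, (rs, ss) in buckets.items():   # each cell is an independent task
        for rid, r in rs:
            for sid, s in ss:
                # Filter step: cheap MBR-overlap test.
                if not (r[0] <= s[2] and s[0] <= r[2] and
                        r[1] <= s[3] and s[1] <= r[3]):
                    continue
                # Report each pair once: only in the cell containing the
                # lower-left corner of the MBR intersection.
                p = (max(r[0], s[0]), max(r[1], s[1]))
                if c != (int(p[0] // cell), int(p[1] // cell)):
                    continue
                if refine(r, s):          # refine step: exact geometry
                    out.append((rid, sid))
    return out
```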

Relevance:

20.00%

Publisher:

Abstract:

Coset enumeration is a most important procedure for investigating finitely presented groups. We present a practical parallel procedure for coset enumeration on shared-memory processors. The shared-memory architecture is particularly interesting because such parallel computation is both faster and cheaper. The lower cost comes when the program requires large amounts of memory: additional CPUs allow us to reduce the time for which the expensive memory is in use. Rather than report on a suite of test cases, we take a single, typical case and analyze the performance factors in depth. The parallelization is achieved through a master-slave architecture. This results in an interesting phenomenon, whereby the CPU time is divided into a sequential and a parallel portion, and the parallel part demonstrates a speedup that is linear in the number of processors. We describe an early version in which only 40% of the program was parallelized, and how this was modified to achieve 90% parallelization using 15 slave processors and a master. In the latter case, a sequential time of 158 seconds was reduced to 29 seconds using 15 slaves.
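
The reported timings track Amdahl's law closely; with a parallel fraction of 0.9 and 15 slave processors:

```latex
% Amdahl's law applied to the figures quoted in the abstract:
\[
  T(p) \approx T(1)\left((1 - f) + \frac{f}{p}\right)
       = 158\,\mathrm{s} \times \left(0.1 + \frac{0.9}{15}\right)
       \approx 25\,\mathrm{s},
\]
% consistent with the measured 29 s once master-slave coordination
% overhead is accounted for.
```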