15 results for "Parallel or distributed processing"

in the Cambridge University Engineering Department Publications Database


Relevance:

100.00%

Publisher:

Abstract:

YBaCuO-coated conductors offer great potential in terms of performance and cost saving for superconducting fault current limiters (SFCLs). A resistive SFCL based on coated conductors can be made from several tapes connected in parallel or in series. Ideally, the current and voltage are shared uniformly by the tapes when a quench occurs. However, because of the non-uniform properties of the tapes and their relative positions, the currents and voltages of the individual tapes differ. In this paper, a numerical model is developed to investigate the current- and voltage-sharing problem for the resistive SFCL. The model is able to simulate the dynamic response of YBCO tapes in both normal and quench conditions. Firstly, four tapes with different Jc's and n-values in the E-J power law are connected in parallel to carry the fault current, and the model demonstrates how the current is distributed among them. The four tapes are then connected in series to withstand the line voltage, and the model investigates the voltage sharing between them. Several factors that affect the quench process are discussed, including the field dependence of Jc, the magnetic coupling between the tapes, and their relative positions. © 2010 IEEE.
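The parallel-connection case can be illustrated with a static sketch: assuming every tape in parallel sees the same electric field and each obeys the E-J power law E = E0·(I/Ic)^n, the current sharing follows from solving for the common field. This is a minimal steady-state illustration, not the paper's dynamic model; the Ic and n values below are hypothetical.

```python
import numpy as np

E0 = 1e-4  # critical-current criterion, V/m

def parallel_currents(I_total, Ic, n):
    """Share a fault current among parallel tapes obeying E = E0*(I/Ic)**n.

    All tapes see the same electric field E (same voltage per unit length),
    so I_k = Ic_k * (E/E0)**(1/n_k); solve sum(I_k) = I_total for E by
    bisection in log space (the sum is monotone in E).
    """
    Ic, n = np.asarray(Ic, float), np.asarray(n, float)
    lo, hi = 1e-12, 1e3
    for _ in range(200):
        E = np.sqrt(lo * hi)                 # geometric bisection
        I = Ic * (E / E0) ** (1.0 / n)
        if I.sum() > I_total:
            hi = E
        else:
            lo = E
    return I

# Four tapes with different Ic's and n-values (hypothetical numbers)
Ic = [100.0, 90.0, 110.0, 95.0]   # critical currents, A
n  = [25.0, 20.0, 30.0, 22.0]     # power-law exponents
I = parallel_currents(500.0, Ic, n)
print(I, I.sum())                  # currents differ tape to tape; sum is 500 A
```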

Relevance:

100.00%

Publisher:

Abstract:

In the context of collaborative product development, new requirements such as distributed processing and the integration of models created using different tools or languages need to be accommodated in Virtual Prototyping Simulation (VPS). Existing solutions focus mainly on implementing distributed processing; this paper explores the issues of combining different models (some of which may be proprietary) developed in different software environments. We discuss several approaches for developing a VPS and suggest how it can best be integrated into the design process. An approach is developed to improve collaborative work in VPS development by combining disparate computational models. Specifically, a system framework is proposed that separates system-level modeling from the computational infrastructure. The implementation of a simple prototype demonstrates that such a paradigm is viable and thus provides a new means for distributed VPS development. © 2009 by ASME.

Relevance:

100.00%

Publisher:

Abstract:

This paper compares parallel and distributed implementations of an iterative, Gibbs-sampling machine learning algorithm. The distributed implementations run under Hadoop on facility computing clouds. The probabilistic model under study is the infinite HMM [1], whose parameters are learnt using a blocked Gibbs sampler in which each step consists of a dynamic program. We apply this model to learn part-of-speech tags from newswire text in an unsupervised fashion. Our focus here, however, is on runtime performance rather than NLP-relevant scores: iteration duration, ease of development, deployment and debugging. © 2010 IEEE.
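The dynamic program at the heart of a blocked Gibbs step can be sketched, for a finite-state HMM, as forward filtering followed by backward sampling of the whole hidden path. This is a minimal illustration of that building block, not the infinite-HMM sampler itself; all parameters are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def ffbs(obs, pi0, A, B):
    """Forward-filter backward-sample one hidden state sequence of an HMM.

    Given current parameters (initial distribution pi0, transitions A,
    emissions B), this dynamic program draws the path jointly, which is
    what makes the Gibbs step "blocked".
    """
    T, K = len(obs), len(pi0)
    alpha = np.zeros((T, K))
    alpha[0] = pi0 * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                       # forward filtering
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    z = np.zeros(T, dtype=int)
    z[-1] = rng.choice(K, p=alpha[-1])
    for t in range(T - 2, -1, -1):              # backward sampling
        w = alpha[t] * A[:, z[t + 1]]
        z[t] = rng.choice(K, p=w / w.sum())
    return z

# Toy 2-state "tagger": state 0 mostly emits symbol 0, state 1 mostly symbol 1
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])
z = ffbs([0, 0, 1, 1, 1], np.array([0.5, 0.5]), A, B)
print(z)  # one posterior sample of the hidden path
```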

Relevance:

100.00%

Publisher:

Abstract:

The all-optical nonlinearity of a quantum well waveguide is studied by measuring the intensity-dependent transmission through a Fabry-Perot cavity formed around the guide. Values for the nonlinear refractive index coefficient, n2, at a wavelength of 1.06 μm are obtained for light polarised either parallel or perpendicular to the quantum well layers. A simple measurement to estimate the two-photon absorption coefficient, β2, using relatively low optical power levels is also described.
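A crude numerical sketch of an intensity-dependent Fabry-Perot: assuming the cavity index varies as n = n0 + n2·I and, as a simplification, letting the transmitted intensity drive the index, the transmission follows from iterating the Airy formula to a fixed point. All values are illustrative, not the paper's measured ones.

```python
import numpy as np

def fp_transmit(I_in, R=0.3, n0=3.4, n2=1e-12, L=1e-3, lam=1.06e-6):
    """Transmitted intensity of a Fabry-Perot cavity with index n = n0 + n2*I.

    Crude model: the transmitted intensity drives the index, so the
    round-trip phase and the Airy transmission are solved self-consistently
    by damped fixed-point iteration (all parameters illustrative).
    """
    I_t = I_in
    for _ in range(500):
        phi = 4 * np.pi * (n0 + n2 * I_t) * L / lam          # round-trip phase
        airy = (1 - R) ** 2 / ((1 - R) ** 2 + 4 * R * np.sin(phi / 2) ** 2)
        I_t = 0.5 * I_t + 0.5 * I_in * airy                  # damped update
    return I_t

low, high = fp_transmit(1e6), fp_transmit(1e8)
print(low / 1e6, high / 1e8)  # transmission changes with intensity:
                              # the nonlinear index shifts the resonance
```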

Relevance:

100.00%

Publisher:

Abstract:

Commercial far-range (>10 m) spatial data collection methods for acquiring infrastructure geometric data are not fully automated, because of the manual pre- and/or post-processing work they require. The amount of human intervention needed and, in some cases, the high equipment costs associated with these methods impede their adoption for the majority of infrastructure mapping activities. This paper presents an automated stereo vision-based method, as an alternative and inexpensive solution, for producing a sparse Euclidean 3D point cloud of an infrastructure scene from two video streams captured by a pair of calibrated cameras. In this process, SURF features are automatically detected and matched between each pair of stereo video frames, and the 3D coordinates of the matched feature points are calculated via triangulation. SURF features detected in successive video frames are likewise matched automatically, with the RANSAC algorithm used to discard mismatches. The quaternion motion estimation method is then used, along with bundle adjustment optimization, to register successive point clouds. The method was tested on a database of infrastructure stereo video streams. The validity and statistical significance of the results were evaluated by comparing the spatial distances between randomly selected feature points with their corresponding tape measurements.
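The triangulation step can be sketched with linear (DLT) triangulation of one feature match from two calibrated projection matrices. The camera parameters, baseline and 3D point below are hypothetical, and this omits the SURF detection, RANSAC and bundle adjustment stages.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one feature match from two views.

    P1, P2: 3x4 projection matrices; x1, x2: matched pixel coordinates (u, v).
    Builds the homogeneous system A X = 0, solves it by SVD and dehomogenises.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]        # right singular vector of smallest sigma
    return X[:3] / X[3]

# Two hypothetical rectified cameras, 0.2 m baseline, f = 800 px
K = np.array([[800., 0, 320], [0, 800., 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0], [0]])])
Xtrue = np.array([0.5, 0.1, 4.0])                     # synthetic scene point
u1 = P1 @ np.append(Xtrue, 1); u1 = u1[:2] / u1[2]    # project into view 1
u2 = P2 @ np.append(Xtrue, 1); u2 = u2[:2] / u2[2]    # project into view 2
print(triangulate(P1, P2, u1, u2))  # → recovers ~[0.5, 0.1, 4.0]
```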

Relevance:

40.00%

Publisher:

Abstract:

An optical fiber strain sensing technique based on Brillouin Optical Time Domain Reflectometry (BOTDR) was used to obtain the full deformation profile of a secant pile wall during construction of an adjacent basement in London. Details of the installation of the sensors and of the data processing are described. By installing optical fiber down opposite sides of the pile, the distributed strain profiles obtained can be used to give both the axial and lateral movements along the pile. Measurements obtained from the BOTDR were found to be in good agreement with inclinometer data from adjacent piles. The relative merits of the two techniques are discussed. © 2007 ASCE.
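The splitting of the two fiber profiles into axial and lateral components can be sketched as follows: the mean strain gives the axial component, the strain difference gives curvature, and curvature integrates twice to lateral deflection. This assumes a fixed pile base; all profiles below are synthetic.

```python
import numpy as np

def pile_movements(eps_a, eps_b, z, d):
    """Split strains from fibers on opposite pile faces into axial and lateral parts.

    eps_a, eps_b: distributed strain profiles from the two faces,
    z: depth coordinates (m), d: horizontal distance between the fibers (m).
    Axial strain is the mean of the two profiles; curvature = (eps_a - eps_b)/d;
    lateral deflection is curvature integrated twice (trapezoidal rule,
    zero slope and deflection assumed at the base, z[0]).
    """
    axial = 0.5 * (eps_a + eps_b)
    kappa = (eps_a - eps_b) / d
    dz = np.diff(z)
    slope = np.concatenate([[0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * dz)])
    defl  = np.concatenate([[0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * dz)])
    return axial, defl

z = np.linspace(0, 10, 11)            # depth stations, m (synthetic)
eps_a = 100e-6 + 50e-6 * z / 10       # strain on face A: axial + bending
eps_b = 100e-6 - 50e-6 * z / 10       # strain on face B: axial - bending
axial, defl = pile_movements(eps_a, eps_b, z, d=0.6)
print(axial[0], defl[-1])             # uniform axial strain; growing lateral deflection
```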

Relevance:

40.00%

Publisher:

Abstract:

Cambridge Flow Solutions Ltd, Compass House, Vision Park, Cambridge, CB4 9AD, UK

Real-world simulation challenges are getting bigger: virtual aero-engines with multistage blade rows coupled with their secondary air systems and with fully featured geometry; environmental flows at meta-scales over resolved cities; synthetic battlefields. It is clear that the future of simulation is scalable, end-to-end parallelism. To address these challenges we have reported, in a sequence of papers, a series of inherently parallel building blocks based on the integration of a Level Set based geometry kernel with an octree-based cut-Cartesian mesh generator, a RANS flow solver, post-processing, and geometry management and editing. The cut-cells which characterize the approach are eliminated by exporting a body-conformal mesh, driven by the underpinning Level Set and managed by mesh quality optimization algorithms; this permits third-party flow solvers to be deployed. This paper continues the sequence by reporting and demonstrating two main novelties: variable-depth volume mesh refinement, enabling variable surface mesh refinement; and a radical rework of the mesh generation into a bottom-up system based on Space Filling Curves. The associated extensions to body-conformal mesh export are also reported. Everything is implemented in a scalable, parallel manner. As a practical demonstration, meshes of guaranteed quality are generated for a fully resolved generic aircraft carrier geometry, a cooled disc brake assembly and a B747 in landing configuration. Copyright © 2009 by W. N. Dawes.
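The Space Filling Curve ordering that underpins such a bottom-up mesh generator can be illustrated with a Morton (Z-order) key, one common choice for octree cells; the abstract does not name the specific curve used, so Morton order here is an assumption.

```python
def morton3d(x, y, z, bits=10):
    """Morton (Z-order) key for an octree cell: interleave the coordinate bits.

    Sorting cells by this key gives a space-filling-curve ordering with good
    spatial locality, which a bottom-up mesh generator can use to partition
    and locate cells in parallel.
    """
    key = 0
    for i in range(bits):
        key |= (((x >> i) & 1) << (3 * i)
                | ((y >> i) & 1) << (3 * i + 1)
                | ((z >> i) & 1) << (3 * i + 2))
    return key

# Four octree cells by integer coordinate, sorted along the curve
cells = [(1, 0, 0), (0, 0, 0), (0, 1, 1), (1, 1, 0)]
ordered = sorted(cells, key=lambda c: morton3d(*c))
print(ordered)  # → [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 1)]
```

Nearby cells get nearby keys, so a simple contiguous split of the sorted key range yields a reasonable parallel partition.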

Relevance:

40.00%

Publisher:

Abstract:

High-throughput DNA sequencing (HTS) instruments today are capable of generating millions of sequencing reads in a short period of time, which presents a serious challenge to current bioinformatics pipelines: processing such an enormous amount of data quickly and economically. Modern graphics cards are powerful processing units consisting of hundreds of scalar processors working in parallel to render high-definition graphics in real time. It is this computational capability that we propose to harness to accelerate some of the time-consuming steps in analyzing data generated by HTS instruments. We have developed BarraCUDA, a novel sequence mapping software package that utilizes the parallelism of NVIDIA CUDA graphics cards to map sequencing reads to locations on a reference genome. While delivering similar mapping fidelity to other mainstream programs, BarraCUDA is an order of magnitude faster in mapping throughput than its CPU counterparts. The software can also use multiple CUDA devices in parallel to further accelerate mapping throughput. BarraCUDA is designed to take advantage of GPU parallelism to accelerate the mapping of millions of sequencing reads generated by HTS instruments; by doing so, we can at least in part streamline the current bioinformatics pipeline so that the wider scientific community can benefit from the sequencing technology. BarraCUDA is available at http://seqbarracuda.sf.net
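The read-mapping task itself can be illustrated with a much-simplified seed-and-verify scheme using an exact k-mer hash index; this CPU sketch only shows the mapping idea, not BarraCUDA's index structure or GPU kernels. The reference string and read are made up.

```python
from collections import defaultdict

def build_index(ref, k=8):
    """Hash every k-mer of the reference to its positions (a simplified seed
    index for illustration; real mappers use far more memory-efficient
    compressed indexes)."""
    idx = defaultdict(list)
    for i in range(len(ref) - k + 1):
        idx[ref[i:i + k]].append(i)
    return idx

def map_read(read, ref, idx, k=8, max_mm=2):
    """Seed with the read's first k-mer, then verify each candidate locus by
    counting mismatches; returns (position, mismatches) of the best hit."""
    best = None
    for pos in idx.get(read[:k], []):
        cand = ref[pos:pos + len(read)]
        if len(cand) < len(read):
            continue
        mm = sum(a != b for a, b in zip(read, cand))
        if mm <= max_mm and (best is None or mm < best[1]):
            best = (pos, mm)
    return best

ref = "ACGTACGTTAGCCGATACGGATCCA"   # toy reference
idx = build_index(ref)
print(map_read("TAGCCGAT", ref, idx))  # → (8, 0): exact hit at position 8
```

The GPU angle in the paper comes from running the per-read lookup for millions of reads in parallel, one read per thread.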

Relevance:

40.00%

Publisher:

Abstract:

We show that the sensor self-localization problem can be cast as a static parameter estimation problem for Hidden Markov Models, and we implement fully decentralized versions of the Recursive Maximum Likelihood and online Expectation-Maximization algorithms to localize the sensor network simultaneously with target tracking. For linear Gaussian models, our algorithms can be implemented exactly using a distributed version of the Kalman filter and a novel message-passing algorithm. The latter allows each node to compute the local derivatives of the likelihood, or the sufficient statistics needed for Expectation-Maximization. In the nonlinear case, a solution based on local linearization, in the spirit of the Extended Kalman Filter, is proposed. Numerical examples demonstrate that the developed algorithms are able to learn the localization parameters. © 2012 IEEE.
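For the linear Gaussian case, the Kalman filter that the decentralized algorithms build on can be sketched as a single predict/update step; the constant-velocity model and single-sensor setup below are hypothetical, and the distributed message passing and parameter learning are omitted.

```python
import numpy as np

def kalman_step(m, P, y, A, C, Q, R):
    """One predict/update step of the Kalman filter, the per-node building
    block that the paper distributes for the linear Gaussian case."""
    m_pred = A @ m                      # predict mean
    P_pred = A @ P @ A.T + Q            # predict covariance
    S = C @ P_pred @ C.T + R            # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S) # Kalman gain
    m_new = m_pred + K @ (y - C @ m_pred)
    P_new = (np.eye(len(m)) - K @ C) @ P_pred
    return m_new, P_new

# Hypothetical 1D constant-velocity target, position observed by one sensor
A = np.array([[1., 1.], [0., 1.]]); C = np.array([[1., 0.]])
Q = 0.01 * np.eye(2); R = np.array([[0.1]])
rng = np.random.default_rng(1)
x = np.array([0., 1.]); m = np.zeros(2); P = np.eye(2)
for _ in range(50):
    x = A @ x + rng.multivariate_normal([0, 0], Q)   # simulate the target
    y = C @ x + rng.multivariate_normal([0], R)      # noisy position reading
    m, P = kalman_step(m, P, y, A, C, Q, R)
print(m, x)  # the filtered mean tracks the true state
```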