6 results for implementation method
in Greenwich Academic Literature Archive - UK
Abstract:
A simulation program has been developed to calculate the power-spectral density of thin avalanche photodiodes (APDs), which are used in optical networks. The program extends the time-domain analysis of the dead-space multiplication model to compute the autocorrelation function of the APD impulse response. However, the computation requires a large amount of memory and is very time consuming. We describe our experiences in parallelizing the code using both MPI and OpenMP. Several array partitioning schemes and scheduling policies are implemented and tested. Our results show that the OpenMP code is scalable up to 64 processors on an SGI Origin 2000 machine and has small average errors.
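The core of the computation described above is an autocorrelation whose independent lags can be partitioned across workers. The sketch below is a minimal illustration of that idea only: it uses Python threads and a cyclic lag partition as a loose analogue of the paper's OpenMP array partitioning and scheduling; the function names and the thread-based scheme are assumptions for illustration, not the authors' code.

```python
from concurrent.futures import ThreadPoolExecutor

def autocorrelation(x, max_lag, workers=4):
    """Compute R[k] = sum_n x[n] * x[n + k] for k = 0..max_lag.

    Each worker handles a cyclically partitioned block of lags, since
    every lag is independent of the others (the property that makes
    the matching parallelization straightforward).
    """
    def block(lags):
        # One worker's share: a list of (lag, value) pairs.
        return [(k, sum(x[n] * x[n + k] for n in range(len(x) - k)))
                for k in lags]

    lags = list(range(max_lag + 1))
    chunks = [lags[i::workers] for i in range(workers)]  # cyclic partition
    out = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for res in pool.map(block, chunks):
            out.update(res)
    return [out[k] for k in range(max_lag + 1)]
```

A cyclic (round-robin) partition balances the load here because the work per lag shrinks as the lag grows; a contiguous block partition would leave the worker holding the small lags with most of the work.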
Abstract:
A new general cell-centered solution procedure based upon the conventional control or finite volume (CV or FV) approach has been developed for numerical heat transfer and fluid flow, which encompasses both structured and unstructured meshes for any kind of mixed polygon cell. Unlike conventional FV methods for structured and block-structured meshes, and both FV and FE methods for unstructured meshes, the irregular control volume (ICV) method does not require the shape of the element or cell to be predefined, because it simply exploits the concept of fluxes across cell faces. That is, the ICV method enables meshes employing mixtures of triangular, quadrilateral, and any other higher-order polygonal cells to be exploited using a single solution procedure. The ICV approach otherwise preserves all the desirable features of conventional FV procedures for a structured mesh; in the current implementation, collocation of variables at cell centers is used with a Rhie and Chow interpolation (to suppress pressure oscillation in the flow field) in the context of the SIMPLE pressure-correction solution procedure. In fact, all other FV structured mesh-based methods may be perceived as a subset of the ICV formulation. The new ICV formulation is benchmarked using two standard computational fluid dynamics (CFD) problems, i.e., the moving-lid cavity and the natural-convection-driven cavity. Both cases were solved with a variety of structured and unstructured meshes, the latter exploiting mixed polygonal cell meshes. The polygonal-mesh experiments show a higher degree of accuracy than equivalent meshes (in nodal-density terms) using only triangular or quadrilateral cells; these results may be interpreted in a manner similar to the CUPID scheme used in structured meshes for reducing numerical diffusion in flows with changing direction.
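The key point of the face-based formulation described above is that the assembly loop never needs to know a cell's shape, only which faces bound it. The toy sketch below shows that idea for a pure diffusion residual; the mesh tuple layout and function name are hypothetical simplifications (no Rhie-Chow interpolation, pressure correction, or boundary faces), not the authors' data structures.

```python
def diffusion_residuals(n_cells, faces, phi, gamma=1.0):
    """Cell-centered, face-based residual assembly in the spirit of an
    irregular control volume (ICV) scheme.

    faces: list of (owner, neighbour, area, distance) tuples, one per
    interior face.  The diffusive flux through a face is
    gamma * area * (phi[neighbour] - phi[owner]) / distance, and each
    face contributes equal and opposite amounts to its two cells, so
    triangles, quadrilaterals, and general polygons all go through
    the same loop.
    """
    res = [0.0] * n_cells
    for own, nb, area, dist in faces:
        flux = gamma * area * (phi[nb] - phi[own]) / dist
        res[own] += flux   # flux into the owner cell
        res[nb] -= flux    # equal and opposite for the neighbour
    return res
```

Because the cell type never appears, mixing polygon shapes in one mesh costs nothing extra, which is the single-solution-procedure property the abstract emphasizes.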
Abstract:
Fractal video compression is a relatively new video compression method. Its attraction lies in its high compression ratio and simple decompression algorithm, but its computational complexity is high, and parallel algorithms on high-performance machines consequently offer one way out. In this study we partition the matching search, which occupies the majority of the work in a fractal video compression process, into small tasks and implement them in two distributed computing environments, one using DCOM and the other using .NET Remoting technology, based on a local area network consisting of loosely coupled PCs. Experimental results show that the parallel algorithm is able to achieve a high speedup in these distributed environments.
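The partitioning described above can be pictured as splitting the range blocks among independent matching tasks. The sketch below is a toy stand-in under stated assumptions: blocks are plain lists of pixel values, the match metric is a simple sum of squared differences, and the tasks are local callables where the paper would dispatch each one to a remote DCOM or .NET Remoting worker.

```python
def make_matching_tasks(ranges, domains, n_tasks):
    """Split the fractal matching search into independent tasks.

    Each task exhaustively scores its share of range blocks against
    every domain block and reports the index of the best match.  The
    cyclic partition of range-block indices spreads the work evenly.
    """
    def ssd(a, b):
        # Sum-of-squared-differences match metric (a simplification;
        # real fractal coders also fit brightness/contrast transforms).
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def run_task(indices):
        return {i: min(range(len(domains)),
                       key=lambda d: ssd(ranges[i], domains[d]))
                for i in indices}

    idx = list(range(len(ranges)))
    return [lambda ind=idx[t::n_tasks]: run_task(ind)
            for t in range(n_tasks)]
```

Because each task touches a disjoint set of range blocks and only reads the shared domain pool, the tasks can run on loosely coupled PCs with no communication until the final merge of results.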
Abstract:
Computer egress simulation has the potential to be used in large-scale incidents to provide live advice to incident commanders. While there are many considerations which must be taken into account when applying such models to live incidents, one of the first concerns is the computational speed of the simulations. No matter how important the insight provided by the simulation, numerical hindsight will not prove useful to an incident commander. Thus, for this type of application to be useful, it is essential that the simulation can be run many times faster than real time. Parallel processing is a method of reducing run times for very large computational simulations by distributing the workload amongst a number of CPUs. In this paper we examine the development of a parallel version of the buildingEXODUS software. The parallel strategy implemented is based on a systematic partitioning of the problem domain onto an arbitrary number of sub-domains. Each sub-domain is computed on a separate processor running its own copy of the EXODUS code. The software has been designed to work on typical office-based networked PCs but will also function on a Windows-based cluster. Two evaluation scenarios using the parallel implementation of EXODUS are described: a large open area and a 50-story high-rise building. Speed-ups of up to 3.7 are achieved using up to six computers, with the high-rise building evacuation simulation achieving run times 6.4 times faster than real time.
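The strategy described above rests on cutting the problem domain into sub-domains, one per processor. The sketch below shows the simplest version of that step for a rectangular floor grid; the vertical-strip scheme and the function name are assumptions for illustration, and the paper's actual geometry-aware partitioning and inter-domain boundary exchange are not reproduced.

```python
def partition_domain(width, height, n_subdomains):
    """Split a width x height floor grid into vertical strip
    sub-domains, one per processor.

    Returns a list of (x_start, x_end, y_start, y_end) extents that
    tile the grid, with strip widths differing by at most one cell so
    the per-processor workload stays balanced.
    """
    base, extra = divmod(width, n_subdomains)
    strips, x = [], 0
    for i in range(n_subdomains):
        w = base + (1 if i < extra else 0)  # spread the remainder
        strips.append((x, x + w, 0, height))
        x += w
    return strips
```

In a real egress simulation the cost per cell varies with occupant density, so a production partitioner would weight the cut positions by expected load rather than by area alone; the uniform strips here only illustrate the decomposition itself.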
Abstract:
The solution process for diffusion problems usually involves developing the time solution separately from the space solution. A finite difference algorithm in time requires a sequential time development in which all previous values must be determined prior to the current value. The Stehfest Laplace transform algorithm, however, allows time solutions to be obtained without knowledge of prior values. It is therefore of interest to develop a time-domain decomposition suitable for implementation in a parallel environment. One such possibility is to use the Laplace transform to develop coarse-grained solutions which act as the initial values for a set of fine-grained solutions. The independence of the Laplace transform solutions means that we do indeed have a time-domain decomposition process. Any suitable time solver can be used for the fine-grained solution. To illustrate the technique we use an Euler solver in time together with the dual reciprocity boundary element method for the space solution.
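The Stehfest algorithm named above is the standard Gaver-Stehfest numerical inversion: f(t) is approximated as (ln 2 / t) times a weighted sum of transform values F(i ln 2 / t). The sketch below implements that textbook formula; the choice N = 12 is a common default, not a value taken from the paper, and the space discretization (dual reciprocity BEM in the paper) is outside its scope.

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest inversion of a Laplace transform F(s) at time t.

    f(t) ~ (ln 2 / t) * sum_{i=1}^{N} V_i * F(i * ln 2 / t), N even,
    with the classical Stehfest weights V_i below.  Each t is computed
    independently of every other t, which is exactly the property that
    enables the coarse-grained time-domain decomposition.
    """
    half = N // 2
    ln2 = math.log(2.0)
    total = 0.0
    for i in range(1, N + 1):
        v = 0.0
        for k in range((i + 1) // 2, min(i, half) + 1):
            v += (k ** half * math.factorial(2 * k)
                  / (math.factorial(half - k) * math.factorial(k)
                     * math.factorial(k - 1) * math.factorial(i - k)
                     * math.factorial(2 * k - i)))
        v *= (-1) ** (half + i)
        total += v * F(i * ln2 / t)
    return ln2 / t * total
```

For example, inverting F(s) = 1/(s + 1) recovers f(t) = exp(-t) to several digits. The weights grow rapidly with N, so in double precision N much beyond about 16 loses accuracy to cancellation; N = 10 to 14 is the usual working range.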
Abstract:
Background: Personal health records were implemented with adults with learning disabilities (AWLD) to try to improve their health care. Materials and Methods: Forty GP practices were randomized to the Personal Health Profile (PHP) implementation or control group. Two hundred and one AWLD were interviewed at baseline and 163 were followed up after 12 months of intervention (PHP group). AWLD and carers of AWLD were employed as research interviewers, and AWLD were full research participants. Results: Annual consultation rates in the intervention and control groups at baseline were low (2.3 and 2.6 visits, respectively). A slightly greater increase occurred over the year in the intervention group, 0.6 (-0.4 to 1.6) visits/year, compared with controls. AWLD in the PHP group reported more health problems at follow-up, 0.9 (0.0 to 1.8). Most AWLD liked their PHP (92%), but only 63% of AWLD and 55% of carers reported PHP usage. Carer turnover was high (34%). Conclusions: No significant outcomes were achieved by the intervention.