975 results for multiple data


Relevance:

60.00%

Publisher:

Abstract:

Medication information is a critical part of the information required to ensure residents' safety in the highly collaborative care context of residential aged care facilities (RACFs). Studies report poor medication information as a barrier to improving medication management in RACFs. Research exploring medication work practices in aged care settings remains limited. This study aimed to identify contextual and work practice factors contributing to breakdowns in medication information exchange in RACFs in relation to the medication administration process. We employed non-participant observations and semi-structured interviews to explore information practices in three Australian RACFs. Findings identified inefficiencies due to lack of information timeliness, manual stock management, multiple data transcriptions, inadequate design of essential documents such as administration sheets, and a reliance on manual auditing procedures. Technological solutions such as electronic medication administration records offer opportunities to overcome some of the identified problems. However, these interventions need to be designed to align with the collaborative, team-based processes they intend to support.

Relevance:

60.00%

Publisher:

Abstract:

This thesis used multidisciplinary approaches that greatly enhance our understanding of population structure and can be particularly powerful tools for resolving variation in the melon fly over geographic and temporal scales, and for determining invasive pathways. The results from this thesis reinforce the value of integrating multiple data sets to better understand and resolve natural variation within an important pest, to determine whether there are cryptic species, discrete lineages or host races, and to identify dispersal pathways in an invasive pest. These results are instructive for regional biosecurity, trade and quarantine, and provide important background for future area-wide management programmes. The integrative methodology adopted in this thesis is applicable to a variety of other insect pests.

Relevance:

60.00%

Publisher:

Abstract:

In this paper we develop a multithreaded VLSI processor linear-array architecture to render complex environments based on the radiosity approach. The processing elements are identical and multithreaded, and work in Single Program Multiple Data (SPMD) mode. A new algorithm for the radiosity computations, based on the progressive refinement approach [2], is proposed. Simulation results indicate that the architecture is latency tolerant and scalable. It is shown that a linear array of 128 uni-threaded processing elements sustains a throughput close to 0.4 million patches/sec.
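
As a point of reference for the algorithm being parallelised, the following is a minimal sequential sketch of the classical progressive refinement iteration (shoot from the patch with the most unshot energy); the patch arrays and the precomputed form-factor matrix are illustrative assumptions, not details taken from the paper or its processor array.

```python
import numpy as np

def progressive_radiosity(emission, reflectance, form_factors, areas, n_shots=1000, eps=1e-9):
    """Sequential progressive-refinement radiosity sketch.

    emission, reflectance, areas : length-N arrays (one entry per patch)
    form_factors[i, j]           : fraction of energy leaving patch i that
                                   reaches patch j (precomputed; F[i, i] = 0)
    """
    B = emission.astype(float).copy()     # current radiosity estimate
    dB = emission.astype(float).copy()    # unshot radiosity per patch
    for _ in range(n_shots):
        i = int(np.argmax(dB * areas))    # shoot from the patch with most unshot energy
        if dB[i] * areas[i] <= eps:
            break
        # reciprocity: F[j, i] = F[i, j] * A[i] / A[j]
        delta = reflectance * form_factors[i] * areas[i] / areas * dB[i]
        B += delta
        dB += delta
        dB[i] = 0.0                       # patch i has now shot all its energy
    return B
```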

Relevance:

60.00%

Publisher:

Abstract:

Floquet analysis is widely used for small-order systems (say, order M < 100) to find trim results of control inputs and periodic responses, and stability results of damping levels and frequencies. Presently, however, it is practical neither for design applications nor for comprehensive analysis models that lead to large systems (M > 100); the run time on a sequential computer is simply prohibitive. Accordingly, a massively parallel Floquet analysis is developed with emphasis on large systems, and it is implemented on two SIMD or single-instruction, multiple-data computers with 4096 and 8192 processors. The focus of this development is a parallel shooting method with damped Newton iteration to generate trim results; the Floquet transition matrix (FTM) comes out as a byproduct. The eigenvalues and eigenvectors of the FTM are computed by a parallel QR method, and thereby stability results are generated. For illustration, flap and flap-lag stability of isolated rotors are treated by the parallel analysis and by a corresponding sequential analysis with the conventional shooting and QR methods; linear quasisteady airfoil aerodynamics and a finite-state three-dimensional wake model are used. Computational reliability is quantified by the condition numbers of the Jacobian matrices in Newton iteration, the condition numbers of the eigenvalues and the residual errors of the eigenpairs, and reliability figures are comparable in both the parallel and sequential analyses. Compared to the sequential analysis, the parallel analysis reduces the run time of large systems dramatically, and the reduction increases with increasing system order; this finding offers considerable promise for design and comprehensive-analysis applications.
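
For readers unfamiliar with the quantities involved, here is a minimal sequential sketch, using scipy, of how a Floquet transition matrix can be assembled column by column by integrating over one period and how its eigenvalues yield damping levels and frequencies; the paper's parallel shooting with damped Newton iteration for trim is not reproduced, and `A_of_t` is an assumed user-supplied periodic system matrix.

```python
import numpy as np
from scipy.integrate import solve_ivp

def floquet_transition_matrix(A_of_t, T, n):
    """Monodromy (Floquet transition) matrix of x' = A(t) x with A(t + T) = A(t),
    built by integrating each unit initial condition over one period.  The n
    columns are independent, which is the part a parallel machine can spread
    across processors."""
    Phi = np.zeros((n, n))
    for j in range(n):
        x0 = np.zeros(n)
        x0[j] = 1.0
        sol = solve_ivp(lambda t, x: A_of_t(t) @ x, (0.0, T), x0, rtol=1e-9, atol=1e-12)
        Phi[:, j] = sol.y[:, -1]
    return Phi

def floquet_stability(Phi, T):
    """Characteristic exponents: real parts are damping levels, imaginary parts frequencies."""
    multipliers = np.linalg.eigvals(Phi).astype(complex)   # Floquet multipliers
    exponents = np.log(multipliers) / T                    # principal branch
    return exponents.real, exponents.imag
```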

Relevance:

60.00%

Publisher:

Abstract:

The Community Climate System Model (CCSM) is a Multiple Program Multiple Data (MPMD) parallel global climate model comprising atmosphere, ocean, land, ice and coupler components. The simulations have a time step of the order of tens of minutes and are typically performed for periods of the order of centuries. These climate simulations are highly computationally intensive and can take several days to weeks to complete on most of today's multi-processor systems. Executing CCSM on grids could potentially lead to a significant reduction in simulation times due to the increase in the number of processors. However, in order to obtain performance gains on grids, several challenges have to be met. In this work, we describe our load balancing efforts in CCSM to make it suitable for grid enabling. We also identify the various challenges in executing CCSM on grids. Since CCSM is an MPI application, we also describe our current work on building an MPI implementation for grids to grid-enable CCSM.
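
A hedged sketch of the kind of processor layout being balanced: a single MPI job split into per-component communicators, in the spirit of an MPMD coupled model. The component names, the rank counts and the mpi4py-based code are illustrative assumptions, not CCSM's actual configuration; in load balancing, the per-component counts are the knobs tuned so that no component stalls the others.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Hypothetical processor allocation per component of a coupled model.
layout = {"atmosphere": size // 2, "ocean": size // 4,
          "land": size // 8, "coupler": size - (size // 2 + size // 4 + size // 8)}

# Assign this rank to a component based on contiguous rank ranges.
names, counts = list(layout), list(layout.values())
bounds = [sum(counts[:i + 1]) for i in range(len(counts))]
color = next(i for i, b in enumerate(bounds) if rank < b)

# One communicator per component; each component then runs its own solver.
component_comm = comm.Split(color=color, key=rank)
print(f"world rank {rank} -> {names[color]} rank {component_comm.Get_rank()}")
```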

Relevance:

60.00%

Publisher:

Abstract:

Decoherence as an obstacle in quantum computation is viewed as a struggle between two forces [1]: the computation, which uses the exponential dimension of Hilbert space, and decoherence, which destroys this entanglement by collapse. In this model of decohered quantum computation, a sequential quantum computer loses the battle because, at each time step, only a local operation is carried out while g*(t) gates collapse. With quantum circuits computing in a parallel way the situation is different: g(t) gates can be applied at each time step while g*(t) gates collapse because of decoherence. As g(t) ≈ g*(t), the competition here is even [1]. Our paper improves on this model by slowing down g*(t): the circuit is encoded in parallel computing architectures and run in the Single Instruction Multiple Data (SIMD) paradigm. We propose a parallel ion trap architecture for single-bit rotation of a qubit.
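
To make the SIMD idea concrete, here is a minimal numpy sketch (an assumption-laden illustration, not the proposed ion-trap architecture): one single-qubit rotation, the "instruction", applied simultaneously to a batch of independent qubits, the "data".

```python
import numpy as np

def rx(theta):
    """Single-qubit rotation about X by angle theta."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -1j * s], [-1j * s, c]])

# A batch of independent qubits, each stored as an amplitude pair (alpha, beta).
qubits = np.tile(np.array([1.0 + 0j, 0.0 + 0j]), (8, 1))   # eight |0> states

# SIMD flavour: one gate applied to every qubit in a single vectorised
# operation rather than a sequential per-qubit loop.
rotated = qubits @ rx(np.pi / 2).T

# Each qubit is now (cos(pi/4), -i sin(pi/4)).
print(rotated[0])
```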

Relevance:

60.00%

Publisher:

Abstract:

Ever since the first computer was invented, one of the goals has been for computers to execute faster and faster, so that increasingly complex problems can be solved. The first solution was to increase the power of the processors, but the physical limits imposed by the speed of electronic components have forced the search for other ways of improving performance. Since then there have been many technologies for increasing performance, such as multiprocessors and MIMD architectures, but here we analyse the SIMD architecture. This type of processor was widely used in the supercomputers of the 1980s and 1990s, but the progress of microprocessors pushed this technology into the background. Today, all processors have architectures that implement SIMD (Single Instruction, Multiple Data) instructions. In this document we study Intel's SIMD technologies SSE, AVX and AVX2 to see whether using the vector processor with SIMD instructions really yields any performance improvement. It should be borne in mind that AVX has only been available since 2011 and AVX2 was not available until 2013, so we are working with new technologies. Moreover, this kind of technology has a secure future, as Intel has announced its new technology, AVX-512, for 2015.
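
Since the study itself benchmarks SSE/AVX intrinsics, which cannot be shown directly here, the following is only a loose numpy analogue: numpy's element-wise kernels are compiled with vectorisation, so timing a scalar Python loop against the equivalent array expression gives the same scalar-versus-SIMD flavour of comparison. The array size and arithmetic are arbitrary choices.

```python
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

t0 = time.perf_counter()
out_scalar = [a[i] * b[i] + a[i] for i in range(n)]   # one element at a time
t1 = time.perf_counter()
out_vector = a * b + a                                 # whole array at once
t2 = time.perf_counter()

print(f"scalar loop : {t1 - t0:.3f} s")
print(f"vectorised  : {t2 - t1:.3f} s")
```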

Relevance:

60.00%

Publisher:

Abstract:

The fundamental aim of clustering algorithms is to partition data points. We consider tasks where the discovered partition is allowed to vary with some covariate such as space or time. One approach would be to use fragmentation-coagulation processes, but these, being Markov processes, are restricted to linear or tree-structured covariate spaces. We define a partition-valued process on an arbitrary covariate space using Gaussian processes. We use the process to construct a multitask clustering model which partitions data points in a similar way across multiple data sources, and a time series model of network data which allows cluster assignments to vary over time. We describe sampling algorithms for inference and apply our method to defining cancer subtypes based on different types of cellular characteristics, finding regulatory modules from gene expression data from multiple human populations, and discovering time-varying community structure in a social network.
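
A loose sketch of the underlying idea, not the paper's partition-valued process: draw a few Gaussian-process functions over a covariate and assign each point to the largest one, so that cluster membership varies smoothly with the covariate. The kernel, lengthscale and number of clusters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(x, lengthscale=0.5):
    """Squared-exponential covariance over a 1-D covariate (e.g. time)."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# Covariate (say, time) and K latent GP functions drawn over it.
t = np.linspace(0.0, 5.0, 200)
K = 3
cov = rbf_kernel(t) + 1e-8 * np.eye(len(t))
latent = rng.multivariate_normal(np.zeros(len(t)), cov, size=K)   # shape (K, 200)

# Covariate-dependent partition: each point joins the cluster whose latent
# function is largest there, so assignments drift smoothly with t.
assignment = np.argmax(latent, axis=0)
print(assignment[:20])
```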

Relevance:

60.00%

Publisher:

Abstract:

CIMS-MIODP, a multiple-data-source interoperation and open distributed processing system, uses distributed object interoperation and agent technology to provide CIMS-oriented, RPC-based remote object access (ROA) and SQL3-based remote database access (RDA), supplying different levels of support for information integration and distributed processing in a CIMS environment. This paper discusses the main design and implementation issues of the CIMS-MIODP system, including the basic model, extended services and protocols, support for SQL3, and the system implementation architecture.
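
As a generic illustration of RPC-based remote object access (not CIMS-MIODP's own model, protocols or SQL3 support), the sketch below uses Python's standard-library XML-RPC modules; the `PartCatalogue` object and its `lookup` method are hypothetical.

```python
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

class PartCatalogue:
    """A stand-in 'remote object' exposing one method to clients."""
    def lookup(self, part_id):
        return {"part_id": part_id, "status": "in-stock"}

# Server side: register the object and serve requests in the background.
server = SimpleXMLRPCServer(("127.0.0.1", 8901), allow_none=True, logRequests=False)
server.register_instance(PartCatalogue())
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the proxy forwards the call over the wire, the remote object runs it.
client = ServerProxy("http://127.0.0.1:8901")
print(client.lookup("A-42"))
```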

Relevance:

60.00%

Publisher:

Abstract:

Nowadays, the exploration of fractured reservoirs plays a vital role in the further development of the petroleum industry throughout the world. Fractured hydrocarbon reservoirs are widely distributed in China. Usually, S-wave techniques prevail, but they have a significant disadvantage: the prohibitive expense of S-wave data acquisition and processing. This motivates using P-wave data directly to detect fractures. We briefly introduce the theoretical model (HTI) for a fractured reservoir, then study Rüger's reflectivity method to obtain the P-wave reflection coefficients of the top and bottom interfaces of an HTI layer and their azimuthal anisotropy character. Based on that study, we give a review and comparison of two seismic exploration technologies for fractures available in the industry: P-wave AVO and AVA. They have shown great potential for application to oil and gas prediction in fractured reservoirs and to fine reservoir description. Each technique has its drawbacks: AVO is limited to small reflection angles, and AVA offers only relative results. We therefore conclude that, for any particular field, a better approach is to synthesise multiple data sources, including core, outcrop, well tests, image logs and 3D VSPs, to improve accuracy.
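
As a hedged illustration of the azimuthal P-wave signature such methods exploit, the sketch below evaluates a commonly used near-offset azimuthal AVO approximation for an HTI interface rather than Rüger's full expressions; all coefficients and angles are illustrative assumptions.

```python
import numpy as np

def avaz_reflectivity(theta, phi, A, B_iso, B_ani, phi_sym):
    """Near-offset azimuthal AVO sketch for an HTI interface:
    R(theta, phi) ~ A + (B_iso + B_ani * cos^2(phi - phi_sym)) * sin^2(theta),
    where theta is incidence angle, phi acquisition azimuth and phi_sym the
    fracture symmetry-axis azimuth (all in radians).  Coefficients illustrative."""
    return A + (B_iso + B_ani * np.cos(phi - phi_sym) ** 2) * np.sin(theta) ** 2

# Sweep azimuth at a fixed 30-degree incidence to show the cos^2 signature
# that azimuthal P-wave AVO exploits to infer fracture orientation.
phi = np.linspace(0.0, np.pi, 7)
r = avaz_reflectivity(np.deg2rad(30.0), phi, A=0.05, B_iso=-0.10, B_ani=0.04, phi_sym=0.0)
print(np.round(r, 4))
```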

Relevance:

60.00%

Publisher:

Abstract:

Stochastic reservoir modeling is a technique used in reservoir description. Through this technique, multiple data sources with different scales can be integrated into the reservoir model, and its uncertainty can be conveyed to researchers and supervisors. Because its models are digital, its scales are changeable, it honors known information and data, and it conveys uncertainty in the models, stochastic reservoir modeling provides a mathematical framework or platform for researchers to integrate multiple data sources and information with different scales into their prediction models. As a relatively new method, stochastic reservoir modeling is on the upswing. Based on related work, this paper, starting with the Markov property in reservoirs, illustrates how to construct spatial models for categorical and continuous variables by means of Markov random fields. In order to explore reservoir properties, researchers must study the properties of the rocks embedded in reservoirs. Apart from laboratory methods, geophysical measurements and their subsequent interpretation may be the main sources of information and data used in petroleum exploration and exploitation. Building a model for flow simulation from incomplete information amounts to predicting the spatial distributions of the different reservoir variables. Considering data sources, digital extent and methods, reservoir modeling can be divided into four sorts: reservoir-sedimentology-based methods, reservoir seismic prediction, kriging, and stochastic reservoir modeling. The application of Markov chain models to the analogue of sedimentary strata is introduced in the third part of the paper. The concept of the Markov chain model, the N-step transition probability matrix, the stationary distribution, the estimation of the transition probability matrix, the testing of the Markov property, two means of organizing sections (based on equal intervals and based on rock facies), the embedded Markov matrix, the semi-Markov chain model, the hidden Markov chain model, etc., are presented in this part. Based on the 1-D Markov chain model, a conditional 1-D Markov chain model is discussed in the fourth part. By extending the 1-D Markov chain model to 2-D and 3-D situations, conditional 2-D and 3-D Markov chain models are presented. This part also discusses the estimation of the vertical transition probability, the lateral transition probability and the initialization of the top boundary. Corresponding digital models are used to illustrate or verify the related discussions. The fifth part, based on the fourth part and on the application of Markov random fields (MRFs) in image analysis, discusses an MRF-based method to simulate the spatial distribution of categorical reservoir variables. In this part, the probability of a particular configuration of categorical variables, the definition of an energy function for a categorical-variable field as a Markov random field, the Strauss model, and the estimation of the components of the energy function are presented, again with corresponding digital models used to illustrate or verify the related discussions. As for the simulation of the spatial distribution of continuous reservoir variables, the sixth part mainly explores two methods. The first is a pure GMRF-based method; related contents include the GMRF model and its neighborhood, parameter estimation, and the MCMC iteration method, with a digital example illustrating the method. The second is a two-stage model method. Based on the results of the categorical-variable distribution simulation, this method, taking a GMRF as the prior distribution for the continuous variables and taking into account the relationship between categorical variables such as rock facies and continuous variables such as porosity, permeability and fluid saturation, can produce a series of stochastic images of the spatial distribution of the continuous variables. Integrating multiple data sources into the reservoir model is one of the merits of stochastic reservoir modeling. After discussing how to model the spatial distributions of categorical and continuous reservoir variables, the paper explores how to combine conceptual depositional models, well logs, cores, seismic attributes and production history.
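
A minimal sketch of the 1-D Markov chain simulation of categorical (facies) variables that the paper builds on, assuming an illustrative transition probability matrix and facies names; in practice the matrix would be estimated from counted transitions in wells, and the N-step matrix is simply its matrix power.

```python
import numpy as np

rng = np.random.default_rng(1)
facies = ["channel sand", "overbank silt", "floodplain mud"]   # illustrative categories

# Illustrative one-step vertical transition probability matrix P[i, j]:
# the probability that facies j directly overlies facies i.  The N-step matrix
# is np.linalg.matrix_power(P, N); its rows converge to the stationary distribution.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])

def simulate_column(P, n_cells, start=0):
    """Simulate a vertical facies column as a first-order Markov chain."""
    states = [start]
    for _ in range(n_cells - 1):
        states.append(int(rng.choice(len(P), p=P[states[-1]])))
    return states

print([facies[s] for s in simulate_column(P, 20)])
```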

Relevance:

60.00%

Publisher:

Abstract:

The main reservoir type in the south of the Dagang Oilfield is alluvial. In this paper, the reservoir structure model and the distribution of connected bodies and flow barriers were built on the basis of a high-resolution sequence-stratigraphic framework and fine sedimentary microfacies study at the level of single sand bodies. Using static and dynamic data together, and comparing classification methods for reservoir flow units in different reservoirs, a criterion was defined that can be used to classify flow units in the first member of the Kongdian Formation of the Kongnan area. The qualitative method of well-to-well correlation and the quantitative method of conditional simulation using multiple data were adopted, together with physical simulation, to reveal the oil and water movement behaviour in different flow units and the distribution of remaining oil. A flow-unit study methodology suited to the Dagang Oilfield was thus formed for producing remaining oil according to flow units. Several notable advances were obtained in the following aspects: (1) Based on the high-resolution sequence-stratigraphic framework at the level of single sand bodies in the first member of the Kongdian Formation of the Kongnan area, and on the study of fine sedimentary microfacies and fault sealing, the reservoir structure of the Zao V low oil group to the Zao Vup4 layer is considered to be a jigsaw-puzzle reservoir, while the Zao Vup3 to Zao Vup1 layers are labyrinth reservoirs. (2) When classifying flow units using the static and dynamic data, only permeability is the basic parameter; other parameters, such as porosity, effective thickness and fluid viscosity, should be chosen or discarded according to whether the interlayer heterogeneity is weak or strong and to differences in interlayer crude oil character. (3) A method of building a predictive model of flow units was proposed. It follows the theories of reservoir sedimentology and high-resolution sequence stratigraphy, and adopts the qualitative method of well-to-well correlation and the quantitative method of stochastic simulation using integrated dense well data. Finally, the 3-D predictive model of flow units and the interlayer distribution model within flow units were built for the alluvial fan and fan delta facies in the first member of the Kongdian Formation of the Kongnan area, and nine genetic models of flow units of the alluvial environment, as distributed in space, were proposed. (4) Differences in reservoir microscopic pore configuration among the various flow units, and differences in flow capability and oil displacement efficiency, were demonstrated through physical experiments such as nuclear magnetic resonance (NMR), constant-rate mercury penetration and flow simulation. The distribution of remaining oil in this area was predicted by combining the dynamic data with numerical modeling based on the flow units. Remaining-oil production measures guided by flow units were put forward for the middle and late stages of oilfield development.
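
The paper's own permeability-based classification criterion is not reproduced here; as a generic stand-in, the sketch below classifies samples with the widely used flow zone indicator (FZI) of Amaefule et al., with illustrative thresholds and data.

```python
import numpy as np

def flow_zone_indicator(perm_md, phi):
    """FZI = RQI / normalised porosity, with permeability in mD and porosity as a fraction."""
    rqi = 0.0314 * np.sqrt(perm_md / phi)   # reservoir quality index
    phi_z = phi / (1.0 - phi)               # normalised (pore/grain volume) porosity
    return rqi / phi_z

def classify_flow_units(perm_md, phi, boundaries=(1.0, 3.0, 6.0)):
    """Bin samples into flow units by FZI thresholds (thresholds illustrative)."""
    fzi = flow_zone_indicator(np.asarray(perm_md, float), np.asarray(phi, float))
    return np.digitize(fzi, boundaries), fzi

units, fzi = classify_flow_units([5.0, 120.0, 850.0], [0.12, 0.21, 0.26])
print(units, np.round(fzi, 2))
```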

Relevance:

60.00%

Publisher:

Abstract:

The parallelization of an industrially important in-house computational fluid dynamics (CFD) code for calculating the airflow over complex aircraft configurations using the Euler or Navier–Stokes equations is presented. The code discussed is the flow solver module of the SAUNA CFD suite. This suite uses a novel grid system that may include block-structured hexahedral or pyramidal grids, unstructured tetrahedral grids or a hybrid combination of both. To assist in the rapid convergence to a solution, a number of convergence acceleration techniques are employed including implicit residual smoothing and a multigrid full approximation storage scheme (FAS). Key features of the parallelization approach are the use of domain decomposition and encapsulated message passing to enable the execution in parallel using a single programme multiple data (SPMD) paradigm. In the case where a hybrid grid is used, a unified grid partitioning scheme is employed to define the decomposition of the mesh. The parallel code has been tested using both structured and hybrid grids on a number of different distributed memory parallel systems and is now routinely used to perform industrial scale aeronautical simulations. Copyright © 2000 John Wiley & Sons, Ltd.
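
A minimal mpi4py sketch of the SPMD domain-decomposition pattern described above: every rank runs the same program, owns one slab of the mesh, and exchanges halo (ghost) values with its neighbours before each update. The 1-D decomposition and array contents are illustrative assumptions, not the SAUNA solver.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns a 1-D slab of the field plus one ghost cell at each end.
n_local = 100
u = np.zeros(n_local + 2)
u[1:-1] = rank                       # dummy interior data

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange boundary values with neighbours; every rank runs this same code (SPMD).
comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

# Ghost cells now hold neighbour data, so a stencil update could proceed.
```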

Relevance:

60.00%

Publisher:

Abstract:

The Computer Aided Parallelisation Tools (CAPTools) [Ierotheou C, Johnson SP, Cross M, Leggett PF. Computer aided parallelisation tools (CAPTools) - conceptual overview and performance on the parallelisation of structured mesh codes. Parallel Computing 1996;22:163-195] is a set of interactive tools aimed at providing automatic parallelisation of serial FORTRAN Computational Mechanics (CM) programs. CAPTools analyses the user's serial code and then, through stages of array partitioning, mask and communication calculation, generates parallel SPMD (Single Program Multiple Data) message-passing FORTRAN. The parallel code generated by CAPTools contains calls to a collection of routines that form the CAPTools Communications Library (CAPLib). The library provides a portable layer and a user-friendly abstraction over the underlying parallel environment. CAPLib contains optimised message-passing routines for data exchange between parallel processes and other utility routines for parallel execution control, initialisation and debugging. By compiling and linking with different implementations of the library, the user is able to run on many different parallel environments. Even with today's parallel systems, the concept of a single version of a parallel application code is more of an aspiration than a reality. However, for CM codes the data-partitioning SPMD paradigm requires a relatively small set of message-passing communication calls. This set can be implemented as an intermediate 'thin layer' library of message-passing calls that enables the parallel code (especially that generated automatically by a parallelisation tool such as CAPTools) to be as generic as possible. CAPLib is just such a 'thin layer' message-passing library that supports parallel CM codes, by mapping generic calls onto machine-specific libraries (such as CRAY SHMEM) and portable general-purpose libraries (such as PVM and MPI). This paper describes CAPLib together with its three perceived advantages over other routes: as a high-level abstraction, it is both easy to understand (especially when generated automatically by tools) and to implement by hand, for the CM community (who are not generally parallel computing specialists); the one parallel version of the application code is truly generic and portable; and the parallel application can readily utilise whatever message-passing libraries on a given machine yield optimum performance.
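
To illustrate the 'thin layer' idea, here is a toy wrapper in the same spirit: a couple of generic calls that hide the underlying message-passing library from the application (mpi4py stands in here for MPI, PVM or SHMEM). The names `cap_init` and `cap_exchange` are hypothetical and are not CAPLib's actual interface.

```python
from mpi4py import MPI
import numpy as np

_comm = None   # the underlying library is hidden behind module-level state

def cap_init():
    """Initialise the thin layer and report this process's rank and the process count."""
    global _comm
    _comm = MPI.COMM_WORLD
    return _comm.Get_rank(), _comm.Get_size()

def cap_exchange(send_array, dest, source):
    """Generic neighbour exchange; the application never sees MPI calls directly."""
    recv_array = np.empty_like(send_array)
    _comm.Sendrecv(sendbuf=send_array, dest=dest, recvbuf=recv_array, source=source)
    return recv_array

rank, size = cap_init()
neighbour = (rank + 1) % size
halo = cap_exchange(np.full(4, float(rank)), dest=neighbour, source=(rank - 1) % size)
print(rank, halo)
```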

Relevance:

60.00%

Publisher:

Abstract:

Schraudolph proposed an excellent exponential approximation providing increased performance, particularly suited to the logistic squashing function used within many neural-network applications. This note applies Intel's Streaming SIMD Extensions 2 (SSE2), where SIMD is single instruction multiple data, of the Pentium IV class processor to Schraudolph's technique, further increasing the performance of the logistic squashing function. It was found that the calculation of the new 32-bit SSE2 logistic squashing function described here was up to 38 times faster than the conventional exponential function and up to 16 times faster than a Schraudolph-style 32-bit method on an Intel Pentium D 3.6 GHz CPU.
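
A hedged numpy reconstruction of the Schraudolph-style approximation the note accelerates: write a scaled integer into the high 32 bits of an IEEE-754 double to approximate exp, then build the logistic function on top of it. The constants follow Schraudolph's published method, but this is plain numpy, not the paper's SSE2 implementation.

```python
import numpy as np

EXP_A = 1048576.0 / np.log(2.0)   # 2**20 / ln 2
EXP_C = 60801                     # offset minimising RMS error (Schraudolph, 1999)
EXP_B = 1072693248 - EXP_C        # 1023 * 2**20 minus the correction

def fast_exp(y):
    """Approximate exp(y) by constructing the double's bit pattern directly."""
    y = np.asarray(y, dtype=np.float64)
    hi = (EXP_A * y + EXP_B).astype(np.int64)   # value destined for the high 32 bits
    bits = hi << 32                             # low 32 bits left as zero
    return bits.view(np.float64)

def fast_logistic(x):
    """Logistic squashing function built on the approximate exponential."""
    return 1.0 / (1.0 + fast_exp(-np.asarray(x, dtype=np.float64)))

# Compare the approximation against the exact logistic on a few points.
x = np.linspace(-4.0, 4.0, 5)
print(np.round(fast_logistic(x), 3), np.round(1.0 / (1.0 + np.exp(-x)), 3))
```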