141 results for event-driven simulation

at University of Queensland eSpace - Australia


Relevance: 100.00%

Abstract:

This paper addresses the problem of mapping business contract conditions onto the messages and rules that represent service interactions in a collaborative business process. We describe why this mapping is not straightforward by means of an example. We then consider a message-driven process language as a target for the mapping and use this mapping solution to discuss a broad range of issues related to the mapping problem.
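
As a hypothetical illustration of the gap the paper examines, consider a declarative deadline clause such as "payment is due within 30 days of the order", which must be re-expressed as a rule over observed service messages before it can drive a message-driven process. The message shapes and the rule below are illustrative assumptions, not the paper's notation.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical message type for a service interaction.
@dataclass
class Message:
    kind: str              # e.g. "PurchaseOrder", "Payment"
    timestamp: datetime

def payment_deadline_rule(order: Message, payment: Optional[Message],
                          now: datetime,
                          deadline: timedelta = timedelta(days=30)) -> Optional[str]:
    """Re-express the contract clause 'payment is due within <deadline>
    of the order' as a check over the observed message stream; returns
    a violation event name, or None if the clause is satisfied so far."""
    due = order.timestamp + deadline
    if payment is None:
        # Clause not yet decidable from the messages seen so far.
        return None if now <= due else "PaymentOverdue"
    return None if payment.timestamp <= due else "LatePayment"

order = Message("PurchaseOrder", datetime(2024, 1, 1))
print(payment_deadline_rule(order, None, now=datetime(2024, 2, 15)))  # PaymentOverdue

Even this toy clause shows why the mapping is not straightforward: the contract speaks of obligations over time, while the process sees only discrete messages, so absence of a message must itself be detected and turned into an event.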

Relevance: 80.00%

Abstract:

We reviewed the use of advanced display technologies for monitoring in anesthesia. Researchers are investigating displays that integrate information and that, in some cases, also deliver the results continuously to the anesthesiologist. Integrated visual displays reveal higher-order properties of patient state and speed responses to events, but their benefits under intensely time-shared load are unknown. Head-mounted displays seem to shorten the time to respond to changes, but their impact on peripheral awareness and attention is unknown. Continuous auditory displays extending pulse oximetry seem to shorten response times and improve the ability to time-share other tasks, but their integration into the already noisy operative environment still needs to be tested. We reviewed the advantages and disadvantages of the three approaches, drawing on findings from other fields, such as aviation, to suggest outcomes where there are still no results for the anesthesia context. Proving that advanced patient monitoring displays improve patient outcomes is difficult; a more realistic goal is to show that such displays lead to better situational awareness, earlier responses, and lower workload, all of which keep anesthesia practice away from the outer boundaries of safe operation.

Relevance: 40.00%

Abstract:

Numerical methods are used to simulate the double-diffusion driven convective pore-fluid flow and rock alteration in three-dimensional fluid-saturated geological fault zones. The double diffusion is caused by a combination of a positive upward temperature gradient and a positive downward salinity concentration gradient within a three-dimensional fluid-saturated geological fault zone, which is assumed to be more permeable than its surrounding rocks. In order to ensure the physical meaningfulness of the obtained numerical solutions, the numerical method used in this study is validated against a benchmark problem for which the analytical solution for the critical Rayleigh number of the system is available. The theoretical value of the critical Rayleigh number of a three-dimensional fluid-saturated geological fault zone system can be used to judge whether or not double-diffusion driven convective pore-fluid flow can take place within the system. After the possibility of triggering the double-diffusion driven convective pore-fluid flow is theoretically validated for the numerical model of a three-dimensional fluid-saturated geological fault zone system, the corresponding numerical solutions for the convective flow and temperature are directly coupled with a geochemical system. Through the numerical simulation of this coupled system of convective fluid flow, heat transfer, mass transport and chemical reactions, we have investigated the effect of the double-diffusion driven convective pore-fluid flow on rock alteration, which is the direct consequence of mineral redistribution through dissolution, transport and precipitation within the three-dimensional fluid-saturated geological fault zone system.
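
For orientation, one standard definition of the Rayleigh number for convection in a fluid-saturated porous layer (a textbook form, stated here as background rather than taken from the paper) is

\[ Ra = \frac{\rho_0\, g\, \beta\, \Delta T\, k\, H}{\mu\, \kappa_e}, \]

where \(\rho_0\) is the reference fluid density, \(g\) gravity, \(\beta\) the thermal volume-expansion coefficient, \(\Delta T\) the temperature difference across a layer of thickness \(H\), \(k\) the permeability, \(\mu\) the dynamic viscosity and \(\kappa_e\) the effective thermal diffusivity. Convection can set in once \(Ra\) exceeds a critical value (\(4\pi^2\) for the classical horizontally extended layer with isothermal, impermeable top and bottom boundaries); comparing a numerical model's onset of convection against such an analytical threshold is the kind of benchmark validation the abstract describes.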

Relevance: 40.00%

Abstract:

We present a novel method, called the transform likelihood ratio (TLR) method, for estimation of rare-event probabilities with heavy-tailed distributions. Via a simple transformation (change of variables) technique, the TLR method reduces the original rare-event probability estimation with heavy-tailed distributions to an equivalent one with light-tailed distributions. Once this transformation has been established, we estimate the rare-event probability via importance sampling, using either the classical exponential change of measure or the standard likelihood ratio change of measure. In the latter case the importance sampling distribution is chosen from the same parametric family as the transformed distribution. We estimate the optimal parameter vector of the importance sampling distribution using the cross-entropy method. We prove the polynomial complexity of the TLR method for certain heavy-tailed models and demonstrate numerically its high efficiency for various heavy-tailed models previously thought to be intractable. We also show that the TLR method can be viewed as a universal tool in the sense that it not only provides a unified view of heavy-tailed simulation but can also be used efficiently in simulation with light-tailed distributions. We present extensive simulation results which support the efficiency of the TLR method.
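
To make the transformation concrete, here is a minimal numerical sketch for a sum of Pareto random variables: a Pareto variable with survival function (1+x)^(-alpha) is the image of a unit exponential Z under x = exp(Z/alpha) - 1, so the heavy-tailed rare-event probability becomes a light-tailed one, estimated below with a likelihood-ratio change of measure within the same exponential family. The tail index, threshold, and tilting rate are illustrative assumptions; the hand-picked tilting rate stands in for the paper's cross-entropy optimisation.

import numpy as np

rng = np.random.default_rng(0)
alpha, n, u = 1.5, 5, 10_000.0   # Pareto tail index, number of summands, threshold

def tlr_estimate(eta, N=100_000):
    # Sample the light-tailed transformed variables Z_i ~ Exp(eta)
    # instead of Exp(1) (eta < 1 samples large Z more often).
    Z = rng.exponential(1.0 / eta, size=(N, n))
    X = np.expm1(Z / alpha)                     # back-transform to Pareto
    hit = X.sum(axis=1) > u                     # rare event in original coordinates
    # Likelihood ratio of Exp(1) against Exp(eta), per sample:
    W = np.exp(-(1.0 - eta) * Z.sum(axis=1)) / eta**n
    return np.mean(hit * W)

# A cross-entropy pass would tune eta; a hand-picked value stands in here.
print(tlr_estimate(eta=0.3))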

Relevance: 40.00%

Abstract:

A systematic goal-driven top-down modelling methodology is proposed that is capable of developing a multiscale model of a process system for given diagnostic purposes. The diagnostic goal-set and the symptoms are extracted from HAZOP analysis results, where the possible actions to be performed in a fault situation are also described. The multiscale dynamic model is realized in the form of a hierarchical coloured Petri net by using a novel substitution place-transition pair. Multiscale simulation that focuses automatically on the fault areas is used to predict the effect of the proposed preventive actions. The notions and procedures are illustrated on some simple case studies including a heat exchanger network and a more complex wet granulation process.
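
As a loose illustration of the idea (not the paper's formalism), the sketch below fires a coarse place-transition model of a heat-exchanger step and then substitutes a refined two-transition subnet when the diagnostic focus falls on that unit; all names and the token handling are assumptions.

# Minimal place/transition net with coloured tokens; a coarse transition
# can be replaced by a refined subnet, loosely standing in for the
# paper's substitution place-transition pair.
class Transition:
    def __init__(self, name, pre, post):
        self.name, self.pre, self.post = name, pre, post

    def enabled(self, marking):
        return all(marking.get(p) for p in self.pre)

    def fire(self, marking):
        toks = [marking[p].pop() for p in self.pre]      # consume tokens
        for p in self.post:
            marking.setdefault(p, []).append(toks[0])    # propagate the colour

# Coarse model: one transition summarises the whole heat-exchange step.
coarse = [Transition("HeatExchange", pre=["feed"], post=["product"])]

# Refined subnet, used only when the diagnosis focuses on this unit.
fine = [Transition("HotSide", pre=["feed"], post=["wall"]),
        Transition("ColdSide", pre=["wall"], post=["product"])]

def run(marking, transitions):
    fired = True
    while fired:
        fired = False
        for t in transitions:
            if t.enabled(marking):
                t.fire(marking)
                fired = True

m = {"feed": ["stream#1"]}
run(m, fine)      # zoom in on the (hypothetically) faulty unit
print(m)          # {'feed': [], 'wall': [], 'product': ['stream#1']}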

Relevance: 40.00%

Abstract:

The estimation of P(S_n > u) by simulation, where S_n is the sum of independent, identically distributed random variables Y_1, ..., Y_n, is of importance in many applications. We propose two simulation estimators based upon the identity P(S_n > u) = n P(S_n > u, M_n = Y_n), where M_n = max(Y_1, ..., Y_n). One estimator uses importance sampling (for Y_n only), and the other uses conditional Monte Carlo, conditioning upon Y_1, ..., Y_{n-1}. Properties of the relative error of the estimators are derived, and a numerical study is given in terms of the M/G/1 queue, in which n is replaced by an independent geometric random variable N. The conclusion is that the new estimators compare extremely favorably with previous ones. In particular, the conditional Monte Carlo estimator is the first heavy-tailed example of an estimator with bounded relative error. Further improvements are obtained in the random-N case by incorporating control variates and stratification techniques into the new estimation procedures.
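
A minimal sketch of the conditional Monte Carlo variant for fixed n (the tail index, n, and threshold are illustrative assumptions): given Y_1, ..., Y_{n-1}, the event {S_n > u, M_n = Y_n} occurs exactly when Y_n exceeds max(M_{n-1}, u - S_{n-1}), and that probability is available in closed form from the tail of F.

import numpy as np

rng = np.random.default_rng(1)
alpha, n, u = 1.5, 10, 1_000.0   # Pareto tail index, number of summands, threshold

def pareto_tail(x):
    # P(Y > x) for the Pareto (Lomax) law with survival (1+x)^(-alpha), x >= 0.
    return (1.0 + x) ** (-alpha)

def conditional_mc(N=100_000):
    # Identity: P(S_n > u) = n * P(S_n > u, M_n = Y_n).
    Y = rng.pareto(alpha, size=(N, n - 1))   # numpy's pareto has survival (1+x)^(-alpha)
    S = Y.sum(axis=1)                        # S_{n-1}
    M = Y.max(axis=1)                        # M_{n-1}
    # Conditional probability that Y_n completes the event, in closed form:
    return np.mean(n * pareto_tail(np.maximum(M, u - S)))

print(conditional_mc())

Because the random last summand is integrated out analytically, each sample contributes a smooth value instead of a rare 0/1 indicator, which is what yields the bounded relative error the abstract reports.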

Relevance: 40.00%

Abstract:

In recent years, the paradigm of component-based software engineering has become established in the construction of complex mission-critical systems. Due to this trend, there is a practical need for techniques that evaluate critical properties (such as safety, reliability, availability or performance) of these systems. In this paper, we review several high-level techniques for the evaluation of safety properties of component-based systems, and we propose a new evaluation model (State Event Fault Trees) that extends safety analysis towards a lower abstraction level. This model possesses state-event semantics and strong encapsulation, which is especially useful for the evaluation of component-based software systems. Finally, we compare the techniques and give suggestions for their combined usage.
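
For background, here is a minimal evaluation of a classical static fault tree (AND/OR gates over independent basic events); State Event Fault Trees extend this style of model with state-event semantics, which this sketch deliberately does not capture. The probabilities and tree structure are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Union

@dataclass
class Basic:
    p: float                  # failure probability of a basic event

@dataclass
class Gate:
    kind: str                 # "AND" or "OR"
    inputs: List[Union["Gate", Basic]]

def prob(node):
    """Top-event probability, assuming independent basic events."""
    if isinstance(node, Basic):
        return node.p
    ps = [prob(c) for c in node.inputs]
    if node.kind == "AND":    # all inputs fail
        out = 1.0
        for p in ps:
            out *= p
        return out
    out = 1.0                 # OR: at least one input fails
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Top event: sensor fails AND (primary controller OR backup fails).
top = Gate("AND", [Basic(1e-3), Gate("OR", [Basic(1e-2), Basic(5e-3)])])
print(f"P(top event) ~ {prob(top):.2e}")   # ~ 1.50e-05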

Relevance: 30.00%

Abstract:

We present finite element simulations of temperature-gradient driven rock alteration and mineralization in fluid-saturated porous rock masses. In particular, we explore the significance of production/annihilation terms in the mass balance equations and the dependence of the spatial patterns of rock alteration upon the ratio of the roll-over time of large-scale convection cells to the relaxation time of the chemical reactions. Special concepts such as the gradient reaction criterion or rock alteration index (RAI) are discussed in light of the present, more general theory. In order to validate the finite element simulation, we derive an analytical solution for the rock alteration index of a benchmark problem on a two-dimensional rectangular domain. Since the geometry and boundary conditions of the benchmark problem can be easily and exactly modelled, the analytical solution is also useful for validating other numerical methods, such as the finite difference method and the boundary element method, when they are used to deal with this kind of problem. Finally, the potential of the theory is illustrated by means of finite element studies of coupled flow problems in materially homogeneous and inhomogeneous porous rock masses.
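
One commonly quoted form of the gradient reaction criterion (stated here as an assumption for orientation, not quoted from the paper) takes the rock alteration index as the alignment of the pore-fluid flow with the temperature gradient,

\[ \mathrm{RAI} = \mathbf{u} \cdot \nabla T , \]

where \(\mathbf{u}\) is the pore-fluid (Darcy) velocity. For a mineral whose solubility increases with temperature, fluid moving toward hotter rock (\(\mathrm{RAI} > 0\)) tends to dissolve the mineral, while fluid moving toward cooler rock (\(\mathrm{RAI} < 0\)) tends to precipitate it, so the sign pattern of the index maps out the alteration zones.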

Relevance: 30.00%

Abstract:

Functional magnetic resonance imaging (FMRI) analysis methods can be quite generally divided into hypothesis-driven and data-driven approaches. The former are used in the majority of FMRI studies, where a specific haemodynamic response is modelled using knowledge of event timing during the scan and is tested against the data with a t test or a correlation analysis. These approaches often lack the flexibility to account for variability in the haemodynamic response across subjects and brain regions, which is of specific interest in high-temporal-resolution event-related studies. Current data-driven approaches attempt to identify components of interest in the data, but do not yet utilise any physiological information to discriminate between these components. Here we present a hypothesis-driven approach that extends Friman's maximum correlation modelling method (NeuroImage 16, 454-464, 2002), specifically focused on discriminating the temporal characteristics of event-related haemodynamic activity. Test analyses, on both simulated and real event-related FMRI data, are presented.
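
As background for the hypothesis-driven side, here is a minimal sketch of the standard modelled-response correlation test the abstract refers to (not Friman's extension itself); the TR, event onsets, and HRF parameters are illustrative assumptions.

import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 120                 # repetition time (s) and number of scans
onsets = [10, 50, 90, 130, 170]        # event onset times in seconds

def hrf(t):
    # Canonical double-gamma haemodynamic response (SPM-style parameters).
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

# Build the model regressor: event train convolved with the HRF.
stim = np.zeros(n_scans)
stim[(np.array(onsets) / TR).astype(int)] = 1.0
regressor = np.convolve(stim, hrf(np.arange(0, 32, TR)))[:n_scans]

# Hypothesis-driven test: correlate the regressor with a voxel time series.
voxel = regressor + 0.5 * np.random.default_rng(2).standard_normal(n_scans)  # fake data
r = np.corrcoef(regressor, voxel)[0, 1]
print(f"correlation with modelled response: {r:.2f}")

The inflexibility the abstract criticises is visible here: the HRF shape is fixed in advance, so any subject- or region-specific deviation in timing lowers the correlation rather than being modelled.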

Relevance: 30.00%

Abstract:

The St. Lawrence Island polynya (SLIP) is a commonly occurring winter phenomenon in the Bering Sea, in which dense saline water produced during new ice formation is thought to flow northward through the Bering Strait to help maintain the Arctic Ocean halocline. Winter darkness and inclement weather conditions have made continuous in situ and remote observation of this polynya difficult. However, imagery acquired by the European Space Agency ERS-1 Synthetic Aperture Radar (SAR) has allowed observation of the St. Lawrence Island polynya using both the imagery and derived ice-displacement products. With the development of ARCSyM, a high-resolution regional model of the Arctic atmosphere/sea ice system, simulation of the SLIP in a climate model is now possible. Intercomparisons between remotely sensed products and simulations can lead to additional insight into the SLIP formation process. Low-resolution SAR, SSM/I and AVHRR infrared imagery for the St. Lawrence Island region are compared with the results of a model simulation for the period of 24-27 February 1992. The imagery illustrates a polynya event (polynya opening). With the northerly winds strong and consistent over several days, the coupled model captures the SLIP event with moderate accuracy. However, the introduction of a stability-dependent atmosphere-ice drag coefficient, which allows feedbacks between atmospheric stability, open water, and air-ice drag, produces a more accurate simulation of the SLIP in comparison to satellite imagery. Model experiments show that the polynya event is forced primarily by changes in atmospheric circulation followed by persistent favorable conditions; ocean surface currents are found to have a small but positive impact on the simulation, which is enhanced when wind forcing is weak or variable.

Relevance: 30.00%

Abstract:

The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates are computed from a pre-calculated table, step-size control cannot rely on the error estimates built into standard integration formulas. A step-size control scheme for use with the table-driven velocity and position calculation instead uses the difference between the result of one big step and that of two small steps. This variable-time-step method automatically chooses a suitable step size for each particle at each step according to the conditions. Simulation using the fixed-time-step method is compared with that using the variable-time-step method. The difference in computation time for the same accuracy using a variable step size (compared to a fixed step) depends on the particular problem; for a simple test case the times are roughly similar. However, the variable step size gives the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times.
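
A minimal sketch of the big-step/two-half-steps control described above (the force law, tolerance, and growth rule are illustrative assumptions; a production DEM code would take the acceleration from the pre-calculated table rather than from an explicit force function):

def step(x, v, dt, force, m=1.0):
    # One explicit update; a table lookup would replace force(x) in a DEM code.
    a = force(x) / m
    return x + v * dt + 0.5 * a * dt * dt, v + a * dt

def adaptive_step(x, v, dt, force, tol=1e-6):
    x_big, v_big = step(x, v, dt, force)              # one big step
    x_h, v_h = step(x, v, dt / 2, force)              # two half steps
    x_small, v_small = step(x_h, v_h, dt / 2, force)
    err = abs(x_big - x_small)                        # local error estimate
    if err > tol:                                     # reject: halve and retry
        return adaptive_step(x, v, dt / 2, force, tol)
    if err < tol / 16:                                # very accurate: grow next step
        dt *= 2
    return x_small, v_small, dt

# Example: a linear spring force, one controlled step per cycle.
x, v, dt = 1.0, 0.0, 0.1
for _ in range(5):
    x, v, dt = adaptive_step(x, v, dt, force=lambda x: -10.0 * x)
    print(f"x={x:+.6f}  v={v:+.6f}  dt={dt:.4g}")

Each particle carries its own dt, so quiescent particles take large cheap steps while particles in contact are resolved finely, which is the source of the accuracy-on-first-run behaviour the abstract describes.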