984 results for order-flow
Abstract:
Clinicians demand fast and reliable results from cardiovascular biomechanics simulations to support urgent pre-surgical decisions. For years, researchers have worked on different numerical methods in an effort to earn clinicians' confidence in simulation results; methodologies that are precise but computationally expensive and time-consuming create a gap between numerical biomechanics and hospital practice, while oversimplified simulations that reduce computing time may produce unrealistic results. The main objective of this thesis is to combine the concepts of autoregulation and impedance of the circulatory system, the interaction between the blood flow and the arterial wall, and idealized three-dimensional arterial geometries, without excessive or unrealistic simplifications, in order to propose a simulation methodology that yields correct and complete results at a moderate computational cost. Pressure boundary conditions are a critical and contentious issue in numerical simulations of the cardiovascular system, in which a specific arterial site is of interest and the rest of the network is not modelled but only represented by a boundary condition: pressure histories are difficult to know in detail, and the results are very sensitive to small variations in them. The proposed methodology retains the numerical simplicity of imposing a pressure boundary condition at the outlets while incorporating two more sophisticated concepts: autoregulation, which imposes the flow demand downstream of the model over the cardiac cycle, and impedance, which represents the effect that the flow in the rest of the circulatory system exerts on the modelled arteries and defines the shape of the pressure history applied at each outlet. Autoregulation and impedance turn the pressure boundary condition into an active, dynamic condition that receives feedback from the results during the calculation and compares them with the physiological requirements; the outlet pressure histories are thus obtained iteratively as part of the solution. The method is applied to an idealized geometry of the healthy aortic arch and to an idealized Stanford type A dissection, considering the interaction of the pulsatile blood flow with the arterial walls; the effect of the surrounding tissues is also incorporated and studied in the models. The simulations continue with a fluid-structure interaction analysis of a patient-specific geometry of an elderly individual obtained from a computed tomography scan. Finally, motivated by the statistics of mortality rates in Stanford type B dissection, three models of a fenestrated dissection sac are studied and discussed; applying the developed boundary condition, an alternative hypothesis is proposed for the decrease in mortality rates observed in patients with fenestrations.
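To make the iterative character of such an outlet condition concrete, the following minimal sketch (not the thesis' actual code; the names adjust_outlet_pressure and q_target, and the under-relaxation factor, are illustrative assumptions) shows how a cycle-averaged outflow computed by the 3D/FSI solver could be fed back to rescale the imposed outlet pressure history until the autoregulation flow demand is met.

import numpy as np

def adjust_outlet_pressure(p_outlet, q_computed, q_target, relax=0.3):
    """One fixed-point update of an outlet pressure history.

    p_outlet   : array, pressure history imposed at the outlet over one cycle
    q_computed : array, outflow history returned by the 3D/FSI solver
    q_target   : float, cycle-averaged flow demanded by autoregulation
    relax      : under-relaxation factor (assumed value, for stability)
    """
    q_mean = np.mean(q_computed)
    # Raise the outlet pressure if the branch takes too much flow,
    # lower it if it takes too little.
    correction = relax * (q_mean - q_target) / max(abs(q_target), 1e-12)
    return p_outlet * (1.0 + correction)

# Usage idea: run the solver for one cardiac cycle, update each outlet pressure
# history with this rule, and repeat until every branch meets its flow demand.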
Abstract:
The three-dimensional wall-bounded open cavity may be considered a simplified geometry found in industrial applications such as landing gear or slotted flaps on an airplane. Understanding the complex three-dimensional flow structure that surrounds this particular geometry is therefore of major industrial interest. In light of remarkable earlier investigations of this kind of flow, there is enough evidence to suggest that the lateral walls have a great influence on the flow features and hence on its instability modes. Nevertheless, even though there is a large body of literature on cavity flows, most of it is based on the assumption that the flow is two-dimensional and spanwise-periodic; the flow over a realistic, wall-bounded open cavity should therefore be considered. This thesis presents an investigation of a three-dimensional wall-bounded open cavity with geometric aspect ratio 6:2:1. To this aim, three-dimensional Direct Numerical Simulation (DNS) and global linear instability analyses have been performed. Linear instability analysis reveals that the onset of the first instability in this open cavity occurs around Re_cr ≈ 1080. The three-dimensional shear-layer mode, with a complex structure, is shown to be the most unstable mode. It is noteworthy that the flow pattern of this high-frequency shear-layer mode is similar to the unstable oscillations observed in the supercritical case. DNS of the cavity flow is carried out at different Reynolds numbers, from a steady state until a nonlinear saturated state is obtained. The comparison of the time histories of kinetic energy shows a clearly dominant energetic mode which shifts between low-frequency and high-frequency oscillations. A complete picture of the flow patterns, from the subcritical cases to the supercritical case, has been put in evidence. The flow structure in the supercritical case Re = 1100 resembles typical wake-shedding instability oscillations combined with the lateral motion already present in the subcritical cases. This flow pattern is also similar to observations in experiments. In order to validate the linear instability analysis results, the topology of the composite flow fields, reconstructed by linear superposition of the three-dimensional base flow and its leading three-dimensional global eigenmodes, has been studied. The instantaneous wall streamlines of these composite flows display the distinct region of influence of each eigenmode. Attention has been focused on the leading high-frequency shear-layer mode; the composite flow fields have been fully characterized with respect to the downstream wave shedding. The three-dimensional shear-layer mode is shown to give rise to a typical wake-shedding instability with lateral motions occurring downstream, in good agreement with the experimental results. Moreover, the spanwise-periodic open cavity with the same length-to-depth ratio has also been studied; its most unstable linear mode differs from that of the real three-dimensional cavity flow because of the existence of the side walls. The structural sensitivity of the unstable global mode is analyzed in a flow-control context. Adjoint-based sensitivity analysis has been employed to localize the receptivity region, where the flow is most sensitive to momentum forcing and mass injection. Because of the non-normality of the linearized Navier–Stokes equations, the direct and adjoint fields exhibit a large spatial separation. The region of strongest sensitivity is located at the upstream lip of the three-dimensional cavity; this numerical finding is in agreement with experimental observations.
Finally, a prototype passive flow-control strategy is applied.
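For reference, the global (TriGlobal) linear instability problem referred to above is commonly posed as follows; this is the standard formulation, not a verbatim transcription of the thesis' equations. The flow is decomposed into a three-dimensional base flow plus a small-amplitude modal perturbation,

$$\mathbf{q}(x,y,z,t)=\bar{\mathbf{q}}(x,y,z)+\varepsilon\,\hat{\mathbf{q}}(x,y,z)\,e^{\lambda t}+\mathrm{c.c.},\qquad \varepsilon\ll 1,$$

and substitution into the Navier–Stokes equations linearized about $\bar{\mathbf{q}}$ yields the generalized eigenvalue problem $\mathcal{A}(\bar{\mathbf{q}};Re)\,\hat{\mathbf{q}}=\lambda\,\mathcal{B}\,\hat{\mathbf{q}}$, where $\mathrm{Re}(\lambda)>0$ signals instability and $\mathrm{Im}(\lambda)$ gives the mode frequency; the adjoint eigenvalue problem provides the receptivity and structural-sensitivity information used above.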
Abstract:
In this paper three p-adaptation strategies based on the minimization of the truncation error are presented for high-order discontinuous Galerkin methods. The truncation error is approximated by means of a τ-estimation procedure and enables the identification of mesh regions that require adaptation. Three adaptation strategies are developed and termed a posteriori, quasi-a priori and quasi-a priori corrected. All strategies require fine solutions, which are obtained by enriching the polynomial order, but while the former needs time-converged solutions, the last two rely on non-converged solutions, which leads to faster computations. In addition, the high-order method permits the spatial decoupling of the estimated errors and enables anisotropic p-adaptation. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier–Stokes equations. It is shown that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain a speedup of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.
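For readers unfamiliar with τ-estimation, the quantity being minimized can be written, in the usual form for high-order discretizations (a standard formulation consistent with the description above, not necessarily the paper's exact notation), as

$$\tau^{P}\;\approx\;\mathcal{R}^{\hat P}\!\left(I_{P}^{\hat P}\,u^{P}\right),\qquad \hat P>P,$$

where $u^{P}$ is the solution of the order-$P$ discretization, $I_{P}^{\hat P}$ interpolates it onto the enriched (order-$\hat P$) space, and $\mathcal{R}^{\hat P}$ is the spatial residual of the enriched discontinuous Galerkin discretization; in the a posteriori strategy $u^{P}$ is time-converged, whereas the quasi-a priori variants evaluate the estimate on non-converged solutions, with an optional correction term.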
Abstract:
A method based on the iterative application of high-order singular value decomposition (HOSVD) is developed for the reconstruction of incomplete experimental databases of more than two dimensions. The method is inspired by the seminal gappy-reconstruction method for two-dimensional databases introduced by Everson and Sirovich (1995) and improved by Beckers and Rixen (2003) and, simultaneously, by Venturi and Karniadakis (2004). In addition, the method is adapted to treat both the noise characteristic of experimental databases and structured databases whose information does not fill a perfect hyperrectangle. The method is calibrated and illustrated using a three-dimensional toy database obtained by discretizing a transcendental function. Its performance, and that of its variants, is then studied in detail on three three-dimensional aerodynamic databases containing the pressure distribution over a wing: one generated with a semi-analytical method, used to study different spatial discretizations, and two resulting from computational fluid dynamics models. Finally, the method is applied to an experimental database of more than three dimensions containing force measurements of a Prandtl (box-wing) configuration obtained from a wind-tunnel test campaign, in which a wide space of geometric parameters of the configuration was explored and the resulting information is sparse.
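The core of the gappy reconstruction idea that this thesis extends from two dimensions (Everson & Sirovich, 1995) to higher-order SVD can be sketched as follows; the function name, initial guess and stopping criterion are illustrative assumptions, and the multidimensional version applies the same fill-and-truncate iteration to the tensor unfoldings instead of a single matrix.

import numpy as np

def gappy_svd_fill(data, mask, rank, n_iter=200, tol=1e-8):
    """Fill missing entries of a 2D database by iterated truncated SVD.

    data : 2D array (values at masked-out positions are ignored)
    mask : boolean array, True where the datum is known
    rank : number of singular values retained in the reconstruction
    """
    filled = np.where(mask, data, data[mask].mean())          # initial guess
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]    # truncated SVD
        updated = np.where(mask, data, low_rank)              # keep known data
        if np.linalg.norm(updated - filled) <= tol * np.linalg.norm(filled):
            return updated
        filled = updated
    return filled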
Abstract:
Fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and the less efficient hardware utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization techniques. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have produced very accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) methodology in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that exercise each of them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values. A known drawback of interval-based techniques is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources of each group separately, and finally combines the results. In this way the number of active noise sources is kept under control at all times and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, so as to keep the results as accurate as possible. This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two new techniques that reduce the execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization. Second, the incremental method builds on the fact that, although a given confidence level must be guaranteed for the final results of the search, more relaxed levels, and therefore considerably fewer samples per simulation, can be used in the initial stages of the search, when we are still far from the optimized solution. With these two approaches we show that the execution time of classical greedy search algorithms can be reduced by factors of up to ×240 for small and medium-sized problems. Finally, this work introduces HOPLITE, an automated, flexible and modular quantization framework that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new methodologies for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions can be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
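As an illustration of the incremental idea described above (progressively tightening the Monte-Carlo confidence, and hence the number of samples, as the greedy search closes in on a solution), consider the following sketch; the function names, the error metric and the sample schedule are assumptions, not HOPLITE's API.

def greedy_wordlength_search(initial_wl, max_error, simulate_error,
                             sample_schedule=(100, 1000, 10000)):
    """Greedy word-length reduction with a growing Monte-Carlo sample budget.

    initial_wl     : dict signal-name -> generous initial word-length (bits)
    max_error      : maximum admissible quantization error
    simulate_error : callable (wordlengths, n_samples) -> estimated error
    """
    wl = dict(initial_wl)
    for n_samples in sample_schedule:    # relaxed confidence first, then tighter
        improved = True
        while improved:
            improved = False
            for name in wl:
                if wl[name] <= 1:
                    continue
                wl[name] -= 1            # try shaving one bit off this signal
                if simulate_error(wl, n_samples) <= max_error:
                    improved = True      # cheaper assignment, still accurate enough
                else:
                    wl[name] += 1        # revert if the error budget is exceeded
    return wl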
Abstract:
A high resolution, second-order central difference method for incompressible flows is presented. The method is based on a recent second-order extension of the classic Lax–Friedrichs scheme introduced for hyperbolic conservation laws (Nessyahu H. & Tadmor E. (1990) J. Comp. Physics. 87, 408-463; Jiang G.-S. & Tadmor E. (1996) UCLA CAM Report 96-36, SIAM J. Sci. Comput., in press) and augmented by a new discrete Hodge projection. The projection is exact, yet the discrete Laplacian operator retains a compact stencil. The scheme is fast, easy to implement, and readily generalizable. Its performance was tested on the standard periodic double shear-layer problem; no spurious vorticity patterns appear when the flow is underresolved. A short discussion of numerical boundary conditions is also given, along with a numerical example.
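The discrete Hodge projection mentioned above follows the standard pattern (written here in its generic continuous form; the paper's contribution is a discrete version that is exact while keeping a compact Laplacian stencil): given a provisional velocity field $\mathbf{w}$ produced by the central scheme, one solves

$$\nabla^{2}\phi=\nabla\cdot\mathbf{w},\qquad \mathbf{u}=\mathbf{w}-\nabla\phi,$$

so that the projected velocity satisfies $\nabla\cdot\mathbf{u}=0$ exactly at the discrete level.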
Abstract:
This paper addresses the problem of predicting the critical parameters that characterize thermal runaway in a tubular reactor with wall cooling, introducing a new view of reactions with n-th order kinetics. The paper describes the trajectories of the system in the temperature–(concentration)^n plane and deduces the conditions for thermal runaway risk.
Abstract:
"TR63-217F."
Abstract:
The influence of three-dimensional effects on isochromatic birefringence is evaluated for planar flows by means of numerical simulation. Two fluid models are investigated in channel and abrupt-contraction geometries. In practice, the flows are confined by viewing windows, which alter the stresses along the optical path; the observed optical properties therefore differ from their counterparts in an ideal two-dimensional flow. To investigate the influence of these effects, the stress-optical rule and the differential propagation Mueller matrix are used. The material parameters are selected so that a retardation of multiple orders is achieved, as is typical for highly birefringent melts. Errors due to three-dimensional effects are mainly found on the symmetry plane and increase significantly with the flow rate. Increasing the geometric aspect ratio improves the accuracy, provided that the error on the retardation is less than one order. (C) 2004 Elsevier B.V. All rights reserved.
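For context, the stress-optical rule used above relates the local birefringence to the deviatoric stress, and the measured retardation accumulates this birefringence along the optical path; in standard form (not specific to this paper),

$$\Delta n = C\,\Delta\sigma,\qquad \delta=\frac{2\pi}{\lambda_{0}}\int_{0}^{d}\Delta n\,\mathrm{d}z,$$

where $C$ is the stress-optical coefficient, $\Delta\sigma$ the principal stress difference in the plane normal to the light path, $\lambda_{0}$ the wavelength and $d$ the depth of the flow cell; three-dimensional effects enter through the variation of $\Delta n$ (and of the principal directions) along $z$, which is why the Mueller-matrix description is needed when the retardation spans several orders.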
Abstract:
An existing capillarity correction for free surface groundwater flow as modelled by the Boussinesq equation is re-investigated. Existing solutions, based on the shallow flow expansion, have considered only the zeroth-order approximation. Here, a second-order capillarity correction to tide-induced watertable fluctuations in a coastal aquifer adjacent to a sloping beach is derived. A new definition of the capillarity correction is proposed for small capillary fringes, and a simplified solution is derived. Comparisons of the two models show that the simplified model can be used in most cases. The significant effects of higher-order capillarity corrections on tidal fluctuations in a sloping beach are also demonstrated. (c) 2004 Elsevier Ltd. All rights reserved.
Abstract:
We explore both the rheology and the complex flow behavior of monodisperse polymer melts. Adequate quantities of monodisperse polymer were synthesized so that both the material's rheology and its microprocessing behavior could be established. In parallel, we employ a molecular theory for the polymer rheology that is suitable for comparison with experimental rheometric data and for numerical simulation of microprocessing flows. The model is capable of matching both shear and extensional data with minimal parameter fitting. Experimental data for the processing behavior of monodisperse polymers are presented for the first time as flow-birefringence and pressure-difference data obtained using a Multipass Rheometer with an 11:1 constriction entry and exit flow. Matching of the experimental processing data was obtained using the constitutive equation with the Lagrangian numerical solver FLOWSOLVE. The results show the direct coupling between molecular constitutive response and macroscopic processing behavior, and differentiate flow effects that arise separately from orientation and from stretch. (c) 2005 The Society of Rheology.
Abstract:
The effect of acceleration skewness on sheet-flow sediment transport rates $\bar{q}_s$ is analysed using new data which have acceleration skewness and superimposed currents but no boundary-layer streaming. Sediment-mobilizing forces due to drag and to acceleration (similar to pressure gradients) are weighted by the cosine and the sine, respectively, of an angle $\varphi_{\tau}$. Thus $\varphi_{\tau}=0$ corresponds to drag-dominated sediment transport, $\bar{q}_s \sim |u_{\infty}|u_{\infty}$, while $\varphi_{\tau}=90^{\circ}$ corresponds to total domination by the pressure gradients, $\bar{q}_s \sim \mathrm{d}u_{\infty}/\mathrm{d}t$. Using the optimal angle $\varphi_{\tau}=51^{\circ}$ based on those data, good agreement is subsequently found with data that have a strong influence from boundary-layer streaming. Good agreement is also maintained with the large body of U-tube data simulating sine waves with superimposed currents and second-order Stokes waves, all of which have zero acceleration skewness. The recommended model can be applied to irregular waves of arbitrary shape as long as the assumption of negligible time lag between forcing and sediment transport rate is valid. With respect to irregular waves, the model is much easier to apply than the competing wave-by-wave models. Issues for further model development are identified through a comprehensive data review.
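One common way to write the weighting described above (a schematic rendering, not necessarily the paper's exact formulation; friction factors and normalization are omitted) is through an effective near-bed velocity,

$$u_{\mathrm{eff}}(t)=\cos\varphi_{\tau}\,u_{\infty}(t)+\frac{\sin\varphi_{\tau}}{\omega}\,\frac{\mathrm{d}u_{\infty}}{\mathrm{d}t},\qquad \bar{q}_{s}\sim\overline{\lvert u_{\mathrm{eff}}\rvert\,u_{\mathrm{eff}}},$$

with $\omega$ a representative angular frequency of the wave forcing, so that $\varphi_{\tau}=0$ recovers the drag-dominated limit and $\varphi_{\tau}=90^{\circ}$ the pressure-gradient-dominated limit; the value $\varphi_{\tau}=51^{\circ}$ quoted above sits between the two.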
Abstract:
Stirred mills are increasingly being used for fine and ultra-fine grinding, but this technology is still poorly understood when used in the mineral-processing context, which makes process optimisation of such devices problematic. 3D DEM simulations of the flow of grinding media in pilot-scale tower mills and pin mills are carried out in order to investigate the relative performance of these stirred mills. In the first part of this paper, media flow patterns and energy absorption rates and distributions were analysed to provide a good understanding of the media flow and the collisional environment in these mills. In this second part we analyse steady-state coherent flow structures, liner stress, and wear by impact and abrasion. We also examine mixing and transport efficiency. Together these provide a comprehensive understanding of all the key processes operating in these mills and a clear picture of the relative performance issues. (C) 2006 Elsevier Ltd. All rights reserved.
Abstract:
Soil absorption systems (SAS) are commonly used to treat and disperse septic tank effluent (STE). SAS can fail hydraulically as a result of the low-permeability biomat zone that develops on the infiltrative surface. The objectives of this experiment were to compare the hydraulic properties of biomats grown in soils of different textures, to investigate the long-term acceptance rates (LTAR) resulting from prolonged application of STE, and to assess whether soil type is of major importance in determining LTAR. STE was applied to repacked sand, Oxisol and Vertisol soil columns over a period of 16 months, at equivalent hydraulic loading rates of 50, 35 and 8 L/m²/d, respectively. Infiltration rates, soil matric potentials and biomat hydraulic properties were measured either directly from the soil columns or calculated using established soil-physics theory. Biomats 1 to 2 cm thick developed in all soil columns, with hydraulic resistances of 27 to 39 d. These biomats reduced a four-order-of-magnitude variation in saturated hydraulic conductivity (K_s) between the soils to a one-order-of-magnitude variation in LTAR. A relationship between biomat resistance and organic loading rate was observed in all soils. Saturated hydraulic conductivity influenced the rate and extent of biomat development. However, once the biomat was established, the LTAR was governed by the resistance of the biomat and by the sub-biomat unsaturated flow regime induced by the biomat. The results show that whilst the initial soil K_s is likely to be important in the establishment of the biomat zone in a trench, LTAR is determined by the biomat resistance and the unsaturated soil hydraulic conductivity, not by the K_s of the soil. The results call into question the commonly used approach of basing the LTAR, and ultimately the trench length in SAS, on the initial K_s of soils. (c) 2006 Elsevier Ltd. All rights reserved.
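The biomat resistances quoted above (27 to 39 d) are consistent with a standard Darcy-type description of flow through a thin clogging layer, sketched here under the usual assumptions (this is textbook soil physics, not the paper's derivation):

$$R_{b}=\frac{L_{b}}{K_{b}},\qquad q=\frac{h_{0}+L_{b}-\psi_{b}}{R_{b}},$$

where $L_{b}$ and $K_{b}$ are the biomat thickness and saturated hydraulic conductivity, $h_{0}$ the ponded head on the infiltrative surface and $\psi_{b}$ the (negative) pressure head at the base of the biomat; with $R_{b}$ expressed in days, $q$ follows directly in length per day, and the unsaturated conductivity of the underlying soil at $\psi_{b}$ then controls whether this flux can be transmitted, which is why LTAR becomes largely independent of the soils' saturated $K_{s}$.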
Abstract:
A complete workflow specification requires careful integration of many different process characteristics. Decisions must be made as to the definition of individual activities, their scope, the order of execution that maintains the overall business-process logic, the rules governing the discipline of work-list scheduling to performers, the identification of time constraints, and more. The goal of this paper is to address an important issue in workflow modelling and specification: data flow, its modelling, specification and validation. Researchers have neglected this dimension of process analysis for some time, focussing mainly on structural considerations with limited verification checks. In this paper, we identify and justify the importance of data modelling in overall workflow specification and verification. We illustrate and define several potential data-flow problems that, if not detected prior to workflow deployment, may prevent the process from executing correctly, cause it to execute on inconsistent data, or even lead to process suspension; a toy example of one such anomaly is sketched below. A discussion of the essential requirements of the workflow data model needed to support data validation is also given.
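A minimal, purely illustrative sketch (not taken from the paper) of one such data-flow anomaly, "missing data", in which an activity reads a workflow variable that is only written on some execution paths:

def approve(order):
    return f"APPROVAL-{order['id']}"

def run_workflow(order):
    data = {"amount": order["amount"]}
    if data["amount"] > 1000:
        data["approval_id"] = approve(order)   # written only on this branch
    # The archiving activity reads 'approval_id' unconditionally; for small
    # orders the variable was never produced, so the process fails at run time.
    return f"archived {data['approval_id']}"

# run_workflow({"id": 7, "amount": 1500})  -> "archived APPROVAL-7"
# run_workflow({"id": 8, "amount": 100})   -> KeyError: 'approval_id'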