241 results for VISCOUS FLOW
Abstract:
It is well accepted that over 50% of cerebral ischemic events result from the rupture of vulnerable carotid atheroma and subsequent thrombosis. Such strokes are potentially preventable by carotid interventions. Selection of patients for intervention is currently based on the severity of carotid luminal stenosis, yet it is also widely accepted that luminal stenosis alone may not be an adequate predictor of risk. To evaluate the effects of the degree of luminal stenosis and plaque morphology on plaque stability, we used a coupled nonlinear time-dependent model with flow-plaque interaction simulation to perform flow and stress/strain analysis for a stenotic artery with a plaque. The Navier-Stokes equations in the Arbitrary Lagrangian-Eulerian (ALE) formulation were used as the governing equations for the fluid. The Ogden strain energy function was used for both the fibrous cap and the lipid pool. The principal stresses in the plaque and the flow conditions were calculated for every case while varying the fibrous cap thickness from 0.1 to 2 mm and the degree of luminal stenosis from 10% to 90%. Severe stenosis led to high flow velocities and high shear stresses, but a low or even negative pressure at the throat of the stenosis. A higher degree of stenosis and a thinner fibrous cap led to larger plaque stresses, and a 50% decrease in fibrous cap thickness resulted in a 200% increase in maximum stress. This model suggests that fibrous cap thickness is critically related to plaque vulnerability and that, even in the presence of only moderate stenosis, it may play an important role in the future risk stratification of patients when identified in vivo using high-resolution MR imaging.
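As a minimal illustration of the Ogden strain energy function named in this abstract, the sketch below evaluates the incompressible Ogden form from principal stretches. The material parameters are placeholders for illustration only, not the values fitted to the fibrous cap or lipid pool in the study.

```python
def ogden_energy(stretches, mu, alpha):
    """Incompressible Ogden strain-energy density:
    W = sum_i (mu_i / alpha_i) * (l1**a_i + l2**a_i + l3**a_i - 3),
    where (l1, l2, l3) are the principal stretches."""
    l1, l2, l3 = stretches
    return sum((m / a) * (l1**a + l2**a + l3**a - 3.0)
               for m, a in zip(mu, alpha))

# Illustrative one-term parameters (hypothetical, in MPa):
# the undeformed state (all stretches 1) stores no energy,
# while an isochoric stretch (l1*l2*l3 = 1) stores positive energy.
W0 = ogden_energy((1.0, 1.0, 1.0), mu=[0.02], alpha=[10.0])
W1 = ogden_energy((1.2, 1.0, 1.0 / 1.2), mu=[0.02], alpha=[10.0])
```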
Abstract:
Background and Purpose Acute cerebral ischemic events are associated with rupture of vulnerable carotid atheroma and subsequent thrombosis. Factors such as luminal stenosis and fibrous cap thickness have been thought to be important risk factors for plaque rupture. We used a flow-structure interaction model to simulate the interaction between blood flow and atheromatous plaque to evaluate the effect of the degree of luminal stenosis and fibrous cap thickness on plaque vulnerability. Methods A coupled nonlinear time-dependent model with a flow-plaque interaction simulation was used to perform flow and stress/strain analysis in a stenotic carotid artery model. The stress distribution within the plaque and the flow conditions within the vessel were calculated for every case when varying the fibrous cap thickness from 0.1 to 2 mm and the degree of luminal stenosis from 10% to 95%. A rupture stress of 300 kPa was chosen to indicate a high risk of plaque rupture. A 1-sample t test was used to compare plaque stresses with the rupture stress. Results High stress concentrations were found in the plaques in arteries with >70% degree of stenosis. Plaque stresses in arteries with 30% to 70% stenosis increased exponentially as fibrous cap thickness decreased. A decrease of fibrous cap thickness from 0.4 to 0.2 mm resulted in an increase of plaque stress from 141 to 409 kPa in a 40% degree stenotic artery. Conclusions There is an increase in plaque stress in arteries with a thin fibrous cap. The presence of a moderate carotid stenosis (30% to 70%) with a thin fibrous cap indicates a high risk for plaque rupture. Patients in the future may be risk stratified by measuring both fibrous cap thickness and luminal stenosis.
Abstract:
Large-scale integration of solar photovoltaic (PV) generation in distribution networks has resulted in over-voltage problems. Several control techniques have been developed to address the over-voltage problem using Deterministic Load Flow (DLF). However, the intermittent characteristics of PV generation require Probabilistic Load Flow (PLF) to introduce into the analysis the variability that DLF ignores. Traditional PLF techniques are not well suited to distribution systems and suffer from several drawbacks, such as computational burden (Monte Carlo, conventional convolution), accuracy that is sensitive to system complexity (point estimation method), the need for linearization (multi-linear simulation) and convergence problems (Gram-Charlier expansion, Cornish-Fisher expansion). In this research, Latin Hypercube Sampling with Cholesky Decomposition (LHS-CD) is used to quantify the over-voltage issues, with and without a voltage control algorithm, in a distribution network with active generation. The LHS technique is verified on a test network and on a real system from an Australian distribution network service provider. The accuracy and computational burden of the simulated results are also compared with Monte Carlo simulations.
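A minimal sketch of the LHS-CD idea referenced above: draw one stratified sample per equal-probability bin in each dimension, map to standard-normal marginals, then impose a target correlation through the Cholesky factor of the correlation matrix. This is a generic textbook construction, not the exact implementation used in the study.

```python
import numpy as np
from statistics import NormalDist

_inv = NormalDist().inv_cdf  # standard-normal quantile function

def lhs_correlated(n_samples, corr, seed=None):
    """Latin Hypercube Sampling of standard normals, with a target
    correlation matrix imposed via its Cholesky factor (LHS-CD sketch)."""
    rng = np.random.default_rng(seed)
    d = corr.shape[0]
    # One stratified uniform draw per equal-probability bin, per dimension.
    strata = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1)
    u = (strata + rng.random((d, n_samples))) / n_samples
    z = np.vectorize(_inv)(u)      # stratified standard-normal marginals
    L = np.linalg.cholesky(corr)   # corr = L @ L.T
    return L @ z                   # correlated samples, shape (d, n_samples)
```

With a 2-by-2 correlation matrix whose off-diagonal entry is 0.8, the empirical correlation of a few thousand samples lands close to 0.8.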
Abstract:
This research aims to develop an Integrated Lean Six Sigma approach to investigate and resolve the patient flow problems in hospital emergency departments. It was proposed that the voice of the customer and the voice of the process should be considered simultaneously to investigate the current process of patient flow. Statistical analysis, visual process mapping with A3 problem solving sheet, and cause and effect diagrams have been used to identify the major patient flow issues. This research found that engaged frontline workers, long-term leadership obligation, an understanding of patients' requirements and the implementation of a systematic integration of lean strategies could continuously improve patient flow, health care service and growth in the emergency departments.
Abstract:
Based on unique news data relating to gold and crude oil, we investigate how news volume and sentiment, shocks in trading activity, market depth and trader positions unrelated to information flow covary with realized volatility. Positive shocks to the rate of news arrival and negative shocks to news sentiment exhibit the largest effects. After controlling for the level of news flow and cross-correlations, net trader positions play only a minor role. These findings are at odds with those of [Wang (2002a). The Journal of Futures Markets, 22, 427–450; Wang (2002b). The Financial Review, 37, 295–316], but are consistent with the previous literature, which does not find a strong link between volatility and trader positions.
Abstract:
- Provided a practical variable-stepsize implementation of the exponential Euler method (EEM).
- Introduced a new second-order variant of the scheme that enables the local error to be estimated at the cost of a single additional function evaluation.
- New EEM implementation outperformed sophisticated implementations of the backward differentiation formulae (BDF) of order 2 and was competitive with BDF of order 5 for moderate to high tolerances.
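As a minimal sketch of the exponential Euler method named above (fixed-step and scalar, so without the paper's variable-stepsize machinery or error estimator): each step applies y_{n+1} = y_n + h * phi_1(h*J) * f(y_n), with phi_1(z) = (exp(z) - 1)/z and J the local Jacobian.

```python
import math

def phi1(z):
    """phi_1(z) = (exp(z) - 1) / z, with the z -> 0 limit handled."""
    if abs(z) < 1e-8:
        return 1.0 + z / 2.0
    return math.expm1(z) / z

def exponential_euler(f, dfdy, y0, t0, t1, n_steps):
    """Scalar exponential Euler with a fixed step:
    y_{n+1} = y_n + h * phi_1(h * J) * f(y_n), where J = f'(y_n)."""
    h = (t1 - t0) / n_steps
    y = y0
    for _ in range(n_steps):
        J = dfdy(y)
        y = y + h * phi1(h * J) * f(y)
    return y
```

For a purely linear problem f(y) = lambda*y the scheme is exact up to roundoff, which makes a convenient sanity check.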
Abstract:
Fan-forced injection of phosphine gas fumigant into stored grain is a common method of treating insect infestation. For low injection velocities, the transport of fumigant can be modelled as Darcy flow in a porous medium, where the gas pressure satisfies Laplace's equation. Using this approach, a closed-form series solution is derived for the pressure, velocity and streamlines in a cylindrically stored grain bed with either a circular or annular inlet, from which traverse times are numerically computed. A leading-order closed-form expression for the traverse time is also obtained and found to be reasonable for inlet configurations close to the central axis of the grain store. Results are interpreted for the case of a representative 6 m high farm wheat store, where the time to advect the phosphine to almost the entire grain bed is found to be approximately one hour.
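The Darcy-flow modelling assumption above can be sketched in two lines: the superficial velocity follows Darcy's law, and an advection time through the bed follows from the interstitial (pore) speed. The numbers below are illustrative placeholders, not the parameters or results of the study.

```python
def darcy_velocity(k, mu, dp_dx):
    """Darcy's law: superficial gas velocity v = -(k / mu) * dp/dx,
    with permeability k (m^2), viscosity mu (Pa*s), gradient (Pa/m)."""
    return -(k / mu) * dp_dx

def traverse_time(height, v, porosity):
    """Time to advect through a bed of given height; the interstitial
    speed is the superficial velocity divided by the porosity."""
    return height / (v / porosity)

# Hypothetical values for illustration: a 6 m bed, a uniform
# pressure gradient driving upward flow, porosity 0.4.
v = darcy_velocity(k=1e-9, mu=1.8e-5, dp_dx=-50.0)
t = traverse_time(height=6.0, v=v, porosity=0.4)
```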
Abstract:
The phosphine distribution in a cylindrical silo containing grain is predicted. A three-dimensional mathematical model, which accounts for multicomponent gas-phase transport and the sorption of phosphine into the grain kernel, is developed. In addition, a simple model is presented to describe the death of insects within the grain as a function of their exposure to phosphine gas. The proposed model is solved using the commercially available computational fluid dynamics (CFD) software FLUENT, together with our own C code to customize the solver in order to incorporate the models for sorption and insect extinction. Two types of fumigation delivery are studied, namely, fan-forced from the base of the silo and tablet from the top of the silo. An analysis of the predicted phosphine distribution shows that, during fan-forced fumigation, the position of the leaky area is very important to the development of the gas flow field and the phosphine distribution in the silo. If the leak is in the lower section of the silo, insects near the top of the silo may not be eradicated. However, the position of a leak does not affect phosphine distribution during tablet fumigation. For such fumigation in a typical silo configuration, phosphine concentrations remain low near the base of the silo. Furthermore, we find that half-life pressure test readings are not an indicator of phosphine distribution during tablet fumigation.
Abstract:
In this work we numerically model isothermal turbulent swirling flow in a cylindrical burner. Three versions of the RNG k-epsilon model are assessed against the performance of the standard k-epsilon model. The sensitivity of the numerical predictions to grid refinement, to differing convective differencing schemes and to the choice of (unknown) inlet dissipation rate was closely scrutinised to ensure accuracy. Particular attention is paid to modelling the inlet conditions to within the range of uncertainty of the experimental data, as the model predictions proved significantly sensitive to relatively small changes in upstream flow conditions. We also examine the characteristics of the swirl-induced recirculation zone (IRZ) predicted by the models over an extended range of inlet conditions. Our main findings are:
- (i) the standard k-epsilon model performed best compared with experiment;
- (ii) no one inlet specification can simultaneously optimize the performance of the models considered;
- (iii) the RNG models predict both single-cell and double-cell IRZ characteristics, the latter both with and without additional internal stagnation points.
The first finding indicates that the examined RNG modifications to the standard k-epsilon model do not result in an improved eddy-viscosity-based model for the prediction of swirl flows. The second finding suggests that tuning established models a priori for optimal performance in swirl flows is not straightforward. The third finding indicates that the RNG-based models exhibit a greater variety of structural behaviour, despite being of the same level of complexity as the standard k-epsilon model. The plausibility of the predicted IRZ features is discussed in terms of known vortex breakdown phenomena.
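Both the standard and RNG k-epsilon models referenced above are eddy-viscosity models built on the same closure, which a one-line sketch makes concrete. The sample inputs are arbitrary illustrative values.

```python
def eddy_viscosity(rho, k, eps, c_mu=0.09):
    """k-epsilon family eddy viscosity: mu_t = rho * C_mu * k**2 / eps.
    C_mu = 0.09 in the standard model; the RNG variant uses ~0.0845."""
    return rho * c_mu * k**2 / eps

# Illustrative inputs: density 1.2 kg/m^3, k = 0.5 m^2/s^2, eps = 10 m^2/s^3.
mu_t = eddy_viscosity(1.2, 0.5, 10.0)
```

The different C_mu (and the additional strain-dependent term in the RNG epsilon equation, not shown here) are the main distinctions between the variants compared in the study.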
Abstract:
A computational model for isothermal axisymmetric turbulent flow in a quarl burner is set up using the CFD package FLUENT, and numerical solutions obtained from the model are compared with available experimental data. A standard k-epsilon model and two versions of the RNG k-epsilon model are used to model the turbulence. One of the aims of the computational study is to investigate whether the RNG-based k-epsilon turbulence models are capable of yielding improved flow predictions compared with the standard k-epsilon turbulence model. A difficulty is that the flow considered here features a confined vortex breakdown, which can be highly sensitive to flow behaviour both upstream and downstream of the breakdown zone. Nevertheless, the relatively simple confining geometry allows us to undertake a systematic study so that both grid-independent and domain-independent results can be reported. The systematic study includes a detailed investigation of the effects of upstream and downstream conditions on the predictions, in addition to grid refinement and other tests to ensure that numerical error is not significant. Another important aim is to determine to what extent the turbulence model predictions can provide new insights into the physics of confined vortex breakdown flows. To this end, the computations are discussed in detail with reference to known vortex breakdown phenomena and existing theories. A major conclusion is that one of the RNG k-epsilon models investigated here is able to correctly capture the complex forward flow region inside the recirculating breakdown zone. This apparently pathological result is in stark contrast to the findings of previous studies, most of which have concluded that either algebraic or differential Reynolds stress modelling is needed to correctly predict the observed flow features.
Arguments are given as to why an isotropic eddy-viscosity turbulence model may well be able to capture the complex flow structure within the recirculating zone for this flow setup. With regard to the flow physics, a major finding is that the results obtained here are more consistent with the view that confined vortex breakdown is a type of axisymmetric boundary layer separation, rather than a manifestation of a subcritical flow state.
Abstract:
We report here a CFD model of highly swirling flow in a quarl burner using three versions of the k-epsilon model. Results for the recirculating zone, the bounding shear layer and the downstream flow are presented. We discuss, with suitable qualifications, how the model predictions can inform our understanding of this class of flows.
Abstract:
Motivated by a problem from fluid mechanics, we consider a generalization of the standard curve shortening flow problem for a closed embedded plane curve such that the area enclosed by the curve is forced to decrease at a prescribed rate. Using formal asymptotic and numerical techniques, we derive possible extinction shapes as the curve contracts to a point, dependent on the rate of decreasing area; we find there is a wider class of extinction shapes than for standard curve shortening, for which initially simple closed curves are always asymptotically circular. We also provide numerical evidence that self-intersection is possible for non-convex initial conditions, distinguishing between pinch-off and coalescence of the curve interior.
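For reference against the generalization described above, the baseline behaviour of standard curve shortening flow is captured by its exact shrinking-circle solution, under which the enclosed area decreases at the constant rate 2*pi. A minimal sketch:

```python
import math

def circle_radius(R0, t):
    """Exact shrinking-circle solution of standard curve shortening flow,
    dR/dt = -1/R, i.e. R(t) = sqrt(R0**2 - 2*t); valid for t < R0**2 / 2."""
    return math.sqrt(R0 * R0 - 2.0 * t)

def enclosed_area(R):
    """Area enclosed by a circle of radius R."""
    return math.pi * R * R

# Over any interval, the area lost equals 2*pi times the elapsed time.
A0 = enclosed_area(circle_radius(2.0, 0.0))
A1 = enclosed_area(circle_radius(2.0, 0.5))
```

The generalized flow in the abstract instead prescribes dA/dt directly, which is what opens up the wider class of extinction shapes.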
Abstract:
The idea of extracting knowledge in process mining is a descendant of data mining. Both mining disciplines emphasise data flow and the relations among elements in the data. Unfortunately, challenges have been encountered when working with data flow and relations. One challenge is that the representation of the data flow between a pair of elements or tasks is insufficiently expressive, as it considers only one-to-one data flow relations. In this paper, we discuss how the effectiveness of knowledge representation can be extended in both disciplines. To this end, we introduce a new representation of the data flow and dependency formulation using a flow graph. The flow graph addresses the inability of existing representations to express other relation types, such as many-to-one and one-to-many relations. As an experiment, a new evaluation framework is applied to the Teleclaim process in order to show how this method can provide more precise results when compared with other representations.
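To make the one-to-many / many-to-one point concrete, a common way to aggregate such relations is a directly-follows graph over event traces; this is a generic sketch, not the paper's exact flow-graph formulation, and the task names are hypothetical.

```python
from collections import defaultdict

def build_flow_graph(traces):
    """Directly-follows flow graph over event traces. A node may fan out
    to several successors (one-to-many) or collect several predecessors
    (many-to-one), which a single one-to-one relation cannot express."""
    graph = defaultdict(set)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            graph[a].add(b)
    return dict(graph)

# Hypothetical claim-handling traces: "register" fans out to two tasks
# (one-to-many) and "pay" collects from both of them (many-to-one).
traces = [["register", "check", "pay"],
          ["register", "review", "pay"]]
g = build_flow_graph(traces)
```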
Abstract:
This paper explores the obstacles associated with designing video game levels for the purpose of objectively measuring flow. We sought to create three video game levels capable of inducing a flow state, an overload state (low-flow), and a boredom state (low-flow). A pilot study, in which participants self-reported levels of flow after playing all three game levels, was undertaken. Unexpected results point to the challenges of operationalising flow in video game research, obstacles in experimental design for invoking flow and low-flow, concerns about flow as a construct for measuring video game enjoyment, the applicability of self-report flow scales, and the experience of flow in video game play despite substantial challenge-skill differences.
Abstract:
This study aims to further research in the field of video games by examining flow during individual and co-operative gameplay. Using a puzzle game called Droppit, we examined differences in flow based on two modes of play: single player vs. co-operative gameplay. Co-operative gameplay was found to induce greater flow in participants than single player gameplay. Additionally, co-operative gameplay participants had increased feelings of Challenge-Skill Balance, Unambiguous Feedback, Transformation of Time and Autotelic Experience. Our findings suggest that co-operative gameplay, involving puzzle-based problems, may result in increased flow during video game play.