247 results for Discounted Cash Flow


Relevance: 20.00%

Publisher:

Abstract:

The aim of this study is to investigate the blood flow pattern in a carotid bifurcation with a high degree of luminal stenosis, combining in vivo magnetic resonance imaging (MRI) and computational fluid dynamics (CFD). A newly developed two-equation transitional model was employed to evaluate the wall shear stress (WSS) distribution and the pressure drop across the stenosis, both of which are closely related to plaque vulnerability. A patient with an 80% left carotid stenosis was imaged using high-resolution MRI, from which a patient-specific geometry was reconstructed and flow boundary conditions were acquired for CFD simulation. A transitional model was implemented to investigate the flow velocity and WSS distribution in the patient-specific model. The transitional flow model predicted a peak time-averaged WSS of approximately 73 Pa, with the regions of high WSS occurring at the throat of the stenosis. High oscillatory shear index values, up to 0.50, were present in a helical flow pattern originating from the outer wall of the internal carotid artery immediately downstream of the throat. This study shows the potential suitability of a transitional turbulent flow model for capturing the flow phenomena in severely stenosed carotid arteries using patient-specific MRI data, and provides the basis for further investigation of the links between haemodynamic variables and plaque vulnerability. It may be useful in the future for risk assessment of patients with carotid disease.


It is well accepted that over 50% of cerebral ischemic events result from rupture of vulnerable carotid atheroma and subsequent thrombosis. Such strokes are potentially preventable by carotid interventions. Selection of patients for intervention is currently based on the severity of carotid luminal stenosis, yet it is widely accepted that luminal stenosis alone may not be an adequate predictor of risk. To evaluate the effects of the degree of luminal stenosis and plaque morphology on plaque stability, we used a coupled nonlinear time-dependent model with flow-plaque interaction simulation to perform flow and stress/strain analysis for a stenotic artery with a plaque. The Navier-Stokes equations in the Arbitrary Lagrangian-Eulerian (ALE) formulation were used as the governing equations for the fluid. The Ogden strain energy function was used for both the fibrous cap and the lipid pool. The plaque principal stresses and flow conditions were calculated for every case while varying the fibrous cap thickness from 0.1 to 2 mm and the degree of luminal stenosis from 10% to 90%. Severe stenosis led to high flow velocities and high shear stresses, but a low or even negative pressure at the throat of the stenosis. A higher degree of stenosis and a thinner fibrous cap led to larger plaque stresses, and a 50% decrease in fibrous cap thickness resulted in a 200% increase in maximum stress. This model suggests that fibrous cap thickness is critically related to plaque vulnerability and that, even in the presence of moderate stenosis, it may play an important role in the future risk stratification of patients when identified in vivo using high-resolution MR imaging.
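The Ogden strain energy function referred to above has, in its general incompressible form, the expression below (a standard statement of the model; the material constants actually used for the fibrous cap and lipid pool are not reproduced here):

```latex
W(\lambda_1,\lambda_2,\lambda_3)
  = \sum_{p=1}^{N} \frac{\mu_p}{\alpha_p}
    \left(\lambda_1^{\alpha_p} + \lambda_2^{\alpha_p} + \lambda_3^{\alpha_p} - 3\right),
\qquad \lambda_1 \lambda_2 \lambda_3 = 1,
```

where the \(\lambda_i\) are the principal stretches and \(\mu_p\), \(\alpha_p\) are material constants fitted to tissue data.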


Background and Purpose: Acute cerebral ischemic events are associated with rupture of vulnerable carotid atheroma and subsequent thrombosis. Factors such as luminal stenosis and fibrous cap thickness have been thought to be important risk factors for plaque rupture. We used a flow-structure interaction model to simulate the interaction between blood flow and atheromatous plaque to evaluate the effect of the degree of luminal stenosis and fibrous cap thickness on plaque vulnerability. Methods: A coupled nonlinear time-dependent model with a flow-plaque interaction simulation was used to perform flow and stress/strain analysis in a stenotic carotid artery model. The stress distribution within the plaque and the flow conditions within the vessel were calculated for every case while varying the fibrous cap thickness from 0.1 to 2 mm and the degree of luminal stenosis from 10% to 95%. A rupture stress of 300 kPa was chosen to indicate a high risk of plaque rupture. A 1-sample t test was used to compare plaque stresses with the rupture stress. Results: High stress concentrations were found in plaques in arteries with >70% stenosis. Plaque stresses in arteries with 30% to 70% stenosis increased exponentially as fibrous cap thickness decreased. A decrease in fibrous cap thickness from 0.4 to 0.2 mm resulted in an increase in plaque stress from 141 to 409 kPa in an artery with 40% stenosis. Conclusions: There is an increase in plaque stress in arteries with a thin fibrous cap. The presence of a moderate carotid stenosis (30% to 70%) with a thin fibrous cap indicates a high risk of plaque rupture. Patients may in the future be risk stratified by measuring both fibrous cap thickness and luminal stenosis.


The export of sediments from coastal catchments can have detrimental impacts on estuaries and near-shore reef ecosystems such as the Great Barrier Reef. Catchment management approaches aimed at reducing sediment loads require monitoring to evaluate their effectiveness in reducing loads over time. However, load estimation is not a trivial task, due to the complex behaviour of constituents in natural streams, the variability of water flows and the often limited amount of data. Regression is commonly used for load estimation and provides a fundamental tool for trend estimation by standardising for other time-specific covariates such as flow. This study investigates whether load estimates and the resultant power to detect trends can be enhanced by (i) modelling the error structure so that temporal correlation can be better quantified, (ii) making use of predictive variables, and (iii) identifying an efficient and feasible sampling strategy that may be used to reduce sampling error. To achieve this, we propose a new regression model that includes an innovative compounding-errors model structure and uses two additional predictive variables (average discounted flow and turbidity). By combining this modelling approach with a new, regularly optimised sampling strategy, which adds uniformity to the event sampling strategy, the predictive power was increased to 90%. Using the enhanced regression model proposed here, it was possible to detect a trend of 20% over 20 years. This result is in stark contrast to previous conclusions presented in the literature.


We consider the development of statistical models for predicting the constituent concentration of riverine pollutants, which is a key step in load estimation from frequent flow-rate data and less frequently collected concentration data. We consider how to capture the impacts of past flow patterns via the average discounted flow (ADF), which discounts past flux based on the time elapsed, so that more recent fluxes are given more weight. However, the effectiveness of the ADF depends critically on the choice of the discount factor, which reflects the unknown environmental accumulation process of the concentration compounds. We propose to choose the discount factor by maximizing the adjusted R² value or the Nash-Sutcliffe model efficiency coefficient. The R² values are adjusted to take account of the number of parameters in the model fit. The resulting optimal discount factor can be interpreted as a measure of the constituent exhaustion rate during flood events. To evaluate the performance of the proposed regression estimators, we examine two different sampling scenarios by resampling fortnightly and opportunistically from two real daily datasets, which come from two United States Geological Survey (USGS) gaging stations located in the Des Plaines River and Illinois River basins. The generalized rating-curve approach produces biased estimates of the total sediment loads, from -30% to 83%, whereas the new approaches produce much lower biases, ranging from -24% to 35%. This substantial improvement in the estimates of the total load is due to the fact that the predictability of concentration is greatly improved by the additional predictors.
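The ADF construction and the discount-factor search described above can be sketched as follows. This is a minimal illustration, assuming a simple exponential-discounting recursion for the ADF and an ordinary-least-squares concentration model; the paper's actual model includes further predictors, and its exact ADF definition may differ:

```python
import numpy as np

def adf_series(flow, delta):
    """Average discounted flow: an exponentially discounted running average
    of past flow (one plausible recursion; weights recent flux more)."""
    adf = np.empty(len(flow), dtype=float)
    acc = float(flow[0])
    adf[0] = acc
    for t in range(1, len(flow)):
        acc = delta * acc + (1.0 - delta) * float(flow[t])
        adf[t] = acc
    return adf

def adjusted_r2(y, X):
    """Adjusted R^2 of an OLS fit of y on X (with intercept), penalising
    for the number of parameters as described in the abstract."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    n, p = A.shape
    return 1.0 - (ss_res / (n - p)) / (ss_tot / (n - 1))

def best_discount(conc, flow, grid=np.linspace(0.05, 0.95, 19)):
    """Choose the discount factor by maximizing adjusted R^2 over a grid."""
    scores = [adjusted_r2(conc, np.column_stack([flow, adf_series(flow, d)]))
              for d in grid]
    return float(grid[int(np.argmax(scores))])
```

On synthetic data generated with a known discount factor, the grid search recovers a value close to the one used to generate the concentrations.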


We consider estimating the total load from frequent flow data but less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One dataset is from the Burdekin River and consists of total suspended sediment (TSS), nitrogen oxides (NOx) and gauged flow for 1997. The other dataset is from the Tully River, for the period July 2000 to June 2008.
For NOx in the Burdekin, the new estimates are very similar to the ratio estimates, even when there is no relationship between the concentration and the flow. However, for the Tully dataset, by incorporating the additional predictive variables, namely the discounted flow and flow phase (rising or receding), we substantially improved the model fit, and thus the certainty with which the load is estimated.


There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps:
- (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
- (ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates were not collected;
- (iii) establish a predictive model for the concentration data, which incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
- (iv) obtain the sum of all the products of the predicted flow and the predicted concentration over the regular time intervals as an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in the model errors which results from intensive sampling during floods.
Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2 to 10 times, indicating severe bias. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when sampling bias is taken into account.
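Steps (iii) and (iv) of the procedure above can be sketched as follows. This is a minimal illustration assuming the classical log-log rating curve as the concentration model; the additional predictors described in the text (first flush, hydrograph phase, discounted flow) would enter as extra columns of the design matrix:

```python
import numpy as np

def fit_rating_curve(flow_obs, conc_obs):
    """Step (iii), minimal form: fit log(conc) = a + b*log(flow) by
    ordinary least squares and return the coefficients (a, b)."""
    A = np.column_stack([np.ones(len(flow_obs)), np.log(flow_obs)])
    coef, *_ = np.linalg.lstsq(A, np.log(conc_obs), rcond=None)
    return coef

def estimate_load(flow_grid, coef, dt):
    """Step (iv): sum predicted flow x predicted concentration over the
    regular time grid (dt is the grid spacing; unit handling is left
    to the caller)."""
    conc_pred = np.exp(coef[0] + coef[1] * np.log(flow_grid))
    return float(np.sum(flow_grid * conc_pred) * dt)
```

For data that follow the rating curve exactly (e.g. concentration equal to twice the flow), the fit recovers the coefficients and the load is the exact sum of the flow-concentration products.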


Large-scale integration of solar photovoltaic (PV) generation in distribution networks has resulted in over-voltage problems. Several control techniques have been developed to address the over-voltage problem using Deterministic Load Flow (DLF). However, the intermittent characteristics of PV generation require Probabilistic Load Flow (PLF) to introduce into the analysis the variability that is ignored in DLF. Traditional PLF techniques are not well suited to distribution systems and suffer from several drawbacks, such as computational burden (Monte Carlo, conventional convolution), accuracy that degrades with system complexity (point estimation method), the need for linearization (multi-linear simulation) and convergence problems (Gram-Charlier expansion, Cornish-Fisher expansion). In this research, Latin Hypercube Sampling with Cholesky Decomposition (LHS-CD) is used to quantify over-voltage issues, with and without a voltage control algorithm, in distribution networks with active generation. The LHS technique is verified on a test network and on a real system from an Australian distribution network service provider. The accuracy and computational burden of the simulated results are also compared with Monte Carlo simulations.
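The core of the LHS-CD sampling step can be sketched as follows: stratified (Latin hypercube) uniforms are mapped to independent standard normals and then correlated through the Cholesky factor of a target correlation matrix. This is a minimal sketch of the technique named in the abstract, not the authors' implementation; note that multiplying by the Cholesky factor slightly distorts the per-variable stratification (production implementations often use the Iman-Conover rank reordering instead):

```python
import numpy as np
from statistics import NormalDist

def lhs_normal(n_samples, corr, seed=0):
    """Latin Hypercube Sampling with Cholesky-imposed correlation:
    stratified uniforms -> independent N(0,1) -> z @ L.T, where
    corr = L @ L.T."""
    rng = np.random.default_rng(seed)
    d = corr.shape[0]
    # one stratified (Latin hypercube) uniform column per variable:
    # a random permutation of the strata plus jitter within each stratum
    strata = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
    u = (strata + rng.random((n_samples, d))) / n_samples
    z = np.vectorize(NormalDist().inv_cdf)(u)   # independent standard normals
    L = np.linalg.cholesky(corr)                # corr must be positive definite
    return z @ L.T                              # approximately target correlation
```

For a two-variable case with target correlation 0.8, the empirical correlation of a few thousand samples lands close to 0.8.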


This research aims to develop an integrated Lean Six Sigma approach to investigate and resolve patient flow problems in hospital emergency departments. It was proposed that the voice of the customer and the voice of the process should be considered simultaneously when investigating the current patient flow process. Statistical analysis, visual process mapping with A3 problem-solving sheets, and cause-and-effect diagrams were used to identify the major patient flow issues. The research found that engaged frontline workers, long-term leadership commitment, an understanding of patients' requirements and the systematic integration of lean strategies could continuously improve patient flow, health care service and growth in emergency departments.


Based on unique news data relating to gold and crude oil, we investigate how news volume and sentiment, shocks in trading activity, market depth and trader positions unrelated to information flow covary with realized volatility. Positive shocks to the rate of news arrival and negative shocks to news sentiment exhibit the largest effects. After controlling for the level of news flow and cross-correlations, net trader positions play only a minor role. These findings are at odds with those of [Wang (2002a). The Journal of Futures Markets, 22, 427-450; Wang (2002b). The Financial Review, 37, 295-316], but are consistent with the previous literature, which does not find a strong link between volatility and trader positions.


- Provided a practical variable-stepsize implementation of the exponential Euler method (EEM).
- Introduced a new second-order variant of the scheme that enables the local error to be estimated at the cost of a single additional function evaluation.
- The new EEM implementation outperformed sophisticated implementations of the backward differentiation formulae (BDF) of order 2 and was competitive with BDF of order 5 at moderate to high tolerances.
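A scalar sketch of the underlying exponential Euler step is given below. This is a minimal, fixed-step illustration of the basic scheme only; the variable-stepsize control and the second-order error-estimating variant contributed by the work itself are not reproduced here:

```python
import math

def phi1(z):
    """phi_1(z) = (e^z - 1)/z, with the small-z limit phi_1(0) = 1."""
    return (math.expm1(z) / z) if abs(z) > 1e-12 else 1.0

def exp_euler_step(f, jac, y, h):
    """One exponential Euler step y_{n+1} = y_n + h*phi_1(h*J)*f(y_n)
    for a scalar ODE y' = f(y), with J = f'(y_n)."""
    return y + h * phi1(h * jac(y)) * f(y)

def integrate(f, jac, y0, t0, t1, h):
    """Fixed-step driver from t0 to t1 (step-size control omitted)."""
    y, t = y0, t0
    while t < t1 - 1e-12:
        h_eff = min(h, t1 - t)   # clip the final step to land on t1
        y = exp_euler_step(f, jac, y, h_eff)
        t += h_eff
    return y
```

For the linear problem y' = -2y the step reduces to exact multiplication by exp(-2h), so the integrator reproduces the exact solution regardless of step size, which is a useful sanity check for exponential integrators.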


Fan-forced injection of phosphine gas fumigant into stored grain is a common method of treating insect infestation. For low injection velocities the transport of fumigant can be modelled as Darcy flow in a porous medium, in which the gas pressure satisfies Laplace's equation. Using this approach, a closed-form series solution is derived for the pressure, velocity and streamlines in a cylindrical grain bed with either a circular or an annular inlet, from which traverse times are computed numerically. A leading-order closed-form expression for the traverse time is also obtained and found to be reasonable for inlet configurations close to the central axis of the grain store. Results are interpreted for the case of a representative 6 m high farm wheat store, where the time to advect the phosphine to almost the entire grain bed is found to be approximately one hour.
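The governing equations described above are, in standard form, Darcy's law for the seepage velocity together with incompressibility, which combine to give Laplace's equation for the pressure:

```latex
\mathbf{u} = -\frac{k}{\mu}\nabla p,
\qquad
\nabla \cdot \mathbf{u} = 0
\;\;\Longrightarrow\;\;
\nabla^2 p = 0,
```

where \(k\) is the permeability of the grain bed and \(\mu\) the gas viscosity. In the axisymmetric cylindrical geometry of the store this reads \(\frac{1}{r}\frac{\partial}{\partial r}\!\left(r\frac{\partial p}{\partial r}\right) + \frac{\partial^2 p}{\partial z^2} = 0\), whose separable solutions involve Bessel functions and underlie the series solution mentioned above.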


The phosphine distribution in a cylindrical silo containing grain is predicted. A three-dimensional mathematical model, which accounts for multicomponent gas-phase transport and the sorption of phosphine into the grain kernel, is developed. In addition, a simple model is presented to describe the death of insects within the grain as a function of their exposure to phosphine gas. The proposed model is solved using the commercially available computational fluid dynamics (CFD) software FLUENT, together with our own C code, which customizes the solver to incorporate the models for sorption and insect mortality. Two types of fumigant delivery are studied, namely fan-forced delivery from the base of the silo and tablet delivery from the top of the silo. An analysis of the predicted phosphine distribution shows that during fan-forced fumigation the position of the leaky area is very important to the development of the gas flow field and the phosphine distribution in the silo. If the leak is in the lower section of the silo, insects near the top of the silo may not be eradicated. The position of a leak does not, however, affect the phosphine distribution during tablet fumigation. For tablet fumigation in a typical silo configuration, phosphine concentrations remain low near the base of the silo. Furthermore, we find that half-life pressure test readings are not an indicator of phosphine distribution during tablet fumigation.
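The abstract does not specify the insect-death model. As a toy illustration only, one common way to model mortality from fumigant exposure is an accumulated dose integral compared against a lethal threshold; the exponent n, the threshold value and the function name below are hypothetical placeholders, not values or names from the paper:

```python
import numpy as np

def time_to_lethal_dose(conc, dt, n=1.0, lethal_dose=1.0):
    """Toy exposure model: accumulate the dose integral of C(t)**n over
    time steps of length dt and report the first time the (hypothetical)
    lethal threshold is reached; returns None if it never is."""
    dose = np.cumsum(np.asarray(conc, dtype=float) ** n) * dt
    if dose.size == 0 or dose[-1] < lethal_dose:
        return None
    return float((np.argmax(dose >= lethal_dose) + 1) * dt)
```

Under such a model, a region held at low concentration (e.g. near the silo base during tablet fumigation) simply never accumulates a lethal dose, matching the eradication failure discussed above.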


In a very recent study [1] the Renormalisation Group (RNG) turbulence model was used to obtain flow predictions in a strongly swirling quarl burner, and was found to perform well in predicting certain features that are not well captured by less sophisticated models of turbulence. The implication is that the RNG approach should provide an economical and reliable tool for predicting swirling flows in the combustor and furnace geometries commonly encountered in technological applications. To test this hypothesis, the present work considers flow in a model furnace for which experimental data are available [2]. The essential features of the flow which differentiate it from the previous study [1] are that the annular air-jet entry is relatively narrow and that the base wall of the cylindrical furnace is at 90 degrees to the inlet pipe. For swirl numbers of order 1 the resulting flow is highly complex, with significant inner and outer recirculation regions. The RNG and standard k-epsilon models are used to model the flow for both swirling and non-swirling entry jets, and the results are compared with the experimental data [2]. Near-wall viscous effects are accounted for in both models via the standard wall-function formulation [3]. For the RNG model, additional computations with grid placement extending well inside the near-wall viscous-affected sublayer are performed in order to assess the low-Reynolds-number capabilities of the model.


In this work we numerically model isothermal turbulent swirling flow in a cylindrical burner. Three versions of the RNG k-epsilon model are assessed against the performance of the standard k-epsilon model. The sensitivity of the numerical predictions to grid refinement, to different convective differencing schemes and to the choice of the (unknown) inlet dissipation rate was closely scrutinised to ensure accuracy. Particular attention is paid to modelling the inlet conditions to within the range of uncertainty of the experimental data, as the model predictions proved significantly sensitive to relatively small changes in upstream flow conditions. We also examine the characteristics of the swirl-induced recirculation zone (IRZ) predicted by the models over an extended range of inlet conditions. Our main findings are:
- (i) the standard k-epsilon model performed best compared with experiment;
- (ii) no one inlet specification can simultaneously optimize the performance of all the models considered;
- (iii) the RNG models predict both single-cell and double-cell IRZ characteristics, the latter both with and without additional internal stagnation points.
The first finding indicates that the examined RNG modifications to the standard k-epsilon model do not result in an improved eddy-viscosity-based model for the prediction of swirl flows. The second finding suggests that tuning established models a priori for optimal performance in swirl flows is not straightforward. The third finding indicates that the RNG-based models exhibit a greater variety of structural behaviour, despite being of the same level of complexity as the standard k-epsilon model. The plausibility of the predicted IRZ features is discussed in terms of known vortex breakdown phenomena.
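Both the standard and the RNG k-epsilon models compute an eddy viscosity from the turbulence kinetic energy k and its dissipation rate ε, and the inlet swirl strength is conventionally characterized by the swirl number. For reference, these are the standard definitions, not expressions taken from the paper itself:

```latex
\nu_t = C_\mu \frac{k^2}{\epsilon},
\qquad
S = \frac{\displaystyle\int_0^R \rho\, u\, w\, r^2 \,\mathrm{d}r}
         {R \displaystyle\int_0^R \rho\, u^2\, r \,\mathrm{d}r},
```

where \(u\) and \(w\) are the axial and tangential velocity components and \(R\) is the inlet radius; the RNG variant differs mainly through its theoretically derived coefficients and an additional strain-dependent term in the \(\epsilon\) equation.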