1000 results for Simulation numérique
Abstract:
We propose a dynamic mathematical model of tissue oxygen transport by a preexisting three-dimensional microvascular network which provides nutrients for an in situ cancer at the very early stage of primary microtumour growth. The expanding tumour consumes oxygen during its invasion of the surrounding tissues and cooption of host vessels. The preexisting vessel cooption, remodelling and collapse are modelled through the changes in haemodynamic conditions caused by the growing tumour. A detailed computational model of oxygen transport in tumour tissue is developed by considering (a) the time-varying oxygen advection-diffusion equation within the microvessel segments, (b) the oxygen flux across the vessel walls, and (c) the oxygen diffusion and consumption within the tumour and surrounding healthy tissue. The results show the oxygen concentration distribution at different time points of early tumour growth. In addition, the influence of preexisting vessel density on oxygen transport is discussed. The proposed model not only provides a quantitative approach for investigating the interactions between tumour growth and oxygen delivery, but is also extendable to model the transport of other molecules or chemotherapeutic drugs in future studies.
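As a rough illustration of the tissue-side transport described above (the symbols and the saturable Michaelis-Menten consumption form are assumptions for illustration, not taken from the paper), the governing balance can be written as

\[ \frac{\partial c}{\partial t} = D\,\nabla^{2} c \;-\; \frac{M_{0}\,c}{c_{0}+c}, \]

where \(c\) is the tissue oxygen concentration, \(D\) the tissue diffusion coefficient, and the second term a consumption rate that saturates at high oxygen levels; the intravascular equation adds an advection term \(-\mathbf{u}\cdot\nabla c\) along each vessel segment, and a wall-flux term couples the vascular and tissue domains.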
Abstract:
A three-dimensional (3D) mathematical model of tumour growth at the avascular phase and of vessel remodelling in host tissues is proposed, with emphasis on the interactions between tumour growth and the hypoxic micro-environment in host tissues. The hybrid model includes a continuum part, such as the distributions of oxygen and vascular endothelial growth factors (VEGFs), and a discrete part comprising tumour cells (TCs) and blood vessel networks. The simulation shows the dynamic process of avascular tumour growth from a few initial cells to an equilibrium state with varied vessel networks. After a phase of rapidly increasing numbers of TCs, more and more host vessels collapse due to the stress caused by the growing tumour. In addition, the consumption of oxygen expands with the enlarging tumour region. The study also discusses the effects of certain factors on tumour growth, including the density and configuration of the preexisting vessel networks and the blood oxygen content. The model enables us to examine the relationship between early tumour growth and the hypoxic micro-environment in host tissues, which can be useful for further applications such as tumour metastasis and the initiation of tumour angiogenesis.
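A minimal sketch of such a hybrid coupling loop is given below (grid size, uptake rate, time step and hypoxia threshold are illustrative assumptions, not the authors' values): the continuum oxygen field is updated first, then the discrete tumour cells respond to the local oxygen level.

```python
import numpy as np

# Hybrid continuum-discrete coupling sketch (assumed parameters, not the paper's):
# a 3D oxygen field updated by explicit diffusion and local consumption, and a
# set of discrete tumour cells that proliferate only where oxygen is sufficient.
nx, ny, nz = 20, 20, 20
oxygen = np.ones((nx, ny, nz))           # normalised oxygen concentration
cells = {(10, 10, 10)}                   # initial tumour-cell positions
D, dt, uptake, hypoxia_threshold = 0.1, 0.1, 0.05, 0.2

for step in range(100):
    # continuum part: explicit diffusion update, then consumption at cell sites
    lap = (np.roll(oxygen, 1, 0) + np.roll(oxygen, -1, 0) +
           np.roll(oxygen, 1, 1) + np.roll(oxygen, -1, 1) +
           np.roll(oxygen, 1, 2) + np.roll(oxygen, -1, 2) - 6 * oxygen)
    oxygen += dt * D * lap
    for (i, j, k) in cells:
        oxygen[i, j, k] = max(oxygen[i, j, k] - dt * uptake, 0.0)

    # discrete part: a cell divides into a random neighbouring site if oxygen allows
    new_cells = set()
    for (i, j, k) in cells:
        if oxygen[i, j, k] > hypoxia_threshold:
            di, dj, dk = np.random.randint(-1, 2, size=3)
            new_cells.add(((i + di) % nx, (j + dj) % ny, (k + dk) % nz))
    cells |= new_cells

print("tumour cells after 100 steps:", len(cells))
```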
Abstract:
Background: Coronary tortuosity (CT) is a common coronary angiographic finding. Whether CT leads to an apparent reduction in coronary pressure distal to the tortuous segment of the coronary artery is still unknown. The purpose of this study is to determine the impact of CT on coronary pressure distribution by numerical simulation. Methods: 21 idealized models were created to investigate the influence of coronary tortuosity angle (CTA) and coronary tortuosity number (CTN) on coronary pressure distribution. A 2D incompressible Newtonian flow was assumed and the computational simulation was performed using the finite volume method. CTAs of 30°, 60°, 90° and 120° and CTNs of 0, 1, 2, 3, 4 and 5 were examined under both steady and pulsatile conditions, and the changes of outlet pressure and inlet velocity during the cardiac cycle were considered. Results: Coronary pressure distribution was affected by both CTA and CTN. We found that the pressure drop between the start and the end of the CT segment decreased with CTA, and the length of the CT segment also declined with CTA. An increase in CTN resulted in an increase in the pressure drop. Conclusions: Compared with no CT, CT can result in a greater decrease in coronary blood pressure, depending on the severity of the tortuosity, and severe CT may cause myocardial ischemia.
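For orientation only, the sketch below shows how a Poiseuille-type pressure drop would scale with CTN and CTA along an idealised tortuous centreline; viscosity, flow rate, radius and bend geometry are assumed values, and the secondary flows and pulsatility captured by the finite-volume simulations are neglected.

```python
import numpy as np

# Illustrative estimate only (not the paper's CFD model): a Poiseuille pressure
# drop over a segment whose centreline is built from CTN circular arcs whose
# length shrinks as the tortuosity angle CTA grows (sharper angle, shorter arc).
mu = 3.5e-3          # blood viscosity, Pa*s (assumed)
Q = 1.0e-6           # volumetric flow rate, m^3/s (assumed)
R = 1.5e-3           # vessel radius, m (assumed)
straight_len = 0.02  # straight reference length, m (assumed)
arc_radius = 2.0e-3  # bend radius of curvature, m (assumed)

def poiseuille_drop(length):
    """Pressure drop (Pa) for laminar flow in a straight tube of given length."""
    return 8.0 * mu * length * Q / (np.pi * R**4)

for ctn in range(0, 6):                    # coronary tortuosity number
    for cta in (30, 60, 90, 120):          # coronary tortuosity angle, degrees
        bend_len = ctn * arc_radius * np.radians(180 - cta)
        dp = poiseuille_drop(straight_len + bend_len)
        print(f"CTN={ctn} CTA={cta:3d}  pressure drop ~ {dp:.1f} Pa")
```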
Abstract:
The rupture of atherosclerotic plaques is known to be associated with the stresses that act on or within the arterial wall. Extreme wall tensile stress (WTS) is usually recognized as a primary trigger for the rupture of vulnerable plaque. The present study used in-vivo high-resolution multi-spectral magnetic resonance imaging (MRI) for carotid arterial plaque morphology reconstruction. Image segmentation of the different plaque components was based on the multi-spectral MRI and co-registered with the different sequences for each patient. Stress analysis was performed by fluid-structure interaction (FSI) simulation on a total of four subjects with different plaque burdens. Wall shear stress distributions are highly related to the degree of stenosis, while the level of their magnitude is much lower than that of the WTS in the fibrous cap. WTS is higher in the luminal wall and lower at the outer wall, with the lowest stress at the lipid region. Local stress concentrations are well confined to the thinner fibrous cap region and are usually located in the plaque shoulder; the relative stress variation during a cycle in the fibrous cap may be a potential indicator of the fatigue process in a thin fibrous cap. Based on the stress analysis of the four subjects, a risk assessment in terms of mechanical factors could be made, which may be helpful in clinical practice. However, patient-specific analyses of more subjects are desirable for plaque-stability studies.
Abstract:
It has been well accepted that over 50% of cerebral ischemic events are the result of rupture of a vulnerable carotid atheroma and subsequent thrombosis. Such strokes are potentially preventable by carotid interventions. Selection of patients for intervention is currently based on the severity of carotid luminal stenosis. It has, however, been widely accepted that luminal stenosis alone may not be an adequate predictor of risk. To evaluate the effects of the degree of luminal stenosis and plaque morphology on plaque stability, we used a coupled nonlinear time-dependent model with flow-plaque interaction simulation to perform flow and stress/strain analysis for a stenotic artery with a plaque. The Navier-Stokes equations in the Arbitrary Lagrangian-Eulerian (ALE) formulation were used as the governing equations for the fluid. The Ogden strain energy function was used for both the fibrous cap and the lipid pool. The plaque principal stresses and flow conditions were calculated for every case while varying the fibrous cap thickness from 0.1 to 2 mm and the degree of luminal stenosis from 10% to 90%. Severe stenosis led to high flow velocities and high shear stresses, but a low or even negative pressure at the throat of the stenosis. A higher degree of stenosis and a thinner fibrous cap led to larger plaque stresses, and a 50% decrease in fibrous cap thickness resulted in a 200% increase in maximum stress. This model suggests that fibrous cap thickness is critically related to plaque vulnerability and that, even in the presence of moderate stenosis, it may play an important role in the future risk stratification of patients when identified in vivo using high-resolution MR imaging.
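For reference, the Ogden strain energy function mentioned above has the general incompressible form

\[ W=\sum_{p=1}^{N}\frac{\mu_{p}}{\alpha_{p}}\left(\lambda_{1}^{\alpha_{p}}+\lambda_{2}^{\alpha_{p}}+\lambda_{3}^{\alpha_{p}}-3\right), \]

where \(\lambda_{1},\lambda_{2},\lambda_{3}\) are the principal stretches and \(\mu_{p},\alpha_{p}\) are material parameters fitted separately for the fibrous cap and the lipid pool; the specific parameter values used in the study are not reproduced here.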
Abstract:
Considering ultrasound propagation through complex composite media as an array of parallel sonic rays, a comparison of computer-simulated predictions with experimental data has previously been reported for transmission mode (where one transducer serves as transmitter, the other as receiver) in a series of ten acrylic step-wedge samples, immersed in water, exhibiting varying degrees of transit time inhomogeneity. In this study, the same samples were used but in pulse-echo mode, where the same ultrasound transducer served as both transmitter and receiver, detecting both ‘primary’ (internal sample interface) and ‘secondary’ (external sample interface) echoes. A transit time spectrum (TTS) was derived, describing the proportion of sonic rays with a particular transit time. A computer simulation was performed to predict the transit time and amplitude of the various echoes created, and compared with experimental data. Applying an amplitude-tolerance analysis, 91.7±3.7% of the simulated data was within ±1 standard deviation (STD) of the experimentally measured amplitude-time data. Correlation of predicted and experimental transit time spectra provided coefficients of determination (R²) ranging from 96.8% to 100.0% for the various samples tested. The results acquired from this study provide good evidence for the concept of parallel sonic rays. Further, deconvolution of experimental input and output signals has been shown to provide an effective method to identify echoes otherwise lost due to phase cancellation. Potential applications of pulse-echo ultrasound transit time spectroscopy (PE-UTTS) include improvement of ultrasound image fidelity by improving spatial resolution and reducing phase interference artefacts.
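A minimal sketch of the frequency-domain deconvolution step is shown below, using synthetic signals (the pulse shape, echo delays, sampling rate and regularisation constant are assumptions for illustration, not the study's data).

```python
import numpy as np

# Sketch: recover an echo/transit-time profile by deconvolving the received
# pulse-echo signal with the transmitted pulse in the frequency domain,
# with Tikhonov-style regularisation to limit noise amplification.
fs = 50e6                                  # sampling frequency, Hz (assumed)
t = np.arange(0, 20e-6, 1 / fs)
pulse = np.sin(2 * np.pi * 2.5e6 * t) * np.exp(-((t - 1e-6) / 0.4e-6) ** 2)

# Synthetic received signal: two echoes with different transit times/amplitudes
delays, amps = [5e-6, 11e-6], [0.8, 0.3]
received = np.zeros_like(t)
for d, a in zip(delays, amps):
    received += a * np.roll(pulse, int(round(d * fs)))

P = np.fft.rfft(pulse)
Rx = np.fft.rfft(received)
eps = 1e-2 * np.max(np.abs(P)) ** 2        # regularisation constant (assumed)
tts = np.fft.irfft(Rx * np.conj(P) / (np.abs(P) ** 2 + eps), n=len(t))

print("recovered echo delays (microseconds):",
      np.round(np.sort(t[np.argsort(tts)[-2:]]) * 1e6, 2))
```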
Abstract:
Three simulations of evapotranspiration were done with two values of time step, viz. 10 min and one day. Inputs to the model were weather data, including directly measured upward and downward radiation, and soil characteristics. Three soils were used for each simulation. Analysis of the results shows that the time step has a direct influence on the prediction of potential evapotranspiration, but a complex interaction of this effect with the soil moisture characteristic, the rate of increase of ground cover and bare soil evaporation determines the actual transpiration predicted. The results indicate that as small a time step as possible should be used in the simulation.
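The time-step effect can be illustrated with a toy soil-water store (not the original model; all parameters are assumed): with a daily step, the evaporation rate computed at the start of the day is held fixed while the store is depleted, whereas a 10-minute step lets the rate adjust continuously.

```python
# Toy illustration of time-step sensitivity in an explicit water-balance update.
def simulate(dt_hours, days=30, w0=50.0, pet=5.0, w_crit=25.0):
    """Cumulative actual evaporation (mm) with an explicit step of dt_hours."""
    w, total_et = w0, 0.0
    steps = int(days * 24 / dt_hours)
    for _ in range(steps):
        aet = pet * min(w / w_crit, 1.0)          # mm/day, supply-limited rate
        removed = min(aet * dt_hours / 24.0, w)   # cannot remove more than stored
        w -= removed
        total_et += removed
    return total_et

print("cumulative ET, 10-min step:", round(simulate(10 / 60), 1), "mm")
print("cumulative ET, daily step :", round(simulate(24), 1), "mm")
```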
Abstract:
The literature contains many examples of digital procedures for the analytical treatment of electroencephalograms, but there is as yet no standard by which those techniques may be judged or compared. This paper proposes one method of generating an EEG, based on a computer program for Zetterberg's simulation. It is assumed that the statistical properties of an EEG may be represented by stationary processes having rational transfer functions and achieved by a system of software filters and random number generators. The model represents neither the neurological mechanism responsible for generating the EEG, nor any particular type of EEG record; transient phenomena such as spikes, sharp waves and alpha bursts are also excluded. The basis of the program is a valid ‘partial’ statistical description of the EEG; that description is then used to produce a digital representation of a signal which, if plotted sequentially, might or might not by chance resemble an EEG; that is unimportant. What is important is that the statistical properties of the series remain those of a real EEG; it is in this sense that the output is a simulation of the EEG. There is considerable flexibility in the form of the output, i.e. its alpha, beta and delta content, which may be selected by the user, the same selected parameters always producing the same statistical output. The filtered outputs from the random number sequences may be scaled to provide realistic power distributions in the accepted EEG frequency bands and then summed to create a digital output signal, the ‘stationary EEG’. It is suggested that the simulator might act as a test input to digital analytical techniques for the EEG, a simulator which would enable at least a substantial part of those techniques to be compared and assessed in an objective manner. The equations necessary to implement the model are given. The program has been run on a DEC1090 computer but is suitable for any microcomputer having more than 32 kBytes of memory; the execution time required to generate a 25 s simulated EEG is in the region of 15 s.
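A minimal sketch in the spirit of this approach is given below (band edges, filter order and band weights are assumed, and Butterworth filters stand in for Zetterberg's rational transfer functions): white noise is filtered into the conventional EEG bands, each band is scaled to a chosen power, and the outputs are summed into a stationary "EEG-like" signal.

```python
import numpy as np
from scipy import signal

fs, duration = 256, 25                    # Hz, seconds (assumed)
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs * duration)

bands = {"delta": (0.5, 4.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}
weights = {"delta": 1.0, "alpha": 2.0, "beta": 0.5}   # relative band amplitudes (assumed)

eeg = np.zeros_like(noise)
for name, (lo, hi) in bands.items():
    b, a = signal.butter(4, [lo, hi], btype="bandpass", fs=fs)
    band = signal.lfilter(b, a, noise)                # filter the random sequence
    band *= weights[name] / (band.std() + 1e-12)      # scale to the chosen band power
    eeg += band

print("simulated EEG: %d samples, std %.2f" % (eeg.size, eeg.std()))
```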
Abstract:
Efficiency of analysis using generalized estimating equations is enhanced when the intracluster correlation structure is accurately modeled. We compare two existing criteria (a quasi-likelihood information criterion and the Rotnitzky-Jewell criterion) for identifying the true correlation structure via simulations with Gaussian or binomial response, covariates varying at the cluster or observation level, and exchangeable or AR(1) intracluster correlation structure. Rotnitzky and Jewell's approach performs better when the true intracluster correlation structure is exchangeable, while the quasi-likelihood criterion performs better for an AR(1) structure.
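A minimal sketch of the simulation backbone is given below (cluster count, cluster size and correlation level are assumed values); it only generates clustered Gaussian data under the two candidate structures, which is the step a selection criterion such as QIC or the Rotnitzky-Jewell criterion would then be applied to.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clusters, cluster_size, rho = 200, 5, 0.4   # assumed simulation settings

def corr_matrix(structure):
    """Working intracluster correlation matrix: exchangeable or AR(1)."""
    if structure == "exchangeable":
        return np.full((cluster_size, cluster_size), rho) + (1 - rho) * np.eye(cluster_size)
    idx = np.arange(cluster_size)                 # AR(1): decays with lag |i - j|
    return rho ** np.abs(idx[:, None] - idx[None, :])

def simulate(structure, beta=(1.0, 0.5)):
    """Gaussian responses with an observation-level covariate and correlated errors."""
    R = corr_matrix(structure)
    x = rng.standard_normal((n_clusters, cluster_size))
    errors = rng.multivariate_normal(np.zeros(cluster_size), R, size=n_clusters)
    return x, beta[0] + beta[1] * x + errors

x_ex, y_ex = simulate("exchangeable")
x_ar, y_ar = simulate("AR(1)")
for label, y in (("exchangeable", y_ex), ("AR(1)", y_ar)):
    lag1 = np.corrcoef(y[:, :-1].ravel(), y[:, 1:].ravel())[0, 1]
    print(f"empirical lag-1 correlation, {label} data: {lag1:.2f}")
```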
Abstract:
The electric field in certain electrostatic devices can be modeled by a grounded plate electrode subjected to a corona discharge generated by a series of parallel wires connected to a DC high-voltage supply. The system of differential equations that describes the behaviour (i.e., charging and motion) of a conductive particle in such an electric field has been solved numerically, using several simplifying assumptions. Thus, it was possible to investigate the effect of various electrical and mechanical factors on the trajectories of conductive particles. This model has been employed to study the behaviour of coal particles in fly-ash corona separators.
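An illustrative reduction of the particle-dynamics part is sketched below (a uniform field, constant charge and Stokes drag are simplifying assumptions made here; the full model also includes the charging process in the corona field).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Trajectory of a charged conductive particle under an assumed uniform electric
# field, gravity and Stokes drag (all parameter values are assumed for illustration).
q = 1.0e-11                     # particle charge, C
m = 2.0e-8                      # particle mass, kg
E = np.array([0.0, 3.0e5])      # electric field, V/m
g = np.array([0.0, -9.81])      # gravity, m/s^2
b = 1.0e-7                      # Stokes drag coefficient, kg/s

def rhs(t, state):
    """state = [x, y, vx, vy]; returns the time derivatives."""
    vel = state[2:]
    acc = (q * E + m * g - b * vel) / m
    return np.concatenate([vel, acc])

sol = solve_ivp(rhs, (0.0, 0.2), [0.0, 0.05, 0.0, 0.0], max_step=1e-3)
print("final position (m):", np.round(sol.y[:2, -1], 4))
```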
Abstract:
Sheep in western Queensland have been predominantly reared for wool. When wool prices became depressed, interest in the sheep meat industry increased. For north-west Queensland producers, opportunities may exist to participate in live sheep and meat export to Asia. A simulation model was developed to determine whether this sheep-producing area has the capability to provide sufficient numbers of sheep under variable climatic conditions while sustaining the land resources. Maximum capacity for sustainability of resources (as described by stock numbers) was derived from an in-depth study of the agricultural and pastoral potential of Queensland. Decades of sheep production and climatic data spanning differing seasonal conditions were collated for analysis. A ruminant biology model adapted from Grazplan was used to simulate pregnancy rate. Empirical equations predict mortalities, marking rates, and weight characteristics of sheep of various ages from simple climatic measures, stocking rate and reproductive status. The initial age structure of flocks was determined by running the model for several years with historical climatic conditions. Drought management strategies, such as selling a proportion of wethers progressively down to two-tooth and the oldest ewes, were incorporated. Management decisions such as time of joining, age at which ewes were cast-for-age, wether turn-off age and turning-off rate of lambs vary with geographical area and can be specified at run time. The model is run for sequences of climatic conditions generated stochastically from distributions based on historical climatic data, correlated in some instances. The model highlights the difficulties of sustaining a consistent supply of sheep under variable climatic conditions.
Abstract:
The widespread and increasing resistance of internal parasites to anthelmintic control is a serious problem for the Australian sheep and wool industry. As part of control programmes, laboratories use the Faecal Egg Count Reduction Test (FECRT) to determine resistance to anthelmintics. It is important to have confidence in the measure of resistance, not only for the producer planning a drenching programme but also for companies investigating the efficacy of their products. The determination of resistance and the corresponding confidence limits as given in the anthelmintic efficacy guidelines of the Standing Committee on Agriculture (SCA) is based on a number of assumptions. This study evaluated the appropriateness of these assumptions for typical data and compared the effectiveness of the standard FECRT procedure with that of alternative procedures. Several sets of historical experimental data from sheep and goats were analysed to determine that a negative binomial distribution was a more appropriate distribution than a normal distribution for describing pre-treatment helminth egg counts in faeces. Simulated egg counts for control animals were generated stochastically from negative binomial distributions and those for treated animals from negative binomial and binomial distributions. Three methods for determining resistance when percent reduction is based on arithmetic means were applied. The first was that advocated in the SCA guidelines, the second was similar to the first but based the variance estimates on negative binomial distributions, and the third used Wadley’s method with the distribution of the response variate assumed negative binomial and a logit link transformation. These were also compared with a fourth method, recommended by the International Co-operation on Harmonisation of Technical Requirements for Registration of Veterinary Medicinal Products (VICH) programme, in which percent reduction is based on geometric means. A wide selection of parameters was investigated and for each set 1000 simulations were run. Percent reduction and confidence limits were then calculated for each method, together with the number of times in each set of 1000 simulations the theoretical percent reduction fell within the estimated confidence limits and the number of times resistance would have been said to occur. These simulations provide the basis for setting conditions under which the methods could be recommended. The authors show that, given the distribution of helminth egg counts found in Queensland flocks, the method based on arithmetic rather than geometric means should be used, and suggest that resistance be redefined as occurring when the upper limit of percent reduction is less than 95%. At least ten animals per group are required in most circumstances, though even 20 may be insufficient where the effectiveness of the product is close to the cut-off point for defining resistance.
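A minimal sketch of this kind of simulation is given below (the dispersion, mean egg count, efficacy and group size are assumed values, and a bootstrap interval stands in for the variance formulas compared in the study): egg counts are drawn from negative binomial distributions and percent reduction is computed from arithmetic means.

```python
import numpy as np

rng = np.random.default_rng(2)
k, mu_control, efficacy, n_animals = 0.7, 400.0, 0.96, 10   # assumed parameters

def nb_sample(mean, size):
    """Negative binomial draws parameterised by the mean and dispersion k."""
    return rng.negative_binomial(k, k / (k + mean), size=size)

control = nb_sample(mu_control, n_animals)
treated = nb_sample(mu_control * (1 - efficacy), n_animals)

reduction = 100.0 * (1.0 - treated.mean() / control.mean())
boot = []
for _ in range(2000):                                        # bootstrap resampling
    c = rng.choice(control, n_animals, replace=True)
    t = rng.choice(treated, n_animals, replace=True)
    if c.mean() > 0:
        boot.append(100.0 * (1.0 - t.mean() / c.mean()))
lo95, hi95 = np.percentile(boot, [2.5, 97.5])

print(f"percent reduction {reduction:.1f}% (95% CI {lo95:.1f} to {hi95:.1f})")
print("resistance flagged (upper limit < 95%):", hi95 < 95.0)
```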
Abstract:
Background: Plotless density estimators are those that are based on distance measures rather than counts per unit area (quadrats or plots) to estimate the density of some usually stationary event, e.g. burrow openings, damage to plant stems, etc. These estimators typically use distance measures between events and from random points to events to derive an estimate of density. The error and bias of these estimators for the various spatial patterns found in nature have previously been examined using simulated populations only. In this study we investigated eight plotless density estimators to determine which were robust across a wide range of data sets from fully mapped field sites. These covered a wide range of situations, including animal damage to rice and corn, nest locations, active rodent burrows and distributions of plants. Monte Carlo simulations were applied to sample the data sets, and in all cases the error of the estimate (measured as relative root mean square error) was reduced with increasing sample size. The method of calculation and ease of use in the field were also used to judge the usefulness of each estimator. Estimators were evaluated in their original published forms, although the variable area transect (VAT) and ordered distance methods have been the subjects of optimization studies. Results: An estimator that was a compound of three basic distance estimators was found to be robust across all spatial patterns for sample sizes of 25 or greater. The same field methodology can be used either with the basic distance formula or with the formula used in the Kendall-Moran estimator, in which case a reduction in error may be gained for sample sizes of less than 25; however, there is no improvement for larger sample sizes. The variable area transect (VAT) method performed moderately well, is easy to use in the field, and its calculations are easy to undertake. Conclusion: Plotless density estimators can provide an estimate of density in situations where it would not be practical to lay out a plot or quadrat, and can in many cases reduce the workload in the field.
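A Monte Carlo sketch of one basic distance estimator is shown below (a point-to-nearest-event form and a uniform event pattern are assumptions for illustration; the study's compound estimator combines several such estimators): events are simulated at a known density so the relative error of the estimate can be checked.

```python
import numpy as np

rng = np.random.default_rng(3)
true_density, side = 200.0, 1.0                       # events per unit area, square side
events = rng.uniform(0, side, size=(int(true_density * side**2), 2))

def nearest_event_estimate(n_points=25):
    """Density estimated from distances between random points and their nearest events."""
    points = rng.uniform(0, side, size=(n_points, 2))
    d = np.sqrt(((points[:, None, :] - events[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return (n_points - 1) / (np.pi * np.sum(d ** 2))

estimates = np.array([nearest_event_estimate() for _ in range(500)])
rrmse = np.sqrt(np.mean((estimates - true_density) ** 2)) / true_density
print(f"mean estimate {estimates.mean():.1f} per unit area, relative RMSE {rrmse:.2f}")
```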
Abstract:
This report describes the development and simulation of a variable-rate controller for a 6-degree-of-freedom nonlinear model. The variable-rate simulation model represents an off-the-shelf autopilot. Flight experiments involve risk and can be expensive; a dynamic model is therefore important, both for understanding the performance characteristics of the UAS in mission simulation before actual flight tests and for obtaining parameters needed for flight. The control and guidance are implemented in Simulink. The report tests the use of the model for air-search and air-sampling path planning. A GUI is presented in which two experts (a mission expert, i.e. air sampling or air search, and a UAV expert) interact over a set of mission scenarios, demonstrating the benefits of the method.
Abstract:
APSIM-ORYZA is a new functionality developed in the APSIM framework to simulate rice production while addressing management issues such as fertilisation and transplanting, which are particularly important in Korean agriculture. To validate the model for Korean rice varieties and field conditions, the measured yields and flowering times from three field experiments conducted by the Gyeonggi Agricultural Research and Extension Services (GARES) in Korea were compared against the simulated outputs for different management practices and rice varieties. Simulated yields of early-, mid- and mid-to-late-maturing varieties of rice grown in a continuous rice cropping system from 1997 to 2004 showed close agreement with the measured data. Similar results were also found for yields simulated under seven levels of nitrogen application. When different transplanting times were modelled, simulated flowering times ranged from within 3 days of the measured values for the early-maturing varieties, to up to 9 days after the measured dates for the mid- and especially mid-to-late-maturing varieties. This was associated with highly variable simulated yields which correlated poorly with the measured data. This suggests the need to accurately calibrate the photoperiod sensitivity parameters of the model for the photoperiod-sensitive rice varieties in Korea.