85 results for Digital simulation


Relevance:

20.00%

Publisher:

Abstract:

The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates use a pre-calculated table, step size control cannot rely on the usual integration formulas. A step size control scheme for use with the table-driven velocity and position calculation instead uses the difference between the result of one big step and that of two small steps. This variable time step method automatically chooses a suitable time step size for each particle at each step according to the local conditions. Simulations using the fixed time step method are compared with those using the variable time step method. The difference in computation time for the same accuracy using a variable step size (compared with a fixed step) depends on the particular problem. For a simple test case the times are roughly similar. However, the variable step size gives the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times. (C) 2001 Elsevier Science Ltd. All rights reserved.
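A minimal sketch of the big-step/two-half-steps comparison described above, with a plain explicit update standing in for the paper's table-driven calculation; the function names and the tolerance `tol` are illustrative, not from the paper:

```python
def step(x, v, a, dt):
    # One explicit position/velocity update (a stand-in for the
    # pre-calculated table used in the paper).
    return x + v * dt, v + a * dt

def adaptive_step(x, v, a, dt, tol=1e-6):
    # One big step of size dt ...
    x_big, _ = step(x, v, a, dt)
    # ... versus two small steps of size dt/2.
    x_h, v_h = step(x, v, a, dt / 2)
    x_two, v_two = step(x_h, v_h, a, dt / 2)
    # The difference between the two results estimates the local error.
    err = abs(x_big - x_two)
    if err > tol:
        dt /= 2            # too coarse: shrink this particle's step
    elif err < tol / 4:
        dt *= 2            # comfortably accurate: allow a bigger step
    return x_two, v_two, dt
```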

Relevance:

20.00%

Publisher:

Abstract:

Objective: To compare the accuracy and feasibility of harmonic power Doppler and digitally subtracted colour coded grey scale imaging for the assessment of perfusion defect severity by single photon emission computed tomography (SPECT) in an unselected group of patients. Design: Cohort study. Setting: Regional cardiothoracic unit. Patients: 49 patients (mean (SD) age 61 (11) years; 27 women, 22 men) with known or suspected coronary artery disease were studied with simultaneous myocardial contrast echo (MCE) and SPECT after standard dipyridamole stress. Main outcome measures: Regional myocardial perfusion by SPECT, performed with Tc-99m tetrofosmin, scored qualitatively and also quantitated as per cent maximum activity. Results: Normal perfusion was identified by SPECT in 225 of 270 segments (83%). Contrast echo images were interpretable in 92% of patients. The proportions of normal MCE by grey scale, subtracted, and power Doppler techniques were respectively 76%, 74%, and 88% (p < 0.05) at > 80% of maximum counts, compared with 65%, 69%, and 61% at < 60% of maximum counts. For each technique, specificity was lowest in the lateral wall, although power Doppler was the least affected. Grey scale and subtraction techniques were least accurate in the septal wall, but power Doppler showed particular problems in the apex. On a per patient analysis, the sensitivity for detection of coronary artery disease was 67%, 75%, and 83% using grey scale, colour coded, and power Doppler, respectively, with a significant difference between power Doppler and grey scale only (p < 0.05). Specificity was also highest for power Doppler, at 55%, but not significantly different from subtracted colour coded images. Conclusions: Myocardial contrast echo using harmonic power Doppler is more accurate than grey scale imaging and digital subtraction. However, power Doppler appears to be less sensitive for mild perfusion defects.
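As a worked illustration of the per-patient accuracy figures, sensitivity and specificity follow from counts of true/false positives and negatives; the counts below are hypothetical, chosen only to reproduce the quoted 83% and 55%, and are not the study's data:

```python
def sensitivity(tp, fn):
    # Proportion of diseased patients correctly detected.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Proportion of disease-free patients correctly cleared.
    return tn / (tn + fp)

# Illustrative counts only (not the study data):
print(round(sensitivity(tp=25, fn=5), 2))   # 0.83, as for power Doppler
print(round(specificity(tn=11, fp=9), 2))   # 0.55
```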

Relevance:

20.00%

Publisher:

Abstract:

Background. Although digital and videotaped images are known to be comparable for the evaluation of left ventricular function, their relative accuracy for assessment of more complex anatomy is unclear. We sought to compare reading time, storage costs, and concordance of video and digital interpretations across multiple observers and sites. Methods. One hundred one patients with valvular (90 mitral, 48 aortic, 80 tricuspid) disease were selected prospectively, and studies were stored according to video and standardized digital protocols. The same reviewer interpreted video and digital images independently and at different times, using a standard report form to evaluate 40 items (e.g., severity of stenosis or regurgitation, leaflet thickening, and calcification) as normal or mildly, moderately, or severely abnormal. Concordance between modalities was expressed as kappa. Major discordance (a difference of more than one level of severity) was ascribed to the modality that gave the lesser severity. CD-ROM was used to store digital data (20:1 lossy compression), and super-VHS videotape was used to store video data. The reading time and storage costs for each modality were compared. Results. Measured parameters were highly concordant (ejection fraction was 52% ± 13% by both). Major discordance was rare, and lesser values were reported with digital rather than video interpretation in the categories of aortic and mitral valve thickening (1% to 2%) and severity of mitral regurgitation (2%). Digital reading time was 6.8 ± 2.4 minutes, 38% shorter than with video (11.0 ± 3.0, range 8 to 22 minutes, P < .001). Compressed digital studies had an average size of 60 ± 14 megabytes (range 26 to 96 megabytes). Storage cost for video was A$0.62 per patient (18 studies per tape, total cost A$11.20), compared with A$0.31 per patient for digital storage (8 studies per CD-ROM, total cost A$2.50). Conclusion. Digital and video interpretation were highly concordant; in the few cases of major discordance, the digital scores were lower, perhaps reflecting undersampling. Use of additional views and longer clips may be indicated to minimize discordance with video in patients with complex problems. Digital interpretation offers a significant reduction in reading times and the cost of archiving.
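Concordance here is expressed as kappa; a minimal sketch of Cohen's kappa for paired categorical readings, with invented scores for illustration:

```python
from collections import Counter

def cohens_kappa(video, digital):
    # Agreement between two readings, corrected for the chance
    # agreement expected from each reading's marginal frequencies.
    n = len(video)
    observed = sum(a == b for a, b in zip(video, digital)) / n
    pv, pd = Counter(video), Counter(digital)
    expected = sum(pv[c] * pd[c] for c in pv.keys() | pd.keys()) / n**2
    return (observed - expected) / (1 - expected)

video_scores   = ["normal", "mild", "normal", "severe", "mild"]
digital_scores = ["normal", "mild", "mild",   "severe", "mild"]
print(cohens_kappa(video_scores, digital_scores))   # ~0.69
```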

Relevance:

20.00%

Publisher:

Abstract:

Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code the models themselves using the simulation packages available to them. Quality assurance of such models is difficult. While benchmarking problems have been developed and are available, comparing simulation data with that of commercial models leads only to the detection, not the isolation, of errors, and identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: first, possible errors are classified according to their place in the model structure, and a feature matrix is established for each class of errors. Second, an observer is designed to generate residuals such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates the coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM1 activated sludge model, in which a newly coded model was verified against a known implementation. The method is also applicable to the simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
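A minimal sketch of the isolation step, assuming the observer's residual and the per-class feature matrices are already available as NumPy arrays; the residual is attributed to the error class whose feature subspace leaves the smallest projection error (all names here are illustrative):

```python
import numpy as np

def isolate_error(residual, feature_matrices):
    # Attribute the residual to the error class whose feature matrix
    # spans it best, i.e. smallest least-squares projection error.
    best_class, best_err = None, np.inf
    for name, F in feature_matrices.items():
        coeffs, *_ = np.linalg.lstsq(F, residual, rcond=None)
        leftover = np.linalg.norm(residual - F @ coeffs)
        if leftover < best_err:
            best_class, best_err = name, leftover
    return best_class
```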

Relevance:

20.00%

Publisher:

Abstract:

Computational simulations of the title reaction are presented, covering a temperature range from 300 to 2000 K. At lower temperatures we find that initial formation of the cyclopropene complex by addition of methylene to acetylene is irreversible, as is the stabilisation process via collisional energy transfer. Product branching between propargyl and the stable isomers is predicted at 300 K as a function of pressure for the first time. At intermediate temperatures (1200 K), complex temporal evolution involving multiple steady states begins to emerge. At high temperatures (2000 K) the timescale for subsequent unimolecular decay of thermalised intermediates begins to impinge on the timescale for reaction of methylene, such that the rate of formation of propargyl product does not admit a simple analysis in terms of a single time-independent rate constant until the methylene supply becomes depleted. Likewise, at the elevated temperatures the thermalised intermediates cannot be regarded as irreversible product channels. Our solution algorithm involves spectral propagation of a symmetrised version of the discretized master equation matrix, and is implemented in a high precision environment which makes hitherto unachievable low-temperature modelling a reality.
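A minimal sketch of spectral propagation for a master equation dp/dt = M p, assuming a rate matrix M that satisfies detailed balance with respect to equilibrium populations f; ordinary double precision stands in for the paper's high-precision environment:

```python
import numpy as np

def propagate(M, f, p0, t):
    # Similarity transform with D = diag(sqrt(f)) symmetrises M
    # when M obeys detailed balance: S = D^-1 M D.
    d = np.sqrt(f)
    S = M * np.outer(1.0 / d, d)
    S = 0.5 * (S + S.T)                 # clean up rounding asymmetry
    w, U = np.linalg.eigh(S)            # spectral decomposition of S
    y0 = p0 / d                         # transform initial populations
    y_t = U @ (np.exp(w * t) * (U.T @ y0))
    return d * y_t                      # transform back to populations
```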

Relevance:

20.00%

Publisher:

Abstract:

The QU-GENE Computing Cluster (QCC) is a hardware and software solution for automating and speeding up large QU-GENE (QUantitative GENEtics) simulation experiments that are designed to examine the properties of genetic models, particularly those involving factorial combinations of treatment levels. QCC automates the distribution of the components of a simulation experiment among networked single-processor computers to achieve the speedup.
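The kind of distribution QCC automates can be sketched as farming out each factorial combination of treatment levels to a pool of workers; `run_experiment` and the factor levels below are illustrative placeholders, not QU-GENE's actual interface:

```python
from itertools import product
from multiprocessing import Pool

def run_experiment(params):
    heritability, pop_size, cycles = params
    # ... call the simulation engine for this treatment combination ...
    return params, f"h2={heritability}, N={pop_size}, cycles={cycles}"

if __name__ == "__main__":
    # Factorial combination of treatment levels, one job per cell.
    jobs = list(product([0.1, 0.5, 0.9],   # heritability
                        [100, 500],        # population size
                        [5, 10]))          # cycles of selection
    with Pool() as pool:                   # one worker per CPU core
        for params, result in pool.map(run_experiment, jobs):
            print(result)
```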

Relevance:

20.00%

Publisher:

Abstract:

We developed a general model to assess patient activity within the primary and secondary health-care sectors following a dermatology outpatient consultation. Based on observed variables from the UK teledermatology trial, the model showed that up to 11 doctor-patient interactions occurred before a patient was ultimately discharged from care. In a cohort of 1000 patients, the average number of health-care visits was 2.4 (range 1-11). Simulation analysis suggested that the most important parameter affecting the total number of doctor-patient interactions is patient discharge from care following the initial consultation, which implies that resources should be concentrated in this area. The introduction of teledermatology (either real-time or store-and-forward) changes the values of the model parameters. The model provides a quantitative tool for planning the future provision of dermatology health care.
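A minimal Monte Carlo sketch of the patient-flow idea, where each visit ends in discharge with some probability; the probabilities and the visit cap below are illustrative assumptions, not the trial's fitted parameters:

```python
import random

def simulate_patient(p_first=0.6, p_later=0.7, max_visits=11):
    # Number of doctor-patient interactions before discharge; the
    # discharge probability after the initial consultation (p_first)
    # is the parameter the paper identifies as most influential.
    visits = 1
    while visits < max_visits:
        p = p_first if visits == 1 else p_later
        if random.random() < p:
            break
        visits += 1
    return visits

cohort = [simulate_patient() for _ in range(1000)]
print(sum(cohort) / len(cohort))    # average visits in the cohort
```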