983 results for Mathematical Techniques - Integration


Relevance: 20.00%

Publisher:

Abstract:

The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates use a pre-calculated table, step size control cannot rely on the integration formulas themselves. A step size control scheme for the table-driven velocity and position calculation instead uses the difference between the result of one big step and that of two small steps. This variable time step method automatically chooses a suitable time step size for each particle at each step according to the local conditions. Simulations using a fixed time step are compared with those using the variable time step. The difference in computation time for the same accuracy with a variable step size (compared to a fixed step) depends on the particular problem; for a simple test case the times are roughly similar. However, the variable step size gives the required accuracy on the first run, whereas a fixed step size may require several runs to verify the simulation accuracy, or a conservative step size that results in longer run times. (C) 2001 Elsevier Science Ltd. All rights reserved.
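
As a rough illustration of the step-doubling idea described above (not the authors' table-driven scheme), the sketch below compares one step of size dt against two steps of size dt/2 and uses the difference to accept the step and adjust dt. The integrator, force law, tolerance, and error-scaling rule are illustrative assumptions.

```python
import numpy as np

def step(x, v, a_func, dt):
    """One semi-implicit Euler step: update v, then x."""
    v_new = v + dt * a_func(x, v)
    x_new = x + dt * v_new
    return x_new, v_new

def adaptive_step(x, v, a_func, dt, tol=1e-6, safety=0.9):
    """Advance one particle by one accepted step, choosing dt by comparing
    a single step of size dt with two steps of size dt/2."""
    while True:
        x1, v1 = step(x, v, a_func, dt)            # one big step
        xh, vh = step(x, v, a_func, dt / 2)        # two small steps
        x2, v2 = step(xh, vh, a_func, dt / 2)
        err = np.max(np.abs(x2 - x1))              # position difference as error estimate
        if err <= tol:
            # accept the more accurate two-half-step result, try a larger dt next time
            dt_next = dt * min(2.0, safety * np.sqrt(tol / max(err, 1e-300)))
            return x2, v2, dt_next
        dt *= safety * np.sqrt(tol / err)          # reject and retry with a smaller dt

# Example: damped linear spring as a stand-in for a DEM contact force
accel = lambda x, v: -50.0 * x - 0.5 * v
x, v, dt = np.array([1.0]), np.array([0.0]), 1e-2
for _ in range(100):
    x, v, dt = adaptive_step(x, v, accel, dt)
```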

Relevance: 20.00%

Publisher:

Abstract:

Computer assisted learning has an important role in the teaching of pharmacokinetics to health sciences students because it transfers the emphasis from the purely mathematical domain to an 'experiential' domain in which graphical and symbolic representations of actions and their consequences form the major focus for learning. Basic pharmacokinetic concepts can be taught by experimenting with how dose and dosage interval interact with drug absorption (e.g. absorption rate, bioavailability), drug distribution (e.g. volume of distribution, protein binding) and drug elimination (e.g. clearance) to determine drug concentrations, using library ('canned') pharmacokinetic models. Such 'what if' approaches are found in calculator-simulators such as PharmaCalc, Practical Pharmacokinetics and PK Solutions. Others such as SAAM II, ModelMaker, and Stella represent the 'systems dynamics' genre, which requires the user to conceptualise a problem and formulate the model on-screen using symbols, icons, and directional arrows. The choice of software should be determined by the aims of the subject/course, the experience and background of the students in pharmacokinetics, and institutional factors including price and networking capabilities of the package(s). Enhanced learning may result if the computer teaching of pharmacokinetics is supported by tutorials, especially where the techniques are applied to solving problems in which the link with healthcare practices is clearly established.
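
To make the 'what if' style of experiment concrete, here is a minimal sketch of a library-style pharmacokinetic model: a one-compartment model with first-order absorption and elimination under repeated dosing, in which dose and dosage interval can be varied while clearance, volume of distribution, and absorption rate are held fixed. The parameter values are illustrative, and this does not reproduce any of the packages named above.

```python
import numpy as np

def one_compartment(dose, tau, n_doses, ka, cl, vd, f=1.0, dt=0.1):
    """Concentration-time profile for repeated oral dosing in a
    one-compartment model with first-order absorption and elimination."""
    ke = cl / vd                       # elimination rate constant
    t = np.arange(0.0, n_doses * tau, dt)
    c = np.zeros_like(t)
    for i in range(n_doses):
        td = t - i * tau               # time since the i-th dose
        mask = td >= 0
        # superposition of single-dose profiles (Bateman equation, ka != ke)
        c[mask] += (f * dose * ka / (vd * (ka - ke))) * (
            np.exp(-ke * td[mask]) - np.exp(-ka * td[mask]))
    return t, c

# "What if" experiment: halve the dosage interval at the same daily dose
t1, c1 = one_compartment(dose=500, tau=12, n_doses=10, ka=1.0, cl=5.0, vd=40.0)
t2, c2 = one_compartment(dose=250, tau=6,  n_doses=20, ka=1.0, cl=5.0, vd=40.0)
print(f"peak/trough with 500 mg q12h: {c1.max():.2f} / {c1[-1]:.2f} mg/L")
print(f"peak/trough with 250 mg q6h:  {c2.max():.2f} / {c2[-1]:.2f} mg/L")
```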

Relevance: 20.00%

Publisher:

Abstract:

This paper explores an approach to the implementation and evaluation of integrated health service delivery. It identifies the key issues involved in integration evaluation, provides a framework for assessment and identifies areas for the development of new tools and measures. A proactive role for evaluators in responding to health service reform is advocated.

Relevance: 20.00%

Publisher:

Abstract:

Objective-To compare the accuracy and feasibility of harmonic power Doppler and digitally subtracted colour coded grey scale imaging for the assessment of perfusion defect severity by single photon emission computed tomography (SPECT) in an unselected group of patients. Design-Cohort study. Setting-Regional cardiothoracic unit. Patients-49 patients (mean (SD) age 61 (11) years; 27 women, 22 men) with known or suspected coronary artery disease were studied with simultaneous myocardial contrast echo (MCE) and SPECT after standard dipyridamole stress. Main outcome measures-Regional myocardial perfusion by SPECT, performed with Tc-99m tetrofosmin, scored qualitatively and also quantitated as per cent maximum activity. Results-Normal perfusion was identified by SPECT in 225 of 270 segments (83%). Contrast echo images were interpretable in 92% of patients. The proportions of normal MCE by grey scale, subtracted, and power Doppler techniques were, respectively, 76%, 74%, and 88% (p < 0.05) at > 80% of maximum counts, compared with 65%, 69%, and 61% at < 60% of maximum counts. For each technique, specificity was lowest in the lateral wall, although power Doppler was the least affected. Grey scale and subtraction techniques were least accurate in the septal wall, but power Doppler showed particular problems in the apex. On a per patient analysis, the sensitivity was 67%, 75%, and 83% for detection of coronary artery disease using grey scale, colour coded, and power Doppler, respectively, with a significant difference between power Doppler and grey scale only (p < 0.05). Specificity was also the highest for power Doppler, at 55%, but not significantly different from subtracted colour coded images. Conclusions-Myocardial contrast echo using harmonic power Doppler has greater accuracy than grey scale imaging and digital subtraction. However, power Doppler appears to be less sensitive for mild perfusion defects.

Relevance: 20.00%

Publisher:

Abstract:

Regional planners, policy makers and policing agencies all recognize the importance of better understanding the dynamics of crime. Theoretical and application-oriented approaches which provide insights into why and where crimes take place are much sought after. Geographic information systems and spatial analysis techniques, in particular, are proving to be essential for studying criminal activity. However, the capabilities of these quantitative methods continue to evolve. This paper explores the use of geographic information systems and spatial analysis approaches for examining crime occurrence in Brisbane, Australia. The analysis highlights novel capabilities for the analysis of crime in urban regions.

Relevance: 20.00%

Publisher:

Abstract:

Petrov-Galerkin methods are known to be versatile techniques for the solution of a wide variety of convection-dispersion transport problems, including those involving steep gradients, but have hitherto received little attention by chemical engineers. We illustrate the technique by means of the well-known problem of simultaneous diffusion and adsorption in a spherical sorbent pellet comprised of spherical, non-overlapping microparticles of uniform size and investigate the uptake dynamics. Solutions to adsorption problems exhibit steep gradients when macropore diffusion controls or micropore diffusion controls, and the application of classical numerical methods to such problems can present difficulties. In this paper, a semi-discrete Petrov-Galerkin finite element method for numerically solving adsorption problems with steep gradients in bidisperse solids is presented. The numerical solution was found to match the analytical solution when the adsorption isotherm is linear and the diffusivities are constant. Computed results for the Langmuir isotherm and non-constant diffusivity in the microparticle are numerically evaluated for comparison with results of a fitted-mesh collocation method, which was proposed by Liu and Bhatia (Comput. Chem. Engng. 23 (1999) 933-943). The new method is simple, highly efficient, and well-suited to a variety of adsorption and desorption problems involving steep gradients. (C) 2001 Elsevier Science Ltd. All rights reserved.
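
The following is a minimal sketch of the Petrov-Galerkin (upwinded test function) idea on a model 1D steady convection-diffusion problem with a steep boundary layer, written in its equivalent added-diffusion form. It is not the paper's semi-discrete bidisperse adsorption formulation; the test problem and parameters are assumptions chosen only to show why the standard Galerkin scheme oscillates on a coarse mesh while the Petrov-Galerkin weighting does not.

```python
import numpy as np

def solve_conv_diff(u, eps, n, upwind=True):
    """Steady 1D convection-diffusion u c' = eps c'' on [0,1], c(0)=0, c(1)=1,
    linear elements on a uniform mesh. upwind=True applies the Petrov-Galerkin
    (upwinded test function) weighting in its equivalent added-diffusion form;
    upwind=False is the standard (Bubnov-)Galerkin scheme."""
    h = 1.0 / n
    pe = u * h / (2.0 * eps)                    # element Peclet number
    eps_eff = (u * h / 2.0) / np.tanh(pe) if upwind else eps
    A = np.zeros((n - 1, n - 1))
    b = np.zeros(n - 1)
    for i in range(n - 1):
        diag = 2.0 * eps_eff / h**2
        lower = -eps_eff / h**2 - u / (2.0 * h)
        upper = -eps_eff / h**2 + u / (2.0 * h)
        A[i, i] = diag
        if i > 0:
            A[i, i - 1] = lower
        if i < n - 2:
            A[i, i + 1] = upper
        if i == n - 2:
            b[i] -= upper * 1.0                  # boundary value c(1) = 1
    c = np.linalg.solve(A, b)
    return np.concatenate(([0.0], c, [1.0]))

x = np.linspace(0, 1, 21)
exact = (np.exp(50 * x) - 1) / (np.exp(50) - 1)   # exact solution for u/eps = 50
pg = solve_conv_diff(u=1.0, eps=0.02, n=20, upwind=True)
gal = solve_conv_diff(u=1.0, eps=0.02, n=20, upwind=False)
print("max Petrov-Galerkin error:", np.abs(pg - exact).max())
print("Galerkin oscillation (min value):", gal.min())
```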

Relevance: 20.00%

Publisher:

Abstract:

Some efficient solution techniques for solving models of noncatalytic gas-solid and fluid-solid reactions are presented. These models include those with non-constant diffusivities, for which the formulation reduces to that of a convection-diffusion problem. A singular perturbation problem results for such models in the presence of a large Thiele modulus, for which the classical numerical methods can present difficulties. For the convection-diffusion like case, the time-dependent partial differential equations are transformed by a semi-discrete Petrov-Galerkin finite element method into a system of ordinary differential equations of the initial-value type that can be readily solved. In the presence of a constant diffusivity in slab geometry, the convection-like terms are absent, and the combination of a fitted mesh finite difference method with a predictor-corrector method is used to solve the problem. Both methods are found to converge, and general reaction rate forms can be treated. These methods are simple and highly efficient for arbitrary particle geometry and parameters, including a large Thiele modulus. (C) 2001 Elsevier Science Ltd. All rights reserved.
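
As a small illustration of the singular-perturbation difficulty and the fitted-mesh remedy (a sketch, not the paper's method), the code below solves the dimensionless slab reaction-diffusion problem c'' = phi^2 c with c'(0) = 0 and c(1) = 1, whose exact solution is cosh(phi x)/cosh(phi), on a uniform mesh and on a mesh graded toward the surface where the steep layer sits. The grading function and parameter values are assumptions.

```python
import numpy as np

def solve_slab(phi, x):
    """Solve c'' = phi^2 c on [0,1] with c'(0)=0, c(1)=1 by three-point
    finite differences on the (possibly non-uniform) mesh x."""
    n = len(x)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0], A[0, 1] = 1.0, -1.0            # c'(0) ~ 0 (one-sided)
    for i in range(1, n - 1):
        hm, hp = x[i] - x[i - 1], x[i + 1] - x[i]
        A[i, i - 1] = 2.0 / (hm * (hm + hp))
        A[i, i + 1] = 2.0 / (hp * (hm + hp))
        A[i, i] = -A[i, i - 1] - A[i, i + 1] - phi**2
    A[-1, -1] = 1.0
    b[-1] = 1.0
    return np.linalg.solve(A, b)

phi, n = 50.0, 41
s = np.linspace(0.0, 1.0, n)
x_uniform = s
x_fitted = 1.0 - (1.0 - s) ** 3              # points clustered in the layer at x = 1
exact = lambda x: np.cosh(phi * x) / np.cosh(phi)

for name, x in [("uniform", x_uniform), ("fitted", x_fitted)]:
    c = solve_slab(phi, x)
    print(f"{name:8s} mesh: max |error| = {np.abs(c - exact(x)).max():.2e}")
```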

Relevance: 20.00%

Publisher:

Abstract:

The blending of coals has become popular as a way to improve the performance of coals, to meet the specifications of power plants, and to reduce the cost of coals. This article reviews the results and provides new information on ignition, flame stability, and carbon burnout studies of blended coals. The reviewed studies were conducted in laboratory-, pilot-, and full-scale facilities. The new information was obtained in pilot-scale studies. The results generally show that blending a high-volatile coal with a low-volatile coal or anthracite can improve the ignition, flame stability and burnout of the blends. This paper discusses two general methods to predict the performance of blended coals: (1) experiment; and (2) indices. Laboratory- and pilot-scale tests, at least, provide a relative ranking of the combustion performance of coal/blends in power station boilers. Several indices, including volatile matter content, heating value and a maceral index, can be used to predict the relative ranking of ignitability and flame stability of coals and blends. The maceral index, fuel ratio, and vitrinite reflectance can also be used to predict the absolute carbon burnout of coal and blends within limits. (C) 2000 Elsevier Science Ltd. All rights reserved.
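
A purely hypothetical calculation of the index approach: the proximate-analysis numbers below are made up, and simple mass-weighted additivity of blend properties is an assumption, but the sketch shows how volatile matter, heating value, and fuel ratio (fixed carbon / volatile matter) of a blend can be tabulated for ranking purposes.

```python
# Illustrative only: values are invented and additivity of blend properties is assumed.
coals = {
    # name: (volatile matter %, fixed carbon %, heating value MJ/kg)
    "high_volatile_bituminous": (35.0, 50.0, 28.0),
    "anthracite":               (6.0,  85.0, 30.5),
}

def blend(fractions):
    """Mass-weighted blend properties and fuel ratio (FC/VM)."""
    vm = sum(f * coals[n][0] for n, f in fractions.items())
    fc = sum(f * coals[n][1] for n, f in fractions.items())
    hv = sum(f * coals[n][2] for n, f in fractions.items())
    return {"VM_%": vm, "FC_%": fc, "HV_MJ/kg": hv, "fuel_ratio": fc / vm}

print(blend({"high_volatile_bituminous": 0.7, "anthracite": 0.3}))
```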

Relevance: 20.00%

Publisher:

Abstract:

The problem of designing spatially cohesive nature reserve systems that meet biodiversity objectives is formulated as a nonlinear integer programming problem. The multiobjective function minimises a combination of boundary length, area and failed representation of the biological attributes we are trying to conserve. The task is to reserve a subset of sites that best meet this objective. We use data on the distribution of habitats in the Northern Territory, Australia, to show how simulated annealing and a greedy heuristic algorithm can be used to generate good solutions to such large reserve design problems, and to compare the effectiveness of these methods.
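
A toy sketch of the simulated annealing approach on synthetic data (not the Northern Territory data set or the authors' implementation): sites are cells of a grid, the objective combines boundary length, area, and representation shortfall with assumed weights, and single-site flips are accepted with the Metropolis rule under a geometric cooling schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                                   # n x n grid of candidate sites
n_hab = 4
habitat = rng.random((n_hab, n, n))      # amount of each habitat in each cell (synthetic)
targets = 0.25 * habitat.reshape(n_hab, -1).sum(axis=1)   # conserve 25% of each habitat
w_boundary, w_area, w_shortfall = 1.0, 0.1, 50.0

def objective(sel):
    area = sel.sum()
    # boundary length: edges between selected and unselected cells (and the grid edge)
    padded = np.pad(sel.astype(int), 1)
    boundary = sum(np.abs(np.diff(padded, axis=ax)).sum() for ax in (0, 1))
    held = (habitat * sel).reshape(n_hab, -1).sum(axis=1)
    shortfall = np.clip(targets - held, 0.0, None).sum()
    return w_boundary * boundary + w_area * area + w_shortfall * shortfall

sel = rng.random((n, n)) < 0.3           # random initial reserve system
cost = objective(sel)
T = 10.0
for it in range(20000):
    i, j = rng.integers(n, size=2)
    sel[i, j] = ~sel[i, j]               # propose flipping one site in or out
    new_cost = objective(sel)
    if new_cost <= cost or rng.random() < np.exp((cost - new_cost) / T):
        cost = new_cost                  # accept the move
    else:
        sel[i, j] = ~sel[i, j]           # reject and undo
    T *= 0.9997                          # geometric cooling
print("final objective:", cost, "sites selected:", int(sel.sum()))
```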

Relevance: 20.00%

Publisher:

Abstract:

Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics, 44: 2, 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure. Naive implementation of the procedure can lead to computationally inefficient results. To reduce the computational cost a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
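
A minimal univariate sketch of the binned-data EM idea (the paper's contribution is the multivariate generalisation, where the bin integrals become multidimensional): with Gaussian components, the E-step needs only each component's probability mass in each bin, and the M-step needs the conditional first and second moments of the component within each bin, taken here from scipy's truncated normal. Everything below, including the synthetic data, is illustrative and is not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm, truncnorm

def em_binned_gmm(edges, counts, k=2, n_iter=200):
    """EM for a univariate k-component Gaussian mixture observed only as bin
    counts. edges has len(counts)+1 entries; counts[j] is the number of
    observations in [edges[j], edges[j+1])."""
    lo, hi = edges[:-1], edges[1:]
    centers = 0.5 * (lo + hi)
    n_total = counts.sum()
    # crude initialisation from the bin centres
    mu = np.quantile(np.repeat(centers, counts), np.linspace(0.25, 0.75, k))
    sigma = np.full(k, centers.std() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: probability mass of each component in each bin
        mass = np.array([pi[c] * (norm.cdf(hi, mu[c], sigma[c]) -
                                  norm.cdf(lo, mu[c], sigma[c])) for c in range(k)])
        resp = mass / np.maximum(mass.sum(axis=0), 1e-300)   # posterior per bin
        eff = resp * counts                                  # expected counts, shape (k, bins)
        # M-step: conditional moments of X within each bin, per component
        for c in range(k):
            a, b = (lo - mu[c]) / sigma[c], (hi - mu[c]) / sigma[c]
            m1 = truncnorm.mean(a, b, loc=mu[c], scale=sigma[c])
            m2 = truncnorm.var(a, b, loc=mu[c], scale=sigma[c]) + m1**2
            w = eff[c]
            nk = w.sum()
            keep = w > 1e-12 * nk        # ignore bins with negligible mass (guards tail round-off)
            mu_new = np.sum(np.where(keep, w * m1, 0.0)) / nk
            var_new = np.sum(np.where(keep, w * (m2 - 2 * mu_new * m1 + mu_new**2), 0.0)) / nk
            mu[c], sigma[c], pi[c] = mu_new, np.sqrt(var_new), nk / n_total
    return pi, mu, sigma

# Synthetic check: bin draws from a known two-component mixture
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 4000), rng.normal(4, 0.7, 2000)])
edges = np.linspace(-4, 7, 23)
counts, _ = np.histogram(data, bins=edges)
print(em_binned_gmm(edges, counts))
```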

Relevance: 20.00%

Publisher:

Abstract:

The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds some high level L before it becomes empty, starting from a given state. The approach is based on a Markov additive process representation of the buffer processes, leading to an exponential change of measure to be used in an importance sampling procedure. Unlike changes of measures proposed and studied in recent literature, the one derived here is a function of the content of the first buffer. We prove that when the first buffer is finite, this method yields asymptotically efficient simulation for any set of arrival and service rates. In fact, the relative error is bounded independent of the level L; a new result which is not established for any other known method. When the first buffer is infinite, we propose a natural extension of the exponential change of measure for the finite buffer case. In this case, the relative error is shown to be bounded (independent of L) only when the second server is the bottleneck; a result which is known to hold for some other methods derived through large deviations analysis. When the first server is the bottleneck, experimental results using our method seem to suggest that the relative error is bounded linearly in L.
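
For orientation, the sketch below is a crude Monte Carlo estimate of the quantity in question, the probability that the content of the second buffer reaches L before it empties, using the embedded jump chain of the two-node tandem Jackson network; the rates, starting state, and level are illustrative. For large L this estimator is exactly what becomes impractically expensive, which is what the importance-sampling change of measure in the paper addresses; the code does not implement that change of measure.

```python
import numpy as np

rng = np.random.default_rng(2)

def overflow_prob(lam, mu1, mu2, L, start=(0, 1), n_runs=50_000):
    """Crude Monte Carlo estimate of P(second buffer reaches L before it
    empties), starting from (q1, q2) = start, for a two-node tandem Jackson
    network with arrival rate lam and service rates mu1, mu2.
    Simulates the embedded jump chain of the continuous-time Markov chain."""
    hits = 0
    for _ in range(n_runs):
        q1, q2 = start
        while 0 < q2 < L:
            rates = np.array([lam,
                              mu1 if q1 > 0 else 0.0,
                              mu2 if q2 > 0 else 0.0])
            event = rng.choice(3, p=rates / rates.sum())
            if event == 0:
                q1 += 1                      # external arrival to node 1
            elif event == 1:
                q1 -= 1; q2 += 1             # service completion at node 1
            else:
                q2 -= 1                      # departure from node 2
        hits += (q2 >= L)
    return hits / n_runs

# Small L keeps the event common enough for crude Monte Carlo;
# the importance-sampling scheme in the paper targets large L.
print(overflow_prob(lam=1.0, mu1=2.0, mu2=1.5, L=8))
```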