983 results for Methods: numerical
Abstract:
OBJECTIVE. Coronary MDCT angiography has been shown to be an accurate noninvasive tool for the diagnosis of obstructive coronary artery disease (CAD). Its sensitivity and negative predictive value for diagnosing percentage of stenosis are unsurpassed compared with those of other noninvasive testing methods. However, in its current form, it provides no information regarding the physiologic impact of CAD and is a poor predictor of myocardial ischemia. CORE320 is a multicenter multinational diagnostic study with the primary objective to evaluate the diagnostic accuracy of 320-MDCT for detecting coronary artery luminal stenosis and corresponding myocardial perfusion deficits in patients with suspected CAD compared with the reference standard of conventional coronary angiography and SPECT myocardial perfusion imaging. CONCLUSION. We aim to describe the CT acquisition, reconstruction, and analysis methods of the CORE320 study.
Abstract:
PHWAT is a new model that couples a geochemical reaction model (PHREEQC-2) with a density-dependent groundwater flow and solute transport model (SEAWAT) using the split-operator approach. PHWAT was developed to simulate multi-component reactive transport in variable-density groundwater flow. Fluid density in PHWAT depends not only on the concentration of a single species, as in SEAWAT, but also on the concentrations of other dissolved chemicals that can be subject to reactive processes. Simulation results of PHWAT and PHREEQC-2 were compared in their predictions of effluent concentration from a column experiment. Both models produced identical results, showing that PHWAT has correctly coupled the sub-packages. PHWAT was then applied to the simulation of a tank experiment in which seawater intrusion was accompanied by cation exchange. The density dependence of the intrusion and the snow-plough effect in the breakthrough curves were reflected in the model simulations, which were in good agreement with the measured breakthrough data. Comparison simulations that, in turn, excluded density effects and reactions allowed us to quantify the marked effect of ignoring these processes. Next, we explored numerical issues involved in the practical application of PHWAT using the example of a dense plume flowing into a tank containing fresh water. It was shown that PHWAT could model physically unstable flow and that numerical instabilities were suppressed. Physical instability developed in the model in accordance with the increase of the modified Rayleigh number for density-dependent flow, in agreement with previous research. (c) 2004 Elsevier Ltd. All rights reserved.
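The split-operator coupling described above can be illustrated with a minimal one-dimensional sketch: an explicit upwind transport step alternated with a first-order decay "reaction" step. Both operators and all parameters here are illustrative stand-ins for SEAWAT's flow/transport and PHREEQC-2's geochemistry, not PHWAT's actual numerics:

```python
import numpy as np

def transport_step(c, v, dt, dx):
    """Explicit upwind advection step (the 'flow/transport' operator)."""
    cn = c.copy()
    cn[1:] -= v * dt / dx * (c[1:] - c[:-1])
    return cn

def reaction_step(c, k, dt):
    """First-order decay (a toy stand-in for the geochemical operator)."""
    return c * np.exp(-k * dt)

def split_operator(c, v, k, dt, dx, nsteps):
    """Sequential (non-iterative) split-operator coupling: advance
    transport over the full time step, then react, and repeat."""
    for _ in range(nsteps):
        c = transport_step(c, v, dt, dx)
        c = reaction_step(c, k, dt)
    return c
```

Because each operator is applied over the full time step in sequence, the splitting error is first order in dt, which is why split-operator codes keep the step small relative to the reaction time scales.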
Abstract:
Minimal perfect hash functions are used for memory efficient storage and fast retrieval of items from static sets. We present an infinite family of efficient and practical algorithms for generating order preserving minimal perfect hash functions. We show that almost all members of the family construct space and time optimal order preserving minimal perfect hash functions, and we identify the one with minimum constants. Members of the family generate a hash function in two steps. First a special kind of function into an r-graph is computed probabilistically. Then this function is refined deterministically to a minimal perfect hash function. We give strong theoretical evidence that the first step uses linear random time. The second step runs in linear deterministic time. The family not only has theoretical importance, but also offers the fastest known method for generating perfect hash functions.
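The two-step construction can be sketched for the simplest case (r = 2): keys are mapped probabilistically onto the edges of a random graph, and if the graph is acyclic, vertex values are assigned deterministically so each key hashes to its original index. This is a simplified illustration of the idea, not the paper's exact algorithm; `hash((seed, k))` stands in for a proper universal hash family:

```python
import random
from collections import defaultdict

def build_mphf(keys, max_tries=200):
    """Order-preserving minimal perfect hash, two-step scheme (r = 2):
    1) probabilistic: map each key i to a random graph edge (h1(k), h2(k));
    2) deterministic: if the graph is acyclic, label vertices g[] so that
       (g[h1(k)] + g[h2(k)]) mod n equals the key's original index i."""
    n = len(keys)
    m = 3 * n  # ~3n vertices keeps the random graph acyclic with decent probability
    for _ in range(max_tries):
        s1, s2 = random.getrandbits(32), random.getrandbits(32)
        h1 = lambda k: hash((s1, k)) % m
        h2 = lambda k: hash((s2, k)) % m
        edges = [(h1(k), h2(k)) for k in keys]
        if any(u == v for u, v in edges):
            continue  # self-loop: retry with fresh hash functions
        adj = defaultdict(list)
        for i, (u, v) in enumerate(edges):
            adj[u].append((v, i))
            adj[v].append((u, i))
        g, seen, ok = {}, set(), True
        for root in list(adj):
            if root in g or not ok:
                continue
            g[root] = 0
            stack = [root]
            while stack and ok:
                u = stack.pop()
                for v, i in adj[u]:
                    if i in seen:
                        continue
                    seen.add(i)
                    if v in g:
                        ok = False  # this edge closes a cycle: graph fails
                        break
                    g[v] = (i - g[u]) % n  # force key i onto slot i
                    stack.append(v)
        if ok:
            return lambda k: (g[h1(k)] + g[h2(k)]) % n
    raise RuntimeError("no acyclic graph found; increase m or max_tries")
```

The probabilistic first step is the retry loop (expected linear time when the graph is sparse enough); the second step is the single deterministic traversal that assigns `g`.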
Abstract:
Little consensus exists in the literature regarding methods for determination of the onset of electromyographic (EMG) activity. The aim of this study was to compare the relative accuracy of a range of computer-based techniques with respect to EMG onset determined visually by an experienced examiner. Twenty-seven methods were compared which varied in terms of EMG processing (low pass filtering at 10, 50 and 500 Hz), threshold value (1, 2 and 3 SD beyond mean of baseline activity) and the number of samples for which the mean must exceed the defined threshold (20, 50 and 100 ms). Three hundred randomly selected trials of a postural task were evaluated using each technique. The visual determination of EMG onset was found to be highly repeatable between days. Linear regression equations were calculated for the values selected by each computer method which indicated that the onset values selected by the majority of the parameter combinations deviated significantly from the visually derived onset values. Several methods accurately selected the time of onset of EMG activity and are recommended for future use. Copyright (C) 1996 Elsevier Science Ireland Ltd.
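A minimal version of the threshold-based detectors compared above (rectify, low-pass, baseline mean + k SD threshold, sustained-window test) might look like the following sketch; the moving-average smoother stands in for a proper Butterworth low-pass filter, and the parameter defaults are illustrative:

```python
import numpy as np

def emg_onset(signal, fs, baseline_ms=100, k=3, win_ms=50, cutoff_hz=50):
    """Threshold-based EMG onset detection: rectify, smooth, set the
    threshold at baseline mean + k*SD, and report the first instant whose
    following win_ms window stays above the threshold on average."""
    n_smooth = max(int(fs / cutoff_hz), 1)
    env = np.convolve(np.abs(signal), np.ones(n_smooth) / n_smooth, mode="same")
    nb = int(fs * baseline_ms / 1000)
    base = env[n_smooth:nb]          # skip the smoother's start-up transient
    thr = base.mean() + k * base.std()
    nw = int(fs * win_ms / 1000)
    for i in range(nb, env.size - nw):
        if env[i:i + nw].mean() > thr:
            return i / fs            # onset time in seconds
    return None                      # no onset detected
```

The 27 combinations evaluated in the study correspond to sweeping the filter cutoff, the SD multiplier k, and the window length over the values listed in the abstract.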
Abstract:
Nanocomposite materials have received considerable attention in recent years due to their novel properties. Grain boundaries are considered to play an important role in nanostructured materials. This work focuses on the finite element analysis of the effect of grain boundaries on the overall mechanical properties of aluminium/alumina composites. A grain boundary is incorporated into the commonly used unit cell model to investigate its effect on material properties. By combining the unit cell model with an indentation model, coupled with experimental indentation measurements, the "effective" plastic property of the grain boundary is estimated. In addition, the strengthening mechanism is also discussed based on the Estrin-Mecking model.
Abstract:
This study investigated the effect of two anti-pronation taping techniques on vertical navicular height, an indicator of foot pronation, immediately after application and after 20 min of exercise. The taping techniques were the low-Dye (LD) and the low-Dye with the addition of calcaneal slings and reverse sixes (LDCR). A repeated-measures design was used. LDCR was superior to LD and to the control condition both immediately after application and after exercise; LD was better than the control immediately after application but not after exercise. These findings provide practical direction to clinicians who regularly use anti-pronation taping techniques.
Abstract:
Field studies have shown that the elevation of the beach groundwater table varies with the tide, and such variations significantly affect beach erosion or accretion. In this paper, we present a BEM (Boundary Element Method) model for simulating the tidal fluctuation of the beach groundwater table. The model solves the two-dimensional flow equation subject to free and moving boundary conditions, including the seepage dynamics at the beach face. The simulated seepage faces were found to agree with the predictions of a simple model (Turner, 1993). The advantage of the present model is, however, that it can be used with little modification to simulate more complicated cases, e.g., surface recharge from rainfall and drainage in the aquifer may be included (the latter is related to the beach dewatering technique). The model also reproduced well the field data of Nielsen (1990). In particular, the model replicated three distinct features of local water table fluctuations: a steep rising phase versus a flat falling phase, amplitude attenuation, and phase lagging.
Abstract:
High-pressure homogenization is a key unit operation used to disrupt cells containing intracellular bioproducts. Modeling and optimization of this unit are restrained by a lack of information on the flow conditions within a homogenizer valve. A numerical investigation of the impinging radial jet within a homogenizer valve is presented. Results for a laminar and a turbulent (k-epsilon turbulence model) jet are obtained using the PHOENICS finite-volume code. Experimental measurement of the stagnation-region width and correlation of the cell disruption efficiency with jet stagnation pressure both indicate that the impinging jet in the homogenizer system examined is likely to be laminar under normal operating conditions. Correlation of disruption data with laminar stagnation pressure provides a better description of experimental variability than existing correlations using total pressure drop or the grouping 1/(Y^2 h^2).
Abstract:
A robust semi-implicit central partial difference algorithm for the numerical solution of coupled stochastic parabolic partial differential equations (PDEs) is described. This can be used for calculating correlation functions of systems of interacting stochastic fields. Such field equations can arise in the description of Hamiltonian and open systems in the physics of nonlinear processes, and may include multiplicative noise sources. The algorithm can be used for studying the properties of nonlinear quantum or classical field theories. The general approach is outlined and applied to a specific example, namely the quantum statistical fluctuations of ultra-short optical pulses in chi(2) parametric waveguides. This example uses a non-diagonal coherent state representation, and correctly predicts the sub-shot-noise level spectral fluctuations observed in homodyne detection measurements. It is expected that the methods used will be applicable to higher-order correlation functions and other physical problems as well. A stochastic differencing technique for reducing sampling errors is also introduced. This involves solving nonlinear stochastic parabolic PDEs in combination with a reference process, which uses the Wigner representation in the example presented here. A computer implementation on MIMD parallel architectures is discussed. (C) 1997 Academic Press.
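The flavour of a semi-implicit central-difference step can be shown on the simplest case, an additive-noise stochastic heat equation du = D u_xx dt + sigma dW. This sketch uses a plain backward-Euler step in time with a central Laplacian in space and is not the paper's full algorithm; all parameters are illustrative:

```python
import numpy as np

def semi_implicit_step(u, dt, dx, D, sigma, rng):
    """One semi-implicit step for du = D u_xx dt + sigma dW:
    the diffusion term is treated implicitly via a central second
    difference, the additive noise explicitly."""
    n = u.size
    # Solve (I - dt*D*L) u_new = u + sigma*sqrt(dt)*xi, L = central Laplacian
    r = dt * D / dx**2
    A = np.diag(np.full(n, 1 + 2 * r)) \
        + np.diag(np.full(n - 1, -r), 1) \
        + np.diag(np.full(n - 1, -r), -1)
    rhs = u + sigma * np.sqrt(dt) * rng.standard_normal(n)
    return np.linalg.solve(A, rhs)
```

A production code would use a tridiagonal solver rather than a dense `np.linalg.solve`; the implicit treatment is what keeps the step stable for stiff diffusion, while noise sampling remains cheap and explicit.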
Abstract:
When linear equality constraints are invariant through time they can be incorporated into estimation by restricted least squares. If, however, the constraints are time-varying, this standard methodology cannot be applied. In this paper we show how to incorporate linear time-varying constraints into the estimation of econometric models. The method involves the augmentation of the observation equation of a state-space model prior to estimation by the Kalman filter. Numerical optimisation routines are used for the estimation. A simple example drawn from demand analysis is used to illustrate the method and its application.
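The augmentation idea can be sketched as follows: the time-varying constraint R_t beta_t = r_t is stacked onto the observation equation as extra rows with (near-)zero measurement variance before an ordinary Kalman update. The random-walk transition and all dimensions below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def kf_update(beta, P, H, y, R_obs):
    """Standard Kalman measurement update for observation y = H beta + e."""
    S = H @ P @ H.T + R_obs
    K = P @ H.T @ np.linalg.inv(S)
    beta = beta + K @ (y - H @ beta)
    P = (np.eye(len(beta)) - K @ H) @ P
    return beta, P

def constrained_update(beta, P, x, y, sigma2, R_t, r_t):
    """Augment the observation (x', y) with the time-varying constraint
    R_t beta = r_t; a tiny variance on the constraint rows enforces it
    almost exactly within the same update."""
    H = np.vstack([x.reshape(1, -1), R_t])
    yy = np.concatenate([[y], r_t])
    R_obs = np.diag([sigma2] + [1e-10] * len(r_t))
    return kf_update(beta, P, H, yy, R_obs)
```

Because the constraint enters as just another observation, the same Kalman filter machinery (and numerical optimisation of any unknown variances) applies unchanged even when R_t and r_t differ at every time step.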
Abstract:
Here, we examine morphological changes in cortical thickness of patients with Alzheimer's disease (AD) using image analysis algorithms for brain structure segmentation and study automatic classification of AD patients using cortical and volumetric data. Cortical thickness of AD patients (n = 14) was measured using MRI cortical surface-based analysis and compared with healthy subjects (n = 20). Data were analyzed using an automated algorithm for tissue segmentation and classification. A Support Vector Machine (SVM) was applied over the volumetric measurements of subcortical and cortical structures to separate AD patients from controls. The group analysis showed cortical thickness reduction in the superior temporal lobe, parahippocampal gyrus, and entorhinal cortex in both hemispheres. We also found cortical thinning in the isthmus of the cingulate gyrus and the middle temporal gyrus in the right hemisphere, as well as a reduction of the cortical mantle in areas previously shown to be associated with AD. We also confirmed that automatic classification algorithms (SVM) can help distinguish AD patients from healthy controls. Moreover, the same areas implicated in the pathogenesis of AD were the main parameters driving the classification algorithm. While the patient sample used in this study was relatively small, we expect that using a database of regional volumes derived from MRI scans of a large number of subjects will increase the SVM power of AD patient identification.
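As a toy illustration of the classification step, the sketch below trains a linear SVM with the Pegasos sub-gradient method on synthetic "regional volume" features. A real analysis would use a library SVM with cross-validation; every variable here is simulated and the trainer is a minimal stand-in, not the study's pipeline:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal linear SVM via the Pegasos sub-gradient method.
    X: rows of feature vectors (imagined regional volumes);
    y: labels in {-1, +1} (e.g. control vs patient)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)           # Pegasos step size schedule
            w *= (1 - eta * lam)            # regularization shrinkage
            if y[i] * (X[i] @ w + b) < 1:   # hinge-loss margin violation
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

def predict(w, b, X):
    return np.sign(X @ w + b)
```

The learned weight vector `w` plays the role the abstract describes: its largest-magnitude entries indicate which regional measurements drive the separation.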
Abstract:
Numerical methods related to Krylov subspaces are widely used in large sparse numerical linear algebra. Vectors in these subspaces are manipulated via their representation onto orthonormal bases. Nowadays, on serial computers, the method of Arnoldi is considered as a reliable technique for constructing such bases. However, although easily parallelizable, this technique is not as scalable as expected for communications. In this work we examine alternative methods aimed at overcoming this drawback. Since they retrieve upon completion the same information as Arnoldi's algorithm does, they enable us to design a wide family of stable and scalable Krylov approximation methods for various parallel environments. We present timing results obtained from their implementation on two distributed-memory multiprocessor supercomputers: the Intel Paragon and the IBM Scalable POWERparallel SP2. (C) 1997 by John Wiley & Sons, Ltd.
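The Arnoldi process whose communication pattern motivates the alternatives can be sketched as follows (modified Gram-Schmidt variant; the sequential inner products in the inner loop are the communication bottleneck on distributed-memory machines):

```python
import numpy as np

def arnoldi(A, v0, m):
    """Arnoldi iteration: build an orthonormal basis V of the Krylov
    subspace span{v0, A v0, ..., A^{m-1} v0} and the upper Hessenberg
    matrix H satisfying A V[:, :m] = V H."""
    n = v0.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w         # each dot product is a global reduction
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:
            return V[:, :j + 1], H[:j + 1, :j]   # invariant subspace found
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

Alternatives of the kind the paper studies recover the same basis information while batching or restructuring these reductions to cut the number of synchronization points.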
Abstract:
Aims We have characterized the relative dispersion of vascular and extravascular markers in the limbs of three patients undergoing isolated limb perfusions with the cytotoxic agent melphalan for recurrent malignant melanoma, both before and after melphalan dosing. Methods A bolus of injectate containing [Cr-51]-labelled red blood cells, [C-14]-sucrose and [H-3]-water was injected into an iliac or femoral artery and outflow samples were collected at 1 s intervals by a fraction collector. The radioactivity due to each isotope was analysed by either gamma [Cr-51] or beta [C-14 and H-3] counting. The moments of the outflow fraction-time profiles were estimated by a nonparametric (numerical integration) method and a parametric model (sum of two inverse Gaussian functions). Results The availability, mean transit time and normalised variance (CV2) obtained for labelled red blood cells, sucrose and water were similar before and after melphalan dosing and with the two methods of calculation, but varied between the patients. Conclusions The vascular space is not well-stirred but is characterized by a CV2 similar to that reported previously for in situ rat hind limb and rat liver perfusions. A flow-limited blood-tissue exchange was observed for the permeating indicators. Administration of melphalan did not influence the distribution characteristics of the indicators.
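The nonparametric (numerical integration) moment estimates can be sketched as follows. The trapezoidal rule below stands in for whatever quadrature the study used, and the gamma-shaped test curve in the usage example is synthetic:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule, written out explicitly to keep the sketch
    self-contained."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def outflow_moments(t, c):
    """Nonparametric moments of an outflow concentration-time profile:
    area under the curve (availability up to a dose scale), mean transit
    time, and normalised variance CV^2 = variance / MTT^2."""
    auc = _trapz(c, t)                           # zeroth moment
    mtt = _trapz(t * c, t) / auc                 # first moment: mean transit time
    var = _trapz((t - mtt) ** 2 * c, t) / auc    # second central moment
    return auc, mtt, var / mtt ** 2
```

For a Gamma(2, 1)-shaped washout curve c(t) = t e^(-t), the exact values are AUC = 1, MTT = 2 and CV^2 = 0.5, which makes a convenient accuracy check for the quadrature.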
Abstract:
Objective. The purpose of this study was to estimate the Down syndrome detection and false-positive rates for second-trimester sonographic prenasal thickness (PT) measurement alone and in combination with other markers. Methods. Multivariate log Gaussian modeling was performed using numerical integration. Parameters for the PT distribution, in multiples of the normal gestation-specific median (MoM), were derived from 105 Down syndrome and 1385 unaffected pregnancies scanned at 14 to 27 weeks. The data included a new series of 25 cases and 535 controls combined with 4 previously published series. The means were estimated by the median and the SDs by the 10th to 90th range divided by 2.563. Parameters for other markers were obtained from the literature. Results. A log Gaussian model fitted the distribution of PT values well in Down syndrome and unaffected pregnancies. The distribution parameters were as follows: Down syndrome, mean, 1.334 MoM; log(10) SD, 0.0772; unaffected pregnancies, 0.995 and 0.0752, respectively. The model-predicted detection rates for 1%, 3%, and 5% false-positive rates for PT alone were 35%, 51%, and 60%, respectively. The addition of PT to a 4 serum marker protocol increased detection by 14% to 18% compared with serum alone. The simultaneous sonographic measurement of PT and nasal bone length increased detection by 19% to 26%, and with a third sonographic marker, nuchal skin fold, performance was comparable with first-trimester protocols. Conclusions. Second-trimester screening with sonographic PT and serum markers is predicted to have a high detection rate, and further sonographic markers could perform comparably with first-trimester screening protocols.
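The kind of calculation behind the model-predicted rates can be illustrated for a single marker: given log-Gaussian parameters for affected and unaffected pregnancies, the detection rate at a fixed false-positive rate follows from the two normal distributions. Note that this single-marker value cutoff will not reproduce the abstract's figures exactly, since those come from multivariate risk-based modeling:

```python
from math import log10, sqrt, erf

def norm_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def norm_ppf(p, lo=-10.0, hi=10.0):
    # bisection inverse of norm_cdf, to keep the sketch dependency-free
    for _ in range(100):
        mid = (lo + hi) / 2
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def detection_rate(fpr, mu_aff, sd_aff, mu_unaff, sd_unaff):
    """Detection rate of a single log-Gaussian marker at a fixed
    false-positive rate, cutting on the marker value itself."""
    cutoff = mu_unaff + norm_ppf(1 - fpr) * sd_unaff
    return 1 - norm_cdf((cutoff - mu_aff) / sd_aff)

# parameters quoted in the abstract, on the log10 MoM scale
dr = detection_rate(0.05, log10(1.334), 0.0772, log10(0.995), 0.0752)
```

Screening protocols combine several such markers through their joint likelihood ratios, which is why adding PT, nasal bone length, or nuchal skin fold raises the modeled detection rate beyond this single-marker figure.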