939 results for Explicit numerical method


Relevance:

30.00%

Publisher:

Abstract:

We study preconditioning techniques for discontinuous Galerkin discretizations of isotropic linear elasticity problems in primal (displacement) formulation. We propose subspace correction methods based on a splitting of the vector-valued piecewise-linear discontinuous finite element space that are optimal with respect to the mesh size and the Lamé parameters. The pure displacement, mixed and traction-free problems are discussed in detail. We present a convergence analysis of the proposed preconditioners and include numerical examples that validate the theory and assess the performance of the preconditioners.
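
As a generic illustration of the subspace-correction idea (not the paper's elasticity-specific splitting of the discontinuous finite element space), the sketch below builds an additive preconditioner P^{-1} = sum_k R_k^T A_k^{-1} R_k from a partition of the unknowns and applies it inside conjugate gradients; the matrix, the block sizes and the tolerances are placeholders.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Placeholder SPD system (stands in for a DG elasticity stiffness matrix).
    n = 200
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    # Contiguous index blocks (stand-in for the subspace splitting of the FE space).
    blocks = [np.arange(i, min(i + 25, n)) for i in range(0, n, 25)]
    local_solvers = [spla.factorized(A[idx, :][:, idx].tocsc()) for idx in blocks]

    def apply_preconditioner(r):
        # Additive subspace correction: sum of exact solves on every subspace.
        z = np.zeros_like(r)
        for idx, solve in zip(blocks, local_solvers):
            z[idx] += solve(r[idx])
        return z

    M = spla.LinearOperator((n, n), matvec=apply_preconditioner)
    x, info = spla.cg(A, b, M=M)
    print("CG converged:", info == 0, "residual:", np.linalg.norm(b - A @ x))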

Relevance:

30.00%

Publisher:

Abstract:

In this paper the iterative MSFV method is extended to include the sequential implicit simulation of time-dependent problems involving the solution of a system of pressure-saturation equations. To control numerical errors in the simulation results, an error estimate based on the residual of the MSFV approximate pressure field is introduced. In the initial time steps of the simulation, iterations are employed until a specified accuracy in pressure is achieved. This initial solution is then used to improve the localization assumption at later time steps. Additional iterations in the pressure solution are employed only when the pressure residual becomes larger than a specified threshold value. The efficiency of the strategy and of the error-control criteria is investigated numerically. This paper also shows that it is possible to derive an a priori estimate and control, based on the allowed pressure-equation residual, to guarantee the desired accuracy in the saturation calculation.
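
The control strategy, iterating on the pressure only while its residual exceeds a prescribed threshold, can be illustrated with a generic stationary iteration; the sketch below uses a damped Jacobi smoother on a placeholder linear system rather than the MSFV operator, and the tolerance is an assumed value.

    import numpy as np

    def residual_norm(A, x, b):
        return np.linalg.norm(b - A @ x)

    def smooth(A, x, b, iterations=1):
        # Damped Jacobi sweeps (stand-in for the MSFV smoothing stages).
        D_inv = 1.0 / np.diag(A)
        for _ in range(iterations):
            x = x + 0.8 * D_inv * (b - A @ x)
        return x

    # Placeholder "pressure" system.
    n = 50
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x = np.zeros(n)  # approximate pressure carried over from the previous step

    threshold = 1e-4 * np.linalg.norm(b)  # assumed accuracy requirement
    while residual_norm(A, x, b) > threshold:
        x = smooth(A, x, b, iterations=5)
    print("final residual:", residual_norm(A, x, b))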

Relevance:

30.00%

Publisher:

Abstract:

Background: Ulcerative colitis (UC) is a chronic disease with a wide variety of treatment options, many of which are not evidence based. Supplementing available guidelines, which are often broadly defined, consensus-based and generally not tailored to reflect the individual patient situation, we developed explicit appropriateness criteria to assist and improve treatment decisions. Methods: We used the RAND appropriateness method, which does not force consensus. An extensive literature review was compiled, based on and supplementing, where necessary, the ECCO UC 2011 guidelines. EPATUC (endorsed by ECCO) was formed by 8 gastroenterologists, 2 surgeons and 2 general practitioners from throughout Europe. Clinical scenarios reflecting practice were rated on a 9-point scale from 1 (extremely inappropriate) to 9 (extremely appropriate), based on the experts' experience and the available literature. After extensive discussion, all scenarios were re-rated at a two-day panel meeting. The median rating and the degree of disagreement were used to categorize ratings into 3 categories: appropriate, uncertain and inappropriate. Results: 718 clinical scenarios were rated, structured in 13 main clinical presentations: non-refractory (n=64) or refractory (n=33) proctitis, mild to moderate left-sided (n=72) or extensive (n=48) colitis, severe colitis (n=36), steroid-dependent colitis (n=36), steroid-refractory colitis (n=55), acute pouchitis (n=96), maintenance of remission (n=248), colorectal cancer prevention (n=9) and fulminant colitis (n=9). Overall, 100 indications were judged appropriate (14%), 129 uncertain (18%) and 489 inappropriate (68%). Disagreement between experts was very low (6%). Conclusion: For the first time, explicit appropriateness criteria for the therapy of UC were developed that allow both specific and rapid therapeutic decision making and prospective assessment of treatment appropriateness. Comparison of these detailed scenarios with patient profiles encountered in the Swiss IBD cohort study indicates good concordance. The EPATUC criteria will be freely accessible on the internet (epatuc.ch).
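
A minimal sketch of the rating-aggregation step, using the classic RAND/UCLA convention in which the panel median and a disagreement test map each scenario to appropriate, uncertain or inappropriate. The abstract does not state the exact disagreement rule used by EPATUC, so the one-third rule below is an assumption.

    import statistics

    def categorize(ratings):
        """Map a list of 1-9 panel ratings to an appropriateness category.

        Assumed convention: disagreement when at least one third of panelists
        rate in the 1-3 range and at least one third rate in the 7-9 range.
        """
        n = len(ratings)
        low = sum(r <= 3 for r in ratings)
        high = sum(r >= 7 for r in ratings)
        disagreement = low >= n / 3 and high >= n / 3
        median = statistics.median(ratings)
        if disagreement or 4 <= median <= 6:
            return "uncertain"
        return "appropriate" if median >= 7 else "inappropriate"

    print(categorize([8, 9, 7, 8, 9, 8, 7, 9, 8, 7, 8, 9]))  # appropriate
    print(categorize([2, 2, 8, 9, 1, 3, 8, 9, 2, 8, 1, 9]))  # uncertain (disagreement)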

Relevance:

30.00%

Publisher:

Abstract:

Objectives: We are interested in the numerical simulation of the anastomotic region between the outflow cannula of the LVAD and the aorta. Segmentation, geometry reconstruction and grid generation from patient-specific data remain an issue because of the variable quality of DICOM images, in particular CT scans (e.g. metallic noise of the device, non-aortic contrast phase). We propose a general framework to overcome this problem and create suitable grids for numerical simulations. Methods: Preliminary treatment of the images is performed by reducing the level window and enhancing the contrast of the greyscale image using contrast-limited adaptive histogram equalization. A gradient anisotropic diffusion filter is applied to reduce the noise. Then, watershed segmentation algorithms and mathematical morphology filters allow the patient geometry to be reconstructed. This is done using the InsightToolKit library (www.itk.org). Finally, the Vascular Modeling ToolKit (www.vmtk.org) and gmsh (www.geuz.org/gmsh) are used to create the meshes for the fluid (blood) and the structure (arterial wall, outflow cannula) and to identify the boundary layers a priori. The method is tested on five patients with left ventricular assist devices who underwent a CT-scan exam. Results: The method produced good results in four patients. The anastomosis area is recovered and the generated grids are suitable for numerical simulations. In one patient the method failed to produce a good segmentation because of the small dimension of the aortic arch with respect to the image resolution. Conclusions: The described framework allows the use of data that could not otherwise be segmented by standard automatic segmentation tools. In particular, the computational grids that have been generated are suitable for simulations that take into account fluid-structure interactions. Finally, the presented method features good reproducibility and fast application.
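
The pipeline above is built on the ITK C++ library; the sketch below expresses a comparable pre-processing chain (contrast enhancement, anisotropic diffusion, gradient-based watershed) through SimpleITK's Python wrapping. The file names, filter parameters and watershed level are placeholders, not values from the paper.

    import SimpleITK as sitk

    # Placeholder input file name; a real case would start from the DICOM series.
    image = sitk.ReadImage("ct_volume.mha", sitk.sitkFloat32)

    # Contrast enhancement (stand-in for the contrast-limited adaptive
    # histogram equalization step, after reducing the level window).
    enhanced = sitk.AdaptiveHistogramEqualization(image)

    # Edge-preserving smoothing to reduce noise such as metallic artifacts.
    smoothed = sitk.GradientAnisotropicDiffusion(
        enhanced, timeStep=0.0625, conductanceParameter=2.0, numberOfIterations=10)

    # Watershed segmentation on the gradient magnitude of the filtered image;
    # the label corresponding to the aortic lumen would then be selected and
    # cleaned up with mathematical morphology filters.
    gradient = sitk.GradientMagnitude(smoothed)
    labels = sitk.MorphologicalWatershed(gradient, level=1.0, markWatershedLine=False)
    sitk.WriteImage(labels, "watershed_labels.mha")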

Relevance:

30.00%

Publisher:

Abstract:

An analytic method to evaluate nuclear contributions to the electrical properties of polyatomic molecules is presented. Such contributions control the changes induced by an electric field on the equilibrium geometry (nuclear relaxation contribution) and vibrational motion (vibrational contribution) of a molecular system. Expressions to compute the nuclear contributions have been derived from a power series expansion of the potential energy. These contributions to the electrical properties are given in terms of energy derivatives with respect to normal coordinates, the electric field intensity, or both. Only one calculation of such derivatives at the field-free equilibrium geometry is required. To demonstrate the efficiency of the analytical evaluation of electrical properties (the so-called AEEP method), results of calculations on water and pyridine at the SCF/TZ2P and MP2/TZ2P levels of theory are reported. The results obtained are compared with previous theoretical calculations and with experimental values.
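
As a schematic illustration of the kind of expression such derivations produce (the standard double-harmonic term, not the paper's complete working equations), the lowest-order nuclear relaxation contribution to the static polarizability can be written in terms of normal-coordinate dipole derivatives and harmonic frequencies:

    \alpha^{\mathrm{nr}}_{\alpha\beta}
      \;=\; \sum_{i} \frac{1}{\omega_i^{2}}
      \left(\frac{\partial \mu_{\alpha}}{\partial Q_i}\right)_{0}
      \left(\frac{\partial \mu_{\beta}}{\partial Q_i}\right)_{0},

where the Q_i are normal coordinates, the omega_i are harmonic vibrational frequencies, and the derivatives are evaluated at the field-free equilibrium geometry, consistent with the single field-free calculation mentioned in the abstract.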

Relevance:

30.00%

Publisher:

Abstract:

The space and time discretizations inherent to all FDTD schemes introduce non-physical dispersion errors, i.e. deviations of the speed of sound from the theoretical value predicted by the governing Euler differential equations. A general methodology for computing this dispersion error via straightforward numerical simulations of the FDTD schemes is presented. The method is shown to provide remarkable accuracies of the order of 1/1000 in a wide variety of two-dimensional finite difference schemes.
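
A minimal sketch of the measure-it-numerically idea on a generic 1D leapfrog scheme (not the two-dimensional schemes analysed in the paper): propagate a monochromatic wave, read the numerical phase speed off the drift of its Fourier phase, and compare with the exact sound speed. The grid, Courant number and wavenumber are assumed values.

    import numpy as np

    # Grid and scheme parameters (assumed values).
    c = 340.0                      # exact sound speed [m/s]
    nx, L = 200, 1.0
    dx = L / nx
    courant = 0.5
    dt = courant * dx / c
    x = np.arange(nx) * dx

    # Monochromatic, right-going initial condition on a periodic domain.
    m = 8
    k = 2 * np.pi * m / L
    u_prev = np.cos(k * (x + c * dt))   # exact solution at t = -dt
    u = np.cos(k * x)                   # exact solution at t = 0

    # Leapfrog update for u_tt = c^2 u_xx with periodic boundaries.
    phases = []
    nsteps = 400
    for _ in range(nsteps):
        lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)
        u_prev, u = u, 2 * u - u_prev + courant**2 * lap
        phases.append(np.angle(np.fft.fft(u)[m]))

    # Numerical phase speed from the drift of the Fourier phase over time.
    t = dt * np.arange(1, nsteps + 1)
    slope = np.polyfit(t, np.unwrap(phases), 1)[0]   # ~ -k * c_numerical
    c_num = -slope / k
    print(f"numerical speed {c_num:.3f} m/s, dispersion error {abs(c_num - c)/c:.2e}")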

Relevance:

30.00%

Publisher:

Abstract:

Introduction: The importance of micromovements in the mechanism of aseptic loosening is clinically difficult to evaluate. To complete the analysis of a series of total knee arthroplasties (TKA), we used a three-dimensional numerical model to study the micromovements of the tibial implant. Material and Methods: Fifty-one patients (with 57 cemented Porous Coated Anatomic TKAs) were reviewed (mean follow-up 4.5 years). Radiolucency at the tibial bone-cement interface was sought on the AP radiographs and divided into 7 areas. The distribution of the radiolucency was then correlated with the axis of the lower limb as measured on the orthoradiograms. The three-dimensional numerical model is based on the finite element method. It allowed the measurement of the cemented prosthetic tibial implant's displacements and the micromovements generated at the bone-cement interface. A total load (2000 Newton) was first applied vertically and asymmetrically on the tibial plateau, thereby simulating an axial deviation of the lower limbs. The vector's posterior inclination then permitted the addition of a tangential component to the axial load. This type of effort is generated by complex biomechanical phenomena such as knee flexion. Results: 81 per cent of the 57 knees had a radiolucent line of at least 1 mm at one or more of the tibial cement-epiphysis junctional areas. The distribution of these lucent lines showed that they appeared more frequently at the periphery of the implant. The lucent lines appeared most often under the unloaded margin of the tibial plateau when axial deviation of the lower limbs was present. Numerical simulations showed that asymmetrical loading on the tibial plateau induced a subsidence of the loaded margin (0-100 microns) and lifting off at the opposite border (0-70 microns). The postero-anterior tangential component induced an anterior displacement of the tibial implant (160-220 microns) and horizontal micromovements with a non-homogeneous distribution at the bone-cement interface (28-54 microns). Discussion: Comparison of clinical and numerical results showed a relation between the development of radiolucent lines and the unloading of the tibial implant's margin. The deleterious effect of axial deviation of the lower limbs is thereby proven. The irregular distribution of lucent lines under the tibial plateau was similar to the distribution of micromovements at the bone-cement interface when tangential forces were present. A causative relation between the two phenomena could not, however, be established. Numerical simulation is a truly useful method of study; it permits the calculation of micromovements that are relative, non-homogeneous and of very low amplitude. However, comparative clinical studies remain essential to ensure the credibility of the results.

Relevance:

30.00%

Publisher:

Abstract:

We implemented Biot-type porous wave equations in a pseudo-spectral numerical modeling algorithm for the simulation of Stoneley waves in porous media. Fourier and Chebyshev methods are used to compute the spatial derivatives along the horizontal and vertical directions, respectively. To prevent overly short time steps due to the small grid spacing at the top and bottom of the model, a consequence of the Chebyshev operator, the mesh is stretched in the vertical direction. As an added benefit, the Chebyshev operator allows for an explicit treatment of interfaces. Boundary conditions can be implemented with a characteristics approach, with the characteristic variables evaluated at zero viscosity. We use this approach to model seismic wave propagation at the interface between a fluid and a porous medium. Each medium is represented by a different mesh, and the two meshes are connected through the characteristics-based domain-decomposition method described above. We show an experiment for sealed-pore boundary conditions, where we first compare the numerical solution to an analytical solution. We then show the influence of the heterogeneity and viscosity of the pore fluid on the propagation of the Stoneley wave and surface waves in general.
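
The Chebyshev side of such a scheme rests on a collocation differentiation matrix, and the vertical mesh stretching can be realised through a coordinate map whose Jacobian rescales that matrix. The sketch below shows one standard construction (a Trefethen-style differentiation matrix combined with a Kosloff-Tal-Ezer-type stretching); it is illustrative only, and the stretching parameter is an assumed value rather than the one used in the paper.

    import numpy as np

    def cheb(N):
        """Chebyshev collocation points and first-derivative matrix on [-1, 1]."""
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        X = np.tile(x, (N + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))
        return D, x

    N = 32
    D, x = cheb(N)

    # Kosloff-Tal-Ezer style stretching y(x) = arcsin(a*x)/arcsin(a): spreads the
    # clustered end points, allowing a larger stable time step (a is assumed).
    a = 0.99
    y = np.arcsin(a * x) / np.arcsin(a)
    dy_dx = a / (np.arcsin(a) * np.sqrt(1.0 - (a * x) ** 2))
    D_y = np.diag(1.0 / dy_dx) @ D          # chain rule: d/dy = (dx/dy) d/dx

    # Quick check against an analytic derivative on the stretched coordinate.
    f = np.sin(np.pi * y)
    err = np.max(np.abs(D_y @ f - np.pi * np.cos(np.pi * y)))
    print("max derivative error:", err)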

Relevance:

30.00%

Publisher:

Abstract:

Models incorporating more realistic representations of customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program, called the CDLP, which has an exponential number of columns. However, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper we propose a new approach, called SDCP, to solving the CDLP based on segments and their consideration sets. SDCP is a relaxation of CDLP and hence forms a looser upper bound on the dynamic program, but it coincides with CDLP for the case of non-overlapping segments. If the number of elements in a consideration set for a segment is not very large, SDCP can be applied to any discrete-choice model of consumer behavior. We tighten the SDCP bound (i) by simulations, called the randomized concave programming (RCP) method, and (ii) by adding cuts to a recent compact formulation of the problem for a latent multinomial-choice model of demand (SBLP+). The latter approach turns out to be very effective, essentially obtaining the CDLP value, and gives excellent revenue performance in simulations, even for overlapping segments. By formulating the problem as a separation problem, we give insight into why CDLP is easy for the MNL with non-overlapping consideration sets and why generalizations of MNL pose difficulties. We perform numerical simulations to determine the revenue performance of all the methods on reference data sets in the literature.
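
For orientation, the sketch below sets up a toy instance of the choice-based deterministic LP (CDLP) itself, with a single MNL segment and all offer sets enumerated explicitly. SDCP, the RCP randomization and the SBLP+ cuts discussed in the paper are not reproduced here, and all prices, preference weights and capacities are made up.

    from itertools import combinations
    import numpy as np
    from scipy.optimize import linprog

    # Toy data: 3 products, each consuming one unit of its own resource.
    prices = np.array([100.0, 80.0, 60.0])
    v = np.array([1.0, 1.5, 2.0])   # MNL preference weights; v0 = no-purchase weight
    v0 = 1.0
    capacity = np.array([10.0, 10.0, 10.0])
    T = 30.0                        # length of the booking horizon

    # Enumerate all non-empty offer sets S; decision variable t_S = time S is offered.
    offer_sets = [s for r in range(1, 4) for s in combinations(range(3), r)]
    revenue_rate, consumption = [], []
    for S in offer_sets:
        denom = v0 + v[list(S)].sum()
        probs = np.zeros(3)
        probs[list(S)] = v[list(S)] / denom      # MNL purchase probabilities
        revenue_rate.append(prices @ probs)      # expected revenue per unit time
        consumption.append(probs)                # expected resource use per unit time

    A_ub = np.vstack([np.array(consumption).T, np.ones(len(offer_sets))])
    b_ub = np.concatenate([capacity, [T]])
    res = linprog(-np.array(revenue_rate), A_ub=A_ub, b_ub=b_ub,
                  bounds=(0, None), method="highs")
    print("CDLP upper bound on revenue:", -res.fun)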

Relevance:

30.00%

Publisher:

Abstract:

The sequence profile method (Gribskov M, McLachlan AD, Eisenberg D, 1987, Proc Natl Acad Sci USA 84:4355-4358) is a powerful tool to detect distant relationships between amino acid sequences. A profile is a table of position-specific scores and gap penalties, providing a generalized description of a protein motif, which can be used for sequence alignments and database searches instead of an individual sequence. A sequence profile is derived from a multiple sequence alignment. We have found 2 ways to improve the sensitivity of sequence profiles: (1) Sequence weights: Usage of individual weights for each sequence avoids bias toward closely related sequences. These weights are automatically assigned based on the distance of the sequences using a published procedure (Sibbald PR, Argos P, 1990, J Mol Biol 216:813-818). (2) Amino acid substitution table: In addition to the alignment, the construction of a profile also needs an amino acid substitution table. We have found that in some cases a new table, the BLOSUM45 table (Henikoff S, Henikoff JG, 1992, Proc Natl Acad Sci USA 89:10915-10919), is more sensitive than the original Dayhoff table or the modified Dayhoff table used in the current implementation. Profiles derived by the improved method are more sensitive and selective in a number of cases where previous methods have failed to completely separate true members from false positives.
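
A minimal sketch of the profile-construction step with sequence weights: each column's score for a residue is the weighted average of substitution scores against the residues observed in that column. The toy alignment, the weights and the miniature substitution table below are stand-ins for a real multiple alignment, the distance-based weights and BLOSUM45.

    # Toy multiple alignment (equal-length rows), with '-' as gap character.
    alignment = ["ACDA", "ACEA", "GCDA"]
    weights = [0.5, 0.3, 0.2]   # stand-in for distance-based sequence weights

    # Miniature symmetric substitution table (stand-in for BLOSUM45).
    subs = {
        ("A", "A"): 5, ("C", "C"): 12, ("D", "D"): 7, ("E", "E"): 6, ("G", "G"): 7,
        ("A", "C"): -1, ("A", "D"): -2, ("A", "E"): -1, ("A", "G"): 0,
        ("C", "D"): -3, ("C", "E"): -3, ("C", "G"): -3,
        ("D", "E"): 2, ("D", "G"): -1, ("E", "G"): -2,
    }
    def score(a, b):
        return subs.get((a, b), subs.get((b, a), 0))

    alphabet = "ACDEG"
    profile = []                 # one dict of position-specific scores per column
    for col in zip(*alignment):
        column_scores = {}
        for a in alphabet:
            num = sum(w * score(a, r) for w, r in zip(weights, col) if r != "-")
            den = sum(w for w, r in zip(weights, col) if r != "-")
            column_scores[a] = num / den if den else 0.0
        profile.append(column_scores)

    for i, column_scores in enumerate(profile):
        print(i, {a: round(s, 2) for a, s in column_scores.items()})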

Relevance:

30.00%

Publisher:

Abstract:

We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction, combined with a Runge-Kutta integration scheme for the time evolution. A domain decomposition method is used to match the fluid-solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain, and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method have been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
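
As an isolated building block (not the full poro-elastic solver), the sketch below shows the Fourier treatment of the azimuthal direction: a field sampled on one periodic ring of the polar mesh is differentiated by multiplying its FFT coefficients by ik. The grid size and test field are arbitrary.

    import numpy as np

    n_theta = 64
    theta = 2 * np.pi * np.arange(n_theta) / n_theta

    # Periodic test field along one ring of the polar mesh.
    f = np.exp(np.sin(theta))

    # Spectral derivative: multiply Fourier coefficients by i*k.
    k = np.fft.fftfreq(n_theta, d=1.0 / n_theta)   # integer wavenumbers 0, 1, ..., -1
    df_dtheta = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

    exact = np.cos(theta) * np.exp(np.sin(theta))
    print("max error:", np.max(np.abs(df_dtheta - exact)))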

Relevance:

30.00%

Publisher:

Abstract:

In this paper, an extension of the multi-scale finite-volume (MSFV) method is devised which allows the simulation of flow and transport in reservoirs with complex well configurations. The new framework fits nicely into the data structure of the original MSFV method, and has the important property that large patches covering the whole well are not required. For each well, an additional degree of freedom is introduced. While the treatment of pressure-constrained wells is trivial (the well-bore reference pressure is explicitly specified), additional equations have to be solved to obtain the unknown well-bore pressure of rate-constrained wells. Numerical simulations of test cases with multiple complex wells demonstrate the ability of the new algorithm to capture the interference between the various wells and the reservoir accurately. (c) 2008 Elsevier Inc. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed pattern (FP) components and to assess the DAK range over which quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were explored for the effects of data weighting and squared fit coefficients during the curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA) and noise de-trending. Finally, spatial stationarity of noise was assessed. Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise, but the FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence the fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of the noise components. While there are limitations to the polynomial model, when used with care and with appropriate data weighting, the method offers a simple and robust means of examining the detector noise components as a function of detector exposure.
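
A minimal sketch of the polynomial route described above, under the usual model in which electronic noise is independent of exposure, quantum noise variance grows linearly with DAK and fixed-pattern noise variance grows quadratically. The synthetic data, coefficients and weighting choice are placeholders, not measurements from the study.

    import numpy as np

    # Synthetic variance measurements at several detector air kerma (DAK) levels [uGy].
    dak = np.array([6.25, 12.5, 25, 50, 100, 200, 400, 800, 1600], dtype=float)
    true_e, true_q, true_f = 4.0, 0.8, 1.5e-4     # electronic, quantum, fixed-pattern
    rng = np.random.default_rng(0)
    variance = (true_e + true_q * dak + true_f * dak**2) * rng.normal(1.0, 0.02, dak.size)

    # Second-order polynomial fit of variance vs DAK, weighted so that the
    # low-exposure points are not swamped by the large high-exposure values.
    coeffs = np.polyfit(dak, variance, deg=2, w=1.0 / variance)
    fixed_pattern, quantum, electronic = coeffs    # sigma^2 = f*DAK^2 + q*DAK + e

    for dak_i in dak:
        fit = electronic + quantum * dak_i + fixed_pattern * dak_i**2
        fractions = np.array([electronic, quantum * dak_i, fixed_pattern * dak_i**2]) / fit
        print(f"DAK {dak_i:7.2f} uGy  electronic/quantum/fixed-pattern fractions: "
              f"{fractions[0]:.2f} {fractions[1]:.2f} {fractions[2]:.2f}")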

Relevance:

30.00%

Publisher:

Abstract:

Diffuse flow velocimetry (DFV) is introduced as a new, noninvasive, optical technique for measuring the velocity of diffuse hydrothermal flow. The technique uses images of a motionless, random medium (e.g., rocks) obtained through the lens of a moving refraction index anomaly (e.g., a hot upwelling). The method works in two stages. First, the changes in apparent background deformation are calculated using particle image velocimetry (PIV). The deformation vectors are determined by a cross correlation of pixel intensities across consecutive images. Second, the 2-D velocity field is calculated by cross correlating the deformation vectors between consecutive PIV calculations. The accuracy of the method is tested with laboratory and numerical experiments of a laminar, axisymmetric plume in fluids with both constant and temperature-dependent viscosity. Results show that average RMS errors are ∼5%–7%, with the best accuracy obtained in regions of pervasive apparent background deformation, which are commonly encountered in regions of diffuse hydrothermal flow. The method is applied to a 25 s video sequence of diffuse flow from a small fracture captured during the Bathyluck’09 cruise to the Lucky Strike hydrothermal field (September 2009). The velocities of the ∼10°C–15°C effluent reach ∼5.5 cm/s, in strong agreement with previous measurements of diffuse flow. DFV is found to be most accurate for approximately 2-D flows where background objects have a small spatial scale, such as sand or gravel.
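
The core operation in both stages is a cross-correlation of small windows between consecutive frames to find the displacement with the strongest match. The sketch below shows only that building block (FFT-based cross-correlation with a peak search) on synthetic windows; it is not the full PIV-of-PIV chain of the paper, and the window size, shift and noise level are arbitrary.

    import numpy as np

    def window_displacement(win_a, win_b):
        """Integer-pixel displacement of win_b relative to win_a via FFT cross-correlation."""
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        corr = np.real(np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)))
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Map wrapped (circular) peak indices to signed shifts.
        return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

    # Synthetic "random background" window and a copy shifted by (3, -2) pixels plus noise.
    rng = np.random.default_rng(1)
    frame = rng.random((64, 64))
    shifted = np.roll(frame, (3, -2), axis=(0, 1)) + 0.05 * rng.random((64, 64))
    print(window_displacement(frame, shifted))   # expected output close to (3, -2)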

Relevance:

30.00%

Publisher:

Abstract:

We have modeled numerically the seismic response of a poroelastic inclusion with properties applicable to an oil reservoir that interacts with an ambient wavefield. The model includes wave-induced fluid flow caused by pressure differences between mesoscopic-scale (i.e., on the order of centimeters to meters) heterogeneities. We used a viscoelastic approximation on the macroscopic scale to implement the attenuation and dispersion resulting from this mesoscopic-scale theory in numerical simulations of wave propagation on the kilometer scale. This upscaling method includes finite-element modeling of wave-induced fluid flow to determine effective seismic properties of the poroelastic media, such as attenuation of P- and S-waves. The fitted, equivalent viscoelastic behavior is implemented in finite-difference wave propagation simulations. With this two-stage process, we model numerically the quasi-poroelastic wave propagation on the kilometer scale and study the impact of fluid properties and fluid saturation on the modeled seismic amplitudes. In particular, we addressed the question of whether poroelastic effects within an oil reservoir may be a plausible explanation for low-frequency ambient wavefield modifications observed at oil fields in recent years. Our results indicate that ambient wavefield modification is expected to occur for oil reservoirs exhibiting high attenuation. Whether or not such modifications can be detected in surface recordings, however, will depend on acquisition design and noise mitigation processing as well as site-specific conditions, such as the geologic complexity of the subsurface, the nature of the ambient wavefield, and the amount of surface noise.
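
The viscoelastic fitting step can be illustrated with the simplest such model, a single Zener (standard linear solid) element, whose quality factor follows from its complex modulus; the modulus and relaxation times below are arbitrary, and the attenuation levels produced by the paper's finite-element upscaling are not reproduced.

    import numpy as np

    # Zener (standard linear solid) element: complex modulus and quality factor.
    M_relaxed = 9.0e9                    # relaxed modulus [Pa] (arbitrary)
    tau_eps, tau_sig = 1.6e-2, 1.4e-2    # strain/stress relaxation times [s] (arbitrary)

    freq = np.logspace(-1, 3, 200)       # 0.1 Hz to 1 kHz
    omega = 2 * np.pi * freq
    M = M_relaxed * (1 + 1j * omega * tau_eps) / (1 + 1j * omega * tau_sig)

    Q = M.real / M.imag                  # quality factor of the element
    f_peak = freq[np.argmin(Q)]          # attenuation is strongest where Q is lowest
    print(f"minimum Q = {Q.min():.1f} at {f_peak:.2f} Hz")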