52 results for Monte Carlo algorithms
Abstract:
We introduce gradient-domain rendering for Monte Carlo image synthesis. While previous gradient-domain Metropolis Light Transport sought to distribute more samples in areas of high gradients, we show, in contrast, that estimating image gradients is also possible using standard (non-Metropolis) Monte Carlo algorithms, and furthermore, that even without changing the sample distribution, this often leads to significant error reduction. This broadens the applicability of gradient rendering considerably. To gain insight into the conditions under which gradient-domain sampling is beneficial, we present a frequency analysis that compares Monte Carlo sampling of gradients followed by Poisson reconstruction to traditional Monte Carlo sampling. Finally, we describe Gradient-Domain Path Tracing (G-PT), a relatively simple modification of the standard path tracing algorithm that can yield far superior results.
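As a rough illustration of the core idea of sampling gradients and then recovering the image with a Poisson solve, here is a hypothetical 1-D sketch. The signal, noise levels, and screening weight are invented for the demo and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "image": a smooth signal observed through noisy MC estimates.
n = 64
truth = np.sin(np.linspace(0.0, 2.0 * np.pi, n))

# Primal estimate: independent Monte Carlo noise at every pixel.
primal = truth + rng.normal(0.0, 0.3, n)

# Gradient estimate: correlated paths make finite differences far less
# noisy than differencing two independent primal estimates would be.
grad = np.diff(truth) + rng.normal(0.0, 0.05, n - 1)

# Screened Poisson reconstruction: least-squares blend of both estimates,
#   min_I  alpha * ||I - primal||^2 + ||D I - grad||^2
alpha = 0.2
D = np.diff(np.eye(n), axis=0)       # forward-difference operator, (n-1) x n
A = alpha * np.eye(n) + D.T @ D
b = alpha * primal + D.T @ grad
recon = np.linalg.solve(A, b)

err_primal = np.mean((primal - truth) ** 2)
err_recon = np.mean((recon - truth) ** 2)
```

With low-noise gradients, the reconstruction error typically falls well below the primal Monte Carlo error, mirroring the error reduction the abstract reports.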
Abstract:
Long-term electrocardiogram (ECG) recordings often suffer from relevant noise. Baseline wander in particular is pronounced in ECG recordings using dry or esophageal electrodes, which are dedicated to prolonged registration. While analog high-pass filters introduce phase distortions, reliable offline filtering of the baseline wander implies a computational burden that has to be put in relation to the increase in signal-to-baseline ratio (SBR). Here we present a graphics processing unit (GPU) based parallelization method to speed up offline baseline wander filter algorithms, namely the wavelet, finite impulse response, infinite impulse response, moving mean, and moving median filters. Individual filter parameters were optimized with respect to the SBR increase based on ECGs from the Physionet database superimposed with auto-regressive modeled, real baseline wander. A Monte Carlo simulation showed that for low input SBR the moving median filter outperforms any other method but negatively affects ECG wave detection. In contrast, the infinite impulse response filter is preferred in the case of high input SBR. However, the parallelized wavelet filter is processed 500 and 4 times faster than these two algorithms on the GPU, respectively, and offers superior baseline wander suppression in low SBR situations. Using a signal segment of 64 mega samples that is filtered as one entire unit, wavelet filtering of a 7-day high-resolution ECG is computed in less than 3 seconds. Taking the high filtering speed into account, the GPU wavelet filter is the most efficient method to remove baseline wander present in long-term ECGs, substantially reducing the computational burden.
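A minimal sketch of the moving-median approach to baseline wander removal may help: the slow drift is estimated as a running median and subtracted. The sampling rate, amplitudes, and window size below are illustrative assumptions, not the parameters optimized in the study:

```python
import numpy as np

def moving_median_baseline(signal, window):
    """Estimate slow baseline wander as a running median (odd window size)."""
    half = window // 2
    padded = np.pad(signal, half, mode="edge")
    views = np.lib.stride_tricks.sliding_window_view(padded, window)
    return np.median(views, axis=1)

# Synthetic demonstration with a toy ECG stand-in plus slow drift.
fs = 250                                      # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
ecg = 0.1 * np.sin(2 * np.pi * 8 * t)         # fast component mimicking ECG waves
wander = 0.5 * np.sin(2 * np.pi * 0.2 * t)    # slow baseline drift
noisy = ecg + wander

baseline = moving_median_baseline(noisy, 251)  # ~1 s window at 250 Hz
cleaned = noisy - baseline
```

Because the median is insensitive to the fast, roughly zero-mean ECG waves within the window, the running median tracks the drift while leaving the waves largely intact, at the cost of the wave-detection side effects the abstract notes.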
Abstract:
A main field in biomedical optics research is diffuse optical tomography, where intensity variations of the transmitted light traversing the tissue are detected. Mathematical models and reconstruction algorithms based on finite element methods and Monte Carlo simulations describe the light transport inside the tissue and determine differences in absorption and scattering coefficients. Precise knowledge of the sample's surface shape and orientation is required to provide boundary conditions for these techniques. We propose an integrated method based on structured light three-dimensional (3-D) scanning that provides detailed surface information of the object, which is usable for volume mesh creation and allows the normalization of the intensity dispersion between surface and camera. The experimental setup is complemented by polarization difference imaging to avoid overlaying byproducts caused by inter-reflections and multiple scattering in semitransparent tissue.
Abstract:
One limitation to the widespread implementation of Monte Carlo (MC) patient dose-calculation algorithms for radiotherapy is the lack of a general and accurate source model of the accelerator radiation source. Our aim in this work is to investigate the sensitivity of the photon-beam subsource distributions in an MC source model (with target, primary collimator, and flattening filter photon subsources and an electron subsource) for 6- and 18-MV photon beams when the energy and radial distributions of initial electrons striking a linac target change. For this purpose, phase-space data (PSD) were calculated for various mean electron energies striking the target, various normally distributed electron energy spreads, and various normally distributed electron radial intensity distributions. All PSD were analyzed in terms of energy, fluence, and energy fluence distributions, which were compared between the different parameter sets. The energy spread was found to have a negligible influence on the subsource distributions. The mean energy and radial intensity significantly changed the target subsource distribution shapes and intensities. For the primary collimator and flattening filter subsources, the distribution shapes of the fluence and energy fluence changed little for different mean electron energies striking the target; however, their relative intensity compared with the target subsource changes, which can be accounted for by a scaling factor. This study indicates that adjustments to MC source models can likely be limited to adjusting the target subsource in conjunction with scaling the relative intensity and energy spectrum of the primary collimator, flattening filter, and electron subsources when the energy and radial distributions of the initial electron beam change.
Abstract:
PURPOSE Hodgkin lymphoma (HL) is a highly curable disease. Reducing late complications and second malignancies has become increasingly important. Radiotherapy target paradigms are currently changing and radiotherapy techniques are evolving rapidly. DESIGN This overview reports to what extent target volume reduction in involved-node radiotherapy (INRT) and advanced radiotherapy techniques, such as intensity-modulated radiotherapy (IMRT) and proton therapy, compared with involved-field radiotherapy (IFRT) and 3D radiotherapy (3D-RT), can reduce high doses to organs at risk (OAR), and examines the issues that still remain open. RESULTS Although no comparison of all available techniques on identical patient datasets exists, clear patterns emerge. Advanced dose-calculation algorithms (e.g., convolution-superposition/Monte Carlo) should be used in mediastinal HL. INRT consistently reduces treated volumes when compared with IFRT, with the exact amount depending on the INRT definition. The number of patients that might significantly benefit from highly conformal techniques such as IMRT over 3D-RT regarding high-dose exposure to OAR is smaller with INRT. The impact of larger volumes treated with low doses in advanced techniques is unclear. The type of IMRT used (static/rotational) is of minor importance. All advanced photon techniques result in similar potential benefits and disadvantages; therefore, only the degree of modulation should be chosen based on individual treatment goals. Treatment in deep inspiration breath hold is being evaluated. Protons theoretically provide both excellent high-dose conformality and reduced integral dose. CONCLUSION Further reduction of treated volumes most effectively reduces OAR dose, most likely without disadvantages if the excellent control rates achieved currently are maintained. For both IFRT and INRT, the benefits of advanced radiotherapy techniques depend on the individual patient/target geometry.
Their use should therefore be decided case by case with comparative treatment planning.
Abstract:
Long-term electrocardiogram (ECG) signals might suffer from relevant baseline disturbances during physical activity. Motion artifacts in particular are more pronounced with dry surface or esophageal electrodes, which are dedicated to prolonged ECG recording. In this paper we present a method called baseline wander tracking (BWT) that tracks and rejects strong baseline disturbances and avoids concurrent saturation of the analog front-end. The proposed algorithm shifts the baseline level of the ECG signal to the middle of the dynamic input range. Because the fast offset shifts produce much steeper signal portions than the normal ECG waves, the true ECG signal can be reconstructed offline and filtered using computationally intensive algorithms. Based on Monte Carlo simulations, we observed reconstruction errors mainly caused by the non-linearity inaccuracies of the DAC. However, the signal-to-error ratio of the BWT is higher compared to an analog front-end featuring a dynamic input range above 15 mV if a synthetic ECG signal is used. The BWT is additionally able to suppress (electrode) offset potentials without introducing long transients. Due to its structural simplicity, memory efficiency, and DC coupling capability, the BWT is well suited to the high integration required in long-term, low-power ECG recording systems.
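The offline reconstruction step relies on the fact that tracker offset shifts are much steeper than any physiological slope. A hypothetical sketch of that idea follows; the step size, threshold, and signal are invented for illustration and ignore the DAC non-linearity the abstract identifies as the dominant error source:

```python
import numpy as np

def reconstruct_bwt(tracked, shift_step, thresh):
    """Undo the discrete baseline shifts applied by the tracker.

    Tracker offset jumps are far steeper than physiological ECG slopes,
    so any sample-to-sample difference larger than `thresh` is treated
    as a shift, quantized to the step size, and accumulated back out.
    """
    d = np.diff(tracked)
    jumps = np.where(np.abs(d) > thresh,
                     np.round(d / shift_step) * shift_step, 0.0)
    offset = np.concatenate(([0.0], np.cumsum(jumps)))
    return tracked - offset

# Toy demonstration: a clean signal with two simulated tracker shifts.
ecg = 0.2 * np.sin(np.linspace(0, 20 * np.pi, 2000))
step = 1.0
shifts = np.zeros(2000)
shifts[500:] += step        # tracker re-centers the baseline upward ...
shifts[1500:] -= step       # ... and later shifts it back down
tracked = ecg + shifts

recovered = reconstruct_bwt(tracked, shift_step=step, thresh=0.5)
```

Because the shifts are quantized to a known step size, the cumulative offset can be subtracted exactly in this idealized setting, leaving the underlying waveform intact.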
Abstract:
Gaussian random field (GRF) conditional simulation is a key ingredient in many spatial statistics problems for computing Monte Carlo estimators and quantifying uncertainties on non-linear functionals of GRFs conditional on data. Conditional simulations are known to often be computationally intensive, especially when appealing to matrix decomposition approaches with a large number of simulation points. This work studies settings where conditioning observations are assimilated batch-sequentially, with one point or a batch of points at each stage. Assuming that conditional simulations have been performed at a previous stage, the goal is to take advantage of already available sample paths and by-products to produce updated conditional simulations at minimal cost. Explicit formulae are provided, which allow updating an ensemble of sample paths conditioned on n ≥ 0 observations to an ensemble conditioned on n + q observations, for arbitrary q ≥ 1. Compared to direct approaches, the proposed formulae prove to substantially reduce computational complexity. Moreover, these formulae explicitly exhibit how the q new observations are updating the old sample paths. Detailed complexity calculations highlighting the benefits of this approach with respect to state-of-the-art algorithms are provided and are complemented by numerical experiments.
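A toy 1-D sketch may make the update mechanism concrete: an ensemble is first conditioned on n = 2 observations by residual substitution, then updated to n + 1 observations by re-weighting the existing paths instead of re-simulating. The exponential covariance, grid, and data values are invented for illustration, and the formula shown is only the textbook single-point special case, not the paper's general batch formulae:

```python
import numpy as np

rng = np.random.default_rng(2)

def k(a, b, ell=0.3):
    """Exponential covariance (an illustrative choice of kernel)."""
    return np.exp(-np.abs(a[:, None] - b[None, :]) / ell)

x = np.linspace(0.0, 1.0, 50)          # simulation grid
Xn = np.array([0.2, 0.8])              # initial conditioning points
yn = np.array([1.0, -0.5])

# Joint unconditional draw on grid + data locations.
pts = np.concatenate([x, Xn])
L = np.linalg.cholesky(k(pts, pts) + 1e-10 * np.eye(pts.size))
paths = (L @ rng.normal(size=(pts.size, 100))).T     # 100 sample paths
Zx, Zn = paths[:, : x.size], paths[:, x.size :]

# Residual substitution: condition every path on (Xn, yn).
Kinv = np.linalg.inv(k(Xn, Xn) + 1e-10 * np.eye(2))
W = (Kinv @ k(Xn, x)).T                               # kriging weights
cond = Zx + (yn - Zn) @ W.T

# Batch-sequential update: assimilate q = 1 new observation by reusing
# the old conditional paths instead of re-simulating from scratch.
x_new, y_new = x[25:26], np.array([0.3])

def resid_cov(a, b):
    """Covariance of the GRF conditioned on the first n observations."""
    return k(a, b) - k(a, Xn) @ Kinv @ k(Xn, b)

lam = np.linalg.solve(resid_cov(x_new, x_new) + 1e-10 * np.eye(1),
                      resid_cov(x_new, x)).T          # update weights
updated = cond + (y_new - cond[:, 25:26]) @ lam.T
```

Each updated path is the old path plus a kriging-weighted correction driven by the mismatch between the new observation and that path's value there, which is exactly the structure the abstract says the formulae exhibit.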