963 results for radiotherapy treatments, Monte Carlo techniques


Relevance: 100.00%

Publisher:

Abstract:

This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov chain Monte Carlo (MCMC) sampling techniques, and the related label-switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via the Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test-based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results reflect uncertainty in the final model and report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally lightweight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
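
The Zmix package itself is in R; as a rough illustration of the underlying idea, the following Python sketch runs a Gibbs sampler on a deliberately overfitted univariate Gaussian mixture with a small Dirichlet concentration, so that superfluous components receive weights near zero and the count of occupied components estimates the number of groups. Priors, parameter names and the synthetic data are illustrative assumptions, and neither prior parallel tempering nor Zswitch is implemented here.

```python
# Minimal sketch (not the Zmix package): Gibbs sampling for an overfitted
# univariate Gaussian mixture. A small Dirichlet concentration encourages
# superfluous components to receive weights near zero, so the number of
# "occupied" components estimates the true number of groups.
import numpy as np

rng = np.random.default_rng(1)

def overfitted_gmm_gibbs(y, k_max=10, alpha=0.01, iters=2000, burn=1000):
    n = len(y)
    # Conjugate priors (illustrative choices, not Zmix defaults)
    m0, tau0 = np.mean(y), np.var(y)           # normal prior for component means
    a0, b0 = 2.0, np.var(y)                    # inverse-gamma prior for variances
    w = np.full(k_max, 1.0 / k_max)
    mu = rng.normal(m0, np.sqrt(tau0), k_max)
    sig2 = np.full(k_max, np.var(y))
    occupied = []
    for it in range(iters):
        # 1. Sample allocations z_i given weights and component parameters
        logp = (np.log(w) - 0.5 * np.log(2 * np.pi * sig2)
                - 0.5 * (y[:, None] - mu) ** 2 / sig2)
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(k_max, p=pi) for pi in p])
        counts = np.bincount(z, minlength=k_max)
        # 2. Sample weights from Dirichlet(alpha + counts): the sparse prior
        w = rng.dirichlet(alpha + counts)
        # 3. Sample component means and variances from conjugate conditionals
        for k in range(k_max):
            yk = y[z == k]
            nk = len(yk)
            var_k = 1.0 / (1.0 / tau0 + nk / sig2[k])
            mean_k = var_k * (m0 / tau0 + yk.sum() / sig2[k])
            mu[k] = rng.normal(mean_k, np.sqrt(var_k))
            a_k = a0 + nk / 2.0
            b_k = b0 + 0.5 * np.sum((yk - mu[k]) ** 2)
            sig2[k] = 1.0 / rng.gamma(a_k, 1.0 / b_k)
        if it >= burn:
            occupied.append(np.sum(counts > 0))
    return np.array(occupied)

# Two well-separated groups: the posterior over occupied components
# should concentrate near 2.
y = np.concatenate([rng.normal(-3, 1, 150), rng.normal(3, 1, 150)])
k_occ = overfitted_gmm_gibbs(y)
vals, counts = np.unique(k_occ, return_counts=True)
print("posterior over occupied components:",
      dict(zip(vals.tolist(), (counts / len(k_occ)).round(3).tolist())))
```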

Relevance: 100.00%

Publisher:

Abstract:

The large-scale integration of solar photovoltaic (PV) generation in distribution networks has resulted in over-voltage problems. Several control techniques have been developed to address the over-voltage problem using Deterministic Load Flow (DLF). However, the intermittent characteristics of PV generation require Probabilistic Load Flow (PLF) to capture the variability that DLF ignores. Traditional PLF techniques are not well suited to distribution systems and suffer from several drawbacks, such as computational burden (Monte Carlo, conventional convolution), accuracy that degrades with system complexity (point estimation method), the need for linearization (multi-linear simulation), and convergence problems (Gram–Charlier expansion, Cornish–Fisher expansion). In this research, Latin Hypercube Sampling with Cholesky Decomposition (LHS-CD) is used to quantify over-voltage issues, with and without a voltage-control algorithm, in distribution networks with active generation. The LHS technique is verified on a test network and on a real system from an Australian distribution network service provider. The accuracy and computational burden of the simulated results are also compared with Monte Carlo simulations.
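
A minimal sketch of the sampling step, assuming illustrative marginals and a made-up correlation between PV output and feeder load: stratified (Latin Hypercube) standard-normal samples are correlated through the Cholesky factor of a target correlation matrix and then mapped to physical marginals. The load-flow and voltage-control stages of the study are not reproduced.

```python
# Minimal sketch of Latin Hypercube Sampling with Cholesky decomposition
# (LHS-CD) for generating correlated random inputs, e.g. PV output and load,
# for a probabilistic load flow. Names and distributions are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def lhs_standard_normal(n_samples, n_vars):
    """Stratified (Latin Hypercube) draws mapped to standard normals."""
    u = (rng.permuted(np.tile(np.arange(n_samples), (n_vars, 1)), axis=1).T
         + rng.uniform(size=(n_samples, n_vars))) / n_samples
    return stats.norm.ppf(u)

n = 1000
# Assumed target correlation between the PV output factor and feeder load
corr = np.array([[1.0, -0.3],
                 [-0.3, 1.0]])
z = lhs_standard_normal(n, 2)              # independent stratified normals
z_corr = z @ np.linalg.cholesky(corr).T    # impose correlation via Cholesky
u_corr = stats.norm.cdf(z_corr)            # back to correlated uniforms

# Map to physical marginals (illustrative): Beta-distributed PV output,
# normally distributed load.
pv_mw = 5.0 * stats.beta.ppf(u_corr[:, 0], a=2.0, b=2.5)
load_mw = stats.norm.ppf(u_corr[:, 1], loc=3.0, scale=0.5)

print("sample correlation:", np.corrcoef(pv_mw, load_mw)[0, 1])
```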

Relevance: 100.00%

Publisher:

Abstract:

Laboratory colonies of 15 economically important species of multi-host fruit flies (Diptera: Tephritidae) have been established in eight South Pacific island countries for the purpose of undertaking biological studies, particularly host-status testing and research on quarantine treatments. Laboratory rearing techniques are based on the development of artificial diets for larvae consisting predominantly of the pulp of locally available fruits, including pawpaw, breadfruit and banana. The pawpaw diet is the standard diet and is used in seven countries for rearing 11 species. Diet ingredients are standard proportions of fruit pulp, hydrolysed protein and a bacterial and fungal inhibitor. The diet is particularly suitable for post-harvest treatment studies when larvae of known age are required. Another major development in the laboratory rearing system is the use of pure strains of Enterobacteriaceae bacterial cultures as important adult-feeding supplements. These bacterial cultures are dissected out of the crop of wild females, isolated by sub-culturing, and identified before being supplied to adults on peptone yeast extract agar plates. Most species are egged using thin plastic receptacles perforated with 1 mm oviposition holes, with fruit juice or larval diet smeared internally as an oviposition stimulant. Laboratory rearing techniques have been standardised for all of the Pacific countries. Quality-control monitoring is based on acceptable ranges in per cent egg hatch, pupal weight and pupal mortality. Colonies are rejuvenated every 6 to 12 months by crossing wild males with laboratory-reared females and vice versa. The standard rearing techniques, equipment and ingredients used in the collection, establishment, maintenance and quality control of these fruit fly species are detailed in this paper.

Relevance: 100.00%

Publisher:

Abstract:

Head and neck squamous cell cancer (HNSCC) is the sixth most common cancer worldwide. Despite advances in combined modality therapy (surgery, radiotherapy, chemotherapy), the 5-year survival rate in stage III and IV disease remains at 40%-60%. Short-range Auger-electron emitters, such as In-111 and In-114m, tagged to a drug, molecule, peptide, protein or nanoparticle and brought into close proximity to nuclear DNA represent a fascinating alternative for treating cancer. In this thesis, we studied the usefulness of the Indium-111-bleomycin complex (In-111-BLMC) in the diagnostics and potential therapy of HNSCC using in vitro HNSCC cell lines, nude mouse xenografts in vivo, and HNSCC patients in vivo. In the in vitro experiments with HNSCC cell lines, sensitivity to external beam radiation, BLM, In-111-BLMC, and In-111-Cl3 was studied using a 96-well plate clonogenic assay. The influence of BLM and In-111-BLMC on the cell cycle was measured with flow cytometry. In the in vivo nude mouse xenograft studies, the activity ratios of In-111-BLMC were obtained from gamma camera images, and the effect of In-111-BLMC on HNSCC xenografts was studied. In the in vivo patient studies, we determined the tumor uptake of In-111-BLMC with a gamma camera and measured the radioactivity of tumor samples using In-111-BLMC with specific activities of 75, 175, or 375 MBq/mg BLM. The S values, i.e. the absorbed dose in a target organ per unit cumulated activity in a source organ, were simulated for In-111 and In-114m. The in vitro studies showed variation in sensitivity to external beam radiation, BLM, and In-111-BLMC between HNSCC cell lines. IC50 values for BLM were 1.6-, 1.8-, and 2.1-fold higher than those for In-111-BLMC (40 MBq/mg BLM) in three HNSCC cell lines. A specific In-111 activity of 40 MBq/mg BLM was more effective in killing cells than a specific activity of 195 MBq/mg BLM (p=0.0023). In-111-Cl3 alone had no killing effect. The percentage of cells in the G2/M phase increased after exposure to BLM, and especially to In-111-BLMC, in the three cell lines studied, indicating a G2/M block. Tumor-seeking behavior was shown in the in vivo imaging study of xenografted mice. BLM and In-111-BLMC were more effective than NaCl in reducing xenografted tumor size in HNSCC. The uptake ratios obtained from gamma camera images in the in vivo patient study varied from 1.2 to 2.8 in malignant tumors. However, the uptake of In-111-BLMC was unaffected by increasing the injected activity. A positive correlation existed between In-111-BLMC uptake, Ki-67/MIB activity, and the number of mitoses. Regarding the S values, In-114m delivered a 4-fold absorbed radiation dose to the tumor compared with In-111, and thus In-114m-BLMC might be more effective than In-111-BLMC at the DNA level. Auger-electron emitters such as In-111 and In-114m might have potential in the treatment of HNSCC. Further studies are needed to develop a radiopharmaceutical agent combining appropriate physical properties of the radionuclide with a suitable carrier to bring it to the targeted tissue.
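
As a reminder of what the S values encode, here is a minimal sketch of a MIRD-style dose estimate, D(target) = Σ_source Ã(source) · S(target ← source); every region name, cumulated activity and S value below is a placeholder, not a quantity from the thesis.

```python
# Minimal sketch of a MIRD-style absorbed-dose estimate using S values:
# D(target) = sum over source regions of cumulated activity A_tilde (MBq*s)
# times S(target <- source) (mGy per MBq*s). All numbers below are
# illustrative placeholders, not values from the cited thesis.

# Cumulated activities in source regions (MBq*s), e.g. from time-activity curves
cumulated_activity = {"tumor": 5.0e5, "liver": 2.0e5, "whole_body": 8.0e5}

# Hypothetical S values: mGy per (MBq*s), keyed as (target, source)
s_value = {
    ("tumor", "tumor"): 4.0e-4,
    ("tumor", "liver"): 1.0e-6,
    ("tumor", "whole_body"): 5.0e-7,
}

def absorbed_dose(target, a_cum, s_table):
    """Absorbed dose to `target` (mGy), summed over all source regions."""
    return sum(a_cum[src] * s_table.get((target, src), 0.0) for src in a_cum)

print(f"Tumor dose: {absorbed_dose('tumor', cumulated_activity, s_value):.1f} mGy")
```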

Relevance: 100.00%

Publisher:

Abstract:

Renewable energy resources, in particular PV and battery storage, are increasingly becoming part of residential and agricultural premises as a means of managing electricity consumption. This thesis addresses the substantial technical, financial and planning challenges that these increases create for utilities by offering techniques to examine the significance of various renewable resources in electricity network planning. The outcome of this research should assist utilities and customers in planning that is both adequate and financially effective.

Relevance: 100.00%

Publisher:

Abstract:

The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and, in a case study example, to provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens, live high-train low (LHTL) and intermittent hypoxic exposure (IHE), on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to be able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a probability of 0.93 of a substantially greater improvement in running economy, and a probability greater than 0.96 that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to placebo. The conclusions are consistent with those obtained using the 'magnitude-based inference' approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
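
A minimal sketch of the kind of inference described, using synthetic data rather than the triathlete measurements: a random-walk Metropolis sampler for a two-group normal model, followed by a direct posterior probability that the LHTL-minus-IHE difference exceeds an assumed smallest worthwhile change.

```python
# Minimal sketch (synthetic data, not the triathlete study): random-walk
# Metropolis for a two-group normal model, then a direct probabilistic
# statement P(effect > smallest worthwhile change) from the posterior draws.
import numpy as np

rng = np.random.default_rng(3)
# Synthetic % changes in hemoglobin mass for two groups (illustrative only)
lhtl = rng.normal(4.0, 2.0, 9)     # live high - train low
ihe  = rng.normal(1.0, 2.0, 9)     # intermittent hypoxic exposure
swc = 1.0                          # smallest worthwhile change (%, assumed)

def log_post(theta):
    mu1, mu2, log_sig = theta
    sig = np.exp(log_sig)
    # Flat priors on the means and on log(sigma); common-variance normal likelihood
    return (-0.5 * np.sum((lhtl - mu1) ** 2) / sig ** 2
            - 0.5 * np.sum((ihe - mu2) ** 2) / sig ** 2
            - (len(lhtl) + len(ihe)) * log_sig)

theta = np.array([lhtl.mean(), ihe.mean(), np.log(lhtl.std() + 1e-6)])
step = np.array([0.5, 0.5, 0.2])
lp = log_post(theta)
draws = []
for it in range(20000):
    prop = theta + step * rng.normal(size=3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept/reject
        theta, lp = prop, lp_prop
    if it >= 5000:                                  # discard burn-in
        draws.append(theta.copy())

draws = np.array(draws)
diff = draws[:, 0] - draws[:, 1]
print("P(LHTL - IHE > SWC) =", np.mean(diff > swc))
```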

Relevance: 100.00%

Publisher:

Abstract:

Aims: We develop and validate tools to estimate the residual noise covariance in Planck frequency maps, quantify signal error effects, and compare different techniques to produce low-resolution maps. Methods: We derive analytical estimates of the covariance of the residual noise contained in low-resolution maps produced using a number of map-making approaches. We test these analytical predictions using Monte Carlo simulations and assess their impact on angular power spectrum estimation. We use simulations to quantify the level of signal errors incurred in the different resolution downgrading schemes considered in this work. Results: We find excellent agreement between the optimal residual noise covariance matrices and Monte Carlo noise maps. For destriping map-makers, the extent of agreement is dictated by the knee frequency of the correlated noise component and the chosen baseline offset length. Signal striping is shown to be insignificant when properly dealt with. In map resolution downgrading, we find that a carefully selected window function is required to reduce aliasing to the sub-percent level at multipoles ell > 2 Nside, where Nside is the HEALPix resolution parameter. We show that sufficient characterization of the residual noise is unavoidable if one is to draw reliable constraints on large-scale anisotropy. Conclusions: We have described how to compute low-resolution maps with a controlled sky signal level and a reliable estimate of the residual noise covariance. We have also presented a method to smooth the residual noise covariance matrices to describe the noise correlations in smoothed, bandwidth-limited maps.
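
A minimal sketch of the covariance-versus-Monte-Carlo comparison, using a toy covariance in place of an actual low-resolution Planck product: noise maps drawn consistently with a matrix N should give chi-squared values n^T N^{-1} n averaging to the number of pixels.

```python
# Minimal sketch: validate an analytical residual-noise covariance matrix N
# against Monte Carlo noise realizations via chi2 = n^T N^{-1} n, which should
# average to the number of pixels. The toy covariance below stands in for an
# actual low-resolution map product.
import numpy as np

rng = np.random.default_rng(0)
npix = 48                                   # toy low-resolution map size

# "Analytical" covariance: white noise plus a correlated (striping-like) term
x = np.arange(npix)
N = 0.5 * np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0) + np.eye(npix)

L = np.linalg.cholesky(N)
Ninv = np.linalg.inv(N)

chi2 = []
for _ in range(5000):
    noise_map = L @ rng.standard_normal(npix)   # MC noise map consistent with N
    chi2.append(noise_map @ Ninv @ noise_map)

chi2 = np.array(chi2)
# For a correct covariance, E[chi2] = npix and Var[chi2] = 2 * npix
print(f"mean chi2 = {chi2.mean():.1f} (expected {npix})")
print(f"std  chi2 = {chi2.std():.1f} (expected {np.sqrt(2 * npix):.1f})")
```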

Relevance: 100.00%

Publisher:

Abstract:

In this thesis we deal with the concept of risk. The objective is to bring together, and draw conclusions from, normative information regarding quantitative portfolio management and risk assessment. The first essay concentrates on return dependency. We propose an algorithm for classifying markets into rising and falling. Given the algorithm, we derive a statistic, the Trend Switch Probability, for detecting long-term return dependency in the first moment. The empirical results suggest that the Trend Switch Probability is robust over various volatility specifications. The serial dependency, however, behaves differently in bull and bear markets: it is strongly positive in rising markets, whereas in bear markets it is closer to a random walk. Realized volatility, a technique for estimating volatility from high-frequency data, is investigated in essays two and three. In the second essay we find, when measuring realized variance on a set of German stocks, that the second-moment dependency structure is highly unstable and changes randomly. The results also suggest that volatility is at times non-stationary. In the third essay we examine the impact of market microstructure on the error between estimated realized volatility and the volatility of the underlying process. With simulation-based techniques we show that autocorrelation in returns leads to biased variance estimates, and that lower sampling frequency and non-constant volatility increase the error variation between the estimated variance and the variance of the underlying process. From these essays we conclude that volatility is not easily estimated, even from high-frequency data; it is neither very well behaved in terms of stability nor in terms of dependency over time. Based on these observations, we recommend the use of simple, transparent methods that are likely to be more robust over differing volatility regimes than models with a complex parameter universe. In analyzing long-term return dependency in the first moment, we find that the Trend Switch Probability is a robust estimator. This is an interesting area for further research, with important implications for active asset allocation.
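
A minimal sketch of the realized-variance estimator studied in essays two and three, on simulated data: the sum of squared intraday returns, computed at two sampling frequencies to show how microstructure noise biases the finest-frequency estimate. The noise level and volatility are assumed values.

```python
# Minimal sketch of realized variance: RV = sum of squared intraday returns.
# Simulated prices with microstructure noise illustrate why sparser sampling
# can reduce the bias discussed in the third essay. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(11)
n_sec = int(6.5 * 3600)              # one trading day in seconds
sigma_daily = 0.02                   # true daily volatility of the efficient price
noise_sd = 1e-4                      # microstructure noise in the observed log-price

# Efficient log-price as a Brownian motion observed every second, plus noise
true_increments = rng.normal(0.0, sigma_daily / np.sqrt(n_sec), n_sec)
log_price = np.cumsum(true_increments) + rng.normal(0.0, noise_sd, n_sec)

def realized_variance(log_p, every):
    """Sum of squared returns sampled every `every` seconds."""
    r = np.diff(log_p[::every])
    return np.sum(r ** 2)

print("true integrated variance:", sigma_daily ** 2)
print("RV @ 1-second sampling  :", realized_variance(log_price, 1))
print("RV @ 5-minute sampling  :", realized_variance(log_price, 300))
```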

Relevance: 100.00%

Publisher:

Abstract:

Adsorption of n-alkane mixtures in the zeolite LTA-5A under liquid-phase conditions has been studied using grand canonical Monte Carlo (GCMC) simulations combined with parallel tempering. Normal GCMC techniques fail for some of these systems because of the preference of linear molecules to coil within a single cage of the zeolite. The narrow zeolite windows severely restrict interactions between the molecules, making it difficult to simulate the cooperative rearrangements necessary to explore configuration space. For these reasons, normal GCMC simulation results show poor reproducibility in some cases. These problems were overcome with parallel tempering techniques. Even with parallel tempering, these are very challenging systems for molecular simulation. Similar problems may arise for other zeolites, such as CHA, AFX, ERI, KFI, and RHO, that have cages connected by narrow windows. The simulations capture the complex selectivity behavior observed in experiments, such as selectivity inversion and azeotrope formation.
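
A minimal sketch of the parallel-tempering ingredient, applied to a toy double-well energy instead of a GCMC zeolite model: replicas at different inverse temperatures are swapped with probability min(1, exp[(β_i − β_j)(E_i − E_j)]), which lets the cold replica escape local traps.

```python
# Minimal sketch of parallel tempering: Metropolis sampling of a double-well
# energy at several temperatures, with neighbouring replicas swapped with
# probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]). A toy stand-in for
# the GCMC + parallel-tempering scheme used in the zeolite study.
import numpy as np

rng = np.random.default_rng(5)

def energy(x):
    return 5.0 * (x ** 2 - 1.0) ** 2          # double well with minima at +/-1

betas = np.array([4.0, 2.0, 1.0, 0.5])        # inverse temperatures, coldest first
x = rng.normal(size=len(betas))               # one walker per replica
cold_trace = []

for sweep in range(20000):
    # Ordinary Metropolis move within each replica
    for i, beta in enumerate(betas):
        prop = x[i] + rng.normal(0.0, 0.3)
        if np.log(rng.uniform()) < -beta * (energy(prop) - energy(x[i])):
            x[i] = prop
    # Attempt a swap between a random pair of neighbouring replicas
    j = rng.integers(len(betas) - 1)
    delta = (betas[j] - betas[j + 1]) * (energy(x[j]) - energy(x[j + 1]))
    if np.log(rng.uniform()) < delta:
        x[j], x[j + 1] = x[j + 1], x[j]
    cold_trace.append(x[0])

# With swaps, the coldest replica visits both wells roughly equally
print("fraction of sweeps the coldest replica spends in the right well:",
      np.mean(np.array(cold_trace) > 0))
```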

Relevance: 100.00%

Publisher:

Abstract:

The impulse response of a typical wireless multipath channel can be modeled as a tapped delay line filter whose non-zero components are sparse relative to the channel delay spread. In this paper, a novel method of estimating such sparse multipath fading channels for OFDM systems is explored. In particular, Sparse Bayesian Learning (SBL) techniques are applied to jointly estimate the sparse channel and its second-order statistics, and a new Bayesian Cramer-Rao bound is derived for the SBL algorithm. Further, in the context of OFDM channel estimation, an enhancement to the SBL algorithm is proposed, which uses an Expectation Maximization (EM) framework to jointly estimate the sparse channel, the unknown data symbols, and the second-order statistics of the channel. The EM-SBL algorithm is able to recover the support as well as the channel taps more efficiently, and/or using fewer pilot symbols, than the SBL algorithm. To further improve the performance of EM-SBL, a threshold-based pruning of the estimated second-order statistics that are input to the algorithm is proposed, and its mean square error and symbol error rate performance are illustrated through Monte Carlo simulations. Thus, the algorithms proposed in this paper are capable of obtaining efficient sparse channel estimates even in the presence of a small number of pilots.
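
A minimal sketch of the SBL (EM) hyperparameter updates for a pilot-only model y = Φh + n, with assumed dimensions and noise level; the joint data detection (EM-SBL), pruning and Cramer-Rao bound developments of the paper are not reproduced.

```python
# Minimal sketch of Sparse Bayesian Learning (EM form) for estimating a sparse
# channel h from pilot observations y = Phi @ h + n. The joint data-detection
# (EM-SBL) and pruning extensions from the paper are omitted.
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_taps, sparsity = 40, 64, 4

Phi = (rng.standard_normal((n_obs, n_taps))
       + 1j * rng.standard_normal((n_obs, n_taps))) / np.sqrt(2)
h = np.zeros(n_taps, complex)
support = rng.choice(n_taps, sparsity, replace=False)
h[support] = rng.standard_normal(sparsity) + 1j * rng.standard_normal(sparsity)
noise_var = 1e-2
y = Phi @ h + np.sqrt(noise_var / 2) * (rng.standard_normal(n_obs)
                                        + 1j * rng.standard_normal(n_obs))

gamma = np.ones(n_taps)                       # per-tap prior variances (hyperparameters)
for it in range(100):
    # E-step: Gaussian posterior of h given the current hyperparameters
    Sigma = np.linalg.inv(Phi.conj().T @ Phi / noise_var + np.diag(1.0 / gamma))
    mu = Sigma @ Phi.conj().T @ y / noise_var
    # M-step: update hyperparameters; small gamma drives a tap towards zero
    gamma = np.abs(mu) ** 2 + np.real(np.diag(Sigma))

est_support = np.argsort(gamma)[-sparsity:]
print("true support:", np.sort(support))
print("recovered   :", np.sort(est_support))
```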

Relevance: 100.00%

Publisher:

Abstract:

High-sensitivity detection techniques are required for indoor navigation using Global Navigation Satellite System (GNSS) receivers, and typically a combination of coherent and non-coherent integration is used as the test statistic for detection. Coherent integration exploits the deterministic part of the signal and is limited by the residual frequency error, navigation data bits and user dynamics, which are not known a priori. Non-coherent integration, which involves squaring the coherent integration output, is therefore used to improve the detection sensitivity. Owing to this squaring, it is robust against the artifacts introduced by data bits and/or frequency error. However, it is susceptible to uncertainty in the noise variance, and this can lead to fundamental sensitivity limits in detecting weak signals. In this work, the performance of conventional non-coherent integration-based GNSS signal detection is studied in the presence of noise uncertainty. It is shown that the performance of current state-of-the-art GNSS receivers is close to the theoretical SNR limit for reliable detection at moderate levels of noise uncertainty. Alternative robust post-coherent detectors are also analyzed and are shown to alleviate the noise uncertainty problem. Monte Carlo simulations are used to confirm the theoretical predictions.
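
A minimal sketch of the test statistic and of the noise-uncertainty effect: coherent sums over blocks, non-coherent accumulation of their squared magnitudes, and a threshold set from an assumed noise variance that differs slightly from the true one, which visibly shifts the false-alarm rate. Block lengths and variances are illustrative.

```python
# Minimal sketch of GNSS-style detection: coherent integration over short
# blocks, non-coherent accumulation of their squared magnitudes, and a
# threshold set from the *assumed* noise variance. Mis-knowing that variance
# (noise uncertainty) shifts the false-alarm rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n_coh, n_ncoh = 1000, 20          # coherent length (samples), non-coherent blocks
true_noise_var = 1.1              # actual complex noise variance
assumed_noise_var = 1.0           # receiver's assumed noise variance (uncertain)
pfa = 1e-3

def test_statistic(x, n_coh, n_ncoh):
    blocks = x.reshape(n_ncoh, n_coh)
    coherent = blocks.sum(axis=1)                 # coherent sum per block
    return np.sum(np.abs(coherent) ** 2)          # non-coherent accumulation

# Under H0 the statistic is (sigma^2 * n_coh / 2) * chi2 with 2*n_ncoh dof
thresh = 0.5 * assumed_noise_var * n_coh * stats.chi2.ppf(1 - pfa, 2 * n_ncoh)

false_alarms, trials = 0, 5000
for _ in range(trials):
    noise = np.sqrt(true_noise_var / 2) * (rng.standard_normal(n_coh * n_ncoh)
                                           + 1j * rng.standard_normal(n_coh * n_ncoh))
    false_alarms += test_statistic(noise, n_coh, n_ncoh) > thresh

print(f"target Pfa = {pfa}, measured Pfa = {false_alarms / trials:.4f}")
```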

Relevance: 100.00%

Publisher:

Abstract:

We address the problem of local-polynomial modeling of smooth time-varying signals with unknown functional form in the presence of additive noise. The problem is formulated in the time domain, and the polynomial coefficients are estimated in the pointwise minimum mean square error (PMMSE) sense. The choice of the window length for local modeling introduces a bias-variance tradeoff, which we solve optimally by using the intersection-of-confidence-intervals (ICI) technique. The combination of the local polynomial model and the ICI technique gives rise to an adaptive signal model equipped with a time-varying PMMSE-optimal window length whose performance is superior to that obtained with a fixed window length. We also evaluate the sensitivity of the ICI technique with respect to the confidence interval width. Simulation results on electrocardiogram (ECG) signals show that, at 0 dB signal-to-noise ratio (SNR), one can achieve about 12 dB improvement in SNR. Monte Carlo performance analysis shows that the performance is comparable to that of basic wavelet techniques: for 0 dB SNR, the adaptive window technique yields about 2-3 dB higher SNR than wavelet regression techniques, and for SNRs greater than 12 dB, the wavelet techniques yield about 2 dB higher SNR.
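
A minimal sketch of the ICI rule for a pointwise window length, simplified to symmetric windows, a quadratic local fit and a known noise standard deviation; the PMMSE formulation and ECG data of the paper are not reproduced.

```python
# Minimal sketch of local-polynomial smoothing with the intersection-of-
# confidence-intervals (ICI) rule for a pointwise window length. Simplified:
# symmetric windows, quadratic fit, known noise standard deviation.
import numpy as np

def fit_at_center(y_win, order=2):
    """Fitted value at the window centre and the norm of its estimator weights."""
    n = len(y_win)
    t = np.arange(n) - n // 2
    X = np.vander(t, order + 1, increasing=True)      # columns [1, t, t^2]
    w = np.linalg.solve(X.T @ X, X.T)[0]              # hat-row for the constant term
    return w @ y_win, np.linalg.norm(w)

def ici_estimate(y, idx, sigma_n, windows=(5, 9, 17, 33, 65), gamma=2.0):
    """ICI rule: grow the window while the confidence intervals still intersect."""
    lo, hi = -np.inf, np.inf
    best = y[idx]
    for h in windows:
        half = h // 2
        if idx - half < 0 or idx + half >= len(y):
            break
        est, wnorm = fit_at_center(y[idx - half: idx + half + 1])
        lo = max(lo, est - gamma * sigma_n * wnorm)
        hi = min(hi, est + gamma * sigma_n * wnorm)
        if lo > hi:
            break                                     # intervals stopped intersecting
        best = est
    return best

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)
noisy = clean + 0.3 * rng.standard_normal(len(t))
denoised = np.array([ici_estimate(noisy, i, 0.3) for i in range(len(t))])

print("input SNR :", 10 * np.log10(np.var(clean) / np.var(noisy - clean)))
print("output SNR:", 10 * np.log10(np.var(clean) / np.var(denoised - clean)))
```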

Relevance: 100.00%

Publisher:

Abstract:

In this article we review the current status of the modelling of both thermotropic and lyotropic liquid crystals. We discuss various coarse-graining schemes as well as simulation techniques such as Monte Carlo (MC) and molecular dynamics (MD) simulations. In the area of MC simulations we discuss in detail the algorithm for simulating hard objects such as spherocylinders of various aspect ratios, where the excluded volume interaction enters the simulation through an overlap test. We use this technique to study the phase diagram of a special class of thermotropic liquid crystals, namely banana liquid crystals. Next we discuss a coarse-grained model of surfactant molecules and study the self-assembly of the surfactant oligomers using MD simulations. Finally we discuss an atomistically informed coarse-grained description of the lipid molecules used to study the gel to liquid-crystalline phase transition in the lipid bilayer system.
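
A minimal sketch of the overlap test mentioned above, under the usual convention that two hard spherocylinders of diameter D overlap iff the minimum distance between their axis segments is less than D; the segment-segment distance follows the standard closest-point algorithm.

```python
# Minimal sketch of the hard-spherocylinder overlap test used in such MC
# simulations: two spherocylinders of diameter D overlap iff the minimum
# distance between their axis segments is below D (closest-point algorithm
# for two segments; cf. Ericson, "Real-Time Collision Detection").
import numpy as np

def segment_distance(p1, q1, p2, q2, eps=1e-12):
    """Minimum distance between segments p1-q1 and p2-q2."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    if a <= eps and e <= eps:                          # both segments are points
        return np.linalg.norm(r)
    if a <= eps:
        s, t = 0.0, np.clip(f / e, 0.0, 1.0)
    else:
        c = d1 @ r
        if e <= eps:
            t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
        else:
            b = d1 @ d2
            denom = a * e - b * b
            s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > eps else 0.0
            t = (b * s + f) / e
            if t < 0.0:
                t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
            elif t > 1.0:
                t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

def spherocylinders_overlap(r1, u1, r2, u2, length, diameter):
    """Centres r1, r2; unit axes u1, u2; cylindrical length L; diameter D."""
    half = 0.5 * length
    return segment_distance(r1 - half * u1, r1 + half * u1,
                            r2 - half * u2, r2 + half * u2) < diameter

# Example with aspect ratio L/D = 5: parallel rods at centre separations 0.9 and 1.5
u = np.array([0.0, 0.0, 1.0])
print(spherocylinders_overlap(np.zeros(3), u, np.array([0.9, 0.0, 0.0]), u, 5.0, 1.0))  # True
print(spherocylinders_overlap(np.zeros(3), u, np.array([1.5, 0.0, 0.0]), u, 5.0, 1.0))  # False
```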

Relevance: 100.00%

Publisher:

Abstract:

This paper proposes an algorithm for joint data detection and tracking of the dominant singular mode of a time-varying channel at the transmitter and receiver of a time-division duplex multiple-input multiple-output beamforming system. The proposed method is a modified expectation-maximization algorithm which uses an initial estimate to blindly track the dominant modes of the channel at the transmitter and the receiver while simultaneously detecting the unknown data. Furthermore, the estimates are constrained to lie within a confidence interval of the previous estimate in order to improve the tracking performance and mitigate the effect of error propagation. Monte Carlo simulation results for the symbol error rate and the mean square inner product between the estimated and the true singular vectors are plotted to show the performance benefits offered by the proposed method compared to existing techniques.
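
Not the paper's modified EM algorithm, but a minimal sketch of the general idea of tracking a dominant singular mode: one power-iteration step per channel update, warm-started from the previous estimate, applied to a slowly varying synthetic MIMO channel. The channel model and dimensions are assumptions.

```python
# Minimal sketch (not the paper's modified-EM method): tracking the dominant
# singular mode of a slowly varying MIMO channel with one power-iteration step
# per channel update, warm-started from the previous estimate.
import numpy as np

rng = np.random.default_rng(6)
nr, nt, steps = 4, 4, 200

H = rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))
v = rng.standard_normal(nt) + 1j * rng.standard_normal(nt)
v /= np.linalg.norm(v)

losses = []
for k in range(steps):
    # Channel evolves slowly (first-order Gauss-Markov model, illustrative)
    H = 0.995 * H + np.sqrt(1 - 0.995 ** 2) * (rng.standard_normal((nr, nt))
                                               + 1j * rng.standard_normal((nr, nt)))
    # One tracking step: power iteration on H^H H, warm-started from previous v
    v = H.conj().T @ (H @ v)
    v /= np.linalg.norm(v)
    u = H @ v
    u /= np.linalg.norm(u)                   # receive beamformer (dominant left vector)
    # Alignment with the true dominant right singular vector
    v_true = np.linalg.svd(H)[2].conj().T[:, 0]
    losses.append(1.0 - np.abs(v_true.conj() @ v) ** 2)

print("mean alignment loss (1 - |<v_true, v>|^2):", np.mean(losses[20:]))
```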

Relevance: 100.00%

Publisher:

Abstract:

The uncertainty in material properties and traffic characterization in the design of flexible pavements has led to significant efforts in recent years to incorporate reliability methods and probabilistic design procedures for the design, rehabilitation, and maintenance of pavements. In the mechanistic-empirical (ME) design of pavements, despite the existence of multiple failure modes, the design criteria applied in the majority of analytical pavement design methods guard only against fatigue cracking and subgrade rutting, which are usually treated as independent failure events. This study carries out a reliability analysis of a flexible pavement section for these failure criteria based on the first-order reliability method (FORM), the second-order reliability method (SORM), and crude Monte Carlo simulation. Through a sensitivity analysis, the surface layer thickness was identified as the most critical parameter affecting the design reliability for both the fatigue and rutting failure criteria. However, reliability analysis in pavement design is most useful if it can be efficiently and accurately applied both to the components of pavement design and to the combination of these components in an overall system analysis. The study shows that, for the pavement section considered, there is a high degree of dependence between the two failure modes, and it demonstrates that the probability of simultaneous occurrence of failures can be almost as high as the probability of the component failures. The need to consider system reliability in pavement analysis is therefore highlighted, and the study indicates that improving pavement performance should be tackled by reducing this undesirable event of simultaneous failure, not merely by considering the more critical failure mode. Furthermore, the probability of simultaneous occurrence of failures is seen to increase considerably with small increments in the mean traffic load, which also results in wider system reliability bounds. The study also advocates the use of narrow bounds on the probability of failure, which provide a better estimate of the probability of failure, as validated against the results obtained from Monte Carlo simulation (MCS).
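
A minimal sketch of the crude Monte Carlo part, with invented limit-state functions that share the surface-layer thickness so the two failure modes are correlated: it reports component, simultaneous and series-system failure probabilities together with the simple first-order bounds.

```python
# Minimal sketch: crude Monte Carlo for two correlated failure modes (stand-ins
# for fatigue and rutting limit states that share the surface-layer thickness),
# reporting component, simultaneous, and series-system failure probabilities
# with the simple first-order bounds. Limit states and parameters are invented.
import numpy as np

rng = np.random.default_rng(8)
n = 1_000_000

# Shared random variables (illustrative): surface thickness, modulus, traffic
thickness = rng.normal(150.0, 12.0, n)                 # mm
modulus   = rng.lognormal(np.log(3000.0), 0.15, n)     # MPa
traffic   = rng.lognormal(np.log(2e6), 0.25, n)        # ESALs

# Toy limit-state functions: g <= 0 means failure. Both depend on thickness
# and traffic, which is what couples the two modes.
g_fatigue = 4e9 * (thickness / 150.0) ** 3 * (modulus / 3000.0) - traffic * 1200.0
g_rutting = 6e9 * (thickness / 150.0) ** 2.5 - traffic * 1800.0

fail_f = g_fatigue <= 0
fail_r = g_rutting <= 0

p_f, p_r = fail_f.mean(), fail_r.mean()
p_both = (fail_f & fail_r).mean()
p_sys  = (fail_f | fail_r).mean()

print(f"P(fatigue) = {p_f:.4f}, P(rutting) = {p_r:.4f}")
print(f"P(simultaneous failure) = {p_both:.4f}")
print(f"P(system failure) = {p_sys:.4f}, "
      f"first-order bounds [{max(p_f, p_r):.4f}, {min(1.0, p_f + p_r):.4f}]")
```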