963 results for radiotherapy treatments, Monte Carlo techniques
Abstract:
This study considers linear filtering methods for minimising the end-to-end average distortion of a fixed-rate source quantisation system. For the source encoder, both scalar and vector quantisation are considered. The codebook index output by the encoder is sent over a noisy discrete memoryless channel whose statistics could be unknown at the transmitter. At the receiver, the code vector corresponding to the received index is passed through a linear receive filter, whose output is an estimate of the source instantiation. Under this setup, an approximate expression for the average weighted mean-square error (WMSE) between the source instantiation and the reconstructed vector at the receiver is derived using high-resolution quantisation theory. Also, a closed-form expression for the linear receive filter that minimises the approximate average WMSE is derived. The generality of the developed framework is further demonstrated by theoretically analysing the performance of other adaptation techniques that can be employed when the channel statistics are also available at the transmitter, such as joint transmit-receive linear filtering and codebook scaling. Monte Carlo simulation results validate the theoretical expressions, and illustrate the improvement in the average distortion that can be obtained using linear filtering techniques.
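As a toy illustration of the receive-filtering idea (not the paper's WMSE derivation), the sketch below quantises a Gaussian source with a 2-bit scalar quantiser, sends the index bits over a binary symmetric channel, and applies the empirical linear MMSE (Wiener) receive filter. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian source, 2-bit scalar quantiser, index bits over a BSC(p).
n = 200_000
x = rng.normal(size=n)
codebook = np.array([-1.5, -0.5, 0.5, 1.5])
idx = np.argmin(np.abs(x[:, None] - codebook[None, :]), axis=1)

# Flip each of the two index bits independently with probability p.
p = 0.1
hi, lo = (idx >> 1) & 1, idx & 1
f_hi = (rng.random(n) < p).astype(int)
f_lo = (rng.random(n) < p).astype(int)
rx_idx = ((hi ^ f_hi) << 1) | (lo ^ f_lo)
y = codebook[rx_idx]                      # decoder output before filtering

# Empirical linear MMSE (Wiener) receive filter: w = E[xy] / E[y^2].
w = np.mean(x * y) / np.mean(y * y)
mse_plain = np.mean((x - y) ** 2)         # no receive filter (w = 1)
mse_filt = np.mean((x - w * y) ** 2)      # filtered reconstruction
```

Since w = 1 (no filtering) is one candidate in the minimisation, the Wiener solution can only reduce the end-to-end MSE relative to using the received code value directly.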
Abstract:
It is well known that the impulse response of a wide-band wireless channel is approximately sparse, in the sense that it has a small number of significant components relative to the channel delay spread. In this paper, we consider the estimation of the unknown channel coefficients and their support in OFDM systems using a sparse Bayesian learning (SBL) framework for exact inference. In a quasi-static, block-fading scenario, we employ the SBL algorithm for channel estimation and propose a joint SBL (J-SBL) and a low-complexity recursive J-SBL algorithm for joint channel estimation and data detection. In a time-varying scenario, we use a first-order autoregressive model for the wireless channel and propose a novel, recursive, low-complexity Kalman filtering-based SBL (KSBL) algorithm for channel estimation. We generalize the KSBL algorithm to obtain the recursive joint KSBL algorithm that performs joint channel estimation and data detection. Our algorithms can efficiently recover a group of approximately sparse vectors even when the measurement matrix is partially unknown due to the presence of unknown data symbols. Moreover, the algorithms can fully exploit the correlation structure in the multiple measurements. Monte Carlo simulations illustrate the efficacy of the proposed techniques in terms of the mean-square error and bit error rate performance.
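A minimal sketch of the SBL/EM iteration underlying these algorithms, for the pilot-only case with a fully known measurement matrix (the joint data-detection and Kalman extensions are omitted); dimensions, supports, and noise levels are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse vector: 3 active taps out of N = 50; M = 30 pilot measurements.
N, M, sigma2 = 50, 30, 1e-4
support = np.array([4, 17, 33])
w_true = np.zeros(N)
w_true[support] = [5.0, -4.0, 3.0]
Phi = rng.normal(size=(M, N)) / np.sqrt(M)
y = Phi @ w_true + np.sqrt(sigma2) * rng.normal(size=M)

# SBL: one Gaussian hyperprior variance gamma_i per coefficient,
# learned by EM; a small gamma_i prunes coefficient i.
gamma = np.ones(N)
for _ in range(50):
    # E-step: Gaussian posterior over w for the current gamma.
    Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(1.0 / gamma))
    mu = Sigma @ Phi.T @ y / sigma2
    # M-step: closed-form hyperparameter update.
    gamma = mu**2 + np.diag(Sigma)

est_support = np.sort(np.argsort(gamma)[-3:])
```

In this easy regime the learned hyperparameters concentrate on the true support and the posterior mean recovers the tap values.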
Abstract:
The standard approach to signal reconstruction in frequency-domain optical-coherence tomography (FDOCT) is to apply the inverse Fourier transform to the measurements. This technique offers limited resolution (due to Heisenberg's uncertainty principle). We propose a new super-resolution reconstruction method based on a parametric representation. We consider multilayer specimens, wherein each layer has a constant refractive index, and show that the backscattered signal from such a specimen fits accurately into the framework of the finite-rate-of-innovation (FRI) signal model and is represented by a finite number of free parameters. We deploy the high-resolution Prony method and show that high-quality, super-resolved reconstruction is possible with fewer measurements (about one-fourth of the number required for the standard Fourier technique). To further improve robustness to noise in practical scenarios, we take advantage of an iterated singular-value decomposition algorithm (Cadzow denoiser). We present results of Monte Carlo analyses, and assess statistical efficiency of the reconstruction techniques by comparing their performance against the Cramer-Rao bound. Reconstruction results on experimental data obtained from technical as well as biological specimens show a distinct improvement in resolution and signal-to-reconstruction noise offered by the proposed method in comparison with the standard approach.
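The core FRI step can be illustrated in a few lines: for a noiseless sum of K complex exponentials, Prony's annihilating-filter system recovers the underlying frequencies exactly from a handful of samples (toy frequencies, not FDOCT data; the Cadzow denoising stage is omitted).

```python
import numpy as np

# Noiseless sum of K = 2 complex exponentials.
K = 2
f_true = np.array([0.12, 0.31])           # normalised frequencies
a = np.array([1.0, 0.7])                  # amplitudes
n = np.arange(4)
x = (a * np.exp(2j * np.pi * np.outer(n, f_true))).sum(axis=1)

# Annihilating filter h = [1, h1, h2]: sum_k h[k] x[n-k] = 0 for n >= K.
A = np.array([[x[1], x[0]],
              [x[2], x[1]]])
b = -x[2:4]
h_tail = np.linalg.solve(A, b)

# The filter's roots are exp(2*pi*i*f_k): read the frequencies off them.
roots = np.roots(np.concatenate(([1.0], h_tail)))
f_est = np.sort(np.angle(roots) / (2 * np.pi))
```

In the noiseless case the recovery is exact up to numerical precision, which is why far fewer measurements suffice than for the Fourier approach.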
Abstract:
In this work, we address the recovery of block-sparse vectors with intra-block correlation, i.e., vectors in which the correlated nonzero entries are constrained to lie in a few clusters, from noisy underdetermined linear measurements. Among Bayesian sparse recovery techniques, the cluster sparse Bayesian learning (SBL) algorithm is an efficient tool for recovering block-sparse vectors with intra-block correlation. However, this technique uses a heuristic method to estimate the intra-block correlation. In this paper, we propose the Nested SBL (NSBL) algorithm, which we derive using a novel Bayesian formulation that facilitates the use of the monotonically convergent nested Expectation Maximization (EM) and a Kalman filtering based learning framework. Unlike the cluster-SBL algorithm, this formulation leads to closed-form EM updates for estimating the correlation coefficient. We demonstrate the efficacy of the proposed NSBL algorithm using Monte Carlo simulations.
Abstract:
We address the problem of two-dimensional (2-D) phase retrieval from the magnitude of the Fourier spectrum. We consider 2-D signals that are characterized by first-order difference equations, which have a parametric representation in the Fourier domain. We show that, under appropriate stability conditions, such signals can be reconstructed uniquely from the Fourier transform magnitude. We formulate the phase retrieval problem as one of computing the parameters that uniquely determine the signal. We show that the problem can be solved by employing the annihilating filter method, particularly for the case when the parameters are distinct. For the more general case of repeated parameters, the annihilating filter method is not applicable. We circumvent the problem by employing the algebraically coupled matrix pencil (ACMP) method. In the noiseless measurement setup, exact phase retrieval is possible. We also establish a link between the proposed analysis and the 2-D cepstrum. In the noisy case, we derive Cramer-Rao lower bounds (CRLBs) on the estimates of the parameters and present Monte Carlo performance analysis as a function of the noise level. Comparisons with state-of-the-art techniques in terms of signal reconstruction accuracy show that the proposed technique outperforms the Fienup and relaxed averaged alternating reflections (RAAR) algorithms in the presence of noise.
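A basic identity behind Fourier-magnitude phase retrieval is that the squared spectrum magnitude carries exactly the signal's autocorrelation; the toy check below verifies this numerically (it is background for the approach, not the ACMP algorithm itself).

```python
import numpy as np

# Squared Fourier magnitude <-> autocorrelation (zero-padded so the
# circular correlation equals the linear one).
x = np.array([1.0, -0.5, 0.25, 0.0])
N = 2 * len(x)
acf_from_mag = np.fft.ifft(np.abs(np.fft.fft(x, N)) ** 2).real

# Direct linear autocorrelation; positive lags 0..3 sit at the start
# of the IFFT output.
acf_direct = np.correlate(x, x, mode="full")[len(x) - 1:]
```

Phase retrieval is thus equivalent to recovering a signal from its autocorrelation, which is what makes parametric signal models (and the cepstral link) so useful.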
Abstract:
The impulse responses of the wireless channels between the N-t transmit and N-r receive antennas of a MIMO-OFDM system are group approximately sparse (ga-sparse), i.e., the channels have a small number of significant paths relative to the channel delay spread and the time-lags of the significant paths between transmit and receive antenna pairs coincide. Often, wireless channels are also group approximately cluster-sparse (gac-sparse), i.e., every ga-sparse channel consists of clusters, where a few clusters have all strong components while most clusters have all weak components. In this paper, we cast the problem of estimating the ga-sparse and gac-sparse block-fading and time-varying channels in the sparse Bayesian learning (SBL) framework and propose a bouquet of novel algorithms for pilot-based channel estimation, and joint channel estimation and data detection, in MIMO-OFDM systems. The proposed algorithms are capable of estimating the sparse wireless channels even when the measurement matrix is only partially known. Further, we employ a first-order autoregressive modeling of the temporal variation of the ga-sparse and gac-sparse channels and propose a recursive Kalman filtering and smoothing (KFS) technique for joint channel estimation, tracking, and data detection. We also propose novel, parallel-implementation based, low-complexity techniques for estimating gac-sparse channels. Monte Carlo simulations illustrate the benefit of exploiting the gac-sparse structure in the wireless channel in terms of the mean square error (MSE) and coded bit error rate (BER) performance.
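As a stand-in for the KFS recursion, the sketch below tracks a single AR(1) channel tap from noisy pilot observations with a scalar Kalman filter; the AR coefficient and noise levels are illustrative, and the multi-antenna, multi-tap structure is collapsed to one tap for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar AR(1) channel tap with stationary unit variance.
T = 500
rho, q, r = 0.99, 1 - 0.99**2, 0.1
h = np.zeros(T)
h[0] = rng.normal()
for t in range(1, T):
    h[t] = rho * h[t - 1] + np.sqrt(q) * rng.normal()
y = h + np.sqrt(r) * rng.normal(size=T)   # pilot observations (x_t = 1)

# Kalman recursion: predict with the AR model, then correct with gain k.
h_hat = np.zeros(T)
p = 1.0
for t in range(T):
    h_pred = rho * h_hat[t - 1] if t > 0 else 0.0
    p_pred = rho**2 * p + q if t > 0 else 1.0
    k = p_pred / (p_pred + r)
    h_hat[t] = h_pred + k * (y[t] - h_pred)
    p = (1 - k) * p_pred

mse_kalman = np.mean((h_hat - h) ** 2)
mse_raw = np.mean((y - h) ** 2)           # using pilots directly
```

Because the AR prior is strongly correlated (rho = 0.99), the tracked estimate is substantially more accurate than the raw per-symbol observations.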
Abstract:
In this paper, we study two multi-dimensional Goodness-of-Fit tests for spectrum sensing in cognitive radios. The multi-dimensional scenario refers to multiple CR nodes, each with multiple antennas, that record multiple observations from multiple primary users for spectrum sensing. These tests, viz., the Interpoint Distance (ID) based test and the h, f distance based tests, are constructed based on the properties of stochastic distances. The ID test is studied in detail for the single CR node case, and a possible extension to handle multiple nodes is discussed. On the other hand, the h, f test is applicable in a multi-node setup. A robustness feature of the KL distance based test is discussed, which has connections with Middleton's class A model. Through Monte Carlo simulations, the proposed tests are shown to outperform existing techniques such as the eigenvalue ratio based test, John's test, and the sphericity test, in several scenarios.
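The interpoint-distance idea can be sketched with the (closely related) energy statistic: calibrate a threshold by Monte Carlo under the noise-only hypothesis, then compare the observed statistic against it. The statistic, sample sizes, and SNR below are stand-ins, not the paper's exact ID test.

```python
import numpy as np

rng = np.random.default_rng(3)

def energy_stat(u, v):
    """Energy distance between two 1-D samples, built entirely from
    interpoint distances."""
    d_uv = np.abs(u[:, None] - v[None, :]).mean()
    d_uu = np.abs(u[:, None] - u[None, :]).mean()
    d_vv = np.abs(v[:, None] - v[None, :]).mean()
    return 2 * d_uv - d_uu - d_vv

n = 100
ref = rng.normal(size=n)                  # noise-only reference sample

# Calibrate the detection threshold under H0 by Monte Carlo.
null_stats = [energy_stat(rng.normal(size=n), ref) for _ in range(500)]
thresh = np.quantile(null_stats, 0.95)

# H1: a primary-user signal raises the received power (variance 1 -> 4).
obs = 2.0 * rng.normal(size=n)
detected = energy_stat(obs, ref) > thresh
```

The appeal of such distance-based statistics is that no parametric model of the primary-user signal is required.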
Abstract:
We present methods for fixed-lag smoothing using Sequential Importance Sampling (SIS) on a discrete non-linear, non-Gaussian state space system with unknown parameters. Our particular application is in the field of digital communication systems. Each input data point is taken from a finite set of symbols. We represent the transmission medium as a fixed filter with a finite impulse response (FIR); hence, a discrete state space system is formed. Conventional Markov chain Monte Carlo (MCMC) techniques such as the Gibbs sampler are unsuitable for this task because they can only process data in batches. Since data arrive sequentially, it is sensible to process them in this way. In addition, many communication systems are interactive, so there is a maximum level of latency that can be tolerated before a symbol is decoded. We demonstrate this method by simulation and compare its performance to existing techniques.
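A bootstrap particle filter (sequential importance sampling with resampling at every step) on a toy linear-Gaussian state space model conveys the sequential-processing point; the FIR channel and finite symbol alphabet of the actual system are replaced by a continuous state for brevity.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy linear-Gaussian state space model.
T, P = 200, 500
a, q, r = 0.9, 0.5, 0.5
x = np.zeros(T)
y = np.zeros(T)
for t in range(T):
    x[t] = (a * x[t - 1] if t else 0.0) + np.sqrt(q) * rng.normal()
    y[t] = x[t] + np.sqrt(r) * rng.normal()

# Bootstrap particle filter: propose from the prior, weight by the
# likelihood, resample every step to curb weight degeneracy.
parts = rng.normal(size=P)
est = np.zeros(T)
for t in range(T):
    parts = a * parts + np.sqrt(q) * rng.normal(size=P)
    logw = -0.5 * (y[t] - parts) ** 2 / r
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = w @ parts
    parts = parts[rng.choice(P, size=P, p=w)]

mse_pf = np.mean((est - x) ** 2)
mse_obs = np.mean((y - x) ** 2)
```

Each observation is absorbed as it arrives, which is exactly the latency advantage over batch MCMC noted above.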
Abstract:
In this paper, methods are developed for the enhancement and analysis of autoregressive moving average (ARMA) signals observed in additive noise that can be represented as a mixture of heavy-tailed non-Gaussian sources and a Gaussian background component. Such models find application in systems such as atmospheric communications channels or early sound recordings, which are prone to intermittent impulse noise. Markov chain Monte Carlo (MCMC) simulation techniques are applied to the joint problem of signal extraction, model parameter estimation and detection of impulses within a fully Bayesian framework. The algorithms require only simple linear iterations for all of the unknowns, including the MA parameters, in contrast with existing MCMC methods for the analysis of noise-free ARMA models. The methods are illustrated using synthetic data and noise-degraded sound recordings.
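A stripped-down piece of such a sampler: Gibbs updates for a Gaussian-mixture impulse-noise model, alternating between per-sample impulse indicators and the impulse probability (the ARMA signal and parameter blocks of the full method are omitted; the variances are assumed known, and all numbers are illustrative).

```python
import numpy as np

rng = np.random.default_rng(5)

# y_t ~ (1 - lam) N(0, s0) + lam N(0, s1): background plus impulses.
T, s0, s1, lam_true = 2000, 1.0, 100.0, 0.1
z_true = rng.random(T) < lam_true
y = np.where(z_true, np.sqrt(s1), np.sqrt(s0)) * rng.normal(size=T)

lam = 0.5
lam_draws = []
for it in range(500):
    # Sample the impulse indicators given lam (pointwise Bayes rule;
    # the shared Gaussian normalising constant cancels).
    l1 = lam * np.exp(-0.5 * y**2 / s1) / np.sqrt(s1)
    l0 = (1 - lam) * np.exp(-0.5 * y**2 / s0) / np.sqrt(s0)
    z = rng.random(T) < l1 / (l0 + l1)
    # Sample lam given the indicators (Beta(1,1) prior -> Beta posterior).
    lam = rng.beta(1 + z.sum(), 1 + T - z.sum())
    if it >= 100:
        lam_draws.append(lam)

lam_hat = np.mean(lam_draws)
```

Both conditional draws are available in closed form, which is the "simple iterations" property the abstract highlights.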
Abstract:
Many problems in control and signal processing can be formulated as sequential decision problems for general state space models. However, except for some simple models, one cannot obtain analytical solutions and has to resort to approximation. In this thesis, we investigate problems where Sequential Monte Carlo (SMC) methods can be combined with a gradient based search to provide solutions to online optimisation problems. We summarise the main contributions of the thesis as follows. Chapter 4 focuses on solving the sensor scheduling problem when cast as a controlled Hidden Markov Model. We consider the case in which the state, observation and action spaces are continuous. This general case is important as it is the natural framework for many applications. In sensor scheduling, our aim is to minimise the variance of the estimation error of the hidden state with respect to the action sequence. We present a novel SMC method that uses a stochastic gradient algorithm to find optimal actions. This is in contrast to existing works in the literature that only solve approximations to the original problem. In Chapter 5 we present how SMC can be used to solve a risk-sensitive control problem. We adopt the Feynman-Kac representation of a controlled Markov chain flow and exploit the properties of the logarithmic Lyapunov exponent, which lead to a policy gradient solution for the parameterised problem. The resulting SMC algorithm follows a structure similar to the Recursive Maximum Likelihood (RML) algorithm for online parameter estimation. In Chapters 6, 7 and 8, dynamic graphical models are combined with state space models for the purpose of online decentralised inference. We concentrate on the distributed parameter estimation problem using two Maximum Likelihood techniques, namely Recursive Maximum Likelihood (RML) and Expectation Maximization (EM). The resulting algorithms can be interpreted as an extension of the Belief Propagation (BP) algorithm to compute likelihood gradients. In order to design an SMC algorithm, Chapter 8 uses nonparametric approximations for Belief Propagation. The algorithms were successfully applied to solve the sensor localisation problem for sensor networks of small and medium size.
Abstract:
Transmission investments are currently needed to meet an increasing electricity demand, to address security of supply concerns, and to reach carbon-emissions targets. A key issue when assessing the benefits from an expanded grid concerns the valuation of the uncertain cash flows that result from the expansion. We propose a valuation model that accommodates both physical and economic uncertainties following the Real Options approach. It combines optimization techniques with Monte Carlo simulation. We illustrate the use of our model in a simplified, two-node grid and assess the decision whether to invest or not in a particular upgrade. The generation mix includes coal- and natural gas-fired stations that operate under carbon constraints. The underlying parameters are estimated from observed market data.
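A minimal version of the valuation idea, assuming the upgrade's uncertain value follows a geometric Brownian motion under the risk-neutral measure: Monte Carlo prices the option to invest at a fixed future date and shows it exceeds the value of investing immediately. All numbers are illustrative, not the paper's calibration, and the optimization layer is omitted.

```python
import numpy as np

rng = np.random.default_rng(6)

# Option to invest at cost K in year T, when the project value V_T is
# lognormal: worth exp(-rT) E[max(V_T - K, 0)].
V0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.25, 2.0
n = 200_000
z = rng.normal(size=n)
VT = V0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
option_value = np.exp(-r * T) * np.maximum(VT - K, 0.0).mean()
invest_now = max(V0 - K, 0.0)   # value of exercising immediately
```

Here investing immediately is worth nothing (V0 = K), yet the option to wait has substantial value: the core Real Options insight.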
Abstract:
Micro-scale gas flows are usually low-speed flows and exhibit rarefied gas effects. It is challenging to simulate these flows because the traditional CFD method is unable to capture the rarefied gas effects and the direct simulation Monte Carlo (DSMC) method is very inefficient for low-speed flows. In this study, we combine two techniques to improve the efficiency of the DSMC method. The information preservation (IP) technique is used to reduce the statistical noise, and the cell-size relaxed technique is employed to increase the effective cell size. The new cell-size relaxed IP method is found capable of simulating micro-scale gas flows, as shown for 2D lid-driven cavity flows.
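The statistical-noise problem, in one toy computation: a 1 m/s flow velocity is buried in ~300 m/s thermal motion, so the plain DSMC sample mean over N particles carries O(300/sqrt(N)) noise, while an information-preservation (IP) variable that carries only the macroscopic velocity does not. This is a caricature of the mechanism, not a DSMC implementation; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

u_flow, v_thermal = 1.0, 300.0            # mean flow vs thermal speed
N = 10_000
v_dsmc = u_flow + v_thermal * rng.normal(size=N)  # instantaneous velocities
ip = np.full(N, u_flow)                   # IP values exclude thermal scatter

err_dsmc = abs(v_dsmc.mean() - u_flow)    # ~ 300/sqrt(N) = 3 m/s noise
err_ip = abs(ip.mean() - u_flow)          # essentially zero
```

With thermal noise dominating by two orders of magnitude, brute-force averaging would need enormous sample counts to resolve the flow, which is the inefficiency the IP technique targets.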
Abstract:
The search for reliable proxies of past deep ocean temperature and salinity has proved difficult, thereby limiting our ability to understand the coupling of ocean circulation and climate over glacial-interglacial timescales. Previous inferences of deep ocean temperature and salinity from sediment pore fluid oxygen isotopes and chlorinity indicate that the deep ocean density structure at the Last Glacial Maximum (LGM, approximately 20,000 years BP) was set by salinity, and that the density contrast between northern and southern sourced deep waters was markedly greater than in the modern ocean. High density stratification could help explain the marked contrast in carbon isotope distribution recorded in the LGM ocean relative to that we observe today, but what made the ocean's density structure so different at the LGM? How did it evolve from one state to another? Further, given the sparsity of the LGM temperature and salinity data set, what else can we learn by increasing the spatial density of proxy records?
We investigate the cause and feasibility of a highly salinity-stratified deep ocean at the LGM, and we work to increase the amount of information we can glean about the past ocean from pore fluid profiles of oxygen isotopes and chloride. Using a coupled ocean--sea ice--ice shelf cavity model, we test whether the deep ocean density structure at the LGM can be explained by ice--ocean interactions over the Antarctic continental shelves, and show that a large contribution to the LGM salinity stratification can be explained through lower ocean temperature. In order to extract the maximum information from pore fluid profiles of oxygen isotopes and chloride, we evaluate several inverse methods for ill-posed problems and their ability to recover bottom water histories from sediment pore fluid profiles. We demonstrate that Bayesian Markov Chain Monte Carlo parameter estimation techniques enable us to robustly recover the full solution space of bottom water histories, not only at the LGM, but through the most recent deglaciation and the Holocene up to the present. Finally, we evaluate a non-destructive pore fluid sampling technique, Rhizon samplers, in comparison to traditional squeezing methods, and show that despite their promise, Rhizons are unlikely to be a good sampling tool for pore fluid measurements of oxygen isotopes and chloride.
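A minimal Metropolis-Hastings sketch of the Bayesian MCMC estimation step, reduced to recovering a single "bottom-water" parameter from noisy observations under a flat prior (the real diffusion forward model is replaced by a direct-observation toy; all numbers are illustrative).

```python
import numpy as np

rng = np.random.default_rng(8)

# Noisy observations of one unknown parameter theta.
theta_true, noise = 0.8, 0.2
data = theta_true + noise * rng.normal(size=50)

def log_post(th):
    """Flat prior + Gaussian likelihood, up to a constant."""
    return -0.5 * np.sum((data - th) ** 2) / noise**2

# Random-walk Metropolis-Hastings.
th, lp = 0.0, log_post(0.0)
draws = []
for it in range(5000):
    prop = th + 0.1 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
        th, lp = prop, lp_prop
    if it >= 1000:                            # discard burn-in
        draws.append(th)

theta_hat = np.mean(draws)
```

Beyond the point estimate, the retained draws characterise the full posterior, which is what "recovering the full solution space" of bottom water histories refers to.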
Abstract:
The interactions of N2, formic acid and acetone on the Ru(001) surface are studied using thermal desorption mass spectrometry (TDMS), electron energy loss spectroscopy (EELS), and computer modeling.
Low energy electron diffraction (LEED), EELS and TDMS were used to study chemisorption of N2 on Ru(001). Adsorption at 75 K produces two desorption states. Adsorption at 95 K fills only the higher energy desorption state and produces a (√3 x √3)R30° LEED pattern. EEL spectra indicate both desorption states are populated by N2 molecules bonded "on-top" of Ru atoms.
Monte Carlo simulation results are presented on Ru(001) using a kinetic lattice gas model with precursor mediated adsorption, desorption and migration. The model gives good agreement with experimental data. The island growth rate was computed using the same model and is well fit by R(t)^m - R(t0)^m = At, with m approximately 8. The island size was determined from the width of the superlattice diffraction feature.
The techniques, algorithms and computer programs used for simulations are documented. Coordinate schemes for indexing sites on a 2-D hexagonal lattice, programs for simulation of adsorption and desorption, techniques for analysis of ordering, and computer graphics routines are discussed.
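In the same spirit as those programs, a minimal grand-canonical lattice-gas Monte Carlo for adsorption/desorption (square rather than hexagonal lattice, non-interacting sites, no precursor kinetics): at equilibrium the coverage should match the Langmuir-type value 1/(1 + exp(-beta*mu)). The lattice size and chemical potential are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

L, beta_mu = 32, 1.0
occ = np.zeros((L, L), dtype=bool)
for _ in range(200_000):
    i, j = rng.integers(L, size=2)        # pick a random site
    if occ[i, j]:
        # Attempt desorption: accept with Metropolis prob exp(-beta*mu).
        if rng.random() < np.exp(-beta_mu):
            occ[i, j] = False
    else:
        # Attempt adsorption: always accepted since exp(+beta*mu) > 1.
        occ[i, j] = True

coverage = occ.mean()
target = 1.0 / (1.0 + np.exp(-beta_mu))   # non-interacting equilibrium
```

Adding nearest-neighbour interactions and hopping moves to such a scheme is what turns it into the kind of island-ordering simulation described above.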
The adsorption of formic acid on Ru(001) has been studied by EELS and TDMS. Large exposures produce a molecular multilayer species. A monodentate formate, bidentate formate, and a hydroxyl species are stable intermediates in formic acid decomposition. The monodentate formate species is converted to the bidentate species by heating. Formic acid decomposition products are CO2, CO, H2, H2O and oxygen adatoms. The ratio of desorbed CO with respect to CO2 increases both with slower heating rates and with lower coverages.
The existence of two different forms of adsorbed acetone, side-on, bonded through the oxygen and acyl carbon, and end-on, bonded through the oxygen, has been verified by EELS. On Pt(111), only the end-on species is observed. On clean Ru(001) and p(2 x 2)O precovered Ru(001), both forms coexist. The side-on species is dominant on clean Ru(001), while O stabilizes the end-on form. The end-on form desorbs molecularly. Bonding geometry stability is explained by surface Lewis acidity and by comparison to organometallic coordination complexes.
Abstract:
In a probabilistic assessment of the performance of structures subjected to uncertain environmental loads such as earthquakes, an important problem is to determine the probability that the structural response exceeds some specified limits within a given duration of interest. This problem is known as the first excursion problem, and it has been a challenging problem in the theory of stochastic dynamics and reliability analysis. In spite of the enormous amount of attention the problem has received, there is no procedure available for its general solution, especially for engineering problems of interest where the complexity of the system is large and the failure probability is small.
The application of simulation methods to solving the first excursion problem is investigated in this dissertation, with the objective of assessing the probabilistic performance of structures subjected to uncertain earthquake excitations modeled by stochastic processes. From a simulation perspective, the major difficulty in the first excursion problem comes from the large number of uncertain parameters often encountered in the stochastic description of the excitation. Existing simulation tools are examined, with special regard to their applicability in problems with a large number of uncertain parameters. Two efficient simulation methods are developed to solve the first excursion problem. The first method is developed specifically for linear dynamical systems, and it is found to be extremely efficient compared to existing techniques. The second method is more robust to the type of problem, and it is applicable to general dynamical systems. It is efficient for estimating small failure probabilities because the computational effort grows at a much slower rate with decreasing failure probability than standard Monte Carlo simulation. The simulation methods are applied to assess the probabilistic performance of structures subjected to uncertain earthquake excitation. Failure analysis is also carried out using the samples generated during simulation, which provide insight into the probable scenarios that will occur given that a structure fails.
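The cost issue can be quantified in a few lines: for a toy first-excursion probability p_F = P(X > 3), with a standard normal standing in for the peak structural response, the plain Monte Carlo estimator's coefficient of variation is sqrt((1-p)/(n*p)), so the required sample count grows like 1/p_F. A mean-shifted importance-sampling proposal (one classical remedy, not the dissertation's methods) reaches similar accuracy with far fewer samples.

```python
import numpy as np

rng = np.random.default_rng(10)

# Standard Monte Carlo: p_F = P(X > 3) ~ 1.35e-3 needs ~10^6 samples
# for a few-percent coefficient of variation.
b, n = 3.0, 1_000_000
x = rng.normal(size=n)
p_hat = np.mean(x > b)
cov = np.sqrt((1 - p_hat) / (n * p_hat))   # estimator c.o.v.

# Importance sampling with a proposal centred on the threshold:
# sample z ~ N(b, 1) and reweight by the density ratio N(0,1)/N(b,1).
z = b + rng.normal(size=10_000)
w = np.exp(-b * z + 0.5 * b**2)
p_is = np.mean((z > b) * w)
```

The importance-sampling estimate uses one hundredth of the samples for comparable accuracy, illustrating why tailored simulation methods pay off when failure probabilities are small.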