3 results for Multiple sparse cameras
in Aston University Research Archive
Abstract:
In recent years there has been increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their flexibility when compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution for constructing a probabilistic approximation to the posterior. In the first step we adapt Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is made possible by showing the existence of a parametrisation of the posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs, the KL-divergence between GPs. This second approximation uses a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector (BV) set, and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations over each input, thus approximating a batch solution. The resulting sparse learning algorithm is generic: for different problems we only change the likelihood. The algorithm is applied to a variety of problems, and we examine its performance on classical regression and classification tasks as well as on data assimilation and a simple density estimation problem.
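The two-step construction described above lends itself to a compact implementation. Below is a minimal Python sketch, not taken from the thesis, of an online GP regression update with a Basis Vector set: the posterior is parametrised by coefficients over the BV set, each new example enters through two scalar coefficients derived from its (here Gaussian) likelihood, and inputs whose novelty falls below a tolerance are absorbed by a projected update that does not grow the BV set. The kernel choice, tolerance and hyperparameters are assumptions for illustration.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel between rows of A (n,d) and B (m,d) (assumed choice)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

class SparseOnlineGP:
    """Sketch of online GP regression with a Basis Vector (BV) set.

    Posterior mean at x:     k(BV, x)^T alpha
    Posterior variance at x: k(x, x) + k(BV, x)^T C k(BV, x)
    Inputs whose novelty `gamma` is below `tol` are absorbed without growing the BV set.
    """

    def __init__(self, dim, noise_var=0.1, tol=1e-3, ls=1.0):
        self.noise_var, self.tol, self.ls = noise_var, tol, ls
        self.bv = np.zeros((0, dim))      # BV inputs
        self.alpha = np.zeros(0)          # mean coefficients
        self.C = np.zeros((0, 0))         # covariance correction
        self.Kinv = np.zeros((0, 0))      # inverse kernel matrix on the BV set

    def predict(self, x):
        x = np.atleast_2d(x)
        kss = rbf(x, x, self.ls)[0, 0]
        if len(self.bv) == 0:
            return 0.0, kss
        k = rbf(self.bv, x, self.ls)[:, 0]
        return k @ self.alpha, kss + k @ self.C @ k

    def update(self, x, y):
        x = np.atleast_2d(x)
        mean, var = self.predict(x)
        # Gaussian likelihood: first- and second-order update coefficients
        q = (y - mean) / (var + self.noise_var)
        r = -1.0 / (var + self.noise_var)

        k = rbf(self.bv, x, self.ls)[:, 0] if len(self.bv) else np.zeros(0)
        kss = rbf(x, x, self.ls)[0, 0]
        e = self.Kinv @ k if len(self.bv) else np.zeros(0)
        gamma = kss - k @ e               # novelty of the new input

        if gamma < self.tol and len(self.bv) > 0:
            # projected (sparse) update: BV set unchanged
            s = self.C @ k + e
            self.alpha = self.alpha + q * s
            self.C = self.C + r * np.outer(s, s)
        else:
            # full update: the new input joins the BV set
            s = np.append(self.C @ k, 1.0)
            self.alpha = np.append(self.alpha, 0.0) + q * s
            n = len(self.bv)
            C_new = np.zeros((n + 1, n + 1)); C_new[:n, :n] = self.C
            self.C = C_new + r * np.outer(s, s)
            K_new = np.zeros((n + 1, n + 1)); K_new[:n, :n] = self.Kinv
            ext = np.append(-e, 1.0)
            self.Kinv = K_new + np.outer(ext, ext) / gamma
            self.bv = np.vstack([self.bv, x])
```

Switching to classification or another likelihood would only change how the coefficients q and r are computed, which is the sense in which the abstract calls the algorithm generic.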
Abstract:
The rapidly increasing demand for cellular telephony is placing greater strain on the limited bandwidth resources available. This research is concerned with techniques that enhance the capacity of a Direct-Sequence Code-Division-Multiple-Access (DS-CDMA) mobile telephone network. The capacities of both Private Mobile Radio (PMR) and cellular networks are derived, the many techniques currently available are reviewed, and areas for further investigation are identified. One technique developed here is the sectorisation of a cell into concentric toroidal rings, which is shown to increase system capacity; the scheme is compared with cell clustering and other sectorisation schemes. Another technique increases capacity by adding to the inherent randomness of the transmitted signal so that the system is better able to extract the wanted signal. A system model has been produced for a cellular DS-CDMA network and results are presented for two possible strategies. The first strategy is the variation of the chip duration over a signal bit period; several variation functions are tried, and a sinusoidal function is shown to provide the greatest increase in the maximum number of system users for any given signal-to-noise ratio. The second strategy is the use of additive amplitude modulation together with data/chip phase-shift keying, with the amplitude variations determined by a sparse code so that the average system power is held near its nominal level. This strategy is shown to provide no further capacity because the system is sensitive to amplitude variations. When both strategies are employed, however, this sensitivity is reduced, indicating that the first strategy increases both the capacity and the ability to handle fluctuations in the received signal power.
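As an illustration of the first strategy, the sketch below builds one bit of a DS-CDMA spreading waveform in which the chip durations within the bit period follow a sinusoidal variation law and are renormalised so the bit period is preserved. The modulation depth, code length and function names are assumptions for illustration; the abstract only reports that a sinusoidal variation law gave the largest capacity gain.

```python
import numpy as np

def chip_boundaries(n_chips, bit_period, depth=0.5):
    """Chip edge times within one bit, with durations varied sinusoidally.

    Chip k gets a nominal duration scaled by (1 + depth*sin(2*pi*k/N)), and the
    durations are renormalised so they still sum to the bit period.
    (depth and the exact variation law are assumptions, not the thesis's values.)
    """
    k = np.arange(n_chips)
    durations = 1.0 + depth * np.sin(2 * np.pi * k / n_chips)
    durations *= bit_period / durations.sum()
    return np.concatenate(([0.0], np.cumsum(durations)))

def spread_bit(bit, code, bit_period, fs, depth=0.5):
    """Spread one data bit (+/-1) with a +/-1 chip code on a uniform time grid."""
    t = np.arange(0.0, bit_period, 1.0 / fs)
    edges = chip_boundaries(len(code), bit_period, depth)
    # index of the chip interval each sample falls into
    idx = np.clip(np.searchsorted(edges, t, side="right") - 1, 0, len(code) - 1)
    return bit * code[idx]

# Example: spread one bit with a random 31-chip code at 1 MHz sampling
rng = np.random.default_rng(0)
code = rng.choice([-1, 1], size=31)
waveform = spread_bit(+1, code, bit_period=1e-3, fs=1e6, depth=0.5)
print(waveform[:10])
```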
Abstract:
Optimal paths connecting randomly selected network nodes and fixed routers are studied analytically in the presence of a nonlinear overlap cost that penalizes congestion. Routing becomes more difficult as the number of selected nodes increases and exhibits ergodicity breaking in the case of multiple routers. The ground state of such systems reveals nonmonotonic, complex behaviors in average path length and algorithmic convergence, depending on the network topology and the densities of communicating nodes and routers. A distributed, linearly scalable routing algorithm is also devised. © 2012 American Physical Society.
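To convey the flavour of routing under a nonlinear overlap cost, the sketch below greedily reroutes each selected node to a single router against the marginal cost of the current edge loads, with edge cost I**gamma in the number of overlapping paths I. It is a simple local heuristic using networkx, not the analytical ground-state computation or the distributed algorithm of the paper; the cost exponent, graph and rerouting schedule are assumptions.

```python
import networkx as nx

def route_with_overlap_cost(G, sources, router, gamma=2.0, sweeps=5):
    """Greedy sketch of congestion-aware routing to a single router.

    An edge carrying I paths costs I**gamma (gamma > 1 penalises overlap).
    Paths are rerouted one at a time against the marginal cost of the
    current traffic.
    """
    load = {frozenset(e): 0 for e in G.edges}
    paths = {}

    def marginal(u, v, d):
        # marginal increase in edge cost if one more path uses edge (u, v)
        l = load[frozenset((u, v))]
        return (l + 1) ** gamma - l ** gamma

    for _ in range(sweeps):
        for s in sources:
            # remove this node's current path from the load before rerouting
            old = paths.get(s, [])
            for u, v in zip(old, old[1:]):
                load[frozenset((u, v))] -= 1
            path = nx.shortest_path(G, s, router, weight=marginal)
            for u, v in zip(path, path[1:]):
                load[frozenset((u, v))] += 1
            paths[s] = path

    total_cost = sum(l ** gamma for l in load.values())
    return paths, total_cost

# Example on a small grid with one router in the corner
G = nx.grid_2d_graph(6, 6)
sources = [(0, 0), (2, 3), (5, 1), (4, 4)]
paths, cost = route_with_overlap_cost(G, sources, router=(5, 5))
print(cost)
```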