240 results for Positive Definite Functions

in Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we will focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and the solution of fractional-in-space partial differential equations.

Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations.

In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a `square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables. Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form φ_n = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we will compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form.

A number of novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and we give conditions under which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
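The sampling identity x = A^(-1/2)z can be sketched in a few lines. The code below is a dense illustration only (function name and test matrix are my own); the thesis replaces the explicit inverse square root with Krylov subspace approximations so that only sparse matrix-vector products with the precision matrix are needed.

```python
import numpy as np
from scipy.linalg import sqrtm

def sample_gmrf(Q, z):
    """Draw a GMRF sample x = Q^(-1/2) z, where Q is the SPD precision
    matrix and z is a vector of i.i.d. standard normal variables."""
    Q_inv_sqrt = np.linalg.inv(sqrtm(Q).real)
    return Q_inv_sqrt @ z

# Small SPD precision matrix: a shifted 1D Laplacian (sparse in practice)
n = 20
Q = 2.5 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

rng = np.random.default_rng(0)
x = sample_gmrf(Q, rng.standard_normal(n))

# Sanity check of the construction: Q^(-1/2) (Q^(-1/2))^T = Q^(-1),
# so x has covariance Q^(-1), exactly as a GMRF sample should.
```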

Relevance: 100.00%

Abstract:

Recent advances suggest that encoding images through Symmetric Positive Definite (SPD) matrices and then interpreting such matrices as points on Riemannian manifolds can lead to increased classification performance. Taking into account manifold geometry is typically done via (1) embedding the manifolds in tangent spaces, or (2) embedding into Reproducing Kernel Hilbert Spaces (RKHS). While embedding into tangent spaces allows the use of existing Euclidean-based learning algorithms, manifold shape is only approximated which can cause loss of discriminatory information. The RKHS approach retains more of the manifold structure, but may require non-trivial effort to kernelise Euclidean-based learning algorithms. In contrast to the above approaches, in this paper we offer a novel solution that allows SPD matrices to be used with unmodified Euclidean-based learning algorithms, with the true manifold shape well-preserved. Specifically, we propose to project SPD matrices using a set of random projection hyperplanes over RKHS into a random projection space, which leads to representing each matrix as a vector of projection coefficients. Experiments on face recognition, person re-identification and texture classification show that the proposed approach outperforms several recent methods, such as Tensor Sparse Coding, Histogram Plus Epitome, Riemannian Locality Preserving Projection and Relational Divergence Classification.
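The overall recipe can be sketched as follows. This is a simplified stand-in for the paper's random hyperplanes over RKHS, not its actual method: here each SPD matrix is vectorised via the matrix logarithm (the log-Euclidean map) and then projected onto random Gaussian directions; all names are illustrative.

```python
import numpy as np
from scipy.linalg import logm

def spd_to_projection_coefficients(spd_mats, n_proj=8, seed=0):
    """Represent each SPD matrix as a vector of projection coefficients:
    vectorise via the matrix logarithm, then project onto random
    Gaussian hyperplanes."""
    rng = np.random.default_rng(seed)
    vecs = np.array([logm(M).real.ravel() for M in spd_mats])
    W = rng.standard_normal((vecs.shape[1], n_proj)) / np.sqrt(n_proj)
    return vecs @ W

# Two random SPD matrices (A A^T + eps*I is symmetric positive definite)
rng = np.random.default_rng(1)
mats = []
for _ in range(2):
    A = rng.standard_normal((4, 4))
    mats.append(A @ A.T + 0.1 * np.eye(4))

coeffs = spd_to_projection_coefficients(mats, n_proj=8)
# coeffs can now be fed to any unmodified Euclidean-based learner
```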

Relevance: 90.00%

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space -- classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm -- using the labelled part of the data one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.
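The full formulation requires a semidefinite programming solver. As a minimal stand-in for the underlying idea of choosing the kernel to fit the labelled data, the sketch below scores candidate kernels by kernel-target alignment, a related but much simpler criterion than the paper's SDP; data and bandwidths are illustrative.

```python
import numpy as np

def kernel_alignment(K, y):
    """Kernel-target alignment <K, y y^T>_F / (||K||_F ||y y^T||_F):
    how well the kernel geometry matches the label structure."""
    Y = np.outer(y, y)
    return float(np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y)))

def rbf_kernel(X, gamma):
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2)

# Toy labelled data: two well-separated clusters
X = np.array([[0.0], [0.1], [2.0], [2.1]])
y = np.array([1.0, 1.0, -1.0, -1.0])

# Score candidate kernels; neither a too-flat nor a too-spiky kernel
# aligns with the labels as well as an intermediate bandwidth
scores = {gamma: kernel_alignment(rbf_kernel(X, gamma), y)
          for gamma in (0.01, 1.0, 100.0)}
best_gamma = max(scores, key=scores.get)
```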

Relevance: 90.00%

Abstract:

High-angular resolution diffusion imaging (HARDI) can reconstruct fiber pathways in the brain with extraordinary detail, identifying anatomical features and connections not seen with conventional MRI. HARDI overcomes several limitations of standard diffusion tensor imaging, which fails to model diffusion correctly in regions where fibers cross or mix. As HARDI can accurately resolve sharp signal peaks in angular space where fibers cross, we studied how many gradients are required in practice to compute accurate orientation density functions, to better understand the tradeoff between longer scanning times and more angular precision. We computed orientation density functions analytically from tensor distribution functions (TDFs) which model the HARDI signal at each point as a unit-mass probability density on the 6D manifold of symmetric positive definite tensors. In simulated two-fiber systems with varying Rician noise, we assessed how many diffusion-sensitized gradients were sufficient to (1) accurately resolve the diffusion profile, and (2) measure the exponential isotropy (EI), a TDF-derived measure of fiber integrity that exploits the full multidirectional HARDI signal. At lower SNR, the reconstruction accuracy, measured using the Kullback-Leibler divergence, rapidly increased with additional gradients, and EI estimation accuracy plateaued at around 70 gradients.
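The Kullback-Leibler divergence used as the reconstruction-accuracy measure is straightforward to compute for discretised densities. A minimal sketch, with toy one-dimensional "ODF" profiles of my own in place of real reconstructions on the sphere:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence KL(p || q) between two discretised
    densities, e.g. ODFs sampled on a common set of directions."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy angular profiles: two peaks, as in a two-fibre crossing, with the
# "reconstruction" slightly rotated relative to the ground truth
theta = np.linspace(0.0, np.pi, 64)
odf_true = np.exp(4.0 * np.cos(2.0 * theta))
odf_recon = np.exp(4.0 * np.cos(2.0 * (theta - 0.05)))

d = kl_divergence(odf_true, odf_recon)  # positive; shrinks as estimates improve
```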

Relevance: 90.00%

Abstract:

State-of-the-art image-set matching techniques typically implicitly model each image-set with a Gaussian distribution. Here, we propose to go beyond these representations and model image-sets as probability distribution functions (PDFs) using kernel density estimators. To compare and match image-sets, we exploit Csiszár f-divergences, which bear strong connections to the geodesic distance defined on the space of PDFs, i.e., the statistical manifold. Furthermore, we introduce valid positive definite kernels on the statistical manifold, which let us make use of more powerful classification schemes to match image-sets. Finally, we introduce a supervised dimensionality reduction technique that learns a latent space where f-divergences reflect the class labels of the data. Our experiments on diverse problems, such as video-based face recognition and dynamic texture classification, evidence the benefits of our approach over the state-of-the-art image-set matching methods.
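The first two ingredients of this pipeline can be sketched directly: a kernel density estimate per set, compared with a Csiszár f-divergence. The sketch below assumes 1-D stand-in features in place of real image descriptors and uses the KL divergence (the member of the Csiszár family with f(t) = t log t), approximated on a grid.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_kl_divergence(samples_a, samples_b, grid):
    """Approximate KL divergence between the kernel density estimates
    of two sample sets, evaluated on a common grid."""
    pa = gaussian_kde(samples_a)(grid)
    pb = gaussian_kde(samples_b)(grid)
    pa = pa / pa.sum()
    pb = pb / pb.sum()
    return float(np.sum(pa * np.log((pa + 1e-12) / (pb + 1e-12))))

rng = np.random.default_rng(0)
grid = np.linspace(-6.0, 6.0, 200)
set_a = rng.normal(0.0, 1.0, 500)   # stand-in 1-D features of image-set A
set_b = rng.normal(1.5, 1.0, 500)   # stand-in 1-D features of image-set B

d_ab = kde_kl_divergence(set_a, set_b, grid)  # large: different sets
d_aa = kde_kl_divergence(set_a, set_a, grid)  # zero: identical densities
```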

Relevance: 80.00%

Abstract:

This study considers the solution of a class of linear systems related with the fractional Poisson equation (FPE) (−∇²)^(α/2) φ = g(x, y) with nonhomogeneous boundary conditions on a bounded domain. A numerical approximation to the FPE is derived using a matrix representation of the Laplacian to generate a linear system of equations with its matrix A raised to the fractional power α/2. The solution of the linear system then requires the action of the matrix function f(A) = A^(−α/2) on a vector b. For large, sparse, and symmetric positive definite matrices, the Lanczos approximation generates f(A)b ≈ β_0 V_m f(T_m) e_1. This method works well when both the analytic grade of A with respect to b and the residual for the linear system are sufficiently small. Memory constraints often require restarting the Lanczos decomposition; however, this is not straightforward in the context of matrix function approximation. In this paper, we use the ideas of thick-restart and adaptive preconditioning for solving linear systems to improve convergence of the Lanczos approximation. We give an error bound for the new method and illustrate its role in solving the FPE. Numerical results are provided to gauge the performance of the proposed method relative to exact analytic solutions.
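The basic (unrestarted) Lanczos approximation f(A)b ≈ β_0 V_m f(T_m) e_1 can be sketched as below. The test matrix is a shifted 1D Laplacian of my own choosing (well-conditioned, so a modest m suffices); the paper's thick-restart and preconditioning machinery is not shown.

```python
import numpy as np

def lanczos_fA_b(A, b, m, f):
    """m-step Lanczos approximation f(A) b ~= beta_0 V_m f(T_m) e_1 for
    symmetric A, with full reorthogonalisation for numerical safety."""
    n = len(b)
    beta0 = np.linalg.norm(b)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = b / beta0
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w = w - V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # reorthogonalise
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)
    fT_e1 = evecs @ (f(evals) * evecs[0, :])  # f(T_m) e_1 via eigendecomposition
    return beta0 * (V @ fT_e1)

# SPD test matrix: shifted 1D Laplacian; f(t) = t^(-alpha/2) with alpha = 1
n = 60
A = 3.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
approx = lanczos_fA_b(A, b, m=20, f=lambda t: t ** -0.5)

# Dense reference: A^(-1/2) b via a full eigendecomposition
w_ref, U = np.linalg.eigh(A)
exact = U @ ((w_ref ** -0.5) * (U.T @ b))
```

Note that only matrix-vector products with A are required, which is what makes the method attractive for large sparse matrices.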

Relevance: 80.00%

Abstract:

We used diffusion tensor magnetic resonance imaging (DTI) to reveal the extent of genetic effects on brain fiber microstructure, based on tensor-derived measures, in 22 pairs of monozygotic (MZ) twins and 23 pairs of dizygotic (DZ) twins (90 scans). After Log-Euclidean denoising to remove rank-deficient tensors, DTI volumes were fluidly registered by high-dimensional mapping of co-registered MP-RAGE scans to a geometrically-centered mean neuroanatomical template. After tensor reorientation using the strain of the 3D fluid transformation, we computed two widely used scalar measures of fiber integrity: fractional anisotropy (FA), and geodesic anisotropy (GA), which measures the geodesic distance between tensors in the symmetric positive-definite tensor manifold. Spatial maps of intraclass correlations (r) between MZ and DZ twins were compared to compute maps of Falconer's heritability statistics, i.e. the proportion of population variance explainable by genetic differences among individuals. Cumulative distribution plots (CDFs) of effect sizes showed that the manifold measure, GA, performed comparably to the Euclidean measure, FA, in detecting genetic correlations. While maps were relatively noisy, the CDFs showed promise for detecting genetic influences on brain fiber integrity as the current sample expands.
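Both scalar measures follow directly from the tensor's eigenvalues. A minimal sketch, using the standard FA formula and assuming the common log-Euclidean convention for GA (the norm of the deviatoric part of the log-eigenvalues), which may differ in detail from the paper's implementation:

```python
import numpy as np

def fa_and_ga(tensor):
    """Fractional anisotropy (FA) and geodesic anisotropy (GA) of a 3x3
    symmetric positive-definite diffusion tensor.

    FA is the normalised deviation of the eigenvalues from their mean;
    GA is the log-Euclidean geodesic distance to the nearest isotropic
    tensor (norm of the demeaned log-eigenvalues)."""
    lam = np.linalg.eigvalsh(tensor)
    fa = np.sqrt(1.5 * np.sum((lam - lam.mean()) ** 2) / np.sum(lam ** 2))
    log_lam = np.log(lam)
    ga = np.sqrt(np.sum((log_lam - log_lam.mean()) ** 2))
    return fa, ga

# Isotropic tensor: both measures vanish
fa_iso, ga_iso = fa_and_ga(np.diag([1e-3, 1e-3, 1e-3]))

# Strongly anisotropic (cigar-shaped) tensor: both measures are large
fa_fib, ga_fib = fa_and_ga(np.diag([1.7e-3, 2e-4, 2e-4]))
```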

Relevance: 80.00%

Abstract:

Diffusion weighted magnetic resonance (MR) imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of 6 directions, second-order tensors can be computed to model dominant diffusion processes. However, conventional DTI is not sufficient to resolve crossing fiber tracts. Recently, a number of high-angular resolution schemes with greater than 6 gradient directions have been employed to address this issue. In this paper, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric positive definite matrices. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the diffusion orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function.

Relevance: 80.00%

Abstract:

Diffusion weighted magnetic resonance imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of six directions, second-order tensors (represented by three-by-three positive definite matrices) can be computed to model dominant diffusion processes. However, conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g., crossing fiber tracts. Recently, a number of high-angular resolution schemes with more than six gradient directions have been employed to address this issue. In this article, we introduce the tensor distribution function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function. Moreover, a tensor orientation distribution function (TOD) may also be derived from the TDF, allowing for the estimation of principal fiber directions and their corresponding eigenvalues.
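The forward model underlying the TDF, the signal of a weighted ensemble of Gaussian diffusion processes, can be sketched with a discrete two-tensor stand-in for the continuous distribution; tensor values and b-value below are illustrative, not taken from the article.

```python
import numpy as np

def mixture_signal(gradients, b, tensors, weights):
    """Diffusion MR signal of a discrete tensor mixture,
    S(g) = sum_i w_i exp(-b g^T D_i g), a stand-in for the
    continuous tensor distribution function (TDF)."""
    S = np.zeros(len(gradients))
    for D, w in zip(tensors, weights):
        S += w * np.exp(-b * np.einsum('ij,jk,ik->i', gradients, D, gradients))
    return S

# Two crossing fibres modelled as anisotropic tensors along x and y
D1 = np.diag([1.7e-3, 2e-4, 2e-4])
D2 = np.diag([2e-4, 1.7e-3, 2e-4])

# Unit gradient directions sampled in the x-y plane
angles = np.linspace(0.0, np.pi, 32, endpoint=False)
g = np.stack([np.cos(angles), np.sin(angles), np.zeros_like(angles)], axis=1)

S = mixture_signal(g, b=1000.0, tensors=[D1, D2], weights=[0.5, 0.5])
# By symmetry the signal along x equals the signal along y, and the
# signal at 45 degrees (fast diffusion under both tensors) is lower.
```

Estimating the weights from measured signals is the inverse problem the article solves via the calculus of variations.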

Relevance: 30.00%

Abstract:

Cities have long held a fascination for people: as they grow and develop, there is a desire to know and understand the intricate interplay of elements that makes cities 'live'. In part, this is a need for even greater efficiency in urban centres, yet the underlying quest is for a sustainable urban form. In order to make sense of the complex entities that we recognise cities to be, they have been compared to buildings, organisms and, more recently, machines. The search for better and more elegant urban centres is hardly new: healthier and more efficient settlements were the aim of Modernism's rational sub-division of functions, which has been translated into horizontal distribution through zoning, or vertical organisation through high-rise developments. However, both of these approaches have been found to be unsustainable, as too many resources are required to maintain this kind of urbanisation, and the social consequences of either horizontal or vertical isolation must also be considered. From being absolute consumers of resources, of energy and of technology, cities need to change, to become sustainable in order to be more resilient and more efficient in supporting culture and society as well as the economy. Our urban centres need to be re-imagined, re-conceptualised and re-defined to match our changing society. One approach is to re-examine the compartmentalised, mono-functional approach of urban Modernism and to begin to investigate cities as ecologies, where every element supports and incorporates another, fulfilling more than just one function. This manner of seeing the city suggests a framework to guide the re-mixing of urban settlements. Beginning to understand the relationships between supporting elements and the nature of the connecting 'web' offers an invitation to investigate the often ignored, remnant spaces of cities.

This 'negative space' is the residual from which space and place are carved out in the contemporary city, providing the link between elements of urban settlement. Like all successful ecosystems, cities need to evolve and change over time in order to respond effectively to different lifestyles and developments in culture and society, as well as to meet environmental challenges. This paper seeks to investigate the role that negative space could have in the reorganisation of the re-mixed city. The space 'in-between' is analysed as an opportunity for infill development or re-development which provides the urban settlement with the variety that is a prerequisite for ecosystem resilience. An analysis of the urban form is suggested as an empirical tool to map the opportunities already present in the urban environment, and negative space is evaluated as a key element in achieving a positive development able to distribute diverse environmental and social facilities in the city.

Relevance: 30.00%

Abstract:

Whether to keep products segregated (e.g., unbundled) or integrate some or all of them (e.g., bundle) has been a problem of profound interest in areas such as portfolio theory in finance, risk capital allocations in insurance and marketing of consumer products. Such decisions are inherently complex and depend on factors such as the underlying product values and consumer preferences, the latter being frequently described using value functions, also known as utility functions in economics. In this paper, we develop decision rules for multiple products, which we generally call 'exposure units' to naturally cover manifold scenarios spanning well beyond 'products'. Our findings show, for example, that Thaler's celebrated principles of mental accounting hold as originally postulated when the values of all exposure units are positive (i.e. all are gains) or all negative (i.e. all are losses). In the case of exposure units with mixed-sign values, decision rules are much more complex and rely on cataloguing the Bell number of cases, which grows very rapidly with the number of exposure units. Consequently, in the present paper, we provide detailed rules for the integration and segregation decisions in the case of up to three exposure units, and partial rules for an arbitrary number of units.
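The same-sign cases can be sketched with a standard prospect-theory value function: concave for gains (favouring segregation) and convex for losses (favouring integration). The functional form and parameters below are the classic Tversky-Kahneman estimates, used here purely for illustration; they are not the value functions analysed in the paper.

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory-style value function: concave for gains, convex
    and steeper (loss-averse) for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def prefer_segregation(outcomes):
    """Segregate if the summed values of the separate outcomes exceed
    the value of the single integrated (bundled) outcome."""
    separate = sum(value(x) for x in outcomes)
    integrated = value(sum(outcomes))
    return separate > integrated

seg_gains = prefer_segregation([50.0, 50.0])     # True: segregate gains
seg_losses = prefer_segregation([-50.0, -50.0])  # False: integrate losses
```

With mixed-sign outcomes, the comparison depends on the relative magnitudes, which is precisely where the case-by-case cataloguing becomes necessary.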

Relevance: 30.00%

Abstract:

Adopting a traffic safety culture approach, this paper identifies and discusses the ongoing challenge of promoting the road safety message in Australia. It is widely acknowledged that mass media and public education initiatives have played a critical role in the significant positive changes witnessed in community attitudes to road safety in the last three to four decades. It could be argued that mass media and education have had a direct influence on behaviours and attitudes, as well as an indirect influence through signposting and awareness raising functions in conjunction with enforcement. Great achievements have been made in reducing fatalities on Australia's roads, achievements that are well recognised among the international road safety fraternity. How well these achievements are appreciated by the general Australian community, however, is not clear. This paper explores the lessons that can be learnt from successes in attitudinal and behaviour change in regard to seatbelt use and drink driving in Australia. It also identifies and discusses key challenges associated with achieving further positive changes in community attitudes and behaviours, particularly in relation to behaviours that may not be perceived by the community as dangerous, such as speeding and mobile phone use while driving. Potential strategies for future mass media and public education campaigns to target these challenges are suggested, including ways of harnessing the power of contemporary traffic law enforcement techniques, such as point-to-point speed enforcement and in-vehicle technologies, to help spread the road safety message.

Relevance: 30.00%

Abstract:

The graft-versus-myeloma (GVM) effect represents a powerful form of immune attack exerted by alloreactive T cells against multiple myeloma cells, which leads to clinical responses in multiple myeloma transplant recipients. Whether myeloma cells are themselves able to induce alloreactive T cells capable of the GVM effect is not defined. Using adoptive transfer of naive T cells into myeloma-bearing mice (established by transplantation of human RPMI8226-TGL myeloma cells into CD122(+) cell-depleted NOD/SCID hosts), we found that myeloma cells induced alloreactive T cells that suppressed myeloma growth and prolonged survival of T cell recipients. Myeloma-induced alloreactive T cells arising in the myeloma-infiltrated bones exerted cytotoxic activity against resident myeloma cells, but limited activity against control myeloma cells obtained from myeloma-bearing mice that did not receive naive T cells. These myeloma-induced alloreactive T cells were derived through multiple CD8(+) T cell divisions and enriched in double-positive (DP) T cells coexpressing the CD8alphaalpha and CD4 coreceptors. MHC class I expression on myeloma cells and contact with T cells were required for CD8(+) T cell divisions and DP-T cell development. DP-T cells present in myeloma-infiltrated bones contained a higher proportion of cells expressing cytotoxic mediators IFN-gamma and/or perforin compared with single-positive CD8(+) T cells, acquired the capacity to degranulate as measured by CD107 expression, and contributed to an elevated perforin level seen in the myeloma-infiltrated bones. These observations suggest that myeloma-induced alloreactive T cells arising in myeloma-infiltrated bones are enriched with DP-T cells equipped with cytotoxic effector functions that are likely to be involved in the GVM effect.

Relevance: 30.00%

Abstract:

The appealing concept of optimal harvesting is often used in fisheries to obtain new management strategies. However, optimality depends on the objective function, which often varies, reflecting the interests of different groups of people. The aim of maximum sustainable yield is to extract the greatest amount of food from replenishable resources in a sustainable way. Maximum sustainable yield may not be desirable from an economic point of view. Maximum economic yield maximizes the profit of fishing fleets (the harvesting sector) but ignores socio-economic benefits such as employment and other positive externalities. It may be more appropriate to use the maximum economic yield that is based on the value chain of the overall fishing sector, to better reflect society's interests. How to make more efficient use of a fishery for society rather than fishing operators depends critically on the gain function parameters, including multiplier effects and the inclusion or exclusion of certain costs. In particular, the optimal effort level based on the overall value chain moves closer to the optimal effort for the maximum sustainable yield because of the multiplier effect. These issues are illustrated using the Australian Northern Prawn Fishery.
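The effect of the multiplier on optimal effort can be sketched with a Gordon-Schaefer surplus-production model, used here as an illustrative stand-in for the paper's value-chain analysis; all parameter values are invented for the example.

```python
def optimal_effort(r, q, K, price, cost, multiplier=1.0):
    """Effort maximising the gain G(E) = multiplier * price * Y(E) - cost * E
    for the Gordon-Schaefer sustainable yield Y(E) = q E K (1 - q E / r).
    Closed form from dG/dE = 0."""
    return (r / (2.0 * q)) * (1.0 - cost / (multiplier * price * q * K))

# Illustrative parameters: growth rate, catchability, carrying capacity,
# price per unit catch, cost per unit effort
r, q, K, price, cost = 0.5, 1e-4, 1e5, 10.0, 5.0

e_msy = r / (2.0 * q)                                 # MSY effort
e_mey = optimal_effort(r, q, K, price, cost)          # MEY, harvesting sector only
e_chain = optimal_effort(r, q, K, price, cost, 2.0)   # with value-chain multiplier
# The multiplier shrinks the cost correction, pushing the optimal effort
# from e_mey toward e_msy, as the abstract states.
```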