19 results for Particular dictionary
Abstract:
In his book Democratic Authority, David Estlund puts forward a case for democracy, which he labels epistemic proceduralism, that relies on democracy's ability to produce good (that is, substantively just) results. Alongside this case for democracy, Estlund attacks what he labels ‘utopophobia’, an aversion to idealistic political theory. In this article I make two points. The first is a general point about what the correct level of ‘idealisation’ is in political theory. Various debates are emerging on this question and, I argue, to the extent that they are focused on ‘political theory’ as a whole, they are flawed. This is because there are different kinds of political concept, and they require different kinds of ideal. My second point is about democracy in particular. If we understand democracy as Estlund does, then we should see it as a problem-solving concept: the problem being that we need coercive institutions and rules, but we do not know what justice requires. As democracy is a response to a problem, we should not allow our theories of it, even at the ideal level, to be too idealised; they must be embedded in the nature of the problem they are to solve, and the beings that have it.
Abstract:
Traditional dictionary learning algorithms find a sparse representation of high-dimensional data by flattening each sample into a one-dimensional (1D) vector. This 1D model discards the inherent spatial structure of the data. An alternative is to perform dictionary learning on data in its original structural form, a tensor, via tensor decomposition: a dictionary is learned along each mode, and the corresponding sparse representation is taken with respect to the Kronecker product of these dictionaries. To learn the mode dictionaries, all existing methods update each dictionary iteratively in an alternating manner. Although atoms from the different mode dictionaries jointly contribute to the sparsity of the tensor, existing works ignore the correlations between atoms of different mode dictionaries by treating each mode dictionary independently. In this paper, we propose a joint multiple-dictionary learning method for tensor sparse coding that exploits atom correlations in the sparse representation and updates multiple atoms from each mode dictionary simultaneously. The algorithm employs the Frequent-Pattern Tree (FP-tree) mining algorithm to discover frequent atom patterns in the sparse representation. Inspired by the idea of K-SVD, we develop a new dictionary update method that jointly updates the elements in each pattern. Experimental results demonstrate that our method outperforms other tensor-based dictionary learning algorithms.
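The Kronecker-product structure this abstract relies on can be made concrete with a small sketch. The snippet below, in plain NumPy, is only illustrative: the dimensions, variable names, and random dictionaries are assumptions for demonstration, not the paper's implementation. It shows, for the two-mode (matrix) case, that synthesis with a sparse core B and one dictionary per mode, X = D1 B D2^T, is equivalent to 1D sparse synthesis over the flattened Kronecker dictionary D2 kron D1, while the structured form stores far fewer atoms.

```python
import numpy as np

rng = np.random.default_rng(0)

m1, m2 = 8, 8    # signal size along each mode
k1, k2 = 12, 12  # atoms per mode dictionary (illustrative sizes)

# Illustrative random mode dictionaries with unit-norm atoms (columns).
D1 = rng.standard_normal((m1, k1))
D1 /= np.linalg.norm(D1, axis=0)
D2 = rng.standard_normal((m2, k2))
D2 /= np.linalg.norm(D2, axis=0)

# Sparse core: each nonzero selects a joint (mode-1 atom, mode-2 atom) pair.
B = np.zeros((k1, k2))
B[2, 5] = 1.5
B[7, 1] = -0.8

# Structured synthesis along each mode ...
X = D1 @ B @ D2.T

# ... agrees with 1D synthesis over the flattened Kronecker dictionary.
# The 1D view needs k1*k2 = 144 columns; the modes store only k1+k2 = 24 atoms.
D_kron = np.kron(D2, D1)               # shape (m1*m2, k1*k2)
x_vec = D_kron @ B.flatten(order="F")  # column-major vec(B)
assert np.allclose(x_vec, X.flatten(order="F"))
```

The final assertion is the standard identity vec(D1 B D2^T) = (D2 kron D1) vec(B). This structure is what lets tensor methods preserve the spatial organisation of the data while learning only small per-mode dictionaries, and the joint (mode-1 atom, mode-2 atom) pairs appearing together in B are the kind of co-occurring atom patterns the abstract's FP-tree step mines from the sparse representation.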