912 results for incremental computation
Abstract:
We study noisy computation in randomly generated k-ary Boolean formulas. We establish a bound on the noise level above which the results of computation by random formulas are unreliable. This bound is saturated by formulas constructed from a single majority-like gate. We show that these gates can be used to compute any Boolean function reliably below the noise bound.
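The threshold behavior is easy to probe numerically. The sketch below is our illustration, not the authors' construction: it evaluates a full k-ary tree of eps-noisy 3-input majority gates and estimates the output error as depth grows (for 3-input gates the known reliability threshold is 1/6); the names `k`, `eps`, and `depth` are ours.

import random

def noisy_majority(bits, eps):
    """Majority vote over `bits`, with the output flipped with probability eps."""
    out = int(sum(bits) > len(bits) // 2)
    return out ^ (random.random() < eps)

def noisy_formula(depth, k, eps, leaf):
    """Evaluate a full k-ary tree of eps-noisy majority gates on constant leaves."""
    if depth == 0:
        return leaf
    children = [noisy_formula(depth - 1, k, eps, leaf) for _ in range(k)]
    return noisy_majority(children, eps)

def error_rate(depth, k, eps, trials=2000):
    """Fraction of trials in which the formula fails to reproduce the leaf value."""
    return sum(noisy_formula(depth, k, eps, 0) != 0 for _ in range(trials)) / trials

# Below the threshold the error stays bounded as depth grows;
# above it, the output decorrelates from the input and tends to 1/2.
for eps in (0.05, 0.30):
    print(eps, [round(error_rate(d, 3, eps), 2) for d in range(1, 6)])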
Abstract:
Most existing color-based tracking algorithms use the statistical color information of the object as the tracking cue, without maintaining the spatial structure within a single chromatic image. Recent research on multilinear algebra makes it possible to preserve the spatial structural relationships in a representation of an image ensemble. In this paper, a third-order color tensor is constructed to represent the object to be tracked. Considering the influence of a changing environment on tracking, biased discriminant analysis (BDA) is extended to tensor biased discriminant analysis (TBDA) for distinguishing the object from the background. At the same time, an incremental scheme for TBDA is developed for online learning of the tensor biased discriminant subspace, which can be used to adapt to appearance variations of both the object and the background. The experimental results show that the proposed method can precisely track objects undergoing large pose, scale, and lighting changes, as well as partial occlusion. © 2009 Elsevier B.V.
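As a minimal sketch of the representation only (not the TBDA algorithm itself), the following builds a third-order color tensor for an object region and computes its mode-n unfoldings, the basic operation behind tensor discriminant methods; the array shapes are our assumptions.

import numpy as np

# A tracked object region as a third-order color tensor: height x width x 3 channels.
# (Random data stands in for a real image patch.)
patch = np.random.randint(0, 256, size=(32, 24, 3)).astype(float)

def unfold(tensor, mode):
    """Mode-n unfolding: make `mode` the leading axis, then flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Tensor methods such as TBDA operate on these unfoldings instead of one long
# flattened vector, preserving the spatial/chromatic structure of the patch.
for mode in range(3):
    print(mode, unfold(patch, mode).shape)  # (32, 72), (24, 96), (3, 768)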
Abstract:
* This work has been partially supported by Spanish Project TIC2003-9319-c03-03 “Neural Networks and Networks of Evolutionary Processors”.
Abstract:
Usually, generalization is considered a function of learning from a set of examples. In the present work, on the basis of the recent neural network assembly memory model (NNAMM), a biologically plausible 'grandmother' model for vision is proposed, in which each separate memory unit can itself generalize. For such generalization by computation through memory, analytical formulae and a numerical procedure are found to calculate exactly the generalization ability of a perfectly learned memory unit. The model's memory has a complex hierarchical structure, can be learned from one example by a one-step process, and may be considered semi-representational. A simple binary neural network for bell-shaped tuning is also described.
Abstract:
An approach is proposed for inferring implicative logical rules from examples. The concept of a good diagnostic test for a given set of positive examples forms the basis of this approach. The process of inferring good diagnostic tests is considered a process of inductive common-sense reasoning. The incremental approach to learning is implemented in DIAGaRa, an algorithm for inferring implicative rules from examples.
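A toy illustration of the underlying idea (ours, not the DIAGaRa algorithm itself): a "good diagnostic test" is read here as a set of attributes contained in some positive example and in no negative one, yielding an implicative rule test -> class. All example data and names below are hypothetical.

from itertools import combinations

# Hypothetical examples: each is a set of attributes; positives share the target class.
positives = [{"fever", "cough", "fatigue"}, {"fever", "cough"}, {"fever", "headache", "cough"}]
negatives = [{"cough", "fatigue"}, {"headache"}]

attributes = sorted(set().union(*positives, *negatives))

def is_diagnostic_test(attrs, pos, neg):
    """A test must be contained in at least one positive and in no negative example."""
    attrs = set(attrs)
    return any(attrs <= p for p in pos) and not any(attrs <= n for n in neg)

# Enumerate minimal tests: diagnostic attribute sets with no diagnostic proper subset.
tests = [set(c) for r in range(1, 3) for c in combinations(attributes, r)
         if is_diagnostic_test(c, positives, negatives)]
minimal = [t for t in tests if not any(s < t for s in tests)]
for t in minimal:
    print(" & ".join(sorted(t)), "-> positive")  # e.g. fever -> positive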
Abstract:
We extend our previous work on error-free representations of transform basis functions by presenting a novel error-free encoding scheme for the fast implementation of a Linzer-Feig Fast Cosine Transform (FCT) and its inverse. We discuss an 8×8 L-F scaled Discrete Cosine Transform whose architecture uses a new algebraic integer quantization of the 1-D radix-8 DCT, allowing the separable computation of a 2-D DCT without any intermediate number-representation conversions. The resulting architecture is very regular and reduces latency by 50% compared to a previous error-free design, with virtually the same hardware cost.
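A minimal sketch of the general algebraic-integer idea (not the Linzer-Feig encoding from the paper, and using the 2cos(k*pi/8) family rather than the full 8x8 basis for brevity): values such as 2cos(k*pi/8) are represented exactly as integer-coefficient polynomials in z = 2cos(pi/8), whose minimal polynomial is z^4 - 4z^2 + 2, so all intermediate arithmetic stays error-free until one final conversion.

import math

def poly_mul(a, b):
    """Multiply integer polynomials in z (coefficient lists, lowest degree first),
    reducing modulo z^4 - 4z^2 + 2 so the degree stays below 4."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            prod[i + j] += x * y
    while len(prod) > 4:                 # replace z^(4+k) by (4z^2 - 2) * z^k
        c = prod.pop()
        k = len(prod) - 4
        prod[k + 2] += 4 * c
        prod[k] += -2 * c
    return prod

def to_float(p):
    """Convert to floating point only at the very end of the computation."""
    z = 2 * math.cos(math.pi / 8)
    return sum(c * z**i for i, c in enumerate(p))

two_cos_pi_8  = [0, 1]           # z
two_cos_pi_4  = [-2, 0, 1]       # z^2 - 2  (equals 2cos(pi/4) exactly)
two_cos_3pi_8 = [0, -3, 0, 1]    # z^3 - 3z (equals 2cos(3pi/8) exactly)

print(to_float(poly_mul(two_cos_pi_8, two_cos_3pi_8)),
      4 * math.cos(math.pi / 8) * math.cos(3 * math.pi / 8))  # both sqrt(2)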
Abstract:
In this paper, a modification of the high-order neural network (HONN) is presented. Third-order networks are considered for achieving translation-, rotation-, and scale-invariant pattern recognition; however, they require substantial storage and computational power for the task. The proposed modified HONN takes into account a priori knowledge of the binary patterns to be learned, achieving significant gains in computation time and memory requirements. This modification enables the efficient computation of HONNs for image fields larger than 100 × 100 pixels without any loss of pattern information.
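A sketch of where such savings can come from, under our assumption that the gain exploits the sparsity of binary patterns (the paper's exact modification may differ): third-order correlations need only range over triples of foreground pixels, and invariance can be obtained by binning triangle angles, which are unchanged under translation, rotation, and uniform scaling.

import math
from itertools import combinations
from collections import Counter

def third_order_features(binary_image, bins=8):
    """Histogram of (sorted, binned) interior angles over all triples of ON pixels.
    Triangle angles are invariant to translation, rotation, and uniform scaling."""
    pts = [(x, y) for y, row in enumerate(binary_image)
                  for x, v in enumerate(row) if v]
    hist = Counter()
    for a, b, c in combinations(pts, 3):
        angles = []
        for p, q, r in ((a, b, c), (b, c, a), (c, a, b)):
            v1 = (q[0] - p[0], q[1] - p[1]); v2 = (r[0] - p[0], r[1] - p[1])
            n1 = math.hypot(*v1); n2 = math.hypot(*v2)
            if n1 == 0 or n2 == 0:
                break
            cosang = max(-1.0, min(1.0, (v1[0]*v2[0] + v1[1]*v2[1]) / (n1 * n2)))
            angles.append(int(math.acos(cosang) / math.pi * bins))
        else:
            hist[tuple(sorted(angles))] += 1
    return hist

# Only the ON pixels matter: an N x N field with m foreground pixels costs
# O(m^3) rather than O(N^6), which is the gain for sparse binary patterns.
img = [[0] * 8 for _ in range(8)]
for x, y in [(1, 1), (4, 2), (2, 5), (6, 6)]:
    img[y][x] = 1
print(third_order_features(img))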
Abstract:
Linguistic theory and cognitive, information-theoretic, and mathematical modeling are all useful as we attempt to achieve a better understanding of the Language Faculty (LF). This cross-disciplinary approach should eventually lead to the identification of the key principles applicable in natural language processing systems. The present work concentrates on the syntax-semantics interface. We start from recursive definitions and the application of optimization principles, and gradually develop a formal model of syntactic operations. The result, a Fibonacci-like syntactic tree, is in fact an argument-based variant of natural language syntax. This representation (the argument-centered model, ACM) is derived by a recursive calculus that generates a node connecting arguments and expressing the relations between them. The reiterative operation assigns the primary role to entities as the key components of syntactic structure. We provide experimental evidence in support of the argument-based model. We also show that the mental computation of syntax is influenced by the inter-conceptual relations between the images of entities in a semantic space.
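To make "Fibonacci-like" concrete, here is a toy recursion (ours, not the paper's calculus) in which a constituent of rank n rewrites into constituents of ranks n-1 and n-2, so leaf counts grow as Fibonacci numbers.

from functools import lru_cache

@lru_cache(maxsize=None)
def leaves(rank):
    """Argument leaves in a rank-`rank` constituent: ranks 0 and 1 are single
    arguments; otherwise the constituent splits into rank-1 and rank-2 parts."""
    if rank < 2:
        return 1
    return leaves(rank - 1) + leaves(rank - 2)

print([leaves(r) for r in range(10)])  # 1, 1, 2, 3, 5, 8, 13, 21, 34, 55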
Abstract:
Toric coordinates and the toric vector field were introduced in [2]. Let A be an arbitrary vector field. We obtain formulae for div A, rot A, and the Laplace operator in toric coordinates.
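The toric-coordinate formulae themselves are in the paper; for orientation, these are the general expressions in orthogonal curvilinear coordinates (q_1, q_2, q_3) with Lame coefficients H_1, H_2, H_3 that such formulae specialize. We are assuming orthogonality of the toric coordinates here; the paper's own results are authoritative.

\operatorname{div}\mathbf{A} = \frac{1}{H_1 H_2 H_3}\left[ \frac{\partial (H_2 H_3 A_1)}{\partial q_1} + \frac{\partial (H_3 H_1 A_2)}{\partial q_2} + \frac{\partial (H_1 H_2 A_3)}{\partial q_3} \right]

(\operatorname{rot}\mathbf{A})_1 = \frac{1}{H_2 H_3}\left[ \frac{\partial (H_3 A_3)}{\partial q_2} - \frac{\partial (H_2 A_2)}{\partial q_3} \right] \quad \text{(and cyclic permutations)}

\Delta f = \frac{1}{H_1 H_2 H_3} \sum_{i=1}^{3} \frac{\partial}{\partial q_i}\left( \frac{H_1 H_2 H_3}{H_i^2}\, \frac{\partial f}{\partial q_i} \right)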
Abstract:
Functional programming has a lot to offer to developers of global Internet-centric applications, but it is often applicable only to a small part of the system or requires major architectural changes. The data model used for functional computation is often considered simply a consequence of the chosen programming style, although an inappropriate choice of model can make integration with the imperative parts much harder. In this paper we do the opposite: we start from a data model based on JSON and then derive the functional approach from it. We outline the identified principles and present Jsonya/fn, a low-level functional language that is defined in and operates on the selected data model. We use several Jsonya/fn implementations and the architecture of a recently developed application to show that our approach can improve interoperability and achieve additional reuse of representations and operations at relatively low cost. ACM Computing Classification System (1998): D.3.2, D.3.4.
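As a sketch of deriving the functional layer from the JSON data model (a hypothetical encoding of ours, not actual Jsonya/fn syntax), the following evaluates a tiny expression tree that is itself ordinary JSON data:

import json

# Hypothetical encoding: a function application is a JSON array ["op", arg, ...];
# any other JSON value is a literal. The program below is plain JSON text.
program = '["concat", ["upper", "json"], "-", ["join", ["list", "f", "n"], ""]]'

OPS = {
    "concat": lambda *xs: "".join(xs),
    "upper":  lambda s: s.upper(),
    "join":   lambda xs, sep: sep.join(xs),
    "list":   lambda *xs: list(xs),
}

def evaluate(node):
    """Interpret a JSON value: arrays are applications, everything else is literal."""
    if isinstance(node, list):
        op, *args = node
        return OPS[op](*(evaluate(a) for a in args))
    return node

print(evaluate(json.loads(program)))  # JSON-fn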
Abstract:
This work contributes to the development of search engines that self-adapt their size in response to fluctuations in workload. Deploying a search engine in an Infrastructure-as-a-Service (IaaS) cloud facilitates allocating or deallocating computational resources to or from the engine. In this paper, we focus on the problem of regrouping the metric-space search index when the number of virtual machines used to run the search engine is modified to reflect changes in workload. We propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. We tested its performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud, while calibrating the results to compensate for the performance fluctuations of the platform. Our experiments show that, compared with computing the index from scratch, the incremental algorithm speeds up the index computation 2–10 times while maintaining similar search performance.
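The paper's algorithm operates on a metric-space index; as a simplified sketch of the incremental idea only (not the authors' method), the following rebalances index partitions when the machine count changes, moving as few partitions as possible instead of rebuilding from scratch. Names such as `assignment` and `rebalance` are ours.

def rebalance(assignment, new_machine_count):
    """Reassign partitions to `new_machine_count` machines, moving as few as possible.
    `assignment` maps partition id -> machine id from the previous configuration."""
    parts = sorted(assignment)
    quota, extra = divmod(len(parts), new_machine_count)
    capacity = {m: quota + (1 if m < extra else 0) for m in range(new_machine_count)}

    new_assignment, overflow = {}, []
    for p in parts:                      # keep a partition in place when possible
        m = assignment[p]
        if m < new_machine_count and capacity[m] > 0:
            new_assignment[p] = m
            capacity[m] -= 1
        else:
            overflow.append(p)
    for p in overflow:                   # move the rest to machines with spare room
        m = max(capacity, key=capacity.get)
        new_assignment[p] = m
        capacity[m] -= 1
    return new_assignment

old = {p: p % 4 for p in range(12)}      # 12 partitions on 4 machines
new = rebalance(old, 6)                  # scale out to 6 machines
print(sum(old[p] != new[p] for p in old), "partitions moved")  # 4, not 12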
Abstract:
ACM Computing Classification System (1998): G.1.1, G.1.2.
Abstract:
This article shows the social importance of the subsistence minimum in Georgia and presents the methodology for its calculation. We propose ways of improving the calculation of the subsistence minimum in Georgia and of extending it to other developing countries. The weights of food and non-food expenditures in the subsistence minimum basket are essential in these calculations. The daily consumption value of the minimum food basket has also been calculated. The average consumer expenditures on food and the shares of the other expenditure categories are considered in dynamics. Our methodology for calculating the subsistence minimum is applied to the case of Georgia; however, it can be used for similar purposes with data from other developing countries where social stability has been achieved and social inequalities are to be addressed. ACM Computing Classification System (1998): H.5.3, J.1, J.4, G.3.
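The core arithmetic behind such methodologies can be shown with a worked example (all numbers below are hypothetical, not Georgian statistics): the cost of the minimum food basket is extrapolated to the full subsistence minimum through the assumed weight of food in total expenditures.

# Hypothetical inputs: daily cost of the minimum food basket and the assumed
# share (weight) of food expenditures in the total subsistence basket.
daily_food_cost = 5.60        # currency units per day (hypothetical)
food_weight = 0.70            # food share of total expenditures (hypothetical)
days_per_month = 30

monthly_food_cost = daily_food_cost * days_per_month
subsistence_minimum = monthly_food_cost / food_weight   # food plus implied non-food

print(f"monthly food basket: {monthly_food_cost:.2f}")
print(f"subsistence minimum: {subsistence_minimum:.2f}")  # 168.00 / 0.70 = 240.00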
Abstract:
A number of recent studies have investigated the introduction of decoherence in quantum walks and the resulting transition to classical random walks. Interestingly, it has been shown that algorithmic properties of quantum walks with decoherence, such as the spreading rate, are sometimes better than those of their purely quantum counterparts. Not only do quantum walks with decoherence provide a generalization of quantum walks that naturally encompasses both the quantum and the classical case, but they also give rise to new and different probability distributions. The application of quantum walks with decoherence to large graphs is limited by the necessity of evolving a state vector whose size is quadratic in the number of nodes of the graph, as opposed to the linear state vector of the purely quantum (or classical) case. In this technical report, we show how to use perturbation theory to reduce the computational complexity of evolving a continuous-time quantum walk subject to decoherence. More specifically, given a graph over n nodes, we show how to approximate the eigendecomposition of the n²×n² Lindblad super-operator from the eigendecomposition of the n×n graph Hamiltonian.
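A sketch of the zeroth-order step (our simplification of the report's approach): for weak decoherence, the coherent part of the Lindbladian in superoperator form, L0 = -i(H⊗I - I⊗Hᵀ), has eigenvalues -i(λ_j - λ_k) and eigenvectors v_j⊗v̄_k built directly from the n×n eigendecomposition of H, so the n²×n² diagonalization is never performed; the decoherence terms would then be treated as a perturbation of this spectrum.

import numpy as np

n = 4
rng = np.random.default_rng(0)

# A random symmetric graph Hamiltonian (e.g. an adjacency matrix).
A = rng.integers(0, 2, size=(n, n))
H = np.triu(A, 1); H = H + H.T

lam, V = np.linalg.eigh(H)            # n x n eigendecomposition: O(n^3)

# Coherent superoperator L0 = -i (H kron I - I kron H^T), acting on vec(rho).
L0 = -1j * (np.kron(H, np.eye(n)) - np.kron(np.eye(n), H.T))

# Its spectrum follows from H alone: eigenvalue -i(lam_j - lam_k),
# eigenvector kron(v_j, conj(v_k)). No n^2 x n^2 diagonalization is needed.
j, k = 1, 3
w = np.kron(V[:, j], V[:, k].conj())
print(np.allclose(L0 @ w, -1j * (lam[j] - lam[k]) * w))  # True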