946 results for incremental computation
Abstract:
In this paper, a modification of the high-order neural network (HONN) is presented. Third-order networks are considered for achieving translation-, rotation-, and scale-invariant pattern recognition. However, they require substantial storage and computational power for the task. The proposed modified HONN takes into account a priori knowledge of the binary patterns that have to be learned, achieving significant gains in computation time and memory requirements. This modification enables the efficient computation of HONNs for image fields larger than 100 × 100 pixels without any loss of pattern information.
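For illustration, a minimal sketch of a third-order term of the kind such networks use, restricted to foreground pixels of a binary pattern as the abstract suggests; the angle quantization and weight lookup are simplifying assumptions, not the paper's scheme:

```python
# Illustrative sketch only: a third-order HONN term over binary images.
# For a binary pattern, x_i * x_j * x_k vanishes unless all three pixels
# are 1, so summing over foreground triples alone gives the same result
# with far less work -- the kind of saving the abstract refers to.
import numpy as np
from itertools import combinations

def triangle_angles(p1, p2, p3):
    """Interior angles of the pixel triangle: invariant to translation,
    rotation, and scale of the pattern."""
    pts = [np.asarray(p, float) for p in (p1, p2, p3)]
    angles = []
    for i in range(3):
        a, b, c = pts[i], pts[(i + 1) % 3], pts[(i + 2) % 3]
        u, v = b - a, c - a
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    return tuple(sorted(int(a) for a in angles))  # quantize to whole degrees

def honn_response(image, weights):
    """Sum third-order terms over foreground pixel triples only;
    `weights` maps quantized angle triples to learned weights."""
    fg = list(zip(*np.nonzero(image)))
    return sum(weights.get(triangle_angles(i, j, k), 0.0)
               for i, j, k in combinations(fg, 3))
```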
Abstract:
Linguistic theory and cognitive, informational, and mathematical modeling are all useful as we attempt to achieve a better understanding of the Language Faculty (LF). This cross-disciplinary approach will eventually lead to the identification of the key principles applicable in systems for Natural Language Processing. The present work concentrates on the syntax-semantics interface. We start from recursive definitions and the application of optimization principles, and gradually develop a formal model of syntactic operations. The result, a Fibonacci-like syntactic tree, is in fact an argument-based variant of natural language syntax. This representation (the argument-centered model, ACM) is derived by a recursive calculus that generates a node which connects arguments and expresses the relations between them. The reiterative operation assigns the primary role to entities as the key components of syntactic structure. We provide experimental evidence in support of the argument-based model. We also show that the mental computation of syntax is influenced by the inter-conceptual relations between the images of entities in a semantic space.
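As a toy illustration of how a recursive definition can yield Fibonacci-like growth in a tree, consider the classic two-symbol rewrite system below; the rules are an assumption for illustration, not the paper's calculus:

```python
# Toy illustration, not the paper's calculus: the rewrite system
# X -> X Y, Y -> X produces levels whose sizes follow the Fibonacci
# sequence, one way a "Fibonacci-like syntactic tree" can arise.
def expand(level):
    nxt = []
    for sym in level:
        nxt.extend(["X", "Y"] if sym == "X" else ["X"])
    return nxt

level = ["X"]
sizes = []
for _ in range(10):
    sizes.append(len(level))
    level = expand(level)
print(sizes)  # [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```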
Abstract:
Toric coordinates and the toric vector field were introduced in [2]. Let A be an arbitrary vector field. We obtain formulae for div A, rot A, and the Laplace operator in toric coordinates.
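For reference, the general curvilinear-coordinate identities from which such formulae are specialized (these are the standard Voss–Weyl forms, not the paper's toric-specific results, which depend on the metric of [2]):

```latex
% Divergence and Laplacian in arbitrary curvilinear coordinates u^i,
% with metric g_{ij}, inverse g^{ij}, and determinant g:
\operatorname{div}\mathbf{A}
  \;=\; \frac{1}{\sqrt{g}}\,\frac{\partial}{\partial u^{i}}
        \bigl(\sqrt{g}\,A^{i}\bigr),
\qquad
\Delta f
  \;=\; \frac{1}{\sqrt{g}}\,\frac{\partial}{\partial u^{i}}
        \Bigl(\sqrt{g}\,g^{ij}\,\frac{\partial f}{\partial u^{j}}\Bigr).
```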
Abstract:
Functional programming has a lot to offer to the developers of global Internet-centric applications, but is often applicable only to a small part of the system or requires major architectural changes. The data model used for functional computation is often simply considered a consequence of the chosen programming style, although an inappropriate choice of model can make integration with the imperative parts much harder. In this paper we do the opposite: we start from a data model based on JSON and then derive the functional approach from it. We outline the identified principles and present Jsonya/fn, a low-level functional language that is defined in and operates with the selected data model. We use several Jsonya/fn implementations and the architecture of a recently developed application to show that our approach can improve interoperability and achieve additional reuse of representations and operations at relatively low cost. ACM Computing Classification System (1998): D.3.2, D.3.4.
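A hypothetical sketch of the underlying idea, written in Python rather than Jsonya/fn (whose syntax the paper defines): take JSON values as the single data model and express computation as pure functions from JSON to JSON.

```python
# Hypothetical illustration, not Jsonya/fn itself: JSON values
# (dict/list/str/number/bool/None) as the data model, computation as
# pure, non-mutating functions over them.
import json

def update_in(value, path, fn):
    """Return a new JSON value with fn applied at `path`; no mutation."""
    if not path:
        return fn(value)
    head, rest = path[0], path[1:]
    if isinstance(value, dict):
        return {**value, head: update_in(value.get(head), rest, fn)}
    if isinstance(value, list):
        return [update_in(v, rest, fn) if i == head else v
                for i, v in enumerate(value)]
    raise TypeError(f"cannot descend into {type(value).__name__}")

doc = json.loads('{"cart": {"items": [{"qty": 1}, {"qty": 2}]}}')
bumped = update_in(doc, ["cart", "items", 1, "qty"], lambda q: q + 1)
print(bumped)  # {'cart': {'items': [{'qty': 1}, {'qty': 3}]}}
```

Because both the imperative and functional parts of a system speak the same JSON representation, data crosses the boundary without translation, which is the interoperability benefit the abstract claims.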
Abstract:
This work contributes to the development of search engines that self-adapt their size in response to fluctuations in workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computational resources to or from the engine. In this paper, we focus on the problem of regrouping the metric-space search index when the number of virtual machines used to run the search engine is modified to reflect changes in workload. We propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. We tested its performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud, while calibrating the results to compensate for the performance fluctuations of the platform. Our experiments show that, when compared with computing the index from scratch, the incremental algorithm speeds up the index computation 2–10 times while maintaining a similar search performance.
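A simplified sketch of the incremental idea, under stated assumptions (this is not the paper's algorithm; index "groups" and the quota scheme are illustrative): when the machine count changes, keep existing group placements wherever capacity allows and move only the remainder, rather than rebuilding the assignment from scratch.

```python
# Simplified sketch: rebalance index groups over a new machine set,
# moving as few groups as possible instead of recomputing placement.
def rebalance(assignment, new_machines):
    """assignment: {group_id: machine_id}; returns a new assignment
    over `new_machines`, moving only the groups that must move."""
    groups = list(assignment)
    quota, extra = divmod(len(groups), len(new_machines))
    capacity = {m: quota + (1 if i < extra else 0)
                for i, m in enumerate(new_machines)}
    result, pending = {}, []
    for g, m in assignment.items():
        if capacity.get(m, 0) > 0:       # keep the group where it is
            result[g] = m
            capacity[m] -= 1
        else:                            # machine gone or already full
            pending.append(g)
    for g in pending:                    # place movers in leftover slots
        m = max(capacity, key=capacity.get)
        result[g] = m
        capacity[m] -= 1
    return result

old = {f"g{i}": f"vm{i % 4}" for i in range(12)}    # 12 groups on 4 VMs
new = rebalance(old, [f"vm{i}" for i in range(6)])  # scale out to 6 VMs
print(sum(old[g] != new[g] for g in old))           # 4: only 4 groups move
```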
Abstract:
ACM Computing Classification System (1998): G.1.1, G.1.2.
Abstract:
This article shows the social importance of the subsistence minimum in Georgia and presents the methodology for its calculation. We propose ways of improving the calculation of the subsistence minimum in Georgia and of extending it to other developing countries. The weights of food and non-food expenditures in the subsistence minimum basket are essential in these calculations. The daily consumption value of the minimum food basket has also been calculated. The average consumer expenditure on food and the shares of the other expenditures are examined over time. Our methodology for calculating the subsistence minimum is applied to the case of Georgia; however, it can be used for similar purposes with data from other developing countries where social stability has been achieved and social inequalities are to be addressed. ACM Computing Classification System (1998): H.5.3, J.1, J.4, G.3.
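A minimal worked example of the common food-share scaling scheme such calculations use; the basket items, prices, and the 70% food weight below are made-up illustrative numbers, not figures from the article:

```python
# Illustrative only: subsistence minimum via the food-share method.
daily_basket = {            # grams per day -> price per kg (GEL)
    "bread":    (300, 2.0),
    "potatoes": (250, 1.2),
    "milk":     (200, 2.5),
    "meat":     (100, 12.0),
}
daily_food_cost = sum(g / 1000 * p for g, p in daily_basket.values())
monthly_food_cost = daily_food_cost * 30
food_share = 0.70           # assumed weight of food in the basket
subsistence_minimum = monthly_food_cost / food_share
print(f"daily food basket: {daily_food_cost:.2f} GEL")        # 2.60 GEL
print(f"subsistence minimum: {subsistence_minimum:.2f} GEL")  # 111.43 GEL/month
```

The food/non-food weights are the sensitive inputs here, which is why the abstract stresses them: halving the food share would double the resulting minimum.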
Abstract:
A number of recent studies have investigated the introduction of decoherence in quantum walks and the resulting transition to classical random walks. Interestingly, it has been shown that algorithmic properties of quantum walks with decoherence, such as the spreading rate, are sometimes better than those of their purely quantum counterparts. Not only do quantum walks with decoherence provide a generalization of quantum walks that naturally encompasses both the quantum and classical cases, but they also give rise to new and different probability distributions. The application of quantum walks with decoherence to large graphs is limited by the necessity of evolving a state vector whose size is quadratic in the number of nodes of the graph, as opposed to the linear state vector of the purely quantum (or classical) case. In this technical report, we show how to use perturbation theory to reduce the computational complexity of evolving a continuous-time quantum walk subject to decoherence. More specifically, given a graph over n nodes, we show how to approximate the eigendecomposition of the n² × n² Lindblad super-operator from the eigendecomposition of the n × n graph Hamiltonian.
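A sketch of the zeroth-order step such an approach can build on (the decoherence term would then be treated as a perturbation; this illustrates the standard fact, not the report's full construction): for the purely Hamiltonian part L0(ρ) = -i(Hρ - ρH), the n² × n² super-operator's eigenpairs follow directly from the n × n Hamiltonian's, with eigenvalue -i(λ_i - λ_j) and eigen-operator v_i v_j†.

```python
# Eigenpairs of the Hamiltonian part of the Lindbladian from the n x n
# eigendecomposition, without forming the n^2 x n^2 matrix (we build it
# here only to verify one pair).
import numpy as np

n = 4
A = np.random.rand(n, n)
H = (A + A.T) / 2                     # graph Hamiltonian (symmetric)
lam, V = np.linalg.eigh(H)

super_eigs = {(i, j): -1j * (lam[i] - lam[j])
              for i in range(n) for j in range(n)}

# Check against the explicit super-operator L0 = -i(H (x) I - I (x) H^T),
# using row-major vectorization: vec(A X B) = (A (x) B^T) vec(X).
I = np.eye(n)
L0 = -1j * (np.kron(H, I) - np.kron(I, H.T))
i, j = 1, 3
vec = np.outer(V[:, i], V[:, j].conj()).reshape(-1)  # vec of v_i v_j^dag
assert np.allclose(L0 @ vec, super_eigs[(i, j)] * vec)
```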
Abstract:
This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computational resources to or from the engine. Our solution is an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects, broken out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP), and the Regrouping Order Problem (ROP). CNP is the problem of determining, in the light of changes in the query workload, the ideal number of processors p to keep active at any given time. NGP arises once a change in the number of processors has been decided: it must then be determined which groups of search data will be distributed across the processors. ROP is the problem of redistributing this data onto the processors while keeping the engine responsive and minimising both the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP, we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, compared with computing the index from scratch, the incremental NGP algorithm speeds up the index computation 2–10 times while maintaining a similar search performance. The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine the new size of the search engine. Combined, these algorithms yield an adaptive algorithm that adjusts the search engine size under a variable workload.
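An illustrative CNP-style decision rule, under stated assumptions (the thresholds, smoothing constant, and per-processor capacity are made-up parameters, not the dissertation's algorithm): derive the processor count from a smoothed load estimate, with a hysteresis band so the engine does not oscillate between sizes.

```python
# Illustrative sketch of a deterministic sizing rule for CNP.
def next_size(current_p, load_history, per_proc_capacity=100.0,
              alpha=0.3, up=0.8, down=0.5, p_min=1, p_max=64):
    smoothed = load_history[0]
    for x in load_history[1:]:                    # exponential smoothing
        smoothed = alpha * x + (1 - alpha) * smoothed
    utilisation = smoothed / (current_p * per_proc_capacity)
    if utilisation > up:                          # overloaded: scale out
        return min(p_max, current_p + 1)
    if utilisation < down and current_p > p_min:  # underloaded: scale in
        return current_p - 1
    return current_p                              # hysteresis band: hold

print(next_size(4, [250, 300, 380, 420]))  # rising load -> 5
print(next_size(4, [150, 140, 120, 110]))  # falling load -> 3
```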
Abstract:
We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier–Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory, using an Onsager–Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors, analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments, and to other, more complex, turbulent systems.
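For orientation, the generic Freidlin–Wentzell form of the object such methods minimize (this is the textbook additive-noise case, not the paper's geophysical action):

```latex
% For a diffusion dX = b(X)\,dt + \sqrt{\epsilon}\,dW, minimum action
% methods minimize the action over paths joining the two attractors:
S_T[x] \;=\; \frac{1}{2}\int_0^T
    \bigl\|\dot{x}(t) - b\bigl(x(t)\bigr)\bigr\|^2 \, dt,
\qquad
\mathbb{P}(x_1 \to x_2) \;\asymp\;
    \exp\!\Bigl(-\tfrac{1}{\epsilon}
    \min_{x(0)=x_1,\; x(T)=x_2} S_T[x]\Bigr),
```

so the minimizing path (the instanton) dominates the transition probability in the small-noise limit.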
Abstract:
Today's wireless networks rely mostly on infrastructural support for their operation. With the concept of ubiquitous computing growing more popular, research on infrastructureless networks has been growing rapidly. However, such networks face serious security challenges when deployed. This dissertation focuses on designing a secure routing solution and trust modeling for these infrastructureless networks. The dissertation presents a trusted routing protocol that is capable of finding a secure end-to-end route in the presence of malicious nodes acting either independently or in collusion. The solution protects the network from active internal attacks, known to be the most severe type of attack in an ad hoc application. Route discovery is based on trust levels of the nodes, which need to be dynamically computed to reflect malicious behavior in the network. As such, we have developed a trust computational model, in conjunction with the secure routing protocol, that analyzes the different malicious behaviors and quantifies them in the model itself. Our work is the first step towards protecting an ad hoc network from colluding internal attacks. To demonstrate the feasibility of the approach, extensive simulation has been carried out to evaluate the protocol's efficiency and its scalability with both network size and mobility. This research has laid the foundation for developing a variety of techniques that will permit people to justifiably trust the use of ad hoc networks to perform critical functions, as well as to process sensitive information without depending on any infrastructural support, and hence will enhance the use of ad hoc applications in both military and civilian domains.
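A generic sketch of one common way to compute dynamic per-node trust from observed behavior (a beta-reputation update); this is illustrative only and does not reproduce the dissertation's model:

```python
# Beta-reputation trust: maintain (good, bad) observation counts per
# node and use the Beta-posterior mean as the trust level that route
# discovery could consult.
class TrustTable:
    def __init__(self):
        self.counts = {}                 # node_id -> [good, bad]

    def observe(self, node, well_behaved):
        counts = self.counts.setdefault(node, [1, 1])  # uniform prior
        counts[0 if well_behaved else 1] += 1

    def trust(self, node):
        good, bad = self.counts.get(node, (1, 1))
        return good / (good + bad)       # mean of Beta(good, bad)

t = TrustTable()
for ok in [True, True, False, True]:     # one misbehaviour observed
    t.observe("node7", ok)
print(round(t.trust("node7"), 2))        # 0.67; routes could require a
                                         # minimum trust on every hop
```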
Abstract:
This dissertation establishes a novel system for human face learning and recognition based on incremental multilinear Principal Component Analysis (PCA). Most existing face recognition systems need training data during the learning process. The system proposed in this dissertation utilizes an unsupervised or weakly supervised learning approach, in which the learning phase requires a minimal amount of training data. It also overcomes the inability of traditional systems to adapt during the testing phase, where the decision process for newly acquired images continues to rely on the same old training data set. Consequently, when a new training set is to be used, the traditional approach requires that the entire eigensystem be generated again. To speed up this computational process, the proposed method uses the eigensystem generated from the old training set, together with the new images, to generate the new eigensystem more efficiently in a so-called incremental learning process. In the empirical evaluation phase, two key factors are essential in evaluating the performance of the proposed method: (1) recognition accuracy and (2) computational complexity. In order to establish the most suitable algorithm for this research, a comparative analysis of the best performing methods was carried out first; its results advocated the initial utilization of multilinear PCA in our research. To address the computational complexity of the subspace update procedure, a novel incremental algorithm was established, combining the traditional sequential Karhunen-Loeve (SKL) algorithm with a newly developed incremental modified fast PCA algorithm. In order to utilize multilinear PCA in the incremental process, a new unfolding method was developed to affix the newly added data at the end of the previous data. The results of the incremental process based on these two methods bear out these theoretical improvements. Some object tracking results using video images are also provided as another challenging task to prove the soundness of this incremental multilinear learning method.
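A sketch of a sequential Karhunen-Loeve (SKL) style subspace update, the kind of linear building block the dissertation combines with its multilinear extension; function names, shapes, and the omission of mean updates and unfolding are simplifying assumptions here:

```python
# SKL-style rank-k subspace update: fold a new data block into an
# existing eigenbasis without redoing the SVD of all data seen so far.
import numpy as np

def skl_update(U, S, B, k):
    """Update a rank-k basis (U: d x k, S: k singular values) with a
    new data block B (d x m)."""
    proj = U.T @ B                      # coordinates of B in the old basis
    resid = B - U @ proj                # component of B outside the basis
    Q, R = np.linalg.qr(resid)          # orthonormal basis for the residual
    k0 = S.shape[0]
    K = np.block([[np.diag(S), proj],
                  [np.zeros((R.shape[0], k0)), R]])
    Uk, Sk, _ = np.linalg.svd(K)        # small (k0+m) x (k0+m) SVD
    return np.hstack([U, Q]) @ Uk[:, :k], Sk[:k]

# Usage: start from an SVD of the old data, then fold in new images.
d, k = 100, 5
old = np.random.randn(d, 40)
U, S, _ = np.linalg.svd(old, full_matrices=False)
U, S = U[:, :k], S[:k]
U, S = skl_update(U, S, np.random.randn(d, 10), k)
print(U.shape, S.shape)                 # (100, 5) (5,)
```

The cost of each update is governed by the small SVD, not by the total number of images processed so far, which is what makes the incremental route faster than regenerating the eigensystem.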
Abstract:
Four second-grade students participated in a B-A-B withdrawal single-subject design experiment. The intervention package implemented consisted of three components: self-monitoring, performance feedback, and reinforcers. Participants completed math probes across phases. Accuracy and productivity were recorded and calculated. Results demonstrated that the intervention package improved accuracy and productivity for all participants.
Abstract:
Since the 1950s, the theory of deterministic and nondeterministic finite automata (DFAs and NFAs, respectively) has been a cornerstone of theoretical computer science. In this dissertation, our main object of study is minimal NFAs. In contrast with minimal DFAs, minimal NFAs are computationally challenging: first, there can be more than one minimal NFA recognizing a given language; second, the problem of converting an NFA to a minimal equivalent NFA is NP-hard, even for NFAs over a unary alphabet. Our study is based on the development of two main theories, inductive bases and partials, which in combination form the foundation for an incremental algorithm, ibas, to find minimal NFAs. An inductive basis is a collection of languages with the property that it can generate (through union) each of the left quotients of its elements. We prove a fundamental characterization theorem which says that a language can be recognized by an n-state NFA if and only if it can be generated by an n-element inductive basis. A partial is an incompletely-specified language. We say that an NFA recognizes a partial if its language extends the partial, meaning that the NFA's behavior is unconstrained on unspecified strings; it follows that a minimal NFA for a partial is also minimal for its language. We therefore direct our attention to minimal NFAs recognizing a given partial. Combining inductive bases and partials, we generalize our characterization theorem, showing that a partial can be recognized by an n-state NFA if and only if it can be generated by an n-element partial inductive basis. We apply our theory to develop and implement ibas, an incremental algorithm that finds minimal partial inductive bases generating a given partial. In the case of unary languages, ibas can often find minimal NFAs of up to 10 states in about an hour of computing time; with brute-force search this would require many trillions of years.
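A toy check of the inductive-basis property for unary languages, with languages truncated to word lengths below a bound N; this sketches the definition the characterization theorem is about (a sketch of the property, not the ibas search algorithm, and the truncation is an assumption of this illustration):

```python
# Unary languages as sets of word lengths < N. A family is an inductive
# basis if every left quotient of every member is a union of members.
N = 16

def quotient(lang):
    """Left quotient by the letter 'a': every length shifts down by one."""
    return frozenset(n - 1 for n in lang if 1 <= n <= N)

def is_union_of(target, family):
    """A set is a union of family members iff it equals the union of
    exactly those members contained in it."""
    return frozenset().union(*(f for f in family if f <= target)) == target

def is_inductive_basis(basis):
    cut = lambda L: frozenset(n for n in L if n < N)   # truncation guard
    cut_basis = [cut(L) for L in basis]
    return all(is_union_of(cut(quotient(L)), cut_basis) for L in basis)

evens = frozenset(range(0, N + 1, 2))   # the unary language (aa)*
odds  = frozenset(range(1, N + 1, 2))   # a(aa)*
print(is_inductive_basis([evens, odds]))  # True: matches the 2-state NFA
print(is_inductive_basis([evens]))        # False: its quotient is odd
```

The two-element basis for (aa)* mirrors the theorem's correspondence: the language is recognized by a 2-state NFA, and indeed admits a 2-element inductive basis.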