67 results for Symbolic computation and algebraic computation
in CentAUR: Central Archive, University of Reading - UK
Abstract:
In this paper microlevel politics and conflict associated with social and economic change in the countryside and linked changes in rural governance are explored with a focus upon research carried out on a recent rural policy initiative aimed at local 'empowerment'. This acts as a touchstone for a wider theoretical discussion. The paper is theorised within a conceptual framework derived and extended from the work of Pierre Bourdieu and others in order to explore case studies of the English Countryside Commission's Parish Paths Partnership scheme. The micropolitics involved with this scheme are examined and used to highlight more general issues raised by increased 'parish empowerment' in the 'postrural'.
Abstract:
Syntactic theory provides a rich array of representational assumptions about linguistic knowledge and processes. Such detailed and independently motivated constraints on grammatical knowledge ought to play a role in sentence comprehension. However, most grammar-based explanations of processing difficulty in the literature have attempted to use grammatical representations and processes per se to explain processing difficulty. They do not take into account that the description of higher cognition in mind and brain encompasses two levels: at the macrolevel, symbolic computation is performed, while at the microlevel, computation is achieved through processes within a dynamical system. One critical question is therefore how linguistic theory and dynamical systems can be unified to provide an explanation for processing effects. Here, we present such a unification for a particular account of syntactic theory: a parser for Stabler's Minimalist Grammars, in the framework of Smolensky's Integrated Connectionist/Symbolic architectures. In simulations we demonstrate that the connectionist minimalist parser produces predictions which mirror global empirical findings from psycholinguistic research.
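Smolensky's Integrated Connectionist/Symbolic architecture builds on tensor product representations, in which symbolic structures are encoded as vectors by binding filler vectors to role vectors. The following Python sketch illustrates only that encoding step, not the minimalist parser itself; the fillers, roles and dimensionalities are illustrative assumptions.

import numpy as np

# Filler vectors for two symbols and orthonormal role vectors for two structural
# positions; orthonormal roles make exact unbinding possible.
fillers = {"the": np.array([1.0, 0.0, 0.0]), "dog": np.array([0.0, 1.0, 0.0])}
roles = {"spec": np.array([1.0, 0.0]), "head": np.array([0.0, 1.0])}

# Bind each filler to its role with an outer product and superpose the bindings:
# s = sum_i f_i (outer) r_i is the vectorial encoding of the symbolic structure.
s = np.outer(fillers["the"], roles["spec"]) + np.outer(fillers["dog"], roles["head"])

# Unbinding: contracting with a role vector recovers the filler bound to that role.
recovered = s @ roles["head"]
print(np.allclose(recovered, fillers["dog"]))    # True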
Abstract:
We introduce the perspex machine which unifies projective geometry and Turing computation and results in a supra-Turing machine. We show two ways in which the perspex machine unifies symbolic and non-symbolic AI. Firstly, we describe concrete geometrical models that map perspexes onto neural networks, some of which perform only symbolic operations. Secondly, we describe an abstract continuum of perspex logics that includes both symbolic logics and a new class of continuous logics. We argue that an axiom in symbolic logic can be the conclusion of a perspex theorem. That is, the atoms of symbolic logic can be the conclusions of sub-atomic theorems. We argue that perspex space can be mapped onto the spacetime of the universe we inhabit. This allows us to discuss how a robot might be conscious, feel, and have free will in a deterministic, or semi-deterministic, universe. We ground the reality of our universe in existence. On a theistic point, we argue that preordination and free will are compatible. On a theological point, we argue that it is not heretical for us to give robots free will. Finally, we give a pragmatic warning as to the double-edged risks of creating robots that do, or alternatively do not, have free will.
Abstract:
A novel iterative procedure is described for solving nonlinear optimal control problems subject to differential algebraic equations. The procedure iterates on an integrated, modified linear-quadratic model-based problem with parameter updating in such a manner that the correct solution of the original nonlinear problem is achieved. The resulting algorithm has the particular advantage that the solution is obtained without the need to solve the differential algebraic equations. Convergence aspects are discussed and a simulation example is described which illustrates the performance of the technique. When industrial processes are modelled, the resulting equations often consist of coupled differential and algebraic equations (DAEs); in many situations these equations are nonlinear and cannot readily be reduced directly to ordinary differential equations.
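To convey the flavour of iterating on a linear-quadratic subproblem with parameter updating, here is a minimal Python sketch on an ordinary (non-DAE) scalar system; it illustrates the general idea only, not the authors' DAE algorithm, and the dynamics, weights and parameters are assumed for the example.

import numpy as np

# Nonlinear dynamics: x_{k+1} = x_k + dt*(-x_k**3 + u_k), regulated towards the origin.
dt, N = 0.1, 50          # step size and horizon
Q, R = 1.0, 0.1          # state and control weights of the LQ subproblem
x0 = 2.0                 # initial state

us = np.zeros(N)         # current control iterate
xs = None
for it in range(30):
    # Forward simulate the nonlinear model with the current controls.
    xs = [x0]
    for u in us:
        xs.append(xs[-1] + dt * (-xs[-1] ** 3 + u))
    xs = np.array(xs)

    # Linearize about the current trajectory: A_k = df/dx, B_k = df/du.
    A = 1.0 + dt * (-3.0 * xs[:-1] ** 2)
    B = np.full(N, dt)

    # Solve the time-varying LQ subproblem by a backward Riccati recursion.
    P, K = Q, np.zeros(N)
    for k in reversed(range(N)):
        K[k] = (B[k] * P * A[k]) / (R + B[k] * P * B[k])
        P = Q + A[k] * P * (A[k] - B[k] * K[k])

    # Parameter update: new controls from the LQ feedback law along the trajectory.
    new_us = -K * xs[:-1]
    if np.max(np.abs(new_us - us)) < 1e-8:
        break
    us = new_us

print(f"stopped after {it + 1} iterations, final state {xs[-1]:.4f}")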
Abstract:
Inverse problems for dynamical system models of cognitive processes comprise the determination of synaptic weight matrices or kernel functions for neural networks or neural/dynamic field models, respectively. We introduce dynamic cognitive modeling as a three-tier top-down approach in which cognitive processes are first described as algorithms that operate on complex symbolic data structures. Second, symbolic expressions and operations are represented by states and transformations in abstract vector spaces. Third, prescribed trajectories through representation space are implemented in neurodynamical systems. We discuss the Amari equation for a neural/dynamic field theory as a special case and show that the kernel construction problem is particularly ill-posed. We suggest a Tikhonov-Hebbian learning method as a regularization technique and demonstrate its validity and robustness for basic examples of cognitive computations.
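In a discretized setting, the Tikhonov-Hebbian idea amounts to regularizing a Hebbian (outer-product) estimate of the connectivity: given state snapshots X along the prescribed trajectory and their desired targets Y, the estimate W = Y X^T (X X^T + lam I)^(-1) minimizes the fit error plus a Tikhonov penalty. Below is a minimal Python sketch under these assumptions (a finite weight matrix and synthetic data, not the continuous Amari-field formulation of the paper).

import numpy as np

# X: states sampled along the prescribed trajectory (columns); Y: their targets,
# e.g. the next states or the desired time derivatives at those points.
rng = np.random.default_rng(0)
n, T = 50, 200
W_true = rng.normal(scale=0.1, size=(n, n))
X = rng.normal(size=(n, T))
Y = W_true @ X + 0.01 * rng.normal(size=(n, T))   # noisy linear targets

# Tikhonov-regularized Hebbian estimate:
#   W = argmin ||W X - Y||_F^2 + lam * ||W||_F^2  =  Y X^T (X X^T + lam I)^(-1)
lam = 1.0
W_hat = Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(n))

print("relative error:", np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true))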
Abstract:
A polynomial-based ARMA model, when posed in a state-space framework, can be regarded in many different ways. In this paper two particular state-space forms of the ARMA model are considered; although both are canonical in structure, they differ in the mode in which disturbances are fed into the state and output equations. For both forms a solution is found to the optimal discrete-time observer problem, and algebraic connections between the two optimal observers are shown. The purpose of the paper is to highlight the fact that the optimal observer obtained from the first state-space form, commonly known as the innovations form, is not that employed in an optimal controller in the minimum output-variance sense, whereas the optimal observer obtained from the second form is. Hence the second form is a much more appropriate state-space description to use for controller design, particularly when employed in self-tuning control schemes.
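For the innovations form x_{k+1} = F x_k + K e_k, y_k = H x_k + e_k, the optimal one-step-ahead observer simply feeds back the output prediction error, which converges to the innovation e_k. The Python sketch below illustrates this for an assumed ARMA(2,2) model; the parameter values are illustrative and not taken from the paper.

import numpy as np

rng = np.random.default_rng(1)
a1, a2 = 0.5, -0.3      # AR parameters (illustrative values)
c1, c2 = 0.4, 0.2       # MA parameters (illustrative values)

# Innovations-form state space of the ARMA(2,2) model:
#   x_{k+1} = F x_k + K e_k,   y_k = H x_k + e_k
F = np.array([[a1, 1.0], [a2, 0.0]])
H = np.array([1.0, 0.0])
K = np.array([c1 + a1, c2 + a2])

T = 500
e = rng.normal(size=T)
x = np.zeros(2)
xhat = np.array([1.0, -1.0])          # deliberately wrong initial state estimate
errs = []
for k in range(T):
    y = H @ x + e[k]
    innov = y - H @ xhat               # innovation = output prediction error
    errs.append(abs(innov - e[k]))     # shrinks to zero as the observer locks on
    x = F @ x + K * e[k]
    xhat = F @ xhat + K * innov        # optimal observer: feed back the innovation

print("max |innovation - noise| over the last 100 steps:", max(errs[-100:]))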
Abstract:
Neurofuzzy modelling systems combine fuzzy logic with quantitative artificial neural networks via a concept of fuzzification, using fuzzy membership functions (usually based on B-splines) and algebraic operators for inference. The paper introduces a neurofuzzy model construction algorithm using Bezier-Bernstein polynomial functions as basis functions. The new network maintains most of the properties of the B-spline expansion based neurofuzzy system, such as the non-negativity of the basis functions and unity of support, but with the additional advantages of structural parsimony and Delaunay input-space partitioning, avoiding the inherent computational problems of lattice networks. This new modelling network is based on the idea that an input vector can be mapped into barycentric co-ordinates with respect to a set of predetermined knots acting as vertices of a polygon (a set of tiled Delaunay triangles) over the input space. The network is expressed as the Bezier-Bernstein polynomial function of the barycentric co-ordinates of the input vector. An inverse de Casteljau procedure using backpropagation is developed to obtain the input vector's barycentric co-ordinates, which form the basis functions. Extension of the Bezier-Bernstein neurofuzzy algorithm to n-dimensional inputs is discussed, followed by numerical examples demonstrating the effectiveness of this new data-based modelling approach.
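The core geometric step, mapping an input point to barycentric co-ordinates with respect to a (Delaunay) triangle and evaluating Bernstein basis functions on those co-ordinates, can be sketched in Python as follows; the triangle, polynomial degree and test point are illustrative assumptions, and the inverse de Casteljau/backpropagation machinery of the paper is not reproduced.

import numpy as np
from math import factorial

def barycentric(p, tri):
    """Barycentric co-ordinates of 2-D point p w.r.t. the triangle with vertices tri (3x2)."""
    T = np.column_stack((tri[0] - tri[2], tri[1] - tri[2]))
    l1, l2 = np.linalg.solve(T, p - tri[2])
    return np.array([l1, l2, 1.0 - l1 - l2])

def bernstein_basis(lam, n=2):
    """Degree-n bivariate Bernstein basis B_ijk(lam) = n!/(i! j! k!) u^i v^j w^k."""
    u, v, w = lam
    basis = {}
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k = n - i - j
            coef = factorial(n) / (factorial(i) * factorial(j) * factorial(k))
            basis[(i, j, k)] = coef * u**i * v**j * w**k
    return basis

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # one Delaunay triangle
lam = barycentric(np.array([0.3, 0.4]), tri)
basis = bernstein_basis(lam)
print("co-ordinates sum to 1:", np.isclose(lam.sum(), 1.0))
print("basis non-negative and summing to 1:",
      all(b >= 0 for b in basis.values()), np.isclose(sum(basis.values()), 1.0))

For a point inside the triangle the co-ordinates are non-negative and sum to one, which is what gives the resulting basis functions the non-negativity and unity properties mentioned above.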
Abstract:
We study the regularization problem for linear, constant coefficient descriptor systems Ex' = Ax+Bu, y1 = Cx, y2 = Γx' by proportional and derivative mixed output feedback. Necessary and sufficient conditions are given, which guarantee that there exist output feedbacks such that the closed-loop system is regular, has index at most one and E+BGΓ has a desired rank, i.e., there is a desired number of differential and algebraic equations. To resolve the freedom in the choice of the feedback matrices we then discuss how to obtain the desired regularizing feedback of minimum norm and show that this approach leads to useful results in the sense of robustness only if the rank of E is decreased. Numerical procedures are derived to construct the desired feedback gains. These numerical procedures are based on orthogonal matrix transformations which can be implemented in a numerically stable way.
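As a numerical illustration of what such a feedback is meant to achieve, the small Python example below (matrices and gain are assumptions, not the paper's construction) checks that a derivative output feedback raises the rank of the leading matrix E + BGΓ to full rank, so the closed-loop descriptor system has no purely algebraic part and index at most one.

import numpy as np

# Illustrative descriptor system (not from the paper): E is singular, so the
# open-loop system Ex' = Ax + Bu contains one algebraic equation.
E = np.array([[1.0, 0.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Gamma = np.array([[0.0, 1.0]])        # derivative output y2 = Gamma x'

print("rank E =", np.linalg.matrix_rank(E))                    # 1

# Following the abstract's notation, the derivative feedback gain G enters the
# leading coefficient matrix as E + B G Gamma (sign conventions vary by author).
G = np.array([[1.0]])                                           # illustrative gain
E_cl = E + B @ G @ Gamma
print("rank (E + B G Gamma) =", np.linalg.matrix_rank(E_cl))   # 2: full rank, so the
# closed-loop system is regular with index at most one.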
Abstract:
This paper investigates the feasibility of using approximate Bayesian computation (ABC) to calibrate and evaluate complex individual-based models (IBMs). As ABC evolves, various versions are emerging, but here we only explore the most accessible version, rejection-ABC. Rejection-ABC involves running models a large number of times, with parameters drawn randomly from their prior distributions, and then retaining the simulations closest to the observations. Although well established in some fields, whether ABC will work with ecological IBMs is still uncertain. Rejection-ABC was applied to an existing 14-parameter earthworm energy budget IBM for which the available data consist of body mass growth and cocoon production in four experiments. ABC was able to narrow the posterior distributions of seven parameters, estimating credible intervals for each. ABC's accepted values produced slightly better fits than the literature values did. The accuracy of the analysis was assessed using cross-validation and coverage, currently the best available tests. Of the seven parameters that were not narrowed, ABC revealed that three were correlated with other parameters, while the remaining four were found not to be estimable given the available data. It is often desirable to compare models to see whether all component modules are necessary. Here we used ABC model selection to compare the full model with a simplified version which removed the earthworm's movement and much of the energy budget. We are able to show that inclusion of the energy budget is necessary for a good fit to the data. We show how our methodology can inform future modelling cycles, and briefly discuss how more advanced versions of ABC may be applicable to IBMs. We conclude that ABC has the potential to represent uncertainty in model structure, parameters and predictions, and to embed the often complex process of optimizing an IBM's structure and parameters within an established statistical framework, thereby making the process more transparent and objective.
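Rejection-ABC itself can be stated in a few lines of Python: draw parameters from the prior, simulate, and retain the draws whose summary statistics fall closest to the observations. The toy example below estimates a normal mean and is purely illustrative; it is not the earthworm IBM or its summary statistics.

import numpy as np

rng = np.random.default_rng(42)

# "Observed" data from an unknown parameter (here the mean, true value 3.0).
observed = rng.normal(loc=3.0, scale=1.0, size=100)
obs_summary = observed.mean()

# Rejection-ABC: draw parameters from the prior, simulate data, and retain the
# draws whose simulated summary statistic lies closest to the observed one.
n_sims, n_keep = 20_000, 200
theta = rng.uniform(-10.0, 10.0, size=n_sims)                     # prior draws
sim_summary = np.array([rng.normal(t, 1.0, size=100).mean() for t in theta])
distance = np.abs(sim_summary - obs_summary)
posterior = theta[np.argsort(distance)[:n_keep]]                  # accepted draws

print("posterior mean ~", posterior.mean())
print("95% credible interval ~", np.percentile(posterior, [2.5, 97.5]))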
Abstract:
Trust is one of the most important factors that influence the successful application of network service environments, such as e-commerce, wireless sensor networks, and online social networks. Computation models of trust and reputation have received special attention in both the computing and service-science communities in recent years. In this paper, a dynamical computation model of reputation for B2C e-commerce is proposed. First, concepts associated with trust and reputation are introduced, and a mathematical formula of trust for B2C e-commerce is given. Then a dynamical computation model of reputation is proposed, based on the concept of trust and the relationship between trust and reputation. Within the proposed model, characteristic varying processes of reputation in B2C e-commerce are discussed. Furthermore, the iterative trust and reputation computation models are formulated via a set of difference equations based on a closed-loop feedback mechanism. Finally, a group of numerical simulation experiments is performed to illustrate the proposed model of trust and reputation. Experimental results show that the proposed model is effective in simulating the dynamical processes of trust and reputation for B2C e-commerce.
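The flavour of such an iterative, feedback-driven computation can be conveyed by a simple pair of difference equations in which each transaction rating updates trust and reputation tracks trust with inertia; the equations, rates and rating model in the Python sketch below are illustrative assumptions, not the model proposed in the paper.

import numpy as np

rng = np.random.default_rng(7)

alpha, beta = 0.3, 0.1        # illustrative learning and inertia rates
trust, reputation = 0.5, 0.5  # initial values in [0, 1]
history = []

# Closed-loop iteration: each transaction rating s_k in [0, 1] updates trust,
# and reputation follows trust as a smoothed (inertial) feedback quantity.
for k in range(200):
    good_seller = k < 120                          # seller behaves well, then degrades
    s = rng.beta(8, 2) if good_seller else rng.beta(2, 8)
    trust = (1 - alpha) * trust + alpha * s                  # trust update
    reputation = (1 - beta) * reputation + beta * trust      # reputation update
    history.append((trust, reputation))

print("reputation after the good phase:", round(history[119][1], 3))
print("reputation after the bad phase: ", round(history[-1][1], 3))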
Abstract:
Sequential techniques can enhance the efficiency of the approximate Bayesian computation algorithm, as in Sisson et al.'s (2007) partial rejection control version. While this method is based upon the theoretical works of Del Moral et al. (2006), the application to approximate Bayesian computation results in a bias in the approximation to the posterior. An alternative version based on genuine importance sampling arguments bypasses this difficulty, in connection with the population Monte Carlo method of Cappe et al. (2004), and it includes an automatic scaling of the forward kernel. When applied to a population genetics example, it compares favourably with two other versions of the approximate algorithm.
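Below is a minimal Python sketch of this importance-sampling (population Monte Carlo) flavour of ABC on a toy normal-mean problem: a decreasing tolerance schedule, importance weights correcting for the Gaussian forward kernel, and the kernel scale set from the previous population (here twice its weighted variance, in the spirit of the automatic scaling mentioned above). The toy model and all numerical settings are assumptions for illustration, not the population-genetics example of the paper.

import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(3.0, 1.0, size=100).mean()      # observed summary statistic

def simulate(theta):
    return rng.normal(theta, 1.0, size=100).mean()

N = 1000
tolerances = [1.0, 0.5, 0.2, 0.1]

# Stage 0: plain rejection-ABC from the uniform prior on [-10, 10].
theta, w = [], []
while len(theta) < N:
    t = rng.uniform(-10.0, 10.0)
    if abs(simulate(t) - observed) < tolerances[0]:
        theta.append(t)
        w.append(1.0 / N)
theta, w = np.array(theta), np.array(w)

for eps in tolerances[1:]:
    # Kernel scale adapted from the previous population (twice its weighted variance).
    var = 2.0 * np.sum(w * (theta - np.sum(w * theta)) ** 2)
    new_theta, new_w = [], []
    while len(new_theta) < N:
        t_star = rng.choice(theta, p=w)                  # resample a particle
        t_new = rng.normal(t_star, np.sqrt(var))         # move it with the forward kernel
        if not -10.0 <= t_new <= 10.0:                   # must stay in the prior support
            continue
        if abs(simulate(t_new) - observed) < eps:
            # Importance weight: prior density over kernel mixture density.
            kernel = np.sum(w * np.exp(-0.5 * (t_new - theta) ** 2 / var)
                            / np.sqrt(2.0 * np.pi * var))
            new_theta.append(t_new)
            new_w.append((1.0 / 20.0) / kernel)
    theta = np.array(new_theta)
    w = np.array(new_w)
    w /= w.sum()

print("approximate posterior mean:", np.sum(w * theta))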
Abstract:
Genetic data obtained on population samples convey information about their evolutionary history. Inference methods can extract part of this information, but they require sophisticated statistical techniques that have been made available to the biologist community (through computer programs) only for simple and standard situations, typically involving a small number of samples. We propose here a computer program (DIY ABC) for inference based on approximate Bayesian computation (ABC), in which scenarios can be customized by the user to fit many complex situations involving any number of populations and samples. Such scenarios involve any combination of population divergences, admixtures and population size changes. DIY ABC can be used to compare competing scenarios, estimate parameters for one or more scenarios and compute bias and precision measures for a given scenario and known values of parameters (the current version applies to unlinked microsatellite data). This article describes the key methods used in the program and outlines its main features. The analysis of one simulated and one real dataset, both with complex evolutionary scenarios, illustrates the main possibilities of DIY ABC.
Abstract:
There is great interest in using amplified fragment length polymorphism (AFLP) markers because they are inexpensive and easy to produce. It is, therefore, possible to generate a large number of markers that have a wide coverage of species genomes. Several statistical methods have been proposed to study genetic structure using AFLPs, but they assume Hardy-Weinberg equilibrium and do not estimate the inbreeding coefficient, F_IS. A Bayesian method has been proposed by Holsinger and colleagues that relaxes these simplifying assumptions, but we have identified two sources of bias that can influence estimates based on these markers: (i) the use of a uniform prior on ancestral allele frequencies and (ii) the ascertainment bias of AFLP markers. We present a new Bayesian method that avoids these biases by using an implementation based on the approximate Bayesian computation (ABC) algorithm. This new method estimates population-specific F_IS and F_ST values and offers users the possibility of taking into account the criteria for selecting the markers that are used in the analyses. The software is available at our web site (http://www-leca.ujf-grenoble.fr/logiciels.htm). Finally, we provide advice on how to avoid the effects of ascertainment bias.
Abstract:
The estimation of effective population size from one sample of genotypes has been problematic because most estimators have proven imprecise or biased. We developed a web-based program, ONeSAMP, that uses approximate Bayesian computation to estimate effective population size from a sample of microsatellite genotypes. ONeSAMP requires an input file of sampled individuals' microsatellite genotypes along with information about several sampling and biological parameters. ONeSAMP provides an estimate of effective population size, along with 95% credible limits. We illustrate the use of ONeSAMP with an example data set from a re-introduced population of ibex Capra ibex.