100 results for Symbolic Computations
Abstract:
This paper draws on ethnographic case-study research conducted amongst a group of first- and second-generation immigrant children in six inner-city schools in London. It focuses on language attitudes and language choice in relation to cultural maintenance, on the one hand, and career aspirations, on the other. It seeks to provide insight into some of the experiences and dilemmatic choices encountered, and the negotiations engaged in, by transmigratory groups, how they define cultural capital, and the processes through which new meanings are shaped as part of defining a space within the host society. Underlying this discussion is the assumption that alternative cultural spaces, in which multiple identities and possibilities can be articulated, already exist in the rich texture of everyday life amongst transmigratory groups. A recurring theme is the argument that whilst the acquisition of 'world languages' is a key variable in accumulating cultural capital, the maintenance of linguistic diversity retains potent symbolic power in sustaining cohesive identities.
Abstract:
This study was an attempt to identify the epistemological roots of knowledge when students carry out hands-on experiments in physics. We found that, within the context of designing a solution to a stated problem, subjects constructed and ran thought experiments intertwined within the processes of conducting physical experiments. We show that the process of alternating between these two modes, empirically experimenting and experimenting in thought, leads towards a convergence on scientifically acceptable concepts. We call this process mutual projection. In the process of mutual projection, external representations were generated. Objects in the physical environment were represented in an imaginary world and these representations were associated with processes in the physical world. It is through this coupling that constituents of both the imaginary world and the physical world gain meaning. We further show that the external representations are rooted in sensory interaction and constitute a semi-symbolic pictorial communication system, a sort of primitive 'language', which is developed as the practical work continues. The constituents of this pictorial communication system are used in the thought experiments taking place in association with the empirical experimentation. The results of this study provide a model of physics learning during hands-on experimentation.
Abstract:
Fast interceptive actions, such as catching a ball, rely upon accurate and precise information from vision. Recent models rely on flexible combinations of visual angle and its rate of expansion, of which the tau parameter is a specific case. When an object approaches an observer, however, its trajectory may introduce bias into tau-like parameters that renders these computations unacceptable as the sole source of information for actions. Here we show that observers' knowledge of object size influences their action timing, and known size combined with image expansion simplifies the computations required to make interceptive actions and provides a route for experience to influence interceptive action.
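As an illustrative aside (not part of the abstract), the tau parameter is conventionally the ratio of an object's visual angle to its rate of expansion, which gives a first-order time-to-contact estimate. A minimal sketch, with all scenario numbers hypothetical:

```python
import math

def time_to_contact_tau(theta, theta_dot):
    """First-order time-to-contact estimate: tau = theta / (d theta / dt).

    theta:     visual angle currently subtended by the object (radians)
    theta_dot: rate of expansion of that angle (radians per second)
    """
    return theta / theta_dot

# Hypothetical scenario: a ball of diameter 0.07 m approaching at 10 m/s,
# currently 5 m away.  Visual angle: theta = 2 * atan(size / (2 * distance)).
size, distance, speed = 0.07, 5.0, 10.0
theta = 2 * math.atan(size / (2 * distance))
# For a constant-velocity head-on approach, differentiating the angle gives:
theta_dot = size * speed / (distance**2 + (size / 2)**2)
print(time_to_contact_tau(theta, theta_dot))  # close to distance/speed = 0.5 s
```

Note that tau requires no knowledge of the object's physical size; the abstract's point is that known size plus image expansion offers a simpler alternative route to the same timing information.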
Abstract:
Syntactic theory provides a rich array of representational assumptions about linguistic knowledge and processes. Such detailed and independently motivated constraints on grammatical knowledge ought to play a role in sentence comprehension. However, most grammar-based explanations of processing difficulty in the literature have attempted to use grammatical representations and processes per se to explain that difficulty. They did not take into account that the description of higher cognition in mind and brain encompasses two levels: at the macrolevel, symbolic computation is performed, while at the microlevel, computation is achieved through processes within a dynamical system. One critical question is therefore how linguistic theory and dynamical systems can be unified to provide an explanation for processing effects. Here, we present such a unification for a particular account of syntactic theory: namely, a parser for Stabler's Minimalist Grammars, in the framework of Smolensky's Integrated Connectionist/Symbolic architectures. In simulations we demonstrate that the connectionist minimalist parser produces predictions which mirror global empirical findings from psycholinguistic research.
Abstract:
Event-related brain potentials (ERP) are important neural correlates of cognitive processes. In the domain of language processing, the N400 and P600 reflect lexical-semantic integration and syntactic processing problems, respectively. We suggest an interpretation of these markers in terms of dynamical system theory and present two nonlinear dynamical models for syntactic computations where different processing strategies correspond to functionally different regions in the system's phase space.
Abstract:
The emergence of mental states from neural states by partitioning the neural phase space is analyzed in terms of symbolic dynamics. Well-defined mental states provide contexts inducing a criterion of structural stability for the neurodynamics that can be implemented by particular partitions. This leads to distinguished subshifts of finite type that are either cyclic or irreducible. Cyclic shifts correspond to asymptotically stable fixed points or limit tori, whereas irreducible shifts are obtained from generating partitions of mixing hyperbolic systems. These stability criteria are applied to the discussion of neural correlates of consciousness, to the definition of macroscopic neural states, and to aspects of the symbol grounding problem. In particular, it is shown that compatible mental descriptions, topologically equivalent to the neurodynamical description, emerge if the partition of the neural phase space is generating. If this is not the case, mental descriptions are incompatible or complementary. Consequences of this result for an integration or unification of cognitive science or psychology, respectively, are indicated.
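As background to the abstract's terminology (an illustration, not the paper's construction): symbolic dynamics replaces a trajectory by the sequence of partition cells it visits, and a subshift of finite type is specified by a 0/1 transition matrix over those cells. Irreducibility, every cell reachable from every other, can be checked from Boolean powers of that matrix. A minimal sketch, with both example matrices hypothetical:

```python
import numpy as np

def is_irreducible(T):
    """A subshift of finite type with 0/1 transition matrix T is irreducible
    iff every symbol can reach every other, i.e. for each (i, j) some power
    T^k (k = 1..n) has a positive (i, j) entry."""
    n = T.shape[0]
    reach = np.zeros_like(T)
    P = np.eye(n, dtype=int)
    for _ in range(n):
        P = (P @ T > 0).astype(int)  # Boolean k-step reachability
        reach |= P
    return bool((reach > 0).all())

# Golden-mean shift: the word "11" is forbidden; this subshift is irreducible.
golden = np.array([[1, 1],
                   [1, 0]])
print(is_irreducible(golden))  # True

# A reducible example: cell 1 can never reach cell 0.
reducible = np.array([[1, 1],
                      [0, 1]])
print(is_irreducible(reducible))  # False
```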
Abstract:
Finding the smallest eigenvalue of a given square matrix A of order n is a computationally very intensive problem. The most popular method for this problem is the Inverse Power Method, which uses LU-decomposition and forward and backward solving of the factored system at every iteration step. An alternative to this method is the Resolvent Monte Carlo method, which uses a representation of the resolvent matrix [I - qA]^(-m) as a series and then performs Monte Carlo iterations (random walks) on the elements of the matrix. This leads to great savings in computations, but the method has many restrictions and very slow convergence. In this paper we propose a method that includes a fast Monte Carlo procedure for finding the inverse matrix, a refinement procedure to improve the approximation of the inverse if necessary, and Monte Carlo power iterations to compute the smallest eigenvalue. We provide not only theoretical estimates of accuracy and convergence but also results from numerical tests performed on a number of test matrices.
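For reference, the classical Inverse Power Method the abstract takes as its baseline can be sketched in a few lines of NumPy; this is the standard textbook iteration, not the paper's Monte Carlo variant, and the test matrix is hypothetical:

```python
import numpy as np

def smallest_eigenvalue_inverse_power(A, tol=1e-10, max_iter=500):
    """Inverse Power Method: repeatedly solve A y = x and normalize; the
    iterates converge to the eigenvector of the smallest-magnitude
    eigenvalue of A.  (In practice the LU factors of A are computed once
    and reused for every forward/backward solve, as the abstract notes.)"""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    lam_old = np.inf
    for _ in range(max_iter):
        y = np.linalg.solve(A, x)   # the forward/backward solve step
        x = y / np.linalg.norm(y)
        lam = x @ A @ x             # Rayleigh quotient estimate
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam

# Hypothetical test matrix with well-separated eigenvalues near 5, 2 and 0.31
A = np.diag([5.0, 2.0, 0.3]) + 0.01 * np.ones((3, 3))
print(smallest_eigenvalue_inverse_power(A))  # close to 0.31
```

Each iteration amplifies the component along the eigenvector whose eigenvalue of A has smallest magnitude, which is why a single LU factorization amortized over many cheap solves dominates the cost profile the paper is attacking.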
Abstract:
Exact error estimates for evaluating multi-dimensional integrals are considered. An estimate is called exact if the rates of convergence of the lower- and upper-bound estimates coincide. An algorithm with such an exact rate is called optimal; such an algorithm has an unimprovable rate of convergence. The existence of exact estimates and optimal algorithms is discussed for some functional spaces that define the regularity of the integrand. Data classes important for practical computations are considered: classes of functions with bounded derivatives and Hölder-type conditions. The aim of the paper is to analyze the performance of two optimal classes of algorithms, deterministic and randomized, for computing multidimensional integrals. It is also shown how the smoothness of the integrand can be exploited to construct better randomized algorithms.
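The simplest member of the randomized class discussed here is plain Monte Carlo integration, whose stochastic error decays at the dimension-independent rate O(N^(-1/2)) regardless of integrand smoothness. A minimal sketch (the integrand and sample count are hypothetical, chosen so the exact answer is known):

```python
import math
import random

def mc_integrate(f, dim, n_samples, seed=42):
    """Plain Monte Carlo estimate of the integral of f over [0,1]^dim.
    The standard error decays as O(n_samples ** -0.5) independently of dim."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = [rng.random() for _ in range(dim)]
        total += f(x)
    return total / n_samples

# The integral of prod(x_i) over [0,1]^5 is (1/2)^5 = 0.03125
est = mc_integrate(lambda x: math.prod(x), dim=5, n_samples=200_000)
print(est)
```

The paper's point is that this baseline rate can be improved when the integrand lies in a smoother class: smarter randomized algorithms exploit bounded derivatives or Hölder continuity to converge faster than N^(-1/2).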
Abstract:
We consider a physical model of the ultrafast evolution of an initial electron distribution in a quantum wire. The electron evolution is described by a quantum-kinetic equation accounting for the interaction with phonons. A Monte Carlo approach has been developed for solving the equation. The corresponding Monte Carlo algorithm is an NP-hard problem with respect to the evolution time. To obtain solutions for long evolution times with small stochastic error, we combine variance reduction techniques with distributed computations. Grid technologies are employed because of the large computational effort imposed by the quantum character of the model.
Abstract:
The question "what Monte Carlo models can and cannot do efficiently" is discussed for some functional spaces that define the regularity of the input data. Data classes important for practical computations are considered: classes of functions with bounded derivatives and Hölder-type conditions, as well as Korobov-like spaces. A theoretical performance analysis of some algorithms with an unimprovable rate of convergence is given. Estimates of the computational complexity of two classes of algorithms, deterministic and randomized, are presented for both problems: numerical multidimensional integration and the calculation of linear functionals of the solution of a class of integral equations. (c) 2007 Elsevier Inc. All rights reserved.
Abstract:
Detecting a looming object and its imminent collision is imperative to survival. For most humans, it is a fundamental aspect of daily activities such as driving, road crossing and participating in sport, yet little is known about how the brain both detects and responds to such stimuli. Here we use functional magnetic resonance imaging to assess neural response to looming stimuli in comparison with receding stimuli and motion-controlled static stimuli. We demonstrate for the first time that, in the human, the superior colliculus and the pulvinar nucleus of the thalamus respond to looming in addition to cortical regions associated with motor preparation. We also implicate the anterior insula in making timing computations for collision events.
Abstract:
Population size estimation with discrete or nonparametric mixture models is considered, and reliable ways of construction of the nonparametric mixture model estimator are reviewed and set into perspective. Construction of the maximum likelihood estimator of the mixing distribution is done for any number of components up to the global nonparametric maximum likelihood bound using the EM algorithm. In addition, the estimators of Chao and Zelterman are considered with some generalisations of Zelterman’s estimator. All computations are done with CAMCR, a special software developed for population size estimation with mixture models. Several examples and data sets are discussed and the estimators illustrated. Problems using the mixture model-based estimators are highlighted.
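Chao's estimator mentioned in the abstract needs only the singleton and doubleton frequency counts from the capture data; a minimal sketch (CAMCR itself is not reproduced, and the example counts are hypothetical):

```python
def chao_estimator(frequency_counts):
    """Chao's lower-bound estimate of population size.

    frequency_counts: dict mapping j -> f_j, the number of distinct units
    observed exactly j times.  With n = number of distinct units observed,
    the estimate is N_hat = n + f1^2 / (2 * f2).
    """
    n = sum(frequency_counts.values())
    f1 = frequency_counts.get(1, 0)
    f2 = frequency_counts.get(2, 0)
    if f2 == 0:
        # bias-corrected form, commonly used when no doubletons are observed
        return n + f1 * (f1 - 1) / 2
    return n + f1 * f1 / (2 * f2)

# Hypothetical capture data: 50 singletons, 20 doubletons, 10 units seen thrice
print(chao_estimator({1: 50, 2: 20, 3: 10}))  # 80 + 2500/40 = 142.5
```

The estimator adds to the observed count an extrapolation for units never captured, driven by how many units were seen only once; the mixture-model estimators the abstract focuses on generalize this idea via the full frequency distribution.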
Abstract:
We describe a high-level design method to synthesize multi-phase regular arrays. The method is based on deriving component designs using classical regular (or systolic) array synthesis techniques and composing these separately evolved component designs into a unified global design. Similarity transformations are applied to component designs in the composition stage in order to align data flow between the phases of the computations. Three transformations are considered: rotation, reflection and translation. The technique is aimed at the design of hardware components for high-throughput embedded systems applications, and we demonstrate this by deriving a multi-phase regular array for the 2-D DCT algorithm, which is widely used in many video communications applications.
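The 2-D DCT used as the demonstration kernel is naturally multi-phase because it is separable: a 1-D transform over rows followed by a 1-D transform over columns. A minimal NumPy sketch of that separable computation (an illustration of the kernel only, not the paper's array design):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix C, so that y = C @ x."""
    k = np.arange(n).reshape(-1, 1)   # frequency index
    i = np.arange(n).reshape(1, -1)   # sample index
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)        # DC row gets the smaller scale factor
    return C

def dct2(block):
    """2-D DCT as two phases of 1-D transforms: rows, then columns."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T

block = np.arange(64, dtype=float).reshape(8, 8)
coeffs = dct2(block)
# For the orthonormal DCT, the DC coefficient is the block mean times N
print(coeffs[0, 0])  # 31.5 * 8 = 252.0
```

Each phase is itself a regular matrix-product computation, which is why the composition-of-phases approach described above applies directly to this kernel.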
Abstract:
This paper is concerned with the uniformization of a system of affine recurrence equations. This transformation is used in the design (or compilation) of highly parallel embedded systems (VLSI systolic arrays, signal processing filters, etc.). In this paper, we present and implement an automatic system to achieve uniformization of systems of affine recurrence equations. We unify the results from many earlier papers, develop some theoretical extensions, and then propose effective uniformization algorithms. Our results can be used in any high-level synthesis tool based on a polyhedral representation of nested loop computations.