Abstract:
The present study shows that different neural activity during mental imagery and abstract mentation can be assigned to well-defined steps of the brain's information processing. During randomized visual presentation of single imagery-type and abstract-type words, 27-channel event-related potential (ERP) field maps were obtained from 25 subjects (divided by presentation sequence into a first and a second group for statistics). The brain field map series showed a sequence of typical map configurations that were quasi-stable for brief time periods (microstates), concatenated by rapid map changes. As different map configurations must result from different spatial patterns of neural activity, each microstate represents different active neural networks. Accordingly, microstates are assumed to correspond to discrete steps of information processing. Comparing microstate topographies (using centroids) between imagery- and abstract-type words, significantly different microstates were found in both subject groups at 286–354 ms, where imagery-type words were more right-lateralized than abstract-type words, and at 550–606 ms and 606–666 ms, where anterior-posterior differences occurred. We conclude that language processing consists of several well-defined steps and that the brain states incorporating those steps are altered, in a state-dependent manner, by the stimuli's capacity to generate mental imagery or abstract mentation.
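As an aside on the centroid-based comparison: the sketch below, with a hypothetical electrode layout and toy maps (not the study's 27-channel montage or data), shows one common way of reducing a scalp potential map to positive and negative centroid locations whose lateralization can then be compared between conditions.

```python
import numpy as np

# Hypothetical 2-D electrode coordinates for a small 3x3 montage (not the
# 27-channel layout used in the study).
coords = np.array([[-1.0, 1.0], [0.0, 1.0], [1.0, 1.0],
                   [-1.0, 0.0], [0.0, 0.0], [1.0, 0.0],
                   [-1.0, -1.0], [0.0, -1.0], [1.0, -1.0]])

def map_centroids(potentials, coords):
    """Positive and negative centroid locations of an average-referenced map."""
    v = potentials - potentials.mean()              # re-reference to average
    pos, neg = np.clip(v, 0, None), np.clip(-v, 0, None)
    c_pos = (coords * pos[:, None]).sum(axis=0) / pos.sum()
    c_neg = (coords * neg[:, None]).sum(axis=0) / neg.sum()
    return c_pos, c_neg

# Toy maps: one right-lateralized, one left-lateralized (its mirror image).
map_right = np.array([0.0, 1.0, 3.0] * 3)
map_left = map_right.reshape(3, 3)[:, ::-1].ravel()
for name, m in (("right-lateralized", map_right), ("left-lateralized", map_left)):
    c_pos, _ = map_centroids(m, coords)
    print(name, "positive centroid (x, y):", c_pos)
```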
Abstract:
Prompted reports of recall of spontaneous, conscious experiences were collected in a no-input, no-task, no-response paradigm (30 random prompts to each of 13 healthy volunteers). The mentation reports were classified into visual imagery and abstract thought. Spontaneous 19-channel brain electric activity (EEG) was continuously recorded, viewed as a series of momentary spatial distributions (maps) of the brain electric field, and segmented into microstates, i.e., time segments characterized by quasi-stable landscapes of the potential distribution maps, with varying durations in the sub-second range. Microstate segmentation used a data-driven strategy. Different microstates, i.e., different brain electric landscapes, must have been generated by the activity of different neural assemblies and are therefore hypothesized to constitute different functions. The two types of reported experiences were associated with significantly different microstates (mean duration 121 ms) immediately preceding the prompts; across subjects, these microstates showed, for abstract thought (compared to visual imagery), a shift of the electric gravity center to the left and a clockwise rotation of the field axis. In contrast, the microstates 2 s before the prompt did not differ between the two types of experiences. The results support the hypothesis that different microstates of the brain, as recognized in its electric field, implement different conscious, reportable mind states, i.e., different classes (types) of thoughts (mentations); thus, the microstates might be candidates for the 'atoms of thought'.
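A minimal sketch of the data-driven segmentation idea, assuming a simple polarity-invariant spatial-correlation criterion and synthetic data (the study's actual segmentation strategy is more elaborate):

```python
import numpy as np

def segment_microstates(maps, threshold=0.90):
    """Naive segmentation: start a new microstate whenever the spatial
    correlation between the current map and the running segment template
    drops below `threshold`. `maps` has shape (time, channels)."""
    # Average-reference and unit-norm each momentary map.
    m = maps - maps.mean(axis=1, keepdims=True)
    m = m / np.linalg.norm(m, axis=1, keepdims=True)
    segments, start, template = [], 0, m[0]
    for t in range(1, len(m)):
        # Polarity-invariant spatial correlation, as is usual for EEG maps.
        if abs(template @ m[t]) < threshold:
            segments.append((start, t))
            start, template = t, m[t]
        else:  # refine the running template of the current segment
            template = m[start:t + 1].mean(axis=0)
            template = template / np.linalg.norm(template)
    segments.append((start, len(m)))
    return segments

# Synthetic 19-channel data: three stable topographies plus noise.
rng = np.random.default_rng(0)
protos = rng.normal(size=(3, 19))
maps = np.vstack([p + 0.1 * rng.normal(size=(40, 19)) for p in protos])
print(segment_microstates(maps))  # expect boundaries near t=40 and t=80
```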
Abstract:
For smart city applications, a key requirement is to disseminate data collected from both scalar and multimedia wireless sensor networks to thousands of end-users. Furthermore, the information must be delivered to non-specialist users in a simple, intuitive and transparent manner. In this context, we present Sensor4Cities, a user-friendly tool that enables data dissemination to large audiences by using social networks and/or web pages. The user can request and receive monitored information through social networks, e.g., Twitter and Facebook, chosen for their popularity, user-friendly interfaces and easy dissemination. Additionally, the user can collect or share information from smart city services through web pages, which also include a mobile version for smartphones. Finally, the tool can be configured to periodically monitor environmental conditions, specific behaviors or abnormal events, and to notify users in an asynchronous manner. Sensor4Cities improves data delivery for individuals or groups of users of smart city applications and encourages the development of new user-friendly services.
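The abstract does not expose Sensor4Cities' interfaces, so the following sketch is purely illustrative: the `read_sensor` and `notify_user` stand-ins are hypothetical, and only convey the shape of the periodic-monitoring, asynchronous-notification loop described above.

```python
import random
import time

def read_sensor(sensor_id):
    """Hypothetical stand-in: poll a scalar sensor (e.g., temperature in C)."""
    return 20 + 10 * random.random()

def notify_user(channel, message):
    """Hypothetical stand-in for posting to a social network or web page."""
    print(f"[{channel}] {message}")

def monitor(sensor_id, threshold, interval_s=1.0, cycles=5):
    """Periodically check a sensor and notify users asynchronously when an
    abnormal reading is observed."""
    for _ in range(cycles):
        value = read_sensor(sensor_id)
        if value > threshold:
            notify_user("twitter", f"sensor {sensor_id}: abnormal reading {value:.1f}")
        time.sleep(interval_s)

monitor("temp-042", threshold=27.0, interval_s=0.1, cycles=3)
```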
Abstract:
State-based approaches to energy consumption modelling often assume a constant energy consumption value in each state. However, in certain situations the energy consumption is not constant and fluctuates during state transitions or even within a state. This paper discusses these issues by presenting examples of such cases from wireless sensor networks and wireless local area networks, together with possible solutions.
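A minimal sketch of the state-based model in question, with invented per-state power draws and per-transition energies, shows where the constant-consumption assumption enters and how transition costs can be added as a first refinement:

```python
# Illustrative numbers only, not measurements from the paper.
STATE_POWER_MW = {"sleep": 0.02, "idle": 1.5, "rx": 20.0, "tx": 25.0}
TRANSITION_ENERGY_MJ = {("sleep", "idle"): 0.5, ("idle", "tx"): 0.8,
                        ("tx", "idle"): 0.3, ("idle", "sleep"): 0.1}

def energy_mj(schedule):
    """Energy of a schedule of (state, duration_s) pairs: the usual
    constant-power term P_state * t per state, plus an explicit energy
    cost for each state transition."""
    total, prev = 0.0, None
    for state, duration in schedule:
        if prev is not None and prev != state:
            total += TRANSITION_ENERGY_MJ.get((prev, state), 0.0)
        total += STATE_POWER_MW[state] * duration  # mW x s = mJ
        prev = state
    return total

print(energy_mj([("sleep", 10), ("idle", 1), ("tx", 0.5), ("idle", 1), ("sleep", 10)]))
```

Fluctuation within a state would require replacing the constant `STATE_POWER_MW[state]` term with a time-varying power function, which is one of the refinements the paper motivates.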
Abstract:
One of the most important uses of manipulatives in a classroom is to help a learner make the connection from a tangible concrete object to its abstraction. In this paper we discuss how teacher educators can foster a deeper understanding of how manipulatives facilitate student learning of math concepts by emphasizing the connection between concrete objects and math symbolization with preservice elementary teachers, the future implementers of this knowledge. We provide an example and a model, with specific steps, of how teacher educators can effectively demonstrate connections between concrete objects and abstract math concepts.
Abstract:
The technique of Abstract Interpretation has allowed the development of very sophisticated global program analyses which are at the same time provably correct and practical. We present in a tutorial fashion a novel program development framework which uses abstract interpretation as a fundamental tool. The framework uses modular, incremental abstract interpretation to obtain information about the program. This information is used to validate programs, to detect bugs with respect to partial specifications written using assertions (in the program itself and/or in system libraries), to generate and simplify run-time tests, and to perform high-level program transformations such as multiple abstract specialization, parallelization, and resource usage control, all in a provably correct way. In the case of validation and debugging, the assertions can refer to a variety of program points such as procedure entry, procedure exit, points within procedures, or global computations. The system can reason with much richer information than, for example, traditional types. This includes data structure shape (including pointer sharing), bounds on data structure sizes, and other operational variable instantiation properties, as well as procedure-level properties such as determinacy, termination, nonfailure, and bounds on resource consumption (time or space cost). CiaoPP, the preprocessor of the Ciao multi-paradigm programming system, which implements the described functionality, will be used to illustrate the fundamental ideas.
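To make the core idea concrete, here is a minimal abstract interpreter over a sign domain for a toy expression language. It is only a sketch of abstract interpretation in general; nothing in it reflects CiaoPP's actual domains or implementation.

```python
# Abstract values of the sign domain; TOP means "sign unknown".
NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"

def abs_const(n):
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def abs_add(a, b):
    if ZERO in (a, b):
        return b if a == ZERO else a
    return a if a == b != TOP else TOP  # neg+neg=neg, pos+pos=pos, else top

def abs_mul(a, b):
    if ZERO in (a, b):
        return ZERO                      # anything times zero is zero
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG        # sign rule for nonzero factors

def analyze(expr, env):
    """Abstractly evaluate a tiny expression AST: ('const', n), ('var', x),
    ('add', e1, e2) or ('mul', e1, e2), given abstract values for variables."""
    op = expr[0]
    if op == "const":
        return abs_const(expr[1])
    if op == "var":
        return env[expr[1]]
    l, r = analyze(expr[1], env), analyze(expr[2], env)
    return abs_add(l, r) if op == "add" else abs_mul(l, r)

# x is known positive, y is unknown: x*x + 1 is provably positive.
env = {"x": POS, "y": TOP}
print(analyze(("add", ("mul", ("var", "x"), ("var", "x")), ("const", 1)), env))  # pos
print(analyze(("mul", ("var", "y"), ("const", 0)), env))                         # zero
```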
Abstract:
This article considers static analysis based on abstract interpretation of logic programs over combined domains. It is known that analyses over combined domains potentially provide more information than the independent analyses do. However, the construction of a combined analysis often requires redefining the basic operations for the combined domain. We illustrate a practical approach to maintaining precision in combined analyses of logic programs which reuses the individual analyses and does not redefine the basic operations. The advantages of the approach are that proofs of correctness for the new domains are not required and implementations can be reused. The approach is demonstrated by showing that a combined sharing analysis, constructed from "old" proposals, compares well with "new" proposals suggested in recent literature in terms of both efficiency and accuracy.
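The simplest instance of the reuse idea is a direct (componentwise) product of two existing domains: each component analysis keeps its own operations, and no new operation has to be defined or proved correct for the product. A toy sketch with parity and sign domains, as stand-ins for the article's sharing domains:

```python
# Domain 1: parity.  Domain 2: sign.  "T" is the top element of both.
def parity_add(a, b):
    return "T" if "T" in (a, b) else ("even" if a == b else "odd")

def sign_add(a, b):
    if "zero" in (a, b):
        return b if a == "zero" else a
    return a if a == b != "T" else "T"

def product_add(x, y):
    """Componentwise combination: the existing operations are reused
    unchanged on each component of the product domain."""
    (p1, s1), (p2, s2) = x, y
    return (parity_add(p1, p2), sign_add(s1, s2))

# Each component contributes information the other cannot express.
print(product_add(("even", "pos"), ("even", "pos")))  # ('even', 'pos')
print(product_add(("odd", "pos"), ("even", "T")))     # ('odd', 'T')
```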
Abstract:
We report on a detailed study of the application and effectiveness of program analysis based on abstract interpretation in automatic program parallelization. We study the case of parallelizing logic programs using the notion of strict independence. We first propose, and prove correct, a methodology for applying the information inferred by abstract interpretation to the parallelization task, using a parametric domain. The methodology is generic in the sense of allowing the use of different analysis domains. A number of well-known approximation domains are then studied, and the transformation into the parametric domain is defined. The transformation directly illustrates the relevance and applicability of each abstract domain to the application. Both local and global analyzers are then built using these domains and embedded in a complete parallelizing compiler. The performance of the domains in this context is then assessed through a number of experiments. A comparatively wide range of aspects is studied, from the resources needed by the analyzers in terms of time and memory to the actual benefits obtained from the information inferred. These benefits are evaluated both in terms of the characteristics of the parallelized code and of the actual speedups obtained from it. The results show that data-flow analysis plays an important role in achieving efficient parallelizations, and that the cost of such analysis can be reasonable even for quite sophisticated abstract domains. Furthermore, the results also offer significant insight into the characteristics of the domains, the demands of the application, and the trade-offs involved.
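A minimal sketch of how inferred sharing information supports a compile-time strict independence test (greatly simplified relative to the domains and compiler machinery studied in the paper):

```python
def strictly_independent(vars1, vars2, sharing):
    """Goals with variable sets vars1 and vars2 are strictly independent if
    no inferred sharing group contains variables from both sides: then they
    can bind no common run-time variable and may run in parallel."""
    for group in sharing:
        if group & vars1 and group & vars2:
            return False
    return True

# Sharing groups inferred by analysis: X may be aliased to Y; Z stands alone.
sharing = [{"X", "Y"}, {"Z"}]
print(strictly_independent({"X"}, {"Z"}, sharing))  # True: parallelize safely
print(strictly_independent({"X"}, {"Y"}, sharing))  # False: keep run-time check
```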
Abstract:
Program specialization optimizes programs for known values of the input. It is often the case that the set of possible input values is unknown, or this set is infinite. However, a form of specialization can still be performed in such cases by means of abstract interpretation, specialization then being with respect to abstract values (substitutions), rather than concrete ones. We study the multiple specialization of logic programs based on abstract interpretation. This involves, in principle, and based on information from global analysis, generating several versions of a program predicate for different uses of that predicate, optimizing these versions, and, finally, producing a new, "multiply specialized" program. While multiple specialization has received theoretical attention, little previous evidence exists on its practicality. In this paper we report on the incorporation of multiple specialization in a parallelizing compiler and quantify its effects. A novel approach to the design and implementation of the specialization system is proposed. The resulting implementation techniques produce identical specializations to those of the best previously proposed techniques but require little or no modification of some existing abstract interpreters. Our results show that, using the proposed techniques, the resulting "abstract multiple specialization" is indeed a relevant technique in practice. In particular, in the parallelizing compiler application, a good number of run-time tests are eliminated and invariants extracted automatically from loops, resulting generally in lower overheads and in several cases in increased speedups.
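A toy sketch of the version-generation idea, assuming a hypothetical procedure analyzed under two abstract call patterns; real abstract multiple specialization operates on logic programs and abstract substitutions rather than Python functions:

```python
def make_versions(call_patterns):
    """Create one specialized version per distinct abstract call pattern
    found by analysis; a pattern that proves a run-time test redundant
    lets that test be dropped from the corresponding version."""
    versions = {}
    for pattern in set(call_patterns):
        if pattern == "nonneg":          # sign test provably true: drop it
            versions[pattern] = lambda x: x ** 0.5
        else:                            # sign unknown: keep the guard
            versions[pattern] = lambda x: x ** 0.5 if x >= 0 else None
    return versions

# Analysis found two call patterns for the same procedure.
versions = make_versions(["nonneg", "unknown", "nonneg"])
print(versions["nonneg"](9.0))    # specialized version: no sign check executed
print(versions["unknown"](-4.0))  # generic version: guard retained, returns None
```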
Abstract:
Traditional schemes for abstract interpretation-based global analysis of logic programs generally focus on obtaining procedure argument mode and type information. Variable sharing information is often given only the attention needed to preserve the correctness of the analysis. However, such sharing information can be very useful. In particular, it can be used for predicting run-time goal independence, which can eliminate costly run-time checks in and-parallel execution. In this paper, a new algorithm for doing abstract interpretation in logic programs is described which concentrates on inferring the dependencies of the terms bound to program variables with increased precision and at all points in the execution of the program, rather than just at a procedure level. Algorithms are presented for computing abstract entry and success substitutions which extensively keep track of variable aliasing and term dependence information. In addition, a new, abstract domain-independent fixpoint algorithm is presented and described in detail. The algorithms are illustrated with examples. Finally, results from an implementation of the abstract interpreter are presented.
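For illustration, here is a simplified abstract unification step over the classic set-sharing domain (sets of sets of program variables), in the spirit of Jacobs and Langen; the extensive aliasing and term-dependence refinements that the paper's algorithms track are omitted.

```python
from itertools import combinations

def star_union(groups):
    """Closure of a set of sharing groups under pairwise union."""
    result = set(groups)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(result), 2):
            u = a | b
            if u not in result:
                result.add(u)
                changed = True
    return result

def amgu(sharing, x_vars, t_vars):
    """Abstract unification of x = t: groups touching neither side pass
    through unchanged; groups touching x combine with groups touching t."""
    rel_x = {g for g in sharing if g & x_vars}
    rel_t = {g for g in sharing if g & t_vars}
    irrelevant = {g for g in sharing if not (g & (x_vars | t_vars))}
    combined = {a | b for a in star_union(rel_x) for b in star_union(rel_t)}
    return irrelevant | combined

# Before X = Y, all three variables are unaliased; afterwards X and Y share.
sharing = {frozenset({"X"}), frozenset({"Y"}), frozenset({"Z"})}
after = amgu(sharing, {"X"}, {"Y"})
print(sorted(sorted(g) for g in after))  # [['X', 'Y'], ['Z']]
```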
Abstract:
Abstract machines provide a certain separation between platform-dependent and platform-independent concerns in compilation. Many of the differences between architectures are encapsulated in the specific abstract machine implementation, and the bytecode is left largely architecture-independent. Taking advantage of this fact, we present a framework for estimating upper and lower bounds on the execution times of logic programs running on a bytecode-based abstract machine. Our approach includes a one-time, program-independent profiling stage which calculates constants or functions bounding the execution time of each abstract machine instruction. Then, a compile-time cost estimation phase, using the instruction timing information, infers expressions giving platform-dependent upper and lower bounds on actual execution time as functions of input data sizes for each program. Working at the abstract machine level makes it possible to take into account low-level issues in new architectures and platforms by just re-executing the calibration stage, instead of having to tailor the analysis for each architecture and platform. Applications of such predicted execution times include debugging/verification of time properties, certification of time properties in mobile code, granularity control in parallel/distributed computing, and resource-oriented specialization.
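A toy sketch of the two-phase scheme, with an invented three-instruction machine: a one-time calibration table of per-instruction time bounds, and compile-time instruction-count functions of input size n (here for a hypothetical linear list-processing predicate) that combine into platform-dependent bounds.

```python
# Phase 1 (one-time, per platform): calibrated lower/upper bounds, in
# nanoseconds, on the execution time of each abstract machine instruction.
# Instruction names and timings are invented for illustration.
INSTR_TIME_NS = {"call": (12, 20), "unify": (8, 15), "proceed": (5, 9)}

# Phase 2 (compile time, per program): analysis yields, for each
# instruction, a function of input size n bounding how often it executes.
INSTR_COUNT = {"call": lambda n: n + 1,
               "unify": lambda n: 2 * n,
               "proceed": lambda n: n + 1}

def time_bounds_ns(n):
    """Platform-dependent lower/upper execution time bounds for size n."""
    lo = sum(INSTR_TIME_NS[i][0] * INSTR_COUNT[i](n) for i in INSTR_TIME_NS)
    hi = sum(INSTR_TIME_NS[i][1] * INSTR_COUNT[i](n) for i in INSTR_TIME_NS)
    return lo, hi

for n in (10, 100):
    lo, hi = time_bounds_ns(n)
    print(f"n={n}: between {lo} and {hi} ns on this (hypothetical) platform")
```

Porting to a new platform only means rebuilding `INSTR_TIME_NS` by re-running the calibration; the count functions, being platform-independent, are reused as-is.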