978 results for "Descriptive and normative in logic"
Abstract:
This paper sets out to report on findings about features of task-specific reformulation observed in university students in the middle stretch of the Psychology degree course (N=58) and in a reference group of students from the degree courses in Modern Languages, Spanish and Library Studies (N=33) from the National University of La Plata (Argentina). Three types of reformulation were modeled: summary, comprehensive, and productive reformulation. The study was based on a corpus of 621 reformulations rendered from different kinds of text. The versions obtained were categorised according to the following criterion: presence or absence of normative, morphosyntactic and semantic difficulties. Findings show that problems arise particularly with paraphrase and summary writing. Observation showed difficulties concerning punctuation, text cohesion and coherence, and semantic distortion or omission as regards extracting and/or substituting gist, with limited lexical resources and confusion as to suitability of style/register in writing. The findings in this study match those of earlier, more comprehensive research on the issue and report on problems experienced by a significant number of university students when interacting with both academic texts and others of a general nature. Moreover, they led to questions, on the one hand, as to the nature of such difficulties, which appear to be production-related problems and indirectly account for inadequate text comprehension, and on the other hand, as to the features of university tuition when it comes to text handling.
Abstract:
Several types of parallelism can be exploited in logic programs while preserving correctness and efficiency, i.e. ensuring that the parallel execution obtains the same results as the sequential one and the amount of work performed is not greater. However, such results do not take into account a number of overheads which appear in practice, such as process creation and scheduling, which can induce a slow-down, or, at least, limit speedup, if they are not controlled in some way. This paper describes a methodology whereby the granularity of parallel tasks, i.e. the work available under them, is efficiently estimated and used to limit parallelism so that the effect of such overheads is controlled. The run-time overhead associated with the approach is usually quite small, since as much work as possible is done at compile time. Also, a number of run-time optimizations are proposed. Moreover, a static analysis of the overhead associated with the granularity control process is performed in order to decide its convenience. The performance improvements resulting from the incorporation of grain size control are shown to be quite good, especially for systems with medium to large parallel execution overheads.
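The granularity-control idea described in this abstract can be sketched as follows. This is a minimal Python illustration, not the paper's actual system: the cost function, the overhead constant, and the `annotate` helper are all invented for the example. The point is that the expensive cost derivation happens ahead of time, leaving only a cheap threshold test at run time.

```python
# Sketch of granularity control: a goal is only spawned as a parallel
# task when its estimated available work exceeds the overhead of task
# creation and scheduling, so overheads cannot cause a slow-down.

SPAWN_OVERHEAD = 100  # assumed fixed cost of process creation + scheduling

def fib_cost(n):
    # Lower bound on the number of calls performed by a naive Fibonacci:
    # cost(n) >= 2**(n//2).  A compiler would derive such a closed form
    # statically; only this cheap evaluation remains at run time.
    return 2 ** (n // 2)

def annotate(goals):
    """Decide, per goal, whether to execute it in parallel.

    `goals` is a list of (name, estimated_cost) pairs; goals whose
    available work does not exceed the spawning overhead stay
    sequential."""
    return [(name, "parallel" if cost > SPAWN_OVERHEAD else "sequential")
            for name, cost in goals]

goals = [("fib(30)", fib_cost(30)), ("fib(8)", fib_cost(8))]
print(annotate(goals))
# fib(30) carries ample work and is spawned; fib(8) stays sequential.
```

A real system would also account for the cost of the test itself, which is why the abstract mentions a static analysis of the granularity control overhead to decide when the test is worth performing at all.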
Abstract:
We report on a detailed study of the application and effectiveness of program analysis based on abstract interpretation to automatic program parallelization. We study the case of parallelizing logic programs using the notion of strict independence. We first propose and prove correct a methodology for the application in the parallelization task of the information inferred by abstract interpretation, using a parametric domain. The methodology is generic in the sense of allowing the use of different analysis domains. A number of well-known approximation domains are then studied and the transformation into the parametric domain defined. The transformation directly illustrates the relevance and applicability of each abstract domain for the application. Both local and global analyzers are then built using these domains and embedded in a complete parallelizing compiler. Then, the performance of the domains in this context is assessed through a number of experiments. A comparatively wide range of aspects is studied, from the resources needed by the analyzers in terms of time and memory to the actual benefits obtained from the information inferred. Such benefits are evaluated both in terms of the characteristics of the parallelized code and of the actual speedups obtained from it. The results show that data flow analysis plays an important role in achieving efficient parallelizations, and that the cost of such analysis can be reasonable even for quite sophisticated abstract domains. Furthermore, the results also offer significant insight into the characteristics of the domains, the demands of the application, and the trade-offs involved.
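The notion of strict independence used in this study can be illustrated with a toy model. This is a simplified sketch, not the paper's parametric domain: goals are plain name/argument tuples, and the groundness information that would come from abstract interpretation is supplied as a set.

```python
# Sketch of the strict-independence test for and-parallelism: two goals
# may run in parallel when they share no possibly-unbound variable,
# where "possibly unbound" is refined by groundness info from analysis.

def unbound_vars(goal, ground_vars):
    """Variables of a goal that analysis could not prove ground."""
    _, args = goal
    return {a for a in args if a not in ground_vars}

def strictly_independent(g1, g2, ground_vars):
    """True iff the goals' possibly-unbound variables are disjoint, so
    parallel execution yields the same results with no extra work."""
    return not (unbound_vars(g1, ground_vars) & unbound_vars(g2, ground_vars))

# p(X,Y) and q(X,Z) share X.  If abstract interpretation proves X
# ground at this program point, the goals become strictly independent.
p = ("p", ["X", "Y"])
q = ("q", ["X", "Z"])
print(strictly_independent(p, q, ground_vars={"X"}))   # True
print(strictly_independent(p, q, ground_vars=set()))   # False
```

This also shows why the quality of the analysis domain matters, as the abstract argues: without the groundness information, the parallelizer must conservatively keep the goals sequential.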
Abstract:
Although studies of a number of parallel implementations of logic programming languages are now available, their results are difficult to interpret due to the multiplicity of factors involved, the effect of each of which is difficult to separate. In this paper we present the results of a high-level simulation study of or- and independent and-parallelism with a wide selection of Prolog programs that aims to determine the intrinsic amount of parallelism, independently of implementation factors, thus facilitating this separation. We expect this study will be instrumental in better understanding and comparing results from actual implementations, as shown by some examples provided in the paper. In addition, the paper examines some of the issues and tradeoffs associated with the combination of and- and or-parallelism and proposes reasonable solutions based on the simulation data obtained.
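The "intrinsic amount of parallelism" such a simulation measures can be modeled as ideal speedup: total work divided by the length of the critical path through the task graph, ignoring all implementation overheads. The graph and costs below are invented for illustration.

```python
# Ideal-speedup model of intrinsic parallelism: with unbounded
# processors and zero overhead, execution time equals the cost of the
# longest dependency chain (the critical path).

def critical_path(graph, costs, node):
    """Largest total cost along any dependency chain from `node`."""
    deps = graph.get(node, [])
    sub = max((critical_path(graph, costs, d) for d in deps), default=0)
    return costs[node] + sub

def ideal_speedup(graph, costs, root):
    """Total work over critical-path length."""
    return sum(costs.values()) / critical_path(graph, costs, root)

# A root goal of cost 2 spawns two independent subgoals of cost 4 each:
# total work 10, critical path 2 + 4 = 6.
g = {"root": ["a", "b"], "a": [], "b": []}
c = {"root": 2, "a": 4, "b": 4}
print(ideal_speedup(g, c, "root"))  # 10 over 6
```

Comparing this upper bound against measured speedups is one way to separate intrinsic parallelism from implementation factors, which is the separation the paper is after.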
Abstract:
This paper proposes a diagnosis algorithm for locating a certain kind of error in logic programs: variable binding errors that result in abstract symptoms during compile-time checking of assertions based on abstract interpretation. The diagnoser analyzes the graph generated by the abstract interpreter, which is a provably safe approximation of the program semantics. The proposed algorithm traverses this graph to find the point where the actual error originates (a reason for the symptom), starting from the point where the error was reported (the symptom). The procedure is fully automatic, not requiring any interaction with the user. A prototype diagnoser has been implemented and preliminary results are encouraging.
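The backward traversal described here can be sketched schematically. The data structures are invented for illustration: the analysis graph is a map from each node to its predecessors, and `erroneous` stands for the set of nodes whose abstract description violates the checked assertion.

```python
# Schematic diagnosis traversal: walk the analysis graph backwards from
# the symptom until reaching a node none of whose predecessors is
# erroneous -- that node is a candidate origin of the binding error.

def find_origin(graph, erroneous, symptom):
    """graph maps node -> list of predecessor nodes in the abstract
    interpreter's graph; returns the first node on the backward path
    with no erroneous predecessor."""
    node = symptom
    while True:
        bad_preds = [p for p in graph.get(node, []) if p in erroneous]
        if not bad_preds:
            return node  # the error is introduced at this node
        node = bad_preds[0]  # follow the error one step back

# The symptom at "sym" is traced back through "n2"; since n2's only
# predecessor n1 is error-free, n2 is reported as the origin.
g = {"sym": ["n2"], "n2": ["n1"], "n1": ["entry"]}
print(find_origin(g, erroneous={"sym", "n2"}, symptom="sym"))
```

Because the graph is a safe approximation of the program semantics, a node reported this way is guaranteed to cover the actual origin, which is what makes the fully automatic procedure sound.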
Abstract:
Predicting statically the running time of programs has many applications ranging from task scheduling in parallel execution to proving the ability of a program to meet strict time constraints. A starting point in order to attack this problem is to infer the computational complexity of such programs (or fragments thereof). This is one of the reasons why the development of static analysis techniques for inferring cost-related properties of programs (usually upper and/or lower bounds of actual costs) has received considerable attention.
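The cost relations such analyses infer can be illustrated with naive list reversal. The cost units and constants below are illustrative only: suppose each call costs one step plus a traversal of the remaining list, giving the recurrence cost(0) = 1, cost(n) = cost(n-1) + n + 1, which a cost analyzer would solve into a closed-form upper bound.

```python
# Cross-check of a cost recurrence against its closed-form solution,
# the kind of result a static cost analysis would produce.

def cost_recurrence(n):
    """Direct evaluation of cost(0) = 1, cost(n) = cost(n-1) + n + 1."""
    return 1 if n == 0 else cost_recurrence(n - 1) + n + 1

def cost_upper_bound(n):
    """Closed form of the recurrence: n*(n+3)/2 + 1."""
    return n * (n + 3) // 2 + 1

# The closed form matches the recurrence exactly for small inputs.
print(all(cost_upper_bound(n) == cost_recurrence(n) for n in range(20)))
```

Closed forms like this are what make cost information usable for the applications the abstract mentions: a scheduler or a real-time checker can evaluate `cost_upper_bound(n)` in constant time instead of simulating the program.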
Abstract:
Abstract is not available.
Abstract:
We study the multiple specialization of logic programs based on abstract interpretation. This involves in general generating several versions of a program predicate for different uses of such predicate, making use of information obtained from global analysis performed by an abstract interpreter, and finally producing a new, "multiply specialized" program. While the topic of multiple specialization of logic programs has received considerable theoretical attention, it has never been actually incorporated in a compiler and its effects quantified. We perform such a study in the context of a parallelizing compiler and show that it is indeed a relevant technique in practice. Also, we propose an implementation technique which has the same power as the strongest of the previously proposed techniques but requires little or no modification of an existing abstract interpreter.
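The core step of multiple specialization can be rendered as a toy example. The call patterns and naming scheme below are invented for illustration: global analysis yields the set of abstract call patterns reaching a predicate, and one specialized version is generated per distinct pattern.

```python
# Toy multiple specialization: one version of a predicate per distinct
# abstract call pattern, so each call site can be linked to the version
# optimized for the bindings it is known to supply.

def specialize(pred, call_patterns):
    """Map each distinct abstract call pattern (e.g. groundness of each
    argument) to a specialized version name for predicate `pred`."""
    versions = {}
    for pattern in call_patterns:
        versions.setdefault(pattern, f"{pred}_{'_'.join(pattern)}")
    return versions

# Analysis finds p/2 called either with both arguments ground or with a
# free second argument: two specialized versions are produced, and the
# repeated pattern maps to the same version rather than a third copy.
print(specialize("p", [("ground", "ground"), ("ground", "free"),
                       ("ground", "ground")]))
```

A real implementation must also decide which versions are worth keeping (e.g. which ones enable distinct optimizations such as parallelization) and collapse the rest, which is where the minimal-modification technique the abstract proposes comes in.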
Abstract:
Andorra-I is the first implementation of a language based on the Andorra Principle, which states that determinate goals can (and should) be run before other goals, and even in a parallel fashion. This principle has materialized in a framework called the Basic Andorra model, which allows or-parallelism as well as (dependent) and-parallelism for determinate goals. In this report we show that it is possible to further extend this model in order to allow general independent and-parallelism for nondeterminate goals, without greatly modifying the underlying implementation machinery. A simple and easy way to realize such an extension is to make each (nondeterminate) independent goal determinate, by using a special "bagof" construct. We also show that this can be achieved automatically by compile-time translation from original Prolog programs. A transformation that fulfils this objective and which can be easily automated is presented in this report.
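The "bagof" idea can be mimicked in Python as a loose analogy (this is not the report's Prolog transformation): a nondeterminate goal, modeled as a generator with several solutions, becomes determinate by collecting all its answers into a single bag, and a goal with exactly one answer is what the Basic Andorra model is allowed to run early and in parallel.

```python
# Analogy for the bagof transformation: wrapping a multi-solution goal
# so that it yields exactly one (deterministic) answer -- the bag of
# all its solutions.

def member(xs):
    """Nondeterminate goal: one solution per element of xs."""
    yield from xs

def bagof(goal):
    """Make a nondeterminate goal determinate by collecting every
    solution into a single answer."""
    return list(goal)

# member/1 has three solutions; bagof(member(...)) has exactly one.
print(bagof(member([1, 2, 3])))  # [1, 2, 3]
```

In the actual system this wrapping is performed by a compile-time source-to-source translation of the Prolog program, as the abstract states, rather than at run time.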