181 results for Prolog


Relevance:

10.00%

Abstract:

Vol. 2: Supplement (Tillægshefte) to the Christiania Videnskabs-Selskabs Forhandlinger for 1875... In commission with J. Dybwad.

Relevance:

10.00%

Abstract:

Notes on "The stacycons of Rome", by W.M. Rossetti. -- Political poems, etc. -- Love poems, etc. -- Religious poems from ms. Harl. 7322. An A B C poem on the passion of Christ. The Fifty-first psalm. -- Additions: Verse prolog and epilog to a book on medicine, A.B. 1440. A prentise unto woe, by Henry Baradoun, A.B. 1483. Hymn to the Virgin, by William Huchen, A.B. 1460. 'Peare of Provence and the fair Maguelone', a fragment. The Knight Amoryus and the Lady Cleopes ... by John Metham.

Relevance:

10.00%

Abstract:

An implementation of a Lexical Functional Grammar (LFG) natural-language front-end to a database is presented, and its capabilities are demonstrated by reference to a set of queries used in the Chat-80 system. The potential of LFG for such applications is explored. Other grammars previously used for this purpose are briefly reviewed and contrasted with LFG. The basic LFG formalism is fully described, in both its syntax and its semantics, and the deficiencies of the latter for database-access applications are shown. Other current LFG implementations are reviewed and contrasted with the implementation developed here specifically for database access. The implementation described here allows a natural-language interface to a specific Prolog database to be produced from a set of grammar-rule and lexical specifications in an LFG-like notation. In addition, the interface system uses a simple database description to compile metadata about the database for later use in planning the execution of queries. Extensions to LFG's semantic component are shown to be necessary to produce a satisfactory functional analysis and semantic output for querying a database. A diverse set of natural-language constructs is analysed using LFG, and the derivation of Prolog queries from the F-structure output of LFG is illustrated. The functional description produced by LFG is proposed as sufficient for resolving many problems of quantification and attachment.
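The query-derivation step described above can be sketched in miniature. The following is an illustrative assumption, not the thesis's implementation: it maps a flat, dictionary-encoded F-structure to a Prolog goal string, with the predicate and role names invented for the example (the geography flavour nods to Chat-80).

```python
# Hypothetical sketch: mapping a simplified, flattened F-structure to a
# Prolog-style query string. Predicate and role names are illustrative only.

def fstructure_to_query(fstruct):
    """Render a PRED with its SUBJ/OBJ roles as a Prolog goal string."""
    pred = fstruct["PRED"]
    args = [fstruct[role] for role in ("SUBJ", "OBJ") if role in fstruct]
    return f"{pred}({', '.join(args)})."

# "Which country borders France?" might analyse to:
f = {"PRED": "borders", "SUBJ": "X", "OBJ": "france"}
print(fstructure_to_query(f))  # borders(X, france).
```

A real F-structure is nested and carries tense, number and quantifier information; this sketch only shows the final flattening into a goal.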

Relevance:

10.00%

Abstract:

The thesis describes the work carried out to develop a prototype knowledge-based system, 'KBS-SETUPP', to generate process plans for the manufacture of seamless tubes. The work relates specifically to a plant in which hollows are made from solid billets using a rotary piercing process and then reduced to the required size and finished properties using the fixed-plug cold-drawing process. The thesis first discusses various methods of tube production to give a general background to tube manufacture. A review of the automation of the process-planning function is then presented in terms of its basic sub-tasks and techniques, and the suitability of a knowledge-based system is established. In the light of this review and a case study, the process-planning problem is formulated in the domain of seamless tube manufacture, its basic sub-tasks are identified, and the capabilities and constraints of the available equipment in the specific plant are established. The task of collecting and collating process-planning knowledge in seamless tube manufacture is discussed; it is fulfilled mostly from domain experts, analysis of existing manufacturing records specific to the plant, textbooks and applicable standards. For the cold-drawing mill, tube-drawing schedules have been rationalised to correspond with practice. These schedules have been validated by computing the process parameters and comparing them with the drawbench capacity to avoid overloading. The existing models could not be simulated in the computer program as such; a mathematical model has therefore been proposed whose estimates of the process parameters are in close agreement with experimental values established by other researchers. To implement these concepts, the knowledge-based system 'KBS-SETUPP' has been developed on a personal computer using Turbo-Prolog. The system is capable of generating process plans and production schedules, and offers some additional capabilities to supplement process planning.
The system-generated process plans have been compared with the company's actual plans, and it has been shown that the results are satisfactory and encouraging and that the system's capabilities are useful.
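The overload check described above can be illustrated with a toy calculation. The force formula below is a deliberately crude placeholder (the thesis proposes its own mathematical model); only the compare-against-capacity logic is the point.

```python
# Toy sketch of validating a drawing pass against drawbench capacity.
# The force estimate is a placeholder formula, not the thesis's model.

def draw_force_kn(flow_stress_mpa, area_reduction_mm2, efficiency=0.6):
    """Crude draw-force estimate: flow stress x area reduction / efficiency."""
    return flow_stress_mpa * area_reduction_mm2 / efficiency / 1000.0

def pass_ok(force_kn, bench_capacity_kn, derating=0.8):
    """Accept a pass only if it stays within a derated bench capacity."""
    return force_kn <= bench_capacity_kn * derating

force = draw_force_kn(300.0, 200.0)   # 100.0 kN
print(pass_ok(force, 150.0))          # True: 100 <= 120
```

The efficiency and derating factors are invented illustration values; in practice they would come from the rationalised schedules and plant data.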

Relevance:

10.00%

Abstract:

This thesis describes work exploring the application of expert-system techniques to the domain of designing durable concrete. The nature of concrete durability design is described and some problems from the domain are discussed. Related work on expert systems in concrete durability is described. Various implementation languages (PROLOG and OPS5) are considered and rejected in favour of a shell, CRYSTAL3 (later CRYSTAL4). Criteria for useful expert-system shells in the domain are discussed, and CRYSTAL4 is evaluated in the light of these criteria. Modules in several sub-domains (mix design, sulphate attack, steel corrosion and alkali-aggregate reaction) are developed and organised under a BLACKBOARD system (called DEX). Extensions to the CRYSTAL4 modules are considered for different knowledge representations. These include LOTUS123 spreadsheets implementing models that incorporate some of the mathematical knowledge in the domain. Design databases are used to represent tabular design knowledge. Hypertext representations of the original building-standards texts are proposed as a tool for providing a well-structured and extensive justification/help facility. A standardised approach to module development is proposed, using hypertext development as a structured basis for expert-systems development. Some areas of deficient domain knowledge are highlighted, particularly in the use of data from mathematical models and in gaps and inconsistencies in the original knowledge source, the Digests.
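The blackboard organisation can be sketched as a shared fact store over which sub-domain modules fire when their trigger conditions hold. This is a minimal illustration with invented facts and a plain control loop, not DEX or CRYSTAL4 itself.

```python
# Minimal blackboard sketch: modules watch a shared fact store and fire once
# when their trigger holds. Module and fact names are invented examples.

class Blackboard:
    def __init__(self):
        self.facts = {}
        self.modules = []   # (name, trigger, action) triples
        self.fired = set()

    def register(self, name, trigger, action):
        self.modules.append((name, trigger, action))

    def run(self):
        changed = True
        while changed:          # loop until no module can fire
            changed = False
            for name, trigger, action in self.modules:
                if name not in self.fired and trigger(self.facts):
                    action(self.facts)
                    self.fired.add(name)
                    changed = True

bb = Blackboard()
bb.facts["exposure"] = "sulphate"
bb.register("sulphate_attack",
            lambda f: f.get("exposure") == "sulphate",
            lambda f: f.update(cement="sulphate-resisting"))
bb.run()
print(bb.facts["cement"])  # sulphate-resisting
```

Each sub-domain module (mix design, sulphate attack, and so on) would contribute facts that later modules read, which is the coordination the blackboard provides.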

Relevance:

10.00%

Abstract:

This study was concerned with the computer automation of land evaluation. This is a broad subject with many issues to be resolved, so the study concentrated on three key problems: knowledge-based programming; the integration of spatial information from remote sensing and other sources; and the inclusion of socio-economic information in the land-evaluation analysis. Land evaluation and land-use planning were considered in the context of overseas projects in the developing world. Knowledge-based systems were found to provide significant advantages over conventional programming techniques for some aspects of the land-evaluation process. Declarative languages, in particular Prolog, were ideally suited to the integration of social information, which changes with every situation. Rule-based expert-system shells were also found to be suitable for this role, including knowledge acquisition at the interview stage. All the expert-system shells examined suffered from severe constraints on problem size, but new products now overcome this. Inductive expert-system shells were useful as a guide to knowledge gaps and possible relationships, but the number of examples required was unrealistic for typical land-use planning situations. The accuracy of classified satellite imagery was significantly enhanced by integrating spatial information on soil distribution for the Thailand data. Estimates of the rice-producing area were substantially improved (a 30% change in area) by the addition of soil information. Image-processing work on Mozambique showed that satellite remote sensing is a useful tool for stratifying vegetation cover at provincial level to identify key development areas, but its full utility could not be realised on typical planning projects without treatment as part of a complete spatial information system.

Relevance:

10.00%

Abstract:

The primary objective of this research was to understand what kinds of knowledge and skills people use in 'extracting' relevant information from text, and to assess the extent to which expert-systems techniques could be applied to automate the process of abstracting. The approach adopted in this thesis is based on research in cognitive science, information science, psycholinguistics and text linguistics. The study addressed the significance of domain knowledge and heuristic rules by developing an information-extraction system called INFORMEX. This system, implemented partly in SPITBOL and partly in PROLOG, used a set of heuristic rules to analyse five scientific papers of expository type, to interpret their content in relation to the key abstract elements, and to extract a set of sentences recognised as relevant for abstracting purposes. Analysis of these extracts revealed that an adequate abstract could be generated. Furthermore, INFORMEX showed that a rule-based system is a suitable computational model for representing experts' knowledge and strategies. This computational technique provided the basis for a new approach to the modelling of cognition: it showed how experts tackle the task of abstracting by integrating formal knowledge with experiential learning. The thesis demonstrated that empirical and theoretical knowledge can be effectively combined in expert-systems technology to provide a valuable starting approach to automatic abstracting.
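The heuristic, rule-based sentence selection can be caricatured in a few lines. The cue phrases and weights below are invented for illustration; INFORMEX's actual rules are far richer and operate on domain knowledge as well as surface cues.

```python
# Toy cue-phrase extractor in the spirit of heuristic sentence selection.
# The cue list and weights are invented, not INFORMEX's rules.

CUES = {"we conclude": 3, "results show": 3, "in this paper": 2, "for example": -2}

def score(sentence):
    """Sum the weights of every cue phrase found in the sentence."""
    s = sentence.lower()
    return sum(w for cue, w in CUES.items() if cue in s)

def extract(sentences, threshold=2):
    """Keep sentences whose cue score reaches the threshold."""
    return [s for s in sentences if score(s) >= threshold]

doc = ["In this paper we study tube drawing.",
       "The weather in the lab was pleasant.",
       "Results show a marked improvement."]
print(extract(doc))
```

The surviving sentences would then be assembled, in document order, into a candidate extract-style abstract.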

Relevance:

10.00%

Abstract:

The research described here concerns the development of metrics and models to support the development of hybrid (conventional/knowledge based) integrated systems. The thesis argues from the point that, although it is well known that estimating the cost, duration and quality of information systems is a difficult task, it is far from clear what sorts of tools and techniques would adequately support a project manager in the estimation of these properties. A literature review shows that metrics (measurements) and estimating tools have been developed for conventional systems since the 1960s while there has been very little research on metrics for knowledge based systems (KBSs). Furthermore, although there are a number of theoretical problems with many of the `classic' metrics developed for conventional systems, it also appears that the tools which such metrics can be used to develop are not widely used by project managers. A survey was carried out of large UK companies which confirmed this continuing state of affairs. Before any useful tools could be developed, therefore, it was important to find out why project managers were not using these tools already. By characterising those companies that use software cost estimating (SCE) tools against those which could but do not, it was possible to recognise the involvement of the client/customer in the process of estimation. Pursuing this point, a model of the early estimating and planning stages (the EEPS model) was developed to test exactly where estimating takes place. The EEPS model suggests that estimating could take place either before a fully-developed plan has been produced, or while this plan is being produced. If it were the former, then SCE tools would be particularly useful since there is very little other data available from which to produce an estimate. 
A second survey, however, indicated that project managers see estimating as essentially the latter, at which point project-management tools are available to support the process. It would seem, therefore, that SCE tools are not being used because project-management tools are being used instead. The issue here is not the method of developing an estimating model or tool, but the way in which "an estimate" is intimately tied to an understanding of what tasks are being planned. Current SCE tools are perceived by project managers as targeting the wrong point of estimation. A model (called TABATHA) is then presented which describes how an estimating tool based on an analysis of tasks would fit into the planning stage. The issue of whether metrics can be usefully developed for hybrid systems (which also contain KBS components) is tested by extending a number of "classic" program size and structure metrics to a KBS language, Prolog. Measurements of lines of code, Halstead's operators/operands, McCabe's cyclomatic complexity, Henry and Kafura's data-flow fan-in/out and post-release reported errors were taken for a set of 80 commercially developed LPA Prolog programs. By re-defining the metric counts for Prolog, it was found that estimates of program size and error-proneness comparable to the best conventional studies are possible. This suggests that metrics can be usefully applied to KBS languages such as Prolog, and thus that the development of metrics and models to support the development of hybrid information systems is both feasible and useful.
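The idea of re-defining size counts for Prolog can be sketched with deliberately crude counting rules; these simplifications are illustrative and are not the thesis's exact metric definitions.

```python
# Rough sketch of Prolog size metrics: lines of code, a crude clause count,
# and operand-like atom tokens. Counting rules are simplified illustrations.

import re

def prolog_metrics(source):
    # Non-blank lines that are not % comments count as lines of code.
    lines = [l for l in source.splitlines()
             if l.strip() and not l.strip().startswith("%")]
    clauses = source.count(".")               # crude clause count
    atoms = re.findall(r"[a-z]\w*", source)   # lowercase-initial tokens
    return {"loc": len(lines), "clauses": clauses, "atoms": len(atoms)}

prog = """% family database
parent(tom, bob).
parent(bob, ann).
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
"""
print(prolog_metrics(prog))
```

A serious re-definition would, for instance, distinguish functors from arguments (Halstead operators versus operands) and count clause bodies rather than full stops, which is exactly the kind of adaptation the study describes.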

Relevance:

10.00%

Abstract:

Presenting a reading of prefaces written by Luís da Câmara Cascudo for literary and non-literary books of the 20th century (1921-1984) is the goal of this thesis. The word is considered in its meaning: "Latin praefatio, the action of speaking at the commencement. Synonym for 'prologue', in the sense of a text that precedes or introduces a work" (MOISÉS, 1999, p. 416). In this research, a preface is understood as a text written and published with the intent to provide information that facilitates the reading and/or understanding of the work to which it refers, regardless of whether it is set in the initial pages, where it may be named 'prologue', 'letter to the reader', 'proem', 'prelude', 'preamble', 'foreword', 'summary', etc., or appears only in the last pages of the book, where it is named 'afterword'. This is qualitative research with a bibliographic and interpretive character; the analysis of the texts employs the inductive method and focuses on the depth of understanding that the researcher has of the researched object. For the study of this genre we turn to Sales (2003), Teles (1986; 1989; 2010), Clemente (1986) and Candido (2005); for the notion of tradition, we turn to Eliot (1997) and Candido (1997; 1980). The set of prefaces is rich material for research that will allow scholars of Brazilian culture to continue the work started by Luís da Câmara Cascudo in 1921, when he began his career as a prefacer.

Relevance:

10.00%

Abstract:

The term pervasive computing embodies the idea of moving beyond the personal-computer paradigm: the idea that any device can be equipped with technology and interconnected within a distributed network, constituting a new model of human-machine interaction. Within this paradigm, the concept of context-awareness plays a fundamental role: the idea that computers can collect data from the surrounding environment and react to it intelligently and proactively. Such a system needs, on the one hand, an infrastructure for collecting data from the environment and, on the other, support for the intelligent, reactive component. In this scenario, this thesis aims to design and implement a library for interfacing a Java-based distributed sensor system with the tuProlog interpreter, a lightweight, configurable Prolog system, itself written in Java but available for a variety of platforms, so as to lay the groundwork for building context-aware systems in this environment.
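The bridging idea, feeding environmental data to the Prolog side as facts a rule base can reason over, can be sketched independently of tuProlog's actual API. The sensor_reading/3 fact format below is an assumption made for illustration, and the sketch is in Python rather than Java.

```python
# Sketch: render sensor readings as Prolog facts for a context-aware rule
# base to consume. The sensor_reading/3 format is an illustrative assumption,
# not tuProlog's API.

def to_prolog_fact(sensor_id, quantity, value):
    return f"sensor_reading({sensor_id}, {quantity}, {value})."

readings = [("s1", "temperature", 21.5), ("s2", "light", 300)]
theory = "\n".join(to_prolog_fact(*r) for r in readings)
print(theory)
```

In the architecture described above, a theory built this way would be loaded into the interpreter, and Prolog rules over sensor_reading/3 would implement the reactive, context-aware behaviour.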

Relevance:

10.00%

Abstract:

The authors wish to acknowledge the generous financial support provided in association with this volume to the Geological Society and the Petroleum Group by Badley Geoscience Ltd, BP, CGG Robertson, Dana Petroleum Ltd, Getech Group plc, Maersk Oil North Sea UK Ltd, Midland Valley Exploration Ltd, Rock Deformation Research (Schlumberger) and Borehole Image & Core Specialists (Wildcat Geoscience, Walker Geoscience and Prolog Geoscience). We would like to thank the fine team at the Geological Society’s Publishing House for the excellent support and encouragement that they have provided to the editors and authors of this Special Publication.

Relevance:

10.00%

Abstract:

We present the NumbersWithNames program, which performs data-mining on the Encyclopedia of Integer Sequences to find interesting conjectures in number theory. The program forms conjectures by finding empirical relationships between a sequence chosen by the user and those in the Encyclopedia. Furthermore, it transforms the chosen sequence into another set of sequences about which conjectures can also be formed. Finally, the program prunes and sorts the conjectures so that the most plausible ones are presented first. We describe here the many improvements to the previous Prolog implementation which have enabled us to provide NumbersWithNames as an online program. We also present some new results from using NumbersWithNames, including details of an automated proof plan of a conjecture NumbersWithNames helped to discover.
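The relationship-finding step can be illustrated with a toy miner. The "encyclopedia" entries below are invented stand-ins for OEIS sequences, and prefix matching plus a first-differences transform stand in for the program's much richer set of relations and transformations.

```python
# Toy conjecture miner: test a chosen sequence, and a simple transform of it,
# for prefix relationships against a tiny invented "encyclopedia".

def is_prefix(seq, other):
    return other[:len(seq)] == seq

def differences(seq):
    """First differences: a transform producing a new sequence to test."""
    return [b - a for a, b in zip(seq, seq[1:])]

def conjectures(chosen, encyclopedia):
    found = []
    for name, seq in encyclopedia.items():
        if is_prefix(chosen, seq):
            found.append(f"{name} begins with the chosen sequence")
        if is_prefix(differences(chosen), seq):
            found.append(f"first differences of the chosen sequence begin {name}")
    return found

enc = {"the squares": [1, 4, 9, 16, 25],
       "the odd numbers from 3": [3, 5, 7, 9]}
print(conjectures([1, 4, 9], enc))
```

The real program would then prune trivially true statements and rank the survivors by plausibility before presenting them.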

Relevance:

10.00%

Abstract:

Artificial intelligence (AI) finds in games a very broad field of application, in which a wide range of techniques can be tested and stimulating new challenges can be proposed that push participants to explore new horizons in AI applications. The Keke AI Competition is one such challenge, introducing a contest between intelligent agents for the game Baba is You, a puzzle game in which players can create rules that change the game mechanics temporarily or permanently. The dynamic nature of these rules poses a challenge for artificial intelligence, which must adapt to a variety of combinations of mechanics to solve a level. This thesis project sets out to build an intelligent agent that could, ideally, take part in the competition using automated-planning techniques. In particular, the agent is based on the graphplan planning algorithm, operating at several hierarchically arranged levels of abstraction, and was implemented entirely in Prolog. The project thus shows that automated-planning techniques are a valid tool for solving certain kinds of complex, innovative games in AI.
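The planning core can be caricatured with a small state-space search. The sketch below uses plain breadth-first search over STRIPS-like actions as a stand-in for the hierarchical graphplan approach (and Python rather than Prolog); the rule-forming actions are invented, Baba is You-flavoured examples.

```python
# A tiny breadth-first state-space planner standing in for the hierarchical
# graphplan approach described above. Facts and actions are invented,
# "Baba is You"-flavoured examples, not the thesis's agent.

from collections import deque

def plan(start, goal, actions):
    """actions: name -> (precondition set, add set); returns an action list."""
    start, goal = frozenset(start), frozenset(goal)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, (pre, add) in actions.items():
            if pre <= state:
                new = state | frozenset(add)
                if new not in seen:
                    seen.add(new)
                    frontier.append((new, steps + [name]))
    return None

actions = {
    "make_win_rule": ({"flag_exists"}, {"flag_is_win"}),   # push FLAG IS WIN
    "touch_flag":    ({"flag_is_win"}, {"win"}),           # walk onto the flag
}
print(plan({"flag_exists"}, {"win"}, actions))  # ['make_win_rule', 'touch_flag']
```

Graphplan proper builds a layered planning graph with mutex reasoning rather than searching states directly, and the thesis layers it across abstraction levels; this sketch only conveys the plan-as-search idea on dynamic, rule-creating actions.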