23 results for Artificial intelligence -- Computer programs

in Greenwich Academic Literature Archive - UK


Relevance: 100.00%

Abstract:

Abstract not available

Relevance: 100.00%

Abstract:

In this paper, a knowledge-based approach is proposed for the management of temporal information in process control. A common-sense theory of temporal constraints over processes/events, allowing relative temporal knowledge, is employed here as the temporal basis for the system. This theory supports duration reasoning and consistency checking, and accepts relative temporal knowledge in a form normally used by human operators. An architecture for process control is proposed which centres on an historical database consisting of events and processes, together with the qualitative temporal relationships between their occurrences. The dynamics of the system are expressed by means of three types of rule: database updating rules, process control rules, and data deletion rules. An example is provided in the form of a life scheduler, to illustrate the database and the rule sets. The example demonstrates the transitions of the database over time, and identifies the procedure in terms of a state transition model for the application. The dividing instant problem for logical inference is discussed with reference to this process control example, and it is shown how the temporal theory employed can be used to deal with the problem.
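To make the rule-driven architecture concrete, here is a minimal sketch, not the paper's implementation and with all names hypothetical: qualitative temporal relations are stored as triples in the historical database, and each of the three rule types is a condition/action pair applied at every state transition.

```python
# A minimal, hypothetical sketch of the architecture described above:
# a historical database of events and processes linked by qualitative
# temporal relations, updated by three kinds of rule.

class HistoricalDatabase:
    def __init__(self):
        self.relations = set()   # triples such as ("pump_on", "before", "alarm")

    def assert_rel(self, a, rel, b):
        self.relations.add((a, rel, b))

    def holds(self, a, rel, b):
        return (a, rel, b) in self.relations

# Each rule is a (condition, action) pair evaluated against the database.
update_rules = []     # record newly observed events/processes and relations
control_rules = []    # issue control actions when a temporal pattern matches
deletion_rules = []   # discard occurrences no longer needed for reasoning

def transition(db):
    """One state transition: apply the three rule sets in order."""
    for rules in (update_rules, control_rules, deletion_rules):
        for condition, action in rules:
            if condition(db):
                action(db)
```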

Relevance: 100.00%

Abstract:

SMARTFIRE is a fire field model based on an open-architecture integrated CFD code and knowledge-based system. It makes use of an expert system to assist the user in setting up the problem specification, and of new computational techniques, such as Group Solvers, to reduce the computational effort involved in solving the equations. This paper concentrates on recent research into the use of artificial intelligence techniques to assist in dynamic solution control of fire scenarios being simulated using fire field modelling techniques. This is designed to improve the convergence capabilities of the software while further decreasing the computational overheads. The technique automatically controls solver relaxations using an integrated production rule engine with a blackboard to monitor and implement the required control changes during solution processing. Initial results for a two-dimensional fire simulation are presented that demonstrate the potential for considerable savings in simulation run-times when compared with control sets from various sources. Furthermore, the results demonstrate enhanced solution reliability due to obtaining acceptable convergence within each time step, unlike some of the comparison simulations.
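The control loop lends itself to a brief illustration. The sketch below is hypothetical (the thresholds, relaxation factors and equation names are assumptions, not SMARTFIRE's actual rules): a blackboard records recent residual norms per equation, and production rules tighten or loosen the relaxation factors accordingly.

```python
# Hypothetical sketch of rule-based dynamic solution control: a blackboard
# holds residual histories, and production rules adjust per-equation
# relaxation factors. All values below are illustrative assumptions.

blackboard = {
    "residuals": {"enthalpy": [], "momentum": []},   # recent residual norms
    "relaxation": {"enthalpy": 0.7, "momentum": 0.5},
}

def diverging(history, window=3):
    """Condition: residuals rising over the last few sweeps."""
    return len(history) >= window and all(
        history[i] < history[i + 1] for i in range(-window, -1))

def rule_tighten(bb, eq):
    """Action: lower relaxation to stabilise a diverging equation."""
    if diverging(bb["residuals"][eq]):
        bb["relaxation"][eq] = max(0.1, bb["relaxation"][eq] * 0.8)

def rule_loosen(bb, eq):
    """Action: raise relaxation when convergence is smooth, to save sweeps."""
    h = bb["residuals"][eq]
    if len(h) >= 2 and h[-1] < 0.5 * h[-2]:
        bb["relaxation"][eq] = min(1.0, bb["relaxation"][eq] * 1.1)

def control_step(bb):
    for eq in bb["residuals"]:
        rule_tighten(bb, eq)
        rule_loosen(bb, eq)
```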

Relevance: 100.00%

Abstract:

In judicial decision making, the doctrine of chances takes the odds explicitly into account. There is more to forensic statistics, as well as various probabilistic approaches, which taken together form the object of an enduring controversy in the scholarship of legal evidence. In this paper, we reconsider the circumstances of the Jama murder and inquiry (dealt with in Part I of this paper: "The Jama Model. On Legal Narratives and Interpretation Patterns"), to illustrate yet another kind of probability or improbability. What is improbable about the Jama story is actually a given, which contributes in terms of dramatic underlining. In literary theory, concepts of narratives being probable or improbable date back to the eighteenth century, when both prescientific and scientific probability were infiltrating several domains, including law. An understanding of such a backdrop throughout the history of ideas is, I claim, necessary for AI researchers who may be tempted to apply statistical methods to legal evidence. The debate for or against probability (and especially Bayesian probability) in accounts of evidence has been flourishing among legal scholars. Nowadays both the Bayesians (e.g. Peter Tillers) and the Bayesio-skeptics (e.g. Ron Allen) among those legal scholars who are involved in the controversy are willing to give AI research a chance to prove itself and strive towards models of plausibility that would go beyond probability as narrowly meant. This debate within law, in turn, has illustrious precedents: take Voltaire, who was critical of the application of probability even to litigation in civil cases; take Boole, who was a starry-eyed believer in probability applications to judicial decision making (Rosoni 1995). Not unlike Boole, the founding father of computing, computer scientists approaching the field nowadays may happen to do so without full awareness of the pitfalls. Hence the usefulness of the conceptual landscape I sketch here.

Relevance: 100.00%

Abstract:

Belief revision is a well-researched topic within AI. We argue that the new model of distributed belief revision discussed here is suitable for general modelling of judicial decision making, along with the extant approach known from jury research. The new approach to belief revision is of general interest whenever attitudes to information are to be simulated within a multi-agent environment, with agents holding local beliefs yet interacting with, and influencing, other agents who are deliberating collectively. In the approach proposed, it is the entire group of agents, not an external supervisor, that integrates the different opinions. This is achieved through an election mechanism. The principle of "priority to the incoming information", as known from AI models of belief revision, is problematic when applied to factfinding by a jury. The present approach incorporates a computable model for local belief revision, such that a principle of recoverability is adopted. By this principle, any previously held belief must belong to the current cognitive state if consistent with it. For the purposes of jury simulation such a model calls for refinement. Yet we claim it constitutes a valid basis for an open system where other AI functionalities (or outer stimuli) could attempt to handle other aspects of the deliberation which are more specific to legal narrative, to argumentation in court, and then to the debate among the jurors.

Relevance: 100.00%

Abstract:

In this paper, we discuss the problem of maintenance of a CBR system for retrieval of rotationally symmetric shapes. The special feature of this system is that similarity is derived primarily from graph matching algorithms. The special problem of such a system is that it does not operate on search indices that may be derived from single cases and then used for visualisation and principal component analysis. Rather, the system is built on a similarity metric defined directly over pairs of cases. The problems of efficiency, consistency, redundancy, completeness and correctness are discussed for such a system. Performance measures for the CBR system are given, and the results of trials of the system are presented. The competence of the current case base is discussed, with reference to a representation of cases as points in an n-dimensional feature space, and a Gramian visualisation. A refinement of the case base is performed as a result of the competence analysis, and the performance of the case base before and after refinement is compared.
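To illustrate what maintenance looks like when similarity exists only over pairs of cases, the following sketch (an assumption-laden stand-in, not the paper's code) works directly on a precomputed similarity matrix S: an eigendecomposition of the Gramian gives a point embedding for visualisation, and a simple redundancy pass refines the case base.

```python
import numpy as np

def embed_gramian(S, dims=2):
    """Embed cases as points by eigendecomposition of the symmetrised
    Gramian-like matrix S, for visualisation."""
    vals, vecs = np.linalg.eigh((S + S.T) / 2)
    order = np.argsort(vals)[::-1][:dims]          # keep the largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

def refine(S, redundancy_threshold=0.95):
    """Retain a case only if it is not nearly identical to one already
    retained; the threshold is an illustrative assumption."""
    retained = []
    for i in range(S.shape[0]):
        if all(S[i, j] < redundancy_threshold for j in retained):
            retained.append(i)
    return retained
```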

Relevance: 100.00%

Abstract:

This Acknowledgement refers to the special issue "Formal Approaches to Legal Evidence" of Artificial Intelligence and Law (September 2001, Vol. 9, Issue 2-3), which was guest edited by Ephraim Nissan.

Relevance: 100.00%

Abstract:

This is the special issue "Formal Approaches to Legal Evidence" of Artificial Intelligence and Law (September 2001, Vol. 9, Issue 2-3), which was guest edited by Ephraim Nissan.

Relevance: 100.00%

Abstract:

In judicial decision making, the doctrine of chances takes explicitly into account the odds. There is more to forensic statistics, as well as various probabilistic approaches, which taken together form the object of an enduring controversy in the scholarship of legal evidence. In this paper, I reconsider the circumstances of the Jama murder and inquiry (dealt with in Part I of this paper: 'The JAMA Model and Narrative Interpretation Patterns'), to illustrate yet another kind of probability or improbability. What is improbable about the Jama story is actually a given, which contributes in terms of dramatic underlining. In literary theory, concepts of narratives being probable or improbable date back to the eighteenth century, when both prescientific and scientific probability were infiltrating several domains, including law. An understanding of such a backdrop throughout the history of ideas is, I claim, necessary for Artificial Intelligence (AI) researchers who may be tempted to apply statistical methods to legal evidence. The debate for or against probability (and especially Bayesian probability) in accounts of evidence has been flourishing among legal scholars; nowadays both the Bayesians (e.g. Peter Tillers) and the Bayesio-skeptics (e.g. Ron Allen), among those legal scholars who are involved in the controversy, are willing to give AI research a chance to prove itself and strive towards models of plausibility that would go beyond probability as narrowly meant. This debate within law, in turn, has illustrious precedents: take Voltaire, who was critical of the application of probability even to litigation in civil cases; take Boole, who was a starry-eyed believer in probability applications to judicial decision making. Not unlike Boole, the founding father of computing, nowadays computer scientists approaching the field may happen to do so without full awareness of the pitfalls. Hence the usefulness of the conceptual landscape I sketch here.

Relevance: 100.00%

Abstract:

Belief revision is a well-researched topic within Artificial Intelligence (AI). We argue that the new model of belief revision as discussed here is suitable for general modelling of judicial decision making, along with the extant approach as known from jury research. The new approach to belief revision is of general interest whenever attitudes to information are to be simulated within a multi-agent environment, with agents holding local beliefs yet interacting with, and influencing, other agents who are deliberating collectively. The principle of 'priority to the incoming information', as known from AI models of belief revision, is problematic when applied to factfinding by a jury. The present approach incorporates a computable model for local belief revision, such that a principle of recoverability is adopted. By this principle, any previously held belief must belong to the current cognitive state if consistent with it. For the purposes of jury simulation such a model calls for refinement. Yet, we claim, it constitutes a valid basis for an open system where other AI functionalities (or outer stimuli) could attempt to handle other aspects of the deliberation which are more specific to legal narratives, to argumentation in court, and then to the debate among the jurors.
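The recoverability principle can be illustrated compactly. In the sketch below, beliefs are propositional literals and inconsistency simply means holding both p and ~p; this toy consistency test, and the simple priority given to incoming literals, are stand-ins rather than the paper's model, which integrates opinions through an election mechanism among the agents themselves.

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def consistent(beliefs, lit):
    return negate(lit) not in beliefs

def revise(current, incoming, history):
    """Incorporate incoming literals, then reinstate every previously held
    belief that is consistent with the resulting cognitive state."""
    state = set(current)
    for lit in incoming:
        state.discard(negate(lit))   # incoming information wins in this toy step
        state.add(lit)
    for lit in history:              # recoverability: restore compatible beliefs
        if consistent(state, lit):
            state.add(lit)
    return state

# revise({"guilty"}, {"~guilty"}, history={"motive", "guilty"})
# -> {"~guilty", "motive"}: "guilty" cannot be reinstated, "motive" can.
```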

Relevance: 100.00%

Abstract:

This paper describes progress on a project to utilise case-based reasoning methods in the design and manufacture of furniture products. The novel feature of this research is that cases are represented as structures in a relational database of products, components and materials. The paper proposes a method for extending the usual "weighted sum" over attribute similarities for a single table to encompass relational structures over several tables. The capabilities of the system are discussed, particularly with respect to differing user objectives, such as cost estimation, CAD, cutting-scheme re-use, and initial design. It is shown that specification of a target case as a relational structure, combined with suitable weights, can fulfil several user functions. However, it is also shown that some user functions cannot satisfactorily be specified via a single target case. For these functions it is proposed to allow the specification of a set of target cases. A derived similarity measure between individual cases and sets of cases is proposed.
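As a rough illustration of extending a single-table weighted sum to a relational structure, the sketch below uses a hypothetical product/components schema; the local similarity measures, weights and aggregation are assumptions, not the paper's method.

```python
def attr_sim(a, b):
    """Local similarity: scaled difference for numbers (assumed normalised
    to [0, 1]), exact match for nominal values."""
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return 1.0 - abs(a - b)
    return 1.0 if a == b else 0.0

def table_sim(row_a, row_b, weights):
    """The usual weighted sum over attribute similarities for one table."""
    total = sum(weights.values())
    return sum(w * attr_sim(row_a[k], row_b[k]) for k, w in weights.items()) / total

def product_sim(p, q, w_prod, w_comp, comp_weight=0.5):
    """Extend to the relational structure: combine product-level similarity
    with the best pairing of each component row."""
    s_prod = table_sim(p["attrs"], q["attrs"], w_prod)
    if not p["components"] or not q["components"]:
        return s_prod
    pairs = [max(table_sim(c, d, w_comp) for d in q["components"])
             for c in p["components"]]
    return (1 - comp_weight) * s_prod + comp_weight * sum(pairs) / len(pairs)
```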

Relevance: 100.00%

Abstract:

The so-called dividing instant (DI) problem is an ancient historical puzzle encountered when attempting to represent what happens at the boundary instant which divides two successive states. The specification of such a problem requires a thorough exploration of the primitives of the temporal ontology and the corresponding time structure, as well as the conditions that the resulting temporal models must satisfy. The problem is closely related to the question of how to characterize the relationship between time periods with positive duration and time instants with no duration. It involves the characterization of the ‘closed’ and ‘open’ nature of time intervals, i.e. whether time intervals include their ending points or not. In the domain of artificial intelligence, the DI problem may be treated as an issue of how to represent different assumptions (or hypotheses) about the DI in a consistent way. In this paper, we shall examine various temporal models including those based solely on points, those based solely on intervals and those based on both points and intervals, and point out the corresponding DI problem with regard to each of these temporal models. We shall propose a classification of assumptions about the DI and provide a solution to the corresponding problem.
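One of the assumptions classified in the paper can be shown in miniature. In the sketch below (an illustrative representation, not the paper's formalism), each interval records whether it includes its endpoints, and consistency at the dividing instant requires that exactly one of two meeting intervals own the boundary point.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    start: float
    end: float
    closed_left: bool     # does the interval include its starting point?
    closed_right: bool    # does the interval include its ending point?

def meets(a, b):
    return a.end == b.start

def di_consistent(a, b):
    """Under this assumption the dividing instant belongs to exactly one of
    the two meeting intervals, so the state there is unambiguous."""
    return meets(a, b) and (a.closed_right != b.closed_left)

on = Interval(0, 5, True, True)      # the light is on over [0, 5]
off = Interval(5, 9, False, True)    # the light is off over (5, 9]
assert di_consistent(on, off)        # at t = 5 the light is (only) on
```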

Relevance: 100.00%

Abstract:

In this paper we propose a generalisation of the k-nearest neighbour (k-NN) retrieval method based on an error function using distance metrics in the solution and problem space. It is an interpolative method which is proposed to be effective for sparse case bases. The method applies equally to nominal, continuous and mixed domains, and does not depend upon an embedding n-dimensional space. In continuous Euclidean problem domains, the method is shown to be a generalisation of Shepard's interpolation method. We term the retrieval algorithm the Generalised Shepard Nearest Neighbour (GSNN) method. A novel aspect of GSNN is that it provides a general method for interpolation over nominal solution domains. The performance of the retrieval method is examined with reference to the Iris classification problem, and to a simulated sparse nominal value test problem. The introduction of a solution-space metric is shown to outperform conventional nearest-neighbour methods on sparse case bases.
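A GSNN-style retrieval step can be sketched as follows (the parameter names and the inverse-distance weighting are assumptions): among a set of candidate solutions, which may be nominal values, return the one minimising a distance-weighted error over the k nearest cases. With a Euclidean solution metric and candidates ranging over the reals, the minimiser is the weighted mean, i.e. Shepard's interpolation.

```python
def gsnn(query, cases, d_prob, d_sol, candidates, k=3, p=2):
    """cases: list of (problem, solution) pairs; candidates: the possible
    solutions, e.g. class labels for a nominal solution domain."""
    nearest = sorted(cases, key=lambda c: d_prob(query, c[0]))[:k]
    weights = [1.0 / (d_prob(query, x) ** p + 1e-12) for x, _ in nearest]

    def error(s):   # distance-weighted error of candidate s in solution space
        return sum(w * d_sol(s, si) ** 2 for w, (_, si) in zip(weights, nearest))

    return min(candidates, key=error)

# Toy usage on a nominal solution domain:
cases = [((0.0,), "setosa"), ((1.0,), "virginica"), ((0.2,), "setosa")]
d_prob = lambda a, b: abs(a[0] - b[0])
d_sol = lambda s, t: 0.0 if s == t else 1.0
print(gsnn((0.1,), cases, d_prob, d_sol, ["setosa", "virginica"]))  # -> setosa
```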