35 results for artificial intelligence
Abstract:
Review of: Handbook of Psychology in Legal Contexts, Ray Bull and David Carson (eds.), Wiley-Blackwell, 1999.
Abstract:
This Acknowledgement refers to the special issue "Formal Approaches to Legal Evidence" of Artificial Intelligence and Law (September 2001, Vol. 9, Issue 2-3), which was guest-edited by Ephraim Nissan.
Abstract:
This is the special issue "Formal Approaches to Legal Evidence" of Artificial Intelligence and Law (September 2001, Vol. 9, Issue 2-3), guest-edited by Ephraim Nissan.
Abstract:
For the purposes of starting to tackle, within artificial intelligence (AI), the narrative aspects of legal narratives from a criminal evidence perspective, traditional AI models of narrative understanding can arguably supplement extant models of legal narratives from the scholarly literature of law, jury studies, or the semiotics of law. Moreover, the literary (or cinematic) models prominent in a given culture impinge, with their poetic conventions, on the way members of that culture make sense of the world. This shows glaringly in the sample narrative from the Continent that we analyse in this paper: the Jama murder, the inquiry, and the public outcry. Apparently in the same category of racist crime as the murder of Stephen Lawrence (in Greenwich on 22 April 1993), with its ensuing, still current controversy in the UK, the Jama case (some 20 years ago) stood apart because of a very unusual element: the eyewitnesses who identified the suspects were a group of football referees and linesmen eating together at a restaurant, who saw the sleeping man as he was set ablaze in a public park nearby. Their professional background as witnesses-cum-factfinders in a mass sport, and public perceptions of the characteristics required of them, could not but feature prominently in the public perception of the case, all the more so as the suspects were released by the magistrate conducting the inquiry. There are sides to this case that involve different expected effects in an inquisitorial criminal procedure system from the Continent, where an investigating magistrate leads the inquiry and prepares the prosecution case, as opposed to trial by jury under the Anglo-American adversarial system. In the JAMA prototype, we tried to approach the given case from the coign of vantage of narrative models from AI.
Abstract:
In judicial decision making, the doctrine of chances explicitly takes the odds into account. There is more: forensic statistics, as well as various probabilistic approaches, together form the object of an enduring controversy in the scholarship of legal evidence. In this paper, I reconsider the circumstances of the Jama murder and inquiry (dealt with in Part I of this paper, 'The JAMA Model and Narrative Interpretation Patterns') to illustrate yet another kind of probability or improbability. What is improbable about the Jama story is actually a given, which contributes in terms of dramatic underlining. In literary theory, concepts of narratives being probable or improbable date back to the eighteenth century, when both prescientific and scientific probability were infiltrating several domains, including law. An understanding of such a backdrop in the history of ideas is, I claim, necessary for Artificial Intelligence (AI) researchers who may be tempted to apply statistical methods to legal evidence. The debate for or against probability (and especially Bayesian probability) in accounts of evidence has been flourishing among legal scholars; nowadays both the Bayesians (e.g. Peter Tillers) and the Bayesio-skeptics (e.g. Ron Allen) among the legal scholars involved in the controversy are willing to give AI research a chance to prove itself and strive towards models of plausibility that would go beyond probability as narrowly meant. This debate within law has illustrious precedents: Voltaire was critical of applying probability even to litigation in civil cases, whereas Boole was a starry-eyed believer in its application to judicial decision making. Not unlike Boole, a founding father of computing, computer scientists approaching the field nowadays may do so without full awareness of the pitfalls. Hence the usefulness of the conceptual landscape I sketch here.
Abstract:
Belief revision is a well-researched topic within Artificial Intelligence (AI). We argue that the new model of belief revision discussed here is suitable for general modelling of judicial decision making, alongside the extant approach known from jury research. The new approach to belief revision is of general interest whenever attitudes to information are to be simulated within a multi-agent environment in which agents hold local beliefs yet interact with, and influence, other agents who are deliberating collectively. The principle of 'priority to the incoming information', as known from AI models of belief revision, is problematic when applied to factfinding by a jury. The present approach incorporates a computable model of local belief revision in which a principle of recoverability is adopted: any previously held belief must belong to the current cognitive state if it is consistent with it. For the purposes of jury simulation such a model calls for refinement. Yet, we claim, it constitutes a valid basis for an open system where other AI functionalities (or outer stimuli) could attempt to handle other aspects of the deliberation that are more specific to legal narratives, to argumentation in court, and to the debate among the jurors.
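By way of illustration only, the recoverability principle described above might be sketched as follows, assuming a toy propositional setting in which beliefs are signed literals and consistency is a simple negation check; a real system would need a proper reasoner, and all names here are invented, not the authors':

```python
# Toy sketch of belief revision with recoverability: previously held
# beliefs are re-admitted whenever they are consistent with the current
# cognitive state. Consistency is a naive check on signed literals.

def consistent(beliefs, belief):
    """A belief 'p' conflicts only with its negation '-p' (toy check)."""
    negation = belief[1:] if belief.startswith("-") else "-" + belief
    return negation not in beliefs

def revise(current, history, incoming):
    """Revise 'current' with 'incoming', then recover old beliefs when consistent."""
    # Priority to the incoming information: drop anything contradicting it.
    revised = {b for b in current if consistent({incoming}, b)}
    revised.add(incoming)
    # Recoverability: any previously held belief returns if consistent.
    for old in history:
        if consistent(revised, old):
            revised.add(old)
    return revised

history = {"alibi", "-weapon_found"}
state = revise({"eyewitness_id"}, history, "-alibi")
print(state)  # contains '-alibi', 'eyewitness_id' and the recovered '-weapon_found'
```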
Abstract:
This paper describes progress on a project to utilise case-based reasoning methods in the design and manufacture of furniture products. The novel feature of this research is that cases are represented as structures in a relational database of products, components and materials. The paper proposes a method for extending the usual "weighted sum" over attribute similarities for a single table to encompass relational structures over several tables. The capabilities of the system are discussed, particularly with respect to differing user objectives, such as cost estimation, CAD, cutting-scheme re-use, and initial design. It is shown that specification of a target case as a relational structure, combined with suitable weights, can fulfil several user functions. However, it is also shown that some user functions cannot satisfactorily be specified via a single target case. For these functions it is proposed to allow the specification of a set of target cases. A derived similarity measure between individual cases and sets of cases is proposed.
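As a rough, hypothetical illustration of extending a weighted sum over one table to a relational structure, consider the following sketch; the schema (a product row with child component rows), the attribute handling, and the 0.6/0.4 blend are invented for the example and are not the paper's design:

```python
# Sketch: weighted-sum similarity lifted from a single table to a
# two-level relational structure (product -> components).

def attr_sim(a, b):
    """Crude per-attribute similarity (1.0 = identical)."""
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        rng = max(abs(a), abs(b)) or 1.0
        return max(0.0, 1.0 - abs(a - b) / rng)
    return 1.0 if a == b else 0.0

def table_sim(row_a, row_b, weights):
    """Classic weighted sum over the attributes of one table."""
    total = sum(weights.values())
    return sum(w * attr_sim(row_a[k], row_b[k]) for k, w in weights.items()) / total

def product_sim(p, q, w_table, w_child):
    """Blend the product-table score with an aggregate over component rows."""
    base = table_sim(p["attrs"], q["attrs"], w_table)
    comp = 0.0
    if p["components"] and q["components"]:
        # Greedy best-match over rows stands in for full structure matching.
        comp = sum(max(table_sim(c, d, w_child) for d in q["components"])
                   for c in p["components"]) / len(p["components"])
    return 0.6 * base + 0.4 * comp
```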
Abstract:
The so-called dividing instant (DI) problem is an ancient puzzle encountered when attempting to represent what happens at the boundary instant dividing two successive states. The specification of such a problem requires a thorough exploration of the primitives of the temporal ontology and the corresponding time structure, as well as the conditions that the resulting temporal models must satisfy. The problem is closely related to the question of how to characterize the relationship between time periods with positive duration and time instants with no duration. It involves the characterization of the 'closed' and 'open' nature of time intervals, i.e. whether time intervals include their ending points or not. In the domain of artificial intelligence, the DI problem may be treated as an issue of how to represent different assumptions (or hypotheses) about the DI in a consistent way. In this paper, we examine various temporal models, including those based solely on points, those based solely on intervals and those based on both points and intervals, and point out the corresponding DI problem with regard to each of these models. We propose a classification of assumptions about the DI and provide a solution to the corresponding problem.
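A minimal sketch of the representational issue, assuming the common "left-closed, right-open" convention for intervals; this is one of the assumptions such a classification would cover, not the paper's general solution, and the encoding is ours:

```python
# Toy dividing-instant representation: an interval carries flags saying
# whether it includes its endpoints. Under "left-closed, right-open",
# two meeting states share a boundary instant without both holding at it.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    start: float
    end: float
    closed_left: bool = True    # does the interval include its start point?
    closed_right: bool = False  # ...and its end point?

    def holds_at(self, t: float) -> bool:
        if t == self.start:
            return self.closed_left
        if t == self.end:
            return self.closed_right
        return self.start < t < self.end

light_off = Interval(0.0, 5.0)  # [0, 5)
light_on = Interval(5.0, 9.0)   # [5, 9)
t = 5.0                         # the dividing instant
print(light_off.holds_at(t), light_on.holds_at(t))  # False True: no contradiction
```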
Abstract:
In this paper we propose a generalisation of the k-nearest neighbour (k-NN) retrieval method based on an error function using distance metrics in the solution and problem spaces. It is an interpolative method proposed to be effective for sparse case bases. The method applies equally to nominal, continuous and mixed domains, and does not depend upon an embedding n-dimensional space. In continuous Euclidean problem domains, the method is shown to be a generalisation of Shepard's interpolation method. We term the retrieval algorithm the Generalised Shepard Nearest Neighbour (GSNN) method. A novel aspect of GSNN is that it provides a general method for interpolation over nominal solution domains. The performance of the retrieval method is examined with reference to the Iris classification problem, and to a simulated sparse nominal value test problem. The introduction of a solution-space metric is shown to outperform conventional nearest-neighbour methods on sparse case bases.
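One plausible reading of the idea, sketched below under assumptions of our own: the prediction is the candidate solution minimising an inverse-distance-weighted error over the k nearest cases. The exact error function, the exponent p, and the toy Iris-flavoured data are illustrative, not the authors' definitive formulation:

```python
# Hedged sketch of GSNN-style retrieval with a nominal solution domain.

def gsnn_predict(query, cases, d_problem, d_solution, candidates, k=5, p=2):
    """cases: list of (problem, solution); candidates: nominal solution set."""
    # k nearest cases in the problem space
    nearest = sorted(cases, key=lambda c: d_problem(query, c[0]))[:k]
    weights = [1.0 / (d_problem(query, x) ** p + 1e-9) for x, _ in nearest]

    def error(y):
        # Weighted disagreement between y and the retrieved solutions,
        # measured with a metric on the *solution* space.
        return sum(w * d_solution(y, yi) ** 2
                   for w, (_, yi) in zip(weights, nearest))

    return min(candidates, key=error)

# Nominal example: solutions are class labels with a 0/1 solution metric.
cases = [((1.0,), "setosa"), ((2.0,), "setosa"), ((4.0,), "virginica")]
pred = gsnn_predict(
    (1.5,), cases,
    d_problem=lambda a, b: abs(a[0] - b[0]),
    d_solution=lambda a, b: 0.0 if a == b else 1.0,
    candidates={"setosa", "virginica"},
    k=3,
)
print(pred)  # setosa
```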
Abstract:
One of the fundamental questions regarding the temporal ontology is what time is composed of. While the traditional time structure is based on a set of points, a notion prevalently adopted in classical physics and mathematics, it has also been noticed that intervals are widely used for the expression of common-sense temporal knowledge, especially in the domain of artificial intelligence. However, there has been a longstanding debate on how intervals should be addressed, leading to two different approaches to their treatment. In the first, intervals are addressed as derived objects constructed from points, e.g. as sets of points or as pairs of points. In the second, intervals are taken as primitive themselves. This article provides a critical examination of these two approaches. By means of proposing a definition of intervals in terms of points and types, we demonstrate that, while the two approaches have been viewed as rivals in the literature, they are actually reducible to logically equivalent expressions under some requisite interpretations, and therefore they can also be viewed as allies.
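As a small illustrative sketch of the inter-definability at stake: on a point-based encoding of intervals, the "meets" relation that primitive-interval theories take as basic is recovered as equality of the shared boundary point. The encoding below is ours, not the article's formal definition:

```python
# Point-based view: an interval is an ordered pair of points. The
# primitive-interval relation 'meets' is then derivable, suggesting the
# two treatments are translations of one another.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    start: float
    end: float

def meets(i: Interval, j: Interval) -> bool:
    # Primitive theories axiomatise 'meets'; the point encoding recovers
    # it as coincidence of the shared boundary point.
    return i.end == j.start

morning = Interval(6.0, 12.0)
afternoon = Interval(12.0, 18.0)
print(meets(morning, afternoon))  # True
```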
Abstract:
In this paper we propose a method for interpolation over a set of retrieved cases in the adaptation phase of the case-based reasoning cycle. The method has two advantages over traditional systems: the first is that it can predict "new" instances, not yet present in the case base; the second is that it can predict solutions not present in the retrieval set. The method is a generalisation of Shepard's interpolation method, formulated as the minimisation of an error function defined in terms of distance metrics in the solution and problem spaces. We term the retrieval algorithm the Generalised Shepard Nearest Neighbour (GSNN) method. A novel aspect of GSNN is that it provides a general method for interpolation over nominal solution domains. The method is illustrated in the paper with reference to the Iris classification problem. It is evaluated with reference to a simulated nominal value test problem, and to a benchmark case base from the travel domain. The algorithm is shown to outperform conventional nearest-neighbour methods on these problems. Finally, GSNN is shown to improve in efficiency when used in conjunction with a diverse retrieval algorithm.
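A minimal companion sketch of the second advantage (predicting a solution absent from the retrieval set): the minimiser of an inverse-distance-weighted error in the solution space need not be one of the retrieved solutions. The weights, the ordinal solution metric, and the labels are invented for the example:

```python
# Interpolation in the adaptation phase over a retrieved set {A, C}:
# the error minimiser can be "B", which no retrieved case contains.

retrieved = [("A", 0.5), ("C", 0.5), ("C", 0.5)]  # (solution, inverse-distance weight)

def d_sol(a, b):
    # Ordinal solution metric under which "B" lies between "A" and "C".
    return abs(ord(a) - ord(b))

def error(y):
    return sum(w * d_sol(y, yi) ** 2 for yi, w in retrieved)

best = min("ABC", key=error)
print(best)  # "B": predicted although absent from the retrieval set
```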
Abstract:
Numerical models are important tools used in engineering fields to predict the behaviour and the impact of physical elements. There may be advantages to be gained by combining Case-Based Reasoning (CBR) techniques with numerical models. This paper considers how CBR can be used as a flexible query engine to improve the usability of numerical models. In particular, it can help to solve inverse and mixed problems, and to solve constraint problems. We discuss this idea with reference to the illustrative example of a pneumatic conveyor problem. The paper describes example problems faced by design engineers in this context and the issues that need to be considered in this approach. Solving these problems requires methods for handling constraints in both the retrieval phase and the adaptation phase of a typical CBR cycle. We show approaches to the solution of these problems via a CBR tool.
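To make the "flexible query engine" idea concrete, here is a hypothetical sketch in which one case base of recorded model runs answers forward, inverse, and mixed queries by filtering on constraints; the conveyor-style attribute names and values are invented, not taken from the paper:

```python
# Each case is a recorded run of the numerical model: inputs and outputs
# stored together, so constraints may mention either side.

cases = [
    {"pipe_diameter": 0.10, "air_velocity": 18.0, "throughput": 12.0},
    {"pipe_diameter": 0.15, "air_velocity": 22.0, "throughput": 25.0},
    {"pipe_diameter": 0.15, "air_velocity": 16.0, "throughput": 19.0},
]

def query(case_base, constraints):
    """Return cases satisfying every (attribute, predicate) constraint."""
    return [c for c in case_base
            if all(pred(c[attr]) for attr, pred in constraints.items())]

# Inverse problem: which configurations achieve at least 18 t/h throughput
# while keeping air velocity under 20 m/s?
feasible = query(cases, {
    "throughput": lambda v: v >= 18.0,
    "air_velocity": lambda v: v < 20.0,
})
print(feasible)  # [{'pipe_diameter': 0.15, 'air_velocity': 16.0, 'throughput': 19.0}]
```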
Abstract:
The traditional approach to dealing with cases from multiple case bases is to map them to one central case base that is used for knowledge extraction and problem solving. Accessing multiple case bases should not, however, require a change to their data structures. This paper presents an investigation into applying Case-Based Reasoning to multiple heterogeneous case bases. A case study is presented to illustrate and evaluate the approach.
Abstract:
This paper examines different ways of measuring similarity between software design models for the purpose of software reuse. Current approaches to this problem are discussed, and a set of suitable similarity metrics is proposed and evaluated. Work on the optimisation of weights to increase the competence of a CBR system is presented. A graph-matching algorithm, and associated metrics capturing the structural similarity between UML class diagrams, are presented and demonstrated through an example case.
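As a simple stand-in for such metrics (not the paper's graph-matching algorithm), a Jaccard-style structural similarity over class names and relationship edges might look like this:

```python
# Crude structural similarity between two class diagrams, scored on
# shared classes and shared relationships; weights are illustrative.

def diagram_similarity(classes_a, edges_a, classes_b, edges_b):
    """classes_*: sets of class names; edges_*: sets of (source, kind, target)."""
    def jaccard(x, y):
        return len(x & y) / len(x | y) if (x | y) else 1.0
    return 0.5 * jaccard(classes_a, classes_b) + 0.5 * jaccard(edges_a, edges_b)

a_classes = {"Order", "Customer", "Invoice"}
a_edges = {("Order", "assoc", "Customer"), ("Invoice", "assoc", "Order")}
b_classes = {"Order", "Customer", "Payment"}
b_edges = {("Order", "assoc", "Customer")}
print(diagram_similarity(a_classes, a_edges, b_classes, b_edges))  # 0.5
```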
Abstract:
This paper describes the use of a blackboard architecture for building a hybrid case-based reasoning (CBR) system. The Smartfire fire field modelling package has been built using this architecture and includes a CBR component, which allows the integration into the system of qualitative spatial reasoning knowledge from domain experts. The system can be used for the automatic set-up of fire field models, enabling fire safety practitioners who are not expert in modelling techniques to use a fire modelling tool. The paper discusses the integrating powers of the architecture, which is based on a common knowledge representation (comprising a metric diagram and place vocabulary) and on mechanisms for adaptation and conflict resolution built on the blackboard.
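A generic sketch of the blackboard pattern with a CBR knowledge source, to illustrate the architectural style only; the knowledge sources, attribute names, and control loop below are invented and do not reflect Smartfire's actual design:

```python
# Knowledge sources read a shared blackboard and post partial results;
# a simple controller fires sources until none can contribute further.

class Blackboard(dict):
    """Shared state: attribute name -> value."""

def cbr_setup(bb):
    # A CBR source might propose model parameters retrieved from past cases.
    if "geometry" in bb and "mesh" not in bb:
        bb["mesh"] = f"mesh-for-{bb['geometry']}"  # retrieved/adapted case
        return True
    return False

def spatial_reasoner(bb):
    # A qualitative-spatial source might refine the place vocabulary.
    if "mesh" in bb and "regions" not in bb:
        bb["regions"] = ["near_door", "ceiling_jet"]
        return True
    return False

def control(bb, sources):
    progress = True
    while progress:  # loop until quiescence
        progress = any(src(bb) for src in sources)

bb = Blackboard(geometry="two-room-compartment")
control(bb, [cbr_setup, spatial_reasoner])
print(bb)
```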