25 results for "Legal tools"
Abstract:
Computer Aided Parallelisation Tools (CAPTools) is a toolkit designed to automate as much as possible of the process of parallelising scalar FORTRAN 77 codes. The toolkit combines a very powerful dependence analysis with user-supplied knowledge to build an extremely comprehensive and accurate dependence graph. The initial version has been targeted at structured mesh computational mechanics codes (e.g. heat transfer, Computational Fluid Dynamics (CFD)), and the associated simple mesh decomposition paradigm is utilised in the automatic code partition, execution control mask generation and communication call insertion. In this, the first of a series of papers [1–3], the authors discuss the parallelisation of a number of case study codes, showing how the various component tools may be used to develop a highly efficient parallel implementation in a few hours or days. The details of the parallelisation of the TEAMKE1 CFD code are described, together with the results for three other numerical codes. The resulting parallel implementations are then tested on workstation clusters using PVM and on an i860-based parallel system, showing efficiencies well over 80%.
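As a rough illustration of the "owner computes" masking that underlies the mesh decomposition paradigm mentioned in this abstract, here is a minimal Python sketch. Everything here is an illustrative assumption: the function names are hypothetical, and this is not CAPTools output or its API.

```python
# Minimal sketch of the "owner computes" masking idea behind a 1D block
# mesh decomposition. Hypothetical names; not CAPTools output or its API.

def block_range(n, nprocs, rank):
    """Half-open index range [lo, hi) of the mesh points owned by `rank`
    under a simple 1D block partition of n points."""
    base, rem = divmod(n, nprocs)
    lo = rank * base + min(rank, rem)
    hi = lo + base + (1 if rank < rem else 0)
    return lo, hi

def masked_jacobi_sweep(u, nprocs, rank):
    """Each process updates only the interior points it owns (the
    execution control mask). In generated parallel code, halo exchanges
    (e.g. PVM send/receive calls) would be inserted before this loop so
    that u[lo-1] and u[hi] hold the neighbouring processes' values."""
    n = len(u)
    lo, hi = block_range(n, nprocs, rank)
    new = list(u)
    for i in range(max(lo, 1), min(hi, n - 1)):
        new[i] = 0.5 * (u[i - 1] + u[i + 1])
    return new
```

On four processes and a 10-point mesh, block_range assigns the half-open ranges [0,3), [3,6), [6,8) and [8,10), and each rank sweeps only its own interior points.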
Abstract:
User-supplied knowledge and interaction are vital components of a toolkit for producing high quality parallel implementations of scalar FORTRAN numerical code. In this paper we consider the components that such a parallelisation toolkit should possess to provide an effective environment for identifying, extracting and embedding relevant user knowledge. We also examine to what extent these facilities are available in leading parallelisation tools; in particular, we discuss how these issues have been addressed in the development of the user interface of the Computer Aided Parallelisation Tools (CAPTools). The CAPTools environment has been designed to enable user exploration, interaction and insertion of user knowledge to facilitate the automatic generation of very efficient parallel code. A key issue in the user's interaction is control of the volume of information, so that the user is focused on only what is needed. User control over the level and extent of information revealed at any phase is supplied through a wide variety of filters. Another issue is the way in which information is communicated: dependence analysis and its resulting graphs involve many sophisticated, rather abstract concepts unlikely to be familiar to most users of parallelising tools, so considerable effort has been made to communicate with the user in terms that they will understand. These features, amongst others, and their use in the parallelisation process are described and their effectiveness discussed.
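To make the filtering idea concrete, the sketch below shows one plausible way to restrict which dependence edges are presented to the user, by dependence kind and enclosing routine. The data model and field names are assumptions for illustration, not the CAPTools data structures or interface.

```python
# Minimal sketch of filtering a dependence graph for display.
# The edge model and filter fields are illustrative assumptions,
# not the CAPTools data structures.

from dataclasses import dataclass

@dataclass(frozen=True)
class DepEdge:
    source: str      # e.g. a statement label such as "S1"
    sink: str
    kind: str        # "true", "anti", "output", or "control"
    routine: str

def filter_edges(edges, kinds=None, routine=None):
    """Show only the dependences the user asked about: restrict by
    dependence kind and/or enclosing routine."""
    return [e for e in edges
            if (kinds is None or e.kind in kinds)
            and (routine is None or e.routine == routine)]

edges = [
    DepEdge("S1", "S2", "true", "SOLVE"),
    DepEdge("S3", "S1", "anti", "SOLVE"),
    DepEdge("S4", "S5", "control", "INIT"),
]
# Focus the display on true dependences in SOLVE only.
print(filter_edges(edges, kinds={"true"}, routine="SOLVE"))
```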
Abstract:
The consecutive, partly overlapping emergence of expert systems and then of neural computation methods among intelligent technologies is reflected in the evolving scene of their application to nuclear engineering. This paper provides a bird's eye view of the state of such applications in the domain, along with a review of one particular task, perhaps the economically most important: refueling design in nuclear power reactors.
Abstract:
This paper presents a formalism for representing temporal knowledge in legal discourse that allows an explicit expression of time and event occurrences. The fundamental time structure is characterized as a well-ordered discrete set of primitive times (i.e. non-decomposable intervals with positive duration, or points with zero duration), from which decomposable intervals can be constructed. The formalism supports a full representation of both absolute and relative temporal knowledge, and a formal mechanism for checking the temporal consistency of a given set of legal statements is provided. The general consistency checking algorithm, which addresses both absolute and relative temporal knowledge, turns out to be a linear programming problem, while in the special case where only relative temporal relations are involved, it reduces to searching for cycles in the graphical representation of the corresponding legal text.
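For the purely relative special case, the cycle check can be sketched as a depth-first search over the precedence graph. This is a minimal illustration of the idea, under the simplifying assumption that statements take the form "a occurs before b"; it is not the paper's published algorithm.

```python
# Minimal sketch: consistency of purely relative temporal statements
# "a before b" reduces to checking that the precedence graph is acyclic.
# Illustrative only; the paper's formalism is richer than this.

def is_consistent(before_pairs):
    """before_pairs: iterable of (a, b) meaning 'a occurs before b'.
    The statements are consistent iff the directed graph has no cycle."""
    graph = {}
    for a, b in before_pairs:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set())

    WHITE, GREY, BLACK = 0, 1, 2  # unvisited / on current path / done
    colour = {v: WHITE for v in graph}

    def has_cycle(v):
        colour[v] = GREY
        for w in graph[v]:
            if colour[w] == GREY or (colour[w] == WHITE and has_cycle(w)):
                return True
        colour[v] = BLACK
        return False

    return not any(colour[v] == WHITE and has_cycle(v) for v in graph)

assert is_consistent([("arrest", "trial"), ("trial", "verdict")])
assert not is_consistent([("a", "b"), ("b", "c"), ("c", "a")])
```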
Abstract:
The Production Workstation developed at the University of Greenwich is evaluated as a tool for assisting all those concerned with production. It enables the producer, director and cinematographer to explore the quality of the images obtainable from a wide range of tools: users are free to explore many possible choices, ranging from 35mm film to DV, and to combine them with the cinematographer's many image manipulation tools. The validation required for the system, concerning the accuracy of the resulting imagery, is explicitly examined. Copyright © 1999 by the Society of Motion Picture and Television Engineers, Inc.
Abstract:
In judicial decision making, the doctrine of chances explicitly takes the odds into account. There is more to forensic statistics, as well as various probabilistic approaches, which taken together form the object of an enduring controversy in the scholarship of legal evidence. In this paper, we reconsider the circumstances of the Jama murder and inquiry (dealt with in Part I of this paper: "The Jama Model. On Legal Narratives and Interpretation Patterns"), to illustrate yet another kind of probability or improbability. What is improbable about the Jama story is actually a given, which contributes in terms of dramatic underlining. In literary theory, concepts of narratives being probable or improbable date back to the eighteenth century, when both prescientific and scientific probability were infiltrating several domains, including law. An understanding of such a backdrop throughout the history of ideas is, I claim, necessary for AI researchers who may be tempted to apply statistical methods to legal evidence. The debate for or against probability (and especially Bayesian probability) in accounts of evidence has been flourishing among legal scholars. Nowadays both the Bayesians (e.g. Peter Tillers) and the Bayesio-skeptics (e.g. Ron Allen) among those legal scholars who are involved in the controversy are willing to give AI research a chance to prove itself and strive towards models of plausibility that would go beyond probability as narrowly meant. This debate within law, in turn, has illustrious precedents: take Voltaire, who was critical of the application of probability even to litigation in civil cases; or take Boole, a starry-eyed believer in probability applications to judicial decision making (Rosoni 1995). Not unlike Boole, the founding father of computing, computer scientists approaching the field nowadays may happen to do so without full awareness of the pitfalls. Hence the usefulness of the conceptual landscape I sketch here.
Abstract:
Lennart Åqvist (1992) proposed a logical theory of legal evidence, based on the Bolding-Ekelöf degrees of evidential strength. This paper reformulates Åqvist's model in terms of the probabilistic version of the kappa calculus. Proving its acceptability in the legal context is beyond the present scope, but the epistemological debate about Bayesian law is clearly relevant. While the present model is a possible link to that line of inquiry, we offer some considerations about the broader picture of the potential of AI & Law in the evidentiary context. Whereas probabilistic reasoning is well-researched in AI, calculations about the threshold of persuasion in litigation, whatever their value, are just the tip of the iceberg. The bulk of the modeling desiderata is arguably elsewhere, if one is to ideally make the most of AI's distinctive contribution as envisaged for legal evidence research.
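For readers unfamiliar with the kappa calculus, its standard probabilistic reading can be summarised briefly. This is the textbook form of the calculus, given for orientation only; it is not Åqvist's notation or this paper's own formulas. A ranking κ assigns to each proposition a non-negative integer understood as the order of magnitude of an infinitesimal probability ε, so that P(A) is of order ε^κ(A), with κ(A) = 0 for unsurprising propositions and larger κ for more surprising ones.

```latex
% Standard properties of kappa rankings (sketch; not the paper's notation):
\[
  \kappa(A \lor B) = \min\bigl(\kappa(A),\, \kappa(B)\bigr), \qquad
  \kappa(B \mid A) = \kappa(A \land B) - \kappa(A),
\]
% so conjunction ranks add, mirroring multiplication of probabilities:
\[
  \kappa(A \land B) = \kappa(A) + \kappa(B \mid A).
\]
```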
Abstract:
Review of: Handbook of Psychology in Legal Contexts, Ray Bull and David Carson (eds.), Wiley-Blackwell, 1999.
Abstract:
This Acknowledgement refers to the special issue "Formal Approaches to Legal Evidence" of Artificial Intelligence and Law (September 2001, Vol. 9, Issue 2-3), which was guest edited by Ephraim Nissan.
Abstract:
This special issue of Artificial Intelligence and Law, "Formal Approaches to Legal Evidence" (September 2001, Vol. 9, Issue 2-3), was guest edited by Ephraim Nissan.
Abstract:
For the purposes of starting to tackle, within artificial intelligence (AI), the narrative aspects of legal narratives in a criminal evidence perspective, traditional AI models of narrative understanding can arguably supplement extant models of legal narratives from the scholarly literature of law, jury studies, or the semiotics of law. Moreover, the literary (or cinematic) models prominent in a given culture impinge, with their poetic conventions, on the way members of that culture make sense of the world. This shows glaringly in the sample narrative from the Continent that we analyse in this paper: the Jama murder, the inquiry, and the public outcry. Apparently in the same racist-crime category as the murder of Stephen Lawrence (in Greenwich on 22 April 1993), with its ensuing, still current controversy in the UK, the Jama case (some 20 years ago) stood apart because of a very unusual element: the eyewitnesses identifying the suspects were a group of football referees and linesmen eating together at a restaurant, who saw the sleeping man as he was set ablaze in a public park nearby. Their professional background as witnesses-cum-factfinders in a mass sport, and public perceptions of the characteristics required of them, couldn't but feature prominently in the public perception of the case, even more so as the suspects were released by the magistrate conducting the inquiry. There are sides to this case that involve different expected effects in an inquisitorial criminal procedure system from the Continent, where an investigating magistrate leads the inquiry and prepares the prosecution case, as opposed to trial by jury under the Anglo-American adversarial system. In the JAMA prototype, we tried to approach the given case from the coign of vantage of narrative models from AI.
Abstract:
In judicial decision making, the doctrine of chances explicitly takes the odds into account. There is more to forensic statistics, as well as various probabilistic approaches, which taken together form the object of an enduring controversy in the scholarship of legal evidence. In this paper, I reconsider the circumstances of the Jama murder and inquiry (dealt with in Part I of this paper: 'The JAMA Model and Narrative Interpretation Patterns'), to illustrate yet another kind of probability or improbability. What is improbable about the Jama story is actually a given, which contributes in terms of dramatic underlining. In literary theory, concepts of narratives being probable or improbable date back to the eighteenth century, when both prescientific and scientific probability were infiltrating several domains, including law. An understanding of such a backdrop throughout the history of ideas is, I claim, necessary for Artificial Intelligence (AI) researchers who may be tempted to apply statistical methods to legal evidence. The debate for or against probability (and especially Bayesian probability) in accounts of evidence has been flourishing among legal scholars; nowadays both the Bayesians (e.g. Peter Tillers) and the Bayesio-skeptics (e.g. Ron Allen), among those legal scholars who are involved in the controversy, are willing to give AI research a chance to prove itself and strive towards models of plausibility that would go beyond probability as narrowly meant. This debate within law, in turn, has illustrious precedents: take Voltaire, who was critical of the application of probability even to litigation in civil cases; or take Boole, a starry-eyed believer in probability applications to judicial decision making. Not unlike Boole, the founding father of computing, computer scientists approaching the field nowadays may happen to do so without full awareness of the pitfalls. Hence the usefulness of the conceptual landscape I sketch here.