193 results for Automated reasoning programs
Abstract:
This paper presents a novel method that leverages reasoning capabilities in a computer vision system dedicated to human action recognition. The proposed methodology is decomposed into two stages. First, a machine learning algorithm known as bag of words gives a first estimate of the action classification from video sequences by performing an image feature analysis. These results are then passed to a common-sense reasoning system, which analyses, selects and corrects the initial estimate yielded by the machine learning algorithm. This second stage draws on the knowledge implicit in the rationality that motivates human behaviour. Experiments performed under realistic conditions show that poor recognition rates from the machine learning stage are significantly improved once common-sense knowledge and reasoning capabilities are brought to bear, demonstrating the value of integrating common-sense capabilities into a computer vision pipeline. © 2012 Elsevier B.V. All rights reserved.
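To make the two-stage idea concrete, here is a minimal Python sketch (not the paper's implementation): the per-class scores of a hypothetical bag-of-words classifier are re-ranked by hand-written common-sense rules. All class names, scene objects and rules below are illustrative assumptions.

```python
# Stage 1: hypothetical output of a bag-of-words action classifier.
bow_scores = {"drinking": 0.41, "answering_phone": 0.38, "waving": 0.21}

# Stage 2: common-sense evidence, e.g. objects detected in the scene.
scene_objects = {"cup"}

# Illustrative rule base: an action is plausible only if its typical
# object is present (None means no object is required).
required_object = {"drinking": "cup", "answering_phone": "phone", "waving": None}

def commonsense_rerank(scores, objects):
    """Zero out actions whose required object is absent, then renormalise."""
    adjusted = {
        action: (s if required_object[action] is None or required_object[action] in objects else 0.0)
        for action, s in scores.items()
    }
    total = sum(adjusted.values()) or 1.0
    return {a: s / total for a, s in adjusted.items()}

# The reasoning stage corrects the first estimate: 'answering_phone'
# is suppressed because no phone is visible in the scene.
print(commonsense_rerank(bow_scores, scene_objects))
```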
Abstract:
Introducing automation into a managed environment involves significant initial overhead and abstraction, creating a disconnect between the administrator and the system. To ease the transition to automated management, this paper proposes an approach in which automation increases gradually, gathering data from the task deployment process. The stored data are analysed to determine the task outcome status and can then be compared against future deployments of the same task, alerting the administrator to deviations from the expected outcome. Using a machine-learning approach, the automation tool can learn from the administrator's reaction to task failures and eventually react to faults autonomously.
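As an illustration of the gradual-automation loop described above, the following Python sketch (not the authors' tool; all task names and statuses are invented) records a baseline outcome per task, alerts on deviations, and replays the administrator's previous fix once a failure has been seen and handled before.

```python
baseline = {}        # task name -> expected outcome status
learned_fixes = {}   # (task, failure) -> action the admin took previously

def deploy(task, outcome, admin_action=None):
    expected = baseline.setdefault(task, outcome)  # first run defines the baseline
    if outcome == expected:
        return "ok"
    if (task, outcome) in learned_fixes:
        return f"auto-remediate: {learned_fixes[(task, outcome)]}"
    if admin_action is not None:
        learned_fixes[(task, outcome)] = admin_action  # learn from the admin
        return f"alert handled by admin: {admin_action}"
    return "alert administrator: unexpected outcome"

print(deploy("rotate-logs", "success"))                          # establishes baseline
print(deploy("rotate-logs", "disk-full"))                        # deviation -> alert
print(deploy("rotate-logs", "disk-full", admin_action="purge"))  # admin reacts, fix learned
print(deploy("rotate-logs", "disk-full"))                        # now handled autonomously
```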
Abstract:
BACKGROUND: To compare the ability of Glaucoma Progression Analysis (GPA) and Threshold Noiseless Trend (TNT) programs to detect visual-field deterioration.
METHODS: Patients with open-angle glaucoma followed for a minimum of 2 years, with a minimum of seven reliable visual fields, were included. Progression was assessed subjectively by four masked glaucoma experts and compared with the GPA and TNT results. Each case was judged to be stable, deteriorated or suspicious of deterioration.
RESULTS: A total of 56 eyes of 42 patients were followed, with a mean of 7.8 (SD 1.0) tests over an average of 5.5 (SD 1.04) years. Interobserver agreement on the detection of progression was good (mean kappa = 0.57). Progression was detected in 10-19 eyes by the individual experts, in six by GPA and in 24 by TNT. Using the consensus expert opinion as the gold standard (progression detected by all four clinicians), GPA sensitivity and specificity were 75% and 83%, respectively, while TNT sensitivity and specificity were 100% and 77%, respectively.
CONCLUSION: TNT showed greater concordance with the experts than GPA in detecting visual-field deterioration. GPA showed high specificity but lower sensitivity, mainly detecting cases with high focality and pronounced mean-defect slopes.
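For reference, the sensitivity and specificity figures quoted above follow the standard definitions, computed here against the expert consensus as ground truth (true/false positives among eyes the experts judged progressing, true/false negatives among eyes they judged stable):

\[
\text{sensitivity} = \frac{TP}{TP + FN},
\qquad
\text{specificity} = \frac{TN}{TN + FP}
\]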
Abstract:
We address the problem of multi-target tracking in realistic crowded conditions by introducing a novel dual-stage online tracking algorithm. The problem of data association between tracks and detections based on appearance is often complicated by partial occlusion. In the first stage, we address occlusion with a novel method of robust data association that computes the appearance similarity between tracks and detections without requiring explicit knowledge of the occluded regions. In the second stage, broken tracks are linked based on motion and appearance using an online-learned linking model. The online-learned motion model for track linking uses the confident tracks from the first-stage tracker as training examples. The new approach has been tested on the Town Centre dataset and achieves performance comparable with the present state of the art.
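The occlusion-robust association idea can be illustrated with a small Python sketch (a schematic stand-in, not the paper's formulation): the bounding box is split into appearance patches, each patch is scored independently, and only the best-matching fraction is aggregated, so occluded patches are effectively ignored without ever being identified explicitly.

```python
import numpy as np

def patch_similarity(track_patches, det_patches, keep=0.5):
    """Cosine similarity per patch; average only the top `keep` fraction."""
    sims = [
        float(np.dot(t, d) / (np.linalg.norm(t) * np.linalg.norm(d) + 1e-9))
        for t, d in zip(track_patches, det_patches)
    ]
    sims.sort(reverse=True)
    k = max(1, int(len(sims) * keep))
    return sum(sims[:k]) / k  # occluded (low-scoring) patches drop out here

rng = np.random.default_rng(0)
track = [rng.random(16) for _ in range(8)]    # 8 appearance patches per track
det = [p + 0.05 * rng.random(16) for p in track]
det[0] = rng.random(16)                        # simulate one occluded patch
print(f"similarity = {patch_similarity(track, det):.3f}")
```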
Abstract:
Reasoning about problems with empirically false content can be hard, as the inferences that people draw are heavily influenced by their background knowledge. However, presenting empirically false premises in a fantasy context helps children and adolescents to disregard their beliefs and to reason on the basis of the premises. The aim of the present experiments was to examine whether high-functioning adolescents with autism are able to use a fantasy context to the same extent as typically developing adolescents when reasoning about empirically false premises. The results indicate that difficulties with engaging in pretence in autism persist into adolescence, and this hinders the ability of autistic individuals to disregard their beliefs when empirical knowledge is irrelevant.
Abstract:
The inherent difficulty of thread-based shared-memory programming has recently motivated research into high-level, task-parallel programming models. Recent advances in task-parallel models add implicit synchronization, where the system automatically detects and satisfies data dependencies among spawned tasks. However, dynamic dependence analysis incurs significant runtime overheads, because the runtime must track task resources and use this information to schedule tasks while avoiding conflicts and races.
We present SCOOP, a compiler that effectively integrates static and dynamic analysis in code generation. SCOOP combines context-sensitive points-to, control-flow, escape, and effect analyses to remove redundant dependence checks at runtime. Our static analysis can work in combination with existing dynamic analyses and task-parallel runtimes that use annotations to specify tasks and their memory footprints. We use our static dependence analysis to detect non-conflicting tasks and an existing dynamic analysis to handle the remaining dependencies. We evaluate the resulting hybrid dependence analysis on a set of task-parallel programs.
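A minimal Python sketch of the hybrid checking idea (SCOOP itself targets annotated task-parallel runtimes; the API below is invented for illustration): each task declares a read/write memory footprint, the runtime checks footprints for overlap before running a task, and tasks that a static analysis has proven conflict-free bypass the dynamic check entirely.

```python
running = []  # footprints of tasks currently in flight: (reads, writes)

def conflicts(reads, writes):
    """Dynamic dependence check: any write-write, write-read or read-write overlap?"""
    for r, w in running:
        if writes & (r | w) or reads & w:
            return True
    return False

def spawn(task, reads=frozenset(), writes=frozenset(), statically_safe=False):
    # Tasks statically proven non-conflicting skip the dynamic check,
    # which is where the runtime overhead is saved.
    if statically_safe or not conflicts(reads, writes):
        running.append((reads, writes))
        task()
        running.pop()
    else:
        print("defer: conflicting footprint")

spawn(lambda: print("task A"), reads={"x"}, writes={"y"})
spawn(lambda: print("task B"), reads={"z"}, statically_safe=True)  # check removed
```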
Abstract:
Based on the Dempster-Shafer (D-S) theory of evidence and G. Yen's (1989) extension of it, the authors propose approaches to representing heuristic knowledge by evidential mapping and to pooling the mass distribution in a complex frame by partitioning that frame using Shafer's partition technique. The authors generalize Yen's model from Bayesian probability theory to the D-S theory of evidence. Based on this generalized model, an extended framework for evidential reasoning systems is briefly specified, in which a semi-graph method is used to describe heuristic knowledge. The advantage of this method is that it avoids the complexity of graphs without losing their explicitness. The extended framework can be widely used to build expert systems.
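For context, the D-S machinery referred to above rests on Dempster's rule of combination, which pools two mass functions $m_1$ and $m_2$ defined over the same frame:

\[
(m_1 \oplus m_2)(A) \;=\; \frac{1}{1-K} \sum_{B \cap C = A} m_1(B)\, m_2(C),
\qquad
K \;=\; \sum_{B \cap C = \varnothing} m_1(B)\, m_2(C),
\]

where $K$ measures the conflict between the two bodies of evidence.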
Abstract:
Modern business practices in engineering are increasingly turning to post-manufacture service provision in an attempt to generate additional revenue streams and ensure commercial sustainability. Maintainability has always been a consideration during the design process, but in the past it has generally been considered of tertiary importance behind manufacturability and primary product function. The need to draw whole-life considerations into concurrent engineering (CE) practice has encouraged companies to address issues such as maintenance earlier in the design process, giving equal importance to all aspects of the product lifecycle. Considering design for maintainability (DFM) early in the design process has the potential to significantly reduce maintenance costs and improve overall running efficiency as well as safety levels. However, a lack of simulation tools still hinders the adaptation of CE to include practical elements of design, and further research is therefore required to develop methods by which 'hands-on' activities such as maintenance can be fully assessed and optimised as concepts develop. Virtual Reality (VR) has the potential to address this issue, but these traditionally high-cost systems can require complex infrastructure, and their use has typically focused on aesthetic aspects of mature designs. This paper examines the application of cost-effective VR technology to the rapid assessment of aircraft interior inspection during conceptual design. It focuses on the integration of VR hardware with a typical desktop engineering system and examines the challenges of data transfer, graphics quality and the development of practical user functions within the VR environment. Conclusions drawn to date indicate that the system has the potential to improve maintenance planning by providing a usable inspection environment that is available as soon as preliminary structural models are generated during conceptual design. Challenges remain in the efficient transfer of data between the CAD and VR environments, as well as in quantifying the benefits of the proposed approach. The results of this research will help to improve product maintainability, reduce product development cycle times and lower maintenance costs.
Abstract:
CCTV systems are widely deployed today. Despite this, their impact on anti-social and criminal behaviour has been minimal. Subject reacquisition is a fundamental task for ensuring timely reaction in intelligent surveillance. However, traditional reacquisition based on face recognition is not scalable; hence, in this paper we use reasoning techniques to reduce the computational effort, exploiting time-of-flight information between zones of interest, such as airport security corridors. Also, to improve the accuracy of reacquisition, we introduce the idea of revision as a method of post-processing. We demonstrate the significance and usefulness of our framework with an experiment showing much lower computational effort and better accuracy.
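The time-of-flight pruning can be sketched as follows (illustrative Python, not the paper's system; the transit-time window between zones is an assumed parameter): only subjects whose zone-exit times are consistent with the observed arrival time are forwarded to the expensive appearance/face matcher.

```python
MIN_TRANSIT, MAX_TRANSIT = 30.0, 120.0  # plausible seconds between zones (assumed)

def candidates(exits_zone_a, arrival_time_b):
    """Prune by transit-time feasibility before any appearance matching."""
    return [
        subject
        for subject, exit_time in exits_zone_a
        if MIN_TRANSIT <= arrival_time_b - exit_time <= MAX_TRANSIT
    ]

exits = [("s1", 0.0), ("s2", 40.0), ("s3", 95.0)]
print(candidates(exits, arrival_time_b=100.0))
# 's3' is pruned (5 s is impossibly fast for the corridor), so it never
# reaches the face matcher; only 's1' and 's2' need to be compared.
```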