158 results for Automated Reasoning
Abstract:
A new method for automated coronal loop tracking, in both spatial and temporal domains, is presented. Applying this technique to TRACE data, obtained using the 171 angstrom filter on 1998 July 14, we detect a coronal loop undergoing a 270 s kink-mode oscillation, as previously found by Aschwanden et al. However, we also detect flare-induced, and previously unnoticed, spatial periodicities on a scale of 3500 km, which occur along the coronal loop edge. Furthermore, we establish a reduction in oscillatory power for these spatial periodicities of 45% over a 222 s interval. We relate the reduction in detected oscillatory power to the physical damping of these loop-top oscillations.
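As a rough illustration of how such a power reduction maps onto a damping time, the sketch below assumes purely exponential decay of the oscillatory power; the exponential model and the power-amplitude relation are assumptions of this illustration, not results quoted from the abstract.

```python
import math

# Illustrative estimate (assumption: exponential decay, P(t) = P0 * exp(-t / tau_P)).
power_ratio = 1.0 - 0.45   # fraction of oscillatory power remaining after the interval
interval = 222.0           # seconds, as reported above

tau_power = -interval / math.log(power_ratio)   # e-folding time of the power
tau_amplitude = 2.0 * tau_power                 # power ~ amplitude^2, so amplitude decays twice as slowly

print(f"power e-folding time     ~ {tau_power:.0f} s")      # ~370 s
print(f"amplitude e-folding time ~ {tau_amplitude:.0f} s")   # ~740 s
```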
Abstract:
In this paper we describe how an evidential reasoner can be used as a component of risk assessment for engineering projects, using a direct way of reasoning. Guan & Bell (1991) introduced this method, using mass functions to express rule strengths. Mass functions are also used to express data strengths. The data and rule strengths are combined to obtain a mass distribution for each rule; i.e., the first half of our reasoning process. Then we combine the prior mass and the evidence from the different rules; i.e., the second half of the reasoning process. Finally, belief intervals are calculated to help in identifying the risks. We apply our evidential reasoner to an engineering project, and the results demonstrate the feasibility and applicability of this system in this environment.
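The combination and belief-interval steps described above can be sketched in a few lines; the frame of discernment, rule evidence and prior below are hypothetical placeholders, not values from the paper, and the code follows the standard normalised form of Dempster's rule rather than the paper's exact formulation.

```python
FRAME = frozenset({"low", "medium", "high"})   # hypothetical risk levels

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions over FRAME.
    Mass functions map frozensets (focal elements) to masses summing to 1."""
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

def belief_interval(m, hypothesis):
    """Return (Bel(H), Pl(H)) for a hypothesis given as a frozenset of FRAME elements."""
    bel = sum(v for a, v in m.items() if a <= hypothesis)
    pl = sum(v for a, v in m.items() if a & hypothesis)
    return bel, pl

# Hypothetical rule evidence and prior, expressed as mass functions.
rule_evidence = {frozenset({"high"}): 0.6, FRAME: 0.4}
prior = {frozenset({"low", "medium"}): 0.3, FRAME: 0.7}

m = combine(rule_evidence, prior)
print(belief_interval(m, frozenset({"high"})))   # belief interval for "high risk"
```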
Abstract:
The purpose of this study is to develop a decision-making system to evaluate the risks in E-Commerce (EC) projects. Competitive software businesses have the critical task of assessing risk in the software system development life cycle. This can be conducted on the basis of conventional probabilities, but appropriate information is often limited, so a complete set of probabilities is not available. In such problems, where the analysis is highly subjective and relates to vague, incomplete, uncertain or inexact information, the Dempster-Shafer (DS) theory of evidence offers a potential advantage. We use a direct way of reasoning in a single step (i.e., extended DS theory) to develop a decision-making system to evaluate the risk in EC projects. This consists of five stages: 1) establishing the knowledge base and setting rule strengths, 2) collecting evidence and data, 3) combining evidence and rule strengths into a mass distribution for each rule, i.e., the first half of the single-step reasoning process, 4) combining the prior mass and the evidence from the different rules, i.e., the second half of the single-step reasoning process, and 5) evaluating belief intervals to support the best decision for the EC project. We test the system using potential risk factors associated with EC development, and the results indicate that the system is a promising way of assisting an EC project manager in identifying potential risk factors and the corresponding project risks.
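Stage 3 of the process above (turning a rule strength and the strength of its supporting evidence into a per-rule mass distribution) can be illustrated as follows; the multiplicative combination, the example rule and all numeric values are assumptions of this sketch, not taken from the paper.

```python
def rule_mass(rule_strength, data_strength, hypotheses, frame):
    """Illustrative version of stage 3: combine a rule strength with the
    strength of its supporting data into a mass distribution for the rule.
    The combined support goes to the rule's hypothesis set; the remainder
    stays uncommitted on the whole frame."""
    support = rule_strength * data_strength        # assumption: simple product
    return {frozenset(hypotheses): support, frozenset(frame): 1.0 - support}

FRAME = {"low risk", "high risk"}
# Hypothetical rule: "inexperienced development team -> high risk" with
# rule strength 0.8, supported by evidence observed at strength 0.7.
m = rule_mass(0.8, 0.7, {"high risk"}, FRAME)
print(m)   # 0.56 on {'high risk'}, 0.44 left on the whole frame
```

The resulting per-rule mass distributions would then be combined with the prior mass in stage 4 (for example with Dempster's rule, as in the previous sketch) before belief intervals are read off in stage 5.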
Abstract:
Exam timetabling is one of the most important administrative activities that takes place in academic institutions. In this paper we present a critical discussion of the research on exam timetabling in the last decade or so. The last ten years have seen an increased level of attention on this important topic. There has been a range of significant contributions to the scientific literature, in terms of both theoretical and practical aspects. The main aim of this survey is to highlight the new trends and key research achievements that have been made in the last decade. We also aim to outline a range of relevant and important research issues and challenges that have been generated by this body of work.
We first define the problem and review previous survey papers. Algorithmic approaches are then classified and discussed. These include early techniques (e.g. graph heuristics) and state-of-the-art approaches including meta-heuristics, constraint-based methods, multi-criteria techniques, hybridisations, and recent new trends concerning neighbourhood structures, which are motivated by raising the generality of the approaches. Summarising tables are presented to provide an overall view of these techniques. We discuss some issues concerning decomposition techniques, system tools and languages, models and complexity. We also present and discuss some important issues which have come to light concerning the public benchmark exam timetabling data. Different versions of problem datasets with the same name have been circulating in the scientific community over the last ten years, which has generated a significant amount of confusion. We clarify the situation and present a re-naming of the widely studied datasets to avoid future confusion. We also highlight which research papers have dealt with which dataset. Finally, we draw upon our discussion of the literature to present a (non-exhaustive) range of potential future research directions and open issues in exam timetabling research.
Abstract:
In this paper, we present a random iterative graph-based hyper-heuristic to produce a collection of heuristic sequences that construct solutions of different quality. These heuristic sequences can be seen as dynamic hybridisations of different graph colouring heuristics that construct solutions step by step. Based on these sequences, we statistically analyse the way in which graph colouring heuristics are automatically hybridised. This, to our knowledge, represents a new direction in hyper-heuristic research. It is observed that spending the search effort on hybridising Largest Weighted Degree with Saturation Degree at the early stage of solution construction tends to generate high quality solutions. Based on these observations, an iterative hybrid approach is developed to adaptively hybridise these two graph colouring heuristics at different stages of solution construction. The overall aim here is to automate the heuristic design process, which draws upon an emerging research theme on developing computer methods to design and adapt heuristics automatically. Experimental results on benchmark exam timetabling and graph colouring problems demonstrate the effectiveness and generality of this adaptive hybrid approach compared with previous methods for automatically generating and adapting heuristics. Indeed, we also show that the approach is competitive with state-of-the-art human-produced methods.
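A minimal sketch of the kind of hybridisation described above, assuming a plain greedy graph-colouring constructor and an arbitrary fixed switch point between the two heuristics (the actual hyper-heuristic searches over heuristic sequences rather than fixing a switch point):

```python
def colour_graph(adjacency, weights, switch_fraction=0.3):
    """Greedy colouring that uses Largest Weighted Degree (LWD) early in the
    construction and Saturation Degree (SD) afterwards. The 30% switch point
    is an arbitrary placeholder, not a value from the paper."""
    colours = {}
    n = len(adjacency)

    def lwd(v):   # weighted degree of a vertex
        return weights[v] * len(adjacency[v])

    def sd(v):    # number of distinct colours already used by neighbours
        return len({colours[u] for u in adjacency[v] if u in colours})

    while len(colours) < n:
        heuristic = lwd if len(colours) < switch_fraction * n else sd
        v = max((u for u in adjacency if u not in colours), key=heuristic)
        used = {colours[u] for u in adjacency[v] if u in colours}
        colours[v] = next(c for c in range(n) if c not in used)
    return colours

# Tiny example: vertices are exams, edges are student conflicts, weights are enrolments.
adjacency = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
weights = {"A": 3, "B": 2, "C": 5, "D": 1}
print(colour_graph(adjacency, weights))
```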
Abstract:
In many domains, when several competing classifiers are available, we want to synthesize them, or some of them, into a more accurate classifier using a combination function. In this paper we propose a ‘class-indifferent’ method for combining classifier decisions represented by evidential structures called triplet and quartet, using Dempster's rule of combination. This method is unique in that it distinguishes important elements from trivial ones in representing classifier decisions, makes use of more information than others in calculating the support for class labels, and provides a practical way to apply the theoretically appealing Dempster–Shafer theory of evidence to the problem of ensemble learning. We present a formalism for modelling classifier decisions as triplet mass functions and establish a range of formulae for combining these mass functions in order to arrive at a consensus decision. In addition, we carry out a comparative study with the alternative simplet and dichotomous structures, and compare two combination methods, Dempster's rule and majority voting, on the UCI benchmark data, to demonstrate the advantage our approach offers. (A continuation of the work in this area previously published in IEEE Transactions on Knowledge and Data Engineering and at conferences.)
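A minimal sketch of the triplet idea, under one possible reading: mass is placed on the top-ranked class, on the second-ranked class, and the remainder on the whole frame, and two such triplets are then fused with the standard normalised form of Dempster's rule. The class labels, scores and this particular mass assignment are assumptions of the sketch, not the paper's exact formulation.

```python
def triplet(scores):
    """Build an illustrative triplet mass function from one classifier's class
    scores (assumed non-negative and summing to at most 1): mass on the top
    class, on the runner-up, and the rest on the whole frame (ignorance)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    top, second = ranked[0], ranked[1]
    return {frozenset({top}): scores[top],
            frozenset({second}): scores[second],
            frozenset(scores): 1.0 - scores[top] - scores[second]}

def dempster(m1, m2):
    """Standard (normalised) Dempster's rule of combination."""
    out, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                out[inter] = out.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {a: v / (1.0 - conflict) for a, v in out.items()}

# Two hypothetical classifiers scoring three classes.
c1 = triplet({"spam": 0.6, "ham": 0.3, "phishing": 0.1})
c2 = triplet({"spam": 0.5, "ham": 0.4, "phishing": 0.1})
consensus = dempster(c1, c2)
print(max(consensus, key=consensus.get))   # focal element with the largest combined mass
```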
Abstract:
Use of the Dempster-Shafer (D-S) theory of evidence to deal with uncertainty in knowledge-based systems has been widely addressed. Several AI implementations have been undertaken based on the D-S theory of evidence or the extended theory. But the representation of uncertain relationships between evidence and hypothesis groups (heuristic knowledge) is still a major problem. This paper presents an approach to representing such knowledge, in which Yen’s probabilistic multi-set mappings have been extended to evidential mappings, and Shafer’s partition technique is used to get the mass function in a complex evidence space. Then, a new graphic method for describing the knowledge is introduced which is an extension of the graphic model by Lowrance et al. Finally, an extended framework for evidential reasoning systems is specified.
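One way to picture an evidential mapping is as a table from focal elements of the evidence space to weighted subsets of the hypothesis space, through which a mass function over the evidence is propagated. The frames, the mapping and the propagation rule below are a made-up illustration under that reading, not the paper's formal construction.

```python
def propagate(evidence_mass, evidential_mapping):
    """Propagate a mass function over the evidence space through an
    evidential mapping: each evidence focal element distributes its mass
    over hypothesis subsets according to the mapping's strengths."""
    hyp_mass = {}
    for e_subset, m_e in evidence_mass.items():
        for h_subset, strength in evidential_mapping[e_subset]:
            hyp_mass[h_subset] = hyp_mass.get(h_subset, 0.0) + m_e * strength
    return hyp_mass

E_FRAME = frozenset({"alarm", "no_alarm"})   # hypothetical evidence space
H_FRAME = frozenset({"fault", "no_fault"})   # hypothetical hypothesis space

# For each evidence subset, a list of (hypothesis subset, strength) pairs summing to 1.
mapping = {
    frozenset({"alarm"}): [(frozenset({"fault"}), 0.8), (H_FRAME, 0.2)],
    E_FRAME:              [(H_FRAME, 1.0)],
}
evidence = {frozenset({"alarm"}): 0.7, E_FRAME: 0.3}
print(propagate(evidence, mapping))   # 0.56 on {'fault'}, 0.44 on the whole hypothesis frame
```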
Abstract:
Annotation of programs using embedded Domain-Specific Languages (embedded DSLs), such as the program annotation facility of the Java programming language, is a well-known practice in computer science. In this paper we argue for and propose a specialized approach to the use of embedded Domain-Specific Modelling Languages (embedded DSMLs) in Model-Driven Engineering (MDE) processes that, in particular, supports automated many-step model transformation chains. Information defined at some point using an embedded DSML may not be required in the next immediate transformation step, but only in a later one. We propose a new approach to model annotation enabling flexible many-step transformation chains. The approach utilizes a combination of embedded DSMLs, trace models and a megamodel. We demonstrate our approach on an example MDE process and an industrial case study.
Abstract:
Magnetic bright points (MBPs) in the internetwork are among the smallest objects in the solar photosphere and appear bright against the ambient environment. An algorithm is presented that can be used for the automated detection of MBPs in the spatial and temporal domains. The algorithm works by mapping the intergranular lanes through intensity thresholding. A compass search, combined with a study of the intensity gradient across the detected objects, allows the disentanglement of MBPs from bright pixels within the granules. Object growing is implemented to account for any pixels that might have been removed when mapping the lanes. The images are stabilized by locating long-lived objects that may have been missed due to variable light levels and seeing quality. Tests of the algorithm, employing data taken with the Swedish Solar Telescope, reveal that approximately 90 per cent of MBPs within a 75 × 75 arcsec² field of view are detected.
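The thresholding, lane-mapping and object-growing steps lend themselves to a compact sketch; the percentile thresholds, one-pixel growth and synthetic test frame below are placeholder choices for illustration, and the compass search, gradient test and image stabilisation described in the abstract are omitted.

```python
import numpy as np
from scipy import ndimage

def detect_candidates(image, lane_percentile=30, bright_percentile=95):
    """Simplified sketch: map dark intergranular lanes by intensity
    thresholding, keep bright connected objects lying on or beside a lane,
    and grow them by one pixel to recover trimmed edges."""
    lanes = image < np.percentile(image, lane_percentile)      # dark lane mask
    bright = image > np.percentile(image, bright_percentile)   # bright pixels
    labels, n_objects = ndimage.label(bright)
    keep = np.zeros_like(bright)
    for i in range(1, n_objects + 1):
        obj = labels == i
        if (ndimage.binary_dilation(obj) & lanes).any():       # object touches a lane?
            keep |= obj
    return ndimage.binary_dilation(keep)                       # crude stand-in for object growing

# Synthetic test frame: bright granulation, one dark lane, one bright point inside it.
frame = np.ones((64, 64))
frame[:, 30:33] = 0.2       # intergranular lane
frame[20:22, 31] = 2.0      # magnetic bright point candidate
print(detect_candidates(frame).sum(), "candidate pixels detected")
```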