12 results for 280213 Other Artificial Intelligence
in the Aston University Research Archive
Abstract:
We compare two methods for predicting inflation rates in Europe. One method uses a standard back propagation neural network and the other uses an evolutionary approach, where the network weights and the network architecture are evolved. Results indicate that back propagation produces superior results. However, the evolving network still produces reasonable results with the advantage that the experimental set-up is minimal. Also of interest is the fact that the Divisia measure of money is superior to the simple sum measure as a predictive tool.
Abstract:
This paper compares two methods to predict inflation rates in Europe. One method uses a standard back propagation neural network and the other uses an evolutionary approach, where the network weights and the network architecture are evolved. Results indicate that back propagation produces superior results. However, the evolving network still produces reasonable results with the advantage that the experimental set-up is minimal. Also of interest is the fact that the Divisia measure of money is superior to the simple sum measure as a predictive tool.
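The abstracts above do not give the data or model details, so the sketch below is only a rough illustration of the two approaches being compared: gradient-based (back propagation) training of a small network versus a simple evolutionary search over the same weights. The synthetic data, network size and hyperparameters are assumptions for illustration, not those used in the papers.

```python
# Illustrative sketch only: compares gradient-descent training of a tiny MLP
# with a simple (1+1) evolutionary search over its weights on synthetic data.
# The dataset, architecture and hyperparameters are invented for this example.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for monetary-aggregate inputs (not real Divisia data).
X = rng.normal(size=(200, 4))
true_w = np.array([0.5, -0.3, 0.2, 0.1])
y = np.tanh(X @ true_w) + 0.05 * rng.normal(size=200)

def init_params():
    return {"W1": rng.normal(scale=0.5, size=(4, 8)),
            "b1": np.zeros(8),
            "W2": rng.normal(scale=0.5, size=8),
            "b2": 0.0}

def forward(p, X):
    h = np.tanh(X @ p["W1"] + p["b1"])
    return h @ p["W2"] + p["b2"]

def mse(p, X, y):
    return np.mean((forward(p, X) - y) ** 2)

def backprop_train(steps=2000, lr=0.05):
    """Back propagation: follow (scaled) gradients of the squared error."""
    p = init_params()
    for _ in range(steps):
        h = np.tanh(X @ p["W1"] + p["b1"])
        err = (h @ p["W2"] + p["b2"]) - y
        d_W2 = h.T @ err / len(y)
        d_b2 = err.mean()
        d_h = np.outer(err, p["W2"]) * (1 - h ** 2)
        d_W1 = X.T @ d_h / len(y)
        d_b1 = d_h.mean(axis=0)
        p["W1"] -= lr * d_W1; p["b1"] -= lr * d_b1
        p["W2"] -= lr * d_W2; p["b2"] -= lr * d_b2
    return p

def evolve_train(generations=2000, sigma=0.05):
    """Evolutionary search: mutate all weights, keep the better candidate."""
    p = init_params()
    best = mse(p, X, y)
    for _ in range(generations):
        child = {k: v + sigma * rng.normal(size=np.shape(v)) for k, v in p.items()}
        c_err = mse(child, X, y)
        if c_err < best:
            p, best = child, c_err
    return p

print("backprop MSE :", mse(backprop_train(), X, y))
print("evolved  MSE :", mse(evolve_train(), X, y))
```

The evolutionary variant needs no gradient derivation at all, which is the "minimal experimental set-up" advantage the abstracts mention; in the papers the architecture is also evolved, which this sketch does not attempt.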
Abstract:
Machine breakdowns are one of the main sources of disruption and throughput fluctuation in highly automated production facilities. One element in reducing this disruption is ensuring that the maintenance team responds correctly to machine failures. It is, however, difficult to determine the current practice employed by the maintenance team, let alone suggest improvements to it. 'Knowledge based improvement' is a methodology that aims to address this issue, by (a) eliciting knowledge on current practice, (b) evaluating that practice and (c) looking for improvements. The methodology, based on visual interactive simulation and artificial intelligence methods, and its application to a Ford engine assembly facility are described. Copyright © 2002 Society of Automotive Engineers, Inc.
Abstract:
The performance of most operations systems is significantly affected by the interaction of human decision-makers. A methodology, based on the use of visual interactive simulation (VIS) and artificial intelligence (AI), is described that aims to identify and improve human decision-making in operations systems. The methodology, known as 'knowledge-based improvement' (KBI), elicits knowledge from a decision-maker via a VIS and then uses AI methods to represent decision-making. By linking the VIS and AI representation, it is possible to predict the performance of the operations system under different decision-making strategies and to search for improved strategies. The KBI methodology is applied to the decision-making surrounding unplanned maintenance operations at a Ford Motor Company engine assembly plant.
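The KBI papers specify their own AI representation; purely as a generic illustration of capturing elicited maintenance decisions in a learnable model, the sketch below fits a decision tree to a small invented table of failure scenarios and recorded responses. All feature names, data and thresholds are assumptions, not taken from the published work.

```python
# Generic illustration (not the published KBI implementation): learn a compact,
# inspectable representation of maintenance decisions from scenario/response
# pairs such as a visual interactive simulation might elicit.
from sklearn.tree import DecisionTreeClassifier, export_text
import numpy as np

# Columns: [machines_down, queue_length_at_buffer, fitter_available (0/1)]
scenarios = np.array([
    [1, 2, 1],
    [1, 8, 1],
    [2, 3, 0],
    [3, 9, 1],
    [1, 1, 0],
    [2, 7, 1],
    [3, 2, 0],
    [1, 9, 0],
])
# Response recorded from the decision-maker for each scenario (invented labels).
responses = ["defer", "repair_now", "defer", "repair_now",
             "defer", "repair_now", "escalate", "escalate"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(scenarios, responses)

# The learned rules can be inspected, discussed and replayed inside a simulation.
print(export_text(tree, feature_names=["machines_down", "queue_length", "fitter_available"]))
print(tree.predict([[2, 8, 1]]))   # predicted response for an unseen scenario
```

Linking such a model back into the simulation makes it possible to rerun the system under the learned strategy, or under hand-modified variants of it, and compare throughput.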
Abstract:
Humans consciously and subconsciously establish various links, form semantic images, reason in the mind, learn linking effects and rules, select linked individuals to interact with, and form closed loops through links while co-experiencing in multiple spaces throughout their lifetimes. Machines are limited in these abilities, although various graph-based models have been used to link resources in the cyber space. The following are fundamental limitations of machine intelligence: (1) machines know few links and rules in the physical space, physiological space, psychological space, socio space and mental space, so it is not realistic to expect machines to discover laws and solve problems in these spaces; and (2) machines can only process pre-designed algorithms and data structures in the cyber space. They are limited in their ability to go beyond the cyber space, to learn linking rules, to know the effect of linking, and to explain computing results according to physical, physiological, psychological and socio laws. Linking these various spaces will create a complex space: the Cyber-Physical-Physiological-Psychological-Socio-Mental Environment (CP3SME). Diverse spaces will emerge, evolve, compete and cooperate with each other to extend machine intelligence and human intelligence. From a multi-disciplinary perspective, this paper reviews previous ideas on various links, introduces the concept of the cyber-physical society, proposes the ideal of the CP3SME including its definition, characteristics, and multi-disciplinary revolution, and explores the methodology of linking through spaces for cyber-physical-socio intelligence. The methodology includes new models, principles, mechanisms, scientific issues, and a philosophical explanation. The CP3SME aims to be an ideal environment for humans to live and work in. This exploration will go beyond previous ideals of intelligence and computing.
Abstract:
Yorick Wilks is a central figure in the fields of Natural Language Processing and Artificial Intelligence. His influence extends to many areas and includes contributions to Machine Translation, word sense disambiguation, dialogue modeling and Information Extraction. This book celebrates the work of Yorick Wilks in the form of a selection of his papers, which are intended to reflect the range and depth of his work. The volume accompanies a Festschrift which celebrates his contribution to the fields of Computational Linguistics and Artificial Intelligence. The papers include early work carried out at Cambridge University, descriptions of groundbreaking work on Machine Translation and Preference Semantics, as well as more recent works on belief modeling and computational semantics. The selected papers reflect Yorick’s contribution to both practical and theoretical aspects of automatic language processing.
Abstract:
Expert systems, and artificial intelligence more generally, can provide a useful means for representing decision-making processes. By linking expert systems software to simulation software, an effective means of including these decision-making processes in a simulation model can be achieved. This paper demonstrates how a commercial-off-the-shelf simulation package (Witness) can be linked to an expert systems package (XpertRule) through a Visual Basic interface. The methodology adopted could be used for models, and possibly software, other than those presented here.
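The actual link described in the paper runs between Witness and XpertRule through a Visual Basic interface; none of those APIs are reproduced here. The sketch below only illustrates the general pattern of a simulation delegating each decision point to a separate rule component; all class names, state fields and rules are invented.

```python
# Generic sketch of the simulation/expert-system linking pattern only.
# It is not the Witness/XpertRule/Visual Basic interface used in the paper.

class RuleEngine:
    """Stand-in for an external expert system consulted at decision points."""
    def decide(self, state: dict) -> str:
        # Illustrative rules, not taken from the paper.
        if state["failed_machines"] >= 2 and state["fitters_free"] == 0:
            return "call_extra_fitter"
        if state["buffer_level"] < 5:
            return "repair_immediately"
        return "schedule_for_next_shift"

class Simulation:
    """Stand-in for a visual interactive simulation model."""
    def __init__(self, decision_maker: RuleEngine):
        self.decision_maker = decision_maker

    def run(self):
        # A few hard-coded machine-failure events instead of a real event queue.
        events = [
            {"failed_machines": 1, "fitters_free": 1, "buffer_level": 3},
            {"failed_machines": 2, "fitters_free": 0, "buffer_level": 12},
            {"failed_machines": 1, "fitters_free": 2, "buffer_level": 20},
        ]
        for state in events:
            action = self.decision_maker.decide(state)   # the "interface" call
            print(state, "->", action)

Simulation(RuleEngine()).run()
```

The point of the pattern is that the decision logic lives outside the simulation model, so it can be edited, validated or replaced without rebuilding the model itself.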
Abstract:
This collection of papers records a series of studies, carried out over a period of some 50 years, on two aspects of river pollution control - the prevention of pollution by the biological filtration of sewage and the monitoring of river pollution by biological surveillance. The earlier studies were carried out to develop methods of controlling flies which bred in the filters and caused serious nuisance, and a possible public health hazard, when they dispersed to surrounding villages. Although the application of insecticides proved effective as an alleviative measure, it was considered ecologically unsound as a long-term solution because it resulted in only a temporary disturbance of the ecological balance. Subsequent investigations showed that the fly populations in filters were largely determined by the amount of food available to the grazing larval stage in the form of filter film. It was also established that the winter deterioration in filter performance was due to the excessive accumulation of film. Further investigations were therefore carried out to determine the factors responsible for the accumulation of film in different types of filter. Methods of filtration that were thought to control film accumulation by increasing the flushing action of the sewage were found to control fungal film by creating nutrient-limiting conditions. In some filters, increasing the hydraulic flushing reduced the grazing fauna population in the surface layers and resulted in an increase in film. The results of these investigations were successfully applied in modifying filters and in the design of a Double Filtration process. These studies on biological filters led to the conclusion that they should be designed and operated as ecological systems and not merely as hydraulic ones. Studies on the effects of sewage effluents on Birmingham streams confirmed the findings of earlier workers, justifying their claims for using biological methods to detect and assess river pollution. Further ecological studies showed the sensitivity of benthic riffle communities to organic pollution. Using experimental channels and laboratory studies, the different environmental conditions associated with organic pollution were investigated. The degree and duration of the oxygen depletion during the dark hours were found to be a critical factor. The relative tolerance of different taxa to other pollutants, such as ammonia, differed. Although colonisation samplers proved of value in sampling difficult sites, the invertebrate data generated were not suitable for processing into any of the commonly used biotic indices. Several of the papers, written by request for presentation at conferences and similar events, presented the biological viewpoint on river pollution and water quality issues at the time and advocated the use of biological methods. The information and experience gained in these investigations were used as the "domain expert" in the development of artificial intelligence systems for use in the biological surveillance of river water quality.
Abstract:
Objective: Recently, much research has proposed nature-inspired algorithms for performing complex machine learning tasks. Ant colony optimization (ACO) is one such algorithm; it is based on swarm intelligence and is derived from a model inspired by the collective foraging behavior of ants. Taking advantage of ACO traits such as self-organization and robustness, this paper investigates ant-based algorithms for gene expression data clustering and associative classification. Methods and material: An ant-based clustering algorithm (Ant-C) and an ant-based association rule mining algorithm (Ant-ARM) are proposed for gene expression data analysis. The proposed algorithms make use of the natural behavior of ants, such as cooperation and adaptation, to allow a flexible, robust search for a good candidate solution. Results: Ant-C has been tested on three datasets selected from the Stanford Genomic Resource Database and achieved relatively high accuracy compared to other classical clustering methods. Ant-ARM has been tested on the acute lymphoblastic leukemia (ALL)/acute myeloid leukemia (AML) dataset and generated about 30 classification rules with high accuracy. Conclusions: Ant-C can generate an optimal number of clusters without incorporating any other algorithms such as K-means or agglomerative hierarchical clustering. For associative classification, while a few of the well-known algorithms such as Apriori, FP-growth and Magnum Opus are unable to mine any association rules from the ALL/AML dataset within a reasonable period of time, Ant-ARM is able to extract associative classification rules.
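Ant-C itself is defined in the paper; the sketch below is only a minimal Lumer-Faieta-style ant clustering on synthetic two-dimensional data, included to illustrate the pick-up/drop mechanism that ant-based clustering relies on. The data, grid size and probability constants are invented and are not the paper's parameters.

```python
# Minimal Lumer-Faieta-style ant clustering sketch on synthetic data.
# Generic illustration of ant-based clustering, not the Ant-C algorithm.
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "expression profile" groups instead of real gene data.
data = np.vstack([rng.normal(0.0, 0.3, size=(15, 2)),
                  rng.normal(2.0, 0.3, size=(15, 2))])
n_items, grid = len(data), 12
pos = {i: (rng.integers(grid), rng.integers(grid)) for i in range(n_items)}

def local_density(i, radius=2, alpha=0.8):
    """Similarity of item i to the items currently placed near it on the grid."""
    xi, yi = pos[i]
    neigh = [j for j in pos if j != i and abs(pos[j][0] - xi) <= radius
             and abs(pos[j][1] - yi) <= radius]
    if not neigh:
        return 0.0
    sims = [1 - np.linalg.norm(data[i] - data[j]) / alpha for j in neigh]
    return max(0.0, sum(sims) / ((2 * radius + 1) ** 2))

carried = None
for step in range(20000):
    if carried is None:
        i = rng.integers(n_items)
        f = local_density(i)
        if rng.random() < (0.1 / (0.1 + f)) ** 2:      # pick-up probability
            carried = i
    else:
        pos[carried] = (rng.integers(grid), rng.integers(grid))  # wander
        f = local_density(carried)
        if rng.random() < (f / (0.3 + f)) ** 2:        # drop probability
            carried = None

# Items from the same synthetic group should end up spatially close on the grid.
for i in sorted(pos, key=lambda k: pos[k]):
    print(f"item {i:2d}  group {i // 15}  cell {pos[i]}")
```

Because cluster membership emerges from where items are dropped, the number of clusters is not fixed in advance, which mirrors the abstract's point that Ant-C does not need K-means or a preset cluster count.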
Abstract:
Much research pursues machine intelligence through better representation of semantics. What is semantics? People in different areas view semantics from different facets, although it has accompanied interaction throughout civilization. Some researchers believe that humans have some innate structure in mind for processing semantics. Then, what is that structure like? Some argue that humans evolve a structure for processing semantics through constant learning. Then, what is that process like? Humans have invented various symbol systems to represent semantics. Can semantics be accurately represented? Turing machines are good at processing symbols according to algorithms designed by humans, but they are limited in their ability to process semantics and to interact actively. Super computers and high-speed networks do not help solve this issue, as they do not have any semantic worldview and cannot reflect on themselves. Can a future cyber-society have semantic images that enable machines and individuals (humans and agents) to reflect on themselves and interact with each other while knowing the social situation through time? This paper concerns these issues in the context of studying an interactive semantics for the future cyber-society. It first distinguishes social semantics from natural semantics, and then explores interactive semantics within the category of social semantics. Interactive semantics consists of an interactive system and its semantic image, which co-evolve and influence each other. The semantic worldview and the interactive semantic base are proposed as the semantic basis of interaction. The process of building and explaining semantic images can be based on an evolving structure incorporating an adaptive multi-dimensional classification space and a self-organized semantic link network. A semantic lens is proposed to enhance the potential of this structure and to help individuals build and retrieve semantic images from different facets, abstraction levels and scales through time.
Abstract:
Yorick Wilks is a central figure in the fields of Natural Language Processing and Artificial Intelligence. His influence extends to many areas of these fields and includes contributions to Machine Translation, word sense disambiguation, dialogue modeling and Information Extraction. This book celebrates the work of Yorick Wilks from the perspective of his peers. It consists of original chapters, each of which analyses an aspect of his work and links it to current thinking in that area. His work has spanned over four decades but is shown to be pertinent to recent developments in language processing such as the Semantic Web. This volume forms a two-part set together with Words and Intelligence I, Selected Works by Yorick Wilks, by the same editors.
Abstract:
The focus of our work is the verification of tight functional properties of numerical programs, such as showing that a floating-point implementation of Riemann integration computes a close approximation of the exact integral. Programmers and engineers writing such programs will benefit from verification tools that support an expressive specification language and that are highly automated. Our work provides a new method for verification of numerical software, supporting a substantially more expressive language for specifications than other publicly available automated tools. The additional expressivity in the specification language is provided by two constructs. First, the specification can feature inclusions between interval arithmetic expressions. Second, the integral operator from classical analysis can be used in the specifications, where the integration bounds can be arbitrary expressions over real variables. To support our claim of expressivity, we outline the verification of four example programs, including the integration example mentioned earlier. A key component of our method is an algorithm for proving numerical theorems. This algorithm is based on automatic polynomial approximation of non-linear real and real-interval functions defined by expressions. The PolyPaver tool is our implementation of the algorithm and its source code is publicly available. In this paper we report on experiments using PolyPaver that indicate that the additional expressivity does not come at a performance cost when compared with other publicly available state-of-the-art provers. We also include a scalability study that explores the limits of PolyPaver in proving tight functional specifications of progressively larger randomly generated programs. © 2014 Springer International Publishing Switzerland.
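PolyPaver's specification language and prover are not reproduced here; the sketch below only illustrates, with a self-contained interval type, the kind of inclusion property the abstract describes: a floating-point left Riemann sum is checked against an outward-rounded interval enclosure of the same sum. The integrand, step count and tolerance are assumptions chosen for the example.

```python
# Self-contained illustration of an interval-inclusion check; this is not
# PolyPaver's language or implementation, only the underlying idea.
from dataclasses import dataclass
import math

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, other):
        # Widen the result outward by one float step to stay conservative
        # (a real implementation would use directed rounding modes).
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))
    def __mul__(self, other):
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(prods), -math.inf),
                        math.nextafter(max(prods), math.inf))
    def contains(self, x: float) -> bool:
        return self.lo <= x <= self.hi

def enclose(x: float, eps: float = 1e-12) -> Interval:
    return Interval(x - eps, x + eps)

# Specification in the spirit of "the result is included in an interval
# expression": the floating-point left Riemann sum of f(x) = x*x over [0, 1]
# must lie inside the interval evaluation of the same sum.
def riemann_float(n: int) -> float:
    h = 1.0 / n
    return sum((i * h) ** 2 * h for i in range(n))

def riemann_interval(n: int) -> Interval:
    h = enclose(1.0 / n)
    total = Interval(0.0, 0.0)
    for i in range(n):
        x = enclose(i * (1.0 / n))
        total = total + (x * x) * h
    return total

n = 1000
result, bound = riemann_float(n), riemann_interval(n)
print(f"float result {result!r} inside [{bound.lo}, {bound.hi}] ?",
      bound.contains(result))
```

The check succeeds because the interval sum over-approximates every rounding effect of the floating-point sum; PolyPaver proves analogous inclusions symbolically, over all inputs, rather than for one concrete run as in this sketch.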