876 results for Artificial Intelligence, Constraint Programming, set variables, representation


Relevance:

100.00%

Publisher:

Abstract:

This report describes research on flow graphs: labeled, directed, acyclic graphs which abstract representations used in a variety of Artificial Intelligence applications. Flow graphs may be derived from flow grammars much as strings may be derived from string grammars; this derivation process forms a useful model for the stepwise-refinement processes used in programming and other engineering domains. The central result of this report is a parsing algorithm for flow graphs: given a flow grammar and a flow graph, the algorithm determines whether the grammar generates the graph and, if so, finds all possible derivations for it. The author has implemented the algorithm in LISP. The intent of this report is to make flow-graph parsing available as an analytic tool for researchers in Artificial Intelligence. The report explores the intuitions behind the parsing algorithm, contains numerous, extensive examples of its behavior, and provides some guidance for those who wish to customize the algorithm to their own uses.
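
As a rough illustration of the derivation process the report models (our own sketch, not the report's LISP implementation; all names here are hypothetical), a flow graph can be held as a labeled DAG, and a grammar production can refine one nonterminal node into a subgraph:

    from dataclasses import dataclass
    from typing import Dict, Set, Tuple

    @dataclass
    class FlowGraph:
        nodes: Dict[str, str]        # node id -> label
        edges: Set[Tuple[str, str]]  # directed (src, dst) edges

    @dataclass
    class Production:
        lhs: str         # nonterminal label this production rewrites
        rhs: FlowGraph   # replacement subgraph
        entry: str       # rhs node that inherits incoming edges
        exit: str        # rhs node that inherits outgoing edges

    def refine(g: FlowGraph, node: str, p: Production) -> FlowGraph:
        """One stepwise-refinement move: splice p.rhs in place of `node`."""
        assert g.nodes[node] == p.lhs, "production does not apply here"
        ren = lambda k: f"{node}.{k}"   # fresh ids for the spliced-in nodes
        nodes = {k: v for k, v in g.nodes.items() if k != node}
        nodes.update({ren(k): v for k, v in p.rhs.nodes.items()})
        edges = {(ren(a), ren(b)) for a, b in p.rhs.edges}
        for a, b in g.edges:
            if b == node:
                edges.add((a, ren(p.entry)))    # reroute into the entry
            elif a == node:
                edges.add((ren(p.exit), b))     # reroute out of the exit
            else:
                edges.add((a, b))
        return FlowGraph(nodes, edges)

    # Refine a single PROGRAM node into READ -> COMPUTE -> PRINT.
    g = FlowGraph({"n0": "PROGRAM"}, set())
    p = Production("PROGRAM",
                   FlowGraph({"r": "READ", "c": "COMPUTE", "w": "PRINT"},
                             {("r", "c"), ("c", "w")}),
                   entry="r", exit="w")
    print(refine(g, "n0", p))

Parsing, as described above, runs this process in reverse: given a derived graph, recover which sequences of refinements could have produced it.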

Relevance:

100.00%

Publisher:

Abstract:

Objects move, collide, flow, bend, heat up, cool down, stretch, compress, and boil. These and other things that cause changes in objects over time are intuitively characterized as processes. To understand common-sense physical reasoning, and to build programs that interact with the physical world as well as people do, we must understand qualitative reasoning about processes: when they will occur, what their effects are, and when they will stop. Qualitative Process theory defines a simple notion of physical process that appears useful as a language in which to write dynamical theories. Reasoning about processes also motivates a new qualitative representation for quantity in terms of inequalities, called the quantity space. This report describes the basic concepts of Qualitative Process theory and several different kinds of reasoning that can be performed with them, and discusses the theory's impact on other issues in common-sense reasoning about the physical world, such as causal reasoning and measurement interpretation. Several extended examples illustrate the utility of the theory, including figuring out that a boiler can blow up, that an oscillator with friction will eventually stop, and how to say that you can pull with a string but not push with it. The report also describes GIZMO, an implemented computer program which uses Qualitative Process theory to make predictions and interpret simple measurements. The representations and algorithms used in GIZMO are described in detail and illustrated with several examples.
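
The quantity space mentioned above lends itself to a very small illustration (ours, not GIZMO's actual representation): quantities are ordered only by asserted inequalities, comparisons follow by transitivity, and incomparable pairs are simply left undecided:

    from collections import defaultdict

    class QuantitySpace:
        """Partial order over quantities, built from inequality assertions."""
        def __init__(self):
            self.greater = defaultdict(set)   # x -> {y : x > y was asserted}

        def assert_greater(self, x, y):
            self.greater[x].add(y)

        def holds_greater(self, x, y):
            """Does x > y follow transitively from the assertions?"""
            stack, seen = [x], set()
            while stack:
                cur = stack.pop()
                if y in self.greater[cur]:
                    return True
                seen.add(cur)
                stack.extend(self.greater[cur] - seen)
            return False

    # Boiling example: the water's temperature sits between room temperature
    # and the boiling point, so a heating process is still active.
    qs = QuantitySpace()
    qs.assert_greater("T(boil)", "T(water)")
    qs.assert_greater("T(water)", "T(room)")
    assert qs.holds_greater("T(boil)", "T(room)")   # follows by transitivity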

Relevance:

100.00%

Publisher:

Abstract:

A method will be described for finding the shape of a smooth opaque object from a monocular image, given a knowledge of the surface photometry, the position of the light source, and certain auxiliary information to resolve ambiguities. This method is complementary to the use of stereoscopy, which relies on matching up sharp detail and will fail on smooth objects. Until now the image processing of single views has been restricted to objects which can meaningfully be considered two-dimensional or bounded by plane surfaces. It is possible to derive a first-order non-linear partial differential equation in two unknowns relating the intensity at the image points to the shape of the object. This equation can be solved by means of an equivalent set of five ordinary differential equations. A curve traced out by solving this set of equations for one set of starting values is called a characteristic strip. Starting one of these strips from each point on some initial curve will produce the whole solution surface. The initial curves can usually be constructed around so-called singular points. A number of applications of this method will be discussed, including one to lunar topography and one to the scanning electron microscope; in both cases great simplifications occur in the equations. A note on polyhedra follows, and a quantitative theory of facial make-up is touched upon. An implementation of some of these ideas on the PDP-6 computer with its attached image-dissector camera at the Artificial Intelligence Laboratory will be described, along with a nose-recognition program.
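
For reference, the standard formulation behind this passage (our reconstruction in modern notation, following Horn's published treatment, not text from the abstract): writing the surface as z(x, y) with slopes p and q, image intensity E, and reflectance map R, the image irradiance equation and its five characteristic strip equations are

    \begin{aligned}
    &E(x,y) = R(p,q), \qquad p = \frac{\partial z}{\partial x}, \quad q = \frac{\partial z}{\partial y},\\
    &\frac{dx}{ds} = R_p, \qquad
     \frac{dy}{ds} = R_q, \qquad
     \frac{dz}{ds} = p\,R_p + q\,R_q, \qquad
     \frac{dp}{ds} = E_x, \qquad
     \frac{dq}{ds} = E_y.
    \end{aligned}

Integrating these five ordinary differential equations from a starting point traces out exactly the characteristic strip described above.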

Relevance:

100.00%

Publisher:

Abstract:

Artificial Intelligence research involves the creation of extremely complex programs which must possess the capability to introspect, learn, and improve their expertise. Any truly intelligent program must be able to create procedures and to modify them as it gathers information from its experience. [Sussman, 1975] produced such a system for a 'mini-world', but truly intelligent programs must be considerably more complex. A crucial stepping stone in AI research is the development of a system which can understand complex programs well enough to modify them. There is also a complexity barrier in the world of commercial software which is making the cost of software production and maintenance prohibitive. Here, too, a system capable of understanding complex programs is a necessary step. The Programmer's Apprentice project [Rich and Shrobe, 1976] is attempting to develop an interactive programming tool which will help expert programmers deal with the complexity involved in engineering a large software system. This report describes REASON, the deductive component of the Programmer's Apprentice, intended to help expert programmers in the process of evolutionary program design. REASON utilizes the engineering techniques of modelling, decomposition, and analysis by inspection to determine how modules interact to achieve the desired overall behavior of a program. It coordinates its various sources of knowledge by using a dependency-directed structure which records the justification for each deduction it makes. Once a program has been analyzed, these justifications can be summarized into a teleological structure called a plan, which helps the system understand the impact of a proposed program modification.
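
A dependency-directed structure of this kind can be pictured with a small sketch (the names and shape are ours, not REASON's internals): each deduction keeps pointers to its justifications, so the consequences of revising one fact can be traced through its dependents:

    from dataclasses import dataclass, field

    @dataclass
    class Deduction:
        claim: str
        justifications: list = field(default_factory=list)  # supporting deductions
        dependents: list = field(default_factory=list)      # deductions resting on this one

    def deduce(claim: str, *supports: Deduction) -> Deduction:
        d = Deduction(claim, list(supports))
        for s in supports:
            s.dependents.append(d)
        return d

    def impacted(d: Deduction) -> list:
        """Every conclusion resting, directly or transitively, on d."""
        out = []
        for dep in d.dependents:
            out.append(dep.claim)
            out.extend(impacted(dep))
        return out

    spec  = deduce("sort returns a permutation of its input")
    lemma = deduce("merge preserves the multiset of elements", spec)
    top   = deduce("mergesort meets its specification", lemma)
    print(impacted(spec))   # changing the spec touches both downstream claims

Summarizing such justification chains, roughly speaking, is what produces the plan structure the abstract describes.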

Relevance:

100.00%

Publisher:

Abstract:

Lee, M. H. and Lacey, N. J. (2003). The Influence of Epistemology on the Design of Artificial Agents. Minds and Machines, 13(3), 367-395.

Relevance:

100.00%

Publisher:

Abstract:

Schierz, A. (2007). Monitoring knowledge: a text-based approach. Terminology, 13 (2), 125-154. Sponsorship: EPSRC DTG Project IQ, EU IST-FET FP6-516169

Relevance:

100.00%

Publisher:

Abstract:

R. Daly, Q. Shen and S. Aitken. Speeding up the learning of equivalence classes of Bayesian network structures. Proceedings of the 10th International Conference on Artificial Intelligence and Soft Computing, pages 34-39.

Relevance:

100.00%

Publisher:

Abstract:

R. Marti, C. Rubin, E. Denton and R. Zwiggelaar, '2D-3D correspondence in mammography', Cybernetics and Systems 35 (1), 85-105 (2004)

Relevance:

100.00%

Publisher:

Abstract:

Wireless Inertial Measurement Units (WIMUs) combine motion sensing, processing, and communications functions in a single device. Data gathered using these sensors can be converted into high-quality motion data, and by outfitting a subject with multiple WIMUs, full motion data can be gathered. With a cost of ownership potentially several orders of magnitude less than traditional camera-based motion capture, WIMU systems could be crucially important in supplementing or replacing traditional motion capture and in opening up entirely new application areas and potential markets, particularly in the rehabilitative, sports, and at-home healthcare spaces. Currently WIMUs are underutilized in these areas, and a major barrier to adoption is perceived complexity: sample rates, sensor types, and dynamic sensor ranges may need to be adjusted on multiple axes for each device depending on the scenario. We therefore present an advanced WIMU in conjunction with a Smart WIMU system that simplifies this aspect with three usage modes: Manual, Intelligent, and Autonomous. Attendees will be able to compare the three modes and see the effects of good and bad set-ups on the quality of data gathered, in real time.
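
The three modes can be pictured with a small configuration sketch (parameter names and defaults here are hypothetical, not the demo's actual API):

    from dataclasses import dataclass
    from enum import Enum

    class Mode(Enum):
        MANUAL = "manual"            # user fixes every parameter by hand
        INTELLIGENT = "intelligent"  # system proposes settings, user confirms
        AUTONOMOUS = "autonomous"    # system adapts settings to observed data

    @dataclass
    class WimuConfig:
        mode: Mode
        sample_rate_hz: int = 100    # per-device sample rate
        accel_range_g: int = 8       # accelerometer dynamic range
        gyro_range_dps: int = 2000   # gyroscope dynamic range

    # A bad manual set-up (e.g., a dynamic range too small for vigorous motion)
    # clips the signal; the Autonomous mode exists to avoid such mistakes.
    cfg = WimuConfig(Mode.AUTONOMOUS)
    print(cfg)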

Relevance:

100.00%

Publisher:

Abstract:

In decision-making problems where we need to choose a particular decision or alternative from a set of possible choices, we often have preferences which determine whether we prefer one decision over another. When these preferences give us a complete ordering on the decisions, it is easy to choose the best decision, or one of the best. Often, however, the preference relation is only a partial order, and there is no single best decision. In this thesis we look at what happens when we have such a partial order over a set of decisions, in particular when we have multiple orderings on a set of decisions, and we present a framework for qualitative decision making. We examine the different natural notions of optimal decision that arise in this framework, which give us different optimality classes, and we study the relationships between these classes. We then focus on a qualitative preference relation called Sorted-Pareto dominance, an extension of Pareto dominance, and we give a semantics for this relation as one that is compatible with any order-preserving mapping of an ordinal preference scale to a numerical one. We apply Sorted-Pareto dominance in a soft constraints setting, solving problems in which the soft constraints associate qualitative preferences with the decisions in a decision problem. We also examine the Sorted-Pareto dominance relation in the context of our qualitative decision-making framework, identifying the relevant optimality classes for the Sorted-Pareto case; these give us classes of decisions that are necessarily optimal, and classes optimal for some choice of mapping from the ordinal scale to a quantitative one. Finally, we provide some empirical analysis of Sorted-Pareto constraint problems and examine the optimality classes that result.
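
A minimal sketch of the Sorted-Pareto dominance test (our reading of the definition, not code from the thesis): each decision is scored by a vector of qualitative costs from one ordinal scale, both vectors are sorted worst-first, and dominance is checked componentwise:

    def sorted_pareto_dominates(a, b):
        """True iff a Sorted-Pareto dominates b: with both cost vectors sorted
        in decreasing (worst-first) order, every component of a is at most the
        corresponding component of b, and at least one is strictly smaller."""
        if len(a) != len(b):
            raise ValueError("decisions must be scored on the same number of criteria")
        sa, sb = sorted(a, reverse=True), sorted(b, reverse=True)
        return (all(x <= y for x, y in zip(sa, sb))
                and any(x < y for x, y in zip(sa, sb)))

    # Costs 1 (best) to 3 (worst): {1, 3} dominates {2, 3},
    # but {1, 3} and {2, 2} are incomparable -- a partial order.
    assert sorted_pareto_dominates([3, 1], [3, 2])
    assert not sorted_pareto_dominates([3, 1], [2, 2])
    assert not sorted_pareto_dominates([2, 2], [3, 1])

Because any order-preserving mapping to numbers preserves these comparisons, a decision that dominates in this sense stays at least as good under every such mapping, which is the semantics sketched above.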

Relevance:

100.00%

Publisher:

Abstract:

The contribution of buildings to total worldwide energy consumption in developed countries is between 20% and 40%. Heating, Ventilation and Air Conditioning (HVAC), and more specifically Air Handling Unit (AHU), energy consumption accounts on average for 40% of a typical medical device manufacturing or pharmaceutical facility's energy consumption. Studies have indicated that 20-30% energy savings are achievable by recommissioning HVAC systems, and more specifically AHU operations, to rectify faulty operation. Automated Fault Detection and Diagnosis (AFDD) is concerned with partially or fully automating this commissioning process through the detection of faults. An expert system is a knowledge-based system which employs Artificial Intelligence (AI) methods to replicate the knowledge of a human subject-matter expert in a particular field, such as engineering, medicine, finance, or marketing. This thesis details the research and development work undertaken in developing and testing a new AFDD expert system for AHUs which can be installed with minimal set-up time on a large cross-section of AHU types, in a building-management-system-vendor-neutral manner. Both simulated and extensive field testing were undertaken against a widely available, industry-known expert rule set, the Air Handling Unit Performance Assessment Rules (APAR), and a later, more developed version known as APAR_extended, in order to prove the system's effectiveness. Specifically, in tests against a dataset of 52 simulated faults, the new AFDD expert system identified all 52 derived issues, whereas the APAR rule set identified just 10. In tests using actual field data from 5 operating AHUs in 4 manufacturing facilities, the new AFDD expert system identified four individual fault case categories that the APAR method did not, and showed improvements in the area of fault diagnosis.
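
One representative rule, paraphrased from the published APAR rule set rather than taken from the thesis (the thresholds here are placeholders): in heating mode the supply air should leave warmer than the mixed air plus the temperature rise across the supply fan, so the opposite condition flags a heating-coil problem:

    def apar_heating_rule(t_supply_c, t_mixed_c, fan_rise_c=1.0, tol_c=0.5):
        """Return True if a fault is flagged while the AHU is in heating mode:
        supply air temperature below mixed air temperature plus fan rise."""
        return t_supply_c < t_mixed_c + fan_rise_c - tol_c

    assert apar_heating_rule(t_supply_c=18.0, t_mixed_c=20.0)       # flagged
    assert not apar_heating_rule(t_supply_c=35.0, t_mixed_c=20.0)   # heating normally

Per the abstract, the thesis's contribution is a rule base that catches fault categories this kind of fixed-threshold check misses.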

Relevance:

100.00%

Publisher:

Abstract:

A novel hybrid data-driven approach is developed for forecasting power system parameters, with the goal of increasing the efficiency of short-term forecasting of non-stationary time series. The proposed approach is based on mode decomposition and a feature analysis of the initial retrospective data using the Hilbert-Huang transform and machine learning algorithms. The random forests and gradient boosting trees learning techniques were examined, and the decision tree techniques were used to rank the importance of the variables employed in the forecasting models, with the Mean Decrease Gini index employed as the impurity function. The resulting hybrid forecasting models employ a radial basis function neural network and support vector regression. Apart from the introduction and references, the paper is organized as follows. The second section presents the background and a review of several approaches to short-term forecasting of power system parameters. In the third section, a hybrid machine-learning-based algorithm using the Hilbert-Huang transform is developed for short-term forecasting of power system parameters. The fourth section describes the decision tree learning algorithms used to assess variable importance. Finally, section six presents experimental results for the following electric power problems: active power flow forecasting, electricity price forecasting, and wind speed and direction forecasting.
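
The decomposition-and-ranking stage can be sketched as follows (the library choices are ours: the PyEMD package for empirical mode decomposition, and scikit-learn's impurity-based importances standing in for the paper's Mean Decrease Gini ranking; the paper's own pipeline may differ in detail):

    import numpy as np
    from PyEMD import EMD                      # pip install EMD-signal
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 500)
    load = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(500)  # toy series

    imfs = EMD()(load)                         # intrinsic mode functions

    # One lagged feature per mode; the forest's impurity-based importances
    # rank which modes matter most for the one-step-ahead forecast.
    lag = 3
    X = np.column_stack([imf[:-lag] for imf in imfs])
    y = load[lag:]
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    for i, imp in enumerate(model.feature_importances_):
        print(f"IMF {i}: importance {imp:.3f}")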

Relevance:

100.00%

Publisher:

Abstract:

As more diagnostic testing options become available to physicians, it becomes more difficult to combine the various types of medical information in order to optimize the overall diagnosis. To improve diagnostic performance, we introduce an approach that optimizes a decision-fusion technique for combining heterogeneous information, such as information from different modalities, feature categories, or institutions. For classifier comparison we used two performance metrics: the area under the receiver operating characteristic (ROC) curve (AUC) and the normalized partial area under the curve (pAUC). The study used four classifiers: linear discriminant analysis (LDA), an artificial neural network (ANN), and two variants of our decision-fusion technique, AUC-optimized (DF-A) and pAUC-optimized (DF-P) decision fusion. We applied each of these classifiers with 100-fold cross-validation to two heterogeneous breast cancer data sets: one of mass lesion features and a much more challenging one of microcalcification lesion features. For the calcification data set, DF-A outperformed the other classifiers in terms of AUC (p < 0.02), achieving AUC = 0.85 +/- 0.01, and DF-P surpassed the other classifiers in terms of pAUC (p < 0.01), reaching pAUC = 0.38 +/- 0.02. For the mass data set, DF-A outperformed both the ANN and the LDA (p < 0.04), achieving AUC = 0.94 +/- 0.01. Although for this data set there were no statistically significant differences among the classifiers' pAUC values (pAUC = 0.57 +/- 0.07 to 0.67 +/- 0.05, p > 0.10), DF-P did significantly improve specificity versus the LDA at both 98% and 100% sensitivity (p < 0.04). In conclusion, decision fusion directly optimized clinically significant performance measures such as AUC and pAUC, and sometimes outperformed two well-known machine-learning techniques when applied to two different breast cancer data sets.
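
A sketch of the two metrics and a brute-force fused classifier (the weight grid is our stand-in for the paper's optimization, and scikit-learn's max_fpr option computes a standardized partial AUC, which only approximates the pAUC used in the paper):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    y = rng.integers(0, 2, 300)                 # toy labels
    s1 = y + 0.8 * rng.standard_normal(300)     # scores from modality 1
    s2 = y + 1.2 * rng.standard_normal(300)     # scores from modality 2

    # DF-A in miniature: pick the fusion weight that maximizes AUC.
    best_w, best_auc = max(
        ((w, roc_auc_score(y, w * s1 + (1 - w) * s2))
         for w in np.linspace(0.0, 1.0, 101)),
        key=lambda pair: pair[1],
    )
    fused = best_w * s1 + (1 - best_w) * s2
    pauc = roc_auc_score(y, fused, max_fpr=0.10)  # high-specificity region
    print(f"w={best_w:.2f}  AUC={best_auc:.3f}  standardized pAUC={pauc:.3f}")

A DF-P analogue would simply use the partial-AUC line as the objective inside the same grid search.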

Relevance:

100.00%

Publisher:

Abstract:

The pragmatics of 'vegetarian' and 'carnivorous' exhibits an asymmetry, which we bring out by analyzing a newspaper report about vegetarian dog-owners imposing a vegetarian diet on their pets. More fundamental is the problem of partonomy versus containment, for which we attempt a naive but formal analysis applied to ingestion and the food chain, an issue we derive from the same text. Our formal tools belong to commonsense modelling, a domain of artificial intelligence related to extra-linguistic knowledge and pragmatics. We first provide an interpretation of the events analyzed and express it graphically in a semantic-network-related representation; we then propose an alternative expressed in terms of a modal logic, avoiding the full representational power of Hayes's "ontology for liquids".

Relevance:

100.00%

Publisher:

Abstract:

There are three main approaches to the representation of temporal information in the AI literature: the so-called method of temporal arguments, which simply extends the functions and predicates of a first-order language to include time as an additional argument; modal temporal logics, which are extensions of the propositional or predicate calculus with modal temporal operators; and reified temporal logics, which reify standard propositions of some initial language (e.g., classical first-order or modal logic) as objects denoting propositional terms. The objective of this paper is to provide an overview of the temporal reified approach by looking closely at some representative existing systems featuring reified propositions, including those of Allen, McDermott, Shoham, Reichgelt, Galton, and Ma and Knight. We demonstrate that, although reified logics may be more complicated for expressing assertions about given objects with respect to different times, they accord a special status to time and therefore have several distinct advantages for talking about some important issues which would be difficult (if not impossible) to express in the other approaches.
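
The contrast among the three approaches shows up already on a single toy assertion, "block a is on block b" (our example, not one from the paper):

    \begin{aligned}
    \text{temporal arguments:} \quad & On(a, b, t)\\
    \text{modal temporal logic:} \quad & \mathbf{F}\,On(a, b)\\
    \text{reified:} \quad & \mathit{HOLDS}(on(a, b), t)
    \end{aligned}

In the first, time t is just one more argument; in the second, "at some future time" is expressed by the operator F; in the third, the proposition on(a, b) is itself a term, so one can quantify over it and over the times at which it holds, which is the special status of time the paper argues for.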