973 results for Hierarchical censored production rule (HCPR)


Relevance:

100.00%

Publisher:

Abstract:

One common problem in all basic techniques of knowledge representation is handling the trade-off between the precision of inferences and resource constraints, such as time and memory. Michalski and Winston (1986) suggested the Censored Production Rule (CPR) as an underlying representation and computational mechanism to enable logic-based systems to exhibit variable precision, in which certainty varies while specificity stays constant. As an extension of CPR, the Hierarchical Censored Production Rules (HCPRs) system of knowledge representation, proposed by Bharadwaj & Jain (1992), exhibits both variable certainty and variable specificity and offers mechanisms for handling the trade-off between the two. An HCPR has the form: Decision If (preconditions) Unless (censor) Generality (general_information) Specificity (specific_information). As an attempt towards evolving a generalized knowledge representation, an Extended Hierarchical Censored Production Rules (EHCPRs) system is suggested in this paper. With the inclusion of new operators, an Extended Hierarchical Censored Production Rule (EHCPR) takes the general form: Concept If (Preconditions) Unless (Exceptions) Generality (General-Concept) Specificity (Specific Concepts) Has_part (default: structural-parts) Has_property (default: characteristic-properties) Has_instance (instances). It is shown how semantic networks and frames are represented in terms of EHCPRs. Multiple inheritance, inheritance with and without cancellation, recognition with partial match, and a few default logic problems are shown to be tackled efficiently in the proposed system.
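The rule form above can be sketched as a simple data structure. The field names mirror the operators listed in the abstract; the firing logic and the bird/penguin example are illustrative assumptions, not the authors' algorithm:

```python
from dataclasses import dataclass, field

@dataclass
class EHCPR:
    """Sketch of an Extended Hierarchical Censored Production Rule."""
    concept: str
    preconditions: list                                # must all hold to fire
    exceptions: list = field(default_factory=list)     # censors: block firing
    generality: str = ""                               # more general concept
    specificity: list = field(default_factory=list)    # more specific concepts
    has_part: list = field(default_factory=list)
    has_property: list = field(default_factory=list)
    has_instance: list = field(default_factory=list)

    def fires(self, facts: set) -> bool:
        # Variable precision in miniature: the rule fires when all
        # preconditions hold and no known censor (exception) holds.
        return (all(p in facts for p in self.preconditions)
                and not any(e in facts for e in self.exceptions))

flies = EHCPR("can_fly", ["is_bird"], exceptions=["is_penguin"])
print(flies.fires({"is_bird"}))                 # True
print(flies.fires({"is_bird", "is_penguin"}))   # False: censor holds
```

Skipping the `Unless` check when resources are short trades certainty for speed, which is the CPR idea the abstract describes.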


Relevance:

100.00%

Publisher:

Abstract:

Over the last 60 years, computers and software have enabled incredible advancements in every field. Nowadays, however, these systems are so complicated that it is difficult to determine whether they meet some requirement or are able to exhibit some desired behaviour or property. This dissertation introduces a Just-In-Time (JIT) a posteriori approach to conformance checking, identifying any deviation from the desired behaviour as soon as possible and, where possible, applying corrections. The declarative framework that implements our approach, developed entirely on the promising open-source forward-chaining Production Rule System (PRS) named Drools, consists of three components: 1. a monitoring module based on a novel, efficient implementation of Event Calculus (EC); 2. a general-purpose hybrid reasoning module (the first of its kind) merging temporal, semantic, fuzzy and rule-based reasoning; 3. a logic formalism based on the concept of expectations, introducing Event-Condition-Expectation rules (ECE-rules) to assess the global conformance of a system. The framework is also accompanied by an optional module that provides Probabilistic Inductive Logic Programming (PILP). By shifting the conformance check from after execution to just in time, this approach combines the advantages of many a posteriori and a priori methods proposed in the literature. Quite remarkably, if the corrective actions are explicitly given, the reactive nature of this methodology makes it possible to reconcile any deviation from the desired behaviour as soon as it is detected. In conclusion, the proposed methodology advances the solution of the conformance checking problem, helping to fill the gap between humans and increasingly complex technology.
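The Event-Condition-Expectation idea can be sketched in a few lines: an event raises an expectation with a deadline, and a missing fulfilment becomes a violation. The thesis implements this on Drools; the Python below, with its invented order-shipping rule, is only an illustrative analogue:

```python
# Minimal sketch of ECE-style conformance checking (assumed semantics):
# an event plus a condition raises an expectation; a later matching event
# fulfils it, otherwise passing the deadline records a violation.
class ExpectationMonitor:
    def __init__(self):
        self.pending = []       # (expected_event, deadline)
        self.violations = []

    def on_event(self, event, time, rules):
        # Fulfil any pending expectation matched by this event.
        self.pending = [(e, d) for (e, d) in self.pending if e != event]
        # ECE rules: (trigger, condition, expected_event, timeout).
        for (trigger, condition, expected, timeout) in rules:
            if event == trigger and condition():
                self.pending.append((expected, time + timeout))

    def check(self, time):
        # Just-in-time check: expired expectations become violations.
        for (e, d) in list(self.pending):
            if time > d:
                self.violations.append(e)
                self.pending.remove((e, d))

rules = [("order_placed", lambda: True, "order_shipped", 48)]
m = ExpectationMonitor()
m.on_event("order_placed", 0, rules)
m.check(50)                      # deadline 48 passed with no shipment event
print(m.violations)              # ['order_shipped']
```

Because the check runs during execution rather than after it, a corrective action could be attached at the point the violation is recorded, which is the reactive behaviour the abstract describes.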

Relevance:

100.00%

Publisher:

Abstract:

In attempting to build intelligent litigation support tools, we have moved beyond first-generation, production rule legal expert systems. Our work supplements rule-based reasoning with case-based reasoning and intelligent information retrieval. This research specifies an approach to the case-based retrieval problem which relies heavily on an extended object-oriented/rule-based system architecture that is supplemented with causal background information. Machine learning techniques and a distributed agent architecture are used to help simulate the reasoning process of lawyers. In this paper, we outline our implementation of the hybrid IKBALS II Rule Based Reasoning / Case Based Reasoning system. It makes extensive use of an automated case representation editor and background information.

Relevance:

100.00%

Publisher:

Abstract:

In attempting to build intelligent litigation support tools, we have moved beyond first-generation, production rule legal expert systems. Our work integrates rule based and case based reasoning with intelligent information retrieval. When using the case based reasoning methodology, or in our case the specialisation of case based retrieval, we need to be aware of how to retrieve relevant experience. Our research, in the legal domain, specifies an approach to the retrieval problem which relies heavily on an extended object oriented/rule based system architecture that is supplemented with causal background information. We use a distributed agent architecture to help support the reasoning process of lawyers. Our approach to integrating rule based reasoning, case based reasoning and case based retrieval is contrasted to the CABARET and PROLEXS architectures, which rely on a centralised blackboard architecture. We discuss in detail how our various cooperating agents interact, and provide examples of the system at work. The IKBALS system uses a specialised induction algorithm to induce rules from cases. These rules are then used as indices during the case based retrieval process. Because we aim to build legal support tools which can be modified to suit various domains rather than single purpose legal expert systems, we focus on principles behind developing legal knowledge based systems. The original domain chosen was the Accident Compensation Act 1989 (Victoria, Australia), which relates to the provision of benefits for employees injured at work. For various reasons, which are indicated in the paper, we changed our domain to that of the Credit Act 1984 (Victoria, Australia). This Act regulates the provision of loans by financial institutions. The rule based part of our system which provides advice on the Credit Act has been commercially developed in conjunction with a legal firm. We indicate how this work has led to the development of a methodology for constructing rule based legal knowledge based systems. We explain the process of integrating this existing commercial rule based system with the case based reasoning and retrieval architecture.
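The idea of inducing rules from cases and reusing them as retrieval indices can be sketched as follows. IKBALS's actual induction algorithm is not described in the abstract, so the one-condition "rules", the loan attributes and the case data below are all invented for illustration:

```python
# Sketch: induce trivial attribute=value -> outcome rules from cases and
# keep each rule's supporting cases as its index for later retrieval.
from collections import defaultdict

cases = [
    {"id": 1, "loan_type": "personal", "disclosed": False, "outcome": "breach"},
    {"id": 2, "loan_type": "personal", "disclosed": True,  "outcome": "no_breach"},
    {"id": 3, "loan_type": "mortgage", "disclosed": False, "outcome": "breach"},
]

# "Induction": each (attribute, value, outcome) rule indexes the cases
# that support it.
index = defaultdict(list)
for c in cases:
    for attr in ("loan_type", "disclosed"):
        index[(attr, c[attr], c["outcome"])].append(c["id"])

def retrieve(query, outcome):
    """Return ids of cases sharing an induced rule condition and outcome."""
    hits = set()
    for attr, value in query.items():
        hits.update(index.get((attr, value, outcome), []))
    return sorted(hits)

print(retrieve({"disclosed": False}, "breach"))   # [1, 3]
```

The point of the pattern is that retrieval walks the induced rules rather than scanning every past case, which is how rule induction and case based retrieval cooperate in the system described above.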

Relevance:

100.00%

Publisher:

Abstract:

Texts in the work of a city department: A study of the language and context of benefit decisions. This dissertation examines documents granting or denying access to municipal services. The data consist of decisions on transport services made by the Social Services Department of the City of Helsinki. The circumstances surrounding official texts and their language and production are studied through textual analysis and interviews. The dissertation describes the textual features of the above decisions, and seeks to explain such features. Also explored are the topics and methods of genre studies, especially the relationship between text and context. Although the approach is linguistic, the dissertation also touches on research in social work and administrative decision making, and contributes to more general discussion on the language and duties of public administration. My key premise is that a text is more than a mere psycholinguistic phenomenon. Rather, a text is also a physical object and the result of certain production processes. This dissertation thus not only describes genre-specific features, but also sheds light on the work that generates the texts examined. Textual analysis and analyses of discursive practices are linked through an analysis of intertextuality: written decisions are compared with other application documents, such as expert statements and the applications themselves. The study shows that decisions are texts governed by strict rules and written with modest resources. Textwork is organised as hierarchical mass production. The officials who write decisions rely on standard phrases extracted from a computer system. This allows them to produce texts of uniform quality which have been approved by the department's legal experts. Using a computer system in text production does not, however, serve all the needs of the writers. This leads to many problems in the texts themselves.
Intertextual analysis indicates that medical argumentation weighs most heavily in an application process, although a social appraisal should be carried out when deciding on applications for transport services. The texts reflect a hierarchy in which a physician ranks above the applicant, and the department's own expert physician ranks above the applicant's physician. My analysis also highlights good but less obvious practices. The social workers and secretaries who write decisions must balance conflicting demands. They use delicate linguistic means to adjust the standard phrases to suit individual cases, and employ subtle strategies of politeness. The dissertation suggests that the customer contact staff who write official texts should be allowed to make better use of their professional competence. A more general concern is that legislation and new management strategies require more and more documentation. Yet, textwork is only rarely taken into account in the allocation of resources.

Keywords: (Critical) text analysis, genre analysis, administration, social work, administrative language, texts, genres, context, intertextuality, discursive practices

Relevance:

100.00%

Publisher:

Abstract:

In order to develop the Upper Ng reservoir of the Chengdao oilfield, located in a shallow sea, rapidly and efficiently, this paper builds up a series of theories and methods for predicting and describing the reservoir in the early period of oilfield development. The conclusions are as follows. 1. For the first time, a suite of techniques for fine geological modelling of channel-sand reservoirs based mainly on seismic methods has been formed. These techniques include logging-constrained seismic inversion, full three-dimensional seismic interpretation, seismic attribute analysis and so on, which are used for three-dimensional prediction of the distribution, structure and properties of the channel sand bodies from abundant seismic information and a small quantity of drilling and logging information in the early stage of oilfield development. It is also the first time these methods have been applied to production and to the high-speed development of a shallow-sea oilfield. The predicted sand bodies were revised with data from new wells, establishing a new reservoir-prediction approach of tracked inversion. The techniques performed very well: for approximately 200 wells in 30 well groups in the Chengdao oilfield, the drilling success rate for the predicted sand bodies reached 100%, and the error in total thickness was only 8%. 2. The author advanced the idea and methods of predicting residual oil at the early stage of production. Based on well and seismic data, sediment units were correlated by cycle-correlation and classification-control methods, and normalized, finely interpreted well logs and sedimentary microfacies were obtained.
In regions with few wells, using the logging-constrained inversion technique and treating completed production wells as new constraints, the sand body distribution and its properties were predicted again, yielding a three-dimensional pool geologic model including structure, reservoir, fluid, reservoir engineering parameters and production dynamics. Based on this reservoir geologic model, the reservoir engineering design was optimized; tracking numerical reservoir simulation was carried out using the dynamic data (pressure, yield and water content) of the development wells; the production rule and the oil-water distribution rule were traced; and the distribution of the remaining oil was predicted and controlled. A dynamic reservoir modelling method for the middle stage of development was worked out. Based on new drilling data, the static reservoir geologic model was continually modified, and research on flow units was carried out, including identifying flow units, evaluating their capability and establishing a fine flow-unit model. According to the dynamic production data and well-testing data, dynamic tracking reservoir description was realized through constant modification of the reservoir geologic model constrained by these dynamic data, using well-testing theory and numerical reservoir simulation. A dynamic tracking reservoir model was built and used to track and survey the remaining oil in the early period. A reservoir engineering tracking analysis technique for shallow-sea oilfields was thus founded. After reconstructing the structural history of the Chengdao area since the Tertiary by the balanced-section technique and estimating the activity of the Chengbei fault by sealing-fault analysis, the meandering-stream sedimentary pattern of the Upper Ng was established, in which the meandering border was the uppermost reservoir unit.
Based on the low compositional and textural maturity of the rocks, the author identified three kinds of pore-structure pattern in the Guanshang member of the Chengdao oilfield, in which the storage space is mainly primary intergranular pores, with a small amount of secondary dissolution pores and tiny intercrystalline pores, and the throat types are mainly sheet-shaped and contracted-neck-shaped. The positive rhythm is the principal type, including the simple positive rhythm, the complex positive rhythm and the compound rhythm. The interbeds are mainly widely distributed mudstone; the physical-property and calcite interbeds are locally distributed. 5. The author synthetically analyzed the influence of micro-heterogeneity, macro-heterogeneity and structural heterogeneity on the waterflood development of the oilfield. Waterflood efficiency is good in small structures that are convex or flat at top and bottom, where water breakthrough occurs quickly in oil wells at the high part of the structure when water is injected at the low part, and waterflood efficiency is poor in small structures that are concave at top and bottom. The remaining oil is controlled by sedimentary facies: waterflood efficiency is good in the channel border or channel bar and poor in the floodplain or the levee. The barriers and interlayers have little influence on reservoirs without an obvious positive rhythm, in which the remaining oil is commonly located within 1-3 metres below a barrier or interlayer, where waterflood efficiency is lower.

Relevance:

100.00%

Publisher:

Abstract:

SMARTFIRE is a fire field model based on an open-architecture integrated CFD code and knowledge-based system. It makes use of the expert system to assist the user in setting up the problem specification, and of new computational techniques such as Group Solvers to reduce the computational effort involved in solving the equations. This paper concentrates on recent research into the use of artificial intelligence techniques to assist in the dynamic solution control of fire scenarios being simulated using fire field modelling techniques. This is designed to improve the convergence capabilities of the software while further decreasing the computational overheads. The technique automatically controls solver relaxation using an integrated production rule engine with a blackboard to monitor and implement the required control changes during solution processing. Initial results for a two-dimensional fire simulation are presented that demonstrate the potential for considerable savings in simulation run-times when compared with control sets from various sources. Furthermore, the results demonstrate enhanced solution reliability due to obtaining acceptable convergence within each time step, unlike some of the comparison simulations.
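The production-rule-plus-blackboard control loop can be sketched generically: rules read convergence data from a shared blackboard and adjust the relaxation factor. The thresholds, rule bodies and conflict-resolution strategy below are invented for illustration, not SMARTFIRE's actual control sets:

```python
# Hedged sketch of rule-based dynamic solution control: production rules
# inspect residuals on a shared "blackboard" and tune solver relaxation.
blackboard = {"residual": 1.9, "prev_residual": 2.0, "relaxation": 0.7}

def rule_diverging(bb):
    # Residual growing -> reduce relaxation to stabilise the solve.
    if bb["residual"] > bb["prev_residual"]:
        bb["relaxation"] = max(0.1, bb["relaxation"] - 0.1)
        return True
    return False

def rule_converging_slowly(bb):
    # Residual shrinking by less than 10% -> loosen relaxation slightly.
    if bb["prev_residual"] > bb["residual"] > 0.9 * bb["prev_residual"]:
        bb["relaxation"] = min(1.0, bb["relaxation"] + 0.05)
        return True
    return False

def control_step(bb, rules):
    # Fire the first applicable rule each sweep (simple conflict resolution).
    for rule in rules:
        if rule(bb):
            break

control_step(blackboard, [rule_diverging, rule_converging_slowly])
print(round(blackboard["relaxation"], 2))   # 0.75: slow convergence detected
```

Running such a step between solver sweeps is what lets the engine react to convergence behaviour during solution processing rather than relying on fixed relaxation settings.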

Relevance:

100.00%

Publisher:

Abstract:

The automatic implementation of decoders for visual perception is achieved as follows. The action described by a production rule is realized by means of a decoder in which a pattern of connections corresponds to that of stimuli. According to S. Karasawa (Proc. of CCCT, Vol. 5, pp. 194-1999, Austin, Texas, August 2004), each programmable connection among inputs is realized by a floating-gate avalanche-injection MOSFET, where inverted signals are used at writing, and the detection of matching between inputs and connections is carried out by using a signal source in which the low-level signal is provided via a comparatively smaller resistance than the high-level one.
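A toy software analogue of the match-detection step may help: a stored connection pattern "fires" only when every programmed connection sees an active input. The floating-gate and wired-logic details are hardware and are not modelled; the function and example patterns below are purely illustrative:

```python
# Software analogue (an assumption, not the paper's circuit) of the
# decoder's matching test between a connection pattern and a stimulus.
def matches(connections, stimulus):
    """A cell fires when every programmed connection sees an active input."""
    return all(stimulus[i] for i in connections)

pattern = {0, 2}                     # programmed connections (the rule's LHS)
print(matches(pattern, [1, 0, 1]))   # True: inputs 0 and 2 are active
print(matches(pattern, [1, 1, 0]))   # False: input 2 is inactive
```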

Relevance:

100.00%

Publisher:

Abstract:

In this paper we design and implement a traffic accident analysis system (TAAS). TAAS is a system for traffic accident analysis which uses traffic rules (law) as its knowledge source to judge whether a driver is responsible for a traffic accident. TAAS separates the knowledge base from the inference engine and uses production rules with backward chaining. In addition, TAAS uses predefined text and program tracing to implement an explanation mechanism.
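Backward chaining over production rules, the inference style named above, can be sketched in a few lines. The rules and facts are invented for illustration; the abstract does not show TAAS's actual rule base:

```python
# Minimal backward chaining: to prove a goal, either find it among the
# facts or find a rule whose antecedents can all be proved recursively.
rules = {
    # goal: list of alternative antecedent sets (each set is a conjunction)
    "driver_responsible": [["ran_red_light"], ["speeding", "caused_collision"]],
}
facts = {"speeding", "caused_collision"}

def prove(goal, rules, facts):
    """True if goal is a known fact or some rule's antecedents all prove."""
    if goal in facts:
        return True
    for antecedents in rules.get(goal, []):
        if all(prove(a, rules, facts) for a in antecedents):
            return True
    return False

print(prove("driver_responsible", rules, facts))   # True: second rule fires
```

Recording which rules fired along the way is one simple basis for the kind of tracing-based explanation mechanism the abstract mentions.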

Relevance:

100.00%

Publisher:

Abstract:

Digital factory is a concept that offers a collaborative approach to enhancing product and production engineering processes through simulation. Products, processes and resources are modeled and used to develop and test the product conception and manufacturing processes before their use in the real factory. The purpose of this paper is to present the steps to identify the Critical Success Factor (CSF) priorities in a digital factory project implementation in a Brazilian company, and how the Delphi and AHP methods help to identify these CSF priorities. Copyright © 2008 SAE International.
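The AHP step mentioned above, deriving priority weights from pairwise comparisons, can be sketched with the standard column-normalisation approximation. The three factors and the judgement values are invented for illustration and are not the paper's data:

```python
# Hedged sketch of AHP priority derivation: normalise each column of the
# pairwise-comparison matrix, then average across each row.
def ahp_priorities(matrix):
    """Approximate AHP priority weights (column normalisation method)."""
    n = len(matrix)
    col_sums = [sum(matrix[r][c] for r in range(n)) for c in range(n)]
    return [sum(matrix[r][c] / col_sums[c] for c in range(n)) / n
            for r in range(n)]

# Pairwise judgements on Saaty's 1-9 scale for three hypothetical CSFs,
# e.g. management support vs. staff training vs. data quality.
m = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 1/2.0, 1.0]]
w = ahp_priorities(m)
print([round(x, 2) for x in w])   # [0.65, 0.23, 0.12]
```

The weights sum to one and rank the factors, which is exactly the "priorities" output the CSF analysis needs; the full AHP also checks the consistency of the judgements, which this sketch omits.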

Relevance:

100.00%

Publisher:

Abstract:

Interactive TV technology has been addressed in many previous works, but there is sparse research on the topic of interactive content broadcasting and how to support the production process. In this article, the interactive broadcasting process is broadly defined to include studio technology and digital TV applications at consumer set-top boxes. In particular, augmented reality studio technology employs smart projectors as light sources and blends real scenes with interactive computer graphics that are controlled at end-user terminals. Moreover, TV producer-friendly multimedia authoring tools empower the development of novel TV formats. Finally, support for user-contributed content raises the potential to revolutionize the hierarchical TV production process by introducing the viewer as part of the content delivery chain.

Relevance:

100.00%

Publisher:

Abstract:

Classification of metamorphic rocks is normally carried out using a poorly defined, subjective classification scheme, making this an area in which many undergraduate geologists experience difficulties. An expert system to assist in such classification is presented which is capable of classifying rocks and also giving further details about a particular rock type. A mixed knowledge representation is used, with frame, semantic and production rule systems available. Classification in the domain requires that different facets of a rock be classified. To implement this, rocks are represented by 'context' frames with slots representing each facet. Slots are satisfied by calling a pre-defined ruleset to carry out the necessary inference. The inference is handled by an interpreter which uses a dependency graph representation for the propagation of evidence. Uncertainty is handled by the system using a combination of the MYCIN certainty factor system and the Dempster-Shafer range mechanism. This allows for positive and negative reasoning, with rules capable of representing necessity and sufficiency of evidence, whilst also allowing the implementation of an alpha-beta pruning algorithm to guide question selection during inference. The system also utilizes a semantic net type structure to allow the expert to encode simple relationships between terms, enabling rules to be written with a sensible level of abstraction. Using frames to represent rock types where subclassification is possible allows the knowledge base to be built in a modular fashion, with subclassification frames only defined once the higher level of classification is functioning. Rulesets can similarly be added in modular fashion, with the individual rules being essentially declarative, allowing for simple updating and maintenance.
The knowledge base so far developed for metamorphic classification serves to demonstrate the performance of the interpreter design whilst also moving some way towards providing a useful assistant to the non-expert metamorphic petrologist. The system demonstrates the possibilities for a fully developed knowledge base to handle the classification of igneous, sedimentary and metamorphic rocks. The current knowledge base and interpreter have been evaluated by potential users and experts. The results of the evaluation show that the system performs to an acceptable level and should be of use as a tool for both undergraduates and researchers from outside the metamorphic petrography field.
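The MYCIN certainty-factor combination that the system's uncertainty handling builds on can be shown concretely; the Dempster-Shafer range mechanism is omitted here, and the rock hypothesis and CF values are illustrative:

```python
# The standard MYCIN rule for combining two certainty factors (in [-1, 1])
# that bear on the same hypothesis.
def combine_cf(a, b):
    if a >= 0 and b >= 0:
        return a + b * (1 - a)            # both supportive
    if a < 0 and b < 0:
        return a + b * (1 + a)            # both opposing
    return (a + b) / (1 - min(abs(a), abs(b)))   # mixed evidence

# Two rules support a hypothetical "greenschist" classification with
# CF 0.6 and 0.4; a third rule weakly opposes it with CF -0.2.
cf = combine_cf(0.6, 0.4)    # 0.76
cf = combine_cf(cf, -0.2)
print(round(cf, 2))          # 0.7
```

Because the combination is order-independent and keeps results in [-1, 1], evidence can be propagated incrementally through the dependency graph as each rule fires, supporting the positive and negative reasoning described above.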

Relevance:

100.00%

Publisher:

Abstract:

This paper describes two new techniques designed to enhance the performance of fire field modelling software. The two techniques are "group solvers" and automated dynamic control of the solution process, both of which are currently under development within the SMARTFIRE Computational Fluid Dynamics environment. The "group solver" is a derivation of common solver techniques used to obtain numerical solutions to the algebraic equations associated with fire field modelling. The purpose of "group solvers" is to reduce the computational overheads associated with traditional numerical solvers typically used in fire field modelling applications. In an example, discussed in this paper, the group solver is shown to provide a 37% saving in computational time compared with a traditional solver. The second technique is the automated dynamic control of the solution process, which is achieved through the use of artificial intelligence techniques. This is designed to improve the convergence capabilities of the software while further decreasing the computational overheads. The technique automatically controls solver relaxation using an integrated production rule engine with a blackboard to monitor and implement the required control changes during solution processing. Initial results for a two-dimensional fire simulation are presented that demonstrate the potential for considerable savings in simulation run-times when compared with control sets from various sources. Furthermore, the results demonstrate the potential for enhanced solution reliability due to obtaining acceptable convergence within each time step, unlike some of the comparison simulations.