961 results for schema-based reasoning


Relevance: 30.00%

Publisher:

Abstract:

The focus of this paper is on handling non-monotone information in the modelling process of a single-input target monotone system. On the one hand, the monotonicity property is a piece of useful prior (or additional) information which can be exploited for modelling a monotone target system. On the other hand, it is difficult to model a monotone system if the available information is not monotonically ordered. In this paper, an interval-based method for analysing non-monotonically ordered information is proposed. The applicability of the proposed method to handling a non-monotone function, a non-monotone data set, and an incomplete and/or non-monotone fuzzy rule base is presented. The upper and lower bounds of the interval are first defined. The region governed by the interval is explained as a coverage measure, whose size represents the uncertainty pertaining to the available information. The proposed approach constitutes a new method to transform non-monotonic information into an interval-valued monotone system. The proposed interval-based method for handling an incomplete and/or non-monotone fuzzy rule base constitutes a new fuzzy reasoning approach.
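
As a rough illustration of the interval idea (a hedged sketch, not the paper's actual formulation), a non-monotone sequence can be enclosed between its tightest pair of monotone non-decreasing bounds, with the total interval width acting as a coverage-style uncertainty measure. Function names are invented for illustration:

```python
def monotone_interval(ys):
    """Return (lower, upper) monotone non-decreasing bounds covering ys."""
    # Upper bound: smallest non-decreasing sequence >= ys (running max).
    upper, cur = [], float("-inf")
    for y in ys:
        cur = max(cur, y)
        upper.append(cur)
    # Lower bound: largest non-decreasing sequence <= ys
    # (running min taken from the right).
    lower, cur = [None] * len(ys), float("inf")
    for i in range(len(ys) - 1, -1, -1):
        cur = min(cur, ys[i])
        lower[i] = cur
    return lower, upper

def coverage(lower, upper):
    """Total interval width: a rough measure of the uncertainty."""
    return sum(u - l for l, u in zip(lower, upper))

lo, up = monotone_interval([1, 3, 2, 5, 4])
# lo = [1, 2, 2, 4, 4], up = [1, 3, 3, 5, 5]; every point of the
# original sequence lies inside the interval [lo[i], up[i]].
```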

Relevance: 30.00%

Publisher:

Abstract:

Mathematical reasoning has been emphasised as one of the key proficiencies for mathematics in the Australian curriculum since 2011 and in the Canadian curriculum since 2007. This study explores primary teachers' perceptions of mathematical reasoning at a time of further curriculum change. Twenty-four primary teachers from Canada and Australia were interviewed after engaging in the first stage of the Mathematical Reasoning Professional Learning Program, which incorporated demonstration lessons focused on reasoning conducted in their schools. Phenomenographic analysis of the interview transcripts, exploring variation in these teachers' perceptions of mathematical reasoning, revealed seven categories of description based on four dimensions of variation. The categories delineate the different perceptions of mathematical reasoning expressed by the participants of this study. The resulting outcome space establishes a framework that facilitates the tracking of growth in primary teachers' awareness of aspects of mathematical reasoning.

Relevance: 30.00%

Publisher:

Abstract:

This paper presents a new Fuzzy Inference System (FIS)-based Risk Priority Number (RPN) model for the prioritization of failures in Failure Mode and Effect Analysis (FMEA). In FMEA, the monotonicity property of the RPN scores is important. To maintain the monotonicity property of an FIS-based RPN model, a complete and monotonically-ordered fuzzy rule base is necessary. However, it is impractical to gather all (potentially a large number of) fuzzy rules from FMEA users. In this paper, we introduce a new two-stage approach to reduce the number of fuzzy rules that need to be gathered and to satisfy the monotonicity property. In stage 1, a Genetic Algorithm (GA) is used to search for a small set of fuzzy rules to be gathered from FMEA users. In stage 2, the remaining fuzzy rules are deduced approximately by a monotonicity-preserving similarity reasoning scheme. The monotonicity property is exploited as additional qualitative information for constructing the FIS-based RPN model. To assess the effectiveness of the proposed approach, a real case study with information collected from a semiconductor manufacturing plant is conducted. The outcomes indicate that the proposed approach is effective in developing an FIS-based RPN model with only a small set of fuzzy rules, which is able to satisfy the monotonicity property for prioritization of failures in FMEA.
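
The stage-2 idea can be caricatured in a few lines. The sketch below is an assumed simplification of monotonicity-preserving deduction, not the paper's similarity reasoning scheme: a missing rule consequent is filled with the smallest value that keeps the rule base monotone in both inputs (the function names and the tiny 2x2 rule table are invented for illustration):

```python
def is_monotone(table, n):
    """True if consequents never decrease along either input axis
    of an n-by-n rule table keyed by (i, j)."""
    for i in range(n):
        for j in range(n):
            if i + 1 < n and table[(i + 1, j)] < table[(i, j)]:
                return False
            if j + 1 < n and table[(i, j + 1)] < table[(i, j)]:
                return False
    return True

def deduce_missing(table, cell):
    """Fill one missing cell with the smallest value that preserves
    monotonicity: the max over its known 'lower' neighbours."""
    i, j = cell
    lower = [table.get((i - 1, j)), table.get((i, j - 1))]
    table[cell] = max(v for v in lower if v is not None)
    return table

# (severity, occurrence) -> consequent score; one rule is missing.
rules = {(0, 0): 1, (0, 1): 2, (1, 1): 3}
deduce_missing(rules, (1, 0))   # deduces rules[(1, 0)] == 1
```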

Relevance: 30.00%

Publisher:

Abstract:

Communication is an important area in health professional education curricula; however, it has been dealt with as a set of discrete skills that can be learned and taught separately from the underlying thinking. Communication of clinical reasoning is a phenomenon that has largely been ignored in the literature. This research sought to examine how experienced physiotherapists communicate their clinical reasoning and to identify the core processes of this communication. A hermeneutic phenomenological research study was conducted using multiple methods of text construction, including repeated semi-structured interviews, observation and written exercises. Hermeneutic analysis involved iterative reading and interpretation of the texts, with the development of themes and sub-themes. Communication of clinical reasoning was perceived to be complex, dynamic and largely automatic. A key finding was that articulated reasoning (particularly during research) does not completely represent actual reasoning processes but is a (re)construction of the more complex, rapid and multi-layered processes that operate in practice. These communications are constructed in ways that are perceived as being most relevant to the audience, context and purpose of the communication. Five core components of communicating clinical reasoning were identified: active listening, framing and presenting the message, matching the co-communicator, metacognitive aspects of communication, and clinical reasoning abilities. We propose that communication of clinical reasoning is both an inherent part of reasoning and an essential, complementary skill shaped by the contextual demands of the task and situation. In this way clinical reasoning and its communication are intertwined, providing evidence for the argument that they should be learned (and explicitly taught) in synergy and in context.

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: Problem-based learning (PBL) was developed as a facilitated small-group learning process based around a clinical problem. Originally designed for the pre-clinical years of medical education, its application across all years poses a number of difficulties, including the risk of reduced patient contact, a learning process skewed towards an understanding of pathophysiological processes (which may not be well understood in all areas of medicine), and a failure to provide exposure to clinically relevant reasoning skills. CONTEXT: Curriculum review identified dissatisfaction with PBLs in the clinical years of the Sydney Medical School's Graduate Medical Program, from both staff and students. A new model was designed and implemented in the Psychiatry and Addiction Medicine rotation, and is currently being evaluated. INNOVATION: We describe an innovative model of small-group, student-generated, case-based learning in psychiatry - clinical reasoning sessions (CRS) - led by expert facilitators. IMPLICATIONS: The CRS format returns the student to the patient, emphasises clinical assessment skills and considers treatment in the real-world context of the patient. Students practise, with real patients, a more sophisticated reasoning process modelled upon that of their expert tutor. This has increased student engagement compared with the previous PBL programme.

Relevance: 30.00%

Publisher:

Abstract:

Combining goal-oriented and use case modeling has been proven to be an effective method in requirements elicitation and elaboration. To ensure the quality of such modeled artifacts, a detailed model analysis needs to be performed. However, current requirements engineering approaches generally lack reliable support for automated analysis of consistency, correctness and completeness (the "3Cs" problems) between and within goal models and use case models. In this paper, we present a goal-use case integration framework with tool support to automatically identify such 3Cs problems. Our new framework relies on ontologies of domain knowledge and semantics, together with our goal-use case integration meta-model. Moreover, functional grammar is employed to enable the semi-automated transformation of natural language specifications into Manchester OWL Syntax for automated reasoning. The evaluation of our tool support shows that, for representative example requirements, our approach achieves soundness and completeness rates of over 85% and detects more problems than the benchmark applications.
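
A hedged sketch of what one "3Cs" check (completeness) might look like, assuming a toy representation in which goals are strings and each use case carries a 'realizes' link; the paper's ontology-driven analysis is far richer than this:

```python
def completeness_problems(goals, use_cases):
    """Report goals with no realizing use case, and use cases
    pointing at goals that do not exist in the goal model."""
    realized = {uc["realizes"] for uc in use_cases}
    problems = []
    for g in goals:
        if g not in realized:
            problems.append(f"goal '{g}' has no realizing use case")
    for uc in use_cases:
        if uc["realizes"] not in goals:
            problems.append(f"use case '{uc['name']}' realizes an unknown goal")
    return problems

goals = ["authenticate user", "export report"]
ucs = [{"name": "Login", "realizes": "authenticate user"}]
print(completeness_problems(goals, ucs))
# -> ["goal 'export report' has no realizing use case"]
```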

Relevance: 30.00%

Publisher:

Abstract:

Detecting inconsistencies is a critical part of requirements engineering (RE) and has been a topic of interest for several decades. Domain knowledge and the semantics of requirements not only play important roles in elaborating requirements but are also crucial to detecting conflicts among them. In this paper, we present a novel knowledge-based RE framework (KBRE) in which domain knowledge and the semantics of requirements are central to the elaboration, structuring, and management of captured requirements. Moreover, we also show how they facilitate the identification of requirements inconsistencies and other related problems. In our KBRE model, description logic (DL) is used as the fundamental logical system for requirements analysis and reasoning. In addition, the application of DL in the form of Manchester OWL Syntax brings simplicity to the formalization of requirements while preserving sufficient expressive power. A tool has been developed and applied to an industrial use case to validate our approach.
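
As a toy stand-in for the DL-based detection step (a real reasoner over Manchester OWL Syntax handles vastly more), one kind of inconsistency can be illustrated as two requirements asserting different values for the same entity and property; all names below are invented:

```python
def find_conflicts(requirements):
    """Flag requirement pairs that assert different values for the
    same (entity, property) pair."""
    seen = {}        # (entity, prop) -> (value, requirement id)
    conflicts = []
    for rid, (entity, prop, value) in requirements.items():
        key = (entity, prop)
        if key in seen and seen[key][0] != value:
            conflicts.append((seen[key][1], rid, key))
        else:
            seen[key] = (value, rid)
    return conflicts

reqs = {
    "R1": ("session", "timeout_minutes", 30),
    "R2": ("session", "encrypted", True),
    "R3": ("session", "timeout_minutes", 60),
}
print(find_conflicts(reqs))
# -> [('R1', 'R3', ('session', 'timeout_minutes'))]
```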

Relevance: 30.00%

Publisher:

Abstract:

Nowadays, the popularity of the Web encourages the development of hypermedia systems dedicated to e-learning. Nevertheless, most available Web teaching systems apply traditional paper-based learning resources, presented as HTML pages, making no use of the new capabilities provided by the Web. One challenge is to develop educational systems that adapt the educational content to the learning style, context and background of each student. Another research issue is the capacity to interoperate on the Web by reusing learning objects. This work presents an approach that addresses these two issues using Semantic Web technologies. The approach models the knowledge of the educational content and the learner's profile with ontologies whose vocabularies refine those defined in standards available on the Web as reference points for providing semantics. Ontologies enable the representation of metadata concerning simple learning objects and the rules that define how they can feasibly be assembled into more complex ones. These complex learning objects can be created dynamically, according to the learner's profile, by intelligent agents that use the ontologies as the source of their beliefs. Interoperability issues were addressed by using an application profile of the IEEE LOM (Learning Object Metadata) standard.
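
The assembly idea can be loosely sketched as metadata-based selection. The field names below only echo IEEE LOM categories and are invented for illustration; the actual approach reasons over ontologies rather than flat dictionaries:

```python
def assemble(learning_objects, profile):
    """Pick objects matching the learner's language and level,
    ordered by increasing difficulty."""
    matches = [
        lo for lo in learning_objects
        if lo["language"] == profile["language"]
        and lo["difficulty"] <= profile["max_difficulty"]
    ]
    return sorted(matches, key=lambda lo: lo["difficulty"])

catalog = [
    {"id": "lo1", "language": "en", "difficulty": 2},
    {"id": "lo2", "language": "en", "difficulty": 1},
    {"id": "lo3", "language": "pt", "difficulty": 1},
]
plan = assemble(catalog, {"language": "en", "max_difficulty": 2})
print([lo["id"] for lo in plan])   # -> ['lo2', 'lo1']
```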

Relevance: 30.00%

Publisher:

Abstract:

Nowadays, more than half of computer development projects fail to meet the final users' expectations. One of the main causes is insufficient knowledge about the organization of the enterprise to be supported by the respective information system. The DEMO methodology (Design and Engineering Methodology for Organizations) has proven to be a well-defined method to specify, through models and diagrams, the essence of any organization at a high level of abstraction. However, this methodology is platform-implementation independent, and lacks a way to save and propagate changes from the organization models to the implemented software in a runtime environment. The Universal Enterprise Adaptive Object Model (UEAOM) is a conceptual schema, used as the basis for a wiki system, that allows the modeling of any organization independently of its implementation, as well as the aforementioned change propagation in a runtime environment. Based on DEMO and UEAOM, this project aims to develop efficient and standardized methods to enable the automatic conversion of DEMO Ontological Models, based on the UEAOM specification, into BPMN (Business Process Model and Notation) process models with clear, unambiguous semantics, in order to facilitate the creation of processes that are almost ready to be executed on workflow systems supporting BPMN.
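
A minimal, assumed sketch of the transformation direction: mapping the coordination acts of a DEMO transaction onto chained BPMN tasks. The real conversion preserves much richer semantics; the step list and element names here are illustrative only:

```python
# Coordination acts of the DEMO standard transaction pattern (simplified).
DEMO_STEPS = ["request", "promise", "execute", "state", "accept"]

def demo_to_bpmn(transaction_name):
    """Emit a simplified BPMN XML fragment: one task per DEMO step,
    chained by sequence flows."""
    tasks = [
        f'  <task id="{transaction_name}_{s}" name="{s.title()} {transaction_name}"/>'
        for s in DEMO_STEPS
    ]
    flows = [
        f'  <sequenceFlow sourceRef="{transaction_name}_{a}" '
        f'targetRef="{transaction_name}_{b}"/>'
        for a, b in zip(DEMO_STEPS, DEMO_STEPS[1:])
    ]
    return "<process>\n" + "\n".join(tasks + flows) + "\n</process>"

print(demo_to_bpmn("order"))
```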

Relevance: 30.00%

Publisher:

Abstract:

Making diagnoses in oral pathology is often difficult and confusing in dental practice, especially for the less-experienced dental student. One of the most promising areas in bioinformatics is computer-aided diagnosis, where a computer system is capable of imitating human reasoning ability and provides diagnoses with an accuracy approaching that of expert professionals. This type of system could be an alternative tool for helping dental students overcome the difficulties of the oral pathology learning process, allowing them to define the variables and information that are important for improving decision-making performance. However, no current open data management system has been integrated with an artificial intelligence system in a user-friendly environment. Such a system could also be used as an educational tool to help students perform diagnoses. The aim of the present study was to develop and test an open case-based decision-support system. Methods: An open decision-support system based on Bayes' theorem, connected to a relational database, was developed using the C++ programming language. The software was tested in the computerisation of a surgical pathology service and in simulating the diagnosis of 43 known cases of oral bone disease. The simulation was performed after the system was initially filled with data from 401 cases of oral bone disease. Results: The system allowed the authors to construct and manage a pathology database, and to simulate diagnoses using the variables from the database. Conclusion: Combining a relational database and an open decision-support system in the same user-friendly environment proved effective in simulating diagnoses based on information from an updated database.
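
The Bayes'-theorem core of such a system can be sketched as follows. This is a naive-Bayes-style simplification with invented diseases, findings and probabilities; the study's C++ system is considerably richer:

```python
def posteriors(priors, likelihoods, findings):
    """P(d | findings) proportional to P(d) * prod P(f | d)."""
    scores = {}
    for d, prior in priors.items():
        p = prior
        for f in findings:
            p *= likelihoods[d].get(f, 0.01)   # small floor for unseen findings
        scores[d] = p
    total = sum(scores.values())
    return {d: p / total for d, p in scores.items()}

# Illustrative numbers only, not clinical data.
priors = {"fibrous dysplasia": 0.3, "ossifying fibroma": 0.7}
likelihoods = {
    "fibrous dysplasia": {"ground-glass": 0.8, "well-defined border": 0.2},
    "ossifying fibroma": {"ground-glass": 0.3, "well-defined border": 0.9},
}
post = posteriors(priors, likelihoods, ["ground-glass"])
# Observing "ground-glass" shifts the posterior towards fibrous dysplasia
# despite its lower prior.
```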

Relevance: 30.00%

Publisher:

Abstract:

Aiming to ensure greater reliability and consistency of the data stored in a database, the data-cleaning stage is set early in the process of Knowledge Discovery in Databases (KDD) and is responsible for eliminating problems and adjusting the data for the later stages, especially data mining. Such problems occur at both the instance and schema levels: missing values, null values, duplicate tuples, values outside the domain, among others. Several algorithms have been developed to perform the cleaning step in databases; some of them were designed specifically to work with the phonetics of words, since a word can be written in different ways. Within this perspective, this work presents as its original contribution an optimized, multithreaded algorithm for detecting duplicate tuples in databases through phonetic matching, one that requires neither training data nor support for any specific language. © 2011 IEEE.
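
The phonetic idea can be illustrated with classic English Soundex plus a thread pool. Note this is an assumed sketch that does not reproduce the paper's optimized, language-independent algorithm (classic Soundex is English-specific):

```python
from concurrent.futures import ThreadPoolExecutor

# Soundex digit for each consonant group; vowels, H, W and Y are dropped.
CODES = {c: str(d) for d, group in
         enumerate(["BFPV", "CGJKQSXZ", "DT", "L", "MN", "R"], start=1)
         for c in group}

def soundex(word):
    """Classic 4-character Soundex code, e.g. 'Robert' -> 'R163'."""
    word = word.upper()
    result, prev = word[0], CODES.get(word[0])
    for c in word[1:]:
        code = CODES.get(c)
        if code is not None and code != prev:
            result += code
            if len(result) == 4:
                break
        if c not in "HW":          # H and W do not reset the previous code
            prev = code
    return result.ljust(4, "0")

def phonetic_duplicates(names, workers=4):
    """Group names sharing a phonetic code; codes are computed in a
    thread pool as a nod to the paper's multithreaded design."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        codes = list(pool.map(soundex, names))
    groups = {}
    for name, code in zip(names, codes):
        groups.setdefault(code, []).append(name)
    return {c: g for c, g in groups.items() if len(g) > 1}

print(phonetic_duplicates(["Smith", "Smyth", "Robert", "Rupert", "Lee"]))
# -> {'S530': ['Smith', 'Smyth'], 'R163': ['Robert', 'Rupert']}
```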

Relevance: 30.00%

Publisher:

Abstract:

Understanding consciousness is one of the most fascinating challenges of our time. From ancient civilizations to modern philosophers, questions have been asked about how one is conscious of one's own existence and of the surrounding world. Although there is no precise definition of consciousness, there is agreement that it is strongly related to human cognitive processes such as thinking, reasoning, emotions and wishes. One of the key processes underlying consciousness is attention, a process capable of selecting a few stimuli from the huge amount of information that reaches us constantly. Machine consciousness is the field of artificial intelligence that investigates the possibility of producing conscious processes in artificial devices. This work presents a review of the theme of consciousness - in both its natural and artificial aspects - discussing it from the philosophical and computational perspectives, and investigates the feasibility of adopting an attentional schema as the basis for cognitive processing. A formal computational model is proposed for conscious agents that integrates short- and long-term memories, reasoning, planning, emotion, decision making, learning, motivation and volition. Computer experiments in a mobile-robotics domain under the USARSim simulation environment, proposed by RoboCup, suggest that the agent is able to use these elements to acquire experiences based on environmental stimuli. The adoption of the cognitive architecture over the attentional model has the potential to allow the emergence of behaviours usually associated with consciousness in the simulated mobile robots. Further implementation under this model could potentially allow the agent to express sentience, self-awareness, self-consciousness, autonoetic consciousness, mineness and perspectivalness. By performing computation over an attentional space, the model also allows the ...
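
The attentional selection at the heart of such a model can be caricatured as picking the few most salient stimuli from many. This is a toy illustration with invented stimuli and a fixed capacity, not the thesis's formal model:

```python
def attend(stimuli, capacity=3):
    """Select the top-`capacity` stimuli by salience: only these
    would reach the agent's further cognitive processing."""
    ranked = sorted(stimuli, key=lambda s: s["salience"], reverse=True)
    return [s["name"] for s in ranked[:capacity]]

stimuli = [
    {"name": "obstacle ahead", "salience": 0.9},
    {"name": "wall texture", "salience": 0.1},
    {"name": "victim detected", "salience": 0.95},
    {"name": "floor colour", "salience": 0.05},
    {"name": "low battery", "salience": 0.6},
]
print(attend(stimuli))
# -> ['victim detected', 'obstacle ahead', 'low battery']
```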

Relevance: 30.00%

Publisher:

Abstract:

This paper describes a logic-based formalism for qualitative spatial reasoning with cast shadows (Perceptual Qualitative Relations on Shadows, or PQRS) and presents the results of a mobile-robot qualitative self-localisation experiment using this formalism. Shadow detection was accomplished by mapping the images from the robot's monocular colour camera into HSV colour space and then thresholding on the V dimension. We present results of self-localisation using two methods for obtaining the threshold automatically: in one method the images are segmented according to their grey-scale histograms; in the other, the threshold is set according to a prediction about the robot's location, based upon a qualitative spatial reasoning theory about shadows. This theory-driven threshold search and the qualitative self-localisation procedure are the main contributions of the present research. To the best of our knowledge this is the first work that uses qualitative spatial representations both to perform robot self-localisation and to calibrate a robot's interpretation of its perceptual input.
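
The shadow-detection step described above can be sketched in a few lines of array code; the mean-based threshold below is an assumed placeholder, not the paper's histogram-based or theory-driven threshold searches:

```python
import numpy as np

def shadow_mask(rgb, threshold=None):
    """Threshold the V channel of an HSV image: for RGB values in
    [0, 1], V is simply the per-pixel max over the colour channels."""
    v = rgb.max(axis=2)           # HSV "value" channel
    if threshold is None:
        threshold = v.mean()      # crude heuristic, not the paper's method
    return v < threshold

img = np.zeros((4, 4, 3))
img[:, :2] = 0.8                  # lit half of the image
img[:, 2:] = 0.1                  # dark half: candidate shadow
print(shadow_mask(img).sum())     # -> 8 pixels flagged as shadow
```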

Relevance: 30.00%

Publisher:

Abstract:

The dynamicity and heterogeneity that characterize pervasive environments raise new challenges in the design of mobile middleware: such environments exhibit a degree of heterogeneity, variability and dynamicity that conventional middleware solutions cannot adequately manage. Originally designed for use in relatively static contexts, such middleware systems tend to hide low-level details in order to provide applications with a transparent view of the underlying execution platform. In mobile environments, however, the context is extremely dynamic and cannot be managed through a priori assumptions. Novel middleware should therefore support mobile computing applications in adapting their behavior to frequent changes in the execution context; that is, it should become context-aware. In particular, this thesis has identified the following key requirements for novel context-aware middleware that existing solutions do not yet fulfil. (i) Middleware solutions should support interoperability between possibly unknown entities by providing expressive representation models that can describe interacting entities, their operating conditions and the surrounding world - i.e., their context - according to an unambiguous semantics. (ii) Middleware solutions should support distributed applications in reconfiguring and adapting their behavior and results to ongoing context changes. (iii) Context-aware middleware support should be deployable on heterogeneous devices under variable operating conditions, such as different user needs, application requirements, available connectivity and device computational capabilities, as well as changing environmental conditions. Our main claim is that the adoption of semantic metadata to represent context information and context-dependent adaptation strategies makes it possible to build context-aware middleware suitable for all dynamically available portable devices.
Semantic metadata provide powerful knowledge-representation means to model even complex context information, and allow automated reasoning to infer additional and/or more complex knowledge from the available context data. In addition, we suggest that, by adopting proper configuration and deployment strategies, semantic support features can be provided to different users and devices according to their specific needs and current context. This thesis has investigated novel design guidelines and implementation options for semantic-based context-aware middleware solutions targeted at pervasive environments. These guidelines have been applied to different application areas within pervasive computing that would particularly benefit from the exploitation of context. Common to all applications is the key role of context in enabling mobile users to personalize applications based on their needs and current situation. The main contributions of this thesis are (i) the definition of a metadata model to represent and reason about context, (ii) the definition of a model for the design and development of context-aware middleware based on semantic metadata, (iii) the design of three novel middleware architectures and the development of a prototype implementation of each, and (iv) the proposal of a viable approach to the portability issues raised by the adoption of semantic support services in pervasive applications.
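
The reasoning flavour described above can be illustrated with a tiny forward-chaining sketch over string-encoded context facts. The thesis relies on full semantic technologies; the rules and facts below are invented for illustration:

```python
def infer(facts, rules):
    """Apply rules (premises -> conclusion) until a fixed point,
    deriving higher-level context from raw context facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"location=office", "time=working_hours"}, "situation=at_work"),
    ({"situation=at_work", "battery=low"}, "adaptation=disable_video"),
]
ctx = infer({"location=office", "time=working_hours", "battery=low"}, rules)
print("adaptation=disable_video" in ctx)   # -> True
```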

Relevance: 30.00%

Publisher:

Abstract:

The advent of distributed and heterogeneous systems has laid the foundation for the birth of new architectural paradigms, in which many separate and autonomous entities collaborate and interact with the aim of achieving complex strategic goals that would be impossible to accomplish on their own. A non-exhaustive list of systems targeted by such paradigms includes Business Process Management, Clinical Guidelines and Careflow Protocols, and Service-Oriented and Multi-Agent Systems. It is widely recognized that engineering these systems requires novel modeling techniques. In particular, many authors claim that an open, declarative perspective is needed to complement the closed, procedural nature of state-of-the-art specification languages. For example, the ConDec language has recently been proposed to target the declarative and open specification of Business Processes, overcoming the over-specification and over-constraining issues of classical procedural approaches. On the one hand, the success of such novel modeling languages strongly depends on their usability by non-IT-savvy users: they must provide an appealing, intuitive graphical front-end. On the other hand, they must be amenable to verification, in order to guarantee the trustworthiness and reliability of the developed model, as well as to ensure that the actual executions of the system effectively comply with it. In this dissertation, we claim that Computational Logic is a suitable framework for dealing with the specification, verification, execution, monitoring and analysis of these systems. We propose to adopt an extended version of the ConDec language for specifying interaction models with a declarative, open flavor. We show how all the (extended) ConDec constructs can be automatically translated into the CLIMB Computational Logic-based language, and illustrate how its corresponding reasoning techniques can be successfully exploited to provide support and verification capabilities along the whole life cycle of the targeted systems.
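
One ConDec-style constraint, response(a, b) ("every occurrence of a is eventually followed by b"), can be checked over an execution trace in a few lines. This sketch illustrates the declarative flavour only and is not the CLIMB translation:

```python
def response(a, b, trace):
    """True iff every occurrence of event `a` in the trace is
    eventually followed by an occurrence of event `b`."""
    pending = False
    for event in trace:
        if event == a:
            pending = True      # an `a` is awaiting its `b`
        elif event == b:
            pending = False     # satisfied (covers all pending a's)
    return not pending

trace = ["register", "pay", "ship", "pay"]
print(response("pay", "ship", trace))
# -> False: the last 'pay' is never followed by 'ship'
```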