55 results for Knowledge-based Teams


Relevance:

30.00%

Publisher:

Abstract:

Electricity markets are complex environments with very particular characteristics. MASCEM is a market simulator developed to allow deep studies of the interactions between the players that take part in electricity market negotiations. This paper presents a new proposal for the definition of MASCEM players' strategies to negotiate in the market. The proposed methodology is multi-agent based, using reinforcement learning algorithms to provide players with the capability to perceive changes in the environment and to adapt their bid formulation accordingly, drawing on a set of different techniques at their disposal. Each agent holds knowledge about a different method for defining a market strategy; the main agent chooses the best among them and provides it to the market player that requests it, to be used in the market. This paper also presents a methodology to manage the efficiency/effectiveness balance of this method, guaranteeing that the degradation of the simulator's processing times is kept within acceptable limits.
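
The abstract describes the selection mechanism only at a high level. As a minimal sketch of the idea, assuming a simple epsilon-greedy reinforcement-learning estimator and invented strategy functions (none of this is MASCEM's actual code):

```python
import random

# Minimal sketch of the strategy-selection idea described above.
# All names (MainAgent, the strategies, the reward source) are hypothetical.

class MainAgent:
    """Tracks the observed reward of each strategy and picks the best,
    with occasional exploration (epsilon-greedy)."""

    def __init__(self, strategies, epsilon=0.1):
        self.strategies = strategies          # name -> callable(prices) -> bid
        self.avg_reward = {name: 0.0 for name in strategies}
        self.count = {name: 0 for name in strategies}
        self.epsilon = epsilon

    def choose(self):
        if random.random() < self.epsilon:    # explore
            return random.choice(list(self.strategies))
        return max(self.avg_reward, key=self.avg_reward.get)  # exploit

    def bid_for(self, prices):
        name = self.choose()
        return name, self.strategies[name](prices)

    def learn(self, name, reward):
        # Incremental mean update, a standard reinforcement-learning estimator.
        self.count[name] += 1
        self.avg_reward[name] += (reward - self.avg_reward[name]) / self.count[name]

# Hypothetical strategies: bid at the average of recent prices, or slightly below.
strategies = {
    "average_price": lambda prices: sum(prices) / len(prices),
    "undercut":      lambda prices: min(prices) * 0.95,
}

agent = MainAgent(strategies)
name, bid = agent.bid_for([42.0, 45.5, 39.9])
agent.learn(name, reward=1.0)  # the reward would come from the market outcome
```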

Relevance:

30.00%

Publisher:

Abstract:

In 2008-2009, ESTSP-IPP implemented a new pedagogical model, PBL (Problem-Based Learning), in three undergraduate degree programmes. This model has been considered capable of promoting not only the acquisition of knowledge but also the development of transversal competences valued in the labour market. It is organized around significant problems from professional practice, worked through the seven-step methodology, with emphasis on learning through individual research and group work, and it further aims to develop cognitive and metacognitive processes such as formulating hypotheses, comparing, analysing, interpreting and evaluating. In this article, we briefly characterize the model and its implications, justifying the interest in investigating the repercussions of its implementation.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a methodology supported on the knowledge discovery in databases (KDD) process, in order to find the failure probability of electrical equipment belonging to a real high-voltage electrical network. Data Mining (DM) techniques are used to discover a set of failure probabilities and, therefore, to extract knowledge concerning the unavailability of electrical equipment such as power transformers and high-voltage power lines. The framework includes several steps: the analysis of the real database, data pre-processing, the application of DM algorithms and, finally, the interpretation of the discovered knowledge. To validate the proposed methodology, a case study that includes real databases is used. Because these data carry heavy uncertainty due to climate conditions, fuzzy logic was used to determine the set of failure probabilities of the electrical components needed to re-establish the service. The results reflect an interesting potential of this approach and encourage further research on the topic.
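
The abstract does not detail the fuzzy rules used; the following toy sketch, with invented membership functions and probabilities, only illustrates how fuzzy logic can turn an uncertain climate variable into a failure probability:

```python
# Illustrative sketch only: the membership functions, wind thresholds, and
# per-condition probabilities below are assumptions, not the paper's model.

def triangular(x, a, b, c):
    """Standard triangular fuzzy membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def failure_probability(wind_speed_kmh):
    """Toy fuzzy inference: the degree of 'severe weather' scales a base
    failure probability for a line or transformer."""
    mild   = triangular(wind_speed_kmh, 0, 20, 60)
    severe = triangular(wind_speed_kmh, 40, 90, 140)
    base_p, severe_p = 0.01, 0.30      # assumed probabilities per condition
    total = mild + severe
    if total == 0:
        return base_p
    # Weighted (centroid-like) defuzzification over the two rules.
    return (mild * base_p + severe * severe_p) / total

print(failure_probability(75.0))  # higher wind -> higher failure probability
```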

Relevance:

30.00%

Publisher:

Abstract:

The emergence of new business models, namely the establishment of partnerships between organizations, and the possibility that companies have of adding existing data on the web, especially in the semantic web, to their own information, have brought to the fore some problems existing in databases, particularly those related to data quality. Poor data can result in a loss of competitiveness for the organizations holding them, and may even lead to their disappearance, since many of their decision-making processes are based on these data. For this reason, data cleaning is essential. Current approaches to these problems are closely tied to database schemas and specific domains. For data cleaning to be usable across different repositories, computer systems must be able to understand the data, i.e., an associated semantics is needed. The solution presented in this paper uses ontologies: (i) for the specification of data cleaning operations and (ii) as a way of solving the semantic heterogeneity problems of data stored in different sources. With data cleaning operations defined at a conceptual level, and given mappings between domain ontologies and an ontology derived from a database, the operations may be instantiated and proposed to the expert/specialist to be executed over that database, thus enabling their interoperability.
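
As an illustration of what "data cleaning operations defined at a conceptual level" can look like, here is a minimal sketch in which the ontology concepts, the mapping, and the operation format are all invented (the paper's actual representation is not given in the abstract):

```python
# A cleaning operation defined against domain-ontology concepts, not columns.
conceptual_op = {
    "operation": "normalize_phone",
    "concept":   "Person.phoneNumber",
}

# Mapping from domain-ontology concepts to one particular database's schema.
mapping = {
    "Person.phoneNumber": ("customers", "phone"),
    "Person.name":        ("customers", "full_name"),
}

def instantiate(op, mapping):
    """Turn a conceptual cleaning operation into a concrete one for a
    specific database, to be confirmed by the expert before execution."""
    table, column = mapping[op["concept"]]
    return {"operation": op["operation"], "table": table, "column": column}

print(instantiate(conceptual_op, mapping))
# {'operation': 'normalize_phone', 'table': 'customers', 'column': 'phone'}
```

The same conceptual operation can thus be re-instantiated against any repository for which a mapping to the domain ontology exists, which is the interoperability point the abstract makes.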

Relevance:

30.00%

Publisher:

Abstract:

This paper aims to present a multi-agent simulation model whose goal is to help one specific participant in a multi-criteria group decision-making process. This model has five main types of intervenients: the human participant, who uses the simulation and argumentation support system; the participant agents, one associated with the human participant and the others simulating the other human members of the decision meeting group; the directory agent; the proposal agents, representing the different alternatives for a decision (the alternatives are evaluated based on criteria); and the voting agent, responsible for all voting mechanisms. At this stage, a two-phase algorithm is proposed. In the first phase, each participant agent makes its own evaluation of the proposals under discussion, and the voting agent runs a simulated voting process. In the second phase, after the dissemination of the voting results, each of the participant agents argues to convince the others to choose one of the possible alternatives. The arguments used to convince a specific participant depend on the agent's knowledge about that participant. This two-phase algorithm is applied iteratively.
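
A minimal sketch of the two-phase loop, with invented proposals, criteria weights, and a deliberately crude stand-in for argumentation (the paper's knowledge-based argumentation model is far richer than this):

```python
# Each proposal is scored on two criteria; each agent weighs criteria differently.
proposals = {"A": {"cost": 0.8, "quality": 0.4},
             "B": {"cost": 0.3, "quality": 0.9}}
agents = {"ag1": {"cost": 0.7, "quality": 0.3},
          "ag2": {"cost": 0.2, "quality": 0.8},
          "ag3": {"cost": 0.5, "quality": 0.5}}

def preferred(weights):
    score = lambda p: sum(weights[c] * v for c, v in proposals[p].items())
    return max(proposals, key=score)

for round_ in range(3):
    # Phase 1: each participant agent evaluates; the voting agent tallies.
    votes = [preferred(w) for w in agents.values()]
    winner = max(set(votes), key=votes.count)
    print(f"round {round_}: votes={votes} -> {winner}")
    if votes.count(winner) == len(agents):
        break  # consensus reached
    # Phase 2 (crudely modelled): dissenters are 'argued with', shifting
    # their criteria weights slightly towards a neutral profile.
    for name, w in agents.items():
        if preferred(w) != winner:
            for c in w:
                w[c] = 0.8 * w[c] + 0.2 * 0.5
```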

Relevance:

30.00%

Publisher:

Abstract:

Monitoring is a very important aspect to consider when developing real-time systems. However, it is also important to consider the impact of the monitoring mechanisms on the actual application. The use of Reflection can provide a clear separation between the real-time application and the implemented monitoring mechanisms, which can be introduced (reflected) into the underlying system without changing the application part of the code. Nevertheless, controlling the monitoring system itself is still a topic of research. The monitoring mechanisms must contain knowledge about "how to get the information out". Therefore, this paper presents ongoing work to define a suitable strategy for monitoring real-time systems through the use of Reflection.
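
Python's built-in reflection is enough to illustrate the separation the abstract describes: the monitoring below is attached from outside the application code, which stays untouched. The class and the timing policy are invented for illustration, not the paper's system:

```python
import functools, time

class Controller:                      # unmodified "application" code
    def control_step(self, setpoint):
        return setpoint * 0.5          # stand-in for real control logic

def attach_monitor(cls, method_name):
    """Reflectively wrap a method so each call reports its duration."""
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def monitored(self, *args, **kwargs):
        start = time.perf_counter()
        result = original(self, *args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{method_name} took {elapsed * 1e6:.1f} us")
        return result

    setattr(cls, method_name, monitored)   # reflected into the class

attach_monitor(Controller, "control_step")
Controller().control_step(10.0)            # now monitored transparently
```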

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we analyse the ability of the Profibus fieldbus to cope with the real-time requirements of a Distributed Computer Control System (DCCS), where messages associated with discrete events must be made available within a maximum bounded time. Our methodology is based on knowledge of the real-time traffic characteristics, setting the network parameters in order to cope with the timing requirements. Since non-real-time traffic characteristics are usually unknown at the design stage, we consider an operational profile where, by constraining non-real-time traffic at the application level, we ensure that real-time requirements are met.
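
The abstract does not reproduce the analysis, but a toy calculation conveys the flavour of bounding response times on a token-passing fieldbus. The simplifying assumptions stated in the comments are strong, and this is not the paper's actual model:

```python
# Toy worst-case bound, NOT the paper's analysis. Assumptions (strong):
# every token rotation completes within the target rotation time T_TR, and
# a queued high-priority message is transmitted on its station's next token
# visit. Then a message that just missed its station's turn waits at most
# one full rotation plus its own transmission time.

def worst_case_response_us(t_tr_us, frame_bits, bit_rate_bps):
    transmission_us = frame_bits / bit_rate_bps * 1e6
    return t_tr_us + transmission_us

# Example: T_TR = 20 ms, 255-byte frame at 1.5 Mbit/s.
# 255 bytes x 11 bits/char (Profibus UART framing) = 2805 bits.
print(f"{worst_case_response_us(20_000, 255 * 11, 1_500_000):.0f} us")
```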

Relevance:

30.00%

Publisher:

Abstract:

This paper discusses the increased need to support dynamic task-level parallelism in embedded real-time systems and proposes a Java framework that combines the Real-Time Specification for Java (RTSJ) with the Fork/Join (FJ) model, following a fixed-priority scheduling scheme. Our work intends to support parallel runtimes that will coexist with a wide range of other complex, independently developed applications, without any previous knowledge about their real execution requirements, the number of parallel sub-tasks, or when those sub-tasks will be generated.
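
The framework itself is Java/RTSJ; the short Python sketch below only conveys one core idea, namely that forked sub-tasks inherit their parent task's fixed priority, so a parallel job's sub-tasks never run ahead of higher-priority work:

```python
import heapq

# Illustrative scheduler, not the paper's framework: a min-heap ready queue
# where lower numbers mean higher priority, and fork() queues each sub-task
# at its parent's priority.

ready = []          # min-heap of (priority, seq, fn)
seq = 0

def submit(priority, fn):
    global seq
    heapq.heappush(ready, (priority, seq, fn))
    seq += 1

def fork(parent_priority, subtasks):
    """Fork: each sub-task is queued at the parent's priority."""
    for st in subtasks:
        submit(parent_priority, st)

def run():
    while ready:
        priority, _, fn = heapq.heappop(ready)
        fn(priority)

def parallel_job(priority):
    print(f"job at prio {priority} forking 2 sub-tasks")
    fork(priority, [lambda p: print(f"  sub-task at prio {p}")] * 2)

submit(2, parallel_job)                                  # lower-priority parallel job
submit(1, lambda p: print(f"urgent task at prio {p}"))   # runs first
run()
```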

Relevance:

30.00%

Publisher:

Abstract:

Traditional Real-Time Operating Systems (RTOS) are not designed to accommodate application-specific requirements. They address a general case, and the application must co-exist with any limitations imposed by such a design. For modern real-time applications, this limits the quality of the services offered to the end-user. Research in this field has shown that it is possible to develop dynamic systems where adaptation is the key to success. However, adaptation requires full knowledge of the system state. To overcome this, we propose a framework to gather data and interact with the operating system, extending the traditional POSIX trace model with a partial reflective model. Such a combination preserves the trace mechanism semantics while creating a powerful platform for developing new dynamic systems, with little impact on the system and avoiding complex changes in the kernel source code.
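
A minimal sketch of the combination described: trace events stay ordinary records (preserving trace semantics), while a reflective hook lets the system observe its own events and adapt at runtime. The event names and the adaptation policy are invented for illustration:

```python
from collections import deque

trace_stream = deque(maxlen=1000)   # bounded, like a trace log buffer
reflective_hooks = {}

def trace_event(event_type, payload):
    trace_stream.append((event_type, payload))   # trace semantics preserved
    for hook in reflective_hooks.get(event_type, []):
        hook(payload)                            # reflection: self-observation

def on_event(event_type, hook):
    reflective_hooks.setdefault(event_type, []).append(hook)

# Adaptation policy driven by the trace: if a task overruns its budget,
# lower the quality level instead of patching the kernel.
quality_level = [3]

def overrun_hook(payload):
    if payload["overrun_us"] > 500 and quality_level[0] > 1:
        quality_level[0] -= 1
        print(f"adapting: quality lowered to {quality_level[0]}")

on_event("DEADLINE_OVERRUN", overrun_hook)
trace_event("DEADLINE_OVERRUN", {"task": "video", "overrun_us": 800})
```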

Relevance:

30.00%

Publisher:

Abstract:

This text is based on ongoing research whose main objective is to identify and understand the main difficulties of future mathematics teachers of basic education regarding their content knowledge of geometry, in the context of the curricular unit of Geometry of their undergraduate degree. We chose a qualitative approach in the form of a case study, in which data collection was done through observation, interviews, a diverse set of tasks, a diagnostic test and other documents. This paper focuses on the test given to prospective teachers at the beginning of the course. The preliminary analysis of the data points to a weak performance of pre-service teachers in the test items addressing elementary knowledge of geometry.

Relevance:

30.00%

Publisher:

Abstract:

Dynamic and distributed environments are hard to model since they suffer from unexpected changes, incomplete knowledge, and conflicting perspectives and, thus, call for appropriate knowledge representation and reasoning (KRR) systems. Such KRR systems must handle sets of dynamic beliefs, be sensitive to communicated and perceived changes in the environment and, consequently, may have to drop current beliefs in the face of new findings or disregard any new data that conflicts with stronger convictions held by the system. Not only do they need to represent and reason with beliefs, but they must also perform belief revision to maintain the overall consistency of the knowledge base. One way of developing such systems is to use reason maintenance systems (RMS). In this paper we provide an overview of the most representative types of RMS, also known as truth maintenance systems (TMS), which are computational instances of the foundations-based theory of belief revision. An RMS module works together with a problem solver. The latter feeds the RMS with assumptions (core beliefs) and conclusions (derived beliefs), which are accompanied by their respective foundations. The role of the RMS module is to store the beliefs, associate with each belief (core or derived) the corresponding set of supporting foundations, and maintain the consistency of the overall reasoning by keeping, for each represented belief, the current supporting justifications. Two major approaches to reason maintenance are in use: single- and multiple-context reasoning systems. While in single-context systems each belief is associated with the beliefs that directly generated it, as in the justification-based TMS (JTMS) or the logic-based TMS (LTMS), in the multiple-context counterparts each belief is associated with the minimal sets of assumptions from which it can be inferred, as in the assumption-based TMS (ATMS) or the multiple belief reasoner (MBR).
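
To make the single-context idea concrete, here is a deliberately minimal JTMS-flavoured sketch (premises and monotonic justifications only; real TMS implementations also handle non-monotonic justifications, incremental label propagation, and cyclic justification structures):

```python
class JTMS:
    """Each belief stores the justifications that directly support it;
    a belief holds (is IN) only while some justification is intact."""

    def __init__(self):
        self.justifications = {}     # belief -> list of antecedent sets

    def assume(self, belief):
        self.justifications.setdefault(belief, []).append(set())  # premise

    def justify(self, belief, antecedents):
        self.justifications.setdefault(belief, []).append(set(antecedents))

    def holds(self, belief, retracted=frozenset()):
        """A belief is IN if some justification has all antecedents IN.
        (Assumes acyclic justifications, for brevity.)"""
        if belief in retracted:
            return False
        return any(all(self.holds(a, retracted) for a in ants)
                   for ants in self.justifications.get(belief, []))

tms = JTMS()
tms.assume("battery_ok")
tms.justify("engine_starts", ["battery_ok"])
print(tms.holds("engine_starts"))                            # True
print(tms.holds("engine_starts", retracted={"battery_ok"}))  # False: support gone
```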

Relevance:

30.00%

Publisher:

Abstract:

Belief revision is a critical issue in real-world DAI applications. A multi-agent system not only has to cope with the intrinsic incompleteness and constant change of the available knowledge (as in the case of its stand-alone counterparts), but also has to deal with possible conflicts between the agents' perspectives. Each semi-autonomous agent, designed as a combination of a problem solver and an assumption-based truth maintenance system (ATMS), was enriched with improved capabilities: a distributed context management facility allowing the user to dynamically focus on the more pertinent contexts, and a distributed belief revision algorithm with two levels of consistency. The contributions of this work include: (i) a concise representation of the shared external facts; (ii) a simple and innovative methodology to achieve distributed context management; and (iii) a reduced inter-agent data exchange format. The different levels of consistency adopted were based on the relevance of the data under consideration: higher-relevance data (detected inconsistencies) was granted global consistency, while less relevant data (system facts) was assigned local consistency. These abilities are fully supported by the standard ATMS functionalities.
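
For contrast with the JTMS sketch above, a naive ATMS-flavoured sketch: each belief is labelled with the minimal assumption sets (environments) under which it is derivable, which is what makes switching contexts cheap. The rules and the brute-force label computation are illustrative only:

```python
from itertools import chain, combinations

# Invented rule base: antecedent set -> conclusion.
rules = [({"a", "b"}, "x"), ({"c"}, "x"), ({"x", "d"}, "y")]

def environments(assumptions, belief):
    """All minimal assumption sets from which `belief` is derivable.
    Brute force over subsets, in increasing size, keeping minimal ones."""
    envs = []
    subsets = chain.from_iterable(
        combinations(sorted(assumptions), r) for r in range(len(assumptions) + 1))
    for subset in map(set, subsets):
        derived = set(subset)
        changed = True
        while changed:  # forward-chain the rules to a fixpoint
            changed = False
            for ants, concl in rules:
                if ants <= derived and concl not in derived:
                    derived.add(concl)
                    changed = True
        if belief in derived and not any(e < subset for e in envs):
            envs.append(subset)
    return envs

print(environments({"a", "b", "c", "d"}, "y"))
# minimal environments supporting y: {c, d} and {a, b, d}
```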

Relevance:

30.00%

Publisher:

Abstract:

TICEduca. III Congresso Internacional TIC e Educação, 14-16 November, Lisbon

Relevance:

30.00%

Publisher:

Abstract:

To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched in the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has not paid much attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will happen more and more in multilingual settings, which implies overcoming the difficulties inherent to the presence of multiple languages, through processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice whose methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support to the development of knowledge representations, in particular ontologies, expressed in more than one language. Multilingual knowledge representation is therefore an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.

This workshop joined researchers interested in multilingual knowledge representation in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these disciplines applied to contexts where multilingualism continuously creates new and demanding challenges to current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in ongoing projects.

In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to traverse Wikipedia optimally in search of domain-specific texts from which to extract terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested.

In the second paper, Fumiko Kano compares four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. For that, datasets based on standardized pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), were used to compare the similarity measures on objectively developed data. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community.

In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, in order to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present a complementary approach to the direct localization/translation of ontology labels: acquiring terminologies by accessing and harvesting the multilingual Web presences of structured-information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves.

In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues, the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification.

In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the ministry of justice. The project aims at developing an advanced tool that embeds expert knowledge in the algorithms extracting specialized language from textual data (legal documents); its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion.

Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, in which they propose to adapt, for subject librarians employed in large and multilingual academic institutions, the model used by translators working within European Union institutions. The authors use User Experience (UX) analysis to provide subject librarians with visual support, by means of "ontology tables" depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning.

The organizers hope that the selection of papers presented here will be of interest to a broad audience, and will be a starting point for further discussion and cooperation.