983 results for Interoperability Framework
Abstract:
Nowadays, the consumption of goods and services on the Internet is increasing steadily. Small and Medium Enterprises (SMEs), mostly from traditional industry sectors, usually do business in weak and fragile market sectors, where customized products and services prevail. To survive and compete in today's markets, they have to readjust their business strategies by creating new manufacturing processes and establishing new business networks through new technological approaches. In order to compete with big enterprises, these partnerships aim at sharing resources, knowledge and strategies to boost the sector's business consolidation through the creation of dynamic manufacturing networks. To meet this demand, this work proposes the development of a centralized information system that allows enterprises to select and create dynamic manufacturing networks capable of monitoring the entire manufacturing process, including the assembly, packaging and distribution phases. Even networking partners that come from the same area hold multiple, heterogeneous representations of the same knowledge, each denoting its own view of the domain. Thus, conceptually, semantically and, consequently, lexically diverse knowledge representations may occur in the network, causing non-transparent sharing of information and interoperability inconsistencies. What is required is a framework, supported by a tool, that flexibly enables the identification, classification and resolution of such semantic heterogeneities. This tool will support the network in establishing semantic mappings, facilitating the integration of the various enterprises' information systems.
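Semantic-mapping support of the kind described can be pictured as a table of correspondences between each partner's local terms and a shared reference vocabulary. The sketch below is a minimal illustration only; the partner names, terms, and the `SemanticMapper` class are hypothetical and are not part of the cited work.

```python
# Minimal sketch of semantic-mapping support for a manufacturing network.
# All vocabularies and the SemanticMapper class are hypothetical examples.

class SemanticMapper:
    def __init__(self):
        # mappings[(partner, local_term)] -> shared reference concept
        self.mappings = {}

    def register(self, partner, local_term, reference_term):
        """Record that a partner's local term denotes a reference concept."""
        self.mappings[(partner, local_term)] = reference_term

    def translate(self, source, term, target):
        """Resolve a source partner's term into the target partner's term."""
        reference = self.mappings.get((source, term))
        if reference is None:
            raise KeyError(f"unmapped term {term!r} from {source}: "
                           "a heterogeneity to classify and resolve")
        # Reverse lookup: which local term does the target use for it?
        for (partner, local), ref in self.mappings.items():
            if partner == target and ref == reference:
                return local
        raise KeyError(f"{target} has no term for concept {reference!r}")

mapper = SemanticMapper()
mapper.register("MouldCo", "cavity_insert", "mould_component")
mapper.register("ToolWorks", "core_block", "mould_component")
print(mapper.translate("MouldCo", "cavity_insert", "ToolWorks"))  # core_block
```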
Abstract:
As the complexity of markets and the dynamicity of systems evolve, the need for interoperable systems capable of strengthening enterprise communication effectiveness increases. This is particularly significant in collaborative enterprise networks, like manufacturing supply chains, where several companies work, communicate, and depend on each other in order to achieve a specific goal. Once interoperability is achieved, that is, once all network parties are able to communicate with and understand each other, organisations can exchange information within a stable environment that follows agreed rules. However, as markets adapt to new requirements and demands, an evolutionary behaviour is triggered that gives rise to interoperability problems, disrupting the sustainability of interoperability and raising the need to develop monitoring activities capable of detecting and preventing unexpected behaviour. This work seeks to contribute to the development of monitoring techniques for interoperable SOA-based enterprise networks. It focuses on the automatic detection of harmonisation-breaking events during real-time communications, and strives to develop and propose a methodological approach to handle these disruptions with minimal or no human intervention, hence providing existing service-based networks with the ability to detect and promptly react to interoperability issues.
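One way to picture the automatic detection described above is a monitor that checks every exchanged message against the schema the parties agreed on and flags deviations as harmonisation-breaking events. This is a minimal sketch under assumed message and schema formats; it is not the thesis's actual detector.

```python
# Illustrative monitor for harmonisation-breaking events in a service network.
# The agreed schema and message format are assumptions for this sketch.

AGREED_SCHEMA = {"order_id": str, "quantity": int, "due_date": str}

def check_message(message: dict) -> list[str]:
    """Return a list of detected harmonisation breaks (empty if none)."""
    breaks = []
    for field, expected_type in AGREED_SCHEMA.items():
        if field not in message:
            breaks.append(f"missing agreed field '{field}'")
        elif not isinstance(message[field], expected_type):
            breaks.append(f"field '{field}' no longer conforms "
                          f"({type(message[field]).__name__} instead of "
                          f"{expected_type.__name__})")
    for field in message.keys() - AGREED_SCHEMA.keys():
        breaks.append(f"unexpected field '{field}': possible schema evolution")
    return breaks

# A partner evolves its schema without renegotiation; the monitor reacts.
incoming = {"order_id": "PO-17", "quantity": "12", "deadline": "2024-05-01"}
for event in check_message(incoming):
    print("harmonisation break:", event)
```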
Abstract:
The lives of humans and most living beings depend on sensation and perception for the best assessment of the surrounding world. Sensory organs acquire a variety of stimuli that are interpreted and integrated in our brain for immediate use or stored in memory for later recall. Among the reasoning aspects, a person has to decide what to do with the available information. Emotions are classifiers of collected information, assigning a personal meaning to objects, events and individuals, and forming part of our own identity. Emotions play a decisive role in cognitive processes such as reasoning, decision-making and memory by assigning relevance to collected information. Access to pervasive computing devices, empowered by the ability to sense and perceive the world, provides new ways of acquiring and integrating information. But before data can be assessed for its usefulness, systems must capture it and ensure that it is properly managed for diverse possible goals. Portable and wearable devices are now able to gather and store information from the environment and from our body, using cloud-based services and Internet connections. Systems' limitations in handling sensorial data, compared with our own sensorial capabilities, constitute one identified problem. Another is the lack of interoperability between humans and devices, as devices do not properly understand humans' emotional states and needs. Addressing these problems is the motivation for the present research work. The mission hereby assumed is to include sensorial and physiological data in a Framework able to manage collected data in support of human cognitive functions, underpinned by a new data model. By learning from selected human functional and behavioural models and reasoning over collected data, the Framework aims to evaluate a person's emotional state, empowering human-centric applications, along with the capability of storing episodic information on a person's life, with physiological indicators of emotional states, to be used by new-generation applications.
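The data model mentioned above can be imagined as episodic records that pair sensed context with physiological indicators, over which a simple rule infers an emotional state. The record fields and the arousal thresholds below are illustrative assumptions, not the Framework's actual model.

```python
# Illustrative episodic record pairing context with physiological indicators.
# Field names and classification thresholds are assumptions for this sketch.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Episode:
    timestamp: datetime
    context: str             # what the person was doing / perceiving
    heart_rate: float        # beats per minute
    skin_conductance: float  # microsiemens

def emotional_state(e: Episode) -> str:
    """Naive arousal-based labelling of an episode."""
    aroused = e.heart_rate > 100 or e.skin_conductance > 8.0
    return "high arousal" if aroused else "calm"

memory: list[Episode] = []  # episodic store for later recall
memory.append(Episode(datetime.now(), "public talk", 118.0, 9.2))
for episode in memory:
    print(episode.context, "->", emotional_state(episode))
```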
Abstract:
This thesis proposes a methodology for modelling business interoperability in the context of cooperative industrial networks. The purpose is to develop a methodology that enables the design of cooperative industrial network platforms that are able to deliver business interoperability, and the analysis of its impact on the performance of these platforms. To achieve this objective, two modelling tools have been employed: Axiomatic Design Theory for the design of interoperable platforms, and Agent-Based Simulation for the analysis of the impact of business interoperability. The sequence in which the two modelling tools are applied depends on the scenario under analysis, i.e. whether the cooperative industrial network platform exists or not. If the platform does not exist, the methodology suggests first applying Axiomatic Design Theory to design different configurations of interoperable cooperative industrial network platforms, and then using Agent-Based Simulation to analyse or predict the business interoperability and operational performance of the designed configurations. Otherwise, one should start by analysing the performance of the existing platform and, based on the results, decide whether it is necessary to redesign it. If a redesign is needed, simulation is once again used to predict the performance of the redesigned platform. To explain how these two modelling tools can be applied in practice, a theoretical modelling framework, a theoretical Axiomatic Design model and a theoretical Agent-Based Simulation model are proposed. To demonstrate the applicability of the proposed methodology and to validate the proposed theoretical models, a case study of a Portuguese reverse logistics cooperative network (the Valorpneu network) and a case study of a Portuguese construction project (the Baixo Sabor dam network) are presented. The findings from applying the proposed methodology to these two case studies suggest that Axiomatic Design Theory can indeed contribute effectively to the design of interoperable cooperative industrial network platforms, and that Agent-Based Simulation provides an effective set of tools for analysing the impact of business interoperability on the performance of those platforms. However, these conclusions cannot be generalised, as only two case studies have been carried out. In terms of relevance to theory, this is the first time that the network effect is addressed in the analysis of the impact of business interoperability on the performance of networked companies, and also the first time that a holistic approach is proposed for the design of interoperable cooperative industrial network platforms. Regarding the practical implications, the proposed methodology is intended to provide industrial managers with a management tool that can guide them easily, and in a practical and systematic way, through the design of configurations of interoperable cooperative industrial network platforms and/or the analysis of the impact of business interoperability on the performance of their companies and the networks in which they operate.
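As a toy illustration of the Agent-Based Simulation step, the sketch below simulates partners exchanging messages, where each exchange fails with a probability standing in for residual interoperability problems, and reports the share of successful collaborations. The network size, failure rate, and metric are hypothetical; the thesis's actual models are far richer.

```python
# Toy agent-based simulation of business interoperability in a network.
# Agents, failure probabilities, and the performance metric are assumptions.
import random

class PartnerAgent:
    def __init__(self, name, interop_failure_rate):
        self.name = name
        self.failure_rate = interop_failure_rate

    def exchange(self, other) -> bool:
        """One business exchange; fails if either side misinterprets it."""
        return (random.random() > self.failure_rate and
                random.random() > other.failure_rate)

def simulate(agents, rounds=10_000):
    """Estimate operational performance as the success rate of exchanges."""
    successes = 0
    for _ in range(rounds):
        a, b = random.sample(agents, 2)
        successes += a.exchange(b)
    return successes / rounds

random.seed(42)
network = [PartnerAgent(f"partner-{i}", 0.05) for i in range(8)]
print(f"successful exchanges: {simulate(network):.1%}")
```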
Abstract:
In the current global and competitive business context, it is essential that enterprises adapt their knowledge resources in order to interact and collaborate smoothly with others. However, due to the multiculturalism of people and enterprises, there are different representational views of business processes or products, even within the same domain. Consequently, one of the main problems found in the interoperability between enterprise systems and applications is related to semantics. The integration and sharing of enterprise knowledge to build a common lexicon play an important role in the semantic adaptability of information systems. The author proposes a framework to support the development of systems that manage dynamic semantic adaptability. It allows different organisations to participate in building a common knowledge base while maintaining their own views of the domain, without compromising the integration between them. Thus, systems become aware of new knowledge and have the capacity to learn from it and to manage their semantic interoperability in a dynamic and adaptable way. The author endorses the vision that, in the near future, the semantic adaptability skills of enterprise systems will be the booster of enterprise collaboration and of the emergence of new business opportunities.
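A common knowledge base of the kind described can be sketched as a shared lexicon into which each organisation registers its own view; unknown concepts are added rather than rejected, which is one simple reading of systems that "learn" new knowledge. The class and method names below are illustrative assumptions.

```python
# Sketch of a shared lexicon with per-organisation views of the domain.
# Class and method names are illustrative assumptions.

class CommonKnowledgeBase:
    def __init__(self):
        self.concepts = set()  # shared, organisation-neutral concepts
        self.views = {}        # org -> {local term -> shared concept}

    def contribute(self, org, local_term, concept):
        """An organisation links its own term to a shared concept,
        creating the concept if the network has not seen it yet."""
        self.concepts.add(concept)  # 'learning' new knowledge
        self.views.setdefault(org, {})[local_term] = concept

    def interpret(self, org, local_term):
        """Resolve an organisation's local term to the shared concept."""
        return self.views.get(org, {}).get(local_term)

kb = CommonKnowledgeBase()
kb.contribute("OrgA", "invoice", "commercial_document")
kb.contribute("OrgB", "fattura", "commercial_document")  # its own view
assert kb.interpret("OrgA", "invoice") == kb.interpret("OrgB", "fattura")
```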
Abstract:
Digitalization empowers the Internet by allowing several virtual representations of reality, including that of identity. We leave an increasingly digital footprint in cyberspace, and this situation puts our identity at high risk. Privacy is a right and a fundamental social value that could play a key role as a medium to secure digital identities. Identity functionality is increasingly delivered as sets of services, rather than monolithic applications. So, an identity layer in which identity and privacy management services are loosely coupled, publicly hosted and available to on-demand calls is a more realistic and acceptable situation. Identity and privacy should be interoperable and distributed through the adoption of service orientation and an implementation based on open standards (technical interoperability). The objective of this project is to provide a way to implement interoperable user-centric digital identity-related privacy that responds to the distributed nature of federated identity systems. It is recognized that technical initiatives, emerging standards and protocols are not enough to resolve the concerns surrounding the multi-faceted and complex issue of identity and privacy. For this reason, they should be apprehended within a global perspective through an integrated and multidisciplinary approach. This approach dictates that privacy law, policies, regulations and technologies are to be crafted together from the start, rather than attached to digital identity after the fact. Thus, we draw Digital Identity-Related Privacy (DigIdeRP) requirements from global, domestic and business-specific privacy policies. The requirements take the shape of business interoperability. We suggest a layered implementation framework (the DigIdeRP framework), in accordance with the model-driven architecture (MDA) approach, that helps an organization's security team turn business interoperability into technical interoperability in the form of a set of services that can accommodate a Service-Oriented Architecture (SOA): a Privacy-as-a-set-of-services (PaaSS) system. The DigIdeRP framework will serve as a basis for a vital understanding between business management and technical managers on digital identity-related privacy initiatives. The layered DigIdeRP framework presents five practical layers as an ordered sequence forming the basis of a DigIdeRP project roadmap; in practice, however, an iterative process assures that each layer effectively supports and enforces the requirements of the adjacent ones. Each layer is composed of a set of blocks, which determine a roadmap that a security team can follow to successfully implement PaaSS. Several blocks' descriptions are based on the OMG SoaML modeling language and BPMN process descriptions. We identified, designed and implemented seven services that form PaaSS and described their consumption. The PaaSS Java (J2EE) project, WSDL, and XSD code are given and explained.
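Loosely coupled, on-demand privacy services as described above can be pictured as small independent endpoints behind a common dispatcher. The sketch below shows two hypothetical services; the service names, interfaces, and policy data are illustrative and are not the seven services the project implements.

```python
# Illustrative privacy-as-a-set-of-services (PaaSS) dispatcher.
# Service names and the consent policy below are hypothetical examples.

def consent_service(request):
    """One loosely coupled privacy service: checks user consent."""
    consents = {("alice", "share_email"): True,
                ("alice", "share_location"): False}
    allowed = consents.get((request["user"], request["purpose"]), False)
    return {"allowed": allowed}

def anonymisation_service(request):
    """Another service: strips direct identifiers from a record."""
    record = dict(request["record"])
    record.pop("name", None)
    record.pop("email", None)
    return {"record": record}

# A registry makes the services available to on-demand calls.
SERVICES = {"consent": consent_service, "anonymise": anonymisation_service}

def call(service_name, request):
    return SERVICES[service_name](request)

print(call("consent", {"user": "alice", "purpose": "share_location"}))
print(call("anonymise", {"record": {"name": "Alice", "email": "a@x.org",
                                    "city": "Lausanne"}}))
```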
Abstract:
In order to improve the management of copyright on the Internet, known as Digital Rights Management, there is a need for a shared language for copyright representation. Current approaches are based on purely syntactic solutions, i.e. a grammar that defines a rights expression language. These languages are difficult to put into practice due to the lack of explicit semantics that would facilitate their implementation. Moreover, they are simple from the legal point of view because they are intended just to model the usage licenses granted by content providers to end users. Thus, they ignore the copyright framework that lies behind them and the whole value chain from creators to end users. Our proposal is to use a semantic approach based on semantic web ontologies. We detail the development of a copyright ontology in order to put this approach into practice. It models the copyright core concepts for creation, rights and the basic kinds of actions that operate on content. Altogether, it allows building a copyright framework for the complete value chain. The set of actions operating on content are our smallest building blocks for coping with the complexity of copyright value chains and statements while, at the same time, guaranteeing a high level of interoperability and evolvability. The resulting copyright modelling framework is flexible and complete enough to model many copyright scenarios, not just those related to the economic exploitation of content. The ontology also includes moral rights, so it is possible to model such situations, as shown in the included example model for a withdrawal scenario. Finally, the ontology design and the selection of tools result in a straightforward implementation. Description Logic reasoners are used for license checking and retrieval. Rights are modelled as classes of actions, action patterns are also modelled as classes, and the same is done for concrete actions. Then, checking whether some right or license grants an action is reduced to checking for class subsumption, which is a direct functionality of these reasoners.
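The reduction of license checking to class subsumption can be mirrored in miniature with an ordinary class hierarchy: rights and action patterns are classes, concrete actions are instances, and the grant check becomes a subclass/instance test. This Python analogy only illustrates the idea; the ontology itself uses OWL classes and a Description Logic reasoner, and the class names below are assumptions.

```python
# Miniature analogy of DL-based license checking: rights and action
# patterns as classes, concrete actions as instances, and grant checking
# as subsumption. The class names are illustrative assumptions.

class Action: ...
class Communicate(Action): ...               # making content available
class MakeAvailableOnline(Communicate): ...  # a more specific pattern

class License:
    def __init__(self, granted_pattern):
        self.granted_pattern = granted_pattern

    def grants(self, action) -> bool:
        # "Does the license grant this action?" reduces to a subsumption
        # test, as a DL reasoner would perform on ontology classes.
        return isinstance(action, self.granted_pattern)

license = License(Communicate)   # right granted: communication of the work
action = MakeAvailableOnline()   # concrete act performed on the content
print(license.grants(action))    # True: the granted pattern subsumes it
```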
Abstract:
The purpose of this thesis is to investigate projects funded under the European 7th Framework Programme's Information and Communication Technology work programme. The research has been limited to the issue "Pervasive and trusted network and service infrastructure", and the aim is to find out which topics research will concentrate on in the future. The thesis will provide important information for the Department of Information Technology at Lappeenranta University of Technology. First, the thesis investigates the requirements for the projects funded under the 2007 "Pervasive and trusted network and service infrastructure" programme. Second, the projects funded under that programme are listed in tables and the most important keywords are gathered. Finally, based on keyword frequency, a vision of the most important future topics is defined. According to the keyword analysis, wireless networks will play an important role in the future, and core networks will be implemented with fiber technology to ensure fast data transfer. Software development favors Service-Oriented Architecture (SOA) and open source solutions. Interoperability and ensuring privacy are key concerns for the future. 3D in all its forms and content delivery are important topics as well. When all the projects were compared, the most important issue was found to be SOA, which leads the way to cloud computing.
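The keyword analysis described can be reproduced in outline by counting keyword occurrences across project descriptions and ranking them. The project snippets and keyword list below are invented placeholders, not the actual FP7 project data.

```python
# Sketch of the keyword-frequency analysis over project descriptions.
# The descriptions and keyword list below are invented placeholders.
from collections import Counter

projects = [
    "SOA-based service infrastructure for trusted wireless networks",
    "open source cloud computing platform built on SOA principles",
    "fiber core network enabling 3D content delivery",
]
keywords = ["soa", "wireless", "fiber", "open source", "cloud", "3d",
            "privacy", "interoperability", "content delivery"]

counts = Counter()
for description in projects:
    text = description.lower()
    for kw in keywords:
        counts[kw] += text.count(kw)

# Rank the keywords to sketch a vision of the most important topics.
for kw, n in counts.most_common(5):
    print(f"{kw}: {n} occurrence(s)")
```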
Abstract:
For more than fifty years, the Western powers have created all sorts of international military networks in order to strengthen their ties and harmonise their techniques, equipment, and ways of doing things. To this day, these groupings have remained largely ignored by the discipline of international relations. Yet the globalisation of exchanges and the rise of information technologies have opened political processes to new actors, including in security matters, shedding new light on the role, mission, and responsibilities that states delegate to these networks. Through an in-depth analysis of one military network, the Multinational Interoperability Council (MIC), this research aims to define international military networks as a category of analysis in international relations, to document their functioning empirically, and to better understand their role in the field of international security. To do so, the approach draws on the conceptual apparatus of relational institutionalism, field theory, and the practice turn in international relations. This combination makes it possible to address the institutional, cognitive, and practical dimensions of collective action within the network under study. The analysis shows that, despite its limited influence, the MIC produces an identity, capabilities, preferences, and effects of its own. The MIC's actors themselves generated some of the conditions of its institutionalisation, and managed to turn the network, initially conceived as a structure for information exchange, into an intentional actor in the field of international security. The MIC cannot act autonomously, without state control. However, the relations established among the participating military officers give them capabilities (social, political, and expert capital) that they would not otherwise have, and which they can mobilise in their interactions with other actors in the field.
Abstract:
Anticipating the increase in video information in the future, news archiving is an important activity in the visual media industry. As the volume of archives increases, it will be difficult for journalists to find the appropriate content using current search tools. This paper provides the details of the study we conducted on the news extraction systems used in different news channels in Kerala. Semantic web technologies can be used effectively here, since news archiving shares many of the characteristics and problems of the WWW. Since the visual news archives of different media resources follow different metadata standards, interoperability between the resources is also an issue. The World Wide Web Consortium has proposed a draft ontology framework for media resources which addresses these compatibility issues. In this paper, the W3C proposed framework and its drawbacks are also discussed.
Abstract:
Most studies on the interoperability of systems integration focus on the technical and semantic levels, but hardly extend their investigations to the pragmatic level. Our past work has addressed pragmatic interoperability, which is concerned with the relationship between signs and the potential behaviour and intention of responsible agents. We also define pragmatic interoperability as a level concerned with the aggregation and optimisation of various business processes for achieving the intended purposes of different information systems. This paper, as an extension of our previous research, proposes an assessment method for measuring the pragmatic interoperability of information systems. We first propose an interoperability analysis framework based on the concept of semiosis. We then develop a pragmatic interoperability assessment process along two dimensions comprising six aspects (informal, formal, technical, substantive, communication, and control). Finally, we illustrate the assessment process with an example.
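An assessment process over the six named aspects can be outlined as scoring each aspect and aggregating the scores. The weights and the 0 to 1 scale below are illustrative assumptions, not the paper's actual method.

```python
# Illustrative scoring over the paper's six aspects; the weights and
# the 0-1 scale are assumptions made for this sketch.

ASPECTS = ["informal", "formal", "technical",
           "substantive", "communication", "control"]

def assess(scores, weights=None):
    """Aggregate per-aspect scores (0..1) into one assessment value."""
    weights = weights or {a: 1.0 for a in ASPECTS}
    total_weight = sum(weights[a] for a in ASPECTS)
    return sum(scores[a] * weights[a] for a in ASPECTS) / total_weight

example = {"informal": 0.6, "formal": 0.8, "technical": 0.9,
           "substantive": 0.7, "communication": 0.5, "control": 0.4}
print(f"pragmatic interoperability score: {assess(example):.2f}")
```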
Abstract:
Logic courses represent a pedagogical challenge, and the recorded rates of failure and dropout in them are often high. Among other difficulties, students face cognitive overload when trying to understand logical concepts in a meaningful way. Along these lines, computational learning tools are resources that help both to alleviate cognitive overload and to allow practical experimentation with theoretical concepts. The present study proposes an interactive tutorial, TryLogic, aimed at teaching how to solve logical conjectures either by proof or by refutation. The tool was developed from the architecture of the TryOcaml tool, using the communication support of the ProofWeb web interface for accessing the Coq proof assistant. The goals of TryLogic are: (1) to present a set of lessons for applying heuristic strategies to solving problems in Propositional Logic; (2) to organize the exposition of concepts related to Natural Deduction and Propositional Semantics in sequential steps; (3) to provide interactive tasks to students. The present study also aims at presenting our implementation of a formal system for refutation; describing the integration of our infrastructure with the Moodle Virtual Learning Environment through the IMS Learning Tools Interoperability specification; presenting the Conjecture Generator that serves the proving and refuting tasks; and, finally, evaluating the learning experience of logic students through the application of the conjecture-solving task associated with the use of TryLogic.
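A conjecture generator for propositional logic, of the kind mentioned above, can be sketched as random formula construction plus a truth-table check that classifies each conjecture as provable (a tautology) or refutable (falsified by some valuation, which serves as a countermodel). This sketch is an independent illustration, not TryLogic's actual generator.

```python
# Sketch of a propositional conjecture generator with a truth-table
# classifier: tautologies are provable, others are refutable by a
# countermodel. This is not TryLogic's actual generator.
import itertools
import random

VARS = ["p", "q", "r"]

def random_formula(depth=2):
    """Build a random formula as nested tuples over VARS."""
    if depth == 0:
        return random.choice(VARS)
    op = random.choice(["and", "or", "->", "not"])
    if op == "not":
        return ("not", random_formula(depth - 1))
    return (op, random_formula(depth - 1), random_formula(depth - 1))

def evaluate(f, valuation):
    if isinstance(f, str):
        return valuation[f]
    if f[0] == "not":
        return not evaluate(f[1], valuation)
    a, b = evaluate(f[1], valuation), evaluate(f[2], valuation)
    return {"and": a and b, "or": a or b, "->": (not a) or b}[f[0]]

def classify(f):
    """Decide by truth table whether f is to be proved or refuted."""
    for values in itertools.product([True, False], repeat=len(VARS)):
        valuation = dict(zip(VARS, values))
        if not evaluate(f, valuation):
            return "refutable", valuation  # countermodel found
    return "provable", None                # tautology

random.seed(7)
conjecture = ("->", ("and", "p", "q"), "p")   # (p and q) -> p
print(classify(conjecture))                   # ('provable', None)
print(classify(random_formula()))             # a generated conjecture
```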
Abstract:
With the increasing production of information from e-government initiatives, there is also a need to transform a large volume of unstructured data into useful information for society. All this information should be easily accessible and made available in a meaningful and effective way in order to achieve semantic interoperability in electronic government services, a challenge pursued by governments around the world. Our aim is to discuss the context of e-Government Big Data and to present a framework that promotes semantic interoperability through the automatic generation of ontologies from unstructured information found on the Internet. We propose the use of fuzzy mechanisms to deal with natural language terms and present related work in this area. The results achieved in this study comprise the architectural definition, the major components, and the requirements of the proposed framework. With this, it is possible to take advantage of the large volume of information generated by e-Government initiatives and use it to benefit society.
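One simple reading of the fuzzy mechanisms mentioned above is approximate matching between terms extracted from unstructured text and candidate ontology concepts, yielding a membership degree instead of a yes/no match. The concept list and the 0.7 threshold below are illustrative assumptions, not the framework's actual components.

```python
# Sketch of fuzzy matching of extracted terms to ontology concepts.
# The concept list and the 0.7 threshold are assumptions for this sketch.
from difflib import SequenceMatcher

ONTOLOGY_CONCEPTS = ["citizen", "tax declaration", "public procurement",
                     "social benefit", "building permit"]

def fuzzy_match(term, concepts, threshold=0.7):
    """Return (concept, membership degree) pairs above the threshold."""
    matches = []
    for concept in concepts:
        degree = SequenceMatcher(None, term.lower(), concept).ratio()
        if degree >= threshold:
            matches.append((concept, round(degree, 2)))
    return sorted(matches, key=lambda m: -m[1])

# A noisy term extracted from an unstructured government web page:
print(fuzzy_match("tax declarations", ONTOLOGY_CONCEPTS))
# e.g. [('tax declaration', 0.97)]
```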