939 results for Semantic Web Services
Abstract:
Web services are software units that provide access to one or more resources, supporting the deployment of business processes on the Web. They expose well-defined interfaces over standard Web protocols, making communication possible between entities implemented on different platforms. Thanks to these features, Web services can be integrated into service compositions to form more robust, loosely coupled applications. Web services are subject to failures, unwanted situations that may compromise the business process partially or completely. Failures can occur both during the design and during the execution of compositions. It is therefore essential to create mechanisms that make the execution of service compositions more robust and that handle failures. Specifically, we propose support for fault recovery in service compositions described in the PEWS language and executed on PEWS-AM, a graph reduction machine. To support failure recovery in PEWS-AM, we extend the PEWS language specification and adapt the machine's graph translation and reduction rules. These contributions were made both at the level of the abstract machine model and at the implementation level.
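PEWS syntax is not shown in the abstract; as a language-neutral sketch of the kind of failure handling such compositions need, the Python fragment below composes two hypothetical partner services and attaches a retry-and-compensate policy. The service names, the retry policy and the compensation step are illustrative assumptions, not the PEWS-AM recovery mechanism itself.

```python
# Illustrative sketch only: fault recovery in a two-step service composition.
# The partner services, retry policy and compensation step are hypothetical;
# PEWS/PEWS-AM define their own constructs for this.
import time

class ServiceFault(Exception):
    """Raised when a service invocation fails."""

def invoke(service, payload, retries=2, delay=1.0):
    """Call a service, retrying on failure before giving up."""
    for attempt in range(retries + 1):
        try:
            return service(payload)
        except ServiceFault:
            if attempt == retries:
                raise
            time.sleep(delay)

def reserve_hotel(payload):      # hypothetical partner service
    return {"booking": "H42", **payload}

def reserve_flight(payload):     # hypothetical partner service that fails
    raise ServiceFault("flight service unavailable")

def cancel_hotel(result):        # compensation for the completed first step
    print("compensating: cancelling hotel", result["booking"])

def composition(payload):
    hotel = invoke(reserve_hotel, payload)
    try:
        flight = invoke(reserve_flight, payload)
    except ServiceFault:
        cancel_hotel(hotel)      # undo the completed step before aborting
        raise
    return hotel, flight

if __name__ == "__main__":
    try:
        composition({"client": "c1"})
    except ServiceFault as fault:
        print("composition aborted:", fault)
```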
Abstract:
Information retrieval is a recurrent subject of research in information science. Studies of this kind aim to improve results both in searches on the Web and in various other digital information environments. In this context, the Iterative Representation model, proposed for digital repositories, changes the paradigm of self-archiving of digital objects, creating a concept of relationship between terms that links the user's thinking to the material deposited in the digital environment. The links produced by Iterative Representation, aided by Assisted Folksonomy, generate a network-shaped structure that connects the deposited objects vertically and horizontally, relying on a knowledge representation structure for the specialty areas and thereby creating an information network based on the knowledge of users. The resulting network, called the network of tags, is dynamic and puts into effect a different model of information retrieval and of the study of digital information repositories.
Keywords: Digital Repositories; Iterative Representation; Folksonomy; Assisted Folksonomy; Semantic Web; Network of Tags.
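The network of tags is described only conceptually; one possible reading, assuming that deposited objects carry user-assigned tags and that two objects become linked when they share a tag, is sketched below (the object identifiers and tags are invented for illustration).

```python
# Minimal sketch of a tag network: objects deposited in a repository are
# linked whenever they share a user-assigned tag. All data is invented.
from collections import defaultdict
from itertools import combinations

deposits = {
    "obj1": {"semantic web", "ontology"},
    "obj2": {"ontology", "folksonomy"},
    "obj3": {"folksonomy", "tagging"},
}

def tag_network(objects):
    """Return edges between objects that share at least one tag."""
    edges = defaultdict(set)
    for (a, tags_a), (b, tags_b) in combinations(objects.items(), 2):
        if tags_a & tags_b:
            edges[a].add(b)
            edges[b].add(a)
    return dict(edges)

print(tag_network(deposits))   # e.g. {'obj1': {'obj2'}, 'obj2': {'obj1', 'obj3'}, ...}
```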
Abstract:
Information retrieval has been much discussed within Information Science lately. The search for quality information compatible with users' needs has become the object of constant research. Using the Internet as a source for disseminating knowledge has suggested new models of information storage, such as digital repositories, which have been used in academic research as the main form of self-archiving and disseminating information, but with an information structure that calls for better descriptions of resources and hence better retrieval. The objective is thus to improve the process of information retrieval by presenting a proposal for a structural model in the context of the Semantic Web, addressing the use of Web 2.0 and Web 3.0 in digital repositories and enabling semantic retrieval of information through the construction of a data layer called Iterative Representation. The present study is descriptive and analytical, based on document analysis, and is divided into two parts: the first, characterized by direct, non-participatory observation of tools that implement digital repositories, as well as of repositories already instantiated; and the second, exploratory in character, which suggests an innovative model for repositories using knowledge representation structures and user participation in building a domain vocabulary. The suggested model, Iterative Representation, allows digital repositories to combine folksonomy with a controlled vocabulary of the field in order to generate an iterative data layer, which enables information feedback and semantic retrieval of information through the structural model designed for repositories. The suggested model resulted in the formulation of the thesis that, through Iterative Representation, it is possible to establish a process of semantic retrieval of information in digital repositories.
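The structural model is not detailed in the abstract; as a rough sketch of an iterative data layer, the code below assumes a small controlled domain vocabulary, folds user-contributed tags into it, and keeps unmatched tags as candidates for vocabulary feedback. The vocabulary, synonym table and tags are assumptions for illustration only.

```python
# Rough sketch of an "iterative" data layer: user tags are reconciled with a
# controlled vocabulary; unmatched tags are kept as feedback candidates.
# Vocabulary, synonyms and tags below are invented for illustration.
controlled_vocabulary = {"information retrieval", "semantic web", "metadata"}
synonyms = {"ir": "information retrieval", "web 3.0": "semantic web"}

def reconcile(user_tags):
    """Split user tags into accepted vocabulary terms and feedback candidates."""
    accepted, candidates = set(), set()
    for tag in (t.strip().lower() for t in user_tags):
        tag = synonyms.get(tag, tag)
        (accepted if tag in controlled_vocabulary else candidates).add(tag)
    return accepted, candidates

accepted, candidates = reconcile(["IR", "Web 3.0", "linked data"])
print(accepted)    # terms usable for semantic retrieval
print(candidates)  # feedback: possible additions to the domain vocabulary
```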
Abstract:
In this paper we study the intersection of Knowledge Organization with Information Technologies and the challenges and opportunities that, in our view, Knowledge Organization experts should study and be aware of. We start by giving some definitions necessary to provide the context for our work. Then we review the history of the Web, beginning with the Internet and continuing with the World Wide Web, the Semantic Web, problems of Artificial Intelligence, Web 2.0, and Linked Data. Finally, we conclude the paper with IT applications for Knowledge Organization in libraries, such as FRBR, BIBFRAME, and several OCLC initiatives, as well as with some of the challenges and opportunities in which Knowledge Organization experts and researchers might play a key role in relation to the Semantic Web.
Abstract:
This work aims at visualizing weather information by building isosurfaces, taking advantage of three-dimensional geometric models to communicate the meaning of the underlying data in a clear and efficient way. Advances in data processing technology make it possible to interpret ever larger masses of data through robust algorithms. Meteorology in particular can benefit from this, given the large amount of data required for analysis and statistics. The manipulation of the data by users from other areas is facilitated by the choice of algorithm and the tools involved in this work. The project was further divided into distinct modules, increasing its flexibility and reusability for future studies.
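The abstract does not name the isosurface algorithm or the tools used; a common choice for extracting an isosurface from a gridded scalar field is marching cubes, sketched below with scikit-image on a synthetic volume standing in for meteorological data.

```python
# Sketch: extract an isosurface from a gridded scalar field with marching
# cubes. The synthetic "temperature" volume and the iso level are invented;
# real meteorological data would come from model output or observations.
import numpy as np
from skimage import measure   # assumes scikit-image is installed

# Synthetic 3D field standing in for a temperature grid (lon x lat x level).
x, y, z = np.mgrid[-2:2:40j, -2:2:40j, -2:2:40j]
temperature = np.exp(-(x**2 + y**2 + z**2))

# Vertices and triangular faces of the surface where temperature == 0.5.
verts, faces, normals, values = measure.marching_cubes(temperature, level=0.5)
print(verts.shape, faces.shape)   # mesh ready to hand to a 3D viewer
```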
Abstract:
Given the exponential growth in the spread of viruses on the World Wide Web (Internet) and their increasing complexity, it is necessary to adopt more sophisticated systems for the extraction of malware fingerprints (a fingerprint is the unique information extracted from a malicious software sample that leads to its identification, analogous to a human fingerprint). The architecture and protocol proposed here aim to obtain more efficient fingerprints, using techniques that make a single fingerprint sufficient to identify an entire group of viruses. This efficiency comes from a hybrid fingerprint extraction approach that takes into account both the code and the behavior of the sample, the so-called virus. The main targets of the proposed system are polymorphic and metamorphic malware, given the difficulty of creating fingerprints that identify an entire family of these viruses; this difficulty is created by techniques whose main objective is to defeat analysis by experts. The parameters chosen for the behavioral analysis are: file system activity, Windows Registry, RAM dump and API calls. As for code analysis, the objective is to divide the virus binary into blocks from which hashes can be extracted. This technique considers each instruction and its neighborhood, and is characterized as being accurate. In short, this information is intended to predict and draw a profile of the virus's behavior and then create a fingerprint based on the degree of kinship between viruses (threshold), with the goal of increasing the ability to detect viruses that do not belong to the same family.
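The exact block division and kinship measure are not spelled out in the abstract; the sketch below assumes fixed-size blocks hashed with SHA-256 and a Jaccard-style overlap as the kinship score, both of which are illustrative choices rather than the scheme proposed in the work.

```python
# Illustrative sketch: fingerprint a binary by hashing fixed-size blocks and
# compare two fingerprints with a Jaccard-style "kinship" score. Block size,
# hash function and threshold are assumptions, not the proposed scheme.
import hashlib

def block_fingerprint(data: bytes, block_size: int = 64) -> set[str]:
    """Hash each fixed-size block of the binary into a set of digests."""
    return {
        hashlib.sha256(data[i:i + block_size]).hexdigest()
        for i in range(0, len(data), block_size)
    }

def kinship(fp_a: set[str], fp_b: set[str]) -> float:
    """Fraction of shared block hashes between two fingerprints."""
    if not fp_a or not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

sample_a = bytes(range(256)) * 4
sample_b = sample_a[:512] + b"\x90" * 512       # partially mutated variant
score = kinship(block_fingerprint(sample_a), block_fingerprint(sample_b))
print(f"kinship score: {score:.2f}")            # compare against a threshold
```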
Abstract:
Different vocabularies and contexts are barriers to communication between people or software systems. A common understanding of the domain under discussion is necessary so that a correct interpretation of the information can be obtained. An ontology formally models the structure of a domain and makes explicit the shared understanding, in the form of the concepts and relations that emerge from its observation. It constitutes a kind of framework used in mapping the meaning of the information being exchanged. The formal precision with which ontologies are defined, by means of axioms, allows machine processing, leading to interoperability between systems. Structured in this way, knowledge is easily transferred between people or systems from different contexts. Ontologies have several applications nowadays. They are considered the infrastructure of the Semantic Web, which is composed of Web resources with embedded meaning, thereby allowing the automatic execution of complex tasks that benefit from effective communication between Web software agents. Among other applications, they have also been used to structure the knowledge produced in several areas, such as Biology and Software Engineering.
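As a small, self-contained illustration of making shared concepts and relations explicit, the fragment below builds a tiny ontology with rdflib; the namespace, class names and axioms are invented for the example and do not come from the abstract.

```python
# Tiny sketch of an ontology fragment with rdflib: two classes, a subclass
# axiom and a property, so the shared vocabulary is explicit and machine
# processable. The namespace and terms are invented for illustration.
from rdflib import Graph, Namespace, Literal, RDF, RDFS, OWL

EX = Namespace("http://example.org/onto#")
g = Graph()
g.bind("ex", EX)

g.add((EX.Service, RDF.type, OWL.Class))
g.add((EX.WebService, RDF.type, OWL.Class))
g.add((EX.WebService, RDFS.subClassOf, EX.Service))   # axiom: every WebService is a Service

g.add((EX.invokes, RDF.type, OWL.ObjectProperty))
g.add((EX.invokes, RDFS.domain, EX.Client))
g.add((EX.invokes, RDFS.range, EX.WebService))

g.add((EX.WebService, RDFS.comment,
       Literal("A software unit reachable over standard Web protocols")))

print(g.serialize(format="turtle"))   # shared vocabulary, ready for exchange
```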