908 results for Semantic Web Services


Relevance:

80.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Publisher:

Abstract:

Information retrieval is a recurrent subject of research in information science. Studies of this kind aim to improve results both in Web searches and in various other digital information environments. In this context, the Iterative Representation model, proposed for digital repositories, changes the paradigm of self-archiving of digital objects: it creates relationships between terms that link the user's thinking to the material deposited in the digital environment. The links effected by Iterative Representation aided by Assisted Folksonomy generate a network structure that connects the deposited objects vertically and horizontally, relying on a knowledge representation structure for the specialty areas involved and thereby creating an information network based on the knowledge of users. The resulting network, called the network of tags, is dynamic and realizes a different model for information retrieval and for the study of digital information repositories.

Keywords: Digital Repositories; Iterative Representation; Folksonomy; Assisted Folksonomy; Semantic Web; Network of Tags.
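
A toy illustration may help picture the network of tags described above. The sketch below links deposited objects through shared folksonomy tags; the object identifiers, tags, and the use of the networkx library are all invented for the example, as the abstract prescribes no implementation:

```python
import networkx as nx

# User-assigned (folksonomy) tags per deposited object (invented data).
tags_by_object = {
    "thesis-001": {"semantic web", "ontology"},
    "article-042": {"ontology", "information retrieval"},
    "dataset-007": {"information retrieval", "folksonomy"},
}

# Bipartite graph: one edge per (object, tag) assignment.
G = nx.Graph()
for obj, tags in tags_by_object.items():
    for tag in tags:
        G.add_edge(obj, tag)

# Objects reachable from "thesis-001" through a shared tag, i.e. its
# "horizontal" neighbors in the network of tags.
related = {
    other
    for tag in tags_by_object["thesis-001"]
    for other in G.neighbors(tag)
    if other != "thesis-001"
}
print(related)  # {'article-042'} -- linked via the shared tag "ontology"
```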

Relevance:

80.00%

Publisher:

Abstract:

Information retrieval has been much discussed within Information Science lately, and the search for quality information compatible with users' needs has become the object of constant research. Using the Internet as a source for disseminating knowledge has suggested new models of information storage, such as digital repositories, which have been used in academic research as the main form of self-archiving and disseminating information, but whose information structure calls for better descriptions of resources and hence better retrieval. The objective is thus to improve the information retrieval process by presenting a proposal for a structural model in the context of the Semantic Web, addressing the use of Web 2.0 and Web 3.0 in digital repositories and enabling semantic retrieval of information through the construction of a data layer called Iterative Representation. The study is descriptive and analytical, based on document analysis, and divided into two parts: the first, characterized by direct, non-participatory observation of tools that implement digital repositories, as well as of repositories already instantiated; the second, exploratory in character, suggesting an innovative model for repositories that uses knowledge representation structures and user participation in building a domain vocabulary. The proposed model, Iterative Representation, allows digital repositories to be tailored using folksonomy together with a controlled vocabulary of the field, generating an iterative data layer that allows information feedback and semantic retrieval of information through the structural model designed for repositories. The suggested model resulted in the formulation of the thesis that, through Iterative Representation, it is possible to establish a process of semantic information retrieval in digital repositories.
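
As a rough illustration of the data-layer idea, the sketch below reconciles free folksonomy tags against a small controlled vocabulary so a query can be expanded to equivalent user tags; the vocabulary, tags, and function are all hypothetical, since the abstract specifies no concrete mechanism:

```python
# Map free user tags onto preferred controlled-vocabulary terms (toy data).
CONTROLLED_VOCABULARY = {
    "Semantic Web": {"semantic web", "web 3.0", "web semantica"},
    "Information Retrieval": {"information retrieval", "ir", "search"},
}

def normalize_tag(user_tag: str) -> str:
    """Return the preferred controlled term for a free tag, if one exists."""
    t = user_tag.strip().lower()
    for preferred, variants in CONTROLLED_VOCABULARY.items():
        if t in variants:
            return preferred
    return user_tag  # unmatched tags remain folksonomy-only terms

print(normalize_tag("Web 3.0"))  # -> "Semantic Web"
```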

Relevance:

80.00%

Publisher:

Abstract:

In this paper we study the intersection of Knowledge Organization with Information Technologies, and the challenges and opportunities that, in our view, Knowledge Organization experts should study and be aware of. We start by giving the definitions necessary to provide context for our work. Then we review the history of the Web, beginning with the Internet and continuing with the World Wide Web, the Semantic Web, problems of Artificial Intelligence, Web 2.0, and Linked Data. Finally, we conclude with IT applications of Knowledge Organization in libraries, such as FRBR, BIBFRAME, and several OCLC initiatives, as well as some of the challenges and opportunities in which Knowledge Organization experts and researchers might play a key role in relation to the Semantic Web.

Relevance:

80.00%

Publisher:

Abstract:

This work aims at visualizing weather information by building isosurfaces, taking advantage of three-dimensional geometric models to communicate the meaning of the data clearly and efficiently. Evolving data-processing technology makes it possible to interpret ever larger masses of data through robust algorithms. Meteorology, in particular, can benefit from this, given the large amount of data required for analysis and statistics. The choice of algorithm and of the tools involved in this work facilitates the manipulation of the data by users from other areas. The project was further developed into distinct modules, increasing its flexibility and reusability for future studies.
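
The abstract does not name the isosurface algorithm used; marching cubes is a common choice, and the sketch below shows the general shape of such a pipeline on an invented scalar field, assuming numpy and scikit-image are available:

```python
# Extract an isosurface from a 3-D scalar field with marching cubes.
# The synthetic "temperature" field stands in for gridded weather data.
import numpy as np
from skimage import measure

# Smooth blob on a 40x40x40 grid, peaking at the center.
x, y, z = np.mgrid[-2:2:40j, -2:2:40j, -2:2:40j]
temperature = np.exp(-(x**2 + y**2 + z**2))

# Triangle mesh of the surface where temperature == 0.5.
verts, faces, normals, values = measure.marching_cubes(temperature, level=0.5)
print(verts.shape, faces.shape)  # mesh vertices and triangles
```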

Relevance:

80.00%

Publisher:

Abstract:

Given the exponential growth in the spread of viruses across the World Wide Web (Internet) and their increasing complexity, more sophisticated systems for extracting malware fingerprints are needed (a malware fingerprint, by analogy with a human fingerprint, is the unique information extracted from malicious software that leads to its identification). The architecture and protocol proposed here aim to produce more efficient fingerprints, using techniques that make a single fingerprint sufficient to compromise an entire group of viruses. This efficiency comes from a hybrid approach to fingerprint extraction that takes into account both the code and the behavior of the sample. The main targets of the proposed system are polymorphic and metamorphic malware, given the difficulty of creating fingerprints that identify an entire family of such viruses; this difficulty stems from their use of techniques whose main objective is to defeat analysis by experts. The parameters chosen for the behavioral analysis are: the file system; the Windows Registry; RAM dumps; and API calls. As for the code analysis, the objective is to divide the virus binary into blocks from which hashes can be extracted. This technique considers each instruction and its neighborhood, and is characterized as accurate. In short, this information is intended to predict and profile the action of a virus and then create a fingerprint based on the degree of kinship between viruses (a threshold), whose goal is to increase the ability to detect viruses that are not part of the same family.
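
As a simplified illustration of the code-analysis side, the sketch below hashes fixed-size blocks of a binary and compares two fingerprints with a Jaccard-style kinship score; the actual system divides blocks around instructions and their neighborhoods, which this stand-in ignores, and the sample bytes are invented:

```python
import hashlib

def fingerprint(data: bytes, block_size: int = 256) -> set[str]:
    """Set of SHA-256 digests, one per fixed-size block of the binary."""
    return {
        hashlib.sha256(data[i:i + block_size]).hexdigest()
        for i in range(0, len(data), block_size)
    }

def kinship(fp_a: set[str], fp_b: set[str]) -> float:
    """Jaccard similarity between two fingerprints, in [0, 1]."""
    return len(fp_a & fp_b) / len(fp_a | fp_b) if fp_a | fp_b else 0.0

# Two toy "variants": identical padding, different payload blocks.
sample_a = b"\x90" * 1024 + b"payload-v1"
sample_b = b"\x90" * 1024 + b"payload-v2"
print(kinship(fingerprint(sample_a), fingerprint(sample_b)))  # shared blocks raise the score
```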

Relevance:

80.00%

Publisher:

Abstract:

Different vocabularies and contexts are barriers to communication between people or software systems. A common understanding of the domain under discussion is necessary for the information to be interpreted correctly. An ontology formally models the structure of a domain and makes explicit the shared understanding, in the form of the concepts and relations that emerge from its observation. It constitutes a kind of framework used to map the meaning of the information being exchanged. The formal precision with which ontologies are defined, by means of axioms, allows machine processing and thus system interoperability. Structured in this way, knowledge is easily transferred between people or systems from different contexts. Ontologies have several applications nowadays: they are considered the infrastructure of the Semantic Web, which is composed of Web resources with embedded meaning, allowing the automatic execution of complex tasks and benefiting from effective communication between Web software agents. Among other applications, they have also been used to structure the knowledge generated in several areas, such as Biology and Software Engineering.
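
A minimal sketch of what concepts, relations, and axioms look like in practice, here expressed with the rdflib library and an invented namespace (the abstract itself names no tooling):

```python
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/onto#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Concepts (classes) and an axiom-like statement relating them.
g.add((EX.Gene, RDF.type, RDFS.Class))
g.add((EX.Protein, RDF.type, RDFS.Class))
g.add((EX.Enzyme, RDFS.subClassOf, EX.Protein))  # axiom: every Enzyme is a Protein

# A relation (property) linking the concepts.
g.add((EX.encodes, RDF.type, RDF.Property))
g.add((EX.encodes, RDFS.domain, EX.Gene))
g.add((EX.encodes, RDFS.range, EX.Protein))

print(g.serialize(format="turtle"))  # machine-processable form of the model
```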

Relevance:

80.00%

Publisher:

Abstract:

Graduate Program in Information Science - FFC

Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Publisher:

Abstract:

Graduate Program in Information Science - FFC

Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Publisher:

Abstract:

The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. They also introduce higher latency and processing delays than other technologies, such as Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages: those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content. Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, so they are processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing, including message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
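
To make the differential-encoding idea concrete, the sketch below diffs two structurally similar (invented) SOAP envelopes with Python's standard difflib, standing in for the specialized encoders the surveyed papers propose: one full message is transmitted, and subsequent similar messages are sent as compact deltas.

```python
import difflib

# Reference message, transmitted once in full.
template = """<soap:Envelope><soap:Body>
<getQuote><symbol>IBM</symbol></getQuote>
</soap:Body></soap:Envelope>"""

# A later message with the same structure and mostly identical content.
message = """<soap:Envelope><soap:Body>
<getQuote><symbol>ACME</symbol></getQuote>
</soap:Body></soap:Envelope>"""

# Encode only the lines that differ from the reference message.
delta = list(
    difflib.unified_diff(
        template.splitlines(), message.splitlines(), lineterm="", n=0
    )
)
print(delta)  # a handful of lines instead of the whole envelope
```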

Relevance:

80.00%

Publisher:

Abstract:

The automatic disambiguation of word senses (i.e., the identification of which of the meanings is used in a given context for a word that has multiple meanings) is essential for such applications as machine translation and information retrieval, and represents a key step for developing the so-called Semantic Web. Humans disambiguate words in a straightforward fashion, but this does not apply to computers. In this paper we address the problem of Word Sense Disambiguation (WSD) by treating texts as complex networks, and show that word senses can be distinguished upon characterizing the local structure around ambiguous words. Our goal was not to obtain the best possible disambiguation system, but we nevertheless found that in half of the cases our approach outperforms traditional shallow methods. We show that the hierarchical connectivity and clustering of words are usually the most relevant features for WSD. The results reported here shed light on the relationship between semantic and structural parameters of complex networks. They also indicate that when combined with traditional techniques the complex network approach may be useful to enhance the discrimination of senses in large texts. Copyright (C) EPLA, 2012
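
A minimal sketch of the network construction described here, assuming networkx and an invented toy sentence: words become nodes, co-occurrence within a sliding window adds edges, and local measures such as the clustering coefficient are read off around the ambiguous word.

```python
import networkx as nx

tokens = "the bank approved the loan while the river bank eroded".split()

# Co-occurrence network: link each word to its neighbors within the window.
G = nx.Graph()
window = 2
for i, w in enumerate(tokens):
    for j in range(i + 1, min(i + 1 + window, len(tokens))):
        if w != tokens[j]:
            G.add_edge(w, tokens[j])

# Local structure around the ambiguous word "bank": clustering and degree.
print(nx.clustering(G, "bank"), G.degree("bank"))
```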

Relevance:

80.00%

Publisher:

Abstract:

Traditional supervised data classification considers only physical features (e.g., distance or similarity) of the input data. Here, this type of learning is called low level classification. On the other hand, the human (animal) brain performs both low and high orders of learning, and it easily identifies patterns according to the semantic meaning of the input data. Data classification that considers not only physical attributes but also the pattern formation is, here, referred to as high level classification. In this paper, we propose a hybrid classification technique that combines both types of learning. The low level term can be implemented by any classification technique, while the high level term is realized by the extraction of features of the underlying network constructed from the input data. Thus, the former classifies the test instances by their physical features or class topologies, while the latter measures the compliance of the test instances to the pattern formation of the data. Our study shows that the proposed technique not only can realize classification according to the pattern formation, but also is able to improve the performance of traditional classification techniques. Furthermore, as the class configuration's complexity increases, such as the mixture among different classes, a larger portion of the high level term is required to get correct classification. This feature confirms that the high level classification has a special importance in complex situations of classification. Finally, we show how the proposed technique can be employed in a real-world application, where it is capable of identifying variations and distortions of handwritten digit images. As a result, it supplies an improvement in the overall pattern recognition rate.
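
The combination of the two terms can be pictured as a convex mixture of scores, as sketched below; the weight name and the toy scores are invented, and the real high-level term is computed from network measures that this stub only imitates.

```python
def hybrid_score(low: float, high: float, lam: float = 0.3) -> float:
    """Combined membership score for one candidate class.

    low  -- score from a conventional (low level) classifier
    high -- network-based (high level) pattern-conformity score
    lam  -- mixing weight in [0, 1]; larger values favor the high level term
    """
    return (1.0 - lam) * low + lam * high

# Toy scores for one test instance against two classes: the high level
# term flips the decision that the low level classifier alone would make.
scores = {
    "class A": hybrid_score(low=0.80, high=0.40),
    "class B": hybrid_score(low=0.60, high=0.90),
}
print(max(scores, key=scores.get))  # -> "class B"
```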