943 results for SIB Semantic Information Broker OSGI Semantic Web
Abstract:
This Master's dissertation investigated the performance and quality of web sites. The goal of the research was to propose an integrated model for evaluating digital information services on educational web sites. The research universe comprised eighteen Brazilian universities offering graduate programs, at the Master's and doctoral levels, in the field of Production Engineering. The methodology adopted was descriptive and exploratory research, using the techniques of systematic observation and focus groups for data collection, with dependent and independent variables, through the application of two research instruments. The analysis protocol was the instrument adopted for evaluating and obtaining the qualitative results, and the analysis grid was applied for evaluating and obtaining the quantitative results. The qualitative results identified a lack of standardization of the web sites with respect to content, information hierarchy, and the design of colors and typefaces. The absence of accessibility for people with hearing and visual impairments was observed, as well as a lack of media convergence and assistive technologies. The language of the sites was also evaluated, and all of them are available only in Portuguese. The overall result is presented in graphs and tables with a classification of the universities, in which the grade "Good" predominates. For the quantitative results, the analysis method was statistical, in order to obtain descriptive and inferential results relating the dependent and independent variables. As a category of analysis of the services of the evaluated sites, a score and a weighted general index were obtained. These results served as the basis for ranking the universities according to the presence or absence of service information on their web sites. In the inferential analysis, the correlation or association of the independent variables (level, CAPES concept, and period of existence of the program) with the characteristics, called service categories, was tested. For this analysis the statistical methods used were Spearman's coefficient and Fisher's exact test. Only the category "disciplines of the Master's program" showed significance with the independent variable CAPES concept. The main conclusion of this study was the absence of standardization with respect to the subjective aspects of design, information hierarchy, navigability, and content accuracy, and the absence of accessibility and convergence. As for the quantitative aspects, the information services offered by the web sites of the evaluated universities still do not present satisfactory and comprehensive quality. One perceives the absence of strategies, of the adoption of web tools, of institutional marketing techniques, and of services that would make the sites more interactive, navigable, and with added value.
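The abstract names Spearman's coefficient and Fisher's exact test as the inferential methods. As a purely illustrative sketch (not from the dissertation; the variables and counts below are hypothetical), these tests can be run with scipy.stats:

```python
# A minimal sketch of the two inferential tests the abstract mentions.
# All data values here are hypothetical illustrations, not study results.
from scipy.stats import spearmanr, fisher_exact

# Hypothetical CAPES concept (independent variable) and a service-category
# score for each of several graduate programs.
capes_concept = [3, 4, 4, 5, 5, 6, 6, 7]
service_score = [2, 3, 5, 4, 6, 6, 7, 8]

rho, p_value = spearmanr(capes_concept, service_score)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

# Fisher's exact test on a 2x2 table: program level against the
# presence/absence of a given service on the program's web site.
table = [[6, 2],   # Master's only: service present / absent
         [8, 2]]   # Master's + doctorate: service present / absent
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher exact p = {p_value:.3f}")
```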
Abstract:
The structure of industrial automation is based on a hierarchical pyramid in which restricted information islands are created. These information islands are characterized by systems whose hardware and software are proprietary; in other words, they are supplied by a single manufacturer, which ties the customer to that supplier. This situation causes great damage to companies, since connecting and integrating equipment from other suppliers is very complicated, and often impossible, because of the high cost of the solution or technical incompatibility. This work consists of specifying and implementing the Web visualization module of GERINF, a FINEP/CTPetro project whose objective is to develop software for information management in industrial processes. GERINF is divided into three modules: Web visualization, compression and storage, and communication. Results are presented from the use of the proposed system for information management at a natural gas collection unit in Guamaré, at PETROBRAS UN-RNCE.
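The abstract names GERINF's three modules without detailing their interfaces. A minimal sketch, with entirely hypothetical names, of how such a split might be expressed:

```python
# Hypothetical interfaces for a three-module split like the one the abstract
# names (communication, compression/storage, Web visualization). This is an
# illustration, not GERINF's actual design.
from abc import ABC, abstractmethod


class CommunicationModule(ABC):
    """Acquires raw process variables from plant equipment."""
    @abstractmethod
    def read_variables(self) -> dict[str, float]: ...


class StorageModule(ABC):
    """Compresses and persists acquired process data."""
    @abstractmethod
    def store(self, sample: dict[str, float]) -> None: ...

    @abstractmethod
    def query(self, variable: str) -> list[float]: ...


class WebVisualizationModule(ABC):
    """Renders stored process data for browser clients."""
    @abstractmethod
    def render(self, variable: str) -> str: ...
```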
Abstract:
One of the current challenges of Ubiquitous Computing is the development of complex applications, those that are more than simple alarms triggered by sensors or simple systems that configure the environment according to user preferences. Such applications are hard to develop because they are composed of services provided by different middleware, and developers need to know the peculiarities of each of them, mainly their communication and context models. This thesis presents OpenCOPI, a platform that integrates various service providers, including context provision middleware. It provides a unified ontology-based context model, as well as an environment that enables the easy development of ubiquitous applications through the definition of semantic workflows containing the abstract description of the application. These semantic workflows are converted into concrete workflows, called execution plans. An execution plan is a workflow instance containing activities that are automated by a set of Web services. OpenCOPI supports automatic Web service selection and composition, enabling the use of services provided by distinct middleware in an independent and transparent way. Moreover, the platform also supports execution adaptation in case of service failures, user mobility, and degradation of service quality. OpenCOPI is validated through the development of case studies, specifically applications for the oil industry. In addition, this work evaluates the overhead introduced by OpenCOPI, comparing it with the benefits provided, and the efficiency of OpenCOPI's selection and adaptation mechanisms.
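A minimal sketch (not OpenCOPI's actual algorithm; all names hypothetical) of the step the abstract describes, in which an abstract workflow is turned into an execution plan by selecting, for each activity, the candidate Web service with the best quality score:

```python
# Hypothetical illustration of abstract-workflow-to-execution-plan selection.
from dataclasses import dataclass


@dataclass
class Service:
    name: str
    availability: float  # 0..1
    latency_ms: float


def score(s: Service) -> float:
    # Simple weighted utility: favor availability, penalize latency.
    return 0.7 * s.availability - 0.3 * (s.latency_ms / 1000.0)


def build_execution_plan(workflow: list[str],
                         candidates: dict[str, list[Service]]) -> dict[str, Service]:
    """Map each abstract activity to its best-scoring concrete service."""
    return {activity: max(candidates[activity], key=score)
            for activity in workflow}


plan = build_execution_plan(
    ["monitor-well", "notify-engineer"],
    {"monitor-well": [Service("SensorA", 0.99, 120), Service("SensorB", 0.95, 40)],
     "notify-engineer": [Service("SMSGateway", 0.97, 300)]},
)
print(plan)
```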
Abstract:
With the advance of the Cloud Computing paradigm, a single service offered by a cloud platform may not be enough to meet all application requirements. To fulfill such requirements, a composition of services aggregating services provided by different cloud platforms may be necessary instead of a single service. In order to generate aggregate value for the user, this composition of services provided by several Cloud Computing platforms requires a solution in terms of platform integration, which encompasses the manipulation of a large number of non-interoperable APIs and protocols from different platform vendors. In this scenario, this work presents Cloud Integrator, a middleware platform for composing services provided by different Cloud Computing platforms. Besides providing an environment that facilitates the development and execution of applications that use such services, Cloud Integrator works as a mediator by providing mechanisms for building applications through the composition and selection of semantic Web services that take into account metadata about the services, such as QoS (Quality of Service), prices, etc. Moreover, the proposed middleware platform provides an adaptation mechanism that can be triggered in case of failure or quality degradation of one or more services used by the running application, in order to ensure its quality and availability. In this work, through a case study consisting of an application that uses services provided by different cloud platforms, Cloud Integrator is evaluated in terms of the efficiency of its service composition, selection, and adaptation processes, as well as the potential of using this middleware in heterogeneous computational cloud scenarios.
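Complementing the previous sketch, a minimal illustration (not Cloud Integrator's actual mechanism; all names hypothetical) of the adaptation the abstract describes, in which a failed or degraded service is rebound to the best remaining candidate:

```python
# Hypothetical adaptation step: candidate services per workflow step are
# (name, quality score) pairs; on failure, rebind to the best alternative.
def adapt(plan: dict[str, str],
          candidates: dict[str, list[tuple[str, float]]],
          failed_step: str) -> dict[str, str]:
    """Rebind a failed step to its best-scoring alternative service."""
    alternatives = [(name, q) for name, q in candidates[failed_step]
                    if name != plan[failed_step]]
    if not alternatives:
        raise RuntimeError(f"No alternative service for step {failed_step!r}")
    best_name, _ = max(alternatives, key=lambda pair: pair[1])
    updated = dict(plan)  # keep the old plan intact
    updated[failed_step] = best_name
    return updated


plan = {"store-file": "CloudA.storage"}
candidates = {"store-file": [("CloudA.storage", 0.9), ("CloudB.storage", 0.8)]}
print(adapt(plan, candidates, "store-file"))  # rebinds to CloudB.storage
```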
Abstract:
The use of Information and Communication Technologies (ICT) and Web environments for the creation, treatment, and availability of information has supported the emergence of new socio-cultural patterns represented by convergences of textual, image, and audio languages. This paper describes and analyzes the National Archives Experience Digital Vaults as a digital publishing web environment and as cultural heritage. It is a complex system that synthesizes information design options, offers new aesthetic aspects, and, above all, enlarges the cognition of the subjects who interact with the environment. It also extends the institutional spaces that guard collective memory beyond their role of keeping the physical patrimony collected there. Digital Vaults works as a mix of guide and interactive catalogue, to be explored in a playful way. The publishing design of the information held by the Archives is meant to facilitate access to knowledge. The documents are organized dynamically rather than chronologically; they are not divided into fonds or distinct categories, but rather into a controlled interaction of documents previously indexed and linked by the software. The software creates an information design and a view of documental content that can be considered a new paradigm in Information Science and is part of the post-custodial regime, independent of physical spaces and institutions. Information professionals must be prepared to understand and work with the paradigmatic changes described and represented by the new hybrid digital environments; hence the importance of this paper. In cyberspace, the interactivity between users and the content offered through the environment's design fosters cooperation, collaboration, and knowledge sharing, all features of networks, transforming culture globally.
Abstract:
Graduate Program in Information Science - FFC
Abstract:
Graduate Program in Information Science - FFC
Abstract:
Traditional supervised data classification considers only physical features (e.g., distance or similarity) of the input data. Here, this type of learning is called low level classification. On the other hand, the human (animal) brain performs both low and high orders of learning, and it easily identifies patterns according to the semantic meaning of the input data. Data classification that considers not only physical attributes but also pattern formation is referred to here as high level classification. In this paper, we propose a hybrid classification technique that combines both types of learning. The low level term can be implemented by any classification technique, while the high level term is realized by extracting features of the underlying network constructed from the input data. Thus, the former classifies test instances by their physical features or class topologies, while the latter measures the compliance of test instances with the pattern formation of the data. Our study shows that the proposed technique not only can realize classification according to pattern formation, but is also able to improve the performance of traditional classification techniques. Furthermore, as the complexity of the class configuration increases, for example through greater mixture among different classes, a larger portion of the high level term is required to obtain correct classification. This feature confirms that high level classification has special importance in complex classification scenarios. Finally, we show how the proposed technique can be employed in a real-world application, where it is capable of identifying variations and distortions of handwritten digit images. As a result, it improves the overall pattern recognition rate.
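A simplified sketch of the hybrid scheme the abstract describes: the final decision is a convex combination of a conventional low level classifier and a network-based high level term measuring compliance with each class's pattern formation. The high level measure used here (change in the average clustering of a class kNN graph when the test point is inserted) is one illustrative choice, not the paper's exact formulation:

```python
# Hybrid low/high level classification, simplified illustration.
import numpy as np
import networkx as nx
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier, kneighbors_graph


def class_graph(points: np.ndarray, k: int = 3) -> nx.Graph:
    """Build an undirected kNN graph over one class's training points."""
    adj = kneighbors_graph(points, n_neighbors=min(k, len(points) - 1))
    return nx.from_scipy_sparse_array(adj)


def high_level_scores(x, X, y, k: int = 3) -> np.ndarray:
    """Per-class compliance: small structural change => high compliance."""
    changes = []
    for c in np.unique(y):  # np.unique sorts, matching predict_proba order
        pts = X[y == c]
        before = nx.average_clustering(class_graph(pts, k))
        after = nx.average_clustering(class_graph(np.vstack([pts, x]), k))
        changes.append(abs(after - before))
    inv = 1.0 / (np.array(changes) + 1e-9)  # less change, more compliant
    return inv / inv.sum()                  # normalize to a distribution


X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

x_test = X[0]
lam = 0.3                                   # weight of the high level term
low = knn.predict_proba([x_test])[0]        # low level class probabilities
high = high_level_scores(x_test, X, y)
combined = (1 - lam) * low + lam * high
print("predicted class:", combined.argmax())
```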
Abstract:
One of the five components of the Triskel architecture, a NoSQL database that aims to address the Big Data problem of the semantic web, namely the large number of resource identifiers required by the growing number of web sites. Specifically, this component is the execution-management engine for patterns based on triples and on RDF technology. It receives the query request from the interpreter and analyzes the patterns involved in the query in search of exploitable dependencies between them, so that the query can be executed faster; it then resolves the individual patterns against the storage, a TripleStore, and returns the result of the request as a table.
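A minimal sketch (not Triskel's implementation) of what such an engine does: order the triple patterns so that the most selective ones come first, propagate variable bindings from pattern to pattern, resolve each against an in-memory triple store, and return the bindings as a result table:

```python
# Toy triple-pattern execution engine over an in-memory store.
Triple = tuple[str, str, str]


def is_var(term: str) -> bool:
    return term.startswith("?")


def match(pattern: Triple, triple: Triple, row: dict) -> dict | None:
    """Try to unify a pattern with a triple under existing bindings."""
    new_row = dict(row)
    for p_term, t_term in zip(pattern, triple):
        if is_var(p_term):
            if new_row.get(p_term, t_term) != t_term:
                return None
            new_row[p_term] = t_term
        elif p_term != t_term:
            return None
    return new_row


def execute(patterns: list[Triple], store: set[Triple]) -> list[dict]:
    # Selectivity heuristic: evaluate patterns with the most constants
    # first, so their bindings prune the search space of later patterns.
    ordered = sorted(patterns, key=lambda p: -sum(not is_var(t) for t in p))
    rows = [{}]
    for pattern in ordered:
        rows = [r2 for row in rows for t in store
                if (r2 := match(pattern, t, row)) is not None]
    return rows


store = {("alice", "knows", "bob"), ("bob", "knows", "carol"),
         ("alice", "age", "30")}
print(execute([("?x", "knows", "?y"), ("?y", "knows", "?z")], store))
# [{'?x': 'alice', '?y': 'bob', '?z': 'carol'}]
```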
Abstract:
SPARQL Interpreter is one of the five components of the Triskel Architecture, a software architecture for a NoSQL database that aims to provide a solution to the Big Data problem of the semantic web. This component solves the problem of communication between the language and the engine: it interprets the queries issued against the storage in the SPARQL language and generates a data structure that the lower-level components can read and execute.
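A minimal sketch (not the Triskel component) of the interpreter's role: turning a tiny subset of SPARQL SELECT into a plain data structure, the projected variables plus the triple patterns, that an engine like the one sketched above could execute. Real SPARQL of course requires a full grammar:

```python
# Toy SPARQL SELECT interpreter for a minimal subset of the language.
import re
from dataclasses import dataclass, field


@dataclass
class ParsedQuery:
    variables: list[str] = field(default_factory=list)
    patterns: list[tuple[str, str, str]] = field(default_factory=list)


def interpret(sparql: str) -> ParsedQuery:
    m = re.search(r"SELECT\s+(.+?)\s+WHERE\s*\{(.+?)\}", sparql,
                  re.IGNORECASE | re.DOTALL)
    if not m:
        raise ValueError("unsupported query form")
    variables = m.group(1).split()
    patterns = []
    for clause in m.group(2).split("."):  # patterns separated by '.'
        terms = clause.split()
        if len(terms) == 3:
            patterns.append((terms[0], terms[1], terms[2]))
    return ParsedQuery(variables, patterns)


q = interpret("SELECT ?x ?z WHERE { ?x knows ?y . ?y knows ?z }")
print(q.variables)  # ['?x', '?z']
print(q.patterns)   # [('?x', 'knows', '?y'), ('?y', 'knows', '?z')]
```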
Abstract:
Electronic business surely represents the new development perspective for world-wide trade. Together with the idea of e-business, and the need to exchange business messages between trading partners, the concept of business-to-business (B2B) integration arose. B2B integration is becoming necessary to allow partners to communicate and exchange business documents, like catalogues, purchase orders, reports, and invoices, overcoming architectural, application, and semantic differences, according to the business processes implemented by each enterprise. Business relationships can be very heterogeneous, and consequently there are various ways to integrate enterprises with each other. Moreover, nowadays not only large enterprises but also small and medium enterprises are moving towards e-business: more than two-thirds of Small and Medium Enterprises (SMEs) use the Internet as a business tool. One of the business areas actively facing the interoperability problem is supply chain management. In order to really allow SMEs to improve their business and to fully exploit ICT technologies in their business transactions, three main players must be considered and joined: the new emerging ICT technologies, the scenario and requirements of the enterprises, and the world of standards and standardisation bodies. This thesis presents the definition and development of an interoperability framework (and the associated standardisation initiatives) to provide the Textile/Clothing sector with a shared set of business documents and protocols for electronic transactions. Considering also some limitations, the thesis proposes an ontology-based approach to improve the functionality of the developed framework and, exploiting the technologies of the semantic web, to improve the standardisation life-cycle, understood as the development, dissemination, and adoption of B2B protocols for a specific business domain. The use of ontologies allows the semantic modelling of knowledge domains, upon which it is possible to develop a set of components for better management of B2B protocols, and to ease their comprehension and adoption by the target users.
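A minimal sketch (not the thesis framework; the namespace URI is hypothetical) of ontology-based modelling of a B2B document domain with rdflib, in which purchase orders and invoices are subclasses of a generic business document:

```python
# Tiny illustrative Textile/Clothing B2B vocabulary built with rdflib.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

TEX = Namespace("http://example.org/textile-b2b#")  # hypothetical namespace
g = Graph()
g.bind("tex", TEX)

# Class hierarchy: PurchaseOrder and Invoice are kinds of BusinessDocument.
g.add((TEX.BusinessDocument, RDF.type, RDFS.Class))
for doc_class in (TEX.PurchaseOrder, TEX.Invoice):
    g.add((doc_class, RDF.type, RDFS.Class))
    g.add((doc_class, RDFS.subClassOf, TEX.BusinessDocument))

# An instance: one purchase order exchanged between two trading partners.
g.add((TEX.po42, RDF.type, TEX.PurchaseOrder))
g.add((TEX.po42, TEX.buyer, Literal("WeaveWorks SpA")))
g.add((TEX.po42, TEX.supplier, Literal("FiberCo Srl")))

print(g.serialize(format="turtle"))
```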