7 results for Web Semantico semantic open data geoSPARQL
at the Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
The Brazilian CAPES Journal Portal aims to provide Information in Science and Technology (IST) to academic users and is therefore a relevant instrument for graduate education and for the country's Science and Technology (S&T) development. Despite its importance, few studies have focused on policy analysis and on the efficiency of these resources. This research aims to fill that gap by analysing the use of the CAPES Journal Portal by master's and doctoral alumni of the Post-Graduate Program in Management (PPGA) at the Federal University of Rio Grande do Norte (UFRN). The main objective was operationalized through specific objectives: a) characterize the graduates' profile as CAPES Journal Portal users; b) identify their motivations for using the Portal; c) measure their degree of satisfaction with information seeking done through the Portal; d) verify their satisfaction regarding the use of the Portal; e) verify how the information obtained is used in the development of their academic activities. The research is descriptive in nature and employs a mixed methodological strategy in which the quantitative approach predominates. Data were collected through a web survey questionnaire. Quantitative data were analysed with statistical methods; for the qualitative analysis, Brenda Dervin's sense-making approach was used, together with content analysis of the open-ended questions. The sample comprised 90 graduates who defended their dissertation or thesis in the PPGA program at UFRN between 2010 and 2013, representing 88% of that population. Regarding user profile, the analysis showed no quantitative differences related to gender. Male graduates were predominantly aged 26 to 30.
Female graduates, in their great majority, were aged 31 to 35. Most graduates held a master's scholarship to support their studies. The great majority reported using the Portal during their graduate studies; the main reasons for non-use were a preference for other databases and lack of knowledge about the Portal. The most used information resources were theses and dissertations, and the data indicate a preference for full text. Those who used the Portal also reported using other electronic information sources to meet their information needs; the sources sought outside the Portal were monographs, dissertations and theses, with SciELO the most used. Results reveal that the Portal was accessed and used regularly during graduate studies, though graduates also relied on other electronic information sources. The study confirmed the important role performed by the Portal in Brazilian scientific communication, even though users reported the need for improvement in some aspects, such as: periodic training to promote, encourage and teach more effective use of the Portal; investment in expanding the Portal's Social Sciences collection; and the implementation of a continuous process for evaluating user satisfaction with the services provided.
Abstract:
One of the current challenges of Ubiquitous Computing is the development of complex applications that go beyond simple sensor-triggered alarms or systems that configure the environment according to user preferences. Such applications are hard to develop because they are composed of services provided by different middleware platforms, and the peculiarities of each one, mainly its communication and context models, must be known. This thesis presents OpenCOPI, a platform that integrates various service providers, including context-provision middleware. It provides a unified ontology-based context model, as well as an environment that enables easy development of ubiquitous applications through the definition of semantic workflows containing an abstract description of the application. These semantic workflows are converted into concrete workflows, called execution plans. An execution plan is a workflow instance whose activities are automated by a set of Web services. OpenCOPI supports automatic Web service selection and composition, enabling services provided by distinct middleware to be used in an independent and transparent way. The platform also supports execution adaptation in case of service failures, user mobility and degradation of service quality. OpenCOPI is validated through case studies, specifically applications from the oil industry. In addition, this work evaluates the overhead introduced by OpenCOPI, compares it with the benefits provided, and assesses the efficiency of OpenCOPI's selection and adaptation mechanisms.
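The abstract describes turning abstract workflow activities into concrete service calls, re-selecting services when one fails. A minimal sketch of that idea follows; the names, QoS scores and dictionary layout are assumptions for illustration, not OpenCOPI's actual API.

```python
# Hedged sketch of QoS-based service selection with failure adaptation,
# in the spirit of the selection/adaptation mechanism described above.

def select_service(candidates, failed=frozenset()):
    """Pick the available candidate with the best QoS score."""
    alive = [s for s in candidates if s['name'] not in failed]
    if not alive:
        raise RuntimeError('no service can automate this activity')
    return max(alive, key=lambda s: s['qos'])

def execute_plan(workflow, providers):
    """Turn each abstract activity into a concrete service call,
    re-selecting (adapting) whenever an invocation fails."""
    failed, plan = set(), []
    for activity in workflow:
        while True:
            svc = select_service(providers[activity], failed)
            if svc['healthy']:               # stand-in for a real invocation
                plan.append((activity, svc['name']))
                break
            failed.add(svc['name'])          # adapt: exclude failed service, retry
    return plan

providers = {
    'fetch-sensor-data': [
        {'name': 'midA.fetch', 'qos': 0.9, 'healthy': False},
        {'name': 'midB.fetch', 'qos': 0.7, 'healthy': True},
    ],
    'raise-alarm': [{'name': 'midA.alarm', 'qos': 0.8, 'healthy': True}],
}
print(execute_plan(['fetch-sensor-data', 'raise-alarm'], providers))
# -> [('fetch-sensor-data', 'midB.fetch'), ('raise-alarm', 'midA.alarm')]
```

The point of the sketch is the separation the thesis emphasises: the workflow names only abstract activities, while binding to a concrete middleware service happens at execution time and can change on failure.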
Abstract:
Graph reduction machines are a traditional technique for implementing functional programming languages: they run programs by transforming graphs through the successive application of reduction rules. Web service composition enables the creation of new web services from existing ones. BPEL is a workflow-based language for creating web service compositions, and the industrial and academic standard for this kind of language. Because it is designed to compose web services, using BPEL in a scenario where multiple technologies are involved is problematic: when operations other than web services must be performed to implement a company's business logic, part of the work is done on an ad hoc basis. Allowing heterogeneous operations to be part of the same workflow can help implement business processes in a principled way. This work uses a simple variation of the BPEL language to create compositions containing not only web service operations but also big-data tasks and user-defined operations. We define an extensible graph reduction machine that allows the evaluation of BPEL programs, implement this machine as a proof of concept, and present experimental results.
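To make the graph-reduction idea concrete, here is a minimal sketch, entirely an illustration and not the thesis's machine: nodes of an expression graph are rewritten in place by reduction rules, and shared subgraphs are reduced only once.

```python
# Toy graph reduction: a node is a literal or an operator over child nodes.
# Reducing overwrites the node with its value, so sharing pays off.

class Node:
    def __init__(self, op=None, children=(), value=None):
        self.op = op                  # e.g. '+', '*', or None for a literal
        self.children = list(children)
        self.value = value            # filled in once the node is reduced

RULES = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}

def reduce_graph(node):
    """Reduce a node to a value, memoising the result on the node itself
    so a shared subgraph is evaluated exactly once."""
    if node.value is not None:
        return node.value
    args = [reduce_graph(c) for c in node.children]
    node.value = RULES[node.op](*args)   # rewrite the graph in place
    return node.value

# (2 + 3) shared by both operands of the multiplication: (2+3) * (2+3)
shared = Node('+', [Node(value=2), Node(value=3)])
root = Node('*', [shared, shared])
print(reduce_graph(root))  # -> 25
```

An extensible machine in the spirit of the thesis would let new entries be added to the rule table, which is how heterogeneous operations (web service calls, big-data tasks, user-defined operations) could share one workflow.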
Abstract:
The recent focus on Web Services and Semantic Web technologies has led to several research projects addressing the Web service composition problem in different ways. Nevertheless, creating an environment in which an abstract business process can be specified and then automatically and dynamically implemented by a composite service remains an open problem. WSDL and BPEL, provided by industry, support only manual service composition because they lack the semantics needed for Web services to be discovered, selected and combined by software agents. Service ontologies provided by the Semantic Web enrich the syntactic descriptions of Web services to facilitate the automation of tasks such as discovery and composition. This work presents WebFlowAH, an environment for specifying and executing ad hoc Web-service-based business processes. WebFlowAH employs a common domain ontology to describe both Web services and business processes, and allows processes to be specified in terms of user goals or desires expressed through the concepts of that ontology. This approach lets processes be specified at an abstract, high level, freeing the user from the underlying details needed to actually run the process workflow.
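The goal-driven composition described above can be sketched very simply; the chaining strategy, service names and ontology concepts below are assumptions for illustration, not WebFlowAH's actual algorithm or vocabulary.

```python
# Hedged sketch: services annotated with shared-ontology concepts are
# chained automatically until a user goal concept is produced.

SERVICES = [  # (name, input concept, output concept) from a shared ontology
    ('geocode',  'Address',     'Coordinates'),
    ('forecast', 'Coordinates', 'WeatherReport'),
]

def compose(have, goal, services):
    """Greedy forward chaining: repeatedly apply any service whose input
    concept is already known, until the goal concept is reached."""
    known, plan = {have}, []
    progress = True
    while goal not in known and progress:
        progress = False
        for name, inp, out in services:
            if inp in known and out not in known:
                known.add(out)
                plan.append(name)
                progress = True
    return plan if goal in known else None

print(compose('Address', 'WeatherReport', SERVICES))
# -> ['geocode', 'forecast']
```

The user states only what they have and what they want; because both services and goals reference the same ontology concepts, a matcher can assemble the workflow without the user naming any concrete service.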
Abstract:
Cloud computing can be defined as a distributed computing model through which resources (hardware, storage, development platforms and communication) are shared as paid services, accessible with minimal management effort and interaction. A great benefit of this model is the ability to use multiple providers (i.e., a multi-cloud architecture) to compose a set of services in order to obtain an optimal configuration for performance and cost. However, multi-cloud use is hindered by the problem of cloud lock-in: the dependency between an application and a cloud platform. Lock-in is commonly addressed by three strategies: (i) an intermediate layer that stands between the consumers of cloud services and the provider; (ii) standardized interfaces to access the cloud; or (iii) models with open specifications. This work outlines an approach to evaluate these strategies; the evaluation showed that, despite the advances they bring, none of them actually solves the cloud lock-in problem. This work therefore proposes the use of Semantic Web technologies to avoid cloud lock-in, with RDF models specifying the features of a cloud and SPARQL queries managing them. In this direction, this work: (i) presents an evaluation model that quantifies the cloud lock-in problem; (ii) evaluates cloud lock-in across three multi-cloud solutions and three cloud platforms; (iii) proposes the use of RDF and SPARQL for the management of cloud resources; (iv) presents the Cloud Query Manager (CQM), a SPARQL server that implements the proposal; and (v) compares the three multi-cloud solutions with CQM in terms of response time and effectiveness in resolving cloud lock-in.
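The core idea, describing cloud features as RDF triples so they can be queried uniformly instead of through provider-specific APIs, can be illustrated with a toy pattern matcher. The vocabulary (`cloud:vcpus`, `cloud:offeredBy`) and data below are invented for illustration and are not CQM's actual schema.

```python
# Toy illustration of RDF-style cloud descriptions queried uniformly,
# in the spirit of the SPARQL-based proposal above.

TRIPLES = {  # (subject, predicate, object)
    ('aws:vm.large', 'cloud:offeredBy', 'aws'),
    ('aws:vm.large', 'cloud:vcpus',     4),
    ('gcp:n2-std-2', 'cloud:offeredBy', 'gcp'),
    ('gcp:n2-std-2', 'cloud:vcpus',     2),
}

def objects(subject, predicate, triples):
    """All objects matching a (subject, predicate, ?) pattern."""
    return [o for s, p, o in triples if s == subject and p == predicate]

def query_vms(min_vcpus, triples):
    """SPARQL-like selection: VMs with at least min_vcpus, joined with
    their provider -- the same query works for every provider."""
    vms = {s for s, p, o in triples if p == 'cloud:vcpus' and o >= min_vcpus}
    return sorted((objects(vm, 'cloud:offeredBy', triples)[0], vm)
                  for vm in vms)

print(query_vms(4, TRIPLES))  # -> [('aws', 'aws:vm.large')]
```

Because the application depends only on the shared vocabulary, swapping or adding a provider means adding triples, not rewriting provider-specific code, which is precisely the lock-in relief the thesis argues for.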