857 results for Service-oriented grid computing
Abstract:
Existing registry technologies such as UDDI can be enhanced to support semantic reasoning and inquiry, which broadens their range of use. The Grimoires registry was developed to provide such support through the use of metadata attachments to registry entities. Such attachments also allow service operators to specify security assertions pertaining to the registry entities they own. These assertions may, however, have to be reconciled with existing registry policies. A security architecture based on the XACML standard and deployed in the OMII framework is outlined to demonstrate how this goal is achieved in the registry.
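The abstract does not reproduce the XACML policies themselves. As a rough illustration of the kind of attribute-based permit/deny decision involved, here is a minimal Python sketch; the attribute names (action, role, entity) and the default-deny fallback are hypothetical, not the Grimoires or OMII API.

```python
# Minimal sketch of an XACML-style permit/deny decision for a registry entity.
# Attribute names are hypothetical illustrations of how an operator-supplied
# assertion might be reconciled with a registry-wide policy.

def evaluate(request, policies):
    """Return the effect of the first applicable policy, or 'Deny' by default."""
    for policy in policies:
        if all(request.get(k) == v for k, v in policy["target"].items()):
            return policy["effect"]
    return "Deny"  # default-deny fallback, standing in for a registry-wide policy

registry_policy = {"target": {"action": "attach_metadata", "role": "owner"}, "effect": "Permit"}
request = {"action": "attach_metadata", "role": "owner", "entity": "urn:service:42"}
print(evaluate(request, [registry_policy]))  # -> Permit
```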
Abstract:
Notification Services mediate between information publishers and consumers that wish to subscribe to periodic updates. In many cases, however, there is a mismatch between the dissemination of these updates and the delivery preferences of the consumer, often in terms of frequency of delivery, quality, etc. In this paper, we present an automated negotiation engine that identifies mutually acceptable terms; we study its performance, and discuss its application to a Grid Notification Service. We also demonstrate how the negotiation engine enables users to control the Quality of Service levels they require.
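The abstract does not specify the negotiation protocol. The sketch below only illustrates, under assumed acceptable ranges and a fixed concession step, how an alternating-offers style negotiation over delivery frequency could converge on mutually acceptable terms; the names and numbers are hypothetical, not the engine described in the paper.

```python
# Hedged sketch: negotiation over an update frequency (messages per hour).
# Ranges, start value and concession step are hypothetical.

def negotiate(publisher_range, consumer_range, start, step=5, max_rounds=20):
    """Publisher concedes from its preferred rate until the offer falls in both ranges."""
    offer = start
    for _ in range(max_rounds):
        if publisher_range[0] <= offer <= publisher_range[1] and \
           consumer_range[0] <= offer <= consumer_range[1]:
            return offer          # mutually acceptable frequency
        offer -= step             # publisher concedes toward the consumer's preference
    return None                   # no agreement within the round limit

print(negotiate(publisher_range=(10, 60), consumer_range=(5, 20), start=60))  # -> 20
```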
Abstract:
HydroShare is an online, collaborative system being developed for open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is being used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.
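As a rough illustration of the Resource concept described above, the following Python sketch separates system metadata (common to all resources) from science metadata (specific to a resource type). The field names are hypothetical and do not reflect HydroShare's actual Resource Data Model.

```python
# Hedged sketch of the Resource concept: system metadata common to every resource,
# science metadata specific to the resource type. Field names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class SystemMetadata:
    resource_id: str
    owner: str
    created: str              # ISO 8601 timestamp
    sharing: str = "private"

@dataclass
class Resource:
    system: SystemMetadata
    science: dict = field(default_factory=dict)   # type-specific elements

ts = Resource(
    system=SystemMetadata("res-001", "alice", "2014-05-01T12:00:00Z"),
    science={"type": "TimeSeries", "variable": "streamflow", "units": "m3/s"},
)
print(ts.system.resource_id, ts.science["variable"])
```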
Abstract:
Expanding the provision of higher education is a basic need of developed and emerging countries, and it requires increasing and ongoing investment. Offering higher education through Internet-based Distance Learning is one of the most efficient ways to broaden that provision, as it allows wide coverage at lower cost. In this scenario, we highlight Moodle, an open and low-cost environment for Distance Learning. Its use may be amplified through the adoption of an emerging Information and Communication Technology (ICT), Cloud Computing, which allows the virtualization of Moodle sites, cutting costs, simplifying management and increasing their service capacity. This article presents a public, open and free tool for the automatic conversion of Moodle sites so that they can be hosted on Azure, Microsoft's Cloud Computing environment.
Abstract:
The current Internet has been suffering from several problems in terms of scalability, performance, mobility, etc., due to the steep increase in the number of users and the emergence of new services with new demands, giving rise to the Future Internet. New proposals for content-oriented networks, such as the Entity Title Architecture (ETArch), provide new services for this kind of scenario, implemented over the software-defined networking paradigm. However, ETArch's transport model is equivalent to the best-effort model of the current Internet, which has limited the reliability of its communications. In this work, ETArch is redesigned following the resource over-provisioning paradigm to achieve advanced resource allocation integrated with OpenFlow. As a result, the SMART framework (Suporte de Sessões Móveis com Alta Demanda de Recursos de Transporte, i.e., support for mobile sessions with high transport resource demand) allows the network to semantically define the qualitative requirements of sessions in order to manage Quality of Service control, aiming to maintain the best possible Quality of Experience. The evaluation of the data and control planes took place on the testbed island of the OFELIA project, showing support for mobile multimedia applications with high transport resource demand, with QoS and QoE guaranteed through a restricted signalling scheme in comparison with legacy ETArch.
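The abstract names OpenFlow and resource over-provisioning without giving implementation detail. The sketch below is only a hypothetical illustration of mapping a session's bandwidth requirement to a flow entry bound to a rate-limited queue; the Controller class and field names are assumptions, not the ETArch/SMART code.

```python
# Hedged sketch: admit a session and bind it to an OpenFlow-style flow entry with a
# queue action. The Controller class and the session fields are hypothetical.

class Controller:
    def __init__(self):
        self.flows = []

    def admit(self, session, link_capacity_mbps, reserved_mbps):
        """Over-provisioning-style admission: reject if the QoS guarantee cannot be kept."""
        if reserved_mbps + session["min_mbps"] > link_capacity_mbps:
            return None                       # not enough capacity left for a guarantee
        flow = {"match": {"dst": session["dst"]},
                "actions": [{"set_queue": session["queue"]}, {"output": session["port"]}]}
        self.flows.append(flow)
        return flow

ctl = Controller()
print(ctl.admit({"dst": "10.0.0.2", "min_mbps": 8, "queue": 1, "port": 2},
                link_capacity_mbps=100, reserved_mbps=90))
```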
Abstract:
Purpose - This paper proposes an interpolating approach of the element-free Galerkin method (EFGM) coupled with a modified truncation scheme for solving Poisson's boundary value problems in domains involving material non-homogeneities. The suitability and efficiency of the proposed implementation are evaluated for a set of test cases of electrostatic fields in domains involving different material interfaces.
Design/methodology/approach - The authors combined an interpolating approximation with a modified domain truncation scheme, which avoids the additional techniques for enforcing Dirichlet boundary conditions and for dealing with material interfaces usually employed in meshfree formulations.
Findings - The local electric potential and field distributions were correctly described, as well as global quantities such as the total power and resistance. Since the treatment of material interfaces becomes practically the same for both the finite element method (FEM) and the proposed EFGM, FEM-oriented programs can easily be extended to provide EFGM approximations.
Research limitations/implications - The robustness of the proposed formulation became evident from the error analyses of the local and global variables, including in the case of high material discontinuity.
Practical implications - The proposed approach has shown itself to be as robust as linear FEM. It thus becomes an attractive alternative, also because it avoids the additional techniques to deal with boundary/interface conditions commonly employed in meshfree formulations.
Originality/value - This paper reintroduces domain truncation in the EFGM context, but by using a set of interpolating shape functions the authors avoid the use of Lagrange multipliers, even in the presence of high material discontinuity.
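For context, the class of problem referred to above can be stated in its usual electrostatic textbook form (a generic formulation, not reproduced from the paper):

```latex
% Generic electrostatic Poisson boundary value problem with material non-homogeneity.
\[
  -\nabla \cdot \big( \varepsilon(\mathbf{x})\, \nabla u(\mathbf{x}) \big) = \rho(\mathbf{x})
  \quad \text{in } \Omega, \qquad
  u = \bar{u} \ \text{on } \Gamma_D, \qquad
  \varepsilon\, \partial_n u = \bar{q} \ \text{on } \Gamma_N,
\]
```

with continuity of the potential $u$ and of the normal flux $\varepsilon\, \partial_n u$ enforced across each material interface; handling these interface conditions is what the proposed EFGM formulation shares with standard FEM.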
Abstract:
The D0 experiment at Fermilab's Tevatron will record several petabytes of data over the next five years in pursuing the goals of understanding nature and searching for the origin of mass. The computing resources required to analyze these data far exceed the capabilities of any one institution. Moreover, the widely scattered geographical distribution of D0 collaborators poses further serious difficulties for optimal use of human and computing resources. These difficulties will be exacerbated in future high energy physics experiments, like the LHC. The computing grid has long been recognized as a solution to these problems. This technology is being made a more immediate reality to end users in D0 by developing a grid in the D0 Southern Analysis Region (DOSAR), DOSAR-Grid, using available resources within it and a home-grown local task manager, McFarm. We will present the architecture in which DOSAR-Grid is implemented, the technologies used and the functionality of the grid, and the experience from operating the grid in simulation, reprocessing and data analyses for a currently running HEP experiment.
Abstract:
The classical way to manage product development processes for mass production seems to be changing: high pressure for cost reduction, higher quality standards and markets demanding innovation lead to the need for new tools for development control. In this context, and learning from the automotive and aerospace industries, factories from other segments are starting to understand and apply manufacturing- and assembly-oriented design to ease the task of producing goods and thereby obtain at least part of the expected results. This paper is intended to demonstrate the applicability of the concepts of Concurrent Engineering and DFM/DFA (Design for Manufacturing and Assembly) in the development of products and parts for the White Goods industry in Brazil (major appliances such as refrigerators, cookers and washing machines), presenting a case concerning the development and release of a component. Finally, it is shown how these techniques led, in a short time, to a solution that provided cost savings and reduced time to delivery.
Abstract:
This paper presents simulation results for the DNP3 communication protocol over a TCP/IP network, for Smart Grid applications. The simulation was performed using the NS-2 network simulator. This study aimed to use simulation to verify the performance of the DNP3 protocol in a heterogeneous LAN. Analysis of the results showed that DNP3 over a network with heterogeneous traffic works well when channel utilization is between 60 and 85 percent, with low packet loss and low delay; however, with traffic above 85 percent, the use of DNP3 becomes unfeasible because information loss, retransmissions and latency increase significantly.
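As a rough illustration of the operating envelope reported above (assuming the 60 to 85 percent figures refer to channel utilization), the following hypothetical Python helper flags when traffic falls outside the regime the simulations found workable; it is not part of the study's tooling.

```python
# Hedged illustration of the reported envelope: DNP3 behaved well up to roughly 85%
# channel utilization and degraded badly beyond it. Thresholds come from the abstract;
# the helper itself is hypothetical.

def dnp3_feasibility(offered_load_bps, channel_capacity_bps, limit=0.85):
    utilization = offered_load_bps / channel_capacity_bps
    if utilization <= limit:
        return utilization, "feasible: low packet loss and delay expected"
    return utilization, "unfeasible: expect losses, retransmissions and high latency"

print(dnp3_feasibility(8_000_000, 10_000_000))   # 80% utilization -> feasible
print(dnp3_feasibility(9_500_000, 10_000_000))   # 95% utilization -> unfeasible
```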
Abstract:
Faced with an imminent restructuring of the electric power system, over the past few years many countries have invested in a new paradigm known as Smart Grid. This paradigm targets the optimization and automation of the electric power network, using advanced information and communication technologies. Among the main communication protocols for Smart Grids is DNP3, which provides secure data transmission at moderate rates. IEEE 802.15.4 is another communication protocol widely used in Smart Grids, especially in so-called Home Area Networks (HANs). Thus, many Smart Grid applications depend on the interaction of these two protocols. This paper proposes modeling, in the traditional NS-2 network simulator, the integration of the DNP3 protocol and the IEEE 802.15.4 wireless standard for low-cost simulation of Smart Grid applications.
Abstract:
Wireless networks are widely deployed and have many uses, for example in critical embedded systems. Applications of this kind of network must meet the common needs of most embedded systems while addressing the particularities of each scenario, such as limited computing resources and energy supply. Problems such as denial of service attacks are commonplace and cause great inconvenience. Thus, this study presents simulations of denial of service attacks on 802.11 wireless networks using the OMNeT++ network simulator. Furthermore, we present an approach to mitigate such attacks, obtaining significant results for improving wireless networks.
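The abstract does not describe the mitigation approach itself, so the sketch below is explicitly not that approach; it is only a generic Python illustration of one common building block, a rate-threshold detector for a flood of frames from a single source, with hypothetical thresholds and frame format.

```python
# Generic illustration only: flag sources that send more than a threshold number of
# frames per time window. NOT the paper's mitigation approach; values are hypothetical.

from collections import defaultdict

def detect_flood(frames, window_s=1.0, max_per_window=50):
    """frames: iterable of (timestamp_seconds, source_mac). Returns suspected sources."""
    buckets = defaultdict(int)
    for ts, src in frames:
        buckets[(src, int(ts // window_s))] += 1
    return {src for (src, _), count in buckets.items() if count > max_per_window}

trace = [(0.01 * i, "aa:bb:cc:dd:ee:ff") for i in range(500)] + [(0.5, "11:22:33:44:55:66")]
print(detect_flood(trace))   # -> {'aa:bb:cc:dd:ee:ff'}
```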
Abstract:
The development of cloud computing services is speeding up the rate at which organizations outsource their computational services or sell their idle computational resources. Even though migrating to the cloud remains a tempting trend from a financial perspective, there are several other aspects that must be taken into account by companies before they decide to do so. One of the most important aspects is security: while some cloud computing security issues are inherited from the solutions adopted to create such services, many new security questions that are particular to these solutions also arise, including those related to how the services are organized and which kinds of services and data can be placed in the cloud. Aiming to give a better understanding of this complex scenario, in this article we identify and classify the main security concerns and solutions in cloud computing, and propose a taxonomy of security in cloud computing, giving an overview of the current status of security in this emerging technology.