905 results for Web content management systems


Relevance:

100.00%

Publisher:

Abstract:

HydroShare is an online, collaborative system being developed for open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is being used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. 
This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.
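The separation of system and science metadata described above can be sketched as a small data model; the class and field names below are illustrative only and do not reproduce HydroShare's actual schema:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class SystemMetadata:
    # Elements common to every resource (hypothetical field names)
    resource_id: str
    owner: str
    created: str

@dataclass
class Resource:
    # Keeps system metadata separate from science metadata,
    # as the Resource Data Model above prescribes
    system: SystemMetadata
    science: Dict[str, str] = field(default_factory=dict)

@dataclass
class TimeSeriesResource(Resource):
    # Type-specific elements, e.g. for a hydrologic time series
    site_code: str = ""
    variable: str = ""

r = TimeSeriesResource(
    system=SystemMetadata("hs-001", "alice", "2014-01-01"),
    science={"title": "Streamflow at gauge X"},
    site_code="USGS-01646500",
    variable="discharge",
)
print(r.system.resource_id, r.science["title"], r.variable)
```

Type-specific resources extend the common core rather than redefining it, which is what lets one catalog hold heterogeneous data and model resources.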

Relevance:

100.00%

Publisher:

Abstract:

This report consolidates research conducted between April 2005 and April 2006 on the state of adoption of, and the opportunities for using, new information technologies in government processes. The expansion of boundaries beyond the traditional limits of organizations creates a new and stronger demand for flexibility, one that allows the integrated handling of bodies with different constitutions, architectures and operational processes, not to mention different information systems. This is even more important in public organizations. On the other hand, one of the main negative characteristics of public agencies is the slowness and bureaucracy of their administrative and citizen-service processes. The lack of a modern technological vision, that is, the lack of an Information Technology Master Plan (PDTI) oriented toward new solutions such as BPM, combined with the lack of integration between systems and processes, means that many government agencies are moving against the current of technological development. This research project is therefore of high interest, since it focuses on the possibilities and impacts of adopting new process-oriented and web-service technologies (BPM, Business Process Management, and BPMS, Business Process Management Systems) in the government sector, which is largely lacking in integrated service solutions for citizens and companies. These new technologies bring paradigms completely different from those adopted so far in the implementation of information systems and process automation. Despite the difficulties inherent in addressing a new and complex topic, all the more so in government bodies, we believe we have carried out a thorough study that meets the objectives established in the original plan, with the necessary adjustments of course and focus along the way.
We also believe that this work establishes a relevant reference in the body of knowledge on improving government processes through new technologies. The planned and delivered by-products, included in the appendix volume of this report, comprise content already developed for the publication of one or two books on the topic, several articles, and several events held at EAESP on the project's theme, which provided excellent opportunities for exchanging experiences. This report, presented in an objective and concise form and covering only the main aspects addressed, is complemented by extensive supplementary material delivered in a volume of Appendices.

Relevance:

100.00%

Publisher:

Abstract:

Promoting any event relies heavily on the Internet as a means of publishing and disseminating content, whether through a website or social networks. But these websites are useful for more than publishing content: adding functionality turns them into more complete platforms serving many purposes, including elements for managing the event itself. This project arises from the need of the Rali Vinho Madeira (RVM) organization to replace its existing platform, which was insufficient for current needs both in the public promotion of the event and in managing the entries and registration of the various entities taking part in it. The main objective of this project was therefore to develop a new website that implemented the requirements satisfactorily both for the RVM organization and for its users. At the same time it was also important to ensure that the server hosting the new platform performed as well as possible under real conditions, using a load-testing plan to validate the chosen configurations and detect possible problems in good time. A final component of the RVM platform, also developed within this work, was a web application for consulting results on mobile devices such as smartphones and tablets. This document describes the various stages of the project, notably: (1) the evaluation of websites to better characterize the requirements, (2) the process of analyzing, specifying and developing the platform, and (3) load testing as a means of validating the server configuration for satisfactory performance during the race.
The Rally Entries module, central to the organization and to this project, transforms a simple content-publishing platform into a system for managing the entries of the various entities involved in the RVM. Besides describing the implementation and functionality of this module, the document also describes how this component can adapt to new requirements in future events. The developed platform was validated by surveying its users. Overall the results were positive compared with the previous platform and with the websites of other rallies. As an event within the activities of the Fédération Internationale de l'Automobile (FIA) and the Federação Portuguesa de Automobilismo e Karting, the website was also part of the event evaluations carried out by these organizations, receiving 4 points out of a possible 5 in both. Finally, the load tests proved to be a great help in preparing the platform, particularly for peak access periods, and the platform responded predictably to the load it was subjected to.

Relevance:

100.00%

Publisher:

Abstract:

Industrial automation is structured as a hierarchical pyramid in which restricted information islands are created. These information islands are characterized by systems whose hardware and software are proprietary; in other words, they are supplied by a single manufacturer, which ties the customer to that supplier. This situation causes great losses to companies, since connecting and integrating equipment from other suppliers is very complicated and often impossible, either because of the high cost of the solution or because of technical incompatibility. This work consists of specifying and implementing the Web-based visualization module of GERINF, a FINEP/CTPetro project whose objective is to develop software for information management in industrial processes. GERINF is divided into three modules: Web visualization, compression and storage, and communication. We present results from applying the proposed system to information management at a natural gas collection unit in Guamaré, at the PETROBRAS UN-RNCE.

Relevance:

100.00%

Publisher:

Abstract:

The control of industrial processes has become increasingly complex due to the variety of factory devices, quality requirements and market competition. Such complexity requires a large amount of data to be handled at the three levels of process control: field devices, control systems and management software. Using data effectively at each of these levels is extremely important to industry. Many of today's industrial computer systems consist of distributed software systems written in a wide variety of programming languages and developed for specific platforms, so many companies make significant investments to maintain or even rewrite their systems for different platforms. Furthermore, it is rare for a software system to work in complete isolation; in industrial automation it is common for software to interact with other systems on different machines, possibly written in different languages. Interoperability is therefore not just a long-term challenge but a current requirement of industrial software production. This work proposes a middleware solution for communication over web services and presents a use case applying the developed solution to an integrated system for industrial data capture, making such data available in a simplified, platform-independent way across the network.
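The platform-independence argument above hinges on exchanging captured data in a neutral wire format that any client language can parse. A minimal sketch, assuming a hypothetical JSON payload schema rather than the middleware's actual interface:

```python
import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class Reading:
    # One captured field-device value (hypothetical schema)
    tag: str        # device/point identifier, e.g. a temperature transmitter
    value: float
    timestamp: str  # ISO-8601 string keeps the payload language-neutral

def to_payload(readings: List[Reading]) -> str:
    # Serialize readings to JSON so that clients on any platform,
    # in any language, can consume them over a web service
    return json.dumps({"readings": [asdict(r) for r in readings]})

payload = to_payload([Reading("TT-101", 87.5, "2013-06-01T12:00:00Z")])
decoded = json.loads(payload)
print(decoded["readings"][0]["tag"])
```

In a real deployment the payload would travel over a web-service endpoint; the serialization boundary shown here is what decouples the three control levels from any single vendor platform.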

Relevance:

100.00%

Publisher:

Abstract:

This article discusses the feasibility of reusing existing web-based e-Learning systems in an Interactive Digital TV environment, according to the Digital TV standard adopted in Brazil. Given the popularity of the Moodle system in academic and corporate settings, it was chosen as the foundation for a survey of its properties, with the aim of specifying an Application Programming Interface (API) for convergence toward t-Learning. This convergence demands effort in interface design, since computer and TV interaction concepts are quite different. The work presents user interface design studies in two stages: surveying and detailing the functionality of an e-Learning system, and adapting that functionality to Interactive TV with regard to usability and Information Architecture concepts.

Relevance:

100.00%

Publisher:

Abstract:

Includes bibliography

Relevance:

100.00%

Publisher:

Abstract:

In geophysics and seismology, raw data need to be processed to generate useful information that can be turned into knowledge by researchers. The number of sensors that are acquiring raw data is increasing rapidly. Without good data management systems, more time can be spent in querying and preparing datasets for analyses than in acquiring raw data. Also, a lot of good quality data acquired at great effort can be lost forever if they are not correctly stored. Local and international cooperation will probably be reduced, and a lot of data will never become scientific knowledge. For this reason, the Seismological Laboratory of the Institute of Astronomy, Geophysics and Atmospheric Sciences at the University of São Paulo (IAG-USP) has concentrated fully on its data management system. This report describes the efforts of the IAG-USP to set up a seismology data management system to facilitate local and international cooperation. © 2011 by the Istituto Nazionale di Geofisica e Vulcanologia. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

Graduate Program in Computer Science - IBILCE

Relevance:

100.00%

Publisher:

Abstract:

Introduction: Organizations are increasingly adopting electronic/digital spaces (Internet/intranet/extranet) as a way to efficiently manage information and knowledge in the organizational environment. The management of informational and intellectual assets ranges from the strategic level to the operational level, and the results demonstrate the strength of socializing organizational strategies. Objective: To reflect on the role of information architecture in the development of electronic/digital spaces in organizational environments. Methodology: An analytical study supported by the specialized literature, based on the three aspects emphasized by Morville and Rosenfeld (2006) and applied to information architecture (context, content and user studies), together with Choo's (2006) model of information seeking and use, which also highlights three aspects: situational dimensions, cognitive needs and emotional reactions. Results: In the Web environment, organizations maintain a large number of brand/product sites that mostly lack a shared organizational structure or navigation. The results show that when one department needs to contact another, it must do so offline. Conclusion: Information architecture has become essential for developing management information systems that make data and information easy to find and access; it also helps in developing distinct hierarchies to structure the distribution of content, promoting the quality and effectiveness of the management systems.

Relevance:

100.00%

Publisher:

Abstract:

This thesis proposes a new document model, according to which any document can be segmented into independent components and transformed into a pattern-based projection that uses only a very small set of objects and composition rules. The point is that such a normalized document expresses the same fundamental information as the original in a simple, clear and unambiguous way. The central part of my work consists of discussing that model, investigating how a digital document can be segmented, and how a segmented version can be used to implement advanced conversion tools. I present seven patterns which are versatile enough to capture the most relevant document structures, and whose minimality and rigour make that implementation possible. The abstract model is then instantiated into an actual markup language, called IML. IML is a general and extensible language, which basically adopts an XHTML syntax, able to capture, a posteriori, only the content of a digital document. It is compared with other languages and proposals in order to clarify its role and objectives. Finally, I present some systems built upon these ideas. These applications are evaluated in terms of user advantages, workflow improvements and impact on the overall quality of the output. In particular, they cover heterogeneous content management processes: from web editing to collaboration (IsaWiki and WikiFactory), from e-learning (IsaLearning) to professional printing (IsaPress).
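The idea of projecting a document onto a small pattern vocabulary can be sketched as a classification of markup elements; the pattern names and element mapping below are purely illustrative and do not reproduce the seven patterns actually defined in the thesis:

```python
# Hypothetical pattern vocabulary (the thesis defines seven patterns;
# these four are illustrative stand-ins)
PATTERNS = {"container", "block", "inline", "atom"}

# Toy mapping from XHTML element names to patterns (illustrative only)
ELEMENT_PATTERN = {
    "div": "container",
    "p": "block",
    "em": "inline",
    "img": "atom",
}

def segment(elements):
    # Project a stream of element names onto the pattern vocabulary:
    # the normalization step the document model relies on. Unknown
    # elements fall back to "block" in this sketch.
    return [(e, ELEMENT_PATTERN.get(e, "block")) for e in elements]

print(segment(["div", "p", "em", "img"]))
```

Once every element carries a pattern, conversion tools can operate on the small, closed pattern set instead of the open set of source markup vocabularies.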

Relevance:

100.00%

Publisher:

Abstract:

This thesis deals with context-aware services, smart environments, context management and solutions for device and service interoperability. Multi-vendor devices offer an increasing number of services and end-user applications whose value rests on the ability to exploit information from the surrounding environment by means of a growing number of embedded sensors, e.g. GPS, compass, RFID readers, cameras and so on. However, such devices are usually unable to exchange information because they lack a shared data store and common information-exchange methods. A large number of standards and domain-specific building blocks are available and heavily used in today's products. However, the use of these solutions based on ready-to-use modules is not without problems: the integration and cooperation of different kinds of modules can be daunting because of growing complexity and dependency. In this scenario it is interesting to have an infrastructure that makes the coexistence of multi-vendor devices easy while enabling low-cost development and smooth access to services. This sort of technology glue should reduce both software and hardware integration costs by removing the trouble of interoperability, and should also lead to faster and simpler design, development and deployment of cross-domain applications. This thesis is mainly focused on software architectures supporting context-aware service providers, especially on the following subjects:
- user-preference-based service adaptation
- context management
- content management
- information interoperability
- multi-vendor device interoperability
- communication and connectivity interoperability
Experimental activities were carried out in several domains, including cultural heritage and indoor and personal smart spaces, all of which are considered significant test-beds in context-aware computing.
The work evolved within European and national projects. On the European side, I carried out my research within EPOCH, the FP6 Network of Excellence on "Processing Open Cultural Heritage", and within SOFIA, an ARTEMIS JU project on embedded systems. I worked in cooperation with several international institutions, including the University of Kent, VTT (the Technical Research Centre of Finland) and Eurotech. On the national side, I contributed to a one-to-one research contract between ARCES and Telecom Italia. The first part of the thesis focuses on the problem statement and related work, addressing interoperability issues and related architecture components. The second part focuses on specific architectures and frameworks:
- MobiComp: a context management framework that I used in cultural heritage applications
- CAB: a context-, preference- and profile-based application broker which I designed within the EPOCH Network of Excellence
- M3: a Semantic Web based information-sharing infrastructure for smart spaces, designed by Nokia within the European project SOFIA
- NoTA: a service- and transport-independent connectivity framework
- OSGi: the well-known Java-based service support framework
The final section is dedicated to the middleware, tools and software agents developed during my doctorate to support context-aware services in smart environments.
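A context management framework of the kind mentioned above can be illustrated with a toy shared store plus change notification; the API below is a hypothetical sketch, not MobiComp's actual interface:

```python
from collections import defaultdict

class ContextStore:
    # A minimal shared context store with change notification --
    # a toy sketch of what a context management framework provides
    def __init__(self):
        self._items = {}
        self._subscribers = defaultdict(list)

    def put(self, key, value):
        # A sensor or application publishes a context item;
        # interested parties are notified of the change
        self._items[key] = value
        for callback in self._subscribers[key]:
            callback(key, value)

    def get(self, key, default=None):
        return self._items.get(key, default)

    def subscribe(self, key, callback):
        self._subscribers[key].append(callback)

store = ContextStore()
seen = []
store.subscribe("user.location", lambda k, v: seen.append(v))
store.put("user.location", "room-42")   # e.g. produced by an RFID reader
print(store.get("user.location"), seen)
```

The point of such a store is exactly the shared data layer the thesis identifies as missing between multi-vendor devices: producers and consumers agree only on keys and values, not on each other's implementations.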

Relevance:

100.00%

Publisher:

Abstract:

The need to effectively manage the documentation covering the entire production process, from the concept phase right through to market release, is a key issue in the creation of a successful and highly competitive product. For almost forty years the most common strategies to achieve this have followed Product Lifecycle Management (PLM) guidelines. Translated into information management systems at the end of the '90s, this methodology is now widely used by companies operating all over the world in many different sectors. PLM systems and editor programs are the two principal types of software applications used by companies for their process automation. Editor programs store the information related to the production chain in documents, while the PLM system stores and shares this information so that it can be used within the company and made available to partners. Various software tools that capture and store documents and information automatically in the PLM system have been developed in recent years. One of them is the ''DirectPLM'' application, developed by the Italian company ''Focus PLM'' and designed to ensure interoperability between many editors and the Aras Innovator PLM system. In this dissertation we present ''DirectPLM2'', a new version of the DirectPLM application, designed and developed as a prototype during an internship at Focus PLM. Its new implementation separates the abstract business logic from the implementation of the concrete commands, which was previously strongly dependent on Aras Innovator. Thanks to this new design, Focus PLM can easily develop different versions of DirectPLM2, each one targeting a specific PLM system: the company can focus its development effort on the specific set of software components that provide the specialized functions interacting with that particular PLM system.
This allows a shorter time-to-market and gives the company a significant competitive advantage.
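The separation of abstract business logic from PLM-specific commands described above is essentially an adapter design; a minimal sketch, with hypothetical class and method names that do not reflect DirectPLM2's actual code:

```python
from abc import ABC, abstractmethod

class PLMBackend(ABC):
    # Abstract command interface; each concrete subclass targets
    # one specific PLM system (the separation described above)
    @abstractmethod
    def store_document(self, name: str, content: bytes) -> str: ...

class ArasBackend(PLMBackend):
    # Hypothetical stand-in for an Aras Innovator adapter
    def store_document(self, name: str, content: bytes) -> str:
        return f"aras:{name}"   # would call the real PLM API here

class DirectPLMCore:
    # Business logic depends only on the abstract interface,
    # never on a concrete PLM system
    def __init__(self, backend: PLMBackend):
        self.backend = backend

    def publish(self, name: str, content: bytes) -> str:
        return self.backend.store_document(name, content)

core = DirectPLMCore(ArasBackend())
print(core.publish("spec.pdf", b"..."))  # → aras:spec.pdf
```

Supporting another PLM system then means writing one new `PLMBackend` subclass, leaving the core untouched, which is the source of the shorter time-to-market claimed above.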

Relevance:

100.00%

Publisher:

Abstract:

In Part 1 of this article we discussed the need for information quality and the systematic management of learning materials and learning arrangements. Digital repositories, often called Learning Object Repositories (LOR), were introduced as a promising answer to this challenge. We also derived technological and pedagogical requirements for LORs from a concretization of information quality criteria for e-learning technology. This second part presents technical solutions that particularly address the demands of open education movements, which aspire to a global reuse and sharing culture. From this viewpoint, we develop core requirements for scalable network architectures for educational content management. We then present edu-sharing, an advanced example of a network of homogeneous repositories for learning resources, and discuss related technology. We conclude with an outlook in terms of emerging developments towards open and networked system architectures in e-learning.

Relevance:

100.00%

Publisher:

Abstract:

Traditionally, ontologies describe knowledge representation in a denotational, formalized, and deductive way. In this paper, by contrast, we propose a semiotic, inductive, and approximate approach to ontology creation. We define a conceptual framework and a semantics extraction algorithm, and provide a first proof of concept by applying the algorithm to a small set of Wikipedia documents. Intended as an extension to the prevailing top-down ontologies, we introduce an inductive fuzzy grassroots ontology, which organizes itself organically from existing natural-language Web content. Using inductive and approximate reasoning to reflect the natural way in which knowledge is processed, the ontology's bottom-up build process creates emergent semantics learned from the Web. By this means, the ontology acts as a hub for computing with words described in natural language. For Web users, the structural semantics are visualized as inductive fuzzy cognitive maps, allowing an initial form of intelligence amplification. Finally, we present an implementation of our inductive fuzzy grassroots ontology. Thus, this paper contributes an algorithm for extracting fuzzy grassroots ontologies from Web data by inductive fuzzy classification.
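The inductive, approximate flavor of the approach can be illustrated with a toy fuzzy-membership computation from term co-occurrence; this is a simplified sketch of the general idea, not the paper's actual inductive fuzzy classification algorithm:

```python
from collections import Counter

def fuzzy_memberships(documents, concept):
    # Count how often each term co-occurs with the concept term,
    # then normalize the counts into [0, 1] membership degrees:
    # semantics induced bottom-up from text, not declared top-down
    co = Counter()
    for doc in documents:
        words = set(doc.lower().split())
        if concept in words:
            co.update(words - {concept})
    if not co:
        return {}
    peak = max(co.values())
    return {term: n / peak for term, n in co.items()}

docs = [
    "jaguar is a fast cat",
    "the jaguar cat hunts",
    "jaguar cars are fast",
]
m = fuzzy_memberships(docs, "jaguar")
print(round(m["cat"], 2), round(m["is"], 2))  # → 1.0 0.5
```

Graded degrees like these, rather than crisp subclass assertions, are what distinguish a fuzzy grassroots ontology from a deductive, formalized one.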