9 results for Enterprise Systems, Curricula, Packaged Software
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. They also introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages (those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content). Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages so that they are processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing, ranging over message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
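To make the differential-encoding idea concrete, the following Python sketch (a hypothetical illustration, not drawn from any of the surveyed toolkits) encodes a SOAP message as a small set of edit operations against a cached template message, so that the structure shared by similar messages is transmitted and processed only once; the names encode_diff and decode_diff are illustrative.

    import difflib

    # Two SOAP messages from the same implementation: identical structure,
    # differing only in payload values.
    TEMPLATE = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body><getQuote><symbol>IBM</symbol></getQuote></soap:Body>
    </soap:Envelope>"""

    MESSAGE = TEMPLATE.replace("IBM", "USP")

    def encode_diff(template, message):
        """Return only the edit operations needed to rebuild `message`
        from `template`; the common parts are never re-encoded."""
        matcher = difflib.SequenceMatcher(a=template, b=message)
        return [(op, i1, i2, message[j1:j2])
                for op, i1, i2, j1, j2 in matcher.get_opcodes()
                if op != "equal"]

    def decode_diff(template, ops):
        """Rebuild the full message by patching the cached template."""
        out, pos = [], 0
        for op, i1, i2, payload in ops:
            out.append(template[pos:i1])  # copy the shared part verbatim
            if op in ("replace", "insert"):
                out.append(payload)
            pos = i2                      # skip replaced/deleted template text
        out.append(template[pos:])
        return "".join(out)

    ops = encode_diff(TEMPLATE, MESSAGE)
    assert decode_diff(TEMPLATE, ops) == MESSAGE
    print(ops)  # one small 'replace' op instead of the whole envelope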
Abstract:
XML similarity evaluation has become a central issue in the database and information communities, its applications ranging over document clustering, version control, data integration and ranked retrieval. Various algorithms for comparing hierarchically structured data, XML documents in particular, have been proposed in the literature. Most of them make use of techniques for finding the edit distance between tree structures, XML documents being commonly modeled as Ordered Labeled Trees. Yet, a thorough investigation of current approaches led us to identify several similarity aspects, i.e., sub-tree related structural and semantic similarities, which are not sufficiently addressed while comparing XML documents. In this paper, we provide an integrated and fine-grained comparison framework to deal with both structural and semantic similarities in XML documents (detecting the occurrences and repetitions of structurally and semantically similar sub-trees), and to allow the end-user to adjust the comparison process according to her requirements. Our framework consists of four main modules for (i) discovering the structural commonalities between sub-trees, (ii) identifying sub-tree semantic resemblances, (iii) computing tree-based edit operations costs, and (iv) computing tree edit distance. Experimental results demonstrate higher comparison accuracy with respect to alternative methods, while timing experiments reflect the impact of semantic similarity on overall system performance.
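As an illustration of the tree-edit-distance machinery such frameworks build on, the following Python sketch implements a simplified top-down (Selkow-style) edit distance over XML documents modeled as Ordered Labeled Trees. It is a didactic baseline only; it covers the structural side and omits the semantic (sub-tree resemblance) component the paper adds.

    import xml.etree.ElementTree as ET

    def tree_size(node):
        return 1 + sum(tree_size(c) for c in node)

    def tree_dist(a, b):
        """Top-down edit distance between two ordered labeled trees:
        relabel cost at the root plus an alignment of the child lists,
        where matching child i to child j costs tree_dist(i, j)."""
        relabel = 0 if a.tag == b.tag else 1
        ka, kb = list(a), list(b)
        d = [[0] * (len(kb) + 1) for _ in range(len(ka) + 1)]
        for i in range(1, len(ka) + 1):
            d[i][0] = d[i - 1][0] + tree_size(ka[i - 1])   # delete subtree
        for j in range(1, len(kb) + 1):
            d[0][j] = d[0][j - 1] + tree_size(kb[j - 1])   # insert subtree
        for i in range(1, len(ka) + 1):
            for j in range(1, len(kb) + 1):
                d[i][j] = min(d[i - 1][j] + tree_size(ka[i - 1]),
                              d[i][j - 1] + tree_size(kb[j - 1]),
                              d[i - 1][j - 1] + tree_dist(ka[i - 1], kb[j - 1]))
        return relabel + d[len(ka)][len(kb)]

    doc1 = ET.fromstring("<paper><title/><authors><a/><a/></authors></paper>")
    doc2 = ET.fromstring("<paper><title/><authors><a/></authors><year/></paper>")
    print(tree_dist(doc1, doc2))  # 2: delete one <a/>, insert <year/>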
Abstract:
In this paper we discuss the problem of how to discriminate moments of interest in videos or live broadcast shows. The primary contribution is a system that allows users to personalize their programs with previously created media stickers: pieces of content that may be temporarily attached to the original video. We present the system's architecture and implementation, which offer users operators to annotate videos transparently while watching them. We offered a soccer fan the opportunity to add stickers to the video while watching a live match: the user reported both enjoying and being comfortable using the stickers during the match, a relevant result even though the experience was not fully representative.
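A minimal sketch of the kind of data structure such a system might use, assuming each sticker is a piece of content attached to a playback interval; all class and field names here are illustrative, not the paper's actual model.

    from dataclasses import dataclass, field

    @dataclass
    class Sticker:
        media_uri: str    # the overlay content (image, text, audio)
        start_s: float    # attachment interval, in seconds of playback time
        end_s: float
        x: float = 0.5    # normalized screen position of the overlay
        y: float = 0.5

    @dataclass
    class AnnotatedVideo:
        video_uri: str
        stickers: list = field(default_factory=list)

        def active_at(self, t):
            """Stickers to overlay at playback time t."""
            return [s for s in self.stickers if s.start_s <= t < s.end_s]

    match = AnnotatedVideo("rtsp://example.org/live/soccer")
    match.stickers.append(Sticker("stickers/goal.png", 130.0, 140.0))
    print(match.active_at(135.0))  # the goal sticker is rendered over the video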
Abstract:
Telecommunications have evolved constantly over recent decades. Among the technological innovations, the adoption of digital technologies stands out. Digital communication systems have proven their efficiency and introduced a new element into the signal transmission and reception chain: the digital processor. This device gives new radio equipment the flexibility of a programmable system. Nowadays, the behavior of a communication system can be modified simply by changing its software. This gave rise to a new radio model called Software Defined Radio (SDR). In this model, the task of defining radio behavior moves to software, leaving to hardware only the implementation of the RF front end. The radio is thus no longer static, defined by its circuits; it becomes a dynamic element whose operating characteristics, such as bandwidth, modulation, and coding rate, can be changed, even at runtime, according to the software configuration. This article presents the use of GNU Radio, an open-source solution for SDR applications, as a tool for developing configurable digital radios.
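A minimal sketch of that idea, assuming GNU Radio's Python API (version 3.8 or later): the whole signal chain is defined in software blocks, so the radio's behavior can be changed at runtime by a method call rather than a circuit change. A null sink stands in for the RF front end here.

    from gnuradio import gr, analog, blocks

    class SoftTone(gr.top_block):
        """A trivial software-defined signal chain."""
        def __init__(self):
            gr.top_block.__init__(self, "soft_tone")
            samp_rate = 32000
            # All 'radio behavior' lives in these software blocks:
            self.src = analog.sig_source_f(samp_rate, analog.GR_SIN_WAVE, 440, 0.5)
            self.throttle = blocks.throttle(gr.sizeof_float, samp_rate)
            self.sink = blocks.null_sink(gr.sizeof_float)  # stand-in RF front end
            self.connect(self.src, self.throttle, self.sink)

        def retune(self, freq_hz):
            self.src.set_frequency(freq_hz)  # reconfigure while running

    tb = SoftTone()
    tb.start()
    tb.retune(880)  # the radio's operating characteristics change at runtime
    tb.stop()
    tb.wait()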
Abstract:
A power transformer needs continuous monitoring and fast protection, as it is a very expensive piece of equipment and an essential element of an electrical power system. The most common protection technique is percentage differential logic, which discriminates between an internal fault and other operating conditions. Unfortunately, some operating conditions of power transformers can mislead the conventional protection, negatively affecting power system stability. This study proposes a new algorithm to improve protection performance by using fuzzy logic, artificial neural networks, and genetic algorithms. An electrical power system was modelled using the Alternative Transients Program software to obtain the operating conditions and fault situations needed to test the developed algorithm, as well as a commercial differential relay. Results show improved reliability, as well as a fast response of the proposed technique when compared with conventional ones.
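For reference, the conventional percentage differential criterion that such techniques improve upon compares an operating current |I1 + I2| against a restraint current (|I1| + |I2|)/2 and trips when the former exceeds a slope-weighted fraction of the latter plus a pickup threshold. A minimal Python sketch, with illustrative per-unit settings:

    def percentage_differential_trip(i1, i2, slope=0.3, pickup=0.2):
        """Conventional percentage differential logic. i1 and i2 are the
        winding current phasors (complex, per unit, already compensated
        for ratio and phase shift); slope and pickup are example settings."""
        i_op = abs(i1 + i2)               # differential (operating) current
        i_res = (abs(i1) + abs(i2)) / 2   # restraint current
        return i_op > pickup + slope * i_res

    # Normal load: currents equal and opposite -> no trip
    print(percentage_differential_trip(1.0 + 0j, -1.0 + 0j))  # False
    # Internal fault: both currents flow into the transformer -> trip
    print(percentage_differential_trip(1.0 + 0j, 0.8 + 0j))   # True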
Abstract:
Background Recent medical and biological technology advances have stimulated the development of new testing systems that provide huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. Results This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent and validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control-flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: 1) process scalability, achieved through the relational database implementation, and 2) process correctness, achieved through process algebra. Furthermore, the software allows end users to define genetic tests without requiring any knowledge of business process notation or process algebra. Conclusions This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have demonstrated the feasibility and shown the usability benefits of a rigorous approach that is able to specify, validate, and perform genetic testing through easy-to-use end-user interfaces.
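To illustrate the ACP-style control-flow specifications the approach relies on, here is a tiny Python interpreter for process terms built from atomic actions, sequential composition, and alternative composition; the classes are didactic stand-ins, not the CEGH implementation.

    from dataclasses import dataclass

    @dataclass
    class Atom:                     # an atomic laboratory action
        name: str
        def traces(self):
            yield [self.name]

    @dataclass
    class Seq:                      # ACP sequential composition: p . q
        p: object
        q: object
        def traces(self):
            for t1 in self.p.traces():
                for t2 in self.q.traces():
                    yield t1 + t2

    @dataclass
    class Alt:                      # ACP alternative composition: p + q
        p: object
        q: object
        def traces(self):
            yield from self.p.traces()
            yield from self.q.traces()

    # A genetic test: extract DNA, then either sequence-and-analyze or screen.
    test = Seq(Atom("extract_dna"),
               Alt(Seq(Atom("sequence"), Atom("analyze")), Atom("pcr_screen")))
    for trace in test.traces():
        print(" -> ".join(trace))
    # extract_dna -> sequence -> analyze
    # extract_dna -> pcr_screen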
Abstract:
The purpose of this poster is to present the experience of managing a portal of open access scientific journals at a public university, an effort that brings together scientific editors, professors, and librarians. Initiatives of this nature are strategically important for consolidating and strengthening the Open Access Movement in developing countries, since they offer the opportunity to live an open access culture fully at every stage of knowledge certification. The experience of the Universidade de São Paulo is relevant because it promotes the integration of the various actors involved in producing scientific journals. The Portal de Revistas da USP (http://www.revistas.usp.br), launched in 2008, is an initiative of the Departamento Técnico do Sistema Integrado de Bibliotecas de USP under the Programa de Apoio às Publicações Científicas Periódicas da USP. This initiative, grounded in the principles of the Open Access Movement, aims to promote the visibility and accessibility of the scientific journals officially published by USP. Journals are understood to play a double role, as both object and vehicle of communication. Accordingly, investments in journal qualification include computing resources to guarantee interoperability among different information systems (databases, catalogs, repositories, etc.), online editorial management systems (to speed up manuscript handling and shorten publication time), training for teams (technicians and librarians), assignment of DOI (Digital Object Identifier) names, and plagiarism detection software. In 2012 the Portal's management software was migrated to Open Journal Systems (OJS), bringing together for the first time, under a single web domain, the scientific journals edited at USP. Of the more than 100 journals published on the Portal, 29 are indexed in DOAJ, 27 in SciELO Brasil, 20 in Scopus, and 11 in JCR, among others. The Portal hosts journals with different profiles, some more institutional and others of international quality. Some older, consolidated journals use other manuscript management systems, and the Portal acts as a mirror that guarantees their presence at the University. With so many and such diverse publications, OJS has proved to be a promising tool for managing the Portal. OJS emerged as a system for managing individual journals and, with advances in technology and above all with community use, it has become a portal management tool.
Based on this experience, some recommendations are presented that could improve the management of the set of journals in the Portal: the journal creation screen should include all the data from the official ISSN record, without the possibility of editing by journal managers; the general list of journals could indicate which journals are online and which are still being configured; current and discontinued journals should be clearly identified, with a clearer link between journals that changed their title and remain current; a table for selecting indexing sources; editing of already published content restricted to the system administrator; centralized management of DOI assignment and of digital preservation in LOCKSS; and a dashboard with statistical data (total journals, issues, COUNTER statistics, authors, etc.). In addition, strengthening and creating OJS development centers in Latin American universities could boost full use of the system by editors across the region.
Abstract:
Background The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data are available, biologists are faced with the task of extracting (new) knowledge about the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single one. The integration of heterogeneous tools and data sources to create an integrated analysis environment is a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only the access to heterogeneous data sources but also the definition of transformation rules on exchanged data. Results We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools, and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. We have then defined a number of activities and associated guidelines that prescribe how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data. Conclusions The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and nontrivial processing requirements, thus ensuring accurate data exchange and information interpretation from exchanged data.
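A minimal sketch of what such a software connector could look like in Python: it reads one source's native record and applies a transformation rule mapping vendor-specific fields onto terms of a shared reference ontology; every field and term name below is illustrative, not taken from the paper's reference ontology.

    # Semantic contract: the terms every integrated tool agrees on.
    REFERENCE_ONTOLOGY = {"GeneID", "ExpressionLevel", "Sample"}

    def microarray_connector(raw_row):
        """Transformation rule: vendor-specific column names are mapped to
        ontology terms, with a unit conversion (log2 ratio -> fold change)."""
        mapped = {
            "GeneID": raw_row["probe_set_id"],
            "ExpressionLevel": 2 ** raw_row["log2_ratio"],
            "Sample": raw_row["chip_barcode"],
        }
        assert set(mapped) <= REFERENCE_ONTOLOGY  # enforce the shared vocabulary
        return mapped

    row = {"probe_set_id": "AFFX-GAPDH", "log2_ratio": 1.0, "chip_barcode": "S42"}
    print(microarray_connector(row))
    # {'GeneID': 'AFFX-GAPDH', 'ExpressionLevel': 2.0, 'Sample': 'S42'}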
Abstract:
The University of São Paulo has been experiencing growth in content in electronic and digital formats, distributed by different suppliers and hosted remotely or in clouds, and faces increasing difficulties in facilitating its users' access to this digital collection while coexisting with the traditional world of physical collections. A possible solution was identified in the new generation of systems called Web Scale Discovery, which allow better management, data integration, and search agility. In order to identify whether and how such a system would meet USP's demands and expectations and, if so, what the analysis criteria for such a tool should be, an analytical, essentially documental study was structured, based on a literature review and on data available from official websites and from libraries already using this kind of resource. The conceptual base of the study was defined after identifying software assessment methods already available, generating a standard with 40 analysis criteria, ranging from details of the single access interface to information content, web 2.0 characteristics, intuitive interface, and faceted navigation, among others. The details of the studies conducted on four of the major systems currently available in this software category are presented, providing support for the decision-making of other libraries interested in such systems.
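As a hypothetical illustration of how such a criteria standard could support decision-making, the Python sketch below scores a discovery system as a weighted sum over a handful of criteria; the criteria, weights, and ratings are placeholders, not the study's actual data.

    CRITERIA_WEIGHTS = {               # a few of the 40 analysis criteria
        "single_search_interface": 3,
        "web2_features": 2,
        "faceted_navigation": 3,
        "intuitive_interface": 2,
    }

    def score_system(ratings):
        """Weighted sum of 0-5 ratings over the criteria standard."""
        return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

    system_a = {"single_search_interface": 5, "web2_features": 3,
                "faceted_navigation": 4, "intuitive_interface": 4}
    print(score_system(system_a))      # 41 out of a possible 50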