901 results for WEB systems
Abstract:
Introduction: In the Web environment, greater care is needed in the processing of descriptive and thematic information. Concern with information retrieval in computer systems predates the development of the first personal computers, and information retrieval models have been, and still are, widely used in databases specific to a field whose scope is known. Objectives: To verify how relevance is treated in the main computational models of information retrieval and, especially, how the issue is addressed in the future of the Web, the so-called Semantic Web. Methodology: Bibliographic research. Results: In the classical models studied here, the main concern is to retrieve the documents whose descriptions are closest to the search expression used by the user, which does not necessarily mean that they are what the user really needs. Semantic retrieval, in turn, relies on ontologies, a feature that broadens the user's search to a wider range of potentially relevant options. Conclusions: Relevance is a subjective judgment inherent to the user; it depends on the interaction with the system and, above all, on what the user expects to retrieve with the search. Systems based on a relevance model are not popular, because they require greater interaction and depend on the user's willingness to provide it. The Semantic Web is, so far, the most efficient initiative for information retrieval in the digital environment.
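The classical models mentioned in this abstract rank documents by how close their descriptions are to the user's search expression. Below is a minimal, illustrative sketch of that idea using TF-IDF weights and cosine similarity; the toy corpus and query are invented and are not taken from the work summarized above.

```python
import math
from collections import Counter

# Toy corpus: each "document description" is reduced to a bag of lowercase terms.
docs = {
    "d1": "semantic web ontologies improve retrieval of relevant documents",
    "d2": "classical retrieval models rank documents by term similarity",
    "d3": "user relevance judgments are subjective and context dependent",
}
tokenized = {d: text.split() for d, text in docs.items()}

# Inverse document frequency over the toy corpus.
n_docs = len(tokenized)
df = Counter(t for terms in tokenized.values() for t in set(terms))
idf = {t: math.log(n_docs / df[t]) + 1.0 for t in df}  # +1 keeps ubiquitous terms non-zero

def vectorize(terms):
    """TF-IDF weights for a list of terms (terms unseen in the corpus are ignored)."""
    tf = Counter(terms)
    return {t: tf[t] * idf[t] for t in tf if t in idf}

def cosine(u, v):
    """Cosine similarity between two sparse term -> weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = (math.sqrt(sum(w * w for w in u.values()))
            * math.sqrt(sum(w * w for w in v.values())))
    return dot / norm if norm else 0.0

doc_vecs = {d: vectorize(terms) for d, terms in tokenized.items()}
query = vectorize("relevant semantic retrieval".split())

# Rank documents by closeness to the search expression -- the system's notion of
# relevance, which may or may not match what the user actually needs.
for d, score in sorted(((d, cosine(query, v)) for d, v in doc_vecs.items()),
                       key=lambda p: p[1], reverse=True):
    print(d, round(score, 3))
```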
Abstract:
This work, entitled Websislapam: People Rating System Based on Web Technologies, allows the creation of questionnaires and the organization of the entities and people who take part in evaluations. Entities collect data from people with the help of features that reduce typing mistakes. Websislapam maintains a database and provides graphical reports that support the analysis of those evaluated. It was developed with Web technologies such as PHP, JavaScript, and CSS, under the object-oriented programming paradigm, and uses the MySQL DBMS. For the theoretical basis, research was carried out in the areas of database systems, Web technologies, and Web engineering, covering the evaluation process, Web-based systems and applications, Web engineering, and database systems. The technologies applied in the implementation of Websislapam are described, and a separate chapter presents the main features and artifacts used in its development. A case study demonstrates the practical use of the system.
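For illustration only, a rough sketch of the kind of entities such a questionnaire-based evaluation system manages; all class and field names are hypothetical and do not reflect the actual Websislapam schema, which the abstract does not detail.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Question:
    text: str
    options: List[str]            # closed answer options help reduce typing mistakes

@dataclass
class Questionnaire:
    title: str
    questions: List[Question] = field(default_factory=list)

@dataclass
class Person:
    name: str

@dataclass
class Evaluation:
    """One person's answers to one questionnaire, collected by an entity."""
    entity: str                                   # organization applying the questionnaire
    person: Person
    questionnaire: Questionnaire
    answers: Dict[int, str] = field(default_factory=dict)   # question index -> chosen option

    def answer(self, index: int, option: str) -> None:
        valid = self.questionnaire.questions[index].options
        if option not in valid:                   # validate against the closed options
            raise ValueError(f"invalid option {option!r}; expected one of {valid}")
        self.answers[index] = option
```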
Abstract:
Over the last decades, changes have occurred in communication within and between enterprises, made easier by technologies such as e-commerce, the Internet, ERP systems, and remote meetings, and there has been rapid progress in network technology, which has changed the way business is done. A standardized way to offer services over the Internet is through web services. Web services are a kind of remote procedure call and are generally used to integrate systems, independently of language, on both the client and the server side. It is common for several web services to run in sequence to carry out a business process; this type of process is called a workflow. Web services are thus the primary components of workflows. A tool that provides a way of visualizing the behavior of a workflow can assist the administrator and is therefore needed. The present work presents the development of a tool that allows the administrator to visually classify component services and evaluate their importance in the final performance of a workflow. As a proof of concept, several virtual servers and computers were used, with each computer receiving a set of web services. A proxy was added between each workflow call, collecting relevant information and storing it in a database for later analysis. The analysis was based on Quality of Service parameters.
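A rough sketch of the measurement idea described above: a proxy-style wrapper that times each component web service call in a workflow and stores the result for later Quality of Service analysis. The service URLs and database schema are assumptions, not those of the original tool.

```python
import sqlite3
import time
from urllib import request

# Store one row per service invocation for later QoS analysis.
conn = sqlite3.connect("qos_measurements.db")
conn.execute("""CREATE TABLE IF NOT EXISTS calls (
                    workflow TEXT, service TEXT, status INTEGER,
                    elapsed_ms REAL, ts REAL)""")

def call_service(workflow: str, url: str, payload: bytes) -> bytes:
    """Invoke one component service and log its response time and status."""
    start = time.perf_counter()
    req = request.Request(url, data=payload, headers={"Content-Type": "text/xml"})
    with request.urlopen(req, timeout=30) as resp:
        body = resp.read()
        status = resp.status
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    conn.execute("INSERT INTO calls VALUES (?, ?, ?, ?, ?)",
                 (workflow, url, status, elapsed_ms, time.time()))
    conn.commit()
    return body

# A workflow is then just the component services run in sequence, e.g.:
# for url in ["http://srv1.example/ws", "http://srv2.example/ws"]:
#     payload = call_service("order-processing", url, payload)
```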
Abstract:
Server responsiveness and scalability are more important than ever in today’s client/server dominated network environments. Recently, researchers have begun to consider cluster-based computers using commodity hardware as an alternative to expensive specialized hardware for building scalable Web servers. In this paper, we present performance results comparing two cluster-based Web servers based on different server infrastructures: MAC-based dispatching (LSMAC) and IP-based dispatching (LSNAT). Both cluster-based server systems were implemented as application-space programs running on commodity hardware. We point out the advantages and disadvantages of both systems. We also identify when servers should be clustered and when clustering will not improve performance.
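As a purely illustrative aside, the sketch below shows an application-space, round-robin connection dispatcher running in front of several back-end Web servers. It is neither LSMAC (MAC-based dispatching) nor LSNAT (IP-based dispatching) from the paper, and the back-end addresses are made up.

```python
import itertools
import socket
import threading

# Hypothetical back-end Web servers behind the dispatcher.
BACKENDS = [("10.0.0.2", 8080), ("10.0.0.3", 8080)]

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from one socket to the other until EOF."""
    while data := src.recv(4096):
        dst.sendall(data)
    dst.close()

def serve(host: str = "0.0.0.0", port: int = 8000) -> None:
    """Accept client connections and relay each one to a back-end, round-robin."""
    rr = itertools.cycle(BACKENDS)
    with socket.create_server((host, port)) as listener:
        while True:
            client, _ = listener.accept()
            backend = socket.create_connection(next(rr))
            threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
            threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

# serve()  # would start the toy dispatcher on port 8000
```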
Abstract:
Patterns of species interactions affect the dynamics of food webs. An important component of species interactions that is rarely considered with respect to food webs is the strength of interactions, which may affect both structure and dynamics. In natural systems, these strengths are variable and can be quantified as probability distributions. We examined how variation in strengths of interactions can be described hierarchically, and how this variation impacts the structure of species interactions in predator-prey networks, both of which are important components of ecological food webs. The stable isotope ratios of predator and prey species may be particularly useful for quantifying this variability, and we show how these data can be used to build probabilistic predator-prey networks. Moreover, the distribution of variation in strengths among interactions can be estimated from a limited number of observations. This distribution informs network structure, especially the key role of dietary specialization, which may be useful for predicting structural properties in systems that are difficult to observe. Finally, using three mammalian predator-prey networks (two African and one Canadian) quantified from stable isotope data, we show that exclusion of link-strength variability results in biased estimates of nestedness and modularity within food webs, whereas the inclusion of body size constraints only marginally increases the predictive accuracy of the isotope-based network. We find that modularity is the consequence of strong link-strengths in both African systems, while nestedness is not significantly present in any of the three predator-prey networks.
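A small, hypothetical sketch of the probabilistic-network idea: predator diets are treated as probability distributions over prey (here drawn from Dirichlet distributions standing in for isotope-based estimates), and repeated draws show how link-strength variation propagates into a simple structural summary. Species names and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
prey = ["hare", "vole", "grouse", "deer"]
# Low Dirichlet concentrations -> specialized diets; higher -> more generalized.
concentration = {"lynx": [5, 1, 1, 1], "fox": [2, 2, 2, 1], "marten": [1, 4, 1, 1]}

def sampled_networks(n_draws: int = 1000, cutoff: float = 0.10) -> np.ndarray:
    """Draw diet proportions per predator and keep links above a strength cutoff."""
    nets = np.zeros((n_draws, len(concentration), len(prey)), dtype=bool)
    for i in range(n_draws):
        for j, alpha in enumerate(concentration.values()):
            diet = rng.dirichlet(alpha)       # one realization of link strengths
            nets[i, j] = diet >= cutoff       # binary links above the cutoff
    return nets

nets = sampled_networks()
connectance = nets.mean(axis=(1, 2))          # realized links / possible links, per draw
print("mean connectance:", connectance.mean().round(3),
      "+/-", connectance.std().round(3))
# A single binary network built from the mean diet would hide this variation entirely,
# which is the kind of bias the abstract attributes to excluding link-strength variability.
```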
Abstract:
Ubiquitous computing promises seamless access to a wide range of applications and Internet-based services from anywhere, at any time, and using any device. In this scenario, new challenges for the practice of software development arise: applications and services must keep a coherent behavior and a proper appearance, and must adapt to a wide variety of contextual usage requirements and hardware constraints. In particular, due to its interactive nature, the interface content of Web applications must adapt to a large diversity of devices and contexts. In order to overcome these obstacles, this work introduces an innovative methodology for content adaptation of Web 2.0 interfaces. The basis of our work is to combine static adaptation - the implementation of static Web interfaces - with dynamic adaptation - the alteration, at execution time, of static interfaces so as to adapt them to different contexts of use. In this hybrid fashion, our methodology benefits from the advantages of both adaptation strategies, static and dynamic. Along this line, we designed and implemented UbiCon, a framework on which we tested our concepts through a case study and a development experiment. Our results show that the hybrid methodology over UbiCon leads to broader and more accessible interfaces, and to faster and less costly software development. We believe that the UbiCon hybrid methodology can foster more efficient and accurate interface engineering in industry and academia.
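A minimal sketch of the hybrid adaptation idea, assuming invented template names, context keys, and breakpoints; it does not reproduce UbiCon's actual API.

```python
# Static adaptation: interface variants authored ahead of time for classes of devices.
STATIC_VARIANTS = {
    "desktop": {"layout": "three-column", "images": "full"},
    "mobile":  {"layout": "single-column", "images": "thumbnail"},
}

def adapt_interface(context: dict) -> dict:
    """Pick a statically authored variant, then adjust it dynamically at run time."""
    variant = dict(STATIC_VARIANTS["mobile" if context.get("screen_width", 1024) < 600
                                   else "desktop"])
    # Dynamic adaptation: tweak the chosen variant for the current context of use.
    if context.get("bandwidth_kbps", 10_000) < 256:
        variant["images"] = "none"             # drop heavy media on slow links
    if context.get("input") == "touch":
        variant["controls"] = "large-targets"  # bigger tap targets on touch devices
    return variant

print(adapt_interface({"screen_width": 480, "input": "touch", "bandwidth_kbps": 128}))
```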
Abstract:
The purpose of this poster is to present the experience of managing a portal of open access scientific journals at a public university, an initiative that brings together scientific editors, professors, and librarians. Initiatives of this kind are of strategic importance for the consolidation and strengthening of the Open Access Movement in developing countries, since they offer the opportunity to fully live an open access culture at every stage of knowledge certification. The experience of the Universidade de São Paulo is relevant because it promotes the integration of the various actors involved in the production of scientific journals. The Portal de Revistas da USP (http://www.revistas.usp.br), launched in 2008, is an initiative of the Departamento Técnico do Sistema Integrado de Bibliotecas de USP under the Programa de Apoio às Publicações Científicas Periódicas da USP. Based on the principles of the Open Access Movement, it aims to promote the visibility and accessibility of the scientific journals officially published by USP. Journals are considered to play a double role, as both object and vehicle of communication. Accordingly, investments in qualifying the journals include computing resources to guarantee interoperability among different information systems (databases, catalogs, repositories, etc.), online editorial management systems (to speed up manuscript handling and reduce publication delays), the training of teams (technicians and librarians), the assignment of DOI (digital object identifier) names, and plagiarism-detection software. In 2012 the portal's management software was changed to Open Journal Systems (OJS), which brought together for the first time, under a single web domain, the scientific journals edited at USP. Of the more than 100 journals published in the portal, 29 are in DOAJ, 27 in SciELO Brasil, 20 in Scopus, and 11 in JCR, among others. The portal hosts journals with different profiles, some more institutional and others of international quality. Some older, consolidated journals use other manuscript management systems, and for them the portal acts as a mirror that guarantees their presence at the university. With so many and such different publications, OJS has proved to be a promising tool for managing the portal: it emerged as a system for managing individual journals and, with advances in technology and above all through community use, it has become a portal management tool.
For better management of the portal, some recommendations based on this experience may improve the management of the journal collection as a whole: the journal creation screen should include all the data from the official ISSN registration, without the possibility of editing by journal managers; the general list of journals could indicate which journals are already online and which are still being configured; current and discontinued journals should be clearly identified, with a clearer link between journals that changed their title and remain current; a table for selecting indexing sources should be provided; already-published content should be editable only by the system administrator; DOI management and digital preservation in LOCKSS should be centralized; and a dashboard should present statistical data (total journals, issues, COUNTER statistics, authors, etc.). In addition, strengthening and creating OJS development centers in Latin American universities could boost full use of the system by editors in the region.
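As an aside on the interoperability point above, OJS installations expose an OAI-PMH interface that external catalogs and databases can harvest. The sketch below assumes a hypothetical journal path on the portal; only the OAI-PMH protocol elements (verbs, namespaces, Dublin Core fields) are standard.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical journal path on the portal; the real endpoint layout is not given here.
ENDPOINT = "https://www.revistas.usp.br/example-journal/oai"
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def list_titles(endpoint: str):
    """Harvest one page of Dublin Core records via OAI-PMH and yield their titles."""
    url = f"{endpoint}?verb=ListRecords&metadataPrefix=oai_dc"
    with urllib.request.urlopen(url, timeout=30) as resp:
        tree = ET.fromstring(resp.read())
    for record in tree.iter(f"{OAI}record"):
        title = record.find(f".//{DC}title")
        if title is not None:
            yield title.text

# for t in list_titles(ENDPOINT):
#     print(t)
```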
Abstract:
Agent Communication Languages (ACLs) have been developed to provide a way for agents to communicate with each other, supporting cooperation in Multi-Agent Systems. In the past few years many ACLs have been proposed for Multi-Agent Systems, such as KQML and FIPA-ACL. The goal of these languages is to support high-level, human-like communication among agents, exploiting Knowledge Level features rather than symbol-level ones. Adopting these ACLs, and mainly the FIPA-ACL specifications, many agent platforms and prototypes have been developed. Despite these efforts, an important issue in the research on ACLs is still open: how these languages should deal, at the Knowledge Level, with possible failures of agents. Indeed, the notion of Knowledge Level cannot be straightforwardly extended to a distributed framework such as MASs, because problems concerning communication and concurrency may arise when several Knowledge Level agents interact (for example, deadlock or starvation). The main contribution of this thesis is the design and implementation of NOWHERE, a platform to support Knowledge Level agents on the Web. NOWHERE exploits an advanced Agent Communication Language, FT-ACL, which provides high-level fault-tolerant communication primitives and satisfies a set of well-defined Knowledge Level programming requirements. NOWHERE is well integrated with current technologies, for example providing full integration with Web services. Since it supports different middleware for sending messages, it can be adapted to various scenarios. In this thesis we present the design and implementation of the architecture, together with a discussion of the most interesting details and a comparison with other emerging agent platforms. We also present several case studies in which we discuss the benefits of programming agents using the NOWHERE architecture, comparing the results with other solutions. Finally, the complete source code of the basic examples can be found in the appendix.
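For illustration, a toy version of a fault-tolerant, high-level communication primitive in the spirit described above: an ask operation that either returns the answer or invokes a failure continuation when the responder does not reply in time. None of these names correspond to the actual FT-ACL primitives or the NOWHERE API.

```python
import queue
import threading

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.inbox: queue.Queue = queue.Queue()

    def ask(self, other: "Agent", question: str, on_failure, timeout: float = 1.0):
        """Send a query and wait for the reply; call on_failure if none arrives in time."""
        reply_box: queue.Queue = queue.Queue()
        other.inbox.put((question, reply_box))
        try:
            return reply_box.get(timeout=timeout)
        except queue.Empty:
            return on_failure(other.name)

def responder(agent: Agent, crashed: bool) -> None:
    """Answer incoming queries unless the agent has 'crashed'."""
    while True:
        question, reply_box = agent.inbox.get()
        if not crashed:
            reply_box.put(f"{agent.name} answers: 42 (to {question!r})")

alice, bob, carol = Agent("alice"), Agent("bob"), Agent("carol")
threading.Thread(target=responder, args=(bob, False), daemon=True).start()
threading.Thread(target=responder, args=(carol, True), daemon=True).start()

print(alice.ask(bob, "meaning of life?", on_failure=lambda who: f"{who} failed"))
print(alice.ask(carol, "meaning of life?", on_failure=lambda who: f"{who} failed"))
```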