884 results for web content
Abstract:
In this paper we propose a new method of data handling for web servers, called Network Aware Buffering and Caching (NABC). NABC reduces data copies in the web server's data-sending path by doing three things: (1) laying out data in main memory so that protocol processing can be done without data copies, (2) keeping a unified cache of data in the kernel and ensuring safe access to it by the various processes and the kernel, and (3) passing only the necessary metadata between processes so that the time spent handling bulk data during IPC is reduced. We realize NABC by implementing a set of system calls and a user library. The end product of the implementation is a set of APIs specifically designed for use by web servers. We port an in-house web server called SWEET to the NABC APIs and evaluate performance using a range of workloads, both simulated and real. The results show a gain of 12% to 21% in throughput for static file serving and a 1.6x to 4x gain in throughput for lightweight dynamic content serving for a server using the NABC APIs over one using the UNIX APIs.
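The metadata-only IPC idea in point (3) can be made concrete with a small sketch. This is not the authors' C-level implementation: it only illustrates, using Python's multiprocessing.shared_memory as a stand-in for NABC's unified cache, how one process can place a response body in shared storage and hand another process nothing but a (name, offset, length) tuple; the function names are hypothetical.

```python
# Conceptual sketch only: bulk data stays in a shared cache, IPC carries metadata.
from multiprocessing import Pipe, Process, shared_memory

def cache_writer(conn):
    """Place a response body in the shared cache and send only metadata."""
    body = b"<html>hello</html>"
    shm = shared_memory.SharedMemory(create=True, size=len(body))
    shm.buf[:len(body)] = body
    conn.send((shm.name, 0, len(body)))   # a few bytes over IPC, not the body
    conn.close()
    shm.close()

def serve_request(conn):
    """Resolve the metadata against the shared cache and 'send' the bytes."""
    name, offset, length = conn.recv()
    shm = shared_memory.SharedMemory(name=name)
    payload = bytes(shm.buf[offset:offset + length])
    print(f"would write {length} bytes to the socket: {payload!r}")
    shm.close()
    shm.unlink()                          # drop the cache entry once consumed

if __name__ == "__main__":
    writer_end, reader_end = Pipe()
    w = Process(target=cache_writer, args=(writer_end,))
    r = Process(target=serve_request, args=(reader_end,))
    w.start(); r.start()
    w.join(); r.join()
```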
Abstract:
A graphics package has been developed to display the main-chain torsion angles phi and psi (the Ramachandran angles) in a protein of known structure. In addition, the package calculates the Ramachandran angles at the central residue of a stretch of three amino acids with specified flanking residue types. The package displays the Ramachandran angles along with a detailed analysis output. The software works with all the protein structures available in the Protein Data Bank.
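The abstract gives no formulas, but the computation underlying phi and psi is the standard torsion angle over four consecutive backbone atoms (C of the previous residue, N, CA, C for phi; N, CA, C, N of the next residue for psi). A minimal sketch with made-up coordinates, not taken from the package itself:

```python
# Torsion (dihedral) angle from four points, as used for phi/psi angles.
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Torsion angle in degrees defined by four points (IUPAC sign convention)."""
    b0 = -(p1 - p0)
    b1 = p2 - p1
    b2 = p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    # Components of b0 and b2 perpendicular to the central bond b1.
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    x = np.dot(v, w)
    y = np.dot(np.cross(b1, v), w)
    return np.degrees(np.arctan2(y, x))

# Made-up coordinates standing in for C(i-1), N(i), CA(i), C(i) of a residue.
atoms = [np.array(a, dtype=float) for a in
         [(1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.5, 1.0, 1.0)]]
print(f"torsion = {dihedral(*atoms):.1f} degrees")
```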
Abstract:
Several replacement policies for web caches have been proposed and studied extensively in the literature. Different replacement policies perform better in terms of (i) the number of objects found in the cache (cache hits), (ii) the network traffic avoided by fetching the referenced object from the cache, or (iii) the savings in response time. In this paper, we propose a simple and efficient replacement policy (hereafter known as SE) which improves all three performance measures. Trace-driven simulations were done to evaluate the performance of SE. We compare SE with two widely used and efficient replacement policies, namely the Least Recently Used (LRU) and Least Unified Value (LUV) algorithms. Our results show that SE performs at least as well as, if not better than, both of these replacement policies. Unlike various other replacement policies proposed in the literature, our SE policy does not require parameter tuning or a priori trace analysis, and it has an efficient and simple implementation that can be incorporated into any existing proxy server or web server with ease.
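The abstract names SE but does not describe its mechanism, so no attempt is made to reproduce it here. The sketch below only shows the shape of a trace-driven harness of the kind used for such evaluations, with LRU (one of the baselines) and two of the three metrics mentioned, hit ratio and byte hit ratio; the tiny trace is illustrative only.

```python
# Trace-driven simulation of an LRU web cache with a byte-size capacity.
from collections import OrderedDict

def simulate_lru(trace, capacity_bytes):
    """Replay (url, size) requests through an LRU cache; return hit ratios."""
    cache = OrderedDict()                    # url -> size, most recent last
    used = hits = byte_hits = total = total_bytes = 0
    for url, size in trace:
        total += 1
        total_bytes += size
        if url in cache:
            hits += 1
            byte_hits += size
            cache.move_to_end(url)           # mark as most recently used
            continue
        while cache and used + size > capacity_bytes:
            _, evicted = cache.popitem(last=False)   # evict least recently used
            used -= evicted
        if size <= capacity_bytes:
            cache[url] = size
            used += size
    return hits / total, byte_hits / total_bytes

trace = [("/a", 4000), ("/b", 9000), ("/a", 4000), ("/c", 7000), ("/a", 4000)]
hit_ratio, byte_hit_ratio = simulate_lru(trace, capacity_bytes=12000)
print(f"hit ratio {hit_ratio:.2f}, byte hit ratio {byte_hit_ratio:.2f}")
```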
Abstract:
Many web sites incorporate dynamic web pages to deliver customized content to their users. However, dynamic pages increase user response times because of their construction overheads. In this paper, we consider mechanisms for reducing these overheads by utilizing the excess capacity with which web servers are typically provisioned. Specifically, we present a caching technique that integrates fragment caching with anticipatory page pre-generation in order to deliver dynamic pages faster under normal operating conditions. A feedback mechanism is used to tune the page pre-generation process to match the current system load. The experimental results from a detailed simulation study of our technique indicate that, given a fixed cache budget, page construction speedups of more than fifty percent can be consistently achieved compared to a pure fragment caching approach.
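As a rough illustration of the idea only (not the paper's system), the sketch below keeps fragments in a cache and refreshes the most popular, stale ones while a measured load value stays under a threshold; render_fragment(), the threshold, and the refresh budget are all hypothetical.

```python
# Sketch: fragment caching plus load-aware anticipatory pre-generation.
import time
from collections import Counter

fragment_cache = {}            # fragment id -> (html, generated_at)
popularity = Counter()         # fragment id -> request count

def render_fragment(frag_id):
    """Stand-in for the expensive dynamic-page construction step."""
    time.sleep(0.01)
    return f"<div>fragment {frag_id} built at {time.time():.0f}</div>"

def get_fragment(frag_id):
    popularity[frag_id] += 1
    if frag_id not in fragment_cache:                   # cache miss: build now
        fragment_cache[frag_id] = (render_fragment(frag_id), time.time())
    return fragment_cache[frag_id][0]

def pregenerate(current_load, max_age=30.0, load_threshold=0.6, budget=5):
    """Feedback step: refresh popular, stale fragments only while load is low."""
    if current_load > load_threshold:
        return 0                                        # back off under load
    refreshed = 0
    for frag_id, _ in popularity.most_common():
        if refreshed >= budget:
            break
        _, generated_at = fragment_cache.get(frag_id, ("", 0.0))
        if time.time() - generated_at > max_age:        # stale or absent
            fragment_cache[frag_id] = (render_fragment(frag_id), time.time())
            refreshed += 1
    return refreshed

print(get_fragment("headlines"))
print("pre-generated:", pregenerate(current_load=0.3))
```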
Abstract:
Printed by the Diputación Foral de Álava, legal deposit (D.L.) VI-430/99.
Abstract:
For the past five years, the Faculty of Social and Communication Sciences, through the Department of Journalism II, has been organizing the Congreso Internacional de Ciberperiodismo y Web 2.0 (International Conference on Cyberjournalism and Web 2.0), an event devoted to journalism and the Internet in general and to Web 2.0 in particular. Web 2.0 is a concept in which the real protagonists are the audiences. The public is becoming the editor of information: it is the public that defines how it wants to see the information, and it is building communities in the process. Web 2.0 reinforces the idea of the user as a creator, and not merely a consumer, of media. People who used to be consumers of information are gradually becoming editors, and many of the applications associated with Web 2.0 aim to help them organize and publish their content. This year's conference, held on 17 and 18 November at the Bizkaia Aretoa, is entitled "¿Son las audiencias indicadores de calidad?" ("Are audiences indicators of quality?"). This edition will try to answer the question of which strategies the so-called reference media are adopting in response to the fact that audiences demand more participation and that, as a consequence, User-Generated Content is increasingly being accepted. Its characteristics, tools, impact and consequences will be explored in order to understand, from a critical point of view, the nature and scope of these new models. The aim is once again to bring together specialists in the field to analyze and debate questions centered on the practice of cyberjournalism today, in light of the new business, professional and training realities. The challenges and changes brought about by convergence and multitextuality, by so-called "citizen journalism", by technological innovations and by entrepreneurial experiences in this area will be among the highlighted topics. The conference is also intended to be an ideal occasion for updating scientific knowledge about cyberjournalism. To that end, it brings together national and international academics who are leading figures in this research field.
Abstract:
Background: Two distinct trends are emerging with respect to how data is shared, collected, and analyzed within the bioinformatics community. First, Linked Data, exposed as SPARQL endpoints, promises to make data easier to collect and integrate by moving towards the harmonization of data syntax, descriptive vocabularies, and identifiers, as well as providing a standardized mechanism for data access. Second, Web Services, often linked together into workflows, normalize data access and create transparent, reproducible scientific methodologies that can, in principle, be re-used and customized to suit new scientific questions. Constructing queries that traverse semantically rich Linked Data requires substantial expertise, yet traditional RESTful or SOAP Web Services cannot adequately describe the content of a SPARQL endpoint. We propose that content-driven Semantic Web Services can enable facile discovery of Linked Data, independent of its location. Results: We use a well-curated Linked Dataset, OpenLifeData, and utilize its descriptive metadata to automatically configure a series of more than 22,000 Semantic Web Services that expose all of its content via the SADI set of design principles. The OpenLifeData SADI services are discoverable via queries to the SHARE registry and easy to integrate into new or existing bioinformatics workflows and analytical pipelines. We demonstrate the utility of this system through a comparison of Web Service-mediated data access with traditional SPARQL, and note that this approach not only simplifies data retrieval, but simultaneously provides protection against resource-intensive queries. Conclusions: We show, through a variety of different clients and examples of varying complexity, that data from the myriad OpenLifeData services can be recovered without any prior knowledge of the content or structure of the SPARQL endpoints. We also demonstrate that, via clients such as SHARE, the complexity of federated SPARQL queries is dramatically reduced.
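For contrast with the service-mediated access the paper advocates, the snippet below shows the plain SPARQL access it is compared against, using the SPARQLWrapper library. The endpoint URL is a placeholder and the query is a generic triple pattern, since the abstract gives neither; the SADI/SHARE layer itself is not reproduced here.

```python
# Minimal direct SPARQL access (the baseline the paper compares against).
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://sparql.openlifedata.org/"   # placeholder endpoint URL

sparql = SPARQLWrapper(ENDPOINT)
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    SELECT ?s ?p ?o
    WHERE { ?s ?p ?o }
    LIMIT 10
""")

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["s"]["value"], row["p"]["value"], row["o"]["value"])
```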
Abstract:
This project addresses the ad-blocking problem that currently exists on the web. Taking into account the opinions of various authors on the subject, it analyzes the problems raised by the use of ad blockers and studies how these blockers work. In particular, the most popular extension in this field, AdBlock Plus, is examined through reverse engineering to determine how it is built and how it operates. In addition, a series of solutions for tackling the problem are proposed, and one of them is developed and implemented. As a result, the behaviour of AdBlock Plus is improved, in the sense that it gives the user more freedom to choose what to block, while giving advertisers and content providers the opportunity to keep their advertising-based business model.
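To make the blocking and whitelisting behaviour discussed above concrete, here is a drastically simplified sketch of Adblock-Plus-style filter matching; the real extension's filter syntax and matching engine are far richer, and the example rules are made up.

```python
# Simplified Adblock-Plus-style matching: block rules plus "@@" exception rules.
from urllib.parse import urlparse

BLOCK_RULES = ["||ads.example.com^", "/banner/"]   # hypothetical filter list
EXCEPTION_RULES = ["@@||news.example.com^"]        # hypothetical whitelist entry

def _matches(rule, url):
    host = urlparse(url).netloc
    if rule.startswith("||"):                      # domain-anchored rule
        domain = rule[2:].rstrip("^")
        return host == domain or host.endswith("." + domain)
    return rule in url                             # plain substring rule

def should_block(url):
    """Block when some block rule matches and no exception overrides it."""
    if any(_matches(rule[2:], url) for rule in EXCEPTION_RULES):
        return False
    return any(_matches(rule, url) for rule in BLOCK_RULES)

for u in ["http://ads.example.com/x.js",
          "http://news.example.com/banner/top.png",
          "http://news.example.com/story.html"]:
    print(u, "->", "block" if should_block(u) else "allow")
```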
Abstract:
This work aims to propose a simple and generalist ontology model, capable of describing the most basic concepts that permeate the knowledge domain of non-specialized Brazilian online newspapers, grounded both in practice and conceptually, in accordance with the principles of the Semantic Web. Building on a new way of classifying and organizing content, the proposed ontology should be able to meet the common needs of both parties, newspaper and reader, which are, in short, the search for and retrieval of information.
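The abstract does not list the ontology's actual classes, so the sketch below is purely illustrative: it uses rdflib with hypothetical classes (NewsArticle, Section), a hypothetical property, and a placeholder namespace to show the kind of lightweight RDF/OWL model being proposed.

```python
# Illustrative toy ontology for online news content, built with rdflib.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

NEWS = Namespace("http://example.org/news-ontology#")   # placeholder namespace
g = Graph()
g.bind("news", NEWS)

# Two basic classes and one property relating them.
g.add((NEWS.NewsArticle, RDF.type, OWL.Class))
g.add((NEWS.Section, RDF.type, OWL.Class))
g.add((NEWS.publishedIn, RDF.type, OWL.ObjectProperty))
g.add((NEWS.publishedIn, RDFS.domain, NEWS.NewsArticle))
g.add((NEWS.publishedIn, RDFS.range, NEWS.Section))

# One example instance, so an article can later be found by section.
g.add((NEWS.article1, RDF.type, NEWS.NewsArticle))
g.add((NEWS.article1, RDFS.label, Literal("Example headline", lang="pt")))
g.add((NEWS.article1, NEWS.publishedIn, NEWS.Economia))

print(g.serialize(format="turtle"))
```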
Abstract:
Well into the 21st century, the use of the Internet and its advances affect not only individuals: companies must also evolve at the same pace and adapt all their practices to these advances. With the emergence of Web 2.0, certain aspects of business have become obsolete and have had to adapt to the new era: the era of communication and interaction through the Internet. New business models have been created, value-chain activities have been improved, new marketing and corporate communication strategies have emerged, and new sales channels have been built around the e-Commerce phenomenon. As for employees, companies have begun to value new competencies related to the use of the Internet and Web 2.0. Some of these competencies are common to many jobs, for example the use of social networks or information management, while others are more specific and depend on the particular position. Finally, the emergence of Web 2.0 has forced companies to create new areas and positions, or to modify existing ones, to keep up with new times and trends. This is how the different professional profiles in the areas of Digital Strategy, Digital Marketing, Digital Content, Social Media, Big Data Analysis, e-Commerce and Mobile Marketing have arisen. These profiles are very popular and in high demand among companies, and the number of positions related to the digital field is expected to grow even further, since these are the professions of the future.
Abstract:
In the field of continuing education in health, several initiatives can be cited that aim to train professionals using Information and Communication Technologies (ICTs). However, little is known about how health professionals use the web as a formal learning strategy, and even less about informal learning. Educational initiatives that use, and above all teach the use of, technology as a learning tool are still carried out very intuitively, by trial and error, given how quickly the technology itself has evolved. The general objective of this research is therefore to understand the profile, perceptions and social representations of web-based learning among physicians, nurses and dentists, and the possible influence of that use on their professional routine. To achieve this objective, a quali-quantitative methodology was employed through an online questionnaire, containing closed and open questions, answered by 277 students of the Specialization Course in Family Health offered by the Universidade do Estado do Rio de Janeiro (UERJ) unit of the Universidade Aberta do Sistema Único de Saúde (UNA-SUS). The closed questions were analyzed with descriptive statistics and non-parametric bivariate tests. The open questions were analyzed in light of social representations theory, using content analysis and free word association techniques. The research results were presented as three conference papers and four articles submitted for publication in high-quality academic journals. Based on the results, a key concern is that the mere consumption of information may be justifying, and limiting, these subjects' use of the internet, to the detriment of the educational possibilities of cyberculture. It is believed that actions are needed to support a more reflective practice in order to reverse a possibly limited use of the potential of ICTs.
Abstract:
The Architecture, Engineering, Construction and Facilities Management (AEC/FM) industry is rapidly becoming a multidisciplinary, multinational and multi-billion dollar economy, involving large numbers of actors working concurrently at different locations and using heterogeneous software and hardware technologies. Since the beginning of the last decade, a great deal of effort has been spent within the field of construction IT to integrate data and information from most of the computer tools used to carry out engineering projects. For this purpose, a number of integration models have been developed, such as web-centric systems and construction project modeling, a useful approach for representing construction projects and integrating data from various civil engineering applications. In the modern, distributed and dynamic construction environment, it is important to retrieve and exchange information from different sources and in different data formats in order to improve the processes supported by these systems. Previous research demonstrated that a major hurdle in AEC/FM data integration in such systems is the variety of data types involved, and that a significant part of the data is stored in semi-structured or unstructured formats. Therefore, new integrative approaches are needed to handle non-structured data types like images and text files. This research focuses on the integration of construction site images. These images are a significant part of the construction documentation, with thousands of them stored in the site photograph logs of large-scale projects. However, locating and identifying the data needed for important decision-making processes is a hard and time-consuming task, and so far there have been no automated methods for associating these images with other related objects. Therefore, automated methods for the integration of construction images are important for construction information management. During this research, processes for the retrieval, classification, and integration of construction images in AEC/FM model-based systems have been explored. Specifically, a combination of techniques from the areas of image and video processing, computer vision, information retrieval, statistics and content-based image and video retrieval has been deployed in order to develop a methodology for retrieving construction site image data related to components of a project model. This method has been tested on available construction site images from a variety of sources, such as past and current building construction and transportation projects, and is able to automatically classify, store, integrate and retrieve image data files in inter-organizational systems so as to allow their use in project-management-related tasks.
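Of the several techniques combined in this work, one common content-based retrieval building block can be sketched briefly: a color-histogram feature with nearest-neighbour matching. The random arrays below stand in for construction-site photographs that would normally be loaded from the photo log (e.g. with Pillow); none of this reproduces the paper's actual pipeline.

```python
# Toy content-based image retrieval: per-channel color histograms + L1 distance.
import numpy as np

def color_histogram(image, bins=8):
    """Concatenated per-channel histogram, normalized to unit sum."""
    feats = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    hist = np.concatenate(feats).astype(float)
    return hist / hist.sum()

def most_similar(query, catalog):
    """Index of the catalog image whose histogram is closest to the query's."""
    q = color_histogram(query)
    dists = [np.abs(q - color_histogram(img)).sum() for img in catalog]
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
catalog = [rng.integers(0, 256, size=(64, 64, 3)) for _ in range(5)]
query = catalog[2] + rng.integers(-5, 6, size=(64, 64, 3))   # noisy copy of image 2
print("best match:", most_similar(query, catalog))
```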
Abstract:
A 2-year investigation of the growth and food availability of silver carp and bighead carp was carried out using stable isotope and gut content analysis in a large pen in Meiliang Bay of Lake Taihu, China. Both silver carp and bighead carp exhibited significantly higher delta 13C in 2005 than in 2004, which can probably be attributed to two factors: (i) differences between the two years in the isotopic composition at the base of the pelagic food web and (ii) differences in the composition and stable isotopes of the prey items. The significantly positive correlations between body length, body weight and stable isotope ratios indicated that the isotopic changes in silver carp and bighead carp resulted from the accumulation of biomass concomitant with rapid growth. Because of the drastic decrease of zooplankton in the diet in 2005, silver carp and bighead carp grew faster in 2004 than in 2005. Bighead carp occupied a lower trophic level than silver carp in 2005, as indicated by stable nitrogen isotope ratios, which is possibly explained by interspecific differences between silver carp and bighead carp in prey species and food quality.
Abstract:
In this paper, the accumulation and distribution of microcystins (MCs) were examined monthly in six species of fish at different trophic levels in Meiliang Bay, Lake Taihu, China, from June to November 2005. Microcystins were analyzed by liquid chromatography electrospray ionization mass spectrometry (LC-ESI-MS). Average recoveries from spiked fish samples were 67.7% for MC-RR, 85.3% for MC-YR, and 88.6% for MC-LR. The MC (MC-RR + MC-YR + MC-LR) concentration in liver and gut content was highest in phytoplanktivorous fish, followed by omnivorous fish, and lowest in carnivorous fish, while the MC concentration in muscle was highest in omnivorous fish, followed by phytoplanktivorous fish, and lowest in carnivorous fish. This is the first study to report MC accumulation in the gonads of fish in the field. The main uptake route of MC-YR in fish appears to be through the gills from dissolved MCs. The WHO limit for tolerable daily intake was exceeded only in common carp muscle.