946 results for WEB SERVER


Relevance: 70.00%

Abstract:

Tibetan is an alphabetic script. Like English, it is written from left to right and top to bottom, but there are no spaces between words; syllables are separated only by the tsheg (syllable mark). According to Tibetan grammar, a line break may occur only after a tsheg, a single shad, a double shad, or a space. Mainstream browsers (such as Firefox and Netscape) cannot handle this line-breaking rule and therefore fail to display Tibetan text properly: Firefox, for example, treats an entire paragraph without spaces as a single word, so the text cannot wrap at the right edge of the screen and the user must scroll sideways to read the article, which is a considerable inconvenience. Furthermore, because most Tibetan letters differ in width, the rendered length of a string cannot be determined from its character count when authoring an HTML document. The proposed algorithm uses a rough method to obtain an approximate string length and then, given the line-width constraint, finds a suitable point in the string at which to break the line. Although the lengths are approximations, this essentially solves the problem of mainstream browsers being unable to typeset Tibetan.
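A minimal sketch of the break-point search described above, in Python; the per-character width table and the set of legal break characters are illustrative assumptions rather than the paper's actual values:

```python
# Greedy line breaking that only breaks after legal Tibetan break marks.
# Width values are illustrative placeholders, not real glyph metrics.

TSHEG = "\u0F0B"        # Tibetan intersyllabic tsheg
SHAD = "\u0F0D"         # single shad
DOUBLE_SHAD = "\u0F0E"  # double (nyis) shad
BREAK_AFTER = {TSHEG, SHAD, DOUBLE_SHAD, " "}

def approx_width(ch: str) -> float:
    """Rough per-character width estimate (assumed constants)."""
    return 1.0 if ch in BREAK_AFTER else 1.4

def break_lines(text: str, line_width: float):
    """Greedily emit lines, breaking only after legal break characters."""
    lines, start, width, last_break = [], 0, 0.0, -1
    for i, ch in enumerate(text):
        width += approx_width(ch)
        if ch in BREAK_AFTER:
            last_break = i
        if width > line_width and last_break >= start:
            lines.append(text[start:last_break + 1])
            start = last_break + 1
            # Re-estimate the width of what is already on the new line.
            width = sum(approx_width(c) for c in text[start:i + 1])
    if start < len(text):
        lines.append(text[start:])
    return lines
```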

Relevance: 70.00%

Abstract:

This paper addresses the problem of analyzing the performance of WWW servers. The Web has experienced phenomenal growth and has become the most popular Internet application. As a consequence of its popularity, the Internet has suffered from various performance problems, such as network congestion and overloaded servers. These days, it is not uncommon to find servers refusing connections because they are overloaded. Performance has always been a key issue in the design and operation of on-line systems. With regard to the Internet, performance is also critical, because users want fast and easy access to all objects (i.e., documents, pictures, audio, and video) available on the net. Thus, it is important to understand WWW performance issues. This paper focuses on the performance analysis of a Web server. Using a synthetic benchmark (WebStone), we analyze three different Web server software packages running on a Windows NT platform and performing typical WWW tasks.
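The paper uses WebStone; as a rough illustration of the kind of measurement such a benchmark performs, here is a toy load generator (standard library only) that issues concurrent requests and reports mean response time and throughput. The URL and client/request counts are placeholders:

```python
# Toy HTTP load generator: concurrent GETs, mean latency, throughput.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/index.html"  # hypothetical server under test

def timed_get(_):
    t0 = time.perf_counter()
    with urlopen(URL) as resp:
        body = resp.read()
    return time.perf_counter() - t0, len(body)

def run(clients: int = 16, requests: int = 200):
    t_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=clients) as pool:
        results = list(pool.map(timed_get, range(requests)))
    elapsed = time.perf_counter() - t_start
    latencies = [r[0] for r in results]
    print(f"mean response time: {sum(latencies) / len(latencies):.4f} s")
    print(f"throughput: {len(results) / elapsed:.1f} req/s")

if __name__ == "__main__":
    run()
```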

Relevance: 70.00%

Abstract:

In this paper, we propose and evaluate an implementation of a prototype scalable web server. The prototype consists of a load-balanced cluster of hosts that collectively accept and service TCP connections. The host IP addresses are advertised using the Round Robin DNS technique, allowing any host to receive requests from any client. Once a client attempts to establish a TCP connection with one of the hosts, a decision is made as to whether or not the connection should be redirected to a different host---namely, the host with the lowest number of established connections. We use the low-overhead Distributed Packet Rewriting (DPR) technique to redirect TCP connections. In our prototype, each host keeps information about connections in hash tables and linked lists. Every time a packet arrives, it is examined to see if it has to be redirected or not. Load information is maintained using periodic broadcasts amongst the cluster hosts.
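A sketch of the redirect decision in plain Python (the actual prototype makes this decision at the packet level via DPR; host names and connection counts here are illustrative):

```python
# Pick the cluster host with the fewest established connections.
# Counts would come from the periodic load broadcasts described above.
established = {"host-a": 12, "host-b": 7, "host-c": 9}

def choose_host(local_host: str) -> str:
    """Return the host that should service a newly arrived connection."""
    target = min(established, key=established.get)
    # Redirect only if another host is strictly less loaded than we are.
    return target if established[target] < established[local_host] else local_host

# A connection arriving at host-a would be redirected to host-b:
assert choose_host("host-a") == "host-b"
```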

Relevance: 70.00%

Abstract:

Under high loads, a Web server may be servicing many hundreds of connections concurrently. In traditional Web servers, the question of the order in which concurrent connections are serviced has been left to the operating system. In this paper we ask whether servers might provide better service by using non-traditional service ordering. In particular, for the case when a Web server is serving static files, we examine the costs and benefits of a policy that gives preferential service to short connections. We start by assessing the scheduling behavior of a commonly used server (Apache running on Linux) with respect to connection size and show that it does not appear to provide preferential service to short connections. We then examine the potential performance improvements of a policy that does favor short connections (shortest-connection-first). We show that mean response time can be improved by factors of four or five under shortest-connection-first, as compared to an (Apache-like) size-independent policy. Finally we assess the costs of shortest-connection-first scheduling in terms of unfairness (i.e., the degree to which long connections suffer). We show that under shortest-connection-first scheduling, long connections pay very little penalty. This surprising result can be understood as a consequence of heavy-tailed Web server workloads, in which most connections are small, but most server load is due to the few large connections. We support this explanation using analysis.
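The intuition can be checked with a small simulation: with heavy-tailed (Pareto) connection sizes served by one server, shortest-first ordering cuts mean response time dramatically relative to a size-blind order, while most of the load still sits in a few large connections. This batch model is a simplification for illustration, not the paper's actual workload model:

```python
# Compare mean response time under FIFO vs. shortest-first ordering
# for a batch of heavy-tailed job sizes at a single server.
import random

random.seed(1)
sizes = [random.paretovariate(1.2) for _ in range(10_000)]  # heavy-tailed

def mean_response(order):
    clock, total = 0.0, 0.0
    for s in order:
        clock += s          # each job finishes after all earlier jobs
        total += clock
    return total / len(order)

fifo = mean_response(sizes)
sjf = mean_response(sorted(sizes))
print(f"FIFO mean response:           {fifo:.1f}")
print(f"shortest-first mean response: {sjf:.1f}")
print(f"improvement factor:           {fifo / sjf:.1f}x")

# Heavy tail: a small fraction of connections carries most of the load.
top1 = sorted(sizes)[-len(sizes) // 100:]
print(f"top 1% of connections carry {sum(top1) / sum(sizes):.0%} of the load")
```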

Relevance: 70.00%

Abstract:

One of the most vexing questions facing researchers interested in the World Wide Web is why users often experience long delays in document retrieval. The Internet's size, complexity, and continued growth make this a difficult question to answer. We describe the Wide Area Web Measurement project (WAWM) which uses an infrastructure distributed across the Internet to study Web performance. The infrastructure enables simultaneous measurements of Web client performance, network performance and Web server performance. The infrastructure uses a Web traffic generator to create representative workloads on servers, and both active and passive tools to measure performance characteristics. Initial results based on a prototype installation of the infrastructure are presented in this paper.

Relevance: 70.00%

Abstract:

A mapping between chains in the Protein Databank and Enzyme Classification numbers is invaluable for research into structure-function relationships. Mapping at the chain level is a non-trivial problem and we present an automatically updated Web-server, which provides this link in a queryable form and as a downloadable XML or flat file.

Relevance: 70.00%

Abstract:

IntFOLD is an independent web server that integrates our leading methods for structure and function prediction. The server provides a simple unified interface that aims to make complex protein modelling data more accessible to life scientists. The server web interface is designed to be intuitive and integrates a complex set of quantitative data, so that 3D modelling results can be viewed on a single page and interpreted by non-expert modellers at a glance. The only required input to the server is an amino acid sequence for the target protein. Here we describe major performance and user interface updates to the server, which comprises an integrated pipeline of methods for: tertiary structure prediction, global and local 3D model quality assessment, disorder prediction, structural domain prediction, function prediction and modelling of protein-ligand interactions. The server has been independently validated during numerous CASP (Critical Assessment of Techniques for Protein Structure Prediction) experiments, as well as being continuously evaluated by the CAMEO (Continuous Automated Model Evaluation) project. The IntFOLD server is available at: http://www.reading.ac.uk/bioinf/IntFOLD/

Relevance: 70.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 70.00%

Abstract:

Web content hosting, in which a Web server stores and provides Web access to documents for different customers, is becoming increasingly common. For example, a web server can host webpages for several different companies and individuals. Traditionally, Web Service Providers (WSPs) provide all customers with the same level of performance (best-effort service). Most service differentiation has been in the pricing structure (individual vs. business rates) or the connectivity type (dial-up access vs. leased line, etc.). This report presents DiffServer, a program that implements two simple, server-side, application-level mechanisms (server-centric and client-centric) to provide different levels of web service. The results of the experiments show that there is little overhead from this additional layer of abstraction between the client and the Apache web server under light load conditions. Also, the average waiting time for high priority requests decreases significantly once they are assigned priorities, as compared to a FIFO approach.
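As a hedged sketch of the server-centric mechanism's core idea, the dispatcher below serves requests from a priority queue instead of a FIFO queue; the customer classes and priority values are illustrative assumptions, not DiffServer's actual configuration:

```python
# Application-level priority dispatching in front of a web server:
# requests from higher-priority customer classes are served first.
import heapq
import itertools

PRIORITY = {"premium": 0, "basic": 1}  # lower value = served first
counter = itertools.count()            # FIFO tie-break within a class
queue = []

def enqueue(customer_class: str, request: str):
    heapq.heappush(queue, (PRIORITY[customer_class], next(counter), request))

def dispatch() -> str:
    """Hand the next request to the back-end server (e.g., Apache)."""
    _, _, request = heapq.heappop(queue)
    return request

enqueue("basic", "GET /big-report.html")
enqueue("premium", "GET /index.html")
assert dispatch() == "GET /index.html"  # premium request jumps the queue
```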

Relevance: 70.00%

Abstract:

The objective of this Final Degree Project (TFG) is the creation of a prototype web application for managing geospatial resources. The proposal arose from the need for a tool that would not have to be installed on a device but would instead be served by a web server, allowing it to be accessed from anywhere and from any device. The result was the Gestor Web de Recursos Geoespaciales con Tecnología OpenLayers, an application that combines several tools (OpenLayers, GeoServer, PostgreSQL, jQuery, etc.), all of them free software, to provide functionality such as creating vector primitives on a map, managing and visualizing the associated information, editing styles and modifying coordinates, all of them characteristic features of a Geographic Information System (GIS), through a convenient and effective interface that shields the user from complex internal details. The software developed has the potential to become a solution to the geospatial information management needs of the ULPGC, especially on the Tafira campus, where its use has been demonstrated. Moreover, unlike the tools offered by companies such as Google or Microsoft, this application is entirely licensed under the GNU GPL v3, which allows anyone interested to explore its code, improve it and add new functionality.

Relevance: 70.00%

Abstract:

The past decade has seen the energy consumption of servers and Internet Data Centers (IDCs) skyrocket. A recent survey estimated that worldwide spending on server power and cooling has risen above $30 billion and is likely to exceed spending on new server hardware. The rapid rise in energy consumption poses a serious threat to both energy resources and the environment, which makes green computing not only worthwhile but also necessary. This dissertation tackles the challenges of both reducing the energy consumption of server systems and reducing the cost for Online Service Providers (OSPs). Two distinct subsystems account for most of an IDC's power: the server system, which accounts for 56% of the total power consumption of an IDC, and the cooling and humidification systems, which account for about 30%. The server system dominates the energy consumption of an IDC, and its power draw can vary drastically with data center utilization.

In this dissertation, we propose three models to achieve energy efficiency in web server clusters: an energy proportional model, an optimal server allocation and frequency adjustment strategy, and a constrained Markov model. The proposed models combine Dynamic Voltage/Frequency Scaling (DV/FS) and Vary-On, Vary-Off (VOVF) mechanisms that work together for greater energy savings, and corresponding strategies are proposed to deal with the transition overheads.

We further extend server energy management to the IDC's cost management, helping OSPs conserve energy, manage their electricity costs, and lower carbon emissions. We have developed an optimal energy-aware load dispatching strategy that periodically maps more requests to the locations with lower electricity prices. A carbon emission limit is imposed, and the volatility of the carbon offset market is also considered. Two energy-efficient strategies are applied to the server system and the cooling system, respectively.

With the rapid development of cloud services, we also carry out research to reduce server energy in cloud computing environments. We propose a new live virtual machine (VM) placement scheme that can effectively map VMs to Physical Machines (PMs) with substantial energy savings in a heterogeneous server cluster. A VM/PM mapping probability matrix is constructed, in which each VM request is assigned a probability of running on each PM; the matrix takes into account resource limitations, VM operation overheads, server reliability, and energy efficiency. The evolution of Internet Data Centers and the increasing demands of web services raise great challenges for improving the energy efficiency of IDCs, and we identify several potential areas for future research in each chapter.
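A minimal sketch of the energy-aware load dispatching idea: greedily assign request load to the data centers with the cheapest electricity, subject to capacity. All prices, capacities and demand figures are hypothetical, and the dissertation's strategy additionally handles carbon limits and transition overheads:

```python
# Greedy price-ordered assignment of request load to data centers.
centers = {  # $/MWh and request capacity (hypothetical values)
    "dc-east": {"price": 42.0, "capacity": 5000},
    "dc-west": {"price": 35.0, "capacity": 3000},
    "dc-euro": {"price": 58.0, "capacity": 4000},
}

def dispatch(demand: int) -> dict:
    """Map request demand to the cheapest locations first."""
    plan = {}
    for name, dc in sorted(centers.items(), key=lambda kv: kv[1]["price"]):
        share = min(demand, dc["capacity"])
        plan[name] = share
        demand -= share
        if demand == 0:
            break
    if demand > 0:
        raise RuntimeError("demand exceeds total capacity")
    return plan

print(dispatch(7000))  # {'dc-west': 3000, 'dc-east': 4000}
```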

Relevance: 70.00%

Abstract:

The objectives of this Final Year Project are the analysis, design and implementation of a web system that lets users familiarize themselves with the Human Development Index (HDI), published annually by the United Nations, and that offers a management and download service for a mobile application related to that index. The mobile application is an educational game based on questions about countries' HDI, developed in parallel with this project. The web service implemented here supports downloading, administering and updating content, as well as interaction between users. The system consists of a web server, a database of users and content, and a web portal from which the mobile application can be downloaded, game statistics queried, and the HDI explored without playing. The advanced search engine developed for browsing the HDI lets users acquire skills and train on their own to improve their game results. System administrators can manage the portal's content, the users requesting accounts, and the functionality offered, that is, updates to the game, forums and news. Installing the implemented system on a web server allowed its successful verification, as well as the provision of an information and awareness service on the HDI, kept up to date with United Nations data, which was the original motivation of the project.

Relevance: 70.00%

Abstract:

The emergence of pen-based mobile devices such as PDAs and tablet PCs provides a new way to input mathematical expressions to a computer: handwriting, which is much more natural and efficient for entering mathematics. This paper proposes a web-based handwriting mathematics system, called WebMath, for supporting mathematical problem solving. The proposed WebMath system is based on a client-server architecture. It comprises four major components: a standard web server, a handwriting mathematical expression editor, a computation engine and a web browser with an Ajax-based communicator. The handwriting mathematical expression editor adopts a progressive recognition approach for dynamic recognition of handwritten mathematical expressions. The computation engine supports mathematical functions such as algebraic simplification and factorization, and integration and differentiation. The web browser provides a user-friendly interface for accessing the system using advanced Ajax-based communication. In this paper, we describe the different components of the WebMath system and present its performance analysis.
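As an illustration of what the computation engine's back end might look like, here is a minimal sketch built on SymPy (the choice of SymPy is an assumption; the paper does not say how the engine is implemented):

```python
# Toy computation engine covering the operations named above:
# simplification, factorization, integration and differentiation.
import sympy as sp

x = sp.Symbol("x")

def compute(operation: str, expr_text: str) -> str:
    """Evaluate one recognized expression, as the server back end might."""
    expr = sp.sympify(expr_text)
    ops = {
        "simplify": sp.simplify,
        "factor": sp.factor,
        "integrate": lambda e: sp.integrate(e, x),
        "differentiate": lambda e: sp.diff(e, x),
    }
    return str(ops[operation](expr))

print(compute("factor", "x**2 - 1"))       # (x - 1)*(x + 1)
print(compute("integrate", "2*x"))         # x**2
print(compute("differentiate", "sin(x)"))  # cos(x)
```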

Relevance: 70.00%

Abstract:

The binding between antigenic peptides (epitopes) and the MHC molecule is a key step in the cellular immune response. Accurate in silico prediction of epitope-MHC binding affinity can greatly expedite epitope screening by reducing costs and experimental effort. Recently, we demonstrated the appealing performance of SVRMHC, an SVR-based quantitative modeling method for peptide-MHC interactions, when applied to three mouse class I MHC molecules. Subsequently, we have greatly extended the construction of SVRMHC models and have established such models for more than 40 class I and class II MHC molecules. Here we present the SVRMHC web server for predicting peptide-MHC binding affinities using these models. Benchmarked percentile scores are provided for all predictions. The larger number of SVRMHC models available allowed for an updated evaluation of the performance of the SVRMHC method compared to other well-known linear modeling methods. SVRMHC is an accurate and easy-to-use prediction server for epitope-MHC binding with significant coverage of MHC molecules. We believe it will prove to be a valuable resource for T cell epitope researchers.
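For illustration only, a minimal SVR regression over one-hot-encoded peptides in scikit-learn; the encoding, parameters and toy affinity values are assumptions and do not reproduce the actual SVRMHC models:

```python
# Sketch of SVR-based quantitative peptide-affinity modeling.
import numpy as np
from sklearn.svm import SVR

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def encode(peptide: str) -> np.ndarray:
    """Sparse one-hot encoding: 20 positions per residue."""
    vec = np.zeros(len(peptide) * 20)
    for i, aa in enumerate(peptide):
        vec[i * 20 + AMINO_ACIDS.index(aa)] = 1.0
    return vec

# Toy 9-mer peptides with hypothetical log-affinity targets.
peptides = ["SIINFEKLY", "ALAKAAAAM", "GILGFVFTL", "KVAELVHFL"]
log_affinity = [6.1, 4.3, 7.0, 5.2]

X = np.array([encode(p) for p in peptides])
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, log_affinity)

print(model.predict([encode("SIINFEKLY")]))  # predicted log-affinity
```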

Relevance: 60.00%

Abstract:

Key topics: Since the birth of the Open Source movement in the mid-1980s, open source software has become more and more widespread. Amongst others, the Linux operating system, the Apache web server and the Firefox web browser have taken substantial market share from their proprietary competitors. Open source software is governed by particular types of licenses. Whereas proprietary licenses only allow the software's use in exchange for a fee, open source licenses grant users further rights such as free use, free copying, free modification and free distribution of the software, as well as free access to the source code. This new phenomenon has raised many managerial questions: organizational issues related to the system of governance that underlies such open source communities (Raymond, 1999a; Lerner and Tirole, 2002; Lee and Cole, 2003; Mockus et al., 2000; Tuomi, 2000; Demil and Lecocq, 2006; O'Mahony and Ferraro, 2007; Fleming and Waguespack, 2007), collaborative innovation issues (Von Hippel, 2003; Von Krogh et al., 2003; Von Hippel and Von Krogh, 2003; Dahlander, 2005; Osterloh, 2007; David, 2008), issues related to the nature as well as the motivations of developers (Lerner and Tirole, 2002; Hertel, 2003; Dahlander and McKelvey, 2005; Jeppesen and Frederiksen, 2006), public policy and innovation issues (Jullien and Zimmermann, 2005; Lee, 2006), technological competition issues related to standard battles between proprietary and open source software (Bonaccorsi and Rossi, 2003; Bonaccorsi et al., 2004; Economides and Katsamakas, 2005; Chen, 2007), and intellectual property rights and licensing issues (de Laat, 2005; Lerner and Tirole, 2005; Gambardella, 2006; Determann et al., 2007). A major unresolved issue concerns open source business models and revenue capture, given that open source licenses imply no fee for users. On this topic, articles show that a commercial activity based on open source software is possible, as they describe different possible ways of doing business around open source (Raymond, 1999; Dahlander, 2004; Daffara, 2007; Bonaccorsi and Merito, 2007). These studies usually look at open source-based companies, which encompass a wide range of firms with different categories of activities: providers of packaged open source solutions, IT services and software engineering firms, and open source software publishers. However, the business model implications are different for each of these categories: the activities of providers of packaged solutions and of IT services and software engineering firms are based on software developed outside their boundaries, whereas commercial software publishers sponsor the development of the open source software. This paper focuses on open source software publishers' business models, as this issue is even more crucial for this category of firms, which take the risk of investing in the development of the software. To date, the literature identifies and depicts only two generic types of business models for open source software publishers: the bundling business model (Pal and Madanmohan, 2002; Dahlander, 2004) and the dual licensing business model (Välimäki, 2003; Comino and Manenti, 2007). Nevertheless, these business models are not applicable in all circumstances.

Methodology: The objectives of this paper are: (1) to explore in which contexts the two generic business models described in the literature can be implemented successfully, and (2) to depict an additional business model for open source software publishers which can be used in a different context. To do so, this paper draws upon an exploratory case study of IdealX, a French open source security software publisher. The case study consists of a series of three interviews conducted between February 2005 and April 2006 with the co-founder and the business manager, and traces IdealX's search for an appropriate business model between its creation in 2000 and 2006. This software publisher tried both generic types of open source software publishers' business models before designing its own. Consequently, through IdealX's trials and errors, I investigate the conditions under which such generic business models can be effective. Moreover, this study describes the business model finally designed and adopted by IdealX: an additional open source software publisher's business model based on the principle of ''mutualisation'', which is applicable in a different context.

Results and implications: Finally, this article contributes to ongoing empirical work within entrepreneurship and strategic management on open source software publishers' business models: it provides the characteristics of three generic business models (the bundling business model, the dual licensing business model and the mutualisation business model) as well as the conditions under which they can be successfully implemented (regarding the type of product developed and the competencies of the firm). This paper also goes beyond the traditional concept of business model used by scholars in the open source literature. Here, a business model is not only considered as a way of generating income (the ''revenue model'' (Amit and Zott, 2001)), but rather as the necessary conjunction of value creation and value capture, in line with the recent literature on business models (Amit and Zott, 2001; Chesbrough and Rosenbloom, 2002; Teece, 2007). Consequently, this paper analyses business models from the point of view of these two components.