998 results for servers


Relevance:

20.00%

Publisher:

Abstract:

In this paper we examine a number of admission control and scheduling protocols for high-performance web servers based on a 2-phase policy for serving HTTP requests. The first "registration" phase involves establishing the TCP connection for the HTTP request and parsing/interpreting its arguments, whereas the second "service" phase involves the service/transmission of data in response to the HTTP request. By introducing a delay between these two phases, we show that the performance of a web server can potentially be improved through the adoption of a number of scheduling policies that optimize the utilization of various system components (e.g. memory cache and I/O). In addition to its promise for improving the performance of a single web server, the delineation between the registration and service phases of an HTTP request may be useful for load balancing purposes on clusters of web servers. We are investigating the use of such a mechanism as part of the Commonwealth testbed being developed at Boston University.
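
A minimal Python sketch of how the two phases might be decoupled. The TwoPhaseServer class, the delay value, and the group-by-requested-file priority are illustrative assumptions, not the paper's implementation:

import heapq
import itertools
import time

class TwoPhaseServer:
    def __init__(self, delay=0.05):
        self.delay = delay             # gap between registration and service
        self.pending = []              # priority queue of registered requests
        self._tie = itertools.count()  # tie-breaker so tuples stay comparable

    def register(self, conn, raw_request):
        # Phase 1: the connection is established and the request is parsed.
        path = raw_request.split()[1]  # e.g. 'GET /index.html HTTP/1.0'
        ready_at = time.time() + self.delay
        # One possible policy: order primarily by requested file, so requests
        # for the same file are served back-to-back (better cache locality).
        heapq.heappush(self.pending, (path, ready_at, next(self._tie), conn))

    def service_next(self):
        # Phase 2: serve the highest-priority registered request.
        if not self.pending:
            return None
        path, ready_at, _, conn = heapq.heappop(self.pending)
        time.sleep(max(0.0, ready_at - time.time()))
        return conn, path              # caller transmits the file for `path`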

Relevance:

20.00%

Publisher:

Abstract:

Energy-efficient communication has recently become a key challenge for both researchers and industries. In this paper, we propose a new model in which a Content Provider and an Internet Service Provider cooperate to reduce the total power consumption. We solve the problem optimally and compare it with a classic formulation, whose aim is to minimize user delay. Results, although preliminary, show that power savings can be huge: up to 71% on real ISP topologies. We also show how the degree of cooperation impacts overall power consumption. Finally, we consider the impact of the Content Provider location on the total power savings.
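
As a toy illustration of the two objectives, the brute-force sketch below assigns demands to Content Provider sites under either a power-minimizing or a delay-minimizing objective. All names and numbers are invented; the paper's model is an optimization over real ISP topologies, not this enumeration:

from itertools import product

demands = {"d1": 10, "d2": 5}                 # traffic units per demand
sites = ["s1", "s2"]                          # Content Provider locations
power_per_unit = {"s1": 1.0, "s2": 2.5}       # ISP power cost along the path
delay = {("d1", "s1"): 4, ("d1", "s2"): 1,
         ("d2", "s1"): 3, ("d2", "s2"): 2}    # per-demand path delay

def best_assignment(objective):
    # Enumerate every demand -> site map and keep the cheapest one.
    best = None
    for choice in product(sites, repeat=len(demands)):
        assignment = dict(zip(demands, choice))
        cost = objective(assignment)
        if best is None or cost < best[0]:
            best = (cost, assignment)
    return best

total_power = lambda a: sum(demands[d] * power_per_unit[a[d]] for d in demands)
total_delay = lambda a: sum(delay[(d, a[d])] for d in demands)

print("power-minimizing:", best_assignment(total_power))
print("delay-minimizing:", best_assignment(total_delay))

Under these toy numbers the two objectives pick different sites, which is the cooperation trade-off the paper quantifies.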

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we propose and evaluate an implementation of a prototype scalable web server. The prototype consists of a load-balanced cluster of hosts that collectively accept and service TCP connections. The host IP addresses are advertised using the Round Robin DNS technique, allowing any host to receive requests from any client. Once a client attempts to establish a TCP connection with one of the hosts, a decision is made as to whether or not the connection should be redirected to a different host---namely, the host with the lowest number of established connections. We use the low-overhead Distributed Packet Rewriting (DPR) technique to redirect TCP connections. In our prototype, each host keeps information about connections in hash tables and linked lists. Every time a packet arrives, it is examined to see if it has to be redirected or not. Load information is maintained using periodic broadcasts amongst the cluster hosts.
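
A simplified sketch of the per-packet decision each cluster host makes; DprHost, its fields, and the SYN handling are illustrative stand-ins for the prototype's hash tables and periodically broadcast load information:

class DprHost:
    def __init__(self, my_id, cluster_load):
        self.my_id = my_id
        self.conn_table = {}              # (src_ip, src_port) -> owner host
        self.cluster_load = cluster_load  # host -> established-connection count

    def handle_packet(self, src_ip, src_port, is_syn):
        key = (src_ip, src_port)
        if is_syn:
            # New connection: keep it locally only if we are least loaded,
            # otherwise redirect it to the host with fewest connections.
            least = min(self.cluster_load, key=self.cluster_load.get)
            self.conn_table[key] = least
            self.cluster_load[least] += 1
        owner = self.conn_table.get(key, self.my_id)
        if owner == self.my_id:
            return "deliver locally"
        return f"rewrite and forward to {owner}"   # DPR packet rewriting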

Relevance:

20.00%

Publisher:

Abstract:

Under high loads, a Web server may be servicing many hundreds of connections concurrently. In traditional Web servers, the question of the order in which concurrent connections are serviced has been left to the operating system. In this paper we ask whether servers might provide better service by using non-traditional service ordering. In particular, for the case when a Web server is serving static files, we examine the costs and benefits of a policy that gives preferential service to short connections. We start by assessing the scheduling behavior of a commonly used server (Apache running on Linux) with respect to connection size and show that it does not appear to provide preferential service to short connections. We then examine the potential performance improvements of a policy that does favor short connections (shortest-connection-first). We show that mean response time can be improved by factors of four or five under shortest-connection-first, as compared to an (Apache-like) size-independent policy. Finally we assess the costs of shortest-connection-first scheduling in terms of unfairness (i.e., the degree to which long connections suffer). We show that under shortest-connection-first scheduling, long connections pay very little penalty. This surprising result can be understood as a consequence of heavy-tailed Web server workloads, in which most connections are small, but most server load is due to the few large connections. We support this explanation using analysis.
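
The intuition can be reproduced with a toy simulation: draw heavy-tailed (Pareto) connection sizes and compare mean response time under arrival order versus shortest-first. This is an illustrative sketch, not the paper's experimental setup:

import random

random.seed(1)
sizes = [random.paretovariate(1.2) for _ in range(10_000)]  # service demands

def mean_response(order):
    # Mean response time when all jobs arrive at t=0 and run one at a time.
    clock, total = 0.0, 0.0
    for s in order:
        clock += s            # a job finishes after everything before it
        total += clock
    return total / len(order)

fifo = mean_response(sizes)                 # size-independent order
scf = mean_response(sorted(sizes))          # shortest-connection-first
print(f"size-independent mean response: {fifo:10.1f}")
print(f"shortest-first mean response:   {scf:10.1f}")

Because most of the load comes from a few very large connections, serving the many small ones first cuts the mean dramatically while delaying the large ones only marginally relative to their own size.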

Relevance:

20.00%

Publisher:

Abstract:

With the rapid expansion of the Internet and the increasing demand on Web servers, many techniques have been developed to overcome the servers' hardware performance limitations. Web server mirroring is one such technique, in which a number of servers carrying the same "mirrored" set of services are deployed. Client access requests are then distributed over the set of mirrored servers to even out the load. In this paper we present a generic reference software architecture for load balancing over mirrored web servers. The architecture was designed adopting the latest NaSr architectural style [1] and described using the ADLARS [2] architecture description language. With minimal effort, different tailored product architectures can be generated from the reference architecture to serve different network protocols and server operating systems. An example product system is described and a sample Java implementation is presented.
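
The paper's sample implementation is in Java; the sketch below renders the same architectural idea in Python, with the balancing policy as a pluggable component so that tailored products can swap strategies. All class names are illustrative, not the paper's:

from abc import ABC, abstractmethod
import itertools

class BalancingStrategy(ABC):
    @abstractmethod
    def pick(self, mirrors):
        """Choose the mirror that should serve the next request."""

class RoundRobin(BalancingStrategy):
    def __init__(self):
        self._cycle = None
    def pick(self, mirrors):
        if self._cycle is None:
            self._cycle = itertools.cycle(mirrors)
        return next(self._cycle)

class LeastLoaded(BalancingStrategy):
    def pick(self, mirrors):
        return min(mirrors, key=lambda m: m["load"])

class Dispatcher:
    # The stable part of the reference architecture: routing is fixed,
    # the strategy component varies per generated product.
    def __init__(self, mirrors, strategy: BalancingStrategy):
        self.mirrors, self.strategy = mirrors, strategy
    def route(self, request):
        target = self.strategy.pick(self.mirrors)
        target["load"] += 1
        return target["host"], request

mirrors = [{"host": "m1", "load": 0}, {"host": "m2", "load": 0}]
print(Dispatcher(mirrors, LeastLoaded()).route("GET /"))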

Relevance:

20.00%

Publisher:

Abstract:

We present a mathematically rigorous Quality-of-Service (QoS) metric which relates the achievable QoS for a real-time analytics service to the server energy cost of offering the service. Using a new iso-QoS evaluation methodology, we scale server resources to meet QoS targets and directly rank the servers in terms of their energy-efficiency and, by extension, cost of ownership. Our metric and method are platform-independent and enable fair comparison of datacenter compute servers with significant architectural diversity, including micro-servers. We deploy our metric and methodology to compare three servers running financial option pricing workloads on real-life market data. We find that server ranking is sensitive to data inputs and the desired QoS level, and that although scale-out micro-servers can be up to two times more energy-efficient than conventional heavyweight servers for the same target QoS, they are still six times less energy-efficient than high-performance computational accelerators.
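
A schematic rendering of the iso-QoS methodology: scale each server until it meets the QoS target, then rank by the energy used at that operating point. The measurement numbers below are invented for illustration:

TARGET_QOS = 0.95        # e.g. fraction of option-pricing results on time

# (qos_achieved, watts) measured at increasing resource scales per server
measurements = {
    "micro-server": [(0.80, 30), (0.90, 45), (0.96, 60)],
    "heavyweight":  [(0.90, 90), (0.97, 120)],
    "accelerator":  [(0.99, 10)],
}

def energy_at_iso_qos(points):
    # Watts of the cheapest configuration that meets the QoS target.
    feasible = [w for qos, w in points if qos >= TARGET_QOS]
    return min(feasible) if feasible else float("inf")

ranking = sorted(measurements, key=lambda s: energy_at_iso_qos(measurements[s]))
for s in ranking:
    print(f"{s}: {energy_at_iso_qos(measurements[s])} W at iso-QoS")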

Relevance:

20.00%

Publisher:

Abstract:

Mainframes and corporate and central servers are becoming information servers. The requirement for more powerful information servers is the best opportunity to exploit the potential of parallelism. ICL recognized the opportunity of the 'knowledge spectrum', namely to convert raw data into information and then into high-grade knowledge. Its response to this and to the underlying search problems was to introduce the CAFS retrieval engine. The CAFS product demonstrates that it is possible to move functionality within an established architecture, introduce a different technology mix and exploit parallelism to achieve radically new levels of performance. CAFS also demonstrates the benefit of achieving this transparently behind existing interfaces. ICL is now working with Bull and Siemens to develop the information servers of the future by exploiting new technologies as they become available. The objective of the joint Esprit II European Declarative System project is to develop a smoothly scalable, highly parallel computer system, EDS. EDS will in the main be an SQL server and an information server. It will support the many data-intensive applications which the companies foresee; it will also support application-intensive and logic-intensive systems.

Relevance:

20.00%

Publisher:

Abstract:

A number of state-of-the-art protein structure prediction servers have been developed by researchers working in the Bioinformatics Unit at University College London. The popular PSIPRED server allows users to perform secondary structure prediction, transmembrane topology prediction and protein fold recognition. More recent servers include DISOPRED for the prediction of protein dynamic disorder and DomPred for domain boundary prediction.

Relevance:

20.00%

Publisher:

Abstract:

This paper analyses update ordering and its impact on the performance of a cluster of replicated servers. We propose a model for update orderings and constraints and develop a number of algorithms for implementing different ordering constraints. A performance study is then carried out to analyse the update ordering model.
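
As one concrete example of an ordering constraint, the sketch below enforces a total order at a replica with a hold-back queue; the names and API are illustrative, not the paper's algorithms:

class Replica:
    def __init__(self):
        self.next_seq = 0
        self.held_back = {}   # seq -> update, waiting for its predecessors
        self.state = []

    def deliver(self, seq, update):
        self.held_back[seq] = update
        # Apply every update now contiguous with the applied prefix, so all
        # replicas apply updates in the same sequence-number order.
        while self.next_seq in self.held_back:
            self.state.append(self.held_back.pop(self.next_seq))
            self.next_seq += 1

r = Replica()
r.deliver(1, "u1")    # arrives early: held back
r.deliver(0, "u0")    # releases both, applied as u0 then u1
print(r.state)        # ['u0', 'u1']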

Relevance:

20.00%

Publisher:

Abstract:

Quality-of-Service (QoS) is an important issue in multimedia applications; so far, most research has focused on bandwidth guarantees, and few works pay attention to guaranteeing server performance. In this paper we address the server performance guarantee under the prerequisite of guaranteed bandwidth quality. We take advantage of anycast to find the "best" multimedia server among a distributed server group in terms of bandwidth, and the request is submitted to the selected server; moreover, the addresses of the selected server's neighbours (all the servers with feasible paths) are delivered to the selected server simultaneously. If the selected server cannot guarantee the QoS for the request in terms of server performance, a proposed QoS-Aware Server Load Deviation (QASLD) mechanism is employed, which delivers the request to one of its neighbours until there exists a suitable server that can guarantee the server performance for the request. Our experiments show that the proposed QASLD algorithm works well.
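
A schematic of the deviation step, where an overloaded server passes the request along to a feasible-path neighbour until some server can admit it; the load/capacity fields and the recursive walk are illustrative assumptions:

def qasld_route(request, server, servers, visited=None):
    # Return the server that finally accepts `request`, or None.
    visited = visited or set()
    visited.add(server)
    if servers[server]["load"] < servers[server]["capacity"]:
        servers[server]["load"] += 1          # performance can be guaranteed
        return server
    for nb in servers[server]["neighbours"]:  # feasible-path neighbours
        if nb not in visited:
            result = qasld_route(request, nb, servers, visited)
            if result is not None:
                return result
    return None                               # no server can guarantee QoS

servers = {
    "A": {"load": 5, "capacity": 5, "neighbours": ["B", "C"]},
    "B": {"load": 4, "capacity": 5, "neighbours": ["A"]},
    "C": {"load": 0, "capacity": 5, "neighbours": ["A"]},
}
print(qasld_route("req-1", "A", servers))     # deviated from A, lands on B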

Relevance:

20.00%

Publisher:

Abstract:

Plenty of research has been done on anycast services, but, to the best of our knowledge, little of it touches the fault-tolerance problem. In this paper, we propose and analyse a fault-tolerant model, called the twin server model, for anycast communication to provide reliable and continuous anycast services. We select a twin server in an anycast group for a given anycast server, the primary server. If the twin server suspects that its primary server is dead, it takes over the unfinished job(s) of its primary server. We propose two algorithms: a server failure detecting algorithm and a server failure broadcasting algorithm. We then analyse the performance change when a primary server fails using queueing theory and obtain some interesting conclusions. Finally, we summarize the paper and present future work.
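
A sketch of the heartbeat-style detection a twin server might perform; the timeout, the job snapshot, and the class layout are assumptions for illustration, not the paper's two algorithms:

import time

class TwinServer:
    def __init__(self, timeout=3.0):
        self.timeout = timeout
        self.last_heartbeat = time.time()
        self.primary_jobs = []        # mirrored copy of the primary's queue

    def on_heartbeat(self, jobs_snapshot):
        # Primary is alive; refresh our copy of its unfinished jobs.
        self.last_heartbeat = time.time()
        self.primary_jobs = list(jobs_snapshot)

    def check_primary(self):
        # Called periodically; returns the jobs to take over, if any.
        if time.time() - self.last_heartbeat > self.timeout:
            # Suspect the primary is dead: adopt its unfinished jobs and
            # (in the full model) broadcast the failure to the anycast group.
            taken, self.primary_jobs = self.primary_jobs, []
            return taken
        return []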

Relevance:

20.00%

Publisher:

Abstract:

One of the characteristics of current Web services is that many clients request the same or similar service from a group of replicated servers, e.g. music or movie downloading in peer-to-peer networks. Most of the time, the servers are heterogeneous in terms of service rate. Much research has been done in the homogeneous environment; however, little has been done on the heterogeneous scenario. It is important and urgent to have models for heterogeneous server groups for the design and analysis of current Internet applications. In this paper, we deploy an approximation method to transform heterogeneous systems into a group of homogeneous systems. As a result, previous results from homogeneous studies can be applied to heterogeneous cases. In order to test the approximation ratio of the proposed model against real applications, we conducted simulations to obtain the degree of similarity. We use two common strategies, a random selection algorithm and a First-Come-First-Serve (FCFS) algorithm, to test the approximation ratio of the proposed model. The simulations indicate that the approximation model works well.
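
A toy simulation in the spirit of the random-selection test: replace the heterogeneous rates with homogeneous rates that preserve the aggregate service capacity, then compare mean service times. The rates are invented and the comparison is illustrative, not the paper's approximation method:

import random

random.seed(7)
hetero_rates = [1.0, 2.0, 5.0]                       # jobs/sec per server
homo_rates = [sum(hetero_rates) / len(hetero_rates)] * len(hetero_rates)

def mean_service_time(rates, n_jobs=100_000):
    # Random selection: each job picks a server uniformly at random.
    total = 0.0
    for _ in range(n_jobs):
        mu = random.choice(rates)
        total += random.expovariate(mu)              # exponential service
    return total / n_jobs

print(f"heterogeneous: {mean_service_time(hetero_rates):.3f} s/job")
print(f"homogeneous:   {mean_service_time(homo_rates):.3f} s/job")

The gap between the two printed figures is one way to read off the "degree of similarity" between the original system and its homogeneous stand-in under a given selection strategy.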