16 results for Scalable web servers

in Deakin Research Online - Australia


Relevance:

90.00%

Publisher:

Abstract:

Web caching is a widely deployed technique to reduce the load on web servers and the latency experienced by web browsers. Peer-to-Peer (P2P) web caching has been a hot research topic in recent years, as it enables scalable and robust designs for decentralized Internet-scale applications. However, many P2P web caching systems suffer from expensive overheads, such as lookup and publish messages, and lack locality awareness. In this paper, we present the development of a locality-aware cache diffusion system that makes use of routing table locality, aggregation, and soft state to overcome these limitations. Analysis and experiments show that our cache diffusion system reduces the amount of information processed by nodes, reduces the number of index messages sent by nodes, and improves the locality of cache pointers.
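The abstract names three ingredients: routing table locality, aggregation, and soft state. As a rough illustration, here is a minimal Python sketch of how soft state (index entries that expire unless refreshed) and aggregation (one summary per URL instead of many publish messages) might look; the class, field names, and TTL value are our illustrative assumptions, not the paper's design.

```python
import time

class CacheIndex:
    """Soft-state index of cache pointers (illustrative sketch)."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.entries = {}  # url -> {node_id: expiry_time}

    def publish(self, url, node_id):
        """Record (or refresh) a pointer: node_id caches url."""
        self.entries.setdefault(url, {})[node_id] = time.time() + self.ttl

    def lookup(self, url, requester_prefix):
        """Return live pointers, preferring nodes that share the
        requester's ID prefix (a crude stand-in for locality awareness)."""
        now = time.time()
        live = [n for n, exp in self.entries.get(url, {}).items() if exp > now]
        return sorted(live, key=lambda n: not n.startswith(requester_prefix))

    def aggregate(self, url):
        """Summarize all live pointers for url into one index message."""
        now = time.time()
        nodes = [n for n, exp in self.entries.get(url, {}).items() if exp > now]
        return {"url": url, "holders": nodes}

index = CacheIndex(ttl_seconds=30)
index.publish("http://example.com/a", "10.ab.node1")
index.publish("http://example.com/a", "ff.3c.node2")
print(index.lookup("http://example.com/a", requester_prefix="10."))
```

Entries that are never republished simply age out, so stale pointers disappear without explicit delete messages; that is the usual payoff of soft state.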

Relevance:

90.00%

Publisher:

Abstract:

Web servers are usually located in well-organized data centers, where they connect directly to the outside Internet through backbones. Meanwhile, application-layer distributed denial of service (AL-DDoS) attacks are a critical threat to the Internet, particularly to business web servers. Several methods have been designed to handle AL-DDoS attacks, but most of them cannot be used on heavy backbones. In this paper, we propose a new method to detect AL-DDoS attacks. Our work distinguishes itself from previous methods by considering AL-DDoS attack detection in heavy backbone traffic. Moreover, the detection of AL-DDoS attacks is easily misled by flash crowd traffic. To overcome this problem, our method constructs a Real-time Frequency Vector (RFV) and characterizes the traffic in real time as a set of models. By examining the entropy of AL-DDoS attacks and flash crowds, these models can be used to recognize real AL-DDoS attacks. We integrate these detection principles into a modularized defense architecture consisting of a head-end sensor, a detection module and a traffic filter. With a swift AL-DDoS detection speed, the filter is capable of letting legitimate requests through while stopping attack traffic. In the experiments, we use episodes of real traffic from Sina and Taobao to evaluate our AL-DDoS detection method and architecture. Compared with previous methods, the results show that our approach is very effective in defending against AL-DDoS attacks at backbones. © 2013 Elsevier B.V. All rights reserved.
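The abstract's key discriminator is entropy over a per-source frequency vector. A minimal sketch of that computation follows; the window contents and the threshold are illustrative placeholders (the paper does not state them here), and which side of the threshold indicates an attack depends on the traffic model, since scripted bots tend to concentrate requests while flash crowds spread them over many independent clients.

```python
from collections import Counter
from math import log2

def request_entropy(source_ips):
    """Shannon entropy of the per-source request distribution
    observed in one measurement window."""
    counts = Counter(source_ips)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

window = ["1.2.3.4", "1.2.3.4", "5.6.7.8", "9.9.9.9", "1.2.3.4"]
h = request_entropy(window)
THRESHOLD = 1.0  # hypothetical cut-off, for illustration only
print("suspicious" if h < THRESHOLD else "looks like organic traffic", h)
```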

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a case study of a compromised web server that was being used to distribute illegal 'warez'. The mechanism by which the server was compromised is discussed, as is the way in which it was found. The hacker organisations that engage in these activities are viewed as a virtual community, and their rules and code of ethics are investigated.

Relevance:

80.00%

Publisher:

Abstract:

Distributed denial of service (DDoS) attacks are a continuing critical threat to the Internet. Derived from the lower layers, new application-layer-based DDoS attacks, which use legitimate HTTP requests to overwhelm victim resources, are harder to detect. The problem may be more serious when such attacks mimic or occur during the flash crowd event of a popular website. In this paper, we present the design and implementation of CALD, an architectural extension to protect web servers against various DDoS attacks that masquerade as flash crowds. CALD provides real-time detection using mess tests, but differs from other systems that use similar methods. First, CALD uses a front-end sensor to monitor the traffic, which may contain various DDoS attacks or flash crowds. An intense pulse in the traffic signals possible anomalies, since this is the basic property of both DDoS attacks and flash crowds. Once abnormal traffic is identified, the sensor sends an ATTENTION signal to activate the attack detection module. Second, CALD dynamically records the average frequency of each source IP and checks the total mess extent. Theoretically, the mess extent of DDoS attacks is larger than that of flash crowds. Thus, with parameters from the attack detection module, the filter is capable of letting legitimate requests through while stopping the attack traffic. Third, CALD can separate the security modules from the web servers. As a result, it maintains maximum performance for the core web services, regardless of harassment from DDoS attacks. In the experiments, records from www.sina.com and www.taobao.com demonstrate the value of CALD.
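The front-end sensor's job, as described, is only to notice an intense pulse and raise an ATTENTION signal. A minimal sketch of such a sensor follows, assuming a sliding-window baseline and a jump multiplier; both parameters are our illustrative guesses, not values from CALD.

```python
from collections import deque

class PulseSensor:
    """Flag an 'intense pulse' when the current request rate jumps
    well above the sliding-window average (illustrative sketch)."""

    def __init__(self, window=60, multiplier=3.0):
        self.history = deque(maxlen=window)
        self.multiplier = multiplier

    def observe(self, requests_per_second):
        baseline = (sum(self.history) / len(self.history)) if self.history else None
        self.history.append(requests_per_second)
        if baseline is not None and requests_per_second > self.multiplier * baseline:
            return "ATTENTION"  # wake the downstream attack-detection module
        return None

sensor = PulseSensor(window=5)
for rate in [100, 110, 95, 105, 100, 900]:
    if sensor.observe(rate):
        print("ATTENTION at rate", rate)
```

Keeping the sensor this cheap is what lets the heavier per-source-IP mess test run only when something actually looks anomalous.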

Relevance:

80.00%

Publisher:

Abstract:

Many web servers contain dangerous pages (which we call eigenpages) that can reveal their vulnerabilities. Some worms, such as Santy, therefore locate their targets by searching for these eigenpages through search engines with well-crafted queries. In this paper, we focus on the modeling and containment of these special worms that target web applications. We propose a containment system based on honey pots: search engines randomly insert, among the search results for arriving queries, a few honey pages that lead visitors to pre-established honey pots. Infected hosts can then be detected and reported to the search engines when their malicious scans hit the honey pots. We find that the Santy worm can be effectively stopped by inserting no more than two honey pages in every one hundred search results. We also solve the challenging problem of dynamically generating matching honey pages for dynamically arriving queries. Finally, a prototype is implemented to prove the technical feasibility of this system. © 2013 by CESER Publications.
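The mechanism has two halves: the search engine sprinkles honey pages into results, and the honey pot reports whoever follows them. A minimal sketch of both halves is below; the 2% insertion rate comes from the abstract, while the URLs and the reporting hook are hypothetical names of our own.

```python
import random

HONEY_POT_URLS = ["http://honeypot.example/a", "http://honeypot.example/b"]  # hypothetical

def insert_honey_pages(results, rate=0.02, rng=random):
    """Mix honey pages into search results at roughly `rate`
    (the abstract reports ~2 per 100 results suffices for Santy)."""
    out = []
    for url in results:
        if rng.random() < rate:
            out.append(rng.choice(HONEY_POT_URLS))
        out.append(url)
    return out

def report_to_search_engine(client):
    """Stub for the reporting channel back to the search engine."""
    print("infected client reported:", client)

def on_request(url, client):
    """Honey pot side: ordinary users never see honey pages, so any
    hit on one marks the client as a scanning worm."""
    if url in HONEY_POT_URLS:
        report_to_search_engine(client)

mixed = insert_honey_pages(["http://site%d.example" % i for i in range(200)])
on_request(HONEY_POT_URLS[0], client="203.0.113.7")
```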

Relevance:

40.00%

Publisher:

Abstract:

Quality of service (QoS) is an important issue in multimedia applications; so far, most research has focused on bandwidth guarantees, and few studies pay attention to server performance guarantees. In this paper, we address the server performance guarantee under the prerequisite of guaranteed bandwidth quality. We take advantage of anycast to find the "best" multimedia server, in terms of bandwidth, among a distributed server group. The request is submitted to the selected server, and the addresses of the selected server's neighbours (all the servers with feasible paths) are delivered to it simultaneously. If the selected server cannot guarantee the QoS for the request in terms of server performance, the proposed QoS-Aware Server Load Deviation (QASLD) mechanism is employed, which forwards the request to one of the server's neighbours until there is a suitable server that can guarantee the server performance for the request. Our experiments show that the proposed QASLD algorithm works well.
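The deviation step is essentially a walk along the neighbour list until some server has enough performance headroom. A minimal sketch under that reading follows; the server records, capacity/load fields, and admission test are our illustrative assumptions, not the paper's exact criteria.

```python
def qasld_select(selected, neighbours, request_load):
    """QoS-Aware Server Load Deviation, roughly sketched: try the
    anycast-chosen server first, then deviate along its neighbours
    until one can absorb the request."""
    for server in [selected] + neighbours:
        if server["capacity"] - server["load"] >= request_load:
            server["load"] += request_load  # admit the request here
            return server["name"]
    return None  # no server can guarantee the requested performance

s1 = {"name": "s1", "capacity": 100, "load": 95}   # bandwidth-best, but busy
s2 = {"name": "s2", "capacity": 100, "load": 40}   # neighbour with headroom
print(qasld_select(s1, [s2], request_load=20))     # -> s2
```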

Relevance:

30.00%

Publisher:

Abstract:

Most current web-based database systems suffer from poor performance, complicated heterogeneity, and synchronization issues. In this paper, we propose a novel mechanism for web-based database systems based on multicast and anycast protocols to deal with these issues. In the model, we place a castway, a network interface for the database server, between the database server and the Web server. The castway handles multicast and anycast requests and responses. We propose a requirement-based server selection algorithm and an atomic multicast update algorithm for data queries and synchronization. The model is independent of the Internet environment; it can synchronise the databases efficiently and automatically. Furthermore, the model reduces the possibility of transaction deadlocks.
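The division of labour implied here is: anycast routes a read to one suitable replica, multicast pushes an update to all replicas. A minimal sketch under that reading follows; least-load selection is an illustrative stand-in for the requirement-based algorithm, and the failure handling of a true atomic multicast is omitted.

```python
class Castway:
    """Sketch of a 'castway' front-end for replicated databases."""

    def __init__(self, replicas):
        self.replicas = replicas  # name -> {"load": int, "data": dict}

    def query(self, key):
        # Anycast read: route to the least-loaded replica (stand-in
        # for the requirement-based server selection algorithm).
        name = min(self.replicas, key=lambda n: self.replicas[n]["load"])
        self.replicas[name]["load"] += 1
        return name, self.replicas[name]["data"].get(key)

    def update(self, key, value):
        # Multicast write: apply to every replica. A real atomic
        # multicast would guarantee all-or-nothing delivery.
        for rep in self.replicas.values():
            rep["data"][key] = value

cw = Castway({"db1": {"load": 3, "data": {}}, "db2": {"load": 1, "data": {}}})
cw.update("x", 42)
print(cw.query("x"))  # -> ('db2', 42)
```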

Relevance:

30.00%

Publisher:

Abstract:

The cost of recovery protocols is important for system performance, both during normal operation and after failures, in terms of overhead and of the time taken to recover failed transactions. The cost of recovery protocols for web database systems has received little attention. In this paper, we present a quantitative study of the cost of recovery protocols. For this purpose, we use an experimental setup to evaluate the performance of two recovery algorithms: the two-phase commit algorithm and a log-based algorithm. Our work is a step towards building reliable protocols for web database systems.
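For readers unfamiliar with the first of the two algorithms compared, here is a toy Python sketch of a two-phase commit coordinator; the participant interface is a deliberate simplification (real participants also log their votes durably before replying).

```python
def two_phase_commit(participants):
    """Two-phase commit: phase 1 collects votes, phase 2 commits
    only if every participant voted yes, otherwise aborts."""
    votes = [p.prepare() for p in participants]   # phase 1: voting
    if all(votes):                                # phase 2: decision
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.abort()
    return "aborted"

class Participant:
    def __init__(self, vote): self.vote = vote
    def prepare(self): return self.vote   # True = ready to commit
    def commit(self): print("commit")
    def abort(self): print("abort")

print(two_phase_commit([Participant(True), Participant(True)]))
print(two_phase_commit([Participant(True), Participant(False)]))
```

The overhead the paper measures comes precisely from the extra round trip and the forced log writes that the toy version leaves out.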

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose a novel model for web-based database systems based on the multicast and anycast protocols. In the model, we design a middleware component, the castway, which sits between the database server and the Web server. Every castway in the distributed system operates independently as both a multicast node and an anycast node. The proposed mechanism balances the workload among the distributed database servers and selects the "best" server to serve a query. Three algorithms are employed in the model: the requirement-based probing algorithm for anycast routing, the atomic multicast update algorithm for database synchronization, and the job deviation algorithm for system workload balancing. Simulations and experiments show that the proposed model works very well.
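Of the three algorithms, job deviation is the one not illustrated earlier in this listing. One plausible reading, sketched below purely as an assumption, is to shift queued jobs from overloaded servers to the currently least-utilized one; the 0.8 threshold and one-job granularity are our placeholders, not the paper's parameters.

```python
def deviate_jobs(servers, threshold=0.8):
    """Move jobs off servers whose utilization exceeds `threshold`,
    one at a time, onto the least-utilized server (illustrative)."""
    def util(s):
        return s["jobs"] / s["capacity"]
    moved = True
    while moved:
        moved = False
        target = min(servers, key=util)
        for s in servers:
            if s is not target and util(s) > threshold and util(target) < threshold:
                s["jobs"] -= 1
                target["jobs"] += 1
                moved = True
    return servers

print(deviate_jobs([{"jobs": 9, "capacity": 10}, {"jobs": 1, "capacity": 10}]))
```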

Relevance:

30.00%

Publisher:

Abstract:

Operating databases efficiently and reliably in agent-based heterogeneous data source environments is becoming a significant problem. In this paper, we contribute a framework and develop agent-oriented matchmakers, organized in a logical ring, to match task agents' requests with the middleware agents of databases. A middleware agent is a wrapper for a specific database and runs on the same server as the database management system. Each matchmaker can proliferate or cancel itself according to the sensory input from its environment. A ring-based coordination mechanism for the matchmakers is designed, and two kinds of matchmakers, host and duplicate, are introduced to improve the scalability and robustness of the agent-based system. The middleware agents are extended to fit the framework. We demonstrate the potential of the framework through a case study and present theoretical and empirical evidence that our approach is scalable and robust.
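The core coordination idea, as described, is that an unmatched request travels around the logical ring until some matchmaker knows a suitable middleware agent. A minimal sketch of that loop follows; the registry layout is our assumption, and proliferation, self-cancellation, and the host/duplicate distinction are omitted for brevity.

```python
class Matchmaker:
    """Ring-organized matchmaker (illustrative sketch)."""

    def __init__(self, name):
        self.name = name
        self.registry = {}   # capability -> middleware agent id
        self.next = None     # successor matchmaker on the ring

    def register(self, capability, agent_id):
        self.registry[capability] = agent_id

    def match(self, capability, origin=None):
        if capability in self.registry:
            return self.registry[capability]
        if self.next is None or self.next is origin:
            return None  # traversed the whole ring: no match
        return self.next.match(capability, origin or self)

m1, m2 = Matchmaker("m1"), Matchmaker("m2")
m1.next, m2.next = m2, m1          # two-node logical ring
m2.register("sql-orders", "middleware-agent-7")
print(m1.match("sql-orders"))      # forwarded around the ring -> middleware-agent-7
```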

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose a model for discovering frequent sequential patterns, phrases, which can be used as profile descriptors of documents. It is indubitable that we can obtain numerous phrases using data mining algorithms. However, it is difficult to use these phrases effectively for answering what users want. Therefore, we present a pattern taxonomy extraction model which performs the task of extracting descriptive frequent sequential patterns by pruning the meaningless ones. The model then is extended and tested by applying it to the information filtering system. The results of the experiment show that pattern-based methods outperform the keyword-based methods. The results also indicate that removal of meaningless patterns not only reduces the cost of computation but also improves the effectiveness of the system.
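To make the pruning idea concrete, here is a naive Python sketch over tokenized documents: mine frequent contiguous phrases, then drop any phrase subsumed by a longer frequent phrase with at least equal support, since such a phrase adds no descriptive information. The support threshold, length cap, and pruning rule are illustrative assumptions, not the paper's exact taxonomy construction.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-word phrases in a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def contains(q, p):
    """True if phrase p occurs contiguously inside phrase q."""
    return any(q[i:i + len(p)] == p for i in range(len(q) - len(p) + 1))

def frequent_phrases(docs, min_support=2, max_len=3):
    counts = Counter()
    for doc in docs:
        seen = set()
        for n in range(1, max_len + 1):
            seen.update(ngrams(doc, n))
        counts.update(seen)  # document-level support: one count per doc
    freq = {p: c for p, c in counts.items() if c >= min_support}
    # Prune phrases fully explained by a longer frequent phrase.
    return {p: c for p, c in freq.items()
            if not any(len(q) > len(p) and contains(q, p) and freq[q] >= c
                       for q in freq)}

docs = [["web", "server", "cache"], ["web", "server", "farm"], ["cache", "miss"]]
print(frequent_phrases(docs))  # ('web', 'server') survives; 'web' alone is pruned
```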

Relevance:

30.00%

Publisher:

Abstract:

One characteristic of current Web services is that many clients request the same or similar service from a group of replicated servers, e.g. music or movie downloads in peer-to-peer networks. Most of the time, the servers are heterogeneous in terms of service rate. Much research has been done on the homogeneous environment, but little on the heterogeneous scenario. It is therefore important and urgent to develop models of heterogeneous server groups for the design and analysis of current Internet applications. In this paper, we deploy an approximation method to transform a heterogeneous system into a group of homogeneous systems, so that previous results from homogeneous studies can be applied to heterogeneous cases. To test how closely the proposed model approximates real applications, we conducted simulations to measure the degree of similarity, using two common strategies: a random selection algorithm and a First-Come-First-Served (FCFS) algorithm. The simulations indicate that the approximation model works well.
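One plausible reading of the transformation, sketched below under that assumption, is to replace k heterogeneous servers with k identical servers of the mean service rate, preserving total capacity; the paper's exact mapping may differ. The random-selection comparison mirrors one of the two strategies named in the abstract.

```python
import random

def approx_homogeneous(rates):
    """Replace heterogeneous rates mu_i by k equal rates with the
    same total capacity (one plausible reading of the abstract)."""
    k = len(rates)
    return [sum(rates) / k] * k

def mean_service_time_random(rates, jobs=100_000, rng=random.Random(0)):
    """Average service time when each job picks a server uniformly
    at random (the 'random selection' strategy)."""
    return sum(1.0 / rng.choice(rates) for _ in range(jobs)) / jobs

hetero = [1.0, 2.0, 4.0]
homo = approx_homogeneous(hetero)
# Comparing the two numbers shows how close (or not) the
# approximation is under random selection for this rate vector.
print(mean_service_time_random(hetero), mean_service_time_random(homo))
```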

Relevance:

30.00%

Publisher:

Abstract:

Peer-to-Peer (P2P) Web caching has been a hot research topic in recent years, as it enables scalable and robust designs for decentralized Internet-scale applications. However, many P2P Web caching systems suffer from expensive overheads, such as lookup and publish messages, and lack locality awareness. In this paper we present the development of a locality-aware P2P cache system that overcomes these limitations by using routing table locality, aggregation and soft state. Experiments show that our P2P cache system improves the performance of index operations by reducing the amount of information processed by nodes, reducing the number of index messages sent by nodes, and improving the locality of cache pointers.
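Soft state and aggregation were sketched for the related entry above; the remaining ingredient is routing table locality. One plausible reading, shown below purely as an assumption, is to prefer the cache pointer whose overlay ID shares the longest prefix with the requester, as in Pastry-style prefix routing; the IDs and metric are illustrative, not the paper's exact scheme.

```python
def shared_prefix_len(a, b):
    """Length of the common ID prefix between two node IDs."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def choose_index_node(url_holders, requester_id):
    """Among nodes holding a cached copy, pick the one 'nearest'
    to the requester by shared ID prefix (illustrative locality)."""
    return max(url_holders, key=lambda node: shared_prefix_len(node, requester_id))

holders = ["a3f1node", "a3c2node", "b911node"]
print(choose_index_node(holders, requester_id="a3f9node"))  # -> a3f1node
```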

Relevance:

30.00%

Publisher:

Abstract:

With the rapid development of the Internet, the amount of information on the Web is growing explosively, and people often feel overwhelmed when trying to find the information they really need. To overcome this problem, recommender systems help users find relevant information, products, or services by providing personalized recommendations based on their profiles; singular value decomposition (SVD) is a powerful dimensionality-reduction technique widely used in such systems. However, due to its expensive computational requirements and weak performance on large sparse matrices, SVD has been considered inappropriate for practical applications involving massive data. Thus, to extract the information a user is interested in from a massive amount of data, we propose a personalized recommendation algorithm, called ApproSVD, based on approximating the SVD. The trick behind our algorithm is to sample some rows of a user-item matrix, rescale each row by an appropriate factor to form a relatively smaller matrix, and then reduce the dimensionality of the smaller matrix. Finally, we present an empirical study comparing the prediction accuracy of our proposed algorithm with that of Drineas's LINEARTIMESVD algorithm and the standard SVD algorithm on the MovieLens and Flixster datasets, and show that our method has the best prediction quality. Furthermore, to show the superiority of the ApproSVD algorithm, we also compare the prediction accuracy and running time of ApproSVD with the incremental SVD algorithm on the same datasets, and demonstrate that our proposed method has better performance overall.
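The sample-rescale-decompose trick described above follows the generic randomized row-sampling scheme (cf. LINEARTIMESVD). Here is a minimal NumPy sketch under that reading; sampling by squared row norm and the 1/sqrt(c·p_i) rescaling are the standard choices, and ApproSVD's exact factors may differ.

```python
import numpy as np

def appro_svd_sketch(A, sample_rows, k, rng=np.random.default_rng(0)):
    """Sample rows of A with probability proportional to their squared
    norm, rescale to keep expectations unbiased, then take the top-k
    SVD of the much smaller matrix (illustrative randomized scheme)."""
    norms = (A ** 2).sum(axis=1)
    probs = norms / norms.sum()
    idx = rng.choice(A.shape[0], size=sample_rows, replace=True, p=probs)
    S = A[idx] / np.sqrt(sample_rows * probs[idx])[:, None]  # rescaled sample
    # Right singular vectors of S approximate those of A.
    _, _, Vt = np.linalg.svd(S, full_matrices=False)
    return Vt[:k]

ratings = np.random.default_rng(1).random((1000, 50))  # toy user-item matrix
V_k = appro_svd_sketch(ratings, sample_rows=100, k=10)
print(V_k.shape)  # (10, 50): a rank-10 basis from a 10x-smaller SVD
```

The cost saving is the point: the SVD runs on a 100-row sample instead of the full 1000-row matrix, at the price of an approximation error controlled by the sample size.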

Relevance:

30.00%

Publisher:

Abstract:

Background: Given the rising rates of obesity in children and adolescents, developing evidence-based weight loss or weight maintenance interventions that can be widely disseminated, well implemented, and highly scalable is a public health necessity. Such interventions should ensure that adolescents establish healthy weight regulation practices while also reducing eating disorder risk.

Objective:
This study describes an online program, StayingFit, which has two tracks for universal and targeted delivery and was designed to enhance healthy living skills, encourage healthy weight regulation, and reduce weight/shape concerns among high school adolescents.

Methods:
Ninth grade students in two high schools in the San Francisco Bay area and in St Louis were invited to participate. Students who were overweight (body mass index [BMI] >85th percentile) were offered the weight management track of StayingFit; students who were normal weight were offered the healthy habits track. The 12-session program included a monitored discussion group and interactive self-monitoring logs. Measures completed pre- and post-intervention included self-reported height and weight, used to calculate BMI percentile for age and sex and standardized BMI (zBMI); Youth Risk Behavior Survey (YRBS) nutrition data; the Weight Concerns Scale; and the Center for Epidemiological Studies Depression Scale.

Results: A total of 336 students provided informed consent and were included in the analyses. The racial breakdown of the sample was as follows: 46.7% (157/336) multiracial/other, 31.0% (104/336) Caucasian, 16.7% (56/336) African American, and 5.7% (19/336) did not specify; 43.5% (146/336) of students identified as Hispanic/Latino. BMI percentile and zBMI significantly decreased among students in the weight management track. BMI percentile and zBMI did not significantly change among students in the healthy habits track, demonstrating that these students maintained their weight. Weight/shape concerns significantly decreased among participants in both tracks who had elevated weight/shape concerns at baseline. Fruit and vegetable consumption increased for both tracks. Physical activity increased among participants in the weight management track, while soda consumption and television time decreased.

Conclusions: Results suggest that an Internet-based intervention, delivered both universally and in a targeted manner, may support healthy weight regulation, reduce weight/shape concerns among participants at risk for eating disorders, and increase physical activity in high school students. Tailored content and interactive features to encourage behavior change may lead to sustainable improvements in adolescent health.