11 results for web servers

in Deakin Research Online - Australia


Relevance: 70.00%

Abstract:

Web servers are usually located in well-organized data centers, where they connect to the Internet directly through backbones. Meanwhile, application-layer distributed denial of service (AL-DDoS) attacks are a critical threat to the Internet, particularly to business web servers. Several methods have been designed to handle AL-DDoS attacks, but most of them cannot be used in heavy backbones. In this paper, we propose a new method to detect AL-DDoS attacks. Our work distinguishes itself from previous methods by considering AL-DDoS attack detection in heavy backbone traffic. Moreover, the detection of AL-DDoS attacks is easily misled by flash crowd traffic. To overcome this problem, our proposed method constructs a Real-time Frequency Vector (RFV) and characterizes the traffic in real time as a set of models. By examining the entropy of AL-DDoS attacks and flash crowds, these models can be used to recognize real AL-DDoS attacks. We integrate these detection principles into a modularized defense architecture consisting of a head-end sensor, a detection module, and a traffic filter. With swift AL-DDoS detection, the filter lets legitimate requests through while stopping attack traffic. In the experiments, we adopt episodes of real traffic from Sina and Taobao to evaluate our AL-DDoS detection method and architecture. Compared with previous methods, the results show that our approach is very effective in defending against AL-DDoS attacks at backbones. © 2013 Elsevier B.V. All rights reserved.
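
As a rough illustration of the entropy-based discrimination described above, the following Python sketch builds a normalized per-source frequency vector and thresholds its entropy. The function names, the threshold value, and the direction of the comparison are illustrative assumptions, not the paper's calibrated model.

```python
from collections import Counter
from math import log2

def frequency_vector(source_ips):
    """Normalized per-source request frequencies over one traffic window --
    a simplified stand-in for the paper's Real-time Frequency Vector (RFV)."""
    counts = Counter(source_ips)
    total = sum(counts.values())
    return {ip: n / total for ip, n in counts.items()}

def entropy(freq_vector):
    """Shannon entropy of the frequency vector."""
    return -sum(p * log2(p) for p in freq_vector.values() if p > 0)

def classify_window(source_ips, threshold=3.0):
    """Label one window. The threshold and the comparison direction are
    illustrative assumptions, not the paper's calibrated model."""
    h = entropy(frequency_vector(source_ips))
    return "suspected AL-DDoS" if h < threshold else "likely flash crowd"
```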

Relevance: 60.00%

Abstract:

This paper presents a case study of a compromised Web server that was being used to distribute illegal 'warez'. The mechanism by which the server was compromised is discussed, as is the way in which it was found. The hacker organisations that engage in these activities are viewed as a Virtual Community, and their rules and code of ethics are investigated.

Relevance: 60.00%

Abstract:

Web caching is a widely deployed technique for reducing the load on web servers and the latency experienced by web browsers. Peer-to-Peer (P2P) web caching has been a hot research topic in recent years, as it enables scalable and robust designs for decentralized Internet-scale applications. However, many P2P web caching systems suffer from expensive overheads, such as lookup and publish messages, and lack locality awareness. In this paper, we present the development of a locality-aware cache diffusion system that makes use of routing table locality, aggregation, and soft state to overcome these limitations. Analysis and experiments show that our cache diffusion system reduces the amount of information processed by nodes, reduces the number of index messages sent by nodes, and improves the locality of cache pointers.
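
The soft-state, locality-aware cache pointers described above might look roughly like the following sketch. The node-ID prefix locality measure and the pointer interface are assumptions about the underlying overlay, not the paper's actual design.

```python
import time

class CachePointer:
    """Soft-state pointer to a peer believed to hold a cached object: it is
    refreshed while the entry is republished, and silently forgotten once
    its TTL lapses."""
    def __init__(self, peer_id, ttl=60.0):
        self.peer_id = peer_id
        self.expires = time.time() + ttl

    def alive(self):
        return time.time() < self.expires

def shared_prefix_len(a, b):
    """Locality proxy: length of the shared node-ID prefix, as in
    prefix-routing overlays (an assumption about the overlay used)."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def closest_live_pointer(my_id, pointers):
    """Prefer the live pointer whose peer shares the longest ID prefix with
    this node, discarding expired (soft-state) entries along the way."""
    live = [p for p in pointers if p.alive()]
    return max(live, key=lambda p: shared_prefix_len(my_id, p.peer_id),
               default=None)
```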

Relevance: 60.00%

Abstract:

Distributed denial of service (DDoS) attacks are a continuing critical threat to the Internet. Derived from the lower layers, new application-layer DDoS attacks, which use legitimate HTTP requests to overwhelm victim resources, are harder to detect. The situation can be even more serious when such attacks mimic or occur during the flash crowd event of a popular Website. In this paper, we present the design and implementation of CALD, an architectural extension that protects Web servers against various DDoS attacks that masquerade as flash crowds. CALD provides real-time detection using mess tests, but differs from other systems that use similar methods. First, CALD uses a front-end sensor to monitor the traffic, which may contain various DDoS attacks or flash crowds. An intense pulse in the traffic indicates the possible presence of anomalies, because this is the basic property of DDoS attacks and flash crowds. Once abnormal traffic is identified, the sensor sends an ATTENTION signal to activate the attack detection module. Second, CALD dynamically records the average frequency of each source IP and checks the total mess extent. Theoretically, the mess extent of DDoS attacks is larger than that of flash crowds. Thus, using parameters from the attack detection module, the filter is able to let legitimate requests through while stopping the attack traffic. Third, CALD can separate the security modules from the Web servers, so that the kernel web services retain maximum performance regardless of DDoS harassment. In the experiments, traffic records from www.sina.com and www.taobao.com demonstrate the value of CALD.
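
A minimal sketch of the sensor-plus-filter pipeline follows, assuming the "mess extent" can be read as entropy over per-source request shares. The class names, threshold, and pulse factor are illustrative, not CALD's actual parameters.

```python
from collections import Counter
from math import log2

MESS_THRESHOLD = 4.0   # illustrative; CALD derives its own parameters

def mess_extent(src_ips):
    """Entropy over per-source request shares -- one plausible reading of
    the abstract's 'mess extent'; attacks are expected to score higher."""
    counts = Counter(src_ips)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((n / total) * log2(n / total) for n in counts.values())

class FrontEndSensor:
    """Watches the aggregate request rate; an intense pulse raises ATTENTION."""
    def __init__(self, baseline_rate, factor=3.0):
        self.baseline, self.factor = baseline_rate, factor

    def attention(self, window_rate):
        return window_rate > self.factor * self.baseline

def filter_window(src_ips, window_rate, sensor):
    """Let traffic pass unless the sensor fires AND the window's mess extent
    looks attack-like; flash crowds trip the sensor but pass the check."""
    if sensor.attention(window_rate) and mess_extent(src_ips) > MESS_THRESHOLD:
        return []          # drop: suspected DDoS masquerading as a flash crowd
    return list(src_ips)   # forward legitimate or flash-crowd traffic
```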

Relevance: 60.00%

Abstract:

Many web servers contain dangerous pages (which we name eigenpages) that can reveal their vulnerabilities. Some worms, such as Santy, therefore locate their targets by searching for these eigenpages through search engines with well-crafted queries. In this paper, we focus on the modeling and containment of these special worms that target web applications. We propose a containment system based on honey pots: search engines randomly insert, among the search results for arriving queries, a few honey pages that lead visitors to pre-established honey pots. Infected hosts can then be detected and reported to the search engines when their malicious scans hit the honey pots. We find that the Santy worm can be effectively stopped by inserting no more than two honey pages in every one hundred search results. We also solve the challenging issue of dynamically generating matching honey pages for dynamically arriving queries. Finally, a prototype is implemented to prove the technical feasibility of this system. © 2013 by CESER Publications.
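
The honey-page insertion step could be sketched as follows. insert_honey_pages and make_honey_page are hypothetical helpers, and the 2% rate is taken from the abstract's two-per-hundred finding.

```python
import random

HONEY_RATE = 0.02   # no more than two honey pages per hundred results (abstract)

def insert_honey_pages(results, query, make_honey_page):
    """Splice dynamically generated honey pages into a search-result list.
    make_honey_page(query) is a hypothetical helper that synthesizes a page
    matching the query and linking to a pre-established honey pot."""
    out = list(results)
    n_honey = max(1, round(len(results) * HONEY_RATE))
    for _ in range(n_honey):
        out.insert(random.randrange(len(out) + 1), make_honey_page(query))
    return out
```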

Relevance: 40.00%

Abstract:

Quality-of-Service is an important issue in multimedia applications; so far, most research has focused on bandwidth guarantees, and little attention has been paid to guaranteeing server performance. In this paper, we address the server performance guarantee under the prerequisite of guaranteed bandwidth quality. We take advantage of anycast to find the "best" multimedia server, in terms of bandwidth, among a distributed server group. The request is submitted to the selected server, and the addresses of the selected server's neighbours (all the servers with feasible paths) are delivered to it at the same time. If the selected server cannot guarantee the QoS for the request in terms of server performance, the proposed QoS-Aware Server Load Deviation (QASLD) mechanism is employed, which delivers the request to one of the server's neighbours until a suitable server is found that can guarantee the required performance. Our experiments show that the proposed QASLD algorithm works well.
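
A rough sketch of the QASLD deviation loop, under the assumption that servers expose can_guarantee(), serve(), and a neighbours list; these interfaces are illustrative, not the paper's API.

```python
def qasld_dispatch(request, server, visited=None):
    """QoS-Aware Server Load Deviation, roughly sketched: if the anycast-
    selected server cannot guarantee the performance target, the request is
    deviated to a neighbour (a server with a feasible path) until some
    server can serve it."""
    visited = visited if visited is not None else set()
    visited.add(server.id)
    if server.can_guarantee(request):
        return server.serve(request)
    for nb in server.neighbours:
        if nb.id not in visited:
            result = qasld_dispatch(request, nb, visited)
            if result is not None:
                return result
    return None   # no reachable server can guarantee QoS for this request
```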

Relevance: 30.00%

Abstract:

Most current web-based database systems suffer from poor performance, complicated heterogeneity, and synchronization issues. In this paper, we propose a novel mechanism for web-based database systems built on multicast and anycast protocols to address these issues. In the model, we place a castway, a network interface for the database server, between the database server and the Web server. The castway handles the multicast and anycast requests and responses. We propose a requirement-based server selection algorithm and an atomic multicast update algorithm for data queries and synchronization. The model is independent of the Internet environment; it can synchronise the databases efficiently and automatically. Furthermore, the model reduces the possibility of transaction deadlocks.
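
A hypothetical sketch of the castway's two roles, requirement-based server selection and an all-or-nothing multicast update. The server interface (capacity, load, prepare/commit/abort) is assumed for illustration only.

```python
class Castway:
    """Hypothetical sketch of the castway interface between the Web server
    and a set of replicated database servers."""
    def __init__(self, db_servers):
        self.db_servers = db_servers

    def select_server(self, requirement):
        """Requirement-based selection: among servers able to satisfy the
        query's requirement, pick the least loaded (anycast-style)."""
        feasible = [s for s in self.db_servers if s.capacity >= requirement]
        return min(feasible, key=lambda s: s.load, default=None)

    def multicast_update(self, update):
        """Atomic multicast update, reduced to all-or-nothing: the write is
        applied on every replica or rolled back everywhere, keeping the
        replicated databases synchronized."""
        prepared = [s for s in self.db_servers if s.prepare(update)]
        if len(prepared) == len(self.db_servers):
            for s in self.db_servers:
                s.commit(update)
            return True
        for s in prepared:
            s.abort(update)
        return False
```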

Relevance: 30.00%

Abstract:

The cost of recovery protocols matters for system performance, both during normal operation (as overhead) and after failure (as the time taken to recover failed transactions). The cost of recovery protocols for web database systems has received little attention. In this paper, we present a quantitative study of the cost of recovery protocols. For this purpose, we use an experimental setup to evaluate the performance of two recovery algorithms: the two-phase commit algorithm and a log-based algorithm. Our work is a step towards building reliable protocols for web database systems.
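
For a sense of where the cost comes from, here is a textbook two-phase commit instrumented to count messages and forced log writes; the participant interface is assumed, and this is a generic sketch, not the paper's experimental setup.

```python
def two_phase_commit(participants, txn):
    """Textbook two-phase commit, instrumented to count the messages and
    forced log writes that dominate a recovery protocol's cost."""
    cost = {"messages": 0, "log_writes": 0}
    votes = []
    for p in participants:                 # phase 1: voting
        cost["messages"] += 1              # PREPARE sent
        votes.append(p.prepare(txn))
        cost["log_writes"] += 1            # participant force-writes prepare
        cost["messages"] += 1              # vote reply
    decision = all(votes)
    cost["log_writes"] += 1                # coordinator's decision record
    for p in participants:                 # phase 2: completion
        cost["messages"] += 1              # COMMIT/ABORT sent
        (p.commit if decision else p.abort)(txn)
        cost["log_writes"] += 1            # participant's outcome record
    return decision, cost
```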

Relevance: 30.00%

Abstract:

In this paper, we propose a novel model for web-based database systems based on the multicast and anycast protocols. In the model, we design a middleware, the castway, which sits between the database server and the Web server. Every castway in the distributed system operates independently as both a multicast node and an anycast node. The proposed mechanism balances the workload among the distributed database servers and offers the "best" server for each query. Three algorithms are employed in the model: a requirement-based probing algorithm for anycast routing, an atomic multicast update algorithm for database synchronization, and a job deviation algorithm for balancing the system workload. Simulations and experiments show that the proposed model works very well.
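
The job deviation algorithm might be sketched as below, moving queued queries off overloaded nodes; the load()/queue/enqueue() interface and the high-water threshold are illustrative assumptions.

```python
def deviate_jobs(servers, high_water=0.8):
    """Job deviation, sketched: drain queued queries from overloaded
    castway nodes to the currently least-loaded peer."""
    for s in servers:
        while s.load() > high_water and s.queue:
            target = min(servers, key=lambda t: t.load())
            if target is s:
                break               # nowhere better to deviate to
            target.enqueue(s.queue.pop())
```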

Relevance: 30.00%

Abstract:

In this paper, we propose a model for discovering frequent sequential patterns (phrases) that can be used as profile descriptors of documents. Numerous phrases can undoubtedly be obtained using data mining algorithms; however, it is difficult to use these phrases effectively to answer what users want. Therefore, we present a pattern taxonomy extraction model that extracts descriptive frequent sequential patterns by pruning the meaningless ones. The model is then extended and tested by applying it to an information filtering system. The experimental results show that pattern-based methods outperform keyword-based methods. The results also indicate that removing meaningless patterns not only reduces the cost of computation but also improves the effectiveness of the system.
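
One common way to prune meaningless patterns is to keep only "closed" patterns, those not subsumed by a longer pattern with the same support; the sketch below assumes this criterion, which may differ from the paper's own.

```python
def prune_meaningless(patterns):
    """Keep only 'closed' patterns: drop any phrase that is a sub-sequence
    of a longer pattern with at least the same support.
    `patterns` maps tuple-of-terms -> support count."""
    def is_subseq(a, b):
        it = iter(b)
        return all(term in it for term in a)   # order-preserving containment
    return {
        pat: sup for pat, sup in patterns.items()
        if not any(pat != other and is_subseq(pat, other)
                   and patterns[other] >= sup
                   for other in patterns)
    }

pats = {("web",): 10, ("web", "server"): 10, ("cache",): 7}
print(prune_meaningless(pats))   # ("web",) is pruned: it adds no information
```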

Relevance: 30.00%

Abstract:

One characteristic of current Web services is that many clients request the same or similar service from a group of replicated servers, e.g. music or movie downloading in peer-to-peer networks. Most of the time, the servers are heterogeneous in terms of service rate. Much research has been done for the homogeneous environment, but little for the heterogeneous scenario. It is important and urgent to have models of heterogeneous server groups for the design and analysis of current Internet applications. In this paper, we deploy an approximation method that transforms a heterogeneous system into a group of homogeneous systems, so that previous results from homogeneous studies can be applied to heterogeneous cases. To test how closely the proposed model approximates real applications, we conducted simulations to obtain the degree of similarity, using two common strategies: a random selection algorithm and a First-Come-First-Serve (FCFS) algorithm. The simulations indicate that the approximation model works well.
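
A minimal sketch of the transformation, assuming it preserves aggregate service capacity, together with the two dispatch strategies named in the abstract; the exact construction in the paper may differ.

```python
import random

def homogeneous_equivalent(service_rates):
    """Replace n heterogeneous servers by n identical ones whose common
    rate preserves the aggregate capacity -- one natural reading of the
    abstract's transformation, not necessarily the paper's exact one."""
    n = len(service_rates)
    return [sum(service_rates) / n] * n

def random_select(n_servers):
    """Random-selection strategy from the experiments: pick any server."""
    return random.randrange(n_servers)

def fcfs_select(next_free_times):
    """FCFS strategy: the request goes to the server that frees up first."""
    return min(range(len(next_free_times)), key=lambda i: next_free_times[i])
```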