51 results for Distributed Denial-of-Service
Abstract:
The mechanical behaviour of composite materials differs from that of conventional structural materials owing to their heterogeneous and anisotropic nature. Different types of defects and anomalies are induced in these materials during fabrication, and components made of composites develop further damage during their service life. The performance and life of such components are governed by the combined effect of all these defects and damage. Porosity, voids and inclusions are among the defects that can be induced during fabrication, while matrix cracks, interface debonds, delaminations and fiber breakage are the major types of service-induced damage of concern. During the service life of a composite component, one type of damage can grow and initiate another: matrix cracks can gradually grow to the interface and initiate debonds, and interface debonds in a particular plane can lead to delaminations. Consequently, the combined effect of different types of distributed damage causes failure of the component. A set of non-destructive evaluation (NDE) methods is well established for testing conventional metallic materials, and some of these, such as ultrasonics, radiography, thermography, fiber optics and acoustic emission techniques, can also be applied to composite materials either directly or with some modification. The detection, evaluation and characterization of the different types of defects and damage encountered in composite materials and structures using these NDE tools is discussed briefly in this paper.
Abstract:
A distributed system has many servers to achieve increased availability of service and fault tolerance. Balancing the load among these servers is an important task in achieving better performance. Various hardware- and software-based load balancing solutions are available. However, there is always overhead on the servers and the load balancer while they communicate with each other and share their availability and current load status. The load balancer is busy listening to clients' requests and redirecting them, and it also needs to collect the servers' availability status frequently to keep itself up to date. The servers are busy not only providing service to clients but also sharing their current load information with the load balancing algorithm. In this paper we propose and discuss the concept and system model of a software-based load balancer with an Availability-Checker and Load Reporters (LB-ACLRs), which reduces the overhead on the servers and the load balancer. We also describe the architectural components with their roles and responsibilities, and present a detailed analysis showing how the proposed Availability Checker significantly increases the performance of the system.
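The abstract above describes a software load balancer assisted by an availability checker and load reporters. The following Python fragment is a minimal sketch of that idea only; the class names, the probe callback, and the least-loaded selection rule are illustrative assumptions, not the LB-ACLR design itself.

```python
import threading
import random

class AvailabilityChecker:
    """Periodically records each server's availability and load so the
    balancer never has to query the servers at request time."""
    def __init__(self, servers, probe):
        self.servers = servers              # list of server identifiers
        self.probe = probe                  # callable: server -> (is_up, load)
        self.status = {s: (True, 0.0) for s in servers}
        self._lock = threading.Lock()

    def run_once(self):
        for s in self.servers:
            up, load = self.probe(s)
            with self._lock:
                self.status[s] = (up, load)

    def snapshot(self):
        with self._lock:
            return dict(self.status)

class LoadBalancer:
    """Routes each request to the least-loaded server reported available."""
    def __init__(self, checker):
        self.checker = checker

    def pick_server(self):
        candidates = [(load, s) for s, (up, load) in
                      self.checker.snapshot().items() if up]
        if not candidates:
            raise RuntimeError("no available servers")
        return min(candidates)[1]

# Toy usage: a fake probe that reports a random load for three servers.
checker = AvailabilityChecker(["s1", "s2", "s3"], lambda s: (True, random.random()))
checker.run_once()
print(LoadBalancer(checker).pick_server())
```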
Abstract:
Mesh topologies are important for large-scale peer-to-peer systems that use low-power transceivers. The Quality of Service (QoS) in such systems is known to decrease as the scale increases. We present a scalable dissemination approach that exploits all the shortest paths between a pair of nodes and improves the QoS. Despite the presence of multiple shortest paths in a system, we show that these paths cannot be exploited by spreading the messages over them in a simple round-robin manner; nodes along one of these paths will always handle more messages than the nodes along the other paths. We characterize the set of shortest paths between a pair of nodes in regular mesh topologies and use this characterization to derive rules for effectively spreading the messages over all the available paths. These rules ensure that all nodes at the same distance from the source handle roughly the same number of messages. Modeling the multihop propagation in the mesh topology as a multistage queuing network, we present simulation results for a variety of scenarios that include link failures and propagation irregularities to reflect real-world characteristics. Our method achieves improved QoS in all these scenarios.
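In a regular 2D mesh, the number of shortest paths to a destination that is dx hops away along one axis and dy along the other is the binomial coefficient C(dx+dy, dx). The sketch below forwards each message to a next hop with probability proportional to the number of shortest paths continuing through it, so every shortest path is used with equal probability; this is a generic construction for illustration, not necessarily the exact spreading rules derived in the paper.

```python
from math import comb
import random

def num_shortest_paths(dx, dy):
    """Monotone lattice paths from (0, 0) to (dx, dy) in a 2D mesh."""
    return comb(dx + dy, dx)

def choose_next_hop(dx, dy):
    """
    Pick the next hop towards a destination dx hops away on the x axis and
    dy hops away on the y axis, weighting each direction by the number of
    shortest paths that continue through it, so that each of the
    C(dx+dy, dx) shortest paths is used with equal probability.
    """
    if dx == 0 and dy == 0:
        return None                          # already at the destination
    weight_x = num_shortest_paths(dx - 1, dy) if dx > 0 else 0
    weight_y = num_shortest_paths(dx, dy - 1) if dy > 0 else 0
    if random.random() < weight_x / (weight_x + weight_y):
        return "x", (dx - 1, dy)
    return "y", (dx, dy - 1)

# Toy usage: route one message across a (4, 3) offset and print the hop sequence.
pos, hops = (4, 3), []
while pos != (0, 0):
    direction, pos = choose_next_hop(*pos)
    hops.append(direction)
print(hops)
```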
Abstract:
Pricing is an effective tool for controlling congestion and achieving quality of service (QoS) provisioning for multiple differentiated levels of service. In this paper, we consider the problem of pricing for congestion control in a network of nodes with a single service class and multiple queues, and present a multi-layered pricing scheme. We propose an algorithm for finding the optimal state-dependent price levels for the individual queues at each node. The pricing policy depends on a weighted average queue length at each node; this reduces frequent price variations and is in the spirit of the random early detection (RED) mechanism used in TCP/IP networks. Our numerical results show a considerable improvement over a recently proposed related scheme in terms of both throughput and delay. In particular, our approach exhibits a throughput improvement in the range of 34 to 69 percent, over all routes, in all the cases studied.
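The weighted-average-queue-length idea can be illustrated with a small sketch: an exponentially weighted moving average of the queue length, in the spirit of RED, mapped onto discrete price levels. The averaging weight, thresholds and price values below are illustrative assumptions, not the paper's optimal price levels.

```python
def update_avg_queue(avg, current_qlen, w=0.002):
    """Exponentially weighted moving average of the queue length,
    in the spirit of RED's averaging (w is the averaging weight)."""
    return (1.0 - w) * avg + w * current_qlen

def price_for(avg_qlen, thresholds=(10, 30, 60), prices=(1.0, 2.0, 4.0, 8.0)):
    """Map the averaged queue length to one of several price levels.
    The thresholds and price values here are purely illustrative."""
    for level, threshold in enumerate(thresholds):
        if avg_qlen < threshold:
            return prices[level]
    return prices[-1]

# Toy usage: feed a ramp of instantaneous queue lengths through the averager.
avg = 0.0
for qlen in range(0, 100, 5):
    avg = update_avg_queue(avg, qlen, w=0.1)
    print(f"qlen={qlen:3d}  avg={avg:6.2f}  price={price_for(avg)}")
```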
Abstract:
Bandwidth allocation for multimedia applications in the case of network congestion and failure poses technical challenges due to the bursty and delay-sensitive nature of these applications. The growth of multimedia services on the Internet and the development of agent technology have led us to investigate new techniques for resolving bandwidth issues in multimedia communications. Agent technology is emerging as a flexible and promising solution for network resource management and QoS (Quality of Service) control in a distributed environment. In this paper, we propose an adaptive bandwidth allocation scheme for multimedia applications that deploys static and mobile agents. It is a run-time allocation scheme that functions at the network nodes. The technique adaptively finds an alternate patch-up route for every congested or failed link and reallocates the bandwidth for the affected multimedia applications. The method has been tested, both analytically and by simulation, with various network sizes and conditions, and the results are presented to assess the performance and effectiveness of the approach. This work also demonstrates some of the benefits of agent-based schemes in providing flexibility, adaptability, software reusability, and maintainability. (C) 2004 Elsevier Inc. All rights reserved.
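As a rough illustration of the "patch-up route" idea only (not of the agent framework itself), the sketch below finds a short alternate route around a congested or failed link with enough spare capacity for the affected demand; the graph representation and function names are assumptions made for the example.

```python
from collections import deque

def patchup_route(links, capacity, used, failed, demand):
    """
    Find an alternate route between the endpoints of a failed or congested
    link `failed = (u, v)` with at least `demand` spare capacity on every hop.
    `links` maps each node to its neighbours; `capacity` and `used` map
    directed (a, b) edges to bandwidth figures. Breadth-first search keeps
    the patch-up route short.
    """
    u, v = failed
    queue, seen = deque([[u]]), {u}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == v:
            return path
        for nxt in links[node]:
            edge = (node, nxt)
            if edge == failed or nxt in seen:
                continue
            if capacity[edge] - used[edge] >= demand:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Toy usage: reroute 2 Mb/s of affected traffic around the failed link (A, B).
links = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
capacity = {(n, m): 10 for n in links for m in links[n]}
used = {e: 0 for e in capacity}
print(patchup_route(links, capacity, used, ("A", "B"), demand=2))
```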
Abstract:
The IEEE 802.11e medium access control (MAC) standard provides distributed service differentiation, or Quality-of-Service (QoS), by employing a priority system. In 802.11e networks, traffic is classified into different priorities or access categories (ACs). Nodes maintain separate queues for each AC, and the packets at the head-of-line (HOL) of each queue contend for channel access using AC-specific parameters. Such a mechanism allows the provision of differentiated QoS, where high-priority, performance-sensitive traffic such as voice and video enjoys lower delay, greater throughput and smaller loss compared to low-priority traffic (e.g. file transfer). The standard implicitly assumes that nodes are honest and will truthfully classify incoming traffic into its appropriate AC. However, in the absence of any additional mechanism, selfish users can gain enhanced performance by selectively classifying low-priority traffic as high priority, potentially destroying the QoS capability of the system.
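A small sketch of the mechanism described above: traffic is classified into access categories, and each AC's head-of-line packet contends with AC-specific parameters. The parameter values shown are the commonly cited EDCA defaults and are meant only as illustration; exact values depend on the PHY in use.

```python
import random

# Commonly cited default EDCA parameters (AIFSN, CWmin, CWmax) per access
# category; treat these as illustrative rather than normative.
EDCA = {
    "AC_VO": {"aifsn": 2, "cwmin": 3,  "cwmax": 7},     # voice
    "AC_VI": {"aifsn": 2, "cwmin": 7,  "cwmax": 15},    # video
    "AC_BE": {"aifsn": 3, "cwmin": 15, "cwmax": 1023},  # best effort
    "AC_BK": {"aifsn": 7, "cwmin": 15, "cwmax": 1023},  # background
}

def classify(traffic_type):
    """Honest mapping of traffic types to access categories."""
    return {"voice": "AC_VO", "video": "AC_VI",
            "file_transfer": "AC_BK"}.get(traffic_type, "AC_BE")

def backoff_slots(ac):
    """Initial contention delay (in slots) for the HOL packet of an AC:
    AIFS spacing plus a uniform backoff drawn from [0, CWmin]."""
    p = EDCA[ac]
    return p["aifsn"] + random.randint(0, p["cwmin"])

# Toy contention: the AC with the smallest drawn delay wins the channel.
hol = {ac: backoff_slots(ac) for ac in EDCA}
print(classify("voice"), min(hol, key=hol.get), hol)
```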
Abstract:
Represented by approximately 85 species, Hemidactylus is one of the most diverse and widely distributed genera of reptiles in the world. In the Indian subcontinent, the genus is represented by 28 species, of which at least 13 are endemic to the region. Here, we report the phylogeny of the Indian Hemidactylus geckos based on mitochondrial and nuclear DNA markers sequenced from multiple individuals of widely distributed as well as endemic congeners of India. The results indicate that a majority of the species distributed in India form a distinct clade whose members are largely confined to the Indian subcontinent, thus representing a unique Indian radiation. The remaining Hemidactylus geckos of India belong to two other geographical clades representing the Southeast Asian and West-Asian arid-zone species. Additionally, the three widely distributed commensal species (H. brookii, H. frenatus and H. flaviviridis) are nested within the Indian radiation, suggesting their Indian origin. Dispersal-vicariance analysis also supports their Indian origin and subsequent dispersal out of India into the West-Asian arid zone and Southeast Asia. Thus, the Indian subcontinent has served as an important arena for diversification among the Hemidactylus geckos and for the evolution and spread of its commensal geckos. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
The high cost and extraordinary demands of sophisticated air defence systems pose hard challenges to the managers and engineers who plan the operation and maintenance of such systems. This paper presents a study aimed at developing simulation and systems-analysis techniques for the effective planning and efficient operation of small fleets of aircraft, typical of the air force of a developing country. We consider an important aspect of fleet management: the problem of resource allocation for achieving a prescribed operational effectiveness of the fleet. At this stage, we consider a single flying-base, where the operationally ready aircraft are stationed, and a repair-depot, where the planes are overhauled. An important measure of operational effectiveness is ‘availability’, which may be defined as the expected fraction of the fleet fit for use at a given instant. The tour of aircraft through a cycle of ‘operationally ready’ and ‘scheduled overhaul’ phases in the flying-base, repair-depot system is represented first by a deterministic flow process and then by a cyclic queuing process. Initially, the steady-state availability at the flying-base is computed under the assumptions of Poisson arrivals, exponential service times and an equivalent single-server repair-depot. This analysis also brings out the effect of fleet size on availability, and defines a ‘small’ fleet essentially in terms of the important traffic parameter of service rate over maximum arrival rate. A simulation model of the system has been developed using GPSS to study sensitivity to distributional assumptions, to validate the principal assumptions of the analytical model, such as the single-server assumption, and to obtain confidence intervals for the statistical parameters of interest.
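The analytical model described (Poisson arrivals, exponential service, an equivalent single-server repair-depot) is essentially the classic finite-source machine-repair queue. The sketch below computes steady-state availability for such a model; the fleet size and the failure and repair rates are purely illustrative numbers, not values from the study.

```python
from math import factorial

def availability(fleet_size, fail_rate, repair_rate):
    """
    Steady-state availability of a fleet served by a single repair channel,
    using the finite-source (machine-repair) M/M/1 queue:
        P_n  proportional to  N! / (N - n)! * (fail_rate / repair_rate)**n
    where n is the number of aircraft at the repair depot. Availability is
    the expected fraction of the fleet that is operationally ready.
    """
    rho = fail_rate / repair_rate
    weights = [factorial(fleet_size) // factorial(fleet_size - n) * rho**n
               for n in range(fleet_size + 1)]
    total = sum(weights)
    expected_in_repair = sum(n * w for n, w in enumerate(weights)) / total
    return 1.0 - expected_in_repair / fleet_size

# Toy usage: 10 aircraft, each failing once per 100 hours, repaired in 20 hours.
print(round(availability(10, 1 / 100, 1 / 20), 3))
```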
Abstract:
We develop new scheduling algorithms for the IEEE 802.16d OFDMA/TDD-based broadband wireless access system, in which radio resources in both time and frequency slots are dynamically shared by all users. Our objective is to provide a fair and efficient allocation to all users that satisfies their quality-of-service requirements.
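The abstract does not spell out the algorithms, so the fragment below shows only a generic proportional-fair baseline for assigning OFDMA time-frequency slots, to illustrate the fairness/efficiency trade-off involved; it is not the scheduler proposed in the paper, and all names and parameters are assumptions.

```python
def proportional_fair_assign(rates, avg_throughput, beta=0.1):
    """
    Assign each time-frequency slot to the user with the largest ratio of
    instantaneous achievable rate to smoothed average throughput, then update
    the averages. `rates[slot][user]` is the achievable rate of `user` on
    `slot`; `avg_throughput[user]` is its running average.
    """
    assignment = {}
    for slot, per_user in enumerate(rates):
        winner = max(per_user,
                     key=lambda u: per_user[u] / max(avg_throughput[u], 1e-9))
        assignment[slot] = winner
        for user in avg_throughput:
            served = per_user[user] if user == winner else 0.0
            avg_throughput[user] = (1 - beta) * avg_throughput[user] + beta * served
    return assignment

# Toy usage: three slots, two users with different channel conditions per slot.
rates = [{"u1": 2.0, "u2": 1.0}, {"u1": 0.5, "u2": 1.5}, {"u1": 1.0, "u2": 1.0}]
print(proportional_fair_assign(rates, {"u1": 1.0, "u2": 1.0}))
```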
Abstract:
This paper reports new results concerning the capabilities of a family of service disciplines aimed at providing per-connection end-to-end delay (and throughput) guarantees in high-speed networks. This family consists of the class of rate-controlled service disciplines, in which traffic from a connection is reshaped to conform to specific traffic characteristics at every hop on its path. When used together with a scheduling policy at each node, this reshaping enables the network to provide end-to-end delay guarantees to individual connections. The main advantages of this family of service disciplines are their implementation simplicity and flexibility. On the other hand, because the delay guarantees are obtained by summing worst-case delays at each node, it has also been argued that the resulting bounds are very conservative, which may more than offset the benefits. In particular, other service disciplines, such as those based on Fair Queueing or Generalized Processor Sharing (GPS), have been shown to provide much tighter delay bounds. As a result, these disciplines, although more complex from an implementation point of view, have been considered for the purpose of providing end-to-end guarantees in high-speed networks. In this paper, we show that through 'proper' selection of the reshaping to which the traffic of a connection is subjected, the penalty incurred by computing end-to-end delay bounds from worst cases at each node can be alleviated. Specifically, we show how rate-controlled service disciplines can be designed to outperform the Rate Proportional Processor Sharing (RPPS) service discipline. Based on these findings, we believe that rate-controlled service disciplines provide a very powerful and practical solution to the problem of providing end-to-end guarantees in high-speed networks.
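The central ingredient of rate-controlled disciplines is reshaping a connection's traffic at every hop before it is scheduled. The sketch below implements a generic (sigma, rho) regulator that computes the earliest conforming departure time of each packet; it is one simple instance of such a reshaper, not the specific reshaping rules analysed in the paper.

```python
def reshape(arrivals, sigma, rho):
    """
    (sigma, rho) regulator: a packet of `size` bits arriving at time `t` is
    released at the earliest instant at which the token bucket (depth `sigma`
    bits, fill rate `rho` bits/s) holds enough tokens for it. Packets of the
    same connection are released in FIFO order. `arrivals` is a list of
    (t, size) pairs sorted by arrival time.
    """
    tokens, last, out = sigma, 0.0, []
    for t, size in arrivals:
        assert size <= sigma, "a packet larger than the bucket never conforms"
        start = max(t, last)                                 # FIFO ordering
        tokens = min(sigma, tokens + rho * (start - last))   # refill since last release
        release = start if tokens >= size else start + (size - tokens) / rho
        tokens = min(sigma, tokens + rho * (release - start)) - size
        out.append(release)
        last = release
    return out

# Toy usage: a 3-packet burst smoothed by a 1000-bit bucket filling at 500 bit/s.
print(reshape([(0.0, 800), (0.1, 800), (0.2, 800)], sigma=1000, rho=500))
```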
Abstract:
We provide a comparative performance evaluation of packet queuing and link admission strategies for low-speed wide area network (WAN) links (e.g. 9600 bps, 64 kbps) that interconnect relatively high-speed, connectionless local area networks (e.g. 10 Mbps). In particular, we are concerned with the problem of providing differential quality of service to inter-LAN remote terminal and file transfer sessions, and throughput fairness between inter-LAN file transfer sessions. We use analytical and simulation models to study a variety of strategies. Our work also serves to address the performance comparison of connectionless versus connection-oriented interconnection of CLNS LANs. When provision of priority at the physical transmission level is not feasible, we show, for low-speed WAN links (e.g. 9600 bps), the superiority of connection-oriented interconnection of connectionless LANs, with segregation of traffic streams with different QoS requirements into different window flow controlled connections. Such an implementation can easily be obtained by transporting IP packets over an X.25 WAN. For 64 kbps WAN links, there is a drop in file transfer throughputs owing to connection overheads, but the other advantages are retained. The same solution also helps to provide throughput fairness between inter-LAN file transfer sessions. We also corroborate some of our modelling results with results from an experimental test-bed.
Abstract:
In this paper we consider an N x N non-blocking, space-division ATM switch with input cell queueing. At each input, the cell arrival process comprises geometrically distributed bursts of consecutive cells for the various outputs. Motivated by the fact that some input links may be connected to metropolitan area networks, and others directly to B-ISDN terminals, we study the situation where there are two classes of inputs with different mean burst lengths. We show that when inputs contend for an output, giving priority to the input with the smaller expected burst length yields a larger saturation throughput than giving the reverse priority. Further, giving priority to the less bursty traffic can give better throughput than if all the inputs carried this less bursty traffic. We derive the asymptotic (as N tends to infinity) saturation throughputs for each priority class.
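A rough simulation sketch of the setting described above: an input-queued switch under saturation with two burstiness classes, where output contention is resolved in favour of the class with the smaller mean burst length. The switch size, burst parameters and run length are illustrative, and the model is deliberately simplified relative to the paper's analysis.

```python
import random

def saturation_throughput(n, mean_burst, priority, slots=20000, seed=1):
    """
    Saturated N x N input-queued switch with two burstiness classes.
    Every input always has a head-of-line (HOL) cell belonging to a
    geometrically distributed burst of cells for one output. In each slot,
    every contended output serves one HOL cell, preferring inputs of the
    class named by `priority`. Returns cells served per input per slot.
    """
    rng = random.Random(seed)
    cls = [0 if i < n // 2 else 1 for i in range(n)]   # half of each class
    dest = [rng.randrange(n) for _ in range(n)]        # current HOL destination
    served = 0
    for _ in range(slots):
        contenders = {}
        for i in range(n):
            contenders.setdefault(dest[i], []).append(i)
        for _out, inputs in contenders.items():
            preferred = [i for i in inputs if cls[i] == priority]
            winner = rng.choice(preferred or inputs)
            served += 1
            # A burst of mean length m continues with probability 1 - 1/m.
            if rng.random() < 1.0 / mean_burst[cls[winner]]:
                dest[winner] = rng.randrange(n)        # burst ends; new output
            # losing inputs keep their HOL cells for the next slot
    return served / (n * slots)

# Toy usage: 16 inputs; class 0 bursts average 2 cells, class 1 bursts 16 cells.
print(round(saturation_throughput(16, {0: 2.0, 1: 16.0}, priority=0), 3))
```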
Abstract:
Analytical and numerical solutions have been obtained for some moving boundary problems associated with Joule heating and distributed absorption of oxygen in tissues. Several questions are examined concerning the solutions of the classical sharp melting-front formulation and the classical enthalpy formulation, in which solid, liquid and mushy regions are present. The thermal properties and heat sources in the solid and liquid regions are taken to be unequal. The short-time analytical solutions presented here provide useful information, and an effective numerical scheme is proposed which is accurate and simple.
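For illustration, the sketch below applies the classical explicit enthalpy method to a one-dimensional melting problem with a uniform volumetric heat source, which is the kind of formulation the abstract refers to. Equal solid and liquid properties are assumed to keep the sketch short (the paper treats unequal ones), and the constants, grid and source term are all illustrative.

```python
import numpy as np

# Illustrative nondimensional constants.
k, c, rho, L, Tm = 1.0, 1.0, 1.0, 1.0, 0.0   # conductivity, specific heat, density, latent heat, melt temperature
nx, dx, dt, steps = 51, 0.02, 5e-5, 4000     # grid and (stable) explicit time step
source = 2.0                                 # uniform volumetric heating, e.g. Joule heating

x = np.linspace(0.0, (nx - 1) * dx, nx)
H = np.full(nx, -rho * c * 1.0)              # volumetric enthalpy: solid initially at T = Tm - 1
H[0] = rho * L + rho * c * 1.0               # boundary node held liquid at T = Tm + 1

def temperature(H):
    """Recover temperature from enthalpy on the solid, mushy or liquid branch."""
    T = np.where(H < 0.0, H / (rho * c), 0.0)                # solid: H = rho*c*(T - Tm)
    T = np.where(H > rho * L, (H - rho * L) / (rho * c), T)  # liquid: sensible heat above latent
    return T + Tm                                            # mushy region sits at Tm

for _ in range(steps):
    T = temperature(H)
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    H[1:-1] += dt * (k * lap[1:-1] + source)  # explicit enthalpy update
    H[-1] = H[-2]                             # insulated far boundary

# The melt front sits at the first node that is not yet fully liquid.
front = x[np.argmax(H < rho * L)]
print(f"approximate melt-front position: {front:.3f}")
```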
Abstract:
Service discovery is vital in ubiquitous applications, where a large number of devices and software components collaborate unobtrusively and provide numerous services without user intervention. Existing service discovery schemes use a service matching process in order to offer services of interest to users. Potentially, context information about the users and the surrounding environment can be used to improve the quality of service matching. To make use of context information in service matching, a service discovery technique needs to address certain challenges. First, the context information must have an unambiguous representation. Second, the devices in the environment must be able to disseminate high-level and low-level context information seamlessly across the different networks. Third, the dynamic nature of the context information must be taken into account. We propose a C-IOB (Context-Information, Observation and Belief) based service discovery model which deals with these challenges by processing the context information and formulating beliefs based on the observations; the required services are then provided to the users according to these beliefs. The method has been tested with a typical ubiquitous museum-guide application over different cases. The simulation results show that the method is time-efficient, and are quite encouraging.
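The C-IOB pipeline described above moves from context information to observations to beliefs to offered services. The toy sketch below follows that shape only; every field name, rule, score and the 0.6 threshold are illustrative assumptions, not the model's actual definitions.

```python
def observe(context):
    """Turn low-level context readings into symbolic observations."""
    obs = set()
    if context.get("location") == "gallery_3":
        obs.add("near_exhibit")
    if context.get("noise_db", 0) > 70:
        obs.add("noisy_environment")
    if context.get("walking_speed", 0.0) < 0.2:
        obs.add("user_stationary")
    return obs

def update_beliefs(beliefs, obs, weight=0.3):
    """Reinforce beliefs supported by the current observations; decay the rest."""
    rules = {"interested_in_exhibit": {"near_exhibit", "user_stationary"},
             "prefers_text_over_audio": {"noisy_environment"}}
    for belief, evidence in rules.items():
        target = 1.0 if evidence <= obs else 0.0
        beliefs[belief] = (1 - weight) * beliefs.get(belief, 0.5) + weight * target
    return beliefs

def matching_services(beliefs, threshold=0.6):
    """Offer a service once the belief that justifies it is strong enough."""
    offers = {"interested_in_exhibit": "exhibit_commentary",
              "prefers_text_over_audio": "text_captions"}
    return [svc for b, svc in offers.items() if beliefs.get(b, 0.0) >= threshold]

# Toy usage: a stationary visitor standing near an exhibit in a noisy room.
beliefs = {}
for _ in range(5):
    obs = observe({"location": "gallery_3", "noise_db": 75, "walking_speed": 0.1})
    beliefs = update_beliefs(beliefs, obs)
print(matching_services(beliefs))
```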