819 results for Distributed Denial of Service


Relevance:

100.00%

Publisher:

Abstract:

Quality of Service (QoS) is a new issue in cloud-based MapReduce, a popular computation model for parallel and distributed processing of big data. Guaranteeing QoS is challenging in a dynamic computation environment because a fixed resource allocation may become under-provisioned, which leads to QoS violations, or over-provisioned, which incurs unnecessary resource cost. This requires runtime resource scaling to adapt to environmental changes. Aiming to guarantee QoS, which in this work refers to a hard deadline, this paper develops a theory to determine how and when resources are scaled up or down for cloud-based MapReduce. The theory employs a nonlinear transformation to define the problem in a reverse resource space, simplifying the theoretical analysis significantly. Theoretical results are then presented in three theorems giving sufficient conditions for guaranteeing the QoS of cloud-based MapReduce. The superiority and applications of the theory are demonstrated through case studies.
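
As an illustration of the kind of runtime scaling decision at issue here, the sketch below computes how many compute slots remain necessary to meet a hard deadline and whether to scale up or down. It is a minimal sketch, assuming independent tasks and linear speedup; the helper names and the rule itself are illustrative, not the paper's reverse-resource-space theory:

```python
# Hypothetical deadline-driven scaling rule for a MapReduce job.
# The paper's actual theory (nonlinear transformation into a reverse
# resource space, three sufficient-condition theorems) is not shown.
import math

def required_slots(remaining_tasks: int, avg_task_seconds: float,
                   seconds_to_deadline: float) -> int:
    """Slots needed to finish the remaining work by the hard deadline,
    assuming independent tasks and speedup linear in slot count."""
    if seconds_to_deadline <= 0:
        raise ValueError("deadline already passed")
    serial_work = remaining_tasks * avg_task_seconds
    return max(1, math.ceil(serial_work / seconds_to_deadline))

def scaling_action(current_slots: int, needed_slots: int) -> str:
    """Scale up on under-provisioning, down on over-provisioning."""
    if needed_slots > current_slots:
        return f"scale up to {needed_slots} slots"
    if needed_slots < current_slots:
        return f"scale down to {needed_slots} slots"
    return "hold"

# Example: 400 remaining map tasks averaging 30 s, 1000 s to deadline.
needed = required_slots(400, 30.0, 1000.0)   # ceil(12000 / 1000) = 12
print(scaling_action(8, needed))             # -> "scale up to 12 slots"
```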

Relevance:

100.00%

Publisher:

Abstract:

In wireless ad hoc networks, nodes communicate with far-off destinations using intermediate nodes as relays. Since wireless nodes are energy constrained, it may not be in the best interest of a node to always accept relay requests. On the other hand, if all nodes decide not to expend energy on relaying, then network throughput will drop dramatically. Both of these extreme scenarios (complete cooperation and complete noncooperation) are inimical to the interests of a user. In this paper, we address the issue of user cooperation in ad hoc networks. We assume that nodes are rational, i.e., their actions are strictly determined by self-interest, and that each node is associated with a minimum lifetime constraint. Given these lifetime constraints and the assumption of rational behavior, we are able to determine the optimal share of service that each node should receive. We define this to be the rational Pareto optimal operating point. We then propose a distributed and scalable acceptance algorithm called Generous Tit-For-Tat (GTFT). The acceptance algorithm is used by the nodes to decide whether to accept or reject a relay request. We show that GTFT results in a Nash equilibrium and prove that the system converges to the rational Pareto optimal operating point.
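
To make the acceptance decision concrete, here is a minimal sketch of a generous tit-for-tat style rule: a node keeps relaying as long as its acceptance ratio stays below its target share plus a small generosity margin. The state kept and the exact comparison are illustrative assumptions, not the paper's specification of GTFT:

```python
# Illustrative generous tit-for-tat (GTFT) relay-acceptance rule.
# tau is the node's target share of relaying (the rational Pareto
# optimal operating point in the paper's terms); eps is the
# "generosity" margin. Both are assumptions for illustration.

class GtftNode:
    def __init__(self, tau: float, eps: float = 0.05):
        self.tau = tau                 # target acceptance ratio
        self.eps = eps                 # relay slightly more than owed
        self.requests_seen = 0
        self.requests_accepted = 0

    def accept_relay(self) -> bool:
        self.requests_seen += 1
        ratio = self.requests_accepted / self.requests_seen
        # Be generous: keep relaying while our acceptance ratio has
        # not yet exceeded the fair share plus the generosity margin.
        if ratio < self.tau + self.eps:
            self.requests_accepted += 1
            return True
        return False

node = GtftNode(tau=0.5)
print([node.accept_relay() for _ in range(10)])
# -> accepts ~55% of requests in the long run (tau + eps)
```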

Relevance:

100.00%

Publisher:

Abstract:

The starting point of this thesis is the notion that in order for organisations to understand what customers value and how customers experience service, they need to learn about customers. The first and perhaps most important link in an organisation-wide learning process directed at customers is the frontline contact person. Service and sales organisations can only learn about customers if the individual frontline contact persons learn about customers. Even though it is commonly recognised that learning about customers is the basis for an organisation's success, few contributions within marketing investigate the fundamental nature of the phenomenon as it occurs in everyday customer service. Thus, what learning about customers is and how it takes place in a customer-service setting is an issue that has been neglected in marketing research. In order to explore these questions, this thesis presents a socio-cultural approach to understanding learning about customers. Hence, instead of equating learning with cognitive processes in the mind of the frontline contact person or with organisational information processing, the interactive, communication-based, socio-cultural aspect of learning about customers is brought to the fore. Consequently, the theoretical basis of the study can be found both in socio-cultural and practice-oriented lines of reasoning and in the fields of service and relationship marketing. As it is argued that learning about customers is an integrated part of everyday practices, it is also clear that it should be studied in a naturalistic and holistic way as it occurs in a customer-service setting. This calls for an ethnographic research approach, which involves direct, first-hand experience of the research setting during an extended period of time. Hence, the empirical study employs participant observation, informal discussions and interviews among car salespersons and service advisors at a car retailing company. Finally, as a synthesis of the theoretically and empirically gained understanding, a set of concepts is developed and integrated into a socio-cultural model of learning about customers.

Relevance:

100.00%

Publisher:

Abstract:

A defining characteristic of most service encounters is that they are strongly influenced by interactions in which both the consumer and the service personnel play integral roles. Such is the importance of this interaction that it has even been argued that, for the consumer, these encounters are in fact the service. Given this, it is not surprising that interactions involving communication and customer participation in service encounters have received considerable attention within the field of services marketing. Much of the research on interactions and communication in services, however, appears to have assumed that the consumer and the service personnel are by definition able to interact and communicate effortlessly with each other. Such communication would require a common language, and for this to be taken for granted the market would need to be fairly homogeneous. The homogeneous country, however, and with it the homogeneous market, would appear to be gone. It is estimated that more than half of the world's consumers already speak more than one language. For a company entering a new market, language can be a major barrier that firms may underestimate, and understanding the influence of language across different markets is important for international companies. The service literature has taken a common language between companies and consumers for granted, but this is not matched by the realities on the ground in many markets. Owing to the communicational and interaction-oriented nature of services, the lack of a common language between the consumer and the service provider could cause problems. A gap thus exists in service theory: a lack of knowledge concerning how language influences consumers in service encounters. By addressing this gap, the thesis contributes to an increased understanding of service theory and provides service companies with a better practical understanding of the importance of native language use for consumers.

The thesis consists of four essays. Essay one is conceptual and addresses how sociolinguistic research can be beneficial for understanding consumer language preferences. Essay two empirically shows how the influence of language varies depending on the nature of the service. Essay three shows that there is a significant difference in language preferences between female and male consumers. Essay four empirically compares consumer language preferences in Canada and Finland, finding strong similarities but also indications of differences in the motives for preferring native language use. The introduction of the thesis outlines the existence of a research gap within the service literature: the lack of research into how native language use may influence consumers in service encounters. In addition, it describes why this gap is of importance to services and why its importance is growing. Building on this situation, the purpose of the thesis is to establish the existence of language influence in service encounters and to extend knowledge of how language influences consumers in multilingual markets.

Relevance:

100.00%

Publisher:

Abstract:

The move towards IT outsourcing is the first step towards an environment where compute infrastructure is treated as a service. In utility computing, this IT service has to honor Service Level Agreements (SLAs) in order to meet the desired Quality of Service (QoS) guarantees. Such an environment requires reliable services in order to maximize the utilization of resources and to decrease the Total Cost of Ownership (TCO). Such reliability cannot come at the cost of resource duplication, since that increases the TCO of the data center and hence the cost per compute unit. In this paper, we look into projecting the impact of hardware failures on SLAs and the techniques required to take proactive recovery steps in case of a predicted failure. By maintaining health vectors of all hardware and system resources, we predict the failure probability of resources at runtime, based on observed hardware errors and failure events. This in turn influences an availability-aware middleware to take proactive action (even before the application is affected, in cases where the system and the application have low recoverability). The proposed framework has been prototyped on a system running HP-UX. Our offline analysis of the prediction system on hardware error logs indicates no more than 10% false positives. To the best of our knowledge, this work is the first of its kind to perform an end-to-end analysis of the impact of a hardware fault on application SLAs in a live system.
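
A minimal sketch of the proactive-recovery idea described above, assuming a simple weighted mapping from logged error events to a failure probability; the event types, weights, threshold and actions are hypothetical, not the prototype's HP-UX implementation:

```python
# Hypothetical sketch of health-vector-based proactive recovery.
# Error weights, threshold, and actions are illustrative assumptions.
from collections import Counter

ERROR_WEIGHTS = {"corrected_mem": 0.02, "disk_retry": 0.05, "cpu_cache": 0.10}

def failure_probability(events: Counter) -> float:
    """Map accumulated hardware error events to a probability in [0, 1]."""
    score = sum(ERROR_WEIGHTS.get(kind, 0.0) * n for kind, n in events.items())
    return min(1.0, score)

def proactive_action(p_fail: float, recoverable: bool,
                     threshold: float = 0.3) -> str:
    """Act before the SLA is violated when failure looks likely."""
    if p_fail < threshold:
        return "monitor"
    # Low-recoverability applications are moved before the fault hits.
    return "migrate workload" if not recoverable else "schedule repair"

events = Counter({"corrected_mem": 5, "disk_retry": 4})
p = failure_probability(events)               # 5*0.02 + 4*0.05 = 0.30
print(proactive_action(p, recoverable=False))  # -> "migrate workload"
```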

Relevance:

100.00%

Publisher:

Abstract:

In this paper we develop and numerically explore the modeling heuristic of using saturation attempt probabilities as state-dependent attempt probabilities in an IEEE 802.11e infrastructure network carrying packet telephone calls and TCP-controlled file downloads, using Enhanced Distributed Channel Access (EDCA). We build upon the fixed point analysis and performance insights in [1]. When a certain number of nodes of each class are contending for the channel (i.e., have nonempty queues), their attempt probabilities are taken to be those obtained from saturation analysis for that number of nodes. We then model the system queue dynamics at the network nodes. With the proposed heuristic, the system evolution at channel slot boundaries becomes a Markov renewal process, and regenerative analysis yields the desired performance measures. The results obtained from this approach match well with ns2 simulations. We find that, with the default IEEE 802.11e EDCA parameters for AC 1 and AC 3, the voice call capacity decreases if even one file download is initiated by some station. Subsequently, reducing the number of voice calls increases the file download capacity almost linearly (by 1/3 Mbps per voice call for the 11 Mbps PHY).
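
For flavor, the following sketch iterates a classic single-class saturation fixed point (Bianchi-style) of the kind such analyses build on; it is a simplified stand-in, not the paper's multi-class EDCA model with per-AC parameters:

```python
# Simplified single-class saturation fixed point, in the spirit of the
# fixed-point analyses the abstract builds on. The multi-class EDCA
# model (per-AC AIFS/CW parameters) is not reproduced here.

def attempt_prob(gamma: float, cw_min: int = 16, m: int = 5) -> float:
    """Mean attempt probability of a node whose attempts collide with
    probability gamma, under binary exponential backoff (Bianchi)."""
    num = 2 * (1 - 2 * gamma)
    den = (1 - 2 * gamma) * (cw_min + 1) + gamma * cw_min * (1 - (2 * gamma) ** m)
    return num / den

def solve_fixed_point(n_nodes: int, iters: int = 200) -> float:
    """Iterate beta = G(gamma), gamma = 1 - (1 - beta)^(n-1)."""
    beta = 0.1
    for _ in range(iters):
        gamma = 1.0 - (1.0 - beta) ** (n_nodes - 1)
        beta = 0.5 * beta + 0.5 * attempt_prob(gamma)  # damped update
    return beta

for n in (2, 5, 10):
    print(n, round(solve_fixed_point(n), 4))  # attempt prob falls with n
```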

Relevance:

100.00%

Publisher:

Abstract:

In order to further develop the logic of service, value creation, value co-creation and value have to be formally and rigorously defined, so that the nature, content and locus of value, and the roles of service providers and customers in value creation, can be unambiguously assessed. In the present article, following the underpinning logic of value-in-use, it is demonstrated that to achieve this, value creation is best defined as the customer's creation of value-in-use. The analysis shows that the firm's and the customer's processes and activities can be divided into a provider sphere, closed to the customer, and a customer sphere, closed to the firm. Value creation occurs in the customer sphere, whereas firms in the provider sphere facilitate value creation by producing resources and processes that represent potential value, or expected value-in-use, for their customers. By gaining access to the closed customer sphere, firms can create a joint value sphere and engage in customers' value creation as co-creators of value with them. This approach establishes a theoretically sound foundation for understanding value creation in service logic, and enables meaningful managerial implications, for example as to what is required for co-creation of value, as well as further theoretical elaboration.

Relevance:

100.00%

Publisher:

Abstract:

The mechanical behaviour of composite materials differs from that of conventional structural materials owing to their heterogeneous and anisotropic nature. Different types of defects and anomalies are induced in these materials during the fabrication process. Further, during their service life, components made of composite materials develop different types of damage. The performance and life of such components are governed by the combined effect of all these defects and damage. While porosity, voids, inclusions, etc. are defects that can be induced during the fabrication of composites, matrix cracks, interface debonds, delaminations and fiber breakage are the major types of service-induced damage of concern. During the service life of composite components, one type of damage can grow and initiate another type of damage. For example, matrix cracks can gradually grow to the interface and initiate debonds, and interface debonds in a particular plane can lead to delaminations. Consequently, the combined effect of different types of distributed damage causes the failure of the component. A set of non-destructive evaluation (NDE) methods is well established for testing conventional metallic materials. Some of these, such as ultrasonics, radiography, thermography, fiber optics and acoustic emission techniques, can also be utilized for composite materials as they are, and in some cases with a slightly different approach or modification. The detection, evaluation and characterization of the different types of defects and damage encountered in composite materials and structures using different NDE tools are discussed briefly in this paper.

Relevance:

100.00%

Publisher:

Abstract:

A distributed system uses many servers to attain increased availability of service and fault tolerance. Balancing the load among these servers is an important task for achieving better performance. Various hardware- and software-based load balancing solutions are available. However, there is always an overhead on the servers and the load balancer while they communicate with each other to share their availability and current load status. The load balancer is busy listening to clients' requests and redirecting them, and it also needs to collect the servers' availability status frequently to keep itself up to date. The servers, in turn, are busy not only providing service to clients but also sharing their current load information with the load balancer. In this paper we propose and discuss the concept and system model of a software-based load balancer with an Availability Checker and Load Reporters (LB-ACLR), which reduces the overhead on the servers and the load balancer. We describe the architectural components with their roles and responsibilities, and present a detailed analysis showing how the proposed Availability Checker significantly increases the performance of the system.
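
A minimal sketch of the division of labor described above: an availability checker refreshes server status out-of-band, so the balancer's request path only has to pick a server. The class names, least-loaded policy and probe are illustrative assumptions, not the LB-ACLR design itself:

```python
# Illustrative sketch: availability checking kept off the request path.
import random
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    load: float = 0.0       # reported by the server's Load Reporter
    available: bool = True  # maintained by the Availability Checker

@dataclass
class LoadBalancer:
    servers: list = field(default_factory=list)

    def refresh_availability(self) -> None:
        """Availability Checker: probe servers out-of-band, so routing
        never blocks on status collection."""
        for s in self.servers:
            s.available = probe(s)

    def route(self) -> Server:
        """Redirect a client to the least-loaded available server."""
        candidates = [s for s in self.servers if s.available]
        if not candidates:
            raise RuntimeError("no available servers")
        return min(candidates, key=lambda s: s.load)

def probe(server: Server) -> bool:
    return random.random() > 0.05   # stand-in for a real health check

lb = LoadBalancer([Server("a", 0.7), Server("b", 0.2), Server("c", 0.4)])
lb.refresh_availability()
print(lb.route().name)   # usually "b", the least-loaded available server
```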

Relevance:

100.00%

Publisher:

Abstract:

The centralized paradigm of a single controller and a single plant, upon which modern control theory is built, is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
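
For sparsity-constrained controllers, quadratic invariance admits a simple support-pattern test: the constraint set is quadratically invariant under the plant exactly when the support of K G K never leaves the allowed pattern. A small sketch of that check on binary patterns (the example patterns below are illustrative):

```python
# Support-pattern test for quadratic invariance (QI): a sparsity
# constraint S is QI under plant G if supp(K G K) is contained in
# supp(S) for every K with supp(K) in supp(S). For 0/1 patterns this
# reduces to checking the boolean product S G S against S.
import numpy as np

def is_quadratically_invariant(S: np.ndarray, G: np.ndarray) -> bool:
    """S, G are 0/1 support patterns of the controller constraint and
    the plant; returns True if S is QI under G."""
    SGS = ((S @ G @ S) > 0).astype(int)   # worst-case support of K G K
    return bool(np.all(SGS <= S))         # must not leave the pattern

# Lower-triangular controller over a lower-triangular plant: QI holds,
# matching the intuition that information travels at least as fast as
# the plant's influence propagates.
S = np.tril(np.ones((3, 3), dtype=int))
G = np.tril(np.ones((3, 3), dtype=int))
print(is_quadratically_invariant(S, G))   # True

# A fully decentralized (diagonal) controller over a dense plant is not QI.
print(is_quadratically_invariant(np.eye(3, dtype=int),
                                 np.ones((3, 3), dtype=int)))  # False
```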

The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories: controller synthesis, architecture design and system identification.

We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop.

Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given; indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it.

Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.

Relevance:

100.00%

Publisher:

Abstract:

This paper is concerned with the role of information in the servitization of manufacturing, which has led to "the innovation of an organisation's capabilities and processes as equipment manufacturers seek to offer services around their products" (Neely 2009, Baines et al 2009). This evolution has resulted in a shift in information requirements (IRs) as companies move from the discrete provision of equipment and spare parts to long-term service contracts guaranteeing prescribed performance levels. Organisations providing such services depend on a very high level of availability and quality of information throughout the service life-cycle (Menor et al 2002). This work focuses on whether, for a proposed contract based around complex equipment, the information system is capable of providing information of acceptable quality, which requires the IRs to be examined in a formal manner. We apply a service information framework (Cuthbert et al 2008, McFarlane & Cuthbert 2012) to methodically assess IRs for different contract types and to understand the information gap between them. Results from case examples indicate that this gap includes information required for the different contract types as well as a set of contract-specific IRs. Furthermore, the control, ownership and use of information differ across contract types as the boundary of operation and responsibility changes.

Relevance:

100.00%

Publisher:

Abstract:

Concentrations of seven phytochemical constituents (swertiamarin, mangiferin, swertisin, oleanolic acid, 1,5,8-trihydroxy-3-methoxyxanthone, 1,8-dihydroxy-3,7-dimethoxyxanthone and 1,8-dihydroxy-3,5-dimethoxyxanthone) of "ZangYinChen" (Swertia mussotii, a herb used in Tibetan folk medicine) were determined and compared in plants collected from naturally distributed high-altitude populations and in counterparts artificially cultivated at low altitudes. Levels of mangiferin, the most abundant active compound in this herb, were significantly lower in cultivated samples and showed a negative correlation with altitude. The other constituents showed no consistent positive or negative correlation with cultivation at low altitude. Concentrations of all of the constituents varied substantially with growth stage and were highest at the bud stage in the cultivated plants, but there were no distinct differences between the flowering and fruiting stages in this respect.

Relevance:

100.00%

Publisher:

Abstract:

Background: The loss of working-aged adults to HIV/AIDS has been shown to increase the costs of labor to the private sector in Africa. There is little corresponding evidence for the public sector. This study evaluated the impact of AIDS on the capacity of a government agency, the Zambia Wildlife Authority (ZAWA), to patrol Zambia's national parks.

Methods: Data were collected from ZAWA on workforce characteristics, recent mortality, costs, and the number of days spent on patrol between 2003 and 2005 by a sample of 76 current patrol officers (reference subjects) and 11 patrol officers who died of AIDS or suspected AIDS (index subjects). An estimate was made of the impact of AIDS on service delivery capacity and labor costs and the potential net benefits of providing treatment.

Results: Reference subjects spent an average of 197.4 days on patrol per year. After adjusting for age, years of service, and worksite, index subjects spent 62.8 days on patrol in their last year of service (68% decrease, p<0.0001), 96.8 days on patrol in their second to last year of service (51% decrease, p<0.0001), and 123.7 days on patrol in their third to last year of service (37% decrease, p<0.0001). For each employee who died, ZAWA lost an additional 111 person-days for management, funeral attendance, vacancy, and recruitment and training of a replacement, resulting in a total productivity loss per death of 2.0 person-years. Each AIDS-related death also imposed budgetary costs for care, benefits, recruitment, and training equivalent to 3.3 years' annual compensation. In 2005, AIDS reduced service delivery capacity by 6.2% and increased labor costs by 9.7%. If antiretroviral therapy could be provided for $500/patient/year, net savings to ZAWA would approach $285,000/year.

Conclusion: AIDS is constraining ZAWA's ability to protect Zambia's wildlife and parks. Impacts on this government agency are substantially larger than have been observed in the private sector. Provision of ART would result in net budgetary savings to ZAWA and greatly increase its service delivery capacity.
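
The reported percentage decreases and the 2.0 person-year productivity loss can be approximately reproduced from the figures quoted above; a quick consistency check (using the 197.4-day reference year as the person-year denominator, which is an assumption):

```python
# Rough consistency check using only numbers quoted in the abstract.
reference_days = 197.4   # mean patrol days/year, reference subjects
index_days = {"last": 62.8, "second_to_last": 96.8, "third_to_last": 123.7}

for year, days in index_days.items():
    decrease = 100 * (reference_days - days) / reference_days
    print(f"{year}: {decrease:.0f}% decrease")  # 68%, 51%, 37% as reported

# Total productivity loss per death: patrol days lost over the final
# three years of service plus 111 person-days of other losses,
# expressed in person-years of 197.4 patrol days (an assumption).
lost_days = sum(reference_days - d for d in index_days.values()) + 111
print(f"{lost_days / reference_days:.1f} person-years")  # ~2.1 vs 2.0 reported
```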

Relevance:

100.00%

Publisher:

Abstract:

Current research on Internet-based distributed systems emphasizes the scalability of overlay topologies for efficient search and retrieval of data items, as well as routing amongst peers. However, most existing approaches fail to address the transport of data across these logical networks in accordance with quality of service (QoS) constraints. Consequently, this paper investigates the use of scalable overlay topologies for routing real-time media streams between publishers and potentially many thousands of subscribers. Specifically, we analyze the costs of using k-ary n-cubes for QoS-constrained routing. Given a number of nodes in a distributed system, we calculate the optimal k-ary n-cube structure for minimizing the average distance between any pair of nodes. Using this structure, we describe a greedy algorithm that selects paths between nodes in accordance with the real-time delays along physical links. We show that this method improves routing latencies by as much as 67% compared to approaches that do not consider physical link costs. We are in the process of developing a method for adaptive node placement in the overlay topology, based upon the locations of publishers and subscribers, physical link costs and per-subscriber QoS constraints. One such method for repositioning nodes in logical space is discussed, which improves the likelihood of meeting service requirements on data routed between publishers and subscribers. Future work will evaluate the benefits of such techniques more thoroughly.
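
As a sketch of the structure-selection step described above, the following enumerates the (k, n) pairs with k^n equal to the node count and picks the one minimizing mean pairwise hop distance, using the standard per-dimension ring-distance formulas for a torus; the enumeration and tie-breaking are illustrative, not the paper's procedure:

```python
# Sketch: choose the k-ary n-cube (k^n nodes) minimizing mean pairwise
# hop distance. Per-dimension mean ring distance: k/4 for even k,
# (k*k - 1) / (4*k) for odd k (standard torus results, averaged over
# all ordered node pairs including a node with itself).

def mean_ring_distance(k: int) -> float:
    return k / 4 if k % 2 == 0 else (k * k - 1) / (4 * k)

def best_cube(num_nodes: int):
    """Enumerate (k, n) with k**n == num_nodes; return the pair with
    the smallest mean pairwise distance n * mean_ring_distance(k)."""
    best = None
    for k in range(2, num_nodes + 1):
        n, size = 0, 1
        while size < num_nodes:
            size *= k
            n += 1
        if size == num_nodes:                  # exact k-ary n-cube
            d = n * mean_ring_distance(k)
            if best is None or d < best[2]:
                best = (k, n, d)
    return best

k, n, d = best_cube(4096)
print(f"{k}-ary {n}-cube, mean distance {d:.2f}")
# -> "2-ary 12-cube, mean distance 6.00" (tied with the 4-ary 6-cube)
```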