943 results for Network Management


Relevance: 30.00%

Abstract:

Iteration is unavoidable in the design process and should be incorporated when planning and managing projects in order to minimize surprises and reduce schedule distortions. However, planning and managing iteration is challenging because the relationships between its causes and effects are complex. Most approaches which use mathematical models to analyze the impact of iteration on the design process focus on a relatively small number of its causes and effects. Therefore, insights derived from these analytical models may not be robust under a broader consideration of potential influencing factors. In this article, we synthesize an explanatory framework which describes the network of causes and effects of iteration identified from the literature, and introduce an analytic approach which combines a task network modeling approach with System Dynamics simulation. Our approach models the network of causes and effects of iteration alongside the process architecture which is required to analyze the impact of iteration on design process performance. We show how this allows managers to assess the impact of changes to process architecture and to management levers which influence iterative behavior, accounting for the fact that these changes can occur simultaneously and can accumulate in non-linear ways. We also discuss how the insights resulting from this analysis can be visualized for easier consumption by project participants not familiar with simulation methods. Copyright © 2010 by ASME.
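
A minimal sketch of the kind of analysis described above: a single rework loop simulated with a System Dynamics-style stock-and-flow update, showing how a cause of iteration (the rework fraction) feeds through to schedule. The task structure, rates, and numbers below are illustrative assumptions, not the model or data from the paper.

```python
# Minimal sketch (not the authors' model): one rework loop in a design process,
# simulated with simple stock-and-flow updates. Higher rework fractions feed
# back non-linearly into longer completion times.

def simulate(rework_fraction=0.3, productivity=5.0, scope=100.0, dt=0.25, horizon=200.0):
    """Return the time at which less than half a unit of work remains."""
    work_remaining = scope      # stock: work not yet (re)attempted
    undiscovered_rework = 0.0   # stock: flawed work awaiting discovery
    t = 0.0
    while t < horizon and (work_remaining + undiscovered_rework) > 0.5:
        # flow: work completed this step, limited by the remaining backlog
        done = min(productivity * dt, work_remaining)
        work_remaining -= done
        # a fraction of completed work is flawed and will iterate later
        undiscovered_rework += rework_fraction * done
        # flow: rework is discovered and fed back into the backlog
        discovered = 0.5 * undiscovered_rework * dt
        undiscovered_rework -= discovered
        work_remaining += discovered
        t += dt
    return t

for rf in (0.1, 0.3, 0.5):
    print(f"rework fraction {rf:.1f} -> completion time {simulate(rf):.1f}")
```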

Relevance: 30.00%

Abstract:

Purpose - In recent years there has been increasing interest in Product Service Systems (PSSs) as a business model for selling integrated product and service offerings. To date, there has been extensive research into the benefits of PSS to manufacturers and their customers, but limited research into the effect of PSS on the upstream supply chain. This paper seeks to address that gap. Design/methodology/approach - The research uses a case-based approach, which is appropriate for exploratory research of this type. In-depth interviews were conducted with key personnel in a focal firm and two members of its supply chain, and the results were analysed to identify emergent themes. Findings - The research identifies differences in supplier behaviour depending on their role in PSS delivery and their relationship with the PSS provider. In particular, it suggests that for a successful partnership it is important to align objectives between the PSS provider and its suppliers. Originality/value - This research provides a detailed investigation into a PSS supply chain and highlights the complexity of roles and relationships among the organizations within it. It will be of value to other PSS researchers and to organizations transitioning to the delivery of PSS. © Emerald Group Publishing Limited.

Relevance: 30.00%

Abstract:

Purpose: This paper aims to improve understanding of how to manage global network operations from an engineering perspective. Design/methodology/approach: This research adopted a theory-building approach based on case studies. Grounded in the existing literature, the theoretical framework was refined and enriched through nine in-depth case studies in the aerospace, automotive, defence, and electrical and electronics sectors. Findings: This paper demonstrates the main value creation mechanisms of global network operations along the engineering value chain. Typical organisational features that support these value creation mechanisms are captured, and the key issues in engineering network design and operations are presented within an overall framework. Practical implications: Evidenced by a series of pilot applications, the outputs of this research can help companies improve the performance of their current engineering networks and design new engineering networks that better support their global businesses and customers in a systematic way. Originality/value: Issues in the design and operation of global engineering networks (GEN) are poorly understood in the existing literature, in contrast to their apparent importance in value creation and realisation. To address this knowledge gap, this paper introduces the concept of the engineering value chain to highlight the potential of a value chain approach for exploring engineering activities in a complex business context. At the same time, it develops an overall framework for managing GEN along the engineering value chain. This improves our understanding of engineering in industrial value chains and extends the theoretical understanding of GEN by integrating engineering network theories and value chain concepts. © Emerald Group Publishing Limited.

Relevance: 30.00%

Abstract:

Compared with structured data sources that are usually stored and analyzed in spreadsheets, relational databases, and single data tables, unstructured construction data sources such as text documents, site images, web pages, and project schedules have been studied less intensively because of the additional challenges they pose in data preparation, representation, and analysis. In this paper, we present our vision for data management and mining that addresses these challenges, together with related results from previous work and our recent developments in data mining on text-based, web-based, image-based, and network-based construction databases.
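
A minimal sketch of mining one of the unstructured sources mentioned above (text documents): classify short construction records by keyword frequency. The categories, keyword lists, and example texts are made-up illustrations, not content from the paper.

```python
# Minimal sketch, not from the paper: keyword-frequency classification of
# short, unstructured construction text records.
from collections import Counter
import re

CATEGORY_KEYWORDS = {
    "schedule": {"delay", "milestone", "duration", "critical", "path"},
    "safety":   {"incident", "hazard", "ppe", "inspection", "fall"},
    "cost":     {"overrun", "budget", "invoice", "change", "order"},
}

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def classify(text):
    counts = Counter(tokenize(text))
    scores = {cat: sum(counts[w] for w in words)
              for cat, words in CATEGORY_KEYWORDS.items()}
    return max(scores, key=scores.get)

print(classify("Milestone slipped; critical path duration extended by delay."))
print(classify("Budget overrun traced to unapproved change order invoice."))
```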

Relevance: 30.00%

Abstract:

Managing product information for product items during their whole lifetime is challenging, especially during their usage and end-of-life phases. A major challenge is how to keep a link between a product item and its associated information, which may be stored in the backend systems of different organizations. This chapter analyses and compares three approaches to this task: the Electronic Product Code (EPC) Network, DIALOG, and World Wide Article Information (WWAI). The EPC Network has three key strengths with respect to product lifecycle management (PLM). First, it is an internationally accepted standard supported by a worldwide standards body (GS1). Second, its lookup mechanism helps to insulate the data on the tag from change. Third, the EPC tag is becoming widespread, and the same tag can also be used for PLM. WWAI is more technically sophisticated than the other approaches. The DIALOG approach may be the most general-purpose of the three because it places few restrictions on the format of the data on the tag. Copyright © 2006 Elsevier Ltd. All rights reserved.
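
A minimal sketch of the lookup idea shared by these approaches: resolve a tag identifier to the backend services of whichever organizations hold the item's lifecycle data. The registry, tag ID, and URLs below are hypothetical stand-ins; this is not an actual GS1/EPC, DIALOG, or WWAI API.

```python
# Minimal sketch (hypothetical identifiers and URLs): map a product tag ID to
# the backend information services that hold its lifecycle data, in the spirit
# of an EPC Network-style lookup. The dict stands in for a real naming or
# discovery service.

LOOKUP_REGISTRY = {
    # tag ID -> list of (organization, information-service URL)
    "urn:epc:id:sgtin:0614141.107346.2017": [
        ("manufacturer", "https://plm.example-maker.com/items/2017"),
        ("service-partner", "https://records.example-service.com/items/2017"),
    ],
}

def resolve(tag_id):
    """Return every backend service known to hold data for this product item."""
    return LOOKUP_REGISTRY.get(tag_id, [])

for org, url in resolve("urn:epc:id:sgtin:0614141.107346.2017"):
    print(f"{org}: {url}")
```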

Relevance: 30.00%

Abstract:

Knowledge management is a critical issue for next-generation web applications, because the next-generation web is becoming a semantic web: a knowledge-intensive network. XML Topic Maps (XTM), a new standard, is emerging in this field as one of the structures for the semantic web; it organizes information in a way that can be optimized for navigation. In this paper, a new set of hyper-graph operations on XTM (HyO-XTM) is proposed to manage distributed knowledge resources. HyO-XTM is based on the XTM hyper-graph model and is applied on top of XTM to simplify the workload of knowledge management. The application of the XTM hyper-graph operations is demonstrated with the knowledge management system of a consulting firm. HyO-XTM shows the potential to lead knowledge management to the next-generation web.
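
A minimal sketch of viewing a topic map as a hypergraph: topics are nodes and each association is a hyperedge over any number of topics. The merge operation below is one illustrative hypergraph operation; it is not taken from the HyO-XTM operation set, and the topics are invented.

```python
# Minimal sketch, assuming a toy hypergraph representation of a topic map.
class TopicMapHypergraph:
    def __init__(self):
        self.topics = set()       # nodes
        self.associations = []    # hyperedges: (type, frozenset of topics)

    def add_association(self, assoc_type, *topics):
        self.topics.update(topics)
        self.associations.append((assoc_type, frozenset(topics)))

    def merge(self, other):
        """Union of two topic maps, dropping duplicate associations."""
        merged = TopicMapHypergraph()
        merged.topics = self.topics | other.topics
        seen = set()
        for assoc in self.associations + other.associations:
            if assoc not in seen:
                seen.add(assoc)
                merged.associations.append(assoc)
        return merged

hr = TopicMapHypergraph()
hr.add_association("works-on", "consultant-A", "project-X", "client-1")
finance = TopicMapHypergraph()
finance.add_association("billed-to", "project-X", "client-1")
print(len(hr.merge(finance).associations))  # 2 distinct associations
```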

Relevance: 30.00%

Abstract:

National Chiao Tung University, Department of Computer Science

Relevance: 30.00%

Abstract:

For a given TCP flow, exogenous losses are those occurring on links other than the flow's bottleneck link. Exogenous losses are typically viewed as introducing undesirable "noise" into TCP's feedback control loop, leading to inefficient network utilization and potentially severe global unfairness. This has prompted much research on mechanisms for hiding such losses from end-points. In this paper, we show through analysis and simulations that low levels of exogenous losses are surprisingly beneficial in that they improve stability and convergence without sacrificing efficiency. Based on this, we argue that exogenous-loss awareness should be taken into account in any AQM design that aims to achieve global fairness. To that end, we propose an exogenous-loss-aware queue management scheme (XQM) that actively accounts for and leverages exogenous losses. We use an equation-based approach to derive the quiescent loss rate for a connection from the connection's profile and its global fair share. In contrast to other queue management techniques, XQM ensures that a connection sees its quiescent loss rate, not only by complementing already existing exogenous losses, but also by actively hiding exogenous losses, if necessary, to achieve global fairness. We establish the advantages of exogenous-loss awareness using extensive simulations in which we contrast the performance of XQM with that of a host of traditional exogenous-loss-unaware AQM techniques.
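
A minimal sketch of the equation-based idea: invert a TCP throughput model to find the loss rate that would hold a flow at its global fair share. The sketch uses the classic square-root model, rate = (MSS / RTT) * sqrt(3 / (2p)); the paper's exact equation may differ, and the example numbers are illustrative.

```python
# Minimal sketch, assuming the classic TCP "square-root" throughput model.
# Inverting it gives the quiescent loss rate p that keeps a flow at a target
# fair share: p = 1.5 * (MSS / (rate * RTT))^2.
import math

def quiescent_loss_rate(fair_share_bps, rtt_s, mss_bytes=1460):
    """Loss rate that keeps a TCP flow at fair_share_bps under the sqrt model."""
    mss_bits = mss_bytes * 8
    return 1.5 * (mss_bits / (fair_share_bps * rtt_s)) ** 2

# Example: a 2 Mb/s fair share over a 100 ms round-trip time.
p = quiescent_loss_rate(2_000_000, 0.1)
print(f"quiescent loss rate p = {p:.5f}")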

Relevance: 30.00%

Abstract:

The exploding demand for services like the World Wide Web reflects the potential presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing: the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels:

(1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework, with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet real-time and reliability constraints.

(2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, and related costs. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multi-server cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements.

(3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must get at the network layer, which provides the basic guarantees of bandwidth, latency, and reliability. The third area is therefore a set of new techniques in network service and protocol design.

(4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault-tolerance, quality of service, complex resources such as ATM channels, probabilistic models, and so on, and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
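
A minimal sketch of level (4): a family of resource-management models sharing one abstract interface, so the concrete model can be tailored to a setting's real-time and reliability constraints. All class and method names here are hypothetical; none come from the project itself.

```python
# Minimal sketch, with hypothetical names, of an object-oriented family of
# resource-management models behind one abstract interface.
from abc import ABC, abstractmethod

class ResourceManagementModel(ABC):
    @abstractmethod
    def admit(self, task, deadline_s):
        """Decide whether a task can be scheduled within its deadline."""

class ConservativeModel(ResourceManagementModel):
    """High-assurance admission: assumes worst-case demand."""
    def __init__(self, capacity, worst_case_load):
        self.capacity, self.load = capacity, worst_case_load
    def admit(self, task, deadline_s):
        return self.load + task["worst_case_cost"] <= self.capacity

class ProbabilisticModel(ResourceManagementModel):
    """Best-effort admission: accepts a task if the expected miss risk is low."""
    def __init__(self, capacity, expected_load, risk_budget=0.05):
        self.capacity, self.load, self.risk = capacity, expected_load, risk_budget
    def admit(self, task, deadline_s):
        headroom = self.capacity - (self.load + task["expected_cost"])
        return headroom >= 0 and task.get("miss_probability", 0.0) <= self.risk

task = {"worst_case_cost": 30, "expected_cost": 12, "miss_probability": 0.02}
for model in (ConservativeModel(100, 80), ProbabilisticModel(100, 80)):
    print(type(model).__name__, model.admit(task, deadline_s=1.0))
```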

Relevance: 30.00%

Abstract:

One role for workload generation is as a means for understanding how servers and networks respond to variation in load. This enables management and capacity planning based on current and projected usage. This paper applies a number of observations of Web server usage to create a realistic Web workload generation tool that mimics a set of real users accessing a server. The tool, called SURGE (Scalable URL Reference Generator), generates references matching empirical measurements of: 1) server file size distribution; 2) request size distribution; 3) relative file popularity; 4) embedded file references; 5) temporal locality of reference; and 6) idle periods of individual users. This paper reviews the essential elements required in the generation of a representative Web workload. It also addresses the technical challenges of satisfying this large set of simultaneous constraints on the properties of the reference stream, the solutions we adopted, and their associated accuracy. Finally, we present evidence that SURGE exercises servers in a manner significantly different from other Web server benchmarks.
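
A minimal sketch of two of the empirical properties such a generator must reproduce: a heavy-tailed file size distribution and Zipf-like relative file popularity. The distribution parameters below are illustrative; the real tool fits its distributions to measured server data and satisfies all six constraints jointly.

```python
# Minimal sketch, with illustrative parameters: sample heavy-tailed file sizes
# (Pareto tail) and Zipf-like file popularity for a synthetic reference stream.
import random

def pareto_size(alpha=1.1, minimum=1000):
    """Heavy-tailed file size in bytes, via the Pareto inverse CDF."""
    u = 1.0 - random.random()            # uniform in (0, 1]
    return int(minimum / (u ** (1.0 / alpha)))

def zipf_rank(num_files=1000, s=1.0):
    """Pick a file rank with probability proportional to 1 / rank**s."""
    weights = [1.0 / (r ** s) for r in range(1, num_files + 1)]
    return random.choices(range(1, num_files + 1), weights=weights, k=1)[0]

random.seed(1)
for rank, size in [(zipf_rank(), pareto_size()) for _ in range(5)]:
    print(f"file rank {rank:4d}, size {size} bytes")
```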

Relevance: 30.00%

Abstract:

Internet streaming applications are adversely affected by network conditions such as high packet loss rates and long delays. This paper aims at mitigating such effects by leveraging the availability of client-side caching proxies. We present a novel caching architecture (and associated cache management algorithms) that turns edge caches into accelerators of streaming media delivery. A salient feature of our caching algorithms is that they allow partial caching of streaming media objects and joint delivery of content from caches and origin servers. The caching algorithms we propose are both network-aware and stream-aware; they take into account the popularity of streaming media objects, their bit-rate requirements, and the available bandwidth between clients and servers. Using realistic models of Internet bandwidth (derived from proxy cache logs and measured over real Internet paths), we have conducted extensive simulations to evaluate the performance of various cache management alternatives. Our experiments demonstrate that network-aware caching algorithms can significantly reduce service delay and improve overall stream quality. Also, our experiments show that partial caching is particularly effective when bandwidth variability is not very high.
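
A minimal sketch of a network-aware, stream-aware partial-caching rule in the spirit described above: cache enough of each object's prefix to cover the shortfall between its bit rate and the measured server-to-client bandwidth, spending a fixed cache budget on the most popular objects first. This is not the paper's algorithm, and all numbers are made up.

```python
# Minimal sketch, not the paper's algorithm: partial caching driven by
# popularity, bit rate, and available bandwidth.

def prefix_seconds_needed(duration_s, bitrate_kbps, bandwidth_kbps):
    """Seconds of prefix to cache so playback never starves (simple model)."""
    if bandwidth_kbps >= bitrate_kbps:
        return 0.0
    return duration_s * (1.0 - bandwidth_kbps / bitrate_kbps)

def plan_cache(objects, cache_budget_kbits):
    """objects: dicts with name, popularity, duration_s, bitrate_kbps, bandwidth_kbps."""
    plan, remaining = {}, cache_budget_kbits
    for obj in sorted(objects, key=lambda o: o["popularity"], reverse=True):
        seconds = prefix_seconds_needed(obj["duration_s"], obj["bitrate_kbps"],
                                        obj["bandwidth_kbps"])
        cost = seconds * obj["bitrate_kbps"]
        if cost <= remaining:
            plan[obj["name"]] = seconds
            remaining -= cost
    return plan

objects = [
    {"name": "clip-A", "popularity": 0.6, "duration_s": 300, "bitrate_kbps": 500, "bandwidth_kbps": 350},
    {"name": "clip-B", "popularity": 0.3, "duration_s": 600, "bitrate_kbps": 300, "bandwidth_kbps": 400},
]
print(plan_cache(objects, cache_budget_kbits=60_000))
```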

Relevance: 30.00%

Abstract:

As the commoditization of sensing, actuation, and communication hardware increases, so does the potential for dynamically tasked sense-and-respond networked systems (i.e., Sensor Networks or SNs) to replace existing disjoint and inflexible special-purpose deployments (closed-circuit security video, anti-theft sensors, etc.). While various solutions have emerged for many individual SN-centric challenges (e.g., power management, communication protocols, role assignment), perhaps the largest remaining obstacle to widespread SN deployment is that those who wish to deploy, utilize, and maintain a programmable Sensor Network lack the programming and systems expertise to do so. The contributions of this thesis center on the design, development, and deployment of the SN Workbench (snBench). snBench embodies an accessible, modular programming platform coupled with a flexible and extensible run-time system that, together, support the entire life-cycle of distributed sensory services. As it is impossible to find a one-size-fits-all programming interface, this work advocates the use of tiered layers of abstraction that enable a variety of high-level, domain-specific languages to be compiled to a common (thin-waist) tasking language; this common tasking language is statically verified and can subsequently be re-translated, if needed, for execution on a wide variety of hardware platforms. snBench provides: (1) a common sensory tasking language (Instruction Set Architecture) powerful enough to express complex SN services, yet simple enough to be executed by highly constrained resources with soft, real-time constraints; (2) a prototype high-level language (and corresponding compiler) to illustrate the utility of the common tasking language and the tiered programming approach in this domain; (3) an execution environment and a run-time support infrastructure that abstract a collection of heterogeneous resources into a single virtual Sensor Network, tasked via this common tasking language; and (4) novel formal methods (i.e., static analysis techniques) that verify safety properties and infer implicit resource constraints to facilitate resource allocation for new services. This thesis presents these components in detail, as well as two specific case studies: the use of snBench to integrate physical and wireless network security, and the use of snBench as the foundation for semester-long student projects in a graduate-level Software Engineering course.
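
A minimal sketch of the tiered idea: a small high-level expression is flattened into a simple tasking representation that can be statically checked against deployed resources. The toy language, instructions, and resource names below are invented for illustration and do not reflect the actual snBench languages or instruction set.

```python
# Minimal sketch, with a made-up toy language: compile a high-level sensory
# service to flat tasking instructions, then statically check resource needs.

# High-level service: "every 5 s, if temperature > 30 then trigger alarm"
HIGH_LEVEL = ("every", 5, ("if", (">", ("read", "temp_sensor"), 30), ("act", "alarm")))

CAPABILITIES = {"temp_sensor": "sensor", "alarm": "actuator"}  # deployed resources

def compile_to_tasks(expr, tasks=None):
    """Flatten the expression tree into a list of simple tasking instructions."""
    if tasks is None:
        tasks = []
    op, *args = expr
    if op in ("read", "act"):
        tasks.append((op, args[0]))
    else:
        for a in args:
            if isinstance(a, tuple):
                compile_to_tasks(a, tasks)
    return tasks

def check_resources(tasks):
    """Static check: every referenced resource must exist with the right role."""
    roles = {"read": "sensor", "act": "actuator"}
    for op, name in tasks:
        if CAPABILITIES.get(name) != roles[op]:
            raise ValueError(f"unsatisfied resource requirement: {op} {name}")
    return tasks

print(check_resources(compile_to_tasks(HIGH_LEVEL)))
```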

Relevance: 30.00%

Abstract:

NetSketch is a tool that enables the specification of network-flow applications and the certification of desirable safety properties imposed thereon. NetSketch is conceived to assist system integrators in two types of activities: modeling and design. As a modeling tool, it enables the abstraction of an existing system so as to retain sufficient detail to enable future analysis of safety properties. As a design tool, NetSketch enables the exploration of alternative safe designs as well as the identification of minimal requirements for outsourced subsystems. NetSketch embodies a lightweight formal verification philosophy, whereby the power (but not the heavy machinery) of a rigorous formalism is made accessible to users via a friendly interface. NetSketch does so by exposing tradeoffs between exactness of analysis and scalability, and by combining traditional whole-system analysis with a more flexible compositional analysis approach based on a strongly-typed Domain-Specific Language (DSL) for specifying network configurations at various levels of sketchiness, along with invariants that need to be enforced thereupon. In this paper, we overview NetSketch, highlight its salient features, and illustrate how it could be used in applications, including the management and shaping of traffic flows in a vehicular network (as a proxy for CPS applications) and in a streaming media network (as a proxy for Internet applications). In a companion paper, we define the formal system underlying the operation of NetSketch, in particular the DSL behind NetSketch's user interface when used in "sketch mode", and prove its soundness relative to appropriately defined notions of validity.
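
A minimal sketch of the compositional-analysis flavor: each component advertises bounds on the flow it can accept and emit, and a serial composition type-checks only if the upstream output range fits inside the downstream input range. This is not NetSketch's actual DSL or type system; the component names and numbers are illustrative.

```python
# Minimal sketch, not NetSketch's DSL: compositional checking of flow bounds.
class Component:
    def __init__(self, name, in_lo, in_hi, out_lo, out_hi):
        self.name = name
        self.inp = (in_lo, in_hi)     # acceptable input flow (e.g., Mb/s)
        self.out = (out_lo, out_hi)   # possible output flow

def compose_serial(upstream, downstream):
    """Type-check the connection, then return the composed component."""
    if not (downstream.inp[0] <= upstream.out[0] and
            upstream.out[1] <= downstream.inp[1]):
        raise TypeError(f"{upstream.name} -> {downstream.name}: "
                        f"output {upstream.out} not contained in input {downstream.inp}")
    return Component(f"{upstream.name}+{downstream.name}",
                     *upstream.inp, *downstream.out)

shaper = Component("shaper", 0, 100, 0, 10)    # shapes any input down to <= 10
link   = Component("link",   0, 12,  0, 12)    # safe only for <= 12 in
print(compose_serial(shaper, link).name)       # OK: [0, 10] fits within [0, 12]
```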