911 results for Intelligent load management
Abstract:
When considering the potential uptake and utilization of technology management tools by industry, it must be recognized that companies face the difficult challenges of selecting, adopting and integrating individual tools into a toolkit that must be implemented within their current organizational processes and systems. This situation is compounded by the lack of sound advice on integrating well-founded individual tools into a robust toolkit that has the necessary degree of flexibility such that it can be tailored for application to specific problems faced by individual organizations. As an initial stepping stone to offering a toolkit with empirically proven utility, this paper provides a conceptual foundation to the development of toolkits by outlining an underlying philosophical position based on observations from multiple research and commercial collaborations with industry. This stance is underpinned by a set of operationalized principles that can offer guidance to organizations when deciding upon the appropriate form, functions and features that should be embodied by any potential tool/toolkit. For example, a key objective of any tool is to aid decision-making, and a core set of powerful, flexible, scalable and modular tools should be sufficient to allow users to generate, explore, shape and implement possible solutions across a wide array of strategic issues. From our philosophical stance, the preferred mode of engagement is facilitated workshops with a participatory process that enables multiple perspectives and structures the conversation through visual representations in order to manage the cognitive load in the collaborative environment. The generic form of the tools should be configurable for the given context and utilized in a lightweight manner based on the premise of 'start small and iterate fast'. © 2011 IEEE.
Abstract:
Establishing connectivity of products with real-time information about themselves can at one level provide accurate data, and at another, allow products to assess and influence their own destiny. In this way, the specification for an intelligent product is being built - one whose information content is permanently bound to its material content. This paper explores the impact of such development on supply chains, contrasting simple and complex product supply chains. The Auto-ID project is on track to enable such connectivity between products and information using a single, open-standard, data repository for storage and retrieval of product information. The potential impact on the design and management of supply chains is immense. This paper provides an introduction to some of these changes, demonstrating that by enabling intelligent products, Auto-ID systems will be instrumental in driving future supply chains. The paper also identifies specific application areas for this technology in the product supply chain.
Abstract:
When considering the potential uptake and utilization of technology management tools by industry, it must be recognized that companies face the difficult challenges of selecting, adopting and integrating individual tools into a toolkit that must be implemented within their current organizational processes and systems. This situation is compounded by the lack of sound advice on integrating well-founded individual tools into a robust toolkit that has the necessary degree of flexibility such that it can be tailored for application to specific problems faced by individual organizations. As an initial stepping stone to offering a toolkit with empirically proven utility, this paper provides a conceptual foundation to the development of toolkits by outlining an underlying philosophical position based on observations from multiple research and commercial collaborations with industry. This stance is underpinned by a set of operationalized principles that can offer guidance to organizations when deciding upon the appropriate form, functions and features that should be embodied by any potential tool/toolkit. For example, a key objective of any tool is to aid decision-making, and a core set of powerful, flexible, scalable and modular tools should be sufficient to allow users to generate, explore, shape and implement possible solutions across a wide array of strategic issues. From our philosophical stance, the preferred mode of engagement is facilitated workshops with a participatory process that enables multiple perspectives and structures the conversation through visual representations in order to manage the cognitive load in the collaborative environment. The generic form of the tools should be configurable for the given context and utilized in a lightweight manner based on the premise of 'start small and iterate fast'. © 2012 Elsevier Inc.
Abstract:
Dynamic Power Management (DPM) is a technique to reduce the power consumption of electronic systems by selectively shutting down idle components. In this article we introduce back-propagation networks and radial basis function networks into the study of system-level power management policies. We propose two PM policies based on Artificial Neural Networks (ANNs): Back-propagation Power Management (BPPM) and Radial Basis Function Power Management (RBFPM). Our experiments show that the two policies greatly lower system-level power consumption and outperform traditional Power Management (PM) techniques: BPPM is 1.09-competitive and RBFPM is 1.08-competitive, versus 1.79-, 1.45-, and 1.18-competitive for traditional timeout PM, adaptive predictive PM, and stochastic PM, respectively.
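The competitive ratios quoted above compare a policy's energy cost against the optimal clairvoyant policy. A minimal sketch of how such a ratio is computed for a simple timeout policy (not the paper's BPPM/RBFPM models; the idle-period values, unit idle power, and wakeup energy below are illustrative assumptions):

```python
def timeout_policy_cost(idle_periods, timeout, wakeup_energy):
    """Energy of a timeout policy: stay on for `timeout`, then sleep;
    every shutdown incurs `wakeup_energy` on the next request."""
    cost = 0.0
    for t in idle_periods:
        if t <= timeout:
            cost += t                        # stayed on for the whole idle period
        else:
            cost += timeout + wakeup_energy  # idled, slept, then woke up
    return cost

def optimal_offline_cost(idle_periods, wakeup_energy):
    """Clairvoyant policy: sleep immediately iff the idle period is
    longer than the break-even time (equal to wakeup_energy here)."""
    return sum(min(t, wakeup_energy) for t in idle_periods)

idle = [0.5, 3.0, 10.0, 1.2, 7.5]            # hypothetical idle-period trace
ratio = (timeout_policy_cost(idle, timeout=2.0, wakeup_energy=2.0)
         / optimal_offline_cost(idle, wakeup_energy=2.0))
```

A learned predictor such as BPPM or RBFPM aims to shut down earlier on long idle periods than a fixed timeout can, pushing this ratio closer to 1.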
Abstract:
The buckling of compressively-loaded members is one of the most important factors limiting the overall strength and stability of a structure. I have developed novel techniques for using active control to wiggle a structural element in such a way that buckling is prevented. I present the results of analysis, simulation, and experimentation to show that buckling can be prevented through computer-controlled adjustment of dynamical behavior. I have constructed a small-scale railroad-style truss bridge that contains compressive members that actively resist buckling through the use of piezo-electric actuators. I have also constructed a prototype actively controlled column in which the control forces are applied by tendons, as well as a composite steel column that incorporates piezo-ceramic actuators that are used to counteract buckling. Active control of buckling allows this composite column to support 5.6 times more load than would otherwise be possible. These techniques promise to lead to intelligent physical structures that are both stronger and lighter than would otherwise be possible.
Abstract:
The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem on four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. 
Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representations can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, etc. We develop customizable middleware services to exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploit self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must get at the network layer that can provide the basic guarantees of bandwidth, latency, and reliability. Therefore, the third area is a set of new techniques in network service and protocol designs. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault-tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. 
This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
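The speculative prefetching idea in part (2) can be illustrated with a toy first-order model of document access patterns (this is purely a sketch of the concept, not the project's middleware; the class and document names are hypothetical):

```python
from collections import defaultdict, Counter

class PrefetchPredictor:
    """Learns which document tends to follow each request, so a server
    or proxy can speculatively prefetch the most likely successor."""

    def __init__(self):
        self.successors = defaultdict(Counter)  # doc -> Counter of next docs
        self.last = None

    def observe(self, doc):
        """Record a request and update the access-pattern model."""
        if self.last is not None:
            self.successors[self.last][doc] += 1
        self.last = doc

    def predict(self, doc):
        """Document to prefetch after `doc`, or None if no pattern is known."""
        follow = self.successors.get(doc)
        return follow.most_common(1)[0][0] if follow else None

p = PrefetchPredictor()
for d in ["index", "news", "index", "news", "index", "sports"]:
    p.observe(d)
```

With this trace, `p.predict("index")` returns `"news"`, the most frequent successor, which the cache could fetch before the client asks for it.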
Abstract:
The Border Gateway Protocol (BGP) is the current inter-domain routing protocol used to exchange reachability information between Autonomous Systems (ASes) in the Internet. BGP supports policy-based routing which allows each AS to independently adopt a set of local policies that specify which routes it accepts and advertises from/to other networks, as well as which route it prefers when more than one route becomes available. However, independently chosen local policies may cause global conflicts, which result in protocol divergence. In this paper, we propose a new algorithm, called Adaptive Policy Management Scheme (APMS), to resolve policy conflicts in a distributed manner. Akin to distributed feedback control systems, each AS independently classifies the state of the network as either conflict-free or potentially-conflicting by observing its local history only (namely, route flaps). Based on the degree of measured conflicts (policy conflict-avoidance vs. -control mode), each AS dynamically adjusts its own path preferences—increasing its preference for observably stable paths over flapping paths. APMS also includes a mechanism to distinguish route flaps due to topology changes, so as not to confuse them with those due to policy conflicts. A correctness and convergence analysis of APMS based on the substability property of chosen paths is presented. Implementation in the SSF network simulator is performed, and simulation results for different performance metrics are presented. The metrics capture the dynamic performance (in terms of instantaneous throughput, delay, routing load, etc.) of APMS and other competing solutions, thus exposing the often neglected aspects of performance.
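The core adaptive mechanism of APMS can be sketched as follows: each AS counts route flaps it observes locally and demotes paths whose flap count exceeds a threshold, so that observably stable routes win the decision process. This is an illustrative reconstruction, not the paper's implementation; the AS names, threshold, and penalty values are assumptions:

```python
def adjust_preferences(prefs, flap_counts, threshold=3, penalty=20):
    """Return updated path preferences: paths flapping more than
    `threshold` times are penalized so stable alternatives are chosen."""
    return {
        path: pref - penalty if flap_counts.get(path, 0) > threshold else pref
        for path, pref in prefs.items()
    }

prefs = {"via_AS2": 100, "via_AS3": 90}   # hypothetical local preferences
flaps = {"via_AS2": 5, "via_AS3": 0}      # via_AS2 has been flapping
updated = adjust_preferences(prefs, flaps)
best = max(updated, key=updated.get)      # decision process now picks via_AS3
```

In the actual scheme this demotion is applied only in conflict-control mode, and flaps attributed to topology changes are first filtered out so that legitimate rerouting is not penalized.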
Abstract:
Organizations that leverage lessons learned from their experience in the practice of complex real-world activities are faced with five difficult problems. First, how to represent the learning situation in a recognizable way. Second, how to represent what was actually done in terms of repeatable actions. Third, how to assess performance taking account of the particular circumstances. Fourth, how to abstract lessons learned that are re-usable on future occasions. Fifth, how to determine whether to pursue practice maturity or strategic relevance of activities. Here, organizational learning and performance improvement are investigated in a field study using the Context-based Intelligent Assistant Support (CIAS) approach. A new conceptual framework for practice-based organizational learning and performance improvement is presented that supports researchers and practitioners in addressing these problems and contributes to a practice-based approach to activity management. The novelty of the research lies in the simultaneous study of the different levels involved in the activity. Route selection in light rail infrastructure projects involves practices at both the strategic and operational levels; it is part managerial/political and part engineering. Aspectual comparison of practices represented in Contextual Graphs constitutes a new approach to the selection of Key Performance Indicators (KPIs). This approach is free from causality assumptions and forms the basis of a new approach to practice-based organizational learning and performance improvement. The evolution of practices in contextual graphs is shown to be an objective and measurable expression of organizational learning. This diachronic representation is interpreted using a practice-based organizational learning novelty typology. This dissertation shows how lessons learned, when effectively leveraged by an organization, lead to practice maturity. 
The practice maturity level of an activity in combination with an assessment of an activity’s strategic relevance can be used by management to prioritize improvement effort.
Abstract:
A series of short and long term service load tests were undertaken on the sixth floor of the full-scale, seven storey, reinforced concrete building at the Large Building Test Facility of the Building Research Establishment at Cardington. By using internally strain gauged reinforcing bars cast into an internal and external floor bay during the construction process it was possible to gain a detailed record of slab strains resulting from the application of several arrangements of test loads. Short term tests were conducted in December 1998 and long term monitoring then ensued until April 2001. This paper describes the test programmes and presents results to indicate slab behaviour for the various loading regimes.
Abstract:
BACKGROUND: Acute ankle sprains are usually managed functionally, with advice to undertake progressive weight-bearing and walking. Mechanical loading is an important modulator of tissue repair; therefore, the clinical effectiveness of walking after ankle sprain may be dose dependent. The intensity, magnitude and duration of load associated with current functional treatments for ankle sprain are unclear.
AIM: To describe physical activity (PA) in the first week after ankle sprain and to compare results with a healthy control group.
METHODS: Participants (16-65 years) with an acute ankle sprain were randomised into two groups (standard or exercise). Both groups were advised to apply ice and compression, and walk within the limits of pain. The exercise group undertook additional therapeutic exercises. PA was measured using an activPAL accelerometer, worn for 7 days after injury. Comparisons were made with a non-injured control group.
RESULTS: The standard group were significantly less active (1.2 ± 0.4 h activity/day; 5621 ± 2294 steps/day) than the exercise (1.7 ± 0.7 h/day, p=0.04; 7886 ± 3075 steps/day, p=0.03) and non-injured control groups (1.7 ± 0.4 h/day, p=0.02; 8844 ± 2185 steps/day, p=0.002). Also, compared with the non-injured control group, the standard and exercise groups spent less time in moderate (38.3 ± 12.7 min/day vs 14.5 ± 11.4 min/day, p=0.001 and 22.5 ± 15.9 min/day, p=0.003) and high-intensity activity (4.1 ± 6.9 min/day vs 0.1 ± 0.1 min/day, p=0.001 and 0.62 ± 1.0 min/day, p=0.005).
CONCLUSION: PA patterns are reduced in the first week after ankle sprain, which is partly ameliorated with addition of therapeutic exercises. This study represents the first step towards developing evidence-based walking prescription after acute ankle sprain.
Abstract:
Scrapers have established an important position in the earthmoving field as they are independently capable of accomplishing an earthmoving operation. Given that loading a scraper to its capacity does not entail its maximum production, optimizing the scraper’s loading time is an essential prerequisite for successful operations management. The relevant literature addresses loading time optimization through a graphical method that is founded on the invalid assumption that the hauling time is independent of the load time. To correct this, a new algorithmic optimization method that incorporates the golden section search and the bisection algorithm is proposed. Comparison of the results derived from the proposed and the existing method demonstrates that the latter entails the systematic, needless prolongation of the loading stage, thus resulting in reduced hourly production and increased cost. Therefore, the proposed method achieves an improved modeling of scraper earthmoving operations and contributes toward a more efficient cost management.
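The golden section search named above is a standard technique for one-dimensional optimization of a unimodal function. A minimal sketch of how it could locate the production-maximizing loading time (the production model below is purely illustrative, not the paper's; the real model couples load time with payload growth and haul time):

```python
import math

def golden_section_maximize(f, a, b, tol=1e-6):
    """Golden-section search for the maximizer of a unimodal f on [a, b]."""
    inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            a, c = c, d                       # maximizer lies in [c, b]
            d = a + inv_phi * (b - a)
        else:
            b, d = d, c                       # maximizer lies in [a, d]
            c = b - inv_phi * (b - a)
    return (a + b) / 2

def production(t):
    """Illustrative hourly production: payload grows with diminishing
    returns in load time t, while cycle time (load + fixed haul) grows."""
    return (1 - math.exp(-1.5 * t)) / (t + 4.0)

t_opt = golden_section_maximize(production, 0.1, 5.0)  # optimal load time
```

Because payload gains diminish while cycle time keeps growing, the optimum falls well short of loading to full capacity, which is the paper's central point against the graphical method.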