984 results for Load Management


Relevance: 30.00%

Abstract:

Shellfish bed closures along the North Carolina coast have increased over the years, seemingly concurrent with increases in population (Mallin, 2000). More, and faster-flowing, storm water has come to mean more bacteria, and fecal indicator bacteria (FIB) standards for shellfish harvesting are often exceeded when no source of contamination is readily apparent (Kator and Rhodes, 1994). Could management reduce bacterial loads if the source of the bacteria were known? Several potentially useful methods for differentiating human versus animal pollution sources have emerged, including ribotyping and Multiple Antibiotic Resistance (MAR) analysis (U.S. EPA, 2005). Total Maximum Daily Load (TMDL) studies on bacterial sources have been conducted for streams in the North Carolina mountain and Piedmont areas (U.S. EPA, 1991 and 2005) and are likely to be mandated for coastal waters. A TMDL analysis estimates allowable pollutant loads and allocates them to known sources so that management actions may be taken to restore water to its intended uses (U.S. EPA, 1991 and 2005). This project sought first to quantify and compare fecal contamination levels for three different types of land use on the coast and, second, to apply MAR and ribotyping techniques and assess their effectiveness for identifying bacterial sources. Third, results from these studies would be applied to one watershed to develop a case-study coastal TMDL.

All three watershed study areas are within Carteret County, North Carolina. Jumping Run Creek and Pettiford Creek are within the White Oak River Basin management unit, whereas the South River falls within the Neuse River Basin. The Jumping Run Creek watershed encompasses approximately 320 ha; it was formerly a dense coastal pocosin on sandy relict dune ridges, but current land uses are primarily medium-density residential. Pettiford Creek, in the Croatan National Forest, covers 1,133 ha and is essentially undeveloped. The third study area is on Open Grounds Farm in the South River watershed. Half of this 630 ha watershed is under cultivation, with most of that area under active water control (flashboard risers); the remaining portion is forested silviculture. (PDF contains 4 pages)
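For context, a TMDL allocation has a standard general form in U.S. EPA guidance (stated here in general terms, not as this project's specific formulation): the allowable load is split among point sources, nonpoint sources, and a margin of safety.

```latex
% General form of a TMDL allocation (U.S. EPA guidance); WLA_i are wasteload
% allocations for point sources, LA_j are load allocations for nonpoint
% sources, and MOS is an explicit margin of safety.
\mathrm{TMDL} = \sum_i \mathrm{WLA}_i + \sum_j \mathrm{LA}_j + \mathrm{MOS}
```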

Relevance: 30.00%

Abstract:

When considering the potential uptake and utilization of technology management tools by industry, it must be recognized that companies face the difficult challenges of selecting, adopting and integrating individual tools into a toolkit that must be implemented within their current organizational processes and systems. This situation is compounded by the lack of sound advice on integrating well-founded individual tools into a robust toolkit with the degree of flexibility needed for the tools to be tailored to the specific problems faced by individual organizations. As an initial stepping stone to offering a toolkit with empirically proven utility, this paper provides a conceptual foundation for the development of toolkits by outlining an underlying philosophical position based on observations from multiple research and commercial collaborations with industry. This stance is underpinned by a set of operationalized principles that can offer guidance to organizations when deciding upon the appropriate form, functions and features that should be embodied by any potential tool/toolkit. For example, a key objective of any tool is to aid decision-making, and a core set of powerful, flexible, scalable and modular tools should be sufficient to allow users to generate, explore, shape and implement possible solutions across a wide array of strategic issues. From our philosophical stance, the preferred mode of engagement is facilitated workshops with a participatory process that enables multiple perspectives and structures the conversation through visual representations in order to manage the cognitive load in the collaborative environment. The generic form of the tools should be configurable for the given context and utilized in a lightweight manner based on the premise of 'start small and iterate fast'. © 2011 IEEE.

Relevance: 30.00%

Abstract:

When considering the potential uptake and utilization of technology management tools by industry, it must be recognized that companies face the difficult challenges of selecting, adopting and integrating individual tools into a toolkit that must be implemented within their current organizational processes and systems. This situation is compounded by the lack of sound advice on integrating well-founded individual tools into a robust toolkit with the degree of flexibility needed for the tools to be tailored to the specific problems faced by individual organizations. As an initial stepping stone to offering a toolkit with empirically proven utility, this paper provides a conceptual foundation for the development of toolkits by outlining an underlying philosophical position based on observations from multiple research and commercial collaborations with industry. This stance is underpinned by a set of operationalized principles that can offer guidance to organizations when deciding upon the appropriate form, functions and features that should be embodied by any potential tool/toolkit. For example, a key objective of any tool is to aid decision-making, and a core set of powerful, flexible, scalable and modular tools should be sufficient to allow users to generate, explore, shape and implement possible solutions across a wide array of strategic issues. From our philosophical stance, the preferred mode of engagement is facilitated workshops with a participatory process that enables multiple perspectives and structures the conversation through visual representations in order to manage the cognitive load in the collaborative environment. The generic form of the tools should be configurable for the given context and utilized in a lightweight manner based on the premise of 'start small and iterate fast'. © 2012 Elsevier Inc.

Relevance: 30.00%

Abstract:

The exploding demand for services like the World Wide Web reflects the potential presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing, the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem on four levels:

(1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework, with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet real-time and reliability constraints.

(2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, etc. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements.

(3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must get at the network layer, which can provide the basic guarantees of bandwidth, latency, and reliability. The third area is therefore a set of new techniques in network service and protocol design.

(4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring the integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
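The component architecture in level (1) can be made concrete with a small sketch. The abstract names the Resource Registry, Task Registry and Resource Management Interface but gives no API, so every class, field and method below is an illustrative assumption:

```python
# Minimal sketch of the registry idea in level (1); all names are hypothetical.
from dataclasses import dataclass

@dataclass
class Resource:
    site: str
    cpu_free: float        # fraction of CPU currently advertised as free
    bandwidth_kbps: float  # advertised network capacity

@dataclass
class Task:
    name: str
    cpu_needed: float
    deadline_s: float      # real-time constraint to be met

class ResourceRegistry:
    """Collects availability advertisements from self-managed sites."""

    def __init__(self):
        self.resources = []

    def advertise(self, resource: Resource):
        self.resources.append(resource)

    def candidates(self, task: Task):
        # Return sites that could host the task; a real implementation would
        # also weigh reliability history and predicted latency against the
        # task's real-time and reliability constraints.
        return [r for r in self.resources if r.cpu_free >= task.cpu_needed]
```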

Relevance: 30.00%

Abstract:

The Border Gateway Protocol (BGP) is the current inter-domain routing protocol used to exchange reachability information between Autonomous Systems (ASes) in the Internet. BGP supports policy-based routing, which allows each AS to independently adopt a set of local policies that specify which routes it accepts and advertises from/to other networks, as well as which route it prefers when more than one route becomes available. However, independently chosen local policies may cause global conflicts, which result in protocol divergence. In this paper, we propose a new algorithm, called the Adaptive Policy Management Scheme (APMS), to resolve policy conflicts in a distributed manner. Akin to distributed feedback control systems, each AS independently classifies the state of the network as either conflict-free or potentially conflicting by observing its local history only (namely, route flaps). Based on the degree of measured conflicts (policy conflict-avoidance vs. conflict-control mode), each AS dynamically adjusts its own path preferences, increasing its preference for observably stable paths over flapping paths. APMS also includes a mechanism to distinguish route flaps due to topology changes, so as not to confuse them with those due to policy conflicts. A correctness and convergence analysis of APMS based on the substability property of chosen paths is presented. APMS has been implemented in the SSF network simulator, and simulation results for different performance metrics are presented. The metrics capture the dynamic performance (in terms of instantaneous throughput, delay, routing load, etc.) of APMS and other competing solutions, thus exposing often neglected aspects of performance.
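A rough sketch of the feedback idea follows. The actual preference-update and flap-classification rules of APMS are in the paper, not the abstract, so the threshold, counters and update steps here are illustrative assumptions only:

```python
# Illustrative sketch of flap-driven preference adjustment, in the spirit of
# APMS; the constant and update rules are invented for illustration.
FLAP_THRESHOLD = 3  # assumed number of flaps before entering control mode

class PathState:
    def __init__(self, base_preference):
        self.preference = base_preference
        self.flaps = 0

    def record_flap(self, topology_change=False):
        # Flaps caused by topology changes must not be mistaken for policy
        # conflicts, so they leave the preference untouched.
        if topology_change:
            return
        self.flaps += 1
        if self.flaps >= FLAP_THRESHOLD:   # conflict-control mode
            self.preference -= 1           # favour observably stable paths

    def record_stable_interval(self):
        self.flaps = max(0, self.flaps - 1)  # decay back towards avoidance mode
```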

Relevance: 30.00%

Abstract:

A series of short- and long-term service load tests was undertaken on the sixth floor of the full-scale, seven-storey reinforced concrete building at the Large Building Test Facility of the Building Research Establishment at Cardington. By using internally strain-gauged reinforcing bars cast into an internal and an external floor bay during the construction process, it was possible to gain a detailed record of slab strains resulting from the application of several arrangements of test loads. Short-term tests were conducted in December 1998, and long-term monitoring then ensued until April 2001. This paper describes the test programmes and presents results indicating slab behaviour under the various loading regimes.

Relevance: 30.00%

Abstract:

BACKGROUND: Acute ankle sprains are usually managed functionally, with advice to undertake progressive weight-bearing and walking. Mechanical loading is an important modulator of tissue repair; therefore, the clinical effectiveness of walking after ankle sprain may be dose dependent. The intensity, magnitude and duration of load associated with current functional treatments for ankle sprain are unclear.

AIM: To describe physical activity (PA) in the first week after ankle sprain and to compare results with a healthy control group.

METHODS: Participants (16-65 years) with an acute ankle sprain were randomised into two groups (standard or exercise). Both groups were advised to apply ice and compression, and walk within the limits of pain. The exercise group undertook additional therapeutic exercises. PA was measured using an activPAL accelerometer, worn for 7 days after injury. Comparisons were made with a non-injured control group.

RESULTS: The standard group were significantly less active (1.2 ± 0.4 h activity/day; 5621 ± 2294 steps/day) than the exercise group (1.7 ± 0.7 h/day, p=0.04; 7886 ± 3075 steps/day, p=0.03) and the non-injured control group (1.7 ± 0.4 h/day, p=0.02; 8844 ± 2185 steps/day, p=0.002). Also, compared with the non-injured control group, the standard and exercise groups spent less time in moderate-intensity (38.3 ± 12.7 min/day vs 14.5 ± 11.4 min/day, p=0.001, and 22.5 ± 15.9 min/day, p=0.003) and high-intensity activity (4.1 ± 6.9 min/day vs 0.1 ± 0.1 min/day, p=0.001, and 0.62 ± 1.0 min/day, p=0.005).

CONCLUSION: PA patterns are reduced in the first week after ankle sprain, and this reduction is partly ameliorated by the addition of therapeutic exercises. This study represents the first step towards developing evidence-based walking prescription after acute ankle sprain.

Relevance: 30.00%

Abstract:

Scrapers have established an important position in the earthmoving field, as they are independently capable of accomplishing an earthmoving operation. Given that loading a scraper to its capacity does not yield its maximum production, optimizing the scraper's loading time is an essential prerequisite for successful operations management. The relevant literature addresses loading time optimization through a graphical method founded on the invalid assumption that the hauling time is independent of the load time. To correct this, a new algorithmic optimization method that incorporates the golden section search and the bisection algorithm is proposed. Comparison of the results derived from the proposed and the existing methods demonstrates that the latter entails a systematic, needless prolongation of the loading stage, resulting in reduced hourly production and increased cost. The proposed method therefore achieves an improved modeling of scraper earthmoving operations and contributes toward more efficient cost management.
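To illustrate the kind of search involved (the paper's actual production model and its bisection component are not given in the abstract), the sketch below uses a golden-section search to find the load time that maximizes hourly production, with a hypothetical diminishing-returns payload curve and a haul time that grows with payload; all functions and constants are illustrative assumptions:

```python
import math

INV_PHI = (math.sqrt(5) - 1) / 2  # inverse golden ratio, ~0.618

def golden_section_max(f, a, b, tol=1e-5):
    """Locate the maximizer of a unimodal function f on [a, b]."""
    while b - a > tol:
        c = b - INV_PHI * (b - a)
        d = a + INV_PHI * (b - a)
        if f(c) > f(d):
            b = d  # maximum lies in [a, d]
        else:
            a = c  # maximum lies in [c, b]
    return (a + b) / 2

# Hypothetical model: payload (tonnes) grows with diminishing returns as
# loading continues, and the haul time depends on the carried payload,
# exactly the dependence the existing graphical method ignores.
def payload(t_load):                       # t_load in minutes
    return 34.0 * (1.0 - math.exp(-t_load / 0.6))

def cycle_time(t_load):                    # minutes per scraper cycle
    haul_and_return = 3.0 + 0.05 * payload(t_load)
    return t_load + haul_and_return + 1.5  # 1.5 min fixed spread/manoeuvre

def hourly_production(t_load):             # tonnes per hour
    return 60.0 * payload(t_load) / cycle_time(t_load)

best = golden_section_max(hourly_production, 0.1, 3.0)
print(f"optimal load time ~ {best:.2f} min, "
      f"production ~ {hourly_production(best):.1f} t/h")
```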


Relevance: 30.00%

Abstract:

This study characterizes domestic loads suitable for participation in a load participation scheme, with the aim of making the power system more efficient in both carbon and economic terms by shifting the electricity demand profile towards periods of plentiful renewable in-feed.

A series of experiments was performed on a common fridge-freezer, both completely empty and half full. The results presented include the ambient temperature, the temperatures inside the fridge, the fridge drawer and the freezer, the thermal time constants, the power consumption and the electric energy consumed.

The thermal time constants obtained clearly demonstrate the potential of such refrigeration loads for Smart Customer Load Participation.
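The abstract does not state the model behind the reported time constants, but for a first-order (single-lump) thermal model, a standard assumption for refrigeration compartments, the internal temperature after the compressor switches off relaxes towards ambient as:

```latex
% First-order thermal model (assumed here; not stated in the abstract).
% T_a is the ambient temperature, T_0 the temperature when the compressor
% stops, and tau the thermal time constant of the compartment.
T(t) = T_a + \left(T_0 - T_a\right) e^{-t/\tau}
```

A larger time constant lets the compartment stay within its temperature band for longer with the compressor off, which is what makes refrigeration loads attractive for shifting demand towards periods of renewable in-feed.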

Relevance: 30.00%

Abstract:

This paper presents the application of the on-load exciting current Park's Vector Approach for diagnosing permanent and intermittent turn-to-turn winding faults in operating power transformers. First, an experimental investigation of the behaviour of the transformer under the occurrence of both permanent and intermittent winding faults is presented. Experimental test results then demonstrate the effectiveness of the proposed diagnostic technique, which is based on the on-line monitoring of the on-load exciting current Park's Vector patterns.
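For reference, the Park's Vector is formed from the three phase currents with the standard transform below; under healthy, balanced conditions its (i_D, i_Q) locus is ideally a circle centred at the origin, and winding faults distort that pattern (the specific fault signatures are detailed in the paper, not reproduced here):

```latex
% Standard Park's Vector components computed from the three phase
% currents i_a, i_b, i_c (here, the on-load exciting current):
i_D = \frac{\sqrt{2}}{\sqrt{3}}\, i_a - \frac{1}{\sqrt{6}}\, i_b - \frac{1}{\sqrt{6}}\, i_c,
\qquad
i_Q = \frac{1}{\sqrt{2}}\, i_b - \frac{1}{\sqrt{2}}\, i_c
```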

Relevance: 30.00%

Abstract:

Power systems have been undergoing huge changes, mainly due to the substantial increase of distributed generation and to operation in competitive environments. Virtual power players can aggregate a diversity of players, namely generators and consumers, and a diversity of energy resources, including electricity generation based on several technologies, storage and demand response. Resource management gains increasing relevance in this competitive context, while the demand side's active role provides managers with increased demand elasticity. This makes the use of demand response more interesting and flexible, giving rise to a wide range of new opportunities. This paper proposes a methodology for managing demand response programs in the scope of virtual power players. The proposed method is based on the calculation of locational marginal prices (LMP). The evaluation of the impact of using specific demand response programs on the LMP value supports the manager's decision concerning demand response use. The proposed method has been computationally implemented, and its application is illustrated in this paper using a 32-bus network with intensive use of distributed generation.
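To make the LMP idea concrete (this is an illustrative toy, not the paper's 32-bus network or its method): the LMP at a bus is the extra system cost of serving one more unit of demand there, so demand response that relieves congestion can lower the local price. A minimal two-bus sketch, with all network data invented:

```python
# Toy 2-bus example of how demand response changes locational marginal
# prices (LMPs); computed by finite difference on a DC dispatch.
from scipy.optimize import linprog

def dispatch_cost(d1, d2):
    # Variables x = [g1, g2, flow_1to2]; generation costs 10 and 30 $/MWh.
    c = [10.0, 30.0, 0.0]
    # Nodal balance at each bus: g1 - flow = d1 ; g2 + flow = d2.
    A_eq = [[1, 0, -1], [0, 1, 1]]
    bounds = [(0, 80), (0, 100), (-60, 60)]  # gen limits; 60 MW line limit
    res = linprog(c, A_eq=A_eq, b_eq=[d1, d2], bounds=bounds, method="highs")
    return res.fun

def lmp_bus2(d2, eps=1.0):
    # Extra cost of serving one more MW at bus 2 = the LMP there.
    return dispatch_cost(0.0, d2 + eps) - dispatch_cost(0.0, d2)

print(lmp_bus2(100.0))  # line congested: marginal MW comes from the 30 $/MWh unit
print(lmp_bus2(50.0))   # after demand response relieves congestion: LMP drops to 10
```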

Relevance: 30.00%

Abstract:

The large penetration of intermittent resources, such as solar and wind generation, involves the use of storage systems in order to improve power system operation. Electric Vehicles (EVs) with gridable (V2G) capability can operate as a means of storing energy. This paper proposes an algorithm to be included in a SCADA (Supervisory Control and Data Acquisition) system, which performs an intelligent management of three types of consumers: domestic, commercial and industrial, and includes the joint management of loads and of the charge/discharge of EV batteries. The proposed methodology has been implemented in a SCADA system developed by the authors of this paper, the SCADA House Intelligent Management (SHIM) system. Any event in the system, such as a Demand Response (DR) event, triggers an optimization algorithm that performs the optimal energy resource scheduling (including loads and EVs), taking into account the priorities of each load defined by the installation's users. A case study considering a specific consumer with several loads and EVs is presented in this paper.
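A minimal sketch of this kind of priority-respecting reaction to a DR event follows. SHIM's actual optimization algorithm is not given in the abstract, so the greedy rule, the load list, the priorities and the EV model below are all hypothetical:

```python
# Hypothetical greedy response to a DR event: EV discharge offsets demand
# first, then the lowest-priority loads are curtailed until the cap is met.
def respond_to_dr_event(loads, ev_discharge_kw, power_cap_kw):
    """loads: list of (name, demand_kw, priority); higher priority is kept longer.
    Returns the names of loads to curtail so net demand fits under the cap."""
    net_demand = sum(d for _, d, _ in loads) - ev_discharge_kw  # V2G offsets load
    curtailed = []
    for name, demand_kw, _ in sorted(loads, key=lambda l: l[2]):  # lowest first
        if net_demand <= power_cap_kw:
            break
        net_demand -= demand_kw
        curtailed.append(name)
    return curtailed

# Example: with a 2 kW cap and the EVs discharging 1 kW, the lowest-priority
# load (the water heater) is shed and the rest stay on.
loads = [("water heater", 1.5, 1), ("air conditioning", 2.0, 2), ("fridge", 0.2, 5)]
print(respond_to_dr_event(loads, ev_discharge_kw=1.0, power_cap_kw=2.0))
```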

Relevance: 30.00%

Abstract:

This paper proposes a simulated annealing (SA) approach to address energy resource management from the point of view of a virtual power player (VPP) operating in a smart grid. Distributed generation, demand response, and gridable vehicles are intelligently managed on a multiperiod basis according to V2G users' profiles and requirements. Apart from using the aggregated resources, the VPP can also purchase additional energy from a set of external suppliers. The paper includes a case study for a 33-bus distribution network with 66 generators, 32 loads, and 1000 gridable vehicles. The results of the SA approach are compared with those of a methodology based on mixed-integer nonlinear programming. A variation of this method using AC load flow is also applied, and the results are compared with the SA solution using network simulation. The proposed SA approach proved able to obtain good solutions in low execution times, providing VPPs with suitable decision support for the management of a large number of distributed resources.
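For readers unfamiliar with the metaheuristic, a generic SA skeleton is sketched below; the cooling schedule, move operator and toy cost function are illustrative assumptions, not the paper's multiperiod scheduling formulation:

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=100.0, alpha=0.95, iters=10000):
    x, fx = x0, cost(x0)
    best, f_best = x, fx
    t = t0
    for _ in range(iters):
        y = neighbour(x)
        fy = cost(y)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature cools (Metropolis criterion).
        if fy <= fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < f_best:
                best, f_best = x, fx
        t *= alpha  # geometric cooling
    return best, f_best

# Toy usage: minimise a quadratic by randomly nudging a single variable.
sol, val = simulated_annealing(lambda x: (x - 3.0) ** 2,
                               lambda x: x + random.uniform(-0.5, 0.5),
                               x0=0.0)
print(sol, val)
```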

Relevance: 30.00%

Abstract:

Intensive use of Distributed Generation (DG) represents a change in the paradigm of power system operation, making small-scale energy generation and storage decision making relevant for the whole system. This paradigm led to the concept of the smart grid, for which efficient management, in both technical and economic terms, should be assured. This paper presents a new approach to solving the economic dispatch in smart grids. The proposed methodology for resource management involves two stages. The first uses fuzzy set theory to define range forecasts for the natural resources as well as the load forecast. The second stage uses heuristic optimization to determine the economic dispatch, considering the generation forecast, storage management and demand response.
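As a concrete illustration of the first stage's idea, an uncertain forecast can be represented as a fuzzy range. The abstract does not say which membership shape the authors use; a triangular membership function is a common choice, and the example numbers below are invented:

```python
# Triangular fuzzy number as a range forecast (assumed shape, not the
# paper's stated choice): full membership at the most plausible value,
# tapering to zero at the ends of the forecast range.
def triangular_membership(x, low, mode, high):
    """Degree (0..1) to which value x belongs to the fuzzy forecast (low, mode, high)."""
    if x <= low or x >= high:
        return 0.0
    if x <= mode:
        return (x - low) / (mode - low)
    return (high - x) / (high - mode)

# Example: wind generation forecast between 2 and 8 MW, most plausible 5 MW.
print(triangular_membership(4.0, 2.0, 5.0, 8.0))  # -> 0.666...
```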