967 results for time management


Relevance: 30.00%

Abstract:

The exploding demand for services like the World Wide Web reflects the potential presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing: the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way and with performance guarantees. We attack this problem on four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework, with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, etc. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must reach the network layer, which can provide the basic guarantees of bandwidth, latency, and reliability. The third area is therefore a set of new techniques in network service and protocol design. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting.
This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
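To make the idea of selecting among algorithms of varying precision under real-time constraints concrete, here is a minimal Python sketch. The names (ResourceRegistry, choose_algorithm) and the deadline-based selection rule are illustrative assumptions, not the project's actual Resource Management Interface or protocols.

```python
# Minimal sketch: a registry of advertised capacity plus deadline-driven
# selection among algorithm variants of differing precision.
# All names and the selection rule are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Algorithm:
    name: str
    est_runtime_s: float   # profiled worst-case runtime on the target resource
    accuracy: float        # relative accuracy of its representation

@dataclass
class ResourceRegistry:
    """Collects availability information advertised by self-managed sites."""
    available: Dict[str, float] = field(default_factory=dict)  # host -> free CPU fraction

    def advertise(self, host: str, free_cpu: float) -> None:
        self.available[host] = free_cpu

def choose_algorithm(candidates: List[Algorithm], deadline_s: float) -> Algorithm:
    """Pick the most accurate variant whose runtime meets the deadline."""
    feasible = [a for a in candidates if a.est_runtime_s <= deadline_s]
    if not feasible:
        raise RuntimeError("no variant meets the real-time constraint")
    return max(feasible, key=lambda a: a.accuracy)

registry = ResourceRegistry()
registry.advertise("host-a", 0.8)
variants = [Algorithm("coarse", 0.5, 0.80), Algorithm("fine", 5.0, 0.99)]
print(choose_algorithm(variants, deadline_s=2.0).name)  # -> coarse
```

The registry here only shows where disseminated availability information would feed the selection; the actual protocols would propagate it among sites.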

Relevance: 30.00%

Abstract:

The proliferation of inexpensive workstations and networks has created a new era in distributed computing. At the same time, non-traditional applications such as computer-aided design (CAD), computer-aided software engineering (CASE), geographic-information systems (GIS), and office-information systems (OIS) have placed increased demands for high-performance transaction processing on database systems. The combination of these factors gives rise to significant challenges in the design of modern database systems. In this thesis, we propose novel techniques whose aim is to improve the performance and scalability of these new database systems. These techniques exploit client resources through client-based transaction management. Client-based transaction management is realized by providing logging facilities locally even when data is shared in a global environment. This thesis presents several recovery algorithms which utilize client disks for storing recovery related information (i.e., log records). Our algorithms work with both coarse and fine-granularity locking and they do not require the merging of client logs at any time. Moreover, our algorithms support fine-granularity locking with multiple clients permitted to concurrently update different portions of the same database page. The database state is recovered correctly when there is a complex crash as well as when the updates performed by different clients on a page are not present on the disk version of the page, even though some of the updating transactions have committed. This thesis also presents the implementation of the proposed algorithms in a memory-mapped storage manager as well as a detailed performance study of these algorithms using the OO1 database benchmark. The performance results show that client-based logging is superior to traditional server-based logging. This is because client-based logging is an effective way to reduce dependencies on server CPU and disk resources and, thus, prevents the server from becoming a performance bottleneck as quickly when the number of clients accessing the database increases.
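A minimal sketch of the client-based logging idea follows, assuming simplified record and page structures; the thesis's actual algorithms additionally handle coarse-granularity locking, commit ordering, and complex crash cases not shown here.

```python
# Minimal sketch: per-client write-ahead logs that are never merged,
# with slot-level redo so different clients can update different
# portions of the same page. Illustrative structures only.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class LogRecord:
    lsn: int        # client-local log sequence number
    page_id: int
    slot: int       # sub-page object updated (fine-granularity locking)
    redo: bytes     # after-image of the updated slot

class ClientLog:
    """Each client appends recovery records to its own local disk log."""
    def __init__(self) -> None:
        self.records: List[LogRecord] = []
        self._next_lsn = 0

    def append(self, page_id: int, slot: int, redo: bytes) -> int:
        rec = LogRecord(self._next_lsn, page_id, slot, redo)
        self.records.append(rec)
        self._next_lsn += 1
        return rec.lsn

def redo_client(log: ClientLog,
                pages: Dict[int, Dict[int, bytes]],
                applied_lsn: Dict[Tuple[int, int], int]) -> None:
    """Reapply this client's updates missing from the disk page versions.
    applied_lsn maps (page, slot) to the last LSN of *this* client already
    on disk; slot-level tracking lets clients recover independently
    without ever merging their logs."""
    for rec in log.records:
        if rec.lsn > applied_lsn.get((rec.page_id, rec.slot), -1):
            pages.setdefault(rec.page_id, {})[rec.slot] = rec.redo

log = ClientLog()
log.append(page_id=7, slot=0, redo=b"v2")
pages: Dict[int, Dict[int, bytes]] = {7: {0: b"v1"}}
redo_client(log, pages, applied_lsn={})
print(pages[7][0])  # b'v2'
```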

Relevance: 30.00%

Abstract:

We propose and evaluate admission control mechanisms for ACCORD, an Admission Control and Capacity Overload management Real-time Database framework (an architecture and a transaction model) for hard-deadline RTDB systems. The system architecture consists of admission control and scheduling components which provide early notification of failure to submitted transactions that are deemed not valuable or incapable of completing on time. In particular, our Concurrency Admission Control Manager (CACM) ensures that admitted transactions do not overburden the system by requiring a level of concurrency that is not sustainable. The transaction model consists of two components: a primary task and a compensating task. The execution requirements of the primary task are not known a priori, whereas those of the compensating task are known a priori. Upon the submission of a transaction, the admission control mechanisms are employed to decide whether to admit or reject that transaction. Once admitted, a transaction is guaranteed to finish executing before its deadline. A transaction is considered to have finished executing if exactly one of two things occurs: either its primary task is completed (successful commitment) or its compensating task is completed (safe termination). Committed transactions bring a profit to the system, whereas a terminated transaction brings no profit. The goal of the admission control and scheduling protocols (e.g., concurrency control, I/O scheduling, memory management) employed in the system is to maximize system profit. In that respect, we describe a number of concurrency admission control strategies and contrast their relative performance through simulations.
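As an illustration of how admission control can guarantee either successful commitment or safe termination, the sketch below admits a transaction only if every compensating task (whose cost is known a priori) remains schedulable. The EDF-style feasibility test and field names are assumptions for illustration, not ACCORD's actual CACM.

```python
# Minimal sketch: deadline-based admission with compensating tasks.
# A simple EDF feasibility check stands in for ACCORD's admission managers.
from dataclasses import dataclass
from typing import List

@dataclass
class Transaction:
    deadline: float    # absolute deadline (seconds from now)
    comp_time: float   # known a-priori cost of the compensating task
    value: float       # profit if the primary task commits

def admissible(admitted: List[Transaction], cand: Transaction,
               now: float = 0.0) -> bool:
    """Admit only if every compensating task can still finish by its
    deadline in EDF order, so 'safe termination' is always possible."""
    txns = sorted(admitted + [cand], key=lambda t: t.deadline)
    t = now
    for tx in txns:
        t += tx.comp_time      # worst case: all primary tasks must compensate
        if t > tx.deadline:
            return False
    return True

admitted: List[Transaction] = []
for tx in [Transaction(1.0, 0.3, 10), Transaction(1.2, 0.3, 5),
           Transaction(1.3, 0.9, 2)]:
    if admissible(admitted, tx):
        admitted.append(tx)    # guaranteed to commit or safely terminate
print(len(admitted))           # -> 2: the third transaction is rejected early
```

Rejecting the third transaction at submission time is exactly the "early notification of failure" the architecture aims for.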

Relevance: 30.00%

Abstract:

The advent of virtualization and cloud computing technologies necessitates the development of effective mechanisms for the estimation and reservation of resources needed by content providers to deliver large numbers of video-on-demand (VOD) streams through the cloud. Unfortunately, capacity planning for the QoS-constrained delivery of a large number of VOD streams is inherently difficult, as VBR encoding schemes exhibit significant bandwidth variability. In this paper, we present a novel resource management scheme that makes such allocation decisions using a mixture of per-stream reservations and an aggregate reservation, shared across all streams, to accommodate peak demands. The shared reservation provides capacity slack that enables statistical multiplexing of peak rates while assuring analytically bounded frame-drop probabilities, which can be adjusted by trading off buffer space (and consequently delay) against bandwidth. Our two-tiered bandwidth allocation scheme enables the delivery of any set of streams with less bandwidth (or, equivalently, with higher link utilization) than state-of-the-art deterministic smoothing approaches. The algorithm underlying our proposed framework uses three per-stream parameters and is linear in the number of servers, making it particularly well suited for use in an on-line setting. We present results from extensive trace-driven simulations, which confirm the efficiency of our scheme, especially for small buffer sizes and delay bounds, and which underscore the significant realizable bandwidth savings, typically yielding losses that are an order of magnitude or more below our analytically derived bounds.
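A rough sketch of the two-tiered idea follows, under simplifying assumptions (frame-synchronized traces, quantile-based per-stream parameters standing in for the paper's analytically derived ones):

```python
# Minimal sketch: per-stream reservations plus one shared slack pool
# sized for the multiplexed residual peaks. Illustrative only.
import numpy as np

def two_tier_reservation(traces, quantile=0.95, shared_quantile=0.999):
    """Reserve a per-stream rate covering most frames (tier 1), plus a
    shared reservation covering the multiplexed excess peaks (tier 2)."""
    traces = [np.asarray(t, dtype=float) for t in traces]
    per_stream = [np.quantile(t, quantile) for t in traces]       # tier 1
    residual = sum(np.clip(t - r, 0, None)
                   for t, r in zip(traces, per_stream))
    shared = np.quantile(residual, shared_quantile)               # tier 2
    return sum(per_stream) + shared

rng = np.random.default_rng(0)
vbr = [rng.lognormal(mean=1.0, sigma=0.5, size=10_000) for _ in range(50)]
deterministic = sum(t.max() for t in vbr)        # per-stream peak reservation
print(two_tier_reservation(vbr) / deterministic)  # < 1: bandwidth savings
```

The ratio printed at the end is the kind of savings the paper quantifies against deterministic (peak-rate or smoothed) reservations; the real scheme also bounds the resulting frame-drop probability analytically.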

Relevance: 30.00%

Abstract:

The conventional meaning of culture is ‘widely shared and strongly held values’ of a particular group or society (Bradley and Parker, 2006: 89). Culture is not a rigid concept; it can be influenced or altered by new ideas or forces. This research examines the ways in which one set of ideas in particular, namely those associated with New Public Management (NPM), has impacted upon the administrative culture of 'street-level' bureaucrats and professionals within Irish social policy. Lipsky (1980: 3) defined 'street-level' bureaucrats as ‘public service workers who interact directly with citizens in the course of their jobs, and who have substantial discretion in the execution of their work’. Utilising the Competing Values Framework (CVF) in the analysis of eighty-three semi-structured interviews with 'street-level' bureaucrats and professionals, an evaluation is made of the impact of NPM ideas on both visible and invisible aspects of administrative culture. Overall, the influence of NPM is confined to superficial aspects of administrative culture, such as increased flexibility in working hours and, to some degree, job contracts; increased time commitment; and a customer-service focus. However, the extent of these changes varies depending on policy sector and occupational group. Aspects of consensual and hierarchical cultures remain firmly in place. These coincide with features of developmental and market cultures. Contrary to the view that members of hierarchical and consensual cultures would resist change, this research clearly illustrates that a very large appetite for change exists in the attitudes of 'street-level' bureaucrats and professionals within Irish social policy, with many of them suggesting changes that correspond to NPM ideas. This study demonstrates the relevance of employing the CVF model, as it is clear that administrative culture is very much a dynamic system of competing and co-existing cultures.

Relevance: 30.00%

Abstract:

In some supply chains, materials are ordered periodically according to local information. This paper investigates how to improve the performance of such a supply chain. Specifically, we consider a serial inventory system in which each stage implements a local reorder interval policy; i.e., each stage orders up to a local base-stock level according to a fixed-interval schedule. A fixed cost is incurred for placing an order. Two improvement strategies are considered: (1) expanding the information flow by acquiring real-time demand information and (2) accelerating the material flow via flexible deliveries. The first strategy leads to a reorder interval policy with full information; the second strategy leads to a reorder point policy with local information. Both policies have been studied in the literature. Thus, to assess the benefit of these strategies, we analyze the local reorder interval policy. We develop a bottom-up recursion to evaluate the system cost and provide a method to obtain the optimal policy. A numerical study shows the following: increasing the flexibility of deliveries lowers costs more than does expanding the information flow; the fixed order costs and the system lead times are key drivers that determine the effectiveness of these improvement strategies. In addition, we find that using the optimal batch sizes of the reorder point policy together with the demand rate to infer reorder intervals may lead to significant cost inefficiency. © 2010 INFORMS.
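The local reorder interval policy under study can be illustrated with a small single-stage simulation; the parameters and Poisson demand below are hypothetical, and the paper's bottom-up cost recursion and optimization method are not reproduced.

```python
# Minimal sketch: one stage following a fixed-interval, order-up-to
# (local base-stock) policy with a fixed cost per order. Illustrative only.
import numpy as np

def simulate_stage(T=4, base_stock=60, lead_time=2, horizon=10_000,
                   fixed_cost=50.0, hold=1.0, backlog=9.0, mean_demand=10.0):
    rng = np.random.default_rng(1)
    inv_pos = on_hand = base_stock
    pipeline = []                        # (arrival_period, quantity)
    cost = 0.0
    for t in range(horizon):
        on_hand += sum(q for a, q in pipeline if a == t)   # receive deliveries
        pipeline = [(a, q) for a, q in pipeline if a != t]
        if t % T == 0:                   # review only on the fixed schedule
            q = base_stock - inv_pos     # order up to the local base-stock level
            if q > 0:
                pipeline.append((t + lead_time, q))
                inv_pos += q
                cost += fixed_cost       # fixed cost per order placed
        d = rng.poisson(mean_demand)
        on_hand -= d
        inv_pos -= d
        cost += hold * max(on_hand, 0) + backlog * max(-on_hand, 0)
    return cost / horizon

print(f"avg cost/period: {simulate_stage():.2f}")
```

Sweeping T and the base-stock level in such a simulation mimics, crudely, the trade-off the paper evaluates exactly with its recursion: longer intervals save fixed order costs but raise holding/backlog costs.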

Relevance: 30.00%

Abstract:

BACKGROUND: There have been major changes in the management of anemia in US hemodialysis patients in recent years. We sought to determine the influence of clinical trial results, safety regulations, and changes in reimbursement policy on practice. METHODS: We examined indicators of anemia management among incident and prevalent hemodialysis patients from a medium-sized dialysis provider over three time periods: (1) 2004 to 2006, (2) 2007 to 2009, and (3) 2010. Trends across the three time periods were compared using generalized estimating equations. RESULTS: Prior to 2007, the median proportion of patients with monthly hemoglobin >12 g/dL for patients on dialysis 0 to 3, 4 to 6, and 7 to 18 months, respectively, was 42%, 55%, and 46%; it declined to 41%, 54%, and 40% after 2007, and declined more sharply in 2010 to 34%, 41%, and 30%. Median weekly Epoetin alfa doses over the same periods were 18,000, 12,400, and 9,100 units before 2007; remained relatively unchanged from 2007 to 2009; and decreased sharply in 2010, to 10,200 and 7,800 units, respectively, in the patients 3-6 and 6-18 months on dialysis. Iron doses, serum ferritin, and transferrin saturation levels increased over time, with more pronounced increases in 2010. CONCLUSION: Modest changes in anemia management occurred between 2007 and 2009, followed by more dramatic changes in 2010. Studies are needed to examine the effects of declining erythropoietin use and hemoglobin levels, and of increasing intravenous iron use, on quality of life, transplantation rates, infection rates, and survival.

Relevance: 30.00%

Abstract:

Gemstone Team Future Firefighting Advancements

Relevance: 30.00%

Abstract:

An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.

This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.

On-demand digital-print service is a representative enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital-print-service provider (PSP), to evaluate our optimization algorithms.

In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
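A minimal sketch of the incremental-GA idea is given below, assuming a deliberately simplified fitness (makespan on identical machines) in place of RPI's complex fulfillment procedures; the key point illustrated is that when new orders arrive, the evolved population is extended and reused rather than restarted from scratch.

```python
# Minimal sketch: an incremental genetic algorithm over order-dispatch
# sequences. Fitness, operators, and parameters are illustrative.
import random

def makespan(seq, proc_times, n_machines=3):
    loads = [0.0] * n_machines
    for order in seq:
        m = loads.index(min(loads))   # dispatch next order to least-loaded machine
        loads[m] += proc_times[order]
    return max(loads)

def ga_step(pop, proc_times, rng):
    pop.sort(key=lambda s: makespan(s, proc_times))
    next_pop = pop[:2]                            # elitism
    while len(next_pop) < len(pop):
        a, b = rng.sample(pop[:5], 2)             # tournament among the best
        cut = rng.randrange(1, len(a))            # order-preserving crossover
        child = a[:cut] + [o for o in b if o not in a[:cut]]
        i, j = rng.randrange(len(child)), rng.randrange(len(child))
        child[i], child[j] = child[j], child[i]   # swap mutation
        next_pop.append(child)
    return next_pop

rng = random.Random(0)
times = {o: rng.uniform(1, 10) for o in range(20)}
pop = [rng.sample(list(times), len(times)) for _ in range(20)]
for _ in range(50):
    pop = ga_step(pop, times, rng)

# Incremental step: new orders arrive; extend the evolved population
# instead of restarting, which is what makes the approach fast online.
for o in range(20, 25):
    times[o] = rng.uniform(1, 10)
pop = [s + rng.sample(range(20, 25), 5) for s in pop]
for _ in range(20):
    pop = ga_step(pop, times, rng)
best = min(pop, key=lambda s: makespan(s, times))
print(round(makespan(best, times), 2))
```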

We next discuss analysis and prediction of different attributes involved in hierarchical components of an enterprise. We start from a study of the fundamental processes related to real-time prediction. Our process-execution time and process-status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, they also perform a probabilistic estimation of the predicted status. An order generally consists of multiple series and parallel processes. We next introduce an order-fulfillment prediction model that combines the advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce an enterprise's late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis, and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.
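A minimal sketch of the decompose-then-aggregate prediction strategy follows, using a simple moving-average decomposition and naive component forecasts in place of the thesis's univariate and multivariate models:

```python
# Minimal sketch: split a series into trend/seasonal/residual components,
# forecast each separately, and aggregate. Illustrative methods only.
import numpy as np

def decompose(y, period=7):
    """Trend via moving average, seasonal via per-phase means, rest residual."""
    trend = np.convolve(y, np.ones(period) / period, mode="same")
    detrended = y - trend
    cycle = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(cycle, len(y) // period + 1)[: len(y)]
    return trend, seasonal, y - trend - seasonal

def forecast(y, period=7, h=7):
    trend, seasonal, resid = decompose(y, period)
    slope = trend[-1] - trend[-2]
    f_trend = trend[-1] + slope * np.arange(1, h + 1)    # linear trend carry-forward
    f_seasonal = np.tile(seasonal[:period], 2)[len(y) % period:][:h]
    f_resid = np.full(h, resid[-period:].mean())         # near zero by construction
    return f_trend + f_seasonal + f_resid                # aggregate the components

rng = np.random.default_rng(0)
t = np.arange(200)
y = 0.05 * t + 3 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 0.3, 200)
print(np.round(forecast(y), 2))
```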

In summary, this thesis research has led to a set of characterization, optimization, and prediction tools for an EIS to derive insightful knowledge from data and use it as guidance for production management. It is expected to provide solutions for enterprises to increase reconfigurability, accomplish more automated procedures, and obtain data-driven recommendations for effective decisions.

Relevance: 30.00%

Abstract:

© 2015 Chinese Nursing Association. Background: Although self-management approaches have shown strong evidence of positive outcomes for urinary incontinence prevention and management, few programs have been developed for Korean rural communities. Objectives: This pilot study aimed to develop, implement, and evaluate a urinary incontinence self-management program for community-dwelling women aged 55 and older with urinary incontinence in rural South Korea. Methods: This study used a one-group pre-/post-test design to measure the effects of the intervention using standardized urinary incontinence symptom, knowledge, and attitude measures. Seventeen community-dwelling older women completed weekly 90-min group sessions for 5 weeks. Descriptive statistics and paired t-tests were used to analyze data. Results: The mean overall interference with daily life from urine leakage (pre-test: M = 5.76 ± 2.68, post-test: M = 2.29 ± 1.93, t = -4.609, p < 0.001) and the sum of International Consultation on Incontinence Questionnaire scores (pre-test: M = 11.59 ± 3.00, post-test: M = 5.29 ± 3.02, t = -5.881, p < 0.001) indicated significant improvement after the intervention. Improvement was also noted in the mean knowledge (pre-test: M = 19.07 ± 3.34, post-test: M = 23.15 ± 2.60, t = 7.550, p < 0.001) and attitude scores (pre-test: M = 2.64 ± 0.19, post-test: M = 3.08 ± 0.41, t = 5.150, p < 0.001). Weekly assignments were completed 82.4% of the time. Participants showed a high satisfaction level (M = 26.82 ± 1.74, range 22-28) with the group program. Conclusions: Implementation of a urinary incontinence self-management program was accompanied by improved outcomes for Korean older women living in rural communities, who have scarce resources for urinary incontinence management and treatment. Urinary incontinence self-management education approaches have potential for widespread implementation in nursing practice.
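For readers unfamiliar with the analysis, a paired t-test on pre/post scores can be computed as below; the score vectors are made-up placeholders for seventeen participants, not the study's data.

```python
# Minimal sketch: paired t-test for a one-group pre-/post-test design.
# The numbers are hypothetical placeholders, not the study's measurements.
from scipy import stats

pre  = [6, 8, 5, 7, 4, 9, 6, 5, 7, 6, 8, 5, 4, 6, 7, 5, 6]
post = [3, 4, 2, 3, 1, 5, 2, 2, 3, 2, 4, 2, 1, 2, 3, 2, 2]
t, p = stats.ttest_rel(pre, post)   # paired: the same 17 women measured twice
print(f"t = {t:.3f}, p = {p:.4f}")
```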

Relevance: 30.00%

Abstract:

BACKGROUND: Several trials have demonstrated the efficacy of nurse telephone case management for diabetes (DM) and hypertension (HTN) in academic or vertically integrated systems. Little is known about the real-world potency of these interventions. OBJECTIVE: To assess the effectiveness of nurse behavioral management of DM and HTN in community practices among patients with both diseases. DESIGN: The study was designed as a patient-level randomized controlled trial. PARTICIPANTS: Participants included adult patients with both type 2 DM and HTN who were receiving care at one of nine community fee-for-service practices. Subjects were required to have inadequately controlled DM (hemoglobin A1c [A1c] ≥ 7.5%) but could have well-controlled HTN. INTERVENTIONS: All patients received a call from a nurse experienced in DM and HTN management once every two months over a period of two years, for a total of 12 calls. Intervention patients received tailored DM- and HTN-focused behavioral content; control patients received non-tailored, non-interactive information regarding health issues unrelated to DM and HTN (e.g., skin cancer prevention). MAIN OUTCOMES AND MEASURES: Systolic blood pressure (SBP) and A1c were co-primary outcomes, measured at 6, 12, and 24 months; 24 months was the primary time point. RESULTS: Three hundred seventy-seven subjects were enrolled; 193 were randomized to intervention, 184 to control. Subjects were 55% female and 50% white; the mean baseline A1c was 9.1% (SD = 1%) and mean SBP was 142 mmHg (SD = 20). Eighty-two percent of scheduled interviews were conducted; 69% of intervention patients and 70% of control patients reached the 24-month time point. Expressing model-estimated differences as (intervention minus control), at 24 months intervention patients had similar A1c [diff = 0.1%, 95% CI (-0.3, 0.5), p = 0.51] and SBP [diff = -0.9 mmHg, 95% CI (-5.4, 3.5), p = 0.68] values compared to control patients. Likewise, DBP (diff = 0.4 mmHg, p = 0.76), weight (diff = 0.3 kg, p = 0.80), and physical activity levels (diff = 153 MET-min/week, p = 0.41) were similar between control and intervention patients. Results were also similar at the 6- and 12-month time points. CONCLUSIONS: In nine community fee-for-service practices, telephonic nurse case management did not lead to improvement in A1c or SBP. Gains seen in telephonic behavioral self-management interventions in optimal settings may not translate to the wider range of primary care settings.

Relevance: 30.00%

Abstract:

The electronics industry is developing rapidly, together with the increasingly complex problem of microelectronic equipment cooling. It has now become necessary for thermal design engineers to consider the problem of equipment cooling at some level. The use of Computational Fluid Dynamics (CFD) for such investigations is fast becoming a powerful and almost essential tool for the design, development, and optimisation of engineering applications. However, turbulence models remain a key issue when tackling such flow phenomena. The reliability of CFD analysis depends heavily on the turbulence model employed together with the wall functions implemented. In order to resolve the abrupt fluctuations experienced by the turbulent energy and other parameters at near-wall regions and shear layers, a particularly fine computational mesh is necessary, which inevitably increases the computer storage and run-time requirements. This paper discusses results from an investigation into the accuracy of currently used turbulence models. A newly formulated transitional hybrid turbulence model is also introduced, with comparisons against experimental data.
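As an example of the near-wall meshing arithmetic involved, the snippet below estimates the first-cell height needed for a target y+ using a standard flat-plate skin-friction correlation; the formula and the numbers are generic illustrations, not taken from the paper.

```python
# Minimal sketch: estimating first-cell height for a target y+ near a wall.
# Uses a generic flat-plate correlation; illustrative values only.
def first_cell_height(y_plus, u_inf, length, nu=1.5e-5, rho=1.2):
    re_x = u_inf * length / nu
    cf = 0.026 / re_x ** (1 / 7)       # flat-plate skin-friction estimate
    tau_w = 0.5 * cf * rho * u_inf ** 2
    u_tau = (tau_w / rho) ** 0.5       # friction velocity
    return y_plus * nu / u_tau         # y = y+ * nu / u_tau

# Wall functions typically assume y+ ~ 30+; resolving the viscous
# sublayer (as low-Re and hybrid models require) needs y+ ~ 1.
print(f"{first_cell_height(1.0, u_inf=5.0, length=0.3):.2e} m")
```

This is why resolving near-wall regions directly, rather than relying on wall functions, drives the mesh counts and run times mentioned above.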

Relevance: 30.00%

Abstract:

A cross-domain workflow application may be constructed using a standard reference model such as the one by the Workflow Management Coalition (WfMC) [7], but the requirements for this type of application are inherently different from one organization to another. The existing models, and the systems built around them, meet some but not all of the requirements of all the organizations involved in a collaborative process. Furthermore, the requirements change over time. This makes the applications difficult to develop and distribute. Service Oriented Architecture (SOA) based approaches such as BPEL (Business Process Execution Language) intend to provide a solution but fail to address the problems sufficiently, especially in situations where the expectations and skill levels of the users (e.g., the participants in the processes) in different organisations are likely to differ. In this paper, we discuss a design pattern that provides a novel approach towards a solution. In the solution, business users can design the applications at a high level of abstraction: the use cases and user interactions. The designs are documented and used, together with the data and events captured later that represent the user interactions with the systems, to feed an intermediate component local to the users, the IFM (InterFace Mapper), which bridges the gaps between the users and the systems. We discuss the main issues faced in design and prototyping. The approach alleviates the need for re-programming against the APIs of any back-end service, thus easing the development and distribution of the applications.
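A minimal sketch of the IFM idea follows, assuming a registry-style mapping from user interactions to back-end calls; the class name reuse, the mapping mechanism, and the example service are illustrative assumptions, not the paper's actual design.

```python
# Minimal sketch: an interface-mapper component that translates high-level
# user interactions into back-end service calls. Illustrative names only.
from typing import Any, Callable, Dict

class InterFaceMapper:
    """Bridges users and systems: a UI or workflow change needs a new
    mapping entry rather than re-programming against back-end APIs."""
    def __init__(self) -> None:
        self._mappings: Dict[str, Callable[[Dict[str, Any]], Any]] = {}

    def register(self, interaction: str,
                 call: Callable[[Dict[str, Any]], Any]) -> None:
        self._mappings[interaction] = call   # could be loaded from design docs

    def handle(self, interaction: str, event: Dict[str, Any]) -> Any:
        return self._mappings[interaction](event)

# Hypothetical back-end operation for one organisation's workflow step.
def submit_claim(event: Dict[str, Any]) -> str:
    return f"claim {event['claim_id']} routed to approval"

ifm = InterFaceMapper()
ifm.register("ClaimForm.submit", submit_claim)
print(ifm.handle("ClaimForm.submit", {"claim_id": "C-17"}))
```

Because each organisation populates its own mappings, the same application design can be distributed across domains whose back-end services and user expectations differ.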

Relevance: 30.00%

Abstract:

This paper describes a methodology for deploying flexible dynamic configuration into embedded systems whilst preserving the reliability advantages of static systems. The methodology is based on the concept of decision points (DPs), which are strategically placed to achieve fine-grained distribution of self-management logic to meet application-specific requirements. DP logic can be changed easily, and independently of the host component, enabling self-management behavior to be deferred beyond the point of system deployment. A transparent Dynamic Wrapper mechanism (DW) automatically detects and handles problems arising from the evaluation of self-management logic within each DP, and ensures that the dynamic aspects of the system collapse down to statically defined default behavior to ensure safety and correctness despite failures. Dynamic context management contributes to flexibility and removes the need for design-time binding of context providers and consumers, thus facilitating run-time composition and incremental component upgrade.
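A minimal sketch of a decision point guarded by a dynamic wrapper is shown below, under the assumption that "collapse to static default" simply means catching any failure of the pluggable logic; the names and the buffer-sizing example are illustrative.

```python
# Minimal sketch: a Decision Point with hot-swappable self-management
# logic, wrapped so failures fall back to a static default. Illustrative.
from typing import Callable, Optional

class DecisionPoint:
    """Holds replaceable self-management logic plus a static default."""
    def __init__(self, default: Callable[[dict], str]) -> None:
        self._default = default
        self._logic: Optional[Callable[[dict], str]] = None

    def set_logic(self, logic: Callable[[dict], str]) -> None:
        self._logic = logic        # deferred past deployment; swappable at run time

    def evaluate(self, ctx: dict) -> str:
        # Dynamic Wrapper: any failure collapses to the static default.
        if self._logic is not None:
            try:
                return self._logic(ctx)
            except Exception:
                self._logic = None   # disable the faulty logic
        return self._default(ctx)

dp = DecisionPoint(default=lambda ctx: "buffer=fixed:64KB")
dp.set_logic(lambda ctx: f"buffer=adaptive:{int(64 * ctx['load'])}KB")
print(dp.evaluate({"load": 1.5}))   # adaptive behavior
print(dp.evaluate({}))              # KeyError -> collapses to static default
print(dp.evaluate({"load": 2.0}))   # stays on the safe default
```

The host component only ever calls evaluate(), so it is unaffected by whether dynamic logic is present, replaced, or disabled, which is the transparency the methodology relies on.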

Relevance: 30.00%

Abstract:

This paper describes an autonomics development tool which serves as both a powerful and flexible policy-expression language and a policy-based framework that supports the integration and dynamic composition of several autonomic computing techniques including signal processing, automated trend analysis and utility functions. Each of these technologies has specific advantages and applicability to different types of dynamic adaptation. The AGILE platform enables seamless interoperability of the different technologies to each perform various aspects of self-management within a single application. Self-management behaviour is specified using the policy language semantics to bind the various technologies together as required. Since the policy semantics support run-time re-configuration, the self-management architecture is dynamically composable. The policy language and implementation library have integrated support for self-stabilising behaviour, enabling oscillation and other forms of instability to be handled at the policy level with very little effort on the part of the application developer. Example applications are presented to illustrate the integration of different autonomics techniques, and the achievement of dynamic composition.
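As a loose illustration of policy-level self-stabilisation, the sketch below uses hysteresis (two thresholds) so that adaptation decisions do not oscillate around a single boundary; AGILE expresses such behaviour in its policy language rather than in Python, and the thresholds here are hypothetical.

```python
# Minimal sketch: hysteresis-based self-stabilisation of an adaptation
# decision driven by a utility signal. Thresholds are illustrative.
class Policy:
    """Binds a monitored signal to an action, with separate enter/exit
    thresholds so the system does not flap between behaviours."""
    def __init__(self, high: float, low: float) -> None:
        assert low < high
        self.high, self.low = high, low
        self.state = "normal"

    def evaluate(self, utility: float) -> str:
        if self.state == "normal" and utility < self.low:
            self.state = "degraded"   # switch to cheaper behaviour
        elif self.state == "degraded" and utility > self.high:
            self.state = "normal"     # revert only once well clear of the threshold
        return self.state

p = Policy(high=0.7, low=0.4)
for u in [0.8, 0.45, 0.39, 0.55, 0.69, 0.75]:
    print(u, p.evaluate(u))   # values between 0.4 and 0.7 cause no flapping
```

Handling oscillation at the policy level, as here, is what lets the application developer get stable self-management behaviour "with very little effort", since no stabilisation code lives in the application itself.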