988 results for Cost centers


Relevance:

60.00%

Publisher:

Abstract:

Internship report presented to the Instituto de Contabilidade e Administração do Porto for the degree of Master in Accounting and Finance, under the supervision of Dr. Adalmiro Álvaro Malheiro de Castro Andrade Pereira

Relevance:

60.00%

Publisher:

Abstract:

Although several profiling techniques for identifying performance bottlenecks in logic programs have been developed, they are generally not automatic and in most cases do not provide enough information for identifying the root causes of such bottlenecks. This complicates using their results to guide performance improvement. We present a profiling method and tool that provides such explanations. Our profiler associates cost centers with certain program elements and can measure different types of resource-related properties that affect performance, preserving the precedence of cost centers in the call graph. It includes an automatic method for detecting procedures that are performance bottlenecks. The profiling tool has been integrated in a previously developed run-time checking framework to allow verification of certain properties when they cannot be verified statically. The approach allows checking global computational properties which require complex instrumentation tracking information about previous execution states, e.g., that the execution time accumulated by a given procedure is not greater than a given bound. We have built a prototype implementation, integrated it into the Ciao/CiaoPP system, and successfully applied it to performance improvement, automatic optimization (e.g., resource-aware specialization of programs), run-time checking, and debugging of global computational properties (e.g., resource usage) in Prolog programs.

Relevance:

40.00%

Publisher:

Abstract:

In the past few years, the virtual machine (VM) placement problem has been studied intensively and many algorithms have been proposed for it. However, those algorithms have not been widely used in today's cloud data centers because they do not consider the cost of migrating from the current VM placement to the new optimal placement. As a result, the gain from optimizing VM placement may be less than the migration cost it incurs. To address this issue, this paper presents a penalty-based genetic algorithm (GA) for the VM placement problem that considers the migration cost in addition to the energy consumption of the new VM placement and its total inter-VM traffic flow. The GA has been implemented and evaluated by experiments, and the experimental results show that the GA outperforms two well-known algorithms for the VM placement problem.
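The penalty-based fitness described above can be sketched as follows. The weights and per-unit costs here are assumptions for illustration, not the paper's parameters: a candidate placement is scored on energy, cross-server traffic, and migration count, with server capacity violations folded in through a penalty term.

```python
# Illustrative penalty-based GA fitness (assumed weights, not the paper's):
# lower is fitter; constraint violations are penalized rather than forbidden.

def fitness(placement, current, vm_load, server_cap, traffic, penalty=1000.0):
    used = {}
    for vm, srv in placement.items():
        used[srv] = used.get(srv, 0) + vm_load[vm]
    energy = 100.0 * len(set(placement.values()))    # assumed cost per active server
    migration = 10.0 * sum(placement[v] != current[v] for v in placement)
    inter = sum(w for (a, b), w in traffic.items()
                if placement[a] != placement[b])     # traffic crossing servers
    overload = sum(max(0, load - server_cap[s]) for s, load in used.items())
    return energy + inter + migration + penalty * overload

baseline = {"v1": "s1", "v2": "s1"}
score = fitness(baseline, baseline, {"v1": 1, "v2": 1},
                {"s1": 2, "s2": 2}, {("v1", "v2"): 5})  # no migration, no cross traffic
```

A GA would evaluate this fitness for each chromosome (a VM-to-server mapping) during selection; the penalty term lets infeasible placements stay in the population while steering search toward feasible ones.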

Relevance:

40.00%

Publisher:

Abstract:

Virtual Machine (VM) management is an obvious need in today's data centers for various management activities and is accomplished in two phases: finding an optimal VM placement plan and implementing that placement through live VM migrations. These phases give rise to two research problems: the VM placement problem (VMPP) and the VM migration scheduling problem (VMMSP). This research proposes and develops several evolutionary and heuristic algorithms to address the VMPP and VMMSP. Experimental results show the effectiveness and scalability of the proposed algorithms. Finally, a VM management framework has been proposed and developed to automate VM management activity in a cost-efficient way.

Relevance:

30.00%

Publisher:

Abstract:

Electricity cost has become a major expense for running data centers, and server consolidation using virtualization technology has been used as an important technique to improve their energy efficiency. In this research, a genetic algorithm and a simulated-annealing algorithm are proposed for the static virtual machine placement problem that considers the energy consumption in both the servers and the communication network, and a trading algorithm is proposed for dynamic virtual machine placement. Experimental results have shown that the proposed methods are more energy efficient than existing solutions.
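A generic simulated-annealing loop for static placement looks like the sketch below. This is an illustration of the technique, not the paper's algorithm: perturb a placement by moving one VM, always accept improvements, and accept worse moves with a temperature-dependent probability so the search can escape local minima.

```python
import math
import random

# Generic simulated-annealing sketch (illustrative, not the paper's algorithm).
def anneal(cost, placement, servers, steps=2000, t0=10.0, seed=1):
    rnd = random.Random(seed)
    cur, best = dict(placement), dict(placement)
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9                 # linear cooling
        cand = dict(cur)
        cand[rnd.choice(list(cand))] = rnd.choice(servers)   # move one VM
        delta = cost(cand) - cost(cur)
        if delta <= 0 or rnd.random() < math.exp(-delta / t):
            cur = cand
            if cost(cur) < cost(best):
                best = dict(cur)
    return best

def energy(p):
    return 100 * len(set(p.values()))   # assumed per-active-server energy model

result = anneal(energy, {"v1": "s1", "v2": "s2", "v3": "s3"}, ["s1", "s2", "s3"])
```

With an energy-only cost model the search consolidates VMs onto fewer servers; a network-aware model as in the paper would add inter-VM traffic terms to `cost`.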

Relevance:

30.00%

Publisher:

Abstract:

In the past few years, there has been a steady increase in the attention, importance and focus of green initiatives related to data centers. While various energy-aware measures have been developed for data centers, the need to improve the performance efficiency of application assignment at the same time remains unmet. For instance, many energy-aware measures applied to data centers trade off energy consumption against Quality of Service (QoS). To address this problem, this paper presents a novel concept of profiling to facilitate offline optimization for a deterministic assignment of applications to virtual machines. A profile-based model is then established for obtaining near-optimal allocations of applications to virtual machines with consideration of three major objectives: energy cost, CPU utilization efficiency and application completion time. From this model, a profile-based and scalable matching algorithm is developed to solve the profile-based model. The assignment efficiency of our algorithm is then compared with that of the Hungarian algorithm, which yields the optimal solution but does not scale well.
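The optimality-versus-scalability contrast can be shown on a toy cost matrix (invented for illustration). Brute-force enumeration stands in here for the Hungarian algorithm, which finds the optimal assignment in O(n^3); the greedy matcher scales much better but can miss the optimum.

```python
import itertools

# Toy contrast: exact assignment (standing in for the Hungarian algorithm)
# versus a scalable greedy matcher, on an assumed 2x2 app-to-VM cost matrix.

def optimal_assignment(cost):
    n = len(cost)
    return min(sum(cost[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

def greedy_assignment(cost):
    free = set(range(len(cost)))
    total = 0
    for row in cost:                          # each app takes the cheapest free VM
        j = min(free, key=lambda c: row[c])
        total += row[j]
        free.remove(j)
    return total

cost = [[1, 2],
        [2, 10]]
opt, grd = optimal_assignment(cost), greedy_assignment(cost)  # 4 vs 11
```

Greedy assigns the first app its cheapest VM and forces the second into the expensive slot; the optimal assignment pays slightly more for the first app and far less overall.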

Relevance:

30.00%

Publisher:

Abstract:

Blasting is an integral part of large-scale open-cut mining that often occurs in close proximity to population centers and often results in the emission of particulate material and gases potentially hazardous to health. Current air quality monitoring methods rely on limited numbers of fixed sampling locations to validate models of a complex fluid environment and to collect sufficient data to confirm model effectiveness. This paper describes the development of a methodology to address the need for a more precise approach capable of characterizing blasting plumes in near-real time. The integration of the system required the modification and integration of an opto-electrical dust sensor, the SHARP GP2Y10, into a small fixed-wing and a multi-rotor copter, resulting in the collection of data streamed during flight. The paper also describes the calibration of the optical sensor against an industry-grade dust-monitoring device, the Dusttrak 8520, demonstrating a high correlation between them, with correlation coefficients (R2) greater than 0.9. The laboratory and field tests demonstrate the feasibility of coupling the sensor with the UAVs. However, further work must be done in the areas of sensor selection and calibration as well as flight planning.
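Calibration against a reference instrument of the kind described above reduces to an ordinary least-squares fit plus an R^2 check. The readings below are synthetic stand-ins, not the study's data; the sketch shows the computation behind the "R^2 greater than 0.9" criterion.

```python
# Least-squares calibration sketch with invented readings (not the study's data).

def linear_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

def r_squared(x, y, slope, intercept):
    my = sum(y) / len(y)
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1.0 - ss_res / ss_tot            # 1 minus residual/total variance

sensor = [0.1, 0.4, 0.9, 1.3, 2.0]          # assumed raw sensor output (V)
reference = [12, 45, 98, 140, 215]          # assumed reference dust readings
slope, intercept = linear_fit(sensor, reference)
r2 = r_squared(sensor, reference, slope, intercept)
```

The fitted slope and intercept convert raw sensor voltages into reference-equivalent dust concentrations; R^2 quantifies how well that conversion holds across the range.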

Relevance:

30.00%

Publisher:

Abstract:

With data centers being the supporting infrastructure for a wide range of IT services, their efficiency has become a big concern to operators, as well as to society, for both economic and environmental reasons. The goal of this thesis is to design energy-efficient algorithms that reduce energy cost while minimizing compromise to service. We focus on the algorithmic challenges at different levels of energy optimization across the data center stack. The algorithmic challenge at the device level is to improve the energy efficiency of a single computational device via techniques such as job scheduling and speed scaling. We analyze the common speed scaling algorithms in both the worst-case model and the stochastic model to answer some fundamental questions in the design of speed scaling algorithms. The algorithmic challenge at the local data center level is to dynamically allocate resources (e.g., servers) and to dispatch the workload in a data center. We develop an online algorithm to make a data center more power-proportional by dynamically adapting the number of active servers. The algorithmic challenge at the global data center level is to dispatch the workload across multiple data centers, considering the geographical diversity of electricity price, availability of renewable energy, and network propagation delay. We propose algorithms to jointly optimize routing and provisioning in an online manner. Motivated by the above online decision problems, we move on to study a general class of online problems termed "smoothed online convex optimization", which seeks to minimize the sum of a sequence of convex functions when "smooth" solutions are preferred. This model allows us to bridge different research communities and to gain a more fundamental understanding of general online decision problems.
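Dynamic right-sizing of the active server pool can be illustrated with the toy loop below. This is not the thesis's algorithm; it is a minimal sketch under assumed costs, showing why a switching cost for powering servers on makes "smooth" provisioning decisions valuable.

```python
# Toy dynamic right-sizing sketch (illustrative, assumed cost model):
# track load with just enough active servers, paying to power servers on.

def provision(loads, cap_per_server, switch_cost, energy_per_server):
    active, total_cost, trace = 0, 0.0, []
    for load in loads:
        needed = -(-load // cap_per_server)                # ceiling division
        if needed > active:
            total_cost += switch_cost * (needed - active)  # cost to power on
        active = needed                                    # powering off assumed free
        total_cost += energy_per_server * active
        trace.append(active)
    return total_cost, trace

cost, trace = provision([10, 25, 40, 18, 5], cap_per_server=10,
                        switch_cost=3.0, energy_per_server=1.0)
```

Because this greedy policy follows the load exactly, a spiky workload makes it pay the switching cost repeatedly; smoothed online convex optimization formalizes the trade-off between tracking the per-step optimum and avoiding such oscillation.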

Relevance:

30.00%

Publisher:

Abstract:

The best wind sites in the United States are often located far from electricity demand centers and lack transmission access. Local sites that have lower-quality wind resources but do not require as much power transmission capacity are an alternative to distant wind resources. In this paper, we explore the trade-offs between developing new wind generation at local sites and installing wind farms at remote sites. We first examine the general relationship between the high capital costs required for local wind development and the relatively lower capital costs required to install a wind farm capable of generating the same electrical output at a remote site, with the results representing the maximum amount an investor should be willing to pay for transmission access. We suggest that this analysis can be used as a first step in comparing potential wind resources to meet a state renewable portfolio standard (RPS). To illustrate, we compare the cost of local wind (∼50 km from the load) to the cost of distant wind requiring new transmission (∼550-750 km from the load) to meet the Illinois RPS. We find that local, lower-capacity-factor wind sites are the lowest-cost option for meeting the Illinois RPS if new long-distance transmission is required to access distant, higher-capacity-factor wind resources. If higher-capacity wind sites can be connected to the existing grid at minimal cost, in many cases they will have lower costs.
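The core arithmetic of that comparison can be written out in a few lines. All numbers below are invented for illustration, not the paper's data: the installed capacity needed to deliver a given annual energy scales inversely with capacity factor, so the capital saved by the better remote resource bounds what transmission access is worth.

```python
# Back-of-the-envelope sketch of the local-vs-remote wind trade-off
# (all figures assumed, not the paper's data).

HOURS_PER_YEAR = 8760

def breakeven_transmission_cost(annual_energy_mwh, cf_local, cf_remote,
                                capex_per_mw):
    mw_local = annual_energy_mwh / (cf_local * HOURS_PER_YEAR)
    mw_remote = annual_energy_mwh / (cf_remote * HOURS_PER_YEAR)
    # Capital saved at the higher-capacity-factor site = the maximum an
    # investor should be willing to pay for transmission access.
    return (mw_local - mw_remote) * capex_per_mw

# Example: 100 GWh/yr; 25% local vs 40% remote capacity factor; $1.5M/MW capex
worth = breakeven_transmission_cost(100_000, 0.25, 0.40, 1_500_000)
```

If new long-distance transmission costs more than this break-even figure, the lower-capacity-factor local site wins; if the remote site can reuse existing grid capacity, the comparison flips.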

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Hand hygiene noncompliance is a major cause of nosocomial infection. Nosocomial infection cost data exist, but the effect of hand hygiene noncompliance is unknown. OBJECTIVE: To estimate methicillin-resistant Staphylococcus aureus (MRSA)-related cost of an incident of hand hygiene noncompliance by a healthcare worker during patient care. DESIGN: Two models were created to simulate sequential patient contacts by a hand hygiene-noncompliant healthcare worker. Model 1 involved encounters with patients of unknown MRSA status. Model 2 involved an encounter with an MRSA-colonized patient followed by an encounter with a patient of unknown MRSA status. The probability of new MRSA infection for the second patient was calculated using published data. A simulation of 1 million noncompliant events was performed. Total costs of resulting infections were aggregated and amortized over all events. SETTING: Duke University Medical Center, a 750-bed tertiary medical center in Durham, North Carolina. RESULTS: Model 1 was associated with 42 MRSA infections (infection rate, 0.0042%). Mean infection cost was $47,092 (95% confidence interval [CI], $26,040-$68,146); mean cost per noncompliant event was $1.98 (95% CI, $0.91-$3.04). Model 2 was associated with 980 MRSA infections (0.098%). Mean infection cost was $53,598 (95% CI, $50,098-$57,097); mean cost per noncompliant event was $52.53 (95% CI, $47.73-$57.32). A 200-bed hospital incurs $1,779,283 in annual MRSA infection-related expenses attributable to hand hygiene noncompliance. A 1.0% increase in hand hygiene compliance resulted in annual savings of $39,650 to a 200-bed hospital. CONCLUSIONS: Hand hygiene noncompliance is associated with significant attributable hospital costs. Minimal improvements in compliance lead to substantial savings.
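The amortization logic of the study's models can be sketched with a simplified Monte Carlo simulation. The infection probability and mean cost below come from the abstract's Model 2 figures, but the spread and the model itself are simplifying assumptions, not a reproduction of the published models: simulate many noncompliant events, each rarely causing an infection, and spread the infection costs over all events.

```python
import random

# Simplified Monte Carlo sketch in the spirit of Model 2 (assumed spread;
# not the published model): amortize rare infection costs over all events.

def cost_per_event(n_events, p_infection, cost_mean, seed=42):
    rnd = random.Random(seed)
    total = 0.0
    for _ in range(n_events):
        if rnd.random() < p_infection:
            total += rnd.gauss(cost_mean, cost_mean * 0.1)  # assumed cost spread
    return total / n_events

# ~0.098% infection rate and ~$53,598 mean infection cost, per the abstract
avg = cost_per_event(1_000_000, 0.00098, 53_598)
```

With a million simulated events the per-event cost converges near the abstract's $52.53 figure; the rarity of infection is exactly why so many events must be simulated to get a stable estimate.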

Relevance:

30.00%

Publisher:

Abstract:

The exponential growth in user and application data entails new means for providing fault tolerance and protection against data loss. High Performance Computing (HPC) storage systems, which are at the forefront of handling the data deluge, typically employ hardware RAID at the backend. However, such solutions are costly, do not ensure end-to-end data integrity, and can become a bottleneck during data reconstruction. In this paper, we design an innovative solution to achieve a flexible, fault-tolerant, and high-performance RAID-6 solution for a parallel file system (PFS). Our system utilizes low-cost, strategically placed GPUs, on both the client and server sides, to accelerate parity computation. In contrast to hardware-based approaches, we provide full control over the size, length and location of a RAID array on a per-file basis, end-to-end data integrity checking, and parallelization of RAID array reconstruction. We have deployed our system in conjunction with the widely-used Lustre PFS, and show that our approach is feasible and imposes acceptable overhead.
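The parity computation being accelerated can be sketched for the simple case. This shows only the "P" parity; real RAID-6 adds a second, Reed-Solomon "Q" parity over GF(2^8), which is the heavier arithmetic GPUs help with. P is a bytewise XOR across data stripes, and one lost stripe is recovered by XOR-ing the parity with the survivors.

```python
# Minimal "P" parity sketch (RAID-6 additionally computes a Reed-Solomon
# "Q" parity over GF(2^8), omitted here).

def xor_parity(stripes):
    parity = bytearray(len(stripes[0]))
    for stripe in stripes:
        for i, byte in enumerate(stripe):
            parity[i] ^= byte
    return bytes(parity)

def reconstruct(surviving, parity):
    # XOR with the parity cancels the known stripes, leaving the lost one.
    return xor_parity(list(surviving) + [parity])

stripes = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
p = xor_parity(stripes)
lost = reconstruct(stripes[1:], p)    # recovers stripes[0]
```

Because XOR parity alone tolerates only a single failure, the second Q parity is what lets RAID-6 survive two simultaneous stripe losses.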

Relevance:

30.00%

Publisher:

Abstract:

On 1 January 2012, Swiss Diagnosis Related Groups (SwissDRG), a new uniform payment system for in-patients, was introduced in Switzerland with the intention of replacing a "cost-based" with a "case-based" reimbursement system to increase efficiency. With the introduction of the new payment system, we aim to answer questions raised regarding length of stay (LOS) as well as patients' outcome and satisfaction. This is a prospective, two-centre observational cohort study with data from the University Hospital Basel and the Cantonal Hospital Aarau, Switzerland, from January to June 2011 and 2012, respectively. Consecutive in-patients with a main diagnosis of community-acquired pneumonia, exacerbation of COPD, acute heart failure or hip fracture were included. A questionnaire survey was sent out after discharge investigating changes before and after SwissDRG implementation. Our primary endpoint was LOS. Of 1,983 eligible patients, 841 returned the questionnaire and were included in the analysis (429 in 2011, 412 in 2012). The median age was 76.7 years (50.8% male). Patients in the two years were well balanced with regard to main diagnoses and co-morbidities. Mean LOS in the overall patient population was 10.0 days and comparable between the 2011 and 2012 cohorts (9.7 vs 10.3; p = 0.43). Overall satisfaction with care changed only slightly after the introduction of SwissDRG and remained high (89.0% vs 87.8%; p = 0.429). Investigating the influence of the implementation of SwissDRG in 2012 on LOS, patients' outcome and satisfaction, we found no significant changes. However, we observed some noteworthy trends, which should be monitored closely.

Relevance:

30.00%

Publisher:

Abstract:

Background: The public health burden of coronary artery disease (CAD) is important. Perfusion cardiac magnetic resonance (CMR) is generally accepted to detect and monitor CAD. Few studies have so far addressed its costs and cost-effectiveness. Objectives: To compare, in a large CMR registry, the costs of a CMR-guided strategy vs two hypothetical invasive strategies for the diagnosis and treatment of patients with suspected CAD. Methods: In 3,647 patients with suspected CAD included prospectively in the EuroCMR Registry (59 centers; 18 countries), costs were calculated for diagnostic examinations and revascularizations, as well as for complication management over a 1-year follow-up. Patients with ischemia-positive CMR underwent invasive X-ray coronary angiography (CXA) and revascularization at the discretion of the treating physician (the CMR+CXA strategy). Ischemia was found in 20.9% of patients, and 17.4% of them were revascularized. In patients who were ischemia-negative by CMR, cardiac death and non-fatal myocardial infarctions occurred in 0.38%/y. In a hypothetical invasive arm, the costs were calculated for an initial CXA followed by FFR testing in vessels with ≥50% diameter stenoses (the CXA+FFR strategy). To model this hypothetical arm, the same proportion of ischemic patients and the same outcome were assumed as for the CMR+CXA strategy. The coronary stenosis-FFR relationship reported in the literature was used to derive the proportion of patients with ≥50% diameter stenoses (Psten) in the study cohort. The costs of a CXA-only strategy were also calculated. Calculations were performed from a third-payer perspective for the German, UK, Swiss, and US healthcare systems.