954 results for Resource Utilization
Abstract:
Traffic engineering has been a prime concern for Internet Service Providers (ISPs), the main focus being to avoid over-utilization of some of the network's capacity even while additional, under-utilized capacity remains available. Furthermore, the timely delivery of digitized audiovisual information raises the new challenge of finding a path that meets such requirements. This paper addresses (a) distributing load to achieve global efficiency in resource utilization, and (b) finding a path satisfying the real-time delay and bandwidth requirements requested by applications. We critically study how link utilization varies over time and determine the time intervals during which link occupancy remains constant across days. This information helps in pre-determining link utilization, which is useful for balancing load in the network. Finally, we run simulations that use a dynamic time interval for profiling traffic and show improvement in the number of calls admitted versus blocked.
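As a hedged illustration of requirement (b) only (not the paper's algorithm; the topology, names, and thresholds below are invented), a common approach is to prune links whose bandwidth falls below the requested rate and then run a shortest-delay search, blocking the call when the delay bound cannot be met:

```python
import heapq

def qos_path(graph, src, dst, min_bw, max_delay):
    """graph[u] = list of (v, delay, bandwidth) edges. Return a path
    meeting both constraints, or None (the call is blocked)."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, delay, bw in graph.get(u, ()):
            if bw < min_bw:              # link cannot carry this call
                continue
            nd = d + delay
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dist.get(dst, float("inf")) > max_delay:
        return None                      # no feasible path: block the call
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy topology: returns ['A', 'B', 'C'] for a 5 Mb/s, 30 ms request.
net = {"A": [("B", 10, 8), ("C", 40, 20)], "B": [("C", 10, 8)]}
print(qos_path(net, "A", "C", min_bw=5, max_delay=30))
```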
Abstract:
The paper examines the resource utilization practices of the Lake Chad region in view of the need for sustainable development of the area's natural resources, which are being recklessly exploited. The issues of obnoxious fishing practices, inappropriate agricultural practices, indiscriminate grazing, reckless fuel-wood harvesting, water pollution, etc. are discussed. There are clear indications that current resource utilization practices are pushing the natural resources of the area beyond the limit of their regenerative capacity. This is traceable to institutional weakness and inadequate management strategies in the Lake Chad basin. Suggestions are made towards changing attitudes to resource use and improving exploitation and management strategies.
Abstract:
Background: Increasing emphasis is being placed on the economics of health care service delivery, including home-based palliative care. Aim: This paper analyzes resource utilization and costs of a shared-care demonstration project in rural Ontario (Canada) from the public health care system's perspective. Design: To provide enhanced end-of-life care, the shared-care approach ensured exchange of expertise and knowledge and coordination of services in line with the understood goals of care. Resource utilization and costs were tracked over the 15-month study period from January 2005 to March 2006. Results: Of the 95 study participants (average age 71 years), 83 had a cancer diagnosis (87%); the non-cancer diagnoses (12 patients, 13%) included mainly advanced heart diseases and COPD. Community Care Access Centre and Enhanced Palliative Care Team-based homemaking and specialized nursing services were the most frequented offerings, followed by equipment/transportation services and palliative care consults for pain and symptom management. Total costs for all patient-related services (in 2007 CAN$) were $1,625,658.07, or $17,112.19 per patient and $117.95 per patient day. Conclusion: While higher than expenditures previously reported for a cancer-only population in an urban Ontario setting, the costs were still within the parameters of the US Medicare Hospice Benefit, on a par with the per diem funding assigned to long-term care homes, and lower than both average alternate-level-of-care and hospital costs within the Province of Ontario. The study results may assist service planners in the appropriate allocation of resources and service packaging to meet the complex needs of palliative care populations.
Abstract:
Ethernet is beginning to move from local area networks into transport networks. However, since the requirements of transport networks are more demanding, the technology needs to be enhanced. Schemes designed to improve Ethernet so that it meets transport needs can be categorized into two classes. The first class enhances only Ethernet's control components (STP-based technologies), while the second enhances both the control and forwarding components (label-based technologies). This thesis analyzes and compares the label-space usage of the label-based technologies in order to assess their scalability. The applicability of existing techniques and studies that can be used to overcome or reduce the label scalability problems is evaluated. In addition, this thesis proposes an ILP to compute the optimal performance of the STP-based technologies and compares them with the label-based ones, so as to determine which technology to use in a given situation.
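For flavor, here is a minimal ILP of the general kind the thesis proposes, written with the PuLP solver library on an invented three-node topology (the formulation, capacities, and demands are illustrative assumptions, not the thesis's actual model): it selects at most one candidate path per demand so as to maximize admitted traffic under link capacities.

```python
import pulp

# invented topology: link -> capacity (Gb/s)
links = {("A", "B"): 10, ("B", "C"): 10, ("A", "C"): 5}
# each demand offers precomputed candidate paths (as label-based
# forwarding permits explicit routes)
demands = {
    "d1": {"size": 8, "paths": [[("A", "B"), ("B", "C")], [("A", "C")]]},
    "d2": {"size": 6, "paths": [[("A", "C")], [("A", "B"), ("B", "C")]]},
}

prob = pulp.LpProblem("max_admitted_traffic", pulp.LpMaximize)
x = {(d, i): pulp.LpVariable(f"x_{d}_{i}", cat="Binary")
     for d, info in demands.items() for i in range(len(info["paths"]))}

# objective: total admitted traffic
prob += pulp.lpSum(demands[d]["size"] * var for (d, i), var in x.items())

for d, info in demands.items():          # each demand uses at most one path
    prob += pulp.lpSum(x[d, i] for i in range(len(info["paths"]))) <= 1
for link, cap in links.items():          # respect link capacities
    prob += pulp.lpSum(demands[d]["size"] * x[d, i]
                       for d, info in demands.items()
                       for i, path in enumerate(info["paths"])
                       if link in path) <= cap

prob.solve(pulp.PULP_CBC_CMD(msg=0))
for (d, i), var in x.items():
    if var.value() == 1:
        print(d, "routed via", demands[d]["paths"][i])
```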
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Objective: Major Depressive Disorder (MDD) is a debilitating condition with a marked social impact. The impact of MDD and treatment-resistant depression (TRD+) within the Brazilian health system is largely unknown. The goal of this study was to compare resource utilization and costs of care for treatment-resistant MDD relative to non-treatment-resistant depression (TRD-). Methods: We retrospectively analyzed the records of 212 patients who had been diagnosed with MDD according to the ICD-10 criteria. Specific criteria were used to identify patients with TRD+. Resource utilization was estimated, and the consumption of medication was annualized. We obtained information on medical visits, procedures, hospitalizations, emergency department visits, and medication use, whether related to MDD or not. Results: The sample consisted of 90 TRD+ and 122 TRD- patients. TRD+ patients used significantly more resources from the psychiatric service, but not from non-psychiatric clinics, compared to TRD- patients. Furthermore, TRD+ patients were significantly more likely to require hospitalization. Overall, TRD+ patients incurred significantly higher (by 81.5%) annual costs than TRD- patients (R$ 5,520.85; US$ 3,075.34 vs. R$ 3,042.14; US$ 1,694.60). These findings demonstrate the burden of MDD, and especially of TRD+, on the tertiary public health system. Our study should raise awareness of the impact of TRD+ and should be considered by policy makers when implementing public mental health initiatives.
Abstract:
Objective: Inpatient length of stay (LOS) is an important measure of hospital activity, health care resource consumption, and patient acuity. This research work aims at developing an incremental expectation maximization (EM) based learning approach on a mixture of experts (ME) system for on-line prediction of LOS. The use of a batch-mode learning process in most existing artificial neural networks to predict LOS is unrealistic, as the data become available over time and their patterns change dynamically. In contrast, an on-line process is capable of providing an output whenever a new datum becomes available. This on-the-spot information is therefore more useful and practical for making decisions, especially when one deals with a tremendous amount of data. Methods and material: The proposed approach is illustrated using a real example of gastroenteritis LOS data. The data set was extracted from a retrospective cohort study on all infants born in 1995-1997 and their subsequent admissions for gastroenteritis. The total number of admissions in this data set was n = 692. Linked hospitalization records of the cohort were retrieved retrospectively to derive the outcome measure, patient demographics, and associated co-morbidities information. A comparative study of the incremental learning and the batch-mode learning algorithms is considered. The performances of the learning algorithms are compared based on the mean absolute difference (MAD) between the predictions and the actual LOS, and the proportion of predictions with MAD < 1 day (Prop(MAD < 1)). The significance of the comparison is assessed through a regression analysis. Results: The incremental learning algorithm provides better on-line prediction of LOS when the system has gained sufficient training from more examples (MAD = 1.77 days and Prop(MAD < 1) = 54.3%), compared to that using batch-mode learning. The regression analysis indicates a significant decrease of MAD (p-value = 0.063) and a significant (p-value = 0.044) increase of Prop(MAD < 1) as training examples accumulate.
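As a small worked example of the two evaluation metrics defined above (the prediction and outcome values here are invented for illustration):

```python
def evaluate_los_predictions(predicted, actual):
    """Return (MAD, Prop(MAD < 1)): mean absolute difference in days
    and the proportion of predictions within one day of the true LOS."""
    errors = [abs(p - a) for p, a in zip(predicted, actual)]
    mad = sum(errors) / len(errors)
    prop_within_1 = sum(e < 1 for e in errors) / len(errors)
    return mad, prop_within_1

mad, prop = evaluate_los_predictions([2.5, 1.0, 4.2], [2.0, 3.0, 4.0])
print(f"MAD = {mad:.2f} days, Prop(MAD < 1) = {prop:.0%}")
```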
Abstract:
This research is focused on the optimisation of resource utilisation in wireless mobile networks, taking into account the quality of video streaming services as experienced by users. The study specifically considers the new generation of mobile communication networks, i.e. 4G-LTE, as the main research context. The background study provides an overview of the main properties of the relevant technologies investigated, including video streaming protocols and networks, video service quality assessment methods, the infrastructure and related functionalities of LTE, and resource allocation algorithms in mobile communication systems.

A mathematical model based on an objective, no-reference quality assessment metric for video streaming, namely Pause Intensity, is developed in this work for evaluating the continuity of streaming services. The analytical model is verified by extensive simulation and subjective testing on the joint impairment effects of pause duration and pause frequency. Various types of video content and different levels of impairment were used in the validation tests. It is shown that Pause Intensity is closely correlated with subjective quality measurement in terms of the Mean Opinion Score, and that this correlation is content-independent.

Based on the Pause Intensity metric, an optimised resource allocation approach is proposed for given user requirements, communication system specifications and network performance. This approach addresses both system efficiency and fairness when establishing appropriate resource allocation algorithms, together with the correlation between the required and allocated data rates per user. Pause Intensity plays a key role here, representing the required level of Quality of Experience (QoE) to ensure the best balance between system efficiency and fairness. The 3GPP Long Term Evolution (LTE) system is used as the main application environment in which the proposed research framework is examined, and the results are compared with existing scheduling methods in terms of the achievable fairness, efficiency and correlation.

Adaptive video streaming technologies are also investigated and combined with our initiatives on determining the distribution of QoE performance across the network. The resulting scheduling process is controlled through the prioritisation of users according to the perceived quality of the services they receive, while a trade-off between fairness and efficiency is maintained through an on-line adjustment of the scheduler's parameters. Furthermore, Pause Intensity is applied as a regulator to realise the rate adaptation function during the end user's playback of the adaptive streaming service. The adaptive rates under various channel conditions, and the shape of the QoE distribution amongst the users for different scheduling policies, are demonstrated in the context of LTE.

Finally, the work on interworking between the mobile communication system at the macro-cell level and different deployments of WiFi technologies throughout the macro-cell is presented. A QoE-driven approach is proposed to analyse the offloading of user data (e.g. video traffic) while the new rate distribution algorithm reshapes the network capacity across the macro-cell. The scheduling policy derived is used to regulate the performance of the resource allocation across the fairness-efficiency spectrum. The associated offloading mechanism can properly control the number of users within the coverage of the macro-cell base station and of each WiFi access point involved. The performance of non-seamless, user-controlled mobile traffic offloading (through mobile WiFi devices) has been evaluated and compared with that of standard operator-controlled WiFi hotspots.
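The thesis defines Pause Intensity precisely; as a rough, assumed illustration only (not the thesis's exact formulation, and with invented inputs), the sketch below simulates a playout buffer and combines pause frequency with mean pause duration, the two impairment factors the metric captures:

```python
def pause_metric(chunk_arrivals, play_rate):
    """Simulate a playout buffer over fixed time slots; return the
    pause count, total paused slots, and a frequency-times-mean-duration
    product as a crude continuity impairment score."""
    buffered, pauses, paused_slots, in_pause = 0.0, 0, 0, False
    for arrived in chunk_arrivals:
        buffered += arrived
        if buffered >= play_rate:        # enough data to play this slot
            buffered -= play_rate
            in_pause = False
        else:                            # stall: playback pauses
            paused_slots += 1
            if not in_pause:
                pauses, in_pause = pauses + 1, True
    freq = pauses / len(chunk_arrivals)
    mean_dur = paused_slots / pauses if pauses else 0.0
    return pauses, paused_slots, freq * mean_dur

# Bursty delivery causes three one-slot stalls in ten slots: (3, 3, 0.3)
print(pause_metric([1, 0, 1, 1, 0, 1, 1, 1, 0, 1], play_rate=1))
```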
Abstract:
The present study was designed to examine the following: (1) the taxonomic, spatial, and temporal patterns of availability of all invertebrate species associated with Macrocystis (excluding protozoans and nematodes); (2) the utilization of this invertebrate assemblage as food by kelp forest fishes within the Macrocystis "foliage-searching" feeding guild, as well as proximal mechanisms leading to observed patterns of resource partitioning; and (3) the dynamic relationship between availability and utilization of this food resource. The approach was largely descriptive, with observations collected during a 19-month period from June 1975 to December 1976. Chapter I is an investigation of the resource utilization patterns of four species of kelp forest fishes with respect to food-related resource dimensions, and tests aspects of current theory involving inter- and intraspecific competition. Chapter II is a detailed examination of the invertebrate assemblage associated with Macrocystis and presents life histories of the fishes examined during this study. (PDF contains 387 pages; chapter 1 is 203 pages, chapter 2 is 184 pages.)
Sustainable utilization of inland water resources: an integrated program for research and management
Abstract:
In both developed and developing countries, there is increased competition for water resources, resulting in deficiencies in supply and in various forms of pollution. In developing countries, the nutritional potential of aquatic resources is very important. To realize this potential, integrated research and management for sustainable water resource use are needed. This requires a sound understanding of the structure and function of aquatic ecosystems. A programme is presented which stresses the interrelationships of the physical, chemical and biological components of aquatic systems and their catchments. The programme consists of 16 stages in 5 phases, as follows: System description; System functioning and modelling; Resource assessment/dynamics; Resource potential; and Resource utilization for sustainability. This programme enables workers within different disciplines to identify how their expertise contributes to the overall research requirements to support resource development.
Abstract:
Many scientific applications are programmed using hybrid programming models that use both message passing and shared memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared memory or message passing, in isolation. The potential solution space, and thus the challenge, increases substantially when optimizing hybrid models, since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency in hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74 percent on average and up to 13.8 percent) with some performance gain (up to 7.5 percent) or negligible performance loss.
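A hedged sketch of the configuration-selection idea: enumerate (threads, frequency) configurations, predict time and power for each, and pick the one minimizing predicted energy. The analytic models below are invented stand-ins for the paper's statistical predictors, and all constants are illustrative assumptions:

```python
def predicted_time(threads, freq, serial_frac=0.2, work=100.0):
    # Amdahl-style scaling; runtime inversely proportional to frequency
    return (serial_frac + (1.0 - serial_frac) / threads) * work / freq

def predicted_power(threads, freq, base=40.0, per_core=8.0):
    # assumed model: power grows with active cores and roughly with f^2
    return base + per_core * threads * freq ** 2

# candidate DCT (thread count) x DVFS (GHz) configurations
configs = [(t, f) for t in (1, 2, 4, 8) for f in (1.2, 1.6, 2.0, 2.4)]
energy = {c: predicted_power(*c) * predicted_time(*c) for c in configs}
best = min(energy, key=energy.get)
print("lowest predicted energy at (threads, GHz):", best)
```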
Abstract:
Traditional resource management has had as its main objective the optimization of throughput, based on parameters such as CPU, memory, and network bandwidth. With the appearance of Grid markets, new variables that determine economic expenditure, benefit and opportunity must be taken into account. The Self-organizing ICT Resource Management (SORMA) project aims at allowing resource owners and consumers to exploit market mechanisms to sell and buy resources across the Grid. SORMA's motivation is to achieve efficient resource utilization by maximizing revenue for resource providers and minimizing the cost of resource consumption within a market environment. An overriding factor in Grid markets is the need to ensure that the desired quality of service levels meet the expectations of market participants. This paper explains the proposed use of an economically enhanced resource manager (EERM) for resource provisioning based on economic models. In particular, this paper describes techniques used by the EERM to support revenue maximization across multiple service level agreements and provides an application scenario to demonstrate its usefulness and effectiveness.
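As a toy illustration only (not SORMA's actual EERM logic; the requests and capacity are invented), one simple revenue-maximization heuristic admits SLA requests greedily by revenue per resource unit until capacity runs out:

```python
# (name, resource units needed, revenue offered) -- invented requests
requests = [("sla-a", 4, 40.0), ("sla-b", 3, 36.0), ("sla-c", 5, 35.0)]
capacity = 8

admitted, used, revenue = [], 0, 0.0
# rank by revenue density, i.e. revenue per unit of resource consumed
for name, units, pay in sorted(requests, key=lambda r: r[2] / r[1],
                               reverse=True):
    if used + units <= capacity:
        admitted.append(name)
        used += units
        revenue += pay
print(admitted, revenue)   # ['sla-b', 'sla-a'] 76.0
```

A greedy rule like this is only a heuristic; an exact formulation would be a small knapsack ILP, but the ranking-by-density idea conveys the revenue-versus-resource trade-off the EERM must arbitrate.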
Abstract:
There is an inequality in resource utilization among acute psychiatric in-patients. About 20-30% of them absorb 60-80% of the total resources allocated to this form of treatment. This study intends to summarize findings related to heavy in-patient service use and to illustrate them by means of utilization data for acute psychiatric wards.
Abstract:
Virtualization has become a common abstraction layer in modern data centers. By multiplexing hardware resources into multiple virtual machines (VMs), and thus enabling several operating systems to run on the same physical platform simultaneously, it can effectively reduce power consumption and building size, or improve security by isolating VMs. In a virtualized system, memory resource management plays a critical role in achieving high resource utilization and performance. Insufficient memory allocation to a VM will degrade its performance dramatically. Conversely, over-allocation wastes memory resources. Meanwhile, a VM's memory demand may vary significantly. As a result, effective memory resource management calls for a dynamic memory balancer, which, ideally, can adjust memory allocation in a timely manner for each VM based on current memory demand and thus achieve the best memory utilization and the optimal overall performance. In order to estimate the memory demand of each VM and to arbitrate possible memory resource contention, a widely proposed approach is to construct an LRU-based miss ratio curve (MRC), which provides not only the current working set size (WSS) but also the correlation between performance and the target memory allocation size. Unfortunately, the cost of constructing an MRC is nontrivial. In this dissertation, we first present a low-overhead LRU-based memory demand tracking scheme, which includes three orthogonal optimizations: AVL-based LRU organization, dynamic hot set sizing, and intermittent memory tracking. Our evaluation results show that, for the whole SPEC CPU 2006 benchmark suite, after applying the three optimizing techniques, the mean overhead of MRC construction is lowered from 173% to only 2%. Based on the current WSS, we then predict its trend in the near future and take different strategies for different prediction results. When there is a sufficient amount of physical memory on the host, we locally balance its memory resource among the VMs. Once the local memory resource is insufficient and the memory pressure is predicted to sustain for a sufficiently long time, a relatively expensive solution, VM live migration, is used to move one or more VMs from the hot host to other host(s). Finally, for transient memory pressure, a remote cache is used to alleviate the temporary performance penalty. Our experimental results show that this design achieves a 49% center-wide speedup.
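For background, here is a minimal miss-ratio-curve construction using LRU stack (reuse) distances, the classic technique whose cost the dissertation's optimizations attack. This naive O(n·m) sketch omits the AVL-based organization, dynamic hot set sizing, and intermittent tracking described above, and the trace is invented:

```python
from collections import OrderedDict

def miss_ratio_curve(trace, max_size):
    """Return [(cache_size, miss_ratio)] for an LRU cache, via the
    stack-distance algorithm: a hit at stack distance d is a hit for
    every cache size >= d."""
    stack = OrderedDict()                 # MRU element kept at the end
    hist = [0] * (max_size + 1)           # hist[d]: hits at distance d
    for page in trace:
        if page in stack:
            depth = len(stack) - list(stack).index(page)   # 1 == MRU
            if depth <= max_size:
                hist[depth] += 1
            del stack[page]
        stack[page] = None                # push (or refresh) as MRU
    total, hits, mrc = len(trace), 0, []
    for size in range(1, max_size + 1):
        hits += hist[size]
        mrc.append((size, round(1 - hits / total, 3)))
    return mrc

# [(1, 1.0), (2, 1.0), (3, 0.625), (4, 0.5)] -- the knee suggests a WSS
print(miss_ratio_curve([1, 2, 3, 1, 2, 3, 4, 1], max_size=4))
```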