977 results for Resource Utilization
Abstract:
Purpose: To describe (1) the clinical profiles and the patterns of use of long-acting injectable (LAI) antipsychotics in patients with schizophrenia at risk of nonadherence with oral antipsychotics, and in those who started treatment with LAI antipsychotics, (2) health care resource utilization and associated costs. Patients and methods: A total of 597 outpatients with schizophrenia at risk of nonadherence, according to the psychiatrist's clinical judgment, were recruited at 59 centers in a noninterventional prospective observational study with 1-year follow-up when their treatment was modified. In a post hoc analysis, the profiles of patients starting LAI or continuing with oral antipsychotics were described, and descriptive analyses of treatments, health resource utilization, and direct costs were performed in those who started an LAI antipsychotic. Results: Therapy modifications involved the antipsychotic medications in 84.8% of patients, mostly because of insufficient efficacy of the prior regimen. Ninety-two (15.4%) patients started an LAI antipsychotic at recruitment. Of these, only 13 (14.1%) were prescribed first-generation antipsychotics. During 1 year, 16.3% of patients who started and 14.9% of patients who did not start an LAI antipsychotic at recruitment relapsed, contrasting with the 20.9% who had been hospitalized within the prior 6 months alone. After 1 year, 74.3% of patients who started an LAI antipsychotic continued concomitant treatment with oral antipsychotics. The mean (median) total direct health care cost per patient per month during the study year among the patients starting any LAI antipsychotic at baseline was 1,407 (897.7). Medication costs (including oral and LAI antipsychotics and concomitant medication) represented almost 44%, whereas nonmedication costs accounted for more than 55% of the mean total direct health care costs.
Conclusion: LAI antipsychotics were infrequently prescribed in spite of a psychiatrist-perceived risk of nonadherence to oral antipsychotics. Mean medication costs were lower than nonmedication costs.
Abstract:
Ethernet is beginning to move from local area networks into transport networks. However, because the requirements of transport networks are more demanding, the technology needs to be improved. Schemes designed to enhance Ethernet so that it meets transport needs can be categorized into two classes. The first class improves only Ethernet's control components (STP-based technologies), and the second class improves both the control and forwarding components of Ethernet (label-based technologies). This thesis analyzes and compares the label-space usage of the label-based technologies in order to guarantee their scalability. The applicability of existing techniques and studies that can be used to overcome or reduce the label scalability problems is evaluated. In addition, this thesis proposes an ILP formulation to compute the optimal performance of the STP-based technologies and compares them with the label-based ones, so as to determine, for a given scenario, which technology to use.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Objective: Major Depressive Disorder (MDD) is a debilitating condition with a marked social impact. The impact of MDD and Treatment-Resistant Depression (TRD+) within the Brazilian health system is largely unknown. The goal of this study was to compare resource utilization and costs of care for treatment-resistant MDD relative to non-treatment-resistant depression (TRD-). Methods: We retrospectively analyzed the records of 212 patients who had been diagnosed with MDD according to the ICD-10 criteria. Specific criteria were used to identify patients with TRD+. Resource utilization was estimated, and the consumption of medication was annualized. We obtained information on medical visits, procedures, hospitalizations, emergency department visits, and medication use, related or not to MDD. Results: The sample consisted of 90 TRD+ and 122 TRD- patients. TRD+ patients used significantly more resources from the psychiatric service, but not from non-psychiatric clinics, compared to TRD- patients. Furthermore, TRD+ patients were significantly more likely to require hospitalization. Overall, TRD+ patients incurred significantly higher (81.5%) annual costs than TRD- patients (R$ 5,520.85; US$ 3,075.34 vs. R$ 3,042.14; US$ 1,694.60). These findings demonstrate the burden that MDD, and especially TRD+, imposes on the tertiary public health system. Our study should raise awareness of the impact of TRD+ and should be considered by policy makers when implementing public mental health initiatives.
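As a quick arithmetic check, the 81.5% figure follows directly from the two mean annual costs quoted above:

```python
# Reproducing the relative cost excess of TRD+ over TRD- patients from the
# two mean annual costs quoted in the abstract (values in R$).
trd_plus_brl = 5520.85    # mean annual cost per TRD+ patient
trd_minus_brl = 3042.14   # mean annual cost per TRD- patient

excess = (trd_plus_brl - trd_minus_brl) / trd_minus_brl
print(f"{excess:.1%}")  # -> 81.5%
```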
Abstract:
Objective: Inpatient length of stay (LOS) is an important measure of hospital activity, health care resource consumption, and patient acuity. This research work aims at developing an incremental expectation maximization (EM) based learning approach on a mixture of experts (ME) system for on-line prediction of LOS. The use of a batch-mode learning process in most existing artificial neural networks to predict LOS is unrealistic, as the data become available over time and their patterns change dynamically. In contrast, an on-line process is capable of providing an output whenever a new datum becomes available. This on-the-spot information is therefore more useful and practical for making decisions, especially when one deals with a tremendous amount of data. Methods and material: The proposed approach is illustrated using a real example of gastroenteritis LOS data. The data set was extracted from a retrospective cohort study of all infants born in 1995-1997 and their subsequent admissions for gastroenteritis. The total number of admissions in this data set was n = 692. Linked hospitalization records of the cohort were retrieved retrospectively to derive the outcome measure, patient demographics, and associated co-morbidity information. A comparative study of the incremental learning and the batch-mode learning algorithms is considered. The performances of the learning algorithms are compared based on the mean absolute difference (MAD) between the predictions and the actual LOS, and the proportion of predictions with MAD < 1 day (Prop(MAD < 1)). The significance of the comparison is assessed through a regression analysis. Results: The incremental learning algorithm provides better on-line prediction of LOS when the system has gained sufficient training from more examples (MAD = 1.77 days and Prop(MAD < 1) = 54.3%), compared to that using batch-mode learning.
The regression analysis indicates a significant decrease of MAD (p-value = 0.063) and a significant (p-value = 0.044) increase of Prop(MAD < 1).
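The two evaluation measures above are straightforward to compute; a minimal sketch with made-up numbers (not the gastroenteritis cohort data):

```python
# Sketch of the two metrics used to compare the learning algorithms:
# mean absolute difference (MAD) between predicted and actual LOS, and the
# proportion of predictions whose error is under one day, Prop(MAD < 1).
def mad(predicted, actual):
    """Mean absolute difference between predictions and actual LOS (days)."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def prop_within_one_day(predicted, actual):
    """Proportion of predictions whose absolute error is below 1 day."""
    hits = sum(1 for p, a in zip(predicted, actual) if abs(p - a) < 1)
    return hits / len(actual)

predicted = [2.1, 3.8, 1.2, 5.5]   # illustrative LOS predictions (days)
actual    = [2.0, 3.0, 2.5, 4.0]   # illustrative observed LOS (days)
print(round(mad(predicted, actual), 3))    # -> 0.925
print(prop_within_one_day(predicted, actual))  # -> 0.5
```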
Abstract:
This research is focused on the optimisation of resource utilisation in wireless mobile networks with consideration of the users’ experienced quality of video streaming services. The study specifically considers the new generation of mobile communication networks, i.e. 4G-LTE, as the main research context. The background study provides an overview of the main properties of the relevant technologies investigated. These include video streaming protocols and networks, video service quality assessment methods, the infrastructure and related functionalities of LTE, and resource allocation algorithms in mobile communication systems. A mathematical model based on an objective, no-reference quality assessment metric for video streaming, namely Pause Intensity, is developed in this work for the evaluation of the continuity of streaming services. The analytical model is verified by extensive simulation and subjective testing on the joint impairment effects of pause duration and pause frequency. Various types of video content and different levels of impairment were used in the validation tests. It has been shown that Pause Intensity is closely correlated with subjective quality measurement in terms of the Mean Opinion Score, and this correlation property is content-independent. Based on the Pause Intensity metric, an optimised resource allocation approach is proposed for the given user requirements, communication system specifications and network performance. This approach concerns both system efficiency and fairness when establishing appropriate resource allocation algorithms, together with consideration of the correlation between the required and allocated data rates per user. Pause Intensity plays a key role here, representing the required level of Quality of Experience (QoE) to ensure the best balance between system efficiency and fairness.
The 3GPP Long Term Evolution (LTE) system is used as the main application environment where the proposed research framework is examined and the results are compared with existing scheduling methods on the achievable fairness, efficiency and correlation. Adaptive video streaming technologies are also investigated and combined with our initiatives on determining the distribution of QoE performance across the network. The resulting scheduling process is controlled through the prioritization of users by considering their perceived quality for the services received. Meanwhile, a trade-off between fairness and efficiency is maintained through an online adjustment of the scheduler’s parameters. Furthermore, Pause Intensity is applied to act as a regulator to realise the rate adaptation function during the end user’s playback of the adaptive streaming service. The adaptive rates under various channel conditions and the shape of the QoE distribution amongst the users for different scheduling policies have been demonstrated in the context of LTE. Finally, the work on interworking between the mobile communication system at the macro-cell level and the different deployments of WiFi technologies throughout the macro-cell is presented. A QoE-driven approach is proposed to analyse the offloading mechanism of the user’s data (e.g. video traffic) while the new rate distribution algorithm reshapes the network capacity across the macro-cell. The scheduling policy derived is used to regulate the performance of the resource allocation across the fair-efficient spectrum. The associated offloading mechanism can properly control the number of users within the coverage areas of the macro-cell base station and each of the WiFi access points involved. The performance of the non-seamless and user-controlled mobile traffic offloading (through the mobile WiFi devices) has been evaluated and compared with that of the standard operator-controlled WiFi hotspots.
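The abstract does not give the Pause Intensity formula; purely as an illustrative sketch, one might combine the fraction of time spent stalled with the stall frequency as below. The function name and the weighting are hypothetical, not the thesis's actual metric:

```python
# Hypothetical sketch of a no-reference continuity metric in the spirit of
# Pause Intensity: a joint function of pause duration and pause frequency.
# The actual Pause Intensity definition is not reproduced here.
def pause_intensity(pause_durations, session_seconds):
    """Illustrative score: stall-time fraction weighted by stall frequency.

    pause_durations: list of stall lengths (seconds) observed during playback.
    session_seconds: total session length, including stalls.
    """
    if not pause_durations:
        return 0.0                                      # uninterrupted playback
    duration_ratio = sum(pause_durations) / session_seconds  # fraction stalled
    frequency = len(pause_durations) / session_seconds       # stalls per second
    return duration_ratio * frequency

# e.g. two stalls (2 s and 1 s) in a 60 s session:
print(pause_intensity([2.0, 1.0], 60.0))
```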
Abstract:
Ecological specialization in resource utilization has various facets, ranging from nutritional resources via host use by parasites or phytophagous insects to local adaptation in different habitats. The evolution of specialization therefore affects the evolution of most other traits, which makes it one of the core issues in the theory of evolution. Hence, the evolution of specialization has attracted enormous research interest, starting already from Darwin’s Origin of Species in 1859. The vast majority of theoretical studies have, however, focused on the mathematically simplest case with well-mixed populations and equilibrium dynamics. This thesis explores the possibilities of extending the evolutionary analysis of resource usage to spatially heterogeneous metapopulation models and to models with non-equilibrium dynamics. These extensions are enabled by recent advances in the field of adaptive dynamics, which allow a mechanistic derivation of the invasion-fitness function based on the ecological dynamics. In the evolutionary analyses, special focus is placed on the case with two substitutable renewable resources. In this case, the most striking questions are whether a generalist species is able to coexist with the two specialist species, and whether such trimorphic coexistence can be attained through natural selection starting from a monomorphic population. This is shown to be possible both due to spatial heterogeneity and due to non-equilibrium dynamics. In addition, it is shown that chaotic dynamics may sometimes induce evolutionary suicide or cyclic evolutionary dynamics. Moreover, the relations between various ecological parameters and evolutionary dynamics are investigated. In particular, the relation between specialization and dispersal propensity turns out to be counter-intuitively non-monotonic.
This observation served as inspiration for the analysis of the joint evolution of dispersal and specialization, which may provide the most natural explanation for the observed coexistence of specialist and generalist species.
Abstract:
Traditional resource management has had as its main objective the optimization of throughput, based on parameters such as CPU, memory, and network bandwidth. With the appearance of Grid markets, new variables that determine economic expenditure, benefit and opportunity must be taken into account. The Self-organizing ICT Resource Management (SORMA) project aims at allowing resource owners and consumers to exploit market mechanisms to sell and buy resources across the Grid. SORMA's motivation is to achieve efficient resource utilization by maximizing revenue for resource providers and minimizing the cost of resource consumption within a market environment. An overriding factor in Grid markets is the need to ensure that the desired quality of service levels meet the expectations of market participants. This paper explains the proposed use of an economically enhanced resource manager (EERM) for resource provisioning based on economic models. In particular, this paper describes techniques used by the EERM to support revenue maximization across multiple service level agreements and provides an application scenario to demonstrate its usefulness and effectiveness. Copyright © 2008 John Wiley & Sons, Ltd.
Abstract:
There is an inequality in resource utilization among acute psychiatric in-patients. About 20-30% of them absorb 60-80% of the total resources allocated to this form of treatment. This study intends to summarize findings related to heavy in-patient service use and to illustrate them by means of utilization data for acute psychiatric wards.
Abstract:
Virtualization has become a common abstraction layer in modern data centers. By multiplexing hardware resources into multiple virtual machines (VMs) and thus enabling several operating systems to run on the same physical platform simultaneously, it can effectively reduce power consumption and building size or improve security by isolating VMs. In a virtualized system, memory resource management plays a critical role in achieving high resource utilization and performance. Insufficient memory allocation to a VM will degrade its performance dramatically. On the contrary, over-allocation causes waste of memory resources. Meanwhile, a VM’s memory demand may vary significantly. As a result, effective memory resource management calls for a dynamic memory balancer, which, ideally, can adjust memory allocation in a timely manner for each VM based on their current memory demand and thus achieve the best memory utilization and the optimal overall performance. In order to estimate the memory demand of each VM and to arbitrate possible memory resource contention, a widely proposed approach is to construct an LRU-based miss ratio curve (MRC), which provides not only the current working set size (WSS) but also the correlation between performance and the target memory allocation size. Unfortunately, the cost of constructing an MRC is nontrivial. In this dissertation, we first present a low overhead LRU-based memory demand tracking scheme, which includes three orthogonal optimizations: AVL-based LRU organization, dynamic hot set sizing and intermittent memory tracking. Our evaluation results show that, for the whole SPEC CPU 2006 benchmark suite, after applying the three optimizing techniques, the mean overhead of MRC construction is lowered from 173% to only 2%. Based on current WSS, we then predict its trend in the near future and take different strategies for different prediction results. 
When there is a sufficient amount of physical memory on the host, it locally balances its memory resource for the VMs. Once the local memory resource is insufficient and the memory pressure is predicted to sustain for a sufficiently long time, a relatively expensive solution, VM live migration, is used to move one or more VMs from the hot host to other host(s). Finally, for transient memory pressure, a remote cache is used to alleviate the temporary performance penalty. Our experimental results show that this design achieves 49% center-wide speedup.
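As background for the MRC discussion above, here is a minimal sketch of the classic Mattson stack-distance technique for building an LRU miss ratio curve. This is the general textbook method whose cost the dissertation's optimizations target, not the dissertation's AVL-based, hot-set-sized scheme itself:

```python
# Mattson stack-distance construction of an LRU miss ratio curve (MRC).
# For each reference, the stack distance is the page's depth in an LRU stack;
# a cache of size c hits exactly the references whose distance is <= c.
from collections import OrderedDict

def miss_ratio_curve(trace, max_cache_pages):
    """Miss ratio at each LRU cache size 1..max_cache_pages for a page trace."""
    stack = OrderedDict()                    # LRU stack: most recent page last
    hist = [0] * (max_cache_pages + 1)       # hist[d] = hits at stack distance d
    for page in trace:
        if page in stack:
            depth = list(reversed(stack)).index(page) + 1   # stack distance
            if depth <= max_cache_pages:
                hist[depth] += 1             # any cache of size >= depth hits
            stack.move_to_end(page)
        else:
            stack[page] = True               # cold miss: first reference
    total = len(trace)
    curve, hits = [], 0
    for size in range(1, max_cache_pages + 1):
        hits += hist[size]
        curve.append((size, (total - hits) / total))  # miss ratio at this size
    return curve

print(miss_ratio_curve(list("abacb"), 3))  # -> [(1, 1.0), (2, 0.8), (3, 0.6)]
```

The naive `list(reversed(stack)).index(...)` scan is O(n) per reference, which is precisely the overhead the dissertation's AVL-based LRU organization reduces.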
Abstract:
With the development of electronic devices, more and more mobile clients are connected to the Internet, and they generate massive amounts of data every day. We live in an age of “Big Data,” generating data on the order of hundreds of millions of records daily. By analyzing these data and making predictions, we can devise better development plans. Unfortunately, traditional computation frameworks cannot meet this demand, which is why Hadoop was put forward. This paper first introduces the background and development status of Hadoop, compares MapReduce in Hadoop 1.0 with YARN in Hadoop 2.0, and analyzes their advantages and disadvantages. Because the resource management module is the core of YARN, the paper then examines the resource allocation module, including resource management, the resource allocation algorithm, the resource preemption model, and the whole resource scheduling process from requesting resources to completing allocation. It also introduces and compares the FIFO Scheduler, the Capacity Scheduler, and the Fair Scheduler. The main work of this paper is researching and analyzing YARN's Dominant Resource Fairness (DRF) algorithm and putting forward a maximum-resource-utilization algorithm based on it. The paper also suggests improvements to unreasonable aspects of the resource preemption model. Emphasizing “fairness” during resource allocation is the core concept of YARN's DRF algorithm. Because the cluster serves multiple users and multiple resources, each user's resource request is multi-dimensional. The DRF algorithm divides a user's requested resources into the dominant resource and normal resources: for a user, the dominant resource is the one whose share of the cluster is highest among all requested resources, and the others are normal resources. The DRF algorithm requires the dominant resource share of each user to be equal.
But in cases where different users' dominant resource amounts differ greatly, emphasizing “fairness” is not suitable and cannot improve the cluster's resource utilization. By analyzing these cases, this thesis puts forward a new allocation algorithm based on DRF. The new algorithm still takes “fairness” into consideration, but it is not the main principle; maximizing resource utilization is the main principle and goal. Comparing the results of DRF and the new DRF-based algorithm shows that the new algorithm achieves higher resource utilization than DRF. The last part of the thesis sets up the YARN environment and uses the Scheduler Load Simulator (SLS) to simulate the cluster environment.
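For context, the baseline DRF policy described above can be sketched as a progressive-filling loop: repeatedly grant one task to the user with the smallest dominant share. This is a generic illustration of the algorithm, not YARN's actual scheduler code:

```python
# Sketch of Dominant Resource Fairness (DRF): at each step, the user with the
# lowest dominant share (their largest per-resource share of the cluster)
# whose next task still fits receives one more task.
def drf_allocate(capacity, demands, max_rounds=1000):
    """capacity: {resource: total}; demands: {user: {resource: per-task need}}.
    Each user's demand dict is assumed to cover every resource in `capacity`."""
    used = {r: 0.0 for r in capacity}
    tasks = {u: 0 for u in demands}
    for _ in range(max_rounds):
        def dom_share(u):
            # dominant share = max over resources of allocated / capacity
            return max(tasks[u] * demands[u][r] / capacity[r] for r in capacity)
        for u in sorted(demands, key=dom_share):       # lowest dominant share first
            if all(used[r] + demands[u][r] <= capacity[r] for r in capacity):
                for r in capacity:
                    used[r] += demands[u][r]
                tasks[u] += 1
                break
        else:
            break                                      # no task fits: saturated
    return tasks

# Classic example: 9 CPUs and 18 GB; user A needs <1 CPU, 4 GB> per task,
# user B needs <3 CPUs, 1 GB> per task.
capacity = {"cpu": 9, "mem": 18}
demands = {"A": {"cpu": 1, "mem": 4}, "B": {"cpu": 3, "mem": 1}}
print(drf_allocate(capacity, demands))  # -> {'A': 3, 'B': 2}
```

The output equalizes dominant shares (A: 12/18 of memory, B: 6/9 of CPU, both 2/3), which is exactly the "equal dominant share" property the thesis argues can leave capacity idle when users' dominant demands differ greatly.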
Abstract:
With the exponential growth in the usage of web-based map services, web GIS applications have become more and more popular. Spatial data indexing, search, analysis, visualization, and the resource management of such services are becoming increasingly important for delivering user-desired Quality of Service. First, spatial indexing is typically time-consuming and is not available to end users. To address this, we introduce TerraFly sksOpen, an open-source online indexing and querying system for big geospatial data. Integrated with the TerraFly Geospatial database [1-9], sksOpen is an efficient indexing and query engine for processing Top-k Spatial Boolean Queries. Further, we provide ergonomic visualization of query results on interactive maps to facilitate the user’s data analysis. Second, due to the highly complex and dynamic nature of GIS systems, it is quite challenging for end users to quickly understand and analyze spatial data, and to efficiently share their own data and analysis results with others. Built on the TerraFly Geospatial database, TerraFly GeoCloud is an extra layer running upon the TerraFly map that can efficiently support many different visualization functions and spatial data analysis models. Furthermore, users can create unique URLs to visualize and share the analysis results. TerraFly GeoCloud also enables the MapQL technology to customize map visualization using SQL-like statements [10]. Third, map systems often serve dynamic web workloads and involve multiple CPU- and I/O-intensive tiers, which makes it challenging to meet the response time targets of map requests while using resources efficiently. Virtualization facilitates the deployment of web map services and improves their resource utilization through encapsulation and consolidation. Autonomic resource management allows resources to be automatically provisioned to a map service and its internal tiers on demand.
v-TerraFly comprises techniques to predict the demand of map workloads online and optimize resource allocations, considering both response time and data freshness as the QoS target. The proposed v-TerraFly system is prototyped on TerraFly, a production web map service, and evaluated using real TerraFly workloads. The results show that v-TerraFly predicts workload demands 18.91% more accurately and allocates resources efficiently to meet the QoS target, improving QoS by 26.19% and reducing resource usage by 20.83% compared to traditional peak-load-based resource allocation.
Abstract:
Previous work has identified several shortcomings in the ability of four spring wheat models and one barley model to simulate crop processes and resource utilization. This can have important implications when such models are used within systems models, where the final soil water and nitrogen conditions of one crop define the starting conditions of the following crop. In an attempt to overcome these limitations and to reconcile a range of modelling approaches, existing model components that worked demonstrably well were combined with new components for aspects where existing capabilities were inadequate. This resulted in the Integrated Wheat Model (I_WHEAT), which was developed as a module of the cropping systems model APSIM. To increase the predictive capability of the model, process detail was reduced, where possible, by replacing groups of processes with conservative, biologically meaningful parameters. I_WHEAT does not contain a soil water or soil nitrogen balance; these are present as other modules of APSIM. In I_WHEAT, yield is simulated using a linear increase in harvest index, whereby nitrogen or water limitations can lead to early termination of grain filling and hence cessation of the harvest index increase. Dry matter increase is calculated either from the amount of intercepted radiation and radiation conversion efficiency or from the amount of water transpired and transpiration efficiency, depending on the most limiting resource. Leaf area and tiller formation are calculated from thermal time and a cultivar-specific phyllochron interval. Nitrogen limitation first reduces leaf area and then affects radiation conversion efficiency as it becomes more severe. Water or nitrogen limitations result in reduced leaf expansion, accelerated leaf senescence or tiller death. This reduces the radiation load on the crop canopy (i.e. demand for water) and can make nitrogen available for translocation to other organs.
Sensitive feedbacks between light interception and dry matter accumulation are avoided by having environmental effects acting directly on leaf area development, rather than via biomass production. This makes the model more stable across environments without losing the interactions between the different external influences. When comparing model output with models tested previously using data from a wide range of agro-climatic conditions, yield and biomass predictions were equal to the best of those models, but improvements could be demonstrated for simulating leaf area dynamics in response to water and nitrogen supply, kernel nitrogen content, and total water and nitrogen use. I_WHEAT does not require calibration for any of the environments tested. Further model improvement should concentrate on improving phenology simulations, a more thorough derivation of coefficients to describe leaf area development and a better quantification of some processes related to nitrogen dynamics. © 1998 Elsevier Science B.V.
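The most-limiting-resource rule for dry matter increase described above can be written as a one-line minimum. Parameter names and values below are illustrative, not I_WHEAT's actual coefficients:

```python
# Sketch of the most-limiting-resource rule: daily dry matter increase is the
# minimum of a radiation-limited term (intercepted radiation x radiation
# conversion efficiency) and a water-limited term (transpiration x
# transpiration efficiency). All parameter values here are illustrative.
def daily_dry_matter(intercepted_mj_m2, rue_g_mj, transpiration_mm, te_g_mm):
    radiation_limited = intercepted_mj_m2 * rue_g_mj   # g/m2 from light
    water_limited = transpiration_mm * te_g_mm         # g/m2 from water
    return min(radiation_limited, water_limited)       # most limiting resource

# e.g. a day where water, not radiation, caps growth:
print(daily_dry_matter(10.0, 1.2, 3.0, 3.5))  # -> 10.5
```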
Abstract:
The MASS III Trial is a large project from a single institution, The Heart Institute of the University of Sao Paulo, Brazil (InCor), enrolling patients with coronary artery disease and preserved ventricular function. The aim of the MASS III Trial is to compare the medical effectiveness, cerebral injury, quality of life, and cost-effectiveness of coronary surgery with and without cardiopulmonary bypass in patients with multivessel coronary disease referred for both strategies. The primary endpoint is a composite of cardiovascular mortality, cerebrovascular accident, nonfatal myocardial infarction, and refractory angina requiring revascularization. The secondary endpoints in this trial include noncardiac mortality, presence and severity of angina, quality of life based on the SF-36 Questionnaire, and cost-effectiveness at discharge and at 5-year follow-up. In this scenario, we will analyze the cost of the initial procedure, hospital length of stay, resource utilization, repeat hospitalization, and repeat revascularization events during the follow-up. Exercise capacity will be assessed at 6 months, 12 months, and the end of follow-up. A neurocognitive evaluation will be performed in a subset of subjects using the Brain Resource Center computerized neurocognitive battery. Furthermore, magnetic resonance imaging will be performed to detect any cerebral injury before and after the procedures in patients who undergo coronary artery surgery with and without cardiopulmonary bypass.