32 results for Application performance monitoring.

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

90.00%

Publisher:

Abstract:

Recent advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing environmental conditions and numbers of users, application performance might suffer, leading to Service Level Agreement (SLA) violations and inefficient use of hardware resources. We introduce a system for controlling the complexity of scaling applications composed of multiple services using mechanisms based on the fulfillment of SLAs. We show how service monitoring information can be used in conjunction with service level objectives, predictions, and correlations between performance indicators to optimize the allocation of services belonging to distributed applications. We validate our models through experiments and simulations involving a distributed enterprise information system. We show how discovering correlations between application performance indicators can serve as a basis for creating refined service level objectives, which can then be used for scaling the application and improving the application's overall performance under similar conditions.
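The idea of turning a correlation between performance indicators into a refined service level objective can be illustrated with a small sketch. Everything in the snippet below is a hypothetical illustration rather than the system described in the abstract: the monitoring traces (db_cpu, resp_ms), the SLA response-time bound, and the simple linear fit used to translate that bound into a scaling threshold on a backend indicator are all assumptions.

```python
import numpy as np

# Hypothetical monitoring traces: per-minute samples of a backend indicator
# (DB CPU utilisation, %) and the SLA-governed metric (response time, ms).
db_cpu = np.array([35, 42, 50, 58, 63, 70, 78, 85, 90, 95], dtype=float)
resp_ms = np.array([110, 118, 130, 152, 170, 205, 260, 330, 410, 520], dtype=float)

SLA_BOUND_MS = 250.0          # response-time guarantee taken from the SLA
CORR_THRESHOLD = 0.8          # only trust strongly correlated indicators

# 1. Discover whether the indicator is strongly correlated with the SLA metric.
corr = np.corrcoef(db_cpu, resp_ms)[0, 1]

if corr >= CORR_THRESHOLD:
    # 2. Fit resp_ms ~ a * db_cpu + b and invert it to find the CPU level at
    #    which the SLA bound would be reached; use that as a refined SLO.
    a, b = np.polyfit(db_cpu, resp_ms, deg=1)
    refined_slo_cpu = (SLA_BOUND_MS - b) / a
    print(f"corr={corr:.2f}; refined SLO: scale out when DB CPU > {refined_slo_cpu:.0f}%")
else:
    print(f"corr={corr:.2f}; indicator too weakly correlated to derive an SLO")
```

A scaling controller could then act on the refined threshold (here, on DB CPU) instead of waiting for the user-facing response time to violate the SLA.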

Relevance:

90.00%

Publisher:

Abstract:

Advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying quantitative and qualitative requirements of services.

There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimum amount of computing and network resources for ensuring that the performance requirements of all their applications are met, and in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, infrastructure providers are interested in optimally provisioning the virtual resources onto the available physical infrastructure so that their operational costs are minimized while the performance of tenants' applications is maximized.

Motivated by the complexities associated with managing and scaling distributed applications while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of virtual machine (VM)-bound application services. We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces, and we show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We further provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. Finally, we present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
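The multi-objective VM-allocation idea can be sketched with a toy genetic algorithm. All of the values in the snippet (VM sizes, host capacity, the weighted two-objective fitness combining provider cost with an overcommit penalty, and the GA parameters) are made-up assumptions for illustration; the thesis' actual chromosome encoding and objectives are not reproduced here.

```python
import random

# Toy GA for VM-to-host allocation: minimise the number of active hosts
# (provider cost) while heavily penalising capacity overcommit (tenant
# performance). Assumed sizes and parameters, for illustration only.
VM_CPU = [2, 4, 2, 8, 4, 2, 6, 4]          # CPU demand per VM (cores)
HOST_CAP = 8                               # cores per physical host
N_HOSTS = 6
POP, GENS, MUT = 40, 200, 0.1

def fitness(assign):
    """Lower is better: active hosts plus a heavy penalty for overcommit."""
    load = [0] * N_HOSTS
    for vm, host in enumerate(assign):
        load[host] += VM_CPU[vm]
    active = sum(1 for l in load if l > 0)
    overcommit = sum(max(0, l - HOST_CAP) for l in load)
    return active + 10 * overcommit

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(assign):
    return [random.randrange(N_HOSTS) if random.random() < MUT else h for h in assign]

pop = [[random.randrange(N_HOSTS) for _ in VM_CPU] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)
    parents = pop[:POP // 2]                # simple truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = min(pop, key=fitness)
print("best allocation:", best, "fitness:", fitness(best))
```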

Relevance:

80.00%

Publisher:

Abstract:

The widespread deployment of wireless mobile communications enables an almost permanent usage of portable devices, which imposes high demands on the battery of these devices. Indeed, battery lifetime is becoming one of the most critical factors in end-user satisfaction with wireless communications. In this work, the Optimized Power save Algorithm for continuous Media Applications (OPAMA) is proposed, aiming at enhancing energy efficiency on end-user devices. By combining application-specific requirements with data aggregation techniques, OPAMA improves on the performance of the standard IEEE 802.11 legacy Power Save Mode (PSM). The algorithm uses feedback on the end-user's expected quality to establish a proper tradeoff between energy consumption and application performance. OPAMA was assessed in the OMNeT++ simulator, using real traces of variable bitrate video streaming applications, and in a real testbed employing a novel methodology for accurately evaluating the video Quality of Experience (QoE) perceived by end-users. The results revealed OPAMA's capability to enhance energy efficiency without degrading the end-user's observed QoE, achieving savings of up to 44% when compared with the IEEE 802.11 legacy PSM.
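The core tradeoff OPAMA negotiates, longer frame-aggregation (sleep) windows when the user reports good quality and shorter windows otherwise, can be illustrated with a small sketch. The mapping below is purely hypothetical: the score range, interval bounds, and linear mapping are assumptions for illustration, not the published algorithm.

```python
# Minimal sketch (not the published OPAMA algorithm) of the idea behind it:
# aggregate downlink frames while the radio sleeps, and bound the sleep
# interval by a delay budget derived from the user's quality feedback.
def sleep_interval_ms(user_quality_score, base_interval_ms=100, max_interval_ms=600):
    """Map a 1..5 user QoE score to a radio sleep/aggregation interval.

    A satisfied user (high score) tolerates longer aggregation windows,
    i.e. fewer radio wake-ups and lower energy use; a dissatisfied user
    forces shorter windows to protect application performance.
    """
    score = min(max(user_quality_score, 1), 5)
    # Linear mapping: score 1 -> base interval, score 5 -> max interval.
    return base_interval_ms + (score - 1) / 4 * (max_interval_ms - base_interval_ms)

# Example: the player reports good quality, so the station may sleep longer.
print(sleep_interval_ms(4))   # -> 475.0 ms aggregation window
```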

Relevance:

80.00%

Publisher:

Abstract:

During the last decade, wireless mobile communications have progressively become part of people's daily lives, leading users to expect to be "always best-connected" to the Internet, regardless of their location or the time of day. This is motivated by the fact that wireless access networks are increasingly ubiquitous, offered by different types of service providers, together with an outburst of highly portable devices, namely laptops, tablets, and mobile phones, among others. The "anytime and anywhere" connectivity criterion raises new challenges regarding the management of devices' battery lifetime, as energy becomes the most noteworthy restriction on end-user satisfaction. This wireless access context has also stimulated the development of novel multimedia applications with high network demands, although lacking in energy-aware design. Therefore, the relationship between energy consumption and the quality of multimedia applications as perceived by end-users should be carefully investigated.

This dissertation addresses energy-efficient multimedia communications in the IEEE 802.11 standard, which is the most widely used wireless access technology. It advances the literature by proposing a unique empirical assessment methodology and new power-saving algorithms, always bearing in mind the end-users' feedback and evaluating quality perception. The new EViTEQ framework proposed in this thesis, for measuring video transmission quality and energy consumption simultaneously and in an integrated way, reveals the importance of having an empirical, high-accuracy methodology to assess the trade-off between quality and energy consumption raised by the new end-user requirements. Extensive evaluations conducted with the EViTEQ framework revealed its flexibility and its capability to accurately report both video transmission quality and energy consumption, as well as to be employed in rigorous investigations of network interface energy consumption patterns, regardless of the wireless access technology.

Following the need to enhance the trade-off between energy consumption and application quality, this thesis proposes the Optimized Power save Algorithm for continuous Media Applications (OPAMA). By using the end-users' feedback to establish a proper trade-off between energy consumption and application performance, OPAMA aims at enhancing the energy efficiency of end-user devices accessing the network through IEEE 802.11. OPAMA's performance has been thoroughly analyzed in different scenarios and with different application types, including a simulation study and a real deployment in an Android testbed. When compared with the most popular power-saving mechanisms defined in the IEEE 802.11 standard, the obtained results revealed OPAMA's capability to enhance energy efficiency while keeping end-users' Quality of Experience within the defined bounds. Furthermore, OPAMA was optimized to enable superior energy savings in multiple-station environments, resulting in a new proposal called Enhanced Power Saving Mechanism for Multiple station Environments (OPAMA-EPS4ME).

The results of this thesis highlight the relevance of having a highly accurate methodology to assess energy consumption and application quality when aiming to optimize the trade-off between energy and quality. Additionally, the obtained results, based on both simulation and testbed evaluations, show clear benefits of employing user-driven power-saving techniques, such as OPAMA, instead of the IEEE 802.11 standard power-saving approaches.
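Measuring energy and video quality jointly, as EViTEQ does, suggests a simple derived figure, energy spent per quality point, for comparing power-saving policies. The snippet below is a minimal sketch under assumed data (synchronised one-second power and QoE traces with invented values); it is not part of the EViTEQ framework itself.

```python
# Minimal sketch, not the EViTEQ implementation: given synchronised traces of
# instantaneous power draw and per-second video quality scores, compute total
# energy and an energy-per-quality-point figure to compare power-saving modes.
def energy_joules(power_watts, interval_s=1.0):
    """Integrate a power trace (W) sampled at a fixed interval into energy (J)."""
    return sum(power_watts) * interval_s

def energy_per_quality(power_watts, quality_scores, interval_s=1.0):
    avg_quality = sum(quality_scores) / len(quality_scores)
    return energy_joules(power_watts, interval_s) / avg_quality

# Hypothetical 5-second traces for legacy PSM vs. a user-driven policy.
legacy_power,   legacy_q   = [1.2, 1.1, 1.3, 1.2, 1.2], [4.1, 4.0, 4.2, 4.1, 4.0]
adaptive_power, adaptive_q = [0.8, 0.7, 0.9, 0.8, 0.8], [4.0, 4.0, 3.9, 4.1, 4.0]

print(energy_per_quality(legacy_power, legacy_q))      # J per QoE point, legacy PSM
print(energy_per_quality(adaptive_power, adaptive_q))  # J per QoE point, adaptive policy
```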

Relevance:

50.00%

Publisher:

Abstract:

Contemporary models of self-regulated learning emphasize, on the one hand, the role of distal motivational factors in students' achievement and, on the other, the proximal role of metacognitive monitoring and control in learning and test outcomes. In the present study, two large samples of elementary school children (9- and 11-year-olds) were included, and their mastery-oriented motivation and metacognitive monitoring and control skills were integrated into structural equation models testing and comparing the relative impact of these different constituents of self-regulated learning. First, results indicate that the factorial structure of monitoring, control, and mastery motivation was invariant across the two age groups. Of specific interest was the finding that there were age-dependent structural links between monitoring, control, and test performance (closer links in the older compared to the younger children), with high confidence yielding a direct and positive effect on test performance and a direct and negative effect on adequate control behavior in the achievement test. Mastery-oriented motivation was not found to be substantially associated with monitoring (confidence), control (detection and correction of errors), or test performance, underlining the importance of proximal, metacognitive factors for test performance in elementary school children.

Relevance:

50.00%

Publisher:

Abstract:

This study focuses on relations between 7- and 9-year-old children's and adults' metacognitive monitoring and control processes. In addition to explicit confidence judgments (CJs), data on participants' control behavior during learning and recall, as well as implicit CJs, were collected with an eye-tracking device (Tobii 1750). Results revealed developmental progression in the accuracy of both implicit and explicit monitoring across age groups. In addition, the efficiency of learning and recall strategies increases with age, as older participants allocate more fixation time to critical information and less time to peripheral or potentially interfering information. Correlational analyses of recall performance, metacognitive monitoring, and control indicate significant interrelations between all of these measures, with varying patterns of correlations within age groups. Results are discussed with regard to the intricate relationship between monitoring and recall and their relation to performance.

Relevance:

40.00%

Publisher:

Abstract:

There is a demand for technologies able to assess the perfusion of surgical flaps quantitatively and reliably in order to avoid ischemic complications. The aim of this study is to test a new high-speed, high-definition laser Doppler imaging (LDI) system (FluxEXPLORER, Microvascular Imaging, Lausanne, Switzerland) in terms of preoperative mapping of the vascular supply (perforator vessels) and postoperative flow monitoring. The FluxEXPLORER performs perfusion mapping of a 9 × 9 cm area with a resolution of 256 × 256 pixels within 6 s in high-definition imaging mode. The sensitivity and predictability of localizing perforators are expressed by the coincidence of preoperatively assessed LDI high-flow spots with intraoperatively verified perforators in nine patients. Eighteen free flaps are monitored before, during, and after total ischemia. 63% of all verified perforators correspond to a high-flow spot, and 38% of all high-flow spots correspond to a verified perforator (positive predictive value). All perfused flaps show values above 221 perfusion units (PU), and all values obtained in the ischemic flaps are below 187 PU. In summary, we conclude that the present LDI system can serve as a reliable, fast, and easy-to-handle tool to detect ischemia in free flaps, whereas perforator vessels cannot be detected appropriately.

Relevance:

40.00%

Publisher:

Abstract:

Over the last decade, translational science has come into the focus of academic medicine, and significant intellectual and financial efforts have been made to initiate a multitude of bench-to-bedside projects. The quest for suitable biomarkers that will significantly change clinical practice has become one of the biggest challenges in translational medicine. Quantitative measurement of proteins is a critical step in biomarker discovery. Assessing a large number of potential protein biomarkers in a statistically significant number of samples and controls still constitutes a major technical hurdle. Multiplexed analysis offers significant advantages regarding time, reagent cost, sample requirements, and the amount of data that can be generated. The two contemporary approaches to multiplexed and quantitative biomarker validation, antibody-based immunoassays and MS-based multiple (or selected) reaction monitoring, are based on different assay principles and instrument requirements. Both approaches have their own advantages and disadvantages and therefore play complementary roles in the multi-staged biomarker verification and validation process. In this review, we discuss quantitative immunoassay and multiple/selected reaction monitoring assay principles and development. We also discuss choosing an appropriate platform, judging assay performance, and obtaining reliable, quantitative results for translational research and clinical applications in the biomarker field.

Relevance:

40.00%

Publisher:

Abstract:

Multiparameter cerebral monitoring has been widely applied in traumatic brain injury to study posttraumatic pathophysiology and to manage head-injured patients (e.g., combining O2 and pH sensors with cerebral microdialysis). Because a comprehensive approach towards understanding injury processes will also require functional measures, we have added electrophysiology to these monitoring modalities by attaching a recording electrode to the microdialysis probe. These dual-function (microdialysis/electrophysiology) probes were placed in rats following experimental fluid percussion brain injuries, and in a series of severely head-injured human patients. Electrical activity (cell firing, EEG) was monitored concurrently with microdialysis sampling of extracellular glutamate, glucose, and lactate. Electrophysiological parameters (firing rate, serial correlation, field potential occurrences) were analyzed offline and compared to dialysate concentrations. In rats, these probes demonstrated an injury-induced suppression of neuronal firing (from a control level of 2.87 to 0.41 spikes/sec postinjury), which was associated with increases in extracellular glutamate and lactate and decreases in glucose levels. When placed in human patients, the probes detected sparse and slowly firing cells (mean = 0.21 spikes/sec), with most units (70%) exhibiting a lack of serial correlation in the spike train. In some patients, spontaneous field potentials were observed, suggesting synchronously firing neuronal populations. In both the experimental and clinical applications, the addition of the recording electrode did not appreciably affect the performance of the microdialysis probe. The results suggest that this technique provides a functional monitoring capability which cannot be obtained when electrophysiology is measured with surface or epidural EEG alone.
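The offline spike-train measures mentioned (firing rate and serial correlation) can be computed with a short sketch. The snippet below assumes a list of spike timestamps in seconds and is only an illustration of these standard measures; it is not the authors' analysis code.

```python
import numpy as np

# Minimal sketch of the offline spike-train measures mentioned in the abstract.
def firing_rate_hz(spike_times_s):
    """Mean firing rate over the recorded span (spikes per second)."""
    duration = spike_times_s[-1] - spike_times_s[0]
    return (len(spike_times_s) - 1) / duration

def isi_serial_correlation(spike_times_s):
    """Lag-1 correlation of successive inter-spike intervals.

    Values near zero indicate a lack of serial correlation in the spike
    train, as reported for most units recorded in the injured patients.
    """
    isis = np.diff(spike_times_s)
    return np.corrcoef(isis[:-1], isis[1:])[0, 1]

# Hypothetical slowly firing unit (~0.2 spikes/sec on average).
spikes = np.cumsum(np.random.exponential(scale=5.0, size=200))
print(firing_rate_hz(spikes), isi_serial_correlation(spikes))
```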

Relevance:

40.00%

Publisher:

Abstract:

Bovine spongiform encephalopathy (BSE) rapid tests and routine BSE-testing laboratories are subject to strict regulations for approval. Due to the lack of BSE-positive control samples, however, full assay validation at the level of individual test runs and continuous monitoring of test performance on-site are difficult. Most rapid tests use synthetic prion protein peptides, but it is not known to what extent these reflect assay performance on field samples, or whether they are sufficient to indicate on-site assay quality problems. To address this question, we compared the test scores of the provided kit peptide controls to those of standardized weak BSE-positive tissue samples in individual test runs, as well as continuously over time using quality control charts, in two widely used BSE rapid tests. Our results reveal only a weak correlation between the weak-positive tissue control scores and the peptide control scores. We identified kit-lot-related shifts in assay performance that were not reflected by the peptide control scores. Conversely, not all shifts indicated by the peptide control scores reflected a shift in assay performance. In conclusion, these data highlight that using the kit peptide controls for continuous quality control purposes may result in unjustified rejection or acceptance of test runs. In contrast, standardized weak-positive tissue controls in combination with Shewhart-CUSUM control charts appear to be reliable for continuously monitoring assay performance on-site and identifying undesired deviations.
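A control chart of the kind mentioned can be sketched with a tabular CUSUM over the weak-positive tissue control scores. All numbers in the snippet (the target mean and standard deviation, the common k = 0.5σ and h = 5σ chart parameters, and the daily scores) are made-up assumptions for illustration; the study's actual chart set-up is not reproduced here.

```python
# Minimal sketch of a two-sided tabular CUSUM over weak-positive tissue
# control scores; assumed target mean/SD and standard k/h chart parameters.
TARGET, SIGMA = 1.00, 0.10          # expected score and spread of the control
K, H = 0.5 * SIGMA, 5 * SIGMA       # allowance and decision interval

def cusum_alarms(scores):
    """Return run indices where the upper or lower CUSUM signals a shift."""
    hi = lo = 0.0
    alarms = []
    for i, x in enumerate(scores):
        hi = max(0.0, hi + (x - TARGET) - K)   # accumulates upward drift
        lo = max(0.0, lo + (TARGET - x) - K)   # accumulates downward drift
        if hi > H or lo > H:
            alarms.append(i)
            hi = lo = 0.0                      # reset after signalling
    return alarms

# Hypothetical daily control scores: a kit-lot change around run 6 lowers them,
# and the accumulated downward drift triggers an alarm a few runs later.
runs = [1.02, 0.98, 1.01, 0.99, 1.03, 1.00, 0.82, 0.79, 0.81, 0.78, 0.80]
print("shift signalled at runs:", cusum_alarms(runs))
```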