10 results for performance availability

in the QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance: 60.00%

Abstract:

Cloud data centres are implemented as large-scale clusters with demanding requirements for service performance, availability and cost of operation. As a result of their scale and complexity, data centres typically exhibit large numbers of system anomalies resulting from operator error, resource over- or under-provisioning, hardware or software failures and security issues. These anomalies are inherently difficult to identify and resolve promptly via human inspection. It is therefore vital for a cloud system to have automatic monitoring that detects potential anomalies and identifies their source. In this paper we present a lightweight anomaly detection tool (LADT) for cloud data centres which combines extended log analysis with rigorous correlation of system metrics, implemented by an efficient correlation algorithm that requires neither training nor complex infrastructure setup. The LADT algorithm is based on the premise that there is a strong correlation between node-level and VM-level metrics in a cloud system. This correlation drops significantly in the event of a performance anomaly at the node level, and a sustained drop in the correlation indicates the presence of a true anomaly in the node. The log analysis in LADT assists in determining whether a correlation drop could instead be caused by routine cloud management activity such as VM migration, creation, suspension, termination or resizing. In this way, potential anomaly alerts are reasoned about to prevent false positives caused by the cloud operator's activity. We demonstrate LADT with log analysis in a cloud environment to show how the log analysis is combined with the correlation of system metrics to achieve accurate anomaly detection.
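To make the premise concrete, a minimal sketch of this kind of windowed correlation check follows. It assumes Pearson correlation over fixed-size windows; the window size, threshold, persistence count and the mgmt_events log-lookup callback are all illustrative assumptions rather than details from the paper.

```python
# Minimal sketch of LADT-style correlation monitoring (illustrative only).
# node_metric and vm_metric are assumed to be aligned per-interval samples,
# e.g. node-level and aggregated VM-level disk I/O counts.

import numpy as np

WINDOW = 30        # samples per correlation window (assumed)
THRESHOLD = 0.5    # correlation below this is treated as suspicious (assumed)
PERSISTENCE = 3    # consecutive low-correlation windows before alerting (assumed)

def detect_anomalies(node_metric, vm_metric, mgmt_events):
    """Flag windows where node/VM correlation drops persistently,
    unless logs attribute the drop to cloud management activity.
    mgmt_events(start, end) is a hypothetical callback that returns True
    if the logs record management activity in that sample interval."""
    low_streak = 0
    alerts = []
    for start in range(0, len(node_metric) - WINDOW + 1, WINDOW):
        node_w = node_metric[start:start + WINDOW]
        vm_w = vm_metric[start:start + WINDOW]
        r = np.corrcoef(node_w, vm_w)[0, 1]
        low_streak = low_streak + 1 if r < THRESHOLD else 0
        # Log analysis step: ignore drops coinciding with VM migration,
        # creation, suspension, termination or resizing.
        if low_streak >= PERSISTENCE and not mgmt_events(start, start + WINDOW):
            alerts.append(start)
            low_streak = 0
    return alerts
```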

Relevance: 30.00%

Abstract:

With the increased availability of new technologies, geography educators are revisiting their pedagogical approaches to teaching and calling for opportunities to share local and international practices which will enhance the learning experience and improve students' performance. This paper reports on the use of handheld mobile devices, fitted with GPS, by secondary (high) school pupils in geography. Two location-aware activities were completed over one academic year (one per semester), and pre-test and post-test scores for both topics revealed a statistically significant increase in pupils' performance as measured by the standard national assessments. A learner-centred educational approach was adopted, with the first mobile learning activity created by the teacher as an exemplar of effective mobile learning design. Pupils then built on their experiences of mobile learning when they were required to create their own location-aware learning task for peer use. An analysis of the qualitative data from the pupils' journals, group diaries and focus group interviews revealed that the five pillars of learner-centred education are addressed when using location-aware technologies, and that the use of handheld mobile devices offered greater flexibility and autonomy to the pupils, thus shifting power and control away from the teacher. Owing to the relatively small number of participants in the study, the results are more informative than generalisable; however, in light of the growing interest in geo-spatial technologies in geography education, this paper offers encouragement and insight into the use of location-aware technology in a compulsory school context.

Relevance: 30.00%

Abstract:

This paper examines the ability of the doubly fed induction generator (DFIG) to deliver multiple reactive power objectives during variable wind conditions. The reactive power requirement is decomposed based on various control objectives (e.g. power factor control, voltage control, loss minimisation and flicker mitigation) defined over different time frames (i.e. seconds, minutes and hours), and the control reference is generated by aggregating the individual reactive power requirement of each control strategy. A novel coordinated controller is implemented for the rotor-side converter and the grid-side converter, taking their capability curves into account and showing that it can effectively utilise the aggregated DFIG reactive power capability to enhance system performance. The performance of the multi-objective strategy is examined for a range of wind and network conditions, and it is shown that for the majority of scenarios more than 92% of the main control objective can be achieved when the integrated flicker control scheme is introduced alongside the main reactive power control scheme. Optimal control coordination across the different control strategies can therefore maximise the availability of ancillary services from DFIG-based wind farms without additional dynamic reactive power devices being installed in power networks.
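A minimal sketch of the aggregation idea follows, assuming a simplified converter capability curve limited only by the apparent power rating; all ratings, weights and objective values below are hypothetical, not values from the paper.

```python
# Illustrative aggregation of per-objective reactive power references,
# clamped to a simplified converter capability limit.

import math

def aggregate_q_reference(q_objectives, s_rated, p_active):
    """Sum the reactive power demanded by each control objective
    (e.g. power factor, voltage, loss minimisation, flicker) and
    limit the result to the converter's remaining capability."""
    q_total = sum(q_objectives.values())
    # Simplified capability curve: Q bounded by the apparent power
    # rating after accounting for the active power being exported.
    q_max = math.sqrt(max(s_rated**2 - p_active**2, 0.0))
    return max(-q_max, min(q_total, q_max))

# Example: a 2 MVA converter exporting 1.6 MW of active power.
q_ref = aggregate_q_reference(
    {"power_factor": 0.4e6, "voltage": 0.5e6,
     "loss_min": -0.1e6, "flicker": 0.3e6},
    s_rated=2.0e6, p_active=1.6e6)
print(f"Aggregated Q reference: {q_ref / 1e6:.2f} MVAr")  # 1.10 MVAr
```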

Relevance: 30.00%

Abstract:

The electrokinetic process is a promising in situ soil remediation technique which transports contaminants via electromigration and electroosmosis. For soil contaminated with organic compounds, Fenton's reagent is utilised as a flushing agent in the electrokinetic process (Electrokinetic-Fenton), so that removal of organic contaminants can be achieved by in situ oxidation/destruction. However, the process is not applied widely in industry, the main drawback being the stability of Fenton's reagent. The aim of this mini review is to summarise developments in the Electrokinetic-Fenton process over the past decades that enhance the stability of Fenton's reagent and the process efficiency. Generally, the enhancements follow four paths: (1) chemical stabilisation to delay H2O2 decomposition, (2) increasing oxidant availability by controlling the injection method for Fenton's reagent, (3) electrode operation and iron catalysts, and (4) operating conditions such as voltage gradient, electrolytes and H2O2 concentration. In addition, the types of soil and contaminant also have a significant effect: soils with low acid-buffering capacity, adequate iron concentration and low organic matter content, and organic contaminants with fewer aromatic rings, generally give better efficiency.
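For context, the core Fenton chemistry the review refers to is the iron-catalysed decomposition of hydrogen peroxide into hydroxyl radicals, the species that oxidise the organic contaminants:

Fe2+ + H2O2 → Fe3+ + OH− + •OH

The stability issue noted above stems from H2O2 decomposing before it reaches the contaminated zone, which is what enhancement paths (1) and (2) target.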

Relevance: 30.00%

Abstract:

Cancer registries must provide complete and reliable incidence information with the shortest possible delay for use in studies of comparability, clustering, cancer in the elderly and the adequacy of cancer surveillance. Methods of varying complexity are available to registries for monitoring completeness and timeliness. We wished to know which methods are currently in use among cancer registries, and to compare our findings with those of a survey carried out in 2006.

Methods
In the framework of the EUROCOURSE project, and to prepare cancer registries for participation in the ERA-net scheme, we launched a survey on the methods used to assess completeness, and on the timeliness and methods of dissemination of results by registries. We sent the questionnaire to all general cancer registries (GCRs) and specialised cancer registries (SCRs) active in Europe and within the European Network of Cancer Registries (ENCR).

Results
With a response rate of 66% among GCRs and 59% among SCRs, we obtained data for analysis from 116 registries with a population coverage of ∼280 million. The most common methods used were comparison of trends (79%) and mortality/incidence ratios (more than 60%). More complex methods were used less commonly: capture–recapture by 30%, the flow method by 18% and death certificate notification (DCN) methods with the Ajiki formula by 9%.

The median latency for completion of ascertainment of incidence was 18 months. The additional time required for dissemination was of the order of 3–6 months, depending on the method (print or electronic). One fifth of registries (21%) did not publish results for their own registry but only as a contribution to larger national or international data repositories and publications; this introduced a further delay in the availability of data.

Conclusions
Cancer registries should improve the practice of measuring their completeness regularly and should move from traditional to more quantitative methods. This could also have implications for the timeliness of data publication.
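As an illustration of the simpler quantitative checks reported in the Results, a mortality/incidence (M/I) ratio comparison might be sketched as follows; the counts, reference ratios and tolerance are hypothetical values for illustration, not standards from the survey.

```python
# Illustrative completeness check using mortality/incidence (M/I) ratios.
# A registry's observed M/I ratio is compared against a reference ratio
# (e.g. from a registry believed to be complete); an unusually high ratio
# suggests incident cases are being missed.

def flag_incomplete(sites, tolerance=0.10):
    """Return cancer sites whose M/I ratio exceeds the reference ratio
    by more than the tolerance, suggesting under-ascertainment."""
    flagged = []
    for site, (deaths, cases, reference_mi) in sites.items():
        observed_mi = deaths / cases
        if observed_mi > reference_mi * (1 + tolerance):
            flagged.append((site, round(observed_mi, 2), reference_mi))
    return flagged

# Hypothetical counts: (deaths, incident cases, reference M/I ratio).
print(flag_incomplete({
    "lung": (950, 1000, 0.80),     # M/I = 0.95 -> flagged
    "breast": (300, 1500, 0.22),   # M/I = 0.20 -> ok
}))
```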

Relevance: 30.00%

Abstract:

The 1990s in Ireland saw a series of highly successful theatre productions in which actors played a multiplicity of roles. This has often been attributed to the economic exigencies of the times, but it also depended on the availability of flexible actors with the physical and psychological capacity to embody a wide range of identifiable characters within the one production.

This second of two posts considers the acting techniques required for this style of performance in relation to the differentiation of one character from another. The discussion focuses primarily on my own empirical exploration of the demands that multi-roling places on an actor, through directing recent revivals of Mojo Mickybo for Belfast's Chatterbox Theatre Company (2013) and Bedlam Productions (2015).

Relevance: 30.00%

Abstract:

How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. It is addressed here by proposing a six-step benchmarking methodology in which a user provides a set of four weights indicating how important each of the following groups of operations is to the application to be executed on the cloud: memory, processor, computation and storage. The weights, along with cloud benchmarking data, are used to generate a ranking of VMs that can maximise the performance of the application. The rankings are validated through an empirical analysis using two case study applications, a financial risk application and a molecular dynamics simulation, both representative of workloads that can benefit from execution on the cloud. Both case studies validate the feasibility of the methodology and highlight that maximum performance can be achieved on the cloud by selecting the top-ranked VMs produced by the methodology.
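A minimal sketch of the weight-based ranking idea follows; the normalisation scheme, the 0-5 weight scale, and all VM names and benchmark values are illustrative assumptions, not the paper's methodology or data.

```python
# Illustrative weight-based VM ranking: score each VM as the weighted
# sum of its normalised benchmark results and sort by score.

def rank_vms(benchmarks, weights):
    """Rank VMs by weighted, per-attribute-normalised benchmark scores
    (higher benchmark values are assumed to mean better performance)."""
    attrs = list(weights)
    # Normalise each attribute to [0, 1] across all VMs.
    best = {a: max(vm[a] for vm in benchmarks.values()) for a in attrs}
    scores = {
        name: sum(weights[a] * vm[a] / best[a] for a in attrs)
        for name, vm in benchmarks.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# User weights for memory, processor, computation and storage (0-5 scale).
weights = {"memory": 3, "processor": 5, "computation": 4, "storage": 1}
benchmarks = {  # hypothetical benchmark results, higher = better
    "m3.xlarge": {"memory": 0.8, "processor": 0.9, "computation": 0.7, "storage": 0.4},
    "c3.xlarge": {"memory": 0.6, "processor": 1.0, "computation": 0.9, "storage": 0.5},
}
for name, score in rank_vms(benchmarks, weights):
    print(f"{name}: {score:.2f}")
```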

Relevance: 30.00%

Abstract:

In dynamic spectrum access networks, cognitive radio terminals monitor their spectral environment in order to detect and opportunistically access unoccupied frequency channels. The overall performance of such networks depends on the spectrum occupancy or availability patterns. Accurate knowledge of channel availability enables optimum performance of such networks in terms of spectrum and energy efficiency. This work proposes a novel probabilistic channel availability model that can describe the channel availability in different polarizations for mobile cognitive radio terminals that are likely to change orientation during operation. A Gaussian approximation is used to model the empirical occupancy data obtained through a measurement campaign in the cellular frequency bands within a realistic operational scenario.
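One way to read the modelling step is sketched below: a Gaussian distribution is fitted to measured per-channel duty cycles and used to estimate availability. The samples and the 0.25 decision threshold are hypothetical, not figures from the measurement campaign.

```python
# Illustrative Gaussian model of channel availability from duty-cycle
# (occupancy) measurements for one channel/polarization.

import numpy as np
from scipy.stats import norm

# Hypothetical duty-cycle samples (fraction of time the channel is busy).
duty_cycle = np.array([0.12, 0.08, 0.15, 0.10, 0.20, 0.09, 0.11, 0.14])

mu, sigma = duty_cycle.mean(), duty_cycle.std(ddof=1)

# Probability that occupancy stays below a threshold, i.e. the channel
# is "available" for opportunistic access under the Gaussian model.
threshold = 0.25  # assumed decision threshold
p_available = norm.cdf(threshold, loc=mu, scale=sigma)
print(f"mu={mu:.3f}, sigma={sigma:.3f}, P(available)={p_available:.3f}")
```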

Relevance: 30.00%

Abstract:

Cognitive radio has been proposed as a means of improving spectrum utilisation and increasing the spectrum efficiency of wireless systems. This can be achieved by allowing cognitive radio terminals to monitor their spectral environment and opportunistically access unoccupied frequency channels. Due to the opportunistic nature of cognitive radio, the overall performance of such networks depends on the spectrum occupancy or availability patterns. Appropriate knowledge of channel availability can optimise sensing performance in terms of spectrum and energy efficiency. This work proposes a statistical framework for channel availability in the polarization domain. A Gaussian approximation is used to model real-world occupancy data obtained through a measurement campaign in the cellular frequency bands within a realistic scenario.

Relevance: 30.00%

Abstract:

How can applications be deployed on the cloud to achieve maximum performance? This question is challenging to address given the availability of a wide variety of cloud Virtual Machines (VMs) with different performance capabilities. The research reported in this paper addresses it by proposing a six-step benchmarking methodology in which a user provides a set of weights that indicate how important memory, local communication, computation and storage related operations are to an application. The user can provide either a set of four abstract weights or eight fine-grained weights, based on knowledge of the application. The weights, along with benchmarking data collected from the cloud, are used to generate two rankings: one based only on the performance of the VMs, and another that takes both performance and cost into account. The rankings are validated on three case study applications using two validation techniques. The case studies on a set of experimental VMs highlight that maximum performance can be achieved by the three top-ranked VMs, and that maximum performance in a cost-effective manner is achieved by at least one of the top three ranked VMs produced by the methodology.
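Extending the performance-only ranking sketched earlier, the cost-aware ranking might, as a minimal sketch, divide each VM's performance score by its hourly price so that cheaper VMs of comparable performance rank higher; the scores and prices below are hypothetical, and this is not necessarily how the paper combines the two rankings.

```python
# Illustrative cost-aware ranking: performance score per unit cost.

def rank_by_value(perf_scores, hourly_price):
    """Rank VMs by performance per dollar-hour (higher is better)."""
    value = {vm: perf_scores[vm] / hourly_price[vm] for vm in perf_scores}
    return sorted(value.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical performance scores and hourly prices.
perf_scores = {"m3.xlarge": 11.4, "c3.xlarge": 12.3, "r3.xlarge": 12.8}
hourly_price = {"m3.xlarge": 0.266, "c3.xlarge": 0.210, "r3.xlarge": 0.333}

for vm, v in rank_by_value(perf_scores, hourly_price):
    print(f"{vm}: {v:.1f} score units per $/hour")
```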