108 results for BENCHMARK


Relevance:

10.00%

Publisher:

Abstract:

Background: Outwith clinical trials, patient outcomes specifically related to SACT (systemic anti-cancer therapy) are not well reported, despite a significant proportion of patients receiving active treatment at the end of life. The NCEPOD review of deaths within 30 days of SACT found that SACT caused or hastened death in 27% of cases.

Method: Across the Northern Ireland cancer network, 95 patients who died within 30 days of SACT for solid tumours were discussed at the Morbidity and Mortality monthly meeting during 2013. Using a structured template, each case was independently reviewed, with particular focus on whether SACT caused or hastened death.

Results: Lung, GI and breast cancers were the most common sites. Performance status was recorded in 92% of cases at the time of the final SACT cycle (ECOG PS 0-2 in 89%).

In 57% the cause of death was progressive disease. Other causes included thromboembolism (13%) and infection (5% neutropenic sepsis, 6% non-neutropenic sepsis). In 26% of deaths from progressive disease, the patient was on the first cycle of first-line treatment for metastatic disease. In the majority of cases, discussion regarding treatment aims and risks was documented. Only one patient was receiving SACT with curative intent, and that patient died from appropriately managed neutropenic sepsis.

A definitive decision regarding SACT's role in death was made in 60% of cases: in 49% SACT was deemed non-contributory and in 11% SACT was deemed the cause of death. In the remaining 40%, SACT did not appear to play a major role, but a definitive negative association could not be established.

Conclusion: The development of a robust review process for 30-day mortality after SACT established a benchmark for SACT delivery for future comparisons and identified areas for improvement in the organisation of the SACT service. Moreover, it encourages reflection on individual practice and highlights the importance of balancing patients' needs and concerns with realistic outcomes and risks, particularly in heavily pre-treated patients or those of poor performance status.

Relevance:

10.00%

Publisher:

Abstract:

One of the most popular techniques for generating classifier ensembles is stacking, which is based on a meta-learning approach. In this paper, we introduce an alternative to stacking based on cluster analysis. As in stacking, instances from a validation set are initially classified by all base classifiers, and the output of each classifier is then treated as a new attribute of the instance. The validation set is subsequently divided into clusters according to these new attributes together with a small subset of the instances' original attributes. For each cluster, we find its centroid and calculate its class label; the collection of labelled centroids serves as the meta-classifier. Experimental results show that the new method outperformed all benchmark methods, namely Majority Voting, Stacking J48, Stacking LR, AdaBoost J48, and Random Forest, in 12 out of 22 data sets. The proposed method has two advantageous properties: it is very robust to relatively small training sets, and it can be applied to semi-supervised learning problems. A theoretical investigation of the proposed method demonstrates that, for the method to be successful, the base classifiers in the ensemble should have accuracy greater than 50%.
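A minimal sketch of this clustering-based meta-classifier, assuming scikit-learn-style base classifiers and integer class labels; the cluster count and the choice of original-attribute subset below are illustrative, not the paper's configuration:

import numpy as np
from sklearn.cluster import KMeans

def build_centroid_meta_classifier(base_clfs, X_train, y_train, X_val, y_val,
                                   original_attrs, n_clusters=10):
    # Train base classifiers, then describe each validation instance by the
    # base-classifier outputs plus a small subset of its original attributes.
    for clf in base_clfs:
        clf.fit(X_train, y_train)
    meta = np.column_stack([clf.predict(X_val) for clf in base_clfs]
                           + [X_val[:, original_attrs]])
    # Cluster the validation set; label each centroid with the majority class
    # of its cluster members.  The labelled centroids form the meta-classifier.
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(meta)
    centroid_labels = np.array([np.bincount(y_val[km.labels_ == c]).argmax()
                                for c in range(n_clusters)])
    return km, centroid_labels

def predict_with_centroids(km, centroid_labels, base_clfs, X, original_attrs):
    # A new instance is mapped to the nearest centroid and takes its label.
    meta = np.column_stack([clf.predict(X) for clf in base_clfs]
                           + [X[:, original_attrs]])
    return centroid_labels[km.predict(meta)]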

Relevance:

10.00%

Publisher:

Abstract:

This paper implements momentum among a host of market anomalies. Our investment universe consists of the 15 top (long-leg) and 15 bottom (short-leg) anomaly portfolios. The proposed active strategy buys (sells short) a subset of the top (bottom) anomaly portfolios based on past one-month return. The evidence shows statistically strong and economically meaningful persistence in anomaly payoffs. Our strategy consistently outperforms a naive benchmark that equal-weights the anomalies and yields an abnormal monthly return of between 1.27% and 1.47%. The persistence is robust to the post-2000 period and various other considerations, and is stronger following episodes of high investor sentiment.
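One plausible reading of the strategy, sketched with pandas; the portfolio data, subset size k and selection rule are illustrative, and the paper's exact construction and risk adjustment are not reproduced:

import pandas as pd

def anomaly_momentum(long_leg, short_leg, k=5):
    # long_leg / short_leg: monthly returns, one column per anomaly portfolio.
    # Each month, buy the k long-leg portfolios with the highest return over
    # the previous month and short the k short-leg portfolios with the lowest
    # previous-month return; the naive benchmark equal-weights all portfolios.
    prev_long, prev_short = long_leg.shift(1), short_leg.shift(1)
    strat = []
    for t in long_leg.index[1:]:
        top = prev_long.loc[t].nlargest(k).index        # winners among long legs
        bottom = prev_short.loc[t].nsmallest(k).index   # losers among short legs
        strat.append(long_leg.loc[t, top].mean() - short_leg.loc[t, bottom].mean())
    strategy = pd.Series(strat, index=long_leg.index[1:])
    benchmark = (long_leg.mean(axis=1) - short_leg.mean(axis=1)).loc[strategy.index]
    return strategy, strategy - benchmark               # raw and benchmark-adjusted returns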

Relevance:

10.00%

Publisher:

Abstract:

With the availability of a wide range of cloud Virtual Machines (VMs), it is difficult to determine which VMs can maximise the performance of an application. Benchmarking is commonly used to this end to capture the performance of VMs. Most cloud benchmarking techniques are heavyweight: time-consuming processes that have to benchmark the entire VM in order to obtain accurate benchmark data. Such benchmarks cannot be used in real-time on the cloud and incur extra costs even before an application is deployed.

In this paper, we present lightweight cloud benchmarking techniques that execute quickly and can be used in near real-time on the cloud. The exploration of lightweight benchmarking techniques is facilitated by the development of DocLite - Docker Container-based Lightweight Benchmarking. DocLite is built on the Docker container technology, which allows a user-defined portion of the VM (such as memory size and number of CPU cores) to be benchmarked. DocLite operates in two modes. In the first mode, containers are used to benchmark a small portion of the VM to generate performance ranks. In the second mode, historic benchmark data is used along with the first mode as a hybrid to generate VM ranks. The generated ranks are evaluated against three scientific high-performance computing applications. The proposed techniques are up to 91 times faster than a heavyweight technique that benchmarks the entire VM. It is observed that the first mode can generate ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, respectively. The hybrid mode improves the correlation slightly, but the first mode is sufficient for benchmarking cloud VMs.
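A minimal sketch of benchmarking only a user-defined slice of a VM with a resource-capped Docker container; the image name and benchmark command are placeholders, and DocLite's actual benchmark suite and scoring are not shown:

import subprocess
import time

def benchmark_container(image, command, cpus=2, memory="1g"):
    # Run a benchmark inside a container limited to `cpus` cores and
    # `memory` of RAM, so only a slice of the VM is exercised, and time it.
    start = time.time()
    subprocess.run(
        ["docker", "run", "--rm",
         f"--cpus={cpus}", f"--memory={memory}",
         image] + command,
        check=True)
    return time.time() - start

# Illustrative call (assumes the image provides the benchmark binary):
# elapsed = benchmark_container("my-benchmark-image", ["run-benchmark"], cpus=2, memory="1g")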

Relevance:

10.00%

Publisher:

Abstract:

Approximate execution is a viable technique for environments with energy constraints, provided that applications are given the mechanisms to produce outputs of the highest possible quality within the available energy budget. This paper introduces a framework for energy-constrained execution with controlled and graceful quality loss. A simple programming model allows developers to structure the computation into different tasks and to express the relative importance of these tasks for the quality of the end result. For non-significant tasks, the developer can also supply less costly, approximate versions. The target energy consumption for a given execution is specified when the application is launched. A significance-aware runtime system employs an application-specific analytical energy model to decide how many cores to use for the execution, the operating frequency for these cores, and the degree of task approximation, so as to maximize the quality of the output while meeting the user-specified energy constraints. Evaluation on a dual-socket 16-core Intel platform using 9 benchmark kernels shows that the proposed framework picks the optimal configuration with high accuracy. Also, a comparison with loop perforation (a well-known compile-time approximation technique) shows that the proposed framework results in significantly higher quality for the same energy budget.
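A toy illustration of the programming model and a greedy runtime policy; the task structure, energy numbers and greedy rule are assumptions for illustration, and the actual framework also selects core count and frequency from an analytical energy model:

from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    # A unit of work with an accurate and an approximate version.
    name: str
    significance: float          # contribution to output quality
    accurate: Callable[[], object]
    approximate: Callable[[], object]
    energy_accurate: float       # estimated energy cost (model-provided)
    energy_approx: float

def run_under_budget(tasks, energy_budget):
    # All tasks are provisionally scheduled in their cheap approximate form;
    # the remaining budget is then spent upgrading the most significant tasks
    # to their accurate versions.
    spent = sum(t.energy_approx for t in tasks)
    use_accurate = set()
    for t in sorted(tasks, key=lambda t: t.significance, reverse=True):
        extra = t.energy_accurate - t.energy_approx
        if spent + extra <= energy_budget:
            use_accurate.add(t.name)
            spent += extra
    return [t.accurate() if t.name in use_accurate else t.approximate()
            for t in tasks]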

Relevance:

10.00%

Publisher:

Abstract:

This paper is concerned with the application of an automated hybrid approach to the university timetabling problem. The approach described is based on the nature-inspired artificial bee colony (ABC) algorithm. The ABC algorithm is a biologically inspired optimization approach that has been widely applied in recent years to a range of optimization problems, such as job shop scheduling and machine timetabling. Although the approach has proven robust across a range of problems, it is acknowledged in the literature that there are a number of inefficiencies in its exploration and exploitation abilities, which can lead to slow convergence of the search process. Hence, this paper introduces a variant of the algorithm which uses a global best model inspired by particle swarm optimization to enhance global exploration, while hybridizing with the great deluge (GD) algorithm to improve local exploitation. Using this approach, an effective balance between exploration and exploitation is attained. In addition, a traditional local search approach is incorporated within the GD algorithm with the aim of further enhancing the performance of the overall hybrid method. To evaluate the performance of the proposed approach, two diverse university timetabling datasets are investigated: Carter's examination timetabling and Socha's course timetabling datasets. The two problems differ in complexity and in solution landscape. Experimental results demonstrate that the proposed method is capable of producing high-quality solutions across both benchmark problems, showing a good degree of generality. Moreover, the proposed method produces the best results on some instances when compared with other approaches presented in the literature.
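As an illustration of the global-best-guided update borrowed from particle swarm optimization, here is a continuous-domain sketch; the paper applies the idea to combinatorial timetabling moves, and the constant c below is an assumption:

import numpy as np

def gbest_guided_update(population, fitness, gbest, c=1.5, rng=None):
    # One employed-bee phase of a gbest-guided ABC: each candidate x_i is
    # perturbed towards a random neighbour x_k and the global best, following
    # v_ij = x_ij + phi*(x_ij - x_kj) + psi*(gbest_j - x_ij),
    # with phi in [-1, 1] and psi in [0, c].  Greedy selection keeps the
    # better of x_i and the trial solution (minimisation assumed).
    rng = rng or np.random.default_rng()
    n, dim = population.shape
    for i in range(n):
        k = rng.integers(n - 1)
        k = k if k < i else k + 1          # neighbour index different from i
        j = rng.integers(dim)              # perturb a single dimension
        phi = rng.uniform(-1.0, 1.0)
        psi = rng.uniform(0.0, c)
        trial = population[i].copy()
        trial[j] = (population[i, j]
                    + phi * (population[i, j] - population[k, j])
                    + psi * (gbest[j] - population[i, j]))
        if fitness(trial) < fitness(population[i]):
            population[i] = trial
    return population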

Relevance:

10.00%

Publisher:

Abstract:

Generating timetables for an institution is a challenging and time-consuming task due to the different demands on the overall structure of the timetable. In this paper, a new hybrid method combining the great deluge and artificial bee colony algorithms (INMGD-ABC) is proposed to address the university timetabling problem. The artificial bee colony algorithm (ABC) is a population-based method that has been introduced in recent years and has proven successful in solving various optimization problems effectively. However, as with many search-based approaches, there exist weaknesses in its exploration and exploitation abilities, which tend to induce slow convergence of the overall search process. Therefore, hybridization is proposed to compensate for the identified weaknesses of the ABC. Also, inspired by imperialist competitive algorithms, an assimilation policy is implemented in order to improve the global exploration ability of the ABC algorithm. In addition, the Nelder–Mead simplex search method is incorporated within the great deluge algorithm (NMGD) with the aim of enhancing the exploitation ability of the hybrid method in fine-tuning the problem search region. The proposed method is tested on two differing benchmark datasets, i.e. examination and course timetabling datasets. A statistical t-test shows that the proposed approach performs significantly better than the basic ABC algorithm. Finally, the experimental results are compared against state-of-the-art methods in the literature; the results obtained are competitive and, in certain cases, achieve some of the current best results in the literature.
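For reference, the great deluge acceptance rule that the hybrid builds on can be sketched as follows; the neighbourhood move, cost function and decay schedule are placeholders, and the paper additionally embeds the Nelder–Mead simplex search within this loop:

def great_deluge(initial, neighbour, cost, iterations, decay_rate):
    # Minimal great deluge loop for minimisation: the water `level` starts at
    # the initial cost and is lowered by `decay_rate` each iteration; any
    # candidate whose cost is below the level (or below the current cost)
    # is accepted.
    current = initial
    current_cost = cost(current)
    best, best_cost = current, current_cost
    level = current_cost
    for _ in range(iterations):
        candidate = neighbour(current)
        candidate_cost = cost(candidate)
        if candidate_cost <= level or candidate_cost <= current_cost:
            current, current_cost = candidate, candidate_cost
            if candidate_cost < best_cost:
                best, best_cost = candidate, candidate_cost
        level -= decay_rate              # lower the water level
    return best, best_cost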

Relevance:

10.00%

Publisher:

Abstract:

Existing benchmarking methods are time-consuming processes, as they typically benchmark the entire Virtual Machine (VM) in order to generate accurate performance data, making them less suitable for real-time analytics. The research in this paper aims to surmount this challenge by presenting DocLite - a Docker Container-based Lightweight benchmarking tool. DocLite explores lightweight cloud benchmarking methods for rapidly executing benchmarks in near real-time. DocLite is built on the Docker container technology, which allows a user-defined memory size and number of CPU cores of the VM to be benchmarked. The tool incorporates two benchmarking methods: the first, referred to as the native method, employs containers to benchmark a small portion of the VM and generate performance ranks; the second uses historic benchmark data along with the native method as a hybrid to generate VM ranks. The proposed methods are evaluated on three use-cases and are observed to be up to 91 times faster than benchmarking the entire VM. In both methods, small containers provide the same quality of rankings as a large container. The native method generates ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, respectively, compared against benchmarking the whole VM. The hybrid method did not improve the quality of the rankings significantly.
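A sketch of how the hybrid mode might blend container-based scores with historic data; the min-max normalisation and single blending weight are assumptions, not the paper's exact combination:

import numpy as np

def hybrid_rank(native_scores, historic_scores, weight=0.5):
    # Scores are dictionaries mapping VM type -> performance score (higher is
    # better).  Both sources are min-max normalised, blended with one weight,
    # and VMs are returned in descending order of blended score.
    vms = sorted(native_scores)
    def normalise(scores):
        v = np.array([scores[vm] for vm in vms], dtype=float)
        return (v - v.min()) / (v.max() - v.min() + 1e-12)
    blended = weight * normalise(native_scores) + (1 - weight) * normalise(historic_scores)
    order = np.argsort(-blended)
    return [vms[i] for i in order]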

Relevance:

10.00%

Publisher:

Abstract:

In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with these introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.
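For reference, the standard sector-bound description of a logarithmic quantizer with density ρ is what allows quantization effects to be absorbed as norm-bounded uncertainty; the paper's exact formulation may differ in detail:

\[
  q(x) = \bigl(1 + \Delta(x)\bigr)\,x, \qquad |\Delta(x)| \le \delta = \frac{1-\rho}{1+\rho}, \qquad 0 < \rho < 1,
\]

so the bounded relative error \(\Delta(x)\) enters the robust filter design in the same way as the norm-bounded linearization error.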

Relevance:

10.00%

Publisher:

Abstract:

This study introduces an inexact, but ultra-low-power, computing architecture devoted to the embedded analysis of bio-signals. The platform operates at extremely low voltage supply levels to minimise energy consumption. In this scenario, the reliability of static RAM (SRAM) memories cannot be guaranteed when using conventional 6-transistor implementations. While error correction codes and dedicated SRAM implementations can ensure correct operation in this near-threshold regime, they incur significant area and energy overheads, and should therefore be employed judiciously. Herein, the authors propose a novel scheme for designing inexact computing architectures that selectively protect memory regions based on their significance, i.e. their impact on the end-to-end quality of service, as dictated by the bio-signal application characteristics. The authors illustrate their scheme on an industrial benchmark application performing the power spectrum analysis of electrocardiograms. Experimental evidence showcases that a significance-based memory protection approach leads to a small degradation in output quality with respect to an exact implementation, while resulting in substantial energy gains in both the memory and the processing subsystems.
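A toy software model of the idea, assuming the signal buffer holds 32-bit floats whose sign and exponent bits are mapped to a protected SRAM region while mantissa bits are left unprotected; the bit partitioning, error rate and quality metric are illustrative, not the authors' hardware scheme:

import numpy as np

def psd_with_unprotected_errors(ecg, bit_error_rate=1e-4, protect_exponent=True, rng=None):
    # Flip unprotected bits at `bit_error_rate` to mimic near-threshold SRAM
    # failures, then compute the power spectrum so quality loss versus an
    # exact run can be measured.
    rng = rng or np.random.default_rng()
    words = ecg.astype(np.float32).view(np.uint32).copy()
    protected_mask = np.uint32(0xFF800000) if protect_exponent else np.uint32(0)
    for bit in range(32):
        bit_mask = np.uint32(1) << np.uint32(bit)
        if bit_mask & protected_mask:
            continue                                    # bit kept in protected SRAM
        flips = rng.random(words.shape) < bit_error_rate
        words[flips] ^= bit_mask
    corrupted = words.view(np.float32).astype(np.float64)
    return np.abs(np.fft.rfft(corrupted)) ** 2          # power spectrum of the ECG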

Relevance:

10.00%

Publisher:

Abstract:

Electron-impact ionization cross sections for the 1s2s ¹S and 1s2s ³S metastable states of Li+ are calculated using both perturbative distorted-wave and non-perturbative close-coupling methods. Term-resolved distorted-wave calculations are found to be approximately 15% above term-resolved R-matrix with pseudostates calculations. On the other hand, configuration-average time-dependent close-coupling calculations are found to be in excellent agreement with the configuration-average R-matrix with pseudostates calculations. The non-perturbative R-matrix and close-coupling calculations provide a benchmark for experimental studies of electron-impact ionization of metastable states along the He isoelectronic sequence.

Relevance:

10.00%

Publisher:

Abstract:

A comprehensive continuum damage mechanics model [1] has been developed to capture the detailed behaviour of a composite structure under a crushing load. This paper explores some of the difficulties encountered in the implementation of this model and their mitigation. The use of reduced-integration elements and a strain-softening model both negatively affect the accuracy and stability of the simulation. Damage localisation effects demanded an accurate measure of characteristic length, and a robust algorithm for determining the characteristic length was implemented. Testing showed that this algorithm produced marked improvements over the default characteristic length provided by Abaqus. Zero-energy (hourglass) modes in reduced-integration elements led to reduced resistance to bending. This was compounded by the strain-softening model, which led to the formation of elements with little resistance to deformation that could invert if left unchecked. It was shown, through benchmark testing, that deleting elements with excessive distortion and controlling the mesh using the inbuilt distortion/hourglass controls alleviates these issues. These techniques contributed significantly to the viability and usability of the damage model.

Relevance:

10.00%

Publisher:

Abstract:

Lattice-based cryptography has gained credence recently as a replacement for current public-key cryptosystems, due to its quantum resilience, versatility, and relatively small key sizes. To date, encryption based on the learning with errors (LWE) problem has only been investigated from an ideal lattice standpoint, due to its computation and size efficiencies. However, a thorough investigation of standard lattices in practice has yet to be considered. Standard lattices may be preferred to ideal lattices due to their stronger security assumptions and less restrictive parameter selection process. In this paper, an area-optimised hardware architecture of a standard lattice-based cryptographic scheme is proposed. The design is implemented on an FPGA, and it is found that both encryption and decryption fit comfortably on a Spartan-6 FPGA. This is the first hardware architecture for standard lattice-based cryptography reported in the literature to date, and it thus provides a benchmark for future implementations. Additionally, a revised discrete Gaussian sampler is proposed which is the fastest of its type to date, and the first to investigate the cost savings of implementing with λ/2 bits of precision. Performance results are promising in comparison to hardware designs of the equivalent ring-LWE scheme; in addition to providing a stronger security proof, the proposed design generates 1272 encryptions per second and 4395 decryptions per second.
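To make the scheme concrete, here is a toy Regev-style encryption over standard (unstructured) lattices; the parameters are illustrative and offer no real security, and a rounded normal stands in for the hardware's discrete Gaussian sampler:

import numpy as np

def lwe_keygen(n=16, m=32, q=3329, sigma=1.0, rng=None):
    # Public key: (A, b = A*s + e mod q); secret key: s.
    rng = rng or np.random.default_rng()
    A = rng.integers(0, q, size=(m, n))
    s = rng.integers(0, q, size=n)
    e = np.rint(rng.normal(0, sigma, size=m)).astype(int)
    b = (A @ s + e) % q
    return (A, b, q), s

def lwe_encrypt(pk, bit, rng=None):
    # Combine a random subset of the public-key rows and encode the bit
    # in the most significant "half" of the modulus.
    A, b, q = pk
    rng = rng or np.random.default_rng()
    r = rng.integers(0, 2, size=A.shape[0])
    u = (A.T @ r) % q
    v = (b @ r + bit * (q // 2)) % q
    return u, v

def lwe_decrypt(sk, q, ct):
    # v - <u, s> is close to 0 for bit 0 and close to q/2 for bit 1.
    u, v = ct
    d = (v - u @ sk) % q
    return int(min(d, q - d) > q // 4)

pk, sk = lwe_keygen()
ct = lwe_encrypt(pk, 1)
assert lwe_decrypt(sk, pk[2], ct) == 1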

Relevance:

10.00%

Publisher:

Abstract:

A new heuristic based on the Nawaz–Enscore–Ham (NEH) algorithm is proposed in this paper for solving the permutation flowshop scheduling problem. A new priority rule is proposed that accounts for the average, mean absolute deviation, skewness and kurtosis of the processing times, in order to fully describe their distribution. A new tie-breaking rule is also introduced to achieve effective job insertion with the objective of minimizing both makespan and machine idle-time. Statistical tests illustrate the better solution quality of the proposed algorithm compared to existing benchmark heuristics.
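A compact sketch of the NEH insertion procedure; the priority rule below defaults to the classic total-processing-time ordering, since the exact weighting of the paper's moment-based rule and its tie-breaker are not given in the abstract:

import numpy as np

def makespan(p, sequence):
    # Completion time of the last job on the last machine; p[j, m] is the
    # processing time of job j on machine m.
    n_machines = p.shape[1]
    c = np.zeros(n_machines)
    for j in sequence:
        c[0] += p[j, 0]
        for m in range(1, n_machines):
            c[m] = max(c[m], c[m - 1]) + p[j, m]
    return c[-1]

def neh(p, priority=None):
    # Order jobs by a priority rule, then insert each job at the position
    # that minimises the partial-sequence makespan.
    if priority is None:
        priority = p.sum(axis=1)            # classic NEH priority
    order = np.argsort(-priority)           # descending priority
    seq = [order[0]]
    for j in order[1:]:
        candidates = [seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(p, s))
    return seq, makespan(p, seq)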

Relevance:

10.00%

Publisher:

Abstract:

Corrosion fatigue is a fracture process resulting from synergistic interactions between the material structure, a corrosive environment and cyclic loads/strains. It is difficult to detect and can cause unexpected failure of engineering components in use. This study presents a comparison of the corrosion fatigue behaviour of laser-welded and bare NiTi wires using a bending rotation fatigue (BRF) test coupled with a specifically designed corrosion cell. The testing medium was Hanks' solution (simulated body fluid) at 37.5 °C. Electrochemical impedance spectroscopy (EIS) measurements were carried out to monitor the change in corrosion resistance of the sample at different times during the BRF test. Experiments indicate that the laser-welded NiTi wire is more susceptible to corrosion fatigue attack than the bare NiTi wire. This study can serve as a benchmark for product designers and engineers seeking to understand the corrosion fatigue behaviour of NiTi laser weld joints and to determine the fatigue life safety factor for NiTi medical devices/implants that involve laser welding in the fabrication process.