961 results for Kähler Metrics


Relevance:

20.00%

Publisher:

Abstract:

In this paper we propose a generalisation of the k-nearest neighbour (k-NN) retrieval method based on an error function using distance metrics in the solution and problem space. It is an interpolative method which is proposed to be effective for sparse case bases. The method applies equally to nominal, continuous and mixed domains, and does not depend upon an embedding n-dimensional space. In continuous Euclidean problem domains, the method is shown to be a generalisation of Shepard's interpolation method. We term the retrieval algorithm the Generalised Shepard Nearest Neighbour (GSNN) method. A novel aspect of GSNN is that it provides a general method for interpolation over nominal solution domains. The performance of the retrieval method is examined with reference to the Iris classification problem, and to a simulated sparse nominal value test problem. The introduction of a solution-space metric is shown to outperform conventional nearest-neighbour methods on sparse case bases.
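The abstract does not spell out the error function, but its link to Shepard's interpolation suggests a sketch along the following lines; the function name, signature and inverse-distance weighting below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gsnn_predict(query, cases, solutions, candidates, sol_dist, p=2, k=5):
    """GSNN-style retrieval sketch: return the candidate solution that
    minimises a distance-weighted error over the k nearest cases."""
    d = np.linalg.norm(cases - query, axis=1)    # problem-space distances
    nn = np.argsort(d)[:k]                       # indices of the k nearest cases
    w = 1.0 / np.maximum(d[nn], 1e-12) ** p      # inverse-distance (Shepard) weights
    # A candidate's error is the weighted sum of squared solution-space
    # distances to the neighbours' solutions; because only a metric on the
    # solution space is needed, this works for nominal domains too.
    def error(s):
        return sum(wi * sol_dist(s, solutions[i]) ** 2 for wi, i in zip(w, nn))
    return min(candidates, key=error)
```

With a 0/1 distance over the three Iris class labels this reduces to distance-weighted k-NN voting; a graded solution-space metric is what gives GSNN its advantage on sparse case bases.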

Relevance:

20.00%

Publisher:

Abstract:

In this paper we propose a case base reduction technique which uses a metric defined on the solution space. The technique utilises the Generalised Shepard Nearest Neighbour (GSNN) algorithm to estimate nominal or real-valued solutions in case bases with solution-space metrics. An overview of GSNN is presented, along with a generalised reduction technique which subsumes some existing decremental methods, such as the Shrink algorithm. The reduction technique is given for case bases in terms of a measure of the importance of each case to the predictive power of the case base. A trial test is performed on two case bases of different kinds, with several metrics proposed in the solution space. The tests show that GSNN can outperform standard nearest-neighbour methods on this set. Further test results show that a case-removal order based on a GSNN error function can produce a sparse case base with good predictive power.
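The paper's importance measure is not reproduced in the abstract; the sketch below only illustrates the decremental shape of such a reduction, with `importance` standing in for the GSNN-error-based measure (signature assumed for illustration).

```python
def reduce_case_base(num_cases, importance, target_size):
    """Decremental reduction sketch: repeatedly drop the retained case
    judged least important to the case base's predictive power.
    `importance(i, keep)` is a placeholder scoring case i against the
    currently retained set."""
    keep = list(range(num_cases))
    while len(keep) > target_size:
        # Remove the case contributing least to predictive power.
        victim = min(keep, key=lambda i: importance(i, keep))
        keep.remove(victim)
    return keep
```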

Relevance:

20.00%

Publisher:

Abstract:

Purpose: The aim of this study is to compare the sensitivity of different metrics for detecting differences in the complexity of intensity-modulated radiation therapy (IMRT) plans following upgrades, changes to planning parameters, and changes in patient geometry. Correlations between complexity metrics are also assessed.

Relevance:

20.00%

Publisher:

Abstract:

Multi-threaded processors execute multiple threads concurrently in order to increase overall throughput. It is well documented that multi-threading affects per-thread performance but, more importantly, some threads are affected more than others. This is especially troublesome for multi-programmed workloads. Fairness metrics measure whether all threads are affected equally. However, defining equal treatment is not straightforward. Several fairness metrics for multi-threaded processors have been utilized in the literature, but there is no consensus on which metric best measures fairness. This paper reviews the prevalent fairness metrics and analyzes their main properties. Each metric strikes a different trade-off between fairness in the strict sense and throughput; we categorize the metrics with respect to this property. Based on experimental data for SMT processors, we suggest using the minimum fairness metric in order to balance fairness and throughput.
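As a hedged illustration of the suggested metric: one common formulation defines a thread's normalised progress as its multi-threaded IPC over its single-threaded IPC, and minimum fairness as the worst such progress across threads (the paper surveys several variants; this is only one).

```python
def min_fairness(ipc_mt, ipc_st):
    """Minimum fairness sketch: the worst per-thread normalised progress,
    where progress = IPC when co-scheduled / IPC when running alone."""
    return min(mt / st for mt, st in zip(ipc_mt, ipc_st))

# Example: thread 0 keeps 80% of its isolated performance, thread 1 only 40%,
# so the metric reports 0.4 regardless of total throughput.
print(min_fairness([1.6, 0.6], [2.0, 1.5]))  # -> 0.4
```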

Relevance:

20.00%

Publisher:

Abstract:

In this paper metrics for assessing the performance of directional modulation (DM) physical-layer secure wireless systems are discussed. DM systems are shown to fall into two categories, static and dynamic, and the behavior of each type is discussed for QPSK modulation. Besides EVM-like and BER metrics, the secrecy rate, as used in the information theory community, is also derived for the evaluation of this QPSK DM system.
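For concreteness, a minimal EVM computation for QPSK is sketched below; conventions vary, and normalising by the RMS power of the reference constellation is an assumption here rather than the paper's stated choice.

```python
import numpy as np

def evm_percent(received, reference):
    """RMS error vector magnitude as a percentage of the reference
    constellation's RMS power (one common EVM convention)."""
    err_power = np.mean(np.abs(received - reference) ** 2)
    ref_power = np.mean(np.abs(reference) ** 2)
    return 100 * np.sqrt(err_power / ref_power)

# Ideal unit-power QPSK points; a DM system is designed so that the
# constellation stays clean only along the intended spatial direction.
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
```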

Relevance:

20.00%

Publisher:

Abstract:

Data registration refers to a family of techniques for matching, or bringing into alignment, similar objects or datasets. These techniques enjoy widespread use in a diverse variety of applications, such as video coding, tracking, object and face detection and recognition, surveillance and satellite imaging, medical image analysis and structure from motion. Registration methods are as numerous as their manifold uses, ranging from pixel-level, block-based and feature-based methods to Fourier-domain methods.

This book is focused on providing algorithms and image and video techniques for registration and quality performance metrics. The authors provide various assessment metrics for measuring registration quality alongside analyses of registration techniques, introducing and explaining both familiar and state-of-the-art registration methodologies used in a variety of targeted applications.

Key features:
- Provides a state-of-the-art review of image and video registration techniques, allowing readers to develop an understanding of how well the techniques perform by using specific quality assessment criteria
- Addresses a range of applications from familiar image and video processing domains to satellite and medical imaging among others, enabling readers to discover novel methodologies with utility in their own research
- Discusses quality evaluation metrics for each application domain with an interdisciplinary approach from different research perspectives
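As a flavour of the Fourier-domain methods the book covers, the sketch below estimates a translational offset by phase correlation; it is a generic textbook method, not an excerpt from the book (integer-pixel shifts only, no windowing or sub-pixel refinement).

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation between two same-sized images
    from the peak of the normalised cross-power spectrum."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Peaks in the upper half of each axis wrap around to negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
```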

Relevance:

20.00%

Publisher:

Abstract:

The end of Dennard scaling has pushed power consumption into a first-order concern for current systems, on par with performance. As a result, near-threshold voltage computing (NTVC) has been proposed as a potential means to tackle the limited cooling capacity of CMOS technology. Hardware operating at NTV consumes significantly less power, at the cost of lower frequency, and thus reduced performance, as well as increased error rates. In this paper, we investigate whether a low-power system-on-chip based on ARM's asymmetric big.LITTLE technology can be an alternative to conventional high-performance multicore processors in terms of power/energy in an unreliable scenario. For our study, we use the Conjugate Gradient (CG) solver, an algorithm representative of the computations performed by a large range of scientific and engineering codes.
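For reference, a minimal dense NumPy sketch of the textbook CG iteration is given below (not the study's own implementation); it shows why CG is a useful probe: each iteration is dominated by a single matrix-vector product.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Textbook CG for a symmetric positive-definite system A x = b."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                     # the dominant cost per iteration
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # conjugate direction update
        rs = rs_new
    return x
```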

Relevance:

20.00%

Publisher:

Abstract:

We present a rigorous methodology and new metrics for the fair comparison of server and microserver platforms. Deploying our methodology and metrics, we compare a microserver with ARM cores against two servers with x86 cores running the same real-time financial analytics workload. We define workload-specific but platform-independent performance metrics for platform comparison, targeting both datacenter operators and end users. Our methodology establishes that a server based on the Xeon Phi co-processor delivers the highest performance and energy efficiency. However, by scaling out energy-efficient microservers, we achieve competitive or better energy efficiency than a power-equivalent server with two Sandy Bridge sockets, despite the microserver's slower cores. Using a new iso-QoS metric, we find that the ARM microserver scales well enough to meet market throughput demand, that is, 100% QoS in terms of timely option pricing, with as little as 55% of the energy consumed by the Sandy Bridge server.
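The exact metric definitions are in the paper; the sketch below only illustrates the shape of the comparison an iso-QoS metric enables, with the field names invented for illustration.

```python
def iso_qos_energy(platforms, demand):
    """Among platform configurations meeting the market throughput demand
    (i.e. 100% QoS for timely option pricing), return the lowest energy.
    `platforms` is a list of dicts with assumed keys 'options_per_sec'
    and 'joules'; returns None if no configuration is feasible."""
    feasible = [p for p in platforms if p["options_per_sec"] >= demand]
    return min(p["joules"] for p in feasible) if feasible else None
```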

Relevance:

20.00%

Publisher:

Abstract:

The end of Dennard scaling has promoted low power consumption into a first-order concern for computing systems. However, conventional power conservation schemes such as voltage and frequency scaling are reaching their limits when used in performance-constrained environments. New technologies are required to break the power wall while sustaining performance on future processors. Low-power embedded processors and near-threshold voltage computing (NTVC) have been proposed as viable solutions to tackle the power wall in future computing systems. Unfortunately, these technologies may also compromise per-core performance and, in the case of NTVC, reliability. These limitations would make them unsuitable for HPC systems and datacenters. In order to demonstrate that emerging low-power processing technologies can effectively replace conventional technologies, this study relies on ARM's big.LITTLE processors as both an actual platform and an emulation platform, together with state-of-the-art implementations of the Conjugate Gradient (CG) solver. For NTVC in particular, the paper describes how efficient algorithm-based fault tolerance schemes preserve the power and energy benefits of very low voltage operation.
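The abstract does not detail the fault tolerance scheme; the sketch below shows the classic Huang-Abraham checksum idea for one matrix-vector product, which is the usual flavour of algorithm-based fault tolerance applied to CG (an assumption here, not the paper's exact scheme).

```python
import numpy as np

def checked_matvec(A, x, tol=1e-6):
    """Verify y = A @ x with an all-ones checksum vector: since
    sum(y) must equal (1^T A) @ x, a cheap scalar comparison can
    flag silent errors from very low voltage operation."""
    ones = np.ones(A.shape[0])
    encoded = ones @ A       # checksum row (real schemes encode once, reuse)
    y = A @ x
    expected = encoded @ x
    if abs(ones @ y - expected) > tol * (1 + abs(expected)):
        raise ArithmeticError("checksum mismatch: fault detected")
    return y
```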

Relevance:

20.00%

Publisher:

Abstract:

This paper discusses the use of primary frequency response metrics to assess the dynamics of frequency disturbance data in the presence of high system non-synchronous penetration (SNSP) and varying system inertia. The Irish power system has been chosen as a case study, as it experiences a significant level of SNSP from wind turbine generation and active power imported over HVDC interconnectors. Several actual recorded frequency disturbances were used in the analysis; these data were measured and collected from the Irish power system between October 2010 and June 2013. The paper shows the impact of system inertia and SNSP variation on the primary frequency response metrics, namely: frequency nadir, rate of change of frequency (RoCoF), and inertial and primary frequency response.
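Two of these metrics are simple to state; a hedged sketch follows (uniform sampling and a fixed sliding window are assumptions here, since the paper's exact estimation windows are not given in the abstract).

```python
import numpy as np

def frequency_metrics(t, f, window=0.5):
    """Frequency nadir and worst downward RoCoF from a disturbance
    record: t in seconds (uniformly sampled, ndarray), f in Hz,
    with RoCoF taken over a sliding window of the given length."""
    nadir = f.min()
    dt = t[1] - t[0]
    n = max(1, int(round(window / dt)))
    rocof = (f[n:] - f[:-n]) / (n * dt)   # Hz/s over each window
    return nadir, rocof.min()              # worst-case downward RoCoF
```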

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE: To demonstrate the benefit of complexity metrics such as the modulation complexity score (MCS) and monitor units (MUs) in multi-institutional audits of volumetric-modulated arc therapy (VMAT) delivery.

METHODS: 39 VMAT treatment plans were analysed using MCS and MU. A virtual phantom planning exercise was planned and independently measured using the PTW Octavius® phantom and seven29® 2D array (PTW-Freiburg GmbH, Freiburg, Germany). MCS and MU were compared with the median gamma index pass rates (2%/2 mm and 3%/3 mm) and with plan quality. The treatment planning systems (TPSs) were grouped according to whether their VMAT modelling was designed specifically for the linear accelerator manufacturer's own treatment delivery system (Type 1) or was vendor-independent for VMAT delivery (Type 2). Differences in plan complexity (MCS and MU) between TPS types were compared.

RESULTS: For Varian® linear accelerators (Varian Medical Systems, Inc., Palo Alto, CA), MCS and MU were significantly correlated with gamma pass rates. Type 2 TPSs created poorer-quality, more complex plans with significantly higher MU and MCS than Type 1 TPSs. Plan quality was significantly correlated with MU for Type 2 plans. A statistically significant correlation was observed between MU and MCS for all plans (R = -0.84, p < 0.01).

CONCLUSION: MU and MCS have a role in assessing plan complexity in audits, alongside plan quality metrics. Plan complexity metrics give some indication of plan deliverability but should be analysed together with plan quality.

ADVANCES IN KNOWLEDGE: Complexity metrics were investigated for a national rotational audit involving 34 institutions and showed value. The metrics indicated that more complex plans were created by planning systems that were vendor-independent for VMAT delivery.
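As an aside on the reported statistic, the correlation analysis is straightforward to reproduce once per-plan MU and MCS values are tabulated; the values below are placeholders, not the audit's data, and Pearson correlation is an assumption about the coefficient used.

```python
from scipy import stats

mu  = [312, 487, 455, 598]        # monitor units per plan (placeholder values)
mcs = [0.41, 0.28, 0.30, 0.22]    # modulation complexity scores (placeholder values)
r, p = stats.pearsonr(mu, mcs)    # the audit reports R = -0.84, p < 0.01
print(f"R = {r:.2f}, p = {p:.3g}")
```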