977 results for metrics

Relevance: 20.00%

Abstract:

The end of Dennard scaling has pushed power consumption into a first-order concern for current systems, on par with performance. As a result, near-threshold voltage computing (NTVC) has been proposed as a potential means to tackle the limited cooling capacity of CMOS technology. Hardware operating at NTV consumes significantly less power, at the cost of lower frequency and thus reduced performance, as well as increased error rates. In this paper, we investigate whether a low-power system-on-chip built on ARM's asymmetric big.LITTLE technology can be an alternative to conventional high-performance multicore processors in terms of power/energy in an unreliable scenario. For our study, we use the Conjugate Gradient (CG) solver, an algorithm representative of the computations performed by a large range of scientific and engineering codes.
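The CG method at the heart of the study can be sketched in a few lines. The following is a minimal, illustrative Python implementation for a small symmetric positive-definite system — not the authors' optimized kernel, and all names are illustrative:

```python
# Minimal conjugate gradient solver for a symmetric positive-definite
# system Ax = b. A sketch of the kind of kernel benchmarked in the
# paper, not the authors' actual implementation.
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x (with x = 0)
    p = r[:]                      # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        # Matrix-vector product A p
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:   # converged: residual norm below tol
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# 2x2 SPD system: [[4,1],[1,3]] x = [1,2] has solution (1/11, 7/11)
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

On asymmetric big.LITTLE hardware the interesting question is how kernels like the matrix-vector product above behave when mapped to slow, low-power cores versus fast ones.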

Relevance: 20.00%

Abstract:

We present a rigorous methodology and new metrics for fair comparison of server and microserver platforms. Deploying our methodology and metrics, we compare a microserver with ARM cores against two servers with x86 cores running the same real-time financial analytics workload. We define workload-specific but platform-independent performance metrics for platform comparison, targeting both datacenter operators and end users. Our methodology establishes that a server based on the Xeon Phi co-processor delivers the highest performance and energy efficiency. However, by scaling out energy-efficient microservers, we achieve competitive or better energy efficiency than a power-equivalent server with two Sandy Bridge sockets, despite the microserver's slower cores. Using a new iso-QoS metric, we find that the ARM microserver scales enough to meet market throughput demand, that is, 100% QoS in terms of timely option pricing, with as little as 55% of the energy consumed by the Sandy Bridge server.

Relevance: 20.00%

Abstract:

The end of Dennard scaling has promoted low power consumption into a first-order concern for computing systems. However, conventional power conservation schemes such as voltage and frequency scaling are reaching their limits when used in performance-constrained environments. New technologies are required to break the power wall while sustaining performance on future processors. Low-power embedded processors and near-threshold voltage computing (NTVC) have been proposed as viable solutions to tackle the power wall in future computing systems. Unfortunately, these technologies may also compromise per-core performance and, in the case of NTVC, reliability. These limitations would make them unsuitable for HPC systems and datacenters. In order to demonstrate that emerging low-power processing technologies can effectively replace conventional technologies, this study relies on ARM's big.LITTLE processors as both an actual and an emulation platform, together with state-of-the-art implementations of the CG solver. For NTVC in particular, the paper describes how efficient algorithm-based fault tolerance schemes preserve the power and energy benefits of very low voltage operation.

Relevance: 20.00%

Abstract:

This paper discusses the use of primary frequency response metrics to assess frequency disturbance data in the presence of high system non-synchronous penetration (SNSP) and system inertia variation. The Irish power system has been chosen as a case study, as it experiences a significant level of SNSP from wind turbine generation and active power imported over HVDC interconnectors. Several actual recorded frequency disturbances, measured and collected on the Irish power system between October 2010 and June 2013, were used in the analysis. The paper shows the impact of system inertia and SNSP variation on the primary frequency response metrics, namely: nadir frequency, rate of change of frequency, and inertial and primary frequency response.
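Two of the metrics named above, nadir frequency and rate of change of frequency (RoCoF), are simple to compute from a sampled frequency trace. The sketch below uses illustrative function names and made-up data, not the paper's measurement pipeline:

```python
# Sketch of two primary frequency response metrics computed from a
# frequency trace sampled at a fixed time step dt (seconds).
# Names and data are illustrative, not from the paper.
def nadir_frequency(trace):
    """Lowest frequency (Hz) reached during the disturbance."""
    return min(trace)

def rocof(trace, dt):
    """Maximum-magnitude rate of change of frequency (Hz/s)."""
    return max(abs(trace[i + 1] - trace[i]) / dt
               for i in range(len(trace) - 1))

# Toy disturbance: a 50 Hz system dipping after a generation loss,
# sampled every 0.5 s.
f = [50.0, 49.9, 49.7, 49.6, 49.65, 49.8, 49.95]
dt = 0.5
```

Lower system inertia shows up directly in these numbers: the frequency falls faster (higher RoCoF) and further (lower nadir) for the same disturbance.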

Relevance: 20.00%

Abstract:

OBJECTIVE: To demonstrate the benefit of complexity metrics such as the modulation complexity score (MCS) and monitor units (MUs) in multi-institutional audits of volumetric-modulated arc therapy (VMAT) delivery.

METHODS: 39 VMAT treatment plans were analysed using MCS and MU. A virtual phantom planning exercise was planned and independently measured using the PTW Octavius® phantom and seven29® 2D array (PTW-Freiburg GmbH, Freiburg, Germany). MCS and MU were compared with the median gamma index pass rates (2%/2 mm and 3%/3 mm) and plan quality. The treatment planning systems (TPS) were grouped by whether their VMAT modelling was specifically designed for the linear accelerator manufacturer's own treatment delivery system (Type 1) or independent of vendor for VMAT delivery (Type 2). Differences in plan complexity (MCS and MU) between TPS types were compared.

RESULTS: For Varian® linear accelerators (Varian Medical Systems, Inc., Palo Alto, CA), MCS and MU were significantly correlated with gamma pass rates. Type 2 TPS created poorer-quality, more complex plans with significantly higher MU and MCS than Type 1 TPS. Plan quality was significantly correlated with MU for Type 2 plans. A statistically significant correlation was observed between MU and MCS for all plans (R = -0.84, p < 0.01).

CONCLUSION: MU and MCS have a role in assessing plan complexity in audits along with plan quality metrics. Plan complexity metrics give some indication of plan deliverability but should be analysed with plan quality.

ADVANCES IN KNOWLEDGE: Complexity metrics were investigated for a national rotational audit involving 34 institutions and they showed value. The metrics found that more complex plans were created for planning systems which were independent of vendor for VMAT delivery.

Relevance: 20.00%

Abstract:

Doctoral thesis, Informatics (Bioinformatics), Universidade de Lisboa, Faculdade de Ciências, 2015

Relevance: 20.00%

Abstract:

The advantages of a DSL and the benefits its use potentially brings mean that informed decisions on the design of a domain-specific language are of paramount importance. We believe that such decisions should be informed by analysis of data empirically collected from systems, to highlight salient features that can then form the basis of a DSL. To support this theory, we describe an empirical study of a large open-source system called Barcode, written in C, from which we collected two well-known slice-based metrics. We analyzed multiple versions of the system and sliced its functions in three separate ways (i.e., by input, output, and global variables). The purpose of the study was to try to identify sensitivities and traits in those metrics that might inform features of a potential slice-based DSL. Results indicated that cohesion was adversely affected by the use of global variables and that the role of function inputs and outputs can be revealed through slicing. The study is motivated primarily by the problems with current tools and interfaces experienced directly by the authors in extracting slicing data, and by the need to promote the benefits that analysis of slice data, and slicing in general, can offer.

Relevance: 20.00%

Abstract:

Data registration refers to a family of techniques for matching, or bringing into alignment, similar objects or datasets. These techniques enjoy widespread use in a diverse variety of applications, such as video coding, tracking, object and face detection and recognition, surveillance and satellite imaging, medical image analysis, and structure from motion. Registration methods are as numerous as their manifold uses, ranging from pixel-level and block- or feature-based methods to Fourier-domain methods. This book focuses on providing algorithms and image and video techniques for registration and quality performance metrics. The authors provide various assessment metrics for measuring registration quality alongside analyses of registration techniques, introducing and explaining both familiar and state-of-the-art registration methodologies used in a variety of targeted applications.
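As a flavour of the block-based family mentioned above, the following toy sketch registers a small template against an image by exhaustive search over integer shifts, minimising the sum of squared differences (SSD). Names and data are illustrative, not taken from the book:

```python
# Toy block-based registration: find the integer offset that best
# aligns a template within a larger image by minimising the sum of
# squared differences (SSD). Pure-Python sketch on nested lists.
def find_shift(image, template):
    h, w = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best_ssd, best_pos = None, None
    for dy in range(h - th + 1):          # try every vertical offset
        for dx in range(w - tw + 1):      # ... and horizontal offset
            ssd = sum((image[dy + i][dx + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_pos = ssd, (dy, dx)
    return best_pos

image = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 6, 0],
         [0, 0, 0, 0]]
template = [[9, 8],
            [7, 6]]
# The template matches exactly at row 1, column 1.
```

Real registration pipelines replace the exhaustive search with hierarchical or Fourier-domain strategies and SSD with more robust similarity metrics, which is precisely the design space the book surveys.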

Relevance: 20.00%

Abstract:

The Container Loading Problem (CLP) literature has traditionally evaluated the dynamic stability of cargo by applying two metrics to box arrangements: the mean number of boxes supporting the items, excluding those placed directly on the floor (M1), and the percentage of boxes with insufficient lateral support (M2). However, these metrics, which aim to be proxies for cargo stability during transportation, fail to reflect the real-world dynamic stability conditions of cargo. In this paper, two new performance indicators are proposed to evaluate the dynamic stability of cargo arrangements: the number of fallen boxes (NFB) and the number of boxes within the Damage Boundary Curve fragility test (NB_DBC). Using 1500 solutions for well-known problem instances found in the literature, these new performance indicators are evaluated using a physics simulation tool (StableCargo), which replaces real-world truck transportation with a simulation of the dynamic behaviour of container loading arrangements. Two new dynamic stability metrics that can be integrated within any container loading algorithm are also proposed. The metrics are analytical models of the proposed stability performance indicators, computed by multiple linear regression. Pearson's r correlation coefficient was used as an evaluation parameter for the performance of the models. The extensive computational results show that the proposed metrics are better proxies for dynamic stability in the CLP than the previously widely used metrics.
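The evaluation step described above, scoring an analytical metric by its Pearson's r against a simulated stability indicator, can be illustrated with a small self-contained sketch (all values are made up, not from the paper):

```python
# Pearson's r between a candidate analytical metric and the simulated
# stability indicator it is meant to model. Illustrative sketch only;
# the data values are invented for the example.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

metric_values = [1.0, 2.0, 3.0, 4.0]       # analytical metric per solution
indicator = [2.1, 3.9, 6.2, 7.8]           # simulated indicator (e.g. NFB)
r = pearson_r(metric_values, indicator)    # close to 1: good proxy
```

A metric whose r against the simulated indicator is near ±1 is a good analytical stand-in for running the full physics simulation on every candidate packing.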

Relevance: 20.00%

Abstract:

Grapevine winter hardiness is a key factor in vineyard success in many cool-climate wine regions. Winter hardiness may be governed by a myriad of factors in addition to extreme weather conditions – e.g. soil factors (texture, chemical composition, moisture, drainage), vine water status, and yield – that are unique to each site. It was hypothesized that winter hardiness would be influenced by certain terroir factors, specifically that vines with low water status [more negative leaf water potential (leaf ψ)] would be more winter hardy than vines with high water status (more positive leaf ψ). Twelve vineyard blocks (six each of Riesling and Cabernet franc) throughout the Niagara Region in Ontario, Canada were chosen. Data were collected during the growing season (soil moisture, leaf ψ), at harvest (yield components, berry composition), and during the winter (bud LT50, bud survival). Interpolation and mapping of the variables were completed using ArcGIS 10.1 (ESRI, Redlands, CA), and statistical analyses (Pearson's correlation, principal component analysis, multilinear regression) were performed using XLSTAT. Clear spatial trends were observed in each vineyard for soil moisture, leaf ψ, yield components, berry composition, and LT50. Both leaf ψ and berry weight could predict the LT50 value, with strong positive correlations observed between LT50 and leaf ψ values in eight of the 12 vineyard blocks. In addition, vineyards in different appellations (Niagara Lakeshore, Lincoln Lakeshore, Four Mile Creek, Beamsville Bench) showed many similarities. These results suggest that there is a spatial component to winter injury, as with other aspects of terroir, in the Niagara region.

Relevance: 20.00%

Abstract:

As the alteration of freshwater habitats increases, it becomes critical to identify the habitat components that influence fisheries productivity metrics. We compared the relative contribution of three types of habitat variables to explaining the variance of abundance, biomass, and richness metrics using fish habitat models, and identified the habitat variables most effective at explaining these variations. During the summers of 2012 and 2013, fish communities at 43 littoral sites were sampled in Lac du Bonnet, a reservoir in southeastern Manitoba (Canada). Seven sampling scenarios, differing by fishing gear, year, and time of day, were used to estimate abundance, biomass, and richness at each site, all species combined. Three types of habitat variables were evaluated: local variables (within the site), lateral variables (characterizing the shoreline), and contextual variables (position relative to landscape features). Local and contextual habitat variables together explained on average 44% (adjusted R²) of the variation in fisheries productivity metrics, whereas lateral habitat variables explained only 2%. The most frequently significant variables were macrophyte cover, distance to tributaries ≥ 50 m wide, and distance to marshes ≥ 100,000 m² in area, suggesting that these variables are the most effective at explaining variation in fisheries productivity metrics in the littoral zone of reservoirs.

Relevance: 20.00%

Abstract:

The image comparison operation, assessing how well one image matches another, forms a critical component of many image analysis systems and models of human visual processing. Two norms used commonly for this purpose are L1 and L2, which are specific instances of the Minkowski metric. However, there is often not a principled reason for selecting one norm over the other. One way to address this problem is by examining whether one metric better captures the perceptual notion of image similarity than the other. With this goal, we examined perceptual preferences for images retrieved on the basis of the L1 versus the L2 norm. These images were either small fragments without recognizable content, or larger patterns with recognizable content created via vector quantization. In both conditions the subjects showed a consistent preference for images matched using the L1 metric. These results suggest that, in the domain of natural images of the kind we have used, the L1 metric may better capture human notions of image similarity.
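The two Minkowski metrics compared in the study are straightforward to state in code. Below is a minimal sketch over images flattened to equal-length vectors; the example data is illustrative, not from the experiments:

```python
# L1 (sum of absolute differences) and L2 (Euclidean) distances between
# two images flattened to equal-length vectors of pixel values.
def l1_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def l2_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

a = [0.0, 1.0, 2.0]
b = [1.0, 1.0, 4.0]
# L1 = |0-1| + |1-1| + |2-4| = 3;  L2 = sqrt(1 + 0 + 4) = sqrt(5)
```

Because L1 weights many small differences as heavily as one large one, while L2 emphasizes large outlier differences, the two norms can rank candidate matches differently — which is exactly why the retrieved image sets in the study could diverge.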