929 results for Hardware and software


Relevance: 90.00%

Abstract:

The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include caches, memory, network controllers and, in some cases, floating-point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on each core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
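
To make the interpolation step concrete, the following is a minimal Python sketch of such a benchmark-driven model, with entirely hypothetical benchmark numbers and function names; the real model's measurements, decomposition logic and scenarios are specific to shallow and the Cray XE6.

# Hypothetical benchmark tables and a toy predictor for a 2-D decomposition.
import numpy as np

# Measured per-timestep loop time (seconds) for sample local domain sizes,
# taken from application-specific benchmarks under a given core mapping.
bench_cells = np.array([64**2, 128**2, 256**2, 512**2])
bench_compute = np.array([1.1e-4, 4.5e-4, 1.9e-3, 8.2e-3])

# Measured halo-exchange time (seconds) against message size in bytes.
bench_msg_bytes = np.array([512, 2048, 8192, 32768])
bench_halo = np.array([6.0e-6, 9.5e-6, 2.2e-5, 7.8e-5])

def predict_step_time(global_nx, global_ny, px, py, word_bytes=8):
    """Predict per-timestep runtime for a px-by-py decomposition by
    interpolating the compute and halo-exchange benchmark results."""
    local_cells = (global_nx // px) * (global_ny // py)
    compute = np.interp(local_cells, bench_cells, bench_compute)
    # Halo message size scales with the local edge lengths (two halos per dimension).
    halo_bytes = 2 * (global_nx // px + global_ny // py) * word_bytes
    halo = np.interp(halo_bytes, bench_msg_bytes, bench_halo)
    return compute + halo

# Compare two deployment scenarios for a 1024x1024 grid on 64 tasks.
for px, py in [(8, 8), (16, 4)]:
    print(px, py, predict_step_time(1024, 1024, px, py))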

Relevance: 90.00%

Abstract:

Key Performance Indicators (KPIs) are the main instruments of Business Performance Management. KPIs are measures that relate to both the strategy and the business processes. These measures are often designed for an industry sector under assumptions about the business processes in organizations. However, those assumptions can be too incomplete to guarantee the required properties of the KPIs. This raises the need to validate the properties of KPIs prior to their application to performance measurement. This paper applies the method called EXecutable Requirements Engineering Management and Evolution (EXTREME) to the validation of KPI definitions. EXTREME semantically relates goal modeling, conceptual modeling and protocol modeling techniques within one methodology. The synchronous composition built into protocol modeling enables traceability of goals in protocol models and constructive definitions of a KPI. The application of the method clarifies the meaning of KPI properties and the procedures for their assessment and validation.
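
As an illustration of what a constructively defined KPI looks like, here is a minimal Python sketch with hypothetical event names and a hypothetical KPI; it does not reproduce the paper's protocol models, but shows the kind of properties (well-definedness, protocol conformance) that validation must establish before the KPI is used for measurement.

from dataclasses import dataclass

@dataclass
class Event:
    kind: str        # e.g. "order_received", "order_shipped" (illustrative only)
    case_id: str
    timestamp: float # seconds

def on_time_shipping_rate(trace, deadline_hours=48.0):
    """Hypothetical KPI: fraction of orders shipped within the deadline.
    Raises if the trace violates the assumed protocol, which is exactly the
    kind of defect the validation step is meant to expose."""
    received, shipped = {}, {}
    for e in trace:
        if e.kind == "order_received":
            received[e.case_id] = e.timestamp
        elif e.kind == "order_shipped":
            if e.case_id not in received:
                raise ValueError(f"shipment without a matching order: {e.case_id}")
            shipped[e.case_id] = e.timestamp
    if not received:
        raise ValueError("KPI undefined: no orders in the measurement window")
    on_time = sum(1 for c, t in shipped.items()
                  if t - received[c] <= deadline_hours * 3600)
    return on_time / len(received)

trace = [Event("order_received", "A1", 0.0), Event("order_shipped", "A1", 3600.0)]
print(on_time_shipping_rate(trace))   # -> 1.0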

Relevance: 90.00%

Abstract:

We describe infinitely scalable pipeline machines with perfect parallelism, in the sense that every instruction of an in-line program is executed, on successive data, on every clock tick. Programs with shared data effectively execute in less than a clock tick. We show that pipeline machines are faster than single- or multi-core von Neumann machines for sufficiently many runs of a sufficiently time-consuming program. Our pipeline machines exploit the totality of transreal arithmetic and the known waiting time of statically compiled programs to deliver the interesting property that they need no hardware or software exception handling.
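
To illustrate how totality removes exception paths, here is a minimal Python sketch of transreal division, assuming the usual transreal conventions for division by zero; representing nullity by NaN is purely for illustration.

import math

POS_INF, NEG_INF, NULLITY = math.inf, -math.inf, math.nan

def transreal_div(a, b):
    """Division that is defined for every pair of inputs, so the calling
    pipeline never needs divide-by-zero exception handling."""
    if b != 0:
        return a / b
    if a > 0:
        return POS_INF    # a/0 with a > 0
    if a < 0:
        return NEG_INF    # a/0 with a < 0
    return NULLITY        # 0/0

# Every input yields a value; no exception paths are required.
for a, b in [(1, 2), (1, 0), (-3, 0), (0, 0)]:
    print(a, "/", b, "=", transreal_div(a, b))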

Relevance: 90.00%

Abstract:

A new methodology was created to measure the energy consumption and related greenhouse gas (GHG) emissions of a computer operating system (OS) across different device platforms. The methodology involved the direct power measurement of devices under different activity states. To cover all aspects of an OS, the methodology included measurements in various OS modes and, uniquely, also incorporated measurements taken while running an array of defined software activities, so as to include the OS's application-management features. The methodology was demonstrated on a laptop and a phone that could each run multiple OSs; the results confirmed that the OS can significantly affect the energy consumption of a device. In particular, newer versions of the Microsoft Windows OS were tested, highlighting significant differences between OS versions on the same hardware. The developed methodology could enable a greater awareness of energy consumption during both the software development and software marketing processes.
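
The arithmetic behind the methodology can be sketched as follows; the power traces, sampling interval and grid emission factor below are hypothetical placeholders, not measured values from the study.

def energy_kwh(power_samples_w, interval_s):
    """Mean power times duration, converted to kWh."""
    duration_h = len(power_samples_w) * interval_s / 3600.0
    mean_power_kw = (sum(power_samples_w) / len(power_samples_w)) / 1000.0
    return mean_power_kw * duration_h

def ghg_kg_co2e(kwh, grid_factor_kg_per_kwh=0.233):
    """Convert energy to emissions using an assumed grid emission factor."""
    return kwh * grid_factor_kg_per_kwh

# One-second power samples (watts) for the same scripted activity on two OS builds.
os_a = [11.8] * 600   # hypothetical laptop trace, OS version A
os_b = [13.1] * 600   # same hardware, OS version B
for name, trace in [("OS A", os_a), ("OS B", os_b)]:
    e = energy_kwh(trace, interval_s=1.0)
    print(name, round(e, 5), "kWh,", round(ghg_kg_co2e(e), 5), "kg CO2e")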

Relevance: 90.00%

Abstract:

Recruitment of patients to a clinical trial usually occurs over a period of time, resulting in the steady accumulation of data throughout the trial's duration. Yet, according to traditional statistical methods, the sample size of the trial should be determined in advance and data collected on all subjects before analysis proceeds. For ethical and economic reasons, the technique of sequential testing has been developed to enable the examination of data at a series of interim analyses. The aim is to stop recruitment to the study as soon as there is sufficient evidence to reach a firm conclusion. In this paper we present the advantages and disadvantages of conducting interim analyses in phase III clinical trials, together with the key steps for the successful implementation of sequential methods in this setting. Examples are given of completed trials that were carried out sequentially, and references to relevant literature and software are provided.
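
As a simplified illustration of a sequential stopping rule, the following Python sketch compares a two-sample z statistic against an assumed constant (Pocock-style) boundary at each interim analysis; a real phase III design would derive its boundaries and analyses from validated group-sequential software.

import math

def z_statistic(mean_treat, mean_ctrl, sd, n_per_arm):
    """Two-sample z statistic for a difference in means with known sd."""
    return (mean_treat - mean_ctrl) / (sd * math.sqrt(2.0 / n_per_arm))

def sequential_trial(interim_data, boundary=2.29):
    """Assumed constant critical value (approx. Pocock, 3 looks, two-sided 5%)."""
    for look, (m_t, m_c, sd, n) in enumerate(interim_data, start=1):
        z = z_statistic(m_t, m_c, sd, n)
        if abs(z) >= boundary:
            return f"stop at interim analysis {look}: |z|={abs(z):.2f} crosses {boundary}"
    return "continue to the planned maximum sample size"

# Cumulative summaries at each look (treatment mean, control mean, sd, n per arm).
looks = [(1.2, 1.0, 1.5, 40), (1.3, 1.0, 1.5, 80), (1.5, 1.0, 1.5, 120)]
print(sequential_trial(looks))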

Relevance: 90.00%

Abstract:

Document engineering is the computer science discipline that investigates systems for documents in any form and in all media. As with the relationship between software engineering and software, document engineering is concerned with principles, tools and processes that improve our ability to create, manage and maintain documents (http://www.documentengineering.org). The ACM Symposium on Document Engineering is an annual meeting of researchers active in document engineering; it is sponsored by ACM through the ACM SIGWEB Special Interest Group. In this editorial, we first point to work carried out in the context of document engineering that is directly related to multimedia tools and applications. We conclude with a summary of the papers presented in this special issue.

Relevance: 90.00%

Abstract:

This paper aims to identify some of the key factors in adopting an organization-wide software reuse program. The factors are derived from practical experience reported by industry professionals, through a survey involving 57 small, medium and large Brazilian software organizations. Some of them produce software with commonality between applications and have mature processes, while others successfully achieved reuse through isolated, ad hoc efforts. The paper compiles the answers from the survey participants, showing which factors were more closely associated with reuse success. Based on this relationship, a guide is presented, pointing out which factors should be more strongly considered by small, medium and large organizations attempting to establish a reuse program. (C) 2007 Elsevier Inc. All rights reserved.

Relevance: 90.00%

Abstract:

Reusable and evolvable Software Engineering Environments (SEEs) are essential to software production and have increasingly become a necessity. From another perspective, software architectures and reference architectures have played a significant role in determining the success of software systems. In this paper we present a reference architecture for SEEs, named RefASSET, which is based on concepts from the aspect-oriented approach. The architecture is specialized to the software testing domain, and the development of tools for that domain is discussed. This and other case studies have pointed out that the use of aspects in RefASSET provides a better separation of concerns, resulting in reusable and evolvable SEEs. (C) 2011 Elsevier Inc. All rights reserved.

Relevance: 90.00%

Abstract:

The Grid is a large-scale computer system that is capable of coordinating resources that are not subject to centralised control, whilst using standard, open, general-purpose protocols and interfaces, and delivering non-trivial qualities of service. In this chapter, we argue that Grid applications very strongly suggest the use of agent-based computing, and we review key uses of agent technologies in Grids: user agents, able to customize and personalise data; agent communication languages offering a generic and portable communication medium; and negotiation allowing multiple distributed entities to reach service-level agreements. In the second part of the chapter, we focus on Grid service discovery, which we have identified as a prime candidate for the use of agent technologies: we show that Grid services need to be located via personalised, semantically rich discovery processes, which must rely on the storage of arbitrary metadata about services originating from both service providers and service users. We present UDDI-MT, an extension to the standard UDDI service directory approach that supports the storage of such metadata via a tunnelling technique that ties the metadata store to the original UDDI directory. The outcome is a flexible service registry that is compatible with existing standards and also provides metadata-enhanced service discovery.
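
The following Python sketch illustrates only the general idea of a metadata-enhanced registry, with hypothetical class and method names; it is not the UDDI-MT API, but shows how arbitrary provider- and user-supplied metadata keyed by a service identifier can drive discovery alongside a standard directory record.

from collections import defaultdict

class MetadataRegistry:
    def __init__(self):
        self.services = {}                    # serviceKey -> standard directory record
        self.metadata = defaultdict(list)     # serviceKey -> [(source, key, value), ...]

    def publish(self, service_key, name, access_point):
        self.services[service_key] = {"name": name, "accessPoint": access_point}

    def annotate(self, service_key, source, key, value):
        """Attach arbitrary metadata from a provider or a user to an existing service."""
        self.metadata[service_key].append((source, key, value))

    def discover(self, **required):
        """Return services whose metadata satisfies every required key=value pair."""
        hits = []
        for skey, record in self.services.items():
            tags = {(k, v) for _, k, v in self.metadata[skey]}
            if all((k, v) in tags for k, v in required.items()):
                hits.append((skey, record))
        return hits

reg = MetadataRegistry()
reg.publish("svc-001", "matrix-solver", "http://example.org/solve")
reg.annotate("svc-001", source="provider", key="architecture", value="x86_64")
reg.annotate("svc-001", source="user", key="observed-reliability", value="high")
print(reg.discover(architecture="x86_64", **{"observed-reliability": "high"}))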
