Abstract:
Branch prediction feeds a speculatively executing processor core with instructions. Branch mispredictions are inevitable and harm both performance and energy consumption. With the advent of highly accurate conditional branch predictors, unconditional branch instructions are gaining importance.
Towards an understanding of the causes and effects of software requirements change: two case studies
Abstract:
Changes to software requirements not only pose a risk to the successful delivery of software applications but also provide an opportunity for improved usability and value. Increased understanding of the causes and consequences of change can support requirements management and also make progress towards the goal of change anticipation. This paper presents the results of two case studies that address objectives arising from that ultimate goal. The first case study evaluated the potential of a change source taxonomy containing the elements ‘market’, ‘organisation’, ‘vision’, ‘specification’, and ‘solution’ to provide a meaningful basis for change classification and measurement. The second case study investigated whether the requirements attributes of novelty, complexity, and dependency correlated with requirements volatility. While insufficiency of data in the first case study precluded an investigation of changes arising from the ‘market’ change source, for the remaining change sources the results indicate significant differences in cost, value to the customer, and management considerations. Findings show that higher-cost and higher-value changes arose more often from ‘organisation’ and ‘vision’ sources; these changes also generally involved the co-operation of more stakeholder groups and were considered to be less controllable than changes arising from the ‘specification’ or ‘solution’ sources. Results from the second case study indicate that only ‘requirements dependency’ is consistently correlated with volatility and that changes coming from each change source affect different groups of requirements. We conclude that the taxonomy can provide a meaningful means of change classification, but that a single requirement attribute is insufficient for change prediction. A theoretical causal account of requirements change is drawn from the implications of the combined results of the two case studies.
Abstract:
Many scientific applications are programmed using hybrid programming models that use both message passing and shared memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared memory or message passing, in isolation. The potential solution space, and thus the challenge, increases substantially when optimizing hybrid models, since the number of possible resource configurations grows exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency in hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB-MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74 percent on average and up to 13.8 percent) with some performance gain (up to 7.5 percent) or negligible performance loss.
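To make the flavor of such a scheme concrete, the sketch below searches the DCT × DVFS configuration space for minimum predicted energy under a performance bound. The model functions, their coefficients, and the 10% slowdown bound are illustrative assumptions, not the paper's actual statistical models.

```python
# Minimal sketch: pick the (threads, frequency) pair with the lowest
# predicted energy while bounding predicted slowdown. The predictive
# models here are toy stand-ins for the paper's statistical models.

def predict_time(threads, freq_ghz):
    # Assumed model: imperfect scaling with threads, linear in frequency.
    return 100.0 / (threads ** 0.8 * freq_ghz)

def predict_power(threads, freq_ghz):
    # Assumed model: static power plus per-core dynamic power ~ f^3 (DVFS).
    return 10.0 + 5.0 * threads * freq_ghz ** 3

def best_config(max_threads, freqs, max_slowdown=1.10):
    baseline = predict_time(max_threads, max(freqs))
    best, best_energy = None, float("inf")
    for threads in range(1, max_threads + 1):      # DCT dimension
        for freq in freqs:                         # DVFS dimension
            t = predict_time(threads, freq)
            if t > baseline * max_slowdown:        # performance bound
                continue
            energy = t * predict_power(threads, freq)
            if energy < best_energy:
                best, best_energy = (threads, freq), energy
    return best, best_energy

print(best_config(16, [1.2, 1.6, 2.0, 2.4]))
```

Under these toy models the search trades a small predicted slowdown (fewer threads) for lower predicted power, which is the essence of combining DCT with DVFS.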
Abstract:
An approach to the management of non-functional concerns in massively parallel and/or distributed architectures that marries parallel programming patterns with autonomic computing is presented. The necessity and suitability of the adoption of autonomic techniques are evidenced. Issues arising in the implementation of autonomic managers taking care of multiple concerns and of coordination among hierarchies of such autonomic managers are discussed. Experimental results are presented that demonstrate the feasibility of the approach.
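As a sketch of what one such autonomic manager might look like for a single non-functional concern, below is a classic monitor-analyze-plan-execute (MAPE) loop adjusting the parallelism degree of a task farm. The class names, thresholds and the toy throughput model are assumptions for illustration only.

```python
# Minimal MAPE-loop sketch of an autonomic manager for one concern
# (performance) of a parallel pattern; all names and thresholds are
# illustrative assumptions, not the paper's framework.

class DummyFarm:
    """Stand-in for a task-farm pattern exposing monitor/actuator hooks."""
    def __init__(self, workers=4):
        self._workers = workers

    def workers(self):
        return self._workers

    def throughput(self):
        return 100.0 * self._workers   # toy linear scaling model

    def resize(self, n):
        self._workers = max(1, n)      # actuator: change parallelism degree

class PerformanceManager:
    def __init__(self, pattern, target_throughput):
        self.pattern = pattern
        self.target = target_throughput

    def step(self):
        observed = self.pattern.throughput()      # Monitor
        if observed < 0.9 * self.target:          # Analyze: contract broken
            delta = +1                            # Plan: add a worker
        elif observed > 1.1 * self.target:        # Analyze: over-provisioned
            delta = -1                            # Plan: release a worker
        else:
            delta = 0
        if delta:
            self.pattern.resize(self.pattern.workers() + delta)  # Execute

farm = DummyFarm(workers=4)
manager = PerformanceManager(farm, target_throughput=800.0)
for _ in range(6):
    manager.step()
print(farm.workers())  # converges to 8 workers under the toy model
```

Managers handling multiple concerns, or hierarchies of coordinated managers as discussed in the abstract, would layer further decision logic on this same loop.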
Abstract:
Task-based dataflow programming models and runtimes are emerging as promising candidates for programming multicore and manycore architectures. These programming models dynamically analyze task dependencies at runtime and schedule independent tasks concurrently onto the processing elements. In such models, cache locality, which is critical for performance, becomes more challenging in the presence of fine-grain tasks and on architectures with many simple cores.
This paper presents a combined hardware-software approach to improve cache locality and deliver better performance, in terms of execution time and energy, in the memory system. We propose the explicit bulk prefetcher (EBP) and epoch-based cache management (ECM) to help runtimes prefetch task data and guide replacement decisions in caches. The runtime software can use this hardware support to expose its internal knowledge about tasks to the architecture and achieve more efficient task-based execution. Our combined scheme outperforms HW-only prefetchers and state-of-the-art replacement policies, improves performance by an average of 17%, generates on average 26% fewer L2 misses, and consumes on average 28% less energy in the components of the memory system.
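The sketch below illustrates the division of labor the abstract describes: the runtime, which already knows each task's input footprint from the dataflow dependencies, pushes that knowledge down to the hardware before dispatch. The `ebp_prefetch` and `ecm_open_epoch` calls are hypothetical stand-ins for a hardware interface; the paper defines its own EBP/ECM mechanisms.

```python
# Illustrative runtime-side use of bulk-prefetch and cache-epoch hints.
# ebp_prefetch() and ecm_open_epoch() are hypothetical stand-ins for
# memory-mapped hardware commands; the real EBP/ECM interfaces differ.

from dataclasses import dataclass

@dataclass
class Task:
    task_id: int
    input_regions: list  # (base_address, size) pairs from dataflow deps

def ebp_prefetch(base, size):
    """Hint: bulk-prefetch [base, base + size) into the worker's cache."""
    print(f"EBP: prefetch {size} bytes at {base:#x}")

def ecm_open_epoch(task_id):
    """Hint: tag subsequent cache fills so ECM can age them out when the
    task's epoch closes, protecting live task data from eviction."""
    print(f"ECM: open epoch for task {task_id}")

def dispatch(task, run):
    ecm_open_epoch(task.task_id)              # start a cache epoch
    for base, size in task.input_regions:     # known before the task runs
        ebp_prefetch(base, size)              # overlap prefetch with queueing
    run(task)                                 # execute once inputs are warm

dispatch(Task(7, [(0x1000, 4096), (0x8000, 8192)]), lambda t: None)
```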
Abstract:
This paper reports a case study conducted in Quinta da Aveleda, one of the three largest Portuguese wine companies. Our aim was to explore the relationship established between a newly implemented Balanced Scorecard (BSC) and the elements of the Management Control System (MCS) in the organization. Thus, two specific objectives were pursued. Firstly, to identify the influences (barriers, opportunities) of the existing MCS on the implementation of the BSC. Secondly, to identify the impacts the BSC implementation was able to exert on the configuration of the organization’s MCS. We found that the budgeting process, the planning system, the information infrastructure, and the organizational structure and culture were the elements of the previous MCS that influenced the BSC implementation process. Ultimately, the BSC implementation led to important changes in the budgeting, planning, and reporting systems and processes. In order to explain these findings, we briefly explored the main issues and factors accounting for the scope and nature of the BSC’s impacts on Quinta da Aveleda. These issues and factors were the mobilized organizational resources, the implementation approach, the communication, and the organizational support.
Abstract:
Cloud services are exploding, and organizations are converging their data centers in order to take advantage of the predictability, continuity, and quality of service delivered by virtualization technologies. In parallel, energy-efficient and high-security networking is of increasing importance. Network operators, and service and product providers, require a new network solution to efficiently tackle the increasing demands of this changing network landscape. Software-defined networking has emerged as an efficient network technology capable of supporting the dynamic nature of future network functions and intelligent applications while lowering operating costs through simplified hardware, software, and management. In this article, the question of how to achieve a successful carrier-grade network with software-defined networking is raised. Specific focus is placed on the challenges of network performance, scalability, security, and interoperability, with the proposal of potential solution directions.
Abstract:
Policy-based network management (PBNM) paradigms provide an effective tool for end-to-end resource management in converged next-generation networks by enabling unified, adaptive and scalable solutions that integrate and co-ordinate diverse resource management mechanisms associated with heterogeneous access technologies. In our project, a PBNM framework for end-to-end QoS management in converged networks is being developed. The framework consists of distributed functional entities managed within a policy-based infrastructure to provide QoS and resource management in converged networks. Within any QoS control framework, an effective admission control scheme is essential for maintaining the QoS of flows present in the network. Measurement-based admission control (MBAC) and parameter-based admission control (PBAC) are two commonly used approaches. This paper presents the implementation and analysis of various measurement-based admission control schemes developed within a Java-based prototype of our policy-based framework. The evaluation is made with real traffic flows on a Linux-based experimental testbed where the current prototype is deployed. Our results show that, unlike classic MBAC-only or PBAC-only schemes, a hybrid approach that combines both methods can simultaneously improve admission control and network utilization efficiency.
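A minimal sketch of how such a hybrid test can combine the two checks is shown below; the flow descriptors, the 90% utilization target and the example numbers are illustrative assumptions, not the schemes implemented in the prototype.

```python
# Illustrative hybrid admission control: a flow is admitted only if both
# the parameter-based check (declared traffic descriptors) and the
# measurement-based check (observed load plus headroom) pass.

def pbac_admit(declared_peaks_bps, new_peak_bps, capacity_bps):
    # Parameter-based: worst case using declared peak rates.
    return sum(declared_peaks_bps) + new_peak_bps <= capacity_bps

def mbac_admit(measured_load_bps, new_peak_bps, capacity_bps, util=0.9):
    # Measurement-based: measured load plus the new flow's peak must
    # stay under an assumed 90% utilization target.
    return measured_load_bps + new_peak_bps <= util * capacity_bps

def hybrid_admit(declared, measured, new_peak, capacity):
    return (pbac_admit(declared, new_peak, capacity)
            and mbac_admit(measured, new_peak, capacity))

# Example: 100 Mb/s link, three declared 20 Mb/s flows, 45 Mb/s measured.
print(hybrid_admit([20e6] * 3, 45e6, 20e6, 100e6))  # True: both checks pass
```

The measurement-based side reclaims capacity that declared descriptors overstate, while the parameter-based side guards against admitting flows whose declared worst case exceeds capacity, which is why combining them can improve both admission decisions and utilization.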
Abstract:
The ability of building information modeling (BIM) to positively impact projects in the AEC sector through greater collaboration and integration is widely acknowledged. This paper aims to examine the development of BIM and how it can contribute to the cold-formed steel (CFS) building industry. This is achieved through the adoption of a qualitative methodology encompassing a literature review and exploratory interviews with industry experts, culminating in the development of e-learning material for the sector. In doing so, the research team collaborated with one of the United Kingdom’s largest cold-formed steel designer/fabricators. By demonstrating the capabilities of BIM software and providing technical and informative videos during its creation, this project has produced two key outcomes: firstly, invaluable assistance in the transition from traditional processes to fully collaborative 3D BIM, as required of all public sector projects by the UK Government under the “Government Construction Strategy” by 2016; secondly, a demonstration of BIM’s potential not only within CFS companies, but also within the AEC sector as a whole. Given the flexibility, adaptability and interoperability of BIM software, the results indicate that BIM and its underlying ethos are key tools in the development of the industry as a whole.
Abstract:
Purpose
– Traditionally, most studies focus on institutionalized, management-driven actors to understand technology management innovation. The purpose of this paper is to argue that there is a need for research to study the nature and role of dissident non-institutionalized actors (i.e. outsourced web designers and rapid application software developers). The authors propose that, through online social knowledge sharing, non-institutionalized actors’ solution-finding tensions enable technology management innovation.
Design/methodology/approach
– A synthesis of the literature and an analysis of the data (21 interviews) provided insights in three areas of solution-finding tensions enabling management innovation. The authors frame the analysis on peripherally deviant work and the ways in which dissident non-institutionalized actors deviate from their clients’ (the client being understood as the firm) original contracted objectives.
Findings
– The findings provide insights into the productive role of solution-finding tensions in enabling opportunities for management service innovation. Furthermore, deviant practices that leverage non-institutionalized actors’ online social knowledge to fulfill customers’ requirements are not interpreted negatively, but as a positive willingness to proactively explore alternative paths.
Research limitations/implications
– The findings demonstrate the importance of dissident non-institutionalized actors in technology management innovation. However, this work is based on a single country (USA) and additional research is needed to validate and generalize the findings in other cultural and institutional settings.
Originality/value
– This paper provides new insights into the perceptions of dissident non-institutionalized actors in the practice of IT managerial decision making. The work departs from, but also extends, the previous literature, demonstrating that peripherally deviant work in solution-finding practice creates tensions, enabling management innovation between IT providers and users.
Abstract:
Risk management in software engineering has become a recognized project management practice, but it seems that not all companies are systematically applying it. At the same time, agile methods have become popular, partly because proponents claim that agile methods implicitly reduce risks due to, for example, more frequent and earlier feedback, shorter periods of development time and easier prediction of cost. Therefore, there is a need to investigate how risk management can be made usable in iterative and evolutionary software development processes. This paper investigates the gathering of empirical data on risk management from the project environment and presents a novel approach to managing risk in agile projects. Our approach is based on a prototype tool, Agile Risk Tool (ART). This tool reduces human effort in risk management by using software agents to identify, assess and monitor risk, based on input and data collected from the project environment and on a set of designated rules. As validation, data from groups of student projects were used to provide evidence of the efficacy of this approach. We demonstrate the approach and the feasibility of using a lightweight risk management tool to alert, assess and monitor risk with reduced human effort.
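As a rough illustration of the agents-plus-designated-rules idea, the sketch below scores a sprint's collected metrics against simple risk rules. The metric names, rules and thresholds are invented for illustration; ART's actual rules and data sources differ.

```python
# Illustrative rule-driven risk monitor in the spirit of ART: agents feed
# project metrics in, and designated rules flag risks. All metrics and
# thresholds below are invented examples.

RULES = [
    # (risk name, predicate over the collected project metrics)
    ("velocity drop",     lambda m: m["velocity"] < 0.7 * m["avg_velocity"]),
    ("scope creep",       lambda m: m["added_stories"] > 0.2 * m["planned_stories"]),
    ("build instability", lambda m: m["failed_builds"] / m["total_builds"] > 0.3),
]

def assess(metrics):
    """Return the names of risks whose rules fire on this sprint's data."""
    return [name for name, rule in RULES if rule(metrics)]

sprint = {"velocity": 18, "avg_velocity": 30, "added_stories": 6,
          "planned_stories": 20, "failed_builds": 4, "total_builds": 10}
print(assess(sprint))  # ['velocity drop', 'scope creep', 'build instability']
```

Flagged risks would then be surfaced to the team for monitoring, replacing the manual identification step rather than human judgment about how to respond.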
Abstract:
Objective: The aim of this study was to identify sources of anatomical misrepresentation due to the location of camera mounting, tumour motion velocity and image processing artefacts, in order to optimise the 4DCT scan protocol and improve geometrical-temporal accuracy.
Methods: A phantom with an imaging insert was driven with a sinusoidal superior-inferior motion of varying amplitude and period for 4DCT scanning. The length of a high-density cube within the insert was measured using treatment planning software to determine the accuracy of its spatial representation. Scan parameters were varied, including the tube rotation period and the cine time between reconstructed images. A CT image quality phantom was used to measure various image quality signatures under the scan parameters tested.
Results: No significant difference in spatial accuracy was found for 4DCT scans carried out using the wall-mounted or couch-mounted camera for sinusoidal target motion. Greater spatial accuracy was found for 4DCT scans carried out using a tube rotation period of 0.5 s rather than 1.0 s. The reduction in image quality when using a faster rotation speed was not enough to require an increase in patient dose.
Conclusions: 4DCT accuracy may be increased by optimising scan parameters, including choosing faster tube rotation speeds. Peak misidentification in the recorded breathing trace leads to spatial artefacts, and this risk can be reduced by using a couch-mounted infrared camera.
Advances in knowledge: This study explicitly shows that 4DCT scan accuracy is improved by scanning with a faster CT tube rotation speed.
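A back-of-the-envelope bound (our illustration, not the paper's analysis) makes the rotation-speed result plausible: during one tube rotation the target can move by roughly its peak velocity times the rotation period.

```latex
% Sinusoidal phantom motion and worst-case intra-image blur.
\[
  s(t) = A \sin\!\left(\frac{2\pi t}{T}\right), \qquad
  v_{\max} = \frac{2\pi A}{T}, \qquad
  \Delta_{\mathrm{blur}} \lesssim v_{\max}\, t_{\mathrm{rot}} .
\]
% Example (assumed values): A = 10 mm, T = 4 s gives v_max ~ 15.7 mm/s,
% so halving t_rot from 1.0 s to 0.5 s halves the worst-case blur from
% ~15.7 mm to ~7.9 mm.
```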
Abstract:
This paper describes an end-user model for a domestic pervasive computing platform formed by regular home objects. The platform does not rely on pre-planned infrastructure; instead, it exploits objects that are already available in the home and exposes their joint sensing, actuating and computing capabilities to home automation applications. We advocate an incremental process of platform formation and introduce tangible, object-like artifacts for representing important platform functions. One of those artifacts, the application pill, is a tiny object with a minimal user interface, used to carry an application, as well as to start and stop its execution and provide hints about its operational status. We also emphasize streamlining the user's interaction with the platform: the user engages any UI-capable object of their choice to configure applications, while applications issue notifications and alerts through whichever available objects can be used for that purpose. Finally, the paper briefly describes an actual implementation of the presented end-user model.
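To make the pill concept concrete, here is a tiny sketch of its role as the abstract describes it: carrying an application, starting and stopping its execution, and hinting at its status. All class and method names are invented; the paper's implementation differs.

```python
# Illustrative model of the "application pill": a tangible carrier that
# starts its application when placed into the platform and stops it when
# removed. All names here are invented for illustration.

from enum import Enum

class Status(Enum):
    STOPPED = "stopped"
    RUNNING = "running"

class Platform:
    """Stand-in for the platform formed by the home's objects."""
    def deploy(self, app):
        print(f"deploying {app}")

    def undeploy(self, app):
        print(f"undeploying {app}")

class ApplicationPill:
    def __init__(self, app):
        self.app = app                 # the carried home-automation app
        self.status = Status.STOPPED   # surfaced via the pill's minimal UI

    def insert(self, platform):
        platform.deploy(self.app)      # placing the pill starts the app
        self.status = Status.RUNNING

    def remove(self, platform):
        platform.undeploy(self.app)    # removing the pill stops the app
        self.status = Status.STOPPED

pill = ApplicationPill("night-light automation")
home = Platform()
pill.insert(home)
pill.remove(home)
```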