79 results for embedded systems software
Towards an understanding of the causes and effects of software requirements change: two case studies
Abstract:
Changes to software requirements not only pose a risk to the successful delivery of software applications but also provide an opportunity for improved usability and value. Increased understanding of the causes and consequences of change can support requirements management and also contribute to the ultimate goal of change anticipation. This paper presents the results of two case studies that address objectives arising from that goal. The first case study evaluated the potential of a change source taxonomy containing the elements ‘market’, ‘organisation’, ‘vision’, ‘specification’, and ‘solution’ to provide a meaningful basis for change classification and measurement. The second case study investigated whether the requirements attributes of novelty, complexity, and dependency correlated with requirements volatility. While insufficiency of data in the first case study precluded an investigation of changes arising from the ‘market’ source, for the remaining change sources the results indicate significant differences in cost, value to the customer, and management considerations. Findings show that higher-cost and higher-value changes arose more often from ‘organisation’ and ‘vision’ sources; these changes also generally involved the co-operation of more stakeholder groups and were considered less controllable than changes arising from the ‘specification’ or ‘solution’ sources. Results from the second case study indicate that only requirements dependency is consistently correlated with volatility, and that changes coming from each change source affect different groups of requirements. We conclude that the taxonomy can provide a meaningful means of change classification, but that a single requirement attribute is insufficient for change prediction. A theoretical causal account of requirements change is drawn from the implications of the combined results of the two case studies.
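The second case study's attribute-volatility analysis is easy to illustrate. The sketch below computes a rank correlation between a per-requirement dependency count and the number of changes each requirement received; the data are synthetic and the use of SciPy is an assumption for illustration, not the study's actual dataset or tooling.

```python
# Illustrative rank-correlation check in the spirit of the second case
# study: does a requirement's dependency count track its volatility?
# The numbers below are synthetic, not the study's data.
from scipy.stats import spearmanr

dependency = [1, 3, 2, 5, 4, 5, 2, 1]   # dependencies per requirement
volatility = [0, 2, 1, 6, 3, 5, 1, 0]   # recorded changes per requirement

rho, p_value = spearmanr(dependency, volatility)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```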
Abstract:
Sphere Decoding (SD) is a highly effective detection technique for Multiple-Input Multiple-Output (MIMO) wireless communications receivers, offering quasi-optimal accuracy with relatively low computational complexity compared to the ideal maximum likelihood (ML) detector. Despite this, the computational demands of even low-complexity SD variants, such as Fixed Complexity SD (FSD), remain such that implementation on modern software-defined network equipment is a highly challenging process, and indeed real-time solutions for MIMO systems such as 4 × 4 16-QAM 802.11n are unreported. This paper overcomes this barrier. By exploiting large-scale networks of fine-grained software-programmable processors on Field Programmable Gate Array (FPGA), a series of unique SD implementations are presented, culminating in the only single-chip, real-time quasi-optimal SD for 4 × 4 16-QAM 802.11n MIMO. Furthermore, it demonstrates that the high-performance software-defined architectures which enable these implementations exhibit cost comparable to dedicated circuit architectures.
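For readers new to the technique, the sketch below is a minimal depth-first sphere decoder in Python. It implements the classic QR-based tree search rather than the fixed-complexity (FSD) variant the paper accelerates, and the 4 × 4 16-QAM usage example with its noise level is an illustrative assumption.

```python
import numpy as np

def sphere_decode(H, y, constellation):
    """Depth-first sphere decoder: finds the symbol vector x minimising
    ||y - Hx||^2 by pruning a QR-decomposition-based search tree."""
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    n = H.shape[1]
    best = {"dist": np.inf, "x": None}

    def search(level, partial, dist):
        if dist >= best["dist"]:          # prune: outside current sphere
            return
        if level < 0:                     # leaf: complete candidate vector
            best["dist"], best["x"] = dist, partial.copy()
            return
        for s in constellation:           # try each symbol at this antenna
            partial[level] = s
            # interference from symbols already fixed deeper in the tree
            interf = R[level, level + 1:] @ partial[level + 1:]
            inc = abs(z[level] - R[level, level] * s - interf) ** 2
            search(level - 1, partial, dist + inc)

    search(n - 1, np.zeros(n, dtype=complex), 0.0)
    return best["x"]

# Illustrative 4x4 16-QAM usage at low noise
qam16 = np.array([a + 1j * b for a in (-3, -1, 1, 3)
                  for b in (-3, -1, 1, 3)]) / np.sqrt(10)
H = (np.random.randn(4, 4) + 1j * np.random.randn(4, 4)) / np.sqrt(2)
x = qam16[np.random.randint(16, size=4)]
y = H @ x + 0.05 * (np.random.randn(4) + 1j * np.random.randn(4))
print(np.allclose(sphere_decode(H, y, qam16), x))
```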
Abstract:
The emergence of programmable logic devices as processing platforms for digital signal processing applications poses challenges concerning rapid implementation and high-level optimization of algorithms on these platforms. This paper describes Abhainn, a rapid implementation methodology and tool suite for translating an algorithmic expression of the system to a working implementation on a heterogeneous multiprocessor/field programmable gate array platform, or a standalone system-on-programmable-chip solution. Two particular focuses for Abhainn are the automated but configurable realisation of inter-processor communication fabrics, and the establishment of novel dedicated hardware component design methodologies allowing algorithm-level transformation for system optimization. This paper outlines the approaches employed in both these areas.
Abstract:
Just as conventional institutions are organisational structures for coordinating the activities of multiple interacting individuals, electronic institutions provide a computational analogue for coordinating the activities of multiple interacting software agents. In this paper, we argue that open multi-agent systems can be effectively designed and implemented as electronic institutions, for which we provide a comprehensive computational model. More specifically, the paper provides an operational semantics for electronic institutions, specifying the essential data structures, the state representation and the key operations necessary to implement them. We specify the agent workflow structure that is the core component of such electronic institutions and particular instantiations of knowledge representation languages that support the institutional model. In so doing, we provide the first formal account of the electronic institution concept in a rigorous and unambiguous way.
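As a rough illustration of what an operational model of this kind involves, the sketch below renders scenes and role enactments as Python data structures. The names and fields are assumptions made for exposition; they are not the data structures the paper actually specifies.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    """One institutional activity, modelled as a labelled transition
    system over conversation states (an illustrative simplification)."""
    name: str
    states: set[str]
    initial: str
    transitions: dict[tuple[str, str], str]   # (state, illocution) -> state

@dataclass
class InstitutionState:
    """Snapshot of a running electronic institution."""
    scenes: dict[str, Scene] = field(default_factory=dict)
    enactments: dict[str, tuple[str, str]] = field(default_factory=dict)
    # maps agent name -> (role, scene) it currently plays

    def admit(self, agent: str, role: str, scene: str) -> None:
        """Record an agent taking up a role in a scene."""
        if scene not in self.scenes:
            raise ValueError(f"unknown scene {scene!r}")
        self.enactments[agent] = (role, scene)
```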
Abstract:
With the rapid expansion of the internet and the increasing demand on Web servers, many techniques have been developed to overcome the servers' hardware performance limitations. Server mirroring is one such technique, in which a number of servers carrying the same "mirrored" set of services are deployed. Client access requests are then distributed over the set of mirrored servers to even out the load. In this paper we present a generic reference software architecture for load balancing over mirrored web servers. The architecture was designed adopting the latest NaSr architectural style [1] and described using the ADLARS [2] architecture description language. With minimal effort, different tailored product architectures can be generated from the reference architecture to serve different network protocols and server operating systems. An example product system is described and a sample Java implementation is presented.
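The paper's sample implementation is in Java and is not reproduced in the abstract, but the dispatch idea behind mirrored-server load balancing is easy to sketch. Below is a minimal round-robin balancer in Python; the policy choice, server names, and request strings are illustrative assumptions, not the paper's reference architecture.

```python
import itertools

class RoundRobinBalancer:
    """Distribute incoming requests over a pool of mirrored servers.
    Round-robin is one simple policy; a reference architecture would
    abstract over the dispatch strategy."""
    def __init__(self, servers):
        self._pool = itertools.cycle(servers)

    def route(self, request):
        server = next(self._pool)      # pick the next mirror in rotation
        return server, request

balancer = RoundRobinBalancer(["mirror-a:80", "mirror-b:80", "mirror-c:80"])
for req in ["GET /", "GET /docs", "GET /img/logo.png"]:
    print(balancer.route(req))
```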
Abstract:
We propose a novel admission control policy for database queries. Our methodology uses system measurements of CPU utilization and query backlogs to determine interference between queries executing on the same database server. Query interference may arise due to concurrent access to hardware and software resources and can affect performance in positive and negative ways. Specifically, our admission control considers the mix of jobs in service and prioritizes the query classes that consume CPU resources most efficiently. The policy ignores I/O subsystems and is therefore highly appropriate for in-memory databases. We validate our approach in trace-driven simulation and show improved query slowdowns and throughputs compared to first-come first-served and shortest-expected-processing-time-first scheduling. Simulation experiments are parameterized from system traces of an SAP HANA in-memory database installation with TPC-H type workloads.
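As a sketch of how such a policy might look in code, the class below admits the backlogged query class with the best measured CPU efficiency whenever utilization is under a target, and defers everything otherwise. The threshold, the efficiency metric, and all names are assumptions for illustration, not the policy the paper evaluates.

```python
from collections import deque

class MixAwareAdmission:
    """Admission control sketch: prefer the query class that converts
    CPU time into completed queries most efficiently."""
    def __init__(self, cpu_target=0.85):
        self.cpu_target = cpu_target   # illustrative utilization ceiling
        self.backlogs = {}             # class name -> deque of queries
        self.efficiency = {}           # class name -> measured efficiency

    def enqueue(self, qclass, query):
        self.backlogs.setdefault(qclass, deque()).append(query)

    def next_admission(self, cpu_utilization):
        """Return the next query to admit, or None to defer."""
        if cpu_utilization >= self.cpu_target:
            return None                # server saturated: admit nothing
        ready = [c for c, q in self.backlogs.items() if q]
        if not ready:
            return None
        best = max(ready, key=lambda c: self.efficiency.get(c, 0.0))
        return self.backlogs[best].popleft()
```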
Abstract:
SPHERE (Stormont Parliamentary Hansards: Embedded in Research and Education) was a JISC-funded project based at King’s College London and Queen’s University Belfast, working in partnership with the Northern Ireland Assembly Library and the NIA Official Report (Hansard). Its purpose was to assess the use, value and impact of The Stormont Papers digital resource, and to use the results of this assessment to make recommendations for a series of practical approaches to embedding the resource within teaching, learning and research among the wider user community. The project began in November 2010 and concluded in April 2010.
A series of formal reports on the project are published by JISC online at http://www.jisc.ac.uk/whatwedo/programmes/digitisation/impactembedding/sphere.aspx
SPHERE Impact analysis summary
SPHERE interviews report
SPHERE Outreach use case
SPHERE research use case
SPHERE teaching use case
SPHERE web survey report
SPHERE web analysis
Abstract:
In essence, optimal software engineering means creating the right product, through the right process, to the overall satisfaction of everyone involved. Adopting the agile approach to software development appears to have helped many companies make substantial progress towards that goal. The purpose of this paper is to clarify that contribution, drawing on comparative survey information gathered in 2010 and 2012. The surveys were undertaken in software development companies across Northern Ireland. The paper describes the design of the surveys and discusses optimality in relation to the results obtained. Both surveys aimed to achieve comprehensive coverage of a single region rather than rely on a voluntary sample. The main outcome from the work is a collection of insights into the nature and advantages of agile development, suggesting how further progress towards optimality might be achieved.
Abstract:
The agile model of software development has been mainstream for several years, and is now in a phase where its principles and practices are maturing. The purpose of this paper is to describe the results of an industry survey aimed at understanding how maturation is progressing. The survey was taken across 40 software development companies in Northern Ireland at the beginning of 2012. The paper describes the design of the survey and examines maturity by comparing the results obtained in 2012 with those from a study of agile adoption in the same region in 2010. Both surveys aimed to achieve comprehensive coverage of a single area rather than rely on a voluntary sample. The main outcome from the work is a collection of ‘insights’ into the nature and practice of agile development, the main two of which are reported in this paper.
Abstract:
The increased complexity and interconnectivity of Supervisory Control and Data Acquisition (SCADA) systems in the Smart Grid has exposed them to a wide range of cyber-security issues, and there are a multitude of potential access points for cyber attackers. This paper presents a SCADA-specific cyber-security test-bed which contains SCADA software and communication infrastructure. This test-bed is used to investigate an Address Resolution Protocol (ARP) spoofing based man-in-the-middle attack. Finally, the paper proposes a future work plan which focuses on applying intrusion detection and prevention technology to address cyber-security issues in SCADA systems.
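The ARP spoofing attack the test-bed investigates exploits the fact that hosts cache unauthenticated ARP replies. A minimal defensive sketch, in the spirit of the intrusion detection work the paper proposes, is shown below; it assumes the Scapy packet library and simply alerts when an IP address's claimed MAC changes, a common spoofing symptom, rather than reproducing the paper's actual detection method.

```python
# Minimal ARP-spoofing detector sketch (requires scapy and root privileges).
from scapy.all import ARP, sniff

ip_to_mac = {}                         # last MAC seen claiming each IP

def check_arp(pkt):
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:    # op 2 = ARP reply ("is-at")
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        known = ip_to_mac.get(ip)
        if known is not None and known != mac:
            print(f"ALERT: {ip} moved from {known} to {mac} (possible spoof)")
        ip_to_mac[ip] = mac

sniff(filter="arp", prn=check_arp, store=0)
```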
Abstract:
This paper describes the ParaPhrase project, a new 3-year targeted research project funded under EU Framework 7 Objective 3.4 (Computer Systems), starting in October 2011. ParaPhrase aims to follow a new approach to introducing parallelism using advanced refactoring techniques coupled with high-level parallel design patterns. The refactoring approach will use these design patterns to restructure programs defined as networks of software components into other forms that are more suited to parallel execution. The programmer will be aided by high-level cost information that will be integrated into the refactoring tools. The implementation of these patterns will then use a well-understood algorithmic skeleton approach to achieve good parallelism. A key ParaPhrase design goal is that parallel components should match heterogeneous architectures, defined, for example, in terms of CPU/GPU combinations. In order to achieve this, the ParaPhrase approach will map components at link time to the available hardware, and will then re-map them during program execution, taking account of multiple applications, changes in hardware resource availability, the desire to reduce communication costs, and so on. In this way, we aim to develop a new approach to programming that will be able to produce software that can adapt to dynamic changes in the system environment. Moreover, by using a strong component basis for parallelism, we can achieve potentially significant gains in terms of reducing sharing at a high level of abstraction, and so in reducing or even eliminating the costs that are usually associated with cache management, locking, and synchronisation.
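The pattern-based approach is easiest to see with the simplest skeleton, the task farm: apply a worker function to many inputs in parallel. The abstract does not name ParaPhrase's target languages, so the Python rendering below is purely illustrative of the form a refactoring tool might produce.

```python
# A minimal "farm" skeleton sketch: the kind of form a refactoring tool
# would rewrite a sequential map into. Rendering in Python is an
# illustrative assumption, not ParaPhrase's implementation.
from concurrent.futures import ProcessPoolExecutor

def farm(worker, inputs, nworkers=4):
    """Task-farm skeleton: apply `worker` to each input in parallel."""
    with ProcessPoolExecutor(max_workers=nworkers) as pool:
        return list(pool.map(worker, inputs))

def expensive(x):
    return sum(i * i for i in range(x))

if __name__ == "__main__":             # needed for process pools on some OSes
    print(farm(expensive, [10_000, 20_000, 30_000]))
```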
Abstract:
The research aims to carry out a detailed analysis of the loads applied by ambulance workers when loading and unloading ambulance stretchers. The forces required of the ambulance workers for each system are measured using a load cell in a force-handle arrangement. The process of loading and unloading is video recorded for all the systems to register the posture of the ambulance workers at different stages of the process. The postures and forces exerted by the ambulance workers are analyzed using biomechanical assessment software to examine whether the workloads at any stage of the process are harmful. Kinetic analysis of each stretcher loading system is performed. Comparison of the kinetic analysis and the measurements shows very close agreement in most cases. The force analysis results are evaluated against derived failure criteria. The evaluation is extended to a biomechanical failure analysis of the ambulance worker's lower back using 3DSSPP software developed at the Center for Ergonomics at the University of Michigan. The critical tasks of each ambulance worker during the loading and unloading operations for each system are identified. Design recommendations are made to reduce the forces exerted, based on loading requirements from the kinetic analysis.
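The kind of screening the biomechanical evaluation performs can be illustrated with a toy static calculation: the moment about the lower back is the hand force times its horizontal distance from the spine. All numbers and the screening limit below are illustrative assumptions, not values from the study or from 3DSSPP.

```python
def lumbar_moment(hand_force_n: float, horiz_dist_m: float) -> float:
    """Static moment about the lower back from a hand load, in N*m."""
    return hand_force_n * horiz_dist_m

force = 400.0    # N: one worker's assumed share of a loaded stretcher
reach = 0.45     # m: assumed horizontal distance from L5/S1 to the hands
limit = 150.0    # N*m: illustrative screening threshold, not a standard
moment = lumbar_moment(force, reach)
print(f"{moment:.0f} N*m ->", "exceeds" if moment > limit else "within",
      "the illustrative limit")
```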