813 results for Software Development – metrics


Relevance: 30.00%

Abstract:

Magnetic fields are used in a number of processes related to the extraction of metals, the production of alloys and the shaping of metal components. Computational techniques have an increasingly important role to play in the simulation of such processes, since it is often difficult or very costly to conduct experiments in the high-temperature conditions encountered, and the complex interaction of fluid flow, heat transfer and magnetic fields means simple analytic models are often far removed from reality. In this paper an overview of the computational activity in this area at the University of Greenwich over the past ten years is given. The overview is given from the point of view of the modeller and, within the space limitations imposed by the format, it covers the numerical methods used and attempts at validation against experiments or analytic procedures; it highlights successes, but also some failures. A broad range of models is covered in the review (and accompanying lecture), used to simulate (a) AC field applications: induction melting, magnetic confinement and levitation, and casting; and (b) DC field applications such as arc welding and aluminium electrolysis. Most of these processes involve phase change of the metal (melting or solidification), the presence of a dynamic free surface and turbulent flow. These issues affect accuracy and need to be addressed by the modeller.

Relevance: 30.00%

Abstract:

Direct chill (DC) casting is a core primary process in the production of aluminum ingots. However, its operational optimization is still under investigation with regard to a number of features, one of which is the issue of curvature at the base of the ingot. Analysis of these features requires a computational model of the process that accounts for the fluid flow, heat transfer, solidification phase change, and thermomechanical analysis. This article describes an integrated approach to the modeling of all the preceding phenomena and their interactions.

Relevance: 30.00%

Abstract:

In this paper we propose a generalisation of the k-nearest neighbour (k-NN) retrieval method based on an error function using distance metrics in the solution and problem space. It is an interpolative method which is proposed to be effective for sparse case bases. The method applies equally to nominal, continuous and mixed domains, and does not depend upon an embedding n-dimensional space. In continuous Euclidean problem domains, the method is shown to be a generalisation of Shepard's interpolation method. We term the retrieval algorithm the Generalised Shepard Nearest Neighbour (GSNN) method. A novel aspect of GSNN is that it provides a general method for interpolation over nominal solution domains. The performance of the retrieval method is examined with reference to the Iris classification problem, and to a simulated sparse nominal value test problem. The introduction of a solution-space metric is shown to outperform conventional nearest-neighbour methods on sparse case bases.
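A minimal sketch of the GSNN idea described above, assuming the prediction is the candidate solution value that minimises a Shepard-weighted error over the k nearest cases, with metrics in both the problem and solution spaces; the function names, the error form and the 0/1 nominal metric are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gsnn_predict(query, cases, solutions, candidate_values,
                 d_problem, d_solution, k=5, p=2):
    """Generalised Shepard Nearest Neighbour (illustrative sketch).

    Returns the candidate solution value minimising a Shepard-weighted
    error over the k nearest cases, using a metric in the solution
    space as well as in the problem space.
    """
    # Distances from the query to every stored case (problem space).
    dists = np.array([d_problem(query, c) for c in cases])
    nearest = np.argsort(dists)[:k]

    # Inverse-distance (Shepard) weights; guard against exact hits.
    w = 1.0 / np.maximum(dists[nearest], 1e-12) ** p
    w /= w.sum()

    # Choose the value with minimum weighted solution-space error.
    def error(v):
        return sum(wi * d_solution(v, solutions[i]) ** 2
                   for wi, i in zip(w, nearest))
    return min(candidate_values, key=error)

# With a 0/1 metric between nominal labels, GSNN reduces to a
# distance-weighted vote; a richer solution-space metric changes that.
euclid = lambda a, b: np.linalg.norm(np.asarray(a) - np.asarray(b))
discrete = lambda a, b: 0.0 if a == b else 1.0
```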

Relevance: 30.00%

Abstract:

The newly formed Escape and Evacuation Naval Authority regulates the provision of abandonment equipment and procedures for all Ministry of Defence vessels. As such, it assures that access routes on board are evaluated early in the design process to maximize their efficiency and to eliminate, as far as possible, any congestion that might occur during escape. This analysis can be undertaken using a computer-based simulation that, for given escape scenarios, replicates the layout of the vessel and the interactions between each individual and the ship structure. One such software tool that facilitates this type of analysis is maritimeEXODUS. This tool, through large-scale testing and validation, emulates human shipboard behaviour during emergency scenarios; however, it is largely based around the behaviour of civilian passengers and the fixtures and fittings of merchant vessels. Hence there existed a clear requirement to understand the behaviour of well-trained naval personnel, as opposed to civilian passengers, and to be able to model the fixtures and fittings that are exclusive to warships, thus allowing improvements to both maritimeEXODUS and other software products. Human-factors trials using the Royal Navy training facilities at Whale Island, Portsmouth, were recently undertaken to collect data that improve our understanding of the aforementioned differences. It is hoped that these data will form the basis of a long-term improvement package that will provide global validation of these simulation tools and assist in the development of specific escape and evacuation standards for warships. © 2005: Royal Institution of Naval Architects.

Relevance: 30.00%

Abstract:

Parallel processing techniques have been used in the past to provide high-performance computing resources for activities such as fire-field modelling. This has traditionally been achieved using specialized hardware and software, the expense of which would be difficult to justify for many fire engineering practices. In this article we demonstrate how typical office-based PCs attached to a Local Area Network have the potential to offer the benefits of parallel processing with minimal costs associated with the purchase of additional hardware or software. It was found that good speedups could be achieved on homogeneous networks of PCs: for example, a problem composed of ~100,000 cells would run 9.3 times faster on a network of twelve 800 MHz PCs than on a single 800 MHz PC. It was also found that a network of eight 3.2 GHz Pentium 4 PCs would run 7.04 times faster than a single 3.2 GHz Pentium computer. A dynamic load balancing scheme was also devised to allow the effective use of the software on heterogeneous PC networks. This scheme also ensured that the impact of the parallel processing task on other users of the network was minimized.
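A minimal sketch of one way such dynamic load balancing could work, redistributing mesh cells in proportion to each PC's measured throughput; the proportional rule and all names are assumptions for illustration, not the authors' published scheme.

```python
def rebalance(cells_total, measured_rates):
    """Redistribute mesh cells in proportion to each node's measured
    throughput (cells processed per second), so faster or less-loaded
    office PCs receive correspondingly larger partitions."""
    total_rate = sum(measured_rates)
    shares = [cells_total * r / total_rate for r in measured_rates]
    # Round down, then hand the leftover cells to the nodes with the
    # largest fractional remainders so the total is conserved.
    alloc = [int(s) for s in shares]
    for i in sorted(range(len(shares)),
                    key=lambda i: shares[i] - alloc[i],
                    reverse=True)[:cells_total - sum(alloc)]:
        alloc[i] += 1
    return alloc

# e.g. the reported ~100,000-cell problem on twelve equal 800 MHz PCs;
# the quoted speedup of 9.3 implies a parallel efficiency of 9.3/12 ≈ 78%.
print(rebalance(100_000, [1.0] * 12))
```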

Relevance: 30.00%

Abstract:

This paper examines different ways of measuring similarity between software design models for the purpose of software reuse. Current approaches to this problem are discussed, and a set of suitable similarity metrics is proposed and evaluated. Work on the optimisation of weights to increase the competence of a CBR system is presented. A graph matching algorithm and associated metrics capturing the structural similarity between UML class diagrams are presented and demonstrated through an example case.
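A minimal sketch of a weighted similarity metric over UML class features, of the general kind the paper evaluates; the chosen features, the weights and the Jaccard aggregation are illustrative assumptions.

```python
def class_similarity(a, b, weights):
    """Weighted aggregate similarity between two class descriptions
    (illustrative sketch; tuning the weights is what a CBR system's
    weight-optimisation step would do)."""
    def jaccard(x, y):
        x, y = set(x), set(y)
        return len(x & y) / len(x | y) if x | y else 1.0

    features = {
        "name":       1.0 if a["name"] == b["name"] else 0.0,
        "attributes": jaccard(a["attributes"], b["attributes"]),
        "operations": jaccard(a["operations"], b["operations"]),
    }
    total = sum(weights.values())
    return sum(weights[f] * features[f] for f in features) / total

c1 = {"name": "Account", "attributes": {"id", "balance"},
      "operations": {"deposit", "withdraw"}}
c2 = {"name": "Account", "attributes": {"id", "owner"},
      "operations": {"deposit", "close"}}
print(class_similarity(c1, c2, {"name": 2.0, "attributes": 1.0,
                                "operations": 1.0}))
```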

Relevance: 30.00%

Abstract:

Software metrics are the key tool in software quality management. In this paper, we propose to use support vector machines for regression, applied to software metrics, to predict software quality. In experiments we compare this method with other regression techniques such as Multivariate Linear Regression, Conjunctive Rule and Locally Weighted Regression. Results on the benchmark dataset MIS, using mean absolute error and correlation coefficient as regression performance measures, indicate that support vector machine regression is a promising technique for software quality prediction. In addition, our investigation of PCA-based metrics extraction shows that using only the first few principal components (PCs) we can still obtain relatively good performance.
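A minimal sketch of this pipeline using scikit-learn: support vector regression on software metrics, optionally preceded by PCA to retain only the first few principal components. The synthetic data merely stands in for the MIS benchmark, which is not bundled with scikit-learn, and the shapes and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(390, 11))          # stand-in: 11 metrics per module
y = np.abs(X[:, :3].sum(axis=1)) + rng.normal(scale=0.5, size=390)

for n_pc in (None, 3):                  # all features vs. first 3 PCs
    steps = [StandardScaler()]
    if n_pc:
        steps.append(PCA(n_components=n_pc))
    steps.append(SVR(kernel="rbf", C=10.0, epsilon=0.1))
    mae = -cross_val_score(make_pipeline(*steps), X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"PCs={n_pc or 'all'}: MAE={mae:.3f}")
```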

Relevance: 30.00%

Abstract:

This paper examines different ways of measuring similarity between software design models for Case-Based Reasoning (CBR), to facilitate reuse of software design and code. The paper considers structural and behavioural aspects of similarity between software design models. Similarity metrics for comparing static class structures are defined and discussed. A graph representation of UML class diagrams and corresponding similarity measures are defined. A full-search graph matching algorithm for measuring the structural similarity of diagrams, based on the identification of the Maximum Common Subgraph (MCS), is presented. Finally, a simple evaluation of the approach is presented and discussed.
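A minimal brute-force sketch of MCS-based similarity, assuming the common normalisation by the size of the larger graph; it is exponential, suitable only for tiny labelled graphs, and not the paper's full-search algorithm.

```python
from itertools import combinations, permutations

def mcs_size(g1, g2):
    """Size (in nodes) of the maximum common induced subgraph of two
    small labelled graphs, by brute force. Graphs are dicts mapping a
    node to (label, set_of_neighbours). Illustration only."""
    n1, n2 = list(g1), list(g2)
    for k in range(min(len(n1), len(n2)), 0, -1):
        for sub in combinations(n1, k):
            for img in permutations(n2, k):
                m = dict(zip(sub, img))
                # Labels must agree and edges must correspond exactly.
                if all(g1[a][0] == g2[m[a]][0] for a in sub) and all(
                        (b in g1[a][1]) == (m[b] in g2[m[a]][1])
                        for a, b in combinations(sub, 2)):
                    return k
    return 0

def mcs_similarity(g1, g2):
    # One common normalisation: |MCS| over the size of the larger graph.
    return mcs_size(g1, g2) / max(len(g1), len(g2))

d1 = {"Order": ("class", {"Item"}), "Item": ("class", {"Order"})}
d2 = {"Order": ("class", {"Line"}), "Line": ("class", {"Order"}),
      "Invoice": ("class", set())}
print(mcs_similarity(d1, d2))   # 2/3: the two-class association matches
```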

Relevance: 30.00%

Abstract:

The role that intra-organizational knowledge exchanges play in innovation processes has been widely acknowledged in the organizational literature. This paper contributes to the understanding of the specific configurations that knowledge networks assume during different phases of radical and incremental innovation processes. The case study we selected is a FLOSS (Free/Libre Open Source Software) community consisting of 233 developers committed to the development of a web browser application since November 2002. By harvesting the community's mailing list, official blog and code repository, we investigate the patterns of knowledge exchange and the individual contributions of its developers. We measure structural cohesion and compare global and local network properties at different points in time. Preliminary results show that phases of radical and incremental innovation are associated with specific configurations of the knowledge network as a whole, as well as with different network positions of the core developers of the software.
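A minimal sketch, using networkx, of this kind of longitudinal analysis: build a knowledge-exchange graph per time window from mailing-list reply pairs, then compare global and local properties across windows. The message fields and the particular choice of measures are illustrative assumptions.

```python
import networkx as nx

def window_graph(messages, start, end):
    """Knowledge-exchange graph for one time window, built from
    (sender, replied_to) pairs harvested from a mailing list."""
    g = nx.Graph()
    g.add_edges_from((m["sender"], m["replied_to"])
                     for m in messages
                     if start <= m["date"] < end and m["replied_to"])
    return g

def summarise(g):
    dc = nx.degree_centrality(g)
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return {
        "developers": g.number_of_nodes(),
        "density": nx.density(g),
        "clustering": nx.average_clustering(g),
        # Structural cohesion of the largest component: the minimum
        # number of members whose removal would disconnect it.
        "cohesion": nx.node_connectivity(giant),
        # Local view: the five most central developers in this window.
        "core": sorted(dc, key=dc.get, reverse=True)[:5],
    }
```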

Relevance: 30.00%

Abstract:

This paper presents novel collaboration methods implemented using a centralized client/server product development integration architecture, and a decentralized peer-to-peer network for smaller and larger companies, using open source solutions. The product development integration architecture has been developed for the integration of disparate technologies and software systems, for the benefit of collaborative work teams in design and manufacturing. This will facilitate the communication of early design and product development within a distributed and collaborative environment. The novelty of this work is the introduction of an 'out-of-box' concept, which provides a standard framework and deploys it using a proprietary state-of-the-art product lifecycle management (PLM) system. The term 'out-of-box' means to modify the product development and business processes to suit the technologies rather than vice versa. The key business benefits of adopting such an approach are a rapidly reconfigurable network and minimal requirements for software customization, avoiding system instability.

Relevance: 30.00%

Abstract:

It has been proposed that the field of appropriate technology (AT), comprising small-scale, energy-efficient and low-cost solutions, can be of tremendous assistance in many sustainable development challenges, such as food and water security, health, shelter, education and work opportunities. Unfortunately, there has not yet been a significant uptake of AT by organizations, researchers, policy makers or the mainstream public working in the many areas of the development sector. Some of the biggest barriers to higher AT engagement include: 1) AT being perceived as inferior or 'poor person's technology', 2) questions of technological robustness, design, fit and transferability, 3) funding, 4) institutional support, as well as 5) general barriers associated with tackling rural poverty. With the rise of information and communication technologies (ICTs) for online networking and knowledge sharing, the possibilities of tapping into collaborative open-access and open-source AT are growing, and so is the prospect for collective poverty-reducing strategies, the enhancement of entrepreneurship, communications, education and the diffusion of life-changing technologies. In short, the same collaborative philosophy employed in the success of open source software can be applied to the hardware design of technologies to improve sustainable development efforts worldwide. To analyze current barriers to open source appropriate technology (OSAT) and explore opportunities to overcome such obstacles, a series of interviews with researchers and organizations working in the field of AT was conducted. The results of the interviews confirmed the majority of the barriers identified in the literature, but also revealed that the most pressing problem for organizations and researchers currently working in the field of AT is the need for much better communication and collaboration, to share knowledge and resources and to work in partnership. In addition, the interviews showed a general receptiveness to the principles of collaborative innovation and open source at the ground level. A much greater focus on networking, collaboration, demand-led innovation, community participation, and the inclusion of educational institutions through student involvement could be of significant help in building the necessary knowledge base, networks and critical-mass exposure for the growth of appropriate technology.

Relevance: 30.00%

Abstract:

This paper presents a multi-language framework for FPGA hardware development which aims to satisfy the dual requirements of high-level hardware design and efficient hardware implementation. The central idea of this framework is the integration of different hardware languages in a way that harnesses the best features of each. This is illustrated in this paper by the integration of two hardware languages: HIDE, a structured hardware language which provides more abstract and elegant hardware descriptions and compositions than are possible in traditional hardware description languages such as VHDL or Verilog, and Handel-C, an ANSI C-like hardware language which allows software and hardware engineers alike to target FPGAs from high-level algorithmic descriptions. On the one hand, HIDE has proven very successful in the description and generation of highly optimised parameterisable FPGA circuits from geometric descriptions. On the other hand, Handel-C has proven very successful in the rapid design and prototyping of FPGA circuits from algorithmic application descriptions. The proposed integrated framework hence harnesses HIDE for the generation of highly optimised circuits for the regular parts of algorithms, while Handel-C is used as a top-level design language from which HIDE functionality is dynamically invoked. The overall message of this paper is that there need not be an exclusive choice between different hardware design flows; rather, an integrated framework in which different design flows can seamlessly interoperate should be adopted. Although the idea might seem simple prima facie, it could have serious implications for the design of future generations of hardware languages.

Relevance: 30.00%

Abstract:

This paper is believed to be the first documented account of a full adoption of lean by a software company. Lean techniques were devised by Toyota and other manufacturers over the last 50 years. The techniques are termed lean because they require less resource to produce more product, at exceptional quality. Lean ideas have also been successful in service industries and product development. Applying lean to software has been advocated for over 10 years. Timberline, Inc. started its lean initiative in spring 2001, and this paper records the company's journey, results and lessons learned up to fall 2003. This case study demonstrates that lean thinking can work successfully for software developers. It also indicates that the extensive lean literature is a valuable source of new ideas for software engineering.

Relevance: 30.00%

Abstract:

Exam timetabling is one of the most important administrative activities that take place in academic institutions. In this paper we present a critical discussion of the research on exam timetabling in the last decade or so. The last ten years have seen an increased level of attention on this important topic, with a range of significant contributions to the scientific literature in terms of both theoretical and practical aspects. The main aim of this survey is to highlight the new trends and key research achievements of the last decade. We also aim to outline a range of relevant and important research issues and challenges that have been generated by this body of work.

We first define the problem and review previous survey papers. Algorithmic approaches are then classified and discussed. These include early techniques (e.g. graph heuristics) and state-of-the-art approaches including meta-heuristics, constraint-based methods, multi-criteria techniques, hybridisations, and recent new trends concerning neighbourhood structures, which are motivated by raising the generality of the approaches. Summarising tables are presented to provide an overall view of these techniques. We discuss some issues in decomposition techniques, system tools and languages, models and complexity. We also present and discuss some important issues which have come to light concerning the public benchmark exam timetabling data. Different versions of problem datasets with the same name have been circulating in the scientific community over the last ten years, which has generated a significant amount of confusion. We clarify the situation and present a renaming of the widely studied datasets to avoid future confusion. We also highlight which research papers have dealt with which dataset. Finally, we draw upon our discussion of the literature to present a (non-exhaustive) range of potential future research directions and open issues in exam timetabling research.
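As an illustration of the early graph-heuristic techniques the survey classifies, a minimal sketch of largest-degree-first graph colouring applied to exam timetabling: exams are vertices, an edge joins two exams sharing at least one student, and colours are timeslots. The data shapes are illustrative assumptions.

```python
from itertools import combinations

def timetable(enrolments):
    """enrolments: dict mapping student -> set of exam names.
    Returns a conflict-free exam -> timeslot assignment."""
    # Build the conflict graph: exams clash if a student sits both.
    conflicts = {e: set() for exams in enrolments.values() for e in exams}
    for exams in enrolments.values():
        for a, b in combinations(exams, 2):
            conflicts[a].add(b)
            conflicts[b].add(a)
    # Greedily colour exams in decreasing order of conflict degree.
    slot = {}
    for exam in sorted(conflicts, key=lambda e: len(conflicts[e]),
                       reverse=True):
        used = {slot[n] for n in conflicts[exam] if n in slot}
        slot[exam] = next(s for s in range(len(conflicts)) if s not in used)
    return slot

print(timetable({"s1": {"maths", "physics"},
                 "s2": {"maths", "chemistry"},
                 "s3": {"physics", "chemistry"}}))
```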