43 results for software-defined networking (SDN)
Abstract:
In this paper we propose a case-base reduction technique which uses a metric defined on the solution space. The technique utilises the Generalised Shepard Nearest Neighbour (GSNN) algorithm to estimate nominal or real-valued solutions in case bases with solution-space metrics. An overview of GSNN and a generalised reduction technique, which subsumes some existing decremental methods such as the Shrink algorithm, are presented. The reduction technique is expressed in terms of a measure of each case's importance to the predictive power of the case base. Trial tests are performed on two case bases of different kinds, with several metrics proposed for the solution space. The tests show that GSNN can outperform standard nearest-neighbour methods on this set. Further test results show that a case-removal order based on a GSNN error function can produce a sparse case base with good predictive power.
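The core of a GSNN-style estimate can be sketched as follows. This is a minimal illustration only, assuming inverse-distance (Shepard) weighting over the k nearest cases and selection of the candidate solution that minimises a weighted solution-space cost; the function names and the k and p parameters are our own, not the paper's.

def gsnn_predict(query, cases, d_prob, d_sol, candidates, k=5, p=2):
    """Estimate a solution for `query` from a case base.

    cases      -- list of (problem, solution) pairs
    d_prob     -- metric on the problem space
    d_sol      -- metric on the solution space
    candidates -- candidate solutions to score (e.g. all solutions in the base)
    """
    # k nearest cases in the problem space
    nearest = sorted(cases, key=lambda c: d_prob(query, c[0]))[:k]
    # inverse-distance (Shepard) weights
    weights = [1.0 / (d_prob(query, prob) ** p + 1e-12) for prob, _ in nearest]
    # choose the candidate whose weighted distance, in the solution-space
    # metric, to the neighbours' solutions is smallest
    def cost(s):
        return sum(w * d_sol(s, sol) ** 2 for w, (_, sol) in zip(weights, nearest))
    return min(candidates, key=cost)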
Abstract:
The two-stage assembly scheduling problem is a model for production processes that involve the assembly of final or intermediate products from basic components. In our model, there are m machines at the first stage that work in parallel, and each produces a component of a job. When all components of a job are ready, an assembly machine at the second stage completes the job by assembling the components. We study problems with the objective of minimizing the makespan, under two different types of batching that occur in some manufacturing environments. For one type, the time to process a batch on a machine is equal to the maximum of the processing times of its operations. For the other type, the batch processing time is defined as the sum of the processing times of its operations, and a setup time is required on a machine before each batch. For both models, we assume a batch availability policy, i.e., the completion times of the operations in a batch are defined to be equal to the batch completion time. We provide a fairly comprehensive complexity classification of the problems under the first type of batching, and we present a heuristic and its worst-case analysis under the second type of batching.
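As a concrete illustration of the two batch-time definitions (a sketch with made-up numbers; the function and variable names are ours, not the paper's):

def batch_time_max(proc_times):
    # Type 1: the batch takes as long as its longest operation
    return max(proc_times)

def batch_time_sum(proc_times, setup):
    # Type 2: a machine setup precedes the batch, then operations run in sequence
    return setup + sum(proc_times)

# Under batch availability, every operation in a batch completes
# when the whole batch does.
batch = [4.0, 2.5, 3.0]
print(batch_time_max(batch))       # 4.0
print(batch_time_sum(batch, 1.5))  # 11.0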
Abstract:
This presentation will attempt to address the issue of whether the engineering design community has the knowledge, data and tool sets required to undertake advanced evacuation analysis. In discussing this issue I want to draw on examples not only from the building industry but more widely, from wherever people come into contact with an environment fashioned by man. Prescriptive design regulations the world over suggest that if we follow a particular set of essentially configurational regulations concerning travel distances, number of exits, exit widths, etc., it should be possible to evacuate a structure within a pre-defined acceptable amount of time. In the UK, for public buildings, this turns out to be 2.5 minutes; internationally, in the aviation industry, it is 90 seconds; in the UK rail industry it is 90 seconds; and the international standard adopted by the maritime industry is 60 minutes. The difficulties and shortcomings of this approach are well known and so I will not repeat them here, save to say that this approach is usually littered with “magic numbers” that do not stand up to scrutiny. As we are focusing on human behaviour issues, it is also worth noting that, more generally, the approach fails to take into account how people actually behave, preferring to adopt an engineer’s view of what people should do in order to make the design work. Examples of the failure of this approach are legion and include the Manchester Boeing 737 fire, the King’s Cross underground station fire, the Piper Alpha oil platform explosion, the Ladbroke Grove rail crash and fire, the Mont Blanc tunnel fire, the Scandinavian Star ferry fire and the Station Nightclub fire.
Abstract:
This paper describes recent developments made to the stress analysis module within FLOTHERM, extending its capability to handle viscoplastic behavior. It also presents the validation of this approach and results obtained for an SMT resistor as an illustrative example. Lifetime predictions are made using the creep strain energy based models of Darveaux. Comment is made about the applicability of the damage model to the geometry of the joint under study.
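For context, Darveaux-type models correlate thermal-cycling life with the creep strain energy density accumulated per cycle: cycles to crack initiation N0 = K1 * dW^K2, crack growth rate da/dN = K3 * dW^K4, and characteristic life equal to N0 plus the cycles needed to grow the crack across the joint. The sketch below uses placeholder constants; the real K1 to K4 are calibrated per material and joint geometry and are not taken from the paper.

def darveaux_life(dW, crack_length, K1=1.0e3, K2=-1.6, K3=1.0e-6, K4=1.0):
    """Characteristic life in cycles from creep strain energy density dW
    accumulated per thermal cycle (all constants are placeholders)."""
    N0 = K1 * dW ** K2      # cycles to crack initiation
    dadN = K3 * dW ** K4    # crack growth per cycle
    return N0 + crack_length / dadN

# e.g. a joint that must crack through 0.5 mm at dW = 0.02 per cycle
print(darveaux_life(0.02, 0.5))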
Abstract:
The newly formed Escape and Evacuation Naval Authority regulates the provision of abandonment equipment and procedures for all Ministry of Defence vessels. As such, it ensures that access routes on board are evaluated early in the design process to maximize their efficiency and to eliminate, as far as possible, any congestion that might occur during escape. This analysis can be undertaken using computer-based simulation which, for given escape scenarios, replicates the layout of the vessel and the interactions between each individual and the ship structure. One software tool that facilitates this type of analysis is maritimeEXODUS. This tool, through large-scale testing and validation, emulates human shipboard behaviour during emergency scenarios; however, it is largely based around the behaviour of civilian passengers and the fixtures and fittings of merchant vessels. Hence there existed a clear requirement to understand the behaviour of well-trained naval personnel, as opposed to civilian passengers, and to be able to model the fixtures and fittings that are exclusive to warships, thus allowing improvements to both maritimeEXODUS and other software products. Human-factors trials using the Royal Navy training facilities at Whale Island, Portsmouth, were recently undertaken to collect data that improves our understanding of the aforementioned differences. It is hoped that this data will form the basis of a long-term improvement package that will provide global validation of these simulation tools and assist in the development of specific escape and evacuation standards for warships.
Abstract:
At 8.18pm on 2 September 1998, Swissair Flight 111 (SR 111) took off from New York’s JFK airport bound for Geneva, Switzerland. Tragically, the MD-11 aircraft never arrived. According to the crash investigation report, published on 27 March 2003, electrical arcing in the ceiling-void cabling was the most likely cause of the fire that brought down the aircraft. No one on board was aware of the disaster unfolding in the ceiling of the aircraft and, when a strange odour entered the cockpit, the pilots thought it was a problem with the air-conditioning system. Twenty minutes later, Swissair Flight 111 plunged into the Atlantic Ocean five nautical miles southwest of Peggy’s Cove, Nova Scotia, with the loss of all 229 lives on board. In this paper, the Computational Fluid Dynamics (CFD) analysis of the in-flight fire that brought down SR 111 is described. Reconstruction of the wreckage disclosed that the fire pattern was extensive and complex. The fire damage created significant challenges in identifying the origin of the fire and in explaining the heat damage observed. The SMARTFIRE CFD software was used to predict the “possible” behaviour of the airflow, and of the spread of fire and smoke, within SR 111. The main aims of the CFD analysis were to develop a better understanding of the possible effects, or lack thereof, of numerous variables relating to the in-flight fire. Possible fire and smoke spread scenarios were studied to see what the associated outcomes would be. This assisted investigators in the Fire & Explosion Group of the Transportation Safety Board (TSB) of Canada in assessing fire dynamics for cause and origin determination.
Abstract:
This paper examines different ways of measuring similarity between software design models for the purpose of software reuse. Current approaches to this problem are discussed, and a set of suitable similarity metrics is proposed and evaluated. Work on the optimisation of weights to increase the competence of a CBR system is presented. A graph-matching algorithm and associated metrics capturing the structural similarity between UML class diagrams are presented and demonstrated through an example case.
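One simple structural measure of the kind under discussion can be sketched as follows; this is our own illustration (Jaccard overlap of class names and typed relations, with tunable weights), not the graph-matching algorithm or optimised weights presented in the paper.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def diagram_similarity(d1, d2, w_classes=0.5, w_relations=0.5):
    """Each diagram: {'classes': [names], 'relations': [(src, kind, dst)]}."""
    return (w_classes * jaccard(d1['classes'], d2['classes'])
            + w_relations * jaccard(d1['relations'], d2['relations']))

d1 = {'classes': ['Order', 'Customer'],
      'relations': [('Order', 'assoc', 'Customer')]}
d2 = {'classes': ['Order', 'Client'],
      'relations': [('Order', 'assoc', 'Client')]}
print(diagram_similarity(d1, d2))  # 0.5 * 1/3 + 0.5 * 0 = approx. 0.17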
Abstract:
This paper describes the use of a blackboard architecture for building a hybrid case-based reasoning (CBR) system. The Smartfire fire field modelling package has been built using this architecture and includes a CBR component. The architecture allows qualitative spatial-reasoning knowledge from domain experts to be integrated into the system, which can then be used for the automatic set-up of fire field models. This enables fire safety practitioners who are not expert in modelling techniques to use a fire modelling tool. The paper discusses the integrating powers of the architecture, which is based on a common knowledge representation comprising a metric diagram and place vocabulary, with mechanisms for adaptation and conflict resolution built on the blackboard.
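The pattern being described can be illustrated generically. The sketch below is our own minimal blackboard with a single knowledge source and an opportunistic control loop; it is not Smartfire's implementation, and all names are hypothetical.

class Blackboard:
    """Shared store that knowledge sources read from and post to."""
    def __init__(self):
        self.data = {}

class MeshingKS:
    """Example knowledge source: derives a mesh once geometry is known."""
    def can_contribute(self, bb):
        return 'geometry' in bb.data and 'mesh' not in bb.data
    def contribute(self, bb):
        bb.data['mesh'] = f"mesh for {bb.data['geometry']}"

def control_loop(bb, sources):
    # Opportunistic control: keep firing any applicable knowledge source
    # until none can contribute further.
    progress = True
    while progress:
        progress = False
        for ks in sources:
            if ks.can_contribute(bb):
                ks.contribute(bb)
                progress = True

bb = Blackboard()
bb.data['geometry'] = 'room 4m x 3m x 2.4m'
control_loop(bb, [MeshingKS()])
print(bb.data['mesh'])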
Abstract:
Software metrics are a key tool in software quality management. In this paper, we propose using support vector machine regression applied to software metrics to predict software quality. In experiments we compare this method with other regression techniques such as Multivariate Linear Regression, Conjunctive Rule and Locally Weighted Regression. Results on the benchmark MIS dataset, using mean absolute error and correlation coefficient as regression performance measures, indicate that support vector machine regression is a promising technique for software quality prediction. In addition, our investigation of PCA-based metrics extraction shows that, using only the first few principal components (PCs), we can still obtain relatively good performance.
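A modern sketch of this kind of pipeline is given below, using scikit-learn, which postdates the paper; the data is a synthetic stand-in with roughly the shape of a metrics table, not the MIS dataset, and the split and hyperparameters are illustrative.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import mean_absolute_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 11))           # one row of metrics per module
y = 2 * X[:, 0] + rng.normal(size=200)   # stand-in quality target

# scale, keep the first few principal components, then fit SVR
model = make_pipeline(StandardScaler(), PCA(n_components=3), SVR())
model.fit(X[:150], y[:150])
pred = model.predict(X[150:])

# the two performance measures used in the paper
print(mean_absolute_error(y[150:], pred))
print(np.corrcoef(y[150:], pred)[0, 1])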
Abstract:
Three new standards to be applied when adopting commercial off-the-shelf (COTS) software solutions are discussed. The first standard is for a COTS software life cycle, the second for a software solution user-requirements life cycle, and the third is a checklist to help in completing the requirements. The standards are based on recent major COTS software solution implementations.
Abstract:
The parallelization of real-world compute-intensive Fortran application codes is generally not a trivial task. If the time to complete the parallelization is to be significantly reduced, then an environment is needed that will assist the programmer in the various tasks of code parallelization. In this paper the authors present a code parallelization environment in which a number of tools addressing the main tasks, such as code parallelization, debugging and optimization, are available. The ParaWise and CAPO parallelization tools are discussed, which enable the near-automatic parallelization of real-world scientific application codes for shared- and distributed-memory parallel systems. As user involvement in the parallelization process can introduce errors, a relative debugging tool (P2d2) is also available and can be used to perform nearly automatic relative debugging of a program that has been parallelized using the tools. High-quality interprocedural dependence analysis and user-tool interaction are also highlighted; both are vital to the generation of efficient parallel code and to the optimization of the backtracking and speculation process used in relative debugging. Results for benchmark and real-world application codes parallelized using the environment are presented and show the benefits of its use.
Abstract:
In this chapter we look at JOSTLE, the multilevel graph-partitioning software package, and highlight some of the key research issues that it addresses. We first outline the core algorithms and place them in the context of the multilevel refinement paradigm. We then look at issues relating to its use as a tool for parallel processing and, in particular, partitioning in parallel. Since its first release in 1995, JOSTLE has been used for many mesh-based parallel scientific computing applications, and so we also outline some enhancements, such as multiphase mesh-partitioning, heterogeneous mapping and partitioning to optimise subdomain shape.
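To make the multilevel paradigm concrete, here is a toy recursive bisection in the same spirit: coarsen the graph by collapsing a matching, partition the coarsest graph, then project the partition back and refine it. This is our own sketch; it omits the load balancing and parallel refinement that JOSTLE itself performs.

def coarsen(adj):
    """Collapse a greedy matching; return the coarse graph and a vertex map."""
    cmap, nxt = {}, 0
    for v in adj:
        if v in cmap:
            continue
        partner = next((u for u in adj[v] if u not in cmap and u != v), None)
        cmap[v] = nxt
        if partner is not None:
            cmap[partner] = nxt
        nxt += 1
    coarse = {i: set() for i in range(nxt)}
    for v, nbrs in adj.items():
        for u in nbrs:
            if cmap[u] != cmap[v]:
                coarse[cmap[v]].add(cmap[u])
    return coarse, cmap

def refine(adj, part):
    """One greedy KL-style pass: move a vertex if that cuts fewer edges."""
    for v in adj:
        cut_here = sum(part[u] != part[v] for u in adj[v])
        if len(adj[v]) - cut_here < cut_here:
            part[v] = 1 - part[v]
    return part

def multilevel_bisect(adj, coarse_size=4):
    if len(adj) <= coarse_size:
        vs = sorted(adj)
        return {v: int(i >= len(vs) // 2) for i, v in enumerate(vs)}
    coarse, cmap = coarsen(adj)
    if len(coarse) == len(adj):  # coarsening stalled (no edges to match)
        vs = sorted(adj)
        return {v: int(i >= len(vs) // 2) for i, v in enumerate(vs)}
    cpart = multilevel_bisect(coarse, coarse_size)
    part = {v: cpart[cmap[v]] for v in adj}  # project back to the finer graph
    return refine(adj, part)

# 2 x 4 grid graph
adj = {0: {1, 4}, 1: {0, 2, 5}, 2: {1, 3, 6}, 3: {2, 7},
       4: {0, 5}, 5: {1, 4, 6}, 6: {2, 5, 7}, 7: {3, 6}}
print(multilevel_bisect(adj))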